Abstract
Despite widespread concerns over AI-generated misinformation, its impact on people’s reasoning and the effectiveness of countermeasures remain unclear. This study examined whether a pre-emptive, source-focused inoculation—designed to lower trust in AI-generated information—could reduce its influence on reasoning. This approach was compared with a retroactive, content-focused debunking, as well as a simple disclaimer that AI-generated information may be misleading, as often seen on real-world platforms. Additionally, the extent to which trust in AI-generated information is malleable was tested with an intervention designed to boost trust. Across two experiments (total N = 1223), a misleading AI-generated article influenced reasoning regardless of its alleged source (human or AI). In both experiments, the inoculation reduced general trust in AI-generated information, but did not significantly reduce the misleading article’s specific influence on reasoning. The additional trust-boosting and disclaimer interventions used in Experiment 1 also had no impact. By contrast, debunking of misinformation in Experiment 2 effectively reduced its impact, although only a combination of inoculation and debunking eliminated misinformation influence entirely. Findings demonstrate that generative AI can be a persuasive source of misinformation, potentially requiring multiple countermeasures to negate its effects.
Keywords: misinformation, generative artificial intelligence, source credibility, continued influence effect
1. Introduction
The capabilities of generative artificial-intelligence (AI) systems have grown exponentially in recent years. However, even the best and most widely used AI systems make errors. Generative AI models that specialize in text production—so-called large language models (LLMs; e.g. ChatGPT)—not only produce responses that may reflect the human biases present in the data on which they are trained, but often also fabricate information, a phenomenon known as ‘hallucinations’ [1–3]. The tendency for LLMs to produce misleading content, combined with the increasing accessibility of AI systems, has given rise to concerns that people may inadvertently consume and believe AI-generated misinformation, and that nefarious actors may use such systems to create misinformation at scale, with negative consequences for users and society [4,5].
It is clear from previous research that misinformation can have detrimental impacts, contributing to outcomes such as increased vaccine hesitancy and reduced satisfaction with democracy [6,7]; for a review, see [8]. Despite growing concerns that AI-generated misinformation may amplify these adverse outcomes, few studies have examined how it impacts people’s cognition. In the current study, we examined whether the impact of a misleading AI-generated article depends on its perceived source (human versus AI), and whether a pre-emptive, source-focused inoculation or a retroactive, content-based debunking can reduce the impact of AI-generated misinformation.
Generative AI tools are now commonplace. By 2024, ChatGPT had accumulated over 200 million weekly active users [9], and 65% of businesses reported regularly using generative AI systems [10], with user numbers approximately doubling in the preceding year. Despite people’s increasingly regular interaction with AI systems, research suggests that they struggle to discriminate between AI-generated and human-generated content [11–14]. For example, Heppell et al. [4] found that participants were only 59% accurate at detecting AI-generated misinformation about the Ukraine war, and tended to overpredict human authorship. AI-generated misinformation may not only be hard to detect but may also be more compelling than human-generated misinformation, particularly when people are unaware of the origin of the information [15–18]. One recent study found that participants assessed the veracity of true AI-generated tweets more quickly and accurately than human-generated tweets, but were worse at identifying false information in AI-generated tweets than in human-generated tweets [19]. In other words, people were more often deceived by misinformation that was generated by AI than by misinformation that was generated by a person. One explanation for this finding is that LLMs can generate text that is easier and quicker to read than text written by humans, and the ease of processing this information is mistaken for a sign that the information is accurate [20,21].
Amid the growing popularity of generative AI systems and concerns about the inaccuracies in the output of such systems, some social media platforms have implemented warning labels to alert users to AI-generated content [22,23]. However, simply labelling AI-generated content may not be an effective countermeasure and may induce general scepticism [24]. For example, in some studies participants perceived headlines labelled as AI-generated to be less accurate and reported less intention to share them compared with the same headlines labelled as human-generated, regardless of headline veracity [25,26]. At the same time, generative AI platforms such as ChatGPT feature simple disclaimers that warn that AI-generated content can be inaccurate, but recent work suggests that these, too, are likely to have limited practical value [27]. For example, Kreps et al. [28] found that disclaimers warning that an article might contain misleading information and AI-generated content did not have a consistent effect; for Democrats, the disclaimer significantly reduced the perceived credibility of a politically congenial story but did not affect the perceived credibility of a politically non-congenial story, whereas the opposite was found for Republicans. Together, these findings suggest that neither labelling AI-generated content nor warning people about its potential inaccuracies by means of a simple disclaimer will be sufficient to reduce reliance on AI-generated misinformation.
Two strategies that may be more effective at reducing reliance on AI-generated misinformation are inoculation and debunking [29,30]. Inoculation involves pre-emptively warning people about an impending deception and providing an explanation of the misleading persuasive techniques that might be used [31,32]. Debunking interventions involve retroactively correcting misinformation after it has been encountered [33,34]. It is clear from numerous studies that both inoculation and debunking interventions can be effective at reducing misinformation reliance. The interventions can improve people’s ability to discern factual from misleading content, reduce people’s references to misinformation, and make people less likely to share misleading information [30,35,36]. It is also clear, however, that such interventions generally do not eliminate misinformation impacts entirely, with a large literature demonstrating that people frequently continue to rely on misinformation to some extent after having received clear and credible corrective interventions—a phenomenon known as the continued influence effect [29,37–39]. There is also little consensus on which intervention is the most effective for combatting misinformation influence. Some studies suggest that pre-emptive interventions are more effective because they enable people to develop counterarguments before encountering the misinformation, reducing its persuasiveness [40,41]. Conversely, other studies suggest that retroactive interventions are slightly more effective because both the misinformation and corrective information are accessible at the time of the intervention, making it easier to develop a coherent mental model [33,35,42–44]. By and large, however, inoculation and debunking approaches seem to be similarly effective for combatting misinformation influence (for a review, see [29,30]). This conclusion aligns well with findings from other areas, such as the comparable impact of pre-emptive and retroactive interventions on psychological reactance (e.g. in the context of health campaigns [45]).
One recommended—but rarely used—strategy for combatting misinformation is to discredit sources of biased or inaccurate information [46–48]. Source discreditation can reduce misinformation reliance by providing reasons why the misinformation was initially presented and why it should be dismissed (e.g. the source tried to manipulate recipients due to a hidden agenda, or the source does not have the expertise to present accurate information on a topic). This idea is supported by theoretical models of misinformation influence that emphasize the weighting of perceived information reliability [49], and is in line with empirical findings that perceived source credibility influences both belief [50–52] and correction effectiveness [53–56].
While source discreditation has been combined with other interventions in some research using ‘optimized’ debunking interventions [57], few studies have directly examined the effectiveness of source discreditation. Ecker et al. [46] found that highlighting a source’s conflict of interest or poor track record of communication reduced reliance on misinformation both for human and media sources. Connor Desai & Reimers [58], however, found that corrections stating that misinformation resulted from intentional deception versus unintentional error were equally effective. To the best of our knowledge, no study has assessed the effectiveness of a pre-emptive source discreditation. In light of these mixed results and growing concerns over AI-generated misinformation, the primary aim of the current study was to examine the effectiveness of a source-focused inoculation for reducing reliance on AI-generated misinformation.
2. Experiment 1
Experiment 1 aimed to explore the extent to which people’s trust in AI-generated information is malleable, and how this affects the influence of misleading AI-generated information presented in the form of a biased article. Participants were randomly assigned to read a misleading AI-generated article attributed to an AI or human source, or a generic AI-generated article that did not contain any misleading information. If the misleading article was attributed to an AI source, it was accompanied by a pre-emptive trust boost, a pre-emptive inoculation that discredited generative AI systems, a retroactive disclaimer that warned that generative AI can make mistakes, or no intervention. The trust boost outlined the benefits of AI systems (e.g. that they have a wealth of information available). By contrast, the inoculation explained why AI-generated content can be misleading (e.g. because generative AI systems are trained on potentially biased human data and sometimes fabricate information), whereas the disclaimer merely warned that AI systems can make mistakes without providing any explanation. We measured people’s general trust in AI-generated information, as well as their reliance upon specific AI-generated misinformation via inferential-reasoning questions. Although these two constructs are presumably related, it should be noted that they are distinct; people who are relatively trusting of AI-generated information in general may show minimal or no reliance on a specific piece of AI-generated information, and vice versa [59]. That is, even someone highly distrustful of AI content in general may rely on AI-generated information if it is consistent with their prior beliefs [38,49]. Materials and data for both experiments are available at: https://osf.io/t2g3a/.
We hypothesized that AI-generated misinformation would significantly influence reasoning (H1), with the size of the effect depending on its alleged source (H2).1 We further hypothesized that a trust-boosting statement would increase trust in AI-generated information (H3a) and specific misinformation reliance (H3b), that a source-focused inoculation would reduce trust in AI-generated information (H4a) and specific misinformation reliance (H4b), and that a simple disclaimer would have no effect on trust in AI-generated information (H5a) or specific misinformation reliance (H5b); finally, we predicted that misinformation would continue to have some influence on reasoning post-interventions (H6). For a summary of supported and rejected hypotheses across both experiments, please see electronic supplementary material, table S1.
2.1. Method
Experiment 1 used a between-subjects design with six conditions: control (non-misleading article); human misinformation (misleading article with human byline); AI misinformation (misleading article with AI byline); trust boost (passage designed to boost trust in AI followed by the misleading article with AI byline); inoculation (source-focused inoculation passage followed by the misleading article with AI byline); or disclaimer (misleading article with AI byline followed by a disclaimer).
2.1.1. Participants
An a priori power analysis conducted using G*Power 3.1 [60] indicated that at least 100 participants in each experimental condition were required to detect a small effect (f = 0.2, α = 0.05, and 1 – β = 0.80). We collected data from 631 English-speaking adults from the United States via Prolific (https://www.prolific.com/), using representative sampling.2 We excluded participants due to a self-reported lack of effort (n = 1) or inconsistent responding on the reasoning (n = 7) and trust (n = 23) measures, as per a priori exclusion criteria (see electronic supplementary material for details). The final sample comprised n = 603 participants, including 297 women, 295 men, 10 non-binary participants and 1 participant who self-described as transgender male. Age ranged from 18 to 80 years (M = 44.83, SD = 16.01). Participants were randomly allocated to one of the six conditions, with the constraint of approximately equal cell sizes.
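This target can be approximated outside G*Power. The sketch below is a minimal Python illustration using statsmodels; it assumes the power analysis was based on a pairwise contrast between two conditions, for which f = 0.2 corresponds to Cohen's d = 2f = 0.4.

```python
# Hypothetical re-computation of the per-condition sample-size target.
# Assumption: power was computed for a two-condition contrast, so the
# ANOVA effect size f = 0.2 corresponds to Cohen's d = 2f = 0.4.
import math
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.4,          # Cohen's d for the pairwise contrast
    alpha=0.05,
    power=0.80,
    ratio=1.0,                # equal group sizes
    alternative='two-sided',
)
print(math.ceil(n_per_group))  # -> 100 participants per condition
```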
2.1.2. Materials
2.1.2.1. Articles
The freely accessible version of ChatGPT v3.5 was used to generate a biased, 529-word article in favour of trickle-down economics, titled ‘The Case for Trickle-Down Economics: Fostering Prosperity for All’.3 The topic of trickle-down economics was chosen based on pilot-testing indicating that participants tend to be familiar with the concept without having detailed knowledge or a firm stance on the topic, and to avoid exposing participants to misinformation with a high potential for direct harm (e.g. health misinformation; [61]). ChatGPT was prompted to provide a biased, one-sided perspective, despite this being potentially misleading. The final article was based on two ChatGPT responses that were combined and edited to improve clarity and flow. This approach mimics how a malicious actor may use generative AI to produce effective misleading content. The article was identical in the human and AI-misinformation conditions, except that the header stated that the article was written by a human author (‘Alex Kennedy’) or ChatGPT, respectively. The source information was presented twice more, in bold font, in the survey instructions, to ensure participants noticed and encoded it.
In the control condition, participants were given an unbiased, 378-word article on the retirement of a fictional radio host, titled ‘A Decade of Voices: Reflecting on a Local Radio Legend’s Journey’.4 The article featured several quotes from the fictional radio host, Jim Morgan (e.g. he was quoted saying that ‘The world changes rapidly, the economy goes up and down, but what remains constant is the power of people’s stories’). The topic was chosen to be unrelated to the misleading article, while ensuring that the article remained sufficiently relevant to the questionnaires to avoid arousing suspicion or confusion about the study. To this end, the article mentioned that the radio host had often discussed topics related to economics, and included some titbits related to the economy.
2.1.3. Interventions
2.1.3.1. Trust boost
In the trust-boost condition, participants read a brief 169-word passage outlining the advantages of AI-generated information (e.g. ‘The algorithms that AI chatbots rely on have access to a wealth of human-generated data and knowledge, which they use to make predictions and generate information in real time’). The passage explained that AI companies have ‘put safeguards in place to ensure that AI systems do not produce information that may go beyond their capabilities’, which ‘ensures that AI-generated information can become increasingly accurate and trustworthy’.
2.1.3.2. Source-focused inoculation
In the inoculation condition, participants read a 174-word passage designed to reduce trust in AI-generated information, which highlighted the potential for AI to generate biased or inaccurate information. For example, ‘Because AI tools are trained on large, human-generated datasets, AI-generated information can reflect human biases and stereotypes. In other words, AI systems are only as accurate as the human-generated data they are trained on. […] AI-generated information can also produce absurd claims and be outright false. This is because AI algorithms are unable to distinguish between accurate and inaccurate information when generating content.’
2.1.3.3. Disclaimer
In the disclaimer condition, participants read a statement after the misleading article that pointed to the tendency of AI systems to make errors; this was the verbatim disclaimer used by ChatGPT at the time of the study (i.e. ‘ChatGPT can make mistakes. Check important info.’).
2.1.3.4. Questionnaires
Participants completed two questionnaires. First, to examine how much participants relied on misinformation in their reasoning, they were asked to rate their agreement with seven statements relating to trickle-down economics (e.g. ‘Reducing taxes for corporations and higher-income earners would ultimately benefit all of society’) on an 11-point scale ranging from 0 (strongly disagree) to 10 (strongly agree). Second, participants rated eight items regarding their level of trust in AI-generated information (e.g. ‘I trust AI-generated content’) on the same 11-point scale. Each questionnaire contained two reverse-coded items.
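For illustration, the sketch below shows how such composite scores could be computed on the 0–10 scale, with reverse-coded items rescored as 10 minus the response; the data frame and column names are hypothetical placeholders, and which specific items were reverse-coded is assumed here.

```python
# Illustrative composite scoring for the 11-point (0-10) questionnaires.
# Reverse-coded items are rescored as 10 - response before averaging.
# 'df' and the column names are hypothetical placeholders.
import pandas as pd

def composite_score(df: pd.DataFrame, items: list, reverse: list) -> pd.Series:
    scored = df[items].copy()
    scored[reverse] = 10 - scored[reverse]   # reverse-score on the 0-10 scale
    return scored.mean(axis=1)               # average across items

# Example usage with the seven reasoning items (two of them, arbitrarily
# items 3 and 6 here, treated as the reverse-coded ones):
# df['reliance'] = composite_score(df, [f'reasoning_{i}' for i in range(1, 8)],
#                                  ['reasoning_3', 'reasoning_6'])
```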
2.1.4. Procedure
Participants viewed an ethics-approved information page and provided informed consent by ticking a box, before providing some demographic information (i.e. age, gender). Participants were told that the study was ‘investigating how we process information about current economic affairs’. Participants were then presented with the article and intervention text as per their assigned condition. Text was presented in paragraphs, each presented on a separate page. Reading was self-paced, but each page was presented for a minimum time (set at approximately 100 ms per word). Participants then completed a 1 min distractor task (a word puzzle), before responding to the inferential-reasoning and trust questionnaires. They were also asked to indicate whether they put in a ‘reasonable effort’ or whether their data should be discarded. Finally, participants were fully debriefed following best-practice guidelines [61] and compensated £1.50 (approx. US$1.90). The experiment took approximately 8−10 min.
2.2. Results
2.2.1. Trust in AI-generated information
Before turning to the analysis of misinformation reliance, we first checked whether the interventions were effective at impacting trust in AI-generated information. To this end, we conducted a one-way ANOVA on trust scores, calculated by averaging participants’ responses to the trust questions (after reverse-scoring relevant items). Scores ranged from 0 to 10, with higher scores indicating greater trust. This analysis revealed a significant main effect of condition, F(5, 597) = 4.69, p < 0.001, ηp2 = 0.038 (see figure 1).
Figure 1.
Mean trust in AI-generated information across conditions in Experiment 1. Note: Misinfo., misinformation; error bars show 95% confidence intervals.
To test specific hypotheses regarding the interventions’ influence on trust in AI-generated information, we conducted planned contrasts comparing each intervention condition with the AI-misinformation condition, applying the Holm–Bonferroni correction for each set of contrasts in a hypothesis-specific manner (see table 1). The analyses revealed that the inoculation significantly reduced general trust in AI-generated information compared with the AI-misinformation condition, supporting H4a. Neither the trust boost nor the disclaimer significantly impacted trust in AI-generated information, contrary to H3a but in line with H5a. To quantify evidence for the absence of effects, we conducted Bayesian independent-samples t-tests, which revealed moderate evidence for the null hypothesis in the case of the trust boost (BF01 = 4.42) and anecdotal evidence for the null in the case of the disclaimer (BF01 = 1.57).
Table 1.
Planned contrasts on trust scores in Experiment 1.
| hypothesis | contrast | F(1, 597) | p | ηp2 |
|---|---|---|---|---|
| H3a | AI misinfo. versus trust boost | 0.79 | 0.373 | 0.001 |
| H4a | AI misinfo. versus inoculation | 9.45 | <0.001* | 0.016 |
| H5a | AI misinfo. versus disclaimer | 2.57 | 0.109 | 0.004 |

* indicates statistical significance after Holm–Bonferroni adjustment.
Misinfo., misinformation.
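For readers wishing to reproduce this style of analysis, a minimal Python sketch is given below. It is not the original analysis code: the data file and column names are hypothetical, the planned contrasts are implemented as simple pairwise comparisons rather than contrasts on the pooled ANOVA error term, and the Bayes factor uses pingouin's default prior.

```python
# Sketch of the reported workflow: one-way ANOVA, planned contrasts with a
# Holm-Bonferroni correction, and a Bayesian t-test to quantify evidence
# for null effects. File and column names are hypothetical placeholders.
import pandas as pd
import pingouin as pg
from scipy import stats
from statsmodels.stats.multitest import multipletests

df = pd.read_csv('experiment1.csv')   # hypothetical file with 'condition' and 'trust' columns
groups = {c: df.loc[df['condition'] == c, 'trust'] for c in df['condition'].unique()}

# One-way ANOVA across all six conditions
F, p = stats.f_oneway(*groups.values())

# Planned pairwise contrasts against the AI-misinformation condition
contrasts = ['trust boost', 'inoculation', 'disclaimer']
pvals = [stats.ttest_ind(groups['AI misinformation'], groups[c]).pvalue for c in contrasts]

# Holm-Bonferroni adjustment over the set of planned contrasts
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method='holm')

# Bayesian independent-samples t-test; BF01 is the inverse of pingouin's BF10
bf10 = float(pg.ttest(groups['AI misinformation'], groups['trust boost'])['BF10'].iloc[0])
bf01 = 1 / bf10
```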
Additional exploratory contrasts were run to check whether exposure to the biased article itself affected trust (see electronic supplementary material, table S2). These contrasts suggested that neither the human-misinformation nor the AI-misinformation condition differed significantly from control, although trust scores were higher in the AI-misinformation condition than in the human-misinformation condition.
2.2.2. Misinformation reliance
For the main analyses of misinformation reliance, we calculated misinformation-reliance scores by averaging each participant’s responses to the inferential-reasoning questionnaire (after reverse-scoring relevant items); scores ranged from 0 to 10, with higher scores indicating greater misinformation reliance. A one-way ANOVA was then performed on these scores, which indicated a significant main effect of condition, F(5, 597) = 5.69, p < 0.001, ηp2 = 0.046 (see figure 2).
Figure 2.
Mean misinformation reliance across conditions in Experiment 1. Note: Misinfo., misinformation; error bars show 95% confidence intervals.
To directly assess our specific hypotheses, we again conducted planned contrasts (see table 2). First, we tested whether the misleading article influenced participants’ reasoning. In support of H1, we found misinformation reliance was higher in both misinformation conditions than in the control condition. Second, we tested the effect of the alleged source on misinformation reliance. The prediction that misinformation impact would depend on the perceived source (H2) was not supported, with no significant difference between misinformation-reliance scores in human- and AI-misinformation conditions. Accordingly, a Bayesian independent-samples t‐test yielded anecdotal evidence for the null hypothesis (BF01 = 2.48). Third, we tested whether the interventions were effective at influencing misinformation reliance by comparing each intervention condition (trust boost, inoculation, disclaimer) with the AI-misinformation condition. All contrasts were nonsignificant, suggesting that none of the interventions significantly influenced misinformation reliance. H3b and H4b were thus not supported, although the null effect of the disclaimer was predicted (H5b). Bayesian independent-samples t-tests yielded moderate evidence for the null hypothesis across the trust-boost (BF01 = 5.85), inoculation (BF01 = 5.71) and disclaimer conditions (BF01 = 5.78). Finally, we compared the control condition with the inoculation and disclaimer conditions to test for continued influence. As expected, and in line with earlier comparisons, participants continued to rely on the AI-generated misinformation in their reasoning after receiving the interventions, in support of H6.
Table 2.
Planned contrasts on misinformation-reliance scores in Experiment 1.
| hypothesis | contrast | F(1, 597) | p | ηp2 |
|---|---|---|---|---|
| H1 | control versus human misinfo. | 10.04 | 0.002* | 0.017 |
|  | control versus AI misinfo. | 20.71 | <0.001* | 0.034 |
| H2 | human misinfo. versus AI misinfo. | 1.97 | 0.161 | 0.003 |
| H3b | AI misinfo. versus trust boost | 0.22 | 0.639 | <0.001 |
| H4b | AI misinfo. versus inoculation | 0.27 | 0.603 | <0.001 |
| H5b | AI misinfo. versus disclaimer | 0.22 | 0.638 | <0.001 |
| H6 | control versus inoculation | 16.52 | <0.001* | 0.027 |
|  | control versus disclaimer | 16.65 | <0.001* | 0.027 |

* indicates statistical significance after Holm–Bonferroni adjustment.
Misinfo., misinformation.
2.3. Discussion
In Experiment 1, we tested whether providing a trust boost or a source-focused inoculation could influence participants’ general trust in AI-generated information; whereas our trust boost was ineffective, we found that a source-focused inoculation significantly reduced trust in AI-generated information. We also examined whether AI-generated misinformation would influence participants’ reasoning, and whether this effect was dependent on the perceived source. The provided misinformation had the expected persistent influence on participants’ reasoning, replicating much previous research (see [29]). Misinformation impact was comparable regardless of the perceived source, suggesting that people are neither more likely nor less likely to believe and rely on information from an AI source than from an unfamiliar human source. No intervention was effective at reducing the effect of the misinformation on reasoning; participants were influenced by the misinformation regardless of whether they received an initial AI trust boost, a source-focused inoculation, a simple disclaimer or no intervention. The ineffectiveness of a disclaimer was not surprising given its generic nature and prior findings [28]. The ineffectiveness of the trust boost is also understandable given it had no impact on trust in AI-generated information to begin with. However, the ineffectiveness of the source-focused inoculation is noteworthy given that it did significantly reduce AI trust, and given that technique-based inoculations and retrospective source discreditations have been found to be effective in prior work [31,46,62,63].
One possible reason why the pre-emptive source-focused inoculation had no effect may be that people lacked sufficient knowledge to identify the flaws in the misleading arguments. Consistent with this idea, research informed by mental-model theory has suggested that interventions may fail to reduce misinformation reliance if they do not explain why the misinformation was wrong or how the information came to be [34,64,65]. If inoculated participants were more sceptical of AI-generated content but were unable to identify flaws in the biased argument, then it is possible that a pre-emptive source discreditation would be effective when accompanied by a retroactive debunking that provided some explanation regarding the falsity of the information provided. This was tested in Experiment 2.
3. Experiment 2
In Experiment 1, we found that a source-focused inoculation reduced trust in AI-generated information but did not significantly influence misinformation reliance. Experiment 2 further examined the efficacy of a pre-emptive source-focused inoculation to reduce misinformation reliance, and compared its effects with a retroactive, content-focused debunking. Whereas the disclaimer in Experiment 1 merely warned participants that generative AI systems ‘can make mistakes’, the debunking intervention in Experiment 2 explicitly told participants that the misleading article contained inaccurate content and highlighted criticisms of trickle-down economics. Although both pre-emptive and retroactive interventions have been found useful, it is still unclear which approach is more effective at combating the influence of misinformation [33,35,41,42]. The combination of (retroactive) source-focused and content-focused interventions has been found to be particularly effective, presumably because a combined intervention discredits both the misinformation itself and the source of that information [46].
In Experiment 2, participants again read a misleading AI-generated article about trickle-down economics, attributed to either a human or AI source (or a non-misleading generic article in a control condition). To ensure a debunking approach was applicable, ChatGPT was prompted to produce fabricated and misleading arguments that were incorporated into the misleading article. In the AI-misinformation conditions, the misleading article was presented with a pre-emptive, source-focused inoculation that discredited generative AI systems, a retroactive, content-focused debunking that identified the misleading nature of the AI-generated arguments, both an inoculation and a debunking, or no intervention. Misinformation reliance was again assessed by examining participants’ answers to inferential-reasoning questions.
We hypothesized that AI-generated misinformation would impact reasoning (H1) and that the size of this effect may depend on its alleged source (H2); that a source-focused inoculation would reduce trust in AI-generated information (H3a) and misinformation reliance (H3b); that a content-focused debunking would reduce misinformation reliance (H4); and that a combined intervention would reduce trust in AI-generated information (H5a) and misinformation reliance (H5b). We also expected that the combined intervention would produce a larger reduction in misinformation reliance than inoculation or debunking alone (H6), and that there would be continued influence of the misinformation on reasoning post-interventions (H7).5
3.1. Method
Experiment 2 used a between-subjects design with six conditions: control (non-misleading article); human misinformation (misleading article with human byline); AI misinformation (misleading article with AI byline); debunking (misleading article with AI byline followed by debunking); inoculation (source-focused inoculation followed by misleading article with AI byline); and combination (misleading article with AI byline sandwiched by source inoculation and debunking).
3.1.1. Participants
As in Experiment 1, we aimed for data from 600 participants. Data were collected from 634 English-speaking adults from the United States via Prolific, again using representative sampling. We excluded participants due to a self-reported lack of effort (n = 1) and inconsistent responding on the reasoning (n = 4) and trust (n = 9) measures (see electronic supplementary material for details). The final sample comprised n = 620 participants, including 312 women, 299 men, 8 non-binary participants and 1 participant who self-described as gender-fluid. Age ranged from 18 to 86 (M = 44.85, SD = 15.74). Participants were randomly allocated to one of the six conditions, with the constraint of approximately equal cell sizes.
3.1.2. Materials
3.1.2.1. Articles
We crafted a new misleading article titled ‘The Benefits of Trickle-Down Economics’ by combining three generic and seven misleading statements relevant to trickle-down economics, which were again generated by ChatGPT. The generic statements were placed at the beginning of the article and provided introductory, unbiased information about economics (e.g. ‘Economics is a social science that examines how individuals, businesses, governments and societies make choices about allocating resources to satisfy their wants and needs’). For the misleading statements, ChatGPT was initially prompted to generate 20 statements that presented misleading or factually inaccurate information in support of trickle-down economics, including fabricated elements such as fake experts, statistics and quotes.6 For example, one misleading statement read: ‘Professor Michael Roberts’ research, presented at the International Conference on Economics and Innovation (ICEI), demonstrated the effectiveness of targeted tax incentives for technology startups. He revealed a 30% increase in innovation and patent filings, thereby fuelling technological progress and industry expansion’ (note that the expert, conference and evidence were all fabricated).
As in Experiment 1, some statements were edited to improve clarity and to mimic how a malicious actor might use generative AI to produce misinformation. We pilot-tested the initial 20 statements by asking a separate sample of n = 59 Prolific participants to rate each statement’s persuasiveness on an 11-point scale ranging from 0 (not at all persuasive) to 10 (very persuasive). We then selected the seven most persuasive statements for the misleading article. These were introduced in the article as ‘the top seven arguments for trickle-down economics’. The article had 366 words and was again presented with either a human (‘by Alex Kennedy’) or an AI (‘by ChatGPT’) byline. Survey instructions again referred to the respective article source twice more, in bold font, to ensure participants encoded the source information. The control article comprised only the three generic statements (60 words).
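The pilot-based selection step can be illustrated with the following sketch, which ranks statements by mean persuasiveness and keeps the top seven; the file and column names are hypothetical.

```python
# Illustrative selection of the seven most persuasive pilot statements.
# File and column names ('statement_id', 'persuasiveness') are hypothetical.
import pandas as pd

pilot = pd.read_csv('pilot_ratings.csv')   # one row per participant x statement rating (0-10)
top_seven = (pilot.groupby('statement_id')['persuasiveness']
                  .mean()
                  .nlargest(7)
                  .index
                  .tolist())
print(top_seven)
```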
3.1.3. Interventions
3.1.3.1. Source-focused inoculation
The source-focused inoculation was presented in the format of a 301-word passage titled ‘The Limitations of AI Technology’. Like in Experiment 1, the inoculation article highlighted the risk of human biases and fabricated information in AI-generated output, but provided additional examples of situations where AI has been found to fabricate plausible yet inaccurate information and highlighted the potential for malicious actors to use AI to create misleading content. In particular, the passage used in this experiment described generative AI tools, before warning participants about the threat of misleading AI-generated information (e.g. ‘Users need to be mindful of the limitations of AI tools and that AI-generated information might mislead them’). The article then explained that generative AI tools can be prone to bias due to their training datasets (e.g. ‘AI systems tend to exhibit significant gender, racial and political biases’), and provide fabricated information such as fake evidence (e.g. ‘AI tools can also invent fake evidence—statistics, quotes or specific research studies—that may also be attached to a real or fake expert. This can create the illusion that the presented information is expert-endorsed and therefore credible and truthful, when in fact it is not based on factual or even existent evidence’). The article also explained that AI can be used by malicious actors to purposefully produce misleading arguments (e.g. ‘Malicious actors can easily create and spread such arguments quickly and widely to influence others for their own gain, or in pursuit of a hidden agenda. For example, a person with a vested interest may use an AI chatbot to generate misleading arguments about a particular topic, which they then disseminate through social media’).
3.1.3.2. Debunking
The content-focused, 77-word debunking intervention informed participants that some of the arguments for trickle-down economics presented in the article had been ‘fact-checked and found to be misleading’, and that the evidence provided ‘was largely incorrect and presented trickle-down economics in an overly positive manner.’ A brief description of the criticisms aimed at trickle-down economics was also provided (e.g. ‘While the theory is supported by some, it has also been criticized for disproportionately benefitting the wealthy, not leading to substantial job creation, and providing limited economic growth’).
3.1.3.3. Combined
The combined intervention comprised both the source-focused inoculation (presented before the misleading article) and the content-focused debunking (presented after the misleading article), which were identical to those in the other intervention conditions.
3.1.3.4. Questionnaires
We used the same inferential-reasoning and trust questionnaires as in Experiment 1.
3.1.4. Procedure
The procedure was identical to Experiment 1. Participants received £1.35 (approximately US$1.56) for completing the experiment, which took approximately 8−10 min.
3.2. Results
3.2.1. Trust in AI-generated information
To examine whether the interventions influenced participants’ general trust in AI-generated information, a one-way ANOVA on trust scores was conducted, which revealed a significant main effect of condition, F(5, 614) = 6.97, p < 0.001, ηp2 = 0.054 (see figure 3).
Figure 3.
Mean trust in AI-generated information across conditions in Experiment 2. Note: Misinfo., misinformation; error bars show 95% confidence intervals.
Planned contrasts were then run to test specific hypotheses (see table 3). First, to test whether the inoculation was effective at reducing trust in AI-generated information, the inoculation and AI-misinformation conditions were compared. As expected, trust was lower in the inoculation condition, in support of H3a. Trust levels were also lower in the combined condition relative to the AI-misinformation condition, in line with H5a.
Table 3.
Planned contrasts on trust scores in Experiment 2.
| hypothesis | contrast | F(1, 614) | p | ηp2 |
|---|---|---|---|---|
| H3a | AI misinfo. versus inoculation | 7.34 | 0.007* | 0.012 |
| H5a | AI misinfo. versus combination | 26.89 | <0.001* | 0.042 |

* indicates statistical significance after Holm–Bonferroni adjustment.
Misinfo., misinformation.
Additional exploratory contrasts were again run (see electronic supplementary material, table S3). Although a content-focused debunking intervention was not expected to influence trust, it is nevertheless possible that it could affect the perceived trustworthiness of the AI source by demonstrating that AI can sometimes produce inaccurate information. However, there was no significant difference between the AI-misinformation and debunking conditions (although there was a significant difference between the inoculation and combination conditions, suggesting that the addition of the debunking further reduced trust relative to the inoculation alone). Finally, we checked whether merely being exposed to a misleading article affected trust. Again, neither the human- nor the AI-misinformation condition differed significantly from control, nor did the two misinformation conditions differ from each other.
3.2.2. Misinformation reliance
To assess whether the interventions were effective at reducing misinformation reliance, a one-way ANOVA on misinformation-reliance scores was run, again yielding a significant main effect of condition, F(5, 614) = 10.12, p < 0.001, ηp2 = 0.076 (see figure 4).
Figure 4.
Mean misinformation reliance across conditions in Experiment 2. Note: Misinfo., misinformation; error bars show 95% confidence intervals.
Next, we conducted planned contrasts to test our hypotheses (see table 4). First, the hypothesis that AI-generated misinformation would influence reasoning (H1) was supported, with greater scores in the two misinformation conditions relative to control. Second, we tested whether misinformation reliance differed between the human- and AI-misinformation conditions; the result was nonsignificant, so H2 was not supported. This was corroborated by a Bayesian independent-samples t-test (BF01 = 3.10). Third, we tested whether the interventions were effective at reducing reliance on AI-generated misinformation. Misinformation-reliance scores did not differ significantly between the AI-misinformation and inoculation conditions, so, in replication of Experiment 1, H3b was not supported. A Bayesian independent-samples t-test returned support for the null hypothesis, if only anecdotal (BF01 = 1.40). As predicted, the debunking and combined interventions did significantly reduce misinformation reliance relative to the AI-misinformation condition, supporting H4 and H5b. (It should be noted, however, that misinformation-reliance scores in the debunking condition did not significantly differ from those in the inoculation condition, F(1, 614) = 0.69, p = 0.41, ηp2 = 0.001; a Bayesian t-test yielded BF01 = 4.90.) The combination condition was also associated with lower misinformation-reliance scores than either the inoculation or debunking condition, in support of H6. Finally, to test for continued influence post-intervention, we compared the control condition separately with the three intervention conditions. As predicted, misinformation-reliance scores were higher in the inoculation and debunking conditions than in the control condition, demonstrating continued influence, in support of H7. However, there was no evidence for continued influence in the combined condition: misinformation reliance was not statistically discernible from baseline, with a Bayesian t-test yielding anecdotal evidence for the null (BF01 = 2.11).
Table 4.
Planned contrasts on misinformation-reliance scores in Experiment 2.
| hypothesis | contrast | F(1, 614) | p | ηp2 |
|---|---|---|---|---|
| H1 | control versus human misinfo. | 22.65 | <0.001* | 0.036 |
|  | control versus AI misinfo. | 37.51 | <0.001* | 0.058 |
| H2 | human misinfo. versus AI misinfo. | 1.67 | 0.196 | 0.003 |
| H3b | AI misinfo. versus inoculation | 3.27 | 0.071 | 0.005 |
| H4 | AI misinfo. versus debunking | 6.98 | 0.008* | 0.011 |
| H5b | AI misinfo. versus combination | 21.5 | <0.001* | 0.034 |
| H6 | combination versus inoculation | 7.93 | 0.005* | 0.013 |
|  | combination versus debunking | 3.97 | 0.047* | 0.006 |
| H7 | control versus inoculation | 18.44 | <0.001* | 0.029 |
|  | control versus debunking | 12.05 | <0.001* | 0.019 |
|  | control versus combination | 2.16 | 0.142 | 0.004 |

* indicates statistical significance after Holm–Bonferroni adjustment.
Misinfo., misinformation.
To put these findings into perspective, effect sizes for the inoculation (d = 0.25) and debunking (d = 0.36) interventions in this study were relatively modest compared with averages found in meta-analyses. For inoculation, for example, Banas & Rains [66] found an average effect size of d = 0.43. For debunking, Walter & Murphy [44] found that retroactive corrections generally had a large effect (d = 0.82) on misinformation belief, but corrective messages focusing on source credibility produced only a moderate effect (d = 0.28), more in line with our findings. Additionally, it should be noted that more detailed corrections are generally more effective at reducing misinformation influence, which may explain the relatively modest effect of our debunking intervention, given that it did not specifically target all the misleading arguments presented [67,68].
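For context on how these d values relate to the contrast statistics reported in table 4, the conversion can be sketched as follows, assuming approximately equal cell sizes (roughly 103 participants per condition in Experiment 2); small discrepancies from the reported values reflect exact cell sizes and how variances were pooled.

```python
# Converting a 1-df contrast F statistic to an approximate Cohen's d,
# assuming roughly equal group sizes (~103 per condition in Experiment 2).
from math import sqrt

def f_to_d(F: float, n1: int, n2: int) -> float:
    # For a contrast with 1 numerator df, t = sqrt(F); d = t * sqrt(1/n1 + 1/n2)
    return sqrt(F) * sqrt(1 / n1 + 1 / n2)

print(round(f_to_d(3.27, 103, 103), 2))  # inoculation contrast -> ~0.25
print(round(f_to_d(6.98, 103, 103), 2))  # debunking contrast -> ~0.37 (0.36 reported)
```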
3.3. Discussion
Experiment 2 examined whether a pre-emptive source-focused inoculation, a retroactive debunking, or a combination of both interventions could reduce people’s reliance on AI-generated misinformation. Experiment 2 replicated the finding that AI-generated misinformation influenced participants’ reasoning, regardless of the perceived source. Also consistent with Experiment 1, discrediting AI as a source of reliable information pre-emptively through inoculation did not significantly reduce the subsequent impact of the specific misinformation provided, even though it again reduced participants’ general trust in AI-generated information. By contrast, a retroactive debunking that identified the provided article as misleading was effective at reducing misinformation reliance (although we again note that there was no statistical evidence that the two interventions’ impacts differed from each other). The superior efficacy of the combined intervention indicates that a source-focused inoculation was able to reduce misinformation reliance when it was supported by a subsequent debunking (or, alternatively, that debunking was more effective when the source had already been discredited).
4. General discussion
Across two experiments, we examined the effectiveness of a simple disclaimer, a pre-emptive source-focused inoculation, and a retroactive content-focused debunking to reduce reliance on AI-generated misinformation. We also explored whether a trust-boosting intervention would increase misinformation reliance. We found that AI-generated misinformation influenced people’s reasoning regardless of whether it was attributed to a human or AI source. A disclaimer had no impact on misinformation reliance, demonstrating again that generic, tentatively worded corrections are likely to have merely token value [69,70]. A trust boost was also inconsequential. Perhaps more surprisingly, a source-focused inoculation was similarly unable to reduce misinformation reliance, despite reducing general trust in AI-generated information. Our pre-emptive intervention differed substantively from the most common type of inoculation, which is technique- and not source-based [31,71,72]; whereas the aim of technique-focused inoculation is to provide people with the skills to detect misleading arguments by providing an explanation of the techniques that might be used to mislead, the aim of the source-focused inoculation was to reduce reliance on the misinformation by undermining the source’s credibility and explaining why it sometimes provides inaccurate information. However, this finding nevertheless demonstrates that not all pre-emptive inoculation-type interventions are effective or even superior to retroactive interventions [40,41]. In fact, in the present study, a retroactive debunking was able to significantly reduce the influence of misinformation on participants’ reasoning, in line with much previous research [33,35,44,73]. Only a combination of inoculation and debunking, however, was able to eliminate the influence of misinformation entirely [74,75].
Our finding that a pre-emptive source discreditation was largely ineffective at reducing misinformation reliance stands in contrast to previous work on retroactive source discreditation [46]. One possible explanation for this discrepancy is that, although inoculated participants in the present study were less trusting of AI-generated information in general, they did not have sufficient knowledge of trickle-down economics to counter the misleading arguments presented. Consistent with this idea, the literature on mental models suggests that misinformation countermeasures will be more effective when they provide sufficient information for participants to update their existing mental model with new evidence and to understand why the previously encountered information was wrong [34,64,65,76]. This interpretation is also supported by the finding that, although source inoculation alone did not reduce misinformation reliance, it nonetheless contributed to eliminating the influence of misinformation in the combination condition, where a debunking provided additional information to allow for updating that could preserve mental-model coherence.
If source-focused interventions are only effective at combating misinformation when people can generate counterarguments based on specific knowledge they have, then it follows that the effectiveness of source discreditation may generally depend on people’s level of knowledge about a topic. If people have no relevant background knowledge, they may be unable to identify the flaws in a misleading argument; if they have significant relevant expertise, they may be able to reject misinformation outright [77–79]. In both these situations, source credibility may have little to no effect on misinformation reliance. If people have moderate levels of knowledge about a topic, however, they may be able to generate counterarguments, but may not do so unless they are alerted to a potentially untrustworthy source. This may also explain the effectiveness of retroactive source discreditation in the study by Ecker et al. [46], which used misinformation in social scenarios that participants likely had some stereotypical knowledge of (e.g. reasons for a restaurant closure).
Another possible reason why source discreditation did not reduce misinformation reliance in this study is that participants may have believed that they would notice misinformation if any was presented, so they did not perceive the inoculation as relevant. This idea is supported by research showing that people are generally overconfident in their ability to detect misinformation and believe that other people will benefit more from inoculation than themselves [80,81]. These findings, combined with prior work showing that source discreditation can be effective at countering human-generated misinformation, could suggest that people believe that AI-generated misinformation is easier to identify and are therefore more confident in their ability to detect it than human-generated misinformation. Alternatively, people may have believed the arguments in the AI-generated article because they were consistent with their pre-existing beliefs [38] or relatively easy to process [20,82], despite being relatively distrustful of the AI source or AI-generated information generally. In line with this idea, research suggests that people may readily believe a non-credible source when the assertions are plausible or corroborated by others [59], and may engage in motivated reasoning when misinformation is consistent with their worldview [83–85]. Future research should investigate whether source discreditation is more effective on individuals who are less confident in their ability to detect misinformation, and whether the perceived believability of a misleading argument can moderate the effects of source discreditation.
The current research also provides further insight into when people are willing to trust AI-generated information. Specifically, we found that people were influenced by misleading information about trickle-down economics regardless of whether it was attributed to a human or AI source. This is in line with some prior research [86–88], but it may be a topic-dependent finding, as previous work has suggested that people are more willing to rely on AI in domains that are typically considered more objective, technical or analytical than in domains that are considered more emotional or moral [89–93]. The impact of source information may also vary depending on the strength of people’s pre-existing attitudes about the topic at hand. Generally, source credibility is likely to be less influential when people have strong pre-existing attitudes, as whether information is accepted or rejected in such cases may depend primarily on its compatibility with those attitudes [94]—noting that source-credibility evaluation itself can be influenced by pre-existing attitudes [95,96]. On the other hand, source information may be more influential when people make decisions based on conflicting information [97]. Caution should therefore be applied when generalizing the current findings to other topics.
Our findings nevertheless have implications for the use of AI labels on online platforms. In contrast to previous work suggesting that people perceive content labelled as AI-generated to be less accurate than the same information labelled as human-generated [25,26], our results indicate that AI labels do not necessarily increase scepticism, even when people are warned that AI systems can make mistakes. One important caveat is that the current study focused on reducing trust as a countermeasure to AI-generated misinformation. This differs from the goal of labelling AI-generated content, which may be to encourage people to carefully consider the veracity of AI-generated information without inducing general scepticism towards such information [98,99]. As the current study focused on misleading AI content, we could not examine how a pre-emptive source inoculation targeting generative AI systems may impact people’s responses to accurate AI-generated information [100–102]. Therefore, a clear target for future research is to examine whether labelling AI-generated information can improve people’s ability to discriminate between accurate and inaccurate information.
On a practical level, our findings support the recommendation that misinformation interventions include both source discreditation and content-focused correction [46]. We found that people continued to rely on misinformation to some extent after receiving a pre-emptive, source-focused inoculation or a retroactive, content-focused debunking, but misinformation reliance was eliminated when people received a combined intervention. These findings are consistent with previous research showing that source discreditation can significantly improve the effectiveness of other misinformation interventions (or vice versa), presumably because both the message and the messenger are targeted [46,74,103,104].
On a theoretical level, our results further highlight the need for models of continued misinformation influence to account for social variables, such as the perceived credibility of the misinformation source [49,53,58]. Specifically, the finding that source discreditation improves the effectiveness of retroactive corrections indicates that, although a failure to encode or remember corrective information likely contributes to continued influence effects, people’s evaluation of source credibility is a contributing factor, too. Source discreditation may also increase the effectiveness of content-focused interventions by reducing the psychological discomfort associated with processing corrections or by making corrections more memorable [48,105].
There are several limitations of the present study. First, trust in AI is known to be influenced by culture [106]. A worldwide survey, for example, found that people in developing countries tended to report higher levels of trust in AI systems and were more likely to believe that the benefits of AI outweighed the risks than people in Western countries [107]. In this study, we only recruited participants from the United States, where many people report having significant concerns about the use of AI systems. It is possible, therefore, that labelling content as AI-generated and/or discrediting AI systems may impact people’s responses to AI-generated information differently in other cultures. Future research could also examine whether amount of exposure to AI systems mediates the relationship between trust in AI and misinformation reliance across cultures.
Second, the current research only examined the impacts of AI-generated information presented outside an AI platform—as it may be encountered on news and social media sites—so participants did not engage with the generative AI system directly. Therefore, our findings cannot be used to draw conclusions about the direct impacts of AI-generated misinformation when people actively interact with an AI system (e.g. see [108]). In this space, future work may consider whether a source-focused inoculation may be more effective at reducing misinformation reliance when it includes an active component that provides people with direct experience of AI making mistakes. Although some research suggests that inoculation that requires active engagement (e.g. matching content with the persuasion technique used to mislead; [108–110]) can be effective at countering misinformation, no research—to our knowledge—has yet examined the effectiveness of such strategies for inoculating people against AI-generated misinformation. Additionally, given that people use heuristics to assess source credibility [111] and are sometimes more trusting of artificial agents that display humanlike features or behaviours [112–115], it may be fruitful for future research to examine whether targeting the more affective intuition dimension of trust and people’s tendency to anthropomorphize AI systems can influence trust in the information that these systems produce.
Finally, participants in the current study were only given the name of the alleged human author and therefore had little information to guide their credibility and veracity judgements. Thus, another question for future research may be whether people are more trusting of human-generated versus AI-generated information when the human author is associated with other credibility cues [52,116,117].
To conclude, the present study demonstrates that AI-generated misinformation has the potential to influence reasoning, regardless of whether people are aware of its AI source. Our findings suggest that the labels and generic disclaimers used by social media sites and generative AI platforms do not necessarily induce scepticism and may be of limited practical value for reducing reliance on AI-generated information. Retroactive debunking interventions that specifically counter the misinformation, however, are likely to be somewhat effective at reducing AI-misinformation reliance. Although a pre-emptive source discreditation alone was found to be insufficient to reduce reliance on AI-generated misinformation, the current research adds to the evidence that providing interventions that tackle both the message and the messenger may be crucial for eliminating misinformation effects.
Acknowledgements
We thank Rod Cumberbatch for research assistance.
Footnotes
Although we had an intuition that the impact might be greater when the source was perceived to be human rather than AI, we did not specify a direction for this hypothesis.
Based on age, gender, race and political orientation, matched to US census data.
ChatGPT showed no resistance to generating a biased article, including fictional statements from fictional experts at fictional institutes, so no ‘prompt engineering’ was required.
Although ChatGPT was also used to assist with the initial draft of this article, the article was largely written by the senior author, not least to minimize the potential for it to be perceived as synthetically generated.
Experiments were run simultaneously; hypotheses therefore largely mirrored those of Experiment 1.
ChatGPT showed little resistance to generating misleading information, so minimal ‘prompt engineering’ was required: ChatGPT was simply asked to ‘pretend’ that it was in favour of trickle-down economics, and was told that the request to use persuasion techniques, including the use of fake experts or evidence, was for research purposes.
Contributor Information
Emily R. Spearing, Email: emily.spearing@uwa.edu.au.
Constantina I. Gile, Email: 23249104@student.uwa.edu.au.
Amy L. Fogwill, Email: 22896782@student.uwa.edu.au.
Toby Prike, Email: toby.prike@uwa.edu.au; toby.prike@gmail.com.
Briony Swire-Thompson, Email: b.swire-thompson@northeastern.edu.
Stephan Lewandowsky, Email: stephan.lewandowsky@bristol.ac.uk.
Ullrich K. H. Ecker, Email: ullrich.ecker@uwa.edu.au.
Ethics
This study was approved by the Human Research Ethics Office of the University of Western Australia (ethics approval RA/4/20/6423).
Data accessibility
The data and materials for this study are available on the Open Science Framework (OSF): [118].
Supplementary material is available online [119].
Declaration of AI use
We have not used AI-assisted technologies in creating this article.
Authors’ contributions
E.S.: formal analysis, visualization, writing—original draft, writing—review and editing; C.G.: formal analysis, investigation, methodology; A.F.: formal analysis, investigation, methodology; T.P.: methodology, writing—review and editing; B.S.: funding acquisition, methodology, writing—review and editing; S.L.: funding acquisition, methodology, writing—review and editing; U.K.H.E.: conceptualization, formal analysis, funding acquisition, investigation, methodology, project administration, supervision, writing—review and editing.
All authors gave final approval for publication and agreed to be held accountable for the work performed therein.
Conflict of interest declaration
We declare we have no competing interests.
Funding
This research was supported by Australian Research Council grant DP240101230 to U.K.H.E., S.L. and B.S.T.; S.L. acknowledges financial support from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Advanced Grant agreement No. 101020961 PRODEMINFO), and the Humboldt Foundation through a research award.
References
- 1. Alkaissi H, McFarlane SI. 2023. Artificial hallucinations in ChatGPT: implications in scientific writing. Cureus 15, e35179. (doi:10.7759/cureus.35179)
- 2. Buchanan J, Hill S, Shapoval O. 2024. ChatGPT hallucinates non-existent citations: evidence from economics. Am. Econ. 69, 80–87. (doi:10.1177/05694345231218454)
- 3. Gravel J, D’Amours-Gravel M, Osmanlliu E. 2023. Learning to fake it: limited responses and fabricated references provided by ChatGPT for medical questions. Mayo Clin. Proc. 1, 226–234. (doi:10.1016/j.mcpdig.2023.05.004)
- 4. Heppell F, Bakir ME, Bontcheva K. 2024. Lying blindly: bypassing ChatGPT’s safeguards to generate hard-to-detect disinformation claims at scale. arXiv. See http://arxiv.org/abs/2402.08467.
- 5. Shukla AK, Tripathi S. 2024. AI-generated misinformation in the election year 2024: measures of European Union. Front. Polit. Sci. 6, 1451601. (doi:10.3389/fpos.2024.1451601)
- 6. Loomba S, de Figueiredo A, Piatek SJ, de Graaf K, Larson HJ. 2021. Measuring the impact of COVID-19 vaccine misinformation on vaccination intent in the UK and USA. Nat. Hum. Behav. 5, 337–348. (doi:10.1038/s41562-021-01056-1)
- 7. Nisbet EC, Mortenson C, Li Q. 2021. The presumed influence of election misinformation on others reduces our own satisfaction with democracy. Harv. Kennedy Sch. Misinformation Rev. 1. (doi:10.37016/mr-2020-59)
- 8. Ecker UKH, Tay LQ, Roozenbeek J, van der Linden S, Cook J, Oreskes N, Lewandowsky S. 2024. Why misinformation must not be ignored. Am. Psychol. (doi:10.1037/amp0001448)
- 9. Ortiz S. 2024. 200 million people use ChatGPT every week - up from 100 million last fall, says OpenAI. ZDNET. See https://www.zdnet.com/article/200-million-people-use-chatgpt-every-week-up-from-100-million-last-fall-says-openai/.
- 10. Singla A, Sukharevsky A, Yee L, Chui M, Hall B. 2024. The state of AI in early 2024: Gen AI adoption spikes and starts to generate value. See https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai.
- 11. Gao CA, Howard FM, Markov NS, Dyer EC, Ramesh S, Luo Y, Pearson AT. 2023. Comparing scientific abstracts generated by ChatGPT to real abstracts with detectors and blinded human reviewers. Npj Digit. Med. 6, 75. (doi:10.1038/s41746-023-00819-6)
- 12. Huang Y, Sun L. 2023. FakeGPT: fake news generation, explanation and detection of large language models. arXiv. See http://arxiv.org/abs/2310.05046.
- 13. Jakesch M, Hancock JT, Naaman M. 2023. Human heuristics for AI-generated language are flawed. Proc. Natl Acad. Sci. USA 120, e2208839120. (doi:10.1073/pnas.2208839120)
- 14. Jiang B, Tan Z, Nirmal A, Liu H. 2023. Disinformation detection: an evolving challenge in the age of LLMs. arXiv. See http://arxiv.org/abs/2309.15847.
- 15. Fu Y, Hanaki N. 2024. Do people rely on ChatGPT more than their peers to detect fake news? ISER Discussion Paper. See https://ideas.repec.org//p/dpr/wpaper/1233.html.
- 16. Howe PDL, Fay N, Saletta M, Hovy E. 2023. ChatGPT’s advice is perceived as better than that of professional advice columnists. Front. Psychol. 14, 1281255. (doi:10.3389/fpsyg.2023.1281255)
- 17. Klingbeil A, Grützner C, Schreck P. 2024. Trust and reliance on AI—an experimental study on the extent and costs of overreliance on AI. Comput. Hum. Behav. 160, 108352. (doi:10.1016/j.chb.2024.108352)
- 18. Vodrahalli K, Daneshjou R, Gerstenberg T, Zou J. 2021. Do humans trust advice more if it comes from AI? An analysis of human-AI interactions. arXiv. See http://arxiv.org/abs/2107.07015.
- 19. Spitale G, Biller-Andorno N, Germani F. 2023. AI model GPT-3 (dis)informs us better than humans. Sci. Adv. 9, eadh1850. (doi:10.1126/sciadv.adh1850)
- 20. Alter AL, Oppenheimer DM. 2009. Uniting the tribes of fluency to form a metacognitive nation. Personal. Soc. Psychol. Rev. 13, 219–235. (doi:10.1177/1088868309341564)
- 21. Wänke M, Hansen J. 2015. Relative processing fluency. Curr. Dir. Psychol. Sci. 24, 195–199. (doi:10.1177/0963721414561766)
- 22. Ortutay B. 2021. Twitter rolls out redesigned misinformation warning labels. AP News. See https://apnews.com/article/technology-business-media-social-media-misinformation-ae496a53fbc761146627fa534cb2f8d9.
- 23. Reuters. 2024. Facebook and Instagram to label digitally altered content ‘made with AI’. The Guardian. See https://www.theguardian.com/technology/2024/apr/05/facebook-instagram-ai-label-digitally-altered-media.
- 24. Toff B, Simon FM. 2023. ‘Or they could just not use it?’: the paradox of AI disclosure for audience trust in news. See https://ora.ox.ac.uk/objects/uuid:5f3db236-dd1c-4822-aa02-ce2d03fc61f7.
- 25. Altay S, Gilardi F. 2024. People are skeptical of headlines labeled as AI-generated, even if true or human-made, because they assume full AI automation. PNAS Nexus 3, pgae403. (doi:10.1093/pnasnexus/pgae403)
- 26. Longoni C, Fradkin A, Cian L, Pennycook G. 2022. News from generative artificial intelligence is believed less. In 2022 ACM Conf. on Fairness, Accountability, and Transparency (FAccT ’22), Seoul, Republic of Korea. New York, NY: ACM. (doi:10.1145/3531146.3533077)
- 27. Clayton K, et al. 2020. Real solutions for fake news? Measuring the effectiveness of general warnings and fact-check tags in reducing belief in false stories on social media. Polit. Behav. 42, 1073–1095. (doi:10.1007/s11109-019-09533-0)
- 28. Kreps S, McCain RM, Brundage M. 2022. All the news that’s fit to fabricate: AI-generated text as a tool of media misinformation. J. Exp. Polit. Sci. 9, 104–117. (doi:10.1017/xps.2020.37)
- 29. Ecker UKH, Lewandowsky S, Cook J, Schmid P, Fazio LK, Brashier N, Kendeou P, Vraga EK, Amazeen MA. 2022. The psychological drivers of misinformation belief and its resistance to correction. Nat. Rev. Psychol. 1, 13–29. (doi:10.1038/s44159-021-00006-y)
- 30. Kozyreva A, et al. 2024. Toolbox of individual-level interventions against online misinformation. Nat. Hum. Behav. 8, 1044–1052. (doi:10.1038/s41562-024-01881-0)
- 31. Roozenbeek J, Traberg CS, van der Linden S. 2022. Technique-based inoculation against real-world misinformation. R. Soc. Open Sci. 9, 211719. (doi:10.1098/rsos.211719)
- 32. Traberg CS, Roozenbeek J, van der Linden S. 2022. Psychological inoculation against misinformation: current evidence and future directions. Ann. Am. Acad. Polit. Soc. Sci. 700, 136–151. (doi:10.1177/00027162221087936)
- 33. Brashier NM, Pennycook G, Berinsky AJ, Rand DG. 2021. Timing matters when correcting fake news. Proc. Natl Acad. Sci. USA 118, e2020043118. (doi:10.1073/pnas.2020043118)
- 34. Chan MPS, Jones CR, Hall Jamieson K, Albarracín D. 2017. Debunking: a meta-analysis of the psychological efficacy of messages countering misinformation. Psychol. Sci. 28, 1531–1546. (doi:10.1177/0956797617714579)
- 35. Tay LQ, Hurlstone MJ, Kurz T, Ecker UKH. 2022. A comparison of prebunking and debunking interventions for implied versus explicit misinformation. Br. J. Psychol. 113, 591–607. (doi:10.1111/bjop.12551)
- 36. van Erkel PFA, van Aelst P, de Vreese CH, Hopmann DN, Matthes J, Stanyer J, Corbu N. 2024. When are fact-checks effective? An experimental study on the inclusion of the misinformation source and the source of fact-checks in 16 European countries. Mass Commun. Soc. 27, 851–876. (doi:10.1080/15205436.2024.2321542)
- 37. Ecker UKH, Lewandowsky S, Tang DTW. 2010. Explicit warnings reduce but do not eliminate the continued influence of misinformation. Mem. Cogn. 38, 1087–1100. (doi:10.3758/mc.38.8.1087)
- 38. Lewandowsky S, Ecker UKH, Seifert CM, Schwarz N, Cook J. 2012. Misinformation and its correction: continued influence and successful debiasing. Psychol. Sci. Public Interest 13, 106–131. (doi:10.1177/1529100612451018)
- 39. Walter N, Tukachinsky R. 2020. A meta-analytic examination of the continued influence of misinformation in the face of correction: how powerful is it, why does it happen, and how to stop it? Commun. Res. 47, 155–177. (doi:10.1177/0093650219854600)
- 40. Bolsen T, Druckman JN. 2015. Counteracting the politicization of science. J. Commun. 65, 745–769. (doi:10.1111/jcom.12171)
- 41. Jolley D, Douglas KM. 2017. Prevention is better than cure: addressing anti‐vaccine conspiracy theories. J. Appl. Soc. Psychol. 47, 459–469. (doi:10.1111/jasp.12453)
- 42. Bruns H, Dessart FJ, Krawczyk M, Lewandowsky S, Pantazi M, Pennycook G, Schmid P, Smillie L. 2024. Investigating the role of source and source trust in prebunks and debunks of misinformation in online experiments across four EU countries. Sci. Rep. 14, 20723. (doi:10.1038/s41598-024-71599-6)
- 43. Tay LQ, Hurlstone MJ, Kurz T, Ecker UKH. 2024. Do prebunking and debunking protect against novel misinformation? PsyArXiv. (doi:10.31234/osf.io/w7gja)
- 44. Walter N, Murphy ST. 2018. How to unring the bell: a meta-analytic approach to correction of misinformation. Commun. Monogr. 85, 423–441. (doi:10.1080/03637751.2018.1467564)
- 45. Richards AS, Bessarabova E, Banas JA, Bernard DR. 2022. Reducing psychological reactance to health promotion messages: comparing preemptive and postscript mitigation strategies. Health Commun. 37, 366–374. (doi:10.1080/10410236.2020.1839203)
- 46. Ecker UKH, Prike T, Paver AB, Scott RJ, Swire-Thompson B. 2024. Don’t believe them! Reducing misinformation influence through source discreditation. Cogn. Res. 9, 52. (doi:10.1186/s41235-024-00581-7)
- 47. Swire-Thompson B, Kilgallen K, Dobbs M, Bodenger J, Wihbey J, Johnson S. 2024. Discrediting health disinformation sources: advantages of highlighting low expertise. J. Exp. Psychol. 153, 2299–2313. (doi:10.1037/xge0001627)
- 48. Westbrook V, Wegener DT, Susmann MW. 2023. Mechanisms in continued influence: the impact of misinformation corrections on source perceptions. Mem. Cogn. 51, 1317–1330. (doi:10.3758/s13421-023-01402-w)
- 49. Zmigrod L, Burnell R, Hameleers M. 2023. The misinformation receptivity framework: political misinformation and disinformation as cognitive Bayesian inference problems. Eur. Psychol. 28, 173–188. (doi:10.1027/1016-9040/a000498)
- 50. Appel M, Mara M. 2013. The persuasive influence of a fictional character’s trustworthiness. J. Commun. 63, 912–932. (doi:10.1111/jcom.12053)
- 51. Lewandowsky S, Stritzke WGK, Oberauer K, Morales M. 2005. Memory for fact, fiction, and misinformation: the Iraq War 2003. Psychol. Sci. 16, 190–195. (doi:10.1111/j.0956-7976.2005.00802.x)
- 52. Zeng HK, Lo SY, Li SCS. 2024. Credibility of misinformation source moderates the effectiveness of corrective messages on social media. Public Underst. Sci. 33, 587–603. (doi:10.1177/09636625231215979)
- 53. Ecker UKH, Antonio LM. 2021. Can you believe it? An investigation into the impact of retraction source credibility on the continued influence effect. Mem. Cogn. 49, 631–644. (doi:10.3758/s13421-020-01129-y)
- 54. Guillory JJ, Geraci L. 2013. Correcting erroneous inferences in memory: the role of source credibility. J. Appl. Res. Mem. Cogn. 2, 201–209. (doi:10.1016/j.jarmac.2013.10.001)
- 55. Jin Y, van der Meer TGLA, Lee YI, Lu X. 2020. The effects of corrective communication and employee backup on the effectiveness of fighting crisis misinformation. Public Relations Rev. 46, 101910. (doi:10.1016/j.pubrev.2020.101910)
- 56. Wood RM, Juanchich M, Ramirez M, Zhang S. 2023. Promoting COVID-19 vaccine confidence through public responses to misinformation: the joint influence of message source and message content. Soc. Sci. Med. 324, 115863. (doi:10.1016/j.socscimed.2023.115863)
- 57. MacFarlane D, Tay LQ, Hurlstone MJ, Ecker UKH. 2021. Refuting spurious COVID-19 treatment claims reduces demand and misinformation sharing. J. Appl. Res. Mem. Cogn. 10, 248–258. (doi:10.1037/h0101793)
- 58. Connor Desai S, Reimers S. 2023. Does explaining the origins of misinformation improve the effectiveness of a given correction? Mem. Cogn. 51, 422–436. (doi:10.3758/s13421-022-01354-7)
- 59. Foy JE, LoCasto PC, Briner SW, Dyar S. 2017. ‘Would a madman have been so wise as this?’ The effects of source credibility and message credibility on validation. Mem. Cogn. 45, 281–295. (doi:10.3758/s13421-016-0656-1)
- 60. Faul F, Erdfelder E, Lang AG, Buchner A. 2007. G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav. Res. Methods 39, 175–191. (doi:10.3758/bf03193146)
- 61. Greene CM, de Saint Laurent C, Murphy G, Prike T, Hegarty K, Ecker UKH. 2023. Best practices for ethical conduct of misinformation research. Eur. Psychol. 28, 139–150. (doi:10.1027/1016-9040/a000491)
- 62. Nadarevic L, Reber R, Helmecke AJ, Köse D. 2020. Perceived truth of statements and simulated social media postings: an experimental investigation of source credibility, repeated exposure, and presentation format. Cogn. Res. 5, 56. (doi:10.1186/s41235-020-00251-4)
- 63. van der Linden S. 2024. Countering misinformation through psychological inoculation. In Advances in experimental social psychology (ed. Gawronski B), vol. 69, pp. 1–58. Academic Press. (doi:10.1016/bs.aesp.2023.11.001)
- 64. Johnson HM, Seifert CM. 1994. Sources of the continued influence effect: when misinformation in memory affects later inferences. J. Exp. Psychol. 20, 1420–1436. (doi:10.1037//0278-7393.20.6.1420)
- 65. Johnson-Laird PN. 1994. Mental models and probabilistic thinking. Cognition 50, 189–209. (doi:10.1016/0010-0277(94)90028-0)
- 66. Banas JA, Rains SA. 2010. A meta-analysis of research on inoculation theory. Commun. Monogr. 77, 281–311. (doi:10.1080/03637751003758193)
- 67. Swire B, Ecker UKH, Lewandowsky S. 2017. The role of familiarity in correcting inaccurate information. J. Exp. Psychol. 43, 1948–1961. (doi:10.1037/xlm0000422)
- 68. van der Meer TGLA, Jin Y. 2020. Seeking formula for misinformation treatment in public health crises: the effects of corrective information type and source. Health Commun. 35, 560–575. (doi:10.1080/10410236.2019.1573295)
- 69. MacFarlane D, Hurlstone MJ, Ecker UKH. 2021. Countering demand for ineffective health remedies: do consumers respond to risks, lack of benefits, or both? Psychol. Health 36, 593–611. (doi:10.1080/08870446.2020.1774056)
- 70. Paynter J, et al. 2019. Evaluation of a template for countering misinformation—real-world autism treatment myth debunking. PLoS One 14, e0210746. (doi:10.1371/journal.pone.0210746)
- 71. Compton J, van der Linden S, Cook J, Basol M. 2021. Inoculation theory in the post‐truth era: extant findings and new frontiers for contested science, misinformation, and conspiracy theories. Soc. Personal. Psychol. Compass 15, e12602. (doi:10.1111/spc3.12602)
- 72. Cook J, Lewandowsky S, Ecker UKH. 2017. Neutralizing misinformation through inoculation: exposing misleading argumentation techniques reduces their influence. PLoS One 12, e0175799. (doi:10.1371/journal.pone.0175799)
- 73. Christner C, Merz P, Barkela B, Jungkunst H, von Sikorski C. 2024. Combatting climate disinformation: comparing the effectiveness of correction placement and type. Environ. Commun. 18, 729–742. (doi:10.1080/17524032.2024.2316757)
- 74. Bak-Coleman JB, Kennedy I, Wack M, Beers A, Schafer JS, Spiro ES, Starbird K, West JD. 2022. Combining interventions to reduce the spread of viral misinformation. Nat. Hum. Behav. 6, 1372–1380. (doi:10.1038/s41562-022-01388-6)
- 75. Pennycook G, Berinsky AJ, Bhargava P, Lin H, Cole R, Goldberg B, Lewandowsky S, Rand DG. 2024. Inoculation and accuracy prompting increase accuracy discernment in combination but not alone. Nat. Hum. Behav. 8, 2330–2341. (doi:10.1038/s41562-024-02023-2)
- 76. Wilkes AL, Leatherbarrow M. 1988. Editing episodic memory following the identification of error. Q. J. Exp. Psychol. 40, 361–387.
- 77. Guath M, Nygren T. 2022. Civic online reasoning among adults: an empirical evaluation of a prescriptive theory and its correlates. Front. Educ. 7, 721731. (doi:10.3389/feduc.2022.721731)
- 78. Nygren T, Guath M. 2022. Students evaluating and corroborating digital news. Scand. J. Educ. Res. 66, 549–565. (doi:10.1080/00313831.2021.1897876)
- 79. Scherer LD, McPhetres J, Pennycook G, Kempe A, Allen LA, Knoepke CE, Tate CE, Matlock DD. 2021. Who is susceptible to online health misinformation? A test of four psychosocial hypotheses. Health Psychol. 40, 274–284. (doi:10.1037/hea0000978)
- 80. Johnson A, Madsen JK. 2024. Inoculation hesitancy: an exploration of challenges in scaling inoculation theory. R. Soc. Open Sci. 11, 231711. (doi:10.1098/rsos.231711)
- 81. Martínez-Costa MP, López-Pan F, Buslón N, Salaverría R. 2023. Nobody-fools-me perception: influence of age and education on overconfidence about spotting disinformation. Journal. Pract. 17, 2084–2102. (doi:10.1080/17512786.2022.2135128)
- 82. Berinsky AJ. 2017. Rumors and health care reform: experiments in political misinformation. Br. J. Polit. Sci. 47, 241–262. (doi:10.1017/s0007123415000186)
- 83. Ecker UKH, Sze BKN, Andreotta M. 2021. Corrections of political misinformation: no evidence for an effect of partisan worldview in a US convenience sample. Phil. Trans. R. Soc. Lond. B 376, 20200145. (doi:10.1098/rstb.2020.0145)
- 84. Flynn DJ, Nyhan B, Reifler J. 2017. The nature and origins of misperceptions: understanding false and unsupported beliefs about politics. Polit. Psychol. 38, 127–150. (doi:10.1111/pops.12394)
- 85. Persson E, Andersson D, Koppel L, Västfjäll D, Tinghög G. 2021. A preregistered replication of motivated numeracy. Cognition 214, 104768. (doi:10.1016/j.cognition.2021.104768)
- 86. Buchanan J, Hickman W. 2024. Do people trust humans more than ChatGPT? J. Behav. Exp. Econ. 112, 102239. (doi:10.1016/j.socec.2024.102239)
- 87. Huang G, Wang S. 2023. Is artificial intelligence more persuasive than humans? A meta-analysis. J. Commun. 73, 552–562. (doi:10.1093/joc/jqad024)
- 88. Leib M, Köbis N, Rilke RM, Hagens M, Irlenbusch B. 2024. Corrupted by algorithms? How AI-generated and human-written advice shape (dis)honesty. Econ. J. 134, 766–784. (doi:10.1093/ej/uead056)
- 89. Bigman YE, Gray K. 2018. People are averse to machines making moral decisions. Cognition 181, 21–34. (doi:10.1016/j.cognition.2018.08.003)
- 90. Castelo N, Bos MW, Lehmann DR. 2019. Task-dependent algorithm aversion. J. Mark. Res. 56, 809–825. (doi:10.1177/0022243719851788)
- 91. Chugunova M, Sele D. 2022. We and it: an interdisciplinary review of the experimental evidence on how humans interact with machines. J. Behav. Exp. Econ. 99, 101897. (doi:10.1016/j.socec.2022.101897)
- 92. Logg JM, Minson JA, Moore DA. 2019. Algorithm appreciation: people prefer algorithmic to human judgment. Organ. Behav. Hum. Decis. Process. 151, 90–103. (doi:10.1016/j.obhdp.2018.12.005)
- 93. Riva P, Aureli N, Silvestrini F. 2022. Social influences in the digital era: when do people conform more to a human being or an artificial intelligence? Acta Psychol. 229, 103681. (doi:10.1016/j.actpsy.2022.103681)
- 94. Kumkale GT, Albarracín D, Seignourel PJ. 2010. The effects of source credibility in the presence or absence of prior attitudes: implications for the design of persuasive communication campaigns. J. Appl. Soc. Psychol. 40, 1325–1356. (doi:10.1111/j.1559-1816.2010.00620.x)
- 95. Cook J, Lewandowsky S. 2016. Rational irrationality: modeling climate change belief polarization using Bayesian networks. Top. Cogn. Sci. 8, 160–179. (doi:10.1111/tops.12186)
- 96. Cruz SM, Carpenter CJ. 2024. The roles of identity- and belief-driven motivated reasoning and source credibility in persuasion on climate change policy. J. Lang. Soc. Psychol. 43, 592–619. (doi:10.1177/0261927X241291572)
- 97. Bromme R, Thomm E, Wolf V. 2015. From understanding to deference: laypersons’ and medical students’ views on conflicts within medicine. Int. J. Sci. Educ. Part B 5, 68–91. (doi:10.1080/21548455.2013.849017)
- 98. Gillath O, Ai T, Branicky MS, Keshmiri S, Davison RB, Spaulding R. 2021. Attachment and trust in artificial intelligence. Comput. Hum. Behav. 115, 106607. (doi:10.1016/j.chb.2020.106607)
- 99. Leichtmann B, Humer C, Hinterreiter A, Streit M, Mara M. 2023. Effects of explainable artificial intelligence on trust and human behavior in a high-risk decision task. Comput. Hum. Behav. 139, 107539. (doi:10.1016/j.chb.2022.107539)
- 100. Modirrousta-Galian A, Higham PA. 2023. Gamified inoculation interventions do not improve discrimination between true and fake news: reanalyzing existing research with receiver operating characteristic analysis. J. Exp. Psychol. 152, 2411–2437. (doi:10.1037/xge0001395)
- 101. van der Meer TGLA, Hameleers M, Ohme J. 2023. Can fighting misinformation have a negative spillover effect? How warnings for the threat of misinformation can decrease general news credibility. Journal. Stud. 24, 803–823. (doi:10.1080/1461670x.2023.2187652)
- 102. Van Duyn E, Collier J. 2019. Priming and fake news: the effects of elite discourse on evaluations of news media. Mass Commun. Soc. 22, 29–48. (doi:10.1080/15205436.2018.1511807)
- 103. Bode L, Vraga E. 2021. The Swiss cheese model for mitigating online misinformation. Bull. At. Sci. 77, 129–133. (doi:10.1080/00963402.2021.1912170)
- 104. Prike T, Ecker UKH. 2023. Effective correction of misinformation. Curr. Opin. Psychol. 54, 101712. (doi:10.1016/j.copsyc.2023.101712)
- 105. Susmann MW, Wegener DT. 2022. The role of discomfort in the continued influence effect of misinformation. Mem. Cogn. 50, 435–448. (doi:10.3758/s13421-021-01232-8)
- 106. Epstein Z, Fang MC, Arechar AA, Rand DG. 2023. What label should be applied to content produced by generative AI? PsyArXiv. (doi:10.31234/osf.io/v4mfz)
- 107. Gillespie N, Lockey S, Curtis C, Pool J, Akbari A. 2023. Trust in artificial intelligence: a global study. The University of Queensland; KPMG Australia. (doi:10.14264/00d3c94)
- 108. Costello TH, Pennycook G, Rand DG. 2024. Durably reducing conspiracy beliefs through dialogues with AI. Science 385, eadq1814. (doi:10.1126/science.adq1814)
- 109. Green M, McShane CJ, Swinbourne A. 2022. Active versus passive: evaluating the effectiveness of inoculation techniques in relation to misinformation about climate change. Aust. J. Psychol. 74, 2113340. (doi:10.1080/00049530.2022.2113340)
- 110. McGuire WJ. 1961. Resistance to persuasion conferred by active and passive prior refutation of the same and alternative counterarguments. J. Abnorm. Soc. Psychol. 63, 326–332.
- 111. Metzger MJ, Flanagin AJ. 2013. Credibility and trust of information in online environments: the use of cognitive heuristics. J. Pragmat. 59, 210–220. (doi:10.1016/j.pragma.2013.07.012)
- 112. Glikson E, Woolley AW. 2020. Human trust in artificial intelligence: review of empirical research. Acad. Manag. Ann. 14, 627–660. (doi:10.5465/annals.2018.0057)
- 113. Troshani I, Rao Hill S, Sherman C, Arthur D. 2021. Do we trust in AI? Role of anthropomorphism and intelligence. J. Comput. Inf. Syst. 61, 481–491. (doi:10.1080/08874417.2020.1788473)
- 114. Waytz A, Heafner J, Epley N. 2014. The mind in the machine: anthropomorphism increases trust in an autonomous vehicle. J. Exp. Soc. Psychol. 52, 113–117. (doi:10.1016/j.jesp.2014.01.005)
- 115. Zhang T, Kaber DB, Zhu B, Swangnetr M, Mosaly P, Hodge L. 2010. Service robot feature design effects on user perceptions and emotional responses. Intell. Serv. Robot. 3, 73–88. (doi:10.1007/s11370-010-0060-9)
- 116. Bhattacherjee A. 2022. The effects of news source credibility and fact-checker credibility on users’ beliefs and intentions regarding online misinformation. J. Electron. Bus. Digit. Econ. 1, 24–33. (doi:10.1108/jebde-09-2022-0031)
- 117. Oeldorf-Hirsch A, DeVoss CL. 2020. Who posted that story? Processing layered sources in Facebook news posts. Journal. Mass Commun. Q. 97, 141–160. (doi:10.1177/1077699019857673)
- 118. Spearing ER, Ecker UKH, Prike T. 2025. Countering AI-generated misinformation. OSF. See https://osf.io/t2g3a/.
- 119. Spearing E, Gile C, Fogwill A, Prike T, Swire B, Lewandowsky S. 2025. Supplementary material from: Countering AI-generated misinformation with pre-emptive source discreditation and debunking. Figshare. (doi:10.6084/m9.figshare.c.7837940)