Abstract
Exposure to counterattitudinal information has been shown to yield mixed effects on attitude polarization. The current research explores the differential impact of such information when generated by artificial intelligence (AI) versus human sources. While prior work highlights a general aversion to AI for decision-making, our research reveals a consistent openness to AI in the context of counterattitudinal messages. Across four pre-registered studies (N = 2061), we find that when people receive counterattitudinal messages on potentially polarizing issues, AI sources are perceived as less biased, more informative, and having less persuasive intent than human sources. This leads to greater receptiveness to counterattitudinal messages when those messages come from AI rather than human sources. In addition, we find preliminary evidence that receiving counterattitudinal messages from an AI (versus human) source can diminish outgroup animosity and facilitate attitude change.
Supplementary Information
The online version contains supplementary material available at 10.1038/s41598-025-00791-z.
Keywords: Artificial intelligence, Receptiveness, Source perceptions, Polarization, Persuasion
Subject terms: Psychology, Human behaviour
Political polarization is a pressing issue1. People are increasingly divided on topics such as vaccinations, gun control, immigration, and more2,3. One manifestation of polarization is that individuals primarily interact with people and perspectives that align with their ideologies4,5, while remaining unreceptive to and avoiding opposing perspectives and disagreeing others6,7. In response to rising polarization, considerable research has been directed to understanding receptiveness to opposing views. Receptiveness, also known as openness, refers to the willingness to access, engage with, and consider new or opposing perspectives in an open-minded manner8–12. Reluctance or inability to consider or engage with opposing views has been shown to aggravate attitude conflict10, which can exacerbate ideological segregation and extremism13–15. Consequently, exploring factors that increase receptiveness to opposing views is vital for understanding how to bridge divides and reduce polarization.
Receptiveness to counterattitudinal messages
A much-theorized contributor to polarization is selective exposure16–18—the phenomenon whereby people preferentially expose themselves to proattitudinal (attitude-congruent) information and avoid counterattitudinal information that contradicts their views. Thus, it would be reasonable to postulate that simply exposing people to counterattitudinal messages could induce them to engage with opposing perspectives and soften their views. However, past research shows mixed results. Some research suggests that cross-stance discussions increase people’s openness to opposing views and disagreeing others19–21, whereas other research suggests that counterattitudinal messages can be aversive and even backfire by reinforcing people’s existing views7,15,22–24. In fact, some research suggests that exposure to counterattitudinal messages5,25,26 and dialogues with dissenters27 escalates polarization and incivility.
Why would exposure to counterattitudinal messages reduce people’s receptiveness to opposing perspectives? One reason is that people often perceive the sources of counterattitudinal messages unfavorably—for example, as uninformed, biased, emotional, and more intent on persuading than listening and learning, all of which undermine message recipients’ willingness to engage with opposing views11,28–31. Interventions that address these problems might promote greater receptiveness to counterattitudinal messages, and thus help mitigate polarization. For example, if people perceive the source of counterattitudinal messages as more informed, less biased, and having less intent to persuade, perhaps people would be more open and willing to engage with counterattitudinal messages, and more likely to soften their views after receiving them.
The current research investigates whether Artificial Intelligence (AI) might offer a means of producing greater receptiveness to opposing positions and disagreeing others. Specifically, we examine whether people are more receptive to counterattitudinal messages, even holding constant the content of those messages, when they believe the source of those messages is AI rather than human.
AI as a vehicle for counterattitudinal messages
As noted, people often resist counterattitudinal messages, due in part to their perceptions of the message sources. Indeed, people often see individuals presenting counterattitudinal messages as uninformed (e.g., lacking expertise), biased, untrustworthy, and having persuasive intent. Each of these perceptions is known to lower people’s receptiveness to incoming information. For example, research reveals that people are less open to persuasive messages from sources perceived as lacking expertise or trustworthiness32,33 or having bias on the issue at hand34,35. Likewise, past research has shown that perceiving persuasive intent in a message generally inhibits persuasion36–38. Reactance theory suggests that this effect occurs because people see persuasive attempts as threatening their attitudinal freedom39. Thus, when people perceive persuasive intent, they have been shown to reject persuasive messages to reestablish their freedom, even if unaware of the messages’ topic or position40. Recent research reveals that these effects are especially likely with counterattitudinal messages41.
We submit that compared to human sources, AI sources can mitigate these concerns. First, because AI requires a massive scale of data and information to train its models42, people might perceive AI sources to have access to more information and knowledge than human sources, giving them more perceived expertise. Second, people appear to have a common belief in the inherent neutrality of AI, perceiving it as less biased, more objective, and less emotional than humans43–45. Third, compared to human sources, AI is widely viewed as lacking the capacity to possess its own intentions46–48. Thus, we posit that people perceive less persuasive intent from AI compared to human sources. In short, we hypothesize that people would have greater receptiveness to counterattitudinal messages from AI than human sources, due to perceptions of AI sources being more informed, less biased, and having less persuasive intent. If true, this could mean that people would be more open to engaging with (e.g., receiving, sharing, or seeking out) counterattitudinal messages from AI than human sources, and thus perhaps that AI holds potential to reduce outgroup animosity and polarization9,12.
It is worth noting that rapid advances in AI technology have stimulated a burgeoning literature evaluating how AI, especially large language models (LLMs), can affect communication and persuasion. For instance, recent studies have begun to explore the effects of AI models in political messaging. In some of this work, AI-generated messages have been found to be as persuasive as human-generated messages49–53. Interestingly, though, these studies have not disentangled the effect of the AI source from the effect of the AI-generated message content. Studies exploring AI and persuasion have prompted humans and AI (e.g., GPT-4) to generate messages independently and then compared the quality and impact of those different messages. Therefore, extant research on AI and persuasion does not speak to the question of how perceptions of AI sources might shape receptiveness to opposing views.
Departing from prior work, we examine whether people’s perceptions of counterattitudinal messages differ based on whether people believe the sources of those messages are AI or human. That is, we examine whether the exact same message might be perceived differently depending on whether people think the message was generated by AI or a human source. We hypothesize that AI sources might foster greater receptiveness to counterattitudinal messages than human sources, because AI sources are perceived as more informative, less biased, and having less persuasive intent. Note that our hypothesis offers a counterpoint to the well-documented notion of algorithm aversion, which refers to the fact that people often prefer to rely on humans over algorithms even though algorithms frequently outperform humans54,55. However, research on algorithm aversion focuses on the decision-making domain, such as medical advice-taking or forecasting56,57, not on receptiveness to counterattitudinal information. Thus, our work seeks to extend the algorithm aversion literature and explore the potential of AI to foster receptiveness and help combat polarization.
Overview
We test our hypotheses across four pre-registered experiments, varying the focal issues and the specific AI and human sources. In particular, we compare different AI sources of varying familiarity (see Supplemental Materials for Study 3) with a variety of representative human sources that people commonly encounter in daily life, such as incidentally similar peers, experts, social media influencers, and advocacy groups. Across studies, our goal is to compare AI sources to realistic human alternatives that people actually receive messages from on political issues, thus boosting the ecological validity of our experiments. In addition, we test the implications of people’s perceptions of AI versus human sources for their openness to opposing perspectives, operationalized as felt receptiveness to receiving counterattitudinal information, willingness to seek out and share counterattitudinal messages with others, and reduced outgroup animosity. Furthermore, we aggregate the evidence across studies to offer an initial, preliminary assessment of whether AI sources have potential to promote attitude change and, thus, reduce attitude polarization.
The hypotheses, methods, and analysis plan for all studies reported in the main text were preregistered. All preregistrations, consent forms, data, experimental materials, and code used to generate the analyses for all reported studies have been organized and made publicly available for peer review at https://osf.io/smb24/?view_only=4cea014c9e024f95820c96d0b6ce811a. Additional analyses and details about methods and study materials (e.g., the counterattitudinal messages) are provided in the Supplemental Materials.
All study procedures were approved by the Institutional Review Board (IRB) of Stanford University (IRB #10777) and conducted in accordance with the Declaration of Helsinki. Written informed consent was obtained from all participants prior to their inclusion in the study. All personal identifiers have been removed to ensure confidentiality.
Study 1A
Method
In Study 1A, we examined whether people would perceive an AI source as less biased, more informative, and having less persuasive intent than a human source, both before and after reading a counterattitudinal message. Recent research reveals that exposure to incidentally similar people with differing political views can reduce opinion polarization19. Thus, we compared an AI source (ChatGPT) to an incidentally similar human source (another study participant) designed to reflect a regular person who participants might encounter in their daily life. Four-hundred fifty-seven participants (43.54% female, Mage = 37.96) were recruited from Prolific Academic. Per our pre-registration, we recruited participants who indicated that they supported universal healthcare, since our counterattitudinal message was about the disadvantages of universal healthcare. A sensitivity analysis (GPower)58 revealed that our sample provided 80% power to detect an effect size of Cohen’s d = 0.26. Participants were randomly assigned to one condition in a two-cell between-participants design.
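The reported sensitivity analysis can be approximated in a short sketch. This is not G*Power itself (which uses the noncentral t distribution); it uses a normal approximation, and the even 228/229 split of the 457 participants is an assumption, since the per-condition counts are not reported:

```python
from statistics import NormalDist

def min_detectable_d(n1, n2, alpha=0.05, power=0.80):
    """Smallest Cohen's d detectable by a two-sample t-test with the
    given group sizes, alpha, and power (normal approximation; G*Power's
    noncentral-t computation differs slightly)."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) * (1 / n1 + 1 / n2) ** 0.5

# Assumed ~even split of the 457 participants across the two conditions:
d = min_detectable_d(228, 229)  # ~0.26, matching the reported sensitivity
```

The same function reproduces the sensitivity values reported in Studies 1B and 2 when their sample sizes are substituted.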
At the outset, participants were informed that we were interested in their perceptions of arguments on the issue of universal healthcare. Then, participants read that they would be presented with a message regarding the disadvantages of universal healthcare. Participants were randomly assigned to be informed that the message was generated by ChatGPT or another Prolific participant. Next, participants read a short description of the source. In the ChatGPT condition, participants read that: “ChatGPT is a language model developed using artificial intelligence technology. Its design allows it to generate human-like text and participate in natural language conversations. It can be applied in discussing topics such as universal healthcare.” In the Prolific participant condition, participants read that: “In a previous survey, we asked participants to share their thoughts on the topic of universal healthcare. This provided us with a diverse range of perspectives on the issue. You will hear from one of our previous participants.”
Before reading the counterattitudinal message, participants reported their expectations of the message source. First, participants reported how biased and informative they believed the message and source would be, using Lupia59 and Wallace et al.’s35 definitions of bias (i.e., being skewed and less objective) and expertise (i.e., being informative and having relevant knowledge). In addition, because past research suggests that AI is viewed as lacking emotion44, we expanded Wallace et al.’s35 definition of bias to offer a more comprehensive assessment of how people perceive AI versus human sources. Specifically, participants rated the expected bias of the message on three items (α = 0.56; due to the low alpha, we also individually analyzed the three items included in the bias composite. We found similar results for the individual items as for the composite index, both before and after reading the counterattitudinal message. This was true in subsequent studies and for other measures as well. See the Supplemental Materials for more information.): (1) “How biased do you expect the message to be?” (2) “How emotional do you expect the message to be?” (3) “How objective do you expect the message to be?” (1 = very impartial/very unemotional/not objective at all, 9 = very biased/very emotional/very objective; the objectivity item was reverse scored). Participants also rated the expected persuasive intent of the source on two items (r = .36, p < .001): (1) “To what extent do you believe the source will be motivated to convince you of a certain perspective?” (2) “To what extent do you believe the source will have a hidden agenda on this issue?” (for both items, 1 = not at all, 9 = very much).
The expected informativeness of the source was also rated on two items (r = .79, p < .001): (1) “How much information do you believe the source will have on this issue?” (2) “How knowledgeable do you believe the source will be on this issue?” (1 = not at all/not knowledgeable at all, 9 = very much/very knowledgeable).
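The composite scoring described above (reverse-scoring the objectivity item on the 9-point scale and summarizing reliability with Cronbach’s α) can be sketched as follows. The helper names and the toy data are illustrative, not the study’s:

```python
from statistics import variance

def reverse_score(x, scale_max=9):
    """Reverse-score a response on a 1..scale_max scale, as done for the
    objectivity item (1 becomes 9, 9 becomes 1)."""
    return scale_max + 1 - x

def cronbach_alpha(items):
    """Cronbach's alpha for a composite.
    items: one list of responses per item, respondents in the same order."""
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]  # per-respondent sum score
    return k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(totals))

# Toy example: three perfectly consistent items yield alpha = 1.0.
items = [[1, 2, 3, 4], [2, 3, 4, 5], [3, 4, 5, 6]]
alpha = cronbach_alpha(items)
```

With real ratings, items that track each other only loosely (as with the bias composite here, α = 0.56) pull alpha down, which is why the authors also analyzed the items individually.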
Next, participants received the counterattitudinal message. The message was identical across source conditions, outlining three disadvantages of universal healthcare (counterattitudinal messages for all studies can be found in Supplemental Materials). After reading the message, participants rated the source on bias (α = 0.53), persuasive intent (r = .39, p < .001), and informativeness (r = .85, p < .001), using the same items as before but referencing the message they read instead of the message they would read. Finally, participants wrote their thoughts about the counterattitudinal message, answered demographic questions, and were debriefed.
Results
Before reading the counterattitudinal message, participants expected the message from ChatGPT to be less biased than the message from a previous participant (MChatGPT = 4.45, SDChatGPT = 1.49 vs. MProlific participant = 5.83, SDProlific participant = 1.29; t(455) = − 10.59, p < .001, d = − 0.99, 95% CI = [− 1.19, − 0.80]). Participants also expected the ChatGPT source to have less persuasive intent compared to the participant source (MChatGPT = 4.92, SDChatGPT = 2.08 vs. MProlific participant = 5.74, SDProlific participant = 1.57; t(455) = − 4.76, p < .001, d = − 0.45, 95% CI = [− 0.63, − 0.26]). Additionally, participants expected the ChatGPT source to be more informative than the participant source (MChatGPT = 6.49, SDChatGPT = 1.77 vs. MProlific participant = 4.44, SDProlific participant = 1.68; t(455) = 12.73, p < .001, d = 1.19, 95% CI = [0.99, 1.39]).
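These contrasts are standard pooled-variance two-sample t-tests. A minimal sketch from summary statistics, assuming a roughly even 228/229 split (the per-condition ns are not reported), recovers the reported expected-bias result:

```python
from math import sqrt

def two_sample_from_summary(m1, sd1, n1, m2, sd2, n2):
    """Pooled-variance t statistic and Cohen's d from summary statistics."""
    sp = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                              # standardized difference
    t = (m1 - m2) / (sp * sqrt(1 / n1 + 1 / n2))    # test statistic, df = n1+n2-2
    return t, d

# Expected bias, ChatGPT vs. Prolific participant (group sizes assumed):
t, d = two_sample_from_summary(4.45, 1.49, 228, 5.83, 1.29, 229)
# t ≈ -10.59 on 455 df, d ≈ -0.99
```

Substituting the other reported means and SDs reproduces the remaining contrasts in Studies 1A and 1B to rounding error.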
After reading the counterattitudinal message, participants perceived the message from ChatGPT (vs. a previous participant) to be less biased (MChatGPT = 4.14, SDChatGPT = 1.59 vs. MProlific participant = 4.64, SDProlific participant = 1.44; t(455) = − 3.50, p < .001, d = − 0.33, 95% CI = [− 0.51, − 0.14]). Participants also perceived the ChatGPT source to have less persuasive intent compared to the participant source (MChatGPT = 5.17, SDChatGPT = 2.19 vs. MProlific participant = 5.59, SDProlific participant = 1.66; t(455) = − 2.30, p = .022, d = − 0.22, 95% CI = [− 0.40, − 0.03]), and they perceived the ChatGPT source to be more informative than the participant source (MChatGPT = 6.16, SDChatGPT = 2.08 vs. MProlific participant = 5.30, SDProlific participant = 1.87; t(455) = 4.63, p < .001, d = 0.43, 95% CI = [0.25, 0.62]). See Fig. 1.
Fig. 1.
The effect of the source manipulation (ChatGPT vs. Prolific participant) on perceived bias, persuasive intent, and informativeness following the counterattitudinal message in Study 1A. The dots represent individual data points. The bottom and top edges of the boxes represent the first (Q1) and third (Q3) quartiles of data. The horizontal line inside each box indicates the average value of the measure indicated on the x-axis.
Study 1B
Method
In Study 1B, we used an expert as the human source to test whether the results observed in Study 1A would hold when the human source was more knowledgeable on the subject matter. Experts are often more persuasive than non-experts60, so this source offered an even stronger test of our hypotheses. Four-hundred forty-nine participants (50.11% female, Mage = 40.95) were recruited from Prolific Academic. Per our pre-registration, we recruited participants who supported universal healthcare, as our counterattitudinal message presented the disadvantages of universal healthcare. A sensitivity analysis (GPower)58 revealed that our sample provided 80% power to detect an effect size of Cohen’s d = 0.27. We randomly assigned participants to one condition in a two-cell between-participants design.
In this study, participants followed the exact same procedure as in Study 1A except that the human source was an expert. In the ChatGPT condition, we used the same description as in Study 1A. In the expert condition, the description of the source was: “You will hear from an expert on the topic of Universal Healthcare. This person has considerable knowledge and understanding of the nuances and intricacies surrounding this issue, and they can offer an informed perspective on its implications, challenges, and potential benefits.” Participants responded to the same measures as in Study 1A both before and after reading the message. These measures assessed perceived bias (before: α = 0.48; after: α = 0.58), persuasive intent (before: r = .49, p < .001; after: r = .42, p < .001), and informativeness (before: r = .71, p < .001; after: r = .87, p < .001) of the message and source. Finally, participants wrote their thoughts about the message, answered demographic questions, and were debriefed.
Results
Before receiving the counterattitudinal message, participants expected the message from ChatGPT (vs. an expert) to be less biased (MChatGPT = 4.51, SDChatGPT = 1.53 vs. Mexpert = 5.42, SDexpert = 1.38; t(447) = − 6.63, p < .001, d = − 0.63, 95% CI = [− 0.82, − 0.44]). Participants also expected the ChatGPT source to have less persuasive intent compared to the expert source (MChatGPT = 5.10, SDChatGPT = 1.98 vs. Mexpert = 6.51, SDexpert = 1.63; t(447) = − 8.21, p < .001, d = − 0.78, 95% CI = [− 0.97, − 0.58]). However, participants expected the ChatGPT and expert sources to be equally informative (MChatGPT = 6.53, SDChatGPT = 1.68 vs. Mexpert = 6.43, SDexpert = 1.63; t(447) = 0.67, p = .501, d = 0.06, 95% CI = [− 0.12, 0.25]).
After reading the counterattitudinal message, participants still perceived the message from ChatGPT (vs. an expert) to be less biased (MChatGPT = 4.21, SDChatGPT = 1.67 vs. Mexpert = 5.06, SDexpert = 1.55; t(447) = − 5.55, p < .001, d = − 0.52, 95% CI = [− 0.71, − 0.33]) and perceived the ChatGPT source to have less persuasive intent compared to the expert source (MChatGPT = 5.26, SDChatGPT = 2.09 vs. Mexpert = 6.54, SDexpert = 1.69; t(447) = − 7.12, p < .001, d = − 0.67, 95% CI = [− 0.86, − 0.48]). Notably, despite having equivalent expectations for informativeness, after reading the (exact same) message participants perceived the ChatGPT source to be more informative than the expert source (MChatGPT = 6.35, SDChatGPT = 1.82 vs. Mexpert = 5.82, SDexpert = 1.97; t(447) = 2.98, p = .003, d = 0.28, 95% CI = [0.09, 0.47]). See Fig. 2. Additional analyses can be found in the Supplemental Materials.
Fig. 2.
The effect of the source manipulation (ChatGPT vs. Expert) on perceived bias, persuasive intent, and informativeness following the counterattitudinal message in Study 1B. The dots represent individual data points. The bottom and top edges of the boxes represent the first (Q1) and third (Q3) quartiles of data. The horizontal line inside each box indicates the average value of the measure indicated on the x-axis.
The results of Studies 1A and 1B provide initial evidence regarding people’s perceptions of AI versus human sources both before and after receiving counterattitudinal messages. We found that, especially after reading the counterattitudinal messages, people perceived AI (vs. a human source) as less biased, more informative, and having less persuasive intent. These perceptions held regardless of whether the human source was another study participant (i.e., a regular person similar to the participant) or an established expert on the topic.
Study 2
Method
Study 2 expanded on Studies 1A and 1B in several ways. First, we tested the source perception effect with a different issue, a different human source, and a different AI model, moving beyond the commonly studied ChatGPT. Second, we examined how these perceptions influenced receptiveness to opposing views—for example, people’s openness to receiving more information about the opposing position and their willingness to seek out and share that information with others. Third, to eliminate potential spillover from pre-message expectations, we assessed all perceptions and intentions solely after participants had read the counterattitudinal message. Finally, we recruited participants on both sides of the focal issue to ensure that our findings were robust to variations in the valence and position of the message.
In this study, participants read a counterattitudinal message on vaccinations that they believed came from either Bard (a relatively unfamiliar AI source at the time of our study; see Supplemental Materials for Study 3) or a social media influencer (i.e., the human source). Social media influencers have significantly shaped American politics in recent years, with both major political parties actively engaging with them. Recent reports estimate that Democratic political action committees have spent approximately $2.7 million on influencer campaigns and that Donald Trump’s presidential campaign allocated nearly $2.5 million to digital agencies working with online influencers for the 2024 election61,62. Notably, adults in the U.S. have been shown to trust information from social media influencers nearly as much as they do from national news outlets63. Indeed, research reveals that social media influencers are perceived as relatable, credible, trustworthy, and appealing, and that they reach broad audiences and have a profound impact on people’s attitudes64–67, suggesting that influencer sources might have substantial potential to enhance receptiveness. Therefore, in Study 2, we assessed people’s reactions to a counterattitudinal message purported to come from an influencer or Bard (the AI source). In addition to the measures included in Studies 1A-1B, we examined whether participants in the Bard (vs. influencer) condition would exhibit greater receptiveness to opposing views, operationalized as felt receptiveness to receiving counterattitudinal information and willingness to share and seek further counterattitudinal messages from the source.
Starting in Study 2 and continuing in Study 3, we also measured attitudes after participants read the counterattitudinal message, and we computed an attitude change index indicating the degree to which attitudes shifted towards the counterattitudinal position. Altering deeply held attitudes, particularly those related to politicized or moralized issues, is typically a gradual process that may not occur following a single exposure to a brief persuasive message. Indeed, such attitudes can be resistant to change as they are closely linked to core values, beliefs, and identities68 and therefore require substantial persuasive power to produce observable effects. Nevertheless, while our individual studies did not yield statistically significant attitude change effects, we observed consistent directional trends tentatively suggesting that counterattitudinal messages might have fostered more attitude change in the AI (vs. human source) conditions. To further explore this pattern, we pooled the data across studies and conducted an internal meta-analysis as an exploratory assessment of the attitude change effect69,70. Across studies, the data revealed that counterattitudinal messages induced more attitude change when delivered by AI compared to human sources. We report this internal meta-analysis as preliminary evidence following our individual studies.
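Internal meta-analyses of this kind typically pool per-study effect sizes with fixed-effect, inverse-variance weighting. A sketch under that assumption is below; the d values and group sizes are hypothetical placeholders, not the paper’s actual attitude-change effects:

```python
from math import sqrt

def pooled_d(effects):
    """Fixed-effect (inverse-variance) pooling of Cohen's d values.
    effects: list of (d, n1, n2) tuples, one per study."""
    w_sum, wd_sum = 0.0, 0.0
    for d, n1, n2 in effects:
        # Approximate sampling variance of d for two independent groups:
        var = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
        w_sum += 1 / var
        wd_sum += d / var
    return wd_sum / w_sum, sqrt(1 / w_sum)  # pooled d and its standard error

# Hypothetical per-study attitude-change effects (NOT the reported values):
studies = [(0.10, 228, 229), (0.15, 224, 225), (0.12, 244, 244)]
d_pool, se = pooled_d(studies)
```

Pooling gains power because the standard error of the combined estimate shrinks with the total sample, which is how individually nonsignificant but directionally consistent effects can reach significance in aggregate.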
In Study 2, 531 participants were recruited from Prolific Academic. As pre-registered, we excluded responses that indicated a neutral attitude toward vaccinations (N = 43), leaving a final sample of 488 participants (52.66% female; Mage = 42.18). A sensitivity analysis (GPower)58 revealed that our sample provided 80% power to detect an effect size of Cohen’s d = 0.25. At the outset, participants indicated their attitudes towards vaccinations on a 9-point scale (1 = Strongly Oppose, 9 = Strongly Support). Participants then read that we were interested in understanding how they perceived some of the arguments on this issue. Next, participants were told that they would be presented with a message regarding vaccinations that was generated by Bard (AI source condition) or a social media influencer (human source condition). On the same page, participants read a short description of the corresponding source. Notably, in this study, we highlighted that Bard, the AI model, was developed by researchers at Google Research. This information was provided to assess whether emphasizing the human developers behind the AI model would influence participants’ perceptions of the AI source. This information also allowed us to further distinguish the AI source in Study 2 from the one used in Studies 1A and 1B (ChatGPT, developed by OpenAI), where the human developer was not explicitly mentioned.
Next, all participants received a counterattitudinal message regarding vaccinations, ostensibly from the source they had been assigned. If participants’ initial attitudes were favorable (unfavorable) toward vaccinations, they received a message against (in favor of) vaccinations. The message outlined three disadvantages (or advantages, depending on participants’ initial attitudes) of vaccinations and was identical across source conditions. The only thing that varied was whether the source was described as AI (Bard) or human (influencer). After reading the message, participants rated bias (α = 0.60) and informativeness (r = .83, p < .001), using the same items as in Studies 1A and 1B. We also added one item (“To what extent do you think the source is trying to persuade you (or change your mind) on this issue?” 1 = Not at all, 9 = Very much)71 to form a 3-item composite of persuasive intent (α = 0.88).
Following these measures, participants indicated their receptiveness to receiving more counterattitudinal messages from the source using two items (r = .88, p < .001) adapted from Hussein and Tormala72: (1) “How receptive do you feel to hearing more about the advantages/disadvantages of vaccinations from Bard/the social media influencer?” (2) “How open do you feel to receiving more information about the advantages/disadvantages of vaccinations from Bard/the social media influencer?” Participants then rated their willingness to share the counterattitudinal message they just read with others using three items (α = 0.94) adapted from Cheatham and Tormala73: (1) “How willing would you be to share the message you just read from Bard/the social media influencer with your friends or family?” (2) “How willing would you be to share the message you just read from Bard/the social media influencer with someone you do not know well but see often (a classmate, colleague, or neighbor)?” (3) “How willing would you be to share the message you just read from Bard/the social media influencer with a stranger?” Participants also rated their intention to seek further counterattitudinal information from the source using one item: “How much do you want to seek more information about the advantages/disadvantages of vaccinations from Bard/the social media influencer?” All measures used 9-point scales (1 = Not at all, 9 = Very much). Finally, participants wrote their thoughts about the counterattitudinal message they just read, again reported their attitudes toward vaccinations using the same scale as earlier, and reported their demographics.
Results
After reading the message, participants in the Bard (vs. influencer) condition perceived the message to be less biased (MBard = 4.70, SDBard = 1.64 vs. Minfluencer = 5.53, SDinfluencer = 1.66; t(486) = − 5.53, p < .001, d = − 0.50, 95% CI = [− 0.68, − 0.32]), to have less persuasive intent (MBard = 5.35, SDBard = 2.33 vs. Minfluencer = 6.83, SDinfluencer = 1.82; t(486) = − 7.80, p < .001, d = − 0.71, 95% CI = [− 0.89, − 0.52]), and to be more informative (MBard = 5.72, SDBard = 1.95 vs. Minfluencer = 4.47, SDinfluencer = 1.97; t(486) = 7.01, p < .001, d = 0.64, 95% CI = [0.45, 0.82]).
Next, we tested the source effect on receptiveness to opposing views. We found that participants felt more receptive to receiving more counterattitudinal messages from the source in the Bard (vs. influencer) condition (MBard = 4.41, SDBard = 2.56 vs. Minfluencer = 3.27, SDinfluencer = 2.47; t(486) = 5.03, p < .001, d = 0.46, 95% CI = [0.27, 0.64]). Participants in the Bard (vs. influencer) condition were also more willing to share the counterattitudinal message they read with others (MBard = 3.09, SDBard = 2.18 vs. Minfluencer = 2.36, SDinfluencer = 2.01; t(486) = 3.81, p < .001, d = 0.35, 95% CI = [0.17, 0.52]), and expressed a greater desire to seek more counterattitudinal messages on vaccinations (MBard = 3.52, SDBard = 2.66 vs. Minfluencer = 2.68, SDinfluencer = 2.35; t(486) = 3.72, p < .001, d = 0.34, 95% CI = [0.16, 0.51]). See Fig. 3.
Fig. 3.
The effect of the source manipulation (Bard vs. social media influencer) on receptiveness, willingness to share, and willingness to seek further counterattitudinal information from the source in Study 2. The dots represent individual data points. The bottom and top edges of the boxes represent the first (Q1) and third (Q3) quartiles of data. The horizontal line inside each box indicates the average value of the measure indicated on the x-axis.
We conducted a parallel mediation analysis using PROCESS Model v4.3 (10,000 bootstrap samples, random seed, contrast codes: Bard = 0, Influencer = 1)74. We examined whether perceived bias, persuasive intent, and informativeness simultaneously mediated the effect of source type on receptiveness. This analysis yielded significant mediation via perceived bias (indirect effect = − 0.15, SE = 0.07, 95% CI = [− 0.29, − 0.03]), persuasive intent (indirect effect = − 0.36, SE = 0.09, 95% CI = [− 0.55, − 0.19]), and informativeness (indirect effect = − 0.74, SE = 0.12, 95% CI = [− 0.99, − 0.50]). See Fig. 4. In addition to receptiveness, we found similar mediation patterns for willingness to share and seek more counterattitudinal messages, suggesting that the effects were driven by perceived bias, persuasive intent, and informativeness (see the Supplemental Materials for full details).
Fig. 4.
Parallel mediation analysis testing the effect of the source manipulation (Bard vs. social media influencer) on receptiveness through perceived bias, persuasive intent, and informativeness in Study 2. Path coefficients are unstandardized betas. Standard errors are given in parentheses. Asterisks indicate significant paths (*p < .05, **p < .01, ***p < .001).
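The bootstrap logic behind these indirect effects can be illustrated with a stripped-down, single-mediator sketch. The synthetic data and effect sizes below are ours for illustration only; the actual analysis used PROCESS with three parallel mediators:

```python
import random
import statistics

def slope(xs, ys):
    """OLS slope of ys on xs."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return num / sum((a - mx) ** 2 for a in xs)

def residuals(xs, ys):
    """Residuals of ys after regressing out xs."""
    s, mx, my = slope(xs, ys), statistics.fmean(xs), statistics.fmean(ys)
    return [b - (my + s * (a - mx)) for a, b in zip(xs, ys)]

def indirect(x, m, y):
    """Indirect effect a*b: X -> M (a-path) times M -> Y controlling
    for X (b-path, via Frisch-Waugh partialling)."""
    return slope(x, m) * slope(residuals(x, m), residuals(x, y))

rng = random.Random(1)
n = 500
x = [float(rng.randint(0, 1)) for _ in range(n)]   # contrast-coded source
m = [xi + rng.gauss(0, 1) for xi in x]             # mediator, true a = 1.0
y = [-0.8 * mi + 0.2 * xi + rng.gauss(0, 1)        # outcome, true b = -0.8
     for xi, mi in zip(x, m)]

# Percentile bootstrap CI for the indirect effect.
boot = []
for _ in range(500):
    idx = [rng.randrange(n) for _ in range(n)]
    boot.append(indirect([x[i] for i in idx],
                         [m[i] for i in idx],
                         [y[i] for i in idx]))
boot.sort()
ci_lo, ci_hi = boot[12], boot[487]  # ~2.5th and 97.5th percentiles of 500
```

Mediation is inferred when the bootstrap CI excludes zero, as it does for each of the three mediators reported above.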
Study 3
Method
In Study 3, we tested whether our findings would generalize to another political issue—gun control—as well as to different AI (PaLM 2, recently renamed Gemini, which only 6.92% of participants had heard of prior to our study; see Supplemental Materials) and human (an advocacy group) sources. Advocacy groups, which are collectives of individuals seeking to sway public opinion, have been shown to exert considerable influence in real-world political processes and outcomes75–77. Despite their persuasion goals, messages from advocacy groups are often perceived as relatively unbiased and well-informed78. This finding aligns with research on crowd wisdom, which suggests that aggregating groups of opinions into collective judgments (which advocacy groups do) can enhance accuracy and mitigate the biases that often characterize individual viewpoints57,79. Thus, we changed the human source to an advocacy group in Study 3 not only to increase the generalizability and ecological validity of our stimuli80, but also to compare the AI source to a human source known to be influential in real-world political discourse.
As in our previous studies, we examined whether participants would perceive the AI source (PaLM 2) to be less biased, have less persuasive intent, and be more informative than the human source (the advocacy group). In addition, we assessed whether participants in the AI (vs. human source) condition would exhibit greater receptiveness to opposing views, operationalized as increased receptiveness to receiving counterattitudinal information and greater willingness to share the counterattitudinal message. Finally, we also explored outgroup animosity toward individuals on the other side of the issue. Outgroup animosity is commonly defined as the tendency for people to dislike and distrust those from the other party70,81–83. Based on our previous finding that AI sources can elicit greater receptiveness, we further examined whether AI sources could reduce outgroup animosity. Past research has demonstrated that reducing outgroup animosity often requires long-term, over-time socialization processes84. As such, it would be reasonable to expect that simply exposing people to a one-time experimental manipulation similar to ours would have a limited impact. Nevertheless, we assessed this effect in Study 3 as a general possibility.
In this study, participants read a counterattitudinal message on gun control. Based on random assignment, participants were led to believe that the message was created by an AI model (PaLM 2) or an advocacy group. On the basis of a pilot test indicating a Cohen’s d of 0.21, we set a target sample size of 750 participants, which provides 80% power to detect an effect of this size. A total of 722 Prolific workers completed our study. Following our preregistration, we excluded participants who reported a neutral position on the topic of interest (gun control, N = 55), leaving a final sample of 667 participants (52.62% female, Mage = 39.95). At the beginning of the study, participants indicated their current attitude toward gun control (i.e., stricter gun laws in the US) on a 9-point scale (1 = Strongly Oppose, 9 = Strongly Support). In both conditions, participants read that we were interested in understanding how they perceived arguments on the issue of gun control. Then, participants were told that they would be presented with a message regarding gun control generated by PaLM 2 or an advocacy group.
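The sample-size target can be sanity-checked with the standard normal-approximation formula for a two-sided, two-sample test, n per group = 2(z_{1−α/2} + z_{1−β})² / d². A sketch (an approximation; G*Power’s exact t-based calculation differs slightly):

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided independent-samples test."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # 1.96 for alpha = .05
    z_beta = z.inv_cdf(power)            # 0.84 for 80% power
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

total = 2 * n_per_group(0.21)  # pilot effect size from the text
```

This yields a total of 712 participants, consistent with the stated target of 750 providing 80% power at d = 0.21.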
Before reading the message, participants reported their expectations of the message and source, as in Studies 1A-1B. Participants reported expected bias (α = 0.67), persuasive intent (α = 0.81), and informativeness (r = .71, p < .001) using the same items as in Study 2, applied to pre-message expectation ratings. Next, all participants read a counterattitudinal message about gun control. If participants’ initial attitude was favorable (unfavorable) toward gun control, they received a message outlining three disadvantages (advantages) of gun control. After reading the message, participants completed the same measures of receptiveness (r = .89, p < .001) and willingness to share (α = 0.93) as in Study 2.
Following the receptiveness and willingness to share measures, we assessed outgroup animosity using commonly employed measures—a feeling thermometer and trait ratings—to gauge different manifestations of animosity81. First, participants used a feeling thermometer to rate an outgroup member who would endorse the counterattitudinal message81. Specifically, participants were asked to rate their feelings towards the outgroup member on a “feeling thermometer” that ranged from 0 to 100 degrees, with higher values indicating a more positive and warmer feeling toward the outgroup member. Next, participants indicated the extent to which the outgroup member was intelligent, interested in the welfare of humanity, selfish, and ignorant, all on 9-point scales (1 = Not at all, 9 = Very much)83,85. Following prior literature, we created a positive trait composite (r = .76) and a negative trait composite (r = .79) by averaging the two positive items and two negative items, with higher values indicating greater positivity or negativity. Finally, participants once again reported their attitudes toward gun control and then reported demographics.
Results
Participants in the PaLM 2 (vs. advocacy group) condition expected the message to be less biased (MPaLM2 = 4.83, SDPaLM 2 = 1.62 vs. MAdvocacy group = 6.72, SDAdvocacy group = 1.44; t(665) = 15.87, p < .001, d = 1.23, 95% CI = [1.06, 1.39]), to have less persuasive intent (MPaLM2 = 5.67, SDPaLM 2 = 2.08 vs. MAdvocacy group = 7.43, SDAdvocacy group = 1.44; t(665) = 12.75, p < .001, d = 0.99, 95% CI = [0.83, 1.15]), and to be more informative (MPaLM2 = 6.12, SDPaLM2 = 1.82 vs. MAdvocacy group = 5.20, SDAdvocacy group = 1.95; t(665) = − 6.28, p < .001, d = − 0.49, 95% CI = [− 0.64, − 0.33]).
After reading the counterattitudinal message, participants in the PaLM 2 (vs. advocacy group) condition were more receptive to receiving additional counterattitudinal messages (MPaLM2 = 4.71, SDPaLM2 = 2.48 vs. MAdvocacy group = 3.83, SDAdvocacy group = 2.50; t(665) = − 4.56, p < .001, d = − 0.35, 95% CI = [− 0.51, − 0.20]) and were more willing to share the counterattitudinal message they read with others (MPaLM2 = 3.39, SDPaLM2 = 2.27 vs. MAdvocacy group = 2.72, SDAdvocacy group = 2.05; t(665) = − 3.96, p < .001, d = − 0.31, 95% CI = [− 0.46, − 0.15]). See Fig. 5. Furthermore, participants in the PaLM 2 (vs. advocacy group) condition indicated a more positive feeling towards the outgroup member on the feeling thermometer (MPaLM2 = 40.42, SDPaLM2 = 24.96 vs. MAdvocacy group = 35.77, SDAdvocacy group = 24.84; t(665) = − 2.41, p = .016, d = − 0.19, 95% CI = [− 0.34, − 0.03]). Participants in the PaLM 2 (vs. advocacy group) condition also rated the outgroup member more highly on positive traits (MPaLM2 = 4.81, SDPaLM2 = 2.11 vs. MAdvocacy group = 4.43, SDAdvocacy group = 2.07; t(665) = − 2.30, p = .022, d = − 0.18, 95% CI = [− 0.33, − 0.03]) and tended to rate the outgroup member lower on negative traits (MPaLM2 = 5.32, SDPaLM2 = 2.24 vs. MAdvocacy group = 5.50, SDAdvocacy group = 2.25; t(665) = 1.02, p = .308, d = 0.08, 95% CI = [− 0.07, 0.23]), though the latter effect was not significant.
Fig. 5.
The effect of the source manipulation (PaLM 2 vs. advocacy group) on receptiveness and willingness to share the counterattitudinal message in Study 3. The dots represent individual data points. The bottom and top edges of the boxes represent the first (Q1) and third (Q3) quartiles of data. The horizontal line inside each box indicates the average value of the measure indicated on the x-axis.
As in Study 2, we conducted a parallel mediation analysis to examine whether perceived bias, persuasive intent, and informativeness drove the effect of source type on receptiveness (10,000 bootstrap samples, random seed, contrast codes: PaLM 2 = 0, advocacy group = 1)74. This analysis revealed significant mediation via perceived bias (indirect effect = − 0.44, SE = 0.14, 95% CI = [− 0.73, − 0.18]), persuasive intent (indirect effect = − 0.46, SE = 0.10, 95% CI = [− 0.67, − 0.26]), and informativeness (indirect effect = − 0.40, SE = 0.08, 95% CI = [− 0.56, − 0.25]). See Fig. 6. We found similar mediation results for willingness to share the counterattitudinal message and outgroup animosity; full details can be found in the Supplemental Materials.
Fig. 6.
Parallel mediation analysis testing the effect of the source manipulation (PaLM 2 vs. advocacy group) on receptiveness through perceptions of bias, persuasive intent, and informativeness in Study 3. Path coefficients are unstandardized betas. Standard errors are given in parentheses. Asterisks indicate significant paths (*p < .05, **p < .01, ***p < .001).
Interestingly, the mediation results for outgroup animosity suggest that when people receive a counterattitudinal message from an AI source, they view that source more favorably and, thus, come to view outgroup members more favorably as well. We suspect that attributional inferences drive this effect. That is, if people perceive the source and message as more reasonable (i.e., less biased, more informative, and less intent on persuasion), they likely see the opposing position as more reasonable as well. This perception, in turn, could translate into believing that supporters of that position are more reasonable60, which results in more favorable assessments of individuals on the other side.
Exploratory internal meta-analysis
As mentioned earlier, in Studies 2 and 3, we measured attitudes toward the focal issues both before and after presenting the counterattitudinal messages. We created an attitude change index in each study by calculating the degree to which participants’ attitudes shifted towards the counterattitudinal message. For instance, if participants initially had favorable attitudes toward vaccinations, attitude change was computed as the initial attitude minus the final attitude; if participants initially had unfavorable attitudes toward vaccinations, attitude change was calculated as the final attitude minus the initial attitude. Although the results for attitude change were not significant between source conditions in the individual studies, we consistently found a directional trend suggesting more attitude change in the AI (vs. human) source condition.
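The direction-coded index described above can be sketched as a small helper (variable names are ours; neutral participants were excluded, so initial attitudes never sit at the scale midpoint):

```python
def attitude_change(initial, final, midpoint=5):
    """Attitude change toward the counterattitudinal message (9-point scale).

    Initially favorable participants read an opposing message, so movement
    down the scale counts as positive change; initially unfavorable
    participants are scored in the opposite direction.
    """
    if initial > midpoint:
        return initial - final
    return final - initial
```

For example, a supporter moving from 8 to 6 and an opponent moving from 2 to 4 both score +2; negative scores indicate movement away from the message.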
To examine the overall effect of AI (vs. human) sources on attitude change, we conducted an exploratory internal meta-analysis of all three of our pre-registered studies (N = 1435 across 3 studies) that measured attitudes both before and after the message69. The three pre-registered studies included Studies 2 and 3 in the main text and a pre-registered study reported in the Supplemental Materials (“Supplemental Study” in Fig. 7). Given that this internal meta-analysis was not pre-registered, it offers only a preliminary assessment of whether our pooled data indicate a potential effect of AI (vs. human) sources on attitude change. We used the R package metafor86 and computed Cohen’s d and the variance of d for continuous outcome variables on the basis of work by Borenstein et al.87. The hypothesis of homogeneity was not rejected, indicating no more variation in effect sizes than would be expected from sampling error alone, Q(2) = 0.09, p = .96, I2 = .0088. Accordingly, we focused on the fixed-effects model, which yielded a significant point estimate, d = 0.11, 95% CI = [0.01, 0.22], z = 2.16, p = .03, indicating a small effect89. In short, we found preliminary evidence across studies that AI sources induced more attitude change than human sources (see Fig. 7).
Fig. 7.
Forest plot for the meta-analysis on the effect of AI (vs. human) sources on attitude change for Studies 2 and 3 as well as a pre-registered study reported in the Supplemental Materials. Squares show effect-size estimates (Cohen’s ds). The size of each square gives a representation of each study’s sample size. Error bars show 95% confidence intervals (CIs). The diamond represents the point estimate and 95% CI averaged across studies.
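The fixed-effects pooling behind this analysis follows the standard inverse-variance recipe. A sketch with illustrative per-study ds (the paper reports only the pooled d = 0.11; sample sizes follow the text, with 1435 − 488 − 667 = 280 assumed for the supplemental study, and Var(d) approximated as 4/N):

```python
import math

ds = [0.12, 0.10, 0.12]   # illustrative per-study effect sizes (not reported)
ns = [488, 667, 280]      # Study 2, Study 3, supplemental study
ws = [n / 4 for n in ns]  # inverse-variance weights: w = 1/Var(d) = N/4

pooled = sum(w * d for w, d in zip(ws, ds)) / sum(ws)  # fixed-effect estimate
se = math.sqrt(1 / sum(ws))
z = pooled / se

# Heterogeneity: Cochran's Q and the I^2 proportion.
q = sum(w * (d - pooled) ** 2 for w, d in zip(ws, ds))
i2 = max(0.0, (q - (len(ds) - 1)) / q)
```

With these inputs the pooled d is about 0.11 and significant (z > 1.96), while Q falls far below its degrees of freedom, so I² = 0, mirroring the homogeneity pattern reported above.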
To explore the possibility that receptiveness might contribute to this attitude change effect—consistent with prior research showing significant indirect effects on post-message attitudes through receptiveness90,91—we conducted an exploratory mediation analysis. Across our pre-registered studies measuring both receptiveness and attitude change (i.e., Studies 2 and 3 in the main text plus the supplemental study), we examined whether the effect of source type on attitude change was mediated by receptiveness (10,000 bootstrap samples, random seed, contrast codes: AI sources = 0, human sources = 1). This analysis revealed a significant total effect (total effect = − 0.14, SE = 0.07, 95% CI = [− 0.27, − 0.01]) and significant mediation via receptiveness while controlling for the study identifier as a covariate (indirect effect = − 0.15, SE = 0.02, 95% CI = [− 0.20, − 0.10]).
General discussion
Across multiple pre-registered studies spanning different topics, AI models of varying familiarity (see Supplemental Materials for Study 3), and human sources (of varying perceived bias, persuasive intent, and informativeness; see Posttest 3 in Supplemental Materials), we found that using AI rather than human sources to present counterattitudinal messages promoted greater receptiveness to opposing views. AI sources were perceived as less biased, as having less persuasive intent, and as more informative than human sources, which increased participants’ openness to receiving, sharing, and seeking out counterattitudinal information, and even reduced outgroup animosity. As a secondary finding, we also obtained preliminary evidence suggesting that counterattitudinal messages delivered by AI sources (relative to human sources) might have potential to facilitate at least a modest degree of attitude change. Given the exploratory nature of the internal meta-analytic approach, this finding should be interpreted with caution and viewed as a starting point for future research. Nevertheless, the effects of source type on openness to opposing perspectives (evidenced by greater receptiveness, increased willingness to seek out and share counterattitudinal information, and reduced outgroup animosity) underscore the potential of AI as a tool for bridging divides and promoting constructive dialogue.
Our findings make several important contributions. First, they contribute to a rapidly growing literature on receptiveness and openness. Specifically, this research identifies AI sources as potentially useful vehicles for increasing people’s receptiveness to opposing perspectives and disagreeing others, and highlights perceived persuasive intent, bias, and informativeness as three source dimensions that play a pivotal role. This research also advances a rapidly expanding literature on the role of AI in persuasion52,53. Whereas other research has assessed the effect of AI versus human messages that differ in content, our studies held message content constant to isolate the effect of the source’s identity. Thus, our studies show that the mere perception that a message was generated by AI can open people up to information they often reject. More generally, our studies extend classic research on the roles of persuasive intent and source credibility in persuasion by showing that AI sources can alter perceptions of these dimensions. Finally, our research challenges the prevailing narrative of algorithm aversion by illuminating a context in which people prefer to receive AI rather than human input.
On the practical side, this research introduces a novel tool to increase openness and potentially reduce animosity in real-world settings. For example, social media platforms such as Facebook and X (formerly Twitter) have recognized the critical role their recommendation algorithms can play in depolarization, and have made efforts to mitigate attitudinal extremism and foster open dialogue online92. These platforms have revamped their original feed-ranking algorithms to prioritize presenting content that is more relevant, less polarized, and of higher quality93. Despite these efforts, research suggests that such algorithmic adjustments do not necessarily reduce polarization in beliefs or attitudes94. Our research illuminates a cost-effective strategy to combat polarization on these platforms—specifically, that exposing people to counterattitudinal messages identified as coming from AI sources might boost receptiveness to opposing perspectives and disagreeing others. Because those messages are not human-generated, people might process them more charitably and open up to them in a way that is uncommon with human sources. Exploring the potential for AI to help increase receptiveness and reduce outgroup animosity in the real world would be a valuable next step.
Looking ahead, there are several promising directions for future research. First, it would be useful to investigate the specific attributes of AI that lead it to be perceived as less biased, more informative, and lower in persuasive intent than human sources. This could provide deeper insight into how AI-mediated communication can be tailored for maximum effectiveness. Additionally, expanding the scope of this research to include different cultural contexts could uncover critical variations in how AI is perceived globally. This is particularly relevant given that perceptions of technology and AI vary widely across cultures and demographics95,96. Relatedly, the ethical dimensions of using AI in persuasion warrant careful consideration. While the potential benefits are significant, it is vital to ensure that the technology is employed in a manner that respects ethical standards and societal norms.
It would also be useful to further examine the mechanisms through which source perceptions drive receptiveness in the current paradigm. For instance, these effects could operate heuristically, with individuals relying on simple cues about the source, or they could affect deeper cognitive processes, such as shaping the level of thought people devote to the message or issue under consideration. In Studies 1A, 1B, and 2, we included open-ended thought listing measures that, in principle, could have revealed differences in cognitive processing across conditions. However, participants provided little written input, making their responses less informative than we hoped. Indeed, our preliminary analysis of the thoughts participants shared failed to uncover reliable differences across source conditions in the valence, extremity, emotionality, certainty, or length of participants’ responses (see the Supplemental Materials for more detail about these analyses). Future research exploring this issue more systematically could help further unpack the precise psychological mechanism through which source perceptions affect receptiveness.
Finally, research exploring additional or alternative mechanisms for the current effects would be valuable. For instance, perhaps source liking contributes to the effect of AI (vs. human) sources on receptiveness. Previous research has indicated that people are often more persuaded by sources they like than sources they dislike97,98. Although our studies did not directly measure liking, it would be reasonable to surmise that participants liked the AI sources more than the human sources given the counterattitudinal message context. Indeed, people have a well-known distaste for others who disagree with them99,100. Perhaps because they are non-human, AI sources are spared this distaste. Another potential factor could be the perceived social currency associated with engaging with AI versus human sources. Recent research suggests that there can be reputational costs to engaging with sources aligned with an opposing political party101. In our studies, it is possible that participants perceived greater reputational risk to engaging with the counterattitudinal message when it came from a human rather than AI source, as the human source was more plausibly affiliated with an opposing political group. Future research exploring liking, reputational costs, and other factors as drivers of differential receptiveness to AI versus human sources would expand our understanding of this effect.
In conclusion, this research offers a novel glimpse into the potential of AI as a tool for bridging divides in an increasingly polarized world. A nuanced understanding of how AI is perceived as a source of counterattitudinal messages not only contributes to our knowledge of receptiveness and AI communication, but also opens new avenues for employing technology to foster openness and reduce polarization. We hope the current studies prompt new research in these domains.
Electronic supplementary material
Below is the link to the electronic supplementary material.
Author contributions
L.L. and Z.T. designed the research. L.L. conceived the project, supervised the data collection, analyzed the data, wrote the original draft, and prepared Figs. 1, 2, 3, 4, 5, 6 and 7. All authors reviewed and edited the manuscript.
Data availability
All data and code are publicly available for peer review: https://osf.io/smb24/?view_only=4cea014c9e024f95820c96d0b6ce811a.
Declarations
Competing interests
The authors declare no competing interests.
Footnotes
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1.Voelkel, J. G. et al. Interventions reducing affective polarization do not necessarily improve anti-democratic attitudes. Nat. Hum. Behav.7, 55–64 (2023). [DOI] [PubMed] [Google Scholar]
- 2.Cinelli, M., de Francisci Morales, G., Galeazzi, A., Quattrociocchi, W. & Starnini, M. The echo chamber effect on social media. Proc. Natl. Acad. Sci. U. S. A.118, e2023301118 (2021). [DOI] [PMC free article] [PubMed]
- 3.Mason, L. Uncivil Agreement: How Politics Became our Identity (University of Chicago Press, 2019).
- 4.Barberá, P., Jost, J. T., Nagler, J., Tucker, J. A. & Bonneau, R. Tweeting from left to right: Is online political communication more than an echo chamber? Psychol. Sci.26, 1531–1542 (2015). [DOI] [PubMed] [Google Scholar]
- 5.Buder, J. et al. Online interaction turns the congeniality bias into an uncongeniality bias. Psychol. Sci.34, 1055–1068 (2023). [DOI] [PubMed] [Google Scholar]
- 6.Chen, M. K. & Rohla, R. The effect of partisanship and political advertising on close family ties. Science360, 1020–1024 (2018). [DOI] [PubMed] [Google Scholar]
- 7.Minson, J. A. & Dorison, C. A. Why is exposure to opposing views aversive? Reconciling three theoretical perspectives. Curr. Opin. Psychol.47, 101435 (2022). [DOI] [PubMed] [Google Scholar]
- 8.Briñol, P. & Petty, R. E. Openness and persuasion: Multiple processes, meanings, and outcomes. In Divided: Open-mindedness and Dogmatism in a Polarized World (eds Ottati, V. & Stern, C.) 59–77 (Oxford University Press, 2023).
- 9.Hussein, M. A. & Tormala, Z. L. Undermining your case to enhance your impact: A framework for understanding the effects of acts of receptiveness in persuasion. Personal Soc. Psychol. Rev.25, 229–250 (2021). [DOI] [PubMed] [Google Scholar]
- 10.Minson, J. A. & Chen, F. S. Receptiveness to opposing views: Conceptualization and integrative review. Personal Soc. Psychol. Rev.26, 93–111 (2022). [DOI] [PubMed] [Google Scholar]
- 11.Tormala, Z. L. & Rucker, D. D. Attitudes: Form, function, and the factors that shape them. In The Handbook of Social Psychology (ed. Gilbert, D. T., Fiske, S. T., Finkel, E. J., & Mendes, W. B.) (Situational Press, in press).
- 12.Yeomans, M., Minson, J., Collins, H., Chen, F. & Gino, F. Conversational receptiveness: Improving engagement with opposing views. Organ. Behav. Hum. Decis. Process.160, 131–148 (2020). [Google Scholar]
- 13.Fernbach, P. M., Rogers, T., Fox, C. R. & Sloman, S. A. Political extremism is supported by an illusion of Understanding. Psychol. Sci.24, 939–946 (2013). [DOI] [PubMed] [Google Scholar]
- 14.Krosnick, J. A. The role of attitude importance in social evaluation: A study of policy preferences, presidential candidate evaluations, and voting behavior. J. Pers. Soc. Psychol.55, 196–210 (1988). [DOI] [PubMed] [Google Scholar]
- 15.Lord, C. G., Ross, L. & Lepper, M. R. Biased assimilation and attitude polarization: The effects of prior theories on subsequently considered evidence. J. Pers. Soc. Psychol.37, 2098–2109 (1979). [Google Scholar]
- 16.Frey, D. Recent research on selective exposure to information. Adv. Exp. Soc. Psychol.19, 41–80 (1986). [Google Scholar]
- 17.Hart, W. et al. Feeling validated versus being correct: A Meta-analysis of selective exposure to information. Psychol. Bull.135, 555–588 (2009). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18.Knobloch-Westerwick, S. & Meng, J. Looking the other way: Selective exposure to attitude-consistent and counterattitudinal political information. Communic Res.36, 426–448 (2009). [Google Scholar]
- 19.Balietti, S., Getoor, L., Goldstein, D. G. & Watts, D. J. Reducing opinion polarization: Effects of exposure to similar people with differing political views. Proc. Natl. Acad. Sci. U. S. A.118, e2112552118 (2021). [DOI] [PMC free article] [PubMed]
- 20.Combs, A. et al. Reducing political polarization in the united States with a mobile chat platform. Nat. Hum. Behav.7, 1454–1461 (2023). [DOI] [PubMed] [Google Scholar]
- 21.Kardas, M., Nordgren, L. & Rucker, D. How civil conversations dissolve disagreements and are surprisingly likely to reduce attitude polarization. In Society for Judgment and Decision Making Annual Conference, 1–40 (2023).
- 22.Bail, C. A. et al. Exposure to opposing views on social media can increase political polarization. Proc. Natl. Acad. Sci. U S A. 115, 9216–9221 (2018). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 23.Frimer, J. A., Skitka, L. J. & Motyl, M. Liberals and Conservatives are similarly motivated to avoid exposure to one another’s opinions. J. Exp. Soc. Psychol.72, 1–12 (2017). [Google Scholar]
- 24.Tormala, Z. L. & Petty, R. E. What doesn’t kill me makes me stronger: The effects of resisting persuasion on attitude certainty. J. Pers. Soc. Psychol.83, 1298–1313 (2002). [DOI] [PubMed] [Google Scholar]
- 25.Nyhan, B. & Reifler, J. When corrections fail: The persistence of political misperceptions. Polit Behav.32, 303–330 (2010). [Google Scholar]
- 26.Taber, C. S. & Lodge, M. Motivated skepticism in the evaluation of political beliefs. Am. J. Pol. Sci.50, 755–769 (2006). [Google Scholar]
- 27.Lee, J. K., Choi, J., Kim, C. & Kim, Y. Social media, network heterogeneity, and opinion polarization. J. Commun.64, 702–722 (2014). [Google Scholar]
- 28.Collins, H. K., Dorison, C. A., Gino, F. & Minson, J. A. Underestimating counterparts’ learning goals impairs conflictual conversations. Psychol. Sci.33, 1732–1752 (2022). [DOI] [PubMed] [Google Scholar]
- 29.Fisher, M. & Keil, F. C. The illusion of argument justification. J. Exp. Psychol. Gen.143, 425–433 (2014). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30.Kennedy, K. A. & Pronin, E. When disagreement gets ugly: Perceptions of bias and the escalation of conflict. Personal Soc. Psychol. Bull.34, 833–848 (2008). [DOI] [PubMed] [Google Scholar]
- 31.Teeny, J. D. & Petty, R. E. Attributions of emotion and reduced attitude openness prevent people from engaging others with opposing views. J. Exp. Soc. Psychol.102, 104373 (2022). [Google Scholar]
- 32.Petty, R. E. & Cacioppo, J. T. The effects of involvement on responses to argument quantity and quality: Central and peripheral routes to persuasion. J. Pers. Soc. Psychol.46, 69–81 (1984). [Google Scholar]
- 33.Priester, J. R. & Petty, R. E. Source attributions and persuasion: Perceived honesty as a determinant of message scrutiny. Personal Soc. Psychol. Bull.21, 637–654 (1995). [Google Scholar]
- 34.Wallace, L. E., Wegener, D. T. & Petty, R. E. Influences of source bias that differ from source untrustworthiness: When flip-flopping is more and less surprising. J. Pers. Soc. Psychol.118, 603–616 (2020). [DOI] [PubMed] [Google Scholar]
- 35.Wallace, L. E., Wegener, D. T. & Petty, R. E. When sources honestly provide their biased opinion: Bias as a distinct source perception with independent effects on credibility and persuasion. Personal Soc. Psychol. Bull.46, 439–453 (2020). [DOI] [PubMed] [Google Scholar]
- 36.Bearden, W. O., Hardesty, D. M. & Rose, R. L. Consumer self-confidence: Refinements in conceptualization and measurement. J. Consum. Res.28, 121–134 (2001). [Google Scholar]
- 37.Campbell, M. C. & Kirmani, A. Consumers’ use of persuasion knowledge: The effects of accessibility and cognitive capacity on perceptions of an influence agent. J. Consum. Res.27, 69–83 (2000). [Google Scholar]
- 38.Petty, R. E. & Cacioppo, J. T. Forewarning, cognitive responding, and resistance to persuasion. J. Pers. Soc. Psychol.35, 645–655 (1977). [Google Scholar]
- 39.Brehm, J. W. A Theory of Psychological Reactance (Academic Press, 1966).
- 40.Hass, R. G. & Grady, K. Temporal delay, type of forewarning, and resistance to influence. J. Exp. Soc. Psychol.11, 459–469 (1975). [Google Scholar]
- 41.Moyer-Gusé, E., Tchernev, J. M. & Walther-Martin, W. The persuasiveness of a humorous environmental narrative combined with an explicit persuasive appeal. Sci. Commun.41, 422–441 (2019). [Google Scholar]
- 42. Duan, Y., Edwards, J. S. & Dwivedi, Y. K. Artificial intelligence for decision making in the era of big data – evolution, challenges and research agenda. Int. J. Inf. Manage. 48, 63–71 (2019).
- 43. Araujo, T., Helberger, N., Kruikemeier, S. & de Vreese, C. H. In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI Soc. 35, 611–623 (2020).
- 44. Castelo, N., Bos, M. W. & Lehmann, D. R. Task-dependent algorithm aversion. J. Mark. Res. 56, 809–825 (2019).
- 45. Pethig, F. & Kroenung, J. Biased humans, (un)biased algorithms? J. Bus. Ethics 183, 637–652 (2023).
- 46. Garvey, A. M., Kim, T. W. & Duhachek, A. Bad news? Send an AI. Good news? Send a human. J. Mark. 87, 10–25 (2023).
- 47. Malle, B. F., Moses, L. J. & Baldwin, D. A. Intentions and Intentionality: Foundations of Social Cognition (The MIT Press, 2001).
- 48. Kim, T. W. & Duhachek, A. Artificial intelligence and persuasion: A construal-level account. Psychol. Sci. 31, 363–380 (2020).
- 49. Bai, H., Voelkel, J. G., Eichstaedt, C. & Willer, R. Artificial intelligence can persuade humans on political issues. Preprint at https://doi.org/10.31219/osf.io/stakv (2023).
- 50. Goldstein, J. A., Chao, J., Grossman, S., Stamos, A. & Tomz, M. How persuasive is AI-generated propaganda? PNAS Nexus 3, 1–7 (2024).
- 51. Hackenburg, K. & Margetts, H. Evaluating the persuasive influence of political microtargeting with large language models. Proc. Natl. Acad. Sci. U. S. A. 121, e2403116121 (2024).
- 52. Huang, G. & Wang, S. Is artificial intelligence more persuasive than humans? A meta-analysis. J. Commun. 73, 552–562 (2023).
- 53. Matz, S. C. et al. The potential of generative AI for personalized persuasion at scale. Sci. Rep. 14, 4692 (2024).
- 54. Dietvorst, B. J., Simmons, J. P. & Massey, C. Algorithm aversion: People erroneously avoid algorithms after seeing them err. J. Exp. Psychol. Gen. 144, 114–126 (2015).
- 55. Longoni, C., Bonezzi, A. & Morewedge, C. K. Resistance to medical artificial intelligence. J. Consum. Res. 46, 629–650 (2019).
- 56. Cadario, R., Longoni, C. & Morewedge, C. K. Understanding, explaining, and utilizing medical artificial intelligence. Nat. Hum. Behav. 5, 1636–1642 (2021).
- 57. Mellers, B. A., Lu, L. & McCoy, J. P. Predicting the future with humans and AI. Consum. Psychol. Rev. 6, 109–120 (2023).
- 58. Mayr, S., Erdfelder, E., Buchner, A. & Faul, F. A short tutorial of GPower. Tutor. Quant. Methods Psychol. 3, 51–59 (2007).
- 59. Lupia, A. Communicating science in politicized environments. Proc. Natl. Acad. Sci. U. S. A. 110, 14048–14054 (2013).
- 60. Pornpitakpan, C. The persuasiveness of source credibility: A critical review of five decades’ evidence. J. Appl. Soc. Psychol. 34, 243–281 (2004).
- 61. Duffy, C. & Fung, B. Influencers are playing a big role in this year’s election. There’s no way to tell who’s getting paid for their endorsements. CNN. https://www.cnn.com/2024/10/29/tech/influencers-presidential-campaign-paid-disclosure/index.html (2024).
- 62. Goldmacher, S. How Kamala Harris burned through $1.5 billion in 15 weeks. The New York Times. https://www.nytimes.com/2024/11/17/us/politics/harris-campaign-finances.html (2024).
- 63. Eddy, K. Republicans, young adults now nearly as likely to trust info from social media as from national news outlets. Pew Research Center. https://www.pewresearch.org/short-reads/2024/10/16/republicans-young-adults-now-nearly-as-likely-to-trust-info-from-social-media-as-from-national-news-outlets/ (2024).
- 64. Chung, J. J., Ding, Y. & Kalra, A. I really know you: How influencers can increase audience engagement by referencing their close social ties. J. Consum. Res. 50, 683–703 (2023).
- 65. Lou, C. & Yuan, S. Influencer marketing: How message value and credibility affect consumer trust of branded content on social media. J. Interact. Advert. 19, 58–73 (2019).
- 66. Mintel Group Limited. US social media influencers market report 2022. https://store.mintel.com/report/social-media-influencers-us-2022 (2022).
- 67. Schouten, A. P., Janssen, L. & Verspaget, M. Celebrity vs. influencer endorsements in advertising: The role of identification, credibility, and product-endorser fit. In Leveraged Mark (eds Yoon, C. & Choi, S.) 208–231 (Routledge, 2021).
- 68. Goren, P. Party identification and core political values. Am. J. Pol. Sci. 49, 881–896 (2005).
- 69. McShane, B. B. & Böckenholt, U. Single-paper meta-analysis: Benefits for study summary, theory testing, and replicability. J. Consum. Res. 43, 1048–1063 (2017).
- 70. Rathje, S., van Bavel, J. J. & van der Linden, S. Out-group animosity drives engagement on social media. Proc. Natl. Acad. Sci. U. S. A. 118, e2024292118 (2021).
- 71. Rocklage, M. D., Rucker, D. D. & Nordgren, L. F. Persuasion, emotion, and language: The intent to persuade transforms language via emotionality. Psychol. Sci. 29, 749–760 (2018).
- 72. Hussein, M. A. & Tormala, Z. L. You versus we: How pronoun use shapes perceptions of receptiveness. J. Exp. Soc. Psychol. 110, 104555 (2024).
- 73. Cheatham, L. & Tormala, Z. L. Attitude certainty and attitudinal advocacy: The unique roles of clarity and correctness. Pers. Soc. Psychol. Bull. 41, 1537–1550 (2015).
- 74. Hayes, A. F. Introduction to Mediation, Moderation, and Conditional Process Analysis: A Regression-Based Approach (Guilford, 2018).
- 75. Cigler, A. J., Loomis, B. A. & Nownes, A. J. Interest Group Politics (CQ Press, 2015).
- 76. Obar, J. A., Zube, P. & Lampe, C. Advocacy 2.0: An analysis of how advocacy groups in the United States perceive and use social media as tools for facilitating civic engagement and collective action. J. Inf. Policy 2, 1–25 (2012).
- 77. Wallack, L. Media advocacy: A strategy for empowering people and communities. J. Public Health Policy 15, 420–436 (1994).
- 78. Jungherr, A., Wuttke, A., Mader, M. & Schoen, H. A source like any other? Field and survey experiment evidence on how interest groups shape public opinion. J. Commun. 71, 276–304 (2021).
- 79. Martel, C., Allen, J., Pennycook, G. & Rand, D. G. Crowds can effectively identify misinformation at scale. Perspect. Psychol. Sci. 19, 477–488 (2024).
- 80. Kihlstrom, J. F. Ecological validity and ecological validity. Perspect. Psychol. Sci. 16, 466–471 (2021).
- 81. Druckman, J. N., Klar, S., Krupnikov, Y., Levendusky, M. & Ryan, J. B. Affective polarization, local contexts and public opinion in America. Nat. Hum. Behav. 5, 28–38 (2021).
- 82. Gershon, R. & Fridman, A. Individuals prefer to harm their own group rather than help an opposing group. Proc. Natl. Acad. Sci. U. S. A. 119, e2215633119 (2022).
- 83. Iyengar, S., Lelkes, Y., Levendusky, M., Malhotra, N. & Westwood, S. J. The origins and consequences of affective polarization in the United States. Annu. Rev. Polit. Sci. 22, 129–146 (2019).
- 84. Druckman, J. N. & Levy, J. Affective polarization in the American public. In Handbook on Politics and Public Opinion (ed. Rudolph, T.) 257–270 (Edward Elgar Publishing, 2022).
- 85. Garrett, R. K. et al. Implications of pro- and counterattitudinal information exposure for affective polarization. Hum. Commun. Res. 40, 309–332 (2014).
- 85.Garrett, R. K. et al. Implications of pro- and counterattitudinal information exposure for affective polarization. Hum. Commun. Res.40, 309–332 (2014). [Google Scholar]
- 86.Viechtbauer, W. Conducting Meta-analyses in R with the metafor package. J. Stat. Softw.36, 1–48 (2010). [Google Scholar]
- 87.Borenstein, M., Hedges, L. V., Higgins, J. P. T. & Rothstein, H. R. Introduction To Meta-analysis (Wiley, 2021).
- 88.Higgins, J. P. T. & Green, S. Cochrane Handbook for Systematic Reviews of Interventions: Version 5.1.0 (The Cochrane Collaboration, 2011).
- 89.Cohen, J. Statistical power analysis. Curr. Dir. Psychol. Sci.1, 98–101 (1992). [Google Scholar]
- 90.Norcross, J. C. & Wampold, B. E. What works for whom: Tailoring psychotherapy to the person. J. Clin. Psychol.67, 127–132 (2011). [DOI] [PubMed] [Google Scholar]
- 91.Xu, M. & Petty, R. E. Order matters when using two-sided messages to influence morally based attitudes. Personal Soc. Psychol. Bull.50, 01461672231223308 (2024). [DOI] [PubMed] [Google Scholar]
- 92.Overgaard, C. S. B. & Woolley, S. How social media platforms can reduce polarization. Brookings. https://www.brookings.edu/articles/how-social-media-platforms-can-reduce-polarization/. (2022).
- 93.Rosen, G. Investments to fight polarization. Meta. https://about.fb.com/news/2020/05/investments-to-fight-polarization/ (2020).
- 94.Guess, A. M. et al. How do social media feed algorithms affect attitudes and behavior in an election campaign? Science381, 398–404 (2023). [DOI] [PubMed] [Google Scholar]
- 95.Gerlich, M. Perceptions and acceptance of artificial intelligence: A multi-dimensional study. Soc. Sci.12, 502 (2023). [Google Scholar]
- 96.Stein, J. P., Messingschlager, T., Gnambs, T., Hutmacher, F. & Appel, M. Attitudes towards AI: Measurement and associations with personality. Sci. Rep.14, 2909 (2024). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 97.Chaiken, S. Heuristic versus systematic information processing and the use of source versus message cues in persuasion. J. Pers. Soc. Psychol.39, 752–766 (1980). [Google Scholar]
- 98.Roskos-Ewoldsen, D. R. & Fazio, R. H. The accessibility of source likability as a determinant of persuasion. Personal Soc. Psychol. Bull.18, 19–25 (1992). [DOI] [PubMed] [Google Scholar]
- 99.Byrne, D. An overview (and underview) of research and theory within the attraction paradigm. J. Soc. Pers. Relat.14, 417–431 (1997). [Google Scholar]
- 100.Montoya, R. M. & Horton, R. S. A meta-analytic investigation of the processes underlying the similarity-attraction effect. J. Soc. Pers. Relat.30, 64–94 (2013). [Google Scholar]
- 101.Hussein, M. A. & Wheeler, S. C. Reputational costs of receptiveness: When and why being receptive to opposing political views backfires. J. Exp. Psychol. Gen.153, 1425–1448 (2024). [DOI] [PubMed]
Data Availability Statement
All data and code are publicly available for peer review: https://osf.io/smb24/?view_only=4cea014c9e024f95820c96d0b6ce811a.