2022 Mar 31;100(1):145–171. doi: 10.1177/10776990221084606

How Misinformation and Rebuttals in Online Comments Affect People’s Intention to Receive COVID-19 Vaccines: The Roles of Psychological Reactance and Misperceptions

Yanqing Sun 1, Fangcao Lu 2
PMCID: PMC9936178  PMID: 36814707

Abstract

This study investigated how exposure to negative and misleading online comments on persuasive messages promoting COVID-19 vaccination, and the ensuing corrective rebuttals of these comments, affected people’s attitudes and intentions regarding vaccination. An online experiment was performed with 344 adults in the United States. The results showed that rebuttals by the Centers for Disease Control and Prevention, rather than those by social media users, indirectly increased people’s willingness to receive the vaccine by reducing their psychological reactance to the persuasive messages and their belief in the misinformation contained in the comments. Rebuttals by social media users became more effective in reducing reactance when people initially had stronger pro-vaccination attitudes.

Keywords: COVID-19 vaccination, psychological reactance, misperception, online comments, rebuttal, correction


In recent years, public health agencies and practitioners have turned to social media to engage with audiences through the creation and exchange of user-generated content regarding health promotion messages (Bhattacharya et al., 2017; H. Y. Kim et al., 2021; Mendoza-Herrera et al., 2020). For instance, engagement is a key component of the social media strategy of the Centers for Disease Control and Prevention (CDC); audiences are encouraged to interact with the agency and communicate with each other about public health issues on platforms such as Facebook, Twitter, and YouTube (Heldman et al., 2013). Such engagement is achieved mainly through user comments, a key feature of social media platforms that allows users to publish and exchange information and opinions (H. Y. Kim et al., 2021). Compared with other engagement channels on social media, such as rating systems and “like” buttons, comments carry richer information about people’s opinions (Shi et al., 2014).

However, the opinions shared in the comments sections of social media may challenge public health campaigns, as comments may question or deny the validity of persuasive messaging and thereby evoke audiences’ negative cognitions and anger regarding the information shared (Shi et al., 2014; Walther et al., 2010). Furthermore, lacking effective gatekeeper mechanisms, online comments sections often become forums for misinformation (H. Y. Kim et al., 2021); for example, they can facilitate the spread of anti-vaccine sentiment and falsehoods (Hoffman et al., 2019; Smith & Graham, 2019). With the growing prominence of online comments in today’s information environment (Shi et al., 2014), it is necessary to understand how such comments undermine the effectiveness of public health promotion campaigns and how to mitigate their detrimental effects.

To date, two studies (H. Y. Kim et al., 2021; Li & Sundar, 2021) have addressed this topic. Both proposed psychological reactance as one of the mechanisms by which negative comments obstruct compliance with health promotion messages. According to psychological reactance theory (PRT; J. W. Brehm, 1966; S. S. Brehm & Brehm, 1981), any attempt at persuasion that threatens people’s freedom of choice is likely to induce psychological reactance, that is, people are motivated to resist persuasion and regain the threatened freedom. Given that psychological reactance is composed of negative cognitions and anger (Dillard & Shen, 2005), which usually arise from exposure to negative comments (Shi et al., 2014; Walther et al., 2010), it is likely that negative comments trigger psychological reactance toward persuasive messages.

Unlike these two studies, which focused on how the interaction between comment valence and prior attitudes affects people’s reactance (H. Y. Kim et al., 2021) or how bandwagon perceptions of supportive comments affect psychological reactance (Li & Sundar, 2021), our study aimed to investigate how negative and misleading comments on persuasive messages affect people’s psychological reactance. Notably, nearly 60% of Americans expect experts to respond to online comments (Stroud et al., 2016). Thus, we examined whether corrective responses from expert institutions and knowledgeable social media users to negative and misleading comments could mitigate people’s psychological reactance to promotional messages and alter their belief in the misinformation contained in the comments. Subsequently, we investigated whether corrections to such comments affected people’s health attitudes and intentions by reducing their reactance and misperceptions. We also examined the potential moderating effect of people’s preexisting attitudes. To this end, we performed an online experiment with participants from the United States. The COVID-19 vaccination campaign was chosen as the research context, given its significance in fighting the pandemic.

Overall, the findings of this study contribute to both theory and practice. By integrating research on psychological reactance and misinformation, the study offers theoretical insights into these two communication phenomena and their relationship. In addition, the study deepens our understanding of the mechanisms by which negative and misleading comments in response to online health promotion messages affect people’s health attitudes and intentions. The results may help health agencies and practitioners to design social media strategies that better promote healthful attitudes and behaviors.

The Concept of Psychological Reactance

PRT is based on the assumption that human beings have a need for freedom in making their own choices (J. W. Brehm, 1966; S. S. Brehm & Brehm, 1981). When this freedom is threatened or eliminated, people experience psychological reactance and consequently make efforts to restore their freedom—either directly, by engaging in behaviors contrary to those advocated by the persuasive messages (Dillard & Shen, 2005; Song et al., 2018), or indirectly, by ignoring the messages (Youn & Kim, 2019). Although the creators of PRT stated that psychological reactance cannot be directly measured (S. S. Brehm & Brehm, 1981), Dillard and Shen (2005) proposed conceptualizing it as a combination of negative cognitions and anger, which are interwoven and exert a joint effect on persuasion. Studies have supported this model in various contexts (see a review by Rains, 2013), including sunscreen promotion (Quick & Stephenson, 2008) and diabetes prevention campaigns (Gardner & Leshner, 2016).

Persuasive communication may be seen as a threat to freedom because the recommended actions or changes can threaten one’s autonomy, leading to reactance rather than acceptance (Dillard & Shen, 2005). Therefore, health promotion messages that explicitly promote certain attitudes or behaviors may arouse reactance, such as persuasive messages that oppose smoking and drunk driving (Shen, 2010) or promote exercise and sunscreen use (Quick & Stephenson, 2008).

As reactance may undermine health-related persuasive campaigns to the detriment of people’s health attitudes and intentions (Reynolds-Tylus, 2019), scholars have examined the factors affecting reactance. In particular, the influence of message characteristics on reactance has attracted most attention. Scholars have found that messages containing controlling language (Bessarabova et al., 2013; Gardner & Leshner, 2016), major requests (Rains & Turner, 2007), weak arguments, and personal insults (S.-Y. Kim et al., 2017) can induce reactance. In contrast, using autonomy-supporting postscripts (Miller et al., 2007; X. Zhang, 2020), eliciting empathy (Shen, 2010), using narratives and other referencing strategies (Gardner & Leshner, 2016), and emphasizing similarity to the message source (Silvia, 2005) can dissipate or circumvent psychological reactance.

However, in the age of the internet and digitalized media, the ways in which messages are transmitted and received have undergone tremendous changes. Such changes include a decrease in the number of traditional mass media channels, an increase in online information sources and content disseminated through social media, and the rise of user-generated content, such as comments (Gunther, 2017). In this environment, established communication phenomena need to be reconsidered (Gunther, 2017), and psychological reactance to persuasion is no exception. It is important to examine whether the technological affordances and features of these online media, such as the commenting system, can affect people’s reactance to persuasive messages disseminated through online media channels (Li & Sundar, 2021).

Online Comments and Reactance to Persuasive Messages

In the Web 2.0 era, in which the internet serves as a platform that not only provides information but also enables participation, interaction, the exchange of user-generated content, community building, and rich user experiences (Fuchs, 2011; O’Reilly, 2005), many health campaign messages have moved online and are disseminated through social media (Mendoza-Herrera et al., 2020; Neiger et al., 2012; Park et al., 2011). For example, in the last few years, the CDC’s use of Facebook, YouTube, Twitter, and other social media tools to disseminate health information has increased significantly (CDC, 2015). Audiences have also changed from passive information receivers to active participants who can interact with campaign messages by commenting on them. These comments may influence the wider online communities’ perceptions of and attitudes toward persuasive messages by triggering a mental shortcut called the “bandwagon effect,” or the rule of thumb that “if others think that this message is good, then I should think so too” (Lee et al., 2022; Sundar, 2008). Therefore, when people find opposing comments on a persuasive message, they may “jump on the bandwagon” and regard the message as unreliable because others object to it. Consequently, people are more likely to have negative perceptions of the message, to believe that it poses an unreasonable threat to their freedom (Li & Sundar, 2021), and to resist it (Lee et al., 2022).

There is preliminary support for the link between negative comments and reactance to persuasive messages. Studies (Shi et al., 2014; Walther et al., 2010) have shown that compared with positive comments, negative comments on a public service announcement (PSA) YouTube video led audiences to generate more negative thoughts on the video, making people perceive the video as less convincing, important, and persuasive. Negative comments also elicited negative emotions such as anger toward the PSA (Shi et al., 2014). Given that reactance is a mixture of anger and negative cognitions (Dillard & Shen, 2005), negative comments challenging a persuasive message and inducing anger toward and negative thoughts about it may also induce reactance to it. More relevantly, Li and Sundar (2021) found that multiple supportive comments on an anti-drinking PSA video triggered bandwagon perceptions that other people liked the video, thereby reducing the perceived threat to freedom and reactance to the video. Conversely, negative comments about persuasive messages may also trigger the bandwagon effect, leading people to believe that others oppose the messages and thus triggering greater reactance.

Rebuttals of Negative Comments, Sources, and Reactance

Given that campaign messages may be undermined by negative and misleading comments evoking psychological reactance in audiences, it is important to explore how to mitigate, or even eliminate, the deleterious effects of such comments. Although negative comments can be removed and comments sections can be shut down entirely, such practices may undermine the internet’s capacity to promote citizen engagement and free speech (Sherrick & Hoewe, 2018; Wright, 2006; Ziegele & Jost, 2020). A viable alternative strategy might be to respond to and refute these comments by making use of credible sources, because source credibility is an important predictor of successful persuasion (Pornpitakpan, 2004). It is possible that rebuttals of negative and misleading comments from reliable sources may offset the harmful impact of such comments on reactance to advocacy messages, and increase the likelihood of people’s accepting the messages.

The concept of source credibility originated from Aristotle’s notion of ethos, which is deemed the most effective and powerful approach to persuasion (McCroskey & Teven, 1999). Source credibility is defined as “judgments made by a perceiver concerning the believability of a communicator” (O’Keefe, 1990, pp. 130–131). According to Hovland, Janis, and Kelley (1953), source credibility consists of two main components: expertise and trustworthiness. “Expertise” refers to the perception that the message sender is qualified to make correct assertions; and “trustworthiness” refers to the perception that the message sender is motivated to tell the truth. A credible message source plays a crucial role in increasing acceptance of a message (Hocevar et al., 2017), making it more likely to change receivers’ attitudes or behaviors (Pornpitakpan, 2004).

In other words, sources with high credibility can reduce people’s tendency to resist persuasive messages. Q. Zhang and Sapp (2013) reported that when college students were asked by a professor to provide evidence to support their illness-related absences, they perceived less threat to their freedom and experienced less reactance if they considered their professor to be more credible (e.g., competent, caring, and trustworthy). Similarly, greater trust in the source of a policy message has been found to reduce reactance to the message (Song et al., 2018).

In the context of health and risk information, expert organizations such as the CDC are perceived as credible because they have relevant expertise and are trusted by the U.S. population (Pew Research Center, 2020). Therefore, during the pandemic, an expert organization’s rebuttals of negative and misleading comments about health campaign messages may offset any detrimental effect of the comments and reduce people’s reactance to the persuasive messages endorsed by the organization. In addition, endorsement by an expert organization may increase the persuasiveness of campaign messages, which can further mitigate people’s reactance (Li & Sundar, 2021). We thus propose the following hypothesis:

  • H1a: Participants who view an expert organization’s rebuttals of misleading comments on messages promoting vaccination report less psychological reactance to the messages than those who do not read the organization’s rebuttals.

In addition to responses from expert organizations such as the CDC, social media users can take the initiative to refute misleading comments. Opinions provided by laypeople may be considered trustworthy because such commenters resemble the audience members themselves. This perceived similarity to the message source has been found to increase trust in the message (Song et al., 2018) and decrease perceptions of threats to freedom, further reducing reactance (Silvia, 2005; Song et al., 2018). Although laypeople may not possess extensive medical expertise or knowledge, when they cite reliable sources such as the CDC to corroborate their rebuttals of misleading comments, their responses may reflect expertise and thereby increase the persuasiveness of the campaign messages. As a result, their rebuttals may reduce people’s reactance to the messages. Based on this, we propose the following hypothesis:

  • H1b: Participants who view other social media users’ rebuttals of misleading comments on messages promoting vaccination report less psychological reactance to the messages than those who do not view users’ rebuttals.

Previous studies have consistently shown that greater psychological reactance to persuasive messages corresponds to less positive attitudes toward the behavior the messages advocate (Dillard & Shen, 2005; Rains, 2013; X. Zhang, 2020). Recently, scholars (S.-Y. Kim et al., 2013, 2017) have suggested observing attitude changes when examining the outcome of reactance, rather than simply focusing on attitudes after exposure to persuasive messages. This is because PRT “considers the manner in which one’s initial attitude changes, particularly in the unintended direction, after receiving a forceful message” (S.-Y. Kim et al., 2017, p. 933). Greater reactance leads to less movement of an attitude toward what is recommended in an advocacy message (S.-Y. Kim et al., 2013, 2017); conversely, less reactance prompts a greater shift in the intended direction. In the current context, this means that people’s attitudes toward COVID-19 vaccination are likely to become more positive than they were initially (i.e., more positive attitude change) when people experience less reactance. We thus propose the following hypothesis:

  • H2: Less psychological reactance to pro-vaccination messages causes a more positive attitude change toward vaccination.

In addition, given that less reactance predicts a more positive attitude toward what is being advocated (Rains & Turner, 2007; X. Zhang, 2020) and attitudes have been widely acknowledged as powerful predictors of behavioral intentions (Dillard & Shen, 2005; Rains & Turner, 2007), we expect that a reduction in psychological reactance will increase the intention to get vaccinated (H. Y. Kim et al., 2021). Thus, we propose the following hypothesis:

  • H3: Psychological reactance to pro-vaccination messages is negatively related to the intention to receive the vaccine.

Furthermore, accepting pro-vaccination campaign information entails rejecting claims that contradict it: the less the psychological reactance to pro-vaccination information (i.e., the greater the acceptance of pro-vaccination campaign information), the greater the rejection of misinformation about vaccines. Therefore, we propose the following hypothesis:

  • H4: Less psychological reactance to pro-vaccination messages reduces belief in the misinformation contained in negative comments about the messages.

Rebuttals of Negative Comments and Misperceptions

Besides reducing people’s psychological reactance to persuasive messages, rebuttals of misleading comments by expert organizations or social media users may also reduce people’s belief in the related misinformation conveyed by the misleading comments. Corrections by expert organizations such as the CDC have been found to reduce people’s misperceptions regarding the Zika virus (Vraga & Bode, 2017), flu vaccines (Nyhan & Reifler, 2015), and the vaccine–autism link (Nyhan et al., 2014). Likewise, rebuttals by social media users were also found to reduce people’s belief in Zika virus misinformation when they cited reliable resources (Bode & Vraga, 2018; Vraga & Bode, 2018). Thus, we propose the following hypothesis:

  • H5a and b: Rebuttals by (a) an expert organization or (b) other social media users reduce people’s belief in the misinformation contained in negative comments about pro-vaccination messages.

Moreover, belief in anti-vaccination misinformation has been shown to have a negative impact on people’s attitudes toward vaccination and intention to get vaccinated (Jolley & Douglas, 2014). Thus, reduced belief in the misinformation contained in negative comments (i.e., misperception) may lead to more positive attitudes toward vaccination and make people more likely to get vaccinated. Thus, we propose the following hypotheses:

  • H6: Weaker misperceptions regarding vaccination cause a more positive attitude change toward vaccination.

  • H7: Weaker misperceptions regarding vaccination increase the intention to get vaccinated.

The Moderating Role of Initial Attitudes

Research has suggested that initial attitudes toward an issue powerfully influence people’s subsequent information selection, processing, and acceptance, because initial attitudes can prompt directionally motivated reasoning, whereby information that supports prior beliefs is privileged and preferred (Kunda, 1990; Nir, 2011). Driven by directionally motivated reasoning, people argue against information that is discordant with their prior attitudes, and perceive counter-attitudinal information more unfavorably than attitude-congruent information (Flynn et al., 2017; Taber & Lodge, 2006). For instance, when people hold a strong initial belief in certain misinformation, the effectiveness of fact checking is greatly reduced, because these people may resist corrective messages challenging their prior attitudes (Walter et al., 2020). In addition, processing counter-attitudinal information generates more anger among people (Arpan & Nabi, 2011). To make matters worse, fact checking can backfire among partisans with strong beliefs; that is, partisans’ misperceptions are strengthened rather than diminished by a correction (Nyhan & Reifler, 2010; Nyhan et al., 2013).

Considering that counterarguments and anger are indicators of reactance, initial attitudes toward COVID-19 vaccination may influence people’s reactance to pro-vaccination messages. Indeed, H. Y. Kim et al. (2021) found that the greater the inconsistency between people’s prior attitudes toward influenza vaccination and the valence of Facebook comments on flu vaccination messages (e.g., a prior attitude in favor of vaccination and anti-vaccination comments are considered inconsistent in the direction of the attitude toward vaccination), the greater the reactance of people to these comments. That is, people who have a more positive attitude toward flu vaccination have greater reactance to anti-vaccination comments, and people who have a more negative attitude toward flu vaccination have greater reactance to pro-vaccination comments. Similarly, for people with an initially more positive attitude toward COVID-19 vaccination, rebuttals of negative comments on the vaccination may produce more support for and thus less reactance to pro-vaccination campaign messages. Among those with a more unfavorable attitude, however, rebuttals may lead to greater reactance to the campaign messages. Based on this, we propose the following hypothesis:

  • H8a and b: Initial attitudes toward vaccination will moderate the impact of rebuttals of negative comments on people’s (a) psychological reactance to persuasive messages and (b) misperceptions, such that positive (or negative) preexisting attitudes toward vaccination will increase (or reduce) the efficacy of the rebuttals in reducing reactance and misperceptions.

Method

This study employed a two (COVID-19 vaccination promotion message 1 vs. message 2) by two (anti-vaccination comments section 1 vs. comments section 2) by three (rebuttals of negative and misleading comments by social media users vs. rebuttals of negative and misleading comments by an expert organization vs. no rebuttals) between-subjects experimental design. The effect of rebuttals on people’s reactance and misperceptions was the question of interest. The two promotional messages and two comments sections were created to enhance external validity and avoid case-category confounding issues due to the use of a single stimulus (Jackson, 1992). During the experiment, one of the two messages and one of the two comments sections were randomly assigned to the participants. (Please see the Supplemental Material for details of the stimuli.)

Participants from the United States were recruited using Amazon’s MTurk platform in January 2021. MTurk has been shown to facilitate representative and diverse sampling (Mason & Suri, 2012). MTurk data have been found to be comparable in quality to data from social media surveys or interview surveys of college students (Casler et al., 2013; Kees et al., 2017) and have performed better than data collected by professional research companies (Kees et al., 2017). To improve the data quality, only MTurk workers with a high reputation (with a 95% task approval rating and more than 100 approved tasks) were eligible to participate in the experiment. In addition, to ensure that the participants paid full attention to the survey questions, we included three attention checks in different parts of the survey (e.g., asking the participants to choose a specific answer from the options provided). Participants who failed the attention checks (n = 208) were automatically excluded from the experiment. To further check whether the participants had paid attention to the source of rebuttals, the participants in the rebuttal conditions were asked whether the replies to comments had been posted by social media users or by the CDC. If the participants who had read user rebuttals answered that they had viewed the CDC rebuttals, or if the participants who had read the CDC’s rebuttals answered that they had viewed user rebuttals, they were deemed to have failed the check. The online survey program automatically removed the 97 participants who answered incorrectly. This check was performed after the questions measuring the variables of interest had been asked, to avoid any confounding effects. In this way, an initial sample of 360 participants was obtained.

However, a preliminary test of the statistical assumptions of the data (e.g., normality, homoscedasticity of residuals, independence of errors, and multicollinearity) led to the removal of 16 cases whose standardized regression residuals exceeded 3.29, indicating extreme outliers (Field, 2018). Thus, the final sample consisted of 344 participants. An assessment of the model items in AMOS 26.0 confirmed that the sample was normally distributed at both the univariate and multivariate levels (Byrne, 2010; Kline, 2015).1
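The outlier screen described above can be sketched as follows. This is a minimal illustration, not the authors' actual procedure: deviations from the sample mean stand in for regression residuals, and the data are synthetic; only the 3.29 cutoff comes from the text (Field, 2018).

```python
from statistics import mean, stdev

def drop_extreme_cases(values, cutoff=3.29):
    """Remove cases whose standardized residuals exceed the cutoff.

    Illustration only: residuals are taken as deviations from the sample
    mean (a stand-in for regression residuals), then divided by their
    standard deviation to standardize them.
    """
    center = mean(values)
    residuals = [v - center for v in values]
    sd = stdev(residuals)
    return [v for v, r in zip(values, residuals) if abs(r / sd) <= cutoff]

# Synthetic example: 20 typical cases plus one extreme case.
scores = [0.0] * 20 + [12.0]
kept = drop_extreme_cases(scores)
# The extreme case's |z| exceeds 3.29, so it is removed; the rest are kept.
```

With real data, the residuals would come from the fitted regression model rather than from the mean, but the standardization and filtering steps are the same.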

The participants in the study were on average 38.75 years old (median = 36 years, range = 19–78 years, SD = 11.23), and there were more male (57.6%) than female participants. Regarding education level, 5.5% of the participants had completed high school, 2.3% had received vocational or technical school education, 19.5% had received some college education, and 72.7% were college graduates or postgraduates. Their median annual income was in the range of US$35,000 to US$49,999. In terms of ethnicity, most were White (73%), followed by African American (11.6%), Asian (10.2%), and Hispanic (2.9%). Approximately 23.5% claimed to be Republicans, 57.6% were Democrats, and 18.9% were politically neutral.

Procedure

This study received ethical approval from the Institutional Review Board of City University of Hong Kong. After consenting to participate in the study, the participants were randomly assigned to one of the 12 experimental conditions. They were first asked to answer questions about their prior attitudes toward the COVID-19 vaccines. They were then asked to read one of two promotional messages about COVID-19 vaccination embedded in a public Facebook page. After reading the post, one of the six comments sections was displayed. All comments sections contained two negative and misleading user-generated comments on the promotional message that the participants had read. The participants in the control condition saw only the two comments, whereas the participants in the user rebuttal condition saw that each of the two comments had been refuted by two social media users. Those in the expert organization condition saw that each of the two comments had been rebutted by the CDC (see the Supplemental Material for the stimuli). Immediately after exposure to the stimuli (please see Footnote 2 for details on creating the stimuli), the participants were asked to complete a questionnaire that surveyed the variables of interest and demographic information.

After completing the survey, the participants were debriefed and thanked for their responses. They were informed that the comments, but not the rebuttals, contained misleading information. They were asked to seek reliable COVID-19 vaccination-related information from the CDC or World Health Organization. Each participant received US$.80 as compensation after completing the survey.

Measures

Change in attitude toward COVID-19 vaccination

The participants’ prior attitudes toward COVID-19 vaccination were measured using 7-point semantic differential word pairs adapted from Dillard and Shen (2005), including bad/good, foolish/wise, unfavorable/favorable, and so on. These items were combined into an index, with higher scores signifying more positive attitudes toward COVID-19 vaccination (i.e., prior attitudes: α = .968, M = 5.59, SD = 1.40).3 The same measures were used after the participants had been exposed to the stimuli (i.e., post-attitudes: α = .973, M = 5.63, SD = 1.37). The magnitude of the attitude change was calculated by subtracting prior attitude scores from post-attitude scores (ranging from −1 to 1.29, M = 0.04, SD = 0.37).4 Approximately 32.6% of the participants’ attitudes toward COVID-19 vaccination became more positive than their initial attitudes (i.e., the value of the attitude change was positive), 42.4% of the participants had no attitude shift (i.e., the value was equal to zero), and 25% of the participants’ attitudes became more negative than their initial attitudes (i.e., the value was negative).
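The attitude-change computation above can be sketched as follows. The participant data are hypothetical; the steps mirror the description: average the semantic-differential items into a prior index and a post index, subtract, and classify the direction of the shift.

```python
def attitude_index(item_scores):
    """Average 7-point semantic-differential items into one index."""
    return sum(item_scores) / len(item_scores)

def attitude_change(prior_items, post_items):
    """Post-exposure index minus prior index; positive = more favorable."""
    return attitude_index(post_items) - attitude_index(prior_items)

def classify(change, tol=1e-9):
    """Direction of the attitude shift, as reported in the text."""
    if change > tol:
        return "more positive"
    if change < -tol:
        return "more negative"
    return "no shift"

# Hypothetical participant: three word-pair items (e.g., bad/good,
# foolish/wise, unfavorable/favorable) measured before and after exposure.
prior = [5, 6, 5]
post = [6, 6, 6]
change = attitude_change(prior, post)  # 6.0 - 16/3, i.e., about +0.67
```

Aggregating the classified shifts across participants yields the proportions reported above (more positive, no shift, more negative).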

Reactance to COVID-19 vaccination promotion message

Because studies have shown that psychological reactance is best conceptualized as a combination of anger and negative cognitions, eight items measuring anger and negative cognitions were used as indicators of reactance.5 However, the reliability test showed that deleting one of the items for negative cognitions would increase Cronbach’s α from .74 to .79; thus, scores for the other seven items were averaged to form an index of reactance to the promotional message (α = .79, M = 2.95, SD = 1.14). A higher score indicated greater reactance.
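The item-drop decision above (removing one negative-cognition item raised α from .74 to .79) rests on the standard Cronbach's alpha formula. A minimal sketch with synthetic data, in which dropping an inconsistent item raises the scale's reliability:

```python
def sample_variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """Cronbach's alpha; `items` holds one list of respondent scores per item."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return k / (k - 1) * (1 - sum(sample_variance(it) for it in items)
                          / sample_variance(totals))

# Synthetic scale: three perfectly consistent items plus one inconsistent
# (reverse-patterned) item that drags reliability down.
consistent = [1, 2, 3, 4, 5]
bad_item = [5, 4, 3, 2, 1]
items = [consistent, consistent, consistent, bad_item]

alpha_full = cronbach_alpha(items)         # low: the bad item cancels variance
alpha_dropped = cronbach_alpha(items[:3])  # higher after dropping the bad item
```

In practice one computes alpha with each item deleted in turn and drops an item only when the gain is substantial, as the authors did here.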

Belief in misinformation reflected in the comments

We prepared four statements that corresponded to the misinformation mentioned in the four negative comments on COVID-19 vaccination. The four statements included “If you have already had COVID-19 and have recovered, you don’t need to get the COVID-19 vaccine”; “After getting the COVID-19 vaccine, people will test positive for COVID-19 on a viral test”; “The COVID-19 vaccination is unnecessary and meaningless because even if you have been vaccinated, you still need to wear a mask and avoid close contact with others”; and “The COVID-19 vaccine has serious side effects.” Given that each participant saw only two negative and misleading comments, the participants were asked to indicate the extent to which they believed the two statements corresponding to the two negative comments they saw. The responses were recorded on a 5-point scale, where 1 signified “Don’t believe it at all,” and 5 signified “Completely believe it.” The participants’ responses were averaged to form an index for belief in the misinformation. Cronbach’s α for the belief of the participants who saw the first two misinformative statements was .80 (n = 171, M = 2.55, SD = 1.22) and that for the belief of the participants who saw the last two misinformative statements was .73 (n = 173, M = 2.45, SD = 1.07). We compiled the belief values for the two groups of participants and created the variable “belief in misinformation” (M = 2.50, SD = 1.15).

Intention to receive COVID-19 vaccination

The participants were asked how likely they would be to take up the COVID-19 vaccine if they had access to it. The responses were recorded on a 7-point scale, where 1 signified “Very unlikely,” and 7 signified “Very likely” (M = 5.57, SD = 1.65).

Control variables

We measured and controlled for the participants’ age, gender, income, education level, political ideology, frequency of social media use, perceptions of the authenticity of the comments, and exposure to the misinformation in the social media comments.

Results

Preliminary Analyses

To examine whether the randomization was successful, a series of analyses of variance (ANOVAs) was conducted. The results showed that across the 12 experimental conditions,6 there were no significant differences in the participants’ demographics, including age, education, income, and political party identification, F(11, 332) = 0.41, 0.49, 1.38, and 1.00; p = .95, .91, .18, and .45, respectively. Two chi-square analyses were also conducted, confirming that there were no significant differences in the participants’ gender and race (those who identified as White were coded as 1 and others as 0) across conditions, χ²(11, N = 344) = 9.34 and 9.13; p = .59 and .61, respectively. Thus, the randomization was considered successful.
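A randomization check of this kind amounts to a one-way ANOVA per continuous covariate and a chi-square test of independence per categorical covariate across conditions. A minimal sketch with simulated data (the group sizes and counts below are hypothetical, not the study's):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# One-way ANOVA: e.g., age across 12 experimental conditions
age_by_condition = [rng.normal(45, 12, size=28) for _ in range(12)]
f_stat, p_anova = stats.f_oneway(*age_by_condition)

# Chi-square test of independence: e.g., gender counts across conditions
counts = np.array([[15, 13],   # condition 1: men, women
                   [14, 14],   # condition 2
                   [16, 12]])  # condition 3 (remaining rows omitted for brevity)
chi2, p_chi2, dof, expected = stats.chi2_contingency(counts)

# Nonsignificant p-values (> .05) indicate the covariate is balanced
# across conditions, i.e., the randomization succeeded for that variable.
```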

Two multigroup analyses were performed in AMOS 26.0 to examine whether the different COVID-19 vaccination promotion messages and comments sections influenced the relationships among the variables in the proposed model. The results indicated no significant interaction involving the messages (Δχ²[Δdf = 9] = 4.75, p = .86) or the comments sections (Δχ²[Δdf = 9] = 9.33, p = .41). In other words, the different promotional messages and comments sections did not affect the relationships among the variables in the model, so the data were collapsed across the two messages and comments sections. As a result, three groups were created: a group exposed to rebuttals by social media users (n = 109), a group exposed to rebuttals by the CDC (n = 126), and a group exposed to no rebuttals (n = 109).

Hypothesis Testing

Regarding H1, an ANOVA showed that the source of the rebuttals had a significant effect on reactance to the promotional messages, F(2, 341) = 4.41, p = .01, η² = .03. Pairwise comparisons revealed that the participants who viewed rebuttals by the CDC reported significantly less reactance to the messages (M = 2.73, SD = 1.09) than did those who saw no rebuttals (M = 3.15, SD = 1.16), p = .01. Although those who saw rebuttals by users reported less reactance (M = 3.01, SD = 1.13) than did those who saw no rebuttals, the difference was not significant (p = .64). Furthermore, no significant difference was found between the user rebuttal group and the CDC rebuttal group (p = .13). Therefore, H1a was supported, whereas H1b was not.

In addition, there were no significant differences in belief in the misinformation contained in the comments between the group with no rebuttals (M = 2.57, SD = 1.05), the group with the CDC rebuttals (M = 2.40, SD = 1.25), and the group with user rebuttals (M = 2.54, SD = 1.11), F(2, 341) = 0.70, p = .50. This indicated that exposure to rebuttals by the CDC or social media users did not affect people’s belief in the misleading information about the COVID-19 vaccines compared with exposure to no rebuttals. Thus, neither H5a nor H5b was supported.

To test the remaining hypotheses, we performed a path analysis with six variables using AMOS 26.0. The two exogenous variables were dummy variables representing the source of the rebuttals: taking the group without rebuttals as the reference group, we created variables for the group with user rebuttals and the group with CDC rebuttals. The four endogenous variables were psychological reactance; belief in the misleading information contained in the negative comments on COVID-19 vaccination (i.e., misperception); attitude change toward the COVID-19 vaccines after exposure to the stimuli; and intention to receive a COVID-19 vaccine (see Table 1 for the correlation matrix). The eight control variables were all measured and controlled for in the model.7 The model-fit indices were as follows: χ²(df = 5) = 7.39, p = .19, comparative fit index (CFI) = .986, Tucker–Lewis index (TLI) = .959, root mean square error of approximation (RMSEA) = .037, and standardized root mean square residual (SRMR) = .026. According to the criteria proposed by Hu and Bentler (1999), a model fits well if its CFI and TLI are ≥.95, its RMSEA is ≤.06, and its SRMR is ≤.08. Thus, the indices demonstrated that the model fit the data well.

Table 1.

Zero-Order Correlations Between the Exogenous and Endogenous Variables Used in the Structural Equation Modeling Analysis (N = 344).

Variables             1         2        3         4         5      6
1. User’s rebuttal    1.00
2. CDC’s rebuttal     −.52***   1.00
3. Reactance          .04       −.15**   1.00
4. Misperception      .03       −.06     .23***    1.00
5. Attitude change    −.04      .12*     −.06      .03       1.00
6. Intention          −.10      .05      −.24***   −.32***   −.04   1.00

Note. CDC = Centers for Disease Control and Prevention.

†p < .10. *p < .05. **p < .01. ***p < .001.

The results showed that compared with exposure to no rebuttals, exposure to rebuttals by the CDC generated less reactance to the promotional messages about COVID-19 vaccination (β = −.15, p = .02). However, exposure to rebuttals from users did not have a significant impact on people’s reactance to the promotional messages (β = −.05, p = .44). Moreover, reactance was not significantly related to attitude change toward vaccination (β = −.07, p = .22); thus, H2 was not supported. Less reactance led to a stronger intention to receive the vaccination (β = −.20, p < .001), thereby supporting H3.

Consistent with H4, reactance was positively related to misperceptions (β = .16, p = .003). However, contrary to H5, neither rebuttals by the CDC (β = −.05, p = .46) nor rebuttals by users (β = .06, p = .31) significantly affected people’s misperceptions. Thus, H4 was supported, whereas H5a and H5b were not.

Moreover, inconsistent with H6, misperception was not significantly related to attitude change toward the COVID-19 vaccines (β = .01, p = .93). However, as H7 anticipated, less misperception corresponded to greater intention to get vaccinated (β = −.30, p < .001). Therefore, H6 was not supported, whereas H7 was supported.

The research model accounted for 1% of the variance in attitude change and 15% of the variance in intentions to get the COVID-19 vaccines. We also examined the indirect effects involved in the model illustrated in Figure 1. Regarding the three-path mediation effect, the Sobel test (Howard, n.d.; Taylor et al., 2008) showed that reactance and misperceptions did not significantly mediate the effect of the CDC’s rebuttals on the intention to get vaccinated (z-score = 1.79, effect = .01, SE = .003, p = .07). Concerning the two-path mediation effect, we performed product-of-coefficient tests using the PRODCLIN program, and calculated the confidence intervals (CIs) of the mediation effects (MacKinnon et al., 2007). The results showed that reactance significantly mediated the effect of the CDC’s rebuttals on misperceptions (effect = −.02, 95% CI [−.051, −.003]) and intentions (effect = .03, 95% CI [.005, .058]). In addition, misperceptions significantly mediated the effect of reactance on intentions (effect = −.05, 95% CI [−.087, −.016]).
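The Sobel statistic used for the three-path mediation test divides the indirect effect ab by its approximate (first-order) standard error. A minimal implementation; the coefficient values in the test are illustrative, not the study's estimates.

```python
import math

def sobel_test(a, se_a, b, se_b):
    """Sobel z-test for an indirect effect a*b (first-order standard error)."""
    indirect = a * b
    se = math.sqrt(a**2 * se_b**2 + b**2 * se_a**2)
    z = indirect / se
    # Two-tailed p-value from the standard normal distribution
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return indirect, z, p
```

For products of coefficients, asymmetric confidence intervals (e.g., the PRODCLIN approach the authors used for the two-path effects) or bootstrapping are generally preferred to the Sobel test, whose normal-theory standard error is often too conservative.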

Figure 1.

Path analyses of the influence of rebuttals on people’s reactance, misperceptions, attitude change, and intentions.

Note. The method of maximum likelihood estimation was used. The correlations among the errors of prediction were fixed at zero. The coefficients are standardized. All solid-line arrows are significant at p < .05 or better. Dotted-line arrows are nonsignificant at p < .05. R2 values are reported in parentheses.

χ²(df = 5) = 7.39, p = .19, comparative fit index = .986, Tucker–Lewis index = .959, root mean square error of approximation = .037, and standardized root mean square residual = .026.

*p < .05; **p < .01; ***p < .001.

We tested H8 to determine whether initial attitudes toward the COVID-19 vaccines moderated the effect of rebuttals on people’s reactance to the messages and related misperceptions. To this end, we followed Vraga and Bode (2017) and used the PROCESS macro (Model 1) with 5,000 bootstrap samples and 95% bias-corrected CIs (Hayes, 2018) to test whether the comparison between each rebuttal condition and the control condition depended on initial attitudes toward COVID-19 vaccination, controlling for the eight control variables. The rebuttal type was dummy coded, and the no-rebuttal condition served as the reference group. The results showed that initial attitudes significantly moderated the relationship between users’ rebuttals and reactance (b = −.25, SE = .10, p = .02, 95% CI = [−.45, −.05]) but did not moderate the relationship between the CDC’s rebuttals and reactance (b = −.16, SE = .10, p = .11, 95% CI = [−.36, .03]). In addition, initial attitudes significantly moderated the relationship between the CDC’s rebuttals and misperceptions (b = −.22, SE = .08, p = .004, 95% CI = [−.38, −.07]) but did not moderate the relationship between users’ rebuttals and misperceptions (b = −.04, SE = .08, p = .62, 95% CI = [−.19, .11]).
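PROCESS Model 1 is, at its core, an OLS regression with an interaction term, followed by simple slopes evaluated at low, moderate, and high values of the moderator. A sketch with simulated data (the data-generating coefficients are hypothetical, chosen only to mimic the negative interaction reported for H8):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
rebuttal = rng.integers(0, 2, n).astype(float)   # 1 = user rebuttal, 0 = none
attitude = rng.normal(5.6, 1.4, n)               # initial attitude, 7-pt scale
# Hypothetical data-generating model with a negative interaction (as in H8)
reactance = (3.0 + 0.4 * rebuttal - 0.1 * attitude
             - 0.25 * rebuttal * attitude + rng.normal(0, 0.5, n))

# OLS with an interaction term (the core of PROCESS Model 1)
X = np.column_stack([np.ones(n), rebuttal, attitude, rebuttal * attitude])
b0, b_reb, b_att, b_int = np.linalg.lstsq(X, reactance, rcond=None)[0]

# Simple slopes: effect of the rebuttal at mean - 1 SD, mean, mean + 1 SD
levels = [attitude.mean() + k * attitude.std(ddof=1) for k in (-1, 0, 1)]
simple_effects = [b_reb + b_int * level for level in levels]
```

A negative interaction coefficient means the rebuttal reduces reactance more (or raises it less) as initial attitudes become more positive, which is the pattern shown in Figures 2 and 3.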

The pattern of the moderation effects was similar: compared with no rebuttals, each type of rebuttal was more effective in reducing psychological reactance or misperceptions among those with more positive initial attitudes toward COVID-19 vaccination (see Figures 2 and 3). This indicates that the greater the consistency between prior attitudes toward the COVID-19 vaccine and users’ rebuttals, the less the reactance; similarly, the greater the consistency between prior attitudes and the CDC’s rebuttals, the fewer the misperceptions. The results also showed that when people’s initial attitudes toward the vaccination were more negative than the moderate level (i.e., the mean of the initial attitudes; see Figures 2 and 3), exposure to users’ rebuttals resulted in higher reactance than exposure to no rebuttals (see Figure 2), and the CDC’s rebuttals produced more misperceptions than no rebuttals did (see Figure 3). Based on these results, H8a and H8b were partially supported.

Figure 2.

Predicting psychological reactance by initial attitudes toward COVID-19 vaccination for each rebuttal condition.

Note. The levels of initial attitudes toward COVID-19 vaccination are defined as follows: 1 SD below the mean is a low level of initial attitude (M = 4.20), the mean is a moderate level (M = 5.59), and 1 SD above the mean is a high level (M = 6.99), as measured on a 7-point scale, with higher scores indicating more positive initial attitudes.

Figure 3.

Predicting misperceptions by initial attitudes toward COVID-19 vaccination for each rebuttal condition.

Note. The levels of initial attitudes toward COVID-19 vaccination are defined as follows: 1 SD below the mean is a low level of initial attitude (M = 4.20), the mean is a moderate level (M = 5.59), and 1 SD above the mean is a high level (M = 6.99), as measured on a 7-point scale, with higher scores indicating more positive initial attitudes.

Finally, regarding the impact of the control variables on psychological reactance, misperceptions, attitude change, and intention to get the COVID-19 vaccine, we found that people who regarded the comments as less authentic (β = −.14, p = .01) and who had a higher level of exposure to the misinformation (β = .25, p < .001) were more likely to experience reactance. People who regarded the comments as more authentic (β = .18, p < .001) and who had a higher level of exposure to misinformation (β = .61, p < .001) were more likely to have misperceptions. In addition, people who used social media more often (β = .12, p = .03) were more likely to have positive attitude changes, whereas people who identified more strongly with the Democratic Party (β = .19, p < .001) had a greater intention to get vaccinated.

Discussion

This study investigated whether exposure to negative and misleading online comments on vaccination campaigns, and to corrective rebuttals of those comments, could affect people’s attitudes and intentions to get vaccinated through the mediation of psychological reactance to the messages and belief in the misinformation in the comments. The results showed that, compared with exposure to no rebuttals, exposure to rebuttals by an expert organization such as the CDC, rather than by social media users, can reduce people’s reactance to persuasive messages and their misperceptions, thereby increasing their intention to get vaccinated. Moreover, people’s initial attitudes toward the vaccination moderated these effects: users’ rebuttals were more effective in reducing reactance among those whose initial attitudes toward the vaccination were more positive, and the effect of the CDC’s rebuttals on reducing misperceptions likewise increased when people’s initial attitudes were more positive. The theoretical and practical implications of this study are presented below.

In line with the literature on source credibility, people who were exposed to the CDC’s rebuttals of negative and misleading comments appeared to perceive the rebuttals as reliable, and consequently experienced less reactance to the pro-COVID-19 vaccination messages than did those who saw no rebuttals. However, rebuttals by social media users did not significantly mitigate the reactance evoked by the negative and misleading comments on pro-vaccination messages. One reason may be that because the negative comments were also posted by social media users, audiences might have given equal weight to both opinions and been confused by the contradictory comments and rebuttals, making them less likely to be persuaded by the rebuttals. It is also possible that the participants perceived no similarity with the social media users who provided corrective rebuttals, because the stimuli contained no cues indicating similarity. This insufficient similarity may have inhibited the potential role of perceived similarity in reducing the participants’ reactance. We recommend that future studies manipulate the perceived similarity between the participants and the users posting rebuttals and examine whether increasing it (e.g., indicating that the participants and the sources of rebuttals share personal characteristics, interests, or values) mitigates the participants’ reactance to what is advocated. In addition, although we expected that social media users could improve their credibility by citing expert sources such as the CDC, the results showed that this approach was not effective.

Overall, this study echoed the literature, which has shown that comments sections play a role in persuasion by influencing people’s reactance to persuasive messages (H. Y. Kim et al., 2021; Li & Sundar, 2021). Scholars have also suggested offsetting the negative impact of misleading comments on vaccination by responding with factual information about the vaccines’ effectiveness and safety (H. Y. Kim et al., 2021). This study provides empirical evidence for this strategy, suggesting that when responding to negative comments on campaign messages, institutions with expertise and trustworthiness in the health field can successfully reduce people’s suspicions and psychological reactance that are triggered by negative comments.

Contrary to expectations, neither expert organizations nor social media users directly quashed people’s belief in the misinformation. Studies have shown that corrections by expert sources such as the CDC were effective in reducing people’s misperceptions about the Zika virus, flu vaccines, and the vaccine–autism link (Nyhan & Reifler, 2015; Nyhan et al., 2014; Vraga & Bode, 2017). However, these studies either examined only corrective messages or presented them as comments on misleading social media posts, indicating that the context in which corrective messages are presented is important. When our study presented them as rebuttals of misleading comments on pro-vaccination campaigns, the corrections failed to directly eliminate people’s belief in the misinformation. However, corrections by the CDC indirectly reduced people’s misperceptions by reducing their reactance to persuasive messages.

This study extends the consequences of reactance to belief in misinformation, showing that declining reactance to pro-vaccination messages can weaken belief in misinformation surrounding the vaccination, leading to stronger intentions to get vaccinated. Therefore, although rebuttals by expert organizations may not directly affect people’s attitudes or intentions regarding vaccination (Nyhan & Reifler, 2015; Nyhan et al., 2014), they can encourage people to accept vaccines by reducing reactance and misperceptions.

Unlike S.-Y. Kim et al. (2013, 2017), we found no evidence that less reactance led people’s attitudes toward what was advocated to become more positive than their initial attitudes. This may be because our study involved misinformation about vaccination, which is known to reduce people’s pro-vaccination attitudes (Featherstone & Zhang, 2020; Nan & Madden, 2012). Moreover, the study found no evidence that rebuttals by social media users altered people’s beliefs, perhaps because each misleading comment was refuted by a single user response, which may not have been sufficient to counter the misinformation (Vraga & Bode, 2017). Future studies could examine whether multiple corrections by social media users can trigger a bandwagon effect and increase the effectiveness of user corrections in reducing the misperceptions evoked by misleading comments.

Further analysis showed that preexisting attitudes toward the COVID-19 vaccines moderated the impact of rebuttals by social media users on people’s reactance: user rebuttals became more effective in attenuating reactance to campaign messages when people already had positive attitudes toward vaccination. This is consistent with H. Y. Kim et al. (2021), who showed that congenial comments on the flu vaccination reduced reactance to comments, whereas uncongenial comments produced greater reactance. Furthermore, consistent with a meta-analysis by Walter et al. (2020), the effectiveness of the CDC’s rebuttals in reducing misperceptions increased when the rebuttals conformed with people’s prior attitudes toward COVID-19 vaccination, indicating that people engaged in directionally motivated reasoning when processing corrective messages from expert organizations. People who initially had more pro-vaccination attitudes were more likely than those not exposed to rebuttals to accept corrective information from an expert source and reduce their misperceptions. However, corrective information from an expert source did not alleviate the misperceptions of people whose initial attitudes were less positive; worse, this strategy appeared to backfire (Nyhan et al., 2013; Nyhan & Reifler, 2010), with the rebuttals reinforcing people’s misperceptions.

These results indicate that social media users, public health institutions, and practitioners should adopt different strategies to promote vaccination among different groups of people based on their preexisting attitudes toward vaccination. Although negative and misleading comments may trigger reactance to pro-vaccination campaigns, rebuttals to negative and misleading comments by social media users and professional institutions are effective in reducing reactance and misperceptions among those whose initial positive attitude exceeds a moderate level. For people less favorable to vaccination, more caution is needed because rebuttals may backfire. Because people seem to trust vaccine safety information from pediatricians and other health care providers (Freed et al., 2011), future studies could examine whether rebuttals by doctors or caregivers, rather than the CDC or social media users, would be more effective in reducing vaccine opponents’ reactance to pro-vaccination messages and their misperceptions regarding vaccination.

This study has room for improvement. First, the participants were required to read negative comments on health promotion messages and corrective rebuttals, which may undermine the external validity of the results. However, health campaigns today are moving online and often use public Facebook pages to promote health messages, and more than half of Americans post and read online comments (Stroud et al., 2016). People are therefore likely to encounter others’ comments and the ensuing rebuttals on public Facebook pages, meaning that the manipulation in our study was realistic. Nevertheless, research conducted on people’s own social media accounts would increase the external validity of the results. Second, using a multiple-item scale to measure intention to receive COVID-19 vaccination would improve measurement validity and reliability. Third, although we hypothesized that social media users’ rebuttals may be considered trustworthy because these users are similar to the audience, we did not measure the participants’ perceptions of the similarity between themselves and the users providing the rebuttals. Future studies should explore the role of perceived similarity in the relationship between rebuttal sources and psychological reactance to uncover the mechanisms by which rebuttal sources affect people’s reactance and misperceptions. Fourth, this study’s R2 values for reactance, misperception, and attitude change were relatively small, indicating that the proposed model accounted for only a small amount of the variance in these variables; the model may have excluded factors that significantly affect them, and examining more potential contributors would inform the literature on reactance, misperception, and attitude change. Fifth, the absence of a separate question about Hispanic origin may have reduced the reliability of the measure of ethnicity. Future studies could include two separate questions on Hispanic origin and race to improve the reliability of the related measures.

Scholars have shown that different social media platforms have different affordances, which may affect audiences’ perceptions of the credibility of the information on a specific social media platform. Twitter, for instance, is more likely to be considered a news source than is Facebook (Pew Research Center, 2021), which is mainly seen as a social space (Moreno et al., 2016; Raacke & Bonds-Raacke, 2008). People may have different expectations of these platforms’ content (Vraga & Bode, 2018). Therefore, perceptions of opinions expressed in comments and rebuttals on these platforms may differ. Future studies could also examine whether the model we tested could be applied to other platforms. Finally, it is important to investigate whether and how other affordances of digital media affect people’s acceptance of and resistance to health campaign messages. For instance, social cues such as the number of likes or dislikes on comments may trigger cognitive heuristics and change people’s perceptions of the comments.

In sum, this study deepens our understanding of the mechanisms by which the affordances of digital media, such as comments sections, can affect people’s health attitudes and intentions. It shows that the opinions expressed in misleading comments and their rebuttals can affect the effectiveness of persuasion by influencing people’s psychological reactance to persuasive messages and their belief in the related misinformation. In this way, our study bridged the two lines of research on psychological reactance and misinformation, contributing theoretical insights into both communication phenomena. This study also provides important insights for public health institutions and practitioners, who should respond swiftly and firmly to negative and misleading comments to reduce reactance to vaccination campaigns and related misperceptions, especially among vaccine supporters. Given the high proportion of vaccine supporters in the population, shielding them from the reactance and misperceptions evoked by negative comments on vaccination campaigns will contribute significantly to the fight against the pandemic.

Supplemental Material

sj-docx-1-jmq-10.1177_10776990221084606 – Supplemental material for How Misinformation and Rebuttals in Online Comments Affect People’s Intention to Receive COVID-19 Vaccines: The Roles of Psychological Reactance and Misperceptions

Supplemental material, sj-docx-1-jmq-10.1177_10776990221084606 for How Misinformation and Rebuttals in Online Comments Affect People’s Intention to Receive COVID-19 Vaccines: The Roles of Psychological Reactance and Misperceptions by Yanqing Sun and Fangcao Lu in Journalism & Mass Communication Quarterly

Authors’ Biographies

Yanqing Sun (PhD, City University of Hong Kong, 2021) is an associate professor at the School of Journalism and Communication, Hunan University. Her research focuses on media effects, impacts of new media, and science and health communication. Her work has been published in Health Communication, Children and Youth Services Review, and Computers in Human Behavior.

Fangcao Lu is a PhD Candidate in the Department of Media and Communication, City University of Hong Kong. Her research examines person perception, uses and impacts of new media technology, and health communication.

1.

At the univariate level, skewness values between −3 and +3 and kurtosis values between −10 and +10 are considered to indicate univariate normality (Kline, 2015). At the multivariate level, Mardia’s (1970) normalized multivariate kurtosis coefficient (i.e., the critical ratio of multivariate kurtosis) of less than 5 indicates multivariate normality (Byrne, 2010). In our study, values of univariate skewness ranged from −1.30 to 0.13 and values of univariate kurtosis ranged from −1.54 to 1.46, indicating univariate normality. In addition, Mardia’s normalized estimate of multivariate kurtosis was −0.11, indicating that the data were multivariate normal.
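The univariate screening rule in this note can be applied directly with SciPy; the scores below are simulated stand-ins for the study's variables, used only to illustrate the check.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
scores = rng.normal(size=(344, 4))   # hypothetical scores for four variables

skewness = stats.skew(scores, axis=0)
kurtosis = stats.kurtosis(scores, axis=0)  # excess kurtosis; 0 under normality

# Screening thresholds from Kline (2015) quoted in the text:
# |skewness| < 3 and |kurtosis| < 10 for each variable
univariate_normal = bool(np.all(np.abs(skewness) < 3)
                         and np.all(np.abs(kurtosis) < 10))
```

Note that `scipy.stats.kurtosis` returns excess kurtosis (normal distribution = 0), which matches the convention the thresholds assume.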

2.

Regarding the stimuli, to ensure external validity, four vaccination campaign messages were obtained and modified from the real educational information about the COVID-19 vaccines posted on the websites of the CDC, the U.S. Department of Health and Human Services, the Mayo Clinic, Johns Hopkins Medicine, and MidMichigan Health. The messages were edited to an almost identical length (approximately 260 words).

To determine which messages should be selected as the stimuli, a pilot study was conducted. The participants (n = 50) were asked to read the four promotional messages and evaluate their quality, including whether they agreed that each message was convincing, strong, believable, and important, and whether it elicited agreement (Kang et al., 2006). They were also asked to evaluate whether each campaign message constituted a threat to freedom (Dillard & Shen, 2005). Responses were recorded on a 5-point Likert-type scale, with 1 signifying “Strongly disagree” and 5 signifying “Strongly agree.” Based on the results, the two messages with the highest quality (M = 4.07, SD = 0.77; M = 4.04, SD = 0.76, respectively) and the least perceived threat to freedom (M = 3.24, SD = 1.21; M = 3.26, SD = 1.18, respectively) were selected, to reduce the effect of the promotional message itself on people’s reactance and to isolate the effect of the comments and rebuttals on psychological reactance to the promotional messages. There was no significant difference in quality (t[49] = 0.38, p = .71) or perceived threat to freedom (t[49] = −0.26, p = .80) between the two selected messages. The two messages differed in their specific titles and content, but both expressed the same attitude toward COVID-19 vaccination: each contained pro-vaccination content, discussed the benefits and safety of the COVID-19 vaccination, and encouraged people to get vaccinated (please see the Supplemental Material).

For the user-generated comments, we selected the 11 most common misperceptions about COVID-19 vaccines from the aforementioned websites and asked the participants in the pilot study how often they had seen each piece of information on social media. The four most frequently encountered misinformation items were selected, and we then chose four real social media comments that reflected these items. To reduce confounding effects, we made minor revisions to the comments to ensure that they were similar in length (around 20 words) and expressed in a reasonably civil manner. To increase the perceived authenticity of the comments, we left grammatical and formatting errors uncorrected.

Next, we created two comments sections, each containing two of the four negative and misleading comments on COVID-19 vaccination. For each comments section, the specific content of the comments was different. However, the attitudes toward COVID-19 vaccination expressed in these comments were the same. They were anti-vaccination messages that included misleading information about the safety and benefits of the COVID-19 vaccines (please see the Supplemental Material for details of the stimuli).

Based on the information on the aforementioned websites, we prepared four rebuttals for the four comments. The rebuttals were simple and brief, emphasized facts, and cited reliable sources, as recommended by previous studies (Bode & Vraga, 2018; Lewandowsky et al., 2012). The rebuttals in the user rebuttal condition were posted by social media users, and those in the expert organization condition were posted by the CDC (see the Supplemental Material). In addition, all names and profiles of users posting comments and rebuttals were gender-neutral to avoid confounding effects.

In this experiment, we used real profile pictures in the stimuli. To protect the privacy of users, we have blurred their pictures in this article.

3.

Initially, 10.5% of the participants (n = 36) had a negative attitude toward COVID-19 vaccination (i.e., an average score below 4), and 88.4% (n = 304) had a positive attitude (i.e., an average score above 4). Only 1.1% (n = 4) were neutral regarding the vaccination (i.e., an average score of 4). After exposure to the stimuli, approximately 10.2% (n = 35) of the participants held a negative attitude toward the vaccination, 86.6% (n = 298) held a positive attitude, and 3.2% (n = 11) were neutral.

4.

Some scholars (e.g., Cronbach & Furby, 1970; Harris, 1963) have objected to using difference scores and argued that difference scores may have low reliability and lead to biases caused by regression toward the mean (Allison, 1990). However, using mathematical and statistical methods, Allison (1990) refuted this objection, and showed that in a random experimental design, the use of difference scores is appropriate and can reflect reality accurately.

5.

Because previous studies have shown that psychological reactance is best conceptualized as a combination of anger and negative cognitions, reactance was measured from two perspectives: affective reactance output (i.e., state of anger) and cognitive reactance output (i.e., negative cognition; Gardner & Leshner, 2016).

Regarding the state of anger, four items proposed by Dillard and Shen (2005) were commonly used to measure the state of anger (Ratcliff, 2019) in previous studies (e.g., H. Y. Kim et al., 2021; Miller et al., 2007; Quick & Stephenson, 2008; Shen, 2010; Song et al., 2018; X. Zhang, 2020). Therefore, the same four items were used to measure the state of anger, namely irritation, anger, annoyance, and aggravation. The participants were asked to indicate the extent to which they were currently experiencing each of these feelings. Their responses were given on a 7-point scale, with 1 signifying “None of this feeling” and 7 signifying “A great deal of this feeling.”

Previous studies have not converged on a consistent measure of negative cognitions (Ratcliff, 2019) and have mainly used three approaches (Reynolds-Tylus et al., 2021): employing trained coders to code participants' thoughts (e.g., Dillard & Shen, 2005; H. Y. Kim et al., 2021; Li & Sundar, 2021; Shen, 2010, 2015), asking participants to code their own thoughts (e.g., Quick & Stephenson, 2008; Rains & Turner, 2007; Song et al., 2018), or using a Likert-type scale (e.g., Gardner & Leshner, 2016; S.-Y. Kim et al., 2017; Silvia, 2005; Q. Zhang & Sapp, 2013; X. Zhang, 2020). Comparing the three measures, Reynolds-Tylus et al. (2021) showed that their performances were comparable: negative cognitions measured by all three approaches were greater for participants who viewed forceful language than for those who viewed non-forceful language, and all reduced intention to engage in the advocated healthy behavior. Models based on the three measures were also similar in model fit and variance explained. Notably, the Likert-type scale was slightly better than the other two measures in terms of model fit and variance explained, and much better in terms of factor loadings. The present study therefore used a Likert-type scale and measured negative cognitions with four items adapted from Gardner and Leshner (2016). The participants indicated the extent to which they agreed that the promotional message in the post was pleasant, got in the way of what they wanted (item 2), was reasonable, and was fair. Responses were given on a 7-point scale, where 1 indicated "Strongly disagree" and 7 indicated "Strongly agree." All items except item 2 were reverse-coded so that higher scores indicated greater negative cognition.
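As a concrete illustration of the reverse-coding step (the function name and the response values are hypothetical, not from the study's materials), a response s on a 7-point scale is mapped to 8 - s, while the negatively worded item 2 keeps its raw score:

```python
def reverse_code(score, scale_points=7):
    """Reverse-code a Likert response: 1 <-> 7, 2 <-> 6, 3 <-> 5, 4 <-> 4."""
    return scale_points + 1 - score

# Items worded positively ("pleasant", "reasonable", "fair") are reverse-coded
# so that higher scores indicate greater negative cognition; item 2 ("got in
# the way of what they wanted") is already worded in the negative direction.
raw = [6, 2, 7, 5]  # hypothetical responses to the four items
coded = [s if i == 1 else reverse_code(s)  # index 1 is item 2
         for i, s in enumerate(raw)]
```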

The eight items measuring anger and negative cognitions were used as indicators of reactance. However, a reliability test showed that deleting negative-cognition item 2 increased Cronbach's α from .74 to .79; thus, the scores for the remaining seven items were averaged to form an index of psychological reactance to the promotional message (α = .79, M = 2.95, SD = 1.14).
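The Cronbach's α reported above can be computed directly from the respondent-by-item score matrix. A minimal sketch (the function and the toy data are illustrative, not the study's analysis code):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of scale totals
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

# Perfectly consistent items (identical responses) yield alpha = 1.0
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
scores = np.column_stack([x, x, x])
```

Dropping a weak item (such as item 2 above) and recomputing α on the reduced matrix reproduces the "alpha if item deleted" check reported by standard statistics packages.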

6.

We ran a power analysis with the software package G*Power 3 (Faul, Buchner et al., 2007; Faul, Erdfelder et al., 2007) to determine the appropriate sample size. Using the ANOVA F-test model (fixed effects, omnibus, one-way), we set the effect size at f = .25 (equivalent to Cohen's d = .50), α at .05, power at .80, and the number of groups at 12. The results indicated a required total sample size of 288; thus, our sample of 344 was considered sufficient.

7.

To control for the effects of the eight aforementioned control variables, a separate regression analysis was performed for each variable in the model, with that variable as the dependent variable and the eight control variables as predictors. The standardized residuals from these regressions were used as the new indicators in the model. According to Kohler and Kreuter (2012), this method completely controls for the effects of the control variables on the model of interest, so the coefficients in the model reflect the impact of the independent variables on the dependent variables after the effects of the control variables are excluded. The same method has been used in previous communication studies (e.g., Chia, 2010; H. J. Kim, 2016; Tian, 2011).
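The residualization procedure can be sketched in a few lines (variable names and the simulated data are illustrative; the ordinary-least-squares residuals are, by construction, uncorrelated with the control variables):

```python
import numpy as np

def standardized_residuals(y, controls):
    """Regress y on the control variables (plus an intercept) and return
    the standardized residuals, i.e., y purged of the controls' influence."""
    X = np.column_stack([np.ones(len(y)), controls])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return (resid - resid.mean()) / resid.std(ddof=1)

# Each model variable (e.g., reactance, misperception, intention) would be
# replaced by its standardized residual before estimating the model.
rng = np.random.default_rng(0)
controls = rng.normal(size=(344, 8))             # eight control variables
y = controls @ rng.normal(size=8) + rng.normal(size=344)
y_resid = standardized_residuals(y, controls)
```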

Footnotes

Authors’ Note: Both authors have agreed to the submission and that the article is not currently being considered for publication by any other print or electronic journal.

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The author(s) received no financial support for the research, authorship, and/or publication of this article.

Supplemental Material: Supplemental material for this article is available online.

References

  1. Allison P. D. (1990). Change scores as dependent variables in regression analysis. Sociological Methodology, 20, 93–114.
  2. Arpan L. M., Nabi R. L. (2011). Exploring anger in the hostile media process: Effects on news preferences and source evaluation. Journalism & Mass Communication Quarterly, 88(1), 5–22.
  3. Bessarabova E., Fink E. L., Turner M. (2013). Reactance, restoration, and cognitive structure: Comparative statics. Human Communication Research, 39(3), 339–364.
  4. Bhattacharya S., Srinivasan P., Polgreen P. (2017). Social media engagement analysis of U.S. federal health agencies on Facebook. BMC Medical Informatics and Decision Making, 17(1), 1–12.
  5. Bode L., Vraga E. K. (2018). See something, say something: Correction of global health misinformation on social media. Health Communication, 33(9), 1131–1140.
  6. Brehm J. W. (1966). A theory of psychological reactance. Academic Press.
  7. Brehm S. S., Brehm J. W. (1981). Psychological reactance: A theory of freedom and control. Academic Press.
  8. Byrne B. M. (2010). Structural equation modeling with AMOS. Taylor & Francis Group.
  9. Casler K., Bickel L., Hackett E. (2013). Separate but equal? A comparison of participants and data gathered via Amazon's MTurk, social media, and face-to-face behavioral testing. Computers in Human Behavior, 29(6), 2156–2160.
  10. Centers for Disease Control and Prevention. (2015, January 8). CDC enterprise social media policy. https://www.cdc.gov/maso/policy/SocialMediaPolicy508.pdf
  11. Chia S. C. (2010). How social influence mediates media effects on adolescents' materialism. Communication Research, 37(3), 400–419.
  12. Cronbach L. J., Furby L. (1970). How we should measure "change": Or should we? Psychological Bulletin, 74(1), 68–80.
  13. Dillard J. P., Shen L. (2005). On the nature of reactance and its role in persuasive health communication. Communication Monographs, 72(2), 144–168.
  14. Faul F., Buchner A., Erdfelder E., Mayr S. (2007). A short tutorial of GPower. Tutorials in Quantitative Methods for Psychology, 3(2), 51–59.
  15. Faul F., Erdfelder E., Lang A. G., Buchner A. (2007). G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods, 39(2), 175–191.
  16. Featherstone J. D., Zhang J. (2020). Feeling angry: The effects of vaccine misinformation and refutational messages on negative emotions and vaccination attitude. Journal of Health Communication, 25(9), 692–702.
  17. Field A. (2018). Discovering statistics using IBM SPSS statistics (5th ed.). SAGE.
  18. Flynn D. J., Nyhan B., Reifler J. (2017). The nature and origins of misperceptions: Understanding false and unsupported beliefs about politics. Political Psychology, 38, 127–150.
  19. Freed G. L., Clark S. J., Butchart A. T., Singer D. C., Davis M. M. (2011). Sources and perceived credibility of vaccine-safety information for parents. Pediatrics, 127(Suppl. 1), S107–S112.
  20. Fuchs C. (2011). Web 2.0, prosumption, and surveillance. Surveillance and Society, 8(3), 288–308.
  21. Gardner L., Leshner G. (2016). The role of narrative and other-referencing in attenuating psychological reactance to diabetes self-care messages. Health Communication, 31(6), 738–751.
  22. Gunther A. C. (2017). Hostile media effect. In Rössler P., Hoffner C. A., Zoonen L. (Eds.), The international encyclopedia of media effects (pp. 1–10). John Wiley.
  23. Harris C. W. (1963). Problems in measuring change. University of Wisconsin Press.
  24. Hayes A. F. (2018). Introduction to mediation, moderation, and conditional process analysis: A regression-based approach (2nd ed.). Guilford Press.
  25. Heldman A. B., Schindelar J., Weaver J. B. (2013). Social media engagement and public health communication: Implications for public health organizations being truly "social." Public Health Reviews, 35(1), 1–18.
  26. Hocevar K. P., Metzger M., Flanagin A. J. (2017). Source credibility, expertise, and trust in health and risk messaging. In Nussbaum J. F. (Ed.), The Oxford research encyclopedia of communication (pp. 1–22). Oxford University Press.
  27. Hoffman B. L., Felter E. M., Chu K.-H., Shensa A., Williams D., Himmel R., Wolynn R., Hermann C., Wolynn T., Primack B. A. (2019). The emerging landscape of anti-vaccination sentiment on Facebook. Journal of Adolescent Health, 64(2), S136.
  28. Hovland C. I., Janis I. L., Kelley H. H. (1953). Communication and persuasion. Yale University Press.
  29. Howard M. C. (n.d.). Sobel test formula for serial mediation/sequential mediation. https://mattchoward.com/sobel-test-formula-for-serial-mediation-sequential-mediation/
  30. Hu L. T., Bentler P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6(1), 1–55.
  31. Jackson S. (1992). Message effects research: Principles of design and analysis. Guilford Press.
  32. Jolley D., Douglas K. M. (2014). The effects of anti-vaccine conspiracy theories on vaccination intentions. PLOS ONE, 9(2), Article e89177.
  33. Kang Y., Cappella J., Fishbein M. (2006). The attentional mechanism of message sensation value: Interaction between message sensation value and argument quality on message effectiveness. Communication Monographs, 73(4), 351–378.
  34. Kees J., Berry C., Burton S., Sheehan K. (2017). An analysis of data quality: Professional panels, student subject pools, and Amazon's Mechanical Turk. Journal of Advertising, 46(1), 141–155.
  35. Kim H. J. (2016). The role of emotions and culture in the third-person effect process of news coverage of election poll results. Communication Research, 43(1), 109–130.
  36. Kim H. Y., Seo Y., Yoon H. J., Han J. Y., Ko Y. (2021). The effects of user comment valence of Facebook health messages on intention to receive the flu vaccine: The role of pre-existing attitude towards the flu vaccine and psychological reactance. International Journal of Advertising, 40(7), 1187–1208.
  37. Kim S.-Y., Levine T. R., Allen M. (2013). Comparing separate process and intertwined models for reactance. Communication Studies, 64(3), 273–295.
  38. Kim S.-Y., Levine T. R., Allen M. (2017). The intertwined model of reactance for resistance and persuasive boomerang. Communication Research, 44(7), 931–951.
  39. Kline R. (2015). Principles and practices of structural equation modeling (4th ed.). Guilford Publications.
  40. Kohler U., Kreuter F. (2012). Data analysis using Stata (3rd ed.). Stata Press.
  41. Kunda Z. (1990). The case for motivated reasoning. Psychological Bulletin, 108(3), 480–498.
  42. Lee S., Atkinson L., Sung Y. H. (2022). Online bandwagon effects: Quantitative versus qualitative cues in online comments sections. New Media & Society, 24, 580–599.
  43. Lewandowsky S., Ecker U. K. H., Seifert C. M., Schwarz N., Cook J. (2012). Misinformation and its correction: Continued influence and successful debiasing. Psychological Science in the Public Interest, 13(3), 106–131.
  44. Li R., Sundar S. S. (2021). Can interactive media attenuate psychological reactance to health messages? A study of the role played by user commenting and audience metrics in persuasion. Health Communication. Advance online publication. https://doi.org/10.1080/10410236.2021.1888450
  45. MacKinnon D. P., Fritz M. S., Williams J., Lockwood C. M. (2007). Distribution of the product confidence limits for the indirect effect: Program PRODCLIN. Behavior Research Methods, 39(3), 384–389.
  46. Mardia K. V. (1970). Measures of multivariate skewness and kurtosis with applications. Biometrika, 57(3), 519–530.
  47. Mason W., Suri S. (2012). Conducting behavioral research on Amazon's Mechanical Turk. Behavior Research Methods, 44(1), 1–23.
  48. McCroskey J. C., Teven J. J. (1999). Goodwill: A reexamination of the construct and its measurement. Communication Monographs, 66(1), 90–103.
  49. Mendoza-Herrera K., Valero-Morales I., Ocampo-Granados M. E., Reyes-Morales H., Arce-Amaré F., Barquera S. (2020). An overview of social media use in the field of public health nutrition: Benefits, scope, limitations, and a Latin American experience. Preventing Chronic Disease, 17, 1–7.
  50. Miller C. H., Lane L. T., Deatrick L. M., Young A. M., Potts K. A. (2007). Psychological reactance and promotional health messages: The effects of controlling language, lexical concreteness, and the restoration of freedom. Human Communication Research, 33(2), 219–240.
  51. Moreno M. A., Arseniev-Koehler A., Litt D., Christakis D. (2016). Evaluating college students' displayed alcohol references on Facebook and Twitter. Journal of Adolescent Health, 58(5), 527–532.
  52. Nan X., Madden K. (2012). HPV vaccine information in the blogosphere: How positive and negative blogs influence vaccine-related risk perceptions, attitudes, and behavioral intentions. Health Communication, 27(8), 829–836.
  53. Neiger B. L., Thackeray R., van Wagenen S. A., Hanson C. L., West J. H., Barnes M. D., Fagen M. C. (2012). Use of social media in health promotion: Purposes, key performance indicators, and evaluation metrics. Health Promotion Practice, 13(2), 159–164.
  54. Nir L. (2011). Motivated reasoning and public opinion perception. Public Opinion Quarterly, 75(3), 504–532.
  55. Nyhan B., Reifler J. (2010). When corrections fail: The persistence of political misperceptions. Political Behavior, 32(2), 303–330.
  56. Nyhan B., Reifler J. (2015). Does correcting myths about the flu vaccine work? An experimental evaluation of the effects of corrective information. Vaccine, 33(3), 459–464.
  57. Nyhan B., Reifler J., Richey S., Freed G. L. (2014). Effective messages in vaccine promotion: A randomized trial. Pediatrics, 133(4), e835–e842.
  58. Nyhan B., Reifler J., Ubel P. A. (2013). The hazards of correcting myths about health care reform. Medical Care, 51(2), 127–132.
  59. O'Keefe D. J. (1990). Persuasion: Theory and research. SAGE.
  60. O'Reilly T. (2005, October 1). Web 2.0: Compact definition? https://www.bibsonomy.org/bibtex/c3c938edd28458ae677d4422ae0ca113
  61. Park H., Rodgers S., Stemmle J. (2011). Health organizations' use of Facebook for health advertising and promotion. Journal of Interactive Advertising, 12(1), 62–77.
  62. Pew Research Center. (2020, April 9). Public holds broadly favorable views of many federal agencies, including CDC and HHS. https://www.pewresearch.org/politics/2020/04/09/public-holds-broadly-favorable-views-of-many-federal-agencies-including-cdc-and-hhs/
  63. Pew Research Center. (2021, January 12). News use across social media platforms in 2020. https://www.pewresearch.org/journalism/2021/01/12/news-use-across-social-media-platforms-in-2020/
  64. Pornpitakpan C. (2004). The persuasiveness of source credibility: A critical review of five decades' evidence. Journal of Applied Social Psychology, 34(2), 243–281.
  65. Quick B. L., Stephenson M. T. (2008). Examining the role of trait reactance and sensation seeking on perceived threat, state reactance, and reactance restoration. Human Communication Research, 34(3), 448–476.
  66. Raacke J., Bonds-Raacke J. (2008). MySpace and Facebook: Applying the uses and gratifications theory to exploring friend-networking sites. Cyberpsychology and Behavior, 11(2), 169–174.
  67. Rains S. A. (2013). The nature of psychological reactance revisited: A meta-analytic review. Human Communication Research, 39(1), 47–73.
  68. Rains S. A., Turner M. M. (2007). Psychological reactance and persuasive health communication: A test and extension of the intertwined model. Human Communication Research, 33(2), 241–269.
  69. Ratcliff C. L. (2019). Characterizing reactance in communication research: A review of conceptual and operational approaches. Communication Research, 48(7), 1033–1058.
  70. Reynolds-Tylus T. (2019). Psychological reactance and persuasive health communication: A review of the literature. Frontiers in Communication, 4, 1–12.
  71. Reynolds-Tylus T., Bigsby E., Quick B. L. (2021). A comparison of three approaches for measuring negative cognitions for psychological reactance. Communication Methods and Measures, 15(1), 43–59.
  72. Shen L. (2010). Mitigating psychological reactance: The role of message-induced empathy in persuasion. Human Communication Research, 36(3), 397–422.
  73. Shen L. (2015). Antecedents to psychological reactance: The impact of threat, message frame, and choice. Health Communication, 30(10), 975–985.
  74. Sherrick B., Hoewe J. (2018). The effect of explicit online comment moderation on three spiral of silence outcomes. New Media & Society, 20(2), 453–474.
  75. Shi R., Messaris P., Cappella J. N. (2014). Effects of online comments on smokers' perception of antismoking public service announcements. Journal of Computer-Mediated Communication, 19(4), 975–990.
  76. Silvia P. J. (2005). Deflecting reactance: The role of similarity in increasing compliance and reducing resistance. Basic and Applied Social Psychology, 27(3), 277–284.
  77. Smith N., Graham T. (2019). Mapping the anti-vaccination movement on Facebook. Information, Communication & Society, 22(9), 1310–1327.
  78. Song H., McComas K. A., Schuler K. L. (2018). Source effects on psychological reactance to regulatory policies: The role of trust and similarity. Science Communication, 40(5), 591–620.
  79. Stroud N. J., Van Duyn E., Peacock C. (2016, March). Survey of commenters and comment readers. Center for Media Engagement.
  80. Sundar S. S. (2008). The MAIN model: A heuristic approach to understanding technology effects on credibility. In Metzger M. J., Flanagin A. J. (Eds.), Digital media, youth, and credibility (pp. 73–100). The MIT Press.
  81. Taber C. S., Lodge M. (2006). Motivated skepticism in the evaluation of political beliefs. American Journal of Political Science, 50(3), 755–769.
  82. Taylor A. B., MacKinnon D. P., Tein J.-Y. (2008). Tests of the three-path mediated effect. Organizational Research Methods, 11(2), 241–269.
  83. Tian Y. (2011). Communication behaviors as mediators: Examining links between political orientation, political communication, and political participation. Communication Quarterly, 59(3), 380–394.
  84. Vraga E. K., Bode L. (2017). Using expert sources to correct health misinformation in social media. Science Communication, 39(5), 621–645.
  85. Vraga E. K., Bode L. (2018). I do not believe you: How providing a source corrects health misperceptions across social media platforms. Information, Communication & Society, 21(10), 1337–1353.
  86. Walter N., Cohen J., Holbert R. L., Morag Y. (2020). Fact-checking: A meta-analysis of what works and for whom. Political Communication, 37(3), 350–375.
  87. Walther J. B., DeAndrea D., Kim J., Anthony J. C. (2010). The influence of online comments on perceptions of antimarijuana public service announcements on YouTube. Human Communication Research, 36(4), 469–492.
  88. Wright S. (2006). Government-run online discussion fora: Moderation, censorship and the shadow of control. British Journal of Politics and International Relations, 8(4), 550–568.
  89. Youn S., Kim S. (2019). Understanding ad avoidance on Facebook: Antecedents and outcomes of psychological reactance. Computers in Human Behavior, 98, 232–244.
  90. Zhang Q., Sapp D. A. (2013). Psychological reactance and resistance intention in the classroom: Effects of perceived request politeness and legitimacy, relationship distance, and teacher credibility. Communication Education, 62(1), 1–17.
  91. Zhang X. (2020). Effects of freedom restoration, language variety, and issue type on psychological reactance. Health Communication, 35(11), 1316–1323.
  92. Ziegele M., Jost P. B. (2020). Not funny? The effects of factual versus sarcastic journalistic responses to uncivil user comments. Communication Research, 47(6), 891–918.

Supplementary Materials

sj-docx-1-jmq-10.1177_10776990221084606: Supplemental material for "How Misinformation and Rebuttals in Online Comments Affect People's Intention to Receive COVID-19 Vaccines: The Roles of Psychological Reactance and Misperceptions" by Yanqing Sun and Fangcao Lu, Journalism & Mass Communication Quarterly (SAGE Publications).