PLOS One
2022 Jul 21;17(7):e0271206. doi: 10.1371/journal.pone.0271206

COVIDisgust: Language processing through the lens of partisanship

Veranika Puhacheuskaya, Isabell Hubert Lyall, Juhani Järvikivi
Editor: Koji Miwa
PMCID: PMC9302854  PMID: 35862298

Abstract

Disgust is an aversive reaction protecting an organism from disease. People differ in how prone they are to experiencing it, and this fluctuates depending on how safe the environment is. Previous research has shown that the recognition and processing of disgusting words depends not on the word’s disgust per se but rather on individual sensitivity to disgust. However, the influence of dynamically changing disgust on language comprehension has not yet been researched. In a series of studies, we investigated whether the media’s portrayal of COVID-19 would affect subsequent language processing via changes in disgust. The participants were exposed to news headlines either depicting COVID-19 as a threat or downplaying it, and then rated single words for disgust and valence (Experiment 1; N = 83) or made a lexical decision (Experiment 2; N = 86). The headline type affected only word ratings and not lexical decisions, but political ideology and disgust proneness affected both. More liberal participants assigned higher disgust ratings after the headlines discounted the threat of COVID-19, whereas more conservative participants did so after the headlines emphasized it. We explain the results through the politicization and polarization of the pandemic. Further, political ideology was more predictive of reaction times in Experiment 2 than disgust proneness. High conservatism correlated with longer reaction times for disgusting and negative words, and the opposite was true for low conservatism. The results suggest that disgust proneness and political ideology dynamically interact with perceived environmental safety and have a measurable effect on language processing. Importantly, they also suggest that the media’s stance on the pandemic and the political framing of the issue may affect the public response by increasing or decreasing our disgust.

Introduction

Prior linguistic research has shown that language comprehension is affected by story context, real-world context, and the listener’s experience of the world [1–5]. Furthermore, language processing is affected by the emotions it evokes. The affective content of a word has been shown to influence how fast it is recognized [6, 7], an effect that also interacts with a range of person-based factors, such as age [8–10], sex [11], character traits and mood [12, 13], native speaker status [14, 15], and others.

Two major theoretical approaches to classifying affective words are the dimensional approach and the categorical approach [16]. According to the dimensional approach, words vary along continuous affective dimensions like valence and arousal [17]. Different theoretical models make different predictions about the effect of valence/arousal on lexical processing. The automatic vigilance hypothesis posits rapid allocation of attention to negative stimuli that impedes the processing of other properties of those stimuli, resulting in an inhibitory effect on word recognition and reading [18, 19]. Another model, proposed by [20, 21], is based on motivated attention and claims that all extremely valenced stimuli, regardless of their polarity, draw attention faster than neutral stimuli. The empirical evidence has been mixed as well [7, 22–25], and a recent study suggested that valence effects may be modality-specific [26]. The categorical approach usually postulates five discrete universal emotions that words are associated with: happiness, anger, fear, disgust, and sadness [27]. It has been argued that these emotions have at least partially distinct neural correlates [28], and the DISGUST system has been proposed as a primary emotional system eliciting disgust [29]. Importantly, according to [30], discrete emotions influence word processing over and above the effects of valence and arousal. In particular, the study found that high-disgust words were processed more slowly than neutral words when valence and arousal were controlled for.

Given its importance in human evolution and intergroup dynamics, it is not surprising that disgust plays a role in language processing. In fact, several studies have shown that the same neural circuitries activate in response to disgusting words as to non-linguistic disgust-related stimuli (facial expressions of disgust, disgusting smells and images, etc.) [31, 32], suggesting that brain areas processing emotional information are not domain-specific [33]. The primary function of disgust is believed to be disease avoidance [34], as healthy individuals have a greater chance to reproduce. For humans, however, disgust extends well beyond disease avoidance: we can also experience aversion to something or someone we deem morally disgusting, like members of outgroups (e.g., homosexual individuals, immigrants) [35, 36].

Individual differences in affective word processing

Naturally, people vary in their predisposition to feeling disgusted, which is usually measured by a trait called disgust sensitivity ([37], modified by [38]). Disgust sensitivity is more nuanced than many other character traits because it is less stable and can fluctuate. For instance, events such as epidemics are believed to activate the so-called “behavioral immune system” [39], elevating our disgust sensitivity to protect us from parasites.

There is mounting evidence that listeners’ character traits, from disgust sensitivity to political ideology, affect various aspects of healthy language processing [e.g., 40–45]. Importantly for our purpose, [46] found that individual disgust sensitivity mediates lexical decision times for high-disgust words. In their study, disgusting words produced an inhibitory effect in high disgust-prone participants but a facilitatory effect in low disgust-prone participants. This suggests that studying the processing of disgusting words may be meaningless without accounting for individual disgust proneness. The authors explained the effect with the contextual-learning hypothesis [47], which claims that a person’s response to a specific emotion depends on their life-long experience with that emotion.

Additionally, there is some evidence that traits and fluctuating states make differential contributions to affective word processing. For instance, [48] examined the effect of trait anxiety versus an induced anxiety state on negative word recognition and found that individuals in induced anxiety states (but not those scoring high on trait anxiety) showed better recognition of negative words and had a memory bias toward negative events. The authors attributed this to hypervigilance against danger in a threatening environment, which helps an individual stay safe. This motivates examining both traits (e.g., disgust sensitivity) and states (e.g., feeling disgusted) in language processing.

The present study and hypotheses

Prior research has not yet investigated the effect of dynamic changes in a person’s disgust on word processing. Since pandemics can naturally elevate our disgust to keep us safe, we aimed to make use of the unique situation the world is currently facing and investigate whether the media’s portrayal of the danger of COVID-19 would influence subsequent language processing. Specifically, news platforms often take one of two approaches in covering the current pandemic: they either emphasize the severity of COVID-19, portraying it as a serious disease demanding public action (“much worse than the flu”, “can have lasting detrimental effects on health”), or downplay it (“no more dangerous than the flu”, “kills only the old and the weak”). We investigated whether exposure to these two types of headlines would affect ratings of single words along the dimensions of disgust and valence (Experiment 1) and response times to these words in a lexical decision task (Experiment 2), both as modulated by person-based factors and word-inherent characteristics.

We predicted that being exposed to headlines highlighting the severity of COVID-19 would elevate participants’ disgust, helping them to stay safe in a threatening environment. For the rating task, this should translate into higher evaluations on the disgust scale and lower ratings on the valence scale. This effect may potentially be stronger for moderately disgusting words because of the ceiling effect for highly disgusting words. We also expected these effects to be mediated by the participant’s baseline disgust sensitivity: people with higher scores should assign higher disgust ratings and be more affected by the headline manipulation than people with lower scores. Combining a word rating task with a lexical decision task allowed us to examine both the fluctuations of participants’ disgust as reflected in their word ratings and how these fluctuations affect core language processing mechanisms, such as lexical access. Based on the findings by [46], we expected that 1) less disgust-prone participants would have shorter lexical decision times for disgusting words, whereas high disgust-prone participants would have longer lexical decision times for the same words, and 2) the severe headlines would exacerbate this effect due to increased disease salience.

We additionally examined whether the effect of the headlines is mediated by the participant’s political ideology. Conservatives tend to score higher on disgust in general [49–51], and making people physically disgusted shifts their attitudes toward the conservative end of the political spectrum [52]. One would thus expect conservative views to strengthen as a function of physical disgust and/or perceived disease vulnerability. Indeed, a study by [53] conducted in the U.S. and Poland found that exposure to press reports on COVID-19 increased support for conservative presidential candidates (see [54] for similar findings during the Ebola outbreak). At the same time, the high politicization of COVID-19 in the U.S. has led to more denial of the virus and less adherence to social distancing guidelines among conservatives compared to liberals [55]. Largely, this attitude stems from the framing of the issue by conservative politicians as a trade-off between economic growth and saving human lives, with a common rhetoric that the cure “cannot be worse than the problem” [56]. This has created a paradox whereby conservatives downplay the threat of COVID-19 more, and engage in social distancing less, than liberals. It is thus difficult to predict how the headline manipulation would interact with political ideology in our study. We had two hypotheses regarding the conservative response to the severe headlines: 1) higher conservatism scores would correlate with higher disgust ratings due to higher disgust proneness, and highlighting the threat of COVID-19 would exacerbate this effect; or 2) headlines emphasizing the threat would be considered untrustworthy and not credible, and be discarded by more conservative participants, resulting in no effect on disgust ratings.
For liberals, we predicted that the severe headlines would not produce a strong effect, because they already treat this portrayal as the norm, whereas headlines downplaying COVID-19 would produce a strong disgust reaction due to a heightened sense of threat and a conflict with their own view of the severity of the virus. This should result in higher disgust ratings following the downplaying headlines.

We should also note that while most of the data about politicization and polarization of the COVID-19 pandemic does come from the U.S., [57, 58] did not find significant differences between the U.S. and Canada in this respect (whereas the level of political polarization was significantly lower in the U.K.). We thus assumed that the level of politicization and polarization of the COVID-19 pandemic in Canada is high.

Experiment 1—Word ratings

Materials and methods

Participants

Eighty-three students at the University of Alberta received partial course credit for their participation. Nineteen were removed from the data analysis (14 indicated they wished their data to be withdrawn; five provided incorrect answers to trap questions in the Disgust Sensitivity survey), leaving 64 participants (43 female [67%], mean age = 20.3, range = 17–41, SD = 3.2). Thirty-four were native speakers of English. All participants who did not choose English as the first language they acquired in childhood were classified as non-native speakers. The participants were asked to rate their English proficiency on a 5-point scale. Mean self-reported proficiency was 4.0/5 (SD = 0.84) for non-native speakers and 4.9/5 (SD = 0.40) for native speakers. The plan for this study was reviewed for its adherence to ethical guidelines by a Research Ethics Board at the University of Alberta (reference number Pro00102348).

Materials

We used two databases: norms of valence, arousal, and dominance (VAD) by [17] and the NRC Emotion Intensity Lexicon by [59]. The latter provides emotions with which the words are associated and the strength of association scores. Only words that had disgust scores associated with them were extracted (1,093 in total). From these, we selected the words that also occurred in the former database. This left us with 787 words. Since the VAD and disgust scores were on different scales, we rescaled them uniformly. We kept arousal values constant (between -1 and 1 in a normalized distribution) and split the words into four subsets (high/low valence × high/low disgust). Because there were virtually no high valence/high disgust items, we excluded this group. This allowed us to keep stricter thresholds for the other three categories and thus get better representatives of those categories:

  • high disgust (> .75) low valence (< -.75);

  • low disgust (< -.75) high valence (> .75);

  • low disgust (< -.5) low valence (< -.5).
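The selection procedure above can be sketched as follows. This is a minimal illustration, not the authors’ actual script (which is available on OSF): the min-max rescaling transform and the function names are assumptions.

```python
def rescale(values):
    """Min-max rescale a list of scores to the [-1, 1] interval.
    (The paper rescales VAD and disgust scores to a common scale;
    the exact transform used here is an assumption.)"""
    lo, hi = min(values), max(values)
    return [2 * (v - lo) / (hi - lo) - 1 for v in values]

def categorize(disgust, valence):
    """Assign a word to one of the three stimulus categories
    using the thresholds reported above; return None for words
    that fall outside all three (including the near-empty
    high valence / high disgust cell, which was excluded)."""
    if disgust > 0.75 and valence < -0.75:
        return "high disgust / low valence"
    if disgust < -0.75 and valence > 0.75:
        return "low disgust / high valence"
    if disgust < -0.5 and valence < -0.5:
        return "low disgust / low valence"
    return None
```

From each of the three resulting pools, 33 words were then sampled, as described below.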

For each category, we selected 33 words, 99 words in total (see Table 1). Correlations between words’ characteristics together with their p-values are provided in Table 2. We took concreteness ratings from [60], who used [61] as training data. Age of acquisition was taken from [62].

Table 1. Mean characteristics of words by category.
Word category Valence Arousal Disgust Log. Freq Length Concreteness AoA
high disgust low valence -1.09 (.29) .15 (.57) 1.39 (.42) 7.93 (1.43) 7.18 (2.31) 4.11 (2.19) 9.61 (2.36)
low disgust high valence 2.05 (.88) -.11 (.54) -1.61 (.50) 8.92 (1.70) 6.06 (2.09) 3.85 (2.67) 7.98 (2.58)
low disgust low valence -.74 (.20) -.01 (.45) -.90 (.36) 8.25 (1.22) 8.06 (2.60) 1.23 (1.04) 8.31 (1.71)
total .07 (1.51) .01 (.53) -.37 (1.36) 8.36 (1.51) 7.10 (2.46) 3.06 (2.44) 8.63 (2.33)

AoA = Age of Acquisition. SDs are in brackets.

Table 2. Correlation coefficients between stimuli’s characteristics.
Valence Arousal Disgust Log. Freq Length Concreteness AoA
Valence 1.00 -.21* -.73*** .29** -.28** .15 -.28**
Arousal -.21* 1.00 .21* .21* .23* -.08 .01
Disgust -.73*** .21* 1.00 -.31** .08 .21* .40***
Log. Freq .29** .21* -.31** 1.00 -.04 -.12 -.36***
Length -.28** .23* .08 -.04 1.00 -.44*** .33***
Concreteness .15 -.09 .21* -.12 -.44*** 1.00 -.09
AoA -.28** .01 .40*** -.36*** .33*** -.09 1.00

AoA = Age of Acquisition. Asterisks indicate significance levels (*** < .001, ** < .01, * < .05).

For the headline manipulation, we randomly selected seven news articles emphasizing the severity of COVID-19 and eight articles downplaying it. We then made screenshots of the headlines, some of which also had illustrations. No disturbing imagery was used. In the severe condition, the images showed patients in hospitals surrounded by healthcare workers in protective gear, an illustration of a SARS-CoV-2 virion, or an infographic. The downplayed condition included images of people protesting the lockdown or enjoying their vacation despite the pandemic. The stimuli are available in Appendix 1 in S1 File and the headlines in Appendix 2 in S1 File.

Procedure

The experiment was programmed in PsychoPy3 [63] and conducted online on the Pavlovia platform at pavlovia.org. Each session started with newspaper headlines appearing on the screen one by one. Depending on the condition, the headlines either highlighted the severity of COVID-19 or downplayed it. The headlines were only shown at the beginning of each session, not before every trial. After the participant looked through all the assigned headlines, the main experiment began. Single words appeared on the screen one after another, and the participant’s task was to rate how disgusting each word felt to them (1 = “not at all” to 5 = “extremely”) and how positive/negative it felt (1 = “very negative” to 5 = “very positive”). Three questionnaires were presented after the main experiment: the Disgust Sensitivity Revised scale (DS-R) ([37], modified by [38]), the Wilson-Patterson (W-P) Conservatism Scale [64], and a short language background questionnaire. Before submitting their data, each participant was explicitly asked whether they wanted to withdraw their data.

Data analysis

All the stimuli, raw data, and scripts used in this experiment are available on Open Science Framework. Initially, the data were analyzed using generalized additive mixed modeling (GAMM) for ordinal data following [65]. However, the resulting analysis was highly complex, and we were advised to use linear mixed modeling instead. The results of the two analyses were virtually identical, and we therefore report the linear mixed models for simplicity. The full GAMM analysis with the scripts and plots is available for review on OSF. Linear mixed-effects regression models were fitted using the lmer function from the lme4 package (1.1–27.1) [66]. P-values were obtained using the sjPlot::tab_model function from the sjPlot package (2.8.9) [67]. The results were plotted with the same package. The scores in the DS-R and W-P questionnaires were standardized (by subtracting the mean and dividing by the standard deviation); word frequency was log-transformed. The models included the maximal supported random structure (random intercepts for subjects and items; by-subject slopes were tested, but the models did not converge). Since the correlation between a word’s inherent disgust and valence was high (r = -.73), we additionally tested each fitted model using the check_collinearity() function from the performance package (0.8.0) [68]. The function provides Variance Inflation Factors (VIF) for each term in a model. All VIFs were < 2.5 for both the disgust and valence rating models, so neither had collinearity issues. The correlation between the DS-R and W-P scores was .17. Cronbach’s alpha was .74 for the DS-R inventory and .80 for the W-P inventory, indicating good questionnaire reliability.
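As a rough illustration of two of the preprocessing steps described above, here is how the questionnaire standardization and Cronbach’s alpha could be computed. The original analysis was done in R with lme4 and related packages; this Python sketch is only illustrative, and the use of the population (rather than sample) standard deviation is an assumption the paper does not specify.

```python
import statistics

def standardize(scores):
    """Z-score a list of questionnaire totals: subtract the mean and
    divide by the standard deviation, as done for DS-R and W-P scores.
    (Population SD is an assumption; the paper does not specify.)"""
    mu = statistics.mean(scores)
    sd = statistics.pstdev(scores)
    return [(s - mu) / sd for s in scores]

def cronbach_alpha(item_matrix):
    """Cronbach's alpha for a respondents-by-items score matrix,
    the reliability measure reported for the DS-R (.74) and W-P (.80)."""
    k = len(item_matrix[0])
    item_vars = [statistics.pvariance([row[i] for row in item_matrix])
                 for i in range(k)]
    total_var = statistics.pvariance([sum(row) for row in item_matrix])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

Standardizing the individual-difference predictors in this way puts DS-R and W-P on the same scale, so their fixed-effect estimates in the mixed models are directly comparable.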

Results

Since disgust sensitivity is not a stable measure, presenting the DS-R survey before or after the experiment had its pros and cons. Completed before the experiment, the questionnaire may prime the participants beyond the effect of headlines, or they may guess the purpose of the study. There is also evidence that mere exposure to disgust surveys before the experiment increases temporal disease salience and intergroup bias [69]. If the survey is completed after the experiment, the scores may be shifted, especially if the participants were exposed to the severe headlines. Since we opted for post-experiment surveys, we checked the mean and the range of DS-R scores per condition to detect any abnormalities. The mean for the downplayed condition was 59.5 (range 22 to 80), and for the severe condition 62.8 (range 26 to 88). A Student’s t-test showed that the means did not differ significantly (t = -0.84, df = 62, p-value = .404) and thus were unlikely to present a confound.
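The reported comparison (t = -0.84, df = 62) is consistent with a pooled-variance two-sample Student’s t-test, since df equals the total sample size minus two. A minimal sketch of the test statistic under that assumption:

```python
import statistics

def two_sample_t(a, b):
    """Pooled-variance (Student's) two-sample t statistic and degrees of
    freedom, as implied by df = n1 + n2 - 2 in the reported comparison
    of DS-R means across headline conditions."""
    na, nb = len(a), len(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    t = (statistics.mean(a) - statistics.mean(b)) / (pooled * (1 / na + 1 / nb)) ** 0.5
    return t, na + nb - 2
```

The p-value would then come from the t distribution with the returned degrees of freedom (e.g., via scipy.stats.t.sf in Python, or t.test in R, where the analysis was actually run).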

The correlation between disgust and valence ratings was -.48. This was expected since word disgust and valence were strongly correlated (-.73).

Disgust rating. We fitted a fully specified model with the following predictors: control variables (log frequency, word arousal, word valence, native speaker (ns) status), words’ characteristics (word disgust), headline condition, participants’ individual differences (DS-R, W-P) and two- and three-way interactions between all the predictors excluding control variables. The results are provided in Table 3.

Table 3. Summary of the linear mixed-effects model with disgust rating as a dependent variable.
Predictors Estimates CI p
(Intercept) 2.98 2.59–3.38 < .001 ***
log freq 0.01 -0.04–0.05 .794
ns [ns] -0.07 -0.31–0.18 .598
wd arousal 0.02 -0.09–0.13 .732
wd valence -0.02 -0.07–0.04 .511
wd disgust 0.56 0.50–0.62 < .001 ***
headline [covid severe] 0.11 -0.12–0.34 .332
DS-R 0.27 0.11–0.43 .001 **
W-P -0.11 -0.27–0.05 .166
headline [covid severe] * wd disgust -0.05 -0.08–-0.01 .012 *
headline [covid severe] * W-P 0.25 0.01–0.48 .038 *
headline [covid severe] * DS-R -0.06 -0.30–0.18 .619
DS-R * wd disgust 0.08 0.06–0.11 < .001 ***
W-P * wd disgust -0.07 -0.10–-0.05 < .001 ***
DS-R * headline [covid severe] * wd disgust -0.07 -0.10–-0.03 < .001 ***
W-P * headline [covid severe] * wd disgust -0.01 -0.05–0.03 .648
Random Effects
σ2 0.99
τ00 word 0.06
τ00 participant 0.20
ICC 0.21
N participant 64
N word 99
Observations 6336
Marginal R2 / Conditional R2 0.339 / 0.478

Model’s formula: disgust rating ~ log freq + wd arousal + wd valence + ns + DS-R*headline*wd disgust + W-P*headline*wd disgust + (1 | participant) + (1 | word). Asterisks indicate significance (*** < .001, ** < .01, * < .05, . < .07).

None of the control variables reached significance; in particular, native speaker status had no effect on word ratings. There were expected main effects of word disgust and participants’ disgust sensitivity: more disgust-prone participants rated everything as more disgusting, and more disgusting words were rated as more disgusting. These effects remained meaningful even in the presence of higher-level interactions with these variables. We will now go over all the significant interactions in detail.

Political ideology interacted significantly with the headline (Fig 1A) and with word disgust (Fig 1B) (the three-way interaction was not significant). Fig 1A supports our prediction about the politicization and polarization of the COVID-19 pandemic. The plot shows that the headlines produced the exact opposite effect on participants depending on what side of the conservatism scale they were on. More liberal participants (lower W-P score) rated the stimuli as more disgusting in the downplayed compared to the severe condition. In contrast, the effect of the headline was reversed and stronger for more conservative participants. The severe headlines made them rate the stimuli as substantially more disgusting compared to the downplayed condition. Fig 1B shows that political ideology also affected baseline disgust ratings. On average, less conservative participants assigned more extreme disgust ratings compared to more conservative participants, rating high-disgust words substantially higher and low-disgust words slightly lower. We are not aware of any previous studies that investigated the effects of political ideology on word ratings. This is a novel finding that warrants further research.

Fig 1. Predicted values (marginal effects) of the interaction between W-P and headline (A) and W-P and word disgust (B) with disgust rating as a dependent variable.

Fig 1

Two two-way interactions with word disgust were significant: DS-R x word disgust and headline x word disgust. In addition, the three-way interaction between headline, word disgust, and DS-R was significant (Fig 2). Fig 2A shows that DS-R and word disgust interacted in the downplayed condition only. The lines for the quantiles of word disgust are virtually parallel following the severe headlines, reflecting main effects of word disgust and DS-R, whereas they are fan-shaped following the downplaying ones. High-disgust words (disgust index > 0) had a much steeper slope than low-disgust words (disgust index < 0) in the downplayed condition. This suggests that downplaying the threat of the virus has a larger effect for more disgusting stimuli if the participant is more sensitive to disgust. Moving slightly higher on the DS-R scale in this case leads to a substantial increase in disgust ratings for high-disgust words but a much smaller increase for low-disgust words. Fig 2B additionally shows that low disgust-prone participants consistently rated words higher for disgust in the severe condition compared to the downplayed one, and the more disgusting the word, the larger the increase relative to the downplayed condition. This indicates that the headline manipulation was successful. The results for high disgust-prone participants are more complicated: in the severe condition, they rated low-disgust words higher for disgust but high-disgust words lower.

Fig 2. Predicted values (marginal effects) of the interaction between DS-R, word disgust, and headline with disgust rating as a dependent variable.

Fig 2

Panel A shows a breakdown by headline, Panel B shows a breakdown by word disgust.

Valence rating. We fitted the same model as for disgust rating, with the only difference that word disgust was now a control variable and word valence was examined in interaction with all the other predictors. The results are provided in Table 4.

Table 4. Summary of the linear mixed-effects model with valence rating as a dependent variable.
Predictors Estimates CI p
(Intercept) 2.26 1.93–2.59 < .001 ***
log freq -0.00 -0.04–0.04 .897
ns [ns] 0.03 -0.08–0.15 .591
wd arousal -0.11 -0.21–0.00 .051 .
wd disgust -0.03 -0.09–0.02 .271
wd valence 0.49 0.44–0.54 < .001 ***
headline [covid severe] 0.01 -0.10–0.11 .927
DS-R -0.15 -0.23–-0.08 < .001 ***
W-P 0.10 0.02–0.17 .014 *
headline [covid severe] * wd valence -0.02 -0.05–0.00 .053 .
headline [covid severe] * W-P -0.00 -0.11–0.11 .950
headline [covid severe] * DS-R 0.09 -0.03–0.20 .134
DS-R * wd valence 0.05 0.03–0.07 < .001 ***
W-P * wd valence -0.06 -0.07–-0.04 < .001 ***
DS-R * headline [covid severe] * wd valence -0.03 -0.05–-0.00 .032 *
W-P * headline [covid severe] * wd valence 0.01 -0.01–0.04 .329
Random Effects
σ2 0.53
τ00 word 0.06
τ00 participant 0.04
ICC 0.16
N participant 64
N word 99
Observations 6336
Marginal R2 / Conditional R2 0.491 / 0.573

Model’s formula: valence rating ~ log freq + wd arousal + wd disgust + ns + DS-R*headline*wd valence + W-P*headline*wd valence + (1 | participant) + (1 | word). Asterisks indicate significance (*** < .001, ** < .01, * < .05, . < .07).

Naturally, word valence was directly correlated with participants’ ratings: the more positive the word, the more positively it was rated, and vice versa. This effect also remained in the presence of higher-level interactions. Word arousal was marginally significant: on average, words with lower arousal were rated as slightly more positive, and vice versa. This is in line with the weak negative correlation between word valence and word arousal in our experiment (r = -.21, see Table 2). We also found main effects of both disgust sensitivity and political ideology, which cannot be interpreted in isolation given the higher-level interactions.

A two-way interaction between DS-R and word valence as well as a three-way interaction between headline, DS-R, and word valence were significant. Fig 3A corroborates the effect found for disgust ratings. Once again, word valence mostly interacted with DS-R in the downplayed condition, with the lines having a distinct fan shape. Words with the lowest valence had a much steeper slope than words with the highest valence, suggesting that individual disgust sensitivity has a much stronger effect on low- than on high-valence words. Fig 3B shows that, as with disgust, the least disgust-prone individuals rated words as more negative in the severe compared to the downplayed condition, and this effect was largest for the most negative words. The most disgust-prone participants, however, again rated negative stimuli as more negative in the downplayed compared to the severe condition. It should be noted that the difference between the two ends of the DS-R scale was very small, even for the most negative words.

Fig 3. Predicted values (marginal effects) of the interaction between DS-R, word valence, and headline with valence rating as a dependent variable.

Fig 3

Panel A shows a breakdown by headline, Panel B shows a breakdown by word valence.

The interaction between the participant’s political ideology and word valence (Fig 4) corroborated and further extended the effect found for disgust ratings (Fig 1B). On average, more liberal participants rated all positive words as more positive and negative words as more negative regardless of the condition. The joint findings from the disgust and valence ratings essentially translate to less conservative participants being more extreme with their ratings and having a broader range of responses.

Fig 4. Predicted values (marginal effects) of the interaction between W-P and word valence with valence rating as a dependent variable.

Fig 4

Discussion

As predicted, the headlines affected participants’ disgust ratings differently depending on their political ideology and disgust proneness. Less conservative participants rated the stimuli higher for disgust following the downplaying headlines, whereas more conservative participants assigned higher disgust ratings following the severe headlines. We can think of two possible explanations for these results, both of which reflect the dominant COVID-19 narrative in the two political spheres [55, 58, 70]. The first, and the simplest, explanation is that the stance representing one’s political outgroup (headlines emphasizing the threat of the virus for the conservative participants and headlines downplaying it for the liberal participants) evokes more disgust and negative affect. Since hardly any language processing is done without affective evaluation [71], it is possible that a take on the virus so far from one’s own would elicit a strong emotional response. However, in that case one would logically expect not only higher disgust ratings but also more negative valence ratings. This was not the case: the interaction between political ideology and headline was not significant for valence ratings. The second and more nuanced explanation is habituation of disgust and thus an asymmetric response to the severe headlines. Since the dominant view among liberal politicians in Canada, largely supported by the liberal public, revolves around the high danger of the virus, this stance may result in desensitization and a consequent lack of a strong disgust response. The opposite is true for more conservative participants, the dominant conservative view being the lack of such danger. Importantly, the severe headlines did elevate disgust levels in more conservative participants despite the lack of trust in contradictory media [55] and in science in general [70] found for conservative participants in previous research.
Again, two explanations are possible: 1) more conservative participants did not treat the severe headlines as unreliable, or 2) more conservative participants did consciously register the severe headlines as unreliable, but their bodies still reacted to the increased disease salience by elevating their disgust levels. It is unfortunately not possible at this time to choose between the two explanations.

Further, we found initial evidence that subscribing to a more liberal or more conservative ideology may correlate with the extremity of ratings. Political ideology affected word ratings for both disgust and valence regardless of the condition, with less conservative participants providing more extreme ratings (higher disgust ratings for disgusting words and lower for non-disgusting words, as well as higher valence ratings for positive words and lower for negative words). To the best of our knowledge, this is a novel finding. Individual differences in average ratings (so-called “rater’s generosity”) were previously found to affect acceptability ratings both by themselves and in interaction with other predictors [65], but we are not aware of any research on factors affecting rating range or extremity. At the very least, our findings highlight the importance of accounting for individual differences in rating studies, in particular for political ideology, which has been by and large ignored in linguistic research.

Individual disgust sensitivity affected word ratings both by itself and in interaction with other predictors. Overall, disgust proneness was correlated with higher disgust ratings. This was expected from the definition of disgust proneness and, importantly, shows that word ratings for disgust can reliably serve as a proxy for participants’ disgust. Disgust sensitivity also interacted with word disgust and the headlines. Thus, downplaying the threat of the virus had the largest effect for the most disgusting and negative words: a slight increase in participants’ disgust proneness was associated with a significant increase in word ratings for disgust and negativity. In addition, whereas low disgust-prone participants consistently rated all the words as more disgusting in the severe condition, high disgust-prone participants rated only low and moderately disgusting words, but not highly disgusting ones, higher for disgust in that condition. The same effect was observed for valence ratings. High-disgust and low-valence words were actually rated lower for disgust and higher for valence in the severe compared to the downplayed condition by high disgust-prone participants. While the reasons for this effect are not entirely clear, it should be noted that the effect itself was very small and that high disgust-prone participants rated high-disgust and low-valence words very close to the top of the disgust scale and the bottom of the valence scale in both conditions. This may suggest a ceiling effect.

All in all, the results of this study show that individual differences such as disgust proneness and political ideology dynamically interact with how safe the environment around the participant feels and have a measurable effect on language processing, specifically on word ratings. In the next experiment, we intended to find out whether this effect also extends to online language processing.

Experiment 2—Lexical decision

Materials and methods

Participants

Eighty-six students at the University of Alberta received partial course credit for their participation. None had participated in Experiment 1. Seventeen participants were removed from the data analysis: seven indicated they wished their data to be withdrawn, eight provided incorrect answers to trap questions in the Disgust Sensitivity survey, and two had mean reaction times above 3500 ms. This left us with sixty-nine participants (51F [83%], mean age = 20.7, range 17–59, SD = 5.6). Forty-two were native speakers of English. Participants who did not choose English as the first language they acquired in childhood were classified as non-native speakers. The participants were asked to rate their English proficiency on a 5-point scale. Mean self-reported English proficiency was 4.0/5 (SD = .83) for non-native speakers and 5.0/5 (SD = .15) for native speakers. The plan for this study was reviewed for its adherence to ethical guidelines by a Research Ethics Board at the University of Alberta (reference number Pro00102348).

Materials

The real English words were the same as in Experiment 1. Ninety-three pseudowords were created by modifying one or more letters in existing English words. A native speaker of English checked the final list of pseudowords to make sure it contained no real but archaic words, no strings that looked like typos of real words, no slang words, and no strings that, while not currently existing, sounded like plausible neologisms. The full list of the stimuli, including the pseudowords, is available in Appendix 1 in S1 File.

Procedure

A visual lexical decision task was used. Each trial began with a letter string appearing in the center of the computer screen, where it remained until the participant responded. The participants indicated whether the letter string was a real English word or not by pressing either the left or the right arrow key (the assignment of “word”/“non-word” to the two arrow keys was counterbalanced between participants). The instruction was to respond as quickly and as accurately as possible. Seven practice trials preceded the experimental trials, with feedback after each. No feedback was given in the main session. Opportunities for taking a break were given after each third of the stimuli. Otherwise, the procedure was the same as in Experiment 1.

Data analysis

Overall mean accuracy for words was 91.2% and for non-words 85.5%. For native speakers, mean accuracy for words and non-words was 96.3% and 89.8%, respectively. For non-native speakers, mean accuracy for words and non-words was 83.7% and 79.2%, respectively.

Before data analysis, pseudowords, incorrect responses (8.8%), two participants with mean RTs over 3500 ms (2.4%), and responses below 100 ms or above 2500 ms (1.9%) were removed. In addition, we removed trials with RTs more than 2.5 SDs above or below each participant’s mean (3%). The reaction times were then reciprocally transformed and scaled (-1000/RT) to avoid extremely small numbers [72], and the right tail of the resulting distribution was removed (0.4%). Under this transformation, smaller numbers (i.e., bigger negative numbers) correspond to shorter lexical decision times, so the plots can be read intuitively. One more predictor, the match/mismatch between the participant’s dominant hand and the location of the “word” button, was added to the model. The model included the maximal supported random structure (random intercepts for subjects and items). The by-subject slope for trial was tested and produced a singular fit, suggesting an overfitted model, so we removed it. Compared to Experiment 1, a few more control variables were used: age of onset, concreteness, and orthographic length (all standardized). The correlation between participants’ W-P and DS-R scores was higher (r = .4) than in Experiment 1. As before, we tested the fitted fully specified model using the check_collinearity() function from the performance package (0.8.0). The results showed several potential collinearity issues (VIFs for word disgust, word valence, DS-R, and W-P were all in the 3.5–4 range; VIFs for interactions were in the 5–8 range even though the predictors were standardized). Testing word disgust and word valence in separate models did not remove multicollinearity to a sufficient extent (VIFs for DS-R and W-P were still > 3.7). We thus had to run four different models: word disgust with DS-R, word disgust with W-P, word valence with DS-R, and word valence with W-P. This resolved the multicollinearity (all VIFs < 2.7).
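For readers who want to reproduce the trimming and transformation pipeline outside R, the steps described above can be sketched as follows. This is a minimal illustration in Python; the function names and the sample RTs are hypothetical, and the authors’ actual analysis scripts are available on OSF.

```python
# Illustrative sketch of the RT trimming and reciprocal transformation
# described above; not the authors' actual analysis code.
from statistics import mean, stdev

def transform_rt(rt_ms):
    """Reciprocal transform, -1000/RT: bigger negative numbers
    correspond to shorter lexical decision times."""
    return -1000.0 / rt_ms

def trim_participant(rts, lower=100, upper=2500, sd_cutoff=2.5):
    """Drop RTs outside the absolute bounds, then RTs more than
    sd_cutoff standard deviations from the participant's mean."""
    in_bounds = [rt for rt in rts if lower <= rt <= upper]
    m, s = mean(in_bounds), stdev(in_bounds)
    return [rt for rt in in_bounds if abs(rt - m) <= sd_cutoff * s]

rts = [450, 620, 90, 3000, 510, 2400, 480]   # hypothetical trial RTs (ms)
clean = trim_participant(rts)                # 90 and 3000 fall outside bounds
transformed = [transform_rt(rt) for rt in clean]
```

Note that the absolute bounds are applied before the per-participant SD trimming, so the mean and SD are not distorted by extreme outliers.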
Cronbach’s alpha was .78 for the DS-R inventory and .83 for the W-P inventory, again indicating good reliability.
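The reliability index reported above, Cronbach’s alpha, can be computed from an item-response matrix as follows. This is an illustrative Python sketch; the function name and the response matrix are hypothetical, not the study’s actual DS-R or W-P data.

```python
# Minimal sketch of Cronbach's alpha for a participants-by-items score
# matrix; the data below are hypothetical.
from statistics import pvariance

def cronbach_alpha(scores):
    """scores: one row per participant, one column per item.
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(scores[0])
    items = list(zip(*scores))  # transpose to per-item columns
    item_vars = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in scores])
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 4 participants x 3 items; highly consistent responses
# yield an alpha close to 1.
alpha = cronbach_alpha([[2, 3, 3], [4, 4, 5], [1, 2, 2], [3, 3, 4]])
```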

Results

We first checked the range and means of disgust sensitivity scores depending on the headline condition. The mean for the downplayed condition was 63.4 (range 25 to 91), and for the severe condition 62.1 (range 21 to 89). A Student’s t-test showed that the means were not significantly different (t = 0.52, df = 67, p = .605).
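The comparison above is a two-sample Student’s t-test with pooled variance and df = n1 + n2 - 2. A minimal sketch of the computation, in Python with hypothetical score lists rather than the actual DS-R data:

```python
# Pooled-variance (Student's) two-sample t-test, as used to compare
# mean disgust sensitivity across headline conditions.
from statistics import mean, variance
from math import sqrt

def students_t(a, b):
    """Return (t statistic, degrees of freedom) with df = n1 + n2 - 2."""
    n1, n2 = len(a), len(b)
    pooled = ((n1 - 1) * variance(a) + (n2 - 1) * variance(b)) / (n1 + n2 - 2)
    t = (mean(a) - mean(b)) / sqrt(pooled * (1 / n1 + 1 / n2))
    return t, n1 + n2 - 2

t, df = students_t([2, 4, 6], [1, 2, 3])  # hypothetical condition scores
```

With 69 participants split across the two conditions, this formula gives the df = 67 reported above.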

The outputs of the four models are available in Appendices 3–6 in S1 File. In all the models, there were significant main effects of log frequency, age of acquisition, and native speaker status. Consistent with previous research, higher-frequency words were associated with shorter RTs than lower-frequency words. Also in line with previous research, non-native speakers had longer reaction times than native speakers [14, 15]. Age of acquisition affected RTs in the expected direction: words acquired earlier were recognized faster. In addition, concreteness was significant only in the models with word disgust: less concrete words were recognized slightly faster than more concrete ones.

We will now go over all the interactions. First, there were significant two-way interactions of W-P x word disgust and W-P x word valence. Fig 5A shows that the effect of word disgust changed direction depending on the side of the conservatism scale. More liberal participants recognized high-disgust words faster than low-disgust words. In contrast, the more disgusting the word, the longer it took more conservative participants to recognize it. This is in line with the findings for disgust proneness by [46], since conservatism and disgust proneness have been found to be correlated both in prior research [49–51] and in this experiment (r = .4). Fig 5B corroborates this effect for word valence. While more liberal participants had shorter RTs to negative words, more conservative participants had longer RTs to negative words. The opposite was true for positive words.

Fig 5. Predicted values (marginal effects) of the interaction between W-P and word disgust (A) and W-P and word valence (B) with reciprocally transformed RTs as a dependent variable.

Fig 5

Bigger negative numbers correspond to shorter lexical decision times.

In addition, there was a significant two-way interaction of DS-R x word disgust and a marginally significant interaction of DS-R x word valence (p = .06). As can be seen in Fig 6A, we corroborated the direction of the findings of [46], although the effect was very small for high disgust-prone participants. The least disgust-prone participants reacted faster to disgusting than to non-disgusting words, and this facilitatory advantage decreased as participants moved higher on the DS-R scale. The marginally significant interaction with word valence was in the same direction (Fig 6B): negative words, compared to positive words, produced a facilitatory effect for low disgust-prone participants.

Fig 6. Predicted values (marginal effects) of the interaction between DS-R and word disgust (A) and DS-R and word valence (B) with reciprocally transformed RTs as a dependent variable.

Fig 6

Bigger negative numbers correspond to shorter lexical decision times.

Since political ideology and disgust sensitivity affected RTs consistently (disgusting and negative words had a facilitatory effect on more liberal and on less disgust-prone participants), it is important to know which predictor, W-P or DS-R, explained more variance. When W-P and DS-R were added together to the models for word disgust and word valence, only W-P came out significant (word disgust x W-P: p < .001; word valence x W-P: p = .014). Stepwise backward elimination using the step() function from the lmerTest package (3.1–3) [73] showed that, apart from age of onset, native speaker status, and log frequency, only the word disgust x W-P interaction significantly improved the model’s fit. Thus, political ideology was more predictive of RTs in this study than disgust sensitivity.

Importantly, no effect of headline was found, either by itself or in interaction with other predictors.

Discussion

The main finding of this experiment was the effect of political ideology on word recognition latencies. Lower conservatism was associated with faster recognition of disgusting and negative words, whereas higher conservatism was associated with slower recognition of disgusting and negative words, regardless of the headline. Assuming that disgust and conservatism are correlated, this is in line with the findings of [46]. Importantly, however, the effect of disgust proneness disappeared when the two predictors were added to the models together. This suggests that political ideology was more important in predicting RTs than disgust proneness. This has important implications for future studies using the lexical decision task, especially with words varying in disgust and valence. Moreover, removing non-native speakers did not change the results for political ideology (interactions of W-P with both word disgust and valence were significant and in the same direction) but removed the effect of disgust proneness. This once again testifies to the stability of the results for political ideology and provides evidence that combining native and non-native speakers did not have any substantial effect on the results.

Importantly, we did not find any effect of the headlines on word recognition latencies. This finding, although unexpected, is not in direct contradiction to the results of Experiment 1, where such an effect was observed. The lexical decision task and the rating task reflect very different dimensions of processing. The lexical decision task indexes the ease of accessing a word in long-term memory, whereas ratings are an explicit affective evaluation that is not time-sensitive. Thus, the results of this experiment suggest that fluctuating levels of disgust may not modulate the accessibility of disgust-related concepts in long-term memory while still modulating their affective evaluation.

General discussion

Our study tested the hypothesis that the media’s stance on the pandemic may elevate or reduce participants’ disgust, which would affect word ratings and word recognition latencies. We also predicted that person-based factors such as disgust proneness and political ideology would mediate the effect. A word rating study (Experiment 1) and a lexical decision study (Experiment 2) found partial support for these hypotheses. In brief, the main findings were as follows:

  • More liberal participants rated the stimuli as more disgusting after being exposed to the headlines downplaying the threat of COVID-19, whereas more conservative participants gave higher disgust ratings following the headlines emphasizing it.

  • More liberal participants were more extreme with their ratings and gave a broader range of responses (rating disgusting words as more disgusting and negative words as more negative, as well as rating non-disgusting words as less disgusting and positive words as more positive) than their more conservative peers regardless of the condition.

  • Disgusting and negative words had a facilitatory effect for more liberal participants (shorter RTs) and an inhibitory effect for more conservative participants (longer RTs).

  • More disgust-prone individuals rated everything as more disgusting than less disgust-prone ones did.

  • In the severe condition, low disgust-prone participants rated all the stimuli as more disgusting and negative, whereas high disgust-prone participants only rated low to moderately disgusting words as more disgusting and negative.

Affective word ratings and political ideology

As expected, political orientation had a clear impact on word ratings. As we noted in the Introduction, the perception of the severity of the virus became an identity marker for both ends of the political spectrum. Given such drastic polarization, it is not surprising that the two types of headlines produced exactly opposite effects on the participants depending on their political ideology. More liberal participants in our study were more disgusted by the headlines downplaying the severity of COVID-19 than by those emphasizing it, rating everything as more disgusting afterwards. In contrast, more conservative participants assigned higher disgust ratings following the severe headlines. We offer two possible explanations for this result, one based on direct affective evaluation and the other on a disgust system response shaped by stimulus habituation. According to the first explanation, the stance on the virus associated with the political outgroup (downplaying headlines for the liberal participants and severe headlines for the conservative ones) evoked a strong emotional response, which translated into higher ratings on the disgust scale. However, in this case, one would also expect headline-dependent lower ratings on the valence scale, and this was not what we found. That leaves the second possibility. As the liberal narrative revolves around the costs of not treating the virus seriously enough, liberal participants may have become habituated and desensitized to it. They may thus perceive the emphasis on the danger of COVID-19 as the “default”, one that no longer alerts their disgust system, whereas headlines contradicting this view might instantly elevate their disgust levels, signaling danger. The opposite, of course, should be true for more conservative participants.
Note that one of our hypotheses was that more conservative participants would discard the severe headlines as alarmist, since previous research showed that conservatives have less trust in contradictory media and firmly believe that COVID-19 does not pose big health risks [55]. The current findings suggest that this was either not the case or, if it was, it did not stop their disgust system from ramping up. All in all, this is in line with mounting evidence that conservatives are more prone to disgust [49–51]. Even though previous studies found conservatives to be less concerned about the pandemic and less eager to engage in social distancing than liberals [55, 70], our results show that highlighting the danger of the virus still makes conservative participants give higher disgust ratings. Whether this translates into greater adherence to safety protocols is a topic for further research.

One novel finding of our study is the more extreme disgust and valence ratings given by more liberal participants compared to their more conservative peers, regardless of the condition. Disgusting and negative words were rated as more disgusting and more negative by more liberal participants, and the opposite was true for non-disgusting and positive words. We are not aware of any research examining whether political ideology correlates with the extremity of ratings. It is entirely possible that this broader range of ratings is additionally mediated by other personality traits; this needs to be verified by future research.

Affective word ratings and disgust proneness

Disgust proneness affected word ratings over and above the effects of political ideology. Regardless of the headline type, more disgust-prone individuals rated all the stimuli as more disgusting, and negative stimuli as more negative, than less disgust-prone individuals. This demonstrates that disgust ratings can serve as a good proxy for participants’ disgust and adds to the growing body of evidence regarding the effects of disgust proneness on cognition in general and language processing in particular. [40, 41] found that disgust sensitivity was positively correlated with pupil dilation during the processing of stereotype-based clashing statements, suggesting that more disgust-prone individuals may experience greater arousal when interacting with stimuli that are disgusting either physically or morally. The results of the current study further indicate that even single-word processing can be significantly affected by the participant’s disgust sensitivity. In addition, disgust proneness significantly interacted with the headline type. While low disgust-prone participants rated all the stimuli as more disgusting and negative when the threat of COVID-19 was highlighted (severe headlines), high disgust-prone participants rated only low and moderately disgusting words as more disgusting and only positive words as more negative in the severe condition. As we addressed in the Discussion after Experiment 1, this may be due to a ceiling effect, since the ratings assigned by high disgust-prone participants to extremely valenced stimuli were very close to the top of the disgust scale and the bottom of the valence scale.

Political ideology vs disgust proneness in lexical access: The role of the pandemic

Our findings from the lexical decision experiment partially corroborated and extended the results for French by [46]. The authors found that disgusting words had a facilitatory effect on lexical recognition in less disgust-prone participants and an inhibitory effect in more disgust-prone participants. Our study, however, found that political ideology was more predictive of RTs than disgust sensitivity. Even though the general direction of the effect was the same (more liberal participants patterned like less disgust-prone ones), only political ideology significantly improved the model’s fit when both factors were examined together. Overall, disgusting and negative words had a facilitatory effect on word recognition for more liberal participants and an inhibitory effect for more conservative participants. To the best of our knowledge, the interaction between political ideology and lexical decision times has not yet been researched. One possible explanation for the dominant effect of political ideology in our study is the strong political component of the pandemic from its very beginning. [55] suggested that political ideology was uniquely predictive of participants’ COVID-19 behavior even when controlling for such variables as belief in science and COVID-related anxiety. Thus, it may be that political views have temporarily become a more salient marker of the behavioral immune system response than disgust sensitivity per se. This is, of course, a speculative idea that needs to be addressed by further research. One way to verify it would be to conduct the same study both during and after the pandemic.

Differential effects of traits and states on lexical access

We did not find an effect of dynamically changing disgust levels (induced by the headlines) on lexical access. Even though the headlines successfully affected participants’ ratings in Experiment 1, they did not have an effect on RTs in Experiment 2, either by themselves or in interaction with person-based factors. Unlike the headlines, however, political ideology was predictive of word recognition latencies. Why would that be the case? Previous research has found political views to be just one manifestation of a broader cognitive and affective make-up and to correlate robustly with threat perception [74]. It is thus not surprising that aligning with a particular political ideology may make disgust-related concepts in long-term memory more or less accessible (see [75] for converging findings with threat-related concepts). Our results thus suggest a difference between fluctuating states (i.e., the participant’s emotional response to a particular set of headlines) and stable traits (i.e., aligning with a more conservative or more liberal ideology) in affecting the ease of accessing disgusting and negative words. One alternative possibility to consider is that exposure to COVID-related news may need to be longer to produce an effect on lexical decision (we showed only a handful of headlines, which the participants could move through at their own pace). This could be tested by future research.

Limitations of present research

Our study had several limitations that need to be noted. First and foremost, we did not collect data on participants’ socioeconomic status, belief in science, self-perceived likelihood of contracting COVID-19, or COVID-19-related anxiety. Second, a convenience sample of university students produced a somewhat skewed distribution of gender, age, and political ideology (most participants were young, more liberal females), which may have affected the results. That said, within the range of scores obtained in this experiment, the distribution was very close to normal.

One other concern needs to be addressed. Since our headlines reported on the pandemic, it is important to make sure that word recognition latencies were not affected by the presence of words directly related to the pandemic or to disease in general. As no lists of pandemic-related words exist, it is difficult to estimate exactly how many words in the final dataset were disease-related. Using our best judgment, we counted 6 out of 99 such words, with their disgust indexes given in parentheses: “unhealthy” (-0.7), “germ” (0.9), “sickening” (1.24), “deadly” (0.8), “parasite” (1.5), “disease” (1). Three of those words occurred in the severe headlines in full (“sickening”, “disease”, “deadly”) and one in part (“bloodthirsty”–“blood”). As one can see, these words were relatively dispersed along the disgust scale. To make sure the results of Experiment 2 were not contaminated by this overlap, we reran the models without the four overlapping words. While disgust proneness was no longer significant, political ideology remained so. This, once again, testifies to the stability of the effect of political ideology.

All in all, our studies found that headlines about the pandemic not only affect participants’ disgust levels but also interact with a range of person-based factors, namely how prone the participant is to disgust and what political ideology they align with.

Conclusion

The study shows that dynamic disgust levels affect word ratings but not core language processing mechanisms, such as lexical access. It also provides tentative evidence that the politicization and polarization of the pandemic have led to tangible consequences in how strongly an individual’s disgust system activates in response to different types of headlines.

Supporting information

S1 File. Contains all the supporting stimuli and tables.

(DOCX)

Acknowledgments

We thank the three reviewers and the editor whose insightful comments and suggestions have substantially improved this manuscript.

Data Availability

All the stimuli, raw data, and scripts used in this experiment are available on Open Science Framework at https://osf.io/5ep9g/.

Funding Statement

This research was supported by a Social Sciences and Humanities Research Council of Canada (http://www.sshrc-crsh.gc.ca/) Partnership Grant (Words in the World, 895-2016-1008). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1. van den Brink D, Hagoort P. The Influence of Semantic and Syntactic Context Constraints on Lexical Selection and Integration in Spoken-Word Comprehension as Revealed by ERPs. J Cogn Neurosci. 2004 Jul;16(6):1068–84. doi: 10.1162/0898929041502670
  • 2. Sedivy JC, Tanenhaus MK, Chambers CG, Carlson GN. Achieving incremental semantic interpretation through contextual representation. Cognition. 1999 Jun;71(2):109–47. doi: 10.1016/s0010-0277(99)00025-6
  • 3. Otten M, Van Berkum JJA. Discourse-Based Word Anticipation During Language Processing: Prediction or Priming? Discourse Process. 2008 Nov 11;45(6):464–96.
  • 4. van den Brink D, Van Berkum JJA, Bastiaansen MCM, Tesink CMJY, Kos M, Buitelaar JK, et al. Empathy matters: ERP evidence for inter-individual differences in social language processing. Soc Cogn Affect Neurosci. 2012 Feb 1;7(2):173–83. doi: 10.1093/scan/nsq094
  • 5. Nieuwland MS, Van Berkum JJA. When Peanuts Fall in Love: N400 Evidence for the Power of Discourse. J Cogn Neurosci. 2006 Jul 1;18(7):1098–111. doi: 10.1162/jocn.2006.18.7.1098
  • 6. Citron FMM, Weekes BS, Ferstl EC. Effects of valence and arousal on written word recognition: Time course and ERP correlates. Neurosci Lett. 2013 Jan;533:90–5. doi: 10.1016/j.neulet.2012.10.054
  • 7. Kuperman V, Estes Z, Brysbaert M, Warriner AB. Emotion and language: Valence and arousal affect word recognition. J Exp Psychol Gen. 2014 Jun;143(3):1065–81. doi: 10.1037/a0035669
  • 8. Monnier C, Syssau A. Affective norms for 720 French words rated by children and adolescents (FANchild). Behav Res Methods. 2017 Oct;49(5):1882–93. doi: 10.3758/s13428-016-0831-0
  • 9. Sabater L, Guasch M, Ferré P, Fraga I, Hinojosa JA. Spanish affective normative data for 1,406 words rated by children and adolescents (SANDchild). Behav Res Methods. 2020 Oct;52(5):1939–50. doi: 10.3758/s13428-020-01377-5
  • 10. Fairfield B, Ambrosini E, Mammarella N, Montefinese M. Affective Norms for Italian Words in Older Adults: Age Differences in Ratings of Valence, Arousal and Dominance. Papadelis C, editor. PLOS ONE. 2017 Jan 3;12(1):e0169472. doi: 10.1371/journal.pone.0169472
  • 11. Teismann H, Kissler J, Berger K. Investigating the roles of age, sex, depression, and anxiety for valence and arousal ratings of words: a population-based study. BMC Psychol. 2020 Dec;8(1):118. doi: 10.1186/s40359-020-00485-3
  • 12. Ku LC, Chan SH, Lai VT. Personality Traits and Emotional Word Recognition: An ERP Study. Cogn Affect Behav Neurosci. 2020 Apr;20(2):371–86. doi: 10.3758/s13415-020-00774-9
  • 13. Sereno SC, Scott GG, Yao B, Thaden EJ, O’Donnell PJ. Emotion word processing: does mood make a difference? Front Psychol. 2015 Aug 24;6. Available from: http://journal.frontiersin.org/Article/10.3389/fpsyg.2015.01191/abstract
  • 14. Conrad M, Recio G, Jacobs AM. The time course of emotion effects in first and second language processing: A cross cultural ERP study with German-Spanish bilinguals. Front Psychol. 2011;2. Available from: http://journal.frontiersin.org/article/10.3389/fpsyg.2011.00351/abstract
  • 15. Imbault C, Titone D, Warriner AB, Kuperman V. How are words felt in a second language: Norms for 2,628 English words for valence and arousal by L2 speakers. Biling Lang Cogn. 2021 Mar;24(2):281–92.
  • 16. Ferré P, Guasch M, Martínez-García N, Fraga I, Hinojosa JA. Moved by words: Affective ratings for a set of 2,266 Spanish words in five discrete emotion categories. Behav Res Methods. 2017 Jun;49(3):1082–94. doi: 10.3758/s13428-016-0768-3
  • 17. Warriner AB, Kuperman V, Brysbaert M. Norms of valence, arousal, and dominance for 13,915 English lemmas. Behav Res Methods. 2013 Dec;45(4):1191–207. doi: 10.3758/s13428-012-0314-x
  • 18. Pratto F, John OP. Automatic Vigilance: The Attention-Grabbing Power of Negative Social Information. J Pers Soc Psychol. 61(3):380–91. doi: 10.1037//0022-3514.61.3.380
  • 19. Estes Z, Adelman JS. Automatic vigilance for negative words in lexical decision and naming: Comment on Larsen, Mercer, and Balota (2006). Emotion. 2008;8(4):441–4. doi: 10.1037/1528-3542.8.4.441
  • 20. Lang PJ, Bradley MM, Cuthbert BN. Emotion, Attention, and the Startle Reflex. Psychol Rev. 1990;97(3):377–95.
  • 21. Lang PJ, Bradley MM, Cuthbert BN. Motivated attention: Affect, activation, and action. In: Lang PJ, Simons RF, Balaban MT, editors. Attention and orienting: Sensory and motivational processes. Lawrence Erlbaum Associates Publishers; 1997. p. 97–135.
  • 22. Lang A, Dhillon K, Dong Q. The effects of emotional arousal and valence on television viewers’ cognitive capacity and memory. J Broadcast Electron Media. 1995 Jun;39(3):313–27.
  • 23. Wentura D, Rothermund K, Bak P. Automatic vigilance: The attention-grabbing power of approach- and avoidance-related social information. J Pers Soc Psychol. 2000;78(6):1024–37. doi: 10.1037//0022-3514.78.6.1024
  • 24. Scott GG, O’Donnell PJ, Sereno SC. Emotion words and categories: evidence from lexical decision. Cogn Process. 2014 May;15(2):209–15. doi: 10.1007/s10339-013-0589-6
  • 25. Kousta ST, Vinson DP, Vigliocco G. Emotion words, regardless of polarity, have a processing advantage over neutral words. Cognition. 2009 Sep;112(3):473–81. doi: 10.1016/j.cognition.2009.06.007
  • 26. Gao C, Shinkareva SV, Peelen MV. Affective valence of words differentially affects visual and auditory word recognition. J Exp Psychol Gen. 2022. doi: 10.1037/xge0001176
  • 27. Stevenson RA, Mikels JA, James TW. Characterization of the Affective Norms for English Words by discrete emotional categories. Behav Res Methods. 2007 Nov;39(4):1020–4. doi: 10.3758/bf03192999
  • 28. Scott SK, Young AW, Calder AJ, Hellawell DJ, Aggleton JP, Johnsons M. Impaired auditory recognition of fear and anger following bilateral amygdala lesions. Nature. 1997;385(6613):254–7. doi: 10.1038/385254a0
  • 29. Toronchuk JA, Ellis GFR. Disgust: Sensory affect or primary emotional system? Cogn Emot. 2007 Dec;21(8):1799–818.
  • 30. Briesemeister BB, Kuchinke L, Jacobs AM. Discrete Emotion Effects on Lexical Decision Response Times. Sirigu A, editor. PLoS ONE. 2011 Aug 24;6(8):e23743. doi: 10.1371/journal.pone.0023743
  • 31. Ziegler JC, Montant M, Briesemeister BB, Brink TT, Wicker B, Ponz A, et al. Do Words Stink? Neural Reuse as a Principle for Understanding Emotions in Reading. J Cogn Neurosci. 2018 Jul;30(7):1023–32. doi: 10.1162/jocn_a_01268
  • 32. Ponz A, Montant M, Liegeois-Chauvel C, Silva C, Braun M, Jacobs AM, et al. Emotion processing in words: a test of the neural re-use hypothesis using surface and intracranial EEG. Soc Cogn Affect Neurosci. 2014 May;9(5):619–27. doi: 10.1093/scan/nst034
  • 33. Anderson ML. Neural reuse: A fundamental organizational principle of the brain. Behav Brain Sci. 2010 Aug;33(4):245–66. doi: 10.1017/S0140525X10000853
  • 34. Curtis V. Why disgust matters. Philos Trans R Soc B Biol Sci. 2011 Dec 12;366(1583):3478–90. doi: 10.1098/rstb.2011.0165
  • 35. Chapman HA, Anderson AK. Understanding disgust. Ann N Y Acad Sci. 2012 Mar;1251(1):62–76. doi: 10.1111/j.1749-6632.2011.06369.x
  • 36. Moll J, de Oliveira-Souza R, Moll FT, Ignácio FA, Bramati IE, Caparelli-Dáquer EM, et al. The moral affiliations of disgust: A functional MRI study. Cogn Behav Neurol. 2005 Mar;18(1):68–78. doi: 10.1097/01.wnn.0000152236.46475.a7
  • 37.Haidt J, McCauley C, Rozin P. Individual differences in sensitivity to disgust: A scale sampling seven domains of disgust elicitors. Personal Individ Differ. 1994. May;16(5):701–13. [Google Scholar]
  • 38.Olatunji BO, Williams NL, Tolin DF, Abramowitz JS, Sawchuk CN, Lohr JM, et al. The Disgust Scale: Item analysis, factor structure, and suggestions for refinement. Psychol Assess. 2007;19(3):281–97. doi: 10.1037/1040-3590.19.3.281 [DOI] [PubMed] [Google Scholar]
  • 39. Schaller M, Park JH. The behavioral immune system (and why it matters). Curr Dir Psychol Sci. 2011 Apr;20(2):99–103.
  • 40. Hubert Lyall I. It’s personal and disgusting: Extra-linguistic information in language comprehension. University of Alberta; 2019.
  • 41. Hubert Lyall I, Järvikivi J. Individual differences in political ideology and disgust sensitivity affect real-time spoken language comprehension. Front Psychol. 2021 Oct 11;12:699071. doi: 10.3389/fpsyg.2021.699071
  • 42. Cichocka A, Bilewicz M, Jost JT, Marrouch N, Witkowska M. On the grammar of politics, or why conservatives prefer nouns. Polit Psychol. 2016 Dec;37(6):799–815.
  • 43. Eekhof LS, van Krieken K, Sanders J, Willems RM. Reading minds, reading stories: Social-cognitive abilities affect the linguistic processing of narrative viewpoint. Front Psychol. 2021 Sep 28;12:698986. doi: 10.3389/fpsyg.2021.698986
  • 44. Van Berkum JJA, Holleman B, Nieuwland M, Otten M, Murre J. Right or wrong? The brain’s fast response to morally objectionable statements. Psychol Sci. 2009 Sep;20(9):1092–9. doi: 10.1111/j.1467-9280.2009.02411.x
  • 45. Puhacheuskaya V, Järvikivi J. I was being sarcastic!: The effect of foreign accent and political ideology on irony (mis)understanding. Acta Psychol (Amst). 2022 Feb;222:103479. doi: 10.1016/j.actpsy.2021.103479
  • 46. Silva C, Montant M, Ponz A, Ziegler JC. Emotions in reading: Disgust, empathy and the contextual learning hypothesis. Cognition. 2012 Nov;125(2):333–8. doi: 10.1016/j.cognition.2012.07.013
  • 47. Barrett LF, Lindquist KA, Gendron M. Language as context for the perception of emotion. Trends Cogn Sci. 2007 Aug;11(8):327–32. doi: 10.1016/j.tics.2007.06.003
  • 48. Yu Q, Zhuang Q, Wang B, Liu X, Zhao G, Zhang M. The effect of anxiety on emotional recognition: Evidence from an ERP study. Sci Rep. 2018 Dec;8(1):16146. doi: 10.1038/s41598-018-34289-8
  • 49. Hodson G, Costello K. Interpersonal disgust, ideological orientations, and dehumanization as predictors of intergroup attitudes. Psychol Sci. 2007 Aug;18(8):691–8. doi: 10.1111/j.1467-9280.2007.01962.x
  • 50. Terrizzi JA, Shook NJ, McDaniel MA. The behavioral immune system and social conservatism: A meta-analysis. Evol Hum Behav. 2013 Mar;34(2):99–108.
  • 51. Tybur JM, Merriman LA, Hooper AEC, McDonald MM, Navarrete CD. Extending the behavioral immune system to political psychology: Are political conservatism and disgust sensitivity really related? Evol Psychol. 2010 Oct;8(4):147470491000800.
  • 52. Inbar Y, Pizarro D, Iyer R, Haidt J. Disgust sensitivity, political conservatism, and voting. Soc Psychol Personal Sci. 2012 Sep;3(5):537–44.
  • 53. Karwowski M, Kowal M, Groyecka A, Białek M, Lebuda I, Sorokowska A, et al. When in danger, turn right: Does COVID-19 threat promote social conservatism and right-wing presidential candidates? Hum Ethol. 2020 Jan 1;35(1):37–48.
  • 54. Beall AT, Hofer MK, Schaller M. Infections and elections: Did an Ebola outbreak influence the 2014 U.S. federal elections (and if so, how)? Psychol Sci. 2016 May;27(5):595–605. doi: 10.1177/0956797616628861
  • 55. Rothgerber H, Wilson T, Whaley D, Rosenfeld DL, Humphrey M, Moore AL, et al. Politicizing the COVID-19 pandemic: Ideological differences in adherence to social distancing [Internet]. PsyArXiv; 2020 Apr [cited 2021 May 5]. https://osf.io/k23cv
  • 56. Haberman M, Sanger DE. Trump says coronavirus cure cannot ‘be worse than the problem itself.’ The New York Times. 2020 Mar 23.
  • 57. Pennycook G, McPhetres J, Bago B, Rand DG. Beliefs about COVID-19 in Canada, the United Kingdom, and the United States: A novel test of political polarization and motivated reasoning. Pers Soc Psychol Bull.
  • 58. Pickup M, Stecula D, van der Linden C. Novel coronavirus, old partisanship: COVID-19 attitudes and behaviours in the United States and Canada. Can J Polit Sci. 2020 Jun;53(2):357–64.
  • 59. Mohammad SM. Word affect intensities. arXiv:1704.08798 [cs]. 2017 Apr 27 [cited 2021 Mar 3]. http://arxiv.org/abs/1704.08798
  • 60. Köper M, Schulte im Walde S. Improving verb metaphor detection by propagating abstractness to words, phrases and individual senses. In: Proceedings of the 1st Workshop on Sense, Concept and Entity Representations and their Applications. Valencia, Spain: Association for Computational Linguistics; 2017 [cited 2021 Dec 30]. p. 24–30. http://aclweb.org/anthology/W17-1903
  • 61. Brysbaert M, Warriner AB, Kuperman V. Concreteness ratings for 40 thousand generally known English word lemmas. Behav Res Methods. 2014 Sep;46(3):904–11. doi: 10.3758/s13428-013-0403-5
  • 62. Kuperman V, Stadthagen-Gonzalez H, Brysbaert M. Age-of-acquisition ratings for 30,000 English words. Behav Res Methods. 2012 Dec;44(4):978–90. doi: 10.3758/s13428-012-0210-4
  • 63. Peirce J, Gray JR, Simpson S, MacAskill M, Höchenberger R, Sogo H, et al. PsychoPy2: Experiments in behavior made easy. Behav Res Methods. 2019 Feb;51(1):195–203. doi: 10.3758/s13428-018-01193-y
  • 64. Wilson GD, Patterson JR. A new measure of conservatism. Br J Soc Clin Psychol. 1968;7(4):264–9. doi: 10.1111/j.2044-8260.1968.tb00568.x
  • 65. Baayen RH, Divjak D. Ordinal GAMMs: A new window on human ratings. 2017. p. 1–13.
  • 66. Bates D, Maechler M, Bolker B, Walker S, Haubo Bojesen Christensen R, Singmann H, et al. lme4: Linear Mixed-Effects Models using “Eigen” and S4 [Internet]. 2021. https://cran.r-project.org/web/packages/lme4/index.html
  • 67. Lüdecke D, Bartel A, Schwemmer C, Powell C, Djalovski A, Titz J. sjPlot: Data Visualization for Statistics in Social Science [Internet]. 2021. https://cran.r-project.org/web/packages/sjPlot/index.html
  • 68. Lüdecke D, Makowski D, Ben-Shachar MS, Patil I, Waggoner P, Wiernik BM, et al. performance: Assessment of Regression Models Performance. 2021.
  • 69. Navarrete CD, Fessler DMT. Disease avoidance and ethnocentrism: The effects of disease vulnerability and disgust sensitivity on intergroup attitudes. Evol Hum Behav. 2006 Jul;27(4):270–82.
  • 70. Conway LG III, Woodard SR, Zubrod A, Chan L. Why are conservatives less concerned about the coronavirus (COVID-19) than liberals? Comparing political, experiential, and partisan messaging explanations. PsyArXiv. 2020.
  • 71. van Berkum JJA. Language comprehension, emotion, and sociality. In: Rueschemeyer SA, Gaskell MG, editors. The Oxford Handbook of Psycholinguistics. 2nd ed. Oxford: Oxford University Press; 2018 [cited 2019 Oct 31].
  • 72. Milin P, Feldman LB, Ramscar M, Hendrix P, Baayen RH. Discrimination in lexical decision. PLOS ONE. 2017 Feb 24;12(2):e0171935. doi: 10.1371/journal.pone.0171935
  • 73. Kuznetsova A, Brockhoff PB, Christensen RHB, Jensen SP. lmerTest: Tests in Linear Mixed Effects Models. 2021.
  • 74. Hibbing JR, Smith KB, Alford JR. Differences in negativity bias underlie variations in political ideology. Behav Brain Sci. 2014 Jun;37(3):297–307. doi: 10.1017/S0140525X13001192
  • 75. Lavine H, Lodge M, Polichak J, Taber C. Explicating the black box through experimentation: Studies of authoritarianism and threat. Polit Anal. 2002;10(4):343–61.

Decision Letter 0

Koji Miwa

23 Nov 2021

PONE-D-21-29311 COVIDisgust: Language Processing through the Lens of a Pandemic

PLOS ONE

Dear Dr. Puhacheuskaya,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Your manuscript (PONE-D-21-29311) was read by three expert reviewers. Their comments are attached below. Reviewer 1 is Huili Wang. As an academic editor, I have read the manuscript myself. As you will see, all reviewers found some merits in your study. However, they also recommended that the manuscript should be greatly improved before it is published in PLOS ONE. Reviewer 1 suggested that the logic in the storyline be reconsidered. Reviewers 2 and 3 commented that more information and justification should be provided in the method and result sections. I largely agree with the reviewers, and I request that you respond to all their comments. 

The most serious issue, from my perspective, is the storyline. After the first two paragraphs, the hypothesis is presented rather abruptly in line 31. It is not crystal clear why you opted for the current research design. Please explain, for the purpose of studying the effect of COVID-19-related news on people's disgust, why you asked participants to respond to words and used single word processing "as a proxy for disgust" (line 13). Please describe what the advantage of this procedure is. In the current introduction, it is also not clearly described why you opted for the word rating experiment and what the lexical decision task offers on top of the rating experiment. If your goal is to also study single word processing mechanisms, then I would like you to describe more clearly what is missing in the previous research and what the present study offers. Although you state the importance of testing the plain text effect in line 84, this is no longer mentioned in the rest of the manuscript, and your stimuli unfortunately contained illustrations (line 116; see also Reviewer 3's comment). In addition, your conclusion is not supported by the data because you did not study "an individual's response to news about COVID-19" (line 362); what you studied was individuals' responses to words with a prior presentation of news about COVID-19 (see also Reviewer 1's comment).

In addition to the reviewers' suggestions for the method and result sections, I am also concerned about your choice of statistical analysis. Although I agree that the GAMM can offer interesting insights on many occasions, I am not fully convinced that the GAMM is the best choice in this study. Neither your predictions nor your interpretation of the results involves nonlinearity. If you choose to retain the GAMM analyses, please describe more clearly why it is important to consider nonlinearity for this topic. Otherwise, the three-way wiggly interactions look unnecessarily complex, and they might not attract a wide range of readers. For this reason, I strongly recommend that you also report (generalized) linear mixed-effects models. Assuming that the results are comparable between the GAMM and the LMM/GLMM, I prefer to see the LMM/GLMM in the main text and the GAMM in the supplementary material.
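For readers less familiar with the requested alternative, a linear mixed-effects model of the kind the editor asks for can be fit in a few lines. The sketch below uses simulated data and hypothetical variable names (subject, headline, rt); it is not the authors' actual data or model specification:

```python
# Minimal LMM sketch with simulated data (hypothetical variables, not the
# authors' dataset): RT modeled as a function of headline type, with random
# intercepts per subject; the (G)LMM counterpart of a wiggly GAMM smooth.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_trials = 20, 30
data = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_trials),
    "headline": np.tile(np.repeat(["downplay", "severe"], n_trials // 2), n_subj),
})
# Simulate a 25 ms slowdown for "severe" plus per-subject random intercepts.
subj_intercepts = rng.normal(0, 30, n_subj)[data["subject"]]
data["rt"] = (600 + 25 * (data["headline"] == "severe")
              + subj_intercepts + rng.normal(0, 50, len(data)))

# Random-intercept model: rt ~ headline + (1 | subject) in lme4 notation.
model = smf.mixedlm("rt ~ headline", data, groups=data["subject"]).fit()
print(model.params)
```

In lme4 itself this would be `lmer(rt ~ headline + (1 | subject), data)`; the point is only that a linear fixed effect replaces the nonlinear smooth.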

Finally, here are my line-by-line comments:

line 104: Please spell out "VAD."

line 175: the difference plot is crucial for readers to digest the three-way interaction fully. I request that the difference plot be presented together with Fig 2. This is applicable to all difference plots reported in this manuscript.

line 245: Please double-check whether you analyzed -1000/RT. Given the intercept and the slope in Table 4, as well as the values shown in Figure 5, I suspect that you analyzed -1/RT.
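The editor's concern here can be verified from the scale of the fitted coefficients alone: for RTs measured in milliseconds, the two candidate transformations differ by three orders of magnitude. A minimal sketch with hypothetical RT values:

```python
# Hypothetical RTs in milliseconds; shows why an intercept near -0.002
# (rather than near -2) in a regression table would indicate -1/RT, not -1000/RT.
rts_ms = [450, 600, 750, 900]

neg_1000_over_rt = [-1000 / rt for rt in rts_ms]
neg_1_over_rt = [-1 / rt for rt in rts_ms]

print(neg_1000_over_rt)  # values on the order of -1 to -2
print(neg_1_over_rt)     # values on the order of -0.001 to -0.002
```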

Figures: Please refrain from using different labels for the same variables. stand.p.disgust should be DS-R, and stand.p.politics should be W-P. 

In light of the reviewers' recommendation, my editorial decision is "Major Revision." If and only if you find it possible to satisfy the reviewers' and my requests, please revise and resubmit your manuscript. Please note that this does not guarantee eventual acceptance of your manuscript. If resubmitted, depending on the quality of the revision, I might send it to the same reviewers or reject it at the editorial stage. 

Please submit your revised manuscript by Jan 07 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Koji Miwa, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please provide additional details regarding participant consent. In the ethics statement in the Methods and online submission information, please ensure that you have specified (1) whether consent was informed and (2) what type you obtained (for instance, written or verbal, and if verbal, how it was documented and witnessed). If your study included minors, state whether you obtained consent from parents or guardians. If the need for consent was waived by the ethics committee, please include this information.

If you are reporting a retrospective study of medical records or archived samples, please ensure that you have discussed whether all data were fully anonymized before you accessed them and/or whether the IRB or ethics committee waived the requirement for informed consent. If patients provided informed written consent to have data from their medical records used in research, please include this information

3. Thank you for stating the following in the Acknowledgments Section of your manuscript: 

"This research was supported by a Social Sciences and Humanities Research Council of 

Canada (http://www.sshrc-crsh.gc.ca/) Partnership Grant (Words in the World, 

895-2016-1008)."

We note that you have provided funding information that is not currently declared in your Funding Statement. However, funding information should not appear in the Acknowledgments section or other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form. 

Please remove any funding-related text from the manuscript and let us know how you would like to update your Funding Statement. Currently, your Funding Statement reads as follows: 

"This research was supported by a Social Sciences and Humanities Research Council of Canada (http://www.sshrc-crsh.gc.ca/) Partnership Grant (Words in the World, 895-2016-1008). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript."

Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

Reviewer #3: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: I Don't Know

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The paper titled “COVIDisgust: Language Processing through the Lens of a Pandemic” investigated whether the media’s stance on the COVID-19 pandemic can affect an individual’s disgust levels. The manuscript is technically sound as the experiments were carried out in a rigorous fashion with relevant variables being appropriately controlled. The statistical analysis was highly detailed and scientific with all data underlying the findings being fully available. In addition, the manuscript was written in standard English in an intelligible fashion. However, several issues should be addressed before the publication of the paper.

1. The research question of this paper is "whether the media's stance on the COVID-19 pandemic can affect an individual's disgust levels"; however, the conclusion drawn seemed to be the other way around, namely, that an individual's disgust levels would affect his or her response to news about the COVID-19 pandemic. The author should reconsider the conclusion drawn and clarify the logical connections between the hypotheses and results.

2. It would be better to present a brief summary of the experiment and the results in the first paragraph of the discussion section rather than introducing new information and questions.

3. Political ideology seemed to play an important role in manipulating participants’ responses to news about COVID-19 as the author stressed in the discussion and conclusion sections. It is recommended to incorporate it into the title and the research question.

4. The discussion about the lexical decision task could be enriched and extended with several citations.

5. One minor issue in the participants section: the author indicated that the participants' self-reported English proficiency in the two experiments was 4.5 and 4.6, respectively. Could the author provide more details about how the score was calculated, including the questions or standards adopted?

Reviewer #2: This is an interesting study which examines the role of media information, disgust sensitivity and political orientation on word affective ratings and processing. Many studies have been conducted on the psychological consequences of the COVID pandemic. The main contribution of this work is that it focuses on a less explored issue, that is, the cognitive consequences of the COVID pandemic, in terms of language processing, relating them with individual differences. Clearly, the study is of interest to PlosOne readers. There are several issues, however, that need to be addressed before it is accepted for publication. I list them below:

Materials

-More information about the materials is needed. In particular:

1) Do the disgust-related words have high ratings only in disgust (and low ratings in other discrete emotions)? That is, are they "pure" disgust words? (See Ferré et al., 2017, and Syssau et al., 2020, in Behavior Research Methods, for a distinction between pure and non-pure emotion-related words.)

2) Are high and low disgust words (with low valence) matched in arousal?

3) Are the three groups of words matched in the lexico-semantic variables that are known to affect language processing in general, and lexical decision tasks in particular: lexical frequency, length, age of acquisition, concreteness, and cognate status of words (i.e., orthographic overlap between the English words and their translations in the native language of the bilingual participants)?

4) Are there words directly related to pandemics in the data set? How many?

5) Please, include an appendix with the materials.

6) More information should be provided about the newspaper headlines. How many headlines were presented to the participants? It would be useful to have them in an appendix.

7) How were the pseudowords of Experiment 2 created?

Participants

-Almost half of the participants are not native speakers of English. There is much evidence of differences in emotional language processing in native and non-native languages. Part of this evidence has been obtained with the lexical decision task, which is also used here (see, for instance, the work by Dewaele, Pavlenko, Caldwell-Harris, Costa, Ferré, Duñabeitia, etc.). There is also evidence of differences in affective ratings between the L1 and the L2 (see, for instance, the work by Imbault, Vélez-Uribe or Prada). The authors should examine whether there are differences in affective ratings (Experiment 1), as well as in emotional word processing (Experiment 2), between the native and non-native English participants.

Analyses

-In Experiment 2, the effect of the above-mentioned variables (i.e., arousal, lexical frequency, length, age of acquisition, concreteness, cognate status) needs to be considered, as they are known to affect word processing in the lexical decision task. In particular, it is important to demonstrate that there is no confounding effect of arousal, since highly disgusting words seem to be more arousing than less disgusting words.

Discussion

-A discussion for each experiment should be included. In the General Discussion, the results of both experiments need to be integrated, trying to provide an explanation for the distinct results across experiments.

Minor

-A relevant reference is lacking: Silva et al. (2012), who explored the role of disgust sensitivity in the processing of disgust-related words in a lexical decision task.

Reviewer #3: This is an interesting article with a very thought-provoking topic related to COVID-19. The current research investigated how headlines related to COVID-19 influence people's word perception in terms of disgust level. In addition, the authors included participants' political status as a possible influential factor in how they perceive words. Experiment 1 examined participants' ratings of how disgusting they found words after viewing headlines. Experiment 2 examined reaction times in a lexical decision task (LDT) after viewing headlines. Overall, the authors discovered a significant interaction between individual disgust sensitivity and political ideology. In short, liberals rated words as more disgusting after reading headlines compared to conservatives. In addition, they found that less conservative participants responded faster to disgust words during the LDT, which may be explained by the relation between the disgust level of words and long-term memory.

What follows is a page-by-page response to points in the article. The responses were divided into minor and major issues. The symbol '>>>' with page number(s) introduces a quote from the article, and is followed by my query.

Minor issues

p.2

>>> Prior linguistic research has shown that affective content of words influences how fast they are recognized [9, 10], which also interacts with a range of person-based factors, such as age [11–13], sex [14], character traits and mood [15, 16], native speaker status [17, 18] and others.

In this study, both native and non-native speakers were included as participants. Was there any difference between them in both Experiment 1 and 2 besides RT for the lexical decision task?

If there is any difference between them in terms of arousal and political status and their effect on RT in Experiment 2, please describe it in detail. If not, just state that no difference between them was observed in this research other than RT.

p.2

>>> Conservatives tend to score higher on disgust in general [25–27], and making people physically disgusted shifts their attitudes to the conservative end of the political spectrum [28].

p.6

>>> left-leaning but not right-leaning participants rated all positive words as more positive and negative words as more negative.

The authors use the words conservatives/liberals and right-leaning/left-leaning interchangeably. They are synonyms, but conservatives/liberals sounds more general, whereas right-leaning/left-leaning sounds more related to politics and policies. For consistency, it would be recommended to stick to one of them.

p.3

>>> Mean self-reported English proficiency was 4.5

Out of what? Also, please add SD with the score.

p.4

>>>PsychoPy3

This should be properly cited as stated here (https://psychopy.org/about/index.html)

p.4

>>> DS-R

The abbreviation should be fully spelled when it appears for the first time.

p.4

>>> Likert-type scale

In this section, please describe each rating scale in more detail. For example, the maximum score of the scale is unclear. To increase replicability, let readers know how they can precisely replicate your experiments.

p.4

>>> In addition to not treating categorical data as continuous

Do you mean, not treating continuous data as categorical?

p.5

>>> if the participants were exposed to the Type I (severe) headlines.

I would recommend changing the term Type I/II to avoid possible misinterpretation, as Type I/II sounds more familiar in statistical contexts.

p.5

>>> The best-fitting model for disgust ratings included the following significant predictors:

As GAM is a relatively new statistical approach in linguistic research, it would be appreciated if you could add some sentences describing what the best-fitting model means and how you found it, instead of just stating "Deviance explained = 30.5%".

p. 7

>>> Mean self-reported English proficiency was 4.6.

4.6 out of what? Also, please add SD with the score.

p.5, p.8

>>> A Student’s t-test

This is a tiny point, but I am not a big fan of the expression "Student's" (this article is not so related to education that there is a need to treat participants as students). "A participant's t-test" sounds more natural.

Major issues

p.3

>>> It is therefore important to know whether plain texts (some of which also featured neutral, non-disgusting imagery) have the potential to produce the same effect as affective imagery.

p.4

>>> We then made screenshots of the headlines, some of which also had illustrations. No disturbing imagery was used.

From the sentence on p.3, I understood that the authors aimed to investigate whether plain texts about COVID-19 issues affect feelings towards words the way imagery does. Yet, on p.4, some materials included illustrations. Therefore, what they wanted to achieve by departing from the previous research is unclear. In addition, although the authors state that no disturbing imagery was used, it is unclear how they determined whether an image was disturbing or not.

p.7

>>> The procedure was the same as in Experiment 1 except for the main task and a short practice session before it.

Is it possible to add a figure to describe the procedure with images?

Each session seems to start with seeing the headline(s) before the main task in both Exp 1 and 2, but how often the participants saw the headline(s) is not clearly described. My understanding is that the participants saw the headline 99 times in Exp 1 simply because there were 99 words to rate. But since the LDT in Exp 2 surely included filler items, how many times the participants saw the headline is unclear.

p.9

>>> However, prior studies have not explored how the feeling of disgust interacts with lexical decision times. It is thus possible that being more disgusted may in fact make the associated concepts in the long-term memory more accessible, facilitating recognition of highly disgusting words.

p.10

>>> This may indicate that being more disgusted may make disgust-related concepts in the long-term memory more accessible, facilitating recognition of associated words. More research on the topic would be valuable.

I see the points, but the discussion of long-term memory lacks a logical explanation at this moment. It is worth stating that this is a new discovery of this article, one that previous research did not find. However, a clear and precise explanation of why the results suggest a relation with long-term memory should be given, with citations.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: 王慧莉 (Huili Wang)

Reviewer #2: No

Reviewer #3: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2022 Jul 21;17(7):e0271206. doi: 10.1371/journal.pone.0271206.r002

Author response to Decision Letter 0


9 Mar 2022

PONE-D-21-29311

COVIDisgust: Language Processing through the Lens of a Pandemic

PLOS ONE

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Your manuscript (PONE-D-21-29311) was read by three expert reviewers. Their comments are attached below. Reviewer 1 is Huili Wang. As an academic editor, I have read the manuscript myself. As you will see, all reviewers found some merits in your study. However, they also recommended that the manuscript should be greatly improved before it is published in PLOS ONE. Reviewer 1 suggested that the logic in the storyline be reconsidered. Reviewers 2 and 3 commented that more information and justification should be provided in the method and result sections. I largely agree with the reviewers, and I request that you respond to all their comments.

The most serious issue, from my perspective, is the storyline. After the first two paragraphs, the hypothesis is presented rather abruptly in line 31. It is not crystal clear why you opted for the current research design. Please explain, for the purpose of studying the effect of COVID-19-related news on people's disgust, why you asked participants to respond to words and used single word processing "as a proxy for disgust" (line 13). Please describe what the advantage of this procedure is. In the current introduction, it is also not clearly described why you opted for the word rating experiment and what the lexical decision task offers on top of the rating experiment. If your goal is to also study single word processing mechanisms, then I would like you to describe more clearly what is missing in the previous research and what the present study offers.

Author’s comment: Thank you for pointing this out. We have re-written the Introduction. We have clarified what was missing in previous research on affective word processing and how the present study corroborates and extends previous findings. We hope that our goals are now more transparent.

Although you are stating the importance of testing the plain text effect in line 84, this is no longer mentioned in the rest of the manuscript, and your stimuli unfortunately contained illustrations (line 116, see also Reviewer 3's comment).

Author’s comment: Unlike the stimuli used in prior research, for instance those from the DIRTI database (Haberkamp A, Glombiewski JA, Schmidt F, Barke A. The DIsgust-RelaTed-Images (DIRTI) database: Validation of a novel standardized set of disgust pictures. Behaviour Research and Therapy. 2017 Feb 1;89:86-94.), all our illustrations were neutral and did not fall under any of the categories known to evoke disgust, such as bodily products, injuries/infections, deaths, etc. However, we decided to remove this paragraph from the main text since it is not of primary importance for this study. Additionally, the full list of headlines can now be found in Appendix 2 for review.

In addition, your conclusion is not supported by the data because you did not study "an individual's response to news about COVID-19" (line 362); what you studied was individuals' responses to words with a prior presentation of news about COVID-19 (see also Reviewer 1's comment).

Author’s comment: We believe that this is a reasonable generalization since disgust ratings act as a proxy for disgust and thus indicate how strongly an individual responded to a particular set of headlines about COVID-19. We have also rewritten the abstract, the discussion, and the conclusions so that the link between the participants’ response to news and their response to disgusting words is more obvious.

In addition to the reviewers' suggestions for the method and result sections, I am also concerned about your choice of statistical analysis. Although I agree that the GAMM can offer interesting insights on many occasions, I am not fully convinced that the GAMM is the best choice in this study. Neither your predictions nor your interpretation of the results involves nonlinearity. If you choose to retain the GAMM analyses, please describe more clearly why it is important to consider nonlinearity for this topic. Otherwise, the three-way wiggly interactions look unnecessarily complex, and they might not attract a wide range of readers. For this reason, I strongly recommend that you also report (generalized) linear mixed-effects models. Assuming that the results are comparable between the GAMM and the LMM/GLMM, I prefer to see the LMM/GLMM in the main text and the GAMM in the supplementary material.

Author’s comment: As per your suggestion, we are now reporting the results of LMM in the main text. Since the results of LMM and GAMM were virtually identical, we have decided to only provide the LMM analysis in the main text and to not add GAMM in Supplementary Material since it would not add anything to the paper. However, the full GAMM analysis with all the scripts and plots will be available on OSF in the main project folder (we have added this information to the main text).

Finally, here are my line-by-line comments

line 104: Please spell out "VAD."

Author’s comment: We have spelled it out.

line 175: the difference plot is crucial for readers to digest the three-way interaction fully. I request that the difference plot be presented together with Fig 2. This is applicable to all difference plots reported in this manuscript.

Author’s comment: This comment is no longer applicable since we are reporting LMMs in the main text as per your request.

line 245: Please double-check whether you analyzed -1000/RT. Given the intercept and the slope in Table 4, as well as the values shown in Figure 5, I suspect that you analyzed -1/RT.

Author’s comment: We specified in the data analysis section that RTs (measured in seconds) were first reciprocally transformed (-1/RT) and then multiplied by 1000, yielding -1000/RT, to avoid extremely small numbers (and to make the plots look more intuitive, with shorter RTs at the lower end of the y-axis). The range of reciprocally transformed RTs was -2247 to -433, which corresponds to 445 to 2305 ms of untransformed RTs.
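As a quick sanity check on this transformation (assuming RTs in seconds, as the reported range implies), a minimal Python sketch; the function names here are hypothetical, for illustration only:

```python
# RT transformation sketch: t = (-1/RT) * 1000 = -1000/RT, with RT in seconds.
# Shorter RTs map to lower (more negative) transformed values, and the
# back-transformation is simply RT = -1000/t.

def transform(rt_seconds):
    return -1000.0 / rt_seconds

def inverse(t):
    return -1000.0 / t  # back to seconds

print(round(transform(0.445)))       # fastest reported RT (445 ms) -> -2247
print(round(inverse(-2247) * 1000))  # back to milliseconds -> 445
```

This reproduces the correspondence the authors report: a 445 ms RT maps to a transformed value of about -2247, matching the lower end of their transformed range.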

Figures: Please refrain from using different labels for the same variables. stand.p.disgust should be DS-R, and stand.p.politics should be W-P.

Author’s comment: Thank you for spotting this. We have changed the labels accordingly.

In light of the reviewers' recommendation, my editorial decision is "Major Revision." If and only if you find it possible to satisfy the reviewers' and my requests, please revise and resubmit your manuscript. Please note that this does not guarantee eventual acceptance of your manuscript. If resubmitted, depending on the quality of the revision, I might send it to the same reviewers or reject it at the editorial stage.

Reviewer #1: The paper titled “COVIDisgust: Language Processing through the Lens of a Pandemic” investigated whether the media’s stance on the COVID-19 pandemic can affect an individual’s disgust levels. The manuscript is technically sound as the experiments were carried out in a rigorous fashion with relevant variables being appropriately controlled. The statistical analysis was highly detailed and scientific with all data underlying the findings being fully available. In addition, the manuscript was written in standard English in an intelligible fashion. However, several issues should be addressed before the publication of the paper.

1. The research question of this paper is “whether the media’s stance on the COVID-19 pandemic can affect an individual’s disgust levels”, however, the conclusion drawn seemed to be the other way around, namely, an individual’s disgust levels would affect his or her response to news about the COVID-19 pandemic. The author should reconsider the conclusion drawn and clarify the logical connections between the hypotheses and results.

Author’s comment: While it is true that the participant’s disgust level itself also affected their word ratings and lexical decision times, this was not the main result of the study. The main result showed that our experimental manipulation, the type of headline (“the media’s stance”), significantly modulated participants' word ratings and, additionally, interacted with individual disgust proneness and political ideology. As the suggested change would not capture this result in its entirety or the direction of the observed effects, we have no empirical grounds to change the conclusion.

2. It would be better to present a brief summary of the experiment and the results in the first paragraph of the discussion section rather than introducing new information and questions.

Author’s comment: Thank you for this suggestion. We have rewritten the discussion accordingly.

3. Political ideology seemed to play an important role in manipulating participants’ responses to news about COVID-19 as the author stressed in the discussion and conclusion sections. It is recommended to incorporate it into the title and the research question.

Author’s comment: We have changed the title.

4. The discussion about the lexical decision task could be enriched and extended with several citations.

Author’s comment: We have described our predictions, hypotheses, and discussion of the results for Experiment 2 in more detail.

5. One minor issue in the participants section, the author indicated that the participants’ self-reported English proficiency in the two experiments were 4.5 and 4.6 respectively. Could the author provide more details about how the score was calculated including questions or standards adopted?

Author’s comment: We have added a clarification to the Participants section. The participants were simply asked to rate their English proficiency on a 5-point scale. No tests were used.

Reviewer #2: This is an interesting study which examines the role of media information, disgust sensitivity and political orientation on word affective ratings and processing. Many studies have been conducted on the psychological consequences of the COVID pandemic. The main contribution of this work is that it focuses on a less explored issue, that is, the cognitive consequences of the COVID pandemic, in terms of language processing, relating them with individual differences. Clearly, the study is of interest to PlosOne readers. There are several issues, however, that need to be addressed before it is accepted for publication. I list them below:

Materials

-More information about the materials is needed. In particular:

1) Do the disgust-related words have high ratings only in disgust (and low ratings in other discrete emotions)?. That is, are they “pure” disgust words? (See Ferré et al., 2017, and Syssau et al., 2020, in Behavior Research Methods, for a distinction between pure and non-pure emotion related words).

Author’s comment: We did not control for other discrete emotions in this study, and our words were not “pure” disgust words, since the goal of the article was to examine how the word’s disgust score interacts with other variables rather than to examine the differential contribution of discrete emotions. Ferré et al. (2017) actually did not find any differences in RTs between disgusting and fearful words in any of their experiments (although their stimuli were Spanish, not English).

We checked the NRC Emotion Intensity Lexicon for other emotions post hoc, and it turned out that half or more of our 33 high-disgust words did not have associated anger, fear, and sadness scores, and only 2 had anticipation scores. The mean fear score associated with the remaining words was 0.72, the mean anger score 0.62, and the mean sadness score 0.73. Since there are so many missing scores, it is not possible to add the scores from other emotions to our data analysis at this point. We also don’t think that keeping the fear score low for highly disgusting words is reasonable (or even manageable), because this would likely constitute a confound of its own, as highly disgusting words may naturally evoke fear due to their relatedness to disease. In fact, only 52 words in the NRC database (even fewer, since some were word forms of others) had a disgust score > 0.5 and a fear score < 0.5 (range 0-1), with a mean disgust score of 0.62 and a mean fear score of 0.4. As you can see, these are not high-disgust low-fear words; these are mixed words with moderate fear and disgust scores. Further, at least one third of those words had extremely low lexical frequency (freq range = 8-100, log freq range = 1.6 to 2.6). To compare, the lowest lexical frequency of our stimuli was 211 (log 5.4). Thus, by getting rid of one potential confound we would decrease the power of the experiment due to only having mid- and low-disgust words, in addition to adding other confounds (frequency, prevalent religious and morality themes, etc.).

2) Are high and low disgust words (with low valence) matched in arousal?

Author’s comment: When preparing the stimuli, we kept arousal values constant (between -1 and 1 in a normalized distribution). However, since the correlation between word disgust and arousal was still significant albeit small (r = 0.2, p < .04), we added arousal as a predictor in all our models (it turned out to be only marginally significant in 1 out of 6 models). We also added a table with correlation coefficients between all words’ characteristics to the main text.

3) Are the three groups of words matched in the lexico-semantic variables that are known to affect language processing in general, and lexical decision tasks in particular?: Lexical frequency, length, age of acquisition, concreteness, cognate status of words (i.e., orthographic overlap between the English words and their translations in the native language of the bilingual participants).

Author’s comment: Thank you for pointing this out. We have added all those variables (except for cognate status) in our models for the lexical decision experiment but not for the word rating experiment, since we don’t believe they have any effect on word ratings for disgust and valence. At this stage, it is unfortunately impossible to test the effects of cognate status since some participants chose the option “other” as their native language in the list of languages. Only age of acquisition was significant, with the predicted direction of its effect: words acquired earlier were reacted to faster. Adding these control variables did not change the overall results and conclusions of Experiment 2.

4) Are there words directly related with pandemics among the data set? How many?

Author’s comment: It is unfortunately very difficult to estimate it since no lists of pandemic-related words exist (to our knowledge). Using our best judgment, we counted 6 out of 99 words that could be more or less related to the pandemic (but also to disease in general, not specific to the current pandemic): unhealthy, germ, sickening, deadly, parasite, disease. Three words actually occurred in the severe headlines in full (“sickening”, “disease”, “deadly”) and one in part (“bloodthirsty” – “blood”). To make sure the results of Experiment 2 were not contaminated by this overlap, we additionally reran the models without these four words, and the main result did not change.

5) Please, include an appendix with the materials.

Author’s comment: We have added Appendix 1 with the stimuli.

6) More information should be provided about the newspaper headlines. How many headlines were presented to the participants? It would be useful to have them in an appendix.

Author’s comment: We provided that information in the Materials section: “For the headline manipulation, we randomly selected seven news articles emphasizing the severity of COVID-19 and eight articles downplaying it.” We have now added the headlines in Appendix 2.

7) How were the pseudowords of Experiment 2 created?

Author’s comment: Pseudowords were created by modifying one or more letters in existing English words. A native speaker of English checked that they did not contain any real but archaic words, words that could be typos of real words, slang words, or nonexistent words that sound like neologisms. We have added this description to the main text. The full list of pseudowords is available in Appendix 1.

Participants

-Almost half of the participants are not native speakers of English. There is much evidence of differences in emotional language processing in native and non-native languages. Part of them has been obtained with the lexical decision task, which is also used here (see, for instance, the work by Dewaele, Pavlenko, Caldwell-Harris, Costa, Ferré, Duñabeitia, etc). There is also evidence of differences in affective ratings between the L1 and the L2 (see, for instance, the work by Imbault, Vélez-Uribe or Prada). The authors should examine if there are differences in affective ratings (Experiment 1), as well as in emotional word processing (Experiment 2) between the native and non-native English participants.

Author’s comment: Thank you for the suggestion. We have reclassified all participants who did not choose English as the first language they acquired in childhood as non-native speakers (rather than only those who did not choose English as their primary language) and aggregated self-reported English proficiency and SD by these two groups (native and non-native speakers) in the description of the participants. We are also now providing the results of fully specified models for both experiments, which show that native speaker status was a significant predictor only in Experiment 2 (lexical decision) but not in Experiment 1 (ratings). Moreover, rerunning the models for Experiment 2 with native speakers only did not change the main result.

Analyses

-In Experiment 2, the effect of the above mentioned variables (i.e., arousal, lexical frequency, length, age of acquisition, concreteness, cognate status) needs to be considered, as they are known to affect word processing in the lexical decision task. In particular, it is important to demonstrate that there is not a confounding effect of arousal, since high disgusting words seem to be more arousing than low disgusting words.

Author’s comment: In addition to lexical frequency that was already in the models, we have now added word length, arousal, age of acquisition, and concreteness and report the results of the full models. At this stage, it is unfortunately impossible to test the effects of cognate status since some participants chose the option “other” as their native language in the list of languages. Only age of acquisition was significant.

Discussion

-A discussion for each experiment should be included. In the General Discussion, the results of both experiments need to be integrated, trying to provide any explanation for the distinct results across experiments).

Author’s comment: We have added a discussion after each experiment and a general discussion in the end.

Minor

-A relevant reference is lacking: Silva et al. (2012), who explored the role of disgust sensitivity on the processing of disgust-related words in a lexical decision task.

Author’s comment: Thank you! We have incorporated the study into our predictions and discussion.

Reviewer #3: This is an interesting article with a very thought-provoking topic related to COVID-19. The current research investigated how headlines related to COVID-19 influence people’s word perception in terms of disgust level. In addition, the authors included participants’ political status as a possible influential factor in how they perceive words. Experiment 1 elicited participants’ ratings of how disgusting words felt to them after viewing headlines. Experiment 2 measured reaction times in a lexical decision task (LDT) after viewing headlines. Overall, the authors found a significant interaction between individual disgust sensitivity and political ideology. In short, liberals rated words as more disgusting after reading headlines compared to conservatives. In addition, they found that less conservative participants showed shorter RTs for disgusting words during the LDT, which may be explained by the relation between the disgust level of words and long-term memory.

What follows is a page-by-page response to points in the article. The responses were divided into minor and major issues. The symbol '>>>' with page number(s) introduces a quote from the article, and is followed by my query.

p.2

>>> Prior linguistic research has shown that affective content of words influences how fast they are recognized [9, 10], which also interacts with a range of person-based factors, such as age [11–13], sex [14], character traits and mood [15, 16], native speaker status [17, 18] and others.

In this study, both native and non-native speakers were included as participants. Was there any difference between them in both Experiment 1 and 2 besides RT for the lexical decision task?

If there is any difference between them in terms of arousal and political status and their effect on RT in Experiment 2, please describe it in detail. If not, simply state that no difference between them, other than RT, was observed in this research.

Author’s comment: We are now providing the output of fully specified models for both experiments, which shows that the native speaker status was a significant predictor only in Experiment 2 but not in Experiment 1. We have also added a note into the Results section for Experiment 2 that excluding non-native speakers from the analysis did not change the main findings. Please also see our response to the comment about English proficiency below.

p.2

>>> Conservatives tend to score higher on disgust in general [25–27], and making people physically disgusted shifts their attitudes to the conservative end of the political spectrum [28].

p.6

>>> left-leaning but not right-leaning participants rated all positive words as more positive and negative words as more negative.

The authors use the words conservatives/liberals and right-leaning/left-leaning interchangeably. They are synonyms, but conservatives/liberals sounds more general, whereas right-leaning/left-leaning sounds more related to politics and policies. For consistency, I would recommend sticking to one of them.

Author’s comment: Thank you for this suggestion! Since we used the Wilson-Patterson Conservatism Scale to collect political ideology, we decided to stick with conservatives/liberals and edited the article accordingly.

p.3

>>> Mean self-reported English proficiency was 4.5

Out of what? Also, please add SD with the score.

Author’s comment: We have specified that the proficiency was measured on a 5-point scale. We also reclassified all participants who did not choose English as the first language they acquired in childhood as non-native speakers (rather than only those who did not choose English as their primary language) and aggregated self-reported English proficiency and SD by these two groups (native and non-native speakers). We are now providing the results of fully specified models for both experiments, which show that native speaker status was a significant predictor only in Experiment 2 but not in Experiment 1.

p.4

>>>PsychoPy3

This should be properly cited as stated here (https://psychopy.org/about/index.html)

Author’s comment: Thank you! We have added a proper citation.

p.4

>>> DS-R

The abbreviation should be fully spelled when it appears for the first time.

Author’s comment: We have spelled out the abbreviation.

p.4

>>> Likert-type scale

In this section, please describe each rating scale in more detail. For example, the maximum score of the scale is unclear. To improve replicability, let readers know how they can precisely replicate your experiments.

Author’s comment: The scale is provided in the previous paragraph, Procedure: “After that, the participants rated how disgusting a word feels to them (1 = “not at all” to 5 “extremely”) and how positive/negative it feels (1 = “very negative” to 5 “very positive”).”

p.4

>>> In addition to not treating categorical data as continuous

Do you mean, not treating continuous data as categorical?

Author’s comment: Ratings elicited on a Likert scale yield discrete/categorical (ordinal) data since there is no strictly defined distance between the values of the scale, they can vary in magnitude between respondents, and they are ordered. Standard regression analyses treat such data as continuous. The advantage of GAMMs for ordinal data is precisely that they do not treat categorical data as continuous.

p.5

>>> if the participants were exposed to the Type I (severe) headlines.

I would recommend changing the term Type I/II to avoid possible misinterpretation, as Type I/II sounds more familiar to me in statistical contexts.

Author’s comment: Thank you. We have removed the Type I/II terminology and now refer to them as “severe headlines” and “downplaying headlines”.

p.5

>>> The best-fitting model for disgust ratings included the following significant predictors:

As GAM is a relatively new statistical approach in linguistics research, it would be appreciated if you could add some sentences describing what the best-fitting model means and how you found it, instead of just stating "Deviance explained = 30.5%".

Author’s comment: The best-fitting model in our paper was obtained via stepwise forward selection (adding predictors one by one and checking whether an N+1 model produced a significant improvement over an N model). The compareML() function that we used for model comparison outputs a chi-square test of REML scores and an AIC difference between two models. If the chi-square test had a p-value > .05, suggesting a non-significant difference in REML scores between the less and the more complex model, then the simpler model was preferred and the predictor was removed. Thus, the best-fitting model in our analysis was a maximally specified model that yielded a significant improvement over a simpler model.

However, since it was requested that we reanalyze the data using linear mixed-effect modeling, this description is no longer relevant and will not be added to the main text.
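For readers unfamiliar with this selection procedure, the loop the authors describe can be sketched in a few lines. The actual comparisons used itsadug's compareML() (chi-square tests of REML scores) in R; this toy Python sketch substitutes a plain OLS fit compared by AIC purely to illustrate the forward-selection logic, and all names (fit_aic, forward_select) are hypothetical:

```python
import numpy as np

def fit_aic(X, y):
    # OLS fit; AIC up to a constant: n*log(RSS/n) + 2k.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    n, k = X.shape
    return n * np.log(rss / n) + 2 * k

def forward_select(candidates, y):
    """Add predictors one by one; keep each only if it improves the model."""
    model = [np.ones(len(y))]  # start from an intercept-only model
    kept = []
    for name, col in candidates:
        old_aic = fit_aic(np.column_stack(model), y)
        new_aic = fit_aic(np.column_stack(model + [col]), y)
        if new_aic < old_aic:  # N+1 model improves over N model
            model.append(col)
            kept.append(name)
    return kept

# Simulated data: x1 is genuinely predictive, x2 is pure noise.
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = rng.normal(size=200)
y = 2.0 * x1 + 0.3 * rng.normal(size=200)

kept = forward_select([("x1", x1), ("x2", x2)], y)
print(kept)
```

The informative predictor x1 survives selection; a noise predictor is usually dropped because its small RSS reduction does not offset the complexity penalty, mirroring how a non-significant compareML() comparison leads to the simpler model being preferred.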

p. 7

>>> Mean self-reported English proficiency was 4.6.

4.6 out of what? Also, please add SD with the score.

Author’s comment: We have specified that proficiency was measured on a 5-point scale. We have also added means and SDs for English proficiency for the two groups (native and non-native speakers).

p.5, p.8

>>> A Student’s t-test

This is a tiny point, but I am not a big fan of the expression "Student's" (this article is not related to education, so there is no need to treat participants as students). "A participant's t-test" sounds more natural.

Author’s comment: Student’s t-test got its name from statistician William Sealy Gosset who published under the pen name Student. It is not related to the area of our paper.

p.3

>>> It is therefore important to know whether plain texts (some of which also featured neutral, non-disgusting imagery) have the potential to produce the same effect as affective imagery.

p.4

>>> We then made screenshots of the headlines, some of which also had illustrations. No disturbing imagery was used.

From the sentence on p.3, I understood that the authors were aiming to investigate whether plain texts about COVID-19 issues affect feelings towards words as imagery does. Yet, on p.4, some materials included illustrations. Therefore, what they wanted to achieve by departing from the previous research is unclear. In addition, although the authors state that no disturbing imagery was used, it is unclear how they determined whether the images were disturbing or not.

Author’s comment: Although we explicitly stated that some of our headlines contained illustrations and that, unlike the stimuli used in prior research such as those from DIRTI (https://www.sciencedirect.com/science/article/abs/pii/S0005796716301978), all our illustrations were neutral (i.e., did not fall under categories known to evoke strong disgust, such as bodily products, injuries/infections, deaths, etc.), we have removed this paragraph from the main text since it is not of primary importance for this study. Additionally, the full list of headlines can now be found in Appendix 2 for review.

p.7

>>> The procedure was the same as in Experiment 1 except for the main task and a short practice session before it.

Is it possible to add a figure to describe the procedure with images?

Each session seems to start with seeing the headline(s) before the main task in both Exp 1 and 2, but how often the participants saw the headline(s) is not clearly described. My understanding is that the participants saw the headline 99 times in Exp 1 simply because there were 99 words to rate. But since the LDT in Exp 2 surely included filler items, how many times the participants saw the headline is unclear.

Author’s comment: We have edited the description of the procedure for clarity. The participants saw a set of headlines only once, at the beginning of each session before the main task started. They did not see the headlines before every trial.

p.9

>>> However, prior studies have not explored how the feeling of disgust interacts with lexical decision times. It is thus possible that being more disgusted may in fact make the associated concepts in the long-term memory more accessible, facilitating recognition of highly disgusting words.

p.10

>>> This may indicate that being more disgusted may make disgust-related concepts in the long-term memory more accessible, facilitating recognition of associated words. More research on the topic would be valuable.

I see the point, but the discussion of long-term memory currently lacks a logical explanation. It is worth stating that this is a new finding of this article, one that previous research could not uncover. However, a clear and precise explanation of why the results point to a possible link with long-term memory should be given, with citations.

Author’s comment: We have extended the discussion and added relevant citations.

Attachment

Submitted filename: Response_to_Reviewers.docx

Decision Letter 1

Koji Miwa

2 May 2022

PONE-D-21-29311R1

COVIDisgust: Language Processing through the Lens of Partisanship

PLOS ONE

Dear Dr. Puhacheuskaya,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Kudos for the effort and time you devoted to these respectable revisions! As you can see, all three reviewers responded to your comments and revisions more positively. I agree with the reviewers; the manuscript was improved in many respects. Although, as Reviewer 3 commented, it might still be challenging for many readers to digest all the results with their many interactions, I think you have done your job. Now, I will be happy if you can respond to the comments from Reviewers 1 and 3. In addition to the reviewers' comments, I have several comments of my own.

(1) Your motivation for the GAMM

You stated your motivation for the GAMM analysis as "Since ratings on a Likert-type scale yield ordinal data and thus, in principle, should not be analyzed with Gaussian family models assuming continuous data, we additionally ran a generalized additive mixed-modeling analysis (GAMM) for ordinal data" (p. 12). However, given this motivation, it might sound puzzling to the readers why you are not analyzing the data with something like the cumulative-link mixed-effects model for ordinal data (with the R package "ordinal"). I am not asking you to redo the whole analyses, but I would like you to reconsider your motivation for the GAMM. Doesn't your motivation have something to do with potential nonlinearity?

(2) RTs

The way you transformed RTs (after the reciprocal transformation) is not common, I personally think. Given that the resulting RTs are still different from the original RTs, not all readers might agree with your statement that "the plots can be read intuitively" (p. 24). However, I am not asking you to redo the analyses because it is not wrong either. I would just like you to explore optimal reporting method in the future. If an intuitive interpretation is what you want to achieve, I think a back-transformation can be applied when plotting the model-predicted values.

> RT = c(600, 700, 800) # for these sample RTs

> (-1000/RT)*1000 # this was done in this study

  # [1] -1666.667 -1428.571 -1250.000

> -1000/(-1000/RT) # back-transformation for the reciprocal transformation

  # [1] 600 700 800

(3) For all p-values and rs, please remove the zero before the decimal point.

(4) This might be my problem, but it seems that the interpretation for the W-P scale is not provided before the text "More liberal participants rated the stimuli as more disgusting..." (p. 15) It is therefore not clear, for "liberal," which part of the scale the reader should focus on.

(5) "The lines for the five levels of word disgust are..." (p. 16) might be misleading because it is not a factor with five levels, is it? Do you mean quantiles?

(6) Please double-check your statement on page 19: "The interaction between the participant's political ideology and valence ratings." Do you mean an interaction with word valence?

Please submit your revised manuscript by Jun 16 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Koji Miwa, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

Reviewer #3: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Partly

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Generally speaking, the authors revised the manuscript as the reviewers suggested. However, there are some minor points that need to be explained or revised.

1. Line 14-16: In the abstract, I was wondering whether the detailed reference information should be given in parentheses. My suggestion is to remove it.

2. Line 25: “after the headlines emphasizing it” seems ungrammatical. Please confirm.

3. In the “Introduction” and “Discussion” sections, I suggest adding some subtitles so that the ideas are elaborated more clearly. Furthermore, the ideas expressed in these two sections should be more symmetrical and more focused on the hypotheses of the research.

Reviewer #2: (No Response)

Reviewer #3: Comments to the authors

The manuscript was well revised, including the additional explanations of materials and procedures that the reviewers requested. The added explanations of the data analysis on page 11 are convincing, and I respect how the authors changed the statistical analysis during the revision.

However, the manuscript is still challenging for readers to understand, and they will probably need to read it several times to fully comprehend what the authors are trying to do.

The comments marked (>>) below suggest some minor revisions, which I hope will enhance the manuscript's comprehensibility for readers.

p.7 line 134

untrustworthy, and be discarded by more conservative participants, resulting in no effect.

>>The authors could specify which variable (the disgust rating?) the effect refers to.

p.28 line 515

Importantly, we did not find an effect of the headline on word recognition latencies. This suggests that fluctuating levels of disgust may not affect such core language processing mechanisms as lexical access. It may also mean that stable traits are more predictive of lexical recognition latencies than fluctuating states.

>>This explanation of the results of Experiment 2 (LDT) is convincing. However, the finding of no effect of the COVID headlines may confuse readers a little.

Political ideology has an effect on online processing of language (disgust words), not limited to pandemic-associated topics; is that what the authors are trying to say?

Probably additional sentences are needed to help readers understand the interpretation of the results and how the results can be generalized.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: Yes: Wang Huili

Reviewer #2: No

Reviewer #3: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2022 Jul 21;17(7):e0271206. doi: 10.1371/journal.pone.0271206.r004

Author response to Decision Letter 1


2 Jun 2022

Response to the Editor’s comments:

(1) Your motivation for the GAMM

You stated your motivation for the GAMM analysis as "Since ratings on a Likert-type scale yield ordinal data and thus, in principle, should not be analyzed with Gaussian family models assuming continuous data, we additionally ran a generalized additive mixed-modeling analysis (GAMM) for ordinal data" (p. 12). However, given this motivation, it might sound puzzling to the readers why you are not analyzing the data with something like the cumulative-link mixed-effects model for ordinal data (with the R package "ordinal"). I am not asking you to redo the whole analyses, but I would like you to reconsider your motivation for the GAMM. Doesn't your motivation have something to do with potential nonlinearity?

We have changed the text of the manuscript to “Initially the data was analyzed using generalized additive mixed modeling (GAMM) for ordinal data following (60). However, the resulting analysis was highly complex, and it was advised that we use linear mixed modeling instead. The results of the two analyses were virtually identical, and we will thus report linear mixed modeling for simplicity. The full GAMM analysis with the scripts and plots is available for review on OSF.” We hope this wording will be less confusing.

(2) RTs

The way you transformed RTs (after the reciprocal transformation) is not common, I personally think. Given that the resulting RTs are still different from the original RTs, not all readers might agree with your statement that "the plots can be read intuitively" (p. 24). However, I am not asking you to redo the analyses because it is not wrong either. I would just like you to explore optimal reporting method in the future. If an intuitive interpretation is what you want to achieve, I think a back-transformation can be applied when plotting the model-predicted values.

> RT = c(600, 700, 800) # for these sample RTs

> (-1000/RT)*1000 # this was done in this study

# [1] -1666.667 -1428.571 -1250.000

> -1000/(-1000/RT) # back-transformation for the reciprocal transformation

# [1] 600 700 800

Thank you! We will certainly keep this in mind for future analyses. To clarify, the comment about intuitive reading meant that, after this transformation, shorter RTs are still plotted at the bottom of the Y-axis and longer RTs at the top, but we do see your point.
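The ordering point can be checked directly: the reciprocal transformation RT → -1000/RT is strictly increasing, so shorter RTs stay lower on the Y-axis, and the back-transformation recovers the original values. A minimal Python sketch, illustrative only (the study's analyses were run in R, and the extra *1000 scaling in the editor's example is omitted here since it does not change the ordering):

```python
rts = [600, 700, 800]  # sample reaction times in ms

# reciprocal transformation: RT -> -1000 / RT
transformed = [-1000.0 / rt for rt in rts]

# strictly increasing: shorter RTs still end up lower on the Y-axis
assert transformed == sorted(transformed)

# back-transformation recovers the original RT scale (up to floating-point error),
# which is what makes back-transformed model predictions convenient for plotting
recovered = [-1000.0 / t for t in transformed]
assert all(abs(r - o) < 1e-9 for r, o in zip(recovered, rts))
```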

(3) For all p-values and rs, please remove the zero before the decimal point.

We have removed the zeroes.
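This APA-style convention (no leading zero for statistics that cannot exceed 1 in absolute value, such as p-values and correlations) can also be applied automatically when generating result tables. A minimal Python sketch with a hypothetical helper, not part of the manuscript's actual pipeline:

```python
def apa_format(value, digits=3):
    """Format a p-value or correlation APA-style: drop the zero before the decimal point."""
    s = f"{value:.{digits}f}"
    # only statistics bounded by |1| drop the leading zero; others keep it
    return s.replace("0.", ".", 1) if abs(value) < 1 else s
```

For example, `apa_format(0.032)` returns `".032"` and `apa_format(-0.45, 2)` returns `"-.45"`, while a value such as 1.96 is left unchanged.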

(4) This might be my problem, but it seems that the interpretation for the W-P scale is not provided before the text "More liberal participants rated the stimuli as more disgusting..." (p. 15) It is therefore not clear, for "liberal," which part of the scale the reader should focus on.

We have added the following clarification: “More liberal participants (lower W-P score)...”

(5) "The lines for the five levels of word disgust are..." (p. 16) might be misleading because it is not a factor with five levels, is it? Do you mean quantiles?

Thank you, we have corrected the error.

(6) Please double-check your statement on page 19: "The interaction between the participant's political ideology and valence ratings." Do you mean an interaction with word valence?

Thank you for spotting this, we have corrected this error.

Response to Reviewer 1:

Reviewer #1: Generally speaking, the authors revised the manuscript as the reviewers suggested. However, there are some minor points that need to be explained or revised.

1. Line 14-16: In the abstract, I was wondering whether the detailed reference information should be given in parentheses. My suggestion is to remove it.

We have removed the detailed reference.

2. Line 25: “after the headlines emphasizing it” seems ungrammatical. Please confirm.

We have changed the sentence to “More liberal participants assigned higher disgust ratings after the headlines discounted the threat of COVID-19, whereas more conservative participants did so after the headlines emphasized it.” for clarity.

3. In the “Introduction” and “Discussion” sections, I suggest adding some subtitles so that the ideas are elaborated more clearly. Furthermore, the ideas expressed in these two sections should be more symmetrical and more focused on the hypotheses of the research.

We have added subtitles and restructured the Introduction and the General Discussion for more clarity.

Response to Reviewer 3:

Reviewer #3: Comments to the authors

The manuscript was well revised, including the additional explanations of materials and procedures that the reviewers requested. The added explanations of the data analysis on page 11 are convincing, and I respect how the authors changed the statistical analysis during the revision.

However, the manuscript is still challenging for readers to understand, and they will probably need to read it several times to fully comprehend what the authors are trying to do.

The comments marked (>>) below suggest some minor revisions, which I hope will enhance the manuscript's comprehensibility for readers.

p.7 line 134

untrustworthy, and be discarded by more conservative participants, resulting in no effect.

>>The authors could specify which variable (the disgust rating?) the effect refers to.

We clarified the sentence: “Headlines emphasizing the threat will be considered not credible and untrustworthy, and will be discarded by more conservative participants, resulting in no effect on disgust ratings.”

p.28 line 515

Importantly, we did not find an effect of the headline on word recognition latencies. This suggests that fluctuating levels of disgust may not affect such core language processing mechanisms as lexical access. It may also mean that stable traits are more predictive of lexical recognition latencies than fluctuating states.

>>This explanation of the results of Experiment 2 (LDT) is convincing. However, the finding of no effect of the COVID headlines may confuse readers a little.

Political ideology has an effect on online processing of language (disgust words), not limited to pandemic-associated topics; is that what the authors are trying to say?

Probably additional sentences are needed to help readers understand the interpretation of the results and how the results can be generalized.

We have rewritten the paragraph and added more details to both the Discussion after Experiment 2 and the General Discussion. Specifically, we have added a subsection, “Differential effects of traits and states on lexical access,” to the General Discussion.

Attachment

Submitted filename: Response to Reviewers Comments.docx

Decision Letter 2

Koji Miwa

27 Jun 2022

COVIDisgust: Language Processing through the Lens of Partisanship

PONE-D-21-29311R2

Dear Dr. Puhacheuskaya,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Koji Miwa, Ph.D.

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Acceptance letter

Koji Miwa

1 Jul 2022

PONE-D-21-29311R2

COVIDisgust: Language processing through the lens of partisanship

Dear Dr. Puhacheuskaya:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Koji Miwa

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File. Contains all the supporting stimuli and tables.

    (DOCX)

    Attachment

    Submitted filename: Response_to_Reviewers.docx

    Attachment

    Submitted filename: Response to Reviewers Comments.docx

    Data Availability Statement

    All the stimuli, raw data, and scripts used in this experiment are available on Open Science Framework at https://osf.io/5ep9g/.

