PLOS One. 2022 Nov 9;17(11):e0276400. doi: 10.1371/journal.pone.0276400

Disclosing political partisanship polarizes first impressions of faces

Brittany S Cassidy 1,*,#, Colleen Hughes 2,#, Anne C Krendl 3
Editor: Peter Karl Jonason
PMCID: PMC9645606  PMID: 36350842

Abstract

Americans’ increasing levels of ideological polarization contribute to pervasive intergroup tensions based on political partisanship. Cues to partisanship may affect even the most basic aspects of perception. First impressions of faces constitute a widely-studied basic aspect of person perception relating to intergroup tensions. To understand the relation between face impressions and political polarization, two experiments were designed to test whether disclosing political partisanship affected face impressions based on perceivers’ political ideology. Disclosed partisanship more strongly affected people’s face impressions than actual, undisclosed, categories (Experiment 1). In a replication and extension, disclosed shared and opposing partisanship also engendered, respectively, positive and negative changes in face impressions (Experiment 2). Partisan disclosure effects on face impressions were paralleled by the extent of people’s partisan threat perceptions (Experiments 1 and 2). These findings suggest that partisan biases appear in basic aspects of person perception and may emerge concomitant with perceived partisan threat.

Introduction

Political polarization in the United States has been a central focus of social science research for several decades [e.g., 1–3]. Americans’ opinions within political parties are increasingly aligned on issues ranging from gun control to same-sex marriage [4,5], to the extent that political ideology predicts policy preferences almost three times better than demographic factors like education [6]. Inherent to such polarization is intergroup tension. Indeed, conservatives and liberals are similarly intolerant toward each other [7], make negative attributions about groups whose values are inconsistent with their own [8], and avoid people who do not share their values [9,10]. Because critical societal challenges require bipartisan cooperation to address them [e.g., COVID-19; 11], identifying ways in which such polarization emerges has considerable utility. Although there are many ways that political ideologies may affect interpersonal behavior, the current investigation focused on face impressions, a well-studied aspect of person perception [12,13]. Face impressions affect how people behave toward others [e.g., 14] and become polarized based on incoming information [e.g., 15]. Identifying how simple partisan cues affect face impressions may thus be useful to better characterize rising political sectarianism in America [16].

Recent work suggests that people devalue facial cues from targets who are ideologically dissimilar from them. Such work has been interpreted through the lens of attitudinal dissimilarity, whereby people respond negatively to dissimilar others [e.g., 7]. In one study [17], people viewed a dating profile featuring a face and limited information about the target (e.g., personality traits). Later, the target’s partisanship was disclosed. Perceiver conservatism related to liking a conservative target more and liking a liberal target less after disclosure. This pattern aligned with work showing that people perceive target individuals as less physically attractive when those targets hold dissimilar political candidate preferences [18], highlighting a role of partisan dissimilarity in changing face impressions. Although these studies suggest that partisanship impacts face impressions, one limitation is that they focus on ideology. That is, they do not examine other factors affecting face impressions in parallel that could guide manipulations in future work to establish causal mechanisms for polarized partisan impressions. Further, that these studies were conducted in the context of forming romantic relationships makes it unclear whether the resulting polarization reflects a general effect or one limited to a specific motivational context. These questions are important given that face impressions affect the extent to which people cooperate with others [e.g., 19]. To this end, we present two experiments testing whether disclosing political partisanship polarizes face impressions in the absence of other information. Further, we explore whether partisan threat parallels expected ideology effects on this polarization.

Prior work supports that disclosed partisanship may polarize face impressions across contexts. For example, people more negatively perceive faces paired with negative versus neutral group labels [20] and treat outgroup faces in a negatively prejudicial way [21]. These findings extend work showing that visible group-associated cues elicit negative bias [e.g., on the basis of race; 22] by suggesting that labels simply implying that target individuals differ in group membership and values from perceivers polarize face impressions. Notably, ideological partisanship is a salient marker of relative value dissimilarity when disclosed to perceivers [e.g., 23,24]. Illustrating negative effects of this dissimilarity on social cognition, political partisanship elicits biases along party lines similar to racial biases [24–26], often outweighing other group memberships to predict bias [27].

Although some work suggests that partisanship is a relatively concealable aspect of identity [see 28], other work shows that people are relatively accurate at identifying political affiliation in the absence of explicit information [29,30]. At the same time, explicit partisan labels (e.g., Democrat) polarize impressions in contexts where romantic interests are salient [17] and can be randomly assigned to targets to elicit bias [31]. This work raises the possibility that explicitly disclosed partisanship may polarize impressions to a greater extent than cues that people may naturally detect. This possibility is important to study because people can be mischaracterized as belonging to a negatively evaluated group and incur negative bias [e.g., 32]. We hypothesized that pairing faces with partisan labels, irrespective of the implied veracity of those designations, would affect perceivers’ impressions more than actual target partisanship not explicitly disclosed to perceivers (Experiment 1).

If disclosed partisan labels strongly affect person perception based on perceivers’ ideological partisanship, they should also change first impressions after their disclosure. Thus, also of interest was whether simply pairing partisan labels with faces, irrespective of the accuracy of those pairings, modulated face impressions. To this end, Experiment 2 was a replication and extension of Experiment 1 in which people evaluated faces before and after target partisanship disclosure to measure impression updating. Prior work has not systematically examined such changes. Although some work suggests that face impressions are resilient to disclosed new information [e.g., 33], other work indicates that people update face impressions depending on what information is disclosed. For example, although facial untrustworthiness elicits negative impressions, disclosing salient positive behaviors results in more positive impressions [15]. This finding aligns with work showing that impressions are updated after salient behaviors are disclosed [34], as well as work showing that impressions based on implicit cues are malleable based on explicit and diagnostic incoming information [35].

One possibility is that arbitrary partisan labels may polarize impressions once they are disclosed, irrespective of their accuracy and based on perceiver partisanship. That is, labeling someone as having the opposite partisanship as the perceiver should elicit negative impression change (i.e., impressions becoming more negative), whereas perceived shared partisanship should elicit positive impression change (i.e., impressions becoming more positive). Such patterns would extend work showing favoritism and, sometimes, derogation, based on group membership [36–39] from a romantic [17] to a more general context and show that simple partisan labels in the absence of other partisan information can powerfully affect impressions.

To show that disclosed partisanship strongly affects face impressions, Experiment 1 tested whether accurately and inaccurately disclosed partisanship affects impressions more strongly than accurate, yet undisclosed, partisanship. Replicating and extending this finding, Experiment 2 tested the prediction that disclosing partisanship changes face impressions. Because opposing partisans are negatively evaluated [16], we expected the direction of impression change to be based on perceivers’ ideological partisanship. Recognizing that partisans at both ends of the ideological spectrum express negative bias against ideologically dissimilar people [7], we expected similarly polarized biases from people identifying as more conservative and more liberal.

Although our main goal was to identify partisanship-based effects of disclosed partisanship on face impressions, an open question regarded what factors produce parallel effects. We examined this question on an exploratory basis. Recent work suggests that negative trait attributions of opposing partisans relate to the threat opposing partisans are perceived to pose [40], a pattern consistent with intergroup threat theory [41]. Because the extent to which politically salient stimuli affect attitudes depends on their eliciting threatening feelings [42], it seemed plausible that patterns of perceived partisan threat on face impressions of disclosed partisans would parallel effects of perceiver political ideology. Supporting a connection between perceived threat and trait impressions of faces, recent work using visual cues showed that threatening contexts affect facial trustworthiness impressions more than other contexts [43]. If the presence of opposing partisan labels is threatening, the extent of partisan threat perceived from one party relative to another should affect face impressions, and how they change, in the same way that perceiver ideology is expected to affect them.

To further assess partisan disclosure effects on face impressions, we thus explored whether the expected patterns shown for perceivers’ face impressions were paralleled by the extent of their self-reported perceived partisan threat. That is, if perceivers evaluated opposing partisans as being especially threatening relative to ingroup partisans, then they should especially favor similar over opposing partisans (Experiment 1) and accordingly change the valence of their face impressions based on disclosed target partisanship (Experiment 2).

Experiment 1

Although people can often detect political partisanship from faces [29], disclosed group labels can override naturally occurring ones to elicit biases [44]. Disclosing partisan labels, irrespective of their accuracy, may thus affect impressions more than actual partisanship that, albeit potentially detectable, is not disclosed. We expected the direction of these effects to emerge based on perceivers’ political partisanship. Experiment 1 used impressions of unfamiliar political candidate faces to test these possibilities.

Impressions in this task involved asking people to select the more likable and competent of two unfamiliar faces (one Republican and one Democrat). These traits were selected because they are core dimensions of person perception [45] reflecting separable ways in which people stereotype others [46]. Devaluing these traits would complement distinct ways of deriding opposing partisans that are becoming more commonplace in the United States [16]. Partisan disclosure was manipulated between-subjects. In one task version, partisanship was not disclosed. Here, actual partisanship was expected to polarize impressions. In another version, partisan labels were paired with faces. These labels were accurate (i.e., consistent with actual partisanship) or inaccurate (i.e., reflecting an opposing partisanship). We expected perceiver political ideology (reflecting their own partisanship) to exacerbate intergroup bias (i.e., more frequently selecting faces labeled with shared partisanship to be more likable and competent than non-labeled faces with shared partisanship). Such patterns would show that explicit (versus more implicit) partisan cues polarize face impressions based on perceiver partisanship.

Method

Participants

The Indiana University Institutional Review Board approved all experiments. All participants provided written informed consent. Power analyses using the R-package WebPower [47] targeted 74 participants to detect a moderate perceiver political ideology effect (i.e., a 20% lower probability of more conservative participants choosing a Democrat as the more positive of a pair of faces) with 80% power and α = .05. Because disclosure was manipulated between-subjects, we doubled the target sample to ensure enough participants in each version. We oversampled to account for exclusions and to increase the likelihood of a wide range of political ideologies. Of 185 undergraduates recruited from a large Midwestern university in the United States, we excluded four. Two did not complete the partisanship characterization measures (see below) and two failed the manipulation check (see below). The analyzed sample comprised 181 undergraduates (Mage = 18.53 years, SD = .81; 128 female; 143 White, 22 Asian, 8 Black, 3 multiple, 2 unknown; 10 Hispanic). See https://osf.io/9khta/?view_only=65e52204b50d492b975001825d2f4efc for additional methods and results (e.g., Supplementary Information.docx), data, and code for all experiments.
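For readers who want to reproduce the sample-size planning, a minimal sketch using WebPower is given below. Only the targeted effect description, power, and alpha are reported above, so the baseline probability, the “20% lower” probability, and the predictor distribution passed to wp.logistic are illustrative assumptions rather than the authors’ actual inputs.

```r
# Sketch of the Experiment 1 power analysis with the WebPower package.
# p0, p1, and the predictor family are illustrative assumptions; the exact
# inputs used by the authors are not reported in the text.
library(WebPower)

wp.logistic(
  p0 = 0.50,        # assumed baseline probability of choosing a Democrat
  p1 = 0.40,        # assumed "20% lower" probability at higher conservatism
  alpha = 0.05,
  power = 0.80,
  family = "normal" # ideology treated as a continuous (normal) predictor
)
```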

Task

One hundred ten pairs of neutrally expressive White male faces were drawn from databases of opponents in United States political races that have been used in past work [e.g., 48]. Each pair depicted one actual Republican and one actual Democrat who were opponents in a past political race. Thus, the pairs were pre-determined and the same across participants. Across pairs, half of the Republicans and Democrats had won. We counterbalanced whether the actual winner appeared on the right or left of the screen within task versions. Like prior work [48], we did not tell participants the pictures were of politicians.

In both experiments, the task was presented using E-Prime 2.0. Self-paced competence and likability evaluations were made over two evaluation-specific blocks of 110 trials each presented in a counterbalanced order. Pair presentation order was randomized. Before each block, participants saw the evaluation they would make (“You will now choose which of two faces is the more competent [likable]”). Each trial comprised a prompt (“Which person is the more competent [likable]”) above two side-by-side faces. Participants selected which face appeared more competent [likable]. There was a 250ms blank screen between trials. To make partisanship the most salient difference within each pair, we did not include female faces.

Three task versions counterbalanced disclosed political partisanship on a between-subjects basis. In one version, partisanship was not disclosed. Because actual partisanship was known, we could determine when actual Republicans or Democrats were selected. The second two versions disclosed partisanship for each face via a red border indicating a Republican and a blue border indicating a Democrat. Of these two versions, one had the left and right faces labeled, respectively, as Republican and Democrat. In the other, left and right faces were labeled as, respectively, Democrat and Republican. Participants did not explicitly categorize partisanship. Rather, we measured the frequency that a disclosed Republican or Democrat was evaluated as the more competent [likable] of the pair. Accuracy (correct or incorrect) of the partisanship disclosed for each face was thus counterbalanced across these versions, allowing us to test whether disclosure affected impressions irrespective of veracity.

Immediately after the task, participants disclosed whether they recognized any faces. If they said yes (N = 66), they disclosed who they thought they recognized. Our a priori exclusion criterion was to exclude any participants who accurately identified faces; however, no participants did so. At the end of task versions with disclosed partisanship, analyzed participants accurately verified the representative colors, which served as a manipulation check.

Partisanship characterization

We collected partisanship characterization measures in a random order after the task.

Perceiver political ideology. Participants indicated political ideology over four items (overall, social issues, economic issues, and foreign policy issues) on a scale ranging from 1 [extremely liberal] to 9 [extremely conservative], similar to past work [e.g., 49]. Responses (Cronbach’s α = .90) were averaged to create a composite political ideology score (M = 4.80, SD = 1.91). Because composite political ideology is a single continuous variable relating to partisan prejudice [27], we measured effects of disclosure on face impressions with respect to it. Although relative ideology does not exactly match Republican and Democrat labels, these correlated concepts can determine partisanship effects on person perception [50]. Composite political ideology scores did not differ between task versions in which labels were and were not disclosed, F(1,179) = .41, p = .52.

Perceived partisan threat. Because perceived partisan threat contributes to political polarization [e.g., 40], we characterized the threat with which participants perceived partisans over four items: “How much of a threat do you think a person of the following party [Republican, Democrat, Independent/undecided] poses to you [society]?” and “How much of a threat do you think a person of the following party who is also an elected official poses to you [society]?” using scales ranging from 1 [not at all] to 7 [very much]. Responses toward each party (Cronbach’s α at least .81) were averaged to create three composite threat scores. Within individuals, we also calculated the difference in Democrat minus Republican threat composites. We standardized (i.e., z-scored) these differences across our sample for exploratory analyses of perceived partisan threat on the anticipated partisan disclosure effect.
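As a rough illustration of how the composite and difference scores described above could be computed, the sketch below averages the ideology items and the party-specific threat items and then z-scores the Democrat-minus-Republican threat difference. The data frame and column names are hypothetical, and the psych package for Cronbach’s alpha is an assumption rather than the authors’ documented workflow.

```r
# Illustrative computation of the composite scores described above.
# The data frame 'dat' and all column names are hypothetical.
library(psych)  # assumed here only for Cronbach's alpha

ideo_items <- dat[, c("ideo_overall", "ideo_social", "ideo_econ", "ideo_foreign")]
psych::alpha(ideo_items)                      # internal consistency of the four items
dat$ideology   <- rowMeans(ideo_items)        # composite political ideology
dat$ideology_z <- as.numeric(scale(dat$ideology))

# Composite threat toward each party (four items per party; columns assumed)
dat$rep_threat <- rowMeans(dat[, paste0("rep_threat", 1:4)])
dat$dem_threat <- rowMeans(dat[, paste0("dem_threat", 1:4)])
dat$und_threat <- rowMeans(dat[, paste0("und_threat", 1:4)])

# Standardized Democrat-minus-Republican threat difference
dat$threat_diff_z <- as.numeric(scale(dat$dem_threat - dat$rep_threat))
```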

Results

Analytic strategy

Across experiments, the base R function lm was used for linear regressions. Mixed effects models were fitted using lme4 [51]. Model p-values were calculated using lmerTest [52]. Confidence intervals were calculated via the base R function confint. The emmeans package [53] was used to calculate the estimated marginal means and simple effects tests reported alongside the regression results. P-values for post-hoc tests were adjusted using the Tukey method. When t-tests were employed and group variances were unequal, we used the Welch-Satterthwaite approximation for degrees of freedom.

Characterizing partisan disclosure effects on face impressions

We first tested whether disclosing partisanship more strongly affected impressions than non-disclosed partisanship. Likability and competency choices (Republican = 0, Democrat = 1) were logistically regressed on Trait (competent = 0, likable = 1), Task Version (disclosed labels = 0, non-disclosed labels = 1), Perceiver Political Ideology (standardized around the composite political ideology scores for the sample to have a mean of 0 and a standard deviation of 1), and their interactions as fixed effects. Models with different random effects structures were compared to determine best fit [54]. A first model included random intercepts for participants and face. A second allowed a Trait effect to vary by participants. Because fit did not differ between the models, χ2(2) = .20, p = .91, we report from the simpler model.
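A minimal lme4 sketch of this model and the random-effects comparison is shown below. The trial-level data frame and its column names (exp1, choice, trait, version, ideology_z, pid, face) are assumptions based on the coding scheme described above; the authors’ actual analysis code is available at the OSF link.

```r
# Sketch of the Experiment 1 model: choices (1 = Democrat, 0 = Republican)
# regressed on Trait, Task Version, and standardized perceiver ideology.
# Variable names are assumed; see the OSF repository for the authors' code.
library(lme4)

m_simple <- glmer(
  choice ~ trait * version * ideology_z + (1 | pid) + (1 | face),
  data = exp1, family = binomial
)
m_slope <- glmer(
  choice ~ trait * version * ideology_z + (1 + trait | pid) + (1 | face),
  data = exp1, family = binomial
)

anova(m_simple, m_slope)  # likelihood-ratio comparison of random-effects structures
exp(fixef(m_simple))      # fixed effects as odds ratios (cf. Table 1)
exp(confint(m_simple, parm = "beta_", method = "Wald"))  # Wald 95% CIs as odds ratios
```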

A higher probability of selecting faces disclosed as sharing the perceiver’s partisanship would support a disclosure effect. Main effects of Task Version (more Democrats selected in the disclosed versus non-disclosed task version) and Perceiver Political Ideology (fewer Democrats selected with higher perceiver conservatism) were qualified by a Task Version × Perceiver Political Ideology interaction that partially supported this hypothesis (Table 1A; Fig 1). Main effects should therefore be interpreted within the context of this higher-order interaction. Although the interaction was not further qualified by Trait, post-hoc tests are reported by Trait for completeness. See Table 2A for estimated marginal means.

Table 1. Mixed effects model predicting selected faces in Experiment 1.
Probability of Choosing a Democrat (1) relative to a Republican (0)
a. Perceiver Political Ideology b. Partisan Threat
Predictors Odds Ratio 95% CI p Odds Ratio 95% CI p
(Intercept) 1.11 1.01 – 1.21 .027 1.12 1.02 – 1.23 .016
Task Version [Non-disclosed] 0.85 0.78 – 0.91 < .001 0.84 0.77 – 0.91 < .001
Trait [Likable] 1.00 0.94 – 1.07 .927 1.01 0.95 – 1.07 .871
Ideology (a) / Threat (b) 0.85 0.80 – 0.90 < .001 0.89 0.84 – 0.94 < .001
Task Version [Non-disclosed] * Trait [Likable] 1.01 0.93 – 1.09 .898 1.00 0.92 – 1.09 .969
Task Version [Non-disclosed] * Ideology (a) / Threat (b) 1.15 1.06 – 1.24 < .001 1.12 1.03 – 1.22 .006
Trait [Likable] * Ideology (a) / Threat (b) 0.98 0.92 – 1.04 .437 0.94 0.89 – 1.00 .059
Task Version [Non-disclosed] * Trait [Likable] * Ideology (a) / Threat (b) 1.00 0.93 – 1.09 .929 1.06 0.97 – 1.15 .187

The Task Version reference condition is Disclosed and the Trait reference condition is Competent. Reflecting the parallel nature of these analyses, columns A and B use perceiver political ideology and the difference in perceived partisan threat from Democrats relative to Republicans, respectively, as predictors.

Fig 1. Predicted probability of choosing a Democrat (vs. Republican) as a function of Trait (competent, likable), Task Version (not-disclosed or disclosed labels), and perceiver political ideology.


Points represent the condition means and whiskers represent the standard error of the mean. ** p < .001; NS = non-significant.

Table 2. Estimated marginal means for Experiments 1–2.
a. Experiment 1
Trait Task Version (labels) Perceiver Political Ideology Estimated Marginal Mean [95% CI]
Likable Non-disclosed Liberal .50 [.47, .52]
Likable Disclosed Liberal .57 [.55, .60]
Competent Non-disclosed Liberal .49 [.46, .52]
Competent Disclosed Liberal .57 [.54, .59]
Likable Non-disclosed Conservative .48 [.45, .50]
Likable Disclosed Conservative .48 [.45, .51]
Competent Non-disclosed Conservative .48 [.45, .50]
Competent Disclosed Conservative .49 [.46, .51]
b. Experiment 2
Label Time Perceiver Political Ideology Estimated Marginal Mean [95% CI]
undecided Before Label Liberal 3.76 [3.56, 3.95]
undecided After Label Liberal 3.87 [3.68, 4.07]
Democrat Before Label Liberal 3.75 [3.54, 3.95]
Democrat After Label Liberal 4.02 [3.81, 4.23]
Republican Before Label Liberal 3.77 [3.56, 3.97]
Republican After Label Liberal 3.13 [2.93, 3.34]
undecided Before Label Conservative 3.64 [3.45, 3.84]
undecided After Label Conservative 3.63 [3.44, 3.83]
Democrat Before Label Conservative 3.69 [3.48, 3.90]
Democrat After Label Conservative 3.31 [3.10, 3.52]
Republican Before Label Conservative 3.63 [3.42, 3.83]
Republican After Label Conservative 3.98 [3.77, 4.18]

Liberal/conservative corresponds to -1/+1 SD below/above the mean on the composite political ideology score.

We defined more liberal and more conservative participants as having standardized composite political ideology scores that were, respectively, one standard deviation below and above the sample mean. Consistent with our hypothesis that partisan disclosure would polarize impressions, more liberal participants were more likely to select disclosed versus non-disclosed Democrats as more competent, OR = 1.36, z = 5.44, p < .001, 95% CI [1.18, 1.57], and likable, OR = 1.36, z = 5.41, p < .001, 95% CI [1.17, 1.57]. Inconsistent with this hypothesis, however, no differences emerged for more conservative participants (one standard deviation above the mean composite political ideology score) when choosing the more competent, OR = 1.03, z = .50, p = .96, 95% CI [.89, 1.19], or likable, OR = 1.02, z = .34, p = .99, 95% CI [.88, 1.18] face.
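These simple effects can be computed with emmeans, as sketched below for the glmer object from the earlier sketch (m_simple); estimated marginal means on the probability scale correspond to Table 2A, and disclosed versus non-disclosed contrasts are returned as odds ratios. Variable names remain assumptions.

```r
# Estimated marginal means and disclosed vs. non-disclosed contrasts at
# +/- 1 SD of standardized ideology, using m_simple from the sketch above.
library(emmeans)

emm <- emmeans(
  m_simple, ~ version + trait + ideology_z,
  at = list(ideology_z = c(-1, 1)),  # more liberal (-1 SD) and more conservative (+1 SD)
  type = "response"                  # back-transformed probabilities, as in Table 2A
)
emm                                                          # estimated marginal means
pairs(emm, by = c("trait", "ideology_z"), adjust = "tukey")  # contrasts as odds ratios
```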

Characterizing partisanship effects on face impressions by the veracity of disclosed labels

Because people can often detect partisanship from faces alone [e.g., 29], our next analyses concerned determining whether the veracity of disclosed labels affected polarized face impressions based on perceiver political ideology. First, we examined whether face impressions were polarized by non-disclosed partisanship. Among participants for whom party labels were not disclosed, however, perceiver political ideology did not affect face selections, OR = .98, p = .49, 95% CI [.92, 1.04].

We next tested whether the veracity of disclosed partisanship affected face impressions. Likability and competency choices (Republican = 0, Democrat = 1) were logistically regressed on Trait (competent = 0, likable = 1), Disclosed Label Veracity (accurate = 1, inaccurate = 0), Perceiver Political Ideology, and their interactions as fixed effects (Table 3) among participants who saw the labels. The random effects structure included intercepts for participants and faces. A main effect of Perceiver Political Ideology reflected fewer selected Democrats with higher perceiver conservatism. A main effect of Disclosed Label Veracity reflected fewer selected Democrats with accurate versus inaccurate labels. This pattern may seem surprising: given the liberal skew of the sample, one might expect more selections of accurately labeled Democrats because any naturally detected partisanship would match the labels. Because differences between faces signal the likelihood of winning [48], it could also be that inaccurate labels resulted in more “Democrats” with positively interpreted facial cues. Suggesting that disclosed labels affected impressions irrespective of their veracity, however, no interaction between Disclosed Label Veracity and Perceiver Political Ideology emerged.

Table 3. Mixed effects model predicting selected faces based on the veracity of partisan labels in Experiment 1.
Probability of Choosing a Democrat (1) relative to a Republican (0)
Predictors Odds Ratios 95% CI p
(Intercept) 1.20 1.10 – 1.30 < .001
Trait [Likable] 0.99 0.91 – 1.08 .798
Label Veracity [Accurate] 0.85 0.78 – 0.92 < .001
Perceiver Political Ideology 0.87 0.80 – 0.94 .001
Trait [Likable] * Label Veracity 1.03 0.91 – 1.16 .651
Trait [Likable] * Perceiver Political Ideology 0.98 0.90 – 1.06 .568
Label Veracity [Accurate] * Perceiver Political Ideology 0.97 0.89 – 1.05 .429
Trait [Likable] * Label Veracity [Accurate] * Perceiver Political Ideology 1.00 0.89 – 1.12 .966

This analysis is based on the subset of participants for whom labels were disclosed (N = 82), and the Trait reference condition is Competent.

Exploring partisan disclosure effects on face impressions

Exploratory analyses identified how the above-described partisan disclosure effects were paralleled by perceived partisan threat.

Relations between perceiver political ideology and perceived partisan threat. Preliminary correlations showed that perceiver political ideology differentially related to threat perceptions of Republicans, Democrats, and undecideds (Table 4). We next regressed the perceived threat ratings on political party (Republican, Democrat, undecided) and perceiver political ideology. The model was significant, R2 = .23, p < .001 (Table 5A). Patterns for more liberal participants were consistent with their face impressions. More liberal participants perceived Republicans (M = 4.16, SE = .14) as more threatening than Democrats (M = 2.29, SE = .14), b = 1.87, t = 9.75, p < .001, 95% CI [1.42, 2.32], and undecideds (M = 2.16, SE = .14), b = 2.00, t = 10.43, p < .001, 95% CI [1.55, 2.45]. They perceived Democrats and undecideds as similarly threatening, b = .13, t = .68, p = .78, 95% CI [-.32, .58]. Thus, more liberal participants showed both ideologically polarized face impressions and partisan threat perceptions. There were no three-way interactions with Task Version, bs < .42, ps ≥ .12.
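The Table 5 models can be sketched as an ordinary regression on long-format threat ratings (one row per participant and party), as below. The reshaped data frame and its column names are assumptions, not the authors’ code.

```r
# Sketch of the Table 5 regression: composite threat predicted by political
# party (reference = undecided), standardized ideology, and their interaction.
# 'threat_long' (one row per participant x party) and its columns are assumed.
threat_long$party <- relevel(factor(threat_long$party), ref = "undecided")

threat_fit <- lm(threat ~ party * ideology_z, data = threat_long)
summary(threat_fit)  # R-squared and coefficients (cf. Table 5A)
confint(threat_fit)  # 95% confidence intervals
```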

Table 4. Means (M), standard deviations (SD), and intercorrelations (r) between political ideology, perceived threat, and political affiliation in Experiment 1 (lower diagonal) and 2 (upper diagonal).
Measure  Exp1 M [SD]  Exp2 M [SD]  1  2  3  4
1. Political ideology  4.80 [1.91]  5.03 [1.79]  --  -.50** [-.64, -.33]  .48** [.31, .62]  -.00 [-.21, .20]
2. Republican threat  3.33 [1.74]  3.24 [1.72]  -.48** [-.59, -.36]  --  .07 [-.14, .27]  .36** [.17, .53]
3. Democrat threat  2.58 [1.22]  2.95 [1.34]  .24** [.09, .37]  .31** [.17, .43]  --  .30** [.11, .48]
4. Independent/undecided threat  2.19 [1.11]  2.42 [1.17]  .03 [-.12, .17]  .38** [.25, .50]  .56** [.45, .66]  --

*p < .05

**p < .01. Numbers within brackets are the 95% confidence intervals. Higher values for political ideology indicate greater conservatism.

Table 5. Regression models predicting partisan threat perceptions (Republican, Democrat, undecided) from perceiver political ideology in Experiments 1 & 2.
a. Experiment 1 b. Experiment 2
Predictors Estimates 95% CI p Estimates 95% CI p
(Intercept) 2.19 2.00–2.38 < .001 2.42 2.15 – 2.68 < .001
Political Party [Democrat] 0.39 0.12–0.65 .005 0.54 0.16 – 0.91 .005
Political Party [Republican] 1.13 0.86–1.39 < .001 0.83 0.45 – 1.20 < .001
Perceiver Political Ideology 0.03 -0.16–0.22 .762 -0.00 -0.27 – 0.26 .990
Political Party [Democrat] * Perceiver Political Ideology 0.26 -0.01 – 0.52 .060 0.64 0.27 – 1.02 .001
Political Party [Republican] * Perceiver Political Ideology -0.87 -1.14 – -0.60 < .001 -0.86 -1.23– -0.48 < .001

The Political Party reference condition is undecided.

Patterns for more conservative participants were also consistent with their face impressions. More conservative participants perceived Republicans (M = 2.48, SE = .14) versus Democrats (M = 2.87, SE = .14), b = -.39, t = -2.02, p = .11, 95% CI [-.84, .06], and Republicans versus undecideds (M = 2.22, SE = .14), b = .26, t = -1.32, p = .39, 95% CI [-.20, .71], as similarly threatening. However, they perceived Democrats as more threatening than undecideds, b = .64, t = 3.34, p = .003, 95% CI [.19, 1.10]. Thus, more conservative participants showed neither ideologically polarized face impressions nor polarized partisan threat perceptions.

Relations between partisan disclosure effects and perceived partisan threat. Next, we verified that the difference in partisan threat perceptions of Democrats and Republicans (see above) positively related to perceiver political ideology, r(179) = .63, p < .001. This relation validated that people self-reporting as more conservative found Democrats more threatening relative to Republicans. We therefore explored whether this difference in perceived partisan threat had similar partisan disclosure effects as perceiver political ideology on face impressions (Table 1B). Across traits, the difference in perceived partisan threat indeed qualified a Task Version effect on face impressions just as perceiver political ideology did. Participants who perceived Republicans as being more threatening than Democrats (i.e., participants one standard deviation below the mean threat difference score) were more likely to select disclosed versus non-disclosed Democrats as more competent, OR = 1.34, z = 4.98, p < .001, 95% CI [1.15, 1.56], and likable, OR = 1.41, z = 5.88, p < .001, 95% CI [1.22, 1.64]. Similar to the above-reported analyses with perceiver political ideology, no difference emerged for participants who perceived Democrats as being more threatening than Republicans (i.e., participants one standard deviation above the mean threat difference score) when choosing the more competent, OR = 1.07, z = 1.11, p = .68, 95% CI [.92, 1.24], or likable, OR = 1.01, z = .15, p > .99, 95% CI [.87, 1.17] face.

Discussion

Disclosed partisanship polarized face impressions among more liberal, but not more conservative, perceivers. This pattern partially supported that disclosed partisanship polarizes impressions based on perceiver partisanship. These patterns emerged regardless of the disclosed label’s veracity (e.g., it did not matter if an actual Republican was labeled as a Democrat). Thus, simply implying partisanship is enough to polarize face impressions. Although partisanship can be detected from facial cues alone [e.g., 29], non-disclosed partisanship did not polarize face impressions. Although it could be that participants did not detect partisanship from these faces, another possibility is that being asked to evaluate traits overrode undisclosed partisanship effects on face impressions in this task overall [see 48]. Future research may assess this possibility by addressing partisanship effects on face impressions when people are informed, for example, that they are evaluating politicians versus not.

These patterns emerged regardless of whether perceivers selected faces as more likable or competent. Partisanship thus polarizes impressions spanning well-studied primary dimensions of social perception capturing separable ways in which people stereotype others [46]. Future work may replicate these findings while focusing on reaction times to better understand why they emerged. For example, it could be that label disclosure enables people to require less evidence to judge shared-partisanship targets, relative to opposing ones, as competent and likable. However, it could also be that people more steeply accumulate evidence of competence and likability from shared relative to opposing partisans. Disentangling these possibilities using drift diffusion modeling [e.g., 55] can help clarify the processes underlying face impressions polarized by partisanship.

Prior work suggests that perceived partisan threat drives ideological prejudice [40]. One possibility was thus that the lack of polarized impressions from conservative perceivers would be paralleled by their not perceiving opposing partisans as threatening to the same extent as more liberal perceivers. Indeed, partisan threat perceptions paralleled both more liberal and more conservative participants’ face impressions. Here, more conservative participants did not perceive Democrats as more threatening than Republicans. More liberal participants, by contrast, perceived Republicans as significantly more threatening than Democrats. That more conservative participants perceived Democrats as more threatening than undecideds suggests their threat perceptions were not indiscriminately attenuated, a finding consistent with people treating undecideds more favorably than opposing partisans [26]. Moreover, the above-described findings replicated when replacing perceiver political ideology with perceived partisan threat. That perceiver political ideology and perceived partisan threat were strongly related is consistent with growing political sectarianism in the United States [16]. It also aligns with work showing that threatening contexts polarize valenced face impressions [43]. Speculatively, simple partisan labels may be enough to provide the threat that polarizes first impressions.

Although threat ratings suggested that more conservative perceivers did not find Democrats more threatening than Republicans, significant correlations emerged between perceiver ideology and partisan threat perceptions. What might have caused this inconsistency? One possibility may lie in the college-aged sample recruited for the experiment. College-aged students often show a bias toward perceiving themselves as more conservative than they really are [56]. If the students identifying themselves as more conservative were indeed more liberal than they realized, their threat perceptions could have been less extreme than those of the students identifying as more liberal. Indeed, these biased perceptions of one’s own partisanship are more pronounced for conservatives than for liberals [56].

Although these patterns provide some initial correlational evidence that perceived threat may drive impressions of shared versus opposing partisans [e.g., 40], several possibilities remained open for exploration. First, it could be that more liberal and conservative perceivers differentially use partisan labels when forming face impressions. Indeed, Republicans endorse more “Republican-looking” candidates as being likeable and competent [57]. Second, more liberal and conservative perceivers could both be affected by partisan cues in their face impressions but start at different baselines when making them. If true, that would make partisan cue effects on face impressions difficult to detect in a task where impressions were measured using a binary choice. To consider these explanations, Experiment 2 was designed to replicate and extend Experiment 1.

Experiment 2

Experiment 2 replicated and extended Experiment 1 by characterizing whether partisan disclosure elicited changes in face impressions. Here, we tested whether disclosing opposing partisanship negatively changed impressions and whether disclosing shared partisanship positively changed them based on perceiver political ideology. By using a scale to characterize face impressions, we could measure whether more liberal and more conservative perceivers broadly differed in how they approached making face impressions and whether their partisan threat perceptions paralleled their face impressions. Here, people saw a face and evaluated likability before and after the face was paired with a partisan label. Evaluation change from before to after disclosure quantified impression change. If more liberal and conservative perceivers similarly approach making face impressions, we expected people to show positive and negative impression change, respectively, toward faces disclosed as having shared and opposing partisanship. If partisan threat perceptions underlie face impressions when explicit partisan cues are given, we expected impression change to mirror perceived partisan threat. That is, if more conservative individuals perceived Democrats as more threatening than Republicans (unlike Experiment 1), we expected that they would negatively change their impressions of disclosed Democrats. Such a pattern would suggest perceived partisan threat as a concomitant process alongside partisan impression change.

In addition to Republican and Democrat targets, a subset of targets was identified as undecideds. This addition allowed us to make a more nuanced interpretation of impression change based on partisan disclosure and a potential parallel pattern in perceived partisan threat. Politically undecided people vary in their identification with Republicans and Democrats [58], making their partisanship ambiguous and providing a natural control. People over-exclude others from their ingroups [59,60], suggesting positive change might be reserved for ideologically similar targets. If over-exclusion elicits similar change toward any targets who do not share perceiver ideology, similarly negative impression change should emerge for disclosed opposing partisan and undecided faces. This pattern would support change largely explained by shared attributes with perceivers. People, however, behave more favorably toward independent versus opposing partisans [26]. Another possibility is thus that stronger negative change will emerge for opposing partisan versus undecided faces among more extreme partisans because the former reflects a group especially derogated by partisans [16]. The latter possibility, relative to the former, would be more consistent with the expectation that partisan disclosure effects on face impressions are paralleled by perceived partisan threat effects.

Method

Participants

Power analyses [47] targeted 77 participants to detect a moderate political ideology effect (f2 = .15) on impressions before versus after disclosure with 80% power and α = .05. We oversampled for the same reasons as in Experiment 1. Of 101 undergraduates recruited from a large Midwestern university, we excluded seven for failing the manipulation check. The analyzed sample comprised 94 undergraduates (Mage = 18.90 years, SD = 2.43, 64 female, 77 White, 11 Asian, 3 Black, 1 multiple, 1 unknown; 3 Hispanic).
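A sketch of this power analysis with WebPower’s wp.regression is given below; f2, power, and alpha come from the text, while the number of tested predictors is an assumption made for illustration.

```r
# Sketch of the Experiment 2 power analysis (f2 = .15, 80% power, alpha = .05).
# The number of predictors tested (p1) is an assumption, not reported in the text.
library(WebPower)

wp.regression(
  p1 = 3,       # assumed number of predictors tested against an intercept-only model
  f2 = 0.15,
  alpha = 0.05,
  power = 0.80
)
```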

Task

One hundred twenty neutrally expressive younger adult White faces (60 male and 60 female) were drawn from the PAL database [61] on the basis of attractiveness and trustworthiness norms [see 62]. Forty faces each were randomly selected for one of three group categories (i.e., Republican, Democrat, or undecided). Three task versions counterbalanced the partisan label (Republican, Democrat, or undecided) paired with each face on a within-subjects basis. Depicted faces’ actual partisanship was unknown. Male and female faces were equally represented across the three categories. We included female faces because, unlike Experiment 1, people did not choose between two partisan faces. Two ANOVAs showed that male and female faces paired with each category did not differ on attractiveness or trustworthiness (all Fs < 3.10, all ps > 0.08).

In each trial, participants first saw a face for 1000ms followed by a scale (1 [extremely dislike] to 7 [extremely like]). They were told their self-paced likability evaluations should be based on the picture. Immediately following, they were told they would be provided information indicating the political partisanship of the individuals and that they would evaluate them again. In a second 1000ms face presentation, a colored border surrounding the photo denoted political partisanship. Republicans were denoted with red borders, Democrats with blue borders, and undecideds with yellow borders. Participants then made another self-paced likability evaluation. There was a 500ms blank screen between each trial. Participants verified color designations before and after the task, which served as a manipulation check.

Partisanship characterization

Participants completed the same measures as in Experiment 1. We summarize data assessing perceiver political ideology and perceived threat here. The political ideology items (Cronbach’s α = .88) were averaged to create a composite political ideology score (M = 5.03, SD = 1.79). The threat items for each party (Cronbach’s α at least 0.85) were averaged to create three composite threat scores.

Results

Characterizing impression modulation by partisan disclosure

Likability evaluations were regressed on Time (before label = 0, after label = 1), Partisan Label (Dummy coded using “undecided” as the reference of 0: Republican, Democrat, undecided), Perceiver Political Ideology (standardized as in Experiment 1), and their interactions as fixed effects. As in Experiment 1, the reported model included random intercepts for participants and face; here, it also allowed a Partisan Label effect to vary by participants. This model fit better than one with only random intercepts for participants and face, χ2(5) = 645.53, p < .001. A third model allowing a Partisan Label by Time interaction to vary by participants failed to converge.
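A minimal lmerTest sketch of this model comparison appears below; as before, the data frame and column names (exp2, liking, time, label, ideology_z, pid, face) are assumptions, and the authors’ actual code is available on OSF.

```r
# Sketch of the Experiment 2 model: likability (1-7) regressed on Time,
# Partisan Label (reference = undecided), and standardized perceiver ideology.
# Variable names are assumed; see the OSF repository for the authors' code.
library(lme4)
library(lmerTest)  # Satterthwaite p-values for lmer fixed effects

exp2$label <- relevel(factor(exp2$label), ref = "undecided")

m_int <- lmer(
  liking ~ time * label * ideology_z + (1 | pid) + (1 | face),
  data = exp2, REML = FALSE
)
m_slope <- lmer(
  liking ~ time * label * ideology_z + (1 + label | pid) + (1 | face),
  data = exp2, REML = FALSE
)

anova(m_int, m_slope)  # the random-slope model fit better, as reported above
confint(m_slope, parm = "beta_", method = "Wald")  # 95% CIs for fixed effects (cf. Table 6)
```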

As hypothesized, interactions supported positive and negative impression change based on Partisan Disclosure and Political Ideology (Table 6A; Fig 2). More liberal participants liked people less after seeing Republican borders, b = -.63, z = -15.18, p < .001, 95% CI [-.75, -.52], and more after seeing Democrat borders, b = .27, z = 6.51, p < .001, 95% CI [.15, .39]. Impressions did not change after seeing undecided borders, b = .12, z = 2.78, p = .06, 95% CI [-.00, .24]. More conservative participants liked people more after seeing Republican borders, b = .35, z = 8.37, p < .001, 95% CI [.23, .47], and less after seeing Democrat borders, b = -.38, z = -9.01, p < .001, 95% CI [-.50, -.26]. Impressions did not change after seeing undecided borders, b = -.01, z = -.26, p > .99, 95% CI [-.13, .11]. See Table 2B for estimated marginal means.

Table 6. Linear mixed effects model predicting evaluations in Experiment 2.
a. Perceiver Political Ideology b. Partisan Threat
Predictors Estimates 95% CI p Estimates 95% CI p
(Intercept) 3.70 3.56 – 3.84 < .001 3.71 3.57 – 3.85 < .001
Label [Republican] -0.00 -0.11 – 0.10 .971 -0.01 -0.12 – 0.10 .857
Label [Democrat] 0.02 -0.08 – 0.11 .736 0.01 -0.09 – 0.10 .912
Time [After Label] 0.05 -0.01 – 0.11 .074 0.05 -0.01 – 0.11 .079
Ideology (a) / Threat (b) -0.06 -0.20 – 0.08 .405 -0.00 -0.14 – 0.14 .972
Label [Republican] * Time [After Label] -0.19 -0.28 – -0.11 < .001 -0.21 -0.29 –-0.13 < .001
Label [Democrat] * Time [After Label] -0.10 -0.19 – -0.02 .012 -0.09 -0.17 – -0.01 .026
Label [Republican] * Ideology (a) / Threat (b) -0.01 -0.12 – 0.09 .829 -0.02 -0.13 – 0.08 .665
Label [Democrat] * Ideology (a) / Threat (b) 0.03 -0.06 – 0.12 .536 0.02 -0.07 – 0.12 .657
Time [After Label] * Ideology (a) / Threat (b) -0.06 -0.12 – -0.01 .032 -0.01 -0.07 – -0.05 .656
Label [Republican] * Time [After Label] * Ideology (a) / Threat (b) 0.56 0.47 – 0.64 < .001 0.53 0.44 – 0.61 < .001
Label [Democrat] * Time [After Label] * Ideology (a) / Threat (b) -0.26 -0.34 – -0.18 < .001 -0.34 -0.42 – -0.25 < .001

The Label reference condition is undecided and the Time reference condition is Before Label. Reflecting the parallel nature of these analyses, columns A and B use perceiver political ideology and partisan threat, respectively, as predictors.

Fig 2. Predicted likability as a function of Time, Label (party of evaluated face), and composite political ideology (+1 SD = more conservative; -1 SD = more liberal).


Points represent the condition means and whiskers represent the standard error of the mean.

Exploring partisan disclosure effects on face impressions

Relations between perceiver political ideology and perceived partisan threat. Correlations again showed that perceiver political ideology related to different threat perceptions of Republicans, Democrats, and undecideds (Table 4). We then regressed the perceived threat ratings on political party and standardized political ideology scores. The model was significant, R2 = .23, p < .001 (Table 5B). More liberal participants perceived Republicans (M = 4.10, SE = .19) as more threatening than Democrats (M = 2.31, SE = .19), b = 1.79, z = 6.68, p < .001, 95% CI [1.16, 2.42], and undecideds (M = 2.42, SE = .19), b = 1.69, z = 6.30, p < .001, 95% CI [1.06, 2.32], but perceived Democrats and undecideds as similarly threatening, b = -.10, z = .39, p = .92, 95% CI [-.74, .53]. Thus, more liberal participants showed both ideologically polarized face impressions and partisan threat perceptions.

Critically, more conservative participants perceived Democrats as more threatening than Republicans, b = 1.21, z = 4.48, p < .001, 95% CI [.57, 1.85]. More conservative participants perceived Democrats (M = 3.59, SE = .19) as more threatening than undecideds (M = 2.41, SE = .19), b = 1.18, z = 4.37, p < .001, 95% CI [.54, 1.82], and perceived Republicans (M = 2.38, SE = .19) and undecideds as similarly threatening, b = .03, z = .11, p = .99, 95% CI [-.67, .61]. Thus, more conservative participants showed both ideologically polarized face impressions and partisan threat perceptions.

Relations between partisan disclosure effects and perceived partisan threat. As in Experiment 1, perceiver political ideology positively related to the standardized difference in partisan threat perceptions of Democrats relative to Republicans, r(92) = .72, p < .001. Interactions supported positive and negative impression change based on Partisan Disclosure and Threat (Table 6B), again paralleling the results using perceiver political ideology. Participants who perceived Republicans as more threatening liked people less after seeing Republican borders, b = -.67, z = -15.95, p < .001, 95% CI [-.79, -.55], and more after seeing Democrat borders, b = .31, z = 7.36, p < .001, 95% CI [.19, .43]. Impressions did not change after seeing undecided borders, b = .07, z = 1.55, p = .63, 95% CI [-.05, .18]. Participants who perceived Democrats as more threatening liked people more after seeing Republican borders, b = .36, z = 8.47, p < .001, 95% CI [.24, .47], and less after seeing Democrat borders, b = -.39, z = 9.32, p < .001, 95% CI [-.51, -.27]. Impressions did not change after seeing undecided borders, b = .04, z = 0.94, p = .94, 95% CI [-.08, .16].

Discussion

People positively changed their impressions of disclosed ingroup partisans and negatively changed impressions of disclosed opposing partisans based on their own political ideology. Just as salient behaviors change impressions of faces [e.g., 15], simply labeling faces with partisan cues does too, and the extent of the resulting polarization varies by people’s partisan similarity to those cues. One question was whether negative change toward opposing partisans emerged because not sharing partisanship denotes a negative group or because opposing partisans specifically elicit negativity. Indeed, favoritism toward people sharing values can emerge without derogation toward those who do not [36, but see 63]. Because people behave more favorably to independents than to opposing partisans [26], examining impression change toward undecided and opposing partisan faces addressed these possibilities.

More conservative perceivers did not change impressions of undecideds after disclosure. Yet, they evaluated disclosed Republicans as more likable than undecideds, b = .35, z = 4.56, p < .001, 95% CI [.13, .56]. These findings suggest favoring people with likely shared values in the absence of derogating undecideds, supporting that “ingroup love” motivates behavior over “outgroup hate” [e.g., 64]. Supporting this possibility, more liberal perceivers also did not change impressions of disclosed undecideds. These perceivers, however, did not differ in their impressions of disclosed Democrats versus undecideds. Speculatively, more conservative and liberal people may have different perceptions of the similarity of their group and undecideds, perhaps based on how they view the current polarized political climate. Future work may use a control condition with no denoted partisanship to examine this possibility.

Complementing Experiment 1, partisan threat perceptions paralleled impression change. Moreover, replacing perceiver political ideology with partisan threat perceptions in our model yielded the same patterns of impression modulation. Notably, whereas opposing partisans were perceived as more threatening than similar partisans, undecideds fell at a middle ground. These threat perceptions suggested that the more conservative participants in Experiment 2 may have been more likely to outwardly derogate Democrats [65], potentially explaining why conservative ideology affected impressions in Experiment 2, but not in Experiment 1. Correlations supported this explanation: the positive relation between a more conservative ideology and perceptions of Democrats as threatening was twice as large in Experiment 2 as in Experiment 1.

Further, that more conservative perceivers changed their impressions based on disclosed partisanship did not support the explanations that they simply used facial stereotypes [e.g., 57] more than explicit partisan cues when evaluating faces or that conservatives and liberals start their impressions in different places (e.g., starting more positively). Rather, these findings raise the possibility that partisan threat perceptions elicit changes to face impressions. Complementary recent work [40] suggests that perceived partisan threat may be more likely to mirror face impressions of partisans. The extent to which people dehumanize opposing partisans may thus depend, in part, on their perception of group-based threats from them [66].

General discussion

The current work identified political ideology as affecting face impressions of disclosed partisans. Experiment 1 showed that disclosed partisanship more strongly affects face impressions than non-disclosed partisanship even when that disclosure is inaccurate. Experiment 2 showed that people change their impressions of disclosed partisans based on their own ideological partisanship. Across experiments, partisan disclosure effects on face impressions were paralleled by the extent of perceived partisan threat. These findings extend work showing partisan differences in face impressions in romantic contexts [17] to the general face impressions eliciting everyday approach and avoidance decisions [67]. They also build on work showing that perceived threat drives negative impressions of opposing partisans [40] by showing that partisan threat perceptions parallel partisan disclosure effects on face impressions. It will be important for future work to experimentally manipulate partisan threat to establish it as a causal mechanism.

Salient behavioral information elicits updated face impressions [15]. The current findings show that disclosed partisanship is salient enough to elicit impression change, and this effect is pronounced among people with strong political ideologies and perceptions of partisan threat. Indeed, Experiment 1 showed partisan disclosure effects only among more liberal perceivers, and, correspondingly, only more liberal perceivers reported perceiving Republicans as especially threatening. When more liberal and conservative perceivers evaluated opposing partisans as threatening in Experiment 2, partisan disclosure effects on face impressions emerged across perceivers. Experiment 2 also showed that simply labeling people as sharing partisanship elicits more positive impressions almost immediately after evaluating faces, consistent with ingroup favoritism when membership is arbitrarily determined [37–39].

Although ideology effects on face impressions were paralleled by partisan threat perception effects across experiments, it is worth considering why inconsistencies across experiments might emerge. One previously discussed possibility regarded college students self-reporting being more conservative than they are when ideology is more objectively assessed [56]. Potential conflicts between self-reported and actual ideologies could lead to inconsistencies both within- and across-experiments. Speculatively, more objective ideology assessments could, in part, resolve inconsistencies. It could be that factors that are beyond the scope of the current work interfaced with perceived threat and ideology to relate to impressions. For example, people who have high actual [68] or even imagined [69] contact with opposing partisans have less affective polarization, findings that broadly reflect work on intergroup contact to reduce prejudice [70]. Future work may consider the extent to which relative partisan contact or isolation interfaces with perceived threat to affect face impressions separably or interactively.

The current work has broad implications for partisan interactions. Impressions of faces affect countless behaviors [e.g., 33]. Notably, outgroup disclosure quickly elicits avoidance tendencies [71] that are likely related to the communicative hesitation promoting intergroup tension [72]. The current work raises the possibility that this tension may reflect, in part, negative impressions of faces from people who perceive opposing partisans as especially threatening. Indeed, more negative impressions of faces are theorized to reflect a motivation to avoid them [67]. Speculatively, some people’s more negative impressions of faces disclosed as opposing partisans may perpetuate overall intergroup tensions. Future work can disentangle the relationship between political ideology and partisan threat by experimentally manipulating threat perceptions. This work would examine whether threat is a core feature of ideology or whether there are contexts where ideological differences do not coincide with partisan threat and its pernicious consequences.

An open question regards whether polarized impressions based on political ideology can be changed. Indeed, to the extent that valence is a fundamental dimension of face evaluation [73] relating to countless interpersonal outcomes [e.g., 19], more positive face impressions may be necessary to mitigate growing political sectarianism in the United States [16]. Future work may identify positive behavioral cues that are enough to counteract opposing partisan cues to begin addressing this possibility. For example, given that people place different weight on positive and negative morality- and competence-related behavioral information when updating impressions [74], one potential area for fruitful work would be to determine how behaviors in different domains may mitigate negative impressions of opposing partisans. Other work aimed at creating more equitable partisan interactions may consider interventions that address impressions of faces [e.g., 75] and assess how long-lasting intervention effects may be.

The current work also raises interesting avenues for future basic person perception research. Because people have stereotypic visualizations of group members [76], for example, disclosed partisanship may change impressions only to the extent that faces match the stereotypic prototypes held by perceivers. Indeed, relative partisanship is often interpreted as reflecting group divisions [7,25–27]. To further characterize how disclosed partisanship affects face impressions, future work can vary the characteristics of faces disclosed as partisans (e.g., trustworthy or untrustworthy) and address disclosure effects using both implicit and explicit measures. Such manipulations can clarify the strength of disclosure effects on impressions and the levels at which they manifest. Moreover, it would be worthwhile to test how changing party affiliations or knowledge of a target’s within-party disagreement affects face impressions. It could be that partisanship polarizes impressions only to the extent that partisans are perceived as being loyal to their party.

Simply labeling people as political partisans shifts impressions of their faces. These findings have implications for when people might disclose their partisanship to others. Based on Experiment 2, for example, people might avoid negative impressions by not disclosing their partisanship until they are in an inclusive space and perceived as relatively non-threatening. Shaping initial impressions of faces may be a first step by which disclosing political partisanship affects countless aspects of social interaction, illustrating one way in which political partisanship shapes social cognition.

Supporting information

S1 File

(DOCX)

Data Availability

Additional methods, data, and code for all experiments are available at the Open Science Framework: https://osf.io/9khta.

Funding Statement

This research was supported by grant numbers KL2TR002530 and UL1TR002529 (A. Shekhar, PI) from the National Institutes of Health, National Center for Advancing Translational Sciences (https://ncats.nih.gov/), Clinical and Translational Sciences Award to A.C.K. The authors declare no conflicts of interest. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1.DiMaggio P., Evans J., and Bryson B., Have Americans’ social attitudes become more polarized? American Journal of Sociology, 1996. 102(3): p. 690–755. [Google Scholar]
  • 2.DellaPosta D., Shi Y., and Macy M., Why do liberals drink lattes? American Journal of Sociology, 2015. 120(5): p. 1473–1511. [DOI] [PubMed] [Google Scholar]
  • 3.Westfall J., et al., Perceiving political polarization in the United States: Party identity strength and attitude extremity exacerbate the perceived partisan divide. Perspectives on Psychological Science, 2015. 10(2): p. 145–158. [DOI] [PubMed] [Google Scholar]
  • 4.Hetherington M., Putting polarization in perspective. British Journal of Political Science, 2009. 39(2): p. 413–448. [Google Scholar]
  • 5.Baldassarri D. and Gelman A., Partisans without constraint: Political polarization and trends in American public opinion. American Journal of Sociology, 2008. 114(2): p. 408–446. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Dimock M. and Carroll D., Political polarization in the American public: how increasing ideological uniformity and partisan antipathy affect politics, compromise, and everyday life, P.R. Center, Editor. 2014: Washington, DC. [Google Scholar]
  • 7.Brandt M., et al., The ideological-conflict hypothesis: Intolerance among both liberals and conservatives. Current Directions in Psychological Science, 2014. 23: p. 27–34. [Google Scholar]
  • 8.Morgan G., Mullen E., and Skitka L., When values and attributions collide: Liberals’ and conservatives’ values motivate attributions for alleged misdeeds. Personality and Social Psychology Bulletin, 2010. 36: p. 1241–1254. doi: 10.1177/0146167210380605 [DOI] [PubMed] [Google Scholar]
  • 9.Skitka L., Bauman C., and Sargis E., Moral conviction: Another contributor to attitude strength or something more? Journal of Personality and Social Psychology, 2005. 88: p. 895–917. [DOI] [PubMed] [Google Scholar]
  • 10.Sklenar A., et al., Person memory mechanism underlying approach and avoidance judgments of social targets. Social Cognition, in press. [Google Scholar]
  • 11.Kerr J., Panagopoulos C., and van der Linden S., Political polarization on COVID-19 pandemic response in the United States. Personality and Individual Differences, 2021. 179: p. 110892. doi: 10.1016/j.paid.2021.110892 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Todorov A., et al., Social attributions from faces: determinants, consequences, accuracy, and functional significance. Annual Review of Psychology, 2015. 66: p. 519–545. [DOI] [PubMed] [Google Scholar]
  • 13.Zebrowitz L. and Montepare J., Social psychological face perception: Why appearance matters. Social and Personality Psychology Compass, 2008. 2(3): p. 1497–1517. doi: 10.1111/j.1751-9004.2008.00109.x [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Wilson J. and Rule N., Facial trustworthiness predicts extreme criminal-sentencing outcomes. Psychological Science, 2015. 26(8): p. 1325–1331. [DOI] [PubMed] [Google Scholar]
  • 15.Shen X., Mann T., and Ferguson M., Beware a dishonest face?: Updating face-based implicit impressions using diagnostic behavioral information. Journal of Experimental Social Psychology, 2020. 86: p. 103888. [Google Scholar]
  • 16.Finkel E., et al., Political sectarianism in America. Science, 2020. 370: p. 533–536. [DOI] [PubMed] [Google Scholar]
  • 17.Mallinas S., Crawford J., and Cole S., Political opposites do not attract: The effects of ideological dissimilarity on impression formation. Journal of Social and Political Psychology, 2018. 6(1): p. 49–75. [Google Scholar]
  • 18.Nicholson S., et al., The politics of beauty: The effects of partisan bias on physical attractiveness. Political Behavior, 2016. 38: p. 883–898. [Google Scholar]
  • 19.van’t Wout M. and Sanfey A., Friend or foe: The effect of implicit trustworthiness judgments in social decision-making. Cognition, 2008. 108: p. 796–803. doi: 10.1016/j.cognition.2008.07.002 [DOI] [PubMed] [Google Scholar]
  • 20.Cassidy B. and Krendl A., Believing is seeing: Arbitrary stigma labels affect the visual representation of faces. Social Cognition, 2018. 36(4): p. 381–410. [Google Scholar]
  • 21.Ratner K., et al., Visualizing minimal ingroup and outgroup faces: implications for impressions, attitudes, and behavior. Journal of Personality and Social Psychology, 2014. 106(6): p. 897–911. [DOI] [PubMed] [Google Scholar]
  • 22.Blair I., Judd C., and Fallman J., The automaticity of race and afrocentric facial features in social judgments. Journal of Personality and Social Psychology, 2004. 87(6): p. 763–778. [DOI] [PubMed] [Google Scholar]
  • 23.Motyl M., “If he wins, I’m moving to Canada”: Ideological migration threats following the 2012 U.S. presidential election. Analyses of Social Issues and Public Policy, 2014. 14: p. 123–136. [Google Scholar]
  • 24.Motyl M., et al., How ideological migration geographically segregates groups. Journal of Experimental Social Psychology, 2014. 51: p. 1–14. [Google Scholar]
  • 25.Iyengar S., Sood G., and Lelkes Y., Affect, not ideology: A social identity perspective on polarization. Public Opinion Quarterly, 2012. 76(3): p. 405–431. [Google Scholar]
  • 26.Iyengar S. and Westwood S., Fear and loathing across party lines: New evidence on group polarization. American Journal of Political Science, 2015. 59: p. 690–707. [Google Scholar]
  • 27.Brandt M., Predicting ideological prejudice. Psychological Science, 2017. 28: p. 713–722. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Tskhay K. and Rule N., Accuracy in categorizing perceptually ambiguous groups: a review and meta-analysis. Personality and Social Psychology Review, 2013. 17: p. 72–86. [DOI] [PubMed] [Google Scholar]
  • 29.Rule N. and Ambady N., Democrats and Republicans can be differentiated from their faces. PLOS one, 2010. 5: p. 1–7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Samochowiec J., Wanke M., and Fiedler K., Political ideology at face value. Social Psychological and Personality Science, 2010. 1(3): p. 206–213. [Google Scholar]
  • 31.Cikara M., et al., Decoding “us” and “them”: neural representations of generalized group concepts. Journal of Experimental Psychology: General, 2017. 146(5): p. 621–631. [DOI] [PubMed] [Google Scholar]
  • 32.Bhattacharya R., Barton S., and Catalan J., When good news is bad news: psychological impact of false positive diagnosis of HIV. AIDS Care, 2008. 20(5): p. 560–564. [DOI] [PubMed] [Google Scholar]
  • 33.Jaeger B., et al., Can we reduce facial biases? Persistent effects of facial trustworthiness on sentencing decisions. Journal of Experimental Social Psychology, 2020. 90: p. 104004. [Google Scholar]
  • 34.Brambilla M., et al., Changing impressions: Moral character dominates impression updating. Journal of Experimental Social Psychology, 2019. 82: p. 64–73. [Google Scholar]
  • 35.Ferguson M., et al., When and how implicit first impressions can be updated. Current Directions in Psychological Science, 2019. 28(4): p. 331–336. [Google Scholar]
  • 36.Brewer M., The psychology of prejudice: ingroup love and outgroup hate? Journal of Social Issues, 1999. 55(3): p. 429–444. [Google Scholar]
  • 37.Tajfel H., Experiments in intergroup discrimination. Scientific American, 1970. 223: p. 96–102. [PubMed] [Google Scholar]
  • 38.Tajfel H., et al., Social categorization and intergroup behavior. European Journal of Social Psychology, 1971. 1: p. 149–178. [Google Scholar]
  • 39.Tajfel H. and Turner J., An integrative theory of intergroup conflict, in Organizational Identity: A Reader. 1979. p. 56–65. [Google Scholar]
  • 40.Renstrom E., Back H., and Carroll R., Intergroup threat and affective polarization in a multi-party system. Journal of Social and Political Psychology, 2021. 9(2): p. 553–576. [Google Scholar]
  • 41.Stephan W., Ybarra O., and Morrison K., Intergroup threat theory, in Handbook of Prejudice, Stereotyping, and Discrimination, Nelson T., Editor. 2009, Lawrence Erlbaum Associates: Mahwah, NJ, USA. p. 255–278. [Google Scholar]
  • 42.Marcus G., Neuman W., and MacKuen M., Affective intelligence and political judgment. 2000, Chicago, IL: The University of Chicago Press. [Google Scholar]
  • 43.Mattavelli S., Masi M., and Brambilla M., Untrusted under threat: On the superior bond between trustworthiness and threat in face-context integration. Cognition and Emotion, 2022. [DOI] [PubMed] [Google Scholar]
  • 44.Van Bavel J., Packer D., and Cunningham W., The neural substrates of in-group bias: a functional magnetic resonance imaging investigation. Psychological Science, 2008. 19(11): p. 1131–1139. [DOI] [PubMed] [Google Scholar]
  • 45.Fiske S., Cuddy A., and Glick P., Universal dimensions of social cognition: warmth and competence. Trends in Cognitive Sciences, 2007. 11(2): p. 77–83. [DOI] [PubMed] [Google Scholar]
  • 46.Fiske S., et al., A model of (often mixed) stereotype content: Competence and warmth respectively follow from perceived status and competition. Journal of Personality and Social Psychology, 2002. 82(6): p. 878–902. [PubMed] [Google Scholar]
  • 47.Zhang Z. and Yuan K., Practical statistical power analysis using Webpower and R. 2018, Granger, IN: ISDSA Press. [Google Scholar]
  • 48.Todorov A., et al., Inferences of competence from faces predict election outcomes. Science, 2005. 308(5728): p. 1623–1626. [DOI] [PubMed] [Google Scholar]
  • 49.Inbar Y. and Lammers J., Political diversity in social and personality psychology. Perspectives on Psychological Science, 2012. 7(5): p. 496–503. [DOI] [PubMed] [Google Scholar]
  • 50.Wilson J. and Rule N., Perceptions of others’ political affiliation are moderated by individual perceivers’ own political attitudes. PLOS one, 2014. 9(4): p. e95431. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51.Bates D., et al., Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 2015. 67(1): p. 1–48. [Google Scholar]
  • 52.Kuznetsova A., Brockhoff P., and Christensen R., lmerTest package: Tests in linear mixed effects models. Journal of Statistical Software, 2017. 82(13): p. 1–26. [Google Scholar]
  • 53.Lenth R., emmeans: Estimated marginal means, aka least-squares means. R package version 1.4.7, 2018. [Google Scholar]
  • 54.Judd C., Westfall J., and Kenny D., Treating stimuli as a random factor in social psychology: A new and comprehensive solution to a pervasive but largely ignored problem. Journal of Personality and Social Psychology, 2012. 103(1): p. 54–69. [DOI] [PubMed] [Google Scholar]
  • 55.Johnson D., et al., Advancing research on cognitive processes in social and personality psychology: A hierarchical drift diffusion model primer. Social Psychological and Personality Science, 2017. 8(4): p. 413–423. [Google Scholar]
  • 56.Zell E. and Bernstein M., You may think you’re right…Young adults are more liberal than they realize. Social Psychological and Personality Science, 2014. 5(3): p. 326–333. [Google Scholar]
  • 57.Olivola C., et al., Republicans prefer Republican-looking leaders: Political facial stereotypes predict candidate electoral success among right-leaning voters. Social Psychological and Personality Science, 2012. 3(5): p. 605–613. [Google Scholar]
  • 58.Hawkins C. and Nosek B., Motivated independence? Implicit party identity predicts political judgments among self-proclaimed independents. Personality and Social Psychology Bulletin, 2012. 38(11): p. 1437–1452. [DOI] [PubMed] [Google Scholar]
  • 59.Leyens J. and Yzerbyt V., The ingroup overexclusion effect: Impact of valence and confirmation on stereotypical information search. European Journal of Social Psychology, 1992. 22: p. 549–569. [Google Scholar]
  • 60.Castano E., et al., Who may enter? The impact of in-group identification on in-group/out-group categorization. Journal of Experimental Social Psychology, 2002. 38: p. 315–322. [Google Scholar]
  • 61.Minear M. and Park D., A lifespan database of adult facial stimuli. Behavior Research Methods, Instruments, & Computers, 2004. 36(4): p. 630–633. [DOI] [PubMed] [Google Scholar]
  • 62.Cassidy B. and Gutchess A., Influences of appearance-behaviour congruity on memory and social judgments. Memory, 2015. 23(7): p. 1039–1055. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 63.Greenwald A. and Pettigrew T., With malice toward none and charity for some: Ingroup favoritism enables discrimination. American Psychologist, 2014. 69(7): p. 669–684. [DOI] [PubMed] [Google Scholar]
  • 64.Halevy N., Bornstein G., and Sagiv L., “Ingroup love” and “outgroup hate” as motives for individual participation in intergroup conflict. Psychological Science, 2008. 19(4): p. 405–411. [DOI] [PubMed] [Google Scholar]
  • 65.Stangor C. and Crandall C., Threat and the social construction of stigma, in The Social Psychology of Stigma, Heatherton T., et al., Editors. 2000, Guilford Press: New York. p. 62–87. [Google Scholar]
  • 66.Cassese E., Partisan dehumanization in American politics. Political Behavior, 2021. 43(1): p. 29–50. [Google Scholar]
  • 67.Todorov A., Evaluating faces on trustworthiness: An extension of systems for recognition of emotions signaling approach/avoidance behaviors. Annals of the New York Academy of Sciences, 2008. 1124: p. 208–224. [DOI] [PubMed] [Google Scholar]
  • 68.Whitt S., et al., Tribalism in America: Behavioral experiments on affective polarization in the Trump era. Journal of Experimental Political Science, 2021. 8: p. 247–259. [Google Scholar]
  • 69.Warner B. and Villamil A., A test of imagined contact as a means to improve cross-partisan feelings and reduce attribution of malevolence and acceptance of political violence. Communication Monographs, 2017. 84(4): p. 447–465. [Google Scholar]
  • 70.Pettigrew T., Intergroup contact theory. Annual Review of Psychology, 1998. 49(1): p. 65–85. [DOI] [PubMed] [Google Scholar]
  • 71.Paladino M. and Castelli L., On the immediate consequences of intergroup categorization: Activation of approach and avoidance motor behavior toward ingroup and outgroup members. Personality and Social Psychology Bulletin, 2008. 34(6): p. 755–768. [DOI] [PubMed] [Google Scholar]
  • 72.Pearson A., et al., The fragility of intergroup relations: Divergent effects of delayed audiovisual feedback in intergroup and intragroup interaction. Psychological Science, 2008. 19(12): p. 1272–1279. [DOI] [PubMed] [Google Scholar]
  • 73.Oosterhof N. and Todorov A., The functional basis of face evaluation. Proceedings of the National Academy of Sciences, 2008. 105(32): p. 11087–11092. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 74.Mende-Siedlecki P., Baron S., and Todorov A., Diagnostic value underlies asymmetric updating of impressions in the morality and ability domains. The Journal of Neuroscience, 2013. 33(50): p. 19406–19415. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 75.Chua K. and Freeman J., Facial stereotype bias is mitigated by training. Social Psychological and Personality Science, 2020. [Google Scholar]
  • 76.Dotsch R., et al., Ethnic out-group faces are biased in the prejudiced mind. Psychological Science, 2008. 19(10): p. 978–980. [DOI] [PubMed] [Google Scholar]

Decision Letter 0

Peter Karl Jonason

1 Jul 2022

PONE-D-22-08989
Disclosing Political Partisanship Polarizes First Impressions of Faces
PLOS ONE

Dear Dr. Cassidy,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Aug 15 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Peter Karl Jonason

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please note that according to our submission guidelines (http://journals.plos.org/plosone/s/submission-guidelines), outmoded terms and potentially stigmatizing labels should be changed to more current, acceptable terminology. To this effect,  “Caucasian” should be changed to “white” or “of [Western] European descent” (as appropriate).

3. Please include your full ethics statement in the ‘Methods’ section of your manuscript file. In your statement, please include the full name of the IRB or ethics committee who approved or waived your study, as well as whether or not you obtained informed written or verbal consent. If consent was waived for your study, please include this information in your statement as well. 


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The present work examines a critical question with important social implications: whether disclosing political affiliation impacts impressions drawn from faces. The authors present two studies that read as methodologically stringent and strong. Overall, I enjoyed reading the paper and think that this work is worth getting published eventually. However, I refrain from making this recommendation at this point without suggesting some major revisions, mostly on the analyses and the interpretation of the results. Below you can find a list of reasons.

Major issues

Theoretical

-The difference between the past work by Mallinas et al. (2018) and the present work is not clear enough. Why would examining the same question in a romantic relationship context be a “limitation”? Is there a reason why the observed effect (e.g., negative attitudes towards those of opposing ideology) may or may not be generalized across different contexts? Perhaps the authors can highlight other procedural differences (consideration of the veracity of political ideology, changes in evaluations, etc.) across that past work and their work early on, as the current framing does not make these differences clear.

-The conceptualization of political identification gets obscure at times. Sometimes they interpret identification in terms of extremity (see page 7, though page numbers are missing on the initial pages). But they are measuring identification in terms of the strength of affiliation, which is clearly different from the extremity. I would suggest clarifying their definition of the concept and staying consistent throughout the paper.

Methods

- Some methodological details were missing, making it challenging for me to interpret the task in detail, especially in Experiment 1. The authors cited an osf page for additional methods, but I could not locate any files about methods on that page.

For instance:

-Which software was used to program the studies?

-How were picture pairs constructed in Experiment 1? Were the pairs randomly selected from lists of Republicans and Democrats for each participant, or were the pairs pre-determined and the same for all participants?

-How did they test recognition of each face (familiarity) after the task in Experiment 1?

-Exclusion criteria were vague for both experiments. How did they determine “not following the task instructions”?

Results

-I think the most major issues were about the chosen analyses and the interpretation of the results.

1. The description of the models made it a bit difficult to comprehend. As far as I understood, in Experiment 1, the DV was a choice (binary; republican = 0, liberal = 1), and the trait of evaluation (competency, likability; how that was coded is unclear) was a predictor. Is that correct? How were the predictors coded specifically? Were they dummy coded, effect coded, etc.? How was political ideology standardized? Was it mean-centered? Specifying all these either in the text or on the table would help with interpreting the results.

2. As the codings are not specified, I have trouble interpreting the tables. For instance, political ideology has a positive estimate of choice (DV). It is mentioned that democrat was coded as 1 and republican as 0 when choosing. I am confused because if higher ideology scores indicate more conservatism, were conservatives more likely to choose democrats as more competent/likable overall? As estimated marginal means suggest otherwise, there should be something I am missing here. The authors go straight to interpreting the post hoc tests without explaining the main tests here. I believe the main analyses require explanation.

3. The motivation behind adding the Trait as a predictor to the model is unclear. Did the authors have any predictions about trait differences? If competence and likability choices are strongly related (it seems so), it would be justifiable to average these scores and create a simpler model. Simplifying the model this way could also help with the convergence of random slopes (which were not included in the veracity analyses, perhaps that was because of a convergence issue?) and would also make a new (missing) model with the perceived threat as another predictor possible and more interpretable (see the analysis suggested below in point 6).

4. The veracity analysis in Experiment 1 is left uninterpreted. What does the significant main effect of veracity in Table 3 mean? Does it mean that the option labeled as “democrat” is evaluated as more competent overall (democrat labeled as democrat > democrat labeled as republican)?

5. The relevance of the partisan affiliation analyses to the study's main purpose was unclear. It is not surprising that ideology relates to relevant partisanship. I would move these analyses to supplementary materials and not interpret them in terms of face perception, as face perception is not part of these models.

6. Partisan threat could have been relevant to the study's main question. Yet again, the threat was analyzed separately from the face perception data, though the findings were interpreted in terms of its relevance to face perception. A direct test of the partisan threat’s role in face choice is missing in both experiments

7. Another standing question: were participants more likely to evaluate the faces of ingroup members (same ideology) as more likable/competent than those of outgroup members (other ideology) when ideology was not disclosed? The authors mention past work suggesting that in the introduction, but they do not report their findings on this question. Veracity was only analyzed among the disclosed ideology condition. Also, the title of these analyses reads a bit misleading: “Characterizing partisan disclosure effect on face impressions by veracity,”: but the manipulation of disclosure (nondisclosed vs. disclosed) was not a predictor in the reported analyses (only disclosed conditions are included).

Discussion

-The authors speculate about the relative roles of partisan affiliation and perceived threat in face impressions. Again, their data should allow them to analyze such relative effects. Why are these analyses unavailable? Perhaps, the studies are underpowered for such analyses, but if that is the case, the authors should at least comment on that. Then, I would recommend not interpreting the results in terms of their parallel to a perceived threat (as there are no direct analyses, these interpretations are too speculative) or, more ideally, conducting a third study to test the relationship between perceived threat and face perception directly.

Typos

P 7. “would the complement”

p. 21 “conservates”

Reviewer #2: In this article, the authors show that providing political identification information shifts initial evaluations (experiment 1) and updating (experiment 2) of explicit competence/likability ratings.

In my review, I have tried to adhere to the following guidelines of plos one “Unlike many journals which attempt to use the peer review process to determine whether or not an article reaches the level of 'importance' required by a given journal, PLOS ONE uses peer review to determine whether a paper is technically rigorous and meets the scientific and ethical standard for inclusion in the published scientific record.”

In my opinion, the present paper clearly meets the above standard of being technically rigorous and ethical. I go through each of the PLOS one criteria point-by-point, then conclude with some final thoughts

1. The study presents the results of primary scientific research.

2. Results reported have not been published elsewhere.

The present manuscript clearly meets the above 2 criteria

3. Experiments, statistics, and other analyses are performed to a high technical standard and are described in sufficient detail.

The present manuscript appears to follow all best practices of complex mixed effects models, and I was encouraged that they used random effects for both subject and stimuli (as suggested by the recent literature that they cite). I have no doubts of the technical integrity of their findings. If anything, they err on the side of reporting too much, and I think they could move some of the less important tables or statistics to a supplement (e.g., it’s probably not necessary to fully report how republicans and democrats vary on political ideology), and I think by carefully considering which analyses are central to their point they could streamline their paper.

4. Conclusions are presented in an appropriate fashion and are supported by the data.

They are. I would suggest three changes for Figure 1 to increase clarity: first, include what -1 and +1 standard deviation on political ideology refers to (so readers don’t have to scroll all the way back to the methods). Second, I think the graph may be clearer if they used facets only for likability vs. competence, and not for political ideology, which could become the x variable, with condition becoming the grouping variable (so, in other words, use aes(x = ideology, color = disclosure)). I think this would make all the values much closer together and easier to compare. Finally, consider including significance bars to highlight which cells are significantly different from one another.

5. The article is presented in an intelligible fashion and is written in standard English.

The article is quite intelligible.

6. The research meets all applicable standards for the ethics of experimentation and research integrity.

I have no reason to doubt the ethicality of the present research

7. The article adheres to appropriate reporting guidelines and community standards for data availability.

The article exceeds these, having both data and code already available. I was further encouraged that the posted code appears to be quite clean and commented well, which is uncommon (especially before a manuscript is accepted).

Together, I believe the present article meets the standards of PLOS One as I understand them. Some final suggestions to the authors for future research that may be of importance to the field:

First, if the authors have reaction times from the first study, it would be interesting to do a drift-diffusion model on the reaction times to see if the presence of ideology is shifting (a) the starting point bias, (b) the rate of accumulation, or both. This might further get at the mechanism of what is going on (i.e., are participants simply requiring less evidence to say the politically consistent individual is competent/likable, or are they accumulating evidence more steeply from individuals who share their ideology?).

Second, the updating question is interesting, and I would be interested to see it followed up with (a) implicit measures, particularly if they deviate from explicit measures, and (b) looking at more nuanced cases, such as finding someone switched political parties. It seems that learning about party affiliation, and how that shifts evaluations, is a potentially fruitful area of future research.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2022 Nov 9;17(11):e0276400. doi: 10.1371/journal.pone.0276400.r002

Author response to Decision Letter 0


16 Aug 2022

Responses to Individual Reviewer Concerns:

Reviewer 1

Comment 1: The difference between the past work by Mallinas et al. (2018) and the present work is not clear enough. Why would examining the same question in a romantic relationship context be a “limitation”? Is there a reason why the observed effect (e.g., negative attitudes towards those of opposing ideology) may or may not be generalized across different contexts? Perhaps the authors can highlight other procedural differences (consideration of the veracity of political ideology, changes in evaluations, etc.) across that past work and their work early on, as the current framing does not make these differences clear.

Action taken: Good point. We have revised the manuscript in several ways to highlight how it builds on past (related) work. The key difference between this manuscript and past work is that past work has not really considered why partisan cues elicit polarized impressions. In our revised manuscript, we leverage new included analyses (also suggested by the reviewer) to more concretely state that a secondary goal of our manuscript was to determine what effects co-occur with ideology effects on polarized impressions of faces that might help to better explain and characterize them. We first make this distinction on page 4 in the introduction, calling back to it on page 7 of the introduction and mentioning it further throughout the discussion sections of each experiment and the general discussion.

Comment 2: The conceptualization of political identification gets obscure at times. Sometimes they interpret identification in terms of extremity (see page 7, though page numbers are missing on the initial pages). But they are measuring identification in terms of the strength of affiliation, which is clearly different from the extremity. I would suggest clarifying their definition of the concept and staying consistent throughout the paper.

Action taken: We agree with the reviewer on this point. To this end, we have removed wording suggesting that we interpret political identification in terms of extremity from the manuscript. Instead, we use the simpler (and intended) definition of political identification as simply where partisans fall on a continuous ideological scale that we define on p. 10-11 in the Experiment 1 method.

Comment 3: Some methodological details were missing, making it challenging for me to interpret the task in detail, especially in Experiment 1. The authors cited an osf page for additional methods, but I could not locate any files about methods on that page.

Action taken: We now explicitly state the name of the file (Supplementary Information.docx) on OSF that provides additional methods information and results on p. 9.

Comment 4: Which software was used to program the studies?

Action taken: We now state on p. 9 that we used E-Prime 2.0 to program the experiments.

Comment 5: How were picture pairs constructed in Experiment 1? Were the pairs randomly selected from lists of Republicans and Democrats for each participant, or were the pairs pre-determined and the same for all participants?

Action taken: We address these questions in the Experiment 1 method on p. 9. We write, “One hundred ten pairs of neutrally expressive Caucasian male faces were drawn from databases of opponents in United States political races that have been used in past work (e.g., Todorov et al., 2005). Each pair depicted one actual Republican and one actual Democrat who were opponents in a past political race. Thus, the pairs were pre-determined and the same across participants.”

Comment 6: How did they test recognition of each face (familiarity) after the task in Experiment 1?

Action taken: We clarify how we tested recognition on p. 10. We write, “Immediately after the task, participants disclosed if they recognized any faces. If they said yes (N = 65), they disclosed who they thought they recognized. Our a priori exclusion criterion was to exclude any participants who accurately identified faces. No participants, however, did so.”

Comment 7: Exclusion criteria were vague for both experiments. How did they determine “not following the task instructions”?

Action taken: We clarify exclusion reasons for Experiment 1 on p. 9. We write, “Of 185 undergraduates recruited from a large Midwestern university in the United States, we excluded five. Two did not complete the partisanship characterization measures (see below), two failed the manipulation check (see below), and one did not report age.” We clarify exclusion reasons for Experiment 2 on p. 24. We write, “Of 101 undergraduates recruited from a large Midwestern university, we excluded seven for failing the manipulation check.”

Comment 8: The description of the models made it a bit difficult to comprehend. As far as I understood, in Experiment 1, the DV was a choice (binary; republican = 0, liberal = 1), and the trait of evaluation (competency, likability; how that was coded is unclear) was a predictor. Is that correct? How were the predictors coded specifically? Were they dummy coded, effect coded, etc.? How was political ideology standardized? Was it mean-centered? Specifying all these either in the text or on the table would help with interpreting the results.

Action taken: We clarify the model in Experiment 1 on p. 12 by writing, “We first tested whether disclosing partisanship more strongly affected impressions than non-disclosed partisanship. Likability and competency choices (Republican = 0, Democrat = 1) were logistically regressed on Trait (competent = 0, likable = 1), Task Version (disclosed labels = 0, non-disclosed labels = 1), Perceiver Political Ideology (standardized around the composite political ideology scores for the sample to have a mean of 0 and a standard deviation of 1), and their interactions as fixed effects.”

We clarify the model in Experiment 2 on p. 24 by writing, “Likability evaluations were regressed on Time (before label = 0, after label = 1), Partisan Label (Dummy coded using “undecided” as the reference of 0: Republican, Democrat, undecided), Perceiver Political Ideology (standardized as in Experiment 1), and their interactions as fixed effects.”
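
A minimal sketch of how models like these can be specified with lme4 and lmerTest (the packages cited in the references) is shown below; it is illustrative only, assumes hypothetical data frames (exp1, exp2) and column names, and is not the authors' analysis script, which is available on OSF.

# Minimal sketch only; assumes long-format data with hypothetical column names.
library(lme4)      # mixed-effects models
library(lmerTest)  # p-values for lmer models

# Experiment 1: binary face choice (Republican = 0, Democrat = 1),
# with crossed random intercepts for participants and face pairs.
m1 <- glmer(choice ~ trait * task_version * ideology_z +
              (1 | participant) + (1 | face_pair),
            data = exp1, family = binomial)

# Experiment 2: likability ratings before vs. after the partisan label.
m2 <- lmer(likability ~ time * partisan_label * ideology_z +
             (1 | participant) + (1 | face),
           data = exp2)

summary(m1)
summary(m2)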

Comment 9: As the codings are not specified, I have trouble interpreting the tables. For instance, political ideology has a positive estimate of choice (DV). It is mentioned that democrat was coded as 1 and republican as 0 when choosing. I am confused because if higher ideology scores indicate more conservatism, were conservatives more likely to choose democrats as more competent/likable overall? As estimated marginal means suggest otherwise, there should be something I am missing here. The authors go straight to interpreting the post hoc tests without explaining the main tests here. I believe the main analyses require explanation.

Action taken: This comment refers to the Experiment 1 results section. Note that the political ideology effect in Experiment 1 is actually an odds ratio rather than the traditional slope recognized from linear regressions. The odds ratio of .85 actually means that as perceivers’ ideological conservatism increased, their likelihood of selecting Democrat faces decreased.
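
As a worked illustration of that interpretation: an odds ratio of .85 per standard deviation of conservatism means odds(select Democrat | ideology = z + 1 SD) = 0.85 × odds(select Democrat | ideology = z). That is, each one-standard-deviation increase in conservatism reduces the odds of selecting the Democrat face by roughly 15%; the corresponding log-odds coefficient is ln(.85) ≈ −.16.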

As suggested by the reviewer, we now detail main effects underlying the higher order interaction of primary interest in Experiment 1. Instead of going straight to the post hoc tests that explain the interaction, we first write on p. 12, “Main effects of Task Version (reflecting more selected Democrats in the disclosed versus non-disclosed task version) and Perceiver Political Ideology (reflecting fewer selected Democrats with higher perceiver conservatism) were qualified by a Task Version × Perceiver Political Ideology interaction that partially supported this hypothesis (Table 1a; Fig.1). We allow main effects to be interpreted within the context of this higher-order interaction.”

Comment 10: The motivation behind adding the Trait as a predictor to the model is unclear. Did the authors have any predictions about trait differences? If competence and likability choices are strongly related (it seems so), it would be justifiable to average these scores and create a simpler model. Simplifying the model this way could also help with the convergence of random slopes (which were not included in the veracity analyses, perhaps that was because of a convergence issue?) and would also make a new (missing) model with the perceived threat as another predictor possible and more interpretable (see the analysis suggested below in point 6).

Action taken: Our motivation for adding Trait as a predictor to the model was largely conceptual. Warmth and competence reflect the “big two” dimensions of social perception, and comprise separable ways in which people stereotype others as detailed by the stereotype content model. Although we might not expect findings to differ across traits, their separable distinct relevance in the social perception literature makes their inclusion in the model likely of interest to a variety of researchers. We write on p. 8, “These traits were selected because they are core dimensions of person perception (Fiske et al., 2007) reflecting separable ways in which people stereotype others (Fiske et al., 2002). Devaluing these traits would complement distinct ways of deriding opposing partisans that are becoming more commonplace in the United States (Finkel et al., 2020).”

Comment 11: The veracity analysis in Experiment 1 is left uninterpreted. What does the significant main effect of veracity in Table 3 mean? Does it mean that the option labeled as “democrat “is evaluated as more competent overall (democrat labeled as democrat > democrat labeled as republican)?

Action taken: We now write on p. 16, “Main effects of Disclosed Label Veracity (reflecting fewer selected Democrats with accurate versus inaccurate labels) and Perceiver Political Ideology (reflecting fewer selected Democrats with higher perceiver conservatism) emerged. Note that the Disclosed Label Veracity effect likely reflects the ideological skew of the sample. Suggesting disclosed labels affected impressions irrespective of their veracity, however, no interaction between Disclosed Label Veracity and Perceiver Political Ideology emerged.”

Comment 12: The relevance of the partisan affiliation analyses to the study's main purpose was unclear. It is not surprising that ideology relates to relevant partisanship. I would move these analyses to supplementary materials and not interpret them in terms of face perception, as face perception is not part of these models.

Action taken: Based on this comment and a comment from Reviewer 2, we have moved these analyses to the supplemental material document available on OSF.

Comment 13: Partisan threat could have been relevant to the study's main question. Yet again, the threat was analyzed separately from the face perception data, though the findings were interpreted in terms of its relevance to face perception. A direct test of the partisan threat’s role in face choice is missing in both experiments

Action taken: This point is well taken. To address it, we include additional analyses for Experiments 1 (p. 20) and 2 (p. 30-31) that replace perceiver political ideology with the difference in perceived partisan threat from Democrats relative to Republicans in the primary models. Critically, using these threat scores produces virtually the same patterns of results across experiments. By including this direct test of the relation between perceived partisan threat and partisan disclosure effects on face impressions, we can more confidently suggest that perceiver ideology affects partisan disclosure effects on face impressions to the extent of threat perceived from different partisan groups.
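
A rough sketch of this substitution, continuing the hypothetical Experiment 1 specification shown earlier (variable names are illustrative; the actual analysis code is on OSF):

# Illustrative only: replace standardized perceiver ideology with a standardized
# difference in perceived threat from Democrats relative to Republicans.
library(lme4)

exp1$threat_diff_z <- as.numeric(scale(exp1$threat_from_democrats -
                                         exp1$threat_from_republicans))

m1_threat <- glmer(choice ~ trait * task_version * threat_diff_z +
                     (1 | participant) + (1 | face_pair),
                   data = exp1, family = binomial)
summary(m1_threat)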

Comment 14: Another standing question: were participants more likely to evaluate the faces of ingroup members (same ideology) as more likable/competent than those of outgroup members (other ideology) when ideology was not disclosed? The authors mention past work suggesting that in the introduction, but they do not report their findings on this question. Veracity was only analyzed among the disclosed ideology condition. Also, the title of these analyses reads a bit misleading: “Characterizing partisan disclosure effect on face impressions by veracity,”: but the manipulation of disclosure (nondisclosed vs. disclosed) was not a predictor in the reported analyses (only disclosed conditions are included).

Action taken: We address these questions in the results and discussion sections of Experiment 1. We write on p. 16 in the results section, “Because people are often able to detect partisanship from faces alone (e.g., Rule & Ambady, 2010), our next analyses concerned determining whether the veracity of disclosed labels affected polarized face impressions based on perceiver political ideology. First, we examined whether face impressions were polarized by non-disclosed partisanship. Among participants for whom party labels were not disclosed, however, perceiver political ideology did not affect face selections, OR = .98, p = .33, 95% CI [.94, 1.02].”

We write on p. 20-21, “Although partisanship can be detected from facial cues alone (e.g., Rule & Ambady, 2010), non-disclosed partisanship did not polarize face impressions. Although it could be that participants did not detect partisanship from these faces, another possibility is that being asked to evaluate traits overrode undisclosed partisanship effects on face impressions in this task overall (see Todorov et al., 2005). Future research may assess this possibility by addressing partisanship effects on face impressions when people are informed, for example, that they are evaluating politicians versus not.”

Comment 15: The authors speculate about the relative roles of partisan affiliation and perceived threat in face impressions. Again, their data should allow them to analyze such relative effects. Why are these analyses unavailable? Perhaps, the studies are underpowered for such analyses, but if that is the case, the authors should at least comment on that. Then, I would recommend not interpreting the results in terms of their parallel to a perceived threat (as there are no direct analyses, these interpretations are too speculative) or, more ideally, conducting a third study to test the relationship between perceived threat and face perception directly.

Action taken: We have removed language referring to the relative role of partisan affiliation from the manuscript. We now include new analyses (see response to Comment 13 of Reviewer 1) that allow us to interpret the results in terms of their parallel to a perceived threat. Finally, we refer to the potential experiment suggested by the reviewer in the General Discussion on p. 34 by writing, “Future work may disentangle the relationship between political ideology and partisan threat, perhaps by experimentally manipulating threat perceptions. This work would examine whether threat is a core feature of ideology or if there are contexts where ideological differences do not coincide with partisan threat and its pernicious consequences.”

Comment 16: Typos: P 7. “would the complement”

p. 21 “conservates”

Action taken: These typos have been corrected.

Reviewer 2

Comment 1: If anything, they err on the side of reporting too much, and I think they could move some of the less important tables or statistics to a supplement (e.g., it’s probably not necessary to fully report how republicans and democrats vary on political ideology), and I think by carefully considering which analyses are central to their point they could streamline their paper.

Action taken: Based on this comment and a similar comment from Reviewer 1, we have relegated analyses less central to our main point to the supplemental material available on OSF to streamline the paper. For example, we moved analyses on how Republicans and Democrats vary on political ideology to the supplemental information. We also moved analyses regarding affiliation ratings to the supplemental information.

Comment 2: I would suggest three changes for Figure 1 to increase clarity: first, include what -1 and +1 standard deviation on political ideology refers to (so readers don’t have to scroll all the way back to the methods). Second, I think the graph may be clearer if they used facets only for likability vs. competence, and not for political ideology, which could become the x variable, with condition becoming the grouping variable (so, in other words, use aes(x = ideology, color = disclosure)). I think this would make all the values much closer together and easier to compare. Finally, consider including significance bars to highlight which cells are significantly different from one another.

Action taken: We have edited Figure 1 to incorporate all changes suggested by the reviewer. We now include what -/+ 1 SD on political ideology refers to on the figure itself. We use facets only for likability and competence. Finally, we include significance bars to highlight the cells that are significantly different from one another.
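
For readers visualizing the revised layout, a rough ggplot2 sketch of the structure the reviewer describes follows; the summary data frame and column names are hypothetical, and the authors' actual figure code may differ.

library(ggplot2)

# Hypothetical summary data: one row per ideology level (-1/+1 SD), trait
# (likability, competence), and disclosure condition, with estimates and CIs.
ggplot(fig1_summary,
       aes(x = ideology_sd, y = prob_democrat,
           color = disclosure, group = disclosure)) +
  geom_point(position = position_dodge(width = 0.3), size = 2) +
  geom_errorbar(aes(ymin = ci_lower, ymax = ci_upper),
                position = position_dodge(width = 0.3), width = 0.15) +
  facet_wrap(~ trait) +  # facets for likability vs. competence only
  labs(x = "Perceiver political ideology (-1 SD = more liberal, +1 SD = more conservative)",
       y = "P(select Democrat face)",
       color = "Partisanship") +
  theme_minimal()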

Comment 3: First, if the authors have reaction times from the first study, it would be interesting to do a drift-diffusion model on the reaction times to see if the presence of ideology is shifting (a) the starting point bias, (b) the rate of accumulation, or both. This might further get at the mechanism of what is going on (i.e., are participants simply requiring less evidence to say the politically consistent individual is competent/likable, or are they accumulating evidence more steeply from individuals who share their ideology?).

Action taken: This is a fantastic idea and one that we are considering as we continue this line of work. We bring up this idea in the Experiment 1 Discussion on p. 21 by writing, “Future work may replicate these findings while focusing on reaction times to better understand why they emerged. For example, it could be that label disclosure enables people to require less evidence to conclude that similar relative to opposing partisans are competent and likable. However, it could also be that people more steeply accumulate evidence of competence and likability from similar relative to opposing partisans. Disentangling these possibilities using drift diffusion modeling (e.g., Johnson et al., 2017) can help clarify processes underlying face impressions polarized by partisanship.”
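
One way such a model could be sketched, assuming the Wiener-diffusion family available in the brms package and hypothetical trial-level data; this illustrates the reviewer's suggestion and is not an analysis reported in the manuscript.

library(brms)

# Hypothetical data: rt in seconds, choice coded 1 if the ideologically similar
# face was selected (0 otherwise), and a per-trial disclosure condition.
fit_ddm <- brm(
  bf(rt | dec(choice) ~ disclosure,  # drift rate varies by disclosure condition
     bias ~ disclosure,              # starting-point bias varies by condition
     bs ~ 1, ndt ~ 1),               # boundary separation, non-decision time constant
  data = exp1_rt, family = wiener()
)
summary(fit_ddm)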

Comment 4: Second, the updating question is interesting, and I would be interested to see it followed up with (a) implicit measures, particularly if they deviate from explicit measures, and (b) looking at more nuanced cases, such as finding someone switched political parties. It seems that learning about party affiliation, and how that shifts evaluations, is a potentially fruitful area of future research.

Action taken: Also great ideas! We address them in the General Discussion on p. 35 by writing, “To further characterize how disclosed partisanship affects face impressions, future work can vary the characteristics of faces disclosed as partisans (e.g., trustworthy or untrustworthy) and address disclosure effects using both implicit and explicit measures. Such manipulations can clarify the strength of disclosure on impressions and at what levels they manifest. Moreover, it would be worthwhile to test how changing party affiliations or knowledge of a target’s within-party disagreement affects face impressions. It could be that partisanship polarizes impressions only to the extent that partisans are perceived as being loyal to their party.”

Decision Letter 1

Peter Karl Jonason

2 Sep 2022

PONE-D-22-08989R1
Disclosing Political Partisanship Polarizes First Impressions of Faces
PLOS ONE

Dear Dr. Cassidy,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Only minor issues remain. Please submit your revised manuscript by Oct 17 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Peter Karl Jonason

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: As I mentioned in my previous review, I enjoyed reading this work and think that it offers some valuable contributions to the work on impression formation and updating.

I have now read the revision and the authors’ responses to both reviewers. I appreciate that the authors addressed the reviewers’ inquiries overall. I can now see their analytical approach in a much clearer way, and I believe the manuscript improved as a result. There are, however, a few standing issues for me before recommending the work for publication:

1. I appreciate that the authors added new analyses to examine the direct relationship between partisan disclosure effects and perceived partisan threat. The results suggest that, consistently across the two studies, the partisan threat effect is always in line with the perceiver ideology effects. That is, in Study 1, conservative ideology does not produce disclosure effects, nor does perceiving Democrats as more threatening (which correlates with conservative ideology). In Study 2, conservative ideology does produce disclosure effects, and so does perceiving Democrats as more threatening.

However, I am concerned that the authors interpret these results as if they show that partisan threat is the mechanism underlying the disclosure effects. For instance, when responding to Reviewer 1, Comment 13:

“By including this direct test of the relation between perceived partisan threat and partisan disclosure effects on face impressions, we can more confidently suggest that perceiver ideology affects partisan disclosure effects on face impressions to the extent of threat perceived from different partisan groups.”

Also, when they talk about their studies’ particular contribution on p. 4, they suggest that past research has not examined the mechanism, which implies their research does:

“[…] one limitation is that they do not explain why this polarization occurs.”

My concern about this interpretation rests on two reasons. First, I would recommend avoiding any claims about mechanism without an experimental design or at least a test of mediation (ideally longitudinal, but at least a cross-sectional exploratory test; a minimal sketch of such a test follows these numbered comments). Second, some of their findings seem to contradict a mechanism interpretation. If partisan threat is the mechanism underlying the disclosure effects on liking a particular face, shouldn’t partisan threat impact participants’ liking of a particular (ingroup vs. outgroup) face similarly across liberal and conservative perceivers? That does not seem to be the case in Study 1. Perceiving Democrats as more threatening (mostly conservative participants, given the significant correlation) does not impact face perception, whereas perceiving Republicans as more threatening (mostly liberal participants, given the significant correlation) does. Even when a conservative perceiver feels threatened by a liberal (and they do to some degree, given the significant correlation), they do not report liking a liberal face less than a conservative face. Because the threat effects are interpreted in comparison to different targets (perceiving a face from one ideological group as opposed to faces from other groups) rather than independently (the degree to which perceiver ideology relates to the perception of threat from a certain target), these nuances in the findings do not get the attention they deserve. All in all, I think the authors should refrain from emphasizing threat as a mechanism in their framing.

2. Given the above interpretation, their findings suggest that threat may not fully explain conservative participants’ indifference to faces in Study 1. Then what does? I think the authors should elaborate more on potential alternative explanations other than perceived threat. For example, could conservative participants have been less attentive to the choice task used in Study 1 and picked the more likable face essentially at random? Relatedly, I think the revised manuscript would benefit from discussing the inconsistent findings across the two studies in more depth in the general discussion.

3. The authors describe the findings in the general discussion: “the more conservative participants in Experiment 2 may have been more likely to outwardly derogate democrats” (p. 31). Yes, that is what their findings already suggest, but why might this be the case? Again, the general discussion seems to lack a consideration of potential reasons.
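Since the reviewer’s first point references an exploratory cross-sectional mediation test, the sketch below illustrates what such a test could look like in principle (perceiver ideology → perceived partisan threat → partisan disclosure effect). It is purely hypothetical: the data are simulated, and the variable names and effect sizes do not come from the authors’ studies.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 200

    # Simulated stand-ins for the three measures (in practice, the observed, z-scored variables).
    ideology = rng.standard_normal(n)                         # higher = more conservative
    threat = 0.4 * ideology + rng.standard_normal(n)          # perceived outgroup threat
    disclosure_effect = 0.3 * threat + 0.1 * ideology + rng.standard_normal(n)

    def ols_coefs(y, predictors):
        """Return slopes from an intercept-included ordinary least squares fit."""
        X = np.column_stack([np.ones(len(y))] + list(predictors))
        return np.linalg.lstsq(X, y, rcond=None)[0][1:]

    def indirect_effect(x, m, y):
        a = ols_coefs(m, [x])[0]       # path a: ideology -> threat
        b = ols_coefs(y, [m, x])[0]    # path b: threat -> disclosure effect, controlling for ideology
        return a * b

    # Percentile bootstrap confidence interval for the indirect (a*b) effect.
    boot = np.empty(5000)
    for i in range(5000):
        idx = rng.integers(0, n, n)
        boot[i] = indirect_effect(ideology[idx], threat[idx], disclosure_effect[idx])

    est = indirect_effect(ideology, threat, disclosure_effect)
    ci_lo, ci_hi = np.percentile(boot, [2.5, 97.5])
    print(f"indirect effect = {est:.3f}, 95% bootstrap CI [{ci_lo:.3f}, {ci_hi:.3f}]")

A bootstrap interval excluding zero would be consistent with, but would not by itself establish, threat as a mediator; as the reviewer notes, only experimental or longitudinal designs can support a mechanistic claim.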

More minor issues:

-“Prior work supports disclosed partisanship as more generally polarizing face impression.” (p. 4): The baseline for comparison is missing here (“more” than what?).

-I think partisan cues, and especially the labels used in this work, are not minimal cues as claimed on p. 6. They indeed seem very salient.

-“Although people accurately detect political partisanship from faces…” (p. 7): this sentence reads a bit too deterministic; they "can" detect, or they "tend to" detect?

-It seems odd to drop a participant’s data entirely for not reporting their age in Study 1. What is the rationale behind this decision? Were the results replicated when all participants were included in the analyses (or at least this one participant, who did not fail the attention and manipulation checks)?

-How many task versions were there in total (including counterbalancing versions)? That would be nice to include in the main text (although one can probably figure it out by browsing the data files).

-This interpretation was not clear to me: “Main effects of Disclosed Label Veracity (reflecting fewer selected Democrats with accurate versus inaccurate labels) and Perceiver Political Ideology (reflected fewer selected Democrats with higher perceiver conservatism) emerged. Note that the Disclosed Label Veracity effect likely reflects the ideological skew of the sample” (p. 16). Checking the distribution reported in the supplementary materials, the sample seems to be slightly skewed toward Democrats. But wouldn’t we expect the opposite result (i.e., more selected Democrats with accurate labels) if these effects were due to this skewness? Perhaps the authors can clarify that part a bit more in the text.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2022 Nov 9;17(11):e0276400. doi: 10.1371/journal.pone.0276400.r004

Author response to Decision Letter 1


4 Oct 2022

September 20, 2022

Dear Dr. Jonason,

Please consider this revised manuscript (PONE-D-22-08989R1), titled “Disclosing Political Partisanship Polarizes First Impressions of Faces” and co-authored with Dr. Colleen Hughes and Dr. Anne Krendl, for publication as a research article in PLOS ONE. This is an original manuscript; it has not been published and is not under consideration for publication elsewhere. All data and code are available on OSF; a link is provided in the manuscript. As detailed below, we have revised our manuscript based on the remaining comments of one reviewer. Substantive changes are denoted in the main text in red font.

We are, of course, happy to make additional changes to the paper with the goal of improving the work. We thank you and the reviewers for your time and attention to this manuscript.

Sincerely,

Brittany Cassidy

Assistant Professor

Department of Psychology

University of North Carolina at Greensboro

bscassid@uncg.edu

Responses to Individual Reviewer Concerns:

Reviewer 1

Comment 1: The reviewer had a broad remaining concern that perceived threat should not be discussed as a mechanism for polarized face impressions based on disclosed partisanship because the manuscript provides no experimental evidence of it. The reviewer also notes that inconsistencies regarding perceived partisan threat both within- and across-experiments should be addressed in discussion sections and in the general discussion. The reviewer suggested a broader discussion of potential mechanisms for face impressions polarized by partisanship, especially in the general discussion.

Action taken: These points were well taken. We absolutely agree with the reviewer that causality must not be inferred when only correlational evidence for a relation can be presented. We have revised our manuscript in several ways to address this concern.

First, we have removed language speaking to causality throughout the manuscript except when we suggest that future work use experimental manipulations of, for example, perceived partisan threat to establish a causal link with polarized face impressions (e.g., p. 4, 34).

Second, we now discuss inconsistencies in perceived threat findings within and across experiments, being cautious to avoid redundancy. For example, we write in the Experiment 1 Discussion on p. 23, “Although threat ratings suggested that more conservative perceivers did not find Democrats more threatening than Republicans, significant correlations emerged between perceiver ideology and partisan threat perceptions. What might have caused this inconsistency? One possibility may lie in the college-aged sample recruited for the experiment. College-aged students often show a bias toward perceiving themselves as more conservative than they really are (Zell & Bernstein, 2014). If the students identifying themselves as more conservative were indeed more liberal than they realized, their threat perceptions may have been less extreme than those of the students identifying as more liberal. Indeed, these biased perceptions of one’s own partisanship are more pronounced for conservatives than for liberals (Zell & Bernstein, 2014).”

We also address these inconsistencies while considering new potential mechanisms in the General Discussion on p. 35 by writing, “Although ideology effects on face impressions were paralleled by partisan threat perception effects across experiments, it is worth considering why inconsistencies across experiments might emerge. One previously discussed possibility is that college students self-report being more conservative than they appear when ideology is assessed more objectively (Zell & Bernstein, 2014). Potential conflicts between self-reported and actual ideologies could lead to inconsistencies both within and across experiments. Speculatively, more objective ideology assessments could, in part, resolve these inconsistencies. It could also be that factors beyond the scope of the current work interfaced with perceived threat and ideology to shape impressions. For example, people who have high actual (Whitt et al., 2021) or even imagined (Warner & Villamil, 2017) contact with opposing partisans show less affective polarization, findings that broadly reflect work on intergroup contact reducing prejudice (Pettigrew, 1998). Future work may consider the extent to which relative partisan contact or isolation interfaces with perceived threat to affect face impressions separably or interactively.”

Comment 2: “Prior work supports disclosed partisanship as more generally polarizing face impression.” (p. 4): The baseline for comparison is missing here (“more” than what?).

Action taken: We have rewritten the sentence for clarity. It now reads (p. 4), “Prior work supports that disclosed partisanship may polarize face impressions across contexts.”

Comment 3: I think partisan cues, and especially the labels used in this work, are not minimal cues as claimed on p. 6. They indeed seem very salient.

Action taken: We agree and have rewritten the sentence for clarity. It now reads (p. 6), “Such patterns would extend work showing favoritism and, sometimes, derogation based on group membership (Brewer, 1999; Tajfel, 1970; Tajfel et al., 1971; Tajfel & Turner, 1979) from a romantic (Mallinas et al., 2018) to a more general context and show that simple partisan labels, in the absence of other partisan information, can powerfully affect impressions.”

Comment 4: “Although people accurately detect political partisanship from faces…” (p. 7): this sentence reads a bit too deterministic; they "can" detect, or they "tend to" detect?

Action taken: We have rewritten the sentence to read (p. 8), “Although people can often detect political partisanship from faces (Rule & Ambady, 2010), disclosed group labels can override naturally occurring ones to elicit biases (Van Bavel et al., 2008).”

Comment 5: It seems odd to drop a participant’s data entirely for not reporting their age in Study 1. What is the rationale behind this decision? Were the results replicated when all participants were included in the analyses (at least this one participant who did not fail the attention and manipulation check)?

Action taken: We now report analyses from Experiment 1 that include the participant who did not report his/her age. Neither the conclusions nor the statistical significance of the results changed.

Comment 6: How many task versions were there in total (including counterbalancing versions)? That would be nice to include in the main text (although one can probably figure it out by browsing the data files).

Action taken: We now clarify that there were three versions of Experiment 1 (p. 10) and three versions of Experiment 2 (p. 26).

Comment 7: This interpretation was not clear to me: “Main effects of Disclosed Label Veracity (reflecting fewer selected Democrats with accurate versus inaccurate labels) and Perceiver Political Ideology (reflected fewer selected Democrats with higher perceiver conservatism) emerged. Note that the Disclosed Label Veracity effect likely reflects the ideological skew of the sample” (p. 16). Checking the distribution reported in the supplementary materials, the sample seems to be slightly skewed toward Democrats. But wouldn’t we expect the opposite result (i.e., more selected Democrats with accurate labels) if these effects were due to this skewness? Perhaps the authors can clarify that part a bit more in the text.

Action taken: We have clarified this interpretation in the Experiment 1 Results section on p. 16 by writing, “A main effect of Perceiver Political Ideology reflected fewer selected Democrats with higher perceiver conservatism. A main effect of Disclosed Label Veracity reflected fewer selected Democrats with accurate versus inaccurate labels. This pattern may seem surprising because, given the liberal skew of the sample (see Supplemental Material), one might expect more accurately labeled Democrats to be selected, as potential detections would match labels. Because differences between faces signal the likelihood of winning (Todorov et al., 2005), it could also be that inaccurate labels resulted in more “Democrats” with positively interpreted facial cues.”

Attachment

Submitted filename: Response to Reviewers.docx

Decision Letter 2

Peter Karl Jonason

6 Oct 2022

Disclosing Political Partisanship Polarizes First Impressions of Faces

PONE-D-22-08989R2

Dear Dr. Cassidy,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Peter Karl Jonason

Academic Editor

PLOS ONE


Acceptance letter

Peter Karl Jonason

14 Oct 2022

PONE-D-22-08989R2

Disclosing Political Partisanship Polarizes First Impressions of Faces

Dear Dr. Cassidy:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Peter Karl Jonason

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File

    (DOCX)

    Attachment

    Submitted filename: Response to Reviewers.docx

    Data Availability Statement

    Additional methods, data, and code for all experiments are available at the Open Science Framework: https://osf.io/9khta.


    Articles from PLOS ONE are provided here courtesy of PLOS

    RESOURCES