Author manuscript; available in PMC: 2025 Jun 1.
Published in final edited form as: Emotion. 2023 Dec 21;24(4):1109–1124. doi: 10.1037/emo0001322

Emotion and Anxiety Interact to Bias Spatial Attention

Helena P Bachmann 1, Shruti Japee 2, Elisha P Merriam 1,#, Tina T Liu 1,#
PMCID: PMC11116080  NIHMSID: NIHMS1951658  PMID: 38127536

Abstract

Emotional expressions are an evolutionarily conserved means of social communication essential for social interactions. It is important to understand how anxious individuals perceive their social environments, including emotional expressions, especially with the rising prevalence of anxiety during the COVID-19 pandemic. Anxiety is often associated with an attentional bias for threat-related stimuli, such as angry faces. Yet the mechanisms by which anxiety enhances or impairs two key components of spatial attention—attentional capture and attentional disengagement—to emotional expressions are still unclear. Moreover, positive valence is often ignored in studies of threat-related attention and anxiety, despite the high occurrence of happy faces during everyday social interaction. Here, we investigated the relationship between anxiety, emotional valence, and spatial attention in 574 participants across two preregistered studies (Data collected in 2021 and 2022; Expt. 1: n = 154, 54.5% male, mean age = 43.5 years; Expt. 2: n = 420, 58% male, mean age = 36.46 years). We found that happy faces capture attention more quickly than angry faces during the visual search experiment and found delayed disengagement from both angry and happy faces over neutral faces during the spatial cueing experiment. We also show that anxiety has a distinct impact on both attentional capture and disengagement of emotional faces. Together our findings highlight the role of positively valenced stimuli in attracting and holding attention and suggest that anxiety is a critical factor modulating spatial attention to emotional stimuli.

Keywords: attention, emotion, anxiety, visual search, spatial cueing

Introduction

Anxiety is an unpleasant feeling of uneasiness, tension, and worry over anticipated events (Bouras & Holt, 2007). Anxiety disorders are the most common type of psychiatric disorder (Bandelow & Michaelis, 2015). Approximately one-third of the population experiences at least one period of clinically relevant anxiety. The global prevalence of anxiety has undergone a steep increase in the last decade, by some estimates as much as 25% (World Health Organization, 2022), and is thought to have grown threefold among healthcare workers in the first year of the COVID-19 pandemic (Santabárbara et al., 2021). This recent increase in anxiety rates around the world underscores the importance of understanding how anxious individuals perceive and interact with socially relevant stimuli.

Recognizing different emotional expressions is integral to everyday human social interactions and critical for responding and attending to different emotional and contextual cues. Behavioral observations have found that the emotional value of faces modulates spatial attention in two main ways: faster attentional capture (enhanced engagement) (Fox et al., 2002) or changes in attentional disengagement (for a review, see Vuilleumier, 2005). These emotional biases are commonly thought to be especially strong for threat-related emotions (e.g., angry faces), enabling individuals to respond more quickly to the source of danger.

The threat-related bias in attentional capture was first reported by Hansen & Hansen (1988). Using a visual search paradigm, the authors observed that an angry face was more rapidly detected in a crowd of happy faces. This phenomenon was termed the "anger superiority effect." Regardless of the number of faces on the screen, they found no significant latency difference in detecting an angry face target, suggesting that angry faces can be pre-attentively detected (i.e., parallel search) (Wolfe, 1992). This anger superiority effect was further corroborated by findings using schematic line drawings of faces (Eastwood et al., 2001; Fox et al., 2000; Ohman et al., 2001), and supported by an anatomical substrate in rodents involving the retina-superior colliculus-pulvinar-amygdala subcortical pathway (the so-called "low road"), which bypasses the retina-LGN-V1-higher-order visual cortex pathway. This "low-road" pathway is thought to allow for faster processing of threatening information (LeDoux, 2000; Linke et al., 1999; Morris et al., 1999), which may be especially important for processing biologically relevant stimuli.

Despite ample behavioral and neural evidence that supports threat-related attentional bias, there is growing evidence against the anger superiority effect. First, it is possible that the shadows in the face stimuli, as well as low-level visual confounds inherent in the schematic line drawings of angry faces (“cross detection”), may underlie the anger superiority effect (Becker et al., 2011; Coelho et al., 2010). Second, studies using real-face stimuli have yielded mixed results. While a number of studies reported the “angry face advantage” (Fox & Damjanovic, 2006; Hansen & Hansen, 1988; Horstmann & Bauland, 2006), other studies that controlled for the low-level visual confounds in Hansen & Hansen (1988) either failed to find it (Purcell et al., 1996) or only found an advantage for anger with schematic faces and not real faces (Juth et al., 2005). Third, despite anatomical evidence from rodents, the existence of projections from the pulvinar to the amygdala in primates has not been demonstrated (L. G. Ungerleider, personal communication, 2020).

In parallel, findings of the "happy face advantage" have been reported using the same visual search protocol and design that produced the "threat superiority effect" (Becker et al., 2011; Calvo & Nummenmaa, 2008; Frischen et al., 2008; Juth et al., 2005). A recent meta-analysis of attentional bias for positive emotional stimuli in a very large sample of healthy participants (N = 9120, Pool et al., 2016) also found a bias for positive over neutral stimuli. Additionally, despite the general emphasis on anxiety and attention toward threat, the field has started to explore the relationship between anxiety and selective attention for positive relative to neutral stimuli (Chen et al., 2012; Frewen et al., 2008; Taylor et al., 2010). In the positive stimulus literature, the general trend seems to be that low-anxious individuals are biased toward positive stimuli relative to neutral ones, whereas this effect is weaker in those with high anxiety levels (Frewen et al., 2008).

Given the recent investigation into the interplay between anxiety and attention to positive stimuli, the first goal of Experiment 1 was to understand how anxiety modulates attentional capture of not only angry faces but also happy faces. The second goal of Experiment 1 was to reconcile the seemingly contradictory conclusions between the “anger superiority effect” and the “happy face advantage” by minimizing all confounding factors discussed in the literature. These factors include low-level confounds in both schematic line drawings and real face images (Coelho et al., 2010; Savage et al., 2013), perceived arousal (Lundqvist et al., 2014), and the interaction between facial gender and decisions about facial expression (Becker et al., 2007). These image features and differences in saliency between targets and distractors (Savage et al., 2013) are thought to have caused discrepant results in the literature and demonstrate the importance of carefully choosing a stimulus set.

In addition to attention capture, several studies have demonstrated an anxiety-related bias for threat during attentional disengagement in subclinical and high anxiety participants (Fox et al., 2001, 2002). Using an emotional spatial cueing paradigm, a modification of the classic spatial cueing paradigm (Posner, 1980), Fox et al. (2001) found that anxious participants took longer to respond to probes that were invalidly cued by angry facial expressions than neutral ones, indicating difficulty disengaging attention from the location of the emotional cue. Despite the consensus that anxious individuals are prone to attend to threat-related information (Armstrong & Olatunji, 2012; see Bar-Haim et al., 2007, for a review; Clauss et al., 2022; Okon-Singer, 2018; Richards et al., 2014; Valadez et al., 2022), it is not clear whether anxious individuals show delayed disengagement from threat (Hirsch & Mathews, 2012) or, alternatively, if they avoid threatening stimuli, i.e., a vigilance-avoidance model (Mogg & Bradley, 1998; Mogg et al., 2008). These two hypotheses make different predictions regarding performance on the emotional spatial cueing task. The delayed disengagement hypothesis, also termed attentional maintenance, predicts a slower response to probes after emotional invalid cues, as attention remains on the cued location for longer. Support for this hypothesis can be found in a recent meta-analysis that reported anxiety and fear symptoms were associated with attentional maintenance on threatening stimuli (N = 1815, Clauss et al., 2021), as well as another review (Richards et al., 2014) that showed a tendency of anxious individuals to fixate on threatening stimuli. In contrast, the vigilance-avoidance model predicts a faster response to probes on invalid trials due to avoidance of, and faster disengagement from, emotional face cues. Converging evidence for this model can be found in the meta-analysis conducted by Frewen et al. (2008), which reviewed clinical anxiety studies with longer threatening stimulus presentation.

Through two preregistered experiments conducted online via Amazon’s Mechanical Turk, we systematically studied the relationship between anxiety, emotional valence, and spatial attention. More specifically, Experiment 1 investigates attentional capture using a visual search task, and Experiment 2 investigates attentional disengagement using a spatial cueing task. Both experiments used gray-scaled images of KDEF faces (Lundqvist et al., 1998), varying in emotional expression (happy, angry, and neutral), that were closely cropped, balanced for low-level image statistics, and matched for perceived arousal ratings (Goeleven et al., 2008). In each experiment, participants saw equal numbers of female and male faces to minimize the interaction between facial gender and decisions about facial expression (Becker et al., 2007). These careful considerations are important to ensure the results are not driven by confounding factors that are inherent in the stimuli or the experimental design.

Experiment 1: Visual Search

To examine the relationship between anxiety, emotional valence, and attentional capture, we conducted an online visual search experiment. Our aims were to resolve the debate between the “anger superiority effect” and the “happy face advantage” and uncover how anxiety impacts detection of both angry and happy faces. We preregistered all aspects of this study, including hypotheses, design, sampling, and analysis, on the Open Science Framework (https://osf.io/s975h). The two hypotheses listed in the preregistration were:

Competing hypotheses:

Hypothesis 1a:

If positive valence facilitates attentional capture, then searching for a happy face should be faster than an angry face among neutral distractors.

Hypothesis 1b:

Conversely, if negative valence facilitates attentional capture, then searching for an angry face should be faster than a happy face among neutral distractors.

Hypothesis 1c:

If positive valence facilitates attentional capture, then searching for a happy face should be faster than an angry face among neutral and oppositely valenced distractors.

Hypothesis 1d:

Conversely, if negative valence facilitates attentional capture, then searching for an angry face should be faster than a happy face among neutral and oppositely valenced distractors.

Hypothesis 2:

The degree and/or the type of anxiety (trait or state) have different impacts on attentional bias to threatening stimuli.

Hypothesis 3:

The degree and/or the type of anxiety (trait or state) have different impacts on attentional bias to positively valenced stimuli.

Methods

Participants

Our sample size was determined by several power analyses conducted on our existing preliminary data using IBM SPSS version 28. Across all power analyses, the largest sample size was required to uncover the correlation between anxiety and attentional capture: The required sample size to achieve 0.95 power at the standard 0.05 alpha error probability was 140 (two-tailed). Two hundred online sign-up slots were posted on Amazon Mechanical Turk and data collection ended once all 200 slots were filled, to ensure that the predetermined number of participants (n = 140) was reached. Data were collected in 2021. Informed consent was obtained prior to the start of the online experiment, and the protocol (large-scale online studies of human perception and cognition) was approved by the Institutional Review Board (IRB) at NIH. Of the 200 participants, 46 were excluded based on predetermined criteria (https://osf.io/s975h): participants whose overall accuracy on the visual search task fell more than one standard deviation below the mean (72.7%), or whose accuracy on any sub-condition was 50% or below, were excluded from further analysis. Our final sample consisted of a non-clinical population of 154 participants (84 male) with a mean age of 43.5 years (demographic information of four participants was not available due to a server error).
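
These exclusion criteria amount to a simple per-participant filter. The sketch below is our own Python illustration (function and variable names are hypothetical, not taken from the study's analysis code, which used MATLAB and SPSS):

```python
from statistics import mean, stdev

def exclude_participants(overall_acc, subcond_acc):
    """Flag participants to keep under the preregistered criteria:
    exclude anyone whose overall accuracy falls more than 1 SD below
    the group mean (72.7% in the study), or whose accuracy on any
    sub-condition is 50% or below (i.e., at or under chance)."""
    cutoff = mean(overall_acc) - stdev(overall_acc)
    keep = []
    for acc, subs in zip(overall_acc, subcond_acc):
        ok = acc >= cutoff and all(s > 0.50 for s in subs)
        keep.append(ok)
    return keep
```
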

Transparency and openness

We report how we determined our sample size, all data exclusions, all manipulations, and all measures in the study. All data, analysis code, and research materials are available at https://osf.io/6k9fn/?view_only=75874e2f30a04820acbe86ffcec21665. Data were analyzed using MATLAB 2018b. The study design, hypotheses, and analysis plan were also preregistered; see https://osf.io/s975h for information on sample size and data exclusions.

Stimuli and procedure

The stimuli used were faces chosen from the KDEF database, including 64 faces per expression (happy, angry, neutral). Each expression contained an equal number of male and female faces. Based on a recent, large-scale facial stimuli rating study that used a 9-point rating scale (Sutton et al., 2019), the arousal ratings were not significantly different between the chosen positive and negatively valenced emotional faces [angry = 4.90±0.80, happy = 4.93±0.80, t(122) = −0.328, p = .743, Cohen’s d = 0.793], and were significantly higher compared to neutral faces [neutral = 2.32±0.44, angry-neutral: t(122) = 22.258, p < .001, Cohen’s d = 0.646, happy-neutral: t(122) = −22.877, p < .001, Cohen’s d = 0.640]. The valence ratings for angry and happy were on opposite ends of the scale, with neutral faces in the middle [angry = 1.90±0.59, happy = 6.61±0.55, neutral = 3.61±0.56, happy-angry: t(122) = −46.647, p < .001, Cohen’s d = 0.564]. See Supplemental Table 1. These face stimuli were controlled for differences in low-level image features, matched for luminance and contrast using the SHINE toolbox (Willenbockel et al., 2010), and were presented in a circular array to ensure equidistance from the central fixation. A visual search paradigm for emotional faces was programmed using PsyToolKit (Stoet, 2010, 2017) and posted on Amazon Mechanical Turk.

The sequence of displays for two sample trials is illustrated in Figure 1a. In each block, participants were instructed to search for a single target expression (either happy or angry), in a crowd of other faces. The 256 trials were divided into four blocks of 64 trials and the block order of the emotional target was counterbalanced. Prior to the experiment, all participants completed 16 practice trials to be familiarized with the visual search task.

Figure 1.

Visual search a) task trial procedure and b) study design. After a brief fixation pulse, a circular array of faces of either set size 4 or 8 appeared on the screen. Participants were asked to respond whether a target emotion, happy or angry, was present or absent from the screen with response keys “p” and “a”. Feedback was given if participants were too slow. An example of a set size 4, angry target-present trial is seen on the left and a set size 8, target-absent trial (both with a neutral crowd) is on the right.

Participants were instructed to begin each trial by fixating on the central cross for 600 milliseconds (ms). While the face array stayed on the screen (for a maximum of 5 seconds), participants responded by pressing one key, “p”, to indicate that the target expression was present and another key, “a”, to indicate that the target expression was absent. During both the practice trials and the experiment, a “too slow” message stayed on the center of the screen for 1.25 s if no response was collected within a 5-s time window. Note that this feedback was based only on the speed, not the accuracy, of the response. Participants received feedback for incorrect responses in the practice round only. No feedback was given during the experiment.

Design

The experiment consisted of 256 trials (128 target-present trials and 128 target-absent trials). Of the 128 target-present trials, 64 were happy face target trials, one-half of which were paired with neutral distractors and the other one-half with angry distractors. The other 64 target-present trials consisted of angry face targets, one-half with neutral distractors and one-half with happy distractor faces. The target-absent trials consisted of three conditions: all neutral (64 trials), all angry (32 trials), and all happy faces (32 trials). Trials in all sub-conditions were split equally into set sizes 4 and 8, i.e., the number of faces in the search array. See Figure 1b for the full factorial design employed in this study.
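
As an illustration of this factorial structure, the full trial list can be reconstructed as follows. This is our own Python sketch mirroring the counts described above; it is not the authors' experiment code (which was implemented in PsyToolKit), and the labels are hypothetical:

```python
from itertools import product

def build_trial_list():
    """Reconstruct the 256-trial design: 128 target-present trials
    (2 target emotions x 2 distractor types x 2 set sizes x 16 each)
    and 128 target-absent trials (64 all-neutral, 32 all-angry,
    32 all-happy), each split equally across set sizes 4 and 8."""
    trials = []
    # "opposite" = angry distractors for happy targets, and vice versa
    for target, distractor, size in product(["happy", "angry"],
                                            ["neutral", "opposite"],
                                            [4, 8]):
        trials += [("present", target, distractor, size)] * 16
    for crowd, n in [("neutral", 64), ("angry", 32), ("happy", 32)]:
        for size in (4, 8):
            trials += [("absent", None, crowd, size)] * (n // 2)
    return trials
```
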

Measures

The study had three within-subject variables: target presence (present or absent), set size (4 or 8), and the target’s expression (angry or happy). The dependent variables were search accuracy (% correct), reaction time (RT, in milliseconds), and inverse efficiency score (IES, in milliseconds). The inverse efficiency score (IES) is a measure that combines the speed and accuracy of a participant’s performance. It is calculated by dividing response time (RT) by accuracy, i.e., RT/(1 - proportion of errors), thus taking into account the speed-accuracy trade-off in task performance (Bruyer & Brysbaert, 2011).
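
As a minimal illustration of the IES computation (our own sketch, not the authors' analysis code):

```python
def inverse_efficiency(rt_ms, accuracy):
    """Inverse efficiency score (Bruyer & Brysbaert, 2011):
    mean RT divided by proportion correct, i.e., RT / (1 - error rate).
    For example, 900 ms at 90% accuracy yields an IES of 1000 ms."""
    if accuracy <= 0:
        raise ValueError("IES is undefined at zero accuracy")
    return rt_ms / accuracy
```

Note that lower IES indicates better (faster and/or more accurate) performance, since errors inflate the score.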

Self- and informant-report questionnaires

Participants answered demographic questions about age, handedness, gender, race, and ethnicity before the visual search experiment. For questions regarding gender, race, and ethnicity, “prefer not to respond” was provided as an option. Following completion of the visual search experiment, participants were asked to fill out the State-Trait Anxiety Inventory Form Y for Adults (STAI; Spielberger, 1989; Spielberger et al., 1983). The STAI is a widely used instrument for measuring anxiety in adults. The STAI clearly differentiates between the temporary condition of “state anxiety” and the more general and long-standing quality of “trait anxiety.” State and trait anxiety scores were each calculated from a set of 20 questions. All data and the analysis code have been made publicly available on OSF and can be accessed at https://osf.io/6k9fn/?view_only=75874e2f30a04820acbe86ffcec21665.

Statistical analysis

This visual search study adopts a within-subject design with four independent variables: target presence (present, absent), emotional valence of the target (angry, happy), emotional valence of the distractors (angry, happy, neutral), and set size (4, 8). For each dependent variable (accuracy, RT, and IES), we conducted a 2 × 2 repeated-measures ANOVA to examine the two within-subject effects of target emotion (happy and angry) and set size (four and eight) for all the target-present trials (collapsing across both types of distractors).

After performing the ANOVAs, where appropriate, we conducted paired-sample t-tests to examine the differences between the happy-target trials and the angry-target trials for each of the three dependent variables (RT, accuracy, and IES) at each set size (4 and 8). To correct for the two statistical tests (happy vs. angry at set size 4, happy vs. angry at set size 8) conducted for each dependent variable, we applied a Bonferroni correction factor of 2 to the alpha level of 0.05 (Armstrong, 2014).
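
The correction amounts to testing each contrast against alpha / m, where m is the number of tests (here m = 2, so the per-test threshold is 0.025). A minimal sketch of our own, not the authors' code:

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni correction for m tests: each p-value is compared
    against alpha / m (equivalent to multiplying p by m). With the
    two contrasts per dependent variable here, the threshold is
    0.05 / 2 = 0.025."""
    m = len(p_values)
    return [p < alpha / m for p in p_values]
```
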

Next, we examined the correlation between the state and trait scores of the STAI with search accuracy, RT, and IES. We performed separate Pearson correlation analyses for happy and angry target trials. To gain further insight into the patterns of correlation, we calculated a valence index (happy target trials – angry target trials) for accuracy, RT, and IES scores. We then measured the correlation of this index with state and trait scores obtained from the STAI.
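
For illustration, the valence index and its correlation with anxiety scores can be computed as follows. This is a Python sketch with hypothetical function names; the published analyses were run in MATLAB and SPSS:

```python
from math import sqrt

def valence_index(happy_scores, angry_scores):
    """Per-participant valence index: happy-target minus angry-target
    performance, as defined in the preregistration."""
    return [h - a for h, a in zip(happy_scores, angry_scores)]

def pearson_r(x, y):
    """Plain Pearson correlation coefficient (no external libraries)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```
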

To investigate the influence of anxiety levels on emotional attentional capture, we formed high and low anxiety groups, with one pair of groups for state anxiety and another for trait anxiety. According to our preregistration, we defined the low- and high-anxiety groups as those within the bottom and top 25% of scores for both state and trait anxiety. However, to explore the extremes of the anxiety spectrum more comprehensively, we further narrowed down the selection to include only participants within the top and bottom 20% of scores for both state and trait anxiety. We conducted an Anxiety Group-by-Emotion analysis on accuracy scores for each type of anxiety. This 2 × 2 mixed ANOVA had one between-subject factor of anxiety group (20% high and 20% low) and one within-subject factor of target emotion (happy and angry), and was conducted separately for state and trait anxiety scores. Where applicable, we followed these ANOVAs with two-tailed independent samples t-tests (using Levene’s test for equality of variances where appropriate) to understand the difference between the high and low anxiety groups. The t-tests compared the valence index accuracy scores between the 20% high and low anxiety groups, for state and trait anxiety separately.
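
The percentile-based grouping can be sketched as follows. This is our simplified illustration; tie-breaking and exact cutoff handling in the authors' analysis may differ:

```python
def split_by_percentile(scores, fraction=0.20):
    """Return (low_group, high_group) as sorted participant indices
    for the bottom and top `fraction` of scores (20% here; the
    preregistered cutoff was 25%). Ties are broken by sort order."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    k = int(len(scores) * fraction)
    low, high = order[:k], order[-k:]
    return sorted(low), sorted(high)
```
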

The decision was made to exclude reaction time (RT)-based dependent variables (RT and IES) in these between-group analyses. One primary concern is the presence of inherent sources of noise in the RT measure, such as individual differences in cognitive processing speed and motor response variability. The variability introduced by these factors can confound the direct comparison between anxiety groups.

Results

General attentional bias for positive over negative valence

To understand how the valence of emotional faces affects attentional capture, we conducted a series of repeated-measures ANOVAs and focused primarily on the effect of target emotion on search accuracy, reaction time (RT), and inverse efficiency score (IES). Inverse efficiency scores were calculated as reaction time divided by the proportion of correct responses. In the target-present condition, commonly used to measure attentional capture, we found a significant main effect of target emotion on all three dependent variables: accuracy [F(1,153) = 123.639, p < .001, ηp2 = .447], RT [F(1,153) = 1330.616, p < .001, ηp2 = .897], and IES [F(1,153) = 952.923, p < .001, ηp2 = .862]. There was a significant main effect of set size on accuracy [F(1,153) = 77.076, p < .001, ηp2 = .335], RT [F(1,153) = 716.918, p < .001, ηp2 = .824], and IES [F(1,153) = 512.003, p < .001, ηp2 = .770], and a significant interaction between emotion and set size on accuracy [F(1,153) = 61.320, p < .001, ηp2 = .286] and IES [F(1,153) = 9.012, p = .003, ηp2 = .056], but not RT [F(1,153) = 0.023, p = .879, ηp2 = .000] (See Supplemental Table 2 for descriptive statistics).

To investigate competing hypotheses 1c-d, we compared task performance for happy versus angry target trials. We examined the basis of the emotion by set size interaction by carrying out several paired-sample t-tests, comparing visual search performance when the target was a happy or an angry face. At set size 4, happy targets were identified more accurately (accuracy = 0.943) and more efficiently (IES = 975.141 ms) than angry targets (accuracy = 0.822, IES = 1665.677 ms) [accuracy: t(153) = 13.862, p < .001; IES: t(153) = 28.687, p < .001; both significant after Bonferroni correction]. The same was true at set size 8 for accuracy (happy = 0.853, angry = 0.812) [t(153) = 4.473, p < .001, significant after Bonferroni correction] and IES (happy = 1393.507 ms, angry = 1993.208 ms) [t(153) = 24.354, p < .001, significant after Bonferroni correction]. For RT, collapsed across set sizes, visual search was faster for happy target trials than for angry target trials [t(153) = −36.691, p < .001, significant after Bonferroni correction, Cohen’s d = −2.957]. While the RT data did not show a significant emotion × set size interaction, we examined the pairwise contrasts between happy and angry targets at each set size for RT because of its relationship to IES (which did show a significant interaction) (Figure 2). At set size 4, happy targets were identified faster (RT = 916.987 ms) than angry targets (RT = 1339.457 ms) [t(153) = 33.690, p < .001, significant after Bonferroni correction]. The same was true at set size 8 (happy RT = 1170.393 ms, angry RT = 1595.096 ms) [t(153) = 25.647, p < .001, significant after Bonferroni correction]. These results demonstrate enhanced attentional capture for positive stimuli over negative, or threatening, stimuli. Our results support the “happy face advantage” and directly contradict the “threat superiority effect” in attentional capture.

Figure 2.

Visual search performance (a. accuracy, b. reaction time, and c. inverse efficiency score) for each target emotion and set size. Number of participants: n = 154. *** p < .001, two-tailed paired t-test.

Anxiety impacts emotional attentional capture

Based on Hypothesis 2, we tested whether the degree and/or the type of anxiety (trait or state) have different impacts on attentional bias to threatening stimuli. We found that anxiety ratings from the STAI (both state and trait) correlated positively with search accuracy for angry targets, but negatively with search accuracy for happy targets (Figure 3ab). Given these opposing correlation patterns, we conducted an Anxiety Group-by-Emotion analysis on accuracy scores and found a significant interaction for both state anxiety [F(1,61) = 7.186, p = .009, ηp2 = 0.105] and trait anxiety [F(1,58) = 6.084, p = .017, ηp2 = 0.095]. We therefore computed a valence index of emotional attention as the difference in search accuracy between angry and happy targets (Figure 3cd). This valence index correlated with both state anxiety [r(152) = .206, p = .01] and trait anxiety [r(152) = .166, p = .04], such that greater anxiety was associated with more threat-related attentional bias and less attentional bias to stimuli with positive valence. The valence index for RT did not correlate strongly with state anxiety [r(152) = −0.141, p = .081] or trait anxiety [r(152) = −0.130, p = .107]. There was also no significant correlation between IES and anxiety scores (both p values > .05). The correlation between state and trait anxiety scores in Experiment 1 was 0.841.

Figure 3.

Correlation between state and trait anxiety for happy and angry targets (a–b), and the difference in search accuracy for differently valenced targets (c–d). Anxiety group comparison for valence index (e–f). Number of participants in a–d: n = 154. Number of participants in e: n(low state anxiety) = 30, n(high state anxiety) = 33. Number of participants in f: n(low trait anxiety) = 29, n(high trait anxiety) = 31. ** p < .01, * p < .05, two-tailed Pearson correlation in c–d, two-tailed independent samples t-test in e–f.

To quantify group differences in visual search performance, we further divided participants into low and high-anxiety groups, using 20% cutoff values in either state or trait anxiety (see Supplemental Figure 1 for qualitatively similar results using 25% cutoff values and Supplemental Table 3 for the 20% and 25% cutoff descriptive statistics). We found a significant group difference in search accuracy between happy and angry targets, for both state and trait anxiety (Figure 3ef) (state high: happy = 0.89, angry = 0.83; state low: happy = 0.90, angry = 0.77; trait high: happy = 0.90, angry = 0.86; trait low: happy = 0.89, angry = 0.80) [accuracy(happy-angry) in state anxiety: t(61) = −2.681, p = .009; accuracy(happy-angry) in trait anxiety: t(58) = −2.467, p = .017]. While both groups showed a search advantage for happy face targets, this positive-over-negative bias became smaller with increasing anxiety. In other words, anxious participants showed more threat-related attentional bias and less attentional bias to stimuli with positive valence.

Experiment 2: Spatial Cueing

To understand the relationship between anxiety, emotional valence, and attentional disengagement, we conducted an online emotional spatial cueing experiment. Our goals were to examine how the valence of emotional faces affects attentional disengagement, and to determine if anxiety modulates attentional disengagement of emotional faces. If anxiety does have an impact, we asked whether highly anxious participants disengage from emotional faces faster or slower compared to those with low anxiety. We preregistered all aspects of our study, including hypotheses, design, sampling, and analysis, on the Open Science Framework (https://osf.io/v9pw8). In the preregistration, we included three sets of competing hypotheses:

Competing hypotheses 1:

Hypothesis 1a:

If negative valence takes longer to disengage from attention, then reaction time to respond to targets at the cue-invalid location will be longer following angry face than happy face cues.

Hypothesis 1b:

Alternatively, if positive valence takes longer to disengage from attention, then reaction time to respond to targets at the cue-invalid location will be longer following happy face than angry face cues.

Competing hypotheses 2.1:

Hypothesis 2.1a:

Participants with higher anxiety will take longer than those with low anxiety to disengage from emotional faces.

Hypothesis 2.1b:

Alternatively, participants with higher anxiety will be faster than those with low anxiety to disengage from emotional faces.

Competing hypotheses 2.2

Hypothesis 2.2a:

Within the high-anxiety participants, angry faces will take longer to disengage than happy face cues.

Hypothesis 2.2b:

Within the high-anxiety participants, happy faces will take longer to disengage than angry face cues.

Methods

Participants

Several power analyses were conducted on our existing preliminary data using IBM SPSS version 28. The largest required sample size, needed to uncover the difference in attentional disengagement between the low and high-anxiety groups with power = 0.95 and alpha error probability = 0.05, was 200. We then doubled this number to ensure that sample size was not a limiting factor, giving a target sample size of 400 participants. Given the proportion of participants who performed above chance in Experiment 1, we estimated that we would need to post 800 slots to achieve that sample size. In actuality, 1,056 online sign-up slots were posted on Amazon Mechanical Turk before data collection ended. Data were collected in 2022. Participants had to be 18 years or older to be eligible to partake in the study. Informed consent was obtained prior to the start of the online experiment, and the protocol (large-scale online studies of human perception and cognition) was approved by the IRB at NIH. Of the 1,056 participants, 636 were excluded based on predetermined criteria (https://osf.io/v9pw8): participants whose overall accuracy fell below a 68% cutoff value (determined by the bimodal distribution of accuracy data shown in Supplemental Figure 2), who missed more than 10% of trials, or whose reaction time or inverse efficiency score was more than three standard deviations from the mean in any sub-condition were excluded from further analysis. Our final sample included 420 participants (244 male) from a non-clinical population with a mean age of 36.46 years.
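
As with Experiment 1, these criteria amount to a per-participant filter. Below is a hypothetical Python sketch of our own (the names are ours, not the authors'); for simplicity it applies the 3-SD rule to one RT-like measure per sub-condition:

```python
from statistics import mean, stdev

def exclude_expt2(acc, missed_frac, subcond_rts):
    """Flag participants to keep under the preregistered Experiment 2
    criteria: exclude anyone with overall accuracy below the 68% cutoff,
    more than 10% of trials missed, or any sub-condition mean RT beyond
    3 SD of the group mean for that sub-condition."""
    n_sub = len(subcond_rts[0])
    # Group mean +/- 3 SD bounds, computed per sub-condition
    bounds = []
    for j in range(n_sub):
        col = [rts[j] for rts in subcond_rts]
        m, s = mean(col), stdev(col)
        bounds.append((m - 3 * s, m + 3 * s))
    keep = []
    for a, miss, rts in zip(acc, missed_frac, subcond_rts):
        ok = (a >= 0.68 and miss <= 0.10
              and all(lo <= r <= hi for r, (lo, hi) in zip(rts, bounds)))
        keep.append(ok)
    return keep
```
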

Transparency and openness

We report how we determined our sample size, all data exclusions, all manipulations, and all measures in the study. All data, analysis code, and research materials are available at https://osf.io/6k9fn/?view_only=75874e2f30a04820acbe86ffcec21665. Data were analyzed using MATLAB R2018b. The study design, hypotheses, and analysis plan were also preregistered; see https://osf.io/v9pw8 for information on sample size and data exclusions.

Stimuli and procedure

The face stimuli and data collection procedures were identical to those used in Experiment 1. Figure 4 illustrates the sequence of displays in two sample trials: one cue-valid trial and one cue-invalid trial. The location and size of the face and dot are also indicated in the figure.

Figure 4.

Spatial cueing task trial procedure. After a brief fixation pulse, a face with varying expression (angry, happy, or neutral) cued the likely location of the target (a white dot) that was either 0.8 deg above or below the center of the face. Participants were instructed to press “b” if the dot was below the midline and “h” if the dot was above the midline. The example on the left shows a cue-invalid trial and the one on the right shows a cue-valid trial.

Participants were asked to maintain fixation on a central cross while a face cue was presented for 250 ms in one of two peripheral locations (at 3.54 deg eccentricity on the left or the right side). After a randomly selected interstimulus interval (ISI) of either 30 ms or 50 ms (SOA = 280 or 300 ms), a target appeared in one of the peripheral locations (with equal probability) for 100 ms. The target was a white dot that appeared slightly above or below the midline of the screen. Participants performed a dot localization task by pressing “b” when the dot appeared in the bottom half of the screen and “h” when it appeared in the top half, which occurred with equal probability. Two of the 32 trials in each block were catch trials, during which no white dot appeared and participants were asked to withhold their response. On every trial, participants had 2 s to respond; however, “too slow” feedback appeared on the screen if a response was not detected within 1 s. The intertrial interval (ITI) was randomly set between 0.9 s and 1.2 s. No other feedback was provided, except that performance (percent correct) was shown at the end of each block.
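The cue-target timing reduces to simple arithmetic: the stimulus onset asynchrony (SOA) is the 250 ms cue duration plus the randomly chosen ISI. A minimal sketch using the values from the text:

```python
# SOA = cue duration + interstimulus interval (values from the task design)
CUE_DURATION_MS = 250

def soa_ms(isi_ms):
    """Stimulus onset asynchrony for a given ISI, in milliseconds."""
    return CUE_DURATION_MS + isi_ms

print([soa_ms(isi) for isi in (30, 50)])  # [280, 300]
```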

Design

This study adopted a modified emotional spatial cueing paradigm (Fox et al., 2001) in which 60% of trials had valid cues (target appears at the cued location) and 40% of trials had invalid cues (target appears at the other, uncued location). In addition to cue validity, the emotional valence of the face cue (angry, happy, or neutral) was manipulated. There were an equal number of each facial expression overall, and within the valid and invalid sub-conditions. The task had eight blocks of 32 trials each and a practice block with 11 trials, totaling 267 trials.
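The trial arithmetic in the design can be checked directly. This is a minimal sketch; the per-block valid/invalid counts are not stated in the text and are therefore not assumed here:

```python
# Trial counts from the design description
blocks = 8
trials_per_block = 32
practice_trials = 11
catch_trials_per_block = 2  # catch trials contain no target

total_trials = blocks * trials_per_block + practice_trials
target_trials = blocks * (trials_per_block - catch_trials_per_block)

print(total_trials)   # 267, as stated in the design
print(target_trials)  # 240 experimental trials containing a target
```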

Measures

The study had two within-subject variables: cue validity (valid or invalid) and the face cue’s expression (angry, happy, neutral). The dependent variables were accuracy (percent correct), reaction time (RT, in milliseconds), and inverse efficiency score (IES, in milliseconds).
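The text does not spell out how IES is computed; the sketch below assumes the standard definition (mean correct RT divided by proportion correct), which keeps IES in milliseconds and penalizes both slow and inaccurate performance:

```python
def inverse_efficiency_score(mean_correct_rt_ms, proportion_correct):
    """IES in ms: mean correct RT divided by proportion correct, so a
    participant who is slower or less accurate gets a higher score."""
    if not 0 < proportion_correct <= 1:
        raise ValueError("proportion_correct must lie in (0, 1]")
    return mean_correct_rt_ms / proportion_correct
```

For example, a participant averaging 450 ms at 90% accuracy scores roughly 500 ms.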

Self- and informant-report questionnaires

Demographic questions and the STAI questionnaire were identical to those used in Experiment 1. All data and the analysis code have been made publicly available on OSF and can be accessed at https://osf.io/6k9fn/?view_only=75874e2f30a04820acbe86ffcec21665.

Statistical analysis

The first analysis we ran was a series of 2 × 3 repeated measures ANOVAs that examined the effects of cue validity (valid or invalid) and cue emotion (happy, angry, neutral) on the three dependent variables (RT, accuracy, and IES). After performing the ANOVA, where appropriate, we computed post hoc paired-sample t-tests on all the trials to better understand the direction of the cue emotion effect. These t-tests compared the differences between happy versus neutral cues, angry versus neutral cues, and happy versus angry cues for all three dependent variables. To correct for the three statistical tests for each dependent variable, we applied a Bonferroni correction factor of 3 to the alpha level of 0.05.
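The correction described here (a Bonferroni factor of 3 applied to α = .05) is equivalent to comparing each raw p-value in the family against .05/3 ≈ .0167. A minimal helper with illustrative p-values:

```python
def bonferroni_survivors(p_values, alpha=0.05):
    """Return, for each raw p-value in a family of m tests, whether it
    remains significant after Bonferroni correction (alpha / m)."""
    m = len(p_values)
    return [p < alpha / m for p in p_values]

# Example: three paired comparisons at alpha = .05 (threshold ~ .0167)
print(bonferroni_survivors([0.005, 0.0001, 0.351]))  # [True, True, False]
```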

To specifically examine attentional disengagement, we focused on the target invalid trials. Based on the results of the repeated measures ANOVA described above (see Results), we ran a one-way ANOVA on the invalid trials to understand the effect of cue emotion (three levels: happy, angry, neutral) on RT. Where appropriate, we performed a series of post hoc paired-samples t-tests to understand the direction of the result. We again applied a Bonferroni correction factor of 3 to the alpha level of 0.05 to account for the three statistical tests.

To investigate the impact of anxiety on attentional disengagement, we conducted a 2 × 3 mixed ANOVA on RT. The between-subject factor was anxiety group (high vs. low anxiety), defined separately for the state and trait anxiety measures using the 25% cutoffs described in our preregistration; the within-subject factor was cue emotion (happy, angry, neutral). Next, we repeated these mixed ANOVAs for state and trait anxiety using 10% anxiety groups to investigate the pattern further along the anxiety scale. Where appropriate, we conducted a set of between-subject independent-samples t-tests to understand the effect of anxiety group on RT for each of the three cue emotions (happy, angry, neutral), applying a Bonferroni correction factor of 3 to the alpha level of 0.05 to account for the three statistical tests. Finally, we examined the effect of cue emotion within each anxiety group: we conducted paired-samples t-tests (happy vs. neutral, angry vs. neutral, happy vs. angry) separately for the high and low-anxiety groups, again applying a Bonferroni correction factor of 3 to the alpha level of 0.05.
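The anxiety-group assignment can be sketched as a percentile split on STAI scores. This is an illustrative helper, not the authors' code: `tail=0.25` mirrors the preregistered 25% cutoffs and `tail=0.10` the follow-up 10% groups, with ties broken by sort order:

```python
def split_by_anxiety(stai_scores, tail=0.25):
    """Return (low_group, high_group) as participant indices: the
    bottom and top `tail` fraction of STAI scores."""
    ranked = sorted(range(len(stai_scores)), key=lambda i: stai_scores[i])
    k = max(1, int(len(stai_scores) * tail))
    return ranked[:k], ranked[-k:]

# Example with eight illustrative scores
low, high = split_by_anxiety([32, 45, 28, 60, 51, 37, 66, 41], tail=0.25)
print(low, high)  # indices of the two lowest and the two highest scorers
```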

Results

Effects of cue validity and cue emotion on general task performance

To understand the impact of the validity and the emotion of the face cue, we conducted a repeated-measures ANOVA investigating the effect of cue validity and cue emotion on each of the three dependent variables of interest (RT, accuracy, and IES). There was a significant main effect of cue validity across all three measures [RT: F(1,419) = 57.399, p < .001, ηp2 = 0.120; accuracy: F(1,419) = 31.481, p < .001, ηp2 = 0.070; IES: F(1,419) = 60.042, p < .001, ηp2 = 0.125], such that performance in the cue-invalid condition was slower and less accurate than in the valid condition [RT: t(419) = 7.598, p < .001 (significant after Bonferroni correction), Cohen’s d = 0.371; accuracy: t(419) = −5.611, p < .001 (significant after Bonferroni correction), Cohen’s d = −0.274; IES: t(419) = 7.603, p < .001 (significant after Bonferroni correction), Cohen’s d = 0.371]. This effect of cue validity replicates the classic spatial cueing effect, in which participants respond more efficiently to a target presented at the cued than at the uncued location because attention has been directed to the cued location and enhances processing there (Posner, 1980).

In addition to cue validity, the effect of cue emotion is key to understanding how the valence of emotional faces affects the allocation of spatial attention. We found a strong main effect of cue emotion on RT, F(1.968,824.759) = 11.133, p < .001, ηp2 = 0.026, Greenhouse-Geisser correction, such that responses to a target were slower when the target was preceded by an emotional face cue than by a neutral face cue [happy vs. neutral: t(419) = 2.797, p = .005 (significant after Bonferroni correction), Cohen’s d = 0.136; angry vs. neutral: t(419) = 4.601, p < .001 (significant after Bonferroni correction), Cohen’s d = 0.225]. This emotion effect, however, was absent in the accuracy measure [happy vs. neutral: t(419) = .245, p = .806 (not significant after Bonferroni correction), Cohen’s d = 0.012; angry vs. neutral: t(419) = .016, p = .987 (not significant after Bonferroni correction), Cohen’s d = 0.001] and minimal in the IES [p = .037 (not significant after Bonferroni correction), ηp2 = 0.008], potentially due to a ceiling effect in accuracy (mean accuracy = 92.64%, p = .983, ηp2 < 0.001). We therefore proceeded with reaction time as our dependent variable for further analysis, similar to previous spatial cueing studies (Becker et al., 2019; Fox et al., 2002; Posner, 1980).

The valence of emotional faces affects attentional disengagement

Motivated by our preregistered Competing Hypotheses 1, we next examined whether attention takes longer to disengage from negatively or positively valenced faces. Here, attentional disengagement involves shifting attention from the location of the cue to an uncued location, so the cue-invalid condition was the condition of interest. First, we found a main effect of cue emotion on RT, F(1.943,814.290) = 4.360, p = .014, ηp2 = 0.01, Greenhouse-Geisser correction. Post hoc tests showed that reaction times to targets at the cue-invalid location were longer following happy than neutral face cues, t(419) = 2.086, p = .038 (not significant after Bonferroni correction), Cohen’s d = 0.102, and following angry than neutral face cues, t(419) = 2.866, p = .004 (significant after Bonferroni correction), Cohen’s d = 0.140. While these results demonstrate that attention takes longer to disengage from emotional than from neutral faces, we did not find a significant difference in attentional disengagement between positively and negatively valenced emotional faces, t(419) = −0.933, p = .351.

The impact of anxiety on attentional disengagement from emotional stimuli

After establishing the effect of cue emotion on attentional disengagement, we asked whether anxiety level has an impact on attentional disengagement. Specifically, we tested whether participants with higher anxiety were faster or slower than those with lower anxiety to disengage from emotional faces (Competing hypotheses 2.1). To classify participants into low and high-anxiety groups, we needed a cutoff value for STAI anxiety scores for each group. First, following our preregistration, we defined low and high-anxiety scores as those within the bottom and top 25% of scores for both state and trait anxiety. We carried out a mixed ANOVA on RT, with emotion (angry, happy, neutral) varying within subjects and anxiety level (high, low) varying between subjects. While a group difference was evident, F(1, 210) = 17.021, p < .001, the interaction between anxiety level and emotion was not significant, F(1.887, 396.273) = 1.310, p = .270, Greenhouse-Geisser correction. The internal consistency between state and trait anxiety in Experiment 2 was 0.762.

We then looked further into the tails of the distribution, using 10% on each end of the STAI scores as the cutoff. With these groups, high anxiety (n = 47) and low anxiety (n = 45), we replicated the group difference between high and low anxiety, F(1, 90) = 8.305, p = .005, and found a significant interaction between state anxiety level and emotion, F(1.939, 174.504) = 3.645, p = .029, Greenhouse-Geisser correction, suggesting that anxiety level does influence attentional disengagement, corroborating findings from previous attention and anxiety research (Barbot & Carrasco, 2018; Bar-Haim et al., 2007; Ferneyhough et al., 2013; Fox et al., 2001). See Supplemental Table 4 for the descriptive statistics of the reaction time data for anxiety groups at both the 10% and 25% cutoff values.

To examine the details of the interaction between anxiety level and emotion, we conducted a series of between-subject independent-samples t-tests. Between the high and low state anxiety groups, there was a significant reaction time difference in the invalid condition for each facial expression (high anxiety: happy = 464.19 ms, angry = 466.00 ms, neutral = 454.47 ms; low anxiety: happy = 392.74 ms, angry = 388.63 ms, neutral = 393.66 ms) [happy: t(90) = 2.957, p = .004 (significant after Bonferroni correction), Cohen’s d = 0.617; angry: t(85.442) = 3.140, p = .002 (significant after Bonferroni correction), Cohen’s d = 0.651; neutral: t(85.189) = 2.489, p = .015 (significant after Bonferroni correction), Cohen’s d = 0.516]. That is, high-anxiety participants took longer than low-anxiety participants to disengage from all three face cues during invalid trials, as shown in Figure 5a–c. See Supplemental Figure 3 for similarly trending trait anxiety results. The two groups performed similarly in the valid condition (all p > 0.05), which shows that anxiety specifically impacts the ability to disengage from face cues (where a shift of attention is required), impeding task performance.

Figure 5.

Differences in reaction time (ms) between low and high state anxiety groups to disengage from a) angry, b) happy, and c) neutral face cues. ** p < .01, * p < .05, two-tailed independent-samples t-test.

Emotional valence within the high-anxiety group.

Given that high-anxiety participants took longer to disengage from face cues than low-anxiety participants, we next asked whether this difficulty in disengagement is similar across face cues. We conducted a series of paired-samples t-tests within the high-anxiety group, comparing the time to disengage from different facial emotions in the cue-invalid trials. The difference between happy and neutral faces was trending toward significance, t(46) = 1.939, p = .059, Cohen’s d = 0.283. There was also a difference between the time it took to disengage from angry versus neutral faces during invalid trials, t(46) = 2.201, p = .033 (not significant after Bonferroni correction), Cohen’s d = 0.321. However, we did not find a difference between happy and angry faces, t(46) = −.304, p = .763, Cohen’s d = −0.044 (Competing hypotheses 2.2 in our preregistration).

These findings demonstrate that higher anxiety impairs overall task performance during attentional disengagement from emotional stimuli. Additionally, participants with high levels of anxiety took longer to disengage from emotional (both happy and angry) faces than from neutral faces (see Figure 6a). This emotional cueing effect was not found in those with low anxiety: there was no significant difference in reaction time among any of the face cues in the invalid condition for low-anxiety participants (all p > 0.05), as shown in Figure 6b.

Figure 6.

Differences in time (ms) that it took a) high state anxiety participants and b) low state anxiety participants to disengage from an emotional face cue compared to neutral faces. * p < .05, † p < .1, n.s. = not significant, two-tailed paired t-test.

General discussion

During daily social interactions, humans use facial expressions to communicate emotions and feelings non-verbally, and these expressions hence receive privileged access to attention. Through two preregistered experiments on spatial attention, we addressed how attention is biased toward emotional information and how anxiety modulates the magnitude of this emotional bias. In the first experiment, we examined how emotional faces, with positive and negative valence, capture attention using a visual search task. While we found a consistent attentional bias for happy faces across all participants, this “happy face advantage” diminished as anxiety level increased. The high anxiety group exhibited an increased attentional bias for threat compared to the low anxiety group. In the second experiment, we investigated how emotional faces are disengaged from attention using an emotional spatial cueing task. We found that the high anxiety group took longer to disengage from both angry and happy face cues compared to neutral face cues, whereas the low anxiety group showed no difference in time to disengage from face cues of any emotion. Together, results from these two experiments converge to demonstrate a distinct impact of anxiety on the spatial deployment of emotional attention.

A number of prior studies have reported the phenomenon of emotional attention (Vuilleumier, 2005). While it is well established that emotional faces are prioritized for attentional processing, how differently valenced emotional faces capture and hold attention is still highly debated. Specifically, our experiments address two such debates: the “anger superiority effect” (Fox et al., 2000; Hansen & Hansen, 1988; Horstmann & Bauland, 2006) versus the “happy face advantage” (Becker et al., 2011; Calvo & Nummenmaa, 2008) in attentional capture, and “vigilance-avoidance” (Mogg & Bradley, 1998) versus “attentional maintenance” (Fox et al., 2001) in attentional disengagement.

Reconciling the anger superiority effect vs. happy face advantage

The “anger superiority effect” and “happy face advantage” debate is centered on the “valence” dimension, a key factor manipulated in Experiment 1. Although valence is often considered a single dimension in the literature, it can be easily confounded by low-level image statistics (e.g., image brightness and contrast, small differences in head tilt among photographs in a search array) (Savage et al., 2013), perceived arousal of the emotional faces (Lundqvist et al., 2014), and the interaction between facial gender and decisions about facial expression (Becker et al., 2007). Indeed, it has been increasingly recognized that the shadows in the face photographs and the low-level visual confounds inherent in the schematic line drawings of angry faces (“cross detection”) largely underlie the angry-face-in-the-crowd effect, which undermines evidence for the “anger superiority effect” (for a review, see Becker et al., 2011; Savage et al., 2013).

In our experiments, we controlled for perceived arousal and low-level image statistics, and we used an equal number of female and male faces within each emotional expression to ensure that our results were not stimulus-driven but instead reflected the expressions themselves, which differentiates our work from prior studies. The aim was to use stimuli that were equal in perceptual distinctiveness while still holding emotional meaning, as recommended by Savage et al. (2013). Results from Experiment 1 provide strong support for the happy face advantage in visual search and contradict the anger superiority effect. Across all participants, accuracy, reaction time, and inverse efficiency score converged to show facilitated search for a happy face compared to an angry one. Our results demonstrate a happy face advantage comparable to that of Becker et al. (2011): on average, a 400 ms search time difference between happy and angry target-present trials, collapsed across other conditions. The Becker group found approximately a 214 ms difference, but their reaction times were overall faster than ours by about a hundred milliseconds, which may be the result of conducting our study online instead of in a lab (Crump et al., 2013). Not only were we able to replicate their finding, but we further extended it to study how anxiety impacts attentional capture by emotional faces.

To examine whether our happy face advantage finding holds regardless of distractor type, we additionally investigated the two types of distractor crowds, neutral and oppositely valenced to the target, individually. Becker and Rheem (2020) caution that perceptual aspects of the crowds can easily confound results and that, regardless of whether crowds are emotional or neutral, it is important to ensure that distractor crowds are perceptually equivalent so that distractors are rejected at the same rate for trials with targets of all emotional valences. Some previous studies have described the importance of neutral distractors (Becker et al., 2011; Frischen et al., 2008), while other key papers in the literature, such as Hansen and Hansen (1988), used crowds that were oppositely valenced to their targets. Other recent papers used both, as the field is moving toward this design (Tipura et al., 2022). We found that the neutral-face and the oppositely valenced face crowds each demonstrated the same main effects and directionality of emotion and set size (see Supplemental Table 5 for the descriptive statistics broken down by distractor type and Supplemental Tables 6–7 for the relevant statistics); we therefore included trials of both distractor types in our main analyses.

An additional aspect of the experimental design that differentiates our task from earlier visual search experiments is the way we used task instructions to control for the top-down search strategies participants might employ. Early visual search tasks often used same-different judgements (Hansen & Hansen, 1988), whereas we instructed participants to search for a single target expression, either happy or angry (Figure 1a). The goal of instructing participants to search for a single target expression was to help control for varying top-down search strategies. According to Becker and Rheem’s (2020) guidance on conducting visual search experiments, it is important for participants to have conscious awareness of the emotional expression they are searching for, to ensure they are not using learned nonemotional features of the targets to increase search efficiency. Previous research has demonstrated that task performance may vary depending on whether participants are instructed to search for a specific emotion or to use a same-different judgement approach (Frischen et al., 2008). We therefore implemented a predetermined mental set design, as it gave us more control over the top-down attentional strategies participants might employ (similar to the design of Becker et al., 2011, Experiment 2). Importantly, we found results converging with some previous studies that used same-different judgement paradigms, including Juth et al. (2005) and Becker et al. (2011, Experiments 1a and 1b). Therefore, while it is important to be mindful of the attentional strategies participants might use, this aspect of the experimental design is likely not a determining factor for the observed patterns of emotional attention.

While this experiment focused on target-present trials, we also analyzed the target-absent trials to obtain a fuller picture of how emotion affects task performance (see the descriptive statistics for target-absent trials in Supplemental Table 8). We found a main effect of both set size and emotion on accuracy, RT, and IES, as well as a significant interaction between the two (see Supplemental Tables 9 and 10 for the relevant statistics). Post hoc analyses revealed that responses on target-absent emotional trials (all happy faces, all angry faces) were generally more accurate and faster than on target-absent neutral trials (all neutral faces) (all p ≤ .001), except for the reaction time of happy versus neutral at set size four (p = .586). Additionally, responses on target-absent happy trials were generally more accurate and faster than on target-absent angry trials (p < .013, except for accuracy at set size eight, where p = .179), corroborating the happy face advantage found in target-present trials.

Anxiety modulates attentional capture

Does anxiety affect attentional capture by emotional stimuli, and if so, how? Previous studies have attempted to answer this question, with mixed results. Some have claimed that anxiety enhances engagement with negatively valenced stimuli (Armstrong & Olatunji, 2012; Bar-Haim et al., 2007; Cisler & Koster, 2010; Clauss et al., 2022; Grafton & MacLeod, 2014; Mogg & Bradley, 1998; Okon-Singer, 2018; Rudaizky et al., 2014; Valadez et al., 2022; Williams et al., 1996), while others do not support the idea that anxiety enhances attentional capture at all (Blicher et al., 2020). Our results showed a novel, hybrid pattern of anxiety effects on attentional capture. While all participants exhibited the “happy face advantage,” this effect weakened as anxiety increased. This finding coincides with the high anxiety group results of Tipura et al. (2022), who showed that anxious participants were slower to search for happy faces than for neutral or fearful faces, as well as with the review by Frewen et al. (2008, p. 329). However, the bias we found for angry faces increased as anxiety increased, suggesting that highly anxious individuals have a greater bias for threat in attentional capture than those with low anxiety. This observation is consistent with the previous literature reporting threat-related attention in anxiety (Abado et al., 2020; for review, see Barry et al., 2015; Clauss et al., 2022; Richards et al., 2014; Valadez et al., 2022), yet the happy face advantage was maintained, as the bias for threat was not large enough to supersede the attentional bias for happy faces.

Attention maintenance for positive and negative stimuli

In the second experiment, we examined how anxiety impacts attentional disengagement. Consistent with many demonstrations of threat-related attentional bias (Hirsch & Mathews, 2012; Koster et al., 2006; Nelson et al., 2022; Rudaizky et al., 2014; Veerapa et al., 2020), we found delayed disengagement from negatively valenced stimuli compared to neutral stimuli in highly anxious participants. These results align with many prior studies that focused solely on comparing negative and neutral stimuli. In fact, spatial cueing with positive stimuli has not been well studied, given the literature’s focus on threat-related attention. Previous studies have mostly examined positive valence with specific types of stimuli, such as food words, babies, and attractiveness (Brosch et al., 2008; Leland & Pineda, 2006; Maner et al., 2007). These studies are included in a meta-analysis (Pool et al., 2016), which found a stronger attentional bias for positive stimuli in attentional capture than in disengagement. One study that examined disengagement from both positive and negative stimuli included the positive stimuli only to ensure that any conclusions about threat stimuli were threat-specific rather than an overall effect of emotion (Fox et al., 2001). Many experiments in the meta-analysis used schematic faces, so it is unclear whether their findings are due to perceptual processing or emotional attention (Becker et al., 2011).

Our spatial cueing design differs from previous literature in that we used happy and angry real-face cues, not schematic drawings. The results aligned most closely with those of Fox et al. (2002), who found that both happy and angry faces took longer to disengage from than neutral faces. Importantly, the delayed disengagement from emotional faces was found only within the high-anxiety group and not in the low-anxiety group (see Figure 5ab). Consistent with the literature, the low anxiety group’s performance on the spatial cueing task did not differ between emotional and neutral faces (Bar-Haim et al., 2007; Ferneyhough et al., 2013; Fox et al., 2001), suggesting that for these participants emotional valence is not a critical factor directly influencing performance in this task. In this respect, our results are consistent with predictions from the delayed disengagement, or attentional maintenance, hypothesis (Clauss et al., 2022) and inconsistent with predictions from the vigilance-avoidance hypothesis (Frewen et al., 2008; Mogg & Bradley, 1998). Some claim that avoidance may be a top-down coping strategy that anxious individuals use when presented with emotional stimuli (Rosen et al., 2019). However, extremely anxious individuals may not have the attentional control to implement this strategy. The lack of attentional control may, in fact, lead to greater attentional maintenance on arousing stimuli, as these individuals are not able to disengage (Ho et al., 2017; Taylor et al., 2016). More research is needed to better understand the role of attentional control in these attentional biases and whether valence intensity modulates participants’ ability to employ top-down strategies. Note that the difference in emotional cueing was not the only difference between anxiety groups: the low-anxiety group performed better on the task overall, which may also relate to capacity for attentional control. Comparing task accuracy in each of the two validity conditions, collapsed across all face cues, low-anxiety participants had significantly higher accuracy in both valid and invalid trials than high-anxiety participants.

Implications for understanding the neural mechanism of emotion processing

The observed findings regarding valence and anxiety have important implications for understanding the neural mechanisms of emotion processing. Many visual areas, including the fusiform face area and early visual cortex, and subcortical regions, such as the amygdala, show greater activation to emotional faces, both positively and negatively valenced, compared to neutral faces (Liu et al., 2022). Additionally, amygdala-frontal connectivity has been a major focus for top-down emotion regulation through cognitive reappraisal, but tasks in which faces were not attended have shown that emotional processing in the amygdala is likely independent of spatial attention. Historically, this research has focused on how threat relates to amygdala function (Morris et al., 1996). The amygdala is thought to be involved in the rapid, initial processing of threatening cues through a subcortical pathway (Vuilleumier & Pourtois, 2007). More recently, however, distinctions between the processing of anger and fear have garnered attention. First, research on attention to threat-related emotional information suppressed from awareness has found differences in the perception of fearful versus angry stimuli even in the absence of visual awareness (Vetter et al., 2019), which may reflect different neural mechanisms for fear and anger processing. Another study concluded that high levels of trait anxiety affect gaze perception for indirectly threatening (fearful) faces but not for directly threatening (angry) stimuli (Hu et al., 2017). Some even propose that fearful faces differ from angry faces in that fearful faces are prosocial, reflecting help-seeking behavior that signals vulnerability and appeasement (Marsh, 2023). Alternatively, the amygdala has been found to be highly sensitive to the spatial locations of all biologically relevant stimuli, not only threatening information, in which case both positive and negative faces would activate the amygdala (Peck et al., 2013; Peck & Salzman, 2014). More research is needed to examine neural activation by different emotional valences (including a more in-depth examination of direct versus indirect threat), across visual eccentricities, and the time course of activation for these different faces.

Limitations and future directions

There are several limitations in the current study. Most notable are the limitations specific to conducting online studies (Crump et al., 2013), which are not unique to our experiments. For example, the task design was constrained to ensure that participants could understand the task without lab personnel present to explain it. This was particularly true for Experiment 2, spatial cueing, in which the sequence of cue and target displays was more challenging to explain online. To overcome this difficulty, we required a minimum level of accuracy during practice to ensure that participants understood the task before proceeding to the experiment. We also provided accuracy feedback at the end of each block so that participants could continuously monitor their own performance. Additionally, the reliability of reaction-time-based attentional tasks, particularly those conducted online, has recently been called into question. As a proxy for assessing reliability, we examined the reproducibility of our findings by analyzing the pilot data that formed the basis of each online preregistration. Specifically, we conducted a mixed three-way ANOVA comparing the results of Experiment 1 with its pilot version, and a similar analysis for Experiment 2. In Experiment 1, we did not see a significant emotion x set size x dataGroup interaction (RT: [F(1,236) = 0.512, p = .475, ηp2 = .002]; ACC: [F(1,236) = 3.249, p = .073, ηp2 = .014]), suggesting that the key effects of interest were consistent between the two data groups (pilot data and Experiment 1 data).
In Experiment 2, we again did not see a significant emotion x validity x dataGroup interaction (RT: [F(1.958,961.312) = 0.143, p = .862, ηp2 < .001, Greenhouse-Geisser correction]; ACC: [F(1.991,977.616) = 0.945, p = .389, ηp2 = .002, Greenhouse-Geisser correction]), suggesting that the key effects of interest did not vary significantly between the two data groups. The lack of interactions in both experiments supports the reproducibility of our findings.
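The logic of this reliability check—testing whether the critical effect interacts with dataGroup—can be illustrated in code. The following is a minimal, plain-Python sketch (not our analysis code) of the interaction F-test for a balanced two-way mixed design, with one between-subjects factor standing in for dataGroup and a single within-subjects factor standing in for the full emotion x set size design; the `mixed_anova_interaction` function and the toy reaction-time values are hypothetical, and the Greenhouse-Geisser correction used in the actual analyses is omitted.

```python
def mixed_anova_interaction(data):
    """F-test for the group x condition interaction in a balanced
    two-way mixed design (one between-subjects factor, one
    within-subjects factor).

    data: dict mapping group label -> list of subjects; each subject
    is a list of scores, one per within-subject condition, in the same
    order throughout. Returns (F, df_interaction, df_error).
    """
    groups = sorted(data)
    n_cond = len(next(iter(data.values()))[0])
    n_per = {g: len(data[g]) for g in groups}
    n_subj = sum(n_per.values())

    scores = [y for g in groups for subj in data[g] for y in subj]
    grand = sum(scores) / len(scores)

    # Marginal and cell means
    g_mean = {g: sum(y for s in data[g] for y in s) / (n_per[g] * n_cond)
              for g in groups}
    c_mean = [sum(s[c] for g in groups for s in data[g]) / n_subj
              for c in range(n_cond)]
    cell = {(g, c): sum(s[c] for s in data[g]) / n_per[g]
            for g in groups for c in range(n_cond)}

    # Interaction sum of squares: cell deviations remaining after
    # removing the group and condition main effects
    ss_inter = sum(n_per[g] * (cell[g, c] - g_mean[g] - c_mean[c] + grand) ** 2
                   for g in groups for c in range(n_cond))

    # Within-subject error: residuals after removing cell means and
    # each subject's overall level
    ss_err = 0.0
    for g in groups:
        for subj in data[g]:
            s_mean = sum(subj) / n_cond
            for c in range(n_cond):
                ss_err += (subj[c] - cell[g, c] - s_mean + g_mean[g]) ** 2

    df_inter = (len(groups) - 1) * (n_cond - 1)
    df_err = (n_subj - len(groups)) * (n_cond - 1)
    f_stat = (ss_inter / df_inter) / (ss_err / df_err)
    return f_stat, df_inter, df_err


# Hypothetical mean RTs: two subjects per data group, two conditions.
rts = {
    "pilot": [[10, 12], [14, 17]],
    "expt1": [[20, 22], [24, 26]],
}
F, df1, df2 = mixed_anova_interaction(rts)
```

A small F relative to its null distribution on (df1, df2) degrees of freedom would indicate, as in our data, that the condition effect does not differ detectably between the pilot and experimental groups.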

Second, another limitation of online studies is that we could not control participants’ environments while they completed the task, including computer- and display-related factors, the physical surroundings, and the level of attention or adherence to the task. Third, the anxiety ratings in this study rely purely on participants’ self-reports. Although we used the STAI, a common and validated measure with high internal consistency, self-report remains a subjective measure. Due to the constraints of our study protocol, we could not ask about clinical diagnoses, and it is likely that many participants with high self-reported anxiety scores were not formally diagnosed with anxiety, which differentiates our sample from the patient samples in the anxiety literature. The online study protocol also made it difficult to rule out the possibility that some individuals exhibited other clinical characteristics that tend to co-occur with anxiety symptoms. It has been suggested that the STAI-T (assessing trait anxiety) may primarily measure negative affect, which is not surprising given the high overlap between symptoms of anxiety and depression (Knowles & Olatunji, 2020). As the scope of our study pertained only to anxiety, which frequently presents alongside other clinical diagnoses, this limitation highlights an area for future exploration. Nevertheless, conducting an anxiety study online has unique advantages: it allowed us to sample geographically diverse populations, and the large sample we acquired spans the full spectrum of anxiety scores, which would not have been possible in a typical lab setting.

Our findings best reveal characteristics of emotional attention in groups and should not be used to claim that individuals with anxiety show precise or stable patterns of attentional bias (McNally, 2019; Rodebaugh et al., 2016; Yuval et al., 2017; Zvielli et al., 2014). While the attentional biases we observed across two experiments are robust at the group level, we did not assess intertrial or test-retest reliability at the individual level (MacLeod et al., 2019). Future work will need to use eye tracking (Skinner et al., 2018; Veerapa et al., 2020), functional MRI (Smith et al., 2021), or electrophysiology (Fox et al., 2008; Tipura & Fox, 2021) to provide individual-specific and more reliable measures of anxiety-related attentional biases.

Constraints on generality

Given the debates in the emotional attention and anxiety literature, it is essential that our research be generalizable and replicable. With this objective in mind, we focused on careful data collection and curation of our public data repository on the Open Science Framework (OSF). We included a large sample size to enhance generality (Expt. 1: n = 154, Expt. 2: n = 420). To promote transparency, we preregistered all aspects of this study on OSF, including hypotheses, study design, sampling, and analysis plan. We also reported all our findings in the final manuscript, including both null and significant results.

Despite our large sample size and transparent research practices, some important constraints must be considered. External and ecological validity are commonly recognized constraints that can limit the generalizability of findings in emotion studies. By conducting our studies online in participants’ homes, a natural setting, we achieved a higher degree of external validity than in a laboratory setting, enabling a more realistic assessment of participants’ behaviors and experiences within their everyday environments. However, it is important to acknowledge that the static, black-and-white, cropped facial images in our studies differ perceptually and experientially from real-life interactions in which individuals engage with others’ emotions. Future studies may benefit from improving the ecological validity of emotional stimuli while controlling for low-level visual confounds. Additionally, although we discuss the impact of anxiety in this study, the constraints of our online study protocol prevented us from asking participants about clinical diagnoses, so these findings may not generalize to clinical anxiety populations.

Conclusions and conceptual advance

The present study reveals distinct attentional mechanisms for positive and negative stimuli and highlights the importance of including both valences when studying emotional attention, particularly in individuals with anxiety. Contrary to the commonly held idea that threat-related information receives facilitated attentional engagement, our findings indicate that happy faces capture attention faster than angry faces. However, this bias for happy over angry faces decreases as anxiety increases. Consistent with delayed disengagement of attention from threat, we observed that in the highly anxious group, attention was as difficult to disengage from happy faces as from angry faces. While the evolutionary importance of threat-related faces has been the central focus of the emotional attention and anxiety literature, we stress that the evolutionary importance and statistical regularity of happy faces should not be neglected. Happy faces are an important instrument of communication that signals social welcome and acceptance and invites collaboration (Oatley & Jenkins, 1996).

We make three points. First, happy faces have their own social and evolutionary importance, as they can indicate friendliness and safety. Second, the statistical regularity of happy faces in the real world is much higher than that of angry or fearful faces. Third, from the perspective of ecological validity, including both negative and positive emotional valence in the study design can help ensure that the findings can be generalized to naturalistic situations.

These findings of rapid attentional capture by, and delayed disengagement from, emotional stimuli have meaningful implications for how individuals with anxiety perceive their daily social environments, a question of growing importance as the global prevalence of anxiety rises.

Supplementary Material

Supplemental Material

Acknowledgments

This work was supported by the Intramural Research Program of the National Institute of Mental Health (ZIAMH002966; protocol 000589). We thank Dr. Daniel Pine for helpful comments and the Laboratory of Brain and Cognition at NIMH for fruitful discussions. The authors also thank Yolanda L. Jones, NIH Library, for manuscript editing assistance.

Footnotes

Competing interests: The authors declare no competing interests.

The study design and hypotheses for Expt. 1 were preregistered; see https://osf.io/s975h. The study design and hypotheses for Expt. 2 were also preregistered; see https://osf.io/v9pw8. The preprint can be found at 10.31219/osf.io/qcp34. Additionally, this work has been presented at Vision Sciences Society 2022 and Society for Neuroscience 2022.

References

  1. Abado E, Richter T, & Okon-Singer H (2020). Attention bias toward negative stimuli. In Cognitive Biases in Health and Psychiatric Disorders (pp. 19–40). Elsevier. 10.1016/B978-0-12-816660-4.00002-7
  2. Armstrong RA (2014). When to use the Bonferroni correction. Ophthalmic and Physiological Optics, 34(5), 502–508. 10.1111/opo.12131
  3. Armstrong T, & Olatunji BO (2012). Eye tracking of attention in the affective disorders: A meta-analytic review and synthesis. Clinical Psychology Review, 32(8), 704–723. 10.1016/j.cpr.2012.09.004
  4. Bachmann H, Liu TT, Japee S, & Merriam EP (2022, December 16). The relationship between emotional valence, anxiety, and spatial attention. Retrieved from osf.io/6k9fn
  5. Bandelow B, & Michaelis S (2015). Epidemiology of anxiety disorders in the 21st century. Dialogues in Clinical Neuroscience, 17(3), 327–335. 10.31887/DCNS.2015.17.3/bbandelow
  6. Barbot A, & Carrasco M (2018). Emotion and anxiety potentiate the way attention alters visual appearance. Scientific Reports, 8(1), 5938. 10.1038/s41598-018-23686-8
  7. Bar-Haim Y, Lamy D, Pergamin L, Bakermans-Kranenburg MJ, & van IJzendoorn MH (2007). Threat-related attentional bias in anxious and nonanxious individuals: A meta-analytic study. Psychological Bulletin, 133(1), 1–24. 10.1037/0033-2909.133.1.1
  8. Barry TJ, Vervliet B, & Hermans D (2015). An integrative review of attention biases and their contribution to treatment for anxiety disorders. Frontiers in Psychology, 6. https://www.frontiersin.org/articles/10.3389/fpsyg.2015.00968
  9. Becker DV, Anderson US, Mortensen CR, Neufeld SL, & Neel R (2011). The face in the crowd effect unconfounded: Happy faces, not angry faces, are more efficiently detected in single- and multiple-target visual search tasks. Journal of Experimental Psychology: General, 140(4), 637–659. 10.1037/a0024060
  10. Becker DV, Kenrick DT, Neuberg SL, Blackwell KC, & Smith DM (2007). The confounded nature of angry men and happy women. Journal of Personality and Social Psychology, 92(2), 179–190. 10.1037/0022-3514.92.2.179
  11. Becker DV, & Rheem H (2020). Searching for a face in the crowd: Pitfalls and unexplored possibilities. Attention, Perception, & Psychophysics, 82(2), 626–636. 10.3758/s13414-020-01975-7
  12. Becker DV, Rheem H, Pick CM, Ko A, & Lafko SR (2019). Angry faces hold attention: Evidence of attentional adhesion in two paradigms. In Progress in Brain Research (Vol. 247, pp. 89–110). Elsevier. 10.1016/bs.pbr.2019.03.033
  13. Blicher A, Reinholdt-Dunne ML, Hvenegaard M, Winding C, Petersen A, & Vangkilde S (2020). Engagement and disengagement components of attentional bias to emotional stimuli in anxiety and depression. Journal of Experimental Psychopathology, 11(3), 204380872094375. 10.1177/2043808720943753
  14. Bouras N, & Holt G (Eds.). (2007). Psychiatric and Behavioural Disorders in Intellectual and Developmental Disabilities (2nd ed.). Cambridge University Press. 10.1017/CBO9780511543616
  15. Brosch T, Sander D, Pourtois G, & Scherer KR (2008). Beyond fear: Rapid spatial orienting toward positive emotional stimuli. Psychological Science, 19(4), 362–370. 10.1111/j.1467-9280.2008.02094.x
  16. Bruyer R, & Brysbaert M (2011). Combining speed and accuracy in cognitive psychology: Is the inverse efficiency score (IES) a better dependent variable than the mean reaction time (RT) and the percentage of errors (PE)? Psychologica Belgica, 51(1), 5. 10.5334/pb-51-1-5
  17. Calvo MG, & Nummenmaa L (2008). Detection of emotional faces: Salient physical features guide effective visual search. Journal of Experimental Psychology: General, 137(3), 471–494. 10.1037/a0012771
  18. Chen NTM, Clarke PJF, MacLeod C, & Guastella AJ (2012). Biased attentional processing of positive stimuli in social anxiety disorder: An eye movement study. Cognitive Behaviour Therapy, 41(2), 96–107. 10.1080/16506073.2012.666562
  19. Cisler JM, & Koster EHW (2010). Mechanisms of attentional biases towards threat in anxiety disorders: An integrative review. Clinical Psychology Review, 30(2), 203–216. 10.1016/j.cpr.2009.11.003
  20. Clauss K, Gorday JY, & Bardeen JR (2022). Eye tracking evidence of threat-related attentional bias in anxiety- and fear-related disorders: A systematic review and meta-analysis. Clinical Psychology Review, 93, 102142. 10.1016/j.cpr.2022.102142
  21. Coelho CM, Cloete S, & Wallis G (2010). The face-in-the-crowd effect: When angry faces are just cross(es). Journal of Vision, 10(1), 7.1–14. 10.1167/10.1.7
  22. Crump MJC, McDonnell JV, & Gureckis TM (2013). Evaluating Amazon’s Mechanical Turk as a tool for experimental behavioral research. PLoS ONE, 8(3), e57410. 10.1371/journal.pone.0057410
  23. Eastwood JD, Smilek D, & Merikle PM (2001). Differential attentional guidance by unattended faces expressing positive and negative emotion. Perception & Psychophysics, 63(6), 1004–1013. 10.3758/BF03194519
  24. Ferneyhough E, Kim MK, Phelps EA, & Carrasco M (2013). Anxiety modulates the effects of emotion and attention on early vision. Cognition and Emotion, 27(1), 166–176. 10.1080/02699931.2012.689953
  25. Fox E, & Damjanovic L (2006). The eyes are sufficient to produce a threat superiority effect. Emotion, 6(3), 534–539. 10.1037/1528-3542.6.3.534
  26. Fox E, Derakshan N, & Shoker L (2008). Trait anxiety modulates the electrophysiological indices of rapid orienting towards angry faces. Neuroreport, 19, 259–263. 10.1097/WNR.0b013e3282f53d2a
  27. Fox E, Lester V, Russo R, Bowles RJ, Pichler A, & Dutton K (2000). Facial expressions of emotion: Are angry faces detected more efficiently? Cognition & Emotion, 14(1), 61–92. 10.1080/026999300378996
  28. Fox E, Russo R, Bowles R, & Dutton K (2001). Do threatening stimuli draw or hold visual attention in subclinical anxiety? Journal of Experimental Psychology: General, 130(4), 681–700. 10.1037/0096-3445.130.4.681
  29. Fox E, Russo R, & Dutton K (2002). Attentional bias for threat: Evidence for delayed disengagement from emotional faces. Cognition & Emotion, 16(3), 355–379. 10.1080/02699930143000527
  30. Frewen P, Dozois D, Joanisse M, & Neufeld R (2008). Selective attention to threat versus reward: Meta-analysis and neural-network modeling of the dot-probe task. Clinical Psychology Review, 28(2), 307–337. 10.1016/j.cpr.2007.05.006
  31. Frischen A, Eastwood JD, & Smilek D (2008). Visual search for faces with emotional expressions. Psychological Bulletin, 134(5), 662–676. 10.1037/0033-2909.134.5.662
  32. Goeleven E, De Raedt R, Leyman L, & Verschuere B (2008). The Karolinska Directed Emotional Faces: A validation study. Cognition and Emotion, 22(6), 1094–1118. 10.1080/02699930701626582
  33. Grafton B, & MacLeod C (2014). Enhanced probing of attentional bias: The independence of anxiety-linked selectivity in attentional engagement with and disengagement from negative information. Cognition and Emotion, 28(7), 1287–1302. 10.1080/02699931.2014.881326
  34. Hansen CH, & Hansen RD (1988). Finding the face in the crowd: An anger superiority effect. Journal of Personality and Social Psychology, 54(6), 917–924. 10.1037/0022-3514.54.6.917
  35. Hirsch CR, & Mathews A (2012). A cognitive model of pathological worry. Behaviour Research and Therapy, 50(10), 636–646. 10.1016/j.brat.2012.06.007
  36. Ho SMY, Yeung D, & Mak CWY (2017). The interaction effect of attentional bias and attentional control on dispositional anxiety among adolescents. British Journal of Psychology, 108(3), 564–582. 10.1111/bjop.12225
  37. Horstmann G, & Bauland A (2006). Search asymmetries with real faces: Testing the anger-superiority effect. Emotion, 6(2), 193–207. 10.1037/1528-3542.6.2.193
  38. Hu Z, Gendron M, Liu Q, Zhao G, & Li H (2017). Trait anxiety impacts the perceived gaze direction of fearful but not angry faces. Frontiers in Psychology, 8, 1186. 10.3389/fpsyg.2017.01186
  39. Juth P, Lundqvist D, Karlsson A, & Öhman A (2005). Looking for foes and friends: Perceptual and emotional factors when finding a face in the crowd. Emotion, 5(4), 379–395. 10.1037/1528-3542.5.4.379
  40. Koster EHW, Crombez G, Verschuere B, Van Damme S, & Wiersema JR (2006). Components of attentional bias to threat in high trait anxiety: Facilitated engagement, impaired disengagement, and attentional avoidance. Behaviour Research and Therapy, 44(12), 1757–1771. 10.1016/j.brat.2005.12.011
  41. Ledoux J (2000). Cognitive–emotional interactions: Listen to the brain.
  42. Leland DS, & Pineda JA (2006). Effects of food-related stimuli on visual spatial attention in fasting and nonfasting normal subjects: Behavior and electrophysiology. Clinical Neurophysiology, 117(1), 67–84. 10.1016/j.clinph.2005.09.004
  43. Linke R, De Lima A, Schwegler H, & Pape H (1999). Direct synaptic connections of axons from superior colliculus with identified thalamo-amygdaloid projection neurons in the rat: Possible substrates of a subcortical visual pathway to the amygdala. Journal of Comparative Neurology, 403(2), 158–170.
  44. Liu TT, Fu JZ, Chai Y, Japee S, Chen G, Ungerleider LG, & Merriam EP (2022). Layer-specific, retinotopically-diffuse modulation in human visual cortex in response to viewing emotionally expressive faces. Nature Communications, 13(1), 6302. 10.1038/s41467-022-33580-7
  45. Lundqvist D, Flykt A, & Öhman A (1998). The Karolinska Directed Emotional Faces—KDEF, CD ROM from Department of Clinical Neuroscience. Psychology Section, Karolinska Institutet.
  46. Lundqvist D, Juth P, & Öhman A (2014). Using facial emotional stimuli in visual search experiments: The arousal factor explains contradictory results. Cognition and Emotion, 28(6), 1012–1029. 10.1080/02699931.2013.867479
  47. MacLeod C, Grafton B, & Notebaert L (2019). Anxiety-linked attentional bias: Is it reliable? Annual Review of Clinical Psychology, 15, 529–554. 10.1146/annurev-clinpsy-050718-095505
  48. Maner JK, Gailliot MT, Rouby DA, & Miller SL (2007). Can’t take my eyes off you: Attentional adhesion to mates and rivals. Journal of Personality and Social Psychology, 93(3), 389–401. 10.1037/0022-3514.93.3.389
  49. Marsh AA (2023). Fear signals vulnerability and appeasement, not threat. Behavioral and Brain Sciences, 46, e71. 10.1017/S0140525X22001753
  50. McNally RJ (2019). Attentional bias for threat: Crisis or opportunity? Clinical Psychology Review, 69, 4–13. 10.1016/j.cpr.2018.05.005
  51. Mogg K, & Bradley BP (1998). A cognitive-motivational analysis of anxiety. Behaviour Research and Therapy, 36(9), 809–848. 10.1016/S0005-7967(98)00063-1
  52. Morris JS, Frith CD, Perrett DI, Rowland D, Young AW, Calder AJ, & Dolan RJ (1996). A differential neural response in the human amygdala to fearful and happy facial expressions. Nature, 383(6603), 812–815. 10.1038/383812a0
  53. Morris JS, Öhman A, & Dolan RJ (1999). A subcortical pathway to the right amygdala mediating “unseen” fear. Proceedings of the National Academy of Sciences, 96(4), 1680–1685. 10.1073/pnas.96.4.1680
  54. Nelson AL, Quigley L, Carriere J, Kalles E, Smilek D, & Purdon C (2022). Avoidance of mild threat observed in generalized anxiety disorder (GAD) using eye tracking. Journal of Anxiety Disorders, 88, 102577. 10.1016/j.janxdis.2022.102577
  55. Nummenmaa L, & Calvo MG (2015). Dissociation between recognition and detection advantage for facial expressions: A meta-analysis. Emotion, 15(2), 243–256. 10.1037/emo0000042
  56. Oatley K, & Jenkins JM (1996). Understanding emotions. Blackwell Publishers.
  57. Öhman A, Lundqvist D, & Esteves F (2001). The face in the crowd revisited: A threat advantage with schematic stimuli. Journal of Personality and Social Psychology, 80(3), 381–396. 10.1037/0022-3514.80.3.381
  58. Okon-Singer H (2018). The role of attention bias to threat in anxiety: Mechanisms, modulators and open questions. Current Opinion in Behavioral Sciences, 19, 26–30. 10.1016/j.cobeha.2017.09.008
  59. Peck CJ, Lau B, & Salzman CD (2013). The primate amygdala combines information about space and value. Nature Neuroscience, 16(3), 340–348. 10.1038/nn.3328
  60. Peck CJ, & Salzman CD (2014). Amygdala neural activity reflects spatial attention towards stimuli promising reward or threatening punishment. eLife, 3. 10.7554/eLife.04478
  61. Pool E, Brosch T, Delplanque S, & Sander D (2016). Attentional bias for positive emotional stimuli: A meta-analytic investigation. Psychological Bulletin, 142(1), 79–106. 10.1037/bul0000026
  62. Posner MI (1980). Orienting of attention. Quarterly Journal of Experimental Psychology, 32(1), 3–25. 10.1080/00335558008248231
  63. Purcell DG, Stewart AL, & Skov RB (1996). It takes a confounded face to pop out of a crowd. Perception, 25(9), 1091–1108. 10.1068/p251091
  64. Richards HJ, Benson V, Donnelly N, & Hadwin JA (2014). Exploring the function of selective attention and hypervigilance for threat in anxiety. Clinical Psychology Review, 34(1), 1–13. 10.1016/j.cpr.2013.10.006
  65. Rodebaugh TL, Scullin RB, Langer JK, Dixon DJ, Huppert JD, Bernstein A, Zvielli A, & Lenze EJ (2016). Unreliability as a threat to understanding psychopathology: The cautionary tale of attentional bias. Journal of Abnormal Psychology, 125(6), 840–851. 10.1037/abn0000184
  66. Rosen D, Price RB, & Silk JS (2019). An integrative review of the vigilance-avoidance model in pediatric anxiety disorders: Are we looking in the wrong place? Journal of Anxiety Disorders, 64, 79–89. 10.1016/j.janxdis.2019.04.003
  67. Rudaizky D, Basanovic J, & MacLeod C (2014). Biased attentional engagement with, and disengagement from, negative information: Independent cognitive pathways to anxiety vulnerability? Cognition & Emotion, 28(2), 245–259. 10.1080/02699931.2013.815154
  68. Santabárbara J, Lasheras I, Lipnicki DM, Bueno-Notivol J, Pérez-Moreno M, López-Antón R, De la Cámara C, Lobo A, & Gracia-García P (2021). Prevalence of anxiety in the COVID-19 pandemic: An updated meta-analysis of community-based studies. Progress in Neuro-Psychopharmacology and Biological Psychiatry, 109, 110207. 10.1016/j.pnpbp.2020.110207
  69. Savage RA, Lipp OV, Craig BM, Becker SI, & Horstmann G (2013). In search of the emotional face: Anger versus happiness superiority in visual search. Emotion, 13(4), 758–768. 10.1037/a0031970
  70. Skinner IW, Hübscher M, Moseley GL, Lee H, Wand BM, Traeger AC, Gustin SM, & McAuley JH (2018). The reliability of eyetracking to assess attentional bias to threatening words in healthy individuals. Behavior Research Methods, 50(5), 1778–1792. 10.3758/s13428-017-0946-y
  71. Smith AR, Haller SP, Haas SA, Pagliaccio D, Behrens B, Swetlitz C, Bezek JL, Brotman MA, Leibenluft E, Fox NA, & Pine DS (2021). Emotional distractors and attentional control in anxious youth: Eye tracking and fMRI data. Cognition & Emotion, 35(1), 110–128. 10.1080/02699931.2020.1816911
  72. Stoet G (2010). PsyToolkit: A software package for programming psychological experiments using Linux. Behavior Research Methods, 42(4), 1096–1104. 10.3758/BRM.42.4.1096
  73. Stoet G (2017). PsyToolkit: A novel web-based method for running online questionnaires and reaction-time experiments. Teaching of Psychology, 44(1), 24–31. 10.1177/0098628316677643
  74. Sutton TM, Herbert AM, & Clark DQ (2019). Valence, arousal, and dominance ratings for facial stimuli. Quarterly Journal of Experimental Psychology, 72(8), 2046–2055. 10.1177/1747021819829012
  75. Taylor CT, Bomyea J, & Amir N (2010). Attentional bias away from positive social information mediates the link between social anxiety and anxiety vulnerability to a social stressor. Journal of Anxiety Disorders, 24(4), 403–408. 10.1016/j.janxdis.2010.02.004
  76. Taylor CT, Cross K, & Amir N (2016). Attentional control moderates the relationship between social anxiety symptoms and attentional disengagement from threatening information. Journal of Behavior Therapy and Experimental Psychiatry, 50, 68–76. 10.1016/j.jbtep.2015.05.008
  77. Tipura E, & Fox E (2021). Neural mechanisms of eye gaze processing as a function of emotional expression and working memory load. Neuroscience Letters, 742, 135550. 10.1016/j.neulet.2020.135550
  78. Tipura E, Souto D, & Fox E (2022). Trait anxious people take longer to search for happy faces in the presence of neutral and fearful distractors [Preprint]. PsyArXiv. 10.31234/osf.io/p76eb
  79. Valadez EA, Pine DS, Fox NA, & Bar-Haim Y (2022). Attentional biases in human anxiety. Neuroscience & Biobehavioral Reviews, 142, 104917. 10.1016/j.neubiorev.2022.104917
  80. Veerapa E, Grandgenevre P, El Fayoumi M, Vinnac B, Haelewyn O, Szaffarczyk S, Vaiva G, & D’Hondt F (2020). Attentional bias towards negative stimuli in healthy individuals and the effects of trait anxiety. Scientific Reports, 10(1), 1–10.
  81. Vetter P, Badde S, Phelps EA, & Carrasco M (2019). Emotional faces guide the eyes in the absence of awareness. eLife, 8, e43467. 10.7554/eLife.43467
  82. Vuilleumier P (2005). How brains beware: Neural mechanisms of emotional attention. Trends in Cognitive Sciences, 9(12), 585–594. 10.1016/j.tics.2005.10.011
  83. Vuilleumier P, & Pourtois G (2007). Distributed and interactive brain mechanisms during emotion face perception: Evidence from functional neuroimaging. Neuropsychologia, 45(1), 174–194. 10.1016/j.neuropsychologia.2006.06.003
  84. Willenbockel V, Sadr J, Fiset D, Horne GO, Gosselin F, & Tanaka JW (2010). Controlling low-level image properties: The SHINE toolbox. Behavior Research Methods, 42(3), 671–684. 10.3758/BRM.42.3.671
  85. Williams JMG, Mathews A, & MacLeod C (1996). The emotional Stroop task and psychopathology. Psychological Bulletin, 120(1), 3–24. 10.1037/0033-2909.120.1.3
  86. Wolfe JM (1992). The parallel guidance of visual attention. Current Directions in Psychological Science, 1(4), 124–128.
  87. Yuval K, Zvielli A, & Bernstein A (2017). Attentional bias dynamics and posttraumatic stress in survivors of violent conflict and atrocities: New directions in clinical psychological science of refugee mental health. Clinical Psychological Science, 5(1), 64–73. 10.1177/2167702616649349
  88. Zvielli A, Bernstein A, & Koster EHW (2014). Dynamics of attentional bias to threat in anxious adults: Bias towards and/or away? PLoS ONE, 9(8), e104025. 10.1371/journal.pone.0104025
