Abstract
In this paper, the role of self-reported anxiety and degree of conscious awareness as determinants of the selective processing of affective facial expressions is investigated. In two experiments, an attentional bias toward fearful facial expressions was observed, although this bias was apparent only for those reporting high levels of trait anxiety and only when the emotional face was presented in the left visual field. This pattern was especially strong when the participants were unaware of the presence of the facial stimuli. In Experiment 3, a patient with right-hemisphere brain damage and visual extinction was presented with photographs of faces and fruits on unilateral and bilateral trials. On bilateral trials, it was found that faces produced less extinction than did fruits. Moreover, faces portraying a fearful or a happy expression tended to produce less extinction than did neutral expressions. This suggests that emotional facial expressions may be less dependent on attention to achieve awareness. The implications of these results for understanding the relations between attention, emotion, and anxiety are discussed.
A primary function of attentional mechanisms is to direct cognitive resources toward parts or attributes of the world that are of particular relevance to the organism. These selective attentional processes are likely to be particularly sensitive to biologically significant stimuli. Emotional facial expressions are one set of stimuli that might be expected to have a high degree of behavioral relevance. The face carries important information about many social and biological attributes, such as identity, gender, age, and emotional state. Given that the expression of fear or threat in another individual may signal potential danger in the environment, the rapid detection and facilitated processing of such facial expressions would be expected. Three sets of evidence suggest that facial expressions are indeed a special category of stimuli in terms of attracting processing resources. First, with a visual search paradigm, it has been shown that angry facial expressions are easier to detect than happy or neutral facial expressions. For instance, Hansen and Hansen (1988) presented their participants with a series of photographs containing a number of faces. On half of the trials, there was a single discrepant face. They found that people were fastest to detect discrepant angry faces, relative to discrepant happy faces. Moreover, the time taken to detect an angry face was unaffected by the number of other faces in the display, whereas detection of a happy face was strongly influenced by the size of the display. They argued that processing of angry facial expressions occurs automatically and that a consequence of this automatic processing of threat is the allocation of attentional resources to a preattentively defined location. In contrast, search for happy faces was serial and was considered to require attention. Unfortunately, there were a number of problems in interpreting this early study, in that the results may have been due to slower search through angry distractors, since a discrepant angry face was always embedded in a happy crowd and vice versa. In addition, there were further problems with the nature of the stimuli used (see Fox et al., 2000, and Nothdurft, 1993, for discussions). Nevertheless, these methodological problems have been addressed in more recent research, and it does seem that processing of angry facial expressions is indeed more efficient than that of happy expressions, although the evidence suggests that the search may not be as fully automatic as was originally claimed (Eastwood, Smilek, & Merikle, 2001; Fox et al., 2000; Ohman, Lundqvist, & Esteves, 2001). This research indicates that the mechanisms underlying visual search are particularly sensitive to angry facial expressions.
A second area of research that supports the notion that facial expressions may be processed at an automatic level comes from studies of patients with unilateral neglect. Unilateral neglect is a relatively common disorder following lesions of the right parietal lobe (Vallar, 1993). It is characterized by the failure to notice contralesional stimuli even in the absence of hemianopia. Extinction is a similar impairment and is defined as the failure to detect a contralesional stimulus, but only when another stimulus is simultaneously presented on the ipsilesional side (Rafal, 1996). Unilateral neglect and extinction are widely thought to represent disorders of attentional mechanisms, because the patient seems capable of detecting and responding to stimuli in the neglected field and yet does not. Of particular relevance for the present purposes is a recent study demonstrating that the emotional expressions of schematic faces can influence the amount of extinction observed (Vuilleumier & Schwartz, 2001). This study found that faces with an angry or a happy expression resulted in less extinction than did faces with a neutral expression. This provides converging evidence that emotional expressions have a special status for the attentional system. It is interesting, however, that the influence on extinction was not specific to angry expressions but was found equally for angry and happy expressions. In contrast, the visual search experiments discussed above suggest that processing is especially facilitated for angry expressions.
Neurobiological research also supports the notion that emotional facial expressions, especially of negative emotions, may have a special status in attracting processing resources (see Dolan, 2000, and LeDoux, 1996, for reviews). For example, in conditioning experiments, larger galvanic skin responses and greater resistance to extinction are associated with aversively conditioned angry facial expressions, relative to happy or neutral facial expressions (Esteves, Parra, Dimberg, & Ohman, 1994). Likewise, in a positron emission tomography study, it has been shown that the human amygdala is involved in processing the emotional salience of faces, with an apparent specificity of response to fearful facial expressions (Morris et al., 1996). More recent research has shown that even when aversively conditioned angry facial expressions are backward masked, so that observers are unaware of their presentation, the right amygdala is more responsive to these unseen stimuli than to unconditioned angry faces. These studies suggest that the right amygdala can discriminate between stimuli on the basis of their acquired behavioral significance (Morris, Ohman, & Dolan, 1998, 1999). In addition, brain imaging has shown that the amygdala is particularly responsive to masked fearful facial expressions even when the expressions are not conditioned in the experiment (Whalen et al., 1998).
Thus, there has been a general finding from brain-imaging research that emotional facial expressions have a special status in capturing visual attentional processes. However, although the foregoing research has considered the emotionality of the stimuli to be processed in some detail, little attention has been paid to the emotionality of the participant in these studies. In relation to this, a burgeoning behavioral literature has found that selective processing of emotionally relevant stimuli might be modulated by the mood state of the person. For example, several studies have found that people who report high levels of trait anxiety are faster to detect probes occurring in a location previously occupied by an angry facial expression, relative to a neutral or happy facial expression (e.g., Bradley, Mogg, Falla, & Hamilton, 1998; Mogg & Bradley, 1999). Moreover, it has also been found that high state-anxious people take longer to disengage their visual attention from the location of angry facial expressions, whereas this pattern does not occur for happy facial expressions (Fox, Russo, Bowles, & Dutton, 2001). Thus, it seems that the level of an individual’s self-reported anxiety may be an important determinant of selective attention to behaviorally salient stimuli, such as facial expressions. It should be noted, however, that two studies have attempted to assess whether detection of angry expressions is enhanced in high trait-anxious (HA) individuals and both found little evidence for this hypothesis. Byrne and Eysenck (1995) did find that high anxious people detected an angry face in a neutral crowd more quickly, when compared with low trait-anxious (LA) individuals. However, the HA group was no faster to detect an angry face in a neutral crowd, relative to a happy face in a neutral crowd. Similarly, in a more recent study, it was found across several experiments that the enhanced detection of angry schematic faces did not differentiate between HA and LA people (Fox et al., 2000). Thus, although the evidence suggests an anxiety-related bias to allocate attentional resources toward the location of angry faces in dot-probe paradigms (e.g., Bradley et al., 1998; Mogg & Bradley, 1999), anxiety does not appear to modulate the detection of angry faces in visual search tasks.
One study using the dot-probe paradigm suggested a further specificity of the anxiety-related attentional bias toward angry facial expressions (Mogg & Bradley, 1999). This study found that HA people were faster to respond to probes occurring in the location of angry faces, but this bias was observed only when the angry faces appeared in the left visual field (LVF) and when the faces were backward masked (Mogg & Bradley, 1999). This raises the intriguing possibility that there is a hemispheric bias for processing angry facial expressions, which is primarily apparent when awareness of the faces is restricted. This is consistent with the finding that neglect patients can process affective facial expressions even with severe problems in allocating attention to their LVF. It seems that affective facial expressions (angry and happy) may be processed automatically (Vuilleumier & Schwartz, 2001) and that attention then gets allocated toward the location of the angry expressions (Mogg & Bradley, 1999). The latter study further suggests that the allocation of attention to threat-related stimuli (angry faces) may be modulated by the anxiety level of the observer, as well as by the degree of conscious awareness of the stimuli.
Several studies have attempted to identify the neural mechanisms that might underlie anxious mood. For example, it has been proposed that anxious arousal is associated with greater activity (e.g., increased blood flow) in right-hemisphere than in left-hemisphere regions, whereas anxious apprehension (worry) is associated with greater left-hemisphere activity (Heller, Nitschke, Etienne, & Miller, 1997). In support of this, it has been found that social phobics show large increases in right-sided prefrontal and parietal activation while anticipating the presentation of a public speech, whereas this pattern was not apparent in low anxious individuals (Davidson, Marshall, Tomarken, & Henriques, 2000). Thus, although the evidence is not consistent across studies, there is a strong indication that the right hemisphere might be particularly sensitive to perceiving the emotionality of a stimulus and that this bias might be somewhat stronger in highly anxious people.
The aim of the present study was to further investigate the role of trait anxiety and the role of conscious awareness as determinants of the selective processing of affective facial expressions. In a previous study, there was an indication that angry faces might attract attention to their location in HA people only when awareness of the faces was restricted (Mogg & Bradley, 1999). The present study attempts to further assess this issue with fearful facial expressions. The first two experiments evaluate whether a selective attentional bias for fearful and/or happy faces is particularly salient when stimuli are presented in the LVF, as opposed to the right visual field (RVF), for HA and LA participants. Experiment 2 evaluates whether an anxiety-related attentional bias for fearful and happy faces occurs when the stimuli are backward masked. Finally, Experiment 3 presents fearful, happy, and neutral facial stimuli to the neglected field of a patient with unilateral neglect. Fearful expressions might be expected to be particularly salient stimuli, since they are powerful indicators of potential danger in the environment. To our knowledge, the present study is the first that reports results for which fearful facial expressions were used with the dot-probe task and with a neglect patient.
EXPERIMENT 1
The aim of Experiment 1 was to examine the influence of trait anxiety on the propensity to selectively attend to emotional facial expressions. A pictorial version of the dot-probe task was used in order to assess whether fearful faces attract visual attention to a greater extent than do happy faces. On each trial of the dot-probe task, a pair of faces was presented for 500 msec, immediately followed by a pair of dots (either vertically or horizontally aligned) in the location previously occupied by one of the presented faces. Using the same task, previous research has shown that angry faces appear to capture spatial attention, in that probes appearing in the location of angry faces are detected faster than probes appearing in the location of neutral or happy facial expressions. Moreover, this selective processing of angry faces was found only for those reporting high levels of trait anxiety (Bradley et al., 1998; Bradley, Mogg, & Millar, 2000).
In the present experiment, fearful and happy faces were paired with neutral faces in order to evaluate whether the facial portrayal of fear would capture spatial attention in HA people in a similar way to that of angry facial expressions. In this experiment, three key questions are addressed. First, do fearful facial expressions capture spatial attention to a greater extent than happy facial expressions do? Second, is this bias to attend to fearful expressions particularly salient when the stimuli are presented in the LVF? Third, is this bias particularly apparent in HA individuals?
Method
Participants
The participants were 32 undergraduate students (20 female, 12 male), who were selected from over 120 volunteers who had completed the trait scale of the Spielberger State Trait Anxiety Inventory (STAI: Spielberger et al., 1983) early in the academic year. The recruitment procedure favored those with high and low scores on this scale, in order to ensure a wide range of emotionality scores within the sample.
Materials and Apparatus
The face stimuli consisted of pairs of photographs of faces. Each face pair consisted of two pictures of different individuals, with one person portraying an emotional expression (fear or happiness) and the other a neutral expression. Half of the emotional faces were fearful, and half were happy. Half of all the faces were male, and half were female. The 24 faces used were selected from the Pictures of Facial Affect developed by Ekman and Friesen (1975). Eight of the faces portrayed fearful expressions; half of these were male (Ekman codes PE3-21, JJ5-13, EM5-21, and WF3-16), and half were female (Ekman codes PF2-30, MO1-23, C1-21, and MF1-26). Eight of the faces portrayed happy expressions; once again, half of these were male (Ekman codes EM4-07, GS1-08, JB1-09, and WF2-12), and half were female (Ekman codes A1-06, JM1-04, MO1-04, and MF1-06). Eight of the faces portrayed a neutral emotional expression; half of these were male (Ekman codes WF2-05, JB1-03, JJ3-04, and PE2-04), and half were female (Ekman codes A1-02, C2-03, MO1-05, and SW3-03). In the ratings reported by Ekman, each of the emotional faces was rated as portraying the “correct” emotional category (fearful or happy) by over 87% of respondents. Ratings were not available for most of the neutral faces. We conducted an additional rating of all 24 faces with 20 independent raters in the same way as Ekman and Friesen. This confirmed that each of the faces was correctly categorized (fearful, happy, and neutral) by at least 16 out of 20 of the raters (i.e., 80%). Thus, we were confident that the facial expressions were perceived as portraying fearful, happy, or neutral emotional expressions.
All of the photographs were digitized and presented side by side at the center of a computer screen. The display size of each photograph was 4 × 6 cm (120 × 180 pixels), and the centers of the faces were 10 cm apart. The probe target was a pair of dots (either : or ..) that was presented on either the left- or the right-hand side of the screen in the location previously occupied by the center of one of the faces (i.e., 5 cm to the right or left of fixation). Sixteen face pairs were created (emotional face + neutral face) so that each pair contained either two male or two female faces and no pair contained two expressions made by the same person.
The experiment was presented on a Power Macintosh (7200/90) computer, with the screen set at a resolution of 832 × 623 pixels. Stimulus presentation and data collection were controlled by PsyScope software (Cohen, MacWhinney, Flatt, & Provost, 1993). Response times were collected by means of a New Micros PsyScope button box, which provides millisecond timing accuracy.
Procedure
The participants were welcomed to a quiet lab and were seated approximately 50 cm in front of a computer screen. Full instructions regarding the dot-probe task were presented on the computer screen. In the main dot-probe task, there were 16 practice trials, followed by 128 experimental trials. In the experimental trials, each of the 16 face pairs was presented 8 times. This meant that each of the emotional faces (fearful and happy) was presented 8 times during the experiment, whereas each of the neutral faces was presented 16 times. Each face appeared equally often in the LVF and the RVF, and the presentation of type of probe (: or ..) and location of probe (LVF or RVF) was counterbalanced across the experiment. Each trial began with a central fixation cross presented at the center of the computer screen for 500 msec. Upon offset of the fixation cross, the face pair was presented for 500 msec. Immediately following the display of the face pair, the probe target was presented in the location of one of the faces, and the participants were required to press one of two keys on the button box to indicate the type of probe (: or ..) as quickly and as accurately as possible. The left index finger was used for one probe (:), whereas the right index finger was used for the other probe (..). The probe was displayed until a response was made, up to a maximum of 2,000 msec. The intertrial interval was 1,000 msec. The computer emitted a 500-Hz bleep if the participant made a mistake. The trials were presented in a new random order for each participant.
Following the dot-probe task, each participant completed the state anxiety scale of the STAI (Spielberger, Gorsuch, Lushene, Vagg, & Jacobs, 1983).
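The trial structure described above lends itself to a compact reconstruction. The sketch below (in Python; it is not the original PsyScope script, and the pair labels and dictionary keys are illustrative assumptions) shows how the 128 experimental trials follow from crossing the 16 face pairs with the counterbalanced factors, using the timing values given in the procedure.

```python
import random

# Timing parameters taken from the procedure (milliseconds).
FIXATION_MS = 500        # central fixation cross
FACE_PAIR_MS = 500       # face-pair display in Experiment 1
PROBE_TIMEOUT_MS = 2000  # probe remains until response, 2,000-msec maximum
ITI_MS = 1000            # intertrial interval

# Sixteen face pairs (emotional face + neutral face); labels are placeholders.
face_pairs = [f"pair_{i:02d}" for i in range(1, 17)]

# Each pair appears 8 times: emotional-face side x probe side x probe type.
trials = [
    {
        "pair": pair,
        "emotional_side": emo_side,  # visual field of the emotional face
        "probe_side": probe_side,    # visual field of the probe
        "probe_type": probe,         # ":" or ".."
    }
    for pair in face_pairs
    for emo_side in ("LVF", "RVF")
    for probe_side in ("LVF", "RVF")
    for probe in (":", "..")
]
assert len(trials) == 128  # the 16 practice trials would precede these

random.shuffle(trials)     # a new random order for each participant
```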
Results
Group characteristics
The participants were preselected on the basis of their STAI trait anxiety scores, which were collected at a general testing session about 6–8 weeks prior to the dot-probe task. Those scoring 45 or above on the STAI trait anxiety scale were allocated to the HA group (n = 16), whereas those scoring 35 or less on the STAI trait anxiety scale were allocated to the LA group (n = 16). The two groups differed significantly on both trait anxiety [t(30) = 9.3, p < .001; HA, M = 48.9, SD = 5.6; LA, M = 31.5, SD = 4.9] and state anxiety [t(30) = 3.0, p < .005; HA, M = 45.7, SD = 10.1; LA, M = 34.2, SD = 11.7] scores.
Dot-probe task
Trials with errors were discarded, as were any reaction times less than 100 msec. Reaction times greater than two and a half standard deviations (SDs) from a participant’s overall mean reaction time were also excluded prior to data analysis. This resulted in the loss of less than 3% of the reaction time data. The mean reaction time data are shown in Table 1. To simplify the analysis, attentional bias scores were calculated from the response time data for each type of emotional face, relative to the neutral face. This bias score was calculated by subtracting the mean response time when the emotional face and the probe were in the same location from the mean response time when the emotional face and the probe were in different locations. In order to test the specific hypotheses regarding the location of the emotional face (LVF or RVF), separate bias scores were calculated for emotional faces appearing in the LVF and the RVF. Positive values of the bias score for fearful faces reflect faster response times when probes appear in the same location as a fearful face (i.e., vigilance for fearful faces), relative to a neutral face, whereas a negative bias score reflects avoidance of fearful faces. Attentional bias scores were similarly calculated for happy faces. The bias scores for each anxiety group and each visual field are shown in Figure 1.
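As a concrete illustration of the trimming and scoring just described, here is a minimal sketch in Python with pandas. It is not the original analysis code, and the column names are assumptions; the exclusion rules (errors, reaction times below 100 msec, reaction times more than 2.5 SDs from the participant's mean) and the bias formula follow the text above.

```python
import pandas as pd

def attentional_bias(trials: pd.DataFrame) -> pd.Series:
    """Compute attentional bias scores for one participant.

    Assumed columns: rt (msec), correct (bool), emotion ('fearful'/'happy'),
    emotional_side ('LVF'/'RVF'), probe_side ('LVF'/'RVF').
    """
    # Discard error trials and anticipations (reaction times < 100 msec).
    df = trials[trials["correct"] & (trials["rt"] >= 100)].copy()

    # Discard reaction times more than 2.5 SDs from the participant's mean.
    m, sd = df["rt"].mean(), df["rt"].std()
    df = df[(df["rt"] - m).abs() <= 2.5 * sd]

    # Bias = mean RT when the probe and the emotional face are in different
    # locations minus mean RT when they share a location; positive = vigilance.
    df["congruent"] = df["probe_side"] == df["emotional_side"]
    return df.groupby(["emotion", "emotional_side"]).apply(
        lambda g: g.loc[~g["congruent"], "rt"].mean()
        - g.loc[g["congruent"], "rt"].mean()
    )

# The result is indexed by (emotion, emotional_side), e.g. ('fearful', 'LVF').
```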
Table 1.
Mean Response Times to Probe Targets (in Milliseconds) in Experiment 1
| Face Type | Face Location | Probe Location | High Trait Anxiety M | High Trait Anxiety SD | Low Trait Anxiety M | Low Trait Anxiety SD |
|---|---|---|---|---|---|---|
| Fearful | left | left | 614 | 118 | 571 | 105 |
| Fearful | left | right | 638 | 112 | 575 | 190 |
| Fearful | right | left | 607 | 96 | 582 | 106 |
| Fearful | right | right | 615 | 122 | 574 | 185 |
| Happy | left | left | 621 | 96 | 582 | 123 |
| Happy | left | right | 592 | 185 | 601 | 107 |
| Happy | right | left | 605 | 190 | 575 | 110 |
| Happy | right | right | 601 | 191 | 578 | 193 |
Note—Face pairs were presented for 500 msec.
Figure 1.
Attentional bias scores (in milliseconds) as a function of trait anxiety group, type of emotional face, and location of emotional face in Experiment 1.
A 2 × 2 × 2 mixed design analysis of variance (ANOVA) was carried out on the attentional bias score data, with one between-subjects factor of trait anxiety (HA vs. LA), and two within-subjects factors of type of emotional face (fearful vs. happy) and location of emotional face (LVF vs. RVF). The only significant effect was an interaction between trait anxiety group, type of emotional face, and location of the emotional face [F(1,30) = 4.97, p < .03]. To examine the results in more detail, separate 2 × 2 ANOVAs were carried out on bias scores for fearful and happy faces. For the fearful faces, there were no significant effects, whereas for the happy faces, there was a trend toward an interaction between trait anxiety group and location of face [F(1,30) = 3.26, p < .08].
Given the specific hypotheses, planned contrasts were used to compare the groups on their bias scores for each type of emotional face in each location (LVF vs. RVF). With one-tailed tests of significance, the only effect was for the HA group to show avoidance of happy faces appearing in the LVF, whereas the LA group showed vigilance for the same faces [−28.56 vs. +19.87; t(30) = 1.83, p < .038]. Attentional bias scores were also tested against zero (0 = no attentional bias). Once again, with a one-tailed test, there was a trend for the HA group to show vigilance for fearful faces appearing in the LVF [+23.7; t(15) = 1.53, p < .07] and to show avoidance of happy faces appearing in the LVF [−28.56; t(15) = 2.03, p < .03]. No trends or significant effects were found for the LA group (all ts < 1). Pearson correlations were calculated between the index scores for fearful and happy faces and the trait and state anxiety questionnaires. No significant correlations were found between the index scores and the subjective measures.
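The planned contrasts above are ordinary one-tailed t tests on the per-participant bias scores. The sketch below shows how such contrasts could be computed with SciPy; it is an illustration under assumed variable names, not the analysis code used in the study.

```python
import numpy as np
from scipy import stats

def planned_contrasts(ha_bias: np.ndarray, la_bias: np.ndarray) -> dict:
    """One-tailed planned contrasts on attentional bias scores (msec).

    ha_bias and la_bias hold one bias score per participant for a single
    condition (e.g., fearful faces in the LVF) in the high and low
    trait-anxiety groups, respectively.
    """
    return {
        # Is the HA bias larger than the LA bias? (use "less" to test avoidance)
        "HA > LA": stats.ttest_ind(ha_bias, la_bias, alternative="greater"),
        # Does each group's bias exceed zero (0 = no attentional bias)?
        "HA > 0": stats.ttest_1samp(ha_bias, 0.0, alternative="greater"),
        "LA > 0": stats.ttest_1samp(la_bias, 0.0, alternative="greater"),
    }
```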
Control experiment
Because the different emotional expressions corresponded to different individuals in the experiment, there is a possibility that the faces may have differed on features other than expression and that these artifacts may have influenced attention independently of expression. Thus, we conducted a control experiment with 6 HA participants (trait anxiety scores above 45). All of the stimuli were presented in a dot-probe paradigm exactly as in the main experiment, except that the faces were inverted. It is well known that inversion of faces destroys holistic processing (Tanaka & Farah, 1993), and therefore, if the emotional expressions were the critical factor in influencing the distribution of attention, no effects should be found with inverted faces. In contrast, if some low-level artifact, such as dimples, were causing the effect, a similar pattern should be observed with upright and inverted faces. In this control experiment, no attentional bias effects were found for fearful faces in the LVF (+0.23) or the RVF (−1.2) or for happy faces in the LVF (+2.2) or the RVF (+0.56; all ts < 1). This suggests that the pattern found in the main experiment was due to the emotionality of the faces.
Discussion
As was expected, in the analysis of attentional bias scores, the visual field of presentation and the type of emotional face did affect response times differentially for HA and LA participants. The results demonstrated that there was a tendency for HA participants to be influenced by emotional faces appearing in the LVF, but not by those appearing in the RVF. Importantly, a control experiment with 6 HA participants demonstrated that emotional faces did not influence the distribution of attention when the faces were inverted. However, when the faces were upright, HA participants tended to be vigilant for fearful faces appearing in the LVF but avoidant of happy faces appearing in the LVF. These results support the notion that the right hemisphere of the brain might be particularly sensitive to the emotionality of a face. This pattern was not observed for LA individuals and was not found for faces appearing in the RVF. Thus, the results of this study give tentative support to the notion that the nature of the stimuli (emotional vs. neutral), the visual field of presentation, and the participants’ level of trait anxiety can all influence the distribution of spatial attention.
Previous research has indicated that threat-related stimuli may have a stronger effect on attentional processes when awareness of the stimuli is restricted. For instance, one study demonstrated that the selective processing of HA people was disrupted by irrelevant threatening words only when these words were backward masked. The same words produced little disruption on the selection task when they were not masked (Fox, 1996). Likewise, it has been shown that the propensity of angry faces to capture visual attention in HA individuals is apparent only when the faces are masked, and not when they are unmasked (Mogg & Bradley, 1999). Thus, it was decided to evaluate the role of emotional expressions and level of trait anxiety in influencing spatial attention under conditions in which the facial stimuli were masked after a brief period.
EXPERIMENT 2
The aim of Experiment 2 was to evaluate whether HA people allocate attention toward the location of masked fearful faces. On the basis of the results of Experiment 1, as well as of previous research (Mogg & Bradley, 1999), the hypothesis was that this propensity should be most apparent when the fearful face was presented in the LVF. The experiment was identical to Experiment 1, except that the pair of faces was masked after 17 msec (one screen refresh) by a mask made up of jumbled face parts. Immediately after the dot-probe task, a gender discrimination task was presented in order to provide an objective measure of awareness of the masked stimuli (Cheesman & Merikle, 1986; Mogg & Bradley, 1999).
Method
Participants
The participants were 36 undergraduate students (26 female, 10 male), who had not participated in Experiment 1. As before, they were selected from over 120 volunteers who had completed the trait scale of the Spielberger State Trait Anxiety Inventory (STAI: Spielberger et al., 1983) early in the academic year. The recruitment procedure favored those who obtained high and low scores on this scale in order to ensure a wide range of emotionality scores within the sample.
Materials and Apparatus
The materials and apparatus were identical to those in Experiment 1. The only difference was that a masking stimulus was presented 17 msec after the face pair was presented. The masking stimulus was constructed by cutting up and randomly reassembling a pair of neutral faces, so that the features of each face were jumbled. The display size of each masking photograph was 4 × 6 cm (120 × 180 pixels), and the centers of the mask stimuli were 10 cm apart.
Procedure
The procedure was very similar to that in Experiment 1. The dot-probe task was presented at the start of the session and consisted of 16 practice trials, followed by 128 experimental trials. The face stimuli and their presentation were exactly the same as those in Experiment 1. At the start of each trial, a fixation cross was presented for 500 msec. The face pair was then presented for 17 msec, immediately followed by the mask pair for 17 msec. The probe (: or ..) was then presented immediately in the location of one of the masks. The probe was displayed until a response was made, up to a maximum of 2,000 msec. The intertrial interval was 1,000 msec. The computer emitted a 500-Hz bleep if the participant made a mistake. The trials were presented in a new random order for each participant.
Following the dot-probe task, the awareness check was given for the masked faces. All of the face pairs used in the main experiment were presented twice, resulting in a total of 32 trials. The face pairs were presented for 17 msec and then were masked for 17 msec, just as in the dot-probe experiment. For each face pair, the participant was asked to indicate the gender of the faces by pressing the left-hand button for female and the right-hand button for male. At the end of the session, each participant completed the state anxiety scale of the STAI.
Results
Group characteristics
The participants were preselected on the basis of their STAI trait anxiety scores, which were collected at a general testing session about 6–8 weeks prior to the dot-probe task. Those scoring 45 or above on the STAI trait anxiety scale were allocated to the HA group (n = 18), whereas those scoring 35 or less on the STAI trait anxiety scale were allocated to the LA group (n = 18). None had participated in Experiment 1. The two groups differed significantly on both trait anxiety [t(34) = 11.1, p < .001; HA, M = 50.4, SD = 6.8; LA, M = 29.4, SD = 4.2] and state anxiety [t(34) = 7.1, p < .001; HA, M = 48.7, SD = 9.7; LA, M = 28.6, SD = 7.1] scores.
Dot-probe task
On the awareness check, the mean percentage of correct scores on the gender discrimination task was 50.1% (SD = 2.1%), which is at chance level. There was no difference between the HA (M = 50.09%, range = 46%–54%) and the LA (M = 50.21%, range = 46%–55%) groups on the awareness check. On the dot-probe task, trials with errors were discarded, as were any reaction times less than 100 msec. Reaction times greater than two and a half SDs from a participant’s overall mean reaction time were also excluded prior to data analysis. This resulted in the loss of less than 3% of the reaction time data. The mean reaction time data are shown in Table 2. To simplify the analysis, attentional bias scores were calculated from the response time data for each type of emotional face, as in Experiment 1. The bias scores for each anxiety group and each visual field are shown in Figure 2.
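As a side note, chance-level performance on a 32-trial, two-alternative gender judgment can be checked per participant with a simple binomial test. The sketch below uses SciPy's binomtest for this purpose; it illustrates the logic of the awareness check rather than the analysis actually reported, which compared mean accuracy with the 50% chance level.

```python
from scipy.stats import binomtest

def at_chance(n_correct: int, n_trials: int = 32, alpha: float = 0.05) -> bool:
    """True if accuracy on the gender-discrimination check does not differ
    reliably from the 50% chance level (two-sided binomial test)."""
    return binomtest(n_correct, n_trials, p=0.5).pvalue > alpha

# Example: 17 of 32 correct (about 53%) is indistinguishable from chance.
print(at_chance(17))  # True
```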
Table 2.
Mean Response Times to Probe Targets (in Milliseconds) in Experiment 2
| Face Type | Face Location | Probe Location | High Trait Anxiety M | High Trait Anxiety SD | Low Trait Anxiety M | Low Trait Anxiety SD |
|---|---|---|---|---|---|---|
| Fearful | left | left | 611 | 80 | 571 | 105 |
| Fearful | left | right | 651 | 87 | 575 | 90 |
| Fearful | right | left | 621 | 70 | 582 | 106 |
| Fearful | right | right | 606 | 68 | 574 | 85 |
| Happy | left | left | 622 | 67 | 597 | 186 |
| Happy | left | right | 615 | 62 | 604 | 177 |
| Happy | right | left | 615 | 69 | 608 | 182 |
| Happy | right | right | 621 | 72 | 594 | 185 |
Note—Face pairs were presented for 17 msec and then masked.
Figure 2.
Attentional bias scores (in milliseconds) as a function of trait anxiety group, type of emotional face, and location of emotional face in Experiment 2.
A 2 × 2 × 2 mixed design ANOVA was carried out on the attentional bias score data with one between-subjects factor of trait anxiety (HA vs. LA) and two within-subjects factors of type of emotional face (fearful vs. happy) and location of emotional face (LVF vs. RVF). There were significant interactions between trait anxiety group and type of emotional face [F(1,34) = 17.6, p < .001], and between trait anxiety group and face location [F(1,34) = 6.4, p < .016]. The predicted interaction between trait anxiety group, type of emotional face, and location of emotional face almost reached significance [F(1,34) = 3.43, p < .073].
Given the specific hypotheses, planned contrasts were used to compare the groups on their bias scores for each type of emotional face in each location (LVF and RVF). With one-tailed tests of significance, it was found that the HA group was more vigilant than the LA group for fearful faces in both the LVF [+40.4 vs. −5.7; t(34) = 4.6, p < .001] and the RVF [+15.8 vs. +1.4; t(34) = 2.5, p < .01]. No between-group differences were found for the index scores for happy faces. Attentional bias scores were also tested against zero (0 = no attentional bias) for each anxiety group separately. Once again, with a one-tailed test, the HA group demonstrated vigilance for fearful faces appearing in the LVF [+40.4; t(17) = 5.7, p < .001] and in the RVF [+15.8; t(17) = 4.8, p < .001]. Further analysis showed that the vigilance for fearful faces was greater when the emotional face was presented in the LVF, relative to the RVF, for this group [t(17) = 3.3, p < .004]. The only effect for the LA participants was a vigilance for happy faces appearing in the RVF [+14.9; t(17) = 2.1, p < .026], which was not predicted. Finally, Pearson correlations were calculated between the trait and state anxiety scales of the STAI and the index scores for fearful and happy faces. This showed that the index for fearful faces was correlated with both trait anxiety (r = .53, p < .001) and state anxiety (r = .37, p < .025), whereas the index for happy faces did not correlate with either of the subjective measures.
Comparison of Experiments 1 and 2
The data from Experiments 1 and 2 were combined in order to assess whether masked fearful faces produced a stronger attentional capture effect in HA people, relative to unmasked faces. These combined data were analyzed in a 2 × 2 × 2 × 2 mixed design ANOVA, with attentional bias scores as the dependent variable. There were two between-subjects factors of masking (masked vs. unmasked) and trait anxiety (HA vs. LA) and two within-subjects factors of type of emotional face (fearful vs. happy) and location of emotional face (LVF vs. RVF). As was expected, there was a significant interaction between trait anxiety, type of emotional face, and location of the emotional face [F(1,64) = 7.9, p < .006]. However, the critical four-way interaction between these three factors and masking failed to reach conventional levels of significance [F(1,64) = 2.6, p < .10]. Because of the a priori predictions, one-tailed t tests were computed on the attentional bias index for fearful and happy facial expressions. For HA people, the attentional bias index was larger for masked fearful (+33.4 msec) than for unmasked fearful (+4.1 msec) faces [t(32) = 1.64, p < .055], whereas there was no difference for the happy faces (−20.3 vs. −9.6 msec; t < 1). For the LA participants, there was no difference for either the fearful or the happy index scores between the masked and the unmasked conditions.
Discussion
The results of Experiment 2 confirmed that fearful faces had a greater influence on the distribution of spatial attention than did happy faces. Moreover, this result was apparent only for individuals with high levels of trait anxiety and was stronger when the emotional face was presented in the LVF. Thus, under conditions in which the participants were not aware of what stimuli were being presented, an anxiety-related propensity to selectively attend to fearful facial expressions was found. This is very similar to the results previously reported for angry facial expressions (Mogg & Bradley, 1999). It is particularly interesting that Mogg and Bradley also found that selective attention to angry faces was apparent only when the faces were presented in the LVF. The present results extend these findings to facial expressions of fear and suggest that the right hemisphere might be particularly sensitive to negative emotional expressions (anger and fear), but not to happy facial expressions. Moreover, in the present study, a strong correlation was found between the attentional bias index for masked facial expressions of fear and self-report measures of anxiety. In contrast, there was no correlation when the emotional facial expressions were not masked (Experiment 1). Unfortunately, the four-way interaction between masking, trait anxiety, type of emotional expression, and location of face did not reach significance, although the trend (p < .10) was in the right direction. In particular, the attentional bias index for fearful faces was higher for the masked, relative to the unmasked, conditions for HA participants. This supports previous research showing that masking emotional stimuli sometimes leads to stronger anxiety-related effects on spatial attention (Fox, 1996; Mogg & Bradley, 1999). Taken together, the present results, alongside previous research, suggest that restricting the awareness of emotional stimuli may lead to stronger attentional capture. In the final experiment, we further explored the question of awareness by presenting affective and neutral facial expressions to a patient with right parietal lobe damage who demonstrated extinction of objects presented in the LVF. This allowed for assessment of whether the processing of emotional facial expressions might be less dependent on attentional processing.
EXPERIMENT 3
Many studies have demonstrated that neglected stimuli often seem to receive extensive processing despite the patient’s reporting that he or she sees nothing. In one study, for example, 4 patients with left neglect and 1 patient with hemianopia were presented with a lexical decision task at fixation. The letter string was preceded 600 msec earlier by two line drawings (one real object, one scrambled object) on either side of fixation (McGlinchey-Berroth, Milberg, Verfaellie, Alexander, & Kilduff, 1993). The critical finding was that lexical decisions were faster if the preceding line drawing was semantically related to the target word for the neglect patients, regardless of the side of presentation of the meaningful drawing. In other words, even when the drawing appeared in the neglected field, this “unseen” object still produced substantial priming. No priming effect was found for the hemianopic patient. This result suggests that despite being neglected, the unattended object received sufficient processing to influence future responding. More recent studies have shown that real objects or faces produce less extinction than do scrambled objects or scrambled faces (Vuilleumier, 2000; Ward & Goodrich, 1996). These studies support the notion that substantial analysis can take place in the absence of subjective awareness of the stimuli and that attention may then be biased to select more meaningful stimuli. Of more direct relevance for the present study is the recent report that faces with an angry or a happy expression resulted in less extinction than did faces with a neutral expression (Vuilleumier & Schwartz, 2001). In the present experiment, photographs of faces (fearful, happy, and neutral expressions) and of fruits (apple, orange, and banana) were presented to an individual who had experienced a right parietal cerebrovascular accident (CVA) approximately 1 year prior to testing. This patient, J.B., presented with mild unilateral neglect on the left side, and extinction for objects appearing in the LVF was easily demonstrable. It is also of interest that J.B. received some treatment for problems with anxiety and worry related to his illness. Therefore, we can expect that he was relatively highly anxious. The prediction was that faces would produce less contralesional extinction on bilateral trials than would fruits. Second, it was expected that fearful expressions would produce less contralesional extinction on bilateral trials than would either happy or neutral facial expressions.
Methods
Participants
J.B. was a 74-year-old right-handed man who had suffered a CVA 13 months prior to testing (in June 1995). The CVA had left him with extensive damage in the right posterior parietal region. On testing, he was alert and cooperative and had intact visual fields on both sides, with corrected-to-normal vision. He showed mild signs of spatial neglect on a standard test of letter cancellation, in which he obtained 60/60 on the right side, as compared with 45/60 on the left side. Left-side extinction was easily demonstrated on bilateral simultaneous stimulation for visual stimuli. He obtained 36 items correct with 14 errors on the National Adult Reading Test, which gave him an estimated IQ of 116. J.B. scored 33 (maximum = 50) on the Taylor Manifest Anxiety Scale (Taylor, 1953), a screening instrument; this score indicates a high level of worry and anxiety. He was subsequently seen by a hospital psychiatrist, who determined that he was suffering distress, depression, and worry primarily related to his brain injury.
Materials and Procedure
Three male faces were selected from the Pictures of Facial Affect developed by Ekman and Friesen (1975). One of the faces portrayed a fearful expression (PE3-21), one portrayed a happy expression (EM4-07), and one portrayed a neutral emotional expression (WF2-05). These faces were digitized and presented as black-and-white photographs on a Macintosh portable computer. The display size of each photograph was 4 × 6 cm (120 × 180 pixels), and the centers of the faces were 10 cm apart on bilateral trials. In addition, three black-and-white photographs of an apple, an orange, and a banana were digitized and also had display sizes of 4 × 6 cm, with the centers of the fruits 10 cm apart on bilateral trials. The experiment consisted of 120 unilateral and 180 bilateral trials. Each of the six stimuli (three faces, three fruits) appeared equally often on unilateral and bilateral trials, and the computer presented the trials in a random order. All the possible trials are presented in Figure 3. Testing took place across two sessions about 1 week apart. The patient sat about 50 cm in front of the computer screen. Each trial began with a fixation cross at the center of the screen for 500 msec, followed by a stimulus presented in the left, the right, or both hemifields (5.7° of visual angle away from fixation) for 400 msec. Previous calibration had shown that J.B. could do the task with a 400-msec presentation time and that clear evidence of extinction was obtained with this exposure. The task simply required J.B. to say whether a face, a fruit, or both had been presented on each trial. He was told that two items would often appear together (one on the left and one on the right). When J.B. gave his response, this was entered into the computer by the experimenter, and this was followed by a 1,000-msec black screen until the next trial began.
Figure 3.
The diagram shows all possible unilateral and bilateral trials presented to patient J.B. in Experiment 3. The stimuli were black-and-white photographs of faces (fearful, happy, and neutral) and fruits (apple, orange, and banana). The trials were presented in a totally randomized order. LVF, left visual field; RVF, right visual field.
Results
Table 3 shows the number of stimuli missed in each condition for Patient J.B. The results show that J.B. missed relatively few contralesional stimuli in unilateral trials and that this was comparable for photographs of faces (20%) and photographs of fruits (23%). However, clear contralesional extinction was observed on bilateral trials. As can be seen, J.B. missed approximately 71% of LVF stimuli on bilateral trials, but only 2% in the RVF on bilateral trials. Of more interest, the amount of extinction observed for photographs of faces (56%) seemed to be considerably less than that observed for photographs of fruits (86%). With respect just to the face stimuli, there was some indication that the degree of extinction was less for fearful faces (43%), relative to neutral (73%) and happy (53%) faces. The data were analyzed by means of a 2 × 2 × 2 ANOVA, with type of trial (unilateral vs. bilateral), location of stimulus (LVF vs. RVF), and type of stimulus (faces vs. fruits) as “between-subjects” factors. Each individual trial was treated as a separate subject in order to allow for parametric analyses. The results showed a main effect of type of trial [F(1,472) = 60.0, p < .001], such that the percentage of stimuli missed was lower for unilateral (11%) than for bilateral (37%) trials. As was expected, the percentage of stimuli missed was much higher in the LVF (59%) than in the RVF [2%; F(1,472) = 184.5, p < .001]. Of more interest, it was also found that the percentage of faces missed (24%) was lower than the percentage of fruits missed [37%; F(1,472) = 7.6, p < .006]. The type of trial × location of stimuli interaction was also significant [F(1,472) = 50.2, p < .001], confirming that the LVF omissions occurred mainly on bilateral trials. There was also a type of trial × type of stimulus interaction, such that the difference between faces and fruits occurred only on bilateral trials.
Table 3.
Mean Number of Omissions for Each Condition for Patient J.B. in Experiment 3
| Type of Stimulus | Unilateral LVF | Unilateral RVF | Bilateral LVF | Bilateral RVF |
|---|---|---|---|---|
| Fearful face | 1 | 0 | 13 | 0 |
| Happy face | 3 | 0 | 16 | 0 |
| Neutral face | 2 | 0 | 22 | 0 |
| Apple | 3 | 0 | 26 | 2 |
| Orange | 1 | 0 | 27 | 0 |
| Banana | 3 | 0 | 24 | 2 |
| No. of trials per condition | 10 | 10 | 30 | 30 |
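The extinction percentages quoted above follow directly from the omission counts in Table 3, with 30 LVF presentations per stimulus on bilateral trials (90 per stimulus category). A minimal sketch of this arithmetic:

```python
def extinction_rate(omissions: int, presentations: int) -> float:
    """Percentage of contralesional (LVF) stimuli missed on bilateral trials."""
    return 100.0 * omissions / presentations

# Omission counts taken from Table 3 (bilateral trials, LVF column).
print(round(extinction_rate(13, 30)))            # fearful face -> 43
print(round(extinction_rate(16, 30)))            # happy face   -> 53
print(round(extinction_rate(26 + 27 + 24, 90)))  # all fruits   -> 86
```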
The next analyses considered the face stimuli only on bilateral trials in order to test the a priori prediction that fearful faces would be extinguished to less of an extent than neutral or happy faces. A 2 × 3 ANOVA for location of face (LVF vs. RVF) and type of face (fearful, happy, neutral) showed a significant main effect for location of the face [F(1,174) = 121.5, p < .001], whereas the main effect for type of face was close to significance [F(2,174) = 2.9, p < .055]. Of more importance, the location × type of face interaction was also almost significant [F(2,174) = 2.9, p < .055]. Planned contrasts showed that for the LVF (on bilateral trials), there was substantially more extinction for neutral faces (73%) than for fearful faces [43%; t(58) = 2.43, p < .009, one-tailed], or for happy faces [53%; t(58) = 1.6, p < .055]. There was no difference in the magnitude of extinction between fearful and happy faces.
Discussion
The results of this single case study confirm recent findings of reduced extinction for faces, relative to shapes or scrambled faces (Vuilleumier, 2000; Ward & Goodrich, 1996). Moreover, the results also confirm the hypothesis that emotionally salient faces produce less extinction than do neutral facial expressions (Vuilleumier & Schwartz, 2001). In Vuilleumier and Schwartz’s study, 3 patients with chronic left neglect and visual extinction were presented with schematic angry, happy, and neutral faces, as well as with simple shapes. It was found that the faces resulted in less extinction than did the shapes on bilateral trials for all of the patients (about 39% and 72%, respectively). It was also found that the angry and happy facial expressions produced less extinction (about 33% and 25% for angry and happy faces, respectively) than did the neutral faces (about 58%). The present study was similar in showing less extinction on bilateral trials for photographs of faces (28%), relative to photographs of fruits (45%). When we looked just at the facial stimuli, J.B. was less likely to extinguish fearful (43%) or happy (53%) facial expressions in the LVF on bilateral trials, as compared with neutral (73%) facial expressions. Thus, these results support the previous finding that emotional facial expressions are extinguished less than neutral faces (Vuilleumier & Schwartz, 2001). In the present study, we used photographs of real facial expressions, rather than schematic faces, as in the previous study. In addition, the present results extend the previous findings to the emotional expression of fear. The expectation was that fearful expressions would produce less extinction than would happy expressions. Although the trend was in the right direction (43% vs. 53% for fearful and happy expressions, respectively), this was far from significant. Thus, as in the Vuilleumier and Schwartz study, similar levels of extinction were observed for the negative and the positive emotional expressions. Future research will be needed to investigate in more detail whether negative emotional expressions (e.g., anger, fear) are extinguished to a lesser extent than positive emotional expressions (e.g., happiness) for patients with neglect and extinction. The results of behavioral studies would suggest that negative expressions should have a stronger impact on attentional processing than do positive emotional expressions (Fox et al., 2001; Mogg & Bradley, 1999).
GENERAL DISCUSSION
A clear implication of the present results is that the anxiety level reported by a participant can have a profound effect on the distribution of spatial attention to behaviorally relevant stimuli. In Experiments 1 and 2, attentional bias for fearful faces was observed only for those reporting relatively high levels of trait anxiety. This supports a growing research literature in cognition and emotion (Bradley et al., 1998; Bradley et al., 2000; Fox et al., 2001; Mogg & Bradley, 1999) that has not generally been acknowledged in cognitive neuroscience. The present research confirms that although the emotionality of the stimuli used is important in determining the distribution of spatial attention, the emotionality of the individual participating in the task is also of critical importance. An interesting aspect of the present study is that it demonstrates an LVF bias for processing fearful faces in high anxious individuals, but not in low anxious individuals. When face pairs were presented for 500 msec (Experiment 1), HA individuals demonstrated an attentional bias toward fearful expressions and away from happy expressions, but only when the emotional faces were presented in the LVF. When the face pairs were presented for just 17 msec and then backward masked, there was again evidence for an anxiety-related bias to emotional expressions, but this time only for fearful expressions. HA individuals were vigilant for fearful faces appearing in both the LVF and the RVF, but the magnitude of the bias was greater when the emotional faces were presented in the LVF. The present results add to the evidence that the right hemisphere may be especially, but not exclusively, sensitive to the presence of behaviorally relevant stimuli that signal potential danger to the organism. It is of particular interest to determine whether a right hemisphere bias might be especially related to high levels of anxiety. Further evidence consistent with this notion is the previous finding that masked angry faces captured attentional resources in HA individuals, but only when they appeared in the LVF (Mogg & Bradley, 1999).
Is there any neurological evidence that the right hemisphere might be especially sensitive to fear-relevant stimuli in anxious individuals? A brief overview of the literature indicates a rather confusing picture. For instance, some research suggests that nonclinical state anxiety is associated with increased right-hemisphere activity, as measured by regional blood flow (Reivich, Gur, & Alavi, 1983), whereas another study has found that trait anxiety was associated with greater left-hemisphere activation (Tucker, Antes, Stenslie, & Barnhardt, 1978). Consistent with this, higher relative metabolism has been found in the left inferior frontal gyrus for patients with generalized anxiety disorder, as compared with matched controls (Wu et al., 1991). However, more recent studies have proposed that if anxious arousal (as presumably measured by state anxiety) is distinguished from anxious apprehension (as measured by trait anxiety), the results become more coherent, with relatively greater right-hemisphere activation in anxious arousal and greater left-hemisphere activation in anxious apprehension (Heller & Nitschke, 1998; Heller et al., 1997). Although state anxiety was not directly manipulated in the present study, the results seem inconsistent with this model, in that trait anxiety (apprehension) was a better predictor of attentional bias to fearful faces than was state anxiety (arousal), and this effect was stronger for faces presented in the LVF (i.e., a right-hemisphere bias). Likewise, a recent study has found evidence for greater right-hemisphere involvement in emotional processing for a group of people high in anxious apprehension, which also does not support Heller et al.’s model (Compton, Heller, Banich, Palmieri, & Miller, 2000). Moreover, another study has shown that socially phobic individuals show increased right-hemisphere activation when they are waiting for a potentially stressful event (Davidson et al., 2000). Thus, it may well be that the pattern of brain activity is related to an interaction between trait and state anxiety. There is clearly a need to conduct more research on the neural mechanisms underlying anxiety. In particular, it would be interesting to establish the neural mechanisms that are engaged during attentional biases toward fearful faces in HA and LA people under conditions of high and low stress. One mechanism by which selective attentional biases might work is that the processing of a fear-relevant stimulus (e.g., a fearful or angry expression) might increase the level of autonomic arousal in anxious individuals, which might, in turn, lead to activation of certain regions of the right hemisphere (see Davidson, 1998). Thus, this mechanism would result in an anxiety-related attentional bias to fear-relevant stimuli presented in the LVF, as was found in the present study. It should be noted, however, that some studies have found larger anxiety-related attentional bias effects for threatening faces presented to the RVF, relative to the LVF (Bradley et al., 1998; Bradley et al., 2000). Further work is clearly required to determine the role of each hemisphere in processing emotional expressions, especially in relation to different types of anxiety (Compton et al., 2000). The present results suggest that the nature of the stimuli and the anxiety level of the individual combine to determine the distribution of spatial attention.
One potential avenue of research would be to evaluate the responsiveness of the amygdala in HA individuals, since it is known that the amygdala can influence the distribution of spatial attention, as well as modulating levels of arousal (Armony & LeDoux, 2000). The present research suggests that the distribution of spatial attention is the result of a complex interplay between the nature of the stimuli and the anxiety level of the participant.
A second issue addressed in this paper is the anxiety-related attentional bias toward fearful faces that were unseen. It is of particular interest that the attentional bias toward masked stimuli seems to be somewhat stronger than that observed with unmasked stimuli. Although the critical interaction failed to reach significance in the present study, it was nevertheless clear that fearful faces had a stronger effect on attention in the HA group when the faces were masked. For instance, attentional bias was weak and was not predicted by trait or state anxiety scores when stimuli were unmasked (Experiment 1). However, when the stimuli were masked, trait anxiety was a significant predictor of attentional bias for fearful faces (Experiment 2). Likewise, it has been found that when masked stimuli were presented near threshold, so that participants were aware of stimuli being presented some of the time, the attentional bias for angry faces disappeared (Mogg & Bradley, 1999). Similarly, in an earlier study, differential processing of negative emotional words in HA individuals was observed only when the words were masked, and not when they were clearly visible (Fox, 1996). Recent brain-imaging research has also found that masked angry facial expressions produce a stronger activation of the right amygdala, whereas unmasked angry facial expressions produce greater activation of the left amygdala (Vuilleumier, Armony, Driver, & Dolan, 2001). Similarly, it has been found that when aversively conditioned angry facial expressions are backward masked, the right amygdala is most active, with little activity in the left amygdala (Morris et al., 1998, 1999). Morris et al. (1999) suggest that there is a subcortical pathway involving the superior colliculus and pulvinar that provides a route for processing behaviorally relevant unseen visual events. This route seems to be specific to negative emotional expressions, in that it was not activated when participants viewed happy facial expressions. Thus, it seems reasonable to speculate that the right hemisphere may be especially responsive to emotional faces and that the pulvinar route gives an additional advantage to unseen stimuli. An important avenue for future research is to establish why unseen emotional stimuli may have a stronger effect on the distribution of spatial attention than do clearly visible stimuli and why anxiety may modulate this effect. One possibility is that unseen stimuli are processed by the amygdala, which then modulates cortical processing to focus on the potentially dangerous information in more detail. This internal signal may be especially strong for anxious individuals, who have recently been found to show a stronger attentional bias toward internal threatening information, relative to external information (Stegen, van Diest, van de Woestijne, & van den Bergh, 2001).
The results of the first two experiments demonstrate that level of trait anxiety and degree of awareness of visual stimuli may both be important determinants of attentional bias toward fearful facial expressions. As was noted previously, the backward masking of visual stimuli may correspond fairly closely to the preattentive conditions observed for patients with neglect and extinction. In Experiment 3, a patient with right parietal lobe damage, who exhibited left-sided neglect and extinction, was studied. It is of interest that this individual had received treatment for problems with anxiety and worry, and it is therefore reasonable to assume that he was a high anxious individual. The results of Experiment 3 demonstrated that significantly less extinction was observed in the LVF on bilateral trials for fearful and happy facial expressions, as compared with neutral expressions. This confirms previous findings that the automatic processing of emotional facial expressions can be revealed by the modulation of extinction for faces presented in the LVF (Vuilleumier & Schwartz, 2001). The present study is the first demonstration of this effect for the emotional expression of fear. It is of interest to note, however, that no difference was found in the amount of extinction observed for fearful and happy facial expressions. Thus, it seems that emotionally salient stimuli produce less extinction regardless of the valence of the faces (Vuilleumier & Schwartz, 2001; present Experiment 3). In contrast, behaviorally relevant stimuli that signal potential threat appear to be especially effective in influencing the distribution of visual attention in high anxious people. It is interesting to speculate that emotionally relevant stimuli (positive and negative) may be processed automatically (as in the neglect patient) but that attention is then allocated only to the potentially threatening stimuli (as in the anxiety-related bias for fearful facial expressions). This is consistent with the recent finding that, for anxious individuals, angry facial expressions tend to hold attention at their location, whereas happy facial expressions do not (Fox et al., 2001). In summary, the present study suggests that the processing of affective facial expressions does not depend on attentive processes and that, in anxious individuals, this preattentive processing gives rise to a selective bias toward behaviorally relevant stimuli that appears to be restricted to fear-relevant stimuli.
REFERENCES
- Armony J, LeDoux JE. How danger is encoded: Toward a systems, cellular, and computational understanding of cognitive-emotional interactions in fear. In: Gazzaniga MS, editor. The new cognitive neurosciences. 2nd ed. MIT Press; Cambridge, MA: 2000. pp. 1067–1079.
- Bradley BP, Mogg K, Falla SJ, Hamilton LR. Attentional bias for threatening facial expressions in anxiety: Manipulation of stimulus duration. Cognition & Emotion. 1998;6:737–753.
- Bradley BP, Mogg K, Millar NH. Covert and overt orienting of attention to emotional faces in anxiety. Cognition & Emotion. 2000;14:789–808.
- Byrne A, Eysenck MW. Trait anxiety, anxious mood, and threat detection. Cognition & Emotion. 1995;9:549–562.
- Cheesman J, Merikle PM. Distinguishing conscious from unconscious perceptual processes. Canadian Journal of Psychology. 1986;40:343–367. doi: 10.1037/h0080103.
- Cohen J, MacWhinney B, Flatt M, Provost J. PsyScope: An interactive graphic system for designing and controlling experiments in the psychology laboratory using Macintosh computers. Behavior Research Methods, Instruments, & Computers. 1993;25:257–271.
- Compton RJ, Heller W, Banich MT, Palmieri PA, Miller GA. Responding to threat: Hemispheric asymmetries and interhemispheric division of input. Neuropsychology. 2000;14:254–264. doi: 10.1037//0894-4105.14.2.254.
- Davidson RJ. Affective style and affective disorders: Perspectives from affective neuroscience. Cognition & Emotion. 1998;12:307–330.
- Davidson RJ, Marshall JR, Tomarken AJ, Henriques JB. While a phobic waits: Regional brain electrical and autonomic activity in social phobics during anticipation of public speaking. Biological Psychiatry. 2000;47:85–95. doi: 10.1016/s0006-3223(99)00222-x.
- Dolan RJ. Emotional processing in the human brain revealed through functional neuroimaging. In: Gazzaniga MS, editor. The new cognitive neurosciences. 2nd ed. MIT Press; Cambridge, MA: 2000. pp. 1115–1131.
- Eastwood JD, Smilek D, Merikle PM. Differential attentional guidance by unattended faces expressing positive and negative emotion. Perception & Psychophysics. 2001;63:1004–1013. doi: 10.3758/bf03194519.
- Ekman P, Friesen WV. Pictures of facial affect. Consulting Psychologists Press; Palo Alto, CA: 1975.
- Esteves F, Parra C, Dimberg U, Ohman A. Nonconscious associative learning: Pavlovian conditioning of skin conductance responses to masked fear-relevant stimuli. Psychophysiology. 1994;31:375–385. doi: 10.1111/j.1469-8986.1994.tb02446.x.
- Fox E. Selective processing of threatening words in anxiety: The role of awareness. Cognition & Emotion. 1996;10:449–480.
- Fox E, Lester V, Russo R, Bowles RJ, Pichler A, Dutton K. Facial expressions of emotion: Are angry faces detected more efficiently? Cognition & Emotion. 2000;14:61–92. doi: 10.1080/026999300378996.
- Fox E, Russo R, Bowles RJ, Dutton K. Do threatening stimuli draw or hold visual attention in subclinical anxiety? Journal of Experimental Psychology: General. 2001;130:681–700.
- Hansen CH, Hansen RD. Finding the face in the crowd: An anger superiority effect. Journal of Personality & Social Psychology. 1988;54:917–924. doi: 10.1037//0022-3514.54.6.917.
- Heller W, Nitschke JB. The puzzle of regional brain activity in depression and anxiety: The importance of subtypes and comorbidity. Cognition & Emotion. 1998;12:421–447.
- Heller W, Nitschke JB, Etienne MA, Miller GA. Patterns of regional brain activity differentiate types of anxiety. Journal of Abnormal Psychology. 1997;106:376–385. doi: 10.1037//0021-843x.106.3.376.
- LeDoux JE. The emotional brain: The mysterious underpinnings of emotional life. Simon & Schuster; New York: 1996.
- McGlinchey-Berroth R, Milberg WP, Verfaellie M, Alexander M, Kilduff PT. Semantic processing in the neglected visual field: Evidence from a lexical decision task. Cognitive Neuropsychology. 1993;10:79–108.
- Mogg K, Bradley BP. Orienting of attention to threatening facial expressions presented under conditions of restricted awareness. Cognition & Emotion. 1999;13:713–740.
- Morris JS, Frith CD, Perrett DI, Rowland D, Young AW, Calder AJ, Dolan RJ. A differential neural response in the human amygdala to fearful and happy facial expressions. Nature. 1996;383:812–815. doi: 10.1038/383812a0.
- Morris JS, Ohman A, Dolan RJ. Conscious and unconscious emotional learning in the human amygdala. Nature. 1998;393:467–470. doi: 10.1038/30976.
- Morris JS, Ohman A, Dolan RJ. A subcortical pathway to the right amygdala mediating "unseen" fear. Proceedings of the National Academy of Sciences. 1999;96:1680–1685. doi: 10.1073/pnas.96.4.1680.
- Nothdurft HC. Faces and facial expressions do not pop out. Perception. 1993;22:1287–1298. doi: 10.1068/p221287.
- Ohman A, Lundqvist D, Esteves F. The face in the crowd revisited: A threat advantage with schematic stimuli. Journal of Personality & Social Psychology. 2001;80:381–396. doi: 10.1037/0022-3514.80.3.381.
- Rafal R. Visual attention: Converging operations from neurology and psychology. In: Kramer AF, Coles MGH, Logan GD, editors. Converging operations in the study of visual selective attention. American Psychological Association; Washington, DC: 1996. pp. 139–192.
- Reivich M, Gur R, Alavi A. Positron emission tomographic studies of sensory stimuli, cognitive processes and anxiety. Human Neurobiology. 1983;2:25–33.
- Spielberger CD, Gorsuch RL, Lushene R, Vagg PR, Jacobs GA. Manual for the State–Trait Anxiety Inventory. Consulting Psychologists Press; Palo Alto, CA: 1983.
- Stegen K, van Diest I, van de Woestijne KP, van den Bergh O. Do persons with negative affect have an attentional bias to bodily sensations? Cognition & Emotion. 2001;15:813–829.
- Tanaka JW, Farah MJ. Parts and wholes in face recognition. Quarterly Journal of Experimental Psychology. 1993;46A:225–245. doi: 10.1080/14640749308401045.
- Taylor JA. A personality scale of manifest anxiety. Journal of Abnormal & Social Psychology. 1953;48:285–290. doi: 10.1037/h0056264.
- Tucker DM, Antes JR, Stenslie CE, Barnhardt TM. Anxiety and lateral cerebral function. Journal of Abnormal Psychology. 1978;87:380–383.
- Vallar G. The anatomical basis of spatial neglect in humans. In: Robertson IH, Marshall JC, editors. Unilateral neglect: Clinical and experimental studies. Erlbaum; Hillsdale, NJ: 1993. pp. 27–62.
- Vuilleumier P. Faces call for attention: Evidence from patients with visual extinction. Neuropsychologia. 2000;38:693–700. doi: 10.1016/s0028-3932(99)00107-4.
- Vuilleumier P, Armony JL, Driver J, Dolan RJ. Distinct effects of attention and emotion on face processing in the human brain: An event-related fMRI study. Neuron. 2001;30:829–841. doi: 10.1016/s0896-6273(01)00328-2.
- Vuilleumier P, Schwartz S. Emotional facial expressions capture attention. Neurology. 2001;56:153–158. doi: 10.1212/wnl.56.2.153.
- Ward R, Goodrich S. Differences between objects and non-objects in visual extinction: A competition for attention. Psychological Science. 1996;7:177–180.
- Whalen PJ, Rauch SL, Etcoff NL, McInerney SC, Lee MB, Jenike MA. Masked presentations of emotional facial expressions modulate amygdala activity without explicit knowledge. Journal of Neuroscience. 1998;18:411–418. doi: 10.1523/JNEUROSCI.18-01-00411.1998.
- Wu JC, Buchsbaum MS, Hershey TG, Hazlett E, Sicotte N, Johnson JC. PET in generalized anxiety disorder. Biological Psychiatry. 1991;29:1181–1199. doi: 10.1016/0006-3223(91)90326-h.