Author manuscript; available in PMC: 2012 Jan 1.
Published in final edited form as: Cogn Emot. 2011 Jan;25(1):73–88. doi: 10.1080/02699931003672381

Emotionally meaningful targets enhance orienting triggered by a fearful gazing face

Chris Kelland Friesen, Kimberly M. Halvorson, Reiko Graham
PMCID: PMC3026354  NIHMSID: NIHMS179050  PMID: 21278907

Abstract

Studies investigating the effect of emotional expression on spatial orienting to a gazed-at location have produced mixed results. The present study investigated the role of affective context in the integration of emotion processing and gaze-triggered orienting. In three experiments, a face gazed nonpredictively to the left or right, and then its expression became fearful or happy. Participants identified (Experiments 1 and 2) or detected (Experiment 3) a peripheral target presented 225 or 525 ms after the gaze cue onset. In Experiments 1 and 3 the targets were either threatening (a snarling dog) or nonthreatening (a smiling baby); in Experiment 2 the targets were neutral. With emotionally-valenced targets, the gaze-cuing effect was larger when the face was fearful than when it was happy, but only at the longer cue-target interval. With neutral targets, there was no interaction between gaze and expression. Our results indicate that a meaningful context optimizes attentional integration of gaze and expression information.

Keywords: gaze direction, emotional expression, visual attention, affective context


Dynamic information from faces provides us with a rich source of data about our environment. For example, shifts in other people’s direction of gaze tell us where they are attending and can serve to direct our attention to potentially important objects or events that might be outside our current line of sight. Moreover, other people’s facial expressions can indicate how they feel about the object or event to which they are attending. Being sensitive to these social visual cues should help us respond more efficiently to events when it is advantageous to react quickly.

There is ample evidence that both gaze direction and emotional expression are processed quickly and automatically. Numerous attentional cuing studies have demonstrated that gaze direction cues can trigger an automatic shift of spatial attention to a gazed-at location (e.g., Driver et al., 1999; Friesen & Kingstone, 1998; for a review, see Frischen, Bayliss, & Tipper, 2007). This orienting effect occurs even when the gaze direction cues are not predictive of target location, and when the interval between the onset of the gaze cue and the onset of the target (stimulus onset asynchrony, SOA) is very short. Similarly, many studies have shown that facial emotional expression is processed quickly and automatically (e.g., Batty & Taylor, 2003; Eimer & Holmes, 2007; for a review, see Vuilleumier & Pourtois, 2007).

An important outstanding question is when and how gaze direction information and facial expression information are integrated. It seems reasonable to expect that humans would have the ability to combine these two sources of information for optimal processing of social facial signals. In particular, seeing another person looking off to the side with a frightened expression should enhance one’s natural tendency to shift attention to the gazed-at location because the combination of averted gaze and a fearful expression typically indicates the location of a potential threat in the environment. Consistent with this idea, the cognitive neuroscience literature suggests that gaze processing and expression processing are subserved, to some extent, by common brain areas. Specifically, there is evidence that the superior temporal sulcus (STS) is involved in processing both gaze direction (Engell & Haxby, 2007; Hadjikhani, Hoge, Snyder, & de Gelder, 2008; Hoffman, Gothard, Schmid, & Logothetis, 2007; Hoffman & Haxby, 2000; Hooker et al., 2003; Kingstone, Tipper, Ristic, & Ngan, 2004) and facial expression (Engell & Haxby, 2007; Furl, van Rijsbergen, Treves, Friston, & Dolan, 2007; Hasselmo, Rolls, & Baylis, 1989). There is also evidence that the amygdala is involved in processing both gaze and expression information (e.g., Adams, Gordon, Baird, Ambady, & Kleck, 2003; Hadjikhani et al., 2008; Hoffman et al., 2007; Hooker et al., 2003; Kawashima et al., 1999; Rolls, 1984), although a recent study using high-resolution fMRI suggests that gaze and expression may be processed in separate regions of the macaque amygdala (Hoffman et al., 2007).

Given both the intuitiveness of the idea that facial expression should modulate gaze-triggered orienting and the fact that gaze and expression information are known to be processed in common brain areas, one would expect an interaction between the two in behavioral gaze cuing studies. Interactions between gaze and expression have been observed in several behavioral studies that did not measure gaze-triggered orienting (e.g., Adams & Kleck, 2003; Ganel, Goshen-Gottstein, & Goodale, 2005; Graham & LaBar, 2007) and in the electrophysiological results of one gaze cuing study (Fichtenholtz, Hopfinger, Graham, Detwiler, & LaBar, 2007).

However, attempts to demonstrate behavioral interactions between gaze cuing effects and expression effects have produced surprisingly mixed results. In some of these studies, either no effect of expression on gaze-triggered orienting was observed (e.g., Hietanen & Leppänen, 2003), or an effect was observed, but only in a subset of participants (e.g., anxious participants, Mathews, Fox, Yiend, & Calder, 2003). Collapsing across participants ranging from low to high in trait fearfulness, Tipples (2006) found a larger gaze cuing effect for fearful faces than for neutral faces, but contrary to what one might expect, he found no difference between fearful and happy faces. In contrast, Putman, Hermans, and van Honk (2006) found larger gaze-cuing effects across participants for fearful faces compared with happy faces at a short SOA of 100 ms. We also recently observed the same effect, but only at a longer SOA of 525 ms and not at a shorter SOA of 225 ms; however, this effect was evident in only one of two very similar experiments (Graham, Friesen, Fichtenholtz, & LaBar, 2009, Experiments 5 & 6).

The incongruous results across and within studies suggest that the effects of facial expression on gaze-triggered orienting are tenuous under the experimental conditions that have been used to date. One reason for this is that there is considerable variation in the timing of stimulus presentation across these studies, in terms of gaze cue and expression presentation, and in terms of cue-target SOA. For example, some studies used cuing sequences that may have minimized perceived cue-target contingencies (e.g., the face displays an emotion and then gazes to the side) and some may have presented the target too soon (i.e., before gaze and expression information could be integrated). But a more likely reason for weak or absent interactions between gaze and expression may be the meaninglessness of the context in which the gaze and expression cues were presented. In the behavioral studies mentioned above the targets were always neutral: expressive faces repeatedly looked toward locations where neutral objects such as asterisks or letters might subsequently appear.

In the present study, we hypothesized that for expression processing to be fully engaged in a gaze-cuing experiment (i.e., for expression to have an optimal effect on gaze-triggered orienting), it might be necessary to present targets that would be likely to elicit emotional expressions in a gazing face. For example, when a participant is presented with a fearful gazing face, it might matter that an upcoming target could be threatening, such as a man with a gun or an attacking dog. In this context, it is perhaps worth mentioning that in the studies discussed above, not only was there usually no enhanced cuing effect for fearful faces, but in many cases no effect of expression was observed (e.g., Hietanen & Leppänen, 2003; Mathews et al., 2003; but see Graham et al., 2009). It is possible that because the emotional expression of the cue was not task relevant or meaningful within the context of the cuing sequence, emotional expression was not fully processed, thus minimizing the likelihood of observing both an emotional expression effect and an interaction between gaze cue validity and emotional expression.

In support of this notion, a recent study that used emotionally-valenced words as targets observed gaze-cuing effects for fearful and disgusted faces (and not for happy and neutral faces) when a group of participants evaluated the target words as positive or negative (Pecchinenda, Pes, Ferlazzo, & Zoccolotti, 2008). When a separate group of participants judged whether the same target words were in upper or lower case letters, there were equivalent cuing effects for all expressions, and the cuing effects for the negative expressions were significantly smaller than they had been in the evaluative task. The authors concluded that the enhancement of attentional orienting to gazing faces with negative expressions requires that participants be engaged in a task that involves having an explicit motivational goal to evaluate affective valence. This conclusion is consistent with the results of a gaze-cuing study conducted by Bayliss, Frischen, Fenske, & Tipper (2007) in which participants gave more positive evaluations to neutrally-valenced household objects that had been gazed at by a happy face compared with a disgusted face.

However, the idea that facilitated integration of gaze-triggered orienting and a fearful facial expression occurs only when people are engaged in an explicit affective evaluation is somewhat counterintuitive: in the real world, a fearful gazing face should enhance orienting to the gazed-at location rapidly and automatically regardless of the viewer’s current goal state. Thus, we investigated the possibility that, to produce enhanced orienting to objects gazed at by fearful gazing faces, it might be sufficient simply to have fearful and happy faces look at objects that might induce fear or happiness in the gazer, and that an explicit evaluative goal might not be required. In other words, we used target images for which a fearful or happy reaction would be appropriate.

We also used a cuing sequence that was more ecologically valid than those used in most of the previous studies. In the studies published to date, the cuing sequences varied in the degree to which they represented what normally occurs in real life. For example, both Hietanen and Leppänen (2003) and Mathews et al. (2003) presented their expressive faces before introducing a gaze shift. Tipples (2006) and Putman et al. (2006) made their cue presentation more realistic by having gaze direction change (from direct to averted) and facial expression change (from neutral to emotionally expressive) simultaneously. However, it is natural for people to look at something first and then react to it. Therefore, we began each trial with a face with a neutral expression looking straight ahead that subsequently looked to the left or the right and then changed its expression to fearful or happy, as though in reaction to what it had just seen.

In order to investigate the timing of the integration of gaze and expression information, we used both a fairly short SOA of 225 ms and a longer one of 525 ms. One possible reason for the mixed results from previous studies is that different SOAs, ranging from 100 ms (Fichtenholtz et al., 2007) to 700 ms (Mathews et al., 2003; Tipples, 2006), were used across studies. Although ERP findings indicate that both gaze direction (e.g., Klucharev & Sams, 2004; Schuller & Rossion, 2001) and expression (e.g., Pizzagalli, Regard, & Lehmann, 1999; for a review, see Vuilleumier & Pourtois, 2007) are processed rapidly, there is evidence suggesting that their integration may occur relatively late. For example, in a TMS study, Pourtois et al. (2004) provided evidence that the processing of both a change in gaze direction and a fearful expression begins early, less than 200 ms after stimulus onset, and in nonoverlapping areas of the brain. In an ERP study, Klucharev and Sams (2004) observed early effects for both gaze and expression, and an interaction between the two after approximately 300 ms. Similarly, in a behavioral study, Graham and LaBar (2007) showed temporal asymmetries in gaze and expression processing that could be indicative of separate processing at early stages. Consistent with this, a recent fMRI study by Engell and Haxby (2007) has demonstrated that gaze and expression processing are carried out by overlapping but dissociable areas of the STS. Similarly, evidence for dissociations between gaze and expression processing has been reported in the macaque amygdala (Hoffman et al., 2007), a structure that has been implicated in the rapid and automatic processing of emotionally significant events (Vuilleumier & Pourtois, 2007).

In Experiment 1, fearful or happy gazing faces were followed by targets that were either nonthreatening (a smiling baby) or threatening (a snarling dog). Given the evidence that both gaze and expression are processed rapidly but that the integration of gaze information and expression information takes time, we predicted that we would observe separate gaze and expression effects at the short SOA, and an interaction between the two at the long SOA, consistent with the results of Graham et al. (2009, Experiment 6). Specifically, we expected that at the long SOA, the facilitatory effects of gaze on target identification would be greater with fearful faces than with happy faces. In Experiment 2, we reran Experiment 1 with emotionally neutral targets in order to rule out the possibility that the effects observed in Experiment 1 could be attributed to other specific characteristics (such as our cue stimuli, our cuing sequence, the SOAs we used, and the response task) rather than reflecting the effect of the context provided by emotionally meaningful targets. In both experiments, participants performed a non-evaluative target identification task. Finally, in order to rule out a depth-of-processing explanation (i.e., that deeper processing of the target rather than its meaningfulness enhances expression and gaze interactions), in Experiment 3 we reran Experiment 1 as a target detection task.

Experiment 1

Method

Participants

Forty-four undergraduates (27 female; mean age = 20 years) participated for course credit. All reported normal or corrected-to-normal vision.

Stimuli

Examples of cue and target stimuli are presented in Figure 1(a). The face stimuli were adapted from neutral, happy, and fearful faces of a single individual (PE) from Ekman and Friesen’s (1976) Pictures of Facial Affect, and were identical to those described in Graham et al. (2009). The image used prior to each cuing sequence was a face with a neutral expression, gazing straight ahead. Images used during the cuing sequences were faces with gaze averted to the left or right. These faces displayed a neutral expression, then an intermediate (55% emotional) happy or fearful expression, and then a full intensity (100% emotional) happy or fearful expression. Our target images were a baby (#2070) and a dog (#1300) taken from the International Affective Picture System (Lang, Bradley, & Cuthbert, 1999). All face images were equated for contrast and luminance, as were the two target images.

Figure 1.

Example illustrations of (a) Experiment 1 cue and target stimuli, and (b) Experiment 1 trial sequence. On each trial, a central fixation cross (not illustrated) was replaced by a face with a neutral expression gazing straight ahead for 1000 ms. The face’s eyes looked to the left or right for 50 ms, and then its expression changed to fearful or happy. A target (baby or dog) appeared on the left or right 225 (± 50) or 525 (± 50) ms after the gaze cue onset. In Experiment 2, the stimuli and trial sequence were identical to those in Experiment 1 except that the target stimuli were the capital letter T or L. In Experiment 3, the stimuli and trial sequence were identical to those in Experiment 1 except that on a minority of trials (catch trials) no target appeared after the onset of the final cue display. Target images depicted are for illustrative purposes; they are not the images that were presented.

Stimuli were presented on a black background. A white cross that served as the fixation stimulus at the beginning of each trial was positioned at the center of the screen, subtending 0.50° × 0.50° visual angle. The face images were positioned in the center of the screen, and were 8.3° wide by 12.0° high. The eyes of the face were 2.3° above the horizontal midline. Target images were 5.0° × 5.0°, and were positioned 7.5° from the vertical midline, as measured to the nearest edge of the target image. The targets were positioned with their centers 2.3° above the horizontal meridian, directly to the left or right of the face’s eyes.

Design

Participants performed an identification task, in which they indicated whether the target was a baby or a dog by pressing a left or right button. Response-side mapping was counterbalanced across subjects. Twenty practice trials were followed by 576 test trials presented in six blocks of 96 trials. Gaze direction (left, right), target location (left, right), facial expression (happy, fearful), target identity (baby, dog), and cue-to-target interval (175, 225, 275, 475, 525, 575 ms) were selected randomly and equally within each block. On valid trials, the target appeared on the side of the screen toward which the eyes of the face were looking; on invalid trials, the target appeared on the side opposite to where the eyes were looking. The cue-target intervals were divided into two SOA categories: short (225 ± 50 ms) and long (525 ± 50 ms).
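
For concreteness, this fully crossed, block-randomized design can be sketched as follows. This is a minimal Python sketch, not the authors' software; the factor names, the block-building function, and the SOA-binning rule are illustrative assumptions based on the description above.

```python
import itertools
import random

# One block: every combination of the five factors appears exactly once.
# 2 gazes x 2 locations x 2 expressions x 2 targets x 6 intervals = 96 trials.
FACTORS = {
    "gaze": ["left", "right"],
    "target_location": ["left", "right"],
    "expression": ["happy", "fearful"],
    "target": ["baby", "dog"],
    "cue_target_interval_ms": [175, 225, 275, 475, 525, 575],
}

def make_block():
    trials = [dict(zip(FACTORS, combo))
              for combo in itertools.product(*FACTORS.values())]
    random.shuffle(trials)
    for t in trials:
        # Valid: the target appears on the side the eyes are looking toward.
        t["validity"] = "valid" if t["gaze"] == t["target_location"] else "invalid"
        # Bin the six intervals into the two SOA categories (225 or 525 ± 50 ms).
        t["soa"] = "short" if t["cue_target_interval_ms"] <= 275 else "long"
    return trials

blocks = [make_block() for _ in range(6)]  # 6 blocks x 96 trials = 576 test trials
```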

Procedure

Participants sat approximately 57 cm from the monitor. Figure 1(b) provides an illustration of the trial sequence. Each trial began with the presentation of the fixation cross for 1000 ms. The cross was then replaced by the following face images presented in succession: neutral expression with straight-ahead gaze (1000 ms); neutral expression with left or right gaze (50 ms); partial happy or fearful expression with the same left or right gaze (50 ms); and full happy or fearful expression with the same left or right gaze (125 ± 50 ms for the short SOA; 425 ± 50 ms for the long SOA). This cuing sequence was followed by the target display, which consisted of the baby or the dog appearing on the left or right side of the screen while the final face image from the cuing sequence remained on the screen. The trial ended when a response was made or 1000 ms had elapsed, whichever came first. The next trial began immediately with the presentation of the fixation cross.
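
The trial timing just described can be summarized as a simple frame schedule. The sketch below is illustrative only (the display labels and function name are our own); it shows how the jittered final cue frame yields target onsets 225 ± 50 ms or 525 ± 50 ms after the gaze shift.

```python
def trial_schedule(gaze, expression, soa="short", jitter_ms=0):
    """Ordered (display, duration_ms) pairs for one trial.

    jitter_ms is drawn from {-50, 0, +50}. The gaze cue onset is the
    left/right gaze frame, so target onset follows it by
    50 + 50 + 125 (+ jitter) = 225 ms for the short SOA, or
    50 + 50 + 425 (+ jitter) = 525 ms for the long SOA.
    """
    final_cue_ms = (125 if soa == "short" else 425) + jitter_ms
    return [
        ("fixation_cross", 1000),
        ("neutral_expression_straight_gaze", 1000),
        (f"neutral_expression_gaze_{gaze}", 50),
        (f"{expression}_55pct_gaze_{gaze}", 50),
        (f"{expression}_100pct_gaze_{gaze}", final_cue_ms),
        ("target_display", None),  # remains until response or 1000 ms timeout
    ]
```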

Participants were instructed to keep their eyes fixated at the location of the cross throughout each trial and to respond as quickly and accurately as possible to the appearance of a target by pressing the appropriate left or right button to indicate whether the target was a baby or a dog. The experimenter emphasized that the face’s gaze direction and facial expression did not in any way predict the location or identity of the target. Targets were referred to simply as “a baby” and “a dog”; i.e., no affective (e.g., “cute”, “scary”) or evaluative (e.g., “positive”, “negative”) terminology was used.

Results

Keypress selection errors, anticipations (RTs < 100 ms), and timed-out trials (no response within 1000 ms of target onset) were excluded from statistical analysis. Keypress selection errors accounted for 3.2% of the total trials; anticipations, 0.03%; and timed-out trials, 1.6%.
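
These exclusion rules amount to a simple reaction-time filter. The following sketch assumes a pandas DataFrame of raw trials with hypothetical columns correct, responded, and rt_ms:

```python
import pandas as pd

def clean_rts(df: pd.DataFrame) -> pd.DataFrame:
    """Drop keypress selection errors, anticipations (RT < 100 ms), and
    timed-out trials (no response within 1000 ms of target onset)."""
    keep = df["correct"] & df["responded"] & (df["rt_ms"] >= 100)
    return df[keep]
```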

Results for all conditions are presented in Table 1. An ANOVA on mean RT with cue-target validity (valid, invalid), SOA (225 ms, 525 ms), emotional expression (happy, fearful), and target identity (baby, dog) as within-subject factors revealed: a main effect for validity, F(1,43) = 32.99, p < 0.0001, with RT shorter on valid trials (M = 489 ms, SD = 61) compared to invalid trials (M = 499 ms, SD = 61); a main effect for SOA, F(1,43) = 108.03, p < 0.0001, with RT shorter at the long SOA (a standard foreperiod effect in which participants respond more quickly when they have had more time to prepare to respond; e.g., Mowrer, 1940); and a main effect for target, F(1,43) = 24.92, p < 0.01, with RT shorter for the baby (M = 486 ms, SD = 59) than for the dog (M = 502 ms, SD = 62). The main effect for expression was not significant, F(1,43) = 0.73, p > 0.39.
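
A four-way repeated-measures ANOVA of this kind can be reproduced from the cleaned trial data along these lines (a sketch using statsmodels' AnovaRM; the column names are assumptions, and trial RTs are first averaged into one mean per subject and cell, as AnovaRM requires):

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

def rm_anova(trials: pd.DataFrame) -> pd.DataFrame:
    """trials: one cleaned row per trial, with columns subject, validity,
    soa, expression, target, and rt_ms."""
    # Average within each subject x condition cell (one observation per cell).
    cells = (trials
             .groupby(["subject", "validity", "soa", "expression", "target"],
                      as_index=False)["rt_ms"].mean())
    fit = AnovaRM(cells, depvar="rt_ms", subject="subject",
                  within=["validity", "soa", "expression", "target"]).fit()
    return fit.anova_table  # F and p for each main effect and interaction
```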

Table 1.

Mean Response Times (in Milliseconds) and Standard Deviations for Experiment 1

                        Valid         Invalid
Expression   Target     M     SD      M     SD

225 ms SOA
Happy        Baby      490    60     502    56
             Dog       507    61     516    64
Fearful      Baby      497    55     508    60
             Dog       504    63     510    67

525 ms SOA
Happy        Baby      468    58     474    57
             Dog       494    59     497    57
Fearful      Baby      469    62     481    55
             Dog       484    60     502    67

There were several significant interactions. First, there were two interactions not involving gaze cuing. There was a significant SOA × target interaction, F(1,43) = 18.96, p < 0.0001, apparently indicating that the RT advantage for the baby compared with the dog was greater at the long SOA (M = 22 ms) than it was at the short SOA (M = 10 ms). There was also a significant expression × target interaction, F(1,43) = 12.53, p < 0.001, apparently reflecting an emotional valence congruency effect: for the baby, RTs were numerically shorter with a happy face (M = 483 ms) than with a fearful face (M = 489 ms), and for the dog, RTs were numerically shorter with a fearful face (M = 500 ms) than with a happy face (M = 504 ms). The expression × target × SOA interaction was not significant, F < 1.0.

The interactions of primary interest were those involving gaze cuing and expression. The validity × expression interaction was marginally significant, F(1,43) = 3.04, p < 0.09. Critically, however, this interaction was modulated by a significant three-way expression × validity × SOA interaction, F(1,43) = 11.55, p < 0.002. As can be seen in Figure 2(a), expression and validity appeared to interact only at the long SOA. No other interactions approached significance.

Figure 2.

Mean response times (RT) for valid and invalid trials as a function of SOA (stimulus onset asynchrony) and facial expression, and collapsing across target identities, for: (a) the Experiment 1 target identification task with emotionally valenced targets (baby and dog); (b) the Experiment 2 target identification task with emotionally neutral targets (the letters T and L); and (c) the Experiment 3 target detection task with emotionally valenced targets (baby and dog).

Planned t-tests confirmed that valid RT was significantly shorter than invalid RT for all combinations of expression and SOA, all p’s < 0.016. Thus, the significant validity × expression × SOA interaction indicates that there were gaze cuing effects of different magnitudes in different conditions. To examine this, we calculated cuing effects (invalid minus valid RT) for each condition, and compared the cuing effects for fearful vs. happy face cues at each SOA (Bonferroni corrected t-tests, two-tailed, alpha = 0.025). There was no statistical difference between fearful face cuing (8 ms) and happy face cuing (10 ms) at the short SOA, p > 0.49; but there was a difference at the long SOA, with fearful face cuing (15 ms) significantly greater than happy face cuing (5 ms), p < 0.003.
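
The cuing-effect analysis can be expressed compactly from per-participant condition means. This sketch (with assumed column names) computes invalid-minus-valid cuing effects and then runs the paired fearful-vs.-happy comparison at each SOA against the Bonferroni-adjusted alpha of .025:

```python
import pandas as pd
from scipy.stats import ttest_rel

def cuing_effects(means: pd.DataFrame) -> pd.DataFrame:
    """means: one mean RT per subject x expression x soa x validity."""
    wide = (means.pivot_table(index=["subject", "expression", "soa"],
                              columns="validity", values="rt")
                 .reset_index())
    wide["cuing"] = wide["invalid"] - wide["valid"]  # gaze cuing effect in ms
    return wide

def compare_expressions(effects: pd.DataFrame, alpha: float = 0.025) -> None:
    """Paired t-test of fearful vs. happy cuing effects at each SOA."""
    for soa, grp in effects.groupby("soa"):
        by_subj = grp.pivot(index="subject", columns="expression", values="cuing")
        t, p = ttest_rel(by_subj["fearful"], by_subj["happy"])
        verdict = "significant" if p < alpha else "n.s."
        print(f"{soa} SOA: t = {t:.2f}, p = {p:.4f} ({verdict} at alpha = {alpha})")
```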

An inspection of Figure 2(a) suggests that the fearful face cuing effect increased over time, whereas the happy face cuing effect may have decreased. To test this, we compared happy face cuing effects at the short vs. the long SOA, and fearful face cuing effects at the short vs. the long SOA (Bonferroni corrected t-tests, two-tailed, alpha = 0.025). These tests confirmed that with an increase in SOA, the gaze cuing effect for fearful faces was enhanced, p < 0.01; however, the apparent attenuation of the gaze cuing effect for happy faces was not statistically significant after Bonferroni correction, p > 0.03.

Discussion

The purpose of Experiment 1 was to explore the relationship between facial expression and gaze direction in a cuing experiment with emotionally valenced targets. Using a dynamic cuing sequence and both short and long SOAs, we observed evidence of early and separate gaze and expression processing. The significant gaze cuing effect for both happy and fearful expressions at the short SOA indicates early gaze direction processing, and the significant interaction between expression and target across both SOAs indicates early and sustained emotional expression processing in this target identification task. The fact that gaze cue validity and expression did not interact at the short SOA supports the proposal that these two types of information are integrated later in processing.

One might wonder whether our use of a single target image to represent each of the two emotional valences is problematic in that (1) we may have produced a stimulus-specific effect, and (2) participants may have habituated to the target stimuli over the course of the experimental trials, thereby resulting in an underestimation of the effect of expression on gaze-triggered orienting. With regard to the first point, our strategy was to use only two target images in order to keep our experimental design straightforward. Although it remains for future research to determine the generalizability of findings such as ours with other threatening and nonthreatening target images, we are fairly confident that our effects are not stimulus-specific, as the target images we used are well-normed for emotional valence (Lang et al., 1999). With regard to the second point, we checked for habituation effects by conducting a first half vs. second half ANOVA. This analysis revealed no significant differences between reaction times from the first and second halves of the experiment (all ps > 0.05), indicating that habituation to the target images did not occur.

A critical difference between our experiment and those previously reported in the literature is that ours used emotionally valenced targets rather than emotionally neutral targets. However, our experiment differed in a number of other ways (e.g., timing of stimulus presentation, SOAs used, response task) from the variety of previous studies that either did or did not observe effects of expression on gaze-triggered orienting. To explore the extent to which including emotionally-valenced targets was responsible for producing the enhanced orienting for fearful gazing faces in this particular version of a gaze cuing paradigm, we reran Experiment 1 with emotionally neutral targets (the letters T and L), but otherwise using the same stimuli, task, and design.

Experiment 2

Method

Participants

Forty-three undergraduates (35 female; mean age = 22 years) reporting normal or corrected-to-normal vision participated for course credit.

Stimuli, Design, and Procedure

The face stimuli were identical to those used in Experiment 1. The target stimuli, the capital letters T and L (50% grayscale), were approximately 1.5° high by 1° wide. The experiment was otherwise identical to Experiment 1.

Results and Discussion

Errors were excluded as in Experiment 1. Anticipations accounted for 0.3% of the total trials; keypress selection errors, 3.0%; and timed-out trials, 2.9%.

The results for all conditions are presented in Table 2. An ANOVA revealed: a main effect for validity, F(1,42) = 30.53, p < 0.0001, with RT shorter on valid trials (M = 530 ms, SD = 75) compared to invalid trials (M = 538 ms, SD = 77); a main effect for SOA, F(1,42) = 117.18, p < 0.0001, with response time shorter at the long SOA; and a main effect for expression, F(1,42) = 4.28, p < 0.05, with slightly shorter RTs for fearful (532 ms) than for happy faces (535 ms). The main effect for target identity was not significant, F < 1.0.

Table 2.

Mean Response Times (in Milliseconds) and Standard Deviations for Experiment 2

                        Valid         Invalid
Expression   Target     M     SD      M     SD

225 ms SOA
Happy        T         545    76     546    71
             L         546    74     551    79
Fearful      T         543    72     553    73
             L         545    74     548    77

525 ms SOA
Happy        T         514    75     527    80
             L         526    77     529    82
Fearful      T         507    73     521    74
             L         513    69     527    78

There were several significant interactions. First, there was a significant expression × SOA interaction, F(1,42) = 8.14, p < 0.007, indicating that the advantage for identifying targets preceded by a fearful face was enhanced at the long SOA. Second, there was a significant validity × SOA interaction, F(1,42) = 4.77, p < 0.035, revealing a greater gaze cuing effect at the long SOA compared with the short SOA. There was also a marginally significant expression × validity interaction, F(1,42) = 3.18, p < 0.09, indicating a trend toward greater cuing effects for fearful faces compared with happy faces. No other interactions were significant. Notably, in contrast to our Experiment 1 results, the expression × validity × SOA interaction, which is illustrated in Figure 2(b), did not approach significance, F < 1.0.

In order to determine whether cuing effects (invalid minus valid trials) varied systematically across the two experiments, cuing effects for Experiment 1 and Experiment 2 were compared in a mixed ANOVA with expression (fearful, happy) and SOA (short, long) as within-subject factors and experiment as a between-subjects factor. With the added power afforded by the increase in sample size, a significant expression × SOA interaction was observed, with fearful cuing greater than happy cuing, F(1, 85) = 5.15, p < 0.05. The three-way expression × SOA × experiment interaction fell short of significance, F(1, 85) = 2.80, p < 0.10. As shown in Figures 2a and 2b, this nonsignificant trend seems to be driven by differences in the cuing effect for happy faces across the two experiments: while the cuing effect for happy faces was reduced at the long SOA relative to the short SOA in Experiment 1, the opposite was true in Experiment 2.

The main purpose of Experiment 2 was to determine the extent to which the interaction between gaze cuing and expression observed in Experiment 1 at the long SOA was a function of the presence of emotionally-valenced targets. We found that with neutral targets, the enhanced gaze cuing effect for fearful faces at the long SOA disappeared. Instead, there was a trend for the cuing effect to be larger for happy faces at the long SOA. This result was unexpected and is inconsistent with previous results reporting larger cuing effects for fear at longer SOAs (e.g., Graham et al., 2009). Although we can offer no ready explanation for this finding, it does attest to the inconsistent nature of gaze and expression interactions and their sensitivity to task demands (Bindemann, Burton, & Langton, 2007), and it indicates that the lack of a three-way interaction between validity, expression, and SOA in Experiment 2 was not due to a lack of statistical power. Nonetheless, given the consistency of the stimuli, design, and procedure across the two experiments and the strikingly different results in Experiment 2 compared with Experiment 1, it appears that the integrated processing of facial expression and gaze direction is enhanced by the context provided by the emotional meaningfulness of the targets. In other words, it may be important that the expression on the gazing face is relevant to the objects being gazed at.

There is, however, another possible explanation for the differences we observed between Experiment 1 and Experiment 2 of the present study, and for the differences observed by Pecchinenda et al. (2008) between cuing effects when participants made an evaluative judgment about emotionally-valenced words compared with when they made a perceptual judgment about the same words. In both studies, the experiment in which enhanced orienting was observed with fearful faces compared with happy faces was the experiment that required deeper processing of the target stimuli. It could be argued that in our study, identifying the complex images of biologically-relevant targets in Experiment 1 required more processing resources than identifying the simple target letters in Experiment 2. Similarly, it could be argued that in Pecchinenda and colleagues’ study, evaluating the emotionally-valenced words required more processing resources than identifying the case in which the words were presented. Thus, it is possible that the differences in results between experiments in both studies reflect not the importance of meaningful context (as our results suggest) and not the importance of an evaluative goal (as Pecchinenda et al.’s results suggest), but simply the importance of deeper cognitive processing for producing enhanced orienting with fearful gazing faces.

Experiment 3

Experiment 3 was run to rule out a difference in cognitive resource requirements as an explanation for our observation of enhanced orienting for fearful faces in Experiment 1 but not in Experiment 2. Here we reran Experiment 1, but changed the response task from target identification to target detection. We reasoned that if the enhanced orienting with fearful faces observed in Experiment 1 were attributable to deeper processing of the target compared with Experiment 2, this enhanced orienting effect would not be observed in Experiment 3 because with a simple detection task the targets would require minimal processing. Alternatively, if the enhanced orienting with fearful faces were observed despite the low target processing requirements, this would support our conclusion that the emotional meaningfulness of the targets is important. Additionally, this experiment provides the opportunity to make a comparison similar to the one made in the Pecchinenda et al. (2008) study: our Experiments 1 and 3 allow us to compare performance with the same targets but a different response task.

Method

Participants

Forty-four undergraduates (19 female; mean age = 19 years) reporting normal or corrected-to-normal vision participated for course credit.

Stimuli, Design, and Procedure

The face stimuli (fearful or happy faces with left or right gaze) and target stimuli (baby or dog appearing to the left or right of the face) were identical to those used in Experiment 1. In this experiment the response task for participants was to press the center key of the button box with the index finger of their preferred hand when a target appeared, regardless of its identity or location. In addition to the 576 target-present trials that comprised Experiment 1, there were 48 catch trials in which no target appeared (eight in each of the six blocks), selected randomly and equally within each block from the four possible combinations of gaze direction and expression, for a total of 624 test trials. On catch trials the face cue with averted eyes and full emotional expression remained on the screen for 1000 ms or until a response was made. Participants were instructed not to respond on these trials. Experiment 3 was otherwise identical to Experiment 1.

Results and Discussion

Errors on target-present trials were excluded as in Experiments 1 and 2. Anticipations accounted for 0.60% of the target-present trials; keypress selection errors, 0.00%; and timed-out trials, 0.56%. The false alarm rate on catch trials was 1.76%.

The results for all conditions are presented in Table 3. An ANOVA revealed: a significant main effect for validity, F(1,43) = 50.31, p < 0.0001, with RT shorter on valid trials (M = 304 ms, SD = 51) compared to invalid trials (M = 315 ms, SD = 53); and a significant main effect for SOA, F(1,43) = 106.27, p < 0.0001, with response time shorter at the long SOA. There was no significant main effect for either expression or target identity (Fs < 1.3). In contrast to Experiments 1 and 2 in which the expression × validity interaction was only marginally significant, here the expression × validity interaction was significant, F(1,43) = 7.63, p < 0.009. However, this was qualified by a significant expression × validity × SOA interaction, F(1,43) = 7.33, p < 0.01, similar to that observed with emotionally valenced targets in Experiment 1. As can be seen in Figure 2(c), expression and validity appeared to interact only at the long SOA. No other interactions were significant.

Table 3.

Mean Response Times (in Milliseconds) and Standard Deviations for Experiment 3

                        Valid         Invalid
Expression   Target     M     SD      M     SD

225 ms SOA
Happy        Baby      319    54     326    53
             Dog       316    50     329    58
Fearful      Baby      319    57     325    56
             Dog       315    54     331    55

525 ms SOA
Happy        Baby      295    43     303    49
             Dog       293    52     297    43
Fearful      Baby      291    39     306    51
             Dog       288    46     306    51

Planned t-tests confirmed that valid RT was significantly shorter than invalid RT for all combinations of expression and SOA, all p’s < 0.003. Thus, the significant validity × expression × SOA interaction indicates that there were gaze cuing effects of different magnitudes in different conditions. To examine this, we calculated cuing effects for each condition as in Experiment 1, and compared the cuing effects for fearful face vs. happy face cues at each SOA (Bonferroni corrected t-tests, two-tailed, alpha = 0.025). There was no statistical difference between fearful face cuing (11 ms) and happy face cuing (10 ms) at the short SOA, p > 0.75; but there was a difference at the long SOA, with fearful face cuing (17 ms) significantly greater than happy face cuing (6 ms), p < 0.0003.

As was the case with Experiment 1, it appeared that the fearful face cuing effect increased over time, whereas the happy face cuing effect may have decreased (see Figure 2(c)). To test this, we compared happy face cuing effects at the short vs. the long SOA, and fearful face cuing effects at the short vs. the long SOA (Bonferroni corrected t-tests, two-tailed, alpha = 0.025). Consistent with the results observed in Experiment 1, these tests revealed that with an increase in SOA, the gaze cuing effect for fearful faces was enhanced, p < 0.025, and that the apparent attenuation of the gaze cuing effect for happy faces was not statistically significant, p > 0.14.

A notable difference between the results of Experiments 1 and 3 was the presence of a significant expression × target interaction in Experiment 1 that was not observed in Experiment 3. In order to confirm this observation, the data from these two experiments were compared in a mixed ANOVA with experiment as a between-subjects factor and with cue-target validity (valid, invalid), SOA (225 ms, 525 ms), emotional expression (happy, fearful), and target identity (baby, dog) as within-subject factors. There was a main effect of experiment, F(1,85) = 270.12, p < 0.001, reflecting the fact that target detection occurred more quickly than target identification (310 vs. 495 ms). The analysis also yielded interactions involving experiment and target type, including: a target × experiment interaction, F(1, 85) = 24.80, p < 0.001; a target × SOA × experiment interaction, F(1, 85) = 19.39, p < 0.001; and most importantly, an expression × target × experiment interaction, F(1, 85) = 9.77, p < 0.01. This particular interaction confirmed that the emotional valence congruency effect (i.e., for the baby, RTs were shorter with a happy face than with a fearful face, and for the dog, RTs were shorter with a fearful face than with a happy face) was present only in Experiment 1, suggesting that this effect may depend on deeper processing of the target.

Importantly, no interactions involving cue-target validity and experiment were observed (ps > 0.05). Indeed, apart from the absence of the emotional valence congruency effect, the results of Experiment 3 look very similar to those of Experiment 1, with enhanced orienting triggered by fearful gazing faces compared with happy gazing faces emerging at the longer SOA. Thus, the present experiment replicates our finding that this enhanced orienting can occur with emotionally meaningful targets in the absence of an evaluative goal; and it adds the new finding that deep processing of the target is not required to produce it.

General Discussion

The ability to infer the potential presence, location, and importance of an unseen stimulus in the environment from facial information alone involves the integration of information about both gaze direction and emotional expression. To date, the conditions under which these two stimulus dimensions interact to influence attentional orienting to gaze are not well delineated. It is clear from the literature that people are very sensitive to changeable aspects of the face such as emotional expression and gaze direction and that we extract a great deal of socially relevant information from these dynamic cues. Thus, intuitively, one would expect that varying the information contained in these critical facial components should produce differential results. In particular, one would expect that the gaze direction of a fearful face would produce enhanced orienting to a gazed-at object because the combination of gaze direction and a fearful facial expression would signal the location of a potential threat (e.g., Fichtenholtz et al., 2007; Hadjikhani et al., 2008). Surprisingly, however, the gaze cuing studies that have sought to explore the relationship between gaze-triggered orienting and facial expression have, as a whole, generated inconsistent findings, ranging from no effect of expression on gaze cuing (Hietanen & Leppänen, 2003), through enhanced cuing by fearful faces but only in anxious individuals (Mathews et al., 2003), to enhanced cuing by fearful faces at a short SOA (Putman et al., 2006).

In the present study, we investigated whether using emotionally meaningful targets would optimize our ability to observe enhanced orienting to the gaze direction of a fearful face. To this end, we presented nonpredictive directional gaze cues in a face whose expression changed from neutral to either fearful or happy, followed by a target that was either emotionally valenced (Experiment 1) or neutral (Experiment 2). To maximize our chances of observing clear results we used a natural trial sequence in which an initial gaze shift was followed by a change in expression (as if the model were reacting to the stimulus), and to examine the time course of the attentional effects of gaze and expression we included both a short and long SOA.

With the emotionally-valenced targets in Experiment 1, we found clear evidence of both gaze processing and expression processing at the short SOA, but no interaction between the two. We also observed clear evidence of the integration of gaze and expression information at the long SOA, with an enhanced gaze cuing effect for fearful faces compared with happy faces. These findings converge with the results of several neuroimaging studies that have suggested that gaze direction information and emotional expression information are dissociable, at least at early stages of processing (e.g., Fichtenholtz et al., 2007; Hoffman et al., 2007; Klucharev & Sams, 2004; Pourtois et al., 2004). We had hypothesized that the use of neutral targets in previous studies might have led to mixed results because a face repeatedly having an emotional reaction to neutral targets might not have fully engaged emotion processing (i.e., emotional expression is irrelevant with respect to the targets). Our finding of an enhanced gaze cuing effect for fearful faces at the long SOA in Experiment 1 might indeed have been due in large part to the addition of valenced targets; however, it was also possible that this finding was due to other particular characteristics of our experimental design. To explore this issue we ran Experiment 2 with an identical cuing sequence but with emotionally-neutral targets, and we found that the interaction between gaze cuing and expression was only marginally significant and that these factors did not interact with SOA as they had in Experiment 1. This result suggests that although our cuing sequence may have enhanced (relative to the cuing sequences used in previous studies) our ability to produce an effect of facial expression on gaze cuing, the presentation of meaningful targets in Experiment 1 was important. Finally, Experiment 3 replicated our critical finding from Experiment 1 with the same emotionally meaningful targets but with a low-demand response task. Enhanced orienting with fearful faces was again observed at the longer SOA, indicating that the level of processing of the target was not responsible for the differences we observed between Experiment 1 and Experiment 2.

It should be noted that our results do not suggest that emotional context is necessary for processing the emotion on the gazing face; in Experiment 2 we observed a significant main effect of expression despite the lack of emotionally-valenced targets. Overall, participants responded more quickly to targets cued by fearful faces, regardless of cue-target validity. It should also be noted that our results do not demonstrate that emotional targets are necessary for the enhancement of the gaze cuing effect. In Experiment 2 of the present study, we observed a marginal interaction between gaze and expression across the two SOAs. And in two recent experiments that were very similar to Experiment 2 of the present study (except that they included gazing faces with a neutral expression in addition to happy and fearful faces), we observed enhanced cuing for fearful gazing faces compared with happy gazing faces in one experiment but not another (Graham et al., 2009). These mixed findings under almost identical stimulus conditions within the same study suggest that some integration of gaze and expression information can occur in the absence of meaningful targets. What the results of the present study do seem to indicate is that emotional context is necessary for optimal integration of gaze direction and expression information. A question for future research is why this might be the case. One possibility is that the combination of both emotional expression and motivationally relevant targets engages top-down processing (e.g., the creation of cue-target expectancies) that facilitates the integration of information about these two facial dimensions for transmission to the spatial orienting system.

Although our results indicate that the presence of an emotional context does matter for the integration of gaze cuing and facial expression, it seems to be the general context that matters and not the context on an individual trial. In Experiment 1, the gaze cuing effect for a given expression did not interact with target valence; that is, the fearful face produced statistically equivalent cuing effects regardless of whether the target was appropriate for a fearful expression (the dog) or inappropriate for a fearful expression (the baby), and the same is true for the happy face. This is in contrast to the fact that cue valence and target valence interacted on a trial-by-trial basis in Experiment 1; i.e., a cue-target emotional valence congruency effect was observed (although it was not observed in Experiment 3). Moreover, our basic finding regarding the effects of expression and SOA on gaze cuing from Experiment 1 was not affected by reducing the depth of processing of the target in Experiment 3. This more general affective context effect on gaze cuing suggests that for expression to optimally modulate gaze-triggered orienting, what the gazing face is looking at must be potentially meaningful with respect to the expression displayed. Such a processing tendency would of course be appropriate in the natural environment, where any number of threatening or nonthreatening creatures or events that are not in the viewer’s line of sight might be seen and reacted to emotionally by another person nearby.

Our results with emotionally-valenced target images replicate the results of Pecchinenda et al. (2008) with emotionally-valenced target words by demonstrating that the presence of an emotional context can be important for observing enhanced orienting for fearful gazing faces, but our findings also extend theirs by demonstrating that explicit evaluation is not necessary to observe modulation of the cuing effect by expression. It may simply be enough that what is gazed at could potentially evoke an emotional reaction in the gazer. This notion is consistent with findings from the macaque literature that face-selective and body posture-selective cells in the STS are sensitive to perceptual history and may play a role in the formation of expectations about the future behavior of others (Jellema & Perrett, 2003). When emotional gazing faces were consistently followed by emotionally meaningless targets in Experiment 2 of the present study, the fearful expression may have lost its emotional potency because it was not embedded in any relevant context.

A major difference between Pecchinenda et al.’s (2008) study and the present study is that Pecchinenda and colleagues observed an interaction between gaze and expression at a short SOA of 200 ms and we observed ours at a long SOA of 525 ms (and not at 225 ms). Moreover, the only other study to observe a clear enhancement of the gaze cuing effect with fearful faces vs. happy faces (but with neutral targets) also used a short SOA of 100 ms (Putman et al., 2006). We have outlined our reasons for expecting to see the interaction at a longer SOA (in particular, the likely importance of allowing observers enough time to integrate the two dimensions), and our results are in line with these expectations and consistent with our previous behavioral results that facial expression and gaze information have dissociable effects on target detection and identification when a short interval intervenes between the gaze shift and the appearance of the target (Fichtenholtz et al., 2007; Graham et al., 2009).

We had hypothesized that our natural cuing sequence and our choice of SOAs would maximize our chances of observing the often-elusive interaction between gaze and expression, and we did indeed observe the interaction at our long SOA (but, importantly, only with emotionally-valenced targets). In the end, though, the results of the present study offer no ready explanation for the discrepancy in terms of timing between our findings and those of the two other studies (Pecchinenda et al., 2008; Putman et al., 2006) that observed enhanced orienting with a fearful gazing face at a short SOA. Systematic investigations of the timing of the integration of gaze and expression information are necessary to resolve this issue.

In summary, in order to maximize our chances of observing clear results in Experiments 1 and 3 of the present study, we used a more natural trial sequence, we included both a short SOA and a long SOA in order to examine the time course of the attentional effects of gaze and expression, and we used emotionally-valenced targets that might actually produce happy or fearful expressions in the real world. We observed unambiguous evidence of an effect of facial emotional expression on the magnitude of the gaze cuing effect, with a fearful face producing enhanced orienting relative to a happy face. Consistent with findings from other studies in the cognitive neuroscience literature (e.g., Graham & LaBar, 2007; Klucharev & Sams, 2004; Pourtois et al., 2004), this effect occurred only at the longer SOA, suggesting that time is required for the integration of gaze and expression information. While providing an ecologically valid animated cuing sequence and allowing sufficient processing time before target onset may have been necessary to reveal this interaction, our results with neutral targets in Experiment 2 suggest that emotional context is an important factor for the integration of gaze and expression information. In natural situations, gaze and expression cues are inherently meaningful and permit inferences about the relative importance and/or safety of other objects and events in the environment. Our results indicate that in experimental situations, the presentation of fearful gazing faces that signal the location of emotionally meaningful visual targets can modulate the deployment of spatial attention; and, more generally, they demonstrate that the mechanisms underlying joint attention and social orienting are sensitive to emotional context.

Acknowledgments

This work was supported by NIH/NCRR COBRE grant P20 RR020151 (C.K.F.) and NIH grant 1R03MH079295-01A1 (R.G.).

The authors wish to thank Erin Kauffman, Christopher Kuylen, Edeleen Lunjew, Kayla Prosser, and Laura Vogel-Ciernia for their valuable assistance with data collection.

Contributor Information

Chris Kelland Friesen, North Dakota State University, Fargo, ND, USA.

Kimberly M. Halvorson, University of Iowa, Iowa City, IA, USA.

Reiko Graham, Texas State University, San Marcos, TX, USA.

References

1. Adams RB Jr, Gordon HL, Baird AA, Ambady N, Kleck RE. Effects of gaze on amygdala sensitivity to anger and fear faces. Science. 2003;300:1536. doi: 10.1126/science.1082244.
2. Adams RB Jr, Kleck RE. Perceived gaze direction and the processing of facial displays of emotion. Psychological Science. 2003;14:644–647. doi: 10.1046/j.0956-7976.2003.psci_1479.x.
3. Batty M, Taylor MJ. Early processing of the six basic facial emotional expressions. Cognitive Brain Research. 2003;17:613–620. doi: 10.1016/s0926-6410(03)00174-5.
4. Bayliss AP, Frischen A, Fenske MJ, Tipper SP. Affective evaluations of objects are influenced by observed gaze direction and emotional expression. Cognition. 2007;104:644–653. doi: 10.1016/j.cognition.2006.07.012.
5. Bindemann M, Burton M, Langton S. How do eye gaze and facial expression interact? Visual Cognition. 2007;16:708–733.
6. Driver J, Davis G, Ricciardelli P, Kidd P, Maxwell E, Baron-Cohen S. Gaze perception triggers reflexive visuospatial orienting. Visual Cognition. 1999;6:509–540.
7. Eimer M, Holmes A. Event-related brain potential correlates of emotional face processing. Neuropsychologia. 2007;45:15–31. doi: 10.1016/j.neuropsychologia.2006.04.022.
8. Ekman P, Friesen W. Pictures of facial affect. Palo Alto, CA: Consulting Psychologists Press; 1976.
9. Engell AD, Haxby JV. Facial expression and gaze-direction in human superior temporal sulcus. Neuropsychologia. 2007;45:3234–3241. doi: 10.1016/j.neuropsychologia.2007.06.022.
10. Fichtenholtz HM, Hopfinger JB, Graham R, Detwiler JM, LaBar KS. Happy and fearful emotion in cues and targets modulate event-related potential indices of gaze-directed attentional orienting. Social Cognitive and Affective Neuroscience. 2007;2:323–333. doi: 10.1093/scan/nsm026.
11. Friesen CK, Kingstone A. The eyes have it! Reflexive orienting is triggered by nonpredictive gaze. Psychonomic Bulletin and Review. 1998;5:490–495.
12. Frischen A, Bayliss AP, Tipper SP. Gaze cueing of attention: Visual attention, social cognition and individual differences. Psychological Bulletin. 2007;133:694–724. doi: 10.1037/0033-2909.133.4.694.
13. Furl N, van Rijsbergen NJ, Treves A, Friston KJ, Dolan RJ. Experience-dependent coding of facial expression in superior temporal sulcus. Proceedings of the National Academy of Sciences. 2007;104:13485–13489. doi: 10.1073/pnas.0702548104.
14. Ganel T, Goshen-Gottstein Y, Goodale M. Interactions between the processing of gaze direction and facial expression. Vision Research. 2005;45:1191–1200. doi: 10.1016/j.visres.2004.06.025.
15. Graham R, Friesen CK, Fichtenholtz H, LaBar K. Modulation of reflexive orienting to gaze direction by facial expressions. Visual Cognition. 2009. Retrieved August 2, 2009, from http://www.informaworld.com/10.1080/13506280802689281.
16. Graham R, LaBar KS. Garner interference reveals dependencies between emotional expression and gaze in face perception. Emotion. 2007;7:296–313. doi: 10.1037/1528-3542.7.2.296.
17. Hadjikhani N, Hoge R, Snyder J, de Gelder B. Pointing with the eyes: The role of gaze in communicating danger. Brain and Cognition. 2008;68:1–8. doi: 10.1016/j.bandc.2008.01.008.
18. Hasselmo ME, Rolls ET, Baylis GC. The role of expression and identity in the face selective responses of neurons in the temporal visual cortex of the monkey. Behavioural Brain Research. 1989;32:203–218. doi: 10.1016/s0166-4328(89)80054-3.
19. Hietanen J, Leppänen M. Does facial expression affect attention orienting by gaze direction cues? Journal of Experimental Psychology: Human Perception and Performance. 2003;29:1228–1243. doi: 10.1037/0096-1523.29.6.1228.
20. Hoffman EA, Haxby JV. Distinct representations of eye gaze and identity in the distributed human neural system for face perception. Nature Neuroscience. 2000;3:80–84. doi: 10.1038/71152.
21. Hoffman KL, Gothard KM, Schmid MC, Logothetis NK. Facial-expression and gaze-selective responses in the monkey amygdala. Current Biology. 2007;17:766–772. doi: 10.1016/j.cub.2007.03.040.
22. Hooker CI, Paller KA, Gitelman DR, Parrish TB, Mesulam MM, Reber PJ. Brain networks for analyzing eye gaze. Cognitive Brain Research. 2003;17:406–418. doi: 10.1016/s0926-6410(03)00143-5.
23. Jellema T, Perrett DI. Perceptual history influences neural responses to face and body postures. Journal of Cognitive Neuroscience. 2003;15:961–971. doi: 10.1162/089892903770007353.
24. Kawashima R, Sugiura M, Kato T, Nakamura A, Hatano K, Ito K, Fukuda H, Kojima S, Nakamura K. The human amygdala plays an important role in gaze monitoring. Brain. 1999;122:779–783. doi: 10.1093/brain/122.4.779.
25. Kingstone A, Tipper C, Ristic J, Ngan E. The eyes have it!: An fMRI investigation. Brain and Cognition. 2004;55:269–271. doi: 10.1016/j.bandc.2004.02.037.
26. Klucharev V, Sams M. Interaction of gaze direction and facial expressions processing: ERP study. NeuroReport. 2004;15:621–626. doi: 10.1097/00001756-200403220-00010.
27. Lang PJ, Bradley MM, Cuthbert BN. International Affective Picture System: Instruction manual and affective ratings (Technical Report A-4). Gainesville, FL: Center for Research in Psychophysiology, University of Florida; 1999.
28. Mathews AM, Fox E, Yiend J, Calder A. The face of fear: Effects of eye gaze and emotion on visual attention. Visual Cognition. 2003;10:823–835. doi: 10.1080/13506280344000095.
29. Mowrer OH. Preparatory set (expectancy): Some methods of measurement. Psychological Monographs. 1940;52 (Whole No. 233).
30. Pecchinenda A, Pes M, Ferlazzo F, Zoccolotti P. The combined effect of gaze direction and facial expression on cueing spatial attention. Emotion. 2008;8:628–634. doi: 10.1037/a0013437.
31. Pizzagalli D, Regard M, Lehmann D. Rapid emotional face processing in the human right and left brain hemispheres: An ERP study. NeuroReport. 1999;10:2691–2698. doi: 10.1097/00001756-199909090-00001.
32. Pourtois G, Sander D, Andres M, Grandjean D, Reveret L, Olivier E, Vuilleumier P. Dissociable roles of the human somatosensory and superior temporal cortices for processing social face signals. European Journal of Neuroscience. 2004;20:3507–3515. doi: 10.1111/j.1460-9568.2004.03794.x.
33. Putman P, Hermans E, van Honk J. Anxiety meets fear in perception of dynamic expressive gaze. Emotion. 2006;6:94–102. doi: 10.1037/1528-3542.6.1.94.
34. Rolls ET. Neurons in the cortex of the temporal lobe and in the amygdala of the monkey with responses selective for faces. Human Neurobiology. 1984;3:209–222.
35. Schuller AM, Rossion B. Spatial attention triggered by eye gaze increases and speeds up early visual activity. NeuroReport. 2001;12:2381–2386. doi: 10.1097/00001756-200108080-00019.
36. Tipples J. Fear and fearfulness potentiate automatic orienting to eye gaze. Cognition and Emotion. 2006;20:309–320.
37. Vuilleumier P, Pourtois G. Distributed and interactive brain mechanisms during emotion face perception: Evidence from functional neuroimaging. Neuropsychologia. 2007;45:174–194. doi: 10.1016/j.neuropsychologia.2006.06.003.
