Author manuscript; available in PMC: 2020 Sep 15.
Published in final edited form as: Exp Brain Res. 2019 Jan 25;237(4):967–975. doi: 10.1007/s00221-019-05472-8

Spatial and Feature-based Attention to Expressive Faces

Kestutis Kveraga 1,2, David De Vito 3, Cody Cushing 4, Hee Yeon Im 1,2, Daniel N Albohn 5, Reginald B Adams Jr 5
PMCID: PMC7491605  NIHMSID: NIHMS1524984  PMID: 30683957

Abstract

Facial emotion is an important cue for deciding whether an individual is potentially helpful or harmful. However, facial expressions are inherently ambiguous, and observers typically employ other cues to categorize the emotion expressed on a face, such as race, sex, and context. Here, we explored the effect of increasing or reducing different types of uncertainty associated with a facial expression that is to be categorized. On each trial, observers responded according to the emotion and location of a peripherally presented face stimulus and were provided with either: 1) no information about the upcoming face; 2) its location; 3) its expressed emotion; or 4) both its location and emotion. While cueing emotion or location resulted in faster response times than uninformative cueing, cueing face emotion alone resulted in faster responses than cueing face location alone. Moreover, cueing both stimulus location and emotion resulted in a superadditive reduction of response times compared with cueing location or emotion alone, suggesting that feature-based attention to emotion and spatially selective attention interact to facilitate perception of face stimuli. While categorization of facial expressions was significantly affected by stable identity cues (sex and race) in the face, we found that these interactions were eliminated when uncertainty about facial expression, but not spatial uncertainty about stimulus location, was reduced by predictive cueing. This demonstrates that feature-based attention to facial expression greatly attenuates the need to rely on stable identity cues to interpret facial emotion.

Introduction

Efficient categorization of other humans as potentially hostile or friendly is highly adaptive in selecting an appropriate response, such as avoiding harm or seeking help. Facial expressions are one of the most important ways of visually identifying another person’s intentions and emotional state. Because facial expressions are inherently ambiguous (Hassin, Aviezer, & Bentin, 2013), being able to anticipate the spatial location or emotional expression of the face to be perceived should confer advantages in responding. For example, being able to predict the location of a face to assess hostile intent allows one to concentrate attentional resources and direct the gaze to a particular region of the visual field, which can confer response speed advantages, at least with simple stimuli (Kveraga, Boucher, & Hughes, 2002), by reducing stimulus-response (S-R) uncertainty. Similarly, having advance information about the emotion expressed on a face should reduce the number of S-R alternatives and make perceptual processing and response preparation easier. Since the 1950s, reduction of S-R uncertainty by manipulating the number of S-R alternatives has been known to reduce manual response latencies as a log2 function of that number, a relationship that became known as “Hick’s law” (Hick, 1952). However, Hick’s law is violated with certain highly overlearned responses and simple, spatially co-registered S-R mappings, such as refixation saccades to light dots (Kveraga, Boucher, & Hughes, 2002; Kveraga & Hughes, 2005; Lawrence, 2010; Kloft et al., 2012) and smooth pursuit eye movements to moving dots (Berryhill et al., 2004), in that decreasing S-R uncertainty does not further reduce response latencies.
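Stated formally (this is the standard textbook formulation, not a quotation from Hick, 1952), mean response time grows linearly with the transmitted information H; for n equally likely S-R alternatives, H = log2 n, with a and b fitted empirically:

$$ \mathrm{RT} = a + b\,H, \qquad H = \log_2 n $$

Hick’s original analysis used log2(n + 1), treating “no stimulus” as an additional alternative; either form predicts that halving the number of alternatives removes one bit of uncertainty and yields a constant reduction in RT.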

Given that we employed manual key-press responses with and without spatial cueing in the present study, we expected that reduction of spatial uncertainty in the spatial cueing condition would reduce response latencies, as has been found with simple stimuli (Hick, 1952; Kveraga et al., 2002; Kloft et al., 2012). Our goal was to systematically vary the amount of S-R uncertainty in responding to happy and angry faces to test how different types of predictive information (spatial or feature-based) affect response efficiency, as measured by response time (RT) and accuracy. We manipulated the amount of uncertainty about the upcoming stimulus with pre-stimulus cues that 1) were uninformative about the spatial location or emotion of the face (Uncued trials); 2) predicted on which side of the display the face was about to appear (Side cue trials); 3) predicted the facial expression on, but not the location of, the face stimulus (Emotion cue trials); or 4) predicted both the facial expression and spatial location of the stimulus face (Side & Emotion cue trials; see Figure 1). Because in the Uncued condition there are four possible combinations of face location (left or right) and emotion (happy or angry), the S-R uncertainty is 2 bits according to Shannon’s information theory (Shannon, 1948). With Side-only or Emotion-only cueing, the S-R uncertainty is 1 bit in either condition, and in the Side & Emotion cueing condition it is nominally 0 bits (see Footnote 1). Hick’s law would predict the same reduction of latencies for the Emotion cueing condition as for the Side cueing condition, and a further reduction for the combined Side & Emotion cueing. Therefore, the first hypothesis tested in this study was that a 1-bit reduction in S-R uncertainty would result in a similar reduction of response latencies for spatial and emotion cueing.
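To make the bit counts above concrete, they follow directly from Shannon entropy for equally likely alternatives; a minimal illustration in Python (not the authors’ code):

import math

def sr_uncertainty_bits(n_alternatives):
    # Shannon entropy, in bits, of n equally likely S-R alternatives: H = log2(n).
    return math.log2(n_alternatives)

print(sr_uncertainty_bits(4))  # Uncued: 2 locations x 2 emotions -> 2.0 bits
print(sr_uncertainty_bits(2))  # Side-only or Emotion-only cue -> 1.0 bit
print(sr_uncertainty_bits(1))  # Side & Emotion cue -> 0.0 bits (nominal; see Footnote 1)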

Figure 1. Trial procedure.


A fixation cross at the start of each trial was followed by a cue that was either uninformative (Uncued condition), indicated the side of the screen on which the stimulus was about to appear (Side cue condition), indicated the emotion displayed by the upcoming stimulus face (Emotion cue condition), or indicated both (Side & Emotion cue condition, shown in the figure). Participants could make a key-press response at any time after stimulus onset or during the blank screen that followed the stimulus presentation, after which feedback indicated whether the response was correct, incorrect, or late.

Another hypothesis that we tested in this study, based on prior research, was that the visual hemifield in which the stimulus was presented should differentially affect response latencies to positive and negative stimuli (faster responses to happy faces in the right hemifield and to angry faces in the left hemifield). Specifically, we expected that cueing would interact with the side of presentation and other stimulus characteristics (the reasons for this prediction are discussed below). A large body of research (reviewed in Rogers, Vallortigara, and Andrew, 2013) in both humans and many other species shows substantial visual field asymmetries in responding to emotionally positive and negative stimuli. For example, many species are more reactive to negative stimuli (e.g., predators) appearing on the left, and positive stimuli appearing on the right (processed initially by the right and left hemispheres, respectively). Aggression is mainly controlled by the right hemisphere and thus tends to be directed more to one’s left side, whereas the left hemisphere seems more engaged in precise discrimination and categorization of stimuli (Rogers et al., 2013, pp. 13–15). It has also been proposed that the right hemisphere is more involved in responding to novel stimuli and the left hemisphere, to familiar stimuli (MacNeilage et al., 2009). The right hemisphere is thought to be superior at processing negative emotion, whereas a left hemisphere advantage has been reported for faces displaying positive emotion (Davidson, 1992, 1995; Davidson & Irwin, 1999; Silberman & Weingartner, 1986). This was thought to explain faster and more accurate responses to negative stimuli presented on the left side, and more accurate responses to positive stimuli presented on the right side (Jansari, Tranel, & Adolphs, 2000; Onal-Hartmann, Pauli, Ocklenburg, & Gunturkun, 2012). Our recent studies using emotional faces likewise show visual hemifield asymmetries in comparing the valence of facial crowds, with higher accuracy in making task-congruent decisions (avoidance of angry vs. neutral crowd; approach of happy vs. neutral crowd) when stimuli were presented on the left, and higher accuracy for implicit decisions (avoiding neutral vs. happy crowd; approaching neutral vs. angry crowd) when stimuli were presented on the right (Im et al., 2017a). Thus, to test our second hypothesis, we presented faces in the left and right visual periphery in this study.

The third hypothesis had to do with interactions between changeable and stable facial cues and the effects of predictive cues on these interactions. The human face provides a wealth of information signaling the intent and ability to cause harm or offer help via a combination of dynamic (e.g., facial expression, eye gaze) and more stable (e.g., sex, race, maturity) cues. These dynamic and static cues combine to provide a shared signal (Adams & Kleck, 2003; Adams & Kveraga, 2015; Adams, Albohn, & Kveraga, 2017), whose value is amplified when the cues are congruent and reduced when they are not. Identity cues can have a substantial influence in guiding our reactions to the emotional face stimuli that we encounter. For example, an angry facial expression is more effective when conveyed by males than by females, and is amplified by other cues of facial maturity and masculinity (Zebrowitz, 1997). In tasks that involve the recognition of emotion in face stimuli, participants perceive greater intensity in, and respond faster to, happy female faces than angry female faces, and to angry male faces than happy male faces (e.g., Becker, Kenrick, Neuberg, Blackwell, & Smith, 2007; Hess, Adams, & Kleck, 2004; see Adams, Hess, & Kleck, 2015, for a review). This effect coincides with the ecological perspective that angry faces are more often male, as males are more likely to engage in violent behavior (Trivers, 1985), whereas happy faces are more often female, because females are more likely than males to offer social support (Taylor et al., 2000). It has thereby been suggested that it is more adaptive for human beings to prioritize processing of, and quickly respond to, women who can be of aid and men who intend to cause harm (Tay, 2015).

Along with effects driven by sex-linked face cues such as masculinity, race cues are intertwined with perceived masculinity and thus also guide our processing of face stimuli. Categorizing the sex of a face is facilitated when the phenotypes and stereotypes associated with the race of the face match its sex (e.g., Johnson, Freeman, & Pauker, 2012). For instance, Black male faces are responded to quickly because Black faces are perceived as more masculine, and thus more threatening, than White faces (Goff, Thomas, & Jackson, 2008). A similar effect is found when participants are asked to categorize the race of a face, as responses to Black male faces are facilitated (Carpinella, Chen, Hamilton, & Johnson, 2015). This interaction between race and gender also extends to the processing of emotional face stimuli, with participants often classifying Black male faces as threatening (Cottrell & Neuberg, 2005; Maner et al., 2005). In tasks that require categorization of the emotion displayed by a face, White participants respond faster to happy White faces than to angry White faces, and faster to angry Black faces than to happy Black faces (e.g., Hugenberg, 2005). Hugenberg (2005) suggests that this effect is likely driven by evaluative fluency and dysfluency, in that responses are facilitated when race and expression match in evaluative context (i.e., Black with negativity, White with positivity) but hindered when they do not.

Taken together, these studies present evidence that stable face cues, such as sex and race, influence our decisions about the faces we encounter and interact with expressive cues, such as facial expression and eye gaze. This interaction results in faster responding when the stable and changeable cues are congruent and slower responding when they are not. In the current study, we expected that in the Uncued condition, where uncertainty about the face stimulus about to be presented is highest, our results would be similar to those of previous studies that have found interactions of stable identity cues with expressive cues, reporting faster responses to happy female, angry male, happy White, angry Black, and Black male faces (e.g., Cottrell & Neuberg, 2005; Hugenberg, 2005; Becker et al., 2007; Trawalter et al., 2008). Another study by Becker and colleagues (Becker, Mortensen, Ackerman, et al., 2011) found that despite being explicitly instructed not to rely on identity (ethnicity and sex) cues, which were non-predictive for their task, subjects nonetheless could not ignore identity cues in making friend/enemy decisions and showed significant biases. However, reducing uncertainty by cueing the emotional valence or spatial location of the upcoming face stimulus may decrease reliance on the contextual facilitation provided by facial identity cues. Thus, the third hypothesis we tested in this study was whether emotion versus spatial cueing would reduce or even eliminate the influence of stable identity cues in the face stimuli.

Methods

Participants

Forty-four participants were recruited from the participant pool at the Pennsylvania State University and participated in exchange for course credit (mean age: 18.77 years, SD = 1.12; 22 female; all right-handed). Each participant had normal or corrected-to-normal vision and provided informed consent. No participants were excluded from the study. All materials and procedures were approved by the Institutional Review Board at the Pennsylvania State University.

Apparatus & Stimuli

The stimulus set comprised face images of 64 different identities randomly selected from the Chicago Face Database (Ma, Correll, & Wittenbrink, 2015). Of the 64 identities, 25% were Caucasian males, 25% Caucasian females, 25% African American males, and 25% African American females. For each model, two images were included in the stimulus set: one displaying a happy facial expression and one displaying an angry facial expression. This resulted in a final stimulus set comprising 128 unique images.

The face images each measured 9.71° × 13.68° at a viewing distance of 50 cm. Stimuli were presented on a gray background. All stimulus presentation and behavioral response collection for this experiment were controlled using PsychoPy software (Peirce, 2007) running on a computer with an LCD monitor (resolution: 1440 × 900 pixels).
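For readers reconstructing the display geometry, the reported visual angles follow from physical size and viewing distance via the standard relation sketched below; the physical stimulus dimensions in the example are back-computed assumptions, not values reported in the paper.

import math

def visual_angle_deg(size_cm, distance_cm=50.0):
    # Visual angle subtended by a stimulus of a given physical size at a given viewing distance.
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

# Assumed physical dimensions: a face image roughly 8.5 cm wide and 12 cm tall at 50 cm.
print(visual_angle_deg(8.5))   # ~9.7 degrees, close to the reported 9.71-degree width
print(visual_angle_deg(12.0))  # ~13.7 degrees, close to the reported 13.68-degree height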

Procedure

The experiment consisted of 8 equal blocks of 117 trials each (936 trials in total), during which participants were asked to identify the location and emotion of a face stimulus following a cue. Each trial began with the presentation of a fixation cross for 200–400 ms (jittered), which participants were instructed to fixate. A cue display, presented for 1000 ms and consisting of a centrally presented arrow (1.14° × 3.45°) with text labels above and below it, notified participants of the location and emotion of the upcoming face stimulus, only its location, only its emotion, or neither (Uncued trials, in which the cues were uninformative; see Figure 1). The cue arrow pointed left or right to cue the stimulus location, or pointed up to indicate location uncertainty. To cue the emotion displayed on the stimulus, the cue arrow was either cyan (RGB: 52, 202, 203) or purple (RGB: 169, 48, 209), with the colour-to-emotion assignment (happy or angry) counterbalanced across participants. On trials where facial emotion was not cued, the cue arrow colour was black. Cue text above the arrow supported the emotion cue by displaying “Happy Emotion”, “Angry Emotion”, or “Either Emotion”, while cue text below the arrow supported the side cue by displaying “Left Side”, “Right Side”, or “Either Side”. The colour of all cue text matched that of the cue arrow. To discourage participants from responding to the cue onset rather than the face stimulus, the cue was not predictive on 4.3% of trials (i.e., catch trials). This manipulation was successful, as only about half a percent (0.49%) of responses were clearly anticipatory.
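A minimal sketch of how the cue-display properties could be derived for a given trial (a hypothetical helper, not the authors’ PsychoPy code; the colour-to-emotion mapping shown here was counterbalanced across participants):

HAPPY_RGB = (52, 202, 203)   # cyan
ANGRY_RGB = (169, 48, 209)   # purple
UNCUED_RGB = (0, 0, 0)       # black: emotion not cued

def make_cue(cue_side, cue_emotion):
    """Return arrow direction, colour, and text labels for one trial's cue display.

    cue_side: 'left', 'right', or None (side not cued)
    cue_emotion: 'happy', 'angry', or None (emotion not cued)
    """
    arrow = cue_side if cue_side else 'up'
    colour = {'happy': HAPPY_RGB, 'angry': ANGRY_RGB}.get(cue_emotion, UNCUED_RGB)
    text_above = f"{cue_emotion.capitalize()} Emotion" if cue_emotion else "Either Emotion"
    text_below = f"{cue_side.capitalize()} Side" if cue_side else "Either Side"
    return {'arrow': arrow, 'rgb': colour, 'text_above': text_above, 'text_below': text_below}

print(make_cue('left', 'angry'))  # Side & Emotion cue
print(make_cue(None, None))       # Uncued: up arrow, black, "Either Emotion" / "Either Side"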

An exponential function was sampled to produce a jitter of 0–500 ms (0 ms = no blank screen) between the cue display and the face stimulus onset. This was done to discourage subjects from attempting to time the face stimulus onset (see Luce, 1986, pp. 75–79, for a discussion). The face stimulus was then presented for 250 ms on the left or right side of the screen (distance from fixation to the center of the face stimulus: 10.84°). The stimulus duration was selected to be sufficiently long to allow conscious perception of the stimulus, but short enough to minimize exploration of the stimulus via multiple saccades, as the mean latency of the initial saccade even to simple bright stimuli is on the order of ~200–250 ms (Kveraga, Boucher, & Hughes, 2002; Kveraga & Hughes, 2005). Participants were asked to respond to the face stimulus as quickly and accurately as possible using the assigned keys on a standard keyboard. For images presented on the left side of the screen, participants used their left hand to press the ‘s’ key in response to happy faces and the ‘d’ key in response to angry faces, counterbalanced across participants. For images presented on the right side of the screen, participants used their right hand to press the ‘k’ key in response to angry faces and the ‘l’ key in response to happy faces, counterbalanced across participants. Following the face stimulus, a blank screen was presented for 1.6 s, during which participants could still make a response. The centrally presented fixation cross then reappeared for 150 ms along with feedback text presented above fixation that alerted participants that they were “Correct” or “Incorrect” on the trial. If the response time on a trial was longer than 1 s from face onset, or if participants responded to a stimulus with multiple keys, the feedback was replaced with “Late Response” or “Multiple Responses”, respectively.
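The pre-stimulus interval can be produced by rejection-sampling an exponential distribution, as in the sketch below (the mean of the exponential is an assumption; the paper specifies only the exponential shape and the 0–500 ms range):

import random

def sample_jitter_ms(mean_ms=150.0, max_ms=500.0):
    # Draw from an exponential distribution and resample until the value falls
    # within the allowed 0-500 ms window (0 ms means no blank screen is shown).
    while True:
        jitter = random.expovariate(1.0 / mean_ms)
        if jitter <= max_ms:
            return jitter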

Before commencing the experiment proper, participants performed 39–103 practice trials. The practice trials followed the same procedure as the experimental trials with a few minor changes. The first 12 practice trials were slowed (the cue display was presented for 2.0 s and the face stimulus for 0.5 s) so that participants could become accustomed to the task. The remaining practice trials used the same timing as the experimental trials. There were no catch trials in the practice block, so cues were 100% predictive. The blank screen between the cue display and the presentation of the face stimulus was 0–200 ms (jittered). Following the first thirty practice trials, participants’ accuracy was monitored, and once they responded correctly on 9 out of 10 consecutive trials they were allowed to move on to the experimental trials.
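A sketch of the practice-termination rule described above (an assumed helper, evaluated after each practice trial beyond the first thirty):

def passed_practice(correct_flags, min_trials=30, window=10, required=9):
    # correct_flags: list of booleans, one per completed practice trial.
    # The criterion is only checked after the first 30 practice trials and is met
    # once at least 9 of the last 10 trials were answered correctly.
    if len(correct_flags) <= min_trials:
        return False
    return sum(correct_flags[-window:]) >= required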

Results

All response time data reported here were calculated by first removing all response times shorter than 100 ms (as even the fastest non-anticipatory manual responses are highly unlikely to be shorter than that; see Luce, 1986, pp. 58–65) or longer than 1500 ms. This resulted in the removal of 0.8% of responses. The 4.3% of total trials that were ‘catch’ trials were also removed from the analysis. Overall, participants achieved an accuracy of 86.26% (SD 8.7). Only correct trials were included in subsequent analyses of response times.
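The trial-exclusion steps described above amount to a simple filter over the trial-level data; a sketch with assumed column names (not the authors’ analysis code):

import pandas as pd

def preprocess_rts(trials: pd.DataFrame) -> pd.DataFrame:
    # Drop catch trials, anticipatory (<100 ms) and overly slow (>1500 ms) responses,
    # then keep only correct trials for the response-time analyses.
    trials = trials[~trials['is_catch']]
    trials = trials[(trials['rt_ms'] >= 100) & (trials['rt_ms'] <= 1500)]
    return trials[trials['correct']]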

We analyzed the RT results using a repeated-measures ANOVA with the following within-subjects factors: Cue Type (4 levels: Uncued, Cued Side, Cued Emotion, Cued Side & Emotion); Emotion of the stimulus face (2 levels: Happy, Angry); Side of stimulus face presentation (2 levels: Left, Right); Race of the stimulus face (2 levels: Black, White); and Gender of the stimulus face (2 levels: Male, Female). The main effect of Cue Type was significant (F(3,129) = 92.93, p < 0.0001), as Side and/or Emotion cueing resulted in faster RTs than in the Uncued condition [Uncued: 636 (SD 60) ms; Side: 604 (SD 62) ms; Emotion: 571 (SD 71) ms; Side & Emotion: 507 (SD 81) ms]. Post-hoc contrasts confirmed that Uncued responses were significantly slower than Side-cued responses (636 vs. 604 ms, t(43) = 10.3, p < 0.00001), which in turn were slower than Emotion-cued responses (571 ms, t(43) = 4.4, p < 0.00006). Side & Emotion cueing resulted in faster responses (507 ms) than Emotion-only (t(43) = 10.2, p < 0.00001), Side-only (t(43) = 9.1, p < 0.00001), or Uncued (t(43) = 11.7, p < 0.00001) responses.
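The design can be fit as a fully within-subjects ANOVA; a sketch using statsmodels’ AnovaRM with assumed column names (the paper does not state which analysis software was used):

import pandas as pd
from statsmodels.stats.anova import AnovaRM

def rt_anova(trials: pd.DataFrame):
    # Collapse to one mean correct RT per participant per design cell, then fit the
    # Cue Type x Emotion x Side x Race x Gender repeated-measures ANOVA.
    cells = (trials.groupby(['subject', 'cue_type', 'emotion', 'side', 'race', 'gender'],
                            as_index=False)['rt_ms'].mean())
    return AnovaRM(cells, depvar='rt_ms', subject='subject',
                   within=['cue_type', 'emotion', 'side', 'race', 'gender']).fit()

# print(rt_anova(preprocessed_trials).anova_table)  # hypothetical usage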

The main effect of Emotion was significant (F(1,43) = 8.22, p < 0.0064), with happy faces eliciting shorter RTs than angry faces [574 (SD 101) vs. 585 (SD 95) ms]. The main effect of Side was also significant (F(1,43) = 9.73, p < 0.0032), with stimuli presented on the right being responded to more quickly than those on the left [586 (SD 97) vs. 573 (SD 99) ms]. Neither of these main effects can be explained by an accuracy difference, as participants’ accuracy did not differ between happy (M = 85.99%, SD 9.0) and angry trials (M = 86.53%, SD 9.1), t(43) = 0.74, p = 0.465, or between left (M = 86.53%, SD 8.8) and right visual hemifield trials (M = 85.99%, SD 9.0), t(43) = 0.86, p = 0.393. The main effect of face Race was significant (F(1,43) = 5.18, p < 0.0285), with Black faces eliciting shorter RTs overall than White faces [572 (SD 99) vs. 582 (SD 100) ms], but the main effect of face Gender was not (p > 0.64).

The five-way Cue Type × Emotion × Side × Race × Gender interaction was significant (F(3,129) = 3.17, p < 0.0265), indicating that race and gender cues interacted with stimulus emotion and side differently depending on the cueing condition. Also significant were the Emotion × Race (F(1,43) = 18.76, p < 0.0001) and Cue Type × Race × Gender (F(3,129) = 5.95, p < 0.0008) interactions. Other interactions did not reach significance.

We then examined the Emotion × Race interaction separately for each Cue Type. In the Uncued condition, the Emotion × Race interaction was significant (F(1,43) = 16.24, p < 0.0002), but it became weaker with Side cueing (F(1,43) = 4.45, p < 0.041) and non-significant with Emotion (p > 0.08) and Side & Emotion cueing (p > 0.54). Similarly, the Emotion × Gender interaction was significant in the Uncued condition (F(1,43) = 4.14, p < 0.045) and became non-significant with Side (p > 0.14), Emotion (p > 0.48), and Side & Emotion (p > 0.67) cueing. These results confirm that cueing, particularly emotion cueing, reduces or eliminates the influence of race and gender identity cues (see further description of these results in Figure 2).

Figure 2. Response time means.


In panel A, the overall effect of Cue Type is shown. In panels B–E, the shade of the line plots corresponds to the shade of the bars in the bar plot, with responses in the Uncued, Side, Emotion, and Side & Emotion cue conditions going from the lightest to the darkest shade and denoted by dotted, dashed, dot-dashed, and solid lines, respectively. Symbols in yellow are responses to happy faces and those in red, to angry faces. Panel B (Emotion × Side) shows the main effect of presentation side and the interactions in the Uncued and Side cue conditions, which disappear with emotion cueing. Panel C (Emotion × Gender) likewise shows interactions between male and female faces and expressed emotion in the Uncued and Side cue conditions (lighter dotted and dashed lines), which are eliminated in the Emotion and Side & Emotion cueing conditions (darker dot-dashed and solid lines). Similar effects can be observed in panel D (Emotion × Race). In panel E (Emotion × Race × Gender), in the Uncued and Side cue conditions angry Black male faces (star-diamond symbols) are categorized faster than happy Black male faces, while happy and angry White female faces (dot-square symbols) show the opposite pattern. However, the effects for Black male faces are reversed, and those for White female faces are attenuated, with Emotion or Emotion & Side cueing, shown in the lower plots.

General Discussion

In this study we investigated how responses to affective stimuli, happy and angry faces, differ depending on the amount of spatial and feature-based uncertainty associated with the stimulus prior to its presentation. Additionally, we examined how the visual hemifield in which faces are presented, and identity cues indicative of the race and sex of the face, interact with responding under varying uncertainty conditions. Observers were asked to identify the facial expression of peripherally presented face stimuli, which were preceded by a cue that indicated the location of the face, the emotion expressed on the face, both location and emotion, or was uninformative. We obtained the following findings: 1) As expected, response times were slowest, and accuracy was lowest, in the uninformative cueing (Uncued) condition, in which S-R uncertainty was highest. Spatial (Side) cueing reduced RTs less than Emotion cueing, and combined Side and Emotion cueing reduced RTs in a superadditive fashion relative to spatial or emotion cueing alone. 2) The visual hemifield of stimulus presentation had a significant effect on RTs, with faces cued and presented in the right visual hemifield producing faster responses overall than those presented in the left hemifield. 3) Happy faces were categorized faster than angry faces overall, and stimulus emotion interacted with stable identity cues (race and gender) depending on the cueing condition: in the Uncued condition, and in some cases with Side cueing, there were expressive-identity cue interactions, but when Emotion was cued (with or without Side cueing), these interactions were eliminated.

The effects of S-R uncertainty

Stimulus-response uncertainty is known to increase manual RTs as a log2 function of the number of S-R alternatives (Hick, 1952), and our results are generally in line with findings from tasks requiring various types of manual responses, in which uncertainty increases RTs either as a log2 function (manual key-presses; Kveraga, Boucher, & Hughes, 2002), a step function (joystick responses; Berryhill et al., 2005), or a quadratic function (visual pointing; Kveraga, Berryhill, & Hughes, 2006). However, in those studies only the number of stimulus locations was varied. Uncertainty about facial expressions to be categorized seems much less straightforward than location uncertainty, as facial expressions differ in their salience, valence, perceptual discriminability, and frequency of exposure (see Calvo & Nummenmaa, 2016, for a review). In other words, cueing participants to facial expressions that are easier (happy) or harder (angry) to categorize is qualitatively different from providing information about whether the stimulus will appear in the left or right visual hemifield. While spatial cueing presumably increased selective attention to the probable location of the face stimulus and motor readiness in the corresponding hand, emotion cueing may have heightened feature-based attention to a particular facial expression in a spatially non-specific manner, based on an internal model of that expression, and increased motor readiness in the digits of both hands corresponding to the response to that emotion. With that in mind, our results suggest that: 1) reduction of facial expression uncertainty has a significantly greater benefit than reduction of location uncertainty; and 2) participants can rapidly combine different forms of predictive cueing (spatial and emotion) to increase response speed in a superadditive fashion.

As the task was designed, with four potential responses identifying the location in which the stimulus face was presented and the emotion it expressed, S-R uncertainty in the Uncued condition was 2 bits (i.e., 4 alternatives, with 2 possible locations and 2 expressions), in the Side-only or Emotion-only cue conditions it was 1 bit (either the emotion or the location had 2 possibilities), and in the joint Side & Emotion cueing condition it was nominally zero (see Footnote 2). While cueing participants to only the location or only the emotion expressed by the upcoming stimulus face nominally reduced the uncertainty about the stimulus by the same amount of information (1 bit), cueing facially expressed emotion speeded up response times significantly more than cueing spatial location. This suggests that reducing uncertainty about facially expressed emotion improves processing efficiency more than does reducing location uncertainty. The trials in which S-R uncertainty was lowest, those in which both location and emotion were cued, had the fastest response times and highest accuracy. Notably, response times in the joint cueing condition were significantly faster than would be predicted from the additive effects of cueing only location and cueing only emotion, suggesting that combined spatial and feature-based cueing had a superadditive effect on response efficiency.
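The superadditivity claim can be illustrated directly with the condition means reported in the Results; a back-of-the-envelope check (a formal test would compare per-participant cueing benefits):

# Mean RTs (ms) from the Results section.
uncued, side_only, emotion_only, both = 636, 604, 571, 507

side_benefit = uncued - side_only          # 32 ms
emotion_benefit = uncued - emotion_only    # 65 ms
additive_prediction = uncued - (side_benefit + emotion_benefit)  # 539 ms
print(both, additive_prediction, both < additive_prediction)     # 507 < 539 -> superadditive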

When we prioritize stimuli in our environment, we not only amplify task-relevant and inhibit task-irrelevant stimuli (see Kastner & Ungerleider, 2000, for a review), but also prioritize stimuli based on affective value, which allows us to determine which stimuli are helpful and should be approached, and which are harmful and should be avoided (Konorski, 1967; Pessoa, 2008; Watson, Wiese, Vaidya, & Tellegen, 1999). Reciprocal links have been found between brain regions responsible for attentional and emotional processing (Vuilleumier, Armony, & Dolan, 2003), suggesting an interaction between these two prioritization systems. Likewise, our findings suggest that when both emotion and location are cued, the two systems interact to produce response times that are much faster than the additive benefits of cueing emotion or location separately.

Rapidly detecting the emotional state of others is advantageous in that it allows us to quickly determine if a stimulus is helpful and should be approached, or harmful and should be avoided (Barrett & Bliss-Moreau, 2009). As the brain is inherently a predictive organ (see O’Callaghan et al., 2017; Kveraga et al., 2007; Kveraga et al., 2009, for reviews), the stimuli that we encounter interact with the brain’s predictions to govern the formation of our conscious perception. In our study, predictions about the location and about the expression of the stimulus faces interacted very differently with stable identity cues (race and sex). First, in the Uncued condition we found that facial identity cues interacted with expressive cues, accelerating responses to congruent (e.g., male-angry, female-happy, Black-angry, White-happy) cue combinations. This was in line with past behavioral findings showing that facial identity cues, including sex and race, influence response times to emotional faces (Becker et al., 2007; Hugenberg, 2005; Trawalter et al., 2008; Becker et al., 2011). We found these emotion-sex and emotion-race interactions in Uncued trials, as well as in trials where only the location was cued (i.e., Side cue trials). These results support the notion that facial identity cues play a large role in the prioritization of emotional stimuli.

Emotional face stimuli are inherently ambiguous, as there can be large discrepancies in the interpretation of faces displaying basic expressions depending on the context in which they are encountered (see Aviezer, Ensenberg, & Hassin, 2017; Hassin, Aviezer, & Bentin, 2013, for reviews). The sex and race of a face are stable cues that can help in disambiguating and identifying the emotion of a face when contextual factors differ (e.g., Trawalter et al., 2008; Carpinella et al., 2015). When identity cues and emotional valence are congruent with learned norms (e.g., angry Black male, happy White female), responses are made more quickly (e.g., Trawalter et al., 2008). However, when facial emotion was cued in our study, either alone or along with location, these interactions disappeared. Being able to predict the emotional valence of the face thus reduced or eliminated observers’ need to rely on facial identity cues to interpret the emotion portrayed by the face.

Our overall results showing faster responses to happy than to angry face stimuli support past findings that in tasks requiring categorization of emotion, happy stimuli are often responded to faster than angry stimuli (see Nummenmaa & Calvo, 2015, for a review). While recognizing the threat conveyed by angry faces seems critical for survival, and as such could be expected to evoke faster responses than other facial expressions, happy faces are often recognized more quickly than angry faces. This recognition speed advantage has been argued to arise from the perceptual “vividness” of happy expressions and their importance in quickly defusing a falsely perceived threat (Billings, Harrison, & Alden, 1993; see also Becker & Srinivasan, 2014, and Nummenmaa & Calvo, 2015, for reviews). Happy facial expressions are more salient, more distinct both perceptually and affectively from other facial expressions, and more frequently encountered (Becker & Srinivasan, 2014; Calvo & Nummenmaa, 2016). Our results show that this effect is not eliminated even when participants can predict the facial expression of the stimulus with a high degree of certainty before it is presented, as in the Emotion cueing conditions. When participants are cued that the upcoming stimulus will be happy or angry, the uncertainty about the facial expression is greatly reduced. We would expect this to reduce the happy-versus-angry categorization advantage by making the angry expression, the more difficult of the two, easier to resolve. However, this did not happen, and participants were still significantly faster overall in responding to happy stimuli.

Our results also showed faster responses overall to stimuli presented in the right visual hemifield, supporting previous findings that happy stimuli presented in the right visual field are responded to more quickly than those in the left; however, this result runs counter to other findings that angry stimuli are responded to more quickly when presented in the left visual field (Jansari, Tranel, & Adolphs, 2000; Onal-Hartmann, Pauli, Ocklenburg, & Gunturkun, 2012). As can be seen in Figure 2, this effect is mainly driven by faster responses to happy stimuli presented in the right visual hemifield. Our finding of a right hemifield advantage for angry stimuli was much less pronounced, particularly in the Uncued condition, which is most comparable to previous investigations of laterality. There is an ongoing debate regarding the lateralization of emotion processing in the brain. While the right hemisphere hypothesis suggests that the right hemisphere processes all types of emotion (e.g., Borod, Cicero, Obler, et al., 1998), the valence-specific hypothesis suggests that the right hemisphere preferentially processes negative emotions, with positive emotions engaging the left hemisphere more (e.g., Adolphs, Jansari, & Tranel, 2001). Recent work suggests that this lateralization may also differ between prefrontal and subcortical areas (e.g., Beraha et al., 2012). Other lines of research have suggested that while the right hemisphere is geared towards global processing of unexpected stimuli, the left hemisphere is superior at sustained attention towards expected stimuli and local processing, as well as at categorizing stimuli according to learned categories (see Rogers, Vallortigara, & Andrew, 2013, pp. 131–133, for a review). This latter dichotomy may provide some explanation for our results, as we employed a highly structured categorization task in which attention was explicitly cued either to the location and/or to configural features of the face, with the goal of rapidly ascertaining whether the face belonged to one of two categories. Even in the Uncued condition, where the S-R uncertainty was highest, observers knew after the training session that either a happy or an angry face would appear on the left or right side of the screen. Therefore, the nature of the task may have favored the left hemisphere’s processing strengths. Conversely, a task that features less predictable and/or global stimuli, such as arrays of faces varying in the strength of expressed emotion, or hierarchical stimuli, should reveal a right hemisphere processing advantage, as has indeed been found by us (Im et al., 2017a; 2017b) and others (e.g., Deruelle and Fagot, 1997).

While there has been much research investigating responses to emotional stimuli in our environment, this research typically has been done in settings where participants are uncertain of the characteristics of an upcoming emotional stimulus before it is presented. However, many of the situations in our daily lives involve the use of contextual cues and familiarity to predict the characteristics of upcoming stimuli, including race, sex, and setting. We have demonstrated here that the benefit of providing observers with certain predictive stimulus characteristics prior to stimulus presentation varies depending on whether location or emotion is cued, and that when both stimulus location and emotional expression are cued, these predictions interact to improve response times in a superadditive manner. This suggests that feature-based attention to facial expression cues benefits from being spatially focused. Lastly, we have shown that when facial emotional expression can be anticipated, there is reduced reliance on sex- and race-linked identity cues.

Acknowledgments:

Funding for this research was provided by R01 MH101194 to K.K. and R.B.A. The authors declare no conflict of interest.

Footnotes

1. Because 4.3% of trials were ‘catch’ trials, included to make sure subjects evaluated each stimulus before responding, the probability that, e.g., a happy face would appear on the left as cued was less than 1, and the S-R uncertainty in this case was therefore slightly higher than 0 bits.

2. Observers still had to verify that the stimulus matched the cue, as a small percentage (4.3%) of trials were catch trials to minimize guessing.

References

1. Adams RB Jr., Albohn DN, & Kveraga K (2017). Social vision: Applying a social-functional approach to face and expression perception. Current Directions in Psychological Science, 26(3), 243–248.
2. Adams RB Jr., Hess U, & Kleck RE (2015). The intersection of gender-related facial appearance and facial displays of emotion. Emotion Review, 7, 5–13.
3. Adams RB Jr. & Kveraga K (2015). Social vision: Functional forecasting and the integration of compound threat cues. Review of Philosophy and Psychology, 6(4), 591–610. doi: 10.1007/s13164-015-0256-1
4. Adolphs R, Jansari A, & Tranel D (2001). Hemispheric perception of emotional valence from facial expressions. Neuropsychology, 15(4), 516–524.
5. Al-Janabi S, MacLeod C, & Rhodes G (2012). Non-threatening other-race faces capture visual attention: Evidence from a dot-probe task. PLoS ONE, 7(10), e46119.
6. Aviezer H, Ensenberg N, & Hassin R (2017). The inherently contextualized nature of facial emotion perception. Current Opinion in Psychology, 17, 47–54.
7. Barrett L & Bliss-Moreau E (2009). Affect as a psychological primitive. Advances in Experimental Social Psychology, 41, 167–218.
8. Becker DV, Kenrick DT, Neuberg SL, Blackwell KC, & Smith DM (2007). The confounded nature of angry men and happy women. Journal of Personality and Social Psychology, 92(2), 179–190.
9. Becker DV, Mortensen CR, Ackerman JM, Shapiro JR, Anderson US, Sasaki T, Maner JK, Neuberg SL, & Kenrick DT (2011). Signal detection on the battlefield: Priming self-protection vs. revenge-mindedness differentially modulates the detection of enemies and allies. PLoS ONE, 6(9), e23929. doi: 10.1371/journal.pone.0023929
10. Becker DV & Srinivasan N (2014). The vividness of the happy face. Current Directions in Psychological Science, 23(3), 189–194.
11. Beltran D & Calvo M (2015). Brain signatures of perceiving a smile: Time course and source localization. Human Brain Mapping, 36(11), 4287–4303.
12. Beraha E, Eggers J, Hindi Attar C, Gutwinski S, Schlagenhauf F, Stoy M, … Bermpohl F (2012). Hemispheric asymmetry for affective stimulus processing in healthy subjects – An fMRI study. PLoS ONE, 7(10), 1–9.
13. Berryhill ME, Kveraga K, & Hughes HC (2005). Effects of directional uncertainty on visually-guided joystick pointing. Perceptual and Motor Skills, 100, 267–274.
14. Billings L, Harrison L, & Alden J (1993). Age differences among women in the functional asymmetry for bias in facial affect perception. Bulletin of the Psychonomic Society, 31(4), 317–320.
15. Borod JC, Cicero BA, Obler LK, Welkowitz J, Erhan HM, Santschi C, … Agosti RM (1998). Right hemisphere emotional perception: Evidence across multiple channels. Neuropsychology, 12(3), 446–458.
16. Calvo M & Nummenmaa L (2016). Perceptual and affective mechanisms in facial expression recognition: An integrative review. Cognition and Emotion, 30(6), 1081–1106.
17. Carpinella C, Chen J, Hamilton D, & Johnson KL (2015). Gendered facial cues influence race categorizations. Personality and Social Psychology Bulletin, 41(3), 405–419.
18. Cottrell CA & Neuberg SL (2005). Different emotional reactions to different groups: A sociofunctional threat-based approach to “prejudice”. Journal of Personality and Social Psychology, 88, 770–789.
19. Davidson RJ (1992). Anterior cerebral asymmetry and the nature of emotion. Brain & Cognition, 20, 125–151.
20. Davidson RJ (1995). Cerebral asymmetry, emotion, and affective style. In Davidson RJ & Hugdahl K (Eds.), Brain Asymmetry (pp. 361–387). Cambridge, MA: MIT Press.
21. Davidson RJ & Irwin W (1999). The functional neuroanatomy of emotion and affective style. Trends in Cognitive Sciences, 3, 11–21.
22. Deruelle C & Fagot J (1997). Hemispheric lateralisation and global precedence effects in the processing of visual stimuli by humans and baboons (Papio papio). Laterality, 2, 233–246.
23. Goff P, Thomas M, & Jackson M (2008). “Ain’t I a woman?”: Towards an intersectional approach to person perception and group-based harms. Sex Roles, 59(5–6), 392–403.
24. Hassin R, Aviezer H, & Bentin S (2013). Inherently ambiguous: Facial expressions of emotions, in context. Emotion Review, 5(1), 60–65.
25. Hess U, Adams RB Jr., & Kleck RE (2004). Dominance, gender and emotion expression. Emotion, 4, 378–388.
26. Hick WE (1952). On the rate of gain of information. Quarterly Journal of Experimental Psychology, 4(1), 11–26.
27. Hugenberg K (2005). Social categorization and the perception of facial affect: Target race moderates the response latency advantage for happy faces. Emotion, 5(3), 267–276.
28. Im HY, Cushing CA, Albohn DN, Steiner TG, Adams RB Jr., & Kveraga K (2017b). Differential hemispheric and visual stream contributions to ensemble coding of crowd emotion. Nature Human Behaviour, 1, 828–842. doi: 10.1038/s41562-017-0225-z
29. Im HY, Chong SC, Sun J, Steiner TG, Albohn DN, Adams RB Jr., & Kveraga K (2017c). Cross-cultural and hemispheric laterality effects on the ensemble coding of emotion in facial crowds. Culture and Brain, 1–18. doi: 10.1007/s40167-017-0054-y
30. Jansari A, Tranel D, & Adolphs R (2000). A valence-specific lateral bias for discriminating emotional facial expressions in free field. Cognition and Emotion, 14(3), 341–353.
31. Johnson K, Freeman J, & Pauker K (2012). Race is gendered: How covarying phenotypes and stereotypes bias sex categorization. Journal of Personality and Social Psychology, 102(1), 116–131.
32. Kastner S & Ungerleider LG (2000). Mechanisms of visual attention in the human cortex. Annual Review of Neuroscience, 23, 315–341.
33. Konorski J (1967). Integrative activity of the brain: An interdisciplinary approach. Chicago: University of Chicago Press.
34. Kveraga K, Berryhill M, & Hughes HC (2006). Directional uncertainty in visually guided pointing. Perceptual and Motor Skills, 102(1), 125–132.
35. Kveraga K, Boucher L, & Hughes HC (2002). Saccades operate in violation of Hick’s law. Experimental Brain Research, 146(3), 307–314.
36. Kveraga K, Ghuman AS, & Bar M (2007). Top-down predictions in the cognitive brain. Brain and Cognition, 65(2), 145–168.
37. Kveraga K, Boshyan J, & Bar M (2009). The proactive brain: Using memory-based predictions in visual recognition. In Dickinson S, Tarr M, Leonardis A, & Schiele B (Eds.), Object Categorization: Computer and Human Vision Perspectives (pp. 384–401). Cambridge University Press.
38. Luce RD (1986). Response times: Their role in inferring elementary mental organization. New York: Oxford University Press.
39. Ma DS, Correll J, & Wittenbrink B (2015). The Chicago Face Database: A free stimulus set of faces and norming data. Behavior Research Methods, 47, 1122–1135.
40. Maner JK, Kenrick DT, Neuberg SL, Becker DV, … Schaller M (2005). Functional projection: How fundamental social motives can bias interpersonal perception. Journal of Personality and Social Psychology, 88, 63–78.
41. Marsh AA, Ambady N, & Kleck RE (2005). The effects of fear and anger facial expressions on approach- and avoidance-related behaviours. Emotion, 5(1), 119–124.
42. Nakashima SF, Morimoto Y, Takano Y, Yoshikawa S, & Hugenberg K (2014). Faces in the dark: Interactive effects of darkness and anxiety on the memory for threatening faces. Frontiers in Psychology, 5, 1091.
43. Notebaert L, Crombez G, Van Damme S, De Houwer J, & Theeuwes J (2010). Looking out for danger: An attentional bias towards spatially predictable threatening stimuli. Behaviour Research and Therapy, 48, 1150–1154.
44. Nummenmaa L & Calvo MG (2015). Dissociation between recognition and detection advantage for facial expressions: A meta-analysis. Emotion, 15(2), 243–256.
45. O’Callaghan C, Kveraga K, Shine JM, Adams RB Jr., & Bar M (2017). Predictions penetrate perception: Converging insights from brain, behaviour, and disorder. Consciousness and Cognition, 47, 63–74.
46. Onal-Hartmann C, Pauli P, Ocklenburg S, & Gunturkun O (2012). The motor side of emotions: Investigating the relationship between hemispheres, motor reactions, and emotional stimuli. Psychological Research, 76(3), 311–316.
47. Peirce JW (2007). PsychoPy—Psychophysics software in Python. Journal of Neuroscience Methods, 162(1–2), 8–13. doi: 10.1016/j.jneumeth.2006.11.017
48. Pessoa L (2008). On the relationship between emotion and cognition. Nature Reviews Neuroscience, 9, 148–158.
49. Ratcliff R (1993). Methods for dealing with reaction time outliers. Psychological Bulletin, 114(3), 510–532.
50. Rogers LJ, Vallortigara G, & Andrew RJ (2013). Divided Brains: The Biology and Behaviour of Brain Asymmetries. Cambridge University Press. doi: 10.1017/CBO9780511793899
51. Scherer KR & Wallbott HG (1994). Evidence for universality and cultural variation of differential emotion response patterning. Journal of Personality and Social Psychology, 66(2), 310–328.
52. Silberman EK & Weingartner H (1986). Hemispheric lateralization of functions related to emotion. Brain and Cognition, 5, 322–353.
53. Tay PKC (2015). The adaptive value associated with expressing and perceiving angry-male and happy-female faces. Frontiers in Psychology, 6, 851.
54. Taylor SE, Klein LC, Lewis BP, Gruenewald TL, Gurung RAR, & Updegraff JA (2000). Biobehavioural responses to stress in females: Tend-and-befriend, not fight-or-flight. Psychological Review, 107, 411–429.
55. Trawalter S, Todd AR, Baird AA, & Richeson JA (2008). Attending to threat: Race-based patterns of selective attention. Journal of Experimental Social Psychology, 44, 1322–1327.
56. Trivers RL (1985). Social evolution. Menlo Park, CA: Benjamin/Cummings.
57. Vuilleumier P, Armony JL, & Dolan RJ (2003). Reciprocal links between emotion and attention. In Friston KJ, Frith CD, Dolan RJ, Price C, Ashburner J, Penny W, Zeki S, & Frackowiak RSJ (Eds.), Human Brain Function (2nd ed., pp. 419–444). New York: Academic Press.
58. Watson D, Wiese D, Vaidya J, & Tellegen A (1999). The two general activation systems of affect: Structural findings, evolutionary considerations, and psychobiological evidence. Journal of Personality and Social Psychology, 76, 820–838.
59. Zebrowitz LA (1997). Reading faces: Window to the soul? New directions in social psychology. Boulder, CO: Westview Press.
