Abstract
Spatial attention not only enhances early visual processing and improves performance but also alters the phenomenology of basic perceptual features. However, despite extensive research on attention altering appearance, it is still unknown whether attention also intensifies perceived facial emotional expressions. We investigated the effect of exogenous attention on two categories of emotion, one positive (happy) and one negative (sad), in separate sessions. Exogenous attention was manipulated using peripheral cues followed by two faces varying in emotional intensity that were presented on either side of fixation. Participants were asked to report the location of the face displaying the higher intensity of emotion. At a short cue-to-target interval [CTI; Experiment 1 (60 ms)], participants reported the cued emotional face as more intense in expression compared with the uncued face. However, at a longer CTI [Experiment 2 (500 ms)], this effect was absent. The results show that exogenous attention enhances the appearance of higher-level features, such as emotional intensity, irrespective of valence. Further, two experiments investigated the mediating role of facial contrast as a possible mechanism underlying the observed effect. Although the results show that higher contrast faces are judged as higher in emotional intensity, the spatial attention effects seem to depend on task instructions. Possible mechanisms underlying the attentional effects on emotion intensity are discussed.
Keywords: exogenous attention, emotion intensity, appearance, happy, sad
Introduction
Consider Leonardo da Vinci's (1452–1519) 16th-century masterpiece "The Mona Lisa" (La Gioconda, the name under which it is displayed in Salon 16 of the Musée du Louvre, Paris), known for its mysterious smile. The appearance and disappearance of her smile have been debated for centuries. An interesting observation is that the smile appears more certain when one "attends to the face" covertly (Livingstone 2000). This raises an important question: Does covert spatial attention influence the appearance of emotional expressions?
In general, covert spatial attention improves task performance in terms of speeded detection and/or better accuracy at the cued (attended) location compared with the uncued (unattended) location (Posner 1980; Carrasco 2014). Whether attention also influences conscious experience (phenomenology) has, however, been a much-debated topic. Empirical studies in the past decade have shown that covert attention alters the appearance of objects (Carrasco et al. 2004); more specifically, it enhances perceived stimulus properties such as contrast (Carrasco et al. 2004), spatial frequency, gap size (Gobell and Carrasco 2005), and color saturation (Fuller and Carrasco 2006). Recent studies have shown enhancement for higher order features such as perceived size (Anton-Erxleben et al. 2007) and even facial attractiveness (Störmer and Alvarez 2016). Although attention–emotion studies have shown that attention is necessary to process emotional information (Pourtois et al. 2013; Carretié 2014), it is not known whether attention can intensify emotional expressions.
In everyday social interactions, not only the valence of emotions (happy, sad, or angry) but also the intensity/degree of the expressed emotion (happier, sadder, or angrier) influences our impressions of people and their actions (Motley and Camden 1988). Changes in emotion intensity have been described as the relative degree of displacement (in a series of morphs) either from neutral toward a particular emotion (within-boundary deviation; e.g. neutral to happy) or from one emotion toward another (cross-boundary deviation; e.g. angry to fear) (Hess et al. 1997; Young et al. 1997). Although no study has directly investigated the effect of exogenous attention on perceived expression intensity, a few studies have demonstrated effects of contextual factors on perceived intensity (Ethofer et al. 2006; Sherman et al. 2012; Leleu et al. 2015). For example, Ethofer et al. (2006) showed that emotion-conveying voices influence the perceived intensity of facial expressions; simultaneously presented fearful voices made participants rate neutral-to-fearful faces as more fearful. Similarly, laughter increased the perceived intensity of happy facial expressions (Sherman et al. 2012). In another study, an expression (happy, disgust, or anger) was perceived with less visual information when the odor context (pleasant or unpleasant) was emotionally congruent (Leleu et al. 2015). Likewise, emotional mood (Niedenthal et al. 2001) and affective learning (Lim and Pessoa 2008) have been shown to influence the perceived intensity of emotional expressions.
With respect to attentional modulation of emotional information, previous studies have shown that affective value and responses are influenced by attention (Fenske and Raymond 2006). For example, abstract patterns that acted as distractors in a task and were ignored received devaluation compared to when they were attended (Raymond et al. 2003). Participants were more empathic toward attended compared with unattended pictures of humanitarian crisis victims (Dickert and Slovic 2009). A separate set of studies has investigated how facial expressions strengthen attentional benefits on perception (Phelps et al. 2006; Bocanegra and Zeelenberg 2009, 2011). Phelps et al. (2006) investigated individual as well as conjoint effects of attention and emotion on early visual features such as contrast sensitivity. They showed that fearful faces used as exogenous cues enhanced the contrast sensitivity of a subsequently presented Gabor patch, in comparison with neutral faces, in an orientation judgment task. These effects were present mostly for low spatial frequencies and diminished for high spatial frequencies (Bocanegra and Zeelenberg 2009). However, no study has systematically manipulated spatial attention to investigate covert attention effects on perceived emotional intensity.
In this study, we investigated whether covert exogenous attention alters the subjective appearance of emotional expressions and whether such effects vary across expression categories such as happy and sad. Based on previously demonstrated attentional effects on appearance (Carrasco et al. 2004; Störmer et al. 2009; Carrasco 2014; Störmer and Alvarez 2016), one may expect that exogenous attention could also intensify the appearance of emotional facial expressions, such that a less happy face, when attended, might appear happier, or a less sad face sadder. Alternatively, attention might influence lower level features such as contrast, which in turn influence the perceived emotion. Evidence for an effect of contrast on perceived attractiveness (Störmer and Alvarez 2016) and age (Porcheron et al. 2013) is available, but not for perceived emotional intensity. Hence, we also investigated the contribution of facial contrast as a possible mechanism that may underlie attentional benefits on emotion appearance.
To investigate the effect of spatial attention on emotion appearance, gradations of emotional intensity morphs within an emotion category (Calder et al. 1996; Graham et al. 2007) were generated. We used one positive and one negative facial expression to create morphs from neutral to each emotion category, i.e. from neutral to happy and from neutral to sad. The critical aspect of the design was the generation and standardization of emotional expressions (or morphs) that are "perceptually" equidistant (rather than objectively equidistant) in emotional intensity. Two preliminary experiments (see Supplementary Material) were conducted, a threshold detection experiment (to detect the lowest intensity happy/sad face; Supplementary Table S1) and an intensity discrimination experiment (to discriminate between various levels of emotional intensity within a category; Supplementary Fig. S1), to select a set of nine intensity levels (lowest to highest). Four such sets of facial expressions (happy-male, happy-female, sad-male, and sad-female) were generated (see Fig. 1).
Figure 1.
Emotion stimuli used in the experiments: the final set of nine standardized levels of emotional intensity, in increasing order, for a female happy face (first row), a female sad face (second row), a male happy face (third row), and a male sad face (bottom row). Each set was used in separate sessions.
Experiment 1: exogenous cuing at short CTI (60 ms)
The experimental task was a modified version of the two-alternative forced-choice (2AFC) comparative judgment task developed by Carrasco et al. (2004), in which both subjective appearance changes and objective performance measures can be obtained (Treue 2004; Fuller and Carrasco 2006; Ling and Carrasco 2007; Carrasco et al. 2008). In this study, exogenous attention was manipulated using transient peripheral cues or a neutral cue, and the task of the participants was to report the location of the face that appeared happier (or sadder, in separate sessions). An earlier study (Fuller and Carrasco 2006) suggested that spatial attention influences intensity for prothetic dimensions but not metathetic dimensions. A prothetic dimension is a quantifiable property with a meaningful zero (like saturation), i.e. a stimulus varying from less to more, whereas a metathetic dimension is one in which the stimulus quality changes, e.g. hue changing from red to green. Given that emotional intensity can be more or less (a face can be more happy or less happy) and can be zero (a neutral face), a similar analogy might apply to attentional effects on emotion appearance.
It was hypothesized that participants would report the cued (attended) side as higher in emotional intensity in comparison with the uncued (unattended) side. This would be reflected in a leftward shift of the psychometric function for the cued condition and a rightward shift for the uncued condition relative to the neutral condition. However, if spatial cuing does not influence higher order emotion appearance, the psychometric functions for the cued and uncued conditions should not differ significantly from the neutral condition. To evaluate the success of the attentional manipulation, performance (reaction time and accuracy) in the orthogonal task (location judgment) was also measured. It was hypothesized that participants would be faster and/or more accurate (Posner et al. 1980) in the cued condition compared with the uncued condition, as previous studies have shown performance-based differences for spatial attention concurrent with appearance changes (Gobell and Carrasco 2005; Liu et al. 2009; Abrams et al. 2010).
Method
Stimulus generation and preliminary experiments
The influence of gender and identity was minimized by using one male and one female face. We selected front-facing photographs of two models (one male, one female), each posing intense facial expressions of two basic emotions (happy and sad) and a neutral expression, from the CBCS Emotional Faces Database (Grewal et al. 2012). The photographs were validated for expression, intensity, and arousal using a five-point Likert-type scale, and the two models selected showed similar ratings for both expressions. Non-emotional features, such as hair and ears, were removed and the faces were masked with an oval aperture. We conducted two preliminary experiments (see Supplementary Materials, Supplementary Table S1; Supplementary Fig. S1) to select nine levels of emotional intensity for each of the two identities (male and female) and two expressions (happy and sad).
Main experiment
Participants
Fourteen naive student volunteers (6 females, age range = 18–30 years) from the University of Allahabad, with normal or corrected-to-normal vision, provided informed consent before participating in the study and were compensated for their time. The sample size was estimated from previous studies that investigated the effect of exogenous cuing of attention on apparent stimulus contrast (Carrasco et al. 2004; Liu et al. 2006; Anton-Erxleben et al. 2010). The study was approved by the Institutional Ethics Committee of the University of Allahabad. Two participants were excluded from the analysis (one was an outlier in task accuracy and the other did not complete all the sessions).
Apparatus and stimuli
Stimuli were presented using E-Prime 2.0 Professional on a 19-inch CRT monitor with a refresh rate of 85 Hz, a resolution of 1024 × 768 pixels, and the sRGB color space. Luminance was measured using a ColorCal colorimeter (CIE 1931 x, y, L; Cambridge Research Systems). The display consisted of a gray fixation dot (0.1°; 77 cd/m2) presented on a dark gray background (9.7 cd/m2) with two rectangular frames subtending 5.8° × 10° (4.6 cd/m2, 0.3° thick), located 5.7° (fixation to the center of the frame) on either side of the fixation point. The target faces, subtending 5.5° × 3.5°, were presented within the rectangular frames. The exogenous peripheral cues were two circular white dots (each subtending 0.3° × 0.3°, 104 cd/m2). The two dots were vertically aligned at 5.7° from fixation, separated vertically by 9° inside the rectangular frame, with each placed 0.5° inward from the horizontal border of the frame. The cue for the neutral condition was a dot at the center (Liu et al. 2006).
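For readers who wish to reproduce the display geometry, the conversion from visual angle to screen pixels is straightforward. The sketch below (Python) is illustrative only: the 57 cm viewing distance is the one stated for Experiments 3 and 4 and is assumed here, and the 36 cm visible screen width is a hypothetical value, since the monitor's physical dimensions and the viewing distance for Experiment 1 are not reported.

```python
import math

def deg_to_px(size_deg, viewing_distance_cm, px_per_cm):
    """Convert a visual angle (degrees) to on-screen pixels."""
    size_cm = 2 * viewing_distance_cm * math.tan(math.radians(size_deg) / 2)
    return size_cm * px_per_cm

# Assumed values (not reported for Experiment 1): 57 cm viewing distance
# and a hypothetical 36 cm-wide visible area for the 1024-pixel-wide CRT.
px_per_cm = 1024 / 36.0
for label, deg in [("face width", 3.5), ("face height", 5.5),
                   ("frame eccentricity", 5.7), ("cue dot", 0.3)]:
    print(f"{label}: {deg} deg -> {deg_to_px(deg, 57, px_per_cm):.0f} px")
```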
Procedure
To avoid interstimulus gender and identity confounds within a trial, only one identity with only one emotion, varying in intensity, was presented in any given session. Each participant performed the experiment for both the happy and sad faces of the same identity in separate 1-h sessions (order counterbalanced) separated by approximately 1 week. Half the participants ran the sessions with the male identity and the other half with the female identity. Each participant was debriefed after completing both sessions.
Participants completed a practice block of 100 trials, with auditory feedback for correct position and emotional intensity, before starting the main experiment. The main experiment had a total of 1152 trials in 16 blocks (72 trials per block). The sequence of stimuli and events in a single trial is shown in Fig. 2. Each trial started with participants fixating the fixation point (500 ms) at the center of the screen. The exogenous cue was presented for 44 ms randomly either to the left or right of fixation, with the neutral cue appearing at fixation. Participants were explicitly informed that the cue conveyed no information regarding the location or the intensity of the emotional expression. After a cue-to-target interval (CTI, the interstimulus interval between cue and target) of 60 ms, two emotional faces were presented simultaneously on the left and right sides of fixation within the rectangular frames for 82 ms. The faces were positioned 1° above or below the fixation point with respect to the horizontal axis (see also Störmer and Alvarez 2016). In every trial, one face was always the standard face (middle intensity, fifth face), while the other was a test face (any face from the set of nine; Fig. 1). The total duration of cue, CTI, and target faces was approximately 185 ms, ensuring that the targets disappeared before a possible saccade to the cued location. The trials were randomized and made equiprobable in terms of the location of the cues, the position of the faces, and the locations of the test and standard faces. Participants performed a two-alternative forced-choice task to report the location of the face that appeared happier (or sadder, in the separate session) with both hands using four keys on the keyboard [upper-left ("w"), upper-right ("i"), lower-left ("x"), lower-right ("m")]. They had a maximum of 2 s to respond.
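As an illustration of how such a randomized, equiprobable trial list can be built, the sketch below constructs a fully crossed and shuffled design in Python. The factor structure and the number of repetitions are assumptions for illustration only; the actual experiment was implemented in E-Prime 2.0 and its exact counterbalancing scheme is not reproduced here.

```python
import itertools
import random

# Illustrative factors; the exact factor structure and repetition count
# used in E-Prime are not reported, so these are assumptions.
cue_types = ["test_cued", "standard_cued", "neutral"]
test_intensities = range(1, 10)       # nine morph levels, standard = level 5
test_sides = ["left", "right"]        # which side holds the test face
vertical_positions = ["up", "down"]   # faces 1 deg above/below fixation

conditions = list(itertools.product(cue_types, test_intensities,
                                    test_sides, vertical_positions))

def make_trial_list(n_repeats):
    """Repeat every condition equally often and shuffle the order."""
    trials = conditions * n_repeats
    random.shuffle(trials)
    return trials

trials = make_trial_list(n_repeats=2)   # 3 x 9 x 2 x 2 x 2 = 216 trials
print(len(trials), trials[0])
```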
Figure 2.
Schematic representation of the single-trial structure for the main experiment. Participants performed a 2AFC task in which they reported the location (up vs. down) of the happier face (four choices: "w," "x," "i," and "m" keys, two hands). In this trial, the response for the happier face would be the "x" key. The same procedure was followed for the control experiment except that the CTI was set to 500 ms.
Results
SPSS (Version 16.0) was used to perform the statistical analysis. The data were normally distributed, and parametric tests were used. Separate two-way repeated-measures analyses of variance (ANOVA) with emotion (happy and sad) and cuing condition (neutral cued, standard cued, and test cued) as factors were performed for the main and control experiments. The alpha level was set at 0.05 for all statistical analyses. Post hoc comparisons were performed using Bonferroni correction. In case of violation of sphericity, Greenhouse-Geisser corrected values are reported.
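The analyses were run in SPSS; for illustration, an equivalent two-way repeated-measures ANOVA can be set up in Python with statsmodels, as sketched below on synthetic data (the column names and values are hypothetical). Note that, unlike SPSS, this routine does not provide a Greenhouse-Geisser correction, so sphericity would need to be checked separately.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Synthetic long-format data standing in for the real data set (the actual
# analysis was run in SPSS 16.0): one nPSE per participant x emotion x cue.
rng = np.random.default_rng(0)
rows = []
for subj in range(1, 13):
    for emotion in ["happy", "sad"]:
        for cue, shift in [("neutral", 0.0), ("standard_cued", 0.1),
                           ("test_cued", -0.1)]:
            rows.append({"subject": subj, "emotion": emotion, "cue": cue,
                         "nPSE": 1.0 + shift + rng.normal(0, 0.05)})
df = pd.DataFrame(rows)

# Two-way repeated-measures ANOVA: emotion (2) x cuing condition (3).
print(AnovaRM(data=df, depvar="nPSE", subject="subject",
              within=["emotion", "cue"]).fit())
```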
Emotion appearance
Shifts in emotion appearance across the different conditions were measured using the point of subjective equality (PSE) (Carrasco et al. 2004; Graham et al. 2007; Anton-Erxleben et al. 2010; Störmer and Alvarez 2016). A psychometric curve was fitted to the data with the four-parameter Weibull function using a maximum likelihood procedure in the Psignifit 3.0 toolbox in MATLAB (Fründ et al. 2011). The fits expressed the proportion of times the participant chose the test face over the standard as a function of the test face's emotional intensity, separately for when the test face was cued (attended: test cued condition), when the test face was uncued (unattended: standard cued condition), and when there was a neutral cue (not directed to any spatial location: neutral cued condition). Goodness of fit was assessed using deviance scores (D ± SD = 10 ± 3), and participants with high deviance scores were excluded from further analysis (Wichmann and Hill 2001; Koldewyn et al. 2009).
Figure 3a and b show the group-averaged psychometric functions for happy and sad faces, respectively. The proportion was calculated only from trials with correctly identified target locations (mean ± SD = 94.32 ± 3%). The figure shows that the test cued function is shifted toward the left, indicating that participants chose the test face as happier (or sadder) when it was cued. The standard cued function is shifted toward the right, indicating that participants chose the test stimulus less frequently when it was uncued. The PSE was computed for all three cuing conditions. Normalized PSEs (nPSE) were calculated for analysis by dividing the PSE of each cue condition by the mean of the PSEs from all conditions (including both happy and sad sessions) for each participant (Abrams et al. 2010).
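The fitting itself was done with Psignifit 3.0 in MATLAB; the sketch below is an illustrative Python re-implementation under the assumption of a standard four-parameter Weibull parameterization (threshold, slope, guess rate, lapse rate) fitted by maximizing the binomial likelihood, with the PSE read off at the 50% point and the nPSE computed as described above. The data values are made up for illustration.

```python
import numpy as np
from scipy.optimize import minimize

def weibull(x, alpha, beta, gamma, lam):
    """Four-parameter Weibull: threshold alpha, slope beta,
    guess rate gamma, lapse rate lam."""
    return gamma + (1 - gamma - lam) * (1 - np.exp(-(x / alpha) ** beta))

def fit_pse(intensity, n_test_chosen, n_trials):
    """Maximum-likelihood fit of the Weibull; returns the PSE (50% point)."""
    def nll(params):
        a, b, g, l = params
        p = np.clip(weibull(intensity, a, b, g, l), 1e-6, 1 - 1e-6)
        return -np.sum(n_test_chosen * np.log(p)
                       + (n_trials - n_test_chosen) * np.log(1 - p))
    res = minimize(nll, x0=[5.0, 2.0, 0.02, 0.02], method="L-BFGS-B",
                   bounds=[(0.5, 20), (0.2, 10), (0.0, 0.2), (0.0, 0.2)])
    a, b, g, l = res.x
    # Invert the fitted function at 0.5 to read off the PSE.
    return a * (-np.log(1 - (0.5 - g) / (1 - g - l))) ** (1 / b)

# Hypothetical counts of "test chosen" out of 20 trials at the nine levels.
levels = np.arange(1, 10, dtype=float)
chosen = np.array([1, 2, 4, 7, 10, 14, 17, 19, 20])
pse = fit_pse(levels, chosen, np.full(9, 20))

# nPSE: each condition's PSE divided by the mean PSE over all conditions
# (both sessions) for that participant; two extra hypothetical PSEs are
# included here purely to show the normalization step.
condition_pses = np.array([pse, pse * 1.1, pse * 0.9])
npse = condition_pses / condition_pses.mean()
print(round(pse, 2), np.round(npse, 2))
```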
Figure 3.
Group-averaged psychometric functions depicting emotion appearance judgments, using Weibull fits, for the happy (a) and sad (b) expressions in the main experiment. The graphs show the proportion of trials in which observers chose the test stimulus to be happier (or sadder) than the standard stimulus, as a function of the physical emotional intensity of the test stimulus. The horizontal line intersecting the curves represents the point of subjective equality.
Cuing the test face lowered the nPSE, whereas cuing the standard face led to higher nPSEs, indicating that the cued emotional face was perceived to be happier (Fig. 3a) or sadder (Fig. 3b) than the uncued emotional face. A two-way repeated-measures ANOVA on the nPSE values (Fig. 4a and b) found a significant main effect of cue condition [F(1.23, 13.6) = 18.73, P < 0.001, ηp2 = 0.63]. The main effect of emotion [F(1, 11) = 0.64, P = 0.44, ηp2 = 0.06] and the interaction between cue and emotion [F(1.4, 14.9) = 1.03, P = 0.35, ηp2 = 0.09] were not significant. Post hoc comparisons with 95% confidence intervals (CI) showed significant differences between the neutral cued and test cued conditions (mean difference = 0.137, P = 0.013, CI = [0.029, 0.245]) and between the neutral cued and standard cued conditions (mean difference = −0.195, P = 0.005, CI = [−0.326, −0.063]). The distribution of individual nPSEs (Fig. 5a) shows that for happy faces, all but one participant had a test cued nPSE lower than the neutral cued nPSE (below the unity line) and a standard cued nPSE higher than the neutral cued nPSE (above the unity line), suggesting consistency across the sample. The individual results for sad faces (Fig. 5b) followed the same pattern.
Figure 4.
The nPSE values for the test stimulus for each of the three cue types for the (a) happy and (b) sad expressions in the main experiment. Error bars depict standard errors of the mean (*P < 0.05).
Figure 5.
Scatter plots of individual observers' nPSE values for the (a) happy and (b) sad expressions in the main experiment. Each observer's nPSEs for test stimuli in the test cued and standard cued conditions are plotted as a function of that observer's nPSE for the test stimuli in the neutral cued condition.
Orthogonal task
The explicit task of the participants was to report the location of the face, which was contingent on the appearance judgment. We measured the accuracy of the reported location and the reaction time (RT) only on trials in which participants chose the standard intensity (fifth face) stimulus, in order to compare performance for the same physical stimulus across the three cuing conditions: when the standard face was cued (cued condition), when the test face was cued (uncued condition), and when there was a neutral cue (neutral cued condition) (Gobell and Carrasco 2005; Liu et al. 2009). Separate two-way repeated-measures ANOVAs were performed on RT and accuracy across the three cuing conditions. RTs < 100 ms were removed (<30 trials across all conditions).
Figure 6a shows the averaged discrimination accuracy. The ANOVA on accuracy did not reveal any significant effect of cue, F(2, 22) = 0.95, P = 0.40, ηp2 = 0.08; emotion, F(1, 11) = 0.12, P = 0.74, ηp2 = 0.01; or their interaction, F(2, 22) = 0.91, P = 0.42, ηp2 = 0.08. Figure 6b shows the RT data, with a significant effect of cuing, F(2, 22) = 8.16, P = 0.002, ηp2 = 0.43. The effects of emotion, F(1, 11) = 0.97, P = 0.35, ηp2 = 0.08, and their interaction, F(2, 22) = 0.66, P = 0.53, ηp2 = 0.06, were not significant. Post hoc comparisons revealed significant differences in RTs between the cued and neutral cued conditions (mean difference = −25.73 ms, P = 0.024, 95% CI = [−48.24, −3.2]) and between the cued and uncued conditions (mean difference = −36.1 ms, P = 0.025, 95% CI = [−67.91, −4.3]). Thus, directing attention facilitated reaction times for cued compared with uncued faces.
Figure 6.
(a) Accuracy and (b) reaction time for the orthogonal task in the main experiment, averaged over happy and sad expressions. Error bars represent standard errors of the mean. *P < 0.05.
Discussion
The results of the first experiment demonstrate that exogenous cuing (short CTI, 60 ms) to a spatial location enhances the appearance of the emotional expression at that location. When the test face is attended (cued location), it is perceived as higher in emotional intensity than when the same face is unattended. This effect was similar for the happy and sad faces used. Attentional effects on performance were also observed, as a reduction in RT in the cued compared with the uncued condition, showing both appearance-related and performance-related effects.
Experiment 2: control experiment (CTI—500 ms)
It is possible that participants simply chose the side where the exogenous cues (dots) appeared, which would result in a cue bias. To rule out this explanation, a control experiment was performed in which the CTI between the cue and the target was increased to 500 ms. Since the effect of exogenous attention typically lasts only for a short time (<250 ms CTI; peaking at ∼100–120 ms) (Posner 1980; Nakayama and Mackeben 1989), we hypothesized that there would be no effect of exogenous attention on emotion appearance in this condition, which would indicate that the effects at the shorter CTI (main experiment) are due to attention rather than cue or response bias (Carrasco et al. 2004; Liu et al. 2006; Ling and Carrasco 2007; Turatto et al. 2007; Störmer and Alvarez 2016).
Participants
Twelve naive student volunteers (6 females, age range = 18–30 years) from the University of Allahabad, with normal or corrected-to-normal vision, provided informed consent before participating in the study and were compensated for their time. Different participants were recruited for the control experiment for logistical reasons and because the original participants were unavailable during the course of the study. Two participants were excluded from the control experiment due to high deviance scores (D ± SD = 10 ± 3) of the fitted psychometric function.
Apparatus and stimuli
The stimuli and apparatus were the same as in Experiment 1.
Procedure
The procedure was exactly the same as in the main experiment, except that the CTI between the cue and the target was increased to 500 ms, since the effect of an exogenous cue is absent at longer CTIs (Posner et al. 1980; Carrasco et al. 2004; Liu et al. 2006; Ling and Carrasco 2007; Störmer and Alvarez 2016).
Results
Emotion appearance
There was no effect of cuing on emotion appearance at the longer CTI of 500 ms for either happy (Fig. 7a) or sad faces (Fig. 7b). The ANOVA performed on the nPSEs (Fig. 8) did not show a significant effect of cue, F(1.13, 10.2) = 0.04, P = 0.88, ηp2 = 0.00; emotion, F(1, 9) = 3.46, P = 0.1, ηp2 = 0.28; or their interaction, F(2, 18) = 0.08, P = 0.93, ηp2 = 0.00. Further, the distribution of individual nPSEs indicates that the test cued and standard cued conditions show no clear separation from the unity line for either happy (Fig. 9a) or sad faces (Fig. 9b). The lack of a cuing effect on emotional appearance indicates that the cuing effect at the short CTI was not driven by response bias and that the effect in the main experiment is primarily due to exogenous attention.
Figure 7.
Group-averaged psychometric functions depicting emotion appearance judgments, using Weibull fits, for the happy (a) and sad (b) expressions in the control experiment. The horizontal line intersecting the curves represents the point of subjective equality.
Figure 8.
Bar plot of nPSE values for the test stimulus for each of the three cue types for the happy and sad expressions in the control experiment. Error bars depict standard errors of the mean.
Figure 9.
Scatter plots of individual observers' nPSEs for the (a) happy and (b) sad expressions in the control experiment. Each observer's nPSEs for test stimuli in the test cued and standard cued conditions are plotted as a function of that observer's nPSE for the test stimuli in the neutral cued condition.
Orthogonal task
The analysis of accuracy (Fig. 10a) did not show any significant effect of cue, F(2, 18) = 1.63, P = 0.22, ηp2 = 0.15; emotion, F(1, 9) = 0.28, P = 0.61, ηp2 = 0.03; or their interaction, F(2, 18) = 0.03, P = 0.96, ηp2 = 0.00. The analysis of RTs (Fig. 10b) did not show any significant effect of cue, F(1.25, 11.26) = 0.89, P = 0.39, ηp2 = 0.09; emotion, F(1, 9) = 0.45, P = 0.84, ηp2 = 0.00; or their interaction, F(2, 18) = 0.19, P = 0.83, ηp2 = 0.02. As expected, both the accuracy and RT results in the control experiment (500 ms) indicate that increasing the CTI eliminated the effects of exogenous attention and did not produce any facilitation of performance in the orthogonal task. The lack of a difference in accuracy between cuing conditions in both the main (CTI 60 ms) and control (CTI 500 ms) experiments could be because the stimuli were suprathreshold and the vertical location information was easily accessible (Gobell and Carrasco 2005).
Figure 10.
(a) Accuracy and (b) reaction time results for the orthogonal task in the control experiment averaged over happy and sad expressions. Error bars represent standard errors of the means.
Discussion
In this experiment, the CTI between the cue and the target was increased to 500 ms to investigate whether the effects of Experiment 1 reflect cue or response bias. The results show that increasing the CTI did not lead to shifts in the PSE. The PSE values for the standard cued and test cued conditions were similar to those in the neutral condition, and this was consistent for both happy and sad emotions. No significant difference in RT was observed between the cued and uncued conditions. As increasing the cue-to-target delay abolished the cuing effect on perceived emotion, the effects in Experiment 1 are probably due to exogenous attention rather than a cue bias.
Experiment 3: effect of facial contrast on perceived emotion intensity
The previous experiments demonstrated that exogenous attention enhances perceived emotion intensity, but the mechanisms underlying this attentional effect on emotional appearance are not known. One possibility is the involvement of early visual features. Störmer and Alvarez (2016) showed that spatial attention alters perceived facial attractiveness (a higher order dimension) through an enhancement of contrast (a lower level feature) near the eye region of female faces. However, it has also been shown that attentional effects on spatial frequency (a second-order dimension) are not mediated by contrast (a first-order dimension) (Gobell and Carrasco 2005). Considering that exogenous cuing enhances perceived contrast (Carrasco et al. 2004), it is possible that the effects of spatial attention on perceived emotional intensity are mediated by low-level features such as contrast rather than by direct modulation of affective intensity. To our knowledge, no study has measured the direct influence of facial contrast on perceived emotion intensity.
Therefore, in two experiments (Experiments 3 and 4), we investigated the potential role of contrast as a mediating factor for the effects of attention on emotion intensity. In this experiment, we investigated whether emotion intensity judgments depend on facial contrast. If perceived emotion intensity is influenced by contrast, then high-contrast faces should be reported as happier than low-contrast faces as a function of emotional intensity.
Method
Participants
Sixteen student volunteers (7 females, age range = 18–30 years) from the University of Allahabad, with normal or corrected-to-normal vision, provided informed consent before participating in the study and were compensated for their time. The study was approved by the Institutional Ethics Committee of the University of Allahabad.
Stimuli and apparatus
The apparatus was the same as in the previous experiments. From the stimulus set (Fig. 1), alternate expression intensity levels (1, 3, 5, 7, and 9) of only the female happy face were used in this experiment. The contrast of the full face (oval area) was manipulated to generate three levels of contrast, while keeping the mean luminance, and each contrast level, matched across all the intensity levels (Fig. 11).
Figure 11.
Stimuli used in Experiment 3: for each of the five intensities used, three levels of contrast were created, namely low (L), medium (M), and high (H).
The three levels of contrast across the five intensity levels were low contrast, 30.4 (±0.0); medium contrast, 33.9 (±0.22); and high contrast, 38.3 (±0.0). The mean luminance value was 194.6 (±0.17) across all five intensity levels and three contrast levels. The values were set using the SHINE toolbox in MATLAB and rechecked using Adobe Photoshop for the full oval area.
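The SHINE toolbox performs this luminance and contrast matching; the numpy sketch below illustrates the underlying operation under the assumption that the reported contrast values are standard deviations of gray level within the oval face region, which is rescaled around a fixed mean luminance. A full implementation would also keep all pixel values within the display range, as SHINE does.

```python
import numpy as np

def set_rms_contrast(image, mask, target_sd):
    """Rescale gray-level deviations inside `mask` so that the region's
    standard deviation equals `target_sd`, keeping its mean luminance fixed."""
    out = image.astype(float).copy()
    region = out[mask]
    mean_lum = region.mean()
    out[mask] = mean_lum + (region - mean_lum) * (target_sd / region.std())
    return out

# Hypothetical example: a noise patch standing in for a face image,
# with an elliptical mask marking the oval face region.
rng = np.random.default_rng(1)
img = rng.normal(194.6, 34.0, size=(128, 96))
yy, xx = np.mgrid[:128, :96]
oval = ((yy - 64) / 60.0) ** 2 + ((xx - 48) / 40.0) ** 2 <= 1

for target in (30.4, 33.9, 38.3):   # low, medium, high contrast levels
    adjusted = set_rms_contrast(img, oval, target)
    print(round(adjusted[oval].mean(), 1), round(adjusted[oval].std(), 1))
```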
Procedure
Participants were seated 57 cm away from the monitor. The medium-contrast face (M) at the fifth intensity level (Level 5) was used as the standard face and was present in all trials. The other face could be any of the 15 faces (five intensity levels × three contrast levels). The design was the same as in the main experiment but without cues (see also Fig. 2d of Gobell and Carrasco 2005). There were a total of 300 trials (20 trials per pair). After every 50 trials, there was a break (∼2–3 min). Participants had to report the location of the face that appeared happier using two response keys (up arrow, down arrow) on the keyboard, ignoring the changes in contrast.
Results
Participants' data were fit with the Weibull function, and PSEs were calculated from the proportion of trials in which participants reported the test face as happier, as a function of the test face's emotional intensity. Separate psychometric curves were fitted for each contrast level, and normalized PSEs were calculated. The nPSEs (Fig. 12) were subjected to a one-way repeated-measures ANOVA with face contrast (low, C1; medium, C2; high, C3) as a factor. The analysis showed a significant main effect of contrast on nPSE, F(2, 30) = 8.15, P < 0.001, ηp2 = 0.352. Bonferroni-corrected post hoc comparisons revealed significant differences between C1 and C2 [t(15) = 3.34, P = 0.014, mean difference = 0.175, SEM = 0.052] and between C3 and C1 [t(15) = 3.34, P = 0.013, mean difference = 0.20, SEM = 0.06].
Figure 12.
(a) Group-averaged psychometric functions for the three levels of contrast as a function of emotion intensity when contrast was manipulated for the full face. For the high-contrast level, the function shifts toward the left, indicating that participants chose the high-contrast face as happier. (b) nPSE for the three contrast levels. Error bars represent standard errors of the mean (*P < 0.05).
Discussion
Indeed, the results showed that for high- and medium-contrast faces, the function shifted toward the left compared with the low-contrast face. This experiment demonstrates that actual (physical) facial contrast does play a role in perceived emotion intensity: as actual contrast increases, perceived emotional intensity also increases. This suggests that early visual properties such as contrast influence the perception of emotional intensity.
To our knowledge, this is the first study to demonstrate clearly that a feature such as facial contrast changes perceived emotion intensity. Even though we show that increasing physical contrast increases perceived emotion intensity, this does not establish that the apparent increase in contrast due to exogenous attention would influence perceived emotion intensity in the same way. Building on this experiment, Experiment 4 investigated the roles of apparent and actual increases in contrast and whether exogenous attention mediates its effects on emotional intensity via facial contrast.
Experiment 4: effect of attention and contrast on perceived emotion intensity
The previous experiment showed that increasing the actual (physical) contrast of the face enhances the perceived intensity of the emotion, but we do not know whether the increase in apparent contrast produced by an exogenous attentional cue would also increase perceived emotional intensity. A recent study showed that exogenous attention enhanced attractiveness judgments by increasing the apparent facial contrast around the eye region (Störmer and Alvarez 2016). Cuing had an effect both when participants were asked to judge attractiveness and when they were asked to judge contrast. The authors concluded that exogenous attention influences perceived attractiveness by modulating apparent contrast.
In this experiment, we orthogonally manipulated full-face contrast (five levels: C1, C2, C3, C4, and C5) and exogenous attention (test cued and standard cued) while measuring participants' judgments of both perceived emotion intensity and contrast in separate blocks. We hypothesized that if an increase in perceived contrast is responsible for the enhanced perceived emotion intensity, then the attentional cue should enhance the perception of both emotion intensity and facial contrast. This would indicate that facial contrast is a possible mechanism underlying attentional effects on emotion appearance judgments.
Method
Participants
Sixteen student volunteers (5 females, age range = 18–30 years) from the University of Allahabad, with normal or corrected-to-normal vision, provided informed consent before participating in the study and were compensated for their time. The study was approved by the Institutional Ethics Committee of the University of Allahabad.
Stimuli and apparatus
From the main stimulus set (Fig. 1), the lowest, middle, and highest intensity expressions (Levels 1, 5, and 9) of only the female happy face were used. The contrast of the full face (oval area) was manipulated to create five levels of contrast (root mean square error values: −0.3, −0.15, 0, 0.15, and 0.3), keeping the physical luminance value similar across all levels (Fig. 13). The values were set using the SHINE toolbox in MATLAB and rechecked using Adobe Photoshop for the full oval area. The apparatus was the same as in the previous experiments.
Figure 13.
Facial contrast stimuli used in Experiment 4: five levels of contrast (from the lowest, C1, to the highest, C5) were generated for the mid-intensity (fifth level) happy face.
Procedure
Participants were seated 57 cm away from the monitor. Of the five contrast levels, the middle-contrast face was used as the standard face in all trials, while the test face could be any of the five contrast levels. The lowest and highest expression intensities were used in filler trials, and data from these were not analyzed. The design was similar to Experiment 1, with cues. Participants ran two blocks using the same stimulus set, with the only variation being the task instructions. In the first block, participants were asked to report the location of the "more happy face," and in the next block they were asked to report the location of the "higher contrast face." Before the second block, they were shown the contrast differences between the faces. The block order was fixed for all participants; the blocks were not counterbalanced, to ensure that the contrast judgment did not bias or influence the emotion judgment.
There were 600 trials in total (20 trials per pair) in each block (∼40 min). After every 60 trials, there was a break (∼2–3 min). Participants reported the location using two response keys (up arrow, down arrow) on the keyboard. We predicted that, if contrast plays a role in enhancing emotion intensity, participants would report the cued side as happier than the uncued side. Similarly, it was predicted that in Block 2, when asked directly about the contrast of the face, participants would report the cued face as higher in contrast than the uncued face if contrast is the mediating factor for the enhancement of emotion intensity (see also Gobell and Carrasco 2005; Störmer and Alvarez 2016).
Results and Discussion
For Block 1 (more happy, Fig. 14a), the values were subjected to a two-way repeated-measures ANOVA with cuing condition (cued vs. uncued) and contrast level (five levels) as factors. The analysis showed significant main effects of contrast level [F(1.72, 25.86) = 38.73, P < 0.001, ηp2 = 0.721] and cuing condition [F(1, 15) = 7.93, P = 0.013, ηp2 = 0.35] but no interaction [F(4, 60) = 1.00, P > 0.250, ηp2 = 0.03]. For Block 2 (higher contrast, Fig. 14b), the analysis showed a significant main effect of contrast level [F(1.46, 21.98) = 136.08, P < 0.001, ηp2 = 0.91], but no effect of cuing condition [F(1, 15) = 2.97, P = 0.11, ηp2 = 0.165] or their interaction [F(4, 60) = 1.37, P = 0.26, ηp2 = 0.082].
Figure 14.
Results of Experiment 4, the effect of attention and facial contrast on perceived emotion intensity: the proportion of trials in which the test was chosen when it was cued vs. when it was uncued, plotted as a function of contrast, when the task was to report the location of (a) the happier face and (b) the higher contrast face.
In summary, Experiment 4 was conducted (similar to Experiment 1) to test whether spatial attention influences emotion appearance by modulating facial contrast. In this cuing experiment, a significant effect of the cue was obtained when the task was to report the location of the happier face: participants reported the cued face as happier than the same face when it was uncued, replicating the results of Experiment 1. However, when the task was to report the location of the higher contrast face (using the same stimulus set), there was no effect of the cue. The lack of a cuing effect on contrast suggests that the attentional effects on emotional intensity are perhaps not fully mediated by changes in overall facial contrast, even though changes in facial contrast do influence emotion intensity judgments (Experiment 3).
We also manipulated mouth contrast, since the mouth region is important for the happy expression, and collected data analogous to the full-face contrast manipulation in Experiment 3 with three mouth contrast levels. The results were similar to those for full-face contrast, with a main effect of contrast level. We also tried a version of Experiment 4 manipulating mouth contrast. In a pilot study, participants found the task hard, and we essentially obtained flat psychometric functions. Increasing the mouth contrast change to make the task somewhat easier resulted in spurious boundaries in the mouth region, especially given the faces we were working with. It also needs to be pointed out that substantially changing the contrast of a local region would introduce spurious high spatial frequencies in that region. Hence, it would become difficult to determine whether it is the change to the feature or the change in spatial frequency content that causes the difference in emotion perception of interest. Studies have shown that low spatial frequencies are more important for the happy expression (Kumar and Srinivasan 2011). Careful manipulation of the stimuli and of the features important for particular expressions is needed to delineate the mechanisms underlying the attentional effects on emotional appearance.
General Discussion
In this study, adapting the paradigm of Carrasco et al. (2004), we investigated the effect of exogenous spatial attention on the perceived intensity of happy and sad facial expressions using perceptually equidistant emotional intensity morphs (see Supplementary Material). At a short CTI (60 ms), participants perceived a less intense emotional face to be higher in intensity when it was attended than when it was unattended or when there was a neutral cue (Fig. 3). This effect was absent with a longer CTI (500 ms), indicating that the enhancement is not due to response bias but possibly due to exogenous attention (Fig. 6). Our results show that exogenous spatial attention intensifies facial emotional expressions, making a less happy or less sad face appear happier or sadder. Further, perceived emotion intensity varied as a function of contrast, with higher contrast faces judged as higher in emotional intensity (Fig. 12). Cuing also intensified perceived emotion when the faces differed in facial contrast and participants had to report the higher emotion intensity rather than the higher contrast (Fig. 14).
Previous studies using similar paradigms have ruled out alternative, non-attentional explanations such as cue bias, response bias, and task design-related biases (Carrasco et al. 2004, 2008; Liu et al. 2006; Ling and Carrasco 2007; Fuller et al. 2009; Störmer and Alvarez 2016; for a review, see Carrasco 2014). Along similar lines, a simple response bias toward the location of the cues (i.e. left or right) was controlled by asking the participants to report an orthogonal dimension, the location of the more expressive face (up or down, with one hand for each side of the screen), and by instructing them that the cues were non-informative (Carrasco et al. 2004; Störmer and Alvarez 2016). In addition, participants were informed beforehand whether the task involved only happy faces or only sad faces and, before the beginning of the experiment, were familiarized with the emotional morphs used. This avoided cross-emotion confounds, such as uncertainty about whether the two faces lay within an emotion boundary or across emotion boundaries.
The study also shows that attention has similar effects for happy and sad facial expressions, indicating that the effect on emotional intensity is valence invariant. One possible explanation is that emotional intensity can be considered a prothetic stimulus dimension, so the actual valence of the emotion does not matter. Further studies would be needed to confirm the generality of the attentional intensification of emotional expressions by investigating other emotions, such as neutral–anger or neutral–fear, used along a more–less dimension. The prothetic–metathetic distinction, and the possibly different effects of attention based on this distinction, raises the question of whether qualitative emotional changes (faces changing from happy to sad or vice versa) would show similar effects.
How could attention alter perceived expression intensity? First, we observed a decreased threshold for the attended expression, which may suggest a contrast-gain-like modulation of the response. Previous studies showing attentional effects on low-order perceptual features, such as contrast, have suggested that attentional effects on apparent contrast (a leftward PSE shift) could result from contrast-gain modulations in which attention changes the strength of the stimulus by "boosting effective contrast" (Carrasco et al. 2004; Cutrone et al. 2014). Further, Störmer et al. (2009) showed that the enhancement in apparent contrast due to cross-modal exogenous attention is associated with early perceptual modulations in brain areas that are responsive to actual physical differences in contrast. These studies suggest that attentional effects on appearance can be observed in the brain areas that process the relevant features. Hence, it is possible that the effects of spatial attention on emotion appearance arise in areas responsible for emotion processing by boosting effective emotional intensity, either through specific low-level (perceptual) or high-level (affective) changes in facial features. A number of previous studies have suggested that emotion processing involves multiple stages: a perceptual stage (<100 ms), an integration stage (<500 ms), and a later affective stage (>500 ms) (Holmes et al. 2003; Calvo et al. 2012; Calvo and Nummenmaa 2016). Exogenous spatial attention could affect any of these three stages, resulting in changes in emotional appearance.
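The notion of "boosting effective contrast" can be made concrete with the contrast-response function commonly used in this literature. The following is only an illustrative formalization (no such model was fitted in this study): if attention multiplies the effective stimulus strength by a gain factor g > 1 in a Naka-Rushton contrast-response function, the result is algebraically equivalent to lowering the semi-saturation constant, producing the kind of leftward shift of the psychometric function observed here for the cued faces.

```latex
% Illustrative contrast-gain account (not a model fitted in this study):
% attention multiplies stimulus strength c by a gain factor g > 1 in a
% Naka-Rushton contrast-response function.
\[
  R_{\text{att}}(c)
  = R_{\max}\,\frac{(g c)^{n}}{(g c)^{n} + c_{50}^{n}}
  = R_{\max}\,\frac{c^{n}}{c^{n} + \left(c_{50}/g\right)^{n}},
\]
% i.e. attention is equivalent to lowering the semi-saturation constant
% c50, so the same response is reached at a lower physical intensity,
% which appears as a leftward shift of the psychometric function
% (a lower PSE).
```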
Are the attentional effects primarily perceptual or affective? Evidence for the contributions of perceptual vs. affective features in covert vision for happy and sad emotional expressions has been provided recently, but only for full-blown expressions (Calvo et al. 2014). They report that the smile in a happy face is a highly salient, diagnostic, and distinctive cue that conveys facial happiness in peripheral vision, so that integration of information from the whole face may not be needed. They did not find any specific salient diagnostic feature for sad faces, but other studies (Bombari et al. 2013) have suggested that the sad expression may depend on the eyes and mouth to a reasonable extent (Calvo and Nummenmaa 2008; Guo 2012; Calvo et al. 2014). They also argue that perceptual rather than affective content is favored in peripheral vision, as there is a "breakdown of configural processing mechanisms" in the periphery and only certain features in the faces contribute to their identification.
Considering the above studies, the effects of spatial attention on perceived emotional intensity could arise either from the enhancement of local distinctive features (smile, lips) within the faces or from changes in features such as the contrast or spatial frequency content of the whole face or of specific face regions, rather than from direct modulation of affective intensity. We performed two experiments to test the potential role of overall face contrast in mediating the attentional effect on emotion appearance. Experiment 3 showed that higher contrast faces are perceived as higher in emotional intensity. In Experiment 4, when spatial attention was manipulated and participants were asked to report the location of the happier face among faces varying in contrast, they reported the cued face as happier than the uncued face, consistent with Experiment 1. However, there was no effect of cuing when participants were asked to report the location of the face higher in contrast. The lack of a direct cuing effect on overall facial contrast indicates that the cuing effects on emotional appearance in Experiments 1 and 4 are possibly not mediated by overall facial contrast. This is somewhat similar to the finding that attentional effects on spatial frequency are not mediated by contrast (Gobell and Carrasco 2005, Fig. 2d). Another factor to consider is that cuing effects on particular attributes may depend on the methods used and the task instructions (Anderson 2016). The results indicate that other features, either on their own or in combination, may underlie the attentional effect on emotion appearance. Further studies are needed to elucidate the underlying mechanisms, which may also differ across emotions.
What could be the associated neurophysiological and brain mechanisms for the enhancement in perceived emotion intensity due to exogenous attention? Sensitivity to subtle emotional expressions (neutral to fear or neutral to anger), rather than to identity or other features, has been shown to be reduced in patients with bilateral amygdala damage (Graham et al. 2007). Deficits in perceiving the intensity of sadness have also been associated with the amygdala (Adolphs and Tranel 2004). More recently, brain localization studies using emotional intensity morphs show the involvement mainly of the posterior superior temporal sulcus (pSTS) (Narumoto et al. 2001; Flack et al. 2015) and the amygdala (Lin et al. 2016; Meaux and Vuilleumier 2016), whose responses correlate with early perceptual changes in stimulus emotional intensity (individual feature changes such as widening of the eyes, mouth, or lips). Other possible regions are the fusiform face area and the occipital face area, as these areas are sensitive to competition between two simultaneously presented faces (Gentile and Jansma 2010). It is possible that in our study attention enhanced emotional intensity by tuning the neural responses in these brain areas (Winston et al. 2003). Thus, while attentional enhancement of early features (emotion-specific or not) may underlie our results, it is also possible that attention directly influences emotional intensity at a late stage.
The study has some limitations, which could be addressed in future studies. First, the emotional intensity standardization experiments were done with a separate group of participants. While it would be ideal to standardize faces with equal steps of emotional intensity for each participant, this was difficult to achieve for logistical reasons. However, it is unlikely that this influenced the results, because cuing was manipulated within participants and the same set of emotional faces was used across all conditions. Second, eye movements were not recorded, because the combined duration of the cue and target in the main experiment was shorter than typical saccadic reaction times (∼200 ms). In addition, participants were instructed to maintain fixation, because the cues were informative about neither intensity nor location. More importantly, the condition in which a saccade could have yielded a benefit (the control experiment with the longer CTI of 500 ms) did not show the emotional intensification effect.
One implication of attentional effects on appearance concerns the veridicality of perception (Block 2010; Hoffman et al. 2015). Given that attention alters phenomenology (Carrasco et al. 2004; Störmer and Alvarez 2016) and that attention is always operating on some content or other, it is difficult to argue for the veridicality of visual perception. The attentional effect on emotional appearance indicates that perceived emotional intensity may not be veridical. This interpretation is consistent with arguments that the purpose of attention is action (Wu 2011) and not necessarily a veridical experience of the world (Treue 2004; Carrasco et al. 2008; Hoffman et al. 2015). Intensifying the emotional expression of an attended face may increase the likelihood of acting toward the person displaying the emotion.
The results of our study could also be extended to explain certain illusions arising in covert vision, such as in the Mona Lisa's portrait. Livingstone (2000) hypothesized that the smile is apparent only when one attends to the face covertly rather than foveally and that spatial frequency information in covert vision could play a substantial role. Our study hints at a possible contribution of spatial attention to the illusory effects in the Mona Lisa's facial expression, though the mechanism needs further investigation.
In summary, this study shows that attention alters apparent facial expression intensity and that the enhancement of emotion expression could be valence invariant, although further studies are needed to identify the underlying mechanisms. Future studies could also address whether other types of attentional manipulation and top-down factors influence the perception of facial expressions in a similar manner or whether these effects are specific to exogenous attention, and whether similar results are observed for expressions such as fear and disgust. The more intense expression perceived due to attention could enable better or faster decision-making and appropriate social actions toward the person displaying the expression.
Data Availability Statement: Data are not currently publicly available. The authors are willing to make the data available in a public repository.
Supplementary data
Supplementary data are available at NCONSC Journal online.
Conflict of interest statement. None declared.
References
- Abrams J, Barbot A, Carrasco M. Voluntary attention increases perceived spatial frequency. Atten Percept Psychophys 2010;72:1510–21. doi:10.3758/APP.72.6.1510
- Adolphs R, Tranel D. Impaired judgments of sadness but not happiness following bilateral amygdala damage. J Cogn Neurosci 2004;16:453–62. doi:10.1162/089892904322926782
- Anderson B. Attentional effects on phenomenological appearance: how they change with task instructions and measurement methods. PLoS One 2016;11:e0152353.
- Anton-Erxleben K, Abrams J, Carrasco M. Evaluating comparative and equality judgments in contrast perception: attention alters appearance. J Vis 2010;10:6. doi:10.1167/10.11.6
- Anton-Erxleben K, Henrich C, Treue S. Attention changes perceived size of moving visual patterns. J Vis 2007;7:5. doi:10.1167/7.11.5
- Block N. Attention and mental paint. Philos Issues 2010;20:23–63. doi:10.1111/j.1533-6077.2010.00177.x
- Bocanegra BR, Zeelenberg R. Emotion improves and impairs early vision. Psychol Sci 2009;20:707–13. doi:10.1111/j.1467-9280.2009.02354.x
- Bocanegra BR, Zeelenberg R. Emotional cues enhance the attentional effects on spatial and temporal resolution. Psychon Bull Rev 2011;18:1071–6. doi:10.3758/s13423-011-0156-z
- Bombari D, Schmid PC, Schmid Mast M, et al. Emotion recognition: the role of featural and configural face information. Q J Exp Psychol 2013;66:2426–42. doi:10.1080/17470218.2013.789065
- Calder AJ, Young AW, Perrett DI, et al. Categorical perception of morphed facial expressions. Vis Cogn 1996;3:81–118. doi:10.1080/713756735
- Calvo MG, Beltrán D, Fernández-Martín A. Processing of facial expressions in peripheral vision: neurophysiological evidence. Biol Psychol 2014;100:60–70.
- Calvo MG, Fernández-Martín A, Nummenmaa L. Perceptual, categorical, and affective processing of ambiguous smiling facial expressions. Cognition 2012;125:373–93.
- Calvo MG, Fernández-Martín A, Nummenmaa L. Facial expression recognition in peripheral versus central vision: role of the eyes and the mouth. Psychol Res 2014;78:180–95.
- Calvo MG, Nummenmaa L. Detection of emotional faces: salient physical features guide effective visual search. J Exp Psychol Gen 2008;137:471–94. doi:10.1037/a0012771
- Calvo MG, Nummenmaa L. Perceptual and affective mechanisms in facial expression recognition: an integrative review. Cogn Emot 2016;30:1081–106. doi:10.1080/02699931.2015.1049124
- Carrasco M. Spatial covert attention: perceptual modulation. In: Nobre AC, Kastner S (eds), The Oxford Handbook of Attention. Oxford, UK: Oxford University Press, 2014, 183–230.
- Carrasco M, Fuller S, Ling S. Transient attention does increase perceived contrast of suprathreshold stimuli: a reply to Prinzmetal, Long, and Leonhardt (2008). Percept Psychophys 2008;70:1151–64. doi:10.3758/PP.70.7.1151
- Carrasco M, Ling S, Read S. Attention alters appearance. Nat Neurosci 2004;7:308–13. doi:10.1038/nn1194
- Carretié L. Exogenous (automatic) attention to emotional stimuli: a review. Cogn Affect Behav Neurosci 2014;14:1228–58.
- Cutrone EK, Heeger DJ, Carrasco M. Attention enhances contrast appearance via increased input baseline of neural responses. J Vis 2014;14:16. doi:10.1167/14.14.16
- Dickert S, Slovic P. Attentional mechanisms in the generation of sympathy. Judgm Decis Mak 2009;4:297–306.
- Ethofer T, Anders S, Erb M, et al. Impact of voice on emotional judgment of faces: an event-related fMRI study. Hum Brain Mapp 2006;27:707–14. doi:10.1002/hbm.20212
- Fenske MJ, Raymond JE. Affective influences of selective attention. Curr Dir Psychol Sci 2006;15:312–16. doi:10.1111/j.1467-8721.2006.00459.x
- Flack TR, Andrews TJ, Hymers M, et al. Responses in the right posterior superior temporal sulcus show a feature-based response to facial expression. Cortex 2015;69:14–23. doi:10.1016/j.cortex.2015.03.002
- Fründ I, Haenel NV, Wichmann FA. Inference for psychometric functions in the presence of nonstationary behavior. J Vis 2011;11:16. doi:10.1167/11.6.16
- Fuller S, Carrasco M. Exogenous attention and color perception: performance and appearance of saturation and hue. Vision Res 2006;46:4032–47. doi:10.1016/j.visres.2006.07.014
- Fuller S, Park Y, Carrasco M. Cue contrast modulates the effects of exogenous attention on appearance. Vision Res 2009;49:1825–37. doi:10.1016/j.visres.2009.04.019
- Gentile F, Jansma BM. Neural competition through visual similarity in face selection. Brain Res 2010;1351:172–84. doi:10.1016/j.brainres.2010.06.050
- Gobell J, Carrasco M. Attention alters the appearance of spatial frequency and gap size. Psychol Sci 2005;16:644–51. doi:10.1111/j.1467-9280.2005.01588.x
- Graham R, Devinsky O, LaBar KS. Quantifying deficits in the perception of fear and anger in morphed facial expressions after bilateral amygdala damage. Neuropsychologia 2007;45:42–54. doi:10.1016/j.neuropsychologia.2006.04.021
- Grewal PK, Kar BR, Kumar D. CBCS Emotional Faces Database. 2012. Retrieved from http://cbcs.ac.in/resources/
- Guo K. Holistic gaze strategy to categorize facial expression of varying intensities. PLoS One 2012;7:e42585.
- Hess U, Blairy S, Kleck RE. The intensity of emotional facial expressions and decoding accuracy. J Nonverbal Behav 1997;21:241–57. doi:10.1023/A:1024952730333
- Hoffman DD, Singh M, Prakash C. The interface theory of perception. Psychon Bull Rev 2015;22:1480–506. doi:10.3758/s13423-015-0890-8
- Holmes A, Vuilleumier P, Eimer M. The processing of emotional facial expression is gated by spatial attention: evidence from event-related brain potentials. Brain Res Cogn Brain Res 2003;16:174–84. doi:10.1016/S0926-6410(02)00268-9
- Koldewyn K, Whitney D, Rivera SM. The psychophysics of visual motion and global form processing in autism. Brain 2009;133:599–610.
- Kumar D, Srinivasan N. Emotion perception is mediated by spatial frequency content. Emotion 2011;11:1144–51. doi:10.1037/a0025453
- Leleu A, Demily C, Franck N, et al. The odor context facilitates the perception of low-intensity facial expressions of emotion. PLoS One 2015;10:e0138656.
- Lim S-L, Pessoa L. Affective learning increases sensitivity to graded emotional faces. Emotion 2008;8:96–103. doi:10.1037/1528-3542.8.1.96
- Lin H, Mueller-Bardorff M, Mothes-Lasch M, et al. Effects of intensity of facial expressions on amygdalar activation independently of valence. Front Hum Neurosci 2016;10:646.
- Ling S, Carrasco M. Transient covert attention does alter appearance: a reply to Schneider (2006). Percept Psychophys 2007;69:1051–8. doi:10.3758/BF03193943
- Liu T, Abrams J, Carrasco M. Voluntary attention enhances contrast appearance. Psychol Sci 2009;20:354–62. doi:10.1111/j.1467-9280.2009.02300.x
- Liu T, Fuller S, Carrasco M. Attention alters the appearance of motion coherence. Psychon Bull Rev 2006;13:1091–6. doi:10.3758/BF03213931
- Livingstone MS. Is it warm? Is it real? Or just low spatial frequency? Science 2000;290:1299.
- Meaux E, Vuilleumier P. Facing mixed emotions: analytic and holistic perception of facial emotion expressions engages separate brain networks. Neuroimage 2016;141:154–73. doi:10.1016/j.neuroimage.2016.07.004
- Motley MT, Camden CT. Facial expression of emotion: a comparison of posed expressions versus spontaneous expressions in an interpersonal communication setting. West J Speech Commun 1988;52:1–22. doi:10.1080/10570318809389622
- Nakayama K, Mackeben M. Sustained and transient components of focal visual attention. Vision Res 1989;29:1631–47. doi:10.1016/0042-6989(89)90144-2
- Narumoto J, Okada T, Sadato N, et al. Attention to emotion modulates fMRI activity in human right superior temporal sulcus. Brain Res Cogn Brain Res 2001;12:225–31. doi:10.1016/S0926-6410(01)00053-2
- Niedenthal PM, Brauer M, Halberstadt JB, et al. When did her smile drop? Facial mimicry and the influences of emotional state on the detection of change in emotional expression. Cogn Emot 2001;15:853–64. doi:10.1080/02699930143000194
- Phelps EA, Ling S, Carrasco M. Emotion facilitates perception and potentiates the perceptual benefits of attention. Psychol Sci 2006;17:292–9. doi:10.1111/j.1467-9280.2006.01701.x
- Porcheron A, Mauger E, Russell R. Aspects of facial contrast decrease with age and are cues for age perception. PLoS One 2013;8:e57985.
- Posner MI. Orienting of attention. Q J Exp Psychol 1980;32:3–25. doi:10.1080/00335558008248231
- Posner MI, Snyder CR, Davidson BJ. Attention and the detection of signals. J Exp Psychol Gen 1980;109:160–74. doi:10.1037/0096-3445.109.2.160
- Pourtois G, Schettino A, Vuilleumier P. Brain mechanisms for emotional influences on perception and attention: what is magic and what is not. Biol Psychol 2013;92:492–512. doi:10.1016/j.biopsycho.2012.02.007
- Raymond JE, Fenske MJ, Tavassoli NT. Selective attention determines emotional responses to novel visual stimuli. Psychol Sci 2003;14:537–42. doi:10.1046/j.0956-7976.2003.psci_1462.x
- Sherman A, Sweeny TD, Grabowecky M, et al. Laughter exaggerates happy and sad faces depending on visual context. Psychon Bull Rev 2012;19:163–9. doi:10.3758/s13423-011-0198-2
- Störmer VS, Alvarez GA. Attention alters perceived attractiveness. Psychol Sci 2016;27:563–71.
- Störmer VS, McDonald JJ, Hillyard SA. Cross-modal cueing of attention alters appearance and early cortical processing of visual stimuli. Proc Natl Acad Sci USA 2009;106:22456–61.
- Treue S. Perceptual enhancement of contrast by attention. Trends Cogn Sci 2004;8:435–7. doi:10.1016/j.tics.2004.08.001
- Turatto M, Vescovi M, Valsecchi M. Attention makes moving objects be perceived to move faster. Vision Res 2007;47:166–78. doi:10.1016/j.visres.2006.10.002
- Wichmann FA, Hill NJ. The psychometric function: I. Fitting, sampling, and goodness of fit. Percept Psychophys 2001;63:1293–313.
- Winston J, O’Doherty J, Dolan R. Common and distinct neural responses during direct and incidental processing of multiple facial emotions. Neuroimage 2003;20:84–97.
- Wu W. What is conscious attention? Philos Phenomenol Res 2011;82:93–120.
- Young AW, Rowland D, Calder AJ, et al. Facial expression megamix: tests of dimensional and category accounts of emotion recognition. Cognition 1997;63:271–313. doi:10.1016/S0010-0277(97)00003-6