Scientific Reports. 2018 Nov 19;8:17039. doi: 10.1038/s41598-018-35259-w

Selective eye fixations on diagnostic face regions of dynamic emotional expressions: KDEF-dyn database

Manuel G Calvo 1,2, Andrés Fernández-Martín 3, Aida Gutiérrez-García 4, Daniel Lundqvist 5
PMCID: PMC6242984  PMID: 30451919

Abstract

Prior research using static facial stimuli (photographs) has identified diagnostic face regions (i.e., functional for recognition) of emotional expressions. In the current study, we aimed to determine attentional orienting, engagement, and time course of fixation on diagnostic regions. To this end, we assessed the eye movements of observers inspecting dynamic expressions that changed from a neutral to an emotional face. A new stimulus set (KDEF-dyn) was developed, which comprises 240 video-clips of 40 human models portraying six basic emotions (happy, sad, angry, fearful, disgusted, and surprised). For validation purposes, 72 observers categorized the expressions while gaze behavior was measured (probability of first fixation, entry time, gaze duration, and number of fixations). Specific visual scanpath profiles characterized each emotional expression: The eye region was looked at earlier and longer for angry and sad faces; the mouth region, for happy faces; and the nose/cheek region, for disgusted faces; the eye and the mouth regions attracted attention in a more balanced manner for surprise and fear. These profiles reflected enhanced selective attention to expression-specific diagnostic face regions. The KDEF-dyn stimuli and the validation data will be available to the scientific community as a useful tool for research on emotional facial expression processing.

Introduction

Facial expressions are assumed to convey information about a person’s current feelings, motives, intentions, and action tendencies. Most research on expression recognition has been conducted under a categorical view, using six basic expressions: happiness, anger, sadness, fear, disgust, and surprise1 (for a review, see2). Emotion recognition relies on expression-specific diagnostic (i.e., distinctive) features, in that they are necessary or sufficient for recognition of the respective emotion: Anger and sadness are more recognizable from the eye region (e.g., frowning), happiness and disgust are more recognizable from the mouth region (e.g., smiling), and recognition of fear and surprise depends on both regions3–8. In the current study, we aimed to determine the profile of overt attentional orienting to and engagement with such expression-diagnostic features; that is, whether, when, and for how long they selectively attract eye fixations from observers. Importantly, we addressed this issue for dynamic facial expressions, thus extending typical approaches using photographic stimuli.

Prior eyetracking research using photographs of static expressions has provided inconclusive evidence regarding the pattern and role of selective visual attention to facial features. First, during expression recognition, gaze allocation is often biased towards diagnostic face regions (e.g., the eye region receives more attention in sad and angry faces, whereas the mouth region receives more attention in happy and disgusted faces3,8–11). However, in other studies, the proportion of fixations on the different face areas was modulated by expression less consistently or was not affected12–15. Second, increased visual attention to diagnostic facial features is correlated with improved recognition performance16. Looking at the mouth region contributes to recognition of happiness3,8 and disgust8, and looking at the eye/brow area contributes to recognition of sadness3 and anger8. However, results are less consistent for other emotions, and the role of fixation on diagnostic regions depends on expressive intensity: Recognition of subtle emotions is facilitated by fixations on the eyes (with a lesser contribution from the mouth), whereas recognition of extreme emotions is less dependent on fixations14.

Nonetheless, facial expressions are generally dynamic in daily social interaction. In addition, research has shown that motion benefits facial affect recognition (see17–19). Consistently, relative to static expressions, viewing dynamic expressions enhances brain activity in regions associated with the processing of socially relevant (superior temporal sulci) and emotionally relevant (amygdala) information20,21, which might explain the dynamic expression recognition advantage. Accordingly, it is important to investigate oculomotor behavior during the recognition of this type of expression. To our knowledge, only a few studies have measured fixation patterns during dynamic facial expression processing, with non-convergent results. Lischke et al.22 reported an enhanced gaze duration bias towards the eye region of angry, sad, and fearful faces, while gaze duration was longer for the mouth region of happy faces (although differences were not statistically analyzed). In contrast, in the Blais, Fiset, Roy, Saumure-Régimbald, and Gosselin23 study, fixation patterns did not differ across six basic expressions and were not linked to a differential use of facial features during recognition.

It is, however, possible that the lack of fixation differences across expressions in the Blais et al.23 study was due to the use of (a) a short stimulus display (500 ms), thereby limiting the number of fixations (two fixations per trial); and (b) a small stimulus size (width: 5.72°), as the eyes and mouth were close to the center of the face (1.7° and 2.1°, respectively), which was the initial fixation location, and thus could be seen in parafoveal vision (which then probably curtailed saccades). If so, such stimulus conditions might have reduced the sensitivity of measurement. Yet, it must be noted that, in the absence of differences as a function of expression, fixations did vary as a function of display mode, with more fixations on the left eye and the mouth in the static than in the dynamic condition23. To clarify this issue, first, we used longer stimulus displays (1,033 ms), thus approximating the typical duration of expression unfolding for most basic emotions19,24. Second, we used larger face stimuli (8.8° width × 11.6° height, at an 80-cm viewing distance), which approximates the size of a real face (i.e., 13.8 × 18.5 cm, viewed from 1 m). In fact, in the Lischke et al.22 study (where fixation differences did occur as a function of expression), the stimulus display was longer (800 ms) and the size was larger (17° × 23.6°) than in the Blais et al.23 study.

An additional contribution of the current study involves the collection of norming eyetracking data for each of 240 video-clip stimuli, which will be available as a new dynamic expression stimulus set (KDEF-dyn) for other researchers. A number of dynamic expression databases have been developed (for a review, see25). To our knowledge, however, eyetracking measures have not been obtained for any of them. We thus make a contribution by devising a facial expression database for which eye movements and fixations are assessed while observers scan faces during emotional expression categorization. The current approach will provide information about the time course of selective attention to face regions, in terms of both orienting (as measured by the probabilities of entry and of first fixation on each region) and engagement (as indicated by gaze duration and number of fixations). If observers move their eyes to face regions that maximize performance in determining the emotional state of a face26, then regions with expression-specific diagnostic features should receive selective attention, in the form of earlier orienting or longer engagement, relative to other regions. Thus, in a confirmatory approach, we predict enhanced attention to the eye region of angry and sad faces, to the mouth region of happy and disgusted faces, and more balanced attention to the eyes and mouth of fearful and surprised faces. In an exploratory approach, we aim to examine how each attentional component, i.e., orienting and engagement, is affected.

We used a dynamic version (KDEF-dyn) of the original (static) Karolinska Directed Emotional Faces (KDEF) database27. The photographic KDEF stimuli have been examined in large norming studies28,29, and widely employed in behavioral30–32 and neurophysiological33–35 research (according to Google Scholar, the KDEF has been cited in over 2,000 publications). We built dynamic expressions by applying morphing animation to the KDEF photographs, whereby a neutral face changed towards a full-blown emotional face, trying to mimic real-life expressions and the average natural speed of emotional expression unfolding24,36. This approach provides fine-grained control and standardization of duration, speed, and intensity. Further, dynamically morphed facial expression stimuli have often been employed in behavioral18,24,37,38 and neurophysiological39–42 research. Although this type of expression may not convey the same naturalness as online video recordings, some studies indicate that natural expressions unfold in a uniform and ballistic way43,44, thus actually sharing properties with morphed dynamic expressions.

Method

Participants

Seventy-two university undergraduates (40 female; 32 male; aged 18 to 30 years; M = 21.3) from different courses participated for course credit or payment, after providing written informed consent. A power calculation using G*Power (version 3.1.9.2)45 showed that 42 participants would be sufficient to detect a medium effect size (Cohen’s d = 0.60) at α = 0.05, with power of 0.98, in an a priori power analysis for a repeated-measures ANOVA with within-subject factors (type of expression and face region). As this was a norming study of stimulus materials, a larger participant sample (i.e., 72) was used to obtain stable and representative mean scores. The study was approved by the University of La Laguna ethics committee (CEIBA, protocol number 2017–0227), and conducted in accordance with the WMA Declaration of Helsinki 2008.
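For readers who want to reproduce this kind of calculation without G*Power, the sketch below approximates it in Python with a paired-t-test analogue from statsmodels. This is an assumption-laden stand-in: G*Power's "repeated measures, within factors" routine additionally models the correlation among repeated measures, so its reported value (0.98 for n = 42) will not be matched exactly.

```python
# Illustrative sketch (not the authors' code): approximating the reported
# G*Power calculation with a paired-t-test analogue. The exact repeated-measures
# ANOVA power value depends on assumptions (e.g., correlation among measures)
# that this simplification does not model.
from statsmodels.stats.power import TTestPower

power = TTestPower().solve_power(effect_size=0.60,   # Cohen's d assumed in the paper
                                 nobs=42,            # sample size reported as sufficient
                                 alpha=0.05,
                                 alternative='two-sided')
print(f"Approximate power for a paired contrast, d = 0.60, n = 42: {power:.3f}")
```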

Stimuli

The color photographs of 40 people (20 female; 20 male) from the KDEF set27, each displaying six basic expressions (happiness, sadness, anger, fear, disgust, and surprise), were used (see the KDEF identities in Supplemental Datasets S1A and S1B). For the current study, 240 dynamic video-clip versions (1,033 ms duration) of the original photographs were constructed. The face stimuli were subjected to morphing by means of FantaMorph© software (v. 5.4.2, Abrosoft, Beijing, China). For each expression and poser, we created a sequence of 31 (33.33-ms) frames, with intensity increasing at a rate of 30 frames per second, starting with a neutral face as the first frame (frame 0; original KDEF) and ending with the peak of an emotional face (either happy, sad, etc.) in the last frame (frame 30; original KDEF). A similar procedure and display duration have been used in prior research19,46,47. The stimuli and the norming data are available at http://kdef.se/versions.html (KDEF-dyn II).
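As an illustration of the morphing logic (not the authors' actual pipeline), the sketch below builds a 31-frame, 30-fps neutral-to-peak sequence with a plain linear cross-dissolve in OpenCV. FantaMorph performs landmark-based warping rather than a simple cross-dissolve, and the file names below are hypothetical placeholders.

```python
# Simplified illustration only: a linear cross-dissolve between a neutral and a
# peak-expression photograph of the same poser (images must be the same size).
import cv2

neutral = cv2.imread("AF01NES.JPG")    # neutral KDEF photograph (placeholder path)
peak = cv2.imread("AF01HAS.JPG")       # full-blown happy KDEF photograph (placeholder path)

fps = 30                               # 30 frames per second, as in the paper
n_frames = 31                          # frames 0 (neutral) to 30 (peak) = 1,033 ms
h, w = neutral.shape[:2]
writer = cv2.VideoWriter("AF01_happy_dyn.mp4",
                         cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

for i in range(n_frames):
    alpha = i / (n_frames - 1)         # expression intensity: 0.0 -> 1.0
    frame = cv2.addWeighted(neutral, 1.0 - alpha, peak, alpha, 0.0)
    writer.write(frame)

writer.release()
```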

Procedure

All 72 participants were presented with all 240 video-clips (40 posers × 6 expressions) in six blocks of 40 trials each. Block order was counterbalanced, and trial order and type of expression were randomized for each participant. The stimuli were displayed on a computer screen by means of SMI Experiment Center™ 3.6 software (SensoMotoric Instruments GmbH, Teltow, Germany). Participants were asked to indicate which of six basic expressions was shown on each trial by pressing one of six keys. Twelve video-clips served as practice trials, with two additional models each showing the six expressions.
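A minimal sketch of how such a trial list could be constructed is shown below. The experiment itself was run with SMI Experiment Center, and any block-composition details beyond those stated above (e.g., the exact counterbalancing scheme) are assumptions.

```python
# One possible trial-list construction: 240 clips split into 6 blocks of 40,
# randomized per participant, with block order rotated across participants.
import random

POSERS = [f"poser_{i:02d}" for i in range(1, 41)]            # 40 KDEF models
EXPRESSIONS = ["happiness", "sadness", "anger", "fear", "disgust", "surprise"]

def build_session(participant_id: int, n_blocks: int = 6, block_size: int = 40):
    rng = random.Random(participant_id)                      # reproducible per participant
    trials = [(p, e) for p in POSERS for e in EXPRESSIONS]   # 240 video-clips
    rng.shuffle(trials)                                      # randomize trial order/expression
    blocks = [trials[i * block_size:(i + 1) * block_size] for i in range(n_blocks)]
    offset = participant_id % n_blocks                       # simple rotation for block order
    return blocks[offset:] + blocks[:offset]

session = build_session(participant_id=7)
print(len(session), "blocks of", len(session[0]), "trials")
```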

The sequence of events on each trial was as follows. After an initial 500-ms central fixation cross on the screen, a video-clip showed a facial expression unfolding for 1,033 ms. The face subtended a visual angle of 11.6° (height) × 8.8° (width) at an 80-cm viewing distance. Following face offset, six small boxes appeared horizontally on the screen for responding, with each box associated with a number/label (e.g., 4: happy; 5: sad, etc.). For expression categorization, participants pressed one key (from 4 to 9) in the upper row of a standard computer keyboard with their dominant index finger. The assignment of expressions to keys was counterbalanced. The chosen response and the reaction time (from the offset of the video-clip) were recorded. There was a 1,500-ms intertrial interval.
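The stated geometry can be checked with elementary trigonometry. The worked example below derives the on-screen face size in cm (not reported directly) from the visual angles and viewing distance given above, and relates it to the real-face comparison mentioned in the Introduction.

```python
# Worked check of the stimulus geometry (angles and distances from the text;
# the on-screen size in cm is derived here, not reported directly).
import math

def visual_angle_deg(size_cm: float, distance_cm: float) -> float:
    """Visual angle subtended by an object of a given size at a given distance."""
    return 2 * math.degrees(math.atan(size_cm / (2 * distance_cm)))

def size_cm(angle_deg: float, distance_cm: float) -> float:
    """Physical size corresponding to a visual angle at a given distance."""
    return 2 * distance_cm * math.tan(math.radians(angle_deg / 2))

# Face on screen: 8.8 deg x 11.6 deg at 80 cm -> roughly 12.3 cm x 16.2 cm.
print(size_cm(8.8, 80), size_cm(11.6, 80))
# A real face of 13.8 cm x 18.5 cm viewed from 1 m subtends roughly 7.9 deg x 10.6 deg,
# i.e., of the same order as the on-screen stimulus.
print(visual_angle_deg(13.8, 100), visual_angle_deg(18.5, 100))
```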

Design and measures

A within-subjects experimental design was used, with expression (happiness, sadness, anger, fear, disgust, and surprise) as a factor. As dependent variables, we measured three aspects of expression categorization performance: (a) hits, i.e., the probability that responses coincided with the displayed expression (e.g., responding “happy” when the face stimulus was intended to convey happiness); (b) reaction times (RTs) for hits; and (c) type of confusions, i.e., the probability that each target stimulus (the displayed expression) was categorized as each of the other five, non-target expressions (e.g., if the target was anger in a trial, the five non-targets were happiness, sadness, disgust, fear, and surprise).
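Assuming a long-format trial log (one row per trial; the column names below are illustrative, not the authors' file format), the three performance measures could be computed as follows.

```python
# Sketch of deriving hits, hit RTs, and the confusion matrix from raw trial data.
import pandas as pd

# Toy trial data in the assumed long format; in practice this would be loaded
# from the raw response logs.
trials = pd.DataFrame({
    "stimulus_expression": ["happiness", "happiness", "fear", "fear", "surprise"],
    "response_expression": ["happiness", "happiness", "surprise", "fear", "surprise"],
    "rt_ms": [810, 845, 1500, 1390, 940],
})

trials["hit"] = trials["stimulus_expression"] == trials["response_expression"]
hits = trials.groupby("stimulus_expression")["hit"].mean()                      # (a) hit rate
hit_rts = trials[trials["hit"]].groupby("stimulus_expression")["rt_ms"].mean()  # (b) hit RTs
confusions = pd.crosstab(trials["stimulus_expression"],                         # (c) confusion
                         trials["response_expression"],                         #     matrix; rows
                         normalize="index")                                     #     sum to 1
print(hits, hit_rts, confusions, sep="\n\n")
```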

Eye movements were recorded by means of a 500-Hz RED system eyetracker (binocular; spatial resolution: 0.03°; gaze position accuracy: 0.4°; SensoMotoric Instruments, SMI, Teltow, Germany). The following measures were obtained: (a) the probability that the first fixation on the face (following the initial fixation on the central fixation point on the nose) landed on each of three regions of interest (see below); (b) the probability of entry into each region during the display period (entry times are also reported in Supplemental Datasets S1A, but were not analyzed because some regions were not looked at by all viewers; the mean entry times are therefore informative only when the probability of entry is taken into account); (c) the number of fixations (of ≥80 ms duration) on each region; and (d) gaze duration, or total fixation time, on each region. The probability of first fixation and the probability of entry assessed attentional orienting; the number of fixations and gaze duration assessed attentional engagement. In addition, to examine the time course of selective attention to face regions along expression unfolding, we computed the proportion of gaze duration for each face region during each of 10 consecutive 100-ms intervals (i.e., from 1 to 100 ms, from 101 to 200 ms, etc.) across the 1,033-ms display (the final 33 ms were not included). Net gaze duration was obtained and analyzed after saccades and blinks were excluded. For saccade and fixation detection, we used a velocity-based algorithm with a 40°/s peak velocity threshold and an 80-ms minimum fixation duration (for details, see48).
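The detection parameters above correspond to a standard velocity-threshold (I-VT) scheme. The sketch below is a generic reimplementation of such a parser for 500-Hz samples already expressed in degrees of visual angle; it is not SMI's proprietary event-detection code.

```python
# Minimal velocity-threshold (I-VT style) fixation parser: samples slower than
# 40 deg/s are fixation samples; runs shorter than 80 ms are discarded.
import numpy as np

def detect_fixations(x_deg, y_deg, sampling_rate=500.0,
                     velocity_threshold=40.0, min_fix_dur=0.080):
    dt = 1.0 / sampling_rate
    vx = np.gradient(x_deg, dt)
    vy = np.gradient(y_deg, dt)
    speed = np.hypot(vx, vy)                           # gaze speed in deg/s
    is_fix = speed < velocity_threshold                # True for fixation samples

    fixations, start = [], None
    for i, f in enumerate(np.append(is_fix, False)):   # sentinel closes the last run
        if f and start is None:
            start = i
        elif not f and start is not None:
            dur = (i - start) * dt
            if dur >= min_fix_dur:                     # drop fixations shorter than 80 ms
                fixations.append({"onset_s": start * dt,
                                  "duration_s": dur,
                                  "x": float(np.mean(x_deg[start:i])),
                                  "y": float(np.mean(y_deg[start:i]))})
            start = None
    return fixations

# Example with synthetic data: 1 s of steady gaze plus small noise at 500 Hz.
rng = np.random.default_rng(0)
x = 5.0 + rng.normal(0, 0.02, 500)
y = 3.0 + rng.normal(0, 0.02, 500)
print(detect_fixations(x, y))
```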

Three face regions of interest were defined: eye and eyebrow (henceforth, eye region), nose/cheek (henceforth, nose), and mouth (see their sizes and shapes in Fig. 1). About 97% of total fixations occurred within these three regions (the forehead and the chin were excluded because they received only 1.2% of fixations).

Figure 1.

Regions of interest (with shapes and sizes, in pixels) of face stimuli used for eye-fixation assessment.
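Assigning fixations to these regions amounts to a point-in-polygon test. The sketch below illustrates this with matplotlib's Path; the rectangular vertices are made-up placeholders, whereas the actual ROI shapes and pixel sizes are those shown in Fig. 1.

```python
# Sketch of assigning fixation coordinates (in pixels) to labelled ROIs.
from matplotlib.path import Path
import numpy as np

ROIS = {
    "eyes":  Path([(60, 120), (500, 120), (500, 260), (60, 260)]),    # placeholder vertices
    "nose":  Path([(140, 260), (420, 260), (420, 430), (140, 430)]),
    "mouth": Path([(140, 430), (420, 430), (420, 560), (140, 560)]),
}

def assign_roi(fix_x, fix_y):
    """Return the ROI label for each fixation (or None if outside all ROIs)."""
    points = np.column_stack([fix_x, fix_y])
    labels = [None] * len(points)
    for name, path in ROIS.items():
        inside = path.contains_points(points)
        for i, hit in enumerate(inside):
            if hit and labels[i] is None:
                labels[i] = name
    return labels

print(assign_roi([250, 300], [200, 500]))   # -> ['eyes', 'mouth']
```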

Results

Given that one major aim of the study was to obtain and provide other researchers with validation measures for each stimulus in the KDEF-dyn database, the statistical analyses were performed by items, with the 240 video-clip stimuli as the units of analysis (and scores averaged across the 72 participants). For all the following analyses, post hoc multiple comparisons across expressions used a familywise error rate (FWER) procedure with single-step Bonferroni corrections (i.e., the same adjustment applied to each p value), at a p < 0.05 threshold.
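Because the analyses are by items and each video-clip portrays a single expression, the post hoc contrasts can be approximated with independent-samples t-tests between expression categories, Bonferroni-corrected, as sketched below. This is a generic illustration; the authors' exact post hoc procedure may differ in detail.

```python
# Sketch of single-step Bonferroni-corrected pairwise comparisons by items.
from itertools import combinations
from scipy import stats
from statsmodels.stats.multitest import multipletests

def bonferroni_pairwise(scores_by_expression):
    """scores_by_expression: dict mapping expression label -> list of item scores."""
    pairs, pvals = [], []
    for a, b in combinations(scores_by_expression, 2):
        t, p = stats.ttest_ind(scores_by_expression[a], scores_by_expression[b])
        pairs.append((a, b))
        pvals.append(p)
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
    return [(a, b, p_raw, p_corr, sig)
            for (a, b), p_raw, p_corr, sig in zip(pairs, pvals, p_adj, reject)]

# Toy example: mean hit rates per video-clip (fabricated numbers for illustration).
demo = {"happiness": [0.98, 0.97, 0.99, 0.96],
        "fear": [0.55, 0.62, 0.58, 0.66],
        "anger": [0.93, 0.91, 0.95, 0.90]}
for row in bonferroni_pairwise(demo):
    print(row)
```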

Analyses of expression recognition performance and confusions

For the probability of accurate responses, a one-way (6: Expression stimulus: happiness, surprise, anger, sadness, disgust, and fear) ANOVA yielded significant effects, F(5, 234) = 39.34, p < 0.001, ηp2 = 0.46. Post hoc contrasts revealed better recognition of happiness, surprise, and anger (which did not differ from one another), relative to sadness and disgust (which did not differ), which were recognized better than fear (see Table 1, Hits row). The correct response reaction times, F(5, 234) = 50.26, p < 0.001, ηp2 = 0.52, were faster for happiness than for all the other expressions, followed by surprise, followed by disgust, anger, and sadness (which did not differ from one another), and fear was recognized most slowly (see Table 1, Hit RTs row).
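As a quick sanity check, the reported partial eta squared values can be recovered from the F statistics and their degrees of freedom, since ηp2 = SSeffect/(SSeffect + SSerror) = (F × dfeffect)/(F × dfeffect + dferror):

```python
# Recover partial eta squared from F and its degrees of freedom.
def partial_eta_squared(F, df_effect, df_error):
    return (F * df_effect) / (F * df_effect + df_error)

print(round(partial_eta_squared(39.34, 5, 234), 2))   # 0.46, matching the hit-rate ANOVA
print(round(partial_eta_squared(50.26, 5, 234), 2))   # 0.52, matching the hit-RT ANOVA
```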

Table 1.

Mean Proportion (%, with SDs in Parentheses) of Responses (Hits and Confusions), and Hit Reaction Times (in ms), for each Target (Stimulus) Expression.

| Stimulus Expression \ Response Expression | Happiness | Surprise | Anger | Sadness | Disgust | Fear |
|---|---|---|---|---|---|---|
| Happiness | **97.6a** (4.9) | 1.5b (4.8) | 0.1b (0.4) | 0.0b (0.3) | 0.8b (1) | 0.0b (0.2) |
| Surprise | 1.7bc (2.1) | **95.3a** (4.5) | 0.2cd (0.6) | 0.1d (0.5) | 0.4cd (0.8) | 2.3b (3.6) |
| Anger | 0.0c (0.3) | 0.6c (1.6) | **92.7a** (8.8) | 1.2bc (2.1) | 3.8b (6) | 1.7b (1.8) |
| Sadness | 0.3d (1.2) | 0.1d (2.2) | 1.4d (2.6) | **75.2a** (20.5) | 7.7c (10.9) | 14.3b (12.4) |
| Disgust | 0.1d (0.4) | 0.8cd (1.4) | 10.8b (15.9) | 4.9bc (10.1) | **80.4a** (18.4) | 3.0b (5.8) |
| Fear | 0.5e (0.8) | 26.0b (23.7) | 0.8de (1.6) | 2.4d (3.5) | 10.5c (12.4) | **59.8a** (20.4) |
| Hits | 97.6a (4.9) | 95.3a (4.5) | 92.7a (8.8) | 75.2b (20.5) | 80.4b (18.4) | 59.8c (20.4) |
| Hit RTs (ms) | 823a (80) | 958b (116) | 1,154c (148) | 1,239c (210) | 1,149c (235) | 1,429d (273) |

Note. For each expression stimulus category, scores with different letters (on the same row) are significantly different in post hoc multiple contrasts (p < 0.05, Bonferroni corrected); expressions sharing a letter are equivalent. Boldface for hits.

A 6 (Expression stimulus) × 6 (Expression response) ANOVA on confusions yielded interactive effects, F(25, 1170) = 581.13, p < 0.001, ηp2 = 0.92, which were decomposed by one-way (6: Expression response) ANOVAs for each expression stimulus separately (see Table 1). Facial happiness, F(5, 195) = 6489.44, p < 0.001, ηp2 = 0.99, was minimally confused. Surprise, F(5, 195) = 6781.17, p < 0.001, ηp2 = 0.99, was slightly confused with fear and happiness; anger, F(5, 195) = 2231.79, p < 0.001, ηp2 = 0.98, with disgust and fear; sadness, F(5, 195) = 241.36, p < 0.001, ηp2 = 0.86, with fear and disgust; disgust, F(5, 195) = 289.88, p < 0.001, ηp2 = 0.87, with anger, sadness, and fear; and fear, F(5, 195) = 94.24, p < 0.001, ηp2 = 0.71, was confused mainly with surprise.

Analyses of eye movement measures

A 6 (Expression stimulus) ×3 (Face region: eyes, nose/cheek, and mouth) ANOVA was conducted on each eye-movement measure. The significant interactions were decomposed by means of one-way (6: Expression) ANOVAs for each region. Post hoc multiple comparisons examined how much the processing of each expression relied on a face region more than other expressions did. The critical comparisons involved contrasts across expressions for each region (which was of identical size for all the expressions), rather than across regions for each expression (as regions were different in size, thus probably affecting gaze behavior). The first fixation on the nose was removed as uninformative, given that the initial fixation point was located on this region.

For probability of first fixation, effects of region, F(2, 468) = 2361.70, p < 0.001, ηp2 = 0.91, but not of expression, F(5, 234) = 1.90, p = 0.095, ns, and an interaction, F(10, 468) = 9.75, p < 0.001, ηp2 = 0.17, emerged. The one-way (Expression) ANOVA yielded effects for the eye region, F(5, 234) = 10.26, p < 0.001, ηp2 = 0.18, and the mouth, F(5, 234) = 17.05, p < 0.001, ηp2 = 0.27, but not the nose, F(5, 234) = 1.56, p = 0.17, ns. As indicated in Table 2 (means and multiple contrasts), (a) the eye region was more likely to be fixated first in angry faces relative to all the others, except for sad faces, which, along with surprised, disgusted, and fearful faces, were more likely to be fixated first on the eyes than happy faces were; and (b) the mouth region of happy faces was more likely to be fixated first, relative to the other expressions.

Table 2.

Mean Probability of First Fixation (and SDs) on each Face Region for each Expression.

| Stimulus Expression | Eyes M (SD) | Nose/Cheek M (SD) | Mouth M (SD) |
|---|---|---|---|
| Happiness | 0.478c (0.038) | 0.231a (0.045) | **0.236a** (0.050) |
| Surprise | 0.533b (0.047) | 0.218a (0.045) | 0.189b (0.048) |
| Anger | 0.548a (0.055) | 0.229a (0.050) | 0.154c (0.042) |
| Sadness | **0.542ab** (0.056) | 0.240a (0.046) | 0.156c (0.037) |
| Disgust | 0.516b (0.050) | 0.245a (0.051) | 0.168bc (0.046) |
| Fear | 0.512b (0.053) | 0.230a (0.052) | 0.193b (0.055) |

Note. Fixations following the initial fixation on the central fixation point. Within each face region (on the same column), across expressions, scores with different letters are significantly different (p < 0.05, Bonferroni corrected); scores sharing a letter are equivalent. Boldface indicates that the region was most characteristic of the respective expression.

For probability of entries, effects of region, F(2, 468) = 3274.54, p < 0.001, ηp2 = 0.93, expression, F(5, 234) = 4.66, p < 0.001, ηp2 = 0.09, and an interaction, F(10, 468) = 47.91, p < 0.001, ηp2 = 0.51, emerged. The one-way (Expression) ANOVA yielded effects for the eye region, F(5, 234) = 46.04, p < 0.001, ηp2 = 0.50, the nose, F(5, 234) = 20.03, p < 0.001, ηp2 = 0.30, and the mouth, F(5, 234) = 37.01, p < 0.001, ηp2 = 0.44. As indicated in Table 3 (means and multiple contrasts), (a) the probability of entry in the eye region was higher for the angry, sad, and surprised faces than for disgusted and happy faces; (b) it was higher in the nose region for happy and disgusted faces than for the others; and (c) it was highest in the mouth region for happy faces.

Table 3.

Mean Probability of Entry (and SDs) on each Face Region for each Expression.

| Stimulus Expression | Eyes M (SD) | Nose/Cheek M (SD) | Mouth M (SD) |
|---|---|---|---|
| Happiness | 0.714c (0.076) | 0.833a (0.041) | **0.505a** (0.058) |
| Surprise | **0.917a** (0.068) | 0.784b (0.045) | 0.374b (0.066) |
| Anger | **0.929a** (0.076) | 0.764b (0.045) | 0.301c (0.076) |
| Sadness | **0.923a** (0.079) | 0.759b (0.052) | 0.309c (0.084) |
| Disgust | 0.846b (0.077) | **0.828a** (0.036) | 0.356b (0.082) |
| Fear | 0.891ab (0.083) | 0.785b (0.047) | 0.358b (0.088) |

Note. Within each face region (on the same column), across expressions, scores with different letters are significantly different (p < 0.05, Bonferroni corrected); scores sharing a letter are equivalent. Boldface indicates that the region was most characteristic of the respective expression.

For gaze duration, effects of region, F(2, 468) = 2007.02, p < 0.001, ηp2 = 0.90, but not of expression (F < 1), and an interaction, F(10, 468) = 42.45, p < 0.001, ηp2 = 0.48, emerged. The one-way (Expression) ANOVA yielded effects for the eye region, F(5, 234) = 51.76, p < 0.001, ηp2 = 0.52, the nose, F(5, 234) = 10.60, p < 0.001, ηp2 = 0.19, and the mouth, F(5, 234) = 49.31, p < 0.001, ηp2 = 0.51. As indicated in Table 4 (means and multiple contrasts), (a) the eye region was fixated longer in angry and sad faces, relative to the others; (b) the nose region, in disgusted faces; and (c) the mouth, in happy faces.

Table 4.

Mean Gaze Duration (and SDs; in ms) on each Face Region for each Expression.

| Stimulus Expression | Eyes M (SD) | Nose/Cheek M (SD) | Mouth M (SD) |
|---|---|---|---|
| Happiness | 286d (30) | 385ab (30) | **220a** (35) |
| Surprise | 390b (47) | 356c (37) | 145b (34) |
| Anger | **427a** (53) | 359c (35) | 113c (35) |
| Sadness | **421a** (44) | 359c (39) | 115c (31) |
| Disgust | 359c (41) | **399a** (24) | 136b (36) |
| Fear | 383bc (51) | 369bc (34) | 142b (39) |

Note. Within each face region (on the same column), across expressions, scores with different letters are significantly different (p < 0.05, Bonferroni corrected); scores sharing a letter are equivalent. Boldface indicates that the region was most characteristic of the respective expression.

For number of fixations, effects of region, F(2, 468) = 1624.24, p < 0.001, ηp2 = 0.87, but not of expression, F(5, 234) = 2.06, p = 0.071, ns, and an interaction, F(15, 702) = 39.71, p < 0.001, ηp2 = 0.46, appeared. The one-way (Expression) ANOVA yielded effects for the eye region, F(5, 234) = 46.11, p < 0.001, ηp2 = 0.50, the nose, F(5, 234) = 6.87, p < 0.001, ηp2 = 0.13, and the mouth, F(5, 234) = 36.63, p < 0.001, ηp2 = 0.45. As indicated in Table 5 (means and multiple contrasts), (a) the eye region was fixated more frequently in angry, sad, and surprised faces; (b) the nose, in disgusted and happy faces; and (c) the mouth, in happy faces.

Table 5.

Mean Number of Fixations (and SDs) on each Face Region for each Expression.

| Stimulus Expression | Eyes M (SD) | Nose/Cheek M (SD) | Mouth M (SD) |
|---|---|---|---|
| Happiness | 1.13d (0.14) | **1.52a** (0.13) | **0.95a** (0.15) |
| Surprise | 1.58ab (0.18) | 1.43b (0.16) | 0.66b (0.14) |
| Anger | **1.66a** (0.21) | 1.41b (0.13) | 0.52c (0.15) |
| Sadness | **1.62ab** (0.20) | 1.40b (0.13) | 0.53c (0.17) |
| Disgust | 1.43c (0.15) | **1.53a** (0.11) | 0.63b (0.17) |
| Fear | 1.54bc (0.20) | 1.45ab (0.13) | 0.63b (0.17) |

Note. Within each face region (on the same column), across expressions, scores with different letters are significantly different (p < 0.05, Bonferroni corrected); scores sharing a letter are equivalent. Boldface indicates that the region was most characteristic of the respective expression.

Time course of selective attention to expression-diagnostic features

An overall Expression (6) × Region (3) × Interval (10) ANOVA was performed on the proportion of gaze duration for each region during each of 10 consecutive 100-ms intervals across expression unfolding. Effects of region, F(2, 702) = 2818.37, p < 0.001, ηp2 = 0.89, and interval, F(9, 6818) = 8.79, p < 0.001, ηp2 = 0.01, were qualified by interactions of region by expression, F(10, 702) = 61.10, p < 0.001, ηp2 = 0.47, interval by region, F(18, 6318) = 2178.64, p < 0.001, ηp2 = 0.86, and a three-way interaction, F(90, 6318) = 39.62, p < 0.001, ηp2 = 0.36 (see Fig. 2a,b,c; see also Supplemental S1C Tables). To decompose the three-way interaction, two-way Expression by Interval ANOVAs were run for each region, further followed by one-way ANOVAs testing the effect of Expression in each time window, with post hoc multiple comparisons (p < 0.05, Bonferroni corrected). This approach served to determine two aspects of the attentional time course: the threshold (i.e., the earliest interval at which) and the amplitude (i.e., the number of consecutive intervals over which) each face region was looked at more for one expression than for the others.
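A sketch of how gaze duration can be apportioned to the 100-ms bins is given below (input format assumed); fixations spanning a bin boundary contribute partial time to each bin they overlap.

```python
# Distribute each fixation's duration over consecutive 100-ms bins of the
# 1,000-ms analysis window, per region of interest.
import numpy as np

def gaze_by_interval(fixations, n_bins=10, bin_ms=100):
    """fixations: list of (onset_ms, duration_ms, roi); returns {roi: per-bin gaze in ms}."""
    edges = np.arange(0, (n_bins + 1) * bin_ms, bin_ms)
    totals = {}
    for onset, duration, roi in fixations:
        offset = onset + duration
        per_bin = totals.setdefault(roi, np.zeros(n_bins))
        for b in range(n_bins):
            lo, hi = edges[b], edges[b + 1]
            overlap = max(0.0, min(offset, hi) - max(onset, lo))
            per_bin[b] += overlap
    return totals

# Toy fixation sequence (onset and duration in ms, with its ROI label).
example = [(120, 250, "eyes"), (430, 300, "mouth"), (780, 180, "eyes")]
print(gaze_by_interval(example))
```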

Figure 2.

(a,b,c) Time course of fixation on each region. Proportion of fixation time on each region (a: Eyes; b: Nose/cheek; c: Mouth) for each facial expression across 10 consecutive 100-ms intervals. For each interval, expressions within different dotted circles/ovals are significantly different from one another (post hoc multiple contrasts, p < 0.05, Bonferroni corrected); expressions within the same circle/oval are equivalent. For (b), the scale has been slightly stretched to make differences between expressions easier to see.

For the eye region, effects of expression, F(5, 234) = 51.76, p < 0.001, ηp2 = 0.53, and interval, F(9, 2106) = 556.98, p < 0.001, ηp2 = 0.70, and an interaction, F(45, 2106) = 35.06, p < 0.001, ηp2 = 0.43, appeared. Expression effects were significant for all the intervals from the 301-to-400 ms time window onwards, with statistical significance ranging between F(5, 234) = 6.86, p < 0.001, ηp2 = 0.13 and F(5, 234) = 72.25, p < 0.001, ηp2 = 0.61. The post hoc contrasts and the significant differences across expressions within each interval are shown in Fig. 2a. An advantage emerged for sad and angry expressions, with the threshold located at the 401-to-500-ms interval, where their eye regions attracted more fixation time than for all the other expressions, and the amplitude of this advantage remained until 900 ms post-stimulus onset. Secondary advantages appeared for surprised and fearful faces, relative to disgusted and happy faces (see Fig. 2a).

For the nose/cheek region, effects of expression, F(5, 234) = 10.86, p < 0.001, ηp2 = 0.19, and interval, F(9, 2106) = 2910.16, p < 0.001, ηp2 = 0.93, were qualified by an interaction, F(45, 2106) = 4.58, p < 0.001, ηp2 = 0.09. Expression effects were significant for all the intervals from the 401-to-500 ms time window onwards, ranging between F(5, 234) = 5.67, p < 0.001, ηp2 = 0.11 and F(5, 234) = 15.62, p < 0.001, ηp2 = 0.25. The post hoc contrasts and the significant differences across expressions within each interval are shown in Fig. 2b. An advantage emerged for disgusted expressions over all the others, except for happy faces, with the threshold located at the 401-to-500-ms interval: The nose/cheek region attracted more fixation time for disgusted faces than for all the other expressions (except happy faces), and the amplitude of this advantage remained until the end of the 1,000-ms display.

For the mouth region, effects of expression, F(5, 234) = 49.96, p < 0.001, ηp2 = 0.52, interval, F(9, 2106) = 1206.45, p < 0.001, ηp2 = 0.84, and an interaction, F(45, 2106) = 38.87, p < 0.001, ηp2 = 0.45, emerged. Expression effects were significant from the 301-to-400 ms interval onwards, ranging between F(5, 234) = 6.99, p < 0.001, ηp2 = 0.13 and F(5, 234) = 70.02, p < 0.001, ηp2 = 0.60. The post hoc multiple contrasts and the significant differences across expressions within each interval are shown in Fig. 2c. An advantage emerged for happy expressions over all the others, with the threshold located at the 401-to-500-ms interval: The smiling mouth region attracted more fixation time than the mouth region of all the other expressions, and the amplitude of this advantage remained until the end of the 1,000-ms display. Secondary advantages appeared for surprised and fearful faces, relative to sad and angry faces (see Fig. 2c).

Potentially spurious results involving the nose/cheek region

The eye and the mouth regions are typically the most expressive sources in a face and, in fact, most of the statistical effects reported above emerged for these regions. Yet, for disgusted (and, to a lesser extent, happy) expressions, effects also appeared in the nose/cheek region (e.g., longer gaze duration). As indicated in the following analyses, these effects, rather than being spurious or irrelevant, can be explained as a function of morphological changes in the nose/cheek region of such expressions.

According to FACS (Facial Action Coding System) proposals49, facial disgust is typically characterized by AU9 (Action Unit 9; nose wrinkling or furrowing), which directly engages the nose/cheek region, and happiness is characterized by AU6 (cheek raiser) and AU12 (lip corner puller), which engage the mouth region and extend to the nose/cheek region. We used automated facial expression analysis50,51 by means of Emotient FACET SDK v6.1 software (iMotions; http://emotient.com/index.php) to assess these AUs in our stimuli. A one-way (6: Expression) ANOVA revealed higher AU9 scores for disgusted faces (M = 3.48) relative to all the others (ranging from −5.22 [surprise] to 0.19 [anger]), F(5, 234) = 134.46, p < 0.001, ηp2 = 0.74. Relatedly, for happy faces, AU6 (M = 2.88) and AU12 (M = 4.06) scores were higher than for all the others, F(5, 234) = 126.00, p < 0.001, ηp2 = 0.73 (AU6 ranging from −2.32 [surprise] to 1.02 [disgust]), and F(5, 234) = 204.85, p < 0.001, ηp2 = 0.81 (AU12 ranging from −1.80 [anger] to −0.76 [fear]), respectively.

Discussion

The major goal of the present study was to investigate gaze behavior during recognition of dynamic facial expressions changing from neutral to emotional (happy, sad, angry, fearful, disgusted, or surprised). We determined selective attentional orienting to and engagement with expression-diagnostic regions; that is, those that have been found to contribute to (in that they are sufficient or necessary for) recognition3–8. As a secondary goal, we also aimed to validate a new stimulus set (KDEF-dyn) of dynamic facial expressions, and to provide other researchers with norming data on categorization performance and eye fixation profiles for this instrument.

The relative recognition accuracies, efficiency, and confusions across expressions in the current study are consistent with those in prior research on emotional expression categorization. With static face stimuli, (a) recognition performance is typically higher for facial happiness, followed by surprise, which are higher than for sadness and anger, followed by disgust and fear18,41,52,53; (b) happy faces are recognized faster, and fear is recognized most slowly, across different response systems4,53–55; and (c) confusions occur mainly between disgust and anger, surprise and fear, and sadness and fear28,38,55,56. Regarding dynamic expressions in online video recordings, a pattern of recognition accuracies and reaction times comparable to ours (except for the lack of confusion of sadness as fear) has been found in prior research19. In addition, in studies using facial expressions in dynamic morphing format18,38,41,57, the pattern of expression recognition accuracy and confusions was also comparable to that in the current study. Thus, our recognition performance data concur with prior research data from static and dynamic expressions. This validates the KDEF-dyn set, and allows us to go forward and examine the central issues of the present approach concerning selective attention to dynamic expression-diagnostic face regions.

Our major contribution dealt with selective overt attention during facial expression processing, as reflected by eye movements and fixations. These measures have been obtained in many prior studies using static faces3,15,58,59, but scarcely in studies using dynamic faces22,23. Lischke et al.22 reported a trend towards longer gaze durations for expression-specific regions (i.e., the eyes of angry, sad, and fearful faces, and the mouth of happy faces). Our own results generally agree with these findings (except for fear) and extend them to additional expressions (disgust and surprise) and other eye-movement measures. In contrast, Blais et al.23 found no differences across the six basic expressions of emotion. However, as we argued in the Introduction, the lack of fixation differences in the Blais et al.23 study could be due to the use of a short stimulus display (500 ms) and a small stimulus size (5.72° width). In the current study (as in Lischke et al.22), we used a longer display (1,033 ms) and a larger stimulus size (8.8° width) to increase the sensitivity of measurement, which probably allowed selective attention effects to emerge as a function of face region and expression.

The current study addressed two aspects of selective visual attention to diagnostic features in dynamic expressions that were not considered previously: the distinction and time course of attentional orienting and engagement. As summarized in Fig. 3 (also Fig. 2a,b,c), the effects on orienting and engagement were generally convergent (except for minor discrepancies regarding disgusted faces): (a) happy faces were characterized by selective orienting to and engagement with the mouth region, which showed a time course advantage (i.e., both an earlier threshold and a longer amplitude of visual processing), relative to the other expressions; (b) angry and sad faces were characterized by orienting to and engagement with the eye region, with an earlier and longer time course advantage; (c) disgusted faces were characterized mainly by engagement with the nose/cheek region, with a time course advantage; and (d) for surprised and fearful faces, both orienting and engagement were attracted by the eyes and the mouth in a balanced manner, with no dominance. This suggests that facial happiness, anger, sadness, and disgust processing relies on the analysis of single features (either the eyes, the mouth, or the nose), whereas facial surprise and fear processing would require a more holistic integration (see60,61). Further, our findings reveal a close relationship between expression-specific diagnostic regions3–5,7,8 and selective attention to them for dynamic (not only for static) facial expressions.

Figure 3.

Summary of major findings on gaze behavior. Preferentially fixated face regions (probability of first fixation, probability of entry, gaze duration, and number of fixations) and time course of the fixation advantage (threshold, i.e., earliest time point; and amplitude, i.e., duration of the advantage) across expressions. Asterisks indicate a delayed and partial advantage for surprise and fear, relative to some, but not all, expressions (i.e., relative to disgust and happiness for the eye region, or to sadness and anger for the mouth region).

These findings have theoretical implications regarding the functional value of fixation profiles for expression categorization. It has been argued that fixation profiles reflect attention to the most diagnostic regions of a face for each emotion8. We have shown that the diagnostic facial features previously found to contribute to expression recognition3–8 are also the ones receiving earlier and longer overt attention during expression categorization. This allows us to infer that enhanced selective fixation on the diagnostic regions of the respective expressions is functional for (i.e., facilitates) recognition. This is consistent with the hypothesis that observers move their eyes to face regions that maximize performance in determining the emotional state of a face26, and with the hypothesized predictive value of fixation patterns in recognizing emotional faces14. Nevertheless, beyond the aims and scope of the current study, an approach that directly addresses this issue should manipulate the visual availability or unavailability of diagnostic face regions and examine how this affects actual expression recognition.

There are practical implications for an effective use of the current KDEF-dyn database: If the scanpath profiles when inspecting a face are functional (due to the diagnostic value of face regions), then such profiles can be taken as criteria for stimulus selection. We used a relatively large sample of stimuli (40 different models; 240 video-clips), which allows for selection of sub-samples depending on different research purposes (expression categorization, time course of attention, orienting, or engagement). Our stimuli vary in how much the respective scanpaths reflect the dominance (e.g., earlier first fixation, longer gaze duration, etc.) of diagnostic regions for each expression, and how much the scanpaths match the ideal pattern (e.g., earlier and longer gaze duration on the eye region of angry faces, etc.). This information can be obtained from our datasets (Supplemental Datasets S1A and S1B). Researchers could thus choose the stimulus models having the regions with enhanced attentional orienting or engagement, or a speeded time course (e.g., threshold) of attention. Of course, selection can also be made on the basis of recognition performance (hits, categorization efficiency, and type of confusions). Thus, the current study provides researchers with a useful methodological tool.

To conclude, we developed a set of morphed dynamic facial expressions of emotion (KDEF-dyn; see also62 for a complementary study using different measures). Expression recognition data were consistent with findings from prior research using static and other dynamic expressions. As a major contribution, eye-movement measures assessed selective attentional orienting and engagement, and their time course, for six basic emotions. Specific attentional profiles characterized each emotion: The eye region was looked at earlier and longer for angry and sad faces; the mouth region, for happy faces; the nose/cheek region, for disgusted faces; and the eye and the mouth regions attracted attention in a more balanced manner for surprise and fear. This reveals selective visual attention to diagnostic features typically facilitating expression recognition.

Electronic supplementary material

S1A Dataset (102.3KB, xlsx)
S1B Dataset (78.8KB, xlsx)
S1C Tables (169.6KB, pdf)

Acknowledgements

This research was supported by Grant PSI2014-54720-P to MC from the Spanish Ministerio de Economía y Competitividad.

Author Contributions

M.C. designed the study and wrote the manuscript. A.F.M. developed the materials, conducted the experiment, and compiled the eye-movement data. A.G.G. developed the materials and performed the data analysis. D.L. wrote the manuscript. All authors reviewed the manuscript and approved the final version for submission.

Data Availability

The authors declare that the data of the study are included in Supplemental Datasets S1A and S1B linked to this manuscript.

Competing Interests

The authors declare no competing interests.

Footnotes

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Supplementary information accompanies this paper at 10.1038/s41598-018-35259-w.

References

1. Ekman P, Cordaro D. What is meant by calling emotions basic. Emotion Review. 2011;3(4):364–370. doi: 10.1177/1754073911410740.
2. Calvo MG, Nummenmaa L. Perceptual and affective mechanisms in facial expression recognition: An integrative review. Cogn Emot. 2016;30(6):1081–1106. doi: 10.1080/02699931.2015.1049124.
3. Beaudry O, Roy-Charland A, Perron M, Cormier I, Tapp R. Featural processing in recognition of emotional facial expressions. Cogn Emot. 2014;28(3):416–432. doi: 10.1080/02699931.2013.833500.
4. Calder AJ, Young AW, Keane J, Dean M. Configural information in facial expression perception. Journal of Experimental Psychology: Human Perception and Performance. 2000;26(2):527–551. doi: 10.1037/0096-1523.26.2.527.
5. Calvo MG, Fernández-Martín A, Nummenmaa L. Facial expression recognition in peripheral versus central vision: Role of the eyes and the mouth. Psychological Research. 2014;78(2):180–195. doi: 10.1007/s00426-013-0492-x.
6. Kohler CG, et al. Differences in facial expressions of four universal emotions. Psychiatry Res. 2004;128(3):235–244. doi: 10.1016/j.psychres.2004.07.003.
7. Smith ML, Cottrell GW, Gosselin F, Schyns PG. Transmitting and decoding facial expressions. Psychological Science. 2005;16(3):184–189. doi: 10.1111/j.0956-7976.2005.00801.x.
8. Schurgin MW, et al. Eye movements during emotion recognition in faces. Journal of Vision. 2014;14(13):1–16. doi: 10.1167/14.13.14.
9. Calvo MG, Nummenmaa L. Detection of emotional faces: Salient physical features guide effective visual search. J Exp Psychol Gen. 2008;137(3):471–494. doi: 10.1037/a0012771.
10. Ebner NC, He Y, Johnson MK. Age and emotion affect how we look at a face: Visual scan patterns differ for own-age versus other-age emotional faces. Cogn Emot. 2011;25(6):983–997. doi: 10.1080/02699931.2010.540817.
11. Eisenbarth H, Alpers GW. Happy mouth and sad eyes: Scanning emotional facial expressions. Emotion. 2011;11(4):860–865. doi: 10.1037/a0022758.
12. Bombari D, et al. Emotion recognition: The role of featural and configural face information. Quarterly Journal of Experimental Psychology. 2013;66(12):2426–2442. doi: 10.1080/17470218.2013.789065.
13. Jack RE, Blais C, Scheepers C, Schyns PG, Caldara R. Cultural confusions show that facial expressions are not universal. Curr Biol. 2009;19(18):1543–1548. doi: 10.1016/j.cub.2009.07.051.
14. Vaidya AR, Jin C, Fellows LK. Eye spy: The predictive value of fixation patterns in detecting subtle and extreme emotions from faces. Cognition. 2014;133(2):443–456. doi: 10.1016/j.cognition.2014.07.004.
15. Wells LJ, Gillespie SM, Rotshtein P. Identification of emotional facial expressions: Effects of expression, intensity, and sex on eye gaze. PLoS ONE. 2016;11(12):e0168307. doi: 10.1371/journal.pone.0168307.
16. Wong B, Cronin-Golomb A, Neargarder S. Patterns of visual scanning as predictors of emotion identification in normal aging. Neuropsychology. 2005;19(6):739–749. doi: 10.1037/0894-4105.19.6.739.
17. Krumhuber EG, Kappas A, Manstead ASR. Effects of dynamic aspects of facial expressions: A review. Emotion Review. 2013;5(1):41–46. doi: 10.1177/1754073912451349.
18. Calvo MG, Avero P, Fernandez-Martin A, Recio G. Recognition thresholds for static and dynamic emotional faces. Emotion. 2016;16(8):1186–1200. doi: 10.1037/emo0000192.
19. Wingenbach TS, Ashwin C, Brosnan M. Validation of the Amsterdam Dynamic Facial Expression Set - Bath Intensity Variations (ADFES-BIV): A set of videos expressing low, intermediate, and high intensity emotions. PLoS ONE. 2016;11(12):e0168891. doi: 10.1371/journal.pone.0168891.
20. Arsalidou M, Morris D, Taylor MJ. Converging evidence for the advantage of dynamic facial expressions. Brain Topography. 2011;24(2):149–163. doi: 10.1007/s10548-011-0171-4.
21. Trautmann SA, Fehr T, Herrmann M. Emotions in motion: Dynamic compared to static facial expressions of disgust and happiness reveal more widespread emotion-specific activations. Brain Research. 2009;1284:100–115. doi: 10.1016/j.brainres.2009.05.075.
22. Lischke A, et al. Intranasal oxytocin enhances emotion recognition from dynamic facial expressions and leaves eye-gaze unaffected. Psychoneuroendocrinology. 2012;37(4):475–481. doi: 10.1016/j.psyneuen.2011.07.015.
23. Blais C, Fiset D, Roy C, Saumure-Régimbald C, Gosselin F. Eye fixation patterns for categorizing static and dynamic facial expressions. Emotion. 2017;17(7):1107–1119. doi: 10.1037/emo0000283.
24. Hoffmann H, Traue HC, Bachmayr F, Kessler H. Perceived realism of dynamic facial expressions of emotion: Optimal durations for the presentation of emotional onsets and offsets. Cogn Emot. 2010;24(8):1369–1376. doi: 10.1080/02699930903417855.
25. Krumhuber EG, Skora L, Küster D, Fou L. A review of dynamic datasets for facial expression research. Emotion Review. 2017;9(3):280–292. doi: 10.1177/1754073916670022.
26. Peterson MF, Eckstein MP. Looking just below the eyes is optimal across face recognition tasks. PNAS. 2012;109(48):E3314–E3323. doi: 10.1073/pnas.1214269109.
27. Lundqvist, D., Flykt, A. & Öhman, A. The Karolinska Directed Emotional Faces–KDEF [CD-ROM]. Department of Clinical Neuroscience, Psychology section, Karolinska Institutet, Stockholm, Sweden. ISBN 91-630-7164-9 (1998).
28. Calvo MG, Lundqvist D. Facial expressions of emotion (KDEF): Identification under different display-duration conditions. Behavior Research Methods. 2008;40(1):109–115. doi: 10.3758/BRM.40.1.109.
29. Goeleven E, De Raedt R, Leyman L, Verschuere B. The Karolinska Directed Emotional Faces: A validation study. Cogn Emot. 2008;22(6):1094–1118. doi: 10.1080/02699930701626582.
30. Calvo MG, Gutiérrez-García A, Avero P, Lundqvist D. Attentional mechanisms in judging genuine and fake smiles: Eye-movement patterns. Emotion. 2013;13(4):792–802. doi: 10.1037/a0032317.
31. Gupta R, Hur YJ, Lavie N. Distracted by pleasure: Effects of positive versus negative valence on emotional capture under load. Emotion. 2016;16(3):328–337. doi: 10.1037/emo0000112.
32. Sanchez A, Vazquez C, Gómez D, Joormann J. Gaze-fixation to happy faces predicts mood repair after a negative mood induction. Emotion. 2014;14(1):85–94. doi: 10.1037/a0034500.
33. Adamaszek M, et al. Neural correlates of impaired emotional face recognition in cerebellar lesions. Brain Research. 2015;1613:1–12. doi: 10.1016/j.brainres.2015.01.027.
34. Bublatzky F, Gerdes AB, White AJ, Riemer M, Alpers GW. Social and emotional relevance in face processing: Happy faces of future interaction partners enhance the late positive potential. Frontiers in Human Neuroscience. 2014;8:493. doi: 10.3389/fnhum.2014.00493.
35. Calvo MG, Beltrán D. Brain lateralization of holistic versus analytic processing of emotional facial expressions. NeuroImage. 2014;92:237–247. doi: 10.1016/j.neuroimage.2014.01.048.
36. Pollick FE, Hill H, Calder A, Paterson H. Recognising facial expression from spatially and temporally modified movements. Perception. 2003;32(7):813–826. doi: 10.1068/p3319.
37. Fiorentini C, Viviani P. Is there a dynamic advantage for facial expressions? Journal of Vision. 2011;11(3):1–15. doi: 10.1167/11.3.17.
38. Recio G, Schacht A, Sommer W. Classification of dynamic facial expressions of emotion presented briefly. Cogn Emot. 2013;27(8):1486–1494. doi: 10.1080/02699931.2013.794128.
39. Harris RJ, Young AW, Andrews TJ. Dynamic stimuli demonstrate a categorical representation of facial expression in the amygdala. Neuropsychologia. 2014;56:47–52. doi: 10.1016/j.neuropsychologia.2014.01.005.
40. Popov T, Miller GA, Rockstroh B, Weisz N. Modulation of alpha power and functional connectivity during facial affect recognition. The Journal of Neuroscience. 2013;33(14):6018–6026. doi: 10.1523/JNEUROSCI.2763-12.2013.
41. Recio G, Schacht A, Sommer W. Recognizing dynamic facial expressions of emotion: Specificity and intensity effects in event-related brain potentials. Biological Psychology. 2014;96:111–125. doi: 10.1016/j.biopsycho.2013.12.003.
42. Vrticka P, Lordier L, Bediou B, Sander D. Human amygdala response to dynamic facial expressions of positive and negative surprise. Emotion. 2014;14(1):161–169. doi: 10.1037/a0034619.
43. Hess U, Kappas A, McHugo GJ, Kleck RE, Lanzetta JT. An analysis of the encoding and decoding of spontaneous and posed smiles: The use of facial electromyography. Journal of Nonverbal Behavior. 1989;13(2):121–137. doi: 10.1007/BF00990794.
44. Weiss F, Blum GS, Gleberman L. Anatomically based measurement of facial expressions in simulated versus hypnotically induced affect. Motivation & Emotion. 1987;11(1):67–81. doi: 10.1007/BF00992214.
45. Faul F, Erdfelder E, Lang AG, Buchner A. G*Power 3: A flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behavior Research Methods. 2007;39(2):175–191. doi: 10.3758/BF03193146.
46. Schultz J, Pilz KS. Natural facial motion enhances cortical responses to faces. Experimental Brain Research. 2009;194(3):465–475. doi: 10.1007/s00221-009-1721-9.
47. Johnston P, Mayes A, Hughes M, Young AW. Brain networks subserving the evaluation of static and dynamic facial expressions. Cortex. 2013;49(9):2462–2472. doi: 10.1016/j.cortex.2013.01.002.
48. Holmqvist, K., Nyström, M., Andersson, R., Dewhurst, R., Jarodzka, H., & Van de Weijer, J. Eye tracking: A comprehensive guide to methods and measures (Oxford University Press, Oxford, UK, 2011).
49. Ekman, P., Friesen, W. V. & Hager, J. C. Facial action coding system (A Human Face, Salt Lake City, 2002).
50. Cohn, J. F. & De la Torre, F. Automated face analysis for affective computing. In: Calvo, R. A., D'Mello, S., Gratch, J. & Kappas, A. (editors). The Oxford handbook of affective computing, 131–151 (Oxford University Press, New York, 2015).
51. Bartlett, M. & Whitehill, J. Automated facial expression measurement: Recent applications to basic research in human behavior, learning, and education. In: Calder, A., Rhodes, G., Johnson, M. & Haxby, J. (editors). Handbook of face perception, 489–513 (Oxford University Press, Oxford, UK, 2011).
52. Nelson NL, Russell JA. Universality revisited. Emotion Review. 2013;5(1):8–15. doi: 10.1177/1754073912457227.
53. Calvo MG, Nummenmaa L. Eye-movement assessment of the time course in facial expression recognition: Neurophysiological implications. Cognitive, Affective & Behavioral Neuroscience. 2009;9(4):398–411. doi: 10.3758/CABN.9.4.398.
54. Elfenbein HA, Ambady N. When familiarity breeds accuracy: Cultural exposure and facial emotion recognition. Journal of Personality and Social Psychology. 2003;85(2):276–290. doi: 10.1037/0022-3514.85.2.276.
55. Palermo R, Coltheart M. Photographs of facial expression: Accuracy, response times, and ratings of intensity. Behavior Research Methods, Instruments, & Computers. 2004;36(4):634–638. doi: 10.3758/BF03206544.
56. Tottenham N, et al. The NimStim set of facial expressions: Judgments from untrained research participants. Psychiatry Research. 2009;168(3):242–249. doi: 10.1016/j.psychres.2008.05.006.
57. Langner O, et al. Presentation and validation of the Radboud Faces Database. Cogn Emot. 2010;24(8):1377–1388. doi: 10.1080/02699930903485076.
58. Hsiao JH, Cottrell G. Two fixations suffice in face recognition. Psychological Science. 2008;19(10):998–1006. doi: 10.1111/j.1467-9280.2008.02191.x.
59. Kanan C, Bseiso DN, Ray NA, Hsiao JH, Cottrell GW. Humans have idiosyncratic and task-specific scanpaths for judging faces. Vision Research. 2015;108:67–76. doi: 10.1016/j.visres.2015.01.013.
60. Meaux E, Vuilleumier P. Facing mixed emotions: Analytic and holistic perception of facial emotion expressions engages separate brain networks. NeuroImage. 2016;141:154–173. doi: 10.1016/j.neuroimage.2016.07.004.
61. Tanaka JW, Kaiser MD, Butler S, Le Grand R. Mixed emotions: Holistic and analytic perception of facial expressions. Cogn Emot. 2012;26(6):961–977. doi: 10.1080/02699931.2011.630933.
62. Calvo, M. G., Fernández-Martín, A., Recio, G. & Lundqvist, D. Human observers and automated assessment of dynamic emotional facial expressions: KDEF-dyn database validation. Frontiers in Psychology. 2018;9:2052.
