Author manuscript; available in PMC: 2020 Oct 1.
Published in final edited form as: Cogn Emot. 2019 May 13;34(2):359–366. doi: 10.1080/02699931.2019.1611542

Faces in the wild: A naturalistic study of children’s facial expressions in response to an Internet prank

Michael M Shuster a, Linda A Camras a, Adam Grabell b, Susan B Perlman c
PMCID: PMC7528222  NIHMSID: NIHMS1616571  PMID: 31084351

Abstract

There is surprisingly little empirical evidence supporting theoretical and anecdotal claims regarding the spontaneous production of prototypic facial expressions used in numerous emotion recognition studies. Proponents of innate prototypic expressions believe that this lack of evidence may be due to ethical restrictions against presenting powerful elicitors in the lab. The current popularity of internet platforms designed for public sharing of videos allows investigators to shed light on this debate by examining naturally-occurring facial expressions outside the laboratory. An Internet prank (“Scary Maze”) has provided a unique opportunity to observe children reacting to a consistent fear- and surprise-inducing stimulus: the unexpected presentation of a “scary face” during an online maze game. The purpose of this study was to examine children’s facial expressions in this naturalistic setting. Emotion ratings of non-facial behaviour (provided by untrained undergraduates) and anatomically-based facial codes were obtained from 60 videos of children (ages 4–7) found on YouTube. Emotion ratings were highest for fear and surprise. Correspondingly, children displayed more facial expressions of fear and surprise than of other emotions (e.g. anger, joy). These findings provide partial support for the ecological validity of fear and surprise expressions. Still, prototypic expressions were produced by fewer than half of the children.

Keywords: Fear, emotional expressions, prototypic expressions, YouTube


Countless studies of emotion recognition have focused on a set of prototypic facial expressions theorised to be inherently linked to a corresponding set of basic emotions (Ekman & Cordaro, 2011; Elfenbein & Ambady, 2002; Izard, 2011). One implicit assumption that often underlies these studies is that prototypic expressions are produced with substantial frequency in nature and are an important means by which emotions are communicated in real life. However, a number of recent adult studies in controlled settings have questioned the idea that some expressions, including fear and surprise, are typically produced in their theoretically-proposed eliciting circumstances (e.g. Fernández-Dols & Ruiz-Belda, 1995; Schützwohl & Reisenzein, 2012; see Duran, Reisenzein, & Fernández-Dols, 2017 for a meta-analytic review). Correspondingly, adults report that fear expressions are relatively rarely seen in daily life (although surprise expressions are reported to be more often seen; Calvo, Gutierrez-Garcia, Fernandez-Martin, & Nummenmaa, 2014).

Because children may express their emotions more freely than adults (Holodynski, 2004), in the present study, we attempt to shed further light on this issue by studying children’s production of fear expressions. In an effort to maximise the probability of observing such expressions, we examined their responses to a powerful eliciting situation that took place outside of the laboratory, i.e. an internet video prank commonly called the “Scary Maze”. These data provide an opportunity for researchers to potentially observe intense expressions of emotion that may be produced relatively rarely in everyday life.

Explaining the disconnect between emotions and expressions

When confronted with the lack of concordance between previously theorised emotion elicitors and facial muscle activation, proponents of prototypic expressions cite several explanations. One such explanation, proposed by Ekman (1973), is that cultural display rules inhibit the production of natural facial responses to the predicted corresponding stimuli. This explanation is particularly pertinent to laboratory-induced emotions in which a subject, aware of being observed, might inhibit extreme affective expression.

In order to bypass the mitigating influence of display rules on facial expressions, a number of researchers have studied infants. According to one prominent theory of emotional development (Differential Emotions Theory; DET), infants cannot voluntarily control their emotional expressions and their cognitive immaturity precludes any understanding of display rules (Izard & Malatesta, 1987). Therefore, coherence between expression and emotion should be observed. Several early studies indeed found that DET-predicted facial expressions for some discrete emotions were produced in emotion-appropriate situations (e.g. anger expressions in response to routine inoculations). However, subsequent studies have not generated support for significant coherence in infancy between emotion elicitors (particularly for negative emotions) and their predicted corresponding facial expressions (see Camras, Fatani, Fraumeni, & Shuster, 2016).

Beyond infancy, it is possible that maturational processes or socialisation processes (or both) might lead to greater coherence between facial expressions and elicitors of their corresponding emotions. Indeed, increased (albeit still very limited) correspondence was found in one study comparing younger (4-month-old) and older (12-month-old) infants (Bennett, Bendersky, & Lewis, 2005). Moreover, the influence of display rules might be expected to preclude observation of such correspondence in many situations as children grow older. In point of fact, while findings are somewhat mixed, a number of studies have shown that children’s understanding of display rules for hiding emotions increases during childhood (e.g. Hudson & Jacques, 2014; Jones, Abbey, & Cumberland, 1998; Saarni, 1979) and older children are better than younger children at actually hiding their negative emotion (e.g. Kromm, Farber, & Holodynski, 2015; Saarni, 1984; Simonds, Kieras, Rueda, & Rothbart, 2007). In the single extant study we found on children’s facial expressions in a nonsocial situation (Holodynski, 2004), expressivity declined overall between 6 and 8 years of age. Thus, the present study focused on younger children to increase the probability of observing expression-elicitor coherence.

Beyond the influence of display rules, some investigators (e.g. Reisenzein, Studtmann, & Horstmann, 2013) have also proposed that coherence between expression and emotion may be more likely to be found in situations that elicit a more intense level of emotion. Consistent with this proposal, Anderson, Monroy, and Keltner (2017) recently reported coherence between fear facial and vocal expressions by adolescents and adults while rafting over a series of ten whitewater rapids. However, correspondence between fear facial expressions and self-reported emotion was not determined and coders were able to hear the vocalisation during facial coding. While 58% of participants showed at least one component of the prototypic fear expression (i.e. a fear brow or mouth) at some point during the excursion, the proportion of episodes in which such an expression was shown was still rather low (22%). Although innovative and promising, Anderson et al.’s study also highlights the need for further research on facial expression-emotion coherence.

For children in Western cultures, fear is arguably an emotion that is relatively rarely experienced at a high level of intensity in their daily life. Therefore, a low correspondence between prototypic fear expressions and fear elicitors might be due to a limited opportunity to observe children’s responses in more intense fear situations. As ethical restrictions prevent the use of powerful elicitors for some emotions in the laboratory setting (e.g. fear elicitors), in the present study we take advantage of an Internet phenomenon that has resulted in hundreds of videos of children responding to a consistent stimulus designed to elicit primarily fear in a naturalistic (i.e. non-laboratory) setting. The use of these publicly accessible videos has its limitations; however it circumvents the legitimate ethical limitations necessary in laboratory research and provides an additional perspective on children’s spontaneous production of facial expressions.

The Scary Maze

The Scary Maze is ostensibly an Internet puzzle game that requires progressively greater concentration as the player advances to new levels of more difficult mazes. As the player completes the final maze, an image of the demon-possessed girl from the 1973 film The Exorcist appears on the screen accompanied by a loud scream (see Supplemental Materials for example). Many videos of children responding to the Scary Maze are publicly accessible on YouTube. Although other studies have used publicly available materials to investigate emotional expression (e.g. Aviezer, Trope, & Todorov, 2012; Crivelli, Carrera, & Fernández-Dols, 2015; Matsumoto & Willingham, 2006; Wenzler, Levine, van Dick, Oertel-Knöchel, & Aviezer, 2016), to our knowledge, no previous study utilising such naturalistic data has objectively coded facial expressions of children responding to the same consistent stimulus. We believe that the stimulus that children were exposed to is consistent with common fear experiences among children in Western cultures. Research utilising recollective self-reports from a sample of undergraduates found that an overwhelming majority of participants reported experiencing enduring fearful responses to television and movies during their childhood and adolescence (Harrison & Cantor, 1999).

Method

The data for this study were drawn from a sample of 60 publicly accessible “Scary Maze” videos from YouTube. An undergraduate research assistant who was unaware of the study’s goals selected the videos in order to prevent the potentially biased selection of videos that display prototypic fear facial expressions. To ensure that 60 videos provided enough observations for adequate statistical power, a power analysis was conducted; it indicated that 39 videos would be sufficient to detect a medium effect size of .50 at the recommended power level of .80 (Cohen, 1988).
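For readers who want to see how this kind of calculation can be run, the minimal sketch below uses statsmodels to solve for the required sample size. It is illustrative only: the paper does not report which statistical test, alpha level, or sidedness the original power analysis assumed (here a two-sided one-sample t-test at alpha = .05 is assumed), so the resulting n will not necessarily reproduce the 39 videos mentioned above.

```python
# Illustrative power analysis (assumptions: one-sample t-test, two-sided, alpha = .05).
# These details are not stated in the article, so this is a sketch, not a reproduction.
from statsmodels.stats.power import TTestPower

analysis = TTestPower()
n_required = analysis.solve_power(
    effect_size=0.5,          # "medium" effect size (Cohen's d = .50)
    alpha=0.05,               # assumed significance level
    power=0.80,               # recommended power (Cohen, 1988)
    alternative="two-sided",  # assumed sidedness
)
print(f"Required sample size: {n_required:.1f} videos")
```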

To facilitate reliable coding, the clips were required to have a frame rate of at least 20 frames per second, and children’s faces needed to be clearly visible. The assistant was asked to select children in the videos who looked to be roughly between the ages of four and seven years. Lastly, the videos could not display the Scary Maze stimulus in order to minimise expectancy bias in the ratings of emotion content. Once selected, videos were abridged so that they would begin as close as possible to two seconds prior to the presentation of the unexpected stimulus and end once the child’s face was no longer visible, or after six seconds.

Validating the Scary Maze stimulus

Although the Scary Maze stimulus is intended to evoke primarily fear (hence the name), we sought to empirically identify children’s emotional responses. As the nature of the data precluded self-report measures of emotion, we employed another approach used in other studies where self-reports of emotion cannot be obtained (e.g. Oster, Hegley, & Nagel, 1992). That is, we obtained emotion ratings from naïve observers who were shown videotapes of participants’ reactions to the emotion-eliciting event. However, in our case, we modified this approach to avoid the possibility that raters would base their ratings solely on the presence or absence of the prototypic fear expression and thus artificially inflate the number of these expressions we would find in our study. Instead, we presented naïve raters with versions of our videotapes on which the child’s facial expression was obscured and obtained emotion ratings based on the children’s non-facial emotion-related behaviours.

Forty undergraduates from the Psychology Department subject pool observed the videos and were asked to “rate the extent to which the children are experiencing each of the following emotions” (i.e. joy, fear, sadness, anger, surprise, disgust, distress) using a scale of 1 (not at all) to 5 (very much). To eliminate expectancy bias, the ratings of 24 observers who reported prior knowledge of the maze game were excluded, resulting in a total of 16 (12 female) unbiased observers. In addition, the emotion-inducing stimulus was neither visible nor audible and the children’s faces were digitally blurred. As noted above, faces were blurred to avoid the possibility that raters would base their ratings solely on the presence or absence of a prototypic expression and thus artificially ensure that we would find a disproportionate number of fear expressions in the videos. Observer ratings were thus based on the children’s non-facial physical actions (e.g. turning away, fleeing, or withdrawing from the computer screen). Non-facial behaviours (e.g. avoidant behaviours) are considered to be indices of emotions (e.g. fear) according to functionally-oriented theories of emotion (Barrett & Campos, 1987; Frijda, 1986).

Coding and interpretation of facial expressions

Children’s facial responses were coded using FACS, a comprehensive anatomically-based facial coding system that uses coding units (termed Action Units or AUs) to represent facial muscle contractions (Ekman, Friesen, & Hager, 2002). FACS requires coders to objectively determine which facial muscles are activated regardless of whether or not the configuration of muscle movements is considered to be an expression of emotion. In addition, the intensity of the muscle contraction may be coded in some cases. Coding was done by five trained coders from a different lab than the one that selected the videos; these coders were blind to the purpose of the research and were unaware that the children in the videos had all been exposed to the same stimulus, or what that stimulus was. Coders determined which AUs and AU combinations were produced and the sequence in which they appeared within the video episode.

Inter-rater reliability was established by having two trained and certified FACS coders code each video and averaging the reliability across videos. Reliability was calculated according to a formula outlined in the FACS Investigators’ Manual ([number of agreements × 2] / total number of unique codes) and resulted in a reliability score of .67. This score was somewhat below the conventionally accepted level of .70 (Ekman et al., 2002). We inspected the data and subsequently combined AUs 26 and 27, as these AUs represent different intensities of vertical mouth opening and are interchangeable when translating raw FACS codes into emotion categories (as described below). Recalculating reliability after this change yielded .70. To further avoid bias, for each video we randomly selected which of the two coders’ sets of raw codes we would include in our analyses.
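As a rough illustration of the agreement ratio described above, the sketch below computes it for two coders’ AU sets for a single video. The operational details in the FACS manual (e.g. how agreements are counted when intensities or timing differ) are more involved, so this should be read as a simplified approximation rather than the manual’s procedure.

```python
def facs_agreement_ratio(codes_coder_a, codes_coder_b):
    """Simplified FACS-style agreement ratio for one video.

    Counts an agreement whenever both coders scored the same AU, then divides
    twice the number of agreements by the total number of codes scored by the
    two coders (a rough reading of the formula cited in the text).
    """
    a, b = set(codes_coder_a), set(codes_coder_b)
    agreements = len(a & b)
    total_codes = len(a) + len(b)
    return (2 * agreements) / total_codes if total_codes else 1.0

# Example: coder A scores AUs 1, 2, 5, 26; coder B scores AUs 1, 2, 5, 27.
# Combining AUs 26 and 27 (as described above) raises agreement from .75 to 1.0.
print(facs_agreement_ratio({1, 2, 5, 26}, {1, 2, 5, 27}))            # 0.75
print(facs_agreement_ratio({1, 2, 5, "26/27"}, {1, 2, 5, "26/27"}))  # 1.0
```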

To generate emotional expression scores, the FACS coding was examined and scores were assigned based on whether the configurations of facial muscle movements are considered to be expressions of emotion according to the FACS Manual’s Investigator’s Guide (Ekman et al., 2002). The Guide specifies both prototypes and their “major variants” (p. 174) for the emotions of surprise, fear, happiness, sadness, disgust, and anger. Emotion blends are not specified. In most cases, intensity coding is not required to determine whether to assign a facial configuration to an emotion category. For one facial configuration (AUs 1 + 2 + 5 + 26/27, i.e. raised brows + widened eyes + open mouth), emotion assignment differs depending upon the intensity of a single AU (i.e. the intensity of eye widening [AU 5] determines whether the configuration is considered an expression of fear or surprise). Therefore, we recruited additional trained coders to provide an intensity coding for this AU in those cases where it appeared in conjunction with AUs 1 + 2 + 26/27. Reliability between coders was .86. Again, in cases of disagreement, we randomly determined which of the coders’ scores was used for the emotion assignment. We also examined the configuration of action units that would be interpreted as an expression of “distress” according to Izard’s infant-oriented facial coding system (i.e. MAX; Izard, 1995).

Within each child’s coding interval, the child could produce one or more facial expressions. The child was assigned a score of 1 (“present”) for an emotional expression if she produced any prototype or variant facial expression for that emotion within the coding interval. The child was assigned a score of 0 for the emotional expression if she did not produce any prototype or variant for that emotion. Each child received a score of 1 or 0 for each of the seven emotions that were examined.
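A minimal sketch of this scoring step is shown below. The AU configurations in the dictionary are a few well-known illustrative examples only (e.g. the surprise configuration AUs 1 + 2 + 5 + 26/27); the full set of prototypes and major variants used in the study lives in the FACS Investigator’s Guide and is not reproduced here, so the table should be read as a hypothetical stand-in.

```python
# Hypothetical, abbreviated prototype table; the study used the full set of
# prototypes and "major variants" from the FACS Investigator's Guide.
ILLUSTRATIVE_PROTOTYPES = {
    "surprise": [{1, 2, 5, 26}],         # raised brows + widened eyes + open mouth
    "fear":     [{1, 2, 4, 5, 20, 26}],  # one commonly cited fear configuration
    "joy":      [{6, 12}],               # cheek raiser + lip corner puller
}

def score_child(observed_configurations, prototypes=ILLUSTRATIVE_PROTOTYPES):
    """Return a 1/0 'present' score per emotion for one child's coding interval.

    A child is scored 1 for an emotion if any AU configuration coded during the
    interval contains one of that emotion's prototype (or variant) configurations.
    """
    scores = {emotion: 0 for emotion in prototypes}
    for config in observed_configurations:
        for emotion, templates in prototypes.items():
            if any(template <= set(config) for template in templates):
                scores[emotion] = 1
    return scores

# Example: a child shows raised brows/wide eyes/open mouth, then a smile.
print(score_child([{1, 2, 5, 26}, {6, 12}]))
# {'surprise': 1, 'fear': 0, 'joy': 1}
```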

Results

Observers’ emotion ratings

A repeated measures ANOVA with a Greenhouse-Geisser correction was conducted on the average undergraduate observer ratings of each emotion (see Table 1 for ratings). Results indicate that observers rated children significantly higher for some emotions than for others, F(3.55, 209) = 318.66, p < .01, and the effect size was large (.84). Post hoc tests using Bonferroni pair-wise comparisons revealed that surprise (M = 3.82, SD = .60) was rated significantly higher than the other emotions, which is consistent with the unexpected (i.e. surprising) nature of the stimulus presentation (sadness: M = 1.43, SD = .54, d = 4.18; anger: M = 1.26, SD = .23, d = 5.63; disgust: M = 1.99, SD = .43, d = 3.5; distress: M = 2.74, SD = .88, d = 1.43; joy: M = 1.22, SD = .87, d = 3.48; p’s < .05). In addition, ratings were significantly higher for fear (M = 3.35, SD = .73) in comparison to all emotions other than surprise (sadness: M = 1.43, SD = .54, d = 2.99; anger: M = 1.26, SD = .23, d = 3.86; disgust: M = 1.99, SD = .43, d = 2.27; distress: M = 2.74, SD = .88, d = .75; joy: M = 1.22, SD = .87, d = 2.65; p’s < .05). Although the Scary Maze is intended to be a fear-provoking prank (hence its name), these ratings showed that surprise was judged to be experienced somewhat more than fear. Separate single-sample t-tests indicated that the observers’ emotion ratings were significantly greater than the midpoint of the scale for surprise, t(59) = 10.59, p < .001, and fear, t(59) = 3.71, p < .001, and were below the midpoint for all other emotions. Therefore, in our analyses of the facial expressions, we focused on both surprise and fear.

Table 1.

Mean emotion ratings by untrained observers.

Judged Emotion    Mean (SD)

Joy 1.22 (.87)
Anger 1.26 (.31)
Sadness 1.43 (.71)
Surprise 3.81 (.44)
Fear 3.35 (.81)
Distress 2.74 (.88)
Disgust 1.99 (.49)
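The sketch below shows how analyses of this kind can be run in Python, assuming the mean observer ratings are arranged in long format (one row per video x emotion). The pingouin and scipy calls are standard, but the file name, column names, and the exact unit of analysis are assumptions for illustration, not a reproduction of the authors’ analysis.

```python
# Sketch of the rating analyses, assuming long-format data with columns
# 'video', 'emotion', and 'rating' (one mean observer rating per video x emotion).
import pandas as pd
import pingouin as pg
from scipy import stats

ratings = pd.read_csv("scary_maze_ratings.csv")  # hypothetical file

# Repeated measures ANOVA with Greenhouse-Geisser correction across emotions.
aov = pg.rm_anova(data=ratings, dv="rating", within="emotion",
                  subject="video", correction=True)
print(aov)

# Bonferroni-corrected pairwise comparisons between emotions
# (named pairwise_ttests in older pingouin releases).
posthoc = pg.pairwise_tests(data=ratings, dv="rating", within="emotion",
                            subject="video", padjust="bonf")
print(posthoc)

# Single-sample t-tests against the scale midpoint (3 on the 1-5 scale).
for emotion, group in ratings.groupby("emotion"):
    t, p = stats.ttest_1samp(group["rating"], popmean=3.0)
    print(f"{emotion}: t = {t:.2f}, p = {p:.3f}")
```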

Analysis of facial codes

One limitation of some previous studies is that only one type of facial expression is coded. Such studies thus fail to determine if their target expression is more prevalent than other prototypic expressions. More stringent investigations examine the intrasituational specificity of proposed prototypical expressions. Intrasituational specificity requires finding that individuals in an emotion situation display the predicted expressive response more often than other expressions (Hiatt, Campos, & Emde, 1979). Based on the ratings, we predicted that fear and surprise would be displayed more often than other prototypic expressions. As shown in Table 2, prototypic joy was present in 10% of the videos, anger in 5%, surprise in 41.7%, and fear in 46.7%. No full prototype or variant expressions of disgust, sadness, or distress were found; thus only fear, anger, surprise, and joy were included in our analysis. A Cochran’s Q analysis indicated a significant difference in the prevalence of prototypic expressions across emotions, Q(3) = 38.10, p < .001, η²Q = .635. Pair-wise comparisons using McNemar tests showed that the percentage of children who produced prototypic fear expressions (46.7%) was significantly greater than the percentage who produced prototypic expressions of anger (5%; p < .001) and joy (10%; p < .001). Similarly, the percentage of children who produced prototypic expressions of surprise (41.7%) was significantly greater than the percentage who produced prototypic expressions of anger (p < .001) and joy (p < .001). There was no significant difference between the percentage of children who produced expressions of fear vs. surprise (p = .74). Thus, prototypic facial expressions corresponding to the two highest rated emotions (i.e. surprise and fear) were produced more often than expressions corresponding to the lower rated emotions.

Table 2.

Percentage and number of videos containing FACS manual specified prototype or variant expressions.

Emotion % #

Joy 10.0 6
Anger 5.0 3
Surprise 41.7 25
Fear 46.7 28

Note: According to this scoring scheme, no prototypic expressions of disgust or sadness were displayed, thus those emotions were not included.
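A minimal sketch of the prevalence analysis is given below, assuming a 60 x 4 array of per-child 1/0 expression scores (columns: fear, surprise, anger, joy). The statsmodels functions are real, but the array is fabricated placeholder data for illustration, not the study’s scores.

```python
# Sketch of the prevalence analysis. 'scores' is placeholder 0/1 data standing in
# for the real per-child expression scores (columns: fear, surprise, anger, joy).
import numpy as np
from statsmodels.stats.contingency_tables import cochrans_q, mcnemar

rng = np.random.default_rng(0)
scores = rng.binomial(1, [0.47, 0.42, 0.05, 0.10], size=(60, 4))  # illustrative only

# Cochran's Q: do the four emotions differ in how often their prototypes appear?
q_result = cochrans_q(scores)
print(f"Q = {q_result.statistic:.2f}, p = {q_result.pvalue:.4f}")

# McNemar test for one pairwise comparison (e.g. fear vs. anger), built from the
# 2x2 table of concordant/discordant children.
fear, anger = scores[:, 0], scores[:, 2]
table = np.array([
    [np.sum((fear == 1) & (anger == 1)), np.sum((fear == 1) & (anger == 0))],
    [np.sum((fear == 0) & (anger == 1)), np.sum((fear == 0) & (anger == 0))],
])
print(mcnemar(table, exact=True))
```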

Discussion

The initial purpose of the study was to examine children’s production of prototypic facial expressions in a powerful eliciting situation that took place outside of the laboratory in an effort to maximise the probability of observing spontaneously-produced prototypic fear expressions. To achieve this goal, we examined publicly accessible videos from YouTube that depicted children responding to a consistent stimulus that was designed to induce fear. Our analysis indicated that children were judged to be experiencing both fear and surprise, to a substantial degree, as represented by mean observer ratings that exceeded the mid-point of the rating scale. Correspondingly, they displayed more prototypic fear expressions and surprise expressions than expressions of other emotions. These findings provide some degree of empirical support for the ecological validity of previously described prototypic expressions of these emotions. However, they also require a nuanced interpretation as will be described below.

Observer ratings

Although the initial focus of our study was on fear, it is notable that untrained observers rated the stimulus as evoking more surprise than any other emotion. While initially unpredicted, this finding is plausible given the sudden transformation of the stimulus from a maze to a deformed face accompanied by a loud scream. Informal inspection of the videotapes suggested that children’s abrupt movements (e.g. straightening of the back and hand motions toward the face) may have yielded high ratings for both fear and surprise by untrained observers. As noted above, the children’s facial expressions were obscured on the videos judged by observers and thus their emotion ratings could not reflect preconceptions regarding the unique status of facial expressions as the primary index of emotional experience.

With respect to the negative emotions, ratings for fear were higher than ratings for anger, sadness, and disgust. Previous controversy regarding the validity of prototypic emotional expressions in infants and children has focused on differentiation among expressions of negative emotions (see Bennett, Bendersky, & Lewis, 2002; Camras et al., 2007). Therefore, it was important for us to determine that other negative emotions were not perceived to be present to the same degree as fear in our eliciting situation. This was indeed demonstrated in our study.

Facial coding and interpretation

Our analysis of the emotion scores demonstrated intrasituational specificity of prototypic fear expressions with respect to the array of negative emotions examined in our study. That is, the fear expression was produced significantly more often than expressions for the other negative emotions. Intrasituational specificity is considered to be an important criterion for establishing the validity of a proposed emotional facial expression (Hiatt et al., 1979). In our study, 46.7% of the children in the videos produced a full-face configuration codeable as fear according to the FACS Manual. In contrast, only 34% of participants (on average) produced even one component of a fear expression (i.e. a brow, eye, or mouth component) in a recent meta-analysis of fear studies (Duran et al., 2017).

As noted, there was a high percentage of videos in which children displayed the facial expression of surprise (41.7%; see Table 2). These corresponded to the high surprise ratings by untrained observers who had viewed versions of the video clips on which facial expressions had been electronically blurred. Previous research (Reisenzein et al., 2013) has shown that subjects rarely show surprise expressions in laboratory situations that evoke strong surprise. In their meta-analysis of surprise expressions, Duran et al. (2017) reported the average proportion of reactive participants to be .09. However, our findings suggest that surprise expressions may be produced more often in some naturalistic settings than in laboratory studies.

In interpreting our results, it is important to consider the implication of both our intrasituational specificity findings (regarding correspondence between the relative ratings of surprise and fear, and the relative frequency of their corresponding emotional expressions) and the absolute number of expressions that were observed. Although we found a greater number of fear and surprise expressions than have been reported in past studies, less than half of participants produced such expressions. Taken together, these results suggest that when children experience an emotion and produce a prototypic facial response, it is indeed the expression that would be expected given the nature of the eliciting situation and observer judgments of the children’s emotional experience. However, combined with the results of past research, our findings also suggest that children might often produce no facial expression at all or at least none that fit one of the prototypes – even in a situation in which a strong emotion appears to be evoked and the influence of social display rules would be expected to be minimal (due to the young age of the children and the nonsocial nature of the eliciting situation). This suggests that classic theories of discrete emotion, involving an inherent link between expression and emotion (e.g. Ekman & Cordaro, 2011; Izard, 2011), should be appropriately modified as has recently been suggested by a number of scholars advocating more constructivist-oriented theories of emotion and emotional development (e.g. Barrett & Russell, 2015; Camras, 2011; Scarantino, 2015).

Limitations and further directions

The use of publicly accessible videos allowed us to circumvent ethical restrictions against presenting child participants with powerful stimuli designed to induce fear. This methodology also resulted in several limitations of our study. First, we were not able to obtain the children’s actual age or self-reports of emotion. Additionally, the present study was only able to examine the intrasituational specificity of prototypic expressions rather than both their intrasituational and intersituational specificity. Lastly, claims regarding the ecological validity of our findings might be questioned in that it is unclear to what degree the situation we examined resembles other fear episodes experienced in daily life. In addition, although we selected the videos of Scary Maze respondents at random, it is possible that the availability of the videos was influenced by the tendencies of some parents to use social media more than others or to post only videos in which children display an extreme response. However, as indicated earlier, given the low occurrence rate of fear expressions found in previous studies, the goal of our own investigation was to determine if a greater number of fear expressions could be obtained in a naturalistic (i.e. non-laboratory) situation that might maximise their probability of occurrence. Our results showed that prototypic fear expressions were indeed produced more often than found in previous studies, although still in fewer than half the children.

Our hope is that our findings will motivate further research designed to better understand the factors that determine whether a full prototypic expression will be produced when the corresponding emotion is being experienced. Our research suggests that the intensity of the experience or the unexpected nature of the event may be important factors. Other potential expression-facilitating factors should also be considered – such as those previously examined for surprise (e.g. social context and duration of stimuli; Reisenzein, Bördgen, Holtbernd, & Matz, 2006; Schützwohl & Reisenzein, 2012). Along these lines, one possibility is that prototypic fear expressions will be produced only when the eliciting situation triggers rapid avoidance behaviours, as this was not present in many past studies that failed to demonstrate a high prevalence of fear responses (Bennett et al., 2002; Vernon & Berenbaum, 2002). Due to the naturalistic origins of our data, we were unable to experimentally manipulate any of these factors, and thus cannot be certain as to which factor contributed to the higher prevalence of prototypic fear expressions found in our study.

Conclusion

Despite the relatively rare occurrence of prototypical fear expressions in previous investigations, this study demonstrates that both fear and surprise expressions are produced with substantial frequency in at least one emotion-inducing situation occurring outside of the lab environment. This finding provides partial support for the validity of prototypic facial expressions that are often used as experimental stimuli in emotion research. More research is necessary in order to identify the individual or situational factors that determine whether these and other prototypic emotional expressions will (or will not) be produced. Continuous efforts in this direction will help illuminate the relationship between facial expressions and other aspects of emotion.

Supplementary Material

Supplemental table
Supplemental figure 1
Video Sample 1 (avi, 10.7 MB)
Video Sample 2 (avi, 13.6 MB)

Footnotes

Disclosure statement

No potential conflict of interest was reported by the authors.

Supplemental data for this article can be accessed here https://doi.org/10.1080/02699931.2019.1611542.

References

  1. Anderson CL, Monroy M, & Keltner D. (2017, October 26). Emotion in the wilds of nature: The coherence and contagion of fear during threatening group-based outdoors experiences. Emotion, doi: 10.1037/emo0000378 [DOI] [PubMed] [Google Scholar]
  2. Aviezer H, Trope Y, & Todorov A. (2012). Body cues, not facial expressions, discriminate between intense positive and negative emotions. Science, 338(6111), 1225–1229. [DOI] [PubMed] [Google Scholar]
  3. Barrett K, & Campos J. (1987). Perspectives on emotional development II: A functionalist approach to emotions In Osofsky J (Ed.), Handbook of infant development (2nd ed., pp. 555–578). New York: Wiley. [Google Scholar]
  4. Barrett LF, & Russell J (Eds.). (2015). The psychological construction of emotion. New York: Guilford Press. [Google Scholar]
  5. Bennett D, Bendersky M, & Lewis M. (2002). Facial expressivity at 4 months: A context by expression analysis. Infancy, 3, 97–113. [DOI] [PMC free article] [PubMed] [Google Scholar]
  6. Bennett DS, Bendersky M, & Lewis M. (2005). Does the organization of emotional expression change over time? Facial expressivity from 4 to 12 months. Infancy, 8(2), 167–187. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Calvo M, Gutierrez-Garcia A, Fernandez-Martin A, & Nummenmaa L. (2014). Recognition of facial expressions of emotion is related to their frequency in everyday life. Journal of Nonverbal Behavior, 38, 549–567. [Google Scholar]
  8. Camras LA (2011). Differentiation, dynamical integration and functional emotional development. Emotion Review, 3(2), 138–146. [Google Scholar]
  9. Camras LA, Fatani S, Fraumeni B, & Shuster M. (2016). The development of facial expressions: Current perspectives on infant emotions In Barrett LF, Lewis M, & Haviland-Jones J (Eds.), Handbook of emotions (4th ed, pp. 255–271). New York, NY: Guilford Publishing. [Google Scholar]
  10. Camras LA, Oster H, Bakeman R, Meng Z, Ujiie T, & Campos JJ (2007). Do infants show distinct negative facial expressions for fear and anger? Emotional expression in 11-month-old European American, Chinese, and Japanese infants. Infancy, 11(2), 131–155. [Google Scholar]
  11. Cohen J. (1988). The effect size index: D. Statistical Power Analysis for the Behavioral Sciences, 2, 284–288. [Google Scholar]
  12. Crivelli C, Carrera P, & Fernández-Dols JM (2015). Are smiles a sign of happiness? Spontaneous expressions of judo winners. Evolution and Human Behavior, 36(1), 52–58. [Google Scholar]
  13. Duran JI, Reisenzein R, & Fernández-Dols JM (2017). Coherence between emotions and facial expressions: A research synthesis In Fernández-Dols JM & Russell JA (Eds.), The science of facial expression (pp. 107–129). New York, NY: Oxford University Press. [Google Scholar]
  14. Ekman P. (1973). Cross-cultural studies of facial expression. Darwin and Facial Expression: A Century of Research in Review, 1, 169–222. [Google Scholar]
  15. Ekman P, & Cordaro D. (2011). What is meant by calling emotions basic. Emotion Review, 3, 364–370. [Google Scholar]
  16. Ekman P, Friesen WV, & Hager J. (2002). The facial action coding system (2nd ed.). Salt Lake City, UT: Research Nexus. [Google Scholar]
  17. Elfenbein HA, & Ambady N. (2002). On the universality and cultural specificity of emotion recognition: A meta-analysis. Psychological Bulletin, 128(2), 203–235. [DOI] [PubMed] [Google Scholar]
  18. Fernández-Dols JM, & Ruiz-Belda MA (1995). Are smiles a sign of happiness? Gold medal winners at the Olympic Games. Journal of Personality and Social Psychology, 69(6), 1113. [Google Scholar]
  19. Frijda N. (1986). The emotions. Cambridge: Cambridge University Press. [Google Scholar]
  20. Harrison K, & Cantor J. (1999). Tales from the screen: Enduring fright reactions to scary media. Media Psychology, 1(2), 97–116. [Google Scholar]
  21. Hiatt S, Campos J, & Emde R. (1979). Facial patterning and infant emotional expression: Happiness, surprise, and fear. Child Development, 50, 1020–1035. [PubMed] [Google Scholar]
  22. Holodynski M. (2004). The miniaturization of expression in the development of emotional self-regulation. Developmental Psychology, 40(1), 16–28. [DOI] [PubMed] [Google Scholar]
  23. Hudson A, & Jacques S. (2014). Put on a happy face! Inhibitory control and socioemotional knowledge predict emotion regulation in 5- to 7-year-olds. Journal of Experimental Child Psychology, 123, 36–52. [DOI] [PubMed] [Google Scholar]
  24. Izard C. (1995). The maximally discriminative facial movement coding system. Unpublished manuscript. [Google Scholar]
  25. Izard C. (2011). Forms and functions of emotions: Matters of emotion–cognition interactions. Emotion Review, 3, 371–378. [Google Scholar]
  26. Izard C & Malatesta C. (1987). Perspectives on emotions development: I. Differential emotions theory of early emotional development In Osofsky J (Ed.), Handbook of infant development (2nd ed., pp. 494–554). New York, NY: Wiley. [Google Scholar]
  27. Jones DC, Abbey BB, & Cumberland A. (1998). The development of display rule knowledge: Linkages with family expressiveness and social competence. Child Development, 69, 1209–1222. [PubMed] [Google Scholar]
  28. Kromm H, Farber M, & Holodynski M. (2015). Felt or false smiles? Volitional regulation of emotional expression in 4-, 6-, and 8-year old children. Child Development, 86, 579–597. [DOI] [PubMed] [Google Scholar]
  29. Matsumoto D, & Willingham B. (2006). The thrill of victory and the agony of defeat: Spontaneous expressions of medal winners of the 2004 Athens Olympic Games. Journal of Personality and Social Psychology, 91(3), 568–581. [DOI] [PubMed] [Google Scholar]
  30. Oster H, Hegley D, & Nagel L. (1992). Adult judgments and fine-grained analysis of infant facial expressions: Testing the validity of a priori coding formulas. Developmental Psychology, 28(6), 1115–1131. [Google Scholar]
  31. Reisenzein R, Bördgen S, Holtbernd T, & Matz D. (2006). Evidence for strong dissociation between emotion and facial displays: The case of surprise. Journal of Personality and Social Psychology, 91, 295–315. [DOI] [PubMed] [Google Scholar]
  32. Reisenzein R, Studtmann M, & Horstmann G. (2013). Coherence between emotion and facial expression: Evidence from laboratory experiments. Emotion Review, 5, 16–23. [Google Scholar]
  33. Saarni C. (1979). Children’s understanding of display rules for expressive behavior. Developmental Psychology, 15, 424–429. [Google Scholar]
  34. Saarni C. (1984). An observational study of children’s attempts to monitor their expressive behavior. Child Development, 55, 1504–1513. [Google Scholar]
  35. Scarantino A. (2015). Basic emotions, psychological construction, and the problem of variability In Barrett LF & Russell J (Eds.), The psychological construction of emotion (pp. 334–376). New York: Guilford Press. [Google Scholar]
  36. Schützwohl A, & Reisenzein R. (2012). Facial expressions in response to a highly surprising event exceeding the field of vision: A test of Darwin’s theory of surprise. Evolution and Human Behavior, 33(6), 657–664. [Google Scholar]
  37. Simonds J, Kieras JE, Rueda MR, & Rothbart MK (2007). Effortful control, executive attention, and emotional regulation in 7–10-year-old children. Cognitive Development, 22, 474–488. doi: 10.1016/j.cogdev.2007.08.009 [DOI] [Google Scholar]
  38. Vernon LL, & Berenbaum H. (2002). Disgust and fear in response to spiders. Cognition and Emotion, 16, 809–830. [Google Scholar]
  39. Wenzler S, Levine S, van Dick R, Oertel-Knöchel V, & Aviezer H. (2016). Beyond pleasure and pain: Facial expression ambiguity in adults and children during intense situations. Emotion, 16(6), 807–814. [DOI] [PubMed] [Google Scholar]
