Abstract
Despite theoretical claims that emotions are primarily communicated through prototypic facial expressions, empirical evidence is surprisingly scarce. This study aimed to: (1) test whether children produced more components of a prototypic emotional facial expression during situations judged or self-reported to involve the corresponding emotion than situations involving other emotions (termed “intersituational specificity”), (2) test whether children produced more components of the prototypic expression corresponding to a situation’s judged or self-reported emotion than components of other emotional expressions (termed “intrasituational specificity”), and (3) examine coherence between children’s self-reported emotional experience and observers’ judgments of children’s emotions. One hundred and twenty children (ages 7–9) were video-recorded during a discussion with their mothers. Emotion ratings were obtained for children in 441 episodes. Children’s nonverbal behaviors were judged by observers and coded by FACS-trained researchers. Children’s self-reported emotion corresponded significantly to observers’ judgments of joy, anger, fear, and sadness but not surprise. Multilevel modeling results revealed that children produced joy facial expressions more in joy episodes than non-joy episodes (supporting intersituational specificity for joy) and more joy and surprise expressions than other emotional expressions in joy and surprise episodes (supporting intrasituational specificity for joy and surprise). However, children produced anger, fear, and sadness expressions more in non-corresponding episodes and produced these expressions less than other expressions in corresponding episodes. Findings suggest that communication of negative emotion during social interactions—as indexed by agreement between self-report and observer judgments—may rely less on prototypic facial expressions than is often theoretically assumed.
Keywords: facial expression, prototypic expression, discrete emotions, children, development
Several contemporary theories of emotion assume that emotional states are inherently linked to a set of prototypic facial expressions (e.g., Ekman & Cordaro, 2011; Izard, 2011). This assumption has provided the basis for countless studies of emotion communication in infants, children, and adults. According to these theories, emotional facial expressions serve as automatic “read-outs” of emotion experience and are automatically and spontaneously produced when an emotion is felt or experienced, unless some higher-order social or cognitive process causes these expressions to be controlled (i.e., overridden, masked, or suppressed, Buck, 1994; Ekman, 1972). However, because some spontaneous expressive behavior can be controlled, the extent to which prototypic expressions are produced during social interactions is unclear (Russell, 1997).
Assessing the degree to which facial expressions serve as a window into the emotions of others is further complicated by the use of contrived laboratory paradigms (e.g., viewing emotion-eliciting films or pictures presented in a solitary context) that may not adequately represent the nature of emotion communication in real life. For example, emotions are often communicated between people (Parkinson, 2005), implying that real-life emotional situations are to some extent social (rather than solitary) in nature. This distinction is important, as many studies present support for the “audience effect,” whereby expressivity is influenced by the perceived or actual presence of others (e.g., Fernandez-Dols & Ruiz-Belda, 1995; Fridlund, 1991; Fridlund et al., 1990; Holodynski, 2004; Ruiz-Belda, Fernandez-Dols, Carrera, & Barchard, 2003). Thus, facial responses may depend not only on emotional events but also on the social context of the event. Social presence is theorized to influence emotion expression by influencing social appraisals (Manstead & Fischer, 2001). Specifically, facial expressions may communicate the social motives of an expressor, and these motives impact how individuals appraise or judge the other person’s reactions to an emotional situation. Thus, the use of more naturalistic and social laboratory paradigms, including interpersonal interactions and other social situations, provides more ecologically valid tests of the correspondence between expressed and experienced emotion.
Furthermore, because the majority of past research has focused on emotion communication in adults, it is not clear to what extent spontaneous facial expressions reflect children’s emotion experience. Adults regularly manage their expressive behaviors in accordance with culturally-, socially-, or personally-derived display rules; thus, most facial expressions displayed during social interactions may reflect highly controlled behavior rather than genuine responses to emotional events. Developmental studies of emotional facial expressions that focus on children—who can communicate emotion but are not yet fully constrained by cultural, social, or personal display rules and have not yet mastered their regulatory skills—stand to make a unique and important contribution to our understanding of how emotions are communicated throughout the lifespan.
The present study sought to advance the field by examining the communicative value of children’s spontaneous facial expressions during emotion-eliciting conversations with their mothers. Specifically, the primary goal was to identify the extent to which children’s experienced emotions were conveyed using prototypic facial expressions during these conversations. To do so, this study assessed children’s self-reported emotions and observers’ emotion judgments in relation to anatomically-based facial coding provided by trained researchers. A secondary goal was to compare children’s self-reports to observers’ judgments to determine the extent to which children’s emotions were effectively communicated overall.
Spontaneous Facial Expression by Adults
The automatic read-out hypothesis
Previous laboratory studies with adults have provided some—albeit limited—support for the “automatic read-out” hypothesis. A narrative review of ten studies investigating spontaneous emotion expression indicated generally moderate correlations between emotion-specific facial expressions and self-reports of the basic emotions of happiness, sadness, fear, anger, disgust, and/or distress (Matsumoto, Keltner, Shiota, O’Sullivan, & Frank, 2008). The studies varied in their procedures for eliciting emotional expressions. For example, some involved viewing emotion stimuli in private so that no display rules would be expected to be operating (e.g., Rosenberg & Ekman, 1994). Others involved conversations on emotion topics with an interviewer (Bonanno & Keltner, 1997) during which expressive regulation might (or might not) take place. Still, taken together, such studies provide important information about the role of prototypic expressions in emotion communication, and suggest that experienced emotion is sometimes—but not always—accompanied by the production of a corresponding emotional facial expression. It is notable, however, that in four cases (two for disgust and two for sadness), a negative emotion was positively correlated with its corresponding negative emotional expression, and, in four other cases, these same self-reported emotions (i.e., disgust and sadness) were also positively correlated with other negative emotional expressions (e.g., anger, contempt, fear and/or pain). Thus, these investigations suggest limited correspondence between spontaneous emotion expression and emotion experience. Indeed, facial expressions for emotions not self-reported by expressers may sometimes be produced (e.g., Fernandez-Dols, Carrera, & Crivelli, 2011; Fernandez-Dols, Sanchez, Carrera, & Ruiz-Belda, 1997).
One limitation of several studies that Matsumoto and colleagues (2008) reviewed is that measures of emotion expression and experience were averaged across extended time intervals rather than taken at the same moment in time. Another limitation is that virtually none of these studies examined their data using two stringent criteria that could produce stronger evidence for (or against) the automatic read-out hypothesis. Several decades ago, Hiatt, Campos, and Emde (1979) articulated two criteria for determining whether a facial expression is differentially associated with a particular emotion. The first is intersituational specificity—the degree to which a presumptive emotional expression is produced more often in situations eliciting the target emotion than in situations that elicit other emotions. For example, is the prototypic anger facial expression shown significantly more often in anger situations than in situations of sadness, fear, and disgust? The second criterion is intrasituational specificity—the degree to which a presumptive emotional expression is produced more often than other emotional expressions in a situation eliciting its corresponding emotion. For example, is the prototypic anger facial expression produced significantly more often than fear, sadness, and disgust facial expressions in a situation that elicits primarily anger in the expresser? Campos and colleagues have argued that both criteria must be met in order to demonstrate adequate or convincing support for the automatic read-out hypothesis.
In our study, we employed these two criteria to investigate the spontaneous production of emotional facial expressions in children. Reflecting our interest in the role of prototypic facial expressions during social interactions, we chose to examine children’s facial behavior during conversations with their mothers rather than children’s solitary responses to standardized emotion elicitors.
The communicative value of prototypic facial expressions
As described above, empirical investigations of the “automatic read-out” hypothesis have often compared self-reported emotion to some objectively-derived set of facial behavior codes. It is assumed that an observer’s inferences are similarly based on prototypic facial behavior; observers are thought to rely on prototypic facial expressions to infer how others are feeling. However, the inferential utility of prototypic facial expressions is unclear, as many studies have demonstrated weak links between theoretically-assumed facial configurations and judged emotion (e.g., Carroll & Russell, 1996, 1997). To further investigate the communicative value of children’s facial expressions, we applied the two criteria of inter- and intrasituational specificity to observers’ judgments of children’s emotions.
Developmental Processes
Developmental researchers have been particularly interested in the role of emotional expressivity in children’s social and emotional adjustment. According to several theories of emotional development (Campos & Barrett, 1984; Campos, Mumme, Kermoian, & Campos, 1994; Izard, 1978, 1984) and several models of emotional competence (Halberstadt, Denham, & Dunsmore, 2009; Saarni, 1999), the ability to effectively communicate one’s emotions to others in a socially-appropriate manner increases across childhood and is positively related to children’s behavioral, social, and emotional adjustment (Halberstadt, Parker, & Castro, 2013). For example, children who express emotion in socially-appropriate ways (such as expressing positivity in the classroom setting) tend to demonstrate greater social skill and academic achievement (Denham et al., 2003; Hernandez et al., 2016; Sallquist, Didonato, Hanish, Martin, & Fabes, 2012).
One limitation of these studies, however, is the aggregation of facial expressions into global valence-based codes that specify a percentage of time children spent displaying positive or negative affect in the face (e.g., Casey, 1993). Not only does this approach reduce data from numerous individual coding units to unidimensional scales, it also reduces potential variability in the expression of multiple emotion components. Therefore, as with research on adults, it is not clear how often children’s facial expressions of discrete emotions (e.g., anger, fear, sadness) are spontaneously produced when emotion is experienced during the course of a given social interaction or to what extent they are the primary means through which emotion communication takes place during such an interaction.
By the elementary school years, children have accumulated substantial experience in expressing emotion but are increasingly situated within social and interpersonal contexts that might constrain this expression. Elementary-school-age children show an increasing capacity to manage their emotional experiences and expressions (Eisenberg & Morris, 2002; Holodynski, 2004; Pons, Harris, & de Rosnay, 2004), which is not yet fully mature. For example, third-grade children do not feel the need to regulate their emotions as much as older children (Zeman & Garber, 1996). There also appear to be differences in children’s expressivity across communicative contexts. For example, children are more comfortable expressing emotion with their mothers than with their peers (Zeman & Garber, 1996). Because the family is considered a primary context for elementary-school-age children’s emotion socialization and development (Dunsmore & Halberstadt, 1997; Eisenberg, Cumberland, & Spinrad, 1998), it may be especially relevant to examine children’s emotional expression in this context.
For these reasons, our study focused specifically on third-grade children in conversation with their mothers. These children were expected to behave in a relatively naturalistic manner, although some regulation may be taking place, as might be the case in any laboratory-based assessment of behavior. However, by studying children in this specific context, we can begin to explore the extent to which children spontaneously use prototypic facial expressions to communicate emotion in situations that approximate the opportunities and challenges of everyday life.
Emotion Coherence
Coherence among various emotion systems is a central tenet of basic emotion theory (Ekman, 1972, Ekman, 1993). An extension of this principle concerns the coherence between reported and judged emotion; that is, the observed agreement between how a person feels and how other people think a person feels. Past research has found a moderate link between self-reported and judged emotion in children and adults (e.g., Castro, Halberstadt, Lozada, & Craig, 2015; Matsumoto & Kupperbusch, 2001). However, it is unclear to what extent this coherence is reliant upon facially-communicated emotion, as these studies did not include the anatomically-based coding of facial expressions. Coherence between self-reported and judged emotion should emerge if emotions are communicated through prototypic facial behaviors, as such behaviors would serve as emotional signals that are conveyed by the sender and subsequently received by the decoder. However, coherence may also be high if emotions are communicated through other means beyond facial expressions. For example, children may communicate their emotional experiences through channels other than facial expressions, including through their vocal and bodily nonverbal behavior (Bachorowski, 1999; Boone & Cunningham, 2001). Thus, it is possible to observe high coherence between self-reported and judged emotion in the absence of associations between prototypic facial behavior and emotion judgments/ratings, suggesting that the face may not be the primary channel through which emotions are interpersonally communicated. To address this possibility, we also investigated the coherence between children’s self-reported emotion and observers’ judgments of children’s emotional behavior.
The Present Study
The present research had two goals: First, we aimed to examine the degree to which children’s emotions were communicated through prototypic facial expressions, and second, we aimed to examine the coherence between children’s experienced and judged emotion. Evidence that children’s facial behaviors do map onto their self-reports and/or observers’ judgments would support the automatic read-out hypothesis and demonstrate that facial behaviors have some communicative value; if instead children’s facial behaviors do not correspond to their experienced or judged emotion, but children’s self-reports and observers’ judgments do cohere, then multimodal expressions may hold greater communicative value than facial behaviors alone. To address our research goals, we measured third-grade children’s spontaneous emotional expressions during an emotion-eliciting conversation with their mothers in the laboratory.
Specifically, we tested the automatic read-out and communicative value hypotheses by comparing the children’s self-reports and observers’ emotion judgments to the children’s facial expressions. To do so, we determined whether children: (1) produced significantly more components of the prototypic joy, anger, fear, sadness, surprise, and disgust expression during episodes in which they or the observers indicated that the children were experiencing the corresponding emotion in comparison to episodes in which the children reported or were judged as experiencing a different emotion (supporting intersituational specificity) and (2) produced significantly more components of the prototypic joy, anger, fear, sadness, surprise, and disgust expressions than components of the expressions for other emotions in episodes corresponding to their self-reported or observer-judged emotion (supporting intrasituational specificity).
Second, to investigate the hypothesis that children effectively communicate their emotions to observers across multiple channels, we compared children’s self-reports of their emotions to the observers’ judgments of children’s emotional behavior. Consistent with past research, we expected a moderate degree of coherence between children’s self-reports and observers’ judgments. Moreover, to the extent that emotions are communicated via multiple channels, we expected this coherence regardless of whether children’s facial behaviors mapped onto their self-reports or observers’ judgments.
Method
Participants
Child participants were third-grade children who took part in a larger investigation of parental emotion socialization and child emotion understanding (Castro, Halberstadt, & Garrett-Peters, 2016; Rogers, Halberstadt, Castro, MacCormack, & Garrett-Peters, 2016). Videoclips of 120 children (55 girls, 65 boys) between the ages of 7 and 9 (M = 8.71, SD = 0.34) from the original study were selected for this study. Children were identified by their mothers as African American (N = 69), European American (N = 49), or Biracial (N = 2). Children represented a range of socioeconomic backgrounds, as evidenced by maternal education levels (Median = 14 years, some college education; range: 10th grade to college degree) and total family income (Median = $86,000; range: $4,800 to $420,000).
In addition, 9 untrained undergraduate research assistants (6 female, 3 male) at a large Southeastern public university served as observers for the present study. Six of the observers were European American, two were African American, and one was Indian American. Observers ranged in age from 19 to 26 years and were assigned to the project as part of a lab course focused on conducting independent research in psychology.
Procedure
All study procedures were approved by the Institutional Review Board for the Protection of Human Subjects in Research at North Carolina State University (NCSU IRB#1822).
Conflict discussion
Following Gunlicks-Stoessel and Powers (2008) and Welsh and Dickson (2005), children engaged in a 7-minute discussion with their mothers about an area of disagreement or conflict. Conflict topics were independently generated by children and their mothers; topics that were reported by both dyad members served as talking points during the discussion. Areas of conflict included homework, bedtime, chores, and siblings. Dyads were told to discuss each area of conflict until the 7 minutes had passed and were reminded that the goal was to discuss the conflict and potentially work toward a resolution. Mothers and children were video-recorded separately in these discussions to document their faces, upper-bodies, and voices, and to maximize the quality of material for facial coding; only the child recordings are relevant to this study.
Child self-reported emotions
Fifteen consecutive 10-second episodes from the beginning of each discussion were extracted into videoclips. The first videoclip began approximately 30 seconds into the conflict discussion to allow mothers and children sufficient time to acclimate to the discussion; the remaining 10-second videoclips followed with no interruptions between them (i.e., 15 consecutive videoclips). The same procedure was followed for all dyads. Children viewed each videoclip from their own discussion and identified what they were feeling using a forced-choice procedure. Given children’s emotion understanding skills between the ages of 7 and 9 (Pons et al., 2004) and results from pilot testing, we asked the children to select among six emotion families corresponding to the six basic emotion categories (i.e., joy, anger, fear, sad, surprise, disgust), and we provided each emotion family with two or more related emotion terms. Specifically, joy was presented as “happy/pleased/proud,” anger was presented as “irritated/frustrated/mad,” fear was presented as “anxious/worried/afraid,” sad was presented as “sad/hurt,” surprise was presented as “curious/interested/surprised,” and disgust was presented as “disgust/contempt.” Children could also select “no emotion” or report another emotion not presented in the list. For each episode, children verbally indicated their choice among the emotion families to a research assistant, who recorded their selections on a prepared answer sheet. Children then rated how strongly they experienced this emotion on a 5-point scale (1 = Just a Little to 5 = Very Strongly) as an indicator of emotional experience intensity.
Emotion judgments by observers
Following past research (Castro et al., 2015), the 9 observers engaged in the same emotion judgment procedure as did children, viewing each 10-second videoclip and then identifying what the child in the videoclip was feeling. To aid their judgments, observers were given the basic definition of each emotion term within each emotion family as well as the nonverbal cues (including facial, vocal, and bodily cues) typically associated with each discrete emotion. To illustrate, happy was defined as “a positive emotion indicative of pleasure, contentment, or general satisfaction with an event” and was characterized by “raised cheeks and crow’s feet at the corners of the eyes, a smile displayed by the mouth, laughter and quick speech.” For full judgment instructions and guide, see Supplemental Materials. Observers were told to focus on children’s nonverbal behaviors (how children said something) rather than children’s verbal expressions (what children said). Observers first viewed the entire video-recorded discussion and then coded each individual episode from start to finish. If multiple emotions were expressed within a given 10-second episode, observers were told to select the emotion that was most salient throughout the episode as indicated by intensity and duration. Mixed emotion ratings were not allowed.
Episode selection for facial coding
In order to address our questions regarding the role of facial expressions in effective emotion communication, we selected a subset of videoclips in which emotions were clearly communicated by children; our criterion was the agreement among the 9 independent observers. To determine this, we calculated an expressive clarity value for each episode (Noller, 2001; for recent example see Castro et al., 2015). Specifically, for each episode, we identified the emotion category most frequently chosen by the observers (including the category of “no emotion”). We then calculated an expressive clarity score by dividing the number of observers who chose that emotion category by the total number of observers. To illustrate, when all 9 observers agreed on an emotion (e.g., anger) for a given episode, then the episode was high on clarity and received a value of 1.0. If instead only 3 observers agreed on an emotion (and the other 6 observers distributed their choices over other emotions), then the episode had a score of .33. A threshold value of .78 (or agreement by 7/9 observers) was used to identify a subset of episodes for facial coding. This resulted in a sample of 441 episodes across 120 children. Importantly, this final sample of episodes represented a range of emotional intensities: Children reported feeling an emotion “just a little” in 24% of episodes and “very strongly” in approximately 14% of episodes, with the remaining episodes falling in between these intensities (full intensity statistics reported in supplemental materials).
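To make the expressive clarity computation concrete, the following minimal sketch (in Python, using hypothetical observer labels rather than study data) reproduces the calculation described above:

```python
from collections import Counter

def expressive_clarity(observer_labels):
    """Return the modal emotion category (including "no emotion") and the
    proportion of observers who chose it for one 10-second episode."""
    counts = Counter(observer_labels)
    modal_emotion, modal_count = counts.most_common(1)[0]
    return modal_emotion, modal_count / len(observer_labels)

# Hypothetical episode: 7 of the 9 observers chose "anger"
labels = ["anger"] * 7 + ["sad", "no emotion"]
emotion, clarity = expressive_clarity(labels)
print(emotion, round(clarity, 2))  # anger 0.78
assert clarity >= 7 / 9            # meets the .78 threshold, so the episode is retained
```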
Anatomically-based facial coding
Children’s facial expressions were coded using the Facial Action Coding System (FACS; Ekman, Friesen, & Hager, 2002). FACS is a comprehensive, anatomically-based system in which coders identify the muscle movements and contractions involved in a facial expression and code them into individual coding units—referred to as Action Units or AUs. FACS coding thus occurs at the level of the muscle movement rather than the emotion expression; coders must identify whether specific facial muscles are activated (or not) without consideration of whether they are involved in an emotion-related configuration of facial movements. Although unsuitable for infants and possibly toddlers due to age differences in facial morphology, FACS has been successfully used with children as young as 3 years of age (Camras, Chen, Bakeman, Norris, & Cain, 2006).
The 441 episodes were FACS-coded by a trained coder (the first author). The coder first reviewed the FACS Investigator’s Manual and practice materials (Ekman et al., 2002) under the supervision of a FACS-certified researcher (the fourth author). Reliability was then established between the two coders on the materials used in the present study. To do so, a subset of episodes (7%) was scored by both coders. Inter-rater reliability was calculated following the formula provided in the FACS Investigator’s Manual ([number of agreements × 2] / total number of unique codes). The average reliability score was .88, substantially exceeding the conventionally-acceptable score for FACS coding (i.e., .70; Ekman et al., 2002).
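As a worked illustration of that agreement ratio under the FACS manual’s formulation (the AU lists below are hypothetical, not taken from the study):

```python
def facs_agreement(coder_a, coder_b):
    """Agreement ratio: (2 x number of AUs scored by both coders) divided by
    the total number of AU codes given by the two coders."""
    shared = set(coder_a) & set(coder_b)
    return 2 * len(shared) / (len(coder_a) + len(coder_b))

# Hypothetical episode: coder A scores AUs 4, 7, 23; coder B scores AUs 4, 23, 24
print(round(facs_agreement([4, 7, 23], [4, 23, 24]), 2))  # (2 * 2) / 6 = 0.67
```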
Emotion interpretation of facial codes
The presence of emotion-relevant configurations of AUs was determined based on the recommendations and guidelines developed by Ekman and colleagues as presented in the FACS Investigator’s Manual (Ekman et al., 2002) and on configurations described by Izard and colleagues based on their research with infants and children (Izard, Dougherty, & Hembree, 1983). More specifically, for each episode, the AUs produced in the upper face (i.e., brow/eye areas) and lower face (i.e., nose/mouth areas) were separately examined to determine the presence or absence of configurations hypothesized to express each of six basic emotion categories: joy, anger, fear, sad, surprise, and disgust (for configurations, see Table 1). This process resulted in six FACS-based expression (FBE) scores for each episode. Scores ranged from 0 to 2: “2” indicated that an emotion-relevant configuration was produced in both upper and lower areas of the face (full prototypic expression), “1” indicated that an emotion-relevant configuration was produced in either the upper or lower part of the face (partial prototypic expression), and “0” indicated that an emotion-relevant configuration was not produced at all. Our decision to code partial expressions (codes of “1”) reflects past practices by other developmental investigations of discrete emotions (e.g., Izard & Abe, 2004) and was designed to widen the range of expressions explored within our study. Frequencies of each score for each emotion are presented in Table 2.
Table 1.
Interpretation of Action Unit Configurations into Prototypic Expressions
| | Joy | Anger | Fear | Sad | Surprise | Disgust |
|---|---|---|---|---|---|---|
| Upper face | 6 | 4 c | 1 + 2 + 4 | 1 | 1 + 2 | N/A e |
| | 6 + 7 a | 4 + 5 | 1 + 2 + 4 + 5 | 1 + 4 | 1 + 2 + 5 f | |
| | | | 1 + 2 + 5 f | | 5 b | |
| | | | 5 b | | | |
| Lower face | 12 | 23 | 20 | 15 | 25 + 26 | 9 |
| | 12 + 25/26 | 23 + 10/17/22/25/26 | 20 + 25 | 15 + 17 | 25 + 27 | 9 + 16/17/25/26 |
| | | 24 | 20 + 26 d | | | 9 + 10 |
| | | 24 + 17 | | | | 9 + 10 + 16/17/25/26 |
| | | 20 + 26 d | | | | 10 |
| | | | | | | 10 + 16/17/25/26 |
| | | | | | | 14 (unilateral/bilateral) |

Note:
a. Coded as joy if in the presence of 12 and there is no negative brow, otherwise ignored.
b. Coded as fear if co-occurred with a fear mouth or surprise if co-occurred with a surprise mouth, otherwise ignored.
c. Coded as anger if co-occurred with anger mouth, otherwise ignored.
d. Coded as anger if 26 > 20 or as fear if 26 < 20.
e. Because expressions of disgust involve only lower face movements, the highest FACS-based Expression score will be 1.
f. Coded as fear if co-occurred with fear mouth, coded as surprised if co-occurred with surprise mouth.
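A simplified sketch of how the configurations in Table 1 could be turned into an episode’s 0–2 FBE score for one emotion is given below (illustrative only; it uses the unconditional joy entries and ignores the lettered co-occurrence rules):

```python
# Unconditional joy configurations from Table 1 (lettered rules omitted).
JOY_UPPER = [{6}, {6, 7}]               # brow/eye-area configurations
JOY_LOWER = [{12}, {12, 25}, {12, 26}]  # nose/mouth-area configurations

def fbe_score(episode_aus, upper_configs, lower_configs):
    """Return 0 (no configuration), 1 (upper OR lower face only), or 2 (both)."""
    aus = set(episode_aus)
    upper = any(cfg <= aus for cfg in upper_configs)
    lower = any(cfg <= aus for cfg in lower_configs)
    return int(upper) + int(lower)

print(fbe_score([6, 12, 25], JOY_UPPER, JOY_LOWER))  # 2: full prototypic joy expression
print(fbe_score([12], JOY_UPPER, JOY_LOWER))         # 1: partial (lower face only)
```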
Results
Descriptive Results
As shown in the right three columns of Table 2, children produced more partial emotional facial expressions (consisting of movements in the upper or lower face, but not both areas) than complete prototypic facial expressions (consisting of movements in both the upper and lower face areas) during the conflict discussion with their mothers. The 441 episodes (selected for the present study on the basis of their expressive clarity scores, see above) varied in their distribution across emotion categories as indicated by the child self-reports and observer judgments (see the left two columns in Table 2). For children’s self-reports, the most frequent emotion category was joy, followed by “no emotion” and then by another emotion not captured by the seven response options. Similar frequencies of episodes fell into the categories of child-reported anger, fear, sad, and surprise. Disgust episodes as reported by children were relatively infrequent. For the observer judgments, the most frequent emotion category was that of “no emotion,” followed by surprise, anger, and joy. Relatively few episodes were categorized as displaying fear, and no episodes were identified as displaying disgust by observers.
Table 2.
Child- and Observer-Identified Episodes and FACS-based Expression Frequencies by Emotion
The left two columns give episode categorization counts and the right three columns give FACS-based expression¹ counts.

| Emotion Category | Child-identified episodes² (N = 436) | Observer-identified episodes (N = 441) | No FACS-based expression¹ (N = 1704) | Partial FACS-based expression¹ (N = 671) | Full FACS-based expression¹ (N = 271) |
|---|---|---|---|---|---|
| Joy | 101 | 71 | 259 | 79 | 103 |
| Anger | 41 | 76 | 285 | 128 | 28 |
| Fear | 37 | 10 | 327 | 71 | 43 |
| Sad | 39 | 23 | 366 | 57 | 18 |
| Surprise | 42 | 82 | 170 | 192 | 79 |
| Disgust | 15 | 0 | 297 | 144 | – |
| No Emotion | 92 | 179 | – | – | – |
| Other Emotion³ | 69 | – | – | – | – |
Note:
1. Because the 441 episodes could contain multiple FACS-based expression scores (i.e., one score for each of six discrete emotion categories per episode), the total number of FACS-based expressions will be greater than the total number of episodes and will vary across type of expression (none, partial, full).
2. Children’s emotion ratings were missing for five episodes due to audiovisual failures.
3. Children were able to select “Other Emotion” and were instructed to report what other emotion(s) they were feeling; these reports were then examined for fit within the six discrete emotion categories and the category of no emotion. Responses that matched a provided category (N = 19 episodes) were rescored accordingly and are not included in this cell.
Analytic Strategy
Situations (episodes) identified by child self-report and observer judgment were analyzed separately to allow for evaluation of the automatic read-out hypothesis (using the self-report data) and the communicative value of the children’s facial expressions (using the observer judgments). To assess our hypotheses, we examined whether children’s facial behavior demonstrated intersituational specificity (e.g., they produced joy expressions more often in joy episodes than in non-joy episodes) and intrasituational specificity (e.g., they produced joy expressions more than other emotional expressions in joy episodes). Figures 1 and 2 illustrate the mean FBE scores for each category of emotional episodes as identified by child self-report and observer judgment, respectively (for specific estimates, see supplemental materials).
Figure 1.
Mean FACS-based emotional expression scores for each category of self-report emotion episodes. Scores could range from 0–2 for Joy, Anger, Fear, Sad, and Surprise expressions; Disgust expression scores could range from 0–1.
Figure 2.
Mean FACS-based emotional expression scores for each category of observer emotion episodes. Scores could range from 0–2 for Joy, Anger, Fear, Sad, and Surprise expressions; Disgust expression scores could range from 0–1. No episodes were rated by observers as displaying disgust.
To evaluate the intersituational and intrasituational specificity of the children’s facial behavior, we conducted a series of multilevel models (Raudenbush & Bryk, 2002) in Mplus version 7.4 (Muthén & Muthén, 1998–2015). Multilevel modeling is the ideal analytic strategy for our data for two reasons. First, because our data included multiple episodes nested within children, our data are considered “multilevel,” with episodes at Level 1 nested within children at Level 2. In this way, our analyses are similar to a repeated-measures analysis of variance, with episodes measured repeatedly within children. However, multilevel modeling differs from a repeated-measures analysis of variance in that equal cell sizes are not required. Thus, multilevel modeling provides a second advantage in the analysis of unbalanced data. Given that the number of episodes varied across children (ranging from 1 to 11), multilevel modeling is the more appropriate analysis method for our data.
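The models reported below were estimated in Mplus; purely to illustrate the nesting structure, an analogous random-intercept model could be sketched in Python with statsmodels (the file and variable names here are hypothetical, not the study’s):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per episode, nested within children.
# Columns: child_id, joy_fbe (0-2), and dummy codes for the non-target episodes.
df = pd.read_csv("episodes_long.csv")

# Episode dummies predict joy FBE scores at Level 1, with a random intercept
# for each child capturing between-child (Level 2) variability.
model = smf.mixedlm(
    "joy_fbe ~ ep_anger + ep_fear + ep_sad + ep_surprise + ep_no_emotion",
    data=df,
    groups=df["child_id"],
)
result = model.fit()
print(result.summary())
```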
Intersituational and Intrasituational Specificity Multilevel Models
Fully unconditional models
We first conducted fully unconditional models (e.g., Raudenbush & Bryk, 2002) to determine whether there was sufficient variability at Level 1 (within-children) and Level 2 (between-children). These models are essentially null models that include only the intercept with no predictors, yielding a fixed effect for the intercept and random effects for the Level 1 (within-child) and Level 2 (between-child) residual variances. Results then provide estimates of the relative amount of variability at each level of the model and tests of whether each variance component is significant. Two sets of fully unconditional models were estimated.
First, we partitioned variance for the intersituational specificity dependent variables; for these analyses, FBE scores for joy, anger, fear, sadness, surprise, and disgust were entered as continuous dependent variables with no observed predictors. Results indicated that 52.17% of the variability in joy facial expressions was between-children (τ00 = 0.36, p < .001) and 47.83% was within-children (σ2 = 0.33, p < .001). In contrast, 21.05% of the variability in anger facial expressions was between-children (τ00 = 0.08, p = .002) and 78.95% was within-children (σ2 = 0.30, p < .001). For fear facial expressions, 25.58% of the variability was between-children (τ00 = 0.11, p = .001) and 74.42% was within-children (σ2 = 0.32, p < .001). For sad facial expressions, 28.00% of the variability was between-children (τ00 = 0.07, p = .001) and 72.00% was within-children (σ2 = 0.18, p < .001). For surprise facial expressions, 28.30% of the variability was between-children (τ00 = 0.15, p < .001) and 71.70% was within-children (σ2 = 0.38, p < .001). Similarly, for disgust facial expressions, 9.09% of the variability was between-children (τ00 = 0.02, p = .018) and 90.91% was within-children (σ2 = 0.20, p < .001). Second, we partitioned variance for the intrasituational specificity dependent variable: we conducted a fully unconditional model on children’s FBE scores nested within all six discrete emotion categories. This analysis revealed that 4.44% of the variability in facial expression was between-children (τ00 = 0.02, p = .001) and 95.56% was within-children (σ2 = 0.43, p < .001).
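The between-child percentages reported above are intraclass correlations computed from the two variance components; for example, for joy facial expressions:

```latex
\mathrm{ICC} = \frac{\tau_{00}}{\tau_{00} + \sigma^{2}}
             = \frac{0.36}{0.36 + 0.33} \approx .52 \;(52.17\%\ \text{between-children}),
\qquad 1 - \mathrm{ICC} \approx .48 \;(47.83\%\ \text{within-children}).
```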
Together, results from these fully unconditional models demonstrated significant and sufficient within-child variability in children’s FACS-based emotional expressions, indicating that children’s FBE scores varied significantly across emotional episodes and supporting further predictive models. Given that the amount of variability between-children (Level 2), though significant, was relatively low across models, and because we did not have any hypothesized predictors at the between-child level, we continued with Level 1 models only.
Random coefficients regression models
We then conducted four series of random coefficients regression models (Kahn, 2011; Raudenbush & Bryk, 2002) with non-randomly varying slopes to test our inter- and intrasituational specificity hypotheses.
Intersituational specificity models
Two series of models were used to address our hypotheses regarding intersituational specificity (e.g., whether children produced FACS-based joy expressions more in joy episodes than in non-joy episodes). The first series consisted of six models predicting children’s FBEs of joy, anger, fear, sadness, surprise and disgust with children’s self-reported feelings of joy, anger, fear, sadness, surprise, disgust, other emotion, and no emotion as Level 1 (within-child) predictors (see Table 3). The second analytic series consisted of five models that predicted children’s FBEs of joy, anger, fear, sadness, and surprise with observers’ judgments of children’s emotions as joy, anger, fear, sadness, surprise, and no emotion as Level 1 predictors (see Table 4). Because no episodes were rated by observers as displaying disgust, observers’ disgust ratings were not entered as a predictor nor were FACS-based disgust expressions treated as a dependent variable in the second series of models.
Table 3.
Unstandardized Coefficients, Standard Errors, and 95% Confidence Intervals of Intersituational Specificity Multilevel Models for Child Episodes
Rows are the dummy-coded episode predictors¹ (γ01–γ07) and columns are the FACS-based expression (FBE) dependent variables (β0); cells show B (SE) [95% CI].

| Fixed Effects | Joy FBE | Anger FBE | Fear FBE | Sad FBE | Surprise FBE | Disgust FBE |
|---|---|---|---|---|---|---|
| Intercept γ00 | 0.95 (0.10) [0.76, 1.14] | 0.48 (0.09) [0.31, 0.65] | 0.45 (0.12) [0.22, 0.69] | 0.23 (0.08) [0.06, 0.40] | 0.91 (0.11) [0.69, 1.31] | 0.55 (0.11) [0.33, 0.76] |
| Joy episode | – | −0.14 (0.10) [−0.33, 0.05] | −0.06 (0.12) [−0.29, 0.16] | −0.02 (0.09) [−0.20, 0.16] | −0.14 (0.13) [−0.40, 0.11] | −0.21† (0.11) [−0.42, 0.01] |
| Anger episode | −0.27† (0.15) [−0.56, 0.02] | – | −0.11 (0.16) [−0.42, 0.20] | 0.02 (0.13) [−0.25, 0.28] | −0.17 (0.14) [−0.44, 0.10] | −0.27 (0.12) [−0.50, −0.05] |
| Fear episode | −0.42 (0.13) [−0.67, −0.17] | 0.11 (0.13) [−0.14, 0.36] | – | −0.04 (0.11) [−0.25, 0.18] | −0.06 (0.14) [−0.33, 0.20] | −0.27 (0.13) [−0.51, −0.02] |
| Sad episode | −0.30 (0.15) [−0.59, −0.00] | 0.00 (0.14) [−0.27, 0.28] | 0.06 (0.16) [−0.25, 0.36] | – | −0.21 (0.17) [−0.54, 0.12] | −0.23† (0.14) [−0.50, 0.04] |
| Surprise episode | −0.38 (0.13) [−0.64, −0.11] | −0.15 (0.12) [−0.39, 0.09] | −0.09 (0.16) [−0.40, 0.23] | −0.05 (0.11) [−0.26, 0.17] | – | −0.17 (0.13) [−0.42, 0.08] |
| Disgust episode | −0.24 (0.16) [−0.56, 0.08] | −0.01 (0.16) [−0.32, 0.30] | −0.15 (0.18) [−0.49, 0.20] | 0.06 (0.15) [−0.24, 0.36] | 0.08 (0.18) [−0.28, 0.43] | – |
| Other Emotion episode | −0.33 (0.14) [−0.61, −0.05] | 0.04 (0.13) [−0.21, 0.28] | −0.07 (0.16) [−0.39, 0.25] | 0.04 (0.13) [−0.22, 0.30] | −0.15 (0.14) [−0.42, 0.12] | −0.25† (0.13) [−0.50, 0.01] |
| No Emotion episode | −0.45 (0.11) [−0.67, −0.24] | −0.11 (0.11) [−0.32, 0.10] | −0.31 (0.12) [−0.54, −0.07] | −0.00 (0.11) [−0.21, 0.21] | −0.06 (0.12) [−0.29, 0.17] | −0.21† (0.13) [−0.46, 0.04] |
| Random effects: Between (τ00) | 0.33 (0.06) [0.22, 0.43] | 0.07 (0.02) [0.02, 0.12] | 0.10 (0.03) [0.04, 0.16] | 0.07 (0.02) [0.03, 0.11] | 0.16 (0.03) [0.09, 0.22] | 0.02 (0.01) [0.00, 0.04] |
| Random effects: Within (σ2) | 0.30 (0.04) [0.23, 0.38] | 0.30 (0.03) [0.24, 0.35] | 0.30 (0.04) [0.22, 0.39] | 0.19 (0.03) [0.13, 0.25] | 0.37 (0.03) [0.31, 0.44] | 0.20 (0.01) [0.17, 0.22] |
Note:
Level 1 predictors were the target emotional episodes as identified by children. Episodes were dummy-coded such that the target emotional episode (the child self-reported episode that matched the FACS emotion category in each model; i.e., episodes where children said they were feeling joy in the joy FACS expression model) served as the reference group in all models. Each model contained seven dummy-coded predictors (γ01–γ07) corresponding to the seven non-target episode categories. Significant coefficients (p < .05) noted in bold. Marginally significant coefficients (p < .10) noted by †.
Table 4.
Unstandardized Coefficients, Standard Errors, and 95% Confidence Intervals of Intersituational Specificity Multilevel Models for Observer Episodes
Rows are the dummy-coded episode predictors¹ (γ01–γ05) and columns are the FACS-based expression (FBE) dependent variables (β0); cells show B (SE) [95% CI].

| Fixed Effects | Joy FBE | Anger FBE | Fear FBE | Sad FBE | Surprise FBE |
|---|---|---|---|---|---|
| Intercept γ00 | 1.89 (0.05) [1.80, 1.98] | 0.75 (0.09) [0.58, 0.92] | 0.60 (0.20) [0.21, 0.98] | 0.48 (0.17) [0.14, 0.81] | 0.88 (0.10) [0.69, 1.06] |
| Joy episode | – | −0.52 (0.11) [−0.73, −0.31] | −0.25 (0.22) [−0.68, 0.18] | −0.35† (0.18) [−0.70, −0.00] | 0.01 (0.14) [−0.28, 0.29] |
| Anger episode | −1.43 (0.13) [−1.68, −1.18] | – | −0.10 (0.22) [−0.52, 0.33] | −0.06 (0.20) [−0.46, 0.34] | 0.10 (0.12) [−0.12, 0.33] |
| Fear episode | −1.50 (0.25) [−1.98, −1.02] | −0.35 (0.27) [−0.87, 0.17] | – | −0.15 (0.26) [−0.66, 0.35] | −0.07 (0.23) [−0.53, 0.39] |
| Sad episode | −1.56 (0.14) [−1.83, −1.29] | −0.53 (0.11) [−0.74, −0.32] | −0.42 (0.21) [−0.82, −0.01] | – | −0.13 (0.13) [−0.39, 0.13] |
| Surprise episode | −1.30 (0.10) [−1.50, −1.10] | −0.31 (0.11) [−0.53, −0.08] | −0.26 (0.19) [−0.63, 0.11] | −0.35 (0.17) [−0.69, −0.02] | – |
| No Emotion episode | −1.55 (0.08) [−1.71, −1.39] | −0.40 (0.10) [−0.60, −0.20] | −0.32 (0.20) [−0.72, 0.08] | −0.30† (0.18) [−0.64, 0.05] | −0.24 (0.12) [−0.47, −0.01] |
| Random effects: Between (τ00) | 0.10 (0.03) [0.04, 0.16] | 0.04† (0.02) [−0.00, 0.09] | 0.10 (0.03) [0.23, 0.40] | 0.05 (0.02) [0.02, 0.08] | 0.15 (0.03) [0.09, 0.22] |
| Random effects: Within (σ2) | 0.26 (0.03) [0.19, 0.32] | 0.30 (0.03) [0.24, 0.36] | 0.31 (0.04) [0.23, 0.40] | 0.18 (0.03) [0.13, 0.24] | 0.37 (0.03) [0.31, 0.43] |
Note:
Level 1 predictors were the target emotional episodes as identified by observers. Episodes were dummy-coded such that the target emotional episode (the observer-identified episode that matched the FACS emotion category in each model; i.e., episodes where observers said children were feeling joy in the joy FACS expression model) served as the reference group in all models. Each model contained five dummy-coded predictors corresponding to γ01, γ02, γ03, γ04, and γ05. Significant coefficients (p < .05) noted in bold. Marginally significant coefficients (p < .10) noted by †.
In each model, binary dummy-coded episode variables were used to test whether FBEs of a given emotion category varied significantly between the different emotional episodes as identified by the children (and then secondly, as identified by observers). For example, we tested whether FACS-based joy expressions varied significantly between child-identified joy episodes (and secondly, between observer judgments) and episodes identified as anger, fear, sadness, surprise, disgust, other emotion, and no emotion.
Below, we illustrate the basic equations for the joy model using observer-rated episodes as predictors; similarly constructed equations were created for the intersituational specificity models using the child self-reports:
Level 1: FACS-based Joy Expressionij = β0i + β1i(Anger Episode Dummy Code) + β2i(Fear Episode Dummy Code) + β3i(Sad Episode Dummy Code) + β4i(Surprise Episode Dummy Code) + β5i(No Emotion Episode Dummy Code) + rij

Level 2: β0i = γ00 + u0i
β1i = γ10
β2i = γ20
β3i = γ30
β4i = γ40
β5i = γ50
In Level 1, the intercept, β0i, is defined as the expected FACS-based joy expression score for child i in episode j when all of the dummy codes equal 0 (i.e., in an observer-identified joy episode). The slopes, β1i, β2i, β3i, β4i, and β5i, reflect the expected difference in FACS-based joy expression scores between episodes identified by observers as joy and episodes identified by observers as anger (β1i), fear (β2i), sad (β3i), surprise (β4i), and no emotion (β5i). The error term, rij, is the episode-specific residual for episode j within child i (i.e., it captures how much a given child’s scores vary across episodes). The individual intercept (β0i) and slopes (β1i–β5i) become the outcome variables in the Level 2 equations, where the average FACS-based joy expression score for the sample for observer-identified joy episodes (i.e., when the Anger, Fear, Sad, Surprise, and No Emotion Episode Dummy Codes all equal 0) is represented by γ00, the average difference in FACS-based joy expression scores between joy and anger episodes is represented by γ10, the average difference between joy and fear episodes by γ20, between joy and sad episodes by γ30, between joy and surprise episodes by γ40, and between joy and no emotion episodes by γ50. Because non-joy episodes are coded 1 on their respective dummy codes, negative γ coefficients indicate that children produced greater FACS-based joy expressions in joy episodes than in non-joy episodes (i.e., supporting intersituational specificity), whereas positive γ coefficients indicate that children produced fewer FACS-based joy expressions in joy episodes than in non-joy episodes (i.e., a lack of intersituational specificity). The extent to which children vary from the sample average in FACS-based joy expression scores is represented by u0i. The remaining intersituational models (with observer judgments and child self-reports as predictors) consisted of similar equations: observer-identified and child self-reported episodes were dummy coded and entered into the model as binary predictors, with the target emotional episode (i.e., the emotional episode matching the FBE dependent variable) serving as the reference group. Additional equations are omitted here to eliminate redundancy.
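Substituting the Level 2 equations into the Level 1 equation gives the combined (mixed-model) form of the same model, with the dummy codes abbreviated for readability:

```latex
\text{FACS-based Joy Expression}_{ij} = \gamma_{00}
  + \gamma_{10}\,\text{Anger}_{ij}
  + \gamma_{20}\,\text{Fear}_{ij}
  + \gamma_{30}\,\text{Sad}_{ij}
  + \gamma_{40}\,\text{Surprise}_{ij}
  + \gamma_{50}\,\text{No\,Emotion}_{ij}
  + u_{0i} + r_{ij}
```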
Intersituational specificity results for child self-report episodes
Overall, there was weak support for the intersituational specificity hypotheses for emotion episodes identified by children’s self-report (see Table 3). As might be expected, children produced significantly greater FACS-based joy facial expressions in episodes where they said they felt joy than in episodes where they said they felt fear (β = −.20, p = .001), sad (β = −.15, p = .046), surprised (β = −.19, p = .004), some other emotion (β = −.21, p = .017), or no emotion (β = −.32, p < .001), but surprisingly, FACS-based joy facial expressions did not vary significantly between episodes where children felt joy and episodes where children felt anger (β = −.14, p = .060) or disgusted (β = −.08, p = .134). This model explained 4.3% of the within-child variance in children’s joy facial expressions.
Children’s FACS-based anger facial expressions did not vary significantly between episodes where children felt angry and episodes where children felt joy (β = −.11, p = .133), fear (β = .06, p = .387), sad (β = .00, p = .979), surprise (β = −.08, p = .204), disgust (β = −.00, p = .3948), some other emotion (β = .02, p = .778), or no emotion (β = −.08, p = .328). Children produced significantly greater FACS-based fear expressions in episodes where children felt fear than in episodes where children felt no emotion (β = −.22, p = .009). However, children’s FACS-based fear expressions did not vary significantly between episodes where children felt fear and episodes where children felt joy (β = −.05, p = .585), anger (β = −.06, p = .475), sadness (β = .03, p = .718), surprise (β = −.05, p = .590), disgust (β = −.05, p = .411), or some other emotion (β = −.04, p = .682). This model accounted for 4.0% of the within-child variance in children’s fear facial expressions.
Children’s FACS-based sad facial expressions did not vary significantly between episodes where children felt sad and episodes where children felt joy (β = −.02, p = .830), anger (β = .01, p = .912), fear (β = −.02, p = .737), surprise (β = −.03, p = .673), disgust (β = .03, p = .695), some other emotion (β = .04, p = .753), or no emotion (β = −.00, p = .991). Similarly, children’s FACS-based surprise facial expressions did not vary significantly between episodes where children felt surprised and episodes where children felt joy (β = −.10, p = .276), anger (β = −.08, p = .221), fear (β = −.03, p = .646), sad (β = −.10, p = .215), disgust (β = .02, p = .674), some other emotion (β = −.09, p = .271), or no emotion (β = −.04, p = .603). Children produced significantly greater FACS-based disgust facial expressions in episodes where children felt disgusted than in episodes where children felt anger (β = −.18, p = .017) or fear (β = −.17, p = .032). However, children’s FACS-based disgust facial expressions did not vary significantly between episodes where children felt disgusted and episodes where children felt joy (β = −.20, p = .062), sadness (β = −.15, p = .091), surprise (β = −.11, p = .188), some other emotion (β = −.20, p = .057), or no emotion (β = −.19, p = .099).
Intersituational specificity results for observer episodes
Results from the observer judgments provided partial support for the intersituational specificity hypotheses (see Table 4). As might be expected, intersituational specificity was greatest for joy: Children produced significantly greater FACS-based joy facial expressions in episodes categorized as joy by observers than in all other episodes, including episodes categorized by observers as anger (β = −0.73, p < .001), fear (β = −.30, p < .001), sadness (β = −.47, p < .001), surprise (β = −.68, p < .001), and no emotion (β = −1.02, p < .001). The model explained 54.0% of the within-child variance in children’s joy facial expressions. Similarly, children produced significantly greater FACS-based anger facial expressions in episodes categorized as anger by observers than in episodes categorized as joy (β = −.33, p < .001), sadness (β = −.21, p < .001), surprise (β = −.21, p = .007), and no emotion (β = −.35, p < .001) by observers. However, children’s FACS-based anger facial expressions did not vary significantly between observer-identified angry and fearful episodes (β = −.09, p = .188). This model explained 8.8% of the within-child variance in children’s anger facial expressions.
Intersituational specificity for fear facial expressions was found only in comparisons with sad episodes: Children produced significantly greater FACS-based fear facial expressions in episodes categorized as fearful by observers than in episodes categorized as sad by observers (β = −.16, p = .043). However, FACS-based fear expressions did not vary significantly between fear episodes and joy (β = −.16, p = .255), anger (β = −.06, p = .656), surprise (β = −.18, p = .166), or no emotion episodes (β = −.28, p = .113). The model explained 2.8% of the within-child variance in children’s facial expressions of fear.
With regard to sad facial expressions, children produced significantly greater FACS-based sad facial expressions in episodes categorized as sad by observers than in episodes categorized as joy (β = −.29, p = .040) or surprise (β = −.31, p = .033), but did not vary significantly from FACS-based sad expressions produced in episodes categorized as anger (β = −.05, p = .763), fear (β = −.05, p = .550), and no emotion (β = −.33, p = .080). This model accounted for 7.1% of the within-child variance in children’s sad facial expressions.
Lastly, children produced significantly greater FACS-based surprise facial expressions in episodes categorized as surprise by observers than in episodes categorized as no emotion (β = −.19, p = .041). However, FACS-based surprise expressions did not vary significantly between surprise episodes and joy (β = −.00, p = .958), anger (β = .06, p = .370), fear (β = −.02, p = .761), or sadness episodes (β = −.05, p = .331). This model explained 4.7 % of the within-child variance in children’s surprise facial expressions.
In sum, intersituational specificity was consistently supported only for joy expressions: Children produced greater FBE of joy in episodes where they reported feeling or were judged to be feeling joy than in all other emotional episodes (child-reported and observer-identified). However, intersituational specificity was weaker or absent for negative emotional expressions in child- and observer-identified episodes. We turn next to our intrasituational specificity hypotheses and analyses.
Intrasituational specificity models
To address the hypotheses regarding intrasituational specificity (e.g., whether children produced joy facial expressions more than other emotional facial expressions in joy episodes), we conducted two additional series of multilevel models examining children’s FBE scores within each category of emotion episode (as designated according to either the observer judgments or the children’s self-reported emotion). Because the FACS coding of each episode was used to generate expression scores for each of six discrete emotions (i.e., FACS-based joy, anger, fear, sad, surprise, and disgust expressions), children’s FBE scores (ranging from 0 to 2) were nested within the six expression categories for each episode.
The first series of analyses consisted of six models that compared children’s FBE scores across the six discrete emotion expression categories within a subset of target emotional episodes as indicated by child self-report (see Table 5). Each model included only those episodes for which children indicated they were feeling a particular target emotion (i.e., joy, anger, fear, sadness, surprise, or disgust). For example, in one model we included only those episodes where children indicated that they felt joy so as to compare children’s FBE scores for joy with their FBE scores for the other emotions. The second analytic series consisted of five models that compared children’s FBE scores for the six discrete emotions within a specified set of observer-identified emotional episodes (see Table 6). Again, each model included only those episodes for which observers indicated a child was feeling a particular target emotion (i.e., joy, anger, fear, sadness or surprise). For example, in one model we included only those episodes where observers indicated that children were feeling joy so as to compare children’s FBE scores for joy with their FBE scores for the other emotions.
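Purely as an illustration of this nesting (column names hypothetical), each episode’s six FBE scores can be stacked into long format so that expression categories are nested within episodes, and the relevant target episodes can then be selected for each model:

```python
import pandas as pd

# Hypothetical wide-format data: one row per episode with six FBE scores (0-2).
wide = pd.DataFrame({
    "child_id": [1, 1, 2],
    "episode_id": [101, 102, 201],
    "self_report": ["joy", "anger", "joy"],
    "fbe_joy": [2, 0, 1], "fbe_anger": [0, 1, 0], "fbe_fear": [0, 0, 0],
    "fbe_sad": [0, 1, 0], "fbe_surprise": [1, 0, 2], "fbe_disgust": [0, 0, 0],
})

# Stack the six scores so each row is one expression category within an episode.
long = wide.melt(
    id_vars=["child_id", "episode_id", "self_report"],
    var_name="expression", value_name="fbe_score",
)

# e.g., keep only child-reported joy episodes for the joy-episode model
joy_episodes = long[long["self_report"] == "joy"]
print(joy_episodes)
```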
Table 5.
Unstandardized Coefficients, Standard Errors, and 95% Confidence Intervals of Intrasituational Specificity Multilevel Models for Child Episodes
Rows are the six dummy-coded FACS-based expression categories¹ (γ01–γ05) and columns are the emotional episodes as reported by children; cells show B (SE) [95% CI] for the FACS-based expression score (β0).

| Fixed Effects | Joy Episode | Anger Episode | Fear Episode | Sad Episode | Surprise Episode | Disgust Episode |
|---|---|---|---|---|---|---|
| Intercept γ00 | 1.03 (0.11) [0.82, 1.24] | 0.56 (0.09) [0.38, 0.74] | 0.47 (0.16) [0.15, 0.77] | 0.24 (0.08) [0.08, 0.41] | 0.93 (0.13) [0.68, 1.18] | 0.47 (0.11) [0.26, 0.69] |
| Joy expression | – | 0.07 (0.23) [−0.39, 0.53] | 0.00 (0.23) [−0.44, 0.44] | 0.31 (0.20) [−0.08, 0.70] | −0.43 (0.19) [−0.80, −0.06] | 0.33 (0.37) [−0.39, 1.05] |
| Anger expression | −0.71 (0.13) [−0.96, −0.47] | – | 0.08 (0.23) [−0.37, 0.54] | 0.31 (0.15) [0.01, 0.61] | −0.60 (0.17) [−0.93, −0.26] | −0.00 (0.19) [−0.37, 0.37] |
| Fear expression | −0.68 (0.13) [−0.94, −0.43] | −0.20 (0.12) [−0.43, 0.04] | – | 0.26 (0.12) [0.03, 0.49] | −0.55 (0.17) [−0.89, −0.21] | −0.40 (0.13) [−0.66, −0.15] |
| Sad expression | −0.86 (0.11) [−1.09, −0.64] | −0.29 (0.12) [−0.53, −0.06] | −0.30 (0.18) [−0.65, 0.06] | – | −0.74 (0.14) [−1.02, −0.46] | −0.14 (0.19) [−0.51, 0.24] |
| Surprise expression | −0.27† (0.14) [−0.55, 0.01] | 0.24 (0.12) [0.02, 0.47] | 0.38† (0.20) [−0.01, 0.76] | 0.49 (0.16) [0.18, 0.80] | – | 0.26 (0.24) [−0.20, 0.73] |
| Disgust expression | −0.69 (0.12) [−0.93, −0.46] | −0.29 (0.11) [−0.51, −0.08] | −0.19 (0.16) [−0.50, 0.12] | 0.10 (0.13) [−0.14, 0.35] | −0.55 (0.13) [−0.83, −0.27] | – |
| Random effects: Between (τ00) | 0.01† (0.01) [−0.00, 0.03] | 0.01 (0.02) [−0.04, 0.06] | 0.02 (0.02) [−0.01, 0.05] | 0.03 (0.02) [−0.02, 0.07] | 0.01 (0.02) [−0.02, 0.04] | 0.00 (0.00) [−0.01, 0.01] |
| Random effects: Within (σ2) | 0.38 (0.02) [0.34, 0.42] | 0.42 (0.04) [0.35, 0.49] | 0.38 (0.04) [0.30, 0.47] | 0.42 (0.03) [0.35, 0.48] | 0.33 (0.03) [0.27, 0.39] | 0.35 (0.04) [0.26, 0.43] |
Note:
Level 1 predictors were the six basic emotion categories within which FACS expression scores were calculated. Categories were dummy-coded such that the target emotion category (the emotion category that matched the emotional episode as reported by children; i.e., joy category for joy episode model) served as the reference group in all models. Each model contained five dummy-coded predictors corresponding to the non-target emotion categories (γ01, γ02, γ03, γ04, and γ05, respectively). Significant coefficients (p < .05) noted in bold. Marginally significant coefficients (p < .10) noted by †.
Table 6.
Unstandardized Coefficients, Standard Errors, and 95% Confidence Intervals of Intrasituational Specificity Multilevel Models for Observer Episodes
Outcome: FACS-based expression (FBE) score (β0). Cell entries are B (SE) [95% CI].

| Fixed Effects | Joy Episode | Anger Episode | Fear Episode2 | Sad Episode | Surprise Episode |
| --- | --- | --- | --- | --- | --- |
| Intercept γ00 | 1.96 (0.04) [1.89, 2.04] | 0.84 (0.08) [0.68, 1.00] | 0.81 (0.23) [0.36, 1.26] | 0.49 (0.17) [0.15, 0.82] | 0.88 (0.10) [0.68, 1.08] |
| Emotion1 γ01–γ05 | | | | | |
| Joy | – | −0.46 (0.13) [−0.71, −0.21] | −0.50 (0.39) [−1.27, 0.27] | −0.22 (0.20) [−0.61, 0.18] | −0.28† (0.15) [−0.57, 0.01] |
| Anger | −1.73 (0.08) [−1.88, −1.59] | – | −0.40 (0.33) [−1.05, 0.25] | −0.26 (0.17) [−0.60, 0.08] | −0.45 (0.12) [−0.69, −0.21] |
| Fear | −1.59 (0.15) [−1.88, −1.31] | −0.35 (0.11) [−0.56, −0.14] | – | −0.26 (0.18) [−0.62, 0.10] | −0.55 (0.14) [−0.81, −0.28] |
| Sad | −1.87 (0.05) [−1.97, −1.78] | −0.37 (0.14) [−0.64, −0.10] | −0.50† (0.26) [−1.00, 0.00] | – | −0.76 (0.10) [−0.94, −0.57] |
| Surprise | −1.17 (0.14) [−1.45, −0.89] | 0.10 (0.13) [−0.15, 0.35] | 0.00 (0.32) [−0.62, 0.62] | 0.17 (0.19) [−0.19, 0.54] | – |
| Disgust | −1.65 (0.08) [−1.80, −1.50] | −0.42 (0.09) [−0.59, −0.24] | −0.50 (0.37) [−1.22, 0.22] | −0.22 (0.22) [−0.65, 0.22] | −0.57 (0.10) [−0.77, −0.38] |
| Random Effects | | | | | |
| Between (τ00) | 0.02 (0.01) [0.00, 0.03] | 0.02 (0.02) [−0.02, 0.05] | 0.01 (0.02) [−0.03, 0.04] | 0.01 (0.02) [−0.04, 0.06] | 0.01† (0.01) [−0.00, 0.03] |
| Within (σ2) | 0.21 (0.02) [0.16, 0.25] | 0.42 (0.03) [0.37, 0.48] | 0.46 (0.09) [0.29, 0.63] | 0.31 (0.05) [0.22, 0.40] | 0.38 (0.03) [0.31, 0.44] |
Note:
Level 1 predictors were the six basic emotion categories within which FACS expression scores were calculated. Categories were dummy-coded such that the target emotion category (the emotion category that matched the emotional episode as identified by observers; i.e., joy category for joy episode model) served as the reference group in all models. Each model contained five dummy-coded predictors corresponding to the non-target emotion categories (γ01, γ02, γ03, γ04, and γ05, respectively).
Results presented here may reflect a misidentified model, as more parameters were estimated than there were Level 2 units. A model with a single dichotomous fear expression predictor mirrored these results (B = 0.38, p = .199). Significant coefficients (p < .05) noted in bold. Marginally significant coefficients (p < .10) noted by †.
In each model, binary dummy-coded episode variables were used to test whether the FBEs of a given emotion category varied significantly from the FBEs of the other emotion categories in target emotional episodes as identified by the children (and then secondly, as identified by observers). The discrete emotion matching the target emotional episode served as the referent group in each model. For example, we tested whether children produced FACS-based joy facial expressions more than other FACS-based emotional facial expressions in episodes identified as joy by children themselves (and secondly, in episodes identified as joy by observers); in this example, joy expressions served as the referent group. Five binary dummy-coded independent variables were then created for anger expressions, fear expressions, sad expressions, surprise expressions, and disgust expressions. Below, we illustrate the basic equations for the observer-identified joy episode model; similarly constructed equations were created for the intrasituational specificity models using the child self-reports:
Level 1: FBE Scoreij = β0ij + β1ij(Anger Expression Dummy Code) + β2ij(Fear Expression Dummy Code) + β3ij(Sad Expression Dummy Code) + β4ij(Surprise Expression Dummy Code) + β5ij(Disgust Expression Dummy Code) + rij
- Level 2: β0i = γ00 + u0i
- β1i = γ10
- β2i = γ20
- β3i = γ30
- β4i = γ40
- β5i = γ50
In Level 1, the intercept, β0ij, is defined as the expected FBE score for child i for joy episode j. The slopes, β1, β2, β3, β4, and β5, reflect the expected differences in scores between FBEs of anger (β1), fear (β2), sad (β3), surprise (β4), and disgust (β5) and FACS-based joy expressions in observer-identified joy episodes. The error term, rij, represents a unique effect associated with child i (i.e., how much that individual varies across observer-identified joy episodes). The individual intercept (β0i) and slopes (β1, β2, β3, β4, β5) become the outcome variables in the Level 2 equations, where γ00 represents the sample-average FACS-based joy expression score for observer-identified joy episodes (i.e., when all five expression dummy codes = 0), and γ10 through γ50 represent the average differences in FBE scores between anger, fear, sad, surprise, and disgust expressions, respectively, and joy expressions. Because joy expressions served as the referent, negative γ coefficients indicate that children produced greater FACS-based joy expressions than the comparison emotional expressions in observer-identified joy episodes (i.e., supporting intrasituational specificity), whereas positive γ coefficients indicate that children produced fewer FACS-based joy expressions than the comparison expressions (i.e., lack of intrasituational specificity). The extent to which children vary from the sample average in their FBE scores for observer-identified joy episodes is represented by u0i. The remaining intrasituational specificity models (for episodes identified by observer judgments and by children's self-reports) consisted of similar equations: the six expression categories were dummy coded into five discrete emotion expression variables and entered as binary predictors at Level 1, and in each model the referent expression matched the target emotion episode (e.g., anger expressions in anger episodes). Additional equations are omitted here to avoid redundancy.
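As a rough illustration of the model just described, the sketch below fits a random-intercept linear mixed model with the six expression categories treatment-coded so that the target (joy) expression is the referent. This is not the authors' implementation (their models were presumably fit in dedicated multilevel modeling software); the data are simulated purely for illustration.

```python
# Hedged sketch of an intrasituational model for joy episodes: FBE score regressed on
# dummy-coded expression category (joy as referent) with a random intercept per child.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
categories = ["joy", "anger", "fear", "sad", "surprise", "disgust"]

# Simulate 40 children x 3 joy episodes x 6 expression categories (values are arbitrary).
rows = []
for child in range(40):
    child_effect = rng.normal(0, 0.1)            # child-level deviation (u_0i analogue)
    for episode in range(3):
        for cat in categories:
            mean = 1.9 if cat == "joy" else 0.3  # joy expressions dominate joy episodes
            rows.append({"child_id": child,
                         "expression": cat,
                         "fbe_score": np.clip(mean + child_effect + rng.normal(0, 0.4), 0, 2)})
df = pd.DataFrame(rows)

# Joy as the reference category: the five dummy coefficients play the role of gamma_10..gamma_50.
model = smf.mixedlm("fbe_score ~ C(expression, Treatment(reference='joy'))",
                    data=df, groups=df["child_id"])
result = model.fit(reml=True)
print(result.summary())
# Negative dummy coefficients mean that category's expressions scored lower than joy
# expressions within joy episodes, the pattern interpreted as intrasituational specificity.
```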
Intrasituational specificity results for child self-report episodes
Tables 5 and 6 present the intrasituational specificity results for child-reported episodes and observer-identified episodes, respectively. With regard to child-reported episodes, intrasituational specificity was supported for joy expressions: For episodes where children said they felt joy, children’s FACS-based joy expressions were significantly greater than FBEs of anger (β = −.39, p < .001), fear (β = −.37, p < .001), sadness (β = −.47, p < .001), and disgust (β = −.39, p < .001), and marginally greater than FACS-based surprise expressions (β = −.15, p = .058). This model explained 19.2% of the within-child variance in children’s facial expressions in child-reported joy episodes.
There was less—and sometimes conflicting—support for intrasituational specificity for the other emotions. For example, children’s FACS-based anger expressions were significantly greater than FACS-based sad expressions (β = −.16, p = .015) and disgust expressions (β = −.16, p = .006) in child-reported anger episodes, but were significantly lower than FACS-based surprise expressions (β = .13, p = .034) in child-reported anger episodes. Moreover, children’s FACS-based anger expressions did not vary significantly from FACS-based joy expressions (β = .04, p = .754) and fear expressions (β = −.11, p = .113) in episodes identified by children as anger. This model explained 8.6% of the within-child variance in children’s facial expressions in child-reported anger episodes. For child-reported fear episodes, children’s FACS-based fear expressions did not significantly differ from children’s FACS-based joy expressions (β = .00, p = 1.00), anger expressions (β = .05, p = .729), sad expressions (β = −.17, p = .086), surprise expressions (β = .22, p = .063), and disgust expressions (β = −.11, p = .220).
Contrary to our expectations, for child-reported sad episodes, children’s FACS-based sadness expressions were significantly lower than FACS-based anger expressions (β = .17, p = .043), fear expressions (β = .146, p = .025), and surprise expressions (β = .27, p = .002), and did not vary significantly from FACS-based joy expressions (β = .17, p = .119) or disgust expressions (β = .06, p = .410). This model explained 5.6% of the within-child variance in children’s facial expressions in child-reported sad episodes.
For child-reported surprise episodes, children’s FACS-based surprise expressions were significantly greater than FACS-based joy expressions (β = −.26, p = .021), anger expressions (β = −.36, p < .001), fear expressions (β = −.33, p = .001), sadness expressions (β = −.45, p < .001), and disgust expressions (β = −.33, p < .001). This model explained 14.1% of the within-child variance in children’s facial expressions in child-reported surprise episodes. Finally, for child-reported disgust episodes, children’s FACS-based disgust expressions were significantly greater than FACS-based fear expressions (β = −.24, p < .001) but did not vary significantly from FBEs of joy (β = .20, p = .3529), anger (β = −.00, p = .984), sadness (β = −.08, p = .453), or surprise (β = .15, p = .304). This model explained 14.8% of the within-child variance in children’s facial expressions in child-reported disgust episodes.
Intrasituational specificity results for observer episodes
As might be expected, results supported intrasituational specificity for children’s joy facial expressions in episodes identified as joy by observers: Children’s FACS-based joy expressions were significantly greater than FBEs of anger (β = −.83, p < .001), fear (β = −.76, p < .001), sadness (β = −.89, p < .001), surprise (β = −.56, p < .001), and disgust (β = −.79, p < .001), mirroring the pattern found for child-reported joy episodes. This model explained 66.0% of the within-child variance in children’s facial expressions in observer-identified joy episodes. However, there was generally less support for intrasituational specificity for all other emotions in observer episodes.
For observer-rated anger episodes, children’s FACS-based anger expressions were significantly greater than FBEs of joy (β = −.23, p = .001), fear (β = −.17, p = .003), sadness (β = −.18, p = .005), and disgust (β = −.21, p < .001) but did not vary significantly from FACS-based surprise expressions (β = .09, p = .201). This model explained 9.9% of the within-child variance in children’s facial expressions in observer-rated anger episodes. Given the number of predictors and the fact that only 10 episodes were rated by observers as displaying fear (see Table 2), we avoided model misidentification by testing whether children’s FACS-based fear expressions were significantly greater than all other expressions in episodes identified by observers as displaying fear using a single Level 1 dichotomous predictor (where fear expressions = 1, else = 0). Contrary to our expectations, children’s FACS-based fear expressions did not vary significantly from other FBEs for observer-rated fear episodes (β = .20, p = .206). The model explained 3.9% of the within-child variance in children’s facial expressions in observer-rated fear episodes. Also contrary to expectations, children’s FACS-based sadness expressions did not vary significantly from FBEs of joy (β = −.14, p = .271), anger (β = −.17, p = .118), fear (β = −.17, p = .144), surprise (β = .11, p = .357), or disgust (β = −.14, p = .320) for observer-rated sad episodes. Finally, for observer-rated surprise episodes, children’s FACS-based surprise expressions were significantly greater than FBEs of anger (β = −.25, p < .001), fear (β = −.31, p < .001), sadness (β = −.43, p < .001), and disgust (β = −.32, p < .001) and were marginally greater than FACS-based joy expressions (β = −.16, p = .057). This model explained 13.3% of the within-child variance in children’s facial expressions in observer-rated surprise episodes.
Overall, there was mixed support for intrasituational specificity in children’s facial expressions. Intrasituational specificity was highest for FACS-based joy expressions in joy episodes (identified by both observers and children) and FACS-based surprise expressions in surprise episodes (identified by both observers and children). However, inverse findings emerged for child-identified episodes where intrasituational specificity was lower than expected (i.e., for FACS-based anger and sad expressions), suggesting that intrasituational specificity may not be well supported in negative emotional episodes identified by children. 1
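The percentages of within-child variance explained reported in this section appear to follow the standard proportional-reduction-in-residual-variance index for multilevel models (cf. Raudenbush & Bryk, 2002). A minimal sketch of that computation, using hypothetical variance values rather than the study's null-model estimates:

```python
def within_child_variance_explained(sigma2_null: float, sigma2_model: float) -> float:
    """Proportional reduction in Level-1 (within-child) residual variance.

    sigma2_null  -- residual variance from an intercept-only (null) model
    sigma2_model -- residual variance from the model with expression predictors
    """
    return (sigma2_null - sigma2_model) / sigma2_null

# Hypothetical values only (the null-model variances are not reported in the tables).
print(f"{within_child_variance_explained(0.50, 0.40):.1%}")  # -> 20.0%
```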
Logistic Multilevel Models Predicting Child Reports from Observer Judgments
The intersituational specificity and intrasituational specificity analyses provided insight into the extent to which children’s facial expressions mapped onto their own and observers’ emotion judgments. However, it is also important to examine the extent to which children’s self-reports cohere with observers’ reports as a way to determine whether effective emotion communication occurred overall and via multiple channels (i.e., face, voice, and body). As shown in Table 7 (bolded values), observer emotion judgments matched children’s self-reported emotions in 40% of episodes where a discrete emotion was reported, suggesting some coherence between children’s reports and observers’ judgments. We conducted five logistic regression models with non-randomly varying slopes to test the specific within-child associations between observers’ judgments of children’s emotions and children’s ratings of their own emotions. In these models, children’s emotion ratings were predicted from observers’ emotion judgments. To illustrate, the equations for children’s joy ratings were as follows:
Level 1: Joy Child Self-Reportij = β0ij + β1ij(Joy Observer Judgment) + rij
- Level 2: β0i = γ00 + u0i
- β1i = γ10
Table 7.
Frequency Agreement between Child Self-Report and Observer Emotion Judgments
Columns indicate Child Self-Report1; rows indicate Observer Judgments2. Cell entries are episode counts.

| Observer Judgments2 | Joy | Anger | Fear | Sad | Surprise | Disgust |
| --- | --- | --- | --- | --- | --- | --- |
| Joy | **30** | 6 | 3 | 1 | 4 | 4 |
| Anger | 4 | **23** | 4 | 9 | 4 | 5 |
| Fear | 3 | 0 | **5** | 0 | 0 | 0 |
| Sad | 4 | 3 | 3 | **6** | 0 | 1 |
| Surprise | 26 | 2 | 8 | 6 | **4** | 1 |
Note:
Five episodes were not rated by children, resulting in 436 episodes that were rated by both children and observers. Only episodes that were rated or judged as displaying one of the six basic emotions are presented here.
No episodes were rated by observers as displaying disgust. Thus, frequencies are presented for joy, anger, fear, sad, and surprise only.
Bolded values indicate episode frequencies where observer judgments matched child self-reported emotion.
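As a simple check, the 40% agreement figure reported above can be recovered from the frequencies in Table 7 (matching cells divided by all episodes shown in the table); a minimal sketch:

```python
# Proportion of episodes in which observers' judged emotion matched the child's self-report.
import numpy as np

# Rows: observer judgments (joy, anger, fear, sad, surprise);
# columns: child self-report (joy, anger, fear, sad, surprise, disgust), from Table 7.
table7 = np.array([
    [30,  6, 3, 1, 4, 4],   # observer: joy
    [ 4, 23, 4, 9, 4, 5],   # observer: anger
    [ 3,  0, 5, 0, 0, 0],   # observer: fear
    [ 4,  3, 3, 6, 0, 1],   # observer: sad
    [26,  2, 8, 6, 4, 1],   # observer: surprise
])

matches = np.trace(table7[:, :5])            # diagonal of the 5x5 block (no observer disgust)
total = table7.sum()                          # all episodes shown in Table 7
print(f"agreement = {matches / total:.0%}")   # -> 40%
```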
In Level 1, the intercept, β0ij, is defined as the expected odds (modeled on the log-odds scale) that child i reports feeling joy in episode j. The observer slope, β1ij, reflects the expected difference in the odds that a child reported feeling joy between episodes where observers said children were feeling joy and episodes where observers said children were feeling something else. The error term, rij, represents a unique effect associated with child i (i.e., how much that individual varies across episodes). The individual intercept (β0i) and slope (β1) become the outcome variables in the Level 2 equations, where γ00 represents the sample-average odds that children reported feeling joy in episodes where observers said children were feeling something else (i.e., Joy Observer Judgment = 0), and γ10 represents the average difference in the odds that children reported feeling joy between joy and non-joy episodes as identified by observers. Positive γ coefficients indicate a positive within-child association between children’s self-reports and observers’ judgments, whereas negative γ coefficients indicate an inverse within-child association. The extent to which children vary from the sample average in the odds that they reported feeling joy is represented by u0i.
Associations between observer judgments and child reports
Mirroring the results presented in Table 7, children’s self-reported emotions were generally associated with observers’ judgments of children’s emotions. Specifically, the odds that children reported feeling joy were higher in episodes that observers judged as joy than in episodes observers judged as involving another emotion (OR = 3.71, 95% CI = 1.76–7.79, p = .001); observers’ judgments of children’s joy explained 6.6% of the variance in children’s self-reports of joy. Similarly, the odds that children reported feeling angry were higher when observers judged children as feeling angry than when observers judged them as feeling something else (OR = 6.92, 95% CI = 2.66–18.03, p < .001); this model explained 13.8% of the variance in children’s self-reports of anger. The odds that children reported feeling fearful were higher when observers judged children as fearful (OR = 14.12, 95% CI = 1.93–73.29, p = .008); observers’ judgments of children’s fear explained 3.0% of the variance in children’s self-reports of fear. The odds that children reported feeling sad were higher when observers judged children as sad (OR = 4.19, 95% CI = 1.09–16.08, p = .037); this model explained 2.2% of the variance in children’s self-reports of sadness. However, there was no significant within-child association between children’s self-reported surprise and observers’ judgments of children’s surprise (OR = 0.36, 95% CI = 0.11–1.14, p = .083). Because no episodes were judged by observers as displaying disgust, we could not conduct any models predicting children’s self-reported disgust. Taken together, these results indicate some coherence between children’s self-reported emotions and observers’ judgments of children’s joy, anger, fear, and sadness, but not surprise.
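The odds ratios and confidence intervals above are, as is conventional, obtained by exponentiating the logistic coefficients and their interval limits; the sketch below shows this back-transformation with hypothetical coefficient and standard-error values (not values taken from the study), assuming Wald-type limits.

```python
# Back-transforming a logistic multilevel coefficient (log-odds scale) to an odds ratio.
import math

def odds_ratio_with_ci(b: float, se: float, z: float = 1.96):
    """Return exp(b) and an approximate 95% Wald confidence interval."""
    return math.exp(b), (math.exp(b - z * se), math.exp(b + z * se))

or_, (lo, hi) = odds_ratio_with_ci(b=1.0, se=0.4)   # hypothetical gamma_10 and SE
print(f"OR = {or_:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")  # -> OR = 2.72, 95% CI [1.24, 5.95]
```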
Discussion
Our results suggest three important conclusions regarding third-grade children’s spontaneous facial expressions. First, children rarely produced prototypic facial expressions during an emotional interaction with their mothers (see Table 2). The low FACS-based expression scores across emotion categories (see Figures 1 and 2 and supplemental materials) indicate that episodes examined in the present study contained few fully prototypic emotional expressions (consisting of both upper and lower facial behaviors). Because children self-reported feeling no emotion in approximately 21% of episodes, one possibility is that the social interaction that we investigated—a conflict discussion between children and their mothers—was not intense enough to elicit strong sustained emotional responses from children. This possibility may hold true even when children did report experiencing some emotion, as that emotion may have been only mildly experienced. Indeed, children in our study reported feeling low levels of emotional intensity in nearly one-quarter of episodes. Stronger responses might have been observed with more extreme situations that elicit more intense experiences of emotion (Bonanno & Keltner, 2004; Mauss, Levenson, McCarter, Wilhelm, & Gross, 2005). However, numerous studies with adults assessing intense emotional situations (e.g., bullfighting, viewing a horror film) also find little evidence of prototypic emotional facial expressions (Fernandez-Dols et al., 1997; Garcia-Higuera, Crivelli, & Fernandez-Dols, 2015; Reisenzein, Bördgen, Holtbernd, & Matz, 2006), contesting the assumption that prototypic facial behaviors are elicited more in highly intense emotional situations. And, in at least some (i.e., 14%) of the episodes we examined, children reported feeling a given emotion very strongly. We explored whether our analytic models would provide a different pattern of results for highly intense episodes by constraining our models to only those episodes where children reported feeling a given emotion strongly or very strongly; results from these exploratory analyses revealed the same overall patterns as did our original models. It remains unlikely, thus, that our results are due to a lack of intense emotional elicitation. Moreover, because most daily emotional situations may not elicit highly intense emotions, our results based on a more naturalistic social interaction may accurately represent emotion communication in daily life.
Second, theoretical assumptions regarding the innate correspondence between experienced and expressed emotion were generally unsupported. The automatic read-out and communicative value hypotheses were supported for positive emotion only: Children communicated their self-reported experience of joy through their facial expressions. Similarly, children’s facial expressions of joy were more likely to occur in episodes for which children were judged by observers as feeling joyful. However, a one-to-one correspondence between children’s emotion experience and facially-expressed emotion was weaker or nonsignificant for the negative emotions of anger, fear, sadness, and disgust. That is, children failed to selectively produce expressions of anger, fear, sadness, and disgust in episodes for which they said they felt (or were judged to be feeling) these emotions. Instead, children produced facial expressions of emotions they did not report (and were not judged to be feeling) as often as, or more often than, expressions corresponding to the (presumably primary) felt or judged emotion. Together, these results suggest that there is little one-to-one correspondence between feeling and showing negative emotions through the face in third-grade children. This pattern is consistent with other laboratory research (e.g., Reisenzein, Studtmann, & Horstmann, 2013) and with naturalistic observations of emotion expression in both adults and infants (e.g., Camras, 1992; Garcia et al., 2015). In adulthood, a lack of concordance between experienced and expressed emotion is thought to reflect adults’ motivations and abilities to manage their emotions, whereas in infancy, this lack of concordance is thought to reflect the immature organization of infants’ emotional response systems. Our study suggests a third possibility: that a relatively low degree of concordance between facial behaviors and emotion experiences may be a common phenomenon that is continuous throughout the lifespan. Longitudinal research that exposes the same individuals to different emotion-eliciting situations may prove especially useful in further testing this possibility.
Finally, the moderate coherence between children’s self-reported emotion and observers’ judgments of children’s emotion provides some evidence that children do effectively communicate their emotions, although facial expressions may not have been the primary channel through which they did so in our mother-child conversation paradigm. Indeed, emotions are communicated in real life through multiple channels, including the face, voice, and body, and children can utilize these channels to convey how they are feeling to others at an early age (Halberstadt et al., 2013). Consequently, both children and observers may have used this multimodal information when identifying and rating children’s emotions. In contrast, the anatomically-based facial codes examined in this study did not appear to strongly map onto what children said they were feeling or what observers said children were feeling, perhaps because children’s feelings were better communicated through their voices, body postures, and other contextual features embedded in the interaction (e.g., the salience of the topic discussed and whether a resolution was achieved or not). Because children and observers were presented with auditory cues, including those that may convey emotion like pitch, volume, and velocity (Juslin & Laukka, 2003; Juslin & Scherer, 2005), alongside visual cues, children and observers may have based their emotion ratings on input from these channels rather than facial behaviors. Future studies could disentangle these possibilities by manipulating the cues upon which child and observer ratings are based and examining whether such manipulations predict differences in coherence with coded facial behaviors.
Although we found a moderate degree of coherence overall between children’s emotion ratings and observers’ judgments, observers sometimes made inaccurate judgments of children’s emotions. For example, children’s self-reported sadness episodes were judged more often as anger than as sadness by observers, and children’s self-reported fear episodes were judged more often as surprise than as fear. These areas of confusion—in which observers judged children to be feeling a different emotion than what children said they were feeling—may highlight the equifinality in the communication of emotion in everyday life. In some situations, for example, children may be motivated to mask their external expressions in accordance with social norms, as might be the case if children appear fearful of their parents. In other cases, children may use their external expressions not only as a means to mask their true feelings but to facilitate an optimal outcome—as might be the case when children are hurt by their parents and react with anger or aggression. In particular, the coherence between child and observer ratings was non-significant for surprise; this finding fits with past research suggesting that surprise demonstrates little coherence among emotion systems (Reisenzein et al., 2006). Furthermore, partial surprise expressions (and in particular, raised brows) are often produced as “conversational signals” to convey emphasis or mark a question rather than to express feelings of surprise (Ekman, 1979). In addition, although there were 15 episodes in which children reported feeling disgust, it is notable that disgust was never identified by the observers. The recognition of disgust expressions develops later in childhood than the recognition of other emotional expressions (Widen & Russell, 2013); however, adults are able to recognize disgust expressions well above chance levels (e.g., Ekman, Sorenson, & Friesen, 1969). Our finding, thus, does not appear to reflect observers’ inability to identify disgust expressions but rather the potential absence of such expressions and/or greater salience of other emotional expressions.
Limitations and Future Directions
Our study demonstrated several notable strengths, including the application of two stringent but appropriate criteria (inter- and intrasituational specificity) to test the automatic read-out and communicative value hypotheses and the assessment of children’s spontaneous expression during an interpersonal interaction. However, we note three particular limitations. First, because children and observers could not select multiple emotions within a ten-second clip, it is possible that they recognized multiple emotions expressed in children’s faces, but were simply unable to represent this knowledge on our assessment. Our facial coding results, thus, may fail to match the self-report and observer judgments because the facial coding allowed for multiple emotions, whereas the other two measures did not. However, because episodes were selected on the basis of robust agreement among observers about the emotion being expressed, it is unlikely that these episodes would have been perceived as involving multiple emotions of equal intensity. Indeed, there were some such episodes in the larger dataset, for example, episodes for which half of the observers chose one emotion (e.g., anger) and the other half selected a different emotion (e.g., sadness). These episodes were excluded from the present study. Consequently, our selection criteria most likely resulted in a pool of emotional situations that were as homogeneous as could be found using the conflict discussion paradigm. Future studies could extend our research by asking children and observers to identify all emotions they believe were displayed in a given videoclip.
Second, it is possible that our third-grade children suppressed their facial expressions as a means to hide how they were feeling from their mothers and/or the experimenter (Halberstadt, Grotjohn, Johnson, Furth, & Greig, 1992). However, research has demonstrated that children are comfortable revealing their emotions to their mothers (Zeman & Garber, 1996). Moreover, in our study, children often indicated low-to-moderately intense feelings during the conflict discussion, and overall, the discussions appeared relatively relaxed, perhaps because children appreciated the opportunity to solve conflictual situations outside of the context in which they were elicited. Thus, it did not appear that children were motivated to suppress their emotional expressions. Furthermore, the children did appear to successfully communicate their felt emotions (both negative as well as positive), as indicated by the significant associations between the observers’ judgments and the children’s self-reports. Nevertheless, to reduce the concern about potential masking, future work could include younger children who are even less likely to mask their emotions, but are still able to recognize others’ expressions of joy, anger, sad, and surprise at fairly high rates (Widen & Russell, 2013).
Third, given our interests in testing intraindividual differences in children’s facial behaviors and emotions, we did not investigate individual difference factors. On the one hand, our data provided some support for excluding individual differences: Results from the null models indicated that only 4.44% of the variability in children’s facial behavior was between-children. On the other hand, the coherence between facial behavior and emotion experience may be influenced by factors that vary between children. As noted above, individual differences in masking or suppressing may alter facial expressions; for example, older children may demonstrate greater regulatory abilities, and as a result, may be more likely to suppress their strong negative emotions during a conflict discussion with their mothers. This suppression may result in less prototypic—and perhaps more partial—facial expressions. Similarly, to the extent that girls are differentially socialized toward greater emotional control (e.g., Banerjee & Eggleston, 1993), they may be less likely to convey their feelings through their facial behaviors. Although our analysis design did not allow for the simultaneous testing of such individual difference factors, future research could extend our models to consider the ways in which the coherence between experienced and expressed emotion is modulated by person-level factors.

Although not a limitation of the study, it must be noted that facial expressions do not serve only as a window for emotions. Indeed, there are a multiplicity of influences on facial expressions. In addition to the operation of social and cultural display rules, some theories of facial expression have proposed that non-emotional factors can generate the same facial muscle configurations as an emotional response (Camras, 2011). For example, physical exertion can cause weightlifters to produce a variety of negative facial expressions, several of which may be representative of emotions that the weightlifters are not likely to be experiencing (e.g., anger). Similar factors—as yet undetermined—may have operated during the mother-child conversations investigated in the present study. Moreover, some facial behaviors also serve as back-channel responses to indicate that one is listening during a social interaction (e.g., eyebrow raises). Observers (both in our study and in real life) may be implicitly aware that multiple factors underlie the generation of a facial expression and use this contextual information to determine if a particular configuration of facial muscle movements should be interpreted as an expression of emotion.
We should also note that our study included only one positive emotion (joy). Other positive emotions (e.g., awe, love, contentment) have been identified in studies in which participants are asked to pose a particular emotion (e.g., Campos, Shiota, Keltner, Gonzaga & Goetz, 2013). These expressions often (but not always) include a smile but may differ in the intensity of the smiles and/or in their accompanying facial or nonfacial movements (e.g., lip pressing, upright posture). Evaluating the validity of these expressions via studies that involve spontaneous (rather than posed) expressions of emotion and that utilize the criteria of inter- and intrasituational specificity would provide an important complement to the more typical studies of emotion recognition.
In conclusion, our study contributes to accumulating research with adults that fails to find evidence for the automatic read-out hypothesis with respect to facial expressions (e.g., Fernandez-Dols et al., 1997; Garcia-Higuera, Crivelli, & Fernandez-Dols, 2015; Reisenzein et al., 2006). More specifically, our findings suggest that negative emotion communication during social interactions—as indexed by agreement between child self-report and observer judgments—may rely less on prototypic emotional facial expressions than has theoretically been assumed. Nevertheless, successful emotion communication does sometimes take place—as evidenced when observers agreed about a child’s expressed emotion.
Collectively, these findings direct us to the question of what information people use when attributing emotions to others in real life. As noted above and elsewhere (e.g., Bachorowski, 1999; Boone & Cunningham, 2001; Juslin & Laukka, 2003; Juslin & Scherer, 2005), observers may use a variety of available cues including body postures and movements, gestures, verbalizations, and prosodic features of speech to judge how others are feeling. Similarly, children may rely on diverse cues to identify how they feel at a given moment in a given situation, including their facial, vocal, and bodily expressions but also their interoception (how the body feels internally) and motivations (such as their adherence to culturally-, socially-, and personally-derived display rules) as well as aspects of the situation itself. All of these factors interweave in complex ways during real-life emotion communication, such that the communicative value of a given cue for a given emotion (to others and to oneself) can only be determined through consideration of these other factors. This potential for equifinality in the communication of emotion, thus, warrants greater focus in the emotion literature.
Supplementary Material
Footnotes
Following a reviewer suggestion, we replicated the child self-report inter- and intrasituational specificity models with a subset of highly intense episodes (i.e., episodes where children reported feeling a given emotion strongly or very strongly) to examine whether specificity would be greater in highly intense situations. Results revealed similar patterns to those reported in text. Because the constrained models included fewer episodes, thus limiting power, we report and interpret findings from the original, unconstrained models.
Contributor Information
Vanessa L. Castro, Northeastern University
Linda A. Camras, DePaul University
Amy G. Halberstadt, North Carolina State University
Michael Shuster, DePaul University.
References
- Bachorowski J. Vocal expression and perception of emotion. Current Directions in Psychological Science. 1999;8:53–57. doi: 10.1111/1467-8721.00013.
- Banerjee M, Eggleston R. Preschoolers’ and parents’ understanding of emotion regulation. Paper presented at the meetings of the Society for Research in Child Development; New Orleans, LA; March 1993.
- Barrett KC, Campos JJ. Perspectives on emotional development II: A functionalist approach to emotions. In: Osofsky JD, editor. Handbook of infant development. 2nd ed. Oxford, England: John Wiley & Sons; 1987. pp. 555–578.
- Bonanno GA, Keltner D. Facial expressions of emotion and the course of conjugal bereavement. Journal of Abnormal Psychology. 1997;106:126–137. doi: 10.1037/0021-843X.106.1.126.
- Bonanno GA, Keltner D. Brief report: The coherence of emotion systems: Comparing ‘on-line’ measures of appraisal and facial expressions, and self-report. Cognition and Emotion. 2004;18:431–444. doi: 10.1080/02699930341000149.
- Boone RT, Cunningham JG. Children’s expression of emotional meaning in music through expressive body movement. Journal of Nonverbal Behavior. 2001;25:21–41. doi: 10.1023/A:1006733123708.
- Buck R. Social and emotional functions in facial expression and communication: The readout hypothesis. Biological Psychology. 1994;38:95–115. doi: 10.1016/0301-0511(94)90032-9.
- Campos B, Shiota M, Keltner D, Gonzaga G, Goetz J. What is shared, what is different? Core relational themes and expressive displays of eight positive emotions. Cognition and Emotion. 2013;27:37–52. doi: 10.1080/02699931.2012.683852.
- Campos JJ, Barrett KC. Toward a new understanding of emotions and their development. In: Izard CE, Kagan J, Zajonc RB, editors. Emotions, cognition, and behavior. Cambridge: Cambridge University Press; 1984. pp. 229–263.
- Campos JJ, Mumme DL, Kermoian R, Campos RG. A functionalist perspective on the nature of emotion. Monographs of the Society for Research in Child Development. 1994;59:284–303. doi: 10.2307/1166150.
- Camras LA. Expressive development and basic emotion. Cognition and Emotion. 1992;6:269–283.
- Camras LA. Differentiation, dynamical integration and functional emotional development. Emotion Review. 2011;3:138–146. doi: 10.1177/1754073910387944.
- Camras LA, Chen Y, Bakeman R, Norris K, Cain T. Culture, ethnicity, and children’s facial expressions: A study of European American, Mainland Chinese, Chinese American, and Adopted Chinese girls. Emotion. 2006;6:103–114. doi: 10.1037/1528-3542.6.1.103.
- Carroll JM, Russell JA. Do facial expressions signal specific emotions? Judging emotion from the face in context. Journal of Personality and Social Psychology. 1996;70:205–218. doi: 10.1037/0022-3514.70.2.205.
- Carroll JM, Russell JA. Facial expressions in Hollywood’s portrayal of emotion. Journal of Personality and Social Psychology. 1997;72:164–176. doi: 10.1037/0022-3514.72.1.164.
- Casey RJ. Children’s emotional experience: Relations among expression, self-report, and understanding. Developmental Psychology. 1993;29:119–129. doi: 10.1037/0012-1649.29.1.119.
- Castro VL, Halberstadt AG, Garrett-Peters P. A three-factor structure of emotion understanding in third-grade children. Social Development. 2016;25:602–622. doi: 10.1111/sode.12162.
- Castro VL, Halberstadt AG, Lozada FT, Craig AB. Parents’ emotion-related beliefs, behaviours, and skills predict children’s recognition of emotion. Infant and Child Development. 2015;24:1–22. doi: 10.1002/icd.1868.
- Denham SA, Blair KA, DeMulder E, Levitas J, Sawyer K, Auerbach-Major S, Queenan P. Preschool emotional competence: Pathway to social competence. Child Development. 2003;74:238–256. doi: 10.1111/1467-8624.00533.
- Dunsmore JC, Halberstadt AG. How does family emotional expressiveness affect children’s schemas? In: Barrett KC, editor. New Directions in Child Development: The communication of emotion: Current research from diverse perspectives. Vol. 77. 1997. pp. 45–68.
- Eisenberg N, Cumberland A, Spinrad TL. Parental socialization of emotion. Psychological Inquiry. 1998;9:241–273. doi: 10.1207/s15327965pli0904_1.
- Eisenberg N, Morris AS. Children’s emotion-related regulation. In: Kail RV, editor. Advances in child development and behavior. San Diego, CA, US: Academic Press; 2002. pp. 189–229.
- Ekman P. Universals and cultural differences in facial expressions of emotion. Nebraska Symposium on Motivation. 1972;19:207–282.
- Ekman P. About brows: Emotional and conversational signals. In: von Cranach M, Foppa K, Lepenies W, Ploog D, editors. Human Ethology. Cambridge University Press; 1979.
- Ekman P, Cordaro D. What is meant by calling emotions basic? Emotion Review. 2011;3:364–370. doi: 10.1177/1754073911410740.
- Ekman P, Friesen WV. Nonverbal leakage and clues to deception. Psychiatry: Journal for the Study of Interpersonal Processes. 1969;32:88–106. doi: 10.1080/00332747.1969.11023575.
- Ekman P, Friesen WV, Hager J. Facial action coding system. Salt Lake City, UT: Research Nexus; 2002.
- Ekman P, Sorenson ER, Friesen WV. Pan-cultural elements in facial displays of emotion. Science. 1969;164:86–88. doi: 10.1126/science.164.3875.86.
- Fernández-Dols JM, Carrera P, Crivelli C. Facial behavior while experiencing sexual excitement. Journal of Nonverbal Behavior. 2011;35:63–71. doi: 10.1007/s10919-010-0097-7.
- Fernandez-Dols JM, Ruiz-Belda MA. Expression of emotion versus expressions of emotions: Everyday conceptions of spontaneous facial behavior. In: Russell JA, Fernández-Dols J, Manstead AR, Wellenkamp JC, editors. Everyday conceptions of emotion: An introduction to the psychology, anthropology and linguistics of emotion. New York, NY, US: Kluwer Academic/Plenum Publishers; 1995. pp. 505–522.
- Fernández-Dols JM, Sánchez F, Carrera P, Ruiz-Belda M. Are spontaneous expressions and emotions linked? An experimental test of coherence. Journal of Nonverbal Behavior. 1997;21:163–177. doi: 10.1023/A:1024917530100.
- Fridlund AJ. Evolution and facial action in reflex, social motive, and paralanguage. Biological Psychology. 1991;32:3–100. doi: 10.1016/0301-0511(91)90003-Y.
- Fridlund AJ, Sabini JP, Hedlund LE, Schaut JA, Shenker JI, Knauer MJ. Audience effects on solitary faces during imagery: Displaying to the people in your head. Journal of Nonverbal Behavior. 1990;14:113–137. doi: 10.1007/BF01670438.
- García-Higuera JA, Crivelli C, Fernández-Dols JM. Facial expressions during an extremely intense emotional situation: Toreros’ lip funnel. Social Science Information. 2015;54:439–454. doi: 10.1177/0539018415596381.
- Gunlicks-Stoessel ML, Powers SI. Adolescents’ emotional experiences of mother-adolescent conflict predict internalizing and externalizing symptoms. Journal of Research on Adolescence. 2008;18:621–642. doi: 10.1111/j.1532-7795.2008.00574.x.
- Halberstadt AG, Denham SA, Dunsmore JC. Affective social competence. Social Development. 2001;10:79–119. doi: 10.1111/1467-9507.00150.
- Halberstadt AG, Grotjohn DK, Johnson CA, Furth MS, Greig MM. Children’s abilities and strategies in managing the facial display of affect. Journal of Nonverbal Behavior. 1992;16:215–230. doi: 10.1007/BF01462003.
- Halberstadt AG, Parker AE, Castro VL. Nonverbal communication: Developmental perspectives. In: Hall JA, Knapp ML, editors. Nonverbal communication. Boston, MA, US: De Gruyter Mouton; 2013. pp. 93–127.
- Hernández MM, Eisenberg N, Valiente C, VanSchyndel SK, Spinrad TL, Silva KM, Southworth J. Emotional expression in school context, social relationships, and academic adjustment in kindergarten. Emotion. 2016;16:553–566. doi: 10.1037/emo0000147.
- Hiatt SW, Campos JJ, Emde RN. Facial patterning and infant emotional expression: Happiness, surprise, and fear. Child Development. 1979;50:1020–1035. doi: 10.2307/1129328.
- Holodynski M. The miniaturization of expression in the development of emotional self-regulation. Developmental Psychology. 2004;40:16–28. doi: 10.1037/0012-1649.40.1.16.
- Izard CE. Emotions as motivations: An evolutionary-developmental perspective. Nebraska Symposium on Motivation. 1978;26:163–200.
- Izard CE. Emotions without feelings? Contemporary Psychology. 1984;29:457–459. doi: 10.1037/022933.
- Izard CE. ‘Cognition-emotion feedback and the self-organization of developmental paths’: Commentary. Human Development. 1995;38:103–112. doi: 10.1159/000278303.
- Izard CE. Forms and functions of emotions: Matters of emotion–cognition interactions. Emotion Review. 2011;3(4):371–378. doi: 10.1177/175407391141073.
- Izard CE, Abe J. Developmental changes in facial expressions of emotions in the Strange Situation during the second year of life. Emotion. 2004;4:251–265. doi: 10.1037/1528-3542.4.3.251.
- Izard CE, Dougherty L, Hembree E. A system for identifying affect expressions by holistic judgments (AFFEX). Newark, DE: Instructional Resources Center, University of Delaware; 1983.
- Izard CE, Malatesta CZ. Perspectives on emotional development I: Differential emotions theory of early emotional development. In: Osofsky JD, editor. Handbook of infant development. 2nd ed. Oxford, England: John Wiley & Sons; 1987. pp. 494–554.
- Juslin PN, Laukka P. Communication of emotions in vocal expression and music performance: Different channels, same code? Psychological Bulletin. 2003;129:770–814. doi: 10.1037/0033-2909.129.5.770.
- Juslin PN, Scherer KR. Vocal expression of affect. In: Harrigan JA, Rosenthal R, Scherer KR, editors. The new handbook of methods in nonverbal behavior research. New York, NY, US: Oxford University Press; 2005. pp. 65–135.
- Kahn JH. Multilevel modeling: Overview and applications to research in counseling psychology. Journal of Counseling Psychology. 2011;58:257–271. doi: 10.1037/a0022680.
- Manstead A, Fischer AH. Social appraisal: The social world as object of and influence on appraisal processes. In: Scherer KR, Schorr A, Johnstone T, editors. Appraisal processes in emotion: Theory, methods, research. Oxford, United Kingdom: Oxford University Press; 2001. pp. 221–232.
- Matsumoto D, Keltner D, Shiota MN, O’Sullivan M, Frank M. Facial expressions of emotion. In: Lewis M, Haviland-Jones JM, Barrett LFM, editors. Handbook of emotions. 3rd ed. New York, NY, US: Guilford Press; 2008. pp. 211–234.
- Matsumoto D, Kupperbusch C. Idiocentric and allocentric differences in emotional expression, experience, and the coherence between expression and experience. Asian Journal of Social Psychology. 2001;4:113–131. doi: 10.1111/j.1467-839X.2001.00080.x.
- Mauss IB, Levenson RW, McCarter L, Wilhelm FH, Gross JJ. The tie that binds? Coherence among emotion experience, behavior, and physiology. Emotion. 2005;5:175–190. doi: 10.1037/1528-3542.5.2.175.
- Morris AS, Silk JS, Steinberg L, Myers SS, Robinson LR. The role of the family context in the development of emotion regulation. Social Development. 2007;16:361–388. doi: 10.1111/j.1467-9507.2007.00389.x.
- Muthén LK, Muthén BO. Mplus user’s guide. 7th ed. Los Angeles, CA: Muthén & Muthén; 1998–2015.
- Noller P. Using standard content methodology to assess nonverbal sensitivity in dyads. In: Hall JA, Bernieri FJ, editors. Interpersonal sensitivity: Theory and measurement. Mahwah, NJ, US: Lawrence Erlbaum Associates Publishers; 2001. pp. 243–264.
- Parkinson B. Do facial movements express emotions or communicate motives? Personality and Social Psychology Review. 2005;9:278–311. doi: 10.1207/s15327957pspr0904_1.
- Pons F, Harris PL, de Rosnay M. Emotion comprehension between 3 and 11 years: Developmental periods and hierarchical organization. European Journal of Developmental Psychology. 2004;1:127–152. doi: 10.1080/17405620344000022.
- Raudenbush SW, Bryk AS. Hierarchical linear models: Applications and data analysis methods. Vol. 1. Thousand Oaks, CA: Sage Publications; 2002.
- Reisenzein R, Bördgen S, Holtbernd T, Matz D. Evidence for strong dissociation between emotion and facial displays: The case of surprise. Journal of Personality and Social Psychology. 2006;91:295–315. doi: 10.1037/0022-3514.91.2.295.
- Reisenzein R, Studtmann M, Horstmann G. Coherence between emotion and facial expression: Evidence from laboratory experiments. Emotion Review. 2013;5:16–23. doi: 10.1177/1754073912457228.
- Rogers M, Halberstadt AG, Castro VL, MacCormack JK, Garrett-Peters P. Mothers’ emotion regulation skills and beliefs about children’s emotions predict children’s emotion regulation skills. Emotion. 2016;16:280–291. doi: 10.1037/emo0000142.
- Rosenberg EL, Ekman P. Coherence between expressive and experiential systems in emotion. Cognition and Emotion. 1994;8:201–229. doi: 10.1080/02699939408408938.
- Ruiz-Belda M, Fernández-Dols J, Carrera P, Barchard K. Spontaneous facial expressions of happy bowlers and soccer fans. Cognition and Emotion. 2003;17:315–326. doi: 10.1080/02699930302288.
- Russell JA. Reading emotions from and into faces: Resurrecting a dimensional-contextual perspective. In: Russell JA, Fernandez-Dols JM, editors. The psychology of facial expression. New York, NY: Cambridge University Press; 1997. pp. 295–320.
- Saarni C. The development of emotional competence. New York, NY, US: Guilford Press; 1999.
- Sallquist J, DiDonato MD, Hanish LD, Martin CL, Fabes RA. The importance of mutual positive expressivity in social adjustment: Understanding the role of peers and gender. Emotion. 2012;12:304–313. doi: 10.1037/a0025238.
- Sroufe LA. Emotional development. New York, NY: Cambridge University Press; 1996.
- Widen SC. Children’s interpretation of facial expressions: The long path from valence-based to specific discrete categories. Emotion Review. 2013;5:72–77. doi: 10.1177/1754073912451492.
- Widen SC, Russell JA. Children’s recognition of disgust in others. Psychological Bulletin. 2013;139:271–299. doi: 10.1037/a0031640.
- Zeman J, Garber J. Display rules for anger, sadness, and pain: It depends on who is watching. Child Development. 1996;67:957–973. doi: 10.2307/1131873.