Abstract
The current study examined perceptual differences between adults and youth in perceiving ambiguous facial expressions. We estimated individuals’ internal representations of facial expressions and compared them between age groups (adolescents: N=108, Mage=13.04 years, 43.52% female; adults: N=81, Mage=31.54, 65.43% female). We found that adolescents’ perceptual representation of facial emotion is broader than adults’, such that adolescents experience more difficulty identifying subtle configurational differences in facial expressions. At the neural level, perceptual uncertainty in face-selective regions (e.g., fusiform face area, occipital face area) was significantly higher for adolescents than for adults, suggesting that adolescents’ brains represent lower-intensity emotional faces more similarly than adults’ do. Our results provide behavioral and neural evidence for age-related differences in the perceptual representation of emotional faces.
Keywords: face emotion perception, adolescents, uncertainty, MVPA, fMRI
Introduction
The ability to recognize and decode others’ facial expressions is an essential feature of social interaction (Adolphs, 2002). Emotion perception is incredibly complex, requiring the individual to both distinguish fine-grained differences in facial configuration and understand complicated, nuanced social context rules (Barrett, Lindquist, & Gendron, 2007). Although there is a robust connection between a confined set of prototypical facial configurations and emotional states (i.e., the “discrete emotions” perspective; Ekman, 1993), face emotion perception is not always determined by specific physical features of facial configurations, such that various external and internal factors can change an observer’s emotion perception even for the same facial configuration (e.g., Kim et al., 2004; Lee, Choi, & Cho, 2012). Furthermore, emotional expressions are often subtle, ambiguous, and uncertain in everyday social interactions (Fridlund, 2014). Such ambiguity poses particular challenges to adolescents as they learn to identify and appropriately respond to seemingly ambiguous emotional states. Indeed, incorporation of various social cues to interpret others’ emotional states develops in conjunction with improvements in youths’ perceptual abilities (Barrett et al., 2007). Therefore, facial affect perception can be challenging for youth (Gross & Ballif, 1991; McClure, 2000) as perceptual learning of emotions is still developing (Pollak, Messner, Kistler, & Cohn, 2009).
Although evidence to date indicates that adolescents’ perception of others’ affect differs from adults’ perceptions, studies largely utilize overt facial affect recognition tasks that are not designed to capture the oft-ambiguous nature of real-world situations (i.e., ambiguous expressiveness; e.g., Batty & Taylor, 2006; Thomas et al., 2001). Furthermore, much of the research base focuses on clinical populations (e.g., autism spectrum; Critchley et al., 2000), where affective processing is clearly sub-optimal. Studying the normative development of facial emotion perception is integral to improving our understanding of how affective processing normatively changes over the lifespan. In the only known study to date to examine developmental differences in ambiguous facial affect (Wiggins et al., 2015), adolescents recruited face-processing networks significantly less than adults when the emotional intensity of the face was unclear (i.e., ambiguous; e.g., 50% intensity of a fearful face), indicating adolescents’ perceptions of subtle facial expressions may be comparatively underdeveloped. This study suggests that activation in the ventral stream is a likely neural candidate reflecting the maturation of systems for perceiving facial affect.
Building upon this work (Wiggins et al., 2015), we sought to examine the internal representation of perceptual uncertainty for emotional faces in adolescents (N=108) and adults (N=81) by fitting behavioral and neural data to a psychophysics model (Fig 1A; Calder, Jenkins, Cassel, & Clifford, 2008; Clifford, Mareschal, Otsuka, & Watson, 2015; Lynn et al., 2016; Mareschal, Calder, Dadds, & Clifford, 2013; Wang et al., 2017). In the present study, we focused on neural pattern similarities between emotional faces as a multi-voxel pattern analysis (MVPA) approach, allowing us to fit neural patterns directly to the psychophysics model. To generate emotionally ambiguous facial expressions, we used happy and angry faces morphed with neutral faces, ranging from 15% to 75% intensity levels (Fig 1B). We hypothesized that adolescents would be less sensitive to ambiguous facial emotions. In other words, adolescents would be more likely than adults to perceive ambiguous facial expressions as non-emotional or “neutral,” thereby demonstrating broader representations of non-emotional faces.
Fig 1.

(A) The schematic psychophysical model, showing a perceptual representation for emotion perception and perceptual criteria between perceiving emotion (either happy or angry) and non-emotion (neutral). As the two perception change points (red and blue lines) move closer together, the uncertainty boundary shrinks and the observer applies keener criteria for perceiving emotionality in subtle facial expressions. (B) An example of the face stimuli used in the current study. (C) Neural pattern similarity estimation within the ROI as a function of emotion intensity. Using a neural pattern anchor averaged across the lowest emotion intensities for both happy and angry faces, we computed pattern similarities between the anchor and the pattern at each intensity using Pearson’s r (Fisher z-transformed), then fitted them to the emotion perception model. The 4×4 matrices are schematic illustrations of the pattern within the ROI mask. (D) Group activation map responding to all face stimuli versus baseline during the “Observe” round. The stimuli robustly activated regions along the ventral visual pathway. The bar plots show activation strength in those regions during the “Affect Label” round as a function of emotion intensity across participants. Note that there was no significant difference between happy and angry stimuli at corresponding intensity levels (e.g., 75% happy vs. 75% angry; all Ps>.09). (E) A representative subject’s fitted curves for behavioral and neural data, showing the perceptual boundary representation as a function of facial emotion intensity. The fitting values (R²) averaged 0.87 and 0.56 for behavioral and neural data, respectively.
Method and Analysis1
Participants.
An emotional labeling task was presented to 189 participants during an fMRI scan; 108 adolescents (Mage=13.04 years, SD=0.90, range:12–15, 43.52% female) and 81 adults (Mage=31.54, SD=12.47, range:19–54, 65.43% female) participated. The adult sample included younger adults (N=39, college students) as well as older adults, some of whom were the parent of an adolescent in the sample (N=33). Data from eight individuals were excluded due to motion (three adolescents; mean FD=1.10 mm, DVARS=51.52) and technical failure (four adolescents and one adult). The remaining participants for fMRI data analysis (N=181) did not have any motion issues (mean FD=0.11mm, DVARS=29.69; adolescents: FD=0.14mm, DVARS=30.27; adults: FD=0.08mm, DVARS=29.10). All participants provided informed consent and were remunerated for their participation. The study was approved by the Institutional Review Board (IRB) of University of Illinois at Urbana-Champaign (UIUC).
Task and stimuli.
Face stimuli consisted of angry, happy, and neutral expressions. To vary emotional intensity parametrically, we morphed happy and angry faces with neutral faces in 15% increments (i.e., 15%, 30%, 45%, 60%, and 75%, where the percentage indicates the emotional intensity [happy or angry] of each category). Eighty total stimuli comprised these emotion intensity categories (40 happy and 40 angry faces with intensity variations). Participants completed two different variants of the task: “Affect Label” and “Observe” rounds. During the “Affect Label” round, participants were instructed to match the facial emotion of the stimuli displayed with one of three labels (“Happy,” “Neutral,” and “Angry”), displayed across the bottom of the screen, using their index, middle, and ring fingers respectively. During the “Observe” rounds, participants were asked to press their thumb for each face instead of making an effort to label the emotion of the face. The “Observe” round was designed to serve as a main-task-independent localizer for face-selective voxels (Fig S1), with the assumption that it reflects simple face perception without explicitly recruiting affective resources (see RT results in the online supplement; Fig S2).
Data acquisition and preprocessing.
T1-MPRAGE and T2*-weighted echoplanar images (EPI) were collected using a 3T-Siemens Trio MRI scanner with a 32-channel matrix coil. Preprocessing was carried out using FSL 5.0.10 (https://fsl.fmrib.ox.ac.uk/fsl/fslwiki).
Analysis of behavioral response.
We defined perceptual uncertainty as the proportion of trials labeled as “neutral” as a function of facial emotion intensity (Fig 1A). More neutral judgements across face emotion intensities represented greater perceptual uncertainty about the facial emotion. To quantify this perceptual uncertainty, we computed the proportion of “neutral” responses (i.e., indicating no emotion perception for a given emotional intensity) at each intensity of the face stimuli, and fitted them to the psychophysics model using a Gaussian function representing the perceptual uncertainty boundary in sensory representation (Fig 1A; Clifford et al., 2015; Jun, Mareschal, Clifford, & Dadds, 2013; Mareschal et al., 2013):
p(neutral) = α · exp(−(x − μ)² / (2σ²))   (Equation 1)
where x is the face emotion intensity, α represents the peak amplitude of responses (i.e., the height of the curve’s peak), μ specifies the position of the center of the peak (i.e., the face emotion intensity at which faces were judged as neutral), and σ is the bandwidth (i.e., the standard deviation of the curve). The bandwidth parameter, σ, was used as the primary metric for the degree of perceptual uncertainty (Calder et al., 2008; Clifford et al., 2015; Mareschal et al., 2013), as wider curves (larger σ) indicate that participants gave more neutral responses across changes in emotional intensity. That is, participants were less perceptually sensitive to subtle emotional changes in the faces, and vice versa for narrower curves (smaller σ). The fitting values (R²) averaged 0.87 and 0.56 for behavioral and neural data, respectively.
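The fit of Equation 1 can be sketched as follows. The signed intensity axis, the response proportions, and the use of `scipy.optimize.curve_fit` are illustrative assumptions, not the study’s data or exact fitting routine:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, alpha, mu, sigma):
    """Equation 1: Gaussian perceptual uncertainty boundary."""
    return alpha * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

# Signed emotion intensity: negative = angry morphs, positive = happy morphs
intensity = np.array([-75, -60, -45, -30, -15, 15, 30, 45, 60, 75], dtype=float)
# Hypothetical proportions of "neutral" responses for one participant
p_neutral = np.array([0.02, 0.05, 0.20, 0.55, 0.90, 0.92, 0.60, 0.25, 0.06, 0.01])

# Fit alpha (peak height), mu (peak location), sigma (bandwidth)
params, _ = curve_fit(gaussian, intensity, p_neutral, p0=[1.0, 0.0, 30.0])
alpha, mu, sigma = params
sigma = abs(sigma)  # the Gaussian is symmetric in the sign of sigma
```

A larger fitted σ corresponds to a wider neutral-response curve, i.e., greater perceptual uncertainty for that participant.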
Analysis of neural pattern
To fit the neural data to the psychophysics model depicted in Fig 1A, we performed a neural pattern similarity analysis (e.g., Kriegeskorte, Mur, & Bandettini, 2008; Lee, Qu, & Telzer, 2017) by estimating single-trial activation patterns for each emotional intensity using the least-squares-single method (LSS; Mumford, Turner, Ashby, & Poldrack, 2012). We then extracted standardized voxel-wise pattern activity (i.e., z-maps) for each emotion intensity within the ROI in each individual’s native space, and computed similarity values (i.e., Fisher z-transformed Pearson correlation coefficients) between the pattern anchor (Wang et al., 2017) and the pattern vector at each emotional intensity (Fig 1C). The neural anchor was created by averaging the neural patterns for 15% angry and 15% happy faces; thus, the anchor pattern should show very high similarity with the neural patterns of both 15% happy and 15% angry faces. Finally, we fitted the pattern similarity metrics at each intensity to the psychophysical model. Higher pattern similarity to the anchor indicates that the neural encoding for a given face is more likely to be perceived as neutral.
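The anchor-based similarity computation can be sketched with synthetic patterns. The voxel count and random z-maps below are placeholders for the LSS-estimated single-trial patterns within the ROI:

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 500
intensities = [15, 30, 45, 60, 75]  # % emotion intensity

# Hypothetical voxel-wise z-map per (emotion, intensity) condition
patterns = {(emo, i): rng.standard_normal(n_voxels)
            for emo in ("happy", "angry") for i in intensities}

# Anchor: average of the lowest-intensity happy and angry patterns
anchor = (patterns[("happy", 15)] + patterns[("angry", 15)]) / 2

def fisher_z_similarity(a, b):
    """Pearson correlation, Fisher z-transformed."""
    r = np.corrcoef(a, b)[0, 1]
    return np.arctanh(r)

similarity = {key: fisher_z_similarity(anchor, pat)
              for key, pat in patterns.items()}
```

By construction, the 15% patterns correlate strongly with the anchor, while higher intensities (here, independent random patterns) do not; the resulting similarity-by-intensity values are what get fitted to Equation 1.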
ROI selection
For face-sensitive ROI selection, we performed a standard two-stage univariate GLM analysis on the “Observe” rounds as an orthogonal functional localizer (Poldrack, 2007). An individual-level GLM estimated brain activation for faces, regardless of their intensities, contrasted against baseline (e.g., Bishop, Aguirre, Nunez-Elizalde, & Toker, 2015; Thielscher & Pessoa, 2007), and then group-level random effects were estimated (cluster-corrected Z>2.3, p=0.05; one-tailed; FLAME1+2; Table S1). Finally, we selected voxels that fell within previously defined functional parcels for face-sensitive voxels (http://web.mit.edu/bcs/nklab/GSS.shtml; Julian, Fedorenko, Webster, & Kanwisher, 2012). No clear STS cluster activation was observed, which may be due to our approach (contrasting faces with baseline rather than with another stimulus category such as places); however, this does not imply that the STS is not a face-selective region. Our final ROI mask therefore included the FFA and OFA (k=2104 voxels; Fig 1D). Given previous findings that the amygdala also encodes emotion parametrically (Wang et al., 2017), we additionally selected voxels (k=432) within a bilateral amygdala atlas (Harvard-Oxford, 50% threshold).
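Intersecting the thresholded group map with the a priori parcels amounts to a voxelwise logical AND. A toy sketch on a synthetic volume (in practice the z-map and parcel would be NIfTI images on a common grid; the arrays and threshold placement below are stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (20, 20, 20)                       # toy volume grid
group_z = rng.normal(size=shape)           # hypothetical group-level z-statistics
parcel_mask = np.zeros(shape, dtype=bool)  # hypothetical face-selective parcel
parcel_mask[5:15, 5:15, 5:15] = True

# Keep voxels that both exceed the cluster-forming threshold (Z > 2.3)
# and fall within the a priori functional parcel
roi_mask = (group_z > 2.3) & parcel_mask
n_roi_voxels = int(roi_mask.sum())
```

The resulting boolean mask plays the role of the final ROI (e.g., the FFA/OFA mask, k=2104 voxels, in the actual analysis) within which voxel-wise patterns are extracted.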
Results2
Each participant’s neutral responses were fitted to the psychophysics model to estimate the uncertainty boundary (i.e., σ). An independent-samples t-test indicated that perceptual uncertainty levels were significantly higher for adolescents (M=45.82, SD=8.25, SE=0.82) than adults (M=43.37, SD=7.49, SE=0.83), t(177)=2.09, p=0.037, 95% CI=[0.14, 4.68], Cohen’s d=0.31 (Fig 2A). This indicates that adolescents’ face emotion perception is less sensitive to changes in expression intensity compared to adults’, and therefore adolescents are more likely to perceive subtle expressions as neutral or not indicative of increasing emotional intensity. In contrast, adults’ perceptual ability is more finely tuned, enabling them to recognize subtler expressions with only minor observed affective changes3.
Fig 2.

Averaged perceptual uncertainty parameter (σ) based on (A) behavioral response and (B) neural pattern similarity as a function of age. Error bars represent ± SEM. * denotes statistical significance at 95% CI level based on bootstrapping resampling (n=9999).
Consistent with the behavioral findings, an independent-samples t-test on the neural parameter indicated that perceptual uncertainty levels in face-selective regions (see ROI selection) were significantly higher for adolescents (M=56.74, SD=28.88, SE=2.87) than for adults (M=45.31, SD=19.68, SE=2.18), t(175)=3.18, p=0.002, 95% CI=[4.71, 18.61], Cohen’s d=0.46 (Fig 2B), suggesting that adolescents more similarly represent lower-intensity emotional faces than adults. In other words, subtle intensities of facial expression are less finely represented in adolescents at the neural level; adolescents therefore require more intense emotional facial expressions to perceive facial emotion, whereas adults perceived emotionality even from subtle facial expressions. The bandwidth parameter from the neural data showed a modest yet significant positive correlation with the bandwidth from the behavioral data across participants (p=0.022, 95% CI=[0.02, 0.32]). Additional correlation analyses conducted separately for each group, however, did not reveal significant relationships between the behavioral and neural parameters (for teens, p=.792; for adults, p=.139), implying no clear convergence between the behavioral and neural measures within each age group. Lastly, we estimated the bandwidth metric with the amygdala voxels identified from the same ROI contrast, but no age-related differences in the bandwidth parameter emerged, t(180)=1.46, p=.270, 95% CI=[0.56, 25.51].
Discussion
Youth have less experience with emotion than adults and can have difficulty recognizing and interpreting others’ facial affect, particularly when it is expressed in subtle or ambiguous ways. The current study was designed to provide a more nuanced analysis of the perceptual differences between adults and youth by comparing internal representations of emotional faces between the age groups. We provide evidence for age-related differences in the perceptual representation of emotional faces by fitting the behavioral and neural data to a psychophysics model of emotion perception.
Our work expands upon previous findings (Wiggins et al., 2015) that the ventral stream system may provide a neural index of the ability to perceive ambiguous facial expressions and of the maturation of fine-tuned internal perceptual representations for ambiguity in developing youth. More specifically, our results suggest that adolescents show less perceptual sensitivity in the ventral stream system to changes in facial expression, such that adolescents’ perceptual representation of neutral expressions is broader than adults’. In other words, adolescents have more uncertainty about emotion than adults, leading adolescents to be more likely to perceive subtle facial expressions of emotion as non-emotional, consistent with previous interpretations of the broader curve in the perception model (Calder et al., 2008; Clifford et al., 2015; Mareschal et al., 2013).
Our work provides support that adolescents perceive ambiguous facial affect as less emotionally salient than their adult counterparts. However, some limitations exist in our design. Given our recruitment of teens and adults specifically, we are not able to speak to how facial affect processing develops in early childhood, a critical developmental period for learning about affect (Sroufe, 1997). Additionally, given the cross-sectional design, we are unable to examine these changes longitudinally. Future work is necessary to study the progression of affect processing across development, as this will provide greater insight into how these processes are shaped normatively and how they may be impacted by life experiences. Another constraint on generalizability may be the lack of attention paid to how adolescents express emotions relative to adults (McLaughlin, Garrad, & Somerville, 2015). It may be that adolescents are generally less expressive, perhaps complicating the interpretations of the current study. Finally, we did not address individual differences, such as anxiety (e.g., Bishop et al., 2015) or physiological reactivity (e.g., McManis, Bradley, Berg, Cuthbert, & Lang, 2001), which may play an important modulatory role in affect processing. For example, social-emotional competency may moderate how well one perceives or attributes emotional states, particularly in subtle or ambiguous presentations (e.g., Mayer & Geher, 1996). Future examinations should test whether individual differences, such as arousal reactivity, moderate perceptual differences in developing populations, or whether the same individual differences that predict adult perception can be linked to adolescents’ affect perception. Lastly, we used relatively short ISIs between faces (range: 3.17–4.54 s, drawn from a Gaussian distribution), which may be suboptimal compared with a fully stimulus-spaced design with long SOAs (e.g., 12 s).
Thus, it is possible that the neural estimate for each trial was less specific and more influenced by neighboring trials, as model fits for the neural data were not as high as those for the behavioral data. Although we found consistent age-group differences across both neural and behavioral data, as hypothesized, future work with more optimal design parameters is necessary to increase the specificity of the neural estimates.
Extending previous work (Batty & Taylor, 2006; Gross & Ballif, 1991; McClure, 2000; Thomas et al., 2001; Wiggins et al., 2015), the present study adds to our knowledge about age-related differences in facial emotion perception. Our findings provide direct evidence that the internal perceptual criteria for representing others’ emotional expressions are still developing during adolescence. Compared with adults, adolescents exhibited a broader bandwidth for neutral face perception, indicating that they may be less sensitive to subtle features of emotional expression and more likely to perceive others’ subtle expressions as non-emotional or neutral.
Supplementary Material
Acknowledgments
This work was supported by the National Institutes of Health (1R01DA039923: Eva H. Telzer), National Science Foundation (BCS 1539651: Nancy McElwain; SES 1459719: Eva H. Telzer), and Jacobs Foundation (2014–1095 Young Scholar Grant: Eva H. Telzer). Michael T. Perino is now at the Department of Psychiatry, Washington University School of Medicine in St. Louis.
Footnotes
Please see the Online Supplement for more details.
We observed violations of the equal variance assumption (Levene’s test, all ps < 0.049), possibly due to the group size difference and/or higher variability in our adolescent sample. Accordingly, unless otherwise noted, we employed Welch’s t-test (adjusting degrees of freedom) for mean differences between groups, as well as non-parametric correlation coefficients (i.e., Spearman’s rho; Bishara & Hittner, 2012) between age and curve fit parameters, combined with bootstrap random sampling (n=9999; with replacement) at the 95% confidence level to reduce the possible impact of heteroscedasticity.
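The footnoted analysis strategy (Welch’s t-test plus a bootstrap CI for the group difference) can be sketched as follows. The simulated group data use the reported means and SDs purely for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Hypothetical bandwidth (sigma) estimates with unequal group sizes/variances
teens = rng.normal(45.8, 8.3, size=108)
adults = rng.normal(43.4, 7.5, size=81)

# Welch's t-test: does not assume equal variances across groups
t, p = stats.ttest_ind(teens, adults, equal_var=False)

# Bootstrap 95% CI for the mean difference (n=9999 resamples, with replacement)
boot = [rng.choice(teens, teens.size).mean() - rng.choice(adults, adults.size).mean()
        for _ in range(9999)]
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
```

The same bootstrap logic extends to the Spearman correlations between age and curve-fit parameters reported in the main text.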
Although our primary interest was the perceptual uncertainty level (i.e., curve bandwidth, σ), we additionally compared the peak amplitude (i.e., α) and its location (i.e., μ), and found no age-group differences (95% CI=[−1.57, 1.62] and 95% CI=[−0.02, 0.03], respectively), indicating that adolescents and adults showed a similar curve peak height and a similar face emotion intensity at which faces were judged as neutral or non-emotional. Therefore, we focused our remaining analyses on σ.
References
- Adolphs R (2002). Neural systems for recognizing emotion. Current Opinion in Neurobiology, 12(2), 169–177.
- Barrett LF, Lindquist KA, & Gendron M (2007). Language as context for the perception of emotion. Trends in Cognitive Sciences, 11(8), 327–332.
- Batty M, & Taylor MJ (2006). The development of emotional face processing during childhood. Developmental Science, 9(2), 207–220.
- Bishara AJ, & Hittner JB (2012). Testing the significance of a correlation with nonnormal data: comparison of Pearson, Spearman, transformation, and resampling approaches. Psychological Methods, 17(3), 399.
- Bishop SJ, Aguirre GK, Nunez-Elizalde AO, & Toker D (2015). Seeing the world through non rose-colored glasses: anxiety and the amygdala response to blended expressions. Frontiers in Human Neuroscience, 9.
- Calder AJ, Jenkins R, Cassel A, & Clifford CW (2008). Visual representation of eye gaze is coded by a nonopponent multichannel system. Journal of Experimental Psychology: General, 137(2), 244.
- Clifford C, Mareschal I, Otsuka Y, & Watson TL (2015). A Bayesian approach to person perception. Consciousness and Cognition, 36, 406–413.
- Critchley HD, Daly EM, Bullmore ET, Williams SC, Van Amelsvoort T, Robertson DM, . . . Howlin P (2000). The functional neuroanatomy of social behaviour: changes in cerebral blood flow when people with autistic disorder process facial expressions. Brain, 123(11), 2203–2212.
- Ekman P (1993). Facial expression and emotion. American Psychologist, 48(4), 384.
- Fridlund AJ (2014). Human facial expression: An evolutionary view. Academic Press.
- Gross AL, & Ballif B (1991). Children’s understanding of emotion from facial expressions and situations: A review. Developmental Review, 11(4), 368–398.
- Julian JB, Fedorenko E, Webster J, & Kanwisher N (2012). An algorithmic method for functionally defining regions of interest in the ventral visual pathway. NeuroImage, 60(4), 2357–2364.
- Jun YY, Mareschal I, Clifford CW, & Dadds MR (2013). Cone of direct gaze as a marker of social anxiety in males. Psychiatry Research, 210(1), 193–198.
- Kim H, Somerville LH, Johnstone T, Polis S, Alexander AL, Shin LM, & Whalen PJ (2004). Contextual modulation of amygdala responsivity to surprised faces. Journal of Cognitive Neuroscience, 16(10), 1730–1745.
- Kriegeskorte N, Mur M, & Bandettini P (2008). Representational similarity analysis: connecting the branches of systems neuroscience. Frontiers in Systems Neuroscience, 2, 1–28.
- Lee TH, Choi JS, & Cho YS (2012). Context modulation of facial emotion perception differed by individual difference. PLoS One, 7(3), e32987.
- Lee TH, Qu Y, & Telzer EH (2017). Love flows downstream: mothers’ and children’s neural representation similarity in perceiving distress of self and family. Social Cognitive and Affective Neuroscience. doi: 10.1093/scan/nsx125
- Lynn SK, Ibagon C, Bui E, Palitz SA, Simon NM, & Barrett LF (2016). Working memory capacity is associated with optimal adaptation of response bias to perceptual sensitivity in emotion perception. Emotion, 16(2), 155.
- Mareschal I, Calder AJ, Dadds MR, & Clifford CW (2013). Gaze categorization under uncertainty: Psychophysics and modeling. Journal of Vision, 13(5), 18.
- Mayer JD, & Geher G (1996). Emotional intelligence and the identification of emotion. Intelligence, 22(2), 89–113.
- McClure EB (2000). A meta-analytic review of sex differences in facial expression processing and their development in infants, children, and adolescents. American Psychological Association.
- McLaughlin KA, Garrad MC, & Somerville LH (2015). What develops during emotional development? A component process approach to identifying sources of psychopathology risk in adolescence. Dialogues in Clinical Neuroscience, 17(4), 403.
- McManis MH, Bradley MM, Berg WK, Cuthbert BN, & Lang PJ (2001). Emotional reactions in children: Verbal, physiological, and behavioral responses to affective pictures. Psychophysiology, 38(2), 222–231.
- Mumford JA, Turner BO, Ashby FG, & Poldrack RA (2012). Deconvolving BOLD activation in event-related designs for multivoxel pattern classification analyses. NeuroImage, 59(3), 2636–2643.
- Poldrack RA (2007). Region of interest analysis for fMRI. Social Cognitive and Affective Neuroscience, 2(1), 67–70.
- Pollak SD, Messner M, Kistler DJ, & Cohn JF (2009). Development of perceptual expertise in emotion recognition. Cognition, 110(2), 242–247.
- Sroufe LA (1997). Emotional development: The organization of emotional life in the early years. Cambridge University Press.
- Thielscher A, & Pessoa L (2007). Neural correlates of perceptual choice and decision making during fear-disgust discrimination. The Journal of Neuroscience, 27(11), 2908–2917.
- Thomas KM, Drevets WC, Whalen PJ, Eccard CH, Dahl RE, Ryan ND, & Casey B (2001). Amygdala response to facial expressions in children and adults. Biological Psychiatry, 49(4), 309–316.
- Wang S, Yu R, Tyszka JM, Zhen S, Kovach C, Sun S, . . . Chung JM (2017). The human amygdala parametrically encodes the intensity of specific facial emotions and their categorical ambiguity. Nature Communications, 8.
- Wiggins JL, Adleman NE, Kim P, Oakes AH, Hsu D, Reynolds RC, . . . Leibenluft E (2015). Developmental differences in the neural mechanisms of facial emotion labeling. Social Cognitive and Affective Neuroscience, 11(1), 172–181.