iScience
. 2026 Feb 17;29(3):115042. doi: 10.1016/j.isci.2026.115042

Action units of facial expressions in emotional contagion

Alessia Celeghin 1,3,4,, Ivonne Angelica Castiblanco Jimenez 1,3, Martina Froio 1, Enrico Vezzetti 2, Elena Carlotta Olivetti 2, Federica Marcolin 2
PMCID: PMC12973004  PMID: 41816287

Summary

While research on facial expressions has traditionally focused on basic emotions, other spontaneous reactions, especially those elicited through emotional contagion, are far more common in daily social interactions, with facial movements playing a key role in conveying emotions. Here, we explore spontaneous facial expressions of laughter, yawning, and mirror pain through a contagion experiment involving 32 participants. We identified key facial action units and used landmark-based distances as morphometric features to capture their dynamics. Our analysis revealed distinct patterns that separate these expressions from each other and from neutral faces, particularly in the lower face. This work provides a curated set of action units and features for the assessment of laughter, yawning, and mirror pain expressions, offering a valuable resource for future spontaneous facial expression recognition studies.

Subject areas: social sciences, psychology

Graphical abstract


Highlights

  • Specific AUs define contagious laughter, yawning, and mirror pain

  • Mouth distances best differentiate these emotional expressions

  • Laughter and yawning transmit more strongly than mirror pain

  • Geometric facial distances capture subtle motor responses in contagion



Introduction

Emotional contagion, a fundamental mechanism in human social dynamics, represents the tendency of individuals to automatically mimic and synchronize their emotional states with others.1,2 Two reflex-like mechanisms are involved in emotional contagion: the unintentional imitation of the sender’s emotional expression, the so-called emotional mimicry, and afferent feedback from such mimicry that elicits the same emotional state in the receiver.3 This phenomenon plays a crucial role in shaping interpersonal relationships, facilitating empathy and social cohesion4 while allowing better coordination of emotional responses and behaviors within the group.5 In rapidly changing environments, the ability to learn from others’ emotional responses provides an evolutionary advantage for survival. Indeed, groups can offer better prospects through communication and cooperation while rapidly sharing perceptions of potential threats.6 The ability to share and understand emotions has developed through several stages, suggesting that emotional contagion emerged gradually during evolution.7,8 In its early stages, animals within a group displayed similar expressions in response to a shared stimulus, as when all members became alert in the presence of danger. Over time, this collective reaction was transformed into something more complex: the expression of a single individual could trigger the same response in others, even without the original stimulus.8 Social animals can indeed learn to fear novel stimuli indirectly by witnessing conspecifics’ reactions (a phenomenon known as social fear learning), while humans can likely understand others’ feelings and anticipate their actions, often without conscious awareness.9,10 As emotional mimicry occurs through subtle, non-verbal cues, facial expressions represent one of the primary channels for transmitting and triggering social or defensive responses.11 The initial stages of social cue encoding, such as the reaction to specific patterns in facial expressions, may represent the first phase of emotional communication.

The way behavioral mimicry contributes to emotional contagion varies across species and expressions. Sometimes it happens almost instantly, as when we quickly return a smile; other times it develops more slowly, as with yawning8,12 or mirror pain,13 with contextual factors and social goals significantly influencing the occurrence and intensity of facial expressions.14 In both humans and other animals, mimicry appears first as a basic copying of facial movements that may or may not lead to experiencing the emotion itself, thus creating a bridge between physical imitation and emotional sharing.8 This process aligns with the two-step model proposed by Dezecache et al.,15 in which emotional contagion first requires the observer to recognize changes in the demonstrator’s behavior, followed by the imitation of these behaviors along with experiencing the corresponding emotion. Most importantly, this initial emotional transmission occurs automatically, without necessarily requiring conscious awareness of the other’s emotional state. This extensive range of facial movements is made possible by shared physical traits, shaped by the different evolutionary pressures and ecological niches that each species occupies. These species-specific needs highlight the diversity of life and the complex ways in which different species adapt to their surroundings. However, common anatomical features such as facial muscles contribute to a growing body of evidence suggesting that the evolution of facial expressions was not driven entirely by phylogenetic pressures, but that other socio-ecological factors also had a significant influence. Among the facial expressions most associated with this phenomenon, yawning, laughter, and mirror pain are compelling examples of how emotional states can spread quickly within social groups.

For example, while yawning serves several physiological functions, such as brain cooling16 and increased alertness,17 its contagious nature has garnered increasing attention, as studies have shown that observing or even thinking about yawning can trigger the same response.18 This characteristic physiological reflex, typically linked to boredom or drowsiness, may, according to Kapitány and Nielsen,19 serve as a basic form of emotional synchronization, potentially enhancing vigilance and social bonding, as well as the broader capacity for empathy.20 Indeed, susceptibility to contagious yawning appears to be linked to empathic abilities: individuals with higher levels of empathy are more likely to experience contagious yawning, while those with conditions characterized by reduced empathy, such as autism spectrum disorders, show decreased susceptibility.12 When studying how yawning spreads between people, researchers have found interesting patterns in facial responses.21,22 Even when people successfully stop themselves from yawning, their facial muscles often show subtle activity patterns similar to yawning. These small muscle movements indicate that our faces automatically prepare to replicate others’ expressions, even if we do not complete the action.23 This automatic response helps explain why yawning is so contagious: our facial muscles react before we are even aware of it.9 Additionally, studies show that people are more likely to “catch” yawns from family members and close friends than from strangers, implying that social bonds strengthen this contagious response.8

Similarly, laughter plays a vital role in social interactions and emotional regulation. The way humans laugh shares important features with the “play face” expressions seen in other primates, particularly in the brain areas involved and the way facial muscles move. This similarity is not merely coincidental; it suggests that laughter originated as a simple play signal and evolved into the more complex social tool we use today. In fact, when people laugh together, they use the same facial muscles and brain networks that other primates use during playful social interactions.5 Moreover, laughter appears very early in human development, starting around 3–4 months of age, and typically occurs during playful interactions.24 As children grow, they develop the ability to distinguish between genuine and social laughter, showing how their understanding of social and emotional signals becomes more sophisticated.5 Spontaneous laughter, as opposed to voluntary laughter,25 has specific properties and is more likely to elicit contagious responses. Studies by Simonyan and Horowitz26 show that spontaneous laughter is preserved even in patients with bilateral damage to speech motor areas who cannot speak or vocalize voluntarily; it requires less conscious control and shows shorter response latencies.

Mirror pain, the phenomenon whereby observers experience pain-like sensations when witnessing others in pain despite not undergoing any physical trauma themselves,27 likewise shows correspondences between facial expressions during actual and observed pain. Indeed, when people see others in pain, their facial muscles often contract in patterns similar to those of the person actually experiencing pain. These responses happen quickly and automatically, suggesting that they are part of our basic social connection system.8 Brain imaging studies show that watching others in pain activates some of the same brain regions that process our own pain experiences.28,29 This shared neural activity, supposedly mediated by the mirror neuron system,30 helps explain why we might grimace when seeing someone hurt themselves.31,32 Shared and distinct neural networks for self-experienced and empathized pain support the idea that mirror pain extends beyond simple motor imitation to include deeply emotional processes. The intensity of these responses is modulated by social context, indicating that external factors can influence the degree of mirror pain experienced.33 Interestingly, these reactions become stronger when we have closer relationships with the person in pain, similar to how yawning and laughter spread more easily between friends and family.8

Traditional methods of studying basic emotions have relied heavily on behavioral observations and self-report measures. However, the analysis of action units (AUs) has become a particularly promising approach for measuring and understanding the relationship between facial expressions and the underlying emotions.34 The Facial Action Coding System (FACS) and its associated AUs have provided researchers with a standardized method for describing facial movements.35,36 Each AU corresponds to the movement of an individual muscle (Figure 1) or a specific muscle group of the face, identified by a number (AU1, AU2, etc.). AUs can manifest individually or in combination with one another, and despite their limited number, over 7,000 different combinations have been observed.

Figure 1.

Figure 1

Examples of facial muscles and their corresponding action units

Representation of key muscles, such as the frontalis, zygomaticus major and minor, orbicularis oculi, and masseter, which contribute to expressions related to emotions and communication. This representation is illustrative rather than exhaustive, showing selected facial muscles and some of their associated AUs as representative examples.35,37

While FACS itself does not contain specific descriptions tied to emotions, it is commonly used to interpret the nonverbal communicative signals (i.e., facial expressions) associated with emotional states. Although research has primarily focused on basic emotions, some recent studies have identified particular AUs associated with facial expressions involved in emotional contagion, providing a foundation for more systematic analyses of these more complex emotional states (see Table 1 for a detailed description of AUs in the emotional responses studied here).

Table 1.

Facial action units reported in the literature for laughter, yawning, and mirror pain

AU12 (Lip corner puller)
Laugh: Keltner,38 Littlewort et al.,39 Sun et al.13
Mirror pain: Lucey et al.,42 Sun et al.13

AU2 (Outer brow raiser)
Laugh: Drack et al.,40 Niewiadomski and Pelachaud41
Yawn: Barbizet43

AU6 (Cheek raiser)
Laugh: Keltner,38 Niewiadomski and Pelachaud41
Mirror pain: Prkachin,44 Keltner,38 Cordaro et al.,45 Goller et al.46

AU20 (Lip stretcher)
Laugh: Drack et al.,40 Niewiadomski and Pelachaud41
Yawn: Menin et al.47

AU25 (Lip part)
Laugh: Keltner,38 Beermann et al.,48 Drack et al.40
Yawn: Vural et al.,34 Menin et al.47
Mirror pain: Goller et al.46

AU43 (Eye closure)
Yawn: Vural et al.,34 Menin et al.47
Mirror pain: Prkachin,44 Keltner,38 Cordaro et al.45

AU26 (Jaw drop)
Laugh: Keltner,38 Beermann et al.,48 Drack et al.40
Yawn: Li,50 Vural et al.,34 Menin et al.,47 Littlewort et al.39
Mirror pain: Heesen et al.,51 Kunz et al.,52,53 Goller et al.46

AU45 (Blink, rhythmic AU43)
Yawn: Vural et al.,34 Sikander and Anwar49

AU27 (Mouth stretch)
Laugh: Keltner38
Yawn: Li,50 Vural et al.,34 Menin et al.47
Mirror pain: Goller et al.46

AU10 (Upper lip raiser)
Mirror pain: Prkachin,44 Keltner,38 Cordaro et al.,45 Goller et al.46

AU7 (Lid tightener)
Laugh: Keltner,38 Beermann et al.48
Mirror pain: Prkachin,44 Keltner,38 Cordaro et al.,45 Goller et al.46

AU17 (Chin raiser)
Mirror pain: Keltner,38 Tessier et al.54

AU4 (Brow lowerer)
Laugh: Darwin,55 Niewiadomski and Pelachaud41
Yawn: Menin et al.47
Mirror pain: Prkachin,44 Keltner,38 Cordaro et al.,45 Goller et al.46

AU18 (Lip puckerer)
Mirror pain: Keltner38

AU5 (Upper lid raiser)
Laugh: Ruch and Ekman,56 Niewiadomski and Pelachaud41

AU23 (Lip tightener)
Mirror pain: Keltner,38 Tessier et al.54

AU9 (Nose wrinkler)
Laugh: Ruch and Ekman,56 Niewiadomski and Pelachaud41
Yawn: Menin et al.47
Mirror pain: Prkachin,44 Keltner,38 Cordaro et al.,45 Goller et al.46

AU24 (Lip pressor)
Mirror pain: Keltner38

AU1 (Inner brow raiser)
Laugh: Drack et al.,40 Niewiadomski and Pelachaud41

AU16 + AU25 (Lower lip depressor + Lip part)
Mirror pain: Heesen et al.51

AU6 + AU12 (Cheek raiser + Lip corner puller)
Laugh: Niewiadomski and Pelachaud,41 Dijk et al.,57 Sachisthal et al.58

AU6 + AU12 + AU25 (Cheek raiser + Lip corner puller + Lip part)
Laugh: Gironzetti et al.,59 Hofling et al.,60 Niewiadomski and Pelachaud41

Includes Ekman’s visual representation,35 AU code, name, and relevant authors.

Table 1 synthesizes current research on AUs associated with laughter, yawning, and mirror pain. Laughter consistently features AU12 (lip corner puller) and AU6 (cheek raiser) forming the Duchenne smile pattern,38,61 typically accompanied by mouth opening actions (AU25, AU26). Yawning is characterized by pronounced mouth movements, particularly AU27 (mouth stretch) as documented by Barbizet,43 along with eye movements like AU43 (eye closure) and AU45 (blink).34,47 Mirror pain displays a more complex pattern, presenting both upper face movements (AU4, AU7, AU10) identified in Prkachin’s44 work and elaborated by Keltner38 and Goller et al.,46 along with various lower face actions. The table also includes some selected combinations of AUs that have been observed to co-occur in this emotional context.

Recent research on emotional contagion62 has identified gaps in our understanding of facial dynamics. Specifically, researchers have highlighted the need to expand beyond measuring just basic muscle activations, such as the zygomaticus major and corrugator supercilii, suggesting that a more comprehensive analysis of multiple facial muscles would advance the understanding of how emotions spread. We address this research gap by examining a broader range of AUs and their corresponding facial landmark configurations in facial expressions linked to contagious states. While emotional contagion involves the broader process of emotional transmission, our study focuses specifically on the observable facial expressions that initiate this process. To avoid ambiguity, throughout this article we reserve the term “mimicry” for the theoretical concept of expression replication, while referring to the observable responses of our participants as facial manifestations. We examine the facial expressions produced when participants watch others’ emotional displays without any context. This approach enables us to analyze how viewing someone else’s expression leads to specific facial movements in the observer, without the influence of situational cues or conscious interpretation. Our study focuses on characterizing facial expressions associated with laughter, yawning, and mirror pain, offering an opportunity to gain new insights into their distinctive manifestations through comprehensive AU analysis and corresponding facial landmark measurements. Specifically, we aim to (1) identify and characterize specific AUs associated with each facial expression resulting from emotional mimicry of laughing, yawning, and mirror pain reactions; (2) identify a set of morphometric measures (in the form of landmark distances) associated with each AU under examination; and (3) statistically differentiate these facial responses from each other and from the neutral expression by exploiting the descriptiveness of the selected morphometric measures, thereby demonstrating the validity of the proposed AU mapping for these facial expressions.

To achieve these objectives, we implemented a two-phase experimental protocol. In phase 1, we recorded spontaneous facial expressions from participants viewing emotion-eliciting video content, creating a validated stimulus set of facial displays expressing laughter, yawning, and mirror pain. In phase 2, we presented these isolated facial expressions (without audio or contextual information) to a new participant group while recording their facial responses. We analyzed these responses through both qualitative and quantitative approaches. The qualitative component involves visual observation of participants’ facial movements recorded while they viewed the emotional stimuli, and of their correspondence to defined AUs. This is complemented by a quantitative component, which involves a statistical analysis of differences in facial changes between neutral and emotional states, and between different emotions, using selected landmark distances. This study allows for a detailed examination of the facial expressions that convey contagious emotional responses, providing insights into the facial patterns of laughter, yawning, and mirror pain.

Results

Identification of a consistent set of AUs and distances for each emotion

An extended qualitative analysis identifying all AUs observed across participants is provided in Method S1. Two to three AUs were identified for each emotional category, selected from those detected by the expert FACS coder (see Figure S1 of the supplemental information). These sets were chosen based on their consistency, representing the largest number of subjects possible (n = 17), and on their ability to be easily identified through distances extracted from the positions of the 68 landmarks. The selected AUs were as follows: for laughter, AU12 (lip corner puller) and AU25 (lip part) were identified as characteristic for all subjects; for yawning, AU9 (nose wrinkler), AU27 (mouth stretch), and AU43 (eye closure) were chosen; for mirror pain, AU4 (brow lowerer), AU7 (lid tightener), and AU10 (upper lip raiser) were selected. A specific set of distances was then defined for each AU, as indicated in Table 2 and in Figure 2.

Table 2.

Distance sets for each emotional category and AU according to the positions of the 68 landmarks

Emotion AU Distances
Laugh AU12 49-1, 49-2, 49-3, 49-37, 49-40, 49-28, 55-17, 55-16, 55-15, 55-46, 55-43, 55-28, 55-49, 61-65
AU25 62-68, 63-67, 64-66, 50-60, 51-59, 52-58, 53-57, 54-56
Yawn AU9 28-21, 28-22, 28-23, 28-24, 31-21, 31-22, 31-23, 31-24, 34-21, 34-22, 34-23, 34-24, 50-32, 54-36, 50-40, 54-43, 28-32, 28-33, 28-35, 28-36, 28-31, 40-32, 40-33, 40-35, 40-36, 43-32, 43-33, 43-35, 43-36
AU27 62-68, 63-67, 64-66, 50-60, 51-59, 52-58, 53-57, 54-56
AU43 38-42, 39-41, 44-48, 45-47
Mirror pain AU4 28-21, 28-22, 28-23, 28-24, 31-21, 31-22, 31-23, 31-24, 34-21, 34-22, 34-23, 34-24
AU7 38-42, 39-41, 44-48, 45-47
AU10 50-32, 54-36, 50-40, 54-43, 28-32, 28-33, 28-35, 28-36, 28-31, 40-32, 40-33, 40-35, 40-36, 43-32, 43-33, 43-35, 43-36
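The distances above are Euclidean distances between pairs of the 68 facial landmarks produced by the dlib predictor listed in the key resources table. As a minimal sketch of how such features could be computed for a single video frame, assuming the standard dlib/OpenCV APIs (the AU12 pair list is copied from Table 2; file paths are illustrative):

```python
import cv2
import dlib
import numpy as np

# Hypothetical paths; the predictor is the 68-point model listed in the key resources table.
DETECTOR = dlib.get_frontal_face_detector()
PREDICTOR = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

# AU12 (laugh) distance set from Table 2, written as 1-based landmark pairs.
AU12_PAIRS = [(49, 1), (49, 2), (49, 3), (49, 37), (49, 40), (49, 28),
              (55, 17), (55, 16), (55, 15), (55, 46), (55, 43), (55, 28),
              (55, 49), (61, 65)]

def landmarks_68(frame_bgr):
    """Return the 68 landmarks of the first detected face as a (68, 2) array, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = DETECTOR(gray, 1)
    if not faces:
        return None
    shape = PREDICTOR(gray, faces[0])
    return np.array([[shape.part(i).x, shape.part(i).y] for i in range(68)], dtype=float)

def au_distances(points, pairs):
    """Euclidean distances for the given 1-based landmark pairs."""
    return {f"{a}-{b}": float(np.linalg.norm(points[a - 1] - points[b - 1])) for a, b in pairs}

# Usage sketch on one frame (illustrative file name):
# frame = cv2.imread("frame_0001.png")
# points = landmarks_68(frame)
# if points is not None:
#     print(au_distances(points, AU12_PAIRS))
```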

Figure 2.

Figure 2

Mapping facial distances across laughter, yawning, and mirror pain: identifying consistent vs. inconsistent AU behavior relative to neutral

On the top, representation of the initial distance sets for each emotional category and AU according to the positions of the 68 landmarks. On the bottom, representation of distances that exhibit consistent behavior (in green) and inconsistent behavior (in orange) compared to the expected behavior with reference to AU0 (neutral facial expression). From left to right, laughter (AU12, AU25), yawning (AU9, AU27, AU43), and mirror pain (AU4, AU7, AU10). See also Figure S2 for laughter (AU12, AU25), Figure S3 for yawning (AU9), Figure S4 for yawning (AU27, AU43), Figure S5 for mirror pain (AU4, AU7), Figure S6 for mirror pain (AU10), and Table S2.

Statistical results

We began processing the selected distances by evaluating their median values and corresponding interquartile ranges in relation to the neutral emotional state (AU0). To verify the consistency of distance variations for each specific AU relative to its neutral value, we analyzed the expected behavior of each distance during the muscle activation involved in that AU. As shown in Figure 3, for most distances a decrease in value is expected compared to AU0 (distances associated with the eyebrows, nose, eyes, and facial contour). However, certain distances, especially those associated with the mouth, should instead increase for the specific AU to be properly activated.
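As a minimal sketch of this step, assuming per-frame distances are stored in pandas DataFrames (one for the emotional condition and one for the neutral AU0 baseline) and that the expected directions are encoded by hand as in Figure 3 (the EXPECTED map below lists only a few illustrative entries):

```python
import pandas as pd

# Expected direction of change relative to neutral (AU0): "increase" for mouth-opening
# distances, "decrease" for eye/brow/nose distances (illustrative subset only).
EXPECTED = {"62-68": "increase", "63-67": "increase", "38-42": "decrease", "28-21": "decrease"}

def consistency(emotion_df: pd.DataFrame, neutral_df: pd.DataFrame) -> pd.DataFrame:
    """Compare per-distance medians against neutral and flag directional consistency."""
    rows = []
    for dist, direction in EXPECTED.items():
        med_emo = emotion_df[dist].median()
        q1, q3 = emotion_df[dist].quantile([0.25, 0.75])
        med_neu = neutral_df[dist].median()
        observed = "increase" if med_emo > med_neu else "decrease"
        rows.append({"distance": dist, "median": med_emo, "IQR": q3 - q1,
                     "neutral_median": med_neu, "expected": direction,
                     "consistent": observed == direction})
    return pd.DataFrame(rows)
```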

Figure 3.

Figure 3

Selected AU landmark distances and expected changes relative to neutral across laughter, yawning, and mirror pain

Representation of the selected AUs and Euclidean distances with their corresponding landmarks, alongside Ekman’s depiction,35 for laughter (AU12, AU25), yawning (AU9, AU27, AU43), and mirror pain (AU4, AU7, AU10). The distances in purple are those that should theoretically decrease compared to the neutral state for activation of the corresponding AU, while those in blue should increase. See also Figure S1.

A quantitative representation of the median values for each distance, along with the neutral reference and interquartile ranges, is provided in Figures S2–S6, accompanied by a table detailing the numerical values of medians and interquartile ranges in Table S2. For each distance, we evaluated and represented consistent activations (where the median variation in distance aligns with the change expected, relative to the neutral condition, for the AU to be activated) and inconsistent activations (where it does not).

Most features selected to represent the AU sets exhibited consistent behavior with the activation of the corresponding AU (Table S2). However, three distances showed inconsistent patterns. For AU9 (nose wrinkler) in yawn, features 54-36 and 50-32 displayed higher median values compared to the neutral reference, when ideally these values should be lower for correct AU activation (Figure S3). Similarly, for AU10 (upper lip raiser) in mirror pain (Figure S6), distance 40-36 deviated from the expected pattern, showing values higher than neutral when reduction was expected (complete median and interquartile range values in Table S2). Figure 2 represents the initial distance sets for each emotional category and illustrates the facial locations of these features, highlighting distances with consistent activation (green) versus those with inconsistent behavior (orange). Overall, the selected distances generally showed marked activations compared to neutral, except for AU10 in mirror pain, which exhibited values similar to neutral (Table S2; Figure S6).

Analyzing the consistency relative to neutral values can provide valuable insights into facial patterns. Since some distances are shared across multiple AUs and emotional categories, this preliminary analysis helps identify movements common to several facial expressions and those specific to a particular emotional state.

The Friedman test results reveal that most features have a p value below 0.05, indicating at least one significant difference between emotional categories for each distance (see Table S3 for complete statistical results). Non-significant distances, i.e., distances that are not informative in discriminating our set of emotional categories, are 34-22, 34-23, 40-35, 40-36, 43-32, and 43-33 (Figure S7). For the distances that were significant in the Friedman test, the post hoc Conover test identified additional distances as non-discriminative of the emotional states considered, namely 31-22, 43-35, 31-23, 40-33, 28-31, 34-24, 28-23, 34-21, 28-35, and 28-33 (as can be seen in Figure S7).
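A minimal sketch of this testing pipeline is given below, assuming a wide-format table with one row per subject and one column per emotional category for a given distance; the post hoc step uses the scikit-posthocs Conover-Friedman implementation, which is an assumption since the article does not name the library:

```python
import pandas as pd
from scipy.stats import friedmanchisquare
import scikit_posthocs as sp

def friedman_conover(per_subject: pd.DataFrame, alpha: float = 0.05):
    """
    per_subject: rows = subjects, columns = emotional categories
    (e.g., neutral, laugh, yawn, mirror pain), values = one distance feature per subject.
    Returns the Friedman p value and, if significant, the Conover post hoc p value matrix.
    """
    stat, p = friedmanchisquare(*[per_subject[col].values for col in per_subject.columns])
    posthoc = None
    if p < alpha:
        # Conover post hoc test for the Friedman (repeated-measures) design.
        posthoc = sp.posthoc_conover_friedman(per_subject)
    return p, posthoc
```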

The between/within class variability (BWV) analysis led to the results shown in Figure 4. The chart displays BWV values sorted from highest (left) to lowest (right) for each distance. We can see that the discriminative distances are 50-60, 54-56, 53-57, 51-59, 52-58, 54-36, 63-67, 62-68, 64-66, and 55-46, while all the other distances have a BWV value lower than one.
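The article does not spell out the exact BWV formula, so the sketch below assumes one common definition, the variance of the class means divided by the mean within-class variance, with the same cutoff of one used in Figure 4:

```python
import numpy as np
import pandas as pd

def bwv(values: pd.Series, labels: pd.Series) -> float:
    """
    Between/within class variability for one distance feature (assumed definition):
    variance of class means divided by the mean of within-class variances.
    Values above 1 are treated as discriminative, mirroring the cutoff in Figure 4.
    """
    groups = [values[labels == lab] for lab in labels.unique()]
    between = np.var([g.mean() for g in groups], ddof=0)
    within = np.mean([np.var(g, ddof=0) for g in groups])
    return float(between / within) if within > 0 else np.inf

# Usage sketch: rank all distance columns by BWV, highest first.
# ranking = {col: bwv(df[col], df["emotion"]) for col in distance_columns}
# ranking = dict(sorted(ranking.items(), key=lambda kv: kv[1], reverse=True))
```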

Figure 4.

Figure 4

BWV Results

Diagram showing the BWV values for each distance, arranged in descending order. Distance labels are shown on the x axis and BWV values on the y axis. The discriminative distances are enclosed in a purple box (cutoff BWV = 1, indicated by a dashed line), with their corresponding landmarks and Euclidean distances displayed at the top right. See also Figure S14.

The results of the post hoc Conover test for the distances found to be discriminative in the BWV analysis can be seen in Figures S8 and S9. A bootstrap analysis (n_boot = 1,000) was conducted across all significant pairwise comparisons to estimate the variability of the mean effect size r. The average standardized effect size over the significant pairs was large (r = 1.105 ± 0.313, 95% CI [1.050, 1.157]). Despite the overall large effect, variability across pairs suggests that some emotional distinctions were stronger than others (see Figure 5).
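A minimal sketch of this bootstrap is shown below; it resamples the per-comparison effect sizes with replacement and leaves the definition of r to the upstream post hoc step, since the article does not specify how r was standardized:

```python
import numpy as np

def bootstrap_mean_effect(r_values, n_boot=1000, seed=0):
    """
    r_values: array of standardized effect sizes, one per significant pairwise comparison
    (however r is defined upstream). Resamples comparisons with replacement and returns
    the observed mean, the bootstrap SD of the mean, and a percentile 95% CI.
    """
    rng = np.random.default_rng(seed)
    r_values = np.asarray(r_values, dtype=float)
    boot = np.array([rng.choice(r_values, size=r_values.size, replace=True).mean()
                     for _ in range(n_boot)])
    ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
    return r_values.mean(), boot.std(ddof=1), (ci_low, ci_high)

# Usage sketch (per_pair_r is a hypothetical array of effect sizes):
# mean_r, sd_r, ci = bootstrap_mean_effect(per_pair_r, n_boot=1000)
```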

Figure 5.

Figure 5

Average pairwise heatmap of mean effect size across emotional category for post hoc Conover test

Each cell shows the mean absolute value of the effect size r between two emotional conditions (mirror pain, neutral, laugh, and yawn), averaged over all significant features and pairs. The color intensity reflects the magnitude of the effect: lighter tones indicate smaller effects, while darker orange areas represent stronger pairwise differences between emotions. See also Table S3.

The most discriminative features, identified through BWV analysis, are located in the mouth area (Figure 4). Post hoc Conover test results showed that inner mouth distances (63-67, 62-68, 64-66, 55-46) had the strongest discrimination, with significant differences (p < 0.001) between all emotional category combinations (laughter, yawning, and mirror pain) (as can be seen in Figure S9). In contrast, outer mouth distances (50-60, 53-57, 51-59, 52-58) could not discriminate between laughter and mirror pain (p values ranging from 0.056 to 0.150). Distance 54-36 was particularly effective at distinguishing laughter from other emotions (p < 0.001), while yawn-related distances showed significant differences against all other categories except for distance 54-56 in the yawn-laugh comparison (p = 0.053) (Figure S8). Eye-related distances (38-42, 44-48) effectively discriminated yawning against all other emotions (p < 0.001) but could not distinguish laughter from mirror pain (p > 0.290) (Figure S12). Eyebrow distances showed significant results specifically for mirror pain versus all other emotional categories (p < 0.001) (Figure S13).

Emotional alignment and validation of stimuli

The percentage of participants who reacted at least once to a stimulus for each emotion was computed as a measure of emotional reactivity aligned with the presented stimuli. In total, 29 reactive participants and 3 who never showed an emotional reaction throughout the entire experiment were identified. Specifically, 20 participants reacted to laughter, 23 to yawning, and 15 to mirror pain, out of a total of 32 participants, resulting in reaction rates of 62.5% for laughter, 71.87% for yawning, and 46.88% for mirror pain. The plots comparing the intensity of distances across participants (n = 29), stimuli, and neutral conditions, along with the corresponding statistical tests used to assess the significance of distribution differences, can be seen in Figure 6 for laughter AUs (12, 25), in Figure 7 for yawning AUs (9, 43, 27), and in Figure 8 for mirror pain AUs (4, 7). The neutral baseline (orange boxplot/violin plot in Figures 6, 7, and 8) was calculated using a fixed group of 17 participants selected from the reduced dataset based on FACS coder validation, ensuring standard neutral expressions. This group was kept constant across all comparisons with other emotions. Intensity analyses included only participants who expressed the target emotional state.

Figure 6.

Figure 6

Laughter-related boxplots and violin plots of concordant emotional transmission and Mann-Whitney U test

Intensity values (y axis) and their distributions across participants (n = 29) for the subject (green), stimulus (purple), and neutral (orange) conditions are indicated for each distance (x axis) at which a statistically significant difference between laughter and neutral conditions was detected by the Conover post hoc test. Laughter-related action units are shown separately (AU12 at the top and AU25 at the bottom). Results of pairwise comparisons using two-tailed Mann-Whitney U tests are reported above each plot, with non-significant comparisons indicated as NS. See also Method S2.1.

Figure 7.

Figure 7

Yawning-related boxplots and violin plots of concordant emotional transmission and Mann-Whitney U test

Intensity values (y axis) and their distributions across participants (n = 29) for the subject (green), stimulus (purple), and neutral (orange) conditions are indicated for each distance (x axis) at which a statistically significant difference between yawning and neutral conditions was detected by the Conover post hoc test. Yawning-related action units are shown separately (AU9 and AU43 at the top and AU27 at the bottom). Results of pairwise comparisons using two-tailed Mann-Whitney U test are reported above each plot, with non-significant comparisons indicated as NS. See also Method S2.1.

Figure 8.

Figure 8

Mirror pain-related boxplots and violin plots of concordant emotional transmission and Mann-Whitney U test

Intensity values (y axis) and their distributions across participants (n = 29) for the subject (green), stimulus (purple), and neutral (orange) conditions are indicated for each distance (x axis) at which a statistically significant difference between mirror pain and neutral conditions was detected by the Conover post hoc test. Mirror pain-related action units are shown (AU4 and AU7). Results of pairwise comparisons using two-tailed Mann-Whitney U tests are reported above each plot, with non-significant comparisons indicated as NS. See also Method S2.1.

For laughter (Figure 6), the Mann–Whitney U test showed that all distances between the various groups were significant, except for distances 49-28 and 55-43 of AU12 between participants and stimuli. For yawning (Figure 7), AU43 was the only one showing no significant difference between participants and stimuli. For mirror pain (Figure 8), distances 44-48 and 45-47 of AU7 were not significant between stimuli and neutral conditions.
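As a minimal sketch of these comparisons, assuming per-condition intensity arrays for a single distance (the dictionary keys below are illustrative names for the subject, stimulus, and neutral groups):

```python
import pandas as pd
from scipy.stats import mannwhitneyu

def pairwise_mwu(intensities: dict, alpha: float = 0.05) -> pd.DataFrame:
    """
    intensities: {"subject": array, "stimulus": array, "neutral": array} of
    per-frame intensity values for one distance.
    Runs two-tailed Mann-Whitney U tests on the three group pairs.
    """
    pairs = [("subject", "stimulus"), ("subject", "neutral"), ("stimulus", "neutral")]
    rows = []
    for a, b in pairs:
        u, p = mannwhitneyu(intensities[a], intensities[b], alternative="two-sided")
        rows.append({"comparison": f"{a} vs {b}", "U": u, "p": p, "significant": p < alpha})
    return pd.DataFrame(rows)
```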

Figure 9 shows the results of the unsupervised clustering (k-means, k = 4), based on the mouth distance features (discriminative according to the BWV analysis). The plot in Figure 9A shows the distribution of data along the first two principal components (PC1 and PC2) differentiated according to the clustering assignments, while the matrix in Figure 9B provides a quantitative summary of label representation within each cluster, illustrating both the internal composition of clusters and the distribution of each emotional category across them.
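A minimal sketch of this clustering and visualization step is given below, assuming a feature table restricted to the BWV-discriminative mouth distances; standardizing the features before k-means and PCA is an assumption, as the article does not state the preprocessing:

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def cluster_stimuli(features: pd.DataFrame, labels: pd.Series, k: int = 4, seed: int = 0):
    """
    features: rows = stimulus samples, columns = the discriminative mouth distances
    identified by the BWV analysis. labels: emotional category of each sample.
    Returns 2D PCA coordinates, cluster assignments, and a label-by-cluster percentage table.
    """
    X = StandardScaler().fit_transform(features)
    clusters = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(X)
    coords = PCA(n_components=2).fit_transform(X)

    # Percentage of each emotional category falling into each cluster (cf. Figure 9B.1).
    table = pd.crosstab(labels, pd.Series(clusters, index=labels.index, name="cluster"),
                        normalize="index") * 100
    return coords, clusters, table
```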

Figure 9.

Figure 9

Results of the stimuli clustering using k-means with k = 4

(A) displays the distribution of data along PC1 (x axis) and PC2 (y axis) components, colored by cluster assignment (0 = blue, 1 = orange, 2 = green, 3 = pink).

(B) (B.1) shows the percentage of all samples from each label that fall into each cluster (i.e., label distribution across clusters) and (B.2) shows the percentage of each emotional category within the cluster (i.e., cluster composition). The cells corresponding to the highest value for each label are highlighted in turquoise. See also Method S2.2.

Discussion

Emotional alignment and quantitative and qualitative analysis of AUs

The present study aimed to identify facial AUs associated with laughter, yawning, and mirror pain, which exhibit contagious properties. To investigate mediated emotional contagion, we designed an experimental paradigm in which emotional states were first elicited in a group of participants through audiovisual stimuli and subsequently transmitted to a second sample via the recorded facial expressions alone. This design allowed us to test whether emotional facial signals could propagate in the absence of direct contextual cues, thus assessing the transitive nature of emotional transmission.

Our results indicated that laughter and yawning can be vicariously transmitted, albeit with reduced intensity and variability across individuals, whereas mirror pain showed weaker and less consistent resonance. Specifically, yawning elicited responses in 71.87% of participants, followed by laughter (62.50%) and mirror pain (46.88%). Participants rarely replicated the exact facial configurations or intensity of the original stimuli. This attenuation was expected, as transitive emotional contagion is known to produce more attenuated expressive responses than direct emotional engagement, as previously discussed by Dezecache et al.63

Yawning emerged as the most consistently reactive expression, with responses occurring across variable temporal windows. Intra- and inter-participant variability was evident in both response modality and intensity. While some individuals attempted to suppress their reactions (e.g., covering their mouth), most exhibited attenuated yet observable facial responses. Whether yawning reflects a purely automatic motor response or carries an emotional component remains an open question.

Laughter also elicited valence-congruent reactions, typically expressed as smiles rather than full laughter, likely due to the absence of auditory cues. It is noteworthy that laughter involves complex motor coordination, including rhythmic facial and diaphragmatic movements, whereas smiles may serve as its simplified counterpart, a weaker form of laughter, or its precursor, retaining social and affective functions without vocalization.55,64,65,66,67

Mirror pain proved to be the most ambiguous emotional state. Facial responses ranged from partial replication (often limited to the upper face) to expressions of doubt, curiosity, or even cheerfulness. These findings suggest that mirror pain relies more strongly on contextual and interpretive cues than laughter or yawning, limiting its transmission through facial signals alone. As discussed in prior literature, emotional mimicry may occur automatically,68,69 even under subliminal or constrained perceptual conditions. However, contextual and social variables, such as perceived similarity, social intent, group affiliation, and cooperative dynamics, can modulate the likelihood and intensity of emotional contagion.70,71 Our findings suggest that certain emotional states, particularly laughter and yawning, possess transitive properties capable of propagating across individuals even in the absence of direct context. In contrast, the replication of mirror pain requires additional interpretive cues. Building on the framework of Dezecache et al.,15 we agree that emotional signal coordination between sender and receiver may be evolutionarily stable only when reciprocation confers mutual benefit, implying that, for some emotional states, such as mirror pain in our case, individual and social factors are essential for decoding and responding appropriately.

Qualitative and quantitative analyses revealed that specific facial distances, particularly those related to vertical mouth movements, provided strong discriminatory power across emotional categories. Laughter was characterized by AU6 (cheek raiser), AU12 (lip corner puller), and AU25 (lip part), consistent with prior literature.41,59,60 While AU7 (lid tightener) and AU4 (brow lowerer) can appear in intense laughter,55 they were primarily associated with mirror pain in our sample, indicating subdued laughter responses (Figure S1). The predominance of AU12 with AU6 confirms that eye squinting in laughter results from AU6 activation rather than AU7.38,48 AU12 can appear independently in subtle smiles or combined with AU25, which shares distance parameters with AU27 (mouth stretch) in yawning but induces smaller increases in mouth opening distances. AU12 corresponded to expected distance patterns, including characteristic horizontal mouth expansion (distances 61-65, 55-49), effectively distinguishing laughter from other emotions (see Table S2; Figure S2).

Yawning was the most frequently expressed emotional state, primarily characterized by AU27 (mouth stretch) at varying intensities. Suppressed yawns involved AU9 (nose wrinkler), AU43 (eye closure) at multiple intensities (AU43i-AU43iii), and AU45 (blink), with behavioral indicators such as mouth covering reflecting social inhibition.22 AU17 (chin raiser), shared with mirror pain, was also present (Figure S1). AU9, which shares measurement parameters with AU4 (brow lowerer) and AU10 (upper lip raiser), typically yielded distance reductions; however, exceptions in measurements 54-36 and 50-32 showed increases due to co-activation with AU27 (Table S2; Figure S3). Isolated AU27 activation produced widespread distance increases characteristic of traditional yawns, while combined AU27 and AU9 activation resulted in smaller values defining suppressed yawns. Yawning exhibited greater facial expressiveness than mirror pain, with wider interquartile ranges and participant variability. AU43, defined by the same distances as AU7 (lid tightener), demonstrated marked reductions consistent with full eye closure, while AU27 produced substantial increases in mouth-related distances, validating these metrics for distinguishing yawning from laughter and mirror pain.

Mirror pain elicited the greatest inter-individual variability, primarily featuring AU4 (brow lowerer), AU7 (lid tightener), and AU10 (upper lip raiser) at low intensities. Additional, less commonly documented AUs included AU15 (lip corner depressor), AU24 (lip pressor), and AU5 (upper lid raiser), with one subject exhibiting AU14 (dimpler), interpreted as an empathetic expression. Eye narrowing resulted primarily from AU7 (orbicularis oculi contraction), emphasized by concurrent AU4 (brow furrowing), with participants displaying varied AU4 intensity levels (Figure S1). Facial distances associated with AU4 and AU7 decreased as expected, with median values aligned with theoretical predictions. AU4 exhibited more pronounced reductions than AU10 (marked decreases in distances 28-21, 28-22, 28-23, 28-24 versus moderate changes in landmarks 31, 34), indicating stronger upper-face expressiveness relative to the midfacial region (Table S2; Figures S5 and S6). For AU10, distances generally decreased versus neutral, except for distance 40-36, which showed a slight increase. However, AU10 exhibited lower-than-anticipated magnitude changes and modest deviations from neutral, attributable to limited facial expressiveness, pronounced inter-individual variability, and potential landmark detection imprecision in the upper face and nasal regions.

Interestingly, as shown in Table 1 and supported by prior literature, several AUs were shared across multiple emotional categories. AU12 (lip corner puller) and AU25 (lip part), prominent in laughter, have been documented in mirror pain.13,42,46 While AU43 (eye closure) appeared in yawning, it is also linked to mirror pain in literature,38,44,45 though AU7 (lid tightener) was more prevalent in our observations. AU27 (mouth stretch), observed with varying intensities that some authors categorize as distinct AUs including AU25 and AU26 (jaw drop),34,39,47,50 is typically expected in laughter38 but predominated as AU25 in our sample, with AU27 occasionally present in mirror pain. AU4 (brow lowerer), a mirror pain hallmark, is cited in the literature as potentially present during yawning47 but was absent in our subjects. AU9 (nose wrinkler), strongly associated with yawning, is also described in laughter and mirror pain contexts,38,44,45,56 though AU10 (upper lip raiser) was more consistently observed in our mirror pain data while AU9 was absent in laughter.

The selective occurrence of AUs across emotional states reinforces the notion that facial expressions can be discriminated by specific motor configurations, despite challenges in consistently capturing all muscle activations. Indeed, several AUs reported in prior studies were not observed in our data, including AU16 (lower lip depressor) in conjunction with AU25, AU18 (lip puckerer), and AU23 (lip tightener) in mirror pain; AU20 (lip stretcher) in laughter and yawning; and AU26 (jaw drop) across all three categories (see Table 1). Conversely, we identified AUs not widely referenced in the literature, such as AU6 (cheek raiser), AU17 (chin raiser), and AU38 (nostril dilator) in suppressed yawns, as well as AU5 (upper lid raiser), AU14 (dimpler), and AU15 (lip corner depressor) in mirror pain. Visual representations of these AUs are provided in Table S1.

To enhance the robustness of our quantitative analysis in capturing subtle facial expressions, we employed Friedman and Conover statistical tests to identify non-significant distances that do not effectively discriminate between emotional categories. The results indicated that the majority of distances yielded p values below 0.05, confirming statistical significance for at least one emotional group comparison. The discriminatory power of these distances was further validated through BWV and Conover post hoc test statistical results, where numerous emotion pairs also exhibited p values below the 0.05 threshold. Conversely, distances that failed to demonstrate discriminatory capacity (refer to Figure S7 for visual details) were primarily located in the central region of the face, including the nose and eyebrow areas. Notably, these non-discriminatory distances corresponded with the lowest BWV values, suggesting limited contribution to emotional differentiation (as can be seen in Figure S14).

Yawning was the most distinctly differentiated emotional state, as reflected in the median and interquartile range analysis as well as in the Conover test. This trend was evident across discriminative facial features, including distances 50-60, 54-56, 53-57, 51-59, 52-58, 63-67, 62-68, 64-66, and 55-46 (Figures S8 and S9). Laughter also displayed unique discriminative markers, with distances 54-36 and 55-46, as well as 61-65 and 55-49 (Figure S11), yielding statistically significant p values in comparison to other emotional categories, thereby reinforcing their relevance in identifying laughter expressions. In the case of mirror pain, inner mouth distances proved particularly effective for differentiation, especially when contrasted with yawning and laughter. Conversely, outer mouth distances showed stronger discriminative power when compared to yawning alone. These findings align with earlier analyses indicating that vertical mouth metrics are good discriminators, not only for yawning, but also for laughter and mirror pain. Overall, the BWV analysis confirmed that these features distinguish between the targeted emotional states better than those of other facial regions.

Beyond the top ten discriminative distances, those with BWV values below one were further analyzed using the Conover post hoc test. Distances associated with the mouth, eyes, and facial contour demonstrated strong discriminatory power for laughter (Figure S10). Eye-related distances (Figure S12) showed statistical significance across nearly all emotional categories except between mirror pain and laughter, reflecting shared mechanisms: AU7 (lid tightener) in mirror pain and AU6 (cheek raiser) in laughter both produce partial eye closure, reducing discrimination efficacy between these states.

Upper facial distances, particularly those associated with eyebrow movement (28-24, 28-22, 28-21), effectively distinguished mirror pain from laughter and neutral conditions, where eyebrow activity is less prominent (Figure S13). However, differentiating mirror pain from yawning proved more challenging due to overlapping AU activation: AU4 (brow lowerer) and AU10 (upper lip raiser) at low intensities in mirror pain and AU9 (nose wrinkler) in suppressed yawning lead to expression convergence that complicates classification.

Notably, distances with the lowest BWV values were concentrated in the upper facial region (Figure 4), indicating reduced discriminative power and highlighting the need for future research in this area. The limited performance of upper facial metrics may be attributable to decreased landmark detection accuracy (especially around eyebrows), potentially influenced by electrode placement and image quality, and the inherent anatomical constraint that upper face musculature exhibits less variability and motion compared to the more expressive lower face. See Figure 10 for an integrated representation of AUs and statistical results for each emotional state.

Figure 10.

Figure 10

Graphical integration of results (n = 17)

Representation of the AUs and starting distances integrated with the results obtained from the statistical analyses and the BWV for laughter (top left), yawning (top right), and mirror pain (bottom left). Features highlighted in red are those excluded based on the Friedman and Conover statistical analyses, as they were deemed not significant and had the lowest BWV values among all non-discriminative distances (lowest-ranked non-discriminative). Discriminative distances are marked in green; highest-ranked and moderately high-ranked non-discriminative distances are shown in blue and purple, respectively; and moderately low-ranked non-discriminative features are shown in orange.

Validation of stimuli

Boxplots and violin plots were used to visually inspect the distribution of distances across stimuli, participants, and neutral reference conditions (Figures 6, 7, and 8). Mann-Whitney U tests revealed distinct patterns for each emotional state (detailed results in Method S2.1).

For laughter, AU12 (lip corner puller) and AU25 (lip part) exhibited consistent directional trends between participants and stimuli relative to neutral expressions, with statistically significant deviations congruent with expected AU activation. However, significant intensity differences emerged: participants typically displayed smiles (indicating perceived positive valence with reduced intensity) while stimuli showed full laughter. AU12 was present in both groups, though AU25, more specifically associated with laughter than smiling, exhibited significantly greater variation in the stimuli group. Nearly all distance comparisons showed significant differences, with only two AU12 distances (49-28 and 55-43) showing no significant difference between participants and stimuli (Figure 6).

For yawning, AU9 (nose wrinkler) and AU43 (eye closure) showed significant deviations from neutral in both groups, aligned with theoretical predictions. AU27 (mouth stretch), the defining AU for yawning, demonstrated markedly stronger activation in stimuli than participants, reflecting the exaggerated nature of selected yawning stimuli (featuring pronounced mouth openings) versus participants’ frequent yawn suppression. Distance changes were consequently attenuated in participants, with frame exclusions necessary in some cases due to occlusion. AU43 displayed similar intensity and distribution across both groups (non-significant difference), suggesting eye closure as the most reliably observed feature in both overt and suppressed yawns (Figure 7).

Mirror pain presented the greatest challenge, likely due to its contextual dependence. AU4 (brow lowerer) and AU7 (lid tightener) demonstrated trends broadly aligned with neutral expressions, with limited intensity variation across conditions. Mirror pain was the only emotional state in which participant activations were consistently lower than stimuli. Statistical analyses revealed significant distributional differences for most group comparisons, with exceptions noted for AU7 distances 44-48 and 45-47 between stimuli and neutral. This pattern may reflect different mechanisms: Phase 1 participants expressed mirror pain in response to overtly painful scenes, whereas Phase 2 participants displayed analogous furrowed expressions due to confusion, uncertainty, or social discomfort rather than direct emotional contagion (Figure 8).

K-means clustering (k = 4) performed on discriminative mouth distance features validated their capacity to distinguish emotional stimuli based on facial expression patterns (Figure 9). The neutral category formed a well-isolated cluster (Cluster 0), with 91.20% of all neutral samples grouped together, confirming its role as a reference condition. The remaining emotional states are distributed across three additional clusters. Yawning samples were predominantly assigned to Cluster 1 (62.07% of all yawns), characterized by full mouth opening (AU27), while laughter samples were primarily captured in Cluster 2 (62.86% of all laughter), exhibiting moderate mouth opening (likely AU12). Cluster 1 also contained some laughter samples showing stronger AU25 activation. Mirror pain samples distributed across clusters associated with closed or semi-open mouth configurations, rarely appearing in the open-mouth Cluster 1. This spatial organization is visually evident in Figure 9A: moving from left to right in PC1-PC2 space, we observe a transition from closed-mouth expressions (Cluster 0: mostly neutral and some mirror pain) through intermediate configurations (Cluster 3: increasing mirror pain; Cluster 2: mix of laughter and mirror pain with moderate mouth opening) to fully open-mouth expressions (Cluster 1: predominantly yawning with full AU27 activation, plus some laughter with AU25). This result is consistent with unsupervised methods that reveal underlying structure based on feature similarity rather than imposing categorical boundaries, suggesting that contagious emotional activation may exist along a continuum without abrupt discrete shifts. Detailed cluster composition and statistical validation are provided in Method S2.2.

In summary, this study demonstrates that contagious emotional states exhibit identifiable yet attenuated facial expression patterns that can be captured through a combined qualitative and quantitative framework. Laughter and yawning showed robust transmissibility, whereas mirror pain remained strongly context dependent. The predominance of lower-face metrics underscores their importance in facial expression recognition, while the reduced discriminative power of upper-face features highlights areas for methodological refinement. Together, these findings support the integration of geometric and statistical approaches with traditional FACS analysis to better characterize spontaneous and contagious facial expressions.

Limitations of this study

Several limitations of this study warrant consideration. First, the inherently complex nature of emotional reactions poses challenges for experimental precision. Although the stimulus selection process prioritized recognizability and clarity, spontaneous facial expressions are rarely entirely “pure,” even under controlled laboratory conditions. Individual factors, such as participants’ current mood, cognitive state, and personal disposition, may introduce variability in emotional responsiveness, thereby affecting the consistency of observed facial reactions. Furthermore, the reliance on facial stimuli may have constrained participants’ emotional engagement. Unlike multimodal or immersive approaches, facial expressions displayed in videos may not fully evoke the intended emotional resonance. Moreover, the intrinsic nature of the emotional states examined in this study remains a subject of ongoing debate. Some authors12,72,73,74 refer to laughter and yawning as emotionally laden behaviors shared across species, serving crucial roles in social communication and group cohesion. Others conceptualize laughing and yawning either as physiological or neurocognitive phenomena that involve advanced cognitive processing and are not necessarily linked to universally recognizable emotional content,21,75 or as basic motor responses lacking intentional emotional expression.76 Similarly, mirror pain is not uniformly classified as an emotional state. Some researchers interpret it as a cognitive response, involving mentalizing and perspective-taking,77 while others view it as an empathic response, involving affective resonance and motor simulation.78,79,80 Comparable debates persist concerning the operational definitions of emotional contagion and emotional mimicry, with variation in how these constructs are distinguished, whether through automatic sensorimotor responses, affective sharing, or context-dependent social interpretation.1,14,81

A second key limitation pertains to the final sample used for analysis. Given their reliance on spontaneous emotional reactions, a substantial number of participants did not produce observable facial expressions and were consequently excluded. While such exclusions are common in research on spontaneous behavior, the resulting reduction in sample size impacts statistical power. We conducted a post-hoc power analysis to transparently assess this limitation. The analysis revealed that observed Kendall’s W values ranged from 0.008 to 0.878, with corresponding power estimates between 0.067 and 0.998. Notably, the majority of our significant findings exhibited large effect sizes (many W > 0.60) well above the minimal detectable threshold (Wmin = 0.330 for power = 0.80), confirming the robustness of the identified discriminative patterns. While effects smaller than W = 0.33 may exist but remain undetected given our sample characteristics, the large observed effect sizes indicate that the detected facial configurations represent substantial and meaningful patterns rather than subtle variations. To increase overall statistical power and achieve broader identification and validation of emotional expression patterns, future studies would benefit from larger and more diverse samples, with greater independence across emotional categories. Additionally, our sample was composed exclusively of Italian participants, which may constrain the cross-cultural applicability of the identified AU patterns. Cultural and demographic differences can shape both emotional expressiveness and interpretation; thus, the extent to which these findings generalize to other populations remains uncertain and warrants further investigation. Full methodological details of the power analysis are provided in the quantification and statistical analysis section.
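For readers reproducing this check, Kendall’s W can be derived from the Friedman statistic as W = χ² / (N(k − 1)); the sketch below applies this standard conversion (the chi-square value in the comment is purely illustrative, and the power estimates themselves were obtained with G*Power, not with this code):

```python
def kendalls_w(friedman_chi2: float, n_subjects: int, k_conditions: int) -> float:
    """Kendall's W from the Friedman chi-square statistic: W = chi2 / (n * (k - 1))."""
    return friedman_chi2 / (n_subjects * (k_conditions - 1))

# Illustrative example with the study's reduced sample (n = 17) and four conditions:
# a hypothetical Friedman chi-square of 22.4 would give W = 22.4 / (17 * 3) ≈ 0.44,
# above the minimal detectable threshold (W_min = 0.330 for power = 0.80) reported above.
```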

Finally, all facial images used in the analysis were frontal, though some exhibited slight tilting, as the study prioritized capturing spontaneous rather than posed expressions. Accordingly, landmark localization and distance calculations were performed on 2D frame images and carefully validated for accuracy. To preserve ecological validity, we did not exclude participants based on appearance (e.g., facial hair, hairstyle) or minor head movements, embracing a more naturalistic experimental design. Future research could benefit from implementing three-dimensional facial analysis techniques to better accommodate positional variability, improve detection accuracy in regions where 2D methods prove limited, and compute Euclidean distances that also exploit the third dimension, moving the field toward a fully pose-, makeup-, and camouflage-independent 3D perspective.

Resource availability

Lead contact

Further information and requests for resources should be directed to and will be fulfilled by the lead contact, Alessia Celeghin (alessia.celeghin@unito.it).

Materials availability

This study did not generate new unique reagents.

Data and code availability

  • Anonymized raw facial landmark coordinates (JSON files), calculated Euclidean distance matrices (non-normalized and min–max normalized), and processed datasets for the experimental phases have been deposited at Mendeley Data and are publicly available as of the date of publication. The accession number (DOI) is listed in the key resources table: https://doi.org/10.17632/j96bffmhgc.1. Raw video recordings cannot be shared due to participant privacy concerns and ethical restrictions imposed by the University of Turin’s Academic Bioethics Committee.

  • Custom Python scripts for facial landmark extraction, Euclidean distance computation, global normalization workflow, between/within-class variability (BWV) calculation, and statistical analyses (Friedman tests and K-means clustering) have been deposited at Mendeley Data and are publicly available as of the date of publication under the same accession number as the data, as listed in the key resources table: https://doi.org/10.17632/j96bffmhgc.1.

  • Any additional information required to reanalyze the data reported in this study is available from the lead contact upon request.

Acknowledgments

I.A.C.J. is supported by the PRIN 2022 grant from the Italian Ministry of University and Research (MUR) (grant 2022KR9Y29) to A.C. We would like to thank Matilde Blasi, Lorenzo Morizio, and Andrea D’Alterio for their contribution to data collection and preliminary analysis, and all experiment participants for allowing their data to be acquired.

Author contributions

Conceptualization, A.C., M.F., and F.M.; methodology, A.C., I.A.C.J., M.F., E.C.O., and F.M.; investigation, I.A.C.J. and M.F.; formal analysis, I.A.C.J., M.F., and E.C.O.; data curation, A.C., E.C.O., and M.F.; visualization, I.A.C.J. and M.F.; resources, A.C. and E.V.; supervision, A.C. and F.M.; project administration, A.C.; funding acquisition, A.C.; writing – original draft, I.A.C.J., A.C., and M.F.; writing – review and editing, all authors.

Declaration of interests

The authors declare no competing interests.

STAR★Methods

Key resources table

REAGENT or RESOURCE SOURCE IDENTIFIER
Deposited data

Facial landmark dataset and processed distance matrices This study https://doi.org/10.17632/j96bffmhgc.1

Software and algorithms

Python (v3.11.11) Python Software Foundation https://www.python.org
dlib (v19.24.6) King82 http://dlib.net/
Pre-trained facial landmark predictor (shape_predictor_68_face_landmarks.dat) dlib http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
G∗Power (version 3.1.9.4) Faul et al.83 https://www.psychologie.hhu.de/arbeitsgruppen/allgemeine-psychologie-und-arbeitspsychologie/gpower
Custom Python analysis scripts This study https://doi.org/10.17632/j96bffmhgc.1

Others

Webcam AUKEY PC-W1 Full HD 1080p
Headphones Sennheiser PC5
EMG Acquisition System BIOPAC Systems, Goleta, CA Cat#MP160
EMG Modules BIOPAC Systems, Goleta, CA Cat#EMG100C
EMG Electrodes (Ag/AgCl, 4 mm) BIOPAC Systems, Goleta, CA Cat#EL254S

Experimental model and study participant details

Phase 1 participants - Stimulus creation

A selection of evocative video clips was presented to 30 Italian participants (15 women, 15 men; age range 19–50 years, M = 25.5, SD = 5.22), whose facial expressions were recorded during viewing. These facial expressions were further validated by a second group of 30 participants (21 women, 9 men; age range 20–56 years, M = 24.8, SD = 6.2).

Phase 2 participants - Main study

To determine the appropriate number of subjects, we used G∗Power 3.1.9.4.83 We conducted an a priori power analysis (ANOVA, repeated measures, within factors) with α = 0.05, power = 0.80 and an effect size of 0.25. The results indicated a required total sample size of 32. Therefore, we recruited a group of 32 Italian participants (19 women, 13 men; age range 23–33 years, M = 28, SD = 5).

Ethics statement

Before starting Phase 1, participants provided informed consent and were briefed about the experiment’s timeline and procedure. As in Phase 1, Phase 2 participants provided informed consent after receiving information about the experimental procedure and timeline and were not informed about video recording until after the experiment, at which point they provided additional consent for the use of their facial recordings. This research was conducted in accordance with the Declaration of Helsinki. The study protocol (Ref. No. 0183752, dated 28/02/2023) was approved by the University of Turin’s Academic Bioethics Committee.

Method details

Experimental design overview

We implemented a two-phase experimental protocol to investigate facial expressions during emotional contagion through AU analysis. Phase 1 (Figure S15), which served an instrumental role in our study, used evocative video clips selected from the YouTube platform to elicit spontaneous laughter, yawning, and mirror pain responses in a group of participants, from whom we recorded facial expressions. A detailed description of each clip’s content and duration is provided in Table S4. The facial expressions recorded became the stimuli used in Phase 2 (Figure S16), our main experimental phase, where the isolated facial expressions of the participants were presented without any contextual cues or audio. Since participants in Phase 2 responded only to facial displays, related expressions that emerged during viewing could be studied as manifestations of emotional resonance. Participants were not given any instructions to imitate or copy the expressions they saw; they were simply told to watch the videos. By removing contextual information and avoiding explicit instructions to mimic, we could observe whether participants would naturally produce similar facial expressions in response to viewing emotional faces alone.

Phase 1 stimulus creation

This preparatory phase focused on generating facial affective stimuli for our main experiment. Participants were instructed to relax and naturally express their feelings while watching the videos. To ensure optimal recording conditions, they were asked to avoid excessive movement and to remove glasses or anything that might obstruct facial visibility. To maintain genuine emotional reactions and prevent self-consciousness, participants were not informed about video recording or the specific facial expressions being studied; however, electrodermal activity was acquired to provide participants with a compelling research rationale and motivate the necessary steadiness during the session. Following the viewing session, they were informed about the video capturing and asked to provide additional consent for the use of their facial recordings. Those who did not agree were excluded from the study.

Participants were seated 50 cm from the screen, wearing Sennheiser PC5 headphones for audio clarity and isolation. A high-definition camera (AUKEY Webcam PC-W1 Full HD 1080p) covertly recorded facial expressions at 30 fps with 720px resolution. Participants were isolated from their surroundings by three black panels placed around them to ensure the spontaneity of their emotions and make them feel comfortable without feeling observed during the viewing, and room lighting was adjusted to maintain facial visibility while allowing clear screen viewing. Video sequences were randomized across participants to control order effects.

The faces of Phase 1 participants were recorded, and each video acquisition was visually inspected. Clips with the most expressive facial reactions for a specific emotional category were manually selected. This collection process involved identifying the most salient response to each stimulus, such as the peak of an expression, a critical step in maximizing the clarity and effectiveness of the stimuli. This screening procedure was carried out by the entire project team to gather multiple opinions on the detected facial expressions, and a montage of the selected faces recorded during Phase 1 was created. To ensure the consistency of the stimuli, the research team screened the clips to select only clear, authentic expressions and discard any that appeared ambiguous or forced. These facial expressions were further validated by a second group of participants. Based on participants’ responses, we selected the 10 stimuli per facial expression that received the highest number of correct emotional identifications from the evaluated set. The faces from Phase 1 with the highest rate of correct identification thus became the new emotional stimuli shown to a third group of subjects during Phase 2 of the experiment.

Phase 2 contagion stimuli presentation

Phase 2 involved presenting the facial displays of laughter, yawning, and mirror pain collected during Phase 1. This main experimental phase (Figure S16) aimed to examine how participants responded to facial displays alone, without contextual or auditory information, allowing us to analyze expressions that emerged purely from viewing others’ facial displays. The experimental setup was the same as in Phase 1.

The stimuli consisted of two sets of video clips presented in randomized order. Each set contained a series of emotional packages featuring laughter, yawning, mirror pain, and neutral expressions. Each package (one for each emotion) featured 5 facial expressions from Phase 1 subjects chosen for their expressiveness, totaling 20 faces per set, for a total of 40 facial expressions (10 for each emotion). The specific structure and total duration of each emotional package are detailed in Table S5. The consecutive presentation of five facial expressions per emotional category follows established protocols in emotional contagion research.12,22 This design was intentionally chosen to provide sufficient exposure opportunities for automatic contagion mechanisms to activate naturally, given that these responses occur independently of conscious intention or deliberate imitation. Stimuli were presented in randomized orders across participants to control order effects. Participants received no instructions regarding facial expression production and remained unaware of video recording during stimulus presentation, eliminating self-monitoring behaviors and minimizing potential demand characteristics. EMG recording provided physiological validation that observed facial responses reflected genuine muscle activation rather than performative displays, though detailed EMG analysis is beyond the scope of this study. To ensure a reset of their focus, participants were subjected to a moment of neutral visual input; in detail, each video set display began and ended with 6 s of a black screen. Furthermore, a 2-s black screen was inserted between each individual video clip to allow participants to transition from one emotional expression to the next. The presentation of each video set was designed to last approximately 10 min, a duration chosen to maintain participants’ engagement and minimize potential effects of boredom or muscular fatigue, and a 40-min break was implemented between the two video sets for questionnaire completion. The entire experimental session lasted approximately 60 min for each participant. The acquired sequences were recorded at a frame rate of 30 fps, at a maximum resolution of 720 pixels in mp4 format.

Data processing and selection

The adopted methodological approach involved a qualitative analysis of facial expressions and a quantitative analysis of the extracted features. To analyze participants’ responses, the Phase 2 facial expression recordings were reviewed. The aim was to detect and identify unique features of expressions characterizing spontaneous emotional states based on geometrical properties. Therefore, as AUs provide detailed descriptions of facial movements, we identified the Euclidean distances that best characterize each AU.

Participants, EMG-based data processing and frames extraction

The initial dataset included recordings from 32 participants during Phase 2. The selection of participants and video segments followed a two-step filtering procedure to ensure that only genuine, physiologically validated facial expressions were retained.

Visual screening for emotional contagion

All recordings were first reviewed to identify moments in which participants visibly reacted to the presented stimuli, for example, responding to laughter videos with a smile or laugh, or to yawning videos with a yawn. Participants who did not exhibit any observable emotional response were excluded at this stage (3 participants), leaving 29 participants with at least one potential emotional reaction coherent with the emotional stimuli.

EMG-based validation of facial activity

For these 29 participants, EMG signals were analyzed from three key facial muscles: the zygomaticus major (lifting the mouth corners, associated with smiling and laughter), the corrugator supercilii (involved in frowning, associated with pain, sadness, and anger), and the orbicularis oculi (surrounding the eyes, important for differentiating types of smiles and detecting yawning). A differential recording configuration was used, with shielded Ag/AgCl electrodes (EL254S, 4 mm diameter) placed over the corresponding left-side facial muscles, and a seventh unshielded electrode on the left temporal bone serving as reference. Signals were acquired at 1000 Hz using a BIOPAC MP160 system (BIOPAC Systems, Goleta, CA) with three EMG100C modules. An example of EMG acquisition and muscle activity can be found in Figure S17.

EMG recordings were preprocessed and segmented into overlapping 2-s windows with 50% overlap. Signal energy, computed as the sum of squared amplitudes, quantified muscle activation for each window. The most energetic windows per muscle were selected as candidate segments. Only intervals showing EMG-confirmed activation coherent with the expected emotional category (laughter/smile, yawn, or mirror pain) were retained. EMG-defined epochs typically lasted between a few seconds and 10–15 s (M = 9.2, SD = 3.4), depending on the intensity and duration of the emotional display.
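To make this step concrete, the following minimal sketch illustrates one possible implementation of the windowing and energy computation; it assumes a one-dimensional EMG trace sampled at 1000 Hz, and the function and variable names are illustrative rather than taken from the original analysis scripts.

```python
import numpy as np

def window_energy(signal, fs=1000, win_s=2.0, overlap=0.5):
    """Return (start_sample, energy) pairs for overlapping windows of an EMG trace."""
    signal = np.asarray(signal, dtype=float)
    win = int(win_s * fs)                      # 2-s window -> 2000 samples at 1000 Hz
    step = int(win * (1 - overlap))            # 50% overlap -> 1000-sample step
    energies = []
    for start in range(0, len(signal) - win + 1, step):
        segment = signal[start:start + win]
        energies.append((start, float(np.sum(segment ** 2))))  # energy = sum of squared amplitudes
    return energies

# The most energetic windows per muscle would then be kept as candidate epochs, e.g.:
# top_windows = sorted(window_energy(emg_trace), key=lambda w: w[1], reverse=True)[:5]
```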

As a result, the final dataset included 29 participants, each exhibiting at least one EMG-validated expression per emotional category, ensuring that all retained video segments represented authentic, physiologically supported facial expressions (Figure S17).

Frame extraction

Video frames were initially extracted at a rate of 30 frames per second (fps) to capture the full temporal dynamics of facial expressions. Within each EMG-validated epoch, 10 representative frames per second were retained to achieve a temporally balanced yet computationally efficient sampling of the expression dynamics with less redundancy. Consequently, the total number of frames analyzed per emotion varied proportionally to the duration of the EMG-defined segment (M = 92, SD = 34). Given the size of our dataset (n = 29), relying on automatic frame selection alone could have omitted important information. Therefore, an expert FACS coder reviewed the dataset and, for each participant, retained the most expressive frames while removing redundant ones.
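As an illustration of the sampling scheme, a sketch along these lines could downsample a 30 fps recording to 10 frames per second within an EMG-validated epoch; the use of OpenCV, the file path, and the epoch bounds are assumptions made for the example, not details of the original pipeline.

```python
import cv2

def sample_epoch_frames(video_path, start_s, end_s, src_fps=30, keep_fps=10):
    """Keep every (src_fps // keep_fps)-th frame inside the epoch [start_s, end_s]."""
    cap = cv2.VideoCapture(video_path)
    stride = src_fps // keep_fps                         # 30 fps -> keep every 3rd frame
    cap.set(cv2.CAP_PROP_POS_FRAMES, int(start_s * src_fps))
    frames = []
    for idx in range(int((end_s - start_s) * src_fps)):
        ok, frame = cap.read()
        if not ok:
            break
        if idx % stride == 0:
            frames.append(frame)
    cap.release()
    return frames
```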

Handling of off-target expressions

Rare expressions not elicited by the stimulus, such as laughing during a mirror pain segment or yawning during other emotions, were excluded from all analyses. Overall, 19.21% of instances were discarded due to emotion-expression mismatch, including smiling during yawning videos (4.19%) and mirror pain videos (7.14%), and yawning during laughter stimuli (2.69%) and mirror pain stimuli (4.43%). In cases where participants did not show any visible reaction to the emotional stimuli, a neutral face was labeled and included if it provided a more functional image for analysis (e.g., a more frontal view).

Qualitative analysis

A certified FACS coder, i.e., a person who has received certification in recognizing AUs (hereafter, the ‘expert’), analyzed the dataset by thoroughly examining all images and carefully observing variations in facial muscles and features to catalog the movements associated with each contagious expression. When multiple instances of the same emotion occurred, the instance with the highest expression intensity, assessed based on the FACS coder’s expertise, was used, while the other expressions were retained for contagion assessment. This analysis identified all the AUs present in the dataset, ensuring a comprehensive characterization useful for assessing spontaneous facial expressions.

Among the subjects involved in Phase 2, some did not show significant facial activations according to the expert because, while EMG recordings had revealed muscular activation, the corresponding facial expressions in the videos were too subtle to be detected by visual inspection. As a result, the initial dataset was reduced to the 17 subjects who exhibited meaningful facial expression patterns for one or more emotional categories. The detailed results of this qualitative analysis, including the complete characterization of all observed AUs for each emotion, are provided in Method S1.

Quantitative analysis

Description of the distance extraction process

To identify facial distances, we positioned facial landmarks (LMs), which served as reference points for extracting geometric features from faces. The code for extracting LMs was implemented in Python (version 3.11.11) in Google Colaboratory using dlib’s pre-trained model (shape_predictor_68_face_landmarks) for the placement of 68 facial LMs as follows: 1–17 along the jawline, 18–27 on the eyebrows, 28–36 on the nose, 37–48 on the eyes, and 49–68 on the mouth82 (Figure S18). The model outputs the 2D spatial coordinates of each LM within the image. For each emotional category, facial landmarks were extracted from all frames selected by the FACS expert (n = 17, dataset for facial pattern identification) and all frames selected based on EMG activations (n = 29, dataset for emotional contagion assessment). The distances between landmarks were calculated for each frame individually. No temporal averaging across frames was performed; instead, each frame contributed independently to the quantitative analysis, respecting the expert-driven selection of the most expressive frames and ensuring no loss of critical facial information. To verify the accuracy of the placements, frames containing the LM positions were saved for further evaluation. After manual review, some frames were discarded due to undetected faces or incorrect landmark placements (Figure S18), caused by occlusion, lighting variation, or face orientation. After expert validation by a certified FACS coder, an average of 69 ± 26 frames per subject were retained for analysis across all emotions (laughter: 95 ± 88; yawn: 43 ± 85; mirror pain: 69 ± 144 frames). Table S7 reports the mean number of frames per subject for each emotional category after excluding frames with incorrect landmark positioning. Standard deviations indicate substantial variability between subjects due to differences in expression duration, intensity, and the presence of suppressed expressions (particularly evident in mirror pain).
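For reference, a minimal sketch of this landmark-extraction step with dlib’s pre-trained 68-point model is given below; the frame file name is a placeholder, and the snippet assumes the predictor file listed in the key resources table has been downloaded locally.

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

frame = cv2.imread("frame_0001.png")            # placeholder frame file
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

faces = detector(gray, 1)                       # upsample once to detect smaller faces
if faces:
    shape = predictor(gray, faces[0])
    # 68 x 2 array of (x, y) pixel coordinates (0-based indices 0-67)
    landmarks = np.array([(p.x, p.y) for p in shape.parts()])
```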

Among the set of AUs selected by the expert for each emotional category, some AUs could not be accurately described using the distances derived from the available LM placements and were therefore excluded from the quantitative analysis. Next, we defined the optimal set of distances to characterize every AU. These distances were calculated using the Euclidean distance formula in a 2D space. To ensure independence from the subjects’ facial size, normalization was performed relative to the face width, i.e., each distance was divided by the facial width calculated approximately as the Euclidean distance between the right and left tragion landmark.84 Then, min-max scaling (Equation 1) was applied to each distance across all frames to guarantee scale-independent measurements:

x′ = (x − x_min) / (x_max − x_min), (Equation 1)

In Equation 1, x represents the original distance value, while x_min and x_max indicate the minimum and maximum values within the distribution, respectively. The formula provides the value x′, which is the normalized version of x within the range 0–1.
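A minimal sketch of these two normalization steps follows; it assumes `landmarks` is the 68 × 2 coordinate array from the previous snippet and approximates the bitragial face width with the two jawline endpoints (dlib points 1 and 17, i.e., 0-based indices 0 and 16), since the dlib model does not mark the tragion explicitly.

```python
import numpy as np

def euclidean(p, q):
    return float(np.linalg.norm(np.asarray(p, dtype=float) - np.asarray(q, dtype=float)))

# Size normalization: divide each distance by an approximate face width
face_width = euclidean(landmarks[0], landmarks[16])    # jawline endpoints as tragion proxies
mouth_width = euclidean(landmarks[48], landmarks[54])  # e.g., distance between the mouth corners
scaled_distance = mouth_width / face_width

# Min-max scaling (Equation 1), applied to one distance across all frames
def min_max(values):
    values = np.asarray(values, dtype=float)
    return (values - values.min()) / (values.max() - values.min())
```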

The selection of distances was based on two criteria: some distances were measured between a fixed point and a moving point, while others were defined between two moving points.

Statistical analysis

After selecting the characteristic distances for each emotion and AU from the dataset filtered by the expert (n = 17), statistical tests were conducted to assess the distribution of distances and the homogeneity of variance.

The Kolmogorov-Smirnov test revealed p-values below 0.05 for all distances, indicating that the variables follow non-normal distributions, while the non-parametric Fligner-Killeen test yielded p-values below the threshold, suggesting that the data exhibit heteroscedasticity. Thus, we analyzed the behavior of the selected distances for characterizing AUs by assessing changes in the median and interquartile range of each distance for each emotion relative to the neutral expression (hereafter referred to as AU0). The consistency of the distances was evaluated based on their expected behavior (increase or decrease) relative to AU0, as a function of the activation of the specific AU to which each distance is associated.
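As a brief illustration, the two preliminary checks could be run per distance as sketched below, where `groups` is a list of per-condition value arrays; the reference distribution for the Kolmogorov-Smirnov test is not detailed above, so a normal fit to the pooled sample is assumed here.

```python
import numpy as np
from scipy.stats import kstest, fligner

def distribution_checks(groups):
    pooled = np.concatenate([np.asarray(g, dtype=float) for g in groups])
    # Normality check: KS test against a normal with the sample's mean and SD
    ks_p = kstest(pooled, "norm", args=(pooled.mean(), pooled.std(ddof=1))).pvalue
    # Homoscedasticity check across emotional conditions
    fligner_p = fligner(*groups).pvalue
    return ks_p, fligner_p
```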

Furthermore, to capture the subtleties of spontaneous facial expressions of laughter, yawning, and mirror pain, we assessed the discriminative power of each distance across all emotions and determined whether the selected features not only characterized a given AU effectively but also distinguished between different facial expressions. For this purpose, the Friedman test combined with the False Discovery Rate (FDR) correction (Benjamini-Hochberg procedure) was adopted, and the effect size Kendall’s W was computed. Then, to highlight the specific emotion pairs showing significant differences for each variable, the Conover post hoc test was applied, and the effect size r was computed. Notably, multiple emotional responses for each subject were included, leading to the adoption of dependent-group tests.
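A hedged sketch of this testing pipeline is shown below, assuming a (subjects × conditions) array for each distance; the Conover post hoc step uses the scikit-posthocs implementation for Friedman-type (dependent) data, and all names are illustrative.

```python
import numpy as np
from scipy.stats import friedmanchisquare
from statsmodels.stats.multitest import multipletests
import scikit_posthocs as sp

def friedman_with_w(data):
    """data: (n_subjects, k_conditions) array for a single AU-related distance."""
    n, k = data.shape
    stat, p = friedmanchisquare(*[data[:, j] for j in range(k)])
    kendalls_w = stat / (n * (k - 1))          # Kendall's W from the Friedman chi-square
    return stat, p, kendalls_w

# After looping over all distances and collecting the p-values:
# reject, p_fdr, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")

# Pairwise Conover post hoc comparisons between conditions for one distance:
# pairwise_p = sp.posthoc_conover_friedman(data)   # DataFrame of pairwise p-values
```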

Following this analysis, a parallel evaluation was conducted to identify the most distinguishing features among the categories studied. This assessment examined the variability both within and between emotional groups across the 17 subjects. Specifically, the ratio of inter-class variability (between emotions) to intra-class variability (within emotions), known as the Between/Within Class Variability (BWV),85 served as a key indicator. If the BWV value associated with a specific distance is greater than one, the inter-class variability exceeds the intra-class variability, showing that the distance is discriminative.

The BWV is expressed as in Equation 2:

BWV = [Σ_{i=1}^{c} (m_i − m)²] / [Σ_{i=1}^{c} (1/n_i) · Σ_{x∈φ_i} (x − m_i)²], (Equation 2)

where c represents the number of emotional categories considered, which includes the three contagion facial expressions and neutral; m is the mean of all values of a given distance across all frames and categories; φ_i refers to the set of frames belonging to the ith emotional category; n_i is the cardinality of φ_i, which corresponds to the number of frames for the ith emotional category; and m_i is the mean value of the distance for the ith emotional category. Lastly, x is the value of the distance being considered. Within our analysis, integrating the results of the Friedman and Conover tests with assessments of feature variability allows for a more robust evaluation of which features reliably discriminate between different facial expressions.
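A compact sketch of this computation, following Equation 2, is given below; it assumes `frames_by_emotion` maps each of the c categories (laughter, yawning, mirror pain, and neutral) to the values of a single distance over that category’s frames.

```python
import numpy as np

def bwv(frames_by_emotion):
    """Between/Within Class Variability for one distance (Equation 2)."""
    groups = [np.asarray(v, dtype=float) for v in frames_by_emotion.values()]
    grand_mean = np.concatenate(groups).mean()                   # m: mean over all frames and categories
    between = sum((g.mean() - grand_mean) ** 2 for g in groups)  # sum_i (m_i - m)^2
    within = sum(np.sum((g - g.mean()) ** 2) / len(g) for g in groups)  # sum_i (1/n_i) sum_x (x - m_i)^2
    return between / within
```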

Emotional alignment and validation of stimuli

Further analyses were conducted to investigate the emotional reactivity of participants in response to the stimuli. Thus, the consistency between the behavior of the previously identified distances in the participants and in the stimuli was studied.

Firstly, the percentage of participants (n = 29) who exhibited an observable emotional response to at least one stimulus for each emotional state was computed; then, only the concordant responses occurring after each stimulus presentation were considered. This dataset included solely those participants’ facial expressions for which concordance with the expected reaction to the stimuli emerged (Table S6). Distances for each Action Unit (AU) and each emotional category were computed on this dataset and on the emotional stimuli.

Emotional stimuli dataset

To compare participants’ responses with the observed stimuli at the level of facial patterns during moments of emotional contagion, a dataset consisting solely of the stimuli was created. In this case, each stimulus corresponded to a different individual displaying one of the proposed emotions. The stimuli were reviewed, and the precise onset and offset of each emotion were identified. Ten frames per second were extracted, as for the other two datasets, and the frames were labeled according to the type of stimulus presented to the participant. Euclidean distances were subsequently calculated following the placement of facial landmarks. The number of frames selected in this case varied depending on the duration of the facial expression and the accuracy of landmark placement; some frames were excluded if the landmarks could not be reliably positioned.

A comparison was conducted between the participants’ dataset (n = 29), the stimuli and the neutral responses. In detail, for each AU and emotion, the distances that showed statistically significant differences according to the Conover test between the target emotion and the neutral condition were considered. This analysis aimed to determine whether participants’ reactions, characterized by the same significant AU distances as those found in the stimuli, reflected similar patterns of AU activation. To further explore differences in the distributions of distances, we applied two-tailed Mann–Whitney U tests (pcr = 0.05) to the following group comparisons: participants vs. stimuli, participants vs. neutral, and stimuli vs. neutral. This allowed for a more detailed examination of potential differences in intensity and spread across conditions.
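For completeness, each of these pairwise comparisons can be run with SciPy as sketched below; the input arrays stand in for the per-condition values of a given AU-related distance and are purely illustrative.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def compare_groups(a, b):
    """Two-tailed Mann-Whitney U test between two independent sets of distance values."""
    return mannwhitneyu(np.asarray(a, dtype=float), np.asarray(b, dtype=float),
                        alternative="two-sided")

# Example usage for one AU-related distance (hypothetical variable names):
# result = compare_groups(participant_values, stimulus_values)
# result.statistic, result.pvalue
```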

To further provide a data-driven validation of the stimuli, we conducted a k-means clustering analysis by combining the datasets of neutral expressions and emotional stimuli. The goal was to assess whether the distances identified as discriminative by the BWV analysis could reveal emotion-specific clusters with an unsupervised approach. Prior to the application of the k-means algorithm, for which k = 4 was set, principal component analysis (PCA) was performed to reduce noise and improve interpretability, following the removal of outliers. For each resulting cluster, the percentage of each label within the cluster was computed, reflecting its internal composition, as well as the proportion of all samples associated with a given emotional category, which shows the distribution of emotional states across groups. This additional step allowed us to verify whether the distances previously found to be discriminative in participants’ responses could also distinguish between emotional stimuli in an unsupervised framework. Ultimately, this analysis aimed to determine whether those same distances retain their discriminative power when applied directly to the stimuli.
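The sketch below illustrates one way this unsupervised validation could be implemented with scikit-learn, assuming `X` is a (frames × discriminative distances) matrix pooling neutral and stimulus frames and `labels` carries the emotion tags used only to inspect cluster composition; the number of retained principal components is an assumption, as it is not specified above.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def cluster_composition(X, labels, n_components=2, k=4, random_state=0):
    """PCA for noise reduction, then k-means; report the label shares per cluster."""
    X_reduced = PCA(n_components=n_components).fit_transform(X)
    assignments = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit_predict(X_reduced)
    labels = np.asarray(labels)
    composition = {}
    for c in range(k):
        lab, cnt = np.unique(labels[assignments == c], return_counts=True)
        composition[c] = dict(zip(lab.tolist(), (cnt / cnt.sum()).round(2).tolist()))
    return assignments, composition
```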

Quantification and statistical analysis

Landmark extraction and data processing

Landmark extraction was implemented in Python (version 3.11.11) in Google Colaboratory using dlib’s pre-trained model (shape_predictor_68_face_landmarks) for the placement of 68 facial landmarks.82

Sample size determination

Sample size for Phase 2 was determined using G∗Power 3.1.9.4.83 An a priori power analysis (ANOVA, repeated measures, within factors) with α = 0.05, power = 0.80, and an effect size of 0.25 indicated a required total sample size of 32 participants.

Distribution and variance analysis

The Kolmogorov-Smirnov test was applied to assess normality of distributions. The non-parametric Fligner-Killeen test was used to evaluate homoscedasticity.

Statistical tests for discriminative analysis

The Friedman test combined with False Discovery Rate (FDR) correction (Benjamini-Hochberg correction) was adopted to assess the discriminative power of each distance across emotions. Effect size was computed using Kendall’s W. The Conover post-hoc test was applied to identify specific emotion pairs showing significant differences, with effect size r computed for each comparison.

Between/Within Class Variability (BWV) analysis was performed to identify the most distinguishing features among emotional categories. The BWV was calculated as the ratio of inter-class variability (between emotions) to intra-class variability (within emotions).

Group comparisons

Two-tailed Mann-Whitney U tests (pcr = 0.05) were applied for the following group comparisons: participants vs. stimuli, participants vs. neutral, and stimuli vs. neutral.

Unsupervised validation

K-means clustering analysis (k = 4) was conducted on principal components after outlier removal to validate discriminative distances in the stimuli dataset.

Significance level

Statistical significance was set at α = 0.05 for all tests.

Post-hoc power analysis - Estimated sensitivity and statistical power of Friedman analysis

To evaluate the sensitivity of the statistical facial pattern analysis, we adopted a theoretically based approach: the estimation relies on the relationship between the observed effect size, expressed as Kendall’s W, and the non-central chi-square distribution of the test. Specifically, the non-centrality parameter λ = n · (k – 1) · W (with n = number of subjects and k = number of emotional conditions) was used to calculate the probability that the test statistic exceeds the critical threshold at a significance level of α = 0.05.

Additionally, the minimal detectable W (Wmin) was calculated for a target power of 0.80, indicating the effect size necessary to obtain statistically significant results with the actual sample collected for each condition. Across all AU-distance comparisons, the observed Kendall’s W values ranged from 0.008 to 0.878, corresponding to estimated statistical power between 0.067 and 0.998. The Wmin required to achieve the target power was 0.330.

These results indicate that, for the design of the given study, only relatively large effects could be reliably detected (W > 0.33).
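A brief sketch of this estimate, based on the non-central chi-square approximation of the Friedman statistic, is given below; the function names are ours, and n should be the number of subjects actually contributing to each comparison.

```python
from scipy.optimize import brentq
from scipy.stats import chi2, ncx2

def friedman_power(w, n, k, alpha=0.05):
    """Estimated power of a Friedman test given Kendall's W, n subjects, and k conditions."""
    df = k - 1
    crit = chi2.ppf(1 - alpha, df)        # critical value under H0
    lam = n * (k - 1) * w                 # non-centrality parameter: lambda = n * (k - 1) * W
    return ncx2.sf(crit, df, lam)         # P(test statistic > critical value)

def minimal_detectable_w(n, k, target_power=0.80, alpha=0.05):
    """Smallest Kendall's W detectable at the target power for the given design."""
    return brentq(lambda w: friedman_power(w, n, k, alpha) - target_power, 1e-6, 1.0)
```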

Reporting of statistical parameters

Detailed statistical results, including exact p-values, effect sizes (Kendall’s W for Friedman test and r for Conover post-hoc test), sample sizes (n), and measures of dispersion (SD) are reported in the main text, figure legends, and supplemental information. Complete Friedman and Conover test results for all distance measures are provided in Table S3. Statistical parameters such as mean, median, and interquartile ranges are reported throughout the results section and figure legends.

Published: February 17, 2026

Footnotes

Supplemental information can be found online at https://doi.org/10.1016/j.isci.2026.115042.

Supplemental information

Document S1. Figures S1–S18, Tables S1 and S3–S7, and Methods S1 and S2
mmc1.pdf (2.1MB, pdf)
Table S2. Detailed quantitative results, including median values, interquartile ranges (IQRs), and consistency with expected AU activation, related to Figures 2 and 3
mmc2.xlsx (1.9MB, xlsx)

References

  • 1.Hatfield E. Cambridge University Press; 1994. Emotional Contagion. [Google Scholar]
  • 2.Hatfield E., Cacioppo J.T., Rapson R.L. Emotional Contagion. Curr. Dir. Psychol. Sci. 1993;2:96–100. doi: 10.1111/1467-8721.ep10770953. [DOI] [Google Scholar]
  • 3.Wróbel M., Imbir K.K. Broadening the Perspective on Emotional Contagion and Emotional Mimicry: The Correction Hypothesis. Perspect. Psychol. Sci. 2019;14:437–451. doi: 10.1177/1745691618808523. [DOI] [PubMed] [Google Scholar]
  • 4.Barsade S.G. The Ripple Effect: Emotional Contagion and its Influence on Group Behavior. Adm. Sci. Q. 2002;47:644–675. doi: 10.2307/3094912. [DOI] [Google Scholar]
  • 5.Palagi E., Caruana F., de Waal F.B.M. The naturalistic approach to laughter in humans and other animals: towards a unified theory. Philos. Trans. R. Soc. Lond. B Biol. Sci. 2022;377 doi: 10.1098/rstb.2021.0175. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Lanzilotto M., Dal Monte O., Diano M., Panormita M., Battaglia S., Celeghin A., Bonini L., Tamietto M. Learning to fear novel stimuli by observing others in the social affordance framework. Neurosci. Biobehav. Rev. 2025;169 doi: 10.1016/j.neubiorev.2025.106006. [DOI] [PubMed] [Google Scholar]
  • 7.de Waal F.B.M., Preston S.D. Mammalian empathy: behavioural manifestations and neural basis. Nat. Rev. Neurosci. 2017;18:498–509. doi: 10.1038/nrn.2017.72. [DOI] [PubMed] [Google Scholar]
  • 8.Palagi E., Celeghin A., Tamietto M., Winkielman P., Norscia I. The neuroethology of spontaneous mimicry and emotional contagion in human and non-human animals. Neurosci. Biobehav. Rev. 2020;111:149–165. doi: 10.1016/j.neubiorev.2020.01.020. [DOI] [PubMed] [Google Scholar]
  • 9.Olszanowski M., Wróbel M., Hess U. Mimicking and sharing emotions: a re-examination of the link between facial mimicry and emotional contagion. Cognit. Emot. 2020;34:367–376. doi: 10.1080/02699931.2019.1611543. [DOI] [PubMed] [Google Scholar]
  • 10.Prochazkova E., Kret M.E. Connecting minds and sharing emotions through mimicry: A neurocognitive model of emotional contagion. Neurosci. Biobehav. Rev. 2017;80:99–114. doi: 10.1016/j.neubiorev.2017.05.013. [DOI] [PubMed] [Google Scholar]
  • 11.Wild B., Erb M., Bartels M. Are emotions contagious? Evoked emotions while viewing emotionally expressive faces: quality, quantity, time course and gender differences. Psychiatry Res. 2001;102:109–124. doi: 10.1016/S0165-1781(01)00225-6. [DOI] [PubMed] [Google Scholar]
  • 12.Norscia I., Palagi E. Yawn Contagion and Empathy in Homo sapiens. PLoS One. 2011;6 doi: 10.1371/journal.pone.0028472. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Sun Y.-B., Wang Y.-Z., Wang J.-Y., Luo F. Emotional mimicry signals pain empathy as evidenced by facial electromyography. Sci. Rep. 2015;5 doi: 10.1038/srep16988. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Hess U., Fischer A. Emotional Mimicry: Why and When We Mimic Emotions. Soc. Personal. Psychol. Compass. 2014;8:45–57. doi: 10.1111/spc3.12083. [DOI] [Google Scholar]
  • 15.Dezecache G., Jacob P., Grèzes J. Emotional contagion: its scope and limits. Trends Cogn. Sci. 2015;19:297–299. doi: 10.1016/j.tics.2015.03.011. [DOI] [PubMed] [Google Scholar]
  • 16.Krestel H., Bassetti C.L., Walusinski O. Yawning—Its anatomy, chemistry, role, and pathological considerations. Prog. Neurobiol. 2018;161:61–78. doi: 10.1016/j.pneurobio.2017.11.003. [DOI] [PubMed] [Google Scholar]
  • 17.Gallup A.C., Gallup G.G. Yawning as a Brain Cooling Mechanism: Nasal Breathing and Forehead Cooling Diminish the Incidence of Contagious Yawning. Evol. Psychol. 2007;5 doi: 10.1177/147470490700500109. [DOI] [Google Scholar]
  • 18.Provine R.R. Yawning as a Stereotyped Action Pattern and Releasing Stimulus. Ethology. 1986;72:109–122. doi: 10.1111/j.1439-0310.1986.tb00611.x. [DOI] [Google Scholar]
  • 19.Kapitány R., Nielsen M. Are Yawns really Contagious? A Critique and Quantification of Yawn Contagion. Adaptive Human Behavior and Physiology. 2017;3:134–155. doi: 10.1007/s40750-017-0059-y. [DOI] [Google Scholar]
  • 20.Preston S.D., de Waal F.B.M. Empathy: Its ultimate and proximate bases. Behav. Brain Sci. 2002;25:1–20. doi: 10.1017/S0140525X02000018. [DOI] [PubMed] [Google Scholar]
  • 21.Platek S.M., Mohamed F.B., Gallup G.G. Contagious yawning and the brain. Brain Res. Cogn. Brain Res. 2005;23:448–452. doi: 10.1016/j.cogbrainres.2004.11.011. [DOI] [PubMed] [Google Scholar]
  • 22.Provine R. Yawning: The yawn is primal, unstoppable and contagious, revealing the evolutionary and neural basis of empathy and unconscious behavior. Am. Sci. 2005;93:532–539. [Google Scholar]
  • 23.Dimberg U., Petterson M. Facial reactions to happy and angry facial expressions: Evidence for right hemisphere dominance. Psychophysiology. 2000;37:693–696. doi: 10.1111/1469-8986.3750693. [DOI] [PubMed] [Google Scholar]
  • 24.Sroufe L.A., Wunsch J.P. The Development of Laughter in the First Year of Life. Child Dev. 1972;43:1326–1344. doi: 10.2307/1127519. [DOI] [PubMed] [Google Scholar]
  • 25.Provine R.R. Laughter as an approach to vocal evolution: The bipedal theory. Psychon. Bull. Rev. 2017;24:238–244. doi: 10.3758/s13423-016-1089-3. [DOI] [PubMed] [Google Scholar]
  • 26.Simonyan K., Horwitz B. Laryngeal Motor Cortex and Control of Speech in Humans. Neuroscientist. 2011;17:197–208. doi: 10.1177/1073858410386727. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Budell L., Kunz M., Jackson P.L., Rainville P. Mirroring Pain in the Brain: Emotional Expression versus Motor Imitation. PLoS One. 2015;10 doi: 10.1371/journal.pone.0107526. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Lamm C., Decety J., Singer T. Meta-analytic evidence for common and distinct neural networks associated with directly experienced pain and empathy for pain. Neuroimage. 2011;54:2492–2502. doi: 10.1016/j.neuroimage.2010.10.014. [DOI] [PubMed] [Google Scholar]
  • 29.Singer T., Seymour B., O’Doherty J., Kaube H., Dolan R.J., Frith C.D. Empathy for Pain Involves the Affective but not Sensory Components of Pain. Science. 2004;303:1157–1162. doi: 10.1126/science.1093535. [DOI] [PubMed] [Google Scholar]
  • 30.Rizzolatti G., Craighero L. THE MIRROR-NEURON SYSTEM. Annu. Rev. Neurosci. 2004;27:169–192. doi: 10.1146/annurev.neuro.27.070203.144230. [DOI] [PubMed] [Google Scholar]
  • 31.Grice-Jackson T., Critchley H.D., Banissy M.J., Ward J. Consciously Feeling the Pain of Others Reflects Atypical Functional Connectivity between the Pain Matrix and Frontal-Parietal Regions. Front. Hum. Neurosci. 2017;11 doi: 10.3389/fnhum.2017.00507. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.Riečanský I., Lamm C. The Role of Sensorimotor Processes in Pain Empathy. Brain Topogr. 2019;32:965–976. doi: 10.1007/s10548-019-00738-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Zaki J., Wager T.D., Singer T., Keysers C., Gazzola V. The Anatomy of Suffering: Understanding the Relationship between Nociceptive and Empathic Pain. Trends Cogn. Sci. 2016;20:249–259. doi: 10.1016/j.tics.2016.02.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Vural E., Cetin M., Ercil A., Littlewort G., Bartlett M., Movellan J. In: Human–Computer Interaction. Lew M., Sebe N., Huang T.S., Bakker E.M., editors. Springer; 2007. Drowsy Driver Detection Through Facial Movement Analysis; pp. 6–18. [DOI] [Google Scholar]
  • 35.Ekman P., Friesen W.V., Hager J.C. The Manual (Research Nexus division of Network Information Research Corporation); 2002. Facial Action Coding System. [Google Scholar]
  • 36.Ekman P., Friesen W.V. Facial action coding system (FACS) 1978. [DOI]
  • 37.Hutto J.R., Vattoth S., Hutto J.R., Vattoth S. A Practical Review of the Muscles of Facial Mimicry With Special Emphasis on the Superficial Musculoaponeurotic System. Am. J. Roentgenol. 2014;204:W19–W26. doi: 10.2214/AJR.14.12857. [DOI] [PubMed] [Google Scholar]
  • 38.Keltner D. Signs of appeasement: Evidence for the distinct displays of embarrassment, amusement, and shame. J. Pers. Soc. Psychol. 1995;68:441–454. doi: 10.1037/0022-3514.68.3.441. [DOI] [Google Scholar]
  • 39.Littlewort G., Whitehill J., Wu T., Fasel I., Frank M., Movellan J., Bartlett M. 2011 IEEE International Conference on Automatic Face & Gesture Recognition (FG) IEEE; 2011. The computer expression recognition toolbox (CERT) pp. 298–305. [DOI] [Google Scholar]
  • 40.Drack P., Huber T., Ruch W. In: Current and Future Perspectives in Facial Expression Research: Topics and Methodical Questions. Bänninger-Huber E., Peham D., editors. Innsbruck University Press; 2009. The apex of happy laughter: A FACS-study with actors; pp. 36–41. [Google Scholar]
  • 41.Niewiadomski R., Pelachaud C. In: Intelligent Virtual Agents. Nakano Y., Neff M., Paiva A., Walker M., editors. Springer; 2012. Towards Multimodal Expression of Laughter; pp. 231–244. [DOI] [Google Scholar]
  • 42.Lucey P., Cohn J., Lucey S., Matthews I., Sridharan S., Prkachin K.M. 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops. IEEE; 2009. Automatically detecting pain using facial actions; pp. 1–8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43.Barbizet J. Yawning. J. Neurol. Neurosurg. Psychiatry. 1958;21:203–209. doi: 10.1136/jnnp.21.3.203. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 44.Prkachin K.M. The consistency of facial expressions of pain: a comparison across modalities. Pain. 1992;51:297–306. doi: 10.1016/0304-3959(92)90213-U. [DOI] [PubMed] [Google Scholar]
  • 45.Cordaro D.T., Sun R., Keltner D., Kamble S., Huddar N., McNeil G. Universals and cultural variations in 22 emotional expressions across five cultures. Emotion. 2018;18:75–93. doi: 10.1037/emo0000302. [DOI] [PubMed] [Google Scholar]
  • 46.Göller P.J., Reicherts P., Lautenbacher S., Kunz M. Vicarious facilitation of facial responses to pain: Does the others’ expression need to be painful? Eur. J. Pain. 2025;29 doi: 10.1002/ejp.4709. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 47.Menin D., Ballardini E., Panebianco R., Garani G., Borgna-Pignatti C., Oster H., Dondi M. Factors affecting yawning frequencies in preterm neonates. PLoS One. 2022;17 doi: 10.1371/journal.pone.0268083. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48.Beermann U., Gander F., Hiltebrand D., Wyss T., Ruch W. In: Current and Future Perspectives in Facial Expression Research: Topics and Methodical Questions. Bänninger-Huber E., Peham D., editors. Innsbruck University Press; 2009. Laughing at oneself: Trait or state? pp. 31–36. [Google Scholar]
  • 49.Sikander G., Anwar S. A Novel Machine Vision-Based 3D Facial Action Unit Identification for Fatigue Detection. IEEE Trans. Intell. Transport. Syst. 2021;22:2730–2740. doi: 10.1109/TITS.2020.2974263. [DOI] [Google Scholar]
  • 50.Li . Proceedings of 2001 International Symposium on Intelligent Multimedia, Video and Speech Processing. IEEE Cat. No.01EX489; 2001. Computer recognition of human emotions; pp. 490–493. ISIMP 2001. [DOI] [Google Scholar]
  • 51.Heesen R., Szenteczki M.A., Kim Y., Kret M.E., Atkinson A.P., Upton Z., Clay Z. Impact of social context on human facial and gestural emotion expressions. iScience. 2024;27 doi: 10.1016/j.isci.2024.110663. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 52.Kunz M., Gruber A., Lautenbacher S. Sex Differences in Facial Encoding of Pain. J. Pain. 2006;7:915–928. doi: 10.1016/j.jpain.2006.04.012. [DOI] [PubMed] [Google Scholar]
  • 53.Kunz M., Chatelle C., Lautenbacher S., Rainville P. The relation between catastrophizing and facial responsiveness to pain. Pain. 2008;140:127–134. doi: 10.1016/j.pain.2008.07.019. [DOI] [PubMed] [Google Scholar]
  • 54.Tessier M.-H., Mazet J.-P., Gagner E., Marcoux A., Jackson P.L. Facial representations of complex affective states combining pain and a negative emotion. Sci. Rep. 2024;14 doi: 10.1038/s41598-024-62423-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 55.Darwin C. The Expression of the Emotions in Man and Animals. John Murray; 1872. [Google Scholar]
  • 56.Ruch W., Ekman P. Word Scientific Publisher; 2001. The expressive pattern of laughter. [DOI] [Google Scholar]
  • 57.Dijk C., Fischer A.H., Morina N., van Eeuwijk C., van Kleef G.A. Effects of Social Anxiety on Emotional Mimicry and Contagion: Feeling Negative, but Smiling Politely. J. Nonverbal Behav. 2018;42:81–99. doi: 10.1007/s10919-017-0266-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 58.Sachisthal M.S.M., Sauter D.A., Fischer A.H. Mimicry of ingroup and outgroup emotional expressions. Comprehensive Results in Social Psychology. 2016;1:86–105. doi: 10.1080/23743603.2017.1298355. [DOI] [Google Scholar]
  • 59.Gironzetti E., Attardo S., Pickering L. In: Metapragmatics of Humor: Current research trends IVITRA Research in Linguistics and Literature. Ruiz-Gurillo L., editor. John Benjamins Publishing Company; 2016. A pilot study: Smiling, gaze, and humor in conversation; pp. 235–254. [DOI] [Google Scholar]
  • 60.Höfling T.T.A., Alpers G.W., Büdenbender B., Föhl U., Gerdes A.B.M. What’s in a face: Automatic facial coding of untrained study participants compared to standardized inventories. PLoS One. 2022;17 doi: 10.1371/journal.pone.0263863. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 61.Ruch W. Handbook of emotions. The Guilford Press; 1993. Exhilaration and humor; pp. 605–616. [Google Scholar]
  • 62.Lin D., Zhu T., Wang Y. Emotion contagion and physiological synchrony: The more intimate relationships, the more contagion of positive emotions. Physiol. Behav. 2024;275 doi: 10.1016/j.physbeh.2023.114434. [DOI] [PubMed] [Google Scholar]
  • 63.Dezecache G., Conty L., Chadwick M., Philip L., Soussignan R., Sperber D., Grèzes J. Evidence for Unintentional Emotional Contagion Beyond Dyads. PLoS One. 2013;8 doi: 10.1371/journal.pone.0067371. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 64.Cattaneo L., Pavesi G. The facial motor system. Neurosci. Biobehav. Rev. 2014;38:135–159. doi: 10.1016/j.neubiorev.2013.11.002. [DOI] [PubMed] [Google Scholar]
  • 65.Frijda N.H. The emotions. Cambridge University Press; 1986. [Google Scholar]
  • 66.Niedenthal P.M., Mermillod M., Maringer M., Hess U. The Simulation of Smiles (SIMS) model: Embodied simulation and the meaning of facial expression. Behav. Brain Sci. 2010;33:417–480. doi: 10.1017/S0140525X10000865. [DOI] [PubMed] [Google Scholar]
  • 67.Wood A., Niedenthal P. Developing a social functional account of laughter. Soc. Personal. Psychol. Compass. 2018;12 doi: 10.1111/spc3.12383. [DOI] [Google Scholar]
  • 68.Dimberg U., Thunberg M., Elmehed K. Unconscious Facial Reactions to Emotional Facial Expressions. Psychol. Sci. 2000;11:86–89. doi: 10.1111/1467-9280.00221. [DOI] [PubMed] [Google Scholar]
  • 69.Tamietto M., Castelli L., Vighetti S., Perozzo P., Geminiani G., Weiskrantz L., de Gelder B. Unseen facial and bodily expressions trigger fast emotional reactions. Proc. Natl. Acad. Sci. USA. 2009;106:17661–17666. doi: 10.1073/pnas.0908994106. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 70.Epstude K., Mussweiler T. What you feel is how you compare: How comparisons influence the social induction of affect. Emotion. 2009;9:1–14. doi: 10.1037/a0014148. [DOI] [PubMed] [Google Scholar]
  • 71.Fischer A., Hess U. Mimicking emotions. Curr. Opin. Psychol. 2017;17:151–155. doi: 10.1016/j.copsyc.2017.07.008. [DOI] [PubMed] [Google Scholar]
  • 72.Palagi E., Leone A., Mancini G., Ferrari P.F. Contagious yawning in gelada baboons as a possible expression of empathy. Proc. Natl. Acad. Sci. USA. 2009;106:19262–19267. doi: 10.1073/pnas.0910891106. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 73.Caruana F., Avanzini P., Gozzo F., Francione S., Cardinale F., Rizzolatti G. Mirth and laughter elicited by electrical stimulation of the human anterior cingulate cortex. Cortex. 2015;71:323–331. doi: 10.1016/j.cortex.2015.07.024. [DOI] [PubMed] [Google Scholar]
  • 74.Romero T., Ito M., Saito A., Hasegawa T. Social Modulation of Contagious Yawning in Wolves. PLoS One. 2014;9 doi: 10.1371/journal.pone.0105963. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 75.Massen J.J.M., Gallup A.C. Why contagious yawning does not (yet) equate to empathy. Neurosci. Biobehav. Rev. 2017;80:573–585. doi: 10.1016/j.neubiorev.2017.07.006. [DOI] [PubMed] [Google Scholar]
  • 76.Anderson J.R., Myowa–Yamakoshi M., Matsuzawa T. Contagious yawning in chimpanzees. Proc. Biol. Sci. 2004;271:S468–S470. doi: 10.1098/rsbl.2004.0224. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 77.Yesudas E.H., Lee T.M.C. The Role of Cingulate Cortex in Vicarious Pain. BioMed Res. Int. 2015;2015 doi: 10.1155/2015/719615. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 78.Benuzzi F., Lui F., Ardizzi M., Ambrosecchia M., Ballotta D., Righi S., Pagnoni G., Gallese V., Porro C.A. Pain Mirrors: Neural Correlates of Observing Self or Others’ Facial Expressions of Pain. Front. Psychol. 2018;9 doi: 10.3389/fpsyg.2018.01825. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 79.Osborn J., Derbyshire S.W.G. Pain sensation evoked by observing injury in others. Pain. 2010;148:268–274. doi: 10.1016/j.pain.2009.11.007. [DOI] [PubMed] [Google Scholar]
  • 80.Jauniaux J., Khatibi A., Rainville P., Jackson P.L. A meta-analysis of neuroimaging studies on pain empathy: investigating the role of visual information and observers’ perspective. Soc. Cogn. Affect. Neurosci. 2019;14:789–813. doi: 10.1093/scan/nsz055. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 81.Lamm C., Batson C.D., Decety J. The Neural Substrate of Human Empathy: Effects of Perspective-taking and Cognitive Appraisal. J. Cogn. Neurosci. 2007;19:42–58. doi: 10.1162/jocn.2007.19.1.42. [DOI] [PubMed] [Google Scholar]
  • 82.King D.E. Dlib-ml: A Machine Learning Toolkit. J. Mach. Learn. Res. 2009;10:1755–1758. [Google Scholar]
  • 83.Faul F., Erdfelder E., Buchner A., Lang A.-G. Statistical power analyses using G∗Power 3.1: Tests for correlation and regression analyses. Behav. Res. Methods. 2009;41:1149–1160. doi: 10.3758/BRM.41.4.1149. [DOI] [PubMed] [Google Scholar]
  • 84.Swennen G., Schutyser F., Hausamen J.-E. A Color Atlas and Manual. Springer; 2005. Three-Dimensional Cephalometry. [DOI] [Google Scholar]
  • 85.Vezzetti E., Marcolin F., Fracastoro G. 3D face recognition: An automatic strategy based on geometrical descriptors and landmarks. Robot. Autonom. Syst. 2014;62:1768–1776. doi: 10.1016/j.robot.2014.07.009. [DOI] [Google Scholar]
