PLOS One. 2026 Mar 11;21(3):e0342196. doi: 10.1371/journal.pone.0342196

Evolutionary echoes of emotion: Humans mimic other primate expressions

Ursula Hess 1,*,#, Till Kastendieck 1,#, M Gizem Erkol 1, Heidi Mauersberger 1, Marina Davila-Ross 2, Katja Liebal 3, Elisabetta Palagi 4
Editor: Brittany N Florkiewicz
PMCID: PMC12978439  PMID: 41811835

Abstract

Humans readily mimic the emotional behavior of conspecifics, a behavior linked to empathy. Yet, whether humans unconsciously mimic the emotional expressions of non-human primates remains an open question. Human observers watched short videos of positive (play-face), negative (open-mouth threat display), or neutral expressions shown by monkeys and apes, while their own facial expressions were filmed and automatically coded. They then rated the expressions for emotional content and indicated their degree of liking of, and closeness to, the primates. Participants mimicked both positive and negative expressions and were able to correctly identify the expressions as positive or negative. These findings shed new light on the deep-rooted, cross-species nature of emotional connection, suggesting that humans are able to empathize with and mirror the emotions of other species.

1. Introduction

The matching or mimicry of the emotional behavior of conspecifics is prevalent in many species. Besides humans [1], it has, for example, been observed in orangutans [2], geladas [3] and some macaque species [4,5], but also in sun bears [6], dogs [7] and meerkats [8]. In humans, as in non-human primates, the imitation of emotional behavior (emotional mimicry) has been shown to be relevant for the establishment of affiliative relations [1,9]. Specifically, mimicry is indicative of an affiliative stance towards the mimickee [1,10] and is often considered a “low road” to empathy [11]; as such, it is one important facet of human empathy.

There is good evidence that infant and juvenile non-human primates spontaneously mimic human facial gestures, such as tongue protrusion and lip smacking [see, e.g., 12,13]. Moreover, humans and chimpanzees have been found to engage in voluntary imitation of each other’s behavior [14]. Notably, capuchin monkeys also show more affiliation towards humans who were instructed to imitate them than towards humans who did not [10], suggesting that mimicry can serve to signal affiliation across species.

Curiously, however, there are to date no studies assessing whether humans spontaneously mimic the emotional expressions of non-human primates. The goal of the present research was to close this gap.

Given the importance of mimicry behavior for affiliative relations between individuals [1,15], the question of whether such behavior is shown across species is intriguing. If humans imitate the emotional behavior of non-human primates, this would suggest a human capacity for empathic reactions towards them [11].

According to the emotional mimicry as social regulator view [1,15], humans should mimic the expressions of others, including non-humans, provided they are able to understand them and they feel a certain level of closeness to the expresser. In human-human interaction, this level of closeness is almost a default setting, such that for the most part, humans mimic fellow humans unless there is an explicit reason for reduced closeness [16,17]. In fact, humans mimic not only human-like avatars [17] but also crude drawings of human emotional faces, provided the expressions are perceived as emotionally relevant [18].

Thus, two factors seem relevant. First, are humans able to recognize the valence of the expression (which seems to suffice for mimicry [15])? Second, do they feel sufficiently close to non-human primates for mimicry to occur?

In this context, it is important to emphasize that we use an observer-focused perspective when using the terms emotional mimicry and emotional expression. It is difficult, if not impossible, to assert what any organism, humans and other primates included, experiences when showing a given expression. However, humans nonetheless not only attribute emotions to such expressions [18] but also act accordingly [19]. Thus, to the human observer the primate shows an emotion, and the observer will mimic (or not) the expression in line with this assumption. Human empathy with a primate can therefore only be based on the human observers’ attributions, not on ground-truth knowledge about inner states.

Some research indicates that humans report more empathy towards animals such as non-human primates (which are more closely related to humans) than, for example, towards birds and reptiles [19]. Further, humans are generally able to recognize emotional expressions of other animal taxa, such as cats [20] and dogs [21]. Depending on the emotion, humans are better able to detect emotions in our closest relatives (chimpanzees/bonobos) than in dogs, with anger being generally well detected across species [22]. Human listeners are also to some degree able to identify emotions in vocalizations of rhesus macaques [23], as well as primate facial expressions, even though the level of accuracy varies with primate species and experience [24]. Importantly, primate and human emotion expressions, despite various differences, share many features [25,26]; hence an ability to rate expressions as positive or negative is plausible. This notion is also supported by a study showing that adult humans rate complex scenes with multiple protagonists, who are either human or bonobos, similarly with regard to the valence and arousal of the picture [27]. Even though the focus there was not on individual facial expressions per se, the expressive behavior of the bonobos played a role, as children tended to misidentify pictures including silent bared-teeth displays as positive [27]. Yet, in a task where attentional bias to emotional versus neutral expressions was assessed, the bias shown for human expressions was not found for images showing chimpanzee expressions, suggesting a lack of relevant expertise [28].

The present research is the first to address whether humans who have little or no experience with non-human primates can not only recognize the emotions associated with non-human primate facial expressions but also spontaneously mimic these expressions. In an online experiment, participants saw videos of positive (play-face: mouth is open and relaxed with rounded lip corners, teeth are visible), negative (open-mouth threat display: mouth is pulled open, teeth are prominently visible) and neutral expressions (mouth is closed and relaxed). Fig 1 shows example stills of the expressions.

Fig 1. Examples for positive, neutral and negative expressions.


Copyright held by the authors.

Following each video, participants rated both the degree of positivity/negativity of the expression and the levels of different discrete emotions (anger, disgust, fear, happiness, sadness, surprise) as well as perceived psychological closeness and liking towards the expresser. While they were watching the video, observers’ facial expressions were filmed using their webcam. The expressions were FACS [34] coded using OpenFace 2.0 [33]. This method allows a detailed description of the participants’ facial activity in terms of facial action units (AUs). Expressions with high intensity of AU04 (frown) and low intensity of AU12 (pulls the lip corners up) and AU06 (wrinkles around the eyes) index negative facial expressions, whereas the reverse pattern indexes positive expressions [35].

The videos showed the animals in natural surroundings. As such, most of the upper body and, in some cases, the whole body of the individuals was visible. That is, emotion cues were not limited to the face. However, there is good evidence that humans engage in facial mimicry even when the emotional signal is not presented via facial expressions [29,27] as long as they understand the emotion being signaled [30].

2. Methods

Participants

A power simulation using simr [31], assuming an effect size of d = .32 (based on data from previous online studies involving facial mimicry) for the focal mimicry analysis, suggested 200 participants for power > 90% with an alpha of .05 for the focal analyses (see power curve in the supporting information). A total of 212 participants (103 women, 107 men, 2 gender unknown) with a mean age of 40 years (SD = 13) completed the task and provided usable video material. Data from an additional 18 participants were excluded due to technical problems or noncompliance with the video instructions (e.g., eating during the experiment, face not visible on camera).
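The logic of such a simulation-based power analysis can be sketched in simplified form. The Python sketch below is only an illustrative stand-in for the simr mixed-model simulation actually used: it treats each participant as contributing a single standardized mimicry score and runs a one-sample t-test; the one-sample design, critical value, and simulation count are assumptions made for the sketch.

```python
import random
import statistics

def simulate_power(d=0.32, n=200, n_sims=2000, t_crit=1.972, seed=1):
    """Estimate power by Monte Carlo simulation: draw n standardized
    scores with true mean d, run a two-sided one-sample t-test at
    alpha = .05 (t_crit for df = 199), and count significant runs."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        # each simulated participant contributes one standardized score
        sample = [rng.gauss(d, 1.0) for _ in range(n)]
        m = statistics.fmean(sample)
        se = statistics.stdev(sample) / n ** 0.5
        if abs(m / se) > t_crit:
            hits += 1
    return hits / n_sims

print(simulate_power())  # well above the .90 target for d = .32, n = 200
```

The same loop structure underlies simr: simulate data under an assumed effect size, refit the model, and record the proportion of significant results.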

Participants were recruited via Prolific Academic and were paid €3.11 for the 20-minute task (which is classified as “good” payment in Prolific). The study was approved by the Institutional Review Board of the Department of Psychology of Humboldt University of Berlin and conducted in accordance with the Declaration of Helsinki and the German Research Foundation’s Guidelines for Safeguarding Good Research Practice. A master’s thesis that was part of the larger project was preregistered via OSF (https://osf.io/9rw2m). The data reported herein are not part of the thesis. Data were collected during February 2025.

Measures

Dimensional Emotion Perception Ratings. Participants rated the positive and negative valence of each non-human primate expression on two Likert scales (1 = not at all, 7 = very much).

Discrete Emotion Profile: Participants rated the degree to which the non-human primate seemed to express each of six emotions (happiness, anger, sadness, fear, disgust, and surprise) using a 7-point scale (1 = not at all, 7 = very much). The emotion profile was used to assess how human observers interpret the expressions using emotion terms with which they are familiar.

Perceived closeness and liking. Following the emotion ratings, participants were asked to rate their closeness to the primate shown using the Inclusion of Other in the Self scale [32]. For this, they used a slider that moved two circles representing themselves and the primate to a distance that represented their perceived closeness (1 = most distant, 101 = closely overlapping). They also indicated their liking of the primate on a 7-point scale (1 = not at all, 7 = very much). See below for additional measures not reported here.

Emotional Mimicry: Emotional mimicry was assessed using OpenFace 2.0 [33], an open-source tool for facial behavior analysis. The software tracks facial activity based on the Facial Action Coding System [34]. We extracted information for AU4 (brow furrow), AU6 (cheek raise), and AU12 (lip corner pull). Mimicry of negative expressions is indexed by relatively stronger activity of AU4 compared to AU6 and AU12; the reverse pattern indexes mimicry of positive expressions [35].
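As a rough illustration of how such a mimicry index can be derived from OpenFace AU intensities, the Python sketch below z-standardizes each AU within a participant and contrasts AU4 against the mean of AU6 and AU12. The intensity values are hypothetical, and the reported analyses were actually run in R; this is only a minimal sketch of the indexing logic.

```python
import statistics

def z(values):
    """z-standardize a list of AU intensities within one participant."""
    m, sd = statistics.fmean(values), statistics.stdev(values)
    return [(v - m) / sd for v in values]

def mimicry_contrast(au4, au6, au12):
    """Contrast AU4 (frown) against the mean of AU6 and AU12 (smile
    components). Positive values index a frown-dominant (negative)
    response; negative values index a smile-dominant (positive) one."""
    return au4 - (au6 + au12) / 2.0

# hypothetical OpenFace intensities for one participant across 3 trials
au4_raw  = [1.8, 0.2, 0.5]   # strong frown on trial 1
au6_raw  = [0.1, 1.2, 0.3]   # cheek raise strongest on trial 2
au12_raw = [0.2, 1.5, 0.4]   # lip corner pull strongest on trial 2

au4, au6, au12 = z(au4_raw), z(au6_raw), z(au12_raw)
scores = [mimicry_contrast(a, b, c) for a, b, c in zip(au4, au6, au12)]
print(scores)  # trial 1 yields a positive contrast, trial 2 a negative one
```

Within-participant standardization removes stable individual differences in expressivity before the AU pattern is compared across expression conditions.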

Video stimuli

Stimuli consisted of videos showing apes (eight chimpanzees, two orangutans, two gorillas, and one bonobo) and monkeys (one langur and one crested macaque) who showed either a positive (play face), neutral or negative (open-mouth threat display) expression. For each expression five different primates were shown. The videos were sourced from a pool of several primate research archives provided by Elisabetta Palagi, Marina Davila-Ross, and Katja Liebal, as well as primate documentaries (e.g., Netflix, BBC, NatGeo). Videos were selected based on image quality, duration, and the visibility of expressions. Only videos showing a single animal in the scene were included. The preselected videos were independently coded by Elisabetta Palagi and Marina Davila-Ross, with regard to the expression shown. For the final video selection, 100% agreement regarding the expression was obtained. Videos varied in length from 5.7 to 6.5 seconds.

Procedure

Participants were informed about the content and length of the task. Furthermore, they were informed that participation in the experiment was only possible if they had a webcam-enabled computer/laptop and agreed to a webcam recording of their face during the experiment. Informed consent included standard details on compensation, confidentiality, and contact information as well as detailed information on (video) data storage and processing. Participants who gave informed consent were instructed to set up their webcam to allow recording, to arrange sufficient lighting, and to refrain from eating or covering their face during the experiment.

Participants then provided general socio-demographic information. Before beginning the main experiment, participants completed a single practice trial with a non-human primate video displaying a pant hoot expression to familiarize themselves with the task.

During each trial, participants saw the videos in random order. Videos were preceded by a 1.5 sec fixation cross (on grey screen) and followed by a 1.5 sec grey screen. While they watched the video, participants’ faces were recorded. Following each video, participants completed the rating scales. After completing the video trials, they provided global ratings of their perceived connectedness to nature, closeness to primates, and phylogenetic closeness to specific primate species (these scales are not part of the present report). At the end of the experiment, participants were fully debriefed about the study’s goals and were provided with a code to receive compensation via Prolific.

The research was preregistered [https://osf.io/9rw2m] and received ethics approval from the Departmental Ethics Board of the Department of Psychology at Humboldt-University of Berlin [Addendum to Proposal #2020−39]. The complete data and a markdown for all reported and supplementary analyses are available in S1 R Markdown. The data files and markdown are available at [https://osf.io/n4cdj/files/osfstorage].

Results

Data analyses were conducted using linear mixed models (LMM) with sum-coded contrasts for Expression and Helmert contrasts for AUs (lme4 [36], lmerTest [37]). Post-hoc tests were conducted using emmeans [38]. A markdown for all analyses can be found in S1 R Markdown.
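To make the coding concrete, the focal Helmert contrast amounts to applying zero-sum weights to the AU cell means. A minimal Python sketch, with hypothetical cell means rather than the observed data (in R this coding is set via contr.helmert before fitting the model):

```python
# Hypothetical z-scored AU cell means for one expression condition;
# illustrative values only, not the observed data.
cell_means = {"AU4": 0.10, "AU6": -0.04, "AU12": -0.06}

# Focal Helmert contrast: AU4 versus the mean of AU6 and AU12.
weights = {"AU4": 1.0, "AU6": -0.5, "AU12": -0.5}
assert abs(sum(weights.values())) < 1e-9  # contrast weights sum to zero

estimate = sum(weights[au] * cell_means[au] for au in weights)
print(round(estimate, 3))  # AU4 minus the mean of AU6 and AU12 -> 0.15
```

Sum coding of Expression works analogously: each code compares one expression level against the grand mean, so the AU-by-Expression interaction tests whether this contrast differs across expression conditions.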

Emotion recognition

We first assessed whether participants recognized the expressions. An LMM with the random factor participants revealed a significant effect of Expression for ratings of both Positivity, F(2, 1977) = 81.44, p < .001, ηp2 = .08, and Negativity of the Expression, F(2, 1979) = 148.01, p < .001, ηp2 = .13. Participants rated positive expressions as significantly more positive (M = 3.48, SD = 1.11, CI95[3.33, 3.64]) and less negative (M = 2.66, SD = 1.02, CI95[2.52, 2.80]) than negative expressions (Mpos = 2.42, SD = 1.10, CI95[2.26, 2.57], t1977 = 12.11, p < .001; Mneg = 4.12, SD = 1.31, CI95[3.94, 4.30], t1977 = 15.65, p < .001). Neutral expressions (Mpos = 2.69, SD = 1.12, CI95[2.53, 2.84]; Mneg = 2.79, SD = 1.23, CI95[2.62, 2.86]) were rated as more positive than negative expressions, t1978 = 2.63, p = .024, but less positive than positive expressions, t1977 = 9.49, p < .001. Even though neutral expressions were rated as less negative than negative expressions, t1977 = 14.01, p < .001, they did not differ in negativity from positive expressions, t1976 = 1.53, p = .275.

Fig 2 shows the ratings on the discrete emotion scales. Positive expressions were rated as predominantly happy, negative expressions as predominantly angry and neutral expressions as predominantly sad (for more details, post-hoc tests and a table with the summary statistics see S1 File).

Fig 2. Mean ratings for discrete emotions as a function of expression valence.


Overall, these data suggest that participants were rather good at assessing whether an expression signals positive versus negative affect or neutrality. They also associated the negative expressions more with anger (but also with fear and surprise) and the positive expressions more with happiness. Neutral expressions were rated somewhat sadder, possibly due to the low arousal in these expressions.

Mimicry

Mimicry was indexed by a relatively higher level of AU4 (frown) compared to AU6 (wrinkles around the eyes) and AU12 (lip corners pulled up) for negative expressions and the reverse pattern for positive expressions. Fig 3 shows the means and standard errors for these action units.

Fig 3. Mean intensity (z-scores) of AU04, AU06, and AU12 as a function of expression valence.


The observed pattern is congruent with the notion that participants mimic the valence of the expressions. A linear model on the within-participant z-transformed AU data, with the factors AU and Expression, revealed a significant effect of Expression, which was qualified by a significant interaction between AU and Expression, F(4, 7750) = 5.18, p < .001, ηp2 = .003, indicating that different AU patterns were observed in response to different primate expressions.

Post-hoc tests using a Helmert contrast to compare AU04 to the mean of AU06 and AU12 were significant for both positive, t7750 = 4.10, p < .001, ηp2 = .002, and negative primate expressions, t7750 = 2.09, p = .037, ηp2 = .001. As expected, neutral expressions did not elicit patterned facial responses, t7750 = 0.35, p = .725, ηp2 = .000 (for more details and a table with the summary statistics see S1 File).

Moderation of mimicry by perceived emotion and self-reported closeness

Finally, we assessed whether, as shown for human-human mimicry, the degree of imitation was moderated by both the level of self-reported closeness and emotion perception [16,39,40]. For this, we calculated a positive pattern score based on the above-mentioned Helmert contrast [35]. This score is positive when participants show a positive expression and negative when they show a negative expression. We conducted an analysis predicting the positive pattern score from both the rated positivity of the expression and self-reported closeness to the primate; predictors were z-scored (for the means and SDs of these variables see S1 File).
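A minimal sketch of how the pattern score and the moderation predictors can be assembled, assuming within-participant z-scored AU intensities and z-scored ratings (all values are hypothetical, and the reported analysis was run in R):

```python
import statistics

def positive_pattern_score(au4_z, au6_z, au12_z):
    """Sign-flipped Helmert contrast: positive when the participant's own
    expression is smile-dominant, negative when it is frown-dominant."""
    return (au6_z + au12_z) / 2.0 - au4_z

def zscore(xs):
    """z-standardize a list of predictor values."""
    m, sd = statistics.fmean(xs), statistics.stdev(xs)
    return [(x - m) / sd for x in xs]

# hypothetical per-trial predictors for one participant
rated_positivity = zscore([5, 2, 4, 1, 6])       # 7-point positivity ratings
closeness        = zscore([60, 20, 70, 30, 80])  # IOS slider values

# design rows for the moderation model: intercept, positivity,
# closeness, and their product (the interaction term)
design = [(1.0, p, c, p * c) for p, c in zip(rated_positivity, closeness)]

print(positive_pattern_score(-0.5, 0.8, 0.9))  # smile-dominant trial
```

Regressing the pattern score on such design rows yields the positivity slope and the positivity-by-closeness interaction reported below.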

The model was significant, F(3, 2164) = 28.52, p < .001, ηp2 = .038, with a significant effect of the rated positivity of the expression, β = .13, SE = .02, t = 5.58, p < .001, ηp2 = .014, and a significant interaction between rated positivity and self-reported closeness, β = .05, SE = .02, t = 2.45, p = .014, ηp2 = .003. Fig 4 shows the slopes.

Fig 4. Predicted values for expressive contrast as a function of rated positivity of the expression and self-reported closeness to the expresser.


Post-hoc tests confirmed that positive facial reactions of the observer increased with increased rated positivity at all levels of self-reported closeness. That is, the more positive the primate expressions were rated, the more positive was the participants’ own expression. A reversal in sign for expressions rated not positive at all suggests that participants showed negative expressions in response. Notably, for expressions rated more positively, the effect was moderated by perceived closeness such that the slope was steeper when participants reported feeling closer to the primate, t2164 = 2.46, p = .038 (for more details see S1 File).

Discussion

This research is the first to provide evidence that humans not only recognize positive and negative emotion expressions shown by non-human primates but also spontaneously mimic these expressions. Further, as for human-human mimicry, the strength of participants’ mimicry reaction depended on both their perception of the primate expression and their perceived closeness to the primate. These data suggest that humans spontaneously react empathically towards non-human primates.

Although there is evidence of yawn contagion between humans and a variety of species [41], the important difference between mimicry of emotional expressions and mimicry of facial gestures such as yawning is that the latter are not intrinsically meaningful: they tell us little about the emotional state of the yawner. By contrast, in humans and other primates, emotion expressions signal socially relevant information [42] as well as behavioral intentions [43]. Specifically, whereas happiness signals affiliative intentions, anger signals the opposite. Hence, the mimicry of emotional expressions is linked to the capacity to understand the expresser and to empathically feel with the expresser [11] in a way that yawn contagion is not. Thus, the finding that humans mimic emotional behavior shown by primates sheds new light on the deep-rooted, cross-species nature of emotional connection between these species and humans, suggesting that the human ability to empathize and mirror emotions extends beyond humankind.

What is also striking is that participants were not only able to identify the expressions in terms of negativity and positivity but also to correctly attach discrete emotion labels to the expressions. The positive expressions were play-faces, whereas the negative expressions were open-mouth threat displays, which participants rated as predominantly happy and angry respectively. These ratings represent a plausible interpretation of the behaviors and the accompanying behavioral intentions. Notably, both behaviors involve an open mouth with teeth visible. Nonetheless, participants were able to distinguish the behavioral intentions behind these facial gestures by attributing an affiliative vs antagonistic discrete emotion to the primates.

Finally, a regression analysis showed that the intensity of the mimicry behavior depended both on the perceived emotionality of the expression and the perceived closeness to the primate. Specifically, participants reported both more liking and more felt closeness for primates who showed positive expressions. In turn, mimicry of positive expressions was more pronounced when participants reported higher levels of felt closeness to the specific animal. Thus, the affiliative stance towards the primate is important in situations where mimicry produces an affiliative expression in the mimicker. That participants modulate positive mimicry as a function of the affiliation and closeness they feel towards the expresser suggests that people may in some ways be more “careful” when sending an affiliative signal, a smile, in response to a play-face, than when sending an essentially antagonistic signal, a frown, in response to a threat display. This more nuanced reaction to a positive, affiliative signal matches findings for human-human mimicry [16,40]. One could speculate that positive overtures towards others may be considered “costly” in some contexts and hence should be preferentially shown to those we already like and feel close to. The question of why mimicry of positive expressions seems to be preferentially moderated by perceived closeness is intriguing and should be addressed by future research.

The present study provides strong evidence that humans not only understand positive, negative and neutral expressions shown by non-human primates but mimic these as well. It should be noted that the primates shown included not only the four great apes but also two different monkeys. As such, the findings allow for some generalization across primates.

Yet, the study also has some limitations. The videos were very short and showed not only the faces of the primates but also, to varying degrees, their bodies. Also, the primates showing emotion expressions moved more than the ones showing a neutral expression; the notably lower level of arousal in the neutral videos may have contributed to the use of the label sad for neutral expressions.

At the same time, the presence of body cues may have helped participants to decode the expressions better. Further, participants were not only able to distinguish the positive and negative expressions from neutral expressions, but also from each other, despite the fact that both were associated with higher arousal and involved an open mouth that revealed the teeth.

In conclusion, the intricate relationship between emotional sharing and key dimensions of empathy suggests that our findings may transcend the boundaries of evolutionary biology, resonating deeply within psychology and the humanities. In particular, philosophy of mind, where foundational theoretical models of empathy have been developed, stands to be significantly influenced. If humans are capable of perceiving and resonating with the emotional states of non-human animals, this challenges long-standing anthropocentric paradigms and fosters a reconceptualization of the human-animal relationship.

The implications of this perspective extend beyond theoretical discourse, bearing profound ecological and ethical consequences. By reducing the psychological and conceptual distance between humans and other animals, we cultivate a biocentric rather than anthropocentric worldview—one that acknowledges the intrinsic value of all living beings, independent of their utility to humankind. Such a shift is particularly crucial in addressing pressing global challenges, including biodiversity loss, the degradation of ecosystem services, and the destruction of natural habitats. Recognizing our shared emotional landscape with non-human animals can serve as a catalyst for fostering a deeper ecological sensitivity, inspiring policies and conservation strategies rooted in an ethic of care and interconnectedness.

Supporting information

S1 File. R Markdown.

This is the markdown for the R analyses.

(PDF)


Data Availability

The data files and markdown are available at [https://osf.io/n4cdj/files/osfstorage].

Funding Statement

The author(s) received no specific funding for this work.

References

  • 1.Hess U, Fischer A. Emotional mimicry as social regulator: theoretical considerations. Cogn Emot. 2022;36(5):785–93. doi: 10.1080/02699931.2022.2103522 [DOI] [PubMed] [Google Scholar]
  • 2.Davila Ross M, Menzler S, Zimmermann E. Rapid facial mimicry in orangutan play. Biol Lett. 2008;4(1):27–30. doi: 10.1098/rsbl.2007.0535 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Mancini G, Ferrari PF, Palagi E. Rapid facial mimicry in geladas. Scientific Reports. 2013;3:1527. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Scopa C, Palagi E. American psychological association, US. 2016; 130, pp. 153–61.
  • 5.Facondini G, Pedruzzi L, Aere S, Böye M, Lemasson A, Palagi E. Rapid facial mimicry as a regulator of play in a despotic macaque species. Behav Ecol Sociobiol. 2024;78(6). doi: 10.1007/s00265-024-03479-y [DOI] [Google Scholar]
  • 6.Taylor D, Hartmann D, Dezecache G, Te Wong S, Davila-Ross M. Facial complexity in sun bears: exact facial mimicry and social sensitivity. Sci Rep. 2019;9(1):4961. doi: 10.1038/s41598-019-39932-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Palagi E, Nicotra V, Cordoni G. Rapid mimicry and emotional contagion in domestic dogs. R Soc Open Sci. 2015;2(12):150505. doi: 10.1098/rsos.150505 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Palagi E, Marchi E, Cavicchio P, Bandoli F. Sharing playful mood: rapid facial mimicry in Suricata suricatta. Anim Cogn. 2019;22(5):719–32. doi: 10.1007/s10071-019-01269-y [DOI] [PubMed] [Google Scholar]
  • 9.Palagi E, Celeghin A, Tamietto M, Winkielman P, Norscia I. The neuroethology of spontaneous mimicry and emotional contagion in human and non-human animals. Neurosci Biobehav Rev. 2020;111:149–65. doi: 10.1016/j.neubiorev.2020.01.020 [DOI] [PubMed] [Google Scholar]
  • 10.Paukner A, Suomi SJ, Visalberghi E, Ferrari PF. Capuchin monkeys display affiliation toward humans who imitate them. Science. 2009;325(5942):880–3. doi: 10.1126/science.1176269 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Schuler M, Mohnke S, Walter H. The neurological basis of empathy and mimicry. Emotional Mimicry in Social Context. Cambridge University Press. 2016. p. 192–221. doi: 10.1017/cbo9781107587595.010 [DOI] [Google Scholar]
  • 12.Ferrari PF, et al. Interindividual differences in neonatal imitation and the development of action chains in rhesus macaques. Child Development. 2009;80:1057–68. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Ferrari PF, et al. Neonatal imitation in rhesus macaques. PLOS Biology. 2006;4:e302. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14.Persson T, Sauciuc G-A, Madsen EA. Spontaneous cross-species imitation in interactions between chimpanzees and zoo visitors. Primates. 2018;59(1):19–29. doi: 10.1007/s10329-017-0624-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Hess U, Fischer A. Emotional mimicry as social regulation. Pers Soc Psychol Rev. 2013;17(2):142–57. doi: 10.1177/1088868312472607 [DOI] [PubMed] [Google Scholar]
  • 16.Kastendieck T, Mauersberger H, Blaison C, Ghalib J, Hess U. Laughing at funerals and frowning at weddings: Top-down influences of context-driven social judgments on emotional mimicry. Acta Psychol (Amst). 2021;212:103195. doi: 10.1016/j.actpsy.2020.103195 [DOI] [PubMed] [Google Scholar]
  • 17.Likowski KU, Mühlberger A, Seibt B, Pauli P, Weyers P. Modulation of facial mimicry by attitudes. J Experimental Social Psychology. 2008;44(4):1065–72. doi: 10.1016/j.jesp.2007.10.007 [DOI] [Google Scholar]
  • 18.Hess U, Hühnel I, van der Schalk J, Fischer A. Emotional Mimicry in Social Context. Hess U, Fischer A, eds. Cambridge University Press: Cambridge, UK; 2016. pp. 90–106. [Google Scholar]
  • 19.Ingham HRW, Neumann DL, Waters AM. Empathy-Related Ratings to Still Images of Human and Nonhuman Animal Groups in Negative Contexts Graded for Phylogenetic Similarity. Anthrozoös. 2015;28(1):113–30. doi: 10.2752/089279315x14129350722136 [DOI] [Google Scholar]
  • 20.Thibault P, Bourgeois P, Hess U. The effect of group-identification on emotion recognition: The case of cats and basketball players. J Experimental Social Psychol. 2006;42(5):676–83. doi: 10.1016/j.jesp.2005.10.006 [DOI] [Google Scholar]
  • 21.Lakestani NN, Donaldson ML, Waran N. Interpretation of Dog Behavior by Children and Young Adults. Anthrozoös. 2014;27(1):65–80. doi: 10.2752/175303714x13837396326413 [DOI] [Google Scholar]
  • 22.Sullivan SK, Kim A, Vinicius Castilho L, Harris LT. Comparing emotion inferences from dogs (Canis familiaris), panins (Pan troglodytes/Pan paniscus), and humans (Homo sapiens) facial displays. Sci Rep. 2022;12(1):13171. doi: 10.1038/s41598-022-16098-2 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Frolova O, Matveev A, Lyakso E, Kuznetsova T, Golubeva I. In: Karpov A, Delić V, editors. Speech and Computer. Cham: Springer Nature Switzerland. 2025. p. 85–94. [Google Scholar]
  • 24.Guo K, Li Z, Yan Y, Li W. Viewing heterospecific facial expressions: an eye-tracking study of human and monkey viewers. Exp Brain Res. 2019;237(8):2045–59. doi: 10.1007/s00221-019-05574-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Kret ME, Prochazkova E, Sterck EHM, Clay Z. Emotional expressions in human and non-human great apes. Neurosci Biobehav Rev. 2020;115:378–95. doi: 10.1016/j.neubiorev.2020.01.027 [DOI] [PubMed] [Google Scholar]
  • 26.Kavanagh E, Kimock C, Whitehouse J, Micheletta J, Waller BM. Revisiting Darwin’s comparisons between human and non-human primate facial signals. Evol Hum Sci. 2022;4:e27. doi: 10.1017/ehs.2022.26 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Hawk ST, Fischer AH, Van Kleef GA. Face the noise: embodied responses to nonverbal vocalizations of discrete emotions. J Pers Soc Psychol. 2012;102(4):796–814. doi: 10.1037/a0026234 [DOI] [PubMed] [Google Scholar]
  • 28.Heesen R, Kim Y, Kret ME, Clay Z. Perceptual integration of bodily and facial emotion cues in chimpanzees and humans. PNAS Nexus. 2024;3(2):pgae012. doi: 10.1093/pnasnexus/pgae012 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29.Hawk ST, Fischer A. Emotional mimicry in social context. In: Hess U, Fischer A, editors. Emotional mimicry in social context. Cambridge, UK: Cambridge University Press. 2016. p. 107–24. [Google Scholar]
  • 30.Hess U, Houde S, Fischer A. In: Collective emotions: Perspectives from psychology, philosophy, and sociology. New York, NY, US: Oxford University Press. 2014. p. 94–107. [Google Scholar]
  • 31.Green P, MacLeod CJ. SIMR: An R package for power analysis of generalized linear mixed models by simulation. Methods Ecol Evol. 2016;7(4):493–8. [Google Scholar]
  • 32.Aron A, Aron EN, Smollan D. Inclusion of other in the self scale and the structure of interpersonal closeness. J Pers Soc Psychol. 1992;63:596–612. 10.1037/0022-3514.63.4.596 [DOI] [Google Scholar]
  • 33.Baltrusaitis T, Zadeh A, Lim YC, Morency L-P. OpenFace 2.0: Facial behavior analysis toolkit. 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018); 2018. [Google Scholar]
  • 34.Ekman P, Friesen WV. The facial action coding system: A technique for the measurement of facial movement. Consulting Psychologists Press; 1978. 10.1037/t27734-000 [DOI] [Google Scholar]
  • 35.Hess U, et al. Reliability of Surface Facial Electromyography. Psychophysiology. 2017;54:12–23. [DOI] [PubMed] [Google Scholar]
  • 36.Bates D, Mächler M, Bolker B, Walker S. Fitting Linear Mixed-Effects Models Using lme4. J Stat Soft. 2015;67(1). doi: 10.18637/jss.v067.i01 [DOI] [Google Scholar]
  • 37.Kuznetsova A, Brockhoff PB, Christensen RH. lmerTest package: tests in linear mixed effects models. J Stat Soft. 2017;82:1–26. [Google Scholar]
  • 38.Lenth R, Piaskowski J. 2025.
  • 39.Kastendieck T, Zillmer S, Hess U. (Un)mask yourself! Effects of face masks on facial mimicry and emotion perception during the COVID-19 pandemic. Cogn Emot. 2022;36(1):59–69. doi: 10.1080/02699931.2021.1950639 [DOI] [PubMed] [Google Scholar]
  • 40.Mauersberger H, Kastendieck T, Hetmann A, Schöll A, Hess U. The different shades of laughter: when do we laugh and when do we mimic other’s laughter?. Philos Trans R Soc Lond B Biol Sci. 2022;377(1863):20210188. doi: 10.1098/rstb.2021.0188 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41.Gallup AC, Wozny S. Interspecific contagious yawning in humans. Animals. 2022;12(1908). [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42.Alaei R, Rule NO. The social psychology of perceiving others accurately. In: Hall JA, Mast MS, West TV, editors. The social psychology of perceiving others accurately. Cambridge, UK: Cambridge University Press. 2016. p. 125–42. [Google Scholar]
  • 43.Hess U, Blairy S, Kleck RE. The influence of facial emotion displays, gender, and ethnicity on judgments of dominance and affiliation. J Nonverbal Behavior. 2000;24(4):265–83. doi: 10.1023/a:1006623213355 [DOI] [Google Scholar]
PLoS One. 2026 Mar 11;21(3):e0342196. doi: 10.1371/journal.pone.0342196.r001

Author response to Decision Letter 0


Transfer Alert

This paper was transferred from another journal. As a result, its full editorial history (including decision letters, peer reviews and author responses) may not be present.

27 Aug 2025

Decision Letter 0

Brittany Florkiewicz

30 Dec 2025

Dear Dr. Hess,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Feb 13 2026 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org . When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

  • A letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols . Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols .

We look forward to receiving your revised manuscript.

Kind regards,

Brittany N. Florkiewicz, Ph.D.

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please note that your Data Availability Statement is currently missing the repository name. If your manuscript is accepted for publication, you will be asked to provide these details on a very short timeline. We therefore suggest that you provide this information now, though we will not hold up the peer review process if you are unable.

3. Your ethics statement should only appear in the Methods section of your manuscript. If your ethics statement is written in any section besides the Methods, please move it to the Methods section and delete it from any other section. Please ensure that your ethics statement is included in your manuscript, as the ethics statement entered into the online submission form will not be published alongside your manuscript.

4. We note that Figure 1 in your submission contain copyrighted images. All PLOS content is published under the Creative Commons Attribution License (CC BY 4.0), which means that the manuscript, images, and Supporting Information files will be freely available online, and any third party is permitted to access, download, copy, distribute, and use these materials in any way, even commercially, with proper attribution. For more information, see our copyright guidelines: http://journals.plos.org/plosone/s/licenses-and-copyright.

We require you to either (1) present written permission from the copyright holder to publish these figures specifically under the CC BY 4.0 license, or (2) remove the figures from your submission:

1) You may seek permission from the original copyright holder of Figure 1 to publish the content specifically under the CC BY 4.0 license.

We recommend that you contact the original copyright holder with the Content Permission Form (http://journals.plos.org/plosone/s/file?id=7c09/content-permission-form.pdf) and the following text:

“I request permission for the open-access journal PLOS ONE to publish XXX under the Creative Commons Attribution License (CCAL) CC BY 4.0 (http://creativecommons.org/licenses/by/4.0/). Please be aware that this license allows unrestricted use and distribution, even commercially, by third parties. Please reply and provide explicit written permission to publish XXX under a CC BY license and complete the attached form.”

Please upload the completed Content Permission Form or other proof of granted permissions as an ""Other"" file with your submission.

In the figure caption of the copyrighted figure, please include the following text: “Reprinted from [ref] under a CC BY license, with permission from [name of publisher], original copyright [original copyright year].”

2) If you are unable to obtain permission from the original copyright holder to publish these figures under the CC BY 4.0 license or if the copyright holder’s requirements are incompatible with the CC BY 4.0 license, please either i) remove the figure or ii) supply a replacement figure that complies with the CC BY 4.0 license. Please check copyright information on all replacement figures and update the figure caption with source information.

If applicable, please specify in the figure caption text when a figure is similar but not identical to the original image and is therefore for illustrative purposes only.

5. Please include captions for your Supporting Information files at the end of your manuscript, and update any in-text citations to match accordingly. Please see our Supporting Information guidelines for more information: http://journals.plos.org/plosone/s/supporting-information.

6. If the reviewer comments include a recommendation to cite specific previously published works, please review and evaluate these publications to determine whether they are relevant and should be cited. There is no requirement to cite these works unless the editor has indicated otherwise.

Additional Editor Comments:

Thank you for submitting your manuscript to PLOS ONE! I apologize once again for the delay. I have now secured reviews from two qualified animal behaviorists. They both cite important theoretical and methodological concerns with your manuscript that should be fully addressed before your next submission. Below, you will find a copy of their comments.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

Reviewer #1: Partly

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: I Don't Know

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?


Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

Reviewer #1: Yes

Reviewer #2: Yes

**********

Reviewer #1: General

My assessment of the current paper is mainly focused on some aspects of the statistical analysis of the experimental protocol designed by the authors.

The authors investigated the combined effect of species (humans/non-human primates) and task specificity in facial expressions (neutral, negative, positive) on conscious/unconscious mimicry, and used minimally invasive protocols for data collection.

Standardized stimuli (2D videos of human facial expressions) and computerized data analysis (facial mimicry) showed a close connection between the emotional behavior of non-human primates and that of humans. Even if the reported data and the relevant analysis and results are not totally new, further investigations may open new perspectives on the role of empathy inside evolution. Dynamic interactions between the varying facial expressions might also help communication inside and outside species, with possible interaction in the fields of human rehabilitation, as already done in various therapies with other mammals (horses, dogs, etc.).

Statistics

I have some concern about the number of significant differences in the paper, a general effect of false positive inflation, with subsequent small sample sizes, potentially contributing to the reduced replication rate. Indeed, this topic is of current concern in several research contexts. For example, please see a recent meta-analysis by Murphy et al., Estimating the Replicability of Sports and Exercise Science Research. Sports Med. 2025 Oct;55(10):2659-2679. doi: 10.1007/s40279-025-02201-

Of course I do recognize that authors used several correction method estimators (post hoc tests, effect size, Cohen coefficients ...). But this is non-sufficient considering the limited number of landmarks/ variables used in the study, and the large numbers for statistical tests. From a mathematical point of view, an increase of statistical tests also increments the number of statistically significant values (p<0.05) but they are false positives. Which is the biological significance or practical/ clinical meaning of the current extra p values?

Reviewer #2: The paper presents interesting findings on cross-species mimicry of facial expressions from humans viewing primate stimuli; the paper generally is easy to read, and the findings are novel and interesting for the field, especially for comparative affective science. The paper requires revision, however, and adding more nuance. I made various comments on how the paper can be improved. These mainly refer to more careful reflections and assumptions, better embedding with the literature and more information in terms of the choice of statistics and descriptive stats, see below.

Before delving into each part of the paper, I would like to make a general remark, which concerns the whole paper – and I believe we should all be mindful of as researchers, especially when drawing assumptions of affective states in nonlinguistic species: What actually is an emotion expression in animals? Are all facial expressions in animals emotional in nature, and how do we know the expression we see is linked to the assumed affective state? Without physiological markers, what makes us sure that the expressions presented here do not comprise voluntary facial movements used for coordination (Waller et al. 2017, Neuroscience & Biobehavioral Reviews; Heesen et al., 2022 Primate Cognitive Studies)? We know certain primates have some voluntary power over their expressions (e.g., Waller et al., 2015 and several other papers). I think the field generally, including this novel paper, should apply more care in the way we phrase our work and how we use terms. Here, the authors might consider avoiding terms similar to “emotional mimicry” or “emotion expressions”, since they did not use physiological measures to support the analyses, neither for apes nor humans.

Abstract

-There is a typo in the last phrase; I assume it could read as: …”suggesting that humans are able to empathize and mirror emotions of other species.”

-I wondered: since empathy contains several “building blocks”, the term itself might not be adequate as you only looked at mimicry - which is a necessary part of it though not sufficient. A simple adjustment in phrasing is sufficient here.

Introduction

-“more egalitarian macaque species” – who are you comparing them to, to other macaque species? Please specify for clarity.

-P.4: Whether or not humans can classify primate expressions and how well they perform may be dependent on many more aspects than listed here; some studies using attention measures show they can (Kret & van Berlo, 2021) though it depends on age (children struggle at it); others using a priming design showed that humans do not pay selective attention to congruent facial expression matching a former affective scene, suggesting difficulty in cross-species perception also in adults (Heesen et al., 2024, PNAS Nexus); it may be helpful for readers if the authors better embed their current work with already existing literature on cross-species perception.

- P.4: I have a remark regarding the following phrase: “and hence an ability to categorize expressions into positive versus negative is plausible.” In primate behaviour, not every context may be as clear-cut and thus separable on such a dichotomous scale – for example, contact-playing in primates or sexual contact in bonobos can be positive and negative at once (being close to a dominant and potentially aggressive other, while experiencing an arousing, affiliative contact); at the same time, mobbing group members or predators could combine positive and negative states at once (securing one’s dominance status, dominating others, potentially being attacked by another). Such a dichotomous valence scale might be even difficult to apply to humans, because most interaction contexts are a messy pool of feelings with pleasant and unpleasant aspects – I would recommend caution in strong categorizations into positive and negative, especially when comparing across species. The authors may want to acknowledge that the phenomenon of valence is not as clear-cut as we’d like it to be, especially for animal researchers. Rather than calling expressions positive or negative, the field may benefit from more nuance moving forward, e.g., focusing on the functionality, or context in which expressions occur, rather than presuming whether they experience a good or bad state.

-P.5: Can the authors state more about which AUs they considered active in the threat related display of the primates? The display they show in the Fig. 1 looks like a mix between scream and bared teeth face, rather than open mouth threat face, see Figure 2 Parr et al 2007 (between bared teeth and scream face) and see Fig 1 c Waller et al 2016. Can the authors cite a paper that shows such a threat face expression as indicated in their Figure? Typically, threat face involves an open mouth, though teeth are not necessarily visible (see Fig 1 a and b, Overduin-de Vries et al 2016). If the expressions in the negative category were bared-teeth faces, then this needs to be discussed, because it would rather fit the “fear” than the “anger” category and it is interesting that people still attributed anger more so than fear (Fig 2, results).

Methods

-P.9: Did the authors instruct people not to talk, or have they verified this? They may have talked about the videos or said things while viewing, such that mouth movements could be affected by that – It would be good to verify.

-In the videos of the primates, were there any bystanders or objects visible? This should be mentioned somewhere.

-P. 9: The sentence was cut off at the end of the page.

-Could the authors provide more information on what the test trial with the pant-hoot included?

-Did the authors ask participants whether they had any experience in watching apes – have they seen or observed them before? This is important as, like the authors mentioned, watching such videos could be affected by former training.

-Could the authors provide more information about the statistics chosen, models fitted, assumptions met within their LMMs? Use of software?

Results

-As indicated above, summary statistics would be helpful for all ratings.

-Since the authors, in their discussion, generalize their findings across primate species, it would be beneficial to include a graph for each primate species here, such that the reader can get an insight whether the pattern holds equally across species, and across apes and monkeys in particular.

Discussion

-P.16 “That participants modulate positive mimicry as a function of the affiliation and closeness they feel towards the expresser suggests that people may in some ways be more “careful” when sending an affiliative signal, a smile, in response to a play-face, than when sending an essentially antagonistic signal, a frown, in response to a threat display.” – An alternative explanation could be that people feel more close, and therefore express more affiliative expressions. What do the authors mean exactly by “careful” and how is this explanation justified, i.e., do you have papers that support this idea, and what would it tell us if humans are more “careful” in affiliative contexts? In some way, one would expect the opposite: one should be most careful in negative contexts because those contexts bear danger and potential injuries. Also, why would you expect positive “overtures” (what is this?) to be more costly than coordinating in a potentially life-threatening conflict/fight? I would recommend adding some theoretical background and embedding the claims/conclusions/assumptions into the literature.

-At the end of the discussion, there is a random sentence on page 18 starting mid way, does it belong to the methods?

**********

If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy

Reviewer #1: No

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

To ensure your figures meet our technical requirements, please review our figure guidelines: https://journals.plos.org/plosone/s/figures

You may also use PLOS’s free figure tool, NAAS, to help you prepare publication quality figures: https://journals.plos.org/plosone/s/figures#loc-tools-for-figure-preparation.

NAAS will assess whether your figures meet our technical requirements by comparing each figure against our figure specifications.

PLoS One. 2026 Mar 11;21(3):e0342196. doi: 10.1371/journal.pone.0342196.r003

Author response to Decision Letter 1


16 Jan 2026

Thank you for allowing us to revise our manuscript. We carefully read the reviewers’ comments and addressed them as outlined below.

We also moved the Ethics and Data availability statements to the end of the methods section. Repository names for the Data availability statement and the preregistration are included there.

The copyright for the images in Figure 1 is held by two of the authors, and we have added this information to the figure caption. We have also added a file entitled “Granted Permissions” in which the authors (Marina Davila-Ross and Katja Liebal) agree that the images included in the figure may be published under the CC BY 4.0 license.

Review Comments to the Author

Reviewer #1:

1. General

My assessment of the current paper is mainly focused on some aspects of the statistical analysis of the experimental protocol designed by the authors.

The authors investigated the combined effect of species (humans/non-human primates) and task specificity in facial expressions (neutral, negative, positive) on conscious/unconscious mimicry, and used minimally invasive protocols for data collection.

Standardized stimuli (2D videos of human facial expressions) and computerized data analysis (facial mimicry) showed a close connection between the emotional behavior of non-human primates and that of humans. Even if the reported data and the relevant analysis and results are not totally new, further investigations may open new perspectives on the role of empathy inside evolution. Dynamic interactions between the varying facial expressions might also help communication inside and outside species, with possible interaction in the fields of human rehabilitation, as already done in various therapies with other mammals (horses, dogs, etc.).

The description of the study is not quite accurate as the stimuli consisted of short videos showing only primate expressions. Specifically, the goal of the study was to assess whether human observers can a) recognize the valence of primate expressions and b) mimic the expressions in terms of valence. We found both to be the case. The design did not allow for dynamic interactions strictly speaking.

2. Statistics

I have some concern about the number of significant differences in the paper, a general effect of false positive inflation, with subsequent small sample sizes, potentially contributing to the reduced replication rate. Indeed, this topic is of current concern in several research contexts. For example, please see a recent meta-analysis by Murphy et al., Estimating the Replicability of Sports and Exercise Science Research. Sports Med. 2025 Oct;55(10):2659-2679. doi: 10.1007/s40279-025-02201- Of course I do recognize that authors used several correction method estimators (post hoc tests, effect size, Cohen coefficients ...). But this is non-sufficient considering the limited number of landmarks/ variables used in the study, and the large numbers for statistical tests. From a mathematical point of view, an increase of statistical tests also increments the number of statistically significant values (p<0.05) but they are false positives. Which is the biological significance or practical/ clinical meaning of the current extra p values?

We share the reviewer’s concerns regarding power and sample size. We included a power simulation that was based on online studies of human mimicry of other humans but also of avatars conducted in our laboratory. We started out with the mean effect size from that body of research, and the power simulation suggested that N > 200 would allow for 90% power. We collected data from 230 participants. Data from 18 participants were excluded due to technical problems or noncompliance with the video instructions (e.g., eating during the experiment, face not visible on camera); thus we retained 212 participants. The power curve for the simulation is included in the supplementary materials.

As regards the statistical tests, the focal tests reported in the manuscript are the LMM equivalent of two one-way analyses of variance with the factor expression type for the negativity/positivity rating scales with associated alpha-corrected post-hoc tests (Tukey) and a two-way analysis with AU and expression type. The latter was followed up by a priori Helmert contrasts by expression type. A further correlational analysis assessed whether the Helmert contrast score varied as a function of the rated positivity of the expression and the self-reported closeness to the primate, with associated post-hoc tests for the significant interaction. These are the minimal tests needed to assess the hypotheses. Hence, cherry-picking from an excessive array of tests was not possible. Note that all significant F-values were significant at p < .001 (as well as most post-hoc tests), suggesting stable effects. Post-hoc tests used family-wise alpha protection via the Tukey test. We also report semi-partial eta², allowing the reader to judge the size of the effect.

Given that the power simulation suggests that we used an adequate sample size and the sparse testing, we feel that our statistical analyses are commensurate with the reviewer’s desire for adequate power and stringent statistical tests.

Reviewer #2:

1. The paper presents interesting findings on cross-species mimicry of facial expressions from humans viewing primate stimuli; the paper generally is easy to read, and the findings are novel and interesting for the field, especially for comparative affective science. The paper requires revision, however, and adding more nuance. I made various comments on how the paper can be improved. These mainly refer to more careful reflections and assumptions, better embedding with the literature and more information in terms of the choice of statistics and descriptive stats, see below.

We thank the reviewer for these encouraging words. We found the suggestions helpful and have tried to implement them accordingly.

2. Before delving into each part of the paper, I would like to make a general remark, which concerns the whole paper – and I believe we should all be mindful of as researchers, especially when drawing assumptions of affective states in nonlinguistic species: What actually is an emotion expression in animals? Are all facial expressions in animals emotional in nature, and how do we know the expression we see is linked to the assumed affective state? Without physiological markers, what makes us sure that the expressions presented here do not comprise voluntary facial movements used for coordination (Waller et al. 2017, Neuroscience & Biobehavioral Reviews; Heesen et al., 2022 Primate Cognitive Studies)? We know certain primates have some voluntary power over their expressions (e.g., Waller et al., 2015 and several other papers). I think the field generally, including this novel paper, should apply more care in the way we phrase our work and how we use terms. Here, the authors might consider avoiding terms similar to “emotional mimicry” or “emotion expressions”, since they did not use physiological measures to support the analyses, neither for apes nor humans.

We agree that we cannot know what the animal ‘feels’ – just as we cannot really know what humans feel. We use the term emotional mimicry because we take an observer-focused stance in this research and, in this case, we focus on the emotions that the (human) observers attribute to the primate. As is the case in emotional mimicry research in the human-human context, this is an attribution and it may be wrong. Nonetheless, observers act in accordance with these attributions. Hence, we opted to keep the terms emotional mimicry and emotional expression. However, we have added the paragraph below to explain our research perspective and to nuance these terms accordingly.

In this context it is important to emphasize that we use an observer-focused perspective when using the terms emotional mimicry and emotional expression. It is difficult, if not impossible, to assert what any organism, humans and other primates included, experiences when showing a given expression. However, humans nonetheless not only attribute emotions to such expressions (18) but also act accordingly (19). Thus, to the human observer the animal shows an emotion, and the observer will mimic (or not) the expression in line with this assumption. Hence, human empathy with a primate can only be based on the human observers’ attributions, not on ground-truth knowledge about inner states.

3. Abstract

-There is a typo in the last phrase, I assume it could read as: …”suggesting that humans are able to emphasize and mirror emotions of other species.”

We thank the reviewer for catching this typo.

4. -I wondered: since empathy contains several “building blocks”, the term itself might not be adequate as you only looked at mimicry - which is a necessary part of it though not sufficient. A simple adjustment in phrasing is sufficient here.

Given the word limitations for the abstract, we added this important point to the introduction.

Specifically, mimicry is indicative of an affiliative stance towards the mimickee (1, 13) and often considered a “low road” to empathy (15), as such it is one important facet of human empathy.

5. Introduction

-“more egalitarian macaque species” – who are you comparing them to, to other macaque species? Please specify for clarity.

We thank the reviewer for the comment. We have revised this passage of the text, also in light of recent research on facial mimicry in other macaque species.

6. -P.4: Whether or not humans can classify primate expressions and how well they perform may be dependent on many more aspects than listed here; some studies using attention measures show they can (Kret & van Berlo, 2021) though it depends on age (children struggle at it); others using a priming design showed that humans do not pay selective attention to congruent facial expression matching a former affective scene, suggesting difficulty in cross-species perception also in adults (Heesen et al., 2024, PNAS Nexus); it may be helpful for readers if the authors better embed their current work with already existing literature on cross-species perception.

We were aware of this research; however, we originally decided not to include it in our literature review, which specifically focused on studies where participants had to label the expressions. In essence, these two studies present opposing findings based on different tasks, which both differ from our task and, in the case of Heesen et al., are also based on a very small sample. Nonetheless, given the sparsity of any research involving the perception of primate expressions by humans, we now include these studies in the literature review.

This notion is also supported by a study showing that adult humans rate complex scenes with multiple protagonists, who are either human or bonobos, similarly with regard to the valence and arousal of the picture. Even though the focus here was not on individual facial expressions per se, the expressive behavior of the bonobos played a role, as children tended to misidentify pictures including silent bared-teeth displays as positive (29). Yet, in a task where attentional bias to emotional versus neutral expressions was assessed, the bias shown for human expressions was not found for images showing chimpanzee expressions, suggesting a lack of relevant expertise.

7. P.4: I have a remark regarding the following phrase: “and hence an ability to categorize expressions into positive versus negative is plausible.” In primate behaviour, not every context may be as clear cut and thus separable on such a dichotomous scale – for example, contact-playing in primates or sexual contact in bonobos can be positive and negative at once (being close to a dominant and potentially aggressive other, while experiencing an arousing, affiliative contact); at the same time, mobbing group members or predators could combine positive and negative states at once (securing one’s dominance status, dominating others, potentially being attacked by another). Such a dichotomous valence scale might be even difficult to apply to humans, because most interaction contexts are a messy pool of feelings with pleasant and unpleasant aspects – I would recommend caution in strong categorizations into positive and negative, especially when comparing across species. The authors may want to acknowledge that the phenomenon of valence is not as clear-cut as we’d like it to be, especially for animal researchers. Rather than calling expressions positive or negative, the field may benefit from more nuance moving forward, e.g., focusing on the functionality, or context in which expressions occur, rather than presuming whether they experience a good or bad state.

We very much agree with this view. This is why we did not ask participants to classify expressions as either positive or negative but to indicate the degree of positivity and negativity of the expression on two separate dimensional scales. This allows expressions to be described as both positive and negative to some degree. To avoid any confusion in that regard, we rephrased the sentence to read:

and hence an ability to rate expressions as positive or negative is plausible.

8. -P.5: Can the authors state more about which AUs they considered active in the threat related display of the primates? The display they show in the Fig. 1 looks like a mix between scream and bared teeth face, rather than open mouth threat face, see Figure 2 Parr et al 2007 (between bared teeth and scream face) and see Fig 1 c Waller et al 2016. Can the authors cite a paper that shows such a threat face expression as indicated in their Figure? Typically, threat face involves an open mouth, though teeth are not necessarily visible (see Fig 1 a and b, Overduin-de Vries et al 2016). If the expressions in the negative category were bared-teeth faces, then this needs to be discussed, because it would rather fit the “fear” than the “anger” category and it is interesting that people still attributed anger more so than fear (Fig 2, results).

We did not use FACS to code the expressions. Rather, the expressions were independently rated with regard to the expression shown by two of the authors, Elisabetta Palagi and Marina Davila-Ross, who are experts in primate expressions. From these, we selected five expressions for each category on which the raters agreed. The participant ratings support the notion that threat faces, and not silent bared-teeth displays, were indeed selected for the negative category. Importantly, however, the raters – like the participants – rated the whole video sequence.

9. Methods

-P.9: Did the authors instruct people not to talk, or have they verified this? They may have talked about the videos or said things while viewing, such that mouth movements could be affected by that – it would be good to verify.

As noted in the method section, participants who did not comply with instructions were excluded from the study. This included activities like eating, but also talking or chewing gum.

10. -In the videos of the primates, were there any bystanders or objects visible? This should be mentioned somewhere.

We note in the introduction that the videos showed the animals in natural surroundings. We further note in the method section that only a single individual was shown in each video.

11. -P. 9: The sentence was cut at the end of page.

We thank the reviewer for catching this problem.

12. -Could the authors provide more information on what the test trial with the panthoot included?

This was simply a single trial to familiarize the participants with the procedure. This is now specified.

13. -Did the authors ask participants whether they had any experience in watching apes, have they seen or observed them before? This is important as, like the authors mentioned, watching such videos could be affected by former training.

We did not ask this question. However, given the participant pool (Prolific Academic), such knowledge would have been rare.

14. -Could the authors provide more information about the statistics chosen, models fitted, assumptions met within their LMMs? Use of software?

A markdown for the analyses is included in the supplementary materials. However, for ease of reference

Attachment

Submitted filename: Response to reviewers.docx

pone.0342196.s002.docx (30KB, docx)

Decision Letter 1

Brittany Florkiewicz

19 Jan 2026

Evolutionary Echoes of Emotion: Humans Mimic Other Primate Expressions

PONE-D-25-31307R1

Dear Dr. Hess,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication. Thank you for thoroughly addressing the reviewers' comments and concerns. Your article will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager® and clicking the ‘Update My Information’ link at the top of the page. For questions related to billing, please contact billing support.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Brittany N. Florkiewicz, Ph.D.

Academic Editor

PLOS One

Additional Editor Comments (optional):

Thank you for diligently addressing all of the reviewers' comments and concerns! After reviewing your revisions, I believe you have sufficiently addressed everything. However, there is one minor issue that can be resolved during the proofing stage: the quality of the images of the example NHP facial signals is relatively low on our end. Please ensure that these images are of the highest quality when you proofread your article.

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File. R Markdown.

    This is the markdown for the R analyses.

    (PDF)

    pone.0342196.s001.pdf (197.1KB, pdf)

    Data Availability Statement

    The data files and markdown are available at [https://osf.io/n4cdj/files/osfstorage].

