Abstract
Humans and other animals communicate a large quantity of information vocally through nonverbal means. Here, we review the domains of animal vocalizations, human nonverbal vocal communication and computer-mediated communication (CMC), under the common thread of emotion, which, we suggest, connects them as a dimension of all these types of communication. After reviewing the use of emotions across domains, we focus on two concepts that have often been opposed to emotion in the animal versus human communication literature: control and meaning. Non-human vocal communication is commonly described as emotional, and therefore as precluding either control or meaning; in contrast, the emotional dimension of human nonverbal signals does not prevent them from being perceived as both intentionally produced and meaningful. Along with others, we disagree with this position, highlighting here that emotions should be integrated across species and across modalities, including the written modality. We conclude by delineating ways in which each of these domains can meaningfully inform the others, the debates in their respective fields, and more generally the debate on the evolution of communication.
Keywords: emotional communication, animal vocalizations, prosody, evolution of communication
Introduction
Over the last few decades, the topics of communication and emotion have increasingly been connected in a diversity of domains of research, such as human communication, non-human animal (from henceforth, animal) communication, and more recently computer-mediated communication (CMC). This has been triggered either by basic research interests or by debates aiming to fine-tune our understanding of the connections between human and animal communication. Considerations of both the similarities and the differences between the latter have also been fueled by a flourishing interest in the evolutionary origins of human language.
Here, we contend that much of the discussion on the similarities and differences between human and non-human communication systems also hinges on theoretical positions on emotion. While in humans the co-occurrence of a meaningful, semantic aspect and an emotional dimension is not controversial, with communication considered inseparable from emotion (Reilly & Seibert, 2003), in animals much of the research has so far aimed to determine whether animal communication exclusively reflects affect or meaning (e.g., Wheeler & Fischer, 2012). In the vocal domain, the vocalizations of animals can be affected by emotions, as in affective bursts or rough vocalizations without clear temporal structure. However, this is also the case for human nonverbal sounds such as laughter or cries. Additionally, the prevalence of affect in human communication has naturally led humans to devise ways of conveying emotion even when they engage with each other indirectly, through means of a computer. The aim of this review is to provide an overview of the state of knowledge on nonverbal vocal/auditory communication and emotions in animals and humans, as well as nonverbal emotion in the more recent (written) CMC forms in humans. Overall, we aim to compare these three fields to highlight common ground and areas for future research, uniting them through the common thread of affect.
How Emotion Is Understood Within the Three Fields
We start by summarizing definitions across the fields of animal communication, human communication and CMC, which overlap to some extent. Interest in the study of animal emotions increased exponentially around 30 years ago, propelled by the needs of the pharmaceutical industry (e.g., treatment of human disorders), comparative neuroscience (e.g., development of animal models of human neurological disorders) and animal welfare (Fraser, 2009; Panksepp, 2011). Nowadays, several frameworks allow the study of animal emotions (e.g., Désiré et al., 2002; Mendl et al., 2010), encompassing the four components of emotions accessible in these species, along with their indicators and the tools to measure them: neural (e.g., brain activity), peripheral physiological (e.g., heart rate, stress hormones), cognitive (e.g., cognitive bias), and behavioural (e.g., body postures, facial and vocal expressions) indicators (Kremer et al., 2020). This focus on four components excludes the generally accepted fifth component of emotion accessible in humans, that is, the subjective, conscious component (Sander, 2013). The frameworks developed for studying animal emotions have been adapted from human psychology: appraisal theories (Ellsworth & Scherer, 2003) suggest that discrete or modal emotions arise as a function of specific features of the situation and how the animal appraises them (Désiré et al., 2002). By contrast, the two-dimensional framework (Russell, 1980) suggests that emotions can be mapped according to their arousal (bodily excitation) and valence (positive vs. negative), which can be assessed based on the pleasant (rewarding/attractive) or unpleasant (punishing/repulsive) nature of the situation triggering the emotion (Mendl et al., 2010).
Animal emotions are commonly described as relatively short-term reactions to an external or internal stimulus or event of importance for the organism, characterized by coordinated neural, physiological, cognitive and behavioral changes (Paul & Mendl, 2018). In the absence of clear evidence for feelings, the term ‘emotion’ has been used to refer to emotion-like processing, independently of the degree to which it is consciously experienced (Paul et al., 2020). The more general term ‘affective states’ encompasses short- and longer-term states (e.g., ‘moods’) that are valenced. Emotions and moods play an important role in animal survival: emotions guide responses to stimuli, while moods inform expectations about the environment (Mendl et al., 2010). By contrast, motivational states reflect the likelihood of performing a given behaviour, or the force that drives this behaviour (‘drives’ or ‘wants’; Dawkins, 2008). Motivation is strongly influenced by underlying affective states, but is considered a distinct phenomenon (Gygax, 2017).
In the field of human affective science, different kinds of affective phenomena have also been described in the literature over the last 30 years (e.g., Sander, 2013). In a multi-componential perspective, emotions are concomitant modifications or synchronizations among different subcomponents of the organism. These include cognition, in which appraisal is a key component (e.g., appraisal of relevance, implication, causality, coping potential, and norms); the peripheral physiological response, such as changes in respiration; and expressive behaviours, including vocal or gestural channels. The remaining subcomponents include motivation, in which the concept of action tendencies (i.e., the ‘internal motive states that are hypothesized to underlie a felt urge […]’; Frijda, 2009; Sander et al., 2018, p. 223) is central; and feeling, conceptualized as the conscious component of emotion that integrates and monitors the other components (Grandjean et al., 2008; Sander et al., 2018; Scherer, 1984). In this perspective, the main categories conceptualized as basic emotions in an evolutionary discrete perspective (Ekman, 1992), including happiness, fear, sadness, disgust, anger, and surprise, are theorized as modal emotions. Beyond these five or six modal emotions, that is, the most often observed emotions and those subject to explicit discrete categorization, the multi-componential perspective proposes that the appraisal of a situation or an event can induce an infinite variety of emotions. Note that appraisal can be effortful and conscious, such as in the case of an explicit evaluation of causality, but can also occur at more basic levels, such as overlearned cognitive scripts or habits at schematic or sensorimotor levels (Leventhal & Scherer, 1987).
In the field of CMC, at least with regards to its historically predominant text-based modality, emotions are most often conceptualized as content that can be shared, expressed in a more or less explicit fashion by the sender of a message and recognized or inferred by the recipient of a message (Derks et al., 2008). The preference for this viewpoint, as opposed notably to the construal of emotion as a component of human experience, may have been partly motivated by the observation that the expression of emotional content in CMC is at least separated from the corresponding emotional experience by a number of technological steps, and sometimes does not even correspond to an actual emotional experience. Regardless of the motivation for the field's prevalent conceptual perspective on emotion, a defining characteristic of the overall direction of research in this domain is the gradual shift from a paradigm initially centered on the methods of psychology and sociology to one that emphasizes the methods of data science and corpus linguistics, with researchers taking advantage of the very large datasets generated by social network services. The strong commitment to empirical methodologies has made it all the more necessary to rely on formal models of emotion, chief among which are Ekman's basic categories and various instances of dimensional representations that minimally involve valence and arousal and sometimes additional features such as dominance or surprise (Wood et al., 2018).
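The dimensional representations mentioned above lend themselves to a simple computational illustration: emotion categories can be placed as points in a valence-arousal plane, and an observed expression mapped to the nearest labelled category. The sketch below is purely illustrative; the coordinates are our placement assumptions, not empirically normed ratings.

```python
import math

# Illustrative valence-arousal coordinates (both in [-1, 1]); these
# placements are assumptions for the sketch, not normed ratings.
VA = {
    "joy":     ( 0.8,  0.5),
    "anger":   (-0.7,  0.7),
    "sadness": (-0.7, -0.5),
    "calm":    ( 0.5, -0.6),
}

def nearest_category(valence, arousal):
    """Map a point in the valence-arousal plane to the closest
    labelled modal emotion (Euclidean distance)."""
    return min(VA, key=lambda name: math.dist(VA[name], (valence, arousal)))

print(nearest_category(0.9, 0.4))    # joy
print(nearest_category(-0.6, -0.4))  # sadness
```

Dimensional models extended with further features (e.g., dominance) work the same way, simply adding coordinates to each point.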
Summary of Approach to Emotions Across Fields and Structure of the Article
Overall, in our view, there are both similarities and differences in the way emotions have been dealt with in the three domains (Table S1; Figure 1). Animal emotion research remains at a disadvantage, by having to build upon theoretical frameworks inspired by human research and because of humans’ unrivalled use of technology that can decouple emotion from the medium. Yet, we will assume here that some aspects of emotion, such as the appraisal aspect, are present in some form in other species (Désiré et al., 2002). It is worth noting that these appraisals can be implemented at different levels (Leventhal & Scherer, 1987), including aspects that cannot be made explicit, even in humans. Conversely, we contend that the sometimes overly cognitive approach to emotion applied to humans might not be necessary (see below for discussion).
Figure 1.
One way to understand nonverbal emotional communication is that it involves a sender and a receiver sharing emotional information and establishing shared knowledge about an important external, internal, or contextual state. The sender (left side) experiences some emotional state triggered by a reference state in cases of naturally occurring emotional expressions. Exceptions are deceptive emotional expressions, which have only a weak link, or none at all, to reference states. This reference state (top box) can be a variety of objects, providing a meaningful background for the experience and expression of the emotional state. Meaning can be either specific (e.g., signalling the presence of a specific predator type) or unspecific (e.g., the situation or context is frightening without reference to a specific object). The emotional state of the sender (left box) can vary from basic/simple states to more complex and mixed emotional states (see text for description). Emotional states in the sender eventually lead to nonverbal expressions, expressed more or less voluntarily. The nonverbal expressions (right box) can be of various natures and modalities across species and communication tools. The receiver (right side) aims to decode the emotional information encoded in the sender's expressions, potentially by mirroring some of the emotional state of the sender. Successful communication happens when a shared meaning is established between the sender and the receiver, or when a receiver mistakes deceptive emotional signals from the sender as truthful. Unsuccessful communication happens when the sender and receiver fail to agree on a shared meaning, as in the case of conflicting interactions.
Finally, while we acknowledge that an affective dimension can be found in various other aspects of communication, such as in the choice of words themselves or in other means of communication (e.g., gestures; Heesen et al., 2022), our review will be solely concerned with nonverbal emotional communication. In the following sections, we will first introduce the nonverbal communication of emotions across the fields before looking at specific aspects that have shaped the discussion over the last few decades. We will discuss the relationship between emotions and control, often addressed within the general debate on intentionality (Townsend et al., 2017), and between emotions and meaning of nonverbal utterances in all three fields of research, using the same order in each section: animal vocalizations, human vocalizations, and CMC. Finally, we will conclude by underlining the similarities and differences across fields and delineate a common work plan to progress in our understanding of the co-occurrence of affect and communication.
Nonverbal Communication of Emotions
Nonverbal Communication of Emotions in Animal Vocalizations
Vocal expression of emotions in animals, as well as the perception of these vocal cues by conspecifics, has been revealed in many species (Briefer, 2012, 2018). According to the ‘motivation-structural rules’, the features of bird and mammal vocalizations vary in a predictable way depending on whether the context is fearful (high-frequency and tonal sounds), friendly (soft, low-frequency, amplitude-modulated, and rhythmic sounds) or aggressive/hostile (low-frequency, loud, and noisy sounds) (August & Anderson, 1987; Morton, 1977). In addition, there are similarities in how different species express emotional arousal and valence (Briefer, 2012, 2020); in most species, situations of opposite valence are characterized by the production of different call types (i.e., functionally and acoustically distinct units), while changes in arousal or motivation result in more subtle modifications of the acoustic structure of the sounds (increase in amplitude, rate and frequencies) (Briefer, 2012; Manser, 2010). However, some call types (e.g., contact calls) can also be produced in both positive (e.g., social reunion, foraging) and negative contexts (e.g., social separation, isolation), in which case their acoustic structure changes, as shown for instance in many ungulates (e.g., domestic and wild horses, pigs, goats, sheep and cows), with, often, shorter durations and lower and less variable frequencies in positive contexts (Briefer, 2020).
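The logic of the motivation-structural rules can be caricatured as a simple decision rule mapping coarse acoustic features to broad contexts. The sketch below is purely illustrative: the feature names and thresholds are our assumptions for demonstration, not values measured in any species.

```python
def motivation_structural_guess(mean_freq_hz, tonality, loudness_db):
    """Toy classifier caricaturing Morton's (1977) motivation-structural
    rules. Thresholds are arbitrary illustrations, not empirical values.
    tonality: 0 (noisy) .. 1 (tonal); loudness: rough dB SPL."""
    if mean_freq_hz > 1000 and tonality > 0.7:
        return "fearful"      # high-frequency, tonal sounds
    if mean_freq_hz < 500 and tonality < 0.3 and loudness_db > 80:
        return "aggressive"   # low-frequency, loud, noisy sounds
    if mean_freq_hz < 500 and loudness_db < 60:
        return "friendly"     # soft, low-frequency sounds
    return "ambiguous"

# Illustrative calls on made-up feature values:
print(motivation_structural_guess(1500, 0.9, 65))  # fearful
print(motivation_structural_guess(300, 0.1, 90))   # aggressive
print(motivation_structural_guess(250, 0.8, 50))   # friendly
```

Real analyses, of course, work with many more acoustic parameters and statistical rather than hard-threshold classification.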
Research in the field of animal communication has, in recent years, mainly focused on the expression of emotional context, arousal or valence, as well as the discrimination of these sounds and their perception/contagion by receivers (see Briefer, 2012, 2018; Scheiner & Fischer, 2011; Zimmermann et al., 2013 for reviews). More recently, expression and perception of emotions across species has been increasingly studied, to investigate the evolution of vocal expression of emotion and test the hypothesis that the expression of arousal, and maybe also valence, has been conserved throughout evolution (Belin et al., 2008; Filippi et al., 2017; Maigrot et al., 2022; Smith et al., 2018). Some work has also focussed on variation within taxonomic families, highlighting both similarities and striking differences in how domestic and Przewalski's horses express emotions (Briefer et al., 2015; Maigrot et al., 2017), and how domestic pigs and wild boars do so (Briefer et al., 2019; Maigrot et al., 2018). Both comparisons have revealed that some acoustic parameters are used in the same way by both species to encode emotional valence (e.g., duration is longer in negative than positive contexts in both pigs and wild boars), while other parameters change in opposite directions (e.g., grunt formants are higher in positive compared to negative contexts in domestic pigs, while the opposite occurs in wild boars; Briefer et al., 2019; Maigrot et al., 2018). By contrast, within-species variation in vocal emotion expression has, to our knowledge, not yet been explored in other animals. Such work could be done by, for example, comparing vocal expression of emotions between wild populations or between domestic breeds (e.g., Papadaki et al., 2021).
In humans, testing the hypothesis of universality of emotions across different cultures has a long history, for example in the cases of facial (e.g., Ekman & Friesen, 1971) or vocal recognition (Sauter et al., 2010b), although recent studies have highlighted significant differences between cultures regarding facial expression (Jack et al., 2012), or the desired expressivity, intensity, and preferred emotion during infant emotion expression (Bard et al., 2021). From our point of view, and as was done in the latter study, the universality of emotions could also be studied in a range of animals. Finally, an increasing amount of work is aimed at deciphering how emotional and referential information is integrated in animal signals, as well as comparing emotional and intentional communication (Price et al., 2015; Seyfarth & Cheney, 2003; Slocombe & Zuberbühler, 2007), as will be discussed in the relevant sections on reference and control.
Nonverbal Communication of Emotions in Human Vocalizations
Humans can experience a broad variety of basic and complex emotions, and these emotional states influence human behaviour, including the expressive signals used to communicate these emotions to other individuals and to influence the behaviour of conspecifics. Researchers have introduced the concept of emotional prosody to account for situations where emotions have an impact on vocalizations, specifically by acting on respiration, phonation, or articulation (e.g., Grandjean et al., 2006). Indeed, during an emotional episode, the production of vocalizations is modified, and the information provided by the speaker, voluntarily or involuntarily, can be used by the listener to infer the speaker's emotional state and then to adjust their behaviour. In this view, vocal production can be characterized by different physical acoustic parameters such as the fundamental frequency (f0), which corresponds at the perceptual level to pitch; the energy, related to loudness; and the spectral components, referring to voice quality. Emotional contents are characterized by different patterns of acoustic parameters (Banse & Scherer, 1996; Sauter et al., 2010b), and those can be used by the listener, through their perceptual correspondence, to infer the emotional state of the speaker.
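To give a concrete sense of two of these acoustic parameters, f0 and energy can be estimated from a waveform with a few lines of signal processing. The sketch below uses a crude autocorrelation-based estimate on a synthetic tone; real studies would rely on dedicated pitch trackers.

```python
import numpy as np

def estimate_f0_and_rms(signal, sr):
    """Crude autocorrelation-based estimate of the fundamental frequency
    (f0), together with RMS energy; a sketch of two of the acoustic
    parameters discussed in the text (pitch and loudness)."""
    sig = signal - signal.mean()
    ac = np.correlate(sig, sig, mode="full")[len(sig) - 1:]
    lo, hi = sr // 500, sr // 50          # search f0 between 50 and 500 Hz
    lag = lo + int(np.argmax(ac[lo:hi]))  # lag of the strongest periodicity
    return sr / lag, float(np.sqrt(np.mean(sig ** 2)))

sr = 8000
t = np.arange(sr) / sr                    # one second of samples
tone = 0.5 * np.sin(2 * np.pi * 220 * t)  # synthetic 220 Hz "voice"
f0, rms = estimate_f0_and_rms(tone, sr)
print(round(f0))  # close to 220
```

Spectral parameters (voice quality) would require further analysis, for example of the distribution of energy across frequency bands.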
An important topic of discussion for nonverbal human expression of emotions concerns their effectiveness, particularly the acoustic distinctiveness of emotions expressed in this channel. Previous research has shown that variation in underlying emotions results in largely distinctive vocal nonverbal expressions (Frühholz et al., 2021; Patel et al., 2011; Sauter et al., 2010a). The acoustic distinctiveness however is not exclusive, as there is also acoustic overlap and confusion between some of these vocal expressions, such as fear sharing features with achievement and anger (Sauter et al., 2010a), or intense joy sharing features with panic fear (Patel et al., 2011). The question of effectiveness has a second component, related to how well listeners can detect and recognize the emotions portrayed in these various nonverbal expressions. Again, listeners distinguish and classify the emotions expressed in nonverbal vocalizations usually well above chance level (Lima et al., 2019; Sauter et al., 2010a) and across cultures (Sauter et al., 2010b). However, some confusions also occur when listeners classify such vocal emotions, such as misclassifying surprise as disgust or relief, anger as disgust or fear (Sauter et al., 2010a), or confusing fear and amusement vocalizations (Lima et al., 2019). Such data both support (by allowing classification) and challenge (because of the overlap) the basic emotions model (Ekman, 1992). A consensual view, which we endorse, would be that emotion recognition is mostly a multi-modal process, in that ambiguous vocal expression can be disambiguated by additional sensory information from other modalities (Thorstenson et al., 2022).
Other topics of discussion concern the effect of intensity variations in nonverbal expressions; the correspondence between acted and spontaneous nonverbal emotion expression (‘acted’, ‘play-acted’, or ‘posed’ mainly refer to cases where a person is not in an emotional state but expresses an emotion as if they were; such expressions are almost always voluntary; see Jürgens et al., 2011; Serrat et al., 2020); and the sensitivity of certain brain systems to emotional vocal expressions. However, we will not cover these aspects in our review.
Nonverbal Communication of Emotions in Text Messages
Besides vocalizations, human face-to-face (F2F) communication uses various nonverbal or paralinguistic means to express socio-emotional content, including facial expressions, gestures, or physical proximity. Such cues are obviously missing in CMC, at least as far as its text-based modality is concerned. For this reason, early research in this field typically adopted the ‘cues filtered-out’ perspective, whereby CMC was perceived as a defective channel to convey socio-emotional content (Short et al., 1976). It was not long, however, before it was recognized that CMC users find ways to circumvent the channel's limitations and take advantage of its possibilities to fulfill their communicative needs (Derks et al., 2008; Rice & Love, 1987; Walther, 1992), leading CMC, given sufficient time, to become just as effective as F2F as a means of conveying socio-emotional and relational content—and even more effective in certain circumstances (Walther, 1996). For example, the expression of negative emotions appears to be facilitated by the reduced social presence and visibility that are characteristics of CMC (Derks et al., 2008).
Over time, a wide array of new communicative devices has emerged in CMC, which function as substitutes for F2F paralinguistic cues. The earliest of these paralinguistic devices rely on orthographic and typographic conventions that were already well identified four decades ago: non-standard uses of punctuation and other symbols such as asterisks, parentheses and blanks, non-standard word spellings, capitalization, interjections, acronyms, parenthetical tone or mood descriptions, and so on (Carey, 1980).
In the early 1980s, the emergence of emoticons (character sequences such as ‘:-)’, representing various facial expressions) inaugurated a trend that would gain increasing traction until the present day, namely the conversational use of various types of graphical devices in CMC, also known as ‘graphicons’ (Herring & Dainas, 2017): these include emoticons, emojis, animojis, stickers, GIFs, images, and videos, most of which have become part of CMC as a result of successive technological advances. In this review, we will only be concerned with those graphicons that are encoded by textual means, that is, emoticons (sequences of symbols and punctuation signs usually representing facial expressions) and emojis (graphical symbols representing facial expressions, gestures, objects, concepts, etc.). Emojis are by far the most frequently used nonverbal cues at the time of writing (Dürscheid & Siever, 2017; Riordan & Kreuz, 2010), and evidence suggests that they have taken over several communicative functions previously assigned to other types of nonverbal cues (Pavalanathan & Eisenstein, 2016). Yet we believe that the study of other graphicon types will become increasingly important for understanding the evolution of CMC.
Summary of Nonverbal Communication of Emotion
Overall, in our view, the current trend of research in the three domains has mostly been to characterize how different species communicate and understand emotions (Table S1). There are some unquestionable similarities, such as the reliance on various physical properties of sound to convey emotion, although the development of new technologies has also led humans to innovate with the goal of conveying emotions more clearly. This flexibility with respect to emotion is crucial and may represent a major evolution in the human emotional communication system, as can be evidenced in two fundamental aspects of communication: control and meaning.
Control in Emotional Communication
Control in Animal Vocalizations
There is growing evidence that the level of control animals have over their vocalizations is a continuum: at a basic stage, simple control over respiration rate can affect the rate of call production; at a second stage, control over the respiratory pressure allows modification of the amplitude over time; then, muscle tension control enables changes in the frequency of the sounds produced; at the latest stage, high and direct cerebral forebrain control over vocal production can lead to a refinement of the sounds. Control can also consist of inhibiting sound production. Overall, as the level of cerebral control increases, vocal production becomes less dependent on the physiological state (and hence emotion) of the producer, and more prone to voluntary manipulation (Tchernichovski & Oller, 2016). This can result, amongst others, in vocal imitation (or ‘complex vocal learning’, that is, the ability to produce entirely new sounds by imitation) (Tchernichovski & Oller, 2016; Tyack, 2020). Other than in humans, such an ability for imitation has so far been found only in three vertebrate groups: birds (songbirds, hummingbirds and parrots), nonhuman primates, and a few nonprimate mammals amongst cetaceans, pinnipeds, elephants and bats (Lattenkamp & Vernes, 2018). Since most species have relatively less control over their vocalizations than humans (Jürgens, 2009), emotions are expected to influence vocal production in animals even more directly than in humans, whose voice features depend, notably, on socio-cultural and linguistic conventions as well as additional intentional manipulations (Scheiner & Fischer, 2011).
The topic of control in animal vocalizations has been intertwined with the long-standing debate on intentional (goal-directed) communication in non-humans (Marler et al., 1992; Sievers et al., 2017; Townsend et al., 2017). While animal scientists have strived to adopt a common position to isolate criteria of intentional production in vocal communication (Townsend et al., 2017), echoing some of the earlier work in gestural communication (Genty et al., 2009; Liebal et al., 2014), much of this theoretical framework falls short of the high cognitive demands required for human intentional communication (Scott-Phillips, 2015), leading to possibly irreconcilable differences with human communication (Tomasello, 2008). Yet, animals display a surprising flexibility in call production, including the so-called emotional calls (Sievers et al., 2017). For example, chimpanzee victims of aggression vary the acoustic structure of their screams depending on the severity of the aggression they are facing, but also according to the audience, suggesting that they strategically modify their calling pattern to recruit help (Slocombe & Zuberbühler, 2007). This example outlines the broader ability of a number of animal species to engage in deception, which can be found in many forms, including the manipulation of vocal production (Byrne & Whiten, 1988). Overall, beyond manipulation, the ability to withhold calls or produce them ‘on demand’, as when engaged in sexual intercourse with desirable individuals (Clay et al., 2011), underlines the inaccurate classification of animal calls as pure emotional reactions produced without any control (Tomasello, 2008). This is especially the case if animals are able to exert some control over their vocal production in some of the most stressful contexts (see also Crockford et al., 2012 in a predation context).
We note that most of the examples in this section come from primates; although efforts are now underway to shift the lens from a generally primate-centric field (Nieder & Mooney, 2020; Ravignani & Garcia, 2022; Townsend et al., 2012), we acknowledge that not all animal species will have the cognitive flexibility to exert such control on their vocal production. This however does not change the fundamental message of this section, which is that emotions must be seen as a dimension of animal communication, which interacts with control, rather than as a killjoy factor that precludes any control (see also Kret et al., 2020; Sievers & Gruber, 2020).
Control in Human Emotional Vocalizations
In the literature on humans, researchers are more concerned with the controlled expression of emotion through speech than with the intentional production of emotional calls, a notable difference from the animal literature. Humans have two major nonverbal vocal channels to express and communicate emotions (Frühholz & Schweinberger, 2021). The first channel seems evolutionarily older and shared with many mammalian, and other vertebrate, species. This channel refers to nonverbal expressions of vocal emotions, and is typically used to express basic emotional states (Patel et al., 2011; Sauter et al., 2010a). Emotions expressed this way can be short ‘affective bursts’ (Scherer, 1994) and are usually defined as basic emotional states triggered by perceptual and mental experiences that only involve some low-to-medium level of cognitive processing and evaluation. Given that these nonverbal affective bursts are usually triggered by external and internal cues, the level of control over these bursts is rather low. However, the expression of these nonverbal affective bursts can be controlled to some degree, if required by certain contexts: it can be inhibited, delayed, or attenuated. Yet, this requires a high level of top-down control and emotion regulation, as well as a certain level of ontogenetic development according to some researchers (Barrett, 1993). Overall, expressions in this channel sometimes seem to be processed faster, and their emotions recognized more accurately by listeners, compared to the second channel (Frühholz & Schweinberger, 2021).
This second channel refers to vocal modulations of speech utterances, which are referred to as emotional prosody. Thus, while humans express emotional information in their speech utterances based on linguistic rules, they simultaneously also express their emotions in the paralinguistic channel of prosodic voice modulations. This paralinguistic channel is used to express both rather basic emotions and more complex emotions specific to human interactions (Alba-Ferrara et al., 2011).
Further topics concern the commonalities and differences between spontaneous and acted nonverbal vocal emotions (Anikin & Lima, 2018; Bryant et al., 2018; Engelberg & Gouzoules, 2019; Jürgens et al., 2011; McGettigan et al., 2015). Again, there is overlap as well as distinctiveness between spontaneous and acted nonverbal emotions, both in acoustic and in perceptual terms. Listeners can distinguish spontaneous from acted vocalizations above chance level (∼56%–69%) (Anikin & Lima, 2018; Bryant et al., 2018), but this rather low detection rate indicates confusion between the two vocalization types, pointing to an acoustic similarity between them (Anikin & Lima, 2018; Engelberg & Gouzoules, 2019).
Control in Text Messages
Emotion expression in CMC is strongly controlled. When questioned about it, respondents appear to have formed a reasonably clear representation of how and why they and others use nonverbal cues in CMC (Gullberg, 2016; Kelly & Watts, 2015). Some linguistic aspects of verbal communication are known to be related (in a presumably mostly non-intentional way) to affective states, in particular intensity, immediacy, and diversity (Bradac et al., 1979). The few studies that have investigated whether and how these relations extend to nonverbal cues in CMC have mostly focused on the receivers’ perception of and behavioural response to variations of these parameters in messages (Andersen & Blackburn, 2004; Harris & Paradice, 2007; Wang, 2003). Further research on the relationship between the affective and general mental state of the sender and the patterns of use of nonverbal cues in their messages is still needed.
Much like the biological properties of a species’ vocal tract shape the space of possible vocalizations, the technological infrastructure that is inseparable from CMC places strong constraints on the form that communication acts can take in this context, and thus on the variation space over which control may theoretically be exerted by users. For instance, the number of available emojis has grown from 90 at their creation in the late 1990s to more than 3,000 at the time of writing, making the selection of an emoji a very different and in principle much more informative decision on the part of the sender of a message. In practice, however, emoji distributions observed in large CMC datasets exhibit Zipf-like properties, in that a limited subset of them accounts for a large proportion of occurrences (Ljubešić & Fišer, 2016; Lu et al., 2016). It is also worth noting that certain cues, notably emojis, may be rendered differently on the receiver's device than on the sender's, effectively reducing the latter's control over their communication and increasing the potential for misconstrual on the receiver's part (Miller et al., 2016; Shurick & Daniel, 2020; Tigwell & Flatla, 2016).
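The Zipf-like concentration described above can be illustrated with a toy frequency table; the emoji names and counts below are hypothetical and not taken from the cited corpora.

```python
# Hypothetical sketch: a handful of emojis accounts for most occurrences,
# as in the Zipf-like distributions reported for large CMC corpora.
from collections import Counter

# invented emoji occurrence counts for a small message corpus
counts = Counter({'joy': 5000, 'heart': 3100, 'laugh': 1400, 'wink': 700,
                  'cry': 350, 'fire': 180, 'cake': 90, 'snail': 40})

total = sum(counts.values())
ranked = [freq for _, freq in counts.most_common()]

# share of all occurrences covered by the top 2 of 8 emoji types
top2_share = sum(ranked[:2]) / total
print(f"top-2 emojis cover {top2_share:.0%} of occurrences")
```

Under such a skewed distribution, the growth of the emoji inventory raises the theoretical information content of each selection far more than its typical, practical information content.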
Recent years have witnessed a gradual increase in the amount of algorithmic intervention during the preparation of a message, which also contributes to lessening the sender's degree of control. This is notably the case of automatic correction and predictive typing, whereby an algorithm suggests the most likely completion of a word based on what the user has previously typed. This has resulted in a rarefaction of non-standard forms such as those involving abbreviation or letter repetition, since they now require a deliberate effort to include, in contrast to the efficiency concerns that initially motivated their use (Herring, 2019). It is likely that the application of similar technologies for suggesting the replacement of words or phrases by specific emojis has fostered the proliferation and diversification of emojis. This hypothesis underlines the relevance of technological factors, which are at least partly outside of the users’ control, for explaining large-scale trends observable in the evolution of CMC data. More advanced technologies leveraging artificial intelligence methods (e.g., automatic translation and so-called ‘smart replies’, that is, entire predefined answers automatically suggested to the user) have emerged recently, and little is known at this point about how often they are used and how strongly they influence CMC practices, which makes them an important question for future research (Hancock et al., 2020).
Summary of Control in Emotional Communication
Overall, in our view, intriguing parallels can be drawn between the different literatures (Table S1). While the human emotion literature can freely investigate differences between acted and spontaneous production, both the animal and CMC literatures face challenges in studying the general propensity of senders to control what they express. For animals, this is part of a long-standing debate on intentionality, while for CMC, there has been little research on the emotional state of the sender and how it affects how they produce messages. We contend that research in the three fields can meaningfully influence each other, with the debate on intentional production being especially interesting for CMC research. Conversely, how CMC research studies the gradual erosion of control introduced by predictive typing, or the involuntary misconstrual of a signal, can inform how animal and human researchers assess the production and perception of emotional signals in their study systems.
Meaning in Emotional Signals
Meaning in Animal Vocalizations
The notion of ‘meaning’ in animal communication is highly debated (Scarantino & Clay, 2015; Wheeler & Fischer, 2012). One can differentiate between two types of meaning: ‘literal meaning’, which is the code that links sounds to what they represent (e.g., a referent), and ‘intended meaning’, which requires an understanding of the signaller's intention within a social context (Grice, 1957). In animals, it can be challenging to differentiate between these two types of meaning, as this requires knowledge about whether signals are intentionally produced or not, which we addressed above, and whether the signalling animal displays theory of mind abilities. However, most researchers in the field would agree that, whether or not animal vocalizations are voluntarily produced or intentional, they provide listeners with ‘information’ in the sense that they reduce uncertainty (e.g., about upcoming social interactions or events in the environment) (Marler, 1961; Seyfarth & Cheney, 2017). The information acquired by receivers will depend on both the properties of the signal and the context of production (Seyfarth & Cheney, 2017).
The best example of ‘meaning’ in animal vocal communication can be found in alarm calls that map onto predators. The discovery of alarm calls in vervet monkeys (Chlorocebus pygerythrus), which vary according to the type of predator approaching (i.e., refer to an event external to the producer), represented a landmark finding in animal communication (Seyfarth et al., 1980). These signals, later termed ‘functionally referential calls’ (Marler et al., 1992) (hereafter ‘FRCs’), provide very specific information about external objects or events to receivers. According to Macedonia & Evans (1993), to define a vocal signal as referential, the following criteria need to be met: production needs to be context-specific (i.e., linked to the presence of a particular external referent; ‘production specificity’), and appropriate responses to the calls need to be stimulus-independent (i.e., the signal itself needs to elicit an appropriate response even in the absence of the referent; ‘perception specificity’). Such calls have now been identified in the repertoires of several species and can refer to predators, food or social interactions (Townsend & Manser, 2013).
FRCs raised the idea that animal vocalizations might refer not only to the internal state of the producer, but also specifically to external referents. The semantic aspect of FRCs from the producer's side (i.e., production mechanism) is highly debated (Scarantino & Clay, 2015; Seyfarth & Cheney, 2003; Townsend & Manser, 2013; Wheeler & Fischer, 2012). Some researchers have argued that signals can be purely emotional in their production and still meet the criteria of FRCs, if they are linked in a predictable way to the presence of an external referent (i.e., through a tight association between the presence of a referent and the internal state of the producer). In this case, receivers should still be able to extract specific meaning from the signal regarding the context of production, hence fulfilling the perception specificity criterion (Seyfarth & Cheney, 2003; Wheeler & Fischer, 2012).
Nevertheless, the most parsimonious position argues against a strong dichotomy between emotionality and referentiality. For example, the description of the meerkat (Suricata suricatta) alarm call system shed new light on the study of referential and emotional signalling, since the alarm calls of this species vary as a function of both the type of predator approaching and the level of urgency for each predator type (Manser, 2001). It thus suggests that animal vocalizations can simultaneously contain referential and emotional components, similar to human speech (Manser et al., 2002; Scherer, 2003). These two components could be either encoded in different vocal parameters (‘segregation of information’ (Marler, 1960)) or in the same component, in which case they might interact.
Overall, the question of whether producers of FRCs refer to an external stimulus in the same way as human semantic communication, as well as the convergence of the debates on intentional production and reference (see Sievers & Gruber, 2016), remain crucial questions regarding the evolution of language (Christiansen & Kirby, 2003; Fitch, 2010). Yet, one may also acknowledge that the strong dichotomy between emotion and meaning applied to animal vocalizations does not subject them to the same standards as human speech, where affect is often complementary to, and part of, meaning.
Meaning in Human Emotional Vocalizations
There are typical nonverbal expressions of emotions: humans cry when experiencing sadness, laugh when overwhelmed by joy (Scott et al., 2014; Szameitat et al., 2009), growl angrily when aggressive towards natural or social obstacles (Raine et al., 2019), or scream fearfully when terrified by an external threat (Arnal et al., 2015; Frühholz et al., 2021). Although humans can express a variety of emotional states in nonverbal expressions, their exact number, and whether they refer to basic emotional states or more complex emotions, is debated. Previous reports differ on the number of distinct nonverbal vocal emotions, ranging from approximately 8 (Lima et al., 2013) to 18 (Bänziger & Scherer, 2010). Recent reports have also described subtypes of vocal expressions, such as positive nonverbal expressions (e.g., happiness, amusement, interest, relief) (Kamiloğlu et al., 2020) or different types of scream calls (Frühholz et al., 2021).
In terms of the expressive component, most of these basic expressions of emotions would be classified as a ‘symptom’ in semiotic terms (Frühholz et al., 2016), given that the primary reference is to the emotional state of the speaker. Such a nonverbal expression, however, can also figure as a ‘signal’ or ‘index’ when its communicative component triggers certain responses in listeners. An aggressive growl, for example, can be a symptom of a person's inner angry state, and it signals to another person to immediately distance themselves from the aggressor. In general, the meaning of nonverbal expressions, both as expressions and as communications of an emotional state, is relatively non-arbitrary, given their high acoustic distinctiveness and perceptual recognizability (Frühholz et al., 2021). However, nonverbal expressions in humans seem to be more arbitrary than similar expressions in animals, since the reference to external objects (e.g., a scream signals potential danger) is often ambiguous (e.g., a scream does not signify the exact type of danger) and can only be disambiguated by further sensory information (e.g., visual search to identify the source of danger).
In humans, language also allows the transmission of linguistic information, rooted in segmental units (phonemes, syllables, words), which allows the communication of concepts or facts based on semantic representations (e.g., Friederici, 2005). Moreover, emotions can also be the object of semantic representations and discourses that are used, for example, in interpersonal emotional regulation (Braunstein et al., 2017). Information at the linguistic and semantic levels can also conflict with the information transmitted by emotional prosody (Schwartz & Pell, 2012). For example, sarcasm and irony are phenomena in which there is an antagonistic tension between semantic aspects (e.g., negative content) and supra-segmental emotional information (positive emotional prosody, see Cheang & Pell, 2008). The context in which these complex utterances are produced is also central to understanding the ironic or sarcastic character of a statement.
Meaning of Nonverbal Cues in Text Messages
From the point of view of meaning, a defining feature of the most frequent types of CMC nonverbal cues is that they are signs whose meaning refers to another sign, namely an F2F nonverbal cue. This is for example the case of emojis referring to facial expressions (Dürscheid & Siever, 2017) or gestures, but also of several older types of cues, such as letter repetitions and capitalizations, which typically refer to voice quality alterations. Different types of CMC cues appear to specialize in the reference to certain F2F cue types, but there is also a certain degree of overlap: for instance, both emojis and emoticons can represent facial expressions, which opens possibilities for users in terms of identity display and in- or out-group marking. There are of course many nonverbal CMC cues that do not refer to F2F nonverbal cues, some of which carry a clear emotional load, but most of them are considerably less frequent than the aforementioned cues, the main exception to this observation being the emoji and its variants.
Concerning emojis in particular, significant differences in their use have been observed across countries (Lu et al., 2016), but their interpretation seems relatively consistent across languages (Barbieri et al., 2016a; Novak et al., 2015). It does however vary according to a range of factors, chief among which are socio-demographic features of users (notably gender, see e.g., Oleszkiewicz et al., 2017; Prada et al., 2018), the specifics of the communication context (notably co-occurring verbal and nonverbal emotional cues, see Vandergriff, 2013), and the considered platforms and media (Tauch & Kanjo, 2016). Emoji interpretation also varies according to the users’ opinions on their use and what they individually represent, creating ambiguity and misunderstandings, as context does not significantly reduce the latter (Miller et al., 2016, 2017). With time and repetition, emojis can also take on idiosyncratic meanings whose interpretation is unique in a certain group or relationship (Al Rashdi, 2015; Gullberg, 2016; Kelly & Watts, 2015; Wiseman & Gould, 2018), similarly to their F2F counterparts such as hand gestures.
The continuous accumulation of vast amounts of written CMC data over the last few decades makes it possible to use data science methods not only to characterize the meaning (emotional or not) of nonverbal cues at a large scale (Barbieri et al., 2016b; Novak et al., 2015), but even to trace their evolution over time. Applying these techniques to emojis in Twitter data spanning 6 years – a sizable portion of the history of emojis since their emergence at the worldwide scale – Robertson et al. (2021) show that the meaning of most emojis is relatively stable over time; a small subset undergoes more considerable semantic change, and emojis with more concrete meanings are more likely to do so. A major challenge for future research is to move away from data gathered on those web services that make them most readily available, and to adapt data-analytic methodologies to CMC contexts that may be more representative of F2F communication practices, such as instant messaging, for which data are typically not accessible in comparable volumes.
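One common way to operationalize this kind of semantic stability is to compare an emoji's distributional embedding across time periods. The sketch below uses invented three-dimensional vectors and omits the embedding training and space-alignment steps that a real analysis would require; it is not the method of Robertson et al. (2021), only a minimal illustration of the underlying idea.

```python
# Hypothetical sketch: semantic stability measured as the cosine similarity
# between an emoji's (pre-aligned) embeddings from two time periods.
# Vectors are invented; real analyses train them on large time-sliced corpora.
from math import sqrt

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

emb_period1 = [0.9, 0.1, 0.3]   # invented embedding of an emoji, period 1
emb_period2 = [0.8, 0.2, 0.4]   # invented embedding of the same emoji, period 2

stability = cosine(emb_period1, emb_period2)
print(f"cross-period similarity: {stability:.2f}")  # close to 1 -> stable meaning
```

A value near 1 indicates a stable meaning across periods, while lower values flag candidates for semantic change, which can then be examined qualitatively.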
Summary of Meaning in Nonverbal Emotional Communication
Overall, in our view, the issue of meaning parallels that of control (Table S1). Once again, research on animals has to contend with the assumption that their signals are fixed, with little flexibility, and with new layers periodically added to the debate, such as the presence or absence of characteristics like arbitrariness (Sievers & Gruber, 2020; Watson et al., 2022). This contrasts strongly with the flexibility displayed by humans in both their vocal nonverbal and CMC communication, where a change of prosody can radically alter the meaning of an expression.
Conclusion
Looking at the three different communication systems indicates that aspects of emotion, control and meaning are present in all of them. However, in animals the main communicative part is related to the expression of emotions, whereas in humans it is about emotions and meaning, and in text messages, meaning is the foremost goal and emotions are a secondary addition. As our review has shown, the nonverbal communication of emotion has always been uncontroversial in the case of human vocalizations, where it is widely acknowledged to be an integral part of this means of communication through prosody. In contrast, the study of emotion in nonverbal domains such as animal communication and CMC required more careful discussion before being established as a worthy topic of research, and it is only in recent times that researchers have embraced emotions as an integral part of animals’ lives (de Waal, 2011), particularly in the context of welfare science, where new research paradigms aim to align with human emotion theory to decipher animal emotions based on nonverbal signals. On the other hand, CMC had to rely on a combination of technological evolutions and innovative practices that progressively allowed the communication of emotion, particularly through the development and use of emoticons and, since the 2010s, emojis, which continuously increase the ability of users to convey their emotions.
As illustrated in our second section, the debate on vocal communication in animals has been fuelled by comparisons with human language properties such as control and intentional production. As with emotional content, human speech is the standard, and control and intentional production are taken for granted in human vocal production. While some uncontrolled outbursts are produced in specific contexts, such as spontaneous crying or laughing, they are connected to basic emotional states, with humans otherwise described as able to willingly modulate acoustic parameters to express an intended emotion, whether experienced or not. Interestingly, we have seen that CMC could possibly widen the gap with animal communication. Emotion communication in CMC is indeed seen as completely under control, as the user is limited only by the set of tools at their disposal to transmit the emotion or emotional tone they want to convey. Yet, this has to be qualified by the increasing reliance on automatic algorithmic interventions that may lessen the user's control, as well as by the uncontrollable differences that may appear on the other user's screen upon reception of the message. In contrast with this human image of emotional communication, intentional production remains largely debated in animal communication, where the application of a human-defined Gricean communication framework imposes large cognitive demands that many animals may not meet. Yet, we have listed several aspects that paint a less black-and-white picture in animals, including some control in stressful contexts. Overall, in our view, while human nonverbal communication, whether vocal or through CMC, remains undoubtedly intentional, a certain continuity has been established over the last few decades with animal communication, which is not limited to uncontrolled emotional bursts.
As illustrated in our third section, meaning has also constituted a point of contention when comparing animal and human vocalizations. While the finding of functionally referential signals in non-humans has opened the door for the discussion of less emotionally based signals in animals, it has been framed in an unhelpful opposition between meaning and emotion, largely ignoring more consensual positions that could combine both. Animal emotion research may benefit from human approaches that set the debate in terms of flexibility of use on the producer's side and flexibility of understanding on the listener's side, rather than opposing two complementary aspects of vocal communication, as our review of human communication in two modalities (speech and CMC) has shown. Once again, meaning is a defining feature of human communication, making it hard to abstract away from it altogether. Basic emotional signals, such as an aggressive human growl, give the listener contextual information about the producer. However, the producer may also alter the prosody of an utterance to convey a radically different meaning than indicated by the lexical content (as in a sarcastic congratulation). This intentional and meaningful use of prosodic variation adds another layer to the flexibility with which humans can display their emotions. Another way to add flexibility is to rely on CMC. While the study of CMC has a relatively short history, it is already clear that the meaning attached to most emojis remains stable, offering a consistent way to express one's emotions. Yet, specific groups can also attach a particular meaning to a given emoji, allowing a variety of meanings available only to insiders. Overall, these findings in both human communication and CMC highlight the versatility of human meaningful emotional signals. This contrasts sharply with current debates on animal communicative signals, where the discussion is often limited to opposing emotion to meaning.
Overall, we have sought to underline the role of affect in nonverbal communication across species and media of expression. Beyond outlining similarities and differences across domains of research (Table S1), our review also highlights how affect can contribute to bridging research fields that have sometimes remained unconnected because of the methods they use (e.g., CMC), or have diverged because of ideological backgrounds (e.g., animal and human communication). For example, debates on animal and human communication do not accurately reflect the continuity between the two (see also Moore, 2013), and are threatened by an increasingly large gap that isolates the study of human linguistic communication, seen as highly cognitive, from that of other communication systems (e.g., Scott-Phillips, 2015), with little hope of comparing systems. Yet, other models of communication are less cognitively demanding (Moore, 2013; Sievers & Gruber, 2020). As our review has shown, emotions play an integral part in communication and can contribute to bridging different fields of research.
Although methodological limitations may limit comparisons, nonverbal communication needs to be approached from an evolutionary point of view of the requirements for the efficiency of communication systems. As such, we believe that our review allows the drawing of inferences about the evolution of emotion, and how humans’ harnessing of their emotions as a communicative tool has truly exploded in our lineage, from a last common ancestor with chimpanzees and bonobos possibly closer to the latter two (for a review, see Gruber & Clay, 2016). Our review underlines a pattern where animal signals mainly express the emotional side with limited variation in meaning, whereas in human communication, meaning takes over without emotions being any less present; finally, CMC started by only including meaning, but emotional expressions were rapidly added. We develop this further here: animal vocalizations appear to relate mainly to the behavioural context the producer experiences (motivation) and only in very few contexts refer to external events or objects. Animal communication thus remains tightly linked to the emotional state of the producer, and arguably also of the receiver. While this emotional layer has too often been opposed to the possibility that the calls may nonetheless be meaningful for both the producer and the receiver (e.g., Crockford et al., 2012), and produced as part of an intentional act of communication, the current state of knowledge suggests that our last common ancestor's use of emotions as a means for intentional communication remained basic at most. This is in strong contrast to humans, where the large repertoire of signals appears to be frequently related to external events or objects, or temporally related to the past or future (Figure 1). Yet, the emotional expression here is present in all aspects, and our species has in fact developed ways of harnessing our emotions to intentionally manipulate the message we want to convey.
This is valid both for nonverbal vocal communication and for the more recently developed CMC, which plays an increasingly large part in our social lives. While the original goal of CMC was to communicate meaningful information as efficiently as possible by reducing redundant information in the signals used in human speech or written language, a number of paralinguistic cues, chief among them emojis, had to be brought in to avoid ambiguity in communication and reflect the emotional information notably found in prosody. We note that despite our claimed focus on nonverbal communication systems, we could not avoid including other media of communication, such as those found during F2F interaction. This is because communication is, at its core, multimodal (Fröhlich et al., 2019), and it remains difficult to split the contribution of each means of communication. As such, we acknowledge that some of our arguments, particularly those pertaining to animal communication, must be assessed contextually, with each medium of communication allowing its own flexibility. For example, the emotional content of animal gestures remains largely unknown but can be analysed in a manner similar to how emotional vocalizations are analysed (Heesen et al., 2022). Similarly, humans readily associate facial expressions such as smiles with vocalizations (Drahota et al., 2008), making the integration of control, meaning and emotion a multimodal endeavour for the future across all three fields.
Overall, the abilities to convey accurate or false emotional information, and to intentionally modify the meaning of an utterance through prosody, both appear to have emerged in our lineage, although how early remains unknown (i.e., did this ability emerge only after we mastered language, or did it precede language?). By including CMC, our review illustrates that emotional expression is no longer constrained by our biology. With an ever-increasing reliance on CMC and technology, we expect that our species will find additional ways to intentionally and meaningfully express its emotions, relying on cultural rather than biological evolution. Yet our review also suggests that many mechanisms remain similar across the three domains reviewed. Finally, as emotional conveyance is widely found across species, we suggest that it can meaningfully contribute to the more general debates on the evolution of communication, and particularly language, in the human lineage.
Supplemental Material
Supplemental material, sj-docx-1-emr-10.1177_17540739241303505 for Emotion in Nonverbal Communication: Comparing Animal and Human Vocalizations and Human Text Messages by T. Gruber, E. F. Briefer, A. Grütter, A. Xanthos, D. Grandjean, M. B. Manser and S. Frühholz in Emotion Review
Acknowledgements
The authors thank Isabel Driscoll for comments on an earlier version of the draft, as well as the National Center of Competencies in Research ‘Evolving Language’ for supporting this project. We thank the associate editor and an anonymous reviewer for their meaningful comments that improved the overall quality of our manuscript.
Interestingly, the large-scale use of such devices has not been attested in written correspondence, which raises the question of whether their emergence in CMC can be partly explained by the fact that this communicative context has a pace considerably closer to the rapid back and forth of face-to-face conversations.
Footnotes
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding: The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: TG and SF were supported by grants from the Swiss National Science Foundation (grants PCEFP1_186832 to TG and PP00P1_157409/1 and PP00P1_183711/1 to SF).
ORCID iDs: T. Gruber https://orcid.org/0000-0002-6766-3947
A. Grütter https://orcid.org/0000-0003-2350-7990
D. Grandjean https://orcid.org/0000-0001-6125-4520
Supplemental Material: Supplemental material for this article is available online.
Contributor Information
T. Gruber, Swiss Center for Affective Sciences and Faculty of Psychology and Educational Sciences, University of Geneva, Switzerland.
E. F. Briefer, Behavioural Ecology Group, Section for Ecology & Evolution, Department of Biology, University of Copenhagen, Copenhagen Ø, Denmark.
A. Grütter, Department of Language and Information Sciences, Faculty of Arts, University of Lausanne, Lausanne, Switzerland English Seminar, University of Zurich, Zurich, Switzerland.
A. Xanthos, Department of Language and Information Sciences, Faculty of Arts, University of Lausanne, Lausanne, Switzerland
D. Grandjean, Swiss Center for Affective Sciences and Faculty of Psychology and Educational Sciences, University of Geneva, Switzerland
M. B. Manser, Department of Evolutionary Biology and Environmental Studies, University of Zurich, Zurich, Switzerland Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Zurich, Switzerland.
S. Frühholz, Cognitive and Affective Neuroscience Unit, University of Zurich, Zurich, Switzerland Department of Psychology, University of Oslo, Oslo, Norway.
References
- Alba-Ferrara L., Hausmann M., Mitchell R. L., Weis S. (2011). The neural correlates of emotional prosody comprehension: Disentangling simple from complex emotion. PLoS ONE, 6, e28701. 10.1371/journal.pone.0028701 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Al Rashdi F. (2015). Forms and functions of emojis in WhatsApp interaction among Omanis. Georgetown University. [Google Scholar]
- Andersen P. A., Blackburn T. R. (2004). An experimental study of language intensity and response rate in e mail surveys. Communication Reports, 17, 73–84. 10.1080/08934210409389377 [DOI] [Google Scholar]
- Anikin A., Lima C. F. (2018). Perceptual and acoustic differences between authentic and acted nonverbal emotional vocalizations. Quarterly Journal of Experimental Psychology, 71, 622–641. 10.1080/17470218.2016.1270976 [DOI] [PubMed] [Google Scholar]
- Arnal L. H., Flinker A., Kleinschmidt A., Giraud A. L., Poeppel D. (2015). Human screams occupy a privileged niche in the communication soundscape. Current Biology, 25, 2051–2056. 10.1016/j.cub.2015.06.043 [DOI] [PMC free article] [PubMed] [Google Scholar]
- August P. V., Anderson J. G. T. (1987). Mammal sounds and motivation-structural rules: A test of the hypothesis. Journal of Mammalogy, 68, 1–9. 10.2307/1381039 [DOI] [Google Scholar]
- Banse R., Scherer K. R. (1996). Acoustic profiles in vocal emotion expression. Journal of Personality and Social Psychology, 70(3), 614–636. 10.1037/0022-3514.70.3.614 [DOI] [PubMed] [Google Scholar]
- Bänziger T., Scherer K. R. (2010). Introducing the Geneva multimodal emotion portrayal (GEMEP) corpus. In Scherer K. R., Bänziger T., Roesch E. B. (Eds.), Blueprint for affective computing: A sourcebook (pp. 271–294). Oxford University Press. [Google Scholar]
- Barbieri F., Kruszewski G., Ronzano F., Saggion H. (2016a, October 15–19). How cosmopolitan are emojis? Exploring emojis usage and meaning over different languages with distributional semantics. In Paper presented at the Proceedings of the 24th ACM International Conference on Multimedia, Amsterdam, The Netherlands.
- Barbieri F., Ronzano F., Saggion H. (2016b, May 23–28). What does this Emoji mean? A vector space skip-gram model for Twitter emojis. In Paper presented at the Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC’16), Portorož, Slovenia.
- Bard K. A., Keller H., Ross K. M., Hewlett B., Butler L., Boysen S. T., Matsuzawa T. (2021). Joint attention in human and chimpanzee infants in varied socio-ecological contexts. Monographs of the Society for Research in Child Development, 86(4), 7–217. 10.1111/mono.12435
- Barrett K. C. (1993). The development of nonverbal communication of emotion: A functionalist perspective. Journal of Nonverbal Behavior, 17, 145–169. 10.1007/BF00986117
- Belin P., Fecteau S., Charest I., Nicastro N., Hauser M. D., Armony J. L. (2008). Human cerebral response to animal affective vocalizations. Proceedings of the Royal Society B: Biological Sciences, 275(1634), 473–481. 10.1098/rspb.2007.1460
- Bradac J. J., Bowers J. W., Courtright J. A. (1979). Three language variables in communication research: Intensity, immediacy, and diversity. Human Communication Research, 5, 257–269. 10.1111/j.1468-2958.1979.tb00639.x
- Braunstein L. M., Gross J. J., Ochsner K. N. (2017). Explicit and implicit emotion regulation: A multi-level framework. Social Cognitive and Affective Neuroscience, 12(10), 1545–1557. 10.1093/scan/nsx096
- Briefer E. F. (2012). Vocal expression of emotions in mammals: Mechanisms of production and evidence. Journal of Zoology, 288, 1–20. 10.1111/j.1469-7998.2012.00920.x
- Briefer E. F. (2018). Vocal contagion of emotions in non-human animals. Proceedings of the Royal Society B: Biological Sciences, 285, 20172783. 10.1098/rspb.2017.2783
- Briefer E. F. (2020). Coding for ‘dynamic’ information: Vocal expression of emotional arousal and valence in non-human animals. In Aubin T., Mathevon N. (Eds.), Coding strategies in vertebrate acoustic communication (pp. 137–162). Springer International Publishing.
- Briefer E. F., Maigrot A.-L., Mandel R., Freymond S. B., Bachmann I., Hillmann E. (2015). Segregation of information about emotional arousal and valence in horse whinnies. Scientific Reports, 5(1), 9989. 10.1038/srep09989
- Briefer E. F., Vizier E., Gygax L., Hillmann E. (2019). Expression of emotional valence in pig closed-mouth grunts: Involvement of both source- and filter-related parameters. The Journal of the Acoustical Society of America, 145(5), 2895. 10.1121/1.5100612
- Bryant G. A., Fessler D. M. T., Fusaroli R., Clint E., Amir D., Chávez B., Denton K. K., Díaz C., Duran L. T., Fančovičová J., Fux M., Ginting E. F., Hasan Y., Hu A., Kamble S. V., Kameda T., Kuroda K., Li N. P., Luberti F. R., Peyravi R., Prokop P., Quintelier K. J. P., Shin H. J., Stieger S., Sugiyama L. S., van den Hende E. A., Viciana-Asensio H., Yildizhan S. E., Yong J. C., Yuditha T., Zhou Y. (2018). The perception of spontaneous and volitional laughter across 21 societies. Psychological Science, 29(9), 1515–1525. 10.1177/0956797618778235
- Byrne R. W., Whiten A. (1988). Machiavellian intelligence: Social expertise and the evolution of intellect in monkeys, apes and humans. Oxford University Press.
- Carey J. (1980). Paralanguage in computer mediated communication. Paper presented at the 18th Annual Meeting of the Association for Computational Linguistics, Philadelphia, PA, USA.
- Cheang H. S., Pell M. D. (2008). The sound of sarcasm. Speech Communication, 50, 366–381. 10.1016/j.specom.2007.11.003
- Christiansen M., Kirby S. (2003). Language evolution: The hardest problem in science? Oxford University Press.
- Clay Z., Pika S., Gruber T., Zuberbühler K. (2011). Female bonobos use copulation calls as social signals. Biology Letters, 7(4), 513–516. 10.1098/rsbl.2010.1227
- Crockford C., Wittig R. M., Mundry R., Zuberbühler K. (2012). Wild chimpanzees inform ignorant group members of danger. Current Biology, 22, 142–146. 10.1016/j.cub.2011.11.053
- Dawkins M. S. (2008). The science of animal suffering. Ethology, 114, 937–945. 10.1111/j.1439-0310.2008.01557.x
- Derks D., Fischer A. H., Bos A. E. R. (2008). The role of emotion in computer-mediated communication: A review. Computers in Human Behavior, 24, 766–785. 10.1016/j.chb.2007.04.004
- Désiré L., Boissy A., Veissier I. (2002). Emotions in farm animals: A new approach to animal welfare in applied ethology. Behavioural Processes, 60, 165–180. 10.1016/S0376-6357(02)00081-5
- de Waal F. B. M. (2011). What is an animal emotion? Annals of the New York Academy of Sciences, 1224(1), 191–206. 10.1111/j.1749-6632.2010.05912.x
- Drahota A., Costall A., Reddy V. (2008). The vocal communication of different kinds of smile. Speech Communication, 50, 278–287. 10.1016/j.specom.2007.10.001
- Dürscheid C., Siever C. M. (2017). Jenseits des Alphabets – Kommunikation mit Emojis. Zeitschrift für Germanistische Linguistik, 45, 256–285. 10.1515/zgl-2017-0013
- Ekman P. (1992). An argument for basic emotions. Cognition and Emotion, 6, 169–200. 10.1080/02699939208411068
- Ekman P., Friesen W. V. (1971). Constants across cultures in the face and emotion. Journal of Personality and Social Psychology, 17(2), 124. 10.1037/h0030377
- Ellsworth P. C., Scherer K. R. (2003). Appraisal processes in emotion. In Davidson R. J., Goldsmith H., Scherer K. R. (Eds.), Handbook of affective sciences (pp. 572–595). Oxford University Press.
- Engelberg J. W. M., Gouzoules H. (2019). The credibility of acted screams: Implications for emotional communication research. Quarterly Journal of Experimental Psychology, 72, 1889–1902. 10.1177/1747021818816307
- Filippi P., Congdon J. V., Hoang J., Bowling D. L., Reber S. A., Pašukonis A., Hoeschele M., Ocklenburg S., de Boer B., Sturdy C. B., Newen A., Güntürkün O. (2017). Humans recognize emotional arousal in vocalizations across all classes of terrestrial vertebrates: Evidence for acoustic universals. Proceedings of the Royal Society B: Biological Sciences, 284(1859), 20170990. 10.1098/rspb.2017.0990
- Fitch W. T. (2010). The evolution of language. Cambridge University Press.
- Fraser D. (2009). Animal behaviour, animal welfare and the scientific study of affect. Applied Animal Behaviour Science, 118, 108–117. 10.1016/j.applanim.2009.02.020
- Friederici A. D. (2005). Neurophysiological markers of early language acquisition: From syllables to sentences. Trends in Cognitive Sciences, 9, 481–488. 10.1016/j.tics.2005.08.008
- Frijda N. H. (2009). Action tendencies. In Sander D., Scherer K. R. (Eds.), The Oxford companion to emotion and the affective sciences (pp. 1–2). Oxford University Press.
- Fröhlich M., Sievers C., Townsend S. W., Gruber T., van Schaik C. P. (2019). Multimodal communication and language origins: Integrating gestures and vocalizations. Biological Reviews, 94(5), 1809–1829. 10.1111/brv.12535
- Frühholz S., Dietziker J., Staib M., Trost W. (2021). Neurocognitive processing efficiency for discriminating human non-alarm rather than alarm scream calls. PLOS Biology, 19, e3000751. 10.1371/journal.pbio.3000751
- Frühholz S., Schweinberger S. R. (2021). Nonverbal auditory communication – Evidence for integrated neural systems for voice signal production and perception. Progress in Neurobiology, 199, 101948. 10.1016/j.pneurobio.2020.101948
- Frühholz S., Trost W., Kotz S. A. (2016). The sound of emotions – Towards a unifying neural network perspective of affective sound processing. Neuroscience & Biobehavioral Reviews, 68, 96–110. 10.1016/j.neubiorev.2016.05.002
- Genty E., Breuer T., Hobaiter C., Byrne R. (2009). Gestural communication of the gorilla (Gorilla gorilla): Repertoire, intentionality and possible origins. Animal Cognition, 12, 527–546. 10.1007/s10071-009-0213-4
- Grandjean D., Bänziger T., Scherer K. R. (2006). Intonation as an interface between language and affect. Progress in Brain Research, 156, 235–247. 10.1016/S0079-6123(06)56012-1
- Grandjean D., Sander D., Scherer K. R. (2008). Conscious emotional experience emerges as a function of multilevel, appraisal-driven response synchronization. Consciousness and Cognition, 17, 484–495. 10.1016/j.concog.2008.03.019
- Grice H. P. (1957). Meaning. The Philosophical Review, 66, 377–388. 10.2307/2182440
- Gruber T., Clay Z. (2016). A comparison between bonobos and chimpanzees: A review and update. Evolutionary Anthropology: Issues, News, and Reviews, 25, 239–252. 10.1002/evan.21501
- Gullberg K. (2016). Laughing face with tears of joy: A study of the production and interpretation of emojis among Swedish university students. University of Lund.
- Gygax L. (2017). Wanting, liking and welfare: The role of affective states in proximate control of behaviour in vertebrates. Ethology, 123, 689–704. 10.1111/eth.12655
- Hancock J. T., Naaman M., Levy K. (2020). AI-mediated communication: Definition, research agenda, and ethical considerations. Journal of Computer-Mediated Communication, 25, 89–100. 10.1093/jcmc/zmz022
- Harris R. B., Paradice D. (2007). An investigation of the computer-mediated communication of emotions. Journal of Applied Sciences Research, 3, 2081–2090.
- Heesen R. M., Sievers C., Gruber T., Clay Z. (2022). Primate communication: Affective, intentional, or both? In Schwartz B. L., Beran M. J. (Eds.), Primate cognitive studies (pp. 411–438). Cambridge University Press.
- Herring S. C. (2019). Grammar and electronic communication. In Chapelle C. A. (Ed.), The concise encyclopedia of applied linguistics (pp. 521–536). Wiley-Blackwell.
- Herring S. C., Dainas A. (2017). “Nice picture comment!” Graphicons in Facebook comment threads. Paper presented at the Hawaii International Conference on System Sciences, Manoa, Hawaii, USA.
- Jack R. E., Garrod O. G. B., Yu H., Caldara R., Schyns P. G. (2012). Facial expressions of emotion are not culturally universal. Proceedings of the National Academy of Sciences, 109(19), 7241–7244. 10.1073/pnas.1200155109
- Jürgens U. (2009). The neural control of vocalization in mammals: A review. Journal of Voice, 23, 1–10. 10.1016/j.jvoice.2007.07.005
- Jürgens R., Hammerschmidt K., Fischer J. (2011). Authentic and play-acted vocal emotion expressions reveal acoustic differences. Frontiers in Psychology, 2, 180. 10.3389/fpsyg.2011.00180
- Kamiloğlu R. G., Fischer A. H., Sauter D. A. (2020). Good vibrations: A review of vocal expressions of positive emotions. Psychonomic Bulletin & Review, 27(2), 237–265. 10.3758/s13423-019-01701-x
- Kelly R., Watts L. (2015). Characterising the inventive appropriation of emoji as relationally meaningful in mediated close personal relationships. Paper presented at Experiences of Technology Appropriation: Unanticipated Users, Usage, Circumstances, and Design, Oslo, Norway.
- Kremer L., Holkenborg S. K., Reimert I., Bolhuis J. E., Webb L. E. (2020). The nuts and bolts of animal emotion. Neuroscience & Biobehavioral Reviews, 113, 273–286. 10.1016/j.neubiorev.2020.01.028
- Kret M. E., Prochazkova E., Sterck E. H. M., Clay Z. (2020). Emotional expressions in human and non-human great apes. Neuroscience & Biobehavioral Reviews, 115, 378–395. 10.1016/j.neubiorev.2020.01.027
- Lattenkamp E. Z., Vernes S. C. (2018). Vocal learning: A language-relevant trait in need of a broad cross-species approach. Current Opinion in Behavioral Sciences, 21, 209–215. 10.1016/j.cobeha.2018.04.007
- Leventhal H., Scherer K. R. (1987). The relationship of emotion to cognition: A functional approach to a semantic controversy. Cognition & Emotion, 1(1), 3–28. 10.1080/02699938708408361
- Liebal K., Waller B. M., Burrows A. M., Slocombe K. (2014). Primate communication: A multimodal approach. Cambridge University Press.
- Lima C. F., Anikin A., Monteiro A. C., Scott S. K., Castro S. L. (2019). Automaticity in the recognition of nonverbal emotional vocalizations. Emotion, 19, 219–233. 10.1037/emo0000429
- Lima C. F., Castro S. L., Scott S. K. (2013). When voices get emotional: A corpus of nonverbal vocalizations for research on emotion processing. Behavior Research Methods, 45, 1234–1245. 10.3758/s13428-013-0324-3
- Ljubešić N., Fišer D. (2016, August). A global analysis of emoji usage. In Proceedings of the 10th Web as Corpus Workshop, Berlin, Germany.
- Lu X., Ai W., Liu X., Li Q., Wang N., Huang G., Mei Q. (2016, March 14–18). Learning from the ubiquitous language: An empirical analysis of emoji usage of smartphone users. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing, Sydney, Australia.
- Macedonia J. M., Evans C. S. (1993). Variation among mammalian alarm call systems and the problem of meaning in animal signals. Ethology, 93(3), 177–197.
- Maigrot A.-L., Hillmann E., Anne C., Briefer E. F. (2017). Vocal expression of emotional valence in Przewalski’s horses (Equus przewalskii). Scientific Reports, 7(1), 8779. 10.1038/s41598-017-09437-1
- Maigrot A.-L., Hillmann E., Briefer E. F. (2018). Encoding of emotional valence in wild boar (Sus scrofa) calls. Animals, 8(6), 85. 10.3390/ani8060085
- Maigrot A.-L., Hillmann E., Briefer E. F. (2022). Cross-species discrimination of vocal expression of emotional valence by Equidae and Suidae. BMC Biology, 20(1), 106. 10.1186/s12915-022-01311-5
- Manser M. B. (2001). The acoustic structure of suricates’ alarm calls varies with predator type and the level of response urgency. Proceedings of the Royal Society of London. Series B: Biological Sciences, 268(1483), 2315–2324. 10.1098/rspb.2001.1773
- Manser M. B. (2010). The generation of functionally referential and motivational vocal signals in mammals. In Brudzynski S. (Ed.), Handbook of mammalian vocalization – An integrative neuroscience approach (pp. 477–486). Academic Press.
- Manser M. B., Seyfarth R. M., Cheney D. L. (2002). Suricate alarm calls signal predator class and urgency. Trends in Cognitive Sciences, 6(2), 55–57. 10.1016/S1364-6613(00)01840-4
- Marler P. (1960). Bird songs and mate selection. In Lanyon W. E., Tavolga W. N. (Eds.), Animal sounds and communication (pp. 150–206). American Institute of Biological Science.
- Marler P. (1961). The logical analysis of animal communication. Journal of Theoretical Biology, 1, 295–317. 10.1016/0022-5193(61)90032-7
- Marler P., Evans C., Hauser M. D. (1992). Animal signals: Reference, motivation, or both? In Papoušek H., Jürgens U., Papoušek M. (Eds.), Nonverbal vocal communication: Comparative and developmental approaches (pp. 66–86). Cambridge University Press.
- McGettigan C., Walsh E., Jessop R., Agnew Z. K., Sauter D. A., Warren J. E., Scott S. K. (2015). Individual differences in laughter perception reveal roles for mentalizing and sensorimotor systems in the evaluation of emotional authenticity. Cerebral Cortex, 25, 246–257. 10.1093/cercor/bht227
- Mendl M., Burman O. H. P., Paul E. S. (2010). An integrative and functional framework for the study of animal emotion and mood. Proceedings of the Royal Society B: Biological Sciences, 277, 2895–2904. 10.1098/rspb.2010.0303
- Miller H., Kluver D., Thebault-Spieker J., Terveen L., Hecht B. (2017, May 15–18). Understanding emoji ambiguity in context: The role of text in emoji-related miscommunication. In Proceedings of the 11th International Conference on Web and Social Media (ICWSM 2017), Montreal, Quebec, Canada.
- Miller H., Thebault-Spieker J., Chang S., Johnson I., Terveen L., Hecht B. (2016, May 17–20). “Blissfully happy” or “ready to fight”: Varying interpretations of emoji. In Proceedings of the 10th International Conference on Web and Social Media (ICWSM 2016), Cologne, Germany.
- Moore R. (2013). Imitation and conventional communication. Biology & Philosophy, 28(3), 481–500. 10.1007/s10539-012-9349-8
- Morton E. S. (1977). On the occurrence and significance of motivation-structural rules in some bird and mammal sounds. The American Naturalist, 111, 855–869. 10.1086/283219
- Nieder A., Mooney R. (2020). The neurobiology of innate, volitional and learned vocalizations in mammals and birds. Philosophical Transactions of the Royal Society B: Biological Sciences, 375(1789), 20190054. 10.1098/rstb.2019.0054
- Novak P. K., Smailović J., Sluban B., Mozetič I. (2015). Sentiment of emojis. PLoS ONE, 10, e0144296. 10.1371/journal.pone.0144296
- Oleszkiewicz A., Karwowski M., Pisanski K., Sorokowski P., Sobrado B., Sorokowska A. (2017). Who uses emoticons? Data from 86 702 Facebook users. Personality and Individual Differences, 119, 289–295. 10.1016/j.paid.2017.07.034
- Panksepp J. (2011). Cross-species affective neuroscience decoding of the primal affective experiences of humans and related animals. PLoS ONE, 6(9), e21236. 10.1371/journal.pone.0021236
- Papadaki K., Laliotis G. P., Bizelis I. (2021). Acoustic variables of high-pitched vocalizations in dairy sheep breeds. Applied Animal Behaviour Science, 241, 105398. 10.1016/j.applanim.2021.105398
- Patel S., Scherer K. R., Björkner E., Sundberg J. (2011). Mapping emotions into acoustic space: The role of voice production. Biological Psychology, 87, 93–98. 10.1016/j.biopsycho.2011.02.010
- Paul E. S., Mendl M. T. (2018). Animal emotion: Descriptive and prescriptive definitions and their implications for a comparative perspective. Applied Animal Behaviour Science, 205, 202–209. 10.1016/j.applanim.2018.01.008
- Paul E. S., Sher S., Tamietto M., Winkielman P., Mendl M. T. (2020). Towards a comparative science of emotion: Affect and consciousness in humans and animals. Neuroscience & Biobehavioral Reviews, 108, 749–770. 10.1016/j.neubiorev.2019.11.014
- Pavalanathan U., Eisenstein J. (2016). More emojis, less :) The competition for paralinguistic function in microblog writing. First Monday, 21(11). 10.5210/fm.v21i11.6879
- Prada M., Rodrigues D. L., Garrido M. V., Lopes D., Cavalheiro B., Gaspar R. (2018). Motives, frequency and attitudes toward emoji and emoticon use. Telematics and Informatics, 35, 1925–1934. 10.1016/j.tele.2018.06.005
- Price T., Wadewitz P., Cheney D., Seyfarth R., Hammerschmidt K., Fischer J. (2015). Vervets revisited: A quantitative analysis of alarm call structure and context specificity. Scientific Reports, 5, 13220. 10.1038/srep13220
- Raine J., Pisanski K., Bond R., Simner J., Reby D. (2019). Human roars communicate upper-body strength more effectively than do screams or aggressive and distressed speech. PLoS ONE, 14(3), e0213034. 10.1371/journal.pone.0213034
- Ravignani A., Garcia M. (2022). A cross-species framework to identify vocal learning abilities in mammals. Philosophical Transactions of the Royal Society B: Biological Sciences, 377(1841), 20200394. 10.1098/rstb.2020.0394
- Reilly J., Seibert L. (2003). Language and emotion. In Davidson R. J., Goldsmith H., Scherer K. R. (Eds.), Handbook of affective sciences (pp. 535–559). Oxford University Press.
- Rice R. E., Love G. (1987). Electronic emotion: Socioemotional content in a computer-mediated communication network. Communication Research, 14, 85–108. 10.1177/009365087014001005
- Riordan M. A., Kreuz R. J. (2010). Cues in computer-mediated communication: A corpus analysis. Computers in Human Behavior, 26, 1806–1817. 10.1016/j.chb.2010.07.008
- Robertson A., Liza F. F., Nguyen D., McGillivray B., Hale S. A. (2021). Semantic journeys: Quantifying change in emoji meaning from 2012–2018. Paper presented at the 4th International Workshop on Emoji Understanding and Applications in Social Media.
- Russell J. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39, 1161–1178. 10.1037/h0077714
- Sander D. (2013). Models of emotion: The affective neuroscience approach. In Armony J., Vuilleumier P. (Eds.), The Cambridge handbook of human affective neuroscience. Cambridge University Press.
- Sander D., Grandjean D., Scherer K. R. (2018). An appraisal-driven componential approach to the emotional brain. Emotion Review, 10(3), 219–231. 10.1177/1754073918765653
- Sauter D. A., Eisner F., Calder A. J., Scott S. K. (2010a). Perceptual cues in nonverbal vocal expressions of emotion. Quarterly Journal of Experimental Psychology, 63, 2251–2272. 10.1080/17470211003721642
- Sauter D. A., Eisner F., Ekman P., Scott S. K. (2010b). Cross-cultural recognition of basic emotions through nonverbal emotional vocalizations. Proceedings of the National Academy of Sciences, 107, 2408–2412. 10.1073/pnas.0908239106
- Scarantino A., Clay Z. (2015). Contextually variable signals can be functionally referential. Animal Behaviour, 100, e1–e8. 10.1016/j.anbehav.2014.08.017
- Scheiner E., Fischer J. (2011). Emotion expression – the evolutionary heritage in the human voice. In Welsch W., Singer W. J., Wunder A. (Eds.), Interdisciplinary anthropology: The continuing evolution of man (pp. 105–130). Springer.
- Scherer K. R. (1984). Emotion as a multicomponent process: A model and some cross-cultural data. Review of Personality & Social Psychology, 5, 37–63.
- Scherer K. R. (1994). Affect bursts. In van Goozen S., van de Poll N. E., Sergeant J. A. (Eds.), Emotions: Essays on emotion theory (pp. 161–193). Erlbaum.
- Scherer K. R. (2003). Vocal communication of emotion: A review of research paradigms. Speech Communication, 40, 227–256. 10.1016/S0167-6393(02)00084-5
- Schwartz R., Pell M. D. (2012). Emotional speech processing at the intersection of prosody and semantics. PLoS ONE, 7, e47279. 10.1371/journal.pone.0047279
- Scott-Phillips T. (2015). Nonhuman primate communication, pragmatics, and the origins of language. Current Anthropology, 56, 56–80. 10.1086/679674
- Scott S. K., Lavan N., Chen S., McGettigan C. (2014). The social life of laughter. Trends in Cognitive Sciences, 18(12), 618–620. 10.1016/j.tics.2014.09.002
- Serrat E., Amadó A., Rostan C., Caparrós B., Sidera F. (2020). Identifying emotional expressions: Children’s reasoning about pretend emotions of sadness and anger. Frontiers in Psychology, 11, 602385. 10.3389/fpsyg.2020.602385
- Seyfarth R. M., Cheney D. L. (2003). Meaning and emotion in animal vocalizations. Annals of the New York Academy of Sciences, 1000(1), 32–55. 10.1196/annals.1280.004
- Seyfarth R. M., Cheney D. L. (2017). The origin of meaning in animal signals. Animal Behaviour, 124, 339–346. 10.1016/j.anbehav.2016.05.020
- Seyfarth R. M., Cheney D. L., Marler P. (1980). Vervet monkey alarm calls: Semantic communication in a free-ranging primate. Animal Behaviour, 28(4), 1070–1094. 10.1016/S0003-3472(80)80097-2
- Short J., Williams E., Christie B. (1976). The social psychology of telecommunications. Wiley.
- Shurick A. A., Daniel J. (2020, June 8–11). What’s behind those smiling eyes: Examining emoji sentiment across vendors. In Workshop Proceedings of the 14th International AAAI Conference on Web and Social Media (online).
- Sievers C., Gruber T. (2016). Reference in human and non-human primate communication: What does it take to refer? Animal Cognition, 19(4), 759–768. 10.1007/s10071-016-0974-5
- Sievers C., Gruber T. (2020). Can nonhuman primate signals be arbitrarily meaningful like human words? An affective approach. Animal Behavior and Cognition, 7(2), 140–150. 10.26451/abc.07.02.08.2020
- Sievers C., Wild M., Gruber T. (2017). Intentionality and flexibility in animal communication. In Andrews K., Beck J. (Eds.), Routledge handbook of philosophy of animal minds (pp. 333–342). Routledge.
- Slocombe K. E., Zuberbühler K. (2007). Chimpanzees modify recruitment screams as a function of audience composition. Proceedings of the National Academy of Sciences, 104, 17228–17233. 10.1073/pnas.0706741104
- Smith A. V., Proops L., Grounds K., Wathan J., Scott S. K., McComb K. (2018). Domestic horses (Equus caballus) discriminate between negative and positive human nonverbal vocalisations. Scientific Reports, 8, 1–8. 10.1038/s41598-018-30777-z
- Szameitat D. P., Alter K., Szameitat A. J., Wildgruber D., Sterr A., Darwin C. J. (2009). Acoustic profiles of distinct emotional expressions in laughter. The Journal of the Acoustical Society of America, 126, 354–366. 10.1121/1.3139899
- Tauch C., Kanjo E. (2016, September 12–16). The roles of emojis in mobile phone notifications. In Proceedings of the 2016 ACM International Joint Conference on Pervasive and Ubiquitous Computing: Adjunct, Heidelberg, Germany.
- Tchernichovski O., Oller D. K. (2016). Vocal development: How marmoset infants express their feelings. Current Biology, 26, R422–R424. 10.1016/j.cub.2016.03.063
- Thorstenson C. A., McPhetres J., Pazda A. D., Young S. G. (2022). The role of facial coloration in emotion disambiguation. Emotion, 22, 1604–1613. 10.1037/emo0000900
- Tigwell G. W., Flatla D. R. (2016, July 17–22). Oh that’s what you meant! Reducing emoji misunderstanding. In Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct, Toronto, Canada.
- Tomasello M. (2008). Origins of human communication. MIT Press.
- Townsend S. W., Koski S., Byrne R., Slocombe K., Bickel B., Braga Goncalves I., Burkart J. M., Flower T., Gaunet F., Glock H. J., Gruber T., Jansen D. A. W. A. M., Liebal K., Linke A., Miklósi Á., Moore R., van Schaik C. P., Stoll S., Vail A., Waller B. M., Wild M., Zuberbühler K., Manser M. B. (2017). Exorcising Grice’s ghost: An empirical approach to studying intentional communication in animals. Biological Reviews, 92(3), 1427–1433. 10.1111/brv.12289
- Townsend S. W., Manser M. B. (2013). Functionally referential communication in mammals: The past, present and the future. Ethology, 119, 1–11. 10.1111/eth.12015
- Townsend S. W., Rasmussen M., Clutton-Brock T., Manser M. B. (2012). Flexible alarm calling in meerkats: The role of the social environment and predation urgency. Behavioral Ecology, 23, 1360–1364. 10.1093/beheco/ars129
- Tyack P. L. (2020). A taxonomy for vocal learning. Philosophical Transactions of the Royal Society B: Biological Sciences, 375, 20180406. 10.1098/rstb.2018.0406
- Vandergriff I. (2013). Emotive communication online: A contextual analysis of computer-mediated communication (CMC) cues. Journal of Pragmatics, 51, 1–12. 10.1016/j.pragma.2013.02.008
- Walther J. B. (1992). Interpersonal effects in computer-mediated interaction: A relational perspective. Communication Research, 19, 52–90. 10.1177/009365092019001003
- Walther J. B. (1996). Computer-mediated communication: Impersonal, interpersonal, and hyperpersonal interaction. Communication Research, 23, 3–43. 10.1177/009365096023001001
- Wang H. (2003). Computer-mediated compliance: An experimental study on the influence of language intensity and email announcement responses. Paper presented at the National Communication Association annual conference, Miami, FL, USA.
- Watson S. K., Filippi P., Gasparri L., Falk N., Tamer N., Widmer P., Manser M., Glock H.-J. (2022). Optionality in animal communication: A novel framework for examining the evolution of arbitrariness. Biological Reviews, 97(6), 2057–2075. 10.1111/brv.12882
- Wheeler B. C., Fischer J. (2012). Functionally referential signals: A promising paradigm whose time has passed. Evolutionary Anthropology: Issues, News, and Reviews, 21, 195–205. 10.1002/evan.21319
- Wiseman S., Gould S. J. J. (2018, April 26). Repurposing emoji for personalised communication: Why [pizza slice] means “I love you.” In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montréal, Canada.
- Wood I. D., McCrae J. P., Andryushechkin V., Buitelaar P. (2018). A comparison of emotion annotation approaches for text. Information, 9(5), 117. 10.3390/info9050117
- Zimmermann E., Leliveld L. M. C., Schehka S. (2013). Towards the evolutionary roots of affective prosody in human acoustic communication: A comparative approach to mammalian voices. In Altenmüller E., Schmidt S., Zimmermann E. (Eds.), Evolution of emotional communication: From sounds in nonhuman mammals to speech and music in man (pp. 116–132). Oxford University Press.
Supplementary Materials
Supplemental material, sj-docx-1-emr-10.1177_17540739241303505 for Emotion in Nonverbal Communication: Comparing Animal and Human Vocalizations and Human Text Messages by T. Gruber, E. F. Briefer, A. Grütter, A. Xanthos, D. Grandjean, M. B. Manser and S. Frühholz in Emotion Review

