Neuroimage: Reports. 2022 Jun 4;2(3):100105. doi: 10.1016/j.ynirp.2022.100105

The power of tears: Observers’ brain responses show that tears provide unambiguous signals independent of scene context

Anita Tursic a, Maarten Vaessen a, Minye Zhan a, Ad JJM Vingerhoets b, Beatrice de Gelder a,c
PMCID: PMC12172846  PMID: 40567306

Abstract

Only humans produce emotional tears, a fact that has been linked to triggering empathy, social bonding, and providing support in observers. Consequently, either the tears themselves play a crucial role in eliciting such behavior, or, alternatively, the negative context in which they are shed is responsible for these observers’ reactions. The present study investigates whether the context in which we see an individual cry influences our perception of tears. We exposed participants (N = 13) to compound stimuli of faces with or without tears, combined with positive, negative, and scrambled backgrounds, while their brain activity was measured using functional magnetic resonance imaging (fMRI). Findings reveal that the lateral occipital gyrus responds to the presence of tears but that the contextual information does not influence this reaction; furthermore, tears appear to facilitate interpreting emotional facial expressions when combined with a positive context. These findings indicate that tears are a robust, unambiguous signal, the perception of which is insensitive to context but can still contribute to the context interpretation. This feature sets tears apart from other facial emotional expressions. This is likely due to their crucial evolutionary role as one of the foremost indicators of discomfort, signaling a need for help and their power to forge a bond between people.

Keywords: Tears, Face, Context, Lateral occipital gyrus, Functional magnetic resonance imaging

1. Introduction

Only humans produce emotional tears, a fact that has been linked to triggering empathy, bonding, and providing support in observers. Tears may play a unique role in eliciting such prosocial behavior (Vingerhoets, 2013; see Zickfeld et al., 2021 for a recent systematic overview). An interesting question is whether the functional role of tears derives from interactions between the tear signal, the facial expression, and the context, or whether, alternatively, their role is relatively autonomous and their meaning can be recognized without additional information, encouraging a fast prosocial reaction in others. This would be evident, for example, when a crying child is first attended to and reassured before the caregiver investigates the cause of the crying, but is perhaps less evident in adult crying.

In daily life, we rarely see a face without any context providing cues about the person's emotional state. It is therefore reasonable to assume that the brain actively uses contextual information when processing and interpreting emotional faces and guiding reactions to them. Experiments using compound stimuli combining a face and a background scene with the same emotional valence (i.e., congruent stimuli) yielded consistent support for context influences on facial expression processing (de Gelder et al., 2006). Emotional congruency of the faces and the background resulted in faster and more accurate recognition of the person's emotion (Diéguez-Risco et al., 2013; Kret and de Gelder, 2010; Reschke et al., 2017; Righart and de Gelder, 2008b; Xu et al., 2017), a lower amplitude of the early-processing ERP component N170 (Hietanen and Astikainen, 2013; Righart and de Gelder, 2006, 2008a; but see Diéguez-Risco et al., 2013; Xu et al., 2017), a higher late-processing late positive potential (LPP) (Diéguez-Risco et al., 2013; Xu et al., 2017), and a higher steady-state visually evoked potential (ssVEP) (Wieser and Keil, 2014), when compared to incongruent stimuli. Similar congruency effects have also been shown with fMRI when combining emotional faces with emotional bodies (Poyo Solanas et al., 2018). Furthermore, recent research demonstrated that the context in which the face is presented plays a more prominent role when the facial expression is ambiguous. When Lee et al. (2012) used morphed faces, highly ambiguous faces were more likely to be perceived as fearful when presented with negative backgrounds, and as neutral when combined with a positive context. This effect was weaker when the facial expressions were clear.

Studies investigating the effect of background on face perception generally use fearful (Kret and de Gelder, 2010; Righart and de Gelder, 2006, 2008a, 2008b; Sinke et al., 2012; van den Stock et al., 2014a, 2014b; Wieser and Keil, 2014; Xu et al., 2015, 2017), angry (Kret et al., 2013; Kret and de Gelder, 2010), or disgusted faces (Reschke et al., 2017; Righart and de Gelder, 2008b), contrasted with neutral or happy expressions. Specifically concerning tear perception, a recent eye-tracking study reported that tears attracted visual attention, that tearful faces received longer gaze time, and that the presence of tears led to greater perceived emotional intensity (Picó et al., 2020a). The study thus shows that tears convey a message without the explicit need to identify the specific emotion that possibly caused them, suggesting that tear perception is a relatively autonomous process that is not under voluntary attentional control. However, to date, no study has systematically investigated the impact of the emotional context on the perception of tears.

Only humans produce emotional tears (Gračanin et al., 2018; Rottenberg and Vingerhoets, 2012; Vingerhoets, 2013; Vingerhoets and Bylsma, 2016). The type of triggers, extent, and frequency, however, change over the lifespan; infants and children cry loudly and due to distress, pain, or need, whereas adults cry less often, less vocally, and usually due to more contextual and intrinsic reasons (Rottenberg and Vingerhoets, 2012). The most frequent triggers of adult tears include conflict, loss, grief, defeat, or failure, but also positive events, such as weddings, reunions, and the birth of children (Denckla et al., 2014; Vingerhoets, 2013; Vingerhoets and Bylsma, 2016). It is not completely clear why (adult) humans weep (Gračanin et al., 2018; Rottenberg and Vingerhoets, 2012; Vingerhoets, 2013). Initially, the focus of research was mainly on the intrapersonal effects of crying, particularly the supposed cathartic effect of tears. However, more recently, researchers diverted their attention to the interpersonal effects of tears (Vingerhoets and Bylsma, 2016). Studies using a variety of experimental approaches yielded consistent and increasing evidence that tears serve specific interpersonal functions (Picó et al., 2020b; Zickfeld et al., 2021). The emotional valence of sad, tearful faces is recognized faster than that of faces with the same expression, but without tears (Balsters et al., 2013; Gračanin et al., 2021; Picó et al., 2020b; Provine et al., 2009). Tearful individuals are also perceived as more helpless by others and more likely to be helped, which encourages social bonding (Stadel et al., 2019; Vingerhoets, van de Ven and van der Velden, 2016; Zickfeld et al., 2021). Finally, there is suggestive evidence that individuals who cry for appropriate, valid reasons are perceived as warmer, as well as more honest and reliable (Picó et al., 2020b; van Roeyen et al., 2020).

Whereas the studies mentioned above all used self-reports and/or behavioral techniques, each with well-known limitations, functional magnetic resonance imaging (fMRI) studies may provide additional insight into the behavioral intentions of observers of crying. The first steps in this direction were made by researchers who demonstrated that, when listening to recordings of crying adults, the activation of the left amygdala was stronger than that of the right amygdala, whereas the right insula was more active than the left insula (Sander and Scheich, 2005). These results are in line with the functional roles of the mentioned regions: the insula is known to be involved in emotional processing, empathy, and social cognition, but also in auditory processing, including sound detection and the processing of non-verbal sounds (for an overview, see Gogolla, 2017; Uddin et al., 2017). The right insula shares stronger connections with the auditory cortex and the amygdala than the left insula, which might explain the stronger activation in the mentioned study (Zhang et al., 2019). Although both amygdalae are associated with emotional processing, only damage to the left amygdala seems to disrupt the processing of vocal affective information (Frühholz et al., 2015).

The exposure of adults to auditory stimuli of crying infants elicited a stronger reaction in the amygdala than the sounds of crying adults (Sander et al., 2007) and control sounds (Riem et al., 2011, 2012). Furthermore, gender effects were observed between participants listening to crying and laughing sounds of infants; activations in the amygdala and anterior cingulate cortex (ACC) were significantly stronger in women than in men (Sander et al., 2007). The ACC shares strong connections with the limbic areas, including the amygdala, and is generally associated with emotional regulation and processing; in particular, it is active during induced sadness or pain and during the processing of social information and empathy (for an overview, see Bush et al., 2000; Lavin et al., 2013; Lockwood, 2016; Stevens et al., 2011). From an evolutionary perspective, especially its association with pain and empathy might explain the stronger activation related to crying infants, given the gender differences in empathy (Christov-Moore et al., 2014).

More recent studies focusing on the visual domain showed that the perception of infant tears resulted in stronger brain activation than the perception of adult tears, although not in the amygdala but in different areas, namely the occipital cortex (specifically the lateral occipital gyrus), precuneus, superior parietal lobule, precentral gyrus, and superior and middle frontal gyrus. Adult tears resulted in activation within one significant cluster, including the lateral occipital cortex (LOC) and occipital pole (Riem et al., 2017). The LOC is sensitive to visual and emotional salience rather than arousal (Kuniecki et al., 2018; Todd et al., 2012), suggesting that stronger activation during tear perception might indicate a clearer emotional expression.

The present study aims to replicate the findings on adult tear perception from a previous study (Riem et al., 2017) and to expand on them by evaluating valence-dependent context effects on tear perception. When comparing faces with tears to faces without tears, we expect to find activation in the LOC. Furthermore, since negative events more commonly result in weeping (Denckla et al., 2014) and might potentially elicit stronger empathic reactions, we hypothesize that perceiving crying adults in negative circumstances will result in stronger activation in tear-perception-related regions than perceiving tears shed for positive reasons. On the other hand, research suggests that tears clarify the facial expression and add meaning to it (Balsters et al., 2013; Gračanin et al., 2021; Provine et al., 2009), so the additional information carried by the context might be redundant. Evolutionarily speaking, there is little doubt that vocal crying presents a strong signal of need (Newman, 2007), essential for an infant's survival. However, the function of emotional tears for adults is less clear (Gračanin et al., 2018). If tears indeed are an absolute, unmistakably strong signal (e.g., Gračanin et al., 2021), they may be expected to remove the ambiguity, making additional context information unnecessary.

2. Materials and methods

2.1. Participants

Fifteen female participants (age range 19–27, mean = 21.5, SD = 2.45; one left-handed) were recruited on the Maastricht University campus. After completing an anonymous online screening form, these participants were chosen based on their high scores on an empathy questionnaire (Interpersonal Reactivity Index; Davis, 1980) and on their self-evaluation of crying proneness in several different scenarios (e.g., the death of a person, a reunion; Crying Proneness Scale; Denckla et al., 2014). High empathy and crying proneness were relevant for a different study, conducted in the second half of the same scanning session, in which we hoped to elicit tears in the participants; this other study is not discussed in the current manuscript. It is important to emphasize that the previously demonstrated gender-specific processing of crying stimuli (Sander et al., 2007), gender-specific empathic processing (Christov-Moore et al., 2014), and the overall high empathy scores of the participants might have influenced stimulus perception and processing, which could limit the generalization of the present findings to the general population. Exclusion criteria were MRI contraindications and an affective disorder diagnosis. Two participants were later excluded from the analysis due to excessive movement in the scanner, resulting in a final sample of thirteen participants (age range 19–27, mean = 21.6, SD = 2.6; one left-handed). All participants provided written informed consent and received compensation for their participation. The experimental procedures were conducted in accordance with the Declaration of Helsinki and were approved by the local ethics committee.

2.2. Stimuli

We used compound stimuli consisting of full-color images (size 800 × 500 pixels) of a face overlaid on a background (see Fig. 1). They were presented on a grey screen (1920 × 1200 pixels) and were replaced by a white fixation cross during rest periods when no image was displayed. The pictures of the faces represented crying persons, either with or without tears (“tear” and “no-tear” condition, respectively). In the “no-tear” condition, the tears were digitally removed from the picture, allowing the use of the same photos of the individuals in both conditions. The images of crying faces that displayed real emotional reactions to an art show were used in previous studies (Riem et al., 2017; Vingerhoets et al., 2016). We used eight male and eight female targets, with clearly visible tears and mainly neutral facial expressions (although completely neutral expressions could not be guaranteed due to the nature of the image acquisition, i.e., the reactions were not acted or otherwise instructed).

Fig. 1. Examples of stimuli. Compound stimuli consisted of faces with tears or with digitally removed tears and of backgrounds that were either negative, positive, or scrambled. The stimuli included both male and female faces.

Different background images were used to provide either a positive or negative context in which the person was crying. They were divided into three categories: cars, buildings, and events. Negative backgrounds consisted of car wrecks, burning houses, and funerals, respectively, and positive backgrounds represented wedding cars, island beach houses, and birthday parties. To control for the effect of the background content on tear perception, the positive and negative background images were individually Fourier phase-scrambled using a custom script (Matlab R2016a, MathWorks, Natick, Massachusetts) and used as a third background condition.
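The paper's scrambling was done with a custom Matlab script; as an illustration of the general technique, a minimal Python sketch of Fourier phase-scrambling (function name and parameters are ours, not from the study) could look like this:

```python
import numpy as np

def phase_scramble(img, seed=0):
    """Fourier phase-scramble a 2-D greyscale image: keep the amplitude
    spectrum but randomise the phase spectrum, destroying recognisable
    content while preserving low-level image statistics."""
    rng = np.random.default_rng(seed)
    f = np.fft.fft2(img)
    # Taking the phases of the FFT of a real noise image guarantees that the
    # random phase field is conjugate-symmetric, so the output stays real.
    rand_phase = np.angle(np.fft.fft2(rng.random(img.shape)))
    scrambled = np.fft.ifft2(np.abs(f) * np.exp(1j * (np.angle(f) + rand_phase)))
    return scrambled.real
```

Because only the phases change, the scrambled background retains the original's spatial-frequency content and mean luminance, which is what makes it a suitable low-level control condition.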

Background images were validated before the beginning of the study by an independent sample of 20 female participants. The initial 64 backgrounds were divided into four pairs of positive and negative categories: wedding cars and car wrecks, island beach houses and burning houses, birthday parties and funerals, and nurseries and destroyed homes. For each image, participants selected the label that best described it; the options included “neutral,” “disgust,” “fear,” “happy,” and “sad.” Then the strength of the selected emotion was rated on a five-point scale, where “1” meant “very weak” and “5” meant “very strong.” Finally, the valence was determined on a five-point scale, with “1” indicating “very negative” and “5” “very positive.” We excluded all images rated as either positive (score of 4 or 5) or negative (score of 1 or 2) by less than 70% of the participants. Next, the four top-rated images in each category were selected and, based on the highest average score of each category, the pictures of destroyed homes and nurseries were removed, leaving three category pairs: wedding cars and car wrecks, island beach houses and burning houses, and funerals and birthday parties.

2.3. Paradigm

Participants were scanned using a 3 T Siemens Prisma Fit scanner at Scannexus, Maastricht University. Once the participants were lying comfortably in the scanner, stimuli were back-projected on a screen at the back of the bore using PsychoPy software (version 1.84) (Peirce, 2007), with a viewing distance of ∼75 cm. Images were pseudorandomized so that each block included four images matched in gender, tear presence, and background valence, resulting in 12 different conditions. No compound stimulus was presented twice in a session. Within each 3800 ms block, four images were presented; each was displayed for 800 ms, with a 200 ms fixation cross between successive images (4 × 800 ms + 3 × 200 ms = 3800 ms). Every run consisted of three repetitions of each condition and four catch trials, for a total of 40 blocks, which were pseudorandomly presented and separated by a jittered rest period of 12.16 s (±1.33 s). Participants were asked to pay attention to the displayed images and to press a button with their right index finger whenever they saw a red triangle presented on a face (i.e., a catch trial). The catch trials (10% of all trials), which consisted of a standard compound stimulus with a red triangle added on the face, were used to ensure that the participants were indeed paying attention to the stimuli. Most of the trials thus required merely passive viewing; no other behavioral data were collected during scanning.
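As a sanity check on the timing description, the block and run arithmetic can be written out explicitly (this reading, in which the last image of a block is not followed by a fixation cross, is the one that reproduces the reported 3800 ms):

```python
# Timing of one stimulation block: four 800 ms images separated by
# three 200 ms fixation crosses (no fixation after the last image).
MS_IMAGE, MS_FIX, N_IMAGES = 800, 200, 4
block_ms = N_IMAGES * MS_IMAGE + (N_IMAGES - 1) * MS_FIX
assert block_ms == 3800  # matches the reported block duration

# Blocks per run: 12 conditions (2 tear x 2 gender x 3 background)
# repeated three times, plus four catch trials.
n_blocks = 2 * 2 * 3 * 3 + 4
assert n_blocks == 40  # matches the reported 40 blocks per run
```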

Due to technical problems, the data of three participants consisted of only three functional runs; the remaining participants completed four runs each.

2.4. Complementary questionnaires

After the scanning session, the participants were asked to fill in an online questionnaire in which the faces were presented separately from the backgrounds. Participants rated the facial expression of each face by indicating the perceived valence on a five-point scale, with 1 meaning “very negative” and 5 “very positive.”

2.5. MRI data acquisition

Anatomical T1-weighted data of 13 participants were acquired using a 3 T scanner with a 64-channel coil and the following parameters: Magnetization Prepared RApid Gradient Echo (MPRAGE) sequence, repetition time = 2300 ms, echo time = 2.98 ms, GeneRalized Autocalibrating Partially Parallel Acquisition (GRAPPA) factor = 2, 192 slices, 1 mm isotropic resolution, flip angle = 9°, field of view = 256 × 256 mm², inversion time = 900 ms, matrix size = 256 × 256.

The anatomical data of the remaining two participants were collected with a T1-weighted 3D MPRAGE sequence with different parameters (repetition time = 2400 ms, echo time = 2.34 ms, field of view = 223 × 223 mm², matrix = 320 × 320, 256 sagittal slices in a single slab, inversion time = 1000 ms, flip angle = 8°, GRAPPA = 2, 0.7 mm isotropic resolution) due to the requirements of another study. These data were first down-sampled to match the 1 mm isotropic resolution of the other anatomical datasets.

Functional scans covered the whole brain and were acquired with a T2*-weighted gradient echo EPI sequence (repetition time = 1330 ms, echo time = 30 ms, 63 slices without gap, 2 mm isotropic resolution, flip angle = 67°, multiband factor = 3, GRAPPA = 2, field of view = 200 × 200 mm², matrix size = 100 × 100, phase encoding direction anterior to posterior) (Feinberg et al., 2010). To correct for EPI distortion, five additional volumes were acquired before every run with the opposite phase encoding direction (posterior to anterior).

2.6. MRI preprocessing

The (f)MRI analysis was performed with BrainVoyager (version 20.4, Brain Innovation B.V., Maastricht, the Netherlands). Using the COPE plugin, functional data were first corrected for EPI distortion in the anterior-posterior phase-encoding direction. Preprocessing included slice scan time correction with sinc interpolation, 3D motion correction (six parameters) using trilinear/sinc interpolation, temporal high-pass filtering with a frequency-space filter cut-off of 0.01 Hz, and no initial spatial smoothing. The data from the first functional run were aligned with each participant's anatomical data, and later runs were aligned to the first functional run to ensure satisfactory spatial alignment. Anatomical and functional runs were normalized into Talairach space, keeping the original resolutions of 1 mm and 2 mm isotropic, respectively. A Gaussian spatial smoothing kernel of 6 mm was then applied to each functional run.

2.7. Analysis

The data collected with the online questionnaires after the scanning were analyzed using SPSS (version 24, I.B.M. Corp.). A two-way repeated-measures ANOVA was performed to check if the presented images were perceived as different in terms of emotional valence when comparing tearful faces to faces with digitally removed tears and between genders.

The fMRI analysis was performed by first fitting a random-effects general linear model to the group data with 20 predictors: the 12 conditions (combinations of tear presence, gender, and background), catch trials, six z-transformed motion parameters included as nuisance predictors, and a constant. Next, a random-effects analysis of variance (ANOVA) was performed with “tear,” “gender,” and “background” as within-subject factors. The resulting volume maps were initially thresholded at p = 0.001, and a minimum cluster size threshold corresponding to a cluster-level false-positive rate (α) of 0.05 was applied after performing a Monte-Carlo simulation with 5000 iterations for each map.
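The cluster-size thresholding above was run in BrainVoyager; conceptually, the Monte-Carlo procedure can be sketched as follows (a simplified stand-in with placeholder volume size, smoothness, and iteration count, not the plugin's implementation):

```python
import numpy as np
from scipy import ndimage
from scipy.stats import norm

def cluster_size_threshold(shape=(40, 40, 40), fwhm_vox=3.0,
                           p_voxel=0.001, alpha=0.05, n_iter=200, seed=0):
    """Estimate the minimum cluster size (in voxels) such that, under the
    null hypothesis, a suprathreshold cluster at least that large occurs
    anywhere in the volume with probability below alpha."""
    rng = np.random.default_rng(seed)
    z_thresh = norm.isf(p_voxel)  # one-sided z for the voxel-wise threshold
    sigma = fwhm_vox / (2 * np.sqrt(2 * np.log(2)))  # FWHM -> Gaussian sigma
    max_sizes = np.empty(n_iter)
    for i in range(n_iter):
        # Smoothed Gaussian noise as a null volume, re-standardised so the
        # voxel-wise threshold still corresponds to p_voxel.
        noise = ndimage.gaussian_filter(rng.standard_normal(shape), sigma)
        noise /= noise.std()
        labels, n = ndimage.label(noise > z_thresh)
        max_sizes[i] = np.bincount(labels.ravel())[1:].max() if n else 0
    # The (1 - alpha) quantile of the null maximum cluster size.
    return int(np.percentile(max_sizes, 100 * (1 - alpha)))
```

The study ran 5000 iterations per statistical map; the small defaults here only keep the sketch fast.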

Beta values within the resulting significant clusters were then extracted per participant, cluster, and condition for each factor and interaction, and imported into SPSS. Repeated-measures ANOVAs were performed to calculate simple main effects for the interactions, i.e., the effect of one variable at a specific level of another variable.
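For a single two-level within-subject factor (e.g., tear vs. no-tear betas extracted from one cluster), the repeated-measures F statistic reduces to a paired comparison; a minimal sketch with hypothetical data (not the study's betas):

```python
import numpy as np

def rm_anova_2level(a, b):
    """F statistic for a repeated-measures ANOVA with one within-subject
    factor at two levels (a, b: one value per participant)."""
    data = np.stack([np.asarray(a, float), np.asarray(b, float)])  # 2 x n
    n = data.shape[1]
    grand = data.mean()
    # Partition the total sum of squares into condition, subject, and error.
    ss_cond = n * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_subj = 2 * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((data - grand) ** 2).sum() - ss_cond - ss_subj
    # df_cond = 1, df_err = n - 1.
    return (ss_cond / 1) / (ss_err / (n - 1))
```

With two levels this F equals the squared paired t statistic, with df = (1, n − 1), matching the df = (1, 12) reported below for the 13 participants.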

3. Results

3.1. Behavioral data

3.1.1. In the scanner

Out of four catch trials per run, participants on average detected 3.75 trials (SD = 0.69).

3.1.2. After scanning

ANOVA results of the face ratings confirmed that the valence of tearful facial expressions (mean = 2.46, SE = 0.06) was perceived as more negative than that of expressions without tears (mean = 2.95, SE = 0.05) [F (1,14) = 26.265, p < 0.001], although the average values of both were distributed closely around the value of 2.5, implying relatively neutral expressions. Female faces (mean = 2.59, SE = 0.08) were in general rated as more negative than male faces (mean = 2.81, SE = 0.07) [F (1,14) = 15.813, p < 0.05], but no significant interaction was observed between the tear and gender factors.

3.2. Functional data

To investigate the potential effects of backgrounds and gender on the perception of tears, we performed a 2 (tear: tear and no tear) × 3 (background: negative, positive, and scrambled) × 2 (gender: female and male) repeated measures ANOVA. The results, summarized in Table 1, show a significant effect of tears, background, and the following interactions: tear × background, tear × gender, and background × gender. Each significant result is discussed in a separate paragraph below. The remaining interactions were not significant and are therefore not mentioned.

Table 1.

Significant activations in brain areas for each of the factors in the ANOVA: laterality, center-of-gravity coordinates (Talairach space), size (number of voxels), statistics (F-value), simple effects of interactions, and involved areas. Simple effects (factors “Tear × Gender” and “Tear × Background”) show which condition activated the brain area more as part of the interaction and how significant the difference is. Pairwise comparisons (factor “Background”) show which background valence activated the brain area more as part of the main effect of background and the degree of statistical significance.

| Factor | Hemisphere | x | y | z | Size | F | Simple effects / Pairwise comparisons | Area |
|---|---|---|---|---|---|---|---|---|
| Tear | R | 33 | −82 | 5 | 188 | 23.873 | | Middle occipital gyrus |
| Tear × Gender | L | −17 | −50 | 18 | 74 | 23.755 | No tear: Male > Female | Corpus callosum |
| Tear × Background | R | 39 | −82 | 12 | 236 | 11.565 | Scrambled: Tear > No tear ∗∗ | Middle occipital gyrus |
| | R | 14 | −78 | 46 | 172 | 10.139 | Scrambled: Tear > No tear ∗∗; Negative: No tear > Tear; Tear: Positive > Negative | Precuneus |
| | L | −17 | −69 | 58 | 163 | 11.440 | Scrambled: Tear > No tear ∗∗; Negative: No tear > Tear; Tear: Scrambled > Negative | Superior parietal lobule |
| | L | −18 | −63 | 36 | 195 | 11.897 | Scrambled: Tear > No tear ∗∗; Positive: Tear > No tear | Precuneus |
| Background | R | 37 | −75 | 17 | 4975 | 15.411 | Negative > Scrambled; Positive > Scrambled | Middle occipital gyrus, middle temporal gyrus, superior occipital gyrus, precuneus |
| | R | 28 | −45 | −10 | 8832 | 17.221 | Negative > Scrambled; Positive > Scrambled | Parahippocampal gyrus, fusiform gyrus, culmen, lingual gyrus, declive |
| | R | 17 | −52 | 16 | 1906 | 14.196 | Negative > Scrambled; Positive > Scrambled | Posterior cingulate, parahippocampal gyrus, corpus callosum |
| | L | −7 | 55 | 19 | 556 | 11.000 | Negative > Scrambled; Positive > Scrambled | Medial and superior frontal gyrus |
| | L | −15 | −54 | 13 | 418 | 11.643 | Negative > Scrambled; Positive > Scrambled | Posterior cingulate |
| | L | −27 | −53 | −10 | 11862 | 18.483 | Negative > Scrambled; Positive > Scrambled | Fusiform gyrus, parahippocampal gyrus, declive, lingual gyrus, culmen |
| | L | −35 | −80 | 11 | 5649 | 15.123 | Negative > Scrambled; Positive > Scrambled | Middle occipital gyrus, middle temporal gyrus, cuneus, precuneus, inferior occipital gyrus |

∗ p < 0.05. ∗∗ p < 0.01.

The main effect of tears was observed in the right lateral occipital gyrus [F (1,12) = 23.873, p < 0.001], where the perception of tears resulted in a significantly higher activation compared to the no tears condition (Fig. 2). In general, tearful faces seem to trigger a stronger brain response than faces without tears.

Fig. 2. The main effect of tears. Perception of tearful faces resulted in stronger activation of the right middle occipital gyrus compared to faces without tears [F (1,12) = 23.873, p < 0.001].

When investigating the main effect of the backgrounds, both positive and negative backgrounds resulted in stronger activation than scrambled backgrounds in all but one out of seven clusters (all p < 0.005, see Table 1). The activation in the remaining cluster, consisting of the left medial and superior frontal gyrus, was more substantial for scrambled backgrounds (p < 0.025). Therefore, the activation detected in most clusters was more robust for emotional backgrounds.

The interaction tear × background resulted in significant activation in four clusters, covering the right middle occipital gyrus [F (2,24) = 11.565], the left superior parietal lobule [F (2,24) = 11.440], and the left and right precuneus [F (2,24) = 11.897 and 10.139, respectively]. To understand which conditions drove the interaction, we examined simple effects. These revealed that tear perception caused stronger activation than seeing faces without tears in all four clusters when the background was scrambled. The simple effect of tears remained significant only in the left precuneus when the images were presented in combination with a positive background. No brain area responded more to faces with tears than without them when the faces were presented on negative backgrounds; in contrast, the left superior parietal lobule and right precuneus responded more strongly to faces without tears. Furthermore, when looking at the differences within the tear condition, the left superior parietal lobule responded significantly more strongly to scrambled backgrounds than to negative ones, and the right precuneus to positive compared to negative backgrounds.

Finally, effects of the gender of the depicted faces were investigated. Although we found a significant interaction between the tear and gender factors, no difference between the target genders within the tear condition was observed. Similarly, no difference was detected between the presence and absence of tears for either gender.

In summary, the activation was stronger for faces with tears than without tears when the background was scrambled or positive. Negative backgrounds caused the activation to be stronger for faces without tears. Comparing only the conditions with tearful faces, both scrambled and positive backgrounds resulted in a stronger activation compared to negative backgrounds. The gender of the depicted individuals showed no effect in combination with tearful faces.

4. Discussion

The present study investigated the neural mechanisms of tear perception. It focused on whether the simultaneous presentation of a scene context influences this perception and whether the valence of the scene determines this influence. Previous studies showed that, in the case of negative emotional expressions such as fear, anger, disgust, and tearless sadness, the affective meaning of the context influences the perception of a facial expression (Kret and de Gelder, 2010; Righart and de Gelder, 2006, 2008a, 2008b; Sinke et al., 2012; Reschke et al., 2017). In contrast, our results suggest that the perception of tears and the understanding of the crier's emotional state are not affected by the context. Nevertheless, when combined with positive and scrambled backgrounds, tears facilitate the interpretation of emotional expressions. This finding suggests that tears are a more robust and less ambiguous signal than other emotional facial expressions.

Previous research demonstrated that background context, especially when negative, tends to boost the impact of otherwise ambiguous emotional facial expressions (Aviezer et al., 2017; de Gelder et al., 2006; Hassin et al., 2013; Righart and de Gelder, 2008a). Surprisingly, a negative context does not seem to play an important role in the perception of tearful facial expressions. This indicates that tears do not require additional negative contextual information to be interpreted correctly (cf. Lee et al., 2012). Our results thus suggest that the negative context provided additional information through automatic processing when observers subconsciously tried to decipher the facial expressions in the absence of tears. However, with visible tears, the negative contextual information was no longer used, or demanded less processing for interpretation, given the clear congruency and the lack of a “boosting” effect on brain activation when the two were combined (i.e., one was not needed to further explain the other). In contrast, tears provided crucial additional information when combined with positive and scrambled backgrounds, given the perhaps less apparent congruency between tears and positive backgrounds on the one hand and the lack of contextual information in the scrambled backgrounds on the other.

First, this interpretation is supported by the stronger activation caused by the negative scenes with tearless instead of tearful crying faces. Yet, when tears are combined with positive backgrounds, additional brain resources seem to be recruited presumably to generate an interpretation of the facial expressions consistent with the positive scene. The tears enabled a quick evaluation and facilitated processing of the negative context. When the negative background was presented without tears, the expectation of tears created a conflict, and additional brain resources (i.e., activation) were required to clarify the context correctly. The notion that tears are unambiguous is in line with the proposed biological function of tears, i.e., that they attract our attention (Picó et al., 2020a) and effectively stimulate the provision of needed support (Gračanin et al., 2018). The initial vocal crying of human infants originates from the distress calls of mammal infants (and infants of some bird species) to their parents (Newman, 2007). Since a loud vocalization not only attracts the attention of the caregivers but also of potential predators, Gračanin et al. (2018) speculate that a visual signal gains significance in more close social interactions throughout our childhood because they can be explicitly targeted at a specific person (e.g., a parent) and are therefore both more effective and less dangerous. Once a child has developed the motor skills to approach a chosen individual to communicate its needs, vocal crying is no longer vital. A visual signal better fits such more intimate interactions and does not inform the whole environment about one's weakness and is thus more safe and functional when one needs the protection or care of adults. This, of course, does not mean that adults no longer vocalize, as crying is a complex reaction; vocalizations just become weaker and less prevalent, depending on the crying intensity and circumstances. 
Tears therefore play an essential survival role in childhood, and they remain a powerful signal in adulthood. The most potent triggers of tears are powerlessness, loss, and separation (Vingerhoets, 2013). Happy tears can be regarded as the result of an overwhelming, intensely positive emotion related to a previous concern, doubt of success, or helplessness during a difficult time (Gračanin et al., 2018; Miceli and Castelfranchi, 2003). Observers perceive tearful individuals as more helpless, sad, friendly, and in need of support; they also report being more willing to help them and feeling closer to them (Balsters et al., 2013; Provine et al., 2009; Vingerhoets et al., 2016; Zickfeld et al., 2021).

While we argue that the perception and interpretation of tears are insensitive to negative context, this does not mean that the background information is not processed. In fact, negative background information is needed to confirm the congruency between the tears and the scene; once congruency is established, the resources necessary for further processing and interpretation are no longer required. This conclusion is in line with behavioral studies showing faster and more accurate recognition of a person's emotion when perceiving congruent stimuli (Diéguez-Risco et al., 2013; Kret and de Gelder, 2010; Reschke et al., 2017; Righart and de Gelder, 2008b; Xu et al., 2017). Furthermore, both background valences resulted in significantly stronger activation than scrambled backgrounds, but their information was not used for tear perception. This might explain why the brain, specifically the superior parietal lobule, responds more strongly to tears presented on a scrambled background than on a negative background. The two types of backgrounds are visually very different and, unlike scrambled backgrounds, negative backgrounds provide information that attracts our attention and engages some of the brain resources that would otherwise be devoted solely to the processing of tears. The superior parietal lobule is connected to the parietal part of the dorsal fronto-parietal system involved in visual-spatial attention, working memory, and action control, such as motor planning and imagery (Ptak, 2012; Ptak et al., 2017). Its precise function is currently unknown; perhaps this part of the network plays a prominent role in directing spatial attention to task-relevant features by integrating various visual features and using different reference frames (Ptak, 2012; Ptak et al., 2017; Szczepanski et al., 2013; Vossel et al., 2014), which seems to be in line with the postulated evolutionary role of tears: tears should draw the attention of observers and increase their willingness to provide support.

Interestingly, even though tears cannot be considered characteristic of a specific emotional state with a particular positive or negative connotation, the right precuneus responded more strongly to tears in the positive than in the negative context. This finding, too, could be attributed to attention: when presented together with a face, fearful backgrounds elicit stronger N180 event-related potential responses than happy and neutral backgrounds (Righart and de Gelder, 2008a). Perhaps negative backgrounds attract more attention than positive ones, leaving less attention for the tears. More likely, however, tearful faces combined with positive backgrounds required more resources to arrive at a correct interpretation of the tearful expression as positive. If this was indeed the case, participants may have tried to interpret the stimuli based on their personal experience, i.e., episodic memory, which has been shown to engage the precuneus (Cavanna and Trimble, 2006).

A distinct finding was the lack of activation in the insula, the amygdala, and the anterior cingulate cortex (ACC). These regions were consistently reported in previous literature investigating the effects of crying (Sander and Scheich, 2005; Sander et al., 2007; Riem et al., 2011, 2012) and were associated with processing and understanding the affective components of other people's emotional distress. Importantly, Riem et al. (2017) also failed to detect any activation in these regions. The key difference between those earlier studies on the one hand and the present study and that of Riem et al. (2017) on the other is the type of stimuli used: the earlier studies all used auditory recordings of crying individuals, perhaps making the individual's emotional state more ambiguous, whereas the present study and Riem et al. (2017) presented participants with visual stimuli. In line with our congruency results, the insula, the amygdala, and the ACC were probably not involved during the observation of the visual stimuli because of the clarity of tears; as a visually unambiguous signal, tears need no further interpretation, which obviates the involvement of regions providing additional emotional information. Moreover, on average, the presented faces were perceived as relatively neutral, further limiting the need for emotional interpretation.

Another notable result is the absence of a gender effect concerning tears. Social roles and the generally higher crying proneness observed in women (Fischer and Lafrance, 2015; Vingerhoets and Scheirs, 2000) led us to anticipate that crying men might be perceived differently than crying women. However, findings on the proneness to and causes of weeping rely mainly on self-reports, which may themselves be fundamentally affected by stereotypes and gender-specific norms (Bekker and Vingerhoets, 2001; Fischer and Lafrance, 2015). We also observed a gender difference in our behavioral data collected after the scanning session, with the faces of women being rated, on average, as more negative than those of men; the tears, however, did not seem to affect the valence ratings of the two genders differently. On the other hand, a study in which participants rated pictures of crying and non-crying men and women (Fischer et al., 2013) showed that, when crying, women were rated as more emotional and men as more stoic, entirely in line with perceived social roles and stereotypes. Furthermore, Stadel et al. (2019) showed that the effect of tears on the willingness to help was less potent for male dyads (i.e., a male observer and a male crier) than for female or mixed ones.

Moreover, Fischer et al. (2013) also showed a distinctive influence of different contexts (namely, work environments and romantic relationships), which may explain the lack of an interaction between tears and gender in the current study. In a professional setting, where tears are generally considered less appropriate, crying men were perceived as significantly more emotional, sadder, and less competent than crying women. Our study, however, mainly included contexts not inherently connected with social roles (such as funerals and weddings) and thus more closely resembled the relationship condition of Fischer et al. (2013), in which it is generally equally acceptable for both genders to cry. Furthermore, our study included only female participants, who (unlike men) have been shown to respond to criers similarly regardless of the crier's gender (Vingerhoets and Bylsma, 2016). Overall, this interpretation remains speculative and should be explored in future studies.

A few limitations of the present study need mentioning. First, although the sample size was relatively small, we were able to replicate the findings of Riem et al. (2017) regarding the perception of adult faces with and without tears: the activation in the lateral occipital cortex in response to tearful faces was also present in our study, supporting the reliability of the results obtained with our sample. Second, as our sample consisted only of (highly empathic) females, it is difficult to generalize our results to men. The previously observed low willingness of men to support weeping men (Stadel et al., 2019) could perhaps also be reflected in their brain activation. Furthermore, empathic individuals focus longer on emotional stimuli than individuals with lower empathy levels (Martínez-Velázquez et al., 2020); not restricting participation to highly empathic individuals might therefore weaken the brain activation or even eliminate it in certain regions. Future studies should thus include more participants of both genders, allowing a comparison between them. Another potential limitation is the lack of validation of the compound stimuli. Backgrounds and emotional faces were validated separately, so it remains possible that crying faces combined with, for example, positive backgrounds were sometimes interpreted as negative (e.g., a person crying at a party out of sadness because nobody came, rather than out of joy).

Moreover, the images of emotional faces depicted real rather than acted emotional reactions, so it is possible that the emotions in the facial expressions were not consistently or uniformly recognized across participants. However, this is less likely because the participants themselves validated the facial expressions. Finally, we used only static images of faces with visible tears. It is essential to realize that emotional crying is a more comprehensive biological response, consisting of visible tears as well as vocalizations and sobbing, which involves dynamic muscle contractions of the face and even the chest. Additional studies with more complex, multi-modal stimuli would therefore be essential to fully understand the perception and processing of crying.

In conclusion, our study suggests that tears constitute a clear, strong, and evolutionarily important signal of weakness and helplessness that facilitates social bonding and is insensitive to context. Unlike what has been shown for other facial expressions, such as tearless sadness and anger, the valence of the context, which is processed in parallel, is not needed to clarify the tearful facial expression. Further research is, however, necessary to shed more light on the roles of the specific brain structures involved in tear perception.

CRediT statement

Anita Tursic: conceptualization, formal analysis, investigation, resources, writing – original draft, writing – review & editing, visualization, project administration. Maarten Vaessen: conceptualization, formal analysis, investigation, writing – review & editing. Minye Zhan: formal analysis, investigation, writing – review & editing. Ad J. J. M. Vingerhoets: conceptualization, writing – review & editing, funding acquisition. Beatrice de Gelder: conceptualization, writing – review & editing, supervision, project administration, funding acquisition.

Conflict of interest statement

The authors declare no conflict of interest.

Data accessibility statement

The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to their containing information that could compromise the privacy of research participants.

Acknowledgments

This work was supported by the Center of Research on Psychological and Somatic disorders (CoRPS) of Tilburg University (A.V.) and by the European Research Council (ERC) under the European Union's Seventh Framework Programme for Research 2007-13 (ERC Grant agreement number 295673) to B.d.G.

We want to thank photographer Marco Anelli for permission to use some of his photographs in our stimuli set.

References

  1. Aviezer H., Ensenberg N., Hassin R.R. The inherently contextualized nature of facial emotion perception. Current Opinion in Psychology. 2017;17:47–54. doi: 10.1016/j.copsyc.2017.06.006.
  2. Balsters M., Krahmer E., Swerts M., Vingerhoets A.J.J.M. Emotional tears facilitate the recognition of sadness and the perceived need for social support. Evol. Psychol. 2013;11(1):148–158. doi: 10.1177/147470491301100114.
  3. Bekker M.H.J., Vingerhoets A.J.J.M. Male and female tears: swallowing versus shedding? In: Vingerhoets A.J.J.M., Cornelius R.R., editors. Adult Crying: A Biopsychosocial Approach. Brunner-Routledge; 2001. pp. 91–114.
  4. Bush G., Luu P., Posner M.I. Cognitive and emotional influences in anterior cingulate cortex. Trends Cognit. Sci. 2000;4(6):215–222. doi: 10.1016/S1364-6613(00)01483-2.
  5. Cavanna A.E., Trimble M.R. The precuneus: a review of its functional anatomy and behavioural correlates. Brain. 2006;129(3):564–583. doi: 10.1093/brain/awl004.
  6. Christov-Moore L., Simpson E.A., Coudé G., Grigaityte K., Iacoboni M., Ferrari P.F. Empathy: gender effects in brain and behavior. Neurosci. Biobehav. Rev. 2014;46(Pt 4):604–627. doi: 10.1016/j.neubiorev.2014.09.001.
  7. Davis M.H. A multidimensional approach to individual differences in empathy. JSAS Catalog of Selected Documents in Psychology. 1980;10(85):1–19. doi: 10.1037/0022-3514.44.1.113.
  8. de Gelder B., Meeren H.K.M., Righart R., van den Stock J., van de Riet W.A.C., Tamietto M. Beyond the face: exploring rapid influences of context on face processing. Prog. Brain Res. 2006;155:37–48. doi: 10.1016/S0079-6123(06)55003-4.
  9. Denckla C.A., Fiori K.L., Vingerhoets A.J.J.M. Development of the crying proneness scale: associations among crying proneness, empathy, attachment, and age. J. Pers. Assess. 2014;96(6):619–631. doi: 10.1080/00223891.2014.899498.
  10. Diéguez-Risco T., Aguado L., Albert J., Hinojosa J.A. Faces in context: modulation of expression processing by situational information. Soc. Neurosci. 2013;8(6):601–620. doi: 10.1080/17470919.2013.834842.
  11. Feinberg D.A., Moeller S., Smith S.M., Auerbach E., Ramanna S., Glasser M.F., Miller K.L., Ugurbil K., Yacoub E. Multiplexed echo planar imaging for sub-second whole brain FMRI and fast diffusion imaging. PLoS One. 2010;5(12). doi: 10.1371/journal.pone.0015710.
  12. Fischer A.H., Eagly A.H., Oosterwijk S. The meaning of tears: which sex seems emotional depends on the social context. Eur. J. Soc. Psychol. 2013;43(6):505–515. doi: 10.1002/ejsp.1974.
  13. Fischer A.H., Lafrance M. What drives the smile and the tear: why women are more emotionally expressive than men. Emotion Review. 2015;7(1):22–29. doi: 10.1177/1754073914544406.
  14. Frühholz S., Hofstetter C., Cristinzio C., Saj A., Seeck M., Vuilleumier P., Grandjean D. Asymmetrical effects of unilateral right or left amygdala damage on auditory cortical processing of vocal emotions. Proc. Natl. Acad. Sci. U.S.A. 2015;112(5):1583–1588. doi: 10.1073/pnas.1411315112.
  15. Gogolla N. The insular cortex. Curr. Biol. 2017;27(12):R580–R586. doi: 10.1016/j.cub.2017.05.010.
  16. Gračanin A., Bylsma L.M., Vingerhoets A.J.J.M. Why only humans shed emotional tears. Hum. Nat. 2018;29(2):104–133. doi: 10.1007/s12110-018-9312-8.
  17. Gračanin A., Krahmer E., Balsters M., Küster D., Vingerhoets A.J.J.M. How weeping influences the perception of facial expressions: the signal value of tears. J. Nonverbal Behav. 2021;45:83–105. doi: 10.1007/s10919-020-00347-x.
  18. Hassin R.R., Aviezer H., Bentin S. Inherently ambiguous: facial expressions of emotions, in context. Emotion Review. 2013;5(1):60–65. doi: 10.1177/1754073912451331.
  19. Hietanen J.K., Astikainen P. N170 response to facial expressions is modulated by the affective congruency between the emotional expression and preceding affective picture. Biol. Psychol. 2013;92(2):114–124. doi: 10.1016/j.biopsycho.2012.10.005.
  20. Kret M.E., de Gelder B. Social context influences recognition of bodily expressions. Exp. Brain Res. 2010;203(1):169–180. doi: 10.1007/s00221-010-2220-8.
  21. Kret M.E., Roelofs K., Stekelenburg J.J., de Gelder B. Emotional signals from faces, bodies and scenes influence observers' face expressions, fixations and pupil-size. Front. Hum. Neurosci. 2013;7:1–9. doi: 10.3389/fnhum.2013.00810.
  22. Kuniecki M., Wołoszyn K., Domagalik A., Pilarczyk J. Disentangling brain activity related to the processing of emotional visual information and emotional arousal. Brain Struct. Funct. 2018;223(4):1589–1597. doi: 10.1007/s00429-017-1576-y.
  23. Lavin C., Melis C., Mikulan E., Gelormini C., Huepe D., Ibañez A. The anterior cingulate cortex: an integrative hub for human socially-driven interactions. Front. Neurosci. 2013;7:64. doi: 10.3389/fnins.2013.00064.
  24. Lee T.-H., Choi J.-S., Cho Y.S. Context modulation of facial emotion perception differed by individual difference. PLoS One. 2012;7(3). doi: 10.1371/journal.pone.0032987.
  25. Lockwood P.L. The anatomy of empathy: vicarious experience and disorders of social cognition. Behav. Brain Res. 2016;311:255–266. doi: 10.1016/j.bbr.2016.05.048.
  26. Martínez-Velázquez E.S., Ahuatzin González A.L., Chamorro Y., Sequeira H. The influence of empathy trait and gender on empathic responses. A study with dynamic emotional stimulus and eye movement recordings. Front. Psychol. 2020;11:23. doi: 10.3389/fpsyg.2020.00023.
  27. Miceli M., Castelfranchi C. Crying: discussing its basic reasons and uses. New Ideas Psychol. 2003;21(3):247–273. doi: 10.1016/j.newideapsych.2003.09.001.
  28. Newman J.D. Neural circuits underlying crying and cry responding in mammals. Behav. Brain Res. 2007;182(2):155–165. doi: 10.1016/j.bbr.2007.02.011.
  29. Peirce J.W. PsychoPy-Psychophysics software in Python. J. Neurosci. Methods. 2007;162(1–2):8–13. doi: 10.1016/j.jneumeth.2006.11.017.
  30. Picó A., Espert R., Gadea M. How our gaze reacts to another person's tears: experimental insights into eye tracking technology. Front. Psychol. 2020;11:2134. doi: 10.3389/fpsyg.2020.02134.
  31. Picó A., Gračanin A., Gadea M., Boeren A., Aliño M., Vingerhoets A.J.J.M. How visible tears affect observers' judgements and behavioral intentions: sincerity, remorse, and punishment. J. Nonverbal Behav. 2020;44(2):215–232. doi: 10.1007/s10919-019-00328-9.
  32. Poyo Solanas M., Zhan M., Vaessen M., Hortensius R., Engelen T., de Gelder B. Looking at the face and seeing the whole body. Neural basis of combined face and body expressions. Soc. Cognit. Affect Neurosci. 2018;13(1):135–144. doi: 10.1093/scan/nsx130.
  33. Provine R.R., Krosnowski K.A., Brocato N.W. Tearing: breakthrough in human emotional signaling. Evol. Psychol. 2009;7(1):52–56. doi: 10.1177/147470490900700107.
  34. Ptak R. The frontoparietal attention network of the human brain: action, saliency, and a priority map of the environment. Neuroscientist. 2012;18(5):502–515. doi: 10.1177/1073858411409051.
  35. Ptak R., Schnider A., Fellrath J. The dorsal frontoparietal network: a core system for emulated action. Trends Cognit. Sci. 2017;21(8):589–599. doi: 10.1016/j.tics.2017.05.002.
  36. Reschke P.J., Knothe J.M., Lopez L.D., Walle E.A. Putting “context” in context: the effects of body posture and emotion scene on adult categorizations of disgust facial expressions. Emotion. 2017;18(1):153–158. doi: 10.1037/emo0000350.
  37. Riem M.M.E., Bakermans-Kranenburg M.J., Pieper S., Tops M., Boksem M.A.S., Vermeiren R.R.J.M., van IJzendoorn M.H., Rombouts S.A.R.B. Oxytocin modulates amygdala, insula, and inferior frontal gyrus responses to infant crying: a randomized controlled trial. Biol. Psychiatr. 2011;70(3):291–297. doi: 10.1016/j.biopsych.2011.02.006.
  38. Riem M.M.E., Bakermans-Kranenburg M.J., van IJzendoorn M.H., Out D., Rombouts S.A.R.B. Attachment in the brain: adult attachment representations predict amygdala and behavioral responses to infant crying. Attach. Hum. Dev. 2012;14(6):533–551. doi: 10.1080/14616734.2012.727252.
  39. Riem M.M.E., van IJzendoorn M.H., De Carli P., Vingerhoets A.J.J.M., Bakermans-Kranenburg M.J. As tears go by: baby tears trigger more brain activity than adult tears in nulliparous women. Soc. Neurosci. 2017;12(6):633–636. doi: 10.1080/17470919.2016.1247012.
  40. Righart R., de Gelder B. Context influences early perceptual analysis of faces - an electrophysiological study. Cerebr. Cortex. 2006;16(9):1249–1257. doi: 10.1093/cercor/bhj066.
  41. Righart R., de Gelder B. Rapid influence of emotional scenes on encoding of facial expressions: an ERP study. Soc. Cognit. Affect Neurosci. 2008;3(3):270–278. doi: 10.1093/scan/nsn021.
  42. Righart R., de Gelder B. Recognition of facial expressions is influenced by emotional scene gist. Cognit. Affect Behav. Neurosci. 2008;8(3):264–272. doi: 10.3758/CABN.8.3.264.
  43. Rottenberg J., Vingerhoets A.J.J.M. Crying: call for a lifespan approach. Social and Personality Psychology Compass. 2012;6(3):217–227. doi: 10.1111/j.1751-9004.2012.00426.x.
  44. Sander K., Frome Y., Scheich H. FMRI activations of amygdala, cingulate cortex, and auditory cortex by infant laughing and crying. Hum. Brain Mapp. 2007;28(10):1007–1022. doi: 10.1002/hbm.20333.
  45. Sander K., Scheich H. Left auditory cortex and amygdala, but right insula dominance for human laughing and crying. J. Cognit. Neurosci. 2005;17(10):1519–1531. doi: 10.1162/089892905774597227.
  46. Sinke C.B.A., van den Stock J., Goebel R., de Gelder B. The constructive nature of affective vision: seeing fearful scenes activates extrastriate body area. PLoS One. 2012;7(6). doi: 10.1371/journal.pone.0038118.
  47. Stadel M., Daniels J.K., Warrens M.J., Jeronimus B.F. The gender-specific impact of emotional tears. Motiv. Emot. 2019;43(4):696–704. doi: 10.1007/s11031-019-09771-z.
  48. Stevens F.L., Hurley R.A., Taber K.H. Anterior cingulate cortex: unique role in cognition and emotion. J. Neuropsychiatry. 2011;23(2):120–125. doi: 10.1176/jnp.23.2.jnp121.
  49. Szczepanski S.M., Pinsk M.A., Douglas M.M., Kastner S., Saalmann Y.B. Functional and structural architecture of the human dorsal frontoparietal attention network. Proc. Natl. Acad. Sci. U.S.A. 2013;110(39):15806–15811. doi: 10.1073/pnas.1313903110.
  50. Todd R.M., Talmi D., Schmitz T.W., Susskind J., Anderson A.K. Psychophysical and neural evidence for emotion-enhanced perceptual vividness. J. Neurosci. 2012;32(33):11201–11212. doi: 10.1523/JNEUROSCI.0155-12.2012.
  51. Uddin L.Q., Nomi J.S., Hebert-Seropian B., Ghaziri J., Boucher O. Structure and function of the human insula. J. Clin. Neurophysiol. 2017;34(4):300–306. doi: 10.1097/WNP.0000000000000377.
  52. van den Stock J., Vandenbulcke M., Sinke C.B.A., de Gelder B. Affective scenes influence fear perception of individual body expressions. Hum. Brain Mapp. 2014;35(2):492–502. doi: 10.1002/hbm.22195.
  53. van den Stock J., Vandenbulcke M., Sinke C.B.A., Goebel R., de Gelder B. How affective information from faces and scenes interacts in the brain. Soc. Cognit. Affect Neurosci. 2014;9(10):1481–1488. doi: 10.1093/scan/nst138.
  54. van Roeyen I., Riem M.M.E., Toncic M., Vingerhoets A.J.J.M. The damaging effects of perceived crocodile tears for a crier's image. Front. Psychol. 2020;11:172. doi: 10.3389/fpsyg.2020.00172.
  55. Vingerhoets A.J.J.M. Why Only Humans Weep: Unravelling the Mysteries of Tears. Oxford University Press; Oxford: 2013.
  56. Vingerhoets A.J.J.M., Bylsma L.M. The riddle of human emotional crying: a challenge for emotion researchers. Emotion Review. 2016;8(3):207–217. doi: 10.1177/1754073915586226.
  57. Vingerhoets A.J.J.M., Scheirs J.G.M. Sex differences in crying: empirical findings and possible explanations. In: Fischer A.H., editor. Gender and Emotion: Social Psychological Perspectives. Cambridge University Press; 2000. pp. 143–165.
  58. Vingerhoets A.J.J.M., van de Ven N., van der Velden Y. The social impact of emotional tears. Motiv. Emot. 2016;40(3):455–463. doi: 10.1007/s11031-016-9543-0.
  59. Vossel S., Geng J.J., Fink G.R. Dorsal and ventral attention systems: distinct neural circuits but collaborative roles. Neuroscientist. 2014;20(2):150–159. doi: 10.1177/1073858413494269.
  60. Wieser M.J., Keil A. Fearful faces heighten the cortical representation of contextual threat. Neuroimage. 2014;86:317–325. doi: 10.1016/j.neuroimage.2013.10.008.
  61. Xu Q., Yang Y., Tan Q., Zhang L. Facial expressions in context: electrophysiological correlates of the emotional congruency of facial expressions and background scenes. Front. Psychol. 2017;8:2175. doi: 10.3389/fpsyg.2017.02175.
  62. Xu Q., Yang Y., Zhang E., Qiao F., Lin W., Liang N. Emotional conflict in facial expression processing during scene viewing: an ERP study. Brain Res. 2015;1608:138–146. doi: 10.1016/j.brainres.2015.02.047.
  63. Zhang Y., Zhou W., Wang S., Zhou Q., Wang H., Zhang B., Huang J., Hong B., Wang X. The roles of subdivisions of human insula in emotion perception and auditory processing. Cerebr. Cortex. 2019;29(2):517–528. doi: 10.1093/cercor/bhx334.
  64. Zickfeld J.H., et al. Tears evoke the intention to offer social support: a systematic investigation of the interpersonal effects of emotional crying across 41 countries. J. Exp. Soc. Psychol. 2021;95:104137. doi: 10.1016/j.jesp.2021.104137.
