Frontiers in Medicine. 2023 May 25;10:1059203. doi: 10.3389/fmed.2023.1059203

Behavioral and neural underpinnings of empathic characteristics in a Humanitude-care expert

Wataru Sato 1,*, Atsushi Nakazawa 2, Sakiko Yoshikawa 3, Takanori Kochiyama 4, Miwako Honda 5, Yves Gineste 5,6
PMCID: PMC10248535  PMID: 37305136

Abstract

Background

Humanitude approaches have shown positive effects in elderly care. However, the behavioral and neural underpinnings of empathic characteristics in Humanitude-care experts remain unknown.

Methods

We investigated the empathic characteristics of a Humanitude-care expert (YG) and those of age-, sex-, and race-matched controls (n = 13). In a behavioral study, we measured subjective valence and arousal ratings and facial electromyography (EMG) of the corrugator supercilii and zygomatic major muscles while participants passively observed dynamic facial expressions associated with anger and happiness and their randomized mosaic patterns. In a functional magnetic resonance imaging (MRI) study, we measured brain activity while participants passively observed the same dynamic facial expressions and mosaics. In a structural MRI study, we acquired structural MRI data and analyzed gray matter volume.

Results

Our behavioral data showed that YG experienced higher subjective arousal and showed stronger facial EMG activity congruent with stimulus facial expressions compared with controls. The functional MRI data demonstrated that YG showed stronger activity in the ventral premotor cortex (PMv; covering the precentral gyrus and inferior frontal gyrus) and posterior middle temporal gyrus in the right hemisphere in response to dynamic facial expressions versus dynamic mosaics compared with controls. The structural MRI data revealed higher regional gray matter volume in the right PMv in YG than in controls.

Conclusion

These results suggest that Humanitude-care experts have behavioral and neural characteristics associated with empathic social interactions.

Keywords: empathy, expert, facial mimicry, fMRI, Humanitude care, mirror neuron system

1. Introduction

Given the increasing numbers of people with dementia worldwide (1), Humanitude care has attracted interest. Humanitude care was developed by Gineste and Marescotti in 1979 as relationship-centered care for people with dementia (2). This methodology facilitates gentle, positive interaction with patients with dementia (3) using more than 150 social skills, such as looking at them face-to-face (4, 5). Several studies have shown that Humanitude care effectively reduces behavioral and psychological symptoms of dementia in patients [e.g., (6, 7); for a review, see (8)]. Furthermore, studies showed that the experiences of Humanitude caregivers enhance empathy [i.e., the ability to respond emotionally to and understand others’ emotions (9)] (10, 11). These data suggest that Humanitude care has positive effects on both patients and caregivers, which promotes the necessity of research on this technique.

However, the characteristics of Humanitude-care experts remain unknown. Such information would deepen understanding of Humanitude care and facilitate its application. We hypothesized that Humanitude-care experts could have behavioral and neural characteristics associated with empathy, based on findings that Humanitude care experiences increase empathy in caregivers (10, 11). Several behavioral studies have shown that motor synchrony, such as facial mimicry, underlies empathic processing, including both sharing and recognizing the emotional states of others [e.g., (12); for reviews, see (13–15)]. Neuroscientific studies have suggested that the mirror neuron system (MNS) may underlie such empathic interaction via motor synchronization [for reviews, see (16–19)]. Among the core regions of the MNS, including the ventral premotor cortex (PMv; including the precentral and inferior frontal gyri), inferior parietal lobule, and superior temporal sulcus (STS) region (including the posterior middle and superior temporal gyri) (20), dynamic face-to-face interaction specifically activates the PMv and STS region in the right hemisphere (21–24). We hypothesized that Humanitude-care experts, compared with non-experts, would show enhanced facial mimicry of other individuals’ facial expressions, and have increased functional and structural neural substrates in the MNS.

To test this hypothesis, we conducted a case study of one Humanitude-care expert, YG, who was a Caucasian male trained in Humanitude care for more than 38 years. We conducted a series of behavioral, functional magnetic resonance imaging (MRI), and structural MRI studies with YG and age-, sex-, and race-matched controls. In the behavioral study, to test subjective empathic responses and facial mimicry, we measured subjective emotional ratings and facial electromyography (EMG) while participants passively observed dynamic facial expressions associated with anger and happiness and their randomized mosaic patterns. To test MNS activity, we conducted a functional MRI study measuring brain activity while participants passively observed the same dynamic facial expressions and mosaics. We further conducted a structural MRI study to analyze the structural characteristics of the MNS. We used the dynamic facial expression stimuli of Japanese models that were previously shown to elicit facial mimicry and MNS activity (23, 25, 26). In addition, as previous behavioral studies reported that the effects of trait empathy were more evident on attitudes toward outgroups than on those to ingroups (27, 28), we expected that the faces of people who differed racially from the participants would reveal the empathic characteristics of YG.

2. Materials and methods

2.1. Participants

YG was a 63-year-old Caucasian male who was one of the founders of Humanitude care. YG was an exceptional expert (29) in Humanitude care who met the objective criteria for the master level of expertise, namely teaching and setting standards (30). He had been involved in Humanitude care for more than 38 years. He previously worked as a caregiver and as an educator for caregivers, caring for more than 30,000 people with dementia (about 40 per week). He used the techniques of Humanitude care, which he had developed together with a colleague through his own caregiving experience. Humanitude care features a structured sequence of caring procedures based on the four pillars of gaze, speech, touch, and verticality, each of which includes many operationalized skills (4). When the experiment was conducted, he worked principally as an educator, delivering lectures on Humanitude care at about 500 institutes annually. The first language of YG was not Japanese.

The control group included 13 Caucasian male adults (mean ± SD age, 54.5 ± 9.4 years); the participants were matched with YG for age [Crawford and Howell’s (31) modified t-test, t = 0.88, p(two-tailed) = 0.394, zcc = 0.92], sex, and race. Control participants were recruited from a local human resource company (Chuo Sato, Amagasaki, Japan) through advertisements offering temporary jobs to foreigners in the West Japan region. The inclusion criteria were Caucasian race, male sex, age between 40 and 69 years, and willingness to participate in behavioral and MRI studies at Kyoto University, Japan. The exclusion criteria included any contraindication to MRI (e.g., a pacemaker). The first language of all controls was other than Japanese. The sample size was determined with reference to a power analysis for Crawford and Howell’s modified t-test (31), which indicated that detecting a deviation of > 3 SD in a single case with a power of 0.84 requires 10 controls (32). We also heuristically (33) referred to a previous functional MRI study that compared a sports expert with a control group of ordinary sports players (n = 6) (34).
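For illustration, the following Python sketch (not part of the original analysis; the function names and simulation settings are ours) implements Crawford and Howell’s modified t-test and estimates, by Monte Carlo simulation in the spirit of Crawford and Garthwaite (32), the power to detect a case deviating 3 SD from the control mean when 10 controls are available.

```python
import numpy as np
from scipy import stats

def crawford_howell_t(case_score, control_scores):
    """Crawford & Howell (1998) modified t-test comparing one case to a small
    control sample; the case does not contribute to the variance estimate."""
    n = len(control_scores)
    m = np.mean(control_scores)
    sd = np.std(control_scores, ddof=1)
    t = (case_score - m) / (sd * np.sqrt((n + 1) / n))
    p_one_tailed = stats.t.sf(t, df=n - 1)  # one-tailed (case > controls)
    return t, p_one_tailed

def power_by_simulation(true_deviation_sd=3.0, n_controls=10,
                        n_sims=10_000, alpha=0.05, seed=0):
    """Estimate power to detect a case lying `true_deviation_sd` SDs above
    the control population mean, via Monte Carlo simulation."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sims):
        controls = rng.standard_normal(n_controls)
        case = true_deviation_sd + rng.standard_normal()
        _, p = crawford_howell_t(case, controls)
        hits += p < alpha
    return hits / n_sims

if __name__ == "__main__":
    # With 10 controls and a 3-SD deviation, power is roughly 0.8 under
    # these simplified normality assumptions.
    print(f"Estimated power: {power_by_simulation():.2f}")
```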

All participants had normal or corrected-to-normal visual acuity. Following an explanation of the experimental procedure, all participants gave informed consent. This study was approved by the Ethics Committee of the Unit for Advanced Studies of the Human Mind, Kyoto University, Japan. All experiments complied with institutional ethical provisions and the Declaration of Helsinki.

2.2. Stimuli

For the behavioral and functional MRI studies, video clips of angry and happy facial expressions posed by four Japanese women and four Japanese men were used as dynamic facial expression stimuli. These stimuli were selected from our video database of facial expressions of emotion, which includes expressions by 65 Japanese models (35). The stimulus models looked straight ahead and were unfamiliar to the participants. These specific stimulus expressions were selected because they represented theoretically appropriate facial expressions, as confirmed by coding analyses performed by a trained coder using the Facial Action Coding System (36) and the Facial Action Coding System Affect Interpretation Dictionary (37). The speed of the dynamic changes in these expressions was within the range perceived as natural by observers (38), and the stimuli had been validated in several previous behavioral and functional MRI studies. Specifically, the stimuli elicited appropriate subjective emotional responses (39) and spontaneous facial mimicry (25, 26) and activated the MNS, including the right PMv and the bilateral STS regions (23). The dynamic expression stimuli comprised 38 frames ranging from neutral to emotional expressions. Each frame was presented for 40 ms, and each clip lasted 1,520 ms. The stimuli subtended a visual angle of approximately 15° vertical × 12° horizontal. An example of the stimulus sequence is shown in Figure 1, which shows a model who provided consent for the use of her image in scientific publications.

FIGURE 1. Illustrations of dynamic facial expression and dynamic mosaic stimuli.

To create dynamic mosaic image stimuli, all dynamic facial expression frames were divided into 50 vertical × 40 horizontal squares and rearranged using a fixed randomization algorithm (Figure 1). This procedure made each image unrecognizable as a face. A set of these 38 frames was serially presented as a moving clip, corresponding to the original dynamic face images, at the same speed as that for the dynamic expression stimuli. As a result, dynamic mosaic stimuli were presented with smooth motion comparable to natural dynamic facial expressions, even though they were unrecognizable as faces.
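As a conceptual illustration of this procedure (not the original stimulus-generation code; the frame sizes and function names below are assumptions for the example), the following Python sketch scrambles each frame of a clip into 50 × 40 blocks using a single fixed permutation, so that all frames are shuffled identically and the motion of the original clip is preserved.

```python
import numpy as np

def make_dynamic_mosaic(frames, n_rows=50, n_cols=40, seed=0):
    """Scramble each frame into n_rows x n_cols blocks using one fixed random
    permutation, applied identically to every frame of the clip."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n_rows * n_cols)  # fixed across frames
    h, w = frames[0].shape[:2]
    bh, bw = h // n_rows, w // n_cols
    mosaics = []
    for frame in frames:
        # Cut the frame into blocks in row-major order.
        blocks = [frame[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
                  for r in range(n_rows) for c in range(n_cols)]
        shuffled = [blocks[i] for i in perm]
        # Reassemble the shuffled blocks into an image.
        rows = [np.hstack(shuffled[r * n_cols:(r + 1) * n_cols])
                for r in range(n_rows)]
        mosaics.append(np.vstack(rows))
    return mosaics

# Example: 38 synthetic "frames" standing in for one 1,520-ms clip (40 ms/frame).
clip = [np.random.rand(500, 400) for _ in range(38)]
mosaic_clip = make_dynamic_mosaic(clip)
```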

2.3. Presentation apparatus

For the behavioral and functional MRI studies, the experiments were controlled using the Presentation software (Neurobehavioral Systems, Albany, CA, USA). In the behavioral study, the stimuli were presented on a 19-inch cathode ray tube monitor (HM903D-A, Iiyama, Tokyo, Japan). In the functional MRI study, the stimuli were projected from a liquid crystal projector (DLA-HD10K; Japan Victor Company, Yokohama, Japan) to a mirror positioned in front of the participants.

2.4. Procedure

The studies were conducted individually. The behavioral study was conducted first, followed by the functional and structural MRI studies.

2.4.1. Behavioral study

The behavioral study was conducted using procedures described previously (26), with some modifications. The experiments were conducted in an electrically shielded, soundproof room. At the beginning of the experiments, to conceal the real purpose of measuring facial muscle activity, participants were told that the experiment involved recording electrical activity from the skin. After electrode placement, the participants were told to view the stimuli, which they would evaluate afterward. EMG recordings were conducted while the participants passively viewed the stimuli. A total of 32 trials were performed, consisting of eight trials each of angry faces, happy faces, angry mosaics, and happy mosaics. The stimuli were presented in random order.

In each trial, a fixation point (a small gray cross on a white background) was presented at the center of the screen for 1,520 ms, and then the stimulus was presented for 1,520 ms. Next, the screen was filled with a solid gray field as an inter-trial interval, with length varying randomly between 6,000 and 9,000 ms.

After the EMG recordings, the stimuli were presented again, and the participants were asked to respond to the question, “How did you feel emotionally when you viewed the expression?” using an affect grid (40) that graphically assessed the two dimensions of valence and arousal on nine-point scales. Valence and arousal ranged from −4 (negative) to +4 (positive) and from −4 (low arousal) to +4 (high arousal), respectively. The general interpretation is that valence represents the qualitative component and arousal reflects the energy of either positive or negative emotions (41). The stimuli were presented in random order.

2.4.2. Functional MRI study

The functional MRI study was conducted as described previously (23), with modifications. Each participant completed a single functional MRI run consisting of 20 epochs of 20 s each, separated by 20 rest periods (a blank screen) of 10 s each. Each of the four stimulus conditions was presented in different epochs within the run, and the order of epochs was pseudorandomized. The order of stimuli within each epoch was randomized. Each epoch consisted of eight trials, for a total of 160 trials per participant. In eight of these trials (two in each of the angry dynamic facial expression, happy dynamic facial expression, angry dynamic mosaic, and happy dynamic mosaic conditions), the stimulus trial was replaced by a target trial.

In each stimulus trial, a fixation point (a small gray cross on a white background) was presented in the center of the screen for 980 ms, followed by the stimulus for 1,520 ms. In each target trial, a red cross (approximately 1.2° × 1.2°) was presented instead of the stimulus. Participants were asked to press a button using their right index fingers as quickly as possible when a red cross appeared and to gaze at the fixation point in each trial; they received no other information (e.g., stimulus type). These dummy tasks were conducted to ensure that the participants attended to the stimuli but did not engage in either controlled processing of the stimuli or stimulus-related motor responses.

After functional and structural image acquisition, the participants were interviewed to determine whether they had been aware that their facial muscle activity and related brain activity/structure had been tested. This process confirmed that all participants, including YG, had been unaware of the research purpose. Participants were then debriefed, and permission to use their data for analysis was requested and granted by all participants.

2.5. Measurement

2.5.1. EMG

EMG was used to monitor the muscles of the corrugator supercilii (related to brow-lowering actions, prototypical of angry facial expressions) and the zygomatic major (related to lip-corner-pulling actions, prototypical of happy facial expressions). These muscles were selected as indices of facial mimicry because several previous studies employing facial EMG indicated that observation of angry and happy facial expressions induced corrugator supercilii and zygomatic major muscle activities, respectively [e.g., (42)]. The Ag/AgCl electrodes were placed according to established guidelines (43, 44). A ground electrode was placed on the forehead. The data were amplified, filtered online (band pass: 20–400 Hz), and sampled at 1,000 Hz using an EMG-025 amplifier (Harada Electronic Industry, Sapporo, Japan) and the PowerLab 16/35 data acquisition system and LabChart Pro v8.0 software (ADInstruments, Dunedin, New Zealand). A low-cut filter at 20 Hz was applied because it has been reported to remove motion artifacts from facial EMG (45). All participants were video monitored using an unobtrusive digital web camera (HD1080P; Logicool, Tokyo, Japan).
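The recordings described above were filtered online by the amplifier; purely for illustration, the following Python sketch shows an equivalent offline band-pass (20–400 Hz) filter and rectification step, assuming the raw signal is available as a NumPy array sampled at 1,000 Hz (the function name and filter order are our choices).

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # sampling rate (Hz), as in the recordings described above

def bandpass_rectify(raw_emg, low=20.0, high=400.0, order=4):
    """Band-pass filter raw facial EMG (20-400 Hz) and rectify it.
    The 20-Hz low cut attenuates slow motion artifacts; rectification
    (absolute value) yields the amplitude envelope used for analysis."""
    nyq = FS / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="bandpass")
    filtered = filtfilt(b, a, raw_emg)
    return np.abs(filtered)

# Example with synthetic data: one 3.5-s trial epoch at 1,000 Hz.
rectified = bandpass_rectify(np.random.randn(3500))
```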

2.5.2. Functional MRI

Functional and structural image scanning was performed on a 3T scanning system (MAGNETOM Verio; Siemens, Malvern, PA, USA) using a 32-channel head coil. Elastic pads were used to stabilize the participants’ head position. The functional images consisted of 40 consecutive slices parallel to the anterior–posterior commissure plane, covering the whole brain. A T2*-weighted gradient-echo echo-planar imaging sequence was used with the following parameters: repetition time (TR) = 2,500 ms; echo time (TE) = 30 ms; flip angle = 90°; matrix size = 64 × 64; and voxel size = 3 × 3 × 4 mm. The slices were in ascending order.

2.5.3. Structural MRI

Following the acquisition of functional images, a T1-weighted, high-resolution structural image was acquired using a magnetization-prepared rapid-acquisition gradient-echo sequence (TR = 2,250 ms; TE = 3.06 ms; inversion time = 1,000 ms; flip angle = 9°; field of view = 256 × 256 mm; voxel size = 1 × 1 × 1 mm).

2.6. Data analyses

2.6.1. Subjective ratings

The valence and arousal ratings were analyzed separately. The ratings of dynamic angry expression, dynamic happy expression, and dynamic mosaic stimuli were averaged separately for each participant. Composite scores were then calculated: for valence, the valence ratings for angry expressions were multiplied by −1 and averaged with the valence ratings for happy expressions; for arousal, the arousal ratings for angry and happy expressions were averaged. The composite scores were analyzed in terms of the difference between YG and controls using Crawford and Howell’s modified t-test (31, 46) (one-tailed), implemented using the Singlims_ES function (47). This test is a modified version of the two-sample t-test in which the target single case is treated as a sample of n = 1 and does not contribute to the estimate of within-group variance (31). Results were considered statistically significant at p < 0.05.
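The following Python sketch illustrates the composite-score calculation and the Crawford–Howell comparison described above (one-tailed), together with the case–controls effect size zcc; it is a minimal re-implementation for clarity rather than the Singlims_ES program itself, and the example numbers are arbitrary.

```python
import numpy as np
from scipy import stats

def composite_scores(valence_angry, valence_happy, arousal_angry, arousal_happy):
    """Composite valence: mean of (angry valence x -1) and happy valence.
    Composite arousal: mean of arousal for angry and happy expressions."""
    valence = np.mean([-np.mean(valence_angry), np.mean(valence_happy)])
    arousal = np.mean([np.mean(arousal_angry), np.mean(arousal_happy)])
    return valence, arousal

def case_vs_controls(case, controls):
    """Crawford & Howell (1998) modified t-test (one-tailed) plus the
    case-controls effect size z_cc = (case - control mean) / control SD."""
    controls = np.asarray(controls, dtype=float)
    n, m, sd = len(controls), controls.mean(), controls.std(ddof=1)
    t = (case - m) / (sd * np.sqrt((n + 1) / n))
    p = stats.t.sf(t, df=n - 1)
    z_cc = (case - m) / sd
    return t, p, z_cc

# Illustrative numbers only (not the study data).
t, p, z = case_vs_controls(case=3.0, controls=np.random.normal(1.0, 0.8, 13))
```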

2.6.2. EMG

EMG data were analyzed using the Psychophysiological Analysis Software 3.3 (Computational Neuroscience Laboratory of the Salk Institute, La Jolla, CA, USA) implemented in MATLAB 2020a (MathWorks, Natick, MA, USA). The data were sampled for 3,500 ms in each trial, including pre-stimulus baseline data for 1,000 ms (during observation of the fixation point) and the data for 2,500 ms after stimulus onset. The time window of the post-stimulus period was the same as that of a previous study that detected facial EMG activity in response to dynamic facial expressions (26). For each trial, the differences in the mean absolute amplitudes between the pre- and post-stimulus periods were calculated as the EMG data.

As in the subjective rating analysis, the mean EMG activity was calculated for each condition of each participant, excluding, as artifacts, data points more than 3 SD above or below the overall mean. Then, as measures of facial mimicry, corrugator supercilii EMG activity in response to angry expressions and zygomatic major EMG activity in response to happy expressions were analyzed using Crawford and Howell’s (31) modified t-test (one-tailed).
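A minimal sketch of this trial-level quantification is shown below (illustrative only; the study used the Psychophysiological Analysis Software in MATLAB): the mean rectified amplitude over the 1,000-ms baseline is subtracted from that of the 2,500-ms post-stimulus period, and values deviating more than 3 SD from the overall mean are excluded.

```python
import numpy as np

def trial_emg_change(rectified, fs=1000, baseline_ms=1000, post_ms=2500):
    """Mean rectified amplitude after stimulus onset minus the mean over the
    1,000-ms pre-stimulus baseline, for one 3,500-ms trial epoch."""
    b = int(baseline_ms * fs / 1000)
    p = int(post_ms * fs / 1000)
    return rectified[b:b + p].mean() - rectified[:b].mean()

def exclude_outliers(values, criterion_sd=3.0):
    """Drop values more than 3 SD above or below the overall mean."""
    values = np.asarray(values, dtype=float)
    m, sd = values.mean(), values.std(ddof=1)
    return values[np.abs(values - m) <= criterion_sd * sd]

# Example: per-trial change scores for one condition, then the condition mean.
trial_values = [trial_emg_change(np.abs(np.random.randn(3500))) for _ in range(8)]
condition_mean = exclude_outliers(trial_values).mean()
```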

2.6.3. Functional MRI

Functional and structural MRI analyses were performed using the statistical parametric mapping package SPM12, implemented in MATLAB R2020a (MathWorks, Natick, MA, USA).

As a pre-processing step, functional images of each run were first realigned to the first scan as a reference to correct for head motion. The realignment parameters revealed only small (< 2 mm) head motion. All functional images were corrected for slice timing, and the functional images were then coregistered to the anatomical image. Next, all anatomical and functional images were normalized to Montreal Neurological Institute (MNI) space using the anatomical image-based unified segmentation–spatial normalization approach (48). Finally, the spatially normalized functional images were resampled to a voxel size of 2 × 2 × 2 mm and smoothed with an isotropic Gaussian kernel of 8-mm full-width at half-maximum (FWHM) to compensate for anatomical variability among participants.
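The preprocessing itself was performed in SPM12; as a conceptual illustration of only the final resampling and smoothing steps (not the SPM implementation, and with synthetic data and assumed function names), the following Python sketch resamples a volume to 2-mm isotropic voxels and applies an 8-mm FWHM Gaussian kernel, converting FWHM to the kernel standard deviation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def resample_and_smooth(volume, voxel_size=(3.0, 3.0, 4.0),
                        target=(2.0, 2.0, 2.0), fwhm_mm=8.0):
    """Resample a 3-D volume to the target voxel size and smooth it with an
    isotropic Gaussian kernel specified as FWHM in millimetres.
    sigma (in voxels) = FWHM_mm / (voxel_mm * 2*sqrt(2*ln(2)))."""
    factors = [v / t for v, t in zip(voxel_size, target)]
    resampled = zoom(volume, zoom=factors, order=1)  # trilinear interpolation
    sigma_vox = [fwhm_mm / (t * 2.0 * np.sqrt(2.0 * np.log(2.0))) for t in target]
    return gaussian_filter(resampled, sigma=sigma_vox)

# Example on a synthetic EPI-sized volume (64 x 64 x 40 voxels of 3 x 3 x 4 mm).
smoothed = resample_and_smooth(np.random.rand(64, 64, 40))
```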

Random-effects analyses were performed to identify significantly activated voxels at the population level (49). First, a single-subject analysis was performed (50). Task-related regressors were modeled as boxcar functions for the stimulus conditions and delta functions for the target condition, each convolved with a canonical hemodynamic response function, for each presentation condition in each participant. The realignment parameters were used as covariates to account for motion-related noise signals. A high-pass filter with a cut-off period of 128 s was used to eliminate artifactual low-frequency trends. Serial autocorrelation was accounted for using a first-order autoregressive model. For the second-level random-effects analysis, contrast images of the main effect of stimulus type (dynamic expression versus dynamic mosaic) were entered into a two-sample t-test with age as a covariate. The contrast of interest was YG versus controls. To ensure valid activity for dynamic facial expressions, the simple main effect of stimulus type was additionally tested in each group. Voxels were deemed significant if they reached an extent threshold of P < 0.05 with family-wise error (FWE) correction for the whole brain, with a cluster-forming threshold of P < 0.001 (uncorrected).
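To clarify how the epoch regressors were formed, the following Python sketch builds a boxcar regressor for 20-s epochs and convolves it with a simplified double-gamma hemodynamic response function sampled at the TR of 2.5 s; this is a didactic approximation rather than SPM’s canonical HRF with its full parameterization, and the example onsets are illustrative.

```python
import numpy as np
from scipy.stats import gamma

TR = 2.5  # seconds, as in the acquisition described above

def canonical_hrf(tr=TR, duration=32.0):
    """Simplified double-gamma HRF sampled at the TR (peak ~5 s, undershoot ~15 s)."""
    t = np.arange(0, duration, tr)
    hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    return hrf / hrf.sum()

def boxcar_regressor(onsets_s, duration_s, n_scans, tr=TR):
    """Boxcar (epoch) regressor convolved with the HRF, in scan units."""
    box = np.zeros(n_scans)
    for onset in onsets_s:
        start = int(round(onset / tr))
        stop = int(round((onset + duration_s) / tr))
        box[start:stop] = 1.0
    return np.convolve(box, canonical_hrf(), mode="full")[:n_scans]

# Example: five 20-s epochs of one condition spaced 120 s apart, over 240 scans
# (240 scans x 2.5 s = 600 s, i.e., 20 x [20-s epoch + 10-s rest]).
reg = boxcar_regressor(onsets_s=np.arange(0, 600, 120), duration_s=20.0, n_scans=240)
```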

Brain structures were labeled anatomically and identified according to Brodmann’s areas using the Automated Anatomical Labeling Atlas (51) and Brodmann Maps (Brodmann.nii), respectively, with the MRIcron tool.

2.6.4. Structural MRI

All images were analyzed using the Computational Anatomy Toolbox (CAT12) for SPM12 with the default settings. All structural T1 images were segmented into gray matter, white matter, and cerebrospinal fluid using an adaptive maximum a posteriori (AMAP) approach (52). The intensity inhomogeneity of each image was modeled as a slowly varying spatial function and corrected within the AMAP estimation. These segmented images were used for partial volume estimation with a simple model of mixed tissue types to improve segmentation (53). A spatially adaptive non-local means denoising filter was applied to handle spatially varying noise levels (54). A Markov random field cleanup was employed to improve image quality. Gray matter images in native space were subsequently normalized to the standard stereotactic space defined by the MNI using the diffeomorphic anatomical registration through exponentiated Lie algebra (DARTEL) approach (55). The resulting normalized gray matter images were modulated using Jacobian determinants from the non-linear warping only, to exclude the effect of total intracranial volume. Finally, the normalized, modulated gray matter images were resampled to a resolution of 1.5 × 1.5 × 1.5 mm and smoothed using an 8-mm FWHM isotropic Gaussian kernel, consistent with the recommendation that the FWHM for VBM typically lies between 4 and 12 mm (56).
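Modulation and smoothing are performed internally by CAT12; the following Python sketch is only a conceptual illustration (with synthetic arrays and assumed variable names) of how a warped gray matter map is multiplied by the Jacobian determinants of the deformation and then smoothed with an 8-mm FWHM kernel at a 1.5-mm voxel size.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def modulate_and_smooth(warped_gm, jacobian_det, voxel_mm=1.5, fwhm_mm=8.0):
    """Modulate a spatially normalized gray matter probability map by the
    Jacobian determinants of the (non-linear) deformation, so that voxel
    values again reflect local gray matter volume, then smooth with an
    isotropic Gaussian kernel specified as FWHM."""
    modulated = warped_gm * jacobian_det
    sigma_vox = fwhm_mm / (voxel_mm * 2.0 * np.sqrt(2.0 * np.log(2.0)))
    return gaussian_filter(modulated, sigma=sigma_vox)

# Example on synthetic maps already resampled to 1.5-mm isotropic voxels.
gm = np.random.rand(121, 145, 121)
jac = np.ones_like(gm)  # identity deformation, purely for illustration
vbm_image = modulate_and_smooth(gm, jac)
```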

To identify brain regions showing gray matter volume differences between YG and controls, we performed a two-sample t-test with total brain volume and age as covariates; the contrast of YG versus controls was tested. Voxels were deemed significant if they reached an extent threshold of P < 0.05 following FWE correction for multiple comparisons over the search volume, with a cluster-forming threshold of P < 0.001 (uncorrected). First, FWE correction was applied for the entire brain. Then, we conducted small-volume correction for the anatomically defined PMv (i.e., the opercular and triangular parts of the IFG) and STS region (i.e., the superior and middle temporal gyri) in the right hemisphere, based on our a priori interest. Brain structures were identified using the same method as described above for the functional image analysis.

3. Results

3.1. Behavior

For the valence ratings (Figure 2, upper left), t-tests showed no significant difference between YG and controls (t = 0.71, p = 0.246, zcc = 0.73). For the arousal ratings (Figure 2, upper right), t-tests revealed a significant difference between YG and controls (t = 2.12, p = 0.028, zcc = 2.20), indicating higher arousal ratings for dynamic facial expressions in YG than in controls.

FIGURE 2. Mean (±SE) subjective ratings of experienced valence (upper left) and arousal (upper right) and electromyography (EMG) activity of the corrugator supercilii (lower left) and zygomatic major (lower right) muscles in response to dynamic angry expressions, dynamic happy expressions, and dynamic mosaics.

For corrugator supercilii EMG activity in response to angry expressions (Figure 2, lower left), t-tests revealed a significant group difference (t = 7.35, p < 0.001, zcc = 7.63), indicating stronger facial mimicry responses to others’ angry expressions in YG than in controls. For zygomatic major EMG activity in response to happy expressions (Figure 2, lower right), t-tests revealed a significant group difference (t = 61.88, p < 0.001, zcc = 64.22), indicating stronger facial mimicry responses to others’ happy expressions in YG than in controls. Visual inspection of the videos indicated that YG evidenced externally observable facial expressions congruent with the facial expressions of the stimuli. In the controls, no detectable facial reactions were systematically associated with the stimuli.

3.2. Regional brain activity

Contrasts of dynamic facial expression versus dynamic mosaic observation in YG and in controls (Supplementary Figure 1 and Supplementary Tables 1, 2) revealed brain activity similar to that reported in previous studies [e.g., (23)]. Specifically, in both YG and controls, significant activity was detected in the right PMv and bilateral STS regions. However, the clusters in the PMv and STS region in the right hemisphere were more extended in YG than in controls.

The contrast of YG versus controls for brain activity in response to dynamic expressions versus dynamic mosaics revealed significantly stronger activity in several brain regions, including the PMv (covering the precentral gyrus and IFG) and STS region (covering the superior, middle, and inferior temporal gyri) in the right hemisphere (Figure 3 and Table 1). No other significant regions were detected in this contrast. There was no region showing significantly stronger activity in controls than in YG.

FIGURE 3. Statistical parametric maps indicating regions that were significantly more activated in YG than in controls in response to dynamic expressions versus dynamic mosaics. Areas of activation are rendered on a spatially normalized brain (left) and on spatially normalized magnetic resonance images of a representative participant (middle). Blue crosses indicate activation foci in the group difference. Effect sizes (right) are indicated by mean (±SE) beta values of regions at the sites of activation foci. R, right.

TABLE 1. Brain regions that were significantly more activated in YG than in controls in response to dynamic expressions versus dynamic mosaics.

Side | Region | BA | x | y | z | T-value | Cluster size (voxels)
YG > Controls
R | Cerebellum | – | 44 | −50 | −32 | 14.81 | 169
L | Supramarginal gyrus | 43 | −40 | −16 | 28 | 7.95 | 189
R | Middle temporal gyrus | 21 | 50 | −54 | 8 | 7.81 | 244
R | Middle temporal gyrus | 37 | 52 | −62 | 6 | 5.77 |
R | Precentral gyrus | 4 | 48 | −8 | 34 | 6.64 | 331
R | Precentral gyrus | 6 | 58 | 2 | 30 | 6.19 |
Controls > YG
None

BA, Brodmann’s area.

3.3. Regional gray matter volume

The results revealed no significant difference in gray matter volume between YG and controls when FWE correction was applied for the entire brain. When we restricted our search volumes based on our interest in the MNS regions for facial expression processing (i.e., PMv and STS regions in the right hemisphere), there was a significant main effect for group in the right PMv (i.e., the IFG; peak: x = 44, y = 3, z = 23; T = 6.19; 96 voxels; Figure 4), with higher volume in YG than in controls.

FIGURE 4. Statistical parametric maps indicating regions that showed significantly higher gray matter volume in YG than in controls. Significant areas are rendered on spatially normalized magnetic resonance images of a representative participant (upper). A blue cross indicates the focus of the group difference. Effect sizes (lower) are indicated by mean (±SE) beta values at the focus.

4. Discussion

In this study, we investigated the behavioral and neural characteristics related to empathy in the Humanitude-care expert YG and in control participants. We acknowledge that testing only one Humanitude-care expert is a limitation of the study. Selection bias (57) was inevitable, and the generalizability of our findings thus remains to be tested (58). However, single-case research has often yielded valuable insights into our understanding of clinical and psychological phenomena and has complemented research using large groups (59–61). Further, to enhance the value and rigor of our work, we combined behavioral and neuroimaging methods (61, 62) and used validated statistical tools to compare the single case and controls (31). We believe that our results with an expert individual afford unique insights into Humanitude-care practitioners. Another limitation is that we used a cross-sectional design; therefore, we cannot draw causal conclusions about the relationship between the behavioral or neural characteristics of YG and Humanitude care. It is possible that YG’s experience in elderly care using Humanitude techniques for more than 38 years has shaped his unique behavioral and neural characteristics. Alternatively, he may have an innate talent for empathic social interaction and selected an appropriate career to use his empathic abilities to help people with dementia. However, ample evidence indicates that becoming an expert generally requires practice rather than innate talent (63, 64), and some data suggest that this is also true among medical professionals (65, 66). Additionally, anecdotal records about YG’s life indicate that his first job was as a sports teacher, involving diving and swimming instruction. When YG started to care for people with dementia, he experienced many failures and sought innovative care techniques. YG improved his care techniques gradually through interaction with more than 30,000 individuals with dementia. Based on this information, we speculate that our results at least partially reflect the experience of Humanitude care for people with dementia. Future research comparing different stages of Humanitude-care expertise is warranted to test this idea.

Our behavioral results showed that YG exhibited stronger subjective arousal and facial mimicry than the controls. The control group did not show any clear pattern of facial mimicry, which is consistent with previous findings that facial mimicry may be weaker in response to other-race faces than to same-race faces (67–69). Regardless, YG demonstrated clear facial mimicry. These results are consistent with findings that the experience of Humanitude care enhances empathy in caregivers (10, 11) and that people with greater empathic traits report stronger shared emotional experiences (70–75) and show more evident facial mimicry (70–80). Together with these data, our results suggest that long-term experience of Humanitude care improves behavioral (subjective and motor) characteristics associated with empathy.

Our functional MRI data showed that YG exhibited stronger MNS activity, specifically in the right PMv and STS region, in response to dynamic facial expressions than did controls. The observed activation of the PMv and STS region in the right hemisphere is compatible with previous neuroimaging findings indicating that these regions are active during observation of dynamic facial expressions (21–24) and constitute a functional network for the processing of such expressions (22, 24). The heightened activity in the right PMv and STS and the facial mimicry of YG are also in agreement with previous findings that these regions were associated with facial EMG activity during observation of facial expressions (24, 81). Based on the assumption that YG acquired heightened empathy through Humanitude-care experience (10, 11), the results are consistent with previous neuroimaging findings that people with high empathic traits show stronger activity in the MNS during the observation of others’ facial or bodily actions (82–86). The data are also consistent with previous neuroimaging findings that expert groups of professional ballet dancers and professional pianists showed stronger MNS activity during the observation of bodily actions related to their expertise (87–90). Other studies also found that non-expert participant groups who trained in dancing skills showed enhanced MNS activity during the observation of dancing bodily actions (91, 92). Our data extend these findings and indicate that such an enhancement effect on MNS activity can be acquired through elderly-care experience, demonstrated through the observation of facial actions, and evaluated even in a single expert.

Our structural MRI data revealed that the gray matter volume of the right PMv was higher in YG than in controls. The fact that YG showed enhancements in both facial mimicry and the gray matter volume of the right PMv is in line with the finding of an earlier structural MRI study that gray matter volume in this region was increased in participants who identified emotions in facial expressions more accurately than others (93). As with the functional MRI results, under the assumption that YG acquired heightened empathy through Humanitude-care experience (10, 11), the results are consistent with those of previous studies showing that people with greater empathic traits have increased gray matter volume in the PMv (94, 95). The results are also compatible with previous findings that groups of musicians showed increased gray matter volume in the PMv, possibly reflecting their musical expertise (96–100), and that musical training increased gray matter volume in the PMv (99, 101, 102). Collectively, our results suggest that long-term experience of Humanitude care increases the gray matter volume of the MNS, to an extent detectable in a single participant.

Our data have a theoretical implication that Humanitude-care experience can improve behavioral (i.e., shared subjective emotional experience and facial mimicry) and neural (i.e., MNS activity and structure) characteristics associated with empathy. This result is notable, because Humanitude-care techniques do not explicitly offer instruction in these subjective, behavioral, and neural responses, although face-to-face interaction with eye contact at close distance is a pillar of Humanitude care (5). It may be difficult to implement natural facial mimicry deliberately, which is rapid (103) and automatic (104, 105). We speculate that, as Humanitude techniques are designed to create and maintain compassionate and respectful relationships between caregivers and people with dementia (2, 8), repeated Humanitude-care experience may develop empathic behavioral and neural characteristics in caregivers. In the general clinical care literature, several studies have found that empathic traits in caregivers generally produced good patient outcomes, such as patient satisfaction and improved cholesterol levels [e.g., (106); for reviews, see (107, 108)]. Some studies also showed that behaviors and experiences shared between caregivers and patients were associated with symptom reduction in those with chronic illnesses [e.g., (109); for a review, see (110)]. Interviews with numerous caregivers of people with dementia revealed that shared emotion was vital to maintaining relationships with people with dementia (111). Together with these data, our findings suggest that the empathic behavioral and neural characteristics of Humanitude caregivers may contribute to the positive effects of Humanitude care to reduce the behavioral and psychological symptoms of dementia [for a review, see (8)].

Our data also provide a unique perspective on the potential of Humanitude care. The results showed heightened facial mimicry and increased activity and structure in the MNS region in a Humanitude-care expert. Previous studies have reported that individuals with autism spectrum disorder (ASD) show weakened facial mimicry (112, 113) and reduced activity [e.g., (22, 114–116)] and gray matter volume [e.g., (117–120)] in the MNS region [for a review, see (121)]. Some studies showed that social skill training can improve social functioning in individuals with ASD [for a review, see (122)]; however, further research to develop effective training techniques is warranted. The results of this study suggest that performing Humanitude care (i.e., social interaction using Humanitude care techniques) may have the potential to enhance facial mimicry behaviors and MNS activity and structure in individuals with ASD. It would be interesting to test this hypothesis in a future study.

Apart from the intrinsic limitations of the study design discussed above, there were several other limitations. First, we presented only Japanese faces to Caucasian participants. We expected that such outgroup faces would reveal differences between YG and controls, as the effect of empathy is reportedly more evident in positive attitudes toward outgroup members (27, 28). However, whether Caucasian faces would produce different results and whether our current results can be generalized to faces of other races remain to be tested. Future studies should investigate the behavioral and neural responses of Humanitude-care experts to faces of other races. Second, our control sample size was small, so only large effects could be detected. Future studies with larger samples may reveal additional behavioral and neural characteristics associated with Humanitude-care expertise. Finally, although we found heightened empathic behavioral and neural characteristics in a Humanitude-care expert, it remains unknown how these characteristics may contribute to the positive effects of Humanitude care, including reductions in the behavioral and psychological symptoms of dementia (8). Further work is warranted to investigate causal relationships between the empathic characteristics of Humanitude-care experts and the effects of the care.

In conclusion, our behavioral results demonstrated that the Humanitude expert YG experienced stronger shared emotion and exhibited more evident facial mimicry compared with controls. Our functional MRI data demonstrated that YG showed enhanced activity in the MNS (the PMv and STS region in the right hemisphere) in response to dynamic facial expressions, compared with controls. Structural MRI data revealed higher regional gray matter volume in the right PMv in YG than in controls. These results imply that Humanitude-care experts have behavioral and neural characteristics associated with empathic social interactions.

Data availability statement

The data supporting the conclusions of this article will be made available by the authors, excluding any identifiable data.

Ethics statement

The studies involving human participants were reviewed and approved by the Ethics Committee of the Unit for Advanced Studies of the Human Mind, Kyoto University, Japan. The patients/participants provided their written informed consent to participate in this study. Written informed consent was obtained from the individual(s) for the publication of any potentially identifiable images or data included in this article.

Author contributions

WS, AN, SY, and MH designed the research and obtained the data. WS and TK analyzed the data. All authors wrote the manuscript, and read and approved the final version.

Acknowledgments

We thank the Institute for the Future of Human Society, Kyoto University for support with the MRI data acquisition and Kazusa Minemoto and Masaru Usami for technical support.

Funding Statement

This study was supported by funds from the Japan Science and Technology Agency CREST (JPMJCR17A5).

Footnotes

Conflict of interest

YG was employed by company IGM-France. TK was employed by ATR-Promotions Inc. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Supplementary material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fmed.2023.1059203/full#supplementary-material

References

1. GBD 2019 Dementia Forecasting Collaborators. Estimation of the global prevalence of dementia in 2019 and forecasted prevalence in 2050: an analysis for the Global Burden of Disease Study 2019. Lancet Public Health. (2022) 7:e105–25. doi: 10.1016/S2468-2667(21)00249-8
2. Gineste Y, Marescotti R. Interest of the philosophy of humanitude in caring for patients with Alzheimer’s disease. Soins Gerontol. (2010) 85:26–7.
3. Gineste Y, Marescotti R, Pellissier J. Gineste–Marescotti method “Humanitude”, and pacification of pathological aggression behavior during nursing care [Short Survey]. Revue de Gériatrie. (2008) 33:2–4.
4. Melo R, Queirós P, Tanaka L, Salgueiro N, Alves R, Araújo J, et al. State-of-the-art in the implementation of the Humanitude care methodology in Portugal. Rev Enferm Ref. (2017) 4:53–62.
5. Sumioka H, Shiomi M, Honda M, Nakazawa A. Technical challenges for smooth interaction with seniors with dementia: lessons from humanitude™. Front Robot AI. (2021) 8:680906. doi: 10.3389/frobt.2021.650906
6. Ito M, Honda M. Improvement of behavioral and psychological symptoms of dementia (BPSD) by comprehensive standardized care methodology. Austin J Clin Neurol. (2015) 2:1073.
7. Kobayashi M, Honda M. The effect of a multimodal comprehensive care methodology for family caregivers of people with dementia. BMC Geriatr. (2021) 21:434. doi: 10.1186/s12877-021-02373-w
8. Giang T, Koh J, Cheng L, Tang Q, Chua M, Liew T, et al. Effects of Humanitude care on people with dementia and caregivers: a scoping review. J Clin Nurs. (2022). [Epub ahead of print]. doi: 10.1111/jocn.16477
9. Cuff B, Brown S, Taylor L, Howat D. Empathy: a review of the concept. Emotion Rev. (2016) 8:144–53.
10. Fukuyasu Y, Kataoka H, Honda M, Iwase T, Ogawa H, Sato M, et al. The effect of Humanitude care methodology on improving empathy: a six-year longitudinal study of medical students in Japan. BMC Med Educ. (2021) 21:316. doi: 10.1186/s12909-021-02773-x
11. Kobayashi M, Ito M, Iwasa Y, Motohashi Y, Edahiro A, Shirobe M, et al. The effect of multimodal comprehensive care methodology training on oral health care professionals’ empathy for patients with dementia. BMC Med Educ. (2021) 21:315. doi: 10.1186/s12909-021-02760-2
12. Sato W, Fujimura T, Kochiyama T, Suzuki N. Relationships among facial mimicry, emotional experience, and emotion recognition. PLoS One. (2013) 8:e57889. doi: 10.1371/journal.pone.0057889
13. Hatfield E, Cacioppo J, Rapson R. Emotional contagion. Cambridge: Cambridge University Press; (1994). p. 240.
14. Hoffman M. Empathy and moral development: Implications for caring and justice. Cambridge: Cambridge University Press; (2000). p. 331.
15. Holland A, O’Connell G, Dziobek I. Facial mimicry, empathy, and emotion recognition: a meta-analysis of correlations. Cogn Emot. (2021) 35:150–68. doi: 10.1080/02699931.2020.1815655
16. Gallese V. The roots of empathy: the shared manifold hypothesis and the neural basis of intersubjectivity. Psychopathology. (2003) 36:171–80. doi: 10.1159/000072786
17. Rizzolatti G, Craighero L. Mirror neuron: a neurological approach to empathy. In: Changeux J, Damasio A, Singer W, Christen Y. editors. Neurobiology of human values. Berlin: Springer-Verlag; (2005). p. 107–24.
18. Iacoboni M. Imitation, empathy, and mirror neurons. Annu Rev Psychol. (2009) 60:653–70.
19. Bekkali S, Youssef G, Donaldson P, Albein-Urios N, Hyde C, Enticott P. Is the putative mirror neuron system associated with empathy? A systematic review and meta-analysis. Neuropsychol Rev. (2021) 31:14–57. doi: 10.1007/s11065-020-09452-6
20. Hamilton A. Emulation and mimicry for social interaction: a theoretical approach to imitation in autism. Q J Exp Psychol. (2008) 61:101–15. doi: 10.1080/17470210701508798
21. Sato W, Kochiyama T, Yoshikawa S, Naito E, Matsumura M. Enhanced neural activity in response to dynamic facial expressions of emotion: an fMRI study. Brain Res Cogn Brain Res. (2004) 20:81–91.
22. Sato W, Toichi M, Uono S, Kochiyama T. Impaired social brain network for processing dynamic facial expressions in autism spectrum disorders. BMC Neurosci. (2012) 13:99. doi: 10.1186/1471-2202-13-99
23. Sato W, Kochiyama T, Uono S, Sawada R, Kubota Y, Yoshimura S, et al. Widespread and lateralized social brain activity for processing dynamic facial expressions. Hum Brain Mapp. (2019) 40:3753–68. doi: 10.1002/hbm.24629
24. Hsu CT, Sato W, Kochiyama T, Nakai R, Asano K, Abe N, et al. Enhanced mirror neuron network activity and effective connectivity during live interaction. Neuroimage. (2022) 263:119655. doi: 10.1016/j.neuroimage.2022.119655
25. Sato W, Yoshikawa S. Spontaneous facial mimicry in response to dynamic facial expressions. Cognition. (2007) 104:1–18.
26. Sato W, Fujimura T, Suzuki N. Enhanced facial EMG activity in response to dynamic facial expressions. Int J Psychophysiol. (2008) 70:70–4.
27. Nesdale D, Griffith J, Durkin K, Maass A. Empathy, group norms and children’s ethnic attitudes. J Appl Dev Psychol. (2005) 26:623–37.
28. van Bommel G, Thijs J, Miklikowska M. Parallel empathy and group attitudes in late childhood: the role of perceived peer group attitudes. J Soc Psychol. (2021) 161:337–50. doi: 10.1080/00224545.2020.1840326
29. Chi M. Conceptual change within and across ontological categories: examples from learning and discovery in science. In: Giere R. editor. Cognitive models of science: Minnesota studies in the philosophy of science. Minneapolis: University of Minnesota Press; (1992). p. 129–60.
30. Hoffman R. How can expertise be defined?: Implications of research from cognitive psychology. In: Williams R, Faulkner W, Fleck J. editors. Exploring expertise. New York, NY: Macmillan; (1998). p. 81–100.
31. Crawford J, Howell D. Comparing an individual’s test score against norms derived from small samples. Clin Neuropsychol. (1998) 12:482–6.
32. Crawford J, Garthwaite P. Methods of testing for a deficit in single-case studies: evaluation of statistical power by Monte Carlo simulation. Cogn Neuropsychol. (2006) 23:877–904. doi: 10.1080/02643290500538372
33. Lakens D. Sample size justification. Collabra Psychol. (2022) 8:33267.
34. Naito E, Hirose S. Efficient foot motor control by Neymar’s brain. Front Hum Neurosci. (2014) 8:594. doi: 10.3389/fnhum.2014.00594
35. Sato W, Hyniewska S, Minemoto K, Yoshikawa S. Facial expressions of basic emotions in Japanese laypeople. Front Psychol. (2019) 10:259. doi: 10.3389/fpsyg.2019.00259
36. Ekman P, Friesen W, Hager J. Facial action coding system: The manual. Palo Alto: Consulting Psychologist; (2002). p. 527.
37. Ekman P, Hager J, Irwin W, Rosenberg E. Facial action coding system affect information database (FACSAID). (1998). Available online at: http://face-and-emotion.com/dataface/general/homepage.jsp
38. Sato W, Yoshikawa S. The dynamic aspects of emotional facial expressions. Cogn Emot. (2004) 18:701–10.
39. Sato W, Yoshikawa S. Enhanced experience of emotional arousal in response to dynamic facial expressions. J Nonverbal Behav. (2007) 31:119–35.
40. Russell J, Weiss A, Mendelsohn G. Affect grid: a single-item scale of pleasure and arousal. J Pers Soc Psychol. (1989) 57:493–502.
41. Reisenzein R. Pleasure-arousal theory and the intensity of emotions. J Pers Soc Psychol. (1994) 67:525–39.
42. Dimberg U. Facial reactions to facial expressions. Psychophysiology. (1982) 19:643–7.
43. Fridlund A, Cacioppo J. Guidelines for human electromyographic research. Psychophysiology. (1986) 23:567–89.
44. Schumann N, Bongers K, Guntinas-Lichius O, Scholle H. Facial muscle activation patterns in healthy male humans: a multi-channel surface EMG study. J Neurosci Methods. (2010) 187:120–8. doi: 10.1016/j.jneumeth.2009.12.019
45. Van Boxtel A. Optimal signal bandwidth for the recording of surface EMG activity of facial, jaw, oral, and neck muscles. Psychophysiology. (2001) 38:22–34.
46. Crawford J, Garthwaite P. Single-case research in neuropsychology: a comparison of five forms of t-test for comparing a case to controls. Cortex. (2012) 48:1009–16. doi: 10.1016/j.cortex.2011.06.021
47. Crawford J, Garthwaite P, Porter S. Point and interval estimates of effect sizes for the case-controls design in neuropsychology: rationale, methods, implementations, and proposed reporting standards. Cogn Neuropsychol. (2010) 27:245–60. doi: 10.1080/02643294.2010.513967
48. Ashburner J, Friston K. Unified segmentation. Neuroimage. (2005) 26:839–51. doi: 10.1016/j.neuroimage.2005.02.018
49. Holmes A, Friston K. Generalisability, random effects and population inference. Neuroimage. (1998) 7:S754. doi: 10.1016/S1053-8119(18)31587-8
50. Friston K, Holmes A, Poline J, Grasby P, Williams S, Frackowiak R, et al. Analysis of fMRI time-series revisited. Neuroimage. (1995) 2:45–53. doi: 10.1006/nimg.1995.1007
51. Tzourio-Mazoyer N, De Schonen S, Crivello F, Reutter B, Aujard Y, Mazoyer B. Neural correlates of woman face processing by 2-month-old infants. Neuroimage. (2002) 15:454–61. doi: 10.1006/nimg.2001.0979
52. Rajapakse J, Giedd J, Rapoport J. Statistical approach to segmentation of single-channel cerebral MR images. IEEE Trans Med Imaging. (1997) 16:176–86.
53. Tohka J, Zijdenbos A, Evans A. Fast and robust parameter estimation for statistical partial volume models in brain MRI. Neuroimage. (2004) 23:84–97. doi: 10.1016/j.neuroimage.2004.05.007
54. Manjón J, Coupé P, Martí-Bonmatí L, Collins D, Robles M. Adaptive non-local means denoising of MR images with spatially varying noise levels. J Magn Reson Imaging. (2010) 31:192–203. doi: 10.1002/jmri.22003
55. Ashburner J. A fast diffeomorphic image registration algorithm. Neuroimage. (2007) 38:95–113. doi: 10.1016/j.neuroimage.2007.07.007
56. Ashburner J. VBM tutorial. London: University College London; (2010).
57. Lu H, Cole S, Howe C, Westreich D. Toward a clearer definition of selection bias when estimating causal effects. Epidemiology. (2022) 33:699–706. doi: 10.1097/EDE.0000000000001516
58. Takahashi A, Araujo L. Case study research: opening up research opportunities. RAUSP Manag J. (2020) 55:100–11.
59. Rosenbaum R, Gilboa A, Moscovitch M. Case studies continue to illuminate the cognitive neuroscience of memory. Ann N Y Acad Sci. (2014) 1316:105–33.
60. Zaytseva Y, Bao Y. Editorial: special collection on single case studies. Psych J. (2015) 4:177.
61. Nickels L, Fischer-Baum F, Best W. Single case studies are a powerful tool for developing, testing and extending theories. Nat Rev Psychol. (2022) 1:733–47.
62. Streese C, Tranel D. Combined lesion-deficit and fMRI approaches in single-case studies: unique contributions to cognitive neuroscience. Curr Opin Behav Sci. (2021) 40:58–63. doi: 10.1016/j.cobeha.2021.01.004
63. Ericsson K, Krampe R, Tesch-Romer C. The role of deliberate practice in the acquisition of expert performance. Psychol Rev. (1993) 100:363–406.
64. Ericsson K, Nandagopal K, Roring R. Toward a science of exceptional achievement: attaining superior performance through deliberate practice. Ann N Y Acad Sci. (2009) 1172:199–217. doi: 10.1196/annals.1393.001
65. Sadideen H, Alvand A, Saadeddin M, Kneebone R. Surgical experts: born or made? Int J Surg. (2013) 11:773–8.
66. Ericsson K. Acquisition and maintenance of medical expertise: a perspective from the expert-performance approach with deliberate practice. Acad Med. (2015) 2015:1471–86. doi: 10.1097/ACM.0000000000000939
67. van der Schalk J, Fischer A, Doosje B, Wigboldus D, Hawk S, Rotteveel M, et al. Convergent and divergent responses to emotional displays of ingroup and outgroup. Emotion. (2011) 11:286–98. doi: 10.1037/a0022582
68. Peng S, Kuang B, Hu P. Right temporoparietal junction modulates in-group bias in facial emotional mimicry: a tDCS study. Front Behav Neurosci. (2020) 14:143. doi: 10.3389/fnbeh.2020.00143
69. Peng S, Kuang B, Zhang L, Hu P. Right temporoparietal junction plays a role in the modulation of emotional mimicry by group membership. Front Hum Neurosci. (2021) 15:606292. doi: 10.3389/fnhum.2021.606292
70. Wiesenfeld A, Whitman P, Malatesta C. Individual differences among adult women in sensitivity to infants: evidence in support of an empathy concept. J Pers Soc Psychol. (1984) 46:118–24. doi: 10.1037//0022-3514.46.1.118
71. Westbury H, Neumann D. Empathy-related responses to moving film stimuli depicting human and non-human animal targets in negative circumstances. Biol Psychol. (2008) 78:66–74. doi: 10.1016/j.biopsycho.2007.12.009
72. Harrison N, Morgan R, Critchley H. From facial mimicry to emotional empathy: a role for norepinephrine? Soc Neurosci. (2010) 5:393–400. doi: 10.1080/17470911003656330
73. Dimberg U, Thunberg M. Empathy, emotional contagion, and rapid facial reactions to angry and happy facial expressions. Psych J. (2012) 1:118–27. doi: 10.1002/pchj.4
74. Van der Graaff J, Meeus W, de Wied M, van Boxtel A, van Lier P, Koot H, et al. Motor, affective and cognitive empathy in adolescence: interrelations between facial electromyography and self-reported trait and state measures. Cogn Emot. (2016) 30:745–61. doi: 10.1080/02699931.2015.1027665
75. Hsu C, Sato W, Yoshikawa S. An investigation of the modulatory effects of empathic and autistic traits on emotional and facial motor responses during live social interactions. PsyArXiv [Preprint]. (2022). doi: 10.31234/osf.io/aqnrg
76. Sonnby-Borgström M. Automatic mimicry reactions as related to differences in emotional empathy. Scand J Psychol. (2002) 43:433–43.
77. Sonnby-Borgström M, Jönsson P, Svensson O. Emotional empathy as related to mimicry reactions at different levels of information processing. J Nonverbal Behav. (2003) 27:3–23.
78. Dimberg U, Andréasson P, Thunberg M. Emotional empathy and facial reactions to facial expressions. J Psychophysiol. (2011) 25:26–31.
79. Rymarczyk K, Żurawski Ł, Jankowiak-Siuda K, Szatkowska I. Emotional empathy and facial mimicry for static and dynamic facial expressions of fear and disgust. Front Psychol. (2016) 7:1853. doi: 10.3389/fpsyg.2016.01853
80. Rymarczyk K, Żurawski Ł, Jankowiak-Siuda K, Szatkowska I. Empathy in facial mimicry of fear and disgust: simultaneous EMG-fMRI recordings during observation of static and dynamic facial expressions. Front Psychol. (2019) 10:701. doi: 10.3389/fpsyg.2019.00701
81. Likowski K, Muhlberger A, Gerdes A, Wieser M, Pauli P, Weyers P. Facial mimicry and the mirror neuron system: simultaneous acquisition of facial electromyography and functional magnetic resonance imaging. Front Hum Neurosci. (2012) 6:214. doi: 10.3389/fnhum.2012.00214
82. Gazzola V, Aziz-Zadeh L, Keysers C. Empathy and the somatotopic auditory mirror system in humans. Curr Biol. (2006) 16:1824–9. doi: 10.1016/j.cub.2006.07.072
83. Kaplan J, Iacoboni M. Getting a grip on other minds: mirror neurons, intention understanding, and cognitive empathy. Soc Neurosci. (2006) 1:175–83. doi: 10.1080/17470910600985605
84. Jabbi M, Swart M, Keysers C. Empathy for positive and negative emotions in the gustatory cortex. Neuroimage. (2007) 34:1744–53.
85. Schulte-Rüther M, Markowitsch H, Fink G, Piefke M. Mirror neuron and theory of mind mechanisms involved in face-to-face interactions: a functional magnetic resonance imaging approach to empathy. J Cogn Neurosci. (2007) 19:1354–72. doi: 10.1162/jocn.2007.19.8.1354
86. Horan W, Iacoboni M, Cross K, Korb A, Lee J, Nori P, et al. Self-reported empathy and neural activity during action imitation and observation in schizophrenia. Neuroimage Clin. (2014) 5:100–8. doi: 10.1016/j.nicl.2014.06.006
87. Calvo-Merino B, Glaser D, Grèzes J, Passingham R, Haggard P. Action observation and acquired motor skills: an FMRI study with expert dancers. Cereb Cortex. (2005) 15:1243–9. doi: 10.1093/cercor/bhi007
88. Haslinger B, Erhard P, Altenmüller E, Schroeder U, Boecker H, Ceballos-Baumann A. Transmodal sensorimotor networks during action observation in professional pianists. J Cogn Neurosci. (2005) 17:282–93. doi: 10.1162/0898929053124893
89. Calvo-Merino B, Grèzes J, Glaser D, Passingham R, Haggard P. Seeing or doing? Influence of visual and motor familiarity in action observation. Curr Biol. (2006) 16:1905–10.
90. Pilgramm S, Lorey B, Stark R, Munzert J, Vaitl D, Zentgraf K. Differential activation of the lateral premotor cortex during action observation. BMC Neurosci. (2010) 11:89. doi: 10.1186/1471-2202-11-89
91. Cross E, Hamilton A, Grafton S. Building a motor simulation de novo: observation of dance by dancers. Neuroimage. (2006) 31:1257–67. doi: 10.1016/j.neuroimage.2006.01.033
92. Kirsch L, Cross E. Additive routes to action learning: layering experience shapes engagement of the action observation network. Cereb Cortex. (2015) 25:4799–811. doi: 10.1093/cercor/bhv167
  • 93.Uono S, Sato W, Kochiyama T, Sawada R, Kubota Y, Yoshimura S, et al. Neural substrates of the ability to recognize facial expressions: a voxel-based morphometry study. Soc Cogn Affect Neurosci. (2017) 12:487–95. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 94.Cheng Y, Chou K, Decety J, Chen I, Hung D, Tzeng O, et al. Sex differences in the neuroanatomy of human mirror-neuron system: a voxel-based morphometric investigation. Neuroscience. (2009) 158:713–20. 10.1016/j.neuroscience.2008.10.026 [DOI] [PubMed] [Google Scholar]
  • 95.Sassa Y, Taki Y, Takeuchi H, Hashizume H, Asano M, Asano K, et al. The correlation between brain gray matter volume and empathizing and systemizing quotients in healthy children. Neuroimage. (2012) 60:2035–41. 10.1016/j.neuroimage.2012.02.021 [DOI] [PubMed] [Google Scholar]
  • 96.Sluming V, Barrick T, Howard M, Cezayirli E, Mayes A, Roberts N. Voxel-based morphometry reveals increased gray matter density in Broca’s area in male symphony orchestra musicians. Neuroimage. (2002) 17:1613–22. 10.1006/nimg.2002.1288 [DOI] [PubMed] [Google Scholar]
  • 97.Gaser C, Schlaug G. Brain structures differ between musicians and non-musicians. J Neurosci. (2003) 23:9240–5. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 98.Bermudez P, Lerch J, Evans A, Zatorre R. Neuroanatomical correlates of musicianship as revealed by cortical thickness and voxel-based morphometry. Cereb Cortex. (2009) 19:1583–96. 10.1093/cercor/bhn196 [DOI] [PubMed] [Google Scholar]
  • 99.Abdul-Kareem I, Stancak A, Parkes L, Sluming V. Increased gray matter volume of left pars opercularis in male orchestral musicians correlate positively with years of musical performance. J Magn Reson Imaging. (2011) 33:24–32. [DOI] [PubMed] [Google Scholar]
  • 100.Sato K, Kirino E, Tanaka S. A voxel-based morphometry study of the brain of university students majoring in music and nonmusic disciplines. Behav Neurol. (2015) 2015:274919. 10.1155/2015/274919 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 101.James C, Oechslin M, Van De Ville D, Hauert C, Descloux C, Lazeyras F. Musical training intensity yields opposite effects on grey matter density in cognitive versus sensorimotor networks. Brain Struct Funct. (2014) 219:353–66. 10.1007/s00429-013-0504-z [DOI] [PubMed] [Google Scholar]
  • 102.Chaddock-Heyman L, Loui P, Weng T, Weisshappel R, McAuley E, Kramer A. Musical training and brain volume in older adults. Brain Sci. (2021) 11:50. 10.3390/brainsci11010050 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 103.Dimberg U, Thunberg M. Rapid facial reactions to emotional facial expressions. Scand J Psychol. (1998) 39:39–45. 10.1111/1467-9450.00054 [DOI] [PubMed] [Google Scholar]
  • 104.Dimberg U, Thunberg M, Elmehed K. Unconscious facial reactions to emotional facial expressions. Psychol Sci. (2000) 11:86–9. 10.1111/1467-9280.00221 [DOI] [PubMed] [Google Scholar]
  • 105.Dimberg U, Thunberg M, Grunedal S. Facial reactions to emotional stimuli: automatically controlled emotional responses. Cogn Emot. (2002) 16:449–71. 10.1080/02699930143000356 [DOI] [Google Scholar]
  • 106.Hojat M, Louis D, Markham F, Wender R, Rabinowitz C, Gonnella J. Physicians’ empathy and clinical outcomes for diabetic patients. Acad Med. (2011) 86:359–64. [DOI] [PubMed] [Google Scholar]
  • 107.Derksen F, Bensing J, Lagro-Janssen A. Effectiveness of empathy in general practice: a systematic review. Br J Gen Pract. (2013) 63:e76–84. 10.3399/bjgp13X660814 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 108.Nembhard I, David G, Ezzeddine I, Betts D, Radin J. A systematic review of research on empathy in health care. Health Serv Res. (2023) 8:250–63. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 109.Güney Z, Sattel H, Cardone D, Merla A. Assessing embodied interpersonal emotion regulation in somatic symptom disorders: a case study. Front Psychol. (2015) 6:68. 10.3389/fpsyg.2015.00068 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 110.Gifford A, Marmelat V, Beadle JN. A narrative review examining the utility of interpersonal synchrony for the caregiver-care recipient relationship in alzheimer’s disease and related dementias. Front Psychol. (2021) 12:595816. 10.3389/fpsyg.2021.595816 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 111.Ward R, Vass A, Aggarwal N, Garfield C, Cybyk B. A different story: exploring patterns of communication in residential dementia care. Ageing Soc. (2008) 28:629–51. [Google Scholar]
  • 112.Rozga A, King T, Vuduc R, Robins D. Undifferentiated facial electromyography responses to dynamic, audio-visual emotion displays in individuals with autism spectrum disorders. Dev Sci. (2013) 16:499–514. 10.1111/desc.12062 [DOI] [PubMed] [Google Scholar]
  • 113.Yoshimura S, Sato W, Uono S, Toichi M. Impaired overt facial mimicry in response to dynamic facial expressions in high-functioning autism spectrum disorders. J Autism Dev Disord. (2015) 45:1318–28. 10.1007/s10803-014-2291-7 [DOI] [PubMed] [Google Scholar]
  • 114.Baron-Cohen S, Ring H, Wheelwright S, Bullmore E, Brammer M, Simmons A, et al. Social intelligence in the normal and autistic brain: an fMRI study. Eur J Neurosci. (1999) 11:1891–8. 10.1046/j.1460-9568.1999.00621.x [DOI] [PubMed] [Google Scholar]
  • 115.Hall G, Szechtman H, Nahmias C. Enhanced salience and emotion recognition in Autism: a PET study. Am J Psychiatry. (2003) 160:1439–41. 10.1176/appi.ajp.160.8.1439 [DOI] [PubMed] [Google Scholar]
  • 116.Dapretto M, Davies M, Pfeifer J, Scott A, Sigman M, Bookheimer S, et al. Understanding emotions in others: mirror neuron dysfunction in children with autism spectrum disorders. Nat Neurosci. (2006) 9:28–30. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 117.Hadjikhani N, Joseph R, Snyder J, Tager-Flusberg H. Anatomical differences in the mirror neuron system and social cognition network in autism. Cereb Cortex. (2006) 16:1276–82. [DOI] [PubMed] [Google Scholar]
  • 118.Yamasaki S, Yamasue H, Abe O, Suga M, Yamada H, Inoue H, et al. Reduced gray matter volume of pars opercularis is associated with impaired social communication in high-functioning autism spectrum disorders. Biol Psychiatry. (2010) 68:1141–7. 10.1016/j.biopsych.2010.07.012 [DOI] [PubMed] [Google Scholar]
  • 119.Mueller S, Keeser D, Samson A, Kirsch V, Blautzik J, Grothe M, et al. Convergent findings of altered functional and structural brain connectivity in individuals with high functioning autism: a multimodal MRI study. PLoS One. (2013) 8:e67329. 10.1371/journal.pone.0067329 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 120.Sato W, Kochiyama T, Uono S, Yoshimura S, Kubota Y, Sawada R, et al. Reduced gray matter volume in the social brain network in adults with autism spectrum disorder. Front Hum Neurosci. (2017) 11:395. 10.3389/fnhum.2017.00395 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 121.Sato W, Uono S. The atypical social brain network in autism: advances in structural and functional MRI studies. Curr Opin Neurol. (2019) 32:617–21. [DOI] [PubMed] [Google Scholar]
  • 122.Cappadocia M, Weiss J. Review of social skills training groups for youth with asperger syndrome and high functioning autism. Res Autism Spectr Disord. (2011) 5:70–8. [Google Scholar]

Data Availability Statement

The data supporting the conclusions of this article will be made available by the authors, excluding any identifiable data.

