Abstract
Facial expressions and voice modulations are among the most important communicative signals for conveying emotional information. The ability to correctly interpret this information is highly relevant for successful social interaction and represents an integral component of the emotional competencies that have been conceptualized under the term emotional intelligence. Here, we investigated the relationship of emotional intelligence, as measured with the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT), with cerebral voice and face processing using functional and structural magnetic resonance imaging. MSCEIT scores were positively correlated with voice-sensitivity and gray matter volume of the insula, accompanied by enhanced voice-sensitive connectivity between the insula and the temporal voice area, indicating a generally increased salience of voices. Conversely, in the face processing system, higher MSCEIT scores were associated with decreased face-sensitivity and gray matter volume of the fusiform face area. Taken together, these findings point to an alteration in the balance of the cerebral voice and face processing systems, in the form of an attenuated face-vs-voice bias, as one potential factor underpinning emotional intelligence.
Keywords: functional magnetic resonance imaging, emotional intelligence, fusiform face area, temporal voice area, voxel-based morphometry
Introduction
Emotions represent a major determinant of human behavior. In everyday life, they are in large part communicated through signals from voice and face. In recent years, specialized brain regions and networks underlying the cerebral processing of human voices and faces have been identified. The temporal voice area (TVA; e.g. Belin et al., 2000; von Kriegstein et al., 2006; Ethofer et al., 2009; Pernet et al., 2015) and its counterpart, the fusiform face area (FFA; e.g. Kanwisher et al., 1997; Posamentier and Abdi, 2003; Kanwisher and Yovel, 2006), are among the regions most consistently considered key functional modules in voice and face processing, respectively. Additionally, limbic brain regions have been shown to exhibit preferential responses to voices and faces even outside the context of emotion processing, with a spatial overlap of voice- and face-sensitivity in the amygdala (Mende-Siedlecki et al., 2013; Pernet et al., 2015). As none of these regions responds exclusively to voices or faces, we use the terms voice-sensitive and face-sensitive to denote these cue-dependent preferences.
The ability to correctly interpret emotional information from voice and face is an integral component of emotional competence. The model of emotional intelligence (EI) proposed by Mayer and Salovey conceptualizes such competences as an ability for the ‘accurate appraisal and expression of emotion in oneself and in others, the effective regulation of emotion in self and others, and the use of feelings to motivate, plan and achieve in one’s life’ (Salovey and Mayer, 1990). According to this model, EI encompasses an experiential component (i.e. the perception and use of emotional states) and a strategic component (i.e. the understanding and management of emotions). These domains are represented in the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT). It should be noted, however, that the appropriateness of the term ‘intelligence’ for the set of competences measured with the MSCEIT has been questioned (for example, because it does not represent a pure maximum-performance parameter; Petrides, 2011). Despite this ongoing debate about the construct of EI, this set of competences has been shown to be relevant for successful social interaction. MSCEIT scores have been shown to correlate positively with psychological well-being (Lanciano and Curci, 2015), social competence (Brackett et al., 2006), quality of social interaction (Lopes et al., 2004), perceived social support (Fabio, 2015) and academic success (Chew et al., 2013; Lanciano and Curci, 2014), and negatively with loneliness (Wols et al., 2015). Finally, in incarcerated men, lower MSCEIT scores were associated with higher scores for psychopathy (Ermer et al., 2012).
Furthermore, there is evidence not only for a link between the emotional competences encompassed by EI and effective behavioral processing of non-verbal emotional signals (Dodonova and Dodonov, 2012; Kniazev et al., 2013; Wojciechowski et al., 2014) but also for parallel associations between EI and cerebral activation during the processing of non-verbal emotional signals (Killgore and Yurgelun-Todd, 2007; Kreifelts et al., 2009; Killgore et al., 2013; Kniazev et al., 2013; Raz et al., 2014; Alkozei and Killgore, 2015; Quarto et al., 2016) as well as during resting state (Takeuchi et al., 2013; Pan et al., 2014). At the structural level, EI and gray matter volume were found to be positively correlated in the insula and prefrontal areas (Killgore et al., 2012; Tan et al., 2014).
However, it remains an open question whether neurobiological correlates of the emotional competences encompassed in the concept of EI can also be identified at the more basic level of cerebral face and voice processing. As faces and voices constitute the most prevalent means of expressing emotionally relevant information in human social communication, it can be assumed that a high degree of emotional competence is linked to the sensitivity for vocal and facial cues within cerebral face and voice processing areas.
Thus, the present study aimed to clarify, in a cohort of 85 healthy individuals, whether and how EI is reflected in the neural responses and structure of the canonical voice and face perception networks. Based on the assumption of a link between EI and the sensitivity to vocal and facial cues, we hypothesized a linear association of EI with voice- and face-sensitivity within the respective voice- and face-sensitive brain regions and the amygdala as a central emotion-processing structure. We also investigated potential differential contributions of experiential and strategic EI to cerebral voice- and face-sensitivity, under the hypothesis of a stronger association of experiential than strategic EI with cerebral voice- and face-sensitivity. Furthermore, we tested whether, beyond cerebral responses, EI also modulates the voice- and/or face-sensitive functional connectivity of brain regions whose voice- and/or face-sensitivity is associated with EI. Here, we assumed that such EI-dependent modulations would involve the TVA for voice-sensitive modulations of connectivity and the FFA for face-sensitive modulations, respectively. Finally, we investigated whether associations between EI and cerebral voice- and face-sensitivity are also reflected at the structural level.
Materials and methods
Participants
Eighty-five healthy individuals (mean age 25.5 years, s.d. = 3.1 years, 43 female) participated at the Universities of Tübingen and Greifswald. All participants were native German speakers and right-handed, as assessed with the Edinburgh Inventory (Oldfield, 1971). None of the individuals had a history of neurological or psychiatric illness, substance abuse or impaired hearing. Vision was normal or corrected to normal. None of the individuals was taking regular medication. The study was performed according to the Code of Ethics of the World Medical Association (Declaration of Helsinki), and the protocol of human investigation was approved by the local ethics committee where the study was performed. All individuals gave their written informed consent prior to their participation in the study.
Mayer-Salovey-Caruso-Emotional-Intelligence-Test
Following the MR-scanning procedure, all participants completed the German version of the MSCEIT (Steinmayr et al., 2011). The MSCEIT is a performance measure of the emotional competences termed EI that assesses how well people solve emotion-laden problems across several domains: perceiving, using, understanding and managing emotions. Perception and use of emotions are subsumed as experiential EI; understanding and management of emotions as strategic EI. Additionally, an overall MSCEIT score is calculated. MSCEIT scoring was based on consensus rating (normative sample of over 5000 heterogeneous individuals; Mayer et al., 2003).
Stimuli and experimental design
Two fMRI experiments were performed to localize face-sensitive (Kanwisher et al., 1997) and voice-sensitive (Belin et al., 2000) brain areas.
The face localizer experiment used a block design with pictures from four different categories (faces, houses, objects and natural scenes). All stimuli were black-and-white photographs and unknown to the participants. The face stimuli showed primarily neutral facial expressions, with some variation from serious/somewhat angry to friendly/smiling. The house stimuli depicted different types of multilevel buildings (e.g. brick, wooden, concrete). The object stimuli comprised common household objects and items of clothing, whereas the natural scenes included pictures of different types of panoramas (e.g. mountains, coast, river). Each block and category contained 20 stimuli. Within blocks, the stimuli were presented in random order for 300 ms each, interleaved with 500 ms of fixation [1 block = 20 stimuli × (300 ms picture + 500 ms fixation) = 16 s]. Eight blocks of each category, pseudorandomized within the experiment, were shown, separated by short ∼1.5 s rest periods. To ensure constant attention, a one-back task was employed in which the participants had to press a button on a fiber optic system (LumiTouch, Photon Control, Burnaby, Canada) with their right index finger whenever a picture was directly repeated. Positions of repeated stimuli were randomized within blocks with the restriction that one occurred during the first half of the block and one during the second half.
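As a worked example of this timing scheme, the following sketch computes the picture onsets within a single block from the parameters given above (Python is used for illustration; the helper function and its defaults are ours, not part of the original presentation software):

```python
import math

def block_onsets(block_start_s, n_stimuli=20, picture_s=0.3, fixation_s=0.5):
    """Onsets (in seconds) of the pictures within one localizer block."""
    trial_s = picture_s + fixation_s               # 0.8 s per stimulus
    return [block_start_s + i * trial_s for i in range(n_stimuli)]

onsets = block_onsets(0.0)
# 20 stimuli x 0.8 s = 16 s, matching the block duration stated above
assert math.isclose(onsets[-1] + 0.3 + 0.5, 16.0)
```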
The voice localizer experiment was adapted from the seminal study by Belin et al. (2000) and consisted of a passive-listening block design experiment with 24 stimulation blocks and 12 silent periods (each 8 s). Participants were instructed to listen attentively with their eyes closed. The stimuli included 12 blocks of human vocal sounds (speech, sighs, laughs, cries), 6 blocks with animal sounds (e.g. various cries, gallops) and 6 blocks with environmental sounds (e.g. doors, telephones, cars, planes). Stimuli were normalized with respect to mean acoustic energy. The blocks were separated by 2 s of silence. Sound and silence blocks were randomized across the experiment with the restriction that a block of silence was always followed by at least one sound block.
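Equalization of mean acoustic energy is typically implemented as root-mean-square (RMS) normalization. The exact procedure used for the original stimuli is not specified, so the following is only a minimal sketch of the general technique:

```python
import numpy as np

def normalize_rms(signal: np.ndarray, target_rms: float = 0.1) -> np.ndarray:
    """Scale an audio signal so that its RMS energy equals target_rms.
    Clipping of extreme sample values is not handled in this sketch."""
    rms = np.sqrt(np.mean(signal ** 2))
    return signal * (target_rms / rms)
```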
Image acquisition
MRI was performed using a TRIO or VERIO 3 T whole body scanner (Siemens, Erlangen, Germany). At the TRIO, structural T1-weighted images (176 slices, TR = 2300 ms, TE = 2.96 ms, voxel size: 1×1×1 mm3) and functional images (30 axial slices acquired in sequential descending order, slice thickness 3 mm + 1 mm gap, TR = 1.7 s, TE = 30 ms, voxel size: 3×3×4 mm3, field of view 192×192 mm2, 64×64 matrix, flip angle 90°) were acquired. The time series consisted of 336 images for the face localizer and 231 images for the voice localizer. For correction of image distortions, a field map [36 slices, slice thickness 3 mm, TR = 400 ms, TE(1) = 5.19 ms, TE(2) = 7.65 ms] was acquired. At the VERIO, structural T1-weighted images (176 slices, TR = 1900 ms, TE = 2.52 ms, voxel size: 1×1×1 mm3) and functional images (34 axial slices acquired in sequential descending order, slice thickness 3 mm + 1 mm gap, TR = 2.0 s, TE = 30 ms, voxel size: 3×3×4 mm3, field of view 192×192 mm2, 64×64 matrix, flip angle 90°) were acquired. Time series consisted of 303 images for the face localizer and 195 images for the voice localizer. A field map with 34 slices, TR = 488 ms, TE(1) = 4.92 ms, TE(2) = 7.38 ms was acquired.
Analysis of fMRI data
Data were analyzed with statistical parametric mapping software (SPM8, Wellcome Department of Imaging Neuroscience, London, http://www.fil.ion.ucl.ac.uk/spm/). Pre-processing comprised the removal of the first five EPI images from each run to exclude measurements preceding T1 equilibrium, realignment, unwarping on the basis of a static field map, normalization into MNI space (Montreal Neurological Institute; Collins et al., 1994; resampled voxel size: 3 × 3 × 3 mm³) and spatial smoothing using a Gaussian filter with 8 mm full width at half maximum (FWHM). For the voice localizer experiment, three regressors were defined [vocal sounds (V), animal sounds (A) and environmental sounds (E)] using a box car function convolved with the hemodynamic response function (HRF) corresponding to the duration of the respective blocks of stimuli. In a similar fashion, four regressors [faces (F), houses (H), objects (O) and scenes (S)] were defined for the face localizer experiment. To remove low frequency components, a high-pass filter with a cutoff frequency of 1/128 Hz was employed. The error term was modeled as a first-order autoregressive process with a coefficient of 0.2 and a white noise component to account for serial autocorrelations (Friston et al., 2002). The six motion parameters (i.e. translation and rotation on the x-, y- and z-axes) estimated during realignment were included in the single-subject models as covariates to further reduce motion-related error variance. Voice-sensitivity was defined by the contrast V > (A, E) and face-sensitivity by the contrast F > (H, O, S). The individual contrast images were calculated and statistically evaluated at the group level in a random-effects analysis using one-sample t-tests to define the face-sensitive fusiform face area (FFA) and the voice-sensitive temporal voice area (TVA) as functional regions of interest (ROI) for subsequent analyses. Statistical significance of activations was assessed at P < 0.001, uncorrected, at voxel level with FWE correction for multiple comparisons at cluster level (P < 0.05). For the definition of the FFA, the fusiform gyrus served as a priori anatomical ROI; for the definition of the TVA, the temporal gyri and the temporal pole. For the definition of the functional ROIs (i.e. FFA and TVA), FWE cluster-level correction was performed across these a priori anatomical ROIs using small volume correction (SVC; Worsley et al., 1996) (Table 1). Additionally, the amygdala served as an anatomically defined a priori ROI; the Automated Anatomical Labeling (AAL) toolbox implemented in SPM (Tzourio-Mazoyer et al., 2002) was used for its definition in MNI space.
Table 1.
Functional regions of interest (ROI): voice-sensitive area in the temporal lobe (temporal voice area, TVA) and face-sensitive area in the fusiform gyrus (fusiform face area, FFA) in 85 healthy individuals
| Voice- and face-sensitive areas | Peak MNI coordinate (x y z) | t peak voxel (df = 81) | Cluster size (mm³) | P value |
|---|---|---|---|---|
| Right TVA | 60 −12 −3 | 20.2 | 32 670 | <0.001 |
|  | 57 −21 0 | 17.9 |  |  |
| Left TVA | −57 −18 −3 | 19.2 | 32 400 | <0.001 |
|  | −60 −30 3 | 16.6 |  |  |
| Right FFA | 42 −48 −21 | 15.0 | 2214 | 0.002 |
| Left FFA | −42 −54 −21 | 9.6 | 1701 | 0.005 |
Notes: For the definition of the FFA the fusiform gyrus was defined as a priori anatomical ROI, and the temporal gyri and the temporal pole for the definition of the TVA. Results are based on a threshold of P < 0.001, uncorrected at voxel level, and FWE correction for multiple comparisons at cluster level with P < 0.05 across these a priori ROIs. Voxel size was 3 × 3 × 3 mm³. df, degrees of freedom. P values given in the table are FWE corrected.
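For illustration, a first-level model equivalent to the one described above can be sketched as follows. The study used SPM8; here nilearn is substituted, and the file names, block onsets and confound table are placeholders:

```python
import numpy as np
import pandas as pd
from nilearn.glm.first_level import FirstLevelModel

# Placeholder event table for the voice localizer: 8-s blocks of vocal,
# animal and environmental sounds (onsets to be filled from the actual design)
events = pd.DataFrame({
    "onset":      [0.0, 10.0, 20.0],
    "duration":   [8.0, 8.0, 8.0],
    "trial_type": ["vocal", "animal", "environmental"],
})
motion = pd.read_csv("motion_params.txt", sep=r"\s+", header=None)
motion.columns = [f"motion_{i}" for i in range(6)]   # 6 realignment parameters

model = FirstLevelModel(
    t_r=1.7,               # TRIO protocol; 2.0 s for the VERIO
    hrf_model="spm",       # canonical HRF, as in SPM
    high_pass=1.0 / 128,   # high-pass cutoff used in the study
    smoothing_fwhm=8,      # 8-mm Gaussian kernel
    noise_model="ar1",     # autoregressive error model
).fit("voice_localizer_bold.nii.gz", events=events, confounds=motion)

# Voice-sensitivity: V > (A, E), i.e. vocal vs the mean of the control categories
design = model.design_matrices_[0]
weights = {"vocal": 1.0, "animal": -0.5, "environmental": -0.5}
contrast = np.array([weights.get(c, 0.0) for c in design.columns])
voice_map = model.compute_contrast(contrast, output_type="effect_size")
```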
The associations between MSCEIT scores and individual cerebral face- and voice-sensitivity were investigated using linear regression analyses. We first calculated hypothesis-based ROI analyses centered on the FFA, TVA and amygdala [P < 0.001 at voxel level with FWE correction (P < 0.05) for multiple comparisons across the respective ROI volume]. In view of the lack of previous studies on the association of emotional competences or intelligence with cerebral voice- and face-sensitivity, the ROI analyses were complemented with an exploratory whole-brain analysis [P < 0.001 at voxel level with FWE correction (P < 0.05) for multiple comparisons at cluster level]. In clusters with a significant association of MSCEIT scores and cerebral face- and/or voice-sensitivity, mean contrast estimates were extracted, and the regression coefficients obtained for the association with experiential and strategic EI were tested for differences using the matrix approach described by A. Paul Beaulne (http://www.spsstools.net/Syntax/RegressionRepeatedMeasure/CompareRegressionCoefficients.txt). Effect sizes for such differential associations with experiential and strategic EI are given as Cohen’s d. Additionally, validation analyses were performed in which face- and voice-sensitivity were defined as minimum difference contrasts [i.e. F−max(H, O, S) and V−max(A, E)] to ensure that observed associations with MSCEIT scores unequivocally originated from specific modulations of cerebral responses to faces or voices, respectively. Finally, another validation analysis compared the regression coefficients obtained for the association of MSCEIT scores with the responses to voices, or faces, respectively, with those obtained for the associations of MSCEIT scores with each of the other stimulus classes within the respective localizer experiment. Age, gender and the MRI scanner in which the experiments were performed were included as covariates in all group analyses. As it has been repeatedly demonstrated that the TVA is not uniform but contains several distinct peaks of voice-sensitivity with presumably distinct functional profiles, all significant functional effects observed in the TVA in the present study were spatially referenced by their Euclidean distance in MNI space to the TVA voice-sensitivity peaks in the present study (Table 1 and Supplementary Table S1) and to the distinct voice-sensitivity peaks/clusters observed in the seminal study by Belin et al. (2000) as well as the recent large-scale study by Pernet et al. (2015), which included 218 individuals.
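The core of these group analyses is an ordinary least-squares regression of the extracted contrast estimates on MSCEIT scores with age, gender and scanner as covariates. A minimal sketch (statsmodels used for illustration; the file and column names are assumptions, and Beaulne's SPSS matrix procedure for comparing regression coefficients is not reproduced here):

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per participant: mean contrast estimate from a significant cluster
# plus MSCEIT score and the three covariates (hypothetical file layout)
df = pd.read_csv("roi_estimates.csv")
fit = smf.ols("voice_sensitivity ~ msceit + age + gender + scanner", data=df).fit()
print(fit.summary())   # the 'msceit' slope tests the linear association
```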
To minimize the risk of missing significant associations between cerebral face-sensitivity and MSCEIT scores occurring in other major face processing areas [i.e. the posterior superior temporal sulcus (pSTS) and the occipital face area (OFA)], complementary ROI analyses focused on these regions were performed. For the definition of the face-sensitive areas in the pSTS, the superior and middle temporal gyri and the angular gyrus were selected as the a priori anatomical ROI, and for the definition of the OFA, the inferior occipital gyrus was selected as the a priori anatomical ROI. Parallel to the definition of TVA and FFA, the statistical significance of face-sensitive activations in these anatomical a priori ROIs was assessed with a voxel-wise threshold of P < 0.001 and FWE correction for multiple comparisons at cluster level with P < 0.05 across the anatomical ROIs using SVC (Supplementary Table S2).
Psychophysiological interaction analyses (PPI; Friston et al., 1997) were performed to assess the relationship of EI and voice-/face-sensitive modulations of functional connectivity (FC). Areas with significant associations between EI and voice-/face-sensitivity were selected as seed regions for the PPI analyses. In these analyses, the time course of the BOLD response, based on a sphere with a radius of 3 mm around the peak-activation voxel within the respective seed region of the contrast of interest [e.g. V−(A, E)], was extracted in each individual participant and defined as the physiological variable. The psychophysiological interaction was calculated as the product of the deconvolved activation time course (Gitelman et al., 2003) and the vector of the psychological variable [i.e. the voice- or face-sensitivity defining contrasts V−(A, E) and F−(H, O, S), respectively]. The relationships between EI and individual face- and voice-sensitive FC modulations (i.e. PPI estimates) were, again, investigated using linear regression analyses. The sequence of group-level analyses paralleled the analyses performed for the cerebral activation patterns described earlier. Again, differential associations of experiential and strategic EI with voice-/face-sensitive connectivity patterns were tested post hoc by comparing the respective regression slopes using Beaulne’s matrix procedure.
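Conceptually, the PPI regressor is the product of the seed time course and the task contrast vector. The sketch below illustrates this construction while omitting the hemodynamic deconvolution step (Gitelman et al., 2003) that the actual analysis included; the seed coordinate is the insula/IFG peak from Table 2, and the file name is a placeholder:

```python
import numpy as np
from nilearn.maskers import NiftiSpheresMasker

# Physiological variable: BOLD time course from a 3-mm sphere around the seed peak
masker = NiftiSpheresMasker(seeds=[(-36, 15, 6)], radius=3)
seed_ts = masker.fit_transform("voice_localizer_bold.nii.gz").ravel()

# Psychological variable: +1 during vocal blocks, -0.5 during animal and
# environmental blocks, sampled at scan resolution (placeholder vector here)
psy = np.zeros_like(seed_ts)   # fill from the block timing in practice

# Interaction term, mean-centered seed x task; entered into the GLM alongside
# the main effects of the seed time course and the task regressors
ppi = (seed_ts - seed_ts.mean()) * psy
```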
Voxel-based morphometry
The VBM8 toolbox (http://dbm.neuro.uni-jena.de/vbm.html) implemented in SPM8 was used for the pre-processing of the T1-weighted structural images, applying the default settings: the images were segmented into gray matter, white matter and cerebrospinal fluid, DARTEL-normalized to MNI space (resampled voxel size: 1.5 × 1.5 × 1.5 mm³) and modulated with the non-linear components, enabling comparison of the absolute amount of tissue corrected for individual brain size. The gray matter segments were smoothed with a Gaussian kernel (8 mm FWHM). The association between MSCEIT scores and gray matter volume was then analyzed as described for the functional images. Primarily, analyses focused on regions in which voice- and/or face-sensitivity was associated with EI; in these regions, mean gray matter volumes were extracted and analyzed. Additionally, the regression on MSCEIT scores was performed in an a priori ROI comprising the insula, the orbitofrontal and the anterior mediofrontal cortex (Killgore et al., 2012), as defined using the AAL toolbox, with a voxel-wise threshold of P < 0.001, uncorrected, and FWE correction (P < 0.05) at cluster level for multiple comparisons within the ROI.
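At the group level, this amounts to a voxel-wise regression of the smoothed gray matter segments on MSCEIT scores. A sketch of that step (nilearn substituted for SPM8/VBM8; file names and the covariate table are placeholders):

```python
import pandas as pd
from nilearn.glm.second_level import SecondLevelModel

covars = pd.read_csv("participants.csv")   # msceit, age, gender, scanner (assumed columns)
design = covars[["msceit", "age", "gender", "scanner"]].copy()
design["intercept"] = 1.0

# Smoothed, modulated gray matter segments, one image per participant
gm_maps = [f"smoothed_gm_sub-{i:02d}.nii.gz" for i in range(1, 86)]

model = SecondLevelModel().fit(gm_maps, design_matrix=design)
gm_map = model.compute_contrast("msceit")  # positive values: GM volume increases with EI
```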
Results
Participant sample data
In our study population of 85 healthy individuals (mean age 25.5 years, s.d. = 3.1 years, 43 female), the mean overall MSCEIT score was 105.7 (s.d. = 13.1), the mean experiential EI subscore was 104.3 (s.d. = 14.8) and the mean strategic EI subscore was 105.1 (s.d. = 10.9). The covariates age, gender and MRI scanner were not substantially correlated with the MSCEIT scores [all P > 0.05, all abs(r) ≤ 0.09].
fMRI analysis
Cerebral activation
In the ROI analyses, a significant positive linear association between individual MSCEIT scores and cerebral voice-sensitivity was detected in the left amygdala (Figure 1; peak MNI coordinate: −18 −3 −18, cluster size 54 mm3, t = 3.4, pFWEcorr = 0.01) but not in the other ROIs. At whole brain level, an additional significant positive linear relationship between MSCEIT scores and voice-sensitivity was observed in the left anterior insula extending into the inferior frontal gyrus (left insula/IFG; Figure 1 and Table 2; pFWEcorr = 0.045).
Fig. 1.
Linear associations between MSCEIT scores and cerebral voice-sensitivity. Correlations of MSCEIT scores with voice-sensitivity (red) rendered onto a standard brain (A) and coronal as well as transversal slices of the study population mean anatomical scan (B, C). Functional (TVA, FFA) and anatomical regions of interest are rendered in different colors (TVA, dark blue; FFA, green; amygdala, yellow). Results shown at a threshold of P<0.001, uncorrected, at voxel level; cluster significance was assessed using FWE correction for multiple comparisons across the whole brain (marked with *) and the a priori ROIs (small volume correction, SVC; marked with **), respectively, with a threshold of P<0.05. The diagram (D) illustrates the direction of the relationship between MSCEIT scores and voice-sensitivity in the left IFG/insula.
Table 2.
Linear associations between EI and cerebral voice- and face-sensitivity
| Peak MNI coordinate region | Peak MNI coordinate (x y z) | t peak voxel (df = 80) | Cluster size (mm³) |
|---|---|---|---|
| Voice-sensitivity |  |  |  |
| L insula/L inferior frontal gyrus, partes triangularis et opercularis | −36 15 6 | 4.5 | 2511* |
| R thalamus | 6 −6 0 | 3.9 | 432 |
| R inferior frontal gyrus, pars triangularis | 54 24 15 | 3.6 | 297 |
| R putamen | 30 18 0 | 3.5 | 135 |
| R superior temporal gyrus | 66 −39 18 | 3.5 | 216 |
| Face-sensitivity |  |  |  |
| L precuneus/L superior parietal gyrus | −15 −57 42 | −4.0 | 351 |
| R fusiform gyrus/R cerebellum | 36 −60 −18 | −3.7 | 432** |
Notes: The initial whole-brain analysis was performed at a threshold of P < 0.001 at voxel level with FWE correction (P < 0.05) for multiple comparisons at cluster level; significant results are marked with *. ROI analyses centered on FFA, TVA and amygdala were performed at a threshold P < 0.001 at voxel level with FWE correction (P < 0.05) for multiple comparisons across the ROI volume based on small volume correction (SVC) and significant results are marked with **. Only clusters ≥ 135 mm³ (voxel size 3×3×3 mm³) are reported. df, degrees of freedom; R, right; L, left.
In both regions, i.e. left insula/IFG and left amygdala, the relationship between MSCEIT scores and voice-sensitivity was driven by increased responses to voices in individuals with greater MSCEIT scores (r ≥ 0.32, P ≤ 0.003) whereas there was no effect of MSCEIT scores on cerebral responses to animal and environmental sounds [all abs(r) ≤ 0.16, all P > 0.05; Figure 2A and B].
Fig. 2.
Linear associations between MSCEIT scores and absolute contrast estimates in the voice-sensitive left IFG/insula and amygdala. Correlations of MSCEIT scores with corrected contrast estimates for voices, animal sounds and environmental sounds in the (A) left IFG/insula and (B) left amygdala. There is a positive correlation between increasing MSCEIT scores and higher contrast estimates for voices (red line), whereas there is no clear effect for animal (brown line) and environmental sounds (blue line). In the left IFG/insula, only the third of participants with the highest EI showed significant voice-sensitivity, whereas participants with average and low EI showed slightly negative contrast estimates (C). A similar, although only marginal, effect was observed in the left amygdala (D).
The relationship between MSCEIT scores and voice-sensitivity can be further illustrated by splitting the participants into three subsamples according to their MSCEIT scores {i.e. ‘low’ EI [n = 28, mean MSCEIT score 90.9 (s.d. = 1.6)], ‘average’ EI [n = 29, mean MSCEIT score 107.0 (s.d. = 0.5)] and ‘high’ EI [n = 28, mean MSCEIT score 119.2 (s.d. = 1.1)]}. Only in individuals with ‘high’ EI did the left insula/IFG exhibit significant voice-sensitivity (t = 3.6, P = 0.001), whereas in ‘average’ and ‘low’ EI individuals this region did not [both abs(t) ≤ 1.3, both P > 0.05; Figure 2C]. In contrast, in the left amygdala only individuals with ‘low’ EI did not exhibit significant voice-sensitivity (t = 1.4, P > 0.05), while the other two groups did so to a comparable degree (both t ≥ 4.4, both P < 0.001; Figure 2D).
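For readers who want to reproduce this kind of tertile illustration on their own data, a sketch of the split and the per-group one-sample tests (pandas/scipy used for illustration; file and column names are assumptions):

```python
import pandas as pd
from scipy import stats

df = pd.read_csv("roi_estimates.csv")
df["ei_group"] = pd.qcut(df["msceit"], q=3, labels=["low", "average", "high"])

# Test voice-sensitivity (mean contrast estimate) against zero within each EI tertile
for group, sub in df.groupby("ei_group"):
    t, p = stats.ttest_1samp(sub["voice_sensitivity"], 0.0)
    print(f"{group}: t = {t:.2f}, p = {p:.3f}")
```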
A significant negative relationship of MSCEIT scores and face-sensitivity was found in the right FFA at ROI level (Figure 3 and Table 2, pFWEcorr = 0.02) whereas we did not observe any significant associations between MSCEIT scores and face-sensitivity within the other ROIs or at the whole brain level.
Fig. 3.
Linear associations between MSCEIT scores and cerebral face-sensitivity. Correlation of MSCEIT scores with face-sensitivity (red) rendered onto a standard brain (A) and a coronal slice of the study population mean anatomical scan (B). Functional (TVA, FFA) regions of interest are rendered in different colors (TVA, dark blue; FFA, green); the amygdala ROI is not displayed in this figure. Results shown at a threshold of P<0.001, uncorrected, at voxel level; the cluster marked with two asterisks is significant after FWE correction for multiple comparisons across the FFA as a priori ROI (small volume correction, SVC) with a threshold of P<0.05. The diagram (D) illustrates the direction of the relationship between MSCEIT scores and face-sensitivity in the right FFA.
The negative correlation between MSCEIT scores and face-sensitivity (i.e. the differential response to faces as compared to the other classes of stimuli) in the right FFA was driven by a combination of decreased responses to faces and increased responses to the other stimulus classes with increasing MSCEIT scores. Statistical post hoc decomposition indicated, however, that these effects were non-significant when analyzed separately [all abs(r) ≤ 0.2, all P > 0.05; Figure 4A]. In the group comparison, individuals with ‘low’ EI exhibited greater face-sensitivity in the right FFA than the ‘average’ and ‘high’ EI groups (both t ≥ 2.2, both P ≤ 0.04; Figure 4B).
Fig. 4.
Linear associations between MSCEIT scores and absolute contrast estimates in the face-sensitive right FFA. Correlations of MSCEIT scores with corrected contrast estimates for faces, houses, objects and scenes in the right FFA (A). Decreased face-sensitivity in individuals with higher MSCEIT scores is due to a decrease in contrast estimates for faces (red line) as well as an increase in contrast estimates for houses (green line), objects (brown line) and scenes (blue line). Participants with low EI showed higher face-sensitivity contrast estimates than did participants with average and high EI (B).
Moreover, in the left insula/IFG and right FFA, experiential EI contributed more strongly to the observed association between EI and voice- and face-sensitivity, respectively, than did strategic EI (both t ≥ 2.0, both P ≤ 0.03, both d ≥ 0.43; Supplementary Figure S1A and B). This effect was marginally significant in the left amygdala (t = 1.6, P = 0.06, d = 0.35).
Validation analyses using minimum difference contrasts confirmed that all observed associations between EI and voice- and face-sensitivity were driven by increased responses to voices (left insula/IFG: r = 0.40, P < 0.001; left amygdala: r = 0.33, P = 0.002) and weaker responses to faces (right FFA: r = −0.32, P = 0.003). Further validation analyses targeting differential associations between MSCEIT scores and the cerebral responses to the different stimulus categories confirmed the regression coefficient difference in the left insula/IFG for voices vs animal sounds (t = 4.2, P < 0.001, d = 0.93) and voices vs environmental sounds (t = 3.6, P < 0.001, d = 0.78). In the left amygdala, differences for voices vs animal sounds (t = 3.1, P < 0.005, d = 0.68) and voices vs environmental sounds (t = 2.3, P < 0.05, d = 0.50) were corroborated. In addition, the regression coefficient differences in the right FFA for faces vs houses (t = −3.7, P < 0.001, d = 0.81), faces vs objects (t = −3.1, P < 0.005, d = 0.69) and faces vs scenes (t = −3.3, P < 0.005, d = 0.73) were statistically significant.
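The minimum difference contrast is stricter than the mean-based contrast because a region only scores positively if the voice response exceeds both control categories, not just their average. Per participant it reduces to a single line (numpy sketch with placeholder arrays):

```python
import numpy as np

# Placeholder per-participant contrast estimates for 85 subjects
V, A, E = np.random.randn(3, 85)

# V - max(A, E): positive only where the voice response exceeds BOTH controls
min_diff_voice = V - np.maximum(A, E)
```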
The complementary ROI analyses targeting potential associations between MSCEIT scores and face-sensitivity in other major face processing areas (i.e. the pSTS and the OFA, Supplementary Table S2) did not produce any significant results: the voxel-wise statistical threshold of P < 0.001 was not reached in any voxel included in the analyses.
Voice- and face-sensitive modulations of functional connectivity
Psychophysiological interaction (PPI) analyses using areas with significant associations between MSCEIT scores and voice-/face-sensitivity as seed regions demonstrated a significant positive linear association of MSCEIT scores with voice-sensitive FC increases between the left insula/IFG and a region in the middle part of the right TVA (Table 3 and Figure 5, pFWEcorr = 0.049). Spatial comparison revealed that the observed PPI effect was situated closest to the middle STS voice-sensitivity peaks found in the present and previous studies (Supplementary Table S1). Using the left amygdala and the right FFA as seed regions, no significant relationships between MSCEIT scores and voice-sensitive (left amygdala) or face-sensitive (right FFA) FC modulations were observed (Table 3).
Table 3.
Linear associations between MSCEIT scores and voice- as well as face-sensitive modulations of functional connectivity (FC). Psychophysiological interaction (PPI) analysis
| Peak MNI coordinate region | Peak MNI coordinate (x y z) | t peak voxel (df = 80) | Cluster size (mm³) |
|---|---|---|---|
| Seed region 1: L inferior frontal gyrus (IFG)/insula |  |  |  |
| R superior frontal gyrus | 21 54 36 | 4.4 | 486 |
| R middle and inferior temporal gyrus | 60 −27 −12 | 4.2 | 756** |
| L fusiform gyrus | −30 −30 −24 | 3.8 | 243 |
| R middle frontal gyrus | 33 24 51 | 3.8 | 756 |
| R anterior cingulum | 6 21 15 | 3.8 | 162 |
| R midbrain | 9 −21 −9 | 3.7 | 135 |
| L superior frontal gyrus | −27 57 27 | 3.7 | 270 |
| L cerebellum | −9 −84 −39 | 3.6 | 459 |
| R cerebellum | 12 −48 −18 | 3.5 | 351 |
| R middle frontal gyrus | 48 36 30 | 3.5 | 135 |
| L midbrain | −6 −30 −12 | 3.5 | 162 |
| Seed region 2: L amygdala |  |  |  |
| L middle temporal gyrus | −48 −3 −24 | 4.2 | 459 |
| Seed region 3: R fusiform face area (FFA) |  |  |  |
| No suprathreshold clusters |  |  |  |
Notes: Results are shown at a threshold of P < 0.001, uncorrected, at voxel level; cluster significance was assessed using FWE correction (P < 0.05) for multiple comparisons across the a priori ROIs based on small volume correction (SVC); significant clusters are marked with **. Only clusters ≥ 135 mm³ (voxel size 3×3×3 mm³) are reported. df, degrees of freedom; R, right; L, left.
Fig. 5.
Linear associations between MSCEIT scores and voice-sensitive modulations of FC (PPI). Correlations of MSCEIT scores with voice-sensitive (red) modulations of connectivity (A–C). Functional (TVA, FFA) and anatomical regions of interest are rendered in different colors (TVA, dark blue; FFA, green; amygdala, yellow). Results shown for the left IFG/insula seed at a threshold of P<0.001, uncorrected, at voxel-level (A, B); cluster significance was assessed using FWE-correction for multiple comparisons across the a priori ROIs (SVC; marked with **) with a threshold of P<0.05. The diagram (D) illustrates the direction of the relationship between the MSCEIT scores and voice-sensitive FC modulations between the left IFG/insula and the right TVA.
Voxel-based morphometry
The relationship between EI and voice-sensitivity in the left insula/IFG and face-sensitivity in the right FFA was paralleled by concurrent associations between MSCEIT scores and gray matter volume in these areas (i.e. left insula/IFG: r = 0.19, P = 0.04, one-tailed; right FFA: r = −0.27, P = 0.01; Figure 6). No such correlation was observed in the left amygdala (r = 0.15, P > 0.05). Apart from the functional ROIs, a positive correlation of MSCEIT scores and gray matter volume was observed in the right OFC (Table 4 and Figure 7, pFWEcorr = 0.04). Differential relationships between gray matter volume and experiential vs strategic EI were not observed [all abs(t) < 1.9, all P > 0.05, d < 0.39].
Fig. 6.
Convergence of negative relationships between MSCEIT scores and face-sensitivity and gray matter volume in the right FFA. Negative correlations of MSCEIT scores with face-sensitivity (green) and gray matter volume (red) rendered onto a standard brain (A) and a transversal slice of the study population mean anatomical scan (B). Results shown at a threshold of P<0.01, uncorrected, for illustration purposes. The diagram (C) illustrates the direction of the relationship between the MSCEIT scores and gray matter volume in the part of the right FFA which exhibited a negative correlation between MSCEIT scores and face-sensitivity.
Table 4.
Linear associations between MSCEIT scores and gray matter volume. Voxel-based morphometry
| Peak MNI coordinate region | Peak MNI coordinate (x y z) | t peak voxel (df = 80) | Cluster size (mm³) |
|---|---|---|---|
| Positive relationship |  |  |  |
| R middle and superior frontal gyri, partes orbitales | 27 48 −20 | 4.0 | 1596** |
| L fusiform gyrus/inferior temporal gyrus | −28 −4 −41 | 3.8 | 486 |
| Negative relationship |  |  |  |
| L cuneus | −4 −90 36 | 3.9 | 358 |
| L superior parietal gyrus/superior occipital gyrus | −10 −82 46 | 3.7 | 186 |
Notes: Results are shown at a threshold of P < 0.001, uncorrected, at voxel level; cluster significance was assessed using FWE correction (P < 0.05) for multiple comparisons across the a priori anatomical ROI based on small volume correction (SVC); significant clusters are marked with **. Only clusters ≥ 135 mm³ are reported (voxel size: 1.5×1.5×1.5 mm³). df, degrees of freedom; R, right; L, left.
Fig. 7.
Positive relationship between MSCEIT scores and gray matter volume in the right OFC. Positive correlation of MSCEIT scores with gray matter volume (red) rendered onto a standard brain (A) and a transversal slice of the study population mean anatomical scan (B). Results shown at a threshold of P<0.001, uncorrected; cluster significance was assessed using FWE correction for multiple comparisons across the a priori anatomical ROI (small volume correction, SVC) with a threshold of P<0.05. The diagram (C) illustrates the direction of the relationship between MSCEIT scores and gray matter volume in the right OFC.
Discussion
Our findings demonstrate that the complex set of emotional competences termed EI is linked to the cerebral processing of faces and voices already at the level of sensory voice- and face-processing as well as limbic emotion-processing areas, rather than in higher cognitive brain regions. Notably, this link was observed in the absence of an experimentally induced cognitive focus on emotional information, as would be the case, for example, in an emotion evaluation task.
The first main finding, a positive relationship between EI and voice-sensitivity in the anterior insula extending into the IFG, fits with several neuroimaging studies that reported an association between insular responses and EI during the processing of emotional cues (i.e. emotional faces; Killgore and Yurgelun-Todd, 2007; Alkozei and Killgore, 2015; Quarto et al., 2016). These latter findings have been discussed with reference to the somatic marker hypothesis (Damasio, 1994), in which especially the anterior part of the insula plays a major role as a neural structure integrating the emotional salience of a stimulus and the individual’s own affective state (Phillips et al., 2003) during decision-making. However, as the relationship between EI and insular voice-sensitivity exists outside the context of explicit emotion processing, it may be necessary to interpret this preference for one of the major carrier signals of emotionally relevant information in human social life in relation to the general salience processing function of the insula (Bartra et al., 2013; Hayes et al., 2014) within the so-called salience network (Seeley et al., 2007). This notion of an increased salience of human voices in emotionally competent individuals is also consistent with the finding of a corresponding relationship between EI and voice-sensitivity in another central part of the network subserving salience processing, namely the amygdala (Adolphs, 2010; Fernando et al., 2013). Following this conception, the EI-associated voice-sensitive FC increase between the insula and the right TVA may be a correlate of more pronounced parsing of vocal signals for emotionally and socially relevant information. This observation potentially reflects a neural mechanism through which effective voice processing supports emotional competences. It dovetails with recent findings of increased FC between the TVA and the anterior insula/IFG during the task-irrelevant extraction of emotional information from vocal cues (Frühholz and Grandjean, 2012) as well as decreased FC between these areas in psychiatric conditions with perceptual deficits for vocally communicated emotional information [i.e. schizophrenia (Kantrowitz et al., 2015) and autism spectrum disorders (Abrams et al., 2013)]. The spatial proximity of the EI-associated voice-sensitive FC increase to the middle STS voice-sensitivity peaks observed in the present and previous studies (Belin et al., 2000; Pernet et al., 2015) indicates that this effect may reflect voice-specific acoustic processing (Kriegstein and Giraud, 2004; Charest et al., 2013; Latinus et al., 2013; Giordano et al., 2014).
Surprisingly, the FFA, as one of the most central modules of the cerebral face processing network, exhibited a negative relationship between its sensitivity for faces and MSCEIT scores. Functionally, this might be explained by greater neural efficiency during general face processing in emotionally competent individuals, as has been suggested for emotional facial expressions (Killgore and Yurgelun-Todd, 2007). Yet, the decreased face-sensitivity of the FFA in highly emotionally competent individuals is structurally mirrored by a reduction in gray matter volume. Although a negative correlation between EI and gray matter volume in the fusiform gyrus has previously been reported in a large-scale study (Tan et al., 2014), such findings stand in contrast to training (Kreifelts et al., 2013) and learning (Gimenez et al., 2014) studies in which increased neural efficiency was accompanied not only by decreased cerebral activation but also by an increase in gray matter volume. Specifically, a decrease in FFA responses to emotional cues following an emotion communication training was associated with increased gray matter volume in this region (Kreifelts et al., 2013). Nevertheless, the novel and somewhat surprising finding of a negative relationship between emotional competence and FFA gray matter volume is supported by the positive correlations between EI and gray matter volume in the OFC and insula observed in the very same analysis. These results correspond with those of previous studies in healthy individuals (Killgore et al., 2012; Tan et al., 2014) and individuals with brain injuries (Barbey et al., 2014) and converge with current concepts of the role of these structures in emotional perception, evaluation and decision-making (Gutierrez-Cobo et al., 2016). Thus, further discussion is warranted.
Alternatively, the opposite associations between MSCEIT scores and voice-sensitivity on the one hand and face-sensitivity on the other might be an indicator of a reduced visual bias in emotionally competent individuals. Generally, adults exhibit a visual preference during the processing of audiovisual signals, both in the abstract (Robinson and Sloutsky, 2004) but also in the emotional (i.e. facial and vocal expressions; Santorelli, 2006) domain. Such a bias does not exist in children or is even reversed toward the auditory modality (Robinson and Sloutsky, 2004). One could hypothesize that those individuals who develop a high degree of emotional competence in the sense of effective emotional learning during childhood also develop a weaker visual bias, or in other words, process voices and faces in a more balanced manner. Again, it is in line with our results that the experiential domain of EI should be more strongly associated with such altered voice and face processing than the strategic domain of EI.
The idea of a reduced visual bias in voice and face processing therefore appears to be a promising topic for future research on the foundations of EI. If it could be demonstrated that EI is indeed associated with a reduced face bias at the behavioral level, this would open an avenue for research investigating the possibility of fostering the development of emotional competences through ‘voice bias trainings’, with the potential to clarify a causal relationship. Based on findings that voice and face processing are at least partially genetically determined (Brown et al., 2012; Koeda et al., 2015), genetic imaging might be employed to elucidate the potential genetic foundations of the alterations in voice and face processing associated with EI. Jointly, such investigations could clarify whether the increased cerebral voice-sensitivity and reduced face-sensitivity are the consequence or rather the cause of well-developed emotional competences. Future studies are needed to determine whether certain vocal and facial features are more strongly associated with EI than others, in analogy to the observation of differential associations between different emotional facial expressions and EI in the anterior insula (Quarto et al., 2016).
Finally, one should keep in mind that constructs of EI share a considerable amount of variance with other interindividual characteristics (e.g. personality and cognitive ability; Joseph and Newman, 2010; Joseph et al., 2015). This precludes any definitive conclusions about the specificity of the observed associations with regard to the construct of EI and is reflected in the relatively low incremental validity of EI constructs over measures of personality and cognitive ability (Joseph and Newman, 2010). Thus, to determine EI-specific neural correlates, large scale studies are needed incorporating all of the partially co-linear interindividual characteristics.
Limitations
The canonical voice and face localizer experiments employed in the present study differ with regard to attentional load and processing effort (i.e. passive listening vs a one-back working memory task). An influence of these factors on our results, e.g. in the form of voice-sensitive automatic attention and evaluation processes or reduced effort in the face working memory task, can therefore not be excluded, and further research on the relations of EI with attention and processing effort during voice and face perception is certainly warranted.
In addition, the auditory and visual stimuli are not perfectly comparable with regard to emotional content and stimulus-inherent dynamics. As the potential emotional content of the face and voice stimuli was not explicitly quantified and compared, it cannot be ruled out completely that emotional information incidentally included in the stimulus material contributed differently to the TVA (as determined by the voice localizer) and the FFA (as determined by the face localizer) activations. Moreover, the visual stimuli were presented as static pictures, whereas dynamics are inherent to auditory stimuli. Even if the FFA responds to static and dynamic faces in the same way (Pitcher et al., 2011), an influence of this factor cannot be fully excluded. These potentially interfering factors should be addressed in more detail in further experiments evaluating an EI-related shift of voice- and face-sensitivity.
On the other hand, the employment of these canonical designs allows us to relate our findings directly to the extensively investigated voice and face processing networks described on the basis of commonly used localizer experiments. Additionally, it appears highly unlikely that the opposite relationships between voice- and face-sensitivity and MSCEIT scores are solely due to differences in attention, effort and stimulus material, as these functional associations were accompanied by concurrent associations between gray matter volume and EI, which exist outside the context of the respective experimental settings.
Conclusion
In the present study, we demonstrated that emotional competence measured as EI and cerebral voice- as well as face-sensitivity are inversely associated in healthy individuals. The concordant positive correlations between EI and the voice-sensitivity and gray matter volume of the left anterior insula, as well as the voice-sensitive increase in functional connectivity between the insula and the TVA, can be interpreted as correlates of a generally increased salience of voices in emotionally competent individuals. This notion is further supported by the comparable positive association of EI and voice-sensitivity in the left amygdala. In contrast, the right FFA, as one central functional module of the cerebral face processing system, exhibited a negative correlation between EI and both face-sensitivity and gray matter volume. Together, these results indicate a shifted balance of voice and face processing systems in the form of an attenuated face-vs-voice bias as a neural correlate of EI. The present study offers a starting point for future research aimed at further elucidating the direct behavioral relevance of the observed EI-associated voice-vs-face processing shift within the framework of more elaborate voice-face processing studies. Moreover, it remains an open question for future studies whether the alteration in the balance of cerebral voice and face processing is a genetically determined precondition for the development of a high degree of emotional competence, or rather the consequence of learning mechanisms underlying the individual development of emotional competence.
Funding
We acknowledge support by the Deutsche Forschungsgemeinschaft and the Open Access Publishing Fund of the University of Tuebingen.
Supplementary data
Supplementary data are available at SCAN online.
Conflict of interest. None declared.
References
- Abrams D.A., Lynch C.J., Cheng K.M., et al. (2013). Underconnectivity between voice-selective cortex and reward circuitry in children with autism. Proceedings of the National Academy of Sciences of the United States of America, 110(29), 12060–5.
- Adolphs R. (2010). What does the amygdala contribute to social cognition? Annals of the New York Academy of Sciences, 1191, 42–61.
- Alkozei A., Killgore W.D. (2015). Emotional intelligence is associated with reduced insula responses to masked angry faces. Neuroreport, 26(10), 567–71.
- Barbey A.K., Colom R., Grafman J. (2014). Distributed neural system for emotional intelligence revealed by lesion mapping. Social Cognitive and Affective Neuroscience, 9(3), 265–72.
- Bartra O., McGuire J.T., Kable J.W. (2013). The valuation system: a coordinate-based meta-analysis of BOLD fMRI experiments examining neural correlates of subjective value. Neuroimage, 76, 412–27.
- Belin P., Zatorre R.J., Lafaille P., Ahad P., Pike B. (2000). Voice-selective areas in human auditory cortex. Nature, 403(6767), 309–12.
- Brackett M.A., Rivers S.E., Shiffman S., Lerner N., Salovey P. (2006). Relating emotional abilities to social functioning: a comparison of self-report and performance measures of emotional intelligence. Journal of Personality and Social Psychology, 91(4), 780–95.
- Brown A.A., Jensen J., Nikolova Y.S., et al. (2012). Genetic variants affecting the neural processing of human facial expressions: evidence using a genome-wide functional imaging approach. Translational Psychiatry, 2(7), e143.
- Charest I., Pernet C., Latinus M., Crabbe F., Belin P. (2013). Cerebral processing of voice gender studied using a continuous carryover fMRI design. Cerebral Cortex, 23(4), 958–66.
- Chew B.H., Zain A.M., Hassan F. (2013). Emotional intelligence and academic performance in first and final year medical students: a cross-sectional study. BMC Medical Education, 13(1), 44.
- Collins D.L., Neelin P., Peters T.M., Evans A.C. (1994). Automatic 3D intersubject registration of MR volumetric data in standardized Talairach space. Journal of Computer Assisted Tomography, 18(2), 192–205.
- Damasio A.R. (1994). Descartes’ Error: Emotion, Reason, and the Human Brain. New York: Putnam Publishing.
- Dodonova Y.A., Dodonov Y.S. (2012). Speed of emotional information processing and emotional intelligence. International Journal of Psychology, 47(6), 429–37.
- Ermer E., Kahn R.E., Salovey P., Kiehl K.A. (2012). Emotional intelligence in incarcerated men with psychopathic traits. Journal of Personality and Social Psychology, 103(1), 194–204.
- Ethofer T., Kreifelts B., Wiethoff S., et al. (2009). Differential influences of emotion, task, and novelty on brain regions underlying the processing of speech melody. Journal of Cognitive Neuroscience, 21(7), 1255–68.
- Fabio A.D. (2015). Beyond fluid intelligence and personality traits in social support: the role of ability based emotional intelligence. Frontiers in Psychology, 6, 395.
- Fernando A.B., Murray J.E., Milton A.L. (2013). The amygdala: securing pleasure and avoiding pain. Frontiers in Behavioral Neuroscience, 7, 190.
- Friston K.J., Buechel C., Fink G.R., Morris J., Rolls E., Dolan R.J. (1997). Psychophysiological and modulatory interactions in neuroimaging. Neuroimage, 6(3), 218–29.
- Friston K.J., Glaser D.E., Henson R.N., Kiebel S., Phillips C., Ashburner J. (2002). Classical and Bayesian inference in neuroimaging: applications. Neuroimage, 16(2), 484–512.
- Frühholz S., Grandjean D. (2012). Towards a fronto-temporal neural network for the decoding of angry vocal expressions. Neuroimage, 62(3), 1658–66.
- Gimenez P., Bugescu N., Black J.M., et al. (2014). Neuroimaging correlates of handwriting quality as children learn to read and write. Frontiers in Human Neuroscience, 8, 155.
- Giordano B.L., Pernet C., Charest I., Belizaire G., Zatorre R.J., Belin P. (2014). Automatic domain-general processing of sound source identity in the left posterior middle frontal gyrus. Cortex, 58, 170–85.
- Gitelman D.R., Penny W.D., Ashburner J., Friston K.J. (2003). Modeling regional and psychophysiologic interactions in fMRI: the importance of hemodynamic deconvolution. Neuroimage, 19(1), 200–7.
- Gutierrez-Cobo M.J., Cabello R., Fernandez-Berrocal P. (2016). The relationship between emotional intelligence and cool and hot cognitive processes: a systematic review. Frontiers in Behavioral Neuroscience, 10, 101.
- Hayes D.J., Duncan N.W., Xu J., Northoff G. (2014). A comparison of neural responses to appetitive and aversive stimuli in humans and other mammals. Neuroscience and Biobehavioral Reviews, 45, 350–68.
- Joseph D.L., Jin J., Newman D.A., O’Boyle E.H. (2015). Why does self-reported emotional intelligence predict job performance? A meta-analytic investigation of mixed EI. Journal of Applied Psychology, 100(2), 298–342.
- Joseph D.L., Newman D.A. (2010). Emotional intelligence: an integrative meta-analysis and cascading model. Journal of Applied Psychology, 95(1), 54–78.
- Kantrowitz J.T., Hoptman M.J., Leitman D.I., et al. (2015). Neural substrates of auditory emotion recognition deficits in schizophrenia. Journal of Neuroscience, 35(44), 14909–21.
- Kanwisher N., McDermott J., Chun M.M. (1997). The fusiform face area: a module in human extrastriate cortex specialized for face perception. Journal of Neuroscience, 17(11), 4302–11.
- Kanwisher N., Yovel G. (2006). The fusiform face area: a cortical region specialized for the perception of faces. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 361(1476), 2109–28.
- Killgore W.D., Schwab Z.J., Tkachenko O., et al. (2013). Emotional intelligence correlates with functional responses to dynamic changes in facial trustworthiness. Social Neuroscience, 8(4), 334–46.
- Killgore W.D., Weber M., Schwab Z.J., et al. (2012). Gray matter correlates of trait and ability models of emotional intelligence. Neuroreport, 23(9), 551–5.
- Killgore W.D., Yurgelun-Todd D.A. (2007). Neural correlates of emotional intelligence in adolescent children. Cognitive, Affective & Behavioral Neuroscience, 7(2), 140–51.
- Kniazev G.G., Mitrofanova L.G., Bocharov A.V. (2013). [Emotional intelligence and oscillatory responses on the emotional facial expressions]. Fiziologiia Cheloveka, 39(4), 41–8.
- Koeda M., Watanabe A., Tsuda K., et al. (2015). Interaction effect between handedness and CNTNAP2 polymorphism (rs7794745 genotype) on voice-specific frontotemporal activity in healthy individuals: an fMRI study. Frontiers in Behavioral Neuroscience, 9, 87.
- Kreifelts B., Ethofer T., Huberle E., Grodd W., Wildgruber D. (2009). Association of trait emotional intelligence and individual fMRI-activation patterns during the perception of social signals from voice and face. Human Brain Mapping, 31(7), 979–91.
- Kreifelts B., Jacob H., Bruck C., Erb M., Ethofer T., Wildgruber D. (2013). Non-verbal emotion communication training induces specific changes in brain function and structure. Frontiers in Human Neuroscience, 7, 648.
- Kriegstein K.V., Giraud A.L. (2004). Distinct functional substrates along the right superior temporal sulcus for the processing of voices. Neuroimage, 22(2), 948–55.
- Lanciano T., Curci A. (2014). Incremental validity of emotional intelligence ability in predicting academic achievement. American Journal of Psychology, 127(4), 447–61.
- Lanciano T., Curci A. (2015). Does emotions communication ability affect psychological well-being? A study with the Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT) v2.0. Health Communication, 30(11), 1112–21.
- Latinus M., McAleer P., Bestelmeyer P.E., Belin P. (2013). Norm-based coding of voice identity in human auditory cortex. Current Biology, 23(12), 1075–80.
- Lopes P.N., Brackett M.A., Nezlek J.B., Schütz A., Sellin I., Salovey P. (2004). Emotional intelligence and social interaction. Personality and Social Psychology Bulletin, 30(8), 1018–34.
- Mayer J.D., Salovey P., Caruso D.R., Sitarenios G. (2003). Measuring emotional intelligence with the MSCEIT V2.0. Emotion, 3(1), 97–105.
- Mende-Siedlecki P., Verosky S.C., Turk-Browne N.B., Todorov A. (2013). Robust selectivity for faces in the human amygdala in the absence of expressions. Journal of Cognitive Neuroscience, 25(12), 2086–106.
- Oldfield R.C. (1971). The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia, 9(1), 97–113.
- Pan W., Wang T., Wang X., et al. (2014). Identifying the core components of emotional intelligence: evidence from amplitude of low-frequency fluctuations during resting state. PLoS One, 9(10), e111435.
- Pernet C.R., McAleer P., Latinus M., et al. (2015). The human voice areas: spatial organization and inter-individual variability in temporal and extra-temporal cortices. Neuroimage, 119, 164–74.
- Petrides K.V. (2011). Ability and trait emotional intelligence. In: Chamorro-Premuzic T., von Stumm S., Furnham A., editors. The Wiley-Blackwell Handbook of Individual Differences. Oxford: Wiley-Blackwell.
- Phillips M.L., Drevets W.C., Rauch S.L., Lane R. (2003). Neurobiology of emotion perception I: the neural basis of normal emotion perception. Biological Psychiatry, 54(5), 504–14.
- Pitcher D., Dilks D.D., Saxe R.R., Triantafyllou C., Kanwisher N. (2011). Differential selectivity for dynamic versus static information in face-selective cortical regions. Neuroimage, 56(4), 2356–63.
- Posamentier M.T., Abdi H. (2003). Processing faces and facial expressions. Neuropsychology Review, 13(3), 113–43.
- Quarto T., Blasi G., Maddalena C., et al. (2016). Association between ability emotional intelligence and left insula during social judgment of facial emotions. PLoS One, 11(2), e0148621.
- Raz S., Dan O., Zysberg L. (2014). Neural correlates of emotional intelligence in a visual emotional oddball task: an ERP study. Brain and Cognition, 91, 79–86.
- Robinson C.W., Sloutsky V.M. (2004). Auditory dominance and its change in the course of development. Child Development, 75(5), 1387–401.
- Salovey P., Mayer J.D. (1990). Emotional intelligence. Imagination, Cognition and Personality, 9(3), 185–211.
- Santorelli N.T. (2006). Perception of Emotion from Facial Expression and Affective Prosody. Atlanta, GA: Georgia State University.
- Seeley W.W., Menon V., Schatzberg A.F., et al. (2007). Dissociable intrinsic connectivity networks for salience processing and executive control. Journal of Neuroscience, 27(9), 2349–56.
- Steinmayr R., Schütz A., Hertel J., Schröder-Abé M. (2011). Mayer-Salovey-Caruso Emotional Intelligence Test (MSCEIT). Bern: Huber.
- Takeuchi H., Taki Y., Nouchi R., et al. (2013). Resting state functional connectivity associated with trait emotional intelligence. Neuroimage, 83, 318–28.
- Tan Y., Zhang Q., Li W., et al. (2014). The correlation between emotional intelligence and gray matter volume in university students. Brain and Cognition, 91, 100–7.
- Tzourio-Mazoyer N., Landeau B., Papathanassiou D., et al. (2002). Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. Neuroimage, 15(1), 273–89.
- von Kriegstein K., Giraud A.-L., Ungerleider L. (2006). Implicit multisensory associations influence voice recognition. PLoS Biology, 4(10), e326.
- Wojciechowski J., Stolarski M., Matthews G., Chao L. (2014). Emotional intelligence and mismatching expressive and verbal messages: a contribution to detection of deception. PLoS One, 9(3), e92570.
- Wols A., Scholte R.H., Qualter P. (2015). Prospective associations between loneliness and emotional intelligence. Journal of Adolescence, 39, 40–8.
- Worsley K.J., Marrett S., Neelin P., Vandal A.C., Friston K.J., Evans A.C. (1996). A unified statistical approach for determining significant signals in images of cerebral activation. Human Brain Mapping, 4(1), 58–73.