Social Cognitive and Affective Neuroscience
. 2013 May 9;9(6):839–848. doi: 10.1093/scan/nst057

Crossmodal emotional integration in major depression

Veronika I Müller 1,2,3, Edna C Cieslik 1,2,3, Tanja S Kellermann 2,3,4, Simon B Eickhoff 1,2
PMCID: PMC4040101  PMID: 23576809

Abstract

Major depression is accompanied by affective and social-cognitive deficits. Most research on affective deficits in depression has, however, focused on unimodal emotion processing, whereas in daily life emotional perception often depends on the evaluation of multimodal inputs. We therefore investigated emotional audiovisual integration in patients with depression and healthy subjects. Subjects rated the expression of happy, neutral and fearful faces while concurrently being exposed to emotional or neutral sounds. Results demonstrated group differences in left inferior frontal gyrus and inferior parietal cortex when incongruent was compared with congruent happy facial conditions, mainly due to a failure of patients to deactivate these regions in response to congruent stimulus pairs. Moreover, healthy subjects decreased activation in right posterior superior temporal gyrus/sulcus and midcingulate cortex when an emotional stimulus was paired with a neutral rather than another emotional one, whereas patients did not show such deactivation when neutral stimuli were integrated. These results demonstrate aberrant neural responses during audiovisual processing in depression, indicated by a failure to deactivate regions involved in inhibition and salience processing when congruent and neutral audiovisual stimulus pairs are integrated, suggesting a possible mechanism for constant arousal and readiness to act in this patient group.

Keywords: audiovisual, emotion, depression, neutral, congruent

INTRODUCTION

Unipolar major depressive disorder (MDD) is one of the most prevalent psychiatric disorders and is characterized by cognitive and motor impairments, but most prominently by persistent anhedonia and low mood (WHO, 2010). These latter symptoms have repeatedly been linked to deficits in emotion processing in MDD, as demonstrated by poorer recognition of emotions in faces (Gur et al., 1992; Persad and Polivy, 1993; Surguladze et al., 2004; Bourke et al., 2010; Naranjo et al., 2011), voices (Uekermann et al., 2008; Naranjo et al., 2011; Peron et al., 2011) and music (Naranjo et al., 2011). Furthermore, patients interpret expressions as more negative (Naranjo et al., 2011) and less positive (Surguladze et al., 2004; Yoon et al., 2009) than healthy controls, indicating a general negativity bias in this patient group. Supporting this view, patients show increased attention to sad stimuli (Gotlib et al., 2004) and are impaired in inhibiting irrelevant information, in particular negative emotional material (Goeleven et al., 2006; Joormann and Gotlib, 2010). Previous studies have also found impairments in the recognition of neutral stimuli (Gur et al., 1992; Leppanen et al., 2004; Uekermann et al., 2008; Naranjo et al., 2011); in particular, patients with depression more often misattribute negative (especially sad) emotions to neutral stimuli (Gur et al., 1992; Leppanen et al., 2004; Naranjo et al., 2011).

On the neurobiological level, these problems in emotion recognition have repeatedly been linked to dysregulation in (anterior and mid) cingulate and prefrontal cortices, (anterior) insula and the amygdala (Fitzgerald et al., 2008; Elliott et al., 2011; Stuhrmann et al., 2011; Cusi et al., 2012). These results are, however, inconsistent, with some studies reporting hyper- and others hypoactivation. Furthermore, Bertocci et al. (2012) reported a failure of patients with MDD to deactivate the midcingulate cortex in response to neutral faces: while healthy subjects deactivated this region during neutral face presentation, patients with depression responded with activation.

These studies in depression have, however, mainly focused on unimodal emotion processing, whereas in daily life emotions are hardly ever conveyed and perceived through a single sensory channel alone. Rather, emotional perception is often the result of evaluating multimodal emotional signals reaching us through various senses. It may thus be argued that results on unimodal emotion processing in depression are difficult to transfer to problems in naturalistic settings, as unimodal stimulation does not reflect emotions in real life. Importantly, the fact that patients with depression have problems in unimodal emotion recognition does not necessarily imply deficits in multimodal emotion processing; multimodal emotional information might instead engage compensatory mechanisms. This assumption is supported by a study in patients with schizophrenia (Müller et al., 2012b) demonstrating that deficits in early visual processing of emotional faces can be restored by concurrently presenting emotionally congruent auditory information. These results indicate that extrapolating from unimodal emotional processing deficits to multimodal impairments is difficult, as the latter may entail additional processes that exaggerate or ameliorate impairments.

In a previous study (Müller et al., 2011), we suggested that two different kinds of incongruence (IC) may occur during audiovisual processing. IC of emotional valence emerges when the auditory and visual channels present two emotional stimuli that differ in emotional valence. In contrast, IC of emotion presence is an audiovisual mismatch arising from the concurrent presentation of an emotional stimulus in one channel and a neutral one in the other (Müller et al., 2011). In healthy subjects, we demonstrated that these two kinds of IC are associated with different neuronal circuits: midcingulate and prefrontal cortex, temporo-parietal junction and supplementary motor area (SMA) were involved in processing IC of emotional valence, whereas the left amygdala showed attenuated activity under IC of emotion presence.

Even though MDD is known to be characterized by impairments in emotion recognition in the visual and auditory domains, there is no information on how patients deal with (congruent and incongruent) audiovisual emotional information. This study therefore investigates potential deficits and associated neural dysregulation in patients with depression when the two different kinds of IC (IC of emotional valence, IC of emotion presence) are processed. For that purpose, patients with depression and healthy controls were instructed to rate emotional (happy, fearful) and neutral facial expressions while concurrently being distracted by emotional (scream, laugh) or neutral (yawn) sounds during functional magnetic resonance imaging.

We hypothesized that, independent of emotion congruency, patients compared with controls would be more influenced by screams when rating faces and would give generally higher valence and arousal ratings for fearful stimuli. On the neural level, we expected that in patients with depression activity in the amygdala and in prefrontal and cingulate areas would be differently modulated by IC of emotional valence and IC of emotion presence.

METHODS

Subjects

In total, 22 patients with major depression and 22 healthy subjects, individually matched for gender, age and education, were examined. One patient (and the corresponding control subject) was excluded from further analyses due to an ultimately uncertain diagnosis from the treating psychiatrist and low Beck Depression Inventory (BDI) and Hamilton scores. Thus, 21 patients and 21 healthy controls (see Table 1 for demographic and clinical characteristics) were included in the analyses. All participants had normal or corrected-to-normal vision and were right-handed as confirmed by the Edinburgh Inventory (Oldfield, 1971).

Table 1.

Demographic and clinical characteristics of the patient and control groups

Characteristic        Patients             Controls
Gender                12 males/9 females   12 males/9 females
Age                   33.7 ± 11.06         33.9 ± 10.85
Education in years    12.9 ± 3.65          12.9 ± 2.41
Age of onset          27.6 ± 10.39         –
Duration of illness   5.9 ± 5.24           –
BDI score             21.1 ± 9.06          0.8 ± 1.25
Hamilton score        9.48 ± 5.33          0.1 ± 0.31

The clinical profile of the patient group is presented in Tables 1 and 2. Patients were recruited from the inpatient and outpatient units of the Department of Psychiatry, Psychotherapy and Psychosomatics, RWTH University Hospital. All patients included in the final analysis were diagnosed with a depressive episode or a recurrent depressive disorder by their treating psychiatrist according to the criteria of the ICD-10 (WHO, 2010). Eight patients were diagnosed with a moderate (F 32.1) and two with a severe episode (F 32.2); seven had the diagnosis of recurrent depressive disorder with a current moderate episode (F 33.1) and four with a current severe episode (F 33.2). The Structured Clinical Interview (SCID; Wittchen et al., 1997) was used to confirm the diagnosis and to check for comorbidities. Only patients with no comorbid psychiatric or neurological illness and no substance addiction in the last 6 months were included in the study. One patient was unmedicated, while the other 20 patients were treated with antidepressants and/or mood stabilizers (Table 2).

Table 2.

Clinical profile of the patient group

Gender Age Diagnosis Medication
Female 22 F 32.1 Venlafaxine
Male 29 F 32.1 Lithiumcarbonate
Male 19 F 32.1 Citalopram
Female 26 F 32.1 Citalopram
Male 28 F 33.2 Venlafaxine
Female 50 F 33.2 Duloxetine, Quetiapine
Male 30 F 33.1 Opipramol, Sertraline, Mirtazapine
Male 53 F 33.2 Venlafaxine, Mirtazapine
Female 38 F 33.1 Reboxetine
Female 30 F 32.1 No medication
Female 51 F 33.1 Escitalopram
Male 34 F 33.2 Venlafaxine, Quetiapine
Female 41 F 32.1 Venlafaxine
Female 29 F 32.2 Citalopram
Male 22 F 32.1 Venlafaxine
Male 41 F 33.1 Venlafaxine
Male 29 F 33.1 Bupropion, Quetiapine
Male 18 F 32.2 Mirtazapine
Male 24 F 33.1 Citalopram
Female 42 F 33.1 Mirtazapine
Male 51 F 32.1 Sertraline

The healthy control group was also screened with the SCID (Wittchen et al., 1997) revealing no history of neurological or psychiatric disorders, including substance abuse. No control subject took any mood- or cognition-altering medication.

All patients and healthy subjects gave informed consent to this study, which was approved by the ethics committee of the School of Medicine of the RWTH Aachen University.

Stimuli and procedure

A detailed description of the stimuli and procedure can be found in Müller et al. (2011). In brief, the visual stimuli consisted of faces of five males and five females, each showing a happy, neutral and fearful expression (FEBA; Gur et al., 2002), resulting in 30 different pictures. Furthermore, 10 different neutral faces were blurred with a mosaic filter and served as masks. Finally, 10 yawns, 10 laughs and 10 screams (five male and five female in each case) were used as auditory stimuli. In total, every stimulus was presented six times. Every face condition was paired with every sound condition, resulting in 180 stimulus pairs across nine conditions (fearful/scream, fearful/yawn, fearful/laugh, neutral/scream, neutral/yawn, neutral/laugh, happy/scream, happy/yawn and happy/laugh) with 20 stimulus pairs per condition. As every sound and every face was presented twice within the 20 individual trials per condition, pairing of the same stimuli was possible. Furthermore, a female face was always paired with a female voice and vice versa.
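The fully crossed design described above (3 facial expressions × 3 sounds, 20 repetitions each) can be sketched as follows. This is an illustrative reconstruction, not the study's actual presentation script; the condition labels and seed are ours:

```python
import itertools
import random

FACES = ["fearful", "neutral", "happy"]
SOUNDS = ["scream", "yawn", "laugh"]
TRIALS_PER_CONDITION = 20

def build_trial_list(seed=0):
    """Enumerate the 9 face/sound conditions, repeat each 20 times and
    shuffle into a single 180-trial presentation order."""
    conditions = list(itertools.product(FACES, SOUNDS))  # 9 condition pairs
    trials = conditions * TRIALS_PER_CONDITION           # 180 trials in total
    rng = random.Random(seed)
    rng.shuffle(trials)
    return trials

trials = build_trial_list()
```

Gender matching of face and voice (female face with female voice and vice versa) would be handled at the level of the individual exemplars drawn for each trial, which this sketch omits.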

The audiovisual pairs were presented for 1500 ms. Every trial started with the presentation of a sound concurrently with a mask. After 1000 ms, the mask was replaced by either a neutral or an emotional face, which was presented with the continuing sound for another 500 ms. Stimuli were presented with the software Presentation 14.2 (http://www.neurobs.com/) and displayed to the participants via MR-compatible goggles. The task was to ignore the sound and the blurred face and to rate only the facial expression on an eight-point scale from extremely fearful to extremely happy. The rating scale was presented immediately after the audiovisual stimulus pair, and the response was given manually on an MR-compatible response pad. Half of the participants rated the faces from extremely fearful—left little finger (1) to extremely happy—right little finger (8), whereas the other half used an inverted rating scale [from extremely happy—left little finger (1) to extremely fearful—right little finger (8)]. The inter-stimulus interval, during which a blank black screen was shown, was jittered between 4000 and 6000 ms.
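The trial timing (1000 ms masked sound, then 500 ms face plus sound, followed by a 4000–6000 ms jittered blank interval) can be illustrated with a minimal sketch. The function name and the seed are ours, and the rating-screen duration is omitted for simplicity:

```python
import random

def trial_timeline(n_trials, seed=0):
    """Return stimulus-pair onsets (in ms) for trials consisting of a
    1500 ms audiovisual pair followed by a 4000-6000 ms jittered ISI."""
    rng = random.Random(seed)
    onsets = []
    t = 0.0
    for _ in range(n_trials):
        onsets.append(t)
        t += 1500.0                       # 1000 ms mask+sound, 500 ms face+sound
        t += rng.uniform(4000.0, 6000.0)  # jittered blank inter-stimulus interval
    return onsets

onsets = trial_timeline(180)
```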

After the fMRI experiment, subjects rated the valence and arousal of every sound and face on a nine-point valence scale ranging from very fearful to very happy and a nine-point arousal scale ranging from not at all arousing to very arousing. Moreover, the BDI-II (Hautzinger et al., 2006) and the Hamilton Depression Rating Scale (HAM-D; Williams, 1988) were used to assess symptom severity in patients and subclinical depression scores in healthy subjects. Furthermore, the German version of the Toronto Alexithymia Scale (TAS-D; Bagby et al., 1994) was administered to control for alexithymia in the control group.

fMRI data acquisition

Images were acquired on a Siemens Trio 3 T whole-body scanner (Erlangen, Germany) at the RWTH Aachen University Hospital using blood-oxygen-level-dependent contrast [gradient-echo EPI pulse sequence, TR = 2.2 s, in-plane resolution = 3.1 × 3.1 mm, 36 axial slices (3.1 mm thickness)] covering the entire brain. Four dummy images were acquired to allow for magnetic field saturation and discarded prior to further processing. Images were analysed using SPM8 (www.fil.ion.ucl.ac.uk/spm). EPI images were first corrected for head movement using a two-pass affine registration procedure, by which images were initially realigned to the first image and subsequently to the mean of the realigned images. Then, the mean EPI image of each subject was spatially normalized to the MNI single-subject template using the 'unified segmentation' approach (Ashburner and Friston, 2005). The resulting parameters of a discrete cosine transform, which define the deformation field necessary to move the subject's data into the space of the MNI tissue probability maps, were then combined with the deformation field transforming between the latter and the MNI single-subject template. The ensuing deformation was applied to the individual EPI volumes, which were thereby transformed into MNI single-subject space and resampled at 2 × 2 × 2 mm3 voxel size. The normalized images were spatially smoothed using an 8 mm FWHM Gaussian kernel to meet the statistical requirements of the general linear model and to compensate for residual macro-anatomical variation across subjects.
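The 8 mm FWHM smoothing kernel corresponds to a Gaussian standard deviation via the standard conversion sigma = FWHM / (2·sqrt(2·ln 2)) ≈ FWHM / 2.3548. A small sketch (the function name is ours, not SPM's):

```python
import math

def fwhm_to_sigma(fwhm_mm):
    """Convert a Gaussian kernel's full width at half maximum to its
    standard deviation: sigma = FWHM / (2 * sqrt(2 * ln 2))."""
    return fwhm_mm / (2.0 * math.sqrt(2.0 * math.log(2.0)))

sigma = fwhm_to_sigma(8.0)  # ~3.40 mm for the 8 mm kernel used here
```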

Statistical analysis

Behavioural data were analysed off-line using IBM SPSS 19.0.0. All data, except for the valence ratings of neutral sounds, were confirmed to be normally distributed. For the normally distributed data, ANOVAs were calculated, whereas the valence ratings of neutral sounds were analysed with a Mann–Whitney U-test (because of the violation of normality). Violations of sphericity were corrected with the Greenhouse–Geisser or Huynh–Feldt correction where appropriate. Post hoc analyses, corrected for multiple comparisons, were calculated for significant main effects.
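Because the neutral-sound valence ratings violated normality, a rank-based Mann–Whitney U-test was used for that comparison. As an illustration of what this statistic computes, here is a minimal pure-Python sketch (not the SPSS routine used in the study), with midranks assigned to tied values:

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U from rank sums, using midranks for tied values.
    Returns the smaller of U1 and U2."""
    pooled = [(v, 0) for v in x] + [(v, 1) for v in y]
    pooled.sort(key=lambda p: p[0])
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1
        for k in range(i, j):
            ranks[k] = (i + j + 1) / 2.0  # average of the 1-based ranks i+1..j
        i = j
    rank_sum_x = sum(r for r, (_, grp) in zip(ranks, pooled) if grp == 0)
    n1, n2 = len(x), len(y)
    u1 = rank_sum_x - n1 * (n1 + 1) / 2.0
    return min(u1, n1 * n2 - u1)
```

The significance of U would then be assessed against its null distribution (or a normal approximation for larger samples), which SPSS handles internally.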

The imaging data were analysed using a general linear model as implemented in SPM8. Each experimental condition (fearful face/scream, fearful face/yawn, fearful face/laugh, neutral face/scream, neutral face/yawn, neutral face/laugh, happy face/scream, happy face/yawn and happy face/laugh) as well as the response event was separately modelled in the first-level (single-subject) time-series analysis by a boxcar reference vector convolved with a canonical haemodynamic response function and its first-order temporal derivative. Additionally, low-frequency signal drifts were filtered out using a cut-off period of 128 s. Parameter estimates were subsequently calculated for each voxel using weighted least squares to provide maximum-likelihood estimators based on the temporal autocorrelation of the data (Kiebel et al., 2003). No global scaling was applied. For each subject, simple main effects for each of the nine experimental conditions and the response were computed by applying appropriate baseline contrasts. These individual first-level contrasts were then entered into a second-level group analysis using an ANOVA (factor: condition, separately per group; blocking factor: subject) employing a random-effects model. In modelling the variance components, violations of sphericity were accommodated by modelling non-independence across images from the same subject and allowing unequal variances between conditions and subjects, using the standard implementation in SPM8. Based on the parameter estimates of this second-level analysis, separate t-contrasts were calculated for every differential main effect and interaction. The resulting SPM{T} maps were then thresholded at a cluster-level FWE rate of P < 0.05 (cluster-forming threshold: P < 0.001 at the voxel level).
For the group comparisons of the IC and EPC effect, these cluster-level corrected maps were further inclusively masked by the respective effect in the control (IC controls > IC patients and EPC controls > EPC patients) or patient (IC controls < IC patients and EPC controls < EPC patients) group.
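The core of the first-level model described above, a boxcar stimulus function convolved with a canonical haemodynamic response function (HRF), can be sketched in pure Python. This is an illustrative approximation using SPM-like double-gamma defaults, not SPM8's actual implementation, and it omits the temporal derivative, high-pass filtering and autocorrelation modelling:

```python
import math

def double_gamma_hrf(t):
    """Canonical double-gamma HRF with SPM-like defaults (response gamma
    of shape 6, undershoot gamma of shape 16, undershoot ratio 1/6);
    t in seconds."""
    def gamma_pdf(t, shape):
        if t <= 0.0:
            return 0.0
        return t ** (shape - 1.0) * math.exp(-t) / math.gamma(shape)
    return gamma_pdf(t, 6.0) - gamma_pdf(t, 16.0) / 6.0

def boxcar_regressor(onsets, duration, tr, n_scans, dt=0.1):
    """Build a boxcar stimulus function on a fine time grid, convolve it
    with the HRF, and sample the result at each scan (every TR seconds)."""
    n = int(round(n_scans * tr / dt))
    stim = [0.0] * n
    for onset in onsets:
        start = int(round(onset / dt))
        stop = min(n, int(round((onset + duration) / dt)))
        for i in range(start, stop):
            stim[i] = 1.0
    # 32 s of HRF support is ample for this kernel
    hrf = [double_gamma_hrf(i * dt) for i in range(int(round(32.0 / dt)))]
    conv = [sum(stim[i - k] * hrf[k] * dt for k in range(min(i + 1, len(hrf))))
            for i in range(n)]
    return [conv[int(round(s * tr / dt))] for s in range(n_scans)]

# e.g. two 1.5 s audiovisual events at 0 s and 20 s, TR = 2.2 s
reg = boxcar_regressor(onsets=[0.0, 20.0], duration=1.5, tr=2.2, n_scans=20)
```

The regressor rises after each event onset and peaks roughly 6 s later, reflecting the haemodynamic lag that the GLM models.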

Additionally, two further second-level models were estimated for the patient group only. For the first one, we computed, for each subject, the contrast for IC of emotion valence and a contrast of IC of emotion presence. These first-level contrasts were then fed into two different second-level analyses (one with the IC of emotion valence contrast and one with the emotion presence congruence contrast). Additionally, the (mean centred) BDI scores for individual patients were included as a covariate.

Ensuing activations were anatomically localized using version 1.6b of the SPM Anatomy toolbox (Eickhoff et al., 2006, 2007; www.fz-juelich.de/ime/spm_anatomy_toolbox; Eickhoff et al., 2005).

RESULTS

Behavioural data

On-line rating

An ANOVA with the factors sound (scream/yawn/laugh), face (fearful/neutral/happy) and group (patient/control) and the dependent variable 'on-line rating' revealed significant main effects of sound (F2,80 = 6.21, P < 0.05) and face (Greenhouse–Geisser corrected: F1,51 = 342.55, ε = 0.63, P < 0.05). However, neither the sound × face interaction (F4,160 = 0.1, NS) nor the main effect of group (F1,40 = 0.15, NS) nor any interaction of sound, face or both factors with group was significant (sound × group: F2,80 = 2.03, NS; face × group: F2,80 = 1.09, NS; sound × face × group: F4,160 = 0.64, NS). As audiovisual emotional integration has never been investigated in depression before, we conducted further exploratory analyses focusing only on congruent and incongruent conditions of emotional valence. For that purpose, two ANOVAs with the factors congruence (congruent/incongruent) and group (patient/control) were calculated, with the ratings of fearful and happy faces, respectively, as dependent variables. The ANOVA analysing IC effects for fearful faces did not reveal any significant effects (congruence: F1,40 = 1.40, NS; group: F1,40 = 0.46, NS; congruence × group: F1,40 = 0.78, NS). In contrast, the ANOVA testing effects on happy faces showed a trend towards significance for the main effect of congruence (F1,40 = 4.04, P < 0.1) as well as a tendency towards a congruence × group interaction (F1,40 = 3.10, P < 0.1). Analysing this interaction trend with separate t-tests revealed that healthy subjects tended to rate happy faces differently in congruent and incongruent conditions (t20 = −2.83, P < 0.05), whereas patients rated happy faces similarly in both (t20 = −0.17, NS) (Figure 1).

Fig. 1.

Fig. 1

Behavioural ratings of happy faces accompanied by congruent and incongruent sounds in patients and controls. Whereas healthy subjects tended to rate faces paired with a congruent sound differently from those paired with an incongruent sound, patients rated happy faces similarly in incongruent and congruent conditions. Note that this interaction only showed a trend towards significance.

Off-line rating

For the analyses of the off-line ratings, four ANOVAs and a Mann–Whitney U-test were calculated. Two ANOVAs analysed the arousal ratings for faces and sounds, and one the valence ratings of faces.

For the off-line valence ratings of sounds, because the neutral sound condition violated normal distribution, an ANOVA excluding the neutral condition was calculated, together with a Mann–Whitney U-test comparing the ratings of neutral sounds between groups. The first three ANOVAs (valence rating faces, arousal rating faces and arousal rating sounds) included the factors emotion (happy/neutral/fear) and group (patient/control). All three ANOVAs yielded significant effects of emotion (valence faces: Greenhouse–Geisser corrected: F1,56 = 260.47, ε = 0.70, P < 0.05; arousal faces: Huynh–Feldt corrected: F2,68 = 13.12, ε = 0.85, P < 0.05; arousal sounds: F2,80 = 59.61, P < 0.05). Post hoc analyses revealed that all types of faces differed from each other in the valence rating and that emotional sounds were rated as more arousing than neutral ones. For the faces, only fearful faces were rated as more arousing than neutral and happy ones, whereas happy faces did not differ significantly in arousal from neutral faces. None of the analyses revealed significant main effects of group (valence faces: F1,40 = 0.50, NS; arousal faces: F1,40 = 0.05, NS; arousal sounds: F1,40 = 0.15, NS) or a group × stimulus interaction (valence faces: F2,80 = 0.08, NS; arousal faces: F2,80 = 0.05, NS; arousal sounds: F2,80 = 1.06, NS). The ANOVA on the valence ratings of sounds (excluding the neutral condition) again revealed a significant effect of emotion (F1,40 = 1509.08, P < 0.05) but no main effect of group (F2,80 = 0.09, NS) or interaction with group (F1,40 = 1.70, NS). The Mann–Whitney U-test comparing the ratings of neutral sounds between groups revealed no significant difference (U = 204.5, NS).
In summary, significant emotion effects were found across groups for the arousal and valence ratings of faces and sounds, but the groups did not differ in any of these ratings.

Imaging data

Group effects over all conditions

The comparison between the two groups over all conditions revealed a significantly greater response in patients compared with controls in the left fusiform gyrus [FG2; Caspers et al., 2012; x = −40, y = −70, z = −12; P < 0.05 cluster-level FWE (cFWE) corrected, k > 280]. In contrast, the reverse comparison (patients < controls) did not yield any significant results.

IC of emotion valence

Main effect

Over both groups, IC of emotion valence was associated with significantly elevated activity in middle occipital gyrus, right SMA, left precuneus, right lateral prefrontal cortex, midcingulate cortex (MCC), left cerebellum and right superior parietal cortex (P < 0.05 cFWE corrected, k > 220; Table 3) in emotionally incongruent compared with congruent conditions. In a previous study in healthy subjects (Müller et al., 2011), we also found right SMA, MCC and prefrontal cortex to be active in response to IC of emotion valence. Figure 2 illustrates the contrast estimates for patients and controls for congruent and incongruent conditions in these regions, demonstrating that across groups activity is higher in emotionally incongruent than in congruent conditions. Comparing the effect sizes of the previous study with the current results shows consistently higher effect sizes in the previous study.

Table 3.

Main effects over both groups for IC of emotion valence

Region                                         x    y    z   Z score   No. of voxels
Right middle occipital cortex                 38  −68   26   5.73      4432
Right SMA—area 6                               2    2   63   4.87       228
                                               9  −15   74   4.84       278
Medial superior parietal lobe (SPL)—area 5 m   2  −39   50   3.99       310
Right medial prefrontal cortex                 8   56    5   4.42       759
Midcingulate (MCC)                             5  −12   41   4.40       280
                                              −5    0   33   4.04       383
Right cerebellum                               3  −42  −15   4.38       230
Left precuneus                               −11  −59   29   4.50       637

Coordinates are in MNI space; mapping of cytoarchitectonic areas: 6 m—(Geyer, 2004); 5 m—(Scheperjans et al., 2008).

Fig. 2.

Fig. 2

Contrast estimates for patients and controls in MCC (left), right SMA (middle) and prefrontal cortex (right) for congruent and incongruent conditions. Fear C: fearful face paired with congruent sound (scream); fear IC: fearful face paired with incongruent sound (laugh); happy C: happy face paired with congruent sound (laugh); happy IC: happy face paired with incongruent sound (scream). Values of the y-axes differ between the three graphs.

Interaction with group

IC of emotion valence was compared between the groups, testing both for greater IC effects in controls than in patients (patients < controls) and for decreased IC effects in controls (patients > controls) in separate contrasts. To ensure that controls indeed showed increased activity in the incongruent compared with the congruent condition, the interaction IC patients < IC controls was masked by the IC effect in the control group. Conversely, the interaction IC patients > IC controls was masked by the IC effect in the patient group.

At the cFWE-corrected level, there was no significant interaction between group (patients < controls) and the effect of IC of emotion valence across both happy and fearful face conditions. Therefore, comparisons specific to the two facial emotions were carried out, revealing significant interaction effects for the happy facial condition in left inferior parietal cortex (IPC) (PGp; Caspers et al., 2006; x = −39, y = −76, z = 33, P < 0.05 cFWE corrected, k > 240; Figure 3A) and left inferior frontal gyrus (IFG) (x = −45, y = 22, z = −10, P < 0.05 cFWE corrected, k > 240; Figure 3A). Figure 3B shows that controls exhibited stronger deactivation than patients when happy faces were paired with a congruent sound. In contrast, no interaction with group was found for IC of emotion valence in the fearful face condition, and no such group difference emerged for the incongruent happy or the congruent and incongruent fearful conditions.

Fig. 3.

Fig. 3

(A) Group × Condition interaction for IC of emotion valence in left IFG and IPC (PGp). (B) Contrast estimates in left PGp for congruent and incongruent happy (left) and fearful (right) face conditions for patients (blue) and controls (red).

Furthermore, the reverse interaction (patients > controls) did not reveal any significant clusters, neither for the contrast across happy and fearful faces nor for the contrasts separated by emotion.

Correlation with BDI scores

Based on the results of the interaction of IC of emotion valence for happy faces, we asked whether activity in these regions is related to BDI scores in the patient group. Therefore, an analysis of the IC effect of emotion valence for happy faces with patients' BDI scores as a covariate (masked inclusively by the activity revealed by the group interaction for the IC effect for happy faces) was carried out. This contrast did not reveal any significant correlations with BDI.

Congruence of emotion presence (EPC)

Main effect

Comparing purely emotional conditions (fear/scream, fear/laugh, happy/scream and happy/laugh) with conditions in which a neutral stimulus was presented in one channel (fear/yawn, neutral/scream, neutral/laugh and happy/yawn) across both groups revealed significantly elevated activity at the cFWE-corrected level (k > 230) in bilateral superior temporal gyrus (x = 51, y = −8, z = −4 and x = −48, y = −12, z = −10), left middle temporal gyrus (x = −64, y = −30, z = −10), bilateral IFG (overlapping areas 44 and 45; Amunts et al., 1999; x = 57, y = 18, z = 6 and x = −56, y = 15, z = −6), MCC (x = −6, y = −22, z = 38), right precentral gyrus (x = 38, y = 4, z = 45) and right superior medial gyrus (x = 8, y = 52, z = 36). As the contrast estimates in bilateral superior temporal gyrus (Supplementary Figure S1) indicated that the effect in this region might be driven by decreased activity in the yawn conditions only, the main effect of congruence of emotion presence (EPC) was recomputed, masked (exclusively) by the contrast of emotional vs neutral sounds (laugh ∩ scream > yawn). As a result, the superior temporal clusters were no longer significant.

Since the EPC effect was found in the left amygdala in our previous study in healthy subjects (Müller et al., 2011), a small-volume correction based on the coordinates of that study (x = −20, y = −3, z = −21, search volume 6 mm) was carried out. This analysis revealed a significant effect over both groups in the amygdala (x = −21, y = −2, z = −26, P < 0.05 SVC), with attenuated activity in conditions in which an emotional stimulus was paired with a neutral one compared with emotional stimuli in both channels.

Interaction with group

Again, to ensure that controls indeed showed decreased activation in conditions in which a neutral stimulus was presented compared with purely emotional conditions, the interaction of EPC and group was masked inclusively by the EPC effect in controls. Comparing the EPC effect between groups revealed a significant interaction in right superior temporal gyrus extending into posterior superior temporal sulcus (pSTS) (x = 54, y = −39, z = 12, P < 0.05 cFWE corrected, k > 310; Figure 4) and MCC (x = −3, y = 3, z = 36, P < 0.05 cFWE corrected, k > 310; Figure 4). Whereas healthy controls showed decreased activity in these regions when a neutral stimulus was paired with an emotional one compared with conditions with emotional stimuli in both channels, patients with depression did not show this pattern (Figure 4).

Fig. 4.

Fig. 4

Group × Condition interaction for IC of emotion presence in STG/pSTS and MCC, and contrast estimates in STG/pSTS for purely emotional conditions (Ee) and conditions in which a neutral stimulus was paired with an emotional one (En, Ne). Ee, emotional face/emotional sound; En, emotional face/neutral sound; Ne, neutral face/emotional sound.

Correlation with BDI scores

Based on the interaction of congruence of emotion presence, we investigated whether activity in posterior superior temporal cortex and MCC correlates with BDI scores in the patient group. Thus, EPC-related activity was analysed in the patient group with BDI as a covariate (masked inclusively by the activity revealed by the group interaction for the EPC effect). A significant negative correlation between EPC-related activity and BDI scores was found in the right STG (x = 56, y = −36, z = 15, P < 0.05 SVC corrected; Figure 5).

Fig. 5.

Fig. 5

Negative correlation of EPC-related activity in superior temporal cortex with BDI scores in the patient group.

DISCUSSION

Here, an emotional audiovisual integration paradigm was used to investigate the neural responses to two different kinds of emotional (in)congruence (IC of emotional valence, congruence of emotion presence) in depression. Despite similar behavioural performance and arousal and valence ratings in both groups, patients with depression failed to deactivate left inferior frontal and inferior parietal cortex in response to congruent happy audiovisual stimulus pairs. Moreover, patients compared with controls failed to attenuate activity in right posterior superior temporal gyrus (pSTG)/STS and midcingulate cortex (MCC) when neutral information in one channel was paired with emotional information in the other. This activity was, furthermore, negatively correlated with symptom severity. The results of this study therefore indicate that, despite intact behavioural responses, patients with depression show dysregulated neural activity during audiovisual integration of emotional and neutral information.

Behavioural ratings

Patients did not show any significant differences from controls in their behavioural responses, neither in the on-line rating of faces while instructed to ignore the concurrent sounds nor in the off-line ratings in which valence and arousal had to be rated independently for faces and sounds. To our knowledge, there are no previous studies investigating crossmodal emotional integration in patients with depression. The results of our study thus indicate that, during audiovisual integration, patients and controls are similarly influenced by voices when rating the emotional expression of a face. Nevertheless, we did find a trend towards a congruence × group interaction for happy facial stimulus pairs when analysing IC of emotion valence effects separately for fearful and happy facial conditions. This effect remains to be tested and replicated in a different and larger sample of patients and controls.

The valence and arousal ratings of faces and sounds following the fMRI experiment also did not reveal any group differences, indicating that perception and subjective experience of sounds and faces are not impaired in our depression group. These results thus corroborate neither the assumed negativity bias nor reports of aberrant recognition of emotional and neutral stimuli in depression (Gur et al., 1992; Persad and Polivy, 1993; Leppanen et al., 2004; Surguladze et al., 2004; Csukly et al., 2009; Naranjo et al., 2011). Based on these studies, one would have hypothesized that patients would be more influenced by screams and would rate (neutral) stimuli as more fearful than healthy controls. The failure to support these hypotheses might be due to methodological differences. Surguladze et al. (2004), for example, demonstrated that patients and controls differed in their emotion-labelling performance only when the face was presented for a short duration (100 ms), whereas for a longer presentation time (2000 ms) no group difference could be found. In the present study, the presentation times both for the stimulus pairs in the on-line experiment and for the stimuli in the rating after scanning were rather long (1500 ms each), which may explain the divergent results. Moreover, our results are well in line with findings of no group differences in accuracy and reaction times for the labelling of emotional stimuli (Kan et al., 2004; Gollan et al., 2008; Bourke et al., 2010).

Furthermore, the lack of difference between groups might also be due to the use of fearful, rather than sad, stimuli in this study. It has been suggested that the negativity bias in depression might not be a general attentional bias for all kinds of ‘negative’ stimuli. Rather, MDD patients may show a bias towards stimuli (or dimensions) more specifically related to depressive symptoms, as demonstrated by group differences in accuracy and error rates only when labelling sad, but not, for example, fearful stimuli (Leppanen et al., 2004; Ellenbogen and Schwartzman, 2009; Milders et al., 2010; Hu et al., 2012). Additionally, Hu et al. (2012) reported that only depression-related words (hopeless and sad) led to greater interference effects in depressed patients than in healthy controls when labelling positive faces; no such differential interference effect was found for depression-unrelated words. Therefore, the use of screams as distractors in the present paradigm might have led to equal interference of sounds in patients and controls. Our data thus provide evidence that the processing and attribution bias in depression is not a general effect of ‘negative’ stimuli such as fear. In this context, further studies in patients with MDD and comorbid anxiety disorder using both sad and fearful (multimodal) stimuli may shed further light on the specificity of a presumed negativity bias.

Imaging data

Group effect across all conditions

Independent of IC of emotion valence and congruence of emotion presence, patients showed increased activity compared with controls in the left fusiform gyrus across all conditions. We would conjecture that this increased activation may indicate greater distraction by the sounds and consequently higher cognitive effort when evaluating faces in the patient group. Studies investigating patients with depression during unimodal face processing have also found increased activity in the fusiform gyrus compared with controls (for review, see Stuhrmann et al., 2011). However, this increase was previously found only in response to sad faces, whereas decreased activity was reported in response to happy faces. Our results now extend studies of unimodal emotion processing in depression by demonstrating that, during multimodal emotion processing, patients with depression show hyperactivity in the left fusiform gyrus compared with healthy controls independent of emotional expression. This increased activity in a region important for facial processing (Kanwisher et al., 1997; Kanwisher and Yovel, 2006) might therefore indicate increased visual attention across all conditions, independent of which face–sound pair was presented. From a different perspective, this discrepancy again highlights that current findings from unimodal processing may not necessarily translate into similar differences in a multimodal and hence presumably more naturalistic setting.

Group effects of IC of emotion valence

Across both groups, elevated activity in emotionally incongruent compared with congruent conditions was found in the middle occipital gyrus, right SMA, left precuneus, right lateral prefrontal cortex (PFC), MCC, left cerebellum and right superior parietal cortex. This network overlaps with the IC network of our previous study (Müller et al., 2011), in which we used the same paradigm in healthy subjects and found increased activity in SMA, MCC and PFC in incongruent conditions. The present network is, however, slightly broader than the previous one, possibly because we used the very conservative voxel-level FWE correction in the previous study. Furthermore, calculation of effect sizes for the previous and current IC effects revealed consistently stronger effects for the former. Even though we did not find an interaction with group in those regions, this result is not surprising, as the IC effect in the previous study was based on data from healthy subjects only, whereas the current effects were calculated across patients and controls, possibly leading to more variability. In addition, Klasen et al. (2011) also reported a similar fronto-parietal network (and the cerebellum) to be activated when incongruent audiovisual information is processed. Our results, demonstrating an IC of emotion valence effect across both groups and no interaction effect in these regions, therefore indicate intact recruitment of regions necessary for response inhibition (Sumner et al., 2007; Nachev et al., 2008), conflict detection (Botvinick et al., 2004; Carter and van Veen, 2007) and conflict resolution (Durston et al., 2003; Egner and Hirsch, 2005) in patients with depression. Furthermore, no significant group interaction with the IC of emotion valence effect was found across happy and fearful conditions, or for the fearful face conditions separately. In contrast, there was a group × condition interaction in the left IFG and left IPC: whereas healthy controls showed increased activity in these regions in incongruent (happy face + scream) compared with congruent (happy face + laugh) trials, patients showed no difference between incongruent and congruent audiovisual stimulus pairs, mainly due to a failure to deactivate IFG and IPC in the congruent condition. That is, patients did not attenuate activity in these regions when a congruent happy pair was presented and therefore processed congruent and incongruent stimulus pairs in a similar way. This fits with the trend we found in the behavioural rating of happy faces, indicating a difference between the ratings of congruent and incongruent conditions in the control group but not in the patient group.

The left IFG has been associated with semantic processing (Binder et al., 2009) and interference monitoring (Blasi et al., 2006). It has also repeatedly been reported as an important region for the inhibition of distractors (Blasi et al., 2006; Dolcos and McCarthy, 2006; Ovaysikia et al., 2011). The finding that, in contrast to healthy controls, patients did not show congruence-dependent differences in IFG activity for happy face trials might therefore indicate that patients with depression are more distracted than healthy controls by emotional sounds, even when these are congruent with the (visual) target. That is, whereas in healthy controls congruent (positive) emotional stimuli lead to deactivation of the IFG, as there is no need to inhibit the distractor, patients do not show this deactivation pattern and might therefore require cognitive effort to ignore and inhibit even the congruent sound stimulus. Importantly, we found dysregulation of the IFG only for the happy facial IC effect and not for the fearful face condition. This is in line with the results of Ovaysikia et al. (2011), who likewise reported increased activity in incongruent compared with congruent conditions only for happy faces. Some authors have suggested that the negativity bias in depression is rather the absence of a normal positivity bias (Elliott et al., 2011). The fact that patients with depression fail to deactivate the IFG only in response to purely happy conditions (laugh and happy face) might be associated with this view of an absent positivity bias. In particular, whereas there is no difference between patients and controls when stimulus pairs contain fearful information in at least one channel, the groups differ when positive information is conveyed in both channels. This may indicate that possible problems in unimodal processing (an absent positivity bias) become apparent in multimodal settings only when the information in both sensory channels is positive, whereas biased processing no longer has any effect when positive information is presented in only one channel.

This specificity of dysregulated IC processing in depression to happy facial conditions was also found for the caudal part of the IPC (area PGp). This region has been reported to be involved in language processing (Hall et al., 2005; Binder et al., 2009; Price, 2010; Clos et al., 2014) and, in this context, has been suggested to act as a high-level supramodal conceptual integration area (Binder et al., 2009). This idea receives further support from connectivity studies reporting substantial connections from parietal area PGp to auditory and occipital areas (Caspers et al., 2011). The current result, revealing dysregulated PGp activity during congruence processing in depression, therefore suggests a role of PGp not only in the supramodal integration of language but also when emotional stimuli from different modalities converge. In addition, the posterior IPC has been reported to activate in various forms of conflict processing, for example in Simon and Stroop tasks (Liu et al., 2004), but also when expected and retrieved memory information do not match (O’Connor et al., 2010). Moreover, Ochsner et al. (2002) found this region to be part of the reappraisal network. In sum, these results indicate that the IPC plays a major role in directing attention to stimuli that need further processing and cognitive effort.

Healthy subjects in our study may therefore have decreased activity in PGp and IFG in response to congruent audiovisual stimulus pairs because the stimuli matched and required no further attention or cognitive effort. In contrast, patients with depression failed to adopt this mechanism and kept processing and attending to the presented stimuli, even though the stimuli were congruent and the message clear.

Group effects of congruence of emotion presence

In both depressed and control subjects, increased activity in purely emotional conditions compared with conditions in which a neutral stimulus was presented in one channel was found in a broad network of frontal and temporal brain regions including the left amygdala. In contrast to this preserved congruence of emotion presence (EPC) effect in the amygdala, however, differences between patients and controls in response to EPC were found in the right posterior superior temporal gyrus extending into the superior temporal sulcus (pSTG/STS) as well as in the MCC. Whereas controls showed decreased activation when emotional and neutral stimuli were paired compared with conditions with emotional stimuli in both channels, patients with depression did not show this pattern. Rather, they even responded with increased activity to conditions in which neutral sounds or faces were presented, indicating an inadequate integration of neutral stimuli.

Dysregulation of the MCC and STG in response to emotional stimuli has already been reported in depression, but results are rather inconsistent, with some studies showing increased and others decreased activity (Elliott et al., 2011; Stuhrmann et al., 2011; Diener et al., 2012). In the context of neutral stimulation, Bertocci et al. (2012) reported that patients with depression activated the MCC in response to neutral faces, whereas healthy subjects showed a deactivation in this region. This finding, together with our result that patients responded to neutral/emotional audiovisual stimulus pairs with activation rather than the physiological deactivation in the MCC, indicates that depression goes along with increased activity in response to neutral stimulation. Furthermore, the negative correlation between EPC-related activity and BDI scores in the patient group signifies that more depressive symptoms go along with a smaller decrease in activity when an emotional stimulus is paired with a neutral one. Thus, the higher the symptomatology, the more patients fail to adequately integrate neutral stimuli, which may in turn lead to higher arousal when they are confronted with stimuli that would normally initiate an attenuation of activity and therefore an all-clear signal.

The posterior part of the superior temporal gyrus/sulcus has been reported to play a major role in audiovisual integration (Calvert et al., 2000; Beauchamp et al., 2004; Kreifelts et al., 2007), but also in evaluating the social relevance of stimuli (Scharpf et al., 2010; Kaiser et al., 2011). Furthermore, Kreifelts et al. (2007) demonstrated emotional sensitivity of the pSTS, with higher activity to emotional than to neutral stimulation. Similar to the pSTG/STS, the MCC is involved both in the evaluation of emotional information and in allocating attentional resources to behaviourally relevant stimuli (Stevens et al., 2011). Dysregulation of the MCC and pSTG/STS in depressed individuals may therefore lead to biased salience processing of neutral stimuli, such that both emotional and neutral stimulation lead to arousal and readiness to act. This matches clinical symptoms of psychomotor disturbance in depression: elevated arousal in response to both emotional and neutral stimuli, as found in this study, might be associated with the agitation and restlessness often observed in this disorder (WHO, 2010). Increased arousal and biased salience processing of neutral stimuli may additionally lead to increased scanning of the environment and therefore heightened visual attention. This suggestion of constant visual attention fits with the observation of increased fusiform activation across all conditions in patients compared with controls.

Interestingly, patients showed dysregulated emotion presence congruence activity in regions previously not associated with this effect (Müller et al., 2011), whereas the amygdala still responded with an all-clear signal in patients with depression. LeDoux (1996) suggested that information reaches the amygdala via two different routes: the fast, direct subcortical route via the thalamus, and the slower cortical pathway, in which information is processed in the cortex before being forwarded to the amygdala. With regard to the latter route, a recent DCM study in healthy subjects (Müller et al., 2012a) showed that the pSTS plays a major role in gating information transfer from the fusiform gyrus to the amygdala.

Our results may thus indicate that processing of emotional signals through the direct subcortical route via the thalamus is not impaired in patients, as indicated by an emotion presence congruence effect in the amygdala across groups. In contrast, processing through the slower cortical pathway seems to be dysregulated, as demonstrated by the interaction of congruence of emotion presence (EPC) and group in STG/pSTS and MCC as well as the correlation of EPC-related activity with BDI scores. It might therefore be speculated that the intact subcortical signal in response to neutral stimulation counteracts or attenuates excessive impulses derived from cortical areas. If and how these two routes interact cannot be determined by the current experiment and remains to be clarified in future studies.

General discussion and limitations

Studies and models of emotion recognition and processing in depression have suggested that deficits found in unimodal emotion processing may contribute to the understanding of the characteristic symptomatology of this patient group. Research has thus mainly focused on unimodal processing, leading to models of emotion recognition in depression that postulate, on the one hand, a negativity (or absent positivity) bias and, on the other, a general deficit in recognizing emotions (Yoon et al., 2009; Elliott et al., 2011; Kohler et al., 2011; Naranjo et al., 2011). One might speculate that these unimodal deficits simply add up in multimodal settings, but the results of this study indicate that it is not that simple. Rather, deficits in multimodal emotion processing seem to involve additional, integrative processes beyond the mere accumulation of unimodal impairments: deficits in multimodal facial emotion processing depend on the emotional information conveyed in the other channel. Our data therefore support neither a specific deficit in processing positive or negative stimuli nor a general impairment in emotion processing. Rather, they suggest that even if biases in unimodal affect recognition exist, they may not fully explain the affective symptomatology of depression, as in more naturalistic (multimodal) settings deficits cannot simply be reduced to specific biases.

It is important to note that all but one patient received antidepressant medication, which might have influenced the results by modulating brain activity in the patient group. However, treatment studies suggest that antidepressants normalize emotion recognition problems and dysfunctional brain activity during emotion processing (Anderson et al., 2011; Hoflich et al., 2012), which makes it rather unlikely that the current group differences are due to medication effects; rather, the effects might even be stronger in unmedicated subjects. Nevertheless, the current results should be understood as applying only to medicated patients, as the influence of antidepressants cannot be fully ruled out.

CONCLUSION

In summary, this study demonstrates aberrant neural responses during audiovisual processing in depression, indicated by a failure to deactivate regions involved in inhibition and attentional control when the emotional message of audiovisual stimulus pairs is clear. Furthermore, missing all-clear signals in pSTG/STS and MCC when neutral information was paired with emotional stimuli point to inefficient integration of neutral information, which is possibly related to symptoms such as agitation and restlessness in depression.

SUPPLEMENTARY DATA

Supplementary data are available at SCAN online.

Conflict of interest

None declared.

Supplementary Data

Acknowledgments

This study was supported by the Deutsche Forschungsgemeinschaft (DFG, IRTG 1328), by the National Institute of Mental Health (R01-MH074457), and the Helmholtz Initiative on systems biology (The Human Brain Model).

References

  1. Amunts K, Schleicher A, Burgel U, Mohlberg H, Uylings HB, Zilles K. Broca’s region revisited: cytoarchitecture and intersubject variability. The Journal of Comparative Neurology. 1999;412:319–41. doi: 10.1002/(sici)1096-9861(19990920)412:2<319::aid-cne10>3.0.co;2-7.
  2. Anderson IM, Shippen C, Juhasz G, et al. State-dependent alteration in face emotion recognition in depression. The British Journal of Psychiatry. 2011;198:302–8. doi: 10.1192/bjp.bp.110.078139.
  3. Ashburner J, Friston KJ. Unified segmentation. Neuroimage. 2005;26:839–51. doi: 10.1016/j.neuroimage.2005.02.018.
  4. Bagby RM, Taylor GJ, Parker JD. The Twenty-item Toronto Alexithymia Scale-II. Convergent, discriminant, and concurrent validity. Journal of Psychosomatic Research. 1994;38:33–40. doi: 10.1016/0022-3999(94)90006-x.
  5. Beauchamp MS, Lee KE, Argall BD, Martin A. Integration of auditory and visual information about objects in superior temporal sulcus. Neuron. 2004;41:809–23. doi: 10.1016/s0896-6273(04)00070-4.
  6. Bertocci MA, Bebko GM, Mullin BC, et al. Abnormal anterior cingulate cortical activity during emotional n-back task performance distinguishes bipolar from unipolar depressed females. Psychological Medicine. 2012;42:1417–28. doi: 10.1017/S003329171100242X.
  7. Binder JR, Desai RH, Graves WW, Conant LL. Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cerebral Cortex. 2009;19:2767–96. doi: 10.1093/cercor/bhp055.
  8. Blasi G, Goldberg TE, Weickert T, et al. Brain regions underlying response inhibition and interference monitoring and suppression. The European Journal of Neuroscience. 2006;23:1658–64. doi: 10.1111/j.1460-9568.2006.04680.x.
  9. Botvinick MM, Cohen JD, Carter CS. Conflict monitoring and anterior cingulate cortex: an update. Trends in Cognitive Sciences. 2004;8:539–46. doi: 10.1016/j.tics.2004.10.003.
  10. Bourke C, Douglas K, Porter R. Processing of facial emotion expression in major depression: a review. The Australian and New Zealand Journal of Psychiatry. 2010;44:681–96. doi: 10.3109/00048674.2010.496359.
  11. Calvert GA, Campbell R, Brammer MJ. Evidence from functional magnetic resonance imaging of crossmodal binding in the human heteromodal cortex. Current Biology. 2000;10:649–57. doi: 10.1016/s0960-9822(00)00513-3.
  12. Carter CS, van Veen V. Anterior cingulate cortex and conflict detection: an update of theory and data. Cognitive, Affective & Behavioral Neuroscience. 2007;7:367–79. doi: 10.3758/cabn.7.4.367.
  13. Caspers J, Zilles K, Eickhoff SB, Schleicher A, Mohlberg H, Amunts K. Cytoarchitectonical analysis and probabilistic mapping of two extrastriate areas of the human posterior fusiform gyrus. Brain Structure & Function. 2012;218:511–26. doi: 10.1007/s00429-012-0411-8.
  14. Caspers S, Geyer S, Schleicher A, Mohlberg H, Amunts K, Zilles K. The human inferior parietal cortex: cytoarchitectonic parcellation and interindividual variability. Neuroimage. 2006;33:430–48. doi: 10.1016/j.neuroimage.2006.06.054.
  15. Caspers S, Eickhoff SB, Rick T, et al. Probabilistic fibre tract analysis of cytoarchitectonically defined human inferior parietal lobule areas reveals similarities to macaques. Neuroimage. 2011;58:362–80. doi: 10.1016/j.neuroimage.2011.06.027.
  16. Clos M, Langner R, Meyer M, Oechslin MS, Zilles K, Eickhoff SB. Effects of prior information on decoding degraded speech: an fMRI study. Human Brain Mapping. 2014;35(1):61–74. doi: 10.1002/hbm.22151.
  17. Csukly G, Czobor P, Szily E, Takacs B, Simon L. Facial expression recognition in depressed subjects: the impact of intensity level and arousal dimension. The Journal of Nervous and Mental Disease. 2009;197:98–103. doi: 10.1097/NMD.0b013e3181923f82.
  18. Cusi AM, Nazarov A, Holshausen K, Macqueen GM, McKinnon MC. Systematic review of the neural basis of social cognition in patients with mood disorders. Journal of Psychiatry & Neuroscience. 2012;37:154–69. doi: 10.1503/jpn.100179.
  19. Diener C, Kuehner C, Brusniak W, Ubl B, Wessa M, Flor H. A meta-analysis of neurofunctional imaging studies of emotion and cognition in major depression. Neuroimage. 2012;61:677–85. doi: 10.1016/j.neuroimage.2012.04.005.
  20. Dolcos F, McCarthy G. Brain systems mediating cognitive interference by emotional distraction. Journal of Neuroscience. 2006;26:2072–9. doi: 10.1523/JNEUROSCI.5042-05.2006.
  21. Durston S, Davidson MC, Thomas KM, et al. Parametric manipulation of conflict and response competition using rapid mixed-trial event-related fMRI. Neuroimage. 2003;20:2135–41. doi: 10.1016/j.neuroimage.2003.08.004.
  22. Egner T, Hirsch J. The neural correlates and functional integration of cognitive control in a Stroop task. Neuroimage. 2005;24:539–47. doi: 10.1016/j.neuroimage.2004.09.007.
  23. Eickhoff SB, Heim S, Zilles K, Amunts K. Testing anatomically specified hypotheses in functional imaging using cytoarchitectonic maps. Neuroimage. 2006;32:570–82. doi: 10.1016/j.neuroimage.2006.04.204.
  24. Eickhoff SB, Paus T, Caspers S, et al. Assignment of functional activations to probabilistic cytoarchitectonic areas revisited. Neuroimage. 2007;36:511–21. doi: 10.1016/j.neuroimage.2007.03.060.
  25. Eickhoff SB, Stephan KE, Mohlberg H, et al. A new SPM toolbox for combining probabilistic cytoarchitectonic maps and functional imaging data. Neuroimage. 2005;25:1325–35. doi: 10.1016/j.neuroimage.2004.12.034.
  26. Ellenbogen MA, Schwartzman AE. Selective attention and avoidance on a pictorial cueing task during stress in clinically anxious and depressed participants. Behaviour Research and Therapy. 2009;47:128–38. doi: 10.1016/j.brat.2008.10.021.
  27. Elliott R, Zahn R, Deakin JF, Anderson IM. Affective cognition and its disruption in mood disorders. Neuropsychopharmacology. 2011;36:153–82. doi: 10.1038/npp.2010.77.
  28. Fitzgerald PB, Laird AR, Maller J, Daskalakis ZJ. A meta-analytic study of changes in brain activation in depression. Human Brain Mapping. 2008;29:683–95. doi: 10.1002/hbm.20426.
  29. Geyer S. The microstructural border between the motor and the cognitive domain in the human cerebral cortex. Advances in Anatomy, Embryology, and Cell Biology. 2004;174:I–89. doi: 10.1007/978-3-642-18910-4.
  30. Goeleven E, De Raedt R, Baert S, Koster EH. Deficient inhibition of emotional information in depression. Journal of Affective Disorders. 2006;93:149–57. doi: 10.1016/j.jad.2006.03.007.
  31. Gollan JK, Pane HT, McCloskey MS, Coccaro EF. Identifying differences in biased affective information processing in major depression. Psychiatry Research. 2008;159:18–24. doi: 10.1016/j.psychres.2007.06.011.
  32. Gotlib IH, Krasnoperova E, Yue DN, Joormann J. Attentional biases for negative interpersonal stimuli in clinical depression. Journal of Abnormal Psychology. 2004;113:121–35. doi: 10.1037/0021-843X.113.1.121.
  33. Gur RC, Erwin RJ, Gur RE, Zwil AS, Heimberg C, Kraemer HC. Facial emotion discrimination: II. Behavioral findings in depression. Psychiatry Research. 1992;42:241–51. doi: 10.1016/0165-1781(92)90116-k.
  34. Gur RC, Sara R, Hagendoorn M, et al. A method for obtaining 3-dimensional facial expressions and its standardization for use in neurocognitive studies. Journal of Neuroscience Methods. 2002;115:137–43. doi: 10.1016/s0165-0270(02)00006-7.
  35. Hall DA, Fussell C, Summerfield AQ. Reading fluent speech from talking faces: typical brain networks and individual differences. Journal of Cognitive Neuroscience. 2005;17:939–53. doi: 10.1162/0898929054021175.
  36. Hautzinger M, Keller F, Kühner C. Beck-Depressions-Inventar (BDI-II). Revision. Frankfurt/Main: Harcourt Test Services; 2006.
  37. Hoflich A, Baldinger P, Savli M, Lanzenberger R, Kasper S. Imaging treatment effects in depression. Reviews in the Neurosciences. 2012;23:227–52. doi: 10.1515/revneuro-2012-0038.
  38. Hu Z, Liu H, Weng X, Northoff G. Is there a valence-specific pattern in emotional conflict in major depressive disorder? An exploratory psychological study. PLoS One. 2012;7:e31983. doi: 10.1371/journal.pone.0031983.
  39. Joormann J, Gotlib IH. Emotion regulation in depression: relation to cognitive inhibition. Cognition & Emotion. 2010;24:281–98. doi: 10.1080/02699930903407948.
  40. Kaiser MD, Shiffrar M, Pelphrey KA. Socially tuned: brain responses differentiating human and animal motion. Social Neuroscience. 2011; in press. doi: 10.1080/17470919.2011.614003.
  41. Kan Y, Mimura M, Kamijima K, Kawamura M. Recognition of emotion from moving facial and prosodic stimuli in depressed patients. Journal of Neurology, Neurosurgery, and Psychiatry. 2004;75:1667–71. doi: 10.1136/jnnp.2004.036079.
  42. Kanwisher N, McDermott J, Chun MM. The fusiform face area: a module in human extrastriate cortex specialized for face perception. Journal of Neuroscience. 1997;17:4302–11. doi: 10.1523/JNEUROSCI.17-11-04302.1997.
  43. Kanwisher N, Yovel G. The fusiform face area: a cortical region specialized for the perception of faces. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences. 2006;361:2109–28. doi: 10.1098/rstb.2006.1934.
  44. Kiebel SJ, Glaser DE, Friston KJ. A heuristic for the degrees of freedom of statistics based on multiple variance parameters. Neuroimage. 2003;20:591–600. doi: 10.1016/s1053-8119(03)00308-2.
  45. Klasen M, Kenworthy CA, Mathiak KA, Kircher TT, Mathiak K. Supramodal representation of emotions. Journal of Neuroscience. 2011;31:13635–43. doi: 10.1523/JNEUROSCI.2833-11.2011.
  46. Kohler CG, Hoffman LJ, Eastman LB, Healey K, Moberg PJ. Facial emotion perception in depression and bipolar disorder: a quantitative review. Psychiatry Research. 2011;188:303–9. doi: 10.1016/j.psychres.2011.04.019.
  47. Kreifelts B, Ethofer T, Grodd W, Erb M, Wildgruber D. Audiovisual integration of emotional signals in voice and face: an event-related fMRI study. Neuroimage. 2007;37:1445–56. doi: 10.1016/j.neuroimage.2007.06.020.
  48. LeDoux JE. The Emotional Brain: The Mysterious Underpinnings of Emotional Life. New York: Simon & Schuster; 1996.
  49. Leppanen JM, Milders M, Bell JS, Terriere E, Hietanen JK. Depression biases the recognition of emotionally neutral faces. Psychiatry Research. 2004;128:123–33. doi: 10.1016/j.psychres.2004.05.020.
  50. Liu X, Banich MT, Jacobson BL, Tanabe JL. Common and distinct neural substrates of attentional control in an integrated Simon and spatial Stroop task as assessed by event-related fMRI. Neuroimage. 2004;22:1097–106. doi: 10.1016/j.neuroimage.2004.02.033.
  51. Milders M, Bell S, Platt J, Serrano R, Runcie O. Stable expression recognition abnormalities in unipolar depression. Psychiatry Research. 2010;179:38–42. doi: 10.1016/j.psychres.2009.05.015.
  52. Müller VI, Cieslik EC, Turetsky BI, Eickhoff SB. Crossmodal interactions in audiovisual emotion processing. Neuroimage. 2012a;60:553–61. doi: 10.1016/j.neuroimage.2011.12.007.
  53. Müller VI, Habel U, Derntl B, et al. Incongruence effects in crossmodal emotional integration. Neuroimage. 2011;54:2257–66. doi: 10.1016/j.neuroimage.2010.10.047.
  54. Müller VI, Kellermann TS, Seligman SC, Turetsky BI, Eickhoff SB. Modulation of affective face processing deficits in schizophrenia by congruent emotional sounds. Social Cognitive and Affective Neuroscience. 2012b; in press. doi: 10.1093/scan/nss107.
  55. Nachev P, Kennard C, Husain M. Functional role of the supplementary and pre-supplementary motor areas. Nature Reviews. Neuroscience. 2008;9:856–69. doi: 10.1038/nrn2478.
  56. Naranjo C, Kornreich C, Campanella S, et al. Major depression is associated with impaired processing of emotion in music as well as in facial and vocal stimuli. Journal of Affective Disorders. 2011;128:243–51. doi: 10.1016/j.jad.2010.06.039.
  57. O'Connor AR, Han S, Dobbins IG. The inferior parietal lobule and recognition memory: expectancy violation or successful retrieval? Journal of Neuroscience. 2010;30:2924–34. doi: 10.1523/JNEUROSCI.4225-09.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  58. Ochsner KN, Bunge SA, Gross JJ, Gabrieli JD. Rethinking feelings: an FMRI study of the cognitive regulation of emotion. Journal of Cognitive Neuroscience. 2002;14:1215–29. doi: 10.1162/089892902760807212. [DOI] [PubMed] [Google Scholar]
  59. Oldfield RC. The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia. 1971;9:97–113. doi: 10.1016/0028-3932(71)90067-4. [DOI] [PubMed] [Google Scholar]
  60. Ovaysikia S, Tahir KA, Chan JL, DeSouza JF. Word wins over face: emotional Stroop effect activates the frontal cortical network. Frontiers in Human Neuroscience. 2011;4:234. doi: 10.3389/fnhum.2010.00234. [DOI] [PMC free article] [PubMed] [Google Scholar]
  61. Peron J, El Tamer S, Grandjean D, et al. Major depressive disorder skews the recognition of emotional prosody. Progress in Neuro-psychopharmacology & Biological Psychiatry. 2011;35:987–96. doi: 10.1016/j.pnpbp.2011.01.019. [DOI] [PubMed] [Google Scholar]
  62. Persad SM, Polivy J. Differences between depressed and nondepressed individuals in the recognition of and response to facial emotional cues. Journal of Abnormal Psychology. 1993;102:358–68. doi: 10.1037//0021-843x.102.3.358. [DOI] [PubMed] [Google Scholar]
  63. Price CJ. The anatomy of language: a review of 100 fMRI studies published in 2009. Annals of the New York Academy of Sciences. 2010;1191:62–88. doi: 10.1111/j.1749-6632.2010.05444.x. [DOI] [PubMed] [Google Scholar]
  64. Scharpf KR, Wendt J, Lotze M, Hamm AO. The brain’s relevance detection network operates independently of stimulus modality. Behavioural Brain Research. 2010;210:16–23. doi: 10.1016/j.bbr.2010.01.038. [DOI] [PubMed] [Google Scholar]
  65. Scheperjans F, Hermann K, Eickhoff SB, Amunts K, Schleicher A, Zilles K. Observer-independent cytoarchitectonic mapping of the human superior parietal cortex. Cerebral Cortex. 2008;18:846–67. doi: 10.1093/cercor/bhm116. [DOI] [PubMed] [Google Scholar]
  66. Stevens FL, Hurley RA, Taber KH. Anterior cingulate cortex: unique role in cognition and emotion. The Journal of Neuropsychiatry and Clinical Neurosciences. 2011;23:121–5. doi: 10.1176/jnp.23.2.jnp121. [DOI] [PubMed] [Google Scholar]
  67. Stuhrmann A, Suslow T, Dannlowski U. Facial emotion processing in major depression: a systematic review of neuroimaging findings. Biology of of Mood & Anxiety Disorders. 2011;1:10. doi: 10.1186/2045-5380-1-10. [DOI] [PMC free article] [PubMed] [Google Scholar]
  68. Sumner P, Nachev P, Morris P, et al. Human medial frontal cortex mediates unconscious inhibition of voluntary action. Neuron. 2007;54:697–711. doi: 10.1016/j.neuron.2007.05.016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  69. Surguladze SA, Young AW, Senior C, Brebion G, Travis MJ, Phillips ML. Recognition accuracy and response bias to happy and sad facial expressions in patients with major depression. Neuropsychology. 2004;18:212–8. doi: 10.1037/0894-4105.18.2.212. [DOI] [PubMed] [Google Scholar]
  70. Uekermann J, Abdel-Hamid M, Lehmkamper C, Vollmoeller W, Daum I. Perception of affective prosody in major depression: a link to executive functions? Journal of the International Neuropsychological Society. 2008;14:552–61. doi: 10.1017/S1355617708080740. [DOI] [PubMed] [Google Scholar]
  71. WHO. International Statistical Classification of Diseases and Related Health Problems. 2010. 10th Revision. Geneva: WHO. [Google Scholar]
  72. Williams JB. A structured interview guide for the Hamilton Depression Rating Scale. Archives of General Psychiatry. 1988;45:742–7. doi: 10.1001/archpsyc.1988.01800320058007. [DOI] [PubMed] [Google Scholar]
  73. Wittchen HU, Zaudig M, Fydrich T. Strukturiertes Klinisches Interview für DSM-IV. Göttingen: Hogrefe; 1997. [Google Scholar]
  74. Yoon KL, Joormann J, Gotlib IH. Judging the intensity of facial expressions of emotion: depression-related biases in the processing of positive affect. Journal of Abnormal Psychology. 2009;118:223–8. doi: 10.1037/a0014658. [DOI] [PMC free article] [PubMed] [Google Scholar]

Supplementary Materials

Supplementary Data

Articles from Social Cognitive and Affective Neuroscience are provided here courtesy of Oxford University Press
