Abstract
Although the emotions of other people can often be perceived from overt reactions (e.g., facial or vocal expressions), they can also be inferred from situational information in the absence of observable expressions. How does the human brain make use of these diverse forms of evidence to generate a common representation of a target's emotional state? In the present research, we identify neural patterns that correspond to emotions inferred from contextual information and find that these patterns generalize across different cues from which an emotion can be attributed. Specifically, we use functional neuroimaging to measure neural responses to dynamic facial expressions with positive and negative valence and to short animations in which the valence of a character's emotion could be identified only from the situation. Using multivoxel pattern analysis, we test for regions that contain information about the target's emotional state, identifying representations specific to a single stimulus type and representations that generalize across stimulus types. In regions of medial prefrontal cortex (MPFC), a classifier trained to discriminate emotional valence for one stimulus (e.g., animated situations) could successfully discriminate valence for the remaining stimulus (e.g., facial expressions), indicating a representation of valence that abstracts away from perceptual features and generalizes across different forms of evidence. Moreover, in a subregion of MPFC, this neural representation generalized to trials involving subjectively experienced emotional events, suggesting partial overlap in neural responses to attributed and experienced emotions. These data provide a step toward understanding how the brain transforms stimulus-bound inputs into abstract representations of emotion.
Keywords: abstraction, concepts, emotion attribution, multimodal, social cognition, theory of mind
Introduction
To recognize someone's emotion, we can rely on facial expression, tone of voice, and even body posture. Perceiving emotions from these overt expressions poses a version of the “invariance problem” faced across perceptual domains (Ullman, 1998; DiCarlo et al., 2012): we recognize emotions despite variation both within modality (e.g., sad face across viewpoint and identity) and across modalities (e.g., sadness from facial and vocal expressions). Emotion recognition may therefore rely on bottom-up extraction of invariants within a hierarchy of increasingly complex feature-detectors (Tanaka, 1993). However, we can also infer emotions in the absence of overt expressions by reasoning about the situation a person encounters (Ortony, 1990; Zaki et al., 2008; Scherer and Meuleman, 2013). To do so, we rely on abstract causal principles (e.g., social rejection causes sadness) rather than direct perceptual cues. Ultimately, the brain must integrate these diverse sources of information into a common code that supports empathic responses and flexible emotion-based inference.
What neural mechanisms underlie these different aspects of emotion recognition? Previous neuroimaging studies have revealed regions containing information about emotions in overt expressions: different facial expressions, for example, elicit distinct patterns of neural activity in the superior temporal sulcus and fusiform gyrus (Said et al., 2010a,b; Harry et al., 2013; see also Pitcher, 2014). In these studies, emotional stimuli were presented in a single modality, leaving unclear precisely which dimensions are represented in these regions. Given that facial expressions can be distinguished based on features specific to the visual modality (e.g., mouth motion, eyebrow deflection, eye aperture; Ekman and Rosenberg, 1997; Oosterhof and Todorov, 2009), face-responsive visual regions could distinguish emotional expressions based on such lower-level features.
To represent what is in common across sad faces and voices, the brain may also compute multimodal representations. In a recent study (Peelen et al., 2010), subjects were presented with overt facial, bodily, and vocal expressions: in left posterior superior temporal cortex (lpSTC) and middle medial prefrontal cortex (MMPFC), the pattern of response across different modalities was more similar for the same emotion than for different emotions. Thus, emotional stimuli sharing no low-level perceptual features seem to be represented similarly in these regions.
However, we not only recognize emotions from canonical perceptual cues, but also infer emotions from causal context alone. We identify emotions in the absence of familiar expressions, even for situations we have never observed or experienced. In the present study, we test for neural representations of emotional valence that generalize across both overt facial expressions and emotions inferred from the situation a character is in. We first identify neural patterns that contain information about emotional valence for each type of stimulus. We then test whether these neural patterns generalize across the two stimulus types, the signature of a common code integrating these very different types of emotional information. Finally, we investigate whether attributing emotional experiences to others and experiencing one's own emotions recruit a common neural representation by testing whether these same neural patterns generalize to emotional events experienced by participants themselves.
Materials and Methods
Summary
In Experiment 1, we used functional magnetic resonance imaging (fMRI) to measure blood oxygen level-dependent (BOLD) responses to emotional facial expressions and to animations depicting a character in an emotion-eliciting situation. While emotion-specific representations could, in principle, take the form of a uniform response across voxels in a region (detectable with univariate analyses), prior research has yielded little evidence for consistent and selective associations between discrete brain regions and specific emotions (Fusar-Poli et al., 2009; Lindquist et al., 2012). Thus, the present research uses multivariate analyses that exploit reliable signal across distributed patterns of voxels to uncover neural representations at a spatial scale smaller than that of entire regions (Haxby et al., 2001; Kamitani and Tong, 2005; Kriegeskorte et al., 2006; Norman et al., 2006). With this approach, we test for representations of emotional valence that are specific to a particular type of stimulus (facial expressions or causal situations) and representations that generalize across the two stimulus types. To identify stimulus-independent representations, we trained a pattern classification algorithm to discriminate emotional valence for one stimulus type (e.g., dynamic facial expressions) and tested its ability to discriminate valence for the remaining type (e.g., animations depicting causal situations). Thus, for each region of interest (ROI), we test whether there is a reliable neural pattern that supports classifying emotions when trained and tested on facial expressions, when trained and tested on situations, and when requiring generalization across facial expressions and situations.
We then test whether attributing emotions to others engages neural mechanisms involved in the first-person experience of emotion. Previous research has implicated MPFC not only in emotion attribution, but also in subjective experience of emotional or rewarding outcomes (Lin et al., 2012; Clithero and Rangel, 2013; Winecoff et al., 2013; Chikazoe et al., 2014). However, the relationship between experienced reward and emotion attribution remains poorly understood. In Experiment 2, we measured BOLD responses to positive and negative situations for another individual (replicating Experiment 1) and to trials in which subjects themselves experienced positive and negative outcomes (winning and losing money). Again, we test whether there is a reliable neural pattern that supports classifying the valence of events when trained and tested on third-party situations, when trained and tested on first-person rewards, and when requiring generalization across third-person and first-person experiences.
Regions of interest
Based on prior literature (Peelen et al., 2010), our regions of interest for abstract, conceptual representations of emotion were the pSTC and MMPFC. We localized in individual subjects a middle MPFC ROI comparable with that of Peelen et al. (2010), using a standard social versus nonsocial contrast (Saxe and Kanwisher, 2003; Dodell-Feder et al., 2011; see below). Because pSTC could not be identified by standard localizer tasks, we identified bilateral group ROIs based on the peak coordinate from Peelen et al. (2010). Our primary analyses target these three ROIs, accounting for multiple comparisons with a corrected α = 0.05/3 (0.017).
In addition to the MMPFC region identified by Peelen et al. (2010), adjacent regions of dorsal and ventral MPFC have been strongly implicated in studies of emotion and affective value (Amodio and Frith, 2006; Hynes et al., 2006; Völlm et al., 2006; Etkin et al., 2011). Moreover, the MPFC is part of a larger set of regions [the posterior cingulate/precuneus (PC), bilateral temporal parietal junction (rTPJ and lTPJ), and right anterior temporal lobe (rATL)] that are reliably recruited when reasoning about others' mental states (Saxe and Kanwisher, 2003; Mitchell, 2009), including emotional states (Zaki et al., 2010; Bruneau et al., 2012; Spunt and Lieberman, 2012). This set of six regions [dorsal MPFC (DMPFC), ventral MPFC (VMPFC), rTPJ, lTPJ, PC, and rATL, in addition to the MMPFC described above] was identified in individual subjects using the social versus nonsocial contrast (described below). We test these remaining regions for representations of both perceived and inferred emotions [with α = 0.05/6 (0.008) to correct for comparisons across these six ROIs].
To test for modality-specific representations, we localized regions that might contain information specific to overt facial expressions: the right middle superior temporal sulcus (rmSTS), hypothesized to code for facial motion parameters (Pelphrey et al., 2005; Calder et al., 2007; Carlin et al., 2011), and face-selective patches in right occipitotemporal cortex thought to code for identity-relevant face features [occipital face area (rOFA) and fusiform face area (rFFA); Kanwisher and Yovel, 2006]. For this analysis, we again correct for multiple comparisons using α = 0.017 (0.05/3).
Finally, in Experiment 2, we examined how the mechanisms involved in third-person attribution of emotional states relate to mechanisms involved in processing first-person subjective value. To do so, we identified a region of orbitofrontal cortex (OFC/VMPFC) that has been previously implicated in processing reward/emotional value (Kable and Glimcher, 2007; Plassmann et al., 2007; Chib et al., 2009; Winecoff et al., 2013; Chikazoe et al., 2014). We used a mask derived from two recent meta-analyses (Bartra et al., 2013; Clithero and Rangel, 2013) to investigate neural responses in an anatomical region of OFC/VMPFC in which neural responses have been shown to consistently correlate with reward value across reward types and decision contexts (anatomical mask available at http://www.rnl.caltech.edu/resources/index.html). Note that this mask is only partially overlapping with the search space used to identify VMPFC responses to theory of mind (in Experiment 1).
Participants
Twenty-one right-handed participants (20–43 years; Mage = 26.84; 14 male) were recruited for Experiment 1. Sixteen right-handed participants (19–40 years; Mage = 27.88; seven male) were recruited for Experiment 2. All participants had normal or corrected-to-normal vision and no history of neurological or psychiatric disorders and gave written, informed consent in accordance with the requirements of the MIT institutional review board.
fMRI tasks and stimuli
In Experiment 1, each subject participated in several behavioral tasks as well as three fMRI tasks: an Emotion Attribution task and two tasks used to localize regions involved in theory of mind and face perception. Subjects in Experiment 2 completed only the Emotion Attribution task and the theory of mind localizer.
Emotion Attribution task.
In the Emotion Attribution task (Fig. 1), subjects viewed brief video clips designed to elicit the attribution of an emotional state to a target (Fig. 1 depicts static photos similar to the video clips used in the study). The task consisted of video clips of people expressing a positive (happy/smiling) or negative (sad/frowning) emotion (expressions condition) and brief animations in which a simple geometric character experienced an event that would elicit positive or negative emotion (situations condition). In the situations condition, no emotion was expressed, but the character's emotional state could be inferred based on the character's goals and the event outcome. To ensure consistent attributions of emotional valence, independent subjects on Amazon Mechanical Turk (n = 16 per item) rated the stimuli from 1 to 7 (negative to positive valence): M(SEM)pos-faces = 5.597(0.077); M(SEM)neg-faces = 2.694(0.084); M(SEM)pos-situations = 5.401(0.068); M(SEM)neg-situations = 2.695(0.058). Each stimulus type was further divided into two subcategories: “male” and “female” for facial expression clips and “social” and “nonsocial” for situation clips. In the nonsocial condition, the character demonstrated an instrumental goal and achieved or failed to achieve it (e.g., attempted to climb a hill and succeeded or tumbled to the bottom); in the social condition, there were multiple agents who acted prosocially or antisocially to the target character (e.g., included or excluded the target from their group). This yielded a total of eight stimulus conditions (male positive, male negative, female positive, female negative, social positive, social negative, nonsocial positive, nonsocial negative). Because the face stimuli involved a close-perspective view on a single entity, these stimuli were presented at 7.8 × 7.4° visual angle, whereas the context animations were presented at 16.7 × 12.5°. We used dynamic, naturalistic facial expressions from movies, which are relatively uncontrolled compared with artificial stimuli (e.g., face morphs). However, our main interest is in representations that generalize to animations in the situations condition; low-level visual confounds that generalize across the two perceptually distinct stimulus sets are, therefore, highly unlikely. An advantage of these stimuli in the present design is that they achieve an unusual balance between external validity (Zaki and Ochsner, 2009; Spunt and Lieberman, 2012) and experimental control.
The experiment consisted of eight runs (9.43 min/run), each containing 6 stimuli in each of the eight conditions, for a total of 48 stimuli per condition. Each condition contained 24 semantically distinct events, each of which was presented twice over the course of the experiment with superficial transformations (the background scene for context animations and a minor luminance change for facial expressions), and the left–right orientation varied across the two presentations. Each clip was presented at fixation for 4 s, followed by a 1750 ms window during which subjects made a behavioral response and a 250 ms blank screen. Subjects were instructed to press a button to indicate the intensity of the character's emotion in each event (1 to 4, neutral to extreme), which focused subjects' attention on the character's emotional state but ensured that motor responses (intensity) were orthogonal to the discrimination of interest (valence). The clips were presented in a jittered, event-related design, and a central fixation cross was presented between trials with a variable interstimulus interval of 0–14 s. Optseq2 (http://surfer.nmr.mgh.harvard.edu/optseq/) was used to create efficient stimulus presentation schedules with a first-order counterbalancing constraint such that each condition preceded each other with approximately equal probability across the experiment. The assignment of conditions to positions within this sequence was randomized across participants. The order of individual stimulus clips for a given condition was chosen pseudo-randomly for each participant, with the constraint that repetitions of each stimulus occurred in the same even–odd folds as the first presentation (e.g., an event first presented in run 2 would be repeated in run 6, and an event presented in run 3 would be repeated in run 7).
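To make the repetition constraint concrete, the following is a minimal sketch (not the authors' scheduling code; Optseq2 handled the actual jittered timing, and variable names here are illustrative) of one way to assign each event's two presentations to runs of matching even–odd parity by placing the repeat four runs after the first showing:

```python
# Illustrative assignment of stimulus repetitions to even-odd folds.
import random

N_RUNS = 8
EVENTS_PER_CONDITION = 24  # semantically distinct events, each presented twice

def schedule_condition(events, seed=0):
    """Return {run: [event_ids]} with each event shown twice in same-parity runs."""
    rng = random.Random(seed)
    runs = {r: [] for r in range(1, N_RUNS + 1)}
    first_runs = [1, 2, 3, 4] * (len(events) // 4)  # 6 first presentations per run
    rng.shuffle(first_runs)
    for event, first_run in zip(events, first_runs):
        runs[first_run].append(event)
        runs[first_run + 4].append(event)  # e.g., run 2 -> run 6, run 3 -> run 7
    return runs

if __name__ == "__main__":
    sched = schedule_condition(list(range(EVENTS_PER_CONDITION)))
    for run, evts in sched.items():
        print(run, sorted(evts))
```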
In Experiment 2, subjects completed a modified and abbreviated version of this task (four runs). On 50% of trials, subjects viewed nonsocial situation stimuli from Experiment 1 (96 total trials); on the remaining trials, subjects were presented with positive and negative events in which they either gained or lost money from a postscan bonus (reward condition; Fig. 2). On each reward trial, subjects viewed a cycle of 20 rapidly presented random monetary values (2 s total), followed by the reward outcome for the trial, shown in green (2 s). Negative values ranged from −$0.20 to −$1.00, and positive values ranged from +$0.20 to +$2.00; this asymmetry allowed subjects to have a net gain for their bonus and accounted for the fact that losses are experienced more strongly than comparable gains (Tversky and Kahneman, 1991). The experimental design and behavioral task were identical to those of Experiment 1, except that subjects were asked to rate the character's emotional intensity on the situation trials and their own emotional intensity on the reward trials.
Theory of mind localizer.
Subjects were presented with short textual scenarios that required inferences about mental state representations (Belief condition) or physical representations such as a map, photo, or painting (Photo condition; Dodell-Feder et al., 2011; stimuli are available at http://saxelab.mit.edu/superloc.php). These two types of scenarios were similar in their meta-representational demands and logical complexity, but only the scenarios in the Belief condition required building a representation of another person's thoughts and beliefs. Scenarios were displayed for 10 s, followed immediately by a true or false question (4 s) about either the representation (Belief or Photo) or the reality of the situation. Each run (4.53 min) consisted of 10 trials separated by 12 s interstimulus intervals, and 12 s blocks of fixation were included at the beginning and end of each run. One to two runs were presented to each participant. The order of stimulus type (Belief or Photo) and correct answer (True or False) were counterbalanced within and across runs.
Face perception localizer.
Subjects viewed two conditions designed to identify face-selective regions: dynamic faces (video clips of human children's faces) and dynamic objects (video clips of objects in motion; from Pitcher et al., 2011). For each of these conditions, there were a total of 30 clips (3 s each, separated by 333 ms of blank screen), and six clips were presented in each block. This localizer also included two other conditions, biological motion and structure from motion, which were not of interest for the present analyses. All conditions were presented as 20 s blocks followed by 2 s of rest, and 12 s blocks of fixation were included at the beginning and end of each run, as well as once in the middle of the run. Each condition was presented twice per run, and subjects received two runs lasting 5 min each, with condition order counterbalanced within and across runs and across participants. To maintain attention, subjects were required to complete a one-back task during viewing. Two of 21 subjects did not complete this localizer because of insufficient scan time.
Behavioral tasks.
The Autism-Spectrum Quotient (Baron-Cohen et al., 2001) and the Interpersonal Reactivity Index (Davis, 1983) were completed via on-line Qualtrics surveys. Participants also completed an Empathic Accuracy task based on the study by Zaki et al. (2008) and the verbal reasoning, matrices, and riddles components of the KBIT2 (Kaufman, 1990).
Acquisition
Data were acquired on a 3T Siemens Tim Trio scanner in the Athinoula A. Martinos Imaging Center at the McGovern Institute for Brain Research at MIT, using a Siemens 32-channel phased array head coil. We collected a high-resolution (1 mm isotropic) T1-weighted MPRAGE anatomical scan, followed by functional images acquired with a gradient-echo EPI sequence sensitive to BOLD contrast [repetition time (TR), 2 s; echo time, 30 ms; flip angle, 90°; voxel size, 3 × 3 × 3 mm; matrix 64 × 64; 32 axial slices]. Slices were aligned with the anterior/posterior commissure and provided whole-brain coverage (excluding the cerebellum).
Analysis
Pilot data.
In addition to the 21 subjects reported, 8 independent pilot subjects were analyzed to fix the parameters of the analyses reported below (e.g., size of smoothing kernel, type of classifier, method for feature selection). A general concern with fMRI analyses, and with the application of machine learning techniques to fMRI data in particular, is that the space of possible and reasonable analyses is large and can yield qualitatively different results. Analysis decisions should be made independent of the comparisons or tests of interest; otherwise, one risks overfitting the analysis to the data (Simmons et al., 2011). One way to optimize an analysis stream without such overfitting is to separate subjects into an exploratory or pilot set and a validation or test set. Thus, the analysis stream reported here was selected based on the parameters that appeared to yield the most sensitive analysis of eight pilot subjects.
Preprocessing.
MRI data were preprocessed using SPM8 (http://www.fil.ion.ucl.ac.uk/spm/software/spm8/), FreeSurfer (http://surfer.nmr.mgh.harvard.edu/), and in-house code. FreeSurfer's skull-stripping software was used for brain extraction. SPM was used to motion correct each subject's data via rigid rotation and translation about the six orthogonal axes of motion, to register the functional data to the subject's high-resolution anatomical image, and to normalize the data onto a common brain space (Montreal Neurological Institute). In addition to the smoothing imposed by normalization, functional images were smoothed using a Gaussian filter (FWHM, 5 mm).
Defining regions of interest.
To define individual ROIs, we used hypothesis spaces derived from random-effects analyses of previous studies [theory of mind (Dufour et al., 2013): bilateral TPJ, rATL, PC, subregions of MPFC (DMPFC, MMPFC, VMPFC); face perception (Julian et al., 2012): rmSTS, rFFA, rOFA], combined with individual subject activations for the localizer tasks. The theory of mind task was modeled as a 14 s boxcar (the full length of the story and question period, shifted by 1 TR to account for lag in reading, comprehension, and processing of comprehended text) convolved with a standard hemodynamic response function (HRF). A general linear model was implemented in SPM8 to estimate β values for Belief trials and Photo trials. We conducted high-pass filtering with a 128 s cutoff, normalized the global mean signal, and included nuisance covariates to remove effects of run. The face perception task was modeled as a 22 s boxcar, and β values were similarly estimated for each condition (dynamic faces, dynamic objects, biological motion, structure from motion). For each subject, we used a one-sample t test implemented in SPM8 to generate a map of t values for the relevant contrast (Belief > Photo for the theory of mind ROIs, faces > objects for the face perception ROIs), and for each ROI, we identified the peak t value within the hypothesis space. An individual subject's ROI was defined as the cluster of contiguous suprathreshold voxels (minimum k = 10) within a 9 mm sphere surrounding this peak. If no cluster was found at p < 0.001, we repeated this procedure at p < 0.01 and p < 0.05. We masked each ROI by its hypothesis space (defined to be mutually exclusive) such that there was no overlap in the voxels contained in each functionally defined ROI. An ROI for a given subject was required to have at least 20 voxels to be included in multivariate analyses. For the pSTC region (Peelen et al., 2010), we generated a group ROI defined as a 9 mm sphere around the peak coordinate from that study, as well as an analogous ROI for the right hemisphere.
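The ROI-definition procedure can be sketched as follows (an illustrative reimplementation, not the authors' code; the t threshold shown is a placeholder for the p < 0.001 cutoff, and the fallback to more liberal thresholds is only noted in a comment):

```python
# Sketch: peak within a hypothesis-space mask, then the suprathreshold cluster
# inside a 9 mm sphere around that peak.
import numpy as np
from scipy import ndimage

def define_roi(t_map, hypothesis_mask, voxel_size=3.0, t_threshold=3.09,
               sphere_radius_mm=9.0, min_cluster=10):
    """t_map, hypothesis_mask: 3D arrays; t_threshold approximates p < 0.001."""
    masked_t = np.where(hypothesis_mask, t_map, -np.inf)
    peak = np.unravel_index(np.argmax(masked_t), t_map.shape)

    # 9 mm sphere around the peak (voxel indices scaled to millimeters)
    grid = np.indices(t_map.shape).astype(float)
    dist_mm = np.sqrt(sum((grid[d] - peak[d]) ** 2 for d in range(3))) * voxel_size
    sphere = dist_mm <= sphere_radius_mm

    # contiguous suprathreshold voxels in the sphere; keep the cluster holding the peak
    supra = (t_map > t_threshold) & sphere & hypothesis_mask
    labels, _ = ndimage.label(supra)
    cluster = labels == labels[peak]
    if labels[peak] == 0 or cluster.sum() < min_cluster:
        return None  # in the study, the threshold was then relaxed to p < 0.01, p < 0.05
    return cluster
```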
Multivariate analyses.
Multivoxel pattern analysis (MVPA) was conducted using in-house code developed in Python with the publicly available PyMVPA toolbox (http://www.pymvpa.org/; Fig. 3). We conducted MVPA within ROIs that were functionally defined based on individual subject localizer scans. High-pass filtering (128 s cutoff) was conducted on each run, and linear detrending was performed across the whole time course. A time point was excluded if it was a global intensity outlier (>3 SD above the mean intensity) or corresponded to a large movement (>2 mm scan to scan). The data were temporally compressed to generate one voxel-wise summary for each individual trial, and these single-trial summaries were used for both training and testing. Individual trial patterns were calculated by averaging the preprocessed BOLD images for the 6 s duration of the trial, offset by 4 s to account for HRF lag. Rest time points were removed, and the trial summaries were concatenated into one experimental vector in which each value was a trial's average response. The pattern for each trial was then z-scored relative to the mean across all trial responses in that voxel.
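For concreteness, a minimal sketch of this trial-summary step (not the authors' PyMVPA pipeline; the array shapes and onsets below are illustrative placeholders):

```python
# Sketch: average volumes covering each 6 s trial, offset by 4 s for HRF lag,
# then z-score each voxel across all trial summaries.
import numpy as np

TR = 2.0  # seconds

def trial_patterns(bold, trial_onsets, trial_dur=6.0, hrf_offset=4.0):
    """bold: array (n_timepoints, n_voxels); trial_onsets: onsets in seconds.
    Returns array (n_trials, n_voxels) of z-scored single-trial summaries."""
    summaries = []
    for onset in trial_onsets:
        start = int(round((onset + hrf_offset) / TR))
        stop = start + int(round(trial_dur / TR))
        summaries.append(bold[start:stop].mean(axis=0))
    X = np.vstack(summaries)
    # z-score each voxel relative to its mean/SD across trial responses
    return (X - X.mean(axis=0)) / X.std(axis=0)

if __name__ == "__main__":
    fake_bold = np.random.randn(300, 80)   # 300 volumes x 80 voxels (placeholder data)
    onsets = np.arange(0, 560, 14.0)[:38]  # hypothetical trial onsets in seconds
    print(trial_patterns(fake_bold, onsets).shape)
```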
Given the high dimensionality of fMRI data and the relatively small number of training examples available, feature selection is often useful to extract voxels likely to be informative for classification (Mitchell et al., 2004; De Martino et al., 2008; Pereira et al., 2009). Within each ROI, we conducted voxel-wise ANOVAs to identify voxels that were modulated by the task (based on the F statistic for the task vs rest contrast). This univariate selection procedure tends to eliminate high-variance, noisy voxels (Mitchell et al., 2004). Because this selection procedure is orthogonal to all of the classifications reported here, it could be performed once over the whole dataset without constituting peeking, meaning that the same voxels could be used as features in each cross-validation fold. The 80 most active voxels within the ROI were used for classification (selecting a fixed number of voxels also helps to minimize differences in the number of voxels across regions and subjects).
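A sketch of this selection step, assuming task and rest time points have already been separated (the function and variable names are ours, not the authors'):

```python
# Sketch: rank voxels by a task-vs-rest F statistic and keep the top 80.
import numpy as np
from scipy import stats

def select_voxels(task_data, rest_data, n_keep=80):
    """task_data, rest_data: arrays (n_timepoints, n_voxels) from one ROI.
    Returns indices of the n_keep voxels with the largest F statistics."""
    f_vals = np.array([stats.f_oneway(task_data[:, v], rest_data[:, v]).statistic
                       for v in range(task_data.shape[1])])
    return np.argsort(f_vals)[::-1][:n_keep]

# Usage: idx = select_voxels(task, rest); X = trial_summaries[:, idx]
```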
The data were classified using a support vector machine implemented with libSVM (http://www.csie.ntu.edu.tw/∼cjlin/libsvm/; Chang and Lin, 2011). This classifier uses condition-labeled training data to learn a weight for each voxel, and subsequent stimuli (validation data not used for model training) can then be assigned to one of two classes based on a weighted linear combination of the response in each voxel. In a support vector machine, the linear decision function can be thought of as a hyperplane dividing the multidimensional voxel space into two classes, and voxel weights are learned so as to maximize the distance between the hyperplane and the closest observed example. We conducted binary classification with a linear kernel using a fixed regularization parameter (C = 1) to control the tradeoff between margin size and training error. We restricted ourselves to linearly decodable signal under the assumption that a linear kernel implements a plausible readout mechanism for downstream neurons (Seung and Sompolinsky, 1993; Hung et al., 2005; Shamir and Sompolinsky, 2006). Given that the brain likely implements nonlinear transformations, linear separability within a population can be thought of as a conservative but reasonable estimate of the information available for explicit readout (DiCarlo and Cox, 2007).
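An equivalent setup can be written with scikit-learn's libSVM-backed SVC (a stand-in for the PyMVPA/libSVM configuration described above; the data below are random placeholders, not study data):

```python
# Sketch: binary linear SVM with fixed regularization parameter C = 1.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.standard_normal((96, 80))  # 96 trials x 80 selected voxels (hypothetical)
y_train = np.repeat([0, 1], 48)          # 0 = negative valence, 1 = positive valence

clf = SVC(kernel="linear", C=1.0)
clf.fit(X_train, y_train)
print(clf.coef_.shape)  # one weight per voxel, defining the separating hyperplane
```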
For each classification, the data were partitioned into multiple cross-validation folds in which the classifier was trained iteratively on all folds but one and tested on the remaining fold. Classification accuracy was then averaged across folds to yield a single classification accuracy for each subject in the ROI. A one-sample t test was then performed over these individual accuracies, comparing with chance classification of 0.50 (all t tests on classification accuracies were one-tailed). Although parametric tests are not always appropriate for assessing the significance of classification accuracies (Stelzer et al., 2013), the assumptions of these tests are met in the present case: the accuracy values are independent samples from separate subjects (rather than individual folds trained on overlapping data), and the classification accuracies were found to be normally distributed around the mean accuracy. For within-stimulus analyses (classifying within facial expressions and within situation stimuli), cross-validation was performed across runs (i.e., iteratively train on seven runs, test on the remaining eighth). For cross-stimulus analyses, the folds for cross-validation were based on stimulus type. To ensure complete independence between training and test data, folds for the cross-stimulus analysis were also divided based on even versus odd runs (e.g., train on even-run facial expressions, test on odd-run situations).
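The folding scheme and group statistics can be sketched as follows (not the authors' code; the label arrays and fold bookkeeping are illustrative assumptions):

```python
# Sketch: leave-one-run-out and cross-stimulus cross-validation, plus the
# one-tailed group t test of subject accuracies against chance (0.5).
import numpy as np
from scipy import stats
from sklearn.svm import SVC

def fold_accuracy(X, y, train_idx, test_idx):
    clf = SVC(kernel="linear", C=1.0)
    clf.fit(X[train_idx], y[train_idx])
    return clf.score(X[test_idx], y[test_idx])

def within_stimulus_accuracy(X, y, runs):
    """Leave-one-run-out cross-validation for one stimulus type; runs: array of run labels."""
    accs = [fold_accuracy(X, y, runs != r, runs == r) for r in np.unique(runs)]
    return np.mean(accs)

def cross_stimulus_accuracy(X, y, runs, stim):
    """Train on one stimulus type and even-odd half, test on the other type and half."""
    even = runs % 2 == 0
    accs = []
    for train_stim, test_stim in [("face", "situation"), ("situation", "face")]:
        for train_half, test_half in [(even, ~even), (~even, even)]:
            accs.append(fold_accuracy(X, y,
                                      (stim == train_stim) & train_half,
                                      (stim == test_stim) & test_half))
    return np.mean(accs)

def group_test(subject_accuracies, chance=0.5):
    """One-tailed one-sample t test of per-subject accuracies against chance."""
    t, p_two = stats.ttest_1samp(subject_accuracies, chance)
    return t, p_two / 2 if t > 0 else 1 - p_two / 2
```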
Whole-brain searchlight classification.
The searchlight procedure was identical to the ROI-based procedure except that the classifier was applied to voxels within searchlight spheres rather than individually localized ROIs. For each voxel in a gray matter mask, we defined a sphere containing all voxels within a three-voxel radius of the center voxel. The searchlight size (123 voxels) was selected to approximately match the size of the regions in which effects were found with the ROI analysis, and we again conducted an ANOVA to select the 80 most active voxels in the sphere. Classification was then performed on each cross-validation fold, and the average classification accuracy for each sphere was assigned to its central voxel, yielding a single accuracy image for each subject for a given discrimination. We then conducted a one-sample t test over subjects' accuracy maps, comparing accuracy in each voxel to chance (0.5). This yielded a group t-map, which was assessed at p < 0.05, FWE corrected (based on SPM's implementation of Gaussian random fields).
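A schematic version of this searchlight loop follows (not the authors' implementation; `data_to_trials` and `accuracy_fn` are hypothetical helpers standing in for the trial-summary and cross-validation steps sketched above, and the group-level t test would be applied to the resulting per-subject maps as in the previous sketch):

```python
# Sketch: for each gray-matter voxel, take the sphere of voxels within a
# three-voxel radius, keep the 80 most task-responsive, classify, and write the
# mean cross-validation accuracy back to the center voxel.
import numpy as np

def searchlight(accuracy_fn, data_to_trials, gray_mask, f_map, radius=3, n_keep=80):
    """gray_mask: 3D boolean; f_map: 3D task-vs-rest F statistics;
    data_to_trials(voxel_coords) -> (trials x voxels) matrix for those voxels;
    accuracy_fn(X) -> mean cross-validation accuracy. Returns a 3D accuracy map."""
    shape = gray_mask.shape
    # precompute voxel offsets within the sphere (123 voxels at radius 3)
    offs = np.array([(i, j, k)
                     for i in range(-radius, radius + 1)
                     for j in range(-radius, radius + 1)
                     for k in range(-radius, radius + 1)
                     if i * i + j * j + k * k <= radius * radius])
    acc_map = np.full(shape, np.nan)
    for center in zip(*np.nonzero(gray_mask)):
        vox = offs + np.array(center)
        inside = np.all((vox >= 0) & (vox < shape), axis=1)
        vox = vox[inside]
        vox = vox[gray_mask[tuple(vox.T)]]
        # feature selection: the n_keep largest F statistics within the sphere
        order = np.argsort(f_map[tuple(vox.T)])[::-1][:n_keep]
        X = data_to_trials(vox[order])
        acc_map[center] = accuracy_fn(X)
    return acc_map
```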
Whole-brain random-effects analysis (univariate).
We also conducted a whole-brain random effects analysis to identify voxels in which the univariate response differentiated positive and negative valence for faces and for situations. The conjunction of these two contrasts would identify voxels in which the magnitude of response was related to the valence for both stimulus types.
Results
Experiment 1
Regions of interest
Using the contrast of Belief > Photo, we identified seven ROIs (rTPJ, lTPJ, rATL, PC, DMPFC, MMPFC, VMPFC) in each of the 21 subjects, and using the contrast of faces > objects, we identified right lateralized face regions OFA, FFA, and mSTS in 18 subjects (of 19 subjects who completed this localizer).
Multivariate results
Multimodal regions (pSTC and MMPFC).
For classification of emotional valence for facial expressions, we replicated the results of Peelen et al. (2010) with above-chance classification in MMPFC [M(SEM) = 0.534(0.013), t(18) = 2.65, p = 0.008; Fig. 4] and lpSTC [M(SEM) = 0.525(0.010), t(20) = 2.61, p = 0.008; Fig. 5]. Classification in right posterior superior temporal cortex (rpSTC) did not reach significance at a corrected (0.05/3) threshold [M(SEM) = 0.516(0.007), t(20) = 2.23, p = 0.019]. Note that although the magnitude of these effects is small, these results reflect classification of single-event trials, which are strongly influenced by measurement noise. Small but significant classification accuracies are common for single-trial, within-category distinctions (Anzellotti et al., 2013; Harry et al., 2013).
The key question for the present research is whether these regions contain neural codes specific to overt expressions or whether they also represent the valence of inferred emotional states. When classifying valence for situation stimuli, we again found above-chance classification accuracy in MMPFC [M(SEM) = 0.553(0.012), t(18) = 4.31, p < 0.001]. We then tested for stimulus-independent representations by training on one kind of stimulus and testing on the other. Consistent with the existence of an abstract valence code, MMPFC supported above-chance valence classification across both stimulus types [M(SEM) = 0.524(0.007), t(18) = 3.77, p = 0.001]. In contrast, lpSTC did not perform above chance when classifying the valence of situation stimuli [M(SEM) = 0.512(0.011), t(20) = 1.06, p = 0.152], nor when requiring generalization across stimulus type [M(SEM) = 0.500(0.008), t(20) = 0.04, p = 0.486]. To directly compare accuracy in lpSTC when classifying within facial expression stimuli and when generalizing across stimulus types, we conducted a paired sample t test (one-tailed) comparing classification accuracy for faces to accuracy for cross-stimulus classification: classification accuracy was significantly higher for faces compared with cross-stimulus classification (M = 0.525, M = 0.500, t(20) = 2.00, p = 0.029).
Theory of mind regions.
We performed these same analyses in six remaining theory of mind regions (at a corrected α = 0.05/6, 0.008). In DMPFC (Fig. 4), we observed results very comparable with those observed in MMPFC: above-chance classification of facial emotion [M(SEM) = 0.539(0.016), t(18) = 2.39, p = 0.014], of emotion from situations [M(SEM) = 0.570(0.013), t(18) = 5.38, p < 0.001], and when generalizing across stimulus types [M(SEM) = 0.532(0.008), t(18) = 3.95, p < 0.001]. VMPFC did not perform above chance at a corrected threshold (p < 0.008) when classifying facial expressions [M(SEM) = 0.525(0.009), t(17) = 2.62, p = 0.009] or situation stimuli [M(SEM) = 0.524(0.012), t(17) = 1.98, p = 0.032]; however, cross-stimulus decoding was above chance [M(SEM) = 0.527(0.007), t(17) = 3.79, p = 0.001].
None of the other theory of mind regions classified above threshold when distinguishing positive and negative facial expressions [rTPJ: M(SEM) = 0.501(0.010), t(20) = 0.06, p = 0.478; lTPJ: M(SEM) = 0.521(0.012), t(20) = 1.85, p = 0.040; rATL: M(SEM) = 0.525(0.012), t(20) = 2.05, p = 0.027; PC: M(SEM) = 0.514(0.011), t(20) = 1.32, p = 0.102], when distinguishing positive and negative situations [rTPJ: M(SEM) = 0.528(0.014), t(20) = 2.04, p = 0.027; lTPJ: M(SEM) = 0.515(0.009), t(20) = 1.57, p = 0.066; rATL: M(SEM) = 0.510(0.012), t(20) = 0.80, p = 0.216; PC: M(SEM) = 0.523(0.012), t(20) = 1.84, p = 0.040], or when generalizing across stimulus types [rTPJ: M(SEM) = 0.503(0.007), t(20) = 0.45, p = 0.330; lTPJ: M(SEM) = 0.509(0.007), t(20) = 1.38, p = 0.092; rATL: M(SEM) = 0.510(0.006), t(20) = 1.85, p = 0.039; PC: M(SEM) = 0.495(0.008), t(20) =−0.60, p = 0.724].
Face-selective cortex.
For valence in facial expressions, we also performed a secondary analysis in face-selective regions rOFA, rFFA, and rmSTS (at a corrected threshold of 0.05/3; Fig. 5). We replicated previous reports (Said et al., 2010a,b; Furl et al., 2012; Harry et al., 2013) with classification accuracies significantly above chance in rmSTS [M(SEM) = 0.539(0.007), t(14) = 5.20, p < 0.001] and in rFFA [M(SEM) = 0.531(0.012), t(14) = 2.59, p = 0.011]; classification in rOFA did not survive correction for multiple comparisons [M(SEM) = 0.529(0.016), t(13) = 1.87, p = 0.042]. For the situation stimuli, the rFFA failed to classify valence when it was inferred from context [rFFA: M(SEM) = 0.508(0.016), t(14) = 0.54, p = 0.300]. In the rmSTS, on the other hand, there was reliable information about situation stimuli in addition to the face stimuli [M(SEM) = 0.537(0.014), t(14) = 2.57, p = 0.011]. However, neither region supported above-chance cross-stimulus classification [rFFA: M(SEM) = 0.499(0.006), t(14) = −0.16, p = 0.563; rmSTS: M(SEM) = 0.499(0.008), t(14) =−0.17, p = 0.565], and classification accuracy was reliably higher (one-tailed test) when training and testing on faces compared with when requiring generalization across stimulus types in rmSTS (M = 0.539, M = 0.499, t(14) = 4.52, p < 0.001) and in rFFA (M = 0.531, M = 0.499, t(14) = 2.26, p = 0.020).
Follow-up analyses
Given successful valence decoding in dorsal and middle MPFC, we conducted several follow-up analyses to examine the scope and generality of these effects. For facial expressions, we performed cross-validation across the orthogonal dimension of face gender. Both regions of MPFC performed above chance [DMPFC: M(SEM) = 0.529(0.015), t(18) = 1.92, p = 0.035; MMPFC: M(SEM) = 0.532(0.010), t(18) = 3.20, p = 0.003], indicating that the valence-specific voxel patterns generalize across two face sets that differed at the level of exemplars, identity, and gender. We also tested for generalization across face sets in the remaining regions that supported decoding of facial expressions (rmSTS, rFFA, lpSTC). The neural patterns generalized across the male and female face sets in rmSTS [M(SEM) = 0.524(0.012), t(14) = 2.02, p = 0.032] but not in rFFA [M(SEM) = 0.512(0.012), t(14) = 1.00, p = 0.167] or lpSTC [M(SEM) = 0.509(0.009), t(20) = 1.05, p = 0.154].
For situation stimuli, both regions of MPFC were able to classify valence across the orthogonal dimension: social versus nonsocial situations [DMPFC: M(SEM) = 0.552(0.012), t(18) = 4.44, p < 0.001; MMPFC: M(SEM) = 0.543(0.011), t(18) = 3.97, p < 0.001]. Finally, to test for possible asymmetry in the cross-stimulus classification, we separated the cross-stimulus analysis into training on faces/testing on situations and training on situations/testing on faces. We observed above-chance classification for both train/test partitions in both DMPFC [testing on faces: M(SEM) = 0.523(0.011), t(18) = 2.18, p = 0.021; testing on situations: M(SEM) = 0.540(0.007), t(18) = 5.47, p < 0.001] and MMPFC [testing on faces: M(SEM) = 0.525(0.006), t(18) = 4.13, p < 0.001; testing on situations: M(SEM) = 0.524(0.009), t(18) = 2.64, p = 0.008].
In summary, it appears that dorsal and middle subregions of MPFC contain reliable information about the emotional valence of a stimulus when the emotion must be inferred from the situation and that the neural code in these regions is highly abstract, generalizing across diverse cues from which an emotion can be identified. In contrast, although both rFFA and the region of superior temporal cortex identified by Peelen et al. (2010) contain information about the valence of facial expressions, the neural codes in those regions do not appear to generalize to valence representations formed on the basis of contextual information. Interestingly, the rmSTS appears to contain information about valence in faces and situations but does not form a common code that integrates across stimulus type.
Whole-brain analyses
To test for any remaining regions that may contain information about the emotional valence of these stimuli, we conducted a searchlight procedure, revealing striking consistency with the ROI analysis (Table 1; Fig. 6). Only DMPFC and MMPFC exhibited above-chance classification for faces and contexts, and when generalizing across these two stimulus types. In addition, for classification of facial expressions alone, we observed clusters in occipital cortex. Clusters in the other ROIs emerged at a more liberal threshold (rOFA and rmSTS at p < 0.001 uncorrected; rFFA, rpSTC, and lpSTC at p < 0.01). In contrast, whole-brain analyses of the univariate response revealed no regions in which the mean response distinguished between positive and negative facial expressions or between positive and negative contexts (at p < 0.05, FWE correction based on Gaussian random fields).
Table 1. Whole-brain searchlight classification results (cluster sizes, peak t values, and coordinates in MNI space)
| Stimulus | Number of voxels | Peak t | x | y | z | Region |
|---|---|---|---|---|---|---|
| Situations | 52 | 11.80 | 4 | 46 | 38 | DMPFC |
| | | 8.24 | 6 | 50 | 28 | |
| | 9 | 9.49 | −8 | 54 | 26 | DMPFC |
| | 28 | 9.21 | 4 | 58 | 14 | MMPFC |
| | | 9.02 | 4 | 56 | 22 | |
| | 1 | 7.98 | 16 | 60 | 24 | MMPFC |
| | 1 | 7.86 | 0 | 50 | 36 | DMPFC |
| | 4 | 7.82 | 0 | 54 | 30 | DMPFC |
| | 1 | 7.55 | −8 | 54 | 18 | MMPFC |
| | 2 | 7.43 | 8 | 56 | 20 | MMPFC |
| | 1 | 7.40 | −2 | 54 | 36 | DMPFC |
| | 1 | 7.30 | −28 | −78 | 32 | L OCC/TEMP |
| Faces | 8 | 8.77 | −30 | −88 | −4 | L MID OCC GYRUS |
| | 2 | 8.48 | 38 | −92 | 8 | R MID OCC GYRUS |
| | 3 | 8.16 | 2 | 52 | 20 | MMPFC |
| | 1 | 7.88 | 6 | 52 | 22 | MMPFC |
| | 2 | 7.60 | 8 | 56 | 20 | MMPFC |
| | 1 | 7.52 | 28 | −82 | 32 | R SUP OCC |
| Cross-stimulus | 42 | 10.91 | −2 | 50 | 34 | DMPFC |
| | | 9.28 | 0 | 48 | 24 | |
| | | 7.28 | 8 | 56 | 20 | |
| | 1 | 8.93 | 8 | 56 | 10 | MMPFC |
| | 1 | 7.34 | 12 | 66 | 10 | MMPFC |

L OCC/TEMP, Left occipital/temporal; L MID OCC GYRUS, left middle occipital gyrus; R MID OCC GYRUS, right middle occipital gyrus; R SUP OCC, right superior occipital. Rows without a voxel count are subpeaks within the cluster listed above them.
Experiment 2
The results of Experiment 1 suggest that DMPFC and MMPFC contain abstract, stimulus-independent information about emotional valence of perceived and inferred emotions. How is this region related to the regions of MPFC typically implicated in processing value and/or subjective experience? For Experiment 2, we first used a group anatomical mask (Bartra et al., 2013; Clithero and Rangel, 2013) to identify a region of OFC/VMPFC previously implicated in reward/value processing. Consistent with previous reports (Kable and Glimcher, 2007; Chib et al., 2009), this region showed an overall magnitude effect for positive > negative rewards (t(15) = 3.20, p = 0.006; Fig. 7) and could classify positive versus negative reward trials reliably above chance [M(SEM) = 0.542(0.020), t(15) = 2.09, p = 0.027]. Interestingly, this canonical reward region did not reliably distinguish positive and negative situations for others [M(SEM) = 0.521(0.018), t(15) = 1.15, p = 0.135], and there was no evidence for a common valence code generalizing across self and other [M(SEM) = 0.512(0.014), t(15) = 0.80, p = 0.219]. Classification accuracies were significantly higher when discriminating self-reward values compared with when generalizing across reward and situation trials (M = 0.542, M = 0.512, t(15) = 1.90, p = 0.038, one-tailed).
What about the regions implicated in abstract valence representation in Experiment 1? By decoding valence within the situation stimuli, we replicate the finding of Experiment 1 that DMPFC and MMPFC contain information about the emotion attributed to a target even when that emotion must be inferred from context [DMPFC: M(SEM) = 0.543(0.021), t(15) = 2.04, p = 0.030; MMPFC: M(SEM) = 0.536(0.019), t(15) = 1.95, p = 0.035; Fig. 8]. Do we observe these same neural patterns on trials in which subjects evaluate their own subjectively experienced emotions? In MMPFC, we observed above-chance valence classification for reward trials [M(SEM) = 0.539(0.018), t(15) = 2.17, p = 0.023] in addition to situation trials. Moreover, neural patterns generalized across positive/negative situations and positive/negative outcomes for the self [M(SEM) = 0.526(0.010), t(15) = 2.60, p = 0.010]. In dorsal MPFC, in contrast, we observed similar classification of the valence of reward outcomes [M(SEM) = 0.544(0.025), t(15) = 1.74, p = 0.051], but this region failed to classify above chance when generalizing across self and other [M(SEM) = 0.514(0.013), t(15) = 1.07, p = 0.150].
Discussion
Are there neural representations of emotions that generalize across diverse sources of evidence, including overt emotional expressions and emotions inferred from context alone? In the present study, we identified regions in which voxel-wise response patterns contained information about the emotional valence of facial expressions and a smaller number of regions that distinguished the valence of emotion-eliciting situations. Our results, together with existing literature (Peelen et al., 2010), provide candidate neural substrates for three levels of representation: modality-specific representations bound to perceptual invariants in the input, intermediate multimodal representations that generalize across canonical perceptual schemas, and conceptual representations that are fully invariant to the information used to identify emotions.
Conceptual representations
In DMPFC/MMPFC, we decoded emotional valence from facial expressions and from animations depicting emotion-eliciting situations. Like other domains of high-level cognition, emotion knowledge is theory-like (Carey, 1985; Gopnik and Wellman, 1992), requiring abstract concepts (e.g., of goals, expectations) to be integrated in a coherent, causal manner. The present results suggest that valence representations in DMPFC/MMPFC are elicited by such inferential processes. We could classify valence when training on faces and testing on situations (and vice versa), replicating the finding that emotion representations in MMPFC generalize across perceptually dissimilar stimuli (Peelen et al., 2010). Moreover, our results demonstrate an even stronger form of generalization: perceived emotions and emotions inferred through generative, theory-like processes activate similar neural patterns in DMPFC/MMPFC, indicating a mechanism beyond mere association of co-occurring perceptual schemas. Thus, the MPFC may contain a common neural code that integrates diverse perceptual and inferential processes to form abstract representations of emotions.
Previous research leaves open the question of whether activity in MPFC reflects mechanisms specific to emotion attribution or mechanisms involved in value or valence processing more generally. In Experiment 2, we found evidence for both kinds of representations. First, we found that the region of OFC/VMPFC implicated in reward processing (Clithero and Rangel, 2013; anatomical ROI from Bartra et al., 2013) does not contain information about the valence of attributed emotions. Second, we found no evidence for a shared representation of experienced and attributed emotion in dorsal MPFC. Finally, in MMPFC, we observed neural patterns that generalized across attributed and experienced emotional events. One interpretation of this result is that attributing positive or rewarding experiences to others depends on general-purpose reward representations that code value in social and nonsocial contexts (Chib et al., 2009; Lin et al., 2012; Ruff and Fehr, 2014). Alternatively, neural responses in MMPFC could reflect the participant's own empathic reaction to the depicted experiences (e.g., witnessing someone achieve a goal elicits positive emotions in participants). If so, the participant's empathic reaction might be causally involved in the process of attributing emotions to others (consistent with “simulation theory”; Goldman and Sripada, 2005; Niedenthal, 2007) or might be a downstream consequence of attribution. Previous results do indicate a causal role for MPFC in emotion perception and attribution: damage to MPFC is associated with deficits in emotion recognition (Shamay-Tsoory et al., 2003, 2009), and direct disruption of MPFC via transcranial magnetic stimulation has been shown to impair recognition of facial expressions (Harmer et al., 2001; see also Mattavelli et al., 2011). Moreover, the degree to which MPFC is recruited during an emotion attribution task predicts individual differences in the accuracy of emotion judgments (Zaki et al., 2009a,b). Future research should continue to distinguish the specific contents of attributed emotions from the emotional response of the participant. For example, can patterns in MPFC be used to classify the attribution of more specific emotions that are unlikely to be shared by the observer (e.g., loneliness vs regret)?
Modality-specific representations
In face-selective regions (rFFA and rmSTS), we found that neural patterns could distinguish positive and negative facial expressions, replicating previous reports of emotion-specific neural representations in these regions (Fox et al., 2009; Said et al., 2010a,b; Xu and Biederman, 2010; Furl et al., 2012; Harry et al., 2013). Neural populations could distinguish facial expressions by responding to relatively low-level parameters that differ across expressions, by extracting mid-level invariants (e.g., eye motion, mouth configuration) that generalize across within-modality transformations (e.g., lighting, position), or by computing explicit representations of facial emotion that integrate multiple facial parameters. The present study used naturalistic stimuli that varied in lighting conditions, face direction, and face position and found reliable generalization across male and female face sets in rmSTS. Thus, it is possible that these neural patterns distinguish facial expressions based on representations invariant to certain low-level transformations (Anzellotti et al., 2013). Future research should investigate this possibility by systematically testing the generalization properties of neural responses to emotional expressions across variation in low-level dimensions (e.g., face direction) and higher-level dimensions (e.g., generalization from sad eyes to a sad mouth). Interestingly, the rmSTS also contained information about emotional valence in situation stimuli, but the neural patterns did not generalize across these distinct sources of evidence, suggesting two independent valence codes in this region.
Multimodal representations
We also replicate the finding that pSTC contains information about the emotional valence of facial expressions (Peelen et al., 2010). However, unlike DMPFC/MMPFC, we find no evidence for representations of emotions inferred from situations. Interestingly, Peelen et al. (2010) found that the pSTC could decode emotional expressions across modalities (faces, bodies, voices), suggesting that this region may support an intermediate representation that is neither fully conceptual nor tied to specific perceptual parameters. For example, pSTC could be involved in pooling over associated perceptual schemas, leading to representations that generalize across diverse sensory inputs but do not extend to more abstract, inference-based representations. This interpretation would be consistent with the region's proposed role in cross-modal integration (Kreifelts et al., 2009; Stevenson and James, 2009). Thus, the present findings reveal a novel functional division within the set of regions (pSTC and MMPFC) previously implicated in multimodal emotion representation (Peelen et al., 2010).
Open questions
While these data provide important constraints on the levels of representation associated with different regions, important questions remain open. First, do the regions identified here contain information about more fine-grained emotional distinctions beyond valence? Previous studies have successfully decoded a larger space of perceived emotions in MMPFC, STS, and FFA (Peelen et al., 2010; Said et al., 2010a,b; Harry et al., 2013). For emotions inferred from context, the neural representation of more fine-grained emotional distinctions (e.g., inferring sadness vs fear) will be a key question for future research.
This study also leaves open the role of other regions (e.g., amygdala, insula, inferior frontal gyrus) that have previously been associated with emotion perception and experience (Shamay-Tsoory et al., 2009; Singer et al., 2009; Pessoa and Adolphs, 2010). What is the precise content of emotion representations in these regions, and do they contribute to identifying specific emotional states in others? With the searchlight procedure, we found little evidence for representations of emotional valence outside the a priori ROIs. However, whole-brain analyses are less sensitive than ROI analyses, and although multivariate analyses alleviate some of the spatial constraints of univariate methods, they still tend to rely on relatively low-frequency information (Op de Beeck, 2010; Freeman et al., 2011), meaning that MVPA provides a lower bound on the information available in a given region (Kriegeskorte and Kievit, 2013). Neurophysiological studies (Gothard et al., 2007; Hadj-Bouziane et al., 2012) may help to elucidate the full set of regions contributing to emotion attribution.
Relatedly, how does information in these different regions interact during the process of attribution? A tempting speculation is that the regions described here make up a hierarchy of information flow (Adolphs, 2002; Ethofer et al., 2006; e.g., modality-specific, face-selective cortex ⇔ multimodal pSTC ⇔ conceptual MPFC). However, additional connectivity or causal information (Friston et al., 2003; Bestmann et al., 2008) would be required to confirm such an account and to directly map different representational content onto discrete stages.
Finally, these findings are complementary to previous investigations of semantic representations [e.g., object categories (Devereux et al., 2013; Fairhall and Caramazza, 2013)], which have identified modality-specific representations (e.g., in visual cortex) and representations that generalize across modalities (e.g., across words and pictures in left middle temporal gyrus). The present findings highlight a distinction between representations that are multimodal and those that are based on theory-like causal inferences. Does this distinction apply to other domains, and can it help to clarify the neural organization of abstract knowledge more broadly?
General conclusions
The challenge of emotion recognition demands neural processes for exploiting different sources of evidence for others' emotions, as well as a common code for integrating this information to support emotion-based inference. Here, we demonstrate successful decoding of valence for emotional states that must be inferred from context as well as emotions directly perceived from overt expressions. By testing the scope and generality of the responses in different regions, we provide important constraints on possible computational roles of these regions and begin to elucidate the series of representations that make up the processing stream for emotional perception, attribution, and empathy. Thus, the present research provides a step toward understanding how the brain transforms stimulus-bound inputs into abstract representations of emotions.
Footnotes
This work was supported by a National Science Foundation Graduate Research Fellowship (A.E.S.) and by NIH Grant 1R01 MH096914-01A1 (R.S.). We thank Laura Schulz, Nancy Kanwisher, Michael Cohen, Dorit Kliemann, Stefano Anzellotti, and Jorie Koster-Hale for helpful comments and discussion.
The authors declare no competing financial interests.
References
- Adolphs R. Neural systems for recognizing emotion. Curr Opin Neurobiol. 2002;12:169–177. doi: 10.1016/S0959-4388(02)00301-X. [DOI] [PubMed] [Google Scholar]
- Amodio DM, Frith CD. Meeting of minds: the medial frontal cortex and social cognition. Nat Rev Neurosci. 2006;7:268–277. doi: 10.1038/nrn1884. [DOI] [PubMed] [Google Scholar]
- Anzellotti S, Fairhall SL, Caramazza A. Decoding representations of face identity that are tolerant to rotation. Cereb Cortex. 2013;24:1988–1995. doi: 10.1093/cercor/bht046. [DOI] [PubMed] [Google Scholar]
- Baron-Cohen S, Wheelwright S, Skinner R, Martin J, Clubley E. The Autism-Spectrum Quotient (AQ): evidence from Asperger syndrome/high-functioning autism, males and females, scientists and mathematicians. J Autism Dev Disord. 2001;31:5–17. doi: 10.1023/A:1005653411471. [DOI] [PubMed] [Google Scholar]
- Bartra O, McGuire JT, Kable JW. The valuation system: a coordinate-based meta-analysis of BOLD fMRI experiments examining neural correlates of subjective value. Neuroimage. 2013;76:412–427. doi: 10.1016/j.neuroimage.2013.02.063. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bestmann S, Ruff CC, Blankenburg F, Weiskopf N, Driver J, Rothwell JC. Mapping causal interregional influences with concurrent TMS–fMRI. Exp Brain Res. 2008;191:383–402. doi: 10.1007/s00221-008-1601-8. [DOI] [PubMed] [Google Scholar]
- Bruneau EG, Pluta A, Saxe R. Distinct roles of the “shared pain” and “theory of mind” networks in processing others' emotional suffering. Neuropsychologia. 2012;50:219–231. doi: 10.1016/j.neuropsychologia.2011.11.008. [DOI] [PubMed] [Google Scholar]
- Calder AJ, Beaver JD, Winston JS, Dolan RJ, Jenkins R, Eger E, Henson RN. Separate coding of different gaze directions in the superior temporal sulcus and inferior parietal lobule. Curr Biol. 2007;17:20–25. doi: 10.1016/j.cub.2006.10.052. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Carey S. Conceptual change in childhood. Cambridge, MA: MIT; 1985. [Google Scholar]
- Carlin JD, Calder AJ, Kriegeskorte N, Nili H, Rowe JB. A head view-invariant representation of gaze direction in anterior superior temporal sulcus. Curr Biol. 2011;21:1817–1821. doi: 10.1016/j.cub.2011.09.025. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chang C, Lin C. LIBSVM: a library for support vector machines. ACM Trans Intell Syst Technol. 2011;2:1–27. [Google Scholar]
- Chib VS, Rangel A, Shimojo S, O'Doherty JP. Evidence for a common representation of decision values for dissimilar goods in human ventromedial prefrontal cortex. J Neurosci. 2009;29:12315–12320. doi: 10.1523/JNEUROSCI.2575-09.2009. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chikazoe J, Lee DH, Kriegeskorte N, Anderson AK. Population coding of affect across stimuli, modalities and individuals. Nat Neurosci. 2014;17:1114–1122. doi: 10.1038/nn.3749. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Clithero JA, Rangel A. Informatic parcellation of the network involved in the computation of subjective value. Soc Cogn Affect Neurosci. 2013;9:1289–1302. doi: 10.1093/scan/nst106. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Davis MH. Measuring individual differences in empathy: evidence for a multidimensional approach. J Pers Soc Psychol. 1983;44:113–126. doi: 10.1037/0022-3514.44.1.113. [DOI] [Google Scholar]
- De Martino F, Valente G, Staeren N, Ashburner J, Goebel R, Formisano E. Combining multivariate voxel selection and support vector machines for mapping and classification of fMRI spatial patterns. Neuroimage. 2008;43:44–58. doi: 10.1016/j.neuroimage.2008.06.037. [DOI] [PubMed] [Google Scholar]
- Devereux BJ, Clarke A, Marouchos A, Tyler LK. Representational similarity analysis reveals commonalities and differences in the semantic processing of words and objects. J Neurosci. 2013;33:18906–18916. doi: 10.1523/JNEUROSCI.3809-13.2013. [DOI] [PMC free article] [PubMed] [Google Scholar]
- DiCarlo JJ, Cox DD. Untangling invariant object recognition. Trends Cogn Sci. 2007;11:333–341. doi: 10.1016/j.tics.2007.06.010. [DOI] [PubMed] [Google Scholar]
- DiCarlo JJ, Zoccolan D, Rust NC. How does the brain solve visual object recognition? Neuron. 2012;73:415–434. doi: 10.1016/j.neuron.2012.01.010. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Dodell-Feder D, Koster-Hale J, Bedny M, Saxe R. fMRI item analysis in a theory of mind task. Neuroimage. 2011;55:705–712. doi: 10.1016/j.neuroimage.2010.12.040. [DOI] [PubMed] [Google Scholar]
- Dufour N, Redcay E, Young L, Mavros PL, Moran JM, Triantafyllou C, Gabrieli JD, Saxe R. Similar brain activation during false belief tasks in a large sample of adults with and without autism. PLoS One. 2013;8:e75468. doi: 10.1371/journal.pone.0075468. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ekman P, Rosenberg EL. What the face reveals: basic and applied studies of spontaneous expression using the Facial Action Coding System (FACS). Oxford: Oxford UP; 1997. [Google Scholar]
- Ethofer T, Anders S, Erb M, Herbert C, Wiethoff S, Kissler J, Grodd W, Wildgruber D. Cerebral pathways in processing of affective prosody: a dynamic causal modeling study. Neuroimage. 2006;30:580–587. doi: 10.1016/j.neuroimage.2005.09.059. [DOI] [PubMed] [Google Scholar]
- Etkin A, Egner T, Kalisch R. Emotional processing in anterior cingulate and medial prefrontal cortex. Trends Cogn Sci. 2011;15:85–93. doi: 10.1016/j.tics.2010.11.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Fairhall SL, Caramazza A. Brain regions that represent amodal conceptual knowledge. J Neurosci. 2013;33:10552–10558. doi: 10.1523/JNEUROSCI.0051-13.2013. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Fox CJ, Moon SY, Iaria G, Barton JJ. The correlates of subjective perception of identity and expression in the face network: an fMRI adaptation study. Neuroimage. 2009;44:569–580. doi: 10.1016/j.neuroimage.2008.09.011. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Freeman J, Brouwer GJ, Heeger DJ, Merriam EP. Orientation decoding depends on maps, not columns. J Neurosci. 2011;31:4792–4804. doi: 10.1523/JNEUROSCI.5160-10.2011. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Friston KJ, Harrison L, Penny W. Dynamic causal modelling. Neuroimage. 2003;19:1273–1302. doi: 10.1016/S1053-8119(03)00202-7. [DOI] [PubMed] [Google Scholar]
- Furl N, Hadj-Bouziane F, Liu N, Averbeck BB, Ungerleider LG. Dynamic and static facial expressions decoded from motion-sensitive areas in the macaque monkey. J Neurosci. 2012;32:15952–15962. doi: 10.1523/JNEUROSCI.1992-12.2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Fusar-Poli P, Placentino A, Carletti F, Landi P, Allen P, Surguladze S, Benedetti F, Abbamonte M, Gasparotti R, Barale F, Perez J, McGuire P, Politi P. Functional atlas of emotional faces processing: a voxel-based meta-analysis of 105 functional magnetic resonance imaging studies. J Psychiatry Neurosci. 2009;34:418–432. [PMC free article] [PubMed] [Google Scholar]
- Goldman AI, Sripada CS. Simulationist models of face-based emotion recognition. Cognition. 2005;94:193–213. doi: 10.1016/j.cognition.2004.01.005. [DOI] [PubMed] [Google Scholar]
- Gopnik A, Wellman HM. Why the child's theory of mind really is a theory. Mind Lang. 1992;7:145–171. doi: 10.1111/j.1468-0017.1992.tb00202.x. [DOI] [Google Scholar]
- Gothard KM, Battaglia FP, Erickson CA, Spitler KM, Amaral DG. Neural responses to facial expression and face identity in the monkey amygdala. J Neurophysiol. 2007;97:1671–1683. doi: 10.1152/jn.00714.2006. [DOI] [PubMed] [Google Scholar]
- Hadj-Bouziane F, Liu N, Bell AH, Gothard KM, Luh WM, Tootell RB, Murray EA, Ungerleider LG. Amygdala lesions disrupt modulation of functional MRI activity evoked by facial expression in the monkey inferior temporal cortex. Proc Natl Acad Sci U S A. 2012;109:E3640–E3648. doi: 10.1073/pnas.1218406109. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Harmer CJ, Thilo KV, Rothwell JC, Goodwin GM. Transcranial magnetic stimulation of medial–frontal cortex impairs the processing of angry facial expressions. Nat Neurosci. 2001;4:17–18. doi: 10.1038/82854. [DOI] [PubMed] [Google Scholar]
- Harry B, Williams MA, Davis C, Kim J. Emotional expressions evoke a differential response in the fusiform face area. Front Hum Neurosci. 2013;7:692. doi: 10.3389/fnhum.2013.00692. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Haxby JV, Gobbini MI, Furey ML, Ishai A, Schouten JL, Pietrini P. Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science. 2001;293:2425–2430. doi: 10.1126/science.1063736. [DOI] [PubMed] [Google Scholar]
- Hung CP, Kreiman G, Poggio T, DiCarlo JJ. Fast readout of object identity from macaque inferior temporal cortex. Science. 2005;310:863–866. doi: 10.1126/science.1117593. [DOI] [PubMed] [Google Scholar]
- Hynes CA, Baird AA, Grafton ST. Differential role of the orbital frontal lobe in emotional versus cognitive perspective-taking. Neuropsychologia. 2006;44:374–383. doi: 10.1016/j.neuropsychologia.2005.06.011. [DOI] [PubMed] [Google Scholar]
- Julian JB, Fedorenko E, Webster J, Kanwisher N. An algorithmic method for functionally defining regions of interest in the ventral visual pathway. Neuroimage. 2012;60:2357–2364. doi: 10.1016/j.neuroimage.2012.02.055. [DOI] [PubMed] [Google Scholar]
- Kable JW, Glimcher PW. The neural correlates of subjective value during intertemporal choice. Nat Neurosci. 2007;10:1625–1633. doi: 10.1038/nn2007. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kamitani Y, Tong F. Decoding the visual and subjective contents of the human brain. Nat Neurosci. 2005;8:679–685. doi: 10.1038/nn1444. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kanwisher N, Yovel G. The fusiform face area: a cortical region specialized for the perception of faces. Philos Trans R Soc Lond B Biol Sci. 2006;361:2109–2128. doi: 10.1098/rstb.2006.1934. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kaufman A. KBIT-2: Kaufman Brief Intelligence Test. Bloomington, MN: NCS Pearson; 1990. [Google Scholar]
- Kreifelts B, Ethofer T, Shiozawa T, Grodd W, Wildgruber D. Cerebral representation of non-verbal emotional perception: fMRI reveals audiovisual integration area between voice- and face-sensitive regions in the superior temporal sulcus. Neuropsychologia. 2009;47:3059–3066. doi: 10.1016/j.neuropsychologia.2009.07.001. [DOI] [PubMed] [Google Scholar]
- Kriegeskorte N, Kievit RA. Representational geometry: integrating cognition, computation, and the brain. Trends Cogn Sci. 2013;17:401–412. doi: 10.1016/j.tics.2013.06.007. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kriegeskorte N, Goebel R, Bandettini P. Information-based functional brain mapping. Proc Natl Acad Sci U S A. 2006;103:3863–3868. doi: 10.1073/pnas.0600244103. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lin A, Adolphs R, Rangel A. Social and monetary reward learning engage overlapping neural substrates. Soc Cogn Affect Neurosci. 2012;7:274–281. doi: 10.1093/scan/nsr006. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lindquist KA, Wager TD, Kober H, Bliss-Moreau E, Barrett LF. The brain basis of emotion: a meta-analytic review. Behav Brain Sci. 2012;35:121–143. doi: 10.1017/S0140525X11000446. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Mattavelli G, Cattaneo Z, Papagno C. Transcranial magnetic stimulation of medial prefrontal cortex modulates face expressions processing in a priming task. Neuropsychologia. 2011;49:992–998. doi: 10.1016/j.neuropsychologia.2011.01.038. [DOI] [PubMed] [Google Scholar]
- Mitchell JP. Inferences about mental states. Philos Trans R Soc Lond B Biol Sci. 2009;364:1309–1316. doi: 10.1098/rstb.2008.0318. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Mitchell TM, Hutchinson R, Niculescu RS, Pereira F, Wang X, Just M, Newman S. Learning to decode cognitive states from brain images. Mach Learn. 2004;57:145–175. doi: 10.1023/B:MACH.0000035475.85309.1b. [DOI] [Google Scholar]
- Niedenthal PM. Embodying emotion. Science. 2007;316:1002–1005. doi: 10.1126/science.1136930. [DOI] [PubMed] [Google Scholar]
- Norman KA, Polyn SM, Detre GJ, Haxby JV. Beyond mind-reading: multi-voxel pattern analysis of fMRI data. Trends Cogn Sci. 2006;10:424–430. doi: 10.1016/j.tics.2006.07.005. [DOI] [PubMed] [Google Scholar]
- Oosterhof NN, Todorov A. Shared perceptual basis of emotional expressions and trustworthiness impressions from faces. Emotion. 2009;9:128–133. doi: 10.1037/a0014520. [DOI] [PubMed] [Google Scholar]
- Op de Beeck HP. Against hyperacuity in brain reading: spatial smoothing does not hurt multivariate fMRI analyses? Neuroimage. 2010;49:1943–1948. doi: 10.1016/j.neuroimage.2009.02.047. [DOI] [PubMed] [Google Scholar]
- Ortony A. The cognitive structure of emotions. Cambridge, UK: Cambridge UP; 1990. Reactions to Events I; p. 228. [Google Scholar]
- Peelen MV, Atkinson AP, Vuilleumier P. Supramodal representations of perceived emotions in the human brain. J Neurosci. 2010;30:10127–10134. doi: 10.1523/JNEUROSCI.2161-10.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Pelphrey KA, Morris JP, Michelich CR, Allison T, McCarthy G. Functional anatomy of biological motion perception in posterior temporal cortex: an fMRI study of eye, mouth and hand movements. Cereb Cortex. 2005;15:1866–1876. doi: 10.1093/cercor/bhi064. [DOI] [PubMed] [Google Scholar]
- Pereira F, Mitchell T, Botvinick M. Machine learning classifiers and fMRI: a tutorial overview. Neuroimage. 2009;45:S199–S209. doi: 10.1016/j.neuroimage.2008.11.007. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Pessoa L, Adolphs R. Emotion processing and the amygdala: from a “low road” to “many roads” of evaluating biological significance. Nat Rev Neurosci. 2010;11:773–783. doi: 10.1038/nrn2920. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Pitcher D. Facial expression recognition takes longer in the posterior superior temporal sulcus than in the occipital face area. J Neurosci. 2014;34:9173–9177. doi: 10.1523/JNEUROSCI.5038-13.2014. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Pitcher D, Dilks DD, Saxe RR, Triantafyllou C, Kanwisher N. Differential selectivity for dynamic versus static information in face selective cortical regions. Neuroimage. 2011;56:2356–2363. doi: 10.1016/j.neuroimage.2011.03.067. [DOI] [PubMed] [Google Scholar]
- Plassmann H, O'Doherty J, Rangel A. Orbitofrontal cortex encodes willingness to pay in everyday economic transactions. J Neurosci. 2007;27:9984–9988. doi: 10.1523/JNEUROSCI.2131-07.2007. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ruff CC, Fehr E. The neurobiology of rewards and values in social decision making. Nat Rev Neurosci. 2014;15:549–562. doi: 10.1038/nrn3776. [DOI] [PubMed] [Google Scholar]
- Said CP, Moore CD, Engell AD, Todorov A, Haxby JV. Distributed representations of dynamic facial expressions in the superior temporal sulcus. J Vis. 2010a;10:11. doi: 10.1167/10.5.11. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Said CP, Moore CD, Norman KA, Haxby JV, Todorov A. Graded representations of emotional expressions in the left superior temporal sulcus. Front Syst Neurosci. 2010b;4:6. doi: 10.3389/fnsys.2010.00006. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Saxe R, Kanwisher N. People thinking about thinking people: the role of the temporo-parietal junction in "theory of mind." Neuroimage. 2003;19:1835–1842. doi: 10.1016/S1053-8119(03)00230-1. [DOI] [PubMed] [Google Scholar]
- Scherer KR, Meuleman B. Human emotion experiences can be predicted on theoretical grounds: evidence from verbal labeling. PLoS One. 2013;8:e58166. doi: 10.1371/journal.pone.0058166. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Seung HS, Sompolinsky H. Simple models for reading neuronal population codes. Proc Natl Acad Sci U S A. 1993;90:10749–10753. doi: 10.1073/pnas.90.22.10749. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Shamay-Tsoory SG, Tomer R, Berger BD, Aharon-Peretz J. Characterization of empathy deficits following prefrontal brain damage: the role of the right ventromedial prefrontal cortex. J Cogn Neurosci. 2003;15:324–337. doi: 10.1162/089892903321593063. [DOI] [PubMed] [Google Scholar]
- Shamay-Tsoory SG, Aharon-Peretz J, Perry D. Two systems for empathy: a double dissociation between emotional and cognitive empathy in inferior frontal gyrus versus ventromedial prefrontal lesions. Brain. 2009;132:617–627. doi: 10.1093/brain/awn279. [DOI] [PubMed] [Google Scholar]
- Shamir M, Sompolinsky H. Implications of neuronal diversity on population coding. Neural Comput. 2006;18:1951–1986. doi: 10.1162/neco.2006.18.8.1951. [DOI] [PubMed] [Google Scholar]
- Simmons JP, Nelson LD, Simonsohn U. False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol Sci. 2011;22:1359–1366. doi: 10.1177/0956797611417632. [DOI] [PubMed] [Google Scholar]
- Singer T, Critchley HD, Preuschoff K. A common role of insula in feelings, empathy and uncertainty. Trends Cogn Sci. 2009;13:334–340. doi: 10.1016/j.tics.2009.05.001. [DOI] [PubMed] [Google Scholar]
- Spunt RP, Lieberman MD. An integrative model of the neural systems supporting the comprehension of observed emotional behavior. Neuroimage. 2012;59:3050–3059. doi: 10.1016/j.neuroimage.2011.10.005. [DOI] [PubMed] [Google Scholar]
- Stelzer J, Chen Y, Turner R. Statistical inference and multiple testing correction in classification-based multi-voxel pattern analysis (MVPA): random permutations and cluster size control. Neuroimage. 2013;65:69–82. doi: 10.1016/j.neuroimage.2012.09.063. [DOI] [PubMed] [Google Scholar]
- Stevenson RA, James TW. Audiovisual integration in human superior temporal sulcus: inverse effectiveness and the neural processing of speech and object recognition. Neuroimage. 2009;44:1210–1223. doi: 10.1016/j.neuroimage.2008.09.034. [DOI] [PubMed] [Google Scholar]
- Tanaka K. Neuronal mechanisms of object recognition. Science. 1993;262:685–688. doi: 10.1126/science.8235589. [DOI] [PubMed] [Google Scholar]
- Tversky A, Kahneman D. Loss aversion in riskless choice: a reference-dependent model. Q J Econ. 1991;106:1039–1061. doi: 10.2307/2937956. [DOI] [Google Scholar]
- Ullman S. Three-dimensional object recognition based on the combination of views. Cognition. 1998;67:21–44. doi: 10.1016/S0010-0277(98)00013-4. [DOI] [PubMed] [Google Scholar]
- Völlm BA, Taylor AN, Richardson P, Corcoran R, Stirling J, McKie S, Deakin JF, Elliott R. Neuronal correlates of theory of mind and empathy: a functional magnetic resonance imaging study in a nonverbal task. Neuroimage. 2006;29:90–98. doi: 10.1016/j.neuroimage.2005.07.022. [DOI] [PubMed] [Google Scholar]
- Winecoff A, Clithero JA, Carter RM, Bergman SR, Wang L, Huettel SA. Ventromedial prefrontal cortex encodes emotional value. J Neurosci. 2013;33:11032–11039. doi: 10.1523/JNEUROSCI.4317-12.2013. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Xu X, Biederman I. Loci of the release from fMRI adaptation for changes in facial expression, identity, and viewpoint. J Vis. 2010;10:36. doi: 10.1167/10.14.36. [DOI] [PubMed] [Google Scholar]
- Zaki J, Ochsner KN. The need for a cognitive neuroscience of naturalistic social cognition. Ann N Y Acad Sci. 2009;1167:16–30. doi: 10.1111/j.1749-6632.2009.04601.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Zaki J, Bolger N, Ochsner K. It takes two: the interpersonal nature of empathic accuracy. Psychol Sci. 2008;19:399–404. doi: 10.1111/j.1467-9280.2008.02099.x. [DOI] [PubMed] [Google Scholar]
- Zaki J, Weber J, Bolger N, Ochsner K. The neural bases of empathic accuracy. Proc Natl Acad Sci U S A. 2009a;106:11382–11387. doi: 10.1073/pnas.0902666106. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Zaki J, Bolger N, Ochsner K. Unpacking the informational bases of empathic accuracy. Emotion. 2009b;9:478–487. doi: 10.1037/a0016551. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Zaki J, Hennigan K, Weber J, Ochsner KN. Social cognitive conflict resolution: contributions of domain-general and domain-specific neural systems. J Neurosci. 2010;30:8481–8488. doi: 10.1523/JNEUROSCI.0382-10.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]