Abstract
Although emotion words such as “anger,” “disgust,” “happiness,” or “pride” are often thought of as mere labels, increasing evidence points to language as being important for emotion perception and experience. Emotion words may be particularly important for facilitating access to emotion concepts. Indeed, deficits in semantic processing or impaired access to emotion words interfere with emotion perception. Yet, it is unclear what these behavioral findings mean for affective neuroscience. Thus, we examined the brain areas that support processing of emotion words using representational similarity analysis of functional magnetic resonance imaging data (N = 25). In the task, participants saw 10 emotion words (e.g. “anger,” “happiness”) while in the scanner. Participants rated each word based on its valence on a continuous scale ranging from 0 (Pleasant/Good) to 1 (Unpleasant/Bad) to ensure they were processing the words. Our results revealed that a diverse range of brain areas including prefrontal, midline cortical, and sensorimotor regions contained information about emotion words. Notably, our results overlapped with many regions implicated in decoding emotion experience by prior studies. Our results raise questions about what processes are being supported by these regions during emotion experience.
Keywords: emotion words, language, MVPA, representational similarity analysis
Words such as “anger” or “fear” are typically viewed as mere labels of the emotions they signify. Yet, accumulating evidence from behavioral and neuropsychological studies suggests that emotion words may play a more fundamental role in constructing emotion representations (Lindquist and Gendron 2013, Lindquist et al. 2015b, 2016). For instance, developmental studies examining the ages at which children can distinguish between facial expression stimuli have found a “verbal superiority effect” (e.g. Russell and Bullock 1985, Widen 2013; see also Hoemann et al. 2019, Nook et al. 2017). That is, children are better able to match facial expression stimuli to emotion words (e.g. a scowling face to the word “angry”) than to other similar facial expression stimuli (i.e. another scowling face). Similarly, adults are also faster and more accurate when matching facial expression stimuli with emotion words than with other facial expression stimuli (Nook et al. 2015).
The notion that words are important for emotion representation is bolstered by causal evidence that emotion words can influence emotion perception. For instance, interfering with emotion word processing also interferes with perceiving facial expressions as belonging to particular emotion categories (e.g. by using articulatory suppression, Roberson and Davidoff 2000; or semantic satiation, Gendron et al. 2012, Lindquist et al. 2006; for a review see Fugate 2013). Damage to the brain regions important for semantic processing in cases of semantic aphasia also impairs emotion perception. For example, damage to the anterior temporal lobes (ATLs) produces deficits in verbal tasks such as object labeling and word completion (Mummery et al. 2000, Chan et al. 2001, Snowden et al. 2018) and also results in deficits in emotion perception, even when tasks do not explicitly require using emotion labels (Lindquist et al. 2014, Souter et al. 2021). These findings suggest that emotion words, or more precisely the semantic categories they connote, are commonly accessed even in the routine processing of emotion perceptions.
The implications of this work for the neuroscience of emotion have yet to be fully understood, in part, because little is known regarding which brain regions carry information pertaining to emotion word content. Studies in affective neuroscience have typically focused on examining the brain regions that encode or decode full-fledged experiences (e.g. Sabatinelli et al. 2005, Kassam et al. 2013, Bush et al. 2017) or perceptions (e.g. Adolphs et al. 1999, Kesler et al. 2001, Matsuda et al. 2013, Skerry and Saxe 2014, Liang et al. 2017) of emotion. However, if the semantic concepts connoted by emotion words are routinely retrieved during the experience of emotions, then it stands to reason that at least some of the brain regions that carry representational content for emotional experiences and perceptions may do so because they carry information concerning semantic representations of emotion words.
The findings from prior functional magnetic resonance imaging (fMRI) studies are suggestive of this possibility. Many of the brain regions that are often engaged during the experience and/or perception of emotion (Kober et al. 2008, Vytal and Hamann 2010, Lindquist et al. 2012) have also been implicated in semantic processing (Binder and Desai 2011, Satpute and Lindquist 2021). These regions include the lateral temporal cortex, the ATL, and the ventrolateral (vlPFC) and dorsomedial prefrontal cortices (dmPFC). In prior work, we (Wager et al. 2015, Satpute and Lindquist 2019, 2021) and others (Ochsner and Barrett 2001, Barrett et al. 2007) suggested that the reason why so many widely distributed brain regions are implicated in emotion is because an emotional experience is a construction involving a collection of neurocognitively dissociable psychological processes (for similar ideas from an appraisal perspective, see Brosch and Sander 2013, Smith and Lane 2015, Sander et al. 2018). Consistent with this idea, in previous research we found functional dissociations between the amygdala, which tracked with affective arousal; the dmPFC, which tracked the focus of attention on affective dimensions of experience; and the vlPFC, which tracked semantic categorization into verbal labels (Satpute et al. 2013). While all of these brain regions are often engaged during experiences and perceptions of emotion, they appear to play distinct functional roles when constructing these states. These findings are consistent with the constructionist account that emotion perceptions and experiences involve a wide variety of more basic psychological ingredients (Barrett 2006, 2009, 2017, Lindquist and Barrett 2012, Lindquist 2013).
However, the univariate approach taken in our prior work (Satpute et al. 2013) limits the ways in which content associated with emotion words may be encoded by the brain. That is, while this prior work has identified which brain regions are engaged during the process of semantic categorization in general, it remains unclear whether and which brain regions carry content information that differentiates between emotion word representations. This same issue pertains to studies on “affect labeling,” which have also implicated the vlPFC in the process of categorization (Lieberman et al. 2007, 2011, Brooks et al. 2017), but not the semantic contents of emotion word representations.
Perhaps more pertinent to this research question are studies using multivoxel pattern analysis (MVPA), wherein the patterns of activity across voxels are used to decode certain psychological states or task conditions (Norman et al. 2006, Haxby 2012, Poldrack et al. 2012). MVPA studies in affective neuroscience have focused in particular on decoding specific categories of emotional experiences (Kragel and LaBar 2015, Wager et al. 2015, Saarimäki et al. 2016), emotion perceptions (Ethofer et al. 2009, Peelen et al. 2010, Said et al. 2010), or valence and arousal states (Baucom et al. 2012, Bush et al. 2017, 2018, Kim et al. 2017, Shinkareva et al. 2020). In concert with univariate studies, results from studies using MVPA also suggest that brain regions that carry information about emotion categories overlap with those previously implicated in semantic processing (Satpute and Lindquist 2019, 2021). Even so, these studies typically view representations tied to emotion words as an issue in study design (e.g. Kassam et al. 2013, Saarimäki et al. 2016), rather than an important component of emotion construction (Barrett et al. 2011, Lindquist and Gendron 2013, Lindquist et al. 2015b, Mesquita 2022).
The current study
Here, we ask which brain regions carry information about emotion word content even in the absence of a full-fledged emotional experience. We used an existing fMRI dataset in which participants were simply shown emotion words (e.g. “Fear,” “Guilt,” “Calm,” “Pride,” etc.) and were instructed to judge each word for its hedonic (pleasant/unpleasant) or evaluative (good/bad) valence properties in alternating blocks. We previously reported which brain regions are differentially associated with hedonic versus evaluative emotion knowledge retrieval by comparing activity during these blocks (Lee et al. 2022). In the current study, we focused instead on which brain regions carry information pertaining to emotion words. Importantly, the simple valence judgments were useful for ensuring that participants accessed the semantic rather than just lexical aspects of words. We then used a type of representational similarity analysis (RSA) to identify which brain regions carry informational content about emotion words.
Method
Participants
Twenty-five right-handed, native English speakers (11 females, 14 males) aged 21–40 years (M = 29.84, SD = 5.46) took part in the study. Inclusion criteria were being right-handed, aged between 18 and 55 years, a native English speaker, having no non-removable metal in the body, and having no history of psychiatric illness. Our sample size was chosen based on sample sizes of contemporary MVPA studies of emotion, which typically included 21 participants or fewer (Kassam et al. 2013, Shinkareva et al. 2014, Kragel and LaBar 2015, Saarimäki et al. 2016). Prior to participation, all participants provided informed consent, and all procedures were approved by the California Institute of Technology’s institutional review board. We note that these participants’ data were previously published in Lee et al. (2022).
Experimental design
While undergoing fMRI, participants engaged in an emotion concept valuation task. Specifically, they were shown blocks of emotion words consisting of an equal number of negatively- and positively-valenced emotion concepts: Anger, Disgust, Fear, Guilt, Sad, Calm, Excited, Happy, Lust, and Pride. In each block, each word was presented serially on the screen for 4 s in randomized order with a variable interstimulus interval [2–4 s], and participants were instructed to judge each emotion word on one of two valuation scales. The ratings were made during the 4-s period that the emotion words were on screen. In the hedonic rating blocks, participants were first shown the instruction cue, “Pleasant to Unpleasant” on the screen for 5 s. During the subsequent block of emotion words, participants made pleasantness ratings of the emotion concepts connoted by the emotion words on a continuous scale [0–1] from “Pleasant” to “Unpleasant” using a trackball mouse. In the evaluative rating blocks, participants were instead shown the instruction cue, “Good to Bad” on the screen for 5 s, and made corresponding ratings during the block of emotion words. These valuation judgments were included for two reasons. First, for another study, we were interested in identifying which brain regions were associated with focusing on evaluative versus hedonic emotion knowledge. We reported these findings in another manuscript (Lee et al. 2022). Second, and more relevant for the present purposes, these valence judgments helped to ensure that participants were engaged in the task and were retrieving semantic knowledge about the emotion words.
Participants completed 8 blocks (80 trials, 8 trials/emotion word) of the task, evenly divided into 4 blocks per valuation condition, with blocks randomized across two runs. Finally, before each block, participants completed a standard active baseline task (Stark and Squire 2001); participants were serially presented with eight single-digit numbers [1, 3, 4, 5, 5, 5, 6, 8] in a randomized order for 1.7 s each, and instructed to make a button response if the number was a “5.” Participants were trained on the task by completing the equivalent of one run outside of the scanner. For interpretability, we reverse scored the ratings such that unpleasant/bad was coded as 0 and pleasant/good was coded as 1.
Apparatus
The fMRI was conducted using a 3 T Siemens TIM Trio (Erlangen, Germany) equipped with a 32-channel head coil. Functional images (60 slices) were acquired with a T2*-weighted echo planar imaging pulse sequence (repetition time [TR] = 1 s, echo time [TE] = 30 ms, flip angle = 60°, 2.5 mm isotropic resolution, interleaved transverse acquisition, multi-band acceleration factor = 4). Structural images were acquired using a T1-weighted sequence (TR = 2.4 s, TE = 2.6 ms, flip angle = 8°, 1 mm isotropic resolution). During fMRI, the experimental task was projected onto a screen that participants viewed through a mirror attached to the head coil.
Data analysis
Preprocessing of functional images was done using the fmriprep pipeline (https://fmriprep.readthedocs.io/en/stable/index.html). Preprocessing included coregistration of functional images with T1-weighted structural images, motion and slice-time correction, and normalization of functional images to the MNI-ICBM152 template. Afterward, functional images were spatially smoothed (6 mm full-width half-max). First-level models were conducted using NeuroElf v1.1 rc2 (http://neuroelf.net/) software on the MATLAB R2018b (Mathworks) platform. A general linear model was used to estimate hemodynamic responses during each emotion word, separately for each run and each participant. Thus, each model included 10 regressors (one for each emotion word) convolved with a canonical hemodynamic response function, along with six motion nuisance regressors and a high-pass temporal filter (discrete cosine transform, 100 s). Betas from the first-level analyses were exported as NIFTI files.
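For readers less familiar with first-level modeling, the construction of a single emotion-word regressor can be sketched as follows. This is an illustrative Python sketch only (the actual models were estimated in NeuroElf/MATLAB); the onset times, HRF parameters, and function names are our own assumptions, following the common SPM-style double-gamma form.

```python
import math

TR = 1.0  # repetition time in seconds, matching the acquisition described above

def double_gamma_hrf(t, p1=6.0, p2=16.0, ratio=6.0):
    """SPM-style canonical double-gamma HRF evaluated at time t (seconds)."""
    def gamma_pdf(x, shape, scale=1.0):
        if x <= 0:
            return 0.0
        return (x ** (shape - 1) * math.exp(-x / scale)
                / (math.gamma(shape) * scale ** shape))
    # Positive response peaking ~5-6 s, minus a small late undershoot
    return gamma_pdf(t, p1) - gamma_pdf(t, p2) / ratio

def word_regressor(onsets, duration, n_scans, tr=TR):
    """Boxcar for one word's presentations, convolved with the canonical HRF."""
    box = [0.0] * n_scans
    for onset in onsets:
        for s in range(n_scans):
            if onset <= s * tr < onset + duration:
                box[s] = 1.0
    kernel = [double_gamma_hrf(i * tr) for i in range(33)]  # 32-s kernel
    reg = [0.0] * n_scans
    for s in range(n_scans):
        for k, h in enumerate(kernel):
            if s - k >= 0:
                reg[s] += box[s - k] * h
    return reg
```

In the actual models, one such regressor per emotion word (4-s presentations, as described above) enters the design matrix alongside the motion and drift nuisance terms.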
The relevant scripts for the analyses, behavioral data, and first-level input data for the RSA (see below) are available on the Open Science Framework website at this link: https://osf.io/wmszj/. Currently, the preprocessed data are not fully in Brain Imaging Data Structure format; however, they are available upon request.
Representational similarity analysis
We conducted RSA (Kriegeskorte et al. 2008, Kriegeskorte and Kievit 2013) to identify brain regions that contained information about emotion words. RSA examines how similar (or dissimilar) the pattern of activation across a set of voxels (here, beta-weights from first-level models) is during one condition in comparison to another. We specifically used the CoSMoMVPA toolbox (Oosterhof et al. 2016) in MATLAB to perform our searchlight RSA with the correlation method (Haxby 2012). We first applied a custom mask to the data due to signal dropout. Our custom mask combined the Harvard-Oxford cortical and subcortical atlas (1 mm, 25% probability) and included only voxels where data were present for 60% (N = 15) of participants (see https://osf.io/wmszj/).
After applying the mask to the data, we used the CoSMoMVPA toolbox to perform a whole-brain searchlight RSA. The searchlight analysis examined a spherical neighborhood with a radius of five voxels around each voxel. Split-half correlations for the betas were computed for on-diagonal (i.e. within emotion category) and off-diagonal values (i.e. between emotion categories). The difference was then taken between on- and off-diagonal coefficients resulting in a dissimilarity score. Values closer to zero reflect greater similarity while values further away from zero reflect greater dissimilarity.
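In toy form, the within- minus between-category split-half correlation computed for each searchlight sphere can be written as follows. This is an illustrative pure-Python sketch, not the CoSMoMVPA implementation; the dictionary input format and function names are our own.

```python
import math

def pearson(a, b):
    """Pearson correlation between two equal-length voxel-pattern vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def splithalf_information(run1, run2):
    """run1, run2: {emotion word: [beta per voxel in the searchlight sphere]}.
    Returns the mean on-diagonal correlation (same word across runs) minus
    the mean off-diagonal correlation (different words across runs)."""
    words = sorted(run1)
    on, off = [], []
    for w1 in words:
        for w2 in words:
            r = pearson(run1[w1], run2[w2])
            (on if w1 == w2 else off).append(r)
    return sum(on) / len(on) - sum(off) / len(off)
```

A sphere whose patterns are more similar within than between emotion words yields a positive score, i.e. it carries word-specific information.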
Statistical thresholding of RSA maps
We thresholded the maps of dissimilarity scores via signed permutation testing in MATLAB R2018b with 5000 iterations and α = 0.05. Specifically, we created an n-voxel vector of the average similarity score across participants for each voxel. Next, we randomly assigned each voxel to have a negative or positive value. This process was repeated across 5000 iterations, producing a matrix of n-voxels × 5000 iterations that served as our null distribution. We then added the observed data (i.e. mean similarity scores) to this matrix and obtained the cutoff point at the 95th percentile. We then created a thresholded map that masked out all similarity scores that fell below the cutoff.
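The sign-flipping procedure can be sketched as follows. This is an illustrative Python sketch, not the MATLAB code used in the study; the function names, the pooling of the null across voxels, and the fixed random seed are our simplifying assumptions.

```python
import random

def sign_permutation_cutoff(mean_scores, n_iter=5000, alpha=0.05, seed=0):
    """Sign-flip null for voxelwise mean similarity scores: each iteration
    randomly negates each voxel's score; the observed scores are appended,
    and the (1 - alpha) percentile of the pooled values is the cutoff."""
    rng = random.Random(seed)
    null = []
    for _ in range(n_iter):
        for s in mean_scores:
            null.append(s if rng.random() < 0.5 else -s)
    null.extend(mean_scores)  # include the observed data in the pool
    null.sort()
    return null[int((1 - alpha) * len(null)) - 1]

def threshold_map(mean_scores, cutoff):
    """Mask out (None) all scores falling below the cutoff."""
    return [s if s >= cutoff else None for s in mean_scores]
```

Because the null is built by flipping the sign of the observed scores, only voxels whose scores are reliably positive across participants survive the 95th-percentile cutoff.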
Follow-up analyses
In our follow-up analyses, we tested the robustness of our results and examined whether other features of the emotion words, such as word valence or word length might account for our results. In order to do so, we parcellated the regions identified by our RSA into 27 individual areas shown in prior studies to be associated with affective processing (e.g. orbitofrontal cortex, anterior cingulate cortex, the insula), concept representation (e.g. medial and lateral prefrontal cortices, precuneus, and posterior cingulate cortex), motor behavior (e.g. pre- and postcentral gyri), and visual processing of linguistic stimuli [bilateral lateral occipital cortices (LOCs), the lingual gyrus, and the fusiform gyrus].
Next, we used multiple regression to examine the extent to which factors such as valence and word length accounted for our multivariate results. For each region, in each participant, we estimated a regression equation predicting the beta-weights (activity) in each voxel during one run from the beta-weights in the other run, while controlling for the valence ratings and length of the emotion word. Specifically, we examined within-category activity across runs. Because the evaluative and hedonic valence ratings across all emotion words were highly correlated (r = 0.99, P < .001), and because differences between the two rating types were not of interest in the present study, we averaged the two sets of ratings together and used the average as the valence regressor in our follow-up analyses. All variables were z-scored prior to analyses. For each region, we then averaged the regression coefficients across participants and conducted one-tailed t-tests to examine whether those coefficients were significantly different from zero at the group level.
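The per-region regression can be sketched as follows. This is an illustrative pure-Python sketch of ordinary least squares under our assumed variable layout; the helper names are ours, and the actual analyses were run in MATLAB.

```python
import statistics

def zscore(v):
    """Standardize a variable, as done prior to the regression analyses."""
    m, sd = statistics.mean(v), statistics.stdev(v)
    return [(x - m) / sd for x in v]

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols(y, predictors):
    """Least-squares coefficients (intercept first) via the normal equations.
    predictors: list of predictor vectors, each the same length as y."""
    X = [[1.0] + [p[i] for p in predictors] for i in range(len(y))]
    k = len(X[0])
    XtX = [[sum(X[i][a] * X[i][b] for i in range(len(y))) for b in range(k)]
           for a in range(k)]
    Xty = [sum(X[i][a] * y[i] for i in range(len(y))) for a in range(k)]
    return solve(XtX, Xty)
```

For each participant and region, `ols` would take the run-2 betas as the outcome and the run-1 betas, valence ratings, and word lengths (each z-scored) as predictors; the resulting coefficients are then averaged across participants and tested against zero.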
Results
Representational similarity
Figure 1 displays the thresholded results (P < .05) of our whole-brain searchlight RSA. We found a wide range of regions that contained information about emotion knowledge. Our results included areas previously associated with affective experience (e.g. orbitofrontal cortex, anterior cingulate cortex, the insula) and default mode network areas (e.g. medial and lateral prefrontal cortex, precuneus). Our results also included a number of sensorimotor areas including the pre- and postcentral gyri, the superior and inferior parietal lobules, and wide swathes of occipital cortex. Notably, within the occipital cortex, we also found involvement of left and right LOCs in representation of emotion words. Figure 2 depicts the representational dissimilarity matrices (RDMs) for regions of interest of limbic and paralimbic, as well as default mode areas.
Figure 1.

The RSA results. Displayed are areas that contain information about emotion knowledge (P < .05). The top row shows the left hemisphere in lateral and medial views while the bottom row shows the right hemisphere in lateral and medial views. LH = left hemisphere, RH = right hemisphere.
Figure 2.

The RDM for the whole-brain representational similarity results. Dissimilarity scores were computed by averaging the correlation matrix of whole-brain cross-run activity across subjects and then subtracting the coefficients from two, such that higher numbers reflect greater dissimilarity and lower numbers reflect greater similarity. Note: Although there appears to be a difference in similarity between negatively-valenced and positively-valenced emotions on the diagonals, this effect was not significant, t(4) = −1.38, P = .24. Orange = more dissimilarity, Blue = more similarity.
Follow-up analyses
The involvement of regions associated with the generation of affect such as the anterior cingulate cortex and insula raises questions about whether involvement of these regions may be due to the valence of the emotion concepts. Similarly, involvement of motor areas may reflect ratings based on valence, and the involvement of visual regions might be accounted for by differences in the visual features of the stimuli (i.e. word length). Thus, we probed whether valence and word length might account for some of our findings in follow-up analyses using multiple regression. All standardized regression coefficients and t-test statistics are displayed in Table 1.
Table 1.
Results of follow-up analyses using multiple regression predicting neural activity related to emotion words in one run from neural activity related to emotion words in another run, while controlling for valence and word length.
| Area | x (MNI) | y (MNI) | z (MNI) | k | β (Emotion) | t | P | β (Valence) | t | P | β (Word length) | t | P |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Emotion and valence | |||||||||||||
| ACC | 0 | 24 | 18 | 94 | 0.10 | 2.40 | .02 | −0.09 | −2.97 | .01 | −0.07 | −1.55 | .14 |
| SFG | 15 | −3 | 75 | 780 | 0.24 | 5.71 | <.001 | −0.06 | −2.12 | .04 | −0.05 | −1.53 | .14 |
| L Precentral gyrus | −57 | 6 | 9 | 85 | 0.20 | 4.86 | <.001 | −0.09 | −2.52 | .02 | −0.04 | −1.03 | .31 |
| R Precentral/IFG | 57 | 6 | 6 | 141 | 0.11 | 2.48 | .02 | −0.09 | −2.83 | .01 | −0.02 | −0.70 | .49 |
| L LOC | −33 | −81 | −15 | 313 | 0.35 | 6.29 | <.001 | −0.05 | −2.23 | .04 | −0.03 | −0.62 | .54 |
| R LOC | 18 | −81 | −12 | 332 | 0.29 | 5.44 | <.001 | −0.06 | −2.81 | .01 | 0.02 | 0.95 | .35 |
| Emotion | |||||||||||||
| OFC | −15 | 42 | −24 | 171 | 0.07 | 2.36 | .03 | >−0.01 | −0.22 | .82 | <0.01 | 0.04 | .97 |
| L Anterior temporal | −48 | 9 | −30 | 136 | 0.13 | 4.82 | <.001 | −0.04 | −1.19 | .25 | −0.03 | −0.83 | .41 |
| L lPFC | −18 | 42 | −18 | 219 | 0.10 | 2.92 | .01 | −0.04 | −1.87 | .07 | −0.03 | −1.59 | .13 |
| R lPFC | 42 | 54 | −15 | 180 | 0.16 | 3.80 | <.001 | −0.05 | −1.23 | .23 | −0.04 | −1.34 | .19 |
| R Anterior temporal | 33 | 18 | −42 | 55 | 0.05 | 2.03 | .05 | −0.02 | −0.59 | .56 | 0.02 | 0.70 | .49 |
| R Superior temporal/insula | 51 | 18 | −18 | 160 | 0.11 | 2.22 | .04 | −0.06 | −1.74 | .09 | −0.04 | −1.17 | .25 |
| R precentral | 51 | 0 | 54 | 340 | 0.20 | 3.76 | <.001 | −0.05 | −1.73 | .10 | −0.03 | −0.95 | .35 |
| L postcentral/parietal lobule | −21 | −42 | 75 | 643 | 0.32 | 6.80 | <.001 | −0.04 | −1.67 | .11 | −0.01 | −0.20 | .85 |
| L postcentral/parietal | −66 | −18 | 27 | 124 | 0.25 | 4.92 | <.001 | −0.07 | −1.82 | .08 | −0.02 | −0.49 | .63 |
| R postcentral | 57 | −12 | 51 | 173 | 0.21 | 4.16 | <.001 | <0.01 | 0.07 | .95 | −0.03 | −0.67 | .51 |
| R Superior parietal | 39 | −48 | 66 | 1090 | 0.26 | 5.29 | <.001 | −0.04 | −1.33 | .19 | −0.01 | −0.25 | .81 |
| R MFG | 42 | 27 | 33 | 393 | 0.22 | 4.33 | <.001 | −0.02 | −0.54 | .59 | −0.04 | −1.12 | .27 |
| L MFG | −30 | 48 | 36 | | 0.18 | 4.31 | <.001 | −0.06 | −1.75 | .09 | −0.03 | −0.55 | .59 |
| R MTG | 69 | −27 | 0 | 136 | 0.16 | 2.88 | .01 | −0.04 | −0.95 | .35 | −0.02 | −0.61 | .55 |
| PCC/Cuneus | 9 | −60 | −3 | 256 | 0.21 | 4.93 | <.001 | −0.02 | −0.91 | .37 | −0.05 | −1.29 | .21 |
| L STG | −69 | −24 | 6 | 181 | 0.24 | 5.31 | <.001 | −0.04 | −1.17 | .25 | −0.02 | −0.69 | .50 |
| Valence | |||||||||||||
| R IFG/STG | 51 | 21 | −18 | 52 | 0.06 | 1.10 | .28 | −0.11 | −2.79 | .01 | −0.06 | −1.17 | .25 |
| Neither | |||||||||||||
| mPFC | 9 | 57 | −27 | 330 | 0.03 | 1.36 | .19 | <0.01 | 0.18 | .86 | >−0.01 | −0.18 | .86 |
| L IFG | −24 | 21 | −24 | 105 | 0.03 | 0.79 | .44 | >−0.01 | −0.12 | .91 | −0.02 | −0.92 | .37 |
| L Precuneus | −6 | −66 | 36 | 67 | 0.08 | 1.66 | .11 | −0.06 | −1.39 | .18 | <0.01 | 0.01 | .99 |
| Visual | 3 | −75 | −12 | 3443 | 0.05 | 1.39 | .18 | −0.01 | −1.96 | .06 | −0.01 | −0.94 | .36 |
Separate multiple regression equations were estimated for each region of interest. Regions under the category “Emotion and Valence” reflect regions in which activity related to emotion words was uniquely predicted by emotion words and valence even after controlling for word length. Regions under the category “Emotion” reflect regions in which only emotion words contributed unique variance (i.e. showed only representational similarity) after controlling for valence and word length. Regions labeled “Valence” reflect areas in which activity during viewing of emotion words was uniquely predicted only by valence while controlling for emotion words and word length. Finally, regions under “Neither” are regions in which representational similarity no longer held after controlling for valence and word length, and was not predicted by valence or word length. L = left hemisphere, R = right hemisphere, ACC = Anterior Cingulate Cortex, PCC = Posterior Cingulate Cortex, STG = Superior Temporal Gyrus, mPFC = Medial Prefrontal Cortex, OFC = Orbitofrontal Cortex, lPFC = Lateral Prefrontal Cortex, MFG = Middle Frontal Gyrus, SFG = Superior Frontal Gyrus, MTG = Middle Temporal Gyrus, IFG = Inferior Frontal Gyrus, MNI = Montreal Neurological Institute.
Even after controlling for valence and word length, we continued to find that 21 out of 27 regions contained information about emotion words (see Table 1 and Fig. 3). Of the regions that showed continued representational similarity for emotion words, some areas also showed sensitivity to valence (orange in Fig. 3) while others only showed sensitivity to emotion words (red in Fig. 3). Almost all of the areas that showed sensitivity to valence above and beyond emotion words and word length overlapped with regions that showed continued representational similarity to emotion words. The one exception was a cluster that included the right inferior frontal gyrus and superior temporal gyrus (green in Fig. 3). Finally, no regions showed sensitivity to word length above and beyond emotion words or valence.
Figure 3.

Regions that continued to contain information about emotion words even after controlling for hedonic valence and word length (Red). Figure also displays regions in which activity related to emotion words was uniquely predicted by emotion words and valence (Orange). One cluster that included the right inferior frontal gyrus and superior temporal gyrus was uniquely predicted by only valence (Green; bottom left figure). Lateral and medial views of the left hemisphere are displayed in the top row, while lateral and medial views of the right hemisphere are displayed in the bottom row. LH = left hemisphere, RH = right hemisphere.
Discussion
Increasing evidence points to language as playing an integral role in emotion representation (Lindquist et al. 2015a, Brooks et al. 2017, Shablack and Lindquist 2019, Satpute and Lindquist 2021). Using words such as “anger” or “fear” to convey feelings is not just descriptive of mental experience. Rather, words and language, more broadly, may influence how people develop emotion category representations (Nook et al. 2017, Hoemann et al. 2019, 2020, Shablack et al. 2020). In turn, these representations influence how they construct perceptions (Roberson and Davidoff 2000, Lindquist et al. 2006, Gendron et al. 2012) and experiences (Lindquist and Barrett 2008, Oosterwijk et al. 2010, Lee et al. 2018) of emotion. Yet, the implications of these behavioral findings for the neural representation of emotion have remained unclear.
Thus, in order to extend the behavioral work on language and emotion to affective neuroscience, we investigated the brain regions that may carry information pertaining to emotion concepts when simply processing emotion words. In our study, participants made simple semantic judgments regarding the valence of certain emotion words in the complete absence of evocative stimulus presentations. An RSA showed that brain regions containing information pertaining to emotion words were widely distributed across limbic and paralimbic regions, prefrontal and midline cortical areas, and sensorimotor areas even after controlling for valence and word length. Our findings suggest that the mere retrieval of emotion knowledge from emotion words can lead to widely distributed patterns of activation throughout much of the brain.
Our study lies at the intersection of two disparate research areas in affective neuroscience: one on the neuroscience of language and emotion, the other on the neural representation of discrete emotions. The first area has focused on the neurocognitive processes involved in putting feelings into words, such as identifying which brain regions underlie attending to and labeling feelings (e.g. Burklund et al. 2014, Lieberman et al. 2007, Satpute et al. 2013, Taylor et al. 2003). Data are typically averaged across emotion categories in univariate analyses to reveal brain regions that support verbal categorization, introspection, etc. The second research area has focused on identifying patterns of activation that distinguish between experiences of different emotions (e.g. Kassam et al. 2013, Kragel and LaBar 2015, Wager et al. 2015, Saarimäki et al. 2016). Participants are induced to feel various emotions using evocative stimuli, and MVPA is used to identify which brain regions carry information that may distinguish between experiences from distinct emotion categories. These research areas have largely developed independently of one another. Here, our study serves as a bridge that may open up new directions in understanding the neural basis of both putting feelings into words, and how emotion words may contribute to emotion category representations. Below, we discuss the implications of our findings for these two areas in turn, and we further comment on the implications of these findings for emotion theory and for individual differences in language and emotion processing, such as in alexithymia.
Putting feelings into words
The ability to communicate feelings with words may provide unique mechanistic pathways to process and regulate feelings (Greenberg 2011, Lieberman et al. 2011, Niles et al. 2015, Torre and Lieberman 2018, Nook et al. 2021). In part for these reasons, one branch of research in affective neuroscience has focused on the neural systems that underlie the ability to attend to and label or categorize perceptions and experiences of emotion (Lane et al. 1997, Morawetz et al. 2017, Torre and Lieberman 2018, Satpute and Lindquist 2021). However, most of this work has focused on the cognitive processes involved in matching or labeling emotion words with perceptions or feelings of emotion irrespective of the particular contents of specific emotion words. For example, studies typically compare trials in which affective faces are matched to emotion word labels versus various control conditions, irrespective of whether those trials involve different emotion category content (e.g. Lieberman et al. 2007). These task-level effects are typically subjected to univariate analyses and tend to produce more focal activation patterns in particular brain regions, such as the ventrolateral prefrontal cortex (Satpute et al. 2013).
In contrast to those studies, we used MVPA to identify brain regions that carry informational content in their distribution of functional activation patterns pertaining to specific emotion words. In doing so, we found that information about emotion content was widely distributed throughout much of the brain. These findings complement prior work on affect labeling and emotion categorization by illustrating how conceptual content for specific emotion words, too, may be an important factor for developing a complete understanding on the neuroscience of language and emotion (also see, Satpute et al. 2016). Of interest to future work is understanding how certain cognitive processes (e.g. demands placed on the retrieval and selection of particular emotion words) relate with the neural representation of specific emotion concepts.
Implications for studies on the neural signature of emotion
Several prior studies have shown that the information contained in patterns of activity across the brain can be used to decode emotional categories (Kassam et al. 2013, Kragel and LaBar 2015, Wager et al. 2015, Saarimäki et al. 2016). Some of these studies have claimed to identify “neural signatures” of emotion, implying that a given emotion category has a specific, monolithic activation pattern that underlies instances from the category. However, we and others have shown that the psychological meaning of these patterns of activity is quite ambiguous (Clark-Polner et al. 2016). For instance, in prior work, we found that the overwhelming majority of fear-predictive patterns of activity are actually content dependent; different brain regions predict increasing degrees of fear depending on whether the situation involves heights or spiders (Wang et al. 2022; see McVeigh et al. 2024 for a similar argument concerning peripheral autonomic predictors of fear). In our other work, we have argued that these results suggest that fear does not involve a single brain pattern or state. Rather, fear may instead involve a collection of brain states that depend on the person and situation.
Even though participants in our study were not induced to experience emotions, our findings (see Figs 1 and 3) bear a striking resemblance to those reported in prior MVPA studies of emotional experience that used highly evocative affective stimuli (as reviewed in Kragel and LaBar 2016, Nummenmaa and Saarimäki 2019, Satpute and Lindquist 2019, 2021). To be sure, an ideal study would directly compare patterns of activation during full-fledged emotional experiences with those evoked by mere processing of emotion words, to determine whether, and in which brain regions, neural codes are shared. While our study is limited in this respect, there are a few noteworthy observations. In both cases, patterns of activity carrying information about emotion categories are widely distributed throughout the brain and appear to involve many of the same regions. This overlap raises important questions about the degree to which the neural codes underlying emotion classification pertain to affective aspects of the emotional experience versus other components, including those evoked by emotion word representations. In that regard, our findings dovetail with recent work showing that people can readily indicate pleasure or displeasure based on relatively unemotional, semantic associations (Itkes et al. 2017, Itkes and Kron 2019, Hamzani et al. 2020). Prior MVPA studies did not include conditions comparing decoding accuracy for relatively shallow, conceptual representations of emotion versus deeply felt instances of emotion, which would be an important avenue to pursue going forward.
The role of language in emotion representation
Our findings also have theoretical implications regarding the role of emotion words in the neural basis of emotion representation. By some accounts, one might expect that only “higher-level,” multimodal association areas would carry information about emotion categories when accessed via semantic processing of emotion words. Certain lines of work in the neuroscience of language have pointed to multimodal association areas (e.g. the lateral temporal cortex and the anterior medial and ventrolateral prefrontal cortices) as being involved in semantic processing more generally (Binder et al. 2009). However, our findings showed that primary somatosensory areas also carried information that distinguished between emotion words. These early sensory areas have also been hypothesized to contain information pertaining to “felt” emotions (Damasio and Carvalho 2013).
Our findings are consistent with certain constructionist theories that argue emotion words play a constitutive role in the construction of emotion (Lindquist et al. 2015a, Brooks et al. 2017, Satpute and Lindquist 2019, 2021). By this account, emotion words serve to catalyze the formation of emotion concept representations (e.g. during childhood development). Consistent with this idea, the ability of young children to differentiate sensory stimuli (e.g. emotional faces) into distinct emotion categories above and beyond valence coincides closely with the development of language (Widen 2016, Nook et al. 2017, Hoemann et al. 2019, 2020). Conversely, the absence of emotion word usage during development can also interfere with the formation of discrete emotion representations (as in alexithymia or “affective agnosia”; Taylor et al. 1999, Taylor and Bagby 2000, Lane et al. 2015).
The notion that language plays an important role in concept formation is not unique to emotion. In cognitive psychology, verbal labels catalyze the formation of concept representations in object perception, too, even when words are incidental to the task at hand (Lupyan et al. 2007). Aligning with constructionist theories of emotion, grounded and embodiment theories of semantic representation in cognitive psychology suggest that concept representations of words are “grounded” in, or derive their meaning from, the sensory-motor representations of the objects themselves (Barsalou 1999, 2008, Gallese and Lakoff 2005, Wilson-Mendenhall et al. 2013, Mazzuca et al. 2021). For instance, when participants are asked to confirm whether or not a noun (e.g. “tomato”) can be described by features such as shape (e.g. roundness) and color (e.g. red), this engages sensorimotor regions involved in shape and color perception (Fernandino et al. 2016). It follows that the meanings of emotion words like “anger,” “fear,” “happy,” and “pride” are also grounded in the instances wherein those words were used, such that merely reading a word involves pattern completion of aspects of the affective instances themselves. From this view, the distinctions between emotion concepts and emotional experiences are not necessarily differences of kind (cf. Adolphs 2017), but rather reflect an artificial boundary (Hoemann and Feldman Barrett 2019), wherein pattern completion produces some amount of overlapping activity whether one is reading emotion words or being presented with evocative stimuli.
Alexithymia and the neural representations of emotion words
Our findings also raise new questions and research directions concerning individual differences in the ability to put feelings into words (Taylor and Bagby 2000, Tugade et al. 2004, Lane et al. 2020, Hoemann et al. 2021). Many people find it difficult to understand and describe their experiences in terms of emotions. This trait has been most commonly referred to as alexithymia (Sifneos 1996), although it has also been linked to the closely related constructs of affective agnosia (Lane et al. 2020) and emotion granularity (Tugade et al. 2004; for a conceptual framework on emotion expertise, see Hoemann et al. 2021). Here, we use the term alexithymia for brevity in our discussion, but we note that the implications may extend to these overlapping constructs as well.
Alexithymia has been linked with a wide variety of clinical conditions and an inability to reap the benefits of psychotherapy (Sifneos 1996, Taylor et al. 1999). Several potential mechanisms may explain alexithymia, depending in part on the level of analysis (e.g. sociocultural, developmental, or neurocognitive; Taylor et al. 1999). Focusing on the neurocognitive level, it is possible that high alexithymia is associated with general processes pertaining to the retrieval of emotion words, regardless of the specific content of those words. By this processing account, certain processes (e.g. affect labeling and emotion categorization) as studied in previous work may underlie individual differences in alexithymia. Another (not mutually exclusive) possibility is that high alexithymia is associated with degraded representational content (i.e. low precision) for emotion words. By this representational account, we might expect that the neural variation accounting for individual differences in alexithymia would be explained by measures extracted from MVPA of specific emotion words, using a design similar to the present study.
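To make the representational account concrete, one way such a measure could be extracted is sketched below: a per-participant “precision” index computed as the difference between within-word and between-word pattern similarity. This is a hypothetical illustration with toy data, not an analysis we performed; the choice of similarity metric and the idea of relating such scores to alexithymia questionnaires are assumptions of the sketch.

```python
import numpy as np

def representational_precision(patterns, labels):
    """Mean within-category minus mean between-category pattern similarity
    (Pearson r across voxels). Higher values indicate more distinct
    neural representations of different emotion words."""
    sim = np.corrcoef(patterns)            # trial-by-trial similarity matrix
    labels = np.asarray(labels)
    same = labels[:, None] == labels[None, :]
    off_diag = ~np.eye(len(labels), dtype=bool)
    return sim[same & off_diag].mean() - sim[~same].mean()

# Toy data: two trials each of two emotion words over four voxels
patterns = np.array([[1., 0., 0., 1.],
                     [1., 0., 0., 1.],
                     [0., 1., 1., 0.],
                     [0., 1., 1., 0.]])
precision = representational_precision(patterns,
                                       ["anger", "anger", "sad", "sad"])
```

Under the representational account, such per-participant scores would be expected to correlate negatively with alexithymia measures, which future studies could test directly.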
Limitations
The results of our study should be understood in the context of its limitations. One limitation of our study is that we did not include a condition that induced full-fledged emotion experiences in participants. In future studies, a direct comparison between neural activity during processing of emotion words and full-blown emotion experiences may elucidate why there is an overlap between regions involved in processing emotion words and regions previously implicated in induced emotion experiences.
Another limitation of our study is that we did not include ratings of arousal for the emotion words. It is possible that arousal accounts for the finding that neural representations of emotion words showed greater within-category than cross-category similarity. For instance, neural activity related to high arousal emotion words might be similar to that for other high arousal emotion words, which would produce greater similarity along the diagonal of the RDMs on average. However, we note that although we were unable to statistically control for arousal, the results of the whole-brain RDM (Fig. 2) do not seem consistent with this arousal account. For example, the arousal account would predict that “anger” shows greater similarity to “excited,” given that both are often considered high arousal emotions. Likewise, “sad” should show greater similarity to “calm,” given that both are often considered low arousal emotions. Yet, we did not find this to be the case. Nevertheless, future studies would benefit from including ratings of arousal alongside ratings of valence to account for the possibility that these broader affective factors influence neural representations of emotion words. Moreover, including ratings of valence and arousal may be critical for studies that seek to understand the neural representations of full-blown emotion experiences, as our results overlap with regions identified in prior studies of induced emotion (e.g. Sabatinelli et al. 2005, Kassam et al. 2013, Bush et al. 2017).
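One way future studies could formalize this arousal account is by testing an arousal-based model RDM against the observed neural RDM, as sketched below. The arousal groupings of the words here are hypothetical and chosen only for illustration; they are not ratings collected in our study.

```python
import numpy as np
from scipy.spatial.distance import squareform
from scipy.stats import spearmanr

# Hypothetical arousal groupings for a few illustrative emotion words
arousal = {"anger": "high", "excited": "high", "fear": "high",
           "sad": "low", "calm": "low"}
words = list(arousal)
n = len(words)

# Arousal model RDM: dissimilarity 0 for same-arousal pairs, 1 otherwise
model_rdm = np.array([[0.0 if arousal[words[i]] == arousal[words[j]] else 1.0
                       for j in range(n)] for i in range(n)])

def arousal_fit(neural_rdm):
    """Spearman correlation between the arousal model RDM and an observed
    neural RDM (both n x n, symmetric, with a zero diagonal)."""
    rho, _ = spearmanr(squareform(model_rdm), squareform(neural_rdm))
    return rho
```

A strong positive fit would favor the arousal account, whereas including both this model RDM and a category-based model RDM as simultaneous predictors would allow the two accounts to be dissociated.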
Summary and conclusion
We found that emotion words engaged a diverse collection of brain areas. These ranged from sensorimotor regions (e.g. occipital areas, the pre- and postcentral gyri, the insula) associated with experience to multimodal areas (e.g. the lateral and medial prefrontal cortex, the orbitofrontal gyrus) thought to integrate sensory signals with conceptual knowledge. These findings raise questions about the extent to which the brain areas representing emotion knowledge and full-blown emotion experiences overlap, and why. Our results are consistent with the idea that words, rather than being simple semantic labels, may play a more integral role in constructing emotion experience.
Contributor Information
Kent M Lee, Department of Psychology, Northeastern University, 125 Nightingale Hall, Boston, MA 02115, USA.
Ajay B Satpute, Department of Psychology, Northeastern University, 125 Nightingale Hall, Boston, MA 02115, USA.
Conflict of interest
None declared.
Funding
This research was supported by grants from the Brain and Cognitive Sciences Division of the National Science Foundation under award numbers [1947972 and 2241938], and the National Institute of Mental Health of the National Institutes of Health under award number [F32MH122062].
References
- Adolphs R. How should neuroscience study emotions? By distinguishing emotion states, concepts, and experiences. Soc Cogn Affect Neurosci 2017;12:24–31. doi: 10.1093/scan/nsw153
- Adolphs R, Tranel D, Hamann S et al. Recognition of facial emotion in nine individuals with bilateral amygdala damage. Neuropsychologia 1999;37:1111–17. doi: 10.1016/S0028-3932(99)00039-1
- Barrett LF. Solving the emotion paradox: categorization and the experience of emotion. Pers Soc Psychol Rev 2006;10:20–46. doi: 10.1207/s15327957pspr1001_2
- Barrett LF. Variety is the spice of life: a psychological construction approach to understanding variability in emotion. Cogn Emot 2009;23:1284–306. doi: 10.1080/02699930902985894
- Barrett LF. The theory of constructed emotion: an active inference account of interoception and categorization. Soc Cogn Affect Neurosci 2017;12:1–23. doi: 10.1093/scan/nsw154
- Barrett LF, Mesquita B, Gendron M. Context in emotion perception. Curr Dir Psychol Sci 2011;20:286–90. doi: 10.1177/0963721411422522
- Barrett LF, Mesquita B, Ochsner KN et al. The experience of emotion. Annu Rev Psychol 2007;58:373–403. doi: 10.1146/annurev.psych.58.110405.085709
- Barsalou LW. Perceptual symbol systems. Behav Brain Sci 1999;22:577–609; discussion 610–660. doi: 10.1017/s0140525x99002149
- Barsalou LW. Grounded cognition. Annu Rev Psychol 2008;59:617–45. doi: 10.1146/annurev.psych.59.103006.093639
- Baucom LB, Wedell DH, Wang J et al. Decoding the neural representation of affective states. Neuroimage 2012;59:718–27. doi: 10.1016/j.neuroimage.2011.07.037
- Binder JR, Desai RH. The neurobiology of semantic memory. Trends Cogn Sci 2011;15:527–36. doi: 10.1016/j.tics.2011.10.001
- Binder JR, Desai RH, Graves WW et al. Where is the semantic system? A critical review and meta-analysis of 120 functional neuroimaging studies. Cereb Cortex 2009;19:2767–96. doi: 10.1093/cercor/bhp055
- Brooks JA, Shablack H, Gendron M et al. The role of language in the experience and perception of emotion: a neuroimaging meta-analysis. Soc Cogn Affect Neurosci 2017;12:169–83. doi: 10.1093/scan/nsw121
- Brosch T, Sander D. Comment: the appraising brain: towards a neuro-cognitive model of appraisal processes in emotion. Emot Rev 2013;5:163–68. doi: 10.1177/1754073912468298
- Burklund LJ, Creswell JD, Irwin MR et al. The common and distinct neural bases of affect labeling and reappraisal in healthy adults. Front Psychol 2014;5:221. doi: 10.3389/fpsyg.2014.00221
- Bush KA, Gardner J, Privratsky AA et al. Brain states that encode perceived emotion are reproducible but their classification accuracy is stimulus-dependent. Front Hum Neurosci 2018;12:262. doi: 10.3389/fnhum.2018.00262
- Bush KA, Inman CS, Hamann SB et al. Distributed neural processing predictors of multi-dimensional properties of affect. Front Hum Neurosci 2017;11:459. doi: 10.3389/fnhum.2017.00459
- Chan D, Fox NC, Scahill RI et al. Patterns of temporal lobe atrophy in semantic dementia and Alzheimer’s disease. Ann Neurol 2001;49:433–42. doi: 10.1002/ana.92
- Clark-Polner E, Johnson TD, Barrett LF. Multivoxel pattern analysis does not provide evidence to support the existence of basic emotions. Cereb Cortex 2016;27:bhw028. doi: 10.1093/cercor/bhw028
- Damasio A, Carvalho GB. The nature of feelings: evolutionary and neurobiological origins. Nat Rev Neurosci 2013;14:143–52. doi: 10.1038/nrn3403
- Ethofer T, Van De Ville D, Scherer K et al. Decoding of emotional information in voice-sensitive cortices. Curr Biol 2009;19:1028–33. doi: 10.1016/j.cub.2009.04.054
- Fernandino L, Humphries CJ, Conant LL et al. Heteromodal cortical areas encode sensory-motor features of word meaning. J Neurosci 2016;36:9763–69. doi: 10.1523/JNEUROSCI.4095-15.2016
- Fugate JM. Categorical perception for emotional faces. Emot Rev 2013;5:84–89. doi: 10.1177/1754073912451350
- Gallese VV, Lakoff G. The brain’s concepts: the role of the sensory-motor system in conceptual knowledge. Cogn Neuropsychol 2005;22:455–79. doi: 10.1080/02643290442000310
- Gendron M, Lindquist KA, Barsalou L et al. Emotion words shape emotion percepts. Emotion 2012;12:314–25. doi: 10.1037/a0026007
- Greenberg LS. Emotion-focused Therapy. Washington, DC: American Psychological Association, 2011.
- Hamzani O, Mazar T, Itkes O et al. Semantic and affective representations of valence: prediction of autonomic and facial responses from feelings-focused and knowledge-focused self-reports. Emotion 2020;20:486–500. doi: 10.1037/emo0000567
- Haxby JV. Multivariate pattern analysis of fMRI: the early beginnings. Neuroimage 2012;62:852–55. doi: 10.1016/j.neuroimage.2012.03.016
- Hoemann K, Feldman Barrett L. Concepts dissolve artificial boundaries in the study of emotion and cognition, uniting body, brain, and mind. Cogn Emot 2019;33:67–76. doi: 10.1080/02699931.2018.1535428
- Hoemann K, Nielson C, Yuen A et al. Expertise in emotion: a scoping review and unifying framework for individual differences in the mental representation of emotional experience. Psychol Bull 2021;147:1159–83. doi: 10.1037/bul0000327
- Hoemann K, Wu R, LoBue V et al. Developing an understanding of emotion categories: lessons from objects. Trends Cogn Sci 2020;24:39–51. doi: 10.1016/j.tics.2019.10.010
- Hoemann K, Xu F, Barrett LF. Emotion words, emotion concepts, and emotional development in children: a constructionist hypothesis. Dev Psychol 2019;55:1830. doi: 10.1037/dev0000686
- Itkes O, Kimchi R, Haj-Ali H et al. Dissociating affective and semantic valence. J Exp Psychol Gen 2017;146:924–42. doi: 10.1037/xge0000291
- Itkes O, Kron A. Affective and semantic representations of valence: a conceptual framework. Emot Rev 2019;11:283–93. doi: 10.1177/1754073919868759
- Kassam KS, Markey AR, Cherkassky VL et al. Identifying emotions on the basis of neural activation. PloS One 2013;8:e66032. doi: 10.1371/journal.pone.0066032
- Kesler ML, Andersen AH, Smith CD et al. Neural substrates of facial emotion processing using fMRI. Cogn Brain Res 2001;11:213–26. doi: 10.1016/S0926-6410(00)00073-2
- Kim J, Shinkareva SV, Wedell DH. Representations of modality-general valence for videos and music derived from fMRI data. Neuroimage 2017;148:42–54. doi: 10.1016/j.neuroimage.2017.01.002
- Kober H, Barrett LF, Joseph J et al. Functional grouping and cortical–subcortical interactions in emotion: a meta-analysis of neuroimaging studies. Neuroimage 2008;42:998–1031. doi: 10.1016/j.neuroimage.2008.03.059
- Kragel PA, LaBar KS. Multivariate neural biomarkers of emotional states are categorically distinct. Soc Cogn Affect Neurosci 2015;10:1437–48. doi: 10.1093/scan/nsv032
- Kragel PA, LaBar KS. Decoding the nature of emotion in the brain. Trends Cogn Sci 2016;20:444–55. doi: 10.1016/j.tics.2016.03.011
- Kriegeskorte N, Kievit RA. Representational geometry: integrating cognition, computation, and the brain. Trends Cogn Sci 2013;17:401–12. doi: 10.1016/j.tics.2013.06.007
- Kriegeskorte N, Mur M, Bandettini PA. Representational similarity analysis—connecting the branches of systems neuroscience. Front Syst Neurosci 2008;2:4. doi: 10.3389/neuro.06.004.2008
- Lane RD, Fink GR, Chau PM et al. Neural activation during selective attention to subjective emotional responses. Neuroreport 1997;8:3969–72. doi: 10.1097/00001756-199712220-00024
- Lane RD, Solms M, Weihs KL et al. Affective agnosia: a core affective processing deficit in the alexithymia spectrum. Biopsychosoc Med 2020;14:20. doi: 10.1186/s13030-020-00184-w
- Lane RD, Weihs KL, Herring A et al. Affective agnosia: expansion of the alexithymia construct and a new opportunity to integrate and extend Freud’s legacy. Neurosci Biobehav Rev 2015;55:594–611. doi: 10.1016/j.neubiorev.2015.06.007
- Lee KM, Lee S, Satpute AB. Sinful pleasures and pious woes? Using fMRI to examine evaluative and hedonic emotion knowledge. Soc Cogn Affect Neurosci 2022;17:986–94. doi: 10.1093/scan/nsac024
- Lee KM, Lindquist KA, Payne BK. Constructing bias: conceptualization breaks the link between implicit bias and fear of Black Americans. Emotion 2018;18:855–71. doi: 10.1037/emo0000347
- Liang Y, Liu B, Xu J et al. Decoding facial expressions based on face-selective and motion-sensitive areas. Hum Brain Mapp 2017;38:3113–25. doi: 10.1002/hbm.23578
- Lieberman MD, Eisenberger NI, Crockett MJ et al. Putting feelings into words: affect labeling disrupts amygdala activity in response to affective stimuli. Psychol Sci 2007;18:421–28. doi: 10.1111/j.1467-9280.2007.01916.x
- Lieberman MD, Inagaki TK, Tabibnia G et al. Subjective responses to emotional stimuli during labeling, reappraisal, and distraction. Emotion 2011;11:468–80. doi: 10.1037/a0023503
- Lindquist KA. Emotions emerge from more basic psychological ingredients: a modern psychological constructionist model. Emot Rev 2013;5:356–68. doi: 10.1177/1754073913489750
- Lindquist KA, Barrett LF. Constructing emotion: the experience of fear as a conceptual act. Psychol Sci 2008;19:898–903. doi: 10.1111/j.1467-9280.2008.02174.x
- Lindquist KA, Barrett LF. A functional architecture of the human brain: emerging insights from the science of emotion. Trends Cogn Sci 2012;16:533–40. doi: 10.1016/j.tics.2012.09.005
- Lindquist KA, Barrett LF, Bliss-Moreau E et al. Language and the perception of emotion. Emotion 2006;6:125–38. doi: 10.1037/1528-3542.6.1.125
- Lindquist KA, Gendron M. What’s in a word? Language constructs emotion perception. Emot Rev 2013;5:66–71. doi: 10.1177/1754073912451351
- Lindquist KA, Gendron M, Barrett LF et al. Emotion perception, but not affect perception, is impaired with semantic memory loss. Emotion 2014;14:375–87. doi: 10.1037/a0035293
- Lindquist KA, Gendron M, Satpute AB. Language and emotion. In: Lewis M, Haviland-Jones J and Barrett LF (eds.), Handbook of Emotions, 4th edn. New York, NY: The Guilford Press, 2016, 579–94.
- Lindquist KA, MacCormack JK, Shablack H. The role of language in emotion: predictions from psychological constructionism. Front Psychol 2015a;6:444. doi: 10.3389/fpsyg.2015.00444
- Lindquist KA, Satpute AB, Gendron M. Does language do more than communicate emotion? Curr Dir Psychol Sci 2015b;24:99–108. doi: 10.1177/0963721414553440
- Lindquist KA, Wager TD, Kober H et al. The brain basis of emotion: a meta-analytic review. Behav Brain Sci 2012;35:121–43. doi: 10.1017/S0140525X11000446
- Lupyan G, Rakison DH, McClelland JL. Language is not just for talking: redundant labels facilitate learning of novel categories. Psychol Sci 2007;18:1077–83. doi: 10.1111/j.1467-9280.2007.02028.x
- Matsuda Y-T, Fujimura T, Katahira K et al. The implicit processing of categorical and dimensional strategies: an fMRI study of facial emotion perception. Front Hum Neurosci 2013;7:551. doi: 10.3389/fnhum.2013.00551
- Mazzuca C, Fini C, Michalland AH et al. From affordances to abstract words: the flexibility of sensorimotor grounding. Brain Sci 2021;11:1304. doi: 10.3390/brainsci11101304
- McVeigh K, Kleckner IR, Quigley KS et al. Fear-related psychophysiological patterns are situation and individual dependent: a Bayesian model comparison approach. Emotion 2024;24:506–21. doi: 10.1037/emo0001265
- Mesquita B. Between Us: How Cultures Create Emotions. New York, NY: WW Norton & Company, 2022.
- Morawetz C, Bode S, Derntl B et al. The effect of strategies, goals and stimulus material on the neural mechanisms of emotion regulation: a meta-analysis of fMRI studies. Neurosci Biobehav Rev 2017;72:111–28. doi: 10.1016/j.neubiorev.2016.11.014
- Mummery CJ, Patterson K, Price CJ et al. A voxel-based morphometry study of semantic dementia: relationship between temporal lobe atrophy and semantic memory. Ann Neurol 2000;47:36–45. doi: 10.1002/1531-8249(200001)47:1<36::AID-ANA8>3.0.CO;2-L
- Niles AN, Craske MG, Lieberman MD et al. Affect labeling enhances exposure effectiveness for public speaking anxiety. Behav Res Ther 2015;68:27–36. doi: 10.1016/j.brat.2015.03.004
- Nook EC, Lindquist KA, Zaki J. A new look at emotion perception: concepts speed and shape facial emotion recognition. Emotion 2015;15:569–78. doi: 10.1037/a0039166
- Nook EC, Sasse SF, Lambert HK et al. Increasing verbal knowledge mediates development of multidimensional emotion representations. Nat Hum Behav 2017;1:881–9. doi: 10.1038/s41562-017-0238-7
- Nook EC, Satpute AB, Ochsner KN. Emotion naming impedes both cognitive reappraisal and mindful acceptance strategies of emotion regulation. Affect Sci 2021;2:187–98. doi: 10.1007/s42761-021-00036-y
- Norman KA, Polyn SM, Detre GJ et al. Beyond mind-reading: multi-voxel pattern analysis of fMRI data. Trends Cogn Sci 2006;10:424–30. doi: 10.1016/j.tics.2006.07.005
- Nummenmaa L, Saarimäki H. Emotions as discrete patterns of systemic activity. Neurosci Lett 2019;693:3–8. doi: 10.1016/j.neulet.2017.07.012
- Ochsner KN, Barrett LF. A multiprocess perspective on the neuroscience of emotion. In: Mayne TJ, Bonanno GA (eds.), Emotions: Current Issues and Future Directions. New York, NY: The Guilford Press, 2001, 38–81.
- Oosterhof NN, Connolly AC, Haxby JV. CoSMoMVPA: multi-modal multivariate pattern analysis of neuroimaging data in Matlab/GNU Octave. Front Neuroinform 2016;10:27. doi: 10.3389/fninf.2016.00027
- Oosterwijk S, Topper M, Rotteveel M et al. When the mind forms fear: embodied fear knowledge potentiates bodily reactions to fearful stimuli. Soc Psychol Pers Sci 2010;1:65–72. doi: 10.1177/1948550609355328
- Peelen MV, Atkinson AP, Vuilleumier P. Supramodal representations of perceived emotions in the human brain. J Neurosci 2010;30:10127–34. doi: 10.1523/JNEUROSCI.2161-10.2010
- Poldrack RA, Mumford JA, Schonberg T et al. Discovering relations between mind, brain, and mental disorders using topic mapping. PLoS Comput Biol 2012;8:e1002707. doi: 10.1371/journal.pcbi.1002707
- Roberson D, Davidoff J. The categorical perception of colors and facial expressions: the effect of verbal interference. Mem Cogn 2000;28:977–86. doi: 10.3758/BF03209345
- Russell JA, Bullock M. Multidimensional scaling of emotional facial expressions: similarity from preschoolers to adults. J Pers Soc Psychol 1985;48:1290. doi: 10.1037/0022-3514.48.5.1290
- Saarimäki H, Gotsopoulos A, Jääskeläinen IP et al. Discrete neural signatures of basic emotions. Cereb Cortex 2016;26:2563–73. doi: 10.1093/cercor/bhv086
- Sabatinelli D, Bradley MM, Fitzsimmons JR et al. Parallel amygdala and inferotemporal activation reflect emotional intensity and fear relevance. Neuroimage 2005;24:1265–70. doi: 10.1016/j.neuroimage.2004.12.015
- Said CP, Moore CD, Engell AD et al. Distributed representations of dynamic facial expressions in the superior temporal sulcus. J Vis 2010;10:11. doi: 10.1167/10.5.11
- Sander D, Grandjean D, Scherer KR. An appraisal-driven componential approach to the emotional brain. Emot Rev 2018;10:219–31. doi: 10.1177/1754073918765653
- Satpute AB, Lindquist KA. The default mode network’s role in discrete emotion. Trends Cogn Sci 2019;23:851–64. doi: 10.1016/j.tics.2019.07.003
- Satpute AB, Lindquist KA. At the neural intersection between language and emotion. Affect Sci 2021;2:207–20. doi: 10.1007/s42761-021-00032-2
- Satpute AB, Nook EC, Narayanan S et al. Emotions in “black or white” or shades of gray? How we think about emotion shapes our perception and neural representation of emotion. Psychol Sci 2016;27:1428–42. doi: 10.1177/0956797616661555
- Satpute AB, Shu J, Weber J et al. The functional neural architecture of self-reports of affective experience. Biol Psychiatry 2013;73:631–38. doi: 10.1016/j.biopsych.2012.10.001
- Shablack H, Becker M, Lindquist KA. How do children learn novel emotion words? A study of emotion concept acquisition in preschoolers. J Exp Psychol Gen 2020;149:1537. doi: 10.1037/xge0000727
- Shablack H, Lindquist KA. The role of language in the development of emotion. In: LoBue V, Perez-Edgar K, Buss K (eds.), Handbook of Emotional Development. Cham, Switzerland: Springer, 2019, 451–78.
- Shinkareva SV, Gao C, Wedell D. Audiovisual representations of valence: a cross-study perspective. Affect Sci 2020;1:237–46. doi: 10.1007/s42761-020-00023-9
- Shinkareva SV, Wang J, Kim J et al. Representations of modality‐specific affective processing for visual and auditory stimuli derived from functional magnetic resonance imaging data. Hum Brain Mapp 2014;35:3558–68. doi: 10.1002/hbm.22421
- Sifneos PE. Alexithymia: past and present. Am J Psychiatry 1996;153:137–42. doi: 10.1176/ajp.153.7.137
- Skerry AE, Saxe R. A common neural code for perceived and inferred emotion. J Neurosci 2014;34:15997–6008. doi: 10.1523/JNEUROSCI.1676-14.2014
- Smith R, Lane RD. The neural basis of one’s own conscious and unconscious emotional states. Neurosci Biobehav Rev 2015;57:1–29. doi: 10.1016/j.neubiorev.2015.08.003
- Snowden JS, Harris JM, Thompson JC et al. Semantic dementia and the left and right temporal lobes. Cortex 2018;107:188–203. doi: 10.1016/j.cortex.2017.08.024
- Souter NE, Lindquist KA, Jefferies E. Impaired emotion perception and categorization in semantic aphasia. Neuropsychologia 2021;162:108052. doi: 10.1016/j.neuropsychologia.2021.108052
- Stark CE, Squire LR. When zero is not zero: the problem of ambiguous baseline conditions in fMRI. Proc Natl Acad Sci USA 2001;98:12760–66. doi: 10.1073/pnas.221462998
- Taylor GJ, Bagby RM. An overview of the alexithymia construct. In: Bar-On R, Parker JDA (eds.), Handbook of Emotional Intelligence. San Francisco, CA: Jossey-Bass, 2000, 40–67.
- Taylor GJ, Bagby RM, Parker JDA. Disorders of Affect Regulation: Alexithymia in Medical and Psychiatric Illness. Cambridge, UK: Cambridge University Press, 1999.
- Taylor SF, Phan KL, Decker LR et al. Subjective rating of emotionally salient stimuli modulates neural activity. Neuroimage 2003;18:650–59. doi: 10.1016/s1053-8119(02)00051-4
- Torre JB, Lieberman MD. Putting feelings into words: affect labeling as implicit emotion regulation. Emot Rev 2018;10:116–24. doi: 10.1177/1754073917742706
- Tugade MM, Fredrickson BL, Barrett LF. Psychological resilience and positive emotional granularity: examining the benefits of positive emotions on coping and health. J Pers 2004;72:1161–90. doi: 10.1111/j.1467-6494.2004.00294.x
- Vytal K, Hamann S. Neuroimaging support for discrete neural correlates of basic emotions: a voxel-based meta-analysis. J Cogn Neurosci 2010;22:2864–85. doi: 10.1162/jocn.2009.21366
- Wager TD, Kang J, Johnson T et al. A Bayesian model of category-specific emotional brain responses. PLoS Comput Biol 2015;11:e1004066. doi: 10.1371/journal.pcbi.1004066
- Wang Y, Kragel PA, Satpute AB. Neural predictors of subjective fear depend on the situation. bioRxiv 2022:2022.10.20.513114.doi: 10.1101/2022.10.20.513114 [DOI] [Google Scholar]
- Widen SC. Children’s interpretation of facial expressions: the long path from valence-based to specific discrete categories. Emot Rev 2013;5:72–77.doi: 10.1177/1754073912451492 [DOI] [Google Scholar]
- Widen SC. The development of children’s concepts of emotion. Handbook Emot 2016;4:307–18. [Google Scholar]
- Wilson-Mendenhall CD, Barrett LF, Barsalou LW. Situating emotional experience. Front Human Neurosci 2013;7:764.doi: 10.3389/fnhum.2013.00764 [DOI] [PMC free article] [PubMed] [Google Scholar]
