Abstract
Recognizing facial expressions is dependent on multiple brain networks specialized for different cognitive functions. In the current study, participants (N = 20) were scanned using functional magnetic resonance imaging (fMRI), while they performed a covert facial expression naming task. Immediately prior to scanning, thetaburst transcranial magnetic stimulation (TMS) was delivered over the right lateral prefrontal cortex (PFC), or the vertex control site. A group whole-brain analysis revealed that TMS induced opposite effects in the neural responses across different brain networks. Stimulation of the right PFC (compared to stimulation of the vertex) decreased neural activity in the left lateral PFC but increased neural activity in three nodes of the default mode network (DMN): the right superior frontal gyrus, right angular gyrus and the bilateral middle cingulate gyrus. A region of interest analysis showed that TMS delivered over the right PFC reduced neural activity across all functionally localised face areas (including in the PFC) compared to TMS delivered over the vertex. These results suggest that visually recognizing facial expressions is dependent on the dynamic interaction of the face-processing network and the DMN. Our study also demonstrates the utility of combined TMS/fMRI studies for revealing the dynamic interactions between different functional brain networks.
Keywords: superior temporal sulcus, fusiform face area, occipital face area, amygdala
Introduction
Humans need to recognize and interpret the facial expressions of other people during social interactions. The neural computations that support these cognitive processes have been extensively investigated using functional magnetic resonance imaging (fMRI). These studies have been the basis of theories positing that emotions are processed across multiple large-scale brain networks that engage both cortical and subcortical structures (Lindquist et al., 2012; Barrett and Satpute, 2013; Wager et al., 2015; Pessoa, 2018). The extent to which these networks interact with brain networks specialized for other cognitive functions has also been investigated. For example, it has been proposed that emotion processing is reliant on dynamic interactions between the salience network (e.g. the amygdala and insula) and the central executive brain network for cognitive control [e.g. the lateral prefrontal cortex (PFC)] (Seeley et al., 2007; Uddin, 2015; Pessoa, 2018). Both the salience network and the central executive network consist of brain areas that show greater neural activation when participants perform tasks requiring emotional processing. However, a recent model has proposed that another brain network, the default mode network (DMN) is also necessary for emotion processing (Satpute and Lindquist, 2019).
The DMN is anti-correlated with task performance, meaning it exhibits a decrease in neural activity when participants perform cognitive tasks in the fMRI scanner (Raichle, 2015). This has led to claims that the DMN mediates internally directed states such as mind wandering and inner thought (Smallwood et al., 2021). fMRI studies have also demonstrated that the DMN is anti-correlated with other brain areas during facial expression naming tasks (Sreenivas et al., 2012; Lanzoni et al., 2020). This is consistent with the hypothesis that emotion processing is dependent on a push/pull interaction between the salience and central executive networks and the DMN (Satpute and Lindquist, 2019). Our aim in the current study was to causally test the role of the DMN in a facial expression naming task by combining fMRI with transcranial magnetic stimulation (TMS). To do this, we transiently disrupted the right inferior frontal gyrus (IFG), a brain area in the lateral PFC. Importantly, the lateral PFC contains spatially distinct brain areas that are components in different functional brain networks. The anterior parts of the lateral IFG are part of the DMN, while more posterior areas of the IFG and middle frontal gyrus (MFG) are a part of the fronto-parietal attention network (FPN) (Yeo et al., 2011). The IFG has also been identified as part of the central executive network (CEN), a brain network that is anti-correlated with the DMN (Raichle, 2015).
The lateral PFC is involved in a range of different face-processing tasks including identity recognition (Ishai et al., 2002), working memory for faces (Courtney et al., 1996) and the configural processing of the eyes and mouth (Renzi et al., 2013). Importantly, prior studies have also demonstrated that the lateral PFC is involved in facial expression processing (Gorno-Tempini et al., 2001; Iidaka et al., 2001). Neuropsychological studies of patients with frontal lobe damage have further demonstrated that those with lateral PFC damage have problems with a range of emotion processing tasks including theory of mind and self-emotion regulation (Tsuchida and Fellows, 2012; Jastorff et al., 2016). Patient studies have also demonstrated that facial expression recognition is dependent on a wider network of visual brain areas that selectively process faces (Adolphs, 2002; Jastorff et al., 2016). These include areas of the temporal cortex that are known to contain face-selective areas in both the ventral (Kanwisher et al., 1997; McCarthy et al., 1997) and lateral (Puce et al., 1996, 1998; Gauthier et al., 2000) brain surfaces. These areas have been linked together into models that propose a distributed brain network specialized for face processing (Haxby et al., 2000; Calder and Young, 2005). Prior neuroimaging studies have also revealed that the lateral PFC is engaged in the top-down control of other brain areas when recognizing faces including the amygdala (Davies-Thompson and Andrews, 2012), ventral temporal cortex (Heekeren et al., 2004; Baldauf and Desimone, 2014) and the superior temporal cortex (STS) (Wang et al., 2020).
Our prior combined TMS/fMRI studies have causally demonstrated the connectivity between different nodes in the face-processing network. For example, we demonstrated that TMS delivered over the occipital face area (OFA) reduced the BOLD response to faces in the fusiform face area (FFA) compared to TMS delivered over a control site (Pitcher et al., 2014; Groen et al., 2021). TMS delivered over the face-selective area in the posterior STS reduced the BOLD response to face videos in the STS and amygdala compared to TMS delivered over the control site (Pitcher et al., 2017). In addition, we compared TMS delivered over the right posterior STS and right motor cortex using resting-state fMRI (Handwerker et al., 2020). Results showed that TMS delivered over STS selectively reduced functional connectivity between multiple nodes of the face network (e.g. the OFA, FFA and amygdala). Taken together, these studies demonstrate that TMS disruption of one face-selective area causes remote effects across other nodes of the face processing network. Having previously targeted face-processing areas in the occipitotemporal cortex (e.g. the OFA and STS), in the present study we disrupted the face-selective area in the right IFG (Ishai et al., 2002; Nikel et al., 2022).
The face-selective area in the IFG has been implicated in a range of different face-processing tasks. These include familiar face recognition (Rapcsak et al., 1996), working memory for faces (Courtney et al., 1997), famous-face recognition (Ishai et al., 2002), processing of information from the eyes (Chan and Downing, 2011) and configural processing of the component parts of faces (e.g. the eyes and mouth) (Renzi et al., 2013). Other studies have demonstrated that the IFG is involved in the top-down control of ventral temporal cortex when recognising faces (Heekeren et al., 2004; Baldauf and Desimone, 2014) and is functionally connected to the amygdala (Davies-Thompson and Andrews, 2012). We therefore predicted that TMS delivered over the right PFC while participants performed a facial expression naming task would decrease neural activity across the face network. Crucially, we also predicted that transient disruption of this network would cause an increase in neural activity in the DMN because the two networks dynamically interact during facial expression naming.
Materials and methods
Participants
A total of 20 participants (14 females; aged 19 to 46 years; mean age 23 years, SD = 6.4) with normal or corrected-to-normal vision gave informed consent as directed by the Ethics Committee at the University of York.
Stimuli
Face stimuli for the expression naming task were photographs of 14 models (female and male) from Ekman and Friesen’s (1976) facial affect series, each expressing one of seven emotions. Each image was shown once only, giving a total of 110 unique pictures: anger (17), disgust (15), fear (17), happy (18), neutral (14), sad (15) and surprise (14).
In addition to the experimental task, we also ran a functional localizer to identify face-selective areas in each participant. Stimuli were 3 s movie clips of faces, bodies and objects that we have used in prior studies (Sliwinska et al., 2020b, 2022; Küçük et al., 2022). There were 60 movie clips for each category in which distinct exemplars appeared multiple times. Movies of faces and bodies were filmed on a black background, and framed close-up to reveal only the faces or bodies of seven children as they danced or played with toys or adults (who were out of frame). Fifteen different moving objects were selected that minimized any suggestion of animacy of the object itself or of a hidden actor pushing the object (these included mobiles, windup toys, toy planes and tractors, balls rolling down sloped inclines). Within each block, stimuli were randomly selected from within the entire set for that stimulus category. This meant that the same actor or object could appear within the same block but given the number of stimuli this did not occur regularly.
Procedure
Participants completed three separate sessions, each performed on a different day. The first session was an fMRI experiment designed to individually localize the TMS sites in each participant. In sessions two and three, TMS was delivered over the right lateral IFG or over the vertex control site (order was balanced across participants) immediately before scanning began.
In the first session, participants viewed three runs of a functional localiser task (234 s each) to individually identify face-selective areas. Our previous fMRI study of face processing in the lateral PFC demonstrated that a face-selective area was more commonly identified across participants in the right IFG (Nikel et al., 2022). Based on this study, we targeted the same location for disruption with TMS in the current study. Functional runs presented short video clips of faces, bodies and objects in 18 s blocks that contained six 3 s video clips from that category. We also collected a high-resolution structural scan for each participant.
Sessions two and three combined TMS and fMRI. Prior to scanning, the Brainsight TMS-MRI co-registration system (Rogue Research) was used to mark the location of the face-selective area in the right IFG based on the initial fMRI localizer data collected for each participant. The vertex control site was identified using a tape measure as a point in the middle of the head halfway between the nasion and inion (Figure 1A).
Fig. 1.

(A) The TMS sites from an example participant. The active site in the right lateral PFC was defined using a contrast of faces > objects for each participant. The average MNI coordinates (36, 6, 48) were centred in the right IFG. (B) An example of the trial procedure for the fMRI covert facial expression naming task. Participants were required to silently name the expression on the Ekman faces that were displayed for 3 s.
Participants were then taken to the fMRI scanner control room where thetaburst TMS was delivered over the right IFG or the vertex for each participant (stimulation site order was balanced across participants). Once stimulation was completed, participants entered the scanner room immediately. fMRI data collection began as quickly as possible because the effects of TMS are transient; the exact delay varied with factors such as each participant's speed at entering the scanner. For all participants, scanning began within 5 min of TMS being delivered.
Functional data for the expression naming task were acquired over two blocked-design functional runs lasting 570 s each (Figure 1B). Each run consisted of 55 trials during which facial expression stimuli were presented centrally on the screen for 3 s, followed by a blank screen of 7 s. The two runs of the expression naming task (570 s each) plus the time taken to place the participants in the scanner (always <5 min) meant that all experimental data were collected within 30 min of TMS being delivered. Our prior studies have demonstrated that this is an effective duration to measure the impact of TMS on the BOLD signal (Pitcher et al., 2014; Groen et al., 2021).
Participants were instructed to silently name the emotion that the facial expression displayed when each stimulus was presented. Once the two expression naming runs were completed, participants viewed two runs of the localizer task (234 s each) to individually identify face-selective areas. Functional runs presented short video clips of faces, bodies and objects in 18 s blocks that contained six 3 s video clips from that category. Once localiser data collection was completed, participants exited the scanner. After the final session, participants were debriefed on the nature of the study.
Brain imaging and analysis
Imaging data were acquired using a 3 T Siemens Magnetom Prisma MRI scanner (Siemens Healthcare, Erlangen, Germany) at the University of York. Functional images were acquired with a 20-channel phased array head coil and a gradient-echo EPI sequence [38 interleaved slices, repetition time (TR) = 3 s, echo time (TE) = 30 ms, flip angle = 90 degrees; voxel size 3 mm isotropic; matrix size = 128 × 128] providing whole-brain coverage. Slices were aligned with the anterior-to-posterior commissure line. Structural images were acquired using the same head coil and a high-resolution T1-weighted 3D fast spoiled gradient (SPGR) sequence [176 interleaved slices, repetition time (TR) = 7.8 s, echo time (TE) = 3 ms, flip angle = 20 degrees; voxel size 1 mm isotropic; matrix size = 256 × 256].
fMRI data were analysed using AFNI (http://afni.nimh.nih.gov/afni). Data from the first four TRs from each run were discarded. The remaining images were slice-time corrected and realigned to the last volume of the last run prior to TMS during the TMS to vertex session, and to the corresponding anatomical scan. The volume registered data were spatially smoothed with an 8 mm full-width-half-maximum Gaussian kernel. Signal intensity was normalized to the mean signal value within each run and multiplied by 100 so that the data represented percent signal change from the mean signal value before analysis.
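The final scaling step can be illustrated with a minimal Python sketch. This is not AFNI's implementation, and the toy values are our own; it simply shows one common convention in which each voxel's time series is expressed as percent signal change relative to its run mean.

```python
import numpy as np

def percent_signal_change(ts):
    """Express a voxel time series as percent signal change from its run mean."""
    ts = np.asarray(ts, dtype=float)
    mean = ts.mean(axis=-1, keepdims=True)  # per-voxel mean over time
    return 100.0 * (ts - mean) / mean

# Toy time series: a voxel fluctuating around a mean of 1000
ts = np.array([980.0, 1000.0, 1020.0, 1000.0])
print(percent_signal_change(ts))  # [-2.  0.  2.  0.]
```

After this scaling, regression coefficients from the GLM can be read directly as percent signal change, which is what the later ROI analyses report.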
For the localiser task a general linear model (GLM) was established by convolving the standard haemodynamic response function with two regressors of interest (faces and objects). Regressors of no interest (e.g. six head movement parameters obtained during volume registration and AFNI’s baseline estimates) were also included. Face-selective areas were identified in each participant using a contrast of faces greater than objects.
For the expression naming task, we performed two separate analyses. The first grouped all expressions together using a GLM established by convolving the standard haemodynamic response function with one regressor of interest (faces). Regressors of no interest (e.g. six head movement parameters obtained during volume registration and AFNI’s baseline estimates) were also included in the GLM. To investigate whole-brain effects of TMS, we used a mixed effects ANOVA in AFNI (3dANOVA2) with TMS session (IFG and vertex) and participants (N = 20) as independent factors. Group whole-brain maps were calculated for each TMS session. We then subtracted the IFG session data from the vertex session data.
In the second analysis, we performed an exploratory multi-voxel pattern analysis (MVPA) to determine whether TMS delivered over the right IFG decreases the neural discriminability between different expressions in the face network. This analysis was performed for face-selective areas identified using the functional localizer (IFG, amygdala, pSTS, FFA and OFA). Here, we used regions of interest (ROI) masks that included both hemispheres to increase signal-to-noise ratio in the light of the limited data available. We first created new GLMs, which contained seven regressors for each of the seven emotions (anger, disgust, fear, happy, neutral, sad and surprise), separately for each fMRI run. From these GLMs, we then calculated T-maps against baseline for each emotion. The subsequent MVPA analysis was carried out using the CoSMoMVPA toolbox for Matlab (Oosterhof et al., 2016). To quantify the discriminability between emotions, we used a cross-validated correlation approach (Haxby et al., 2001). Specifically, we correlated (Spearman-correlation) response patterns (T-values across voxels in each ROI) between the two runs, either for the same emotion (within-correlations) or for different emotions (between-correlations). Subtracting the between-correlations from the within-correlations yielded a measure of neural discriminability between emotions for each ROI, separately for the two TMS sites. Discriminability in each ROI was tested against zero using one-sided t-tests (as below-zero values are not interpretable in this analysis). Discriminability was compared between the TMS conditions using two-sided t-tests.
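The cross-validated correlation measure described above can be sketched as follows. This is an illustrative Python version only (the study used the CoSMoMVPA toolbox for Matlab); the simulated data, ROI size and function name are our own assumptions.

```python
import numpy as np
from scipy.stats import spearmanr

def discriminability(run1, run2):
    """Within-minus-between correlation discriminability (Haxby et al., 2001).

    run1, run2: (n_emotions, n_voxels) arrays of T-values, one per fMRI run.
    Returns the mean within-emotion Spearman correlation minus the mean
    between-emotion correlation; values above zero indicate that the ROI
    carries emotion-discriminating information.
    """
    n = run1.shape[0]
    # Spearman correlation between every run-1 and run-2 pattern pair
    corr = np.array([[spearmanr(run1[i], run2[j])[0] for j in range(n)]
                     for i in range(n)])
    within = np.diag(corr).mean()                  # same emotion across runs
    between = corr[~np.eye(n, dtype=bool)].mean()  # different emotions
    return within - between

# Toy example: 7 simulated "emotions", 50 voxels, shared signal across runs
rng = np.random.default_rng(0)
signal = rng.normal(size=(7, 50))
run1 = signal + 0.5 * rng.normal(size=(7, 50))
run2 = signal + 0.5 * rng.normal(size=(7, 50))
print(discriminability(run1, run2) > 0)  # True: runs share emotion-specific patterns
```

In the real analysis, one such discriminability value is computed per participant and ROI, and these values are then entered into the t-tests described above.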
TMS site localization and parameters
Stimulation sites were localised using individual structural and functional images collected during an fMRI localiser task that each participant completed prior to the combined TMS/fMRI sessions. In the localiser session, participants viewed the same dynamic face and object stimuli as in earlier studies of the face network (Pitcher et al., 2011; Sliwinska et al., 2020a). The stimulation site targeted in the right IFG (Nikel et al., 2022) of each participant was the peak voxel in the face-selective ROI identified using a contrast of greater activation by dynamic faces than dynamic objects (mean MNI coordinates 36, 6, 48). The mean MNI coordinates for all participants are included in the Supplemental Materials. The vertex site was identified as a point on the top of the head halfway between the nasion (the depression at the bridge of the nose) and the inion (the protrusion at the back of the head). TMS sites were identified using the Brainsight TMS-MRI co-registration system (Rogue Research) and the coil locations were then marked on each participant’s scalp using a marker pen.
A Magstim Super Rapid Stimulator (Magstim; Whitland, UK) was used to deliver the TMS via a figure-eight coil with a wing diameter of 70 mm. TMS was delivered at an intensity of 45% of maximum machine output over each participant’s functionally localised right IFG or vertex. Thetaburst TMS (TBS) was delivered as a continuous train of 600 pulses in bursts of 3 pulses (a total of 200 bursts) at a frequency of 30 Hz, with a burst frequency of 6 Hz, for a duration of 33.3 s. We used a modified version (Nyffeler et al., 2006) of the original thetaburst protocol (Huang et al., 2005) as this version has been shown to have longer lasting effects (Goldsworthy et al., 2012). The stimulator coil handle was held pointing upwards and parallel to the midline when delivered over the right IFG and flat against the skull with the handle towards the inion when delivered over the vertex.
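The reported train parameters are internally consistent, as a short arithmetic sketch confirms (illustrative only; the variable names are ours): 200 bursts of 3 pulses give the 600 pulses quoted above, and 200 bursts delivered at 6 Hz last roughly 33.3 s.

```python
# Consistency check of the reported TBS train parameters (illustrative)
bursts = 200           # total bursts in the continuous train
pulses_per_burst = 3   # pulses per burst, delivered at 30 Hz within a burst
burst_rate_hz = 6      # bursts delivered per second

total_pulses = bursts * pulses_per_burst
train_duration_s = bursts / burst_rate_hz

print(total_pulses)                # 600, matching the reported pulse count
print(round(train_duration_s, 1))  # 33.3, matching the reported duration in seconds
```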
Results
Whole-brain group analysis of TMS disruption of the right IFG
Experimental data (N = 20) from the expression naming task were entered into a group whole-brain mixed effects analysis of variance (ANOVA) with TMS condition (right IFG and vertex control site) and participants as independent factors. Activation maps were calculated for each TMS session and the IFG session data were then subtracted from the vertex session data to establish the whole-brain effects of TMS. Data were thresholded at P = 0.005 (uncorrected; z = 3.1) with a cluster threshold of 50 contiguous voxels. These maps were then registered to the MNI template using probabilistic maps for combining functional imaging data with cytoarchitectonic maps (Eickhoff et al., 2005). Results revealed multiple brain areas that exhibited significant differences between TMS sites (Figure 2A). TMS delivered over the right IFG, compared to TMS delivered over the vertex control site, reduced neural activity in the left IFG (−46, 22, 17) (133 voxels) while increasing neural activity in three nodes of the default mode network: the right superior frontal gyrus (SFG) (20, 37, 35) (90 voxels), right angular gyrus (47, −50, 29) (54 voxels) and bilateral middle cingulate cortex (5, −23, 38) (76 voxels). To determine response magnitudes against baseline, we also calculated the percent signal change for the two stimulation conditions in these four regions (Figure 2B). Consistent with the established neural response pattern of the DMN, we observed a negative BOLD response in the right SFG, right angular gyrus and bilateral cingulate cortex in the vertex control condition. However, when we disrupted the right IFG the negativity of the BOLD response was reduced in all three areas compared to the vertex control condition. TMS delivered over the right IFG had the opposite effect on the neural activity in the left IFG.
Namely, disruption of the right IFG (compared to the vertex) reduced the positive neural activity in the left IFG when performing the facial expression naming task (Figure 2B).
Fig. 2.

(A) The results of a group whole-brain analysis showing the distributed impact of TMS delivered over the right IFG, while participants silently named facial expressions. Group data (N = 20) were calculated for each TMS session and the IFG session data were then subtracted from the vertex session data (P = 0.005, z-stat = 3.1). Clusters in orange denote an increase in neural activity after TMS delivered over the right IFG. The cluster in blue denotes a decrease in neural activity after TMS delivered over the right IFG. (B) The percent signal change for the two stimulation conditions in the four regions identified in the group analysis. TMS delivered over the right IFG reduced the positive neural activity in the left IFG and increased the negative neural activity in the right SFG, right angular gyrus and bilateral middle cingulate cortex (components of the default mode network).
ROI analysis of TMS disruption in face processing network
To further characterise the effects of disrupting the right IFG across the face-processing network, we also performed an ROI analysis at the individual participant level. ROIs were defined using the functional localiser runs from the initial fMRI session and the runs collected after the expression naming task in the combined TMS/fMRI sessions. Face-selective ROIs were identified across both hemispheres using a contrast of faces greater than objects and a statistical threshold of P < 0.1. This was based on our prior study of the face-selective areas in the bilateral IFG, which demonstrated that this threshold was necessary (Nikel et al., 2022). We identified clusters of at least 5 voxels in each defined face area and created a 5 mm sphere around the peak activation coordinate for the following ROIs in both hemispheres: IFG, amygdala, posterior superior temporal sulcus (pSTS), fusiform face area (FFA) and occipital face area (OFA). We then calculated the percent signal change for the two stimulation conditions in each ROI (Figure 3).
Fig. 3.

Results of the ROI analysis performed in face-selective areas for the facial expression naming task. Percent signal change (PSC) data for two stimulation conditions (right IFG and vertex) in the right and left IFG, amygdala, pSTS, FFA and OFA. Analyses revealed a significant main effect of stimulation (P = 0.015) in which TMS delivered over the right IFG reduced the neural response to expression naming across all nodes of the face network. There were no significant interactions. Error bars show standard errors of the mean across participants.
The percent signal change (PSC) data for the expression naming task for the two stimulation conditions were entered into a 2 (stimulation: right IFG, vertex) by 2 (hemisphere: right, left) by 5 (ROI: IFG, amygdala, pSTS, FFA, OFA) repeated measures ANOVA. Results showed significant main effects of stimulation [F(1,19) = 7.2, P = 0.015; partial η² = 0.275] and ROI [F(4,76) = 66.9, P < 0.001; partial η² = 0.779] but not of hemisphere [F(1,19) = 1.2, P = 0.27; partial η² = 0.063]. There was no significant three-way interaction between stimulation, hemisphere and ROI [F(4,76) = 1.9, P = 0.112; partial η² = 0.09]. All three two-way interactions were also non-significant (P > 0.065).
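The reported effect sizes follow directly from the F statistics and their degrees of freedom via partial η² = F·df_effect / (F·df_effect + df_error). A small Python check (the function name is ours) reproduces the two significant main effects:

```python
def partial_eta_squared(f, df_effect, df_error):
    """Recover partial eta-squared from an F statistic and its degrees of freedom."""
    return (f * df_effect) / (f * df_effect + df_error)

# Main effects reported for the 2 x 2 x 5 repeated-measures ANOVA
print(round(partial_eta_squared(7.2, 1, 19), 3))   # 0.275 (stimulation)
print(round(partial_eta_squared(66.9, 4, 76), 3))  # 0.779 (ROI)
```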
Multivoxel pattern analysis
Finally, we performed an exploratory multivoxel pattern analysis (MVPA) on the facial expression data. It is worth noting that the amount of data used for MVPA here is less than in fMRI-only studies because we only collected data during the window in which we expected TMS to disrupt activity. The MVPA can establish whether TMS delivered over the right IFG selectively impaired the neural discriminability of the facial expressions presented (anger, disgust, fear, happy, neutral, sad and surprise). Emotions could be discriminated from activity patterns in the bilateral pSTS, both after TMS over the IFG [t(19) = 3.14, P = 0.003] and the vertex [trending at t(19) = 1.59, P = 0.06]. Emotions were also discriminable from activity patterns in the bilateral FFA [trending at t(19) = 1.37, P = 0.09] and OFA [t(19) = 2.86, P = 0.005], but only after TMS over the vertex. The OFA was the only region showing a TMS-related difference: emotions were more readily discriminable from OFA response patterns when TMS was applied over the vertex compared to TMS over the IFG [t(19) = 2.17, P = 0.04]. We further investigated whether the effect in the OFA was specifically driven by changes in the representation of negative or positive/neutral emotions. However, repeating the MVPA for the negative (anger, disgust, fear, sadness) or positive/neutral (neutral, happiness, surprise) emotions separately, we did not find any differences in emotion discrimination between TMS over the vertex and TMS over the IFG [negative: t(19) = 1.11, P = 0.28; positive/neutral: t(19) = 0.95, P = 0.35]. While this could suggest that there is no modulation of discrimination within emotion categories, our limited scan time may not offer enough statistical power to separately assess positive and negative emotions.
Discussion
In the current study, participants were scanned using fMRI during two separate sessions while performing a facial expression naming task. Prior to scanning, TMS was delivered over a functionally localised face-selective area centered on the right IFG, or over the vertex control site. We then calculated the changes in neural activity by subtracting the BOLD data collected during the right IFG stimulation session from the BOLD data collected during the vertex stimulation session. The results of a whole-brain group analysis (Figure 2) demonstrated that TMS delivered over the right IFG decreased neural activity in the left inferior frontal gyrus (compared to when TMS was delivered over the vertex). The same analysis also revealed an increase in neural activity in three nodes of the DMN: the right SFG, right angular gyrus and the bilateral middle cingulate gyrus. The ROI analysis of the face-selective areas revealed a main effect of stimulation: TMS delivered over the right IFG reduced the neural response across all bilateral face ROIs compared to the vertex control condition (Figure 3). Our results demonstrate that visually naming facial expressions involves a dynamic push/pull interaction between the face-processing network (Haxby et al., 2000; Calder and Young, 2005) and the DMN (Raichle, 2015; Smallwood et al., 2021). This is consistent with a recent combined TMS/EEG study that demonstrated a dynamic interaction between the inferior frontal cortex and the DMN during an action performance task (Zanon et al., 2018).
Our prior studies that combined TMS and fMRI also demonstrated distributed disruption across the face network (Pitcher et al., 2014, 2017; Handwerker et al., 2020; Groen et al., 2021). The lack of an interaction between stimulation site and ROI (Figure 3) suggests that all five ROIs in both hemispheres are connected to the IFG during facial expression naming. This is consistent with patient and TMS studies showing that the IFG (Paracampo et al., 2017, 2018; Penton et al., 2017), pSTS (Pitcher, 2014; Sliwinska et al., 2020b), FFA (Rezlescu et al., 2012), OFA (Pitcher et al., 2008) and the amygdala (Adolphs et al., 1994) are all involved in facial expression recognition. Our findings reveal that the right IFG is directly or indirectly connected to all other regions in the face network. This is consistent with prior studies demonstrating that the lateral prefrontal cortex is implicated in a range of neural processes including cognitive control (MacDonald et al., 2000), working memory (Curtis and D’Esposito, 2003), Theory of Mind (Kalbe et al., 2010), executive function (Goldman-Rakic, 1996, 2000) and the top-down control of visual recognition (Heekeren et al., 2004; Baldauf and Desimone, 2014).
The results of the group whole-brain analysis revealed that TMS delivered over the right IFG (compared to the vertex control site) reduced neural activity in the left IFG for the expression naming task. The right IFG was selected as the TMS stimulation site because our prior study had demonstrated a greater response to visually presented faces in the right than in the left IFG (Nikel et al., 2022). Despite this lateralisation, the left frontal cortex has still been implicated in a range of face processing tasks. For example, prior fMRI studies have demonstrated that the left IFG exhibits greater activity in facial expression recognition tasks (Gorno-Tempini et al., 2001; Trautmann et al., 2009; Regenbogen et al., 2012). In addition, other tasks such as evaluating the social impact of facial expressions (Prochnow et al., 2014) and facial expression matching tasks (Sreenivas et al., 2012) also generate greater activity in the left IFG. The reduction in neural activity in the current study may have also been partially driven by the silent naming task participants performed. This would be consistent with the established role of the left IFG as a ‘high-level’ language brain area (Fedorenko and Thompson-Schill, 2014; Fedorenko and Blank, 2020). More generally, our data show that face networks in both hemispheres are tightly interconnected, such that disruption of one network node (the right IFG) has consequences for activity in contralateral nodes like the left IFG.
We also performed an exploratory MVPA to establish whether TMS disruption of the right IFG disrupted the neural representation of emotions. This analysis revealed that TMS over the IFG reduced the neural discriminability of emotions in the OFA, but not in any of the other regions. This suggests that TMS to the IFG can disrupt emotion processing in areas of the core face network. It is worth noting that this result was obtained under experimental conditions that are suboptimal for MVPA: the temporally constrained nature of TBS effects, lasting for only about 30 min (Pitcher et al., 2014, 2017; Handwerker et al., 2020; Groen et al., 2021), drastically reduces the amount of available fMRI data compared to typical MVPA studies of emotion processing (Said et al., 2010; Harry et al., 2013; Wegrzyn et al., 2015). Whether the effect observed in the OFA here truly extends to a larger set of areas, perhaps including the FFA, could be tested in future studies that use concurrent fMRI/TMS approaches (Mizutani-Tiebel et al., 2022) to increase the amount of available data. A surprising result in our MVPA is that the pSTS, which is often considered a key region for emotion discrimination, did not show altered emotion representations after TMS to the IFG. Future studies should investigate whether such effects appear when emotion processing is probed with dynamic stimuli, which are strongly preferred by the region (Pitcher and Ungerleider, 2021). It is worth highlighting that the results of our MVPA should be interpreted with caution: as they were obtained with very little data available for multivariate analyses, and as the effects are statistically not very robust, they provide only a first benchmark of how facial emotion processing could change after PFC disruption. Further studies are needed to solidify our results.
For example, it is unclear whether studies with adequate power will be able to distinguish between different emotional expressions when accounting for factors such as valence.
The overall pattern of our results demonstrates that naming facial expressions is dependent on the interaction of different functional brain networks. While it is common for researchers to talk about the face-processing network (Haxby et al., 2000), it is also important to note that the nodes of this network are distributed across brain areas with different cognitive functions. These include visual areas in occipito-temporal cortex (FFA, OFA, pSTS), emotion processing areas (the amygdala) and cognitive control areas (IFG). The results of the current study demonstrate push/pull dynamic interactions between these brain areas and nodes in the DMN. This is consistent with models proposing that emotion processing is a complex process dependent on the interactions of brain networks with different cognitive functions (Uddin, 2015; Pessoa, 2018; Satpute and Lindquist, 2019).
Contributor Information
David Pitcher, Department of Psychology, University of York, Heslington, York YO105DD, UK.
Magdalena W Sliwinska, School of Psychology, Liverpool John Moores University, Liverpool L3 3AF, UK.
Daniel Kaiser, Mathematical Institute, Department of Mathematics and Computer Science, Physics, Geography, Justus-Liebig-Universität Gießen, Gießen 35392, Germany; Center for Mind, Brain and Behaviour, Philipps-Universität Marburg, and Justus-Liebig-Universität Gießen, Marburg 35032, Germany.
Supplementary data
Supplementary data are available at SCAN online.
Data availability
The data underlying this article will be shared on reasonable request to the corresponding author.
Funding
This work was funded by a grant from the Biotechnology and Biological Sciences Research Council (BB/P006981/1) awarded to D.P. D.K. is supported by the German Research Foundation (DFG; SFB/TRR135, project number 222641018; KA4683/5-1, project number 518483074), ‘The Adaptive Mind’, funded by the Excellence Program of the Hessian Ministry of Higher Education, Science, Research and Art, and a European Research Council Starting Grant (PEP, ERC-2022-STG 101076057). Views and opinions expressed are those of the authors only and do not necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.
Conflict of interest
The authors declared that they had no conflict of interest with respect to their authorship or the publication of this article.
References
- Adolphs R., Tranel D., Damasio H., Damasio A. (1994). Impaired recognition of emotion in facial expressions following bilateral damage to the human amygdala. Nature, 372(6507), 669–72.
- Adolphs R. (2002). Recognizing emotion from facial expressions: psychological and neurological mechanisms. Behavioral and Cognitive Neuroscience Reviews, 1(1), 21–62.
- Baldauf D., Desimone R. (2014). Neural mechanisms of object-based attention. Science, 344(6182), 424–7.
- Barrett L.F., Satpute A.B. (2013). Large-scale brain networks in affective and social neuroscience: towards an integrative functional architecture of the brain. Current Opinion in Neurobiology, 23(3), 361–72.
- Calder A.J., Young A.W. (2005). Understanding the recognition of facial identity and facial expression. Nature Reviews Neuroscience, 6(8), 641–51.
- Chan A.W., Downing P.E. (2011). Faces and eyes in human lateral prefrontal cortex. Frontiers in Human Neuroscience, 5, 51.
- Courtney S.M., Ungerleider L.G., Keil K., Haxby J.V. (1996). Object and spatial visual working memory activate separate neural systems in human cortex. Cerebral Cortex, 6(1), 39–49.
- Courtney S.M., Ungerleider L.G., Keil K., Haxby J.V. (1997). Transient and sustained activity in a distributed neural system for human working memory. Nature, 386(6625), 608–11.
- Curtis C.E., D’Esposito M. (2003). Persistent activity in the prefrontal cortex during working memory. Trends in Cognitive Sciences, 7(9), 415–23.
- Davies-Thompson J., Andrews T.J. (2012). Intra- and interhemispheric connectivity between face-selective regions in the human brain. Journal of Neurophysiology, 108(11), 3087–95.
- Eickhoff S.B., Stephan K.E., Mohlberg H., et al. (2005). A new SPM toolbox for combining probabilistic cytoarchitectonic maps and functional imaging data. NeuroImage, 25(4), 1325–35.
- Ekman P., Friesen W.V. (1976). Measuring facial movement. Environmental Psychology & Nonverbal Behavior, 1(1), 56–75.
- Fedorenko E., Blank I.A. (2020). Broca’s area is not a natural kind. Trends in Cognitive Sciences, 24(4), 270–84.
- Fedorenko E., Thompson-Schill S.L. (2014). Reworking the language network. Trends in Cognitive Sciences, 18(3), 120–6.
- Gauthier I., Tarr M.J., Moylan J., Skudlarski P., Gore J.C., Anderson A.W. (2000). The fusiform ‘face area’ is part of a network that processes faces at the individual level. Journal of Cognitive Neuroscience, 12(3), 495–504.
- Goldman-Rakic P. (2000). Localization of function all over again. NeuroImage, 11(5 Pt 1), 451–7.
- Goldman-Rakic P.S. (1996). The prefrontal landscape: implications of functional architecture for understanding human mentation and the central executive. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 351(1346), 1445–53.
- Goldsworthy M.R., Pitcher J.B., Ridding M.C. (2012). A comparison of two different continuous theta burst stimulation paradigms applied to the human primary motor cortex. Clinical Neurophysiology, 123(11), 2256–63.
- Gorno-Tempini M.L., Pradelli S., Serafini M., et al. (2001). Explicit and incidental facial expression processing: an fMRI study. NeuroImage, 14(2), 465–73.
- Groen I.I.A., Silson E.H., Pitcher D., Baker C.I. (2021). Theta-burst TMS of lateral occipital cortex reduces BOLD responses across category-selective areas in ventral temporal cortex. NeuroImage, 230, 117790.
- Handwerker D.A., Ianni G., Gutierrez B., et al. (2020). Theta-burst TMS to the posterior superior temporal sulcus decreases resting-state fMRI connectivity across the face processing network. Network Neuroscience, 4(3), 746–60.
- Harry B., Williams M.A., Davis C., Kim J. (2013). Emotional expressions evoke a differential response in the fusiform face area. Frontiers in Human Neuroscience, 7, 692.
- Haxby J.V., Hoffman E.A., Gobbini M.I. (2000). The distributed human neural system for face perception. Trends in Cognitive Sciences, 4(6), 223–33.
- Haxby J.V., Gobbini M.I., Furey M.L., Ishai A., Schouten J.L., Pietrini P. (2001). Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293(5539), 2425–30.
- Heekeren H.R., Marrett S., Bandettini P.A., Ungerleider L.G. (2004). A general mechanism for perceptual decision-making in the human brain. Nature, 431(7010), 859–62.
- Huang Y.Z., Edwards M.J., Rounis E., Bhatia K.P., Rothwell J.C. (2005). Theta burst stimulation of the human motor cortex. Neuron, 45(2), 201–6.
- Iidaka T., Omori M., Murata T., et al. (2001). Neural interaction of the amygdala with the prefrontal and temporal cortices in the processing of facial expressions as revealed by fMRI. Journal of Cognitive Neuroscience, 13(8), 1035–47.
- Ishai A., Haxby J.V., Ungerleider L.G. (2002). Visual imagery of famous faces: effects of memory and attention revealed by fMRI. NeuroImage, 17(4), 1729–41.
- Jastorff J., De Winter F.L., Van den Stock J., Vandenberghe R., Giese M.A., Vandenbulcke M. (2016). Functional dissociation between anterior temporal lobe and inferior frontal gyrus in the processing of dynamic body expressions: insights from behavioral variant frontotemporal dementia. Human Brain Mapping, 37(12), 4472–86.
- Kalbe E., Schlegel M., Sack A.T., et al. (2010). Dissociating cognitive from affective theory of mind: a TMS study. Cortex, 46(6), 769–80.
- Kanwisher N., McDermott J., Chun M.M. (1997). The fusiform face area: a module in human extrastriate cortex specialized for face perception. Journal of Neuroscience, 17(11), 4302–11.
- Küçük E., Foxwell M., Kaiser D., Pitcher D. (2022). Moving and static faces, bodies, objects and scenes are differentially represented across the three visual pathways. bioRxiv, 2022.11.30.518408.
- Lanzoni L., Ravasio D., Thompson H., et al. (2020). The role of default mode network in semantic cue integration. NeuroImage, 219, 117019.
- Lindquist K.A., Wager T.D., Kober H., Bliss-Moreau E., Barrett L.F. (2012). The brain basis of emotion: a meta-analytic review. Behavioral and Brain Sciences, 35(3), 121–43.
- MacDonald A.W., Cohen J.D., Stenger V.A., Carter C.S. (2000). Dissociating the role of the dorsolateral prefrontal and anterior cingulate cortex in cognitive control. Science, 288(5472), 1835–8.
- McCarthy G., Puce A., Gore J.C., Allison T. (1997). Face-specific processing in the human fusiform gyrus. Journal of Cognitive Neuroscience, 9(5), 605–10.
- Mizutani-Tiebel Y., Tik M., Chang K.Y., et al. (2022). Concurrent TMS-fMRI: technical challenges, developments, and overview of previous studies. Frontiers in Psychiatry, 13, 825205.
- Nikel L., Sliwinska M.W., Kucuk E., Ungerleider L.G., Pitcher D. (2022). Measuring the response to visually presented faces in the human lateral prefrontal cortex. Cerebral Cortex Communications, 3(3), tgac036.
- Nyffeler T., Wurtz P., Luscher H.R., et al. (2006). Repetitive TMS over the human oculomotor cortex: comparison of 1-Hz and theta burst stimulation. Neuroscience Letters, 409(1), 57–60.
- Oosterhof N.N., Connolly A.C., Haxby J.V. (2016). CoSMoMVPA: multi-modal multivariate pattern analysis of neuroimaging data in Matlab/GNU Octave. Frontiers in Neuroinformatics, 10, 27.
- Paracampo R., Tidoni E., Borgomaneri S., Di Pellegrino G., Avenanti A. (2017). Sensorimotor network crucial for inferring amusement from smiles. Cerebral Cortex, 27(11), 5116–29.
- Paracampo R., Pirruccio M., Costa M., Borgomaneri S., Avenanti A. (2018). Visual, sensorimotor and cognitive routes to understanding others’ enjoyment: an individual differences rTMS approach to empathic accuracy. Neuropsychologia, 116(Pt A), 86–98.
- Penton T., Dixon L., Evans L.J., Banissy M.J. (2017). Emotion perception improvement following high frequency transcranial random noise stimulation of the inferior frontal cortex. Scientific Reports, 7(1), 11278.
- Pessoa L. (2018). Understanding emotion with brain networks. Current Opinion in Behavioral Sciences, 19, 19–25.
- Pitcher D., Garrido L., Walsh V., Duchaine B.C. (2008). Transcranial magnetic stimulation disrupts the perception and embodiment of facial expressions. Journal of Neuroscience, 28(36), 8929–33.
- Pitcher D., Dilks D.D., Saxe R.R., Triantafyllou C., Kanwisher N. (2011). Differential selectivity for dynamic versus static information in face-selective cortical regions. NeuroImage, 56(4), 2356–63.
- Pitcher D. (2014). Facial expression recognition takes longer in the posterior superior temporal sulcus than in the occipital face area. Journal of Neuroscience, 34(27), 9173–7.
- Pitcher D., Duchaine B., Walsh V. (2014). Combined TMS and fMRI reveal dissociable cortical pathways for dynamic and static face perception. Current Biology, 24(17), 2066–70.
- Pitcher D., Japee S., Rauth L., Ungerleider L.G. (2017). The superior temporal sulcus is causally connected to the amygdala: a combined TBS-fMRI study. Journal of Neuroscience, 37(5), 1156–61.
- Pitcher D., Ungerleider L.G. (2021). Evidence for a third visual pathway specialized for social perception. Trends in Cognitive Sciences, 25(2), 100–10.
- Prochnow D., Brunheim S., Steinhauser L., Seitz R.J. (2014). Reasoning about the implications of facial expressions: a behavioral and fMRI study on low and high social impact. Brain and Cognition, 90, 165–73.
- Puce A., Allison T., Asgari M., Gore J.C., McCarthy G. (1996). Differential sensitivity of human visual cortex to faces, letterstrings, and textures: a functional magnetic resonance imaging study. Journal of Neuroscience, 16(16), 5205–15.
- Puce A., Allison T., Bentin S., Gore J.C., McCarthy G. (1998). Temporal cortex activation in humans viewing eye and mouth movements. Journal of Neuroscience, 18(6), 2188–99.
- Raichle M.E. (2015). The brain’s default mode network. Annual Review of Neuroscience, 38, 433–47.
- Rapcsak S.Z., Polster M.R., Glisky M.L., Comers J.F. (1996). False recognition of unfamiliar faces following right hemisphere damage: neuropsychological and anatomical observations. Cortex, 32(4), 593–611.
- Regenbogen C., Schneider D.A., Gur R.E., Schneider F., Habel U., Kellermann T. (2012). Multimodal human communication: targeting facial expressions, speech content and prosody. NeuroImage, 60(4), 2346–56.
- Renzi C., Schiavi S., Carbon C.-C., Vecchi T., Silvanto J., Cattaneo Z. (2013). Processing of featural and configural aspects of faces is lateralized in dorsolateral prefrontal cortex: a TMS study. NeuroImage, 74, 45–51.
- Rezlescu C., Pitcher D., Duchaine B. (2012). Acquired prosopagnosia with spared within-class object recognition but impaired recognition of degraded basic-level objects. Cognitive Neuropsychology, 29(4), 325–47.
- Said C.P., Moore C.D., Engell A.D., Todorov A., Haxby J.V. (2010). Distributed representations of dynamic facial expressions in the superior temporal sulcus. Journal of Vision, 10(5), 11.
- Satpute A.B., Lindquist K.A. (2019). The default mode network’s role in discrete emotion. Trends in Cognitive Sciences, 23(10), 851–64.
- Seeley W.W., Menon V., Schatzberg A.F., et al. (2007). Dissociable intrinsic connectivity networks for salience processing and executive control. Journal of Neuroscience, 27(9), 2349–56.
- Sliwinska M.W., Bearpark C., Corkhill J., McPhillips A., Pitcher D. (2020a). Dissociable pathways for moving and static face perception begin in early visual cortex: evidence from an acquired prosopagnosic. Cortex, 130, 327–39.
- Sliwinska M.W., Elson R., Pitcher D. (2020b). Dual-site TMS demonstrates causal functional connectivity between the left and right posterior temporal sulci during facial expression recognition. Brain Stimulation, 13(4), 1008–13.
- Sliwinska M.W., Searle L.R., Earl M., et al. (2022). Face learning via brief real-world social interactions includes changes in face-selective brain areas and hippocampus. Perception, 51(8), 521–38.
- Smallwood J., Bernhardt B.C., Leech R., Bzdok D., Jefferies E., Margulies D.S. (2021). The default mode network in cognition: a topographical perspective. Nature Reviews Neuroscience, 22(8), 503–13.
- Sreenivas S., Boehm S.G., Linden D.E. (2012). Emotional faces and the default mode network. Neuroscience Letters, 506(2), 229–34.
- Trautmann S.A., Fehr T., Herrmann M. (2009). Emotions in motion: dynamic compared to static facial expressions of disgust and happiness reveal more widespread emotion-specific activations. Brain Research, 1284, 100–15.
- Tsuchida A., Fellows L.K. (2012). Are you upset? Distinct roles for orbitofrontal and lateral prefrontal cortex in detecting and distinguishing facial expressions of emotion. Cerebral Cortex, 22(12), 2904–12.
- Uddin L.Q. (2015). Salience processing and insular cortical function and dysfunction. Nature Reviews Neuroscience, 16(1), 55–61.
- Wager T.D., Kang J., Johnson T.D., Nichols T.E., Satpute A.B., Barrett L.F. (2015). A Bayesian model of category-specific emotional brain responses. PLOS Computational Biology, 11(4), e1004066.
- Wang Y., Metoki A., Smith D.V., et al. (2020). Multimodal mapping of the face connectome. Nature Human Behaviour, 4(4), 397–411.
- Wegrzyn M., Riehle M., Labudda K., et al. (2015). Investigating the brain basis of facial expression perception using multi-voxel pattern analysis. Cortex, 69, 131–40.
- Yeo B.T., Krienen F.M., Sepulcre J., et al. (2011). The organization of the human cerebral cortex estimated by intrinsic functional connectivity. Journal of Neurophysiology, 106(3), 1125–65.
- Zanon M., Borgomaneri S., Avenanti A. (2018). Action-related dynamic changes in inferior frontal cortex effective connectivity: a TMS/EEG coregistration study. Cortex, 108, 193–209.