Abstract
Words activate cortical regions in accordance with their modality of presentation (i.e., written vs. spoken), yet there is a long-standing debate about whether patterns of activity in any specific brain region capture modality-invariant conceptual information. Deficits in patients with semantic dementia highlight the anterior temporal lobe (ATL) as an amodal store of semantic knowledge, but these studies do not permit precise localisation of this function. The current investigation used multiple imaging methods in healthy participants to examine functional dissociations within the ATL. Multi-voxel pattern analysis identified spatially segregated regions: a response to input modality in anterior superior temporal gyrus (aSTG) and a response to meaning in more ventral anterior temporal lobe (vATL). This functional dissociation was supported by resting-state connectivity, which showed greater coupling for aSTG with primary auditory cortex and for vATL with the default mode network. A meta-analytic decoding of these connectivity patterns implicated aSTG in processes closely tied to auditory processing (such as phonology and language) and vATL in meaning-based tasks (such as comprehension or social cognition). Thus, we provide converging evidence for the segregation of meaning and input modality in the ATL.
Keywords: Anterior temporal lobe (ATL), Multi voxel pattern analysis (MVPA), Semantic, Hub, Spoke, Resting-state connectivity
Highlights
• Multi-voxel pattern analysis identified spatially segregated regions in the anterior temporal lobe.
• Anterior superior temporal gyrus (aSTG) responded to input modality.
• Ventral anterior temporal lobe (vATL) responded to semantic meaning.
• Converging findings from resting-state connectivity support this functional dissociation.
1. Introduction
Current neurocognitive models propose that concepts are represented in a large-scale distributed network comprising (1) sensory and motor ‘spoke’ regions that store knowledge of physical features and (2) convergence zones that integrate across multiple modalities (e.g., visual vs. auditory) to form abstract amodal representations (Damasio, 1989, Patterson et al., 2007). For example, the hub and spoke model of Patterson et al. (2007) proposes that information from modality-specific spoke regions is integrated in an amodal ‘hub’ region within the anterior temporal lobes (ATL). This hub allows the conceptual similarity of items that are semantically related yet share few surface features, such as ‘flute’ and ‘violin’, to be represented, and makes it possible to map between modalities, so that we can picture a flute and imagine the sound that it makes from only its name (e.g., Damasio, 1989; Lambon Ralph et al., 2010; Patterson et al., 2007; Rogers et al., 2004). On this account, both the ATL hub and the modality-specific spokes make a crucial contribution to conceptual representation, and these elements are mutually constraining through a pattern of interactive activation.
The spokes are hypothesized to represent the contributions of sensory and motor cortex to conceptual knowledge, as words associated with specific sensorimotor attributes activate the corresponding sensorimotor cortex. For example, words denoting actions (e.g., kick) activate the motor system (Postle et al., 2008, Rueschemeyer et al., 2007, Rueschemeyer et al., 2010), while words associated with specific smells (e.g., cinnamon) elicit activation in olfactory cortex (Cerf-Ducastel and Murphy, 2004, Gonzalez et al., 2006). Although these neural regions are important for perception and action, they are also recruited during semantic processing to provide meaning to words (Barsalou, 1999, Barsalou, 2008, Martin, 2007, Patterson et al., 2007, Kiefer and Pulvermuller, 2012).
The proposal that the ATL forms a key semantic “hub” capturing knowledge across different input modalities was initially put forward to account for the pattern of impairment in semantic dementia (SD), in which relatively focal atrophy centered on ATL leads to progressive conceptual degradation across modalities and tasks (e.g., Patterson et al., 2007; Rogers et al., 2015). SD patients are highly consistent in the knowledge they can demonstrate when the same concepts are probed in different ways, suggesting that central semantic representations degrade in this condition. Patients with SD have atrophy which increasingly affects inferior frontal and posterior temporal areas, as well as ATL, making it difficult to draw strong conclusions about the location of the “hub” from neuropsychology alone; however, the severity of the semantic impairment correlates most strongly with the degree of hypometabolism in inferior ATL (Mion et al., 2010). The crucial role of ATL is also supported by functional neuroimaging studies of healthy participants that show amodal conceptual processing in ATL (Rice et al., 2015; Visser et al., 2010). For example, Visser and Lambon Ralph (2011) characterized the degree of modality convergence in STG, MTG, ITG and fusiform cortex, comparing posterior and anterior parts of the temporal lobe. Both STG and fusiform cortex were modality-sensitive along the length of the temporal lobe, showing stronger activation for spoken words and pictures, respectively. MTG showed a multimodal response in both anterior and posterior regions. ITG uniquely showed a pattern consistent with the increasing integration of information from different inputs, namely sensitivity to modality in posterior but not anterior regions. Moreover, Spitsyna et al. (2006) showed that, despite originating from different sensory inputs, there is considerable activation overlap for spoken and written processing in ATL regions. Thus, emerging evidence from both patients with SD and healthy participants suggests that the semantic hub may be located in ventral ATL.
These observations raise the possibility of functional dissociations within the ATL. Jackson et al. (2016) recently observed different patterns of functional connectivity within superior and ventral regions of the ATL, with anterior STG showing stronger connectivity to language, auditory and motor regions, while ventral ATL showed connectivity to other multimodal semantic regions including inferior frontal gyrus, angular gyrus and posterior middle temporal gyrus. These findings parallel the pattern of white-matter connections found by Binney et al. (2012) and Jung et al. (2016). Consistent with these findings, it has been proposed that superior regions of the ATL are important in lexical and auditory processing, while ventral regions support conceptual processing across all sensory modalities (Rice et al., 2015; Visser et al., 2010; Visser and Lambon Ralph, 2011). Ventral and ventrolateral ATL regions have been found to respond to meaningful inputs across multiple modalities by studies employing convergent methods, including fMRI and transcranial magnetic stimulation (Binney et al., 2010, Visser et al., 2012, Hoffman et al., 2015) and representational similarity analysis (RSA) of ECoG data (Chen et al., 2016).
The current study used multiple imaging methodologies to simultaneously investigate the organization of knowledge in the ATL (hub) and auditory and visual regions (as potential spokes). In a functional experiment we manipulated the format in which words were presented (i.e., spoken vs. written) and the modality-specific features associated with each word's meaning (e.g., auditory features: “loud” vs. visual features: “shiny”). We used multi-voxel pattern analysis (MVPA) to decode how these two factors (modality of presentation and underlying meaning) are represented. Based on the hub and spoke model, we expected this analysis to reveal regions distributed across the cortex that respond to the meaning of a stimulus regardless of its input modality. We were particularly interested in identifying ATL regions that represent the meaning of words independently of input modality. An amodal hub should code the meaning of a stimulus regardless of presentation format (e.g., auditory feature words should elicit similar patterns of activation whether they are spoken or written). In addition, this region should represent the meaning of words tied to different sensory modalities (i.e., it should represent words with auditory meanings like ‘loud’ and words with visual meanings like ‘shiny’). In contrast, the spokes should represent particular semantic features in sensory cortex (i.e., words with an auditory meaning, such as loud, should be represented in auditory cortex regardless of how they are presented, written or spoken). However, spoke regions are not expected to represent meaning that is tied to a different sensory modality (i.e., auditory cortex may not contribute to semantic representation for words with a visual meaning, such as shiny).
Next we used the regions identified in our MVPA analysis as regions of interest in a seed based resting state connectivity analysis to understand the neural networks in which these different regions of the ATL are embedded. We expected the amodal region of ATL to show functional connectivity with regions of cortex that are important in more abstract forms of cognition, e.g., the default mode network, rather than regions important in unimodal sensory processing, such as the auditory and visual cortex. Finally, we used the search tool Neurosynth to decode the most common interpretations of this pattern of functional connectivity in the broader neuroimaging literature.
2. Materials and methods
2.1. Functional experiment
2.1.1. Participants
Twenty participants were recruited from the University of York. One participant's data were excluded due to excessive motion artifacts, leaving nineteen subjects in the final analysis (10 female; mean age 24.55 years, range 18–36). All were native speakers of British English, right handed, and had normal or corrected-to-normal vision. Participants gave written informed consent and were reimbursed for their time. The study was approved by the York Neuroimaging Centre Ethics Committee at the University of York.
2.1.2. Stimuli
Participants were presented with blocks of spoken and written items from three conditions: AUD words denoted auditory features (e.g., loud), VIS words denoted visual features (e.g., shiny) and NON stimuli were meaningless nonwords (e.g., brodic). A block consisted of a sequence of items; participants were asked to attend to the meaning of each item and respond with their left index finger when an out-of-category item was presented (see Fig. 1). For VIS and AUD blocks, half of the out-of-category items were taken from the non-presented feature condition, while the other half were taken from a separate list of taste words (e.g., spicy). Participants could not predict the category of the out-of-category item and therefore had to focus on the AUD or VIS feature specified in the instructions. In the NON condition, participants were asked to respond to any item that was a word. All stimuli were presented in both spoken and written formats. Spoken words were recorded digitally and then normalized for volume and power. Written words were presented centrally as white letters on a black background. The combination of item meaning (AUD, VIS, NON) and presentation format (spoken, written) yielded 6 experimental conditions (Spoken-AUD, Spoken-VIS, Spoken-NON, Written-AUD, Written-VIS, Written-NON).
Fig. 1.
Experimental design. (A) Four runs across the fMRI session. Each run lasted no longer than 6 min 19 s. (B) Block organization within each run. WA=Written-AUD, WV=Written-VIS, WN=Written-NON, SA=Spoken-AUD, SV=Spoken-VIS and SN=Spoken-NON. Only 6 of the 12 blocks are depicted for illustration. Each of the 6 conditions was presented twice in random order, with no immediate repetition. Written blocks lasted 22.7 s; spoken blocks lasted no longer than 23.2 s. (C) Each block began with written instructions stating the semantic feature type and presentation format, shown for 3500 ms (followed by 500 ms fixation). The 8 items from the condition were then presented twice in a random order, with no immediate repetition. Only 5 of the 16 items are depicted for illustration. The arrow represents an out-of-category item (e.g., the visual feature ‘glossy’ in a block of auditory features). In total, 17 words were presented within each block (16 targets and 1 catch trial).
The selection of AUD and VIS words was validated in a behavioural study with twelve participants who did not take part in the fMRI session. Participants rated a subset of modality-specific words (n=220) according to how much each one related to four sensory categories: auditory, visual, haptic and taste. Participants also provided ratings of familiarity and emotional valence. All ratings were given on a 5-point Likert scale. We selected adjectives with strong auditory or visual associations. Each set contained 8 items, matched on key psycholinguistic variables such as frequency and length (see Table 1; Wilcoxon signed rank tests revealed all p>.05). AUD words (such as ‘loud’) were selected if they scored significantly higher on the auditory than the visual, haptic or taste modalities (all p<.001). Likewise, VIS words (such as ‘shiny’) were selected if they scored significantly higher on the visual than the auditory, haptic or taste modalities (all p<.001).
Table 1.
Mean psycholinguistic properties of stimuli (SD in parentheses).
Property | Auditory feature words | Visual feature words | Non-words
---|---|---|---
Example | “loud” | “shiny” | “brodic”
Log frequency | 2.27 (1.05) | 2.54 (.82) | N/A
Length | 5.25 (.76) | 5.50 (.80) | 5.88 (1.17)
Syllables | 1.88 (.45) | 1.63 (.49) | 2.00 (.50)
Age of acquisition | 7.17 (2.70) | 6.85 (2.76) | N/A
Familiarity | 4.43 (.63) | 4.40 (.51) | N/A
Emotional valence | 3.18 (.70) | 3.30 (.67) | N/A
Levenshtein distance | 5.11 (.94) | 6.00 (1.25) | 5.89 (.86)
Behavioural feature-rating (auditory) | 4.45 (.61)* | 1.15 (.04)* | N/A
Behavioural feature-rating (visual) | 1.65 (.32)* | 4.77 (.19)* | N/A
Behavioural feature-rating (haptic) | 1.50 (.39) | 1.76 (.72) | N/A
Behavioural feature-rating (taste) | 1.19 (.07) | 1.21 (.09) | N/A
Footnote: Log frequency=log-transformed lemma frequencies from the SUBTLEX database (Brysbaert, New and Keuleers, 2012; http://expsy.ugent.be/subtlexus). Length=number of letters. Age of acquisition taken from the AoA norms of Kuperman et al. (2012). Part of speech also taken from the SUBTLEX database. Familiarity, emotional valence and behavioural feature ratings (auditory; visual; haptic; taste) were obtained from a behavioural experiment with a cohort of participants separate from the fMRI study; these were scored on a Likert scale (1–5). * Wilcoxon signed rank tests revealed a significant difference between the auditory-feature and visual-feature conditions (p<.001).
A set of 8 taste features was used in out-of-category catch trials. These items scored significantly higher on the taste modality than on the auditory, visual and haptic modalities (p<.001), and were matched to the AUD and VIS words on the variables in Table 1 (all p>.05). Finally, NON words were made by recombining the phonemes from the AUD and VIS conditions to create 8 pseudo-words. The non-word condition matched the AUD and VIS conditions on number of letters, syllables and Levenshtein distance (Levenshtein, 1965), which quantifies the number of phoneme insertions, deletions and/or substitutions required to change one word into another (all p>.05). The use of a small number of items is consistent with other MVPA studies of semantic representation (Correia et al., 2014; Peelen and Caramazza, 2012).
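For illustration, a minimal character-level implementation of this metric is sketched below (our sketch, not the authors' code; the study applied the measure at the phoneme level):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions or substitutions
    required to change string a into string b."""
    prev = list(range(len(b) + 1))          # distances from "" to prefixes of b
    for i, ca in enumerate(a, start=1):
        curr = [i]                          # distance from a[:i] to ""
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

print(levenshtein("loud", "shiny"))  # 5
```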
Stimulus presentation was controlled by a PC running Neurobehavioural Systems Presentation® software (Version 0.07, www.neurobs.com). Stimuli were projected onto a screen viewed through a mirror mounted on the head coil. Spoken stimuli were presented binaurally using MR-compatible headphones.
2.1.3. Task procedure
Prior to being scanned, participants were shown a printed copy of all stimuli (8 AUD, 8 VIS, 8 NON) to familiarize them with the items. They also performed a practice session consisting of 12 blocks, identical to one scanning run.
In the scanner there were 4 runs of 12 blocks. The choice of 4 functional runs is consistent with many MVPA studies that presented trials within 4 runs of 5–10 min each (Coutanche and Thompson-Schill, 2012; Fairhall and Caramazza, 2013; Peelen and Caramazza, 2012). Within each run, there were two blocks for each of the 6 experimental conditions (spoken and written words combined with three meaning conditions: AUD, VIS and NON). These were presented in a pseudo-random order, with no immediate repetition of conditions. Blocks were separated by a jittered gap (4–8 s) during which a red fixation cross was presented. A block consisted of 17 stimuli: eight stimuli from that experimental condition presented twice in a pseudo-random order, with no immediate repetition, plus one out-of-category catch trial. Written stimuli were presented for 600 ms; spoken stimuli were presented for 633.57 ms on average (SD=71.57 ms). Words within each block were separated by a 500 ms inter-stimulus interval.
Block transitions were marked with a written task instruction, which indicated (i) the aspect of meaning that participants needed to focus on and (ii) the presentation format, shown in parentheses. The task instructions were presented for 3500 ms (followed by 500 ms fixation). A grey fixation cross against a black background was used to minimize eye movements for the duration of each block. Each block (including task instruction and jittered rest period) lasted on average 28.7 s.
2.1.4. Acquisition
Data were acquired using a GE 3 T HD Excite MRI scanner at the York Neuroimaging Centre, University of York. A Magnex head-dedicated gradient insert coil was used in conjunction with a birdcage radio-frequency coil tuned to 127.4 MHz. A gradient-echo EPI sequence was used to collect data from 38 axial slices, acquired bottom-up and aligned with the temporal lobe (TR=2 s, TE=18 ms, FOV=192×192 mm, matrix size=64×64, slice thickness=3 mm, slice gap=1 mm, flip angle=90°). Voxel size was 3×3×3 mm. Functional images were co-registered onto a T1-weighted anatomical image from each participant (TR=7.8 s, TE=3 ms, FOV=290×290 mm, matrix size=256×256, voxel size=1.13×1.13×1 mm) using linear registration (FLIRT, FSL).
2.1.5. Preprocessing
Imaging data were preprocessed using the FSL toolbox (http://www.fmrib.ox.ac.uk/fsl). Images were skull-stripped using the brain extraction tool (BET, Smith, 2002) to remove non-brain tissue. The first five volumes (10 s) of each scan were removed to minimize the effects of magnetic saturation, and slice-timing correction was applied. Motion correction (MCFLIRT, Jenkinson et al., 2002) was followed by temporal high-pass filtering (cutoff=0.01 Hz). Individual participant data were first registered to their high-resolution T1 anatomical image and then into standard space (Montreal Neurological Institute; MNI152); this step included tri-linear interpolation of voxels to 2×2×2 mm. For univariate analyses, data were additionally smoothed (Gaussian full width half maximum 6 mm).
2.1.6. Univariate analysis
Each condition's onset and duration were defined from the first item in each block (after the initial instructions) to the end of the last item. The response to each of the 6 conditions was contrasted against rest. Box-car regressors for each condition, in each run, were convolved with a double-gamma hemodynamic response function in the general linear model (FEAT, FSL). Regressors of no interest were also included to account for head motion within scans. A fixed effects design (FLAME, http://www.fmrib.ox.ac.uk/fsl) was then used to average across the four runs within each individual. Finally, individual participant data were entered into a higher-level whole-brain group analysis using a mixed effects design (FLAME, http://www.fmrib.ox.ac.uk/fsl).
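As a sketch of this modelling step, the snippet below builds a comparable design matrix with nilearn (the study itself used FSL FEAT, so this is an analogous reconstruction rather than the published pipeline; the onsets, durations and number of volumes are illustrative placeholders):

```python
import numpy as np
import pandas as pd
from nilearn.glm.first_level import make_first_level_design_matrix

# Illustrative block timings only; the real onsets come from the
# presentation logs (first item after the instructions to last item).
events = pd.DataFrame({
    "onset":      [10.0, 42.0, 74.0],            # seconds
    "duration":   [22.7, 22.7, 23.2],            # block lengths
    "trial_type": ["Written-AUD", "Written-VIS", "Spoken-AUD"],
})
frame_times = np.arange(185) * 2.0               # 185 volumes, TR = 2 s

# "glover" is nilearn's double-gamma HRF, analogous to FEAT's default.
design = make_first_level_design_matrix(
    frame_times, events, hrf_model="glover", drift_model=None)
print(design.columns.tolist())                   # one boxcar regressor per condition
```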
2.1.7. Multivariate pattern analysis
Parameter estimates were calculated in the same manner as for the univariate analyses, for each condition and each run: in this way, the spatial pattern entered into the classifier for each condition represented the average response to the 8 exemplars. This method is consistent with previous literature investigating semantic representations (Coutanche and Thompson-Schill, 2012; Fairhall and Caramazza, 2013; Peelen and Caramazza, 2012): it allows us to infer that a particular region can discriminate between words referring to auditory and visual features, for example, but not between the meanings of individual words. MVPA was conducted on spatially unsmoothed data to preserve local voxel information.
As we had a priori knowledge of strong selectivity for the classes in particular brain regions (ATL, primary auditory cortex and primary visual cortex), we opted for an ROI-based MVPA method rather than a whole-brain analysis. This reduced the number of voxels used for classification, and therefore the number of free parameters that can lead to over-fitting (for similar approaches see Kamitani and Tong (2005) and Kuhl, Rissman, Chun and Wagner (2011)). The following masks were used: primary visual cortex (taken from the FSL Juelich atlas; http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/Atlases), primary auditory cortex (also from the FSL Juelich atlas) and ATL (anterior to Y=−22; Hoffman et al., 2015). The sizes of these masks were as follows: primary visual cortex, 12662 voxels; primary auditory cortex, 2372 voxels; ATL, 18523 voxels.
To ensure that our ROIs had sufficient signal to detect reliable fMRI activation, the temporal signal-to-noise ratio (tSNR) was calculated for each participant from the first run of the experiment by dividing the mean signal in each voxel by the standard deviation of the residual error time series in that voxel (Friedman et al., 2006). tSNR values were averaged across the voxels of each ROI. Mean tSNR values, averaged across participants, were as follows: ATL, 76.74; primary auditory cortex (PAC), 93.61; primary visual cortex (PVC), 102.96. The percentage of voxels in each ROI with “good” tSNR values (>20; Binder et al., 2011) was above 85% for all ROIs: ATL, 86.17%; PAC, 99.87%; PVC, 94.58%. These values indicate that, although mean tSNR was lower in anterior temporal cortex than in sensory regions, it was sufficient to detect reliable fMRI activation in all ROIs (Binder et al., 2011). Moreover, to determine whether tSNR was sufficient in each sub-region of the ATL (as signal dropout is most prominent in ventral anterior regions), tSNR was calculated for the following regions: aSTG, 85.97; aMTG, 89.00; aITG, 69.79; anterior fusiform gyrus, 69.74; anterior parahippocampal gyrus, 67.13; temporal pole, 63.27. Again, although mean tSNR was lower in more ventral anterior regions, it remained sufficient to detect reliable fMRI activation (Binder et al., 2011).
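In code, this tSNR computation reduces to the following (a minimal numpy sketch assuming the 4D data, GLM residuals and ROI mask are already loaded as arrays; the variable names are ours):

```python
import numpy as np

def roi_tsnr(data_4d: np.ndarray, resid_4d: np.ndarray,
             roi_mask: np.ndarray) -> float:
    """Mean tSNR in an ROI: per-voxel mean signal divided by the
    standard deviation of the residual error time series."""
    mean_signal = data_4d.mean(axis=-1)       # average over time
    resid_sd = resid_4d.std(axis=-1)          # noise estimate per voxel
    tsnr = mean_signal[roi_mask] / resid_sd[roi_mask]
    return float(tsnr.mean())

def prop_good_voxels(data_4d, resid_4d, roi_mask, cutoff=20.0) -> float:
    """Proportion of ROI voxels above the 'good' tSNR cutoff
    (>20; Binder et al., 2011)."""
    tsnr = data_4d.mean(axis=-1)[roi_mask] / resid_4d.std(axis=-1)[roi_mask]
    return float((tsnr > cutoff).mean())
```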
For each voxel in our three ROI masks, we performed classification with a linear support vector machine (LIBSVM; fixed regularization hyper-parameter C=1) and 4-fold leave-one-run-out cross-validation, implemented in custom Python scripts using the pyMVPA software package (Hanke et al., 2009). A support vector machine was chosen as it combats over-fitting by limiting the complexity of the classifier (Lewis-Peacock and Norman, 2013). The classifier was trained on three runs and tested on the independent fourth run; the testing set was then alternated across the four iterations. Classifiers were trained and tested on individual subject data transformed into MNI standard space. The functional data were first z-scored per voxel within each run. The searchlight analysis was implemented by extracting the z-scored β-values from spheres (6 mm radius) centered on each voxel in the masks; each sphere of this size included ~123 voxels (Kriegeskorte et al., 2006). Classification accuracy (the proportion of correctly classified trials) for each sphere was assigned to the sphere's central voxel to produce accuracy maps. The resulting accuracy maps were then smoothed with a Gaussian kernel (6 mm FWHM). To determine whether accuracy maps were above chance level (50%), individual accuracy maps were entered into a higher-level group analysis (mixed effects, FLAME; http://www.fmrib.ox.ac.uk/fsl), testing the accuracy values across subjects against chance for each voxel. Voxel inclusion was set at z=2.3 with a cluster significance threshold of FWE p<.05.
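A minimal scikit-learn rendering of the per-sphere classification scheme is shown below (the study used pyMVPA with LIBSVM; scikit-learn's `SVC` wraps the same LIBSVM backend, so this sketch mirrors, rather than reproduces, the published code):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def sphere_accuracy(X: np.ndarray, y: np.ndarray, runs: np.ndarray) -> float:
    """X: (n_patterns, n_voxels) z-scored beta patterns from one
    searchlight sphere; y: condition labels; runs: run index per pattern.
    Returns leave-one-run-out accuracy, assigned to the centre voxel."""
    clf = SVC(kernel="linear", C=1)      # linear SVM, fixed C, libsvm backend
    cv = LeaveOneGroupOut()              # train on 3 runs, test on the 4th
    scores = cross_val_score(clf, X, y, groups=runs, cv=cv)
    return scores.mean()
```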
The following classification tests were performed: (1) Semantic feature classifier: this examined whether patterns of activity conveyed information about the meanings of words, by training a classifier to discriminate between words referring to auditory features (e.g., loud) and visual features (e.g., shiny). This classifier was truly format-independent in the sense that it was trained on this semantic distinction using spoken words and tested using written words (and vice versa). The advantage of performing the classification in this manner is that only semantic information common to both presentation formats was informative to the classifier (see Fig. 2A). The results from the two classification directions were averaged to produce a single estimate of classification accuracy. (2) Perceptual classifier: here a classifier was trained to discriminate between spoken and written non-words and was tested on these two presentation formats for words. In this way only presentation-format information that was common to both non-words and words was informative to the classifier (see Fig. 2B).
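The format-independent logic of the semantic feature classifier can be written out as follows (a sketch under the same assumptions as above; the array names are illustrative):

```python
import numpy as np
from sklearn.svm import SVC

def crossmodal_accuracy(X_spoken, y_spoken, X_written, y_written) -> float:
    """Train on the AUD-vs-VIS distinction in one presentation format and
    test on the other, then average both directions, so that only semantic
    information shared across formats can drive performance."""
    folds = [((X_spoken, y_spoken), (X_written, y_written)),
             ((X_written, y_written), (X_spoken, y_spoken))]
    accs = []
    for (X_tr, y_tr), (X_te, y_te) in folds:
        clf = SVC(kernel="linear", C=1).fit(X_tr, y_tr)
        accs.append(clf.score(X_te, y_te))
    return float(np.mean(accs))
```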
Fig. 2.
Schematic illustration of the MVPA searchlight classifiers performed. Each box includes the six experimental conditions. Classifiers were trained to distinguish between two conditions (red and blue). The classifiers were then tested on independent trials that differed in the same way. (A) Classifiers were trained and tested based on semantic content (trained on Spoken-AUD vs. Spoken-VIS, tested on Written-AUD vs. Written-VIS – and vice versa). The results from both comparisons were then averaged. (B) Classifiers were trained and tested based on presentation format (trained on Spoken-NON vs. Written-NON, tested on Spoken words vs. Written words). (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article).
2.2. Resting state fMRI
2.2.1. Participants
This analysis was performed on a separate cohort of 42 healthy participants at the York Neuroimaging Centre (13 male; mean age 20.31 years, range 18–25). Subjects completed a 9-min functional connectivity MRI scan during which they were asked to rest in the scanner with their eyes open. Using these data, we examined the resting-state fMRI (rs-fMRI) connectivity of the ATL regions that were informative to the semantic feature (aITG) and perceptual (aSTG) classifiers, to investigate whether these regions fall within similar or distinct networks. In addition, we investigated the rs-fMRI connectivity of the regions within primary sensory cortices that showed significant decoding by the semantic classifier, to examine whether these regions overlap with the connectivity maps of the ATL seeds.
2.2.2. Acquisition
As in the functional experiment, a Magnex head-dedicated gradient insert coil was used in conjunction with a birdcage radio-frequency coil tuned to 127.4 MHz. For the resting-state data, a gradient-echo EPI sequence was used to collect data from 60 axial slices with an interleaved (bottom-up) acquisition order and the following parameters: TR=3 s, TE=minimum full, volumes=180, flip angle=90°, matrix size=64×64, FOV=192×192 mm, voxel size=3×3×3 mm. A minimum-full TE was selected to optimise image quality (as opposed to a value less than minimum full, which would, for instance, allow more slices per TR). Functional images were co-registered onto a T1-weighted anatomical image from each participant (TR=7.8 s, TE=3 ms, FOV=290×290 mm, matrix size=256×256, voxel size=1×1×1 mm).
2.2.3. Pre-processing
Data were preprocessed using the FSL toolbox (http://www.fmrib.ox.ac.uk/fsl). Prior to the functional connectivity analysis, the following pre-statistics processing was applied to the resting-state data: motion correction using MCFLIRT to safeguard against motion-related spurious correlations (Baker et al., 2015, Smallwood et al., 2016, Krieger-Redwood et al., 2016, Davey et al., 2016); slice-timing correction using Fourier-space time-series phase-shifting; non-brain removal using BET; spatial smoothing using a Gaussian kernel of FWHM 6 mm; grand-mean intensity normalisation of the entire 4D dataset by a single multiplicative factor; high-pass temporal filtering (Gaussian-weighted least-squares straight line fitting, with sigma=100 s); and Gaussian low-pass temporal filtering, with sigma=2.8 s.
2.2.4. Low level analysis
For our ATL sites we created two spherical seed ROIs, 6 mm in diameter, centered on the coordinates of the central voxel in the highest performing spheres in our presentation-format and semantic searchlight analyses: left aSTG [-54 2 -10] and aITG [-50 -10 -26], respectively (see Table 2). For our sensory semantic regions we created two spherical seed ROIs centered on intracalcarine cortex [-18 -84 4] and planum polare [-48 -12 -4], the best performing spheres in our semantic searchlight analysis; these regions showed high classification accuracy for the semantic classifier and fall within primary sensory regions.
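A hedged nilearn sketch of this seed-extraction step is given below; the coordinates come from Table 2, but the file name and the use of `NiftiSpheresMasker` are our assumptions rather than the published pipeline:

```python
from nilearn.maskers import NiftiSpheresMasker

# Seed coordinates (MNI, mm) from the peak searchlight spheres.
seeds = [(-54, 2, -10),    # aSTG
         (-50, -10, -26),  # aITG
         (-48, -12, -4),   # planum polare
         (-18, -84, 4)]    # intracalcarine cortex

# radius is specified in mm; a 3 mm radius yields the 6 mm diameter
# spheres described in the text.
masker = NiftiSpheresMasker(seeds, radius=3, standardize=True)
seed_ts = masker.fit_transform("rest_run.nii.gz")  # hypothetical file; (volumes, 4)
```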
Table 2.
Centre Voxel Coordinates of Highest Decoding Sphere in the Searchlight Analyses.
Condition | Mask | Cluster Peak | Extended Cluster Regions | Cluster Extent | Z-score | Acc (%) | x | y | z
---|---|---|---|---|---|---|---|---|---
Semantic feature | ATL | L Anterior ITG/MTG | L Heschl's gyrus, L putamen | 478 | 4.91 | 61.22 | -50 | -10 | -26
 | ATL | R Temporal pole | R Anterior parahippocampal gyrus, R Anterior MTG, R Anterior STG | 416 | 4.58 | 61.05 | 42 | 12 | -24
 | Auditory | L Planum polare | L Heschl's gyrus, L Planum temporale | 88 | 3.92 | 59.53 | -48 | -12 | -4
 | Visual | L Intracalcarine cortex | L Lingual gyrus | 81 | 4.26 | 61.18 | -18 | -84 | 4
Presentation format | Visual | L Occipital pole | L Occipital fusiform gyrus, L Inferior lateral occipital cortex | 607 | 4.30 | 58.57 | -16 | -92 | 0
 | Auditory | L Planum temporale | L Heschl's gyrus, R Planum temporale, R Heschl's gyrus | 581 | 4.97 | 59.85 | -58 | -24 | 8
 | ATL | L Anterior STG | L Temporal pole, R Anterior STG | 66 | 2.80 | 58.36 | -58 | -10 | -2
Footnote: Highest decoding accuracy clusters for semantic feature (AUD vs. VIS) and presentation format (spoken vs. written words) analysed separately. Semantic feature classifier was trained on the distinction between spoken AUD vs. spoken VIS and tested on written AUD vs. written VIS (and vice versa). Presentation format classifier was trained on the distinction between written non-words vs. spoken non-words and tested on spoken words vs. written words. Results are thresholded at p<.05 (cluster corrected). L=left, R=right. As well as peak accuracy (reported under the ‘Cluster Peak’ column), the ‘Extended Cluster Regions’ includes all significant regions within each ROI. In addition to the searchlight analyses reported in the table, a further searchlight analysis was run on the distinction between all spoken vs. all written items. This revealed accuracies as high as 99.6% in primary sensory regions and 93.2% in ATL. The unthresholded MVPA maps for each searchlight have been uploaded to the Neurovault database and can be found here http://neurovault.org/collections/1970/.
The time series of these regions were extracted and used as explanatory variables in a separate subject-level functional connectivity analysis for each seed. Subject-specific nuisance regressors were determined using a component-based noise correction (CompCor) approach (Behzadi et al., 2007). This method applies principal component analysis (PCA) to the fMRI signal from subject-specific white matter (WM) and cerebrospinal fluid (CSF) ROIs. In total there were 11 nuisance regressors: five components from CompCor and a further six motion regressors from MCFLIRT. These components were then removed from the fMRI data through linear regression. The WM and CSF covariates were generated by segmenting each individual's high-resolution structural image (using FAST in FSL; Zhang et al., 2001). The default tissue probability maps, referred to as prior probability maps (PPM), were registered to each individual's high-resolution structural image (T1 space) and the overlap between these PPM and the corresponding CSF and WM maps was identified. These maps were then thresholded (40% for the CSF and 66% for the WM), binarized and combined. The six motion parameters were calculated in the motion-correction step during pre-processing: movement in each of the three Cartesian directions (x, y, z) and rotational movement around three axes (pitch, yaw, roll) were included for each individual.
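The nuisance-regression step described above amounts to the following (a minimal numpy/scikit-learn sketch of the CompCor logic, not the authors' implementation):

```python
import numpy as np
from sklearn.decomposition import PCA

def regress_out_nuisance(voxel_ts: np.ndarray, wm_csf_ts: np.ndarray,
                         motion: np.ndarray, n_comp: int = 5) -> np.ndarray:
    """voxel_ts: (time, voxels) fMRI data; wm_csf_ts: (time, voxels) signal
    from the combined WM/CSF mask; motion: (time, 6) MCFLIRT parameters.
    Removes 5 CompCor components + 6 motion regressors by linear regression."""
    comps = PCA(n_components=n_comp).fit_transform(wm_csf_ts)    # (time, 5)
    nuisance = np.column_stack([comps, motion, np.ones(len(comps))])
    beta, *_ = np.linalg.lstsq(nuisance, voxel_ts, rcond=None)
    return voxel_ts - nuisance @ beta                            # cleaned data
```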
2.2.5. High level analysis
At the group level the data were processed using FEAT version 5.98, part of FSL (FMRIB's Software Library, www.fmrib.ox.ac.uk/fsl), and the analyses were carried out using FMRIB's Local Analysis of Mixed Effects (FLAME) stage 1 with automatic outlier detection. The z-statistic images were thresholded using clusters determined by z>2.3 and a (corrected) cluster significance threshold of p=.05 (Worsley, 2001). No global signal regression was performed.
To investigate the differences between the connectivity maps, a fixed effects design (FLAME, http://www.fmrib.ox.ac.uk/fsl) was conducted for each participant with four contrasts: (i) aSTG>aITG seed, (ii) aITG>aSTG seed, (iii) auditory semantic>visual semantic seed and (iv) visual semantic>auditory semantic seed. Individual participant data were then entered into a higher-level whole-brain group analysis using a mixed effects design (FLAME, http://www.fmrib.ox.ac.uk/fsl). Finally, to determine whether our ATL seeds' connectivity maps overlap with those of the sensory semantic seeds, we calculated the number of overlapping voxels between the two ATL connectivity maps and the sensory semantic connectivity maps.
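Counting the overlapping suprathreshold voxels between two maps is straightforward; a minimal nibabel sketch (file names hypothetical, threshold matching the z>2.3 criterion used above):

```python
import numpy as np
import nibabel as nib

def overlap_voxels(map_a: str, map_b: str, z_thresh: float = 2.3) -> int:
    """Number of voxels exceeding the threshold in both z-stat maps
    (assumes the maps are in the same space and resolution)."""
    a = nib.load(map_a).get_fdata() > z_thresh
    b = nib.load(map_b).get_fdata() > z_thresh
    return int(np.logical_and(a, b).sum())

n = overlap_voxels("aSTG_zstat.nii.gz", "auditory_semantic_zstat.nii.gz")
```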
2.3. Resting state decoder
To allow quantitative inferences to be drawn about the functional networks identified through our seed-based correlational analyses, we performed an automated meta-analysis using NeuroSynth (http://neurosynth.org/decode; Yarkoni et al., 2011). This software computed the spatial correlation between each ATL component mask and every other meta-analytic map (n=11406) for each term/concept stored in the database (e.g., semantic, language, memory, sensory). The 15 meta-analytic maps exhibiting the highest positive correlation and the 15 exhibiting the highest negative correlation for each sub-system mask were extracted, and the term corresponding to each of these meta-analyses is shown in Fig. 8. The font size reflects the size of the correlation (ranging from r=0.10 to 0.45 for positive correlations and r=−0.05 to −0.2 for negative correlations, in increments of 0.05). This allows us to quantify the most likely reverse inferences that would be drawn from these functional maps by the wider neuroimaging community.
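Conceptually, the NeuroSynth decoder ranks term maps by their spatial correlation with the input map; a minimal sketch of that computation (our illustration, assuming all maps are vectorized into a common voxel space):

```python
import numpy as np

def decode_map(component_map: np.ndarray, term_maps: list,
               term_names: list, top_k: int = 15):
    """Rank meta-analytic term maps by Pearson correlation with an
    unthresholded connectivity map, returning the top_k positive and
    negative associations (as in Fig. 8)."""
    v = component_map.ravel()
    rs = np.array([np.corrcoef(v, t.ravel())[0, 1] for t in term_maps])
    order = np.argsort(rs)
    positive = [(term_names[i], rs[i]) for i in order[::-1][:top_k]]
    negative = [(term_names[i], rs[i]) for i in order[:top_k]]
    return positive, negative
```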
Fig. 4.
Left hemisphere sagittal slices taken at X=−55 and X=−50. Primary auditory ROI taken from Juelich histological atlases projected in blue. Results of the group-level searchlight analysis for semantic feature classification (AUD vs. VIS) projected in magenta (cluster-corrected p<.01). Results for perceptual classifier (spoken vs. written) projected in cyan (cluster-corrected p<.01). Overlap of the two searchlight analyses in white. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.).
3. Results
3.1. Behavioural results
Accuracy and reaction times (RT) were calculated for each participant (n=19) for the catch trials in each experimental condition. All participants paid attention to the words, as indicated by mean accuracy above 80% in every experimental condition (Spoken-AUD=80.63%±15.33, Spoken-VIS=88.12%±4.86, Spoken-NON=85.62%±11.47, Written-AUD=83.12%±19.01, Written-VIS=86.25%±13.52, Written-NON=88.75%±5.45). A chi-square test of independence revealed that accuracy did not significantly differ across the six experimental conditions (χ²(5)=6.09, p=.303) or across spoken and written input (χ²(1)=.301, ns). RTs differed significantly between input modalities (t(59)=7.36, p<.001), but not between semantic categories within each modality (spoken: F(2,38)=.92, ns; written: F(2,38)=.074, ns). In line with previous findings (Booth et al., 2002, Cohen et al., 2004), participants were significantly faster at responding to written than spoken stimuli. Furthermore, the absence of RT differences between AUD, VIS and NON items within each presentation modality suggests that the experimental conditions were well matched at the behavioural level.
3.2. Searchlight analysis
3.2.1. Semantic feature classifier
The format-independent searchlight classifier, trained on the distinction between visual and auditory features in one presentation modality and tested on this distinction in the other modality, was run in three separate masks (ATL, primary auditory cortex and primary visual cortex). All results reported are above chance level (50%, cluster corrected p<.05). The searchlight analysis within the ATL mask revealed a left hemisphere cluster that could decode semantic information across modalities in aMTG and aITG (see Fig. 3, Table 2). Additionally, right hemisphere clusters were revealed in anterior parahippocampal gyrus and the temporal pole (TP). The searchlight analysis within the primary auditory mask revealed a cluster in planum polare (see Fig. 4, Table 2). Finally, the primary visual cortex mask revealed a cluster in intracalcarine cortex that could decode semantic content (see Fig. 5, Table 2).
Fig. 3.
Coronal slices taken at Y=5, Y=−5 and Y=−15. Anterior temporal lobe mask shows all regions of the temporal lobe anterior to Y=−22 in line with Hoffman et al., (2015) projected in blue. Results of the group-level searchlight analysis for semantic feature classification (AUD vs. VIS) projected in magenta (cluster-corrected p<.01). Results for perceptual classifier (spoken vs. written) projected in cyan (cluster-corrected p<.01). Overlap of the two searchlight analyses in white. In total 47 voxels overlapped across the two searchlight analyses in aSTG (right hemisphere, 38 voxels; left hemisphere, 9 voxels). aSTG=anterior superior temporal gyrus; aMTG/aITG=anterior middle temporal gyrus/inferior temporal gyrus. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.).
Fig. 5.
Left hemisphere sagittal slices taken at X=−20, X=−15 and X=−10. Primary visual ROI taken from Juelich histological atlases projected in blue. Results of the group-level searchlight analysis for semantic feature classification (AUD vs. VIS) projected in magenta (cluster-corrected p<.01). Results for perceptual classifier (spoken vs. written) projected in cyan (cluster-corrected p < .01). Overlap of the two searchlight analyses in white. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.).
3.2.2. Perceptual classifier
The classifier that was trained on the distinction between spoken and written non-words and tested on the same distinction for words was also run in the three separate masks (ATL, primary auditory cortex and primary visual cortex). All results reported are above chance level (50%, cluster corrected p<.05). Within the ATL, anterior portions of STG, extending into the temporal pole, were able to decode presentation format (see Fig. 3; Table 2). Within the primary auditory cortex mask, an extensive cluster of voxels could classify perceptual information in Heschl's gyrus, planum temporale and superior temporal gyrus (see Fig. 4; Table 2). Within the primary visual cortex mask, an extensive cluster was found in the occipital pole (see Fig. 5; Table 2).
To explicitly determine whether aITG and aSTG were differentially able to classify the modality of presentation and the meaning of the stimulus, we conducted a 2×2 repeated-measures ANOVA comparing the prediction accuracies of the two classifiers in the two significant ATL clusters. This revealed three significant effects. First, a main effect of classifier type (presentation format vs. semantic; F(1,18)=36.76, p<.001). Second, a main effect of region (aSTG vs. aITG; F(1,18)=79.71, p<.001). Critically, we also found a significant interaction between classifier type and ATL region (F(1,18)=1087.51, p<.001). Post-hoc tests revealed a significant difference between aSTG and aITG for the presentation format classifier, with aSTG performing significantly better than aITG (t(18)=29.04, p<.001), and a significant difference for the semantic feature classifier, with aITG performing significantly better than aSTG (t(18)=28.30, p<.001). Collectively, these analyses show a dissociation between ATL regions: aSTG classification accuracy was higher for presentation modality than word meaning, while the reverse pattern was obtained for aITG.
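For reference, the 2×2 repeated-measures ANOVA can be reproduced with statsmodels as sketched below (the accuracies here are synthetic placeholders; the real inputs are the per-subject classification accuracies extracted from each cluster):

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Synthetic placeholder accuracies for 19 subjects.
rng = np.random.default_rng(0)
rows = [(s, region, clf, rng.normal(0.58, 0.02))
        for s in range(19)
        for region in ("aSTG", "aITG")
        for clf in ("semantic", "format")]
df = pd.DataFrame(rows, columns=["subject", "region", "classifier", "accuracy"])

# Region x classifier repeated-measures ANOVA, as reported in the text.
print(AnovaRM(df, depvar="accuracy", subject="subject",
              within=["region", "classifier"]).fit())
```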
In addition to our ROI-based MVPA results, a whole-brain searchlight analysis was computed for both the semantic feature classifier and the perceptual classifier, using the same analysis pipeline outlined for the ROI analysis. Results from the whole-brain searchlight revealed similar clusters across primary auditory cortex, primary visual cortex and anterior temporal lobe. In addition, the whole-brain analysis revealed clusters in occipital-parietal cortex and clusters extending along the temporal lobe. The unthresholded maps from the whole-brain searchlight analysis have been uploaded to the Neurovault database and can be found at http://neurovault.org/collections/1970/.
3.3. Univariate analysis
The searchlight results revealed that, in ATL, primary auditory cortex and primary visual cortex, distinct regions were able to decode semantic feature type and presentation modality. As a complementary analysis, the percentage signal change was extracted for each condition from the pairs of clusters that decoded semantic feature type and modality of presentation in ATL, visual cortex and auditory cortex (generating six analyses; see Fig. 6). A 6 mm sphere was centered on the peak MVPA accuracy in each of these sites (see Table 2). The ventral ATL region (encompassing aITG and aMTG, decoding feature type) showed deactivation across all four conditions, and the degree of deactivation was sensitive to meaning (auditory > visual features) but not input modality (spoken=written words). In contrast, aSTG (which decoded presentation modality) was sensitive to modality (spoken>written) but not meaning (auditory=visual features). Thus, univariate analyses also revealed a functional dissociation within ATL. We also examined regions that could decode modality of presentation and semantic feature type within primary auditory cortex (planum temporale and planum polare, respectively) and primary visual cortex (occipital pole and intracalcarine cortex). All four sites showed strong effects of input modality in univariate analyses across both feature types. In addition, intracalcarine cortex showed greater activity for words denoting a visual property (e.g., bright), whereas planum polare showed greater activation for words denoting an auditory property (e.g., loud). This effect of meaning in primary visual and auditory areas was only seen when the words were presented in the complementary input modality: primary visual cortex responded more to visual features when written words were presented, while primary auditory cortex responded more to auditory features when spoken words were presented. Thus, aITG was unique in showing a pattern across both multivariate and univariate analyses consistent with the predictions for an amodal ‘hub’: i.e., sensitivity to meaning and insensitivity to presentation modality.
Fig. 6.
The first column shows 6 mm ROIs centered on the peak MVPA results from the searchlight analyses (shown in Fig. 3, Fig. 4, Fig. 5), for the semantic classifier in magenta and the modality classifier in cyan, within each of our three masks (anterior temporal lobe, primary auditory cortex and primary visual cortex) projected in blue. The centres of these ROIs are as follows: aITG seed [-50 -10 -26], aSTG seed [-58 -10 -2], planum polare [-48 -12 -4], planum temporale [-58 -24 8], intracalcarine cortex [-16 -84 4] and occipital pole [-16 -92 0]. The second column shows the univariate percent signal change for each of our four conditions within the semantic (magenta) ROI. The third column shows the univariate percent signal change for each of our four conditions within the modality (cyan) ROI. Grey bars show the results for auditory-feature words (e.g., ‘loud’) and white bars show the results for visual-feature words (e.g., ‘bright’). * indicates a significant difference between auditory features and visual features within a modality (i.e., spoken auditory-features vs. spoken visual-features; p<.05). ** indicates a significant difference between spoken and written presentation formats (p<.001). The unthresholded univariate maps for each condition have been uploaded to the Neurovault database and can be found at http://neurovault.org/collections/1970/. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
3.4. Resting state fMRI
To better understand the neural architecture supporting the functional distinction between aSTG (effect of input modality) and aITG (effect of semantic feature type), we explored the connectivity of these regions in resting-state fMRI (see Fig. 7) by placing spherical ROIs at the MVPA peaks. The aSTG seed showed significant positive connectivity across the entire length of STG, through primary auditory cortex and into supramarginal gyrus (SMG). It also coupled with posterior and anterior regions of MTG, pre- and postcentral gyrus, supplementary motor cortex and anterior cingulate gyrus, and showed reduced correlation with visual regions, including lateral occipital cortex, intracalcarine cortex, occipital fusiform gyrus (OFG) and temporal occipital fusiform gyrus, as well as posterior cingulate and precuneus. In contrast, the aITG site showed connectivity with core parts of the default mode network and multimodal semantic regions, including angular gyrus, posterior parts of MTG and ITG, temporal pole extending medially to include hippocampus and anterior parahippocampal gyrus, and anterior and inferior prefrontal regions, including orbital cortex and left inferior frontal gyrus (LIFG). This seed also coupled with lateral visual regions (e.g., LOC and occipital fusiform gyrus). Table 3 presents the location and size of each of these clusters.
Fig. 7.
Resting-state connectivity maps projected on a rendered brain, displaying (from left to right) left hemisphere, right hemisphere and medial views. Maps thresholded at z=2.3, cluster corrected p<.01. (A) Resting-state connectivity maps from the two ATL seeds: green seed=aSTG (taken from the peak accuracy for the modality classifier within the anterior temporal lobe) and red seed=aITG (taken from the peak accuracy for the semantic classifier within the anterior temporal lobe); the seed locations are highlighted on the right. (B) Subtraction analysis of the two ATL connectivity maps; red=aITG>aSTG. The pie chart on the right shows the proportion of overlapping voxels for this difference map with core networks taken from Yeo et al. (2011); these four networks comprise two sensory networks (visual, somatosensory) plus the limbic and default mode networks. (C) Subtraction analysis of the two ATL connectivity maps; green=aSTG>aITG. The pie chart on the right shows the proportion of overlapping voxels for this difference map with the same networks. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Table 3.
Coordinates of peak clusters in the resting-state connectivity analyses.
Seed Region | Cluster | Cluster Extent | Z-score | x | y | z
---|---|---|---|---|---|---
aSTG | Increased correlation | | | | |
 | L. aSTG | 15745 | 12.30 | -54 | 2 | -10
 | R. Temporal pole | 12970 | 9.24 | 52 | 8 | -14
 | Cingulate gyrus | 7618 | 7.02 | -4 | 12 | 32
 | Reduced correlation | | | | |
 | L. Cuneal cortex | 26667 | 6.19 | -20 | -74 | 32
 | R. Superior frontal gyrus | 4128 | 4.69 | 20 | 12 | 52
 | L. Middle frontal gyrus | 2259 | 4.53 | -32 | 10 | 50
 | L. Lateral occipital cortex, inferior | 1457 | 5.46 | -46 | -70 | -12
aITG | Increased correlation | | | | |
 | L. aITG/MTG | 20324 | 13.10 | -50 | -10 | -26
 | L. Frontal pole | 2899 | 7.22 | -10 | 50 | 32
 | L. Occipital fusiform gyrus | 1981 | 4.49 | -26 | -82 | -8
 | Reduced correlation | | | | |
 | Postcentral gyrus | 3725 | 4.44 | 0 | -54 | 74
 | R. Frontal pole | 2717 | 5.07 | 42 | 54 | 12
 | L. IFG, pars triangularis | 2118 | 5.17 | -46 | 35 | 16
 | R. Cingulate gyrus | 1276 | 4.44 | 12 | 32 | 16
 | L. Angular gyrus | 783 | 4.39 | -40 | -50 | 42
 | L. Superior parietal lobule | 769 | 3.94 | -30 | -48 | -56
 | L. Middle frontal gyrus | 724 | 4.72 | -28 | 8 | 60
 | R. Middle frontal gyrus | 626 | 4.16 | 30 | 12 | 56
Footnote: The table shows peak clusters in the resting-state connectivity analysis from two seed regions; aSTG and aITG. Results are thresholded at p<.01 (cluster corrected).
L=left, R=right.
To investigate the differences between these two ATL maps a difference analysis was performed (Fig. 7B). The contrast of aSTG > aITG identified bilateral superior temporal and frontal polar regions. The contrast aITG > aSTG revealed bilateral inferior and middle portions of the temporal lobe and multimodal semantic sites including angular gyrus, pMTG and LIFG. These differences resemble resting state differences for aSTG and vATL reported by Jackson et al. (2016), helping to validate the functional dissociation we observed using MVPA.
To further interrogate the assumption that aITG exhibits a connectivity profile consistent with an amodal region, whereas aSTG is connected to sensory regions, we examined the similarity between our two ATL difference maps (see Fig. 7B and C) and four core networks taken from Yeo et al. (2011). These comprised two networks sensitive to sensory input (visual, somatosensory) and two networks thought to be crucial in the generation of cognitive states that do not rely on sensory inputs for their mental content (limbic and default mode network) (for a review see Andrews-Hanna et al. (2014)). The results, outlined in Figs. 7B and 7C, indicated substantial overlap between the sensory networks (particularly somatosensory) and aSTG. In contrast, aITG showed substantial overlap with the limbic and DMN networks.
4. Discussion
The current study used multiple imaging methods to identify regions in the anterior temporal lobe (ATL) and primary sensory regions that showed the pattern expected for the semantic hub of the hub and spoke model (Patterson et al., 2007). In an fMRI study, participants listened to or viewed words that referred to either visual or auditory features (e.g., bright or loud). Multi-voxel pattern analysis (MVPA) revealed a dissociation between (i) anterior inferior temporal gyrus (aITG), which could classify semantic categories relating to feature type (e.g., auditory features like “loud” as distinct from visual features like “bright”) across auditory and visual inputs, and (ii) anterior superior temporal gyrus (aSTG), which was sensitive to input modality across meaningful and meaningless items. This dissociation within ATL was further supported by univariate contrasts and patterns of resting-state connectivity: aSTG showed a stronger response to spoken than written inputs and was functionally coupled to an auditory-motor network (somatosensory network; Yeo et al., 2011), while aITG was insensitive to input modality and showed substantial connectivity with regions in the default mode and limbic networks, plus some overlap with visual regions (see Jackson et al. (2016) for similar findings).
Our findings make an important contribution to our understanding of the neural basis of semantic cognition in three ways: (1) We provide evidence that conceptual knowledge, extracted from different modalities of input across many learning experiences, is represented within ventral portions of ATL which act as a ‘hub’ (Patterson et al., 2007, Rogers et al., 2004). (2) Across converging methods, we observe a functional dissociation between ventral and superior portions of ATL and provide evidence that these regions are situated within distinct large-scale cortical networks. (3) Responses in primary visual and auditory cortex confirm the contribution of these ‘spoke’ regions to semantic processing.
According to the hub and spoke model (Patterson et al., 2007), conceptual knowledge depends on the co-activation of spoke regions, which convey information about specific unimodal and multimodal features of concepts, and an ATL hub, which integrates these features to form amodal conceptual representations that are independent of any specific sensory input. Studies of patients with semantic dementia (SD) provided the original motivation for this proposal, yet neuropsychological methods are not especially well-suited to the precise localization of amodal conceptual representations given the widespread atrophy in this condition. Nevertheless, the degree of semantic impairment correlates with hypometabolism in ventral rather than superior portions of ATL across patients (Mion et al., 2010), suggesting that ventral ATL could be the critical substrate for amodal knowledge. Relevant evidence is also provided by univariate fMRI analyses of the ATL response to verbal comprehension tasks in healthy participants, which show multiple peak responses in both ventral ATL and aSTG, often to the same contrasts (Binney et al., 2010, Hoffman et al., 2015; Visser and Lambon Ralph, 2011). Semantic matching and naming tasks have also shown multiple peak responses in the ATL, with the more superior ATL region involved in object naming and the more ventral region in semantic matching (Sanjuán et al., 2015). Furthermore, differential patterns of functional connectivity across ATL regions have been observed by both Jackson et al. (2016) and Pascual et al. (2015).
Our findings therefore add to existing knowledge by showing a dissociable response in these two regions: only the ventral ATL site showed a pattern consistent with the representation of conceptual information, since it was able to classify responses according to semantic category (i.e., feature type, not input modality). In univariate analyses, this aITG site also showed deactivation (arguably due to the use of rest rather than an active baseline; Visser et al., 2010; Humphreys et al., 2015) for both auditory and visual feature types, irrespective of whether these words were spoken or written – and the magnitude of this deactivation was greater for visual than auditory features. Finally, this site showed stronger functional connectivity at rest with the default mode and limbic systems, as expected for a region implicated in amodal conceptual processing. Therefore, our combination of functional and resting state methods provides novel converging evidence that anterior ventral temporal areas allow different sensory representations to be integrated to form ‘amodal’ conceptual representations (particularly for auditory features, see limitations below).
Previous studies have used MVPA to explore the neural basis of semantic processing, and have identified a conceptual response in ATL using classification of stimuli within a single presentation modality (Coutanche and Thompson-Schill, 2014, Peelen and Caramazza, 2012). Other studies, examining semantic cognition across modalities of presentation (Devereux et al., 2013, Fairhall and Caramazza, 2013, Man et al., 2015), have largely not observed effects in ATL. An exception is a recent crossmodal MVPA study, investigating Dutch-English bilinguals (Correia et al., 2014). The research tested whether patterns of activity related to the distinction between spoken nouns in one language (e.g., “horse” vs. “duck” in English) could accurately predict the same distinction in the other language (e.g., “paard” vs. “eend” in Dutch). Consistent with our findings, the cross-language classifier revealed a significant cluster in the left ATL. This largely fell within mid-superior temporal pole rather than the more ventral region we identified in our analysis, perhaps because aSTG is an important interface between semantic processing and other aspects of language.
Analyses of resting-state connectivity from the ATL regions that were able to classify input modality (aSTG) and semantic feature type (aITG) revealed that these two sites lie within distinct large-scale functional networks. A similar dissociation between the resting-state connectivity of ventral ATL and anterior STG was recently reported by Jackson et al. (2016), providing further evidence for the validity of the functional dissociation in ATL that we observed using MVPA. To provide a quantitative interpretation of the aSTG and aITG connectivity maps, we performed a decoding analysis using the automated fMRI meta-analysis tool NeuroSynth (see Fig. 8). Meta-analytic decoding of these spatial maps revealed that our aSTG connectivity map correlated with terms related to language (e.g., sentence, comprehension) and auditory processing (e.g., speech, sound), whilst anti-correlating with terms for other modalities (e.g., visual, spatial) and memory (e.g., working memory, episodic). In contrast, the aITG connectivity map correlated with terms related to memory (e.g., semantic, autobiographical) and social processes (e.g., theory of mind, social cognition), whilst anti-correlating with modality-specific (e.g., ventral visual, motor, spatial) and executive (e.g., maintenance, demands) terms. This is consistent with previous findings that relate aSTG to speech comprehension, language and sensory processing (Patterson and Lambon Ralph, 1999, Jobard et al., 2007, Scott and Johnsrude, 2003, Scott et al., 2003, Scott et al., 2000, Spitsyna et al., 2006) and aITG to semantic processing but not sensory experience (Patterson et al., 2007, Visser et al., 2010). Furthermore, the differences in function across temporal areas revealed by the NeuroSynth database seem to align with differences in white-matter terminations (see Bajada et al., 2016). These findings confirmed associations between (i) the network anchored in aSTG and auditory processing and speech perception, and (ii) the aITG network and more abstract domains (such as social cognition, theory of mind, and mental states).
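Seed-based resting-state connectivity of the kind reported here has a simple computational core: correlate the mean time series of a seed region with the time series of every other voxel. The following sketch, in Python with nibabel and NumPy, is a minimal illustration under assumed inputs; the file names are hypothetical, and in practice the maps would be computed on preprocessed, denoised data within a standard group pipeline.

```python
# Illustrative seed-based connectivity: correlate the mean seed time series
# with every in-brain voxel's time series. File names are hypothetical.
import nibabel as nib
import numpy as np

func = nib.load("preprocessed_rest.nii.gz").get_fdata()   # shape (x, y, z, t)
seed = nib.load("aSTG_seed_mask.nii.gz").get_fdata() > 0  # 3D boolean mask
brain = nib.load("brain_mask.nii.gz").get_fdata() > 0

ts = func[brain]                    # (n_voxels, n_timepoints)
seed_ts = func[seed].mean(axis=0)   # mean time series of the seed region

# z-score over time; Pearson r is then the mean of elementwise products
# (assumes in-mask voxels have non-constant time series).
tsz = (ts - ts.mean(1, keepdims=True)) / ts.std(1, keepdims=True)
sz = (seed_ts - seed_ts.mean()) / seed_ts.std()
r = tsz @ sz / len(sz)              # voxelwise correlation with the seed

z = np.arctanh(r)                   # Fisher z-transform for group statistics
```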
Fig. 8.
Decoding the functions of two ATL components (aSTG and aITG) using automated fMRI meta-analyses (NeuroSynth; Yarkoni et al., 2011). This software computed the spatial correlation between each ATL component's unthresholded z-statistic mask (shown on the left; red = positive correlation, blue = negative correlation) and every meta-analytic map (n=11406) for each term/concept stored in the database (e.g., semantic, language, memory and sensory). The 15 meta-analytic maps exhibiting the highest positive correlations (red words) and negative correlations (blue words) for each sub-system mask were extracted, and the term corresponding to each of these meta-analyses is shown in the respective box (on the right). The font size reflects the size of the correlation (ranging from r=0.10 to 0.45 for positive correlations (red) and r=−0.05 to −0.2 for negative correlations (blue), in increments of 0.05). This allows us to quantify the most likely reverse inferences that would be drawn from these functional maps by the wider neuroimaging community. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
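The decoding procedure summarised in Fig. 8 reduces to a spatial correlation between an unthresholded seed map and each term's meta-analytic map, with terms ranked by that correlation. A minimal sketch of this computation is given below; the file names and the four example terms are hypothetical, the maps are assumed to be co-registered in the same standard space, and the published analysis used the NeuroSynth tools themselves (Yarkoni et al., 2011).

```python
# Illustrative re-implementation of map-based meta-analytic decoding.
# File names and terms are hypothetical; the real database holds ~11,406 maps.
import nibabel as nib
import numpy as np

def spatial_correlation(map_a, map_b, mask):
    """Pearson correlation between two 3D maps over in-mask voxels."""
    return np.corrcoef(map_a[mask], map_b[mask])[0, 1]

# Unthresholded z-statistic connectivity map for one seed (e.g., aITG).
seed_map = nib.load("aITG_connectivity_zstat.nii.gz").get_fdata()
mask = seed_map != 0  # analyse voxels with data; a brain mask could be used

# Correlate the seed map with each term's meta-analytic map, then rank terms.
terms = ["semantic", "language", "memory", "auditory"]
scores = {}
for term in terms:
    term_map = nib.load(f"{term}_association_map.nii.gz").get_fdata()
    scores[term] = spatial_correlation(seed_map, term_map, mask)

for term, r in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{term:10s} r = {r:+.2f}")
```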
Thus, the putative semantic ‘hub’ in ventral ATL was functionally coupled to regions of cortex that specialise in stimulus-independent, higher-order cognition, including angular gyrus (AG) and posterior and anterior areas on the medial surface that correspond to the midline core of the so-called default mode network (DMN; see also Hurley et al., 2015). This network is known to be deactivated by input (Raichle et al., 2001) and is thought to be crucial in the generation of cognitive states that do not rely on sensory information for their mental content (for a review, see Andrews-Hanna et al., 2014). Tasks associated with the default mode network include those that depend on episodic memory, semantic processing and mental state attribution, as well as states of spontaneous thought studied under the rubric of mind-wandering or daydreaming (Spreng et al., 2009, Raichle, 2015), although previous literature suggests that connectivity with AG may not be driven by shared semantic processing (Humphreys et al., 2015). As many cognitive states that involve the DMN are stimulus-independent in nature, their association with ventral ATL, both in terms of functional connectivity and meta-analytic decoding, is consistent with the view that this region supports semantic processing across different input modalities and may form conceptual representations that are not tied to a specific input modality (see Margulies et al., 2016). In contrast, aSTG showed greater functional connectivity with auditory and motor regions, and its spatial map was associated with auditory processing and language tasks, rather than amodal tasks, in the meta-analytic decoding.
As discussed, the hub and spoke model (Lambon Ralph et al., 2010, Patterson et al., 2007, Rogers et al., 2004) makes novel predictions about the contribution of the ATL to amodal conceptual knowledge, but it also anticipates an important role for modality-specific ‘spoke’ regions in visual and auditory cortex, in line with many influential accounts of semantic processing (Damasio, 1989, Martin, 2007, Meteyard et al., 2012, Pulvermüller, 2013). Furthermore, the involvement of both hub and spoke regions in semantic representation has been demonstrated using TMS (Pobric et al., 2010). In line with this view, MVPA revealed regions that responded to meaning in both ventral parts of ATL (the putative ‘hub’) and in primary visual and auditory regions (putative ‘spokes’). In addition, even though the putative ‘spoke’ regions (i.e., voxels sensitive to meaning) were adjacent to areas that coded for input modality, the specific voxels that could classify meaning and input modality were largely different. These findings do not readily support traditional ‘strong’ embodied accounts that equate semantic representations with traces of perceptual/motor experience (for a review, see Meteyard et al., 2012), since such accounts would predict a greater degree of overlap between the results of the two classifiers. While our data suggest that sensory systems play a critical role in the representation of meaning, they also suggest that perceptual experience and the imagery generated as part of semantic retrieval may be distinguishable on the basis of differences in the patterns of activity in sensory cortex.
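One simple way to quantify the claim that the two classifiers recruit ‘largely different’ voxels is a Dice overlap between the two sets of significant searchlight voxels: a value near 0 indicates disjoint voxel sets, a value near 1 near-identical ones. The sketch below illustrates this with hypothetical mask files; this particular statistic is offered as an illustration rather than as the analysis reported above.

```python
# Illustrative overlap check between two classifier maps (hypothetical files):
# binary masks of voxels where searchlight accuracy was significantly above
# chance for (a) meaning and (b) input modality.
import nibabel as nib
import numpy as np

meaning_mask = nib.load("meaning_classifier_mask.nii.gz").get_fdata() > 0
modality_mask = nib.load("modality_classifier_mask.nii.gz").get_fdata() > 0

intersection = np.logical_and(meaning_mask, modality_mask).sum()
dice = 2 * intersection / (meaning_mask.sum() + modality_mask.sum())

# 'Largely different' voxel sets correspond to a Dice value near 0.
print(f"overlapping voxels: {intersection}, Dice = {dice:.3f}")
```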
One potential limitation of our study is that we did not observe evidence that aITG responds to both auditory and visual semantic features in the univariate contrasts: this site showed deactivation for both feature types, and this deactivation was greater for visual features. Thus, the strongest evidence for aITG as an amodal hub is provided by the MVPA results and the meta-analytic decoding of this region's distinct pattern of functional connectivity, not by the univariate analyses. Our design was optimised for decoding rather than univariate effects: we focused on obtaining the maximum number of blocks for MVPA and did not employ a high-level non-semantic baseline, which would have allowed us to recover semantic activation in ATL for both auditory and visual features from a contrast (Humphreys et al., 2015). Since we found that aITG responds more to auditory features (words such as “loud”) than visual features (words such as “bright”), it remains unclear whether this region represents the meanings of auditory features alone or both feature types equally. Future studies might disentangle these possibilities using a high-level baseline against which both feature types can be compared (e.g., Jackson et al., 2015).
5. Conclusion
Collectively, our findings from both pattern classification and resting-state connectivity provide converging evidence that sub-regions of the ATL support different aspects of semantic processing. Anterior ITG and MTG capture meaning independent of input modality, consistent with the fact that semantic dementia patients (who have multimodal semantic impairment) show considerable atrophy in this same region of ATL (Binney et al., 2010, Galton et al., 2001). In contrast, aSTG exhibited a degree of modality specificity: this structure, which is known to be important for understanding speech and environmental sounds, does not fulfil the criteria for an amodal semantic hub. Finally, the current results provide evidence for modality-specific ‘spoke’ regions in the vicinity of primary visual and auditory cortex (intracalcarine cortex and planum polare, respectively). However, the specific voxels that could classify each condition (presentation format and semantic feature) were largely different. These findings challenge traditional embodied accounts (Pulvermüller, 2005) that equate semantic representations with traces of perceptual/motor experience, and instead support the view that the richness of semantic cognition arises, at least in part, from abstraction away from specific input modalities in ventral regions of the anterior temporal lobe.
Funding
The research was supported by BBSRC grant BB/J006963/1. Jefferies was supported by a grant from the European Research Council (SEMBIND - 283530). The publication was part-funded by a grant from the John Templeton Foundation, “Prospective Psychology Stage 2: A Research Competition” to Martin Seligman. The opinions expressed in this publication are those of the author(s) and do not necessarily reflect the views of the John Templeton Foundation. The authors declare no competing financial interests.
Acknowledgements
We would like to thank Tim Andrews for his advice about the experimental design.
References
- Andrews-Hanna J.R., Smallwood J., Spreng R.N. The default network and self-generated thought: component processes, dynamic control, and clinical relevance. Ann. N. Y. Acad. Sci. 2014;1316:29–52. doi: 10.1111/nyas.12360.
- Bajada C.J., Haroon H.A., Azadbakht H., Parker G.J., Lambon Ralph M.A., Cloutman L.L. The tract terminations in the temporal lobe: their location and associated functions. Cortex. 2016 doi: 10.1016/j.cortex.2016.03.013.
- Baker D.H., Karapanagiotidis T., Coggan D.D., Wailes-Newson K., Smallwood J. Brain networks underlying bistable perception. NeuroImage. 2015;119:229–234. doi: 10.1016/j.neuroimage.2015.06.053.
- Barsalou L.W. Perceptual symbol systems. Behav. Brain Sci. 1999;22:577–660. doi: 10.1017/s0140525x99002149.
- Barsalou L.W. Grounded cognition. Annu. Rev. Psychol. 2008;59:617–645. doi: 10.1146/annurev.psych.59.103006.093639.
- Behzadi Y., Restom K., Liau J., Liu T.T. A component based noise correction method (CompCor) for BOLD and perfusion based fMRI. NeuroImage. 2007;37(1):90–101. doi: 10.1016/j.neuroimage.2007.04.042.
- Binder J.R., Gross W.L., Allendorfer J.B., Bonilha L., Chapin J., Edwards J.C., Grabowski T.J., Langfitt J.T., Loring D.W., Lowe M.J., Koenig K., Morgan P.S., Ojemann J.G., Rorden C., Szaflarski J.P., Tivarus M.E., Weaver K.E. Mapping anterior temporal lobe language areas with fMRI: a multicenter normative study. NeuroImage. 2011;54:1465–1475. doi: 10.1016/j.neuroimage.2010.09.048.
- Binney R.J., Embleton K.V., Jefferies E., Parker G.J.M., Lambon Ralph M.A. The ventral and inferolateral aspects of the anterior temporal lobe are crucial in semantic memory: evidence from a novel direct comparison of distortion-corrected fMRI, rTMS and semantic dementia. Cereb. Cortex. 2010;20:2728–2738. doi: 10.1093/cercor/bhq019.
- Binney R.J., Parker G.J., Lambon Ralph M.A. Convergent connectivity and graded specialization in the rostral human temporal lobe as revealed by diffusion-weighted imaging probabilistic tractography. J. Cogn. Neurosci. 2012;24(10):1998–2014. doi: 10.1162/jocn_a_00263.
- Booth J.R., Burman D.D., Meyer J.R., Gitelman D.R., Parrish T.B., Mesulam M.M. Modality independence of word comprehension. Hum. Brain Mapp. 2002;16(4):251–261. doi: 10.1002/hbm.10054.
- Brysbaert M., New B., Keuleers E. Adding part-of-speech information to the SUBTLEX-US word frequencies. Behav. Res. Methods. 2012;44(4):991–997. doi: 10.3758/s13428-012-0190-4.
- Cerf-Ducastel B., Murphy C. Validation of a stimulation protocol suited to the investigation of odor-taste interactions with fMRI. Physiol. Behav. 2004;81(3):389. doi: 10.1016/j.physbeh.2003.12.018.
- Chen Y., Shimotake A., Matsumoto R., Kunieda T., Kikuchi T., Miyamoto S., Lambon Ralph M.A. The ‘when’ and ‘where’ of semantic coding in the anterior temporal lobe: temporal representational similarity analysis of electrocorticogram data. Cortex. 2016;79:1–13. doi: 10.1016/j.cortex.2016.02.015.
- Cohen L., Jobert A., Le Bihan D., Dehaene S. Distinct unimodal and multimodal regions for word processing in the left temporal cortex. NeuroImage. 2004;23(4):1256–1270. doi: 10.1016/j.neuroimage.2004.07.052.
- Correia J., Formisano E., Valente G., Hausfeld L., Jansma B., Bonte M. Brain-based translation: fMRI decoding of spoken words in bilinguals reveals language-independent semantic representations in anterior temporal lobe. J. Neurosci. 2014;34(1):332–338. doi: 10.1523/JNEUROSCI.1302-13.2014.
- Coutanche M.N., Thompson-Schill S.L. Creating concepts from converging features in human cortex. Cereb. Cortex. 2014:bhu057. doi: 10.1093/cercor/bhu057.
- Damasio A.R. The brain binds entities and events by multiregional activation from convergence zones. Neural Comput. 1989;1:123–132.
- Davey J., Thompson H.E., Hallam G., Karapanagiotidis T., Murphy C., De Caso I., Jefferies E. Exploring the role of the posterior middle temporal gyrus in semantic cognition: integration of anterior temporal lobe with executive processes. NeuroImage. 2016 doi: 10.1016/j.neuroimage.2016.05.051.
- Devereux B.J., Clarke A., Marouchos A., Tyler L.K. Representational similarity analysis reveals commonalities and differences in the semantic processing of words and objects. J. Neurosci. 2013;33(48):18906–18916. doi: 10.1523/JNEUROSCI.3809-13.2013.
- Fairhall S.L., Caramazza A. Brain regions that represent amodal conceptual knowledge. J. Neurosci. 2013;33(25):10552–10558. doi: 10.1523/JNEUROSCI.0051-13.2013.
- Friedman L., Glover G.H., FBIRN Consortium. Reducing interscanner variability of activation in a multicenter fMRI study: controlling for signal-to-fluctuation-noise-ratio (SFNR) differences. NeuroImage. 2006;33(2):471–481. doi: 10.1016/j.neuroimage.2006.07.012.
- Galton C.J., Patterson K., Graham K., Lambon Ralph M.A., Williams G., Antoun N., Hodges J.R. Differing patterns of temporal atrophy in Alzheimer's disease and semantic dementia. Neurology. 2001;57(2):216–225. doi: 10.1212/wnl.57.2.216.
- Gonzalez J., Barros-Loscertales A., Pulvermüller F., Meseguer V., Sanjuan A., Belloch V., Avila C. Reading cinnamon activates olfactory brain regions. NeuroImage. 2006;32(2):906–912. doi: 10.1016/j.neuroimage.2006.03.037.
- Hanke M., Halchenko Y.O., Sederberg P.B., Hanson S.J., Haxby J.V., Pollmann S. PyMVPA: a Python toolbox for multivariate pattern analysis of fMRI data. Neuroinformatics. 2009;7(1):37–53. doi: 10.1007/s12021-008-9041-y.
- Hoffman P., Binney R.J., Lambon Ralph M.A. Differing contributions of inferior prefrontal and anterior temporal cortex to concrete and abstract conceptual knowledge. Cortex. 2015;63:250–266. doi: 10.1016/j.cortex.2014.09.001.
- Humphreys G.F., Hoffman P., Visser M., Binney R.J., Lambon Ralph M.A. Establishing task- and modality-dependent dissociations between the semantic and default mode networks. Proc. Natl. Acad. Sci. U.S.A. 2015;112(25):7857–7862. doi: 10.1073/pnas.1422760112.
- Hurley R.S., Bonakdarpour B., Wang X., Mesulam M.M. Asymmetric connectivity between the anterior temporal lobe and the language network. J. Cogn. Neurosci. 2015 doi: 10.1162/jocn_a_00722.
- Jackson R.L., Hoffman P., Pobric G., Lambon Ralph M.A. The nature and neural correlates of semantic association versus conceptual similarity. Cereb. Cortex. 2015;25(11):4319–4333. doi: 10.1093/cercor/bhv003.
- Jackson R.L., Hoffman P., Pobric G., Lambon Ralph M.A. The semantic network at work and rest: differential connectivity of anterior temporal lobe subregions. J. Neurosci. 2016;36(5):1490–1501. doi: 10.1523/JNEUROSCI.2999-15.2016.
- Jenkinson M., Bannister P., Brady J.M., Smith S.M. Improved optimisation for the robust and accurate linear registration and motion correction of brain images. NeuroImage. 2002;17(2):825–841. doi: 10.1016/s1053-8119(02)91132-8.
- Jobard G., Vigneau M., Mazoyer B., Tzourio-Mazoyer N. Impact of modality and linguistic complexity during reading and listening tasks. NeuroImage. 2007;34(2):784–800. doi: 10.1016/j.neuroimage.2006.06.067.
- Jung J., Cloutman L.L., Binney R.J., Lambon Ralph M.A. The structural connectivity of higher order association cortices reflects human functional brain networks. Cortex. 2016 doi: 10.1016/j.cortex.2016.08.011.
- Kamitani Y., Tong F. Decoding the visual and subjective contents of the human brain. Nat. Neurosci. 2005;8(5):679–685. doi: 10.1038/nn1444.
- Kiefer M., Pulvermüller F. Conceptual representations in mind and brain: theoretical developments, current evidence and future directions. Cortex. 2012;48(7):805–825. doi: 10.1016/j.cortex.2011.04.006.
- Krieger-Redwood K., Jefferies E., Karapanagiotidis T., Seymour R., Nunes A., Ang J.W.A., Smallwood J. Down but not out in posterior cingulate cortex: deactivation yet functional coupling with prefrontal cortex during demanding semantic cognition. NeuroImage. 2016;141:366–377. doi: 10.1016/j.neuroimage.2016.07.060.
- Kriegeskorte N., Goebel R., Bandettini P. Information-based functional brain mapping. Proc. Natl. Acad. Sci. U.S.A. 2006;103:3863–3868. doi: 10.1073/pnas.0600244103.
- Kuhl B.A., Rissman J., Chun M.M., Wagner A.D. Fidelity of neural reactivation reveals competition between memories. Proc. Natl. Acad. Sci. U.S.A. 2011;108(14):5903–5908. doi: 10.1073/pnas.1016939108.
- Kuperman V., Stadthagen-Gonzalez H., Brysbaert M. Age-of-acquisition ratings for 30,000 English words. Behav. Res. Methods. 2012;44(4):978–990. doi: 10.3758/s13428-012-0210-4.
- Lambon Ralph M.A., Sage K., Jones R., Mayberry E. Coherent concepts are computed in the anterior temporal lobes. Proc. Natl. Acad. Sci. U.S.A. 2010;107:2717–2722. doi: 10.1073/pnas.0907307107.
- Levenshtein V.I. Binary codes with correction for deletions and insertions of the symbol 1. Probl. Peredachi Informatsii. 1965;1(1):12–25.
- Lewis-Peacock J.A., Norman K.A. Multi-voxel pattern analysis of fMRI data. In: Gazzaniga M.S., editor. The Cognitive Neurosciences. 5th ed. MIT Press; Cambridge, MA: 2013. pp. 911–920.
- Man K., Damasio A., Meyer K., Kaplan J.T. Convergent and invariant object representations for sight, sound, and touch. Hum. Brain Mapp. 2015;36(9):3629–3640. doi: 10.1002/hbm.22867.
- Margulies D.S., Ghosh S.S., Goulas A., Falkiewicz M., Huntenburg J.M., Langs G., Jefferies E. Situating the default-mode network along a principal gradient of macroscale cortical organization. Proc. Natl. Acad. Sci. U.S.A. 2016;113(44):12574–12579. doi: 10.1073/pnas.1608282113.
- Martin A. The representation of object concepts in the brain. Annu. Rev. Psychol. 2007;58:25–45. doi: 10.1146/annurev.psych.57.102904.190143.
- Meteyard L., Cuadrado S.R., Bahrami B., Vigliocco G. Coming of age: a review of embodiment and the neuroscience of semantics. Cortex. 2012;48(7):788–804. doi: 10.1016/j.cortex.2010.11.002.
- Mion M., Patterson K., Acosta-Cabronero J., Pengas G., Izquierdo-Garcia D., Hong Y.T. What the left and right anterior fusiform gyri tell us about semantic memory. Brain. 2010;133:3256–3268. doi: 10.1093/brain/awq272.
- Pascual B., Masdeu J.C., Hollenbeck M., Makris N., Insausti R., Ding S.L., Dickerson B.C. Large-scale brain networks of the human left temporal pole: a functional connectivity MRI study. Cereb. Cortex. 2015;25(3):680–702. doi: 10.1093/cercor/bht260.
- Patterson K., Lambon Ralph M.A. Is a picture worth a thousand words? Evidence from concept definitions by patients with semantic dementia. Brain Lang. 1999;70(3):309–335. doi: 10.1006/brln.1999.2143.
- Patterson K., Nestor P.J., Rogers T.T. Where do you know what you know? The representation of semantic knowledge in the human brain. Nat. Rev. Neurosci. 2007;8(12):976–987. doi: 10.1038/nrn2277.
- Peelen M.V., Caramazza A. Conceptual object representations in human anterior temporal cortex. J. Neurosci. 2012;32(45):15728–15736. doi: 10.1523/JNEUROSCI.1953-12.2012.
- Pobric G., Jefferies E., Lambon Ralph M.A. Category-specific versus category-general semantic impairment induced by transcranial magnetic stimulation. Curr. Biol. 2010;20(10):964–968. doi: 10.1016/j.cub.2010.03.070.
- Postle N., McMahon K.L., Ashton R., Meredith M., de Zubicaray G.I. Action word meaning representations in cytoarchitectonically defined primary and premotor cortices. NeuroImage. 2008;43:634–644. doi: 10.1016/j.neuroimage.2008.08.006.
- Pulvermüller F. Brain mechanisms linking language and action. Nat. Rev. Neurosci. 2005;6:576–582. doi: 10.1038/nrn1706.
- Pulvermüller F. How neurons make meaning: brain mechanisms for embodied and abstract-symbolic semantics. Trends Cogn. Sci. 2013;17(9):458–470. doi: 10.1016/j.tics.2013.06.004.
- Raichle M.E., MacLeod A.M., Snyder A.Z., Powers W.J., Gusnard D.A., Shulman G.L. A default mode of brain function. Proc. Natl. Acad. Sci. U.S.A. 2001;98(2):676–682. doi: 10.1073/pnas.98.2.676.
- Raichle M.E. The brain's default mode network. Annu. Rev. Neurosci. 2015;38:433–447. doi: 10.1146/annurev-neuro-071013-014030.
- Rice G.E., Lambon Ralph M.A., Hoffman P. The roles of left versus right anterior temporal lobes in conceptual knowledge: an ALE meta-analysis of 97 functional neuroimaging studies. Cereb. Cortex. 26(2):1–18.
- Rogers T.T., Lambon Ralph M.A., Garrard P., Bozeat S., McClelland J.L., Hodges J.R., Patterson K. Structure and deterioration of semantic memory: a neuropsychological and computational investigation. Psychol. Rev. 2004;111(1):205–235. doi: 10.1037/0033-295X.111.1.205.
- Rogers T.T., Patterson K., Jefferies E., Lambon Ralph M.A. Disorders of representation and control in semantic cognition: effects of familiarity, typicality and specificity. Neuropsychologia. 2015;76:220–239. doi: 10.1016/j.neuropsychologia.2015.04.015.
- Rueschemeyer S.-A., Brass M., Friederici A.D. Comprehending prehending: neural correlates of processing verbs with motor stems. J. Cogn. Neurosci. 2007;19(5):855–865. doi: 10.1162/jocn.2007.19.5.855.
- Rueschemeyer S.-A., van Rooij D., Lindemann O., Willems R.M., Bekkering H. The function of words: distinct neural correlates for words denoting differently manipulable objects. J. Cogn. Neurosci. 2010;22(8):1844–1851. doi: 10.1162/jocn.2009.21310.
- Sanjuán A., Hope T.M., Jones O.P., Prejawa S., Oberhuber M., Guerin J., Price C.J. Dissociating the semantic function of two neighbouring subregions in the left lateral anterior temporal lobe. Neuropsychologia. 2015;76:153–162. doi: 10.1016/j.neuropsychologia.2014.12.004.
- Scott S.K., Blank C.C., Rosen S., Wise R.J. Identification of a pathway for intelligible speech in the left temporal lobe. Brain. 2000;123(12):2400–2406. doi: 10.1093/brain/123.12.2400.
- Scott S.K., Johnsrude I.S. The neuroanatomical and functional organization of speech perception. Trends Neurosci. 2003;26(2):100–107. doi: 10.1016/S0166-2236(02)00037-1.
- Scott S.K., Leff A.P., Wise R.J. Going beyond the information given: a neural system supporting semantic interpretation. NeuroImage. 2003;19(3):870–876. doi: 10.1016/s1053-8119(03)00083-1.
- Smallwood J., Karapanagiotidis T., Ruby F., Medea B., de Caso I., Konishi M., Jefferies E. Representing representation: integration between the temporal lobe and the posterior cingulate influences the content and form of spontaneous thought. PLoS One. 2016;11(4):e0152272. doi: 10.1371/journal.pone.0152272.
- Smith S.M. Fast robust automated brain extraction. Hum. Brain Mapp. 2002;17(3):143–155. doi: 10.1002/hbm.10062.
- Spitsyna G., Warren J.E., Scott S.K., Turkheimer F.E., Wise R.J. Converging language streams in the human temporal lobe. J. Neurosci. 2006;26(28):7328–7336. doi: 10.1523/JNEUROSCI.0559-06.2006.
- Spreng R.N., Mar R.A., Kim A.S. The common neural basis of autobiographical memory, prospection, navigation, theory of mind, and the default mode: a quantitative meta-analysis. J. Cogn. Neurosci. 2009;21(3):489–510. doi: 10.1162/jocn.2008.21029.
- Visser M., Embleton K.V., Jefferies E., Parker G.J.M., Lambon Ralph M.A. The anterior temporal lobes and semantic memory clarified: novel evidence from distortion-corrected spin-echo EPI fMRI. Neuropsychologia. 2010;48:1689–1696. doi: 10.1016/j.neuropsychologia.2010.02.016.
- Visser M., Jefferies E., Embleton K.V., Lambon Ralph M.A. Both the middle temporal gyrus and the ventral anterior temporal area are crucial for multimodal semantic processing: distortion-corrected fMRI evidence for a double gradient of information convergence in the temporal lobes. J. Cogn. Neurosci. 2012;24:1766–1778. doi: 10.1162/jocn_a_00244.
- Visser M., Lambon Ralph M.A. Differential contributions of bilateral ventral anterior temporal lobe and left anterior superior temporal gyrus to semantic processes. J. Cogn. Neurosci. 2011;23(10):3121–3131. doi: 10.1162/jocn_a_00007.
- Worsley K.J. Statistical analysis of activation images. In: Jezzard P., Matthews P.M., Smith S.M., editors. Functional MRI: An Introduction to Methods. Oxford University Press; Oxford: 2001. pp. 251–270.
- Yarkoni T., Poldrack R.A., Nichols T.E., Van Essen D.C., Wager T.D. Large-scale automated synthesis of human functional neuroimaging data. Nat. Methods. 2011;8:665–670. doi: 10.1038/nmeth.1635.
- Yeo B.T.T., Krienen F.M., Chee M.W.L., Buckner R.L. Estimates of segregation and overlap of functional connectivity networks in the human cerebral cortex. NeuroImage. 2014;88:212–227. doi: 10.1016/j.neuroimage.2013.10.046.
- Yeo B.T.T., Krienen F.M., Sepulcre J., Sabuncu M.R., Lashkari D., Hollinshead M., Fischl B. The organization of the human cerebral cortex estimated by intrinsic functional connectivity. J. Neurophysiol. 2011;106(3):1125–1165. doi: 10.1152/jn.00338.2011.
- Zhang Y., Brady M., Smith S. Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm. IEEE Trans. Med. Imaging. 2001;20(1):45–57. doi: 10.1109/42.906424.