Abstract
Reading, naming, and repetition are classical neuropsychological tasks widely used in the clinic and psycholinguistic research. While reading and repetition can be accomplished by following a direct or an indirect route, pictures can be named only by means of semantic mediation. By means of fMRI multivariate pattern analysis, we evaluated whether this well‐established fundamental difference at the cognitive level is associated at the brain level with a difference in the degree to which semantic representations are activated during these tasks. Semantic similarity between words was estimated based on a word association model. Twenty subjects participated in an event‐related fMRI study where the three tasks were presented in pseudo‐random order. Linear discriminant analysis of fMRI patterns identified a set of regions that allow discrimination between words at a high level of word specificity across tasks. Representational similarity analysis was used to determine whether semantic similarity was represented in these regions and whether this depended on the task performed. The similarity between neural patterns of the left Brodmann area 45 (BA45) and of the superior portion of the left supramarginal gyrus correlated with the similarity in meaning between entities during picture naming. In both regions, no significant effects were seen for repetition or reading. The semantic similarity effect during picture naming was significantly larger than the similarity effect during the two other tasks. In contrast, several regions including left anterior superior temporal gyrus and left ventral BA44/frontal operculum, among others, coded for semantic similarity in a task‐independent manner. These findings provide new evidence for the dynamic, task‐dependent nature of semantic representations in the left BA45 and a more task‐independent nature of the representational activation in the lateral temporal cortex and ventral BA44/frontal operculum.
Keywords: fMRI, IFG, MVPA, RSA, semantic similarity, STG
The current multivariate pattern analysis (MVPA) study revealed a neurobiological basis for the distinction between task‐dependent and task‐independent retrieval of word meaning. Whereas the left BA45 and supramarginal gyrus showed a semantic similarity effect during picture naming only, the anterior STG and ventral BA44/frontal operculum, among other regions, showed a task‐independent semantic similarity effect.
1. INTRODUCTION
The representation of the meaning of words in the brain has been intensively studied over the past decade. One of the prevailing methods, representational similarity analysis, has revealed representations of the meaning of words in a distributed set of regions, including but not limited to the classical perisylvian language neocortex, a neural loop involved in language processing containing pars triangularis and opercularis of the inferior frontal gyrus in the language dominant hemisphere, the supramarginal gyrus and the superior and middle temporal gyrus (Catani et al., 2005; Devereux et al., 2013; Fairhall & Caramazza, 2013; Fernandino et al., 2022; Liuzzi et al., 2015, 2017, 2019, 2020, 2023; Martin et al., 2018; Xiang et al., 2010). In everyday life, words are used for various purposes and in different contexts. This task‐ and context‐dependency has until now been mostly phrased as a dichotomy between regions involved in semantic representations versus semantic control (Jefferies et al., 2020; Lambon Ralph et al., 2016). How context and task affect semantic representations has been studied in only a handful of studies (Aglinskas & Fairhall, 2023; Gao et al., 2022; Liuzzi et al., 2019, 2020, 2021, 2023; Meersmans et al., 2022; Wang et al., 2018). Here we examined how the representation of word meaning in the brain changed with the task performed, using three canonical tasks widely used clinically: reading, repetition, and confrontation naming. While each of these tasks is an essential component of conventional neurolinguistic assessment of patients, picture naming is the most frequently used (Hamberger, 2007). Anomia is a defining feature of aphasia. For instance, the picture naming task is widely used during functional language mapping in patients with left temporal lobe epilepsy in order to best protect against postoperative aphasia (Aron et al., 2022; Binder et al., 2020; Hamberger, 2007).
As a further example, semantic dementia, which affects predominantly the ventral, lateral, and medial part of the anterior temporal pole, is characterized by anomia as well as surface dyslexia with relative preservation of repetition, and a progressive decline in semantic memory. The role of the anterior temporal cortex in picture naming has been confirmed in other populations using different techniques (Chen et al., 2016; Shimotake et al., 2015; Woollams et al., 2007). The anterior temporal cortex is connected with the left‐lateralized frontal regions involved in speech production (Hurley et al., 2015; Lambon Ralph et al., 2001; Rice et al., 2015; Schapiro et al., 2013; Weiller et al., 2021).
According to several cognitive models (Coltheart et al., 1993; Hickok & Poeppel, 2007; Jobard et al., 2003), reading and repetition of single words can be accomplished by two parallel routes: a sublexical or indirect route and a lexical or direct route. Unlike single‐word reading and repetition, naming a picture can be accomplished only by following a direct route, that is by means of semantic mediation (Coltheart et al., 1993; Hillis & Caramazza, 1991). Prior to articulation, naming a picture requires the recognition of the visual stimulus and access to visual‐semantic information associated with the picture. Next, this complex semantic representation has to be mapped onto a word form representation. The latter process is mediated by a “lexical‐semantics” or “defining features” mechanism that consists of defining a subset of semantic information for determining “whether or not a picture depicts an instance of the referent of a particular word” (DeLeon et al., 2007). This then serves as input to a name retrieval process, which consists of the selection of a name among semantic competitors (de Zubicaray & McMahon, 2009; Mahon et al., 2007; Schnur et al., 2009). Finally, phonological codes are integrated to encode a phonological word form. Given the different roles that semantic processing fulfills depending on the task, we examined whether the semantic representations in brain activity patterns also differed at a cognitive level between these tasks.
As mentioned above, naming a picture requires a semantic processing step (Hillis & Caramazza, 1991). A left‐lateralized network including the pars orbitalis of the IFG and ventral occipitotemporal regions, among other regions, has been shown to be sensitive to picture naming as well as reading (Bookheimer et al., 1995; Mechelli et al., 2007; Tyler et al., 2013). During reading (Coltheart et al., 1993; Jobard et al., 2003), an indirect (via grapheme‐to‐phoneme mapping) and a direct (via semantic mediation) route have been distinguished functionally and neuroanatomically. For auditory word recognition, an anterior‐directed hierarchical account of the ventral stream has been hypothesized (Binder, 2000; Cohen et al., 2004; DeWitt & Rauschecker, 2012), with an anatomical dissociation between the middle superior temporal gyrus (STG) involved in phonemic processing, and the anterior STG involved in auditory word recognition per se (DeWitt & Rauschecker, 2012). Notably, the effect of auditory word recognition extends into the subjacent anterior STS (Giraud & Price, 2001; Scott, 2000; for a review, see Price (2012)). Importantly, the brain regions implicated in these different tasks (naming, reading, and repetition) may not necessarily code semantic information per se. Their activation may be attributable to other processes such as semantic control or nonsemantic language functions. Drawing on prior computational models of language and memory (Dell et al., 1997; Plaut et al., 1996; Rogers et al., 2004; Woollams et al., 2007), the neuroanatomically constrained dual‐pathway language model (Ueno et al., 2011) postulates that speaking, naming, and repetition are supported by the interactive contribution of the dorsal (Geschwind, 1970; Lichtheim, 1885) and ventral language pathways (Weiller et al., 2011).
In this model, the task performed, comprehension versus speaking/naming, alters semantic similarity in a hidden layer that may correspond to the anterior superior temporal region (Ueno et al., 2011). Here we specifically examined how brain regions associated with these tasks differentially activate semantic representations in the perisylvian language network.
The majority of prior neuroimaging studies of naming, reading, and repetition relied on a univariate approach (but see also Chen et al. (2016); Shimotake et al. (2015)). Accordingly, it is still unknown which brain regions are not only activated during the different tasks but also code for semantic content, and whether this semantic coding is task‐specific. In the current multivariate pattern analysis (MVPA) study, we investigated where activity patterns that allow for highly specific discrimination between individual words show semantic similarity effects, and how these depend on task. There are three possible scenarios. First, brain regions sensitive to word‐specific information that is general to all tasks may not code for semantic information. Second, brain regions sensitive to word‐specific information may code for semantic information in a task‐dependent manner; in that case, the semantic representation is triggered by only a specific task. Third, brain regions sensitive to word‐specific information may code for semantic information in a task‐independent manner.
During fMRI, subjects were required to read a word, name a picture, or repeat a spoken word. To ensure that subjects were engaged in the task, in half of the trials they were required to overtly pronounce the name of the entity. Naming, reading, and repetition trials were sensorially matched by the simultaneous presentation of sensory control stimuli along with the target stimuli (Figure 1). First, by means of decoding MVPA, we identified a brain network that accurately discriminates between concrete words independently of the task by which these entities are accessed. Given that the tasks imply three different input modalities by which these words are accessed (written words, spoken words, and pictures), this approach allowed us to select regions that code for word‐specific information across tasks and modalities. Then, we computed the representational similarity analysis (RSA) for each component of this network in order to reveal brain regions whose neural activity is sensitive to the similarity in meaning between entities and how this depends on the task (reading, naming, or repetition).
FIGURE 1.
Task. Subjects performed three tasks: (a) Reading (written modality), (b) naming (pictures), and (c) repetition (spoken modality). The word, or the picture, was followed by a red or green fixation point indicating whether the subject was required to pronounce the word aloud (green) or not (red).
The primary analysis was conducted in two steps. First, we isolated regions where activity patterns allow for classification at the individual word level regardless of how the words are accessed (the task adopted) using linear discriminant analysis (LDA). These are regions that code for words at a high level of individual word specificity. Importantly, since LDA was performed across the three tasks and across the visual and auditory modalities, classification was based on word‐specific information that is general to all tasks. This allows us to rule out effects of task‐ or modality‐specific surface features. Next, in these regions with high coding specificity, we applied RSA to examine whether the activity patterns represented the conceptual similarity of the entities being encoded and how this representation differed between the three tasks. The distinction between LDA and RSA is equivalent to the classical distinction between classification and regression. Classification sheds light on the algorithm's performance at the individual word level. RSA determines the conformity between neural and computational representational space.
2. SUBJECTS AND METHODS
2.1. Participants
Twenty subjects between 18 and 28 years old participated in this fMRI experiment. All subjects were native Dutch speakers, right‐handed, free of neurological or psychiatric history, and had normal hearing and vision. Subjects provided written informed consent. All the procedures were approved by the Ethics Committee of the University Hospital of Leuven.
2.2. Task
Subjects performed three tasks, each of them relying on a different input modality: single‐word reading (written words), picture naming, and single‐word repetition (spoken words) (Figure 1). Each trial began with a blue fixation point (duration 500 ms), followed by the presentation of an animate entity (duration 1100 ms) as either a written word (Figure 1a), a picture (Figure 1b), or a spoken word (Figure 1c). This was immediately followed by a red or green fixation point (duration 3000 ms) indicating whether the subject was required to pronounce the word aloud (green) or not (red). Finally, a white fixation point was shown for 3900 ms. The total inter‐trial interval was 8500 ms. Each trial made parallel use of three input channels: a written, an auditory, and a picture channel. In each trial, the entity was presented in only one input channel and it was synchronized with primary sensory control stimuli in the two remaining input‐modality channels (Figure 1). In the written modality, these control stimuli consisted of consonant letter strings, which were obtained by replacing vowels with consonants and randomizing the characters' order. In the picture modality, these were obtained through diffeomorphic transformations, which preserve the perceptual properties of the image while removing its meaning (Stojanoski & Cusack, 2014). In the spoken modality, the control stimuli consisted of rotated spectrograms (Scott, 2000), which abolish the recognizability of the auditory words while keeping the physical features closely matched to the auditory words.
For written words and consonant letter strings, we kept the total string length at 12 characters by centering the word and adding “x+” or “+x” before and after. Written words were presented with a letter size of 0.7 visual degrees. Written words varied in length between 3 and 9 characters. Each picture (size of 5.1 × 5.1 visual degrees) was presented as a prototypical color photo. Pictures were obtained from a standard picture library (Hemera Photo‐Object 5000).
2.3. Entities
The stimulus set consisted of 24 animate entities (Figure 2). Entities were derived from the concept‐feature matrices collected by De Deyne and Storms (2008), which were based on feature verification for 120 animate entities. From the concept–feature matrix (De Deyne & Storms, 2008), the pairwise cosine dissimilarity was calculated for each pair of entities (semantic matrix). The entities were selected from a pool of 120 concepts in a semi‐automated manner such that the standard deviation of the pairwise semantic similarity was maximized and the correlation between the semantic and phonological similarity was minimized. The wider the range in pairwise semantic distances, the higher the sensitivity for detecting semantic effects through RSA (see below). The lower the correlation between the semantic and phonological matrix, the lower the chance that semantic similarity effects are confounded by phonological similarity. Entities for which there was no phonological transcription (19 out of 129 entities; Section 2.7) were excluded from the pool from which the stimuli were selected.
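The semi‐automated selection procedure above can be sketched as follows. This is an illustrative approximation, not the authors' actual implementation: the scoring function (semantic spread minus absolute semantic–phonological correlation) and the random‐restart search are assumptions, and the function name is hypothetical.

```python
import numpy as np

def select_entities(sem_dist, phon_dist, k=24, n_restarts=200, seed=0):
    """Illustrative search for a k-item subset that maximizes the spread
    (SD) of pairwise semantic distances while penalizing the correlation
    between semantic and phonological distances. `sem_dist` and
    `phon_dist` are full n x n symmetric distance matrices."""
    rng = np.random.default_rng(seed)
    n = sem_dist.shape[0]
    iu = np.triu_indices(k, 1)  # upper-triangle indices of a k x k matrix

    def score(idx):
        s = sem_dist[np.ix_(idx, idx)][iu]
        p = phon_dist[np.ix_(idx, idx)][iu]
        # Reward a wide range of semantic distances, penalize confounding
        return s.std() - abs(np.corrcoef(s, p)[0, 1])

    best_idx, best_score = None, -np.inf
    for _ in range(n_restarts):
        idx = rng.choice(n, size=k, replace=False)
        sc = score(idx)
        if sc > best_score:
            best_idx, best_score = idx, sc
    return np.sort(best_idx)
```

In practice one would seed this with the 120‐concept feature‐based distance matrices described above; a greedy swap search could replace the random restarts.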
FIGURE 2.
Multidimensional scaling (MDS). Visual representation of the semantic clusters (indicated by different colors) and semantic distances between animate entities, based on the feature generation data collected by De Deyne and Storms (2008). For visualization, data reduction of the similarity matrix to two dimensions was performed by means of MDS.
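The two‐dimensional reduction used for this figure can be reproduced along the following lines; scikit‐learn's `MDS` is an assumption here, since the authors' visualization tooling is not specified.

```python
import numpy as np
from sklearn.manifold import MDS

def mds_2d(dissim, seed=0):
    """Reduce a pairwise dissimilarity matrix to 2-D coordinates
    suitable for a scatter plot of the entities (as in Figure 2)."""
    mds = MDS(n_components=2, dissimilarity="precomputed", random_state=seed)
    return mds.fit_transform(np.asarray(dissim, dtype=float))
```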
Log‐transformed word frequency of the lemma counts from the Dutch version of the CELEX database (Baayen et al., 1995) was between 1.49 and 5.21, age of noun acquisition between 4 and 9.21 years, and familiarity between 2.43 and 4.07 (on a 7‐point Likert‐type scale (De Deyne & Storms, 2008)).
2.4. Number of trials and runs
The study had two factors: type of task (three levels: reading of the written word, naming of the picture, and repetition of the spoken word) and response type (two levels: overt and covert). The entire fMRI experiment consisted of 8 runs. Each run (306 volumes) was composed of 72 trials: 12 trials of each of the six trial types. Across runs, the same entity was presented 8 times as written word, 8 times as picture, and 8 times as spoken word (i.e., once per modality, per run). Spoken responses were recorded via a noise‐canceling MRI microphone (FOMRI‐III Optoacoustics). For technical reasons, no responses were recorded for the first six subjects; responses were nevertheless given, so these six subjects were included in the analysis. Three participants exhibited excessive head motion (displacement >1 mm) in one of the eight runs; the affected run was removed from the analysis for each of them. Before performing the fMRI experiment, all subjects performed a practice run outside the MRI scanner, using stimuli other than the ones used during the experiment.
2.5. Image acquisition
A Philips Achieva dstream 3 T scanner equipped with a 32‐channel head volume coil provided functional and structural images. Structural imaging sequences consisted of a T1‐weighted 3D turbo‐field‐echo sequence acquired as coronal slices (repetition time = 9.6 ms, echo time = 4.6 ms, in‐plane resolution = 0.97 mm, slice thickness = 1.2 mm). Functional images were obtained using T2* echoplanar images comprising 36 transverse slices [repetition time = 2 s, echo time = 30 ms, voxel size 2.75 × 2.75 × 3.75 mm3, Sensitivity Encoding (SENSE) factor = 2], with the field of view (FOV; 220 × 220 × 135 mm3) covering the entire brain. Each run was preceded by 10 dummy scans, allowing the magnetization to reach a steady state before the BOLD acquisition. Furthermore, the EPI noise generated during the dummy scans is learned by the noise‐cancellation software, which builds a model for eliminating the scanner noise (FOMRI‐III Optoacoustics).
2.6. Image preprocessing
The images were analyzed with Statistical Parametric Mapping SPM12 (Wellcome Trust Centre for Neuroimaging, University College London, UK). Functional images were realigned and a mean functional image was created. Scans were corrected for slice acquisition time. The structural image was co‐registered with the mean functional image and segmented into gray matter, white matter, and cerebrospinal fluid. Based on the deformation field obtained during the segmentation step, the functional images were normalized to the Montreal Neurological Institute (MNI) space and resliced to a voxel size of 3 × 3 × 3 mm3 (Friston et al., 1995). The deformation field was also applied to the structural image and its segmentations. The structural image was resliced to a voxel size of 1 × 1 × 1 mm3. A standard SPM12 modeling was adopted where estimated motion parameters from the realignment procedure were included as covariates of no interest.
2.7. Behavioral models
Three models were adopted in this study: a semantic similarity model, a phonological similarity model, and a visuoperceptual similarity model.
The semantic similarity model was based on the Small World of Words word association data in Dutch (De Deyne et al., 2013). This dataset consisted of free associations from more than 70,000 participants to more than 12,000 cue words. Association strength was first weighted using point‐wise mutual information. Following De Deyne et al. (2016), a random walk procedure similar to the Katz index (Katz, 1953) was used to extract a dense graph that included both direct and indirect paths (Liuzzi et al., 2019). Semantic similarity between each pair of words was obtained by calculating the cosine similarity between the distributions derived by a random walk algorithm run over the graph using each stimulus word as seed. This association‐based model correlated significantly (rho = 0.76; p < .001) with the feature‐based model (De Deyne & Storms, 2008) from which the stimulus set was derived.
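The random‐walk step can be sketched as follows. The decay parameter and walk length are illustrative assumptions (the Katz‐style accumulation over path lengths is the key idea); the PMI weighting of the raw association counts is assumed to have been applied to `adj` beforehand.

```python
import numpy as np

def katz_walk_similarity(adj, alpha=0.75, max_steps=5):
    """Sketch of a Katz-style random walk over a word-association graph.
    `adj` is an n x n matrix of (PMI-weighted) association strengths.
    Returns the cosine similarity between the walk-based distributions,
    one distribution per seed word."""
    # Row-normalize to obtain a transition matrix
    P = adj / adj.sum(axis=1, keepdims=True)
    # Accumulate direct and indirect paths, down-weighting longer walks
    G = np.zeros_like(P)
    Pk = np.eye(len(P))
    for k in range(1, max_steps + 1):
        Pk = Pk @ P
        G += alpha ** k * Pk
    # Cosine similarity between the rows of the accumulated graph
    Gn = G / np.linalg.norm(G, axis=1, keepdims=True)
    return Gn @ Gn.T
```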
A phonological similarity model was obtained by calculating the Levenshtein distances (Levenshtein, 1966) between each pair of words. Distances were normalized on the length of the shortest alignment (Heeringa, 2004). The Levenshtein distance (Levenshtein, 1966) corresponds to the minimal number of steps necessary to transform a string into another string by substitutions, insertions, or deletions. The phonological transcriptions for the stimuli were obtained from the CELEX lexical database (Baayen et al., 1995).
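The Levenshtein computation is standard dynamic programming; a minimal version follows. Note one simplification: the normalization here divides by the length of the longer string, which equals the length of the shortest possible alignment only when that alignment contains no indel pairs, so it approximates rather than reproduces the Heeringa (2004) normalization.

```python
def levenshtein(a, b):
    """Minimal number of substitutions, insertions, or deletions
    needed to transform string `a` into string `b`."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # delete all of a[:i]
    for j in range(n + 1):
        d[0][j] = j  # insert all of b[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution/match
    return d[m][n]

def normalized_levenshtein(a, b):
    """Distance normalized by the longer string's length (see caveat)."""
    return levenshtein(a, b) / max(len(a), len(b))
```

Applied to phonological transcriptions rather than orthography, this yields the pairwise phonological distance matrix.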
The visuoperceptual similarity models were based on the AlexNet deep convolutional neural network (Krizhevsky et al., 2017). The pictures of the 24 animate entities used in the study were provided as input to the network. The network consists of eight layers: the first five convolutional layers are mostly determined by low‐level visuoperceptual features, whereas the last three fully connected layers are driven by higher‐order perceptual features and partly represent the categorical labels assigned to pictures during learning. For each picture, the activity pattern in each of the eight layers was extracted. Next, eight similarity matrices (one per layer) were obtained by computing, for each pair of pictures, the cosine similarity between the corresponding activity vectors.
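The final step, one cosine‐similarity matrix per layer, is sketched below. Extracting the activations themselves would require running the pictures through a pretrained AlexNet (e.g., via torchvision forward hooks) and is omitted; the function name and the dict‐of‐arrays input format are illustrative assumptions.

```python
import numpy as np

def layer_similarity_matrices(activations):
    """Given per-layer activation vectors for a set of pictures
    (dict: layer name -> n_pictures x n_units array), return one
    n_pictures x n_pictures cosine-similarity matrix per layer."""
    sims = {}
    for layer, acts in activations.items():
        a = acts / np.linalg.norm(acts, axis=1, keepdims=True)
        sims[layer] = a @ a.T
    return sims
```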
In order to test for the relationship between the semantic model and the phonological model and between the semantic model and the visuoperceptual model, the semantic similarity model was transformed into a dissimilarity model by subtracting each cosine similarity value from 1. A correlation analysis was performed between the resulting matrix and the phonological and visuoperceptual distance matrices, respectively. No relationship was detected between the semantic model and the phonological model (rho = −0.0014, p = .98). A significant correlation was detected between the semantic model and each of the three fully connected layers of the AlexNet model (layer 6: rho = 0.18, p = .003; layer 7: rho = 0.4, p < .001; layer 8: rho = 0.5, p < .001). This is in agreement with the fact that the supervised learning training phase of the AlexNet model makes use of picture labels.
2.8. Statistical analysis
2.8.1. Whole‐brain searchlight multivariate pattern analysis
MVPA was performed in order to identify brain regions where the activity pattern allows discrimination between individual entities regardless of the type of task. This will be referred to as a “cross‐task MVPA.”
First, normalized unsmoothed data were modeled using a general linear model (GLM) with 72 conditions: 24 animate entities for the reading task (12 overt and 12 covert trials), 24 animate entities for the naming task (12 overt and 12 covert trials), and 24 animate entities for the repetition task (12 overt and 12 covert trials). Next, subject‐specific β weights were used as input for a whole‐brain searchlight MVPA with LDA as a classifier, as implemented in COSMO (http://www.cosmomvpa.org/; Oosterhof et al., 2016). The classifier was trained on two of the tasks (e.g., naming and repetition) and tested on the remaining task (e.g., reading). The classifier was trained for accurate discrimination of the entity based on the activity patterns. By means of the Euclidean distance, LDA tests whether the test vectors are closer to the training vectors of the same entities compared to training vectors of any of the other entities. For each voxel of the brain, a spherical searchlight volume composed of the 150 voxels (radius 9.8 mm) nearest to the center voxel was defined. Subject‐specific classification accuracy maps were obtained by performing a cross‐validated leave‐one‐out classification between 24 concrete animate entities. This approach was chosen in analogy with previous studies (Fairhall, 2020; Liuzzi et al., 2021). For each task (three in total), each run (out of eight runs in total) was used as test dataset. A total of 24 iterations (3 tasks × 8 runs) were then performed. During the testing phase of each iteration, the classifier was required to classify the 24 animate entities. Accordingly, each iteration produced an accuracy value, which reflects the classifier's performance. For each searchlight, the 24 accuracy scores derived from the 24 iterations were averaged and the resulting accuracy value was summarized at the center voxel of the sphere, after subtracting chance level (1/24).
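A simplified sketch of the cross‐task scheme (train on two tasks, test on the held‐out one) is given below. It uses scikit‐learn's LDA on whole pattern matrices rather than CoSMoMVPA's Euclidean‐distance LDA within a 150‐voxel searchlight, and it omits the leave‐one‐run‐out dimension, so it is an approximation of the scheme rather than the study's code.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def cross_task_accuracy(betas, tasks, labels):
    """Train an LDA classifier on the beta patterns of two tasks and
    test on the held-out task, averaging accuracy over the three splits.
    betas: n_samples x n_voxels; tasks, labels: length-n_samples arrays."""
    accs = []
    for held_out in np.unique(tasks):
        train = tasks != held_out  # never train and test on the same task
        clf = LinearDiscriminantAnalysis()
        clf.fit(betas[train], labels[train])
        accs.append(clf.score(betas[~train], labels[~train]))
    return float(np.mean(accs))
```

Because the held‐out task never appears in training, above‐chance accuracy can only rest on entity information shared across tasks, which is the logic of the cross‐task MVPA.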
Importantly, as the same task was never used during both training and testing, above chance discrimination accuracy was based uniquely on shared information among tasks. Subject‐specific accuracy maps were smoothed with a 6 × 6 × 6 mm3 FWHM kernel (Clarke & Tyler, 2014; Devereux et al., 2013; Liuzzi et al., 2020; Simanova et al., 2014) and entered into a random effects analysis in SPM12 using a one‐sample t test. Significance was set at a cluster‐level inference thresholded at whole‐brain family‐wise error (FWE)‐corrected p < .05 (with the voxel‐level threshold set at uncorrected p < .001).
2.8.2. Volume‐of‐interest definition
LDA identified a set of brain regions where the activity pattern allowed for accurate discrimination between entities. Next, we evaluated whether these regions also coded for a semantic representation of these entities. From the group‐level LDA analysis, we extracted all significant local maxima more than 20 mm apart. For each local maximum, a sphere was created with a radius of 16 mm (~600 voxels). Each sphere was intersected with the group‐level LDA accuracy map. This excluded voxels from the volume‐of‐interest (VOI) where no significance was obtained in the searchlight LDA. Next, subject‐specific VOI regions were obtained by intersecting the resulting VOI with the individual gray matter mask (gray matter probability threshold >0.3). Finally, only VOIs with an average size across participants larger than 50 voxels were retained. These VOIs were then used for RSA.
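The sphere construction itself is simple geometry; a minimal sketch over voxel coordinates in mm follows (the subsequent intersections with the accuracy map and the gray matter mask would just be boolean masks over the same index set).

```python
import numpy as np

def sphere_voi(coords, peak, radius=16.0):
    """Return indices of voxels (coords: n x 3 array of mm coordinates)
    lying within `radius` mm of a local maximum `peak`, as in the
    16-mm sphere VOI definition."""
    d = np.linalg.norm(coords - np.asarray(peak, dtype=float), axis=1)
    return np.flatnonzero(d <= radius)
```

With a 3‐mm isotropic grid, a 16‐mm sphere contains roughly 600 voxels, matching the figure quoted above.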
2.8.3. Representational similarity analysis
RSA is a second‐order similarity technique that performs a direct comparison between two representational spaces, one derived from brain data and the other derived from behavioral or computational data. The RSA consists of computing a correlation between two matrices derived from brain data and behavioral or computational data, respectively. A significant correlation between the fMRI matrix and the model indicates that the brain region from which the fMRI matrix is derived represents the similarity in meaning between entities. Importantly, this semantic representation is abstracted away from any linguistic or visual form.
In this study, RSA was conducted for a dual purpose: (1) to determine whether specific brain regions able to distinguish between entities regardless of the type of task code for the similarity in meaning between entities represented by an association‐based model and (2) to determine whether such semantic similarity effect was task‐dependent. For the sake of completeness and in order to address whether the semantic similarity effects changed as a function of the model adopted, the RSA was re‐computed using the concept‐feature matrix (De Deyne & Storms, 2008)—which was adopted for generating the stimulus set—as a model.
For each subject and for each VOI, four fMRI similarity matrices were created: one for all tasks pooled and one for each of the three tasks separately. For each task and for each repetition of an entity, a 24 × 24 fMRI similarity matrix was obtained by extracting the 24 β weight patterns related to the 24 animate entities (a 24 × number‐of‐voxels matrix) and computing a pair‐wise correlation. Next, a subject‐specific fMRI similarity matrix was obtained by averaging the 24 × 24 fMRI similarity matrices across repetitions of the same entity for the task‐specific fMRI matrices, and across repetitions of the same entity and of tasks for the pooled fMRI matrix. Subject‐specific fMRI similarity matrices were converted into distance matrices (1‐correlation) and compared with the semantic dissimilarity model (Spearman correlation). Only above‐diagonal elements were taken into account. Significance of the results was obtained by means of a one‐sided Wilcoxon signed‐rank test.
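The core RSA computation described above reduces to a few lines; the sketch below shows the neural distance matrix (1 − Pearson correlation of patterns), the Spearman comparison with the model over the upper triangle, and the group‐level one‐sided Wilcoxon test. Function names are illustrative.

```python
import numpy as np
from scipy.stats import spearmanr, wilcoxon

def rsa_effect(betas, model_dissim):
    """betas: n_entities x n_voxels average patterns for one VOI/task.
    model_dissim: n_entities x n_entities semantic dissimilarity model.
    Returns the Spearman correlation between the neural distance matrix
    (1 - Pearson correlation of patterns) and the model, computed over
    the above-diagonal elements only."""
    neural_dissim = 1.0 - np.corrcoef(betas)
    iu = np.triu_indices(len(betas), 1)
    rho, _ = spearmanr(neural_dissim[iu], model_dissim[iu])
    return rho

def group_significance(subject_rhos):
    """One-sided Wilcoxon signed-rank test across subjects' correlations."""
    _, p = wilcoxon(subject_rhos, alternative="greater")
    return p
```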
For each VOI, we tested the presence of a task‐specific semantic similarity effect. This effect had to fulfill two criteria: (1) the presence of a significant semantic similarity effect for one task and (2) a significant difference between the semantic similarity effect for that task and each of the other two tasks. Differences in task‐specificity between regions were addressed by statistically testing whether the difference in semantic similarity effects between tasks was larger in one region compared to another region. For each ROI, we calculated the difference between subject‐specific correlational values for each pair of tasks. The significance of the difference between regions for each pair of tasks was obtained by means of a Wilcoxon signed‐rank test.
For each VOI we also tested the presence of a task‐independent semantic similarity effect. A task‐independent semantic similarity effect had to fulfill three criteria: (1) the presence of a significant semantic similarity effect for all tasks pooled, (2) the presence of a significant semantic similarity effect for at least two tasks, and (3) the absence of a pairwise difference in semantic similarity effect between each of the tasks. Each effect had to survive a Bonferroni correction for number of VOIs and number of tests.
In order to ensure that semantic similarity effects were not confounded by phonological or visuoperceptual effects, the RSA was computed by partialling out the phonological and visuoperceptual similarity. All three fully connected AlexNet layers correlated significantly with the semantic model (Section 2.7). Thus, they contain both high‐order visual perceptual features and semantic information. A strong effect of semantic information upon the upper layers is logical since the algorithm is trained on labeled images. Accordingly, in order to limit the removal of semantic information, the main results are based on RSA corrected for phonological similarity and for visuoperceptual similarity derived from layer 6. However, for the sake of completeness, RSA results corrected for phonological similarity and layer 7 and layer 8 of AlexNet are reported in Table S3.
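The study's exact partial‐correlation implementation is not specified; a standard rank‐based residualization approach, sketched below, would serve: rank‐transform all variables, regress the ranks of the two matrices of interest on the nuisance ranks, and correlate the residuals.

```python
import numpy as np
from scipy.stats import rankdata, pearsonr

def partial_spearman(x, y, nuisance):
    """Spearman correlation between vectors x and y after partialling
    out the nuisance variables (columns of `nuisance`, an n x k array)
    from both, via OLS residualization of the rank-transformed data."""
    def resid(v, Z):
        Z = np.column_stack([np.ones(len(v)), Z])  # include an intercept
        beta, *_ = np.linalg.lstsq(Z, v, rcond=None)
        return v - Z @ beta

    xr, yr = rankdata(x), rankdata(y)
    Zr = np.column_stack([rankdata(c) for c in np.atleast_2d(nuisance.T)])
    r, p = pearsonr(resid(xr, Zr), resid(yr, Zr))
    return r, p
```

Here `x` and `y` would be the vectorized upper triangles of the semantic model and the neural distance matrix, and `nuisance` the corresponding phonological and layer‐6 visuoperceptual distances.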
3. RESULTS
3.1. Behavioral analysis
Accuracy of responses on the overt response trials was analyzed by means of a repeated‐measures ANOVA with one within‐subjects factor, type of task, with three levels (reading, naming, and repetition). Accuracy was defined as the number of trials for which the correct name was provided. The one‐way repeated‐measures ANOVA with accuracy as outcome variable showed a main effect of task [F(2,26) = 41.31; p < .0005]. Subjects were significantly less accurate during picture naming (mean: 81%, SD: .07) compared to reading (mean: 97%, SD: .01) or repetition (mean: 93%, SD: .06). 56.5% of the errors came from entities belonging to the bird category (chickadee 18.6%, swallow 14.6%, sparrow 9.76%, and blackbird 8.94%) and 15.45% from insects (bumblebee 9.76% and bee 5.7%). The remaining errors (~28%) were spread across all other entities. A qualitative analysis of the type of errors revealed that, for the bird category, subjects relied on a supraordinate category (e.g., naming “bird” instead of “blackbird”) in 18% of the error cases, whereas in the remaining 38.6% they misnamed the entity with a name belonging to the same semantic category (e.g., “blackbird” instead of “swallow”). For the insect category, subjects did not rely on a supraordinate category, but they misnamed the entity with a semantically similar entity.
3.2. Whole‐brain searchlight decoding MVPA with LDA
The whole‐brain cross‐task searchlight MVPA yielded eight significant clusters. In the left hemisphere, LDA was able to discriminate between the 24 words regardless of task and modality in a set of regions mainly corresponding to the perisylvian language network together with the posterior middle and inferior temporal gyrus. In the right hemisphere, homologous regions were also involved, but to a lesser extent (Table S1; Figure 4).
FIGURE 4.
Decoding multivariate pattern analysis (MVPA): Rendering and axial sections of the whole‐brain cross‐task searchlight MVPA using linear discriminant analysis (LDA). Subject‐specific accuracy maps were entered into a random‐effects analysis in SPM12 using a one‐sample t‐test. Significance was set at cluster‐level FWE‐corrected p < .05 with the voxel‐level threshold set at uncorrected p < .001.
3.3. Representational similarity analysis
LDA revealed a set of brain regions where the activity patterns allow for accurate discrimination between individual words across the tasks. Next, we examined whether these regions coded for the semantic similarity between entities and, critically, whether this effect was task‐dependent.
A total of 26 local maxima were extracted from the group‐level decoding accuracy map for the cross‐task MVPA (Table S1). Based on the a priori criteria for VOI selection described in the Methods, 14 VOIs (Table 1) were further analyzed (Figure 3). We computed the RSA for all tasks pooled and for each task separately (Table 2). Significance of the semantic similarity effects was Bonferroni‐corrected for the number of regions (n = 14) and the number of tests (n = 4; 56 tests in total, corrected threshold p = .0009, corresponding to an alpha level of .05). Table 2 reports results uncorrected for phonological and visuoperceptual similarity. Table 3 and Figure 5 report the RSA results after partialling out the phonological similarity and the visuoperceptual similarity based on layer 6 of AlexNet (see Section 2.8.3 for more details).
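The Bonferroni‐corrected threshold quoted above follows directly from the number of regions and tests:

```python
# Bonferroni threshold for 14 regions x 4 tests at an alpha level of .05.
alpha, n_regions, n_tests = 0.05, 14, 4
threshold = alpha / (n_regions * n_tests)  # .05 / 56
print(round(threshold, 4))  # 0.0009
```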
TABLE 1.
Volume‐of‐interest (VOI) central coordinates and size.
| VOI | x | y | z | kE (mean) | kE (SD) |
|---|---|---|---|---|---|
| L BA45 | −51 | 27 | 19 | 161.1 | 7.10 |
| L ventral BA44/frontal operculum | −63 | 12 | 10 | 130.1 | 10.10 |
| L dorsal BA44 | −42 | 12 | 31 | 157.8 | 10.90 |
| L orbitofrontal C. | −42 | 36 | −5 | 237.2 | 14.40 |
| L frontal pole | −30 | 63 | 1 | 81.0 | 3.5 |
| L postcentral G. | −45 | −12 | 58 | 161.3 | 8.20 |
| L anterior insula | −30 | 6 | 1 | 74.5 | 1.90 |
| L anterior STG | −48 | −9 | 1 | 208.8 | 10.9 |
| L posterior STG | −54 | −45 | 16 | 151.6 | 8.80 |
| L posterior ITG | −48 | −66 | −17 | 77.8 | 5.10 |
| L supramarginal G | −54 | −36 | 43 | 112.6 | 9.80 |
| L lateral AG | −54 | −57 | 37 | 146.1 | 10.9 |
| R posterior STG | 48 | −18 | 10 | 118.3 | 7.4 |
| R posterior MTG | 69 | −33 | 7 | 69.4 | 3.2 |
Note: For each VOI, central coordinates and cluster size are reported.
Abbreviations: AG, angular gyrus; BA, Brodmann area; C, cortex; G, gyrus; ITG, inferior temporal gyrus; kE, size of clusters based on the mean across subjects of number of voxels after exclusion of non‐grey matter voxels; L, left; MTG, middle temporal gyrus; R, right; SD, standard deviation; STG, superior temporal gyrus.
FIGURE 3.
Volumes of interest (VOIs): stepwise selection. Sequential steps adopted to select the VOIs. For each step (black circle), a rendering shows the brain regions involved.
TABLE 2.
RSA results.
| VOI | Tasks pooled rho | Tasks pooled p | Reading rho | Reading p | Naming rho | Naming p | Repetition rho | Repetition p |
|---|---|---|---|---|---|---|---|---|
| L BA45 | 0.08 | .0002 | 0.005 | .46 | 0.11 | .00005 | 0.03 | .04 |
| L ventral BA44 | 0.13 | .00008 | 0.07 | .0009 | 0.10 | .0001 | 0.08 | .0008 |
| L dorsal BA44 | 0.11 | .00005 | 0.01 | .19 | 0.12 | .00006 | 0.05 | .003 |
| L orbitofrontal C. | 0.07 | .0007 | 0.03 | .06 | 0.07 | .005 | 0.03 | .01 |
| L postcentral G. | 0.22 | .00005 | 0.16 | .00006 | 0.18 | .00005 | 0.18 | .00005 |
| L anterior insula | 0.10 | .0001 | 0.07 | .001 | 0.04 | .01 | 0.07 | .0002 |
| L anterior STG | 0.11 | .00005 | 0.05 | .005 | 0.06 | .0002 | 0.08 | .00006 |
| L posterior STG | 0.12 | .00009 | 0.04 | .03 | 0.08 | .0007 | 0.09 | .0003 |
| L supramarginal G | 0.08 | .0001 | 0.02 | .22 | 0.10 | .00006 | 0.04 | .007 |
| R posterior STG | 0.09 | .0001 | 0.04 | .02 | 0.05 | .004 | 0.07 | .0003 |
| R posterior MTG | 0.10 | .0001 | 0.03 | .07 | 0.07 | .003 | 0.08 | .0003 |
| L frontal pole | 0.05 | .004 | 0.02 | .08 | 0.04 | .008 | 0.01 | .21 |
| L lateral AG | 0.05 | .009 | 0.02 | .16 | 0.04 | .02 | 0.02 | .10 |
| L posterior ITG | 0.06 | .002 | 0.02 | .15 | 0.03 | .04 | 0.04 | .006 |
Note: For each VOI, RSA results are displayed for all tasks pooled and for each task separately. Significant semantic similarity effects surviving a Bonferroni correction—at an alpha level of .05—for number of regions (n = 14) and number of tests (n = 4; corresponding to a threshold of p < .0009) are reported in bold. Results were not corrected for visuoperceptual and phonological similarity. See Figure 5 and Table 3 for corrected results.
Abbreviations: BA, Brodmann area; C, cortex; G, gyrus; ITG, inferior temporal gyrus; L, left; R, right; STG, superior temporal gyrus.
TABLE 3.
RSA results corrected for phonological and visuoperceptual similarity.
| VOI | Reading rho | Reading p | Naming rho | Naming p | Repetition rho | Repetition p |
|---|---|---|---|---|---|---|
| L BA45 | −0.002 | .62 | 0.10 | .0002 | 0.009 | .35 |
| L ventral BA44 | 0.07 | .002 | 0.09 | .0002 | 0.07 | .0002 |
| L dorsal BA44 | 0.003 | .42 | 0.10 | .0002 | 0.04 | .03 |
| L orbitofrontal C. | 0.02 | .06 | 0.06 | .01 | 0.03 | .03 |
| L postcentral G. | 0.15 | .00008 | 0.17 | .00005 | 0.17 | .00005 |
| L anterior insula | 0.07 | .0007 | 0.04 | .02 | 0.07 | .0002 |
| L anterior STG | 0.05 | .008 | 0.07 | .0002 | 0.09 | .00006 |
| L posterior STG | 0.04 | .04 | 0.08 | .002 | 0.09 | .0006 |
| L supramarginal G | 0.01 | .29 | 0.09 | .0003 | 0.02 | .06 |
| R posterior STG | 0.04 | .02 | 0.06 | .004 | 0.09 | .0001 |
| R posterior MTG | 0.02 | .14 | 0.07 | .005 | 0.08 | .0003 |
| L frontal pole | 0.02 | .14 | 0.04 | .01 | 0.01 | .17 |
| L lateral AG | 0.02 | .16 | 0.03 | .10 | 0.01 | .11 |
| L posterior ITG | 0.01 | .26 | 0.02 | .16 | 0.04 | .005 |
Note: For each VOI, RSA results are displayed for each task separately. Significant semantic similarity effects surviving a Bonferroni correction—at an alpha level of .05—for number of regions (n = 14) and number of tests (n = 4; corresponding to a threshold of p < .0009) are reported in bold.
Abbreviations: BA, Brodmann area; C, cortex; G, gyrus; ITG, inferior temporal gyrus; L, left; R, right; STG, superior temporal gyrus.
FIGURE 5.
RSA results corrected for visuoperceptual and phonological similarity. Only regions showing a significant semantic similarity effect for at least one of the three tasks are reported. In the center, sagittal sections of the brain regions showing a semantic similarity effect. (a–j) For each brain region and each task, subject‐specific correlation values and the average correlation (bar plot) are displayed. Asterisks refer to the significance of the semantic similarity effect for each specific task [*p < .05, **p < .001, ***p < .0009; Bonferroni correction—at an alpha level of .05—for the number of regions (n = 14) and the number of tests (n = 4)]. On top of the bar plots, the significance of the difference between semantic similarity effects (uncorrected p values) is reported only when significant.
Among all brain regions tested, a task‐specific semantic similarity effect (defined by the presence of a significant semantic similarity effect for one task along with a significant difference in semantic similarity effect between that task and each of the two remaining tasks) was detected in left BA45 (Figure 5a) and in the superior half of the left supramarginal gyrus (Figure 5e, Table 3). In both regions, the similarity between fMRI patterns was representative of the similarity in meaning between entities during picture naming only (left BA45: rho = 0.10, p = .0002; left supramarginal gyrus: rho = 0.09, p = .0003). No significant effect (p > .05) was detected for reading or repetition (Table 3). In both regions, the semantic similarity effect for picture naming differed significantly from the similarity effect for reading (BA45: p = .0012; supramarginal gyrus: p = .01) and for repetition (BA45: p = .0032; supramarginal gyrus: p = .02).
In contrast to the left BA45 and supramarginal gyrus, none of the other VOIs exhibited a task‐specific semantic similarity effect. However, similarly to the left BA45, the adjacent left dorsal BA44 showed a semantic similarity effect during picture naming (rho = 0.10, p = .0002; Figure 5b). No significant effect was detected for reading (Table 3), whereas an effect was detected for repetition (rho = 0.04, p = .03), which, however, did not survive the stringent Bonferroni correction for the number of comparisons (threshold p = .0009). The semantic similarity effect for pictures differed significantly from the semantic similarity effect during reading (p = .0008) and during repetition (p = .05).
A task‐independent semantic similarity effect was defined based on the presence of a semantic similarity effect for all tasks pooled (Table 2), a semantic similarity effect for at least two tasks (Table 3), and the absence of a pairwise difference in semantic similarity effect between each of the tasks (Figure 5). At a Bonferroni corrected threshold of p < .0009 (corresponding to an alpha level of .05), a significant semantic similarity effect for all tasks pooled was detected in the left hemisphere, in BA45, ventral BA44/frontal operculum, dorsal BA44, orbitofrontal cortex, postcentral gyrus, anterior insula, anterior STG, posterior STG, and the superior half of the left supramarginal gyrus. In the right hemisphere, a significant semantic similarity effect was present in the posterior STG and posterior MTG (Table 2).
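The two classification criteria used in this section, task‐specific and task‐independent, can be made explicit as boolean checks. The thresholds and example p values are taken from the text; the function names are illustrative:

```python
ALPHA_BONF = 0.0009   # Bonferroni-corrected threshold (14 regions x 4 tests)
ALPHA_DIFF = 0.05     # threshold for pairwise task differences (uncorrected)

def task_specific(task_p, diff_ps):
    """Significant effect for one task AND a significant difference
    between that task and each of the two remaining tasks."""
    return task_p < ALPHA_BONF and all(p < ALPHA_DIFF for p in diff_ps)

def task_independent(pooled_p, task_ps, diff_ps):
    """Significant pooled effect, an effect in at least two tasks,
    and no significant pairwise difference between tasks."""
    return (pooled_p < ALPHA_BONF
            and sum(p < ALPHA_BONF for p in task_ps) >= 2
            and all(p >= ALPHA_DIFF for p in diff_ps))

# Left BA45 (Table 3): a naming effect (p = .0002) that differs from
# reading (p = .0012) and repetition (p = .0032).
print(task_specific(0.0002, [0.0012, 0.0032]))  # True
```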
Among these regions showing a semantic similarity effect for all tasks pooled, the left ventral BA44/frontal operculum, the left postcentral gyrus, the left anterior STG, and the left anterior insula showed a semantic similarity effect for at least two tasks (Table 3). In these four regions, for each pair of tasks, no significant difference (p > .05) between the semantic similarity effects was detected (Figure 5). In addition, an extra analysis was conducted in order to better characterize the task‐independent semantic similarity effect in the ventral BA44/frontal operculum, a region typically involved in phonological processing (Moore & Price, 1999; Price, 2010; Price et al., 1997; Tranel et al., 2003). Accordingly, we re‐computed the RSA by regressing out only the phonological similarity and we tested for the difference in semantic similarity effects with and without phonological correction. For each task, a significant (p < .05) difference was found.
Finally, the left posterior STG, the right posterior STG, and the right posterior MTG showed a semantic similarity effect for the repetition task only (Table 3). However, this effect did not differ significantly from both of the other two tasks: in each of the three regions, the semantic similarity effect for repetition differed significantly (p < .05) from that for reading, but none of the regions showed a significant difference between repetition and picture naming (p > .05; Figure 5).
Furthermore, we addressed the significance of the difference between the semantic similarity effects represented in BA45 and the ones represented in each of the other brain regions investigated: The difference between picture naming and repetition was significantly larger (p < .05) in left BA45 than in all other regions except for the left dorsal BA44, the left postcentral gyrus and the left supramarginal gyrus. The difference between picture naming and reading was significantly larger (p < .05) in the left BA45 than in all other regions except for the left dorsal BA44, left postcentral gyrus, left supramarginal gyrus, and the left and right posterior STG.
Similarly, we addressed the significance of the difference in the semantic similarity effect between picture naming and repetition—and between picture naming and reading—between the left supramarginal gyrus and each of the other regions. The difference between picture naming and reading was significantly larger (p < .05) in the left supramarginal gyrus than in the left ventral BA44/frontal operculum and left insula. The difference between picture naming and repetition was significantly larger (p < .05) in the left supramarginal gyrus than in the left anterior and posterior STG, right posterior STG and MTG, and left insula.
For each of the above‐mentioned VOIs, the RSA correlation values and 95% confidence intervals are reported in Table S2.
For RSA results controlled for phonological and the visuoperceptual similarity based on layers 7 and 8, refer to Table S3.
For RSA results based on the concept‐feature matrix as a model, refer to Table S4.
Finally, in order to define the macroanatomy of the three localized inferior frontal regions (i.e., left BA45, ventral BA44/frontal operculum, and dorsal BA44) in a standardized manner, and for the sake of comparison with our previous study (Liuzzi et al., 2017), the regions were mapped onto the Jülich–Düsseldorf cytoarchitectonic reference frame (Eickhoff et al., 2005, 2006, 2007). According to an assignment based on the Maximum Probability Map, 57% of the left BA45 was located in the left Area 45 and 8.7% in Area 44; 35.9% of the left dorsal BA44 was located in the left Area 44 and 5.7% in the left Area 45; finally, 65.8% of the voxels composing the left ventral BA44 were located in the left Area 44, 7.7% in the left Area 45, 3.4% in the operculum, and 0.3% in Area 6. For each region, the remaining voxels of the cluster lay outside the Jülich cytoarchitectonic map.
4. DISCUSSION
Previous neuroimaging studies comparing naming, reading, and repetition in healthy individuals have mainly been based on univariate analyses (Bookheimer et al., 1995; Moore & Price, 1999; Price et al., 1997; Price et al., 2005, 2006). Accordingly, it is still unknown to what extent brain regions activated by these three tasks also code for a semantic representation: whether, during reading (of written words), naming (of pictures), or repetition (of spoken words), these regions code for semantic information abstracted away from any linguistic or visual form. Here, we addressed whether and where the activated representation of concept meaning differed when elicited by these three tasks (Figure 5), that is, whether (1) the semantic similarity effect was significant for one task only (e.g., picture naming) and (2) this effect differed significantly from the semantic similarity effect for the other tasks (e.g., reading and repetition). LDA applied in a whole‐brain cross‐task MVPA revealed a mostly left‐lateralized network able to discriminate between the individual words regardless of the type of task: naming, reading, or repetition (Figure 4). The pars triangularis of the left inferior frontal gyrus (BA45) and the left supramarginal gyrus coded for the similarity in meaning between entities only during the picture naming task. In both regions, the semantic similarity effect during picture naming was significantly stronger than during reading or repetition (Figure 6). In contrast, in the left lateral temporal neocortex, ventral BA44/frontal operculum, left postcentral gyrus, and anterior insula, semantic similarity effects were present in at least two tasks and no differences were found between tasks at the preset threshold (Figure 6).
FIGURE 6.
Overview of the main results. Colored bar plots report the representational similarity analysis (RSA) results for the left (a) BA45, (b) supramarginal gyrus, (e) ventral BA44/frontal operculum, (f) anterior STG, (g) insula, and (h) postcentral gyrus. (c,d) Sagittal and axial slices of the left BA45 and left supramarginal gyrus as binary VOIs. In the center, a rendering of all VOIs. Asterisks refer to the significance of the semantic similarity effect for each specific task [*p < .05, **p < .001, ***p < .0009; Bonferroni correction—at an alpha level of .05—for the number of regions (n = 14) and the number of tests (n = 4)]. On top of the bar plots, the significance of the difference between semantic similarity effects (uncorrected p values) is reported only when significant.
Picture naming fundamentally differs from reading and repetition in that it necessitates access to the semantic referent of the word. During single‐word reading and repetition, automatic activation of the word meaning is also likely to occur, but it is not necessary for execution of the task. We hypothesize that the task‐specificity of the semantic similarity effect in the left BA45 and the superior half of the left supramarginal gyrus reflects this distinction: unlike reading and repetition, naming a picture requires recognition of the visual stimulus and access to its semantic representation, which is then mapped onto a word form representation. This process is mediated by a "lexical‐semantic process," which consists of the selection of specific information. Although it has been demonstrated that the IFG is a relevant structure in selection processes, there is no agreement about the definition of this mechanism. Neuroimaging studies refer to selection as a top‐down cognitive control mechanism triggered when unchecked activated nodes cause competition and/or when any potential target is only weakly activated (Badre et al., 2005; Jefferies et al., 2020; Krieger‐Redwood & Jefferies, 2014; Thompson‐Schill et al., 1997, 1999). Psycholinguistic research (Dell, 1986; Heim et al., 2009; Levelt et al., 1999) defines "selection" as a bottom‐up process in which, before a word is uttered, several nodes of the mental lexicon are activated and the most highly activated ones are then selected. Adjudicating between these definitions goes beyond the purpose of the current study; nevertheless, the current results provide insight into this mechanism. Here, we show that retrieving the correct name from a picture requires the activation of an association‐based semantic representation containing highly specific semantic information, and that this detailed association‐based representation is well represented in the neural patterns of the left IFG and the left supramarginal gyrus.
With regard to the left supramarginal gyrus, the task‐dependent semantic similarity effect was located in the dorsal supramarginal gyrus (Oberhuber et al., 2016) and the lower bank of the inferior parietal sulcus. Although the supramarginal gyrus has been more consistently implicated in phonological (Hartwigsen et al., 2010; Oberhuber et al., 2016) than in semantic processing, in a number of studies the inferior parietal activation during semantic tasks extends into the dorsal supramarginal gyrus (Binder et al., 2009; Graves et al., 2023; Numssen et al., 2021). By means of RSA, Graves et al. (2023) provided evidence of a semantic similarity effect, corrected for phonological and orthographic similarity, in the dorsal supramarginal gyrus. Similarly, by means of a pattern‐learning algorithm, Numssen et al. (2021) showed that the left anterior inferior parietal lobe (IPL; coordinates: −57, −32, 30) had a high probability of representing semantic information. Overall, the semantic similarity effect specific to picture naming is in line with the role of the supramarginal gyrus in high‐level integrative processes (Binder & Desai, 2011; Koenig et al., 2005): picture naming is the task that, compared with reading and repetition, requires the integration of visual, semantic, and lexical information.
With regard to the task‐dependent semantic similarity effect in the left IFG (BA45 slightly extending into dorsal BA44), we hypothesize that this effect speaks in favor of a dynamic uploading (Gabrieli et al., 1998) of semantic representations of concrete entities (Gabrieli et al., 1998; Liuzzi et al., 2019) which facilitates IFG selection mechanisms. While the selection (Thompson‐Schill et al., 1997) and semantic control (Lambon Ralph et al., 2016) models do not clearly predict the representation of meaning in the IFG, our results fit the semantic working memory model proposed by Gabrieli et al. (1998), in which a representation of meaning is an explicit and integral part of the model. In contrast to picture naming, the two other tasks, reading and repetition, do not require the activation of a detailed semantic representation, since phonological information is readily available in the stimulus itself, either directly (repetition) or through grapheme–phoneme mapping (reading). We hypothesize that once the input reaches the pars triangularis, it gains access to semantic working memory. In this process, item‐specific semantic relationships are temporarily uploaded in the dorsal BA45 in order to facilitate word selection mechanisms (Liuzzi et al., 2019). According to this hypothesis, this additional semantic working memory step is not necessary during single‐word repetition or single‐word reading.
In a recent study, Liuzzi et al. (2021) found that the left pars triangularis showed a stronger sensitivity to semantic category when active rather than passive semantic access was required. At the same time, neural representations in this region showed a subtle pattern of category representation that was common to both active and passive conceptual access. Liuzzi et al. (2021) adopted a phonetic decision task and a typicality judgment task: subjects were required to decide which one of three possible concepts started with a consonant or was the most typical of its category, respectively. In this sense, access to meaning was ensured, although passively for the phonetic decision task and actively for the typicality judgment task. It is relevant to point out that the portion of the IFG showing a task‐dependent semantic similarity effect (current dataset) lies more posteriorly than the one showing a common representation for passive and active semantic access.
Whereas BA45 and the left supramarginal gyrus were characterized by a task‐dependent semantic similarity effect, the left anterior STG was characterized by task‐independent semantic similarity effects. The left anterior STG coded for the similarity in meaning between entities during picture naming as well as during single‐word repetition, and no significant difference was detected between these similarity effects. Importantly, a direct statistical comparison between the left BA45 and the left anterior STG revealed a significant difference in task‐dependency between the two regions, both for the contrast between picture naming and reading (p = .03) and for the contrast between picture naming and repetition (p = .006).
The semantic similarity effect for the repetition task in the left STG replicates our previous findings: in Liuzzi et al. (2017) we showed that a property verification task on auditory words activated the same portion of the STG (y = −10). The anterior STG is involved in production (Adank, 2012; Borovsky et al., 2007; Brown et al., 2009; Walker et al., 2011) as well as comprehension (DeWitt & Rauschecker, 2012; Hillis et al., 2017; Mesulam et al., 2015; Roux et al., 2015). In a meta‐analysis incorporating more than 100 experiments addressing phoneme, word, and phrasal processing, DeWitt and Rauschecker (2012) localized auditory word comprehension to a portion of the STG situated between sites associated with phoneme and phrase processing. Roux et al. (2015) used electrical stimulation to map areas involved in the comprehension of auditory and visual words. They combined picture naming and auditory comprehension tasks and showed that, while the mid‐to‐anterior STG was associated with "word deafness errors," corresponding to the inability of the subjects to understand questions, the anterior STG was associated with the inability to associate an auditory word form with its semantic referent (lexical‐semantic errors). The region showing a semantic similarity effect in the current study corresponds to the anterior STG reported by Roux et al. (2015) (coronal slices: 8; 0; −8; −12). Accordingly, the current results not only add evidence for a central role of the left anterior superior temporal gyrus in auditory word comprehension, but also provide new evidence that this region represents the similarity in meaning between entities in a task‐independent manner (i.e., during both picture naming and repetition).
Similarly to the anterior STG, the left ventral BA44/frontal operculum showed task‐independent semantic similarity effects: the frontal operculum coded for the similarity in meaning between entities during picture naming and single‐word repetition, with no significant difference between these similarity effects. In addition, in the left ventral BA44/frontal operculum, a significant difference was found for each task between the semantic similarity effects with and without phonological correction. These results indicate a role for this region in coding both semantic and phonological similarity, placing its function at the interface between phonological (Moore & Price, 1999; Price et al., 1997; Tranel et al., 2003) and semantic word processing.
In addition to the anterior STG and ventral BA44/frontal operculum, the left anterior insula also showed a semantic similarity effect during both reading and repetition, with no significant pairwise difference between the three tasks. Several neuroimaging studies have demonstrated that the anterior insula is a core area in language processing: it is connected to areas involved in language production, language comprehension, and repetition (Ardila et al., 2014), and it is also involved in word generation (Kemeny et al., 2005; Rowan et al., 2004), naming (Damasio et al., 2001; Price et al., 1996), phonological discrimination (Rumsey et al., 1997), and automatic aspects of semantic processing (Friederici et al., 2003; Mummery et al., 1999). By showing that the left anterior insula is one of the regions able to discriminate entities at the word level and that it represents the similarity in meaning between entities during reading and repetition, the current results add evidence for the role of this region in verbal and semantic aspects of language processing.
The left postcentral gyrus was the only region showing a semantic similarity effect for each of the tasks investigated. Although the postcentral gyrus is classically considered primary somatosensory cortex, it is highly involved in receptive and expressive language (Hickok, 2009; Jackson et al., 2016; Price, 2010, 2012; Ueno et al., 2011). It is activated by overt naming tasks (Devereux et al., 2013; Hillis et al., 2017; Moriai‐izawa et al., 2012), it is associated with increased reading ability (Meyler et al., 2008), and it is part of the auditory‐motor feedback loop between speech production (motor commands) and perception (sensory feedback; Jackson et al., 2016; Price, 2012; Zheng et al., 2010). In the current study, we found that the similarity between neural patterns of the left postcentral gyrus during reading, naming, and repetition was representative of the similarity in meaning between animate entities. By means of MVPA and RSA, the involvement of the left postcentral gyrus in coding for semantic similarity between semantic categories has previously been demonstrated for overt reading and naming (Devereux et al., 2013). This region is also sensitive to semantic categories during covert reading and object processing (Shinkareva et al., 2011). Although the current results do not allow us to hypothesize a specific role for the postcentral gyrus, they show a clear involvement of this region in semantic processing, perhaps due to its connections with the aSTG/STS (Jackson et al., 2016).
In the right posterior STG and MTG, we detected a semantic similarity effect during repetition. Roux et al. (2015) found no speech deficit during stimulation of the right temporal cortex. However, Yamamoto et al. (2019) have recently investigated whether parts of the auditory cortex were more activated by self‐generated speech sounds compared to listening to auditory stimuli. They found that, while the right posterior STS was more activated during speech production, the bilateral STG was more activated during listening to auditory stimuli than during speech production. In addition, bilateral STG did not respond in the absence of auditory stimuli. Current results add evidence for the role of the right STG in coding semantic meaning.
Overall, the current results support the role of the IPL in high‐level integrative processes and the role of the dorsal BA45 in representing semantic similarity between words during the picture naming task. They show a dissociation between the inferior frontal cortex and several regions, including lateral temporal cortex, ventral BA44/frontal operculum, and anterior insula, in coding task‐dependent semantic similarity effects. These results are in line with neuroimaging studies of the neural underpinnings of implicit semantic priming (Friederici et al., 2003; Mummery et al., 1999; Rissman et al., 2003; Ruff et al., 2008): whereas the STG, the MTG, and the left anterior insula appeared to be involved in the implicit processing of the meaning of words, the frontal lobe was more involved in monitoring the activation of lexical representations and mapping the word onto the appropriate motor response. The current findings reveal a neurobiological basis for the distinction between task‐dependent and task‐independent retrieval of word meaning.
4.1. Limitation
The VOIs for the RSA were based on an LDA of the same data. This does not introduce circularity, because the fact that a region is able to discriminate between entities does not imply that there is concordance between neural and semantic similarity.
In the current study, LDA did not yield any significant cluster in the anterior temporal lobe (ATL). However, we cannot exclude that this is due to the use of a single‐echo rather than a dual gradient‐echo sequence, the latter of which improves the signal in areas associated with signal loss (Halai et al., 2014).
The definition of a task‐specific effect was based on two criteria: the presence of a significant semantic similarity effect for one task and a significant difference between the semantic similarity effect for that task and each of the other tasks. This is a rigorous definition: although in principle an effect present during two tasks and not the third is also a form of task‐specificity, we opted for the more rigorous definition so as to protect against false positives. Likewise, a significant difference between the semantic similarity effect for one task and only one of the two other tasks could also be considered a form of task‐specificity.
The naming task is inherently more cognitively demanding than reading or repetition. The higher cognitive demands are directly related to the need for visual recognition, the necessary access to the semantic representation, and the lexical‐semantic retrieval process. The representational similarity indicates that the effect is not a nonspecific effect of difficulty in general, but relates to the semantic content of the item. Theoretically, it is possible that the semantic representation is more strongly activated because the task requires more attention to it; such attentional demand, however, is inherent to naming, the process under study.
5. CONCLUSION
In this study, we addressed whether brain regions able to discriminate between animate entities code for the similarity in meaning between entities in a task‐dependent manner. Whereas the left BA45 and the superior half of the left supramarginal gyrus showed a semantic similarity effect during picture naming only, the left anterior STG, the left ventral BA44/frontal operculum, left anterior insula and left postcentral gyrus showed a task‐independent semantic similarity effect. We interpret these findings in line with the role of the left supramarginal gyrus in high‐level integration processes and the role of the inferior frontal cortex in dynamic uploading of semantic word representations as a function of task context (semantic working memory).
CONFLICT OF INTEREST STATEMENT
The authors declare no competing financial interests.
Supporting information
Data S1: Supporting Information
ACKNOWLEDGEMENTS
This work was supported by Research Foundation Flanders (FWO; Award ID: G094418N, 1247821N) and Onderzoeksraad KU Leuven (Award ID: C14/21/109 and C14/17/10). Antonietta Gabriella Liuzzi is a postdoctoral fellow of the FWO.
Liuzzi, A. G. , Meersmans, K. , Peeters, R. , De Deyne, S. , Dupont, P. , & Vandenberghe, R. (2024). Semantic representations in inferior frontal and lateral temporal cortex during picture naming, reading, and repetition. Human Brain Mapping, 45(2), e26603. 10.1002/hbm.26603
DATA AVAILABILITY STATEMENT
Data, stimuli, and analysis scripts are available upon reasonable request from the corresponding author. No commercial use will be allowed.