Author manuscript; available in PMC: 2026 Apr 12.
Published in final edited form as: Cognition. 2025 Jan 18;257:106058. doi: 10.1016/j.cognition.2024.106058

Neural specialization for ‘visual’ concepts emerges in the absence of vision

Miriam Hauptman 1, Giulia Elli 1, Rashi Pant 1,2, Marina Bedny 1
PMCID: PMC13069477  NIHMSID: NIHMS2159159  PMID: 39827755

Abstract

The ‘different-body/different-concepts hypothesis’ central to some embodiment theories proposes that the sensory capacities of our bodies shape the cognitive and neural basis of our concepts. We tested this hypothesis by comparing behavioral semantic similarity judgments and neural signatures (fMRI) of ‘visual’ categories (‘living things,’ or animals, e.g., tiger, and light events, e.g., sparkle) across congenitally blind (n=21) and sighted (n=22) adults. Words referring to ‘visual’ entities/nouns and events/verbs (animals and light events) were compared to less vision-dependent categories from the same grammatical class (i.e., animal vs. place nouns, light vs. sound, mouth, and hand verbs). Within-category semantic similarity judgments about animals (e.g., sparrow vs. finch) were partially different across groups, consistent with the idea that sighted people rely on visually learned information to make such judgments about animals. However, robust neural specialization for living things in temporoparietal semantic networks, including in the precuneus, was observed in blind and sighted people alike. For light events, which are directly accessible only through vision, behavioral judgments were indistinguishable across groups. Neural responses to light events were also similar across groups: in both blind and sighted people, the left middle temporal gyrus (LMTG+) responded more to event concepts, including light events, compared to entities. Multivariate patterns of neural activity in LMTG+ distinguished among different event types, including light events vs. other event types. In sum, we find that neural signatures of concepts previously attributed to visual experience do not require vision. Across a wide range of semantic types, conceptual representations develop independent of sensory experience.

Keywords: Blindness, Concepts, Language, Vision, fMRI


Quotations

“Critics delight to tell us what we cannot do. They assume that blindness and deafness sever us completely from the things which the seeing and the hearing enjoy, and hence they assert we have no moral right to talk about beauty, the skies, mountains, the song of birds, and colours. They declare that the very sensations we have from the sense of touch are “vicarious,” as though our friends felt the sun for us! They deny a priori what they have not seen and I have felt. Some brave doubters have gone so far even as to deny my existence. In order, therefore, that I may know that I exist, I resort to Descartes’s method: “I think, therefore I am.” Thus I am metaphysically established, and I throw upon the doubters the burden of proving my non-existence.” – Helen Keller (1908)

“The properties of an organism’s body limit or constrain the concepts an organism can acquire. That is, the concepts by which an organism understands its environment depend on the nature of its body in such a way that differently embodied organisms would understand their environments differently.” – Stanford Encyclopedia of Philosophy (Shapiro & Spaulding, 2021).

Introduction

Within the first weeks of life, infants identify living things by looking for faces, bodies, and biological motion (Simion et al., 2008; Opfer & Gelman, 2011; Spelke, 2022). Watching animals such as elephants and blue jays offers information about their shape, color, texture, and behavior (e.g., De Vries, 1969; Massey & Gelman, 1988; Setoh et al., 2013). Some conceptual categories, like light events (e.g., ‘sparkle’, ‘glow’) and colors (e.g., ‘blue’, ‘yellow’), can only be directly experienced through vision. Vision is an important source of direct sensory evidence about many conceptual categories. Here we ask how visual experience contributes to the cognitive and neural basis of concepts.

Many embodied accounts of cognition propose that the sensory capacities of our bodies constrain the concepts we can entertain (e.g., Barsalou, 1999; Lakoff & Johnson, 1980, 1999; Tucker & Ellis, 1998; Glenberg & Kaschak, 2002; Thompson-Schill et al., 1999; Casasanto, 2009; Shapiro & Spaulding, 2021; see also Locke, 1690; Hume, 1739; Berkeley, 1732). Laurence & Margolis (2024) dub this the ‘different-body/different-concepts hypothesis.’ Neurally, such theories propose that concepts are represented in the modality-specific sensory systems through which they were acquired. Consistent with this idea, a large neuroimaging literature suggests that conceptual retrieval activates sensory regions of the brain (Thompson-Schill, 2003; Barsalou, 2008; Meyer & Damasio, 2009; Pulvermüller & Fadiga, 2010; Kiefer & Pulvermüller, 2012; Meteyard et al., 2012; Yee & Thompson-Schill, 2016; Martin, 2016; Reilly et al., 2024; Tyler & Moss, 2001; Barsalou et al., 2003; Barsalou, 2010; Pulvermüller, 2001; Gallese & Lakoff, 2005; Zwaan, 2014). Color words activate visual color areas, motor words activate motor control regions, and auditory words activate auditory cortices (Martin et al., 1995; Chao & Martin, 1999; Hauk et al., 2004; Wallentin et al., 2005; Simmons, Martin, & Barsalou, 2005; Beilock et al., 2008; Hoenig et al., 2011; Fernandino et al., 2016; Simmons et al., 2007; Halpern et al., 2004; Kiefer et al., 2008; Kemmerer et al., 2008; Kuhnke, Kiefer, & Hartwigsen, 2020). One interpretation of these data is that the sensory means through which concepts are acquired shapes their cognitive and neural basis.

Sensorimotor experiences can differ widely across people. One question is whether such differences lead to different conceptual representations. Unlike experts in music or a particular sport, whose sensory experience differs from that of non-experts in nuanced ways, congenitally blind individuals lack visual experience entirely. Even so, behavioral studies find shared use of ‘visual’ words across blind and sighted people (Marmor, 1978; Zimler & Keenan, 1983; Landau & Gleitman, 1985; Shepard & Cooper, 1992; Connolly et al., 2007; Lenci et al., 2013; Saysani et al., 2018; Kim et al., 2019; Bedny et al., 2019; Wang et al., 2020; Kim et al., 2021). For instance, Landau & Gleitman (1985) found that a congenitally blind child, Kelli, understood and produced verbs like ‘look’ and ‘see’ as well as color words (e.g., ‘green’) around the same age as sighted children do. Congenitally blind adults make subtle distinctions among verbs that refer to light events based on light intensity and periodicity (e.g., ‘sparkle’ vs. ‘flash’; Lenci et al., 2013; Bedny et al., 2019). Blind and sighted people share knowledge of large animal appearance (e.g., what is the shape and size of a tiger?), despite the fact that direct sensory access to large animals is primarily visual (Kim et al., 2019). Blind and sighted people have similar intuitions about how color varies across object tokens (e.g., two pieces of paper are more likely to have the same color than two cars) (Kim et al., 2021) and about the similarity space of colors (e.g., orange is more similar to red than to green) (Shepard & Cooper, 1992; Marmor, 1978; Saysani et al., 2018). Thus, seemingly visual information is acquired by humans who do not have direct sensory access to it.

People born blind could acquire ‘visual’ knowledge in a variety of ways, including by analogy to other senses (e.g., touch, audition), but humans are prodigious social learners, and learning through language likely makes a significant contribution to visual knowledge in blindness. Indeed, languages of the world convey rich information about the senses. English has a large ‘visual’ lexicon (Viberg, 1983; Sweetser, 1990; Levinson & Majid, 2014; San Roque et al., 2015; Winter et al., 2018). Recently, large language models (LLMs) have demonstrated that semantic representations of ‘sensory’ information can be acquired via language alone (Abdou et al., 2021; Patel & Pavlick, 2022; Li et al., 2021; Wei et al., 2022; Sharma et al., 2024; Gurnee & Tegmark, 2023; Marjieh et al., 2022, 2024). For example, LLMs can reconstruct the similarity space of colors (red is more similar to orange than to blue), the spatial locations of US states on a map, and object shapes (Abdou et al., 2021; Gurnee & Tegmark, 2024; Sharma et al., 2024; Marjieh et al., 2024).

Exactly how much humans learn about vision from language and what kinds of representations are acquired from language remain open questions. Compared to LLMs, humans have more modest memory resources and access to far less linguistic data (Warstadt & Bowman, 2022; Frank, 2023). It has also been suggested that precisely because LLMs learn from language alone, their representations are shallow (Lake & Murphy, 2023; Bender & Koller, 2020; but see Chalmers, 2023). Some differences in visual knowledge have also been observed across blind and sighted people in prior behavioral studies. Although color similarity judgments (e.g., orange vs. blue) are similar across blind and sighted people on average, individual blind people’s judgments are more variable (Shepard & Cooper, 1992; Marmor, 1978; Saysani et al., 2018). Blind individuals rate large animals (e.g., tiger, rhinoceros) as less familiar, and animal appearance knowledge differs somewhat across blind and sighted groups (Kim et al., 2019). Blind and sighted people show low agreement about the colors of animals and common objects, including plants. For example, in one study, 100% of sighted people and about 50% of blind participants labeled carrots as orange (Kim et al., 2021). Likewise, semantic similarity judgments about fruits and vegetables are influenced by color in sighted but not blind participants (Connolly et al., 2007). In sum, the available behavioral evidence suggests that sighted people and people born blind share ‘visual’ knowledge, but this knowledge is not identical across groups. In particular, blind and sighted people disagree about some aspects of the appearance of living things.

Given that differences in behavior across groups are relatively subtle, one interpretation of these results is that direct sensory access is not as central to conceptual representation as predicted by the ‘different-body/different-concepts’ hypothesis (Mahon et al., 2009; Mahon & Caramazza, 2011; Vannuscorps & Caramazza, 2016; Bedny & Saxe, 2016; Bedny et al., 2019; Bedny, 2020). Alternatively, it is possible that subtle behavioral differences between blind and sighted people reveal more fundamental changes in the format of their ‘visual’ conceptual representations (e.g., Connolly et al., 2007; Yee, Chrysikou, & Thompson-Schill, 2013; Yee, Jones, & McRae, 2017). Neuroscience evidence can help distinguish between these interpretations.

1.1. Neural evidence regarding the relationship of concepts and sensory experience

Neuroscience studies have provided some of the strongest evidence for the idea that concepts are embodied in sensorimotor systems. Classic neuropsychological work proposes that semantic deficits for living things arise as a result of damaged visual knowledge (Allport, 1985; Warrington & Shallice, 1984; Warrington & McCarthy, 1987; Farah & McClelland, 1991; Gaffan & Heywood, 1993; Moss et al., 1997; Tranel et al., 1997; Humphreys & Forde, 2001; cf. Caramazza & Shelton, 1998; Caramazza & Mahon, 2003). fMRI studies find that thinking about living things (e.g., animals) activates distinctive neural structures, and such findings have been attributed to the retrieval of visual knowledge central to living things concepts (Perani et al., 1999; Martin et al., 1996; Okada et al., 2000; Thompson-Schill et al., 1999).

Analogously, parts of the posterior portion of the left middle temporal gyrus (LMTG+) respond preferentially to action verbs over concrete nouns (Kable et al., 2002, 2005; Davis et al., 2004; Bedny et al., 2008, 2014; Martin et al., 1995; Bedny & Thompson-Schill, 2006; Yu et al., 2012; Lapinskaya et al., 2016; Elli et al., 2019), which has been attributed to the importance of visual motion information for action verb representations (Kable et al., 2002, 2005; Tranel et al., 2003; Kemmerer et al., 2008; Noppeney, 2008; Pulvermüller & Fadiga, 2010; Kemmerer & Gonzalez-Castillo, 2010; Damasio & Tranel, 1993). In the current study, we tested whether these neural signatures of vision-dependent concepts differ across congenitally blind and sighted people. The emergence of group differences would provide support for the idea that the sensory capacities of our bodies shape our conceptual representations. Alternatively, it is possible that neural signatures previously attributed to the visual dependence of concepts develop similarly in people born blind. This finding would provide evidence for the body-independence of concepts.

1.2. The current study: Comparing the neural basis of living things and light events across congenitally blind and sighted people

We compare the neural basis of two visual categories, living things and light events, across congenitally blind and sighted people. Vision is thought to provide a key source of information about these categories. Living things and light events also span a wide range of semantic types, from concrete entities to events, and thus together offer a broad perspective on the contribution of vision to the neural instantiation of concepts. Finally, as noted above, living things and light events are associated with distinctive and consistent neural signatures in sighted people. In other words, both categories dissociate neurally from other categories of concepts (i.e., activate distinctive regions of the brain and/or produce distinctive neural patterns of activity as measured by multivariate methods). One hypothesis is that such dissociations arise because vision plays a privileged role in the acquisition of these concepts (e.g., Warrington & Shallice, 1984; Tranel et al., 2003; cf. Caramazza, 1998). If so, we would expect to find some or all of these neural dissociations to be absent, weakened, or different (e.g., in neural location) in people born blind. Alternatively, if the same neural signatures are observed in congenitally blind people, this finding would provide strong support for the idea that visual experience is not central to their acquisition.

1.3.1. Neural responses to living things concepts in sighted people and predicted responses in blind people

In sighted people, words referring to living things elicit distinctive neural responses in temporoparietal semantic brain networks, particularly in the precuneus (PC; Fairhall & Caramazza, 2013a, 2013b; Fairhall et al., 2014; Peer et al., 2015; Wang et al., 2016; Silson et al., 2019; Rabini, Ubaldi, & Fairhall, 2021; Deen & Freiwald, 2022; Aglinskas & Fairhall, 2023). Responses to living things in the precuneus are elicited by words and images alike (Fairhall & Caramazza, 2013a; Fairhall et al., 2014; Kanwisher et al., 1997; Grill-Spector et al., 2004; Konkle & Caramazza, 2013; Connolly et al., 2016; Devlin et al., 2002; Noppeney et al., 2006; Mahon et al., 2009). This distinguishes the precuneus from lateral ventral occipito-temporal cortex (VOTC), which responds to images of living things but not to words referring to them (see Bi et al., 2016 for a review). Moreover, in sighted people, classifiers trained on patterns of neural activity in the precuneus generalize across images of living things and words referring to living things (Fairhall & Caramazza, 2013a), making these responses a good test case for comparing living things representations across blind and sighted people.

1.3.2. Neural responses to light event concepts in sighted people and predicted responses in blind people

The second ‘visual’ category examined in the current study is light events (e.g., ‘sparkle’). Unlike animal nouns or motion verbs (e.g., ‘clap’, ‘hit’), light events are only directly accessible through vision. Light events are situated in time and encoded in most languages, including English, by verbs (Talmy, 1975; Langacker, 1987; Frawley, 1992). Previous studies have found that event words elicit neural responses in the LMTG+ relative to words describing objects and properties (e.g., Kable et al., 2002, 2005; Davis et al., 2004; Bedny & Thompson-Schill, 2006). Whether the LMTG+ encodes modality-specific or modality-invariant conceptual representations has been debated. Early studies attributed responses to motion verbs in the LMTG+ to the retrieval of visual motion information (e.g., Damasio & Tranel, 1993; Martin et al., 1995; Kable et al., 2002; 2005). Subsequent work showed that people born blind also activate this region during motion verb comprehension, suggesting that it supports modality invariant representations (Noppeney et al., 2003; Bedny et al., 2012; Bottini et al., 2020). However, an alternative interpretation is that visual motion information represented in this region in sighted people is replaced by auditory or sensorimotor motion information in people born blind (e.g., the visual image of bouncing is replaced by the sound of bouncing) (Yee, Chrysikou, & Thompson-Schill, 2013; Yee, Jones, & McRae, 2017; see also Striem-Amit et al., 2018; Bi, 2021; Kiefer, Kuhnke, & Hartwigsen, 2023; Campbell & Bergelson, 2022). Relative to motion verbs, light verbs provide a stronger test of the contribution of visual experience to conceptual representation because they are directly accessible only through vision. If LMTG+ responses to light events are observed in congenitally blind people, this would suggest that such representations develop equivalently regardless of whether they are learned via direct sensory access.

1.3.4. Current experimental design

We compared ‘visual’ categories to multiple non-visual categories within the same general semantic class (entities/nouns vs. events/verbs). Among entity concepts, we compared ‘more visual’ living things (birds and mammals) to ‘less visual’ non-living things (manmade and natural places) (e.g., Warrington & Shallice, 1984; Warrington & McCarthy, 1987). Among event concepts, we compared visual light emission events (e.g., ‘sparkle’, ‘glow’) to non-visual events, including sound emission events (e.g., ‘beep’, ‘squeak’), hand actions (e.g., ‘prod’, ‘stroke’), and mouth actions (e.g., ‘slurp’, ‘lick’). Participants heard pairs of words from the same semantic category (e.g., ‘to sparkle, to glow’ or ‘the robin, the owl’) and judged their semantic similarity on a scale of 1 to 4. Pairing words within semantic categories ensured that participants could make detailed semantic judgments. We used individual-subject fMRI analysis (Fedorenko et al., 2010; Nieto-Castañón & Fedorenko, 2012) to identify previously established neural networks responsive to entities/nouns and events/verbs in each participant. Within these networks, we compared neural responses to ‘visual’ and non-visual concepts using univariate and multivariate approaches.

Methods

2.1. Participants

Twenty-one congenitally blind adults (13 females, 8 males; age range 18-67 years, M = 39.14 ± 13.81 SD) and twenty-two sighted age- and education-matched controls (16 females, 6 males; age range: 19-62 years, M = 37.55 ± 13.25 SD) participated in the study (Supplementary Table 1). Blind participants lost their sight due to pathologies of the eyes or optic nerve anterior to the optic chiasm (i.e., not due to brain damage), and had at most minimal light perception since birth. Throughout the experiment, all participants (sighted and blind) wore a light exclusion blindfold to match their visual input. Sighted and blind participants were screened for cognitive and neurological disabilities (self-report). Participants gave written informed consent and were compensated $30 per hour. The study was reviewed and approved by the Johns Hopkins Medicine Institutional Review Boards. Four additional blind participants were scanned but excluded from the final sample because they were older than 70 years of age (n=2), they were not blind since birth (n=1), or they gave similarity judgments different from those of the group (n=1, correlation with the group lower than 2.5 SDs from the average for both verbs and nouns).

2.2. Stimuli and procedure

While undergoing functional magnetic resonance imaging (fMRI), participants heard pairs of words and judged how similar the two words were in meaning on a scale from 1 (not at all similar) to 4 (very similar), indicating their responses via button press. Word stimuli fell into 1 of 2 grammatical classes (entities/nouns, events/verbs), facilitating our investigation of ‘visual’ categories spanning both classes (i.e., animal nouns, light verbs). Within these classes, words were further divided into 4 categories (entities/nouns: birds, e.g., ‘the crow’; mammals, e.g., ‘the fox’; manmade places, e.g., ‘the barn’; natural places, e.g., ‘the swamp’; events/verbs: light emission, e.g., ‘to sparkle’; sound emission, e.g., ‘to squeak’; hand-related actions, e.g., ‘to pluck’, mouth-related actions, e.g., ‘to bite’) (Figure 1, Table 1, see Appendix 1 for full list of stimuli). These categories captured a wide range of semantic categories within each grammatical class, including categories for which visual information is thought to play a comparatively less important role (e.g., places, sound emission). Words were matched across several variables, including number of syllables and familiarity (see Elli et al., 2019 and Appendix 2 for details). Word pairs were presented in blocks of 4 and were grouped by semantic category within blocks. Each word appeared once within a block. Blocks were 16 s long and were separated by 10 s of rest. The experiment included a total of 144 blocks evenly divided into 8 runs.

Figure 1:


In-scanner semantic similarity judgments across sighted and blind participants. (A) Item-wise correlations (Spearman’s ρ) between blind and sighted average group ratings. Confidence intervals (95%) are indicated via shading. (B) Leave-one-out within-group correlations (Spearman’s ρ). Error bars: ± standard error of the mean.

Table 1:

Example stimuli from each category.

Entities / Nouns
  Animals
    Birds: the crow – the dove; the goose – the owl
    Mammals: the fox – the lion; the giraffe – the hippo
  Places
    Manmade: the barn – the garage; the shrine – the temple
    Natural: the swamp – the bay; the canyon – the crater
Events / Verbs
  Actions
    Hand: to prod – to pluck; to stroke – to pummel
    Mouth: to gnaw – to bite; to slurp – to lick
  Emissions
    Light: to glow – to sparkle; to shine – to flash
    Sound: to beep – to ring; to squeak – to bang

Our experimental design enabled us to perform multivariate pattern analysis (MVPA) of neural responses to each category. Whereas univariate analysis measures the magnitude of neural activity in different experimental conditions, multivariate analysis measures the distinctiveness of activity patterns across conditions, offering a more sensitive approach. Because multivariate analysis can capture differences in representational content that do not alter overall response magnitude, it is well suited to asking what information a region encodes. In the current study, we use multivariate analysis (linear classification) to ask whether entity/noun categories and event/verb categories are differentially represented in blind and sighted participants’ brains. To facilitate such analysis, we created two non-overlapping subsets of words that were presented exclusively in either even or odd runs. This enabled us to train linear classifiers on neural responses to one set of words and test the classifiers on neural responses to a different set, ensuring that any above-chance classification reflects differences in the neural patterns associated with semantic categories rather than with particular word forms. Words in each semantic category were divided into two non-overlapping sets of 9 words. Within each set, we created all possible within-category pairs (e.g., ‘the seagull, the parrot’; 36 pairs per set per category). There were no cross-category pairs.
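The cross-set scheme described above can be sketched as follows. This is a minimal illustration: the placeholder word list and the function names are hypothetical, not the study’s stimuli or code.

```python
import itertools

def split_word_sets(words, n_per_set=9):
    """Split a category's 18 words into two non-overlapping sets of 9,
    one presented only in even runs and one only in odd runs."""
    assert len(words) == 2 * n_per_set
    return words[:n_per_set], words[n_per_set:]

def within_category_pairs(word_set):
    """All unordered pairs within one set (9 words -> 36 pairs)."""
    return list(itertools.combinations(word_set, 2))

# hypothetical placeholder items standing in for one category's 18 words
birds = [f"bird_{i:02d}" for i in range(18)]
even_set, odd_set = split_word_sets(birds)
pairs = within_category_pairs(even_set)
```

Because pairs are formed only within a set, a classifier trained on even-run patterns never sees the word forms it is tested on in odd runs.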

2.3. Behavioral data analysis

Due to a response box malfunction, 19/21 blind and 19/22 sighted participants contributed to behavioral data analysis. In-scanner similarity judgments were first standardized (z-scored: mean = 0, SD = 1) within each participant to account for individual differences in Likert scale use, and then rescaled within grammatical class (i.e., events/verbs, entities/nouns) to a [0,1] range (i.e., x′ = (x − x_min) / (x_max − x_min)) within each participant. To assess agreement in semantic judgments within and across blind and sighted groups, we correlated item-wise ratings within each semantic category using Spearman’s rank correlations (ρ). This analysis asks whether blind and sighted participants agree regarding which pairs within a semantic category are most similar in meaning (see Appendix 3 for details).
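The two-step normalization and the between-group item-wise correlation can be sketched as below, assuming NumPy/SciPy. The rating values and variable names are hypothetical; Spearman’s ρ is rank-based, so the rescaling affects comparability of values but not the correlation itself.

```python
import numpy as np
from scipy.stats import spearmanr

def normalize_ratings(ratings):
    """Z-score within participant, then min-max rescale to [0, 1]:
    x' = (x - x_min) / (x_max - x_min)."""
    z = (ratings - ratings.mean()) / ratings.std()
    return (z - z.min()) / (z.max() - z.min())

# hypothetical item-wise group-average ratings for one semantic category
blind_avg = np.array([3.1, 1.2, 2.8, 3.9, 1.5, 2.2])
sighted_avg = np.array([3.0, 1.0, 2.5, 4.0, 2.6, 2.0])

# between-group agreement: do the groups rank item pairs similarly?
rho, p = spearmanr(normalize_ratings(blind_avg), normalize_ratings(sighted_avg))
```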

2.4. fMRI data acquisition and preprocessing

MRI structural and functional data of the whole brain were collected using a 3 Tesla Philips scanner with a 32-channel head coil. We collected T1-weighted 3D-MPRAGE structural images using a pulse sequence in 170 sagittal slices with 1 mm isotropic voxels (TE/TR=7.0/3.2 ms, FoV=240x240 mm, 288x272 acquisition matrix, scan duration=5:59). We collected T2*-weighted functional BOLD images using parallel transverse ascending echo planar imaging (EPI) sequences in 36 axial slices with 2.5 x 2.5 x 2.5 mm voxels (TE/TR=30/2000 ms, FoV=192x172 mm, 76x66 acquisition matrix, 0.5 mm gap, flip angle=70º, scan duration=8:04).

Data were analyzed using FSL, FreeSurfer, the Human Connectome Project Workbench, and custom in-house software written in Python (Dale, Fischl, & Sereno, 1999; Smith et al., 2004; Glasser et al., 2013). Functional data were motion corrected using FSL’s MCFLIRT algorithm (Jenkinson et al., 2002), high-pass filtered to remove signal fluctuations with periods longer than 128 seconds/cycle, mapped to the cortical surface using FreeSurfer, spatially smoothed on the cortical surface (6 mm FWHM Gaussian kernel), and prewhitened to remove temporal autocorrelation. Covariates of no interest were included to account for confounds related to white matter, cerebrospinal fluid, and motion spikes.

2.5. fMRI data analysis

2.5.1. Univariate analysis

Univariate analyses were used to test whether regions previously associated with animal nouns (i.e., the precuneus) and light verbs (i.e., the LMTG+) exhibit characteristic category-specific responses in the absence of visual experience. Each of the entity/noun and event/verb categories was entered as a separate predictor in a general linear model (GLM) after convolving with a canonical hemodynamic response function and its first temporal derivative. Each run was modeled separately, and runs were combined within-subject using a fixed-effects model (Dale et al., 1999; Smith et al., 2004). Group-level random-effects analyses were corrected for multiple comparisons across the whole cortex at p < .05 family-wise error rate (FWER) using a nonparametric permutation test (cluster-forming threshold p < .01 uncorrected) (Winkler et al., 2014; Eklund, Nichols, & Knutsson, 2016; Eklund, Knutsson, & Nichols, 2019).
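The core of the first-level GLM step — convolving a block regressor with a canonical hemodynamic response function (HRF) and estimating betas by least squares — can be sketched as follows. This is a simulation with an illustrative double-gamma HRF, not FSL’s exact canonical parameters, and the data are synthetic.

```python
import numpy as np
from scipy.stats import gamma

def hrf(t):
    """Illustrative double-gamma HRF (a stand-in for the canonical HRF)."""
    return gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)

TR, n_vols = 2.0, 120
t = np.arange(0, 32, TR)             # 32-s HRF kernel sampled at the TR
boxcar = np.zeros(n_vols)
boxcar[10:18] = 1                    # one 16-s block (8 volumes at TR = 2 s)
regressor = np.convolve(boxcar, hrf(t))[:n_vols]

# design matrix: condition regressor plus intercept; betas via least squares
X = np.column_stack([regressor, np.ones(n_vols)])
y = 0.8 * regressor + np.random.default_rng(0).normal(0, 0.1, n_vols)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # beta[0] recovers ~0.8
```

In the actual analysis, one such regressor (plus its temporal derivative) enters the model per semantic category per run.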

2.5.2. ROI definition

We defined regions of interest in each participant to enable individual-subject analyses of responses to ‘visual’ vs. ‘non-visual’ categories. Regions of interest were defined within cortical areas (search spaces) previously shown to respond to entity and event concepts in sighted people (see Crepaldi et al., 2013, for a review). Within these noun- and verb-responsive areas, we compared responses to ‘visual’ concepts across blind and sighted people. These areas also responded more to nouns vs. verbs, or vice versa, in the whole-cortex analysis (p < .05 uncorrected) in the current study. We defined 4 entity/noun-preferring search spaces: left precuneus (LPC), left inferior parietal lobule (LIP), left lateral inferior temporal cortex (LlatIT), and left medial ventral temporal cortex (LmedVT); and 1 event/verb-preferring search space: left middle temporal gyrus/inferior parietal cortex (LMTG+) (Supplementary Figure 5). Within these search spaces, we defined functional ROIs for each participant.

Although the left inferior frontal gyrus also responded more to events than entities in the current study, we previously found that it showed weak and category-invariant decoding in sighted adults (Elli et al., 2019); therefore, we did not use this ROI.

Search spaces were first defined in the blind and sighted groups separately and then combined across groups, such that each search space (e.g., blind LPC + sighted LPC) included all voxels responding more to events or entities in either group. This inclusive procedure avoids omitting above-threshold activation present in only one group. Next, we defined individual-subject ROIs within each search space by selecting each participant’s top 300 active vertices for the events/verbs>entities/nouns (verb ROI) or entities/nouns>events/verbs (noun ROIs) contrasts (see Appendix 4 for details).
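The top-300-vertex selection can be sketched as follows. This is a minimal illustration with simulated vertex-wise t-values; `define_froi` and all inputs are hypothetical stand-ins for the study’s surface-based data.

```python
import numpy as np

def define_froi(contrast_t, search_space, n_vertices=300):
    """Select a participant's top-N vertices for a contrast
    (e.g., events/verbs > entities/nouns) within a search space."""
    t = np.where(search_space, contrast_t, -np.inf)  # exclude vertices outside the space
    top = np.argsort(t)[-n_vertices:]                # indices of the N largest t-values
    roi = np.zeros(search_space.shape, dtype=bool)
    roi[top] = True
    return roi

rng = np.random.default_rng(1)
t_map = rng.normal(size=5000)        # hypothetical vertex-wise contrast t-values
mask = rng.random(5000) < 0.2        # hypothetical search space (~1000 vertices)
roi = define_froi(t_map, mask)       # 300-vertex functional ROI inside the mask
```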

Following past work demonstrating occipital activation during language processing in blind individuals (Röder et al., 2002; Amedi et al., 2003; Bedny et al., 2011, 2012, 2015; Lane et al., 2015), we additionally defined two ROIs in occipital cortex in each participant: left and right V1-V2 (BA17-18) from the PALS-B12 Brodmann area atlas included in FreeSurfer (Van Essen, 2005).

2.5.3. MVPA ROI analysis

We used MVPA (PyMVPA toolbox; Hanke et al., 2009) to assess the extent to which patterns of activity in entity- and event-responsive ROIs distinguish between entity categories and between event categories.

For each ROI in each participant, we trained a linear support vector machine (SVM) classifier to separately decode among the 4 event categories and the 4 entity categories (chance 25%). We submitted to this analysis the z-scored beta parameter of the GLM associated with each vertex for each semantic category in each run (2 grammatical classes * 4 categories per class = 8 total observations per vertex per run) (see Appendix 5 for details). Within each of the entity- and event-responsive ROIs, we used one-tailed Student’s t-tests to test the classifier’s accuracy against chance (25%), and two-tailed independent samples Student’s t-test to compare the accuracy for events and entities. We used repeated measures ANOVAs to test for interactions between groups, ROIs, and grammatical class (entities/events). We evaluated significance using a combined permutation and bootstrapping approach (Schreiber & Krekelberg, 2013; Stelzer, Chen, & Turner, 2013) (see Appendix 5 for details). The same approach was used to assess the statistical significance of decoding accuracies for entity categories and event categories within the two occipital ROIs.
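The decoding step can be sketched as below, using scikit-learn’s LinearSVC as a stand-in for the PyMVPA classifier and a simplified leave-one-run-out scheme rather than the study’s exact even/odd word-set split. All data are simulated: the ‘signal plus noise’ structure is an assumption for illustration only.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_runs, n_categories, n_vertices = 8, 4, 300

# hypothetical z-scored GLM betas: one pattern per category per run,
# simulated as a fixed category 'signal' plus run-specific noise
signal = rng.normal(size=(n_categories, n_vertices))
X = np.vstack([signal + 0.5 * rng.normal(size=signal.shape)
               for _ in range(n_runs)])               # shape (32, 300)
y = np.tile(np.arange(n_categories), n_runs)          # category labels per row

# leave-one-run-out cross-validation (chance = 25% for 4 categories)
accuracies = []
for test_run in range(n_runs):
    test_idx = np.arange(test_run * n_categories, (test_run + 1) * n_categories)
    train_idx = np.setdiff1d(np.arange(len(y)), test_idx)
    clf = LinearSVC(dual=False).fit(X[train_idx], y[train_idx])
    accuracies.append(clf.score(X[test_idx], y[test_idx]))
mean_acc = float(np.mean(accuracies))
```

Mean accuracy across folds is then compared against the 25% chance level, here with t-tests and permutation/bootstrapping in the actual analysis.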

Next, to evaluate how well the classifier performed on pairwise distinctions among entities (e.g., birds vs. mammals) and among events, we inspected the confusion matrices generated by the classifier. The confusion matrices contain the classification and misclassification frequencies for any pair of categories, which can be compared using a signal detection theory framework (Swets, Tanner, & Birdsall, 1961; Green & Swets, 1966; Haxby, Connolly, & Guntupalli, 2014). We assessed the discriminability between 1) animals vs. places within entity-responsive ROIs and 2) light events vs. all other event categories in the LMTG+ by computing A′, a nonparametric estimate of discriminability (Pollack & Norman, 1964; Grier, 1971; Stanislaw & Todorov, 1999). An A′ of 0.5 corresponds to chance performance, whereas an A′ of 1 indicates perfect discriminability. Because A′ values did not follow a normal distribution, we used one-sample Wilcoxon signed rank tests to compare A′ values to chance performance, and a repeated measures permutation ANOVA (5,000 permutations) using the permuco package in R (Frossard & Renaud, 2021) to test for interactions between groups, ROIs, and classification error type in entity-responsive brain regions. Wilcoxon signed rank tests report the test statistic V, the sum of the ranks of the observations that exceed chance level.
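Grier’s A′ can be computed directly from hit and false-alarm rates read off the confusion matrix; a minimal sketch (the example rates are hypothetical, not the study’s results):

```python
def a_prime(hit_rate, fa_rate):
    """Grier's (1971) nonparametric discriminability index A'.
    0.5 corresponds to chance; 1.0 to perfect discrimination."""
    h, f = hit_rate, fa_rate
    if h >= f:
        return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))
    return 0.5 - ((f - h) * (1 + f - h)) / (4 * f * (1 - h))

# hypothetical rates: light events classified as 'light' 60% of the time,
# other events misclassified as 'light' 20% of the time
print(round(a_prime(0.6, 0.2), 3))  # -> 0.792
```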

Results

3.1. Behavioral results

Between-group agreement

Semantic similarity judgments made by blind and sighted people were significantly correlated across groups for every semantic category. Some categories were more similar across groups than others (Figure 1A): between-group similarity was highest for mouth events (ρ=0.93) and lowest for birds (ρ=0.60) and mammals (ρ=0.68). Between-group similarity was lower for animal nouns (bird, mammal) than for place nouns (manmade, natural) (animal nouns: ρ=0.70, 95% CI = [0.61, 0.78]; place nouns: ρ=0.86, 95% CI = [0.81, 0.90]). Light events, the only purely visual category, showed between-group agreement comparable to the other event/verb types (light events: ρ=0.85, 95% CI = [0.77, 0.90]; mouth, hand, and sound events: ρ=0.88, 95% CI = [0.85, 0.91]).
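A between-group agreement statistic of this kind, with a confidence interval, can be sketched as a Spearman correlation over matched pairwise similarity ratings, with a percentile bootstrap over word pairs. The ratings below are simulated placeholders; the bootstrap scheme is one plausible way to obtain the reported 95% CIs, not necessarily the authors' exact procedure.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# Hypothetical group-averaged similarity ratings for the same set of
# word pairs (e.g., all within-category pairs), one value per pair.
blind_ratings = rng.uniform(1, 7, size=50)
sighted_ratings = blind_ratings + rng.normal(0, 1, size=50)  # correlated by construction

rho = spearmanr(blind_ratings, sighted_ratings)[0]

# Percentile bootstrap 95% CI, resampling word pairs with replacement.
n = len(blind_ratings)
boot = [spearmanr(blind_ratings[idx], sighted_ratings[idx])[0]
        for idx in (rng.integers(0, n, n) for _ in range(2000))]
ci_low, ci_high = np.percentile(boot, [2.5, 97.5])
print(f"rho={rho:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```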

Within-group agreement

Blind and sighted participants showed significant within-group agreement for all categories (blind: nouns ρ=0.23 ± 0.24 SD, verbs ρ=0.33 ± 0.28 SD; sighted: nouns ρ=0.42 ± 0.26 SD, verbs ρ=0.5 ± 0.24 SD). Overall, there was lower agreement among blind participants than among sighted participants for both entities/nouns and events/verbs (entities/nouns: main effect of group, F(1,37)=8.99, p=.005; events/verbs: main effect of group, F(1,37)=5.65, p=.02). An ANOVA comparing within-group agreement across entity/noun categories revealed a marginal group by semantic category interaction (group x entity semantic category interaction, F(3,111)=2.24, p=0.09; Figure 1B). Post-hoc Tukey-adjusted pairwise comparisons revealed a significant difference between groups only for mammals (Figure 1B, blind ρ=0.22 ± 0.24; sighted ρ=0.49 ± 0.24). No group by semantic category interaction was observed in within-group agreement for events/verbs (F(3,111)=1.46, p=0.23).

Average similarity ratings

People born blind rated entities/nouns as more similar to each other than sighted people did (Supplementary Figure 1A; repeated measures ANOVA, 2 groups (sighted, blind) x 4 noun semantic categories (birds, mammals, manmade pl., natural pl.): main effect of group, F(1,37)=7.47, p=0.01). This effect was qualified by a marginal group by semantic category interaction (F(3,111)=2.57, p=0.06), whereby the group difference was more pronounced for birds and mammals (Supplementary Figure 1A).

For events/verbs, there were no significant group or group by condition interaction effects in average similarity ratings (all ps > 0.1; Supplementary Figure 1A; see Appendix 6 for details).

Reaction times

There were no group or group by condition interaction effects in reaction time data among entities/nouns or events/verbs (all ps > 0.5; Supplementary Figure 1B; see Appendix 6 for details). Across both groups, participants were faster to make judgments about animals (birds, mammals) compared to places (manmade, natural; repeated measures ANOVA, 2 groups (sighted, blind) x 4 entity/noun categories (birds, mammals, manmade places, natural places): main effect of semantic category, F(3,111)=7.79, p<0.0001). Participants across groups were also faster to make judgments about mouth actions compared to all other event/verb categories (repeated measures ANOVA, 2 groups (sighted, blind) x 4 event/verb categories (hand, mouth, light, sound): main effect of semantic category, F(3,111)=7.36, p=0.0002).

In sum, subtle differences in behavioral judgments were observed across groups for living things (i.e., animal nouns), a partially vision-dependent category, but not for light events, an entirely vision-dependent category. These findings suggest that visually acquired knowledge is used by sighted people to judge similarity between some ‘visual’ categories, i.e., animals. However, direct sensory access is not necessary for acquiring typical meanings of sensory categories, i.e., light events.

3.2. fMRI results

3.2.1. Do selective univariate responses to living things concepts emerge in the absence of visual experience?

We observed similar neural signatures of living things concepts across groups. In both sighted and blind participants, animal nouns (birds and mammals) activated a sub-region of the PC more than place nouns (Figure 2, animals > places). The animal response observed in the blind group was in an analogous location to previously reported responses to living things (i.e., people) in the PC of sighted participants (e.g., Fairhall et al., 2013b). This result suggests that the emergence of a preferential response to living things concepts in the PC does not require vision.

Figure 2:

Whole-cortex results for animals>places: (A) Sighted; (B) Blind. Group maps are shown at p<0.01 with FWER cluster-correction for multiple comparisons. Voxels are color coded on a scale from p=0.01 to p=0.00001. The average PPA location from a separate cohort of sighted subjects (Weiner et al., 2017) is overlaid on the place noun response observed in the current study. The two overlap in both groups, with the focus of the place noun response located more anteriorly. The average people-preferring precuneus location from a separate cohort of sighted subjects (Fairhall & Caramazza, 2013b) is overlaid on the animal response observed in the current study. These also overlap in both blind and sighted participants. Increased activation for animals over places is observed in the left precuneus in sighted participants at a lower statistical threshold (p < 0.05, uncorrected). See Supplementary Figure 3 for full whole-cortex results.

Consistent with prior findings, preferential responses to place nouns over animal nouns were also observed in sighted participants on the medial surface, in the retrosplenial complex, inferior to the responses to animal nouns. A similar response to place nouns was observed at a more lenient statistical threshold in the blind group (p<0.01, uncorrected). This retrosplenial complex region has previously been identified as part of the ‘place’ processing network in sighted participants (Ino et al., 2002; Rauchs et al., 2008; Epstein, 2008; Dilks et al., 2022). In both groups, preferential responses to places were also observed in medial VOTC, near but anterior to the canonical location of the parahippocampal place area (PPA) (Weiner et al., 2017), although this response was weaker and more distributed in the blind group, extending into early visual cortices (group-by-condition interaction, Figure 5B).

Figure 5:

Group-by-condition interactions of univariate responses in occipital cortices. Group maps are shown p<0.01 with FWER cluster-correction for multiple comparisons. Voxels are color coded on a scale from p=0.01 to p=0.00001. (A) Peak percent signal change averaged across all occipital regions in which group-by-grammatical class (events vs. entities) interactions were observed. (B) Peak percent signal change in occipital regions in which group-by-entity category (animals vs. places) interactions were observed (occipital pole, anterior medial VTC).

Both groups also exhibited preferential univariate responses to entities/nouns over events/verbs in parietal and temporal regions previously associated with concrete entities, including the posterior parietal, lateral inferior temporal, and medial occipitotemporal cortices, as well as the PC (Supplementary Figure 3).

In sum, selective responses to living things were observed in temporoparietal networks of congenitally blind and sighted participants alike, particularly in the precuneus. Neural specialization for ‘living things’ concepts, a putatively visual category, develops with and without vision.

3.2.2. Multivariate decoding of animals vs. places throughout entity-responsive network in blind and sighted groups

MVPA revealed that animals were robustly discriminable from places throughout entity-responsive regions in both sighted and blind participants (all ps < .05), including in PC (sighted: V = 210, p = 0.0004, blind: V = 173, p = 0.007; see Supplementary Figure 5 and Supplementary Table 3 for results in each ROI), although the sighted group exhibited higher discriminability overall (repeated measures ANOVA, 2 groups (sighted, blind) x 4 ROIs (LPC, LIP, LlatIT, LmedVT): main effect of group F(1,41)=37.30, permuted p = 0.0002; main effect of ROI F(3,164)=1.55, permuted p = 0.2).

Inspection of the confusion matrices showed that in both groups, neural patterns for birds were more likely to be confused with mammals than with places (repeated measures ANOVA, 2 groups (sighted, blind) x 2 error types (bird-mammal, bird-place) x 4 ROIs (LPC, LIP, LlatIT, LmedVT): main effect of error type F(1,82)=32.04, permuted p = 0.0002). This effect was qualitatively similar but smaller in the blind group (error type x group interaction F(3,246)=10.60, permuted p = 0.003). Similarly, mammals were more likely to be confused with birds than with places (repeated measures ANOVA, 2 groups (sighted, blind) x 2 error types (mammal-bird, mammal-place) x 4 ROIs (LPC, LIP, LlatIT, LmedVT): main effect of classifier error type F(1,82)=28.25, permuted p = 0.0002), and this effect was qualitatively similar across groups but smaller in the blind group (error type x group interaction F(3,246)=23.92, permuted p = 0.0002). These results suggest that the characteristic features of the dissociation between animals and places in temporoparietal semantic networks develop without visual access.

Together, the univariate and the multivariate evidence suggests that neural representations of living things concepts, a partially ‘vision-dependent’ category, develop qualitatively similarly regardless of visual experience.

3.2.3. Responses to visual light events in LMTG+ across blind and sighted people

There were no differences across blind and sighted groups in the LMTG+’s response to light events or any other event category in univariate analysis (Figure 3; individual-subject ROI analysis, repeated measures ANOVA, 2 groups (sighted, blind) x 4 event categories (hand, mouth, light, sound): group x event category interaction, F(3,123)=1.14, p=0.34; main effect of group, F(1,41)=0.06, p=0.81; main effect of semantic category, F(3,123)=7.16, p=0.0002). In other words, the LMTG+ of both groups showed a robust response to light events that was higher than the response to entities/nouns. Multivariate analysis revealed that spatial patterns of neural activity in LMTG+ distinguish between different types of events (i.e., light, sound, hand, and mouth) and this is equally true for blind (t(20)=3.91, permuted p=0.0004) and sighted (t(21)=3.88, permuted p=0.0003) participants. There were no differences in decoding accuracy between the groups (repeated measures ANOVA, 2 groups (sighted, blind): main effect of group, F(1,41)=0.94, p=0.34; Supplementary Figure 4). Neural populations in the LMTG+ are therefore sensitive to semantic distinctions between event categories in both sighted and blind people.

Figure 3:

Whole-cortex results for events/verbs > entities/nouns on the left lateral surface: (A) Sighted; (B) Blind. Group maps are shown at p<0.01 with FWER cluster-correction for multiple comparisons. Voxels are color coded on a scale from p=0.01 to p=0.00001. (C) Peak percent signal change (PSC) from the 5% most active vertices for events/verbs>entities/nouns in the LMTG+ (left: sighted; right: blind). Note that this figure can be used to evaluate differences among events and among entities in the LMTG+ ROI, as well as differences between groups in entity/event responses. This figure cannot be used to evaluate within-group differences between events and entities because the ROIs were defined as the most event-selective vertices; thus the difference between events and entities may be exaggerated due to statistical bias. See Supplementary Figure 3 for full whole-cortex results.

We next looked at light events in greater detail because of their distinctly visual nature. Light events were distinguishable from hand events in both groups (blind: V = 158, p = 0.0009; sighted: V = 170, p = 0.0001). In the blind group, light events were also distinguishable from both sound emission events (V = 92, p = 0.007) and mouth actions (V = 127, p = 0.009). In the sighted group, light events were not distinguishable from sound emission events (V = 86, p = 0.65) and were marginally distinguishable from mouth actions (V = 113, p = 0.046). To probe the ‘representational space’ of the LMTG+ across groups, we constructed confusion matrices based on classifier error patterns; these matrices index which event categories have the most similar representations. Consistent with the idea that the LMTG+ of blind and sighted people shares a similar representational space, the confusion matrices for the blind and sighted groups were significantly correlated (Figure 4B; r(30)=0.55, p=0.03).

Figure 4:

Classifier responses and confusion matrices for entity categories in the LPC (A) and for event categories in the LMTG+ (B). Bar graphs display the correct responses and errors for classification of animals vs. places (LPC) and light vs. all other event categories (LMTG+) within each participant group. Note that the two lightest bars reflect the number of errors made in both directions (e.g., “light-sound” = mean of light (real) – sound (predicted) and sound (real) – light (predicted)). Chance: 25%. Confusion matrices (columns = real, rows = predicted) display the percentage of correct responses (diagonals) and errors (off-diagonals) for classification of the relevant categories in each ROI. See Supplementary Figure 5 for results from all ROIs. Key: Mml = mammal, ManP = manmade place, NatP = natural place.

In the blind group, the LMTG+ was the only region that showed higher decoding for verbs than nouns (t(20)=−2.68, permuted p=0.01), providing evidence for LMTG+ selectivity for events in this population. In the sighted group, decoding for verbs and nouns was not different in the LMTG+ (t(21)=−0.28, permuted p=0.78), whereas it was higher for nouns in LPC and LmedVT (Supplementary Figure 4; Supplementary Table 4). A 3-way repeated measures ANOVA (2 groups (sighted, blind) x 5 ROIs (LMTG+, LPC, LIP, LlatIT, LmedVT) x 2 grammatical classes (entities/nouns, events/verbs)) revealed an ROI x grammatical class interaction but no 3-way interaction with group (two-way ROI x grammatical class interaction, F(4,164)=6.40, permuted p<0.0001; 3-way interaction F(4,164)=1.31, permuted p=0.26). This result suggests that entity/noun and event/verb selectivity develop similarly across the cortex regardless of visual experience.

In sum, we find that neural signatures of light events are similar across congenitally blind and sighted people. In both sighted and blind participants, the LMTG+ responds more to events than entities and distinguishes among different semantic categories of events, including light events and other event types. Thus, the neural basis of a conceptual category that is only directly accessible through vision, i.e., light events, develops similarly in people with and without direct sensory access. These results suggest that vision is not necessary for the emergence of category-specific neural responses to ‘visual’ events.

3.2.4. Responses to words in occipital networks of blind and sighted people

Prior studies have identified responses to sentences and words in the occipital cortices of congenitally blind people as well as some sensitivity to properties of spoken words in sighted people (e.g., Sadato et al., 1996, 1998; Burton, 2002; Burton et al., 2003; Röder et al., 2002; Amedi et al., 2003; Bedny et al., 2011; Lane et al., 2015; Seydell-Greenwald et al., 2020). These results have been described as evidence for ‘cross-modal plasticity,’ i.e., the recruitment of visual networks for non-visual tasks. Consistent with this prior work, in the current study, group differences emerged exclusively within occipital cortices.

First, and consistent with prior work, sighted participants showed either deactivation or activity that was not different from rest for all word categories in early occipital cortices (e.g., Bottini et al., 2020). Numerous previous studies find deactivation in visual cortices of sighted people during attentive cross-modal auditory and tactile tasks, a response pattern thought to be related to the suppression of irrelevant information from the visual modality (e.g., Hairston et al., 2008; Kawashima et al., 1995; Murphy et al., 2016; Laurienti et al., 2002). A similar phenomenon is likely to account for the suppression of activity in visual cortex of sighted people while listening to words in the current study. By contrast, the congenitally blind group showed above-rest responses to entity and event words in several early occipital areas, consistent with prior evidence that visual cortices participate in spoken language tasks in this population (Figure 5).

Second, blind and sighted groups showed different preferences across semantic categories in occipital cortex, in line with the idea that the functions supported by visual areas are different across blind and sighted populations. Importantly, however, group differences did not pattern with the ‘visual’ status of the categories. As discussed above, an anterior PPA-like medial VOTC region showed a preference for places in sighted and blind people, but a larger effect was observed in the sighted group. The place-preferring activation of blind participants was more diffuse, extending into posterior early occipital ‘visual’ networks. In blind participants, a left-lateralized network of early visual areas exhibited increased responses to places over animals, with above-baseline responses observed for both categories (Figure 5B). We suggest that this finding reflects the posterior expansion of semantic place responses into early occipital networks in blind people. By contrast, the same early visual areas of sighted participants exhibited deactivation for both animals and places.

We failed to find any evidence for enhanced responses to light events or living things (i.e., animals) in early visual cortex of sighted people compared to blind people (Supplementary Figure 6). Although concrete nouns are generally more imageable than verbs (see Appendix 2 for imageability ratings of stimuli used in the current study), sighted participants exhibited greater deactivation for nouns compared to verbs in a network of right-lateralized early visual areas (medial, ventral, and dorsal surfaces of the occipital pole; Figure 5, see also Figure 2). By contrast, blind participants exhibited equivalent above-baseline activity for both nouns and verbs in these regions (Figure 5A). Differential responses of early visual cortices to spoken words across blind and sighted people is consistent with prior evidence of plasticity in this population (e.g., Röder et al., 2002; Bedny et al., 2011; Collignon et al., 2013; see Pascual-Leone et al., 2005; Merabet & Pascual-Leone, 2010; Bedny, 2017 for reviews).

Multivariate decoding among semantic categories in early visual networks was weak in blind and sighted people alike (Supplementary Figure 6). In early visual regions defined using a Brodmann area atlas (V1-V2; BA17-18), we observed above-chance decoding exclusively in the right hemisphere of blind participants (decoding of entity categories: t(20)=2.51, permuted p=0.009; event categories: t(20)=2.33, permuted p=0.01). Thus, despite the fact that early visual cortices showed above-rest univariate responses to events and entities in blind people, these regions do not robustly encode finer-grained distinctions among semantic categories. We found marginal decoding among entities in the left hemisphere of sighted participants (V1-V2; BA17-18 entity categories: t(21)=1.56, permuted p=0.07; event categories: t(20)=0.45, permuted p=0.34).

In sum, responses in early visual networks of blind and sighted people were not related to the ‘visual’ status of the stimuli. We failed to find any evidence that ‘visual’ words (animal nouns or light events) activate visual cortices in sighted but not blind people. Responses of early visual networks to spoken words were therefore not predicted by whether an individual had accessed the referents of the words through vision.

Discussion

4.1. Semantic similarity judgments for ‘visual’ words across groups

Consistent with prior evidence that people born blind have rich ‘visual’ semantic knowledge, similarity judgments were positively correlated across groups for all semantic categories, including ‘visual’ ones (e.g., Marmor, 1978; Landau & Gleitman, 1985; Shepard & Cooper, 1992; Lenci et al., 2013; Saysani et al., 2018). In line with the claim that vision plays an important role in learning about living things, semantic similarity judgments of blind and sighted people differed more for birds and mammals than for places (e.g., ‘barn’, ‘garage’) (i.e., slightly lower correlations between groups and higher similarity judgments on average for birds and mammals among blind people) (Allport, 1985; Warrington & Shallice, 1984; Warrington & McCarthy, 1987; Farah & McClelland, 1991; Gaffan & Heywood, 1993; Moss et al., 1997; Tranel et al., 1997; Humphreys & Forde, 2001; see Bi et al., 2016, for related arguments). One prior study also found that animal appearance knowledge differs partially across sighted and blind people (Kim et al., 2019). In particular, blind and sighted people’s judgments about animal shape, size, and texture are overlapping but not identical, and labels of animal colors differ across groups (Kim et al., 2019). Together with this prior evidence, the results of the current study suggest that for sighted people, visually derived information about the surface features of animals influences semantic similarity judgments (see Connolly et al., 2007; Kim et al., 2021 for related evidence with regard to fruits and vegetables).

In contrast to living things, which can in principle be accessed through non-visual modalities (e.g., touch, audition), light events (e.g., ‘sparkle’) are directly accessible only through vision. We might therefore expect judgments about light verbs to differ even more across blind and sighted people. Contrary to this prediction, we found that semantic similarity judgments for light events were just as correlated across blind and sighted groups as judgments about non-visual events/verbs. This result corroborates prior behavioral studies that report similar judgments for light event concepts across blind and sighted people (Lenci et al., 2013; Bedny et al., 2019). In sum, these results suggest that shared sensory experience of a concept’s referent does not predict shared semantic knowledge as measured by semantic similarity judgments. If it did, we would expect larger differences between groups in judgments about light events than in judgments about living things.

One factor that could influence the degree to which shared sensory experience influences semantic similarity judgments is the availability of other shared non-visual information that could be used to make the same judgments. Prior evidence suggests that the degree to which semantic judgments of animals are influenced by appearance knowledge varies among sighted people as a function of ecological expertise. While sighted adults living in industrialized societies rely on surface-level visual appearance when judging the semantic similarity of living things (animals and plants), people with more biological expertise (e.g., members of cultural groups that live in closer contact with nature) tend to rely more on abstract causal information such as behavioral and ecological patterns (Murphy & Medin, 1985; Boster & Johnson, 1989; López et al., 1997; Proffitt, Coley, & Medin, 2000; Bailenson et al., 2002; Medin & Atran, 2004). The participants in the current study were mostly recruited from the urban environment of Baltimore, although we did not measure their ecological expertise. Subtle differences in semantic similarity judgments about birds and mammals across blind and sighted urbanites could partly reflect the fact that the average sighted U.S. city-dweller knows little else about what distinguishes a sparrow from a finch besides what they look like. Future work comparing blind and sighted people with different levels of animal expertise could resolve this question. Another factor that may influence the coherence of similarity judgments across blind and sighted groups is the degree to which the ‘sensory’ information in question can be readily learned through other sources, such as language. Semantic distinctions among light verbs are arguably low dimensional: light emission verbs fall along dimensions of intensity and periodicity (Faber & Usón, 1999), whereas differences between the shapes, colors, and sizes of birds are complex and seemingly arbitrary, potentially making these features harder to acquire efficiently through linguistic communication.

In sum, subtle group differences in behavioral judgments were observed for living things, and there was high agreement across groups for light events. These results suggest that people can develop shared representations of purely visual concepts, such as light events, with and without direct sensory access to their referents.

4.2. Similar neural responses to living things and light events across sighted and blind people

Behavioral evidence from the current study and prior work is open to multiple interpretations. The observed difference in behavioral judgments between blind and sighted people for birds and mammals suggests that sighted participants use visually acquired appearance knowledge to make semantic similarity judgments about these categories. Do such behavioral differences between blind and sighted people reflect fundamental differences in conceptual representation? Or do they reflect small quantitative differences in knowledge analogous to those typically observed across subsamples of the sighted population (e.g., urbanites vs. naturalists) (e.g., Carey, 2011; Yee & Thompson-Schill, 2016; Marti et al., 2023)? In a similar manner, the absence of group differences in light event judgments could reflect the use of qualitatively similar conceptual representations across groups or mask profound differences in representation (e.g., sighted people use visual representations and congenitally blind people use linguistic ones to arrive at similar judgments). Neural evidence offers complementary insights by testing whether previously identified neural signatures of ‘visual’ concepts emerge in the absence of visual experience. The current neural findings support the view that ‘visual’ concepts develop in qualitatively similar ways across sighted and blind adults.

4.2.1. Specialization for living things in temporoparietal semantic network of people born blind

We find that both sighted and blind people exhibit robust neural specialization for living things in temporoparietal networks. Multivariate analysis revealed distinct neural patterns for living things (i.e., animals) and non-living things (i.e., places) across temporoparietal regions previously associated with the retrieval of entity concepts (e.g., Fairhall et al., 2014; Deen & Freiwald, 2022; Elli et al., 2019). In addition, selective responses to living things emerged in the PC of both blind and sighted participants. These results are consistent with prior findings from sighted adults proposing that the PC supports living things representations (Devlin et al., 2002; Fairhall & Caramazza, 2013a; 2013b; Fairhall et al., 2014; Deen & Freiwald, 2022; Peer et al., 2015; Elli et al., 2019). Our findings suggest that neural specialization for living things concepts develops independent of vision. In other words, whether people have direct sensory access to a concept’s referent does not appear to influence the neural basis of its representation.

Consistent with prior literature, we also observed selective responses to place words in the retrosplenial complex and the medial VOTC of both blind and sighted people. Medial VOTC responses were located anterior to the typical location of the perceptual PPA in sighted people (e.g., He et al., 2013; Wang et al., 2016; Fairhall et al., 2014; Steel et al., 2021; Häusler et al., 2022; Epstein & Kanwisher, 1998; Weiner et al., 2017; Silson et al., 2016, 2019; see also Baldassano et al., 2013). Previous studies have suggested that unlike the posterior PPA, which is involved in place perception, anterior PPA represents mnemonic and/or conceptual information related to places (Silson et al., 2016, 2019; Steel et al., 2021; Häusler et al., 2022). Together, these results point to a vision-independent ‘double dissociation’ in the neural instantiation of living things and place concepts.

In sum, neural signatures of ‘living things’, a partially vision-dependent category for sighted people, develop similarly in people with and without visual experience, suggesting that these networks are robust to differences in sensory experience. This neural evidence points to the ‘body independence’ of living things concepts.

4.2.2. Similar neural signatures of light event concepts in people born blind and sighted

Unlike animals, light events (e.g., ‘sparkle’, ‘glow’) can be perceived only through vision. Despite this, we observed similar neural responses to light emission events among sighted and congenitally blind adults. In both populations, the LMTG+ exhibits distinctive neural responses to light events relative to entities (univariate analysis) as well as to other event categories (e.g., hand actions; multivariate analysis).

The current results expand on prior work showing that representations of motion verbs (e.g., ‘roll’, ‘bounce’) in the LMTG+ are similar across congenitally blind and sighted people (Noppeney et al., 2003; Bedny et al., 2012). One interpretation of these prior findings is that the LMTG+ of people who are sighted represents visual motion information, while the LMTG+ of people born blind undergoes ‘cross-modal plasticity’: visual information is replaced with sensory information from other modalities (e.g., audition, touch) (Yee, Chrysikou, & Thompson-Schill, 2013; Yee, Jones, & McRae, 2017; see also Pascual-Leone & Hamilton, 2001; Bavelier & Neville, 2002). This explanation cannot account for the current findings on light events, which can only be accessed through the visual modality. It is not clear what aspects of auditory/tactile experience could inform the learner that, for example, shining is more similar to glowing than to sparkling or flashing.

We speculate that people born blind use linguistic evidence to acquire the same light event concepts that sighted people acquire (see also Landau & Gleitman, 1985). Large language models (LLMs) trained on linguistic data alone can generate human-like semantic judgments about sensory phenomena (Abdou et al., 2021; Patel & Pavlick, 2022; Li et al, 2021; Wei et al., 2022; Sharma et al., 2024; Gurnee & Tegmark, 2024; Marjieh et al., 2022, 2024). Sensory semantic content can therefore in principle be learned from language alone, i.e., without access to sensory information from any modality. Precisely how people born blind learn ‘visual’ meaning from language remains to be understood.

It is possible that the LMTG+ of blind and sighted people represents different types of information, i.e., language-derived information in blind people and visual motion information in sighted people. This possibility cannot be ruled out by the available neural data but also lacks any positive empirical support. Across several studies and a variety of semantic categories, the LMTG+ of sighted and blind people exhibits similar neural responses to event concepts. By contrast, both the current and prior studies find functional differences across early visual networks of blind and sighted groups (e.g., responses to spoken language and braille in V1) (e.g., Röder et al., 2002; Sadato et al., 1996; 1998; Amedi et al., 2004; Collignon et al., 2011; 2013; Striem-Amit et al., 2015; Abboud & Cohen, 2019). In our view, the most parsimonious account of this evidence is that conceptual representations in the LMTG+ develop in qualitatively similar ways in people with and without direct visual access.

It is also worth mentioning that the LMTG+ responds not only to perceptible events (e.g., ‘to sparkle’, ‘to run’), but also to abstract events (e.g., ‘to think’, ‘to love’), as well as to event nouns (e.g., ‘the hurricane’) (Davis et al., 2004; Bedny et al., 2008; Bedny et al., 2014; Noppeney et al., 2003; Bedny et al., 2012). This, together with the evidence that patterns of neural activity in the LMTG+ distinguish among different event types (Elli et al., 2019), suggests that the LMTG+ encodes modality-independent semantic representations of event concepts.
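The logic of distinguishing event types from multivariate patterns can be conveyed with a schematic toy decoder (synthetic ‘voxel’ patterns and a nearest-centroid rule, invented here for illustration; this is not the analysis reported in the paper): if different event types evoke reliably different spatial patterns, held-out patterns can be classified above chance.

```python
import random
from math import sqrt

random.seed(0)

# Each hypothetical event type has a template 'voxel' pattern; individual
# trials are noisy samples around that template.
def noisy(template, sd=0.3):
    return [x + random.gauss(0, sd) for x in template]

templates = {"light": [1, 0, 0, 1], "sound": [0, 1, 0, 1], "hand": [0, 0, 1, 0]}
train = {k: [noisy(t) for _ in range(20)] for k, t in templates.items()}

def centroid(patterns):
    # Mean pattern across training trials (column-wise average).
    return [sum(col) / len(col) for col in zip(*patterns)]

centroids = {k: centroid(v) for k, v in train.items()}

def classify(pattern):
    # Nearest-centroid decoding: assign the type whose mean pattern is closest.
    dist = lambda u, v: sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    return min(centroids, key=lambda k: dist(pattern, centroids[k]))

# Decode held-out trials; chance is 1/3 for three event types.
test = [(k, noisy(t)) for k, t in templates.items() for _ in range(10)]
acc = sum(classify(p) == k for k, p in test) / len(test)
print(f"decoding accuracy: {acc:.2f}")
```

Above-chance decoding of this kind is what licenses the inference that a region carries information distinguishing the conditions, though not, by itself, what format that information is in.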

In sum, across a broad range of semantic types, from living things to light events, neural signatures of concepts develop similarly in individuals with and without direct sensory access. This evidence provides support for the hypothesis that concepts are ‘body-independent.’

4.3.1. The relationship of the current findings to prior neuroscience evidence for embodied concepts

Neural data have played a significant role in motivating the view that concepts are grounded in sensory experience and have contributed to the ‘different-body/different-concepts hypothesis’ (e.g., Pulvermüller, 2001; Barsalou, 2010; Gallese & Lakoff, 2005; Zwaan, 2014; Meteyard et al., 2012; Kiefer & Pulvermüller, 2012; Yee & Thompson-Schill, 2016; Reilly et al., 2024). How do we reconcile the current evidence with prior studies that report activation of sensory systems during semantic tasks? We speculate that some of the neural activity observed during conceptual tasks that was interpreted as sensory in prior work is not in fact sensory. This appears to be the case for responses to action verbs observed in the LMTG+, which were originally interpreted as reflecting the activation of visual motion representations. In sighted people, the LMTG+ is located near visual motion perception regions, including area MT+ and biological motion perception areas in the superior temporal sulcus (STS) (Grossman et al., 2000; Isik et al., 2017; Wurm & Caramazza, 2019). The original studies proposing that the LMTG+ represents visual motion information were conducted prior to the advent of modern functional localization techniques and used group analyses prone to the error of ‘blending’ regions that are proximal but functionally distinct in individual participants (Fedorenko et al., 2010; Nieto-Castañón & Fedorenko, 2012). It is therefore possible that neighboring visual and conceptual responses were not separated in this prior work. More recent evidence suggests that modality-specific sensory responses to visual motion are separable from responses to words referring to visual motion (e.g., to run), although there are also shared action representations across language and vision (Bedny et al., 2008; Wurm & Caramazza, 2019).

There is also evidence that under some task conditions, language referring to perceptible qualities (e.g., color) or objects can activate high-level sensory representations (e.g., Hsu et al., 2011; Wang et al., 2020; Seydell-Greenwald et al., 2023). For example, when asked to make highly detailed perceptual judgments about the colors of named objects (is a school bus more similar in color to an egg yolk or to butter?), sighted people activate high-level color perception regions (Hsu et al., 2011). Whether such neural responses should be considered ‘part of a concept’ has been hotly debated (Yee & Thompson-Schill, 2016; Mahon & Hickok, 2016; Leshinskaya & Caramazza, 2016; Machery, 2010).

A ‘dual theory’ of concepts accommodates both the observation that sensory regions can be activated during conceptual tasks and the observation that blind and sighted people exhibit similar neural responses to ‘visual’ categories (Osherson & Smith, 1981; Margolis & Laurence, 2003; see also Bi, 2021). According to this view, people with direct sensory access to perceptible categories have a two-part conceptual representation that includes ‘abstract conceptual cores’ as well as ‘sensory identification procedures’ used to identify referents of that category (Osherson & Smith, 1981; Margolis & Laurence, 2003). The conceptual cores are activated obligatorily and shared across people regardless of sensory experience. By contrast, the sensory identification procedures are retrieved optionally depending on the task and context, are only retrieved for some perceptible categories, and vary across people based on their sensory experiences (Yee & Thompson-Schill, 2016). Some recent evidence comparing the neural basis of color knowledge across blind and sighted people is potentially consistent with this view. Blind and sighted people activate similar semantic networks when judging object color, but sighted people additionally activate occipital visual-perceptual regions (Bottini et al., 2020; Wang et al., 2020). One interpretation of this result is that the regions activated by both sighted and blind people represent abstract conceptual cores, whereas the visual-perception regions activated only by sighted people support perceptual identification procedures.

The conceptual cores vs. identification procedures hypothesis still leaves open the question of why visual-perceptual regions (i.e., occipital cortices) are recruited by sighted people for some visual concepts (e.g., object colors) but not others (i.e., living things and light events). Likewise, it is unclear why neural responses to living things and light events are indistinguishable across sighted and blind people making detailed semantic judgments, while subtle group differences are observed in object color labeling tasks (Bottini et al., 2020; Wang et al., 2020).

One speculative possibility is that sighted people are more likely to store and retrieve long-term perceptual representations of object colors than light events. More generally, ‘visual’ identification procedures might be less relevant for semantic categories that can be identified without retrieving information from long-term memory. Unlike representations of object appearance, light events are low-dimensional (flashing = bright light changing periodically), dynamic (i.e., unfolding over time), and variable across instances (flashing streetlights vs. flashing lightning). For instance, when told to ‘drive until you see the flashing light’, a sighted person might identify the flashing event without the need to store or retrieve a long-term sensory memory of flashing.

Regardless of which of these explanations, if any, is correct, the available data suggest that some purely visual concepts (e.g., light events) are behaviorally and neurally indistinguishable in individuals with and without direct sensory access. Moreover, differences in sensory experience and resulting differences in appearance knowledge (i.e., about living things) do not necessitate changes in the neural basis of semantic representations. Even in the face of dramatic sensory differences, people acquire shared sensory concepts with similar neural bases. Such evidence is difficult to reconcile with the ‘different-body/different-concepts hypothesis.’ Instead, social and inferential learning via linguistic evidence establishes shared conceptual representations across people.

Supplementary Material

Hauptman_Supplementary

Acknowledgements

We would like to thank all of the blind and sighted participants, the blind community, and the National Federation of the Blind. Without their support, this study would not have been possible. We would also like to thank the F.M. Kirby Research Center for Functional Brain Imaging at the Kennedy Krieger Institute for their assistance with data collection. We thank Jeffrey Bowen for assistance with statistical analyses. This work was supported by the National Institutes of Health (R01 EY027352 to M.B.) and the Johns Hopkins University Catalyst Grant (to M.B.).

Data statement

Anonymized fMRI data from the current study are available on OpenICPSR (https://www.openicpsr.org/openicpsr/project/198163/version/V3/view). Code used in the current study can be found on Open Science Framework (https://osf.io/f4dj2/?view_only=13b637d0bde049d684077b331c606bc7) and GitHub (https://github.com/NPDL/NPDL-scripts).

References

  1. Allport DA (1985). Distributed memory, modular subsystems and dysphasia. In: Current perspectives in dysphasia (Newman SK, Epstein R, eds), pp 207–244. Edinburgh: Churchill Livingstone.
  2. Abboud S, & Cohen L (2019). Distinctive Interaction Between Cognitive Networks and the Visual Cortex in Early Blind Individuals. Cerebral Cortex, 29(11), 4725–4742. 10.1093/cercor/bhz006
  3. Abdou M, Kulmizev A, Hershcovich D, Frank S, Pavlick E, & Søgaard A (2021). Can Language Models Encode Perceptual Structure Without Grounding? A Case Study in Color (arXiv:2109.06129). arXiv. http://arxiv.org/abs/2109.06129
  4. Aglinskas A, & Fairhall SL (2023). Similar representation of names and faces in the network for person perception. NeuroImage, 274, 120100. 10.1016/j.neuroimage.2023.120100
  5. Amedi A, Floel A, Knecht S, Zohary E, & Cohen LG (2004). Transcranial magnetic stimulation of the occipital pole interferes with verbal processing in blind subjects. Nature Neuroscience, 7(11), 1266–1270. 10.1038/nn1328
  6. Amedi A, Raz N, Pianka P, Malach R, & Zohary E (2003). Early ‘visual’ cortex activation correlates with superior verbal memory performance in the blind. Nature Neuroscience, 6(7), Article 7. 10.1038/nn1072
  7. Bailenson JN, Shum MS, Atran S, Medin DL, & Coley JD (2002). A bird’s eye view: Biological categorization and reasoning within and across cultures. Cognition, 84(1), 1–53. 10.1016/S0010-0277(02)00011-2
  8. Baldassano C, Beck DM, & Fei-Fei L (2013). Differential connectivity within the Parahippocampal Place Area. NeuroImage, 75, 228–237. 10.1016/j.neuroimage.2013.02.073
  9. Barsalou LW (2008). Grounded Cognition. Annual Review of Psychology, 59(1), 617–645. 10.1146/annurev.psych.59.103006.093639
  10. Barsalou LW (2010). Grounded Cognition: Past, Present, and Future. Topics in Cognitive Science, 2(4), 716–724. 10.1111/j.1756-8765.2010.01115.x
  11. Barsalou LW, Kyle Simmons W, Barbey AK, & Wilson CD (2003). Grounding conceptual knowledge in modality-specific systems. Trends in Cognitive Sciences, 7(2), 84–91. 10.1016/S1364-6613(02)00029-3
  12. Barsalou LW (1999). Perceptions of perceptual symbols. Behavioral and Brain Sciences, 22(4), 637–660.
  13. Bavelier D, & Neville HJ (2002). Cross-modal plasticity: Where and how? Nature Reviews Neuroscience, 3(6), 443–452. 10.1038/nrn848
  14. Bedny M (2020). The contribution of sensory-motor experience to the mind and brain. In: Gazzaniga MS, Mangun GR, Poeppel D (Eds.), The Cognitive Neurosciences. MIT Press.
  15. Bedny M, Caramazza A, Grossman E, Pascual-Leone A, & Saxe R (2008). Concepts Are More than Percepts: The Case of Action Verbs. The Journal of Neuroscience, 28(44), 11347–11353. 10.1523/JNEUROSCI.3039-08.2008
  16. Bedny M, Dravida S, & Saxe R (2014). Shindigs, brunches, and rodeos: The neural basis of event words. Cognitive, Affective, & Behavioral Neuroscience, 14(3), 891–901. 10.3758/s13415-013-0217-z
  17. Bedny M, Koster-Hale J, Elli G, Yazzolino L, & Saxe R (2019). There’s more to “sparkle” than meets the eye: Knowledge of vision and light verbs among congenitally blind and sighted individuals. Cognition, 189, 105–115. 10.1016/j.cognition.2019.03.017
  18. Bedny M, Pascual-Leone A, Dodell-Feder D, Fedorenko E, & Saxe R (2011). Language processing in the occipital cortex of congenitally blind adults. Proceedings of the National Academy of Sciences, 108(11), 4429–4434. 10.1073/pnas.1014818108
  19. Bedny M, Pascual-Leone A, Dravida S, & Saxe R (2012). A sensitive period for language in the visual cortex: Distinct patterns of plasticity in congenitally versus late blind adults. Brain and Language, 122(3), 162–170. 10.1016/j.bandl.2011.10.005
  20. Bedny M, Richardson H, & Saxe R (2015). “Visual” Cortex Responds to Spoken Language in Blind Children. The Journal of Neuroscience, 35(33), 11674–11681. 10.1523/JNEUROSCI.0634-15.2015
  21. Bedny M, & Saxe R (2016). Insights into the origins of knowledge from the cognitive neuroscience of blindness. Understanding Cognitive Development, 56–84.
  22. Bedny M, & Thompson-Schill SL (2006). Neuroanatomically separable effects of imageability and grammatical class during single-word comprehension. Brain and Language, 98(2), 127–139. 10.1016/j.bandl.2006.04.008
  23. Behrmann M, & Nishimura M (2010). Agnosias. WIREs Cognitive Science, 1(2), 203–213. 10.1002/wcs.42
  24. Beilock SL, Lyons IM, Mattarella-Micke A, Nusbaum HC, & Small SL (2008). Sports experience changes the neural processing of action language. Proceedings of the National Academy of Sciences, 105(36), 13269–13273. 10.1073/pnas.0803424105
  25. Bender EM, & Koller A (2020). Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data. In Jurafsky D, Chai J, Schluter N, & Tetreault J (Eds.), Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (pp. 5185–5198). Association for Computational Linguistics. 10.18653/v1/2020.acl-main.463
  26. Ben-Shachar M, Dougherty RF, Deutsch GK, & Wandell BA (2007). Differential Sensitivity to Words and Shapes in Ventral Occipito-Temporal Cortex. Cerebral Cortex, 17(7), 1604–1611. 10.1093/cercor/bhl071
  27. Berkeley G (1732). An essay towards a new theory of vision. (Based on the fourth edition, London, 1732). Edited by Wilkins David R., Dublin, December 2002.
  28. Bi Y (2021). Dual coding of knowledge in the human brain. Trends in Cognitive Sciences, 25(10), 883–895. 10.1016/j.tics.2021.07.006
  29. Bi Y, Wang X, & Caramazza A (2016). Object Domain and Modality in the Ventral Visual Pathway. Trends in Cognitive Sciences, 20(4), 282–290. 10.1016/j.tics.2016.02.002
  30. Bola Ł, Siuda-Krzywicka K, Paplińska M, Sumera E, Zimmermann M, Jednoróg K, Marchewka A, & Szwed M (2017). Structural reorganization of the early visual cortex following Braille training in sighted adults. Scientific Reports, 7(1), Article 1. 10.1038/s41598-017-17738-8
  31. Bola Ł, Yang H, Caramazza A, & Bi Y (2022). Preference for animate domain sounds in the fusiform gyrus of blind individuals is modulated by shape–action mapping. Cerebral Cortex, 32(21), 4913–4933. 10.1093/cercor/bhab524
  32. Boster JS, & Johnson JC (1989). Form or Function: A Comparison of Expert and Novice Judgments of Similarity Among Fish. American Anthropologist, 91(4), 866–889. 10.1525/aa.1989.91.4.02a00040
  33. Bottini R, Ferraro S, Nigri A, Cuccarini V, Bruzzone MG, & Collignon O (2020). Brain Regions Involved in Conceptual Retrieval in Sighted and Blind People. Journal of Cognitive Neuroscience, 32(6), 1009–1025. 10.1162/jocn_a_01538
  34. Campbell EE, & Bergelson E (2022). Making sense of sensory language: Acquisition of sensory knowledge by individuals with congenital sensory impairments. Neuropsychologia, 174, 108320.
  35. Caramazza A (1998). The Interpretation of Semantic Category-specific Deficits: What Do They Reveal About the Organization of Conceptual Knowledge in the Brain? Neurocase, 4(4–5), 265–272. 10.1080/13554799808410627
  36. Caramazza A, & Mahon BZ (2003). The organization of conceptual knowledge: The evidence from category-specific semantic deficits. Trends in Cognitive Sciences, 7(8), 354–361. 10.1016/S1364-6613(03)00159-1
  37. Caramazza A, & Shelton JR (1998). Domain-Specific Knowledge Systems in the Brain: The Animate-Inanimate Distinction. Journal of Cognitive Neuroscience, 10(1), 1–34. 10.1162/089892998563752
  38. Carey S (2011). The origin of concepts. Oxford University Press.
  39. Casasanto D (2009). Embodiment of abstract concepts: Good and bad in right- and left-handers. Journal of Experimental Psychology: General, 138(3), 351–367. 10.1037/a0015854
  40. Chalmers DJ (2024). Does Thought Require Sensory Grounding? From Pure Thinkers to Large Language Models (arXiv:2408.09605). arXiv. http://arxiv.org/abs/2408.09605
  41. Cohen L, Dehaene S, Naccache L, Lehéricy S, Dehaene-Lambertz G, Hénaff M-A, & Michel F (2000). The visual word form area: Spatial and temporal characterization of an initial stage of reading in normal subjects and posterior split-brain patients. Brain, 123(2), 291–307. 10.1093/brain/123.2.291
  42. Collignon O, Dormal G, Albouy G, Vandewalle G, Voss P, Phillips C, & Lepore F (2013). Impact of blindness onset on the functional organization and the connectivity of the occipital cortex. Brain, 136(9), 2769–2783. 10.1093/brain/awt176
  43. Collignon O, Vandewalle G, Voss P, Albouy G, Charbonneau G, Lassonde M, & Lepore F (2011). Functional specialization for auditory–spatial processing in the occipital cortex of congenitally blind humans. Proceedings of the National Academy of Sciences, 108(11), 4435–4440. 10.1073/pnas.1013928108
  44. Connolly AC, Gleitman LR, & Thompson-Schill SL (2007). Effect of congenital blindness on the semantic representation of some everyday concepts. Proceedings of the National Academy of Sciences, 104(20), 8241–8246. 10.1073/pnas.0702812104
  45. Connolly AC, Sha L, Guntupalli JS, Oosterhof N, Halchenko YO, Nastase SA, Di Oleggio Castello MV, Abdi H, Jobst BC, Gobbini MI, & Haxby JV (2016). How the Human Brain Represents Perceived Dangerousness or “Predacity” of Animals. The Journal of Neuroscience, 36(19), 5373–5384. 10.1523/JNEUROSCI.3395-15.2016
  46. Crepaldi D, Berlingeri M, Cattinelli I, Borghese N, Luzzatti C, & Paulesu E (2013). Clustering the lexicon in the brain: A meta-analysis of the neurofunctional evidence on noun and verb processing. Frontiers in Human Neuroscience, 7. 10.3389/fnhum.2013.00303
  47. Dale AM, Fischl B, & Sereno MI (1999). Cortical Surface-Based Analysis: I. Segmentation and Surface Reconstruction. NeuroImage, 9(2), 179–194. 10.1006/nimg.1998.0395
  48. Damasio AR, & Tranel D (1993). Nouns and verbs are retrieved with differently distributed neural systems. Proceedings of the National Academy of Sciences, 90(11), 4957–4960. 10.1073/pnas.90.11.4957
  49. Davis MH, Meunier F, & Marslen-Wilson WD (2004). Neural responses to morphological, syntactic, and semantic properties of single words: An fMRI study. Brain and Language, 89(3), 439–449. 10.1016/S0093-934X(03)00471-1
  50. de Vries R (1969). Constancy of Generic Identity in the Years Three to Six. Monographs of the Society for Research in Child Development, 34(3), iii–67. 10.2307/1165683
  51. Deen B, & Freiwald WA (2022). Parallel systems for social and spatial reasoning within the cortical apex. bioRxiv. 10.1101/2021.09.23.461550
  52. Dehaene S, & Cohen L (2011). The unique role of the visual word form area in reading. Trends in Cognitive Sciences, 15(6), 254–262. 10.1016/j.tics.2011.04.003
  53. Dehaene S, Pegado F, Braga LW, Ventura P, Filho GN, Jobert A, Dehaene-Lambertz G, Kolinsky R, Morais J, & Cohen L (2010). How Learning to Read Changes the Cortical Networks for Vision and Language. Science, 330(6009), 1359–1364. 10.1126/science.1194140
  54. Devlin JT, Russell RP, Davis MH, Price CJ, Moss HE, Fadili MJ, & Tyler LK (2002). Is there an anatomical basis for category-specificity? Semantic memory studies in PET and fMRI. Neuropsychologia, 40(1), 54–75. 10.1016/S0028-3932(01)00066-5
  55. DiCarlo JJ, & Cox DD (2007). Untangling invariant object recognition. Trends in Cognitive Sciences, 11(8), 333–341. 10.1016/j.tics.2007.06.010
  56. Dilks DD, Kamps FS, & Persichetti AS (2022). Three cortical scene systems and their development. Trends in Cognitive Sciences, 26(2), 117–127. 10.1016/j.tics.2021.11.002
  57. Eklund A, Knutsson H, & Nichols TE (2019). Cluster failure revisited: Impact of first level design and physiological noise on cluster false positive rates. Human Brain Mapping, 40(7), 2017–2032. 10.1002/hbm.24350
  58. Eklund A, Nichols TE, & Knutsson H (2016). Cluster failure: Why fMRI inferences for spatial extent have inflated false-positive rates. Proceedings of the National Academy of Sciences, 113(28), 7900–7905. 10.1073/pnas.1602413113
  59. Elli GV, Lane C, & Bedny M (2019). A Double Dissociation in Sensitivity to Verb and Noun Semantics Across Cortical Networks. Cerebral Cortex, 29(11), 4803–4817. 10.1093/cercor/bhz014
  60. Epstein RA (2008). Parahippocampal and retrosplenial contributions to human spatial navigation. Trends in Cognitive Sciences, 12(10), 388–396. 10.1016/j.tics.2008.07.004
  61. Epstein R, & Kanwisher N (1998). A cortical representation of the local visual environment. Nature, 392(6676), Article 6676. 10.1038/33402
  62. Faber PB, & Usón RM (1999). Constructing a lexicon of English verbs (Vol. 23). Walter de Gruyter.
  63. Fairhall SL, Anzellotti S, Ubaldi S, & Caramazza A (2014). Person- and Place-Selective Neural Substrates for Entity-Specific Semantic Access. Cerebral Cortex, 24(7), 1687–1696. 10.1093/cercor/bht039
  64. Fairhall SL, & Caramazza A (2013a). Brain Regions That Represent Amodal Conceptual Knowledge. Journal of Neuroscience, 33(25), 10552–10558. 10.1523/JNEUROSCI.0051-13.2013
  65. Fairhall SL, & Caramazza A (2013b). Category-selective neural substrates for person- and place-related concepts. Cortex, 49(10), 2748–2757. 10.1016/j.cortex.2013.05.010
  66. Farah MJ (1990). Visual agnosia: Disorders of object recognition and what they tell us about normal vision. MIT Press.
  67. Farah MJ, & McClelland JL (1991). A computational model of semantic memory impairment: Modality specificity and emergent category specificity. Journal of Experimental Psychology: General, 120(4), 339–357. 10.1037/0096-3445.120.4.339
  68. Fedorenko E, Hsieh P-J, Nieto-Castañón A, Whitfield-Gabrieli S, & Kanwisher N (2010). New Method for fMRI Investigations of Language: Defining ROIs Functionally in Individual Subjects. Journal of Neurophysiology, 104(2), 1177–1194. 10.1152/jn.00032.2010
  69. Fernandino L, Binder JR, Desai RH, Pendl SL, Humphries CJ, Gross WL, Conant LL, & Seidenberg MS (2016). Concept Representation Reflects Multimodal Abstraction: A Framework for Embodied Semantics. Cerebral Cortex, 26(5), 2018–2034. 10.1093/cercor/bhv020
  70. Frank MC (2023). Baby steps in evaluating the capacities of large language models. Nature Reviews Psychology, 2(8), 451–452. 10.1038/s44159-023-00211-x
  71. Frawley W (1992). Linguistic semantics. Hillsdale, NJ: Erlbaum.
  72. Frossard J, & Renaud O (2021). Permutation Tests for Regression, ANOVA, and Comparison of Signals: The permuco Package. Journal of Statistical Software, 99, 1–32. 10.18637/jss.v099.i15
  73. Gaffan D, & Heywood CA (1993). A Spurious Category-Specific Visual Agnosia for Living Things in Normal Human and Nonhuman Primates. Journal of Cognitive Neuroscience, 5(1), 118–128. 10.1162/jocn.1993.5.1.118
  74. Gallese V, & Lakoff G (2005). The Brain’s concepts: The role of the Sensory-motor system in conceptual knowledge. Cognitive Neuropsychology, 22(3–4), 455–479. 10.1080/02643290442000310
  75. Gati I, & Tversky A (1984). Weighting common and distinctive features in perceptual and conceptual judgments. Cognitive Psychology, 16(3), 341–370. 10.1016/0010-0285(84)90013-6
  76. Gelman SA, & Roberts SO (2017). How language shapes the cultural inheritance of categories. Proceedings of the National Academy of Sciences, 114(30), 7900–7907. 10.1073/pnas.1621073114
  77. Glasser MF, Sotiropoulos SN, Wilson JA, Coalson TS, Fischl B, Andersson JL, Xu J, Jbabdi S, Webster M, Polimeni JR, Van Essen DC, & Jenkinson M (2013). The minimal preprocessing pipelines for the Human Connectome Project. NeuroImage, 80, 105–124. 10.1016/j.neuroimage.2013.04.127
  78. Glezer LS, Jiang X, & Riesenhuber M (2009). Evidence for Highly Selective Neuronal Tuning to Whole Words in the “Visual Word Form Area.” Neuron, 62(2), 199–204. 10.1016/j.neuron.2009.03.017
  79. Green DM, & Swets JA (1966). Signal detection theory and psychophysics. New York: Wiley.
  80. Grier JB (1971). Nonparametric indexes for sensitivity and bias: Computing formulas. Psychological Bulletin, 75(6), 424–429. 10.1037/h0031246
  81. Grill-Spector K, Knouf N, & Kanwisher N (2004). The fusiform face area subserves face perception, not generic within-category identification. Nature Neuroscience, 7(5), 555–562. 10.1038/nn1224
  82. Grossman E, Donnelly M, Price R, Pickens D, Morgan V, Neighbor G, & Blake R (2000). Brain Areas Involved in Perception of Biological Motion. Journal of Cognitive Neuroscience, 12(5), 711–720. 10.1162/089892900562417
  83. Gurnee W, & Tegmark M (2024). Language Models Represent Space and Time (arXiv:2310.02207). arXiv. http://arxiv.org/abs/2310.02207
  84. Hairston WD, Hodges DA, Casanova R, Hayasaka S, Kraft R, Maldjian JA, & Burdette JH (2008). Closing the mind’s eye: Deactivation of visual cortex related to auditory task difficulty. Neuroreport, 19(2), 151–154.
  85. Halpern AR, Zatorre RJ, Bouffard M, & Johnson JA (2004). Behavioral and neural correlates of perceived and imagined musical timbre. Neuropsychologia, 42(9), 1281–1292. 10.1016/j.neuropsychologia.2003.12.017
  86. Hanke M, Halchenko YO, Sederberg PB, Hanson SJ, Haxby JV, & Pollmann S (2009). PyMVPA: A Python Toolbox for Multivariate Pattern Analysis of fMRI Data. Neuroinformatics, 7(1), 37–53. 10.1007/s12021-008-9041-y
  87. Hauk O, Johnsrude I, & Pulvermüller F (2004). Somatotopic Representation of Action Words in Human Motor and Premotor Cortex. Neuron, 41(2), 301–307. 10.1016/S0896-6273(03)00838-9
  88. Häusler CO, Eickhoff SB, & Hanke M (2022). Processing of visual and non-visual naturalistic spatial information in the “Forrest Gump” movie. Scientific Data, 9(1), Article 1. 10.1038/s41597-022-01250-4
  89. Haxby JV, Connolly AC, & Guntupalli JS (2014). Decoding Neural Representational Spaces Using Multivariate Pattern Analysis. Annual Review of Neuroscience, 37(1), 435–456. 10.1146/annurev-neuro-062012-170325
  90. He C, Peelen MV, Han Z, Lin N, Caramazza A, & Bi Y (2013). Selectivity for large nonmanipulable objects in scene-selective visual cortex does not require visual experience. NeuroImage, 79, 1–9. 10.1016/j.neuroimage.2013.04.051
  91. Hsu NS, Kraemer DJM, Oliver RT, Schlichting ML, & Thompson-Schill SL (2011). Color, Context, and Cognitive Style: Variations in Color Knowledge Retrieval as a Function of Task and Subject Variables. Journal of Cognitive Neuroscience, 23(9), 2544–2557. 10.1162/jocn.2011.21619
  92. Hume D (1739/1978). A treatise of human nature. Oxford, UK: Oxford University Press. [Google Scholar]
  93. Humphreys GW, & Forde EME (2001). Hierarchies, similarity, and interactivity in object recognition: “Category-specific” neuropsychological deficits. Behavioral and Brain Sciences, 24(3), 453–476. 10.1017/S0140525X01004150 [DOI] [PubMed] [Google Scholar]

Associated Data


Supplementary Materials

Hauptman_Supplementary

Data Availability Statement

Anonymized fMRI data from the current study are available on OpenICPSR (https://www.openicpsr.org/openicpsr/project/198163/version/V3/view). Code used in the current study can be found on Open Science Framework (https://osf.io/f4dj2/?view_only=13b637d0bde049d684077b331c606bc7) and GitHub (https://github.com/NPDL/NPDL-scripts).
