Human Brain Mapping. 2026 Feb 6;47(2):e70444. doi: 10.1002/hbm.70444

The Neural Organization of Visual Information in the Auditory Cortex of the Congenitally Deaf

Zohar Tal 1,2, Joana Sayal 1,2, Fang Fang 3,4,5, Yanchao Bi 6,7,8, Jorge Almeida 1,2,, Alessio Fracasso 9,10
PMCID: PMC12877721  PMID: 41645742

ABSTRACT

Neuroplasticity is the brain's ability to reorganize its structural and functional architecture throughout life. In congenital deafness, the sensory-deprived auditory cortex can be recruited to represent sensory information belonging to other modalities, a process known as cross-modal plasticity. Previous studies have indicated that the auditory cortex of congenitally deaf, but not of hearing individuals, is recruited during visual tasks. However, it remains unclear whether and to what extent these cross-modal responses represent low-level visual spatial information or map the visual field. Here, we addressed this question using two complementary fMRI experiments focusing on cross-modal processing in the auditory cortex of both deaf and hearing individuals during passive viewing of conventional visual stimuli. The first experiment, at the group level, revealed that, unlike in hearing individuals, the auditory cortex of deaf individuals predominantly exhibited negative BOLD signals in early and associative auditory areas, a surprising finding given the prevailing focus on activations in prior work. These negative BOLD signals, commonly interpreted as deactivation responses, suggest that visual information may be represented via cross-modal deactivation mechanisms. We complemented the investigation with an exploratory follow-up analysis using pRF modeling in a subset of participants. Together, our findings indicate that, in congenitally deaf individuals, cross-modal visual processing in the auditory cortex may be mediated by deactivation signals, offering new insights into the neural basis of sensory reorganization.

Keywords: congenital deafness, functional magnetic resonance imaging, neuroplasticity, population receptive field analysis, topographic organization


Using fMRI and pRF modeling, we show that visual spatial information is represented in the auditory cortex of congenitally deaf individuals through deactivation signals. These negative BOLD responses suggest a novel mechanism of cross-modal plasticity.


1. Introduction

The human brain can partially reorganize and modify its activity throughout life (Frasnelli et al. 2011). Psychophysical and imaging studies of the development of the human visual system have concluded that the visual pathways up through primary visual cortex are already remarkably mature in infants (Kiorpes 2016). However, evidence also suggests that behaviorally relevant phenomena can emerge at later stages of human development (Carrasco et al. 2022), which is compatible with environmental factors shaping the developing visual system. Crucially, lesion and sensory deprivation studies constitute an important line of evidence, indicating that human early visual cortex can substantially reshape itself following sensory deprivation, providing clear examples of neuroplasticity (Collignon et al. 2009; Gougoux et al. 2004; Van Boven et al. 2000).

One major example of neuroplasticity comes from the study of sensory deprivation in the case of congenital blindness or deafness. Under these circumstances, the sensory‐deprived cortex can be recruited to process information belonging to other, intact modalities, a process known as cross‐modal plasticity. Cross‐modal neuroplasticity has been extensively studied in animal and human blindness (Collignon et al. 2009; Gougoux et al. 2004; Van Boven et al. 2000), revealing the involvement of the deprived visual cortex in non‐visual tasks including auditory, tactile, and language processing (Amedi et al. 2003; Bedny et al. 2011; Collignon et al. 2009; Ptito et al. 2012; Renier et al. 2010; Sadato et al. 1996). Similarly, a substantial body of research has documented the recruitment of auditory‐deprived areas during visual or somatosensory stimulation (Almeida et al. 2015; Bell et al. 2019; Bola et al. 2017; Bottari et al. 2014; Dhanik et al. 2024; Karns et al. 2012; Scurry et al. 2020; Zimmermann et al. 2021). Which principles govern the organization of sensory information in cross‐modal plasticity? Do cross‐modal responses in deprived cortical areas represent low‐level information in a similar way to the intact sensory areas? Here, we explored the representation of low‐level spatial visual information in the auditory cortex of deaf individuals.

Functional neuroimaging studies show that the auditory cortex (AC) of congenitally deaf individuals reorganizes to process visual information, such as visual motion (Retter et al. 2018), rhythm/frequency discrimination (Bola et al. 2017; Zimmermann et al. 2021), and position in the visual field (Almeida et al. 2015). Cross-modal plasticity has also been documented in electrophysiological studies with animal models of deafness, including cats (Land et al. 2016; Lomber et al. 2010, 2011; Meredith et al. 2011; Rebillard et al. 1977), ferrets (Meredith et al. 2012; Meredith and Allman 2012), and mice (Hunt et al. 2006). Congenital or early auditory deprivation has also been associated with behavioral changes, such as improved performance in various sensory tasks involving the unaffected modalities. For example, deaf individuals demonstrate enhanced visual abilities in motion detection (Almeida et al. 2018; Hauthal et al. 2013) and visual localization tasks (Codina et al. 2011; Seymour et al. 2017), especially when stimuli are presented in the peripheral visual field (Bavelier et al. 2000; Neville and Lawson 1987). Congenitally deaf cats show superior visual localization in the peripheral visual field and lower visual motion detection thresholds compared to hearing cats (Lomber et al. 2010, 2011). These attributes can be understood as compensatory mechanisms for the auditory loss, because in hearing individuals these functions rely on auditory and visual input working in tandem for detecting everyday dangers and for spatial orienting.

Animal studies can elucidate some of the mechanisms underlying these processes by localizing individual visual functions to portions of the reorganized AC and establishing a causal relationship between the activated areas and visual performance (Lomber et al. 2010; Meredith et al. 2011; Meredith and Lomber 2011). For example, reversible deactivation of the dorsal zone (DZ) in cats reduced their enhanced motion detection abilities while deactivation of the posterior auditory field (PAF) abolished superior visual peripheral localization (Lomber et al. 2010). These findings suggest that cross‐modal processes might underlie the superior performance of deaf individuals in different tasks related to the remaining senses. Moreover, single‐unit recordings in deaf (or partially deaf) animals have shown an increase in the proportion of neurons in the auditory cortex that respond to visual and tactile stimuli (Hunt et al. 2006; Meredith et al. 2011, 2012; Meredith and Lomber 2011). Detailed characterization of the visual‐evoked responses of these neurons revealed large‐diameter receptive fields, which were distributed across the contralateral visual hemifield, extending also to the ipsilateral hemifield (Meredith et al. 2011; Meredith and Lomber 2011).

Does the reorganized auditory cortex of humans represent spatial features of visual stimuli in a similar way? For instance, how is eccentricity and polar angle information represented in auditory areas? In a study by Almeida et al. (2015), a group of deaf and hearing individuals was presented with a set of stimuli typically used to map visual field location. Using multi‐voxel pattern analysis, the authors found that the location of the visual stimuli could be decoded from the activity pattern of the primary auditory cortex of the deaf, but not the hearing group. While these results clearly show that the activity pattern of the primary auditory cortex of deaf individuals contains spatial visual information, the organization of these visual responses, particularly in terms of spatial features like eccentricity and polar angle, remains largely unexplored.

To address this, we undertook a detailed examination of how visual spatial information is processed in the auditory cortex of deaf individuals. Our investigation consisted of two complementary components. First, we conducted a group-level study using a visual stimulation paradigm to explore differences in spatial processing between deaf and hearing participants. Second, in an exploratory analysis, we applied population receptive field (pRF) modelling (Dumoulin and Wandell 2008) to a smaller subset of participants (two deaf and one hearing individual) to provide a preliminary characterization of visual receptive field properties in the reorganized auditory cortex. This method models the collective response and tuning properties of the neuronal population at each recording site (voxel), providing estimates of receptive field location and size (tuning width). pRF modelling, which was first introduced for retinotopic mapping (Dumoulin and Wandell 2008), has been extensively used to map other sensory cortices and higher-order areas (Almeida et al. 2025; Fabius et al. 2022; Fracasso, Petridou, and Dumoulin 2016; Harvey et al. 2013; Scurry et al. 2020; Thomas et al. 2015). In the current study, we used it in a limited, proof-of-concept fashion to explore spatial properties of cross-modal visual responses in the auditory cortex.

While most research on cross‐modal plasticity—especially in cases of congenital sensory deprivation—has focused on positive BOLD responses, recent findings highlight the relevance of negative BOLD responses (NBRs), or deactivations, in sensory‐specific regions (Fracasso et al. 2022; Hairston et al. 2008; Jorge et al. 2018; Laurienti et al. 2002; Merabet et al. 2007; Sadato et al. 1996; Tal et al. 2016; Weisser et al. 2005) as well as non‐sensory‐specific regions (Anticevic et al. 2012; Mayer et al. 2010). Importantly, recent work demonstrates that NBRs can carry spatial information and can be effectively modelled using pRF techniques (Szinte and Knapen 2020). Here, as an exploratory analysis, we adapted this method to a new context and applied pRF modelling to investigate the spatial organization of visually evoked cross‐modal responses—both positive and negative—in the auditory cortex of deaf individuals.

2. Methods

2.1. Participants

Experiment 1 included a group of 31 participants: 15 congenitally deaf (13 women, 2 men; mean age = 19.4 years) and 16 hearing individuals (13 women, 3 men; mean age = 19.1 years). Two hearing participants were excluded from the analysis due to unreliable segmentation and coregistration procedures, resulting in the inclusion of 15 deaf and 14 hearing participants in the analysis. A subset of two congenitally deaf participants (both women, mean age = 19 years) and one hearing participant (a 19-year-old woman) took part in Experiment 2. All participants were naive to the purpose of the experiment. Participants had normal or corrected-to-normal vision, had no history of neurological disorder, and gave written informed consent in accordance with the guidelines of the institutional review board of Beijing Normal University Imaging Center for Brain Research. All deaf participants were proficient in Chinese sign language and had binaural hearing loss above 90 dB (frequencies tested ranged from 125 to 8000 Hz). Eight participants had never used hearing aids, and the remaining seven had used a hearing aid in the past, for periods ranging from 2 to 10 years. In general, the hearing aids enabled these participants to perceive sound, but not to discriminate between sound categories (e.g., voices) or to localize sounds in space. Their deafness was caused by genetic factors, pregnancy-related diseases and complications at childbirth, or unknown causes (see Table S1 for a full demographic description of the participants). The hearing participant in Experiment 2 reported no hearing impairment and no knowledge of Chinese sign language.

Parts of this dataset have been used in previous publications: functional data in Almeida et al. (2015) and Ruttorf et al. (2023); structural data in Amaral et al. (2016).

2.2. Stimuli and Procedure

Participants in both experiments were presented with two types of visual stimuli that are typically used to study the processing and representation of low‐level visual information and to obtain visual field maps: rotating wedges and expanding annuli (Engel et al. 1994, 1997; Sereno et al. 1995). The stimuli were generated in MATLAB (MathWorks) using the PsychToolbox (Brainard 1997; Kleiner et al. 2007; Pelli 1997).

2.2.1. Experiment 1 (Block‐Design)

This experiment included four functional runs. In each run, visual stimuli were presented in a block design with five blocks per run. Each run started with a resting period of 12 s, followed by a 48-s block of stimuli. Each stimulation block was followed by a 12-s inter-block interval of baseline fixation. In two runs (wedge condition), the visual stimuli consisted of a random sequence of four images showing counterphase flickering (5 Hz) checkerboard wedges subtending 10.50° of visual angle, located in the left and right parts of the screen (along the horizontal axis) and in the upper and lower parts of the screen (along the vertical axis) (Figure 1A, top). In the other two runs (ring condition), the visual stimuli consisted of a random sequence of four images showing counterphase flickering (5 Hz) rings subtending 9.23°, 6.61°, 3.97°, or 1.30° of visual angle (Figure 1A, bottom). Participants were asked to maintain fixation on a central point throughout the entire run. All participants completed two runs per condition, in which the four wedge stimuli or the four annuli alternated randomly, resulting in 312 functional volumes per condition.
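As a sanity check on the reported volume count, the run timing can be reproduced directly; a minimal R sketch of the arithmetic (variable names are ours, values taken from the description above and the 2-s TR reported in the imaging section):

```r
# Experiment 1 timing check (TR = 2 s; block structure as described above)
tr_s            <- 2     # repetition time in seconds
initial_rest_s  <- 12    # resting period at run start
block_s         <- 48    # one stimulation block (4 stimuli x 12 s each)
inter_block_s   <- 12    # baseline fixation after each block
n_blocks        <- 5     # blocks per run
n_runs_per_cond <- 2     # runs per condition (wedge or ring)

run_s     <- initial_rest_s + n_blocks * (block_s + inter_block_s)  # 312 s per run
vols_run  <- run_s / tr_s                                           # 156 volumes per run
vols_cond <- vols_run * n_runs_per_cond                             # 312 volumes per condition
c(run_s = run_s, vols_run = vols_run, vols_cond = vols_cond)
```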

FIGURE 1. Experimental stimuli and procedure. (A) Experiment 1, block-design. Wedges were presented alternately in each quadrant of the screen, and annuli were presented at four different sizes in the center of the screen. Stimuli appeared in a pseudorandom order within each block (each stimulus was presented for 6 TRs = 12 s). (B) Experiment 2, retinotopic mapping. Polar angle stimuli included high contrast, flickering checkerboard wedges extending from the fovea to the visual periphery and were presented along twelve polar angles (each wedge was presented for 1 TR = 2 s). The wedge stimulus rotated clockwise around a central fixation point, eliciting each voxel's preferred polar angle in the visual space. Eccentricity stimuli elicited each voxel's preferred eccentricity by presenting an expanding ring starting from the central fovea up to the periphery, in twelve different positions around a central fixation point (each ring was presented for 1 TR = 2 s).

2.2.2. Experiment 2 (Retinotopic Mapping)

This experiment included 12 runs, 6 with rotating wedges and 6 with expanding annuli. In the wedge runs, the stimulus was a counterphase flickering (5 Hz) checkerboard wedge (30° wide) rotating through 12 equidistant positions spanning the full 360°, eliciting different polar angle preferences. The wedge stimuli rotated clockwise, starting from the upper vertical meridian (Figure 1B). In the ring runs, the stimuli consisted of counterphase flickering (5 Hz) checkerboard annuli expanding in 12 steps, from the central fovea to the periphery of the visual field, eliciting eccentricity preferences (Figure 1B). Participants were asked to maintain fixation on a central point throughout each functional run. Each run (84 volumes in total) started with 12 s of blank (grey) screen, followed by six cycles of the full stimulus loop (72 volumes), and ended with a 12-s blank screen.
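For readers who wish to reconstruct the stimulus apertures used later for pRF modelling, the sketch below builds binary wedge masks on a 28 × 28 grid (the grid size follows the pRF section below; the mapping of grid units to degrees, the start angle, and all variable names are our assumptions for illustration, not the original stimulus code):

```r
# Sketch: binary wedge apertures on a 28 x 28 grid (one mask per wedge position)
grid_n    <- 28
wedge_deg <- 30                              # angular width of the wedge
positions <- seq(0, 330, by = 30)            # 12 equidistant polar angles (deg)

ax  <- seq(-1, 1, length.out = grid_n)       # normalized grid coordinates
gx  <- matrix(rep(ax, grid_n), grid_n, byrow = TRUE)
gy  <- matrix(rep(ax, grid_n), grid_n, byrow = FALSE)
ecc <- sqrt(gx^2 + gy^2)                     # normalized eccentricity per grid point
ang <- (atan2(gy, gx) * 180 / pi) %% 360     # polar angle in degrees per grid point

apertures <- lapply(positions, function(p) {
  d <- (ang - p) %% 360                      # angular distance from the wedge start
  (d < wedge_deg & ecc <= 1) * 1             # 1 inside the wedge, 0 elsewhere
})
# Repeating the 12 masks over six cycles and padding with blank volumes at the
# start and end reproduces the 84-volume run structure described above.
```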

2.3. Anatomical and Functional Imaging

Magnetic resonance imaging (MRI) data were collected at the Beijing Normal University MRI center on a 3 T Siemens Tim Trio scanner using a 32-channel head coil. A high-resolution 3-D structural data set was collected in the sagittal plane with a 3-D magnetization-prepared rapid-acquisition gradient echo sequence and the following parameters: repetition time (TR) = 2530 ms, echo time (TE) = 3.39 ms, flip angle = 7°, matrix size = 256 × 256, voxel size = 1 × 1 × 1.33 mm, 144 slices, acquisition time = 8.07 min. Functional data were collected using a T2*-weighted echo-planar imaging (EPI) sequence (TR = 2000 ms, TE = 30 ms, flip angle = 90°, matrix size = 64 × 64, voxel size = 3.125 × 3.125 × 4 mm, 33 slices, interslice interval = 0.6 mm, slice orientation = axial).

2.4. MRI Preprocessing

Processing of the anatomical T1-weighted MR images was performed with the FreeSurfer image analysis suite (version 6.0, http://surfer.nmr.mgh.harvard.edu/) using the recon-all pipeline, which included white and grey matter segmentation, skull removal, and cortical reconstruction.

Preprocessing of functional MRI data was performed in R (R Core Team 2014) and AFNI (Cox 1996; Cox and Hyde 1997). Preprocessing included the following steps: slice acquisition time correction, head motion correction, scaling, and temporal detrending using a third-degree polynomial to remove low-frequency drifts. For the block-design data set, a denoising procedure based on random matrix theory (Veraart et al. 2016) was applied prior to the standard preprocessing steps to improve image and statistical quality. The functional runs were coregistered to each participant's anatomical scan, and the transformation parameters were then applied to the first-level statistical maps (see below). To assess potential group differences in head motion, we computed the average absolute displacement across all six motion parameters for each participant and compared these values between groups using a two-sample t-test. No significant difference in mean motion was observed between deaf and hearing participants (t(26.8) = −0.43, p = 0.67).
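The head-motion comparison described above can be illustrated with a short R sketch (the data here are synthetic and the participant naming scheme is hypothetical; in practice each matrix would hold the six motion parameters written out by the AFNI motion-correction step):

```r
# Sketch: group comparison of mean absolute head displacement
set.seed(1)
motion_list <- c(
  setNames(replicate(15, matrix(rnorm(156 * 6, sd = 0.1), ncol = 6), simplify = FALSE),
           paste0("deaf", 1:15)),
  setNames(replicate(14, matrix(rnorm(156 * 6, sd = 0.1), ncol = 6), simplify = FALSE),
           paste0("hearing", 1:14))
)

mean_abs_motion <- sapply(motion_list, function(m) mean(abs(m)))       # one value per participant
group <- ifelse(grepl("^deaf", names(motion_list)), "deaf", "hearing")

# Welch two-sample t-test on the per-participant summaries
t.test(mean_abs_motion[group == "deaf"], mean_abs_motion[group == "hearing"])
```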

For the retinotopic data set, the preprocessed time‐series of each stimulus type (wedge and ring) were averaged across repetitions for each participant and concatenated to create a single time‐series for each participant, containing both polar angle and eccentricity stimuli. The averaged functional image was co‐registered to the anatomical scans (in individual native space) using the function 3dAllineate (with local Pearson correlation as the cost function).

2.5. General Linear Model (GLM) Analysis

Task-related brain activity was estimated using a GLM analysis with the afni_proc.py pipeline (calling 3dDeconvolve for ordinary least squares estimation). Stimulus condition timing files were convolved with a block function (BLOCK(12,1), a 12-s boxcar convolved with a gamma-shaped HRF) to model the expected hemodynamic response over a 12-s window. The GLM included the motion parameters as nuisance regressors to control for residual motion artifacts, and fitted time series were computed to capture the predicted responses. The resulting beta coefficient and statistical maps for each condition were projected to the SUMA standard cortical space, enabling second-level group analysis across participants.
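The sketch below illustrates the logic of this model for a single voxel, approximating AFNI's BLOCK(12,1) regressor with a 12-s boxcar convolved with a generic gamma-shaped HRF; the onsets, HRF parameters, and data are illustrative rather than the exact AFNI implementation:

```r
# Sketch: block-design GLM for one voxel
tr       <- 2
n_vol    <- 156
onsets_s <- c(12, 84, 156, 228, 300)   # hypothetical onsets of one stimulus, once per block
dur_s    <- 12                         # modelled duration of each presentation

boxcar <- numeric(n_vol)
for (o in onsets_s) boxcar[seq(o / tr + 1, length.out = dur_s / tr)] <- 1

t_hrf <- seq(0, 30, by = tr)
hrf   <- dgamma(t_hrf, shape = 6, scale = 1)                  # gamma-shaped HRF (illustrative)
reg   <- convolve(boxcar, rev(hrf), type = "open")[1:n_vol]   # task regressor

set.seed(2)                                                   # synthetic data for illustration
motion <- matrix(rnorm(n_vol * 6), ncol = 6)                  # six motion nuisance regressors
y      <- as.vector(0.8 * reg + motion %*% rnorm(6, sd = 0.1) + rnorm(n_vol))

fit <- lm(y ~ reg + motion)            # OLS fit; the 'reg' beta is the condition estimate
coef(summary(fit))["reg", ]
```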

2.6. Region of Interest Analysis

To extract model parameters and compare the results of the deaf and hearing participants, we defined a bilateral set of regions of interest (ROIs) in sensory, motor, and control regions, based on the Human Connectome Project (HCP) multimodal parcellation (MMP) atlas (Glasser et al. 2016) available in AFNI/SUMA (Figure 2B; see Table S2 for a full list of ROIs and their corresponding parcel indices). For statistical analysis, individual ROIs were grouped into main regions, roughly following the grouping criteria in Glasser et al. (2016). The ROIs for Experiment 1 (block-design) included: early visual cortex (V1, V2, and V3); early auditory cortex (A1, medial belt, lateral belt, parabelt, and retro-insular cortex); associative auditory cortex (A4, A5, STSdp, STSda, STSvp, STSva, STGa, and TA2); intraparietal cortex (IPS, including AIP, VIP, MIP, LIPv, and LIPp); somatomotor cortex (areas 1, 2, 3a, 3b, and 4); and dorsolateral cortex (8Av, 8Ad, and 8C). The individual statistical parametric maps were projected to the standard cortical surface and used for ROI-based group analyses and group-level activation maps (for visualization purposes). For the ROI analysis, we extracted both beta and t values at the surface node level. Then, within a given ROI, we selected the subset of nodes whose absolute t values exceeded the 0.7 quantile of the absolute t distribution. Thresholding on the absolute t distribution focuses the analysis on the most responsive vertices, whether positive or negative, regardless of the sign of the BOLD signal, reducing the influence of low-activation or noisy nodes and improving sensitivity to task-related effects. The mean beta values of the selected nodes were used for the group-level ROI analysis.
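For one participant and one ROI, the node-selection and averaging step amounts to the following R sketch (synthetic beta and t values for illustration):

```r
# Sketch: ROI summary entering the group-level analysis
set.seed(3)
n_nodes <- 500
beta <- rnorm(n_nodes, mean = -0.2, sd = 0.5)     # per-node beta values (illustrative)
tval <- rnorm(n_nodes, mean = -1.0, sd = 1.0)     # per-node t values (illustrative)

keep     <- abs(tval) > quantile(abs(tval), 0.7)  # most responsive nodes, either sign
roi_beta <- mean(beta[keep])                      # one value per participant and ROI
roi_beta
```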

FIGURE 2. Group activation maps and ROI definition. (A) Group beta value maps for two ring stimuli (ring 1: upper row; ring 2: lower row) in deaf (t(14) = ±2.3, p < 0.01, uncorrected) and hearing participants (t(13) = ±2.3, p < 0.01, uncorrected). Warm colors represent positive beta values (corresponding to positive BOLD responses), while cool colors indicate negative beta values (negative BOLD responses). Deaf participants exhibited broader negative BOLD responses in temporal cortical regions compared to hearing participants. Maps are shown for illustration only; all statistical tests were performed on ROI-level data. (B) Regions of interest (ROIs) from the right hemisphere, drawn based on the Glasser atlas (Glasser et al. 2016). The analysis included a bilateral set of ROIs, with corresponding regions in the left hemisphere. These include early visual (orange), early auditory (red), associative auditory (blue), intraparietal sulcus (IPS, purple), somatomotor (cyan), and dorsolateral regions (black).

Statistical analysis at the group level was performed on each ROI using a one-sample t-test to assess whether the ROI signal differed significantly from zero, that is, whether it showed a significant positive BOLD response (PBR) or negative BOLD response (NBR). This distinction is important because positive and negative BOLD signals do not simply reflect inverse responses within a shared mechanism. Specifically, PBRs are related to increased cerebral blood flow to areas with increased neural activity. NBRs, on the other hand, are potentially related to different mechanisms, such as active neuronal suppression and/or vascular mechanisms (e.g., active blood regulation; Goense et al. 2016; Shmuel et al. 2002, 2006; Smith et al. 2004). In this investigation, we are predominantly interested in the potential involvement of NBRs in cross-modal effects. Given the mechanistic asymmetry between PBRs and NBRs, we therefore examined whether each group exhibited positive or negative BOLD responses via one-sample tests against zero.

Moreover, we performed a two-sample t-test per ROI to assess group differences. It is important to note that a difference between groups does not, by itself, provide information about the sign of the BOLD response within each group.

The ROIs for Experiment 2 (retinotopic mapping) included the early visual, early auditory, and associative auditory areas, as well as a control ROI in the white matter. The white matter ROI was defined based on the individual segmentation pipeline of FreeSurfer (version 6.0, http://surfer.nmr.mgh.harvard.edu/). The cortical atlas was projected onto the standard cortical surface of each individual participant and then resampled to volumetric space. Model parameters were extracted for all voxels in a chosen ROI.

2.7. tSNR Analysis

To ensure that any observed BOLD signal differences were not driven by variations in signal quality, we conducted a temporal signal-to-noise ratio (tSNR) analysis. tSNR values were extracted and averaged within our predefined ROIs, and the resulting distributions were compared between deaf and hearing participants using two-tailed independent t-tests for each ROI.
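A minimal sketch of the tSNR computation for one ROI, assuming a preprocessed time-series matrix (synthetic values shown):

```r
# Sketch: temporal SNR for one ROI ('ts' is a volumes x voxels matrix, synthetic here)
set.seed(4)
ts <- matrix(rnorm(156 * 200, mean = 1000, sd = 20), nrow = 156)

tsnr_voxel <- apply(ts, 2, function(v) mean(v) / sd(v))  # tSNR per voxel
tsnr_roi   <- mean(tsnr_voxel)                           # ROI-averaged tSNR per participant
tsnr_roi

# Collecting tsnr_roi across participants, the groups are then compared per ROI with a
# two-tailed independent t-test, e.g. t.test(tsnr_deaf, tsnr_hearing).
```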

2.8. Population Receptive Field Analysis

Population receptive field (pRF) modelling (Dumoulin and Wandell 2008) was conducted in R (R Core Team 2014) following previous retinotopic mapping procedures (Dumoulin and Wandell 2008; Fracasso, Petridou, and Dumoulin 2016). Briefly, a set of predicted pRF time series was generated using a 2D Gaussian model with three parameters: x and y (horizontal and vertical coordinates of the receptive field center, based on a 28 × 28 grid of preferred visual locations) and σ (pRF size, ranging from 0.5° to 11° of the visual field). The predicted time series were calculated by convolving the stimulus sequence of a given pRF with a canonical hemodynamic response function (HRF; Friston et al. 1998; 6-s peak latency and 12-s decay latency). For each voxel, the best-fitting pRF was obtained by iteratively fitting all predicted pRF time series by means of a general linear model (GLM), storing the predicted pRF that yielded the highest variance explained (R²) under the constraint of a positive beta value ("positive model"). The extracted model parameters included x and y position, pRF size, the model slope (beta value), polar angle, eccentricity, and variance explained (R²). Variance explained values were used to threshold the data for visualization of the maps that were projected on the reconstructed cortical surface of each participant through AFNI/SUMA (Saad et al. 2004; Saad and Reynolds 2012). In the visual areas, the threshold was set to 20% of variance explained. In the whole-brain and auditory analyses, the threshold was set to 8% of variance explained; this value was set to be higher than the mean variance explained obtained from the white matter in all three participants. Mean white matter R² values were 0.06 ± 0.09 for the hearing control (HC) participant, 0.08 ± 0.13 for participant deaf 01 (D1), and 0.06 ± 0.1 for participant deaf 02 (D2). It is important to note that the pRF method and its different implementations have been successfully used in the past to provide evidence about the plasticity and the stability of the visual system in cases of congenital (Fracasso, Koenraads, et al. 2016) and acquired deficits (Levin et al. 2010).
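The grid-search logic of the pRF fit can be sketched as follows for a single voxel; note that the candidate grid, HRF, and data below are deliberately simplified and synthetic (the actual analysis used the full 28 × 28 grid of centers and the canonical HRF parameters given above):

```r
# Sketch: grid-search pRF fit for a single voxel ("positive model")
grid_n <- 28
n_vol  <- 84
tr     <- 2
set.seed(5)
apertures <- matrix(rbinom(n_vol * grid_n^2, 1, 0.1), nrow = n_vol)  # time x pixels stimulus masks

ax <- seq(-7, 7, length.out = grid_n)            # visual field coordinates of the grid (deg)
gx <- rep(ax, each = grid_n)
gy <- rep(ax, times = grid_n)

t_hrf <- seq(0, 30, by = tr)
hrf   <- dgamma(t_hrf, shape = 6, scale = 1)     # stand-in for the canonical HRF

cand <- expand.grid(x = seq(-7, 7, length.out = 8),
                    y = seq(-7, 7, length.out = 8),
                    sigma = c(0.5, 1, 2, 4, 8, 11))   # candidate pRF centers and sizes

pred_ts <- apply(cand, 1, function(p) {
  g  <- exp(-((gx - p["x"])^2 + (gy - p["y"])^2) / (2 * p["sigma"]^2))  # 2D Gaussian pRF
  nh <- as.vector(apertures %*% g)                                      # stimulus-pRF overlap per volume
  convolve(nh, rev(hrf), type = "open")[1:n_vol]                        # predicted BOLD time series
})

y <- rnorm(n_vol)                                # voxel time series (synthetic)
fits <- apply(pred_ts, 2, function(p) {
  f <- lm(y ~ p)
  c(beta = unname(coef(f)[2]), r2 = summary(f)$r.squared)
})

ok   <- fits["beta", ] > 0                       # "positive model": only positive-beta fits compete
best <- which.max(ifelse(ok, fits["r2", ], -Inf))
cand[best, ]                                     # x, y, sigma of the winning pRF
```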

We also included a model for negative fluctuations of the blood-oxygen-level-dependent (BOLD) signal, in which the constraint of a positive beta value was removed, enabling us to explore "negative pRFs" ("negative model"). Whereas in traditional pRF modelling the best-fitting predictor is the one with the highest variance explained (R²) among predictors with a positive beta value, in the negative model we extracted the parameters of the pRF that yielded the highest variance explained (R²) regardless of the sign of the beta value. In this way, the best-fitting time series could be characterized by either positive or negative beta values.
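Relative to the positive model, only the selection step changes; a short continuation of the sketch above (it reuses the fits and cand objects defined there):

```r
# Negative model: the best predictor is selected on R2 alone, so fits with negative betas can win
best_pos <- which.max(ifelse(fits["beta", ] > 0, fits["r2", ], -Inf))  # positive model
best_any <- which.max(fits["r2", ])                                    # negative model

data.frame(model = c("positive", "negative"),
           cand[c(best_pos, best_any), ],
           beta  = fits["beta", c(best_pos, best_any)],
           r2    = fits["r2",   c(best_pos, best_any)])
```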

2.9. Bootstrapping Analysis

To compare the goodness of fit of the positive and negative pRF models, we performed a bootstrapping analysis of the variance explained values in the early visual cortex and the white matter ROIs. To test for potential differences between the models across a wide range of variance explained thresholds, bootstrapping was performed using quantile thresholding for each participant. Within each ROI, voxels were selected based on their variance explained values, thresholded in five quantile steps (ranging between 0.3 and 0.7). In each step, the variance explained values of the selected voxels were bootstrapped 1000 times with replacement. In each iteration, the difference between the medians of the two models was computed, as well as the 95% bootstrapped confidence interval of the difference. A bootstrapping analysis was also used to estimate the beta values of the negative pRF model in the predefined visual and auditory ROIs. This was done to test whether there are differences in the beta values of hearing and deaf participants, and whether these differences are specific to auditory areas. As in the variance explained bootstrapping analysis described above, voxels were selected for each participant at five quantile threshold steps, and the beta values were bootstrapped 1000 times with replacement. For each iteration, the median beta value was computed, as well as the 95% bootstrapped confidence intervals.
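A sketch of the bootstrap for one ROI is given below; the R² values are synthetic, and the choice to threshold on the positive model's R² is our simplification of the voxel-selection rule:

```r
# Sketch: bootstrap comparison of variance explained between the two models within one ROI
set.seed(6)
r2_pos <- rbeta(400, 2, 6)                     # R2 of the positive model, one value per voxel
r2_neg <- rbeta(400, 2, 6.2)                   # R2 of the negative model

q_thresholds <- seq(0.3, 0.7, by = 0.1)        # five quantile steps, as in the analysis

boot_ci <- sapply(q_thresholds, function(q) {
  keep <- r2_pos > quantile(r2_pos, q)         # voxel selection at this threshold
  diffs <- replicate(1000, {
    i <- sample(which(keep), sum(keep), replace = TRUE)
    median(r2_neg[i]) - median(r2_pos[i])      # difference of medians per iteration
  })
  c(quantile(diffs, c(0.025, 0.975)), median = median(diffs))
})
colnames(boot_ci) <- paste0("q", q_thresholds)
boot_ci                                        # 95% CI and median difference per threshold
```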

3. Results

This study aimed to explore the organization and spatial properties of cross-modal visual responses in the auditory cortex of congenitally deaf participants. It comprised a block-design paradigm for the deaf and hearing groups and exploratory pRF modelling in a small subset of participants. In both lines of analysis, our modelling strategy accounted for positive and negative responses, thus characterizing the contribution of both positive and negative BOLD signals to cross-modal processing.

3.1. Experiment 1: Block‐Design

To investigate cortical response patterns to visual stimuli, we computed group-level activation maps for each visual condition relative to baseline activity. To illustrate the general pattern of evoked responses, one-sample t-tests were performed for each ring and wedge stimulus across participants in each group. Figure 2A shows the group-averaged beta value maps for two ring stimuli in deaf (n = 15) and hearing (n = 14) participants. The activation patterns in the early visual cortex were similar between the two groups and aligned with the main axis of eccentricity mapping: posterior regions responded to foveal stimuli (ring 1, Figure 2A, upper row), while responses expanded anteriorly as the stimuli moved to more peripheral areas of the visual field (ring 2, Figure 2A, lower row). Differences between the groups emerged in the negative BOLD responses. Deaf participants showed more extensive and pronounced negative BOLD responses in temporal regions compared to hearing participants, suggesting group-specific differences in cross-modal processing in auditory-related cortical areas.

To assess the statistical significance of the observed differences and to test whether these differences were specific to auditory regions or extended to other sensory and control areas, we conducted a group-level ROI analysis using a set of bilateral ROIs. These included auditory areas (early and associative auditory areas), early visual cortex and somatomotor cortex as sensory controls, and other areas (IPS and dorsolateral cortex) to assess broader patterns of cortical responses (see Figure 2B and Table S2 for details of ROI selection). Beta values were extracted from each ROI for all participants and stimulus conditions (see Methods for thresholding details). For each participant, we averaged the beta values across the four ring stimuli and across the four wedge stimuli, resulting in two values per participant: one for the ring stimuli and one for the wedge stimuli. We conducted one-sample t-tests on these values to determine whether the responses represented significantly positive or negative BOLD activity across groups. Figure 3 presents the distribution of responses across participants for each ROI and condition (ring and wedge). While both groups showed positive BOLD responses in early visual regions, negative beta values were observed mainly in the deaf group, specifically in early and associative auditory cortices (see t-test results in Table S3). Specifically, negative cross-modal responses were found bilaterally in both early and associative auditory cortices for ring stimuli, and in the right hemisphere for wedge stimuli (see Table S4 for the results of t-tests between the groups for each ROI). To further illustrate the group-level difference in the distribution of positive and negative responses, Table 1 summarizes the number and percentage of participants in each group who showed negative versus positive mean beta values in the auditory ROIs. This highlights the clear shift in response prevalence between groups, with most deaf participants showing negative BOLD responses, in contrast to a more mixed pattern in the hearing group.

FIGURE 3. ROI analysis. Boxplots represent the beta values for each ROI in deaf (blue) and hearing (red) participants for ring and wedge stimuli. Asterisks indicate the results of one-sample t-tests on beta values (*p < 0.05, **p < 0.01, corrected), testing whether responses were significantly positive or negative. Significant negative beta values were observed only in auditory areas (early and associative) in the deaf group, highlighting unique response patterns compared to the hearing group.

TABLE 1.

Number and percentage of participants in each group showing negative or positive BOLD responses in auditory ROIs. Responses were classified based on the average beta value across ring and wedge conditions.

ROI                        Deaf (n = 15)             Hearing (n = 14)
                           NBR          PBR          NBR          PBR
Early auditory LH          11 (73.3%)   4 (26.7%)    4 (28.6%)    10 (71.4%)
Early auditory RH          13 (86.7%)   2 (13.3%)    6 (42.9%)    8 (57.1%)
Associative auditory LH    12 (80.0%)   3 (20.0%)    5 (35.7%)    9 (64.3%)
Associative auditory RH    14 (93.3%)   1 (6.7%)     9 (64.3%)    5 (35.7%)

To confirm the validity of our findings of negative BOLD responses in the deaf group and ensure that these differences were not artifacts of lower signal quality, we performed an analysis of the temporal signal‐to‐noise ratio (tSNR). We compared the tSNR values extracted from our ROIs and found no significant differences between the groups (see Figure S1 for the distribution of tSNR values, and Table S5 for t‐test results). This result supports the interpretation that the observed negative BOLD responses in deaf participants reflect genuine physiological differences, rather than being influenced by differences in signal quality.

Seven of the fifteen deaf participants in our study reported using hearing aids (HA) during early development (for 2–10 years), although none achieved speech comprehension or sound localization. To test whether prior HA experience influenced the observed cross-modal responses, we compared beta values between HA users and non-users across all ROIs (averaging ring and wedge conditions). No significant group differences were found (all other p > 0.18), except for a single effect in the left IPS (t(13) = −2.884, p = 0.007). Moreover, no differences emerged in the auditory ROIs. Correlation analysis between years of HA use and auditory beta values revealed no significant associations (LH: r = −0.279, p = 0.334; RH: r = −0.349, p = 0.216), suggesting that inter-individual variability in auditory responses was not explained by hearing aid history.

When examining the beta values in the early visual cortex for the wedge stimuli (Figure 3, left ROI), we observed greater variability in responses between the deaf and hearing groups. This variability suggests potential differences in how the two groups process lateralized visual stimuli. To explore this further, we focused on the responses to left and right wedge stimuli, which are expected to elicit distinct patterns of activity in the left and right hemispheres of the early visual cortex. Figure 4 illustrates the group activation maps for wedge stimuli alongside boxplots summarizing beta values in the early visual cortex of each hemisphere for both groups. The maps suggest that compared to the hearing group, deaf participants exhibit broader patterns of negative BOLD in response to ipsilateral stimulation (i.e., negative beta values in the left hemisphere in response to left wedge stimulus). However, this difference was not statistically significant, as indicated by the distribution of individual responses (Figure 4, bottom) and confirmed by direct group comparison (Welch two‐sample t‐test; early visual LH, left wedge: t(26.82) = −1.68, p = 0.10; early visual LH, right wedge: t(26.82) = −0.75, p = 0.46; early visual RH, left wedge: t(26.82) = −1.90, p = 0.07; early visual RH, right wedge: t(26.82) = −1.87, p = 0.07).

FIGURE 4. Group activation maps and ROI analysis for wedge stimuli. (Top) Group activation maps show beta values for left and right wedge stimuli in the deaf and hearing groups, visualized on inflated cortical surfaces for both hemispheres. (Bottom) Boxplots depict the beta values extracted from the early visual cortex ROIs in the left and right hemispheres for the wedge-left and wedge-right stimuli. Asterisks indicate significant deviations from zero (one-sample t-test, *p < 0.05, **p < 0.01).

Finally, to assess whether stronger positive BOLD responses in visual cortex are associated with more pronounced negative BOLD responses in auditory cortex, we computed within‐participant correlations between early visual cortex and auditory ROIs. No significant correlations were found in either group (Deaf LH: r = 0.094, p = 0.739; Deaf RH: r = 0.296, p = 0.285; Hearing LH: r = −0.056, p = 0.850; Hearing RH: r = 0.054, p = 0.856). This result suggests that the variability in cross‐modal NBR is not directly linked to the amplitude of the visual PBR. These findings are consistent with a recent report showing no systematic association between PBR and NBR amplitudes within or across sensory modalities (Nelson and Mayhew 2025).

3.2. Experiment 2: Retinotopic Mapping and pRF Analysis

The results from the first experiment indicated that group differences in processing low-level visual stimuli were primarily localized to the auditory cortex. In deaf participants, these cross-modal responses were predominantly characterized by negative BOLD signals. To further explore the spatial characterization of these responses, we conducted an exploratory retinotopic mapping experiment with two deaf participants and one hearing participant. This follow-up experiment aimed to provide preliminary insight into the spatial tuning of cross-modal responses by modelling both positive and negative pRFs.

3.2.1. Visual Representation in the Visual Cortex

First, we analyzed the data using conventional pRF modelling, in which the selection of the best-fitting predictor was restricted to predictors with positive beta values ("positive model"), thus fitting a positive pRF to each voxel. Individual retinotopic maps, presented on the inflated cortical surface (Figure 5), reveal the known topographic organization of early visual cortex. In both hearing and deaf participants, eccentricity preferences range from fovea to periphery along the anterior-posterior axis (Figure 5A). Polar angle maps demonstrate the lateralization of pRFs, as voxels in early visual cortex represent the contralateral hemifield (Figure 5B). The reversals in the representation between the vertical and horizontal meridians correspond to the borders between early visual areas (V1, V2, and V3), as delineated based on the HCP MMP atlas (Glasser et al. 2016; Figure 5B).

FIGURE 5. Retinotopic organization in early visual cortex. Eccentricity (A) and polar angle (B) maps derived from positive pRF modelling are represented along the early visual cortex of hearing control and deaf participants. Threshold was set to R² ≥ 0.20. Dashed lines delineate the calcarine sulcus (white) and the borders between V1, V2, and V3 (black), defined based on the projection of the HCP MMP atlas (Glasser et al. 2016) onto each individual cortical surface. HC: hearing control; D1: deaf participant 1; D2: deaf participant 2.

Next, we explored the negative pRF model and compared the extracted parameters between the negative and positive models. First, we compared the goodness of fit of the two models, as reflected in the variance explained maps. Figure 6A shows that both models perform similarly and provide good fits to the data in early visual cortex. Mean variance explained values (at a threshold of R² > 0.2) for the positive and negative models were 0.54 (±0.17) and 0.53 (±1.8) for the hearing participant; 0.62 (±0.18) and 0.59 (±0.20) for participant D1; and 0.51 (±0.16) and 0.49 (±1.7) for participant D2. Inspection of the beta values assigned by each model (Figure 6B) indicates that, in both the positive and negative models, most responses in the early visual cortex were best fitted by positive predictors. The negative pRF model covers additional areas compared to the positive model, mainly in the cuneus and medial parietal areas (see Szinte and Knapen 2020 for a similar result).

FIGURE 6. Positive and negative pRF modelling in the early visual cortex. (A) Variance explained by the negative (upper row) and positive (lower row) models, presented on the medial surface of early visual cortex. (B) Beta values of the negative and positive models, color coded by their sign. Maps were thresholded at R² ≥ 0.2. (C) Difference in goodness of fit between the negative and positive pRF models. The difference (and the 95% confidence intervals) in variance explained values obtained by the two pRF models is presented at five different quantile thresholds. The two models resulted in similar R² values, except at the lowest thresholds in the deaf participants (D1 and D2), at which the positive model provided higher R².

To quantitatively test whether there is a difference in the performance of the two models, we compared their goodness of fit in the retinotopically organized early visual cortex, as well as in the white matter. The difference in variance explained (R² values) between the negative and positive pRF models was calculated using a bootstrapping quantile analysis at five different thresholds (Figure 6C). The results in the early visual cortex indicate that, overall, the two models yielded similar R² values, except at the lowest thresholds in the deaf participants (Figure 6C, middle and lower left panels), at which the positive model provided higher R². No differences between the models were found in the white matter ROI.

3.2.2. Visual Representation in the Auditory Cortex

The positive and negative models provide similar results for deaf and hearing participants when applied to early visual areas. However, differences between the models are evident in cortical locations beyond visual areas, and specifically in the auditory cortex (see Figure S2 for a comparison between the two pRF models). Figure 7A presents the beta values of the best-fitting predictor at each cortical location for the model with both positive and negative predictors. The maps are presented at a lower variance explained threshold (R² ≥ 0.08), which was set to be higher than the average variance explained in the white matter (mean white matter R² values were 0.06 ± 0.09, 0.08 ± 0.13, and 0.06 ± 0.1 for participants HC, D1, and D2, respectively).

FIGURE 7. pRF analysis reveals distinct response patterns in auditory areas for deaf and hearing participants. (A) Beta values of the pRF model are visualized on cortical surfaces for the hearing control (HC) and the two deaf participants (D1, D2). Warm colors (yellow-red) represent positive beta values, while cool colors (cyan-blue) indicate negative beta values. Maps are thresholded at R² ≥ 0.08. (B) Bootstrapping analysis of the estimated beta values in early visual, early auditory, and associative auditory ROIs. Each data point represents the median beta value, with 95% confidence intervals, for a given R² quantile.

The maps show that in the two deaf individuals (but not in the hearing control) most of the responses beyond visual cortex are characterized by negative beta values, mainly in the frontal and temporal lobes. This is most visible for participant D1, who shows prominent clusters in auditory areas including the superior temporal sulcus (STS), superior temporal gyrus (STG), and Heschl's gyrus (HG), and to a lesser degree for participant D2. In both deaf participants, additional clusters were also localized to multisensory areas at the temporoparietal junction. Figure 7B presents a bootstrapping analysis of the estimated beta values of the negative pRF model, which allows for positive as well as negative beta values, in visual and auditory ROIs. The results indicate that, regardless of the chosen threshold, early visual areas are characterized by positive beta values (in both hearing and deaf participants), whereas the responses in the auditory cortex of the deaf participants (but not the hearing participant) are best fitted by negative beta values.

Finally, we explored the spatial tuning properties of the cross-modal visual responses in the auditory areas delineated by the negative model. Because visual responses were found only in the AC of the deaf participants, and only for negative pRF tuning, we focused on these participants and this model in the following exploratory analysis. Model parameters representing polar angle, eccentricity, horizontal and vertical location, and pRF size (Figure 8) were extracted only from voxels with negative beta values. Polar angle maps (Figure 8A) show that most of the responses in the right hemisphere represent the contralateral visual field, similarly to the known laterality observed in early visual cortex. Eccentricity maps (Figure 8B) suggest that the pRFs in the AC mostly represent central areas of the visual field, with foveal and near-foveal preferences (values smaller than 2°–3° of visual angle). This is also reflected in the horizontal and vertical location maps (Figure 8C,D, respectively), which point to a preference for representing the left visual field. Finally, the pRF size maps (Figure 8E) show large values, indicating large receptive fields, in the absence of a consistent gradient. In summary, the exploratory pRF analysis in the two deaf participants revealed visual-evoked responses in auditory areas. These responses appeared to show a patchy representation of the contralateral hemifield, with large receptive fields centered near the fovea.

FIGURE 8. Visual feature representations in the AC of the deaf participants. Polar angle (A), eccentricity (B), horizontal location (C), vertical location (D), and pRF size (E) parameters presented on an enlarged view of the temporal lobe of the deaf participants. Maps are thresholded at R² ≥ 0.08. Dotted lines delineate the HG and STS.

4. Discussion

In this study, we explored cross-modal processing in the auditory cortex of congenitally deaf individuals in response to passive viewing of simple visual stimulation. Evidence from the block-design experiment shows that these responses were characterized by negative BOLD signals in early and associative auditory areas. From a descriptive point of view, pRF modelling suggests that these cross-modal responses might be spatially tuned, displaying a contralateral bias and large receptive fields, although it is important to note that the pRF data are only exploratory, given that results from only three participants are presented. These findings suggest that cross-modal responses might be represented through deactivation signals in the reorganized auditory cortex of congenitally deaf individuals.

4.1. Negative BOLD and Cross‐Modal Plasticity

A key finding of our study is that visual‐driven responses in deaf individuals were characterized by negative beta values, indicating a below‐baseline deflection of the BOLD signal—i.e., negative BOLD responses.

Negative BOLD responses (NBRs) in early sensory and motor cortices are usually related to stimuli presented in the ipsilateral visual field (Fracasso et al. 2018, 2021; Smith et al. 2004), as well as to ipsilateral hand movement (Hamzei et al. 2002) or tactile stimulation of one side of the body (Kastrup et al. 2008; Tal et al. 2017). Although the physiological basis of NBRs remains only partially understood, research has provided important evidence suggesting that these negative responses reflect a functionally relevant measure of neuronal deactivation, as NBRs are correlated with corresponding decreases in cerebral blood flow, oxygen consumption, and neuronal activity (Boorman et al. 2010; Mullinger et al. 2014; Shmuel et al. 2002, 2006; Stefanovic et al. 2004). Accumulating evidence has shown that, like positive BOLD responses, NBRs contain stimulus-specific information, suggesting that these responses contribute to building up the sensory percept (Bressler et al. 2007; Szinte and Knapen 2020; Tal et al. 2017; Zeharia et al. 2012). In early sensory cortex, NBRs can be found proximal to areas exhibiting positive BOLD responses and have been suggested to be mediated by a lateral suppression mechanism (Boorman et al. 2010). For example, early visual areas that are tuned to foveal representation respond with positive BOLD signals to foveal visual stimuli and with negative BOLD signals when the stimuli are presented in the periphery, and vice versa (Shmuel et al. 2002, 2006). NBRs are also evident in the ipsilateral hemisphere following a lateralized visual or somatosensory stimulus or a motor task (Fracasso et al. 2018; Gouws et al. 2014; Hlushchuk and Hari 2006; Kastrup et al. 2008; Klingner et al. 2010; Tal et al. 2017).

Importantly, NBRs have also been reported in cortical areas of non-corresponding sensory modalities, manifested as negative cross-modal responses, including visual-induced deactivation of the auditory cortex (Laurienti et al. 2002; Mozolic et al. 2008) and negative responses in visual areas following auditory or tactile stimulation (Hairston et al. 2008; Kawashima et al. 1995; Laurienti et al. 2002; Merabet et al. 2007; Sadato et al. 1996; Tal et al. 2016; Weisser et al. 2005). For example, Laurienti et al. (2002) used fMRI in a passive stimulation paradigm with three conditions (a visual stimulus alone, an auditory stimulus alone, and a combined visual-auditory stimulus) and concluded that visual stimulation resulted in negative BOLD signals in the AC, namely along the superior and middle temporal gyri with a peak in Brodmann area 22. A similar effect occurred in the auditory condition, in which negative BOLD was observed in extrastriate visual areas, but not in response to the combined visuo-auditory condition, which resulted in positive BOLD responses.

Our results provide additional evidence for negative cross‐modal responses in the auditory cortex, and might suggest that these responses are partially tuned to specific parts of the visual field. However, these findings contrast with previous studies of cross‐modal plasticity which collectively point to recruitment of the deprived auditory areas for visual or somatosensory tasks (Bola et al. 2017; Karns et al. 2012; Scott et al. 2014; Scurry et al. 2020; Shiell et al. 2015; Vachon et al. 2013). What could be the source for this discrepancy?

One possible explanation is a general bias in the literature toward reporting positive modulations of the BOLD signal, rather than BOLD deactivations. Many previous cross‐modal studies (Benetti et al. 2021; Scott et al. 2014; Shiell et al. 2015; Simon et al. 2020; Vachon et al. 2013) rely on contrast‐based analyses between experimental conditions or groups, which do not reveal activity patterns relative to baseline. While some studies have reported deactivations (Karns et al. 2012), these findings were typically presented in the context of ROI analysis (Benetti et al. 2021; Bola et al. 2017; Twomey et al. 2017), and were not further characterized.

Recent studies suggest that NBRs may be much more prevalent than previously recognized, although their detection is highly task-dependent (Gouws et al. 2014). Higher task demands have been shown to increase NBRs substantially (Gonzalez-Castillo et al. 2015), with task difficulty determining the degree of deactivation (Hairston et al. 2008). NBRs can also be observed in response to passive viewing of stimuli (Mayhew et al. 2013). However, because the amplitude of NBRs is substantially lower than that of positive BOLD responses (Gonzalez-Castillo et al. 2015; Harel et al. 2002), their detection requires a higher signal-to-noise ratio. This could be achieved by obtaining a large dataset (large groups of participants or substantially longer fMRI acquisitions from single participants) or by using a stronger magnetic field (Jorge et al. 2018). The use of a passive viewing task in the present study likely minimized top-down attentional engagement. It remains an open question whether auditory deactivations in deaf individuals would be amplified or modulated during cognitively active and task-relevant conditions, particularly in contexts where deaf individuals are known to exhibit enhanced performance. For example, spatial localization of visual stimuli may elicit different patterns of auditory deactivation compared to tasks involving visual motion detection. Additionally, in our study, we found no correlation between the magnitude of positive BOLD responses in visual cortex and negative BOLD responses in auditory regions. However, it would be informative to test whether negative BOLD responses evoked by different sensory modalities (e.g., visual vs. somatosensory stimulation) show consistent patterns or within-subject correlations, similar to recent findings showing NBR-to-NBR consistency across modalities (Nelson and Mayhew 2025). Future studies exploring the prevalence of cross-modal NBRs under different behavioral tasks and cognitive-load conditions could provide important insights into their role in cross-modal plasticity.

Here, by directly comparing responses to baseline, without applying spatial smoothing, our approach enabled us to detect task-dependent negative BOLD responses at the single-subject level, specifically in deaf individuals. Given the novelty of our method and our limited sample size in the pRF measurements, we interpret these results with caution. Moreover, some of our MR acquisition parameters and aspects of the experimental design were not optimized for sensitivity in detecting NBRs. In particular, the relatively short inter-block interval (12 s) may have limited the full recovery of the BOLD signal to true baseline between blocks, thereby reducing our ability to detect robust negative responses. Despite these constraints, because all participants were exposed to identical stimuli under the same conditions and the two groups did not differ in fMRI signal quality, it is reasonable to attribute the observed differences between deaf and hearing participants to life-long sensory deprivation and related functional and anatomical plasticity.

It is important to note that, although our subgroup analysis revealed no significant differences in visually driven responses between HA users and non-users in the auditory cortex, prior work shows that even limited auditory input and hearing-aid use can shape cross-modal plasticity. Auditory input, whether degraded or partial, can modulate the extent and pattern of cortical reorganization in auditory and multisensory regions; it can alter functional connectivity and audiovisual integration and may potentially inhibit or reverse the recruitment of the auditory cortex for visual processing (Ding et al. 2015; Pereira-Jorge et al. 2018; Rosemann et al. 2021; Shiell et al. 2015). These studies highlight an important future direction: systematically characterizing how residual auditory input and different forms of assisted hearing shape the balance of cross-modal activation and deactivation in congenital deafness.

4.2. Preliminary Observations and Future Perspectives

As described above, converging evidence suggests that NBRs are much more prevalent than commonly reported, and that these responses might be relevant to different aspects of sensory-motor and cognitive processing. In a recent study, Szinte and Knapen (2020) explored the role of the default mode network (DMN) in visual processing. By allowing the pRF model to capture both positive and negative BOLD responses, they showed that the DMN selectively deactivates as a function of the position of a visual stimulus. Their study was the first to use the term "negative pRFs" to refer to signals that are better predicted by deactivation and are tuned to the spatiotemporal features of the visual stimuli. Negative pRFs were subsequently found in non-human primates, in a study that modelled whole-brain fMRI signals together with neurophysiological multi-unit recordings from the visual cortex (Klink et al. 2021). Here, we provide one of the first applications of negative pRF modelling in the context of cross-modal plasticity. While the exploratory nature of this analysis prevents us from drawing strong inferences, it raises a few speculative ideas and open questions that could guide future studies.

The dynamic balance between positive and negative activity might confer computational benefits and increase processing efficiency within and across modalities. That is, encoding dissimilar features might provide an effective strategy for perceptual inference (Goncalves and Welchman 2017). For example, computational modelling has shown that a mixed population of neurons with mismatched tuning for two different sensory cues (such as visual and vestibular movement direction) reduces errors in decoding the possible sources of sensory inputs (Kim et al. 2016).
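The following toy simulation (not the Kim et al. 2016 model; all quantities are hypothetical) illustrates the point in its simplest linear form: when a population contains only “congruent” units that sum the two cues, a linear readout cannot isolate the visual source, whereas adding “opposite” units with mismatched tuning makes the sources separable and decoding error drops toward the noise floor.

```python
# Toy illustration of why mismatched ("opposite") tuning aids source decoding.
import numpy as np

rng = np.random.default_rng(0)
n_trials = 2000
s_vis = rng.normal(size=n_trials)    # visual cue (e.g., heading from optic flow)
s_ves = rng.normal(size=n_trials)    # vestibular cue

def responses(weights, noise=0.5):
    """Each row of `weights` is one unit's (visual, vestibular) tuning."""
    clean = np.column_stack([s_vis, s_ves]) @ weights.T
    return clean + noise * rng.normal(size=clean.shape)

congruent_only = np.array([[1.0, 1.0]] * 4)                    # every unit sums both cues
mixed_tuning = np.array([[1.0, 1.0]] * 2 + [[1.0, -1.0]] * 2)  # half are "opposite" units

for name, W in [("congruent only", congruent_only), ("mixed tuning", mixed_tuning)]:
    R = responses(W)                                 # trials x units
    w, *_ = np.linalg.lstsq(R, s_vis, rcond=None)    # linear readout of the visual cue
    err = np.mean((R @ w - s_vis) ** 2)
    print(f"{name}: mean squared decoding error for the visual cue = {err:.3f}")
```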

Although NBRs might arise from different neuronal sources under different experimental paradigms and attentional demands (including lateral inhibition at the local scale, interhemispheric inhibitory connections, or top‐down modulation), these negative modulations occur in response to unattended stimuli, that is, in areas beyond the corresponding receptive field or in the unattended sensory modality (Nelson and Mayhew 2025). Thus, NBRs might sharpen the overall neural population tuning and operate as a filter for non‐relevant information both within and across sensory modalities. If deactivating or suppressing non‐relevant information supports perceptual integrity and attention to sensory information (Hairston et al. 2008), it would be interesting to test whether the negative responses are correlated with the enhanced performance of deaf individuals in some visual tasks, mainly in the peripheral visual field (Bavelier et al. 2000; Bosworth and Dobkins 2002; Neville and Lawson 1987). For example, we can speculate that suppression of the foveal visual field may contribute to the ability to distinguish between central and peripheral visual representations, in accordance with the reported enhanced performance in peripheral visual tasks. The fact that the visually induced NBRs in the auditory areas were not accompanied by nearby PBRs (as previously found in peripheral V1) (Klink et al. 2021; Smith et al. 2004) suggests that the underlying mechanism is probably less related to local surround suppression or vascular stealing and might instead involve top‐down sensory or attentional modulations. Future studies employing larger samples and systematically modulated attentional paradigms could help validate this hypothesis by characterizing the selectivity of cross‐modal negative (and/or positive) pRFs in processing visual information and perhaps also shed light on the underlying mechanism of cross‐modal NBRs.

4.3. Concluding Remarks

Our findings show that visually driven responses in the auditory cortex of deaf individuals are characterized by signal deactivation, demonstrated through both group‐level analysis and preliminary retinotopic mapping. While most previous studies have focused on activations, our results suggest that negative responses, or deactivations, also play a crucial role in cross‐modal processing. We propose that incorporating deactivations into existing models and theories could offer a more comprehensive account of sensory reorganization in deprived systems, broadening our understanding of the mechanisms and functionality of cross‐modal plasticity.

Author Contributions

Z.T.: software, formal analysis, visualization, writing – original draft, review and editing. J.S.: software, formal analysis, visualization, writing – original draft. F.F.: investigation, resources. Y.B.: investigation, resources. J.A.: conceptualization, resources, writing – review and editing, supervision. A.F.: methodology, software, writing – review and editing, supervision.

Funding

This work was supported by Fundação para a Ciência e a Tecnologia, PTDC/PSI‐GER/30757/2017, 2023.07705.CEECIND/CP2832/CT0013, UI/BD/154438/2022, COMPETE2030‐FEDER‐00702600/15905. European Research Council, 802553. European Research Executive Agency Widening, 101087584. National Natural Science Foundation of China, 31930053. Biotechnology and Biological Sciences Research Council, BB/S006605/1. Bial Foundation Grants Programme, A‐29315.

Supporting information

Table S1: Demographic information of deaf participants

Table S2: ROI definition.

Table S3: t‐test results and Cohen's D values

Table S4: Two‐sample t‐tests between groups

Figure S1: tSNR analysis

Table S5: t‐test results for tSNR analysis

Figure S2: Variance explained of positive and negative pRF models. Variance explained by the model including both positive and negative predictors (left) and only positive predictors (right), presented on the lateral view of the brain. Maps are thresholded at R² ≥ 0.08. While the hearing participant does not appear to show clear differences between the two models, the maps of the deaf participants reveal more clusters in the frontal and temporal lobes for the model that includes negative beta values.

HBM-47-e70444-s001.docx (387.7KB, docx)

Acknowledgments

This research is supported by an FCT grant PTDC/PSI‐GER/30757/2017 and Programa COMPETE Portugal. J.A. and Z.T. are supported by funds from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program, Starting Grant number 802553, “ContentMAP”, attributed to J.A. J.A. is also supported by the European Research Executive Agency Widening program under the European Union's Horizon Europe Grant 101087584 “CogBooster”. Z.T. is also supported by the FCT CEEC/IND6ed funding program (https://doi.org/10.54499/2023.07705.CEECIND/CP2832/CT0013) and by FCT program COMPETE2030‐FEDER‐00702600, grant number 15905. J.S. is supported by an FCT grant for doctoral research (https://doi.org/10.54499/UI/BD/154438/2022). F.F. is supported by the National Natural Science Foundation of China (31930053). A.F. is supported by a grant from the Biotechnology and Biological Sciences Research Council (BBSRC, grant number: BB/S006605/1) and by the Bial Foundation Grants Programme (Grant ID: A‐29315, number: 203/2020, grant edition: G‐15516).

Tal, Z. , Sayal J., Fang F., Bi Y., Almeida J., and Fracasso A.. 2026. “The Neural Organization of Visual Information in the Auditory Cortex of the Congenitally Deaf.” Human Brain Mapping 47, no. 2: e70444. 10.1002/hbm.70444.

Contributor Information

Joana Sayal, Email: joana.campos@uc.pt.

Jorge Almeida, Email: jorgecbalmeida@gmail.com.

Data Availability Statement

The data that support the findings of this study are available on request from the corresponding author. The data are not publicly available due to privacy or ethical restrictions.

References

  1. Almeida, J. , He D., Chen Q., et al. 2015. “Decoding Visual Location From Neural Patterns in the Auditory Cortex of the Congenitally Deaf.” Psychological Science 26, no. 11: 1771–1782. 10.1177/0956797615598970. [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Almeida, J. , Kristensen S., Tal Z., and Fracasso A.. 2025. “Contentopic Mapping in Ventral and Dorsal Association Cortex: The Topographical Organization of Manipulable Object Information.” NeuroImage 321: 121514. 10.1016/J.NEUROIMAGE.2025.121514. [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Almeida, J. , Nunes G., Marques J. F., and Amaral L.. 2018. “Compensatory Plasticity in the Congenitally Deaf for Visual Tasks Is Restricted to the Horizontal Plane.” Journal of Experimental Psychology: General 147, no. 6: 924–932. 10.1037/xge0000447. [DOI] [PubMed] [Google Scholar]
  4. Amaral, L. , Ganho‐Ávila A., Osório A., et al. 2016. “Hemispheric Asymmetries in Subcortical Visual and Auditory Relay Structures in Congenital Deafness.” European Journal of Neuroscience 44, no. 6: 2334–2339. 10.1111/ejn.13340. [DOI] [PubMed] [Google Scholar]
  5. Amedi, A. , Raz N., Pianka P., Malach R., and Zohary E.. 2003. “Early ‘Visual’ Cortex Activation Correlates With Superior Verbal Memory Performance in the Blind.” Nature Neuroscience 6, no. 7: 758–766. 10.1038/nn1072. [DOI] [PubMed] [Google Scholar]
  6. Anticevic, A. , Cole M. W., Murray J. D., Corlett P. R., Wang X.‐J., and Krystal J. H.. 2012. “The Role of Default Network Deactivation in Cognition and Disease.” Trends in Cognitive Sciences 16, no. 12: 584–592. 10.1016/j.tics.2012.10.008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Bavelier, D. , Tomann A., Hutton C., et al. 2000. “Visual Attention to the Periphery Is Enhanced in Congenitally Deaf Individuals.” Journal of Neuroscience 20, no. 17: RC93. 10.1523/jneurosci.20-17-j0001.2000. [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Bedny, M. , Pascual‐Leone A., Dodell‐Feder D., Fedorenko E., and Saxe R.. 2011. “Language Processing in the Occipital Cortex of Congenitally Blind Adults.” Proceedings of the National Academy of Sciences of the United States of America 108, no. 11: 4429–4434. 10.1073/pnas.1014818108. [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Bell, L. , Wagels L., Neuschaefer‐Rube C., Fels J., Gur R. E., and Konrad K.. 2019. “The Cross‐Modal Effects of Sensory Deprivation on Spatial and Temporal Processes in Vision and Audition: A Systematic Review on Behavioral and Neuroimaging Research Since 2000.” Neural Plasticity 2019: 1–21. 10.1155/2019/9603469. [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Benetti, S. , Zonca J., Ferrari A., Rezk M., Rabini G., and Collignon O.. 2021. “Visual Motion Processing Recruits Regions Selective for Auditory Motion in Early Deaf Individuals.” NeuroImage 230: 117816. 10.1016/j.neuroimage.2021.117816. [DOI] [PubMed] [Google Scholar]
  11. Bola, Ł. , Zimmermann M., Mostowski P., et al. 2017. “Task‐Specific Reorganization of the Auditory Cortex in Deaf Humans.” Proceedings of the National Academy of Sciences of the United States of America 114, no. 4: E600–E609. 10.1073/pnas.1609000114. [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Boorman, L. , Kennerley A. J., Johnston D., et al. 2010. “Negative Blood Oxygen Level Dependence in the Rat: A Model for Investigating the Role of Suppression in Neurovascular Coupling.” Journal of Neuroscience 30, no. 12: 4285–4294. 10.1523/JNEUROSCI.6063-09.2010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Bosworth, R. G. , and Dobkins K. R.. 2002. “The Effects of Spatial Attention on Motion Processing in Deaf Signers, Hearing Signers, and Hearing Nonsigners.” Brain and Cognition 49, no. 1: 152–169. 10.1006/brcg.2001.1497. [DOI] [PubMed] [Google Scholar]
  14. Bottari, D. , Heimler B., Caclin A., Dalmolin A., Giard M.‐H., and Pavani F.. 2014. “Visual Change Detection Recruits Auditory Cortices in Early Deafness.” NeuroImage 94: 172–184. 10.1016/j.neuroimage.2014.02.031. [DOI] [PubMed] [Google Scholar]
  15. Brainard, D. H. 1997. “The Psychophysics Toolbox.” Spatial Vision 10, no. 4: 433–436. [PubMed] [Google Scholar]
  16. Bressler, D. , Spotswood N., and Whitney D.. 2007. “Negative BOLD fMRI Response in the Visual Cortex Carries Precise Stimulus‐Specific Information.” PLoS One 2, no. 5: e410. 10.1371/journal.pone.0000410. [DOI] [PMC free article] [PubMed] [Google Scholar]
  17. Carrasco, M. , Roberts M., Myers C., and Shukla L.. 2022. “Visual Field Asymmetries Vary Between Children and Adults.” Current Biology 32, no. 11: R509–R510. 10.1016/j.cub.2022.04.052. [DOI] [PMC free article] [PubMed] [Google Scholar]
  18. Codina, C. , Buckley D., Port M., and Pascalis O.. 2011. “Deaf and Hearing Children: A Comparison of Peripheral Vision Development.” Developmental Science 14, no. 4: 725–737. 10.1111/j.1467-7687.2010.01017.x. [DOI] [PubMed] [Google Scholar]
  19. Collignon, O. , Voss P., Lassonde M., and Lepore F.. 2009. “Cross‐Modal Plasticity for the Spatial Processing of Sounds in Visually Deprived Subjects.” Experimental Brain Research 192, no. 3: 343–358. 10.1007/s00221-008-1553-z. [DOI] [PubMed] [Google Scholar]
  20. Cox, R. W. 1996. “AFNI: Software for Analysis and Visualization of Functional Magnetic Resonance Neuroimages.” Computers and Biomedical Research 29, no. 3: 162–173. 10.1006/cbmr.1996.0014. [DOI] [PubMed] [Google Scholar]
  21. Cox, R. W. , and Hyde J. S.. 1997. “Software Tools for Analysis and Visualization of fMRI Data.” NMR in Biomedicine 10, no. 4–5: 171–178. [DOI] [PubMed] [Google Scholar]
  22. Dhanik, K. , Pandey H. R., Mishra M., Keshri A., and Kumar U.. 2024. “Neural Adaptations to Congenital Deafness: Enhanced Tactile Discrimination Through Cross‐Modal Neural Plasticity ‐ An fMRI Study.” Neurological Sciences 45: 5489–5499. 10.1007/s10072-024-07615-4. [DOI] [PubMed] [Google Scholar]
  23. Ding, H. , Qin W., Liang M., et al. 2015. “Cross‐Modal Activation of Auditory Regions During Visuo‐Spatial Working Memory in Early Deafness.” Brain 138, no. 9: 2750–2765. 10.1093/BRAIN/AWV165. [DOI] [PubMed] [Google Scholar]
  24. Dumoulin, S. O. , and Wandell B. A.. 2008. “Population Receptive Field Estimates in Human Visual Cortex.” NeuroImage 39, no. 2: 647–660. 10.1016/j.neuroimage.2007.09.034. [DOI] [PMC free article] [PubMed] [Google Scholar]
  25. Engel, S. , Glover G. H., and Wandell B. A.. 1997. “Retinotopic Organization in Human Visual Cortex and the Spatial Precision of Functional MRI.” Cerebral Cortex 7, no. 2: 181–192. [DOI] [PubMed] [Google Scholar]
  26. Engel, S. , Rumelhart D. E., Wandell B. A., et al. 1994. “fMRI of Human Visual Cortex.” Nature 369, no. 6481: 525. [DOI] [PubMed] [Google Scholar]
  27. Fabius, J. H. , Moravkova K., and Fracasso A.. 2022. “Topographic Organization of Eye‐Position Dependent Gain Fields in Human Visual Cortex.” Nature Communications 13, no. 1: 7925. 10.1038/S41467-022-35488-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Fracasso, A. , Dumoulin S. O., and Petridou N.. 2021. “Point‐Spread Function of the BOLD Response Across Columns and Cortical Depth in Human Extra‐Striate Cortex.” Progress in Neurobiology 207: 102187. 10.1016/j.pneurobio.2021.102187. [DOI] [PubMed] [Google Scholar]
  29. Fracasso, A. , Gaglianese A., Vansteensel M. J., et al. 2022. “FMRI and Intra‐Cranial Electrocorticography Recordings in the Same Human Subjects Reveals Negative BOLD Signal Coupled With Silenced Neuronal Activity.” Brain Structure & Function 227, no. 4: 1371–1384. 10.1007/S00429-021-02342-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  30. Fracasso, A. , Koenraads Y., Porro G. L., and Dumoulin S. O.. 2016. “Bilateral Population Receptive Fields in Congenital Hemihydranencephaly.” Ophthalmic and Physiological Optics 36, no. 3: 324–334. 10.1111/opo.12294. [DOI] [PubMed] [Google Scholar]
  31. Fracasso, A. , Luijten P. R., Dumoulin S. O., and Petridou N.. 2018. “Laminar Imaging of Positive and Negative BOLD in Human Visual Cortex at 7 T.” NeuroImage 164: 100–111. 10.1016/j.neuroimage.2017.02.038. [DOI] [PubMed] [Google Scholar]
  32. Fracasso, A. , Petridou N., and Dumoulin S. O.. 2016. “Systematic Variation of Population Receptive Field Properties Across Cortical Depth in Human Visual Cortex.” NeuroImage 139: 427–438. 10.1016/J.NEUROIMAGE.2016.06.048. [DOI] [PubMed] [Google Scholar]
  33. Frasnelli, J. , Collignon O., Voss P., and Lepore F.. 2011. “Crossmodal Plasticity in Sensory Loss.” In Progress in Brain Research, vol. 191, 1st ed. Elsevier B.V. 10.1016/B978-0-444-53752-2.00002-3. [DOI] [PubMed] [Google Scholar]
  34. Friston, K. J. , Fletcher P., Josephs O., Holmes A., Rugg M. D., and Turner R.. 1998. “Event‐Related fMRI: Characterizing Differential Responses.” NeuroImage 7, no. 1: 30–40. 10.1006/nimg.1997.0306. [DOI] [PubMed] [Google Scholar]
  35. Glasser, M. F. , Coalson T. S., Robinson E. C., et al. 2016. “A Multi‐Modal Parcellation of Human Cerebral Cortex.” Nature 536, no. 7615: 171–178. 10.1038/nature18933. [DOI] [PMC free article] [PubMed] [Google Scholar]
  36. Goense, J. , Bohraus Y., and Logothetis N. K.. 2016. “fMRI at High Spatial Resolution: Implications for BOLD‐Models.” Frontiers in Computational Neuroscience 10, no. Jun: 66. 10.3389/FNCOM.2016.00066. [DOI] [PMC free article] [PubMed] [Google Scholar]
  37. Goncalves, N. R. , and Welchman A. E.. 2017. ““What Not” Detectors Help the Brain See in Depth.” Current Biology 27, no. 10: 1403–1412.e8. 10.1016/j.cub.2017.03.074. [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. Gonzalez‐Castillo, J. , Hoy C. W., Handwerker D. A., et al. 2015. “Task Dependence, Tissue Specificity, and Spatial Distribution of Widespread Activations in Large Single‐Subject Functional MRI Datasets at 7T.” Cerebral Cortex 25, no. 12: 4667–4677. 10.1093/cercor/bhu148. [DOI] [PMC free article] [PubMed] [Google Scholar]
  39. Gougoux, F. , Lepore F., Lassonde M., Voss P., Zatorre R. J., and Belin P.. 2004. “Pitch Discrimination in the Early Blind.” Nature 430, no. 6997: 309. 10.1038/430309a. [DOI] [PubMed] [Google Scholar]
  40. Gouws, A. D. , Alvarez I., Watson D. M., Uesaki M., Rogers J., and Morland A. B.. 2014. “On the Role of Suppression in Spatial Attention: Evidence From Negative Bold in Human Subcortical and Cortical Structures.” Journal of Neuroscience 34, no. 31: 10347–10360. 10.1523/JNEUROSCI.0164-14.2014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Hairston, W. D. , Hodges D. A., Casanova R., et al. 2008. “Closing the Mind's Eye: Deactivation of Visual Cortex Related to Auditory Task Difficulty.” Neuroreport 19, no. 2: 151–154. 10.1097/WNR.0b013e3282f42509. [DOI] [PubMed] [Google Scholar]
  42. Hamzei, F. , Dettmers C., Rzanny R., Liepert J., Büchel C., and Weiller C.. 2002. “Reduction of Excitability (“Inhibition”) in the Ipsilateral Primary Motor Cortex Is Mirrored by fMRI Signal Decreases.” NeuroImage 17, no. 1: 490–496. 10.1006/nimg.2002.1077. [DOI] [PubMed] [Google Scholar]
  43. Harel, N. , Lee S.‐P., Nagaoka T., Kim D.‐S., and Kim S.‐G.. 2002. “Origin of Negative Blood Oxygenation Level‐Dependent fMRI Signals.” Journal of Cerebral Blood Flow & Metabolism 22, no. 8: 908–917. 10.1097/00004647-200208000-00002. [DOI] [PubMed] [Google Scholar]
  44. Harvey, B. M. , Vansteensel M. J., Ferrier C. H., et al. 2013. “Frequency Specific Spatial Interactions in Human Electrocorticography: V1 Alpha Oscillations Reflect Surround Suppression.” NeuroImage 65: 424–432. 10.1016/j.neuroimage.2012.10.020. [DOI] [PubMed] [Google Scholar]
  45. Hauthal, N. , Sandmann P., Debener S., and Thome J. D.. 2013. “Visual Movement Perception in Deaf and Hearing Individuals.” Advances in Cognitive Psychology 9, no. 2: 53–61. 10.2478/V10053-008-0131-Z. [DOI] [PMC free article] [PubMed] [Google Scholar]
  46. Hlushchuk, Y. , and Hari R.. 2006. “Transient Suppression of Ipsilateral Primary Somatosensory Cortex during Tactile Finger Stimulation.” Journal of Neuroscience 26: 5819–5824. 10.1523/JNEUROSCI.5536-05.2006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  47. Hunt, D. L. , Yamoah E. N., and Krubitzer L.. 2006. “Multisensory Plasticity in Congenitally Deaf Mice: How Are Cortical Areas Functionally Specified?” Neuroscience 139, no. 4: 1507–1524. 10.1016/J.NEUROSCIENCE.2006.01.023. [DOI] [PubMed] [Google Scholar]
  48. Jorge, J. , Figueiredo P., Gruetter R., and van der Zwaag W.. 2018. “Mapping and Characterization of Positive and Negative BOLD Responses to Visual Stimulation in Multiple Brain Regions at 7T.” Human Brain Mapping 39, no. 6: 2426–2441. 10.1002/hbm.24012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  49. Karns, C. M. , Dow M. W., and Neville H. J.. 2012. “Altered Cross‐Modal Processing in the Primary Auditory Cortex of Congenitally Deaf Adults: A Visual‐Somatosensory fMRI Study With a Double‐Flash Illusion.” Journal of Neuroscience 32, no. 28: 9626–9638. 10.1523/JNEUROSCI.6488-11.2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  50. Kastrup, A. , Baudewig J., Schnaudigel S., et al. 2008. “Behavioral Correlates of Negative BOLD Signal Changes in the Primary Somatosensory Cortex.” NeuroImage 41, no. 4: 1364–1371. 10.1016/J.NEUROIMAGE.2008.03.049. [DOI] [PubMed] [Google Scholar]
  51. Kawashima, R. , O'Sullivan B. T., and Roland P. E.. 1995. “Positron‐Emission Tomography Studies of Cross‐Modality Inhibition in Selective Attentional Tasks: Closing the “Mind's Eye”.” Proceedings of the National Academy of Sciences 92, no. 13: 5969–5972. 10.1073/pnas.92.13.5969. [DOI] [PMC free article] [PubMed] [Google Scholar]
  52. Kim, H. R. , Pitkow X., Angelaki D. E., and DeAngelis G. C.. 2016. “A Simple Approach to Ignoring Irrelevant Variables by Population Decoding Based on Multisensory Neurons.” Journal of Neurophysiology 116, no. 3: 1449–1467. 10.1152/jn.00005.2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  53. Kiorpes, L. 2016. “The Puzzle of Visual Development: Behavior and Neural Limits.” Journal of Neuroscience 36, no. 45: 11384–11393. 10.1523/JNEUROSCI.2937-16.2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  54. Kleiner, M. , Brainard D., Pelli D., Ingling A., Murray R., and Broussard C.. 2007. “What's New in Psychtoolbox‐3.” Perception 36, no. 14: 1–16. [Google Scholar]
  55. Klingner, C. M. , Hasler C., Brodoehl S., and Witte O. W.. 2010. “Dependence of the Negative BOLD Response on Somatosensory Stimulus Intensity.” NeuroImage 53, no. 1: 189–195. 10.1016/J.NEUROIMAGE.2010.05.087. [DOI] [PubMed] [Google Scholar]
  56. Klink, P. C. , Chen X., Vanduffel W., and Roelfsema P. R.. 2021. “Population Receptive Fields in Nonhuman Primates From Whole‐Brain fMRI and Large‐Scale Neurophysiology in Visual Cortex.” eLife 10: e67304. 10.7554/eLife.67304. [DOI] [PMC free article] [PubMed] [Google Scholar]
  57. Land, R. , Baumhoff P., Tillein J., Lomber S. G., Hubka P., and Kral A.. 2016. “Cross‐Modal Plasticity in Higher‐Order Auditory Cortex of Congenitally Deaf Cats Does Not Limit Auditory Responsiveness to Cochlear Implants.” Journal of Neuroscience 36, no. 23: 6175–6185. 10.1523/JNEUROSCI.0046-16.2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  58. Laurienti, P. J. , Burdette J. H., Wallace M. T., Yen Y. F., Field A. S., and Stein B. E.. 2002. “Deactivation of Sensory‐Specific Cortex by Cross‐Modal Stimuli.” Journal of Cognitive Neuroscience 14, no. 3: 420–429. 10.1162/089892902317361930. [DOI] [PubMed] [Google Scholar]
  59. Levin, N. , Dumoulin S. O., Winawer J., Dougherty R. F., and Wandell B. A.. 2010. “Cortical Maps and White Matter Tracts Following Long Period of Visual Deprivation and Retinal Image Restoration.” Neuron 65, no. 1: 21–31. 10.1016/j.neuron.2009.12.006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  60. Lomber, S. G. , Meredith M. A., and Kral A.. 2010. “Cross‐Modal Plasticity in Specific Auditory Cortices Underlies Visual Compensations in the Deaf.” Nature Neuroscience 13, no. 11: 1421–1427. 10.1038/nn.2653. [DOI] [PubMed] [Google Scholar]
  61. Lomber, S. G. , Meredith M. A., and Kral A.. 2011. “Adaptive Crossmodal Plasticity in Deaf Auditory Cortex: Areal and Laminar Contributions to Supranormal Vision in the Deaf.” Progress in Brain Research 191: 251–270. 10.1016/B978-0-444-53752-2.00001-1. [DOI] [PubMed] [Google Scholar]
  62. Mayer, J. S. , Roebroeck A., Maurer K., and Linden D. E. J.. 2010. “Specialization in the Default Mode: Task‐Induced Brain Deactivations Dissociate Between Visual Working Memory and Attention.” Human Brain Mapping 31, no. 1: 126–139. 10.1002/hbm.20850. [DOI] [PMC free article] [PubMed] [Google Scholar]
  63. Mayhew, S. D. , Ostwald D., Porcaro C., and Bagshaw A. P.. 2013. “Spontaneous EEG Alpha Oscillation Interacts With Positive and Negative BOLD Responses in the Visual–Auditory Cortices and Default‐Mode Network.” NeuroImage 76: 362–372. 10.1016/j.neuroimage.2013.02.070. [DOI] [PubMed] [Google Scholar]
  64. Merabet, L. B. , Swisher J. D., McMains S. A., et al. 2007. “Combined Activation and Deactivation of Visual Cortex During Tactile Sensory Processing.” Journal of Neurophysiology 97, no. 2: 1633–1641. 10.1152/jn.00806.2006. [DOI] [PubMed] [Google Scholar]
  65. Meredith, M. A. , and Allman B. L.. 2012. “Early Hearing‐Impairment Results in Crossmodal Reorganization of Ferret Core Auditory Cortex.” Neural Plasticity 2012: 1–13. 10.1155/2012/601591. [DOI] [PMC free article] [PubMed] [Google Scholar]
  66. Meredith, M. A. , Keniston L. P., and Allman B. L.. 2012. “Multisensory Dysfunction Accompanies Crossmodal Plasticity Following Adult Hearing Impairment.” Neuroscience 214: 136–148. 10.1016/j.neuroscience.2012.04.001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  67. Meredith, M. A. , Kryklywy J., McMillan A. J., Malhotra S., Lum‐Tai R., and Lomber S. G.. 2011. “Crossmodal Reorganization in the Early Deaf Switches Sensory, but Not Behavioral Roles of Auditory Cortex.” Proceedings of the National Academy of Sciences of the United States of America 108, no. 21: 8856–8861. 10.1073/PNAS.1018519108/-/DCSUPPLEMENTAL. [DOI] [PMC free article] [PubMed] [Google Scholar]
  68. Meredith, M. A. , and Lomber S. G.. 2011. “Somatosensory and Visual Crossmodal Plasticity in the Anterior Auditory Field of Early‐Deaf Cats.” Hearing Research 280, no. 1–2: 38–47. 10.1016/J.HEARES.2011.02.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  69. Mozolic, J. L. , Joyner D., Hugenschmidt C. E., et al. 2008. “Cross‐Modal Deactivations During Modality‐Specific Selective Attention.” BMC Neurology 8: 35. 10.1186/1471-2377-8-35. [DOI] [PMC free article] [PubMed] [Google Scholar]
  70. Mullinger, K. J. , Mayhew S. D., Bagshaw A. P., Bowtell R., and Francis S. T.. 2014. “Evidence That the Negative BOLD Response Is Neuronal in Origin: A Simultaneous EEG–BOLD–CBF Study in Humans.” NeuroImage 94: 263–274. 10.1016/j.neuroimage.2014.02.029. [DOI] [PubMed] [Google Scholar]
  71. Nelson, W. , and Mayhew S. D.. 2025. “Investigating the Consistency of Negative BOLD Responses to Combinations of Visual, Auditory, and Somatosensory Stimuli and Their Modulation by the Level of Task Demand.” Human Brain Mapping 46, no. 4: e70177. 10.1002/HBM.70177. [DOI] [PMC free article] [PubMed] [Google Scholar]
  72. Neville, H. J. , and Lawson D.. 1987. “Attention to Central and Peripheral Visual Space in a Movement Detection Task: An Event‐Related Potential and Behavioral Study. II. Congenitally Deaf Adults.” Brain Research 405, no. 2: 268–283. 10.1016/0006-8993(87)90296-4. [DOI] [PubMed] [Google Scholar]
  73. Pelli, D. G. 1997. “The VideoToolbox Software for Visual Psychophysics: Transforming Numbers Into Movies.” Spatial Vision 10, no. 4: 437–442. 10.1163/156856897X00366. [DOI] [PubMed] [Google Scholar]
  74. Pereira‐Jorge, M. R. , Andrade K. C., Palhano‐Fontes F. X., et al. 2018. “Anatomical and Functional MRI Changes After One Year of Auditory Rehabilitation With Hearing Aids.” Neural Plasticity 2018, no. 1: 9303674. 10.1155/2018/9303674. [DOI] [PMC free article] [PubMed] [Google Scholar]
  75. Ptito, M. , Matteau I., Zhi Wang A., Paulson O. B., Siebner H. R., and Kupers R.. 2012. “Crossmodal Recruitment of the Ventral Visual Stream in Congenital Blindness.” Neural Plasticity 2012: 1–9. 10.1155/2012/304045. [DOI] [PMC free article] [PubMed] [Google Scholar]
  76. R Core Team . 2014. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing. http://www.r‐project.org/. [Google Scholar]
  77. Rebillard, G. , Carlier E., and Pujol R.. 1977. “Réponses d'origine visuelle au niveau du cortex auditif primaire, chez le chat privé précocement de récepteurs cochléaires.” Revue d'Électroencéphalographie et de Neurophysiologie Clinique 7, no. 3: 284–289. 10.1016/S0370-4475(77)80007-5. [DOI] [PubMed] [Google Scholar]
  78. Renier, L. A. , Anurova I., de Volder A. G., Carlson S., VanMeter J., and Rauschecker J. P.. 2010. “Preserved Functional Specialization for Spatial Processing in the Middle Occipital Gyrus of the Early Blind.” Neuron 68, no. 1: 138–148. 10.1016/j.neuron.2010.09.021. [DOI] [PMC free article] [PubMed] [Google Scholar]
  79. Retter, T. L. , Webster M. A., and Jiang F.. 2018. “Directional Visual Motion Is Represented in the Auditory and Association Cortices of Early Deaf Individuals.” Journal of Cognitive Neuroscience 31, no. 8: 1126–1140. 10.1162/jocn_a_01378. [DOI] [PMC free article] [PubMed] [Google Scholar]
  80. Rosemann, S. , Gieseler A., Tahden M., Colonius H., and Thiel C. M.. 2021. “Treatment of Age‐Related Hearing Loss Alters Audiovisual Integration and Resting‐State Functional Connectivity: A Randomized Controlled Pilot Trial.” ENeuro 8, no. 6: 258–279. 10.1523/ENEURO.0258-21.2021. [DOI] [PMC free article] [PubMed] [Google Scholar]
  81. Ruttorf, M. , Tal Z., Amaral L., Fang F., Bi Y., and Almeida J.. 2023. “Neuroplastic Changes in Functional Wiring in Sensory Cortices of the Congenitally Deaf: A Network Analysis.” Human Brain Mapping 44, no. 18: 6523–6536. 10.1002/HBM.26530. [DOI] [PMC free article] [PubMed] [Google Scholar]
  82. Saad, Z. S. , and Reynolds R. C.. 2012. “SUMA.” NeuroImage 62, no. 2: 768–773. 10.1016/j.neuroimage.2011.09.016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  83. Saad, Z. S. , Reynolds R. C., Argall B., Japee S., and Cox R. W.. 2004. “SUMA: An Interface for Surface‐Based Intra‐ and Inter‐Subject Analysis With AFNI.” In 2004 2nd IEEE International Symposium on Biomedical Imaging: Macro to Nano, vol. 2, 1510–1513. 10.1109/isbi.2004.1398837. [DOI]
  84. Sadato, N. , Pascual‐Leone A., Grafman J., et al. 1996. “Activation of the Primary Visual Cortex by Braille Reading in Blind Subjects.” Nature 380, no. 6574: 526–528. 10.1038/380526a0. [DOI] [PubMed] [Google Scholar]
  85. Scott, G. D. , Karns C. M., Dow M. W., Stevens C., and Neville H. J.. 2014. “Enhanced Peripheral Visual Processing in Congenitally Deaf Humans Is Supported by Multiple Brain Regions, Including Primary Auditory Cortex.” Frontiers in Human Neuroscience 8: 177. 10.3389/fnhum.2014.00177. [DOI] [PMC free article] [PubMed] [Google Scholar]
  86. Scurry, A. N. , Huber E., Matera C., and Jiang F.. 2020. “Increased Right Posterior STS Recruitment Without Enhanced Directional‐Tuning During Tactile Motion Processing in Early Deaf Individuals.” Frontiers in Neuroscience 14: 864. 10.3389/fnins.2020.00864. [DOI] [PMC free article] [PubMed] [Google Scholar]
  87. Sereno, M. I. , Dale A. M., Reppas J. B., et al. 1995. “Borders of Multiple Visual Areas in Humans Revealed by Functional Magnetic Resonance Imaging.” Science 268, no. 5212: 889–893. 10.1126/science.7754376. [DOI] [PubMed] [Google Scholar]
  88. Seymour, J. L. , Low K. A., Maclin E. L., et al. 2017. “Reorganization of Neural Systems Mediating Peripheral Visual Selective Attention in the Deaf: An Optical Imaging Study.” Hearing Research 343: 162–175. 10.1016/j.heares.2016.09.007. [DOI] [PubMed] [Google Scholar]
  89. Shiell, M. M. , Champoux F., and Zatorre R. J.. 2015. “Reorganization of Auditory Cortex in Early‐Deaf People: Functional Connectivity and Relationship to Hearing Aid Use.” Journal of Cognitive Neuroscience 27, no. 1: 150–163. 10.1162/jocn_a_00683. [DOI] [PubMed] [Google Scholar]
  90. Shmuel, A. , Augath M., Oeltermann A., and Logothetis N. K.. 2006. “Negative Functional MRI Response Correlates With Decreases in Neuronal Activity in Monkey Visual Area V1.” Nature Neuroscience 9, no. 4: 569–577. 10.1038/nn1675. [DOI] [PubMed] [Google Scholar]
  91. Shmuel, A. , Yacoub E., Pfeuffer J., et al. 2002. “Sustained Negative BOLD, Blood Flow and Oxygen Consumption Response and Its Coupling to the Positive Response in the Human Brain.” Neuron 36, no. 6: 1195–1210. 10.1016/S0896-6273(02)01061-9. [DOI] [PubMed] [Google Scholar]
  92. Simon, M. , Lazzouni L., Campbell E., et al. 2020. “Enhancement of Visual Biological Motion Recognition in Early‐Deaf Adults: Functional and Behavioral Correlates.” PLoS One 15, no. 8: e0236800. 10.1371/journal.pone.0236800. [DOI] [PMC free article] [PubMed] [Google Scholar]
  93. Smith, A. T. , Williams A. L., and Singh K. D.. 2004. “Negative BOLD in the Visual Cortex: Evidence Against Blood Stealing.” Human Brain Mapping 21, no. 4: 213–220. 10.1002/hbm.20017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  94. Stefanovic, B. , Warnking J. M., and Pike G. B.. 2004. “Hemodynamic and Metabolic Responses to Neuronal Inhibition.” NeuroImage 22, no. 2: 771–778. 10.1016/j.neuroimage.2004.01.036. [DOI] [PubMed] [Google Scholar]
  95. Szinte, M. , and Knapen T.. 2020. “Visual Organization of the Default Network.” Cerebral Cortex 30, no. 6: 3518–3527. 10.1093/cercor/bhz323. [DOI] [PMC free article] [PubMed] [Google Scholar]
  96. Tal, Z. , Geva R., and Amedi A.. 2016. “The Origins of Metamodality in Visual Object Area LO: Bodily Topographical Biases and Increased Functional Connectivity to S1.” NeuroImage 127: 363–375. 10.1016/j.neuroimage.2015.11.058. [DOI] [PMC free article] [PubMed] [Google Scholar]
  97. Tal, Z. , Geva R., and Amedi A.. 2017. “Positive and Negative Somatotopic BOLD Responses in Contralateral Versus Ipsilateral Penfield Homunculus.” Cerebral Cortex 27, no. 2: 962–980. 10.1093/cercor/bhx024. [DOI] [PMC free article] [PubMed] [Google Scholar]
  98. Thomas, J. M. , Huber E., Stecker G. C., Boynton G. M., Saenz M., and Fine I.. 2015. “Population Receptive Field Estimates of Human Auditory Cortex.” NeuroImage 105, no. 206: 428–439. 10.1016/j.neuroimage.2014.10.060. [DOI] [PMC free article] [PubMed] [Google Scholar]
  99. Twomey, T. , Waters D., Price C. J., Evans S., and Macsweeney M.. 2017. “How Auditory Experience Differentially Influences the Function of Left and Right Superior Temporal Cortices.” Journal of Neuroscience 37, no. 39: 9564–9573. 10.1523/JNEUROSCI.0846-17.2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  100. Vachon, P. , Voss P., Lassonde M., et al. 2013. “Reorganization of the Auditory, Visual and Multimodal Areas in Early Deaf Individuals.” Neuroscience 245: 50–60. 10.1016/j.neuroscience.2013.04.004. [DOI] [PubMed] [Google Scholar]
  101. Van Boven, R. W. , Hamilton R. H., Kauffman T., Keenan J. P., and Pascual‐Leone A.. 2000. “Tactile Spatial Resolution in Blind Braille Readers.” American Journal of Ophthalmology 130, no. 4: 542. 10.1016/s0002-9394(00)00743-1. [DOI] [PubMed] [Google Scholar]
  102. Veraart, J. , Novikov D. S., Christiaens D., Ades‐aron B., Sijbers J., and Fieremans E.. 2016. “Denoising of Diffusion MRI Using Random Matrix Theory.” NeuroImage 142: 394–406. 10.1016/J.NEUROIMAGE.2016.08.016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  103. Weisser, V. , Stilla R., Peltier S., Hu X., and Sathian K.. 2005. “Short‐Term Visual Deprivation Alters Neural Processing of Tactile Form.” Experimental Brain Research 166, no. 3–4: 572–582. 10.1007/s00221-005-2397-4. [DOI] [PubMed] [Google Scholar]
  104. Zeharia, N. , Hertz U., Flash T., and Amedi A.. 2012. “Negative Blood Oxygenation Level Dependent Homunculus and Somatotopic Information in Primary Motor Cortex and Supplementary Motor Area.” Proceedings of the National Academy of Sciences 109, no. 45: 18565–18570. 10.1073/pnas.1119125109. [DOI] [PMC free article] [PubMed] [Google Scholar]
  105. Zimmermann, M. , Mostowski P., Rutkowski P., et al. 2021. “The Extent of Task Specificity for Visual and Tactile Sequences in the Auditory Cortex of the Deaf and Hard of Hearing.” Journal of Neuroscience 41, no. 47: 9720–9731. 10.1523/JNEUROSCI.2527-20.2021. [DOI] [PMC free article] [PubMed] [Google Scholar]
