The Journal of Neuroscience. 2023 Nov 15;43(46):7868–7878. doi: 10.1523/JNEUROSCI.0376-23.2023

Decoding Reach Direction in Early “Visual” Cortex of Congenitally Blind Individuals

Łukasz Bola 1,*, Petra Vetter 2,*, Mohr Wenger 3, Amir Amedi 3,4
PMCID: PMC10648511  PMID: 37783506

Abstract

Motor actions, such as reaching or grasping, can be decoded from fMRI activity of early visual cortex (EVC) in sighted humans. This effect can depend on vision or visual imagery, or alternatively, could be driven by mechanisms independent of visual experience. Here, we show that the actions of reaching in different directions can be reliably decoded from fMRI activity of EVC in congenitally blind humans (both sexes). Thus, neither visual experience nor visual imagery is necessary for EVC to represent action-related information. We also demonstrate that, within EVC of blind humans, the accuracy of reach direction decoding is highest in areas typically representing foveal vision and gradually decreases in areas typically representing peripheral vision. We propose that this might indicate the existence of a predictive, hard-wired mechanism of aligning action and visual spaces. This mechanism might send action-related information primarily to the high-resolution foveal visual areas, which are critical for guiding and online correction of motor actions. Finally, we show that, beyond EVC, the decoding of reach direction in blind humans is most accurate in dorsal stream areas known to be critical for visuo-spatial and visuo-motor integration in the sighted. Thus, these areas can develop space and action representations even in the lifelong absence of vision. Overall, our findings in congenitally blind humans match previous research on the action system in the sighted, and suggest that the development of action representations in the human brain might be largely independent of visual experience.

SIGNIFICANCE STATEMENT Early visual cortex (EVC) was traditionally thought to process only visual signals from the retina. Recent studies proved this account incomplete, and showed EVC involvement in many activities not directly related to incoming visual information, such as memory, sound, or action processing. Is EVC involved in these activities because of visual imagery? Here, we show robust reach direction representation in EVC of humans born blind. This demonstrates that EVC can represent actions independently of vision and visual imagery. Beyond EVC, we found that reach direction representation in blind humans is strongest in dorsal brain areas, critical for action processing in the sighted. This suggests that the development of action representations in the human brain is largely independent of visual experience.

Keywords: actions, dorsal stream, fMRI, MVPA, visual cortex

Introduction

Early visual cortex (EVC) was traditionally considered a purely perceptual region, which only processes visual signals from the retina. Recent studies proved this account incomplete, demonstrating that EVC is involved in many activities that are not directly related to incoming visual information, such as working memory (Harrison and Tong, 2009; Roelfsema and de Lange, 2016), sound representation (Vetter et al., 2014), or action representation (Monaco et al., 2020; Knights et al., 2021). However, it is still debated whether EVC involvement in these tasks can be reduced to visual imagery.

Studying individuals born blind, who could not develop visual imagery, is a powerful way to contribute to this debate. Here, we used this approach to investigate how EVC represents motor actions. This region increases its activity when sighted individuals plan or perform motor actions, such as reaching toward objects or grasping them (Monaco et al., 2017; Styrkowiec et al., 2019). This effect persists even when actions are performed in darkness (Monaco et al., 2017). Furthermore, EVC activity in sighted individuals can be used to distinguish between specific actions or action intentions (Monaco et al., 2020; Knights et al., 2021). One interpretation of these findings is that the emergence of action representation in EVC is driven by visual imagery: the creation of internal, vision-like mental representation of actions or objects over which actions are performed (Pearson et al., 2015). An intriguing, alternative hypothesis is that EVC can represent action-related information independently of visual experience. One can suppose, for example, that spatial properties of actions or action targets can be mapped onto the EVC retinotopic organization without being transformed into visual format. Several studies have shown that the EVC retinotopic organization is used to represent certain types of information even in congenitally blind individuals (Striem-Amit et al., 2015; Norman and Thaler, 2019; Vetter et al., 2020).

In sighted individuals, performing actions over small objects preferentially involves EVC foveal areas, even when participants do not see these objects and fixate on a point well above their location (Monaco et al., 2017). Beyond visual cortex, motor actions primarily involve dorsal brain regions, such as the motor and somatosensory cortices, the superior parietal lobe (SPL), the intraparietal sulcus (IPS), and the frontal eye field (FEF)/dorsal premotor cortex (PMd) (Fabbri et al., 2014; Gallivan and Culham, 2015). Here, we used these findings as leverage to study the impact of vision on the development of the action system. In particular, observing that actions preferentially involve typically foveal EVC also in congenitally blind individuals would suggest that similar neural mechanisms support action-related representations in EVC in both populations. Furthermore, finding that actions preferentially involve dorsal stream regions also in congenitally blind individuals would add to evidence that these regions can develop relatively typical functional specialization independently of visual experience (Garg et al., 2007; Fiehler et al., 2009; Striem-Amit et al., 2012).

We used fMRI to measure brain activity in 9 congenitally blind participants who reached for and read Braille words printed at the four cardinal positions (up, down, left, right) of an A4 Braille sheet. We then used multivoxel pattern classification to decode different reach directions from these participants' brain activity. Importantly, different, unrelated Braille words were used in the two experimental runs. This, in combination with our analytical scheme (training and testing the classifier on different runs; see Materials and Methods), ensured that we investigated the representation of reach directions, rather than the representation of Braille words (Sadato et al., 1996; Cohen et al., 1997). Stripped of all features specific to a given word, our Braille stimuli can be seen as a class of small objects, requiring very precise calibration of the reach and of the hand shape.

We expected to find reach direction representation in EVC of the blind participants, particularly in typically foveal areas. Beyond EVC, we expected to find reach direction representation primarily in regions that form the action system in the sighted.

Materials and Methods

Participants

Nine congenitally blind individuals with intact hearing (3 males, 6 females; mean age 33 years, range 23-39 years; 4 left handers, 4 right handers, 1 ambidextrous; mean education duration 14 years, range 12-17 years) participated in the study. Reasons for blindness were as follows: microphthalmia in 3 participants of which 1 also had retinal detachment, retinopathy of prematurity in 4 participants, enophthalmos in 1 participant, and Leber congenital amaurosis in 1 participant. One blind participant had very faint light perception; all others had no light perception at all. All participants were proficient Braille readers. Eight of nine participants participated in our previous study on natural sound decoding from EVC activity (Vetter et al., 2020). In this previous study, such a sample size was sufficient to detect robust effects in early visual areas. Here, we expected to obtain effects of comparable size. All participants received detailed information on the study, signed informed consent, and were paid for their participation. The study was approved by the Tel-Aviv Sourasky Medical Center Ethics Committee, Israel.

Experimental design

The design of the experiment is illustrated in Figure 1. Participants underwent fMRI while they were reaching for and reading Braille words printed at the center of the four edges of a thick A4 Braille sheet (portrait orientation) to probe the four cardinal spatial positions (up, down, left, and right) (Fig. 1A). Two different Braille sheets with different, unrelated words referring to abstract concepts with low imageability scores were used to ensure that the subsequent multivoxel pattern classification analysis did not rely on the processing of word meanings. The Braille sheets were handed to participants by the experimenter and exchanged for the other Braille sheet after each run. The order of Braille sheets was counterbalanced across participants. Participants lay supine inside the MRI scanner, held the Braille sheet with their nondominant hand flat on their lap, and started each experimental trial with the index finger of their dominant hand on a central “fixation” dot printed on the Braille sheet. Then, they heard a verbal cue indicating the reach direction (up, down, left, or right; ∼1 s), which was followed by 3.5 s of silence to allow for hand reaching and word reading at the cued location (4.5 s of trial time in total; Fig. 1B). Participants moved their hand and lower arm from the center ∼14 cm toward the up and down locations and ∼9 cm toward the left and right locations (within the dimensions of an A4 sheet). Subsequently, participants heard a second verbal cue (∼1 s) instructing them to return their hands to the center of the Braille sheet, which was followed by silence lasting for 8 s (9 s of rest time in total).

Figure 1.

Study design. A, B, Congenitally blind participants reached for and read Braille words printed on one of the four edges of the A4 Braille sheet. The participants started each trial with a finger placed on the central “fixation dot.” They moved their hand on hearing a verbal cue indicating reach direction (up, down, right, left). Different, unrelated Braille words referring to abstract concepts were used in each experimental run. C, Maps of early visual areas, obtained in a separate, retinotopic mapping experiment with sighted participants, were cortex-based aligned to the reconstructions of cortical anatomy of each blind participant, and then transformed into maximum probability maps. Multivoxel pattern classification was used to decode reach directions from these early visual areas in the blind participants. The classifier was trained and tested on data from different runs, to ensure that reach direction representation is not confounded with Braille word representation. The figure presents maximum probability maps created for one representative blind participant (the left hemisphere is shown, inflated for visualization purposes). Different colors represent different early visual areas (red represents V1; green represents V2; blue represents V3), whereas different shades of the same color represent different eccentricities (darker shades represent foveal areas; lighter shades represent peripheral areas).

Participants completed two runs, each consisting of 40 trials (10 trials × 4 reach directions). The order of trials was randomized with the constraint that the same reach direction did not repeat in two consecutive trials.
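A no-repeat constraint of this kind can be implemented, for example, by simple rejection sampling. The sketch below (in Python, with illustrative names; the authors do not describe their randomization code) reshuffles the trial list until no direction repeats on consecutive trials.

```python
import random

def randomized_trial_order(directions=("up", "down", "left", "right"),
                           trials_per_direction=10, seed=0):
    """Return a trial order with no reach direction repeated on two
    consecutive trials, found by rejection sampling (reshuffle until valid)."""
    rng = random.Random(seed)
    trials = list(directions) * trials_per_direction
    while True:
        rng.shuffle(trials)
        if all(a != b for a, b in zip(trials, trials[1:])):
            return trials
```

Rejection sampling is practical at this scale: with 10 trials per direction, a valid order is typically found after a few thousand shuffles, which takes well under a second.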

We used Braille words as reach targets, instead of more typical “objects,” mostly for practical reasons; such stimuli could fit on an A4 sheet and be comfortably reached in the constrained MRI scanner space, without the need to build the special platforms that are usually used for the presentation of more typical objects (e.g., Singhal et al., 2013; Monaco et al., 2017, 2020). Moreover, our pilot study suggested that keeping participants' reaches relatively short, a study feature that we could readily achieve with Braille words, attenuated the fMRI signal artifacts related to moving in the MRI scanner (Barry et al., 2010) and resulted in overall better data quality. Last but not least, reaching for and reading Braille words was a very natural activity for the blind participants enrolled in the study, which made the experimental task readily understandable for them.

Data collection

BOLD signals were acquired in a 3 T General Electric MRI scanner with an 8-channel head coil (TR = 1.5 s, TE = 35 ms, resolution: 3.75 × 3.75 × 4.5 mm voxels, 4.5 mm slice thickness, 0.4 mm gap thickness, 27 slices, flip angle: 70°). In each experimental run, 376 volumes were collected. Additionally, an anatomic brain image was collected for each participant using a standard MPRAGE T1-weighted sequence.

Data preprocessing

Data were analyzed in BrainVoyager 20.6 (BrainInnovation). Standard preprocessing routines were used, including slice scan time correction, 3D rigid body motion correction, temporal high-pass filter (GLM with Fourier basis set, 3 cycles per run), no spatial smoothing for the multivoxel pattern analysis (MVPA), and spatial smoothing on cortical surface (the nearest neighbors approach, repeat value: 4) for the univariate analysis. Activation for each trial (in the MVPA: 2 runs × 4 reach directions × 10 trials) or experimental condition (in the univariate analysis: 2 runs × 4 reach directions) was modeled using a GLM by convolving each trial/condition time course with the canonical HRF. For each participant, functional data were mapped onto an individual reconstruction of the cortical surface, created based on the collected anatomic image. All subsequent analyses were performed in the surface space.
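The per-trial modeling step can be sketched as convolving a boxcar spanning each trial with a canonical HRF. The snippet below is a minimal Python illustration assuming SPM-style double-gamma HRF parameters; BrainVoyager's canonical HRF may differ in detail, and all names are illustrative.

```python
import math
import numpy as np

def canonical_hrf(tr, duration=32.0):
    """Double-gamma canonical HRF sampled at the TR (SPM-style parameters:
    peak at ~5 s, undershoot at ~15 s; an assumption, not BrainVoyager's
    exact shape)."""
    t = np.arange(0.0, duration, tr)
    peak = t ** 5 * np.exp(-t) / math.gamma(6)
    undershoot = t ** 15 * np.exp(-t) / math.gamma(16)
    return peak - undershoot / 6.0

def trial_regressor(onset_s, dur_s, n_vols, tr):
    """Predicted BOLD time course for one trial: a boxcar over the trial
    duration convolved with the canonical HRF, truncated to the run length."""
    box = np.zeros(n_vols)
    box[int(onset_s // tr):int((onset_s + dur_s) // tr)] = 1.0
    return np.convolve(box, canonical_hrf(tr))[:n_vols]
```

One such regressor per trial (80 per participant in the MVPA) yields the design matrix from which the trial-wise estimates are obtained.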

Statistical analysis

Multivoxel pattern classification (decoding) analysis

All multivoxel pattern classification analyses were performed in CosmoMVPA (version 1.1.0) (Oosterhof et al., 2016), running on MATLAB R2018b (The MathWorks). All analyses were performed on T values (Misaki et al., 2010). To obtain these values, a separate T map was computed for each experimental trial by comparing brain activation during this trial to brain activation during rest periods in a given run (10 trials × 4 reach directions per run, 80 maps per participant in total). In all analyses, a linear support vector machine classification algorithm was used, as implemented in the LIBSVM toolbox (version 3.23) (Chang and Lin, 2011). A standard LIBSVM data normalization procedure (i.e., Z scoring the values for each voxel in the training set and applying the resulting scaling parameters to the test set) was applied to the data before classification.
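The normalization step can be sketched as follows (a Python illustration of the logic; the actual analysis used the LIBSVM/CosmoMVPA implementation in MATLAB). Voxel-wise means and SDs are estimated on the training set only and then applied unchanged to the test set, so no information leaks from the test data into the classifier.

```python
import numpy as np

def zscore_train_test(train, test, eps=1e-12):
    """Z-score each voxel (column) using training-set statistics only,
    then apply the same parameters to the test set."""
    mu = train.mean(axis=0)
    sd = train.std(axis=0, ddof=0)
    sd = np.where(sd < eps, 1.0, sd)  # guard against constant voxels
    return (train - mu) / sd, (test - mu) / sd
```

Estimating the scaling parameters on the combined train and test data would constitute leakage and could inflate decoding accuracies, which is why the parameters are carried over from the training set.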

We performed several multivoxel pattern classification analyses in EVC. We used the same bilateral EVC patches of interest (POIs) as in our previous study investigating natural sound representations in the same blind participants (Vetter et al., 2020). Briefly, a standard retinotopic polar mapping fMRI experiment was performed to delineate areas V1, V2, and V3 in 10 sighted participants (data reported in Vetter et al., 2014). These areas were also divided into three equally spaced segments along the posterior-anterior brain axis, to create POIs representing approximately foveal, peripheral, and far peripheral visual fields (eccentricity mapping was not performed). Then, the individual POIs obtained from sighted participants were mapped onto a cortical surface reconstruction of each blind participant, using the BrainVoyager cortex-based alignment procedure, and converted into maximum probability maps (Fig. 1C), which were then used in the classification analyses.

Importantly, a standard retinotopic mapping fMRI experiment, such as the one described here, is not able to image the whole visual field in humans (for discussion, see Pitzalis et al., 2006). Thus, our “far periphery” EVC POIs are unlikely to correspond to the real-life boundaries of the visual field. Nevertheless, the obtained POIs extended into fairly anterior portions of the calcarine sulcus and the pericalcarine cortex (Fig. 1C), which suggests that the peripheral visual representation was stimulated (perhaps not only directly, but also through lateral connections) (Pitzalis et al., 2006).

In the first analysis, we tested for the EVC representation of reach direction in each blind participant separately (within-participant decoding). Thus, the cross-validation of the classification results was performed across runs; in each participant, there were two cross-validation folds; and in each of them, one run was used to train the classifier and the other run was used for testing. This cross-validation scheme ensured that we decoded reach direction rather than Braille words, which were different in each run (i.e., in the training and testing sets; see also Experimental design). We tested for the reach direction representation in the whole EVC (areas V1, V2, and V3 combined) and in each early visual area separately. Additionally, we also tested for reach direction representation in the two other early sensory regions: motor cortex (MC) and auditory cortex (AC). The AC POI was created by combining the bilateral masks of Brodmann areas (BAs) 41 and 42 together. The MC POI was defined as bilateral area BA 4 (thus, it is likely to contain the somatotopic map of the whole body, not only the hand or arm). The BrainVoyager and BrainTutor (BrainInnovation) cortical atlases were used to obtain the masks of specific BAs. The atlases were cortex-based aligned to the reconstruction of cortical surfaces of blind participants using the procedures described above.
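The two-fold, across-run scheme amounts to leave-one-group-out cross-validation. The sketch below illustrates the logic in Python, with a nearest-centroid classifier as a dependency-free stand-in for the linear SVM actually used; `patterns` is a trials × voxels matrix of T values, and the grouping variable can equally be the run (within-participant decoding) or the participant (the cross-participant decoding described next). All names are illustrative.

```python
import numpy as np

def nearest_centroid_fit_predict(X_train, y_train, X_test):
    """Assign each test pattern the label of the nearest class centroid
    (a stand-in for the linear SVM used in the actual analysis)."""
    classes = np.unique(y_train)
    centroids = np.array([X_train[y_train == c].mean(axis=0) for c in classes])
    dists = ((X_test[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return classes[dists.argmin(axis=1)]

def leave_one_group_out_cv(patterns, labels, groups):
    """Train on all groups but one, test on the held-out group, and average
    accuracy across folds. With groups = runs this is the across-run scheme
    (two folds); with groups = participants it is the cross-participant
    scheme (nine folds)."""
    accs = []
    for g in np.unique(groups):
        train, test = groups != g, groups == g
        pred = nearest_centroid_fit_predict(patterns[train], labels[train],
                                            patterns[test])
        accs.append(float((pred == labels[test]).mean()))
    return float(np.mean(accs))
```

Because the two runs used different Braille words, holding out a whole run guarantees that the classifier cannot succeed by recognizing word-specific patterns.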

Second, we tested for the generalization of the activity patterns induced by specific reach directions across the blind participants (cross-participant decoding). This analysis was again performed in the three sensory regions: EVC, MC, and AC. The aim of this analysis was to test whether reach direction representation might rely on the large-scale organization of these regions (e.g., retinotopy in EVC, somatotopy in MC), as only such representation is likely to be generalized across participants. To verify this, the cross-validation of reach direction classification was performed across participants; that is, there were nine cross-validation folds; and in each of them, the data from 8 participants were used to train the classifier and the data from the remaining participant were used for testing. For the cross-participant analysis, the sensory POIs were defined as described above, and aligned to the average cortical folding of all blind participants using a group cortex-based alignment procedure. This resulted in exactly the same POIs for each participant. As an additional control analysis, we used the same POIs and cross-participant analysis scheme to try to decode the two sets of Braille words, which were reached for and read by the blind participants in the two experimental runs. We reasoned that, even if EVC in blind participants represents some information related to abstract Braille words, which were used as a target for reaches in our study (Sadato et al., 1996; Cohen et al., 1997), such representation is unlikely to rely on large-scale retinotopic biases that could be generalized across the participants.

Third, we investigated the reach direction representation in EVC areas that, in sighted individuals, represent foveal and peripheral vision. The analysis was performed in foveal, peripheral, and far peripheral EVC POIs (see above for their description). The results for similar POIs delineated in specific visual areas (V1, V2, and V3) were also calculated. The within-participant decoding and across-run cross-validation scheme, described above, were used. Furthermore, to exclude the possibility that differences in foveal, peripheral, and far peripheral POI sizes (average foveal POI = 684 vertices; average peripheral POI = 912 vertices; average far peripheral POI = 1048 vertices) affected our results, we repeated the analysis while randomly drawing (without replacement) equal numbers of vertices from foveal, peripheral, and far peripheral EVC POIs. We tested six POI sizes, from 100 to 600 vertices. At each POI size level, and for each of the three POIs, we averaged the decoding results across 1000 random draws of vertices. We then compared the results with the decoding accuracies obtained in the analysis of whole POIs.
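The size-matching control can be sketched as repeated random draws of an equal number of vertices (columns) from each POI's pattern matrix; in the sketch below (illustrative Python), `decode` stands for any decoding routine that takes a trials × vertices matrix and returns an accuracy.

```python
import numpy as np

def size_matched_accuracy(X, decode, n_vertices, n_draws=1000, seed=0):
    """Mean accuracy of `decode` over random draws (without replacement)
    of `n_vertices` columns (vertices) from pattern matrix X."""
    rng = np.random.default_rng(seed)
    accs = []
    for _ in range(n_draws):
        cols = rng.choice(X.shape[1], size=n_vertices, replace=False)
        accs.append(decode(X[:, cols]))
    return float(np.mean(accs))
```

Averaging over many draws, as in the 1000 draws described in the text, stabilizes the estimate against the particular vertices that happen to be selected.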

Fourth, to further investigate the robustness of our findings, we plotted decoding accuracies obtained for individual blind participants. The results for EVC, MC, and AC POIs were plotted.

In addition to the analyses focused on EVC, we performed the searchlight analysis, to reveal the whole cortical network representing reach direction in the blind participants. The analysis was performed on cortical surface reconstructions of each blind participant, using CosmoMVPA and Surfing Toolbox (Oosterhof et al., 2011). It was performed separately for each hemisphere, within surface patches containing 100 vertices. All other analysis parameters were the same as in the within-participant POI decoding analyses.

Finally, to test whether the reach direction representation is stronger in canonical visuospatial processing areas than in other high-order brain areas, we performed the within-participant POI analysis, using the parameters described above, in four regions: the two canonical visuospatial areas, that is, the intraparietal sulcus (IPS) and FEF/PMd, and the two canonical language areas, that is, Broca's area and the superior temporal sulcus (STS)/superior temporal gyrus (STG). The Broca's area POI was created by combining left BAs 44 and 45 together. The STS/STG POI was defined as left BA 22. As in the previous analyses, the BrainVoyager cortical atlas of BAs was used to define these POIs. The IPS POI was defined bilaterally using the BrainVoyager atlas of cortical sulci. It covered the whole extent of the IPS; thus, it is likely to include multiple functional areas (e.g., Gallivan and Culham, 2015). Our aim was to obtain a general assessment of the reach direction decoding accuracy in the IPS, rather than to distinguish between these specific areas. The FEF/PMd POI was defined bilaterally using the BrainVoyager “fMRI atlas,” and then dilated to achieve the approximate size of the Broca's and the STS/STG POIs. The procedures of cortex-based alignment, identical to those used in the other within-participant POI analyses, were used to align each POI to cortical reconstructions of individual blind participants.

In all within-participant POI decoding analyses, the statistical significance of obtained classification accuracies was tested against chance levels that were empirically derived in the permutation procedure. Specifically, each classification analysis was rerun 1000 times for each participant with reach direction labels (up, down, right, left) randomly assigned to experimental trials in each iteration, participant, and experimental run. Null distributions created in this procedure were averaged across participants and compared with the actual average classification accuracies. The p values that were obtained in this way were corrected for multiple comparisons using the false discovery rate (FDR) (Benjamini and Hochberg, 1995). A review of null distributions confirmed that, for each POI and analysis, the empirically derived chance levels were indistinguishable from a priori chance levels (25%). Thus, for simplicity, the a priori chance level is presented in the figures.
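The permutation-based significance test and the FDR correction can be sketched as follows (illustrative Python; the +1 correction in the empirical p value is a common convention and an assumption here, as the text does not state how p values were computed from the null distributions).

```python
import numpy as np

def permutation_p_value(observed_acc, null_accs):
    """Empirical p value: fraction of null accuracies >= observed,
    with the conventional +1 correction to avoid p = 0."""
    null_accs = np.asarray(null_accs, dtype=float)
    return (1 + (null_accs >= observed_acc).sum()) / (1 + len(null_accs))

def fdr_bh(p_values):
    """Benjamini-Hochberg adjusted p values (step-up procedure)."""
    p = np.asarray(p_values, dtype=float)
    n = len(p)
    order = np.argsort(p)
    ranked = p[order] * n / np.arange(1, n + 1)
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]  # enforce monotonicity
    adj = np.empty(n)
    adj[order] = np.minimum(ranked, 1.0)
    return adj
```

In the study's scheme, the null accuracies come from 1000 reruns of the classification with randomly permuted direction labels, averaged across participants before comparison with the actual average accuracy.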

The same procedures were used in the cross-participant decoding analysis. In the case of cross-participant reach direction decoding, the chance level was derived by rerunning the analysis with reach direction labels (up, down, right, left) randomly assigned to experimental trials in each iteration, participant, and run, as was described above. In the case of cross-participant Braille words decoding, the analysis was rerun with labels of the two Braille sheets randomly assigned to experimental runs, in each iteration and participant. In the cross-participant analysis, the empirically derived chance levels were indistinguishable from a priori chance levels (25% for reach direction decoding, 50% for Braille words decoding).

Testing for significant differences in decoding accuracies across multiple POIs was performed with repeated-measures ANOVAs. Testing for differences in decoding accuracies between two POIs was performed with a paired t test. SPSS 25 (IBM) was used to perform these tests. FDR was used to correct for multiple comparisons, when applicable.

To statistically test for above-chance effects in the searchlight analysis, single-subject classification accuracy maps were smoothed (a BrainVoyager procedure of smoothing on surface, the nearest neighbors approach, repeat value: 4), cortex-based aligned to the group average, and converted into a group threshold-free cluster enhancement (TFCE) map (Smith and Nichols, 2009), calculated in CosmoMVPA with standard parameters (E = 0.5, H = 2). The obtained TFCE values were then compared with an empirically derived chance level, obtained in the Monte Carlo simulation procedure (Oosterhof et al., 2016). Specifically, for each vertex, the TFCE values obtained in the group analysis of actual decoding accuracies were compared with the null distribution of TFCE values obtained in 10,000 iterations in which the signs of the effects obtained in specific participants were randomly flipped. The analysis was thresholded at p < 0.05, corrected for multiple comparisons across the whole cortical surface of a given hemisphere (z = 1.65).
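Leaving the TFCE transform itself aside, the sign-flipping Monte Carlo step can be sketched as follows: under the null hypothesis, each participant's effect (accuracy minus chance) is symmetric around zero, so randomly flipping the sign of whole participants yields a null distribution of the maximum group statistic against which the observed map is compared. This is a simplified Python illustration without the TFCE weighting; names are illustrative.

```python
import numpy as np

def sign_flip_null_max(effects, n_iter=10000, seed=0):
    """effects: participants x vertices array of accuracy-minus-chance values.
    Returns the observed group mean per vertex and a null distribution of the
    maximum group mean across vertices under random participant sign flips."""
    rng = np.random.default_rng(seed)
    n_sub = effects.shape[0]
    observed = effects.mean(axis=0)
    null_max = np.empty(n_iter)
    for i in range(n_iter):
        signs = rng.choice([-1.0, 1.0], size=(n_sub, 1))
        null_max[i] = (signs * effects).mean(axis=0).max()
    return observed, null_max
```

Taking the maximum across vertices in each iteration builds familywise control over the whole surface into the null distribution, which is the role played by the whole-hemisphere correction in the actual TFCE analysis.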

Univariate analysis

We also ran the univariate analysis, to reveal brain responses elicited by our task, relative to rest periods, in the congenitally blind participants. We first performed a whole-brain analysis, in which we tested for activations induced by all experimental trials, compared with rest, across the cortical surface. This was followed by a more sensitive POI analysis, in which we investigated the same effect in EVC, in specific early visual areas (V1, V2, and V3), and in the EVC regions that typically represent specific visual eccentricities (foveal, peripheral, and far peripheral; for the description of these POIs, see above).

Furthermore, we also tested for univariate activation differences across the experimental conditions, that is, trials with different reach directions (up, down, right, left). To perform the whole-brain analysis, contrast estimate maps for each experimental condition versus rest were calculated for each participant (four maps for each participant), using BrainVoyager GLM functionality. These maps were then entered into a repeated-measures ANOVA, as implemented in CosmoMVPA. The whole-brain analysis was again followed by a POI analysis in EVC.

All univariate analyses were performed on smoothed data (see Data preprocessing). The statistical significance of effects observed in the whole-brain univariate analyses was again tested using TFCE maps and Monte Carlo simulation, as implemented in CosmoMVPA. The same analysis parameters and statistical thresholds as in the searchlight decoding analysis were used. The statistical significance of effects observed in the POI analyses was tested using one-sample t tests. The differences between results for different POIs were tested using repeated-measures ANOVAs and paired-sample t tests. SPSS 25 was used to calculate all statistics in the univariate POI analyses. FDR correction for multiple comparisons was applied, when applicable.

Controlling for movement artifacts

Finally, we ran several control analyses to exclude the possibility that our results are driven by the fMRI signal artifacts induced by movements performed in the MRI scanner (Barry et al., 2010).

First, we investigated event-related average plots, illustrating the unfolding of brain activation for all experimental trials compared with rest, for the four regions that are critical for the study: EVC, MC, IPS, and FEF/PMd. The plots were calculated separately for each hemisphere, using the POIs described above, and then averaged. We performed this analysis to verify whether there were any spikes in the signal when participants performed the reaches. The existence of such spikes would be indicative of movement-related artifacts in the signal (e.g., Singhal et al., 2013; Monaco et al., 2017).
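The event-related averaging used in this control can be sketched as epoching the time course at trial onsets and averaging across epochs (illustrative Python; `ts` stands for a POI-averaged BOLD time course indexed in volumes).

```python
import numpy as np

def event_related_average(ts, onsets, window):
    """Average a BOLD time course over `window` volumes following each
    trial onset; epochs running past the end of the run are dropped."""
    epochs = np.array([ts[o:o + window] for o in onsets
                       if o + window <= len(ts)])
    return epochs.mean(axis=0)
```

Movement artifacts would appear in such averages as sharp spikes time-locked to the reaches, rather than the slow hemodynamic rise and fall expected from genuine neural activation.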

Second, we ran the within-participant decoding of reach directions in the frontal white matter, near MC. In contrast to the main analyses, this analysis was performed in volume space, as white matter is not represented in the surface space. The ROI was defined in the right hemisphere (Talairach coordinates of the center: 21, 16, 29) and contained ∼50 voxels. The statistical significance of the decoding was assessed in the permutation procedure, in the same way as in the main analyses.

Third, we further analyzed the results produced by the searchlight classification procedure. Specifically, we averaged the reach decoding accuracies produced by the searchlight within each of our four critical POIs (EVC, MC, IPS, FEF/PMd). Next, we compared these accuracies with searchlight reach decoding accuracy averaged across the frontal and temporal lobes (the MC and the FEF/PMd were excluded from the mask). Furthermore, we reran the searchlight decoding analysis, and we tested for significant effects (using the same procedures and thresholds as in the original analysis) using the mean decoding accuracy obtained within the above-described, frontal and temporal mask as baseline. The frontal and temporal regions are likely to include some “ground-truth” representations of reach directions, and are also among the most affected by movement artifacts (Wu et al., 1997; Barry et al., 2010). Thus, finding significant effects in these analyses, in regions that are critical for our claims, would be a conservative demonstration of (1) specificity of our effects, and (2) that our findings cannot be explained by movement artifacts.

Results

Multivoxel pattern classification (decoding) results

In the within-participant decoding analysis, we were able to reliably decode reach direction (up, down, left, right) from fMRI activity patterns of EVC (areas V1, V2, and V3 combined) and of specific early visual areas in the congenitally blind participants (all p values < 0.001; Fig. 2). Successful decoding of reach direction was also achieved in other sensory areas: MC and AC (all p values < 0.001; Fig. 2). However, the accuracy of reach direction decoding in these three sensory areas differed, as indicated by a significant area effect (F(2,16) = 12.11, p < 0.001, partial η2 = 0.6) in a one-way repeated-measures ANOVA. The post hoc comparisons revealed a higher decoding accuracy in MC than in AC (p = 0.001) and EVC (trend level, p = 0.052). Moreover, the decoding accuracy in EVC was higher than in AC (trend level, p = 0.052).

Figure 2.

Reach direction can be reliably decoded from the fMRI activity of EVC in congenitally blind participants. Results of the reach direction decoding analysis in MC, EVC, AC, and in specific early visual areas. ***p < 0.001; **p < 0.01; tp = 0.052; FDR-corrected. Black lines indicate chance level. Error bars indicate the SEM.

In the cross-participant decoding analysis, we were able to decode reach directions across participants in EVC (p = 0.003) and in MC (p = 0.003), but not in AC (p = 0.189) (Fig. 3A). This suggests that the reach direction in EVC and in MC is represented using some form of a large-scale organization (e.g., retinotopy in EVC, somatotopy in MC), as only such organization is likely to generalize across participants. In contrast to the reach direction decoding, we were not able to decode the two sets of Braille words used in the study across the blind participants, in any of the three sensory areas (all p values > 0.25; Fig. 3B).

Figure 3.

Different reach directions, but not different Braille words, can be decoded across the congenitally blind participants, based on activity of MC and EVC. The results of the cross-participant classification of (A) four reach directions and (B) two sets of Braille words used in the study. The classification accuracies are presented for MC, EVC, and AC. **p < 0.01 (FDR-corrected). Black lines indicate chance level. Error bars indicate the SEM calculated across the cross-validation folds (i.e., across the results of decoding with different participants' data used for testing).

In the within-participant analysis of typically foveal and peripheral EVC areas, we observed a gradient of reach direction decoding accuracy at different eccentricities (Fig. 4). As expected, the decoding was most accurate in the foveal parts of EVC and gradually decreased in peripheral parts of this region, as indicated by a significant eccentricity effect (F(2,16) = 3.77, p = 0.046, partial η2 = 0.32) and a significant linear contrast for the eccentricity factor (F(1,8) = 5.64, p = 0.045, partial η2 = 0.41) in a one-way repeated-measures ANOVA.

Figure 4.

The foveal-peripheral gradient of reach decoding accuracy in EVC of congenitally blind participants. Results of the reach direction decoding analysis in early visual regions typically representing the foveal visual field, the peripheral visual field, and the far periphery of the visual field. Results are presented for EVC and for specific early visual areas. ***p < 0.001 (FDR-corrected). Arrow indicates a significant linear contrast in a repeated-measures ANOVA. Black line indicates chance level. Error bars indicate the SEM.

We then repeated the analysis in POIs created by randomly drawing an equal number of vertices from the foveal, peripheral, and far peripheral EVC POIs (see Materials and Methods). We created six such POIs, containing from 100 to 600 vertices, and observed a comparable foveal-peripheral reach direction decoding gradient across all POI sizes tested (Fig. 5). The 3 (EVC eccentricity) × 7 (POI size, including the whole POIs) repeated-measures ANOVA produced a significant main effect of POI size (F(6,96) = 15.07, p = 0.002, partial η2 = 0.65), indicating that decoding accuracy increased with larger POI sizes. Importantly, we also found a significant main effect of EVC eccentricity (F(2,96) = 3.95, p = 0.040, partial η2 = 0.33) and a significant linear contrast for this effect (F(1,8) = 5.77, p = 0.043, partial η2 = 0.42). There was no interaction between the two factors (F < 1, p > 0.25). Overall, this control analysis shows that the foveal-peripheral reach direction decoding gradient can be reliably found in EVC in congenitally blind participants across a variety of POI sizes, and when size differences between specific POIs are controlled for.
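The subsampling control can be sketched as follows. Here `decode_accuracy` is a hypothetical stand-in for the full decoding pipeline, and the vertex counts and accuracy model are made up for illustration; only the draw-and-average logic mirrors the analysis.

```python
# Sketch of the equal-vertex-count control: draw the same number of vertices
# from each eccentricity POI, decode, and average over random draws.
import numpy as np

rng = np.random.default_rng(2)

def decode_accuracy(vertex_idx):
    """Hypothetical stand-in for decoding restricted to these vertices."""
    return 0.25 + 0.0001 * len(vertex_idx) + 0.01 * rng.standard_normal()

poi_vertices = {"foveal": np.arange(0, 900),
                "peripheral": np.arange(900, 2100),
                "far_peripheral": np.arange(2100, 3600)}

results = {}
for n_vertices in range(100, 700, 100):        # POI sizes 100-600
    for name, verts in poi_vertices.items():
        draws = [decode_accuracy(rng.choice(verts, n_vertices, replace=False))
                 for _ in range(1000)]          # 1000 random draws per cell
        results[(name, n_vertices)] = float(np.mean(draws))
```

Averaging over many draws removes the influence of any particular vertex selection, so the remaining accuracy differences between eccentricity bands cannot be attributed to POI size.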

Figure 5.

The foveal-peripheral gradient of reach direction decoding in EVC in blind participants can be found across a wide range of POI sizes. The figure presents the accuracy of reach direction decoding in EVC in congenitally blind participants. The results are presented separately for areas typically representing the fovea, the peripheries, and the far peripheries of the visual field. The analysis was performed in POIs containing different numbers of vertices. To create POIs containing only a subset of vertices, these vertices were randomly drawn from the whole POIs. At each POI size level, the decoding accuracies were averaged across 1000 random draws of vertices for each POI. Symbols above the results represent significant main effects of EVC eccentricity, and significant linear contrasts for these effects, in repeated-measures ANOVAs run at each POI size level. *p < 0.05. tp < 0.1. The decoding chance level is equal to 25% and is not shown. Error bars indicate the SEM, adjusted to properly reflect variability in repeated-measures comparisons, using a method described by Cousineau (2005).
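The Cousineau (2005) adjustment mentioned in the caption removes between-participant offsets before computing error bars, so that the SEM reflects within-subject variability. A minimal sketch on made-up accuracies (shape: participants × conditions):

```python
# Cousineau (2005) within-subject error bars: subtract each participant's
# mean, add back the grand mean, then compute the per-condition SEM.
import numpy as np

acc = np.array([[0.40, 0.35, 0.30],
                [0.50, 0.44, 0.41],
                [0.33, 0.30, 0.27]])   # 3 subjects x 3 eccentricity conditions

grand_mean = acc.mean()
subject_means = acc.mean(axis=1, keepdims=True)
normalized = acc - subject_means + grand_mean   # removes between-subject spread

sem = normalized.std(axis=0, ddof=1) / np.sqrt(acc.shape[0])
print(sem)  # smaller than the raw SEM for these data, as intended
```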

Furthermore, given that in our previous study (Vetter et al., 2020) we observed an opposing EVC decoding accuracy gradient (i.e., better decoding in the peripheries) when the same blind participants listened to natural sounds, we formally tested for a difference between these results. We entered the EVC decoding accuracies obtained in our two studies into a 2 (study) × 3 (EVC eccentricity) repeated-measures ANOVA. As expected, we found a highly significant interaction between the study and EVC eccentricity factors (F(2,14) = 31.26, p < 0.001, partial η2 = 0.82), as well as between study and the linear contrast fitted to the EVC eccentricity factor (F(1,7) = 84.18, p < 0.001, partial η2 = 0.92).

We then plotted the within-participant decoding results for individual participants (Fig. 6). We found that the accuracy of reach direction decoding in EVC was above chance level in all 9 congenitally blind participants. Furthermore, the foveal-peripheral reach direction decoding gradient in EVC was clearly visible even at the level of individual results.

Figure 6.

Reach direction decoding accuracies for individual blind participants. The individual data are presented for MC, EVC, and AC. Furthermore, the data are presented for specific early visual areas, and for early visual regions that typically represent foveal (EVC fov), peripheral (EVC peri), and far peripheral (EVC far peri) visual fields. Dotted lines indicate chance level.

In the surface searchlight analysis (Fig. 7), we observed the highest reach decoding accuracy in the foveal parts of EVC and in the dorsal brain areas: motor and somatosensory cortices, SPL, IPS, supplementary motor area (SMA), and right FEF/PMd. The independent POI analysis confirmed that reach decoding accuracy in EVC and in the two canonical dorsal visuospatial areas (IPS and FEF/PMd) was significantly higher than in the two canonical language areas (Broca's area and left STS/STG) (Fig. 8; all p values < 0.05). These results suggest that the dorsal stream regions are preferentially involved in representing reach-related information in congenitally blind participants. Moreover, this analysis provides an important control comparison: the fact that the decoding accuracy for reach direction was higher in EVC than in AC (Fig. 2) and in canonical language regions (Fig. 8) shows that the effects observed in EVC cannot be explained by auditory or linguistic processing of the verbal cues indicating reach direction in each trial.
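The FDR correction applied to these POI comparisons (and throughout the figures) follows the Benjamini-Hochberg step-up procedure (Benjamini and Hochberg, 1995). A minimal implementation, with illustrative p values:

```python
# Benjamini-Hochberg FDR correction: find the largest i such that
# p_(i) <= q * i / m, and reject all hypotheses up to that rank.
import numpy as np

def fdr_bh(pvals, q=0.05):
    """Return a boolean mask of p values significant at FDR level q."""
    p = np.asarray(pvals)
    order = np.argsort(p)
    m = len(p)
    thresh = q * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    passed = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest rank meeting the criterion
        passed[order[:k + 1]] = True
    return passed

pvals = [0.001, 0.004, 0.019, 0.095, 0.201]
print(fdr_bh(pvals))  # -> [ True  True  True False False]
```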

Figure 7.

Results of the searchlight analysis of reach direction decoding. The accuracy of reach direction decoding was averaged across subjects and visualized on an inflated, averaged cortical surface reconstruction for the blind group. A significant above-chance decoding was found throughout the brain. However, the highest decoding accuracy was observed in somatosensory and motor cortices, the foveal EVC, and in the dorsal brain regions, such as SPL, IPS, SMA, or right FEF/PMd. The significance of the observed effects was confirmed with a TFCE approach and Monte Carlo simulation. The statistical threshold was set at p < 0.05, corrected for multiple comparisons across the whole cortical surface.

Figure 8.

The accuracy of reach direction decoding is higher in EVC and dorsal visuospatial areas than in canonical language areas. Results of the reach direction decoding analysis for the five regions: EVC, FEF/PMd, IPS, Broca's area, and left STS/STG. Different colors are used to represent different brain networks to which these brain areas are thought to belong. ***p < 0.001; *p < 0.05; FDR-corrected. Black line indicates chance level. Error bars indicate the SEM.

Univariate results

In the whole-brain, fully corrected analysis, we did not observe any significant activations for our task, compared with rest, perhaps because our event-related design was optimized for decoding rather than for detecting univariate brain responses. However, with a more lenient statistical threshold (p < 0.001, uncorrected), we were able to detect the expected activations in the motor, somatosensory, and parietal cortices (Fig. 9A). Furthermore, a more sensitive POI analysis revealed a subtle univariate response in EVC (t(8) = 1.96, p = 0.043; Fig. 9B). The univariate responses in EVC increased from typically peripheral to typically foveal regions (main effect of EVC eccentricity: F(2,16) = 4.93, p = 0.048, partial η2 = 0.38; linear contrast: F(1,8) = 6.46, p = 0.035; partial η2 = 0.45; Fig. 9C), an effect similar to the one found for the decoding accuracies. Interestingly, univariate responses also increased from V1 to V3 (main effect of area: F(2,16) = 7.65, p = 0.005, partial η2 = 0.49; linear contrast: F(1,8) = 8.25, p = 0.021, partial η2 = 0.51; Fig. 9D), an effect not found for the decoding accuracies, which were comparable in all early visual areas. Notably, our task did not elicit a univariate response in area V1 (t < 1, p > 0.25), even though this area showed robust reach direction representation in the decoding analysis.
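A per-participant linear contrast of the kind reported here can be sketched as follows: weight the condition means, compute one contrast score per participant, and t-test the scores against zero. The beta values below are illustrative, not the study's data.

```python
# Sketch of a repeated-measures linear contrast across three eccentricity
# POIs (illustrative per-subject condition means).
import numpy as np
from scipy import stats

beta = np.array([[0.30, 0.18, 0.05],
                 [0.25, 0.20, 0.10],
                 [0.35, 0.15, 0.08],
                 [0.28, 0.22, 0.12]])   # subjects x (foveal, peri, far peri)

weights = np.array([1, 0, -1])          # linear decrease from fovea outward
scores = beta @ weights                 # one contrast score per subject
t, p = stats.ttest_1samp(scores, 0)
print(f"linear contrast: t({len(scores) - 1}) = {t:.2f}, p = {p:.4f}")
```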

Figure 9.

Univariate responses elicited by reaching for and reading Braille words. A, Brain regions showing stronger univariate activation during the experimental trials (i.e., when participants were involved in the task) than during the rest periods. The whole-brain analysis was thresholded at p < 0.001, uncorrected (no significant activations were detected at the corrected level). B–D, Activation during the experimental trials, relative to the rest periods, in (B) EVC, (C) early visual regions that typically represent fovea (EVC fov), peripheries (EVC peri), and far peripheries (EVC far peri) of the visual field, and (D) specific early visual areas. *p < 0.05; tp < 0.1; FDR-corrected. Arrows indicate significant linear contrasts in repeated-measures ANOVAs. Error bars indicate the SEM.

The whole-brain analysis of differences in activations across specific reach directions (up, down, right, left) produced significant effects in left motor and somatosensory cortices, left inferior frontal cortex, medial frontal cortices, temporal lobe, precuneus, and cuneus (Fig. 10A). Given that some of these effects were localized in regions in which no significant responses relative to rest were observed (including some default mode network regions) (Raichle, 2015), we cannot exclude the possibility that these findings reflect differences in deactivation levels rather than in above-rest activations. In a more sensitive POI analysis, we also detected a significant main effect of experimental condition (reach direction) in EVC (F(3,24) = 3.29, p = 0.038, partial η2 = 0.29). While the pattern of responses induced by each condition, relative to rest, suggested that EVC activations were primarily driven by trials in which the participants reached down (t(8) = 3.65, p = 0.014; p values for all other conditions > 0.1; Fig. 10B), the direct comparisons between experimental conditions were not significant (all p values > 0.05).

Figure 10.

Differences in univariate activations across the reach directions. A, Results of the whole-brain F test testing for the differences in univariate responses across the four reach directions. The significance of the observed effects was assessed using a TFCE approach and Monte Carlo simulation. The statistical threshold was set at p < 0.05, corrected for multiple comparisons across the whole cortical surface. B, Univariate responses across the four reach directions in EVC. *p < 0.05 (FDR-corrected). Error bars indicate the SEM.

Controlling for movement artifacts

The analysis of event-related average plots did not show signal spikes at the moment of hand movement, similar to those described previously (e.g., Singhal et al., 2013; Monaco et al., 2017), in any of the four regions tested (EVC, MC, IPS, and FEF/PMd; Fig. 11A). The accuracy of within-participant decoding of reach directions in the frontal white matter was not significantly different from chance level (p = 0.107; Fig. 11B). Finally, testing for significant effects in the searchlight analysis performed with decoding accuracy in frontal and temporal regions as baseline (see Materials and Methods) still produced significant results in the foveal EVC, motor and somatosensory cortices, SMA, IPS, and FEF/PMd (Fig. 11C). These effects were detected despite the fact that frontal and temporal regions are among the most affected by movement artifacts (Wu et al., 1997; Barry et al., 2010), and are also likely to compute some “ground-truth” reach direction representations.

Figure 11.

Testing for the effects of participants' movements on the study results. A, Event-related average plots, illustrating the unfolding of brain activation during experimental trials, for EVC, MC, IPS, and FEF/PMd. The plots were calculated separately for each hemisphere and then averaged. No signal spikes, characteristic of movement artifacts, were observed. B, The accuracy of reach direction decoding in frontal white matter. Top, ROI. Talairach coordinates of the center (21, 16, 29). C, Results of searchlight classification of reach directions that used frontal and temporal regions as baseline for significance testing (MC and FEF/PMd were excluded from the mask). In the whole-brain analysis of searchlight effects (left), significance testing was performed with a TFCE approach and Monte Carlo simulation. The statistical threshold was set at p < 0.05, corrected for multiple comparisons across the whole cortical surface. In the analysis of searchlight effects in specific brain regions (right), significant increases in classification accuracy, relative to baseline regions, were tested with one-sample t tests. *p < 0.05; **p < 0.01; ***p < 0.001; FDR-corrected. Error bars indicate the SEM.

Overall, the three analyses that were performed provide converging evidence that our results cannot be explained by movement artifacts.

Discussion

In this study, we found that reach direction could be reliably decoded from fMRI activity patterns of EVC in congenitally blind participants. We also observed a gradient of reach direction decoding within EVC in these participants; the decoding accuracy was highest in the typically foveal EVC areas and gradually decreased in typically peripheral areas. Beyond EVC, the reach direction decoding was most accurate in dorsal brain areas, such as somatosensory and motor cortices, SPL, IPS, SMA, or FEF/PMd.

Are representations of motor actions, observed in EVC of sighted individuals (Monaco et al., 2020; Knights et al., 2021), reducible to visual imagery? Is vision a necessary prerequisite for the development of these representations? The answers to these questions were unclear and inferred primarily from differences in response magnitudes during actions and imagining actions (Monaco et al., 2017), or from null effects in cross-decoding between these two conditions (Monaco et al., 2020). Here, we took a different approach to resolve these issues: we tested congenitally blind participants, who have never had visual experience and could not develop visual imagery. Our results clearly demonstrate that neither visual experience nor visual imagery, understood as the creation of an internal, vision-like mental representation of actions or objects over which actions are performed, is necessary for the emergence of action-related representation in EVC.

If action-related information is not represented in EVC through visual imagery, then what mechanisms can support such representation? One possibility is that spatial properties of actions or action targets ("objects") can be directly projected onto EVC retinotopic organization, without an intermediate step of being transformed into visual format. Our results support this possibility and suggest that the retinotopic EVC organization is, indeed, involved in representing reach directions in blind individuals. First, we were able to cross-decode reach directions across the blind participants, based on EVC activity. This suggests that reach representation in this region is supported by some form of large-scale organization, as only such a representation is likely to generalize across participants. Arguably, the retinotopic organization is the most plausible candidate for such a large-scale representational mechanism in EVC, also in blind individuals (Striem-Amit et al., 2015; Norman and Thaler, 2019; Vetter et al., 2020). Second, we directly confirmed the importance of the EVC retinotopic organization in supporting reach representation in blind individuals by showing that reach direction is preferentially represented in typically foveal EVC areas in blind participants. Overall, our results suggest that action-related information can be represented in EVC through modulation of specific retinotopic locations, and that such modulation is a process that is independent of visual imagery and visual experience. A mechanism of projecting spatial properties of the environment onto retinotopic organization can, potentially, underlie activations of visual areas in blind participants for many spatial tasks, such as localizing stimuli in space (Gougoux et al., 2005; Collignon et al., 2007, 2011; Garg et al., 2007), distance or symmetry judgment (Merabet et al., 2004; Bauer et al., 2015), or Braille reading (Sadato et al., 1996; Cohen et al., 1997; Tian et al., 2023).

In sighted individuals, performing actions over small objects preferentially involves the foveal EVC, even when participants do not see these objects, and are asked to fixate on a point placed well above them (Monaco et al., 2017). Similarly, we showed that reaching for Braille words placed at some distance from the center of a Braille sheet (the hand starting point) preferentially involves foveal EVC in congenitally blind participants. This shows that, in both populations, action-related information might be projected onto EVC using the same pathways and mechanisms. Furthermore, the preferential involvement of foveal EVC, regardless of the actual position of a target object, might suggest that the action-related projections to EVC are predictive in nature. In this view, action-related information is sent primarily to the foveal visual areas because foveal vision is critical for guiding and online correction of motor actions. Such a predictive mechanism would fit with our real-world behaviors: we tend to foveate on small objects we want to grasp, even if, at the stage of formulating an action intention, these objects are in our peripheral visual field. Such a mechanism seems more efficient than coding the actual position of an object (action endpoint) in the visual field, especially given the multitude of saccades and head turns we perform every second. In our study, we show that pathways supporting such a predictive mechanism of aligning action and visual spaces might be preserved in congenitally blind individuals. Perhaps spatial and motor experience is sufficient to make these pathways functional even in the lifelong absence of vision. Another interesting hypothesis is that the index fingertip of blind individuals serves as a tactile "fovea" during Braille reading, which, in our task, moved to different spatial locations, much as the eyes do in sighted individuals. Our successful decoding results in both foveal EVC and FEF/PMd might support this idea.

In our previous study with the same group of congenitally blind participants, we demonstrated that the decoding accuracy of natural sounds increases from foveal to peripheral parts of EVC (Vetter et al., 2020). Here, we show an opposite decoding gradient for reach direction, with decoding accuracy being higher in foveal EVC parts. Together, these findings show a precise functional architecture for representing nonvisual information in EVC of congenitally blind individuals, which can be activated in a variety of contexts in a way that potentially reflects computational demands of stimuli or tasks.

In addition to the successful decoding of trials involving different reach directions, we also found that our task elicited univariate activation of EVC in the blind participants, although these effects were rather subtle. Interpretation of univariate responses is challenging, as they can be driven by both action-related processes and reading Braille words (Sadato et al., 1996; Cohen et al., 1997; Tian et al., 2023). However, our design (using different, unrelated, and abstract Braille words in each experimental run), in combination with the decoding procedure (training and testing the classifier on different runs), ensured that the decoding results, critical for this study, are not affected by the Braille word representations. The only decoding analysis in which our design did not preclude finding Braille-related effects was the cross-participant analysis. Interestingly, even in this analysis, we found robust representation of reach direction in EVC in congenitally blind participants, but no representation of different Braille words.

The searchlight analysis highlighted a number of dorsal stream areas that preferentially represent reach directions also in sighted individuals (Fabbri et al., 2014). In the sighted population, these areas are known to be critical for visuo-spatial attention and visuo-motor integration (Mishkin et al., 1983; Goodale and Milner, 1992; Kravitz et al., 2011; Gallivan and Culham, 2015). However, certain studies suggest that the representations computed in these areas are not fully dependent on incoming visual information from the retina (e.g., Prather et al., 2004; Tark and Curtis, 2009; Bernier and Grafton, 2010; Sathian et al., 2011). In congenitally blind participants, shape identification preferentially activates ventral stream areas, whereas location identification preferentially activates dorsal stream areas (Striem-Amit et al., 2012). Furthermore, similar dorsal regions are preferentially involved in guiding hand movements in congenitally blind and sighted participants (Fiehler et al., 2009). Finally, the FEF/PMd is involved in spatial orienting not only in sighted individuals, but also in congenitally blind participants (Garg et al., 2007). Our study adds to the evidence that dorsal stream areas in congenitally blind individuals truly develop representations of space and/or actions, as indicated by these areas' ability to represent different reach directions.

Together, our results match the findings in sighted individuals, and suggest that the development of action representations in the human brain might be largely independent of visual experience. It is important to note, however, that the action representations in sighted individuals were mostly studied using 3D objects, whereas, in our study, blind participants reached for Braille words. The exact impact of using such stimuli on our results remains to be investigated. A direct comparison of results from congenitally blind and sighted individuals, preferably in a design using typical 3D objects as action targets, would be necessary to address this issue. Such a comparison would allow a more detailed description of similarities and differences in the brain action systems in these two populations.

In conclusion, we show that EVC represents action-related information in congenitally blind individuals. This finding demonstrates that neither visual experience nor visual imagery is necessary for such representations to emerge. Furthermore, we demonstrate remarkable similarity of the dorsal action brain networks in congenitally blind and sighted individuals, which calls for rethinking of how these networks develop in the human brain.

Footnotes

This work was supported by National Science Center Poland Grant 2020/37/B/HS6/01269 and Polish National Center for Academic Exchange Fellowship BPN/SEL/2021/1/00004 to Ł.B.; Medical Academy of Sciences (UK) Daniel Turnberg Travel Fellowship and Swiss National Science Foundation PRIMA Grant PR00P1_185918 to P.V.; and ERC Consolidator Grant 773121, Horizon GuestXR Grant 101017884, and Joy Ventures Grant to A.A. We thank Lior Reich for help with data collection; Ella Striem-Amit for fruitful discussions on the experimental design; and the Muckli Lab at Glasgow University for sharing the retinotopic maps of sighted participants with us.

The authors declare no competing financial interests.

References

  1. Barry RL, Williams JM, Klassen LM, Gallivan JP, Culham JC, Menon RS (2010) Evaluation of preprocessing steps to compensate for magnetic field distortions due to body movements in BOLD fMRI. Magn Reson Imaging 28:235–244. 10.1016/j.mri.2009.07.005 [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Bauer C, Yazzolino L, Hirsch G, Cattaneo Z, Vecchi T, Merabet LB (2015) Neural correlates associated with superior tactile symmetry perception in the early blind. Cortex 63:104–117. 10.1016/j.cortex.2014.08.003 [DOI] [PMC free article] [PubMed] [Google Scholar]
  3. Benjamini Y, Hochberg Y (1995) Controlling the false discovery rate: a practical and powerful approach to multiple testing. J R Stat Soc B Stat Methodol 57:289–300. 10.1111/j.2517-6161.1995.tb02031.x [DOI] [Google Scholar]
  4. Bernier PM, Grafton ST (2010) Human posterior parietal cortex flexibly determines reference frames for reaching based on sensory context. Neuron 68:776–788. 10.1016/j.neuron.2010.11.002 [DOI] [PubMed] [Google Scholar]
  5. Chang CC, Lin CJ (2011) LIBSVM: a library for support vector machines. ACM Trans Intell Syst Technol 2:1–27. 10.1145/1961189.1961199 [DOI] [Google Scholar]
  6. Cohen LG, Celnik P, Pascual-Leone A, Corwell B, Faiz L, Dambrosia J, Honda M, Sadato N, Gerloff C, Catalá MD, Hallett M (1997) Functional relevance of cross-modal plasticity in blind humans. Nature 389:180–183. 10.1038/38278 [DOI] [PubMed] [Google Scholar]
  7. Collignon O, Lassonde M, Lepore F, Bastien D, Veraart C (2007) Functional cerebral reorganization for auditory spatial processing and auditory substitution of vision in early blind subjects. Cereb Cortex 17:457–465. 10.1093/cercor/bhj162 [DOI] [PubMed] [Google Scholar]
  8. Collignon O, Vandewalle G, Voss P, Albouy G, Charbonneau G, Lassonde M, Lepore F (2011) Functional specialization for auditory–spatial processing in the occipital cortex of congenitally blind humans. Proc Natl Acad Sci USA 108:4435–4440. 10.1073/pnas.1013928108 [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Cousineau D (2005) Confidence intervals in within-subject designs: a simpler solution to Loftus and Masson's method. TQMP 1:42–45. 10.20982/tqmp.01.1.p042 [DOI] [Google Scholar]
  10. Fabbri S, Strnad L, Caramazza A, Lingnau A (2014) Overlapping representations for grip type and reach direction. Neuroimage 94:138–146. 10.1016/j.neuroimage.2014.03.017 [DOI] [PubMed] [Google Scholar]
  11. Fiehler K, Burke M, Bien S, Röder B, Rösler F (2009) The human dorsal action control system develops in the absence of vision. Cereb Cortex 19:1–12. 10.1093/cercor/bhn067 [DOI] [PubMed] [Google Scholar]
  12. Gallivan JP, Culham JC (2015) Neural coding within human brain areas involved in actions. Curr Opin Neurobiol 33:141–149. 10.1016/j.conb.2015.03.012 [DOI] [PubMed] [Google Scholar]
  13. Garg A, Schwartz D, Stevens AA (2007) Orienting auditory spatial attention engages frontal eye fields and medial occipital cortex in congenitally blind humans. Neuropsychologia 45:2307–2321. 10.1016/j.neuropsychologia.2007.02.015 [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Goodale MA, Milner AD (1992) Separate visual pathways for perception and action. Trends Neurosci 15:20–25. 10.1016/0166-2236(92)90344-8 [DOI] [PubMed] [Google Scholar]
  15. Gougoux F, Zatorre RJ, Lassonde M, Voss P, Lepore F (2005) A functional neuroimaging study of sound localization: visual cortex activity predicts performance in early-blind individuals. PLoS Biol 3:e27. 10.1371/journal.pbio.0030027 [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Harrison SA, Tong F (2009) Decoding reveals the contents of visual working memory in early visual areas. Nature 458:632–635. 10.1038/nature07832 [DOI] [PMC free article] [PubMed] [Google Scholar]
  17. Knights E, Mansfield C, Tonin D, Saada J, Smith FW, Rossit S (2021) Hand-selective visual regions represent how to grasp 3D tools: brain decoding during real actions. J Neurosci 41:5263–5273. 10.1523/JNEUROSCI.0083-21.2021 [DOI] [PMC free article] [PubMed] [Google Scholar]
  18. Kravitz DJ, Saleem KS, Baker CI, Mishkin M (2011) A new neural framework for visuospatial processing. Nat Rev Neurosci 12:217–230. 10.1038/nrn3008 [DOI] [PMC free article] [PubMed] [Google Scholar]
  19. Merabet L, Thut G, Murray B, Andrews J, Hsiao S, Pascual-Leone A (2004) Feeling by sight or seeing by touch? Neuron 42:173–179. 10.1016/s0896-6273(04)00147-3 [DOI] [PubMed] [Google Scholar]
  20. Misaki M, Kim Y, Bandettini PA, Kriegeskorte N (2010) Comparison of multivariate classifiers and response normalizations for pattern-information fMRI. Neuroimage 53:103–118. 10.1016/j.neuroimage.2010.05.051 [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Mishkin M, Ungerleider LG, Macko KA (1983) Object vision and spatial vision: two cortical pathways. Trends Neurosci 6:414–417. 10.1016/0166-2236(83)90190-X [DOI] [Google Scholar]
  22. Monaco S, Gallivan JP, Figley TD, Singhal A, Culham JC (2017) Recruitment of foveal retinotopic cortex during haptic exploration of shapes and actions in the dark. J Neurosci 37:11572–11591. 10.1523/JNEUROSCI.2428-16.2017 [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Monaco S, Malfatti G, Culham JC, Cattaneo L, Turella L (2020) Decoding motor imagery and action planning in the early visual cortex: overlapping but distinct neural mechanisms. Neuroimage 218:116981. 10.1016/j.neuroimage.2020.116981 [DOI] [PubMed] [Google Scholar]
  24. Norman LJ, Thaler L (2019) Retinotopic-like maps of spatial sound in primary 'visual' cortex of blind human echolocators. Proc Biol Sci 286:20191910. 10.1098/rspb.2019.1910 [DOI] [PMC free article] [PubMed] [Google Scholar]
  25. Oosterhof NN, Wiestler T, Downing PE, Diedrichsen J (2011) A comparison of volume-based and surface-based multi-voxel pattern analysis. Neuroimage 56:593–600. 10.1016/j.neuroimage.2010.04.270
  26. Oosterhof NN, Connolly AC, Haxby JV (2016) CoSMoMVPA: multi-modal multivariate pattern analysis of neuroimaging data in Matlab/GNU Octave. Front Neuroinform 10:27.
  27. Pearson J, Naselaris T, Holmes EA, Kosslyn SM (2015) Mental imagery: functional mechanisms and clinical applications. Trends Cogn Sci 19:590–602. 10.1016/j.tics.2015.08.003
  28. Pitzalis S, Galletti C, Huang RS, Patria F, Committeri G, Galati G, Fattori P, Sereno MI (2006) Wide-field retinotopy defines human cortical visual area V6. J Neurosci 26:7962–7973. 10.1523/JNEUROSCI.0178-06.2006
  29. Prather SC, Votaw JR, Sathian K (2004) Task-specific recruitment of dorsal and ventral visual areas during tactile perception. Neuropsychologia 42:1079–1087. 10.1016/j.neuropsychologia.2003.12.013
  30. Raichle ME (2015) The brain's default mode network. Annu Rev Neurosci 38:433–447. 10.1146/annurev-neuro-071013-014030
  31. Roelfsema PR, de Lange FP (2016) Early visual cortex as a multiscale cognitive blackboard. Annu Rev Vis Sci 2:131–151. 10.1146/annurev-vision-111815-114443
  32. Sadato N, Pascual-Leone A, Grafman J, Ibañez V, Deiber MP, Dold G, Hallett M (1996) Activation of the primary visual cortex by Braille reading in blind subjects. Nature 380:526–528. 10.1038/380526a0
  33. Sathian K, Lacey S, Stilla R, Gibson GO, Deshpande G, Hu X, Laconte S, Glielmi C (2011) Dual pathways for haptic and visual perception of spatial and texture information. Neuroimage 57:462–475. 10.1016/j.neuroimage.2011.05.001
  34. Singhal A, Monaco S, Kaufman LD, Culham JC (2013) Human fMRI reveals that delayed action re-recruits visual perception. PLoS One 8:e73629. 10.1371/journal.pone.0073629
  35. Smith SM, Nichols TE (2009) Threshold-free cluster enhancement: addressing problems of smoothing, threshold dependence and localisation in cluster inference. Neuroimage 44:83–98. 10.1016/j.neuroimage.2008.03.061
  36. Striem-Amit E, Dakwar O, Reich L, Amedi A (2012) The large-scale organization of 'visual' streams emerges without visual experience. Cereb Cortex 22:1698–1709. 10.1093/cercor/bhr253
  37. Striem-Amit E, Ovadia-Caro S, Caramazza A, Margulies DS, Villringer A, Amedi A (2015) Functional connectivity of visual cortex in the blind follows retinotopic organization principles. Brain 138:1679–1695. 10.1093/brain/awv083
  38. Styrkowiec PP, Nowik AM, Króliczak G (2019) The neural underpinnings of haptically guided functional grasping of tools: an fMRI study. Neuroimage 194:149–162. 10.1016/j.neuroimage.2019.03.043
  39. Tian M, Saccone EJ, Kim JS, Kanjlia S, Bedny M (2023) Sensory modality and spoken language shape reading network in blind readers of Braille. Cereb Cortex 33:2426–2440. 10.1093/cercor/bhac216
  40. Tark KJ, Curtis CE (2009) Persistent neural activity in the human frontal cortex when maintaining space that is off the map. Nat Neurosci 12:1463–1468. 10.1038/nn.2406
  41. Vetter P, Smith FW, Muckli L (2014) Decoding sound and imagery content in early visual cortex. Curr Biol 24:1256–1262. 10.1016/j.cub.2014.04.020
  42. Vetter P, Bola Ł, Reich L, Bennett M, Muckli L, Amedi A (2020) Decoding natural sounds in early 'visual' cortex of congenitally blind individuals. Curr Biol 30:3039–3044.e2. 10.1016/j.cub.2020.05.071
  43. Wu DH, Lewin JS, Duerk JL (1997) Inadequacy of motion correction algorithms in functional MRI: role of susceptibility-induced artifacts. J Magn Reson Imaging 7:365–370. 10.1002/jmri.1880070219