Proceedings of the National Academy of Sciences of the United States of America
2010 Jun 28;107(28):12716–12721. doi: 10.1073/pnas.1006199107

Neural correlates of virtual route recognition in congenital blindness

Ron Kupers a,1,2, Daniel R Chebat b,1, Kristoffer H Madsen c, Olaf B Paulson c,d, Maurice Ptito b,c
PMCID: PMC2906580  PMID: 20616025

Abstract

Despite the importance of vision for spatial navigation, blind subjects retain the ability to represent spatial information and to move independently in space to localize and reach targets. However, the neural correlates of navigation in subjects lacking vision remain elusive. We therefore used functional MRI (fMRI) to explore the cortical network underlying successful navigation in blind subjects. We first trained congenitally blind and blindfolded sighted control subjects to perform a virtual navigation task with the tongue display unit (TDU), a visual-to-tactile sensory substitution device that translates a visual image into electrotactile stimulation applied to the tongue. After training, participants repeated the navigation task during fMRI. Although both groups successfully learned to use the TDU in the virtual navigation task, the brain activation patterns showed substantial differences. Blind but not blindfolded sighted control subjects activated the parahippocampus and visual cortex during navigation, areas that are recruited during topographical learning and spatial representation in sighted subjects. When the navigation task was performed under full vision in a second group of sighted participants, the activation pattern strongly resembled the one obtained in the blind when using the TDU. This suggests that in the absence of vision, cross-modal plasticity permits the recruitment of the same cortical network used for spatial navigation tasks in sighted subjects.

Keywords: cross-modal plasticity, parahippocampus, sensory substitution, spatial navigation, visual cortex


The ability to navigate efficiently in large-scale environments has always been a prerequisite for human survival, now applied to the particular challenges of living in a modern, urban society. Visual cues signaling the location of landmarks play a key role in facilitating the formation of spatial cognitive maps used for path finding in a visual setting (1, 2). Despite the importance of vision in spatial cognition, the abilities to recognize a traveled route and to represent spatial information are maintained in blind individuals (3–5), probably through tactile, auditory, and olfactory cues, as well as motion-related cues arising from the vestibular and proprioceptive systems.

During successful navigation, spatial information needs to be encoded and retrieved. The role of the hippocampus for navigation in large-scale environments has been amply demonstrated in both animal (6–8) and human studies (9–11). Besides the hippocampus, several other areas in the posterior mesial temporal lobe and posterior parietal, occipital, and infero-temporal cortices also play an important role in navigation (9, 12–17).

The neural correlates of navigation in congenital blindness remain elusive, in part owing to the difficulty in testing navigational skills of blind subjects within the setting of a functional brain imaging study. To circumvent this difficulty, we trained blind and sighted subjects in a spatial navigation task using the tongue display unit (TDU), a visual-to-tactile sensory substitution device that converts visual information into electro-tactile pulses applied to the tongue (18, 19). We hypothesized that, through the agency of cross-modal plasticity, blind subjects would recruit brain regions used by sighted individuals during route recognition, in particular the parahippocampal area, ventral visual cortex, fusiform gyrus, posterior parietal cortex, and precuneus (9, 12, 13).

The study consists of two experiments using the same navigational tasks presented either through the TDU in blind subjects and blindfolded sighted control subjects (experiment 1) or visually in a second group of sighted subjects (experiment 2).

Results

In experiment 1, we first trained 10 congenitally blind and 10 blindfolded sighted control subjects during 4 consecutive d in a route navigation and route recognition task. Demographics of the blind participants are summarized in Table S1. During route navigation, participants actively learned to navigate through either of two virtual routes that were presented via the TDU (Fig. 1), by using the arrow keys of a keyboard. At the end of each training day, participants were asked to draw the routes, to verify that they had encoded a cognitive map. In the route recognition (passive) task, the computer program guided the participants automatically through the routes, and they then had to indicate which of the two routes had been presented. The results of the behavioral study are shown in Fig. 2. Performance on day 1 did not differ between blind and sighted participants. In both groups, performance improved steadily over the course of the four training days [F(3,57) = 167.8, P < 0.0001 and F(3,57) = 21.7, P < 0.001 for, respectively, the route navigation and recognition tasks; mixed-effects ANOVA]. There was no overall difference in performance between the blind and sighted participants [F(1,19) = 1.39, P > 0.05 and F(1,19) = 1.57, P > 0.05 for, respectively, the route navigation and recognition tasks]. However, when considering only the results at the end of the training session, blind participants outperformed the blindfolded sighted controls [t(19) = 2.92, P < 0.01 and t(19) = 4.65, P < 0.001; unpaired Student's t test] (Fig. 2 A and B). Fig. 2C shows examples of the drawings of the routes by two blind and two sighted subjects. As illustrated, the drawings became more precise over time, and by the end of the fourth training day, all participants had formed an accurate cognitive map of the two routes.
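The end-of-training group comparison above uses an unpaired Student's t test. As a minimal sketch of that analysis, the snippet below runs the same test on synthetic per-subject accuracy scores; the real per-subject values are not reported in the text, so the numbers here are invented purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical end-of-training accuracy scores (% correct); invented
# for illustration, not the authors' data.
blind = rng.normal(95, 3, size=10)    # congenitally blind group
sighted = rng.normal(88, 4, size=10)  # blindfolded sighted controls

# Unpaired Student's t test for two independent groups
t, p = stats.ttest_ind(blind, sighted, equal_var=True)
df = len(blind) + len(sighted) - 2
print(f"t({df}) = {t:.2f}, p = {p:.4f}")
```

With real data, the per-subject scores for each group would simply replace the two arrays.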

Fig. 1.

(A) Experimental set-up showing the TDU and the electrode array. (B) The two virtual routes that were used in the route navigation and route recognition tasks. The position of the subject in the trail and the end of the trail are represented by, respectively, the white dot and the asterisk. At any given time, only a part of the routes was “visible” to participants, as illustrated by snapshots of the spatial layout within three route segments (indicated by the numbers 1–3). During route navigation, participants actively learned to navigate through either of two virtual routes that were presented via the TDU, by using the arrow keys of a keyboard. Both routes were presented 15 times each training day. In the route recognition (passive) task, the computer program guided the participants automatically through the routes. They then had to indicate which of the two routes had been presented.

Fig. 2.

Behavioral results on the navigation tasks of congenitally blind (CB) and blindfolded sighted control (SC) subjects. (A) Percentage of correct responses in the route navigation task. (B) Percentage of correct responses on the route recognition task. Performance during days 1–4 was measured outside the scanner, whereas the results of day 5 show performance during the fMRI data acquisition. (C) Examples of actual drawings of the routes by two blind (cb1, cb2) and two blindfolded sighted (sc1, sc2) participants at the end of training days 1–3. Participants developed a more accurate mental representation of the spatial layout of the routes over time.

After behavioral training, subjects participated in a functional MRI (fMRI) study during which they repeated the passive route recognition task while positioned inside the scanner. Behavioral performance during fMRI was not different between the two groups, with 94 ± 4% and 95 ± 4% correct responses for, respectively, blind and sighted participants (Fig. 2B). Despite similar behavioral performance, the fMRI data revealed important group differences in the activation patterns. During route recognition, congenitally blind subjects showed increased blood oxygen level–dependent (BOLD) responses in the right parahippocampus, superior and inferior posterior parietal cortex, precuneus, anterior cingulate cortex, anterior insula, dorsolateral prefrontal cortex, and cerebellum. In addition, blind participants also activated large parts of their visual cortex, including the cuneus, inferior, middle, and superior occipital gyri, and fusiform gyrus (Fig. 3A and Table S2). In sharp contrast, blindfolded sighted controls did not show task-dependent BOLD signal increases in the parahippocampus or in any region of the visual cortex, even when using a more lenient statistical threshold (P < 0.05, uncorrected). However, they activated superior and inferior posterior parietal cortex, precuneus, anterior cingulate cortex, anterior insula, caudate nucleus, and cerebellum. Blindfolded sighted control subjects also activated more frontal areas not seen in blind subjects (Fig. 3B). A direct group comparison of the activation patterns confirmed that blind subjects more strongly activated the right parahippocampus and the occipital cortex (Table S3).

Fig. 3.

Brain activation patterns during route recognition using the TDU or a visual control paradigm. Red and yellow voxels represent clusters of significant BOLD signal increases [P < 0.01; false discovery rate (FDR)-corrected] during the route recognition compared with random noise presentation, superimposed on cortical flatmaps. (A) Results of blind participants, showing activation of occipital and posterior parietal cortices, precuneus, fusiform gyrus, and right parahippocampus during route recognition with the TDU. We also measured BOLD signal increases in the anterior insula and the prefrontal cortex bilaterally. (B) Blindfolded sighted control subjects did not activate the parahippocampus or occipital cortex, but they activated the posterior parietal cortex and the precuneus and showed a more widespread activation of the prefrontal cortex. (C) Sighted control subjects performing the route recognition task visually showed strong bilateral BOLD increases in the occipital and superior parietal cortices, the precuneus, fusiform gyrus, and right parahippocampus. BOLD increases in the prefrontal cortex were less extended than during the electro-tactile version of the task. Cu, cuneus; FG, fusiform gyrus; Ins, insula; IFG, inferior frontal gyrus; IPL, inferior parietal lobule; ITG, inferior temporal gyrus; LG, lingual gyrus; LO, lateral occipital; MFG, middle frontal gyrus; MTG, middle temporal gyrus; Orb, orbital gyrus; Paracent, paracentral gyrus; Pericalc, pericalcarine sulcus; Precent, precentral gyrus; Precun, precuneus; Postcent, postcentral gyrus; SFG, superior frontal gyrus; STG, superior temporal gyrus; SPL, superior parietal lobule; SMG, supramarginal gyrus.

In the second experiment, we wished to demonstrate that the areas activated by the blind participants in experiment 1 are the same as those activated during visually based navigation in normal sighted subjects. To this end, we trained another group of 10 sighted controls in the same navigational task but without blindfolding (i.e., under full vision). Participants did not use the TDU in this experiment, and stimuli were presented visually on a computer screen. Here, the fMRI posttraining data show that visual route recognition activates a network highly similar to that observed during tactile route recognition in blind participants, including the right parahippocampus, superior and inferior parietal cortex, precuneus, cuneus, superior occipital cortex, fusiform gyrus, anterior cingulate cortex, anterior insula, dorsolateral prefrontal cortex, and cerebellum (Fig. 3C and Table S4). The similarity of the activation patterns during route recognition in the blind using the TDU and the sighted resolving the task visually was further substantiated by the results of a conjunction analysis, which showed common activations in the superior and inferior parietal lobule, precuneus, cuneus, ventral occipital cortex, and right parahippocampus (Fig. 4 and Table S5). In sharp contrast, a conjunction analysis of the results obtained in blind and blindfolded sighted controls did not show activity in visual cortex or parahippocampus (Fig. S1), further supporting the specificity of the occipital and parahippocampal activation in the former.

Fig. 4.

Conjunction analysis of blind and sighted subjects performing the route recognition task respectively with the TDU or visually. Results are shown on axial planes. The color map shows clusters of significant activation (P < 0.001; uncorrected) superimposed on the average brain of the participants, projected in MNI space. Numbers refer to the dorsoventral orientation of the slice in MNI space. Both groups commonly activated superior parietal cortex (slices 54, 42), superior occipital cortex (slices 30, 18), cuneus (slice 6), and parahippocampus (slice −6).

Discussion

The present study demonstrates the neural pathways involved in navigation in subjects lacking vision from birth. Although there is a vast literature on cross-modal plasticity in congenital blindness (20), the neural correlates of navigation in blindness have barely been addressed. The large majority of studies on navigation in blindness are behavioral in nature, using human-size corridors or mazes to examine behavioral and cognitive strategies. One of the rare functional brain imaging studies to date asked blind participants to imagine the kinesthetic aspects of walking and running (21). Although interesting, such tasks lack an explicit navigational component. Another brain imaging study was purely anatomical in nature, correlating behavioral performance in a man-size maze with hippocampal volumes in the blind (3).

It has been argued that blind subjects rely more on idiothetic cues and echolocation for navigation (5), suggesting they may use a different cortical network. The present data show, however, that during a spatial navigation task with a visual-to-tactile sensory substitution device, congenitally blind subjects recruit the posterior parahippocampus and posterior parietal and ventromedial occipito-temporal cortices, areas that are involved in spatial navigation under full vision. This suggests that cross-modal plasticity permits the recruitment of the same cortical network used for spatial navigation in sighted subjects. Of course, this does not exclude the possibility that blind subjects may recruit additional networks when resolving spatial tasks using explicit proprioceptive, vestibular, and echolocation cues.

Role of the Parahippocampus.

There is a vast literature indicating that besides the hippocampus, other brain structures, such as the precuneus, posterior parietal cortex, inferior occipital cortex, and parahippocampus, play an important role in spatial cognitive mapping. For instance, “place cells,” traditionally believed to exist exclusively in the hippocampus (6), are also found in the parahippocampus and in the parietal cortex (22). The parahippocampus in primates also contains “spatial view” cells (i.e., cells that respond when looking at a part of the environment) (23). Results from brain imaging studies in healthy humans invariably underscore the role of the parahippocampus in the learning or recall of topographical information. These studies have shown that the parahippocampus is involved in recognition of scenes, even when these are lacking any landmarks (12, 14–17). The hippocampus and parahippocampus may fulfill different roles in spatial navigation. For instance, a recent fMRI study showed that the parahippocampus is involved in egocentric spatial learning (24), whereas the hippocampus may be more involved in allocentric spatial representations (11, 25, 26). In our study, blind but not blindfolded sighted participants activated the right posterior parahippocampus during route recognition. This is in line with the results of brain imaging studies in sighted subjects during spatial navigation under full vision in virtual environments (9, 11, 12, 24, 27), during mental navigation of an old, known environment (28–31), and during visual scene processing (14, 15). We here show that the same area is activated in congenitally blind subjects when spatial information is provided through the tactile modality. Neuroanatomical studies in primates have found that the posterior parahippocampus receives widespread projections from sensory-specific and multimodal association cortices, providing it with unimodal visual, somatic, and auditory input, as well as multimodal inputs (32, 33).
The parahippocampus sends projections back to most areas from which it receives inputs (34). Studies in the macaque have further shown the existence of direct projections from prestriate ventral visual area V4 to the parahippocampus and also from dorsal regions of area V4, parietal lobe, and superior temporal sulcus (35). A recent diffusion tensor imaging tractography study in healthy humans confirmed connections between the parahippocampal gyrus and extrastriate occipital lobe via the lingual and fusiform gyri (36). We explain the parahippocampal activation in the blind subjects through its connections with caudal visual areas V4, TEO, and TE, or via areas 7a and LIP of the posterior parietal cortex (37).

Our data also show that sighted subjects use a different strategy to resolve the navigation task. Looking at the activation maps in Fig. 3, sighted subjects activated more frontal areas not seen in blind subjects, suggesting a stronger reliance on prefrontal decision-making strategies. This raises the question as to whether preexisting visual strategies interfere with the development of alternative strategies for navigation in the absence of vision. Future studies testing blind subjects who lost their vision later in life may provide clues to answer this question.

Other Activations.

The precuneus, posterior parietal cortex, and fusiform gyrus also play an important role in spatial cognition (9, 12, 13, 24, 28–31). Activation of the ventrolateral occipito-temporal cortex, including the parahippocampus, was reported in sighted subjects during landmark-centered judgment about object location, whereas superior parietal lobule, cuneus, precuneus, and superior and middle occipital gyri were activated by both allocentric and egocentric spatial tasks (12). We here show that the same areas are activated in congenitally blind subjects when spatial information is provided through the tactile modality. We explain the occipital activation by the strengthening of parieto-occipital connectivity in congenitally blind subjects (19, 38–40).

Absence of Hippocampal Activation.

Since the initial discovery of place cells in rodents (6), the hippocampus has been associated with the formation and storage of cognitive maps of the environment. Neuropsychological studies in patients with hippocampal lesions indicate severe spatial memory deficits and topographical disorientation (41–43). In addition, a large number of brain imaging studies showed hippocampal activation in spatial tasks (11, 12, 16, 25, 27, 29–31). We did not find increased hippocampal activity in our blind subjects during the route recognition task. This is unlikely to be due to the tactile nature of the task because sighted subjects performing the task visually also failed to activate the hippocampus. In addition, Save et al. (44) demonstrated the existence of place cells in the hippocampus in early blind rats, whose properties were very similar to those in sighted rats, suggesting that early vision is not required for normal firing of place cells. A possible explanation for the lack of hippocampal activation is that the hippocampus is only recruited during the initial formation of the cognitive map and not during its retrieval. This is supported by recent brain imaging data showing strong hippocampal activation during the early but not the late trials of a spatial navigation task (26, 45). We therefore explain the lack of hippocampal activation in our study by the extensive training subjects had undergone before scanning. This is in line with the results of Committeri et al. (12), who also failed to find hippocampal activation in both their egocentric and allocentric spatial tasks in subjects who had been exposed extensively to the spatial environment before the fMRI session.

Egocentric or Allocentric?

Goal-directed navigation can be accomplished by either egocentric or allocentric strategies (46). The former are based on the usage of idiothetic cues, such as head direction, eye or body movements, and vestibular signals, and hence do not depend on external references. In contrast, allocentric strategies make use of allothetic signals that are fixed to the environment itself or to individual objects. These require that the subject encodes the relationships between environmental landmarks, motion, and goal location. In contrast with egocentric frameworks, the location of objects within allocentric frameworks does not change when the subject moves in the environment (25). Results from brain imaging studies and studies in patients with brain lesions have suggested that the hippocampus supports allocentric processing, whereas the posterior parietal cortex and the precuneus are more involved in egocentric spatial representations (12, 14, 24, 25).

Although this study was not designed to disentangle the respective roles of egocentric and allocentric processes in cross-modal spatial navigation, we would like to briefly comment on this issue. Our route navigation and route recognition tasks can be solved using egocentric (e.g., a sequence of left–right turns) or allocentric strategies, or a combination of both. During the training phase, participants had to construct a spatial cognitive map of the routes from successive egocentric viewpoints in a tactile virtual environment. This led to the formation of an allocentric route representation, as witnessed by the route drawings (Fig. 2C), that participants could manipulate and compare with each other. Our “passive” task required participants to update their position using successive viewpoints and compare this with the route configurations stored in a cognitive map. It is therefore likely that participants continuously switched between allocentric and egocentric strategies, as is often done in real world navigation (25, 47).

Methodological Issues.

Because of the limited image resolution that can be achieved with the TDU, we used a simplified computerized maze. The routes consisted of sequences of three line segments of different lengths with 90° turns, without additional landmarks and with only one possible route to destination. The question is therefore whether this represents a truly demanding navigational task. We believe this was indeed the case. First, the task was performed using electrotactile input, which makes it much more difficult than tasks using visual stimuli. This is witnessed by the slow increase in performance scores over time (Fig. 2). Next, to correctly solve the route recognition task, the subjects had to construct a cognitive map of the two routes. Finally, the results of the visual control task confirm that the same navigational task performed under full vision activates a neural network commonly activated in navigational tasks (9, 11–17, 25, 26, 29–31).

In summary, our study shows that in the absence of vision since birth, navigation is mediated by the parahippocampus and visual areas, suggesting cross-modal plasticity in spatial coding. The theoretical implications of the present findings go well beyond those of spatial processing in congenitally blind subjects. Our findings suggest that visual experience is not necessary for the development of a spatial navigation network in the brain, because visual association cortical areas are capable of processing and interpreting spatial information carried by nonvisual sensory modalities.

Methods

Subjects.

Ten congenitally blind and 10 sex- and age-matched blindfolded sighted control subjects with normal or corrected-to-normal vision participated in the first experiment, using the TDU. Blindness was of peripheral origin in all cases. Demographics of the participants are summarized in Table S1. An additional group of 10 healthy, sighted control subjects under full vision participated in the second experiment, during which the route recognition task was presented visually. The study protocol was approved by the ethical committee of Copenhagen and Frederiksberg municipalities (ref. nr. KF 01 328723), and all subjects provided written informed consent.

Experimental Procedures.

Tongue display unit.

The tactile vision substitution system has been described elsewhere in detail (18, 19). In brief, it comprises the TDU (BrainPort, Wicab), a laptop computer with custom-made software and an electrode array (3 × 3 cm) consisting of 144 gold-plated contacts, each with a 1.55-mm diameter arranged in a 12 × 12 matrix (Fig. 1A). The subject navigates, with the help of the arrow keys of a keyboard, through a virtual route that is presented on the laptop (see below). The stimulus is sampled and reduced to the 12 × 12 resolution of the tongue display with an update rate of 14–20 frames/s.
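The sampling step described above reduces each visual frame to the 12 × 12 electrode grid. The sketch below illustrates one plausible way to do this by block-averaging and thresholding; the function name and the threshold are assumptions for illustration, as the BrainPort firmware's actual resampling method is not specified in the text.

```python
import numpy as np

def to_tdu_frame(image: np.ndarray, grid: int = 12) -> np.ndarray:
    """Reduce a 2-D grayscale image (values in [0, 1]) to a grid x grid
    array of on/off electrode states by block-averaging and thresholding.
    Illustrative sketch only, not the actual BrainPort algorithm."""
    h, w = image.shape
    # Crop so the image tiles evenly into grid x grid blocks
    h2, w2 = h - h % grid, w - w % grid
    blocks = image[:h2, :w2].reshape(grid, h2 // grid, grid, w2 // grid)
    means = blocks.mean(axis=(1, 3))        # average intensity per block
    return (means > 0.5).astype(np.uint8)   # 1 = active electrode

frame = np.zeros((120, 120))
frame[:, :10] = 1.0  # bright left edge, e.g. the left wall of a corridor
print(to_tdu_frame(frame))  # leftmost electrode column active
```

At the stated update rate of 14–20 frames/s, this reduction would run on every captured frame before the pattern is delivered to the tongue.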

Route navigation and route recognition using the TDU.

In experiment 1, we first trained 10 congenitally blind and 10 blindfolded sighted control subjects in an active (route navigation) and passive (route recognition) navigational task during 4 consecutive d. During route navigation, participants actively learned to navigate through either of two virtual routes (named route 1 and route 2) that were presented via the TDU (Fig. 1B). The two routes consisted of simple sequences of three line segments of different lengths, together with two 90° (left or right) turns. No landmarks were added, and only one route to the destination was possible. The two routes were presented randomly, and at the beginning of each trial participants were informed whether route 1 or route 2 was presented. Participants used the arrow keys of a keyboard to move through the route. The edges of the routes were indicated by active electrode contacts. Hitting the edge or making a wrong turn was counted as an error, which returned the participant to the start position. Because the routes were not previewed before the experiment, subjects had to learn them by trial and error. During a trial, the tongue was permanently stimulated, irrespective of whether the arrow keys were pressed, but the spatial layout of the set of activated electrodes depended on the route segment the participant had reached at a particular time. Active electrodes on the edges of the TDU array indicated the walls of the corridor, whereas an active electrode in the center of the anterior part of the TDU indicated the position of the subject within the route. Pressing one of the arrow keys made the subject move forward or sideward by two pixels and consequently led to a change in the spatial configuration of the active electrode contacts. A route turning leftward would result in fewer active contacts at the left edge of the TDU, whereas a route turning rightward would result in fewer active electrodes on the right edge of the grid.
There was no minimum time interval set between two successive key presses. Only a part of the route was “visible” at any particular time (Fig. 1, Insets), meaning the participants had to actively construct a mental image of the route they navigated through. Both routes were presented 15 times per training day. At the end of each training day, participants were asked to draw the routes with pencil and paper, to verify that they had encoded a cognitive map. After the active route navigation task, subjects participated in a (passive) route recognition task. During this task, the computer program guided the participants through the routes, drawing the pattern automatically on the tongue. We also presented a scrambled route that consisted of the same number of pixels as the real routes but lacking any geometrical information. Subjects made no key presses during the passive condition. They then had to indicate which of the two previously learned routes (route 1 or 2) or the scrambled route had been presented. As in the active task, both routes were randomly presented 15 times, whereas the scrambled route was presented 30 times during each training day.
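The trial logic of the active task described above can be sketched as a simple discrete maze: a route is a sequence of (direction, length) segments, each key press advances the subject one step, and any off-route move counts as an error and resets the trial to the start. This is purely illustrative, not the authors' software; step size and coordinates are assumptions.

```python
# Map key names to (dx, dy) steps on a grid; y decreases going "up".
STEPS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def route_cells(segments, start=(0, 0)):
    """Expand (direction, length) segments into the list of on-route cells."""
    x, y = start
    cells = [(x, y)]
    for direction, length in segments:
        dx, dy = STEPS[direction]
        for _ in range(length):
            x, y = x + dx, y + dy
            cells.append((x, y))
    return cells

def navigate(segments, key_presses, start=(0, 0)):
    """Apply key presses; an off-route press is an error that resets
    the subject to the start position, as in the task described above."""
    cells = set(route_cells(segments))
    pos, errors = start, 0
    for key in key_presses:
        dx, dy = STEPS[key]
        nxt = (pos[0] + dx, pos[1] + dy)
        if nxt in cells:
            pos = nxt
        else:           # "hit the wall": count error, return to start
            errors += 1
            pos = start
    return pos, errors

# A route with three segments and two 90-degree turns, as in the paper
route1 = [("up", 3), ("right", 2), ("up", 4)]
print(navigate(route1, ["up", "up", "left", "up"]))  # one wrong turn
```

A wrong turn midway sends the subject back to the start, so repeated runs of key presses trace out the trial-and-error learning described above.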

Visual navigation task.

In the second experiment, we trained another group of 10 sighted subjects in the same navigational task under full vision. The same routes were used as in the first experiment with the TDU, but this time the routes were presented visually. As in the tactile route recognition task, participants were first trained outside the scanner in the route navigation and recognition task. They sat in front of a computer screen that showed a part of the route to be navigated. The routes were defined by green dots, and the participant's current position was represented by a green flashing dot. In the active (route navigation) condition, participants moved forward through one of two different routes by using the arrow keys of a keyboard. Touching a wall or making a wrong turn was counted as an error and caused the participant to return to the starting position. Each route was presented 15 times. At the end of the training session, participants were asked to draw the routes. In the passive (route recognition) task, the computer program navigated the participants through the routes, and they subsequently had to decide whether route 1 or 2 had been presented. All subjects learned both tasks with an accuracy of >90% correct responses. After the training, subjects repeated the route recognition task inside the MRI scanner. The routes were back-projected via a screen mounted at the rear end of the magnet bore and were visible to the subjects by reflection in the mirror mounted on the head coil.

fMRI data acquisition.

MRI was conducted on a 3-T scanner (Siemens Magnetom Trio) equipped with a standard single-channel birdcage head coil. BOLD-weighted fMRI scans were acquired using a whole-brain gradient-echo echo planar imaging (EPI) sequence with the following parameters: repetition time (TR) 2.49 s, echo time (TE) 30 ms, and flip angle 90°, using a 64 × 64 matrix with an in-plane resolution of 3 × 3 mm2. Each volume consisted of 42 slices each 3 mm thick, positioned parallel to the anterior commissure–posterior commissure line and obtained in an interleaved fashion beginning with the bottom slice. Each functional scan consisted of 282 EPI volumes for a total duration of 11 min, 42 s. Head motion was restricted by placement of comfortable padding around the participant's head. Recordings of pulse and respiration were used to form regressors that were entered as nuisance effects in the statistical parametric mapping (SPM) analysis, along with modeling of residual motion effects, as described in detail below. Two identical functional runs were performed during each fMRI examination.
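The stated scan duration follows directly from the acquisition parameters: 282 volumes at a TR of 2.49 s. A quick arithmetic check:

```python
tr = 2.49        # repetition time, seconds
n_volumes = 282  # EPI volumes per functional scan

total = tr * n_volumes            # 702.18 s
minutes, seconds = divmod(total, 60)
print(f"{int(minutes)} min, {seconds:.0f} s")  # -> 11 min, 42 s
```

This reproduces the reported run length of 11 min, 42 s (the 0.18 s remainder is rounded in the text).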

In the fMRI study, subjects repeated the previously learned passive route recognition task. We opted for the route recognition instead of the active route navigation task to avoid interference with motor planning and output. We used a block design paradigm, during which either one of the two previously learned routes or a scrambled route (control task) was presented. Each block lasted 12 s and was repeated 15 times for each of the two routes and 30 times for the scrambled route condition. A time interval of 3 s separated two successive blocks, during which participants pressed a key to signal whether previously learned route 1 or 2 or a scrambled route had been presented.

fMRI data analysis.

fMRI image processing and statistical analysis were performed with SPM5 (Wellcome Department of Imaging Neuroscience, University College London). The functional images were first corrected for head movements and then spatially normalized to the standard Montreal Neurological Institute (MNI) EPI template, resampled to 3-mm isotropic voxel size, and spatially smoothed using an isotropic Gaussian kernel of 6 mm full width at half maximum. High-pass filtering was applied to reduce the effect of slow signal drifts, and temporal autocorrelation was compensated by “prewhitening” the data using a first-order autoregressive model (48). We used a conventional approach to estimate the effect associated with the experimental design on a voxel-by-voxel basis using the general linear model formulation of SPM5. To correct for the structured noise induced by respiration and cardiac pulsation, we included RETROICOR (RETROspective Image-based CORrection method) nuisance covariates in the design matrix (49). We also included 24 regressors to remove residual movement artifacts with spin history effects (50, 51). Linear contrasts were used to test the effects of interest: route recognition vs. random dots. After the single-subject analyses, we performed random-effect analyses at the group level using the individual contrast estimates. The significance level was set to P < 0.01, FDR-corrected for multiple comparisons. For direct statistical comparison of activation maps in blind and blindfolded sighted controls, we tested for significant activation within areas of interest based on previous studies, including the hippocampus, parahippocampus, ventral visual cortex, cuneus, precuneus, and posterior parietal cortex (12, 22, 25). For each area, we corrected the peak activation voxel for multiple comparisons within a 10-mm radius sphere using Gaussian random field theory (52).
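The core of this pipeline (a voxelwise general linear model with drift and nuisance regressors, followed by AR(1) prewhitening and a linear contrast) can be illustrated on a synthetic single-voxel time series. This is a simplified NumPy sketch of the generic technique, not SPM5's actual implementation; all signal amplitudes and regressor choices are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic single-voxel time series: 282 scans, as in the study's runs.
n_scans = 282
task = np.zeros(n_scans)
task[(np.arange(n_scans) // 6) % 2 == 1] = 1.0   # crude on/off boxcar

# Design matrix: task regressor, low-frequency DCT drift terms (a stand-in
# for high-pass filtering), and an intercept. Nuisance columns such as
# RETROICOR or motion regressors would be appended the same way.
t = np.arange(n_scans)
dct = np.column_stack(
    [np.cos(np.pi * k * (2 * t + 1) / (2 * n_scans)) for k in range(1, 4)]
)
X = np.column_stack([task, dct, np.ones(n_scans)])

# Simulated data: task effect of 2.0 plus drift plus white noise.
y = 2.0 * task + dct @ np.array([5.0, 3.0, 1.0]) + rng.normal(0, 1, n_scans)

# Initial OLS fit, then estimate the lag-1 autocorrelation of the residuals
# (the "first-order autoregressive model" used for prewhitening).
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
rho = (resid[1:] @ resid[:-1]) / (resid @ resid)

# Whitening transform y_t - rho*y_{t-1}; drop the first scan for simplicity.
yw = y[1:] - rho * y[:-1]
Xw = X[1:] - rho * X[:-1]
beta_w = np.linalg.lstsq(Xw, yw, rcond=None)[0]

# Linear contrast on the task regressor (t statistic).
c = np.zeros(X.shape[1]); c[0] = 1.0
rw = yw - Xw @ beta_w
dof = Xw.shape[0] - Xw.shape[1]
sigma2 = (rw @ rw) / dof
var_c = sigma2 * (c @ np.linalg.pinv(Xw.T @ Xw) @ c)
t_stat = (c @ beta_w) / np.sqrt(var_c)
print(f"beta = {beta_w[0]:.2f}, t = {t_stat:.1f}")
```

In a real analysis this model is estimated at every voxel, the contrast images are carried to a group-level random-effects test, and peak voxels are corrected within the regions of interest using random field theory.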

Structural MRI.

For each participant, we acquired a magnetization prepared rapid acquisition gradient echo (MPRAGE) scan with a voxel dimension of 1 × 1 × 1 mm³, field of view of 256 mm, matrix 256 × 256, TR 1540 ms, TE 3.93 ms, inversion time 800 ms, and a flip angle of 9°.

Supplementary Material

Supporting Information

Acknowledgments

We thank Drs. Paul Cumming and Albert Gjedde for critical reading of the manuscript. This work was supported by the Lundbeck Foundation (R.K.), the Danish Medical Research Council (M.P. and R.K.), and the Harland Sanders Chair in Visual Sciences, Canada (M.P.). D.R.C. is supported by a doctoral fellowship from the Canadian Institutes of Health Research.

Footnotes

The authors declare no conflict of interest.

This article is a PNAS Direct Submission.

This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1006199107/-/DCSupplemental.

References

  • 1. McNaughton BL, Chen LL, Markus EJ. “Dead reckoning,” landmark learning, and the sense of direction: A neurophysiological and computational hypothesis. J Cogn Neurosci. 1991;3:190–202. doi: 10.1162/jocn.1991.3.2.190.
  • 2. Colgin LL, Moser EI, Moser MB. Understanding memory through hippocampal remapping. Trends Neurosci. 2008;31:469–477. doi: 10.1016/j.tins.2008.06.008.
  • 3. Fortin M, et al. Wayfinding in the blind: Larger hippocampal volume and supranormal spatial navigation. Brain. 2008;131:2995–3005. doi: 10.1093/brain/awn250.
  • 4. Loomis JM, et al. Nonvisual navigation by blind and sighted: Assessment of path integration ability. J Exp Psychol Gen. 1993;122:73–91. doi: 10.1037//0096-3445.122.1.73.
  • 5. Thinus-Blanc C, Gaunet F. Representation of space in blind persons: Vision as a spatial sense? Psychol Bull. 1997;121:20–42. doi: 10.1037/0033-2909.121.1.20.
  • 6. O'Keefe J, Dostrovsky J. The hippocampus as a spatial map. Preliminary evidence from unit activity in the freely-moving rat. Brain Res. 1971;34:171–175. doi: 10.1016/0006-8993(71)90358-1.
  • 7. Moser MB, Moser EI. Distributed encoding and retrieval of spatial memory in the hippocampus. J Neurosci. 1998;18:7535–7542. doi: 10.1523/JNEUROSCI.18-18-07535.1998.
  • 8. Robertson RG, Rolls ET, Georges-François P. Spatial view cells in the primate hippocampus: Effects of removal of view details. J Neurophysiol. 1998;79:1145–1156. doi: 10.1152/jn.1998.79.3.1145.
  • 9. Aguirre GK, Detre JA, Alsop DC, D'Esposito M. The parahippocampus subserves topographical learning in man. Cereb Cortex. 1996;6:823–829. doi: 10.1093/cercor/6.6.823.
  • 10. Ekstrom AD, et al. Cellular networks underlying human spatial navigation. Nature. 2003;425:184–188. doi: 10.1038/nature01964.
  • 11. Maguire EA, et al. Knowing where and getting there: A human navigation network. Science. 1998;280:921–924. doi: 10.1126/science.280.5365.921.
  • 12. Committeri G, et al. Reference frames for spatial cognition: Different brain areas are involved in viewer-, object-, and landmark-centered judgments about object location. J Cogn Neurosci. 2004;16:1517–1535. doi: 10.1162/0898929042568550.
  • 13. Epstein RA. Parahippocampal and retrosplenial contributions to human spatial navigation. Trends Cogn Sci. 2008;12:388–396. doi: 10.1016/j.tics.2008.07.004.
  • 14. Epstein R, Kanwisher N. A cortical representation of the local visual environment. Nature. 1998;392:598–601. doi: 10.1038/33402.
  • 15. Epstein RA, Parker WE, Feiler AM. Where am I now? Distinct roles for parahippocampal and retrosplenial cortices in place recognition. J Neurosci. 2007;27:6141–6149. doi: 10.1523/JNEUROSCI.0799-07.2007.
  • 16. Iaria G, Chen JK, Guariglia C, Ptito A, Petrides M. Retrosplenial and hippocampal brain regions in human navigation: Complementary functional contributions to the formation and use of cognitive maps. Eur J Neurosci. 2007;25:890–899. doi: 10.1111/j.1460-9568.2007.05371.x.
  • 17. Maguire EA, Frith CD, Burgess N, Donnett JG, O'Keefe J. Knowing where things are: Parahippocampal involvement in encoding object locations in virtual large-scale space. J Cogn Neurosci. 1998;10:61–76. doi: 10.1162/089892998563789.
  • 18. Bach-y-Rita P, Kercel SW. Sensory substitution and the human-machine interface. Trends Cogn Sci. 2003;7:541–546. doi: 10.1016/j.tics.2003.10.013.
  • 19. Ptito M, Moesgaard SM, Gjedde A, Kupers R. Cross-modal plasticity revealed by electrotactile stimulation of the tongue in the congenitally blind. Brain. 2005;128:606–614. doi: 10.1093/brain/awh380.
  • 20. Merabet LB, Pascual-Leone A. Neural reorganization following sensory loss: The opportunity of change. Nat Rev Neurosci. 2010;11:44–52. doi: 10.1038/nrn2758.
  • 21. Deutschländer A, et al. Imagined locomotion in the blind: An fMRI study. Neuroimage. 2009;45:122–128. doi: 10.1016/j.neuroimage.2008.11.029.
  • 22. Whitlock JR, Sutherland RJ, Witter MP, Moser MB, Moser EI. Navigating from hippocampus to parietal cortex. Proc Natl Acad Sci USA. 2008;105:14755–14762. doi: 10.1073/pnas.0804216105.
  • 23. Rolls ET, Robertson RG, Georges-François P. Spatial view cells in the primate hippocampus. Eur J Neurosci. 1997;9:1789–1794. doi: 10.1111/j.1460-9568.1997.tb01538.x.
  • 24. Weniger G, et al. The human parahippocampal cortex subserves egocentric spatial learning during navigation in a virtual maze. Neurobiol Learn Mem. 2010;93:46–55. doi: 10.1016/j.nlm.2009.08.003.
  • 25. Burgess N, Maguire EA, O'Keefe J. The human hippocampus and spatial and episodic memory. Neuron. 2002;35:625–641. doi: 10.1016/s0896-6273(02)00830-9.
  • 26. Iaria G, Petrides M, Dagher A, Pike B, Bohbot VD. Cognitive strategies dependent on the hippocampus and caudate nucleus in human navigation: Variability and change with practice. J Neurosci. 2003;23:5945–5952. doi: 10.1523/JNEUROSCI.23-13-05945.2003.
  • 27. Ohnishi T, Matsuda H, Hirakata M, Ugawa Y. Navigation ability dependent neural activation in the human brain: An fMRI study. Neurosci Res. 2006;55:361–369. doi: 10.1016/j.neures.2006.04.009.
  • 28. Rosenbaum RS, Ziegler M, Winocur G, Grady CL, Moscovitch M. “I have often walked down this street before”: fMRI studies on the hippocampus and other structures during mental navigation of an old environment. Hippocampus. 2004;14:826–835. doi: 10.1002/hipo.10218.
  • 29. Ghaem O, et al. Mental navigation along memorized routes activates the hippocampus, precuneus, and insula. Neuroreport. 1997;8:739–744. doi: 10.1097/00001756-199702100-00032.
  • 30. Kumaran D, Maguire EA. The human hippocampus: Cognitive maps or relational memory? J Neurosci. 2005;25:7254–7259. doi: 10.1523/JNEUROSCI.1103-05.2005.
  • 31. Maguire EA, Frackowiak RS, Frith CD. Recalling routes around London: Activation of the right hippocampus in taxi drivers. J Neurosci. 1997;17:7103–7110. doi: 10.1523/JNEUROSCI.17-18-07103.1997.
  • 32. Jones EG, Powell TP. An anatomical study of converging sensory pathways within the cerebral cortex of the monkey. Brain. 1970;93:793–820. doi: 10.1093/brain/93.4.793.
  • 33. Blatt GJ, Pandya DN, Rosene DL. Parcellation of cortical afferents to three distinct sectors in the parahippocampal gyrus of the rhesus monkey: An anatomical and neurophysiological study. J Comp Neurol. 2003;466:161–179. doi: 10.1002/cne.10866.
  • 34. Van Hoesen GW. The parahippocampal gyrus. New observations regarding its cortical connections in the monkey. Trends Neurosci. 1982;5:345–350.
  • 35. Martin-Elkins CL, Horel JA. Cortical afferents to behaviorally defined regions of the inferior temporal and parahippocampal gyri as demonstrated by WGA-HRP. J Comp Neurol. 1992;321:177–192. doi: 10.1002/cne.903210202.
  • 36. Powell HW, et al. Noninvasive in vivo demonstration of the connections of the human parahippocampal gyrus. Neuroimage. 2004;22:740–747. doi: 10.1016/j.neuroimage.2004.01.011.
  • 37. Suzuki WA, Amaral DG. Perirhinal and parahippocampal cortices of the macaque monkey: Cortical afferents. J Comp Neurol. 1994;350:497–533. doi: 10.1002/cne.903500402.
  • 38. Fujii T, Tanabe HC, Kochiyama T, Sadato N. An investigation of cross-modal plasticity of effective connectivity in the blind by dynamic causal modeling of functional MRI data. Neurosci Res. 2009;65:175–186. doi: 10.1016/j.neures.2009.06.014.
  • 39. Kupers R, et al. Transcranial magnetic stimulation of the visual cortex induces somatotopically organized qualia in blind subjects. Proc Natl Acad Sci USA. 2006;103:13256–13260. doi: 10.1073/pnas.0602925103.
  • 40. Ptito M, Schneider FC, Paulson OB, Kupers R. Alterations of the visual pathways in congenital blindness. Exp Brain Res. 2008;187:41–49. doi: 10.1007/s00221-008-1273-4.
  • 41. Aguirre GK, D'Esposito M. Topographical disorientation: A synthesis and taxonomy. Brain. 1999;122:1613–1628. doi: 10.1093/brain/122.9.1613.
  • 42. Glikmann-Johnston Y, et al. Structural and functional correlates of unilateral mesial temporal lobe spatial memory impairment. Brain. 2008;131:3006–3018. doi: 10.1093/brain/awn213.
  • 43. Spiers HJ, Burgess N, Hartley T, Vargha-Khadem F, O'Keefe J. Bilateral hippocampal pathology impairs topographical and episodic memory but not visual pattern matching. Hippocampus. 2001;11:715–725. doi: 10.1002/hipo.1087.
  • 44. Save E, Cressant A, Thinus-Blanc C, Poucet B. Spatial firing of hippocampal place cells in blind rats. J Neurosci. 1998;18:1818–1826. doi: 10.1523/JNEUROSCI.18-05-01818.1998.
  • 45. Grön G, et al. Hippocampal activations during repetitive learning and recall of geometric patterns. Learn Mem. 2001;8:336–345. doi: 10.1101/lm.42901.
  • 46. Byrne P, Becker S, Burgess N. Remembering the past and imagining the future: A neural model of spatial memory and imagery. Psychol Rev. 2007;114:340–375. doi: 10.1037/0033-295X.114.2.340.
  • 47. Iglói K, Zaoui M, Berthoz A, Rondi-Reig L. Sequential egocentric strategy is acquired as early as allocentric strategy: Parallel acquisition of these two navigation strategies. Hippocampus. 2009;19:1199–1211. doi: 10.1002/hipo.20595.
  • 48. Friston KJ, et al. To smooth or not to smooth? Bias and efficiency in fMRI time-series analysis. Neuroimage. 2000;12:196–208. doi: 10.1006/nimg.2000.0609.
  • 49. Glover GH, Li TQ, Ress D. Image-based method for retrospective correction of physiological motion effects in fMRI: RETROICOR. Magn Reson Med. 2000;44:162–167. doi: 10.1002/1522-2594(200007)44:1<162::aid-mrm23>3.0.co;2-e.
  • 50. Friston KJ, Williams S, Howard R, Frackowiak RS, Turner R. Movement-related effects in fMRI time-series. Magn Reson Med. 1996;35:346–355. doi: 10.1002/mrm.1910350312.
  • 51. Lund TE, Madsen KH, Sidaros K, Luo WL, Nichols TE. Non-white noise in fMRI: Does modelling have an impact? Neuroimage. 2006;29:54–66. doi: 10.1016/j.neuroimage.2005.07.005.
  • 52. Worsley KJ, et al. A unified statistical approach for determining significant signals in images of cerebral activation. Hum Brain Mapp. 1996;4:58–73. doi: 10.1002/(SICI)1097-0193(1996)4:1<58::AID-HBM4>3.0.CO;2-O.
