Abstract
In congenital blindness (CB), tactile and auditory information can be reinterpreted by the brain to compensate for visual information through mechanisms of brain plasticity triggered by training. Visual deprivation does not cause a cognitive spatial deficit, since blind people are able to acquire spatial knowledge about the environment. This spatial competence takes longer to achieve but is eventually reached through training-induced plasticity. Congenitally blind individuals can further improve their spatial skills with the extensive use of sensory substitution devices (SSDs), either visual-to-tactile or visual-to-auditory. Using a combination of functional and anatomical neuroimaging techniques, our recent work has demonstrated the impact of spatial training with both visual-to-tactile and visual-to-auditory SSDs on brain plasticity, cortical processing, and the achievement of certain forms of spatial competence. Comparing the performance of CB and sighted people using several different sensory substitution devices in perceptual and sensorimotor tasks uncovered the striking ability of the brain to rewire itself during perceptual learning and to interpret novel sensory information even during adulthood. We discuss here the implications of these findings for helping blind people in navigation tasks and for increasing their access to both real and virtual environments.
Keywords: multisensory, spatial cognition, vision, touch (haptic/cutaneous/tactile/kinesthesia), sensory substitution, brain plasticity, congenital blindness, navigation
Introduction
Several different mechanisms influence the development of the congenitally blind brain. Neuroimaging techniques show that brain structures devoted to vision are greatly affected (Kupers and Ptito, 2014; Fine and Park, 2018; Singh et al., 2018), and that the extensive use of the remaining senses (e.g., touch and/or audition) helps blind people to develop a set of impressive skills in various cognitive tasks, probably due to the triggering of neural plasticity mechanisms (Schinazi et al., 2016). These enhanced behavioral performances are correlated with brain plasticity induced by the use of various types of SSDs (Chebat et al., 2018a). Brain modifications are triggered by sensory deprivation and later by the training of the other senses, for example through the use of SSDs to “perceive” visual information. We perceive our environment using all of our senses in parallel, creating a rich multisensory representation of space (Chebat, 2020), but how does the complete lack of vision impact spatial competence and spatial learning? In this paper, we review the plastic changes that occur in the brain of CB individuals that are triggered by SSD use.
Sensory Substitution Devices (SSDs)
SSDs translate visual cues into tactile or auditory information. SSDs consist of three components: a sensor, a processing unit that converts the visual cues using a specific code and algorithm, and a delivery system to transmit the tactile or auditory information. SSDs differ in terms of their respective approaches, codes or algorithms for capturing and sending information, and also in terms of their specific components, but they all aim to transmit visual information via another sense. For example, SSDs use different kinds of sensors to capture visual information, either from a camera (Bach-y-Rita et al., 1969; Meijer, 1992; Bach-y-Rita and Kercel, 2003; Ptito, 2005; Chebat et al., 2007a; Mann et al., 2011; see Figures 1A,F,I for images of the camera set-ups used for the TDU, EyeMusic, and vOICe) or from sonic (Kay, 1974), ultrasonic (Shoval et al., 1998; Hill and Black, 2003; Bhatlawande et al., 2012), or infrared sensors (Dunai et al., 2013; Maidenbaum et al., 2014c; Stoll et al., 2015). The means of delivering the information to the user can also vary greatly. In the case of the Tongue Display Unit (TDU) (Bach-y-Rita et al., 1969; Bach-y-Rita and Kercel, 2003; Figures 1A–C), the image captured by a camera is translated and coded onto an electro-tactile grid which “draws” an image on the tongue of the user (Figure 1C). In the case of the EyeCane (Maidenbaum et al., 2014c), distance information is received from an infrared sensor and delivered to the hand and ears through the frequency of vibrations or sounds (Figures 1D,E). The EyeMusic (Abboud et al., 2014; Figures 1F–H) and the vOICe (Meijer, 1992; Figure 1I) also rely on a camera for visual information, but the algorithm codes the images into sounds; in the case of the EyeMusic, different musical instruments code for different colors in the image.
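To make these coding schemes concrete, the sketch below (in Python with NumPy) illustrates the two general principles just described: downsampling a camera image onto a coarse electro-tactile grid, as in TDU-style devices, and sweeping an image column by column into sound, with pixel height mapped to pitch and brightness to loudness, as in vOICe-style devices. The grid size, frequency range and sweep duration are illustrative assumptions, not the specifications of any actual device.

```python
import numpy as np

def image_to_tactile_grid(image, rows=20, cols=20):
    """Downsample a grayscale image (2-D array, 0-255) onto a coarse
    electrode grid, one mean intensity per electrode (TDU-style principle;
    the 20 x 20 grid size is an illustrative assumption)."""
    h, w = image.shape
    image = image[:h - h % rows, :w - w % cols]          # crop to grid multiples
    blocks = image.reshape(rows, image.shape[0] // rows,
                           cols, image.shape[1] // cols)
    grid = blocks.mean(axis=(1, 3))                      # average per electrode
    return grid / (grid.max() + 1e-9)                    # normalized stimulation level

def image_to_sound_sweep(image, duration=1.0, fs=22050,
                         fmin=500.0, fmax=5000.0):
    """Sweep an image from left to right, converting each column into a short
    sound: row position -> frequency (top = high pitch), brightness -> loudness
    (vOICe-style principle; all numeric parameters here are assumptions)."""
    n_rows, n_cols = image.shape
    samples_per_col = int(duration * fs / n_cols)
    t = np.arange(samples_per_col) / fs
    freqs = np.logspace(np.log10(fmax), np.log10(fmin), n_rows)
    columns = []
    for col in range(n_cols):
        amps = image[:, col] / 255.0
        tone = (amps[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        columns.append(tone)
    return np.concatenate(columns)

# Toy usage: encode a random "camera frame" both ways.
frame = np.random.randint(0, 256, (64, 64)).astype(float)
tactile = image_to_tactile_grid(frame)        # 20 x 20 stimulation intensities
audio = image_to_sound_sweep(frame)           # ~1-second mono waveform
```

Real devices refine this with edge extraction, stereo panning for horizontal position, user-controlled zoom and so on, but the core idea of a fixed, learnable mapping between image coordinates and tactile or auditory dimensions is the same.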
Despite these differences, SSDs all use a form of code to translate visual information that must be actively integrated by the user. This process, called distal attribution (Auvray et al., 2005), requires the reinterpretation of what seems like random stimulation into a coherent, visual percept through sensorimotor feedback (Chebat et al., 2018a). This form of reinterpretation of visual information has often been likened to a kind of learned synesthesia (Ward and Wright, 2014). The use of these devices to transfer visual information, via the tactile, auditory or vibratory channels, coupled with complete congenital sensory deprivation, leads to training-induced recruitment of brain regions that were typically considered purely visual (Ptito, 2005; Amedi et al., 2007; Proulx et al., 2016). Although the phenomenological sensations reported by CB during the use of these devices are similar to vision (Chebat et al., 2018a), these devices cannot approximate the complexity and resolution of vision per se. Thus, the resulting sensations are very different from vision in the sighted, and cannot genuinely replace a missing sense for all of its functions (Moraru and Boiangiu, 2016). Nonetheless, SSD input recruits task-specific, sensory-independent regions according to the task being completed (Kupers et al., 2010a; Matteau et al., 2010; Ptito et al., 2012; Striem-Amit et al., 2012a, b; Abboud et al., 2015; Maidenbaum et al., 2018). SSDs have not become widespread in their general use by the blind population (Loomis et al., 2010; Elli et al., 2014), for various practical reasons (Chebat et al., 2018a). In order for an SSD to be widely accepted by a visually impaired public, it needs to meet several criteria: it must be versatile (usable for many tasks), easy to use, affordable, and worth the learning process in terms of the visual information it can afford in real time (Chebat et al., 2018a). From the point of view of navigation, several of these devices have great potential to improve the navigational competence of blind people and the strategies they use. We review these concepts in the following sections.
Sensory Deprivation, Brain Plasticity, Amodality and Spatial Cognition
A large part of the cortical mantle is dedicated to vision. In the macaque, about 55% of the entire cortex is in some way responsive to visual information, and in humans it is about 35%. This cortical space is by no means wasted in people who are blind from birth, and can be recruited in a variety of cognitive and spatial tasks using the remaining intact senses. Indeed, the recruitment of primary visual areas by other sensory modalities has been known for quite some time in CB (Kupers and Ptito, 2014). This process, known as amodality (Heimler et al., 2015; Chebat et al., 2018b), enables the recruitment of brain areas in a task-specific, sensory-independent fashion (Cohen et al., 1997). The recruitment of task-specific brain nodes for shapes (Ptito et al., 2012), motion (Saenz et al., 2008; Ptito et al., 2009; Matteau et al., 2010; Striem-Amit et al., 2012b), number-forms (Abboud et al., 2015), body shapes (Striem-Amit and Amedi, 2014), colors (Steven et al., 2006), word shapes (Striem-Amit et al., 2012a), faces (Likova et al., 2019), echolocation (Norman and Thaler, 2019), and tactile navigation (Kupers et al., 2010a; Maidenbaum et al., 2018) is thought to reflect mechanisms of brain plasticity (Fine and Park, 2018; Singh et al., 2018) underlying specific amodal recruitment (Ptito et al., 2008a; Chebat et al., 2018b; see Figure 2). The recruitment of these brain areas via SSDs not only shows that it is possible to supplement missing visual information, but also that the brain treats the SSD information as if it were real vision, in the sense that it tries to extract the relevant sensory information for each specific task being performed (i.e., motion, colors, navigation, and other tasks illustrated in Figure 2). How do brain plasticity and amodality influence spatial perception in people who are blind from birth? Since vision is quite important for active navigation (McVea and Pearson, 2009; Ekstrom, 2015; Jeamwatthanachai et al., 2019), how essential is it for the development of spatial abilities and the neural networks that support these abilities?
Animals can use visual, tactile (Pereira et al., 2007), olfactory (Save et al., 2000), vestibular (Etienne and Jeffery, 2004), or auditory (Ulanovsky and Moss, 2008) cues to navigate (Rauschecker, 1995). Indeed, prolonged visual impairment improves auditory spatial acuity in ferrets (King and Parsons, 2008). Humans, on the other hand, have mostly relied on the visual sense to navigate, and vision is considered the most adapted spatio-cognitive sensory modality (Foulke, 1982). Vision is a key tool for forming cognitive maps (Strelow, 1985). The more salient visual cues are in terms of color or shape, the more easily they are remembered, and the more precise our representation of the environment becomes (Appleyard, 1970). Vision is thus helpful for spatial representations, and also for obstacle avoidance. When approaching an obstacle, visual cues guide foot placement by constantly updating our distance from the obstacle (Patla, 1998; Patla and Greig, 2006) and allow us to adapt our locomotor behavior to the circumstances (Armand et al., 1998; MacLellan and Patla, 2006). Certain auditory and tactile spatial abilities are also compromised by the lack of visual experience (Zwiers et al., 2001; Gori et al., 2014). For example, CB individuals show auditory and proprioceptive spatial impairments (Cappagli et al., 2017), deficits in auditory spatial localization (Gori et al., 2014), and deficits in encoding spatial motion (Finocchietti et al., 2015). It is the lack of visual information that leads to differences in the normal development and alignment of cortical and subcortical spatial maps (King and Carlile, 1993; King, 2009) and in the appropriate integration of the input from the remaining sensory modalities (Cattaneo et al., 2008; Gori et al., 2014). In addition, most of the neuronal networks responsible for spatial tasks are volumetrically reduced compared to the sighted (Figure 3; Noppeney, 2007; Ptito et al., 2008b), including the posterior portion of the hippocampus (Chebat et al., 2007a; illustrated in Figure 6A), which suggests that the taxing demands of learning to navigate without vision drive hippocampal plasticity and volumetric changes in CB (Chebat et al., 2007a; Ptito et al., 2008a; Leporé et al., 2010). Furthermore, there is a cascade of modifications involving other non-visual brain structures that undergo anatomical (Yang et al., 2014), morphological (Park et al., 2009), morphometric (Rombaux et al., 2010; Tomaiuolo et al., 2014; Aguirre et al., 2016; Maller et al., 2016), and functional connectivity (Heine et al., 2015) alterations.
Despite these anatomical changes, visual experience is not necessary for the development of topographically organized maps of the face in the intraparietal cortex (Pasqualotto et al., 2018), or for the ability to represent the work space (Nelson et al., 2018). CB can form mental representations of the work space via haptic information as efficiently as sighted people, indicating that this ability does not depend on visual experience (Nelson et al., 2018). People who are congenitally blind are capable of avoiding obstacles (Kellogg, 1962; Chebat et al., 2011, 2020), integrating paths (Loomis et al., 2012), remembering locations (Chebat et al., 2015), and generating cognitive representations of space (Passini et al., 1990; Thinus-Blanc and Gaunet, 1997; Fortin et al., 2006; Chebat et al., 2018a, b). As a consequence, CB maintain the ability to recognize a familiar route and represent spatial information (Marmor and Zaback, 1976; Kerr, 1983; Passini et al., 1990; Loomis et al., 1993; Thinus-Blanc and Gaunet, 1997; Fortin et al., 2006; Leporé et al., 2009). Moreover, CB can even perform better than their blindfolded sighted counterparts in certain spatial tasks (Rieser et al., 1980; Passini et al., 1990; Loomis et al., 1993; Thinus-Blanc and Gaunet, 1997) and navigate by substituting vision with echolocation (Supa et al., 1944; Teng et al., 2012; Kolarik et al., 2017), tactile information (White et al., 1970; Kupers et al., 2010a; Chebat et al., 2011, 2015, 2017), or even proprioceptive information (Juurmaa and Suonio, 1975). Interestingly, neonatal visual deprivation does not impair the cognitive representation of space. Indeed, when visual information is substituted by the tactile or auditory modality via SSDs, similar performances are observed in CB and sighted participants (Chebat et al., 2018a). CB are therefore able to navigate efficiently using either audition (Maidenbaum et al., 2014b, c, d; Chebat et al., 2015; Bell et al., 2019) or touch (Chebat et al., 2007a, 2011, 2020; Kupers et al., 2010b). They can locate objects (Auvray and Myin, 2009; Chebat et al., 2011), navigate around them (Chebat et al., 2011), and even perform as well as (Chebat et al., 2015, 2017) or better than the sighted in certain spatial tasks (Loomis et al., 1993; Chebat et al., 2007b, 2015, 2017). These abilities can be further improved with training (Likova and Cacciamani, 2018). For instance, spatial knowledge can be acquired by CB individuals using sound cues while playing video games and transferred to the real world (Connors et al., 2014). Using the EyeCane (Figures 1D,E), congenitally blind participants can learn real and virtual Hebb-Williams mazes as well as their sighted counterparts using vision (Chebat et al., 2015; Figures 4A,B). When learning an environment in the virtual world, CB participants are able to create a mental map of this environment which enables them to solve the maze in the real world more efficiently, and vice versa. Moreover, they can transfer the acquired spatial knowledge from real to virtual mazes (and conversely) in the same manner as the sighted (Figures 4C,D; Chebat et al., 2017).
Taken together, these results indicate that even if certain specific spatial abilities are deficient in congenital blindness, the resulting navigational deficit remains purely perceptual (Vecchi et al., 2004; Amedi et al., 2005), and not, as previously suggested, cognitive (von Senden, 1932).
Navigation: Strategies for Acquiring Spatial Knowledge
Navigation is the ability to find our way in the environment (Sholl, 1996; Maguire et al., 1999) and requires several distinct, yet interrelated skills. Navigation is associated with different perceptual, cognitive and motor networks for path integration, wayfinding, and obstacle detection and avoidance. To navigate through the environment, animals and humans alike must translate spatial information into cognitive maps that they compare with an internal egocentric representation (Whitlock et al., 2008). Animals can navigate using olfactory cues (Holland et al., 2009), more complex egocentric strategies such as path integration based on proprioceptive cues (Etienne and Jeffery, 2004), or strategies relying on complex cognitive maps based on the spatial relations that objects have with one another (O’Keefe and Nadel, 1978). An allocentric frame of reference is an abstract, world-centered coordinate system in which locations are coded relative to one another, whereas an egocentric frame codes locations relative to the navigator’s own body (Klatzky, 1998).
Several types of labyrinths and mazes (Hebb and Williams, 1946; Barnes et al., 1966; Morris, 1984), along with many variants including virtual mazes (Shore et al., 2001), have been used to understand the process by which people solve spatial problems. The Morris water maze in particular has often been used (Cornwell et al., 2008) to test the navigational ability of human subjects and its neurological substrates (see section “Neural Correlates of Navigation”). There is, however, a large inter-subject variability in navigational performance (Wolbers and Hegarty, 2010), which can be attributed to the type and variety of strategies used when navigating. A navigational strategy is defined as a set of functional laws used in order to reach a spatial goal. It influences the way we interact with the environment and our representation of space. In other words, cognitive maps are largely dependent on the navigational strategies employed. Experienced navigators are usually better (Hegarty et al., 2006) since they employ more diverse strategies (Kato and Takeuchi, 2003; Blajenkova et al., 2005), and they are more flexible concerning the strategy to be adopted (Saucier et al., 2003). O’Keefe and Nadel (1978) identified different strategies in the behavior of rats exploring the environment in a Morris water maze. These strategies include the exploration of a novel environment as well as the detection of changes in an already familiar environment, and the ability to make detours or create shortcuts. In sighted humans, four major orientation strategies have been identified using the same paradigm (Kallai et al., 2005). They are characterized by a set of behaviors while looking for a platform in an open space: 1. Thigmotaxis (following the wall and approaching the platform); 2. Turning in circles (wandering around in circles); 3. Visual scans (turning in place to change the view-point); 4. Enfilade (accomplishing a quick scan and moving directly to the platform).
In blindness, research on orientation and mobility has identified a series of strategies used in navigation and the exploration of non-familiar environments, reminiscent of what has been reported in sighted people (Geruschat and Smith, 1997). Hill et al. (1993) asked blind and low-vision participants to explore an open space, find four objects and remember their locations. The movements of participants were recorded, quantified and categorized into different strategies. Some of these strategies apply specifically to people with low vision, and others to blind individuals. Strategies were assigned to five categories for blind participants (Schinazi et al., 2016): 1. Perimetry (searching for objects by moving alongside the walls, or the perimeter of the room); 2. Perimetry toward the center (moving in concentric circles from the periphery toward the center of the room); 3. Grid (exploring the space in a systematic grid-like fashion); 4. Cyclical (moving directly from one object to the next); 5. Perimetry to the object (moving from the periphery toward the object).
The differences in strategies employed by sighted and blind people reflect the restrictions imposed on navigation without sight; there is no fundamental difference between the strategies employed by the blind and the sighted. The only notable difference is that blind people cannot perform visual scans to find their targets and must rely on encoding stimuli in egocentric, rather than allocentric, coordinates (Röder et al., 2008; Pasqualotto and Proulx, 2012). Although these strategies encourage an egocentric representation of space, and visual experience facilitates allocentric representations (Pasqualotto et al., 2013), it is also possible to achieve an allocentric representation of space without vision. The last two strategies, cyclical and perimetry to the object, which require an allocentric representation, can only be used by blind people once they have become familiar with the environment using the other strategies.
Neural Correlates of Navigation
Sighted people often accomplish navigation tasks with the greatest ease, such as walking to a well-known destination or avoiding obstacles in a crowded hallway. This seemingly effortless behavior is in fact the result of the interaction of a complex network of brain regions integrating information from visual, proprioceptive, tactile and auditory sources, which is translated into the appropriate behavior (Tosoni et al., 2008). The brain takes into consideration information from various senses simultaneously and accomplishes a multitude of operations to enable someone to find their way or step over an obstacle. The hippocampal and parietal cortices are two regions traditionally viewed as being related to spatial tasks (Poucet et al., 2003), since they are involved in the processing (Rodriguez, 2010) and encoding (Whitlock et al., 2008) of high-level spatio-cognitive information, which is crucial for navigation.
The Hippocampus
The hippocampus is part of the medial temporal lobe and is implicated in spatial memory. In the adult monkey, a lesion to the hippocampus results in a deficit in spatial learning (Lavenex et al., 2006), and in humans, its enlargement predicts learning of a cognitive map (Schinazi et al., 2013), which confirms its functional role in navigation. When electrodes are implanted into the medial temporal lobe of rats moving freely in a maze, pyramidal cells in the hippocampus respond preferentially when the animal is in a precise place (O’Keefe and Dostrovsky, 1971). These place cells, which are mostly found in the posterior part of the hippocampus (O’Keefe and Speakman, 1987; Burgess and O’Keefe, 1996), are organized in functional units that represent space (O’Keefe and Nadel, 1978). They form the basis of cognitive maps of the environment. Space is mapped by a matrix of pyramidal cells that respond preferentially to places that have already been visited (O’Keefe and Burgess, 2005). These maps are allocentric (O’Keefe, 1991) and are anchored to the limits of the traversable space of the environment (O’Keefe and Burgess, 2005). These cells are also found in primates (Matsumura et al., 1999) and can represent the position of objects and landmarks in the environment (Rolls and Kesner, 2006). Place cells can also adjust their responses according to changes in the environment (Lenck-Santini et al., 2005) and the position of objects in a labyrinth (Smith and Mizumori, 2006). In addition, the prefrontal cortex (PFC) seems to be sensitive to places, like hippocampal cells (O’Keefe and Dostrovsky, 1971).
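As a toy formalization of the place field concept (a standard textbook idealization, not a model taken from the studies cited above), the firing rate of place cell $i$ can be written as a Gaussian bump centred on its preferred location $\mathbf{c}_i$:

$$r_i(\mathbf{x}) = r_{\max}\,\exp\!\left(-\frac{\lVert \mathbf{x}-\mathbf{c}_i\rVert^{2}}{2\sigma^{2}}\right),$$

where $\mathbf{x}$ is the animal’s current position and $\sigma$ sets the size of the place field; a population of such bumps, tiling the environment, constitutes an allocentric map.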
In addition to place cells, there are also populations of cells that code for heading direction (Taube et al., 1990; Oler et al., 2008). Path integration requires that the animal constantly update its heading as it moves along its trajectory. Cells coding for the animal’s direction are found in the subiculum (Taube et al., 1990), in the striatum (Wiener, 1993) and in the posterior parietal cortex (Chen et al., 1994). These cells form a sort of internal compass that allows the animal to monitor its direction while traveling.
The Parahippocampal Complex
The human parahippocampus is composed of the entorhinal and perirhinal cortex. This structure surrounds the hippocampus, and the entorhinal cortex is one of the important sources of projections to the hippocampus. It is also implicated in navigation (Aguirre et al., 1996). The entorhinal cortex corresponds to Brodmann area 28 and is situated alongside the rhinal sulcus. Grid cells (Hafting et al., 2005), recorded in the dorsal part of the entorhinal cortex, respond preferentially in an organized way and code the environment in the form of a grid. They have receptive fields that are sensitive to regularly spaced parts of the environment, like the nodes of a grid. In contrast to the place cells of the hippocampus, entorhinal grid cells code the environment in a geometric fashion (Moser et al., 2008). The hippocampus and the entorhinal cortex cooperate to allow navigation, and lesions of this system perturb that function (Parron et al., 2006). Indeed, sighted human patients with lesions to the parahippocampus are incapable of learning a new route (Hublet and Demeurisse, 1992; Maguire, 2001). In fact, a case study demonstrated that a lesion to the hippocampus mostly affects the allocentric representation of a path (Holdstock et al., 2000). The parahippocampal area is also involved in the recognition of visual scenes used to navigate (Epstein and Kanwisher, 1998; Epstein et al., 2007). When images of visual scenes are presented to participants in an fMRI scanner, blood flow increases in the parahippocampus, which led to this region being named the parahippocampal place area (PPA).
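To make the contrast between hippocampal place fields and entorhinal grid fields concrete, a common idealization (again a textbook sketch, not a formula taken from the cited recordings) describes a grid cell’s firing rate as the sum of three cosine gratings whose wave vectors differ in orientation by 60°:

$$g(\mathbf{x}) \propto \sum_{j=1}^{3}\cos\!\big(\mathbf{k}_j\cdot(\mathbf{x}-\mathbf{x}_0)\big), \qquad \angle(\mathbf{k}_j,\mathbf{k}_{j+1}) = 60^{\circ},$$

which yields a periodic, triangular lattice of firing fields covering the entire environment, whereas the Gaussian place field above peaks at essentially a single location.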
It was later discovered that cells sensitive to places are also found in the retrosplenial cortex (RS) (Epstein, 2008). Although the RS and the PPA are both sensitive to the recognition of visual scenes for navigation, they have complementary, yet different roles (Epstein et al., 2007). The PPA appears to be more involved in the recognition of scenes, namely the representation of a particular scene during navigation, whereas the retrosplenial cortex serves to situate that scene in the environment. This type of scene recognition is used during navigation to link the viewed scene (an egocentric representation) to a representation of that place on a map (an allocentric one). The interaction of these two zones during navigation could therefore serve to transform egocentric information about the environment into an allocentric representation (Epstein, 2008). Landmarks, which are so important for the formation of cognitive maps, are thus coded by the parahippocampus so as to be recognized in their context, and by the retrosplenial cortex so as to be situated in space.
The Parietal Cortex
The parietal cortex supports several different functions. The anterior part of the parietal cortex is responsible for the integration of somatosensory information (Tommerdahl et al., 2010), and the posterior part (PPC) is implicated in the multimodal integration of spatial information (Cohen, 2009), which is used to explore personal space (Mountcastle et al., 1975). The PPC is also involved in spatial navigation (Seemungal et al., 2008). Lesion studies of the parietal cortex in rodents (King and Corwin, 1993) and primates (Weniger et al., 2009) demonstrate deficits in the processing of egocentric information: animals cannot integrate a path (Save et al., 2001). The PPC is part of the dorsal visual stream (Mishkin et al., 1983), and enables the perception of motion and the planning of our own movements (Goodale and Milner, 1992). The transformation of an allocentric representation into a representation centered on the self, used to plan our movements in space, takes place in the PPC (Buneo and Andersen, 2006). In monkeys, neural activity in the parietal cortex is sensitive to the direction of a learned trajectory (Crowe et al., 2004a), and these cells are activated when the animal tries to solve a maze (Crowe et al., 2004b). A recent model of the role of the parietal cortex suggests that it interacts with the hippocampus to select the most appropriate route between two points (planning), and produces an egocentric representation of the environment to guide movement between those two points (execution) (Nitz, 2009). Moreover, the parietal cortex interacts with the frontal cortex for planning and decision making.
Clinical studies also show the importance of the parietal cortex in navigation and in spatial representation in general (De Renzi, 1982a, b). Lesions of parietal regions in humans can lead to spatial disorientation (Hublet and Demeurisse, 1992), meaning an inability to find one’s way in the environment, and on some occasions even to spatial (Vallar and Calzolari, 2018) or personal neglect (Committeri et al., 2018). fMRI studies showed that the parietal cortex is activated at multiple stages of the navigation process (Spiers and Maguire, 2006). Medio-parietal regions play an important role in analyzing movement in immediate space, and parietal regions play a role in the planning of movement in space that is not visually accessible (Spiers and Maguire, 2006). This explains why lesions in the parietal lobe interfere with movement in personal space (spatial neglect) as well as in navigational space (topographical disorientation). Studies using tactile mazes found that the parietal cortex is essential for the acquisition of spatial memory and the planning of movement (Saito and Watanabe, 2006). Indeed, in this task, participants use their parietal cortex only during the goal-encoding phase of the task, that is, encoding the location of the exit and planning the movement to reach it.
Neural Activity According to the Type of Navigation Strategy
fMRI studies have shown that the human hippocampus is implicated in navigation (Ghaem et al., 1997). When participants try to solve a maze while in the scanner, the recorded activity is stronger in the right hippocampus (Maguire et al., 1997; Gagnon et al., 2012). Many studies have implicated the hippocampus in the topographic memory of places (Burgess et al., 2002) and in allocentric representations (O’Keefe, 1991; Holdstock et al., 2000). One study demonstrated that the modulation of the interaction between the hippocampus and frontal or parietal regions depends on the type of strategy used in navigation (Mellet et al., 2000). Indeed, it has been confirmed that cortical activity in navigation tasks depends on the abilities and strategies of participants (Ohnishi et al., 2006). In addition, the cerebellum has also been linked to navigational tasks (Rondi-Reig et al., 2014).
There are also differences between men and women in the strategies used to navigate (Grön et al., 2000). Men and women do not employ the same strategies when navigating, and men generally perform better than women (Astur et al., 1998). These differences are attributable to the fact that men employ strategies that are mostly allocentric, whereas women use more egocentric strategies to navigate (Sandstrom et al., 1998). BOLD responses differ depending on whether mental navigation relies on an allocentric map or on an egocentric viewpoint of a route (Mellet et al., 2000). Indeed, positron emission tomography (PET) shows that the right hippocampus and the fronto-parietal network are recruited for both egocentric and allocentric representations (Galati et al., 2000; Zaehle et al., 2007). The PPA is activated bilaterally only in egocentric tasks. Using fMRI, different activations for egocentric and allocentric navigation have also been found, but with certain nuances (Shelton and Gabrieli, 2004). In an fMRI study, Holdstock et al. (2000) reported that the hippocampus is more activated by allocentric tasks, confirming previous data reported in humans and animals (O’Keefe, 1991). In this study, the authors showed that a parietal network is involved in navigation in both conditions, but that frontal involvement is present only in the egocentric condition. Participants who performed well in spatial tasks used allocentric strategies, and their performance was positively correlated with activity in the medial temporal lobe (hippocampus); in contrast, participants who performed poorly activated the parietal cortex and used more egocentric strategies (Ohnishi et al., 2006).
The Impact of Visual Deprivation on Spatial Competence: The Case for the Convergent Model
What happens, then, when someone is deprived of vision from birth? It is more difficult to gather sensory information in the absence of vision, and that information is harder to interpret, but spatial representations and spatial competence can still be achieved. If the missing sensory information is substituted through a different modality, the convergent model (Schinazi et al., 2016) suggests that spatial competence can be acquired faster (Figure 5).
Theories on the Acquisition of Spatial Competence in Blindness
Interestingly enough, as early as 1749, Diderot noted in his Letter on the Blind the ability of certain non-sighted people to orient themselves in space without the aid of a cane, and that they had a certain innate sense for the perception of obstacles. In 1944, studies at Cornell University (Supa et al., 1944) showed that blind people were capable of detecting obstacles only when they were provided with auditory information. The absence of tactile information did not perturb their obstacle detection sense, but the absence of any auditory information was detrimental to their performance. This hypothesis was confirmed by Ammons et al. (1953), who showed that blind people whose auditory input was blocked were unable to perceive obstacles. They concluded that audition was a crucial factor for navigation in blindness. This phenomenon is called echolocation. Blind people still use this technique by tapping their cane on the ground, clapping their hands or making clicking sounds with their tongue to perceive echoes. Kellogg (1962) was the first to quantify this ability. He measured the sensitivity of blind and sighted volunteers to variations in the size, distance and texture of objects perceived only through auditory echoes. He demonstrated that blind people had significantly superior results compared to the sighted in their ability to detect objects and judge their texture and distance (Kellogg, 1962). These results were later reproduced (Strelow and Brabyn, 1982), but it was demonstrated that although the CB outperformed their sighted blindfolded counterparts, their ability remained well below that of the sighted using vision.
Theories on the acquisition of spatial competence in blindness can be classified into three main categories: cumulative, persistent, or convergent (Figure 5; Schinazi et al., 2016). It is evident that without visual cues, the acquisition of spatial knowledge about an environment, and eventual spatial competence, can be impaired, but to what extent? The cumulative and persistent models hold that errors made when acquiring spatial knowledge, and thus spatial competence, in an environment lead to further errors, so that performance either remains persistently below, or falls cumulatively further below, that of sighted counterparts who have received the same amount of spatial experience in the same environment. The convergent model considers that although it may take more time for CB people to gain spatial information and spatial competence, their spatial competence will eventually converge with that of the sighted. For a long time, the literature on congenital blindness entertained the idea that people who are blind from birth were deficient or ineffective in their ability to comprehend space (von Senden, 1932). The deficiency theory proposes (see both the cumulative and persistent models in Figure 5) that people who are congenitally blind are either incapable of, or inefficient in, developing mental representations of space and the environment. According to this theory, the inability to form efficient cognitive maps is due to the reliance on tactile, proprioceptive, or auditory cues that are not useful in creating these maps. Blindness leads to a reduction in autonomy because of deficits in spatial orientation and mobility. It is evident that it is harder to navigate without the appropriate information furnished by vision. This inability to navigate alone is of course a major handicap for blind people (Loomis et al., 1993), who have difficulty understanding certain concepts relating to space (Rieser et al., 1980) and making mental rotations (Ungar et al., 1995; Fortin et al., 2006). Further evidence that would seem to support this view comes from volumetric studies of the hippocampus in CB. The posterior end of the right hippocampus is volumetrically reduced in CB (Chebat et al., 2007a; Figure 6A), precisely in the same area that is usually associated with navigation in humans (Duarte et al., 2014). The hippocampus is composed of many distinct cellular layers (Figure 6B), and it is unknown which ones drive the volumetric reductions in CB. Despite these behavioral findings and volumetric differences, people who are blind, even those without any visual experience, are able to represent familiar spaces and have an overall good understanding of large spaces (Casey, 1978). In opposition to the deficiency theories of spatial competence acquisition, many studies support the convergent model of spatial acquisition.
For example, CB individuals process spectral cues more efficiently than the sighted (Doucet et al., 2005), process auditory syllables more efficiently (Topalidis et al., 2020), have better sound pitch discrimination (Gougoux et al., 2004), are better at locating sound sources (Lessard et al., 1998), localize sounds more accurately (Lewald, 2007), show improved auditory spatial tuning (Röder et al., 1999), and even display supra-normal auditory abilities in far space (Voss et al., 2004), possibly by recruiting mechanisms of cross-modal brain plasticity to process auditory information (Collignon et al., 2009). Furthermore, it is possible to form a mental layout of space in a virtual task using echo-acoustic information (Dodsworth et al., 2020). It is therefore not a question of a deficit at the level of the mental representation of space. In an environment that does not confer the advantages of visual navigation (i.e., a maze whose walls were at arm’s length, so that subjects could touch them), the performance of blind subjects was equal to, or even surpassed, that of the sighted (Passini et al., 1990; Fortin et al., 2008). Far from being deficient in spatial tasks or in their comprehension of space in general, people who are blind may have a different comprehension of space generated by the other senses, and may therefore develop other strategies to represent and configure space (Thinus-Blanc and Gaunet, 1997).
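Purely to visualize the qualitative predictions of the three accounts discussed above (the curves are illustrative toy functions, not data from any of the cited studies), one can plot spatial competence as a function of experience in an environment:

```python
import numpy as np
import matplotlib.pyplot as plt

experience = np.linspace(0, 10, 200)               # arbitrary units of exposure
sighted = 1 - np.exp(-experience)                  # reference learning curve

# Toy versions of the three accounts (curve shapes are assumptions for illustration):
cumulative = 0.5 * sighted                          # gap widens with experience
persistent = np.clip(sighted - 0.25, 0, None)       # roughly constant gap
convergent = 1 - np.exp(-0.6 * experience)          # slower start, same endpoint

plt.plot(experience, sighted, label="sighted")
plt.plot(experience, cumulative, label="cumulative model")
plt.plot(experience, persistent, label="persistent model")
plt.plot(experience, convergent, label="convergent model")
plt.xlabel("spatial experience in the environment (arbitrary units)")
plt.ylabel("spatial competence (arbitrary units)")
plt.legend()
plt.show()
```

On such a plot, only the convergent curve eventually meets the sighted reference, which is the pattern that the behavioral results reviewed above support.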
Spatial Perception Strategies and Sensory Substitution Devices
In the same way that the physiology of the brain shapes vision, the engineering of each SSD sets limitations on the type and quality of visual information available. The angle of the camera sensor (field of view), for example, or the nature of the sensor information (distance information vs. contrast information or object edges) and the way this information is conveyed, influence how the SSD user explores the environment (Bermejo et al., 2015). Regardless of the type of visual information transferred, or the modality used by the device (tactile or auditory), the distal attribution process is a crucial step in developing strategies when using SSDs (Siegle and Warren, 2010). This process allows the user to attribute an external cause to the sensation provided by the SSD (Hartcher-O’Brien and Auvray, 2014). When this process is complete, the user is able to understand how the information conveyed by the apparatus relates to the representation of the object in space. This leads to the integration and transformation of SSD information into a coherent representation of the world around us (Cecchetti et al., 2016a), allowing blind people to interact with their environment efficiently. Using the vOICe, for example, it is possible to recognize and locate objects efficiently (Brown et al., 2011). The strategies developed by blind people when using SSDs to navigate reflect the absence of a cognitive deficiency in representing space (Schinazi et al., 2016). When vision is substituted by tactile or auditory information, the strategies used by CB and late blind (LB) individuals resemble the strategies described above for the sighted, and the spatial updating of auditory scenes mimics the spatial updating of visual scenes (Pasqualotto and Esenkaya, 2016). Indeed, when comparing the strategies and navigation patterns of blind and blindfolded sighted participants using the EyeCane in a virtual environment, we find that LB and CB performances can be quite similar to those of the sighted. The same is true for the paths they use to explore their environments, with blind participants adopting a visual-like strategy to explore real-life Hebb-Williams mazes (Maidenbaum et al., 2014b; Chebat et al., 2015). This is surprising given that much of the spatial information is lost when translated into tactile or auditory information (Richardson et al., 2019). It would seem then that even a little spatial information is enough to enable blind people to develop navigation strategies that resemble those employed by the sighted.
Perceptions of Obstacles by Congenitally Blind Individuals
Obstacle avoidance tasks involve two separate skills: the ability to understand where the obstacle is in space, and the ability to walk around it. Pointing tasks aim to evaluate participants’ knowledge of the directional relations between places. These tasks can help to evaluate the perception of space (Kelly et al., 2004), the perception of movement (Israël et al., 1996; Philbeck et al., 2006) and the spatial memory used to plan and accomplish a movement. One can ask the participant to move, actively or passively, and then point toward the starting point, or ask the participant to verbally describe the azimuth toward the goal. Physically pointing implies the contribution of the motor network to accomplish the pointing action, as well as the spatial computation needed to locate the point of origin. Navigation also implies the ability to move through the environment and avoid obstacles on the path. Obstacles can be very large, the size of a mountain or a building for example, which one must skirt (circle) to get around, or quite small, like a sidewalk that one must step over. In both cases, this implies being able to locate the obstacle on the path and to develop a strategy to keep the goal in mind and reach it despite the obstacle.
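As a small illustration of how such a pointing response might be scored (the coordinate and angle conventions below are assumptions, not the procedure of any cited study), the signed error between the pointed direction and the true azimuth of the starting point can be computed as follows:

```python
import numpy as np

def pointing_error(position, pointed_deg, target):
    """Signed angular error (degrees) between the direction a participant
    points (pointed_deg, measured counter-clockwise from the +x axis) and
    the true direction from the current position to the target.
    Coordinate conventions are illustrative assumptions."""
    dx, dy = target[0] - position[0], target[1] - position[1]
    true_azimuth = np.degrees(np.arctan2(dy, dx))
    error = pointed_deg - true_azimuth
    return (error + 180.0) % 360.0 - 180.0   # wrap into [-180, 180)

# Example: participant at (3, 1) points at 90 deg; the starting point is at (0, 0).
print(pointing_error((3.0, 1.0), 90.0, (0.0, 0.0)))
```

Averaging the absolute value of this error over trials gives one simple index of how well directional relations between places have been encoded.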
Using tactile information delivered by an SSD, CBs are able to detect and avoid obstacles efficiently in a real-life obstacle course (Chebat et al., 2007a, 2011, 2020). Indeed, CBs have natural adaptive mechanisms for using tactile information in lieu of visual information. Using the TDU, for example, CBs outperform their sighted blindfolded counterparts in different tasks, including navigation. Work from our laboratory using the TDU in route recognition demonstrated the recruitment of primary visual areas in CB, but not in blindfolded sighted or LB participants (Kupers et al., 2010a; Figure 6C). In line with these results, CB participants, LB participants and blindfolded sighted controls (SC) learned to use an SSD to navigate in real-life-size mazes. We observed that retinotopic regions, including both dorsal-stream regions (e.g., V6) and primary visual cortex regions (e.g., peripheral V1), were selectively recruited for non-visual navigation after the participants mastered the use of the SSD, demonstrating rapid plasticity for non-visual navigation (Maidenbaum et al., 2018). Moreover, the ability of participants to learn to use the SSD to detect and avoid obstacles was positively correlated with the volumes of a network commonly associated with navigation (Chebat et al., 2020; Figure 6D). For avoidance, both CB and SC rely on the dorsal stream network, whereas for obstacle detection SC recruit medial temporal lobe structures and CBs additionally recruit a motor network. These results suggest that the blind may rely more on motor memory to remember the location of obstacles (Chebat et al., 2020). Similar results were reported by Gagnon et al. (2010) in a tactile maze, where the performance of CBs was significantly higher than that of sighted controls.
Future Perspectives of Sensory Substitution Devices
The major conclusion of studies on the blind using SSDs is that navigation is indeed possible without any visual experience. Spatial competence can be achieved by blind individuals partly through mechanisms of brain plasticity and amodality. Visual deprivation from birth leads to anatomical volumetric reductions in all components of the visual system, from the retina to the thalamic primary visual relay (the dorsal lateral geniculate nucleus) (Cecchetti et al., 2016b), the visual cortex and the extrastriate cortices, including the ventral and dorsal streams (Ptito et al., 2008b). These structures have been shown to reorganize and develop ectopic projections with other sensory cortices, mostly those for touch and audition (reviewed in Kupers and Ptito, 2014; Chebat et al., 2018b; Harrar et al., 2018). Indeed, CB trained with SSDs activate their primary visual cortex (Ptito, 2005) in a tactile orientation task, and the dorsal and ventral visual streams for tactile motion (Ptito et al., 2009) and the perception of tactile form (Ptito et al., 2012). In line with these findings, another study found retinotopic-like maps in the visual cortex of expert blind echolocators, providing further evidence for the task-specific organization of the brain (Norman and Thaler, 2019). It seems therefore that CB can compensate for the loss of vision by using other, trained senses to invade and recruit the visual cortices. This means that navigational skills are indeed possible through a rewired network of connections that involves the hippocampal/parahippocampal network (Kupers et al., 2010a; Kupers and Ptito, 2014). Furthermore, the use of SSDs could greatly enhance spatial competence in people who are blind by supplementing missing visual information and allowing for more direct exploration of the environment. This would allow blind people to form allocentric representations of space more quickly and efficiently. Indeed, according to a convergent model of spatial competence in CB, by being able to acquire more spatial information in relatively less time via SSDs, CB may be able to achieve spatial competence more rapidly.
We conclude here on the future of SSDs and their efficacy for substituting vision in a natural environment. To date, all studies have focused on laboratory settings (Elli et al., 2014) with carefully controlled environments and have yielded encouraging results. However, as all available SSDs suffer from methodological shortcomings, ranging from their technology to their adaptability to the environment, it may take a while before we see their widespread use (Chebat et al., 2018a). Current trends investigating the impact of personality traits on SSD use (Richardson et al., 2020) will surely lead to better, more adaptable and customizable devices. Another important question concerns the ideal age at which to start training with SSDs. Indeed, the developmental aspect is crucial to SSD studies (Aitken and Bower, 1983; Strelow and Warren, 1985), and training children from a very young age could prove very beneficial from a behavioral point of view. Most studies using SSDs to explore mechanisms of brain plasticity do so by training people well beyond the critical period. Considering that the human brain is much more plastic before the closure of the critical period (Cohen et al., 1999; Sadato et al., 2002), it would be very interesting to investigate what congenitally blind children can achieve using SSDs (Strelow and Warren, 1985; Humphrey et al., 1988) compared to sighted adults (Gori et al., 2016). Future studies should also examine blindness acquired later in life, taking into account the onset and duration of blindness. It would also be interesting to investigate the impact of the sophistication (ease of use) and personalization (adaptation to each individual) of task-specific SSDs.
In order for SSDs to become widespread, there is a need to move experiments from the laboratory setting (Elli et al., 2014; Maidenbaum et al., 2014a) to real environments. It would also be useful to take advantage of virtual reality to train people with SSDs (Kupers et al., 2010b; Chebat et al., 2015; Baker et al., 2019; Netzer et al., 2019; Yazzolino et al., 2019; Siu et al., 2020) and to explore their ability to transfer spatial knowledge between real and virtual environments (Chebat et al., 2017; Guerreiro et al., 2020). Given that these devices are non-invasive, in contrast to highly invasive techniques such as surgical implants (retinal or cortical), efforts should be pursued in developing high-quality SSDs that will improve the quality of life of the blind.
Ethics Statement
Written informed consent was obtained for the publication of any identifiable data and images.
Author Contributions
All authors have collaborated for many years on the topic of sensory substitution and cross-modal plasticity, and contributed equally to the conception and writing of this review.
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Footnotes
Funding. This work was supported by the Harland Sanders Research Chair in Visual Science (University of Montreal; MP), the Ariel University Research Authority Absorption Grant # RA1700000192 (D-RC), and the French Health Ministry (PHRC 0701044; FS).
References
- Abboud S., Hanassy S., Levy-Tzedek S., Maidenbaum S., Amedi A. (2014). EyeMusic: Introducing a “visual” colorful experience for the blind using auditory sensory substitution. Restor. Neurol. Neurosci. 32 247–257. 10.3233/rnn-130338 [DOI] [PubMed] [Google Scholar]
- Abboud S., Maidenbaum S., Dehaene S., Amedi A. (2015). A number-form area in the blind. Nat. Commun. 6:6026. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Aguirre G. K., Datta R., Benson N. C., Prasad S., Jacobson S. G., Cideciyan A. V., et al. (2016). Patterns of individual variation in visual pathway structure and function in the sighted and blind. PLoS One 11:e0164677. 10.1371/journal.pone.0164677 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Aguirre G. K., Detre J. A., Alsop D. C., D’Esposito M. (1996). The parahippocampus subserves topographical learning in man. Cereb. Cortex 6 823–829. [DOI] [PubMed] [Google Scholar]
- Aitken S., Bower T. G. R. (1983). Developmental aspects of sensory substitution. Int. J. Neurosci. 19 13–19. 10.3109/00207458309148641 [DOI] [PubMed] [Google Scholar]
- Amedi A., Merabet L. B., Bermpohl F., Pascual-Leone A. (2005). The occipital cortex in the blind lessons about plasticity and vision. Curr. Dir. Psychol. Sci. 14 306–311. 10.1111/j.0963-7214.2005.00387.x [DOI] [Google Scholar]
- Amedi A., Stern W. M., Camprodon J. A., Bermpohl F., Merabet L., Rotman S., et al. (2007). Shape conveyed by visual-to-auditory sensory substitution activates the lateral occipital complex. Nat. Neurosci. 10 687–689. 10.1038/nn1912 [DOI] [PubMed] [Google Scholar]
- Ammons C. H., Worchel P., Dallenbach K. M. (1953). “Facial vision”: the perception of obstacles out of doors by blindfolded and blindfolded-deafened subjects. Am. J. Psychol. 66 519–553. [PubMed] [Google Scholar]
- Appleyard D. (1970). Styles and methods of structuring a city. Environ. Behav. 2 100–117. 10.1177/001391657000200106 [DOI] [Google Scholar]
- Armand M., Huissoon J. P., Patla A. E. (1998). Stepping over obstacles during locomotion: insights from multiobjective optimization on set of input parameters. IEEE Trans. Rehabil. Eng. 6 43–52. 10.1109/86.662619 [DOI] [PubMed] [Google Scholar]
- Arnott S. R., Thaler L., Milne J. L., Kish D., Goodale M. A. (2013). Shape-specific activation of occipital cortex in an early blind echolocation expert. Neuropsychologia 51 938–949. 10.1016/j.neuropsychologia.2013.01.024 [DOI] [PubMed] [Google Scholar]
- Astur R. S., Ortiz M. L., Sutherland R. J. (1998). A characterization of performance by men and women in a virtual Morris water task: a large and reliable sex difference. Behav. Brain Res. 93 185–190. 10.1016/s0166-4328(98)00019-9 [DOI] [PubMed] [Google Scholar]
- Auvray M., Hanneton S., Lenay C., O’REGAN K. (2005). There is something out there: distal attribution in sensory substitution, twenty years later. J. Integr. Neurosci. 4 505–521. 10.1142/s0219635205001002 [DOI] [PubMed] [Google Scholar]
- Auvray M., Myin E. (2009). Perception with compensatory devices: from sensory substitution to sensorimotor extension. Cogn. Sci. 33 1036–1058. 10.1111/j.1551-6709.2009.01040.x [DOI] [PubMed] [Google Scholar]
- Bach-y-Rita P., Collins C. C., Saunders F. A., White B., Scadden L. (1969). Vision substitution by tactile image projection. Nature 221 963–964. 10.1038/221963a0 [DOI] [PubMed] [Google Scholar]
- Bach-y-Rita P., Kercel S. W. (2003). Sensory substitution and the human–machine interface. Trends Cogn. Sci. 7 541–546. 10.1016/j.tics.2003.10.013 [DOI] [PubMed] [Google Scholar]
- Baker R. M., Ramos K., Turner J. R. (2019). Game design for visually-impaired individuals: Creativity and innovation theories and sensory substitution devices influence on virtual and physical navigation skills. Irish J. Technol. Enhanc. Learn. 4 36–47. [Google Scholar]
- Barnes R. H., Cunnold S. R., Zimmermann R. R., Simmons H., MacLeod R. B., Krook L. (1966). Influence of nutritional deprivations in early life on learning behavior of rats as measured by performance in a water maze. J. Nutr. 89 399–410. 10.1093/jn/89.4.399 [DOI] [PubMed] [Google Scholar]
- Bell L., Wagels L., Neuschaefer-Rube C., Fels J., Gur R. E., Konrad K. (2019). The cross-modal effects of sensory deprivation on spatial and temporal processes in vision and audition: a systematic review on behavioral and neuroimaging research since 2000. Neural Plast. 2019:9603469. 10.1155/2019/9603469 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bermejo F., Di Paolo E. A., Hüg M. X., Arias C. (2015). Sensorimotor strategies for recognizing geometrical shapes: a comparative study with different sensory substitution devices. Front. Psychol. 6:679. 10.3389/fpsyg.2015.00679 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bhatlawande S. S., Mukhopadhyay J., Mahadevappa M. (2012). “Ultrasonic spectacles and waist-belt for visually impaired and blind person,” in Proceedings of the 2012 National Conference on Communications (NCC), (Kharagpur: IEEE; ), 1–4. [Google Scholar]
- Blajenkova O., Motes M. A., Kozhevnikov M. (2005). Individual differences in the representations of novel environments. J. Environ. Psychol. 25 97–109. 10.1016/j.jenvp.2004.12.003 [DOI] [Google Scholar]
- Brown D., Macpherson T., Ward J. (2011). Seeing with sound? Exploring different characteristics of a visual-to-auditory sensory substitution device. Perception 40 1120–1135. 10.1068/p6952 [DOI] [PubMed] [Google Scholar]
- Buneo C. A., Andersen R. A. (2006). The posterior parietal cortex: sensorimotor interface for the planning and online control of visually guided movements. Neuropsychologia 44 2594–2606. 10.1016/j.neuropsychologia.2005.10.011 [DOI] [PubMed] [Google Scholar]
- Burgess N., Maguire E. A., O’Keefe J. (2002). The human hippocampus and spatial and episodic memory. Neuron 35 625–641. 10.1016/s0896-6273(02)00830-9 [DOI] [PubMed] [Google Scholar]
- Burgess N., O’Keefe J. (1996). Neuronal computations underlying the firing of place cells and their role in navigation. Hippocampus 6 749–762. [DOI] [PubMed] [Google Scholar]
- Cappagli G., Cocchi E., Gori M. (2017). Auditory and proprioceptive spatial impairments in blind children and adults. Dev. Sci. 20 e12374. 10.1111/desc.12374 [DOI] [PubMed] [Google Scholar]
- Casey S. M. (1978). Cognitive mapping by the blind. J. Vis. Impair. Blind. 72 297–301. [Google Scholar]
- Cattaneo Z., Vecchi T., Cornoldi C., Mammarella I., Bonino D., Ricciardi E., et al. (2008). Imagery and spatial processes in blindness and visual impairment. Neurosci. Biobehav. Rev. 32 1346–1360. 10.1016/j.neubiorev.2008.05.002 [DOI] [PubMed] [Google Scholar]
- Cecchetti L., Kupers R., Ptito M., Pietrini P., Ricciardi E. (2016a). Are supramodality and cross-modal plasticity the yin and yang of brain development? From blindness to rehabilitation. Front. Syst. Neurosci. 10:89. 10.3389/fnsys.2016.00089 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Cecchetti L., Ricciardi E., Handjaras G., Kupers R., Ptito M., Pietrini P. (2016b). Congenital blindness affects diencephalic but not mesencephalic structures in the human brain. Brain Struct. Funct. 221 1465–1480. 10.1007/s00429-014-0984-5 [DOI] [PubMed] [Google Scholar]
- Chebat D. R. (2020). Introduction to the special issue on multisensory space—perception, neural representation and navigation. Multisens. Res. 33 375–382. 10.1163/22134808-bja10004 [DOI] [PubMed] [Google Scholar]
- Chebat D.-R., Chen J.-K., Schneider F., Ptito A., Kupers R., Ptito M. (2007a). Alterations in the right posterior hippocampus in early blind individuals. Neuroreport 18 329–333. 10.1097/wnr.0b013e32802b70f8 [DOI] [PubMed] [Google Scholar]
- Chebat D.-R., Harrar V., Kupers R., Maidenbaum S., Amedi A., Ptito M. (2018a). “Sensory substitution and the neural correlates of navigation in blindness,” in Mobility of Visually Impaired People, eds Pissaloux E., Velazquez R., (Cham: Springer; ), 167–200. 10.1007/978-3-319-54446-5_6 [DOI] [Google Scholar]
- Chebat D.-R., Heimler B., Hofsetter S., Amedi A. (2018b). “The implications of brain plasticity and task selectivity for visual rehabilitation of blind and visually impaired individuals,” in The Neuroimaging of Brain Diseases, ed. Habas C., (Cham: Springer; ). [Google Scholar]
- Chebat D.-R., Maidenbaum S., Amedi A. (2015). Navigation using sensory substitution in real and virtual mazes. PLoS One 10:e0126307. 10.1371/journal.pone.0126307 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chebat D. R., Maidenbaum S., Amedi A. (2017). “The transfer of non-visual spatial knowledge between real and virtual mazes via sensory substitution,” in Proceedings of the 2017 International Conference on Virtual Rehabilitation, ICVR, (Montreal, QC: IEEE; ). [Google Scholar]
- Chebat D.-R., Rainville C., Kupers R., Ptito M. (2007b). Tactile–‘visual’ acuity of the tongue in early blind individuals. Neuroreport 18, 1901–1904. 10.1097/WNR.0b013e3282f2a63 [DOI] [PubMed] [Google Scholar]
- Chebat D.-R., Schneider F., Kupers R., Ptito M. (2011). Navigation with a sensory substitution device in congenitally blind individuals. Neuroreport 22 342–347. 10.1097/wnr.0b013e3283462def [DOI] [PubMed] [Google Scholar]
- Chebat D.-R., Schneider F. C., Ptito M. (2020). Neural networks mediating perceptual learning in congenital blindness. Sci. Rep. 10 1–10. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Chen L. L., Lin L.-H., Green E. J., Barnes C. A., McNaughton B. L. (1994). Head-direction cells in the rat posterior cortex. Exp. Brain Res. 101 8–23. [DOI] [PubMed] [Google Scholar]
- Cohen L. G., Celnik P., Pascual-Leone A., Corwell B., Falz L., Dambrosia J., et al. (1997). Functional relevance of cross-modal plasticity in blind humans. Nature 389 180–183. 10.1038/38278 [DOI] [PubMed] [Google Scholar]
- Cohen L. G., Weeks R. A., Sadato N., Celnik P., Ishii K., Hallett M. (1999). Period of susceptibility for cross-modal plasticity in the blind. Ann. Neurol. 45 451–460. [DOI] [PubMed] [Google Scholar]
- Cohen Y. E. (2009). Multimodal activity in the parietal cortex. Hear. Res. 258 100–105. 10.1016/j.heares.2009.01.011 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Collignon O., Vandewalle G., Voss P., Albouy G., Charbonneau G., Lassonde M., et al. (2011). Functional specialization for auditory–spatial processing in the occipital cortex of congenitally blind humans. Proc. Natl. Acad. Sci. U.S.A. 108 4435–4440. 10.1073/pnas.1013928108 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Collignon O., Voss P., Lassonde M., Lepore F. (2009). Cross-modal plasticity for the spatial processing of sounds in visually deprived subjects. Exp. Brain Res. 192:343. 10.1007/s00221-008-1553-z [DOI] [PubMed] [Google Scholar]
- Committeri G., Piervincenzi C., Pizzamiglio L. (2018). Personal neglect: a comprehensive theoretical and anatomo–clinical review. Neuropsychology 32:269. 10.1037/neu0000409 [DOI] [PubMed] [Google Scholar]
- Connors E., Chrastil E., Sánchez J., Merabet L. B. (2014). Action video game play and transfer of navigation and spatial cognition skills in adolescents who are blind. Front. Hum. Neurosci. 8:133. 10.3389/fnhum.2014.00133 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Cornwell B. R., Johnson L. L., Holroyd T., Carver F. W., Grillon C. (2008). Human hippocampal and parahippocampal theta during goal-directed spatial navigation predicts performance on a virtual Morris water maze. J. Neurosci. 28 5983–5990. 10.1523/jneurosci.5001-07.2008 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Crollen V., Lazzouni L., Rezk M., Bellemare A., Lepore F., Noël M.-P., et al. (2019). Recruitment of the occipital cortex by arithmetic processing follows computational bias in the congenitally blind. Neuroimage 186 549–556. 10.1016/j.neuroimage.2018.11.034 [DOI] [PubMed] [Google Scholar]
- Crowe D. A., Chafee M. V., Averbeck B. B., Georgopoulos A. P. (2004a). Neural activity in primate parietal area 7a related to spatial analysis of visual mazes. Cereb. Cortex 14 23–34. 10.1093/cercor/bhg088 [DOI] [PubMed] [Google Scholar]
- Crowe D. A., Chafee M. V., Averbeck B. B., Georgopoulos A. P. (2004b). Participation of primary motor cortical neurons in a distributed network during maze solution: representation of spatial parameters and time-course comparison with parietal area 7a. Exp. Brain Res. 158 28–34. [DOI] [PubMed] [Google Scholar]
- De Renzi E. (1982a). Disorders of Space Exploration and Cognition. New York, NY: John Wiley & Sons, Inc. [Google Scholar]
- De Renzi E. (1982b). Memory disorders following focal neocortical damage. Philos. Trans. R. Soc. Lond. B Biol. Sci. 298 73–83. 10.1098/rstb.1982.0073 [DOI] [PubMed] [Google Scholar]
- Dodsworth C., Norman L. J., Thaler L. (2020). Navigation and perception of spatial layout in virtual echo-acoustic space. Cognition 197:104185. 10.1016/j.cognition.2020.104185 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Doucet M. E., Guillemot J. P., Lassonde M., Gagné J. P., Leclerc C., Lepore F. (2005). Blind subjects process auditory spectral cues more efficiently than sighted individuals. Exp. Brain Res. 160 194–202. 10.1007/s00221-004-2000-4 [DOI] [PubMed] [Google Scholar]
- Duarte I. C., Ferreira C., Marques J., Castelo-Branco M. (2014). Anterior/posterior competitive deactivation/activation dichotomy in the human hippocampus as revealed by a 3D navigation task. PLoS One 9:e86213. 10.1371/journal.pone.0086213 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Dunai L., Peris-Fajarnés G., Lluna E., Defez B. (2013). Sensory navigation device for blind people. J. Navig. 66 349–362. 10.1017/s0373463312000574 [DOI] [Google Scholar]
- Ekstrom A. D. (2015). Why vision is important to how we navigate. Hippocampus 25 731–735. 10.1002/hipo.22449 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Elli G. V., Benetti S., Collignon O. (2014). Is there a future for sensory substitution outside academic laboratories? Multisens. Res. 27 271–291. 10.1163/22134808-00002460 [DOI] [PubMed] [Google Scholar]
- Epstein R., Kanwisher N. (1998). A cortical representation of the local visual environment. Nature 392 598–601. 10.1038/33402 [DOI] [PubMed] [Google Scholar]
- Epstein R. A. (2008). Parahippocampal and retrosplenial contributions to human spatial navigation. Trends Cogn. Sci. 12 388–396. 10.1016/j.tics.2008.07.004 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Epstein R. A., Parker W. E., Feiler A. M. (2007). Where am I now? Distinct roles for parahippocampal and retrosplenial cortices in place recognition. J. Neurosci. 27 6141–6149. 10.1523/jneurosci.0799-07.2007 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Etienne A. S., Jeffery K. J. (2004). Path integration in mammals. Hippocampus 14 180–192. 10.1002/hipo.10173 [DOI] [PubMed] [Google Scholar]
- Fairhall S. L., Porter K. B., Bellucci C., Mazzetti M., Cipolli C., Gobbini M. I. (2017). Plastic reorganization of neural systems for perception of others in the congenitally blind. Neuroimage 158 126–135. 10.1016/j.neuroimage.2017.06.057 [DOI] [PubMed] [Google Scholar]
- Fine I., Park J. M. (2018). Blindness and human brain plasticity. Annu. Rev. Vis. Sci. 4 337–356. 10.1146/annurev-vision-102016-061241 [DOI] [PubMed] [Google Scholar]
- Finocchietti S., Cappagli G., Gori M. (2015). Encoding audio motion: spatial impairment in early blind individuals. Front. Psychol. 6:1357. 10.3389/fpsyg.2015.01357 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Fortin M., Voss P., Lord C., Lassonde M., Pruessner J., Saint-Amour D., et al. (2008). Wayfinding in the blind: larger hippocampal volume and supranormal spatial navigation. Brain 131 2995–3005. 10.1093/brain/awn250 [DOI] [PubMed] [Google Scholar]
- Fortin M., Voss P., Rainville C., Lassonde M., Lepore F. (2006). Impact of vision on the development of topographical orientation abilities. Neuroreport 17 443–446. [DOI] [PubMed] [Google Scholar]
- Foulke E. (1982). “Perception, cognition and the mobility of blind pedestrians,” in Spatial Abilities: Development and Physiological Foundations, 55–76. [Google Scholar]
- Gagnon L., Kupers R., Schneider F. C., Ptito M. (2010). Tactile maze solving in congenitally blind individuals. Neuroreport 21 989–992. [DOI] [PubMed] [Google Scholar]
- Gagnon L., Schneider F. C., Siebner H. R., Paulson O. B., Kupers R., Ptito M. (2012). Activation of the hippocampal complex during tactile maze solving in congenitally blind subjects. Neuropsychologia 50 1663–1671. 10.1016/j.neuropsychologia.2012.03.022 [DOI] [PubMed] [Google Scholar]
- Galati G., Lobel E., Vallar G., Berthoz A., Pizzamiglio L., Le Bihan D. (2000). The neural basis of egocentric and allocentric coding of space in humans: a functional magnetic resonance study. Exp. Brain Res. 133 156–164. 10.1007/s002210000375 [DOI] [PubMed] [Google Scholar]
- Geruschat D., Smith A. J. (1997). “Low vision and mobility,” in Foundations of Orientation and Mobility, 2nd Edn., eds Blasch B. B., Weiner W. R., Welsh R. L. (New York: AFB Press; ), 60–103. [Google Scholar]
- Ghaem O., Mellet E., Crivello F., Tzourio N., Mazoyer B., Berthoz A., et al. (1997). Mental navigation along memorized routes activates the hippocampus, precuneus, and insula. Neuroreport 8 739–744. 10.1097/00001756-199702100-00032 [DOI] [PubMed] [Google Scholar]
- Goodale M. A., Milner A. D. (1992). Separate visual pathways for perception and action. Trends Neurosci. 15 20–25. 10.1016/0166-2236(92)90344-8 [DOI] [PubMed] [Google Scholar]
- Gori M., Cappagli G., Tonelli A., Baud-Bovy G., Finocchietti S. (2016). Devices for visually impaired people: high technological devices with low user acceptance and no adaptability for children. Neurosci. Biobehav. Rev. 69 79–88. 10.1016/j.neubiorev.2016.06.043 [DOI] [PubMed] [Google Scholar]
- Gori M., Sandini G., Martinoli C., Burr D. C. (2014). Impairment of auditory spatial localization in congenitally blind human subjects. Brain 137 288–293. 10.1093/brain/awt311 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gougoux F., Lepore F., Lassonde M., Voss P., Zatorre R. J., Belin P. (2004). Pitch discrimination in the early blind. Nature 430 309. 10.1038/430309a [DOI] [PubMed] [Google Scholar]
- Grön G., Wunderlich A. P., Spitzer M., Tomczak R., Riepe M. W. (2000). Brain activation during human navigation: gender-different neural networks as substrate of performance. Nat. Neurosci. 3 404–408. 10.1038/73980 [DOI] [PubMed] [Google Scholar]
- Guerreiro J., Sato D., Ahmetovic D., Ohn-Bar E., Kitani K. M., Asakawa C. (2020). Virtual navigation for blind people: transferring route knowledge to the real-world. Int. J. Hum. Comput. Stud. 135:102369. 10.1016/j.ijhcs.2019.102369 [DOI] [Google Scholar]
- Hafting T., Fyhn M., Molden S., Moser M.-B., Moser E. I. (2005). Microstructure of a spatial map in the entorhinal cortex. Nature 436 801–806. 10.1038/nature03721 [DOI] [PubMed] [Google Scholar]
- Halko M. A., Connors E. C., Sánchez J., Merabet L. B. (2014). Real world navigation independence in the early blind correlates with differential brain activity associated with virtual navigation. Hum. Brain Mapp. 35 2768–2778. 10.1002/hbm.22365 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Harrar V., Aubin S., Chebat D.-R., Kupers R., Ptito M. (2018). “The multisensory blind brain,” in Mobility of Visually Impaired People, eds Pissaloux E., Velazquez R., (Cham: Springer; ), 111–136. 10.1007/978-3-319-54446-5_4 [DOI] [Google Scholar]
- Hartcher-O’Brien J., Auvray M. (2014). The process of distal attribution illuminated through studies of sensory substitution. Multisens. Res. 27 421–441. 10.1163/22134808-00002456 [DOI] [PubMed] [Google Scholar]
- Hebb D. O., Williams K. (1946). A method of rating animal intelligence. J. Gen. Psychol. 34 59–65. 10.1080/00221309.1946.10544520 [DOI] [PubMed] [Google Scholar]
- Hegarty M., Montello D. R., Richardson A. E., Ishikawa T., Lovelace K. (2006). Spatial abilities at different scales: individual differences in aptitude-test performance and spatial-layout learning. Intelligence 34 151–176. 10.1016/j.intell.2005.09.005 [DOI] [Google Scholar]
- Heimler B., Striem-Amit E., Amedi A. (2015). Origins of task-specific sensory-independent organization in the visual and auditory brain: neuroscience evidence, open questions and clinical implications. Curr. Opin. Neurobiol. 35 169–177. 10.1016/j.conb.2015.09.001 [DOI] [PubMed] [Google Scholar]
- Heine L., Bahri M. A., Cavaliere C., Soddu A., Laureys S., Ptito M., et al. (2015). Prevalence of increases in functional connectivity in visual, somatosensory and language areas in congenital blindness. Front. Neuroanat. 9:86. 10.3389/fnana.2015.00086 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hill E. W., Rieser J. J., Hill M.-M., Hill M., Halpin J., Halpin R. (1993). How persons with visual impairments explore novel spaces: strategies of good and poor performers. J. Vis. Impair. Blind. 87 295–301. 10.1177/0145482x9308700805 [DOI] [Google Scholar]
- Hill J., Black J. (2003). The miniguide: a new electronic travel device. J. Vis. Impair. Blind. 97 1–6. [Google Scholar]
- Holdstock J. S., Mayes A. R., Cezayirli E., Isaac C. L., Aggleton J. P., Roberts N. (2000). A comparison of egocentric and allocentric spatial memory in a patient with selective hippocampal damage. Neuropsychologia 38 410–425. 10.1016/s0028-3932(99)00099-8 [DOI] [PubMed] [Google Scholar]
- Holland R. A., Thorup K., Gagliardo A., Bisson I.-A., Knecht E., Mizrahi D., et al. (2009). Testing the role of sensory systems in the migratory heading of a songbird. J. Exp. Biol. 212 4065–4071. 10.1242/jeb.034504 [DOI] [PubMed] [Google Scholar]
- Hublet C., Demeurisse G. (1992). Pure topographical disorientation due to a deep-seated lesion with cortical remote effects. Cortex 28 123–128. 10.1016/s0010-9452(13)80170-0 [DOI] [PubMed] [Google Scholar]
- Humphrey G. K., Dodwell P. C., Muir D. W., Humphrey D. E. (1988). Can blind infants and children use sonar sensory aids? Can. J. Psychol. 42:94. 10.1037/h0084187 [DOI] [PubMed] [Google Scholar]
- Israël I., Bronstein A. M., Kanayama R., Faldon M., Gresty M. A. (1996). Visual and vestibular factors influencing vestibular “navigation”. Exp. Brain Res. 112 411–419. [DOI] [PubMed] [Google Scholar]
- Jeamwatthanachai W., Wald M., Wills G. (2019). Indoor navigation by blind people: Behaviors and challenges in unfamiliar spaces and buildings. Br. J. Vis. Impair. 37 140–153. 10.1177/0264619619833723 [DOI] [Google Scholar]
- Juurmaa J., Suonio K. (1975). The role of audition and motion in the spatial orientation of the blind and the sighted. Scand. J. Psychol. 16 209–216. 10.1111/j.1467-9450.1975.tb00185.x [DOI] [PubMed] [Google Scholar]
- Kallai J., Makany T., Karadi K., Jacobs W. J. (2005). Spatial orientation strategies in Morris-type virtual water task for humans. Behav. Brain Res. 159 187–196. 10.1016/j.bbr.2004.10.015 [DOI] [PubMed] [Google Scholar]
- Kato Y., Takeuchi Y. (2003). Individual differences in wayfinding strategies. J. Environ. Psychol. 23 171–188. 10.1016/s0272-4944(03)00011-2 [DOI] [Google Scholar]
- Kay L. (1974). A sonar aid to enhance spatial perception of the blind: engineering design and evaluation. Radio Electron. Eng. 44 605–627. [Google Scholar]
- Kellogg W. N. (1962). Sonar system of the blind. Science 137 399–404. [DOI] [PubMed] [Google Scholar]
- Kelly J. W., Loomis J. M., Beall A. C. (2004). Judgments of exocentric direction in large-scale space. Perception 33 443–454. 10.1068/p5218 [DOI] [PubMed] [Google Scholar]
- Kerr N. H. (1983). The role of vision in “visual imagery” experiments: evidence from the congenitally blind. J. Exp. Psychol. Gen. 112 265–277. 10.1037/0096-3445.112.2.265 [DOI] [PubMed] [Google Scholar]
- King A. J. (2009). Visual influences on auditory spatial learning. Philos. Trans. R. Soc. B Biol. Sci. 364 331–339. 10.1098/rstb.2008.0230 [DOI] [PMC free article] [PubMed] [Google Scholar]
- King A. J., Carlile S. (1993). Changes induced in the representation of auditory space in the superior colliculus by rearing ferrets with binocular eyelid suture. Exp. Brain Res. 94 444–455. [DOI] [PubMed] [Google Scholar]
- King A. J., Parsons C. H. (1999). Improved auditory spatial acuity in visually deprived ferrets. Eur. J. Neurosci. 11 3945–3956. 10.1046/j.1460-9568.1999.00821.x [DOI] [PubMed] [Google Scholar]
- King V. R., Corwin J. V. (1993). Comparisons of hemi-inattention produced by unilateral lesions of the posterior parietal cortex or medial agranular prefrontal cortex in rats: neglect, extinction, and the role of stimulus distance. Behav. Brain Res. 54 117–131. 10.1016/0166-4328(93)90070-7 [DOI] [PubMed] [Google Scholar]
- Klatzky R. L. (1998). “Allocentric and egocentric spatial representations: definitions, distinctions, and interconnections,” in Spatial Cognition, eds Freksa C., Habel C., Wender K. F., (Berlin: Springer; ), 1–17. 10.1007/3-540-69342-4_1 [DOI] [Google Scholar]
- Kolarik A. J., Scarfe A. C., Moore B. C. J., Pardhan S. (2017). Blindness enhances auditory obstacle circumvention: assessing echolocation, sensory substitution, and visual-based navigation. PLoS One 12:e0175750. 10.1371/journal.pone.0175750 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kupers R., Chebat D.-R., Madsen K. H., Paulson O. B., Ptito M. (2010a). Neural correlates of virtual route recognition in congenital blindness. Proc. Natl. Acad. Sci. U.S.A. 107 12716–12721. 10.1073/pnas.1006199107 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kupers R., Chebat D., Madsen K., Paulson O., Ptito M. (2010b). P36-8 Insights from darkness: neural correlates of virtual route recognition in congenital blindness. Clin. Neurophysiol. 121 36–38. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kupers R., Ptito M. (2014). Compensatory plasticity and cross-modal reorganization following early visual deprivation. Neurosci. Biobehav. Rev. 41 36–52. 10.1016/j.neubiorev.2013.08.001 [DOI] [PubMed] [Google Scholar]
- Lavenex P. B., Amaral D. G., Lavenex P. (2006). Hippocampal lesion prevents spatial relational learning in adult macaque monkeys. J. Neurosci. 26 4546–4558. 10.1523/jneurosci.5412-05.2006 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lenck-Santini P.-P., Rivard B., Muller R. U., Poucet B. (2005). Study of CA1 place cell activity and exploratory behavior following spatial and nonspatial changes in the environment. Hippocampus 15 356–369. 10.1002/hipo.20060 [DOI] [PubMed] [Google Scholar]
- Leporé N., Shi Y., Lepore F., Fortin M., Voss P., Chou Y.-Y., et al. (2009). Pattern of hippocampal shape and volume differences in blind subjects. Neuroimage 46 949–957. 10.1016/j.neuroimage.2009.01.071 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Leporé N., Voss P., Lepore F., Chou Y.-Y., Fortin M., Gougoux F., et al. (2010). Brain structure changes visualized in early- and late-onset blind subjects. Neuroimage 49 134–140. 10.1016/j.neuroimage.2009.07.048 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lessard N., Paré M., Lepore F., Lassonde M. (1998). Early-blind human subjects localize sound sources better than sighted subjects. Nature 395 278–280. 10.1038/26228 [DOI] [PubMed] [Google Scholar]
- Lewald J. (2007). More accurate sound localization induced by short-term light deprivation. Neuropsychologia 45 1215–1222. 10.1016/j.neuropsychologia.2006.10.006 [DOI] [PubMed] [Google Scholar]
- Likova L. T., Cacciamani L. (2018). Transfer of learning in people who are blind: enhancement of spatial-cognitive abilities through drawing. J. Vis. Impair. Blind. 112 385–397. 10.1177/0145482x1811200405 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Likova L. T., Mei M., Mineff K. N., Nicholas S. C. (2019). Learning face perception without vision: Rebound learning effect and hemispheric differences in congenital vs late-onset blindness. Electron. Imaging 2019 237–231. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Loomis J. M., Klatzky R. L., Golledge R. G., Cicinelli J. G., Pellegrino J. W., Fry P. (1993). Nonvisual navigation by blind and sighted: assessment of path integration ability. J. Exp. Psychol. Gen. 122 73–91. 10.1037/0096-3445.122.1.73 [DOI] [PubMed] [Google Scholar]
- Loomis J. M., Klatzky R. L., McHugh B., Giudice N. A. (2012). Spatial working memory for locations specified by vision and audition: testing the amodality hypothesis. Attent. Percept. Psychophys. 74 1260–1267. 10.3758/s13414-012-0311-2 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Loomis J. M., Wiener W. R., Welsh R. L., Blasch B. B. (2010). “Sensory substitution for orientation and mobility: what progress are we making?,” in Perceiving to Move and Moving to Perceive: Control of Locomotion by Students with Vision Loss. Foundations of Orientation and Mobility (History and Theory), eds Guth D. A., Rieser J. J., Ashmead, (New York, NY: AFB Press; ), 7–10. [Google Scholar]
- MacLellan M. J., Patla A. E. (2006). Stepping over an obstacle on a compliant travel surface reveals adaptive and maladaptive changes in locomotion patterns. Exp. Brain Res. 173 531–538. 10.1007/s00221-006-0398-6 [DOI] [PubMed] [Google Scholar]
- Maguire E. (2001). The retrosplenial contribution to human navigation: a review of lesion and neuroimaging findings. Scand. J. Psychol. 42 225–238. 10.1111/1467-9450.00233 [DOI] [PubMed] [Google Scholar]
- Maguire E. A., Burgess N., O’Keefe J. (1999). Human spatial navigation: cognitive maps, sexual dimorphism, and neural substrates. Curr. Opin. Neurobiol. 9 171–177. [DOI] [PubMed] [Google Scholar]
- Maguire E. A., Frackowiak R. S. J., Frith C. D. (1997). Recalling routes around London: activation of the right hippocampus in taxi drivers. J. Neurosci. 17 7103–7110. 10.1523/jneurosci.17-18-07103.1997 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Maidenbaum S., Abboud S., Amedi A. (2014a). Sensory substitution: closing the gap between basic research and widespread practical visual rehabilitation. Neurosci. Biobehav. Rev. 41 3–15. 10.1016/j.neubiorev.2013.11.007 [DOI] [PubMed] [Google Scholar]
- Maidenbaum S., Chebat D., Amedi A. (2018). Human navigation with and without vision–the role of visual experience and visual regions. bioRxiv [Preprint] 10.1101/480558 [DOI] [Google Scholar]
- Maidenbaum S., Chebat D.-R., Levy-Tzedek S., Amedi A. (2014b). “Vision-deprived virtual navigation patterns using depth cues & the effect of extended sensory range,” in Proceedings of the CHI ’14 Extended Abstracts on Human Factors in Computing Systems, (New York, NY: ACM Digital Library; ), 1231–1236. [Google Scholar]
- Maidenbaum S., Hanassy S., Abboud S., Buchs G., Chebat D.-R., Levy-Tzedek S., et al. (2014c). The “EyeCane”, a new electronic travel aid for the blind: technology, behavior & swift learning. Restor. Neurol. Neurosci. 32 813–824. 10.3233/rnn-130351 [DOI] [PubMed] [Google Scholar]
- Maidenbaum S., Levy-Tzedek S., Chebat D.-R., Namer-Furstenberg R., Amedi A. (2014d). The effect of extended sensory range via the eyecane sensory substitution device on the characteristics of visionless virtual navigation. Multisens. Res. 27 379–397. 10.1163/22134808-00002463 [DOI] [PubMed] [Google Scholar]
- Maller J. J., Thomson R. H., Ng A., Mann C., Eager M., Ackland H., et al. (2016). Brain morphometry in blind and sighted subjects. J. Clin. Neurosci. 33 89–95. 10.1016/j.jocn.2016.01.040 [DOI] [PubMed] [Google Scholar]
- Mann S., Huang J., Janzen R., Lo R., Rampersad V., Chen A., et al. (2011). “Blind navigation with a wearable range camera and vibrotactile helmet,” in Proceedings of the 19th International Conference on Multimedia 2011, (Scottsdale, AZ: ACM Digital Library; ), 1325. [Google Scholar]
- Marmor G. S., Zaback L. A. (1976). Mental rotation by the blind: does mental rotation depend on visual imagery? J. Exp. Psychol. Hum. Percept. Perform. 2:515. 10.1037/0096-1523.2.4.515 [DOI] [PubMed] [Google Scholar]
- Matsumura N., Nishijo H., Tamura R., Eifuku S., Endo S., Ono T. (1999). Spatial- and task-dependent neuronal responses during real and virtual translocation in the monkey hippocampal formation. J. Neurosci. 19 2381–2393. 10.1523/jneurosci.19-06-02381.1999 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Matteau I., Kupers R., Ricciardi E., Pietrini P., Ptito M. (2010). Beyond visual, aural and haptic movement perception: hMT+ is activated by electrotactile motion stimulation of the tongue in sighted and in congenitally blind individuals. Brain Res. Bull. 82 264–270. 10.1016/j.brainresbull.2010.05.001 [DOI] [PubMed] [Google Scholar]
- McVea D. A., Pearson K. G. (2009). Object avoidance during locomotion. Adv. Exp. Med. Biol. 629 293–315. 10.1007/978-0-387-77064-2_15 [DOI] [PubMed] [Google Scholar]
- Meijer P. B. L. (1992). An experimental system for auditory image representations. IEEE Trans. Biomed. Eng. 39 112–121. 10.1109/10.121642 [DOI] [PubMed] [Google Scholar]
- Mellet E., Bricogne S., Tzourio-Mazoyer N., Ghaem O., Petit L., Zago L., et al. (2000). Neural correlates of topographic mental exploration: the impact of route versus survey perspective learning. Neuroimage 12 588–600. 10.1006/nimg.2000.0648 [DOI] [PubMed] [Google Scholar]
- Merabet L. B., Battelli L., Obretenova S., Maguire S., Meijer P., Pascual-Leone A. (2009). Functional recruitment of visual cortex for sound encoded object identification in the blind. Neuroreport 20:132. 10.1097/wnr.0b013e32832104dc [DOI] [PMC free article] [PubMed] [Google Scholar]
- Milne J. L., Arnott S. R., Kish D., Goodale M. A., Thaler L. (2015). Parahippocampal cortex is involved in material processing via echoes in blind echolocation experts. Vision Res. 109 139–148. 10.1016/j.visres.2014.07.004 [DOI] [PubMed] [Google Scholar]
- Mishkin M., Ungerleider L. G., Macko K. A. (1983). Object vision and spatial vision: two cortical pathways. Trends Neurosci. 6 414–417. 10.1016/0166-2236(83)90190-x [DOI] [Google Scholar]
- Moraru D., Boiangiu C. A. (2016). On how to achieve visual sensory substitution. Int. J. Biol. Biomed. Eng. 10 40–46. [Google Scholar]
- Morris R. (1984). Developments of a water-maze procedure for studying spatial learning in the rat. J. Neurosci. Methods 11 47–60. 10.1016/0165-0270(84)90007-4 [DOI] [PubMed] [Google Scholar]
- Moser E. I., Kropff E., Moser M.-B. (2008). Place cells, grid cells, and the brain’s spatial representation system. Annu. Rev. Neurosci. 31 69–89. 10.1146/annurev.neuro.31.061307.090723 [DOI] [PubMed] [Google Scholar]
- Mountcastle V. B., Lynch J. C., Georgopoulos A., Sakata H., Acuna C. (1975). Posterior parietal association cortex of the monkey: command functions for operations within extrapersonal space. J. Neurophysiol. 38 871–908. 10.1152/jn.1975.38.4.871 [DOI] [PubMed] [Google Scholar]
- Nelson J. S., Kuling I. A., Gori M., Postma A., Brenner E., Smeets J. B. (2018). Spatial representation of the workspace in blind, low vision, and sighted human participants. I-Perception 9:2041669518781877. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Netzer O., Buchs G., Heimler B., Amedi A. (2019). “A systematic computerized training program for using Sensory Substitution Devices in real-life,” in Proceedings of the 2019 International Conference on Virtual Rehabilitation (ICVR), (Tel Aviv: IEEE; ), 1–2. [Google Scholar]
- Nitz D. (2009). Parietal cortex, navigation, and the construction of arbitrary reference frames for spatial information. Neurobiol. Learn. Mem. 91 179–185. 10.1016/j.nlm.2008.08.007 [DOI] [PubMed] [Google Scholar]
- Noppeney U. (2007). The effects of visual deprivation on functional and structural organization of the human brain. Neurosci. Biobehav. Rev. 31 1169–1180. 10.1016/j.neubiorev.2007.04.012 [DOI] [PubMed] [Google Scholar]
- Norman L. J., Thaler L. (2019). Retinotopic-like maps of spatial sound in primary ‘visual’ cortex of blind human echolocators. Proc. R. Soc. B 286:20191910. 10.1098/rspb.2019.1910 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ohnishi T., Matsuda H., Hirakata M., Ugawa Y. (2006). Navigation ability dependent neural activation in the human brain: an fMRI study. Neurosci. Res. 55 361–369. 10.1016/j.neures.2006.04.009 [DOI] [PubMed] [Google Scholar]
- O’Keefe J. (1991). An allocentric spatial model for the hippocampal cognitive map. Hippocampus 1 230–235. 10.1002/hipo.450010303 [DOI] [PubMed] [Google Scholar]
- O’Keefe J., Burgess N. (2005). Dual phase and rate coding in hippocampal place cells: theoretical significance and relationship to entorhinal grid cells. Hippocampus 15 853–866. 10.1002/hipo.20115 [DOI] [PMC free article] [PubMed] [Google Scholar]
- O’Keefe J., Dostrovsky J. (1971). The hippocampus as a spatial map: preliminary evidence from unit activity in the freely-moving rat. Brain Res. 34 171–175. 10.1016/0006-8993(71)90358-1 [DOI] [PubMed] [Google Scholar]
- O’Keefe J., Nadel L. (1978). The Hippocampus as a Cognitive Map. Oxford: Clarendon Press. [Google Scholar]
- O’Keefe J., Speakman A. (1987). Single unit activity in the rat hippocampus during a spatial memory task. Exp. Brain Res. 68 1–27. [DOI] [PubMed] [Google Scholar]
- Oler J. A., Penley S. C., Sava S., Markus E. J. (2008). Does the dorsal hippocampus process navigational routes or behavioral context? A single-unit analysis. Eur. J. Neurosci. 28 802–812. 10.1111/j.1460-9568.2008.06375.x [DOI] [PubMed] [Google Scholar]
- Park H. J., Lee J. D., Kim E. Y., Park B., Oh M. K., Lee S. C., et al. (2009). Morphological alterations in the congenital blind based on the analysis of cortical thickness and surface area. Neuroimage 47 98–106. 10.1016/j.neuroimage.2009.03.076 [DOI] [PubMed] [Google Scholar]
- Parron C., Poucet B., Save E. (2006). Cooperation between the hippocampus and the entorhinal cortex in spatial memory: a disconnection study. Behav. Brain Res. 170 99–109. 10.1016/j.bbr.2006.02.006 [DOI] [PubMed] [Google Scholar]
- Pasqualotto A., Esenkaya T. (2016). Sensory substitution: the spatial updating of auditory scenes “Mimics” the spatial updating of visual scenes. Front. Behav. Neurosci. 10:79. 10.3389/fnbeh.2016.00079 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Pasqualotto A., Furlan M. U. A., Proulx M. J., Sereno M. I. (2018). Visual loss alters multisensory face maps in humans. Brain Struct. Funct. 223 3731–3738. 10.1007/s00429-018-1713-2 [DOI] [PubMed] [Google Scholar]
- Pasqualotto A., Proulx M. J. (2012). The role of visual experience for the neural basis of spatial cognition. Neurosci. Biobehav. Rev. 36 1179–1187. 10.1016/j.neubiorev.2012.01.008 [DOI] [PubMed] [Google Scholar]
- Pasqualotto A., Spiller M. J., Jansari A., Proulx M. J. (2013). Visual experience facilitates allocentric spatial representation. Behav. Brain Res. 236 175–179. 10.1016/j.bbr.2012.08.042 [DOI] [PubMed] [Google Scholar]
- Passini R., Proulx G., Rainville C. (1990). The spatio-cognitive abilities of the visually impaired population. Environ. Behav. 22 91–118. 10.1177/0013916590221005 [DOI] [Google Scholar]
- Patla A. E. (1998). How is human gait controlled by vision. Ecol. Psychol. 10 287–302. 10.1080/10407413.1998.9652686 [DOI] [Google Scholar]
- Patla A. E., Greig M. (2006). Any way you look at it, successful obstacle negotiation needs visually guided on-line foot placement regulation during the approach phase. Neurosci. Lett. 397 110–114. 10.1016/j.neulet.2005.12.016 [DOI] [PubMed] [Google Scholar]
- Pereira A., Ribeiro S., Wiest M., Moore L. C., Pantoja J., Lin S.-C., et al. (2007). Processing of tactile information by the hippocampus. Proc. Natl. Acad. Sci. U.S.A. 104 18286–18291. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Philbeck J. W., Behrmann M., Biega T., Levy L. (2006). Asymmetrical perception of body rotation after unilateral injury to human vestibular cortex. Neuropsychologia 44 1878–1890. 10.1016/j.neuropsychologia.2006.02.004 [DOI] [PubMed] [Google Scholar]
- Poucet B., Lenck-Santini P. P., Paz-Villagrán V., Save E. (2003). Place cells, neocortex and spatial navigation: a short review. J. Physiol. Paris 97 537–546. 10.1016/j.jphysparis.2004.01.011 [DOI] [PubMed] [Google Scholar]
- Proulx M. J., Gwinnutt J., Dell’Erba S., Levy-Tzedek S., de Sousa A. A., Brown D. J. (2016). Other ways of seeing: from behavior to neural mechanisms in the online “visual” control of action with sensory substitution. Restor. Neurol. Neurosci. 34 29–44. 10.3233/rnn-150541 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ptito M. (2005). Cross-modal plasticity revealed by electrotactile stimulation of the tongue in the congenitally blind. Brain 128 606–614. 10.1093/brain/awh380 [DOI] [PubMed] [Google Scholar]
- Ptito M., Chebat D.-R., Kupers R. (2008a). “The blind get a taste of vision,” in Human Haptic Perception: Basics and Applications, ed. Grunwald M., (Basel: Birkhäuser Basel; ), 481–489. 10.1007/978-3-7643-7612-3_40 [DOI] [Google Scholar]
- Ptito M., Matteau I., Gjedde A., Kupers R. (2009). Recruitment of the middle temporal area by tactile motion in congenital blindness. Neuroreport 20 543–547. 10.1097/wnr.0b013e3283279909 [DOI] [PubMed] [Google Scholar]
- Ptito M., Matteau I., Zhi Wang A., Paulson O. B., Siebner H. R., Kupers R. (2012). Crossmodal recruitment of the ventral visual stream in congenital blindness. Neural Plast. 2012:304045. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ptito M., Schneider F. C. G., Paulson O. B., Kupers R. (2008b). Alterations of the visual pathways in congenital blindness. Exp. Brain Res 187 41–49. 10.1007/s00221-008-1273-4 [DOI] [PubMed] [Google Scholar]
- Rauschecker J. P. (1995). Compensatory plasticity and sensory substitution in the cerebral cortex. Trends Neurosci. 18 36–43. 10.1016/0166-2236(95)93948-w [DOI] [PubMed] [Google Scholar]
- Reich L., Szwed M., Cohen L., Amedi A. (2011). A ventral visual stream reading center independent of visual experience. Curr. Biol. 21 363–368. 10.1016/j.cub.2011.01.040 [DOI] [PubMed] [Google Scholar]
- Renier L., De Volder A. G. (2005). Cognitive and brain mechanisms in sensory substitution of vision: a contribution to the study of human perception. J. Integr. Neurosci. 4 489–503. 10.1142/s0219635205000999 [DOI] [PubMed] [Google Scholar]
- Richardson M., Esenkaya T., Petrini K., Proulx M. J. (2020). “Reading with the tongue: perceiving ambiguous stimuli with the BrainPort,” in Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, (New York, NY: Association for Computing Machinery; ), 1–10. [Google Scholar]
- Richardson M., Thar J., Alvarez J., Borchers J., Ward J., Hamilton-Fletcher G. (2019). How much spatial information is lost in the sensory substitution process? Comparing visual, tactile, and auditory approaches. Perception 48 1079–1103. 10.1177/0301006619873194 [DOI] [PubMed] [Google Scholar]
- Rieser J. J., Lockman J. J., Pick H. L. (1980). The role of visual experience in knowledge of spatial layout. Percept. Psychophys. 28 185–190. 10.3758/bf03204374 [DOI] [PubMed] [Google Scholar]
- Röder B., Föcker J., Hötting K., Spence C. (2008). Spatial coordinate systems for tactile spatial attention depend on developmental vision: evidence from event-related potentials in sighted and congenitally blind adult humans. Eur. J. Neurosci. 28 475–483. 10.1111/j.1460-9568.2008.06352.x [DOI] [PubMed] [Google Scholar]
- Röder B., Teder-Sälejärvi W., Sterr A., Rösler F., Hillyard S. A., Neville H. J. (1999). Improved auditory spatial tuning in blind humans. Nature 400 162–166. 10.1038/22106 [DOI] [PubMed] [Google Scholar]
- Rodriguez P. F. (2010). Human navigation that requires calculating heading vectors recruits parietal cortex in a virtual and visually sparse water maze task in fMRI. Behav. Neurosci. 124 532–540. 10.1037/a0020231 [DOI] [PubMed] [Google Scholar]
- Rolls E. T., Kesner R. P. (2006). A computational theory of hippocampal function, and empirical tests of the theory. Prog. Neurobiol. 79 1–48. 10.1016/j.pneurobio.2006.04.005 [DOI] [PubMed] [Google Scholar]
- Rombaux P., Huart C., De Volder A. G., Cuevas I., Renier L., Duprez T., et al. (2010). Increased olfactory bulb volume and olfactory function in early blind subjects. Neuroreport 21 1069–1073. 10.1097/wnr.0b013e32833fcb8a [DOI] [PubMed] [Google Scholar]
- Rondi-Reig L., Paradis A. L., Lefort J. M., Babayan B. M., Tobin C. (2014). How the cerebellum may monitor sensory information for spatial representation. Front. Syst. Neurosci. 8:205. 10.3389/fnsys.2014.00205 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Sadato N., Okada T., Honda M., Yonekura Y. (2002). Critical period for cross-modal plasticity in blind humans: a functional MRI study. Neuroimage 16 389–400. 10.1006/nimg.2002.1111 [DOI] [PubMed] [Google Scholar]
- Saenz M., Lewis L. B., Huth A. G., Fine I., Koch C. (2008). Visual motion area MT+/V5 responds to auditory motion in human sight-recovery subjects. J. Neurosci. 28 5141–5148. 10.1523/jneurosci.0803-08.2008 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Saito K., Watanabe S. (2006). Spatial memory activation of the parietal cortex measured with near-infrared spectroscopic imaging in the finger-maze of the Morris water maze analogue for humans. Rev. Neurosci. 17 227–238. [DOI] [PubMed] [Google Scholar]
- Sandstrom N. J., Kaufman J., Huettel S. (1998). Males and females use different distal cues in a virtual environment navigation task. Cogn. Brain Res. 6 351–360. 10.1016/s0926-6410(98)00002-0 [DOI] [PubMed] [Google Scholar]
- Saucier D., Bowman M., Elias L. (2003). Sex differences in the effect of articulatory or spatial dual-task interference during navigation. Brain Cogn. 53 346–350. 10.1016/s0278-2626(03)00140-4 [DOI] [PubMed] [Google Scholar]
- Save E., Guazzelli A., Poucet B. (2001). Dissociation of the effects of bilateral lesions of the dorsal hippocampus and parietal cortex on path integration in the rat. Behav. Neurosci. 115:1212. 10.1037/0735-7044.115.6.1212 [DOI] [PubMed] [Google Scholar]
- Save E., Nerad L., Poucet B. (2000). Contribution of multiple sensory information to place field stability in hippocampal place cells. Hippocampus 10 64–76. [DOI] [PubMed] [Google Scholar]
- Schinazi V. R., Nardi D., Newcombe N. S., Shipley T. F., Epstein R. A. (2013). Hippocampal size predicts rapid learning of a cognitive map in humans. Hippocampus 23 515–528. 10.1002/hipo.22111 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Schinazi V. R., Thrash T., Chebat D. R. (2016). Spatial navigation by congenitally blind individuals. Wiley Interdiscip. Rev. Cogn. Sci. 7 37–58. 10.1002/wcs.1375 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Seemungal B. M., Rizzo V., Gresty M. A., Rothwell J. C., Bronstein A. M. (2008). Posterior parietal rTMS disrupts human path integration during a vestibular navigation task. Neurosci. Lett. 437 88–92. 10.1016/j.neulet.2008.03.067 [DOI] [PubMed] [Google Scholar]
- Shelton A. L., Gabrieli J. D. E. (2004). Neural correlates of individual differences in spatial learning strategies. Neuropsychology 18:442. 10.1037/0894-4105.18.3.442 [DOI] [PubMed] [Google Scholar]
- Sholl M. J. (1996). “From visual information to cognitive maps,” in The Construction of Cognitive Maps (Dordrecht: Springer; ), 157–186. [Google Scholar]
- Shore D. I., Stanford L., MacInnes W. J., Brown R. E., Klein R. M. (2001). Of mice and men: virtual Hebb—Williams mazes permit comparison of spatial learning across species. Cogn. Affect. Behav. Neurosci. 1 83–89. 10.3758/cabn.1.1.83 [DOI] [PubMed] [Google Scholar]
- Shoval S., Borenstein J., Koren Y. (1998). Auditory guidance with the NavBelt - a computerized travel aid for the blind. IEEE Trans. Syst. Man, Cybern. Part C Appl. Rev. 28 459–467. 10.1109/5326.704589 [DOI] [PubMed] [Google Scholar]
- Siegle J. H., Warren W. H. (2010). Distal attribution and distance perception in sensory substitution. Perception 39 208–223. 10.1068/p6366 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Sigalov N., Maidenbaum S., Amedi A. (2016). Reading in the dark: neural correlates and cross-modal plasticity for learning to read entire words without visual experience. Neuropsychologia 83 149–160. 10.1016/j.neuropsychologia.2015.11.009 [DOI] [PubMed] [Google Scholar]
- Singh A. K., Phillips F., Merabet L. B., Sinha P. (2018). Why does the cortex reorganize after sensory loss? Trends Cogn. Sci. 22 569–582. 10.1016/j.tics.2018.04.004 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Siu A. F., Sinclair M., Kovacs R., Ofek E., Holz C., Cutrell E. (2020). “Virtual reality without vision: a haptic and auditory white cane to navigate complex virtual worlds,” in Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, (New York, NY: Association for Computing Machinery; ), 1–13. [Google Scholar]
- Smith D. M., Mizumori S. J. Y. (2006). Hippocampal place cells, context, and episodic memory. Hippocampus 16 716–729. 10.1002/hipo.20208 [DOI] [PubMed] [Google Scholar]
- Spiers H. J., Maguire E. A. (2006). Thoughts, behaviour, and brain dynamics during navigation in the real world. Neuroimage 31 1826–1840. 10.1016/j.neuroimage.2006.01.037 [DOI] [PubMed] [Google Scholar]
- Steven M. S., Hansen P. C., Blakemore C. (2006). Activation of color-selective areas of the visual cortex in a blind synesthete. Cortex 42 304–308. 10.1016/s0010-9452(08)70356-3 [DOI] [PubMed] [Google Scholar]
- Stoll C., Palluel-Germain R., Fristot V., Pellerin D., Alleysson D., Graff C. (2015). Navigating from a depth image converted into sound. Appl. Bionics Biomech. 2015:543492. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Strelow E. R. (1985). What is needed for a theory of mobility: direct perceptions and cognitive maps—lessons from the blind. Psychol. Rev. 92 226–248. 10.1037/0033-295x.92.2.226 [DOI] [PubMed] [Google Scholar]
- Strelow E. R., Brabyn J. A. (1982). Locomotion of the blind controlled by natural sound cues. Perception 11 635–640. 10.1068/p110635 [DOI] [PubMed] [Google Scholar]
- Strelow E. R., Warren D. H. (1985). “Sensory substitution in blind children and neonates,” in Electronic Spatial Sensing for the Blind, eds Warren D. H., Strelow E. R., (Dordrecht: Springer; ), 273–298. 10.1007/978-94-017-1400-6_18 [DOI] [Google Scholar]
- Striem-Amit E., Amedi A. (2014). Visual cortex extrastriate body-selective area activation in congenitally blind people “seeing” by using sounds. Curr. Biol. 24 687–692. 10.1016/j.cub.2014.02.010 [DOI] [PubMed] [Google Scholar]
- Striem-Amit E., Cohen L., Dehaene S., Amedi A. (2012a). Reading with sounds: sensory substitution selectively activates the visual word form area in the blind. Neuron 76 640–652. 10.1016/j.neuron.2012.08.026 [DOI] [PubMed] [Google Scholar]
- Striem-Amit E., Dakwar O., Reich L., Amedi A. (2012b). The large-scale organization of “visual” streams emerges without visual experience. Cereb. Cortex 22 1698–1709. 10.1093/cercor/bhr253 [DOI] [PubMed] [Google Scholar]
- Supa M., Cotzin M., Dallenbach K. M. (1944). “Facial vision”: the perception of obstacles by the blind. Am. J. Psychol. 57 133–183. [Google Scholar]
- Taube J. S., Muller R. U., Ranck J. B. (1990). Head-direction cells recorded from the postsubiculum in freely moving rats. I. Description and quantitative analysis. J. Neurosci. 10 420–435. 10.1523/jneurosci.10-02-00420.1990 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Teng S., Puri A., Whitney D. (2012). Ultrafine spatial acuity of blind expert human echolocators. Exp. Brain Res. 216 483–488. 10.1007/s00221-011-2951-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Thaler L., Arnott S. R., Goodale M. A. (2011). Neural correlates of natural human echolocation in early and late blind echolocation experts. PLoS One 6:e20162. 10.1371/journal.pone.0020162 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Thinus-Blanc C., Gaunet F. (1997). Representation of space in blind persons: vision as a spatial sense? Psychol. Bull. 121 20–42. 10.1037/0033-2909.121.1.20 [DOI] [PubMed] [Google Scholar]
- Tomaiuolo F., Campana S., Collins D. L., Fonov V. S., Ricciardi E., Sartori G., et al. (2014). Morphometric changes of the corpus callosum in congenital blindness. PLoS One 9:e107871. 10.1371/journal.pone.0107871 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Tommerdahl M., Favorov O. V., Whitsel B. L. (2010). Dynamic representations of the somatosensory cortex. Neurosci. Biobehav. Rev. 34 160–170. 10.1016/j.neubiorev.2009.08.009 [DOI] [PubMed] [Google Scholar]
- Topalidis P., Zinchenko A., Gädeke J. C., Föcker J. (2020). The role of spatial selective attention in the processing of affective prosodies in congenitally blind adults: an ERP study. Brain Res. 1739:146819. 10.1016/j.brainres.2020.146819 [DOI] [PubMed] [Google Scholar]
- Tosoni A., Galati G., Romani G. L., Corbetta M. (2008). Sensory-motor mechanisms in human parietal cortex underlie arbitrary visual decisions. Nat. Neurosci. 11:1446. 10.1038/nn.2221 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ulanovsky N., Moss C. F. (2008). What the bat’s voice tells the bat’s brain. Proc. Natl. Acad. Sci. U.S.A. 105 8491–8498. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ungar S., Blades M., Spencer C. (1995). Mental rotation of a tactile layout by young visually impaired children. Perception 24 891–900. 10.1068/p240891 [DOI] [PubMed] [Google Scholar]
- Vallar G., Calzolari E. (2018). “Unilateral spatial neglect after posterior parietal damage,” in Handbook of Clinical Neurology, (Amsterdam: Elsevier; ), 287–312. 10.1016/b978-0-444-63622-5.00014-0 [DOI] [PubMed] [Google Scholar]
- Vecchi T., Tinti C., Cornoldi C. (2004). Spatial memory and integration processes in congenital blindness. Neuroreport 15 2787–2790. [PubMed] [Google Scholar]
- von Senden M. (1932). Die Raumauffassung bei Blindgeborenen vor und nach ihrer Operation. Ann Arbor, MI: NA. [Google Scholar]
- Voss P., Lassonde M., Gougoux F., Fortin M., Guillemot J. P., Lepore F. (2004). Early- and late-onset blind individuals show supra-normal auditory abilities in far-space. Curr. Biol. 14 1734–1738. 10.1016/j.cub.2004.09.051 [DOI] [PubMed] [Google Scholar]
- Ward J., Wright T. (2014). Sensory substitution as an artificially acquired synaesthesia. Neurosci. Biobehav. Rev. 41 26–35. 10.1016/j.neubiorev.2012.07.007 [DOI] [PubMed] [Google Scholar]
- Weniger G., Ruhleder M., Wolf S., Lange C., Irle E. (2009). Egocentric memory impaired and allocentric memory intact as assessed by virtual reality in subjects with unilateral parietal cortex lesions. Neuropsychologia 47 59–69. 10.1016/j.neuropsychologia.2008.08.018 [DOI] [PubMed] [Google Scholar]
- White B. W., Saunders F. A., Scadden L., Bach-y-Rita P., Collins C. C. (1970). Seeing with the skin. Percept. Psychophys. 7 23–27. [Google Scholar]
- Whitlock J. R., Sutherland R. J., Witter M. P., Moser M. B., Moser E. I. (2008). Navigating from hippocampus to parietal cortex. Proc. Natl. Acad. Sci. U.S.A 105 14755–14762. 10.1073/pnas.0804216105 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Wiener S. I. (1993). Spatial and behavioral correlates of striatal neurons in rats performing a self-initiated navigation task. J. Neurosci. 13 3802–3817. 10.1523/jneurosci.13-09-03802.1993 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Wolbers T., Hegarty M. (2010). What determines our navigational abilities? Trends Cogn. Sci. 14 138–146. 10.1016/j.tics.2010.01.001 [DOI] [PubMed] [Google Scholar]
- Yang C., Wu S., Lu W., Bai Y., Gao H. (2014). Anatomic differences in early blindness: a deformation-based morphometry MRI study. J. Neuroimaging 24 68–73. 10.1111/j.1552-6569.2011.00686.x [DOI] [PubMed] [Google Scholar]
- Yazzolino L. A., Connors E. C., Hirsch G. V., Sánchez J., Merabet L. B. (2019). “Developing virtual environments for learning and enhancing skills for the blind: incorporating user-centered and neuroscience based approaches,” in Virtual Reality for Psychological and Neurocognitive Interventions, eds Rizzo A. S., Bouchard S., (New York, NY: Springer; ), 361–385. 10.1007/978-1-4939-9482-3_16 [DOI] [Google Scholar]
- Zaehle T., Jordan K., Wüstenberg T., Baudewig J., Dechent P., Mast F. W. (2007). The neural basis of the egocentric and allocentric spatial frame of reference. Brain Res. 1137 92–103. 10.1016/j.brainres.2006.12.044 [DOI] [PubMed] [Google Scholar]
- Zwiers M. P., Van Opstal A. J., Cruysberg J. R. M. (2001). A spatial hearing deficit in early-blind humans. J. Neurosci. 21 RC142–RC145. [DOI] [PMC free article] [PubMed] [Google Scholar]