PLOS One. 2021 Feb 26;16(2):e0247448. doi: 10.1371/journal.pone.0247448

Spatial navigation with horizontally spatialized sounds in early and late blind individuals

Samuel Paré 1,#, Maxime Bleau 1,#, Ismaël Djerourou 1,#, Vincent Malotaux 2,‡, Ron Kupers 1,2,3,‡, Maurice Ptito 1,3,‡,*
Editor: Thomas A Stoffregen
PMCID: PMC7909643  PMID: 33635892

Abstract

Blind individuals often report difficulties navigating and detecting objects placed outside their peri-personal space. Although classical sensory substitution devices could be helpful in this respect, these devices often deliver a complex signal that requires intensive training to analyze. New devices that provide a less complex output signal are therefore needed. Here, we evaluate a smartphone-based sensory substitution device that offers navigation guidance based on strictly spatial cues in the form of horizontally spatialized sounds. The system uses multiple sensors either to detect obstacles at a distance directly in front of the user or to create a 3D map of the environment (detection and avoidance mode, respectively), and informs the user with auditory feedback. We tested 12 early blind, 11 late blind and 24 blindfolded sighted participants for their ability to detect obstacles and to navigate an obstacle course. The three groups did not differ in the number of objects detected and avoided. However, early blind and late blind participants were faster than their sighted counterparts at navigating through the obstacle course. These results are consistent with previous research on sensory substitution showing that vision can be replaced by other senses to improve performance in a wide variety of tasks in blind individuals. This study offers new evidence that sensory substitution devices based on horizontally spatialized sounds can be used as a navigation tool with a minimal amount of training.

Introduction

Vision is the dominant sense for spatial navigation and mobility in humans; it is constantly used for movement guidance, route planning and orientation [1–3]. Visually impaired individuals therefore face several challenges when navigating, such as disorientation and detecting and avoiding obstacles [4]. The long cane and the guide dog address some of these issues: ground-based obstacles can be detected by the cane or avoided by the guide dog, but obstacles above waist level remain problematic [5]. Nevertheless, blind individuals lack navigational independence [6]. For this reason, sensory substitution devices (SSDs) [7,8] have been developed to convey visual information through other sensory modalities [9,10] such as audition [11] or touch [12,13].

The vOICe is one of the best-studied visual-to-auditory (VTA) SSDs. The camera of the device scans its field of view from left to right, thereby offering momentary “snapshots” of the environment in the form of sound cues. The vOICe informs the user about the vertical and horizontal position of objects, as well as the brightness of the environment. Vertical position is indicated by the frequency of the sound and horizontal position by its timing within each left-to-right scan, whereas brightness is indicated by differences in the amplitude of the sound oscillations. The vOICe requires the user to analyze multiple spectral cues, with a two-second delay between each scan, to extract important information and to detect and identify objects [11].

The Tongue Display Unit (TDU) is a visual-to-tactile SSD capable of transmitting images to the tongue in the form of electrotactile pulses. The TDU is composed of a tongue array consisting of 400 small circular electrodes arranged in a 20 × 20 matrix, a computer and a webcam. Every time an object enters the visual field of the camera, the visual image is translated into electrotactile pulses that are transmitted to the tongue through the electrode array. Obstacles are thus ‘drawn’ with electrical current on the tongue in real time from the images provided by the camera [14–16].

The Sound of Vision (SoV) is a more recently developed VTA SSD. The SoV provides combined audio and tactile feedback using multiple cameras and depth sensors worn on the forehead, connected to a laptop stowed in a backpack worn by the user. The system informs the user of obstacle positions with vibrations delivered to the abdomen through a haptic belt. The SoV also conveys depth information (the overall 3D shape of objects) by translating all 3D points into binaural sound effects of “popping bubbles” that are modulated in loudness and pitch to encode proximity and elevation, respectively [17].

These devices aim to give the user a “visual-like” experience of the environment by providing information about the whole visual field of the camera. In these cases, the user must analyze all the information given by the SSD to extract what is useful for the task at hand. While studies have shown that it is possible to achieve proficient levels of navigation with these devices in laboratory environments, such performance often requires an intensive training period [15,18–21]. Indeed, learning to use these devices imposes a heavy load on cognitive resources, often creating a feeling of exhaustion [7,22].

There is hence a need for simpler SSDs that provide only the information pertinent to a specific task. To be efficient as navigation aids, SSDs should provide information relevant for navigation by processing the raw visual data, extracting only spatial information (i.e., depth and position) and delivering it in a simple and meaningful format. This would minimize the strain on attentional resources and therefore shorten the user’s reaction time [22]. To deal with some of these shortcomings, Maidenbaum and colleagues developed the EyeCane, a more user-friendly mobility aid that provides “point-to-distance” information. In short, the EyeCane detects tangible obstacles with an infrared sensor, calculates the distance between the device and the detected object, and conveys this single piece of information as vibration: the stronger the vibration, the closer the object [23]. As a result, the user can navigate through obstacles by scanning the environment with the device [21,24,25].

In this pilot study, we tested the Guidance-SSD (GSSD), a newly developed smartphone application that expands on the “point-to-distance” concept introduced with the EyeCane [23]. The application can be installed on a regular smartphone, taking advantage of its cameras and processing capacity. Instead of vibrations, the GSSD conveys information about all tangible obstacles in the environment using horizontally spatialized sounds, with the aim of providing “guidance” to the user. Here, we define horizontally spatialized sounds as the combined auditory cues that allow the localization of objects (regardless of their height) on the horizontal plane. These auditory cues encode two relevant spatial inputs: 1) the object’s distance from the user and 2) its direction. We therefore hypothesized that, when confronted with obstacles in their path of travel, participants using the application would have enough information to be “guided” through obstacles, hence the name GSSD.

To test this hypothesis, we investigated whether participants could detect and avoid obstacles in a life-size obstacle course using this new application to guide their movements. We included early blind (EB), late blind (LB) and blindfolded sighted control (SC) participants in a detection and an avoidance task. We hypothesized that participants would be able to perform both tasks above chance level with a minimal amount of training. Moreover, given the established spatial auditory capacities of the blind [13,26], we expected both blind groups to outperform SC and EB to perform better than LB.

Methodology

Participants and ethics

A total of 12 EB (mean age: 45 ± 10; 5 women), 11 LB (mean age: 40 ± 12; 8 women) and 24 SC (mean age: 40 ± 12; 11 women) participated in the study. Not all participants completed both parts of the study (3 EB, 1 LB and 7 SC did not complete the navigation task, while 3 LB and 1 SC did not complete the detection task, for reasons of time and availability). Participants were recruited from the Institut Nazareth et Louis-Braille (INLB) in Montreal (Canada) and the BRAINlab of the University of Copenhagen (Denmark). Age- and sex-matched control subjects were recruited from the Montreal and Copenhagen areas. All LB participants had acquired blindness at the age of 16 or older. To evaluate the influence of experience-dependent plasticity in the LB group, we calculated the blindness duration index (BDI) according to the formula (age − age at onset of blindness)/age (as described in [21]). The BDI score varies from 0 to 1 and expresses the relative amount of time a person has been blind, with low scores indicating recent onset of blindness and high scores a long duration of blindness. The average BDI was 0.52 ± 0.16 (range: 0.13 to 0.66), and the mean onset of blindness was 21.4 ± 6.6 years. All blind participants were users of the long cane, while three of them mainly used a guide dog. None of the participants had associated neuropathy, hearing loss or any other pathology that could affect navigation performance or mental spatial representation. Demographic data of the blind participants can be found in Table 1. All participants were blindfolded during the experiment. The experimental protocol was approved by the Comité d’éthique de la recherche Clinique de l’Université de Montréal (CERC-19-097-P) and all participants provided written informed consent before the experiment.

Table 1. Blind participants’ characteristics.

Late blind (LB) participants:

| Participant | Age & Sex | Blindness Onset (years) | BDI | Cause | Residual Perception |
| --- | --- | --- | --- | --- | --- |
| LB1 | 55 W | 24 | 0.56 | RP | LP |
| LB2 | 25 M | 17 | 0.32 | RP | - |
| LB3 | 50 M | 17 | 0.66 | A | - |
| LB4 | 44 W | 17 | 0.61 | GL | - |
| LB5 | 56 W | 20 | 0.64 | RC | - |
| LB6 | 47 W | 22 | 0.53 | DR | - |
| LB7 | 44 W | 17 | 0.61 | GL | - |
| LB8 | 56 W | 20 | 0.64 | RC | - |
| LB9 | 47 W | 22 | 0.53 | DR | - |
| LB10 | 38 W | 20 | 0.47 | GL | - |
| LB11 | 46 M | 40 | 0.13 | M | - |

Early blind (EB) participants:

| Participant | Age & Sex | Blindness Onset | Cause | Residual Perception |
| --- | --- | --- | --- | --- |
| EB1 | 56 M | Perinatal | ROP | - |
| EB2 | 49 W | Perinatal | ROP | - |
| EB3 | 44 W | Perinatal | ROP | - |
| EB4 | 31 W | Perinatal | ROP | - |
| EB5 | 18 M | Perinatal | ROP | - |
| EB6 | 49 M | Perinatal | ROP | - |
| EB7 | 33 M | Perinatal | ROP | - |
| EB8 | 49 W | Perinatal | ROP | - |
| EB9 | 34 W | Perinatal | ROP | - |
| EB10 | 46 M | Perinatal | ROP | - |
| EB11 | 49 M | Perinatal | ROP | - |
| EB12 | 28 M | Birth | LA | LP |

Abbreviations: LB, late blind; EB, early blind; BDI, blindness duration index; RP, retinitis pigmentosa; LA, Leber’s amaurosis; M, meningitis; A, accident; GL, glaucoma; RC, retinal cancer; DR, diabetic retinopathy; ROP, retinopathy of prematurity; LP, light perception.
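As a worked illustration of the BDI formula given above, the computation is trivial to express in code (a minimal Python sketch; the function name and input validation are ours):

```python
def blindness_duration_index(age: float, onset_age: float) -> float:
    """Fraction of life spent blind: (age - age at onset) / age."""
    if not 0 <= onset_age <= age:
        raise ValueError("onset age must lie between birth and current age")
    return (age - onset_age) / age

# Example: participant LB3 (50 years old, blind since age 17)
print(round(blindness_duration_index(50, 17), 2))  # -> 0.66, as in Table 1
```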

Apparatus

The navigation system consists of a Lenovo Phab 2 Pro smartphone, bone-conducting headphones and a head mount supporting the smartphone at eye level (see Fig 1A). The smartphone is equipped with a Qualcomm Snapdragon 652 processor, a 4050 mAh battery and 4 GB of RAM, and runs Android 6.0. It has an RGB camera, a depth camera and a motion-tracking fisheye camera. For this experiment, the phone was placed in a custom head mount adjusted so that the cameras were at eye level (see Fig 1D).

Fig 1. The apparatus and the two modes offered by the system.


(A) The custom head mount, the bone-conducting headphones (lower left) and the Lenovo Phab 2 Pro smartphone. (B) In detection mode, auditory feedback is given about a single point in space: the system indicates the presence of a tangible object in a straight line in front of the user (angle: 0 degrees; range: 3 meters). (C) In avoidance mode, the feedback consists of a 3D audio construct of the environment: the system detects everything in the fisheye camera’s field of view and renders the location of every tangible surface in relation to the user (range: 3 meters). Squares represent obstacles; speaker icons illustrate the sound heard by the participant. The more bars there are, the higher the BRR, SFF and SI. An X illustrates the absence of sound. (D) A blindfolded participant using the device to reach an obstacle. The device is head-mounted so that the camera is at eye level, facing the environment in front of the participant.

The GSSD uses horizontally spatialized sounds to convey visual information to the user. Using the phone’s cameras, the system detects tangible objects within a 3-meter radius in front of the user and signals their location in the horizontal plane through sonification. The auditory signal encodes the objects’ polar coordinates with the user’s position as the origin. To convey the angular coordinate, i.e. the objects’ azimuth in relation to the user, the system uses binaural differences, thus creating a possible 360-degree audio feedback. To signal the radial coordinate, i.e. the distance between the object and the user, the system uses a combination of three previously tested sonification strategies [27]: 1) Beep Repetition Rate (BRR), where the interval between beeps correlates positively with distance; 2) Sound Fundamental Frequency (SFF), whereby the sound frequency (pitch, for the user) correlates negatively with distance; and 3) Sound Intensity (SI), which also correlates negatively with distance. Thus, when the user approaches an obstacle, the combination of the three components (the time between beeps shortens while the pitch and intensity rise) allows the user to estimate the distance.

The sonification used in this study relies on synthetic liquid sound effects, following van den Doel [28]. Liquid sounds are prevalent in our environment and easily identified, making them suitable for representing spatial information when simulated [28]. Accordingly, the GSSD uses simulated water-droplet sounds as the individual beeps in the signal and synthetically modulates their properties according to the three sonification strategies (BRR, SFF and SI).
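To make the distance-to-sound mapping concrete, the sketch below shows how an obstacle’s polar coordinates could drive the three strategies. All numeric ranges and the linear interpolation are illustrative assumptions (the paper does not publish the GSSD’s actual constants), and simple interaural level panning stands in for the system’s real binaural rendering:

```python
import math

MAX_RANGE_M = 3.0                 # detection range stated in the text
BEEP_INTERVAL_S = (0.1, 1.0)      # BRR: assumed bounds; longer interval when farther
FUNDAMENTAL_HZ = (300.0, 1200.0)  # SFF: assumed bounds; higher pitch when closer
INTENSITY = (0.2, 1.0)            # SI: assumed bounds; louder when closer

def sonify(distance_m: float, azimuth_deg: float) -> dict:
    """Map one obstacle's (distance, azimuth) to droplet-beep parameters."""
    d = min(max(distance_m, 0.0), MAX_RANGE_M) / MAX_RANGE_M  # 0 = touching, 1 = range limit
    interval = BEEP_INTERVAL_S[0] + d * (BEEP_INTERVAL_S[1] - BEEP_INTERVAL_S[0])
    pitch = FUNDAMENTAL_HZ[1] - d * (FUNDAMENTAL_HZ[1] - FUNDAMENTAL_HZ[0])
    loudness = INTENSITY[1] - d * (INTENSITY[1] - INTENSITY[0])
    # Interaural level difference as a crude stand-in for binaural cues
    # (-90 deg = hard left, +90 deg = hard right).
    pan = math.sin(math.radians(max(-90.0, min(90.0, azimuth_deg))))
    return {"beep_interval_s": round(interval, 3),
            "pitch_hz": round(pitch, 1),
            "gain_left": round(loudness * (1 - pan) / 2, 3),
            "gain_right": round(loudness * (1 + pan) / 2, 3)}

print(sonify(0.5, -30.0))  # near obstacle, slightly left: fast, high-pitched, loud, left-biased
```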

The GSSD software offers two modes, detection and avoidance, both based on horizontally spatialized sounds but used in different ways. The detection mode offers only one sound source, sampling a straight line directly in front of the user (0-degree angle). This mode thus provides “point-to-distance” information, similarly to the EyeCane [23], and should be used in a comparable fashion: the user obtains information about obstacles by scanning (or pointing at) the environment. The avoidance mode, in contrast, uses the cameras’ whole field of view (170 degrees) to convey the location of every tangible object that could be in the user’s path of travel. Each object is represented by one sound source corresponding to its closest point or edge relative to the user. The user can therefore hear multiple objects simultaneously and plan his or her movements to avoid collisions. In avoidance mode, participants could detect a maximum of three obstacles simultaneously (the two walls and one obstacle) because of the design of the corridor. Obstacles behind the user are no longer sonified, to avoid cognitive overload.
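The difference between the two modes can be summarized as a filtering step over the set of detected obstacles. The sketch below is our reading of the description above, not the actual GSSD code; the 2-degree beam width for detection mode is an assumption (the text specifies only the 0-degree line and the 3-meter range):

```python
from typing import List, Tuple

Obstacle = Tuple[float, float]  # (distance_m, azimuth_deg) of the object's nearest point/edge

def detection_mode(obstacles: List[Obstacle], beam_deg: float = 2.0) -> List[Obstacle]:
    """Single sound source: the nearest object on the straight-ahead line,
    EyeCane-style point-to-distance feedback."""
    ahead = [o for o in obstacles if abs(o[1]) <= beam_deg and o[0] <= 3.0]
    return [min(ahead)] if ahead else []

def avoidance_mode(obstacles: List[Obstacle], fov_deg: float = 170.0) -> List[Obstacle]:
    """One sound source per tangible object inside the cameras' field of view;
    objects behind the user are not sonified."""
    return [o for o in obstacles if abs(o[1]) <= fov_deg / 2 and o[0] <= 3.0]

scene = [(1.2, -40.0), (2.5, 0.0), (2.8, 60.0)]  # e.g., a wall, a box ahead, a wall
print(detection_mode(scene))  # [(2.5, 0.0)] -- only the object on the 0-degree line
print(avoidance_mode(scene))  # all three sources, heard simultaneously
```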

Experimental walkway

The life-size obstacle course consisted of a corridor (21 m long, 2.4 m wide) in which six obstacles were placed 3 meters apart along the longitudinal axis but at random positions along the lateral axis. The obstacles were cardboard boxes (L: 0.45 m; W: 0.40 m; H: 1.90 m) to avoid injury on impact.
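A run’s obstacle layout can be reproduced in a few lines of code. This is an illustrative sketch only: the 1.5 m longitudinal offset of the first obstacle and the exact lateral sampling are our assumptions, since the text specifies only the spacing, the corridor size and the random lateral placement:

```python
import random

CORRIDOR_LENGTH_M, CORRIDOR_WIDTH_M = 21.0, 2.4
BOX_WIDTH_M = 0.40                 # obstacle footprint across the corridor
N_OBSTACLES, SPACING_M = 6, 3.0    # 3 m apart along the longitudinal axis

def random_configuration(seed=None):
    """One layout: fixed longitudinal spacing, random lateral box centers
    kept fully inside the corridor walls."""
    rng = random.Random(seed)
    return [(round(1.5 + i * SPACING_M, 2),                              # longitudinal position
             round(rng.uniform(BOX_WIDTH_M / 2,
                               CORRIDOR_WIDTH_M - BOX_WIDTH_M / 2), 2))  # lateral position
            for i in range(N_OBSTACLES)]

print(random_configuration(seed=42))  # six (x, y) obstacle centers in meters
```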

Tasks

Navigation refers to the capacity to move through space, to orient oneself in space and to avoid unexpected obstacles; it therefore encompasses multiple cognitive processes [29,30]. Obstacle avoidance requires not only detecting obstacles, but also judging and memorizing each obstacle’s location, estimating distance, adjusting the path of travel accordingly and returning to the initially planned route in order to reach a destination safely. In other words, a navigation task requires higher cognitive spatial processing than an obstacle detection task [31,32]. We therefore divided the experiment into two tasks: a simple obstacle detection task and a complete navigation task incorporating obstacle detection and avoidance.

In the obstacle detection task, participants used the detection mode to locate, point at (with a laser pointer) and reach obstacles. They were instructed to walk as quickly as possible while making as few errors as possible. The pointing was used to ensure that participants had correctly detected an obstacle and not a wall; pointing at anything other than an obstacle was therefore considered a false alarm. In the navigation task, participants were instructed to cross the walkway as quickly as possible while avoiding any collisions with obstacles or walls, using the GSSD’s avoidance mode.

Each task consisted of six runs (12 runs in total across the two tasks; a run represents a single crossing of the corridor). Participants were not aware of the number of obstacles placed along the hallway, and all obstacles changed place after each run. Every participant walked through the same six configurations for each mode (12 different configurations in total; see Fig 2A). The avoidance and detection tasks were carried out on different days.

Fig 2. Experimental design.


(A) The corridor’s dimensions and examples of obstacle configurations. Obstacles are represented by grey squares. (B) Representation of the corridor, with obstacles placed randomly along the lateral axis and 3 meters apart from each other.

Familiarization

Prior to the tasks, participants were asked to point to a sound source randomly located in their far space, to verify that they could localize sound sources efficiently. This was repeated with the sound source at different positions, distances and angles from the participant until five consecutive correct answers were obtained. The principles of usage of each mode were then explained to the participants, after which they were familiarized with the device. They were encouraged to interact with the device by waving their hands in front of the phone’s cameras while paying attention to the auditory feedback. Participants were then placed in front of a single obstacle and taught how to link the auditory feedback to the distance separating them from the object, how to detect an obstacle in the middle of the corridor, how to differentiate it from the wall, and how to detect an obstacle placed against a wall. To do so, the experimenter guided the participant to the obstacle multiple times. The obstacle was then moved to a different location and the participant had to walk toward it until it was within reach (without touching it), which forced participants to estimate the distance using the auditory cues given by the device. Participants were then allowed to touch the obstacle to associate the auditory feedback with tactile information. Finally, we performed a simulation of the task with three obstacles placed 3 meters apart to assess the participant’s understanding of the task. The familiarization process never lasted more than 30 minutes.

Statistical analysis

Data were analyzed using JASP, an open-source statistical program developed at the University of Amsterdam. Pearson’s correlation was used to evaluate whether performance was influenced by duration of blindness in the LB participants. Two-way ANCOVAs corrected for age and sex were carried out to compare the groups’ average detection and avoidance performances. Since the detection and avoidance performances had a non-Gaussian distribution, these analyses were confirmed with the Kruskal-Wallis intergroup test. We then compared the average crossing time between groups using a two-way ANCOVA corrected for age and sex, which was appropriate because the average crossing time had a Gaussian distribution. The tests were run for both modes of the device. Data are expressed as mean ± SD. P values below 0.05 were considered statistically significant.
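The analyses were run in JASP, but the same pipeline can be sketched in Python with statsmodels and SciPy. The dataset file name and column names below are assumptions chosen for illustration; only the model structure (a group factor with age and sex covariates, a Kruskal-Wallis check, and a Pearson correlation with BDI) follows the text:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import kruskal, pearsonr

# Assumed layout: one row per run with columns
# group ('SC'/'LB'/'EB'), age, sex, detection_pct, crossing_time_s, bdi
df = pd.read_excel("S1_Dataset.xlsx")

# ANCOVA: group effect on detection performance, adjusted for age and sex
model = smf.ols("detection_pct ~ C(group) + age + C(sex)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Non-parametric confirmation, since detection scores are non-Gaussian
print(kruskal(*[g["detection_pct"].values for _, g in df.groupby("group")]))

# Pearson correlation: blindness duration vs. performance in the late blind
lb = df[df["group"] == "LB"]
print(pearsonr(lb["bdi"], lb["detection_pct"]))
```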

Results

Obstacle detection task

All three groups detected more than 70% of the obstacles. Average detection rates for SC, LB and EB were 79.8 ± 23.5%, 73.3 ± 23.5% and 78.7 ± 23.9%, respectively (Fig 3A). A two-way ANCOVA corrected for age and sex failed to show a significant group effect for obstacle detection (F(2,259) = 0.329, p = 0.72). However, the ANCOVA revealed a significant effect of age (F(1,259) = 12.557, p < 0.001). There was also no significant group effect on the time needed to finish a run (F(2,259) = 1.152, p = 0.32). Average times to finish a run were 249 ± 97 s, 237 ± 108 s and 259 ± 114 s for SC, LB and EB, respectively (Fig 3B). We did not find a significant correlation between BDI and detection performance in the LB group. The analysis also failed to show an effect of sex on detection performance or crossing time.

Fig 3. Obstacle detection and average crossing time.


Average detection performance (A) and crossing time (B) over the six runs of the task, expressed as the percentage of obstacles correctly detected and in seconds, respectively. SC, sighted control; LB, late blind; EB, early blind.

Navigation task

In the navigation task, avoidance performances for EB, LB and SC were 85.1 ± 16.1%, 92.4 ± 9.9% and 85.9 ± 17.6%, respectively (Fig 4A). The two-way ANCOVA corrected for age and sex failed to show a significant group effect for obstacle avoidance (F(2,193) = 2.847, p = 0.06). However, there was a significant group difference in the time needed to finish a run (F(2,193) = 23.394, p < 0.01). Average times to finish a run were 180 ± 74 s, 132 ± 64 s and 113 ± 41 s for SC, LB and EB, respectively (Fig 4B). Post-hoc t-tests with Bonferroni correction revealed that EB (t(154) = -6.328, p < 0.01) and LB (t(142) = -4.488, p < 0.01) crossed the corridor significantly faster than SC. We did not find a significant correlation between BDI and avoidance performance in the LB group. The analysis also failed to show an effect of sex or age on avoidance performance or crossing time.

Fig 4. Obstacle avoidance and average crossing time.


Average avoidance performance (A) and crossing time (B) over the six runs of the task, expressed as the percentage of obstacles correctly avoided and in seconds, respectively. Significant differences are indicated by asterisks (** = p < 0.01). SC, sighted control; LB, late blind; EB, early blind.

Discussion

In this pilot study, we explored the potential and usability of the GSSD, a newly developed sensory substitution device for the blind. The application was designed to offer a simple visual-to-auditory signal requiring little training, using horizontally spatialized sounds to convey spatial data during navigation. Our data show that the GSSD allows blind and blindfolded sighted participants to detect, reach and avoid obstacles in a laboratory environment. Performance in both the detection and navigation tasks exceeded 70% after a minimal amount (30 minutes) of training. These data suggest that horizontally spatialized sounds conveying strictly the location of obstacles could be sufficient to support navigation around obstacles in the daily travels of those deprived of visual input. However, it is important to note that visually impaired and blind individuals constitute a heterogeneous population that will use and appreciate devices differently depending on the nature and time of onset of their condition. Furthermore, visually impaired individuals will seek tools that are easy to learn and that complement, rather than blur or override, their remaining intact senses and abilities [22].

Intergroup differences on device usage

Detection and avoidance performances

Since blind individuals constantly rely on tactile and auditory cues in their daily travels, we hypothesized that they would outperform SC participants in both the detection and navigation tasks. Contrary to our hypothesis, blind participants did not outperform their sighted counterparts in either task. While these results seem at odds with the literature on blindness and SSDs [19], it is also known that the presumed advantage of EB over SC depends on the task and context [33]. Indeed, the similar performances of EB and SC may be explained by the simplicity of the obstacle course, which used large and easily detectable obstacles. However, the fact that we did not observe group differences in two tasks of different complexity also suggests that the GSSD can be used efficiently by all three groups, irrespective of task demands. The only significant difference in performance was that younger participants, independently of blindness, were better at detecting obstacles than their older peers. This is consistent with what is known about the effect of age on sensory substitution navigation and sound localization [34,35].

Difference in crossing time

Beyond raw performance, it is worth noting that blind participants (EB and LB) achieved the same level of performance in the navigation task while being faster than SC; this was not found in the detection task. This can be explained by how the two modes are used. The detection mode requires scanning the environment using head rotation as well as extension and flexion movements of the neck. Such movements are underdeveloped in EB [19,36] and are known to impair balance, straight-line travel and orientation [36,37]. In the avoidance mode, participants received auditory inputs from multiple sound sources simultaneously, allowing them to plan their route without head movements. Furthermore, it is known that multiple sound sources can improve postural control in LB and EB [38] and that blind individuals are better at localizing sounds in the periphery [39], a necessary skill for using the device in avoidance mode. Indeed, to advance through the obstacle course, participants had to orient themselves so as to push all sounds far enough into the periphery, thereby clearing the space ahead and avoiding collisions with the obstacles. This finding is also consistent with previous behavioral and neurophysiological studies on the use of SSDs. While it has not yet been investigated in LB, it has been demonstrated that EB are more efficient in the use of SSDs [15,21,23] and that the EB brain recruits functional networks used by sighted individuals for navigation, such as the visual dorsal stream and the parahippocampal gyrus, whereas blindfolded SC do not [40].

Alternatively, our results could be explained by the fact that it is unusual for SC to navigate blindfolded, whereas it is the daily experience of the other two groups [41]. Compared with blind individuals, who only benefit from the additional information given by the SSD, being blindfolded (even with the SSD) is an incapacitating and disorienting experience for SC: it eliminates the visual inputs they rely upon for navigation and postural control [37,42]. This could have translated into increased transit times and a more cautious pace for SC when deprived of vision and guided only by new auditory cues. Moreover, since bone-conducting headphones were used, participants retained access to regular auditory cues. It therefore cannot be ruled out that environmental sounds influenced the performance of the blind participants. Indeed, blind individuals are known to use environmental sounds to integrate spatial information and to detect obstacles through passive or active echolocation [43–45]. However, our experiment was conducted in a quiet environment and we verified that participants did not use active echolocation; moreover, passive echolocation relies on environmental sounds [46]. Considering the quiet environment and the absence of significant group differences in detection performance per se, it seems unlikely that passive echolocation is the sole reason for the faster pace of the blind participants. On the contrary, using environmental sounds simultaneously with the GSSD could benefit the user. According to principles of ergonomic design for devices used by the blind, the feedback given by a device should neither interfere with nor undermine the use of other senses and abilities already developed and efficient for safe navigation; in other words, devices should complement the blind’s abilities rather than prevent their use [22]. The purpose of our GSSD is therefore to take advantage of all the information the user has access to, not to restrict it to the input of the device. For these reasons, bone-conducting headphones, which leave the ears free, were chosen for the experiment and for the use of our GSSD.

Sensory substitution devices

The potential and usability of sensory substitution devices for spatial navigation have been amply discussed in recent years. There is a consensus that SSDs can expand the blind’s perception of the environment and give them a “visual-like” experience of space. However, it is difficult to judge which design has the most potential, given the numerous different experimental designs and the general lack of testing in ecological (“real-world”) settings [19].

A study of the TDU in a navigation task showed that both congenitally blind (CB) individuals and sighted controls were able to detect and avoid obstacles in a life-size obstacle course [15]. The authors demonstrated that CB were better than their sighted counterparts at using the TDU to navigate, with performances very similar to ours for obstacle detection but inferior to ours for obstacle avoidance. This could be explained by the complexity of their testing set-up, which included widely different types of obstacles, whereas we presented only identically sized obstacles. Nevertheless, their participants trained for several hours before the experiment [15], while ours trained for only 30 minutes.

A recent study showed that the TDU, and even the vOICe, can help individuals transfer information from a virtual map to a real environment [47]. It has been argued that the vOICe can achieve a higher spatial resolution than the TDU [8]. Nevertheless, since the vOICe scans the field of view only every 2 seconds, it lacks the temporal resolution of other devices such as our GSSD. Consequently, no real navigation and obstacle avoidance testing has been done with the vOICe, since the time delay precludes real-time perception of the surroundings. Several studies have nonetheless demonstrated that, with sufficient training, blind and blindfolded sighted individuals can use the vOICe to locate and identify immobile objects in space [20,48,49]. However, participants in these studies trained intensively for 15 hours [20], 40 hours [49] or even several months [50].

Moreover, the vOICe and the TDU do not give direct information about depth, which is crucial for navigation [11,51]. These devices require the user to move the head to capture the environment within the camera’s visual field and then to analyze the area of the visual field occupied by an object to extract distance information [52], making an already complex signal even more difficult to understand [22]. This is a quasi-automatic process under normal vision, but one that is unknown and laborious to learn for EB individuals. This has been demonstrated by EB individuals being unable to perceive the Ponzo illusion with a visual-to-auditory SSD, while it was perceived by SC and LB [39].

As for the minimalist SSD based on the “point-to-distance” concept, the EyeCane has been examined in many different obstacle avoidance and navigation studies. These studies all demonstrated that, after a minimal amount of training, visually impaired individuals can detect and avoid obstacles [24], perform efficiently in a pathfinding task [23], and transfer information from a virtual maze to a real environment [21]. This suggests that simplifying the feedback can help the blind to navigate by letting them focus on the navigation task rather than spending resources on interpreting a complex signal.

Perhaps the SSD most similar to the GSSD in terms of sonification is the SoV [53]. Its sonification strategy incorporates similar water-droplet or “bubble” sound effects and binaural sound differences as part of its feedback. However, the SoV’s sonification is used to convey spatial information relevant for both navigation and object identification: it renders all points and edges of obstacles so that participants can analyze object shape and volume in space, which requires several hours of training.

Thus, our results suggest that the GSSD helps visually impaired individuals navigate efficiently between obstacles with less training than other previously and currently tested SSDs. Such efficiency is achieved by restricting the transmitted information to the spatial cues relevant to avoiding collisions (distance and direction). Visually impaired individuals place great importance on their safety and expect navigational aids to detect obstacles and give warnings so they can be safely guided toward their destination [54], a goal accomplished with the GSSD. Moreover, an aspect of the GSSD not explored in this experiment is the possibility of modifying important settings to better suit individual needs and tasks. For instance, users can personalize their experience by choosing among a variety of sounds, and can adjust the horizontal and vertical field of view as well as the detection range of the system to deal adequately with narrow, closed environments as well as open ones. We therefore believe that our device can bring better subjective appreciation than other SSDs.

Conclusion

The present study shows the GSSD’s potential for easier, safer and more autonomous navigation for the visually impaired, using horizontally spatialized sounds as the sole feedback. A simple, strictly relevant spatial feedback may prove easy to use and allow faster spatial learning. Indeed, we suggest that the GSSD can minimize the user’s cognitive load in a similar fashion to the EyeCane [23], while giving more numerous and simultaneous cues about the environment. It is also important to note that the GSSD is designed not to replace but to complement the blind’s existing tools and abilities, such as the white cane and echolocation. Future work should focus on the use of the GSSD in more complex and ecological environments.

Supporting information

S1 Dataset

(XLSX)

Acknowledgments

The authors wish to thank Chuck Knowledge and Danny Bernal from SignalGarden for developing the GSSD software.

Data Availability

All relevant data are within the manuscript and its Supporting Information files.

Funding Statement

The study was funded by the Harland Sanders Chair in Visual Science and the Synoptik Foundation in Denmark. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

1. Patla AE. Visual control of human locomotion. Advances in Psychology, vol. 78. Elsevier; 1991. p. 55–97.
2. Patla AE, Prentice SD, Robinson C, Neufeld J. Visual control of locomotion: strategies for changing direction and for going over obstacles. Journal of Experimental Psychology: Human Perception and Performance. 1991;17(3):603. doi: 10.1037//0096-1523.17.3.603
3. McFadyen BJ, Bouyer L, Bent LR, Inglis JT. Visual-vestibular influences on locomotor adjustments for stepping over an obstacle. Experimental Brain Research. 2007;179(2):235–43. doi: 10.1007/s00221-006-0784-0
4. Scott AC, Barlow JM, Guth DA, Bentzen BL, Cunningham CM, Long R. Nonvisual cues for aligning to cross streets. Journal of Visual Impairment & Blindness. 2011;105(10):648–61.
5. Manduchi R, Kurniawan S. Mobility-related accidents experienced by people with visual impairment. AER Journal: Research and Practice in Visual Impairment and Blindness. 2011;4(2):44–54.
6. Giudice NA, Legge GE. Blind navigation and the role of technology. The Engineering Handbook of Smart Technology for Aging, Disability, and Independence. 2008;8:479–500.
7. Maidenbaum S, Abboud S, Amedi A. Sensory substitution: closing the gap between basic research and widespread practical visual rehabilitation. Neurosci Biobehav Rev. 2014;41:3–15. doi: 10.1016/j.neubiorev.2013.11.007
8. Proulx MJ, Ptito M, Amedi A. Multisensory integration, sensory substitution and visual rehabilitation. Neurosci Biobehav Rev. 2014;41:1–2. doi: 10.1016/j.neubiorev.2014.03.004
9. Scheller M, Petrini K, Proulx MJ. Perception and interactive technology. Stevens’ Handbook of Experimental Psychology and Cognitive Neuroscience. 2018;2:1–50.
10. Proulx MJ, Gwinnutt J, Dell’Erba S, Levy-Tzedek S, de Sousa AA, Brown DJ. Other ways of seeing: from behavior to neural mechanisms in the online “visual” control of action with sensory substitution. Restorative Neurology and Neuroscience. 2016;34(1):29–44. doi: 10.3233/RNN-150541
11. Meijer PB. An experimental system for auditory image representations. IEEE Transactions on Biomedical Engineering. 1992;39(2):112–21. doi: 10.1109/10.121642
12. Bach-y-Rita P. Brain Mechanisms in Sensory Substitution. Academic Press; 1972.
13. Kupers R, Ptito M. Compensatory plasticity and cross-modal reorganization following early visual deprivation. Neuroscience & Biobehavioral Reviews. 2014;41:36–52. doi: 10.1016/j.neubiorev.2013.08.001
14. Kupers R, Ptito M, editors. “Seeing” through the tongue: cross-modal plasticity in the congenitally blind. International Congress Series. Elsevier; 2004.
15. Chebat DR, Schneider FC, Kupers R, Ptito M. Navigation with a sensory substitution device in congenitally blind individuals. Neuroreport. 2011;22(7):342–7. doi: 10.1097/WNR.0b013e3283462def
16. Pamir Z, Canoluk MU, Jung J-H, Peli E. Poor resolution at the back of the tongue is the bottleneck for spatial pattern recognition. Scientific Reports. 2020;10(1):1–13. doi: 10.1038/s41598-019-56847-4
17. Caraiman S, Zvoristeanu O, Burlacu A, Herghelegiu P. Stereo vision based sensory substitution for the visually impaired. Sensors. 2019;19(12):2771. doi: 10.3390/s19122771
18. Stoll C, Palluel-Germain R, Fristot V, Pellerin D, Alleysson D, Graff C. Navigating from a depth image converted into sound. Applied Bionics and Biomechanics. 2015;2015. doi: 10.1155/2015/543492
19. Chebat D-R, Harrar V, Kupers R, Maidenbaum S, Amedi A, Ptito M. Sensory substitution and the neural correlates of navigation in blindness. In: Mobility of Visually Impaired People. Springer; 2018. p. 167–200.
20. Auvray M, Hanneton S, O’Regan JK. Learning to perceive with a visuo-auditory substitution system: localisation and object recognition with ‘The Voice’. Perception. 2007;36(3):416–30. doi: 10.1068/p5631
21. Chebat DR, Maidenbaum S, Amedi A. Navigation using sensory substitution in real and virtual mazes. PLoS One. 2015;10(6):e0126307. doi: 10.1371/journal.pone.0126307
22. Elli GV, Benetti S, Collignon O. Is there a future for sensory substitution outside academic laboratories? Multisens Res. 2014;27(5–6):271–91. doi: 10.1163/22134808-00002460
23. Maidenbaum S, Hanassy S, Abboud S, Buchs G, Chebat DR, Levy-Tzedek S, et al. The “EyeCane”, a new electronic travel aid for the blind: technology, behavior & swift learning. Restor Neurol Neurosci. 2014;32(6):813–24. doi: 10.3233/RNN-130351
24. Buchs G, Maidenbaum S, Amedi A. Obstacle identification and avoidance using the ‘EyeCane’: a tactile sensory substitution device for blind individuals. In: International Conference on Human Haptic Sensing and Touch Enabled Computer Applications. Springer; 2014.
25. Buchs G, Simon N, Maidenbaum S, Amedi A. Waist-up protection for blind individuals using the EyeCane as a primary and secondary mobility aid. Restorative Neurology and Neuroscience. 2017;35(2):225–35. doi: 10.3233/RNN-160686
26. Proulx MJ, Brown DJ, Pasqualotto A, Meijer P. Multisensory perceptual learning and sensory substitution. Neuroscience & Biobehavioral Reviews. 2014;41:16–25. doi: 10.1016/j.neubiorev.2012.11.017
27. Bazilinskyy P, van Haarlem W, Quraishi H, Berssenbrugge C, Binda J, de Winter J. Sonifying the location of an object: a comparison of three methods. IFAC-PapersOnLine. 2016;49(19):531–6.
28. van den Doel K. Physically based models for liquid sounds. ACM Transactions on Applied Perception. 2005;2(4):534–46.
29. Long RG, Hill E. Establishing and maintaining orientation for mobility. In: Foundations of Orientation and Mobility. 1997;1.
30. Pissaloux E, Velázquez R. On spatial cognition and mobility strategies. In: Mobility of Visually Impaired People. Springer; 2018. p. 137–66.
31. Kolarik AJ, Scarfe AC, Moore BC, Pardhan S. Blindness enhances auditory obstacle circumvention: assessing echolocation, sensory substitution, and visual-based navigation. PLoS One. 2017;12(4):e0175750. doi: 10.1371/journal.pone.0175750
32. Kolarik AJ, Scarfe AC, Moore BC, Pardhan S. An assessment of auditory-guided locomotion in an obstacle circumvention task. Experimental Brain Research. 2016;234(6):1725–35. doi: 10.1007/s00221-016-4567-y
33. Stronks HC, Nau AC, Ibbotson MR, Barnes N. The role of visual deprivation and experience on the performance of sensory substitution devices. Brain Res. 2015;1624:140–52. doi: 10.1016/j.brainres.2015.06.033
34. Levy-Tzedek S, Maidenbaum S, Amedi A, Lackner J. Aging and sensory substitution in a virtual navigation task. PLoS One. 2016;11(3):e0151593. doi: 10.1371/journal.pone.0151593
35. Dobreva MS, O’Neill WE, Paige GD. Influence of aging on human sound localization. Journal of Neurophysiology. 2011;105(5):2471–86. doi: 10.1152/jn.00951.2010
36. Rosen S, Crawford J. Teaching orientation and mobility to learners with visual, physical, and health impairments. In: Foundations of Orientation and Mobility: Instructional Strategies and Practical Applications. 2010;3:564–623.
37. Schwesig R, Goldich Y, Hahn A, Müller A, Kohen-Raz R, Kluttig A, et al. Postural control in subjects with visual impairment. European Journal of Ophthalmology. 2011;21(3):303–9. doi: 10.5301/EJO.2010.5504
38. Gandemer L, Parseihian G, Kronland-Martinet R, Bourdin C. Spatial cues provided by sound improve postural stabilization: evidence of a spatial auditory map? Frontiers in Neuroscience. 2017;11:357. doi: 10.3389/fnins.2017.00357
39. Gougoux F, Zatorre RJ, Lassonde M, Voss P, Lepore F. A functional neuroimaging study of sound localization: visual cortex activity predicts performance in early-blind individuals. PLoS Biology. 2005;3(2). doi: 10.1371/journal.pbio.0030027
40. Kupers R, Chebat DR, Madsen KH, Paulson OB, Ptito M. Neural correlates of virtual route recognition in congenital blindness. Proc Natl Acad Sci U S A. 2010;107(28):12716–21. doi: 10.1073/pnas.1006199107
41. Ribadi H, Rider RA, Toole T. A comparison of static and dynamic balance in congenitally blind, sighted, and sighted blindfolded adolescents. Adapted Physical Activity Quarterly. 1987;4(3):220–5.
42. Heller MA, Gentaz E. Psychology of Touch and Blindness. Psychology Press; 2013.
43. Kolarik AJ, Cirstea S, Pardhan S, Moore BC. A summary of research investigating echolocation abilities of blind and sighted humans. Hearing Research. 2014;310:60–8. doi: 10.1016/j.heares.2014.01.010
44. Kolarik AJ, Cirstea S, Pardhan S. Evidence for enhanced discrimination of virtual auditory distance among blind listeners using level and direct-to-reverberant cues. Experimental Brain Research. 2013;224(4):623–33. doi: 10.1007/s00221-012-3340-0
45. Röder B, Rösler F. Memory for environmental sounds in sighted, congenitally blind and late blind adults: evidence for cross-modal compensation. International Journal of Psychophysiology. 2003;50(1–2):27–39. doi: 10.1016/s0167-8760(03)00122-3
46. Schenkman BN, Nilsson ME. Human echolocation: blind and sighted persons’ ability to detect sounds recorded in the presence of a reflecting object. Perception. 2010;39(4):483–501. doi: 10.1068/p6473
47. Jicol C, Lloyd-Esenkaya T, Proulx MJ, Lange-Smith S, Scheller M, O’Neill E, et al. Efficiency of sensory substitution devices alone and in combination with self-motion for spatial navigation in sighted and visually impaired. Frontiers in Psychology. 2020;11. doi: 10.3389/fpsyg.2020.01443
48. Brown D, Macpherson T, Ward J. Seeing with sound? Exploring different characteristics of a visual-to-auditory sensory substitution device. Perception. 2011;40(9):1120–35. doi: 10.1068/p6952
49. Amedi A, Stern WM, Camprodon JA, Bermpohl F, Merabet L, Rotman S, et al. Shape conveyed by visual-to-auditory sensory substitution activates the lateral occipital complex. Nat Neurosci. 2007;10(6):687–9. doi: 10.1038/nn1912
50. Ward J, Meijer P. Visual experiences in the blind induced by an auditory sensory substitution device. Consciousness and Cognition. 2010;19(1):492–500. doi: 10.1016/j.concog.2009.10.006
51. Ptito M, Moesgaard SM, Gjedde A, Kupers R. Cross-modal plasticity revealed by electrotactile stimulation of the tongue in the congenitally blind. Brain. 2005;128(Pt 3):606–14. doi: 10.1093/brain/awh380
52. Auvray M, Hanneton S, Lenay C, O’Regan K. There is something out there: distal attribution in sensory substitution, twenty years later. Journal of Integrative Neuroscience. 2005;4(4):505–21. doi: 10.1142/s0219635205001002
53. Caraiman S, Morar A, Owczarek M, Burlacu A, Rzeszotarski D, Botezatu N, et al. Computer vision for the visually impaired: the Sound of Vision system. In: Proceedings of the IEEE International Conference on Computer Vision Workshops; 2017.
54. Alwi SRAW, Ahmad MN. Survey on outdoor navigation system needs for blind people. In: 2013 IEEE Student Conference on Research and Development; 2013.

Decision Letter 0

Thomas A Stoffregen

5 Oct 2020

PONE-D-20-25379

Spatial navigation with horizontally spatialized sounds in early and late blind individuals

PLOS ONE

Dear Dr. Ptito,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please carefully consider the detailed reviews and use them in revising your manuscript.

Please submit your revised manuscript by Nov 19 2020 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: http://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols

We look forward to receiving your revised manuscript.

Kind regards,

Thomas A Stoffregen, PhD

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Thank you for stating the following financial disclosure:

 [The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.].

At this time, please address the following queries:

  1. Please clarify the sources of funding (financial or material support) for your study. List the grants or organizations that supported your study, including funding received from your institution.

  2. State what role the funders took in the study. If the funders had no role in your study, please state: “The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.”

  3. If any authors received a salary from any of your funders, please state which authors and which funders.

  4. If you did not receive any funding for this study, please state: “The authors received no specific funding for this work.”

Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

3. We note that you have indicated that data from this study are available upon request. PLOS only allows data to be available upon request if there are legal or ethical restrictions on sharing data publicly. For more information on unacceptable data access restrictions, please see http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions.

In your revised cover letter, please address the following prompts:

a) If there are ethical or legal restrictions on sharing a de-identified data set, please explain them in detail (e.g., data contain potentially sensitive information, data are owned by a third-party organization, etc.) and who has imposed them (e.g., an ethics committee). Please also provide contact information for a data access committee, ethics committee, or other institutional body to which data requests may be sent.

b) If there are no restrictions, please upload the minimal anonymized data set necessary to replicate your study findings as either Supporting Information files or to a stable, public repository and provide us with the relevant URLs, DOIs, or accession numbers. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories.

We will update your Data Availability statement on your behalf to reflect the information you provide.

4. Please change "female" or "male" to "woman" or "man", as appropriate, when used as a noun.

Additional Editor Comments (if provided):

Two highly qualified Reviewers have offered detailed comments. Based on their input, I am requesting a Major Revision. Both Reviewers point out the need to better review relevant prior work. Each Reviewer also offers specific suggestions about details of the manuscript.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Partly


2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes


3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes


4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes


5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Spatial navigation with horizontally spatialized sounds in early and late blind individuals

Pare, et al.

An experiment tests a novel sensory substitution device (SSD) for the blind which provides horizontal and depth location information for objects via sound. The authors claim that the device is novel by virtue of providing horizontal location using interaural sound cues. Three subject groups were tested: early blind, late blind, and blindfolded sighted subjects on their ability to determine the location of obstructions and to navigate through a set of obstructions. All three groups show some success at both tasks with some differences observed between the groups.

The authors may have designed a unique and promising SSD which may provide more usable information for guidance. Unfortunately, the write-up does not provide enough information about the full range of existing SSDs, so it is impossible for me to evaluate precisely how novel their device is and whether it truly provides more usable information. Have no other SSD devices provided horizontal location information of obstacles using interaural cues? It is impossible to tell the answer to this question based on the cursory description of existing devices. Furthermore, without a closer comparison between performance with this device and that with others, it is impossible to assess the GSSD’s success. It may actually be that the GSSD is easier to use (and learn) and is more effective than what is currently available. However, there is not enough description of the earlier findings (e.g., testing methods and tasks; overall performance; learning curves) for a critical comparison to be made. For these reasons, I must recommend rejection of the current manuscript. However, I hope the authors can rework their paper to address these issues.

More minor comments:

p. 4, line 66 – “guidance-SSD” has not been defined yet; the claim that it can “minimize strain” is therefore meaningless at this point in the paper.

p. 4 – much more discussion is needed of exactly how other sound-based SSDs (e.g., vOICe) code for horizontal position. Are there any existing SSDs that use interaural cues?

p. 4, line 80 – If the main purpose of the project is to determine whether the GSSD might be more effective, then it is unclear exactly why the three subject groups are being tested. Also, given what seems to be this main purpose, the hypotheses listed here aren’t well motivated.

p. 7, line 127 – It is unclear what the basic signal being used is like. Is it a pure tone? And what are the average frequency and intensity values? How were these decisions made? Making an audio example available somewhere online would help readers immensely.

p. 7, line 138 – How are the widths of objects conveyed? Is there some type of interaural information available for this?

Results – It should be made clear what performance values should be expected if the device is successful. These values should allow comparison to results from other SSD research (e.g., from vOICe experiments).

p. 11, line 231 – It should be made clear here that these percentage values are calculated based on the percentage of obstacles not touched by the subjects during the avoidance task.

p. 12, line 252 – This conclusion statement needs some type of context – preferably relative to prior SSD research.

p. 12-13 – The differences between subject groups are really a secondary concern and don’t warrant this much discussion in a short report. The reader is most interested in how this new device fares relative to other SSDs.

Reviewer #2: Overview

The authors have developed a smartphone-based sensory substitution device (“SSD”), which enables visually impaired persons to navigate their routes by auditory feedback. In this paper, the authors work on demonstrating the feasibility of the SSD for three groups: early blind, late blind, and sighted participants.

The paper presents a valuable road map for further discussion and device development on two points. First, the SSD is designed so that bone-conducting headphones allow blind persons to use both their own auditory sense and feedback from the SSD, so that they can detect and avoid non-sounding obstacles placed three meters apart from each other within an experimental corridor. Second, the SSD does not require long-term training to detect obstacles and relate the obstacles to the auditory feedback signals.

However, the paper does not provide the readers with sufficient information on the SSD specifications and the results of navigation experiments. I recommend that the authors add details that logically support their arguments and make the paper more convincing to the readers.

Therefore, I would suggest a major revision.

My specific comments on the paper are as follows.

L44-45: Previous studies must be referred to support this statement.

L49-50: This is about detecting obstacles in the vertical direction, whereas the paper mainly focuses on using horizontal spatial cues. This sentence is not related to the context of the paper.

L67-69: It does not seem that the experimental design, results, and discussion are consistent with the goal of the study. From the experimental results and discussion alone, it is difficult to figure out how effectively the SSD (i.e., horizontally spatialized sound feedback) provides the users with the spatial configuration of the environment.

L75-77: No reason is given for making the concept of a “cognitive map” the basis for meeting the goal mentioned on L78-79. There is no need to refer to a cognitive map, the role of which is not mentioned in the results and discussion of the experiment section.

L81: It does not seem that the experiment is designed to prove that hypothesis. In particular, the authors must give enough data to convince the readers that the developed SSD needs shorter training than conventional devices to perform obstacle detection and avoidance tasks.

L88 (Participants and Ethics): Hearing abilities of the participants should be described.

L115 (Apparatus): For the readers’ better understanding of the new SSD, it is necessary to provide more detailed specifications of the auditory feedback system developed exclusively for this study, in particular:

- Latency of the SSD, from detecting obstacles to outputting horizontal sound feedback,

- Maximum number of obstacles the SSD can respond to and create auditory feedback for, and

- Quantitative evidence that describes how the users can estimate the distance to an obstacle accurately by using the combination of three acoustic components.

L141: For the readers’ better understanding, this should be supported with quantitative data, such as how many objects people can hear and distinguish.

L164 (Tasks): To test the effect of auditory feedback from the SSD on the obstacle detection/avoidance tasks, it is critical to make sure that the participants cannot use other auditory cues available in the experiment corridor, possibly by adding earplugs and/or earmuffs to shut off background noise.

L187 (Familiarization): To show that the new SSD requires shorter training than conventional SSDs, it is necessary to compare the required training time between them.

L218-219: More thorough, detailed discussion is necessary.

L234-235: It is necessary to discuss the possibility that blind participants could use ambient sound available in the experiment location. This would effectively support the result that EB and LB finished a run faster than SC.

L254: Experiment results do not give sufficient data-based evidence to support the point. Quantitative data, such as the comparison with other SSDs in terms of time required for training, is necessary to convince the readers that the new SSD requires little training.

L273-276: It seems that the authors overvalue the data they have obtained from the experiment.

L279-280: This is a fascinating statement. Based on prior studies on the plasticity of brain function in EB, the authors argue that the different durations to complete tasks among the three groups are based on the nervous system. However, this argument does not adequately support the quick task completion of LB. Also, the duration of task accomplishment is not directly related to different nervous-system functions.

L290-293: Here the authors emphasize that SC, who typically rely on vision for higher cognitive spatial processing, may be restricted in this capability when “deprived of vision” during runs. However, no quantitative data are presented from the experiment to support a direct relationship between the short duration of completing navigation tasks and higher cognitive spatial processing. Discussion of this may help the readers better understand why the blind groups and the sighted group differed in time duration.

L295-296: Quantitative data is necessary to present how many sound sources the participants perceived during each run.

L328: The experimental results of this study may be too weak to support this conclusion because there are no control conditions preventing the participants from hearing environmental sound.

L329: To help the readers understand the influence of visual experience on blind persons, this sentence needs clarity. It is unclear to the readers whether or not there is an influence of visual experience.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2021 Feb 26;16(2):e0247448. doi: 10.1371/journal.pone.0247448.r002

Author response to Decision Letter 0


16 Nov 2020

Response to Reviewers’ comments

Reviewer #1

An experiment tests a novel sensory substitution device (SSD) for the blind which provides horizontal and depth location information for objects via sound. The authors claim that the device is novel by virtue of providing horizontal location using interaural sound cues. Three subject groups were tested: early blind, late blind, and blindfolded sighted subjects on their ability to determine the location of obstructions and to navigate through a set of obstructions. All three groups show some success at both tasks with some differences observed between the groups.

The authors may have designed a unique and promising SSD which may provide more usable information for guidance. Unfortunately, the write-up does not provide enough information about the full range of existing SSDs, so it is impossible for me to evaluate precisely how novel their device is, and whether it truly provides more usable information. Have no other SSD devices provided horizontal location information of obstacles using interaural cues? It is impossible to tell the answer to this question based on the cursory description of existing devices. Furthermore, without a closer comparison between performance with this device and that with others, it is impossible to assess the GSSD’s success. It may actually be that the GSSD is easier to use (and learn) and is more effective than what is currently available. However, there is not enough description of the earlier findings (e.g., testing methods and tasks; overall performance; learning curves) for a critical comparison to be made. For these reasons, I must recommend rejection of the current manuscript. However, I hope the authors can rework their paper to address these issues.

We thank the reviewer for this pertinent comment. Our GSSD was designed with the purpose of offering a system that requires less effort and cognitive load from the user. The GSSD gives direct information about positioning and depth, and requires minimal processing from the user. We therefore added a more detailed description of the other existing SSDs to the Introduction (see lines 54 to 77) and the Discussion (pages 18 to 21).

More minor comments:

p. 4, line 66 – “guidance-SSD” has not been defined yet; the claim that it can “minimize strain” is therefore meaningless at this point in the paper.

We changed the paragraph (lines 91-102) to define the need for simpler SSDs and added the example of the EyeCane. We formulated our hypothesis that by giving less but exclusively relevant information, the device provides simple feedback which enhances navigation abilities. Next, we introduced our new application that uses horizontally spatialized sounds to “guide” the user through obstacles, hence the name “Guidance-SSD” (GSSD).

p. 4 – much more discussion is needed of exactly how other sound-based SSDs (e.g., vOICe) code for horizontal position. Are there any existing SSDs that use interaural cues?

As we wrote above, we added rather lengthy descriptions of other SSDs and how they work, thereby focusing on the best studied SSDs for navigation (for a review on the subject, see Chebat et al., 2018). We added the following text to the Introduction:

“The vOICe is one of the best studied visual-to-auditory (VTA) SSD. The camera of the device scans its field of view from left to right, thereby offering momentary “snapshots” of the environment in the form of sound cues. The vOICe informs the user about the vertical and horizontal positioning of objects, as well as brightness of the environment. Vertical and horizontal positioning are indicated by the frequency and the length of the sound, respectively, whereas brightness is indicated by differences in amplitudes of the sound oscillations. The vOICe demands the user to analyze multiple spectral cues, with a two second delay between each scan, to extract important information, detect and identify objects [11].

The Tongue Display Unit (or TDU) is a tactile-to-vision (TTV) SSD capable of transmitting images to the tongue in the form of electrotactile pulses. The TDU is composed of a tongue array consisting of 400 small circular electrodes arranged in a 20x20 matrix, a computer and a webcam. Every time an object enters within the visual field of the camera, the visual image is translated into electrotactile pulses that are transmitted to the tongue through the electrode array. The obstacles are thus ‘drawn’ with electrical current on the tongue in real time from the images provided by the camera [15-17].

The Sound of Vision (SoV) is a more recently developed VTA SSD. The SoV provides combined audio and tactile feedback by using multiple cameras and depth sensors that are worn on the forehead and which are connected to a laptop stowed in a backpack and worn by the user. The system informs the user of obstacles positioning with vibrations to the abdomen through a haptic belt. Then, the SoV conveys depth information (or overall 3D objects’ shape) to the user by translating all 3D points into binaural sound effects of “popping bubbles” that will be modulated in loudness and pitch for proximity and elevation respectively [14].”
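To make the SoV mapping just quoted concrete, it can be summarized as a toy function (a schematic sketch only; the function name, constants, and range are invented for illustration and are not SoV’s actual parameters):

def bubble_cue(distance_m, elevation_deg, max_range_m=4.0):
    """Map one 3D point to a 'popping bubble' cue: loudness encodes
    proximity (closer = louder) and pitch encodes elevation (higher =
    higher pitch), in the spirit of the SoV description above.
    All constants are invented for illustration."""
    loudness = max(0.0, 1.0 - distance_m / max_range_m)  # proximity -> loudness
    pitch_hz = 440.0 * 2.0 ** (elevation_deg / 90.0)     # elevation -> pitch
    return loudness, pitch_hz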

p. 4, line 80 – If the main purpose of the project is to determine whether the GSSD might be more effective, then it is unclear exactly why the three subject groups are being tested. Also, given what seems to be this main purpose, the hypotheses listed here aren’t well motivated.

Age of onset of blindness has an important effect on compensatory neuroplastic changes (for a review, see Kupers and Ptito, Neurosci Biobehav Rev, 2014). Congenitally blind and late-onset blind individuals use different navigational strategies and differ in auditory capacities and cognitive processes. Late and early-onset blindness pose completely different challenges to the individual, which can greatly affect the way they use an SSD. In order to test and fine-tune a novel SSD, it is crucial to evaluate its potential in both early and late-onset blindness.

p. 7, line 127 – It is unclear what the basic signal being used is like. Is it a pure tone? And what are the average frequency and intensity values? How were these decisions made? Making an audio example available somewhere online would help readers immensely.

The audio signal is customizable by the user. We used a raindrop sound since fluid sounds such as water droplets and bubbles are easily identifiable and localizable due to their natural and regular occurrence (van den Doel, ACM Transactions on Applied Perception (TAP), 2005). We added a more precise description of the auditory signal in the revised manuscript (see lines 169 to 174).
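For illustration, feedback of this kind (a localizable cue panned to an obstacle’s horizontal position and attenuated with distance) can be sketched with equal-power stereo panning. This is a minimal sketch under assumed conventions; the function names, the inverse-linear gain law and the 3 m range are illustrative, not the GSSD’s actual signal chain:

import numpy as np

def spatialize(mono, azimuth_deg, distance_m, max_range_m=3.0):
    """Pan a mono cue (e.g., a raindrop sample) to a horizontal angle and
    attenuate it with distance. Equal-power panning: azimuth -90 (full left)
    to +90 (full right) maps to a pan angle between 0 and pi/2."""
    theta = (azimuth_deg + 90.0) / 180.0 * (np.pi / 2.0)
    left, right = np.cos(theta), np.sin(theta)       # equal-power channel gains
    gain = max(0.0, 1.0 - distance_m / max_range_m)  # nearer = louder
    return np.stack([mono * left * gain, mono * right * gain], axis=-1)

# Example: a 0.1 s noise burst standing in for the raindrop sample,
# heard 30 degrees to the right at 1.5 m.
sr = 44100
n = int(0.1 * sr)
burst = np.random.randn(n) * np.hanning(n)
stereo = spatialize(burst, azimuth_deg=30.0, distance_m=1.5)

Played over stereo (or bone-conduction) headphones, the level difference between the two channels conveys the obstacle’s horizontal position, while overall loudness stands in here for the device’s distance cues.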

p. 7, line 138 – How are the widths of objects conveyed? Is there some type of interaural information available for this?

The device does not provide direct width information, because this would constitute additional information that could complicate the signal. Thus, the sound source always conveys information about a single point in space. However, depending on the mode, this information can be used to extract width if needed. In detection mode, the user can estimate the width of an object by scanning it and comparing the points in space where it is first and last detected; the width then follows from the angle swept during the scan together with the heard distances to the object. In avoidance mode, no information about the width of the object can be deduced.
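For concreteness, the width estimate described above reduces to a chord length computed from the two scan directions and their heard distances; the following sketch (illustrative function name, geometry only) applies the law of cosines:

import math

def object_width(angle1_deg, dist1_m, angle2_deg, dist2_m):
    """Estimate an object's width from the scan angles and heard distances
    at which its two edges are first and last detected (law of cosines)."""
    dtheta = math.radians(abs(angle2_deg - angle1_deg))
    return math.sqrt(dist1_m ** 2 + dist2_m ** 2
                     - 2.0 * dist1_m * dist2_m * math.cos(dtheta))

# Edges heard at -10 and +10 degrees, both at 2 m: width ~ 0.69 m.
print(round(object_width(-10, 2.0, 10, 2.0), 2))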

Results – It should be made clear what performance values should be expected if the device is successful. These values should allow comparison to results from other SSD research (e.g., from vOICe experiments).

We have added this now to the manuscript (see lines 110 to 116):

“To test this hypothesis, we investigated if participants could detect and avoid obstacles in a life-size obstacle course using this new application to guide their movements. We included early blind (EB), late blind (LB) and blindfolded sighted control subjects (SC) in a detection and avoidance task. We hypothesized that participants would be able to perform both tasks above chance level with a minimum amount of training. Moreover, given the established spatial auditory capacities of the blind [10, 20], we supposed that both blind groups would outperform SC and that EB would be better than LB.”

p. 11, line 231 – It should be made clear here that these percentage values are calculated based on the percentage of obstacles not touched by the subjects during the avoidance task.

Performance in the avoidance task is calculated as the percentage of obstacles avoided (i.e. not touched by the participant). A performance of 50% would mean that a participant touched or collided with three obstacles out of six in a trial. We also reformulated “performance” to “avoidance performance” in the results section of the navigation task.
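Written out (with an illustrative function name), the metric is simply:

def avoidance_performance(obstacles_touched, obstacles_total=6):
    """Percentage of obstacles avoided (i.e. not touched) in a trial."""
    return 100.0 * (obstacles_total - obstacles_touched) / obstacles_total

# Three of six obstacles touched -> 50% avoidance performance, as stated above.
assert avoidance_performance(3) == 50.0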

p. 12, line 252 – This conclusion statement needs some type of context – preferably relative to prior SSD research.

Indeed, we added context to introduce this statement. Furthermore, comparisons with other SSDs were added to the Discussion to relate to this statement (see pages 18 to 21).

p. 12-13 – The differences between subject groups are really a secondary concern and don’t warrant this much discussion in a short report. The reader is most interested in how this new device fares relative to other SSDs.

We agree with the reviewer and we adapted the Discussion section accordingly. More specifically, we added more details on how the new GSSD compares to existing SSDs in terms of ease of use, training time and navigational performance (see pages 18 to 21). However, we kept a shortened discussion of group differences in performance, since many behavioral differences between EB and LB exist in the literature, especially with respect to auditory capacities and navigational strategies. For instance, EB are better at analyzing spectral and binaural cues than LB, and EB tend to use a more egocentric strategy while LB use a mixture of egocentric and allocentric strategies.

Reviewer #2: Overview

The authors have developed a smartphone-based sensory substitution device (“SSD”), which enables visually impaired persons to navigate their routes by auditory feedback. In this paper, the authors work on demonstrating the feasibility of the SSD for three groups: early blind, late blind, and sighted participants. The paper presents a valuable road map for further discussion and device development on two points. First, the SSD is designed so that bone-conducting headphones allow blind persons to use both their own auditory sense and feedback from the SSD, so that they can detect and avoid non-sounding obstacles placed three meters apart from each other within an experimental corridor. Second, the SSD does not require long-term training to detect obstacles and relate the obstacles to the auditory feedback signals. However, the paper does not provide the readers with sufficient information on the SSD specifications and the results of navigation experiments. I recommend that the authors add details that logically support their arguments and make the paper more convincing to the readers. Therefore, I would suggest a major revision. My specific comments on the paper are as follows.

L44-45: Previous studies must be referred to support this statement.

We added three references that support this statement: Patla AE, Advances in Psychology, 1991; Patla et al., Journal of Experimental Psychology: Human Perception and Performance, 1991; McFadyen et al., Experimental Brain Research, 2007.

L49-50: This is about detecting obstacles placed on the vertical direction, though the paper mainly focuses on using horizontal spatial cues. This sentence is not related to the context of the paper.

We thank the reviewer for this comment. We have clarified that this statement is about identifying the present limitations that blind people face during navigation. We then specified that the GSSD is used to detect all tangible obstacles, regardless of their height (i.e. knee-high obstacles, hanging obstacles, or tall obstacles), and that the auditory feedback allows the user to “localize” the objects on the horizontal plane to adjust his/her path of travel around the obstacle (regardless of its height). By doing so, the GSSD reduces the limitations faced by the user when only using the long cane (line 105).

L67-69: It does not seem that the experimental design, results, and discussion are consistent with the goal of the study. From the experiment results and discussion alone, it is difficult to figure out how effectively the SSD (i.e., horizontally spatialized sounds feedback) provides the users with the spatial configuration environment.

This is a pilot study to determine if the new GSSD allows obstacle circumvention. We provided more information on the device to ease the understanding of its functionalities (see page 9). The results show that participants were able to avoid obstacles and navigate between them efficiently with the GSSD. We agree that the Results and Discussion focused too heavily on intergroup differences rather than on testing this principle. Therefore, we changed the discussion to clarify how the device is used in the different tasks and how it compares to other SSDs (see pages 18 to 21).

The fact that all 3 groups were able to use the device in a proficient manner suggests that it is simple to use in the context of our tasks.

L75-77: No reason is given to make a manipulative concept of “cognitive map” the basis to meet the goal mentioned on L78-79. There is no need for the reference of a cognitive map, the role of which is not mentioned in the result and discussion of the experiment section.

We agree that the concept of “cognitive map” may be irrelevant in this section. Therefore, we reformulated the sentence (lines 107 to 109):

“Therefore, we hypothesize that when confronted with obstacles in their path of travel, participants using the application should have enough information to be “guided” through obstacles, hence the GSSD name.”

L81: It does not seem that the experiment is designed to prove that hypothesis. In particular, the authors must give enough data to convince the readers that the developed SSD needs shorter training than conventional devices to perform obstacle detection and avoidance tasks.

We thank the reviewer for this comment. We adapted the Discussion accordingly by adding more details about how our new GSSD compares to other SSDs in ease of use, training time and navigation task performance (see pages 18 to 21).

L88 (Participants and Ethics): Hearing abilities of the participants should be described.

We did not evaluate the hearing abilities of the participants directly. However, when recruiting the participants we controlled for any associated neuropathy (including hearing disorders) that could influence the participants’ behavior and results. Furthermore, during familiarization, participants were asked to point to a far-located sound source to ensure they were able to localize sound in far space. All participants were successful and precise at pointing to the sound sources. We added this part of the training to the revised manuscript (lines 245 to 249).

L115 (Apparatus): For the readers’ better understanding of the new SSD, it is necessary to provide more detailed specifications of the auditory feedback system developed exclusively for this study, in particular: latency of the SSD, from detecting obstacles to outputting horizontal sound feedback; maximum number of obstacles the SSD can respond to and create auditory feedback for; and quantitative evidence that describes how the users can estimate the distance to an obstacle accurately by using the combination of three acoustic components.

The SSD offers a real-time three-dimensional mapping of the environment with its spatial auditory feedback. The avoidance mode is designed to detect simultaneously every distinguishable object in the camera’s field of view. However, since the obstacle course had an obstacle every 3 meters (which is the maximum range of the device), participants could only detect both walls and one obstacle at the same time. Therefore, only three tangible objects could be detected at once (see lines 186-187). No direct measures of distance estimation were taken. However, the process of obstacle circumvention requires an individual to estimate distance correctly: it requires the individual to keep his/her body far enough from the object to avoid it safely (see Kolarik et al., Experimental Brain Research, 2016). Therefore, in the context of navigation, correct obstacle avoidance is based upon correct distance estimation. We discuss this matter in the revised manuscript (see lines 212-215).

L141: For the readers’ better understanding, this should be supported with quantitative data, such as how many objects people can hear and distinguish.

In this study, participants could detect both walls and an obstacle at once, for a maximum of three objects detected simultaneously in this specific experimental set-up (it would be interesting to test how many sound sources participants can distinguish). This has been added in the text (see lines 186-187).

L164 (Tasks): To test the effect of auditory feedback from the SSD on the obstacle detection/avoidance tasks, it is critical to make sure that the participants cannot use other auditory cues available in the experiment corridor, possibly by adding earplugs and/or earmuffs to shut off background noise.

We discussed the effect of bone-conducting headphones and environmental sounds in the manuscript (see lines 375 to 392). The reason we chose to keep the participants’ ears free is that environmental sounds are an important source of navigational information for the blind. According to principles of ergonomic design for devices used by the blind, the feedback given by a device must not interfere with the use of other senses and abilities that are already developed and efficient for safe navigation. In other words, devices should complement blind users’ abilities rather than prevent them from using those abilities. Therefore, we opted for bone conduction of the sound information. Moreover, our goal was to assess the device under natural conditions in which participants are free to use other auditory feedback. Since our testing environment was quiet, it is unlikely that participants used environmental sound cues.

L187 (Familiarization): To show that the new SSD requires shorter training than conventional SSDs, it is necessary to compare the required training time between them.

See also our response to reviewer 1 for a similar comment. We have added more details about how our new GSSD compares to other SSDs in terms of ease of use, training time and in navigational performance (see pages 18 to 21).

L218-219: More thorough, detailed discussion is necessary.

We have thoroughly restructured the Discussion section and mentioned two studies (Levy-Tzedek et al., PLoS ONE, 2016; Dobreva et al., Journal of Neurophysiology, 2011) that demonstrated the influence of age on the use of an SSD for navigation and on sound localization (see lines 346 to 348).

L234-235: It is necessary to discuss the possibility that blind participants could use ambient sound available in the experiment location. This would effectively support the result that EB and LB finished a run faster than SC.

Thank you for pointing this out. We have now added this to the Discussion (see lines 375 to 392).

L254: Experiment results do not give sufficient data-based evidence to support the point. Quantitative data, such as the comparison with other SSDs in terms of time required for training, is necessary to convince the readers that the new SSD requires little training.

We added to the Discussion more details about how our new GSSD compares to other SSDs in terms of ease of use, training time and in navigation task performance (see pages 18 to 21).

L273-276: It seems that the authors overvalue the data they have obtained from the experiment.

As mentioned above, we have thoroughly restructured the Discussion and reduced the emphasis on group differences (see lines 346 to 348).

L279-280: This is a fascinating statement. Based on prior studies on the plasticity of brain function in EB, the authors argue that the different durations to complete tasks among the three groups are based on the nervous system. However, this argument does not adequately support the quick task completion of LB. Also, the duration of task accomplishment is not directly related to different nervous-system functions.

We have modified the text in accordance with the reviewer’s comment concerning LB and EB (see lines 364-367).

L290-293: Here the authors emphasize that SC, who typically rely on vision for higher cognitive spatial processing, may be restricted in this capability when “deprived of vision” during runs. However, no quantitative data are presented from the experiment to support a direct relationship between the short duration of completing navigation tasks and higher cognitive spatial processing. Discussion of this may help the readers better understand why the blind groups and the sighted group differed in time duration.

We agree with the reviewer and we have adapted this part of the Discussion by focusing on the purported negative effects blindfolding may have on sighted individuals, especially for navigation and postural control. We also added three references (Schwesig et al., European Journal of Ophthalmology, 2011; Ribadi et al., Adapted Physical Activity Quarterly, 1987; Heller et al., Psychology Press, 2013) that support this statement (see lines 369 to 375).

L295-296: Quantitative data is necessary to present how many sound sources the participants perceived during each run.

Since an obstacle was placed every 3 meters (which is the maximum range of the device), participants could only detect both walls and one obstacle at the same time. Therefore, only three tangible objects could be detected at once. Furthermore, since the SSD covers the entire width of the corridor, every participant detected the same number of obstacles and was therefore presented with as many sound sources as the other participants (lines 186-187).

L328: The experimental results of this study may be too weak to support this conclusion because there are no control conditions preventing the participants from hearing environmental sound.

Indeed, since participants had their ears free, we cannot totally exclude the possibility that they could have used other sound sources. We discussed this issue in lines 375 - 392 and we also reformulated the Conclusion (see lines 457 to 465).

L329: To help the readers understand the influence of visual experience on blind persons, this sentence needs clarity. It is unclear to the readers whether or not there is an influence of visual experience.

Thank you for this suggestion. The sentence has been removed.

Attachment

Submitted filename: Response to Reviewers.docx

Decision Letter 1

Thomas A Stoffregen

8 Feb 2021

Spatial navigation with horizontally spatialized sounds in early and late blind individuals

PONE-D-20-25379R1

Dear Dr. Ptito,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Thomas A Stoffregen, PhD

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Acceptance letter

Thomas A Stoffregen

9 Feb 2021

PONE-D-20-25379R1

Spatial navigation with horizontally spatialized sounds in early and late blind individuals

Dear Dr. Ptito:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at plosone@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Thomas A Stoffregen

Academic Editor

PLOS ONE

