Scientific Reports. 2020 Jan 31;10:1621. doi: 10.1038/s41598-020-58503-8

Electro-Haptic Enhancement of Spatial Hearing in Cochlear Implant Users

Mark D Fletcher 1, Robyn O Cunningham 1, Sean R Mills 1
PMCID: PMC6994470  PMID: 32005889

Abstract

Cochlear implants (CIs) have enabled hundreds of thousands of profoundly hearing-impaired people to perceive sounds by electrically stimulating the auditory nerve. However, CI users are often very poor at locating sounds, which leads to impaired sound segregation and threat detection. We provided missing spatial hearing cues through haptic stimulation to augment the electrical CI signal. We found that this “electro-haptic” stimulation dramatically improved sound localisation. Furthermore, participants were able to effectively integrate spatial information transmitted through these two senses, performing better with combined audio and haptic stimulation than with either alone. Our haptic signal was presented to the wrists and could readily be delivered by a low-cost wearable device. This approach could provide a non-invasive means of improving outcomes for the vast majority of CI users who have only one implant, without the expense and risk of a second implantation.

Subject terms: Auditory system, Translational research

Introduction

Cochlear implants (CIs) are neural prostheses that enable profoundly hearing-impaired people to perceive sounds through electrical stimulation of the auditory nerve. The CI is one of the greatest achievements of modern medicine. However, recent decades have not been marked by the huge improvements in CI technology that were seen in the 1980s and 1990s1, and CIs still have significant limitations2–4. One of the primary limitations of CIs is that users often struggle to locate and segregate sounds5. This leads to impaired threat detection and an inability to separate sound sources in complex acoustic scenes, such as schools, cafes, and busy workplaces. In normal-hearing individuals, the origin of a sound is determined by exploiting differences in the intensity and arrival time of sounds between the ears (interaural level and time differences), as well as by the direction-dependent spectral filtering of sounds by the pinnae. CI users have limited access to interaural level difference (ILD) and interaural time difference (ITD) cues, particularly the approximately 95% of users who are implanted in only one ear6. Furthermore, because of the poor spectral resolution of CIs1 and the fact that CI microphones are typically mounted behind the ear, CI users often have severely limited access to the important spatial information usually provided by the pinnae. We propose a new approach for enhancing spatial hearing in CI users by providing missing spatial hearing cues through haptic stimulation of the wrists.

There are several existing approaches for improving spatial hearing in CI users, although each has substantial limitations. For example, preservation of residual low-frequency acoustic hearing after implantation can improve sound localisation in some cases4,7. However, this is only possible for a small proportion of CI users (around 9%8), and residual hearing typically deteriorates more rapidly after implantation9. Localisation can also be improved through the implantation of a second CI in the other ear4,5. However, this approach is expensive, poses a surgical risk, risks vestibular dysfunction and the loss of residual hearing, and limits access to future technologies and therapies. Our approach of using haptics could bring enhanced localisation to the majority of CI candidates who have severely limited localisation ability, without the need for an expensive, invasive surgery to fit a second CI.

Haptic cues for spatial hearing have not previously been used to augment CI listening. However, historically, a small number of studies have looked at whether spatial cues can be provided through haptic stimulation on the upper arms10 or fingertips11–14 of young normal-hearing listeners. In 1955, von Békésy described subjective reports of people being able to learn to locate sounds with the upper arms10, and later studies using the fingertips provided further support for the idea that spatial hearing cues can be transferred through the skin12,13. Furthermore, recent work has shown that haptic stimulation can be used to enhance speech intelligibility in background noise for CI users15–17. Together, this research suggests that haptic stimulation may be able to augment the limited electrical signal from the implant to enhance CI spatial hearing.

In the current study, we investigated whether CI users’ ability to locate speech can be improved by augmenting the electrical signal provided by the implant with a haptic signal (electro-haptic stimulation17). We derived this haptic signal from the audio that would be received by CI or hearing aid microphones behind each ear. The haptic stimulus consisted of the amplitude envelope of the speech taken from bands in the frequency range where the ILD cues are largest (see Methods). The signal from each ear was then remapped to a frequency range where the skin is most sensitive to vibration and delivered to each wrist. This meant that the intensity difference between the wrists corresponded to the intensity difference between the ears. Our signal processing and haptic signal were designed to be readily deliverable by a low-latency wearable device with low power consumption.

We measured localisation ability under three conditions: audio only, combined audio and haptic (Audio-haptic), and haptic only. All conditions were measured before and after a short training regime (lasting around 15 minutes per condition). It was hypothesized that the haptic signal would allow participants to localise stimuli more accurately in the Audio-haptic condition than in the Audio-only condition. After training, it was anticipated that multisensory integration of the audio and haptic cues would occur, resulting in more accurate sound localisation in the Audio-haptic condition than in the Haptic-only condition.

Results

We tested twelve CI users’ ability to localise a speech stimulus in the horizontal plane, before and after a short training regime. Both unilateral CI users (who have a CI in one ear and no device in the other ear) and bimodal users (who have a CI in one ear and a hearing aid in the other ear) were tested in this study, reflecting the variety of implant and hearing aid configurations present in the population. Participants were tested using their everyday CI and hearing aid configuration to maximize ecological validity. Eleven loudspeakers were arranged in an arc around the participants from 75° to the left and right of centre. Participants were instructed to identify which loudspeaker the speech stimulus originated from. Figure 1 illustrates where participants perceived the speech to originate from compared to the true location of the speech stimulus (upper panels), and shows localisation error in each of the three conditions, before and after training (lower panels).

Figure 1.

Haptic stimulation significantly reduces localisation error in cochlear implant users. (A,B) Mean response location vs actual sound source location before and after training (grey line = perfect localisation performance). (C,D) RMS error before and after training (grey bar = chance performance, +/− 95% confidence). Error bars show the standard error of the mean.

We found that haptic stimulation enhanced localisation performance for CI users (F(1.2,12.3) = 25.3, p < 0.001, ηp2 = 0.697). We also found that localisation performance improved between pre- and post-training testing sessions (F(1,11) = 36.5, p < 0.001, ηp2 = 0.768). The interaction between these factors was non-significant (F(1.9,14.6) = 1.0, p = 0.37). We then investigated whether participants were able to utilize the additional spatial hearing cues available through the haptic signal to localise speech more accurately. We found that the root-mean-square (RMS) error was significantly lower in the Audio-haptic condition compared to the Audio-only condition both before training (t(11) = 5.9, p < 0.001, d = 1.69) and after training (t(11) = 4.3, p = 0.005, d = 1.24; all t-test p-values are corrected for multiple comparisons [see Methods]). Before training, RMS error reduced by 17.9°, from 47.2° to 29.3° on average (SE = 3.05). After training, RMS error reduced by 17.2°, from 39.9° to 22.7° on average (SE = 4.0). All participants performed better in the Audio-haptic condition than the Audio-only condition in both sessions (see Fig. 2), with the benefit ranging from a 0.5° (P7; bimodal linked; pre-training) to a 37.7° reduction in RMS error (P8; unilateral; post-training).

Figure 2.

Training improves localisation performance and facilitates multi-modal integration. (A,B) Change in RMS error for each individual for the Audio-haptic and Haptic-only conditions relative to the Audio-only condition in the pre-training session. (C) Change in RMS error for the audio-only condition after training. (D) Performance in the Audio-haptic condition relative to the Haptic-only condition before and after training. Users with unilateral and bimodal device configurations with and without linked devices are indicated by different lines and markers (see legend).

Next, we investigated whether completing a short training regime (lasting around 15 minutes per condition) would allow participants to improve their ability to localise sounds using combined audio and haptic stimulation. Performance in the Audio-haptic condition was found to be significantly better in the post-training session than in the pre-training session (t(11) = 5.8, p < 0.001, d = 1.68). With training, RMS error reduced by 6.6° in the Audio-haptic condition (from 29.3° to 22.7°; SE = 1.13). We also assessed whether completing the training regime allowed participants to integrate information from the audio and haptic stimulation to enhance localisation performance. There was no difference in performance between the Haptic-only and Audio-haptic conditions in the pre-training session (p = 0.566). However, in the post-training session, participants were able to locate sounds more accurately (a 3.1° enhancement) with Audio-haptic stimulation than with only haptic stimulation (t(11) = 2.6, p = 0.048, d = 0.66).

We found that even without audio cues, haptic stimulation could be used to determine spatial location. Localisation performance was better in the Haptic-only condition than in the Audio-only condition, with participants performing with a significantly smaller RMS error both before (30.2° vs 47.2°; t(11) = 6.00, p < 0.001, d = 0.740) and after training (25.9° vs 39.9°; t(11) = 3.89, p = 0.012, d = 1.123). We also observed that most participants were able to improve in the Audio-only condition between sessions, with RMS error reducing from an average of 47.2° to 39.9° (SE = 1.95; t(11) = 3.70, p = 0.012, d = 1.07).

One factor that may have affected performance in the task is the hearing device configurations that participants used. We measured performance in seven unilateral and five bimodal CI users. Two bimodal users were using a ‘linked’ configuration, in which a CI in one ear and a hearing aid in the other ear share audio processing to reduce distortion of spatial hearing cues. We observed that the participants with unilateral configurations had poorer performance with audio cues alone than bimodal users (54.3° and 37.2° respectively before training; t(10) = 4.18, p = 0.008, d = 2.44). Both groups reached a similar level of performance with audio and haptic stimulation combined (22.6° and 23.0° respectively). As such, unilateral users had a greater enhancement in performance when haptic stimulation was combined with audio than bimodal users (see Fig. 2) in both the pre-training (24.6° vs 8.5°; t(10) = 3.99, p = 0.009, d = 2.35) and post-training (24.1° vs 7.5°; t(10) = 2.48, p = 0.034, d = 1.52) sessions. They also had a significantly greater performance enhancement in the Haptic-only condition than bimodal users in both the pre-training (t(10) = 4.54, p = 0.005, d = 2.83) and post-training sessions (t(10) = 2.85, p = 0.034, d = 1.68).

Discussion

The vast majority of CI users are implanted in only one ear and are very poor at locating sounds. In this study, we found that sound localisation accuracy improved substantially when audio and haptic stimulation were provided together (electro-haptic stimulation). Even with no training, adding haptic stimulation reduced the RMS error from 47.2° to 29.3° on average. This performance is similar to the average performance achieved by CI users with implants in both ears (~27°)4,18, or users with a CI in one ear and healthy hearing in the other (~28°)4. After a short training regime, participants’ average RMS error with electro-haptic stimulation was reduced to just 22.7°, which is comparable to the performance of bilateral hearing aid users (~19°)4,19. These results suggest that haptic stimulation can be used to substantially improve localisation for CI users with one implant, without the need for expensive and invasive surgery to fit a second implant.

The size of the improvement given by adding haptic stimulation depended on participants’ hearing device configuration. Participants with a unilateral configuration had poorer localisation with audio only than bimodal users (54.3° and 37.2° respectively, before training), which is consistent with previous studies4 and with the fact that bimodal users are likely to have better access to spatial hearing cues. Despite this difference with audio only, both groups reached a similar level of performance with electro-haptic stimulation (22.6° and 23.0° after training, respectively). Therefore, electro-haptic stimulation appears to give the largest gains in performance for CI users who struggle most with audio alone. Remarkably, four out of seven unilateral participants performed more than 30° better with electro-haptic stimulation than with audio only after training. These large effects are particularly encouraging given that there is no established alternative approach for improving localisation in CI users with a single device.

Importantly, a short training regime allowed participants to effectively combine audio and haptic input. We found that, after training, our participants performed better with electro-haptic stimulation than with either audio only (17.2° better) or haptic stimulation only (3.1° better). In this study, both the audio and haptic signals were speech stimuli consisting of temporally complex amplitude modulations, rather than simpler stimuli such as tones or noises. Recent work has provided strong evidence that correlated temporal properties are important for maximizing multisensory integration, and that temporally complex signals carry an advantage20–24. Therefore, our use of temporally complex stimuli may have facilitated effective integration of audio and haptic signals.

The audio-haptic enhancement in performance observed in the current study may be expected based on previous psychophysical, physiological, and anatomical findings. Psychophysicists have shown both that auditory stimuli can affect the perception of haptic stimuli25–28 and that haptic stimuli can affect the perception of auditory stimuli29. Multisensory interactions have also been shown in the core auditory cortices of ferrets, where substantial populations of neurons that respond to auditory stimulation are modulated by tactile stimulation30. Furthermore, anatomical studies have shown the convergence of somatosensory input at many stages along the ascending auditory pathway, from the cochlear nucleus (the first node in the ascending auditory pathway) to core auditory cortices30–39. Collectively, these studies provide compelling evidence of strong links between audition and touch and offer a neural basis for our finding that information from auditory and haptic stimulation can be effectively combined to improve behavioural performance.

In this study, as in many of the most effective haptic aids40, haptic stimulation was applied to the wrists. The wrist was selected as a practical candidate site for real-world use because wrist-worn devices do not typically impede everyday tasks and are easy to self-fit. Preserving the perceived intensity differences across the wrists is critical for this application, and additional testing is required to establish whether this would be affected by frequent changes in the relative positions of the wrists in everyday life. Encouragingly, researchers who showed that haptic stimulation on one hand modulates haptic intensity perception on the other hand found that this modulation did not depend on the relative hand positions41. However, there is a well-established effect of hand-crossing on temporal order judgement thresholds, with thresholds increasing substantially when the hands are crossed42,43. If required, candidate alternative sites might include the upper arms or upper forearms, which retain much of the convenience of the wrist but reduce the relative motion of the stimulation sites.

In the current study, less than one hour of training was provided. Despite this relatively small amount of training, we observed improvements in performance in all conditions (Audio-haptic, Haptic-only, and Audio-only). Future work should assess how generalizable training is to real-world listening and establish the optimum training regime to maximise audio-haptic performance. In this study, some of the observed performance improvement may have been due to participants learning to use spatial cues relating only to the specific loudspeaker positions used. However, previous work suggests that subjects can become more sensitive to spatial hearing cues with training44, indicating that our improvement in performance may be generalizable beyond the experimental procedure. Previous research has also shown that participants continue to improve their ability to identify speech presented through haptic stimulation after many hours of training45–47. This suggests that long-term training may give further improvement in haptic performance. Finally, haptic stimulation has been shown to support lip-reading after extensive training48, suggesting that long-term training may increase multisensory integration of audio and haptic inputs.

It is important to note that in the current study, performance was assessed under simplified acoustic conditions where participants identified the location of a single speech stimulus. Future work should investigate the benefits of electro-haptic stimulation in more complex acoustic environments, with multiple simultaneous sound sources. In such environments, it may be possible to improve performance through the use of algorithms that magnify spatial hearing cues, aid the segregation of multiple sounds, and reduce background noise49–51.

In this study, we showed that providing spatial information to CI users through haptic stimulation of the wrists substantially improves localisation. Our approach was designed to be easily transferable to a real-world application. The haptic signal was processed using a computationally lightweight algorithm that could be applied in real-time and was delivered at a vibration intensity that could readily be achieved by a low-cost wearable device. This could have an important clinical impact, providing an inexpensive, non-invasive means to dramatically improve spatial hearing in CI users.

Methods

Participants

Twelve CI users (4 male, 8 female; mean age = 52.6 years, range 41 to 63 years) were recruited through the University of Southampton Auditory Implant Service. All participants were native British English speakers, had been implanted at least 6 months prior to the experiment, and had the capacity to give informed consent. Participants completed a screening questionnaire, confirming that they had no medical conditions and were taking no medication that might affect their sense of touch. Table 1 details the characteristics of the participants who took part in the study. Participants were instructed to use their normal hearing set-up and not to adjust their settings during the experiment. The group comprised seven unilateral users (a single implant) and five bimodal users (an implant and a contralateral hearing aid). One participant (P2) was categorized as having some residual hearing, defined here as having unaided thresholds at 250 and 500 Hz that are 65 dB HL or better in both ears.

Table 1.

Summary of participant characteristics. CI = Cochlear implant, HA = Hearing aid.

Participant | Gender | Age | Device Left | Device Right | Years since implantation
1 | M | 59 | CI: Cochlear CP920 | HA: ReSound | 3.2
2 | F | 42 | CI: Cochlear CP920 | None | 4.4
3 | F | 54 | None | CI: MED-EL Rondo | 4.3
4 | F | 50 | HA: Danalogic | CI: Cochlear CP1000 | 5.6
5 | M | 44 | CI: Advanced Bionics Naida Q70 | HA: Phonak [linked] | 1.0
6 | F | 49 | CI: Cochlear CP1000 | HA: Oticon | 1.5
7 | M | 58 | HA: Phonak [linked] | CI: Advanced Bionics Naida Q90 | 0.6
8 | F | 41 | None | CI: Cochlear CP1000 | 10.6
9 | F | 61 | CI: MED-EL Sonnet | None | 2.4
10 | M | 63 | None | CI: Advanced Bionics Q90 | 0.7
11 | F | 58 | CI: Advanced Bionics Naida Q70 | None | 9.1
12 | F | 52 | CI: Advanced Bionics Naida Q70 | None | 11.3

Vibrotactile detection thresholds were measured at the fingertip and wrist at 31.5 Hz and 125 Hz following conditions and criteria specified in ISO 13091-1:200152. One participant (P7) had elevated thresholds at the fingertips of the left and right index fingers at 125 Hz (1.8 and 1.0 ms−2, respectively). All others had vibrotactile detection thresholds within the normal range (<0.4 ms−2 RMS at 31.5 Hz, and <0.7 ms−2 RMS at 125 Hz52). The mean vibrotactile detection threshold at the skin of the wrist at 31.5 Hz was 0.65 ms−2 RMS, and at 125 Hz was 0.75 ms−2 RMS (averaged across left and right wrists; there are no published standards for normal wrist sensitivity).

Stimuli

The speech stimulus consisted of a recording of a female voice saying “Where am I speaking from?”, recorded using a Rode M5 microphone in the small anechoic chamber at the Institute of Sound and Vibration Research (ISVR), UK. This audio file is available at: 10.5258/SOTON/D1206. The speech signal was presented at a level of 65 dB SPL LAeq. The intensity of each presentation was roved randomly by +/− 2.5 dB around 65 dB SPL to prevent participants from learning to locate the speech based on absolute level cues. Each loudspeaker was calibrated at the listening position using a Brüel & Kjær (B&K) G4 type 2250 sound level meter (which was itself calibrated using a B&K type 4231 sound calibrator).
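
As a concrete illustration of the level roving described above, the following MATLAB sketch applies a random gain to a calibrated stimulus vector before presentation. The file name and the assumption that the rove was drawn from a uniform distribution are ours, not details taken from the study.

    % Hedged sketch: rove the presentation level by +/- 2.5 dB around the
    % calibrated 65 dB SPL target (uniform rove and file name assumed).
    speech = audioread('where_am_i_speaking_from.wav');  % hypothetical file name
    roveDb = (rand - 0.5) * 5;            % random offset in the range [-2.5, +2.5] dB
    gain   = 10^(roveDb / 20);            % convert the dB offset to a linear gain
    speechRoved = gain * speech;          % applied before presentation on each trial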

For the haptic signal, head-related transfer functions (HRTFs) were taken from The Oldenburg Hearing Device HRTF Database53 and applied to the speech signal separately for each loudspeaker position used in the experiment. The three-microphone behind-the-ear (“BTE_MultiCh”) HRTFs were used in order to match a typical CI signal. The signal was then downsampled to a sampling frequency of 22,050 Hz. Each channel of this stereo signal was then passed through an FIR filter bank with four frequency channels with center frequencies equally spaced on the ERB scale54. The edges of the bands were between 1,000 and 10,000 Hz, a frequency range that contains the most speech energy55 and large ILDs56. The Hilbert envelope for each frequency channel was calculated and a first-order low-pass filter was applied with a cut-off frequency of 10 Hz to extract the speech envelope. This low-pass filter emphasised the modulation frequency range between around 1 and 30 Hz, which is the most important range for speech intelligibility57. These envelopes were then used to amplitude-modulate four fixed-phase tonal carriers with center frequencies of 50, 110, 170, and 230 Hz (a 60-Hz spacing). This frequency range was selected because the tactile system is highly sensitive to it58, and the carriers would be expected to be individually discriminable based on estimates of vibrotactile frequency difference limens59. The modulated carriers were then summed and presented via the HVLab tactile vibrometer. This signal-processing strategy was similar to that used in Fletcher et al.17. Haptic stimuli were presented at a maximum acceleration magnitude of 1.84 ms−2 RMS (e.g. at the left vibrometer when the signal was presented 75° to the left). The intensity difference between the two shakers directly corresponded to the intensity difference between the ears extracted from the HRTF, with no additional scaling applied.
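
To make the processing chain above concrete, the following MATLAB sketch reproduces its main steps for the signal from one ear. It is an illustrative reconstruction from the description in the text rather than the authors’ script: the file name, the FIR filter order, and the choice to space the band edges (rather than the band center frequencies) evenly on the ERB scale are assumptions.

    % Illustrative reconstruction of the haptic-signal processing chain
    % (assumed details noted above; standard Signal Processing Toolbox calls).
    fs = 22050;                                       % target sampling rate (Hz)
    [x, fsIn] = audioread('speech_hrtf_left.wav');    % hypothetical HRTF-filtered speech, one ear
    x = x(:, 1);                                      % use a single (mono) channel
    x = resample(x, fs, fsIn);                        % downsample to 22.05 kHz

    % Four bands spanning 1-10 kHz, spaced on the ERB-number scale (Glasberg & Moore, 1990)
    erb    = @(f) 21.4 * log10(0.00437 * f + 1);
    erbInv = @(e) (10.^(e / 21.4) - 1) / 0.00437;
    edges  = erbInv(linspace(erb(1000), erb(10000), 5));

    carrierF   = [50 110 170 230];                    % fixed-phase tonal carriers (Hz)
    [bLP, aLP] = butter(1, 10 / (fs/2));              % first-order 10-Hz envelope filter
    t = (0:numel(x)-1)' / fs;
    haptic = zeros(size(x));
    for k = 1:4
        bBand = fir1(256, edges(k:k+1) / (fs/2));     % FIR band-pass filter for band k
        band  = filter(bBand, 1, x);
        env   = filter(bLP, aLP, abs(hilbert(band))); % smoothed Hilbert envelope
        haptic = haptic + env .* sin(2*pi*carrierF(k)*t);  % amplitude-modulate the carrier
    end
    % A single gain, common to both wrists, would then scale the summed signal to the
    % target vibration level, preserving the interaural intensity difference.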

The vibrometers were calibrated using a B&K type 4294 calibration exciter. During piloting of the experiment, waveforms from the shakers were recorded using the PCB Piezotronics ICP 353B43 accelerometers built into the HVLab tactile vibrometers, and visually inspected to ensure that the signals were faithfully reproduced.

Apparatus

Participants were seated in the centre of the ISVR small anechoic chamber. Eleven Genelec 8020 C PM Bi-Amplified Monitor System loudspeakers were positioned in an arc in front of the participant, from −75° to 75°, with 15° spacing between the loudspeakers (see Fig. 3). The speakers were placed 2 m from the centre of the participant’s head, at approximately the same height as their ears in a sitting position (1.16 m). The speakers were labelled L5 through R5, as illustrated in Fig. 3. An acoustically treated 20′′ wide-screen monitor for displaying feedback and giving instructions was positioned on the floor 1 m in front of the participant. Two HVLab tactile vibrometers were placed beside the participant’s chair and were used to deliver the vibrotactile signal to the participants’ wrists (the palmar surface of the distal forearm) via a rigid 10-mm nylon probe with no surround, to maximise the area of skin excitation. All stimuli were controlled using a custom MATLAB script (MATLAB 2018b) via an RME M-32 DA 32-channel digital-to-analogue converter.

Figure 3.

Schematic illustration of the experimental set up. This schematic shows the audio-only condition, where the participant has their hands in their lap rather than their wrists on the shaker contacts. On each trial the audio stimulus was presented through one of the 11 loudspeakers, positioned at points between 75° to the left and 75° to the right of the centre.

During testing, the experimenter sat in a separate control room. The participants’ verbal responses were monitored using a Shure BG 2.1 dynamic microphone placed low behind the participant’s seat, amplified by a Creek OBH-21 headphone amplifier, and played back through a pair of Sennheiser HD 380 Pro headphones. Participants were monitored visually using a Microsoft HD-3000 webcam.

Procedure

The experiment was conducted over two sessions not more than 5 days apart (average number of days = 1.58, SE = 0.38). In session 1, the participant first filled out a health questionnaire16 and had their vibrotactile thresholds measured following conditions and criteria specified in ISO 13091-1:200152. The task was then demonstrated to the participant by presenting the speech stimulus from speakers C (centre), L5 (75° left), and R5 (75° right). This demonstration was repeated for each of three conditions: Audio only, combined audio and haptic stimulation, and haptic stimulation only. At this stage, it was confirmed that the speech stimuli were clearly audible, and participants were given the opportunity to ask any questions.

A testing block was then conducted, lasting around 20–25 minutes. In each trial, the participant was instructed to fixate on the central speaker (marked with a red cross) and to keep their head still. The speech stimulus was presented from one of the 11 loudspeakers, and the participant’s task was to identify which loudspeaker was the source. For each condition, the stimulus was presented from each speaker in a randomised order. This procedure was then repeated four times. Localisation accuracy was calculated as RMS error using the D statistic described by Rakerd and Hartmann60. Chance performance level was estimated using a Monte Carlo simulation with 100,000 samples, assuming unbiased responses.
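
As an illustration of the scoring described above, the MATLAB sketch below computes the RMS localisation error and a Monte Carlo estimate of chance performance. It assumes that the D statistic reduces to the RMS of the response-minus-target angles and that chance responses are drawn uniformly and independently of the target; the chance value reported in Fig. 1 may differ slightly depending on how response bias is handled.

    % Hedged sketch: RMS localisation error and a Monte Carlo chance estimate
    angles = -75:15:75;                                % the 11 loudspeaker azimuths (degrees)
    rmsErr = @(resp, targ) sqrt(mean((resp - targ).^2));

    % Chance level: 100,000 simulated trials with unbiased (uniform) responses
    nSim   = 100000;
    targ   = angles(randi(numel(angles), nSim, 1));
    resp   = angles(randi(numel(angles), nSim, 1));
    chanceRms = rmsErr(resp, targ);                    % roughly 67 degrees under these assumptions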

Responses were made verbally and recorded in the control room by the experimenter, who was blinded to the true source of the stimulus. The participant was monitored via webcam, to ensure that they did not move their head, were using the vibrometers in the haptic stimulation conditions, and were not making contact with the vibrometers in the audio only condition. The vibrometers were near silent, but were left on in all conditions to control for any subtle audio cues.

After a break of at least 15 minutes, the participant completed a training block, which was the same as the testing block except that stimuli were presented in a new randomised order and performance feedback was provided on the screen. The screen displayed an illustration of the speaker array (similar to Fig. 3). If the participant was correct, an illustration of the target speaker lit up green. If the participant was incorrect, an illustration of the chosen speaker lit up red, and the target speaker lit up green. In the second session, the participant completed a further training block, followed by a final testing block.

The experimental protocol was approved by the University of Southampton Ethics Committee (ERGO ID: 46201) and the UK National Health Service Research Ethics Service (Integrated Research Application System ID: 256879). All research was performed in accordance with the relevant guidelines and regulations.

Statistics

Performance was calculated as RMS error from the target location in degrees arc for all trials in each condition within a session60. Primary analysis of performance on the spatial hearing task consisted of a 3 × 2 repeated measures analysis of variance (ANOVA) with factors ‘Condition’ (Audio-only, Audio-haptic, or Haptic-only) and ‘Session’ (before or after training). Mauchly’s test indicated that the assumption of sphericity had been violated (χ2(2) = 15.5, p < 0.001), so degrees of freedom were corrected using Greenhouse-Geisser estimates of sphericity (ε = 0.56). The ANOVA used an alpha level of 0.05. Post-hoc two-tailed t-tests were conducted to investigate these effects. Nine two-tailed paired-samples t-tests (with a Bonferroni-Holm correction for multiple comparisons) were used to investigate performance across the three conditions and two sessions. Five two-tailed independent samples t-tests (also with a Bonferroni-Holm correction) were conducted to analyse differences in performance between the seven unilateral and five bimodal CI users who took part in the study.
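
For reference, the Bonferroni-Holm step-down correction named above can be implemented in a few lines of MATLAB. The p-values below are illustrative only, and the implementation is a sketch of the standard procedure rather than the authors’ analysis code.

    % Minimal sketch of the Bonferroni-Holm (step-down) correction
    p = [0.001 0.020 0.004 0.300 0.048];        % illustrative raw two-tailed p-values
    m = numel(p);
    [pSorted, order] = sort(p);                 % sort ascending
    pAdjSorted = min(cummax(pSorted .* (m:-1:1)), 1);  % step-down factors, monotone, capped at 1
    pAdj = zeros(size(p));
    pAdj(order) = pAdjSorted;                   % adjusted p-values in the original order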

Acknowledgements

Thank you to Professor Carl Verschuur for his support and encouragement and to the dedicated staff at the University of Southampton Auditory Implant Service for their enthusiastic help with recruitment of participants. Thank you to Ian Wiggins for helpful comments on the experimental design, to Joe Sollini and Tobi Goehring for feedback on the manuscript, to Eric Hamdan for technical support, and to Daniel Rowan for lending us crucial equipment. Thank you to Ben Lineton for his advice throughout the project, to Ellis for untiring guidance, and to B.P. Littlecock and A. Bin Afif for helpful comments and manual support. We are extremely grateful to each of the participants who gave up their time to take part in this study. Salary support for author M.D.F was provided by The Oticon Foundation.

Author contributions

M.D.F. designed and constructed the experiment. R.O.C. recruited and ran participants. S.R.M. and M.D.F. wrote the manuscript text. All authors reviewed the manuscript.

Data availability

The dataset and stimuli from the current study are publicly available through the University of Southampton’s Research Data Management Repository at: 10.5258/SOTON/D1206.

Competing interests

The authors declare no competing interests.

Footnotes

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

1. Wilson BS. Getting a decent (but sparse) signal to the brain for users of cochlear implants. Hearing Research. 2015;322:24–38. doi: 10.1016/j.heares.2014.11.009.
2. Spriet A, et al. Speech Understanding in Background Noise with the Two-Microphone Adaptive Beamformer BEAM in the Nucleus Freedom Cochlear Implant System. Ear and Hearing. 2007;28:62–72. doi: 10.1097/01.aud.0000252470.54246.54.
3. McDermott HJ. Music Perception with Cochlear Implants: A Review. Trends in Amplification. 2004;8:49–82. doi: 10.1177/108471380400800203.
4. Dorman MF, Loiselle LH, Cook SJ, Yost WA, Gifford RH. Sound Source Localization by Normal-Hearing Listeners, Hearing-Impaired Listeners and Cochlear Implant Listeners. Audiol. Neurootol. 2016;21:127–131. doi: 10.1159/000444740.
5. Verschuur CA, Lutman ME, Ramsden R, Greenham P, O’Driscoll M. Auditory Localization Abilities in Bilateral Cochlear Implant Recipients. Otology & Neurotology. 2005;26:965. doi: 10.1097/01.mao.0000185073.81070.07.
6. Peters BR, Wyss J, Manrique M. Worldwide trends in bilateral cochlear implantation. The Laryngoscope. 2010;120:17–44. doi: 10.1002/lary.20859.
7. O’Connell BP, Dedmon MM, Haynes DS. Hearing Preservation Cochlear Implantation: a Review of Audiologic Benefits, Surgical Success Rates, and Variables That Impact Success. Curr. Otorhinolaryngol. Rep. 2017;5:286–294.
8. Verschuur C, Hellier W, Teo C. An evaluation of hearing preservation outcomes in routine cochlear implant care: Implications for candidacy. Cochlear Implants International. 2016;17:62–65. doi: 10.1080/14670100.2016.1152007.
9. Wanna GB, et al. Predictive factors for short- and long-term hearing preservation in cochlear implantation with conventional length electrodes. Laryngoscope. 2018;128:482–489. doi: 10.1002/lary.26714.
10. Békésy GV. Human skin perception of traveling waves similar to those on the cochlea. The Journal of the Acoustical Society of America. 1955;27:830–841. doi: 10.1121/1.1908050.
11. Gescheider GA. Some comparisons between touch and hearing. IEEE Transactions on Man-Machine Systems. 1970;11:28–35. doi: 10.1109/TMMS.1970.299958.
12. Frost BJ, Richardson BL. Tactile localization of sounds: Acuity, tracking moving sources, and selective attention. The Journal of the Acoustical Society of America. 1976;59:907–914. doi: 10.1121/1.380950.
13. Richardson BL, Frost BJ. Tactile localization of the direction and distance of sounds. Perception & Psychophysics. 1979;25:336–344. doi: 10.3758/BF03198813.
14. Richardson BL, Wuillemin DB, Saunders FJ. Tactile discrimination of competing sounds. Perception & Psychophysics. 1978;24:546–550. doi: 10.3758/BF03198782.
15. Huang J, Sheffield B, Lin P, Zeng F-G. Electro-Tactile Stimulation Enhances Cochlear Implant Speech Recognition in Noise. Scientific Reports. 2017;7:2196. doi: 10.1038/s41598-017-02429-1.
16. Fletcher MD, Mills SR, Goehring T. Vibro-Tactile Enhancement of Speech Intelligibility in Multi-talker Noise for Simulated Cochlear Implant Listening. Trends in Hearing. 2018;22.
17. Fletcher MD, Hadeedi A, Goehring T, Mills SR. Electro-haptic hearing: Speech-in-noise performance in cochlear implant users is enhanced by tactile stimulation of the wrists. Scientific Reports. 2019;9:11428. doi: 10.1038/s41598-019-47718-z.
18. Aronoff JM, et al. The use of interaural time and level difference cues by bilateral cochlear implant users. The Journal of the Acoustical Society of America. 2010;127:87–92. doi: 10.1121/1.3298451.
19. Dunn CC, Perreau A, Gantz B, Tyler RS. Benefits of Localization and Speech Perception with Multiple Noise Sources in Listeners with a Short-Electrode Cochlear Implant. J. Am. Acad. Audiol. 2010;21:44–51.
20. Ernst MO, Bülthoff HH. Merging the senses into a robust percept. Trends in Cognitive Sciences. 2004;8:162–169. doi: 10.1016/j.tics.2004.02.002.
21. Parise CV, Spence C, Ernst MO. When correlation implies causation in multisensory integration. Current Biology. 2012;22:46–49. doi: 10.1016/j.cub.2011.11.039.
22. Fujisaki W, Nishida S. Temporal frequency characteristics of synchrony–asynchrony discrimination of audio-visual signals. Experimental Brain Research. 2005;166:455–464. doi: 10.1007/s00221-005-2385-8.
23. Burr D, Silva O, Cicchini GM, Banks MS, Morrone MC. Temporal mechanisms of multimodal binding. Proceedings of the Royal Society B: Biological Sciences. 2009;276:1761–1769. doi: 10.1098/rspb.2008.1899.
24. Parise CV, Ernst MO. Correlation detection as a general mechanism for multisensory integration. Nature Communications. 2016;7:11543. doi: 10.1038/ncomms11543.
25. Jousmäki V, Hari R. Parchment-skin illusion: sound-biased touch. Current Biology. 1998;8:190–191. doi: 10.1016/S0960-9822(98)70120-4.
26. Yau JM, Weber AI, Bensmaia S. Separate mechanisms for audio-tactile pitch and loudness interactions. Frontiers in Psychology. 2010;1:1–11. doi: 10.3389/fpsyg.2010.00160.
27. Yau JM, Olenczak JB, Dammann JF, Bensmaia SJ. Temporal Frequency Channels Are Linked across Audition and Touch. Current Biology. 2009;19:561–566. doi: 10.1016/j.cub.2009.02.013.
28. Crommett LE, Pérez-Bellido A, Yau JM. Auditory adaptation improves tactile frequency perception. Journal of Neurophysiology. 2017;117:1352–1362. doi: 10.1152/jn.00783.2016.
29. Gick B, Derrick D. Aero-tactile integration in speech perception. Nature. 2009;462:502. doi: 10.1038/nature08572.
30. Meredith MA, Allman BL. Single-unit analysis of somatosensory processing in the core auditory cortex of hearing ferrets. European Journal of Neuroscience. 2015;41:686–698. doi: 10.1111/ejn.12828.
31. Shore SE, Vass Z, Wys NL, Altschuler RA. Trigeminal ganglion innervates the auditory brainstem. Journal of Comparative Neurology. 2000;419:271–285. doi: 10.1002/(SICI)1096-9861(20000410)419:3<271::AID-CNE1>3.0.CO;2-M.
32. Aitkin LM, Kenyon CE, Philpott P. The representation of the auditory and somatosensory systems in the external nucleus of the cat inferior colliculus. Journal of Comparative Neurology. 1981;196:25–40. doi: 10.1002/cne.901960104.
33. Shore SE, El Kashlan H, Lu J. Effects of trigeminal ganglion stimulation on unit activity of ventral cochlear nucleus neurons. Neuroscience. 2003;119:1085–1101. doi: 10.1016/S0306-4522(03)00207-0.
34. Allman BL, Keniston LP, Meredith MA. Adult deafness induces somatosensory conversion of ferret auditory cortex. Proceedings of the National Academy of Sciences. 2009;106:5925–5930. doi: 10.1073/pnas.0809483106.
35. Meredith MA, Keniston LP, Allman BL. Multisensory dysfunction accompanies crossmodal plasticity following adult hearing impairment. Neuroscience. 2012;214:136–148. doi: 10.1016/j.neuroscience.2012.04.001.
36. Kanold PO, Davis KA, Young ED. Somatosensory context alters auditory responses in the cochlear nucleus. Journal of Neurophysiology. 2010;105:1063–1070. doi: 10.1152/jn.00807.2010.
37. Foxe JJ, et al. Multisensory auditory–somatosensory interactions in early cortical processing revealed by high-density electrical mapping. Cognitive Brain Research. 2000;10:77–83. doi: 10.1016/S0926-6410(00)00024-0.
38. Gobbelé R, et al. Activation of the human posterior parietal and temporoparietal cortices during audiotactile interaction. NeuroImage. 2003;20:503–511. doi: 10.1016/S1053-8119(03)00312-4.
39. Caetano G, Jousmäki V. Evidence of vibrotactile input to human auditory cortex. NeuroImage. 2006;29:15–28. doi: 10.1016/j.neuroimage.2005.07.023.
40. Thornton RD, Phillips AJ. A comparative trial of four vibrotactile aids. In: Summers IR, editor. Tactile Aids for the Hearing Impaired. London: Whurr; 1992. pp. 231–251.
41. Rahman S, Yau JM. Somatosensory interactions reveal feature-dependent computations. J. Neurophysiol. 2019;122:5–21. doi: 10.1152/jn.00168.2019.
42. Yamamoto S, Kitazawa S. Reversal of subjective temporal order due to arm crossing. Nature Neuroscience. 2001;4:759–765. doi: 10.1038/89559.
43. Shore D, Spry E, Spence C. Confusing the mind by crossing the hands. Cognitive Brain Research. 2002;14:153–163. doi: 10.1016/S0926-6410(02)00070-8.
44. Wright BA, Fitzgerald MB. Different patterns of human discrimination learning for two interaural cues to sound-source location. Proceedings of the National Academy of Sciences. 2001;98:12307–12312. doi: 10.1073/pnas.211220498.
45. Weisenberger JM. Sensitivity to amplitude-modulated vibrotactile signals. The Journal of the Acoustical Society of America. 1986;80:1707–1715. doi: 10.1121/1.394283.
46. Saunders F, Hill W, Simpson C. Speech perception via the tactile mode: Progress report. In: ICASSP ’76, IEEE International Conference on Acoustics, Speech, and Signal Processing. 1976;1:594–597.
47. Sparks DW, Kuhl PK, Edmonds AE, Gray GP. Investigating the MESA (Multipoint Electrotactile Speech Aid): The transmission of segmental features of speech. The Journal of the Acoustical Society of America. 1978;63:246–257. doi: 10.1121/1.381720.
48. Kishon-Rabin L, Boothroyd A, Hanin L. Speechreading enhancement: A comparison of spatial-tactile display of voice fundamental frequency (F0) with auditory F0. The Journal of the Acoustical Society of America. 1996;100:593–602. doi: 10.1121/1.415885.
49. Moore BCJ, Kolarik A, Stone MA, Lee Y-W. Evaluation of a method for enhancing interaural level differences at low frequencies. The Journal of the Acoustical Society of America. 2016;140:2817–2828. doi: 10.1121/1.4965299.
50. Pirhosseinloo S, Kokkinakis K. An Interaural Magnification Algorithm for Enhancement of Naturally-Occurring Level Differences. Interspeech. 2016:2558–2561.
51. Williges B, Jürgens T, Hu H, Dietz M. Coherent Coding of Enhanced Interaural Cues Improves Sound Localization in Noise With Bilateral Cochlear Implants. Trends in Hearing. 2018;22.
52. International Standards Organisation. ISO 13091-1:2001. Mechanical vibration – Vibrotactile perception thresholds for the assessment of nerve dysfunction – Part 1: Methods of measurement at the fingertips. 2001. Available at: http://www.iso.org/iso/catalogue_detail.htm?csnumber=32509 (accessed 21 July 2016).
53. Denk F, Ernst SM, Ewert SD, Kollmeier B. Adapting hearing devices to the individual ear acoustics: Database and target response correction functions for various device styles. Trends in Hearing. 2018;22.
54. Glasberg BR, Moore BC. Derivation of auditory filter shapes from notched-noise data. Hear. Res. 1990;47:103–138. doi: 10.1016/0378-5955(90)90170-T.
55. Byrne D, et al. An international comparison of long-term average speech spectra. The Journal of the Acoustical Society of America. 1994;96:2108–2120. doi: 10.1121/1.410152.
56. Feddersen WE, Sandel TT, Teas DC, Jeffress LA. Localization of High-Frequency Tones. The Journal of the Acoustical Society of America. 1957;29:988–991. doi: 10.1121/1.1909356.
57. Drullman R, Festen J, Plomp R. Effect of temporal envelope smearing on speech reception. The Journal of the Acoustical Society of America. 1994;95:1053–1064. doi: 10.1121/1.408467.
58. Verrillo RT. Effect of contactor area on the vibrotactile threshold. The Journal of the Acoustical Society of America. 1963;35:1962–1966. doi: 10.1121/1.1918868.
59. Rothenberg M, Verrillo R, Zahorian S, Brachman M, Bolanowski S. Vibrotactile frequency for encoding a speech parameter. The Journal of the Acoustical Society of America. 1977;62:1003–1012. doi: 10.1121/1.381610.
60. Rakerd B, Hartmann WM. Localization of sound in rooms, III: Onset and duration effects. The Journal of the Acoustical Society of America. 1986;80:1695–1706. doi: 10.1121/1.394282.
