Abstract
In users of a cochlear implant (CI) together with a contralateral hearing aid (HA), so-called bimodal listeners, differences in processing latency between the digital HA and the CI of up to 9 ms constantly superimpose interaural time differences. In the present study, the effect of this device delay mismatch on sound localization accuracy was investigated. For this purpose, localization accuracy in the frontal horizontal plane was measured with the original and with a minimized device delay mismatch. The reduction was achieved by delaying the CI stimulation according to the delay of the individually worn HA. For this, a portable, programmable, battery-powered delay line based on a ring buffer running on a microcontroller was designed and assembled. After a 1-hr acclimatization period to the delayed CI stimulation, the nine bimodal study participants showed a highly significant improvement in localization accuracy of 11.6% compared with the everyday situation without the delay line (p < .01). In conclusion, delaying the CI stimulation to minimize the device delay mismatch seems to be a promising method to increase sound localization accuracy in bimodal listeners.
Keywords: bimodal hearing, cochlear implant, hearing aid, interaural stimulation timing, sound localization
Introduction
The normal hearing (NH) auditory system is very sensitive to interaural time differences (ITDs) in the temporal fine structure (TFS) and the amplitude envelope (ENV) of acoustic signals reaching the ears. Mills (1958) found that the smallest perceivable ITD is as small as 10 μs using tone pulses of 1 s duration with 70 ms rise and fall times, where the ITD occurred mainly in the TFS. Whereas TFS-ITDs can be perceived at frequencies below 1500 Hz (Yost, Wightman, & Green, 1971), ENV-ITDs are also perceivable for carriers at higher frequencies (Ewert, Kaiser, Kernschmidt, & Wiegrebe, 2012; Henning, 1974). As the majority of real-world sounds are broadband and strongly modulated in amplitude, typically both TFS- and ENV-ITDs are present. At frequencies above 1500 Hz, interaural level differences (ILDs) play a greater role in localization (Feddersen, Sandel, Teas, & Jeffress, 1957). Both ITDs and ILDs are used by the auditory system to localize sounds (Blauert, 1997; Moore, 2012; Rayleigh, 1907).
In bimodal cochlear implant (CI) and hearing aid (HA) users (hereafter referred to as bimodal listeners), the CI and HA are currently not synchronized in terms of processing delays. Typically, the CI transmits sound information to the auditory system with smaller processing delays than the HA. This has been shown for current CI systems from MED-EL (Innsbruck, Austria), type MAESTRO, in combination with HAs from Phonak (Stäfa, Switzerland), types Una M and Bolero Q90 (Zirn, Arndt, Aschendorff, & Wesarg, 2015). The difference in auditory brainstem activation latency was frequency independent to a good approximation and as large as 7 ms. For combinations of HA and CI system other than those described, the actual interaural stimulation timing depends on the processing delays of both the HA (τHA) and the CI system (τCI). Values for τCI are rarely mentioned in the literature or in data sheets; only the values published in Zirn et al. (2015) are known to the authors. For two other CI manufacturers, delay values of a CI ear relative to an acoustic ear (τNH) were published in Wess, Brungart, and Bernstein (2017). The stated values are τCI − τNH = 10.5–12.5 ms for CI systems from Cochlear Ltd. (Sydney, Australia) and τCI − τNH = 9–11 ms for CI systems from Advanced Bionics (Valencia, CA, USA). τHA values for different types of HAs are also rarely mentioned in the scientific literature or in data sheets. Dillon, Keidser, O'Brien, and Silberstein (2003) investigated τHA of five digital HAs and found largely frequency-independent values between 3 and 11 ms. Further measured τHA values for several types of HAs can be found in the appendix of Zirn et al. (2015).
The consequence for bimodal listeners is that ITDs caused by sound coming from the side are superimposed by a constant interaural timing mismatch, hereafter called the "device delay mismatch." As the largest physiologically occurring ITD is around 700 µs for normal head sizes (arising when a sound source is at 90° azimuth), the maximum device delay mismatch in bimodal configurations (9 ms) can be up to 13 times as large as the largest ITD.
A study on tolerable HA delays showed disturbed hearing sensations for an across-frequency delay larger than 9 ms in one ear and significantly reduced identification of syllables for delays larger than 15 ms (Stone & Moore, 2003). Thus, speech understanding seems to be relatively robust toward a device delay mismatch on the order of several milliseconds, at least in unilateral HA users.
However, sound localization might be more dependent on the device delay mismatch. An indication for this assumption comes from a study by Mossop and Culling (1998), who investigated ITD just noticeable differences (JNDs) for various reference ITDs (the baseline ITD relative to which the JND was measured) between 0 μs and 3000 μs in four subjects. For reference ITDs between 0 μs and 700 μs, the JNDs typically increased linearly. Beyond the upper limit of the physiological ITD range, for reference ITDs in the range of 1 to 3 ms, JNDs rose sharply to a plateau much higher than for reference ITDs below 700 μs. At a reference ITD of 3 ms, the ITD-JND was around 14 times as large as at 0 ms on group average.
Our hypothesis in the present study is that sound localization is negatively affected by the delay mismatch actually occurring in bimodal listeners. A study supporting this idea was published by Dorman, Loiselle, Cook, Yost, and Gifford (2016) showing that sound localization in a group of bimodal listeners was much worse than, for example, in bilateral CI users, bilateral HA users, and single-sided deaf CI users.
The research question of the present study is whether minimizing the device delay mismatch leads to improved sound localization accuracy in bimodal listeners.
Material and Methods
Test Subjects
Nine adult bimodal listeners (age: 51.8 ± 13.0 years [mean ± standard deviation], minimum 36 years, maximum 72 years) participated in the study. Details are listed in Tables 1 and 2.
Table 1.
Data of Bimodal Subjects.
| Subject | Age (years) | Etiology | CI type (processor / implant) | Implanted side | CI experience (years) | HA experience (years) | Better ear or device | CI coding strategy |
|---|---|---|---|---|---|---|---|---|
| Bim01 | 59 | Sudden hearing loss | OPUS2 / CONCERTO Flex28 | right | 3.5 | 6 | Left or HA | FS4 |
| Bim02 | 40 | progressive | SONNET / SYNCHRONY Flex 28 | left | 2.5 | 14 | Left or CI | FSP |
| Bim03 | 43 | congenital | OPUS2 / CONCERTO Flex28 | left | 4.5 | 3.5 | Right or HA | FS4-p |
| Bim04 | 63 | Sudden hearing loss | OPUS2 / CONCERTO Flex28 | right | 3.5 | 10 | Right or CI | FS4 |
| Bim05 | 36 | progressive | SONNET / SYNCHRONY Flex24 | left | 2 | 2 | Left or CI | FS4 |
| Bim06 | 72 | Sudden hearing loss | OPUS2 / CONCERTO Flex28 | left | 6 | 6 | Left or CI | FS4 |
| Bim07 | 56 | Sudden hearing loss | SONNET / SYNCHRONY Flex28 | left | 2 | 3.5 | Right or HA | FS4-p |
| Bim08 | 37 | Ménière | SONNET / SONATAti100 Flex EAS | left | 8.5 | 1 | Left or CI | FS4 |
| Bim09 | 60 | Blast trauma | SONNET / SYNCHRONY Flex28 | right | 0.5 | 28 | Left or HA | FS4 |
Note. HA = hearing aid; CI = cochlear implant.
The better performing ear or device was defined as the side with the higher score in the Freiburg monosyllabic test. For all study participants, the duration of bimodal hearing experience equaled the CI experience.
Table 2.
Hearing Aids of the Bimodal Subjects and Processing Delays (τHA) of These Devices Measured for Four Frequencies.
| Test subject | HA type | τHA at 0.5 kHz (ms) | τHA at 1 kHz (ms) | τHA at 2 kHz (ms) | τHA at 4 kHz (ms) | Averaged τHA (ms) |
|---|---|---|---|---|---|---|
| Bim01 | Oticon Agil pro | 5.7 | 5.3 | 5.3 | 5.1 | 5.4 |
| Bim02 | Oticon Opn 1 | 8.2 | 8.1 | 8.1 | 8.3 | 8.2 |
| Bim03 | Siemens Nitro 3mi | 7.0 | 7.1 | 7.0 | 5.5 | 6.7 |
| Bim04 | Phonak Naida 3 SP | 7.1 | 7.3 | 7.1 | 6.8 | 7.1 |
| Bim05 | Phonak Naida S 1 UP | 7.2 | 7.3 | 7.1 | 7.1 | 7.2 |
| Bim06 | Audio Service Comfort | 9.0 | 9.0 | 5.8 | 4.2 | 7.0 |
| Bim07 | Kind | 9.0 | 9.3 | 9.1 | 9.0 | 9.1 |
| Bim08 | Oticon Alta 2 | 7.1 | 7.1 | 5.9 | 5.8 | 6.5 |
| Bim09 | Bernafon CA3 N | 5.8 | 6.6 | 5.2 | 5.8 | 5.9 |
Note. HA = hearing aid.
On the ear provided with the HA, the subjects had mild to severe sensorineural or combined hearing losses (see Figure 1 for pure tone thresholds). Prerequisites for inclusion of bimodal subjects in the study were (a) everyday use of their HA and CI and (b) a percent correct score of more than 50% obtained in the Freiburg monosyllabic (word) test at 65 dB SPL (both on the CI and HA side). The included bimodal listeners had no measurable residual acoustic hearing on the ear provided with the CI at the audiometric frequencies 0.5, 1, 2, and 4 kHz.
Figure 1.

PTA4 values (pure tone average across 0.5, 1, 2, and 4 kHz; air conduction) of all bimodal study participants. The tested ear is stated in brackets in the legend.
l = left; r = right.
As a reference, six adult bilateral CI (BiCI) users (age: 45.8 ± 13.4 years, minimum 25 years, maximum 62 years) participated in the study. The prerequisite for inclusion of BiCI users was ≥50% correctly understood words in the Freiburg monosyllabic test at 65 dB SPL on each side. The CI system settings of the BiCI users were not changed for the study. The CI and HA settings of the bimodal listeners were also largely maintained; the corresponding procedure is explained in the next section. All bimodal and BiCI users had postlingual deafness and had used their respective CI audio processors (and HAs in the case of bimodal listeners) for at least 6 months.
Furthermore, eight NH listeners participated in the experiment (age 24.9 ± 3.2 years, minimum 22 years, maximum 33 years).
The work described has been carried out in accordance with the Code of Ethics of the World Medical Association (Declaration of Helsinki) for experiments involving humans. University of Freiburg ethics committee approval was obtained (89/17). Informed consent was obtained from all test subjects.
The Delay Line
To delay sound signals with sufficient temporal resolution in bimodal listeners, a delay line (DL) based on the Arduino Due microcontroller (µC) board with a built-in Atmel SAM3X8E ARM Cortex-M3 CPU was used (Arduino, 2018). This type of µC board was chosen because it offers analog-to-digital converters as well as digital-to-analog converters (each with 12 bits of resolution), sufficient memory (512 kB of flash memory), and a comparably high clock speed (84 MHz). It can be programmed in C or C++ using the Arduino IDE. To realize programmable delays, a circular ring buffer was implemented. With this algorithm, delays corresponding to integer multiples of the reciprocal of the sampling rate are possible. With a sampling rate of 48000 Hz, corresponding to a sampling period of 20.8 µs, delays are adjustable in discrete steps of 20.8 µs (i.e., n times the sampling period, with integer n). In addition, a temporal offset has to be added because both the analog-to-digital and the digital-to-analog converters require several clock cycles for signal processing. This offset is independent of the sampling rate and was measured to be 50 µs. Thus, delays were adjustable according to Equation 1.
$$\tau_{\mathrm{DL}} = \frac{n}{f_s} + 50\,\mu\mathrm{s} = n \cdot 20.8\,\mu\mathrm{s} + 50\,\mu\mathrm{s} \qquad (1)$$

where fs = 48 kHz is the sampling rate and n is the programmable (integer) number of buffered samples.
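For illustration, the following minimal C++ sketch shows how such a ring-buffer delay could be realized on a microcontroller. It is not the firmware actually used in the study; the buffer length, function names, and the assumption that processSample() is called once per sampling period (e.g., from a timer interrupt) are illustrative choices.

```cpp
// Minimal sketch of a ring-buffer delay line (illustrative, not the study firmware).
// At fs = 48 kHz, a delay of n samples corresponds to n x 20.8 us plus the fixed
// 50 us converter offset (Equation 1). ADC/DAC hardware setup is omitted.

#include <stdint.h>

const uint32_t FS_HZ = 48000;            // sampling rate
const uint32_t BUFFER_LEN = 1024;        // maximum delay: 1024 / 48000 s, about 21 ms

static uint16_t ringBuffer[BUFFER_LEN];  // 12-bit samples stored in 16-bit words
static uint32_t writeIndex = 0;

// Convert a target delay in microseconds into the number of buffered samples n,
// after subtracting the fixed 50 us converter offset.
uint32_t delaySamples(float targetDelayUs) {
  float n = (targetDelayUs - 50.0f) * FS_HZ / 1.0e6f;
  return (n > 0.0f) ? (uint32_t)(n + 0.5f) : 0;   // round to the nearest step
}

// Called once per sampling period: store the current ADC sample and return
// the sample from n periods ago (n must be smaller than BUFFER_LEN).
uint16_t processSample(uint16_t adcSample, uint32_t n) {
  ringBuffer[writeIndex] = adcSample;
  uint32_t readIndex = (writeIndex + BUFFER_LEN - n) % BUFFER_LEN;
  writeIndex = (writeIndex + 1) % BUFFER_LEN;
  return ringBuffer[readIndex];          // value to be written to the DAC
}
```

With the assumed buffer length of 1024 samples, delays of up to roughly 21 ms could be programmed, which comfortably covers the τHA values listed in Table 2.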
In Zirn et al. (2015), we showed that the device delay mismatch can be well approximated by the averaged processing delay τHA of the individual HA of the bimodal user in combination with a MED-EL MAESTRO CI system in the clinical configuration (i.e., with the CI processor driving the implant). This is because the frequency-specific delays introduced by this CI system are relatively close to the delays of a normal-hearing ear.
In the present study, the value of τDL was set as close as possible to τHA (see Table 2, right column), which was considered an estimate of the device delay mismatch (i.e., τHA ≈ device delay mismatch), knowing that this was not necessarily a perfect match across frequency (see Figure 2). With this, the device delay mismatch is usually not set entirely to zero but largely reduced, or in other words, "minimized."
Figure 2.

The across-frequency delay (DL) applied to the CI stimulation (green arrows) delays auditory nerve activation and therefore reduces the gap (i.e., the reference ITD, approximated in this work by τHA) between the CI-induced auditory brainstem response (ABR) wave V latency and the ABR wave V latency evoked via the HA Phonak Una M (adapted from Zirn et al., 2015, Figure 7).
CI = cochlear implant; DL = delay line.
τHA was measured using the HA analyzer unit ACAM 5 from Acousticon GmbH, Reinheim, Germany as described in Zirn et al. (2015). In Table 2, τHA for the different types of HAs of the nine included bimodal listeners are listed.
The setup used for delaying the CI stimulation in bimodal listeners is shown schematically in Figure 3. To capture the acoustic signal on the CI side, an OPUS2 CI audio processor from MED-EL (Innsbruck, Austria) was used. The tapped microphone signal is not delayed by the signal processing within the OPUS2. The processor was connected to the microphone-tester, a device which CI fitting specialists typically use in conjunction with monitor headphones to check that the CI audio processor microphone functions correctly. The microphone-tester is an analog amplifier with no appreciable processing delay.
Figure 3.
Setup applied to delay the CI stimulation. The microphone signal of the CI processor worn behind the ear was tapped, amplified (by the microphone tester), delayed (by the DL), and then fed back into a CI processor worn at the collar. This device ran with the individual map that the patient was used to from everyday life. The CI signal was then transmitted transcutaneously to the implant via a long cable and coil.
HA = hearing aid; DL = delay line.
The signal at the headphone output of the microphone-tester was tapped and used as the input to our DL described earlier. The output of the DL was then transmitted to the auxiliary input of a second OPUS2 CI audio processor using the red audio cable available from MED-EL. This OPUS2 was modified in a way that its internal microphone was deactivated to avoid a second nondelayed (attenuated) input to the bimodal user. This device is hereinafter referred to as “silent OPUS2.”
The silent OPUS2 was programmed in the same way as the patient’s own CI audio processor if their everyday CI processor was also an OPUS2 (see Table 1). In case of users of Sonnet processors, the user’s everyday CI program was converted for use with the silent OPUS2 processor using the conversion function of the Maestro System Software of MED-EL. The HA and its settings were not changed for and during the study.
This combination of devices (OPUS2 - microphone-tester – DL - silent OPUS2 - implant) had approximately the same internal delay as a usual OPUS2 – implant configuration, when the DL was programmed with the minimum delay (see Equation 1).
After completing the setup as shown in Figure 3, all bimodal participants reported that they were hearing subjectively as usual with their everyday bimodal configuration. Consequently, all bimodal subjects were accustomed to the fittings used in the study. The everyday coding strategy used by the bimodal listeners in this study was FS4 (for details about this strategy see, e.g., Zirn, Arndt, Aschendorff, Laszig, & Wesarg, 2016), FS4-p, or FSP (for details about these strategies, see, e.g., Riss et al., 2014).
Stimuli and Test Environment
All tests were administered in an audiometric booth. For the localization tests, seven loudspeakers (type Genelec 8030A) were positioned at intervals of 30° from −90° (Loudspeaker #1) to +90° (Loudspeaker #7) on a frontal semicircle 1 m in diameter in the horizontal plane at the subject's head level. The loudspeakers carried number plates corresponding to the numbers in Figure 4. The subjects were not allowed to search for the presenting speaker by moving their heads during the stimulus presentations.
Figure 4.
Setup for sound localization tests in the horizontal plane applied in bimodal and BiCI users.
Stimulus generation was similar to that described by Yost, Loiselle, Dorman, Burns, and Brown (2013). White Gaussian noise was filtered by an eight-pole Butterworth bandpass filter (implemented in MATLAB, The MathWorks Inc., Natick, MA, USA) with a lower cutoff frequency of 125 Hz and an upper cutoff frequency of 6000 Hz (oriented on the broadband stimulus in Yost et al.). The noise bursts were 200 ms in duration and were ramped up and down with the rising and falling slopes (20 ms each) of a Hanning window. The presentation level, measured in the middle of the loudspeaker semicircle at ear level (of an average-sized adult), was 65 dB(A), measured with a Nor140 sound level meter from Norsonic AS. Sounds were digitally generated and played via an 8-channel digital-to-analog converter (Behringer, type Ultragain Pro-8 Digital ADA8000) running at a sampling rate of 44.1 kHz with 16 bits of resolution.
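As a hedged illustration of the envelope shaping described above (a sketch, not the original MATLAB code), the following C++ fragment generates a 200-ms white Gaussian noise burst at 44.1 kHz and applies the 20-ms rising and falling slopes of a Hanning window; the eight-pole Butterworth band-pass filtering (125–6000 Hz) applied in the study is omitted here, and the function name and fixed random seed are illustrative choices.

```cpp
// Sketch: 200-ms Gaussian noise burst with 20-ms raised-cosine (Hanning) ramps.
// The Butterworth band-pass filtering used in the study is not included here.

#include <cmath>
#include <random>
#include <vector>

std::vector<float> makeNoiseBurst(float fs = 44100.0f,
                                  float durationS = 0.200f,
                                  float rampS = 0.020f) {
  const float kPi = 3.14159265f;
  std::mt19937 rng(12345);                         // fixed seed for reproducibility
  std::normal_distribution<float> gauss(0.0f, 1.0f);

  const size_t nSamples = static_cast<size_t>(durationS * fs);
  const size_t nRamp = static_cast<size_t>(rampS * fs);
  std::vector<float> burst(nSamples);

  for (size_t i = 0; i < nSamples; ++i) {
    burst[i] = gauss(rng);                         // white Gaussian noise
  }
  for (size_t i = 0; i < nRamp; ++i) {
    // rising half of a Hanning window; mirrored for the falling slope
    float w = 0.5f * (1.0f - std::cos(kPi * static_cast<float>(i) / nRamp));
    burst[i] *= w;                                 // fade in
    burst[nSamples - 1 - i] *= w;                  // fade out
  }
  return burst;
}
```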
The listener was seated at the center of the semicircle between Speakers #1 and #7 and used a hand-held wireless keypad to enter his or her responses. One trial consisted of the presentation of a noise burst from a randomly chosen loudspeaker followed by a key press on the keypad.
In NH listeners, the stimuli were presented via headphones (Sennheiser HD 280 Pro), simulating sounds originating from the loudspeakers shown in Figure 4 by convolving the stimuli with angle-dependent head-related impulse responses (HRIRs). The HRIRs were taken from an open-source database (Dietrich et al., 2013). The presentation level was adjusted to the most comfortable level for the first NH participant and was afterwards used unaltered for all further NH subjects.
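The following hedged sketch illustrates how such a virtual source can be rendered over headphones: the monaural burst is convolved with the left- and right-ear HRIRs of the desired azimuth (the actual HRIR data would come from the Dietrich et al., 2013, database); the function names and the direct time-domain convolution are illustrative choices, not the study's implementation.

```cpp
// Sketch of headphone rendering of a virtual source via HRIR convolution.
// For the delayed NH condition described later, the right channel would
// additionally be shifted by about 309 samples (7 ms at 44.1 kHz).

#include <vector>

// Direct (time-domain) convolution of signal x with impulse response h.
std::vector<float> convolve(const std::vector<float>& x,
                            const std::vector<float>& h) {
  std::vector<float> y(x.size() + h.size() - 1, 0.0f);
  for (size_t i = 0; i < x.size(); ++i) {
    for (size_t j = 0; j < h.size(); ++j) {
      y[i + j] += x[i] * h[j];
    }
  }
  return y;
}

// Render left and right headphone signals for one virtual source direction.
void renderVirtualSource(const std::vector<float>& burst,
                         const std::vector<float>& hrirLeft,
                         const std::vector<float>& hrirRight,
                         std::vector<float>& outLeft,
                         std::vector<float>& outRight) {
  outLeft = convolve(burst, hrirLeft);
  outRight = convolve(burst, hrirRight);
}
```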
Experimental Procedure
The sound localization test described in the previous section was repeated under varying listening conditions. Figure 5 shows an overview of the experimental procedure along a timeline (from top to bottom).
Figure 5.
Schematic of the experimental procedure applied in bimodal users.
CI = cochlear implant; DL = delay line.
The first localization test (Figure 5, Step 1) was conducted in the bimodal configuration (HA and CI worn and active) with the DL attached to the CI and programmed with n = 0, corresponding to τDL = 50 µs, that is, no considerable delay (see Equation 1); this condition is from here on referred to as the "no-CI-delay" mode. This test was done with the aim of assessing the participant's ability to localize sounds in his or her everyday hearing condition, that is, with the everyday device delay mismatch. The first test could have been performed without the DL, but the DL was already applied in order to maintain the same combination of hearing devices and components across all test conditions. Prior to the first localization test, every subject was instructed to adjust the volume of the CI so that the hearing impressions of CI and HA were subjectively balanced. Afterwards, the volume settings of CI and HA were kept constant across all tests.
The second localization test (Figure 5, Step 2) was conducted with the better performing side only (unilateral configuration). The better performing side was selected based on the most recent available Freiburg monosyllable scores obtained unilaterally on the CI and on the HA side; the side with the higher score was tested alone. In this condition, the contralateral device was switched off and removed from the ear. An improvement in the first test (bimodal) compared with the second test (unilateral) is from here on referred to as the bimodal benefit.
Afterwards, the DL was programmed with the measured τHA value as listed in Table 2 (Figure 5, Step 3).
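As a hedged worked example of this step (rounded values, not taken from the study protocol): for Bim07, whose measured τHA is 9.1 ms (Table 2), Equation 1 gives n = round((9100 µs − 50 µs) / 20.8 µs) ≈ 435, so that τDL ≈ 435 × 20.8 µs + 50 µs ≈ 9.1 ms, that is, within half a sampling period of the target delay.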
Subsequently, to allow some acclimatization, the participant used all devices (HA and CI + DL) for 1 hr. He or she was instructed to walk around outside the lab for 45 min while paying attention to the perceived direction of sound sources (e.g., cars in road traffic or conversations). After returning to the lab, each study participant was presented with the TV news of the previous evening (duration of 15 min) in order to ensure similar hearing training for all participants (Figure 5, Step 4).
Finally, the localization test was performed in the bimodal configuration with the CI delay (Figure 5, Step 5; from here on referred to as the "with-CI-delay" mode). An improvement in this last test compared with the initial bimodal test is from here on referred to as the DL benefit.
Localization training was conducted with every participant before the first test. For that, noise bursts were presented to the listener from each of the speakers, beginning with Speaker #1, proceeding to #7, and subsequently back to #1. This sequence was repeated once. The listener was instructed to just listen to the stimuli while looking straight ahead at the center speaker (#4) without moving the head. Afterwards, 21 trials with feedback (the participant was informed about the correct speaker number after making his or her guess) were presented for training.
After the training was completed, the tests to be evaluated began. For these tests, 140 stimuli (20 stimuli per speaker) were presented in random order without feedback. After each stimulus presentation, listeners pressed a key between 1 and 7 indicating which loudspeaker had presented the sound (the corresponding numbers appeared on each loudspeaker).
The procedure for NH listeners was similar to that for bimodal listeners with some slight, necessary deviations. The main reason for deviating was that, in contrast to bimodal listeners, NH listeners are not used to any device delay mismatch altering the interaural timing. Accordingly, the sequence of the tests shown in Figure 5 had to be reversed. To realize a unilateral delay, the experiment was done via headphones using virtual sound sources as described in the previous section. First, an initial localization test was conducted with no unilateral delay (the usual listening condition of NH listeners). After that, a unilateral delay of 7 ms (which corresponds to τHA averaged across all HAs of the bimodal study participants; see the right-most column in Table 2) was applied to the right ear, similar to the delay experienced by bimodal listeners, and the localization test was repeated. Then, 1 hr of audio-visual training was conducted with each NH listener using a three-dimensional video game played with headphones (Sennheiser HD 280 Pro). In this game, Portal 2 from Valve Software, spatialization is accomplished by player viewpoint tracking and three-dimensional rendering of sound sources: panning, ITDs, distance-based attenuation and interaural level differences, occlusion-based attenuation, and real-time geometry-based reflections and reverb (personal communication with Mike Morasky from Valve Software). It delivers a subjectively good spatial hearing impression for NH listeners. The unilateral delay of 7 ms was constantly present during gameplay. After completing the audio-visual training, the localization test was repeated, maintaining the unilateral delay of 7 ms.
BiCI users were included in the present study as a reference to evaluate whether training effects were present and responsible for any improvements caused by repeated localization testing. Therefore, the procedure shown in Figure 5 was also applied to the BiCI users, but no DL was fitted to them. Thus, there was no difference between Test 1 and Test 2 for BiCI users. The most plausible reason why the results in Test 2 might differ from the results in Test 1 in BiCI users is the effect of training.
Mathematical Evaluation and Statistical Analysis
Root mean square (rms) errors of localization accuracy were calculated as proposed by Rakerd and Hartmann (1986) according to Equation 2.
$$D = A \sqrt{\frac{1}{M}\sum_{i=1}^{M}\left(r_i - k_i\right)^2} \qquad (2)$$
Here, A corresponds to the angle between two adjacent speakers (30° in the test setup used), M is the number of responses, ri is the response (1–7) on the ith trial, and ki is the corresponding source loudspeaker. D was recalculated after each trial, resulting in a vector of 140 D values. The reported D value, termed the rms error, corresponds to the last value stored in the vector (i.e., the value after all 140 trials).
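A minimal sketch of Equation 2 follows (assuming speaker indices 1–7 and a 30° spacing; the function name is an illustrative choice):

```cpp
// Rms localization error D after Rakerd and Hartmann (1986), Equation 2:
// D = A * sqrt( (1/M) * sum_i (r_i - k_i)^2 ), with A the speaker spacing.

#include <cmath>
#include <vector>

double rmsLocalizationError(const std::vector<int>& responses,  // r_i, 1-7
                            const std::vector<int>& sources,    // k_i, 1-7
                            double speakerSpacingDeg = 30.0) {  // A
  const size_t M = responses.size();   // number of trials evaluated so far
  double sumSq = 0.0;
  for (size_t i = 0; i < M; ++i) {
    const double diff = responses[i] - sources[i];
    sumSq += diff * diff;
  }
  return speakerSpacingDeg * std::sqrt(sumSq / static_cast<double>(M));
}
```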
To determine the chance level as well as a worst-case scenario, a Monte Carlo simulation based on Equation 2 was conducted to simulate the responses of 1,000 completely deaf listeners with purely random response behavior, each completing 140 trials of the localization test (using MATLAB's random number generator). In this simulation, the test subjects guess the stimulus-presenting loudspeaker at random, as they have no access to any acoustic cues. The simulation revealed normally distributed rms errors of 83.7 ± 5.8° (mean ± standard deviation), the mean of which was considered as the chance level in the following. In addition, a worst-case scenario was simulated by mimicking 1,000 listeners who always responded with Loudspeaker #1 irrespective of the actual stimulus-presenting speaker (the same could have been achieved by simulating "#7 responders"). This was done because one test subject (Bim05) responded in this way in the unilateral configuration, which resulted in rms errors larger than the chance level plus two standard deviations. The worst-case simulation resulted in normally distributed rms errors of 108.1 ± 3.5°. The poorest result occurring in the distribution of 1,000 values was an rms error of 125.5°; this value was considered as the worst case. Chance level and worst case are depicted in Figures 6, 8, and 9.
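The chance-level estimate can be reproduced with a few lines of code. The following is a hedged sketch (the study used MATLAB; the seed and variable names here are arbitrary, and the presenting speaker is drawn at random rather than exactly 20 times per speaker, which does not noticeably change the expected error):

```cpp
// Monte Carlo estimate of the chance-level rms error: random guesses on
// 140 trials, repeated for 1,000 simulated listeners. The resulting mean
// should be close to the reported chance level of roughly 84 degrees.

#include <cmath>
#include <random>
#include <vector>

int main() {
  std::mt19937 rng(42);
  std::uniform_int_distribution<int> speaker(1, 7);

  std::vector<double> rmsErrors;
  for (int listener = 0; listener < 1000; ++listener) {
    double sumSq = 0.0;
    for (int trial = 0; trial < 140; ++trial) {
      const int source = speaker(rng);     // stimulus-presenting loudspeaker
      const int response = speaker(rng);   // purely random guess (no cues)
      const double diff = response - source;
      sumSq += diff * diff;
    }
    rmsErrors.push_back(30.0 * std::sqrt(sumSq / 140.0));   // Equation 2
  }

  // Mean and standard deviation across the simulated listeners.
  double mean = 0.0;
  for (double e : rmsErrors) mean += e;
  mean /= rmsErrors.size();
  double var = 0.0;
  for (double e : rmsErrors) var += (e - mean) * (e - mean);
  const double sd = std::sqrt(var / (rmsErrors.size() - 1));
  (void)mean; (void)sd;                    // values would be reported here
  return 0;
}
```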
Figure 6.

Rms errors of virtual sound source localization in eight NH listeners for unilateral delays of 0 ms, 7 ms, and 7 ms after 1 hr of acclimatization (shown as labels on the x axis). Individual results are displayed as colored lines and symbols. The thick black line and square symbols show the group means with standard deviations in each listening condition. Furthermore, the outcomes of the statistical evaluation are depicted as horizontal brackets in the upper part of the figure. Chance level and worst case are shown as dashed horizontal lines. In addition, the area of ±1 standard deviation around chance level is highlighted in gray.
Figure 8.

Rms localization errors of nine bimodal users in three different listening conditions (NDL: no-CI-delay mode; WDL: with-CI-delay mode). Individual results are displayed as colored lines and symbols. The thick black line and square symbols show the group means with standard deviations in each listening condition. Furthermore, the outcomes of the statistical evaluation are depicted as horizontal brackets in the upper part of the figure. Chance level and worst case are shown as dashed horizontal lines. In addition, the area of ±1 standard deviation around chance level is highlighted in gray.
NDL = no-CI-delay mode; WDL = with-CI-delay mode.
Figure 9.

Rms localization errors of six BiCI users in three different listening conditions. Individual results are displayed as colored lines and symbols. The thick black line and square symbols are the group means including standard deviation in each listening condition.
Statistical analysis of the localization rms errors of the test subjects included pairwise comparisons using Wilcoxon signed-rank tests for paired data and Wilcoxon rank sum tests for unpaired data. Furthermore, to evaluate differences in Subjects Bim07 and Bim08, a paired t test was applied to the last 20 entries of the D vector (see Equation 2) for comparisons across the three conditions (unilateral, bimodal no-CI-delay, and bimodal with-CI-delay).
In addition, correlations were assessed by deriving Spearman's rho. For all statistical tests, the level of significance was defined as α = 5%; significant p values (<.05) are marked with *, and highly significant p values (<.01) with **.
Results
Normal-Hearing Listeners
In Figure 6, rms errors of NH listeners obtained in the virtual sound source localization test are shown.
The rms error at 0 ms unilateral delay was 33.3 ± 7.6°. After acutely applying a unilateral delay of 7 ms, the rms error deteriorated significantly to 85.8 ± 7.5° (p = .008). After 1 hr of audio-visual training with this unilateral delay, the rms error was 85.1 ± 11.3° which was not significantly different from the rms error with the unilateral delay of 7 ms obtained in the acute test (p = .74).
Bimodal Listeners
In Figure 7, bubble plots for two test subjects are shown. The three subplots on the left depict the results of Subject Bim07, who showed the largest benefit from equalizing the device delays. In the bimodal configuration in the no-CI-delay mode, the mean rms error across all presentation angles was 34.0° (Figure 7, left, top). In the unilateral configuration with the HA alone, Bim07 was able to make largely correct left-right judgments. This resulted in an rms error of 42.8° across all speakers, corresponding to a bimodal benefit of 8.8° or 20.5% (p < .01; Figure 7, left, middle). Finally, in the bimodal configuration in the with-CI-delay mode, the rms error improved to 24.2°, corresponding to a DL benefit of 9.8° or 28.9% (p < .01; Figure 7, left, bottom).
Figure 7.
Bubble plots indicating the percentage of trials in which Subjects Bim07 (left) and Bim08 (right) reported each loudspeaker location as the perceived sound source location, plotted versus the actual location of the presenting loudspeaker. The size of each bubble is proportional to the percentage reported (bubble sizes in 5% steps; the smallest bubble on the plot is 0%–5% and the largest bubble is 95%–100%). An ideal listener would produce large bubbles lying exclusively on the diagonal (reported location equal to actual location). (A) Results in the bimodal configuration in no-CI-delay mode, (B) results in the unilateral configuration, and (C) results in the bimodal configuration in with-CI-delay mode. For both Bim07 and Bim08, the latter configuration (C) resulted in the lowest rms errors (see Table 3).
The three subplots in the right column of Figure 7 depict the results of Subject Bim08, who showed little benefit from equalizing the device delays. In the bimodal configuration in the no-CI-delay mode, the rms error was 30.7°. Interestingly, in the unilateral configuration with the CI alone (Figure 7, right, middle), Bim08 was not able to make correct left-right judgments in most cases. This resulted in an rms error of 92.4° across all speakers, corresponding to a large bimodal benefit of 61.7° or 66.8% (p < .01). Finally, in the bimodal configuration in the with-CI-delay mode, the rms error improved further to 28.9°, corresponding to a DL benefit of 1.8° or 5.9% (p < .01).
In Table 3, the individual localization accuracies of the bimodal listeners in the different listening configurations are listed. All nine bimodal listeners scored better than chance level when using both devices, both in the no-CI-delay and in the with-CI-delay mode (left and right columns in Table 3). In Figure 8, these results and the group means are shown. The average rms error across all bimodal participants was 40.4 ± 9.8° in the bimodal configuration in the no-CI-delay mode, 64.7 ± 25.2° in the unilateral configuration, and 35.5 ± 9.0° in the bimodal configuration with DL. The average improvement from unilateral to bimodal no-CI-delay mode was 21.1° (p = .074), from unilateral to bimodal with-CI-delay mode 25.9° (p = .008), and from bimodal no- to with-CI-delay mode 4.8° (p = .008). The results can also be described in percent by dividing the improvement in degrees by the average rms error and multiplying by 100. Calculated in this way, the improvement from unilateral to bimodal no-CI-delay mode was 33.9%, from unilateral to bimodal with-CI-delay mode 41.6%, and from bimodal no- to with-CI-delay mode 11.5%.
Table 3.
Localization Accuracy of Bimodal Listeners in the Three Different Configurations.
| Test subject | Bimodal (no-CI-delay) rms error (°) | Unilateral rms error (°)a | Bimodal (with-CI-delay) rms error (°) | Bimodal benefit (°) / (%) | DL benefit (°) / (%) |
|---|---|---|---|---|---|
| Bim01 | 34.7 | 32.0 | 28.2 | −2.7 / −8.3 | 6.4 / 18.6 |
| Bim02 | 43.3 | 71.4 | 41.4 | 29.1 / 40.8 | 0.9 / 2.2 |
| Bim03 | 51.2 | 48.7 | 47.6 | −2.5 / −5.2 | 3.6 / 7.1 |
| Bim04 | 42.7 | 63.6 | 37.0 | 20.9 / 32.8 | 5.7 / 13.4 |
| Bim05 | 29.9 | 107.3 | 29.9 | 77.4 / 72.1 | 0.0 / 0.0 |
| Bim06 | 56.8 | 59.6 | 47.0 | 2.8 / 4.7 | 9.8 / 17.3 |
| Bim07 | 34.0 | 42.8 | 24.2 | 8.8 / 20.5 | 9.8 / 28.9 |
| Bim08 | 30.7 | 92.4 | 28.9 | 61.7 / 66.8 | 1.8 / 5.9 |
| Bim09 | 47.5 | 42.0 | 42.0 | −5.5 / −13.1 | 5.5 / 11.6 |
Note. DL = delay line.
The terms in brackets refer to measurements without or with DL.
aIn the unilateral configuration, the side with the higher score in the Freiburg monosyllables test (according to the clinic database) was tested alone (either the CI- or the HA-provided ear).
There was no significant correlation between the DL benefit and the HA delay τHA (rho = −0.03, p = .95). Further, no significant correlation between the DL benefit and the bimodal experience (see Table 1) was found (rho = −0.03, p = .95). Correlation analysis between the DL benefit and the mean PTA4 values (calculated based on the values shown in Figure 1) revealed rho = −0.48 (p = .19). A significant correlation was found between localization accuracy in the unilateral configuration and the DL benefit (rho = −0.72, p = .04).
BiCI Users
In Figure 9, the measured localization accuracy of the BiCI users in the different listening configurations is depicted. The average rms error was 34.6 ± 8.9° in the bilateral configuration (Test 1), 66.4 ± 17.3° in the unilateral configuration, and 33.9 ± 9.0° in the bilateral configuration (Test 2). The average improvement from unilateral to bilateral (Test 1) was 31.8° and significant (p = .031), from unilateral to bilateral (Test 2) 32.5° and significant (p = .031), and from bilateral (Test 1) to bilateral (Test 2) 0.8° and not significant (p = .688). As for the bimodal listeners, the results can also be expressed in percent relative to the first-mentioned configuration: The average improvement from unilateral to bilateral (Test 1) was 47.8%, from unilateral to bilateral (Test 2) 49.0%, and from bilateral (Test 1) to bilateral (Test 2) 2.2%.
The group mean rms errors found in bimodal listeners in the bimodal configuration no-CI-delay mode (40.4°) were poorer than those found in BiCI users (35.5° averaged across Test 1 and Test 2, p = .13), whereas in the with-CI-delay mode, the results of bimodal listeners (35.5°) were closer to those of BiCI users (p = .85).
Discussion
General Discussion
The objective of this study was to investigate the effect of the device delay mismatch on localization accuracy in bimodal listeners. The device delay mismatch originates from differences in processing delays between CI systems (Wess et al., 2017) and HAs (Dillon et al., 2003; Zirn et al., 2015). For bimodal listeners provided with MED-EL MAESTRO CI systems, the device delay mismatch is largely frequency independent and as large as 7 ms for common types of HAs (Zirn et al., 2015). In the present work, localization accuracy was measured with the original device delay mismatch the bimodal listeners came with and with a reduced (minimized) delay mismatch enabled by applying a DL to the CI. As a reference, localization accuracy was measured unilaterally with one device only (the one with which speech understanding was better, i.e., the dominant side).
In the unilateral configuration, the localization accuracy of the bimodal listeners in the present study varied remarkably. Two test subjects (Bim05 and Bim08) even performed worse than chance level unilaterally (calculated for purely random response behavior). As described in the Methods section, this can indeed happen. Especially in those cases, but also for the other three bimodal listeners with a dominant CI ear, the improvement from unilateral to bimodal no-CI-delay mode was relatively large (38.4° on average), whereas the subjects with a dominant HA-provided ear did not show a benefit from unilateral to bimodal no-CI-delay mode (−0.5° on average). A possible reason why these subjects performed fairly well using only their HA is that spectral cues, originating from the angle-dependent head-related filtering of free-field sound signals, can be relatively well perceived by unilateral HA users. In contrast, it is well known that the spectral resolution of a CI-provided ear is heavily reduced compared with acoustic hearing (e.g., Bernstein, Goupell, Schuchman, Rivera, & Brungart, 2016; Henry, Turner, & Behrens, 2005; Nelson, Donaldson, & Kreft, 2008). Thus, the subtle spectral differences that enable unilateral HA users to localize sounds are barely available to unilateral CI users.
Even more interesting is the improvement achieved by delaying the CI stimulation in the bimodal listening condition. In the with-CI-delay mode, all bimodal listeners achieved at least the performance of the better ear alone (Bim03, Bim05, and Bim09), and the majority (the remaining six bimodal participants) performed better than in both the unilateral and the bimodal no-CI-delay condition. The effect is analyzed in more depth below.
An important question is whether the improvements seen in the results of the bimodal listeners in the present study (from no- to with-CI-delay mode) were due to equalizing the device delays or due to training effects originating from repeated testing with the same procedure. The most plausible reason why the results of BiCI users in Test 2 might differ from those in Test 1 is a training effect. However, the results of the BiCI users included in the present work showed no significant difference between tests with the same configuration, applying the same procedure as for the bimodal listeners (but without the DL). From this outcome, we conclude that no systematic learning effect was present. Therefore, even though the control group is a different group than the bimodal listeners, it is likely that the improvement in the bimodal test subjects was caused by the reduction of the device delay mismatch.
The localization accuracy of the bimodal listeners participating in the present study using both devices in the no-CI-delay mode was better than that reported by Dorman et al. (2016). They investigated sound localization in eight bimodal listeners using their clinical devices with 13 loudspeakers placed in the frontal horizontal plane with a spacing of 15°. The rms errors they found in bimodal listeners (on average 62°, close to chance level) were far worse than those found in bilateral CI users (on average 29°). In our study, the difference between the rms errors of the bimodal listeners in the no-CI-delay mode (approximately 40° on average) and the BiCI users (approximately 32° on average) was smaller. The main reason for these different findings might be differences in residual hearing on the HA side between the patient population investigated by Dorman et al. and that in our study. The average PTA4 value in the ear contralateral to the CI of the bimodal listeners of Dorman et al. was 83 dB HL, which is considerably larger than the average PTA4 value in the present study (59 dB HL).
Localization in bimodal listeners using their clinical devices was also investigated by Seeber, Baumann, and Fastl (2004) using 11 loudspeakers mounted on a circular tube. The speakers spanned an angle from −50° (left) to +50° (right) with a spacing of 10°. The authors included 11 bimodal listeners, who performed only slightly better when using both devices (CI + HA; mean absolute error 22.2°) than with the CI alone (mean absolute error 24.5°). The authors also tested four BiCI users with the same setup. Localization accuracy of the BiCI users when both CIs were used (mean absolute error 15.0°) was better than that of the bimodal listeners. Comparable to Dorman et al., the PTA4 values in the study of Seeber et al. were considerably larger (89 ± 15 dB HL) than in the present study (58.6 ± 24.3 dB HL). Thus, differences in residual hearing on the HA side might also explain the different outcome compared with the present study.
Other possible factors that could be responsible for the different results across the studies of Dorman et al. and Seeber et al. and the present study are (a) the test setup used for localization testing, (b) the bimodal subjects (age, etiology, onset, and duration of hearing loss), (c) the performance with the CI and the HA, (d) the residual hearing on the HA side, and (e) the devices (HA and CI systems) and the quality of the device fitting.
Impact of the Device Delay Mismatch on Localization Accuracy in Bimodal Listeners
The main topic of this work was to investigate the effect of the device delay mismatch on the localization ability of bimodal listeners. As shown by Zirn et al. (2015), the device delay mismatch for bimodal listeners using current MED-EL CI systems in combination with the HA types Phonak Una M and Bolero Q90 is approximately 7 ms. This value also applies quite well to the CI/HA configurations investigated in the current article (see Table 2).
Research has shown that ITD conveyed in the envelope of sounds (ENV-ITD) and ILD are the most relevant cues for sound localization in bimodal listeners (Francart, Brokx, & Wouters, 2009; Seeber et al., 2004).
A reduction of the device delay mismatch by delaying the CI stimulation using the DL, as proposed in the present study, may be beneficial especially for ENV-ITD perception with the stimulus used in this study, as interaural onset and offset time differences are shifted into, or close to, the physiological ITD range.
The hypothesis that a reduced device delay mismatch could help bimodal listeners localize sound sources was supported: With the reduced device delay mismatch, the included test subjects either performed better (N = 7) or equally well (N = 2, Subjects Bim05 and Bim09), and none of them performed worse after applying the DL. There was a significant improvement (p = .016) in localization accuracy across all bimodal listeners with a mean value of 4.9° (corresponding to a 12.1% improvement). It is important to mention in this context that this improvement occurred after a DL acclimatization period of only 1 hr, compared with the time the bimodal listeners had used their CI and HA with the original timing (a minimum of 6 months). This outcome reveals a way to optimize CI systems for bimodal listeners, namely the implementation of a programmable delay element in the range of 1 to 11 ms, as already suggested by Zirn et al. (2015) and Francart, Wiebe, and Wesarg (2018). As mentioned in our previous publication, this delay element should be integrated into the CI system and be adjustable within the CI programming environment available to the clinician, who has access to an appropriate table of τHA values as an approximation of the device delay mismatch to be set. In this way, acclimatization periods longer than 1 hr (as applied in the present study) would be possible, and it is conceivable that the positive effect of device delay mismatch reduction would then be even more pronounced in most cases. A possible first approach would be to implement a fixed delay of 7 ms, which corresponds to the average HA delay across the 18 devices investigated in the current study and the study from 2015. This delay should be accessible to CI fitting specialists in the fitting software.
Generally, we expect that these findings also apply to CI users with processors implementing temporal processing strategies different from the one used here. To our knowledge, the processing delays of the current CI systems from Cochlear and Advanced Bionics are also not yet matched to the delays introduced by the different kinds of contralateral HAs in bimodal users. However, the device delay mismatch must first be quantified for the different bimodal configurations, which has not yet been done systematically for CI systems other than the one used in this study.
Reduction of the device delay mismatch with an across-frequency delay, as presented in the current study, is not necessarily the best method for matching processing delays. A more precise technical interaural alignment would be possible by applying frequency-dependent delay values. However, the question of whether a perfect (i.e., frequency-specific) temporal adjustment would enable further improvements in localization remains unanswered.
The fact that no correlation was found between the DL benefit (defined as the difference between the rms errors in the bimodal configuration in the no-CI-delay mode and in the with-CI-delay mode) and the HA delay τHA might be due to the generally large τHA values in the group of bimodal listeners included in this study. We assume that it makes little difference whether the device delay mismatch is 8 or 13 times larger than the largest physiologically possible ITD. For device delay mismatches closer to the physiological ITD range, however, a correlation would be conceivable. This question could not be explored here because of the comparably large τHA values found in this study.
A way to deal with small asymmetric processing delays might be to compensate the interaural timing mismatch with ILDs (Dietz, Ewert, & Hohmann, 2009). These authors successfully demonstrated the compensation of ITDs within the physiological range by ILDs in NH listeners. Interaural phase differences up to 135° in 1 kHz pure tone stimuli (corresponding to TFS-ITDs of 375 μs) were used. To produce a centered sound image, an ILD in the range of 5 to 15 dB had to be applied.
In summary, the approach of minimizing the device delay mismatch by delaying the CI stimulation as described in the present work would be a relatively simple modification of the CI system that is likely to improve sound localization accuracy in many bimodal listeners.
Plasticity of the Auditory System Regarding Interaural Timing
The bimodal subjects in our study had between 0.5 and 8.5 years of bimodal experience. Still, their localization abilities improved with the reduced device delay mismatch within 1 hr of acclimatization. This raises the question of whether the brain can adapt to a device delay mismatch.
Seebacher, Franke-Trieger, Weichbold, Zorowka, and Stephan (2019) investigated the effect of delaying the CI stimulation acutely in single-sided deaf CI users provided with MED-EL CI systems. They found that sound localization accuracy varied significantly with a delay as small as 0.5 ms.
The long-term effect of a unilateral delay in a similar size was investigated by Trapeau and Schönwiesner (2015). The authors constantly delayed sound input to one ear in adult NH listeners using a programmable ear plug. They applied a reference ITD of 0.625 ms, measured sound localization accuracy repeatedly, and found that a recalibration of the auditory system was possible to some extent. However, localization accuracy was worse (mean signed error −10°) with this delay than without (mean signed error −0.07°) in all 12 participants even after an acclimatization period of 7 days.
Mossop and Culling (1998) measured ITD-JNDs for various interaural delays between 0 and 3 ms in four adult NH listeners. ITD-JNDs with an interaural delay of 3 ms were around 14 times as large as with 0 ms (averaged across their four test subjects). The authors also reported that for interaural delays above the physiological ITD range, subjects needed more training to provide consistent results.
The NH listeners who participated in the virtual sound source localization task in the present study could not cope with a unilateral delay of 7 ms, neither when it was acutely applied nor after 1 hr of audio-visual training with this delay. It can therefore be concluded that a unilateral delay of 7 ms is not easily compensated by the human auditory system.
Considering these outcomes, there appears to be some degree of plasticity of the auditory system with regard to ITD sensitivity in relation to interaural delays within the physiological ITD range. However, for interaural delays outside the physiological ITD range, plasticity seems to be limited.
Conclusions
As HAs and CIs are typically not aligned in terms of timing, bimodal listeners are exposed to a device delay mismatch which alters ITDs. In this study, the effect of the device delay mismatch on sound localization accuracy in the frontal-horizontal plane was investigated. Results indicate that a reduction of the device delay mismatch by appropriately delaying the CI stimulation seems to be a promising method to improve sound localization accuracy in bimodal listeners.
Acknowledgments
The authors thank the subjects who participated in this study for their time and effort. Further, the authors thank MED-EL, Innsbruck, Austria and MED-EL Germany GmbH for their support.
Declaration of Conflicting Interests
The authors declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: SZ's institution has received research grants from a manufacturer of cochlear implants, MED-EL.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was funded by the company MED-EL Elektromedizinische Geraete Gesellschaft m. b. H., Innsbruck, Austria and MED-EL Deutschland GmbH.
References
- Arduino. (2018). Arduino web page. Retrieved from www.arduino.cc.
- Bernstein J. G., Goupell M. J., Schuchman G. I., Rivera A. L., Brungart D. S. (2016) Having two ears facilitates the perceptual separation of concurrent talkers for bilateral and single-sided deaf cochlear implantees. Ear and Hearing 37(3): 289–302. doi:10.1097/AUD.0000000000000284. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Blauert J. (1997) Spatial hearing: The psychophysics of human sound localization, revised ed Cambridge, MA: MIT Press. [Google Scholar]
- Dietrich, P., Guski, M., Klein, J., Müller-Trapet, M., Pollow, M., Scharrer, R., & Vorländer, M. (2013). Measurements and room acoustic analysis with the ITA-Toolbox for MATLAB. Paper presented at the 40th Italian (AIA) Annual Conference on Acoustics and the 39th German Annual Conference on Acoustics, Merano, Italy.
- Dietz M., Ewert S. D., Hohmann V. (2009) Lateralization of stimuli with independent fine-structure and envelope-based temporal disparities. Journal of the Acoustical Society of America 125(3): 1622–1635. doi:10.1121/1.3076045. [DOI] [PubMed] [Google Scholar]
- Dillon H., Keidser G., O'Brien A., Silberstein H. (2003) Sound quality comparisons of advanced hearing aids. Hearing Journal 56(4): 30–40. doi:10.1097/01.HJ.0000293908.50552.34. [Google Scholar]
- Dorman M. F., Loiselle L. H., Cook S. J., Yost W. A., Gifford R. H. (2016) Sound source localization by normal-hearing listeners, hearing-impaired listeners and cochlear implant listeners. Audiology and Neurotology 21(3): 127–131. doi:10.1159/000444740. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ewert S. D., Kaiser K., Kernschmidt L., Wiegrebe L. (2012) Perceptual sensitivity to high-frequency interaural time differences created by rustling sounds. Journal of the Association for Research in Otolaryngology 13(1): 131–143. doi:10.1007/s10162-011-0303-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Feddersen W. E., Sandel T. T., Teas D. C., Jeffress L. A. (1957) Localization of high-frequency tones. Journal of the Acoustical Society of America 29: 988–991. [Google Scholar]
- Francart T., Brokx J., Wouters J. (2009) Sensitivity to interaural time differences with combined cochlear implant and acoustic stimulation. Journal of the Association for Research in Otolaryngology 10(1): 131–141. doi:10.1007/s10162-008-0145-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Francart T., Wiebe K., Wesarg T. (2018) Interaural time difference perception with a cochlear implant and a normal ear. Journal of the Association for Research in Otolaryngology 19(6): 703–715. doi:10.1007/s10162-018-00697-w. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Henning G. B. (1974) Detectability of interaural delay in high-frequency complex waveforms. Journal of the Acoustical Society of America 55(1): 84–90. DOI: 10.1121/1.1928135. [DOI] [PubMed] [Google Scholar]
- Henry B. A., Turner C. W., Behrens A. (2005) Spectral peak resolution and speech recognition in quiet: Normal hearing, hearing impaired, and cochlear implant listeners. Journal of the Acoustical Society of America 118(2): 1111–1121. [DOI] [PubMed] [Google Scholar]
- Mills A. W. (1958) On the minimum audible angle. Journal of the Acoustical Society of America 30(4): 237–246. [Google Scholar]
- Moore B. C. (2012) An introduction to the psychology of hearing, 6th ed Bingley, England: Emerald Group Publishing Limited. [Google Scholar]
- Mossop J. E., Culling J. F. (1998) Lateralization of large interaural delays. Journal of the Acoustical Society of America 104(3 Pt 1): 1574–1579. [DOI] [PubMed] [Google Scholar]
- Nelson D. A., Donaldson G. S., Kreft H. (2008) Forward-masked spatial tuning curves in cochlear implant users. Journal of the Acoustical Society of America 123(3): 1522–1543. doi:10.1121/1.2836786. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Rakerd B., Hartmann W. M. (1986) Localization of sound in rooms, III: Onset and duration effects. Journal of the Acoustical Society of America 80(6): 1695–1706. [DOI] [PubMed] [Google Scholar]
- Rayleigh L. (1907) On our perception of sound direction. Philosophical Magazine 13: 214–232. [Google Scholar]
- Riss D., Hamzavi J. S., Blineder M., Honeder C., Ehrenreich I., Kaider A., Arnoldner C. (2014) FS4, FS4-p, and FSP: A 4-month crossover study of 3 fine structure sound-coding strategies. Ear and Hearing 35(6): e272–e281. doi:10.1097/AUD.0000000000000063. [DOI] [PubMed] [Google Scholar]
- Seebacher J., Franke-Trieger A., Weichbold V., Zorowka P., Stephan K. (2019) Improved interaural timing of acoustic nerve stimulation affects sound localization in single-sided deaf cochlear implant users. Hearing Research 371: 19–27. doi:10.1016/j.heares.2018.10.015. [DOI] [PubMed] [Google Scholar]
- Seeber B. U., Baumann U., Fastl H. (2004) Localization ability with bimodal hearing aids and bilateral cochlear implants. Journal of the Acoustical Society of America 116(3): 1698–1709. [DOI] [PubMed] [Google Scholar]
- Stone M. A., Moore B. C. (2003) Tolerable hearing aid delays. III. Effects on speech production and perception of across-frequency variation in delay. Ear and Hearing 24(2): 175–183. doi:10.1097/01.AUD.0000058106.68049.9C. [DOI] [PubMed] [Google Scholar]
- Trapeau R., Schonwiesner M. (2015) Adaptation to shifted interaural time differences changes encoding of sound location in human auditory cortex. Neuroimage 118: 26–38. doi:10.1016/j.neuroimage.2015.06.006. [DOI] [PubMed] [Google Scholar]
- Wess J. M., Brungart D. S., Bernstein J. G. W. (2017) The effect of interaural mismatches on contralateral unmasking with single-sided vocoders. Ear and Hearing 38(3): 374–386. doi:10.1097/AUD.0000000000000374. [DOI] [PubMed] [Google Scholar]
- Yost W. A., Loiselle L., Dorman M., Burns J., Brown C. A. (2013) Sound source localization of filtered noises by listeners with normal hearing: A statistical analysis. Journal of the Acoustical Society of America 133(5): 2876–2882. doi:10.1121/1.4799803. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Yost W. A., Wightman F. L., Green D. M. (1971) Lateralization of filtered clicks. Journal of the Acoustical Society of America 50(6): 1526–1531. [DOI] [PubMed] [Google Scholar]
- Zirn S., Arndt S., Aschendorff A., Laszig R., Wesarg T. (2016) Perception of interaural phase differences with envelope and fine structure coding strategies in bilateral cochlear implant users. Trends in Hearing 20 doi:10.1177/2331216516665608. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Zirn S., Arndt S., Aschendorff A., Wesarg T. (2015) Interaural stimulation timing in single sided deaf cochlear implant users. Hearing Research 328: 148–156. doi:10.1016/j.heares.2015.08.010. [DOI] [PubMed] [Google Scholar]




