Abstract
Objectives.
Bilateral cochlear implant (CI) listeners use independent processors in each ear. This lack of shared hardware prevents control of the timing of sampling and stimulation across ears and precludes the development of bilaterally-coordinated signal processing strategies. As a result, these devices potentially reduce access to binaural cues and introduce disruptive artifacts. For example, measurements from two clinical processors demonstrate that independently-running processors introduce interaural incoherence. These issues are typically avoided in the laboratory by using research processors with bilaterally-synchronized hardware. However, these research processors typically do not run in real-time and are difficult to take into the real world because of their benchtop nature. Hence, the question of whether hardware synchronization alone can reduce bilateral stimulation artifacts (and thereby potentially improve functional spatial hearing performance) has been difficult to answer. The ciPDA research processor, which uses a single clock to drive two processors, presented an opportunity to examine whether synchronization of hardware can have an impact on spatial hearing performance.
Design.
Free-field sound localization and spatial release from masking (SRM) were assessed in ten bilateral CI listeners using both their clinical processors and the synchronized ciPDA processor. For sound localization, localization accuracy was compared within subjects for the two processor types. For SRM, speech reception thresholds were compared for spatially separated and co-located configurations, and the amount of unmasking was compared between synchronized and unsynchronized hardware. No deliberate changes were made to the sound processing strategy on the ciPDA to restore or improve binaural cues.
Results.
There was no significant difference in localization accuracy between unsynchronized and synchronized hardware (p = 0.62). Speech reception thresholds were higher (i.e., worse) with the ciPDA. In addition, although five of eight participants demonstrated greater SRM with synchronized hardware, there was no significant difference in the amount of unmasking due to spatial separation between synchronized and unsynchronized hardware (p = 0.21).
Conclusions.
Using processors with synchronized hardware did not yield a consistent improvement in sound localization or spatial release from masking across individuals, suggesting that synchronization of hardware alone is not sufficient for improving spatial hearing outcomes. Further work is needed to improve sound coding strategies to facilitate access to spatial hearing cues. This study provides a benchmark for spatial hearing performance with real-time, bilaterally-synchronized research processors.
INTRODUCTION
Cochlear implants (CIs) are auditory prosthetics that provide access to hearing for individuals who are deaf. Bilateral cochlear implants (BiCIs) yield improvements over unilateral CIs for spatial hearing tasks such as sound localization and understanding speech in noisy environments (Dunn et al., 2010; van Hoesel and Tyler, 2003; Noble et al., 2008; Seeber et al., 2004). Despite these improvements, there is still a gap in performance when compared to normal hearing (NH) listeners (Dorman et al., 2014; Grantham et al., 2007; Jones et al., 2014; Litovsky et al., 2012; Loizou et al., 2009).
Some of these differences in performance could be due to the fact that modern CIs are monaural systems and are not designed to provide coordinated stimulation across the ears. CI systems typically consist of two distinct components: an external “processor” with a microphone worn behind the ear and an internal “implant” that receives signals from the processor and stimulates the auditory nerve (Zeng et al., 2008). This means that BiCI users have an independent processor in each ear. This lack of shared hardware prevents synchronization of the timing of stimulation between ears, precludes the use of bilaterally-linked signal processing strategies, and may disrupt the delivery of the cues needed for spatial hearing by introducing unwanted stimulation artifacts. If so, using processors that provide synchronized sampling and stimulation may remove some of these artifacts, and thereby improve spatial hearing outcomes. To our knowledge, this topic has not been directly investigated. The development of the CI personal digital assistant (ciPDA), a bilaterally-synchronized CI research processor capable of reconfigurable real-time signal processing, has enabled us to study the impact of synchronized sampling and stimulation on spatial hearing outcomes. The ciPDA was designed as a single CI processor that uses one clock to drive sampling, analysis, and stimulation for both ears. The work discussed here presents one potential use of such bilaterally-synchronized research processors, a field of study that will hopefully continue to grow. In the following sections, we will describe the ciPDA, demonstrate the impact of synchronized vs. unsynchronized electrical stimulation, and then investigate whether synchronized hardware (without explicit encoding of binaural spatial hearing cues by the sound coding strategy) offers improvement in BiCI spatial hearing outcomes over unsynchronized processors.
Cochlear implant Personal Digital Assistant (ciPDA) research platform
The ciPDA was developed at the University of Texas at Dallas as a research platform that uses a single processor to drive two implants. All signal processing is performed on a single Windows Mobile PDA (Microsoft, Redmond, WA). The ciPDA simultaneously controls both the sampling of audio from two behind-the-ear microphones and the stimulation of two Cochlear Nucleus implants. The ciPDA provides speech intelligibility comparable to clinical Cochlear Nucleus sound processors (Ali et al., 2013).
Prior to 2017, there were only two other portable bilateral CI processors capable of real-time processing: (1) the SPEAR3 (Hearworks, Melbourne, VIC, Australia), a programmable body-worn research processor that works with Cochlear Ltd (Macquarie Park, NSW, Australia) Nucleus implants and is equipped with a single digital signal processor connected to a pair of behind-the-ear microphones and stimulator coils; and (2) the Neurelec (now Oticon Medical) Digisonic SP Binaural, a clinical BiCI system with a single behind-the-ear speech processor and an internally implanted wire connecting the two implants. Few studies have been published using these three devices for free-field spatial hearing (van Hoesel and Tyler, 2003; van Hoesel et al., 2008; Verhaert et al., 2012), and the benefits of synchronized hardware have not been studied in depth.
Synchronization of bilateral cochlear implants
In this paper, the term “synchronization” explicitly refers to controlling two implants with processors that have shared hardware, allowing for synchronized sampling and stimulation. This means that, if all else is equivalent, a diotic signal presented to both microphones should result in identical stimulation output. Fig. 1(a) illustrates an example of the stimulation that would result from interaural synchronization. In practice, there is no guarantee that the sampling, signal processing, and stimulation of two arbitrary clinical processors are synchronized. This lack of synchronization can be characterized in two ways (Dieudonné et al., 2020; Francart et al., 2015; van Hoesel et al., 2002; Laback et al., 2015). First, if the processors are not activated simultaneously, there could be a constant offset between the two processors, as depicted in Fig. 1(b). For processors with a stimulation rate of 900 pps, this offset could range from −550 to +550 μs (van Hoesel et al., 2002). Second, there could be interaural jitter, defined here as dynamic offsets across ears, as shown in Fig. 1(c). This could arise from independent clocks in each processor drifting over time. Additionally, processing strategies such as the Advanced Combination Encoder (ACE) strategy can introduce differences in channel selection across the ears due to the interaction of the acoustic effects of the head and ACE’s peak-picking strategy (Kan et al., 2018). It should be emphasized that our use of the term “synchronization” in this manuscript does not refer to the deliberate encoding of binaural cues; rather, it refers purely to the timing of sampling by the analog-to-digital converter and of electrical stimulation between the two ears.
Figure 1:
Example stimulation cycles for different kinds of synchronization. Left pulses are blue; right pulses are red. (a) Synchronized pulses, (b) Constant offset, (c) Randomly jittered pulses.
In general, to characterize the synchronization of CI processors, a stimulus can be presented in the free field, equidistant from two processors placed next to each other (so as not to introduce level or spectral biases), and the electrode output can be recorded. From the recordings, the arrival times of individual pulses can be compared across the ears on a pulse-by-pulse basis. The mean of these pulse time differences is then an estimate of the constant offset across ears (Fig. 1(b)). To characterize the jitter of pulses between processors (Fig. 1(c)), the interaural coherence (IC) between the left and right recordings can be calculated to summarize the overall fluctuations of processor output as a single metric (see Goupell and Litovsky (2015)).
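Both metrics can be computed directly from such recordings. The sketch below (Python, with illustrative variable names; not the measurement code used in the study) estimates the constant offset from paired pulse times and takes IC as the peak of the normalized cross-correlation between the two recordings, one common operationalization of coherence; the definition used in the study follows Goupell and Litovsky (2015).

```python
import numpy as np

def constant_offset(left_pulse_times, right_pulse_times):
    """Mean and SD of pulse-by-pulse timing differences across ears (input units)."""
    n = min(len(left_pulse_times), len(right_pulse_times))
    diffs = np.asarray(left_pulse_times[:n]) - np.asarray(right_pulse_times[:n])
    return float(diffs.mean()), float(diffs.std())

def interaural_coherence(left_rec, right_rec):
    """Peak of the normalized cross-correlation between left and right recordings."""
    left = np.asarray(left_rec, dtype=float)
    right = np.asarray(right_rec, dtype=float)
    left -= left.mean()
    right -= right.mean()
    xcorr = np.correlate(left, right, mode="full")
    norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    return float(np.max(np.abs(xcorr)) / norm)
```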
We assumed that the stimulation from two unsynchronized processors could be modeled by drawing the pulse time in each ear from independent uniform distributions over the pulse period for a given channel and taking their difference (results summarized in Fig. 2). Our simulations showed that when N = 1, i.e., only one maximum can be selected, the pulse is scheduled at the same time in every frame (at the beginning of the frame). When N = 2, there are two possible times at which the pulse in each ear can be scheduled (leading to four equally likely outcomes), and so on as N is increased. The simulated interaural offset converged to 370 μs. It should be noted that this simulation does not account for potential additional timing errors introduced by sound coding strategies that involve peak-picking and assumes identical inputs to both microphones. It therefore represents a best-case scenario for unsynchronized hardware.
Figure 2:
Simulated metrics of synchronization. Varying the simulated number of maxima N, which controls the maximum amount of random displacement, produces different average offsets and coherence values. For this model, the pulse timing in each ear is assumed to be an independent and identically distributed (i.i.d.) uniform random variable taking any value on the interval [0, a] with equal probability (i.e., X ~ U[0, a]). For example, if a = 1/900 s (≈1.1 ms), the stimulation period of a processor set to 900 pps, the expected absolute interaural difference (a triangularly distributed random variable) is a/3 ≈ 370 μs. Processor outputs were simulated as five sequences of 900 pulses at five different electrodes. Each pulse time was selected from uniform distributions with lower bounds of 0 ms and upper bounds ranging from 0 to 1.1 ms. Both average offset (Fig. 2(a)) and IC (Fig. 2(b)) were calculated for these simulations as a function of the number of maxima N (i.e., pulse times were drawn as discrete random variables, with N varying from 1 to 8).
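A minimal Monte Carlo sketch of the limiting (continuous) case is shown below, assuming each ear's pulse time is drawn independently and uniformly over the full 900-pps period; it reproduces the ~370 μs expected offset (the mean of |X − Y| for i.i.d. uniform variables is one third of the interval). The discrete N-maxima variants used for Fig. 2 would restrict each draw to a small set of possible scheduling times per frame (one option when N = 1, two when N = 2, and so on).

```python
import numpy as np

rng = np.random.default_rng(0)
period_s = 1.0 / 900.0          # per-channel stimulation period at 900 pps (~1.1 ms)
n_pulses = 5 * 900              # five simulated sequences of 900 pulses

# Independent, uniformly distributed pulse times within each frame for each ear
left = rng.uniform(0.0, period_s, n_pulses)
right = rng.uniform(0.0, period_s, n_pulses)

mean_offset_us = np.mean(np.abs(left - right)) * 1e6
print(f"Simulated mean interaural offset: {mean_offset_us:.0f} us")  # approx. 370 us (period / 3)
```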
These predictions were validated with measurements using the clinically available Freedom® and Nucleus 5® processors from Cochlear Ltd and the ciPDA research platform (Fig. 3(a) and (b)). From the measurements, the constant offset was 338 (±244 standard deviation (SD)) μs for the Freedom processors and 379 (±229 SD) μs for the N5 processors. Mean IC was 0.15 for the Freedom processors and 0.61 for the N5 processors. In contrast, the constant offset between the two ears for the ciPDA was estimated as 6 (±6 SD) μs, and the mean IC was 0.93. Note that the offset measured for the ciPDA falls below the time resolution of the measurement equipment (12 μs). These values match those predicted by our simulations, suggesting that the ciPDA is capable of near-perfect synchronization, while the clinical processors are unable to provide time-synchronized stimulation across the two ears. This is further illustrated in Fig. 4, which shows examples of left and right pulse timings taken from the recordings used to generate Fig. 3(a) and (b).
Figure 3:
Measured metrics of synchronization. Measurements were conducted in a sound booth (2.82 × 3.04 × 2.01 m) (Acoustic Systems, Austin, TX). Processor earpieces were placed side-by-side at a height of 97 cm, 80 cm from a loudspeaker (Tannoy Reveal 402, Coatbridge, Scotland). A constant-amplitude sinusoid, matched to the center frequency of the target electrode channel, was presented at 60 dB SPL from the loudspeaker. Voltage outputs from the electrode pair were recorded using an “implant in a box” (CI24RE implants) provided by Cochlear, Ltd. The implant was attached to a bank of 8.2 kOhm resistors, and voltages were measured using a National Instruments (Austin, TX) data acquisition card (NI USB-6343). Threshold and most comfortable levels in the MAP were set to 100 and 200 current units, respectively. The pulse rate was set at 900 pulses per second and the number of maxima for ACE processing was set to 8.
Figure 4:
Examples of pulsatile outputs recorded from a matched pair of left and right electrodes from various cochlear implant processors. Each example was recorded from electrode number 12. Recordings show the output from a National Instruments (Austin, TX) data acquisition card (NI USB-6343).
Lack of hardware synchronization may disrupt spatial hearing
It is important to recognize that unsynchronized cochlear implants introduce uncontrolled and unknown timing variations in electrical stimulation across the ears. The impact of these timing variations on free-field spatial performance has not been directly studied, yet they might partially explain the gap in performance on spatial hearing tasks between BiCI and NH listeners. For example, low interaural coherence of electrical stimulation across the ears could disrupt the availability of binaural envelope cues. Using synchronized hardware could then potentially improve the interaural coherence of the signal envelope, or reduce onset jitter, which may lead to an improvement of spatial hearing outcomes.
In the following, we investigate the impact of using synchronized hardware on the free-field spatial hearing of CI listeners. We compared performance on two spatial hearing tasks using unsynchronized clinical processors and a synchronized bilateral research processor, the ciPDA. We hypothesized that using synchronized hardware might improve spatial hearing outcomes in BiCI listeners because the synchronized sampling and stimulation leads to an improvement in the interaural coherence of electrical stimulation.
METHODS
Participants
Ten post-lingually deafened BiCI users participated in this study (see Table I for listener profiles). Listeners traveled to the University of Wisconsin-Madison for testing and received payment and travel reimbursement. All listeners had experience with the testing setup and tasks from prior visits, and had documented sensitivity to binaural cues when delivered through direct electrical stimulation with benchtop research platforms. Experimental procedures followed the National Institutes of Health regulations and were approved by the University of Wisconsin-Madison’s Health Sciences Institutional Review Board.
TABLE 1.
Profile and etiology of listeners. If only one implant or external processor type is listed, the device is the same in both ears. FRE = Freedom, N5 = Nucleus 5, N6 = Nucleus 6 processors.
ID | Age | Gender | Etiology | Years implanted (L/R) | Pulse Rate (pps) | Implant Type (L/R) | External Processor (L/R) |
---|---|---|---|---|---|---|---|
IAJ | 70 | F | Progressive loss beginning at age 3yrs | 18/11 | 1200 | CI24M/CI24R | FRE/FRE |
IAZ | 81 | M | Hereditary adult-onset | 8/7 | L1200, R900 | CI24RE (CA) | N5 |
IBF | 63 | F | Hereditary | 7/9 | 900 | CI24RE | N6 |
IBK | 74 | M | Hereditary adult-onset | 11/5 | 900 | CI24R/CI24RE | N5/FRE |
IBO | 49 | F | Otosclerosis | 3/7 | 1200 | CI512/CI24RE | N5 |
IBY | 51 | F | Unknown adult-onset | 7/3 | 900 | CI24RE/CI512 | FRE/N5 |
ICB | 64 | F | Hereditary, Progressive loss beginning at age 9yrs | 9/12 | 1800 | CI24RE | N5 |
ICF | 72 | F | Otosclerosis | 3/2 | 900 | CI512 | N5 |
ICJ | 65 | F | Unknown adolescent-onset | 4/4 | 900 | CI512 | N5 |
ICO | 33 | F | Progressive loss beginning in early childhood | 2/2 | 900 | CI24RE (CA) | N5 |
Devices
A full description of the ciPDA can be found in Ali et al. (2013), and only details relevant to this project are described here. The ciPDA has only one clock that drives all hardware, and so is synchronized as described in the Introduction. The device was set to operate in real-time processing mode, meaning that sound was processed in a continuous fashion similar to regular clinical processors. The input sampling frequency of the ciPDA was 22,050 Hz with 11.6 ms buffer frames. We did not modify the ACE sound coding strategy implemented in the ciPDA. The ciPDA implementation is based on the description provided in Vandali et al. (2000), which is fundamentally similar to that used in Freedom sound processors but without the dynamic range optimization and automatic gain control features. For the ciPDA, this means that audio is recorded simultaneously from the left and right microphones, but regular ACE processing is applied separately to the left and right signals using mapping parameters derived from the listeners’ left and right clinical MAPs, respectively. Finally, the outputs of the separate ACE processing are delivered to the implant in each ear, such that the timing of the delivery of pulses is simultaneous across the ears.
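The structure of this processing chain can be sketched as follows (an illustrative outline only, not the ciPDA firmware; the ace() callable and MAP objects are hypothetical placeholders): a single clocked frame loop captures both microphone buffers, each ear is processed independently with its own MAP, and the resulting pulse frames are delivered on a common timing grid.

```python
import numpy as np

FS = 22_050                           # input sampling rate (Hz)
FRAME_LEN = int(round(0.0116 * FS))   # ~11.6 ms buffer frames

def process_frame_pair(left_frame, right_frame, left_map, right_map, ace):
    """One iteration of a bilaterally synchronized frame loop.

    A single clock triggers the capture of both microphone buffers; ACE-style
    processing (the hypothetical `ace` callable) runs independently per ear
    with that ear's MAP; both pulse frames are then scheduled on the same
    stimulation clock, so pulse delivery is simultaneous across ears.
    """
    left_pulses = ace(left_frame, left_map)
    right_pulses = ace(right_frame, right_map)
    return left_pulses, right_pulses
```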
ciPDA mapping, loudness matching, and acclimatization procedures
Prior to free-field psychophysical testing, the ciPDA was programmed individually for each BiCI listener with the listener’s clinical MAPs (i.e., the programmed instructions assigning current levels to the individual electrodes). This means that the stimulation rates and number of maxima were the same between clinical devices and the ciPDA as well. Adjustments to the volume and sensitivity settings on the ciPDA processor were made to ensure: 1) comfortable loudness for listening, 2) minimal background system noise, 3) similar perceived loudness for both implants, and 4) similar loudness to the listener’s clinical processors. Sensitivity values for each listener are collected in Table 2. Listeners (except IAZ) had no experience using the ciPDA prior to testing, but were given as much time as needed to acclimate to the device in the lab before testing began. During acclimatization, subjects did not have exposure to the experimental tasks, but conversed with the researchers to adjust to the new device.
TABLE 2.
Sensitivity settings for both clinical devices and ciPDA.
ID | Clinical (L/R) | ciPDA (L/R) |
---|---|---|
IAJ | 12/12 | 7/7 |
IAZ | 11/11 | 1/1 |
IBF | 12/12 | 3/3 |
IBK | 10/10 | 13/11 |
IBO | 12/12 | 9/4 |
IBY | 12/12 | 12/12 |
ICB | 12/12 | 10/12 |
ICF | 12/12 | 7/7 |
ICJ | 12/12 | 4/3 |
ICO | 12/12 | 6/7 |
Sound localization
Sound localization testing was conducted in a sound attenuated booth with internal dimensions of 2.90 × 2.74 × 2.44 m (IAC, RS 254S) and additional sound absorbing foam attached to the inside walls to reduce reflections. A Tucker-Davis Technologies (Alachua, FL) System 3 was used to select and drive an array of 19 loudspeakers (Cambridge SoundWorks, North Andover, MA) arranged on a semi-circular arc of 1.2 m radius. Loudspeakers were positioned in 10° increments along the horizontal-plane between ±90° and were hidden behind an acoustically transparent curtain.
Stimuli were trains of 4 pink noise bursts (each burst 170 ms in duration, 50 ms inter-stimulus interval) presented at approximately 50 dB SPL. Stimuli were calibrated at 50 dB SPL using a digital precision sound level meter (System 824, Larson Davis; Depew, NY). In order to minimize the use of monaural cues, level roving (±4 dB) and spectral roving were applied to the stimuli. Spectral roving was applied by dividing the energy spectrum of the stimulus into 50 critical bands and assigning a random intensity (±10 dB) to each band (Jones et al., 2014; Majdak et al., 2011; Wightman and Kistler, 1989). Listeners sat in the center of the array with their ears at the same height as the loudspeakers, which were hidden behind a dark, acoustically transparent curtain.
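For illustration, the roving procedure might be implemented along the following lines (a Python sketch under stated assumptions: band edges are log-spaced between assumed limits of 100 Hz and 10 kHz rather than true critical-band boundaries, and the roving ranges are taken from the description above; this is not the stimulus-generation code used in the study).

```python
import numpy as np

def rove_stimulus(x, fs, n_bands=50, level_rove_db=4.0, spectral_rove_db=10.0,
                  f_lo=100.0, f_hi=10_000.0, rng=None):
    """Apply per-band spectral roving (+/- spectral_rove_db) and overall level
    roving (+/- level_rove_db) to a stimulus x sampled at fs Hz. Band edges are
    log-spaced between f_lo and f_hi as a stand-in for true critical bands."""
    rng = rng if rng is not None else np.random.default_rng()
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = (freqs >= lo) & (freqs < hi)
        spectrum[band] *= 10 ** (rng.uniform(-spectral_rove_db, spectral_rove_db) / 20)
    y = np.fft.irfft(spectrum, n=len(x))
    return y * 10 ** (rng.uniform(-level_rove_db, level_rove_db) / 20)
```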
Stimuli were presented randomly through the loudspeakers on the array at a 48-kHz sampling rate. All stimulus presentation and data acquisition were done using MATLAB (Mathworks, Inc., Natick, MA). Both the synchronized and unsynchronized conditions were each tested three times in separate blocks of 95 trials (5 stimulus presentations × 19 locations), with blocks counterbalanced between the two processor types. Trials were self-initiated by pressing a button on a touchscreen monitor that was placed in front of the listener and positioned such that it had a minimal effect on the acoustic stimuli. Listeners indicated their response on a touchscreen interface by placing a marker anywhere on an arc representing the loudspeaker array. Visual markers were placed at 45° increments both along the curtain in the room and on the GUI, in order to facilitate perceptual correspondence between the spatial locations of the loudspeakers in the room and the arc image on the touch screen. Root mean square (RMS) error, computed from the differences between the target location of each sound and the perceived response location, was calculated for each participant.
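As a concrete illustration (not the analysis script used in the study), the RMS error for one participant and condition can be computed as follows:

```python
import numpy as np

def rms_error(target_deg, response_deg):
    """Root mean square difference between target and response azimuths (degrees)."""
    target = np.asarray(target_deg, dtype=float)
    response = np.asarray(response_deg, dtype=float)
    return float(np.sqrt(np.mean((response - target) ** 2)))

# Example with hypothetical data: rms_error([-30, 0, 60], [-20, 10, 40]) -> ~14.1 degrees
```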
Fig. 5 shows simulated measurements of the IC of the localization stimuli, with Fig. 5(a) and Fig. 5(d) showing the IC for synchronized and unsynchronized hardware, respectively. The simulations show that ACE processing running independently in the two ears potentially decreases overall IC compared to the pure-tone measurements shown in the Introduction. However, the lack of synchronized stimulation appears to decrease IC further, as one might expect from clinical processors.
Figure 5:
Interaural coherence (IC) as a function of spatial location and SNR for simulated spatialized sounds. The sounds used for the simulation were the same as those used in the psychophysical listening tests, and were spatialized using head-related impulse responses measured in the ear canal of a KEMAR (Knowles Electronics Manikin for Acoustic Research, G.R.A.S. Sound & Vibration, Holte, Denmark) manikin in the same room as that used for experimental testing. ACE processing was simulated using a MATLAB implementation developed for the CCi-MOBILE. MAPs were programmed with 22 active electrodes, standard frequency allocation tables (FATs), and T and C levels of 100 and 150 CUs, respectively. Five repetitions of the stimulus were simulated per loudspeaker location, and IC was calculated for each electrode channel and averaged over repetitions. Unsynchronized processor output was simulated as in the Introduction, with a random displacement of pulse timing of up to 555 μs. Top row shows the IC for simulated synchronized hardware: (a) localization stimuli, (b) co-located speech stimuli, and (c) spatially separated speech stimuli. Bottom row shows the IC for simulated unsynchronized hardware: (d) localization stimuli, (e) co-located speech stimuli, and (f) spatially separated speech stimuli.
Spatial release from masking
Spatial release from masking (SRM) was evaluated in the same room as sound localization. On each trial, listeners identified a target consonant-nucleus-consonant word from fifty possible choices, with icons matching the words displayed on a computer monitor. The target word was spoken by a male talker and was always presented from the front loudspeaker. Masking speech consisted of two different Harvard IEEE sentences spoken by a female talker, presented at 50 dB SPL and beginning 250 ms before and ending 250 ms after the target word. Speech reception thresholds (SRTs) were measured for two masker conditions: (1) co-located, with target and maskers all presented from the front loudspeaker, and (2) symmetrically separated, with maskers presented from loudspeakers at −90° and +90°. The level of the target was adaptively varied using a 2-down 1-up algorithm to estimate the 70.7% correct point (Levitt, 1971). Step sizes were 3 dB for the first 3 reversals and 2 dB for the final 9 reversals. The SRT was calculated as the average signal-to-noise ratio over the final eight reversals. Two interleaved adaptive tracks were measured for each spatial configuration and listening condition, and each configuration was tested twice. SRTs in the two spatial configurations were used to compute SRM as SRM = SRT(co-located) − SRT(symmetric). Two listeners (IDs: ICO and ICF) did not complete testing on these conditions due to time constraints.
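A small sketch of the threshold and SRM calculations described above (illustrative Python, not the experiment code; reversal levels are assumed to be stored as SNRs in dB):

```python
import numpy as np

def srt_from_reversals(reversal_levels_db, n_final=8):
    """SRT as the mean SNR over the final reversals of a 2-down 1-up track
    (converges on 70.7% correct; Levitt, 1971)."""
    return float(np.mean(reversal_levels_db[-n_final:]))

def spatial_release_from_masking(srt_colocated_db, srt_symmetric_db):
    """SRM (dB) = SRT with co-located maskers minus SRT with symmetrically separated maskers."""
    return srt_colocated_db - srt_symmetric_db
```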
Fig. 5 shows the simulated effects of synchronized (subplots b and c) vs. unsynchronized (subplots e and f) hardware on IC for the speech stimuli used in these experiments. The simulations show that, while independent ACE processing in each ear decreased the overall IC relative to the pure-tone measurements shown in the Introduction, the lack of synchronized stimulation further reduced IC, as one might expect from clinical processors.
Statistical analysis
All analyses were completed using R v.3.6.1. Differences in sound localization were analyzed with a paired-sample t-test comparing localization error for synchronized and unsynchronized hardware. Speech reception thresholds were fit with a linear mixed-effects model using the lmer function from the lme4 package (version 1.1-21), with individual listeners as a random effect and spatial configuration and synchronization condition as fixed effects. Mean scores were estimated using emmeans from the emmeans package (version 1.4.6). Overall spatial release from masking was compared for synchronized vs. unsynchronized conditions with a paired-sample t-test.
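For readers who prefer Python, a roughly equivalent formulation is sketched below using scipy and statsmodels. This is an illustration only; the study's analysis was run in R with lme4 and emmeans, and the data frame column names here are hypothetical.

```python
import pandas as pd
from scipy.stats import ttest_rel
import statsmodels.formula.api as smf

def paired_localization_test(rms_sync, rms_unsync):
    """Paired-sample t-test on per-listener RMS localization errors."""
    return ttest_rel(rms_sync, rms_unsync)

def fit_srt_model(df: pd.DataFrame):
    """Linear mixed-effects model for SRTs: fixed effects of synchronization
    condition and spatial configuration (with interaction), random intercept
    per listener. Expects columns 'srt', 'sync', 'config', and 'listener'."""
    model = smf.mixedlm("srt ~ sync * config", data=df, groups=df["listener"])
    return model.fit()
```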
RESULTS
Sound localization
Sound localization performance was evaluated using root mean square (RMS) error, which encompasses both the accuracy and the variance of responses and is considered the most descriptive overall measure of sound localization (Hartmann, 1983). Fig. 6 shows the RMS error for all listeners using both synchronized and unsynchronized hardware. Five out of ten listeners had smaller RMS error with synchronized hardware than with unsynchronized hardware, and one listener had the same RMS error in both conditions. Mean RMS error was comparable between the synchronized (29.7° ±6.9° SD) and unsynchronized (30.8° ±8.9° SD) conditions. A two-sided paired-sample t-test revealed no significant difference between synchronization conditions [t(9) = 0.414, p = 0.69]. On an individual level, some participants showed differences in error larger than 6° between conditions (e.g., ICJ, IAZ), with the majority of participants showing errors of about 25–35° in both conditions.
Figure 6:
Comparing root mean square (RMS) error in sound localization for listeners using unsynchronized processors (their clinical devices) vs. synchronized hardware (the ciPDA research processor). Lower error scores are better; listeners whose markers fall in the shaded region performed better with synchronized hardware. Error bars indicate group mean and standard deviation.
Speech reception thresholds and spatial release from masking
Fig. 7 summarizes the speech reception thresholds (SRTs) for each combination of synchronization condition and spatial configuration. Statistical analysis of the SRTs revealed significant effects of synchronization of hardware [F(1,128) = 59, p < 0.001] and spatial configuration [F(1,128) = 29, p < 0.001]. There was no interaction between synchronization condition and spatial configuration [F(1,128) = 0.43, p = 0.51]. Post hoc tests revealed a significant difference in SRTs due to synchronization condition [t(128) = −7.7, p < 0.001]. SRTs with the synchronized hardware were significantly worse (higher), with an estimated marginal mean of 4.17 dB compared with 1.51 dB for the unsynchronized hardware. There was also a significant difference in SRTs due to spatial configuration [t(126) = 4.9, p < 0.001]: the estimated marginal mean was 3.31 dB in the co-located configuration and −0.65 dB in the symmetric configuration.
Figure 7:
Average SRTs for unsynchronized (clinical) and synchronized (ciPDA) hardware in both symmetric and co-located configurations. Error bars represent standard deviation. All pairwise comparisons are significant at p < 0.05 except between the unsynchronized co-located and synchronized symmetric conditions. Data plotted here were used to calculate spatial release from masking.
Fig. 8 shows average SRM for synchronized and unsynchronized hardware. Five out of eight listeners demonstrated greater SRM with the synchronized hardware. Two of those listeners had more than 3 dB of additional SRM with synchronized hardware, a clinically significant improvement (Hawley et al., 2004). However, as a group, there was no significant difference in SRM between synchronization conditions [t(7) = −1.36, p = 0.2148]. Mean SRM was 4.96 dB (±3.98 SD) and 3.16 dB (±1.68 SD) for the synchronized and unsynchronized conditions, respectively.
Figure 8:
Comparing SRM for unsynchronized (clinical processors) and synchronized (ciPDA) conditions. Error bars indicate group mean and standard deviation. Higher scores indicate more release from masking. Shaded region indicates better SRM with the synchronized stimulation. Listeners outside the dashed region demonstrated a clinically-relevant improvement in SRM.
Additionally, the difference in performance between synchronized and unsynchronized conditions was explored by calculating the correlation coefficient between individuals’ RMS error and SRM scores. The correlation coefficient was −0.44 (p = 0.26), indicating no statistically significant correlation between the outcomes of the two experiments.
DISCUSSION
This study was designed to investigate whether the use of synchronized bilateral cochlear implant hardware would improve the free-field spatial hearing of BiCI listeners compared to unsynchronized clinical processors. Listeners were tested with a synchronized research processor (the ciPDA) and with the unsynchronized clinical processors that they used every day. While some individuals performed better with synchronized hardware, there was no significant improvement in performance for sound localization or spatial release from masking.
Sound localization with synchronized versus unsynchronized hardware
We hypothesized that providing synchronized stimulation might improve sound localization performance, in part due to the increased coherence of electrical stimulation across the ears. This hypothesis was based on the observation that the coherence of acoustic signals at the two ears can have an impact on sound localization performance in NH and CI listeners (Kerber and Seeber, 2013; Monaghan et al., 2013; Rakerd and Hartmann, 1985, 2010). Hence, one might predict that the incoherence in the electrical stimulation due to unsynchronized hardware may also have an impact on sound localization performance in CI users. Further, synchronized hardware might better deliver envelope ITDs, though no explicit encoding of binaural cues was attempted here. Kan et al. (2018) showed that ACE processing can faithfully encode acoustic signal envelopes and argued that good envelope ITD encoding is theoretically possible. However, they showed that envelope ITD encoding may be disrupted by the jitter of unsynchronized processor hardware and the interaction of the physical acoustics of the head with ACE processing. Even so, the envelope ITDs delivered with unsynchronized clinical processors were within 100 μs of the target ITD. Hence, it is reasonable to theorize that with synchronized hardware, sensitivity to envelope ITDs could be improved, leading to improved sound localization ability. The psychoacoustic effects of these observations were not evaluated in the study by Kan et al., but prior work has shown that BiCI users can access ITDs in the envelopes of high-rate pulse carriers when measured with benchtop bilaterally-synchronized research processors (van Hoesel et al., 2009; Laback et al., 2015; Majdak et al., 2006; Noel and Eddington, 2013). Hence, there may be potential improvement in sound localization performance if, through synchronized stimulation, the envelope ITDs were more coherently presented.
However, there was no significant difference in localization performance between synchronized vs. unsynchronized hardware. The RMS errors found in this study matched those of previously reported studies. For example, average RMS errors in this study, 29.7° (±6.9° SD) and 30.8° (±8.9° SD) for synchronized and unsynchronized hardware, respectively, are comparable to the 30.8° (±10.0° SD) reported in Grantham et al. (2007) and 27.9° (±12.3° SD) reported in Jones et al. (2014). Localization error was also similar to that achieved with the Digisonic SP Binaural cochlear implant, 35° (±17° SD), although localization for that experiment was only tested with five loudspeakers spaced by 45° (Verhaert et al., 2012).
Two listeners had outcomes that were different compared with the rest of the group. Listener IAZ had nearly 20° greater error using unsynchronized versus synchronized hardware, indicating that synchronization of electrical stimulation may have facilitated better sound localization. IAZ was the only participant who had different stimulation rates in their left and right ear processors (see Table 1). They also had prior experience with the ciPDA at the University of Texas at Dallas. Listener ICJ had the opposite outcome as IAZ, as their RMS error increased when using synchronized hardware.
The lack of significant improvement in localization performance appears to agree with the results reported in Jones et al. (2014), who also found no difference in NH sound localization performance when tested with an eight-channel vocoder simulation using correlated vs. uncorrelated noise carriers. Further, studies in NH listeners have shown that envelope ITD cues may not be as useful for sound localization as ITDs in the low-frequency fine structure of a stimulus (Macpherson and Middlebrooks, 2002; Yost, 2017). As suggested by Yost (2017), while amplitude modulation may provide useful ITD information in headphone studies, these cues do not appear to contribute to free-field sound localization accuracy. Additionally, evidence from Seeber and Fastl (2008) suggests that even BiCI participants with excellent sound localization ability may not utilize envelope ITDs for localization. Given that low-frequency ITDs are considered more important for sound localization (Wightman and Kistler, 1992), but these cues were not deliberately encoded in the present experiment, it is not unexpected that improvements in sound localization may be small. It has been shown that ITDs encoded into low-rate electrical pulse trains can lead to ITD sensitivity within the range of NH listeners in some BiCI listeners (Laback et al., 2015; Litovsky et al., 2012; Thakkar et al., 2018). However, achieving such good thresholds requires that ITDs be presented to select pairs of electrodes across the ears, and that factors such as loudness balancing and pitch (or place) matching be taken into consideration (Kan and Litovsky, 2015). The lack of control over these factors in the present experiment could have contributed to the lack of significant difference reported here.
Spatial release from masking with synchronized versus unsynchronized hardware
SRTs were overall higher with synchronized than with unsynchronized hardware. This means that listeners needed relatively higher signal-to-noise ratios (SNRs) to achieve similar speech understanding when using synchronized hardware. This was unexpected, as it was previously reported that listeners had similar speech understanding scores with the ciPDA and clinical processors (Ali et al., 2013). However, that study measured percent correct speech understanding at three SNR levels (10 dB, 5 dB, and quiet) with no spatial variation, and listeners were given a “short training” to acclimate to the new processor. In our study, speech reception thresholds were estimated for 70.7% correct using an adaptive method. We believe the higher thresholds with the ciPDA can be attributed to two factors. First, several listeners reported quiet but audible system noise with the ciPDA. This noise was due to a fault in the hardware design of the device, as the circuit supplying power was not sufficiently isolated from the analog-to-digital converter; this has since been corrected in later generations of the device. To compensate for this system noise, the sensitivity was adjusted to improve the stimulus-to-system-noise ratio, but this adjustment may not have been enough to overcome the system noise. Further, it should be noted that the sensitivity setting in the ciPDA implementation is a front-end input gain scalar, and no automatic gain control is implemented. Hence, even at the same loudness-matched levels, the clinical processors may have had a small SNR advantage compared to the ciPDA. Second, it is likely that listeners were far more accustomed to listening with their clinical devices and may have had some difficulty adjusting to the small differences in the signal processing stages of the ciPDA.
Overall SRM results were comparable to those reported by Loizou et al. (2009), whose listeners used the SPEAR3 system. In that study, binaural virtual stimuli were used, but there was no attempt to preserve synchronous delivery of pulses across ears. Loizou et al. (2009) reported a total advantage of separating target and masker of between 2 and 5 dB, but argued this was primarily due to monaural head shadow and that SRM due to binaural interaction was only 0–1 dB. Here, spatial separation led to 4.96 dB and 3.16 dB improvements for the synchronized and unsynchronized processors, respectively, within the range reported by Loizou et al. (2009). However, in the present study, symmetrically placed maskers were used to maximize binaural interaction cues in the separated condition (Litovsky, 2012; Misurelli and Litovsky, 2012, 2015); a condition isolating monaural head shadow was not tested. Hence, it is difficult to specify whether the SRM observed here was due to a binaural advantage such as binaural squelch or summation (Schleich et al., 2004) enhanced by synchronized stimulation, better-ear listening (Rana et al., 2017), or better glimpsing in the spatially separated condition (Hu et al., 2018). Two of the listeners tested here, IAZ and ICB, demonstrated an improvement in SRM greater than 3 dB with synchronized hardware as compared with unsynchronized processors. A ±3 dB difference with these stimuli has been considered a clinically-relevant change in SRM in other studies (e.g., Hawley et al., 2004).
Overall, this group of listeners had similar SRM with synchronized and unsynchronized hardware, despite overall higher SRTs with the ciPDA. This may imply that the unmasking benefit of spatial separation is robust to differences in overall SRT. On the other hand, five of eight listeners exhibited more SRM with the synchronized hardware, and the current study may not have been sensitive enough to detect significant improvements. Future experiments using simulations can be designed to parse out how synchronized stimulation impacts the use of ITDs, interaural level differences (ILDs), and binaural unmasking for the conditions tested here.
Limitations of the study
It should be acknowledged that this study has a number of limitations. The focus was on whether the use of synchronized CI hardware (and the resulting improvement in interaural coherence) could improve some aspects of spatial hearing. However, because the design of the ciPDA only allows for synchronous stimulation, CI listeners needed to use their everyday clinical processors for the unsynchronized condition in this study. As listeners were more accustomed to their own devices and had little time to adjust to the new device, it is possible that acute listening with the ciPDA may have masked potential benefits of synchronization. This may be the reason why speech reception thresholds were overall higher with the ciPDA. In addition, there was no automatic gain control in the software chain of the ciPDA platform, whereas listeners using their unsynchronized clinical processors typically had automatic gain control active, which may have provided an advantage.
One important factor was the use of ACE in both conditions. Even though our simulations suggest that IC of stimulation was higher with synchronized vs unsynchronized hardware (Fig. 5), the listening test data suggest that the higher stimulation coherence did not lead to psychoacoustically-relevant outcomes. It is likely that ACE processing, or similar N maxima selection strategies, may disrupt the benefits of using synchronized hardware. Therefore, even if bilateral CI hardware is synchronized, additional signal processing changes might be necessary to obtain improvements in spatial hearing outcomes for BiCI listeners.
It is still feasible that binaural cues were delivered better with the synchronized hardware in the current experiment, but this difference may not have been perceptually measurable with the two tasks used here. For example, perhaps sensitivity to envelope ITDs is improved with synchronized hardware, but listeners are still biased towards relying on ILD cues for sound localization (Aronoff et al., 2010; Seeber and Fastl, 2008). The potential benefits of improved envelope ITD sensitivity are likely subtle, and may require training and acclimatization in order to be observed. Considering the unclear contributions of ITDs and ILDs to SRM, the extent to which improved ILDs and envelope ITDs would aid release from masking is also unclear. To remedy this, future studies could pair free-field testing with basic psychophysical measures of ITD and ILD sensitivity. This may not predict performance in the free field per se, but might yield greater insights into the impact of synchronized stimulation on binaural sensitivity.
Implications for future binaural processing strategies
Research devices like the ciPDA represent the potential for developing binaurally-linked processing strategies. Though a novel signal processing strategy was not tested in this study, this manuscript investigated a crucial step in the process of developing these strategies. At the same time, the limitations of the approach here should be taken into account in evaluating future strategies.
The results presented here agree with other work examining strategies aimed at improving binaural hearing outcomes. For example, Gajecki and Nogueira (2020) investigated the impact of synchronized stimulation and signal processing on speech-in-noise understanding. They found that both synchronized stimulation and synchronized signal processing were necessary to significantly improve SRM. It should be noted that the experimental condition in Gajecki and Nogueira (2020) was somewhat simpler than the one used in the present experiments: the masking talker was located on one side of the head, so there was an ear with a significantly improved signal-to-noise ratio, which means that the benefits of binaural listening such as squelch and summation might have been underestimated. In contrast, the test condition used in this experiment, with maskers located on either side of the head, reduces head shadow effects. As a result, the benefits of synchronized stimulation may be more effectively evaluated, because SRM would otherwise be a combination of squelch, summation, and head shadow effects. The fact that SRM did not significantly improve in this study argues for the need for new bilateral signal processing strategies.
Though using synchronized hardware did not improve spatial performance, it also did not degrade performance on the binaural tasks, which is a finding that should not be overlooked. This benchmark result demonstrates the potential of using synchronized hardware as a platform for conducting spatial hearing research with BiCI users, especially because of its ability to be reprogrammed with novel sound coding strategies that can be tested in real-time. Also, it highlights the need for developing new bilateral CI sound coding strategies that may enhance envelope ITDs (Dieudonné et al., 2020; Monaghan and Seeber, 2016), provide access to low frequency ITDs (Thakkar et al., 2018; Williges et al., 2018), or coordinate gain across the ears to improve the fidelity of ILD cues (Archer-Boyd and Carlyon, 2019; Potts et al., 2019). Although the ciPDA is no longer available, its successor, the CCi-MOBILE, uses similar hardware but with signal processing now performed on Windows (using MATLAB) or Android hardware for greater portability (Hansen et al., 2019). Crucially, platforms like the ciPDA and CCi-MOBILE are necessary for implementing real-time versions of sound coding strategies that rely on access to audio information from both ears (Kan, 2018; Lopez-Poveda et al., 2019), computing ITDs (Thakkar et al., 2018), and/or synchronous and coordinated stimulation (Archer-Boyd and Carlyon, 2019; Kan and Meng, 2021; Potts et al., 2019).
CONCLUSIONS
The ciPDA was designed as a bilaterally-synchronized research processor that provides time-locked sampling and stimulation to the left and right ears. Through simulation, we showed that this time-locked stimulation has the potential to remove interaural incoherence, and we theorized that it could potentially improve spatial hearing outcomes. However, without deliberate changes to the sound coding strategy, the ciPDA did not appear to change performance on spatial hearing tasks when compared to clinical processors. This may have been due to our participants’ limited experience with synchronized stimulation or to the lack of deliberate encoding of spatial hearing cues in the sound coding strategy. Further work is needed to improve spatial hearing outcomes for bilateral cochlear implant users beyond mere hardware synchronization. However, this work provides a reference for future studies measuring spatial hearing performance with bilaterally-synchronized research processors. Such processors will be necessary for the development of novel binaural signal processing strategies to improve spatial hearing outcomes for bilateral cochlear implant users.
ACKNOWLEDGEMENTS
H.J.G. designed experiments and collected data; A.K. designed experiments and contributed to the introduction and discussion; R.Y.L. designed experiments and contributed to writing; S.R.D. provided analysis and critical revision; all authors discussed results and implications and commented on the manuscript. The authors would like to thank all the listeners who traveled to Madison, Wisconsin for the study, Shelly Godar for organizing participant recruitment and travels, Jake Bergal for his help with the device measurements, Tanvi Thakkar and Emily Burg for their insightful feedback, and the late Dr. Philip C. Loizou for his vision in creating the ciPDA. This research was supported by grants from the National Institutes of Health NIH-NIDCD (Grant Nos. R01 DC03083 and R01 5R01DC010494 to R.Y.L.), and also in part by a core grant to the Waisman Center from the NIH-NICHD (Grant No. U54 HD090256).
Financial disclosures/conflicts of interest:
This study was funded by National Institutes of Health NIH-NIDCD (Grant Nos. R01 DC03083 and R01 5R01DC010494 to R.Y.L.), and also in part by a core grant to the Waisman Center from the NIH-NICHD (Grant No. U54 HD090256). There are no conflicts of interest, financial or otherwise.
REFERENCES
- Ali H, Lobo AP, and Loizou PC (2013). “Design and Evaluation of a Personal Digital Assistant-based Research Platform for Cochlear Implants,” IEEE Trans. Biomed. Eng, 60, 3060–3073. doi: 10.1109/TBME.2013.2262712
- Archer-Boyd AW, and Carlyon RP (2019). “Simulations of the effect of unlinked cochlear-implant automatic gain control and head movement on interaural level differences,” J. Acoust. Soc. Am, 145, 1389–1400. doi: 10.1121/1.5093623
- Aronoff JM, Yoon YS, Freed DJ, Vermiglio AJ, Pal I, and Soli SD (2010). “The use of interaural time and level difference cues by bilateral cochlear implant users,” J. Acoust. Soc. Am, 127, EL87–EL92. doi: 10.1121/1.3298451
- Dieudonné B, Van Wilderode M, and Francart T (2020). “Temporal quantization deteriorates the discrimination of interaural time differences,” J. Acoust. Soc. Am, 148, 815–828. doi: 10.1121/10.0001759
- Dorman MF, Loiselle L, Stohl J, Yost WA, Spahr A, Brown C, and Cook S (2014). “Interaural level differences and sound source localization for bilateral cochlear implant patients,” Ear Hear, 35, 633–640. doi: 10.1097/AUD.0000000000000057
- Dunn CC, Noble W, Tyler RS, Kordus M, Gantz BJ, and Ji H (2010). “Bilateral and unilateral cochlear implant users compared on speech perception in noise,” Ear Hear, 31, 296–298. doi: 10.1097/AUD.0b013e3181c12383
- Francart T, Lenssen A, Buechner A, Lenarz T, and Wouters J (2015). “Effect of channel synchrony on interaural time difference perception with bilateral cochlear implants,” Ear Hear, 36, e199–e206. doi: 10.1097/AUD.0000000000000152
- Goupell MJ, and Litovsky RY (2015). “Sensitivity to interaural envelope correlation changes in bilateral cochlear-implant users,” J. Acoust. Soc. Am, 137, 335. doi: 10.1121/1.4904491
- Grantham DW, Ashmead DH, Ricketts TA, Labadie RF, and Haynes DS (2007). “Horizontal-plane localization of noise and speech signals by postlingually deafened adults fitted with bilateral cochlear implants,” Ear Hear, 28, 524–541. doi: 10.1097/AUD.0b013e31806dc21a
- Hansen JHL, Ali H, Saba JN, Charan MCR, Mamun N, Ghosh R, and Brueggeman A (2019). “CCi-MOBILE: Design and Evaluation of a Cochlear Implant and Hearing Aid Research Platform for Speech Scientists and Engineers,” 2019 IEEE EMBS Int. Conf. Biomed. Heal. Informatics (BHI 2019) Proc. doi: 10.1109/BHI.2019.8834652
- Hartmann WM (1983). “Localization of sound in rooms,” J. Acoust. Soc. Am, 74, 1380–1391. doi: 10.1121/1.390163
- Hawley ML, Litovsky RY, and Culling JF (2004). “The benefit of binaural hearing in a cocktail party: Effect of location and type of interferer,” J. Acoust. Soc. Am, 115, 833–843. doi: 10.1121/1.1639908
- van Hoesel RJM, and Tyler RS (2003). “Speech perception, localization, and lateralization with bilateral cochlear implants,” J. Acoust. Soc. Am, 113, 1617–1630. doi: 10.1121/1.1539520
- van Hoesel RJ, Jones GL, and Litovsky RY (2009). “Interaural time-delay sensitivity in bilateral cochlear implant users: effects of pulse rate, modulation rate, and place of stimulation,” J. Assoc. Res. Otolaryngol, 10, 557–567. doi: 10.1007/s10162-009-0175-x
- van Hoesel R, Böhm M, Pesch J, Vandali A, Battmer RD, and Lenarz T (2008). “Binaural speech unmasking and localization in noise with bilateral cochlear implants using envelope and fine-timing based strategies,” J. Acoust. Soc. Am, 123, 2249–2263. doi: 10.1121/1.2875229
- van Hoesel R, Ramsden R, and O’Driscoll M (2002). “Sound-direction identification, interaural time delay discrimination, and speech intelligibility advantages in noise for a bilateral cochlear implant user,” Ear Hear, 23, 137–149. doi: 10.1097/00003446-200204000-00006
- Hu H, Dietz M, Williges B, and Ewert SD (2018). “Better-ear glimpsing with symmetrically-placed interferers in bilateral cochlear implant users,” J. Acoust. Soc. Am, 143, 2128–2141. doi: 10.1121/1.5030918
- Jones H, Kan A, and Litovsky RY (2014). “Comparing sound localization deficits in bilateral cochlear-implant users and vocoder simulations with normal-hearing listeners,” Trends Hear, 18, 1–16. doi: 10.1177/2331216514554574
- Kan A (2018). “Improving Speech Recognition in Bilateral Cochlear Implant Users by Listening With the Better Ear,” Trends Hear, 22, 1–11. doi: 10.1177/2331216518772963
- Kan A, and Litovsky RY (2015). “Binaural hearing with electrical stimulation,” Hear. Res, 322, 127–137. doi: 10.1016/j.heares.2014.08.005
- Kan A, and Meng Q (2021). “The Temporal Limits Encoder as a Sound Coding Strategy for Bilateral Cochlear Implants,” IEEE/ACM Trans. Audio, Speech, Lang. Process. doi: 10.1109/TASLP.2020.3039601
- Kan A, Peng ZE, Moua K, and Litovsky RY (2018). “A systematic assessment of a cochlear implant processor’s ability to encode interaural time differences,” Proc. 2018 APSIPA Annu. Summit Conf. doi: 10.23919/APSIPA.2018.8659694
- Kerber S, and Seeber BU (2013). “Localization in reverberation with cochlear implants: predicting performance from basic psychophysical measures,” J. Assoc. Res. Otolaryngol, 14, 379–392. doi: 10.1007/s10162-013-0378-z
- Laback B, Egger K, and Majdak P (2015). “Perception and coding of interaural time differences with bilateral cochlear implants,” Hear. Res, 322, 138–150. doi: 10.1016/j.heares.2014.10.004
- Litovsky RY (2012). “Spatial release from masking in adults,” Acoust. Today, 8, 18–25.
- Litovsky RY, Goupell MJ, Godar S, Grieco-Calub T, Jones GL, Garadat SN, Agrawal S, et al. (2012). “Studies on bilateral cochlear implants at the University of Wisconsin’s Binaural Hearing and Speech Laboratory,” J. Am. Acad. Audiol, 23, 476–494. doi: 10.3766/jaaa.23.6.9
- Loizou PC, Hu Y, Litovsky R, Yu G, Peters R, Lake J, and Roland P (2009). “Speech recognition by bilateral cochlear implant users in a cocktail-party setting,” J. Acoust. Soc. Am, 125, 372–383. doi: 10.1121/1.3036175
- Lopez-Poveda EA, Eustaquio-Martín A, Fumero MJ, Stohl JS, Schatzer R, Nopp P, Wolford RD, et al. (2019). “Lateralization of virtual sound sources with a binaural cochlear-implant sound coding strategy inspired by the medial olivocochlear reflex,” Hear. Res, 379, 103–116. doi: 10.1016/j.heares.2019.05.004
- Macpherson EA, and Middlebrooks JC (2002). “Listener weighting of cues for lateral angle: The duplex theory of sound localization revisited,” J. Acoust. Soc. Am, 111, 2219–2236. doi: 10.1121/1.1471898
- Majdak P, Goupell MJ, and Laback B (2011). “Two-dimensional localization of virtual sound sources in cochlear-implant listeners,” Ear Hear, 32, 198–208. doi: 10.1097/AUD.0b013e3181f4dfe9
- Majdak P, Laback B, and Baumgartner W-D (2006). “Effects of interaural time differences in fine structure and envelope on lateral discrimination in electric hearing,” J. Acoust. Soc. Am, 120, 2190–2201. doi: 10.1121/1.2258390
- Misurelli SM, and Litovsky RY (2012). “Spatial release from masking in children with normal hearing and with bilateral cochlear implants: Effect of interferer asymmetry,” J. Acoust. Soc. Am, 132, 380–391. doi: 10.1121/1.4725760
- Misurelli SM, and Litovsky RY (2015). “Spatial release from masking in children with bilateral cochlear implants and with normal hearing: Effect of target-interferer similarity,” J. Acoust. Soc. Am, 138, 319–331. doi: 10.1121/1.4922777
- Monaghan JJM, Krumbholz K, and Seeber BU (2013). “Factors affecting the use of envelope interaural time differences in reverberation,” J. Acoust. Soc. Am, 133, 2288–2300. doi: 10.1121/1.4793270
- Monaghan JJM, and Seeber BU (2016). “A method to enhance the use of interaural time differences for cochlear implants in reverberant environments,” J. Acoust. Soc. Am, 140, 1116–1129. doi: 10.1121/1.4960572
- Noble W, Tyler R, Dunn C, and Bhullar N (2008). “Unilateral and bilateral cochlear implants and the implant-plus-hearing-aid profile: comparing self-assessed and measured abilities,” Int. J. Audiol, 47, 505–514. doi: 10.1080/14992020802070770
- Noel VA, and Eddington DK (2013). “Sensitivity of bilateral cochlear implant users to fine-structure and envelope interaural time differences,” J. Acoust. Soc. Am. doi: 10.1121/1.4794372
- Potts WB, Ramanna L, Perry T, and Long CJ (2019). “Improving Localization and Speech Reception in Noise for Bilateral Cochlear Implant Recipients,” Trends Hear, 23, 1–18. doi: 10.1177/2331216519831492
- Rakerd B, and Hartmann WM (1985). “Localization of sound in rooms, II: The effects of a single reflecting surface,” J. Acoust. Soc. Am, 78, 524–533. doi: 10.1121/1.392474
- Rakerd B, and Hartmann WM (2010). “Localization of sound in rooms. V. Binaural coherence and human sensitivity to interaural time differences in noise,” J. Acoust. Soc. Am, 128, 3052–3063. doi: 10.1121/1.3493447
- Rana B, Buchholz JM, Morgan C, Sharma M, Weller T, Konganda SA, Shirai K, et al. (2017). “Bilateral Versus Unilateral Cochlear Implantation in Adult Listeners: Speech-On-Speech Masking and Multitalker Localization,” Trends Hear, 21, 1–15. doi: 10.1177/2331216517722106
- Schleich P, Nopp P, and D’Haese P (2004). “Head shadow, squelch, and summation effects in bilateral users of the MED-EL COMBI 40/40+ cochlear implant,” Ear Hear, 25, 197–204. doi: 10.1097/01.AUD.0000130792.43315.97
- Seeber BU, Baumann U, and Fastl H (2004). “Localization ability with bimodal hearing aids and bilateral cochlear implants,” J. Acoust. Soc. Am, 116, 1698–1709. doi: 10.1121/1.1776192
- Seeber BU, and Fastl H (2008). “Localization cues with bilateral cochlear implants,” J. Acoust. Soc. Am, 123, 1030–1042. doi: 10.1121/1.2821965
- Thakkar T, Kan A, Jones HG, and Litovsky RY (2018). “Mixed stimulation rates to improve sensitivity of interaural timing differences in bilateral cochlear implant listeners,” J. Acoust. Soc. Am, 143, 1428–1440. doi: 10.1121/1.5026618
- Vandali AE, Whitford LA, Plant KL, and Clark GM (2000). “Speech perception as a function of electrical stimulation rate: Using the Nucleus 24 cochlear implant system,” Ear Hear, 21, 608–624. doi: 10.1097/00003446-200012000-00008
- Verhaert N, Lazard DS, Gnansia D, Bébéar J-P, Romanet P, Meyer B, Péan V, et al. (2012). “Speech Performance and Sound Localization Abilities in Neurelec Digisonic® SP Binaural Cochlear Implant Users,” Audiol. Neurotol, 17, 256–266. doi: 10.1159/000338472
- Wightman FL, and Kistler DJ (1989). “Headphone simulation of free-field listening. II: Psychophysical validation,” J. Acoust. Soc. Am, 85, 868–878. doi: 10.1121/1.397558
- Wightman FL, and Kistler DJ (1992). “The dominant role of low-frequency interaural time differences in sound localization,” J. Acoust. Soc. Am, 91, 1648–1661. doi: 10.1121/1.402445
- Williges B, Jürgens T, Hu H, and Dietz M (2018). “Coherent Coding of Enhanced Interaural Cues Improves Sound Localization in Noise With Bilateral Cochlear Implants,” Trends Hear, 22, 233121651878174. doi: 10.1177/2331216518781746
- Yost WA (2017). “Sound source localization identification accuracy: Envelope dependencies,” J. Acoust. Soc. Am, 142, 173–185. doi: 10.1121/1.4990656
- Zeng F-G, Rebscher S, Harrison W, Sun X, and Feng H (2008). “Cochlear implants: system design, integration, and evaluation,” IEEE Rev. Biomed. Eng, 1, 115–142. doi: 10.1109/RBME.2008.2008250