Abstract
Cochlear-implant (CI) users have difficulty understanding speech in the presence of interfering sounds. This study was designed to determine whether binaural unmasking of speech is limited by peripheral or central encoding. Speech was presented to bilateral CI listeners using their clinical processors; unprocessed or vocoded speech was presented to normal-hearing (NH) listeners. Performance was worst for all listener groups in conditions where both the target and interferer were presented monaurally or diotically (i.e., no spatial differences). Listeners demonstrated improved performance compared to the monaural and diotic conditions when the target and interferer were presented to opposite ears. However, only some CI listeners demonstrated improved performance if the target was in one ear and the interferer was presented diotically, and there was no change for the group on average. This is unlike the 12-dB benefit observed in the NH group when presented with the CI simulation. The results suggest that CI users can direct attention to a target talker if the target and interferer are presented to opposite ears; however, larger binaural benefits are limited for more realistic listening configurations, likely due to the imprecise peripheral encoding of the two sounds.
I. INTRODUCTION
Many auditory environments contain multiple sound sources. While some sounds provide important information, other sounds are unimportant and listeners would benefit by ignoring them. The ability to selectively attend to a target sound source in a mixture of competing sources allows people to communicate in demanding listening environments, from noisy work situations to classrooms to social environments such as “cocktail parties” (Cherry, 1953). One way to improve speech understanding in the presence of interfering background sounds is by perceiving the sounds at different spatial locations, which is most effective when there are inputs to both ears. However, not all people have two equally effective inputs to their auditory system, which can lead to poor speech understanding in the presence of background sounds. Difficulty understanding speech in noisy environments is one of the most common complaints of hearing-impaired listeners (Kramer et al., 1998; Hallberg et al., 2008). The purpose of this study was to investigate the extent to which listeners who have severe hearing impairments and who are fitted with bilateral cochlear implants (CIs) can attend to a target talker in the presence of an interfering talker under conditions that allow us to begin to separate the contributions of peripheral deficits and central processes to the spatial unmasking of speech.
This study focused on bilateral CI users because they received two auditory prostheses in an attempt to improve spatial hearing abilities. Specifically, we were interested in whether spatial hearing could facilitate the processing of sounds in complex auditory scenes. The need to utilize spatial cues to organize an auditory scene may be particularly important for CI users because CI sound processing strategies do not effectively transmit pitch information that is normally conveyed through temporal fine structure (Churchill et al., 2014); pitch information typically provides an auditory cue that disambiguates between sound sources such as the voices of different-sex talkers (Darwin and Hukin, 2000; Bernstein et al., 2016). Bilateral cochlear implantation has become increasingly prevalent and research to date suggests that bilateral CI users show improved sound localization abilities compared to unilateral CI users (e.g., Kerber and Seeber, 2012; Litovsky et al., 2012). Bilateral CIs can provide small improvements in speech understanding in quiet compared to one CI; one reason is that stimuli are perceived as louder, an effect called binaural summation (e.g., Litovsky et al., 2006). In addition, bilateral CI users show improved speech understanding when target and interfering sources are spatially separated. Improvement is generally seen when one of the ears has a better target-to-masker ratio (TMR), a phenomenon called the better-ear effect, which is monaural in nature (e.g., Loizou et al., 2009). Having two ears should ideally allow listeners to utilize binaural processing such that sounds are perceived at different locations. Binaural unmasking of speech (or squelch) is thought to be derived from the exquisite binaural encoding of interaural time differences (ITDs), interaural level differences (ILDs), and interaural correlation (Bronkhorst and Plomp, 1988; Lavandier and Culling, 2010). However, the binaural unmasking that results from the perception of sounds at different locations seems largely inaccessible to CI listeners (Schleich et al., 2004; Buss et al., 2008; van Hoesel et al., 2008; Loizou et al., 2009). In contrast, normal-hearing (NH) listeners demonstrate improvements of about 5–15 dB from binaural unmasking depending on the specific configuration and type of interfering sound (Hawley et al., 2004; Best et al., 2013).
The lack of binaural unmasking demonstrated by bilateral CI listeners may be related to a number of biological, surgical, hardware, and software limitations (Litovsky et al., 2012; Kan and Litovsky, 2015). First, the microphones and sound processors are not coordinated to synchronize automatic gain controls (van Hoesel, 2012). Second, there is poor access to ITDs because bilateral sound processors do not encode ITD information in the pulse timing or fine structure of the signal and as a result, most current CI sound processing strategies only encode envelope information from the original incoming acoustic signal. It is possible that a large portion of the binaural unmasking benefit in NH listeners is derived from potent low-frequency ITDs in the temporal fine structure (Bronkhorst and Plomp, 1988; Macpherson and Middlebrooks, 2002). Third, because binaural cues are assumed to necessitate a frequency-matched comparison across the ears, interaural place-of-stimulation mismatch introduced by having arrays with different insertion depths might reduce sensitivity to ITDs, ILDs, and changes in interaural correlation (Goupell, 2015; Kan et al., 2015b). Fourth, stimuli with temporal modulations, such as speech, might produce unintended interaural decorrelation that could limit one's ability to process binaural cues (e.g., Rakerd and Hartmann, 2010) because loudness growth curves at single electrodes are not necessarily the same across the ears (Goupell et al., 2013; Goupell, 2015; Goupell and Litovsky, 2015). Fifth, sound processing strategies activate multiple electrodes and current spread from monopolar stimulation could cause channel interactions that negatively affect binaural processing (Kan et al., 2015a; Egger et al., 2016). Sixth, bilateral CI users can have asymmetrical speech understanding performance (Mosnier et al., 2009), which may undermine binaural unmasking because the target talker signals in both ears may not be combined coherently. Any one, or combination, of these factors could limit binaural unmasking in CI listeners.
In this study, we aimed to further understand the impact of the above-mentioned factors on binaural unmasking in bilateral CI listeners. The paradigm used has the potential to allow us to tease apart peripheral vs central contributions to binaural unmasking of speech. We tested tightly controlled stimulus conditions in which a target talker was presented to one ear alone, and in which an interfering talker was additionally presented to the same ear, the opposite ear, or both ears. These conditions allowed us to measure the change in performance when attending to a target talker with the interfering talker either present or absent, and in the same or opposite ear as the target. In addition, a diotic condition, in which target and interferer were presented together to both ears, was tested so that we could determine whether there was a benefit of spatial separation with this controlled setup.
It should be noted that one of the novel conditions tested in this study was a theoretical situation in which two different sources are presented to separate ears alone. We hypothesized that if contralateral separation can produce unmasking of speech in bilateral CI users, it would suggest that improper peripheral encoding of binaural cues is a major limitation in achieving binaural unmasking of speech in CI listeners. Conversely, if no unmasking of speech is demonstrated with contralateral separation, it would suggest a more serious central problem of limited ability to attend to different sources. Numerous studies in NH listeners using contralateral separation have shown that there is minimal interference from noise or speech maskers in the unattended ear (e.g., Brungart and Simpson, 2002). For CI listeners, none of the above factors that might hamper binaural processing and the perception of different talker locations should affect the contralateral separation condition. Furthermore, we performed the same tests with NH listeners who were presented with either unprocessed or vocoded speech. For the vocoded speech, the original temporal fine structure was replaced with noise to simulate the signals that are provided by CI sound processors, which primarily convey the speech envelope information. The other previously mentioned factors affecting binaural processing in CI listeners (interaural mismatch, loudness growth, and current spread) were not simulated. The comparison between the CI and NH listeners tested with simulated CI processing is helpful in elucidating mechanisms involved in binaural unmasking in CI listeners.
II. EXPERIMENT
A. Listeners and equipment
CI listeners (N = 11; 47–71 yr) and NH listeners (N = 29; 18–32 yr) participated in this experiment. The CI listeners had at least two years of experience with their CIs and used 24-electrode Nucleus CIs (N24, Freedom, or CI512). CI listener demographic information is shown in Table I. NH listeners had hearing thresholds that were ≤20 dB hearing level measured at octave frequencies between 250 and 8000 Hz, and ≤10 dB of asymmetry in their thresholds at any frequency. Nineteen NH listeners were tested with unprocessed acoustic speech and ten NH listeners were tested with vocoded acoustic speech (called the VOC group).1 All listeners were paid a stipend for their participation.
TABLE I. CI listener demographic information.

| Listener | Gender | Age (yr) | Experience (yr), left/right | External device, left/right | Etiology |
| --- | --- | --- | --- | --- | --- |
| IAJ | F | 66 | 15/8 | Freedom/Freedom | Childhood onset, unknown |
| IBD | F | 61 | 4/7 | Freedom/Freedom | Adult onset, hereditary |
| IBK | M | 71 | 8/2 | Freedom/N5 | Adult onset, noise-induced, possibly hereditary |
| IBL | F | 66 | 12/7 | Freedom/N5 | Childhood onset, unknown |
| IBM | F | 57 | 1/5 | N5/N5 | Adult onset, unknown |
| IBN | M | 65 | 2/11 | Freedom/Freedom | Born deaf, unknown |
| IBO | F | 47 | 1/4 | N5/Freedom | Adult onset, otosclerosis |
| IBR | F | 57 | 4/7 | N5/Freedom | Adult onset, progressive |
| IBY | F | 48 | 4/0.5 | N5/N5 | Adult onset, unknown |
| ICA | F | 52 | 2/9 | N5/N5 | Childhood onset, unknown |
| ICB | F | 63 | 7/10 | Freedom/N5 | Childhood onset, hereditary |
CI listeners were presented with speech signals through Freedom TV/Hifi cables (Cochlear Ltd., Sydney, Australia) into the direct audio input of their sound processors. They used their clinical sound processors set to their clinically fit everyday program with automatic gain control activated. If a listener had an N5 sound processor, an adapter was used so that the Freedom TV/Hifi cable could still be used.2 Note that the input dynamic range of the CI processor, which was 40 dB [25–65 dB sound pressure level (SPL)], was reduced to 30 dB when using the Freedom TV/Hifi cables. NH listeners were presented with speech signals over open-back circumaural headphones (Sennheiser HD650, Hanover, Germany). All listeners were tested in a standard double-walled, sound-attenuating booth with the same equipment setup at the University of Maryland–College Park and the University of Wisconsin–Madison. Experiments were controlled by a personal computer running MATLAB (The Mathworks, Natick, MA) and stimuli were presented through Tucker-Davis Technologies System 3 hardware (RP2.1, HB7, PA5; Alachua, FL).
B. Stimuli
The stimuli consisted of five-word sentences that were composed of combinations of a name, verb, number, adjective, and object (Kidd et al., 2008). Each of the five word categories had eight possible choices, randomly selected on each trial. In each condition there were one or two sentences spoken simultaneously. The target sentence was spoken by a female; when a second (interfering) sentence was presented, it was spoken by a male. For the NH listeners, target words were presented at an A-weighted sound pressure level of 70 dB, unless the TMR was negative, in which case the target level was reduced from 70 dB-A. For positive TMRs, the interferer level was reduced from 70 dB-A. The stimulus level was limited to 70 dB-A for all TMRs in order to minimize signal distortions introduced by the automatic gain control in CI processors above this level. For the CI listeners, the words were presented nominally at 70 dB-A. Before testing, the CI listeners adjusted the signal level in the left ear, and then in the right ear, to a comfortable loudness; finally, they balanced the two monaurally comfortable levels so that diotic presentation was equally loud across the ears. These levels were not changed over the duration of testing. Total testing time was about eight hours, distributed over two or three days at 2–3 h per day.
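As an illustration, the level rule described above can be summarized in a short sketch (Python here; the function name and structure are ours, not part of the study's software):

```python
def levels_for_tmr(tmr_db, ceiling_dba=70.0):
    """Return (target, interferer) presentation levels in dB-A for a given
    target-to-masker ratio, per the rule above: the louder signal sits at
    70 dB-A (to avoid engaging the CI automatic gain control) and the other
    signal is attenuated by |TMR|."""
    if tmr_db >= 0.0:
        # Positive TMR: target at the ceiling, interferer reduced.
        return ceiling_dba, ceiling_dba - tmr_db
    # Negative TMR: interferer at the ceiling, target reduced.
    return ceiling_dba + tmr_db, ceiling_dba

# Example: at -10 dB TMR the target is played at 60 dB-A, interferer at 70 dB-A.
print(levels_for_tmr(-10.0))  # -> (60.0, 70.0)
```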
The NH and CI groups were tested using unprocessed speech; the VOC group consisted of NH listeners tested using vocoded speech. All vocoded stimuli were generated offline. Stimuli were bandpass filtered into eight channels using fourth-order Butterworth filters. Eight channels were used because this is comparable to the effective number of channels in CI listeners (Friesen et al., 2001). The channel corner frequencies were contiguous and logarithmically spaced between 300 and 8500 Hz. The envelope of each channel was extracted via half-wave rectification and low-pass filtering using a second-order Butterworth filter with a 400-Hz cutoff frequency. The envelope of each channel was then used to modulate a narrowband noise carrier with a bandwidth that corresponded to the bandwidth of the filtered channel. The narrowband noise carriers were coherent across the ears before modulation by the envelopes, except for a subset of conditions tested with interaurally uncorrelated carriers (Sec. II D 1). Finally, the channels were summed into an acoustic signal.
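To make this processing chain concrete, a minimal Python/scipy sketch of such a noise vocoder is given below. It follows the parameters stated above (eight channels, fourth-order Butterworth analysis filters, log-spaced corners from 300 to 8500 Hz, half-wave rectification plus a second-order 400-Hz low-pass for envelope extraction, channel-matched noise carriers); the per-channel RMS normalization and the seeded random generator are our assumptions, as the text does not specify them.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def noise_vocode(x, fs, n_channels=8, f_lo=300.0, f_hi=8500.0, env_fc=400.0,
                 carrier_seed=0):
    """Minimal noise vocoder sketch following Sec. II B."""
    rng = np.random.default_rng(carrier_seed)
    # Contiguous, logarithmically spaced channel corner frequencies.
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_channels + 1)
    sos_env = butter(2, env_fc, btype='low', fs=fs, output='sos')
    out = np.zeros_like(x, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos_bp = butter(4, [lo, hi], btype='bandpass', fs=fs, output='sos')
        band = sosfilt(sos_bp, x)                      # analysis band
        env = sosfilt(sos_env, np.maximum(band, 0.0))  # half-wave rect. + LP
        env = np.maximum(env, 0.0)                     # clip filter undershoot
        # Narrowband noise carrier restricted to the channel bandwidth.
        # Using the same carrier_seed for both ears yields interaurally
        # coherent carriers; different seeds yield the uncorrelated-carrier
        # control condition described in Sec. II D 1.
        carrier = sosfilt(sos_bp, rng.standard_normal(len(x)))
        chan = env * carrier
        # Assumed normalization: match channel RMS to the analysis band.
        if np.any(chan):
            chan *= np.sqrt(np.mean(band**2) / np.mean(chan**2))
        out += chan
    return out
```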
For the quiet conditions and those with the target and interferer in the different ears, it was sufficient to produce one vocoded token of each word. They could then be randomly assembled into sentences. For the conditions with the target and interferer in the same ear, target and interferer were summed before vocoding, similar to what would occur in a sound processor. Therefore, it was necessary to assemble the words into sentences and mix them before vocoding. Ten sentence combinations of target-and-interferer were generated for each TMR and the sentence combination was randomly selected for each trial.
C. Procedure
Listeners were tested with speech at various levels in quiet and at various TMRs when there was an interferer. The levels tested varied across listeners depending on their performance. In general, levels were chosen to ensure a 5-dB resolution of measurements around 50% correct, and at least four levels were measured for each psychometric function. Psychometric functions were fit to the percent-correct data (Wichmann and Hill, 2001) and speech reception thresholds (SRTs) were calculated at 50% correct. The SRTs are reported re: 70 dB-A for the NH listeners and re: most comfortable level for the CI listeners.3
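For illustration, a simplified version of this fitting step might look as follows. This is a least-squares sketch, not the maximum-likelihood machinery of Wichmann and Hill (2001) used in the study; the guess rate of 1/8 reflects the eight-alternative word sets described in Sec. II B, and the small lapse rate is an assumption.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

GUESS, LAPSE = 1 / 8, 0.01  # chance for 8-alternative words; assumed lapse rate

def psychometric(level_db, mu, sigma):
    """Cumulative-Gaussian psychometric function with fixed guess/lapse rates."""
    return GUESS + (1 - GUESS - LAPSE) * norm.cdf(level_db, mu, sigma)

def fit_srt(levels_db, prop_correct, target=0.5):
    """Fit the function to (level, proportion-correct) points and return the
    level at `target` proportion correct, i.e., the SRT."""
    (mu, sigma), _ = curve_fit(psychometric, levels_db, prop_correct,
                               p0=[np.median(levels_db), 5.0])
    # Invert the fitted function at the target proportion correct.
    z = norm.ppf((target - GUESS) / (1 - GUESS - LAPSE))
    return mu + sigma * z
```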
Ten conditions were tested, half of which are pictorially represented in Fig. 1. In all conditions, attention was explicitly directed to the left ear (odd-numbered conditions) or the right ear (even-numbered conditions). Both ears were tested because CI listeners may have a different overall performance in each ear (e.g., Mosnier et al., 2009). In conditions 1 and 2, SRTs for the female target were measured in quiet. In conditions 3 and 4, the female target was presented to one ear and the male interferer to the other (i.e., contralateral separation). In these conditions, the talkers should be perceived at opposite ears. In conditions 5 and 6, the target and interferer were presented diotically. In these conditions, the talkers should be perceived co-located near the center of the head. In conditions 7 and 8, both the target and interferer were presented to the left or right ear only. In these conditions, both talkers should be perceived co-located at one side of the head. In conditions 9 and 10, the target was presented to the left or right ear and the interferer was presented diotically to both ears. In these conditions, the target should be perceived at the left or right side of the head and the interferer should be perceived near the center of the head. Note that adding the interferer to the other ear (conditions 9 and 10 vs 7 and 8, respectively) affords no acoustic better-ear advantage but enables the possibility for binaural unmasking. Hence, the differences between conditions 9 and 7, and between conditions 10 and 8, were used to evaluate the magnitude of binaural unmasking.
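The layout of the ten conditions, and the unmasking measure just defined, can be encoded compactly as follows (labels and data structures are ours, for illustration only):

```python
# (target ears, interferer ears); None = no interferer.
CONDITIONS = {
    1: ('L', None),   2: ('R', None),    # quiet
    3: ('L', 'R'),    4: ('R', 'L'),     # contralateral separation
    5: ('LR', 'LR'),  6: ('LR', 'LR'),   # diotic (attend left / attend right)
    7: ('L', 'L'),    8: ('R', 'R'),     # monaural, co-located
    9: ('L', 'LR'),  10: ('R', 'LR'),    # dichotic separation
}

def binaural_unmasking(srt_db):
    """Binaural unmasking per ear: the SRT improvement when the interferer is
    added to the non-target ear (condition 7 vs 9 for the left ear, 8 vs 10
    for the right). Positive values indicate unmasking; negative values
    indicate interference."""
    return {'left': srt_db[7] - srt_db[9], 'right': srt_db[8] - srt_db[10]}
```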
Listeners performed blocks of trials in which condition and TMR were fixed, and the order of presentation of blocks was randomized. Each block for a fixed condition and TMR consisted of ten sentences. After every condition had been tested, two further passes were completed, for a total of three blocks per condition. Therefore, each point on a psychometric function was based on 150 key words (5 words/sentence × 10 sentences × 3 blocks).4 Listeners initiated each trial via button press. They responded by choosing words from a grid on a computer user interface. No feedback was provided.
D. Results
1. Comparison of SRTs
The data for the individual CI listeners are shown in Fig. 2. For all individual CI listeners, SRTs were best in quiet (conditions 1 and 2) and most often worst when the target and interferer were played to one or both ears, which is when the target and interferer should have been perceived as co-located (conditions 5–8). SRTs were generally better when listeners should have perceived the target and interferer as spatially separated. However, there was individual variability in this group. Listeners IBD, IBK, and IBR showed lower SRTs in several of the right-ear target conditions; listener IBO showed lower SRTs in some of the left-ear target conditions. Listeners IBL and IBR had notably large SRTs for condition 3. Many listeners had better SRTs for conditions 5/6 (diotic) compared to 7/8 (monaural). For some listeners, this difference was notably large. For example, listener IBR had a 9.5-dB increase in SRT from condition 5 to 7 and listener IAJ had a 6.1-dB increase from condition 6 to 8. Listener IBD had better SRTs for conditions 9/10 (dichotic separation) compared to 7/8 (monaural), which is our measure of binaural unmasking. Listener ICA had >4 dB of binaural unmasking for both left and right ears, and the largest amount of binaural unmasking was 5.7 dB in the right ear. Of the 22 measurements of binaural unmasking, 6 were negative (i.e., they showed interference, not unmasking).
The average data for all three listener groups are shown in Fig. 3. A three-way analysis of variance (ANOVA) was performed with factors listener group (NH, VOC, or CI), ear (left or right), and condition; Tukey's honestly significant difference post hoc tests were subsequently performed. There was a significant effect of condition [F(4,269) = 289, p < 0.0001], ear [F(1,270) = 4.57, p = 0.034], and group [F(2,269) = 531, p < 0.0001]. For the factor condition, post hoc tests showed that almost all the conditions were significantly different from each other. The exceptions were that conditions 5/6 were not different from conditions 7/8, and conditions 5/6 were not different from conditions 9/10 (p > 0.05 for both). There was a significant interaction of listener group × condition [F(8,269) = 14.4, p < 0.0001]. Therefore, each condition will be discussed separately below. None of the other interactions was significant (p > 0.05 for all).
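For readers wishing to reproduce this style of analysis, a sketch using Python and statsmodels is shown below. The data-frame columns and file name are illustrative only, and the original analysis may differ in details such as sum-of-squares type and factor coding.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical data file: one row per SRT measurement, with columns
# 'srt' (dB), 'group' (NH/VOC/CI), 'ear' (left/right), 'cond' (condition pair).
df = pd.read_csv('srt_data.csv')

# Three-way ANOVA with all interactions.
model = ols('srt ~ C(group) * C(ear) * C(cond)', data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey honestly-significant-difference post hoc test on condition.
print(pairwise_tukeyhsd(df['srt'], df['cond']))
```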
Figure 3 shows that in quiet the SRTs were lowest for the NH listeners, increased for the VOC listeners, and increased again for the CI listeners. Averaging conditions 1 and 2, the SRTs for the VOC listeners were 10.8 dB higher compared to the NH listeners, the SRTs for the CI listeners were 31.4 dB higher compared to the NH listeners, and the SRTs for the CI listeners were 20.6 dB higher compared to the VOC listeners. There was no significant left-right asymmetry in SRTs comparing conditions 1 and 2. Average SRTs in quiet were 0.1 (standard deviation = 1.5), 0.3 (2.3), 0.7 (5.2) dB lower for the right ear compared to the left ear for the NH, VOC, and CI listeners, respectively. Using an unpaired two-sample t-test that assumed equal variances to compare SRTs between the ears, there were no significant differences of ear for any of the three groups (p > 0.05).
Figure 3 shows that SRTs increased from conditions 1/2 (quiet) to conditions 3/4 (contralateral separation). The increase in SRTs was 8.0 (2.6), 11.6 (3.4), and 8.0 (9.6) dB for the NH, VOC, and CI listeners, respectively. There was no significant left-right asymmetry in the SRTs for conditions 3 and 4 (paired two-sample t-test, p > 0.05 for all three comparisons). SRTs were 1.1 (2.9), 2.0 (2.3), 6.2 (12.6) dB lower for the right ear (condition 4) compared to the left ear (condition 3) for the NH, VOC, and CI listeners, respectively. Note that the CI listeners were a relatively more heterogeneous group than NH listeners when comparing SRTs for conditions 3 and 4 (see Fig. 2 and the error bars in Fig. 3); this heterogeneity is further explored in Sec. II D 2.
As mentioned before, SRTs did not significantly change for conditions 5/6 compared to conditions 7/8; the increase in SRTs from conditions 5/6 to conditions 7/8 was only 1.1 (2.9), −0.1 (1.8), and 2.9 (2.7) dB for the NH, VOC, and CI listeners, respectively. However, the SRTs significantly increased from conditions 1/2 to the average of conditions 5/6 and 7/8 by 24.6 (3.8), 45.4 (2.6), and 23.0 (8.5) dB for the NH, VOC, and CI listeners, respectively. There was no significant left-right asymmetry in the SRTs (paired two-sample t-test; p > 0.05 for all three comparisons).
Figure 3 shows that SRTs mostly decreased from conditions 5/6 to conditions 9/10 by 1.9 (2.9), 12.2 (4.8), and −1.1 (3.6) dB for the NH, VOC, and CI listeners, respectively. There was no significant left-right asymmetry in the SRTs (paired two-sample t-test; p > 0.05 for all three comparisons).
To determine whether interaural correlation of the noise carriers would affect the amount of binaural unmasking for the listeners in the VOC group, we changed the noise carriers from interaurally correlated to interaurally uncorrelated. We could test only eight listeners from the VOC group on this condition. We found average SRTs were 0.6 dB higher for the uncorrelated noise carrier for conditions 9/10, which was not a significant difference (paired two-sample t-test: p = 0.52).
2. Asymmetry in the contralateral separation condition
The individual t-tests showed no significant left-right asymmetry in SRTs for any individual listener group (NH, VOC, and CI) and condition, but there was a significant left-right asymmetry in the three-way ANOVA with factors group, condition, and ear. Specifically, right ear (even-numbered conditions) SRTs were lower than the left (odd-numbered conditions) SRTs [factor ear; F(1,270) = 4.57, p = 0.034]. Further investigation of the effect of ear revealed that two CI listeners (IBL and IBR) showed large asymmetries in SRTs, particularly for conditions 3 and 4 (contralateral separation; see Fig. 2). If CI listeners IBL and IBR were removed from the ANOVA, the effect of ear was not significant [F(1,250) = 3.64, p = 0.058]. All other interactions containing the factor ear were not significant (p > 0.05 for all).
To further explore this result, in CI listeners only, we first analyzed the differences in the SRTs in the quiet conditions (1/2), which can be compared visually in Fig. 2. Three CI listeners had a difference between conditions 1/2 of <1 dB (IBY, IBM, ICB). The largest difference was 11 dB (IBK). The average magnitude of the difference between conditions 1/2 was 3.7 dB with a standard deviation of 3.5 dB. The two CI listeners of interest, IBL and IBR, had differences in their thresholds in quiet of 4.5 and −3.1 dB, respectively.
Next, Fig. 4 shows example psychometric functions for conditions 1/2 (quiet) compared to 3/4 (contralateral separation). In the left-most column, a typical CI listener with a relatively small asymmetry, IBD, is shown. IBD's SRT (re: most comfortable level) in quiet is −34 dB for the left ear (condition 1) and −39 dB for the right (condition 2). Listener IBD's SRT with a contralateral interferer was −29 dB when attending to the left ear and −36 dB when attending to the right ear. The difference in SRTs between contralateral interferer and in quiet is 3 dB for the right ear and 5 dB for the left. We defined the dichotic listening or selective attention asymmetry as
$$\mathrm{asymmetry} = (\mathrm{SRT}_{3} - \mathrm{SRT}_{1}) - (\mathrm{SRT}_{4} - \mathrm{SRT}_{2}), \tag{1}$$

where $\mathrm{SRT}_{n}$ denotes the SRT measured in condition $n$.
Therefore, listener IBD's 2-dB dichotic listening asymmetry is small given the 5-dB resolution of measurements in the experiment. The psychometric functions are also relatively parallel, demonstrating that the internal noise is approximately the same between conditions according to signal detection theory.
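As a quick check of Eq. (1) against the values above, listener IBD's asymmetry works out to 2 dB:

```python
def dichotic_asymmetry(srt_db):
    """Eq. (1): (SRT3 - SRT1) - (SRT4 - SRT2), in dB."""
    return (srt_db[3] - srt_db[1]) - (srt_db[4] - srt_db[2])

# Listener IBD's SRTs (re: most comfortable level, from Fig. 4):
print(dichotic_asymmetry({1: -34, 2: -39, 3: -29, 4: -36}))  # -> 2
```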
In the center and rightmost columns of Fig. 4, listeners IBL and IBR show a notably different pattern of psychometric functions compared to listener IBD. Like listener IBD, listeners IBL and IBR showed small increases in the right-ear SRT (condition 4 compared to condition 2). Unlike listener IBD, listeners IBL and IBR showed very large increases in the SRT in the left ear (condition 3 compared to condition 1). Listeners IBL and IBR had dichotic listening asymmetries of 36 and 18 dB, respectively. These listeners' psychometric functions when attending to the left ear (condition 3) also show different slopes compared to condition 1, demonstrating more internal noise when there was a contralateral interferer.
To determine if these large dichotic listening asymmetries were atypical compared to NH listeners, the dichotic listening asymmetries of all the listeners are plotted in Fig. 5. For the NH and VOC listeners, the magnitude of the dichotic listening asymmetries was 10 dB or less for all listeners, with only three listeners greater than 5 dB. For the CI listeners, all but two had dichotic listening asymmetries of 5 dB or less. Thus, the asymmetries of IBL and IBR (36 and 18 dB, respectively) were clear outliers across all groups.
III. GENERAL DISCUSSION
A. Binaural unmasking in CI users
Studies have shown that bilateral CIs presently do not provide the same magnitude of benefit from access to sound in both ears as is observed in NH listeners; the benefits seem mainly limited to binaural summation and the monaural better-ear advantage (e.g., Litovsky et al., 2006). Unlike NH listeners (e.g., Best et al., 2013), the ability to utilize binaural processing and achieve binaural unmasking for spatially separated compared to co-located speech seems to be weak or absent in most bilateral CI users, including adults (Schleich et al., 2004; Buss et al., 2008; van Hoesel et al., 2008; Loizou et al., 2009) and children (Misurelli and Litovsky, 2012, 2015). The purpose of this experiment was to begin to untangle the many possible factors that contribute to the limitations of binaural unmasking in CI listeners. We measured SRTs for a female target talker in the presence of a male interfering talker in NH and CI listeners. The stimulus configurations were selected to shed light on whether CI listeners can benefit from contralateral separation (i.e., talkers presented to separate ears, which targeted central processing mechanisms) and/or location-based separation (i.e., the target talker presented to and perceived at one ear, and the interfering talker presented to both ears and perceived near the center of the head, which targeted peripheral processing mechanisms).
The results show that NH listeners had the lowest SRTs across all conditions, presumably because they had access to an unprocessed speech signal (i.e., no spectral or temporal degradation), including voice pitch and spatial cues (Fig. 3). SRTs increased when the interferer was presented compared to the conditions in quiet. There was only an 8.0-dB increase in SRTs from quiet (conditions 1/2) to the contralateral separation conditions (3/4), and listeners were operating at approximately −51.9-dB SRT (re: 70 dB-A). NH listeners can thus selectively attend to one ear and almost completely ignore information in the other ear at near-perfect performance (Cherry, 1953; Brungart and Simpson, 2002), thereby demonstrating minimal amounts of masking across the ears, even at low target levels. NH listeners also had SRTs of approximately −34.4 dB when the target and interferer were co-located (conditions 5–8). The SRTs for these conditions were much lower than those reported for speech-on-speech masking in other studies (e.g., Festen and Plomp, 1990). The discrepancy is likely the result of three factors: (1) the different genders of the talkers (Darwin and Hukin, 2000), (2) the use of a speech corpus with a small, closed set of word choices as compared to an open set of sentences, and (3) the level cue and the blocked testing design (i.e., listeners could simply listen for the quieter talker; Brungart et al., 2001). If target and interferer were already separated into different objects by voice pitch and level, it would not be surprising to see only a small decrease in SRT, to −36.9 dB, for conditions 9/10, where the target should be perceived at one ear and the interferer in the center of the head. In other words, only 1.8 dB of binaural unmasking was observed because the sources were already easily separable.
By removing most of the pitch information through an eight-channel noise vocoder, two outcomes were observed. First, SRTs systematically increased for the listeners in the VOC group compared to those in the NH group (Fig. 3). The largest increases were seen in the conditions with interferers and with no spatial cues (diotic: 5/6, monaural: 7/8), consistent with other reports on how masking is particularly effective for vocoded speech (Qin and Oxenham, 2003). Second, listeners in the VOC group, unlike the NH listeners, achieved 12.2 dB of binaural unmasking. Without access to pitch differences, listeners appear to rely on spatial cues to segregate the target and interferer. Such a result, where target and interferer are likely more confusable without voice pitch differences, is in line with the amount of binaural unmasking observed in paradigms that involve large amounts of confusability between target and interferers of the same gender (i.e., informational masking) (Hawley et al., 2004; Best et al., 2013; Bernstein et al., 2016). Recently, Bernstein et al. (2016) showed the importance of voice pitch for understanding speech in the presence of background talkers in bilateral CI users, CI users with a NH ear (i.e., single-sided deafness), and NH listeners presented vocoded stimuli to simulate these types of listeners. They systematically varied the number and gender of the talkers. All of the groups showed binaural unmasking. For the bilateral CI listeners and the corresponding vocoder group, there was no effect of gender; for the single-sided deafness CI listeners and corresponding vocoder group, binaural unmasking was mostly seen for the same-gender interferer conditions. Therefore, the results of this study, which used the same spatial configurations to measure binaural unmasking but a different word corpus, are in line with the results of Bernstein et al. (2016).
The SRTs for the CI listeners showed an overall increase compared to the listeners in the VOC group, which was mostly reflected in the quiet conditions (1/2), contralateral separation conditions (3/4), and dichotic separation conditions (9/10) (Fig. 3). We attribute the higher SRTs for the CI listeners in quiet (conditions 1/2) to the limited input dynamic range of the CI processor, which was 30 dB when using the Freedom TV/Hifi cables. SRTs for conditions without interaural differences (diotic: 5/6, monaural: 7/8) were similar between the CI and VOC groups, which was expected because vocoded speech simulates aspects of CI sound processing and understanding of vocoded speech is considered the upper bound of understanding speech with a CI (Friesen et al., 2001). On average, the CI listeners showed 8 dB of contralateral masking of the target when comparing the contralateral separation conditions (3/4) to the quiet conditions (1/2), which was similar to the listeners in the NH and VOC groups. Therefore, CI listeners are clearly able to segregate sources across ears, but at a higher SRT than VOC or NH listeners. Note that 8 dB is a relatively large percentage of the dynamic range for the CI listeners compared to the NH listeners. When considering the dynamic range across groups, the CI listeners could have a larger amount of contralateral masking, although exact values are difficult to calculate and interpret. CI listeners demonstrated no binaural unmasking when comparing SRTs for the dichotic separation conditions (9/10) and the monaural conditions (7/8). This was unlike the NH listeners in the VOC group, who demonstrated 12.2 dB of binaural unmasking. The only difference between the contralateral separation conditions (3/4) and the dichotic separation conditions (9/10) is the presence of the interferer in the ear with the target for the latter conditions. Therefore, there was no binaural unmasking in the bilateral CI listeners despite their good ability to segregate sources presented to opposite ears.
For this study, how might we understand this fundamental difference between the NH listeners presented vocoded stimuli and CI listeners for the dichotic separation conditions (9/10), despite their similarity for the diotic (5/6) and monaural (7/8) conditions? It is well known that sound localization abilities are poorer in bilateral CI listeners compared to NH listeners (Jones et al., 2014). Studies also show that some bilateral CI users' free field localization response patterns are categorical (e.g., left vs right, or left vs center vs right) (Dorman et al., 2014; Jones et al., 2014). Therefore, it may be that bilateral CI listeners do not perceive spatially punctate auditory objects, which would limit binaural unmasking of speech. This would be similar to adding interaural decorrelation to the signals. Decorrelation can be caused by many sources including lack of synchronization between the two processors, interaural mismatch in place-of-stimulation (Goupell, 2015), modulation encoding (Goupell et al., 2013), and multi-electrode stimulation (for a review, see Kan and Litovsky, 2015). The NH listeners presented with the vocoded stimuli, with their assumed normal peripheral encoding of the signals, would not experience decorrelation through any of these mechanisms, which is reflected in the data by the 12.2 dB of binaural unmasking. As a small control experiment, the interaural correlation of the noise carriers was changed from correlated to uncorrelated to determine if the amount of binaural unmasking for the listeners in the VOC group would decrease, similar to what was observed for the CI listeners. We found no change in average SRT for conditions 9/10. It may be necessary to add more severe decorrelation to the envelopes or replicate other aspects of the CI processing such as channel interactions to bring the binaural unmasking results between the VOC and CI groups in line.
Alternatively, Best et al. (2013) showed that energetic masking could be the primary factor that limits spatial release from masking. Specifically, they showed in a group of hearing-impaired listeners that monaural factors alone explained the amount of unmasking that occurred in their experiment. If this is true, then CI listeners, who have even higher SRTs than hearing-impaired listeners when there are interfering sounds, should show little improvement from spatial separation because the energetic masking leaves no “head room” in which to show improvement through binaural mechanisms.
The lack of binaural unmasking observed in the CI listeners in this study is consistent with many previous reports, including two that are particularly relevant. First, van Hoesel et al. (2008) measured speech understanding and binaural unmasking using broadband noise interferers in four bilateral CI listeners. The stimuli were presented via a research sound processor, and the spatial configurations tested were similar to the ones used here (conditions 9/10). The CI listeners did not demonstrate binaural unmasking. Second, Loizou et al. (2009) measured speech understanding and binaural unmasking using modulated noise or speech interferers in eight bilateral CI listeners. The stimuli were also presented via a research sound processor. Spatial differences between target and interferers were created by convolving stimuli with generic head-related transfer functions, which contained the ITDs and ILDs from the recordings; this replicated the exact conditions from Hawley et al. (2004), in which the amount of binaural unmasking was large in NH listeners. Loizou et al. (2009) also found no binaural unmasking for CI listeners. Therefore, these two studies and the present study are consistent in that significant binaural unmasking is difficult to achieve in bilateral CI listeners, despite a variety of methodologies, including the spatial configurations and stimuli used. These results are in contrast to the 5 dB of binaural unmasking found by Bernstein et al. (2016) for nine bilateral CI listeners. The primary difference between the studies is that the procedure in the Bernstein et al. (2016) study optimized many of the factors that are thought to produce large amounts of unmasking, such as stimuli with large amounts of informational masking and particular spatial configurations. However, there remains a discrepancy in the amount of binaural unmasking between this study and Bernstein et al. (2016), where the same spatial configurations were tested. One explanation for this discrepancy might be the different speech corpora used. Another is that different CI listeners were tested; it may be that the listeners in that study were more sensitive to binaural cues and could achieve binaural unmasking of speech. A range in unmasking is clear from the data in Bernstein et al. (2016), where one bilateral CI listener demonstrated interference from the second ear.
An additional issue is the age confound between the NH and CI listeners. This has been a pervasive problem in the literature when comparing these groups (Goupell, 2015). Binaural sensitivity (e.g., ITD thresholds) decreases with age (Strouse et al., 1998). Results are mixed, however, on the issue of whether binaural unmasking of speech changes with age (Dubno et al., 2008; Glyde et al., 2013). It is possible that the differences in chronological age contributed to the differences in binaural unmasking across the NH and CI listener groups.
B. Asymmetries in dichotic listening
Approximately 40% of bilateral CI listeners demonstrate interaurally asymmetrical speech understanding in quiet or noise (Mosnier et al., 2009). The results of the current study show that asymmetrical speech understanding can also occur in a dichotic listening task when target and interferer are presented to opposite ears (contralateral separation, conditions 3/4). Specifically, 2 of the 11 bilateral CI listeners demonstrated a very weak ability to ignore an interfering talker in their right ear when attending to a target talker in their left ear (Figs. 4 and 5). We attempted to account for any systematic shift in hearing sensitivity in our listeners by comparing SRTs for the target in quiet (conditions 1/2) to those with a contralateral interferer [conditions 3/4; see Eq. (1)]. In the group of CI listeners, we measured mostly small differences between the right and left ears in quiet (average magnitude of difference 3.7 dB, standard deviation of 3.5 dB). Listeners IBL and IBR had thresholds in quiet (left minus right) of 4.5 and −3.1 dB, respectively. Therefore, it is not the case that simply having asymmetric thresholds in quiet produced remarkably large dichotic listening asymmetries.
One explanation for the two notable CI dichotic listening asymmetries could be that the right and left ears encode the signals in substantially different ways. That is, the previously outlined factors that could reduce binaural performance, such as channel interactions and/or insertion depth, could be related to this phenomenon. In other words, it could be that difficulty in one of the dichotic listening conditions arises from asymmetry in peripheral encoding. Indeed, this explanation seems congruent with the hearing history of listener IBL, who was born with a mild hearing loss and received a hearing aid at age 12 yr, but only in the right ear. IBL wore that single hearing aid for more than 40 yr, until she received her first CI in the left ear. IBL then used her CI in the left ear and a hearing aid in the right ear for another five years, at which point she received a second CI in the right ear. It has been shown in animal models that stimulation, acoustic or electric, may prevent atrophy of the auditory neural pathways (Leake et al., 2008). The lack of stimulation for the left ear for such a long duration might have caused the left ear to have fewer effective information channels than the right ear, and would explain some of the 36-dB dichotic listening asymmetry in IBL. On the other hand, listener IBR, who demonstrated a similarly large dichotic listening asymmetry (18 dB), had a fairly symmetric hearing loss, used hearing aids in both ears, and had only three years between implantations (Table I). Therefore, the explanation for the dichotic listening asymmetry cannot rely fully on hearing history.
The direction of the attention asymmetries in these two CI listeners favors better performance in the right ear. A “right-ear advantage” exists in dichotic listening tasks and is thought to be a result of left-hemisphere dominance in the temporal lobe for the processing of language (Strouse et al., 2000). Therefore, it is possible that these data are an example of a very large right-ear advantage that is emphasized for some CI users. Clearly, more research is needed to fully understand these individual data.
C. Summary
One of the reasons for bilateral implantation is to improve speech understanding in the presence of background noise. This study demonstrates that most bilateral CI listeners experience minimal masking when competing talkers are presented to opposite ears. However, when there is energy from the interferer in the target ear, most CI listeners demonstrate little to no advantage of having two ears (i.e., they do not show binaural unmasking). This is unlike NH listeners who do show binaural benefits in such situations, particularly when they do not have access to pitch cues from the temporal fine structure. One explanation for this result is that sound processing algorithms and peripheral encoding inadvertently reduce the interaural correlation of the signals across the ears, which may blur the location perception of physically spatially separated talkers to the point that there is no perceived difference in spatial location. Future work on altering CI sound processors and algorithms to produce more correlated inputs in bilateral CI users might help these listeners achieve generally better hearing in a noisy world.
ACKNOWLEDGMENTS
We would like to thank Cochlear Ltd., particularly Aaron Parkinson, Zach Smith, and Christopher Long, for providing the testing equipment and technical support. We would like to thank Garrison Draves, Kyle Easter, Tanvi Thakkar, Ashley Chwastyk, Laura Taliaferro, and Robert Ellis for help in data collection and preparation. This work was supported by National Institutes of Health (NIH)-National Institute on Deafness and Other Communication Disorders (NIDCD) Grant Nos. R01 DC014948 (Goupell), R01 DC003083 (Litovsky), P30 HD03352 (Waisman Center core grant), and P30 DC004664 (Center of Comparative Evolutionary Biology of Hearing core grant). The word corpus was funded by NIH-NIDCD Grant No. P30 DC04663 (Boston University Hearing Research Center core grant). Portions of this work were presented at the Association for Research in Otolaryngology 35th Midwinter Meeting, the 16th Conference on Implantable Auditory Prostheses, and the Association for Research in Otolaryngology 38th Midwinter Meeting.
Footnotes
Initially, NH listeners were only tested in one ear assuming that, by definition, someone with NH would have symmetric hearing. However, as testing progressed with the CI listeners, it became clear that the NH listeners needed to be tested in both ears. Hence, additional listeners were added to the study and were tested in both ears, and inferential statistics needed to be performed with an across-subjects design. Because of the homogeneity of the results across the NH listeners, we reported the pooled data.
The Freedom TV/Hifi cables are passive devices that can attenuate signals with a volume dial. By inserting them into the direct audio input of the sound processor, they alter the input dynamic range from 25–65 dB-A to 35–65 dB-A. This results in an increase in thresholds for the CI group as compared to speech presented in the free field. Note that the N5 TV/Hifi cables do not alter the dynamic range in the same way. Therefore, we did not use the N5 cables, so that both Freedom and N5 sound processor users would have similar inputs.
Note that because the interferer was reduced in level when testing at positive TMRs, all positive SRTs in such cases are technically relative to a larger reference level. Since this only occurred in conditions with interferers, it should not affect the interpretation of any of the data.
Because of limited testing time with some listeners, some conditions had as few as 50 key words. However, less than 10% of the total conditions had less than 150 key words, which affected only 12 of the 42 total listeners in the study.
References
1. Bernstein, J. G., Goupell, M. J., Iyer, N., Schuchman, G., Rivera, A., and Brungart, D. S. (2016). "Having two ears facilitates the perceptual separation of concurrent talkers for bilateral and single-sided deaf cochlear implantees," Ear Hear. 37, 282–288.
2. Best, V., Thompson, E. R., Mason, C. R., and Kidd, G., Jr. (2013). "An energetic limit on spatial release from masking," J. Assoc. Res. Otolaryngol. 14, 603–610. 10.1007/s10162-013-0392-1
3. Bronkhorst, A. W., and Plomp, R. (1988). "The effect of head-induced interaural time and level differences on speech intelligibility in noise," J. Acoust. Soc. Am. 83, 1508–1516. 10.1121/1.395906
4. Brungart, D. S., and Simpson, B. D. (2002). "Within-ear and across-ear interference in a cocktail-party listening task," J. Acoust. Soc. Am. 112, 2985–2995. 10.1121/1.1512703
5. Brungart, D. S., Simpson, B. D., Ericson, M. A., and Scott, K. R. (2001). "Informational and energetic masking effects in the perception of multiple simultaneous talkers," J. Acoust. Soc. Am. 110, 2527–2538. 10.1121/1.1408946
6. Buss, E., Pillsbury, H. C., Buchman, C. A., Pillsbury, C. H., Clark, M. S., Haynes, D. S., Labadie, R. F., Amberg, S., Roland, P. S., Kruger, P., Novak, M. A., Wirth, J. A., Black, J. M., Peters, R., Lake, J., Wackym, P. A., Firszt, J. B., Wilson, B. S., Lawson, D. T., Schatzer, R., D'Haese, P. S., and Barco, A. L. (2008). "Multicenter U.S. bilateral MED-EL cochlear implantation study: Speech perception over the first year of use," Ear Hear. 29, 20–32.
7. Cherry, E. C. (1953). "Some experiments on the recognition of speech, with one and with two ears," J. Acoust. Soc. Am. 25, 975–979. 10.1121/1.1907229
8. Churchill, T., Kan, A., Goupell, M. J., and Litovsky, R. Y. (2014). "Spatial hearing benefits demonstrated with presentation of acoustic temporal fine structure cues in bilateral cochlear implant listeners," J. Acoust. Soc. Am. 136, 1246–1256. 10.1121/1.4892764
9. Darwin, C. J., and Hukin, R. W. (2000). "Effectiveness of spatial cues, prosody, and talker characteristics in selective attention," J. Acoust. Soc. Am. 107, 970–977. 10.1121/1.428278
10. Dorman, M. F., Loiselle, L., Stohl, J., Yost, W. A., Spahr, A., Brown, C., and Cook, S. (2014). "Interaural level differences and sound source localization for bilateral cochlear implant patients," Ear Hear. 35, 633–640. 10.1097/AUD.0000000000000057
11. Dubno, J. R., Ahlstrom, J. B., and Horwitz, A. R. (2008). "Binaural advantage for younger and older adults with normal hearing," J. Speech Lang. Hear. Res. 51, 539–556. 10.1044/1092-4388(2008/039)
12. Egger, K., Majdak, P., and Laback, B. (2016). "Channel interaction and current level affect across-electrode integration of interaural time differences in bilateral cochlear-implant listeners," J. Assoc. Res. Otolaryngol. 17, 55–67. 10.1007/s10162-015-0542-8
13. Festen, J. M., and Plomp, R. (1990). "Effects of fluctuating noise and interfering speech on the speech-reception threshold for impaired and normal hearing," J. Acoust. Soc. Am. 88, 1725–1736. 10.1121/1.400247
14. Friesen, L. M., Shannon, R. V., Başkent, D., and Wang, X. (2001). "Speech recognition in noise as a function of the number of spectral channels: Comparison of acoustic hearing and cochlear implants," J. Acoust. Soc. Am. 110, 1150–1163. 10.1121/1.1381538
15. Glyde, H., Cameron, S., Dillon, H., Hickson, L., and Seeto, M. (2013). "The effects of hearing impairment and aging on spatial processing," Ear Hear. 34, 15–28. 10.1097/AUD.0b013e3182617f94
16. Goupell, M. J. (2015). "Interaural correlation-change discrimination in bilateral cochlear-implant users: Effects of interaural frequency mismatch, centering, and age of onset of deafness," J. Acoust. Soc. Am. 137, 1282–1297. 10.1121/1.4908221
17. Goupell, M. J., Kan, A., and Litovsky, R. Y. (2013). "Typical mapping procedures can produce non-centered auditory images in bilateral cochlear-implant users," J. Acoust. Soc. Am. 133, EL101–EL107. 10.1121/1.4776772
18. Goupell, M. J., and Litovsky, R. Y. (2015). "Detection of changes in envelope correlation in bilateral cochlear-implant users," J. Acoust. Soc. Am. 137, 335–349. 10.1121/1.4904491
19. Hallberg, L. R., Hallberg, U., and Kramer, S. E. (2008). "Self-reported hearing difficulties, communication strategies and psychological general well-being (quality of life) in patients with acquired hearing impairment," Disabil. Rehabil. 30, 203–212. 10.1080/09638280701228073
20. Hawley, M. L., Litovsky, R. Y., and Culling, J. F. (2004). "The benefit of binaural hearing in a cocktail party: Effect of location and type of interferer," J. Acoust. Soc. Am. 115, 833–843. 10.1121/1.1639908
21. Jones, H., Kan, A., and Litovsky, R. Y. (2014). "Comparing sound localization deficits in bilateral cochlear-implant users and vocoder simulations with normal-hearing listeners," Trends Hear. 18, 1–16.
22. Kan, A., Jones, H. G., and Litovsky, R. Y. (2015a). "Effect of multi-electrode configuration on sensitivity to interaural timing differences in bilateral cochlear-implant users," J. Acoust. Soc. Am. 138, 3826–3833. 10.1121/1.4937754
23. Kan, A., and Litovsky, R. Y. (2015). "Binaural hearing with electrical stimulation," Hear. Res. 322, 127–137. 10.1016/j.heares.2014.08.005
24. Kan, A., Litovsky, R. Y., and Goupell, M. J. (2015b). "Effects of interaural pitch-matching and auditory image centering on binaural sensitivity in cochlear-implant users," Ear Hear. 36, e62–e68. 10.1097/AUD.0000000000000135
25. Kerber, S., and Seeber, B. U. (2012). "Sound localization in noise by normal-hearing listeners and cochlear implant users," Ear Hear. 33, 445–457. 10.1097/AUD.0b013e318257607b
26. Kidd, G., Jr., Best, V., and Mason, C. R. (2008). "Listening to every other word: Examining the strength of linkage variables in forming streams of speech," J. Acoust. Soc. Am. 124, 3793–3802. 10.1121/1.2998980
27. Kramer, S. E., Kapteyn, T. S., and Festen, J. M. (1998). "The self-reported handicapping effect of hearing disabilities," Audiology 37, 302–312. 10.3109/00206099809072984
28. Lavandier, M., and Culling, J. F. (2010). "Prediction of binaural speech intelligibility against noise in rooms," J. Acoust. Soc. Am. 127, 387–399. 10.1121/1.3268612
29. Leake, P. A., Stakhovskaya, O., Hradek, G. T., and Hetherington, A. M. (2008). "Factors influencing neurotrophic effects of electrical stimulation in the deafened developing auditory system," Hear. Res. 242, 86–99. 10.1016/j.heares.2008.06.002
30. Litovsky, R. Y., Goupell, M. J., Godar, S., Grieco-Calub, T., Jones, G. L., Garadat, S. N., Agrawal, S., Kan, A., Todd, A., Hess, C., and Misurelli, S. (2012). "Studies on bilateral cochlear implants at the University of Wisconsin's Binaural Hearing and Speech Laboratory," J. Am. Acad. Audiol. 23, 476–494.
31. Litovsky, R. Y., Parkinson, A., Arcaroli, J., and Sammeth, C. (2006). "Simultaneous bilateral cochlear implantation in adults: A multicenter clinical study," Ear Hear. 27, 714–731. 10.1097/01.aud.0000246816.50820.42
32. Loizou, P. C., Hu, Y., Litovsky, R., Yu, G., Peters, R., Lake, J., and Roland, P. (2009). "Speech recognition by bilateral cochlear implant users in a cocktail-party setting," J. Acoust. Soc. Am. 125, 372–383. 10.1121/1.3036175
33. Macpherson, E. A., and Middlebrooks, J. C. (2002). "Listener weighting of cues for lateral angle: The duplex theory of sound localization revisited," J. Acoust. Soc. Am. 111, 2219–2236. 10.1121/1.1471898
34. Misurelli, S. M., and Litovsky, R. Y. (2012). "Spatial release from masking in children with normal hearing and with bilateral cochlear implants: Effect of interferer asymmetry," J. Acoust. Soc. Am. 132, 380–391. 10.1121/1.4725760
35. Misurelli, S. M., and Litovsky, R. Y. (2015). "Spatial release from masking in children with bilateral cochlear implants and with normal hearing: Effect of target-interferer similarity," J. Acoust. Soc. Am. 138, 319–331. 10.1121/1.4922777
36. Mosnier, I., Sterkers, O., Bebear, J. P., Godey, B., Robier, A., Deguine, O., Fraysse, B., Bordure, P., Mondain, M., Bouccara, D., Bozorg-Grayeli, A., Borel, S., Ambert-Dahan, E., and Ferrary, E. (2009). "Speech performance and sound localization in a complex noisy environment in bilaterally implanted adult patients," Audiol. Neurootol. 14, 106–114. 10.1159/000159121
37. Qin, M. K., and Oxenham, A. J. (2003). "Effects of simulated cochlear-implant processing on speech reception in fluctuating maskers," J. Acoust. Soc. Am. 114, 446–454. 10.1121/1.1579009
38. Rakerd, B., and Hartmann, W. M. (2010). "Localization of sound in rooms. V. Binaural coherence and human sensitivity to interaural time differences in noise," J. Acoust. Soc. Am. 128, 3052–3063. 10.1121/1.3493447
39. Schleich, P., Nopp, P., and D'Haese, P. (2004). "Head shadow, squelch, and summation effects in bilateral users of the MED-EL COMBI 40/40+ cochlear implant," Ear Hear. 25, 197–204. 10.1097/01.AUD.0000130792.43315.97
40. Strouse, A., Ashmead, D. H., Ohde, R. N., and Grantham, D. W. (1998). "Temporal processing in the aging auditory system," J. Acoust. Soc. Am. 104, 2385–2399. 10.1121/1.423748
41. Strouse, A., Wilson, R. H., and Brush, N. (2000). "Recognition of dichotic digits under pre-cued and post-cued response conditions in young and elderly listeners," Br. J. Audiol. 34, 141–151. 10.3109/03005364000000124
42. van Hoesel, R. J. M. (2012). "Contrasting benefits from contralateral implants and hearing aids in cochlear implant users," Hear. Res. 288, 100–113. 10.1016/j.heares.2011.11.014
43. van Hoesel, R. J. M., Bohm, M., Pesch, J., Vandali, A., Battmer, R. D., and Lenarz, T. (2008). "Binaural speech unmasking and localization in noise with bilateral cochlear implants using envelope and fine-timing based strategies," J. Acoust. Soc. Am. 123, 2249–2263. 10.1121/1.2875229
44. Wichmann, F. A., and Hill, N. J. (2001). "The psychometric function: I. Fitting, sampling, and goodness of fit," Percept. Psychophys. 63, 1293–1313. 10.3758/BF03194544