Author manuscript; available in PMC: 2021 May 1.
Published in final edited form as: Ear Hear. 2020 May-Jun;41(3):576–590. doi: 10.1097/AUD.0000000000000784

Binaural optimization of cochlear implants: Discarding frequency content without sacrificing head-shadow benefit

Sterling W Sheffield 1,2, Matthew J Goupell 3, Nathaniel J Spencer 4, Olga A Stakhovskaya 2,3, Joshua G W Bernstein 2
PMCID: PMC7028504  NIHMSID: NIHMS1533861  PMID: 31436754

Abstract

Objective:

Single-sided deafness (SSD) and bilateral (BI) cochlear-implant (CI) listeners gain near-normal levels of head-shadow benefit but limited binaural benefits. One possible reason for these limited binaural benefits is that cochlear places of stimulation tend to be mismatched between the ears. SSD-CI and BI-CI patients might benefit from a binaural fitting that reallocates frequencies to reduce interaural place mismatch. However, this approach could reduce monaural speech recognition and head-shadow benefit by excluding low- or high-frequency information from one ear. This study examined how much frequency information can be excluded from a CI signal in the poorer-hearing ear without reducing head-shadow benefits and how these outcomes are influenced by interaural asymmetry in monaural speech recognition.

Design:

Speech-recognition thresholds (SRTs) for sentences in speech-shaped noise were measured for six adult SSD-CI listeners, twelve BI-CI listeners, and nine normal-hearing (NH) listeners presented with vocoder simulations. Stimuli were presented using non-individualized in-the-ear or behind-the-ear head-related impulse-response (HRIR) simulations with speech presented from a 70° azimuth (poorer-hearing side) and noise from 290° (better-hearing side), thereby yielding a better signal-to-noise ratio (SNR) at the poorer-hearing ear. Head-shadow benefit was computed as the improvement in bilateral SRTs gained from enabling the CI in the poorer-hearing, better-SNR ear. High- or low-pass filtering was systematically applied to the HRIR-filtered stimuli presented to the poorer-hearing ear. For the SSD-CI listeners and SSD-vocoder simulations, only high-pass filtering was applied, since the CI frequency allocation would never need to be adjusted downward to frequency-match the ears. For the BI-CI listeners and BI-vocoder simulations, both low- and high-pass filtering were applied. The NH listeners were tested with two levels of performance to examine the effect of interaural asymmetry in monaural speech recognition (vocoder synthesis-filter slopes: 5 or 20 dB/octave).

Results:

Mean head-shadow benefit was smaller for the SSD-CI group (~7 dB) than for the BI-CI group (~14 dB). For SSD-CI listeners, frequencies <1236 Hz could be excluded; for BI-CI listeners, frequencies <886 Hz or >3814 Hz could be excluded from the poorer-hearing ear without reducing head-shadow benefit. Bilateral performance showed greater immunity to filtering than monaural performance, with gradual changes in performance as a function of filter cutoff. Real and vocoder-simulated CI users with larger interaural asymmetry in monaural performance had less head-shadow benefit.

Conclusions:

The “exclusion-frequency” ranges that could be removed without diminishing head-shadow benefit are interpreted in terms of low importance in the speech-intelligibility index and a small head-shadow magnitude at low frequencies. Although groups and individuals with greater performance asymmetry gained less head-shadow benefit, the magnitudes of these factors did not predict the exclusion-frequency range. Overall, these data suggest that for many SSD-CI and BI-CI listeners, the frequency allocation for the poorer-ear CI can be shifted substantially without sacrificing head-shadow benefit, at least for energetic maskers. Considering the two ears together as a single system may allow greater flexibility in discarding redundant frequency content from a CI in one ear when considering bilateral programming solutions aimed at reducing interaural frequency mismatch.

Keywords: cochlear implant, single-sided deafness, bilateral deafness, head-shadow benefit, interaural mismatch, interaural asymmetry

Introduction

By providing access to sound in both ears, single-sided deafness cochlear implants (SSD-CIs) for individuals with one deaf ear or bilateral cochlear implants (BI-CIs) for individuals with two deaf ears provide important benefits for a wide range of tasks, including speech recognition. The largest such benefit that SSD-CI and BI-CI listeners experience from having two ears is the head-shadow benefit for speech recognition in noise under certain spatial configurations (e.g., Litovsky et al. 2009; Loizou et al. 2009; Arndt et al. 2011; Gifford et al. 2014; Zeitler et al. 2015; Bernstein et al. 2017). The head-shadow benefit is a monaural speech-recognition improvement gained by having access to the ear with the better signal-to-noise ratio (SNR) when the target and maskers are spatially separated. Whereas monaural listeners can only access the better SNR when it occurs on the side of the head with a functioning ear, having hearing in both ears allows access to the ear with the better SNR in all possible spatial configurations.

The purpose of this study was to assess how different spectral regions contribute to the head-shadow benefit for SSD-CI and BI-CI listeners. The reason that this is an important question is that mismatch in the frequency alignment between the ears might limit other potential benefits of having two ears, including the fusion of bilateral inputs into a single perceived object (Goupell et al. 2013; Kan et al. 2013; Aronoff et al. 2015), the use of differences in the sounds in the two ears to localize sound sources (Kan et al. 2013; e.g., Bernstein et al. 2018), and perceptual separation of concurrent voices based on their perceived spatial locations (Bernstein et al. 2016; Wess et al. 2017). As will be discussed below, this misalignment might be overcome by adjusting the allocation of frequencies delivered to individual CI electrodes in one or both ears (Staisloff et al. 2016). However, this approach would necessarily discard frequency content from one or both ears, and it is unknown how the loss of frequency information might affect the head-shadow benefit.

In addition to the head-shadow benefit, normal-hearing (NH) listeners can also take advantage of binaural unmasking, making use of interaural-time-difference (ITD) cues (Levitt & Rabiner 1967) and in some cases interaural-level-difference (ILD) cues (e.g., Gallun et al. 2005) to better understand the target speech in the presence of a spatially separated masker. There is some evidence that SSD-CI and BI-CI listeners also benefit from binaural unmasking, but so far, it is very limited and appears to occur only in specific spatial and masker configurations (Litovsky et al. 2009; Arndt et al. 2011; Gifford et al. 2014; Bernstein et al. 2016; Grange & Culling 2016). When speech is masked by stationary noise, SSD-CI and BI-CI listeners typically obtain less than 2 dB of binaural advantage in free-field configurations, compared to approximately 4–6 dB for NH listeners (e.g., Hawley et al. 2004). In the case of a target speaker masked by other talkers, SSD-CI and BI-CI listeners can obtain as much as 4–5 dB of binaural-unmasking benefit (Bernstein et al. 2016), presumably based on the ability to perceive the concurrent voices as arising from different spatial locations (Freyman et al. 2001), although this is still considerably less than the benefit experienced by NH listeners (7–10 dB; e.g., Freyman et al. 2004). Thus, even though SSD-CI and BI-CI listeners have access to sound in both ears, they gain limited binaural benefits from integrating information across ears. Some of the limitation is likely due to a lack of temporal fine-structure encoding in the CI, which may prevent the listener from taking advantage of the equalization-cancellation process theorized for NH listeners (Durlach 1972). Yet, SSD-CI and BI-CI listeners still experience reduced binaural benefit relative to vocoder simulations, which also lack fine-structure encoding (Bernstein et al. 2016). 
Furthermore, some individual SSD-CI and BI-CI listeners experience much less binaural benefit than others (Bernstein et al. 2016), and in some cases experience contralateral speech interference instead of a benefit (Goupell et al. 2018a).

Because optimal binaural processing requires frequency-matched inputs (Joris et al. 1998; Batra & Yin 2004), mismatch in the cochlear place of stimulation between ears (i.e., the same acoustic frequencies stimulate different regions of each cochlea) might reduce binaural benefit for some SSD-CI and BI-CI listeners. Mismatch is likely to occur for SSD-CI listeners because CI electrode arrays do not reach the apical end of the cochlea (Stakhovskaya et al. 2007; Landsberger et al. 2015), which for NH listeners is tuned to low-frequency sounds. Yet CIs are typically programmed with the full speech-frequency range (e.g., 150–8000 Hz) to relay as much speech information as possible. As a result, the lowest-frequency acoustic information (e.g., 150 Hz) will be delivered to a cochlear location in the CI ear that normally responds to a higher frequency (e.g., 500 Hz). BI-CI listeners could also experience an interaural place mismatch as a result of different insertion depths in the two ears, different electrode arrays, or differences between the two ears in cochlear morphology or patterns of spiral ganglia survival.

Reducing interaural place mismatch has the potential to improve speech recognition in noise and other binaural benefits. Interaural mismatch has been shown to reduce binaural-hearing benefits, impair sound lateralization and localization, and diminish fusion for SSD-CI (Bernstein et al. 2018) and BI-CI listeners (Goupell et al. 2013; Kan et al. 2013; Aronoff et al. 2015). Vocoder simulations of SSD-CI and BI-CI listening also suggest that mismatch reduces binaural summation (Yoon et al. 2011; Goupell et al. 2018b), spatial release from masking (Goupell et al. 2018b), and contralateral unmasking (Wess et al. 2017); in addition, matching apical frequency regions appears to lead to greater binaural fusion for speech (Staisloff et al. 2016). Thus, programming CIs to reduce interaural place mismatch for SSD-CI and BI-CI listeners could improve binaural speech-recognition benefits.

One possible programming strategy to reduce interaural mismatch would be to adjust the allocation of frequencies to individual electrodes in one ear to match the cochlear place of stimulation in the other ear. Such specialized programming in cases with interaural mismatch, however, would often require the exclusion of some low or high frequencies from the reprogrammed CI, reducing the input bandwidth. For SSD-CI listeners, this would require the exclusion of low-frequency information below the characteristic frequency associated with the cochlear location of the most apical electrode (i.e., <500 Hz in the above example). For BI-CI listeners, the reprogramming could involve the exclusion of the low- or high-frequency portions of the spectrum, depending on which ear is being reprogrammed and the direction of the mismatch. Previous research using vocoded stimuli presented to NH listeners found that excluding frequencies from one end of the spectrum reduced some binaural cues while leaving others intact (Aronoff et al. 2014). For example, using a bank of bandpass filters and presenting every other low-frequency band to alternate ears with high-frequency bands presented to both ears reduced low-frequency ITD cues but maintained high-frequency ILD cues. For the purposes of this study, it was assumed that any reprogramming would only be carried out in the ear with poorer monaural speech-recognition performance (defined as the “poorer-hearing ear”), with the goal of avoiding altering the speech information presented to the ear with better monaural speech-recognition performance (i.e., the “better-hearing ear”). One major consideration regarding the proposal to remap the frequency allocation in one ear to reduce mismatch is that excluding certain frequencies from the poorer-performing ear could reduce the benefit that listeners receive from having access to both ears. 
The largest and most reliable benefit for speech recognition that SSD-CI and BI-CI listeners gain from having two ears is the head-shadow benefit (e.g., Van Deun et al. 2009; Loizou et al. 2009; Bernstein et al. 2017).

The primary goal of this current study was to determine how much frequency content can be excluded from a CI in a poorer-hearing ear without diminishing head-shadow benefit for speech recognition in speech-shaped noise (SSN) for SSD-CI and BI-CI listeners. These “exclusion frequency” ranges were intended to provide guidance as to the allowable limits for CI frequency mapping strategies aimed at reducing interaural frequency mismatch. Head-shadow benefit in this study was defined as the benefit gained from adding a CI in a second, poorer-hearing ear, for a spatial configuration where the poorer-hearing ear had the better SNR. Because the magnitude of the head-shadow effect is known to increase with increasing frequency (Feddersen et al. 1957; Kuhn 1977) and low frequencies are weighted low in importance for speech intelligibility (ANSI 1997), it was hypothesized that removing low-frequency content from the poorer-hearing ear with the better SNR would have little effect on the head-shadow benefit provided by this ear, whereas removing high-frequency content would be relatively more deleterious to performance.

Two other goals of this study were to determine how much head-shadow benefit SSD-CI and BI-CI listeners receive from their poorer-hearing ear, and to what extent variation in the low-frequency exclusion cutoff and in the magnitude of the head-shadow benefit across individuals and across groups could be explained by interaural asymmetry in monaural speech-recognition performance. If these outcomes could be predicted on an individual level, this would eliminate the need to evaluate the effect of different frequency cutoffs on the head-shadow benefit for a given patient. Models and previous research have shown that head-shadow benefit gained from the poorer-hearing ear decreases with increasing interaural asymmetry in monaural speech recognition (Litovsky et al. 2006; Litovsky et al. 2009; Loizou et al. 2009; Culling et al. 2012; Gifford et al. 2014; Goupell et al. 2018a). Therefore, it was hypothesized that individuals and listener groups with more interaural asymmetry would experience less head-shadow benefit from the poorer-hearing ear because it would provide a relatively less salient speech signal. It was also hypothesized that more asymmetry would allow more low-frequency energy to be excluded without sacrificing head-shadow benefit.

These hypotheses were examined for SSD-CI and BI-CI listeners. Additionally, a group of NH listeners was presented with vocoder simulations of SSD-CI and BI-CI listening to examine the same hypotheses in a more controlled manner. Specifically, the simulations allowed for examination of the effects of interaural asymmetry and the differences between SSD-CI and BI-CI processing in a within-subject design in contrast to the across-subject comparisons made for the CI listeners.

Methods

Listeners

Six adult SSD-CI listeners and 12 adult BI-CI listeners participated in this study. The demographic, hearing, and CI information for these listeners is provided in Table 1. The SSD-CI listeners were between 38 and 69 years old (mean = 54 years). Pure-tone thresholds in the acoustic (better-hearing) ear for the SSD-CI listeners are listed in Table 2. The BI-CI listeners were between 40 and 78 years old (mean = 63 years). SSD-CI and BI-CI listeners were tested with their sound processors set to their everyday programs and volume-control settings. No CI listeners used hearing aids for testing. Nine adults with bilateral NH also participated and were presented with vocoder simulations of SSD-CI and BI-CI listening. The NH participants were between 20 and 32 years old (mean = 23.8 years) with pure-tone thresholds equal to or better than 20 dB HL at all standard audiometric frequencies between 125 Hz and 8000 Hz. Although aging affects the understanding of vocoded speech for NH listeners (Schvartz et al. 2008; Sheldon et al. 2008a; Sheldon et al. 2008b; Goupell et al. 2017) and the NH listeners in the present study were not age-matched to the CI listeners, the main purpose of the vocoder simulations was not direct comparison with the CI results. Instead, it was desirable to have a relatively homogeneous NH listener group, without aging effects or high-frequency hearing loss, in which to examine the effects of asymmetry and of SSD-CI versus BI-CI listening with intersubject variability minimized.

Table 1.

Demographic information and information regarding the CI participants.

Listener Sex Age CI ear Duration of Deafness (years) CI Exp (years) CI Brand Electrode array Sound processor Etiology
SSD-CI1 M 58 L 21 5 Cochlear CI24RE CP910 Sudden SNHL
SSD-CI2 M 69 R 0.75 7 Cochlear CI512 CP810 Unknown
SSD-CI3 M 38 R 0.25 4 MED-EL Flex28 Opus2 Sudden SNHL
SSD-CI4 M 59 R 8 2 MED-EL Flex28 Opus2 Sudden SNHL
SSD-CI5 M 45 R 22 4 MED-EL Flex28 Opus2 EVA, Trauma
SSD-CI6 M 52 L 3 1 Cochlear CI522 CP910 Sudden SNHL
BI-CI1 M 77 B 1 (R), 61 (L) 6 (R), 3 (L) Cochlear CI24RE / CI512 CP910 / CP910 Ototoxicity/trauma
BI-CI2 M 51 B 1 (R), 4 (L) 5 (R), 5 (L) Cochlear CI24RE / CI24RE CP920 / CP920 Ototoxicity/trauma
BI-CI3 F 78 B 12 (R), 0.5 (L) 2 (R), 4 (L) Cochlear CI24RE / CI24RE CP920 / CP920 Ototoxicity/trauma, Family history
BI-CI4 M 77 B 0 (R), 0 (L) 8 (R), 13 (L) Cochlear CI24 R CS / CI24RE CP810 / CP910 Unknown
BI-CI5 F 40 B 0 (R), 22 (L) 26 (R), 4 (L) Cochlear CI512 / Nucleus 22 Freedom / CP810 Ototoxicity/trauma
BI-CI6 F 58 B 9 (R), 5 (L) 4 (R), 8 (L) Cochlear CI24RE / CI24RE CP910 / CP910 Ototoxicity/trauma
BI-CI7 F 51 B 2 (R), 0 (L) 1 (R), 7 (L) Cochlear CI512 / CI512 CP910 / CP910 Unknown
BI-CI8 F 65 B 3 (R), 4 (L) 11 (R), 10 (L) Cochlear CI24RE / CI24RE CP920 / CP920 Family history
BI-CI9 F 55 B 0 (R), 0 (L) 14 (R), 6 (L) Cochlear CI422 / CI24RE CP910 / CP910 Ototoxicity/trauma
BI-CI10 F 65 B 8 (R), 2 (L) 3 (R), 14 (L) Cochlear CI24 R CS / CI24RE CP910 / CP910 Ototoxicity/trauma
BI-CI11 M 59 B 10 (R), 5 (L) 2 (R), 7 (L) Cochlear CI24RE / CI422 CP920 / CP920 Family history
BI-CI12 M 70 B 0 (R), 0 (L) 1 (R), 13 (L) Cochlear CI422 / CI24 R CS CP920 / CP920 Ototoxicity/trauma

BI-CI, bilateral cochlear implant; EVA, enlarged vestibular aqueduct; Exp, experience; L, left; R, right; SNHL, sensorineural hearing loss; SSD-CI, single-sided deafness cochlear implant. Cochlear Ltd. CIs are not currently labeled by the U.S. Food & Drug Administration for use for the treatment of SSD.

Table 2.

Acoustic-ear air-conduction thresholds for the six SSD-CI participants for the ear contralateral to the CI.

Audiometric threshold (dB HL)
Listener 250 500 1000 2000 3000 4000 6000 8000 Hz Hearing aid?
SSD-CI1 10 10 0 15 15 15 20 20 No
SSD-CI2 10 10 20 15 10 20 DNT 25 No
SSD-CI3 10 15 15 0 10 20 20 25 No
SSD-CI4 15 10 10 10 30 30 20 15 Yes
SSD-CI5 5 10 10 15 25 25 15 20 No
SSD-CI6 20 20 30 40 45 35 25 45 Yes

DNT, did not test

Stimuli

Speech recognition in noise was tested using a modified version of the Oldenburg Matrix Test (OMT) (Kollmeier & Wesselkamp 1997; Kollmeier et al. 2015). The OMT is a closed set of sentences with five words in each sentence and ten options for each word. Auditory recordings were made of two male and two female talkers each speaking 500 sentences (different for each talker) from the matrix of 100,000 possible sentences. The usage of each word in the matrix was approximately equal in the 2000 sentences. The SSN had an amplitude spectrum equal to the average of all 2000 sentences produced by the four talkers with random phase.
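As a concrete illustration, the SSN construction described above (an amplitude spectrum equal to the corpus average, paired with random phase) can be sketched in a few lines of NumPy. The function name `make_ssn` and its parameters are ours for illustration, not the authors' code:

```python
import numpy as np

def make_ssn(sentences, fs, duration_s, rng=None):
    """Speech-shaped noise: average the magnitude spectra of the
    sentence corpus on a common frequency grid, then combine that
    average spectrum with random phase (a sketch of the procedure
    described in the text)."""
    rng = np.random.default_rng() if rng is None else rng
    n = int(duration_s * fs)
    # Average magnitude spectrum across sentences (rfft crops/pads to n).
    mag = np.zeros(n // 2 + 1)
    for s in sentences:
        mag += np.abs(np.fft.rfft(s, n=n))
    mag /= len(sentences)
    # Random phase; irfft enforces the conjugate symmetry of a real signal.
    phase = rng.uniform(0, 2 * np.pi, mag.shape)
    noise = np.fft.irfft(mag * np.exp(1j * phase), n=n)
    return noise / np.max(np.abs(noise))  # normalize to +/-1
```

In practice the noise would be scaled to the desired presentation level rather than normalized; the normalization here simply keeps the sketch self-contained.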

Head-related impulse response (HRIR) application

This experiment was designed to maximize the head-shadow benefit gained from a CI ear with poorer monaural speech recognition than the contralateral ear. Models of spatial release from masking for a SSN masker show that maximum head-shadow benefit is obtained when the speech and noise stimuli are symmetrically located on each side of the head near 70° and 290° (Culling et al. 2012), where 0° represents the azimuth of a source to the front of the listener and 90° represents the azimuth of a source on one side. Testing was completed in this spatial configuration (Fig. 1) to maximize head-shadow benefit and the subsequent high- and low-pass filtering effects. The experiment was carried out using HRIR simulations of free-field listening. While it would have been theoretically possible to low- and high-pass filter the stimuli in one ear for the CI listeners in the study by deactivating individual channels in the speech processor, this approach would not have worked for the NH listeners presented with vocoded stimuli. The use of HRIRs and acoustic filtering also allowed for more control of the stimulus parameters and reduced the possible influence of individual variability in microphone characteristics and CI frequency mapping.

Figure 1:

Diagram of the spatial configuration (simulated using HRIRs) used for measuring and computing the head-shadow benefit provided by a CI in the poorer-hearing ear. Note that the speech is on the side of the poorer-hearing ear resulting in a better SNR at that ear. In contrast, the better-hearing ear has a poorer SNR. BI-CI = bilateral cochlear implant; SNR = signal-to-noise ratio; SSD-CI = single-sided deafness cochlear implant.

The free-field configuration was simulated using the HRIRs described by Kayser et al. (2009). These HRIRs were recorded on a Bruel and Kjaer head-and-torso simulator (HATS) manikin at a distance of 3 m. HRIRs for the front microphone of a Siemens Acuris behind-the-ear (BTE) hearing aid (microphone placement similar to behind-the-ear CIs) were used to simulate an ear with a CI. HRIRs for an in-the-canal (ITC) microphone placed near the end of the manikin’s ear canal were used to simulate an open acoustic ear. Although different CI microphones have different HRIRs, these differences have been shown to result in minimal differences in head-shadow benefit and spatial release from masking (Aronoff et al. 2011). This hearing-aid microphone was chosen for consistency across listeners and because it has previously been used to successfully predict CI head-shadow benefit (Culling et al. 2012). The speech was always presented on the side of the ear with the poorer monaural speech-recognition threshold (SRT) in SSN (i.e., the poorer-hearing ear) and the SSN was always presented on the side of the ear with the better monaural SRT (i.e., the better-hearing ear), as shown in Fig. 1. Note that in this configuration, the poorer-hearing ear had a better SNR than the better-hearing ear because of the head-shadow effect. The convention adopted here was that an azimuth of 70° always represented the side of the poorer-hearing (better-SNR) ear, while an azimuth of 290° always represented the side of the better-hearing (poorer-SNR) ear. The head-shadow benefit was defined as the improvement in speech-recognition performance when listening with both ears compared to when listening monaurally with the better-hearing (poorer-SNR) ear alone.
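The HRIR simulation itself amounts to convolving the speech with each ear's 70° impulse response and the noise with each ear's 290° impulse response, then summing per ear. A minimal sketch, assuming single-channel impulse responses per ear and source; the function and parameter names are ours, not the authors':

```python
import numpy as np

def spatialize(speech, noise, hrir_speech_lr, hrir_noise_lr, snr_db):
    """Simulate the 70/290-degree configuration: filter the speech
    through each ear's speech-side HRIR and the noise through each
    ear's noise-side HRIR, mixing at the nominal (pre-HRIR) SNR.
    hrir_*_lr are (left, right) impulse-response pairs."""
    gain = 10 ** (snr_db / 20)  # level adjustment applied to the speech
    ears = []
    for h_s, h_n in zip(hrir_speech_lr, hrir_noise_lr):
        s = np.convolve(gain * np.asarray(speech, float), h_s)
        m = np.convolve(np.asarray(noise, float), h_n)
        n = max(len(s), len(m))
        ears.append(np.pad(s, (0, n - len(s))) + np.pad(m, (0, n - len(m))))
    return ears  # [left, right] ear signals
```

With the HRIRs swapped between sources, the same routine produces the co-located (0°, 0°) baseline condition.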

Figure 2 shows the effects of HRIR processing on the SNR at each ear as a function of frequency for the 70°/290° configuration tested in the study, where the speech and SSN were each presented at the same level (i.e., SNR = 0 dB) prior to HRIR filtering. The difference between the solid black curve (BTE microphone representing the poorer-hearing, better-SNR CI ear) and the dashed black curve (BTE microphone representing the better-hearing, poorer-SNR CI ear) or dotted black curve (ITC microphone representing a NH, poorer-SNR ear) illustrates how the difference in SNR between the ears generally increased with increasing frequency. Because the difference between curves is smaller at lower than at higher frequencies, the lower frequencies should contribute less to the head-shadow benefit. In the BI-CI case (solid and dashed black curves), the SNR difference decreased again above 5000 Hz, likely a result of constructive interference (the acoustical bright spot; Macaulay et al. 2010) and of the pinna not blocking the short wavelengths of sound traveling around the back of the head. Because relatively little speech information is relayed at these high frequencies (ANSI 1997), the difference in ILD between the BTE and ITC microphones for frequencies >5000 Hz should have minimal consequences for this study.
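Curves like those in Fig. 2 can be derived directly from the HRIR transfer functions: with equal-level sources (0 dB pre-filter SNR), the SNR at an ear is simply the ratio of the speech-side to the noise-side HRIR magnitude response at each frequency. A sketch under that assumption (names are ours, not the authors' analysis code):

```python
import numpy as np

def ear_snr_db(hrir_speech, hrir_noise, nfft=1024):
    """Per-frequency SNR at one ear for equal-level sources:
    20*log10 of the ratio of the speech-side and noise-side HRIR
    magnitude responses."""
    hs = np.abs(np.fft.rfft(hrir_speech, nfft))
    hn = np.abs(np.fft.rfft(hrir_noise, nfft))
    eps = 1e-12  # guard against log of zero in spectral nulls
    return 20 * np.log10((hs + eps) / (hn + eps))
```

Subtracting the better-SNR ear's curve from the poorer-SNR ear's curve then gives the frequency-dependent interaural SNR advantage discussed in the text.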

Figure 2:

SNRs for left ITC (for an acoustic-hearing ear in a SSD-CI case, dotted line) and BTE (for a CI ear in an SSD or BI-CI case, solid black line for the right ear and dashed black line for the left ear) microphones using HRIRs for target speech at 70° and SSN at 290° (Fig. 1). The SNR was 0 dB before HRIR filtering. The solid gray line represents the modeled effect of performance asymmetry, calculated by subtracting 10 dB from the right BTE microphone response to simulate poorer speech-recognition performance in that ear. BI-CI = bilateral cochlear implant; SSD-CI = single-sided deafness cochlear implant; HRIR = head-related impulse response; ITC = in-the-canal; SNR = signal-to-noise ratio; SSN = speech-shaped noise.

Asymmetry in speech-recognition performance between the ears can also impact head-shadow benefit. The solid- and dotted-black curves in Fig. 2 (BTEs in both ears) represent a symmetric situation where at a given SNR, the two ears are equally effective at relaying speech information. However, almost all SSD-CI listeners and many BI-CI listeners experience an interaural performance asymmetry. For listeners with large asymmetries in monaural performance, the poorer-hearing ear will provide much less head-shadow benefit than it does for symmetrical listeners. This is because the fidelity of the speech will be relatively distorted in the poorer-hearing ear, even though it has a better SNR.

Culling et al. (2012) proposed that performance asymmetry could be modeled by reducing the effective SNR in the poorer-hearing ear according to the difference in monaural speech-recognition thresholds. The solid gray curve in Fig. 2 represents the case where the effective SNR is reduced in the right (poorer-hearing) ear (relative to the solid black curve) to model a 10-dB performance asymmetry between the ears. In this case, frequencies below 1000 Hz would not contribute at all to the head-shadow benefit afforded by the poorer-hearing ear (solid gray curve). Note that the curves representing the SNR in the better-hearing ear (dotted and dashed black curves) cross the curve representing the effective SNR in the poorer-hearing ear (solid gray curve). This means that the better-hearing ear would have a better effective SNR at frequencies below this crossover point, even though the physical SNR would always be higher at the poorer-hearing ear (solid black curve). Overall, this analysis suggests a possible interaction between the degree of interaural asymmetry in speech-recognition performance and the relative contributions of different portions of the frequency spectrum to the overall head-shadow benefit. Specifically, listeners with a larger degree of asymmetry are likely to experience less head-shadow benefit overall, and are also less likely to depend on low-frequency information in the poorer-hearing (better-SNR) ear to achieve that head-shadow benefit. To assess this relationship, head-shadow benefit and its frequency dependence were examined with respect to the degree of asymmetry for individual SSD-CI and BI-CI listeners, between the SSD-CI and BI-CI listener groups, and between two different asymmetry conditions for the NH vocoder-simulation listeners.
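The asymmetry model just described can be illustrated numerically: penalize the poorer-hearing ear's per-frequency SNR by the monaural SRT difference, then take whichever ear has the better effective SNR in each band. The band-wise maximum is our simplification of the better-ear listening idea, not the authors' or Culling et al.'s exact computation:

```python
import numpy as np

def better_effective_snr(snr_poorer_db, snr_better_db, asymmetry_db):
    """Per-band effective SNR available to the listener: the
    poorer-hearing ear's SNR is reduced by the performance
    asymmetry (in dB), and the better of the two effective SNRs
    is taken in each band."""
    effective_poorer = np.asarray(snr_poorer_db, float) - asymmetry_db
    return np.maximum(effective_poorer, np.asarray(snr_better_db, float))
```

Bands where the penalized poorer-ear curve falls below the better-ear curve (the crossover in Fig. 2) contribute nothing to the head-shadow benefit, which is why large asymmetries should permit more low-frequency content to be excluded.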

Vocoder simulations

The NH group was tested with simulations of SSD-CI and BI-CI listening, with vocoder processing applied following the HRIR filtering. The simulations employed an 11-channel vocoder with filter cutoffs of 188, 438, 688, 938, 1188, 1563, 2063, 2688, 3563, 4688, 6063, and 7938 Hz. The analysis filtering was applied in the frequency domain with infinite slope. Envelope information was extracted in each channel with a low-pass 4th order Butterworth filter with a cutoff frequency of 400 Hz. A white noise was modulated by the extracted envelope for each channel, then passed through a 256-point finite impulse-response synthesis filter (with the same filter cutoffs as the corresponding analysis filter) using the overlap/add method. A noise vocoder was used to allow manipulation of the synthesis filter slopes to adjust the vocoder’s frequency resolution. The noise bands in the vocoder were uncorrelated across the ears. The long-term average level for each output channel was matched to the level of the corresponding input channel. No adjustments were made for the time delay of the vocoder. The filter cutoffs of the vocoder were chosen based on the default frequency allocation tables for a common CI sound processor. For the bilateral-vocoder (BI-Vocoder) conditions, both ears were presented with vocoded stimuli. For the SSD-Vocoder conditions, one ear was presented with vocoded stimuli and the other ear received stimuli that were unprocessed (except for the HRIR simulations).
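The vocoder chain above can be sketched as follows. This is a simplified illustration rather than the authors' implementation: brick-wall synthesis bands stand in for the sloped 256-point FIR synthesis filters, and channel levels are matched by RMS:

```python
import numpy as np
from scipy.signal import butter, lfilter

# Channel edges from the text: 12 cutoffs -> 11 channels
CUTOFFS = [188, 438, 688, 938, 1188, 1563, 2063,
           2688, 3563, 4688, 6063, 7938]

def brickwall_band(x, fs, lo, hi):
    """Frequency-domain bandpass with infinite slope."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    X[(f < lo) | (f >= hi)] = 0.0
    return np.fft.irfft(X, n=len(x))

def noise_vocode(x, fs, rng=None):
    """Simplified 11-channel noise vocoder: brick-wall analysis
    bands, envelope extraction via rectification and a 4th-order
    400-Hz Butterworth low-pass, envelope-modulated noise carriers,
    and per-channel RMS matching."""
    rng = np.random.default_rng() if rng is None else rng
    b, a = butter(4, 400.0 / (fs / 2.0))  # 4th-order LP at 400 Hz
    out = np.zeros(len(x))
    for lo, hi in zip(CUTOFFS[:-1], CUTOFFS[1:]):
        band = brickwall_band(np.asarray(x, float), fs, lo, hi)
        env = np.maximum(lfilter(b, a, np.abs(band)), 0.0)
        carrier = brickwall_band(rng.standard_normal(len(x)), fs, lo, hi)
        chan = env * carrier
        rms_in = np.sqrt(np.mean(band ** 2))
        rms_out = np.sqrt(np.mean(chan ** 2))
        if rms_out > 0:
            chan *= rms_in / rms_out  # match channel output to input level
        out += chan
    return out
```

Running the vocoder independently per ear, with independent noise carriers, mirrors the interaurally uncorrelated carriers described in the text.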

To evaluate the effect of interaural asymmetry on the head-shadow benefit, two levels of resolution/performance were examined for the vocoded stimuli presented to the poorer-hearing ear (i.e., the ear with the better SNR) by adjusting the filter slopes of the vocoder synthesis filters to simulate different degrees of CI current spread. In the high-resolution (HR) vocoder condition, narrow current spread was simulated with a filter slope of 20 dB/octave. In the low-resolution (LR) vocoder condition, broad current spread was represented with a slope of 5 dB/octave. For the BI-Vocoder conditions, the better-hearing ear was always presented with a HR vocoder. Thus, the BI-Vocoder simulations included conditions with symmetric (HR in both ears) and asymmetric (LR in poorer-hearing ear and HR in the better-hearing ear) performance between ears. The SSD-Vocoder conditions always had asymmetric performance, but this asymmetry was larger for the LR than for the HR vocoder condition. The ear with the filtering (designated as the poorer-hearing ear) was randomized across trials to control for any differences between ears.
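The HR and LR conditions differ only in the steepness of the synthesis-filter skirts. An idealized magnitude response, flat in the passband and rolling off at a constant dB/octave rate outside it (our idealization, not the actual FIR design), makes the 5- versus 20-dB/octave contrast concrete:

```python
import numpy as np

def slope_filter_gain_db(f, lo, hi, slope_db_per_oct):
    """Idealized synthesis-band magnitude response: 0 dB between
    lo and hi, falling at slope_db_per_oct per octave outside the
    band (frequencies in Hz, all > 0)."""
    f = np.asarray(f, dtype=float)
    gain = np.zeros_like(f)
    below = f < lo
    above = f > hi
    gain[below] = -slope_db_per_oct * np.log2(lo / f[below])
    gain[above] = -slope_db_per_oct * np.log2(f[above] / hi)
    return gain
```

With a 5-dB/octave slope, a component one octave outside the band is attenuated by only 5 dB, so adjacent channels overlap heavily, simulating broad current spread; at 20 dB/octave the channels remain largely distinct.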

Filtering of the poorer-hearing ear

The frequency content presented to the poorer-hearing ear was systematically manipulated with a series of low- and high-pass filters. Filtering for the SSD-CI and BI-CI groups was implemented in the frequency domain with infinite slope following the application of HRIRs. Filtering for the NH group was applied by excluding applicable channels from the output of the vocoder. The filter cutoffs were chosen based on the default frequency allocation tables for a common CI sound processor. The high-pass filter cutoffs were 438, 938, 2063, 3563, and 6063 Hz for the SSD-CI and BI-CI listeners. The NH listeners were additionally tested with high-pass filter cutoffs of 1563 and 4688 Hz in the SSD-Vocoder and BI-Vocoder conditions. The low-pass filter cutoffs were 938, 2063, 3563, and 6063 Hz for the BI-CI listeners and for the NH listeners in the BI-Vocoder condition. The SSD-CI listeners and the SSD-Vocoder simulations were not tested with low-pass filtered stimuli. This decision was based on the idea that because the CI array does not reach the apical portion of the cochlea, only low frequencies would need to be discarded from the CI signal to match cochlear place of stimulation to the NH ear for SSD-CI listeners.
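The infinite-slope filtering applied to the CI listeners' poorer-ear stimuli can be implemented by zeroing FFT bins on the excluded side of the cutoff. A sketch (the cutoff lists come from the text; the function name and `kind` parameter are ours):

```python
import numpy as np

# Cutoffs applied to the poorer-hearing ear for the CI listeners (Hz)
HP_CUTOFFS = [438, 938, 2063, 3563, 6063]
LP_CUTOFFS = [938, 2063, 3563, 6063]  # BI conditions only

def brickwall_filter(x, fs, cutoff_hz, kind="hp"):
    """Infinite-slope high-pass (kind='hp') or low-pass (kind='lp')
    filtering in the frequency domain: zero the FFT bins on the
    excluded side of the cutoff."""
    X = np.fft.rfft(np.asarray(x, float))
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    mask = (f < cutoff_hz) if kind == "hp" else (f >= cutoff_hz)
    X[mask] = 0.0
    return np.fft.irfft(X, n=len(x))
```

For the NH group, the equivalent manipulation was simply to omit vocoder channels whose band edges fell outside the passband, so no separate filtering stage was needed.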

Test conditions

SRTs were measured for each group and condition listed in Table 3. The conditions tested consisted of combinations of test ear (monaural or bilateral), spatial configuration of the target speech and SSN, filtering, and vocoder resolution.

Table 3.

Spatial configuration, filtering, and ear conditions tested in each group.

                      Better ear only   Poorer ear only                     Both ears
Spatial configuration 0,0      70,290   0,0      70,290   70,290   70,290   0,0      70,290   70,290   70,290
Filtering             BB       BB       BB       BB       LP       HP       BB       BB       LP       HP

SSD-CI                X        X        X        X                 X        X        X                 X
SSD-vocoder, HR       X        X        X        X                 X        X        X                 X
SSD-vocoder, LR       X        X        X        X                 X        X        X                 X
BI-CI                 X        X        X        X        X        X        X        X        X        X
BI-vocoder, HR        X        X        X        X        X        X        X        X        X        X
BI-vocoder, LR        X        X        X        X        X        X        X        X        X        X

SSD-CI, single-sided deafness with a CI; BI-CI, bilateral cochlear implant; BB, broadband (unfiltered); LP, low-pass; HP, high-pass; 0,0, speech and noise co-located at 0° azimuth; 70,290, speech and noise at 70° and 290°, with speech on the side of the poorer ear and noise on the side of the better ear; HR, high resolution; LR, low resolution; X, a tested condition; blank, not tested

Test ear.

All listeners were tested monaurally in each ear and with both ears together. Because current clinical CI practice is to program each ear in isolation to maximize performance, it was also instructive to understand the extent to which filtering affected monaural performance in the poorer-hearing ear alone. Monaural performance for the better-hearing ear was only evaluated in the broadband condition (i.e., with a full bandwidth signal without any filtering) to estimate the degree of performance asymmetry between the ears and to provide a baseline to compute head-shadow benefit.

Spatial configuration.

In addition to the spatially separated configuration (speech and SSN at 70° and 290°, respectively), monaural performance was also evaluated for each ear with broadband stimuli in the co-located spatial configuration where the target speech and SSN were both presented from the front. This allowed for an estimate of performance asymmetry for each listener.

Filtering.

Monaural performance for the poorer-hearing ear and bilateral performance were evaluated as a function of filter cutoff frequency, allowing for an examination of the effects of filtering on both head-shadow benefit and monaural performance.

Vocoder resolution.

For the NH listeners, all test conditions were repeated with the poorer-hearing ear presented with LR (5 dB/octave) and HR (20 dB/octave) vocoder processing.

Procedures

SRTs were measured adaptively by determining the SNR required for the listener to achieve 50% performance. Stimuli were presented through Sennheiser HD650 headphones (Wedemark, Germany). For CI ears, the headphone was placed over the clinical sound processor (Grantham et al. 2008; Goupell et al. 2018a). The system was calibrated with the SSN alone, at the level at which it would be presented for a 0-dB SNR in the co-located configuration. Speech was presented at 66 dB SPL after HRIR filtering for the SSD-CI and BI-CI listeners and 61 dB SPL for the NH listeners. For the NH listeners, the target speech was presented at a fixed level and the SSN level was varied. For the SSD-CI and BI-CI listeners, the speech was fixed and the SSN level was adjusted to produce SNRs ≥ 0 dB, while the SSN was fixed and the level of the speech was adjusted to produce SNRs < 0 dB. This was done to limit the possible influence of compression in the CI sound processor on the waveforms. The speech level was lower for the NH listeners to limit the overall levels at low SNRs when the noise level was high. The starting SNR for the adaptive track was different for each listener group to ensure a high level of intelligibility at the start of the track. For the BI-CI listeners, the starting SNR was 24 dB for all conditions. For the SSD-CI listeners, the starting SNR was 24 dB for conditions involving only the CI ear and 6 dB for the bilateral and monaural NH-ear conditions. For the NH listeners, the starting SNR was 24 dB when testing conditions with only vocoded stimuli and 6 dB in all other conditions.
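The level rule for the CI groups (speech fixed at 66 dB SPL with the noise lowered for SNRs ≥ 0 dB; noise fixed at 66 dB SPL with the speech lowered for SNRs < 0 dB) can be written as a small helper. This is an illustrative restatement of the rule described above, not code from the study:

```python
def ci_levels(snr_db, base_db=66.0):
    """Return (speech_dB_SPL, noise_dB_SPL) for a target SNR, following the
    rule used for the CI listeners: speech fixed for SNR >= 0 dB (noise
    attenuated), noise fixed for SNR < 0 dB (speech attenuated). This keeps
    the higher-level signal constant, limiting processor-compression effects."""
    if snr_db >= 0:
        return base_db, base_db - snr_db
    return base_db + snr_db, base_db
```

For the NH listeners the analogous rule fixed the speech at 61 dB SPL and varied only the noise level.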

On each trial, listeners were presented with a single sentence, and responded by selecting five words from a response matrix with a mouse. The 10×5 response matrix included ten response options for each word in the target sentence. For the purposes of the adaptive track, a response was considered correct if the listener correctly identified at least four of the five words. The SNR was increased following each incorrect response and decreased following each correct response. The initial step size was 6 dB, decreased to 4 dB after the first reversal (i.e., the first incorrect response), and then to 2 dB after two more reversals. If a participant responded incorrectly at an SNR of 24 dB, the maximum SNR tested, then the track was ended and their SRT was arbitrarily defined as 25 dB (unmeasurable performance). Testing was completed in blocks of 10 trials with a total of 30 trials per condition. The adaptive track was continued for the second and third blocks in each condition with the initial SNR for these blocks determined by the SNR and response to the 10th trial of the previous block. SRTs for each condition were defined as the average of the SNRs across the last 15 trials for the condition, with the intent to analyze the average of the stable portion of the tracking procedure. The BI-CI users first performed the adaptive track with a co-located speech and SSN in each ear to determine any asymmetry in performance between the ears and to identify which was the poorer-hearing ear (for speech recognition in noise) to be filtered during testing. Otherwise, the order of all conditions was randomly varied across blocks. Each participant received approximately 15 minutes of total practice time in a variety of monaural, binaural, and filtered conditions before beginning testing to acclimate to the task and become familiar with the stimuli. 
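The one-down/one-up tracking rule above (step sizes 6 → 4 → 2 dB, a 24-dB ceiling with an assigned SRT of 25 dB, and the SRT taken as the mean SNR over the last 15 of 30 trials) can be simulated as below. The listener is replaced by a hypothetical psychometric function `p_correct(snr)`; this is a sketch of the procedure's logic, not the study's test software:

```python
import numpy as np

def run_adaptive_track(p_correct, start_snr=24.0, max_snr=24.0,
                       n_trials=30, seed=1):
    """One-down/one-up adaptive track on sentence SNR. A 'correct' trial
    (>= 4 of 5 matrix words identified) is drawn from p_correct(snr).
    The step size starts at 6 dB, drops to 4 dB after the first reversal
    and to 2 dB after two more reversals. An incorrect response at the
    24-dB ceiling ends the track with an SRT of 25 dB (unmeasurable).
    Otherwise the SRT is the mean SNR over the last 15 trials."""
    rng = np.random.default_rng(seed)
    snr, snrs = start_snr, []
    reversals, last_dir = 0, None
    for _ in range(n_trials):
        snrs.append(snr)
        correct = rng.random() < p_correct(snr)
        if not correct and snr >= max_snr:
            return 25.0                       # unmeasurable performance
        direction = -1 if correct else +1     # down after correct, up after incorrect
        if last_dir is not None and direction != last_dir:
            reversals += 1
        step = 6.0 if reversals < 1 else 4.0 if reversals < 3 else 2.0
        snr = min(snr + direction * step, max_snr)
        last_dir = direction
    return float(np.mean(snrs[-15:]))
```

A one-down/one-up rule converges on the 50% point of the psychometric function, matching the 50%-correct SRT definition given above.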
Total test time was approximately 1.5 hours for the SSD-CI listeners, 2.5 hours for the BI-CI listeners, and 7 hours for the NH listeners. NH listeners completed the testing in one or multiple visits depending on their preference.

Statistical analysis

Analyses included three main outcome variables.

Exclusion cutoff:

The maximum (or minimum) cutoff frequency below (or above) which frequency content can be removed from the poorer-hearing ear without significantly decreasing head-shadow benefit.

Magnitude of the head-shadow benefit.

The maximum magnitude of the head-shadow benefit in the broadband (unfiltered) condition.

Asymmetry.

The difference between the ears in monaural SRTs for co-located speech and SSN.

Analyses of these three main outcome variables were carried out to address the main experimental questions: (1) How much frequency content can be excluded without affecting the head-shadow benefit, and how does this vary across listener groups? (2) How much head-shadow benefit do listeners receive, and how does this vary across groups? (3) Can variation in the exclusion cutoff and the magnitude of the head-shadow benefit across individuals and groups be explained by interaural asymmetry?

The exclusion cutoff was calculated for each individual listener by fitting the head-shadow data with a sigmoidal function with two free parameters describing the slope and inflection point of the function. The plateau of the sigmoidal function – representing the maximum head-shadow benefit for each listener and condition – was set to be equal to the broadband head-shadow benefit. The exclusion cutoff for a given listener and condition, derived from this fitted function, was defined as the cutoff frequency that yielded performance significantly below the maximum head-shadow benefit. To compute the exclusion cutoff, the standard error of the SRT estimate (σSRT) was calculated for each listener (and for each of the four vocoder conditions for the NH listeners) based on the SNR values across the last 15 trials of the adaptive track for each filter condition. The root-mean-square of these standard errors was then computed across all the filter conditions tested. The exclusion cutoff was then defined as the cutoff frequency that yielded a level of performance that was 2√2(σSRT) below the maximum head-shadow benefit. This value represents a difference equal to the 95% confidence interval for the difference between independent measurements with a common standard error. Log-transformed exclusion-cutoff values were used for all statistical analyses.
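The exclusion-cutoff computation can be sketched as follows. The exact sigmoidal form is not specified in this excerpt, so the sketch assumes a logistic function of log frequency with free inflection point and slope, with the upper plateau pinned to the broadband head-shadow benefit as described; the cutoff is then the frequency at which the fit falls 2√2·σSRT below the plateau:

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_exclusion_cutoff(cutoffs_hz, benefits_db, plateau_db, sigma_srt):
    """Fit a two-parameter sigmoid (inflection x0 and slope k, on log
    frequency) whose upper plateau is fixed at the broadband benefit,
    then solve for the cutoff frequency where the fitted benefit falls
    2*sqrt(2)*sigma_srt below that plateau."""
    logf = np.log(np.asarray(cutoffs_hz, dtype=float))

    def sigmoid(lf, x0, k):
        return plateau_db / (1.0 + np.exp((lf - x0) / k))

    (x0, k), _ = curve_fit(sigmoid, logf, benefits_db,
                           p0=[np.median(logf), 0.5], maxfev=10000)
    drop = 2.0 * np.sqrt(2.0) * sigma_srt
    # plateau/(1 + e^z) = plateau - drop  =>  e^z = drop/(plateau - drop)
    z = np.log(drop / (plateau_db - drop))
    return float(np.exp(x0 + k * z))
```

The 2√2·σSRT criterion corresponds to a 95% confidence interval on the difference between two independent measurements sharing the standard error σSRT (1.96·√2·σSRT ≈ 2√2·σSRT).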

Separate repeated-measures linear mixed-model analyses were used to compare each main outcome variable across listener groups and conditions. Within each analysis, there were two factors: CI or vocoder type (SSD or BI) and condition (CI, LR, or HR). In this way, all of the CI and vocoder data were combined into a single analysis for each outcome variable. Bonferroni-corrected post-hoc tests allowed for more specific comparisons in cases where significant main effects or interactions were observed. In all cases, a p-value of less than 0.05 (after correction) was considered significant. In addition to the mixed-model analyses of the complete data set from the NH and CI listeners, separate planned-comparison two-tailed t-tests were carried out for just the CI listeners to investigate the effect of listener group (SSD-CI or BI-CI) on each of the outcome variables. Correlation analyses were also carried out to examine the relationship between asymmetry in monaural performance and both the magnitude of the head-shadow benefit and the estimated exclusion cutoffs across individual CI listeners.
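The planned between-group t-tests and the asymmetry correlations can be sketched with standard SciPy calls (the mixed models are omitted here). The data and variable names below are placeholders, not the study's values:

```python
import numpy as np
from scipy import stats

def planned_comparisons(ssd_values, bi_values, asymmetry, outcome):
    """Two-tailed independent-samples t-test comparing an outcome variable
    between the SSD-CI and BI-CI groups, plus a Pearson correlation
    (reported as r^2, as in the text) between interaural asymmetry and an
    outcome variable across individual CI listeners."""
    t, p_t = stats.ttest_ind(ssd_values, bi_values)
    r, p_r = stats.pearsonr(asymmetry, outcome)
    return {"t": float(t), "p_t": float(p_t), "r2": float(r) ** 2,
            "p_r": float(p_r)}
```

Reporting r² rather than r discards the sign of the relationship, so the direction (e.g., the negative correlation between asymmetry and head-shadow benefit) must be stated separately, as it is in the Results.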

Results

Even though the main analyses were carried out on the head-shadow benefit calculations, the raw monaural and bilateral SRTs are presented first (Fig. 3) for three purposes: 1) to visually demonstrate the effects of high- and low-pass filtering on speech-recognition performance with one and both ears, 2) to indicate the performance level of the CI groups relative to the NH simulations, and 3) to visually describe how the head-shadow benefit was calculated. The top row (Fig. 3A–C) plots the SRT results for the SSD-CI listeners and associated vocoder conditions (high-pass filtering only). The other two rows plot the results for the BI-CI listeners and associated vocoder conditions, with the high-pass filtering conditions presented in the middle row (Fig. 3D–F) and the low-pass filtering conditions presented in the bottom row (Fig. 3G–I). The three columns in Fig. 3 represent the CI listeners (left column; Fig. 3A, D, and G) and the NH listeners presented with HR (middle column; Fig. 3B, E, and H) and LR vocoder simulations (right column; Fig. 3C, F, and I). In each panel, broadband performance is represented as a high-pass cutoff of 188 Hz (top and middle rows) or a low-pass cutoff of 8000 Hz (bottom row).

Figure 3:

SRTs as a function of filter-cutoff frequency in the poorer-hearing, better-SNR ear for the CI listeners and NH listeners presented with CI simulations. The left column represents the CI listeners (panels A, D, G), the middle column the V-HR conditions (panels B, E, H), and the right column the V-LR conditions (panels C, F, I). The top row represents high-pass filtering for the SSD-CI listeners and simulations (panels A, B, C), the middle row represents high-pass filtering for the BI-CI listeners and simulations (panels D, E, F), and the bottom row represents low-pass filtering for the BI-CI listeners and simulations (panels G, H, I). Points in each panel represent the average SRT and error bars represent ±1 standard deviation. The better-hearing ear SRTs (±1 standard deviation) are represented by horizontal gray solid (and dotted) lines. Unmeasurable performance is noted at 25 dB by horizontal gray lines. BI = bilateral; CI = cochlear implant; SNR = signal-to-noise ratio; SRT = speech-recognition threshold; SSD = single-sided deafness; V-HR = high-resolution vocoder; V-LR = low-resolution vocoder.

Several general trends are apparent in Fig. 3. First, SRTs for the CI listeners (left column) generally fell between the SRTs for the two NH vocoder conditions (middle and right columns). Second, as expected, performance decreased for all groups as frequency content was removed from the signal – i.e., as the high-pass filter cutoff frequency increased or the low-pass filter cutoff frequency decreased. For the high-pass conditions (Fig. 3A–F), there was generally no impact on monaural (circles) or bilateral performance (triangles) until the cutoff frequency rose above 938 Hz. One possible exception to this observation is that monaural performance for the BI-CI listeners showed a trend toward poorer performance for a cutoff frequency of 938 Hz (Fig. 3D), but the variability in the data was high. In contrast, for the low-pass conditions, both monaural and bilateral performance decreased as soon as any high-frequency content was removed from the signal, with SRTs consistently increasing from right to left in Fig. 3G–I. Third, and most importantly, the decrease in performance across all listener groups and vocoder conditions was much more precipitous when testing only the poorer-hearing ear, which was filtered, than when testing both ears together (compare circles to triangles in each panel). In other words, bilateral performance was more immune to filtering than monaural performance, showing only gradual changes as a function of filter cutoff.

Head-shadow benefit and exclusion cutoffs

The main goal of this study was to determine the exclusion cutoff frequency – i.e., the extent to which frequency content could be removed from the stimulus in the poorer-hearing ear without reducing the head-shadow benefit – and how this varied across listeners. Figure 4 plots the head-shadow benefit (monaural better-ear SRT minus bilateral SRT) as a function of high-pass filter cutoff frequency for the SSD-CI listeners and SSD-Vocoder conditions (Fig. 4A), and as a function of high-pass (Fig. 4B) and low-pass (Fig. 4C) filter cutoff frequency for the BI-CI listeners and BI-Vocoder conditions. The symbol colors in each panel represent the CI listeners (black squares) and the LR (white squares) and HR (gray squares) vocoder conditions for the NH listeners. The symbols are horizontally shifted for visual clarity. Also included in each panel of Fig. 4 are sigmoidal fits to the data. The vertical lines in Fig. 4 represent the across-listener mean exclusion cutoff frequency, defined as the cutoff frequency at which the head-shadow benefit decreased from the maximum plateau by a significant amount (2√2(σSRT)). The mean low-pass and high-pass exclusion frequencies were generally similar between the CI listeners and the two NH vocoder conditions, despite large differences in maximum head-shadow benefit.

Figure 4:

Head-shadow benefit as a function of filter-cutoff frequency. The curved lines are sigmoidal fits of the data for each group/condition. Points in each panel represent the average SRT and error bars represent ±1 standard deviation. The vertical lines represent mean exclusion cutoffs for each group/condition. BI = bilateral; CI = cochlear implant; SSD = single-sided deafness; V-HR = high-resolution vocoder; V-LR = low-resolution vocoder.

Figure 5 shows the group-mean exclusion cutoff frequencies (averaged on a log scale) for each listener group, filtering condition, and vocoder-resolution condition. The mean high-pass exclusion cutoff frequencies were 1236 Hz for the SSD-CI listeners and 886 Hz for the BI-CI listeners, while the mean low-pass exclusion cutoff was 3814 Hz for the BI-CI listeners. The linear mixed-model analysis including high-pass exclusion cutoff frequencies for all groups identified a significant main effect of CI/vocoder type [SSD vs. BI; F(1,33.9) = 10.1, p < 0.005], supporting the observation that the high-pass exclusion cutoff was generally higher for the SSD-CI listeners and SSD-Vocoder conditions than for the BI-CI listeners and BI-Vocoder conditions. There was also a main effect of CI/vocoder condition [CI vs. HR vocoder vs. LR vocoder; F(2,31.3) = 4.4, p < 0.05], but no significant interaction between the two factors (p = 0.59). Post-hoc comparisons revealed that the high-pass filter exclusion cutoffs were lower for the HR than for the LR vocoder conditions (compare white to gray symbols; p < 0.05), but there were no significant differences in high-pass exclusion cutoff between the CI listeners and either vocoder condition. A planned-comparison two-tailed t-test found that the difference in high-pass exclusion cutoffs between the SSD-CI and BI-CI listeners (compare black symbols) just failed to reach significance (p = 0.061). A mixed-model analysis conducted on the low-pass filtering data found no significant effect of CI/vocoder condition [CI vs. HR vocoder vs. LR vocoder; p = 0.20]. In summary, high-pass filter exclusion cutoffs were higher for the SSD-CI listeners and SSD-Vocoder conditions considered together than for the BI-CI listeners and BI-Vocoder conditions considered together, although this difference was not significant when considering only the SSD-CI and BI-CI listeners. High-pass exclusion cutoffs were also higher for the LR than for the HR vocoder conditions. There were no significant differences in low-pass filter exclusion cutoffs across the BI-CI listeners and BI-Vocoder conditions.

Figure 5:

Exclusion cutoffs as a function of listener group and vocoder condition. Black symbols represent CI listeners. White symbols represent V-LR. Gray symbols represent V-HR conditions. Error bars represent ±1 standard deviation. High-pass filtering results are represented on the left of the black vertical line and low-pass filtering results on the right. BI = bilateral; CI = cochlear implant; V-HR = high-resolution vocoder; V-LR = low-resolution vocoder; SSD = single-sided deafness.

The second goal of this study was to evaluate how the magnitude of the head-shadow benefit varied across listeners. Figure 6 plots the maximum head-shadow benefit – defined as the magnitude of the head-shadow benefit in the broadband condition (full bandwidth in both ears) – for each listener group and vocoder condition. There was large variation in maximum head-shadow benefit across listener groups and vocoder conditions, ranging from 7 to 25 dB. A mixed-model analysis revealed a significant main effect of CI/vocoder type [SSD vs. BI; F(1,42.2) = 70.9, p < 0.0005], supporting the observation that maximum head-shadow benefit was smaller for the SSD-CI listeners and SSD-Vocoder conditions (three left bars) than for the BI-CI listeners and BI-Vocoder conditions (three right bars). There was also a significant main effect of CI/vocoder condition [CI vs. HR vocoder vs. LR vocoder; F(2,27.0) = 92.5, p < 0.0005], but no significant interaction between the two factors (p = 0.82). Post-hoc comparisons revealed the head-shadow benefit to be larger for the HR vocoder conditions (gray bars) than for the LR vocoder conditions (white bars; p < 0.0005) or for the CI listeners (black bars; p < 0.0005), while the benefit was not statistically different for the LR vocoder conditions and the CI listeners (p = 0.25). A planned-comparison two-tailed t-test showed a significantly larger head-shadow benefit for the BI-CI listeners (13.9 dB, right black bar) than for the SSD-CI listeners [7.7 dB, left black bar; t(16) = 4.8, p < 0.001].

Figure 6:

Maximum head-shadow benefit as a function of listener group and vocoder condition. Black bars represent CI listeners. White bars represent V-LR. Gray bars represent V-HR conditions. Error bars represent +1 standard deviation. BI = bilateral; CI = cochlear implant; V-HR = high-resolution vocoder; V-LR = low-resolution vocoder; SSD = single-sided deafness.

Overall, the BI-CI listeners (and BI-Vocoder simulations) gained more head-shadow benefit than the SSD-CI listeners (and SSD-Vocoder simulations). Additionally, both the SSD-CI and BI-CI listeners gained head-shadow benefit similar to that of the NH listeners presented with LR vocoder simulations, and less benefit than the NH listeners presented with HR vocoder simulations.

Asymmetry effects

The third goal of this study was to determine whether differences across groups and individuals in the exclusion cutoff and head-shadow magnitude could be explained in terms of the degree of asymmetry in monaural speech recognition between the two ears. It was hypothesized that more asymmetry would decrease head-shadow benefit provided by the poorer-hearing ear and increase high-pass filter exclusion cutoffs (Fig. 2). This hypothesis was tested by comparing the degree of asymmetry across listener groups and by examining the correlations between asymmetry and the exclusion cutoff and head-shadow magnitude for individual CI listeners.

Asymmetry values:

The asymmetry in speech-recognition performance, calculated as the group-mean interaural difference in monaural SRTs for co-located speech and SSN, is plotted for each listener group and vocoder condition in Fig. 7. To save time, asymmetry in the monaural conditions was not measured for the BI-Vocoder HR condition, but was assumed to be 0 dB because the same vocoder processing was presented to each ear (# symbol in Fig. 7). In contrast, asymmetry was measured in the BI-Vocoder LR condition, which involved a HR vocoder presented to one ear and a LR vocoder presented to the other ear.

Figure 7:

Group-mean interaural asymmetry in monaural SRTs as a function of listener group and vocoder condition. Black bars represent CI listeners. White bars represent V-LR conditions (the V-LR was only used in the poorer-hearing ear, with either acoustic hearing or the V-HR in the better-hearing ear for SSD-Vocoder and BI-Vocoder conditions). Gray bars represent V-HR conditions. The error bars represent +1 standard deviation. The # represents the assumed lack of asymmetry in the V-HR BI-Vocoder condition (not tested). BI = bilateral; CI = cochlear implant; V-HR = high-resolution vocoder; V-LR = low-resolution vocoder; SRT = speech-recognition threshold; SSD = single-sided deafness.

The SSD-CI listeners and SSD-Vocoder conditions (left three bars) exhibited significantly greater asymmetry than the BI-CI listeners and BI-Vocoder conditions (right three bars), as confirmed by a significant main effect of CI/vocoder type (SSD vs. BI) in the mixed-model analysis [F(1,13.2) = 32.2, p < 0.0005]. There was also a main effect of CI/vocoder condition (CI vs. HR vs. LR) on interaural asymmetry [F(2,11.6) = 39.4, p < 0.005], but no significant interaction between CI/vocoder type and CI/vocoder condition (p = 0.81). A post-hoc comparison revealed that the LR vocoder conditions (white bars) yielded significantly more asymmetry than the HR conditions (gray bars), as expected (p < 0.0005). Asymmetry for the CI listeners was not statistically different from that for the HR vocoder conditions (p = 0.19), and the comparison with the LR vocoder conditions just failed to reach significance (p = 0.063). A planned two-tailed t-test found no significant difference in asymmetry between the SSD-CI listeners (8.6 dB) and the BI-CI listeners (3.7 dB) (compare black bars; p = 0.11). In summary, asymmetry was generally larger for the SSD-CI listeners and SSD-Vocoder conditions than for the BI-CI listeners and BI-Vocoder conditions, but this difference was not significant when considering the CI listeners separately from the vocoder results. Additionally, the LR vocoder conditions yielded more asymmetry than the HR conditions.

Asymmetry correlations:

Increasing the asymmetry by manipulating the vocoder resolution (Fig. 7) decreased the magnitude of the head-shadow benefit (Fig. 4) and increased the high-pass exclusion cutoff frequency (Fig. 5). The effect of asymmetry on exclusion frequencies and head-shadow benefit for the SSD-CI and BI-CI listeners was evaluated further using correlation analyses. Figure 8 plots exclusion frequencies and maximum head-shadow benefit for the CI listeners as a function of individual interaural asymmetry in monaural speech recognition. High-pass filter exclusion frequencies are plotted in Fig. 8A, low-pass exclusion frequencies in Fig. 8B, and head-shadow benefit in Fig. 8C. SSD-CI listeners are represented by black circles and BI-CI listeners by white circles. Linear-regression fits (solid lines) and 95% confidence bounds (dashed lines) are plotted for each relationship. There was no significant correlation between asymmetry and high-pass (Fig. 8A) or low-pass (Fig. 8B) exclusion cutoff frequencies. There was, however, a negative correlation between maximum head-shadow benefit and interaural asymmetry (Fig. 8C). When separating the two CI groups, the correlation between performance asymmetry and maximum head-shadow benefit was r2 = 0.34 (p < 0.005) for the SSD-CI listeners and r2 = 0.61 (p < 0.001) for the BI-CI listeners, although it should be noted that the correlation for the SSD-CI listeners was mainly driven by a single listener with large asymmetry and relatively little head-shadow benefit. For the BI-CI listeners, the correlation between poorer-ear performance and maximum head-shadow benefit was somewhat weaker [r2 = 0.38, p < 0.001; not shown] than the correlation between performance asymmetry and maximum head-shadow benefit (Fig. 8C), suggesting that the magnitude of the head-shadow benefit reflected asymmetry rather than simply performance in the poorer-hearing ear. For the SSD-CI listeners, the correlation with asymmetry (Fig. 8C) mainly reflected variability in performance in the poorer-hearing CI ear [r2 = 0.29, p < 0.008; not shown], since there was relatively little variability in performance in the better-hearing acoustic ear.

Figure 8:

Individual (A) high-pass filter exclusion cutoffs, (B) low-pass filter exclusion cutoffs, and (C) head-shadow benefit plotted as a function of individual performance asymmetry between the ears. A linear-regression fit (solid line) and 95% confidence intervals (dashed curves) are also plotted for each relationship. BI = bilateral; CI = cochlear implant; HP = high-pass; LP = low-pass; SSD = single-sided deafness.

Lastly, there was no significant correlation between high- or low-pass filter exclusion frequencies and maximum head-shadow benefit for either the SSD-CI or BI-CI listener groups (p > 0.40 for all correlations, not shown). In summary, we hypothesized (1) that high-pass filter exclusion cutoffs would increase with increasing monaural performance asymmetry and (2) that head-shadow benefit would decrease with increasing asymmetry. While the second hypothesis was supported in that maximum head-shadow benefit was inversely correlated with monaural-performance asymmetry, the first hypothesis was not supported in that individual exclusion frequencies were not significantly correlated with asymmetry.

Discussion

The purposes of the current study were 1) to determine how much frequency content can be excluded from a CI without diminishing head-shadow benefit when speech is presented from the side closest to the poorer-hearing ear, and how this varies across groups; 2) to determine how much head-shadow benefit listeners receive from a poorer-hearing ear, and how this varies across groups; and 3) to assess whether variation in the exclusion cutoff and the magnitude of the head-shadow benefit across individuals can be explained by interaural asymmetry in monaural speech-recognition performance. Head-shadow benefit was particularly important for the spatial configuration examined in this study because it directly determined overall bilateral speech-recognition performance for a particular filter condition. Head-shadow benefit was defined as the difference between bilateral and monaural better-hearing-ear performance. Because filtering was only applied to the poorer-hearing (better-SNR) ear, the monaural SRT for the better-hearing ear was a fixed quantity (horizontal lines in Fig. 3). Thus, if a particular filtering condition reduced the head-shadow benefit, it also reduced overall bilateral performance (i.e., increased the bilateral SRT).

The results showed that on average, frequency content below 1236 Hz could be excluded from the poorer-hearing ear for SSD-CI listeners and that frequency content below 886 Hz or above 3814 Hz could be excluded from the poorer-hearing ear for BI-CI listeners without significantly reducing the head-shadow benefit that ear provides. This means that if one were to remap the frequencies in the poorer-hearing ear to minimize mismatch with the goal of improving binaural function, then the frequencies above or below these limits could be safely removed without affecting the head-shadow benefit or overall bilateral performance.

Group exclusion cutoffs

There are two important factors that likely determine which frequencies are less important for head-shadow benefit and can therefore be excluded. First, ILDs are smaller (i.e., <10 dB) below 1000 Hz than above 1000 Hz (see Fig. 2). Thus, low frequencies likely contribute less to head-shadow benefit than higher frequencies. Second, the lowest and highest frequencies in the speech range are less important for speech recognition than frequencies in the midrange. According to the speech intelligibility index (SII; ANSI 1997), 80% of speech information is contained between 500 and 4000 Hz. One important difference between the results for the low- and high-pass filtering conditions was that in the high-pass conditions, the head-shadow benefit remained unaffected as low frequencies were removed until the cutoff frequency exceeded about 1000 Hz (note the plateau region in Fig. 4A and 4B). In contrast, low-pass filtering seemed to begin to reduce the head-shadow benefit (Fig. 4C) and increase the bilateral SRT (Fig. 3G, H, and I) as soon as any frequency content was removed. As a result, removing content above 4000 Hz had more of an effect on performance than removing content below 500 Hz. This difference suggests that the frequency dependence of the attenuation caused by the head (i.e., more head shadow at high frequencies than at low frequencies; Fig. 2) was more responsible for the differential effects of low- and high-pass filtering on head-shadow benefit than the distribution of speech information. At the same time, the fact that the head-shadow benefit was not significantly affected by filtering the high frequencies until the cutoff frequency fell below 3814 Hz probably reflects the relatively small amount of important speech information present at higher frequencies.

Group head-shadow magnitudes

The results showed different magnitudes of head-shadow benefit across listener groups and vocoder conditions. On average, the head-shadow benefit was smaller for the SSD-CI listeners (7.7 dB) and SSD-Vocoder conditions than for the BI-CI listeners (13.9 dB) and BI-Vocoder conditions. Additionally, the head-shadow benefit was smaller for the LR vocoder conditions than for the HR vocoder conditions. These group differences are consistent with our hypothesis, based on previous research and models, that interaural asymmetry in monaural speech recognition should reduce head-shadow benefit (Litovsky et al. 2006; Litovsky et al. 2009; Loizou et al. 2009; Culling et al. 2012; Gifford et al. 2014; Goupell et al. 2018a). The group differences in head-shadow benefit magnitude are most likely due to group differences in asymmetry for monaural speech recognition.

The head-shadow benefit for both the SSD-CI and BI-CI listeners in the current study was larger than is often reported in the literature, likely because of the spatial configuration with symmetrically placed speech and the use of SSN. Bernstein et al. (2017) tested SSD-CI listeners in the free field with speech and noise placed symmetrically at azimuths of ±108° and reported that the CI ear provided an average head-shadow benefit of 5.1 dB, slightly smaller than the 7.7 dB in the current study. Culling et al. (2012) modeled spatial release from masking and head-shadow benefit by testing unilateral CI listeners with symmetrically placed speech and SSN. They estimated BI-CI head-shadow benefit by measuring the difference between unilateral performance with speech ipsilateral and contralateral to the CI. This resulted in a mean predicted and measured head-shadow benefit of approximately 18 dB for BI-CI listeners with speech and SSN at 60° and 300° azimuth on opposite sides of the head. This modeling and experimental design assumed symmetrical speech-recognition abilities between ears. The BI-CI listeners in the current study gained a mean head-shadow benefit of approximately 14 dB. Previous studies also found less head-shadow benefit for BI-CI listeners in conditions involving symmetrically placed speech and SSN than what Culling et al. (2012) predicted (Laszig et al. 2004; Laske et al. 2009), but this might be because these studies used a spatial configuration expected to produce less head shadow (45° and 315°). Figure 8C suggests that the BI-CI listeners in the current study most likely gained less head-shadow benefit on average than predicted by Culling et al. (2012) because of asymmetries in speech recognition between ears. The most symmetric BI-CI listeners (asymmetry ≤ 2 dB) gained 16–18 dB of head-shadow benefit, similar to the predictions of Culling et al., whereas the more asymmetric listeners showed less head-shadow benefit. 
Finally, it is also important to point out that our HRIR simulations represent an anechoic condition; with simulated reverberation to represent a more realistic listening environment, Culling et al. (2012) showed that the magnitude of the head-shadow benefit decreased.
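The head-shadow benefit discussed throughout reduces to a simple difference of speech-recognition thresholds: the SRT measured with the better-hearing ear alone minus the SRT measured with both ears active. A minimal sketch of this computation, using hypothetical SRT values rather than data from the study:

```python
def head_shadow_benefit(srt_better_ear_only_db, srt_bilateral_db):
    """Head-shadow benefit (dB): improvement in SRT gained by enabling
    the CI in the poorer-hearing (better-SNR) ear. Lower (more negative)
    SRTs mean better performance, so the benefit is the drop in SRT
    when the second ear is added."""
    return srt_better_ear_only_db - srt_bilateral_db

# Hypothetical listener: SRT of +2 dB SNR with the better ear alone,
# -12 dB SNR bilaterally -> 14 dB benefit, comparable to the BI-CI
# group mean reported above.
print(head_shadow_benefit(2.0, -12.0))  # 14.0
```

Because the benefit is a threshold difference, any factor that degrades monaural performance in the added ear (e.g., interaural asymmetry) directly shrinks this quantity.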

The large head-shadow benefit in the HR-vocoder conditions should also be noted. The NH listeners gained approximately 25 dB of head-shadow benefit in the HR BI-vocoder condition, much more than predicted by the Culling et al. (2012) model. It is unclear why this occurred. One possibility is that the 20 dB/octave filter slopes represent less spread of excitation than would be expected for CI listeners. This high degree of frequency resolution might have introduced a binaural-interaction benefit that would not have been available to CI listeners and was not included in the Culling et al. model. However, the fact that broadband binaural performance was equal to monaural performance in the poorer-hearing (better-SNR) ear (Fig. 3E) suggests that listeners did not receive any binaural benefit in this condition. Another possibility is that the frequency-band importance function for the closed-set sentence materials used here might be more heavily weighted toward high frequencies than the average-speech importance function employed by the Culling et al. model. Although the band-importance function for OMT sentences has not been established, the functions for certain closed-set tasks, such as nonsense syllables, are known to be weighted more heavily toward high frequencies (ANSI 1997) where the head-shadow effects are largest.
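The band-importance argument above can be made concrete with the SII's core computation (ANSI 1997): predicted intelligibility is a band-importance-weighted sum of band audibilities. The band edges, weights, and audibility values below are invented for illustration and are not the standard's published values:

```python
def sii(importance, audibility):
    """Simplified SII: sum of band importance times band audibility.
    Importance weights must sum to 1."""
    assert abs(sum(importance) - 1.0) < 1e-9
    return sum(w * a for w, a in zip(importance, audibility))

# Four hypothetical octave bands (0.5, 1, 2, 4 kHz). Audibility at the
# better-SNR ear improves with frequency because the head-shadow effect
# grows with frequency.
audibility = [0.4, 0.5, 0.8, 0.9]

# Flat "average speech" weights vs. a hypothetical high-frequency-weighted
# function for a closed-set material.
avg_speech = [0.25, 0.25, 0.25, 0.25]
hf_weighted = [0.10, 0.20, 0.30, 0.40]

print(round(sii(avg_speech, audibility), 2))   # 0.65
print(round(sii(hf_weighted, audibility), 2))  # 0.74
```

The high-frequency-weighted material yields a higher predicted score from the same audibility pattern, which is the mechanism by which a high-frequency-weighted importance function could inflate the measured head-shadow benefit relative to an average-speech prediction.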

Effects of monaural asymmetry

There was considerable variability across individuals in both high- and low-pass exclusion cutoffs. The estimated exclusion cutoffs for the SSD-CI and BI-CI listeners ranged from 359–1768 Hz for high-pass filtering and 1055–6278 Hz for low-pass filtering. The magnitude of the head-shadow benefit for broadband, unfiltered stimuli also varied considerably, with a range of 6.7–11.1 dB for the SSD-CI listeners and 7.8–17.9 dB for the BI-CI listeners. The listeners who demonstrated the largest head-shadow benefit matched the results of Culling et al. (2012), who inferred BI-CI performance based on monaural CI performance by assuming perfect symmetry in monaural performance.

We hypothesized that both (1) the magnitude of the head-shadow benefit and (2) the exclusion cutoff would depend on the degree of asymmetry in monaural performance. If the exclusion cutoff for a given SSD-CI or BI-CI listener could be predicted simply based on monaural speech scores, this could preclude the need to conduct a more thorough series of tests measuring the impact of filtering on the head-shadow benefit to determine which frequencies could be sacrificed for a given individual.

The current results confirmed our first hypothesis that head-shadow benefit should decrease with increasing interaural asymmetry, consistent with previous literature and models (Litovsky et al. 2009; Culling et al. 2012; Gifford et al. 2014; Goupell et al. 2018a). A similar effect has been shown in vocoder simulations with interaural place mismatch (Goupell et al. 2018b). Thus, individuals or groups with nearly symmetrical performance, or with one ear that is only somewhat poorer than the other, gain more head-shadow benefit and therefore have more to lose with any reprogramming or changes made to the CI input signal or processing. This is an intuitive result – an ear is less likely to contribute much speech information if it is dramatically poorer than the other ear. It is important to note, however, that all of the participants in this study gained head-shadow benefit. The participants with the greatest interaural performance asymmetry (near 20 dB for one of the SSD-CI listeners) still gained more than 6 dB of head-shadow benefit. Thus, most CI listeners, at least those with some speech-recognition abilities in a CI ear, should gain some degree of head-shadow benefit. While BI-CI listeners tend to be more symmetrical and therefore more likely to receive this benefit, the current study and previous research have shown that even SSD-CI listeners receive significant head-shadow benefit. In fact, many SSD-CI listeners gain head-shadow benefit in the free field, and not just in the ideal case measured here, with average reported benefits of 2–5 dB (Arndt et al. 2011; Ma et al. 2015; Zeitler et al. 2015; Kitterick & Lucas 2016; Bernstein et al. 2017; Sheffield et al. 2017).

The current results provided only limited evidence in support of our second hypothesis (based on Culling et al. 2012; Fig. 2) that interaural performance asymmetry should affect exclusion cutoffs. In the NH simulations, interaural asymmetry was controlled by applying one of two levels of vocoder spectral resolution to the better-SNR ear; the LR simulation, which produced greater interaural asymmetry, yielded higher high-pass exclusion cutoffs. Group results likewise showed higher high-pass exclusion cutoffs for the SSD-CI listeners and SSD-Vocoder conditions than for the BI-CI listeners and BI-Vocoder conditions. In other words, the groups with more asymmetry tolerated greater removal of low-frequency information. There was, however, no significant correlation between interaural asymmetry and high-pass exclusion cutoff across the individual SSD-CI and BI-CI listeners. This result suggests that although there might be some relationship between interaural asymmetry and the high-pass exclusion cutoff, it might be difficult to predict at the individual level.

Possible Clinical Implications

The results of this study are preliminary – based on a small sample size and a limited set of test conditions – and further research is needed to confirm them before clinical guidelines can be suggested. The current data nevertheless suggest that, on average, reprogramming the CI in the poorer-hearing ear to match interaural places of stimulation could be accomplished without reducing head-shadow benefit, as long as the removed frequencies are limited to those below approximately 1236 Hz for SSD-CI listeners, or to those below approximately 886 Hz or above approximately 3814 Hz for BI-CI listeners. Imaging data suggest that the insertion depths of the most apical electrodes for fully inserted CI electrode arrays typically fall between 360° and 540°, corresponding to frequencies of stimulation between 400 and 900 Hz in a NH cochlea (Stakhovskaya et al. 2007; Landsberger et al. 2015). These data suggest that to match a fully inserted CI to a NH cochlea for frequency place of stimulation, frequencies below 400–900 Hz would have to be excluded. For BI-CI patients, matching the place of stimulation across the ears could alternatively require the exclusion of high frequencies. Low-pass filtering results indicated that frequencies above 3814 Hz could be excluded without affecting head-shadow benefit (Fig. 4C) or the bilateral SRT (Fig. 3G). Thus, there is hope that the CI in a poorer-hearing ear can be reprogrammed to reduce interaural place mismatch and potentially improve binaural benefits while minimally reducing head-shadow benefit.
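As a rough sketch of how such group-mean cutoffs might be applied, the hypothetical helper below checks a candidate exclusion band against the mean "safe" cutoffs reported in this study. This illustrates the decision logic only; the thresholds are study means from small samples, not clinical guidance:

```python
# Group-mean exclusion cutoffs from the current study (Hz). Frequencies
# below the high-pass cutoff, or above the low-pass cutoff, could on
# average be removed without reducing head-shadow benefit.
SAFE_HIGHPASS_CUTOFF_HZ = {"SSD-CI": 1236.0, "BI-CI": 886.0}
SAFE_LOWPASS_CUTOFF_HZ = {"BI-CI": 3814.0}

def exclusion_is_safe(group, low_edge_hz=None, high_edge_hz=None):
    """True if removing frequencies below low_edge_hz (and, for BI-CI,
    above high_edge_hz) stays within the group-mean safe range."""
    ok = True
    if low_edge_hz is not None:
        ok = ok and low_edge_hz <= SAFE_HIGHPASS_CUTOFF_HZ[group]
    if high_edge_hz is not None:
        ok = ok and high_edge_hz >= SAFE_LOWPASS_CUTOFF_HZ[group]
    return ok

# Matching a fully inserted array might require dropping everything
# below ~400-900 Hz; a 900-Hz lower edge is within the SSD-CI mean
# but exceeds the BI-CI mean of 886 Hz.
print(exclusion_is_safe("SSD-CI", low_edge_hz=900.0))  # True
print(exclusion_is_safe("BI-CI", low_edge_hz=900.0))   # False
```

Given the wide individual variability reported below, such group-level thresholds could at best serve as starting points, not per-patient guarantees.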

One important caveat is that these exclusion-cutoff recommendations are based on group-mean data from small groups. For individual listeners, there was a broad range of estimated exclusion cutoffs (359–1768 Hz for high-pass and 1055–6278 Hz for low-pass filtering). Unfortunately, it was not possible to predict the exclusion cutoff for an individual listener by measuring asymmetry in monaural speech-recognition performance, and it would be prohibitively time consuming to perform the filtering tests carried out here in a clinical setting to determine the allowable exclusion range for a given listener. An alternative approach would be to set limits on the acceptable exclusion cutoffs based on the group-specific average exclusion cutoffs, particularly if data from larger groups were collected. A more conservative approach would be to select limits based on the endpoints of the current samples: frequencies below 710 Hz were excluded from the CI ear without reducing the head-shadow benefit for all SSD-CI listeners, and frequencies below 488 Hz or above 5534 Hz were excluded from the CI in the poorer-hearing ear without reducing the head-shadow benefit for all but one of the BI-CI listeners. In summary, these results suggest promising possibilities for programming CIs to reduce interaural mismatch, improving binaural hearing benefits while maintaining good speech recognition, but further research with larger groups is needed before current clinical practice should be altered.

Study limitations

This study had several limitations that suggest caution in interpreting the results. The main limitation was that we used filtering to remove frequency regions without actually shifting the frequency-to-electrode allocation that would be required for reprogramming to match interaural places of stimulation. The motivation for this choice was to test whether frequency content could be excluded from the CI signal without affecting head-shadow benefit in acute testing. To truly ensure that reprogramming with shifted and excluded frequencies does not impact head-shadow benefit would require a longitudinal study with shifted frequency-allocation tables. It is hypothesized that even if an acute frequency shift were to reduce the intelligibility of speech information in the short term, CI listeners would adapt to the shifted map and ultimately regain speech intelligibility in the CI ear (Svirsky et al. 2004), especially if the shift were in the direction of the proper frequency-to-place alignment and the change were not too extreme. On the other hand, individuals who have become accustomed to a particular map after a long duration of use may reject a map that represents too substantial a change, and might instead benefit from a series of incremental shifts toward the target frequency allocation (e.g., Svirsky et al. 2015).
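The incremental-shift idea can be sketched as stepping a map edge from its current value toward a place-matched target in equal log-frequency increments across successive fittings. The function and frequency values below are hypothetical illustrations, not a published fitting procedure:

```python
def incremental_maps(current_hz, target_hz, n_steps):
    """Return n_steps + 1 frequency-allocation edge values interpolated
    on a logarithmic frequency axis, from current_hz to target_hz.
    Equal ratios between steps give perceptually similar-sized shifts."""
    ratio = target_hz / current_hz
    return [current_hz * ratio ** (k / n_steps) for k in range(n_steps + 1)]

# Hypothetical example: raising the lower analysis edge from 188 Hz
# toward a place-matched 700 Hz over four fitting sessions.
steps = incremental_maps(188.0, 700.0, 4)
print([round(f) for f in steps])  # [188, 261, 363, 504, 700]
```

Spacing the steps logarithmically (rather than linearly in Hz) keeps each shift roughly constant in cochlear place, which seems a reasonable assumption for a gradual-adaptation schedule.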

A second important limitation of the study was that the stimuli used for speech-recognition testing were closed-set sentences with an unknown frequency-importance function. The monaural data for the poorer-hearing ear indicated limited effects on SRTs when energy below approximately 1000 Hz was excluded, similar to the head-shadow benefit. It is possible that low frequencies are less important for the recognition of the closed set of OMT sentences than for open-set speech recognition. As discussed above (see Group head-shadow magnitudes), the SII includes material-specific frequency-importance functions that place differing emphasis on different frequency regions (ANSI 1997). Although an SII weighting function for the OMT sentences has not been established, it is possible that a different set of speech materials could have yielded different recommendations regarding the acceptable exclusion-cutoff range. The importance of different frequency regions might also be affected by the listening modality. For example, Sheffield et al. (2015) found that low-frequency acoustic hearing sensitivity in the ear contralateral to the CI became more important under audiovisual conditions.

Several other methodological choices could also have affected the results. For example, this study employed acoustic filtering instead of electrode deactivation. Filtering was used to better control the frequency bandwidth, given differences in frequency-allocation tables across participants, and to allow consistent cutoff frequencies across groups and simulations. Filtering also greatly increased the efficiency of the study by avoiding changes to individual frequency-allocation tables. It is possible, however, that electrode deactivation would have yielded different results. Filtering also required that the testing be done over headphones instead of in the free field; this limited any possible effects of the CI sound-processor directional microphones on the signal levels. Additionally, none of the listeners were given significant training with the filtered stimuli. It is possible that training would have improved performance, increased head-shadow benefit, reduced variability between listeners within groups, or altered the estimated exclusion cutoffs. It is also important to note that the current study used SSN, which is primarily an energetic masker; the impact of asymmetry and of low-frequency energy on head-shadow benefit might differ for informational or modulated maskers. Lastly, the current sample sizes were small by clinical research standards, particularly for the SSD-CI listeners, and larger samples would be needed to provide evidence for informing clinical practice.

Summary and conclusions

The primary conclusion of this study was that frequencies below 1236 Hz for the average SSD-CI listener, and below 886 Hz or above 3814 Hz for the average BI-CI listener could be excluded from a CI signal in a poorer-hearing ear without reducing head-shadow benefit. Considering the inter-subject variability in the allowable exclusion range, it is estimated that frequencies below 488 Hz or above 5534 Hz could be safely excluded without affecting the head-shadow benefit for most SSD-CI or BI-CI listeners. Asymmetry in monaural speech-recognition performance was not correlated with estimates of the exclusion cutoff for individual listeners. Asymmetry was, however, a significant predictor of the magnitude of the head-shadow benefit for an individual listener. Individual listeners, listener groups, and vocoder conditions with less asymmetry exhibited a larger head-shadow advantage. Despite this, the most asymmetric listeners in the study (i.e., the SSD-CI participants) still experienced a significant head-shadow advantage. Overall, these results suggest that it is possible to consider the two ears together as a single system, rather than treating each ear individually, with the goal of a more optimal bilateral or binaural programming solution. Although more data are required to generalize these results to a larger population and a variety of spatial configurations and speech tests, these results provide an initial guide regarding the limits of possible reprogramming of the CI in the poorer-hearing ear to reduce interaural frequency mismatch.

Acknowledgments

Research reported was supported by the National Institute On Deafness And Other Communication Disorders of the National Institutes of Health under Award Number R01DC015798 (J.G.W.B. and M.J.G.). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The identification of specific products or scientific instrumentation does not constitute endorsement or implied endorsement on the part of the author, DoD, or any component agency. The views expressed in this article are those of the authors and do not reflect the official policy of the Department of Army/Navy/Air Force, Department of Defense, or U.S. Government.

We thank Cochlear Ltd. and Med-El for providing equipment and technical support. We thank John Culling for providing head shadow modeling software. We thank Ginny Alexander for her assistance with subject recruitment, coordination, and payment of subjects at the University of Maryland-College Park, as well as Brian Simpson and Matt Ankrom for the recruitment, coordination, and payment of the subject panel at the Air Force Research Laboratory. Portions of these data were presented at the 2017 Midwinter Meeting of the Association for Research in Otolaryngology, Baltimore, MD, and the 2017 Conference on Implantable Auditory Prostheses, Tahoe City, CA.

Financial Disclosures/Conflicts of Interest:

This research was funded by Award Number R01DC015798 from the National Institute on Deafness and Other Communication Disorders of the National Institutes of Health to Drs. Bernstein and Goupell. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The identification of specific products or scientific instrumentation does not constitute endorsement or implied endorsement on the part of the author, DoD, or any component agency. The views expressed in this article are those of the authors and do not reflect the official policy of the Department of Army/Navy/Air Force, Department of Defense, or U.S. Government.

REFERENCES

  1. ANSI (1997). Methods for the calculation of the speech intelligibility index, S3.5. New York: American National Standards Institute.
  2. Arndt S, Aschendorff A, Laszig R, et al. (2011). Comparison of pseudobinaural hearing to real binaural hearing rehabilitation after cochlear implantation in patients with unilateral deafness and tinnitus. Otol Neurotol, 32, 39–47.
  3. Aronoff JM, Amano-Kusumoto A, Itoh M, et al. (2014). The effect of interleaved filters on normal hearing listeners’ perception of binaural cues. Ear Hear, 35, 708–710.
  4. Aronoff JM, Freed DJ, Fisher LM, et al. (2011). The effect of different cochlear implant microphones on acoustic hearing individuals’ binaural benefits for speech perception in noise. Ear Hear, 32, 468–484.
  5. Aronoff JM, Shayman C, Prasad A, et al. (2015). Unilateral spectral and temporal compression reduces binaural fusion for normal hearing listeners with cochlear implant simulations. Hear Res, 320, 24–29.
  6. Batra R, Yin TCT (2004). Cross correlation by neurons of the medial superior olive: A reexamination. J Assoc Res Otolaryngol, 5, 238–252.
  7. Bernstein JGW, Goupell MJ, Schuchman GI, et al. (2016). Having two ears facilitates the perceptual separation of concurrent talkers for bilateral and single-sided deaf cochlear implantees. Ear Hear, 37, 289–302.
  8. Bernstein JGW, Schuchman GI, Rivera AL (2017). Head shadow and binaural squelch for unilaterally deaf cochlear implantees. Otol Neurotol, 38, e195–e202.
  9. Bernstein JGW, Stakhovskaya OA, Schuchman GI, et al. (2018). Interaural time-difference discrimination as a measure of place of stimulation for cochlear-implant users with single-sided deafness. Trends Hear, 22, 233121651876551.
  10. Culling JF, Jelfs S, Talbert A, et al. (2012). The benefit of bilateral versus unilateral cochlear implantation to speech intelligibility in noise. Ear Hear, 33, 673–682.
  11. Durlach NI (1972). Binaural signal detection: Equalization and cancellation theory. Found Mod Audit Theory, 2, 369–462.
  12. Feddersen WE, Sandel TT, Teas DC, et al. (1957). Localization of high-frequency tones. J Acoust Soc Am, 29, 988–991.
  13. Freyman RL, Balakrishnan U, Helfer KS (2001). Spatial release from informational masking in speech recognition. J Acoust Soc Am, 109, 2112–2122.
  14. Freyman RL, Balakrishnan U, Helfer KS (2004). Effect of number of masking talkers and auditory priming on informational masking in speech recognition. J Acoust Soc Am, 115, 2246–2256.
  15. Gallun FJ, Mason CR, Kidd G Jr (2005). Binaural release from informational masking in a speech identification task. J Acoust Soc Am, 118, 1614–1625.
  16. Gifford RH, Dorman M, Sheffield SW, et al. (2014). Availability of binaural cues for bilateral implant recipients and bimodal listeners with and without preserved hearing in the implanted ear. Audiol Neuro-Otol, 19, 57–71.
  17. Goupell MJ, Gaskins CR, Shader MJ, et al. (2017). Age-related differences in the processing of temporal envelope and spectral cues in a speech segment. Ear Hear, 38, e335–e342.
  18. Goupell MJ, Stakhovskaya OA, Bernstein JGW (2018a). Contralateral interference caused by binaurally presented competing speech in adult bilateral cochlear-implant users. Ear Hear, 39, 110–123.
  19. Goupell MJ, Stoelb C, Kan A, et al. (2013). Effect of mismatched place-of-stimulation on the salience of binaural cues in conditions that simulate bilateral cochlear-implant listening. J Acoust Soc Am, 133, 2272–2287.
  20. Goupell MJ, Stoelb C, Kan A, et al. (2018b). The effect of simulated interaural frequency mismatch on speech understanding and spatial release from masking. Ear Hear, 39, 895–905.
  21. Grange JA, Culling JF (2016). Head orientation benefit to speech intelligibility in noise for cochlear implant users and in realistic listening conditions. J Acoust Soc Am, 140, 4061–4072.
  22. Grantham DW, Ashmead DH, Ricketts TA, et al. (2008). Interaural time and level difference thresholds for acoustically presented signals in post-lingually deafened adults fitted with bilateral cochlear implants using CIS+ processing. Ear Hear, 29, 33–44.
  23. Hawley ML, Litovsky RY, Culling JF (2004). The benefit of binaural hearing in a cocktail party: Effect of location and type of interferer. J Acoust Soc Am, 115, 833–843.
  24. Joris PX, Smith PH, Yin TCT (1998). Coincidence detection in the auditory system. Neuron, 21, 1235–1238.
  25. Kan A, Stoelb C, Litovsky RY, et al. (2013). Effect of mismatched place-of-stimulation on binaural fusion and lateralization in bilateral cochlear-implant users. J Acoust Soc Am, 134, 2272–2287.
  26. Kayser H, Ewert SD, Anemüller J, et al. (2009). Database of multichannel in-ear and behind-the-ear head-related and binaural room impulse responses. EURASIP J Adv Signal Process, 2009, 298605.
  27. Kitterick PT, Lucas L (2016). Predicting speech perception outcomes following cochlear implantation in adults with unilateral deafness or highly asymmetric hearing loss. Cochlear Implants Int, 17, 51–54.
  28. Kollmeier B, Warzybok A, Hochmuth S, et al. (2015). The multilingual matrix test: Principles, applications, and comparison across languages: A review. Int J Audiol, 54(Suppl 2), 3–16.
  29. Kollmeier B, Wesselkamp M (1997). Development and evaluation of a German sentence test for objective and subjective speech intelligibility assessment. J Acoust Soc Am, 102, 2412–2421.
  30. Kuhn GF (1977). Model for the interaural time differences in the azimuthal plane. J Acoust Soc Am, 62, 157–167.
  31. Landsberger DM, Svrakic M, Roland JT Jr, Svirsky M (2015). The relationship between insertion angles, default frequency allocations, and spiral ganglion place pitch in cochlear implants. Ear Hear, 36, e207–e213.
  32. Laske RD, Veraguth D, Dillier N, et al. (2009). Subjective and objective results after bilateral cochlear implantation in adults. Otol Neurotol, 30, 313–318.
  33. Laszig R, Aschendorff A, Stecker M, et al. (2004). Benefits of bilateral electrical stimulation with the nucleus cochlear implant in adults: 6-month postoperative results. Otol Neurotol, 25, 958–968.
  34. Levitt H, Rabiner LR (1967). Binaural release from masking for speech and gain in intelligibility. J Acoust Soc Am, 42, 601–608.
  35. Litovsky R, Parkinson A, Arcaroli J, et al. (2006). Simultaneous bilateral cochlear implantation in adults: A multicenter clinical study. Ear Hear, 27, 714–730.
  36. Litovsky RY, Parkinson A, Arcaroli J (2009). Spatial hearing and speech intelligibility in bilateral cochlear implant users. Ear Hear, 30, 419–431.
  37. Loizou PC, Hu Y, Litovsky R, et al. (2009). Speech recognition by bilateral cochlear implant users in a cocktail-party setting. J Acoust Soc Am, 125, 372–383.
  38. Ma N, Morris S, Kitterick P (2015). Benefits to speech perception in noise from the binaural integration of electric and acoustic signals in unilateral deafness. Ear Hear, 37, 248–259.
  39. Macaulay EJ, Hartmann WM, Rakerd B (2010). The acoustical bright spot and mislocalization of tones by human listeners. J Acoust Soc Am, 127, 1440–1449.
  40. Schvartz KC, Chatterjee M, Gordon-Salant S (2008). Recognition of spectrally degraded phonemes by younger, middle-aged, and older normal-hearing listeners. J Acoust Soc Am, 124, 3972–3988.
  41. Sheffield BM, Schuchman G, Bernstein JGW (2015). Trimodal speech perception: How residual acoustic hearing supplements cochlear-implant consonant recognition in the presence of visual cues. Ear Hear, 36, e99–e112.
  42. Sheffield BM, Schuchman G, Bernstein JGW (2017). Pre- and postoperative binaural unmasking for bimodal cochlear implant listeners. Ear Hear, 38, 554–567.
  43. Sheldon S, Pichora-Fuller MK, Schneider BA (2008a). Effect of age, presentation method, and learning on identification of noise-vocoded words. J Acoust Soc Am, 123, 476–488.
  44. Sheldon S, Pichora-Fuller MK, Schneider BA (2008b). Priming and sentence context support listening to noise-vocoded speech by younger and older adults. J Acoust Soc Am, 123, 489–499.
  45. Staisloff HE, Lee DH, Aronoff JM (2016). Perceptually aligning apical frequency regions leads to more binaural fusion of speech in a cochlear implant simulation. Hear Res, 337, 59–64.
  46. Stakhovskaya O, Sridhar D, Bonham BH, et al. (2007). Frequency map for the human cochlear spiral ganglion: Implications for cochlear implants. J Assoc Res Otolaryngol, 8, 220–233.
  47. Svirsky MA, Silveira A, Neuburger H, et al. (2004). Long-term auditory adaptation to a modified peripheral frequency map. Acta Otolaryngol, 124, 381–386.
  48. Svirsky MA, Talavage TM, Sinha S, et al. (2015). Gradual adaptation to auditory frequency mismatch. Hear Res, 322, 163–170.
  49. Van Deun L, Van Wieringen A, Van den Bogaert T, et al. (2009). Sound localization, sound lateralization, and binaural masking level differences in young children with normal hearing. Ear Hear, 30, 178–190.
  50. Wess JM, Brungart DS, Bernstein JGW (2017). The effect of interaural mismatches on contralateral unmasking with single-sided vocoders. Ear Hear, 38, 374–386.
  51. Yoon Y, Liu A, Fu Q-J (2011). Binaural benefit for speech recognition with spectral mismatch across ears in simulated electric hearing. J Acoust Soc Am, 130, EL94–EL100.
  52. Zeitler DM, Dorman MF, Natale SJ, et al. (2015). Sound source localization and speech understanding in complex listening environments by single-sided deaf listeners after cochlear implantation. Otol Neurotol, 36, 1467–1471.