Abstract
This paper reviews the current literature on the psychophysical properties of low-frequency hearing, both before and after implantation, with a focus on frequency selectivity, nonlinear cochlear processing, and speech perception in temporally modulated maskers for bimodal listeners as well as for patients with hearing preservation in the implanted ear who receive combined electric and acoustic stimulation (EAS). We review our own work and that of others, and report previously unpublished results on speech perception in steady-state and temporally fluctuating maskers, the degree of masking release, and frequency resolution for 11 bimodal patients, 6 hearing preservation patients, and 5 control subjects with normal hearing. The results demonstrate that a small masking release is possible with acoustic hearing in just one ear, and that the degree of masking release is correlated with the low-frequency pure-tone average in the non-implanted ear; frequency selectivity, as defined by the width of the auditory filter, was not correlated with the degree of masking release. We also discuss the clinical utility of hearing preservation in the implanted ear for improving speech perception in complex listening environments, as well as directions for future research.
Keywords: electric and acoustic stimulation (EAS), hearing preservation, cochlear implant, masking release
Background
To create combined electric and acoustic stimulation (EAS), surgeons insert an electrode array, ranging in length from 10 to 20 mm, into the scala tympani with the aim of preserving acoustic hearing apical to the tip of the array. Successful EAS surgery allows for electric stimulation of basal neural tissue without damaging the apical cochlear structures that transmit low-frequency acoustic information (e.g., [1–13]). For the majority of patients, the mean postoperative threshold elevation ranges from 10 to 20 dB, depending on the electrode array, the nature of the surgical technique, and the experience of the surgeon [6–9,11,12,14–16]. It is expected, however, that even with a short array some patients will lose all or nearly all hearing postoperatively. Gantz et al. [10] reported that of 87 patients implanted with the Nucleus Hybrid 10-mm electrode (i.e., the S8 implant), 8 experienced a total loss of hearing, occurring anywhere from immediately after implantation to 24 months after activation. Thus, from that sample of 87 subjects, 91% retained some level of preserved hearing.
Combined EAS has been shown to improve speech understanding in quiet and in noise beyond that achieved by aided acoustic or electric hearing alone (e.g., [7–9,11,17,18–20]). This is the case both for true EAS – with acoustic hearing in the implanted and non-implanted ears – and for bimodal hearing, with acoustic hearing in only the non-implanted ear. One might expect that greater residual hearing in the implanted and/or the non-implanted ear would be associated with greater benefit from combined EAS. A number of studies, however, have examined this issue and found no correlation between audiometric threshold – either pre- or postoperatively – and speech perception performance with EAS (e.g., [21–23]). Additionally, a number of researchers have shown that comparable ranges and degrees of residual low-frequency hearing do not yield comparable benefit from combined EAS (e.g., [12,24,25]). These data suggest that the pure-tone audiogram is neither an appropriate nor a useful tool for identifying patients who would achieve high levels of speech perception with EAS. Clearly, an evaluation of auditory processing beyond tonal detection may prove more useful in understanding the synergism associated with combined EAS.
This paper will describe the psychophysical properties of low-frequency residual hearing in both pre- and post-operative EAS patients. Our aim is to improve our understanding of (i) the effect of an intracochlear foreign body – the implanted electrode array – on low-frequency auditory processing, (ii) the potential for improved speech recognition in quiet and noise with EAS, and (iii) the potential for improved auditory processing afforded by the preservation of binaural acoustic hearing.
Speech reception thresholds in steady-state and fluctuating maskers
It is well known that speech recognition for normal-hearing listeners is much better in the presence of a temporally fluctuating masker than in a steady-state masker. The difference in masking effectiveness between a steady-state and a temporally fluctuating masker is generally referred to as masking release. Masking release has been thought to be related to both spectral and temporal resolution [26,27]: listeners are able to derive benefit from the momentary improvements in the signal-to-noise ratio (SNR) occurring in the temporal dips of the masker, which is often referred to as “listening in the dips.” For listeners with sensorineural hearing loss, the ability to listen in the dips is considerably reduced or absent (e.g., [26,28]).
Cochlear implant recipients have great difficulty understanding speech in noisy environments. Given that many real-world noises have temporal fluctuations, the ability to listen in the dips would be extremely beneficial for cochlear implant users. Nelson et al. [29] examined sentence recognition in steady-state and modulated maskers for nine adult cochlear implant recipients. They found that the cochlear implant users showed no masking release at any of the modulation rates tested, in the range of 1 to 32 Hz. Given that implant recipients have been shown to demonstrate rapid recovery from forward masking (e.g., [30,31]), Nelson et al. [29] postulated that abnormal forward masking was not likely the responsible factor. In a follow-up study, Nelson and Jin [32] examined sentence recognition with steady-state and gated maskers for subjects with normal hearing (listening to both unprocessed speech and cochlear implant simulations) and for cochlear implant users. For the normal-hearing subjects listening to cochlear implant simulations, masking release improved significantly as the number of spectral channels was increased from 4 to 12. The implant patients, however, once again demonstrated no masking release. Nelson and Jin [32] thus suggested that spectral resolution is a critical variable for listening in the dips.
In a similar study, Fu and Nogaki [33] also obtained speech reception threshold (SRT) estimates for listeners with normal hearing as well as 10 cochlear implant recipients in the presence of steady-state noise as well as square-wave gated noise with various modulation rates. The cochlear implant recipients’ SRTs were essentially equivalent across the steady-state noise and the gated noise for all modulation rates. Thus they failed to demonstrate masking release. The subjects with normal hearing listening to cochlear implant simulations demonstrated increased masking release with (i) an increased number of spectral channels (4, 8, and 16), and (ii) an increased carrier filter slope from −6 dB/octave to −24 dB/octave. Fu and Nogaki [33] concluded that poor spectral resolution in combination with channel interaction contribute to the difficulty cochlear implant recipients experience in background noise – particularly that of a temporally fluctuating nature.
Cochlear implant recipients are known to have poor spectral resolution due to a relatively small number of intracochlear electrodes, channel interaction, and various degrees of spiral ganglion cell survival. Thus, it is not unexpected that cochlear implant recipients would be unable to listen in the dips. One might hypothesize, however, that bimodal and EAS listeners – with residual acoustic hearing in either one or both ears – may have considerably better spectral resolution, even if only for lower frequencies. These listeners, therefore, may be able to listen in the dips of a fluctuating background noise and demonstrate a release from masking.
Turner et al. [34] obtained SRTs for spondees in the presence of steady-state and two-talker backgrounds for 15 subjects with normal hearing, 20 conventional (long-electrode) implant recipients, and 3 EAS subjects (Nucleus Hybrid 10-mm, i.e., S8, recipients). Not unexpectedly, the normal-hearing listeners performed significantly better than both the conventional long-electrode and EAS groups. The conventional CI subjects and the EAS subjects differed significantly for the two-talker background: the EAS subjects exhibited masking release whereas the conventional CI subjects did not. Turner et al. [34] suggested that the residual acoustic spectral resolution available to the EAS subjects – albeit poorer than that of normal-hearing listeners – was responsible for this modest masking release.
In a follow-up study, Turner et al. [35] obtained SRTs for spondees in the presence of a two-talker background for 20 conventional CI subjects and 19 EAS subjects (Nucleus Hybrid 10-mm). The EAS subjects demonstrated a significant 9-dB advantage over the conventional CI subjects – further evidence that residual spectral resolution in low-frequency acoustic hearing can afford significantly better speech recognition in a fluctuating background. A potential confound in these results, however, was the choice of spondees as the target stimuli. Van Tasell and Yanz [36] showed that normal-hearing subjects could achieve 100% correct recognition of spondees low-pass filtered at 400 Hz. Thus, one could argue that the experimental conditions chosen by Turner et al. [34,35] were those with the greatest possibility of demonstrating benefit for EAS listeners with normal or near-normal low-frequency hearing.
It is important to understand whether residual low-frequency hearing for EAS subjects or bimodal subjects affords the spectral resolution necessary to obtain masking release in the presence of a temporally fluctuating background. Thus we have completed an experiment similar to that described by Turner et al. [34,35]; however, instead of using spondees, we have assessed open-set sentence recognition in the presence of a temporally fluctuating masker. A secondary question is whether residual acoustic hearing in a single ear – as is the case with bimodal listeners – is sufficient. In other words, might EAS listeners with binaural acoustic hearing outperform bimodal listeners in the presence of temporally fluctuating maskers?
Current study
To answer this question we have obtained SRTs in the presence of steady-state and temporally fluctuating maskers for both EAS and bimodal CI patients. Eleven bimodal subjects were implanted with a conventional long electrode array in one ear and had aided acoustic hearing in the contralateral ear. All 11 bimodal subjects met preoperative audiologic criteria for inclusion in the North American clinical trial of Med-El EAS. The 11 bimodal subjects ranged in age from 47 to 85 years with a mean of 69 years. Six EAS subjects were implanted with the Nucleus 10-mm Hybrid implant (i.e., S8 implant) with 6 electrical contacts. The 6 EAS subjects ranged in age from 42 to 76 years with a mean of 55 years. An additional 5 subjects had normal hearing and were included only as a reference for young normal-hearing performance. The normal-hearing subjects ranged in age from 21 to 37 years with a mean of 27 years. All 22 subjects were native speakers of American English. Information regarding the cochlear implant subjects’ age, implant type, processor design, and duration of implant use at the time of experimentation is provided in Table 1. Audiometric thresholds for the non-implanted ears are shown for the bimodal and EAS subjects in Figure 1. Figure 2 displays the pre- and post-implant thresholds for the implanted ears of the six EAS subjects.
Table 1.
Bimodal (BMD) and EAS subject data: age, months experience with cochlear implant, and implant/processor type
| Subject | Age at testing | Months experience with CI | Implant/processor type |
|---|---|---|---|
| BMD1 | 84 | 7 | Advanced Bionics HR90K, Auria |
| BMD2 | 77 | 6 | Cochlear CI24RE, Freedom |
| BMD3 | 75 | 7 | Advanced Bionics HR90K, Auria |
| BMD4 | 78 | 8 | Advanced Bionics HR90K, Auria |
| BMD5 | 80 | 5 | Cochlear CI24RE, Freedom |
| BMD6 | 85 | 6 | Advanced Bionics HR90K, Auria |
| BMD7 | 47 | 39 | Cochlear CI24RCA, 3G |
| BMD8 | 55 | 5 | Cochlear CI24RE, Freedom |
| BMD9 | 67 | 18 | Advanced Bionics HR90K, Auria |
| BMD10 | 55 | 25 | Med-El Combi40+, Tempo+ |
| BMD11 | 52 | 26 | Cochlear CI24RCA, Freedom |
| EAS1 | 42 | 12 | Cochlear Hybrid 10 mm (S8), Freedom |
| EAS2 | 44 | 10 | Cochlear Hybrid 10 mm (S8), Freedom |
| EAS3 | 76 | 6 | Cochlear Hybrid 10 mm (S8), Freedom |
| EAS4 | 41 | 14 | Cochlear Hybrid 10 mm (S8), Freedom |
| EAS5 | 74 | 6 | Cochlear Hybrid 10 mm (S8), Freedom |
| EAS6 | 51 | 10 | Cochlear Hybrid 10 mm (S8), Freedom |
| MEAN | 63.7 | 12.4 | |
| STDEV | 15.8 | 9.5 | |
Figure 1.

Audiometric thresholds for the non-implanted ears of bimodal and EAS patients.
Figure 2.
Pre- and post-implant audiograms for the implanted ear of EAS patients.
Speech recognition was measured using the sentences from the Hearing in Noise Test (HINT; [37]). The sentences were routed to a single loudspeaker placed at 0° azimuth at a distance of 1 m from the subject. For the steady-state (SS) noise, the sentences were presented in a broadband noise that was shaped to match the long-term spectrum of the HINT sentences (i.e., the same SS noise provided for use on the HINT CD). For the temporally fluctuating noise, the broadband noise was modulated with a 10-Hz square wave with a modulation depth of 100%. This noise is referred to as the square-wave (SQ) noise. The background noise was also presented to the same loudspeaker through a second channel of the digital signal processor.
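The square-wave gating described above is straightforward to reproduce. Below is a minimal Python sketch, assuming a 44.1-kHz sampling rate and omitting the spectral shaping to the HINT long-term spectrum; the function name and parameters are ours, introduced only for illustration.

```python
import numpy as np

def square_wave_masker(noise, fs_hz=44100, mod_rate_hz=10.0, depth=1.0):
    """Apply square-wave amplitude modulation to a noise masker.

    depth=1.0 corresponds to 100% modulation depth: the noise is fully
    gated on and off at the modulation rate (10 Hz in the study).
    """
    t = np.arange(len(noise)) / fs_hz
    # Square wave alternating between 1 and (1 - depth)
    gate = np.where(np.sin(2 * np.pi * mod_rate_hz * t) >= 0, 1.0, 1.0 - depth)
    return noise * gate

rng = np.random.default_rng(0)
ss_noise = rng.standard_normal(44100)    # 1 s of broadband noise (unshaped)
sq_noise = square_wave_masker(ss_noise)  # 10-Hz square-wave gated version
```

With 100% depth, roughly half of each modulation cycle is silent, creating the temporal dips that a listener might exploit.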
The adaptive HINT procedure [37] was used to determine the SNR required to achieve 50% correct recognition using a one-down, one-up stepping rule (e.g., [38]). The noise level was fixed at 70 dB SPL and the sentence level was varied adaptively. For each run, two 10-sentence lists were concatenated and presented in sequence. The last six presentation levels – those for sentences 15 through 20 – were averaged to provide an SRT. Two 20-sentence runs were presented for each listening condition, and the mean of the two SRT estimates was taken as the single SRT, in dB SNR, for that condition. Prior to data collection, every subject completed a practice run of 20 sentences to become familiar with the task. The sentence lists and the condition for each run were randomly selected to counterbalance order effects.
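The one-down/one-up track can be sketched as follows. This is a simplified illustration rather than the exact HINT implementation (the published procedure is usually described with a larger step size for the first few sentences, omitted here); the `respond_correct` callback, starting SNR, and fixed 2-dB step are assumptions for demonstration.

```python
def hint_srt(respond_correct, start_snr=0.0, step_db=2.0, n_sentences=20):
    """One-down/one-up adaptive track approximating the procedure above.

    `respond_correct(snr)` stands in for scoring the listener's repetition
    of one sentence at the given SNR (dB).  The SRT is the mean of the
    presentation levels for sentences 15-20.
    """
    snr = start_snr
    levels = []
    for _ in range(n_sentences):
        levels.append(snr)
        # Correct response -> lower the sentence level (harder);
        # incorrect response -> raise it (easier).
        snr = snr - step_db if respond_correct(snr) else snr + step_db
    return sum(levels[14:20]) / 6.0

# Toy deterministic listener that repeats sentences correctly above -5 dB SNR:
print(hint_srt(lambda snr: snr > -5))  # 50%-correct point: -5.0 dB SNR
```

With a real listener, `respond_correct` would wrap the scoring of the spoken response, and the track oscillates around the 50%-correct point.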
Results
The results for the bimodal and EAS listeners are shown in Figures 3 and 4, respectively. In both figures, the mean SRTs for the subjects with normal hearing are also shown for reference. Note first the results for the normal-hearing listeners. Mean SRTs for both the SS and SQ maskers occurred at negative SNRs. Also notice the large difference between the SRTs obtained for the SS and SQ maskers for the normal-hearing listeners – the masking release. On average, the normal-hearing listeners demonstrated a masking release of 14.8 dB, which is consistent with previous studies of normal-hearing listeners (e.g., [26,33]).
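For clarity, the masking release reported throughout is simply the difference between the two SRTs. A minimal sketch, using hypothetical values rather than the study data:

```python
def masking_release(srt_ss_db, srt_sq_db):
    """Masking release in dB: the SRT in steady-state (SS) noise minus the
    SRT in the fluctuating (SQ) noise.  A positive value means the listener
    benefits from the temporal dips of the masker."""
    return round(srt_ss_db - srt_sq_db, 1)

# Hypothetical SRTs (dB SNR) chosen only to mirror the ~14.8-dB group mean
# reported above for normal-hearing listeners:
print(masking_release(-4.0, -18.8))  # 14.8
```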
Figure 3.

Speech reception thresholds for bimodal patients in 4 listening conditions. Data for both steady-state noise and 10 Hz square-wave modulated noise masking conditions are plotted.
Figure 4.

Speech reception thresholds for EAS patients in 6 listening conditions. Data for both steady-state noise and 10 Hz square-wave modulated noise masking conditions are plotted.
As expected from previous research, both the bimodal and EAS subjects required a more favorable SNR than the normal-hearing listeners to achieve 50% correct. A two-way, repeated-measures analysis of variance (ANOVA) was completed with masker type (SS and SQ) and subject group (bimodal, EAS, and normal hearing) as the independent variables. This analysis revealed a significant effect of subject group [F(2, 19)=12.6, p<0.001], a significant effect of masker type [F(1, 19)=121.0, p<0.001], and a significant interaction [F(2, 19)=55.0, p<0.001]. Post hoc tests revealed that for the EAS subjects there was no difference between the SRTs obtained with the SS and SQ maskers (p=0.62). For the bimodal subjects, in contrast, there was a significant difference between the SRTs obtained with the SS and SQ maskers (p=0.001). Compared with the size of the effect for the normal-hearing listeners (15.0 dB), the effect for the bimodal subjects was very small (2.9 dB) and driven primarily by 3 of the 11 bimodal subjects. In Figure 5 we plot masking release for bimodal and EAS patients as a function of the average threshold at 125, 250, and 500 Hz – the low-frequency (LF) pure-tone average (PTA). The interesting, and puzzling, observation is that patients with the poorest LF PTA tend to show the greatest release from masking. This is not consistent with the data presented by Turner et al. [35], whose EAS subjects with the lowest (i.e., best) LF PTA exhibited the lowest (i.e., best) SRTs for spondees in noise. They did not, however, obtain estimates of release from masking.
Figure 5.

Masking release (in dB) as a function of the low-frequency pure tone average.
The small but significant level of masking release found for the bimodal patients suggests that only one partially hearing ear is necessary to achieve a minimal release from masking. Given the small magnitude of this effect, however, it is unclear how useful ‘listening in the dips’ would be for patients in real-world listening situations.
Relationship between spectral resolution and masking release
In a previous study we described pre- and post-implant spectral resolution for 5 EAS subjects [20]. Most subjects had some frequency selectivity pre-implant and some subjects retained some degree of frequency selectivity post-implant. Given the known relationship between audiometric threshold and the width of the auditory filter (e.g., [39,40]), one might hypothesize that those with better hearing preservation would demonstrate better spectral resolution and hence may be those subjects most likely to demonstrate masking release.
We have obtained estimates of frequency selectivity in the non-implanted ear for 6 of the bimodal subjects (BMD1, BMD2, BMD3, BMD4, BMD5, and BMD9). We obtained estimates of frequency selectivity in the implanted ear pre- and post-operatively for 4 of the EAS subjects (EAS1, EAS2, EAS3, and EAS4). Since the pre-operative audiograms for the EAS subjects were symmetrical across ears, the pre-implant estimate of frequency selectivity in the implanted ear could be considered a close approximation to that for the non-implanted ear.
Estimates of frequency selectivity were obtained by deriving auditory filter (AF) shapes using the notched-noise method [41] in a simultaneous-masking paradigm. The noise bands – each with a bandwidth of 0.4 times the signal frequency, fs – were placed symmetrically or asymmetrically about the 500-Hz signal [42]. The signal was fixed at a level of 10 dB sensation level (SL), and the masker level was varied. The maximum masker spectrum level was set to 43 dB SPL. During a run, the threshold track was permitted to reach this ceiling; however, if the tracking procedure called for a higher level, that run was discarded. If two runs for a particular condition were discarded on this basis, it was concluded that a threshold for that condition could not be obtained. This occurred for 4 of the bimodal subjects in the non-implanted ear (BMD2, BMD3, BMD4, and BMD5) and for 1 of the EAS subjects both pre- and post-operatively (EAS1), most likely because these 5 subjects had the highest quiet thresholds at 500 Hz (see Table 2): the 10-dB-SL signal remained audible even at the maximum permissible masker spectrum level.
Table 2.
Auditory thresholds for a 200-ms pure tone at 500 Hz for the bimodal (BMD) and EAS subjects. The equivalent rectangular bandwidth (ERB) of the auditory filter, in hertz, is shown for the non-implanted ear of two BMD subjects and for the implanted ear (pre- and post-implant) of the EAS subjects
| BMD subjects | 500-Hz threshold (dB SPL) | ERB (Hz) | EAS subjects | PRE-implant 500-Hz threshold (dB SPL) | PRE-implant ERB (Hz) | POST-implant 500-Hz threshold (dB SPL) | POST-implant ERB (Hz) |
|---|---|---|---|---|---|---|---|
| BMD1 | 42 | 332.3 | EAS1 | 62 | N/A | 66 | N/A |
| BMD2 | 60 | N/A | EAS2 | 36 | 338.4 | 49 | 360.0 |
| BMD3 | 52 | N/A | EAS3 | 26 | 193.4 | 27 | 202.7 |
| BMD4 | 76 | N/A | EAS4 | 23 | 102.9 | 36 | 129.2 |
| BMD5 | 55 | N/A | |||||
| BMD9 | 46 | 234.6 | |||||
| MEAN * | 44.0 | 283.5 | MEAN * | 36.8 | 211.6 | 44.5 | 230.6 |
| STDEV * | 2.82 | 69.1 | STDEV * | 17.7 | 118.8 | 16.9 | 117.9 |
The mean and standard deviation have been calculated for those subjects demonstrating frequency selectivity at 500 Hz (BMD1, BMD9, EAS2, EAS3, and EAS4).
Subjects were provided with a minimum of 2 hours’ training on simultaneous masking prior to data collection. The masker and signal were 400 and 200 ms in duration, respectively. All thresholds were obtained using a two-down, one-up tracking rule to track 70.7% correct performance (Levitt, 1971) using a 3-interval forced choice (3IFC) paradigm.
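The two-down/one-up rule and the ceiling-discard criterion described above can be sketched as follows. This is an illustrative simplification: the step size, starting level, reversal count, and the simulated-listener callback are all assumptions, and a real 3IFC trial would present three observation intervals to the listener.

```python
def notched_noise_track(correct_at, start_level_db=20.0, step_db=2.0,
                        ceiling_db=43.0, n_reversals=8):
    """Two-down/one-up staircase on the masker spectrum level, converging
    on 70.7% correct (Levitt, 1971).  `correct_at(level)` stands in for one
    3IFC trial at that masker level.  Returns (threshold, discarded): the
    run is discarded when the rule calls for a level above the ceiling,
    mirroring the 43-dB-SPL limit described above.
    """
    level = start_level_db
    consecutive_correct = 0
    reversals = []
    going_up = None                         # direction of the last change
    while len(reversals) < n_reversals:
        if correct_at(level):
            consecutive_correct += 1
            if consecutive_correct == 2:    # two correct: raise the masker
                consecutive_correct = 0
                if going_up is False:
                    reversals.append(level)
                going_up = True
                level += step_db
        else:                               # one incorrect: lower the masker
            consecutive_correct = 0
            if going_up is True:
                reversals.append(level)
            going_up = False
            level -= step_db
        if level > ceiling_db:
            return None, True               # run discarded
    # Threshold: mean of the last four reversal levels.
    return sum(reversals[-4:]) / 4.0, False
```

A toy listener who detects the signal whenever the masker is below 30 dB yields a threshold near 30 dB; a listener who is never masked drives the track into the ceiling, and the run is discarded, as happened for the five subjects noted above.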
Results
The masker spectrum levels at threshold were used to derive filter shapes using a roex(p, k) model [41]. Estimates of the equivalent rectangular bandwidth (ERB) [43] of the AF are shown in Table 2. As Table 2 shows, both the EAS and bimodal subjects demonstrated considerable intersubject variation in AF width (e.g., [39,40]). Only one subject (EAS4) demonstrated normal or near-normal frequency selectivity in the implanted ear both pre- and post-operatively. Four subjects (BMD1, BMD9, EAS2, and EAS3) exhibited some degree of frequency selectivity at 500 Hz, though considerably poorer than normal. The remaining 5 subjects (EAS1, BMD2, BMD3, BMD4, and BMD5) were unable to complete the task, which is suggestive of little or no frequency selectivity at 500 Hz.
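For readers unfamiliar with the roex analysis, the sketch below shows the symmetric roex(p) weighting function and the ERB it implies. It is a simplification of the roex(p, k) fitting actually used, which also estimates detection efficiency and allows asymmetric filter slopes; the numerical values are illustrative only.

```python
import math

def roex_weight(f_hz, fc_hz, p):
    """Symmetric rounded-exponential roex(p) filter weight:
    W(g) = (1 + p*g) * exp(-p*g), where g = |f - fc| / fc."""
    g = abs(f_hz - fc_hz) / fc_hz
    return (1.0 + p * g) * math.exp(-p * g)

def erb_of_roex(fc_hz, p):
    """ERB of the symmetric roex(p) filter.  Integrating W(g) over
    frequency gives an equivalent rectangular bandwidth of 4*fc/p hertz,
    so a larger slope parameter p means a sharper filter."""
    return 4.0 * fc_hz / p

# Illustrative: p = 10 at 500 Hz gives an ERB of 200 Hz, comparable to the
# broadened filters in Table 2 (a normal ERB at 500 Hz is roughly 80 Hz).
print(erb_of_roex(500.0, 10.0))  # 200.0
```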
For the EAS subjects, all who demonstrated frequency selectivity pre-operatively also exhibited frequency selectivity post-operatively, though the AF tended to be broader following surgery. On average, the width of the AF at 500 Hz increased by 19 Hz. This is not an unexpected outcome given that subjects EAS2 and EAS4 lost some hearing as a result of the surgery. For subject EAS3, the pre- and post-operative quiet thresholds and estimates of frequency selectivity at 500 Hz were nearly identical. In the implanted ear, 2 of the EAS subjects (EAS2 and EAS3) demonstrated some frequency selectivity postoperatively, though with ERBs considerably wider than normal. The fourth EAS subject (EAS1) demonstrated no frequency selectivity in the implanted ear either pre- or post-operatively.
These results are consistent with previous estimates of frequency selectivity for listeners with similar degrees of low-frequency hearing loss (e.g., [20,39]) and thus are not unique in the literature. What is unique, however, is the comparison between frequency selectivity and the SRT for the SS and SQ maskers, as well as the degree of masking release. A Pearson product-moment correlation analysis was completed for the variables ERB (Hz), low-frequency PTA (dB HL), SS SRT (dB SNR), SQ SRT (dB SNR), and masking release. As observed for the full 17-subject sample, the low-frequency PTA in the non-implanted ear was significantly correlated with the degree of masking release in the bimodal condition for this subset of 10 subjects (r=0.60, p=0.036). The SRTs for the SS and SQ maskers were correlated with one another for the bimodal (r=0.953, p<0.001) and combined (r=0.957, p<0.001) conditions. The width of the auditory filter (ERB, in Hz) for the non-implanted and the implanted ear was not correlated with the SRT for either masker or with the degree of masking release. Thus, for this sample of 10 subjects, frequency selectivity – or spectral resolution – was not found to influence the subjects’ ability to listen in the dips. It may be that the spectral smearing caused by channel interaction could not be overcome even with relatively normal spectral resolution provided acoustically. It may also be that other underlying mechanisms limit implant recipients’ ability to listen in the dips. Nelson and Jin [32] proposed that reduced auditory stream segregation or fusion abilities may also play a role.
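The statistic used throughout this analysis is the standard Pearson product-moment coefficient; a self-contained sketch, shown with toy data rather than the study values:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient for two
    equal-length samples: covariance normalized by the product of
    the standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Toy data: a perfectly linear relation gives r = 1.
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0
```

In practice one would also compute the associated p-value (e.g., via a t-test on r with n-2 degrees of freedom), as reported in the analysis above.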
Examining the SRTs in both SS and SQ noise, the 11 bimodal and 6 EAS subjects achieved essentially equivalent performance. Thus it appears that for this task, acoustic hearing in just one ear was sufficient. One of the limitations of the current experimental design, however, was that both the noise and target stimulus originated from a single loudspeaker placed at 0° azimuth. Using a single loudspeaker minimizes the value of binaural cues that could be extracted with two partially hearing ears.
Preservation of nonlinear cochlear processing
Nonlinear cochlear processing is responsible for several important aspects of normal cochlear function, namely high sensitivity, a broad dynamic range, sharp frequency tuning, and enhanced spectral contrasts via suppression. Any reduction in the magnitude of the nonlinearity could result in one or more functional deficits, including impaired speech perception. We examined whether preservation of nonlinear cochlear function was possible following hearing preservation surgery for 6 recipients of the 20-mm Med-El EAS implant and for 7 recipients of the 10-mm Nucleus Hybrid implant [23]. Cochlear nonlinearity was evaluated at signal frequencies of 250 and 500 Hz using Schroeder-phase maskers (e.g., [44–46]). We found that the preservation of nonlinear cochlear processing was possible following EAS surgery. Although only one subject exhibited normal post-implant nonlinear cochlear function at 250 Hz, most subjects had some residual nonlinearity (more so at 250 than at 500 Hz). Thus, postoperatively most patients will retain some benefit from nonlinear cochlear processing at low frequencies.
Variations in nonlinearity, however, were not found to predict speech understanding benefit for EAS patients when acoustic hearing was added to electric stimulation [23]. In other words, patients with complete loss of nonlinear function exhibited as much EAS speech recognition benefit as those patients with normal, or near normal, nonlinearity.
Although the Schroeder masking functions did not correlate with EAS speech perception benefit, they did provide a highly sensitive measure of damage following surgical insertion of the electrode array. For 5 of the 13 subjects there was no significant change in low-frequency audiometric thresholds following surgery, yet these same 5 subjects demonstrated a considerable reduction in the degree of nonlinear cochlear processing. Thus, Schroeder-phase masking provides a sensitive index of surgically induced trauma to the cochlea and may ultimately be a useful tool for evaluating the success of ‘minimally traumatic’ surgery for hearing preservation.
Monaural versus binaural acoustic hearing
The published literature has not focused much on whether binaural acoustic hearing adds more to electric stimulation than just contralateral acoustic hearing. Instead, researchers have generally provided speech perception data for the electric (E) only condition, ipsilateral EAS, and/or the combined EAS condition without reporting the performance for the subject’s own bimodal condition with the ipsilateral ear occluded [4,7–9,11,12,16]. If it is the case that the contralateral ear offers better hearing sensitivity, then it is reasonable to assume that the contralateral ear will add more to performance than the ipsilateral ear. Dorman et al. [25] assessed the bimodal and combined EAS scores of 22 Nucleus Hybrid recipients implanted with a 10-mm electrode array. They found that preserved hearing in the implanted ear added a non-significant 9 percentage points to word recognition over the subject’s own bimodal hearing. In a subset of this subject population (n=7), Gifford et al. [22] reported identical scores for the bimodal and combined conditions on measures of sentence recognition in noise using the pseudoadaptive BKB-SIN test – with the speech and noise originating from a single loudspeaker.
Dunn et al. [47] reported significant benefit for the addition of acoustic hearing in the implanted ear for 11 recipients of the Hybrid S8 (10 mm, 6 electrodes). Spondee word recognition was assessed with an array of 8 loudspeakers arranged in an arc of 108° placed in front of the listener using three conditions: bimodal (CI + contralateral acoustic), hybrid (CI + ipsilateral acoustic), and combined (CI + bilateral acoustic). They showed a significant 2-dB improvement in the SNR at threshold with the addition of acoustic hearing in the ipsilateral, implanted ear to the standard bimodal condition. That is, the best performance was observed with bilateral acoustic hearing in combination with the CI. The subjects in their study had short electrodes and considerable low-frequency acoustic hearing in the implanted ear.
Dorman and Gifford [48] and Gifford et al. [20] also reported significant benefit of ipsilateral acoustic hearing for 8 hearing preservation patients listening in a restaurant simulation with a high-level, diffuse noise. However, just as with Dunn et al. [47], the sample size was small, the patients had very good pre- and post-implant hearing thresholds, and all were implanted with a 10-mm electrode. Thus it is not clear whether patients with longer electrodes (up to 31 mm) and differing levels of pre- and post-implant hearing could also benefit from preservation of hearing in the implanted ear.
Most EAS subjects do lose some hearing postoperatively, with mean losses ranging from 10 to 20 dB through 750 Hz [2,4,6–8,11–15]. Thus there is little reason to believe that, following surgery, a poorer ear would add greatly to a better ear when both are combined with electric stimulation. Furthermore, using conventional speech perception measures with a single loudspeaker placed at 0° azimuth, one should not expect to observe benefit from two acoustic hearing ears over one. Such measures do not assess the potential benefits of binaural acoustic hearing and thus may greatly underestimate the value of hearing preservation in the implanted ear. Having residual acoustic hearing in the implanted ear theoretically offers a number of potential benefits related to binaural hearing. EAS users with binaural low-frequency hearing will presumably make better use of these binaural cues than bimodal cochlear implant users who have just one acoustic-hearing ear.
Binaural hearing
When signals are presented binaurally, head shadow, squelch, and summation can play a role in performance. The effects that hold the most promise for EAS users are head shadow and squelch. Head shadow refers to a physical effect in which the head provides an acoustic barrier, resulting in amplitude or level differences between the ears. If one ear is closer to the noise source, the other ear has a higher, i.e. better, SNR. Of course, one need not have two acoustic hearing ears to benefit from head shadow, as even monaural hearing individuals can benefit if the noise source is directed to the poorer ear.
Binaural squelch refers to a true binaural effect in which an improvement in the SNR results from the comparison of time and intensity differences between the ears. Interaural time differences, which also provide information regarding signal frequency, are most prominent for frequencies below 1500 Hz. Thus, EAS users with binaural low-frequency hearing will presumably make better use of these cues than bimodal cochlear implant users with just one acoustic-hearing ear (most EAS users have measurable hearing below 1000 to 1500 Hz in both ears). Interaural time differences (ITDs) between speech and noise have been shown to improve speech understanding by 2 dB in terms of the SNR, in addition to the 3 dB offered by head shadow alone [49,50]. Given that electric hearing alone does not typically preserve good sensitivity to ITDs [51–53], there is reason to believe that EAS listeners with binaural acoustic hearing will outperform bimodal patients in real-world listening conditions.
Localization
Localization refers to the identification of the location of a sound source in an individual’s horizontal plane. Both ITD and interaural level difference (ILD) cues contribute to localization ability. Given that electric hearing alone does not generally preserve good sensitivity to ITDs [51–54], there is reason to believe that EAS listeners with binaural acoustic hearing will outperform bimodal patients in sound-field listening conditions. Some localization, however, is still possible with bimodal hearing [21,47,55,56]. Previous research on localization with bimodal hearing has generally demonstrated that (i) patients perform above chance on tasks ranging from simple to detailed, (ii) even ears with very poor auditory thresholds can be useful for localization, and (iii) there is no correlation between the level of residual hearing and localization performance. Of course, localization by bimodal patients is significantly poorer than that observed for listeners with normal hearing, whose localization acuity is generally within 1 to 2° for frequencies below 1000 Hz originating from 0° azimuth (±30°) [57,58]. Using spectrally more complex stimuli such as noise and speech, Grantham et al. [59] have shown that localization error for normal-hearing listeners is 8.7°, on average. Absolute localization error for bimodal listeners has generally been shown to range from approximately 10° to more than 50° [21,60].
Dunn et al. [48] examined horizontal-plane localization for 11 Hybrid S8 recipients using the Everyday Sounds Localization test [60]. Localization estimates were obtained for 8 loudspeaker locations spanning a 108° arc in front of the listener. They found that the two listening conditions of bilateral hearing aids and best-aided EAS (implant plus bilateral hearing aids) yielded significantly better localization than either the bimodal or ipsilateral EAS condition. Given the relatively small sample size, the use of everyday sounds with differing spectral and temporal characteristics, and the inclusion of only short-electrode recipients, much additional research is needed. Although one could hypothesize that the underlying mechanism for improved localization with EAS is access to ITD cues, further investigation is required.
Other options for hearing preservation
Hearing preservation with a cochlear implant is also possible with a conventional ‘long’ electrode array. It was previously assumed that any residual hearing in the implanted ear would be sacrificed due to surgical trauma. However, this is not always the case. Minimally traumatic surgical techniques – which may include a smaller cochleostomy or a round window insertion, slow and careful electrode insertion, and/or thinner electrode arrays – have allowed hearing preservation with standard (long) electrode arrays.
Balkany et al. [61] reported the results of a prospective study of 28 cochlear implant recipients that documented the feasibility of hearing preservation with standard electrode arrays. All 28 patients were implanted with the Nucleus Freedom Contour Advance electrode [CI24RE(CA)]. They reported that 32% of the population exhibited complete hearing preservation – postoperative audiometric thresholds within 10 dB of preoperative levels – at the 9-month postoperative test point. Further, 57% of the population exhibited partial hearing preservation. Preoperative low-frequency hearing in all patients, however, was in the severe-to-profound range; thus, although postoperative hearing preservation was documented, the severity of the residual loss essentially precluded useful acoustic hearing without the implant sound processor.
James et al. [62] described hearing preservation following implantation of the Nucleus Contour Advance electrode array for 12 patients. Insertion depth varied from 17 to 19 mm across subjects. Ten of the 12 subjects (83%) exhibited some degree of hearing preservation postoperatively, with a median threshold elevation of 23, 27, and 33 dB at 125, 250, and 500 Hz, respectively.
Fraysse et al. [63] reported on 27 subjects also implanted with the Nucleus Contour Advance electrode array. Minimum reported insertion depth was 17 mm, with angular insertion depths ranging from approximately 300 to 420°. The minimally traumatic ‘advance off stylet’ (AOS) surgical approach was followed for 12 of the 27 subjects. Seven of the 27 subjects showed less than a 20-dB change in thresholds from 125 to 500 Hz postoperatively. Examining the AOS group exclusively, 9 of the 12 subjects had measurable hearing postoperatively, and 33 to 50% of those 9 subjects exhibited less than a 20-dB change in thresholds from 125 to 500 Hz. No measurable postoperative hearing was observed for 11 of the 27 subjects overall (41%) and for 3 of the 12 subjects (25%) in the AOS group.
Gstoettner et al. [64] reported on 23 patients implanted with the Med-El Combi40+ medium (M) or standard (H) electrode array. Insertion depth ranged from 18 to 24 mm across subjects. Complete hearing preservation – postoperative thresholds within 10 dB of preoperative levels – was observed for 39% of subjects, partial hearing preservation for 30%, and complete loss of hearing for 31%. In a later study, Gstoettner et al. [19] reported on 18 subjects implanted with the Combi40+ medium (M) electrode array and observed a much improved rate of hearing preservation: the overall preservation rate was 83.2%, with 66.6% of subjects retaining enough hearing postoperatively to meet EAS audiometric criteria. The mean threshold elevation was 22 dB through 1000 Hz.
Hearing preservation for improved performance in a complex listening environment
EAS patients have two acoustic-hearing ears to code interaural time and intensity differences and to deliver redundant acoustic information. Thus, EAS recipients should have an advantage over bimodal patients when signal and noise are spatially separated. To test this hypothesis, Gifford et al. [20] obtained sentence recognition data for conventional unilateral implant recipients (n=25), bilateral cochlear implant recipients (n=10), bimodal listeners (n=24), and EAS listeners (n=5). The 5 EAS listeners were 3 Nucleus Hybrid recipients (2 Hybrid 10 mm, 1 Hybrid-L24 16 mm) and 2 conventional Nucleus N24 (CI24R-CA) long-electrode recipients with hearing preservation.
HINT sentence recognition [37] was assessed in a restaurant-noise background [65] originating from the R-SPACE 8-loudspeaker array. The 8 loudspeakers were placed circumferentially about the subject’s head at a distance of 24 inches (60 cm), with each speaker separated by 45°. A speech reception threshold (SRT) was obtained using a one-down, one-up adaptive procedure to determine the SNR required for 50% correct performance. The noise level was fixed at 72 dB SPL to simulate the average noise level observed during the restaurant recording.
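The one-down, one-up track can be sketched in a few lines of code. This is a generic illustration of such a staircase, not the authors' implementation; the listener model `respond`, the starting SNR, and the step size are all hypothetical parameters.

```python
def srt_one_down_one_up(respond, start_snr=20.0, step_db=2.0, n_reversals=8):
    """One-down, one-up adaptive track converging on the SNR for
    50% correct. `respond(snr_db) -> bool` models the listener's
    trial-by-trial response; the SRT estimate is the mean SNR at
    the reversal points."""
    snr = start_snr
    last_direction = None   # True = stepping down (after a correct trial)
    reversals = []
    while len(reversals) < n_reversals:
        correct = respond(snr)
        if last_direction is not None and correct != last_direction:
            reversals.append(snr)  # the track changed direction here
        last_direction = correct
        snr += -step_db if correct else step_db
    return sum(reversals) / len(reversals)
```

With a deterministic listener model that responds correctly whenever the SNR is at or above 10 dB, the track descends from the starting SNR, then oscillates about the 10-dB boundary, and the reversal mean lands near that boundary (9.0 dB with these parameters).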
The mean SRTs for the unilateral and bilateral implant recipients were 12.2 and 9.6 dB SNR, respectively. For bimodal listening, i.e., bimodal patients and EAS patients tested in the bimodal condition, the SRTs were 10.6 and 9.6 dB SNR, respectively. When the EAS patients were able to use binaural acoustic hearing, their performance improved by 3.4 dB, for a mean SRT of 6.2 dB SNR. These preliminary data support our hypothesis that the value of hearing preservation will be best demonstrated in listening environments in which target and masker are spatially separated and in which binaural low-frequency cues can play a significant role – such as those environments encountered in the real world. Given that every 1-dB improvement in SNR can yield up to an 8- to 15-percentage-point improvement in speech understanding (e.g., [31,66]), the addition of acoustic hearing from the implanted ear has the potential to provide considerable speech recognition gains in complex listening environments.
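The arithmetic behind that last point can be made explicit with a small illustrative helper (the function name is ours; the slope bounds are the 8- to 15-points-per-dB figure cited above):

```python
def predicted_intelligibility_gain(delta_snr_db, slope_lo=8.0, slope_hi=15.0):
    """Map an SNR improvement (dB) to the range of percentage-point
    gains implied by a psychometric slope of 8-15 points per dB."""
    return delta_snr_db * slope_lo, delta_snr_db * slope_hi

# The 3.4-dB EAS advantage reported above maps to roughly a
# 27- to 51-point improvement, ceiling effects permitting.
print(predicted_intelligibility_gain(3.4))
```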
How much low-frequency hearing is needed?
Given the increasing success of minimally traumatic surgical techniques in preserving low-frequency hearing with standard electrode arrays, one must ask how much low-frequency hearing is needed to observe an EAS benefit. Brown and Bacon [67,68] assessed sentence recognition in noise for 8 cochlear implant recipients who had residual low-frequency hearing in the non-implanted ear (n=5) and/or the implanted ear (n=3). To determine the contribution of the voice fundamental frequency (F0) to speech understanding in noise, they assessed sentence recognition in noise for electric stimulation alone (E), electric plus acoustic stimulation low-pass filtered at 500 Hz (E+A), and electric plus a pure tone at F0 that was both frequency and amplitude modulated (E+F0). They found a similar level of sentence recognition in noise for the E+A and E+F0 conditions. Thus, it would appear that the majority of the EAS benefit arose from information contained in the frequency region of F0, which ranged from 127 to 184 Hz.
With a similar aim, Zhang et al. [69] examined speech perception for 9 adult bimodal subjects using a low-pass-filtering design. The acoustic speech stimuli delivered to the non-implanted ear were either unprocessed or low-pass filtered (90 dB/oct) at 125, 250, 500, or 750 Hz and were combined with the electric stimulation delivered via the cochlear implant sound processor. CNC monosyllabic word recognition [70] and AzBio sentence recognition [71] at +10 dB SNR were assessed. Zhang et al. [69] found that the 125-Hz low-pass-filtered band provided the majority of the benefit obtained with the unprocessed acoustic signal. This outcome fits well with the results of Brown and Bacon [67,68] and suggests that the majority of the EAS effect is provided by information in the region of F0.
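The low-pass manipulation is straightforward to simulate. Below is a minimal sketch, not Zhang et al.'s actual signal chain: a cascade of first-order IIR stages, each rolling off at about 6 dB/oct, approximates a steep stopband slope (15 stages for roughly 90 dB/oct). A real experiment would use a proper high-order filter design (e.g., Butterworth or elliptic), since cascading identical one-pole stages also pulls the composite 3-dB point below the nominal cutoff.

```python
import math

def one_pole_lowpass(samples, cutoff_hz, fs_hz):
    """Single first-order IIR low-pass stage (about 6 dB/oct roll-off)."""
    a = 1.0 - math.exp(-2.0 * math.pi * cutoff_hz / fs_hz)
    out, state = [], 0.0
    for x in samples:
        state += a * (x - state)   # leaky integrator, unity gain at DC
        out.append(state)
    return out

def steep_lowpass(samples, cutoff_hz, fs_hz, stages=15):
    """Cascade of one-pole stages: 15 stages give an asymptotic
    stopband slope of roughly 15 x 6 = 90 dB/oct."""
    for _ in range(stages):
        samples = one_pole_lowpass(samples, cutoff_hz, fs_hz)
    return samples
```

For example, with a 500-Hz cutoff and a 16-kHz sampling rate, a 100-Hz tone passes with only modest attenuation, while a 4-kHz tone is suppressed by far more than 60 dB.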
Early reports on the benefit of adding acoustic to electric stimulation suggested that the benefit could be due to the acoustic F0 aiding in the segregation of the target voice from the masking signal (e.g., [35,72,73]). More recent studies (e.g., [74]), however, suggest that this is not likely to be the case. On one account, the high-resolution F0 signal provided by acoustic hearing allows better recognition of lexical boundaries in noise-corrupted speech [75,76].
The future
In light of these findings, though preliminary in nature, it may be the case that aidable hearing in only a restricted low-frequency passband is required to obtain EAS benefit. This has the potential to influence the design – particularly the array length – of the EAS electrodes that ultimately become commercially available. Perhaps in the future a range of electrode lengths will be offered, with the choice based on the configuration, slope, and low-frequency thresholds of the audiogram as well as individual cochlear anatomy derived from imaging. It may also turn out that the hearing preservation achieved with conventional long electrodes yields maximum EAS benefit, provided that sufficient low-frequency hearing and amplification are available. Clearly, much additional research is needed.
Acknowledgements
This work was supported by NIDCD grant DC006538 to RHG and by NIDCD grant R01 DC00654-16 to MFD. A portion of these results was presented at the 2006 International Conference on Cochlear Implants and Other Implantable Auditory Technologies in Vienna, Austria; the 2005 Hearing Preservation Workshop in Warsaw, Poland; and the 2008 Hearing Preservation Workshop in Kansas City, MO. We would like to thank the anonymous reviewers whose suggested edits enhanced the quality of this manuscript.
Footnotes
Estimates of normal frequency selectivity at 500 Hz were obtained from Gifford et al. (2010) in which the same experiment was conducted for 15 young adult subjects with normal hearing.
For those subjects not demonstrating frequency selectivity, a value of 600 Hz was entered as the width of the auditory filter for correlation purposes.
References
- 1.Von Ilberg C, Kiefer J, Tillein J, et al. Electric-acoustic stimulation of the auditory system. ORL. 1999;61:334–40. doi: 10.1159/000027695. [DOI] [PubMed] [Google Scholar]
- 2.Skarzynski H, Lorens A, Piotrowska A. A new method of partial deafness treatment. Med Sci Monit. 2003;9(4):CS20–24. [PubMed] [Google Scholar]
- 3.Skarzynski H, Lorens A, Piotrowska A. Preservation of low-frequency hearing in partial deafness cochlear implantation. International Congress Series. 2004;1273:239–42. [Google Scholar]
- 4.Skarzynski H, Lorens A, Piotrowska A, Anderson I. Partial deafness cochlear implantation provides benefit to a new population of individuals with hearing loss. Acta Oto-Laryngologica. 2006;126:934–40. doi: 10.1080/00016480600606632. [DOI] [PubMed] [Google Scholar]
- 5.Gantz BJ, Turner CW. Combining acoustic and electrical hearing. Laryngoscope. 2003;113:1726–30. doi: 10.1097/00005537-200310000-00012. [DOI] [PubMed] [Google Scholar]
- 6.Gantz BJ, Turner CW. Combining acoustic and electrical speech processing: Iowa/Nucleus hybrid implant. Acta Oto-Laryngologica. 2004;124:334–47. doi: 10.1080/00016480410016423. [DOI] [PubMed] [Google Scholar]
- 7.Gstoettner W, Kiefer J, Baumgartner WD, et al. Hearing preservation in cochlear implantation for electric acoustic stimulation. Acta Oto-Laryngologica. 2004;124:348–52. doi: 10.1080/00016480410016432. [DOI] [PubMed] [Google Scholar]
- 8.Gantz BJ, Turner CW, Gfeller KE, Lowder M. Preservation of hearing in cochlear implant surgery: advantages of combined electrical and acoustical speech processing. Laryngoscope. 2005;115:796–802. doi: 10.1097/01.MLG.0000157695.07536.D2. [DOI] [PubMed] [Google Scholar]
- 9.Gantz BJ, Turner CW, Gfeller KE. Acoustic plus electric speech processing: preliminary results of a multicenter clinical trial of the Iowa/Nucleus Hybrid implant. Audiol Neurootol. 2006;11(Suppl.1):63–68. doi: 10.1159/000095616. [DOI] [PubMed] [Google Scholar]
- 10.Gantz BJ, Hansen MR, Turner CW, et al. Hybrid 10 clinical trial. Audiol Neurotol. 2009;14(Suppl.1):32–38. doi: 10.1159/000206493. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Kiefer J, Pok M, Adunka O, et al. Combined electric and acoustic stimulation of the auditory system: results of a clinical study. Audiol Neurotol. 2005;10:134–44. doi: 10.1159/000084023. [DOI] [PubMed] [Google Scholar]
- 12.Luetje CM, Thedinger BS, Buckler LR, et al. Hybrid cochlear implantation: clinical results and critical review of 13 cases. Otol Neurotol. 2007;28(4):473–78. doi: 10.1097/RMR.0b013e3180423aed. [DOI] [PubMed] [Google Scholar]
- 13.Woodson EA, Reiss LAJ, Turner CW, et al. The hybrid cochlear implant: a review. Adv Otorhinolaryngol. 2010;67:125–34. doi: 10.1159/000262604. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Arnolder C, Helbig S, Wagenblast J, et al. Electric acoustic stimulation in patients with postlingual severe high-frequency hearing loss: clinical experience. Adv Otorhinolaryngol. 2010;67:116–24. doi: 10.1159/000262603. [DOI] [PubMed] [Google Scholar]
- 15.Skarzynski H, Lorens A, Piotrowska A, Anderson I. Preservation of low frequency hearing in partial deafness cochlear implantation (PDCI) using the round window surgical approach. Acta Otolaryngol. 2007;127(1):41–48. doi: 10.1080/00016480500488917. [DOI] [PubMed] [Google Scholar]
- 16.Gstöttner W, Pok SM, Peters S, Kiefer J, Adunka O. Cochlear implantation with preservation of residual deep frequency hearing. HNO. 2005;53(9):784–90. doi: 10.1007/s00106-004-1170-5. [DOI] [PubMed] [Google Scholar]
- 17.Wilson BS, Lawson DT, Muller JM, et al. Cochlear implants: some likely next steps. Annu Rev Biomed Eng. 2003;5:207–49. doi: 10.1146/annurev.bioeng.5.040202.121645. [DOI] [PubMed] [Google Scholar]
- 18.Brill S, Lawson DT, Wolford RD, et al. Speech processors for auditory prostheses: Eleventh Quarterly Progress Report on NIH Project N01-DC-8-2105. 2002 [Google Scholar]
- 19.Gstoettner WK, van de Heyning P, O’Connor AF, et al. Electric acoustic stimulation of the auditory system: results of a multi-centre investigation. Acta Otolaryngol. 2008;128:968–75. doi: 10.1080/00016480701805471. [DOI] [PubMed] [Google Scholar]
- 20.Gifford R, Dorman M, Brown C. Psychophysical properties of low-frequency hearing: Implications for electric and acoustic stimulation (EAS) Adv Otorhinolaryngol. 2010;67:51–60. doi: 10.1159/000262596. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21.Ching TY, Inceri P, Hill M. Binaural benefits for adults who use hearing aids and cochlear implants in opposite ears. Ear Hear. 2004;25:9–21. doi: 10.1097/01.AUD.0000111261.84611.C8. [DOI] [PubMed] [Google Scholar]
- 22.Gifford RH, Dorman MF, Spahr AJ, McKarns SA. Combined electric and contralateral acoustic hearing: Word and sentence recognition with bimodal hearing. J Speech Hear Res. 2007;50:835–43. doi: 10.1044/1092-4388(2007/058). [DOI] [PMC free article] [PubMed] [Google Scholar]
- 23.Gifford RH, Dorman MF, Spahr AJ, et al. Hearing preservation surgery: psychophysical estimates of cochlear damage in recipients of a short electrode array. J Acoust Soc Am. 2008;124:2164–73. doi: 10.1121/1.2967842. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24.Wilson B, Wolford R, Lawson D, Schatzer R. Speech processors for auditory prostheses: third quarter progress report on NIH project N01-DC-2-1002. 2002 [Google Scholar]
- 25.Dorman MF, Gifford RH, Lewis K, et al. Word recognition following implantation of conventional and 10 mm Hybrid electrodes. Audiol Neurotol. 2009;14:181–89. doi: 10.1159/000171480. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 26.Bacon SP, Opie JM, Montoya DY. The effects of hearing loss and noise masking on the masking release for speech in temporally complex backgrounds. J Speech Lang Hear Res. 1998;41(3):549–63. doi: 10.1044/jslhr.4103.549. [DOI] [PubMed] [Google Scholar]
- 27.Bacon SP, Takahashi GA. Overshoot in normal-hearing and hearing-impaired subjects. J Acoust Soc Am. 1992;91(5):2865–71. doi: 10.1121/1.402967. [DOI] [PubMed] [Google Scholar]
- 28.Festen JM, Plomp R. Effects of fluctuating noise and interfering speech on the speech-reception threshold for impaired and normal hearing. J Acoust Soc Am. 1990;88(4):1725–36. doi: 10.1121/1.400247. [DOI] [PubMed] [Google Scholar]
- 29.Nelson PB, Jin S-H, Carney AE, Nelson DA. Understanding speech in modulated interference: cochlear implant users and normal-hearing listeners. J Acoust Soc Am. 2003;113:961–68. doi: 10.1121/1.1531983. [DOI] [PubMed] [Google Scholar]
- 30.Nelson DA, Donaldson GS. Psychophysical recovery from single-pulse forward masking in electric hearing. J Acoust Soc Am. 2001;109(6):2921–33. doi: 10.1121/1.1371762. [DOI] [PubMed] [Google Scholar]
- 31.Shannon RV. Forward masking in patients with cochlear implants. J Acoust Soc Am. 1990;88:741–44. doi: 10.1121/1.399777. [DOI] [PubMed] [Google Scholar]
- 32.Nelson PB, Jin S-H. Factors affecting speech understanding in gated interference: cochlear implant users and normal-hearing listeners. J Acoust Soc Am. 2004;115:2286–94. doi: 10.1121/1.1703538. [DOI] [PubMed] [Google Scholar]
- 33.Fu QJ, Nogaki G. Noise susceptibility of cochlear implant users: the role of spectral resolution and smearing. J Assoc Res Otolaryngol. 2005;6:19–27. doi: 10.1007/s10162-004-5024-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 34.Turner CW, Gantz BJ, Vidal C, et al. Speech recognition in noise for cochlear implant listeners: Benefits of residual acoustic hearing. J Acoust Soc Am. 2004;115:1729–35. doi: 10.1121/1.1687425. [DOI] [PubMed] [Google Scholar]
- 35.Turner CW, Gantz BJ, Reiss L. Integration of acoustic and electrical hearing. J Rehab Res Dev. 2008;45:769–78. doi: 10.1682/jrrd.2007.05.0065. [DOI] [PubMed] [Google Scholar]
- 36.Van Tasell DJ, Yanz JL. Speech recognition threshold in noise: effects of hearing loss, frequency response, and speech materials. J Speech Hear Res. 1987;30(3):377–86. [PubMed] [Google Scholar]
- 37.Nilsson M, Soli S, Sullivan J. Development of the hearing in noise test for the measurement of speech reception thresholds in quiet and in noise. J Acoust Soc Am. 1994;95:1085–99. doi: 10.1121/1.408469. [DOI] [PubMed] [Google Scholar]
- 38.Levitt H. Transformed up-down methods in psychoacoustics. J Acoust Soc Am. 1971;49:467–77. [PubMed] [Google Scholar]
- 39.Laroche C, Hetu R, Quoc HT, et al. Frequency selectivity in workers with noise-induced hearing loss. Hear Res. 1992;64(1):61–72. doi: 10.1016/0378-5955(92)90168-m. [DOI] [PubMed] [Google Scholar]
- 40.Leek MR, Summers V. Auditory filter shapes of normal-hearing and hearing-impaired listeners in continuous broadband noise. J Acoust Soc Am. 1993;94:3127–37. doi: 10.1121/1.407218. [DOI] [PubMed] [Google Scholar]
- 41.Patterson RD, Nimmo-Smith I, Weber DL, Milroy R. The deterioration of hearing with age: frequency selectivity, the critical ratio, the audiogram, and speech threshold. J Acoust Soc Am. 1982;72(6):1788–803. doi: 10.1121/1.388652. [DOI] [PubMed] [Google Scholar]
- 42.Stone MA, Glasberg BR, Moore BCJ. Simplified measurement of auditory filter shapes using the notched-noise method. Br J Audiol. 1992;26(6):329–34. doi: 10.3109/03005369209076655. [DOI] [PubMed] [Google Scholar]
- 43.Glasberg BR, Moore BCJ. Derivation of auditory filter shapes from notched-noise data. Hear Res. 1990;47:103–38. doi: 10.1016/0378-5955(90)90170-t. [DOI] [PubMed] [Google Scholar]
- 44.Schroeder MR. Synthesis of low-peak-factor signals and binary sequences with low autocorrelation. IEEE Transactions on Information Theory. 1971;16:85–89. [Google Scholar]
- 45.Oxenham AJ, Dau T. Reconciling frequency selectivity and phase effects in masking. J Acoust Soc Am. 2001;110:1525–38. doi: 10.1121/1.1394740. [DOI] [PubMed] [Google Scholar]
- 46.Oxenham AJ, Dau T. Masker phase effects in normal-hearing and hearing-impaired listeners: evidence for peripheral compression at low signal frequencies. J Acoust Soc Am. 2004;116:2248–57. doi: 10.1121/1.1786852. [DOI] [PubMed] [Google Scholar]
- 47.Dorman MF, Gifford RH. Combining acoustic and electric stimulation in the service of speech recognition. Intl J Audiol. 2010;49:912–19. doi: 10.3109/14992027.2010.509113. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 48.Dunn CC, Perreau A, Gantz BJ, Tyler RS. Benefits of localization and speech perception with multiple noise sources in listeners with a short-electrode cochlear implant. J Am Acad Audiol. 2010;21:44–51. doi: 10.3766/jaaa.21.1.6. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 49.Hirsh IJ. The relation between localization and intelligibility. J Acoust Soc Am. 1950;22:196–200. [Google Scholar]
- 50.Licklider JCR. The influence of interaural phase relations upon the masking of speech by white noise. J Acoust Soc Am. 1948;20:150–59. [Google Scholar]
- 51.Van Hoesel RJ, Clark GM. Fusion and lateralization study with two binaural cochlear implant patients. Annals of Otology, Rhinology, and Laryngology Supplement. 1995;166:233–35. [PubMed] [Google Scholar]
- 52.Van Hoesel RJ, Clark GM. Psychophysical studies with two binaural cochlear implant subjects. J Acoust Soc Am. 1997;102:495–507. doi: 10.1121/1.419611. [DOI] [PubMed] [Google Scholar]
- 53.Van Hoesel RJ, Tong YC, Hollow RD, Clark GM. Psychophysical and speech perception studies: A case report on a binaural cochlear implant subject. J Acoust Soc Am. 1993;94:3178–89. doi: 10.1121/1.407223. [DOI] [PubMed] [Google Scholar]
- 54.Van Hoesel RJ, Ramsden R, O’Driscoll M. Sound-direction identification, interaural time delay discrimination, and speech intelligibility advantages in noise for a bilateral cochlear implant user. Ear Hear. 2002;23:137–49. doi: 10.1097/00003446-200204000-00006. [DOI] [PubMed] [Google Scholar]
- 55.Tyler RS, Parkinson AJ, Wilson BS, et al. Patients utilizing a hearing aid and a cochlear implant: speech perception and localization. Ear Hear. 2002;23:98–105. doi: 10.1097/00003446-200204000-00003. [DOI] [PubMed] [Google Scholar]
- 56.Seeber BU, Baumann U, Fastl H. Localization ability with bimodal hearing aid and bilateral cochlear implants. J Acoust Soc Am. 2004;116:1698–709. doi: 10.1121/1.1776192. [DOI] [PubMed] [Google Scholar]
- 57.Mills AW. On the minimum audible angle. J Acoust Soc Am. 1958;30:237–46. [Google Scholar]
- 58.Mills AW. Auditory localization. In: Tobias JV, editor. Foundations of Modern Auditory Theory. vol. 11. Academic Press; New York: 1972. pp. 303–48. [Google Scholar]
- 59.Grantham DW, Ashmead DH, Ricketts TA, et al. Horizontal-Plane Localization of Noise and Speech Signals by Postlingually Deafened Adults Fitted With Bilateral Cochlear Implants. Ear Hear. 2007;28:524–41. doi: 10.1097/AUD.0b013e31806dc21a. [DOI] [PubMed] [Google Scholar]
- 60.Dunn CC, Tyler RS, Witt SA. Benefit of wearing a hearing aid on the unimplanted ear in adult users of a cochlear implant. J Speech Lang Hear Res. 2005;48(3):668–80. doi: 10.1044/1092-4388(2005/046). [DOI] [PubMed] [Google Scholar]
- 61.Balkany TJ, Connell SS, Hodges AV, et al. Conservation of residual acoustic hearing after cochlear implantation. Otol Neurotol. 2006;27:1083–88. doi: 10.1097/01.mao.0000244355.34577.85. [DOI] [PubMed] [Google Scholar]
- 62.James C, Albegger K, Battmer R, et al. Preservation of residual hearing with cochlear implantation: How and why. Acta Otolaryngol. 2005;125:481–91. doi: 10.1080/00016480510026197. [DOI] [PubMed] [Google Scholar]
- 63.Fraysse B, Ramos A, Sterkers MO, et al. Residual hearing conservation and electroacoustic stimulation with the nucleus 24 contour advance cochlear implant. Otol Neurotol. 2006;27:624–33. doi: 10.1097/01.mao.0000226289.04048.0f. [DOI] [PubMed] [Google Scholar]
- 64.Gstoettner WK, Helbig S, Maier N, et al. Ipsilateral electric acoustic stimulation of the auditory system: results of long-term hearing preservation. Audiol Neurootol. 2006;11:49–56. doi: 10.1159/000095614. [DOI] [PubMed] [Google Scholar]
- 65.Compton-Conley CL, Neuman AC, Killion MC, Levitt H. Performance of directional microphones for hearing aids: real-world versus simulation. J Am Acad Audiol. 2004;15:440–55. doi: 10.3766/jaaa.15.6.5. [DOI] [PubMed] [Google Scholar]
- 66.Plomp R, Mimpen MA. Improving the reliability of testing the speech reception threshold for sentences. Audiology. 1979;18:43–52. doi: 10.3109/00206097909072618. [DOI] [PubMed] [Google Scholar]
- 67.Brown CB, Bacon SP. Low-frequency speech cues and simulated electric-acoustic hearing. J Acoust Soc Am. 2009;125:1658–65. doi: 10.1121/1.3068441. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 68.Brown CB, Bacon SP. Achieving electric-acoustic benefit with a modulated tone. Ear Hear. 2009;30:489–93. doi: 10.1097/AUD.0b013e3181ab2b87. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 69.Zhang T, Dorman M, Spahr A. Information from the voice fundamental frequency (F0) accounts for the majority of the benefit when acoustic stimulation is added to electric stimulation. Ear and Hearing. 2010;31(1):63–69. doi: 10.1097/aud.0b013e3181b7190c. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 70.Peterson GE, Lehiste I. Revised CNC lists for auditory tests. J Speech Hear Disord. 1962;27:62–70. doi: 10.1044/jshd.2701.62. [DOI] [PubMed] [Google Scholar]
- 71.Spahr AJ, Dorman MF, Litvak LM, et al. Development and validation of the AzBio sentence lists. Ear Hear. 2012;33(1):112–17. doi: 10.1097/AUD.0b013e31822c2549. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 72.Chang J, Bai J, Zeng F-G. Unintelligible low-frequency sound enhances simulated cochlear-implant speech recognition in noise. IEEE Trans Biomed Eng. 2006;53:2598–601. doi: 10.1109/TBME.2006.883793. [DOI] [PubMed] [Google Scholar]
- 73.Qin M, Oxenham AJ. Effects of introducing unprocessed low-frequency information on the reception of envelope-vocoder processed speech. J Acoust Soc Am. 2006;119:2417–26. doi: 10.1121/1.2178719. [DOI] [PubMed] [Google Scholar]
- 74.Kong YY, Carlyon RP. Improved speech recognition in noise in simulated binaurally combined acoustic and electric stimulation. J Acoust Soc Am. 2007;121:3717–27. doi: 10.1121/1.2717408. [DOI] [PubMed] [Google Scholar]
- 75.Spitzer S, Liss J, Spahr R, et al. The use of fundamental frequency for lexical segmentation in listeners with cochlear implants. Journal of the Acoustical Society of America. 2009;125(6):EL 235. doi: 10.1121/1.3129304. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 76.Li N, Loizou P. Factors affecting masking release in cochlear-implant vocoded speech. J Acoust Soc Am. 2009;126(1):338–46. doi: 10.1121/1.3133702. [DOI] [PMC free article] [PubMed] [Google Scholar]

