Abstract
The majority of recently implanted cochlear implant patients can potentially benefit from a hearing aid in the ear contralateral to the implant. When patients combine electric and acoustic stimulation, word recognition in quiet and sentence recognition in noise increase significantly. Several studies suggest that the acoustic information that leads to the increased level of performance resides mostly in the frequency region of the voice fundamental, e.g. 125 Hz for a male voice. Recent studies suggest that this information aids speech recognition in noise by improving the recognition of lexical boundaries or word onsets. In some noise environments, patients with bilateral implants can achieve levels of performance similar to those of patients who combine electric and acoustic stimulation. Patients who have undergone hearing preservation surgery, and who receive electric stimulation from a cochlear implant along with low-frequency acoustic hearing in both the implanted and non-implanted ears, achieve the best performance in a high-noise environment.
Keywords: Cochlear implants, Combined acoustic and electric stimulation, Bilateral cochlear implants
Until relatively recently, most potential cochlear implant patients were bilaterally deaf in the conventional sense, i.e. they could hear little or nothing with either ear. After receiving a cochlear implant they could hear with one ear, the implanted ear. Today, most patients have some residual hearing in their better, or non-implanted, ear. These patients have the possibility of hearing bilaterally by combining electric stimulation (E) from their implant with acoustic stimulation (A) from the contralateral ear. In this paper we review our recent work with this population of implant patients and with two other groups of implant patients who receive bilateral stimulation: (1) those who have been fitted with bilateral cochlear implants, and (2) those who have a single cochlear implant and hearing in both the implanted and the non-implanted ear. At issue in this article is the performance on tests of speech understanding by unilateral cochlear implant patients and by the three groups of ‘two-eared’ patients described above.
How many patients might benefit from combining acoustic and electric stimulation (EAS)?
We have conducted a chart review of 276 adult patients who received a cochlear implant at the Mayo Clinic, Rochester (n = 100), or at the School of Medicine of the University of Ottawa, Canada (n = 176). The criterion for inclusion was implantation within the last five years. Our interest was the hearing threshold at 250 Hz in the non-implanted ear. The results, in terms of audiometric threshold at 250 Hz, were binned from < 40 dB HL to 100+ dB HL as shown in Table 1. Relatively few patients (n = 19) had thresholds better than 40 dB HL. As would be expected, there were more patients with higher thresholds than with lower thresholds. The observation of interest is the cumulative number of patients (n = 165) in the categories up to and including the 80–85 dB HL category. These patients make up the majority of the sample (165 of 276, or 59.8%). If we assume (1) that these patients can at least potentially benefit from amplification in the non-implanted ear (e.g. Hamzavi et al, 2004), and (2) that these samples are representative of recent cochlear implant patients at other centers, then the majority of recent cochlear implant patients have the possibility of combining electric stimulation from a cochlear implant with low-frequency acoustic stimulation from the contralateral ear. If this is the case today, then it will certainly be the case in the near future, when the rules for implant candidacy can be expected to become more liberal (in terms of auditory thresholds and speech understanding) than they are today (Gifford et al, 2010).
Table 1.
Auditory thresholds at 250 Hz in the contralateral ear of 276 CI patients.
| Threshold @ 250 Hz | n | Σ |
|---|---|---|
| < 40 dB | 19 | |
| 40–45 dB | 15 | 34 |
| 50–55 dB | 29 | 63 |
| 60–65 dB | 27 | 90 |
| 70–75 dB | 34 | 124 |
| 80–85 dB | 41 | 165 |
| 90–95 dB | 31 | 196 |
| 100+ dB | 80 | 276 |
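The cumulative column (Σ) of Table 1 is a running sum of the per-bin counts. As an illustrative sketch (the code is ours, not part of the original chart review; the bin labels and counts are taken directly from the table), the tabulation and the 59.8% figure can be reproduced as follows:

```python
# Reproduce the cumulative column of Table 1 from the per-bin counts.
from itertools import accumulate

bins = ["< 40", "40-45", "50-55", "60-65", "70-75", "80-85", "90-95", "100+"]
counts = [19, 15, 29, 27, 34, 41, 31, 80]           # n per bin, from Table 1

cumulative = list(accumulate(counts))               # 19, 34, 63, ..., 276
for label, n, total in zip(bins, counts, cumulative):
    print(f"{label:>6} dB HL: n = {n:3d}, cumulative = {total:3d}")

# Proportion of patients up to and including the 80-85 dB HL bin:
print(f"{cumulative[5] / sum(counts):.1%}")         # 165/276 = 59.8%
```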
What level of performance can be achieved with EAS?
There have been many descriptions of speech understanding when patients combine electric and acoustic stimulation (e.g. Shallop et al, 1992; Armstrong et al, 1997; Tyler et al, 2002; Ching et al, 2004; Gantz & Turner, 2004; Gstoettner et al, 2004; Kiefer et al, 2004; Turner et al, 2004; Kong et al, 2005; Mok et al, 2006; Gifford et al, 2007; Dorman et al, 2008, 2009). Here we describe the results from two of our recent studies. Figure 1 (right) shows consonant nucleus consonant (CNC) word scores (Peterson & Lehiste, 1962) for a group of patients whose average audiogram is shown at the left of the figure (from Dorman et al, 2008). These patients had relatively good thresholds at 250 Hz and 500 Hz: 38 and 53 dB HL, respectively. Indeed, the mean threshold at 250 Hz was in the top 12% of the patients in Table 1. Performance in the acoustic-alone condition was poor: 27% correct. The mean word recognition score for the CI-alone condition was 53% correct, a score very near the industry average for unilateral cochlear implants (Gifford et al, 2008; e.g. Firszt et al, 2004; Bassim et al, 2005; Balkany et al, 2007). Thus, this sample was representative of typical cochlear implant patients. The score in the EAS condition, 73% correct, was significantly higher than the score in the CI-alone condition, 53% correct.
Figure 1.
Left: audiogram for the contralateral ear of EAS patients. Right: CNC word recognition in acoustic only, electric (CI) only, and combined electric and acoustic (EAS) conditions (from Dorman et al, 2008). A = acoustic stimulation; E = electric stimulation; E+A = electric plus acoustic stimulation. Error bars indicate +1 standard deviation.
Not every patient with hearing in the ear contralateral to the implant shows EAS benefit on tests of CNC word understanding (e.g. Dunn et al, 2005; Mok et al, 2006). This could be due to several factors. One factor is the magnitude of the hearing loss and the loss of normal cochlear function, e.g. the cochlear nonlinearity. This nonlinearity is responsible for several aspects of normal cochlear function, i.e. high sensitivity, a broad dynamic range, sharp frequency tuning, and enhanced spectral contrasts via suppression (e.g. Oxenham & Bacon, 2004). Any reduction in the magnitude of the nonlinearity could result in one or more functional deficits, including impaired speech perception.
Another factor is that CNC words may not be the most sensitive material with which to assess the benefits of combining acoustic and electric stimulation. For example, it is common to find relatively large improvements in EAS performance when sentences are presented in noise. Figure 2 shows sentence recognition at a +10 dB signal-to-noise ratio for nine EAS patients who had relatively good low-frequency auditory thresholds (Zhang et al, 2010). The mean thresholds at 125, 250, 500, 1000, 2000, and 4000 Hz were 31, 34, 50, 76, 88, and 97 dB HL, respectively. Performance in both the acoustic-alone and CI-alone conditions was modest: 42% and 40% correct, respectively. The scores increased by 45 percentage points in the EAS condition relative to the CI-alone condition. Large gains, of the magnitude shown here, have also been reported by others for patients with more hearing loss at low frequencies (e.g. Kiefer et al, 2005).
Figure 2.
Sentence recognition by EAS patients in acoustic only, electric (CI) only, and combined electric and acoustic (EAS) conditions (from Zhang et al, 2010). A = acoustic stimulation; E = electric stimulation; E + A= electric plus acoustic stimulation. Error bars indicate +1 standard deviation.
In the near future, improvements in signal processing for a single, unilateral cochlear implant are unlikely to improve performance by 20 percentage points in quiet and 40 percentage points in noise. For this reason, we suggest that combining electric and acoustic stimulation is currently the most promising approach for improving the performance of adult patients who qualify for a single cochlear implant (see also Ching et al, 2007).
Where in frequency space is the information that allows large gains in performance when acoustic stimulation is added to electric stimulation?
To answer this question, EAS patients were presented with a full-bandwidth signal to their implanted ear and a filtered signal to their non-implanted ear (Zhang et al, 2010). The filtered signals were created with 256th-order FIR filters with corner frequencies at 125, 250, 500, and 750 Hz. As shown in Figure 3 (top), there was an improvement of 30 percentage points in CNC word recognition when the full-bandwidth signal was presented to both the implanted and contralateral ears. When the full-bandwidth signal was presented to the implanted ear and the 125 Hz low-pass signal was presented to the non-implanted ear, the gain in CNC word score was 21 percentage points. There was no statistically significant difference between performance with the 125 Hz low-pass and the full-bandwidth acoustic signals when either was added to the electric stimulation provided by the implant. Thus, for CNC word recognition in quiet, all of the information that led to an increase in performance in the EAS condition was contained in the 125 Hz low-pass filtered signal. This signal contained only the first harmonic of the glottal source, the F0, and a much-attenuated second harmonic. Thus, the information that engenders the improvement in performance for CNC words, when acoustic stimulation is added to electric stimulation, is likely information about voicing.
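As a rough illustration of how such filtered signals could be generated (a minimal sketch: the sampling rate, window choice, and use of the scipy library are our assumptions for illustration, not details reported by Zhang et al, 2010):

```python
# Sketch: steep low-pass filtering of the kind described in the text,
# using 256th-order linear-phase FIR filters (order taken from the text).
import numpy as np
from scipy.signal import firwin, lfilter

fs = 44100                          # assumed sampling rate (Hz)
ORDER = 256                         # filter order, from the text
CORNERS = [125, 250, 500, 750]      # corner frequencies (Hz), from the text

def lowpass(signal, cutoff_hz, fs=fs, order=ORDER):
    """Low-pass filter a 1-D signal with an FIR filter of the given order."""
    taps = firwin(order + 1, cutoff_hz, fs=fs)   # order+1 taps
    return lfilter(taps, 1.0, signal)

# Example: filter one second of white noise at each corner frequency.
x = np.random.randn(fs)
filtered = {fc: lowpass(x, fc) for fc in CORNERS}
```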
Figure 3.
CNC word recognition (top), and AzBio sentence recognition at +10 dB SNR (bottom) for EAS patients in acoustic alone, electric alone, and EAS conditions. In the acoustic only and EAS conditions the acoustic signal was either wideband or low-pass (LP) filtered at 125, 250, 500, and 750 Hz. Error bars indicate +1 standard deviation. (From Zhang et al, 2010).
How would information about voicing benefit word recognition in quiet for a cochlear implant patient?
In the standard view, vowels and consonants, the constituent units of words, are specified by the location of, and changes in the location of, the first, second, and third formants (F1, F2, and F3) (e.g. Liberman, 1996). The F0 contour plays a very small role in specifying vowel and consonant identity. However, the presence of F0 (i.e. voicing) and an envelope that marks the onset and duration of voicing play a critical role in labeling a consonant as voiced or voiceless (see Faulkner & Rosen, 1999, for a discussion of these cues in the context of auditory and audio-visual speech perception).
In one view of consonant recognition by implant patients, cues in the amplitude envelope provide enough information for the recognition of consonant voicing and consonant manner (e.g. Shannon et al, 1995). If this is the case, why would the addition of F0 from the acoustic signal be of use to an implant patient who should receive a good representation of envelope information from his/her implant?
Implant patients receive the envelope features of manner and voicing relatively well, but not perfectly. Spahr et al (2007) reported, for a sample of 39 implant patients with average or above average scores on CNC words, that consonant place of articulation was received with 59% accuracy, voicing with 73% accuracy, and manner with 86% accuracy. Ching (2005) reports an average place score of 46%, a voicing score of 54%, and a manner score of 57%. Thus, there is ample room for an acoustic signal to enhance both voicing and manner. Indeed, Ching (2005) has reported improved recognition of both voicing and manner in children and adults who combine electric and acoustic stimulation. Correct decisions about consonant manner and voicing provide phonotactic constraints that can narrow potential word candidates in a lexicon (e.g. Zue, 1985) and can lead to improved word recognition in quiet.
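As a toy illustration of this narrowing (the miniature lexicon and feature coding below are invented for this sketch and are not drawn from Zue, 1985, or from the studies above):

```python
# Toy example: manner and voicing decisions narrow the candidate set in a
# small lexicon. Each word is coded as (manner, voicing) for its consonants.
LEXICON = {
    "bat": [("stop", "voiced"), ("stop", "voiceless")],
    "pat": [("stop", "voiceless"), ("stop", "voiceless")],
    "mat": [("nasal", "voiced"), ("stop", "voiceless")],
    "vat": [("fricative", "voiced"), ("stop", "voiceless")],
    "sat": [("fricative", "voiceless"), ("stop", "voiceless")],
}

def candidates(perceived):
    """Return words consistent with the perceived (manner, voicing) pattern.
    None in a slot means that feature was not recovered by the listener."""
    def matches(word_feats):
        return all((pm is None or pm == wm) and (pv is None or pv == wv)
                   for (pm, pv), (wm, wv) in zip(perceived, word_feats))
    return [word for word, feats in LEXICON.items() if matches(feats)]

# Voicing alone on the first consonant removes the voiceless candidates:
print(candidates([(None, "voiced"), (None, None)]))    # ['bat', 'mat', 'vat']
# Adding a correct manner decision narrows the set to a single word:
print(candidates([("stop", "voiced"), (None, None)]))  # ['bat']
```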
The value of signals with zero intelligibility
As shown in Figure 3 (top), the 125 Hz and 250 Hz low-pass signals, when presented for identification in isolation, had zero intelligibility. Yet these signals, when added to electric stimulation, produced more than a 20-percentage-point gain in intelligibility. Others have also reported similar gains in the EAS condition for acoustic signals which, in isolation, have no intelligibility (e.g. von Ilberg et al, 1999; Kong et al, 2005; Chang et al, 2006). The important observation here is that we cannot assume that an ear with little or no speech understanding is a good ear to implant. That ear may contribute more to speech understanding when used in conjunction with a single implant than when fitted with a second cochlear implant. We will return to this point in the section on bilateral cochlear implants.
Sentence recognition in noise
Figure 3 (bottom) shows performance as a function of stimulus condition for sentences presented at +10 dB SNR. There was an improvement of 45 percentage points in sentence recognition when the full-bandwidth signal presented to the non-implanted ear was added to electric stimulation from the CI. There was an improvement of 30 percentage points when the 125 Hz low-pass signal was added to the CI signal. The scores in the full-bandwidth and 125 Hz low-pass conditions were significantly different. Thus, for sentences in noise, information contained in the bands above the 125 Hz low-pass band was used for recognition. This is in contrast to the case of monosyllabic word recognition, where the EAS effect could be statistically isolated to the 125 Hz low-pass filtered band.
Voicing as a landmark for lexical segmentation in noise
Li & Loizou (2008) propose that speech recognition in noise is facilitated when listeners have access to robust, low-frequency acoustic landmarks (Stevens, 2002), such as the onset of voicing, that mark syllable structure and word boundaries.
In a recent study, Spitzer et al (2009) analysed lexical (word) boundary errors from normal-hearing subjects listening to noise-band simulations of EAS and from EAS patients. At issue was whether adding the acoustic signal to the electric signal better defines word onsets for EAS patients. In English, words are more likely to begin with a stressed, or strong, syllable than a weak syllable (Cutler & Carter, 1987). Strong syllables in English are characterized acoustically by relatively high intensity, long duration, and pitch change. Strong and weak syllables also differ in vowel quality, e.g. vowels tend to be reduced towards schwa in weak syllables. If the acoustic signal better defines strong vs. weak syllables, then lexical boundary errors should be reduced in EAS conditions vs. electric-only conditions. This is, in fact, the case. Spitzer et al (2009) reported fewer lexical boundary errors in the EAS conditions than in the electric only conditions. Thus, it appears to be the case that, when segmental information is reduced, the acoustic signal aids the recognition of strong and weak syllables which, in turn, leads to better recognition of word boundaries in continuous speech.
A converging experiment on the information that is sufficient to produce large increments in speech understanding in noise with EAS
As noted above, the 125 Hz low-pass signal in Zhang et al (2010) contained the first harmonic (F0) of the glottal source and a much-attenuated second harmonic. Brown and Bacon (2009) created a signal with only the F0 component for use with EAS patients. This was accomplished by processing sentences off-line, extracting the F0, and then synthesizing a sine wave that tracked the amplitude and frequency modulation of the original signal. Figure 4 shows the performance of EAS patients for sentence recognition in noise using (1) the CI alone, (2) the CI plus the original (wideband) acoustic signal presented to the non-implanted ear, and (3) the CI plus the AM/FM sine wave presented to the non-implanted ear. The addition of the wideband signal allowed an improvement of 55 percentage points over the CI-alone condition. The addition of the AM/FM sine wave allowed a 45-percentage-point improvement. The scores in the CI plus wideband and CI plus AM/FM sine wave conditions did not differ significantly. Thus, the sine wave stimulus contained all of the acoustic information that allowed the large increase in intelligibility in the EAS condition. This finding is consistent with the view that voicing is one of the principal landmarks detected in noise (see also Kong & Carlyon, 2007).
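A minimal sketch of this style of processing is given below: track the F0 and amplitude envelope of a recorded sentence off-line, then synthesize a sine wave carrying only those two cues. The use of the pYIN tracker from the librosa library, the frame sizes, and the handling of unvoiced frames are our assumptions for illustration, not the implementation of Brown and Bacon (2009).

```python
# Sketch: synthesize a sine wave that tracks the amplitude and F0 of a signal.
import numpy as np
import librosa

def f0_modulated_sine(y, sr, fmin=75.0, fmax=300.0, frame_length=2048):
    hop = frame_length // 4
    # Frame-level F0 track (NaN in unvoiced frames) and RMS amplitude envelope.
    f0, voiced, _ = librosa.pyin(y, fmin=fmin, fmax=fmax, sr=sr,
                                 frame_length=frame_length, hop_length=hop)
    rms = librosa.feature.rms(y=y, frame_length=frame_length, hop_length=hop)[0]
    # Interpolate both tracks to one value per sample.
    samples = np.arange(len(y))
    f0_track = np.nan_to_num(np.where(voiced, f0, 0.0))
    f0_samp = np.interp(samples, np.arange(len(f0)) * hop, f0_track)
    amp_samp = np.interp(samples, np.arange(len(rms)) * hop, rms)
    # Integrate the instantaneous frequency to obtain the FM phase.
    phase = 2 * np.pi * np.cumsum(f0_samp) / sr
    tone = np.sin(phase)
    tone[f0_samp <= 0] = 0.0        # silence unvoiced and untracked spans
    return amp_samp * tone

# Hypothetical usage (the filename is a placeholder):
# y, sr = librosa.load("sentence.wav", sr=None)
# sine = f0_modulated_sine(y, sr)
```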
Figure 4.
Sentence recognition in noise for EAS patients in an E-alone condition and in two EAS conditions. In one EAS condition (E+WB), the acoustic signal was wideband. In the other EAS condition (E+Sine), the acoustic signal was an amplitude and frequency modulated sine wave that tracked the F0 of the original sentence. Error bars indicate +1 standard deviation. Adapted from Brown and Bacon (2009) with permission.
Other CI patients who hear with two ears: bilateral CIs
We turn now to a consideration of patients who hear with the aid of two cochlear implants. These patients receive the same type of signal from both implants: a signal with relatively low spatial resolution due to the monopolar stimulation delivered to a limited number of electrode contacts (e.g. Kral et al, 1998; Middlebrooks & Bierer, 2002; Henry et al, 2005; Snyder et al, 2008). Any increase in performance with bilateral stimulation vs. unilateral stimulation, when signals are presented from a single loudspeaker, likely comes from diotic summation (or binaural redundancy). Diotic summation for CIs is a relatively small effect, less than 10 percentage points (e.g. Litovsky et al, 2006; Buss et al, 2008). Another possibility for an increase in performance is that, due to differences in electrode placement, current spread, and neural survival, the two ears receive different, and complementary, information from the two implants (Wilson & Dorman, 2009). There is, however, only very limited evidence for this possibility (Lawson et al, 1999).
EAS patients receive different types of information from the CI and from the ear with low-frequency residual hearing (e.g. Ching et al, 2007). The CI provides a signal with low spectral resolution that is complemented by an acoustic signal with relatively better resolution at low frequencies. It is reasonable to hypothesize that the novel information provided by low-frequency acoustic hearing would provide more benefit than the redundant information added by a second implant. For CNC word recognition, this is the case. Figure 5 shows data for 82 bilateral CI patients and for 25 EAS patients on a test of CNC word recognition. The data for the bilateral CI patients were collated from the clinical trials of the Advanced Bionics Corporation (Koch et al, in press), Cochlear Corporation (Litovsky et al, 2006), and Med-El Corporation (Buss et al, 2008). For the EAS patients, the mean unaided thresholds at 125, 250, 500, 1000, 2000, and 4000 Hz in the non-implanted ear were 37, 43, 58, 82, 96, and 101 dB HL.
Figure 5.
CNC word recognition by bilateral CI patients (n = 82) and EAS patients (n = 25). The mean score for each group is indicated by a horizontal line.
The distributions of scores for the two groups are very different, and the mean scores differ significantly (EAS = 74% correct; bilateral CI = 61% correct; t(105) = 2.91, p = 0.0043). Thus, as would be expected from an analysis of the information provided by a CI and by low-frequency acoustic stimulation, EAS patients outperform bilateral CI patients when the test material is CNC words in quiet.
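As a check on the reported statistic, the two-tailed p-value can be recovered from the t value and degrees of freedom given above:

```python
# Recover the two-tailed p-value from the reported t(105) = 2.91.
from scipy.stats import t
print(2 * t.sf(2.91, 105))   # ~0.0044, consistent with the reported p = 0.0043
```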
An analysis of the different types of information available to EAS and bilateral CI patients provides a rational account of the difference in performance between the two groups. However, the two groups differed in pre-operative hearing sensitivity and in speech perception abilities, and thus had different ‘starting points’ for improvement with the two interventions. Audiometric thresholds at 500 Hz for the ‘better ear’ of bilateral patients ranged from 70 dB HL to no response (Litovsky et al, 2006; Buss et al, 2008). In contrast, the mean threshold at 500 Hz for the acoustically stimulated ear of EAS patients was 58 dB HL. In terms of preoperative speech perception, the bilateral patients showed a mean preoperative CNC word recognition score of 3% correct (e.g. Litovsky et al, 2006). The mean preoperative CNC score for the EAS patients was 27% correct, with a range of 2 to 64% correct (Dorman et al, 2009). Given these data, it is reasonable to suggest that, if the ‘second’ ear of bilateral patients had the same auditory capabilities as the acoustically stimulated ear of the EAS patients, then the bilateral CI scores would have been higher. However, we find no support for this suggestion. There are no reports of a strong positive correlation between either the preoperative audiogram or preoperative speech perception and postoperative speech understanding (e.g. Rubinstein et al, 1999; Ching et al, 2007; Gifford et al, 2007). In the absence of such correlations, the difference in performance between the groups is best attributed to the differences in information available to the patients from the two interventions. For a comprehensive review of bilateral and EAS (bimodal) performance, see Ching et al (2007).
In the experiment described above, the signals were presented from a single loudspeaker. This test environment minimizes the value of two cochlear implants because only diotic summation can play a role in increasing speech understanding for the bilateral CI patients. To realize other potential bilateral benefits for speech understanding, such as squelch and head shadow, signals must be presented from multiple, spatially separated sources—as is more typically encountered in the real world.
We have tested sentence understanding in noise for unilateral CI patients (n = 25), bilateral CI patients (n = 15), and EAS patients (n = 35) with the noise presented from eight loudspeakers (R-SPACE™) surrounding the listener. The results presented here are for a superset of the patients described in Gifford et al (2010). For these patients, the mean thresholds in the non-implanted ear were 51, 62, 74, 91, 102, and 109 dB HL at 125, 250, 500, 1000, 2000, and 4000 Hz, respectively. The aim was to assess speech understanding in a realistic ‘restaurant’ environment in which a high level of noise (72 dB SPL) surrounds the listener (Compton-Conley et al, 2004). The level of the HINT sentences (Nilsson et al, 1994) was varied adaptively and speech reception thresholds (SRTs) were obtained. The results are shown in Figure 6. A one-way analysis of variance indicated a significant difference in level of performance as a function of group [F(2,74) = 4.885, p = 0.010]. Post hoc comparisons revealed that (1) there was a significant difference (3.7 dB) between the unilateral and bilateral groups (p = 0.017); (2) there was a significant difference (2.62 dB) between the unilateral and EAS groups (p = 0.03); and (3) there was not a significant difference (1.21 dB) between the bilateral and EAS groups.
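The general logic of such an adaptive track can be sketched as follows (the 1-down/1-up rule, 2 dB step size, and scoring in this sketch are our assumptions for illustration, not the procedure used by Gifford et al, 2010):

```python
# Sketch: a simple adaptive track for a speech reception threshold (SRT).
# The noise stays fixed at 72 dB SPL while the sentence level varies.
import random

def adaptive_srt(present_sentence, n_trials=20, start_snr=10.0, step=2.0,
                 noise_level=72.0):
    """present_sentence(level_db) -> True if the sentence was repeated
    correctly. Returns the mean SNR over the later trials as the SRT."""
    snr, track = start_snr, []
    for _ in range(n_trials):
        correct = present_sentence(noise_level + snr)
        track.append(snr)
        snr += -step if correct else step  # harder after a hit, easier after a miss
    return sum(track[-10:]) / 10

# Simulated listener whose true SRT is +5 dB SNR, with trial-to-trial noise:
listener = lambda level_db: (level_db - 72.0) > 5.0 + random.gauss(0.0, 1.0)
print(f"Estimated SRT: {adaptive_srt(listener):.1f} dB SNR")
```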
Figure 6.
Threshold (dB SNR) for unilateral, bilateral, and EAS (bimodal) patients tested in the R-SPACE™ environment. Error bars indicate +1 standard deviation.
The finding of a similar level of performance for the EAS and bilateral CI patients in a realistic high-noise environment stands in contrast to the large advantage in speech understanding found for EAS patients relative to bilateral patients when the stimulus material (CNC words) was presented from a single loudspeaker. It is reasonable to suppose that the relative improvement in the performance of the bilateral CI patients was due to the availability of both squelch and summation cues in the R-SPACE™ environment (see Ricketts et al, 2006).
Other CI patients who hear with two ears: hearing preservation patients
In one of the newest applications of cochlear implantation, surgeons implant an electrode array in one cochlea with the intention of preserving hearing in the frequency region apical to the tip of the array (e.g. von Ilberg et al, 1999; Gantz & Turner, 2004; Kiefer et al, 2005). When the surgery is successful, the patients receive electric stimulation from one cochlea and low-frequency acoustic stimulation from both the implanted and the non-implanted cochlea. These patients should show the benefits expected from adding acoustic information to electric stimulation, and should also benefit from access to low-frequency binaural cues processed by two ears with relatively good spectral selectivity.
Figure 7 (bottom) shows the audiograms for the implanted and non-implanted ears of a group of hearing preservation patients (n = 8) tested at the Mayo Clinic by the second author (again, a superset of the patients described by Gifford et al, 2010). Both pre- and post-operative audiograms are shown for the ear that received the implant. Figure 7 (top) shows the performance of the patients in the R-SPACE™ environment in two conditions. In one, labeled EAS bimodal, the implanted ear was plugged and muffed to eliminate use of the preserved hearing in that ear; in this condition the patients functioned like conventional EAS (bimodal) patients. In the other condition, EAS combined, the plug and muff were removed and the patient had access to the acoustic stimulation from both partially hearing ears as well as to the stimulation provided by the CI. A t-test indicated a significant difference between the two conditions (bimodal mean = 9.09 dB SNR, combined mean = 6.21 dB SNR; t(7) = 6.97, p < 0.0002).
Figure 7.
Top: threshold (dB SNR) for hearing preservation patients (n = 8) with bimodal and combined stimulation. Bottom left: mean pre- and post-implant audiograms for the implanted ear. Bottom right: mean audiogram for the ear contralateral to the implant. Error bars indicate ±1 standard deviation.
In the combined condition, when the implanted ear with preserved hearing was allowed to participate in recognition, there was a significant 2.9 dB improvement in the SNR relative to the bimodal condition. Given that each 1 dB improvement in SNR can yield an 8–15 percentage point improvement in speech recognition (e.g. Plomp & Mimpen, 1979; Nilsson et al, 1994; Wouters et al, 1994; Litovsky et al, 2006), a 2.9 dB improvement corresponds, roughly, to a gain of 23 to 44 percentage points; this represents substantially improved performance in noise. Thus, access to low-frequency acoustic cues processed by two ears is very useful when listening in noise. Of the four types of patients we have tested in the realistic restaurant-type environment (unilateral CI, bilateral CI, EAS bimodal, and hearing preservation), the hearing preservation patients show the highest level of performance. These data argue for an attempt to preserve hearing in all patients who qualify for a cochlear implant.
Acknowledgments
The research reported here was supported by grant R01-DC-00654-20 from the NIDCD to the first author. The first author serves as a consultant to Advanced Bionics Corporation and to Cochlear Corporation. We thank David Schramm, M.D. and Elizabeth Fitzpatrick, Ph.D. for sharing the data shown in Table 1. We thank Ruth Litovsky, Aaron Parkinson, Dawn Koch, and Emily Buss for sharing the data used in Figure 5.
Abbreviations
- CI
Cochlear implant
- CNC
Consonant nucleus consonant
- EAS
Electric and acoustic stimulation
- HL
Hearing level
- n
Sample size
Footnotes
Declaration of interest: The authors report no conflicts of interest. The authors alone are responsible for the content and writing of the paper.
References
- Armstrong M, Pegg P, James C, Blamey P. Speech perception in noise with implant and hearing aid. Am J Otol. 1997;18:S140–S141.
- Balkany T, Hodges A, Menapace C, Hazard L, Driscoll C, et al. Nucleus Freedom North American clinical trial. Otolaryngol Head Neck Surg. 2007;136:757–762. doi: 10.1016/j.otohns.2007.01.006.
- Bassim MK, Buss E, Clark MS, Kolln KA, Pillsbury CH, et al. MED-EL Combi40+ cochlear implantation in adults. Laryngoscope. 2005;115:1568–1573. doi: 10.1097/01.mlg.0000171023.72680.95.
- Brown C, Bacon S. Achieving electric-acoustic benefit with a modulated tone. Ear Hear. 2009;30(5):489–493. doi: 10.1097/AUD.0b013e3181ab2b87.
- Buss E, Pillsbury HC, Buchman CA, Pillsbury CH, Clark MS, et al. Multicenter U.S. bilateral MED-EL cochlear implantation study: Speech perception over the first year of use. Ear Hear. 2008;29:20–32. doi: 10.1097/AUD.0b013e31815d7467.
- Chang JE, Bai JY, Zeng FG. Unintelligible low-frequency sound enhances simulated cochlear-implant speech recognition in noise. IEEE Trans Biomed Eng. 2006;53:2598–2601. doi: 10.1109/TBME.2006.883793.
- Ching TY. The evidence calls for making binaural-bimodal fitting routine. Hear J. 2005;58:32–41.
- Ching TY, Van Wanrooy E, Dillon H. Binaural-bimodal fitting or bilateral implantation for managing severe to profound deafness: A review. Trends Amplif. 2007;11:161–192. doi: 10.1177/1084713807304357.
- Ching TYC, Incerti P, Hill M. Binaural benefits for adults who use hearing aids and cochlear implants in opposite ears. Ear Hear. 2004;25:9–21. doi: 10.1097/01.AUD.0000111261.84611.C8.
- Compton-Conley CL, Neuman AC, Killion MC, Levitt H. Performance of directional microphones for hearing aids: Real-world versus simulation. J Am Acad Audiol. 2004;15:440–455. doi: 10.3766/jaaa.15.6.5.
- Cutler A, Carter DM. The predominance of strong initial syllables in the English vocabulary. Comp Speech Lang. 1987;2:133–142.
- Dorman M, Gifford R, Spahr A, McKarns S. The benefits of combining acoustic and electric stimulation for the recognition of speech, voice, and melodies. Audiol Neurootol. 2008;13:105–112. doi: 10.1159/000111782.
- Dorman MF, Gifford R, Lewis K, McKarns S, Ratigan J, et al. Word recognition following implantation of conventional and 10-mm hybrid electrodes. Audiol Neurootol. 2009;14:181–189. doi: 10.1159/000171480.
- Faulkner A, Rosen S. Contributions of temporal encodings of voicing, voicelessness, fundamental frequency, and amplitude variation to audiovisual and auditory speech perception. J Acoust Soc Am. 1999;106:2063–2073. doi: 10.1121/1.427951.
- Firszt JB, Holden LK, Skinner MW, Tobey EA, Peterson A, et al. Recognition of speech presented at soft to loud levels by adult cochlear implant recipients of three cochlear implant systems. Ear Hear. 2004;25:375–387. doi: 10.1097/01.aud.0000134552.22205.ee.
- Gantz BJ, Turner C. Combining acoustic and electrical speech processing: Iowa/Nucleus hybrid implant. Acta Otolaryngol. 2004;124:344–347. doi: 10.1080/00016480410016423.
- Gifford R, Dorman M, Shallop J, Sydlowski S. Evidence for the expansion of adult cochlear implant candidacy. Ear Hear. 2010 Jan 12. doi: 10.1097/AUD.0b013e3181c6b831. Epub ahead of print.
- Gifford R, Shallop J, Peterson A. Speech recognition materials and ceiling effects: Considerations for cochlear implant programs. Audiol Neurootol. 2008;13:193–205. doi: 10.1159/000113510.
- Gifford RH, Dorman MF, McKarns SA, Spahr AJ. Combined electric and contralateral acoustic hearing: Word and sentence recognition with bimodal hearing. J Speech Lang Hear Res. 2007;50:835–843. doi: 10.1044/1092-4388(2007/058).
- Gstoettner W, Kiefer J, Baumgartner WD, Pok S, Peters S, et al. Hearing preservation in cochlear implantation for electric acoustic stimulation. Acta Otolaryngol. 2004;124:348–352. doi: 10.1080/00016480410016432.
- Hamzavi J, Pok S, Gstoettner W, Baumgartner WD. Speech perception with a cochlear implant used in conjunction with a hearing aid in the opposite ear. Int J Audiol. 2004;43:61–65.
- Henry A, Turner C, Behrens A. Spectral peak resolution and speech recognition in quiet: Normal-hearing, hearing-impaired, and cochlear implant listeners. J Acoust Soc Am. 2005;118:1111–1121. doi: 10.1121/1.1944567.
- Kiefer J, Gstoettner W, Baumgartner W, Pok SM, Tillein J, et al. Conservation of low-frequency hearing in cochlear implantation. Acta Otolaryngol. 2004;124:272–280. doi: 10.1080/00016480310000755a.
- Kiefer J, Pok M, Adunka O, Sturzebecher E, Baumgartner W, et al. Combined electric and acoustic stimulation of the auditory system: Results of a clinical study. Audiol Neurootol. 2005;10:134–144. doi: 10.1159/000084023.
- Kong Y, Carlyon R. Improved speech recognition in noise in simulated binaurally combined acoustic and electric stimulation. J Acoust Soc Am. 2007;121:3717–3727. doi: 10.1121/1.2717408.
- Kong YY, Stickney GS, Zeng FG. Speech and melody recognition in binaurally combined acoustic and electric hearing. J Acoust Soc Am. 2005;117:1351–1361. doi: 10.1121/1.1857526.
- Kral A, Hartmann R, Mortazavi D, Klinke R. Spatial resolution of cochlear implants: The electrical field and excitation of auditory afferents. Hear Res. 1998;121:11–28. doi: 10.1016/s0378-5955(98)00061-6.
- Lawson DT, Wilson B, Zerbi M, Finley C. Speech processors for auditory prostheses. 4th Quarterly Progress Report, NIH N01DC-8-2105. 1999:1–27. Available at: http://www.rti.org/capr/caprqprs.html.
- Li N, Loizou PC. The contribution of obstruent consonants and acoustic landmarks to speech recognition in noise. J Acoust Soc Am. 2008;124:3947. doi: 10.1121/1.2997435.
- Liberman AM. Speech: A Special Code. Cambridge, USA: The MIT Press; 1996.
- Litovsky R, Parkinson A, Arcaroli J, Sammeth C. Simultaneous bilateral cochlear implantation in adults: A multicenter clinical study. Ear Hear. 2006;27:714–731. doi: 10.1097/01.aud.0000246816.50820.42.
- Middlebrooks JC, Bierer JA. Auditory cortical images of cochlear-implant stimuli: Coding of stimulus channel and current level. J Neurophysiol. 2002;87:493–507. doi: 10.1152/jn.00211.2001.
- Mok M, Grayden D, Dowell RC, Lawrence D. Speech perception for adults who use hearing aids in conjunction with cochlear implants in opposite ears. J Speech Lang Hear Res. 2006;49:338–351. doi: 10.1044/1092-4388(2006/027).
- Nilsson M, Soli SD, Sullivan JA. Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise. J Acoust Soc Am. 1994;95:1085–1099. doi: 10.1121/1.408469.
- Oxenham A, Bacon S. Psychophysical manifestations of compression: Normal-hearing listeners. In: Bacon SP, Fay RR, Popper AN, editors. Compression: From Cochlea to Cochlear Implants. New York: Springer-Verlag; 2004.
- Peterson GE, Lehiste I. Revised CNC lists for auditory tests. J Speech Hear Disord. 1962;27:62–70. doi: 10.1044/jshd.2701.62.
- Plomp R, Mimpen AM. Improving the reliability of testing the speech reception threshold for sentences. Audiology. 1979;18:43–52. doi: 10.3109/00206097909072618.
- Ricketts TA, Grantham DW, Ashmead DH, Haynes DS, Labadie RF. Speech recognition for unilateral and bilateral cochlear implant modes in the presence of uncorrelated noise sources. Ear Hear. 2006;27:763–773. doi: 10.1097/01.aud.0000240814.27151.b9.
- Shallop J, Arndt P, Turnacliff K. Expanded indications for cochlear implantation: Perceptual results in seven adults with residual hearing. J Speech Lang Pathol Audiol. 1992;16:141–148.
- Shannon RV, Zeng FG, Kamath V, Wygonski J, Ekelid M. Speech recognition with primarily temporal cues. Science. 1995;270:303–304. doi: 10.1126/science.270.5234.303.
- Snyder RL, Middlebrooks JC, Bonham BH. Cochlear implant electrode configuration effects on activation threshold and tonotopic selectivity. Hear Res. 2008;235:23–38. doi: 10.1016/j.heares.2007.09.013.
- Spahr AJ, Dorman MF, Loiselle LH. Performance of patients using different cochlear implant systems: Effects of input dynamic range. Ear Hear. 2007;28:260–275. doi: 10.1097/AUD.0b013e3180312607.
- Spitzer S, Liss J, Spahr T, Dorman M, Lansford K. The use of fundamental frequency for lexical segmentation in listeners with cochlear implants. J Acoust Soc Am. 2009;125:EL236–EL241. doi: 10.1121/1.3129304.
- Stevens KN. Toward a model for lexical access based on acoustic landmarks and distinctive features. J Acoust Soc Am. 2002;111:1872–1891. doi: 10.1121/1.1458026.
- Turner CW, Gantz BJ, Vidal C, Behrens A, Henry BA. Speech recognition in noise for cochlear implant listeners: Benefits of residual acoustic hearing. J Acoust Soc Am. 2004;115:1729–1735. doi: 10.1121/1.1687425.
- Tyler RS, Parkinson AJ, Wilson BS, Witt S, Preece JP, et al. Patients utilizing a hearing aid and a cochlear implant: Speech perception and localization. Ear Hear. 2002;23:98–105. doi: 10.1097/00003446-200204000-00003.
- von Ilberg C, Kiefer J, Tillein J, Pfenningdorff T, Hartmann R. Electric-acoustic stimulation of the auditory system: New technology for severe hearing loss. ORL J Otorhinolaryngol Relat Spec. 1999;61:334–340. doi: 10.1159/000027695.
- Wilson B, Dorman M. The design of cochlear implants. In: Niparko JK, editor. Cochlear Implants: Principles and Practices. Philadelphia: Lippincott; 2009. pp. 95–136.
- Wouters J, Damman W, Bosman AJ. Vlaamse opname van woordenlijsten voor spraakaudiometrie [Flemish recordings of word lists for speech audiometry]. Logopedie. 1994;7:28–33.
- Zeitler DM, Kessler MA, Terushkin V, Roland TJ Jr, Svirsky MA, et al. Speech perception benefits of sequential bilateral cochlear implantation in children and adults: A retrospective analysis. Otol Neurotol. 2008;29:314–325. doi: 10.1097/mao.0b013e3181662cb5.
- Zhang T, Spahr A, Dorman M. Information from the voice fundamental frequency (F0) accounts for the majority of the benefit when acoustic stimulation is added to electric stimulation. Ear Hear. 2010;31(1):63–69. doi: 10.1097/aud.0b013e3181b7190c.
- Zue V. The use of speech knowledge in speech recognition. Proc IEEE. 1985;73:1602–1615.