Published in final edited form as: Otol Neurotol. 2021 Jan;42(1):197–202. doi: 10.1097/MAO.0000000000002965

Effectiveness of Place-Based Mapping in Electric-Acoustic Stimulation Devices

Margaret T Dillon 1, Michael W Canfarotta 1, Emily Buss 1, Joseph Hopfinger 2, Brendan P O’Connell 1
PMCID: PMC8787166  NIHMSID: NIHMS1769174  PMID: 33885267

Abstract

Background:

The default mapping procedure for electric-acoustic stimulation (EAS) devices uses the cochlear implant (CI) recipient’s unaided detection thresholds in the implanted ear to derive the acoustic settings and assign the lowest frequency filter of electric stimulation. Individual differences for speech recognition with EAS may be due to discrepancies between the electric frequency filters of individual electrode contacts and the cochlear place of stimulation, known as a frequency-to-place mismatch. Frequency-to-place mismatch of greater than ½ octave has been demonstrated in up to 60% of EAS users. Aligning the electric frequency filters via a place-based mapping procedure using postoperative imaging may improve speech recognition with EAS.

Methods:

Masked sentence recognition was evaluated for normal-hearing subjects (n=17) listening with vocoder simulations of EAS, using a place-based map and a default map. Simulation parameters were based on audiometric and imaging data from a representative 24-mm electrode array recipient and EAS user. The place-based map aligned electric frequency filters with the cochlear place frequency, which introduced a gap between the simulated acoustic and electric output. The default map settings were derived from the clinical programming software and provided the full speech frequency range.

Results:

Masked sentence recognition was significantly better for simulated EAS with the place-based map as compared to the default map.

Conclusion:

The simulated EAS place-based map supported better performance than the simulated EAS default map. This indicates that individualizing maps may improve performance in EAS users by helping them reach better asymptotic performance earlier and by mitigating the need for acclimatization.

Introduction

Cochlear implant (CI) recipients with hearing preservation in the implanted ear demonstrate significantly improved speech recognition when listening with electric-acoustic stimulation (EAS) as compared to electric stimulation alone1-5. While listeners demonstrate the ability to benefit from the combination of acoustic and electric information in the same ear, performance with EAS remains variable4,5. One possible explanation for these individual differences in performance is the discrepancy between electric frequency filters for individual electrode contacts and the natural cochlear place frequency, resulting in frequency-to-place mismatch6. If place of stimulation relative to tonotopic pitch affects outcomes with EAS, then individualizing the electric frequency filters with a place-based mapping procedure using postoperative imaging should improve speech recognition.

An EAS device divides the incoming sound stimulus into acoustic and electric output based on the CI recipient’s unaided detection thresholds in the implanted ear. The crossover frequency is defined as the frequency at which unaided detection thresholds in the implanted ear exceed a criterion level (e.g., 70 dB HL, see Gifford et al.7), above which acoustic stimulation is less effective. To minimize frequency information gaps, speech information above the crossover frequency is presented electrically and distributed logarithmically across active electrode contacts irrespective of their intracochlear location. Karsten et al.8 demonstrated that this EAS mapping approach supports better speech recognition compared to procedures that present the same speech information both acoustically and electrically, or procedures that leave a gap between the highest frequency associated with acoustic stimulation and the lowest frequency associated with electric stimulation.
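For illustration only, the following minimal Python sketch applies the criterion-level idea described above to the representative recipient's unaided thresholds reported in the Methods. The function name and the simple "first frequency exceeding the criterion" rule are assumptions for this sketch; the clinical programming software applies its own (proprietary) rules, and for this recipient it assigned an electric lower edge of 250 Hz (see Methods).

```python
# Sketch: choosing an acoustic/electric crossover frequency from unaided
# detection thresholds, using a criterion level (e.g., 70 dB HL) above
# which acoustic amplification is assumed to be ineffective.
# Illustrative only; not the clinical software's actual algorithm.

def crossover_frequency(audiogram, criterion_db_hl=70):
    """Return the lowest audiogram frequency whose unaided threshold
    exceeds the criterion level."""
    for freq_hz, threshold_db_hl in sorted(audiogram.items()):
        if threshold_db_hl > criterion_db_hl:
            return freq_hz
    return None  # hearing remains aidable across the measured range

# Representative recipient (see Methods): 50, 60, 80, 90, 90 dB HL
audiogram = {125: 50, 250: 60, 500: 80, 750: 90, 1000: 90}
print(crossover_frequency(audiogram))  # -> 500
```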

Use of the CI recipient’s unaided detection thresholds allows for some degree of individualized EAS mapping which provides an effective representation of the acoustic information9,10. However, current procedures do not account for patient-specific variability in electrode array location. Given the wide variability in angular insertion depth (AID) for electrode arrays across CI recipients and the unpredictable nature of threshold shifts postoperatively, it is not surprising that 60% of all EAS users mapped with default procedures experience a frequency-to-place mismatch of at least ½ octave6. To further illustrate the relationship between mismatch and residual hearing, Figure 1 plots frequency-to-place mismatch across electrode contacts for an average 24 mm array recipient (see Canfarotta et al., 20206) as a function of AID, for different crossover frequencies determined by the default mapping procedure given varying amounts of hearing preservation. This demonstrates that if hearing is preserved out to 500 Hz, default mapping procedures result in relatively little mismatch on apical channels. However, if residual hearing is only present out to 125 or 250 Hz, a basal spectral shift ranging between 1-2 octaves on apical channels is present. Conversely, if hearing is preserved out to 750 Hz, an apical spectral shift of approximately ½ octave is introduced.
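The quantity plotted in Figure 1 can be made concrete with a short sketch. The sign convention used here (positive values indicate a filter frequency below the cochlear place frequency, i.e., a basal shift of speech information) is an assumption for illustration; the example values come from Table 1.

```python
import math

# Sketch: frequency-to-place mismatch in semitones between a channel's
# electric filter center frequency and the spiral ganglion place
# frequency of its electrode contact (the quantity plotted in Figure 1).

def mismatch_semitones(filter_center_hz, place_frequency_hz):
    return 12 * math.log2(place_frequency_hz / filter_center_hz)

# Channel 1 of the representative recipient with the default map (Table 1):
# filter center 293 Hz at a 697-Hz place frequency.
print(round(mismatch_semitones(293, 697), 1))  # ~15 semitones, i.e., >1 octave
```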

Figure 1:

Frequency-to-place mismatch in semitones across electrode contacts for an average 24 mm array recipient (see Canfarotta et al., 2020) as a function of angular insertion depth. For EAS devices, filled symbols depict the frequency-to-place relationship for different crossover frequencies (125, 250, 500, & 750 Hz) determined by the default mapping procedure given varying amounts of hearing preservation. For a CI-alone device, open symbols depict the frequency-to-place relationship when mapped with the default procedure (full speech frequency range provided electrically). The zero reference line designates the average spiral ganglion frequency-to-place function as described by Stakhovskaya et al (2007).

The spectral shifts associated with frequency-to-place mismatch have been shown to negatively influence vowel recognition for normal-hearing subjects listening to vocoder simulations of EAS11. However, the impact of spectral shifts on sentence recognition, which provides richer context cues more similar to those encountered in natural communication, has not previously been explored in EAS simulations. While CI-alone recipients can acclimate to frequency-to-place mismatches, the shift in perception may take months to years, and some users may never fully accommodate mismatches12-15. It is therefore possible that the need to acclimate to a frequency-to-place mismatch slows and/or limits speech recognition growth with EAS.

As an alternative to the default mapping procedure, implementing a place-based mapping procedure individualizes the electric frequency filters of specific channels to match the respective cochlear place frequency. Initial investigations in CI-alone users or simulations of CI-alone devices demonstrated better speech recognition with maps that approximately matched the cochlear place frequency, based on linear insertion depth of the electrode array, as opposed to maps that presented spectrally-shifted information16-21. More recently, use of intraoperative and postoperative imaging has supported more accurate estimates of AID and the cochlear place frequency associated with individual electrode contacts22-24. The place-based mapping procedure used in the present report adjusts the electric frequency filters of low- and mid-frequency channels to align the electrically represented speech information with the cochlear place frequency. This procedure aims to eliminate the frequency-to-place mismatch at cochlear frequency regions important for speech recognition, and thus, may improve initial speech recognition with EAS.

A consideration of the present place-based mapping procedure is that it does not attempt to correct for potential gaps in frequency information provided by the acoustic and electric outputs. A frequency gap would occur when the most apical electrode contact is positioned basal to the cochlear place of the crossover frequency. EAS users experience poorer speech recognition when listening to acoustic and electric settings with a frequency gap as compared to the full speech frequency range8. It is therefore possible that better speech recognition may be observed with an EAS default map as compared to an EAS place-based map that creates a frequency gap.

The present report assessed these competing predictions: an EAS place-based map may improve initial speech recognition by limiting the need to acclimate to a frequency-to-place mismatch, yet it could reduce performance if it creates a frequency gap. One challenge to evaluating the effects of different mapping procedures in EAS users is the individual variability in residual hearing, survival of neurons available for electrical stimulation, electrode array position, and listening experience. Considering this, the performance of normal-hearing subjects listening to simulations of an EAS device with a place-based map versus a default map was compared on a masked sentence recognition task.

MATERIALS AND METHODS

Normal-hearing listeners participated in a masked sentence recognition task while listening to a simulation of an EAS device mapped with either the place-based or the default electric frequency filters. The study procedures were approved by the study site Institutional Review Board. Subjects provided consent to participate and received one hour of research participation credit toward their Introductory Psychology course requirements.

Subjects

Seventeen normal-hearing young adults (15 female) between 18 and 25 years of age (mean: 20 years; SD: 2 years) participated. Subjects passed a hearing screening from 0.125 to 20 kHz and had not previously participated in hearing research studies.

Stimuli

A noise vocoder was used to simulate the two conditions: 1) EAS with the default map, and 2) EAS with a place-based map. Stimuli were AzBio sentences in a 10-talker masker25. The acoustic component of EAS was simulated by a finite impulse response (FIR) filter that shaped the output to match the aided sound field thresholds of a representative EAS user, as described below. Electric stimulation was simulated with a bank of 12 bandpass FIR filters, each with 5-Hz resolution, and center frequencies corresponding to either place-based or default map parameters. The Hilbert envelope was extracted from the output of each filter, lowpass filtered at 300 Hz with a 4th-order Butterworth filter, and used to amplitude modulate a corresponding noise band, filtered according to the cochlear place frequencies.
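As a rough sketch of the electric portion of this vocoder (the acoustic, threshold-shaping portion is not shown), the following Python code bandpass-filters the input into analysis bands, extracts and smooths the Hilbert envelope, and modulates noise carriers centered at the cochlear place frequencies. The filter order, bandwidths, and band edges are illustrative assumptions, not the study's exact parameters; analysis_centers_hz would be the default or place-based centers from Table 1, carrier_centers_hz the cochlear place frequencies, and the sampling rate must be high enough (e.g., 44.1 kHz) to carry the highest place frequency.

```python
import numpy as np
from scipy.signal import butter, filtfilt, firwin, hilbert, lfilter

def vocode_electric(signal, fs, analysis_centers_hz, carrier_centers_hz,
                    bandwidth_octaves=0.5, numtaps=513):
    """Noise-vocode 'signal': per channel, extract the Hilbert envelope of an
    analysis band, lowpass it at 300 Hz (4th-order Butterworth), and use it
    to amplitude modulate a noise band at the corresponding carrier center."""
    b_lp, a_lp = butter(4, 300 / (fs / 2))          # 300-Hz envelope lowpass
    out = np.zeros(len(signal))
    for fa, fc in zip(analysis_centers_hz, carrier_centers_hz):
        lo, hi = fa * 2 ** (-bandwidth_octaves / 2), fa * 2 ** (bandwidth_octaves / 2)
        band = lfilter(firwin(numtaps, [lo, hi], pass_zero=False, fs=fs), 1.0, signal)
        env = filtfilt(b_lp, a_lp, np.abs(hilbert(band)))       # smoothed envelope
        lo_c, hi_c = fc * 2 ** (-bandwidth_octaves / 2), fc * 2 ** (bandwidth_octaves / 2)
        carrier = lfilter(firwin(numtaps, [lo_c, hi_c], pass_zero=False, fs=fs), 1.0,
                          np.random.randn(len(signal)))          # noise carrier band
        out += env * carrier
    return out
```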

Cochlear place and EAS map frequencies were modelled after a representative Flex24 electrode array (MED-EL Corporation, Innsbruck, Austria) recipient. The Flex24 electrode array is 24 mm in length and features 12 stimulation channels. This CI recipient underwent a postoperative CT scan, which confirmed a full insertion of the electrode array and allowed for the estimation of AID for each electrode contact. The AID values and cochlear place frequency for each electrode contact, using the spiral ganglion frequency-to-place function26, were provided by Vanderbilt University and appear in Table 1.

Table 1:

The spiral ganglion cochlear place frequency associated with each electrode contact and the center frequencies for the default and place-based maps are shown. The cochlear place frequencies (reported in Hertz) were derived from a representative Flex24 electrode array recipient. Center frequencies for the default map were obtained from the clinical programming software. Center frequencies for the place-based map were derived from the place-based mapping procedure, where the electric frequency filters of channels 1-8 were adjusted to match the cochlear place frequency and the remaining frequencies were distributed up to 8.5 kHz.

Electrode Contact:   1    2    3     4     5     6     7     8     9     10     11     12
Cochlear Place:      697  811  1078  1434  2017  2582  3633  5017  6181  11133  14521  16584
Place-Based:         697  811  1078  1434  2017  2582  3633  5017  6500  7000   7500   8000
Default:             293  393  527   707   948   1272  1707  2290  3072  4121   5529   7418

For the default map and acoustic component settings, the CI recipient’s unaided detection thresholds in the implanted ear were entered into the clinical programming software (Maestro version 7.0.3). The unaided detection thresholds were 50, 60, 80, 90, and 90 dB HL at 0.125, 0.25, 0.5, 0.75, and 1 kHz, respectively. The CI recipient’s aided sound field thresholds with the default EAS acoustic settings were 40, 50, 55, and 65 dB HL at 0.125, 0.25, 0.5, and 1 kHz, respectively; these values were used to simulate the audibility of the acoustic low-frequency information. The unaided detection thresholds resulted in a default electric frequency range of 250-8500 Hz. The associated center frequencies for the default map are listed in Table 1.
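The clinical software's exact filter tables are not published, but for illustration the Table 1 default centers can be reproduced, after rounding, by assuming 12 bands with logarithmically spaced edges from 250 to 8500 Hz and arithmetic-mean centers. This layout is an assumption of the sketch, not a documented property of the software.

```python
import numpy as np

# Sketch: approximate the default map's center frequencies as the
# arithmetic means of 12 logarithmically spaced bands spanning 250-8500 Hz.
edges = np.geomspace(250, 8500, num=13)        # 13 edges -> 12 bands
centers = (edges[:-1] + edges[1:]) / 2
print(np.round(centers).astype(int))
# [ 293  393  527  707  948 1272 1707 2290 3072 4121 5529 7418]  (cf. Table 1)
```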

For the place-based map, the electric frequency filters were determined by matching the center frequency of channels 1-8 to the associated cochlear place frequency for each electrode contact, and logarithmically distributing the remaining frequency information across channels 9-12. The place-based map had an electric frequency range of 550-8500 Hz; associated center frequencies are listed in Table 1. Simulation of the acoustic component of EAS was the same as for the default map; therefore, the place-based map had reduced access to speech cues in the region of the crossover between acoustic and electric stimulation.
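A minimal sketch of the place-based channel assignment follows: channels 1-8 take the spiral ganglion place frequency of their electrode contacts, and channels 9-12 carry the remaining spectrum up to 8.5 kHz. The values for channels 9-12 are taken verbatim from Table 1 rather than recomputed, since the exact spacing rule for those upper filters is not reproduced here.

```python
import numpy as np

# Table 1: spiral ganglion place frequencies for contacts 1-12 (Hz).
place_hz = [697, 811, 1078, 1434, 2017, 2582, 3633, 5017,
            6181, 11133, 14521, 16584]

# Place-based map: channels 1-8 match the place frequency; channels 9-12
# use the Table 1 upper-channel centers (6500-8000 Hz).
place_based_centers = place_hz[:8] + [6500, 7000, 7500, 8000]

# With this map, channels 1-8 have zero frequency-to-place mismatch.
mismatch_st = 12 * np.log2(np.array(place_hz) / np.array(place_based_centers))
print(np.round(mismatch_st, 1))   # first eight entries are 0.0
```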

Procedures

The experiment was controlled using custom MATLAB (MathWorks) scripts. Stimuli were routed through an external sound card (M-AUDIO, M-Track 2x2) and presented diotically over headphones (Sennheiser, HD 280 Pro).

Performance was evaluated adaptively, using an ascending signal-to-noise ratio (SNR) method as described by Buss, Calandruccio & Hall27. The 10-talker masker was presented at 60 dB SPL, and the SNR was manipulated by changing the level of the target. A sentence was presented at a challenging SNR (starting at −10 dB SNR), and subjects were asked to repeat what they heard. Subject responses were scored by the tester after each sentence. The SNR was increased in 2-dB steps until the subject correctly recognized all keywords or the 19-dB-SNR maximum level was reached. This procedure was repeated for each of the 20 sentences in a list. Feedback was not provided. Lists were randomly selected from among the ten lists shown to have equivalent intelligibility for CI users28, and sentence order within a list was randomized.
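The tracking logic can be summarized in a short sketch. The function present_and_score is a hypothetical stand-in for playing a sentence at a given SNR and returning the tester's keyword score; the clamping of the final step to the 19-dB ceiling is an interpretation of the procedure rather than a detail stated in the text.

```python
# Sketch of the ascending-SNR procedure: each sentence starts at -10 dB SNR
# and is re-presented at SNRs rising in 2-dB steps until all keywords are
# correct or the 19-dB ceiling is reached; no feedback is given.

def run_ascending_track(sentences, present_and_score,
                        start_snr=-10, step_db=2, max_snr=19):
    results = []                                    # (sentence, SNR, proportion correct)
    for sentence in sentences:
        snr = start_snr
        while True:
            n_correct, n_keywords = present_and_score(sentence, snr)
            results.append((sentence, snr, n_correct / n_keywords))
            if n_correct == n_keywords or snr >= max_snr:
                break
            snr = min(snr + step_db, max_snr)       # clamp at the 19-dB ceiling
    return results
```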

Data were collected as part of a larger study, including simulation of a CI-alone listening condition. With respect to the present dataset, fourteen subjects were randomized to listen to either the default or the place-based EAS condition. Three subjects heard both EAS simulations, in random order. This resulted in data from 10 subjects for each condition.

Data Analysis

The percent of keywords correct at each SNR was fitted with a three-parameter logit function prior to analysis and for illustration purposes; the three parameters were midpoint, slope, and upper asymptote. A logit transformation was applied to the proportion correct data prior to analysis. Masked sentence recognition was compared between the default and place-based maps at two SNRs (i.e., 5 dB and 10 dB SNR) using t-tests (SPSS, version 26). The 5 dB and 10 dB SNRs were selected for comparison as these are the fixed SNRs used clinically to evaluate EAS user performance. Performance at asymptote was also compared between the two maps using t-tests to assess potential differences at more favorable SNRs.
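For illustration, a three-parameter logit of the kind described above can be fit as follows. The exact parameterization used in the study is not specified, so this functional form, the starting values, and the example data are assumptions of the sketch.

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch: fit percent-correct-by-SNR data with a three-parameter logit
# (midpoint, slope, upper asymptote).
def logit3(snr_db, midpoint, slope, asymptote):
    return asymptote / (1 + np.exp(-slope * (snr_db - midpoint)))

# Hypothetical example data: proportion of keywords correct at each SNR.
snr = np.array([-10, -6, -2, 2, 6, 10, 14, 19], dtype=float)
p_correct = np.array([0.02, 0.05, 0.12, 0.25, 0.45, 0.58, 0.62, 0.63])

params, _ = curve_fit(logit3, snr, p_correct, p0=[5.0, 0.3, 0.6],
                      bounds=([-10, 0, 0], [19, 5, 1]))
midpoint, slope, asymptote = params
print(f"midpoint={midpoint:.1f} dB, slope={slope:.2f}, asymptote={asymptote:.2f}")
```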

RESULTS

Figure 2 plots the masked sentence recognition of subjects listening in the default versus place-based map conditions. The percent of correctly recognized keywords at each SNR is indicated with circles, and lines show individual subjects’ psychometric functions. The results for the three subjects who provided data for both conditions are highlighted in blue; open circles indicate results for the first condition tested, and filled circles indicate results for the second condition tested.

Figure 2:

Masked sentence recognition of subjects listening to the default map and the place-based map. Performance is reported in percent of keywords correctly recognized at each SNR (range −10 to 19 dB SNR), where a higher value indicates better speech recognition. Open grey circles represent the data from the 14 individuals who listened to one condition. Blue circles represent the data from the three subjects who listened to both conditions, with open circles indicating performance in the first condition and filled circles indicating performance in the second condition completed. SNR: signal-to-noise ratio.

Masked sentence recognition was poorer on average for the default map than the place-based map. At 5 dB SNR, the percent correct performance ranged from 7 to 23% (mean: 13%, SD: 6%) with the default map and from 15 to 48% (mean: 30%, SD: 10%) with the place-based map. At 10 dB SNR, the percent correct performance ranged from 20 to 54% (mean: 38%, SD: 10%) with the default map and from 41 to 76% (mean: 58%, SD: 11%) with the place-based map. Asymptotic performance ranged from 41 to 79% (mean: 57%, SD: 12%) with the default map and from 75 to 97% (mean: 89%, SD: 7%) with the place-based map. Masked sentence recognition was significantly better with the place-based map as compared to the default map at 5 dB SNR (t(18)=−4.42, p<0.001), 10 dB SNR (t(18)=−4.29, p<0.001), and at asymptote (t(18)=−7.01, p<0.001).

DISCUSSION

This study demonstrates significantly better masked sentence recognition with a place-based map as compared to a default map in normal-hearing subjects listening to EAS simulations. These findings support the notion that matching electric frequency filters to the cochlear place frequency to reduce frequency-to-place mismatch can confer benefit – even if it introduces a gap in acoustic and electric frequency information.

The present findings challenge our current understanding of the optimal mapping procedures for EAS. Frequency gaps between acoustic and electric stimulation have been shown to degrade speech recognition8; however, the present report found better performance for a place-based map that introduced a gap in available frequency information, yet aligned electric frequency filters with natural place of stimulation. This result raises the possibility that the auditory system can tolerate frequency gaps when speech information is delivered to the associated cochlear place. The present results corroborate the findings of Fu, Galvin, & Wang11 who reported that normal-hearing listeners experienced better vowel recognition with EAS simulations when the frequency mapped to the most apical filter of electric stimulation was increased to minimize the frequency-to-place mismatch. It is possible that listeners can tolerate slight gaps in the frequency information due to the spectral redundancy of speech29, whereas larger gaps in frequency information may negatively influence speech recognition30.

Tolerance for spectral gaps in EAS users is of great clinical interest since long-term hearing preservation rates are variable31-35. The default mapping procedure adjusts the low-frequency filter of electric stimulation when changes to unaided detection thresholds occur. Changing the low-frequency filter adjusts the frequency allocations of all channels, thus requiring the listener to re-acclimate to a new frequency-to-place mismatch each time acoustic thresholds change. In contrast, the place-based mapping procedure would largely obviate the need to adjust electric frequency filters in the face of changes in residual hearing. The result is a stable frequency-to-place association, but an increase in the gap between acoustic and electric frequency information if residual hearing is lost. The potential for a deleterious gap in frequency information would be greatest for electrode arrays at shallower insertion depths and limited acoustic hearing. With respect to an average insertion depth for the electrode array modeled herein (Flex24, 428°)6, hearing must be preserved out to at least 500 Hz to avoid frequency-to-place mismatch with the default mapping procedure or a gap in frequency information with strict place-based mapping (Figure 1).

Conversely, if hearing is preserved above 500 Hz for the average Flex24 insertion, then apical electrode contacts reside in the region of aidable acoustic hearing (Figure 1). Given that low-frequency hearing preservation has recently been observed in CI recipients of long, flexible electrode arrays, albeit at lower rates than with shorter arrays, this scenario in which apical contacts reside in the region of acoustic hearing has become increasingly relevant36,37. One consideration when using place-based mapping for EAS recipients in these cases is that providing electric stimulation in or near a region of aidable acoustic hearing may mask the perception of acoustic low-frequency cues38-40. In this scenario, place-based mapping procedures could reduce stimulation of apical electrode contacts that are within a region of aidable acoustic hearing, to limit potential masking between the acoustic and electric output.

While the present data support the effectiveness of place-based mapping in EAS devices, it should be noted that the performance of normal-hearing subjects listening to vocoder simulations may not accurately reflect the performance of CI recipients41. Additionally, the present study assessed masked sentence recognition with acute listening experience. Cochlear implant recipients with EAS who listen with default maps may acclimate to a chronic frequency-to-place mismatch with extended listening experience and could potentially demonstrate similar or better speech recognition compared to listeners with place-based maps. Finally, the present investigation did not assess the influence of place-based mapping on binaural hearing abilities. Interaural mismatches in place of stimulation have been shown to be detrimental for binaural hearing in normal-hearing listeners presented with CI simulations42-49 and in bilateral CI recipients46,47,50,51. The effectiveness of place-based mapping as compared to the default mapping procedure for EAS users is currently being assessed as part of a long-term, prospective investigation of monaural and binaural hearing abilities.

CONCLUSION

Data from normal-hearing subjects listening to vocoder simulations suggest that a place-based map may improve performance compared to a default map. This finding indicates that incorporating device variables, such as the AID of electrode contacts and the associated cochlear place frequency, into mapping could maximize outcomes for EAS users. In CI recipients, a frequency-to-place match would mitigate the need to acclimate to spectrally-shifted speech information, as can occur with a default map.

Acknowledgements

The authors thank Kathryn Sobon, Kira Griffith, and Haley Murdock for their assistance with data collection.

References

1. Gantz BJ, Turner C, Gfeller KE, Lowder MW. Preservation of hearing in cochlear implant surgery: advantages of combined electrical and acoustical speech processing. Laryngoscope 2005;115:796–802.
2. Incerti PV, Ching TY, Cowan R. A systematic review of electric-acoustic stimulation: device fitting ranges, outcomes, and clinical fitting practices. Trends Amplif 2013;17:3–26.
3. Dillon MT, Buss E, Adunka OF, Buchman CA, Pillsbury HC. Influence of test condition on speech perception with electric-acoustic stimulation. AJA 2015;24:520–8.
4. Gantz BJ, Dunn O, Oleson J, Hansen M, Parkinson A, Turner C. Multicenter clinical trial of the Nucleus Hybrid S8 cochlear implant: final outcomes. Laryngoscope 2016;126:962–73.
5. Pillsbury HC, Dillon MT, Buchman CA, et al. Multicenter US clinical trial with an electric-acoustic stimulation (EAS) system in adults: final outcomes. Otol Neurotol 2018;39:299–305.
6. Canfarotta MW, Dillon MT, Buss E, Pillsbury HC, Brown KD, O’Connell BP. Frequency-to-place mismatch: characterizing variability and the influence on speech perception outcomes in cochlear implant recipients. Ear Hear [Epub ahead of print].
7. Gifford RH, Davis TJ, Sunderhaus LW, et al. Combined electric and acoustic stimulation with hearing preservation: effect of cochlear implant low-frequency cutoff on speech understanding and perceived listening difficulty. Ear Hear 2017;38:539–53.
8. Karsten SA, Turner CW, Brown CJ, Jeon EK, Abbas PJ, Gantz BJ. Optimizing the combination of acoustic and electric hearing in the implanted ear. Ear Hear 2013;34:142–50.
9. Vermeire K, Anderson I, Flynn M, Van de Heyning P. The influence of different speech processor and hearing aid settings on speech perception outcomes in electric acoustic stimulation patients. Ear Hear 2008;29:76–86.
10. Dillon MT, Buss E, Pillsbury HC, Adunka OF, Buchman CA, Adunka MC. Effects of hearing aid settings for electric-acoustic stimulation. J Am Acad Audiol 2014;25:133–40.
11. Fu QJ, Galvin JJ 3rd, Wang X. Integration of acoustic and electric hearing is better in the same ear than across ears. Sci Rep 2017;7:12500.
12. Reiss LA, Turner CW, Erenberg SR, Gantz BJ. Changes in pitch with a cochlear implant over time. J Assoc Res Otolaryngol 2007;8:241–57.
13. Sagi E, Fu QJ, Galvin JJ 3rd, Svirsky MA. A model of incomplete adaptation to a severely shifted frequency-to-electrode mapping by cochlear implant users. J Assoc Res Otolaryngol 2010;11:69–78.
14. Reiss LA, Turner CW, Karsten SA, Gantz BJ. Plasticity in human pitch perception induced by tonotopically mismatched electro-acoustic stimulation. Neuroscience 2014;256:43–52.
15. Svirsky MA, Talavage TM, Sinha S, Neuburger H, Asadpour M. Gradual adaptation to auditory frequency mismatch. Hear Res 2015;322:163–70.
16. Dorman MF, Loizou PC, Rainey D. Simulating the effect of cochlear-implant electrode insertion depth on speech understanding. J Acoust Soc Am 1997;102:2993–6.
17. Fu QJ, Shannon RV. Effects of electrode location and spacing on phoneme recognition with the Nucleus-22 cochlear implant. Ear Hear 1999;20:321–31.
18. Başkent D, Shannon RV. Speech recognition under conditions of frequency-place compression and expansion. J Acoust Soc Am 2003;113:2064–76.
19. Başkent D, Shannon RV. Frequency-place compression and expansion in cochlear implant listeners. J Acoust Soc Am 2004;116:3130–40.
20. Başkent D, Shannon RV. Interactions between cochlear implant electrode insertion depth and frequency-place mapping. J Acoust Soc Am 2005;117:1405–16.
21. Li T, Fu QJ. Effects of spectral shifting on speech perception in noise. Hear Res 2010;270:81–8.
22. Noble JH, Gifford RH, Labadie RF, Dawant BM. Statistical shape model segmentation and frequency mapping of cochlear implant stimulation targets in CT. Med Image Comput Comput Assist Interv 2012;15:421–8.
23. Noble JH, Gifford RH, Hedley-Williams AJ, Dawant BM, Labadie RF. Clinical evaluation of an image-guided cochlear implant programming strategy. Audiol Neurootol 2014;19:400–11.
24. Canfarotta MW, Dillon MT, Buss E, Pillsbury HC, Brown KD, O’Connell BP. Validating a new tablet-based tool in the determination of cochlear implant angular insertion depth. Otol Neurotol 2019;40:1006–10.
25. Spahr AJ, Dorman MF, Litvak LM, et al. Development and validation of the AzBio sentence lists. Ear Hear 2012;33:112–7.
26. Stakhovskaya O, Sridhar D, Bonham BH, Leake PA. Frequency map for the human cochlear spiral ganglion: implications for cochlear implants. JARO 2007;8:220–33.
27. Buss E, Calandruccio L, Hall JW 3rd. Masked sentence recognition assessed at ascending target-to-masker ratios: modest effects of repeating stimuli. Ear Hear 2015;36:e14–e22.
28. Schafer EC, Pogue J, Milrany T. List equivalency of the AzBio sentence test in noise for listeners with normal-hearing sensitivity or cochlear implants. J Am Acad Audiol 2012;23:501–9.
29. Steeneken HJ, Houtgast T. Mutual dependence of the octave-band weights in predicting speech intelligibility. Speech Communication 1999;28:109–23.
30. Shannon RV, Galvin JJ 3rd, Başkent D. Holes in hearing. JARO 2002;3:185–99.
31. Gstoettner WK, Helbig S, Maier N, Kiefer J, Radeloff A, Adunka OF. Ipsilateral electric acoustic stimulation of the auditory system: results of long-term hearing preservation. Audiol Neurootol 2006;11:49–56.
32. Mertens G, Punte AK, Cochet E, De Bodt M, Van de Heyning P. Long-term follow-up of hearing preservation in electric-acoustic stimulation patients. Otol Neurotol 2014;35:1765–72.
33. Helbig S, Adel Y, Rader T, Stöver T, Baumann U. Long-term hearing preservation outcomes after cochlear implantation for electric-acoustic stimulation. Otol Neurotol 2016;37:e353–9.
34. Roland JT Jr., Gantz BJ, Waltzman SB, Parkinson AJ. Long-term outcomes of cochlear implantation in patients with high-frequency hearing loss. Laryngoscope 2018;128:1939–45.
35. Wanna GB, O’Connell BP, Francis DO, et al. Predictive factors for short- and long-term hearing preservation in cochlear implantation with conventional-length electrodes. Laryngoscope 2018;128:482–9.
36. Mick P, Amoodi H, Shipp D, et al. Hearing preservation with full insertion of the FLEXsoft electrode. Otol Neurotol 2014;35:e40–4.
37. Usami S, Moteki H, Tsukada K, et al. Hearing preservation and clinical outcome of 32 consecutive electric acoustic stimulation (EAS) surgeries. Acta Otolaryngol 2014;134:717–7.
38. Lin P, Turner CW, Gantz BJ, Djalilian HR, Zeng FG. Ipsilateral masking between acoustic and electric stimulations. J Acoust Soc Am 2011;130:858–65.
39. Krüger B, Büchner A, Nogueira W. Simultaneous masking between electric and acoustic stimulation in cochlear implant users with residual low-frequency hearing. Hear Res 2017;353:185–96.
40. Imsiecke M, Krüger B, Büchner A, Lenarz T, Nogueira W. Interaction between electric and acoustic stimulation influences speech perception in ipsilateral EAS users. Ear Hear 2019 (ahead of print).
41. Bhargava P, Gaudrain E, Başkent D. The intelligibility of interrupted speech: cochlear implant users and normal hearing listeners. J Assoc Res Otolaryngol 2016;17:475–91.
42. Blanks DA, Buss E, Grose JH, Fitzpatrick DC, Hall JW 3rd. Interaural time discrimination of envelopes carried on high-frequency tones as a function of level and interaural carrier mismatch. Ear Hear 2008;29:674–83.
43. Yoon YS, Liu A, Fu QJ. Binaural benefit for speech recognition with spectral mismatch across ears in simulated electric hearing. J Acoust Soc Am 2011;130:EL94–100.
44. Goupell MJ, Stoelb C, Kan A, Litovsky RY. Effect of mismatched place-of-stimulation on the salience of binaural cues in conditions that simulate bilateral cochlear-implant listening. J Acoust Soc Am 2013;133:2272–87.
45. Wess JM, Brungart DS, Bernstein JGW. The effect of interaural mismatches on contralateral unmasking with single-sided vocoders. Ear Hear 2017;38:374–86.
46. Kan A, Goupell MJ, Litovsky RY. Effect of channel separation and interaural mismatch on fusion and lateralization in normal-hearing and cochlear-implant listeners. J Acoust Soc Am 2019;146:1448.
47. Sheffield SW, Goupell MJ, Spencer NJ, Stakhovskaya OA, Bernstein JGW. Binaural optimization of cochlear implants: discarding frequency content without sacrificing head-shadow benefit. Ear Hear 2019;41:576–90.
48. van Ginkel C, Gifford RH, Stecker GC. Binaural interference with simulated electric acoustic stimulation. J Acoust Soc Am 2019;145:2445.
49. Xu K, Willis S, Gopen Q, Fu QJ. Effects of spectral resolution and frequency mismatch on speech understanding and spatial release from masking in simulated bilateral cochlear implants. Ear Hear 2020 (ahead of print).
50. Kan A, Stoelb C, Litovsky RY, Goupell MJ. Effect of mismatched place-of-stimulation on binaural fusion and lateralization in bilateral cochlear-implant users. J Acoust Soc Am 2013;134:2923–36.
51. Svirsky MA, Fitzgerald MB, Sagi E, Glassman EK. Bilateral cochlear implants with large asymmetries in electrode insertion depth: implications for the study of auditory plasticity. Acta Otolaryngol 2015;135:354–63.
