Author manuscript; available in PMC: 2016 Apr 1.
Published in final edited form as: Hear Res. 2014 Nov 17;322:57–66. doi: 10.1016/j.heares.2014.11.003

Auditory Implant Research at the House Ear Institute 1989–2013

Robert V Shannon 1
PMCID: PMC4380593  NIHMSID: NIHMS648210  PMID: 25449009

Abstract

The House Ear Institute (HEI) had a long and distinguished history of auditory implant innovation and development. Early clinical innovations included being one of the first cochlear implant (CI) centers, being the first US center to implant a child with a cochlear implant, developing the auditory brainstem implant (ABI), and developing multiple surgical approaches and tools for otology. This paper reviews the second stage of auditory implant research at House: in-depth basic research on perceptual capabilities and signal processing for both cochlear implants and auditory brainstem implants. Psychophysical studies characterized the loudness and temporal perceptual properties of electrical stimulation as a function of electrical parameters. Speech studies with the noise-band vocoder showed that only four bands of tonotopically arrayed information were sufficient for speech recognition, and that most implant users were receiving the equivalent of 8–10 bands of information. The noise-band vocoder allowed us to evaluate the effects of manipulating the number of bands, the alignment of the bands with the original tonotopic map, and distortions in the tonotopic mapping, including holes in the neural representation. Stimulation pulse rate was shown to have only a small effect on speech recognition. Electric fields were manipulated in position and sharpness, showing the potential benefit of improved tonotopic selectivity. Auditory training showed great promise for improving speech recognition for all patients. Finally, the ABI was developed and improved, and its application expanded to new populations. Overall, the last 25 years of research at HEI helped increase the basic scientific understanding of electrical stimulation of hearing and contributed to improved outcomes for patients with CI and ABI devices.

Keywords: cochlear implant, auditory brainstem implant, speech recognition, music, pitch, loudness, spectral distortion, auditory training

Introduction

The 2013 Lasker-DeBakey Clinical Medical Research Award honored three pioneers in the development of the modern cochlear implant (CI): Graeme Clark, Blake Wilson, and Ingeborg Hochmair. We salute these pioneers for their contributions. While the award honors these deserving individuals, it also honors the whole field of cochlear implant research, in which a worldwide effort spanning more than 50 years has brought us to the present state of cochlear implants. In the beginning, cochlear implants were viewed as providing an auditory aid to lip reading. No one predicted that most implant recipients would be able to converse on the telephone without any lip-reading cues, yet that is the state of the art in cochlear implants today. Cochlear implants have been successful beyond our most optimistic expectations.

The House Ear Institute (HEI) was one of the early centers in the development of the cochlear implant and achieved many “firsts”. William House was inspired by the reports of Djourno and Eyriès in Paris (Eisen, 2003) and started working on electrical stimulation of the cochlea in the 1960s. Together with engineer Jack Urban, he developed single-channel and multi-channel cochlear implant prototypes. They developed the first commercialized CI (with the 3M Corporation: the 3M/House CI) and implanted the first child with a CI in 1981 (House and Eisenberg, 1983). William House and William Hitselberger developed the first auditory brainstem implant (ABI) and implanted the first patient in 1979 (Hitselberger et al., 1984). Hundreds of otologic surgeons visited the House Institute and the House Clinic to learn new surgical techniques and to learn about new developments in technology and devices. The House Ear Institute and Clinic served as an international center of training and education for the emerging technologies and new surgical techniques in otology. In the mid-1980s HEI expanded to develop a stronger research base in basic and applied science. In 1989 it recruited me as the director of auditory implant research. At first I continued my previous work measuring the psychophysical capabilities of patients with electrical stimulation of the cochlea (Shannon, 1989b, 1990, 1992b). However, the reputation and personnel at HEI offered many new opportunities, and my research interests broadened. This chapter reviews implant research at the Auditory Implant Research (AIR) laboratory at HEI for the 24-year period from 1989 to 2013.

ABI development

The House Clinic is the world leader in surgeries to remove vestibular schwannomas (VS), performing 200–300 per year. In some cases the VS are bilateral, resulting from a genetic disorder called neurofibromatosis type 2 (NF2). NF2 schwannomas usually encapsulate the auditory portion of the eighth cranial nerve (VIIIn), so that removal of these tumors almost always resulted in loss of the auditory nerve and thus deafness. These patients could not benefit from a CI because the auditory nerve was sacrificed during tumor removal. So House and Hitselberger (Hitselberger et al., 1984) decided to try CI-type technology to stimulate the next station in the auditory system: the cochlear nucleus in the brainstem. This device is now known as the auditory brainstem implant (ABI). When they approached a potential patient with the concept, she enthusiastically volunteered to be the first recipient of the ABI. That first ABI used a modified CI with a percutaneous connector and two ball electrodes placed on the cochlear nucleus. House and Hitselberger realized that the electrode needed additional stabilization because of its location, so together with Doug McCreery at the Huntington Medical Research Institutes (HMRI) in Pasadena, they designed an electrode with a fabric backing. The fabric encouraged the natural foreign-body reaction to attach scar tissue to the electrode array and hold it in place. This design proved so stable that ABI patient #1 is still using her device every waking hour 35 years later, and longitudinal CT studies show excellent electrode stability.

Starting in 1989 we added more electrode contacts to the ABI array and then partnered with a major cochlear implant manufacturer (Cochlear Corp.) to develop a commercial ABI device. The 8-electrode ABI entered FDA clinical trials in 1993 and was approved for use in 2000, by which time the electrode array had grown to 21 contacts. On average, the ABI does not provide the level of hearing provided by a CI, but it does provide most patients with sound awareness and substantial help with lip reading (Brackmann et al., 1993; Shannon et al., 1993; Otto et al., 1998, 2002). During this period audiologist Steve Otto was a critical element in the ABI program, providing maps and other clinical services that allowed patients to develop useful hearing with the ABI.

As of 2014 there are more than 1200 ABI devices in use around the world, and the use of ABIs is being expanded to patients without NF2: patients who lost the VIIIn bilaterally to ossification or trauma, and children born without an auditory nerve. Today we know that 10–30% of ABI recipients, including NF2 patients, can achieve significant open-set speech recognition (Colletti and Shannon, 2005; Behr et al., 2007; Matthies et al., 2014).

Fan-Gang Zeng and Intensity Coding

The first postdoctoral fellow to join the group was the editor of this volume, Fan-Gang Zeng. Fan-Gang had finished his PhD working on intensity coding in hearing and saw the CI and ABI as opportunities to test theories of intensity coding. He worked to characterize loudness and intensity discrimination in electric hearing and developed theory and mathematical models to encompass the coding of loudness in normal hearing, CI, and ABI subjects. By comparing intensity discrimination and loudness across these populations he was able to determine that the loudness percept depended on two key stages: compression in the periphery and expansion in the central system. Most sensory systems are able to code a range of stimulus intensities that is larger than the range of individual neurons. One way to code stimulus intensity is to compress the representation of intensity at the periphery, send this compressed signal along neurons to the brain, and then allow the brain to reverse the compression so that the perception matches the physics of the outside world. If we assume that postlingually deafened adults have developed such a sensory expansion in the brain, then differences in loudness and intensity discrimination should be explainable by understanding the peripheral coupling of stimulus to neuron. In the case of normal acoustic hearing, considerable literature was available on the nonlinear compression in the cochlea. However, Fan-Gang had to develop techniques to quantify the loudness of electrical stimulation when the stimulus was applied at the level of the VIIIn or at the cochlear nucleus in the brainstem. His publications on this topic provided both data and theory to quantify the coding of loudness in acoustic and electric hearing (Zeng and Shannon, 1992, 1994, 1995a, 1995b, 1999; Zeng et al., 2002). Fan-Gang’s work on loudness combined measures in patients with a broad theoretical model. Alternative models of loudness in cochlear implants (McKay et al., 2003) were more pragmatic and included anatomical details that affect current flow in the cochlea. McKay has demonstrated a change in the slope of the loudness growth function when current spreads into the modiolus, for example.
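
The two-stage argument can be written schematically as follows; the exponents are illustrative placeholders rather than the fitted parameters of the published models:

$$
L = E\big(C(I)\big), \qquad C(I) \propto I^{\,c}\ (c \ll 1,\ \text{peripheral compression}), \qquad E(x) \propto x^{\,e}\ (e > 1,\ \text{central expansion})
$$

In acoustic hearing the two stages combine to give $L \propto I^{\,ce}$, the familiar shallow growth of loudness with intensity; electric stimulation of the VIIIn or cochlear nucleus bypasses the cochlear compression, so the central expansion acts on a nearly linear input and loudness grows far more steeply with current.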

Fan-Gang Zeng is presently the Director of Research in the Department of Otolaryngology at the University of California, Irvine. His work has diversified into tinnitus and hearing impairment as well as continued work on cochlear implants.

The Noise Band Vocoder

One project at the AIR lab was to develop a simulation of cochlear implant processing so that normally hearing people could experience the benefits and limitations of the CI. This project, undertaken primarily as a demonstration for parents and family members of CI recipients, was to lead to a new research tool for understanding speech pattern recognition in the brain: the noise band vocoder (NBV). Early CI speech processors filtered sounds into multiple frequency bands and then placed a compressed analog version of the filtered speech waveform directly onto the appropriate electrode. It was widely recognized that simultaneous stimulation of adjacent electrodes produced overlapping, summed electric fields (Shannon, 1983). These electric field interactions had detrimental and unpredictable consequences for the pattern of stimulation of the nerves. To avoid such cross-electrode electric field interactions, pulsatile processors were developed so that the stimulation on adjacent electrodes consisted of biphasic current pulses that did not overlap in time (Chouard et al., 1983; Wilson, 1991). Instead of the analog signal, each electrode simply received a fixed-rate pulse train amplitude-modulated by the energy in that band of speech. This fixed the electric field summation problem, but it eliminated any spectral information within each band/electrode.

To simulate the loss of spectral information in each band, we used noise to replace the spectral information from a normally hearing ear. We extracted the energy envelope from each band of speech and used that envelope to modulate a noise band with the same bandwidth as the band of speech being represented. This manipulation was similar to a type of processing called a vocoder, originally developed in the 1930s at Bell Labs by Homer Dudley (Dudley, 1939). Dudley developed the vocoder as a way to reduce the bandwidth of telephone transmissions, while we were trying to simulate the information provided by a cochlear implant.
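
For readers who want to experiment with the idea, the sketch below implements a minimal noise-band vocoder of this general kind in Python. The band edges, filter orders, and envelope cutoff are illustrative assumptions, not the exact parameters of the original processor.

```python
# Minimal noise-band vocoder sketch in the spirit of Shannon et al. (1995).
# Band edges, filter orders, and the envelope cutoff are illustrative.
import numpy as np
from scipy.signal import butter, sosfilt

def noise_band_vocoder(speech, fs, n_bands=4, f_lo=100.0, f_hi=4000.0,
                       env_cutoff=160.0):
    """Replace the fine structure in each analysis band with noise,
    preserving only that band's temporal envelope."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)  # log-spaced band edges
    env_sos = butter(2, env_cutoff / (fs / 2), btype='low', output='sos')
    rng = np.random.default_rng(0)
    out = np.zeros_like(speech, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo / (fs / 2), hi / (fs / 2)],
                          btype='band', output='sos')
        band = sosfilt(band_sos, speech)
        # Envelope extraction: half-wave rectification, then lowpass.
        env = sosfilt(env_sos, np.maximum(band, 0.0))
        # Carrier: white noise restricted to the same band as the speech.
        carrier = sosfilt(band_sos, rng.standard_normal(len(speech)))
        # Refilter the modulated noise so it stays within the band.
        out += sosfilt(band_sos, env * carrier)
    return out / (np.max(np.abs(out)) + 1e-12)  # normalize to avoid clipping
```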

When we originally developed the noise band vocoder in the lab we thought we had made a mistake, because we could understand speech with only 4 bands, a dramatic reduction in information. We checked our algorithm and software for a month before we became convinced that we had not made any errors in our construction. With that we realized what we had discovered: speech was intelligible even with drastically reduced spectral and temporal content. This not only provided a wonderful simulation of the information available through a cochlear implant; it also provided a terrific experimental tool to test the limits of speech pattern recognition. We had the vocoder running in real time, so I called a colleague whose recent single-channel noise vocoder study had started us along this research path (van Tasell et al., 1987). When she answered the phone, all she could hear was my voice talking through a 4-channel noise vocoder. After an initial period of puzzlement, she understood what I was saying, realized that she was listening to noise-vocoded speech, and broke out laughing. We realized immediately the importance of this serendipitous discovery and ran formal experiments showing the efficacy of the brain’s pattern recognition (Shannon et al., 1995, 2004).

Qian-Jie Fu and Speech Pattern Recognition

Soon after the development of the NBV, Qian-Jie Fu joined the lab as a PhD student in Biomedical Engineering. He immediately saw the importance of the NBV as a research tool and proceeded to use it to investigate the limits of speech pattern recognition. He manipulated the number of bands (Fu et al., 1998a), the spacing and placement of bands (Fu and Shannon, 1999a,b), the frequency shifting of the bands relative to the normal tonotopic map (Fu and Shannon, 1999c, 2002), and the relative timing across bands (Fu and Galvin, 2001). The NBV allowed us to manipulate the spectral resolution independently from other factors to see their relative importance in speech pattern recognition. This research showed that amplitude distortions were only mildly detrimental to speech intelligibility (Fu and Shannon, 1998, 1999d,e,f, 2000a). Speech recognition was not affected by tonotopic shifts of up to one-half octave, but decreased dramatically for larger shifts. The exact placement and spacing of bands was important when there were fewer than 6 bands, but relatively unimportant for larger numbers of bands; a band division around 1500 Hz appeared to be critically important when small numbers of bands were used. The synchrony of timing across bands proved not to be very important for speech recognition, with tolerance for mismatch as high as hundreds of ms, although with only 4 bands this tolerance was reduced to 50 ms. Fu and Nogaki (2005) used the NBV to measure the effect of channel interaction on speech understanding in fluctuating noise, finding that mean performance of real CI users was comparable to that of normal-hearing (NH) subjects listening to 4 spectrally smeared channels.
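
The tonotopic-shift manipulations can be illustrated with the Greenwood frequency-place map. The sketch below is a minimal illustration assuming log-spaced analysis bands and the standard Greenwood constants for the human cochlea; it is not the exact procedure of the published studies.

```python
# Sketch of a tonotopic (frequency-place) shift using the Greenwood map:
# carrier bands are moved along the cochlea while the speech analysis
# bands stay fixed. Parameter values are illustrative.
import numpy as np

A, ALPHA, K = 165.4, 2.1, 0.88  # Greenwood (1990) constants for humans

def place_to_freq(x):
    """Cochlear place (0=apex .. 1=base, fraction of ~35 mm) -> Hz."""
    return A * (10.0 ** (ALPHA * x) - K)

def freq_to_place(f):
    """Hz -> cochlear place as a fraction of basilar-membrane length."""
    return np.log10(np.asarray(f) / A + K) / ALPHA

def shifted_band_edges(edges_hz, shift_mm, bm_length_mm=35.0):
    """Shift band edges toward the base (positive mm) along the cochlea."""
    places = freq_to_place(edges_hz) + shift_mm / bm_length_mm
    return place_to_freq(places)

# Example: shift 4 analysis bands basally by 3 mm (a shallow insertion).
analysis_edges = np.geomspace(100.0, 4000.0, 5)
carrier_edges = shifted_band_edges(analysis_edges, shift_mm=3.0)
```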

The NBV reduced listeners’ ability to detect voice pitch and voice quality, even though it provided excellent intelligibility. Without access to voice pitch, listeners have difficulty distinguishing male from female voices, questions from statements, and the emotional content of speech. Fu and colleagues found tradeoffs between spectral and temporal envelope cues for voice gender identification, with a stronger contribution from temporal cues when spectral resolution was poor (Fu et al., 1998b, 2004, 2005b). Luo et al. (2007) found similar tradeoffs between spectral and temporal cues for vocal emotion recognition. Whether with research interfaces, clinical processors, or CI simulations, work by Qian-Jie Fu and colleagues has greatly advanced our understanding of how CI stimulation parameters affect speech performance.

Qian-Jie Fu was the leader of the Speech Perception and Auditory Perception Laboratory at HEI from 1997 until 2013, when he joined the Department of Otolaryngology at UCLA. He has continued to diversify his research interests into signal processing for noise reduction, perceptual training in children with CIs, and the development of a low-cost CI.

The Penetrating Electrode ABI

During the NBV work we were also developing a new implant to improve ABI performance. We thought that the reason for the low performance with the ABI might be due to the poor alignment of the electrode array with the tonotopic axis of the cochlear nucleus. In the human cochlear nucleus the primary tonotopic organization does not run along the surface but orthogonal to the surface, with low frequencies represented near the surface and high frequencies represented deep in the nucleus (Moore, 1987; Moore and Osen, 1979). To stimulate high frequency regions we needed to penetrate beneath the surface to activate deep neurons. A project to develop a next-generation ABI with penetrating microelectrodes (PABI) was funded by NIH with Doug McCreery at the Huntington Medical Research Institutes as the PI and HEI as a subcontractor to design the human trials. After almost 20 years of animal safety studies, surgical anatomy studies by Jean Moore, and electrode design by Doug McCreery (McCreery et al., 1994, 1998), human trials were initiated in 2003. Ten patients were implanted with the PABI device and participated in extensive perceptual testing. The penetrating electrodes achieved several of the desired improvements over surface electrode ABI devices: threshold levels were more than ten times lower for the PABI because of the close proximity of electrodes to the stimulated neurons, no interference was measured between electrodes because of the highly localized stimulation, and the percepts elicited by penetrating electrodes were high in pitch and had a clear, whistle-like quality. However, in spite of these expected improvements none of the ten patients achieved significant open set speech recognition. A statistical comparison of PABI vs ABI outcomes showed no advantage for the PABI (Otto et al., 2008; McCreery, 2008). Since the PABI device was more complex to manufacture and more difficult to implant than the surface electrode ABI without providing better outcomes, the PABI trial was halted.

Monita Chatterjee and Complex Pattern Processing

In 1997 Monita Chatterjee came to HEI as a postdoc from the Institute for Sensory Research at Syracuse University, where she had been the last PhD student of Joseph Zwislocki. She changed from in-vivo measures of hair cell function to implant psychophysics because she wanted to work on more applied problems and to work with patients. At that point considerable psychophysical research had been done with implant patients, and there was a disappointingly low correlation between most measures of basic auditory capability and speech recognition. Monita started work to characterize more complex auditory processing, hoping to find differences in complex pattern processing that were more closely related to speech patterns than basic psychophysical tasks. Initially, she quantified the spread of excitation produced by stimulation of a single electrode and compared the patterns generated by bipolar activation modes at different levels and electrode separations. She found relatively broad spread of excitation even with narrow stimulation modes, a relatively linear growth of masking in most subjects (a 3 dB masker increase results in 3 dB of masking), and an excitation pattern that developed a two-peaked shape at very broad separations, i.e., when the two poles of the bipole approximated two true monopoles (Chatterjee and Shannon, 1998, 2000; Chatterjee et al., 2006a). This raised the question of whether more narrowly focused stimulation modes were really effective at limiting the spread of activation. In other experiments, she quantified the time course of recovery from forward masking and measured loudness growth in single-channel stimulation as a function of the distance between the active and return electrodes, pulse phase duration, and current amplitude (Chatterjee, 1999; Chatterjee et al., 2000).

Monita also started a long series of experiments to characterize the effects of envelope noise on temporal modulation sensitivity within and across electrodes. She and her coworkers found that noise could be both beneficial and detrimental, and characterized that parameter space in detail (Chatterjee and Robert, 2001; Chatterjee and Oba, 2005). In studies of across-channel effects on modulation sensitivity, she found that the temporal pattern of activation on one electrode can modify the perception of the temporal envelope on another electrode, even when there is no direct physical or physiological interaction. Monita performed a series of experiments examining the conditions that allowed cross-electrode patterns to fuse together or break apart, as a way of understanding how information is integrated across electrodes in a cochlear implant (Chatterjee, 2003; Chatterjee and Oba, 2004; Chatterjee et al., 2006b). Auditory processing of complex patterns is still a field in its early stages, so the implications of this research for improving speech processor design are not yet clear. What is clear, however, is that extrapolation from simple psychophysics is not sufficient: one spectral region can have enhancing or detrimental effects on another region (e.g., Grose et al., 2005; Kidd et al., 2003). A better understanding of cross-spectral processing could potentially improve processing for implants and hearing aids.

After a productive period at the University of Maryland, Monita Chatterjee became the Director of the Auditory Perception laboratory at the Boys Town National Research Hospital. Since her time at House, her research has expanded to include investigations of the perception of degraded speech by normally hearing and cochlear implanted listeners, mechanisms of pitch perception in electric and acoustic hearing, and most recently, to research with young children, looking at the development of prosodic information processing and lexical tone recognition with distorted and limited input provided by a hearing aid or cochlear implant.

How Many Channels?

It was clear that the most important factor contributing to speech recognition was the number of spectral bands of information conveyed. We had already shown that speech recognition increased with the number of bands in a NBV, but how did that finding relate to actual cochlear implant listeners? Research audiologists Lendra Friesen and Kim Fishman measured phoneme, word, and sentence recognition as a function of the number of bands in a NBV and as a function of the number of electrodes in CI listeners. The results were similar in NBV and CI, except that CI performance was lower overall than NH performance with the NBV for the same number of spectral channels (Fishman et al., 1997; Friesen et al., 1999, 2000, 2001). While we assume that NH listeners can optimally use the information from multiple independent spectral channels, CI listeners can experience electric field interactions that limit the independence across electrodes. In CI listeners, performance increased with the number of electrodes used, but only up to 8–10 electrodes, with no further improvement from 10 to 22 electrodes. This suggests that CI users effectively receive only 8–10 distinct information channels even when the number of electrodes is larger. In contrast, NH listeners continued to improve as the number of channels in a NBV increased. Thus, we attributed the difference between NH and CI listeners to the additional channel smearing in CIs. Further work by Srinivasan et al. (2013), described later, showed that sharpening the electric field does improve performance, but not for all CI listeners.

Monica Padilla started her PhD work in Biomedical Engineering in 1998, following her BS in Electrical Engineering in Bogotá, Colombia. As a multilingual student, Monica was interested in how knowledge of multiple languages affected speech understanding in degraded listening conditions, such as listening through a cochlear implant. She used the NBV to measure speech recognition as a function of the number of bands and as a function of added noise. She recruited monolingual English speakers as well as native Spanish speakers with differing durations of English experience. She measured the effects of such differences in experience when listening to speech degraded by a NBV with 1, 2, 4, 6, 8, 12, and 16 channels, and she measured the effects of linguistic complexity by using vowels and consonants as well as words and sentences. We expected that people with less English experience would have more difficulty with word and sentence recognition as the speech was degraded. That was only partly the case. By far the largest effect was that of English experience on vowel recognition; consonants and sentences were only minimally affected by experience, and words only modestly. The conflict between English and Spanish vowel categories produced a large effect of experience when the speech was degraded and presented in noise. This surprised us, because vowels typically have large amplitude and so are more resistant to added noise than consonants. It appears that the spectral smearing of the vowel formants by the NBV caused considerable mislabeling of English vowels. Even with considerable distortion and noise, the recognition of simple sentences was only mildly degraded (Padilla, 2003).

Deniz Baskent and Tonotopic Distortion

Deniz Baskent joined the lab as a PhD student in 1998, following her BS and MS in Electrical Engineering in Turkey. She was an enthusiastic student who enjoyed the combination of mathematics, engineering, and signal processing required to do psychophysical research with cochlear implants. She used the NBV to manipulate the distribution of tonotopic information and performed the same experiments in cochlear implant listeners (with software assistance from MED-EL) by manipulating the frequency-to-electrode maps (Baskent and Shannon, 2003, 2004, 2007). She quantified the effect of compressing or expanding the tonotopic representation (compression occurs in normal CI mapping). We knew the effects of frequency-place shifts from earlier work by Fu and Shannon (1999c, 2002), who simulated different electrode insertion depths. She measured the combined effects of frequency-place shift plus compression, also in situations of differing electrode insertion lengths. Both frequency-place shift and compression often occur in a cochlear implant, but the degree of each is uncertain, and the interaction between the two types of tonotopic distortion, especially for differing insertion lengths, was not clear. Deniz systematically varied the degree of tonotopic shift and tonotopic compression/expansion and obtained a clear picture of the degree to which these factors can influence speech perception.

Deniz then looked at the possible effect of a “hole” in hearing, i.e., a region of missing or non-functioning nerve fibers. Using the NBV in normal hearing listeners and manipulating electrode-frequency maps in CI listeners, she systematically measured the effect on speech recognition of “holes” of varying width in the apical, middle, or basal region of the cochlea. Octave-wide holes in the high-frequency region had little effect. In contrast, even half-octave holes in the low-frequency region had a dramatic detrimental effect on speech recognition. The patterns of results were nearly identical in NH and CI listeners, suggesting that the effects were due to the loss of information rather than to implant or NBV processing (Shannon et al., 2002; Baskent and Shannon, 2006). Brian Moore was doing similar research on what he called “dead regions”: tonotopic areas of impaired ears that were missing hair cells (Moore et al., 2000). He developed clinical tests to diagnose “dead regions” and developed strategies for mapping hearing aid output around those regions to preserve the information (Moore et al., 2010). Similar tests and signal processing strategies could be developed for cochlear implants to detect regions of nerve loss and adjust the electrode mapping around those regions.
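
As a rough illustration of such a strategy, the hypothetical sketch below drops electrodes whose nominal place frequencies fall inside a hole and redistributes the full analysis range over the surviving electrodes, so that information is warped around the hole rather than discarded. This illustrates the idea discussed above; it is not a published clinical fitting rule.

```python
# Hypothetical remapping of analysis bands around a "hole" (dead region).
import numpy as np

def remap_around_hole(f_lo, f_hi, n_electrodes, hole):
    """Return per-electrode (lo, hi) band edges in Hz, skipping electrodes
    whose nominal place frequency falls inside hole = (hole_lo, hole_hi)."""
    centers = np.geomspace(f_lo, f_hi, n_electrodes)  # nominal place freqs
    alive = [i for i, f in enumerate(centers)
             if not (hole[0] <= f <= hole[1])]
    # Redistribute the full analysis range over the surviving electrodes,
    # so no frequency information is dropped.
    edges = np.geomspace(f_lo, f_hi, len(alive) + 1)
    return {e: (edges[j], edges[j + 1]) for j, e in enumerate(alive)}

# Example: a 22-electrode array with a hole covering 500-1000 Hz.
bands = remap_around_hole(100.0, 8000.0, 22, hole=(500.0, 1000.0))
```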

Deniz Baskent was subsequently a researcher at Starkey Hearing Research Center, Berkeley, CA, and is presently a Professor of Otolaryngology at University Medical Center Groningen, The Netherlands. Her research has expanded to include the interaction between cognitive mechanisms and speech understanding with CIs. Deniz is doing research on perceptual learning and training paradigms that employ cognitive mechanisms, tools like music therapy and training, and linguistic processes related to perception of degraded CI speech in children and adults.

Chinese Tone Recognition with a CI

CI speech processors do not preserve the fundamental frequency (F0) of speech. In English, F0 cues convey talker identity, emotion, and emphasis, but in some languages F0 is used linguistically: a change in F0 determines the meaning of a word. Mandarin Chinese has four lexical tones that define the meaning of words. If F0 information is lost, how well do CIs work for tonal languages that rely strongly on F0 as a linguistic cue?
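
For orientation, the sketch below generates stylized F0 trajectories for the four Mandarin tones (high level, rising, dipping, falling). The Hz values are illustrative of a typical adult talker, not measured data.

```python
# Stylized F0 contours of the four Mandarin lexical tones (illustrative).
import numpy as np

t = np.linspace(0.0, 1.0, 50)  # normalized syllable time
tones = {
    1: np.full_like(t, 220.0),             # tone 1: high level
    2: 160.0 + 80.0 * t,                   # tone 2: rising
    3: 160.0 - 60.0 * np.sin(np.pi * t),   # tone 3: dipping (fall then rise)
    4: 240.0 - 110.0 * t,                  # tone 4: falling
}
```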

As a postdoc, Qian-Jie Fu began exploring Mandarin speech perception through CI processing. Surprisingly, even a NBV with 4 channels was sufficient to provide Mandarin Chinese listeners with 80% recognition of the four lexical tones. Fu et al. (1998b) found that the intensity and duration cues that co-vary with the tones were used to distinguish them even when the voice pitch contour was not available. Qian-Jie extended this work with native Mandarin-speaking CI users (Fu et al., 2004; Hsu et al., 2004; Li et al., 2011; Zhu et al., 2011). Fu and colleagues also created and validated Mandarin speech databases for testing native Chinese CI users (Fu et al., 2011; Zhu et al., 2012). Postdoc Xin Luo further quantified the contributions of F0, amplitude, and duration cues to lexical tone recognition (Luo and Fu, 2004, 2005, 2006, 2009; Luo et al., 2007, 2008, 2009, 2010, 2012). As the number of Chinese CI users continues to increase, such research will remain valuable for optimizing CI signal processing for tonal languages.

Stimulation Rate

In the early 2000s there was great interest in stimulation rate and its possible influence on CI outcomes. At that time signal processors used relatively low rates of stimulation: 250 to 800 pulses/sec (pps) on each electrode. It was known from physiological research that neurons showed extreme phase locking to electric pulses, with all neurons responding on every pulse up to rates in excess of 1000 pps (van den Honert and Stypulkowski, 1987). Electrophysiological recordings showed that high stimulation rates produced acoustic-like adaptation and probabilistic phase locking, but only for stimulation rates exceeding 4000 pps (Wilson et al., 1997). Our group measured speech recognition with signal processors across a wide range of stimulation rates. In contrast to the physiological results, we saw no change in speech recognition once the stimulation rate exceeded approximately 500 pps (Fu and Shannon, 2000b; Friesen et al., 2005; Shannon et al., 2010). Thus, very high stimulation rates offered little advantage for CI speech recognition.

Galvin and Fu (2006, 2009) measured the effects of stimulation rate on modulation detection. Modulation detection is one of the only psychophysical measures that correlates with speech recognition (Fu, 2002), and the better temporal sampling associated with high stimulation rates would presumably provide better modulation detection. For equally loud stimuli, however, modulation detection was actually poorer at high stimulation rates (2000 pps). Thus, even though the physiological responses showed a more natural stochastic response at higher rates, which was thought to be beneficial, performance on both psychophysical tasks and speech recognition deteriorated at higher rates.
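
The stimuli in such experiments are sinusoidally amplitude-modulated pulse trains. The sketch below generates the per-pulse amplitudes for such a train; the parameter values are illustrative, not those of the published studies.

```python
# Sinusoidally amplitude-modulated pulse train for modulation detection:
# carrier rate in pps; modulation frequency (fm) and depth are the
# parameters of interest. Detection threshold is the smallest depth
# a listener can reliably distinguish from an unmodulated train.
import numpy as np

def modulated_pulse_train(rate_pps=2000, fm=10.0, depth=0.2,
                          dur=0.3, base_amp=1.0):
    """Return (per-pulse amplitudes, pulse onset times in seconds)."""
    t = np.arange(int(rate_pps * dur)) / rate_pps  # pulse onset times
    amps = base_amp * (1.0 + depth * np.sin(2 * np.pi * fm * t))
    return amps, t
```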

With few exceptions, CI patients cannot discriminate pulse rates above 300–500 pps. Ray Goldsworthy joined the AIR group in 2011 and investigated whether implant listeners could be trained to discriminate higher rates. Most people were skeptical, but Ray was able to show that, with training, four CI listeners could reliably discriminate stimulation rates up to 1000 pps and higher (Goldsworthy and Shannon, 2014). It remains to be seen whether discrimination at such high pulse rates can lead to improved speech recognition in these trainees.

Fu and Colleagues: Auditory Training

When CI patients first receive their device they generally find the sound quality to be odd or even unpleasant. Post-lingually deafened CI users must adapt to their new mode of electric hearing in relation to previous experience with acoustic hearing. During the initial months of implant use, sound quality and speech understanding greatly improve. If CI listeners can adapt to electric hearing, especially to the acoustic-to-electric frequency mismatch that typically occurs, why bother to precisely map CI patients? There is probably some limit to this adaptation, however. Qian-Jie Fu and colleagues extensively studied adaptation to a distorted frequency-place map. In one of these studies (Fu et al., 2002), CI patients were given progressively shifted frequency maps to wear at home for three months at a time. Acutely measured performance with the shifted maps was much poorer than with the clinical maps. While all subjects showed improved performance with the shifted maps after extended experience, performance remained poorer than with their original clinical map.

Further experiments explored the limits of adaptation to tonotopic shifts. CI simulation work by postdoc Tianhao Li showed that listeners could passively adapt to shifts of up to approximately 6 mm without active training (Li et al., 2009), and could be trained to adapt to even larger shifts. However, using phonetic (lexical) labels for the stimuli actually interfered with adaptation; training was more effective with abstract labels (Li and Fu, 2007). Simulation work by Nogaki et al. (2007) showed that the frequency of training sessions mattered less than the total amount of training performed when adapting to a severe spectral mismatch. Moderate amounts of computer-assisted training produced significant improvements in CI users’ speech perception (Fu et al., 2005a; Fu and Galvin, 2007, 2008; Oba et al., 2011) and music perception (Galvin et al., 2007, 2012). Oba et al. (2013) showed that the improvements were indeed due to improved auditory perception, rather than just general learning.

Perhaps the greatest impact of the training research was the development of computer-based training that could be performed at home. Fu and colleagues developed training software and materials that CI users could download and use at home. This software was later licensed by major CI manufacturers for use in several languages. Currently, the training software is freely available for home computers, smartphones, and tablets via the Emily Fu Foundation (http://www.emilyshannonfufoundation.org). The gains in performance with training can greatly exceed those from CI parameter manipulations, so the availability of effective and affordable auditory training is a major contribution to CI patients worldwide.

Landsberger and Srinivasan: Electric Field Sharpening

It is clear that one of the most important factors in speech recognition performance is the effective number of distinct channels of spectral information; the more channels available, the higher the recognition, especially in noise (Fu et al., 1998a; Friesen et al., 2001). Modern cochlear implants have between 12 and 22 tonotopically spaced electrodes, but all devices have similar outcomes. An experiment with NH and CI listeners showed that smearing the selectivity of each electrode can reduce the effective number of spectral channels available to the listener (Fu and Nogaki, 2005). In cochlear implants the biggest problem is the distance between the electrodes and the nerves to be stimulated. The electric field spreads out from each electrode, so that if the nerve is relatively far away, a broad field will stimulate a wide region of nerves and overlap heavily with the fields from adjacent electrodes.

David Landsberger came to the lab as a postdoc in 2007 with a background in visual psychophysics from Brown University followed by postdoctoral research with Colette McKay working with cochlear implants. David teamed up with a PhD student, Arthi Srinivasan, from the University of Southern California (USC) Biomedical Engineering Department to design experiments investigating techniques of improving the sharpness of each channel in a cochlear implant. Arthi had joined the lab following a MS in Electrical Engineering at the California Institute of Technology (CalTech). David and Arthi utilized current steering and current field sharpening to improve the spectral selectivity of stimulation in a cochlear implant.

The Advanced Bionics Clarion CI has multiple current sources and so is capable of current field steering and sharpening through simultaneous stimulation of multiple electrodes. When multiple electrodes are stimulated simultaneously their electric fields interact. In general such field interactions degrade the stimulation selectivity, but, depending on the levels and phases of the stimulation, multiple electrodes can be used to steer and sharpen the electric field (White and Van Compernolle, 1987; van den Honert and Kelsall, 2007). David and Arthi, together with postdoc Xin Luo, showed that CI listeners could hear pitch changes when the current field was steered from one position to another along the tonotopic axis of the cochlea (Landsberger and Srinivasan, 2009; Luo et al., 2010, 2012; Srinivasan et al., 2012). They also showed that negative current could be delivered on adjacent electrodes to cancel the current field spreading from the primary electrode, thus sharpening the stimulation selectivity. While this physical sharpening was expected, David and Arthi showed that it also created a perceptual change in quality (Landsberger et al., 2012): sharpened activation patterns sounded “sharper” and “cleaner” than broader activation patterns.
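
The cancellation idea can be illustrated with a simple superposition model. In the sketch below, each electrode's current spread is approximated as an exponential decay along the array (a common simplification, not a measured field), and the flanking electrodes carry a fraction sigma of the center current, split between them.

```python
# Illustrative superposition sketch of partial-tripolar field sharpening:
# the center electrode carries current I and the two flanking electrodes
# each carry -sigma*I/2. The spread model and parameter values are
# illustrative assumptions, not measured intracochlear fields.
import numpy as np

def field(x, x0, amp, lam=3.0):
    """Exponential spread of a source at x0 (mm), length constant lam."""
    return amp * np.exp(-np.abs(x - x0) / lam)

x = np.linspace(0.0, 20.0, 401)        # position along the array, mm
center, spacing, sigma = 10.0, 1.1, 0.75
monopolar = field(x, center, 1.0)
tripolar = (field(x, center, 1.0)
            - field(x, center - spacing, sigma / 2)
            - field(x, center + spacing, sigma / 2))
# The tripolar profile is narrower than the monopolar one: the flanking
# negative currents cancel the tails of the center electrode's field.
```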

They measured and compared the excitation patterns from monopolar and sharpened (tripolar, partial bipolar, and more) electrode configurations (Srinivasan et al., 2010; Landsberger et al., 2012; Saoji et al., 2013), and they presented preliminary data showing that field sharpening could also improve speech recognition (Srinivasan et al., 2013). Srinivasan et al. (2011) showed that sharpening only helped when the electrode-neuron distance was in an intermediate range (as inferred from loudness growth functions): sharpening was not needed, and so gave no improvement, when the electrodes were close to the nerves, and was ineffective when the electrodes were far from the nerves.

David is now an Assistant Professor of Otolaryngology at the New York University School of Medicine. His many research interests and collaborative projects include cochlear implants in patients with single-sided deafness, assessing the quality and intelligibility of speech and music when using current steering and sharpening in CI patient maps, and understanding the functioning of the cochlear apex.

Galvin, Fu, and Crew: Music Processing

While cochlear implants are excellent at restoring speech understanding, they are poor at representing music. This is primarily because the typical signal processing eliminates spectral and temporal fine structure from the signal, and the electrode spacing is too coarse to represent the harmonics that are important for conveying musical pitch. John Galvin brought expertise in music and sound engineering to the study of music processing in cochlear implants. Because HEI was located in Los Angeles, we saw several CI patients who had been professional musicians. Working with these patients, Galvin and colleagues studied music perception with CIs and investigated what could be done to improve it. Much previous CI music perception research had focused on characterizing CI users’ subjective quality ratings and familiar melody identification. To better quantify musical pitch perception, Galvin and Fu developed a melodic contour identification (MCI) task, in which listeners are typically asked to identify 5-note melodic contours that represent simple changes in pitch direction. The pitch range of the contours, the spacing between successive notes, the instrument playing the contours, and even a competing contour can be manipulated to better quantify CI users’ functional melodic pitch perception (Galvin et al., 2007, 2009b; Galvin and Fu, 2011). Qian-Jie Fu’s software allowed for flexible testing and training options previously unavailable for CI music research. There is a wide range in CI performance for MCI; while mean performance is generally poor, training can greatly improve melodic pitch perception (Galvin et al., 2007, 2012). CI users’ MCI performance can also be affected by the instrument (Galvin et al., 2008). While NH listeners’ MCI performance is largely unaffected by a competing contour, CI performance deteriorates (Galvin et al., 2009a; Zhu et al., 2011); again, training can greatly improve MCI performance with or without a competing instrument (Galvin et al., 2012). The MCI task has also been used in CI simulations, showing that channel interaction negatively affects melodic pitch perception (Crew et al., 2012). Most recently, Joseph Crew has been exploring the contributions of acoustic and electric hearing to speech and music perception in CI patients with residual acoustic hearing. As the CI currently cannot provide complex pitch perception, combined use of a hearing aid and CI may be the best option to improve music perception for CI patients.
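
As an illustration of MCI-style stimuli, the sketch below generates 5-note contours from a root note and a semitone spacing. The particular contour set and the 220 Hz root are illustrative assumptions, not the exact stimulus set of the published studies.

```python
# Sketch of 5-note melodic contours in the spirit of the MCI task.
import numpy as np

CONTOURS = {
    "rising":         [0, 1, 2, 3, 4],
    "falling":        [4, 3, 2, 1, 0],
    "flat":           [0, 0, 0, 0, 0],
    "rising-falling": [0, 1, 2, 1, 0],
    "falling-rising": [2, 1, 0, 1, 2],
}

def contour_freqs(name, root_hz=220.0, semitone_step=2):
    """Note frequencies (Hz) of a 5-note contour; step size in semitones."""
    steps = np.array(CONTOURS[name]) * semitone_step
    return root_hz * 2.0 ** (steps / 12.0)

print(contour_freqs("rising-falling"))  # [220., 246.9, 277.2, 246.9, 220.]
```

Varying `semitone_step` changes task difficulty: wider spacing makes the pitch direction easier to hear, which is how the task probes the limits of a listener's melodic pitch resolution.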

John Galvin continues to be a strong organizing force for CI research in Los Angeles. Although the former AIR members are distributed between USC, the University of California, Los Angeles (UCLA), as well as other domestic and international locations, he provides collaborative direction and coordination across many investigators and labs around the world. In addition to his innovative work on music perception, his critical eye and friendly demeanor support many of the former AIR people and improves the quality of all of our work.

Pediatric ABI

Although the ABI was developed at HEI, new applications of the device came from Italy. Vittorio Colletti, a neurotologist in Verona, implanted the ABI in children who were born with no auditory nerve. While the outcomes in adults with NF2 had been modest, Professor Colletti observed excellent speech recognition in some of these children, a finding that was initially met with skepticism by the field. Bob Shannon travelled to Verona to observe Dr. Colletti’s patients and confirmed that non-NF2 adults and some children were able to understand open-set speech with the ABI, a level of performance that was rare in NF2 adults (Colletti and Shannon, 2005; Colletti et al., 2010, 2014).

This observation has important implications for neuroscience. Colletti’s ABI children were born with no access to hearing and no prior auditory experience. The ABI provided coarse auditory information in a pattern that had little relation to the normal tonotopic representation of the cochlear nucleus. In spite of the smeared and scrambled information, the children were able to use this highly unnatural sensory representation to develop speech understanding and speech production. This demonstrates a remarkable degree of plasticity in early development and suggests the potential for other sensory prostheses, if provided in early childhood: exact reproduction of the original sensory representation is not necessary (Shannon, 2007, 2012, 2014; Rauschecker and Shannon, 2002; Moore and Shannon, 2009).

CIAP

The Conference on Implantable Auditory Prostheses (CIAP) grew out of a series of small conferences initiated by Bill House on the west coast of the US in the 1970s. These “West Coast CI conferences” brought together researchers and clinicians from the House Ear Institute, Stanford, UC San Francisco, and the University of Washington in a retreat setting to share ideas and data. In 1983 the meeting was accepted as a Gordon Research Conference and expanded to include international CI research groups. The conference left the Gordon organization in 1989 and affiliated with the Engineering Foundation in 1991. The House Ear Institute (with Bob Shannon serving as administrative chair) then took over the organization and administration of the series, providing logistic and administrative support. From 1993 to the present the CIAP meetings grew in importance and attendance, with 450 participants attending the 2013 meeting.

But the organization of CIAP was not accomplished by any one individual; it became a biennial labor of love for the whole AIR group. While the scientific program was always determined by the elected chair and co-chair, the logistics and infrastructure of the meeting were largely handled by the AIR group: compiling the program book, organizing the musical events for the social evening, and coordinating the posters and audio-visual components. It was truly a team effort to organize these meetings, with particular thanks to John Galvin.

Social Structure of Science

One of the key factors in the working of the AIR group over the years was its social structure. Our philosophy has been that the best science comes from the forge of frank discussion among friends. If the group has a friendly dynamic, then serious discussions are ferocious but fun; if there are social tensions in the group, discussions can be awkward or even angry. Many times in lab meetings we heard “that’s the stupidest idea I’ve ever heard,” “why would you ever think THAT?” or “you missed a critical control condition,” yet at the end of the meeting everyone felt that we had arrived at the best solution to the problem. Everyone learned from such heated discussions, which are most effective in a group of people who like and respect each other. Students are sometimes intimidated by such free-for-alls of scientific discussion, but after a while they learn the value of critical thinking without taking it personally, become active participants, and hone their skills in the crucible of hard scientific discussion. Brainstorming like this is most effective when done in a friendly atmosphere, and these friendships can last a lifetime, even after individuals move on to other locations and independent careers.

One of the most rewarding aspects of research with cochlear implants is the interaction with the patients. Over the years we established close personal relationships with many of our test subjects, because they were in the lab so frequently and over many years. Many of the “regulars” became de facto members of the lab, joining us for lunch, social outings, and lab parties. Some experiments, in particular the ones that tested new speech processing strategies, required almost saintly patience and persistence on the part of these people. They inspired us with their commitment and pioneering spirit. They put up with long hours of boring psychophysical testing and weeks of listening to processors that they didn’t like, because they knew that it was necessary for understanding implants and making them better. They were as committed to the research as the investigators. None of this research could have been accomplished without the persistence and commitment of the cochlear implant and ABI patients.

Infrastructure

An often overlooked aspect of research is the tools: the hardware and software that make the research possible. In AIR we shared an important common research interface that allowed us to connect directly to the patient’s implant and control each pulse from our research computers. This interface was originally developed at the electronics shop at Boys Town National Research Hospital (Shannon et al., 1990) and was redesigned and improved by the HEI electronics shop several times during the 24 years. John Wygonski was indispensable for the design and improvement of interfaces that allowed fine control over stimulation of implants from different manufacturers. The House Ear Institute Nucleus Research Interface (HEINRI) was fondly called the “Henry” and was the workhorse of our lab. The HEINRI allowed us to manipulate the pulses delivered to cochlear implants in ways that would not have been possible with any interface available from the company. As people left the lab they took their HEINRIs with them, and eventually we built them for other groups for research use.

Software to drive implant interfaces came from two sources: CONDOR and Fu-ware. CONDOR was a program written by Mark Robert that allowed all investigators to control psychophysical experiments with implant devices from Cochlear and MED-EL. Qian-Jie Fu also developed a set of programs that could control psychophysical experiments and speech processing to implants. Both program suites had been meticulously tested for safety, so that they would not allow stimulation levels that could damage neurons (Shannon, 1992a). New researchers could use these programs to run new experiments without having to “reinvent the wheel”. This software base allowed students and postdocs to initiate experiments quickly. Of course, new students and postdocs had to go through an apprentice phase to learn the principles of safe stimulation. Every new experiment was meticulously checked on an oscilloscope to make sure each electric pulse was exactly the specified duration and amplitude on the specified electrode.

To evaluate the effects of signal processing manipulations we needed a set of standard phonemes to measure confusion matrices and information transfer for vowels and consonants. Vowel materials had been recorded and validated by Hillenbrand et al. (1995), who graciously shared these materials with others. Inspired by their generosity, we recorded and validated consonant materials for use in speech phoneme research and made these materials freely available to all researchers (Shannon et al., 1999).

Spectral ripple detection is a popular method for measuring spectral resolution, but the task requires considerable time to construct the stimuli and to make the psychophysical measurements. Justin Aronoff and David Landsberger (2013) developed and validated a rapid method for measuring spectral ripple detection that could be used clinically. They called it the Spectral-temporally Modulated Ripple Test (SMRT), and they have made the test freely available for anyone to use at http://smrt.tigerspeech.com/.
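
For orientation, the sketch below generates a generic spectrally rippled stimulus of the broad kind such tests use: many pure-tone carriers whose levels follow a sinusoid on the log-frequency axis. It illustrates the stimulus class, not the SMRT's exact definition.

```python
# Generic spectral-ripple stimulus: tone carriers with levels that follow
# a sinusoid in log frequency. Ripple density is in ripples per octave;
# depth in dB. Parameter values are illustrative.
import numpy as np

def rippled_stimulus(fs=22050, dur=0.5, f_lo=100.0, f_hi=8000.0,
                     n_tones=200, ripples_per_octave=2.0, depth_db=20.0,
                     phase=0.0, seed=0):
    t = np.arange(int(fs * dur)) / fs
    freqs = np.geomspace(f_lo, f_hi, n_tones)
    octaves = np.log2(freqs / f_lo)
    # Sinusoidal level contour on the log-frequency axis.
    amp_db = (depth_db / 2.0) * np.sin(
        2 * np.pi * ripples_per_octave * octaves + phase)
    amps = 10.0 ** (amp_db / 20.0)
    rng = np.random.default_rng(seed)
    phases = rng.uniform(0.0, 2 * np.pi, n_tones)  # randomize carrier phases
    sig = (amps[:, None] * np.sin(2 * np.pi * freqs[:, None] * t
                                  + phases[:, None])).sum(axis=0)
    return sig / np.max(np.abs(sig))
```

A discrimination trial typically contrasts stimuli whose ripple phases differ (e.g., `phase=0` vs `phase=np.pi`); the densest ripple a listener can still discriminate indexes their spectral resolution.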

Having the infrastructure of these common hardware and software tools allowed us all to focus on the important issues and theories of electric stimulation of hearing. We are all grateful to John, Mark, David, Justin and Qian-Jie for developing such excellent tools and sharing them with us all.

Conclusions

The House Ear Institute had a long and innovative history of auditory implant development and research until its demise in 2013. We developed a strong community of researchers working with cochlear implants and auditory brainstem implants. The 2013 Lasker-DeBakey Clinical Medical Research Award for cochlear implant development reminds us of the long history of implant research around the world and is a great source of pride for the strong community of researchers worldwide who have contributed to this work. We congratulate Graeme Clark, Blake Wilson, and Ingeborg Hochmair for their contributions and inspiration to cochlear implant research. They represent well the large family of connected researchers who have brought cochlear implants from fantasy to reality.

Figure 1.

Auditory Implant Research at the House Ear Institute (A montage in 2013 by Sandy Oba).

Top Row (l–r): Deniz Baskent, Xin Luo, Qian-Jie Fu, Steve Otto, Lendra Friesen, Fan-Gang Zeng, Monita Chatterjee, Mark Robert

Bottom Row (l–r): Joe Crew, Justin Aronoff, Xiaosong Wang, John (Bear) Galvin, Bob Shannon, David Landsberger, Arthi Srinivasan, Monica Padilla, Sandy Oba

Highlights.

  • Auditory implant research at the House Ear Institute is reviewed.

  • Cochlear implant and auditory brainstem implant research from 1989 to 2013.

  • Implant research on psychophysics, speech recognition, and training.

Acknowledgments

Much of this work could not have been accomplished without the generous funding of the NIDCD. Most of the work described was funded by R01 grants to the principal researchers mentioned, and some by R03 grants to junior members and F31 and F32 grants to students and postdocs. The CIAP meetings have been funded by an R13 conference grant from NIDCD. We thank the NIH for their enabling support of this research. The early ABI clinical project was partly funded by Cochlear Corp., and the development of the PABI was partly funded by an orphan device grant from the FDA. Some research projects were funded by small research grants from MED-EL and Advanced Bionics.

The most sincere thanks go to the incredible patients who volunteer for the long hours of research to make this work possible. Over more than 30 years of research I have observed that many of the first patients volunteering for a new experimental procedure or device do fully understand and appreciate the nature of medical research. They realize that, although there is personal risk, someone must be the pioneer for future patients to benefit. These pioneers are some of the most admirable and inspiring people I have ever known. We could not have accomplished the work reported here without these committed pioneering heroes.

Footnotes

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

References

1. Aronoff JM, Landsberger DM. The development of a modified spectral ripple test. J Acoust Soc Am. 2013;134(2):EL217–22. doi: 10.1121/1.4813802.
2. Baskent D, Shannon RV. Speech recognition under conditions of frequency-place compression and expansion. J Acoust Soc Am. 2003;113:2064–2076. doi: 10.1121/1.1558357.
3. Baskent D, Shannon RV. Frequency-place compression and expansion in cochlear implant patients. J Acoust Soc Am. 2004;116:3130–3140. doi: 10.1121/1.1804627.
4. Baskent D, Shannon RV. Frequency transposition around dead regions simulated with a noise-band vocoder. J Acoust Soc Am. 2006;119:1156–1163. doi: 10.1121/1.2151825.
5. Baskent D, Shannon RV. Combined effects of frequency compression-expansion and shift on speech recognition. Ear Hear. 2007;28(3):277–289. doi: 10.1097/AUD.0b013e318050d398.
6. Brackmann DE, Hitselberger WE, Nelson RA, Moore JK, Waring M, Portillo F, Shannon RV, Telischi F. Auditory brainstem implant. I: Issues in surgical implantation. Otolaryngol Head Neck Surg. 1993;108:624–634. doi: 10.1177/019459989310800602.
7. Chatterjee M. Effects of stimulation mode on threshold and loudness growth in multielectrode cochlear implants. J Acoust Soc Am. 1999;105(2):850–860. doi: 10.1121/1.426274.
8. Chatterjee M. Modulation masking in cochlear implant listeners: envelope vs. tonotopic components. J Acoust Soc Am. 2003;113(4):2042–2053. doi: 10.1121/1.1555613.
9. Chatterjee M, Oba SI. Across- and within-channel envelope interactions in cochlear implant listeners. J Assoc Res Otolaryngol. 2004;5(4):360–375. doi: 10.1007/s10162-004-4050-5.
10. Chatterjee M, Oba SI. Noise improves modulation detection by cochlear implant listeners at moderate carrier levels. J Acoust Soc Am. 2005;118(2):993–1002. doi: 10.1121/1.1929258.
11. Chatterjee M, Robert ME. Noise enhances modulation sensitivity in cochlear implant listeners: stochastic resonance in a prosthetic sensory system? J Assoc Res Otolaryngol. 2001;2(2):159–171. doi: 10.1007/s101620010079.
12. Chatterjee M, Shannon RV. Forward masking excitation patterns in multielectrode cochlear implants. J Acoust Soc Am. 1998;103(5):2565–2572. doi: 10.1121/1.422777.
13. Chatterjee M, Fu QJ, Shannon RV. Effects of phase duration and electrode separation on loudness growth in cochlear implant listeners. J Acoust Soc Am. 2000;107(3):1637–1644. doi: 10.1121/1.428448.
14. Chatterjee M, Galvin JJ III, Fu QJ, Shannon RV. Effects of stimulation mode, level and location on forward-masked excitation patterns in cochlear implant patients. J Assoc Res Otolaryngol. 2006a;7(1):15–25. doi: 10.1007/s10162-005-0019-2.
15. Chatterjee M, Sarampalis A, Oba SI. Auditory stream segregation with cochlear implants: A preliminary report. Hear Res. 2006b;222(1–2):100–107. doi: 10.1016/j.heares.2006.09.001.
16. Chouard CH, Fugain C, Meyer B, Lacombe H. Long-term results of the multichannel cochlear implant. Ann N Y Acad Sci. 1983;405:387–411. doi: 10.1111/j.1749-6632.1983.tb31653.x.
17. Colletti L, Shannon RV, Colletti V. The development of auditory perception in children following auditory brainstem implantation. Audiol Neurotol. 2014; in press. doi: 10.1159/000363684.
18. Colletti V, Shannon RV. Open set speech perception with auditory brainstem implant? Laryngoscope. 2005;115:1974–1978. doi: 10.1097/01.mlg.0000178327.42926.ec.
19. Colletti V, Shannon RV, Carner M, Veronese S, Colletti L. Complications in auditory brainstem implant surgery in adults and children. Otol Neurotol. 2010;31(4):558–64. doi: 10.1097/MAO.0b013e3181db7055.
20. Crew JD, Galvin JJ III, Fu QJ. Channel interaction limits melodic pitch perception in simulated cochlear implants. J Acoust Soc Am. 2012;132(5):EL429–35. doi: 10.1121/1.4758770.
21. Dudley HW. The vocoder. Bell Labs Rec. 1939;18:122–126.
22. Eisen MD. Djourno, Eyries, and the first implanted electrical neural stimulator to restore hearing. Otol Neurotol. 2003;24:500–506. doi: 10.1097/00129492-200305000-00025.
23. Fishman K, Shannon RV, Slattery WH. Speech recognition as a function of the number of electrodes used in the SPEAK cochlear implant speech processor. J Speech Hear Res. 1997;40:1201–1215. doi: 10.1044/jslhr.4005.1201. (Note: won the JSHR Editor's Award for best article on Hearing, 1997.)
24. Friesen LM, Shannon RV, Slattery WH. Effects of frequency allocation on phoneme recognition with the Nucleus-22 cochlear implant. Am J Otol. 1999;20(6):729–734.
25. Friesen LM, Shannon RV, Slattery WH. Effects of electrode location on speech recognition in Nucleus-22 cochlear implant listeners. J Am Acad Audiol. 2000;11(8):418–428.
26. Friesen LM, Shannon RV, Baskent D, Wang X. Speech recognition in noise as a function of the number of spectral channels: comparison of acoustic hearing and cochlear implants. J Acoust Soc Am. 2001;110(2):1150–1163. doi: 10.1121/1.1381538.
27. Friesen LM, Shannon RV, Cruz RJ. Speech recognition as a function of stimulation rate in Clarion and Nucleus-22 cochlear implants. Audiol Neurotol. 2005;10:169–184. doi: 10.1159/000084027.
28. Fu QJ. Temporal processing and speech recognition in cochlear implant users. Neuroreport. 2002;13(13):1635–9. doi: 10.1097/00001756-200209160-00013.
29. Fu QJ, Galvin JJ III. Recognition of spectrally asynchronous speech by normal-hearing listeners and Nucleus-22 cochlear implant users. J Acoust Soc Am. 2001;109:1166. doi: 10.1121/1.1344158.
30. Fu QJ, Galvin JJ III. Computer-assisted speech training for cochlear implant patients: feasibility, outcomes, and future directions. Semin Hear. 2007;28(2). doi: 10.1055/s-2007-973440.
31. Fu QJ, Galvin JJ III. Maximizing cochlear implant patients' performance with advanced speech training procedures. Hear Res. 2008;242(1–2):198–208. doi: 10.1016/j.heares.2007.11.010.
32. Fu QJ, Nogaki G. Noise susceptibility of cochlear implant users: the role of spectral resolution and smearing. J Assoc Res Otolaryngol. 2005;6(1):19–27. doi: 10.1007/s10162-004-5024-3.
33. Fu QJ, Shannon RV. Effects of amplitude nonlinearity on phoneme recognition by cochlear implant users and normal-hearing listeners. J Acoust Soc Am. 1998;104(5):2570–2577. doi: 10.1121/1.423912.
34. Fu QJ, Shannon RV. Effects of electrode location and spacing on speech recognition with the Nucleus-22 cochlear implant. Ear Hear. 1999a;20(4):321–331. doi: 10.1097/00003446-199908000-00005.
35. Fu QJ, Shannon RV. Effects of electrode configuration and frequency allocation on vowel recognition with the Nucleus-22 cochlear implant. Ear Hear. 1999b;20(4):332–344. doi: 10.1097/00003446-199908000-00006.
36. Fu QJ, Shannon RV. Recognition of spectrally degraded and frequency-shifted vowels in acoustic and electric hearing. J Acoust Soc Am. 1999c;105(3):1889–1900. doi: 10.1121/1.426725.
37. Fu QJ, Shannon RV. Recognition of spectrally degraded speech in noise with nonlinear amplitude mapping. Proceedings of the 1999 IEEE Conference on Acoustics, Speech, and Signal Processing. 1999d;1:369–372.
38. Fu QJ, Shannon RV. Effect of acoustic dynamic range on phoneme recognition in cochlear implant listeners. J Acoust Soc Am. 1999e;106(6):L65–L70. doi: 10.1121/1.428148.
39. Fu QJ, Shannon RV. Phoneme recognition as a function of signal-to-noise ratio under nonlinear amplitude mapping by cochlear implant users. J Acoust Soc Am. 1999f;106(2):L18–L23. doi: 10.1121/1.427031.
40. Fu QJ, Shannon RV. Effects of dynamic range and amplitude mapping on phoneme recognition in Nucleus-22 cochlear implant users. Ear Hear. 2000a;21(3):227–235. doi: 10.1097/00003446-200006000-00006.
41. Fu QJ, Shannon RV. Effect of stimulation rate on phoneme recognition in cochlear implants. J Acoust Soc Am. 2000b;107(1):589–597. doi: 10.1121/1.428325.
42. Fu QJ, Shannon RV. Frequency mapping in cochlear implants. Ear Hear. 2002;23:339–348. doi: 10.1097/00003446-200208000-00009.
43. Fu QJ, Shannon RV, Wang X. Effects of noise and spectral resolution on vowel and consonant recognition: acoustic and electric hearing. J Acoust Soc Am. 1998a;104(6):3586–3596. doi: 10.1121/1.423941.
44. Fu QJ, Zeng FG, Shannon RV, Soli SD. Importance of tonal envelope cues in Chinese speech recognition. J Acoust Soc Am. 1998b;104(1):505–510. doi: 10.1121/1.423251.
45. Fu QJ, Shannon RV, Galvin JJ III. Perceptual learning following changes in the frequency-to-electrode assignment with the Nucleus-22 cochlear implant. J Acoust Soc Am. 2002;112(4):1664–1674. doi: 10.1121/1.1502901.
46. Fu QJ, Hsu CJ, Horng MJ. Effects of speech processing strategy on Chinese tone recognition by Nucleus-24 cochlear implant users. Ear Hear. 2004;25(5):501–8. doi: 10.1097/01.aud.0000145125.50433.19.
47. Fu QJ, Chinchilla S, Galvin JJ. The role of spectral and temporal cues in voice gender discrimination by normal-hearing listeners and cochlear implant users. J Assoc Res Otolaryngol. 2004;5(3):253–60. doi: 10.1007/s10162-004-4046-1.
48. Fu QJ, Nogaki G, Galvin JJ III. Auditory training with spectrally shifted speech: implications for cochlear implant patient auditory rehabilitation. J Assoc Res Otolaryngol. 2005a;6(2):180–9. doi: 10.1007/s10162-005-5061-6.
49. Fu QJ, Chinchilla S, Nogaki G, Galvin JJ III. Voice gender identification by cochlear implant users: the role of spectral and temporal resolution. J Acoust Soc Am. 2005b;118(3 Pt 1):1711–8. doi: 10.1121/1.1985024.
50. Fu QJ, Zhu M, Wang X. Development and validation of the Mandarin speech perception test. J Acoust Soc Am. 2011;129(6):EL267–73. doi: 10.1121/1.3590739.
51. Galvin JJ III, Fu QJ. Effects of stimulation rate, mode and level on modulation detection by cochlear implant users. J Assoc Res Otolaryngol. 2006;6(3):269–79. doi: 10.1007/s10162-005-0007-6.
52. Galvin JJ III, Fu QJ. Influence of stimulation rate and loudness growth on modulation detection and intensity discrimination in cochlear implant users. Hear Res. 2009;250(1–2):46–54. doi: 10.1016/j.heares.2009.01.009.
53. Galvin JJ III, Fu QJ. Effect of bandpass filtering on melodic contour identification by cochlear implant users. J Acoust Soc Am. 2011;129(2):EL39–44. doi: 10.1121/1.3531708.
54. Galvin JJ III, Fu QJ, Nogaki G. Melodic contour identification by cochlear implant listeners. Ear Hear. 2007;28(3):302–19. doi: 10.1097/01.aud.0000261689.35445.20.
55. Galvin JJ III, Fu QJ, Oba S. Effect of instrument timbre on melodic contour identification by cochlear implant users. J Acoust Soc Am. 2008;124(4):EL189–95. doi: 10.1121/1.2961171.
56. Galvin JJ III, Fu QJ, Oba SI. Effect of a competing instrument on melodic contour identification by cochlear implant users. J Acoust Soc Am. 2009a;125(3):EL98–103. doi: 10.1121/1.3062148.
57. Galvin JJ III, Fu QJ, Shannon RV. Melodic contour identification and music perception by cochlear implant users. Ann N Y Acad Sci. 2009b;1169:518–533. doi: 10.1111/j.1749-6632.2009.04551.x.
58. Galvin JJ III, Eskridge E, Oba SI, Fu QJ. Melodic contour identification training in cochlear implant users with and without a competing instrument. Semin Hear. 2012;33(4):399–409.
59. Goldsworthy RL, Shannon RV. Training improves cochlear implant rate discrimination on a psychophysical task. J Acoust Soc Am. 2014;135(1):334–41. doi: 10.1121/1.4835735.
60. Grose JH, Hall JW III, Buss E, Hatch DR. Detection of spectrally complex signals in comodulated maskers: effect of temporal fringe. J Acoust Soc Am. 2005;118(6):3774–82. doi: 10.1121/1.2108958.
61. Hillenbrand J, Getty LA, Clark MJ, Wheeler K. Acoustic characteristics of American English vowels. J Acoust Soc Am. 1995;97(5):3099–3110. doi: 10.1121/1.411872.
62. Hitselberger WE, House WF, Edgerton BJ, Whitaker S. Cochlear nucleus implants. Otolaryngol Head Neck Surg. 1984;92(1):52–4. doi: 10.1177/019459988409200111.
63. House WF, Eisenberg LS. The cochlear implant in preschool-aged children. Acta Otolaryngol. 1983;95(5–6):632–8. doi: 10.3109/00016488309139455.
64. Hsu CJ, Shiao SH, Chen YS, Horng MJ, Fu QJ. Effects of speech-coding strategies on speech perception performance in Mandarin-speaking children with Nucleus 24 cochlear implants. Cochlear Implants Int. 2004;5(Suppl 1):45–7. doi: 10.1179/cim.2004.5.Supplement-1.45.
65. Kidd G Jr, Mason CR, Richards VM. Multiple bursts, multiple looks, and stream coherence in the release from informational masking. J Acoust Soc Am. 2003;114(5):2835–45. doi: 10.1121/1.1621864.
66. Landsberger DM, Srinivasan AG. Virtual channel discrimination is improved by current focusing in cochlear implant recipients. Hear Res. 2009;254(1–2):34–41. doi: 10.1016/j.heares.2009.04.007.
67. Landsberger DM, Padilla M, Srinivasan AG. Reducing current spread using current focusing in cochlear implant users. Hear Res. 2012;284(1–2):16–24. doi: 10.1016/j.heares.2011.12.009.
68. Li T, Fu QJ. Perceptual adaptation to spectrally shifted vowels: training with nonlexical labels. J Assoc Res Otolaryngol. 2007;8(1):32–41. doi: 10.1007/s10162-006-0059-2.
69. Li T, Galvin JJ III, Fu QJ. Interactions between unsupervised learning and the degree of spectral mismatch on short-term perceptual adaptation to spectrally shifted speech. Ear Hear. 2009;30(2):238–49. doi: 10.1097/AUD.0b013e31819769ac.
70. Li Y, Zhang G, Kang HY, Liu S, Han D, Fu QJ. Effects of speaking style on speech intelligibility for Mandarin-speaking cochlear implant users. J Acoust Soc Am. 2011;129(6):EL242–7. doi: 10.1121/1.3582148.
71. Luo X, Fu QJ. Enhancing Chinese tone recognition by manipulating amplitude envelope: implications for cochlear implants. J Acoust Soc Am. 2004;116:3659–3667. doi: 10.1121/1.1783352.
72. Luo X, Fu QJ. Speaker normalization for Chinese vowel recognition in cochlear implants. IEEE Trans Biomed Eng. 2005;52:1358–1361. doi: 10.1109/TBME.2005.847530.
73. Luo X, Fu QJ. Contribution of low-frequency acoustic information to Chinese speech recognition in cochlear implant simulations. J Acoust Soc Am. 2006;120:2260–2266. doi: 10.1121/1.2336990.
74. Luo X, Fu QJ. Concurrent-vowel and tone recognitions in acoustic and simulated electric hearing. J Acoust Soc Am. 2009;125:3223–3233. doi: 10.1121/1.3106534.
75. Luo X, Fu QJ, Galvin JJ. Vocal emotion recognition by normal-hearing listeners and cochlear implant users. Trends Amplif. 2007;11:301–315. doi: 10.1177/1084713807305301.
76. Luo X, Fu QJ, Wei CG, Cao KL. Speech recognition and temporal amplitude modulation processing by Mandarin-speaking cochlear implant users. Ear Hear. 2008;29:957–970. doi: 10.1097/AUD.0b013e3181888f61.
77. Luo X, Fu QJ, Wu HP, Hsu CJ. Concurrent-vowel and tone recognition by Mandarin-speaking cochlear implant users. Hear Res. 2009;256:75–84. doi: 10.1016/j.heares.2009.07.001.
78. Luo X, Landsberger DM, Padilla M, Srinivasan AG. Encoding pitch contours using current steering. J Acoust Soc Am. 2010;128:1215–1223. doi: 10.1121/1.3474237.
79. Luo X, Padilla M, Landsberger DM. Pitch contour identification with combined place and temporal cues using cochlear implants. J Acoust Soc Am. 2012;131:1325–1336. doi: 10.1121/1.3672708.
80. Matthies C, Brill S, Varallyay C, Solymosi L, Gelbrich G, Roosen K, Ernestus RI, Helms J, Hagen R, Mlynski R, Shehata-Dieler W, Müller J. Auditory brainstem implants in neurofibromatosis type 2: is open speech perception feasible? J Neurosurg. 2014;120(2):546–58. doi: 10.3171/2013.9.JNS12686.
81. McCreery DB. Cochlear nucleus auditory prostheses. Hear Res. 2008;242(1–2):64–73. doi: 10.1016/j.heares.2007.11.014.
82. McCreery DB, Yuen TG, Agnew WF, Bullara LA. Stimulus parameters affecting tissue injury during microstimulation in the cochlear nucleus of the cat. Hear Res. 1994;77(1–2):105–15. doi: 10.1016/0378-5955(94)90258-5.
83. McCreery DB, Shannon RV, Moore JK, Chatterjee M. Accessing the tonotopic organization of the ventral cochlear nucleus by intranuclear microstimulation. IEEE Trans Rehabil Eng. 1998;6(4):391–399. doi: 10.1109/86.736153.
84. McKay CM, Henshall KR, Farrell RJ, McDermott HJ. A practical method of predicting the loudness of complex electrical stimuli. J Acoust Soc Am. 2003;113(4 Pt 1):2054–63. doi: 10.1121/1.1558378.
85. Moore BC, Huss M, Vickers DA, Glasberg BR, Alcántara JI. A test for the diagnosis of dead regions in the cochlea. Br J Audiol. 2000;34(4):205–24. doi: 10.3109/03005364000000131.
86. Moore BCJ, Glasberg B, Schlueter A. Detection of dead regions in the cochlea: relevance for combined electric and acoustic stimulation. Adv Otorhinolaryngol. 2010;67:43–50. doi: 10.1159/000262595.
87. Moore D, Shannon RV. Beyond cochlear implants: awakening the deafened brain. Nat Neurosci. 2009;12:687–691. doi: 10.1038/nn.2326. (Invited perspective.)
88. Moore JK. The human auditory brain stem: a comparative view. Hear Res. 1987;29(1):1–32. doi: 10.1016/0378-5955(87)90202-4.
89. Moore JK, Osen KK. The cochlear nuclei in man. Am J Anat. 1979;154(3):393–418. doi: 10.1002/aja.1001540306.
90. Nogaki G, Fu QJ, Galvin JJ III. Effect of training rate on recognition of spectrally shifted speech. Ear Hear. 2007;28(2):132–40. doi: 10.1097/AUD.0b013e3180312669.
91. Oba SI, Fu QJ, Galvin JJ III. Digit training in noise can improve cochlear implant users' speech understanding in noise. Ear Hear. 2011;32(5):573–81. doi: 10.1097/AUD.0b013e31820fc821.
92. Oba SI, Galvin JJ III, Fu QJ. Minimal effects of visual memory training on auditory performance of adult cochlear implant users. J Rehabil Res Dev. 2013;50(1):99–110. doi: 10.1682/jrrd.2011.12.0229.
93. Otto SR, Shannon RV, Brackmann DE, Hitselberger WE, Staller S, Menapace C. The multichannel auditory brainstem implant: performance in 20 patients. Otolaryngol Head Neck Surg. 1998;118(3):291–303. doi: 10.1016/S0194-5998(98)70304-3.
94. Otto SR, Brackmann DE, Hitselberger WE, Shannon RV. The multichannel auditory brainstem implant update: performance in 61 patients. J Neurosurg. 2002;96:1063–1071. doi: 10.3171/jns.2002.96.6.1063.
95. Otto SR, Shannon RV, Brackmann DE, Hitselberger WE, McCreery DB, Moore JK, Wilkinson EP. Audiological outcomes with the penetrating electrode auditory brainstem implant. Otol Neurotol. 2008;29:1147–1154. doi: 10.1097/MAO.0b013e31818becb4.
96. Padilla M. English phoneme and word recognition by nonnative English listeners as a function of spectral resolution and English experience. PhD dissertation, University of Southern California; 2003.
97. Rauschecker J, Shannon RV. Sending sound to the brain. Science. 2002;295:1025–1029. doi: 10.1126/science.1067796. (Invited Viewpoint.)
98. Saoji AA, Landsberger DM, Padilla M, Litvak LM. Masking patterns for monopolar and phantom electrode stimulation in cochlear implants. Hear Res. 2013;298:109–116. doi: 10.1016/j.heares.2012.12.006.
99. Shannon RV. Multichannel electrical stimulation of the auditory nerve in man: channel interaction. Hear Res. 1983;12:1–16. doi: 10.1016/0378-5955(83)90115-6.
100. Shannon RV. Threshold functions for electrical stimulation of the human cochlear nucleus. Hear Res. 1989a;40:173–178. doi: 10.1016/0378-5955(89)90110-x.
101. Shannon RV. A model of threshold for pulsatile electrical stimulation of cochlear implants. Hear Res. 1989b;40:197–204. doi: 10.1016/0378-5955(89)90160-3.
102. Shannon RV. Forward masking in patients with cochlear implants. J Acoust Soc Am. 1990;88:741–744. doi: 10.1121/1.399777.
103. Shannon RV. A model of safe levels for electrical stimulation. IEEE Trans Biomed Eng. 1992a;39:424–426. doi: 10.1109/10.126616.
104. Shannon RV. Temporal modulation transfer functions in patients with cochlear implants. J Acoust Soc Am. 1992b;91:1974–1982. doi: 10.1121/1.403807.
105. Shannon RV. Understanding hearing through deafness. Proc Natl Acad Sci U S A. 2007;104(17):6883–6884. doi: 10.1073/pnas.0702220104. (Invited commentary.)
106. Shannon RV. Advances in auditory prostheses. Curr Opin Neurol. 2012;25(1):61–66. doi: 10.1097/WCO.0b013e32834ef878.
107. Shannon RV. Ear and brain. Cell. 2014;156:863. (Invited opinion for "Leading Edge Voices: Studying Circuits with Therapy in Mind".)
108. Shannon RV, Otto SR. Psychophysical measures from electrical stimulation of the human cochlear nucleus. Hear Res. 1990;47:159–168. doi: 10.1016/0378-5955(90)90173-m.
109. Shannon RV, Adams DD, Ferrel RL, Palumbo RL, Grandgenett M. A computer interface for psychophysical and speech research with the Nucleus cochlear implant. J Acoust Soc Am. 1990;87:905–907. doi: 10.1121/1.398902.
110. Shannon RV, Fayad J, Moore JK, Lo W, O'Leary M, Otto S, Nelson RA. Auditory brainstem implant. II: Post-surgical issues and performance. Otolaryngol Head Neck Surg. 1993;108:635–643. doi: 10.1177/019459989310800603.
111. Shannon RV, Zeng FG, Wygonski J, Kamath V, Ekelid M. Speech recognition with primarily temporal cues. Science. 1995;270:303–304. doi: 10.1126/science.270.5234.303.
112. Shannon RV, Zeng FG, Wygonski J. Speech recognition with altered spectral distribution of envelope cues. J Acoust Soc Am. 1998;104(4):2467–2476. doi: 10.1121/1.423774.
113. Shannon RV, Jensvold A, Padilla M, Robert M, Wang X. Consonant recordings for speech testing. J Acoust Soc Am. 1999;106(6):L71–L74. doi: 10.1121/1.428150.
114. Shannon RV, Galvin JJ III, Baskent D. Holes in hearing. J Assoc Res Otolaryngol. 2002;3(2):188–195. doi: 10.1007/s101620020021.
115. Shannon RV, Fu QJ, Galvin J. The number of spectral channels required for speech recognition depends on the difficulty of the listening situation. Acta Otolaryngol Suppl. 2004;552:50–54. doi: 10.1080/03655230410017562.
116. Shannon RV, Cruz RJ, Galvin JJ III. Effect of stimulation rate on cochlear implant users' phoneme, word and sentence recognition in quiet and in noise. Audiol Neurotol. 2010;16:113–123. doi: 10.1159/000315115.
117. Srinivasan AG, Landsberger DM, Shannon RV. Current focusing sharpens local peaks of excitation in cochlear implant stimulation. Hear Res. 2010;270(1–2):89–100. doi: 10.1016/j.heares.2010.09.004.
118. Srinivasan AG, Landsberger DM, Shannon RV. Relating loudness growth to virtual channel discrimination. ARO Midwinter Meeting; 2011.
119. Srinivasan AG, Shannon RV, Landsberger DM. Improving virtual channel discrimination in a multi-channel context. Hear Res. 2012;286(1–2):19–29. doi: 10.1016/j.heares.2012.02.011.
120. Srinivasan AG, Padilla M, Shannon RV, Landsberger DM. Improving speech perception in noise with current focusing in cochlear implant users. Hear Res. 2013;299:29–36. doi: 10.1016/j.heares.2013.02.004.
121. van den Honert C, Kelsall DC. Focused intracochlear electric stimulation with phased array channels. J Acoust Soc Am. 2007;121(6):3703–16. doi: 10.1121/1.2722047.
122. van den Honert C, Stypulkowski PH. Temporal response patterns of single auditory nerve fibers elicited by periodic electrical stimuli. Hear Res. 1987;29(2–3):207–22. doi: 10.1016/0378-5955(87)90168-7.
123. Van Tasell DJ, Soli SD, Kirby VM, Widin GP. Speech waveform envelope cues for consonant recognition. J Acoust Soc Am. 1987;82(4):1152–61. doi: 10.1121/1.395251.
124. White R, Van Compernolle D. Current spreading and speech-processing strategies for cochlear prostheses. Ann Otol Rhinol Laryngol. 1987;96(1):22–24.
125. Wilson BS, Finley CC, Lawson DT, Wolford RD, Eddington DK, Rabinowitz WM. Better speech recognition with cochlear implants. Nature. 1991;352(6332):236–8. doi: 10.1038/352236a0.
126. Wilson BS, Finley CC, Lawson DT, Zerbi M. Temporal representations with cochlear implants. Am J Otol. 1997;18(6 Suppl):S30–4.
127. Zeng FG, Shannon RV. Loudness balance between electric and acoustic stimulation. Hear Res. 1992;60:231–235. doi: 10.1016/0378-5955(92)90024-h.
128. Zeng FG, Shannon RV. Loudness coding mechanisms inferred from electric stimulation of the human auditory system. Science. 1994;264:564–566. doi: 10.1126/science.8160013.
129. Zeng FG, Shannon RV. Loudness of simple and complex stimuli in electric hearing. Ann Otol Rhinol Laryngol. 1995a;104(9 Suppl 166):235–238.
130. Zeng FG, Shannon RV. Possible origins of the non-monotonic intensity discrimination function in forward masking. Hear Res. 1995b;82:216–224. doi: 10.1016/0378-5955(94)00179-t.
131. Zeng FG, Shannon RV. Psychophysical laws revealed by electric hearing. NeuroReport. 1999;10(9):1–5. doi: 10.1097/00001756-199906230-00025.
132. Zeng FG, Grant G, Niparko J, Galvin J, Shannon RV, Opie J, Segel P. Speech dynamic range and its effect on cochlear implant performance. J Acoust Soc Am. 2002;111:377–386. doi: 10.1121/1.1423926.
133. Zhu M, Fu QJ, Galvin JJ III, Jiang Y, Xu J, Xu C, Tao D, Chen B. Mandarin Chinese speech recognition by pediatric cochlear implant users. Int J Pediatr Otorhinolaryngol. 2011;75(6):793–800. doi: 10.1016/j.ijporl.2011.03.009.
134. Zhu M, Wang X, Fu QJ. Development and validation of the Mandarin disyllable recognition test. Acta Otolaryngol. 2012;132(8):855–61. doi: 10.3109/00016489.2011.653668.
