Published in final edited form as: Hear Res. 2008 Jun 22;242(1–2):3–21. doi: 10.1016/j.heares.2008.06.005

Cochlear implants: a remarkable past and a brilliant future

Blake S Wilson a,*, Michael F Dorman b

Abstract

The aims of this paper are to (i) provide a brief history of cochlear implants; (ii) present a status report on the current state of implant engineering and the levels of speech understanding enabled by that engineering; (iii) describe limitations of current signal processing strategies; and (iv) suggest new directions for research. With current technology the “average” implant patient, when listening to predictable conversations in quiet, is able to communicate with relative ease. However, in an environment typical of a workplace the average patient has a great deal of difficulty. Patients who are “above average” in terms of speech understanding can achieve 100% correct scores on the most difficult tests of speech understanding in quiet, but they too have significant difficulty when signals are presented in noise. The major factors in these outcomes appear to be (i) a loss of low-frequency, fine structure information, possibly due to the envelope extraction algorithms common to cochlear implant signal processing; (ii) a limitation in the number of effective channels of stimulation due to overlap in the electric fields from different electrodes; and (iii) central processing deficits, especially for patients with poor speech understanding. Two recent developments, bilateral implants and combined electric and acoustic stimulation, promise to remediate some of the difficulties experienced by patients in noise and to reinstate low-frequency fine structure information. If other possibilities are realized, e.g., electrodes that emit drugs to inhibit cell death following trauma and to induce the growth of neurites toward the electrodes, then the future is very bright indeed.

Keywords: Auditory prosthesis, Cochlear implant, Cortical plasticity, Deafness, Electrical stimulation, Hearing, Neural prosthesis, Speech perception

1. Introduction

In 1802, Ludwig van Beethoven fell into a deep depression following a nearly complete loss of his remaining hearing. His physician recommended rest in Heiligenstadt, then a village and now a part of greater Vienna. There, in October of that year, Beethoven wrote his famous Heiligenstadt Testament, addressed to his two brothers and meant to be read after his death. In it, he said (translated from the original German into modern English):

“…For me there can be no relaxation in human society; no refined conversations, no mutual confidences. I must live quite alone and may creep into society only as often as sheer necessity demands. …Such experiences almost made me despair, and I was on the point of putting an end to my life – the only thing that held me back was my art… [and] thus I have dragged on this miserable existence.”

Helen Keller wrote in her autobiography, The Story of My Life (first published in 1903):

“…I am just as deaf as I am blind. The problems of deafness are deeper and more complex, if not more important, than those of blindness. Deafness is a much worse misfortune. For it means the loss of the most vital stimulus – the sound of the voice that brings language, sets thoughts astir and keeps us in the intellectual company of man.”

These poignant descriptions convey the feelings of isolation that often accompany deafness. Beethoven stressed loneliness as the major hardship, as opposed to a separation from his music. Helen Keller stressed that “blindness cuts one off from things, but deafness cuts one off from people.”

Just thirty years ago there were no effective treatments for deafness or severe hearing impairments. The advent of cochlear implants (CIs) changed that, and today implants are widely regarded as one of the great achievements of modern medicine.

The purposes of this paper are first to provide a brief history of implants and then to present a status report of where we are and where we are headed with implants. The status report describes current designs and levels of performance. It also presents strengths and weaknesses of present-day implant systems, and offers some possibilities for addressing the weaknesses. These contributions are meant to serve as an introduction and overview for other papers in this special issue on Frontiers of Auditory Prosthesis Research: Implications for Clinical Practice. In addition, the contributions are meant to celebrate the courage and persistence of the pioneers who made this “miracle” of modern medicine possible. The authors of the papers in this issue, along with their many contemporary colleagues, are standing on broad shoulders indeed, and it is our shared challenge and opportunity to move this technology forward and to do so at the same remarkable pace as in the past 30 years.

2. Historical context

As recently as the early 1980s, many eminent and highly knowledgeable people believed that CIs would provide only an awareness of environmental sounds and possibly speech cadences to their users. Many were skeptical of implants and thought that mimicking or reinstating the function of the exquisite machinery in the normal inner ear was a fool’s dream. Among these critics were world-renowned experts in otology and auditory physiology. Fortunately, pioneers persisted in the face of this intense criticism and provided the foundations for the present devices.

The early history of CIs is illustrated in Fig. 1, which shows principal events and the main developers of CI systems. The history begins with Alessandro Volta in 1790. He connected each end of a battery “stack” (or “pile,” as he called it) with a wire that terminated with a conductive rod. He then placed each of the two rods within his ear canals and experienced a “boom within the head,” followed by a sensation of sound similar to that of “boiling, thick soup.” He immediately terminated the experiment and did not repeat it. This was the first report of auditory percepts elicited with electrical stimulation, although it is not certain whether the percepts were produced with direct electrical activation of auditory neurons or via electro-mechanical effects, such as those underlying electrophonic hearing (e.g., Stevens, 1937). The voltage of the battery stack was about 50 V.

Fig. 1.

Early history of cochlear implants. Developers and places of origin are shown, along with a timeline for the various efforts. Initial stages of development are depicted with the light lines, and clinical applications of devices are depicted with the heavy lines. Most of these devices are no longer in use, and many of the development efforts have been discontinued. Present devices and efforts are described in the text. (This figure is adapted from a historical model conceptualized by Donald K. Eddington, Ph.D., of the Massachusetts Eye & Ear Infirmary, and is used here with his permission. The figure also appeared in Niparko and Wilson, 2000, and is reprised here with the permission of Lippincott Williams & Wilkins.)

The first implant of a device for electrical stimulation of the auditory nerve was performed by Djourno and Eyriès in Paris in 1957. An induction coil was used, with one end placed on the stump of the auditory nerve or adjacent brainstem and the other end within the temporalis muscle (the patient had had bilateral cholesteatomas which had been removed in prior operations, taking the cochleas and peripheral parts of the auditory nerves with them). The patient used the device for several months before it failed, and was able to sense the presence of environmental sounds but could not understand speech or discriminate among speakers or many sounds. He could, however, discriminate among (i) large changes in frequencies of stimulation below about 1000 Hz, and (ii) speech sounds in small closed sets (e.g., with three words in a set), most likely on the basis of rhythmic cues. He was re-implanted with another device following the failure of the first device, but this second device also failed after a short period.

This demonstration of direct electrical stimulation of the auditory system was not widely known outside of France until years later. By serendipity, a patient of Dr. William F. House in Los Angeles gave him a newspaper article that very briefly described the work of Djourno and Eyriès. Dr. House was inspired by the account, and he initiated an effort to develop a practical and reliable way to treat deafness using electrical stimulation of the cochlea. Dr. House first worked with Dr. James Doyle, a neurosurgeon, and later with Jack Urban, an engineer, and others. The first implants by Dr. House were performed in 1961. Each of two patients received a gold wire inserted a short distance into the (deaf) cochlea. The patients could hear sounds in the environment via electrical stimulation of this single electrode (with reference to another electrode in the temporalis muscle), but could not understand speech.

Soon after these initial implants by Dr. House, Dr. F. Blair Simmons began his work at Stanford, which included animal studies and implantation in human subjects of electrodes within the auditory nerve trunk in the modiolus. Multiple other efforts then commenced worldwide in the late 1960s and the 1970s, as depicted in the figure. Each of these subsequent efforts involved electrical stimulation of the auditory system using an electrode or an array of electrodes inserted into the scala tympani (ST), one of three fluid-filled chambers along the length of the cochlea (see section 3.2 below). (Dr. House also used this approach.) Many of the efforts led to clinical applications, as indicated by the heavier lines in the figure. Additional details about the fascinating early history of CIs are presented in Finn et al. (1998), Niparko and Wilson (2000), and Eisen (2006 and 2008), and a more comprehensive discussion of Fig. 1 is presented in Niparko and Wilson (2000), from which the figure was taken. Personal accounts by Drs. House and Simmons can be found in Simmons (1966 and 1985), Bilger et al. (1977), and House and Berliner (1991).

As described in sections 4 and 6 below, contemporary CI systems support high levels of speech reception for their users. The progress since the late 1980s and early 1990s has been especially remarkable, and has quelled the naysayers mentioned at the beginning of this section on historical context.

Table 1 tracks changes in “expert” views about CIs from 1964 to the present. These views range from frank skepticism at the beginning to, at present, complaints about too many patients achieving 100% scores on standard tests of sentence understanding.

Table 1.

A line of progress

Merle Lawrence, 1964: “Direct stimulation of the auditory nerve fibers with resultant perception of speech is not feasible.”
Blair Simmons, 1966: Rated the chances that electrical stimulation of the auditory nerve might ever provide “uniquely useful communication” at about 5 percent.
Harold Schuknecht, 1974: “I have the utmost admiration for the courage of those surgeons who have implanted humans, and I will admit that we need a new operation in otology, but I am afraid this is not it.”
Bilger et al., 1977: “Although the subjects could not understand speech through their prostheses, they did score significantly higher on tests of lipreading and recognition of environmental sounds with their prostheses activated than without them.” (This was an NIH-funded study of all 13 implant patients in the United States at the time.)
First NIH Consensus Statement, 1988: Suggested that multichannel implants were more likely to be effective than single-channel implants, and indicated that about 1 in 20 patients could carry out a normal conversation without lipreading. (The world population of implant recipients was about 3,000 in 1988.)
Second NIH Consensus Statement, 1995: “A majority of those individuals with the latest speech processors for their implants will score above 80 percent correct on high-context sentences, even without visual cues.” (The number of implant recipients approximated 12,000 in 1995.)
Gifford et al., 2008: Reported that over a quarter of CI patients achieve 100% scores on standard sentence material and called for more difficult material to assess patient performance. (The cumulative number of implant recipients now exceeds 120,000.)

Watershed events and views in Table 1 include the “Bilger report” in 1977. By 1975, 13 patients in the United States had functioning, single-channel CIs. (Most of these patients had been implanted by Dr. House.) The United States National Institutes of Health (NIH) commissioned a study at that point, to evaluate the performance of those devices in tests with the 13 patients. The study was conducted by Dr. Robert C. Bilger and his colleagues, and most of the experiments were performed at the University of Pittsburgh in the USA. The findings were published in a monograph (Bilger et al., 1977), which has become known as the Bilger report. One of its key conclusions was that “although the subjects could not understand speech through their prostheses, they did score significantly higher on tests of lipreading and recognition of environmental sounds with their prostheses activated than without them.”

The Bilger report altered the perspective on CIs at the NIH. Up until that time, only a few, relatively small projects had been supported by the agency, and most of those did not involve human studies. Indeed, as late as 1978 the NIH rejected an application for funding of human research with CIs on “moral grounds” (Simmons, 1985). The Bilger report demonstrated benefits of CIs, however, and also indicated possibilities for improvements. This had a profound effect at the NIH, which increased funding for CI research substantially after 1978, including support for human studies. Much of the progress made in the 1980s and afterwards was the direct result of this decision. In particular, work supported through the Neural Prosthesis Program (NPP) at the NIH, first directed by Dr. F. Terry Hambrecht and later by Dr. William J. Heetderks, produced many important innovations in electrode and speech processor designs that remain in use to this day. Additional contributions outside of the NPP were quite important as well, including contributions from investigators in the United States and Australia supported by “regular” (e.g., R01 and Program Project) NIH grants and other sources, and from investigators in Europe who were not supported by the NIH. (Much of the work in Australia was supported both through the NPP and with regular NIH grants, in addition to principal support from the Australian government and private sources.)

In 1988, NIH convened the first of two consensus development conferences on CIs. Multichannel systems – with multiple channels of processing and with multiple sites of stimulation in the cochlea – had come into use at that time. The consensus statement from the 1988 conference (National Institutes of Health, 1988) suggested that multichannel implants were more likely to be effective than single-channel implants, and indicated that about one in twenty patients could carry out a normal conversation without lipreading. Approximately 3,000 patients had received CIs by 1988.

New and highly effective processing strategies for CIs were developed in the late 1980s and early 1990s, principally through the NPP. Among these were the continuous interleaved sampling (CIS) (Wilson et al., 1991), n-of-m (Wilson et al., 1988), and spectral peak (SPEAK) (Skinner et al., 1994) strategies. Large gains in speech reception performance were achieved with these strategies, two of which remain in widespread use today (CIS and n-of-m). A detailed review of processing strategies and their lines of development is presented in Wilson (2006).

A second NIH consensus development conference was held in 1995. By then, approximately 12,000 patients had received implants. A major conclusion from the 1995 conference (National Institutes of Health, 1995) was that “a majority of those individuals with the latest speech processors for their implants will score above 80 percent correct on high-context sentences even without visual cues.”

At the time of this writing (May 2008), the cumulative number of implants worldwide has exceeded 120,000. This number is orders of magnitude higher than the numbers for all other types of neural prostheses combined. In addition, the restoration of function provided by present-day CIs far surpasses that achieved to date with any other neural prosthesis. Indeed, the experience with CIs is being used today as a model for the development or further development of other prostheses, e.g., those for restoration of vision or balance (e.g., Wilson and Dorman, 2008a).

Restoration of function with CIs has advanced to the point that in early 2008 Gifford et al. (i) noted that it has become difficult to track changes in patient performance over time because many patients achieve 90 to 100% scores on standard tests of sentence intelligibility in quiet (28% achieve perfect scores) and (ii) called for new, more difficult tests of sentence intelligibility. The need for such tests speaks to the progress made in CI designs during the past two decades.

3. Present designs

3.1. Components of implant systems

The essential components in a cochlear prosthesis system are illustrated in Fig. 2 and include (1) a microphone for sensing sound in the environment; (2) a speech processor to transform the microphone output into a set of stimuli for an implanted array of electrodes; (3) a transcutaneous link for the transmission of power and stimulus information across the skin; (4) an implanted receiver/stimulator to (i) decode the information received from the radio-frequency signal produced by an external transmitting coil and (ii) generate stimuli using the instructions obtained from the decoded information; (5) a multi-wire cable to connect the outputs of the receiver/stimulator to the individual electrodes; and (6) the electrode array. These components must work together as a system, and a weakness in any one component can degrade performance significantly. For example, a limitation in the data bandwidth of the transcutaneous link can restrict the types and rates of stimuli that can be specified by the external speech processor, and this in turn can limit performance. A thorough discussion of considerations for the design of cochlear prostheses and their constituent parts is presented in Wilson (2004).

Fig. 2.

Components of a cochlear implant system. The TEMPO+ system is illustrated, but all present-day implant systems share the same basic components. (Diagram courtesy of MED-EL Medical Electronics GmbH, of Innsbruck, Austria.)

One “component” that is not illustrated in Fig. 2 is the biological component central to the auditory nerve, including the auditory pathways in the brainstem and the auditory cortices of the implant recipient. This biological component varies widely in its functional integrity and capabilities across patients (e.g., Lee et al., 2001; Shepherd and Hardie, 2001; Sharma et al., 2002; Kral et al., 2006; Shepherd et al., 2006; Fallon et al., 2008), and this variability may explain at least in part the remaining diversity in outcomes with present-day CIs. We will return to this point later in this paper.

3.2. Electrical stimulation of the auditory nerve

The principal cause of hearing loss is damage to or complete destruction of the sensory hair cells. Unfortunately, the hair cells are vulnerable structures and are subject to a wide variety of insults, including but not limited to genetic defects, infectious diseases (e.g., rubella and meningitis), overexposure to loud sounds, certain drugs (e.g., kanamycin, streptomycin, and cisplatin), and aging. In the deaf or deafened cochlea, the hair cells are largely or completely absent, severing the connections (both afferent and efferent) between the peripheral and central auditory systems. The function of a cochlear prosthesis is to bypass the missing or damaged hair cells by stimulating directly the surviving neurons in the auditory nerve, to reinstate afferent input to the central system.

In some cases, the auditory nerve may be grossly compromised, severed, or missing, e.g., in some types of congenital deafness, some types of basal skull fractures, and removals of tumors from the surface of or within the auditory nerve, which usually take the nerve with the resected tumor. In these (fortunately rare) cases, structures central to the auditory nerve must be stimulated to restore function. Sites that have been used include (1) the surface of the dorsal cochlear nucleus (DCN) (e.g., Otto et al., 2002); (2) the surface of the DCN combined with intra-nucleus stimulation using penetrating electrodes in conjunction with the surface electrodes (McCreery, this issue); and (3) the central nucleus of the inferior colliculus, using an array of electrodes on a penetrating shank or “carrier” (Lim et al., 2007 and this issue). The number of patients who have received implants at these locations in the central auditory system is slightly higher than 500, whereas the number of patients who have received CIs to date exceeds 120,000, as mentioned before. In the remainder of this paper, discussion is restricted to CIs. However, other papers in this special issue describe the experience thus far with stimulation of central auditory structures (McCreery and Lim et al.).

In the deaf cochlea, and without the normal stimulation provided by the hair cells, the peripheral parts of the neurons – between the cell bodies in the spiral ganglion and the terminals within the organ of Corti – undergo retrograde degeneration and cease to function (Hinojosa and Marion, 1983). Fortunately, the cell bodies are far more robust. At least some usually survive, even for prolonged deafness or for virulent etiologies such as meningitis (Hinojosa and Marion, 1983; Miura et al., 2002; Leake and Rebscher, 2004). These cells, or more specifically the nodes of Ranvier just distal or proximal to them, are the putative sites of excitation for CIs. In some cases, though, peripheral processes may survive, and excitation may occur more peripherally. (Survival of peripheral processes in the apical region of the cochlea is a certainty for patients with residual, low-frequency hearing in an implanted ear. Whether peripheral processes are – or can be – stimulated electrically with an implant remains to be demonstrated.)

For all contemporary CI systems, direct stimulation of the remaining elements in the auditory nerve is produced by currents delivered through electrodes placed in the ST. Different electrodes in the implanted array may stimulate different subpopulations of neurons. Implant systems attempt to mimic or reproduce a tonotopic pattern of stimulation by stimulating basally-situated electrodes to indicate the presence of high-frequency sounds, and by stimulating electrodes at more apical locations to indicate the presence of sounds with lower frequencies. The stimuli are presented to single electrodes in the array with reference to a remote electrode usually placed in the temporalis muscle or on the outside of the case for the receiver/stimulator, or the stimuli are presented between closely-spaced electrodes within the array. The former arrangement is called “monopolar stimulation” and the latter arrangement is called “bipolar stimulation.” All implant systems in current widespread use utilize monopolar stimulation, primarily because (1) it supports performance that is at least as good as that supported by bipolar stimulation (e.g., Zwolan et al., 1996); (2) it requires substantially less current and battery power for producing auditory percepts (e.g., Pfingst and Xu, 2004); and (3) differences in threshold or most comfortable loudness (MCL) for individual electrodes across the electrode array are substantially lower with monopolar than with bipolar stimulation (Pfingst and Xu, 2004), which can simplify the fitting of speech processors for implant patients.

The spatial specificity of stimulation with ST electrodes most likely depends on a variety of factors, including the geometric arrangement of the electrodes, the proximity of the electrodes to the target neural structures, and the condition of the implanted cochlea in terms of nerve survival, ossification, and fibrosis around the implant. An important goal in the design of CIs is to maximize the number of largely non-overlapping populations of neurons that can be addressed with the electrode array. Present evidence suggests, however, that no more than 4–8 independent sites are available in a speech processor context and using present electrode designs, even for arrays with as many as 22 electrodes (Lawson et al., 1996; Fishman et al., 1997; Wilson, 1997; Kiefer et al., 2000; Friesen et al., 2001; Garnham et al., 2002). Most likely, the number of independent sites is limited by substantial overlaps in the electric fields from adjacent (and more distant) electrodes (e.g., Fu and Nogaki, 2004; Dorman and Spahr, 2006). The overlaps are unavoidable for electrode placements in the ST, as the electrodes are “sitting” or “bathed” in the highly conductive fluid of the perilymph and moreover are relatively far, for most patients, from the target neural tissue in the spiral ganglion. A closer apposition of the electrodes next to the inner wall of the ST would move them a bit closer to the target, and such placements have been shown, in some cases, to produce an improvement in the spatial specificity of stimulation (Cohen et al., 2006). However, a large gain in the number of independent sites may well require a fundamentally new type of electrode, or a fundamentally different placement of electrodes, or a fundamentally different type or mode of stimulation. The many issues related to electrode design, along with prospects for the future, are discussed in Wilson (2004), Spelman (2006), Middlebrooks and Snyder (2007 and this issue), Anderson (this issue), and Wise et al. (this issue). Additionally, a new approach using optical rather than electrical stimulation of auditory neurons is described by Richter et al. in this issue.

3.3. Processing strategies for cochlear implants

One of the simplest and most effective approaches for representing speech and other sounds with present-day CIs is illustrated in Fig. 3. This is the CIS strategy mentioned in section 2, which is used as the default strategy or as a processing option in all implant systems now in widespread clinical use.

Fig. 3.

Block diagram of the continuous interleaved sampling (CIS) strategy. The input is indicated by the filled circle in the left-most part of the diagram. This input can be provided by a microphone or alternative sources. Following the input, a pre-emphasis filter (Pre-emp.) is used to attenuate strong components in speech below 1.2 kHz. This filter is followed by multiple channels of processing. Each channel includes stages of bandpass filtering (BPF), envelope detection, compression, and modulation. The envelope detectors generally use a full-wave or half-wave rectifier (Rect.) followed by a lowpass filter (LPF). A Hilbert Transform or a half-wave rectifier without the LPF also may be used. Carrier waveforms for two of the modulators are shown immediately below the two corresponding multiplier blocks (circles with an “x” within them). The outputs of the multipliers are directed to intracochlear electrodes (EL-1 to EL-n), via a transcutaneous link or a percutaneous connector. The inset shows an x-ray micrograph of the implanted cochlea, to which the outputs of the speech processor are directed. (Block diagram is adapted from Wilson et al., 1991, and is used here with the permission of the Nature Publishing Group. Inset is from Hüttenbrink et al., 2002, and is used here with the permission of Lippincott Williams & Wilkins.)

The CIS strategy filters speech or other input sounds into bands of frequencies with a bank of bandpass filters. Envelope variations in the different bands are represented at corresponding electrodes in the cochlea by modulating trains of biphasic electrical pulses. The envelope signals extracted from the bandpass filters are compressed with a nonlinear mapping function prior to the modulation, in order to map the wide dynamic range of sound in the environment (up to about 100 dB) into the narrow dynamic range of electrically evoked hearing (about 10 dB or somewhat higher). (The mapping also can be more restricted, e.g., from the approximately 30 dB range for speech sounds into the 10 dB range for electrically evoked hearing; for such a restricted mapping some sort of automatic gain or volume control following the microphone input is essential, to “shift” the range of ambient speech sounds into the dynamic range of processing for the filter bank and envelope detectors.) The output of each bandpass channel is directed to a single intracochlear electrode, with low-to-high channels assigned to apical-to-basal electrodes, to mimic at least the order, if not the precise locations, of frequency mapping in the normal cochlea. The pulse trains for the different channels and corresponding electrodes are interleaved in time, so that the pulses across channels and electrodes are nonsimultaneous. This eliminates a principal component of electrode interaction, which otherwise would be produced by direct vector summation of the electric fields from different (simultaneously stimulated) electrodes. (Other interaction components are not eliminated with the interleaving, but these other components are generally much lower in magnitude than the principal component due to the summation of the electric fields; see, e.g., Favre and Pelizzone, 1993.) The corner or “cutoff” frequency of the lowpass filter in each envelope detector typically is set at 200 Hz or higher, so that the fundamental frequencies (F0s) of speech sounds are represented in the modulation waveforms. Pulse rates in CIS processors typically approximate or exceed 1000 pulses/s/electrode, for an adequate “sampling” of the highest frequencies in the modulation waveforms (a “four times” oversampling rule is applied; see Busby et al., 1993; Wilson, 1997; Wilson et al., 1997). CIS gets its name from the continuous sampling of the (compressed) envelope signals by rapidly presented pulses that are interleaved across electrodes. Between 4 and 22 channels (and corresponding stimulus sites) have been used in CIS implementations to date. (CIS processors often are described as having a small number of channels and associated sites of stimulation, e.g., 6–8, but this is incorrect. The strategy itself does not place a limitation on the number of channels and sites; as just mentioned, CIS implementations to date have used as many as 22 channels and sites.)
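The signal path just described can be made concrete with a short sketch. The code below is not from the original paper; it implements simplified versions of the CIS stages in Fig. 3: bandpass filtering, envelope detection by rectification and lowpass filtering, logarithmic compression, and sampling of the compressed envelopes by interleaved pulse trains. The number of channels, band edges, filter orders, and compression constant are illustrative assumptions, not values from any commercial implementation.

```python
# Simplified CIS processing sketch (illustrative parameters; see text).
import numpy as np
from scipy.signal import butter, lfilter

FS = 16000           # input sampling rate in Hz (assumption)
N_CHANNELS = 8       # CIS implementations have used 4 to 22 channels
PULSE_RATE = 1000    # pulses/s per electrode; >= 4x the envelope cutoff
ENV_CUTOFF = 200.0   # envelope lowpass cutoff in Hz, per the text

def bandpass(x, lo, hi, fs=FS):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return lfilter(b, a, x)

def envelope(x, fs=FS):
    rect = np.abs(x)                        # full-wave rectification
    b, a = butter(2, ENV_CUTOFF / (fs / 2)) # envelope lowpass filter
    return np.maximum(lfilter(b, a, rect), 0.0)

def compress(env, c=500.0):
    # Logarithmic map of the wide acoustic dynamic range into the
    # narrow (~10 dB) electric range, normalized here to 0..1.
    env = env / (env.max() + 1e-12)
    return np.log(1.0 + c * env) / np.log(1.0 + c)

def cis_process(x):
    # Logarithmically spaced band edges across the speech range (assumption).
    edges = np.logspace(np.log10(300.0), np.log10(6000.0), N_CHANNELS + 1)
    period = int(FS / PULSE_RATE)           # samples between pulses on one electrode
    pulses = []
    for ch in range(N_CHANNELS):            # channel 0 maps to the most apical electrode
        env = compress(envelope(bandpass(x, edges[ch], edges[ch + 1])))
        offset = ch * period // N_CHANNELS  # stagger channels so pulses interleave
        pulses.append(env[offset::period])  # pulse-amplitude sequence for electrode ch
    return pulses
```

The per-channel offsets are what make the stimulation nonsimultaneous across electrodes, which, as noted above, eliminates the principal component of electrode interaction due to direct summation of electric fields.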

Other strategies also have produced outstanding results. Among these are the n-of-m and SPEAK strategies also mentioned before in section 2, and the advanced combination encoder (ACE) (Kiefer et al., 2001) and “HiResolution” (HiRes) strategies (Koch et al., 2004), which were developed more recently. The n-of-m, SPEAK, and ACE strategies each use a channel-selection scheme, in which the envelope signals for the different channels are “scanned” prior to each frame of stimulation across the intracochlear electrodes, to identify the signals with the n-highest amplitudes from among m processing channels (and associated electrodes). Stimulus pulses are delivered only to the electrodes that correspond to the channels with those highest amplitudes. The parameter n is fixed in the n-of-m and ACE strategies, and that parameter can vary from frame to frame in the SPEAK strategy, depending on the level and spectral composition of the signal from the microphone. Stimulus rates typically approximate or exceed 1000 pulses/s/selected electrode in the n-of-m and ACE strategies, and they approximate 250 pulses/s/selected electrode in the SPEAK strategy. The designs of the n-of-m and ACE strategies are essentially identical, and they are quite similar to CIS except for the channel-selection feature (Wilson, 2006). The SPEAK strategy uses much lower rates of stimulation and an adaptive n, as noted above.
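As a sketch of the channel-selection step common to the n-of-m and ACE strategies, the fragment below picks the n channels with the highest envelope amplitudes in a single stimulation frame; the frame values and the choice of n are made up for illustration.

```python
# n-of-m channel selection for one stimulation frame (illustrative values).
import numpy as np

def select_n_of_m(envelopes, n):
    """Return the indices of the n channels with the highest envelope
    amplitudes; only the corresponding electrodes receive pulses."""
    return np.argsort(envelopes)[-n:][::-1]   # highest amplitude first

frame = np.array([0.12, 0.80, 0.05, 0.43, 0.66, 0.09, 0.31, 0.55])  # m = 8
print(select_n_of_m(frame, n=4))   # -> [1 4 7 3]
```

In SPEAK, by contrast, n would be allowed to vary from frame to frame with the level and spectral composition of the input, as described above.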

The channel selection or “spectral peak picking” scheme used in the n-of-m, ACE, and SPEAK strategies is designed in part to reduce the density of stimulation while still representing the most important aspects of the acoustic environment. The deletion of low-amplitude channels (and associated stimuli) for each frame of stimulation may reduce the overall level of masking or interference across electrode and excitation regions in the cochlea. To the extent that the omitted channels do not contain significant information, such “unmasking” may improve the perception of the input signal by the patient. In addition, for positive speech-to-noise ratios (S/Ns), selection of the channels with the greatest amplitudes in each frame may emphasize the primary speech signal with respect to the noise.

The HiRes strategy is a close variation of CIS that uses relatively high rates of stimulation, relatively high cutoff frequencies for the envelope detectors, and up to 16 processing channels and associated stimulus sites. The terms HiRes and CIS are sometimes used interchangeably. Detailed descriptions of all of the processing strategies mentioned above (and many of their predecessors) are presented in Wilson (2006).

During the past several years, increasing attention has been devoted to representing “fine structure” or “fine frequency” information with CIs (e.g., Smith et al., 2002; Nie et al., 2005; Wilson et al., 2005; Zeng et al., 2005; Hochmair et al., 2006; Arnoldner et al., 2007; Berenstein et al., 2008; Brendel et al., 2008; Buechner et al., 2008; Litvak et al., 2008; Bonham and Litvak, this issue). These recent efforts are reviewed and discussed in Wilson and Dorman (2008b, c). The extent to which the “envelope based” strategies reviewed above may represent the information is discussed in section 5.6 of the present paper.

Contemporary applications of processing strategies according to manufacturer are shown in Table 2. The manufacturers include MED-EL GmbH of Innsbruck, Austria; Cochlear Ltd. of Lane Cove, Australia; and Advanced Bionics Corp. of Valencia, CA, USA. Each of the manufacturers offers multiple processing strategies, as shown. Among these choices, CIS is the default strategy for the MED-EL device, ACE is the default strategy for the Cochlear device, and HiRes is the default choice for the Advanced Bionics device. An alternative strategy may be selected by the audiologist at the time of a first or subsequent fitting for a particular device and patient. However, this is rarely done, and the default choices are generally the ones used in standard clinical practice, at least as of this writing.

Table 2.

Processing strategies in current widespread use*

Manufacturer              CIS    n-of-m    ACE    SPEAK    HiRes
MED-EL GmbH                ✓       ✓
Cochlear Ltd.              ✓                 ✓      ✓
Advanced Bionics Corp.     ✓                                 ✓
* Manufacturers are shown in the left column, and the processing strategies used in their implant systems are indicated by check marks in the remaining columns. The full names and detailed descriptions of the strategies are presented in the text. These three manufacturers presently account for more than 99% of the worldwide implant market.

Table 3 shows strategies that are currently being evaluated in company-sponsored studies. These strategies include “Fine Hearing” (Hochmair et al., 2006; Arnoldner et al., 2007), HiRes with the “Fidelity™ 120 option” (or “HiRes 120” for short) (Trautwein, 2006; Buechner et al., 2008; Litvak et al., 2008), and MP3000 (Nogueira et al., 2005; Büchner et al., 2008). The Fine Hearing strategy also is called the fine structure processing (FSP) strategy, and the MP3000 strategy also is called the psychoacoustic advanced combination encoder (PACE) strategy. The Fine Hearing strategy is designed to represent fine structure information (see section 5.6), in part through initiation of short groups of pulses at the positive zero crossings in the bandpass outputs for selected channels. The HiRes 120 strategy is designed to increase the spatial resolution of stimulation and perception with CIs using a current steering technique, and, through that, also to increase the transmission of fine-structure information. The MP3000 strategy uses a model of auditory masking to select sites of stimulation in the implant that would correspond to perceptually salient components of the sound input for listeners with normal hearing. In this selection, the components that would be masked for such listeners (and therefore not perceptually salient) are omitted from the representation. These various strategies are described in detail in Wilson and Dorman (2008c). Additionally, approaches using current steering or focusing are described by Bonham and Litvak in this special issue.

Table 3.

Processing strategies presently in company-sponsored clinical trials

Manufacturer              Fine Hearing    HiRes 120    MP3000
MED-EL GmbH                    ✓
Cochlear Ltd.                                             ✓
Advanced Bionics Corp.                        ✓

Among the strategies listed in Table 3, the Fine Hearing and HiRes 120 strategies have become available for clinical use outside of the study protocols. For example, the HiRes 120 strategy is now approved for use in adults in the United States. The approved uses for these various strategies may well broaden in the near future to all classes of prospective patients and to all or most countries in which implant operations are performed.
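The masking-based channel selection described above for MP3000 can be illustrated with a deliberately simplified model. The sketch below is not the actual MP3000/PACE algorithm: it greedily keeps the strongest channels and drops any channel whose level falls under a linear spread-of-masking threshold set by an already-selected channel. The spread slope and the frame amplitudes are assumptions for illustration only.

```python
# Illustrative masking-based channel selection (not the MP3000 algorithm).
import numpy as np

SPREAD_DB_PER_CHANNEL = 12.0   # assumed masking-spread slope across channels

def masking_based_select(envelopes_db, n):
    """Greedily select up to n channels, skipping channels predicted to be
    masked by (and so perceptually redundant with) a stronger selection."""
    order = np.argsort(envelopes_db)[::-1]   # loudest channels first
    selected = []
    for ch in order:
        masked = any(
            envelopes_db[ch]
            < envelopes_db[s] - SPREAD_DB_PER_CHANNEL * abs(ch - s)
            for s in selected
        )
        if not masked:
            selected.append(ch)
        if len(selected) == n:
            break
    return selected

frame_db = np.array([35, 62, 58, 30, 48, 55, 20, 40])   # one frame, m = 8
print(masking_based_select(frame_db, n=4))
```

The contrast with plain n-of-m selection is that a channel can be passed over even when its amplitude ranks in the top n, if a nearby, stronger channel is predicted to mask it for a normal-hearing listener.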

Strategies still in the initial stages of development include those designed to provide a closer mimicking of the (intricate and interactive) processing that occurs in the normal auditory periphery (e.g., Wilson et al., 2005, 2006, and 2008), and additional approaches aimed at representing fine-structure information (e.g., Nie et al., 2005; Wilson et al., 2005; Zeng et al., 2005). Some of these newer strategies also are described in Wilson and Dorman (2008c), along with progenitors of the Fine Hearing and HiRes 120 strategies.

4. Performance with present-day unilateral implants

Each of the major implant systems and the default processing strategies used with them supports recognition of monosyllabic words on the order of 50–60% correct (using hearing alone), across populations of tested subjects (see, e.g., Table 2.4 in Wilson, 2006). Variability in outcomes is high, however, with some subjects achieving scores at or near 100% correct and with other subjects scoring close to zero on this most difficult of standard audiological measures. Standard deviations of the scores range from about 10% to about 30% for the various studies conducted to date.

The ranges of scores and other representative findings for contemporary CIs are illustrated in Fig. 4, which shows scores for 55 users of the MED-EL COMBI 40 implant system and the CIS processing strategy. Scores for the Hochmair-Schultz-Moser sentences are presented in the top panel, and scores for recognition of the Freiburger monosyllabic words are presented in the bottom panel. Results for five measurement intervals are shown, ranging from one month to two years following the initial fitting of the speech processor. The solid line in each panel shows the mean of the individual scores. The data are a superset of those reported in Helms et al. (1997), and include scores for additional subjects at various test intervals, as reported in Wilson (2006). Most of the subjects used an 8-channel processor with a pulse rate of about 1500/s/electrode. Some of the subjects used fewer channels and a proportionately higher rate.

Fig. 4.

Percent correct scores for 55 users of the COMBI 40 implant and the CIS processing strategy. Scores for recognition of the Hochmair-Schultz-Moser sentences are presented in the top panel, and scores for recognition of the Freiburger monosyllabic words are presented in the bottom panel. Results for each of five test intervals following the initial fitting of the speech processor for each subject are shown. The horizontal line in each panel indicates the mean of the scores for that interval and test. (The great majority of the data are from Helms et al., 1997, with an update reported by Wilson, 2006. Figure is adapted from Dorman and Spahr, 2006, and is used here with the permission of Thieme Medical Publishers.)

As is evident from the figure, scores are broadly distributed at each test interval and for both tests. Ceiling effects are encountered for the sentence test for many of the subjects, especially at the later test intervals. At 24 months, 46 of the 55 subjects score above 80% correct, consistent with the 1995 NIH Consensus Statement. Scores for the recognition of monosyllabic words are much more broadly distributed. For example, at the 24-month interval only 9 of the 55 subjects have scores above 80% correct, and the distribution of scores from about 10% correct to nearly 100% correct is almost perfectly uniform. Nonetheless, the scores for the top performers are especially remarkable given that a maximum of only eight broadly overlapping sectors of the auditory nerve are stimulated with this device and the implementation of CIS used with it. This number is quite small in comparison to the normal complement of approximately 30,000 neurons in the human auditory nerve.

The results in Fig. 4 also show a learning or accommodation effect, with continuous improvements in scores over the first twelve months of use. This suggests the importance of cortical plasticity (reorganization) in a patient’s progressively better utilization of the sparse and otherwise abnormal inputs provided by a CI. The asymptotic means at and beyond 12 months are about 90% correct for sentences and 55% correct for monosyllabic words. Such results typify performance with the best of the modern CI systems and processing strategies, for electrical stimulation on one side with a unilateral implant. (Results for other conditions, e.g., electrical stimulation on both sides with bilateral implants, are presented in section 6.)

There is a broad equivalence among contemporary signal processing strategies, e.g., CIS and ACE, in terms of the levels of performance achieved by patients on standard tests of speech understanding. This equivalence indicates, among other things, that there is no obvious advantage in either selecting or not selecting subsets of channels for each frame of stimulation. What seems to be important instead is the many features shared by the strategies, e.g., nonsimultaneous stimulation across electrodes; rates of stimulation near or above 1000 pulses/s/stimulated electrode; the same or highly-similar processing in each bandpass channel for extracting and mapping envelope signals; a low-pass cutoff in the envelope detector (or its equivalent) of at least 200 Hz; and at least 6–8 active (selected or always utilized) channels and corresponding sites of stimulation, to match or slightly exceed the number of effective channels that can be supported with present-day ST implants and the current processing strategies.

When performance across a wide range of tests is evaluated, differences in performance as a function of strategy implementation can be found. Spahr, Dorman, and Loiselle (2007) reported differences in performance across implementations when signals were presented at low levels and when signals were presented in noise. The differences in performance were not due to differences in the processing strategy per se, or the number of channels of stimulation, or the stimulation rate. Rather, the differences in performance were linked to the size of the input dynamic range (larger is better) and the method by which compression is implemented.

The performance of contemporary patients on tests of (i) consonant identification; (ii) vowel identification; (iii) sentence identification in quiet and noise; (iv) between- and within-gender discrimination of male and female voices; and (v) identification of simple melodies without rhythm cues is shown in Table 4. Patients were divided into two groups: one (n = 20) that achieved average scores (mean = 58% correct) on a test of monosyllabic word recognition and a second (n = 21) that achieved above-average scores (mean = 80% correct) (Dorman and Spahr, 2006). In the average group, 12 patients used a CI with a 16-channel, high-rate CIS processor, and eight used a CI with a lower-rate, 12-channel CIS processor. In the above-average group, 12 patients used a 16-channel CIS processor and nine used a 12-channel CIS processor.

For the “average” patients, scores on the clearly-articulated City University of New York (CUNY) sentences (Boothroyd et al., 1985) were at the ceiling (97% correct) of performance. Thus, the average implant patient, when listening to predictable conversations in quiet, should be able to communicate with relative ease. However, scores for sentences that were spoken by multiple talkers in a more normal, casual speaking style [the AzBio sentences (Spahr et al., 2007)] averaged only 70% correct. This outcome suggests a greater difficulty in understanding speech in quiet than is suggested by experiments using the CUNY or Hearing in Noise Test (HINT) sentences. At +10 dB S/N, performance on the AzBio sentences fell to 42% correct. At +5 dB S/N, a typical level in many work, classroom, and social environments, performance on the AzBio sentences was only 27% correct. The difference in performance for casually spoken sentences in quiet and in a commonly-encountered noise environment highlights the difficulty faced by the average implant patient when attempting to understand speech presented in competition with noise.

Discrimination between male and female voices was accomplished easily. However, discrimination of two male voices or two female voices was only slightly better than chance (50% correct).

To assess the recognition of melodies, each subject selected five familiar melodies from a list of 33 melodies. Each melody consisted of 16 equal-duration notes and was synthesized with MIDI software that used samples of a grand piano. The frequencies ranged from 277 to 622 Hz. The melodies were created without distinctive rhythmic information. Performance on this very simple task was very poor (33% correct; chance performance for the task was 20% correct).

The results from the voice discrimination task and the melody recognition task combine to demonstrate that most patients do not extract low-frequency information from the time waveform with a high degree of fidelity. This is the case even when the pulse rate is sufficiently high to provide good resolution of the time waveform.

Three aspects of the performance of the above-average patients are noteworthy. First, performance on sentences produced in a casual speaking style in quiet was high (90% correct), demonstrating that the better patients can function at a high level in a quiet environment even when multiple speakers are not trying to speak clearly and predictably. Second, at an S/N commonly encountered in many everyday environments (+5 dB), performance fell to 52% correct. This suggests that even the better patients will have difficulty understanding speech in a typically noisy environment. Third, performance on the tests of within-gender speaker recognition and of melody recognition, while superior to that of the patients in the average group, still was poor.

The relatively poor resolution of pitch, as noted above, should have a significant impact on CI patients who speak a tone language, i.e., a language in which pitch is used to signal a difference in meaning among words. Mandarin, for example, has four tone patterns; Cantonese has six. Scores on tests of recognition of the tone patterns for Mandarin average between 50 and 70% correct (e.g., Wei et al., 2004). More than a quarter of the world’s population uses tone languages (e.g., Zeng, 1995), so relatively low scores like those just cited are a potentially important problem. (The full magnitude of the problem is not known at present, as other cues also can signal some of the differences in word meanings.) We discuss possible mechanisms underlying the relatively poor representation of F0s and the pitch of complex sounds later in this paper, in sections 5.6 and 5.7.

5. Strengths and limitations of present systems

5.1. Efficacy of sparse representations

Present-day CIs support a high level of function for the great majority of patients, as indicated in part by sentence scores of 80% correct or higher for most patients and the ability of most patients to use the telephone. In addition, some patients achieve spectacularly high scores with present-day CIs. Indeed, their scores are in the normal ranges even for the most difficult of standard audiological tests (e.g., the top few patients in Fig. 4 and the patient described in Wilson and Dorman, 2007). Such results are both encouraging and surprising in that the implants provide only a very crude mimicking of only some aspects of the normal physiology. The scores for the best-performing patients provide an existence proof of what is possible with electrical stimulation of a totally-deaf cochlea, and show that the information presented and received is adequate for restoration of clinically-normal function, at least for those patients and at least as measured by standard tests of speech reception. This is remarkable.

5.2. Variability in outcomes

One of the major remaining problems with CIs is the broad distribution of outcomes, especially for difficult tests and as exemplified in the bottom panel of Fig. 4. That is, patients using exactly the same implant system – with the same speech processor, transcutaneous link, implanted receiver/stimulator, and implanted electrode array – can have scores ranging from the floor to the ceiling for such tests. Indeed, only a small fraction of patients achieve the spectacularly high scores mentioned above, although this proportion is growing with the use of bilateral CIs and of combined electric and acoustic stimulation (EAS) of the auditory system, as described in section 6 below. (The overall variability in outcomes also is reduced but far from eliminated with these relatively new approaches.)

5.3. Likely limitations imposed by impairments in auditory pathway or cortical function

Accumulating and compelling evidence is pointing to differences among patients in cortical or auditory pathway function as a likely contributor to the variability in outcomes with CIs (Lee et al., 2001; Ponton and Eggermont, 2001; Sharma et al., 2002; Eggermont and Ponton, 2003; Tobey et al., 2004; McKay, 2005; Kral et al., 2006; Kral and Eggermont, 2007; Fallon et al., 2008). On average, patients with short durations of deafness prior to their implants fare better than patients with long durations of deafness (e.g., Gantz et al., 1993; Summerfield and Marshall, 1995; Blamey et al., 1996). This may be the result of sensory deprivation for long periods, which adversely affects connections between and among neurons in the central auditory system (Shepherd and Hardie, 2001) and may allow encroachment by other sensory inputs of cortical areas normally devoted to auditory processing (i.e., cross-modal plasticity; see Lee et al., 2001; Bavelier and Neville, 2002). Although one might think that differences in nerve survival at the periphery could explain the variability, either a negative correlation or no relationship has been found between the number of surviving ganglion cells and prior word recognition scores, for deceased implant patients who in life had agreed to donate their temporal bones for post-mortem histological studies (Blamey, 1997; Nadol et al., 2001; Khan et al., 2005; Fayad and Linthicum, 2006). In some cases, survival of the ganglion cells was far shy of the normal complement, and yet these same patients achieved high scores in speech reception tests. Conversely, in some other cases, survival of the ganglion cells was excellent, and yet these patients did not achieve high scores on the tests. Although some number of ganglion cells must be required for the function of a CI, this number appears to be small, at least for the prior generations of implant systems and processing strategies used by these patients in life. Above that putative threshold, the brains of the better-performing patients apparently can utilize a sparse input from even a small number of surviving cells for high levels of speech reception. (Current and future implant systems and processing strategies may require a higher number of surviving cells in order to perform optimally; for example, one might think that both excellent and uniform or nearly-uniform survival would be needed for good performance with the HiRes 120 strategy, which addresses many single-electrode and virtual sites of stimulation along the length of the cochlea, as described in Wilson and Dorman, 2008c, and as described by Bonham and Litvak in this special issue. However, such a dependence on processing strategy or type of implant system remains to be demonstrated.)

Similarly, it seems likely that the representation of speech sounds with a CI needs to be above some threshold in order for the brain to utilize the input for good speech reception. Single-channel implant systems did not rise above this second putative threshold for all but a few exceptional patients; nor did prior processing strategies for multichannel implants. The combination of multiple sites of stimulation in the cochlea (at least 6–8), relatively new processing strategies such as the CIS, HiRes, n-of-m, and ACE strategies, and some minimum survival of ganglion cells is sufficient for a high restoration of function in some patients. Those patients are likely to have intact “auditory brains” that can utilize these still sparse and distorted inputs, compared to the inputs received by the brain from the normal cochlea.

Other patients may not have the benefit of normal or nearly normal processing central to the auditory nerve. The effects of auditory deprivation for long periods have been mentioned. In addition, the brains of children become less “plastic” or adaptable to new inputs beyond their third or fourth birthdays. This may explain why deaf children implanted before then generally have much better outcomes than deaf children implanted at age five and older (e.g., Lee et al., 2001; Sharma et al., 2002; Dorman and Wilson, 2004).

The brain may well be the “tail that wags the dog” in determining outcomes with present-day CIs. The brain “saves us” in achieving high scores with implants, by somehow utilizing a crude and sparse and distorted representation at the periphery. In addition, strong learning or accommodation effects – over long periods ranging from about three months to a year or more – indicate a principal role of the brain in reaching asymptotic performance with implants (see Fig. 4). Multiple lines of evidence, such as those cited at the beginning of this section, further suggest that impairments or changes in brain function – including damage to the auditory pathways in the brainstem, or compromised function in the areas of cortex normally devoted to auditory processing, or reduced cortical plasticity, or cross-modal plasticity – can produce highly deleterious effects on results obtained with CIs.

Although the condition of the brain is likely to affect outcomes with CIs, other factors affect outcomes as well. CI systems and the utilized processing strategies must provide enough information for the intact brain to use, as noted above. Additionally, manipulations at the periphery obviously influence outcomes, as observed for example in the substantial gains in speech reception produced recently with bilateral CIs and with combined EAS (see section 6 below), and as discussed by Pfingst et al. in this special issue. The point here is that the brain also is an important contributor, and that impairments in brain function may limit what can be achieved with any method of peripheral stimulation developed to date.

5.4. Likely limitations imposed by present electrode designs and placements

Present designs and placements of electrodes for CIs do not support more than 4–8 effective sites of stimulation, or effective or functional channels, as mentioned before. Contemporary CIs use between 12 and 22 intracochlear electrodes, so the number of electrodes exceeds the number of effective channels (or sites of stimulation) for practically all patients and for all current devices. The number of effective channels depends on the patient and the speech reception measure used to evaluate performance. For example, increases in scores with increases in the number of active electrodes generally plateau at a lower number for consonant identification than for vowel identification. (This makes sense from the perspective that consonants may be identified with combinations of temporal and spectral cues, whereas vowels are identified primarily or exclusively with spectral cues that are conveyed through independent sites of stimulation.) Patients with low speech reception scores generally do not have more than four effective channels for any test, whereas patients with high scores may have as many as eight or slightly more channels depending on the test (e.g., Friesen et al., 2001; Dorman and Spahr, 2006).

Results from studies using acoustic simulations of implant processors and subjects with normal hearing indicate that a higher number of effective channels or sites of stimulation for implants could be beneficial. With simulations and normal-hearing subjects, as many as ten channels are needed to reach asymptotic performance (for difficult tests) using a CIS-like processor (Dorman et al., 2002). Other investigators have found that even more channels are needed for asymptotic performance, especially for difficult tests such as identification of vowels or recognition of speech presented in competition with noise or a multi-talker babble (Friesen et al., 2001; Shannon et al., 2004). For example, Friesen et al. (2001) found that identification of vowels for listeners with normal hearing continued to improve with the addition of channels in the acoustic simulations up to the tested limit of 20 channels, for vowels presented in quiet and at progressively worse S/Ns out to and including +5 dB.
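For readers who wish to experiment with such simulations, the sketch below implements a minimal noise-band vocoder of the general kind used in these studies. It is illustrative only: the band count, the ERB-spaced band edges, the filter orders, and the 300-Hz envelope cutoff are generic choices of ours, not the exact parameters of the processors used by Dorman et al. (2002) or Friesen et al. (2001). Processing a speech recording through this function and listening to the output gives a rough impression of what an n_bands-channel processor conveys.

```python
# Minimal noise-band vocoder sketch: bandpass analysis, envelope extraction,
# and envelope-modulated noise carriers. Band count, ERB-spaced edges, filter
# orders, and the 300-Hz envelope cutoff are generic illustrative choices,
# not the exact settings of the cited studies.
import numpy as np
from scipy.signal import butter, sosfilt, sosfiltfilt

def erb_number(f_hz):
    # ERB-number scale of Glasberg and Moore (1990)
    return 21.4 * np.log10(4.37 * f_hz / 1000.0 + 1.0)

def erb_inverse(e):
    return (10.0 ** (e / 21.4) - 1.0) * 1000.0 / 4.37

def noise_vocoder(x, fs, n_bands=8, f_lo=100.0, f_hi=7000.0, env_cut=300.0):
    edges = erb_inverse(np.linspace(erb_number(f_lo), erb_number(f_hi), n_bands + 1))
    lp = butter(2, env_cut / (fs / 2), output="sos")        # envelope smoothing filter
    y = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        bp = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band", output="sos")
        env = np.maximum(sosfiltfilt(lp, np.abs(sosfilt(bp, x))), 0.0)  # rectify + smooth
        carrier = sosfilt(bp, np.random.randn(len(x)))      # band-limited noise carrier
        y += env * carrier                                  # replace fine structure with noise
    return y / (np.max(np.abs(y)) + 1e-12)                  # normalize to avoid clipping
```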

From another perspective, the number of largely independent filters in normal hearing is about 39 for the full range of frequencies from 50 Hz to 15 kHz, and is about 28 for the range of frequencies covered by speech sounds (Glasberg and Moore, 1990; Moore, 2003). These numbers are much higher than the number of effective channels with present-day implants.
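These filter counts follow directly from the ERB-number scale of Glasberg and Moore (1990), as the short calculation below shows. The speech-range edges used here are our own assumption, and the totals shift by one or two depending on the endpoints and counting convention chosen, which is why the calculation lands near, rather than exactly on, the figures quoted above.

```python
# ERB-number scale of Glasberg and Moore (1990):
#   ERB-number(f) = 21.4 * log10(4.37 * f / 1000 + 1)
# The difference in ERB-number between two frequencies estimates how many
# largely independent auditory filters fit between them.
import math

def erb_number(f_hz):
    return 21.4 * math.log10(4.37 * f_hz / 1000.0 + 1.0)

print(round(erb_number(15000) - erb_number(50)))   # ~37 filters, 50 Hz to 15 kHz
print(round(erb_number(8000) - erb_number(100)))   # ~30 filters, assumed speech range
```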

This apparent limitation with present-day CIs is illustrated in Fig. 5, which shows speech reception scores as a function of the number of stimulated electrodes (and associated channels) for CIS processors. The top panel shows results from the first author’s laboratory, and the bottom panel shows results from studies conducted by Garnham et al. (2002). These results typify results from other studies.

Fig. 5.

Speech reception scores as a function of the number of stimulated electrodes (and associated channels) using the CIS processing strategy. Means and standard errors of the mean are shown. Results from studies conducted in the first author’s laboratory are presented in the top panel, and results from Garnham et al. (2002) are presented in the bottom panel. The top panel shows scores for identification of 24 consonants in an /a/-consonant-/a/ context, by one subject using a Nucleus cochlear implant system with its 22 intracochlear electrodes. The bottom panel shows scores for recognition of the Bench, Kowal, and Bamford (BKB) sentences, identification of 16 consonants also in /a/-consonant-/a/ context, identification of 8 vowels in a /b/-vowel-/d/ context, and recognition of the Arthur Boothroyd (AB) monosyllabic words, by a maximum of 11 subjects (Ss) using the COMBI 40+ cochlear implant system with its 12 electrode sites. The test items were presented either in quiet or in competition with noise, as indicated in the legends for the two panels. For the presentations in competition with noise, the signal-to-noise ratios (S/Ns) are indicated. The experimental conditions used for the study depicted in the top panel are the same as those described in Wilson (1997).

Both panels show improvements in speech reception scores – for a variety of tests – with increases in electrode number up to a relatively low value, depending on the test. Scores for tests of consonant identification in a quiet condition “saturate” or plateau at 3 electrodes (top panel), and scores for identification of consonants presented in competition with noise at the S/N of +5 dB saturate at 4 (bottom panel) or 5 (top panel) electrodes. Scores for recognition of sentences or vowels, also presented in competition with noise, at the S/Ns of +10 and -10 dB respectively, saturate at 6 electrodes (bottom panel). Scores for the remaining two tests shown in the bottom panel do not increase significantly with increases in electrode number beyond 6. These saturation points are well below the maximum number of electrodes for each of the studies, 22 for the top panel and 10 or 11 (among the available 12 in the implant device used) for the bottom panel.

Large improvements in the performance of CIs might well be obtained with an increase in the number of effective sites of stimulation, which would help narrow the gap between implant patients and subjects with normal hearing. This gap is especially wide for the many patients who do not have more than four effective sites across wide ranges of speech reception measures. Just a few more channels for the top performers with CIs would almost certainly help them in listening to speech in demanding situations, such as speech presented in competition with noise or other talkers. An increase in the number of functional channels for patients presently at the low end of the performance spectrum could improve their outcomes considerably.

A highly plausible explanation for the limitation in effective channels with implants is that the electric fields from different intracochlear electrodes strongly overlap at the sites of neural excitation (e.g., Fu and Nogaki, 2004; Dorman and Spahr, 2006). Such overlaps (or “electrode interactions”) may impose an upper bound on the number of electrodes that are sufficiently independent to convey perceptually separate channels of information. In addition, a central processing deficit may contribute to the limitation, perhaps especially for patients with low speech reception scores and (usually) a relatively low number of effective channels.
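A toy calculation illustrates why strongly overlapping electric fields limit channel independence. In the sketch below, each electrode’s field is assumed to decay exponentially with distance along the cochlea; the space constant and electrode pitch are invented but plausible values, not measurements from any device.

```python
# Toy model of electrode interaction: each electrode's electric field is
# assumed to decay exponentially with distance along the cochlea. The space
# constant and electrode pitch are invented but plausible values.
import numpy as np

lambda_mm = 3.0                        # assumed space constant of current spread
positions = np.arange(22) * 0.75       # 22 electrodes at a 0.75-mm pitch
x = np.linspace(-2.0, 18.0, 2000)      # positions along the cochlea, mm

# field of every electrode at every cochlear position
fields = np.exp(-np.abs(x[:, None] - positions[None, :]) / lambda_mm)

# overlap between neighbors: normalized inner product of adjacent field profiles
f1, f2 = fields[:, 10], fields[:, 11]
overlap = np.dot(f1, f2) / (np.linalg.norm(f1) * np.linalg.norm(f2))
print(f"overlap between adjacent electrode fields: {overlap:.2f}")  # close to 1
```

With these assumed values the overlap comes out near unity, consistent with the idea that adjacent electrodes excite largely the same neural populations and so cannot convey fully independent channels.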

A problem with ST implants is that the electrodes are relatively far from the target tissue (most often the spiral ganglion), even for placements of electrodes next to the inner wall of the ST. Close apposition of the target and the electrode is necessary for a high spatial specificity of stimulation (Ranck, 1975). One possibility for providing a close apposition is to promote the growth of neurites from the ganglion cells toward the electrodes in the ST with controlled delivery of neurotrophic drugs into the perilymph (Roehm and Hansen, 2005; Pettingill et al., 2007; Rejali et al., 2007; Vieira et al., 2007; Hendricks et al., this issue). Such growth would bring the target to the electrodes. Another possibility is to implant an array of electrodes directly within the auditory nerve (an intramodiolar implant), through an opening made in the basal part of the cochlea (Arts et al., 2003; Badi et al., 2003, 2007; Hillman et al., 2003; Spelman, 2006; Middlebrooks and Snyder, 2007 and this issue; Anderson, this issue). In this case, the electrodes would be placed immediately adjacent to axons of the auditory nerve. Studies are underway to evaluate each of these possibilities, including safety and efficacy studies. Results from studies to evaluate the intramodiolar implant have demonstrated that it is feasible from fabrication and surgical perspectives, and that the number of independent sites of stimulation with that implant may be substantially higher than the number for ST implants (Badi et al., 2007; Middlebrooks and Snyder, 2007 and this issue). However, these are preliminary findings and a full course of safety studies needs to be completed before intramodiolar implants might be approved by the United States Food and Drug Administration or other regulatory agencies for applications in humans. The same is true for the use of neurotrophic drugs to promote the growth of neurites toward ST electrodes. Each of these possibilities is promising, but each needs further study and validation.

5.5. Apparent disconnect between the number of discriminable sites and the number of effective channels

In general, a high number of sites may be perceived by implant patients. For example, a subpopulation of patients can rank the 22 electrodes of the Cochlear Ltd. electrode array on the basis of discriminable pitches (e.g., Zwolan et al., 1997), and some patients can rank many more sites when “virtual sites” of stimulation between simultaneously-stimulated electrodes are used along with the available single-electrode sites (e.g., Donaldson et al., 2005; also see the discussion in Bonham and Litvak, this issue). However, no patient tested to date has more than about eight effective channels when stimuli are rapidly sequenced across electrodes in a real-time, speech-processor context. The mechanism(s) underlying this apparent disconnect – between the number of discriminable sites and the number of effective channels – remain(s) to be identified. Possibly, the mechanism(s) may relate to masking, temporal integration, or refractory effects that are produced both peripherally and centrally when stimuli are presented in rapid sequences among electrodes but not when stimuli are presented in isolation, as in the psychophysical ranking studies mentioned above. Identification of the mechanism(s) could be a great help, in that the knowledge might provide a prescription for patterning stimuli in a way that would bring the number of effective channels closer to the number of discriminable sites. Indeed, closing this gap may be more important than simply increasing the number of discriminable sites, which certainly would not guarantee an increase in the number of effective channels.

5.6. Possible deficit in the representation of fine structure information

Fine structure information relates to frequency variations within bandpass channels, information that may not be represented, or may be represented only poorly, by the CIS and other “envelope based” strategies. In particular, the envelope detector for each channel in such strategies senses energy only, for all frequencies within the band for the channel. Thus, a signal at one frequency within the band will produce an output at the envelope detector that is no different from the output produced by another frequency within the band, so long as the amplitudes for the signals are the same. Information about frequency variations of single components in the band, or about the frequencies of multiple components in the band, is lost or “discarded” at the envelope detector. Such a loss could degrade the representation of speech sounds (Smith et al., 2002), degrade the representation of tone patterns in tonal languages (Xu and Pfingst, 2003), and greatly diminish the representation of musical sounds (Smith et al., 2002).
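This loss of within-band frequency information can be demonstrated in a few lines of code. In the sketch below (with illustrative filter parameters of our own choosing, not those of any commercial processor), two tones at different frequencies inside the same analysis band produce essentially identical envelope-detector outputs.

```python
# Two equal-amplitude tones at different frequencies within the same analysis
# band yield essentially the same envelope-detector output, illustrating the
# loss of within-band frequency information at the envelope detector.
import numpy as np
from scipy.signal import butter, sosfilt, sosfiltfilt

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
band = butter(4, [1000 / (fs / 2), 2000 / (fs / 2)], btype="band", output="sos")
lp = butter(2, 300 / (fs / 2), output="sos")   # ~300-Hz envelope cutoff, as in the text

def envelope(x):
    # bandpass, rectify, then low-pass: the classic envelope detector
    return sosfiltfilt(lp, np.abs(sosfilt(band, x)))

e1 = envelope(np.sin(2 * np.pi * 1200 * t))    # 1.2-kHz tone
e2 = envelope(np.sin(2 * np.pi * 1700 * t))    # 1.7-kHz tone, same band
print(abs(e1.mean() - e2.mean()) < 0.05 * e1.mean())  # True: near-identical envelopes
```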

Possibly, however, fine structure information is in fact represented at least to some extent by the envelope-based strategies. Variations at low frequencies – in the F0 range – are represented in the modulation waveforms, as described in section 3.3. These variations may be perceived as different pitches so long as the modulation frequencies do not exceed the “pitch saturation limit” for implant patients, i.e., the rate or frequency (or modulation frequency) at which further increases in rate or frequency do not produce further increases in pitch. This limit is about 300 Hz for most patients (e.g., Zeng, 2002), although it can range as high as 1 kHz or a bit beyond that for exceptional patients (e.g., Hochmair-Desoyer et al., 1983; Townshend et al., 1987; Zeng, 2002). The cutoff frequency for the envelope detectors in the envelope-based strategies typically is between 200 and 400 Hz, which corresponds to the pitch saturation limit for most patients. The effective cutoff is higher in the HiRes strategy, and this may allow for a somewhat greater transmission of fine structure information for the exceptional patients with the higher pitch saturation limits.

In addition to this temporal coding of fine structure information at low frequencies, the envelope-based strategies also may provide a “channel balance” cue that could represent fine structure information at higher frequencies. McDermott and McKay (1994), among others (e.g., Kwon and van den Honert, 2006; Nobbe et al., 2007), have shown that intermediate pitches are produced when closely-spaced electrodes are stimulated in a rapid sequence, as compared to the pitches that are produced when each of the electrodes is stimulated separately. The pitch elicited with the rapid sequential stimulation varies according to the ratio of the currents delivered to the two electrodes. Thus, for example, progressively higher pitches would in general be produced as progressively greater proportions of the currents are delivered to the basal member of the pair. Rapid sequencing of stimuli across electrodes is used in the envelope-based strategies, and therefore one might expect that a continuous range of pitches could be produced with the strategies, including pitches that are intermediate to those produced with stimulation of single electrodes in isolation. Indeed, this has been demonstrated by Dorman et al. (1996) for the CIS strategy. Their data show that the strategy can represent fine gradations in frequencies across the represented spectrum of sounds, and Dorman et al. have suggested that this performance may be attributable to a channel-balance cue, produced as the ratio of stimulation between adjacent electrodes when an input at a given frequency excites the bandpass filters for each of the channels assigned to the electrodes. Such simultaneous excitation can occur because all physically-realizable filters have sloping cutoffs, which often overlap. (The amount of overlap between adjacent filters is controlled by the rate of attenuation beyond each cutoff frequency for each filter, usually measured in dB/octave or dB/decade of frequencies, and by the point at which the filter responses “cross,” which typically is the 3-dB attenuation point for each filter.) The relative excitation at the two electrodes would reflect the relative excitation of the two filters for the two channels by the sinusoidal input. Thus, fine structure information may be represented as the ratio of stimulation between channels, as a result of (1) overlaps in the responses between filters and (2) the intermediate pitches that are produced with rapid sequential stimulation of closely-spaced electrodes. Such a representation would require filters with overlapping responses, and indeed the representation might be enhanced with bandpass filters having a triangular- or bell-shaped response, as concatenation of those filters would produce a uniform or nearly uniform response across frequencies, and continuous variations in the ratio of filter outputs for changes in frequency. (The current and some prior MED-EL implementations of CIS use bandpass filters with bell-shaped responses for this very reason.)
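The channel-balance mechanism can be illustrated with a pair of overlapping bell-shaped filters. In the sketch below, the filter centers and bandwidth are arbitrary illustrative values; the point is that the output ratio, and hence the hypothesized balance of currents on the two electrodes, varies smoothly and monotonically as an input tone moves between the two center frequencies.

```python
# Two overlapping bell-shaped (here, Gaussian) channel filters. As a tone
# moves from one center frequency to the other, the ratio of the filter
# outputs varies smoothly and monotonically with frequency.
import numpy as np

fc1, fc2, bw = 1000.0, 1400.0, 400.0   # center frequencies and FWHM bandwidth, Hz

def gain(f, fc):
    sigma = bw / 2.355                  # convert FWHM to Gaussian sigma
    return np.exp(-0.5 * ((f - fc) / sigma) ** 2)

for f in [1000, 1100, 1200, 1300, 1400]:
    g1, g2 = gain(f, fc1), gain(f, fc2)
    print(f"{f} Hz -> balance {g2 / (g1 + g2):.2f}")  # rises from ~0.06 to ~0.94
```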

The situation just described is quite different from the idealized example given at the beginning of this section 5.6. In the example, only a single filter (and channel) is used. In a real processor, multiple filters and associated channels and sites of stimulation are used. The filters for the channels overlap, producing multiple sites of stimulation for single frequencies at the input. Such stimulation in turn produces intermediate pitches and (potentially) a fine-grained representation of fine structure information.

At this time, it is not clear how much fine structure information is presented and received with the envelope-based strategies. The results reported by Dorman et al. (1996), and the analysis presented above, suggest that the amount may be substantial.

The possibility that only a small amount of the fine structure information is transmitted with envelope-based strategies, along with the findings of Smith et al. (2002) and of Xu and Pfingst (2003) demonstrating the importance of the information, has motivated efforts to represent the information in other ways (e.g., Nie et al., 2005; Wilson et al., 2005; Zeng et al., 2005; Hochmair et al., 2006; Arnoldner et al., 2007; Litvak et al., 2008; Bonham and Litvak, this issue). Indeed, many have assumed that little or no fine structure information is transmitted by the “envelope-based” strategies, as “only envelope information is presented.” This view ignores the fact that temporal information is presented in the modulation waveforms up to 200–400 Hz or higher, and it ignores the fact that a channel-balance cue may well convey fine structure information at higher frequencies, especially if filter overlaps are appropriately designed.

The extent to which fine structure information already is available to implant patients is unknown at present. In addition to developing alternative approaches for representing the information, as mentioned above, future efforts also might be productively directed at developing direct measures of the transmission of fine structure information to implant patients. One promising approach for this has been described quite recently by Drennan et al. (2008) and involves discrimination of Schroeder-phase harmonic complexes, which differ in temporal fine structure only. Additional measures may be needed for assessing the transmission of fine structure information for bandpass channels with relatively high center frequencies, e.g., above 400 Hz. An excellent possibility for such measures is the frequency discrimination measure developed years ago by Dorman et al. (1996). As noted before, that measure demonstrated discrimination of many frequencies along a fine and continuous scale, for the CIS strategy and over the frequency range that included overlapping responses between adjacent bandpass filters. This finding showed that fine structure information is transmitted at the higher frequencies with CIS (generally above 400 Hz with the standard filter choices), and suggested the likely existence and operation of a channel-balance cue for frequencies other than the center frequencies of the bandpass filters.

In broad terms, at least some fine structure information is transmitted to patients by the CIS and (presumably) other envelope-based strategies. How much is an open question. In addition, we do not know at this point whether any of the alternatives that have been specifically designed to increase the transmission in fact do so. Direct measures of the transmission of fine structure information with all of these strategies – CIS, n-of-m, ACE, SPEAK, HiRes, Fine Hearing, HiRes 120, and other strategies now in development – would be most helpful, to know whether one or more of them are better than the others and over what ranges of frequencies. In addition, and assuming considerable “headroom” may exist for the transmission of fine structure information, the measures could inform the development of refined or new approaches that may increase the transmission further. (The results from Dorman et al. suggest that this is a good assumption, i.e., although frequency discrimination with CIS was good and considerably better than that for an alternative strategy, the discrimination also was worse than what would be expected for subjects with normal hearing taking the same test.) The key questions at this time are (1) how much of the information is transmitted with conventional envelope-based strategies; (2) whether those strategies can be improved to enhance the transmission, e.g., with strictly base-to-apex or apex-to-base update orders to ensure rapid sequential stimulation of all pairs of adjacent electrodes; and (3) whether a fundamentally-different strategy can produce a significant increment in the transmission.

5.7. Less-than-resolute representations of fundamental frequencies for complex sounds

Although F0s are represented in the modulation waveforms of CIS and other envelope-based strategies, such representations do not provide the highly salient and highly discriminable representations of F0s found in normal hearing. As has been mentioned, temporal representations of frequencies with electrically elicited hearing are limited to frequencies lower than the pitch saturation limit, which is around 300 Hz for most patients. In addition, in this low-frequency range below 300 Hz, the difference limens (DLs) for rates or frequencies of electric stimuli are much worse (typically ten times worse) than the DLs for acoustic stimuli in normal hearing (e.g., Zeng, 2002; Baumann and Nobbe, 2004).

Of course, frequencies of input sounds also can be represented by place of stimulation with electric stimuli. Higher frequencies can be represented in this way than with the rate codes. Here, too, however, the DLs for electrically elicited hearing appear to be worse than the DLs for normal hearing (Dorman et al., 1996), as just mentioned in the preceding section.

The pitches elicited with changes in rate or frequency of stimulation may be similar to the “nonspectral” pitches elicited in normal hearing with sinusoidally amplitude-modulated noise (Burns and Viemeister, 1976, 1981). These nonspectral pitches also saturate at a relatively low modulation frequency (850–1000 Hz, corresponding to the upper end of pitch saturation limits for implant patients), and the DLs for changes in modulation frequency below this saturation limit are worse than the DLs for sinusoids presented alone, where both rate and place of stimulation vary with frequency.

Accurate perception of F0s is important for (1) separation of “auditory streams” from different sources (e.g., a primary speaker and a competing voice); (2) identification of a speaker’s gender; (3) identification in speech of emotion and of declarative versus inquisitive intent; (4) reception of tone languages; and (5) reception of melodies. Thus, the less-than-resolute representation of F0s is a problem for implants, and this problem has received considerable attention (e.g., Geurts and Wouters, 2001, 2004; Luo and Fu, 2004; Green et al., 2005; Laneau et al., 2006; Carroll and Zeng, 2007; Chatterjee and Peng, 2007; Sucher and McDermott, 2007). Unfortunately, a way to improve the representations has not been identified to date despite these efforts. (With some approaches, small improvements have been demonstrated, but only at the cost of decrements in the transmission of other types of important information.)

The apparent lack of salient pitches for implant patients, and the relatively poor discrimination of frequencies for the patients, may be attributable to the large differences in patterns of neural excitation with implants compared to the patterns in normal hearing (e.g., Moore, 2003; McKay, 2005; Turner et al., this issue). In normal hearing, frequencies of stimulation are coordinated with the places (or sites) of stimulation, and the lower harmonics of a fundamental frequency are resolved by the auditory filters and are separately represented along the cochlear partition for the F0s of the many sounds with relatively low F0s such as speech and music. In addition, a “slowing down” (accumulation of phase lags) of the traveling wave along the basilar membrane just basal to and at the position of the peak response for a given sinusoidal input produces a pattern of sharply increasing latencies of responses for neurons innervating this region in the normal cochlea, which could be “read” by the central auditory system as indicating the point of the peak response and therefore the frequency of stimulation (e.g., Loeb et al., 1983). Current CIs do not represent any of these features, with the possible exception of the first feature using the Fine Hearing strategy (Hochmair et al., 2006; Arnoldner et al., 2007), in which the rate of pulsatile stimulation may be roughly coordinated with the site(s) of stimulation, for the apical 1–3 channels, depending on choices made in the fitting of the strategy.

Among the features, the presentation and perception of the resolved harmonics appears to be essential for highly salient pitch percepts in normal hearing (Oxenham et al., 2004). In addition, the harmonics may need to be at the “tonotopically correct” places along the cochlear partition to be effective. Such a representation would be difficult or impossible to achieve with a CI, in that precise control over multiple sites of stimulation – corresponding to the first several harmonics – would be required. High-density electrodes, or use of virtual sites between electrodes (formed either with rapid sequential stimulation of adjacent electrodes, as described above, or with simultaneous stimulation of adjacent electrodes, as described by Bonham and Litvak in this issue), might possibly provide the requisite level of control, especially if a closer apposition between the electrodes and their neural targets can be achieved.

Similarly, replication of the rapidly increasing latencies of neural responses, produced in normal hearing near and at the position(s) of maximal deflections of the basilar membrane, would be difficult with implants as well, as this would require a high level of control over the relative timing of neural responses over short distances along the length of the cochlea. This might be possible with high-density electrodes and a close apposition of those electrodes to target neurons, but probably is not possible with present designs of ST electrodes or without an induced growth of neurites toward ST electrodes.

Precise coordination of rate and place of stimulation also would require precise control over site of stimulation. In addition, the processing strategy would need to present pulses at the appropriate rate at each electrode, and would need to do this while still maintaining a non-simultaneity of stimulation across electrodes, to avoid interactions that result from direct summation of electric fields from different electrodes. Finally, coordination of rates and places for low F0s, such as those for speech (in the range from about 80 to 250 Hz), would require stimulation in apical parts of the cochlea, which might be achieved with deeply-inserted ST electrode arrays and selective stimulation of neurons that innervate the apical region, e.g., through selective stimulation of surviving peripheral processes in the apex.
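The non-simultaneity constraint amounts to a simple time-slot schedule, as the sketch below shows for assumed, illustrative values of the channel count and pulse duration. Note that the schedule also caps the per-channel pulse rate at one pulse per frame, which is one reason precise rate-place coordination is difficult to arrange in practice.

```python
# Interleaved (non-simultaneous) pulse timing: each electrode is assigned its
# own time slot within a repeating frame, so no two pulses overlap and the
# electric fields cannot sum directly. Channel count and pulse duration are
# assumed, illustrative values.
n_channels = 8
pulse_dur_us = 50.0                      # assumed total duration of one biphasic pulse
frame_us = n_channels * pulse_dur_us     # one full sweep across the electrodes

def pulse_onset_us(frame_index, channel):
    # start time of a given channel's pulse in a given frame
    return frame_index * frame_us + channel * pulse_dur_us

rate_per_channel = 1e6 / frame_us        # pulses per second per electrode
print(f"per-channel rate: {rate_per_channel:.0f} pulses/s")   # 2500 for these numbers
print([pulse_onset_us(0, c) for c in range(n_channels)])      # non-overlapping slots
```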

Even if rate and place could be coordinated, the rate part of the representation may not be effective, as the perceived rate is restricted by the pitch saturation limit. Thus, presenting a 2 kHz stimulus at the 2 kHz place may still produce a mismatch between rate and place percepts, in that the perceived rate may be the same as that produced by a 300 pulses/s stimulus; the same would hold for all sites representing frequencies above the pitch saturation limit.

At this time, it is not known whether coordination of rate with place of stimulation is important, or whether replication of the latency fields that occur in normal hearing is important. However, representation of resolved harmonics is clearly important. This might be achieved through a higher level of neural control with implants, e.g., by bringing the electrodes closer to the neural targets or vice versa.

In contrast to the apparently weak representations of F0 for complex sounds with present-day unilateral CIs, the representation appears to be highly robust with combined EAS, as described in section 6 below. The acoustic stimulation part of combined EAS, and the perception of that information, may well include multiple harmonics of F0s for practically all voiced speech sounds and for most musical sounds, and additionally the correct placements of those harmonics along the length of the cochlea. Moreover, any latency fields of perceptual significance are most likely produced with the acoustic stimulation part of combined EAS. (The fields may be somewhat different than normal, depending on the degree of hearing loss at low frequencies.) Thus, combined EAS may well be the single best way to convey F0 information for persons with highly compromised hearing, but also with some remaining sensitivity to acoustic stimuli at low frequencies. For everyone else, another way needs to be found, as outlined in the discussion above.

5.8. Little or no sound localization ability with unilateral implants

Patients using unilateral CIs have little or no sound localization ability (e.g., Nopp et al., 2004; Senn et al., 2005). This reduces the effectiveness of the alerting function that could be supported by a prosthetic system for hearing and eliminates the S/N advantage of binaural hearing, especially for different locations of the speech and the noise. These deficits are largely repaired with bilateral CIs, as described below.

6. Two recent advances

Two recent advances have produced significant improvements in the overall (average) performance of implant systems. The advances are (1) electrical stimulation on both sides with bilateral CIs, and (2) combined EAS of the auditory system for persons with residual hearing at low frequencies. Bilateral electrical stimulation may reinstate at least to some extent the interaural amplitude and timing difference cues that allow people with normal hearing to lateralize sounds in the horizontal plane and to selectively “hear out” a voice or other source of sound from among multiple sources at different locations. Additionally, stimulation on both sides may allow users to make use of the acoustic shadow cast by the head for sound sources off the midline. In such cases, the S/N may well be more favorable at one ear compared to the other for multiple sources of sound, and users may be able to attend to the ear with the better S/N for the desired source. Combined EAS may preserve a relatively-normal hearing ability at low frequencies, with excellent frequency resolution and other attributes of normal hearing, while providing a complementary representation of high frequency sounds with the CI and electrical stimulation. Various surgical techniques and drug therapies have been developed to preserve low-frequency hearing in an implanted cochlea, to allow combined EAS of the same ear following an implant operation. These techniques and therapies are reviewed in Wilson and Dorman (2008b) and include deliberately-short insertions of the electrode array (6, 10, 16 or 20 mm) to reduce the risk of damaging the apical part of the cochlea and remaining hair cells there.
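For orientation, the magnitudes of the interaural timing cues at stake can be estimated with the classic Woodworth spherical-head approximation, as in the sketch below. The head radius and speed of sound are nominal values, and real heads of course differ.

```python
# Interaural time differences from the Woodworth spherical-head
# approximation: ITD = (a / c) * (theta + sin(theta)).
import math

a = 0.0875   # nominal head radius, m
c = 343.0    # speed of sound, m/s

def itd_microseconds(azimuth_deg):
    th = math.radians(azimuth_deg)
    return 1e6 * (a / c) * (th + math.sin(th))

for az in (0, 30, 60, 90):
    print(f"{az:2d} deg azimuth -> ITD ~ {itd_microseconds(az):.0f} us")
# A source at 90 deg gives ~655 us, the scale of timing cue that bilateral
# stimulation would need to convey to restore normal-like lateralization.
```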

Each of these approaches – bilateral electrical stimulation and combined EAS – has produced large improvements in speech reception performance compared to control conditions (see review in Wilson and Dorman, 2008b, and see Turner et al. in this issue). In particular, bilateral stimulation can provide a substantial benefit in recognizing difficult speech materials such as monosyllabic words and in recognizing speech presented in competition with spatially distinct noise, in comparison to scores obtained with either unilateral implant alone (e.g., Gantz et al., 2002; Müller et al., 2002; Laszig et al., 2004; Ramsden et al., 2005; Litovsky et al., 2006; Ricketts et al., 2006). In addition, use of both implants supports an improved ability to lateralize or localize sounds (depending on which was measured in a particular study), again compared with either unilateral implant (e.g., van Hoesel and Tyler, 2003; Nopp et al., 2004; Senn et al., 2005; Grantham et al., 2007; Tyler et al., 2007). (This ability is nonexistent or almost nil with a unilateral implant, as noted before.) Combined EAS also provides a substantial benefit for listening to speech in quiet, in noise, in competition with another talker, or in competition with a multi-talker babble, compared with either electric stimulation only or acoustic stimulation only (e.g., von Ilberg et al., 1999; Kiefer et al., 2002, 2005; Gantz and Turner, 2003; Wilson et al., 2003; Gstoettner et al., 2004, 2006; Gantz et al., 2005, 2006; Kong et al., 2005; James et al., 2006; Gifford et al., 2007; Dorman et al., 2007; Turner et al., this issue). Indeed, in some cases the score for combined EAS is greater than the sum of the scores for the electric-only and acoustic-only conditions. This has been described as a synergistic effect (e.g., Gstoettner et al., 2004; Wilson et al., 2003). In addition, identification of melodies and reception of musical sounds are greatly improved with combined EAS compared to electric stimulation alone (Gantz et al., 2005; Kong et al., 2005; Gfeller et al., 2006 and 2007; Gstoettner et al., 2006; Dorman et al., 2007). (Scores with acoustic stimulation alone closely approximate the scores with combined EAS, for melody and music reception.) In cases of symmetric or nearly symmetric hearing loss, the benefits of combined EAS can be obtained with the acoustic stimulus delivered to the ear with the CI, to the opposite ear, or to both ears. Large benefits also can be obtained in cases of complete or nearly complete loss of residual hearing on the implanted side and delivery of the acoustic stimulus to a still-sensitive ear on the contralateral side. (This observation is good news for recipients of a fully-inserted CI on one side with residual hearing on the contralateral side, in that any residual hearing on the implanted side generally is lost with a full insertion of the electrode array.)

The described gains from bilateral electrical stimulation most likely arise from a partial or full restoration of the binaural difference cues and from the head shadow effect, as suggested above. A principal advantage of combined EAS over electric stimulation only may be that fine structure information is presented without modification in the low-frequency range with combined EAS, and a substantial portion or all of this information may be perceived, at least by the better users. The fine structure information is likely to include F0s and the first one or two or more harmonics of the F0s, along with at least some indication of first formant frequencies for speech. The information also is likely to include most F0s and perhaps the first one or two harmonics (depending on the F0) for music. The acoustic-stimulation part of combined EAS is effective for low frequencies only, of course, but fine structure information in this range is more important than fine structure information at higher frequencies, for both speech and music reception (e.g., Smith et al., 2002; Xu and Pfingst, 2003).

A detailed discussion of possible mechanisms underlying the benefits of bilateral CIs and of combined EAS is presented in Wilson and Dorman (2008b). Detailed discussions about possible mechanisms underlying the benefits of combined EAS also are presented in Turner et al. in this issue, in Qin and Oxenham (2006), and in Dorman et al. (2007).

Each of these relatively new approaches, bilateral electrical stimulation and combined EAS, utilizes or reinstates a part of the natural system. Two ears are better than one, and use of even a part of normal or nearly normal hearing at low frequencies can provide a highly significant advantage.

7. Possibilities for further improvements

Tremendous progress has been made in the design and performance of cochlear prostheses. However, much room remains for improvements. Patients with the best results still do not hear as well as listeners with normal hearing, particularly in demanding situations such as speech presented in competition with noise or other talkers. Users of standard unilateral implants do not have much access to music and other sounds that are more complex than speech. Most importantly, a wide range of outcomes persists, even with the current processing strategies and implant systems, and even with bilateral implants or combined EAS.

Fortunately, major steps forward have been made recently – with bilateral implants and combined EAS – and many other promising possibilities for further improvements in implant design and function are on the horizon. Some of the possibilities include:

  • New designs or placements of electrode arrays, to bring the electrodes in closer proximity to neural targets.

  • Detection of peripheral processes, using psychophysical or electrophysiological measures, and selective activation of the processes when present and if possible, again to reduce the distance between electrodes and their neural targets.

  • Continued efforts to promote the growth of neurites toward ST implants, to bring the target toward the electrodes.

  • Continued development of novel modes of stimulation that may allow precise spatial control of excitation sites, such as the optical mode of stimulation described by Richter et al. in this issue.

  • Identification of the mechanism(s) underlying the apparent disconnect between the number of sites that can be discriminated when stimulated in isolation versus the number of effective channels in a real-time, speech-processor context, and use of that knowledge to possibly reduce the gap.

  • Continued efforts to increase the transmission of fine structure information to implant patients, as may be informed and facilitated by direct measures of the transmission.

  • Continued efforts to improve the representation and reception of F0 information, in the limited ways that may be available with present ST electrodes and in the possibly less-limited ways that may be available with other electrode designs.

  • Broadened application of combined EAS to include as many patients as possible, including acoustic stimulation on the side contralateral to a fully-inserted CI for patients with at least some residual hearing on that side, as the acoustic stimulation part of combined EAS may be the single best way to provide salient representations of pitch and of fine structure information in the range of residual, low-frequency hearing. (Use of the natural system wherever possible almost has to be better than use of electrical stimuli.)

  • Refinement and optimization of processing strategies and other aspects of bilateral implants and of combined EAS, each of which is in its nascent stage.

  • Acoustic stimulation in conjunction with bilateral CIs, for bilateral CI users who have some residual hearing.

  • Continued development of surgical techniques and adjunctive drug therapies for better preservation of residual hearing during and after surgeries for combined EAS.

  • Continued development of electrical stimulation patterns and adjunctive drug therapies to preserve spiral ganglion cells and other neural structures in sensorineural hearing loss and in the implanted cochlea.

  • Continued development of strategies designed to provide a closer mimicking of the complex and interactive processing that occurs in the normal cochlea.

Each of the possibilities listed above is aimed at improving the representation at the periphery. A fundamentally new approach may be needed to help those patients presently at the low end of the performance spectrum, however. They may have compromised “auditory brains” as suggested above and by many recent findings. For them, a “top down” or “cognitive neuroscience” approach to implant design may be more effective than the traditional “bottom up” approach. In particular, a top-down approach would ask what the compromised brain needs as an input in order to perform optimally, in contrast to the traditional approach of replicating insofar as possible the normal patterns of activity at the auditory nerve. The patterns of stimulation specified by the new approach are quite likely to be different from the patterns specified by the traditional approach.

A related possibility that may help all patients at least to some extent is directed training to encourage and facilitate desired plastic changes in brain function (or, to put it another way, to help the brain in its task to learn how to utilize the inputs from the periphery provided by a CI). Such training, if well designed, may reduce the time needed to reach asymptotic performance and may produce higher levels of auditory function at that point and beyond (e.g., Fu and Galvin, this issue). The ideal training procedure for an infant or young child may be quite different from the ideal procedure for older children or adults, due to differences in brain plasticity. For example, the “step size” for increments in the difficulty of a training task may need to be much smaller for adults than for infants and young children (Linkenhoker and Knudsen, 2002). However, all patients may benefit from appropriately designed procedures that respect the differences in brain plasticity according to age.

The brain is a critical part of a prosthesis system. For patients with a fully intact brain, the “bottom up” approach to implant design probably is appropriate, i.e., an ever-closer approximation to the normal patterns of neural discharge at the periphery is likely to provide the inputs that the brain “expects” and is configured to receive and process. For patients with a compromised brain, such inputs may not be optimal. In those cases, a “top down” approach to implant design, or a combination of “top down” and “bottom up” approaches, may produce the best results. For example, a “top down” approach combined with techniques to minimize electrode interactions at the periphery may be especially effective for patients presently shackled with relatively poor outcomes.

8. Concluding remarks

We as a field have come a long way in the development of CIs. We and implant patients alike owe a great debt of gratitude to the pioneers who persevered in the face of intense criticism. In addition, following a cautious period at the beginning, the NIH played a major role in providing the foundations for present-day systems.

With a modern CI, Ludwig van Beethoven’s life, and Helen Keller’s life, would have been changed for the better. Their sense of isolation would have been assuaged if not eliminated. Beethoven probably would have been disappointed in listening to music, however.

The path before us is clear and exciting. We are starting in a good spot, with two recent advances and with the present high levels of sentence recognition in quiet. At the same time, there are multiple and promising opportunities for improvements in the current devices and approaches. Many outstanding groups, including those represented by the authors in this special issue, are pushing forward with the vigor and determination of the pioneers. Cochlear implants have a brilliant future.

Table 4.

Average age, duration of deafness, and percent-correct scores on tests of speech, voice, and music recognition for patients who achieve average scores (58% correct) and above-average scores (80% correct) on tests of CNC word recognition. The numbers within parentheses are standard deviations.

Patient characteristic or test          | Average | Above average
Age (years)                             | 58 (14) | 48 (11)
Duration of deafness (years)            | 18 (18) | 11 (14)
CNC words                               | 58 (9)  | 80 (7)
Consonants                              | 66 (19) | 84 (6)
Vowels                                  | 58 (18) | 72 (17)
CUNY sentences in quiet                 | 97 (3)  | 99 (2)
AzBio sentences in quiet                | 70 (16) | 90 (7)
CUNY sentences in noise at +10 dB S/N   | 70 (20) | 90 (8)
AzBio sentences in noise at +10 dB S/N  | 42 (20) | 72 (14)
AzBio sentences in noise at +5 dB S/N   | 27 (15) | 52 (15)
Voice: male vs. female                  | 93 (5)  | 96 (5)
Voice: within gender                    | 65 (6)  | 70 (6)
Five melodies                           | 33 (20) | 56 (34)

Acknowledgments

Material also was drawn or adapted from several recent publications (Dorman and Wilson, 2004; Dorman and Spahr, 2006; Wilson, 2006; Wilson and Dorman, 2007, 2008a, b, c). Work contributing data and ideas to the paper was supported in part by NIH project N01-DC-2-1002 (to BSW) and its predecessors, all titled “Speech Processors for Auditory Prostheses,” and by NIH project 5R01DC000654 (to MFD) and its predecessors, all titled “Auditory Function and Speech Perception with Cochlear Implants.” The first author is a consultant for MED-EL Medical Electronics GmbH, of Innsbruck, Austria, as its Chief Strategy Advisor. None of the statements in this paper favor that or any other company, and none of the statements pose a conflict of interest. We thank the many subjects who have participated in our studies over the years, and who thereby have made our work possible. The first author also would like to thank Prof. Dr. Wolf-Dieter Baumgartner for his kind invitation to present the talk in Vienna and for his spectacular friendship and partnership in research. We are grateful to the two anonymous reviewers of this paper for their helpful suggestions.

Abbreviations

ACE: advanced combination encoder
CIs: cochlear implants
CIS: continuous interleaved sampling
CUNY: City University of New York (as in the CUNY sentences)
DCN: dorsal cochlear nucleus
DLs: difference limens
EAS: electric and acoustic stimulation (as in combined EAS)
F0s: fundamental frequencies
FSP: fine structure processing (as in the FSP strategy for CIs)
HINT: Hearing in Noise Test
HiRes: “HiResolution” (as in the HiRes processing strategy for CIs)
HiRes 120: HiRes with the “Fidelity™ 120 option”
NIH: United States National Institutes of Health
NPP: Neural Prosthesis Program (at the NIH)
PACE: psychoacoustic advanced combination encoder
S/Ns: speech-to-noise ratios
SPEAK: spectral peak (as in the SPEAK processing strategy for CIs)
ST: scala tympani

Footnotes


Some of the findings and thoughts in this paper were first presented in a Special Guest of Honor Address by the first author at the Ninth International Conference on Cochlear Implants and Related Sciences, held in Vienna, Austria, June 14–17, 2006, and with the same title as the present paper.

References

  1. Anderson DJ. Penetrating multichannel stimulation and recording electrodes in auditory science and prosthesis research. Hear. Res. doi: 10.1016/j.heares.2008.01.010. (this issue) [DOI] [PubMed] [Google Scholar]
  2. Arnoldner C, Riss D, Brunner M, Baumgartner WD, Hamzavi JS. Speech and music perception with the new fine structure speech coding strategy: preliminary results. Acta Otolaryngol. 2007 doi: 10.1080/00016480701275261. in press. [DOI] [PubMed] [Google Scholar]
  3. Arts HA, Jones DA, Anderson DJ. Prosthetic stimulation of the auditory system with intraneural electrodes. Ann. Otol. Rhinol. Laryngol. 2003;112(Suppl. 2003):20–25. doi: 10.1177/00034894031120s905. [DOI] [PubMed] [Google Scholar]
  4. Badi AN, Kertesz TR, Gurgel RK, Shelton C, Normann RA. Development of a novel eighth-nerve intraneural auditory neuroprosthesis. Laryngoscope. 2003;113:833–842. doi: 10.1097/00005537-200305000-00012. [DOI] [PubMed] [Google Scholar]
  5. Badi AN, Owa AO, Shelton C, Normann RA. Electrode independence in intraneural cochlear nerve stimulation. Otol. Neurotol. 2007;28:16–24. doi: 10.1097/01.mao.0000244368.70190.38. [DOI] [PubMed] [Google Scholar]
  6. Baumann U, Nobbe A. Pulse rate discrimination with deeply inserted electrode arrays. Hear. Res. 2004;196:49–57. doi: 10.1016/j.heares.2004.06.008. [DOI] [PubMed] [Google Scholar]
  7. Bavelier D, Neville HJ. Cross-modal plasticity: where and how? Nat. Rev. Neurosci. 2002;3:443–452. doi: 10.1038/nrn848. [DOI] [PubMed] [Google Scholar]
  8. Berenstein CK, Mens LHM, Mulder JJS, Vanpoucke FJ. Current steering and current focusing in cochlear implants: comparison of monopolar, tripolar, and virtual channel electrode configurations. Ear Hear. 2008;29:250–260. doi: 10.1097/aud.0b013e3181645336. [DOI] [PubMed] [Google Scholar]
  9. Bilger RC, Black FO, Hopkinson NT, Myers EN, Payne JL, Stenson NR, Vega A, Wolf RV. Evaluation of subjects presently fitted with implanted auditory prostheses. Ann. Otol. Rhinol. Laryngol. 1977;86(Suppl. 38, No. 3, Part 2):1–176. [Google Scholar]
  10. Blamey P. Are spiral ganglion cell numbers important for speech perception with a cochlear implant? Am. J. Otol. 1997;18(Suppl. 6):S11–S12. [PubMed] [Google Scholar]
  11. Blamey P, Arndt P, Bergeron F, Bredberg G, Brimacombe J, Facer G, Larky J, Lindstrom B, Nedzelski J, Peterson A, Shipp D, Staller S, Whitford L. Factors affecting auditory performance of postlinguistically deaf adults using cochlear implants. Audiol. Neurootol. 1996;1:293–306. doi: 10.1159/000259212. [DOI] [PubMed] [Google Scholar]
  12. Bonham BH, Litvak LM. Current focusing and steering: modeling, physiology, and psychophysics. Hear. Res. doi: 10.1016/j.heares.2008.03.006. (this issue). [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Boothroyd A, Hanin L, Hnath T. Speech and Hearing Sciences Research Center. New York, NY: City University of New York; 1985. A sentence test of speech perception: reliability, set equivalence, and short-term learning. Internal Report RCI 10. [Google Scholar]
  14. Brendel M, Buechner A, Drueger B, Frohne-Buechner C, Lenarz T. Evaluation of the Harmony soundprocessor in combination with the speech coding strategy HiRes 120. Otol. Neurotol. 2008;29:199–202. doi: 10.1097/mao.0b013e31816335c6. [DOI] [PubMed] [Google Scholar]
  15. Büchner A, Nogueira W, Edler B, Battmer RD, Lenarz T. Results from a psychoacoustic model-based strategy for the Nucleus-24 and Freedom cochlear implants. Otol. Neurotol. 2008;29:189–192. doi: 10.1097/mao.0b013e318162512c. [DOI] [PubMed] [Google Scholar]
  16. Buechner A, Brendel M, Krüeger B, Frohne-Büchner C, Nogueira W, Edler B, Lenarz T. Current steering and results from novel speech coding strategies. Otol. Neurotol. 2008;29:203–207. doi: 10.1097/mao.0b013e318163746. [DOI] [PubMed] [Google Scholar]
  17. Burns EM, Viemeister NF. Nonspectral pitch. J. Acoust. Soc. Am. 1976;60:863–869. [Google Scholar]
  18. Burns EM, Viemeister NF. Played-again SAM: further observations on the pitch of amplitude-modulated noise. J. Acoust. Soc. Am. 1981;70:1655–1660. [Google Scholar]
  19. Busby PA, Tong YC, Clark GM. The perception of temporal modulations by cochlear implant patients. J. Acoust. Soc. Am. 1993;94:124–131. doi: 10.1121/1.408212. [DOI] [PubMed] [Google Scholar]
  20. Carroll J, Zeng FG. Fundamental frequency discrimination and speech perception in noise in cochlear implant simulations. Hear. Res. 2007;231:42–53. doi: 10.1016/j.heares.2007.05.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Chatterjee M, Peng SC. Processing F0 with cochlear implants: modulation frequency discrimination and speech intonation recognition. Hear. Res. 2007;235:143–156. doi: 10.1016/j.heares.2007.11.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Cohen LT, Saunders E, Knight MR, Cowan RS. Psychophysical measures in patients fitted with Contour and straight Nucleus electrode arrays. Hear. Res. 2006;212:160–175. doi: 10.1016/j.heares.2005.11.005. [DOI] [PubMed] [Google Scholar]
  23. Donaldson GS, Kreft HA, Litvak L. Place-pitch discrimination of single-versus dual-electrode stimuli by cochlear implant users. J. Acoust. Soc. Am. 2005;118:623–626. doi: 10.1121/1.1937362. [DOI] [PubMed] [Google Scholar]
  24. Dorman MF, Gifford RH, Spahr AJ, McKarns SA. The benefits of combining acoustic and electric stimulation for the recognition of speech, voice and melodies. Audiol. Neurotol. 2007;13:105–112. doi: 10.1159/000111782. [DOI] [PMC free article] [PubMed] [Google Scholar]
  25. Dorman MF, Loizou PC, Spahr AJ, Maloff E. A comparison of the speech understanding provided by acoustic models of fixed-channel and channel-picking signal processors for cochlear implants. J. Speech Lang. Hear. Res. 2002;45:783–788. doi: 10.1044/1092-4388(2002/063). [DOI] [PubMed] [Google Scholar]
  26. Dorman MF, Smith LM, Smith M, Parkin JL. Frequency discrimination and speech recognition by patients who use the Ineraid and continuous interleaved sampling cochlear-implant signal processors. J. Acoust. Soc. Am. 1996;99:1174–1184. doi: 10.1121/1.414600. [DOI] [PubMed] [Google Scholar]
  27. Dorman MF, Spahr AJ. Speech perception by adults with multichannel cochlear implants. In: Waltzman SB, Roland JT Jr, editors. Cochlear Implants. 2nd Ed. New York: Thieme Medical Publishers; 2006. pp. 193–204. [Google Scholar]
  28. Dorman MF, Wilson BS. The design and function of cochlear implants. Am. Scientist. 2004;92:436–445. [Google Scholar]
  29. Drennan WR, Longnion JK, Ruffin C, Rubinstein JT. Discrimination of Schroeder-phase harmonic complexes by normal-hearing and cochlear-implant listeners. J. Assoc. Res. Otolaryngol. 2008;9:138–149. doi: 10.1007/s10162-007-0107-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
  30. Eggermont JJ, Ponton CW. Auditory-evoked potential studies of cortical maturation in normal and implanted children: correlations with changes in structure and speech perception. Acta Otolaryngol. 2003;123:249–252. doi: 10.1080/0036554021000028098. [DOI] [PubMed] [Google Scholar]
  31. Eisen MD. History of the cochlear implant. In: Waltzman SB, Roland JT Jr, editors. Cochlear Implants. 2nd Ed. New York: Thieme Medical Publishers; 2006. pp. 1–10. [Google Scholar]
  32. Eisen MD. The history of cochlear implants. In: Niparko JK, Kirk KI, Mellon NK, Robbins AM, Tucci DL, Wilson BS, editors. Cochlear Implants: Principles & Practices. 2nd Edition. Philadelphia: Lippincott Williams & Wilkins; 2008. in press. [Google Scholar]
  33. Fallon JB, Irvine DRF, Shepherd RK. Cochlear implants and brain plasticity. Hear. Res. 2008;238:110–117. doi: 10.1016/j.heares.2007.08.004. [DOI] [PMC free article] [PubMed] [Google Scholar]