Abstract
Purpose
To determine why, in a pilot study, only 1 of 11 cochlear implant listeners was able to reliably identify a frequency-to-electrode map where the intervals of a familiar melody were played on the correct musical scale. The authors sought to validate their method and to assess the effect of pitch strength on musical scale recognition in normal-hearing listeners.
Method
Musical notes were generated as either sine waves or spectrally shaped noise bands, with a center frequency equal to that of a desired note and symmetrical (log-scale) reduction in amplitude away from the center frequency. The rate of amplitude reduction was manipulated to vary pitch strength of the notes and to simulate different degrees of current spread. The effect of the simulated degree of current spread was assessed on tasks of musical tuning/scaling, melody recognition, and frequency discrimination.
Results
Normal-hearing listeners could accurately and reliably identify the appropriate musical scale when stimuli were sine waves or steeply sloping noise bands. Simulating greater current spread degraded performance on all tasks.
Conclusions
Cochlear implant listeners with an auditory memory of a familiar melody could likely identify an appropriate frequency-to-electrode map, but only when the pitch strength of the electrically produced notes is very high.
Keywords: cochlear implant, music perception, spatial selectivity, simulation
Cochlear implants provide auditory percepts by stimulating neurons in the spiral ganglion with current delivered by intracochlear electrodes. As with acoustic stimulation, the pitch of an electrically presented signal is determined, in part, by the place of stimulation. The pitch percept gradually rises as the place of stimulation moves from apex to base. Thus, speech coding strategies use a frequency-to-place, or frequency-to-electrode, map that mimics the natural tonotopic organization of the normal system. Although most patients experience an approximation of normal pitch percepts (i.e., high vs. low), misalignments in the frequency-to-electrode map will undoubtedly impair pitch perception so that a one-octave change in frequency does not necessarily bring about a one-octave change in pitch. These discrepancies in the pitch percept will likely affect the quality of both speech and music and, at the very least, will require relearning, or recalibration, to overcome (see Dorman & Ketten, 2003; Fu, Nogaki, & Galvin, 2005; Fu & Shannon, 1999; Svirsky, Silveira, Neuburger, Teoh, & Suarez, 2004).
Misalignments in the frequency-to-electrode map will likely occur because the most common maps are based on the frequency-to-place maps derived from auditory hair cells (Greenwood, 1990), not cells of the spiral ganglion (Baumann & Nobbe, 2006; Dorman et al., 2007; Kawano, Seldon, & Clark, 1996; Sridhar, Stakhovskaya, & Leake, 2006), which are the target of electrical stimulation. Misalignments could also be caused by the variations in electrode placement observed across patients (Blamey, Dooley, Parisi, & Clark, 1996; Boex et al., 2006; Skinner et al., 2002) and by variability in spiral ganglion survival. Thus, it is highly unlikely that any single “one-size-fits-all” frequency-to-electrode map can be appropriate.
Efforts to improve frequency-to-electrode assignments, on a group or individual level, are justified on many grounds. In the context of this report, the relevant observation is that most patients who are fit with a cochlear implant report dissatisfaction with the quality of music and struggle to identify simple melodies when forced to rely on pitch cues in the absence of tempo or rhythm (e.g., Gfeller et al., 2000; Gfeller & Lansing, 1991; Gfeller, Woodworth, Robin, Witt, & Knutson, 1997; Kong, Cruz, Jones, & Zeng, 2004; for a review, see McDermott, 2004; Schultz & Kerber, 1994; Spahr & Dorman, 2004; Spahr, Dorman, & Loiselle, 2007).
It is likely that there are multiple reasons for patients’ dissatisfaction with music. It is also likely that one reason is inappropriate frequency-to-electrode maps. To gain insight into this issue, we asked patients to identify the appropriate musical scale for a very familiar melody: “Twinkle, Twinkle, Little Star.” We relied heavily on the note structure for this melody, which is also used in the “Happy Birthday Song” and the “Alphabet Song” because of its widespread popularity and the recognizable perfect 5th interval occurring at the beginning of the melody. We searched for the appropriate frequency-to-electrode spacing by presenting the melody over approximately 2–6 mm of the electrode array in steps of 0.125 mm to compress and expand the perception of the musical intervals. The listeners were 11 patients fit with the Advanced Bionics Hi Resolution cochlear implant system. Current steering (Koch, Downing, Osberger, & Litvak, 2007; Townshend, Cotter, Van Compernolle, & White, 1987; Wilson, Lawson, Zerbi, & Finley, 1992) was used to create “virtual” channels between physical contacts so as to maintain an appropriate scaling of notes in each condition. Patients were informed that they would hear multiple versions of the melody and that some of these versions might sound “flat” (i.e., “the musical intervals are too small”), others might sound “stretched” (i.e., “the musical intervals are too large”), and still others might sound “correct.” Participants were to identify the version that sounded the most correct (i.e., the version that was most “in tune”). We assumed that in-tune judgments would be elicited when the pattern of note frequencies produced pitch percepts that were consistent with the auditory memory of the listener or, more optimistically, when the frequency-to-electrode map was best aligned with the perceptual pitch match of the listener.
Testing revealed that only 1 patient was highly reliable in identifying a preferred frequency-to-electrode spacing—that is, the in-tune judgments varied by less than 0.5 mm/octave across multiple trials. Three patients had a moderate degree of reliability—with most in-tune responses occurring within a 1-mm/octave range. However, for 7 patients, the degree of reliability was very poor, with the in-tune responses occurring over a 2-mm/octave range. Thus, when listening to multiple presentations of the same note sequence, these patients were as likely to indicate that the in-tune perception occurred with a 3-mm/octave spacing as with a 5-mm/octave spacing. For a normal-hearing listener, a melody presented at 3 mm/octave would sound very flat, and a melody presented at 5 mm/octave would sound extremely stretched.
The research in this report was motivated by the outcome described previously—that is, the difficulty experienced by implant patients in recognizing whether a note sequence was presented over the correct distance along the cochlea. We have explored two aspects of this problem using normal-hearing participants and an acoustic simulation of electrical stimulation. First, we asked whether the method we used was appropriate. If it was, then normal-hearing listeners should easily identify the correct distance. Second, we manipulated the spread of excitation in the model to assess how variations in frequency resolution (e.g., Michaels, 1957; Moore, 1973) and variations in the strength of the pitch percept (e.g., Fastl & Stoll, 1979) altered judgments of musical intervals.
The pitch strength of an acoustic stimulus can be varied in several ways (Fastl & Stoll, 1979). In this project, we manipulated the signal in a manner that was relevant to electrical current spread and the breadth of the associated neural activation pattern. Thus, we used noise bands modeled after a simplified activation function for electrical stimulation (Rattay, 1990). Different degrees of spread of electrical stimulation were modeled by varying the roll-off of the noise bands away from the center frequency. Perceptually, signals of this kind sound like “fuzzy tones”—the more gradual the roll-off, the more “fuzzy” the percept.
The value of fuzzy tones has been assessed in models of cochlear implant excitation by comparing the speech recognition abilities of cochlear implant patients with the abilities of normal-hearing participants listening to a 15-channel, fuzzy-tone vocoder (Litvak, Spahr, Saoji, & Fridman, 2007). The outcomes revealed that variations in cochlear implant performance could be matched by varying the bandwidth of the fuzzy tones for normal-hearing listeners. Critically, the error patterns obtained from normal-hearing listeners in each condition were compared to cochlear implant patients who achieved a similar absolute level of performance on the same task. The goodness of the match was quantified by establishing the correlation between the vowel confusion matrices for the two groups of listeners. The correlations ranged from 0.77 for lower levels of performance to 0.985 at higher levels of performance. Although the ability to simulate the number and type of errors produced by cochlear implant listeners does not speak to the issue of sound quality, it does suggest that the signal had been distorted in a similar manner. With that perspective, the fuzzy tones emerged as the most appropriate acoustic signal to simulate the electrical delivery of melodic information via a cochlear implant.
Method
Participants
The participants were 10 normal-hearing listeners between 20 and 30 years of age. Normal hearing thresholds (≤20 dB HL) were obtained to pure-tone stimuli at test frequencies of 500, 1000, 2000, 4000, and 8000 Hz using a clinical audiometer in a sound-treated booth prior to testing. All participants were native English speakers, and all reported familiarity with the melodies described below. None of the listeners were musicians. All testing was completed at Arizona State University in Tempe, and all participants were paid for their participation. Each participant completed the musical tuning, melody recognition, and frequency discrimination tasks during a single test session. The research protocol was approved by the Institutional Review Board at Arizona State University.
Signal Processing
The stimuli for all tests included both pure tones and noise bands (fuzzy tones). For the noise band conditions, each stimulus/note was a narrowband noise centered at the appropriate frequency. The bandwidth of each narrowband noise was proportional to the center frequency in order to achieve nearly equivalent excitation area for each note along the basilar membrane in the cochlea of normal-hearing listeners. The shape of the noise spectrum approximately corresponds to the shape of the simplified activation function for electrical stimulation described by Rattay (1990) as 1/(x^2 + d^2)^1.5. In the preceding function, d is the distance from the neural tissue to the electrode, and x is the location of the neuron in the neural plane. The activation function has a pronounced peak near x = 0 for small values of d and is followed by a long tail. This activation function is thought to roughly correspond to the activation pattern in the cochlea created by electrical stimulation of a single electrode. In the case of the implanted cochlea, activation grows with linear current (e.g., Eddington, 1980), whereas for the normal cochlea, activation is thought to be proportional to dB SPL (Moore, 2003). Similarly, space in the implanted cochlea can be roughly converted to the logarithm of frequency for the acoustically stimulated case (Greenwood, 1990). Hence, to simulate current spread in normal-hearing listeners, x can be converted to the logarithm of frequency, and activation can be converted to dB SPL.
The fuzzy tone stimuli were generated on a computer using the following relationships:
(1)
(2)
(3)
In the above equations, f0 is the target frequency, r is the noise index used to determine the degree of roll-off away from the center frequency, Fs is the sampling rate, and u[i] is a random variable that was drawn from a uniform distribution that varies from –0.5 to 0.5. Figure 1 displays the spectra of a pure tone (r = 0) and four “fuzzy tone” stimuli (r = 0.5, 1.0, 1.5, and 2.0) generated with equal center frequencies and equal intensities (re: dB RMS). Note that the degree of amplitude roll-off is constant above and below the center frequency on a logarithmic scale (dB/octave). These r values correspond to bandwidths of 0.01, 0.19, 0.79, 2.5, and > 5.0 octaves, respectively, measured 3 dB below the peak.
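Equation details aside, the defining property of a fuzzy tone is a noise band whose amplitude rolls off symmetrically, in dB per octave, away from a center frequency. The sketch below shows one way to synthesize such a stimulus by shaping random-phase noise in the frequency domain; the `slope_db_oct` parameter is an illustrative stand-in for the roll-off rate and is not the paper’s r index.

```python
import numpy as np

def fuzzy_tone(f0, slope_db_oct, dur=0.8, fs=44100, seed=0):
    """Noise band centered at f0 whose amplitude falls off by
    slope_db_oct dB per octave of log-frequency distance from f0."""
    rng = np.random.default_rng(seed)
    n = int(dur * fs)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    mag = np.zeros_like(freqs)
    pos = freqs > 0                      # skip the DC bin
    # attenuation (dB) grows linearly with octave distance from f0
    atten_db = slope_db_oct * np.abs(np.log2(freqs[pos] / f0))
    mag[pos] = 10 ** (-atten_db / 20)
    phase = rng.uniform(0, 2 * np.pi, size=freqs.shape)
    x = np.fft.irfft(mag * np.exp(1j * phase), n)
    return x / np.max(np.abs(x))         # peak-normalize
```

A very steep slope approaches a pure tone, while shallow slopes yield the broader, “fuzzier” percepts described in the text.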
Figure 1.
Spectra of a pure tone (r = 0) and four “fuzzy tone” stimuli generated with equal center frequencies and equal intensities (re: dB RMS).
Test Materials
Frequency difference limen (DL)
Sensitivity to changes in frequency was assessed using a two-interval, forced-choice procedure. The “standard” frequencies were 2000, 1000, and 500 Hz. Stimulus duration was 800 ms, with a 15-ms rise and fall time. Signals were presented at 65 dB SPL via Sennheiser HD250 Linear II headphones. Listeners were asked to identify the interval (first or second) containing the higher pitched tone. The difference in frequency between the two tones was referred to as the delta frequency (ΔF). A two-down, one-up tracking procedure was used to estimate the ΔF yielding 70.7% correct discrimination. Each adaptive track consisted of eight reversals, and threshold was defined as the average of the final six reversals. For the first two reversals, ΔF was increased or decreased by a factor of 2; for the final six reversals, ΔF was increased or decreased by a factor of 1.7. The reported thresholds are the average of three tracks. Loudness was not roved in this experiment (see Henning, 1966). The condition order was fixed in descending order of the noise value (r).
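The adaptive rule described above can be sketched as follows. The simulated listener (always correct above `sim_threshold`, guessing at chance below it) is a hypothetical stand-in used only to exercise the tracking logic.

```python
import random

def two_down_one_up(start_delta, sim_threshold, n_reversals=8, seed=1):
    """2-down/1-up staircase tracking the ~70.7%-correct point.
    Steps change by a factor of 2 for the first two reversals and
    1.7 thereafter; threshold is the mean of the final six reversals."""
    rng = random.Random(seed)
    delta, correct_run, direction, reversals = start_delta, 0, None, []
    while len(reversals) < n_reversals:
        # simulated trial: correct above threshold, else chance (2AFC)
        correct = delta >= sim_threshold or rng.random() < 0.5
        move = None
        if correct:
            correct_run += 1
            if correct_run == 2:          # two in a row -> harder
                move, correct_run = 'down', 0
        else:                             # one miss -> easier
            move, correct_run = 'up', 0
        if move:
            if direction is not None and move != direction:
                reversals.append(delta)   # direction change = reversal
            direction = move
            factor = 2.0 if len(reversals) < 2 else 1.7
            delta = delta / factor if move == 'down' else delta * factor
    return sum(reversals[-6:]) / 6
```

The estimate converges toward the simulated listener’s threshold, mirroring how each track produced one threshold per condition.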
Melody recognition
Five melodies (“Twinkle, Twinkle, Little Star,” “Old MacDonald Had a Farm,” “Hark the Herald Angels Sing,” “London Bridge Is Falling Down,” and “Yankee Doodle”) were used in this task. These melodies were commonly selected from a larger list of melodies by adult cochlear implant patients who participated in a previous study (Spahr et al., 2007). All normal-hearing listeners reported familiarity with the selected melodies prior to participation. The frequency of notes was identical to that described by Hartmann and Johnson (1991), except that pure tones or noise bands, with a frequency equal to the fundamental frequency of the musical note, were used instead of complex musical instrument digital interface (MIDI) notes. Each melody consisted of 16 notes of equal duration. The frequencies of the notes ranged from 277 Hz to 622 Hz. Melodies were presented at 65 dB SPL in a quiet background using Sennheiser HD250 Linear II headphones. A closed-set design was used where the names of all five melodies were displayed on a computer screen. After presentation of a randomly selected melody, the listener responded by clicking the appropriate on-screen button. Participants completed two repetitions of the test procedure, with feedback, as a practice condition. In the test condition, there were five repetitions of each stimulus. The order of the items was randomized in the test list. The condition order was fixed in descending order of the noise value (r).
Musical tuning
A familiar melody (“Twinkle, Twinkle, Little Star” or “Old MacDonald Had a Farm”) was presented to normal-hearing listeners using five different musical scales. The range of musical scales was chosen so that musical intervals would be perceived as reduced on one end of the spectrum and expanded on the other. In traditional western music, each successive note in the scale increases in frequency by a factor of 2^(1/12). In this experiment the scaling factor was changed to (2^(1/12))^(L/4), where L represents the simulated length (in mm) of the electrode array used to represent one octave. The familiar melody was presented in conditions where L was set equal to 3.0, 3.5, 4.0, 4.5, and 5.0 mm. Thus, the intervals were reduced in the 3.0-mm and 3.5-mm conditions, correct in the 4.0-mm condition, and expanded in the 4.5-mm and 5.0-mm conditions. The notes were generated using either pure tones (r = 0) or fuzzy tones (r = 0.5, 1.0, or 1.5).
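The modified scaling relation can be checked numerically. The note values below are illustrative; 277 Hz is simply the lowest note used in the melodies.

```python
def note_freq(base_hz, semitones_up, L_mm):
    """Frequency reached after `semitones_up` scale steps when one
    octave is mapped onto L_mm of the array (4 mm is veridical)."""
    step = 2 ** ((1 / 12) * (L_mm / 4))
    return base_hz * step ** semitones_up

# the opening "Twinkle" leap: 7 scale steps up from 277 Hz
veridical = note_freq(277, 7, 4.0)   # a true perfect fifth
flat      = note_freq(277, 7, 3.0)   # compressed intervals
stretched = note_freq(277, 7, 5.0)   # expanded intervals
```

With L = 4.0 the step reduces to the standard 2^(1/12) semitone, so 12 steps span exactly one octave; smaller or larger L shrinks or stretches every interval by the same ratio.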
Prior to testing, listeners were informed that they would hear multiple versions of the familiar melody and that some of these versions might sound “flat” (i.e., “the musical intervals are reduced”), others might sound “correct,” and still others might sound “stretched” (i.e., “the musical intervals are expanded”). Listeners were asked to play each version of the melody several times and then to determine which version was presented using the correct musical scale. During testing, listeners were seated comfortably in front of a computer monitor, and signals were presented at 65 dB SPL via Sennheiser HD250 Linear II headphones. The monitor displayed five boxes labeled 1–5. Each box contained a “play” button and a “most correct” button. The five versions of the familiar melody were randomly assigned to Boxes 1–5. Each version was played and repeated by clicking the “play” button. The listener responded by clicking the “most correct” button corresponding to the version they perceived as being presented on the correct musical scale. Participants completed 10 trials in each condition. The list order was randomized in each trial. The first 4 trials were considered practice. Responses from the final 6 trials were recorded. Responses were collected for two melodies (“Twinkle, Twinkle, Little Star” and “Old MacDonald Had a Farm”) in each condition. The condition order was fixed in descending order of the noise value (r).
Results
Frequency DL
Mean frequency discrimination thresholds (in semitones) are shown in Figure 2 as a function of the setting of the noise variable r. The measured responses were converted from Hz to semitones to compensate for the different test frequencies. For r = 0, the mean frequency discrimination threshold was 0.06 semitones; for r = 0.5, the threshold was 0.30 semitones; for r = 1.0, the threshold was 1.00 semitones; for r = 1.5, the threshold was 1.75 semitones; and for r = 2.0, the threshold was 2.30 semitones. A two-factor repeated measures analysis of variance (ANOVA) was used to examine the main effects of test frequency and the noise variable r on frequency discrimination. The results revealed no significant main effect of test frequency, F(4, 36) = 0.22, p = .8, power = 0.5. A significant main effect was found for the noise variable r, F(4, 36) = 44.9, p < .001, power = 0.99. A post hoc Fisher’s protected least significant difference (LSD) test revealed no significant difference between the frequency DLs obtained in the r = 0 (average = 0.06 semitone) and r = 0.5 (average = 0.30 semitone) conditions. Frequency DLs increased significantly as the r level was increased to 1.0 (average = 1.0 semitone), 1.5 (average = 1.75 semitones), and 2.0 (average = 2.3 semitones).
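The Hz-to-semitone conversion used to pool thresholds across the 500-, 1000-, and 2000-Hz standards follows from st = 12·log2((f + ΔF)/f); for example, a 0.06-semitone threshold at 1000 Hz corresponds to roughly 3.5 Hz. A minimal sketch:

```python
import math

def hz_to_semitones(f_hz, delta_hz):
    """Express a frequency increment above f_hz in semitones."""
    return 12 * math.log2((f_hz + delta_hz) / f_hz)

def semitones_to_hz(f_hz, st):
    """Frequency increment corresponding to `st` semitones above f_hz."""
    return f_hz * (2 ** (st / 12) - 1)
```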
Figure 2.
Mean frequency discrimination thresholds (in semitones) for 10 normal-hearing listeners as a function of the noise variable (r) setting. Symbols representing the different test frequencies have been offset along the x axis for ease of viewing. Error bars indicate +/− 1 SD from the mean.
Melody Recognition
Figure 3 displays melody recognition scores as a function of the noise variable r. A repeated measures ANOVA revealed a significant effect of the noise variable r on melody recognition. A post hoc Fisher’s protected LSD revealed no significant difference in performance in the r = 0 (average = 99.6%) and r = 0.5 (average = 96.0%) conditions. Significant decreases in performance were observed as the noise variable was increased to r = 1.0 (average = 72.4%), r = 1.5 (average = 56.8%), and r = 2.0 (average = 39.2%).
Figure 3.
Group mean data for normal-hearing participants (n = 10) on melody recognition and musical tuning tasks as a function of the noise variable r. The dashed line represents chance performance for both tasks. Error bars indicate +/− 1 SD from the mean.
Musical Tuning
In Figure 3, musical tuning scores are plotted as a function of the noise variable r. A repeated measures ANOVA revealed a significant main effect of the noise variable r. A post hoc Fisher’s protected LSD revealed a significant drop in performance as the variable was increased from r = 0 (average = 86.7%) to r = 0.5 (average = 51.7%) and to r = 1.0 (average = 25.0%). There was no significant difference in performance between the r = 1.0 and r = 1.5 (average = 24.2%) conditions. Figure 4 displays the distribution of “most correct” (“in tune”) responses as a function of the noise variable r. In each case, the correct scaling factor is 4 mm/octave. A clear preference for the correct scaling is observed in the sine-wave condition (r = 0). A less robust preference for the correct scaling factor is observed at the lowest noise level (r = 0.5), and responses are randomly distributed across all scaling factors at the two higher noise levels (r = 1.0 and 1.5).
Figure 4.
Number of “most correct” responses obtained from normal-hearing participants (n = 10) on the musical tuning task for simple melodies (“Twinkle, Twinkle, Little Star” and “Old MacDonald”) presented using multiple scaling factors at each tested r value. A scaling factor of 4 mm/octave produced the correct musical intervals.
Discussion
In preliminary experiments described in the introduction, we presented simple melodies over different cochlear distances to determine an appropriate frequency-to-electrode map for cochlear implant patients. We found that most patients were unable to produce reliable judgments about musical tuning, despite their overall success with the device (e.g., with monosyllabic word scores of 80%–100% in quiet). In this article, we sought (a) to validate, with normal-hearing listeners, the method we used to determine the appropriate map and (b) to assess whether one factor contributing to the variability in responses was a weak perception of pitch that resulted from broad neural activation patterns associated with electrical stimulation. We have validated our methods by demonstrating that in the pure-tone condition, normal-hearing listeners consistently identified the scaling factor that simulated an appropriate frequency-to-electrode map. We also found that the variability in the response pattern increased as signal bandwidth increased and pitch strength was degraded. Furthermore, we found that the fuzzy tones significantly affected frequency discrimination and melody recognition. The changes in frequency discrimination were not surprising given the many other reports on the effects of band widening on frequency discrimination (e.g., Gagne & Zurek, 1988). However, the outcomes of the melody recognition and musical tuning studies were less predictable, given that studies have shown that listeners can attend to the edge frequency of a noise band to make reliable judgments about pitch changes (e.g., Small & Daniloff, 1967).
Although both melody recognition and musical tuning required the listener to attend to musical intervals, there was a notable difference in the complexity of the two tasks. In the melody recognition task, the listener could use a simple process of elimination to make a judgment—for example, pitch drops at the beginning of “Old MacDonald Had a Farm” and rises at the beginning of “Twinkle, Twinkle, Little Star.” In the musical tuning task, the listener was forced to compare the perceived musical intervals of the melody with the auditory memory of that melody. Thus, melody recognition cues could be obtained with gross resolution of pitch, whereas musical tuning requires fine resolution of pitch (e.g., the interval was nearly a perfect fifth). This difference in task difficulty was reflected in the outcomes of this study.
A critical outcome was that normal-hearing listeners were able to reliably identify the correct map in the musical tuning task only when the stimulus r value was ≤0.5. This r value corresponds to a frequency discrimination threshold of 0.3 semitones or better. In conditions where the r value was ≥1.0 and the frequency discrimination threshold was 1.0 semitone or poorer, listeners appeared to be making random guesses as to the “correct” musical interval, and identification of melodies was impaired. The results of our simulation are consistent with those reported for cochlear implant listeners by Nimmons et al. (2007). They report that “listener[s] with a pitch threshold greater than 1 semitone at any base frequency will exhibit poor melody recognition” (p. 154). The 1 patient in their series who had 1.0-semitone resolution at all base frequencies had a melody recognition score of 81% correct. In our experiment, listeners with 1.0-semitone resolution averaged 72% correct—an 81% correct score is well within the standard deviation of our scores.
In a standard Advanced Bionics cochlear implant, the electrodes are spaced approximately 1 mm apart, and the standard frequency-to-electrode map assigns a 12-semitone range of acoustic input to a 4-mm segment of the electrode array. Thus, 1.0-semitone resolution would require discrimination of 3 pitches between electrodes, and 0.3-semitone resolution would require discrimination of approximately 10 pitches between adjacent electrodes. A recent study by Koch et al. (2007) suggests that the spatial selectivity necessary for semitone resolution or better in a standard frequency-to-electrode map is observed in approximately 50% of the cochlear implant population fit with the Advanced Bionics cochlear implant. However, the spatial selectivity required for 0.3-semitone resolution is observed in a very small portion of the population. In this light, the inability of most cochlear implant listeners to reliably identify a frequency-to-electrode assignment that produced “correct” musical intervals is understandable. Indeed, their pattern of results was very similar to the results obtained from stimuli generated with noise values of 1.0 or 1.5 (see Figure 4), which created weak pitch percepts and poor frequency resolution (1 semitone or poorer).
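The spatial-resolution arithmetic in the preceding sentences, made explicit (electrode spacing and the 4-mm/octave map are as stated above):

```python
semitones_per_octave = 12
mm_per_octave = 4.0          # standard frequency-to-electrode map
mm_between_electrodes = 1.0  # approximate contact spacing

# semitones spanned by the gap between adjacent electrodes
semitones_per_gap = semitones_per_octave * mm_between_electrodes / mm_per_octave
pitches_at_1_0_st = semitones_per_gap / 1.0   # 1.0-semitone resolution
pitches_at_0_3_st = semitones_per_gap / 0.3   # 0.3-semitone resolution
```

Each 1-mm gap spans 3 semitones, so 1.0-semitone resolution implies 3 discriminable pitches per gap and 0.3-semitone resolution implies about 10.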
Music, Speech, and Broad Activation Patterns
Laneau, Wouters, and Moonen (2006) have used a technique similar to the one we used to study music perception with the aid of a cochlear implant. In their study, an exponential decay model was used to mimic current spread in the cochlea. As in the present study, the amount of current spread was set to approximately match the frequency resolution of individual participants. Using this technique, Laneau et al. (2006) were able to partially account for frequency discrimination of harmonic complexes. The present study extends theirs to tasks such as melody recognition and musical tuning (i.e., interval estimation).
As noted in the introduction, Litvak et al. (2007) found that variations in vowel recognition by cochlear implant patients could be modeled with very high accuracy (r = 0.77 to 0.985) by changes in the width of cochlear activation patterns. In a related study, Fu and Nogaki (2005) used shaped noise bands (–24 and –6 dB/octave filter slopes) to assess the effects of channel interactions, or current spread, on speech understanding in noise. They found that the significant release from masking achieved by normal-hearing participants listening to cochlear implant simulations with noise bands created with narrow (–24 dB/octave) filter slopes was reduced or eliminated by simulating channel interaction, or spectral smearing, with wider (–6 dB/octave) noise bands. Given the Litvak et al. (2007) and Fu and Nogaki (2005) outcomes with speech and the outcomes from the present study and Laneau et al. (2006) for music and tone complexes, it seems likely that much of the difficulty experienced by cochlear implant patients, in terms of speech understanding and melody recognition, can be accounted for by models in which pitch strength is degraded using broad acoustic outputs. If our reasoning is correct, then significant efforts should be directed toward the development of a stimulation strategy that reduces the spread of electrical excitation and increases the pitch strength produced by stimulation of individual electrodes. At present, tripolar stimulation (e.g., Spelman, Pfingst, Clopton, Jolly, & Rodenhiser, 1995) appears to have the best chance of reducing the spread of electrical excitation (Bierer & Middlebrooks, 2002; Kral, Hartmann, Mortazavi, & Klinke, 1998) and increasing pitch strength (Marzalek, Dorman, Spahr, & Litvak, 2007).
Acknowledgments
This research was supported by National Institute on Deafness and Other Communication Disorders (NIDCD) Grant R01 DC-000654-14 to MFD and by Advanced Bionics Corporation Grant PRT-0002 to Arizona State University.
Footnotes
Disclosure Statement
This research was supported by Advanced Bionics Corporation (Sylmar, CA) through a sponsored project with Arizona State University. The first author also serves as a research consultant for Advanced Bionics Corporation and received a consulting fee for his work on this project.
Contributor Information
Anthony J. Spahr, Arizona State University, Tempe
Leonid M. Litvak, Advanced Bionics Corporation, Sylmar, CA
Michael F. Dorman, Arizona State University, Tempe
Ashley R. Bohanan, Arizona State University, Tempe
Lakshmi N. Mishra, Advanced Bionics Corporation
References
- Baumann U, Nobbe A. The cochlear implant electrode-pitch function. Hearing Research. 2006;213(1–2):34–42. doi: 10.1016/j.heares.2005.12.010. [DOI] [PubMed] [Google Scholar]
- Bierer JA, Middlebrooks JC. Auditory cortical images of cochlear-implant stimuli: Dependence on electrode configuration. Journal of Neurophysiology. 2002;87:478–492. doi: 10.1152/jn.00212.2001. [DOI] [PubMed] [Google Scholar]
- Blamey PJ, Dooley GJ, Parisi ES, Clark GM. Pitch comparisons of acoustically and electrically evoked auditory sensations. Hearing Research. 1996;99(1–2):139–150. doi: 10.1016/s0378-5955(96)00095-0. [DOI] [PubMed] [Google Scholar]
- Boex C, Baud L, Cosendai G, Sigrist A, Kos MI, Pelizzone M. Acoustic to electric pitch comparisons in cochlear implant subjects with residual hearing. Journal of the Association for Research in Otolaryngology. 2006;7:110–124. doi: 10.1007/s10162-005-0027-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Dorman MF, Ketten D. Adaptation by a cochlear-implant patient to upward shifts in the frequency representation of speech. Ear and Hearing. 2003;24:457–460. doi: 10.1097/01.AUD.0000090438.20404.D9. [DOI] [PubMed] [Google Scholar]
- Dorman MF, Spahr T, Gifford R, Loiselle L, McKarns S, Holden T, et al. An electric frequency-to-place map for a cochlear implant patient with hearing in the nonimplanted ear. Journal of the Association for Research in Otolaryngology. 2007;8:234–240. doi: 10.1007/s10162-007-0071-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Eddington DK. Speech discrimination in deaf subjects with cochlear implants. The Journal of the Acoustical Society of America. 1980;68:885–891. doi: 10.1121/1.384827. [DOI] [PubMed] [Google Scholar]
- Fastl H, Stoll G. Scaling of pitch strength. Hearing Research. 1979;1:293–301. doi: 10.1016/0378-5955(79)90002-9. [DOI] [PubMed] [Google Scholar]
- Fu QJ, Nogaki G. Noise susceptibility of co-chlear implant users: The role of spectral resolution and smearing. Journal of the Association for Research in Otolaryngology. 2005;6:19–27. doi: 10.1007/s10162-004-5024-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Fu QJ, Nogaki G, Galvin JJ, III. Auditory training with spectrally shifted speech: Implications for cochlear implant patient auditory rehabilitation. Journal of the Association for Research in Otolaryngology. 2005;6:180–189. doi: 10.1007/s10162-005-5061-6.
- Fu QJ, Shannon RV. Recognition of spectrally degraded and frequency-shifted vowels in acoustic and electric hearing. The Journal of the Acoustical Society of America. 1999;105:1889–1900. doi: 10.1121/1.426725.
- Gagne JP, Zurek PM. Resonance-frequency discrimination. The Journal of the Acoustical Society of America. 1988;83:2293–2299. doi: 10.1121/1.396360.
- Gfeller K, Christ A, Knutson JF, Witt S, Murray KT, Tyler RS. Musical backgrounds, listening habits, and aesthetic enjoyment of adult cochlear implant recipients. Journal of the American Academy of Audiology. 2000;11:390–406.
- Gfeller K, Lansing CR. Melodic, rhythmic, and timbral perception of adult cochlear implant users. Journal of Speech and Hearing Research. 1991;34:916–920. doi: 10.1044/jshr.3404.916.
- Gfeller K, Woodworth G, Robin DA, Witt S, Knutson JF. Perception of rhythmic and sequential pitch patterns by normally hearing adults and adult cochlear implant users. Ear and Hearing. 1997;18:252–260. doi: 10.1097/00003446-199706000-00008.
- Greenwood DD. A cochlear frequency-position function for several species—29 years later. The Journal of the Acoustical Society of America. 1990;87:2592–2605. doi: 10.1121/1.399052.
- Hartmann W, Johnson D. Stream segregation and peripheral channeling. Music Perception. 1991;9:155–184.
- Henning GB. Frequency discrimination of random-amplitude tones. The Journal of the Acoustical Society of America. 1966;39:336–339. doi: 10.1121/1.1909894.
- Kawano A, Seldon HL, Clark GM. Computer-aided three-dimensional reconstruction in human cochlear maps: Measurement of the lengths of organ of Corti, outer wall, inner wall, and Rosenthal's canal. Annals of Otology, Rhinology & Laryngology. 1996;105:701–709. doi: 10.1177/000348949610500906.
- Koch D, Downing M, Osberger MJ, Litvak L. Using current steering to increase spectral resolution in CII and HiRes 90K users. Ear and Hearing. 2007;28(Suppl):38S–41S. doi: 10.1097/AUD.0b013e31803150de.
- Kong YY, Cruz R, Jones JA, Zeng FG. Music perception with temporal cues in acoustic and electric hearing. Ear and Hearing. 2004;25:173–185. doi: 10.1097/01.aud.0000120365.97792.2f.
- Kral A, Hartmann R, Mortazavi D, Klinke R. Spatial resolution of cochlear implants: The electrical field and excitation of auditory afferents. Hearing Research. 1998;121(1–2):11–28. doi: 10.1016/s0378-5955(98)00061-6.
- Laneau J, Wouters J, Moonen M. Improved music perception with explicit pitch coding in cochlear implants. Audiology and Neurotology. 2006;11(1):38–52. doi: 10.1159/000088853.
- Litvak LM, Spahr AJ, Saoji AA, Fridman GY. Relationship between perception of spectral ripple and speech recognition in cochlear implant and vocoder listeners. The Journal of the Acoustical Society of America. 2007;122:982–991. doi: 10.1121/1.2749413.
- Marzalek M, Dorman M, Spahr A, Litvak L. Effects of multielectrode stimulation on tone perception: Modeling and outcomes. Paper presented at the Conference on Implantable Auditory Prostheses; Tahoe, CA. 2007, June.
- McDermott HJ. Music perception with cochlear implants: A review. Trends in Amplification. 2004;8:49–82. doi: 10.1177/108471380400800203.
- Michaels RM. Frequency difference limens for narrow bands of noise. The Journal of the Acoustical Society of America. 1957;29:520–522. doi: 10.1121/1.1914343.
- Moore BC. Frequency difference limens for short-duration tones. The Journal of the Acoustical Society of America. 1973;54:610–619. doi: 10.1121/1.1913640.
- Moore BCJ. An introduction to the psychology of hearing. 5th ed. Amsterdam and Boston: Academic Press; 2003.
- Nimmons GL, Kang RS, Drennan WR, Longnion J, Ruffin C, Worman T, et al. Clinical assessment of music perception in cochlear implant listeners. Otology & Neurotology. 2007;29:149–155. doi: 10.1097/mao.0b013e31812f7244.
- Rattay F. Electrical nerve stimulation: Theory, experiments, and applications. Vienna, Austria: Springer-Verlag/Wien; 1990.
- Schultz E, Kerber M. Music perception with the MED-EL implants. In: Hochmair-Desoyer I, Hochmair E, editors. Advances in cochlear implants. Vienna, Austria: Manz; 1994. pp. 326–332.
- Skinner MW, Ketten DR, Holden LK, Harding GW, Smith PG, Gates GA, et al. CT-derived estimation of cochlear morphology and electrode array position in relation to word recognition in Nucleus-22 recipients. Journal of the Association for Research in Otolaryngology. 2002;3:332–350. doi: 10.1007/s101620020013.
- Small AM, Jr, Daniloff RG. Pitch of noise bands. The Journal of the Acoustical Society of America. 1967;41:506–512. doi: 10.1121/1.1910361.
- Spahr AJ, Dorman MF. Performance of subjects fit with the Advanced Bionics CII and Nucleus 3G cochlear implant devices. Archives of Otolaryngology–Head and Neck Surgery. 2004;130:624–628. doi: 10.1001/archotol.130.5.624.
- Spahr AJ, Dorman MF, Loiselle LH. Performance of patients using different cochlear implant systems: Effects of input dynamic range. Ear and Hearing. 2007;28:260–275. doi: 10.1097/AUD.0b013e3180312607.
- Spelman FA, Pfingst BE, Clopton BM, Jolly CN, Rodenhiser KL. Effects of electrical current configuration on potential fields in the electrically stimulated cochlea: Field models and measurements. The Annals of Otology, Rhinology & Laryngology. 1995;166(Suppl):131–136.
- Sridhar D, Stakhovskaya O, Leake PA. A frequency-position function for the human cochlear spiral ganglion. Audiology and Neurotology. 2006;11(Suppl 1):16–20. doi: 10.1159/000095609.
- Svirsky MA, Silveira A, Neuburger H, Teoh SW, Suarez H. Long-term auditory adaptation to a modified peripheral frequency map. Acta Oto-Laryngologica. 2004;124:381–386.
- Townshend B, Cotter N, Van Compernolle D, White RL. Pitch perception by cochlear implant subjects. The Journal of the Acoustical Society of America. 1987;82:106–115. doi: 10.1121/1.395554.
- Wilson B, Lawson DT, Zerbi M, Finley C. Speech processors for auditory prostheses. Bethesda, MD: National Institutes of Health; 1992. [First Quarterly Progress Report, NIH Contract N01-DC-2-2401]