Author manuscript; available in PMC: 2016 Jul 1.
Published in final edited form as: Ear Hear. 2015 Jul-Aug;36(4):441–453. doi: 10.1097/AUD.0000000000000144

Relationships Among Peripheral and Central Electrophysiological Measures of Spatial and Spectral Selectivity and Speech Perception in Cochlear Implant Users

Rachel A Scheperle 1, Paul J Abbas 1,2
PMCID: PMC4478147  NIHMSID: NIHMS651408  PMID: 25658746

Abstract

Objectives

The ability to perceive speech is related to the listener’s ability to differentiate among frequencies (i.e., spectral resolution). Cochlear implant (CI) users exhibit variable speech-perception and spectral-resolution abilities, which can be attributed in part to the extent of electrode interactions at the periphery (i.e., spatial selectivity). However, electrophysiological measures of peripheral spatial selectivity have not been found to correlate with speech perception. The purpose of this study was to evaluate auditory processing at the periphery and cortex using both simple and spectrally complex stimuli to better understand the stages of neural processing underlying speech perception. The hypotheses were that (1) by more completely characterizing peripheral excitation patterns than in previous studies, significant correlations with measures of spectral selectivity and speech perception would be observed, (2) adding information about processing at a level central to the auditory nerve would account for additional variability in speech perception, and (3) responses elicited with spectrally complex stimuli would be more strongly correlated with speech perception than responses elicited with spectrally simple stimuli.

Design

Eleven adult CI users participated. Three experimental processor programs (MAPs) were created to vary the likelihood of electrode interactions within each participant. For each MAP, a subset of 7 of 22 intracochlear electrodes was activated: adjacent (MAP 1), every-other (MAP 2), or every third (MAP 3). Peripheral spatial selectivity was assessed using the electrically evoked compound action potential (ECAP) to obtain channel-interaction functions for all activated electrodes (13 functions total). Central processing was assessed by eliciting the auditory change complex (ACC) with both spatial (electrode pairs) and spectral (rippled noise) stimulus changes. Speech-perception measures included vowel-discrimination and the Bamford-Kowal-Bench Sentence-in-Noise (BKB-SIN) test. Spatial and spectral selectivity and speech perception were expected to be poorest with MAP 1 (closest electrode spacing) and best with MAP 3 (widest electrode spacing). Relationships among the electrophysiological and speech-perception measures were evaluated using mixed-model and simple linear regression analyses.

Results

All electrophysiological measures were significantly correlated with each other and with speech perception for the mixed-model analysis, which takes into account multiple measures per person (i.e. experimental MAPs). The ECAP measures were the best predictor of speech perception. In the simple linear regression analysis on MAP 3 data, only the cortical measures were significantly correlated with speech; spectral ACC amplitude was the strongest predictor.

Conclusions

The results suggest that both peripheral and central electrophysiological measures of spatial and spectral selectivity provide valuable information about speech perception. Clinically, it is often desirable to optimize performance for individual CI users. These results suggest that ECAP measures may be the most useful for within-subject applications, when multiple measures are performed to make decisions about processor options. They also suggest that if the goal is to compare performance across individuals based on a single measure, then processing central to the auditory nerve (specifically, cortical measures of discriminability) should be considered.

INTRODUCTION

Cochlear implants (CIs) are routinely recommended for children with severe to profound hearing loss. Post implantation, the audiologist is responsible for optimizing the stimulation provided by the device. Because children are often too young to give detailed or reliable feedback, electrophysiological measures can be useful. The electrically evoked compound action potential (ECAP) is one measure currently used to guide setting the output of each electrode. ECAP thresholds can be used to ensure that stimulation provided by the processor is detectable (Brown et al. 2000; Hughes et al. 2000a, b). However, an important goal of a CI is to provide pediatric recipients with sufficient acoustic detail for speech and language development. Detection of sound is obviously necessary but not sufficient for developing these skills. Spectral, temporal, and amplitude resolution are also necessary for listeners to process the fluctuating spectral content and varying amplitudes of complex speech signals (Shannon, 2002), but neither electrophysiological nor psychophysical measures of these abilities are included in standard clinical protocols for evaluating perception with or programming CIs. Additional tools to evaluate perception and to guide clinical decisions are needed and may be useful even for adults who can participate in speech-perception testing. This study focused on exploring electrophysiological measures that relate to spectral-resolution abilities in CI users.

Spectral resolution, a property of the auditory system involving differentiation among frequency components, can be assessed by evaluating an individual’s ability to discriminate among single frequencies or among complex signals that contain multiple frequency components. The ability to resolve frequency is variable across CI users and is generally poorer than that of hearing-aid users and normal-hearing listeners, even when normal-hearing individuals are tested using simulations of CI processing (e.g. Henry & Turner 2003; Henry et al. 2005). Although a CI system limits spectral resolution by the number of intracochlear electrodes and processor settings, CI users demonstrate performance limitations beyond those attributable to the device (Fishman et al. 1997; Fu et al. 1998; Friesen et al. 2001; Henry & Turner 2003). The number, functionality and location of surviving neurons, the location of the electrodes relative to stimulable neurons, and the impedance pathway for current spread determine the ability of the implant system to transmit spectral components to unique physical locations, and more specifically, to distinct groups of auditory neurons (e.g., Frijns et al. 1995; Cohen 2009; Goldwyn et al. 2010). The extent to which stimulation from each electrode results in a distinct neural excitation pattern is referred to within this document as spatial selectivity. Thus peripheral spatial selectivity, a property of CI stimulation specific to each individual, can limit spectral resolution.

Because spectral resolution is correlated with speech perception (Henry & Turner 2003; Henry et al. 2005; Litvak et al. 2007; Won et al. 2007; Berenstein et al. 2008; Saoji et al. 2009; Anderson et al. 2011; Spahr et al. 2011; Won et al. 2011b), and because peripheral spatial selectivity underlies spectral resolution (e.g. Anderson et al. 2011; Jones et al. 2013), a relationship between spatial selectivity and speech perception is expected. Litvak and colleagues (2007) provided indirect evidence of this relationship by changing the slope of vocoder bandpass filters to simulate varying degrees of peripheral spatial selectivity in normal-hearing individuals. The normal-hearing listeners’ performance on spectral resolution measures and speech perception (particularly vowel perception) tended to mimic the performance observed across CI users, suggesting that peripheral spatial selectivity accounts for some of the variability in performance observed across CI users. However, the direct relationship between a CI user’s spatial selectivity and speech perception remains unclear. A number of studies have demonstrated significant correlations between psychophysical measures of spatial selectivity and speech perception (e.g. Nelson et al. 1995; Collins et al. 1997; Throckmorton & Collins 1999; Henry et al. 2000; Boex et al. 2003; Jones et al. 2013), but others have not (Zwolan et al. 1997; Hughes & Abbas 2006a; Stickney et al. 2006; Hughes & Stille 2008; Anderson et al. 2011; Nelson et al. 2011; Azadpour & McKay 2012). Electrophysiological measures of peripheral spatial selectivity (i.e., ECAP channel-interaction functions) have not been found to correlate with speech perception (Cohen et al. 2003; Hughes & Abbas 2006a; Hughes & Stille 2008; Tang et al. 2011; van der Beek et al. 2012).

There are many potential limitations when attempting to use measures of peripheral spatial selectivity, specifically ECAP channel-interaction functions, to predict speech perception in CI users. The primary goal of this study was to reexamine the relationship between electrophysiological measures of spatial selectivity and speech perception by addressing a number of these limitations.

One limitation of previous studies is that ECAP channel-interaction functions were measured at few cochlear locations (apical, middle, basal electrodes: Cohen et al. 2003; Hughes & Stille 2008; Tang et al. 2011; van der Beek et al. 2012). Sparse sampling will not capture the likely variability of spatial selectivity along the length of the electrode array, which presumably is relevant for the processing of spectrally complex speech signals. Several psychophysical studies have used more than the typical three electrodes used in electrophysiological studies to characterize interactions and have observed significant correlations with speech perception (e.g. Nelson et al. 1995; Throckmorton & Collins 1997; Henry et al. 2000; Jones et al. 2013). In this study, ECAP channel-interaction functions were obtained for all of the electrodes activated during speech-perception testing in order to characterize the spatial selectivity of the periphery as fully as possible. Electrophysiological measures were used because, in addition to being objective, they can be performed in a fraction of the time necessary for equivalent psychophysical measures.

A second limitation addressed here is that ECAP channel-interaction functions typically are quantified individually, and in terms of width or breadth. Broad interaction functions presumably reflect poor spatial selectivity. However, broad stimulation patterns can result from many factors, such as monopolar stimulation and electrodes far from the modiolus (psychophysical: Cohen et al. 2006; Nelson et al. 2008; Zhu et al. 2012 but see Cohen et al. 2005; electrophysiological: Cohen et al. 2003; Hughes & Abbas 2006a,b; Zhu et al. 2012), which are not necessarily associated with poorer speech perception (Pfingst et al. 2001; Hughes & Abbas 2006a; Berenstein et al. 2008). Additionally, ECAP channel-interaction functions are by necessity elicited at suprathreshold current levels, often above those used in the clinical program, or MAP. There is a tendency for channel-interaction functions to broaden with stimulus level, at least in some individuals and for some probe electrodes (e.g. Abbas et al. 2004; Eisen & Franck 2005; Hughes & Stille 2010); however, speech-perception scores do not decrease with increasing stimulation level (Firszt et al. 2004). Even in the normal auditory system, tonal response patterns broaden with level (e.g. Gorga et al. 2011), and yet speech-perception abilities do not degrade. Although electrodes that stimulate completely overlapping neural populations presumably offer no additional benefit over a single-channel CI, some spatial overlap might provide the central nervous system with the redundancy needed to further process the complex signal (Kiang & Moxon 1972).

In this study the ECAP channel-separation index was used to quantify peripheral spatial selectivity (Hughes 2008). This index is sensitive to differences in locations, magnitude, and shapes of two channel-interaction functions, and provides a means for quantifying the non-overlapping excitation areas resulting from two places of stimulation (i.e. probe electrodes). The channel-separation index was found to correlate significantly with pitch ranking (Hughes 2008) when channel-interaction function width did not (Hughes & Abbas 2006a). In the companion study, significant within-subject correlations between the channel-separation index and objective measures of electrode discrimination (spatial ACC amplitude) were observed (Scheperle & Abbas submitted). These findings suggest that although the breadth of a single channel-interaction function might not be meaningful in isolation, it might be meaningful when considered relative to the excitation patterns resulting from neighboring electrodes.

A third limitation of ECAP measures is that peripheral processing may not be sufficient to account for variable speech-perception abilities observed across CI users. The companion paper demonstrated that central processing, characterized by cortical electrophysiological response amplitudes, varied across adult CI users, and that the cross-subject differences could not be explained entirely by differences in peripheral input (Scheperle & Abbas submitted). The same cortical response, namely the auditory change complex (ACC), was included as an outcome measure in this study as well. The ACC is elicited by changing a parameter in an ongoing stimulus (Ostroff et al. 1998; Brown et al. 2008), and although these cortical responses are preattentive and do not reflect cognition or executive function, they do offer insight into intermediate stages of processing between the auditory nerve and speech perception, specifically related to discrimination (e.g. Jerger & Jerger 1970; Won et al. 2011a). For the companion study, the ACC was evoked by changing the stimulation site from one electrode to another. The response was considered a central measure of spatial selectivity. The spatial ACC data from the companion study were included in this study to evaluate the potential benefits of considering processing central to the auditory nerve for explaining variable speech-perception abilities.

A fourth limitation of ECAP channel-interaction functions is that they are obtained using relatively simple stimuli: pulses from electrode pairs presented at low stimulation rates. Another benefit of the ACC is that the response can be evoked with stimuli ranging in complexity. Examining the peripheral neural response to electrode pairs ignores the interaction that likely occurs across the electrode array when more complex stimuli (such as speech) are used and multiple electrodes are stimulated. Thus, the ACC was elicited by changing the frequency location of spectral peaks within a complex, vowel-like, rippled-noise stimulus (similar to Won et al. 2011a). This stimulus results in more complex excitation patterns at the periphery than electrode pairs stimulated sequentially using low-rate (ECAP), or high-rate (spatial ACC) pulse trains. Although the physiological contrast depends upon underlying spatial selectivity, this ACC response was considered a central measure of spectral selectivity.

In summary, three electrophysiological measures were included in this study: ECAP channel-interaction functions, spatial ACCs (evoked with sequentially stimulated electrode pairs), and spectral ACCs (evoked with rippled-noise stimuli). These three measures allowed us (1) to evaluate the relationship between the processing of simple (ECAP and spatial ACC) and complex (spectral ACC) stimulation patterns and (2) to determine how well the three electrophysiological measures predict speech perception (vowels and words within a noise background). The hypotheses were as follows:

  1. ECAP channel-interaction functions are predictive of spectral selectivity and speech perception. Although previous studies have not provided evidence in support of this hypothesis, the present study examined spatial selectivity for a greater number of electrode sites and used different methods to quantify the ECAP data.

  2. Considering neural responses at a level central to the auditory nerve can improve the predictions of the processing and perception of more complex signals.

  3. Electrophysiological measures of spectral resolution are predictive of speech perception. A strong correlation between behavioral measures of spectral resolution and speech perception has been demonstrated consistently (e.g. Henry & Turner 2003; Henry et al. 2005; Litvak et al. 2007; Won et al. 2007; Berenstein et al. 2008; Saoji et al. 2009; Anderson et al. 2011; Spahr et al. 2011; Won et al. 2011b), and electrophysiological and behavioral measures of spectral ripple discrimination are correlated (Won et al. 2011a; Lopez Valdes et al. 2014).

The results of this study improve our understanding of auditory neural processing of simple and complex stimuli at peripheral and cortical levels and how this processing is related to speech perception in CI recipients.

MATERIALS AND METHODS

Participants

The eleven peri- or post-lingually deafened adult CI users who participated in the companion study (Scheperle & Abbas submitted) also participated in this study. Briefly, participants were native English speakers ranging in age from 27 to 86 years and had at least one year of experience with a Nucleus CI24RE or CI512 internal device in the test ear (Cochlear Ltd., Macquarie, NSW, Australia). The companion paper contains additional demographic details. This study was approved by the Institutional Review Board of the University of Iowa.

General Experiment Overview

The general design of this study was modeled after that of Won and colleagues (2011c) and involved manipulating the CI processor settings, or MAPs, to effectively change spatial/spectral selectivity within individual CI users. Three experimental MAPs were created, each using 7 of the available 22 intracochlear electrodes. Activated electrodes were adjacent (MAP 1), every-other (MAP 2), or every third electrode (MAP 3; Table 1). Thus, the potential for interaction among stimulated electrodes was greatest with MAP 1 and least with MAP 3. A total of 13 electrodes (3, 6, 8, 9, 10, 11, 12, 13, 14, 15, 16, 18, 21) were activated across the experimental programs. Measures of spectral selectivity (spectral ACC) and speech perception (vowel discrimination and sentences within background babble) were repeated for each participant/MAP. In contrast to Won and colleagues (2011c), this study also included measures of spatial selectivity for each participant/MAP, specifically, the peripheral (ECAP) and central (ACC) measures of spatial selectivity reported in the companion paper (Scheperle & Abbas submitted).

Table 1.

Activated electrodes and frequency allocation for the experimental MAPs.

Channel (basal → apical)   1          2          3          4          5         6        7
MAP 1 electrode            9          10         11         12         13        14       15
MAP 2 electrode            6          8          10         12         14        16       18
MAP 3 electrode            3          6          9          12         15        18       21
Frequency range (Hz)       3813–5600  2563–3813  1688–2563  1188–1688  813–1188  563–813  350–563

The activated electrodes for F19R were adjusted to avoid those associated with small-amplitude or immeasurable ECAPs. MAP 1: electrodes 10, 11, 12, 13, 14, 15, 16; MAP 2: electrodes 7, 9, 11, 13, 15, 17, 19; MAP 3: electrodes 2, 7, 10, 13, 16, 19, 22. Spatial selectivity was evaluated for this modified set of electrodes (Scheperle & Abbas submitted).
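As a side note for readers reproducing the design, the activated electrodes in Table 1 are symmetric about electrode 12, with the inter-electrode spacing equal to the MAP number. A minimal sketch (the function name is illustrative):

```python
# Each experimental MAP activates 7 of the 22 intracochlear electrodes,
# centered on electrode 12 with spacing 1 (MAP 1), 2 (MAP 2), or 3 (MAP 3).
def map_electrodes(spacing):
    """Return the 7 activated electrodes for a given inter-electrode spacing."""
    return [12 + spacing * k for k in range(-3, 4)]

print(map_electrodes(1))  # MAP 1: [9, 10, 11, 12, 13, 14, 15]
print(map_electrodes(2))  # MAP 2: [6, 8, 10, 12, 14, 16, 18]
print(map_electrodes(3))  # MAP 3: [3, 6, 9, 12, 15, 18, 21]
```

The union of the three subsets gives the 13 activated electrodes listed in the text; F19R's adjusted subsets do not follow this pattern.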

Peripheral Spatial Selectivity: ECAP Channel-Separation Index

ECAP channel-interaction functions were obtained for all 13 activated electrodes using standard forward-masking stimulation, recording, and artifact subtraction procedures (Cohen et al., 2003; Abbas et al., 2004). Thus, peripheral spatial selectivity was described as completely as possible for each listening condition in this study. The ECAP channel-interaction functions associated with the seven activated electrodes in each MAP are shown for one participant (E55R) in the left column of Figure 1. The functions overlap the most for MAP 1 (top panel), and the differences across functions are relatively small. As the activated electrodes are spaced farther apart (MAPs 2 and 3), the amount of overlap decreases.

Figure 1.

Quantifying peripheral and central measures of spatial selectivity for comparison with spectral resolution and speech perception: E55R. The ECAP channel-interaction functions associated with the 7 activated electrodes in each experimental MAP are shown in the left panels. Probe electrodes are indicated by dotted vertical lines. Channel-separation indices calculated for the 6 pairs of adjacent activated electrodes in each MAP, and the mean across electrode pairs are shown in the top, right panel. The predicted spatial ACC amplitude associated with each channel-separation index was calculated using the individually optimized coefficients relating peripheral and central spatial selectivity (Scheperle & Abbas submitted). The predictions for E55R are shown in the lower, right panel.

For the companion study, ECAP channel-separation indices (i.e. average amplitude differences) were calculated using all channel-interaction functions paired with that of probe electrode 12. For this study, ECAP channel-separation indices were calculated for each of the six pairs of adjacent electrodes activated within the experimental MAPs (Table 1; Figure 1: top, right panel). As expected, channel-separation indices tended to be smallest for MAP 1 and largest for MAP 3. This was generally true of the MAP means for all participants, though not necessarily for all electrode pairs. Box-and-whisker plots of the group data are provided in Supplemental Digital Content 1 (top panel).
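As a concrete illustration, a channel-separation index described as an average amplitude difference between two channel-interaction functions (Hughes 2008) might be computed as below; the peak normalization and function name are illustrative assumptions, not the exact procedure used in this study:

```python
import numpy as np

def channel_separation_index(cif_a, cif_b):
    """Average absolute amplitude difference between two ECAP
    channel-interaction functions measured over the same masker electrodes,
    each normalized to its own peak so the index reflects the location and
    shape of the excitation pattern rather than overall response magnitude."""
    a = np.asarray(cif_a, dtype=float)
    b = np.asarray(cif_b, dtype=float)
    a, b = a / a.max(), b / b.max()
    return float(np.mean(np.abs(a - b)))

# Fully overlapping excitation patterns yield an index of 0;
# spatially distinct patterns yield larger indices.
overlap = channel_separation_index([1, 2, 3, 2, 1], [1, 2, 3, 2, 1])
separate = channel_separation_index([3, 2, 1, 0.5, 0.2], [0.2, 0.5, 1, 2, 3])
```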

Central Spatial Selectivity: Spatial ACC

For the companion study, the ACC was elicited using thirteen pairs of sequentially stimulated electrodes; electrode 12 was always one of the two electrodes stimulated. The relationship between the thirteen ECAP channel-separation indices (x) and ACC amplitudes (y) for corresponding electrode pairs was characterized for each participant using a saturating exponential: y = a * (1 − e^(bx)). Both the a and b coefficients were allowed to vary to individually optimize the fit (Scheperle & Abbas submitted). An example of one individual’s spatial selectivity data set and the fitted function is provided in Figure 2.

Figure 2.

Scatterplot of thirteen measured spatial ACC amplitudes and ECAP channel-separation indices for E60 from the companion paper (Scheperle & Abbas, submitted). The exponential fit (line) was used to predict the spatial ACC amplitudes associated with the ECAP channel-separation indices calculated for the adjacent activated electrodes in each experimental MAP. The dotted vertical line marks the ECAP channel-separation index calculated for electrode pair 6–8 for this participant. The dotted horizontal line marks the spatial ACC amplitude predicted from the exponential fit.

It was not practically feasible to elicit the ACC using all electrode pairs of interest for this study. Instead, we used the subject-specific coefficients calculated for the set of electrodes used in the companion study to predict the ACC amplitudes for the pairs of adjacent activated electrodes in each experimental MAP. For the example participant in Figure 2, the ECAP channel-separation index for electrode pair 6–8 is 0.14 (dotted vertical line), and the associated ACC amplitude is estimated to be 5.80 μV (dotted horizontal line). The predicted spatial ACC amplitudes for all MAP electrode pairs for the participant whose ECAP data are shown in Figure 1 are displayed in the lower right panel of that figure. Similar to the ECAP channel-separation index, predicted spatial ACC amplitudes tended to be smallest for MAP 1 electrode pairs and largest for MAP 3 electrode pairs. Group data are shown in the bottom panel of the figure in Supplemental Digital Content 1.
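The fit-and-predict step can be sketched with SciPy using the saturating exponential from the companion study, y = a * (1 − e^(bx)); the data below are synthetic, not a participant's actual measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def saturating_exp(x, a, b):
    # Spatial ACC amplitude (uV) as a function of the ECAP
    # channel-separation index; b < 0 produces the saturating shape.
    return a * (1.0 - np.exp(b * x))

# Thirteen synthetic (index, amplitude) pairs from a known curve plus noise
rng = np.random.default_rng(0)
x = np.linspace(0.0, 0.5, 13)
y = saturating_exp(x, 8.0, -6.0) + rng.normal(0.0, 0.2, x.size)

# Individually optimized coefficients (cf. the per-subject fits of
# Scheperle & Abbas)
(a_fit, b_fit), _ = curve_fit(saturating_exp, x, y, p0=(5.0, -1.0))

# Predict the spatial ACC amplitude for a new electrode pair's
# channel-separation index, e.g., 0.14 as for pair 6-8 in Figure 2
predicted = saturating_exp(0.14, a_fit, b_fit)
```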

Processor Settings for Spectral-Selectivity and Speech-Perception Measures

As described in the companion paper, 500-ms pulse trains (monopolar mode: MP1; 900-pps rate) were used to determine threshold (T) and uncomfortable (U) levels for all 22 electrodes. For this study, initial comfort (C) levels were set 15–20 CL lower than loudness-balanced U-levels. The 7 electrodes for a given experimental MAP were activated together, and C levels were globally adjusted until overall level was considered “most comfortable”. Loudness was balanced across the three MAPs, and global adjustments were made as needed. Thus, the U-level contour (used for spatial-selectivity measures) was reflected in the C levels of experimental MAPs used for spectral resolution and speech perception measures.

The three experimental MAPs were programmed with identical processing features: monopolar (MP1) stimulation mode, 900-pps rate, 25-μsec/phase pulse width, and 8-μsec interphase gap. Continuous Interleaved Sampling (CIS) was simulated using the Advanced Combination Encoder (ACE) strategy with seven maxima. Pre-processing strategies were disabled. The overall bandwidth was 350–5600 Hz (default: 188–7983 Hz) specifically for the spectral-selectivity measures. The bandwidth of each “nth” channel was identical across all three MAPs (Table 1). For example, the most basal electrode activated in each MAP was programmed to transmit the envelope associated with 3813–5600 Hz.

Rippled-noise and speech stimuli were presented via the direct audio input port on the CI processor. Rippled-noise and vowel stimuli were presented at a level equivalent to 55 or 60 dBA; the Bamford-Kowal-Bench Speech-in-Noise test (BKB-SIN; Etymōtic Research, 2005) was administered at a 65 dBA equivalent level.

Spectral Selectivity: Spectral ACC

Rippled-noise stimuli (350–5600 Hz bandwidth, 0.5 ripples per octave (rpo), and 40-dB depth) were created in MATLAB (MathWorks, Natick, MA, 2012a; Litvak et al. 2007). To elicit the ACC, the frequency location of the peaks and troughs was reversed 400 ms after stimulus onset. The second 400 ms was scaled to have a rms amplitude equal to that of the first 400 ms. The full 800-ms stimuli were gated on and off with a Tukey window (20-ms ramps, 760-ms plateau). A bandpass filter (350–5600 Hz) was applied to remove spectral splatter at the transition between the two stimulus halves.
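The stimulus construction described above can be sketched as follows (the original stimuli were generated in MATLAB; this Python version, including the number of tones and the log-frequency ripple implementation, is an illustrative approximation under assumed parameters such as the sampling rate):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt
from scipy.signal.windows import tukey

fs = 44100                       # assumed sampling rate (Hz)
f_lo, f_hi = 350.0, 5600.0       # stimulus bandwidth (Hz)
rpo, depth_db = 0.5, 40.0        # ripple density (ripples/octave) and depth

def ripple_half(dur, phase, n_tones=200, seed=1):
    """One 400-ms rippled-noise segment: random-phase tones whose amplitudes
    follow a sinusoidal envelope on a log-frequency (octave) axis."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur * fs)) / fs
    freqs = np.logspace(np.log10(f_lo), np.log10(f_hi), n_tones)
    octaves = np.log2(freqs / f_lo)
    env_db = (depth_db / 2.0) * np.sin(2 * np.pi * rpo * octaves + phase)
    amps = 10.0 ** (env_db / 20.0)
    phis = rng.uniform(0.0, 2 * np.pi, n_tones)
    tones = amps[:, None] * np.sin(2 * np.pi * freqs[:, None] * t + phis[:, None])
    return tones.sum(axis=0)

first = ripple_half(0.4, phase=0.0)
second = ripple_half(0.4, phase=np.pi)   # peaks and troughs reversed
second *= np.sqrt(np.mean(first**2) / np.mean(second**2))  # equate RMS
stim = np.concatenate([first, second])
stim *= tukey(stim.size, alpha=2 * 0.020 / 0.800)  # 20-ms ramps, 760-ms plateau
sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
stim = sosfiltfilt(sos, stim)   # remove splatter at the mid-stimulus change
```

Reusing the same random phases for both halves keeps the mid-stimulus change purely spectral (the envelope phase inversion), as intended for eliciting the ACC.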

MATLAB was used to present stimuli to the processor at a rate of approximately one per three seconds. The recording procedures were identical to those described in the companion paper for the spatial ACC measures. Briefly, disposable surface electrodes were placed at the vertex (+), contralateral mastoid (−) and inion (−), allowing two differential recordings to be obtained simultaneously. Electrode impedances were below 5 kΩ and matched within 2 kΩ. Electroencephalographic (EEG) signals were filtered (1–30 Hz) and amplified (gain of 10,000; OptiAmp 1.10, Intelligent Hearing Systems), digitized (sampling rate of 25,000 Hz; National Instruments DAQ soundcard), and averaged and stored for offline analysis (100 sweeps; LabVIEW, National Instruments, 2009). A minimum of two sets of 100 sweeps were recorded for each MAP. The test order of the experimental MAPs was randomized.

Device-related artifact was reduced primarily by the recording procedures. Matching the impedance of the surface electrodes and the high-pass edge frequency of the filter has been shown to reduce low-frequency artifact. The low-pass edge frequency of the filter reduced the high-frequency artifact (McLaughlin et al. 2013a). Averaged waveforms were filtered offline to further minimize the effects of CI artifact and biological noise on the recorded neural potential. Example waveforms are displayed in Figure 3. The waveforms obtained under different listening conditions are shifted vertically: MAP 3 (top) to MAP 1 (bottom). Vertical lines at 0.0 and 0.4 sec mark the stimulus onset and the stimulus change (spectral envelope phase inversion), respectively. Negative (N1) and positive (P2) peaks of onset and change responses are indicated with crosses. For this individual, the most robust response was obtained with MAP 3; the smallest response was obtained with MAP 1. This pattern was observed for most participants.

Figure 3.

Example cortical waveforms elicited with spectral rippled noise: E40R. Waveforms obtained with the different experimental MAPs are offset vertically: MAP 3 (top) to MAP 1 (bottom). A minimum of 200 sweeps were averaged for each trace. Vertical lines at 0.0 and 0.4 seconds indicate stimulus onset and change (i.e., phase inversion), respectively. Onset and change N1 and P2 peaks are indicated by the plus signs.

Speech Perception

Vowel discrimination was assessed within a /h/-vowel-/d/ context using a 10-alternative forced-choice procedure executed with modified PSYLAB scripts (version 2.4; Hansen, 2012) in MATLAB. The included words were had, hayed, head, heard, heed, hid, hoed, hood, hud, and who’d. Ten tokens of each word were selected from ten female speakers (Hillenbrand et al. 1995). The experimental MAPs were tested in a random order following a brief (~5 min) familiarization period for each program. For testing, each vowel token was presented once without replacement in a random order. Participants responded using a touch screen or mouse. The percent correct across all words was used for analysis.

The BKB-SIN test was administered following the vowel-discrimination test before switching to a new listening condition (MAP). The level of the target was held constant, and the level of the four-talker babble increased in 3-dB steps. The signal-to-noise ratio (SNR) started at +21 and ended at 0 dB. Listeners were asked to repeat each sentence and were scored using the number of key words (3–4) correct. This number was used to calculate the SNR for 50% correct (SNR-50). For testing, a list pair (two lists of eight sentences) was chosen randomly from lists 9–18 (recommended for CI users). Unused lists were used to familiarize participants with the procedures.

Statistical Analysis

Linear mixed-model regression analysis was used to evaluate (1) whether spatial selectivity (ECAP channel-separation index or predicted spatial ACC amplitude) was predictive of spectral resolution (spectral ACC amplitude) and (2) whether the three electrophysiological measures (ECAP channel-separation index, predicted spatial ACC, spectral ACC) were predictive of vowel discrimination and speech perception in noise. Akaike’s Information Criterion (AIC) was used to compare the different models (Kutner et al. 2004); lower scores indicate better fits. Simple linear regression analysis was performed using the data obtained with one program (MAP 3), so that we could evaluate the usefulness of the electrophysiological responses for cross-subject predictions when only a single measure is available.
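For the single-program (MAP 3) analysis, the simple linear regression and AIC comparison can be illustrated with a minimal NumPy sketch on synthetic data (AIC = 2k − 2 ln L with a Gaussian likelihood; the variable names and data are illustrative, not the study's measurements):

```python
import numpy as np

def ols_aic(x, y):
    """Fit y = b0 + b1*x by ordinary least squares and return (b0, b1, AIC),
    with AIC based on the Gaussian log-likelihood and ML variance estimate."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n = len(y)
    sigma2 = np.mean(resid**2)                       # ML variance estimate
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    k = 3                                            # b0, b1, and sigma^2
    return beta[0], beta[1], 2 * k - 2 * loglik

# Synthetic data for 11 participants: an outcome strongly related to one
# predictor versus an unrelated predictor; lower AIC indicates a better fit.
rng = np.random.default_rng(2)
ecap = rng.uniform(0.0, 0.5, 11)               # e.g., channel-separation index
acc = 10.0 * ecap + rng.normal(0.0, 0.3, 11)   # e.g., spectral ACC amplitude
unrelated = rng.uniform(0.0, 0.5, 11)
b0, b1, aic_good = ols_aic(ecap, acc)
_, _, aic_bad = ols_aic(unrelated, acc)
```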

A basic assumption influencing the interpretation of the results is that central measures, such as the ACC, reflect cumulative processing from the periphery to the cortex. Thus, any difference in the analysis using the ECAP channel-separation index (a measure that reflects peripheral processing) compared to the predicted spatial ACC (a measure that reflects both peripheral and central processing) was attributed to processing central to the auditory nerve. Any difference in the analysis using the spatial ACC compared to the spectral ACC (both measures reflect peripheral and central processing) was attributed to stimulus complexity.

RESULTS

Relating Spatial and Spectral Selectivity

Two questions of interest were (1) is spatial selectivity, measured with simple stimulation, predictive of the resolution of spectrally complex stimuli and (2) does including information about central processing improve the prediction? The spatial selectivity data for each MAP were reduced to the maximum ECAP channel-separation index and maximum predicted spatial ACC amplitude across adjacent activated electrodes. The decision to use the maximum was based on an expectation that a single electrode pair (limited cochlear region) might provide sufficient information for a discrimination task. Because the cochlear place with the best spatial selectivity might not align with the cochlear place associated with the greatest difference in stimulation resulting from the spectral phase inversion, the analysis was also performed by reducing the spatial selectivity data to the mean across adjacent electrode pairs for each MAP (see the table in Supplemental Digital Content 2). The general conclusions do not change.

Figure 4 displays spectral ACC amplitudes as a function of the two measures of spatial selectivity: ECAP channel-separation index (left) and predicted spatial ACC amplitude (right). There are 33 data points in each panel: 11 data points each for MAP 1 (white squares), MAP 2 (gray triangles), and MAP 3 (black circles). Thin gray lines connect the data belonging to each participant. MAP 3 data points for E55R were identified as outliers (studentized residual > 2.0) and are marked with asterisks. Dashed lines are the linear fits for the mixed-model analysis, which included all 33 data points. The solid lines are the linear fits to MAP 3 data (excluding the outlier). Regression-line slopes are positive, indicating that better spatial selectivity is associated with better spectral resolution.

Figure 4.

Relationship between spatial selectivity and spectral resolution. Spectral ACC amplitude is plotted as a function of both measures of spatial selectivity: maximum ECAP channel-separation index (left) and maximum predicted spatial ACC amplitude (right). Both panels contain 33 data points: MAP 1 (white squares), MAP 2 (gray triangles), MAP 3 (black circles). Outliers are marked with asterisks (MAP 3 data for E55R). The dashed lines are the linear regression fits for the mixed-model analyses, and the solid lines are the linear regression fits for the analyses using only MAP 3 data points.

Results of the mixed-model analyses (Table 2A) reveal that both the ECAP channel-separation index and predicted spatial ACC amplitude were significant predictors of spectral ACC amplitude (p<0.0001). The smaller AIC value associated with the ECAP channel-separation index indicates that it was a better predictor of spectral ACC amplitude than the predicted spatial ACC amplitude. These results suggest that the information about processing central to the auditory nerve does not improve the predictions of spectral selectivity when multiple sets of measures are available for a given individual. Another explanation is that predicting (rather than measuring) the spatial ACC for the electrode pairs of interest introduced noise and degraded the correlation. In light of the results of the regression analysis on MAP 3 data (next section), the latter explanation is unlikely to be the primary factor.

Table 2.

Peripheral and central spatial selectivity as predictors of spectral resolution.

A. Mixed-Model Regression Analysis
Predictor Variables Intercept (p-value) Slope (p-value) AIC
ECAP Channel-Separation Index −0.54 (0.3284) 27.27 (<0.0001) 106.0
Predicted Spatial ACC Amplitude 0.21 (0.7058) 0.7932 (<0.0001) 119.8
B. Simple Linear Regression Analysis: MAP 3
Predictor Variables Intercept (p-value) Slope (p-value) r2
ECAP Channel-Separation Index −0.37 (0.8240) 25.53 (0.0197) 0.51
Predicted Spatial ACC Amplitude 0.30 (0.6161) 1.05 (<0.0001) 0.87

Note: MAP 3 data for E55R were identified as outliers and were excluded from the linear regression analyses on MAP 3 data.

Akaike’s Information Criterion (AIC); Auditory Change Complex (ACC); Electrically Evoked Compound Action Potential (ECAP)

Results of the simple linear regression analyses for MAP 3 data (excluding E55R; Table 2B) reveal that both measures of spatial selectivity were significant predictors of spectral ACC amplitude (p<0.02). In contrast with the results of the mixed-model analysis, these results indicate that predicted spatial ACC amplitude is a better predictor of spectral ACC amplitude than the ECAP channel-separation index (r2=0.87 compared to r2=0.51). Although information about the peripheral neural excitation pattern is informative, these results suggest that when the goal is to make predictions about differences across participants based on a single observation per individual, it is beneficial to include a measure that also reflects processing central to the auditory nerve.

Speech Perception

The ultimate goal of this study was to examine the usefulness of electrophysiological measures to predict speech perception. The questions of interest were (1) are any of the electrophysiological measures predictive of speech perception, and (2) is any one electrophysiological measure a better predictor than the others? For these analyses, we reduced the spatial selectivity measures for each MAP to the mean ECAP channel-separation index and mean predicted spatial ACC amplitude. The decision to use the mean was based on the rationale that access to information across the cochlea is important for discriminating among and identifying speech sounds.

Figure 5 relates the speech scores (top row: vowel discrimination; bottom row: BKB-SIN) with each of the electrophysiological measures (left column: ECAP channel-separation index, middle column: predicted spatial ACC amplitude, right column: spectral ACC amplitude). The color scheme, symbols, and lines are the same as in Figure 4. MAP 3 data for participant F26L were identified as outliers (studentized residual > 2.0) for the vowel regression analyses and are marked with asterisks.

Figure 5.

Electrophysiological measures as predictors of speech perception. Top row: Vowel perception (percent correct). Chance performance (10%) is indicated by a horizontal, dotted line in each panel. Bottom row: The signal-to-noise ratio required for 50% performance on the BKB-SIN test (SNR-50). Speech results are plotted for each of the three electrophysiological measures: mean ECAP channel-separation index (left), mean predicted spatial ACC amplitude (middle) and spectral ACC amplitude (right). The symbol/color scheme used in Figure 4 is also used here. Outliers are marked with asterisks (MAP 3 vowel data for F26L).

Regression-line slopes are positive for the vowel perception data and negative for the BKB-SIN test. Because the BKB-SIN test is scored as a signal-to-noise ratio for 50% correct performance, low scores indicate better performance. Thus, the interpretation is the same for both speech perception measures: better spatial/spectral selectivity is associated with better performance on speech measures.

Vowel Discrimination

Average vowel scores ranged from 17–85% across participants and MAPs, which was between chance (10%; horizontal, dotted lines in the top row of Fig. 5) and ceiling (100%) performance. Results of the mixed-model analyses (Table 3A, all 33 data points) reveal that all three electrophysiological measures were significantly predictive of vowel perception (p<0.005). These results suggest that improving spatial or spectral selectivity within an individual would be associated with an improvement in his/her vowel-perception abilities. The ECAP channel-separation index was not only a significant predictor (p<0.0001), but was the best predictor (smallest AIC). Results of the simple linear regression analyses for MAP 3 data (excluding F26L; Table 3B) reveal that both predicted spatial ACC and spectral ACC amplitudes were significantly predictive of vowel scores (p<0.04). Spectral ACC amplitude was the best predictor (r2=0.65 compared to r2=0.45). The ECAP channel-separation index was not a significant predictor when only a single set of measures (those for MAP 3) was considered for each individual.

Table 3.

Electrophysiological measures as predictors of vowel discrimination.

A. Mixed-Model Regression Analysis
Predictor Variables Intercept (p-value) Slope (p-value) AIC
ECAP Channel-Separation Index 3.95 (0.4933) 434.11 (<0.0001) 257.2
Predicted Spatial ACC Amplitude 19.88 (0.0232) 7.69 (0.0033) 280.2
Spectral ACC Amplitude 20.17 (0.0008) 7.48 (<0.0001) 262.0
B. Simple Linear Regression Analysis: MAP 3
Predictor Variables Intercept (p-value) Slope (p-value) r2
ECAP Channel-Separation Index 27.22 (0.2190) 242.52 (0.1890) 0.20
Predicted Spatial ACC Amplitude 36.33 (0.0023) 5.93 (0.033) 0.45
Spectral ACC Amplitude 33.63 (0.0007) 5.03 (0.005) 0.65

Note: MAP 3 data for F26L were identified as outliers and were excluded from the linear regression analyses on MAP 3 data.

Akaike’s Information Criterion (AIC); Auditory Change Complex (ACC); Electrically Evoked Compound Action Potential (ECAP)

BKB-SIN

Results of the mixed-model analyses are provided in Table 4A. Consistent with the vowel-discrimination results, all three electrophysiological measures were significant predictors of word recognition in noise (p<0.0002) and the ECAP channel-separation index was the best predictor (smallest AIC). The simple linear regression results for MAP 3 are provided in Table 4B. There were no outliers identified, so the analyses included data from all 11 participants. When considering only a single listening condition, neither measure of spatial selectivity was significantly predictive of speech perception (p>0.05). Spectral ACC amplitude was significant (p=0.0123) and accounted for just over 50% of the variability.

Table 4.

Electrophysiological measures as predictors of BKB-SIN scores.

A. Mixed-Model Analysis
Predictor Variables Intercept (p-value) Slope (p-value) AIC
ECAP Channel-Separation Index 25.86 (<0.0001) −115.29 (<0.0001) 173.9
Predicted Spatial ACC Amplitude 25.15 (<0.0001) −3.25 (0.0002) 190.7
Spectral ACC Amplitude 21.70 (<0.0001) −2.04 (<0.0001) 178.0
B. Simple Linear Regression Analysis: MAP 3
Predictor Variables Intercept (p-value) Slope (p-value) r2
ECAP Channel-Separation Index 17.07 (0.0402) −45.41 (0.4654) 0.06
Predicted Spatial ACC Amplitude 17.23 (0.0003) −1.70 (0.0775) 0.31
Spectral ACC Amplitude 18.76 (<0.0001) −1.56 (0.0123) 0.52

Akaike’s Information Criterion (AIC); Auditory Change Complex (ACC); Electrically Evoked Compound Action Potential (ECAP)

DISCUSSION

Summary of Results

This study included three electrophysiological measures to evaluate peripheral and cortical neural representations of spectrally simple and complex stimuli. Speech perception was assessed using vowels and the BKB-SIN test. The outcome measures were compared within a regression framework, and the following relationships were identified:

  1. All three electrophysiological measures were significantly correlated with each other and with scores of both speech tests.

  2. The ECAP channel-separation index was the best predictor of changes in spectral ACC amplitude and speech perception when data from all three experimental MAPs were considered. The cortical measures, which provide information about central processing in addition to peripheral processing, did not improve these predictions.

  3. The cortical measures improved the predictions of spectral selectivity and speech perception when information about neural processing was limited to a single listening condition for each participant. Spectral ACC amplitude was the best predictor of speech perception and accounted for 50–60% of the variability.

Although we hypothesized that the ECAP measures would be predictive of speech perception, an unexpected finding was the strength of the relationship when multiple observations were included for each participant, especially in contrast to the weak correlations observed when the analysis was limited to the data set for a single listening condition. The implications of these differences will be discussed.

Results of the simple linear regression analyses appear to reflect the inherent hierarchy across outcome measures with regard to the complexity of the stimulus and stages of auditory processing contributing to the electrophysiological responses. This hierarchy is illustrated in Figure 6 using the statistically significant results for the cross-subject, simple linear regression analyses (MAP 3 data). The schematic displays the outcome measures in order of stimulus complexity (left-to-right) and stages of processing from the periphery to the cortex (bottom-to-top). Lines begin at the independent variables and terminate with the arrows pointing to the dependent variables. The r2 values associated with each analysis are displayed on the lines. The double lines connecting the ECAP channel-separation index with the spatial ACC represent the “predicted spatial ACC amplitude”, which was calculated by combining the two measures.

Figure 6.

Schematic of simple linear regression analyses (MAP 3 data). The outcome measures of this study are displayed in order of stimulus complexity (left-to-right) and in order of dependence upon processing at different stages along the auditory pathway, from the periphery (bottom) to more central structures (top). Lines with arrows connect single predictors with dependent variables; the coefficient of determination is displayed for each comparison. Double lines represent the combined peripheral-central measure of spatial selectivity: the predicted spatial ACC. Only statistically significant results are displayed.

The most peripheral response evoked with the simplest stimulus (ECAP) was significantly correlated with a more central response evoked with a more complex stimulus (spectral ACC), but not with either measure of speech perception when limiting the analysis to MAP 3 data. When information about central processing was included (in the form of the predicted spatial ACC), more of the variability observed in the measure of spectral selectivity was accounted for, and a significant correlation was observed with vowels but not with the most complex speech test used in this study (BKB-SIN). The spectral ACC, evoked with a complex, speech-like stimulus, explained more of the variability in vowel performance than either measure of spatial selectivity and was correlated with speech perception in noise. These results support the rationale behind the study design, which included electrophysiological measures from both peripheral and cortical auditory levels and the use of both simple and complex stimuli to evoke responses.

Caveats

In this study we activated different electrodes in three experimental MAPs to change spatial and spectral selectivity within each participant and to evaluate the effects on speech perception. Because the MAPs were novel and participants were given minimal listening practice, we expected that speech perception might be poorer than if participants were allowed to use their clinical MAPs (although this was not directly evaluated). The primary interest was not absolute performance but relative performance (i.e., differences across the experimental MAPs for a given individual and differences across participants). However, efforts to reduce differences in stimulation parameters across the outcome measures may limit generalization of the study findings. For example, the reference electrode was MP1 for all measures because this is what was used for the ECAP measures. Although stimulation levels differed across outcome measures, we attempted to retain the relative electrode C-level profile for the spatial selectivity measures. Additionally, the processor bandwidth was set to 350–5600 Hz for speech-perception measures to match what was used for the spectral-selectivity measures. It is not clear to what extent the relationships observed between electrophysiological measures and speech perception were due to these controls, or how comparisons with everyday programs can be accomplished most effectively.

A second concern about the study design is the similarity/dissimilarity between the frequency-to-electrode allocation of the experimental MAPs and that of the CI users’ everyday MAPs. Shifting the place of stimulation from what a listener is accustomed to can be detrimental to acute vowel-perception abilities (Fu & Shannon 1999a,b). Spatial selectivity and speech perception were typically best when participants used MAP 3, which also had the frequency-to-electrode allocation most similar (though not identical) to the default settings. Although we do not know whether this had confounding effects on the mixed-model speech analyses, the simple linear regression analyses using only MAP 3 data were not confounded.

ECAP Channel-Interaction Functions

This is one of several studies to demonstrate significant correlations between measures of spatial selectivity and speech perception (e.g. Nelson et al. 1995; Collins et al. 1997; Throckmorton & Collins 1999; Henry et al. 2000; Boex et al. 2003; Jones et al. 2013); however, it is the first study to show a direct relationship between ECAP channel-interaction functions and speech perception. We attribute the significant results in part to the more complete measures of peripheral spatial selectivity (similar to Jones et al. 2013) and the use of the channel-separation index (Hughes, 2008) to quantify the ECAP channel-interaction functions. More notable is that ECAP measures were predictive of speech perception only when considering multiple listening conditions for each individual. Previous studies have focused on cross-subject predictions (Cohen et al. 2003; Hughes & Abbas 2006a; Hughes & Stille 2008; Tang et al. 2011; but van der Beek et al. 2012 also used mixed-model analysis), but a goal of CI programming is to optimize stimulation for an individual. CI processors can be adjusted to produce different stimulation patterns (listening conditions) for the recipient, and our results suggest that measures of peripheral spatial selectivity may be useful for making such comparisons. Although the experimental MAPs used in this study were not designed for everyday use, changing the spatial pattern of stimulation, whether by deactivating electrodes or by adjusting the stimulation mode and processing strategy, is a standard option in clinical programming.

A number of investigators have demonstrated improved performance on speech tests when selectively deactivating electrodes (Zwolan et al. 1997; Garadat et al. 2012; Noble et al. 2013). Two of these studies used measures suggesting poor peripheral spatial selectivity as a criterion for choosing which electrodes to deactivate. Zwolan and colleagues (1997) deactivated electrodes that were perceptually indiscriminable. Noble and colleagues (2013) deactivated electrodes assumed to result in overlapping neural stimulation based on position within the cochlear duct. There is likely a limit to how much improvement can be obtained with this electrode-deactivation procedure, and to the maximum number of electrodes that can be deactivated before performance is negatively affected (e.g. Friesen et al. 2001). Zhou and Pfingst (2013) proposed an alternative to electrode deactivation, which they refer to as “site rehabilitation”. Instead of deactivating electrodes with poor modulation detection thresholds, the investigators raised T levels to “artificially” improve detection of temporal modulations. Speech perception improved with the experimental programs for most participants. Based on the preliminary success of the electrode-deactivation and site-rehabilitation methods, it may be worth examining whether ECAP channel-interaction functions could be used to identify electrodes to deactivate or modify.

Another potential application is to use information about peripheral spatial selectivity to determine if a specific processing strategy or stimulation mode would be optimal. Although some of the clinically available strategies are associated with better performance than others on average (ACE compared to CIS, and HiRes with Fidelity 120 compared to HiRes), the best or most preferred strategy is person-specific (e.g. Skinner et al. 2002; Firszt et al. 2009). Clinicians currently rely on their programming experiences and reports by patients to select processing features. Often the default settings within the programming software are left unchanged. Sometimes speech-perception tests are performed to compare different processing strategies, but CI users often need time to acclimate to new listening conditions (e.g. Tyler et al. 1997), and a trial-and-error method is not efficient. As new techniques are introduced to improve the transmission of spectral detail by the device (see Bonham & Litvak 2008 for a review of current focusing/steering), determining whether peripheral neural survival is sufficient to transmit that detail will be necessary for predicting whether a strategy will benefit the individual. Spectral-resolution measures have been used to validate processing strategies (Berenstein et al. 2008; Drennan et al. 2010); however, the results of this study suggest that clinical decisions about how to change processor settings for an individual might be better guided by peripheral measures of spatial selectivity. The specific manner in which information about ECAP channel-interaction functions can be used to help with these more complex decisions is less straightforward than deactivating electrodes with poor spatial selectivity. But considering that ECAP measures are noninvasive reflections of peripheral neural excitation, they are worth further exploration.

Although the ECAP channel-separation index has been suggested for quantifying ECAP channel-interaction functions (Hughes 2008; Hughes et al. 2013; Scheperle & Abbas submitted) and appeared adequate given the results of the mixed-model analysis in this study, it may be that the metric is fundamentally limited in its application to the perception of complex stimuli because it is calculated for pairs of electrodes. Complex sounds generally result in the stimulation of many electrodes, and an overall measure of interaction among all stimulated electrodes may be more appropriate. One such quantity calculated from measures of auditory filters in normal and hearing-impaired ears is an internal spectrum (Moore & Glasberg 1987; Turner & Henn 1989; Summers & Leek 1994). Adapting this method for use with analogous auditory filters in CI users was beyond the scope of this study. However, it could potentially provide a better characterization of the peripheral neural excitation pattern of complex stimuli than the mean or maximum ECAP channel-separation index, as used in this study.

Despite its limitations, calculating an ECAP channel-separation index is relatively straightforward, and there may be ways to improve how the measure is used. For example, a possible limitation of this study was the use of an unweighted mean across adjacent electrodes to compare with the speech measures. Some frequency regions are more important than others for speech intelligibility, and the relative importance depends upon the speech materials (ANSI, 1997). We did not consider a weighting function a priori and did not apply weights from previous studies post hoc because doing so would have required a number of approximations. Additionally, using a mean, even if weighted, may not be ideal for predicting speech perception if variability is observed across electrode pairs within an individual. For example, mean behavioral thresholds across the electrode array have not been found to correlate with speech perception; however, threshold variability across the array has (Pfingst et al. 2004; Bierer 2007). Quantifying ECAP channel-interaction functions in terms of variability has not been explored and is worth considering.
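The variability idea can be sketched with hypothetical index values: two listeners with identical unweighted means across the array can differ markedly in across-array variability, which an averaged metric would not capture. This is an illustration of the concept, not a validated metric.

```python
import numpy as np

# Hypothetical channel-separation indices across six adjacent electrode
# pairs for two listeners with the same mean but different variability.
listener_a = np.array([0.20, 0.21, 0.19, 0.20, 0.20, 0.20])
listener_b = np.array([0.05, 0.35, 0.10, 0.30, 0.15, 0.25])

# Identical unweighted means would predict identical performance...
mean_a, mean_b = listener_a.mean(), listener_b.mean()

# ...whereas a variability-based metric distinguishes the two listeners.
sd_a, sd_b = listener_a.std(ddof=1), listener_b.std(ddof=1)
```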

In this study, channel-interaction functions were normalized to the largest ECAP amplitude observed across probe electrodes within each individual (Hughes 2008). This normalization was preferred over normalizing each function to its own peak for the purpose of retaining relative amplitude differences, and may be an important difference in how the channel-separation index compares to other metrics used to quantify ECAP channel-interaction functions. However, in one participant (F26L; identified as an outlier for the vowel regression analysis) ECAP amplitudes across probe electrodes ranged from 140–600 μV. Dividing the amplitudes by the maximum resulted in small normalized ECAP amplitudes (and small channel-separation indices) for the majority of the basal electrodes, even though the non-normalized amplitudes were large compared to many other participants. It is not clear if or how the normalization procedure should be adjusted (e.g. perhaps based on a mean value instead of the maximum, and allowed to extend beyond 1.0), but for this one individual, normalization did not appear to reflect the good neural survival that was indirectly suggested by the non-normalized amplitudes and by good speech-perception scores.
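The compression produced by global-max normalization can be illustrated with hypothetical amplitudes spanning a range similar to the one described above; the specific values below are assumptions, not the participant's data.

```python
import numpy as np

# Hypothetical peak ECAP amplitudes (uV) across probe electrodes for one
# listener with a wide apical-to-basal amplitude range (cf. 140-600 uV).
ecap_uv = np.array([600.0, 520.0, 450.0, 300.0, 220.0, 170.0, 140.0])

# Normalizing every function to the single largest amplitude (global max)
# preserves relative differences across electrodes...
norm_global = ecap_uv / ecap_uv.max()

# ...but compresses the basal values even when their absolute amplitudes
# are large compared with those of other listeners.
```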

Cortical Auditory Evoked Potentials

Initial evaluations of the ACC in CI users demonstrated feasibility of recording the response and described the sensitivity of the response to size/extent of the stimulus change (e.g. Friesen & Tremblay, 2006; Martin, 2007; Brown et al. 2008; Kim et al. 2009). More recent studies have demonstrated that cortical measures of discrimination are correlated with behavioral measures (Hoppe et al. 2010; Won et al. 2011a; He et al. 2013; Lopez Valdes et al. 2014). Complex phonemic contrasts and speech-like signals have been used to elicit cortical responses (Tremblay et al. 2003; Friesen & Tremblay, 2006; Martin, 2007; Won et al. 2011a; Lopez Valdes et al. 2014), but only one other study to date has directly related the ACC with speech perception abilities in CI users (He et al. 2013).

Spatial ACC

He and colleagues (2013) elicited the ACC within an electrode-discrimination framework and observed a significant correlation between objectively determined electrode-discrimination thresholds and speech perception for children identified with auditory neuropathy spectrum disorder. Our results extend those of He and colleagues by showing a significant relationship between the predicted spatial ACC and speech perception for a more heterogeneous group of CI users. Additionally, speech scores were considered continuous variables in this study rather than categorized (He et al. 2013). The significant correlation observed between the predicted spatial ACC amplitude and speech perception is also consistent with the findings of studies using behavioral measures of spatial selectivity (e.g. Nelson et al. 1995; Collins et al. 1997; Throckmorton & Collins 1999; Henry et al. 2000; Boex et al. 2003; Jones et al. 2013). Like the ACC, behavioral measures reflect cumulative processing from the periphery to the cortex. Additionally, many of the psychophysical studies included channel-interaction measures for the majority of activated electrodes. Our results, along with those listed here, support two of our hypotheses: (1) numerous measures across the electrode array are necessary for relating spatial selectivity with the perception of more complex stimuli and (2) differences in central processing across CI users are important to consider. This last point is supported directly by our results showing that the predicted spatial ACC was more strongly correlated with speech scores than the ECAP measures when data were limited to a single listening condition.

Spectral ACC

Spectral-rippled noise is a popular stimulus for evaluating spectral resolution; however, there are concerns that listeners may rely on other perceptual abilities for discrimination, namely single-channel loudness cues or pitch percepts associated with either the level presented on the lowest or highest electrode (edge effects) or the spectral centroid (Azadpour & McKay 2012, further described in Aronoff & Landsberger 2013). Several investigators have addressed these concerns and found that the potentially confounding factors (edge effects and intensity cues) are likely minimal (e.g., Anderson et al. 2011; Won et al. 2011c). The significant correlations between measures of spatial selectivity and spectral ripple discrimination (Anderson et al. 2011; Jones et al. 2013; this study) and the large r2 values (Jones: r2=0.94; this study: r2=0.87) further support that conclusion.

Although significant relationships between behavioral measures of spectral resolution and speech perception have been observed across studies using various stimulus paradigms (e.g., ripple density or depth) and speech-perception measures (Henry and Turner, 2003; Henry et al. 2005; Litvak et al. 2007; Won et al. 2007; Berenstein et al. 2008; Saoji et al. 2009; Anderson et al. 2011; Spahr et al. 2011; Won et al. 2011b), this is the first study to provide evidence that an electrophysiological correlate is also predictive of speech perception. Won and colleagues (2011a) demonstrated that the ACC could be evoked within a spectral-ripple, density-discrimination paradigm, and that electrophysiological responses were correlated with behavioral measures of ripple discrimination. Although we used a single ripple depth to elicit the ACC and compared the amplitude of the electrophysiological response to speech perception, in light of previous behavioral studies, we were not surprised to observe significant correlations.

Clinical Considerations

Using ECAP measures to evaluate peripheral spatial selectivity is faster and/or more cost effective than many of the psychophysical (e.g. electrode discrimination: Zwolan et al. 1997; forward-masked spatial tuning curves: Nelson et al. 2008; channel interaction: Jones et al. 2013) and objective (ABR thresholds using focused stimulation: Bierer et al. 2011; CT imaging: Noble et al. 2013) alternatives. Determining behavioral thresholds using focused stimulation across the array (Bierer 2007) in adult CI users is neither more costly nor more time consuming than eliciting ECAP channel-interaction functions; however, behavioral methods require cooperation from participants and are not ideal for pediatric and difficult-to-test populations. ECAP channel-interaction functions can be measured using clinical software, and recording/analyzing ECAPs is within the scope of practice of audiologists. For this study, ECAP channel-interaction functions were obtained for more than half of a 22-electrode array in less than an hour. The time needed to obtain channel-interaction functions for all intracochlear electrodes will depend upon the electrode array, but might be reduced considerably by using fewer masker electrodes or by averaging across fewer sweeps when subtracted response amplitudes are large.

Although this study also provides evidence that the ACC evoked with simple and complex stimuli is useful for predicting perceptual abilities across individuals with CIs, especially when single observations are available for individuals, there are some practical considerations. First, the spatial ACC measures were more time consuming than ECAP measures. There may be ways to shorten test time from what was required for this study, and these would be worth exploring. For example, it is unclear whether it was necessary to record responses for thirteen electrode pairs (the protocol for the companion study) or if a smaller subset would have been sufficient. Shortening the test time likely would have been beneficial, as noise levels tended to increase across the duration of the lengthy sessions, and participants had difficulty staying alert. Eliciting the spectral ACC was less time consuming. Although behavioral studies using rippled-noise stimuli rely on obtaining density or depth thresholds, we explored using the response amplitude elicited at a single, suprathreshold ripple depth. The single response was predictive across individuals, suggesting that the more time-efficient method was adequate.

A second practical disadvantage of cortical potentials is that they are typically recorded using far-field electrodes placed on the scalp. Placement of far-field electrodes takes time and adds material expenses (electrodes, conductive paste, cleaner, etc.). McLaughlin and colleagues (2013b) explored the use of the CI extracochlear electrodes to obtain cortical recordings. Although it is not yet possible to obtain long-latency responses with clinical software in a time-efficient manner, the preliminary results are promising. Eliminating the need for scalp electrodes would be especially useful for obtaining measures in children; however, even responses obtained within passive listening paradigms require some cooperation. For example, listeners must sit relatively still while remaining alert. Although this combination is difficult to achieve with young children, the ACC has been successfully recorded in 4-month-old infants (Small & Werker 2012).

Finally, although we focused on the benefits of using cortical measures to compare performance across individuals, there is some indication that cortical measures are beneficial for evaluating changes in auditory processing within an individual. Cortical measures reflect maturational changes and development of the auditory system (e.g. Ponton et al. 1996; 2000; Wunderlich & Cone-Wesson 2006), and can reflect perceptual changes within a person due to listening experience (e.g. Sharma et al. 2002) or training (e.g. Menning et al. 2000). Although these additional factors may make the response more difficult to interpret, they also indicate the potential for more widespread applications.

CONCLUSIONS

This study provides evidence for a relationship that has been elusive to date: objective measures of peripheral spatial selectivity (i.e., ECAP channel-interaction functions) relate to speech perception. Additional work is required to determine which metric best represents and quantifies the neural excitation patterns indirectly reflected in the response, and how that information can be used to guide clinical programming decisions.
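As an illustration of the kind of metric at issue, the sketch below summarizes the overlap between two ECAP channel-interaction functions (normalized ECAP amplitudes as a masker electrode is swept across the array for a fixed probe) as a single index. This is not the authors' channel-separation index; it is one plausible overlap measure (half the L1 distance between area-normalized profiles), shown only to make the concept concrete.

```python
def channel_separation_index(func_a, func_b):
    """Return a value in [0, 1]: 0 = identical excitation profiles
    (complete overlap), 1 = fully non-overlapping profiles."""
    if len(func_a) != len(func_b):
        raise ValueError("functions must cover the same masker electrodes")
    # Normalize each function so its values sum to 1, treating it as a
    # spatial excitation profile.
    total_a, total_b = sum(func_a), sum(func_b)
    norm_a = [v / total_a for v in func_a]
    norm_b = [v / total_b for v in func_b]
    # Half the L1 distance between two unit-sum profiles is bounded by
    # [0, 1] and grows as the profiles separate.
    return 0.5 * sum(abs(a - b) for a, b in zip(norm_a, norm_b))

# Two probes with identical excitation yield 0; widely separated,
# barely overlapping excitation yields a value near 1.
same = channel_separation_index([1, 4, 9, 4, 1], [1, 4, 9, 4, 1])
apart = channel_separation_index([9, 4, 1, 0, 0], [0, 0, 1, 4, 9])
```

A metric of this general form can be computed for every pair of activated electrodes, which is how a per-MAP summary of spatial selectivity could be assembled.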

Our results indicate that peripheral and central electrophysiological measures of spatial and spectral selectivity provide valuable information about underlying neural processing and resulting perception. The best measure depends upon the application of interest. For example, the cortical response evoked with the most complex, speech-like stimulus (spectral ACC) was the best predictor of speech perception when information was limited to a single listening condition for each participant. The spectral ACC also has the practical advantage of being faster to elicit than the spatial ACC paradigm used in this study, and the test time was comparable to, if not faster than, performing ECAP channel-interaction functions on all activated electrodes. However, we do not conclude that the spectral ACC is optimal for all situations. The more specific measures of spatial selectivity may be useful for making decisions about how to optimize the speech processor for an individual. The ECAP channel-separation index was the best predictor of perception when multiple measures were obtained within an individual, and may be better suited than cortical measures to inform the adjustment of CI programs.

Supplementary Material

Supplemental Digital Content 1 (.pdf). Figure showing group distributions of peripheral and central spatial selectivity measures for each experimental MAP as box-and-whisker plots.

Supplemental Digital Content 2 (.pdf). Table showing statistical results using the mean ECAP channel-separation index and mean predicted spatial ACC amplitudes for a given MAP as predictors of spectral ACC amplitude.

Acknowledgments

Source of Funding: This research was funded by the National Institutes of Health, National Institute on Deafness and Other Communication Disorders under awards F31DC013202, P50DC000242, and R01DC012082. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. Participant compensation was funded in part by the University of Iowa Department of Communication Sciences and Disorders. The first author received a student travel award to present portions of this work at the 2013 Conference on Implantable Auditory Prostheses. She also received a mentored student travel award to present a poster at the 2014 American Auditory Society meeting on a different project.

We acknowledge the participants for their commitment, cooperation and patience throughout the course of this project. This manuscript is based on the first author’s dissertation work at the University of Iowa under the mentorship of Paul J. Abbas. We are thankful for the contributions of all committee members: Carolyn J. Brown, Camille C. Dunn, Shawn S. Goodman, and Christopher W. Turner. We also appreciate the feedback and suggestions from Michelle L. Hughes.

Footnotes

Conflicts of Interest

No conflicts of interest were declared.

A portion of this work was presented at the 16th Biennial Conference on Implantable Auditory Prosthetics, Lake Tahoe, California, July 18, 2013.

References

1. Abbas PJ, Hughes ML, Brown CJ, et al. Channel interaction in cochlear implant users evaluated using the electrically evoked compound action potential. Audiol Neurootol. 2004;4:203–213. doi: 10.1159/000078390.
2. Anderson ES, Nelson DA, Kreft H, et al. Comparing spatial tuning curves, spectral ripple resolution, and speech perception in cochlear implant users. J Acoust Soc Am. 2011;1:364–375. doi: 10.1121/1.3589255.
3. ANSI. ANSI S3.5–1997, American National Standards Methods for Calculation of the Speech Intelligibility Index. New York: 1997.
4. Aronoff JM, Landsberger DM. The development of a modified spectral ripple test. J Acoust Soc Am. 2013;134:EL217–EL222. doi: 10.1121/1.4813802.
5. Azadpour M, McKay CM. A psychophysical method for measuring spatial resolution in cochlear implants. J Assoc Res Otolaryngol. 2012;13:145–157. doi: 10.1007/s10162-011-0294-z.
6. Berenstein CK, Mens LHM, Mulder JJS, et al. Current steering and current focusing in cochlear implants: Comparison of monopolar, tripolar, and virtual channel electrode configurations. Ear Hear. 2008;2:250–260. doi: 10.1097/aud.0b013e3181645336.
7. Bierer JA. Threshold and channel interaction in cochlear implant users: Evaluation of the tripolar electrode configuration. J Acoust Soc Am. 2007;121:1642–1653. doi: 10.1121/1.2436712.
8. Bierer JA, Faulkner KF, Tremblay KL. Identifying cochlear implant channels with poor electrode-neuron interface: electrically-evoked auditory brainstem responses measured with the partial tripolar configuration. Ear Hear. 2011;32:436–444. doi: 10.1097/AUD.0b013e3181ff33ab.
9. Boex C, Kos MI, Pelizzone M. Forward masking in different cochlear implant systems. J Acoust Soc Am. 2003;4(Pt 1):2058–2065. doi: 10.1121/1.1610452.
10. Bonham BH, Litvak LM. Current focusing and steering: Modeling, physiology, and psychophysics. Hear Res. 2008;242:141–153. doi: 10.1016/j.heares.2008.03.006.
11. Brown CJ, Etler C, He S, et al. The electrically evoked auditory change complex: preliminary results from nucleus cochlear implant users. Ear Hear. 2008;5:704–717. doi: 10.1097/AUD.0b013e31817a98af.
12. Brown CJ, Hughes ML, Luk B, et al. The relationship between EAP and EABR thresholds and levels used to program the nucleus 24 speech processor: Data from adults. Ear Hear. 2000;2:151–163. doi: 10.1097/00003446-200004000-00009.
13. Cohen LT. Practical model description of peripheral neural excitation in cochlear implant recipients: 4. Model development at low pulse rates: General model and application to individuals. Hear Res. 2009;248:15–30. doi: 10.1016/j.heares.2008.11.008.
14. Cohen LT, Lenarz T, Battmer RD, et al. A psychophysical forward masking comparison of longitudinal spread of neural excitation in the Contour and straight Nucleus electrode arrays. Int J Audiol. 2005;10:559–566. doi: 10.1080/14992020500258743.
15. Cohen LT, Richardson LM, Saunders E, et al. Spatial spread of neural excitation in cochlear implant recipients: comparison of improved ECAP method and psychophysical forward masking. Hear Res. 2003;1–2:72–87. doi: 10.1016/s0378-5955(03)00096-0.
16. Cohen LT, Saunders E, Knight MR, et al. Psychophysical measures in patients fitted with Contour and straight Nucleus electrode arrays. Hear Res. 2006;1–2:160–175. doi: 10.1016/j.heares.2005.11.005.
17. Collins LM, Zwolan TA, Wakefield GH. Comparison of electrode discrimination, pitch ranking, and pitch scaling data in postlingually deafened adult cochlear implant subjects. J Acoust Soc Am. 1997;1:440–455. doi: 10.1121/1.417989.
18. Drennan WR, Won JH, Nie K, et al. Sensitivity of psychophysical measures to signal processor modifications in cochlear implant users. Hear Res. 2010;1–2:1–8. doi: 10.1016/j.heares.2010.02.003.
19. Eisen MD, Franck KH. Electrode interaction in pediatric cochlear implant subjects. J Assoc Res Otolaryngol. 2005;2:160–170. doi: 10.1007/s10162-005-5057-2.
20. Firszt JB, Holden LK, Skinner MW, et al. Recognition of speech presented at soft to loud levels by adult cochlear implant recipients of three cochlear implant systems. Ear Hear. 2004;4:375–387. doi: 10.1097/01.aud.0000134552.22205.ee.
21. Firszt JB, Holden LK, Reeder RM, et al. Speech recognition in cochlear implant recipients: comparison of standard HiRes and HiRes 120 sound processing. Otol Neurotol. 2009;30:146–152. doi: 10.1097/MAO.0b013e3181924ff8.
22. Fishman KE, Shannon RV, Slattery WH. Speech recognition as a function of the number of electrodes used in the SPEAK cochlear implant speech processor. J Speech Lang Hear Res. 1997;5:1201–1215. doi: 10.1044/jslhr.4005.1201.
23. Friesen LM, Shannon RV, Baskent D, et al. Speech recognition in noise as a function of the number of spectral channels: comparison of acoustic hearing and cochlear implants. J Acoust Soc Am. 2001;2:1150–1163. doi: 10.1121/1.1381538.
24. Friesen LM, Tremblay KL. Acoustic change complexes recorded in adult cochlear implant listeners. Ear Hear. 2006;6:678–685. doi: 10.1097/01.aud.0000240620.63453.c3.
25. Frijns JHM, de Snoo SL, Schoonhoven R. Potential distributions and neural excitation patterns in a rotationally symmetric model of the electrically stimulated cochlea. Hear Res. 1995;87:170–186. doi: 10.1016/0378-5955(95)00090-q.
26. Fu Q, Shannon RV. Effects of electrode configuration and frequency allocation on vowel recognition with the Nucleus-22 cochlear implant. Ear Hear. 1999a;20:332–344. doi: 10.1097/00003446-199908000-00006.
27. Fu Q, Shannon RV. Recognition of spectrally degraded and frequency-shifted vowels in acoustic and electric hearing. J Acoust Soc Am. 1999b;105:1889–1900. doi: 10.1121/1.426725.
28. Fu QJ, Shannon RV, Wang X. Effects of noise and spectral resolution on vowel and consonant recognition: Acoustic and electric hearing. J Acoust Soc Am. 1998;104:3586–3596. doi: 10.1121/1.423941.
29. Garadat SN, Zwolan TA, Pfingst BE. Across-site patterns of modulation detection: Relation to speech recognition. J Acoust Soc Am. 2012;131:4030–4041. doi: 10.1121/1.3701879.
30. Goldwyn JH, Bierer SM, Bierer JA. Modeling the electrode-neuron interface of cochlear implants: Effects of neural survival, electrode placement, and the partial tripolar configuration. Hear Res. 2010;268:93–104. doi: 10.1016/j.heares.2010.05.005.
31. Gorga MP, Neely ST, Kopun J, et al. Distortion-product otoacoustic emission suppression tuning curves in humans. J Acoust Soc Am. 2011;129:817–827. doi: 10.1121/1.3531864.
32. Hansen M. PsyLab-Documentation Version 2.4. Institut für Hörtechnik + Audiologie, Jade Hochschule; Oldenburg, Germany: 2012. Retrieved February 24, 2013 from http://www.hoertechnik-audiologie.de/psylab/psylab-doc.pdf.
33. He S, Grose JH, Teagle HFB, et al. Objective measures of electrode discrimination with electrically evoked auditory change complex and speech-perception abilities in children with auditory neuropathy spectrum disorder. Ear Hear. 2013;34:733–744. doi: 10.1097/01.aud.0000436605.92129.1b.
34. Henry BA, McKay CM, McDermott HJ, et al. The relationship between speech perception and electrode discrimination in cochlear implantees. J Acoust Soc Am. 2000;3(Pt 1):1269–1280. doi: 10.1121/1.1287711.
35. Henry BA, Turner CW. The resolution of complex spectral patterns by cochlear implant and normal-hearing listeners. J Acoust Soc Am. 2003;5:2861–2873. doi: 10.1121/1.1561900.
36. Henry BA, Turner CW, Behrens A. Spectral peak resolution and speech recognition in quiet: normal hearing, hearing impaired, and cochlear implant listeners. J Acoust Soc Am. 2005;2:1111–1121. doi: 10.1121/1.1944567.
37. Hillenbrand J, Getty LA, Clark MJ, et al. Acoustic characteristics of American English vowels. J Acoust Soc Am. 1995;5(Pt 1):3099–3111. doi: 10.1121/1.411872.
38. Hoppe U, Wohlberedt T, Danilkina G, et al. Acoustic change complex in cochlear implant subjects in comparison with psychoacoustic measures. Cochlear Implants International. 2010;11:426–430. doi: 10.1179/146701010X12671177204101.
39. Hughes ML. A re-evaluation of the relation between physiological channel interaction and electrode pitch ranking in cochlear implants. J Acoust Soc Am. 2008;5:2711–2714. doi: 10.1121/1.2990710.
40. Hughes ML, Abbas PJ. The relation between electrophysiologic channel interaction and electrode pitch ranking in cochlear implant recipients. J Acoust Soc Am. 2006a;3:1527–1537. doi: 10.1121/1.2163273.
41. Hughes ML, Abbas PJ. Electrophysiologic channel interaction, electrode pitch ranking, and behavioral threshold in straight versus perimodiolar cochlear implant electrode arrays. J Acoust Soc Am. 2006b;3:1538–1547. doi: 10.1121/1.2164969.
42. Hughes ML, Abbas PJ, Brown CJ, et al. Using electrically evoked compound action potential thresholds to facilitate creating MAPs for children with the Nucleus CI24M. Adv Otorhinolaryngol. 2000a:260–265. doi: 10.1159/000059125.
43. Hughes ML, Brown CJ, Abbas PJ, et al. Comparison of EAP thresholds with MAP levels in the nucleus 24 cochlear implant: Data from children. Ear Hear. 2000b;2:164–174. doi: 10.1097/00003446-200004000-00010.
44. Hughes ML, Stille LJ. Psychophysical versus physiological spatial forward masking and the relation to speech perception in cochlear implants. Ear Hear. 2008;3:435–452. doi: 10.1097/AUD.0b013e31816a0d3d.
45. Hughes ML, Stille LJ. Effect of stimulus and recording parameters on spatial spread of excitation and masking patterns obtained with the electrically evoked compound action potential in cochlear implants. Ear Hear. 2010;5:679–692. doi: 10.1097/AUD.0b013e3181e1d19e.
46. Hughes ML, Stille LJ, Baudhuin JL, et al. ECAP spread of excitation with virtual channels and physical electrodes. Hear Res. 2013;306:93–103. doi: 10.1016/j.heares.2013.09.014.
47. Jerger J, Jerger S. Evoked response to intensity and frequency change. Archives of Otolaryngology. 1970;5:433–436. doi: 10.1001/archotol.1970.00770040627007.
48. Jones GL, Won JH, Drennan WR, et al. Relationship between channel interaction and spectral-ripple discrimination in cochlear implant users. J Acoust Soc Am. 2013;133:425–433. doi: 10.1121/1.4768881.
49. Kiang NY, Moxon EC. Physiological considerations in artificial stimulation of the inner ear. Ann Otol Rhinol Laryngol. 1972;5:714–730. doi: 10.1177/000348947208100513.
50. Kim J, Brown CJ, Abbas PJ, et al. The effect of changes in stimulus level on electrically evoked cortical auditory potentials. Ear Hear. 2009;3:320–329. doi: 10.1097/AUD.0b013e31819c42b7.
51. Kutner MH, Nachtsheim CJ, Neter J. Applied Linear Regression Models. 4th ed. Boston, MA: McGraw-Hill/Irwin; 2004. Building the Regression Model I: Model Selection and Validation; pp. 359–360.
52. Litvak LM, Spahr AJ, Saoji AA, et al. Relationship between perception of spectral ripple and speech recognition in cochlear implant and vocoder listeners. J Acoust Soc Am. 2007;2:982–991. doi: 10.1121/1.2749413.
53. Lopez Valdez A, McLaughlin M, Viani L, et al. Objective assessment of spectral ripple discrimination in cochlear implant listeners using cortical evoked responses to an oddball paradigm. PLoS One. 2014;9:e90044. doi: 10.1371/journal.pone.0090044.
54. Martin BA. Can the acoustic change complex be recorded in an individual with a cochlear implant? Separating neural responses from cochlear implant artifact. J Am Acad Audiol. 2007;2:126–140. doi: 10.3766/jaaa.18.2.5.
55. McLaughlin M, Lopez Valdes A, Reilly RB, et al. Cochlear implant artifact attenuation in late auditory evoked potentials: A single channel approach. Hear Res. 2013a;302:84–95. doi: 10.1016/j.heares.2013.05.006.
56. McLaughlin M, Lu T, Dimitrijevic A, et al. Towards a closed-loop cochlear implant system: application of embedded monitoring of peripheral and central neural activity. IEEE Trans Neural Syst Rehabil Eng. 2013b;20:443–454. doi: 10.1109/TNSRE.2012.2186982.
57. Menning H, Roberts LE, Pantev C. Plastic changes in the auditory cortex induced by intensive frequency discrimination training. NeuroReport. 2000;11:817–822. doi: 10.1097/00001756-200003200-00032.
58. Moore BCJ, Glasberg BR. Formulae describing frequency selectivity as a function of frequency and level, and their use in calculating excitation patterns. Hear Res. 1987;28:209–225. doi: 10.1016/0378-5955(87)90050-5.
59. Nelson DA, Donaldson GS, Kreft H. Forward-masked spatial tuning curves in cochlear implant users. J Acoust Soc Am. 2008;3:1522–1543. doi: 10.1121/1.2836786.
60. Nelson DA, Kreft HA, Anderson ES, et al. Spatial tuning curves from apical, middle, and basal electrodes in cochlear implant users. J Acoust Soc Am. 2011;6:3916–3933. doi: 10.1121/1.3583503.
61. Nelson DA, Vantasell DJ, Schroder AC, et al. Electrode ranking of place pitch and speech recognition in electrical hearing. J Acoust Soc Am. 1995;4:1987–1999. doi: 10.1121/1.413317.
62. Noble JH, Labadie RF, Gifford RH, et al. Image-guidance enables new methods for customizing cochlear implant stimulation strategies. IEEE Trans Neural Syst Rehabil Eng. 2013;21:820–829. doi: 10.1109/TNSRE.2013.2253333.
63. Ostroff JM, Martin BA, Boothroyd A. Cortical evoked response to acoustic change within a syllable. Ear Hear. 1998;4:290–297. doi: 10.1097/00003446-199808000-00004.
64. Pfingst BE, Franck KH, Xu L, et al. Effects of electrode configuration and place of stimulation on speech perception with cochlear prostheses. J Assoc Res Otolaryngol. 2001;2:87–103. doi: 10.1007/s101620010065.
65. Pfingst BE, Xu L, Thompson CS. Across-site threshold variation in cochlear implants: Relation to speech recognition. Audiol Neurootol. 2004;9:341–352. doi: 10.1159/000081283.
66. Ponton CW, Don M, Eggermont JJ, et al. Maturation of human cortical auditory function: differences between normal-hearing children and children with cochlear implants. Ear Hear. 1996;5:430–437. doi: 10.1097/00003446-199610000-00009.
67. Ponton CW, Eggermont JJ, Kwong B, et al. Maturation of human central auditory system activity: evidence from multi-channel evoked potentials. Clin Neurophysiol. 2000;111:220–236. doi: 10.1016/s1388-2457(99)00236-9.
68. Saoji AA, Litvak L, Spahr AJ, et al. Spectral modulation detection and vowel and consonant identifications in cochlear implant listeners. J Acoust Soc Am. 2009;3:955–958. doi: 10.1121/1.3179670.
69. Scheperle RA, Abbas PJ. Peripheral and central contributions to cortical responses in cochlear implant users. Ear Hear. Submitted. doi: 10.1097/AUD.0000000000000143.
70. Shannon RV. The relative importance of amplitude, temporal, and spectral cues for cochlear implant processor design. Am J Audiol. 2002;2:124–127. doi: 10.1044/1059-0889(2002/013).
71. Sharma A, Dorman MF, Spahr AJ. Rapid development of cortical auditory evoked potentials after early cochlear implantation. NeuroReport. 2002;13:1365–1368. doi: 10.1097/00001756-200207190-00030.
72. Skinner MW, Holden LK, Whitford LA, et al. Speech recognition with the nucleus 24 SPEAK, ACE, and CIS speech coding strategies in newly implanted adults. Ear Hear. 2002;23:207–223. doi: 10.1097/00003446-200206000-00005.
73. Small SA, Werker JF. Does the ACC have potential as an index of early speech-discrimination ability? A preliminary study in 4-month-old infants with normal hearing. Ear Hear. 2012;6:E59–E69. doi: 10.1097/AUD.0b013e31825f29be.
74. Spahr A, Saoji A, Litvak L, et al. Spectral cues for understanding speech in quiet and in noise. Cochlear Implants International. 2011;12:S66–S69. doi: 10.1179/146701011X13001035753056.
75. Stickney GS, Loizou PC, Mishra LN, et al. Effects of electrode design and configuration on channel interactions. Hear Res. 2006;211:33–45. doi: 10.1016/j.heares.2005.08.008.
76. Summers V, Leek MR. The internal representation of spectral contrast in hearing-impaired listeners. J Acoust Soc Am. 1994;6:3518–3528. doi: 10.1121/1.409969.
77. Tang Q, Benítez R, Zeng FG. Spatial channel interactions in cochlear implants. J Neural Eng. 2011;8. doi: 10.1088/1741-2560/8/4/046029.
78. Throckmorton CS, Collins LM. Investigation of the effects of temporal and spatial interactions on speech-recognition skills in cochlear-implant subjects. J Acoust Soc Am. 1999;2(Pt 1):861–873. doi: 10.1121/1.426275.
79. Tremblay KL, Friesen L, Martin BA, et al. Test-retest reliability of cortical evoked potentials using naturally produced speech sounds. Ear Hear. 2003;3:225–232. doi: 10.1097/01.AUD.0000069229.84883.03.
80. Turner CW, Henn CC. The relation between vowel recognition and measures of frequency resolution. J Speech Hear Res. 1989;1:49–58. doi: 10.1044/jshr.3201.49.
81. Tyler RS, Parkinson AJ, Woodworth GG, Lowder MW, Gantz BJ. Performance over time of adult patients using the Ineraid or Nucleus cochlear implant. J Acoust Soc Am. 1997;102:508–522. doi: 10.1121/1.419724.
82. van der Beek FB, Briaire JJ, Frijns JHM. Effects of parameter manipulations on spread of excitation measured with electrically evoked compound action potentials. Int J Audiol. 2012;51:465–474. doi: 10.3109/14992027.2011.653446.
83. Won JH, Clinard CG, Kwon S, et al. Relationship between behavioral and physiological spectral-ripple discrimination. J Assoc Res Otolaryngol. 2011a;3:375–393. doi: 10.1007/s10162-011-0257-4.
84. Won JH, Drennan WR, Nie K, et al. Acoustic temporal modulation detection and speech perception in cochlear implant listeners. J Acoust Soc Am. 2011b;130:376–388. doi: 10.1121/1.3592521.
85. Won JH, Drennan WR, Rubinstein JT. Spectral-ripple resolution correlates with speech reception in noise in cochlear implant users. J Assoc Res Otolaryngol. 2007;3:384–392. doi: 10.1007/s10162-007-0085-8.
86. Won JH, Jones GL, Drennan WR, et al. Evidence of across-channel processing for spectral-ripple discrimination in cochlear implant listeners. J Acoust Soc Am. 2011c;130:2088–2097. doi: 10.1121/1.3624820.
87. Wunderlich JL, Cone-Wesson BK. Maturation of CAEP in infants and children: a review. Hear Res. 2006;1–2:212–223. doi: 10.1016/j.heares.2005.11.008.
88. Zhou N, Pfingst BE. Effects of site-specific level adjustments on speech recognition with cochlear implants. Ear Hear. 2013;35:30–40. doi: 10.1097/AUD.0b013e31829d15cc.
89. Zhu Z, Tang Q, Zeng F, et al. Cochlear-implant spatial selectivity with monopolar, bipolar and tripolar stimulation. Hear Res. 2012;283:45–58. doi: 10.1016/j.heares.2011.11.005.
90. Zwolan TA, Collins LM, Wakefield GH. Electrode discrimination and speech recognition in postlingually deafened adult cochlear implant subjects. J Acoust Soc Am. 1997;6:3673–3685. doi: 10.1121/1.420401.
