Abstract
This study investigated the perceptual relationship between acoustic and electric stimuli presented to CI users with functional contralateral hearing.
Fourteen subjects with unilateral profound deafness implanted with a MED-EL CI scaled the perceptual differences between pure tones presented to the acoustic-hearing ear and electric biphasic pulse trains presented to the implanted ear. The dissimilarity ratings were analyzed with multidimensional scaling (MDS). Additionally, speech performance in noise was tested using sentence material presented in different spatial configurations while patients listened with both their acoustic-hearing and implanted ears.
Results of an alternating least squares scaling (ALSCAL) analysis consistently demonstrated that a change in place of stimulation lies along the same perceptual dimension as a change in acoustic frequency. However, the relative perceptual differences between the acoustic and the electric stimuli varied greatly across subjects. It was hypothesized that the degree of perceptual separation between acoustic and electric stimulation (quantified by the relative dimensional weightings from an INDSCAL analysis) would indicate a change in perceptual quality and would also be predictive of performance with combined acoustic and electric hearing. Perceptual separation between acoustic and electric stimuli was observed for some subjects; however, no relationship between the degree of perceptual separation and performance was found.
Keywords: Multidimensional scaling, cochlear implant, single sided deafness, speech, place pitch, pitch
I. INTRODUCTION
Although cochlear implants (CIs) allow severely hearing-impaired listeners to understand speech, CI users have more difficulty with music perception and with speech comprehension in challenging listening environments than people with normal acoustic hearing. Presumably, the limitations in performance with a CI reflect the differing properties of electric and acoustic stimulation. Amongst other differences, the rate and place of electric stimulation are mismatched, stimulation is pulsatile, and the spread of excitation is broader with electric stimulation than with acoustic stimulation in normal-hearing ears. While the physical differences between electric and acoustic stimulation are well understood, less is known about the perceptual differences. Despite these differences, subjects with both electric and contralateral acoustic hearing perform better than they do with only one of the two stimulation modes (e.g. Gifford et al., 2007; Ching et al., 2007; Kong et al., 2005; Vermeire et al., 2009; Arndt et al., 2011; Buechner et al., 2010; Tavora-Vieira et al., 2013; Rader et al., 2013).
When the qualities in which stimuli differ are unknown, a multidimensional scaling (MDS) paradigm can be a useful tool (Kruskal and Wish, 1977). In MDS studies of auditory perception, subjects are not required to make comparisons constrained by a given perceptual attribute, such as pitch or loudness. Instead, subjects are asked to rate the dissimilarity between two stimuli presented as sequential pairs. From the dissimilarity ratings, a map of the subject's perceptual space can be generated for any arbitrary number of dimensions. A number of experiments have used MDS to examine the perceptual space of electric stimulation in CI subjects (Tong et al., 1983; McKay and Carlyon, 1999; Collins and Throckmorton, 2000; McKay et al., 1996, 2005; Henshall and McKay, 2001; Macherey et al., 2011). Using MDS, Tong et al. (1983) demonstrated that while both a change in rate and a change in place of stimulation are described as changes in pitch, rate and place changes are perceptually orthogonal. McDermott and Sucher (2006) extended the results of Tong et al. by comparing rate and place changes in electric stimulation with frequency changes in acoustic stimulation. Their results suggested that one dimension corresponded to both a change in acoustic frequency and a change in place of stimulation, and another dimension corresponded to a difference between electric and acoustic stimulation. The experiment was conducted with subjects with low-frequency residual hearing in their implanted ear (i.e. using electric-acoustic stimulation (EAS) as described in von Ilberg et al. (1999) and Gantz and Turner (2004)). While the acoustic stimuli were presented at frequencies within the subjects' useable hearing range, some distortion in the acoustic percept is expected from their hearing loss.
Recently, a group of unilaterally deaf subjects with ipsilateral tinnitus and normal hearing or mild to moderate hearing loss in the non-implanted ear was studied (Vermeire et al., 2008, 2009; Van de Heyning et al., 2008; Kleine Punte et al., 2011). Unlike EAS subjects, these patients have functional hearing throughout the whole frequency range. In these subjects, acoustic and electric stimuli can therefore be directly compared without limitations in the useable frequency range and without concern about perceptual distortion of the acoustic stimulation.
In the present study, an MDS paradigm was used to examine the perceptual qualities of acoustic and electric stimulation in these subjects. Primarily, we hypothesized that a change in place of stimulation is perceptually equivalent (i.e. along the same perceptual dimension) to a change in acoustic frequency across the audible spectrum. Furthermore, it was hypothesized that the degree of perceptual separation between acoustic and electric stimuli would be representative of the integration of acoustic and electric hearing. If so, the degree of separation between electric and acoustic stimulation might be predictive of performance with combined acoustic and electric stimulation.
II. MATERIALS AND METHODS
A. Subjects
Fourteen adult volunteers participated in this study, all of whom were unilaterally profoundly deaf and participated in a parallel study investigating the effectiveness of a CI in treating unilateral tinnitus (Van de Heyning et al., 2008; Kleine Punte et al., 2011). Demographic information is presented in Table I. All subjects received MED-EL CIs: a COMBI 40+ with an M electrode array (5 subjects) or a PULSARCI100 with a FLEXSOFT electrode array (9 subjects), implanted via a cochleostomy approach. Both electrode arrays have twelve contacts, numbered E1 to E12 from apex to base. On the FLEXSOFT electrode array, E1 and E12 are 30.4 mm and 3.9 mm (with a 2.4 mm inter-electrode distance) from a marker ring indicating a full cochlear insertion. On the M electrode array, E1 and E12 are 30.4 mm and 9.4 mm (with a 1.9 mm inter-electrode distance) from the marker ring. In all subjects, a full insertion of the electrode array was obtained. All subjects were users of the CIS+ sound coding strategy, which delivers interleaved biphasic stimulation pulses at constant high rates, typically above 1000 pulses per second per channel. All subjects had functional hearing in the non-implanted ear; individual audiograms for the contralateral ears are plotted in Figure 1. All subjects used their CI all day, every day. Subjects S3, S4 and S11 used a contralateral hearing aid on a daily basis but not during testing.
Table I.
Subjects’ demographic information
| Subject | Age at surgery [yrs;mo] | Duration of deafness at surgery [yrs] | Etiology | Implant & electrode type | Implanted ear | PTA of non-implanted ear [dB HL] | Duration of implant use [mo] |
|---|---|---|---|---|---|---|---|
| S1 | 47;2 | 10 | Viral cochleitis | PULSARCI100 FLEXSOFT | Left | 17 | 2 |
| S2 | 59;1 | 5.5 | Meniere | PULSARCI100 FLEXSOFT | Left | 17 | 7 |
| S3 | 44;8 | 2.5 | Meniere | COMBI 40+ M | Right | 57 | 18 |
| S4 | 71;7 | 50.5 | Ototoxicity | COMBI 40+ M | Left | 62 | 17 |
| S5 | 38;2 | 2.5 | Labyrinthitis | COMBI 40+ M | Left | 17 | 21 |
| S6 | 35;10 | 8.5 | Temporal bone fracture | COMBI 40+ M | Right | 27 | 23 |
| S7 | 49;3 | 2.5 | Late post-traumatic | COMBI 40+ M | Left | 43 | 21 |
| S8 | 62;6 | 2 | Sudden hearing loss | PULSARCI100 FLEXSOFT | Right | 10 | 6 |
| S9 | 49;2 | 1.5 | Otosclerosis | PULSARCI100 FLEXSOFT | Left | 39 | 4 |
| S10 | 22;11 | 2.5 | Sudden hearing loss | PULSARCI100 FLEXSOFT | Right | 13 | 18 |
| S11 | 64;3 | 2 | Otosclerosis | PULSARCI100 FLEXSOFT | Right | 70 | 3 |
| S12 | 59;1 | 3 | Herpes zoster oticus | PULSARCI100 FLEXSOFT | Left | 12 | 6 |
| S13 | 55;5 | 6.5 | Post-traumatic | PULSARCI100 FLEXSOFT | Right | 12 | 6 |
| S14 | 40;8 | 8 | Infection | PULSARCI100 FLEXSOFT | Right | 13 | 3 |
| MEAN | 50;2 | 7.7 | | | | | 11.1 |
Figure 1.
Individual audiograms showing hearing thresholds in the non-implanted ear.
B. Stimulation hardware and software
Electrodes were stimulated using the Research Interface Box (RIB) (Reference Note 1), which transforms scripted instructions into a data stream sent to the implant via a Diagnostic Interface Box II coil (Reference Note 2). Communication with the RIB, as well as generation of the acoustic stimuli, was handled by custom software on a Microsoft Windows-compatible computer. Acoustic stimuli were delivered via a sound card (Sigmatel STAC 9751 C-major Audio) and presented over HDA 280 headphones (Sennheiser) connected to a Presonus HP4 headphone amplifier.
C. Stimuli
MDS was performed with five electric and five acoustic stimuli. The electric stimuli were constant-amplitude pulse trains presented at a rate of 1200 pulses per second (pps) on one of the electrode contacts E2, E4, E6, E8 or E10. The stimulation rate was chosen to fall within the range of channel-specific stimulation rates of the subjects' clinical fittings. The duration of each phase of the cathodic-first biphasic pulses was 26.7 μs. The amplitudes of all electric and acoustic stimuli were set according to the results of the loudness balancing task (described below). All electric stimuli were delivered in monopolar mode with the reference electrode under the musculus temporalis, as is standard for the COMBI 40+ and PULSARCI100 implants. The acoustic stimuli were pure tones at logarithmically spaced frequencies of 150 (A2), 336 (A4), 753 (A6), 1690 (A8), and 3790 Hz (A10). Although no attempt was made to pitch match the acoustic and electric stimuli, the acoustic stimuli were named in parallel with the electric stimuli, such that the lowest frequency was designated A2 and the highest frequency A10. The acoustic stimuli had 30 ms onset and offset ramps. Both the acoustic and electric stimuli were 500 ms in duration.
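To make the stimulus parameters concrete, the sketch below shows one way the logarithmically spaced tone frequencies and the ramped 500 ms tones could be generated. It is an illustration only, not the custom software used in the study; the 44.1 kHz sample rate, linear ramp shape, and fixed amplitude are assumptions, since the actual levels were set by the loudness balancing procedure described below.

```python
import numpy as np

FS = 44100  # assumed sample rate (Hz); the study's actual hardware settings are not reported

def log_spaced_frequencies(f_lo=150.0, f_hi=3790.0, n=5):
    """Five logarithmically spaced frequencies spanning 150-3790 Hz,
    approximately reproducing the A2-A10 tones used in the study."""
    return np.geomspace(f_lo, f_hi, n)

def ramped_tone(freq, dur=0.5, ramp=0.03, amp=0.1, fs=FS):
    """500 ms pure tone with 30 ms onset/offset ramps.
    `amp` is a placeholder; in the study, levels were set by loudness balancing."""
    t = np.arange(int(dur * fs)) / fs
    tone = amp * np.sin(2 * np.pi * freq * t)
    n_ramp = int(ramp * fs)
    envelope = np.ones_like(tone)
    envelope[:n_ramp] = np.linspace(0.0, 1.0, n_ramp)   # linear ramps assumed;
    envelope[-n_ramp:] = np.linspace(1.0, 0.0, n_ramp)  # the ramp shape is not specified
    return tone * envelope

# -> approximately [150, 336, 754, 1690, 3790] Hz (the paper lists 753 Hz; the
#    difference is rounding)
print(np.round(log_spaced_frequencies()))
```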
D. Procedures
1. Loudness balancing
The loudness of all the acoustic and electric stimuli was balanced. First, based on the audiogram, the acoustic stimulus assumed to require the highest amplification to achieve comfortable loudness was chosen as the reference stimulus. Using an ascending-descending technique, the level of the reference stimulus was adjusted until it was perceived as comfortably loud. This level was used as the reference loudness level. Subjects then balanced all other acoustic stimuli to the loudness of this reference stimulus using a method of adjustment (MOA). Acoustic stimuli that could not be adjusted to match the loudness of the reference stimulus were excluded from the experiment, along with the corresponding electric stimuli (i.e. subject S4: E8, E10, A8 and A10; subject S11: E10 and A10).
Next, using the MED-EL CI.Studio+ fitting software, the electric stimulation level on electrode E6 was balanced to the comfortable loudness of the 753 Hz acoustic stimulus (A6) using the same MOA procedure as for the acoustic balancing. Subjects were instructed to focus on differences in loudness only and to ignore possible pitch differences. All other electrodes were then loudness balanced to the reference electrode E6, again using MOA.
2. Multidimensional scaling
A general description of MDS can be found in Kruskal and Wish (1977). In brief, MDS takes a set of perceptual dissimilarities and transforms them into a set of vectors in an n-dimensional space whose Euclidean distances match the perceptual dissimilarities as closely as possible. These vectors can be used to plot a map on which stimuli perceived as very similar lie close together and stimuli perceived as very different lie far apart.
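As a minimal illustration of this transformation, the sketch below embeds a hypothetical 10 x 10 dissimilarity matrix (mean 0-100 ratings for the ten stimuli) into two dimensions. The study itself used the ALSCAL and INDSCAL algorithms; scikit-learn's nonmetric MDS is used here only as a stand-in, and the random matrix is purely illustrative.

```python
import numpy as np
from sklearn.manifold import MDS

# Hypothetical 10x10 dissimilarity matrix: imagine mean 0-100 ratings for all
# pairs of the ten stimuli (A2-A10, E2-E10), averaged over a subject's runs.
# Random symmetric values are used here purely for illustration.
rng = np.random.default_rng(0)
upper = np.triu(rng.uniform(0, 100, size=(10, 10)), k=1)
dissim = upper + upper.T                      # symmetric, zero diagonal

# Nonmetric MDS on precomputed dissimilarities, as a stand-in for ALSCAL:
# it finds 2-D coordinates whose inter-point distances preserve the rank
# order of the rated dissimilarities as closely as possible.
mds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
          random_state=0)
coords = mds.fit_transform(dissim)            # one (x, y) point per stimulus

labels = [f"A{i}" for i in (2, 4, 6, 8, 10)] + [f"E{i}" for i in (2, 4, 6, 8, 10)]
for label, (x, y) in zip(labels, coords):
    print(f"{label}: ({x: .3f}, {y: .3f})")
print("stress:", round(mds.stress_, 3))       # lower stress = better fit
```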
Prior to the experiment, subjects were instructed on the task and presented once with the complete set of ten stimuli in order to become familiar with the range of perceptual differences between the stimuli. In a trial, two randomly selected stimuli were presented to the subject. The subject's task was to indicate the amount of dissimilarity between the two stimuli by positioning a marker on a line on a computer screen, which was approximately 10 cm long. Each response was converted into a number between 0 (equal) and 100 (most dissimilar). In a single run, all possible pairs of stimuli (100 for most subjects) were presented to the subject in random order. A total of six runs were collected for each subject.
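The sketch below shows one pairing scheme consistent with the 100 trials per run reported above (all ordered pairs of the ten stimuli, including identical pairs, which is why a rating of 0 "equal" is possible), together with the conversion of a marker position into a 0-100 rating. The exact pairing and conversion used in the custom software are not described in detail, so this is an assumed reconstruction.

```python
from itertools import product
import random

stimuli = ["A2", "A4", "A6", "A8", "A10", "E2", "E4", "E6", "E8", "E10"]

# All ordered pairs, including identical pairs: 10 x 10 = 100 trials per run.
# (This pairing scheme is an assumption; the paper states only that all
# possible pairs were presented in random order.)
trials = list(product(stimuli, repeat=2))
random.shuffle(trials)
print(len(trials))     # -> 100

def to_rating(marker_pos_cm, line_length_cm=10.0):
    """Convert a marker position on the ~10 cm response line into a 0-100 rating."""
    return round(100.0 * marker_pos_cm / line_length_cm)

print(to_rating(3.7))  # -> 37
```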
3. Speech recognition
Speech recognition was tested in noise using the Leuven Intelligibility Sentence Test (LIST; van Wieringen and Wouters, 2008). Tests were performed in free field in a sound-treated chamber. Subjects were seated 1 meter away from the loudspeakers, which were separated by 90 degrees. The spatial configurations were: speech and noise both presented from the front (S0N0), speech from the front and noise from the CI side (S0NCI), and noise from the front and speech from the CI side (SCIN0). Speech was presented adaptively while the noise was fixed at 65 dB SPL. For further details on the speech-in-noise testing methodology, refer to Vermeire and Van de Heyning (2009).
4. Data analyses
MDS data analysis was based on the mean scaling of the six independent MDS runs. First, a weighted MDS (WMDS), also known as Individual Differences Scaling (INDSCAL; Takane et al., 1977), analysis was performed. INDSCAL produces an n-dimensional (we chose two) plot that best represents the overall stimulus space, taking into account the data from all subjects. Furthermore, INDSCAL provides, for each subject, the relative weightings of each perceptual dimension. The relative weighting for each subject on the dimension roughly corresponding to the perceptual separation between acoustic and electric stimulation was used as an estimate of acoustic-electric integration. Then, to examine the individual perceptual spaces, the responses from each subject were analysed using the alternating least squares scaling (ALSCAL) algorithm (Young and Lewyckyj, 1979). A Pearson product-moment correlation was used to look for relationships between these estimates of acoustic-electric integration and bimodal performance.
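The final correlation step is sketched below with made-up numbers: the dimension 2 weights from the INDSCAL solution are correlated against a bimodal speech-in-noise benefit score. The variable names and values are hypothetical and do not reproduce the study's data; scipy's pearsonr implements the product-moment correlation.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical illustration only: dimension 2 weights from the INDSCAL
# solution vs. a bimodal speech-in-noise benefit score (e.g. SRT improvement
# in dB). The values below are invented for demonstration.
dim2_weight = np.array([0.12, 0.35, 0.48, 0.20, 0.55, 0.31, 0.44, 0.27])
bimodal_benefit_db = np.array([1.8, 0.4, 1.1, 2.3, 0.2, 1.5, 0.9, 1.7])

r, p = pearsonr(dim2_weight, bimodal_benefit_db)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```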
5. Ethics
The study protocol was approved by the Ethical Committee of the Antwerp University Hospital (approval number OG085) and was in accordance with the Declaration of Helsinki. All subjects signed an informed consent form prior to participating in the study. All subjects participated on a voluntary basis.
III. RESULTS
To obtain an overall representation of the perceptual spaces across subjects, an INDSCAL analysis was performed using the mean distances between points for each subject. Because the hearing loss of subjects S4 and S11 prevented testing with a complete set of acoustic stimuli, they were excluded from the INDSCAL analysis. The results, plotted in Figure 2, show what appears to be a two-dimensional pattern, in which one dimension is related to the frequency (or place of stimulation) of the stimuli while the other dimension separates the acoustic from the electric stimuli. An r2 value of 0.65 was found, suggesting that although the INDSCAL plot is representative of the group, there was a fair amount of variability across subjects.
Figure 2.
The best fitting 2-dimensional perceptual space representing the electric (E2-E10) and acoustic (A2-A10) stimuli for all subjects except S4 and S11 as generated by an INDSCAL analysis. The distances between points represent the relative perceptual differences between each stimulus. The goodness of fit for this analysis is represented by the r2 value (0.648). Dimension 1 is consistent with a change in acoustic frequency and electrode place while dimension 2 is consistent with the timbre differences between acoustic and electric stimulation.
Individual subject data were analysed using ALSCAL. Figure 3 shows the individual two-dimensional MDS stimulus spaces for all subjects. The MDS solutions were rotated to align the acoustic stimuli parallel to dimension 1, i.e. the standard deviation of their dimension 2 values was minimized. Some subjects (such as S1 and S8) showed curved MDS maps (a horseshoe effect) in the two-dimensional representation, a pattern typically found in two-dimensional plots of stimuli that vary along a single dimension (Kendall, 1971). Other subjects (such as S9 and S13) showed maps in which the stimuli were scaled along two dimensions. As in the INDSCAL analysis, stimuli were ordered along one dimension according to acoustic frequency or place of electric stimulation, while a second dimension represented the separation between acoustic and electric stimuli. Subjects S4, S7, and S11 provided results that deviate from these two patterns. The deviating results may be caused by hearing loss in the non-implanted ear. Subjects S4 and S11 have high-frequency hearing losses and wear a hearing aid in daily life. Although subject S7 does not wear a hearing aid, S7 has a 50 dB hearing loss at 4000 Hz, which might have caused the 3790 Hz acoustic stimulus (A10) to sound much more different from all other stimuli and distorted the ALSCAL plot of S7's data.
Figure 3.
The best fitting 2-dimensional perceptual spaces representing the electric (E2-E10) and acoustic (A2-A10) stimuli for each subject as generated by an ALSCAL analysis. In each panel, the distances between points represent the relative perceptual differences between each stimulus for a given subject. The goodness of fit for each subject is presented by an r2 value in each panel. To help visualization, dashed lines connect the corresponding electric and acoustic stimuli (i.e. E2 and A2). Dimension 1 is consistent with a change in acoustic frequency and electrode place while dimension 2 is consistent with the timbre differences between acoustic and electric stimulation.
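A minimal sketch of the rotation step described above is given below, assuming the two-dimensional ALSCAL coordinates are available as a NumPy array. The brute-force search over rotation angles is our own illustration of the stated criterion (minimizing the spread of the acoustic stimuli along dimension 2), not the procedure of the analysis software actually used.

```python
import numpy as np

def rotate_to_align(coords, acoustic_idx, n_angles=3600):
    """Rotate a 2-D MDS solution so that the acoustic stimuli lie as parallel
    as possible to dimension 1, i.e. the standard deviation of their
    dimension 2 values is minimized. MDS solutions are rotation-invariant,
    so this changes only the orientation of the map, not its fit."""
    best_spread, best_coords = np.inf, coords
    for angle in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        rot = np.array([[np.cos(angle), -np.sin(angle)],
                        [np.sin(angle),  np.cos(angle)]])
        rotated = coords @ rot.T
        spread = rotated[acoustic_idx, 1].std()
        if spread < best_spread:
            best_spread, best_coords = spread, rotated
    return best_coords

# Example with arbitrary coordinates: rows 0-4 stand for A2-A10, rows 5-9 for E2-E10.
rng = np.random.default_rng(1)
coords = rng.normal(size=(10, 2))
aligned = rotate_to_align(coords, acoustic_idx=np.arange(5))
# Spread of the acoustic stimuli along dimension 2, minimized over rotations.
print(aligned[:5, 1].std())
```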
In addition to providing a representation of the perceptual space for the population, an INDSCAL analysis provides the relative weights of the dimensions for each subject. The relative weights from the INDSCAL analysis described above (in which dimension 1 is consistent with frequency and place of stimulation and dimension 2 is consistent with the timbral difference between electric and acoustic stimuli) are presented in Figure 4. The relative weighting of the two dimensions varied greatly across subjects. Some subjects (such as S1 and S5) discriminated primarily along the first dimension, while the majority of subjects discriminated along both dimensions. Subject S7's weights on both dimensions are presumably low because S7's perceptual space is dominated by A10 being perceived as very different from all other stimuli.
Figure 4.
Derived subject weights from the INDSCAL analysis (Figure 2). Each point represents the relative weighting of each dimension by an individual subject in the MDS procedure. Dimension 1 seems to correspond to the dimension related to a change in acoustic frequency or electric place of stimulation. Dimension 2 seems to correspond with the perceptual separation between the electric and acoustic stimuli.
We hypothesized that the relative weight of dimension 2 (representing the timbral difference between electric and acoustic stimulation) is indicative of the degree of perceptual integration of electric and acoustic stimulation. If so, this weight might be reflected in speech performance with combined acoustic and electric hearing. We used a Pearson product-moment correlation to determine whether there was a relationship between the dimension 2 weights calculated in the INDSCAL analysis and the benefit in speech performance from listening with both the acoustic-hearing and implanted ears (Figure 5). If dimension 2 is representative of integration, then a strong relationship between a listener's dimension 2 weight and their benefit from bilateral input might be expected. However, no significant correlations were found between the dimension 2 weights and the various measures of binaural speech recognition in noise. If the relative weight of dimension 2 is indicative of the integration of electric and acoustic stimulation, then the dimension 2 weighting for each subject might also depend on the duration of implant use. However, no significant relationship between dimension 2 weighting and duration of implant use was observed (Figure 6).
Figure 5.
Dimension 2 weight (from Figure 4) vs. speech recognition in noise measured with the LIST test in different spatial configurations, using a fixed noise level of 65 dB SPL and an adaptive speech level. The y-axis of each panel represents the adaptive speech reception threshold (SRT) measured bilaterally in free field for one of the three spatial configurations. In each panel, the best fitting line is presented with the corresponding r2 and p values.
Figure 6.
Duration of implant use (in months) vs. dimension 2 weight.
IV. DISCUSSION
The present study examined the perceptual differences between electric and acoustic stimulation in cochlear implant users with functional hearing in the non-implanted ear. With the possible exceptions of S4 and S11, for whom only a subset of stimuli was tested, and S7, whose MDS plot is distorted by an outlying point (A10), the plots from each individual subject's MDS analysis suggest one of two patterns. Some subjects (such as S1 and S8) reveal a horseshoe pattern indicative of a single perceptual dimension (Kendall, 1971; McKay et al., 1996). Other subjects (such as S9 and S13) reveal a two-dimensional pattern. However, in both sets of data, a change in acoustic frequency is represented along the same dimension as a change in place of stimulation across the electrode array. It has previously been shown that a change in place corresponds to a change in pitch (e.g. Busby and Clark, 2000; Baumann and Nobbe, 2006). However, it has also been shown that although changes in either rate or place cues are described as changes in pitch, place and rate changes are perceptually orthogonal (e.g. Tong et al., 1983). Because a change in pitch can be represented by different perceptual dimensions, it was previously unknown whether a change in acoustic frequency is perceived along the same dimension (i.e. in the same way) as a change in place pitch. These results are important because all multichannel sound coding strategies use place of stimulation to encode tonotopic information.
While most subjects demonstrated a dimension related to place and acoustic frequency, roughly half of them also demonstrated a separation between the acoustic and electric stimuli represented by a second dimension (dimension 2). We quantified the separation along dimension 2 for each subject using the subject weights calculated by the INDSCAL analysis. The perceptual qualities related to this dimension were less clear. We hypothesized that a change along this dimension represented a difference in sound quality between the two stimulation modes. It was plausible that subjects who determined the electric and acoustic stimuli to be similar (i.e. having a small dimension 2 weight) were the subjects who had more successfully integrated electric and acoustic stimulation. Therefore, subjects with smaller dimension 2 weights might be more likely to be either better performers with their implant or benefit more from binaural integration. However, as demonstrated in figure 5, no relationship between dimension 2 weights and performance was found.
The lack of a relationship between dimension 2 weights and speech performance could have several explanations. One possibility is that there was not enough statistical power to detect a relationship. It is also possible that, while the dimension 2 weight describes electric-acoustic integration, integration is only a necessary precursor to binaural benefit. In other words, it may be that perceptual integration improves with experience (and the dimension 2 weight is reduced) and that binaural benefit only begins to develop after a certain integration criterion is reached (i.e. a dimension 2 weight below a critical point). This hypothesis is consistent with Vermeire and Van de Heyning (2009), who showed that, in the same population, patients only start to show binaural benefit after 12 months of CI use. The degree of integration might also be affected by the duration of deafness, as demonstrated by Yang and Zeng (2013). Yoon et al. (2011) demonstrated that, in bilateral CI users, the more similar performance is with each implant alone, the greater the bilateral benefit. Similarly, bilateral benefit in patients with an acoustically and an electrically stimulated ear may depend more on similar performance with each ear alone than on perceptual integration between the two ears.
Another possibility is that dimension 2 does not adequately represent integration. As suggested by McKay and Carlyon (1999), it is possible that some subjects based their decisions on the most obvious differences and ignored other differences. If so, some patients might have a very small dimension 2 weight despite hearing differences between electric and acoustic stimuli and having limited integration. Conversely, patients for whom electric and acoustic stimuli sound identical may still separate them perceptually based on lateralization. When testing EAS patients, however, McDermott and Sucher (2006) showed a perceptual dimension separating acoustic and electric stimulation presented to the same ear, suggesting that there are perceptual differences other than lateralization.
Of particular interest is the nature of the perceptual differences between electric and acoustic stimulation represented by dimension 2. It is still unknown exactly how the sound quality of electric and acoustic stimulation differs. Multiple factors may affect the sound quality of electric stimulation. It is very likely that neural survival differs both between subjects and across cochlear regions, and that the corresponding sound quality may change in terms of “roughness”, “brightness”, or “buzziness” (Sucher and McDermott, 2007; Collins et al., 1997; Collins and Throckmorton, 2000; McKay et al., 1996). The neural survival local to the stimulating electrode is likely to affect the sound quality of a pulse train. Similarly, Pauka (1989) hypothesized that a more “buzzy” percept is produced by wider current spread, and that narrower current spread produces purer, more pitch-like percepts. Landsberger et al. (2012) found that when the spread of excitation is reduced, sounds are described as less “dirty” and “noisy” than sounds produced by a broader spread of excitation from the same electrode. If the perceptual difference represented by dimension 2 is based on the “buzzy” or “noisy” percept of monopolar stimulation, then a repetition of this experiment using monopolar and current-focused stimulation should reveal that current-focused pulse trains are perceptually more similar to the acoustic stimulation than the monopolar stimuli. Lazard et al. (2012) asked patients to adjust the width of band-pass filtered noise presented to an ear with residual low-frequency hearing to best match a fixed-rate pulse train on the most apical electrode of a Cochlear Nucleus device. The bandwidth representing the best acoustic match varied greatly across subjects. It would be interesting to know whether the best-matched bandwidth is correlated with the spread of excitation produced by stimulation of that electrode.
The findings in this manuscript are consistent with those of McDermott and Sucher (2006), who concluded from their experiments that a change of place of electric stimulation lies in the same perceptual dimension as a change in acoustic frequency. However, their results were limited by the subjects' residual acoustic hearing. Patients' audiograms varied such that the upper limit of useable hearing ranged from 200 to 1000 Hz, severely limiting the frequency range that could be tested and possibly explaining the variability across subjects in their results. Additionally, because all of the patients in the McDermott and Sucher study were severely hearing impaired, it is likely that the sound quality of the acoustic stimulation delivered to those patients differed from the sound quality of the acoustic stimuli presented to the normal-hearing to moderately impaired ears of the patients in the present study. Perhaps the severe hearing impairment of the subjects in the McDermott and Sucher (2006) experiment distorted the acoustic hearing and magnified (or possibly reduced) the perceptual differences between the acoustic and electric stimulation. Such an additional distortion of the acoustic hearing might explain why all of the subjects in McDermott and Sucher (2006) showed a perceptual dimension related to stimulation mode, whereas only approximately half of the subjects in the present study did.
In summary, the present manuscript has shown that a change in place of electric stimulation yields a perceptual change along the same dimension as a change in acoustic frequency. This result is important in that it confirms that coding frequency changes as changes in place of stimulation is appropriate in a speech processing strategy. Furthermore, a number of subjects reported no perceptual separation between the acoustic and electric stimulation, suggesting that monopolar electric stimulation can approximate the sound quality of acoustic tones at comfortable loudness. A better understanding of the perceptual separation between acoustic and electric stimuli (represented by dimension 2) would provide further insight into both the sound quality of a cochlear implant and the perceptual integration of acoustic and electric hearing. Further studies manipulating the quality of the electric and/or acoustic stimulation would provide additional insight into the nature of the perceptual differences between the stimulation modes.
HIGHLIGHTS
- Multidimensional scaling was used to explore acoustic and electric hearing.
- Changes in place of stimulation corresponded to changes in acoustic frequency.
- Variability was observed in the perceptual differences between electric and acoustic stimulation.
- The observed acoustic-electric differences were not related to speech performance.
ACKNOWLEDGEMENTS
This work was supported by grants from the Research Foundation Flanders (FWO; A 7/2 EP B5), the NIH/NIDCD (R01 DC012152), MED-EL Hearing Solutions, and a TOPBOF grant (5503) from the University of Antwerp. The authors would like to express their thanks and appreciation to the subjects for their time and effort. We gratefully acknowledge the contributions of Andrea Nobbe and Ernst Aschbacher.
Footnotes
Reference Note 1: RIB Research Interface Box System, Manual V1.0, University of Innsbruck, 2001.
Reference Note 2: MED-EL, DIB II, Diagnostic Interface Box II, User Manual, Innsbruck, Austria.
REFERENCES
- Arndt S, Aschendorff A, Laszig R, Beck R, Schild C, Kroeger S, Ihorst G, Wesarg T. Comparison of pseudobinaural hearing to real binaural hearing rehabilitation after cochlear implantation in patients with unilateral deafness and tinnitus. Otol Neurotol. 2011;32:39–47. doi: 10.1097/MAO.0b013e3181fcf271.
- Baumann U, Nobbe A. The cochlear implant electrode-pitch function. Hear Res. 2006;213:34–42. doi: 10.1016/j.heares.2005.12.010.
- Buechner A, Brendel M, Lesinski-Schiedat A, Wenzel G, Frohne-Buechner C, Jaeger B, Lenarz T. Cochlear implantation in unilateral deaf subjects associated with ipsilateral tinnitus. Otol Neurotol. 2010;31:1381–5. doi: 10.1097/MAO.0b013e3181e3d353.
- Busby PA, Clark GM. Pitch estimation by early-deafened subjects using a multiple-electrode cochlear implant. J Acoust Soc Am. 2000;107:547–58. doi: 10.1121/1.428353.
- Ching TY, van Wanrooy E, Dillon H. Binaural-bimodal fitting or bilateral implantation for managing severe to profound deafness: a review. Trends Amplif. 2007;11:161–92. doi: 10.1177/1084713807304357.
- Collins LM, Zwolan TA, Wakefield GH. Comparison of electrode discrimination, pitch ranking, and pitch scaling data in postlingually deafened adult cochlear implant subjects. J Acoust Soc Am. 1997;101:440–55. doi: 10.1121/1.417989.
- Collins LM, Throckmorton CS. Investigating perceptual features of electrode stimulation via a multidimensional scaling paradigm. J Acoust Soc Am. 2000;108(5 Pt 1):2353–65. doi: 10.1121/1.1314320.
- Gifford RH, Dorman MF, McKarns SA, Spahr AJ. Combined electric and contralateral acoustic hearing: word and sentence recognition with bimodal hearing. J Speech Lang Hear Res. 2007;50:835–43. doi: 10.1044/1092-4388(2007/058).
- Henshall KR, McKay CM. Optimizing electrode and filter selection in cochlear implant speech processor maps. J Am Acad Audiol. 2001;12:478–89.
- Kendall D. Seriation from abundance matrices. In: Hodson F, Kendall D, Tautu P, editors. Mathematics in the Archaeological and Historical Sciences. Edinburgh University Press; Edinburgh: 1971. pp. 215–252.
- Kong YY, Stickney GS, Zeng FG. Speech and melody recognition in binaurally combined acoustic and electric hearing. J Acoust Soc Am. 2005;117:1351–61. doi: 10.1121/1.1857526.
- Kruskal J, Wish M. Multidimensional Scaling. Sage Publications; Beverly Hills, CA: 1977.
- Landsberger DM, Padilla M, Srinivasan AG. Reducing current spread using current focusing in cochlear implant users. Hear Res. 2012;284:16–24. doi: 10.1016/j.heares.2011.12.009.
- Lazard DS, Marozeau J, McDermott HJ. The sound sensation of apical electric stimulation in cochlear implant recipients with contralateral residual hearing. PLoS ONE. 2012;7:e38687. doi: 10.1371/journal.pone.0038687.
- Macherey O, Deeks JM, Carlyon RP. Extending the limits of place and temporal pitch perception in cochlear implant users. J Assoc Res Otolaryngol. 2011;12:233–51. doi: 10.1007/s10162-010-0248-x.
- McDermott HJ, Sucher CM. Perceptual dissimilarities among acoustic stimuli and ipsilateral electric stimuli. Hear Res. 2006;218:81–8. doi: 10.1016/j.heares.2006.05.002.
- McKay CM, Carlyon RP. Dual temporal pitch percepts from acoustic and electric amplitude-modulated pulse trains. J Acoust Soc Am. 1999;105:347–57. doi: 10.1121/1.424553.
- McKay CM, McDermott HJ, Clark GM. The perceptual dimensions of single-electrode and nonsimultaneous dual-electrode stimuli in cochlear implantees. J Acoust Soc Am. 1996;99:1079–90. doi: 10.1121/1.414594.
- McKay CM, Henshall KR, Hull AE. The effect of rate of stimulation on perception of spectral shape by cochlear implantees. J Acoust Soc Am. 2005;118:386–92. doi: 10.1121/1.1937349.
- Pauka CK. Place-pitch and vowel-pitch comparisons in cochlear implant patients using the Melbourne-Nucleus cochlear implant. J Laryngol Otol Suppl. 1989;19:1–31.
- Punte AK, Vermeire K, Hofkens A, De Bodt M, De Ridder D, Van de Heyning P. Cochlear implantation as a durable tinnitus treatment in single-sided deafness. Cochlear Implants Int. 2011;12(Suppl 1):S26–9. doi: 10.1179/146701011X13001035752336.
- Rader T, Fastl H, Baumann U. Combining electric acoustic stimulation and contralateral acoustic hearing: speech perception compared to bilateral cochlear implants depending on noise characteristics. Ear Hear. 2013;34:324–332. doi: 10.1097/AUD.0b013e318272f189.
- Sucher CM, McDermott HJ. Pitch ranking of complex tones by normally hearing subjects and cochlear implant users. Hear Res. 2007;230:80–7. doi: 10.1016/j.heares.2007.05.002.
- Takane Y, Young F, de Leeuw J. Nonmetric individual differences multidimensional scaling: an alternating least squares method with optimal scaling features. Psychometrika. 1977;42:7–67.
- Tavora-Vieira D, Marino R, Krishnaswamy J, Kuthbutheen J, Rajan GP. Cochlear implantation for unilateral deafness with and without tinnitus: a case series. Laryngoscope. 2013;123:1251–5. doi: 10.1002/lary.23764.
- Tong YC, Dowell RC, Blamey PJ, Clark GM. Two-component hearing sensations produced by two-electrode stimulation in the cochlea of a deaf patient. Science. 1983;219:993–4. doi: 10.1126/science.6823564.
- Van de Heyning P, Vermeire K, Diebl M, Nopp P, Anderson I, De Ridder D. Incapacitating unilateral tinnitus in single-sided deafness treated by cochlear implantation. Ann Otol Rhinol Laryngol. 2008;117:645–52. doi: 10.1177/000348940811700903.
- van Wieringen A, Wouters J. LIST and LINT: sentences and numbers for quantifying speech understanding in severely impaired listeners for Flanders and the Netherlands. Int J Audiol. 2008;47:348–55. doi: 10.1080/14992020801895144.
- Vermeire K, Nobbe A, Schleich P, Nopp P, Voormolen MH, Van de Heyning PH. Neural tonotopy in cochlear implants: an evaluation in unilateral cochlear implant patients with unilateral deafness and tinnitus. Hear Res. 2008;245:98–106. doi: 10.1016/j.heares.2008.09.003.
- Vermeire K, Van de Heyning P. Binaural hearing after cochlear implantation in subjects with unilateral sensorineural deafness and tinnitus. Audiol Neurootol. 2009;14:163–71. doi: 10.1159/000171478.
- Yang HI, Zeng FG. Reduced acoustic and electric integration in concurrent-vowel recognition. Sci Rep. 2013;3:1419. doi: 10.1038/srep01419.
- Yoon YS, Li Y, Kang HY, Fu QJ. The relationship between binaural benefit and difference in unilateral speech recognition performance for bilateral cochlear implant users. Int J Audiol. 2011;50:554–65. doi: 10.3109/14992027.2011.580785.
- Young F, Lewyckyj R. ALSCAL-4 User's Guide. University of North Carolina; Chapel Hill: 1979.