Author manuscript; available in PMC: 2017 Dec 1.
Published in final edited form as: Otol Neurotol. 2016 Dec;37(10):1522–1528. doi: 10.1097/MAO.0000000000001211

The Enigma of Poor Performance by Adults with Cochlear Implants

Aaron C Moberly 1, Chelsea Bates 1, Michael S Harris 1, David B Pisoni 2
PMCID: PMC5102802  NIHMSID: NIHMS809373  PMID: 27631833

Abstract

Objective

Considerable unexplained variability and large individual differences exist in speech recognition outcomes for postlingually deaf adults who use cochlear implants (CIs), and a sizeable fraction of CI users can be considered “poor performers.” This paper summarizes our current knowledge of poor CI performance, and provides suggestions to clinicians managing these patients.

Method

Studies are reviewed pertaining to speech recognition variability in adults with hearing loss. Findings are augmented by recent studies in our laboratories examining outcomes in postlingually deaf adults with CIs.

Results

In addition to conventional clinical predictors of CI performance (e.g., amount of residual hearing, duration of deafness), factors pertaining to both “bottom-up” auditory sensitivity to the spectro-temporal details of speech, and “top-down” linguistic knowledge and neurocognitive functions contribute to CI outcomes.

Conclusions

The broad array of factors that contribute to speech recognition performance in adult CI users suggests the potential both for novel diagnostic assessment batteries to explain poor performance and for new rehabilitation strategies for patients who exhibit poor outcomes. Moreover, this broad array of factors determining outcome performance suggests the need to treat individual CI patients using a personalized rehabilitation approach.

Keywords: Adults, Cochlear implants, Sensorineural hearing loss, Speech perception

Introduction

It is well known in research and clinical settings that unexplained variability and large individual differences exist in speech recognition outcomes for adults with cochlear implants (CIs) (1-3). This is true even for adults with postlingual deafness whom we would expect to do well, given their previously normal language development. Studies typically focus on group mean performance in quiet or in noise (3), or occasionally consider factors that enable “star” performers to do exceptionally well (4). Unfortunately, there is very little in the literature regarding those patients on the other end of the spectrum: the poor performers. Depending on the criteria used to define poor performance, 10 to 50% of adult CI users fall into this category (5). For example, 35-50% of CI users cannot use the telephone (6). Lenarz and colleagues (5) identified 13% of their adult CI users as poor performers, who recognized fewer than 10% of words correctly in sentences in quiet. A fundamental gap in our knowledge currently exists regarding the underlying sources of poor performance, and this lack of knowledge directly leads to two major clinical problems: first, we cannot predict when a patient will do poorly with a CI, and, second, we cannot intervene appropriately for poorly performing patients.

Most often in clinical CI centers, a patient who is performing more poorly than generally expected (based on clinical intuition, since we cannot reliably predict outcomes) will undergo a limited diagnostic battery. This battery typically consists of imaging, usually computed tomography (CT), to ensure the electrode array is in good position; remapping of the device by the audiologist to ensure appropriate stimulation parameters; and a hardware integrity check to confirm that the device itself is functioning normally. This limited battery often does not reveal any problems that can be addressed surgically or clinically. As a result, clinicians are then restricted to reassurance and recommending that the patient “keep working at it.” In some settings, a struggling CI patient may be referred to a speech-language pathologist who focuses on aural rehabilitation, although this strategy is limited by a lack of evidence-based methodologies or support from insurance providers. Patients may elect to use one of several “one-size-fits-all” auditory training programs on a home computer, and often seek advice from other CI users or support groups, which have provided anecdotal support for strategies such as use of audiobooks and spending time in challenging listening environments (7). Ultimately, poorly performing patients are frustrated by difficulty understanding speech through their devices, and by their inability to meet outcome expectations. Based on these experiences, some patients even stop using their CIs.

As clinicians and researchers, it is imperative that we develop a better understanding of the sources of poor outcomes in this clinical population. Doing so should help us predict when a patient being evaluated for implantation is at risk for a poor outcome, identify the underlying problem for an individual patient with poor performance with a CI, and develop a personalized aural rehabilitation program for that patient, targeted at specific weaknesses.

The purpose of this article is to discuss the “enigma” of poor CI performance in postlingually deaf adults, and to suggest the roles that clinicians can play. Although not meant as an exhaustive review, we discuss the current state of knowledge regarding variability in performance and poor outcomes in CI users. We briefly discuss conventional clinical measures relating to speech recognition outcomes, and then relevant findings from studies of CI users and patients with lesser degrees of hearing loss, including recent work from our laboratories. We also discuss future work that needs to be undertaken, and, finally, we offer some practical recommendations for clinicians who are treating CI patients with poor speech recognition performance after implantation.

Conventional Clinical Predictors of Poor Performance

Only a few clinical predictors of speech recognition outcomes have been identified in postlingual adults with CIs. A greater amount of residual hearing prior to implantation and previous hearing aid use predict better speech recognition outcomes (8-12). Partial insertion of the electrode array, a history of meningitis, and congenital inner ear malformations negatively impact performance (13-14). A longer duration of moderate-to-profound hearing loss has an important detrimental effect on outcomes (15-18). Performance has been found to be poorer for older adult CI users (usually defined as over age 65 years) than for younger adult CI users, although this finding is not universal (12,19-22). Unfortunately, these reported clinical predictors generally cannot be addressed therapeutically. Moreover, the underlying information processing and neural mechanisms by which these clinical factors affect recognition of speech are unclear.

Understanding the Mechanisms Underlying Poor Outcomes

Beyond these clinical predictors, a number of factors have been identified that may partially explain variability in speech recognition outcomes in postlingually deaf adult patients with CIs, and these factors likely contribute to poor performance for some individuals. While not entirely separate constructs, a useful way to conceptualize these factors is to group them into three broad domains: “auditory sensitivity,” “linguistic skills,” and “neurocognitive functions.” We will examine each of these domains independently, recognizing that abilities within each domain interact with skills within the other domains during the process of spoken language understanding. Moreover, there is good reason to suspect that different factors may contribute to poor performance for different patients.

“Bottom-up” Auditory Sensitivity

A relatively common assumption in otology is that the variability in speech recognition among adult CI users, and the poor performance by some patients, arises as a direct consequence of variability in the degraded quality of the speech signals listeners receive through their devices. The electrode array of the CI is limited in its ability to provide highly detailed spectral (frequency-specific) information about speech for several reasons. First, the electrode array cannot be inserted far enough into the cochlea to cover the entire apex, and attempting to do so is traumatic. As a result, the frequency range of the incoming auditory speech input must be allocated to electrodes that do not reach the apex, resulting in spectral mismatch between the acoustic input and the electrode locations inside the cochlea (23-24).

Second, the electrode array has a limited number of stimulating electrodes (usually around 20), and the effective number of independent channels of information presented through an implant is only around four to seven (25). This is a result of spread of excitation from adjacent electrodes, leading to overlapping regions of neural stimulation. Therefore, individuals with CIs hear speech that is both spectrally shifted and spectrally degraded (26-27). One method that has been used to examine spectral resolution is to obtain spectral ripple discrimination thresholds, which have been found to predict 25 to 30% of the variability in speech reception thresholds and word recognition in both babble and quiet conditions for adult CI users (28). Likewise, temporal resolution, assessed by amplitude modulation detection thresholds, has been found to explain variability in speech recognition (29).
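The channel-limited, envelope-based hearing described above is often simulated for normal-hearing listeners with a noise vocoder: the signal is split into a few frequency bands, each band's temporal envelope is extracted, and the envelopes modulate bands of noise. The sketch below is purely illustrative and is not drawn from any study cited here; the channel count, filter order, and band edges are arbitrary assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels=4, f_lo=100.0, f_hi=7000.0):
    """Crudely simulate CI-like spectral degradation with an n-channel noise vocoder."""
    # Logarithmically spaced band edges spanning the speech range (an assumption;
    # clinical processors use their own frequency allocation tables).
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_channels + 1)
    rng = np.random.default_rng(0)
    carrier = rng.standard_normal(len(signal))  # broadband noise carrier
    out = np.zeros_like(signal)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)         # analysis band of the input
        env = np.abs(hilbert(band))             # channel envelope
        noise_band = sosfiltfilt(sos, carrier)  # band-limited noise carrier
        out += env * noise_band                 # envelope-modulated noise channel
    return out

# Demo on a synthetic two-tone signal standing in for speech
fs = 16000
t = np.arange(fs) / fs
sig = np.sin(2 * np.pi * 500 * t) + 0.5 * np.sin(2 * np.pi * 2000 * t)
voc = noise_vocode(sig, fs, n_channels=4)
```

Reducing `n_channels` toward the four-to-seven effective channels noted above makes such stimuli progressively harder to understand, which is one way these bottom-up limits have been studied in normal-hearing listeners.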

Third, the proximity of the electrode array to the modiolus varies among patients, and it is likely that an electrode array that is more tightly coiled around the modiolus provides greater frequency resolution. Data from electrically evoked compound action potential (ECAP) measurements support the prediction that a shorter electrode-to-modiolus distance correlates with higher speech recognition scores (30). Additionally, there is some evidence that patients with perimodiolar arrays display better speech recognition than those with lateral wall-hugging electrodes (3).

Lastly, the health of the spiral ganglion cells that are stimulated explains some variability in speech recognition, based on intraoperative electrocochleography (ECoG) measurements that serve as a biomarker of peripheral auditory system integrity (31-32). In particular, the ECoG “total response” (an estimate of neural survival) has been found to predict 47% of the variability in consonant-nucleus-consonant (CNC) scores for adults with CIs (32). An area ripe for investigation is the potential influence of peripheral auditory system declines on higher-order, cortical functions: a longer duration of deafness before CI may lead to declines in dendritic, spiral ganglion, and auditory nerve function, but it may also result in detrimental auditory cortex plastic changes that do not automatically reverse following restoration of peripheral input through a CI (33).

“Top-down” Linguistic Skills

Although auditory sensitivity factors are historically the most commonly investigated sources of outcome variability for CI users, recent studies in adults with less severe hearing loss have provided clues to top-down sources of variability. Broadly, these top-down factors relate to the linguistic skills and neurocognitive information processing functions of the listener.

Top-down abilities, and their interactions with the incoming speech signal, are widely accepted as crucial in all models of spoken word recognition (34-39). It is generally believed that the transmission of an undistorted signal leads primarily (though not exclusively) to a bottom-up hearing strategy with fast and implicit decoding of linguistic content and seamless lexical access. However, a distorted speech signal, whether due to masking by noise, a hearing impairment, or listening through a CI (or, experimentally, to vocoded speech), requires top-down mechanisms for explicit decoding of the linguistic content (40). In those cases, successful recognition of speech requires use of lexical and contextual knowledge (41). This linguistic knowledge can include phonological knowledge (sensitivity to the sounds and sound patterns of the language), lexical knowledge (a large vocabulary and familiarity with how the sounds of the language are typically combined), semantic knowledge (an understanding of relationships among words and their meanings), and grammatical skills (knowledge of how phrases and sentences are put together).

Sensitivity to the phonological structure of speech has been found to predict open-set speech recognition in CI users (42). In a recent study, we demonstrated that a single measure of phonemic awareness (Final Consonant Choice) predicted 40% of variability in word recognition in quiet (43). In general, CI users demonstrate deficits in tasks that explicitly require phonological access (e.g., nonword repetition) (Moberly et al., under review). Moreover, knowledge of the phonotactic probabilities of a language – the frequencies with which phonological segments and sequences of segments legally occur – has been found to influence how adult CI users recognize spoken words (44).

When hearing a spoken word under degraded listening conditions, better lexical knowledge should decrease the ambiguity of any given phoneme as a result of the listener's prior experience with the possible sequences of phonemes within each word (45). When it comes to recognition of words within a sentence, greater lexical knowledge may support better use of lexical connectivity among words (46-49). For example, a larger receptive vocabulary assessed using the Peabody Picture Vocabulary Test (50) has been found to correlate with speech intelligibility scores of sentences in noise for listeners with normal hearing (51). However, for postlingual adult listeners with CIs, word and/or sentence recognition scores have not been found to correlate with scores of receptive vocabulary (52), expressive vocabulary (43), or word familiarity (Moberly et al., under review), suggesting possible differences in linguistic coding and integration processes.

Top-down processes routinely take on larger roles as linguistic context from the speech input increases, or as the speech signal becomes more degraded (53-54). The recognition of individual words automatically triggers semantic information from long-term memory that is helpful in sentence recognition (55-56). Moreover, the semantic and grammatical constraints imposed by the words surrounding a target word should support recognition, as listeners apply their linguistic knowledge to make inferences about what is being said and what is likely to occur next. The listener's knowledge and prior developmental experience helps to segment the continuous acoustic stream into phonemes (individual sound units of the language), syllables, and words (57). For example, there is evidence that better grammatical knowledge correlates with higher recognition scores for words in sentences in adults with CIs (Moberly et al., under review).

Taken together, the above studies suggest that linguistic skills – phonological, lexical, semantic, and grammatical knowledge – contribute to speech recognition outcome variability in adult CI users. Thus, it is likely that poor performance in some adult patients with CIs could be related to declines or deficits in these foundational language skills.

Neurocognitive Skills

Cognition and the information processing skills underlying perception, attention, and memory are increasingly being recognized as important in explaining variability in the speech recognition abilities of adults with lesser degrees of hearing loss, suggesting a similar impact in adult CI users (58-59). Active and effortful processing of degraded speech clearly places additional information processing demands on a listener's limited cognitive resources (41,60).

One neurocognitive process in particular – working memory (WM) – has been suggested to play a critical role in compensating for the loss of fine spectro-temporal details in the speech input received by hearing impaired individuals and CI users. Using non-auditory reading span and visual digit- or letter-monitoring tasks, measures of working memory capacity have been found to predict 10% to 30% of the variability in speech recognition in noise for hearing aid users, though results are not always consistent (61-66; but see 67-68). Converging evidence suggests that a direct relationship exists between increasing background noise (or decreasing signal-to-noise ratio) and reliance on working memory during speech recognition tasks (69-70). Adult listeners’ immediate benefit from context in speech recognition relates to their ability to keep and update a semantic representation of the sentence content in WM, providing evidence that WM underlies efficient speech processing and semantic integration of the spoken message (71). Working memory capacity (WMC) has also been associated with release from informational masking by semantically related information (72).

Although verbal WM has received little attention in adult CI users, several studies have examined this cognitive ability in pediatric CI users. It is clear that, in this population, phonological WM plays an important role in speech and language outcomes (73-76). When it comes to adults with CIs, Lyxell and colleagues (42) found a small but significant correlation between reading span scores prior to implantation and speech recognition abilities after 12 months of implant experience. Tao and colleagues (77) found significant correlations between speech recognition (Mandarin disyllable recognition) and digit span scores (forward and backward) in adult CI users. A recent study in our lab examined WM in 30 postlingually deaf adult CI users and 30 normal-hearing (NH) peers (Moberly et al., under review). Performance on WM tasks that explicitly required phonological sensitivity (a task of serial recall of rhyming words and a nonword repetition task) was significantly poorer for CI users than NH peers, and scores on those tasks predicted 14% to 18% of the variability in recognition of words in sentences by CI users. In a follow-up study, we demonstrated that WMC, assessed using an auditory Listening Span task, and grammatical skills both predicted recognition of words in sentences; in a combined regression analysis, however, neither contributed independently of the other. This finding provides further evidence that better WM abilities enable more effective use of linguistic (e.g., grammatical) skills during the process of recognizing sentences presented through a CI.

Another neurocognitive skill, perceptual organization (or perceptual closure), refers to the process of using degraded sensory input to create a meaningful perceptual form (78-80). For speech, perceptual organization is the principle of treating the multiple sensory elements that compose the signal as a coherent percept (81). Relations have been found between visual perceptual organization skills and speech recognition in older adults with hearing loss (82). Perceptual closure was also assessed in hearing impaired listeners by George and colleagues (83), using a Text Reception Threshold (TRT) test. Participants were asked to read degraded sentences on a computer screen with varying degrees of visual masking using vertical bars. Accuracy threshold scores predicted 10% to 15% of variability in speech reception thresholds in noise. Zekveld and coauthors (80) found that TRT scores predicted 30% of the variability in auditory speech reception threshold in speech-shaped noise for NH listeners. Similarly, using a Fragmented Sentence Test in which portions of printed letters had been deleted, Watson and colleagues (84) reported correlations with speech recognition in white noise for NH college students. We are currently investigating whether accuracy on a degraded visual Fragmented Sentence Test will predict sentence recognition in adult CI users.

An additional neurocognitive skill that is related to speech recognition outcome variability in adult CI users is inhibitory control, the ability to inhibit irrelevant stimulus information (e.g., noise) or to inhibit lexical competitors during recognition of words. Sommers and Danielson (85) provided support for age-related inhibitory deficits as a mechanism contributing to poorer word recognition in older adults. We recently examined inhibitory control abilities in adult CI users, using a computerized Stroop task (Moberly et al., under review). During this task, the participant viewed a computer monitor on which color words were presented either in a font color matching the word (congruent condition) or in a different font color (incongruent condition). Response times for the incongruent condition, indexing speed of inhibition, were negatively correlated with recognition scores for words in sentences in speech-shaped noise. This finding suggests that inhibitory control processes play a role in speech recognition for adult CI users.
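A common way to quantify Stroop performance (distinct from the raw incongruent response times analyzed in the study above) is the interference score, the difference in mean response time between incongruent and congruent trials. The sketch below illustrates the arithmetic only; all response-time values are hypothetical, not data from any study cited here.

```python
# Stroop interference as a simple difference of mean response times (ms).
# All RT values below are hypothetical, for illustration only.
congruent_rt = [612, 580, 645, 601, 590]      # word and font color match
incongruent_rt = [698, 720, 671, 705, 689]    # word and font color conflict

def mean(values):
    return sum(values) / len(values)

# A larger interference score implies slower inhibition of the irrelevant
# word-reading response in favor of the font-color response.
interference_ms = mean(incongruent_rt) - mean(congruent_rt)
print(round(interference_ms, 1))  # → 91.0
```

Whether one analyzes raw incongruent response times or an interference score, slower inhibition on this task predicted poorer recognition of words in sentences in noise in the study described above.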

Implications – What Needs to be Done?

Although findings from the studies reviewed above begin to provide some insights into sources of variability in speech recognition outcomes for adults with hearing loss, these top-down processes have not been examined extensively in adult CI users. One of our goals is to develop and test a more comprehensive model of speech recognition in adult CI users to explain the interacting roles of auditory sensitivity, linguistic skills, and neurocognitive functions. As a field, a first step will be to re-conceptualize the way we look at cochlear implantation; we are not simply restoring audibility through a CI to an otherwise normal peripheral and central auditory system. A helpful way to approach this idea is through a “connectome” framework (86). A connectome is a network of neural projections and synaptic connections that shapes an individual's global communication functions. In childhood, development of the connectome is highly dependent on early sensory experience and activities. Likewise, postlingual adults with hearing loss should be considered as having a connectome disease: the sensory loss likely leads to downstream neurocognitive effects, which, in turn, have implications for adaptation to the CI. Application of a connectome model to patients with hearing loss suggests not only that outcomes following cochlear implantation will not be confined to the auditory system itself, but also that the neurocognitive effects of prolonged hearing loss will affect outcomes in other related domains as well.

Second, there are clearly areas of investigation that have barely been touched relating to outcomes for adults with CIs, but have demonstrated relations in pediatric CI users. For example, greater maternal sensitivity to communication needs of the child predicts better speech recognition performance (87); interpersonal family dynamics have not been explored at all in adult CI users. Similarly, the role of intensive postoperative aural rehabilitation has been emphasized in the pediatric population but has primarily been limited in adult CI users to patient-driven computerized auditory training (88). Moreover, patients’ personality and “grit” – their perseverance and drive toward long-term goals (89) – likely contribute to ultimate outcomes, as demonstrated in a recent pilot study in our lab (7).

Third, new clinical assessment batteries need to be developed to examine these additional sources of variability in patients with CIs. We hypothesize that individual CI users may display poor speech recognition performance for different reasons, and these differences may suggest divergent therapeutic approaches. For patients with auditory sensitivity through their implants that is too poor to access essential speech structure, remapping of their devices or alternative signal processing strategies may be needed. For patients with poor linguistic skills, language training may be helpful. For CI users with poor neurocognitive processing, perceptual training that incorporates high cognitive demands may improve speech perception in degraded listening conditions directly, by training WM and attention, and indirectly, by strengthening cortical-subcortical sound-to-meaning relationships (90). Additionally, training methods may provide the opportunity to teach adults to effectively use compensatory mechanisms to cope with the processing demands of complex listening environments (91).

Lastly, a number of recommendations can be made for surgeons and audiologists. (1) It is important to recognize that poor performance for a given patient may not be entirely attributable to problems with bottom-up auditory sensitivity and audibility (i.e., electrode array placement, status of the auditory nerve); rather, top-down linguistic and neurocognitive factors may be contributing, and formal aural rehabilitation in conjunction with an experienced speech-language pathologist may be helpful. (2) Anecdotal evidence from focused patient interviews suggests the potential benefit of patient-driven strategies like use of support groups or audiobooks to assist with rehabilitation (7). (3) It should be kept in mind that some patients may require more than two years of CI experience before reaching a plateau in performance (5,92). (4) Clinicians should talk with their colleagues about their poor performers. Although every clinician would prefer to discuss the CI users who are “stars,” it is likely that more open discussion regarding our poor performers will help us to develop more effective intervention strategies for individual patients. (5) Clinicians should understand that the broad array of factors contributing to CI outcomes suggests that a personalized rehabilitation approach likely needs to be developed for each patient. This thinking is in line with general advances across medical fields to develop and implement “personalized medicine” that is tailored to individual patients.

Conclusions

Poor speech recognition performance among adult CI users is, unfortunately, relatively common. Bottom-up auditory sensitivity and top-down linguistic and neurocognitive skills contribute to variability in outcomes and likely differentially explain poor performance across patients. By understanding these sources of variability and their interactions, and by developing novel targeted intervention strategies that help patients remediate and compensate for them, we should be able to optimize speech and language outcomes for more patients and thereby drastically reduce the number of poor performers whom we see in our clinics.

Footnotes

Financial Disclosures: None

Conflicts of Interest: None

References

  • 1.Firszt JB, Holden LK, Skinner MW, et al. Recognition of speech presented at soft to loud levels by adult cochlear implant recipients of three cochlear implant systems. Ear Hear. 2004;25(4):375–387. doi: 10.1097/01.aud.0000134552.22205.ee. [DOI] [PubMed] [Google Scholar]
  • 2.Gifford RH, Shallop JK, Peterson AM. Speech recognition materials and ceiling effects: considerations for cochlear implant programs. Audiol Neurotol. 2008;13:193–205. doi: 10.1159/000113510. [DOI] [PubMed] [Google Scholar]
  • 3.Holden LK, Finley CC, Firszt JB, et al. Factors affecting open-set word recognition in adults with cochlear implants. Ear Hear. 2013;34(3):342–360. doi: 10.1097/AUD.0b013e3182741aa7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4.Pisoni DB, Svirsky MA, Kirk KI, Miyamoto RT. Looking at the “stars”: A first report on the intercorrelations among measures of speech perception, intelligibility, and language development in pediatric cochlear implant users. Research on Spoken Language Processing Progress Report. 1997;21:51–91. [Google Scholar]
  • 5.Lenarz M, Sönmez H, Joseph G, Büchner A, Lenarz T. Long-term performance of cochlear implants in postlingually deafened adults. Otolaryng Head Neck. 2012;147(1):112–8. doi: 10.1177/0194599812438041. [DOI] [PubMed] [Google Scholar]
  • 6.Rumeau C, Frère J, Montaut-Verient B, Lion A, Gauchard G, Parietti-Winkler C. Quality of life and audiologic performance through the ability to phone of cochlear implant users. Eur Arch Oto-Rhino-L. 2015;272(12):3685–92. doi: 10.1007/s00405-014-3448-x. [DOI] [PubMed] [Google Scholar]
  • 7.Harris MS, Capretta NR, Henning SC, Feeney L, Pitt MA, Moberly AC. Postoperative Rehabilitation Strategies Used by Adults with Cochlear Implants: a Pilot Study. Laryngoscope Investigative Otolaryngology. doi: 10.1002/lio2.20. In press. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Kelly AS, Purdy SC, Thorne PR. Electrophysiological and speech perception measures of auditory processing in experienced adult cochlear implant users. Clin Neurophysiol. 2005;116(6):1235–1246. doi: 10.1016/j.clinph.2005.02.011. [DOI] [PubMed] [Google Scholar]
  • 9.Lazard DS, Vincent C, Venail F, et al. Pre-, per-and postoperative factors affecting performance of postlinguistically deaf adults using cochlear implants: a new conceptual model over time. PLoS One. 2012;7:e48739. doi: 10.1371/journal.pone.0048739. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Leung J, Wang NY, Yeagle JD, et al. Predictive models for cochlear implantation in elderly candidates. Arch Otolaryngol Head Neck Surg. 2005;131(12):1049–1054. doi: 10.1001/archotol.131.12.1049. [DOI] [PubMed] [Google Scholar]
  • 11.Roberts DS, Lin HW, Herrmann BS, Lee DJ. Differential cochlear implant outcomes in older adults. Laryngoscope. 2013;123(8):1952–1956. doi: 10.1002/lary.23676. [DOI] [PubMed] [Google Scholar]
  • 12.Williamson RA, Pytynia K, Oghalai JS, Vrabec JT. Auditory performance after cochlear implantation in late septuagenarians and octogenarians. Otol Neurotol. 2009;30(7):916–920. doi: 10.1097/MAO.0b013e3181b4e594. [DOI] [PMC free article] [PubMed] [Google Scholar]
13. Buchman CA, Copeland BJ, Yu KK, Brown CJ, Carrasco VN, Pillsbury HC 3rd. Cochlear implantation in children with congenital inner ear malformations. Laryngoscope. 2004;114(2):309–316. doi: 10.1097/00005537-200402000-00025.
14. Rotteveel LJ, Snik AF, Vermeulen AM, Mylanus EA. Three-year follow-up of children with postmeningitic deafness and partial cochlear implant insertion. Clin Otolaryngol. 2005;30(3):242–248. doi: 10.1111/j.1365-2273.2005.00958.x.
15. Gantz BJ, Woodworth GG, Knutson JF, Abbas PJ, Tyler RS. Multivariate predictors of audiological success with multichannel cochlear implants. Ann Otol Rhinol Laryngol. 1993;102(12):909–916. doi: 10.1177/000348949310201201.
16. Green KM, Bhatt Y, Mawman DJ, et al. Predictors of audiological outcome following cochlear implantation in adults. Cochlear Implants Int. 2007;8(1):1–11. doi: 10.1179/cim.2007.8.1.1.
17. Rubinstein JT, Parkinson WS, Tyler RS, Gantz BJ. Residual speech recognition and cochlear implant performance: effects of implantation criteria. Am J Otol. 1999;20(4):445–452.
18. Summerfield AQ, Marshall DH. Preoperative predictors of outcomes from cochlear implantation in adults: performance and quality of life. Ann Otol Rhinol Laryngol Suppl. 1995;166:105–108.
19. Chatelin V, Kim EJ, Driscoll C, et al. Cochlear implant outcomes in the elderly. Otol Neurotol. 2004;25(3):298–301. doi: 10.1097/00129492-200405000-00017.
20. Vermeire K, Brokx JP, Wuyts FL, Cochet E, Hofkens A, Van de Heyning PH. Quality-of-life benefit from cochlear implantation in the elderly. Otol Neurotol. 2005;26(2):188–195. doi: 10.1097/00129492-200503000-00010.
21. Facer GW, Peterson AM, Brey RH. Cochlear implantation in the senior citizen age group using the Nucleus 22-channel device. Ann Otol Rhinol Laryngol Suppl. 1995;166:187–190.
22. Park E, Shipp DB, Chen JM, Nedzelski JM, Lin VY. Postlingually deaf adults of all ages derive equal benefits from unilateral multichannel cochlear implant. J Am Acad Audiol. 2011;22(10):637–643. doi: 10.3766/jaaa.22.10.2.
23. Guérit F, Santurette S, Chalupper J, Dau T. Investigating interaural frequency-place mismatches via bimodal vowel integration. Trends Hear. 2014;18. doi: 10.1177/2331216514560590.
24. Svirsky MA, Fitzgerald MB, Sagi E, Glassman EK. Bilateral cochlear implants with large asymmetries in electrode insertion depth: implications for the study of auditory plasticity. Acta Otolaryngol. 2015;135(4):354–363. doi: 10.3109/00016489.2014.1002052.
25. Friesen LM, Shannon RV, Baskent D, Wang X. Speech recognition in noise as a function of the number of spectral channels: comparison of acoustic hearing and cochlear implants. J Acoust Soc Am. 2001;110(2):1150–1163. doi: 10.1121/1.1381538.
26. Fu QJ, Nogaki G, Galvin JJ. Auditory training with spectrally shifted speech: implications for cochlear implant patient auditory rehabilitation. J Assoc Res Otolaryngol. 2005;6(2):180–189. doi: 10.1007/s10162-005-5061-6.
27. Fu QJ, Galvin JJ. Perceptual learning and auditory training in cochlear implant recipients. Trends Amplif. 2007;11(3):193–205. doi: 10.1177/1084713807301379.
28. Won JH, Drennan WR, Rubinstein JT. Spectral-ripple resolution correlates with speech perception in noise in cochlear implant users. J Assoc Res Otolaryngol. 2007;8(3):384–392. doi: 10.1007/s10162-007-0085-8.
29. Won JH, Clinard CG, Kwon S, Dasika VK, Nie K, Drennan WR, Tremblay KL, Rubinstein JT. Relationship between behavioral and physiological spectral-ripple discrimination. J Assoc Res Otolaryngol. 2011;12(3):375–393. doi: 10.1007/s10162-011-0257-4.
30. DeVries L, Scheperle R, Bierer JA. Assessing the electrode-neuron interface with the electrically evoked compound action potential, electrode position, and behavioral thresholds. J Assoc Res Otolaryngol. 2016;17(3):237–252. doi: 10.1007/s10162-016-0557-9.
31. Choudhury B, Fitzpatrick DC, Buchman CA, et al. Intraoperative round window recordings to acoustic stimuli from cochlear implant patients. Otol Neurotol. 2012;33(9):1507–1515. doi: 10.1097/MAO.0b013e31826dbc80.
32. Fitzpatrick DC, Campbell A, Choudhury B, Dillon M, Forgues M, Buchman CA, Adunka OF. Round window electrocochleography just prior to cochlear implantation: relationship to word recognition outcomes in adults. Otol Neurotol. 2014;35(1):64. doi: 10.1097/MAO.0000000000000219.
33. Eggermont JJ. The role of sound in adult and developmental auditory cortical plasticity. Ear Hear. 2008;29(6):819–829. doi: 10.1097/AUD.0b013e3181853030.
34. Bhargava P, Gaudrain E, Başkent D. Top-down restoration of speech in cochlear-implant users. Hear Res. 2014;309:113–123. doi: 10.1016/j.heares.2013.12.003.
35. Luce PA, Pisoni DB. Recognizing spoken words: the neighborhood activation model. Ear Hear. 1998;19(1):1–36. doi: 10.1097/00003446-199802000-00001.
36. McClelland JL, Elman JL. The TRACE model of speech perception. Cogn Psychol. 1986;18(1):1–86. doi: 10.1016/0010-0285(86)90015-0.
37. Marslen-Wilson WD. Functional parallelism in spoken word-recognition. Cognition. 1987;25:71–102. doi: 10.1016/0010-0277(87)90005-9.
38. Norris D, McQueen JM. Shortlist B: a Bayesian model of continuous speech recognition. Psychol Rev. 2008;115(2):357–395. doi: 10.1037/0033-295X.115.2.357.
39. Poeppel D, Idsardi WJ, van Wassenhove V. Speech perception at the interface of neurobiology and linguistics. Philos Trans R Soc B. 2008;363(1493):1071–1086. doi: 10.1098/rstb.2007.2160.
40. Stenfelt S, Rönnberg J. The signal-cognition interface: interactions between degraded auditory signals and cognitive processes. Scand J Psychol. 2009;50(5):385–393. doi: 10.1111/j.1467-9450.2009.00748.x.
41. Heald SLM, Nusbaum HC. Speech perception as an active cognitive process. Front Syst Neurosci. 2014;8:1–15. doi: 10.3389/fnsys.2014.00035.
42. Lyxell B, Andersson J, Andersson U, Arlinger S, Bredberg G, Harder H. Phonological representation and speech understanding with cochlear implants in deafened adults. Scand J Psychol. 1998;39(3):175–179. doi: 10.1111/1467-9450.393075.
43. Moberly AC, Lowenstein JH, Nittrouer S. Word recognition variability with cochlear implants: the degradation of phonemic sensitivity. Otol Neurotol. In press. doi: 10.1097/MAO.0000000000001001.
44. Vitevitch MS, Pisoni DB, Kirk KI, Hay-McCutcheon M, Yount SL. Effects of phonotactic probabilities on the processing of spoken words and nonwords by adults with cochlear implants who were postlingually deafened. Volta Rev. 2000;102(4):283–302.
45. Gelfand JT, Christie RE, Gelfand SA. Large-corpus phoneme and word recognition and the generality of lexical content in CVC word perception. J Speech Lang Hear Res. 2014;57:297–307. doi: 10.1044/1092-4388(2013/12-0183).
46. Altieri N, Gruenenfelder T, Pisoni DB. Clustering coefficients of lexical neighborhoods: does neighborhood structure matter in spoken word recognition? Ment Lex. 2010;5(1):1–21. doi: 10.1075/ml.5.1.01alt.
47. Pisoni DB, Nusbaum HC, Luce PA, Slowiaczek LM. Speech perception, word recognition and the structure of the lexicon. Speech Commun. 1985;4(1):75–95. doi: 10.1016/0167-6393(85)90037-8.
48. Ganong WF. Phonetic categorization in auditory word recognition. J Exp Psychol Hum Percept Perform. 1980;6:110–125. doi: 10.1037//0096-1523.6.1.110.
49. Samuel AG. Red herring detectors and speech perception: in defense of selective adaptation. Cogn Psychol. 1986;18(4):452–499. doi: 10.1016/0010-0285(86)90007-1.
50. Dunn DM, Dunn LM. Peabody Picture Vocabulary Test: Manual. Pearson; 2007.
51. Benard MR, Mensink JS, Başkent D. Individual differences in top-down restoration of interrupted speech: links to linguistic and cognitive abilities. J Acoust Soc Am. 2013;135(2):88–94. doi: 10.1121/1.4862879.
52. Moberly AC, Lowenstein JH, Tarr E, et al. Do adults with cochlear implants rely on different acoustic cues for phoneme perception than adults with normal hearing? J Speech Lang Hear Res. 2014;57(2):566–582. doi: 10.1044/2014_JSLHR-H-12-0323.
53. Benichov J, Cox C, Tun PA, Wingfield A. Word recognition within a linguistic context: effects of age, hearing acuity, verbal ability and cognitive function. Ear Hear. 2012;32:250–256. doi: 10.1097/AUD.0b013e31822f680f.
54. Mattys SL, White L, Melhorn JF. Integration of multiple speech segmentation cues: a hierarchical framework. J Exp Psychol Gen. 2005;134:477–500. doi: 10.1037/0096-3445.134.4.477.
55. Boland JE, Cutler A. Interaction with autonomy: multiple output models and the inadequacy of the great divide. Cognition. 1996;58(3):309–320. doi: 10.1016/0010-0277(95)00684-2.
56. Spehar B, Goebel S, Tye-Murray N. Effects of context type on lipreading and listening performance and implications for sentence processing. J Speech Lang Hear Res. 2015;58(3):1093–1102. doi: 10.1044/2015_JSLHR-H-14-0360.
57. Committee on Hearing, Bioacoustics and Biomechanics (CHABA). Speech understanding and aging. J Acoust Soc Am. 1988;83:859–895.
58. Akeroyd MA. Are individual differences in speech reception related to individual differences in cognitive ability? A survey of twenty experimental studies with normal and hearing-impaired adults. Int J Audiol. 2008;47:53–71. doi: 10.1080/14992020802301142.
59. Pisoni DB. Cognitive factors and cochlear implants: some thoughts on perception, learning, and memory in speech perception. Ear Hear. 2000;21(1):70. doi: 10.1097/00003446-200002000-00010.
60. Faulkner KF, Pisoni DB. Some observations about cochlear implants: challenges and future directions. Neurosci Disc. 2013:1–9.
61. Arehart KH, Souza P, Baca R, Kates J. Working memory, age and hearing loss: susceptibility to hearing aid distortion. Ear Hear. 2013;34:251–260. doi: 10.1097/AUD.0b013e318271aa5e.
62. Lunner T. Cognitive function in relation to hearing aid use. Int J Audiol. 2003;42(1):49–58. doi: 10.3109/14992020309074624.
63. Lunner T, Sundewall-Thorén E. Interactions between cognition, compression, and listening conditions: effects on speech-in-noise performance in a two-channel hearing aid. J Am Acad Audiol. 2007;18(7):604–617. doi: 10.3766/jaaa.18.7.7.
64. Pichora-Fuller MK, Souza PE. Effects of aging on auditory processing of speech. Int J Audiol. 2003;42(2):11–16.
65. Rönnberg J, Lunner T, Zekveld A, et al. The ease of language understanding (ELU) model: theoretical, empirical, and clinical advances. Front Syst Neurosci. 2013;7:1–17. doi: 10.3389/fnsys.2013.00031.
66. Rudner M, Foo C, Sundewall-Thorén E, Lunner T, Rönnberg J. Phonological mismatch and explicit cognitive processing in a sample of 102 hearing-aid users. Int J Audiol. 2008;47(2):91–98. doi: 10.1080/14992020802304393.
67. Fu QJ, Galvin JJ. Perceptual learning and auditory training in cochlear implant recipients. Trends Amplif. 2007;11(3):193–205. doi: 10.1177/1084713807301379.
68. Rudner M, Fransson P, Ingvar M, Nyberg L, Rönnberg J. Neural representation of binding lexical signs and words in the episodic buffer of working memory. Neuropsychologia. 2007;45:2258–2276. doi: 10.1016/j.neuropsychologia.2007.02.017.
69. Pichora-Fuller MK, Schneider BA, Daneman M. How young and old adults listen to and remember speech in noise. J Acoust Soc Am. 1995;97:593–608. doi: 10.1121/1.412282.
70. Rönnberg J. Cognition in the hearing impaired and deaf as a bridge between signal and dialogue: a framework and a model. Int J Audiol. 2003;42:S68–S76. doi: 10.3109/14992020309074626.
71. Janse E, Jesse A. Working memory affects older adults' use of context in spoken-word recognition. Q J Exp Psychol. 2014;67(9):1842–1862. doi: 10.1080/17470218.2013.879391.
72. Zekveld AA, Rudner M, Johnsrude IS, Rönnberg J. The effects of working memory capacity and semantic cues on the intelligibility of speech in noise. J Acoust Soc Am. 2013;134:2225–2234. doi: 10.1121/1.4817926.
73. Dawson PW, Busby PA, McKay CM, Clark GM. Short-term auditory memory in children using cochlear implants and its relevance to receptive language. J Speech Lang Hear Res. 2002;45:789–801. doi: 10.1044/1092-4388(2002/064).
74. Cleary M, Pisoni DB, Geers AE. Some measures of verbal and spatial working memory in eight- and nine-year-old hearing-impaired children with cochlear implants. Ear Hear. 2001;22(5):395–411. doi: 10.1097/00003446-200110000-00004.
75. Nittrouer S, Caldwell-Tarr A, Lowenstein JH. Working memory in children with cochlear implants: problems are in storage, not processing. Int J Pediatr Otorhinolaryngol. 2013;77:1886–1898. doi: 10.1016/j.ijporl.2013.09.001.
76. Pisoni DB. Rapid phonological coding and working memory dynamics in children with cochlear implants. In: Perspectives on Phonological Theory and Development: In Honor of Daniel A. Dinnsen. 2014;56:91.
77. Tao D, Deng R, Jiang Y, Galvin JJ 3rd, Fu QJ, Chen B. Contribution of auditory working memory to speech understanding in Mandarin-speaking cochlear implant users. PLoS One. 2014;9(6):e99096. doi: 10.1371/journal.pone.0099096.
78. Stothers M, Klein PD. Perceptual organization, phonological awareness, and reading comprehension in adults with and without learning disabilities. Ann Dyslexia. 2010;60:209–237. doi: 10.1007/s11881-010-0042-9.
79. Behrmann M, Kimchi R. What does visual agnosia tell us about perceptual organization and its relationship to object perception? J Exp Psychol Hum Percept Perform. 2003;29(1):19–42. doi: 10.1037//0096-1523.29.1.19.
80. Zekveld AA, Deijen JB, Goverts ST, Kramer SE. The relationship between nonverbal cognitive functions and hearing loss. J Speech Lang Hear Res. 2007;50:74. doi: 10.1044/1092-4388(2007/006).
81. Rosenblum LD, Pisoni DB, Remez R. Primacy of multimodal speech perception. In: The Handbook of Speech Perception. 2005:51–78.
82. Humes LE. The contributions of audibility and cognitive factors to the benefit provided by amplified speech to older adults. J Am Acad Audiol. 2007;18(7):590–603. doi: 10.3766/jaaa.18.7.6.
83. George EL, Zekveld AA, Kramer SE, Goverts ST, Festen JM, Houtgast T. Auditory and nonauditory factors affecting speech reception in noise by older listeners. J Acoust Soc Am. 2007;121(4):2362–2375. doi: 10.1121/1.2642072.
84. Watson CS, Qiu WW, Chamberlain MM, Li X. Auditory and visual speech perception: confirmation of a modality-independent source of individual differences in speech recognition. J Acoust Soc Am. 1996;100(2):1153–1162. doi: 10.1121/1.416300.
85. Sommers MS, Danielson SM. Inhibitory processes and spoken word recognition in young and older adults: the interaction of lexical competition and semantic context. Psychol Aging. 1999;14(3):458–472. doi: 10.1037//0882-7974.14.3.458.
86. Kral A, Kronenberger WG, Pisoni DB, O'Donoghue GM. Neurocognitive factors in sensory restoration of early deafness: a connectome model. Lancet Neurol. 2016;15(6):610–621. doi: 10.1016/S1474-4422(16)00034-X.
87. Barnard JM, Fisher LM, Johnson KC, Eisenberg LS, Wang NY, Quittner AL, Carson CM, Niparko JK; CDaCI Investigative Team. A prospective longitudinal study of US children unable to achieve open-set speech recognition 5 years after cochlear implantation. Otol Neurotol. 2015;36(6):985–992. doi: 10.1097/MAO.0000000000000723.
88. Fu QJ, Galvin JJ. Maximizing cochlear implant patients' performance with advanced speech training procedures. Hear Res. 2008;242(1):198–208. doi: 10.1016/j.heares.2007.11.010.
89. Duckworth AL, Peterson C, Matthews MD, Kelly DR. Grit: perseverance and passion for long-term goals. J Pers Soc Psychol. 2007;92(6):1087–1101. doi: 10.1037/0022-3514.92.6.1087.
90. Anderson S, Parbery-Clark A, White-Schwoch T, Kraus N. Auditory brainstem response to complex sounds predicts self-reported speech-in-noise performance. J Speech Lang Hear Res. 2013;56:31–43. doi: 10.1044/1092-4388(2012/12-0043).
91. Saija JD, Akyürek EG, Andringa TC, Başkent D. Perceptual restoration of degraded speech is preserved with advancing age. J Assoc Res Otolaryngol. 2013;15(1):139–148. doi: 10.1007/s10162-013-0422-z.
92. Herzog M, Schön F, Müller J, Knaus C, Scholtz L, Helms J. Long-term results after cochlear implantation in elderly patients. Laryngorhinootologie. 2003;82(7):490–493. doi: 10.1055/s-2003-40896.
