Laryngoscope Investigative Otolaryngology. 2016 Nov 14;1(6):154–162. doi: 10.1002/lio2.38

Non‐auditory neurocognitive skills contribute to speech recognition in adults with cochlear implants

Aaron C Moberly,1 Derek M Houston,1 Irina Castellanos1
PMCID: PMC5467524  PMID: 28660253

Abstract

Objective

Unexplained variability in speech recognition outcomes among postlingually deafened adults with cochlear implants (CIs) is an enormous barrier to clinical and research progress. This variability is only partially explained by patient factors (e.g., duration of deafness) and auditory sensitivity (e.g., spectral and temporal resolution). This study sought to determine whether non‐auditory neurocognitive skills could explain the speech recognition variability exhibited by adult CI users.

Study Design

Thirty postlingually deafened adults with CIs and thirty age‐matched normal‐hearing (NH) controls were enrolled.

Methods

Participants were assessed on recognition of words in sentences in noise and on several non‐auditory measures of neurocognitive function. The non‐auditory tasks assessed global intelligence (problem‐solving), controlled fluency, working memory, and inhibition‐concentration abilities.

Results

For CI users, faster response times during a non‐auditory task of inhibition‐concentration predicted better recognition of sentences in noise; however, similar effects were not evident for NH listeners.

Conclusions

Findings from this study suggest that inhibition‐concentration skills play a role in speech recognition for CI users, but less so for NH listeners. Further research will be required to elucidate this role and its potential as a novel target for intervention.

Keywords: cochlear implants, sensorineural hearing loss, speech perception

INTRODUCTION

Although cochlear implants (CIs) are effective in restoring access to auditory input for adults with acquired hearing loss, the benefits to speech recognition are not consistent across patients. Average speech recognition after implantation is approximately 70% correct words in sentences in quiet, with generally poorer performance in noise. Some patients experience minimal speech recognition benefit after implantation, while others achieve scores near 100% in quiet.1, 2, 3, 4 This variability in outcomes presents a challenge for healthcare providers. Identifying factors that explain outcome variability, along with factors that can be used to prognosticate postoperative outcomes, may help us to better counsel patients as well as to identify novel targets for clinical intervention for poorly performing patients.

Most research on postlingually deaf adults with CIs has focused on “bottom‐up” auditory sensitivity to the spectral and temporal properties of speech signals, pursued by improving CI hardware, processing, and stimulation parameters.3, 5, 6, 7 However, there is increasing evidence that “top‐down” neurocognitive mechanisms, broadly defined here as the use of language knowledge and executive control during intentional, goal‐directed behavior, contribute to speech recognition outcomes.8, 9, 10 During spoken language recognition, the listener must use neurocognitive skills to make sense of the incoming speech signal, relating it to linguistic representations in long‐term memory.11, 12 These neurocognitive processes appear to be especially important when the bottom‐up sensory input is degraded (e.g., in noise, when using a hearing aid, or when listening to the degraded signals transmitted by a CI), because degraded input leads to greater ambiguity in how the information within that input should be organized perceptually. Under these degraded listening conditions, sufficient neurocognitive resources are required for successful speech recognition.

A number of neurocognitive skills have been examined previously for their effects on speech recognition in adults with lesser degrees of hearing loss. Some listeners may be better able to make sense of degraded speech by being able to more effectively store and integrate new information with information presented earlier, or by being able to do so more rapidly. In general, measures of verbal working memory, a limited‐capacity temporary storage mechanism for holding and processing information, have been found to be successful predictors of speech recognition under degraded or challenging listening conditions.8 On the other hand, general scholastic abilities (e.g., standardized test scores or grade point average), tests of IQ, and measures of simple reaction time have typically failed to demonstrate significant associations with speech recognition performance.13, 14, 15

When it comes to CI users, much less is known regarding the role of neurocognitive processes during speech recognition. In an early study of predictors of speech recognition performance in 29 adults with early‐generation multichannel CIs, scores on a Visual Monitoring Task (requiring a rapid response to digits displayed on a computer screen whenever a specified pattern appeared) and a visual Sequence Learning Task (a written task of rapid detection and completion of a sequence of characters) accounted for 10 to 31% of the variance in speech recognition measures.16 Follow‐up studies in a larger group of 48 adults receiving CIs developed a preoperative predictive index using multivariate regression modeling, incorporating duration of deafness, speech‐reading ability, residual hearing function, measures of compliance with treatment, and cognitive ability.17, 18 In that combined multivariate regression analysis, scores on the Visual Monitoring Task predicted approximately 5 to 20% of the variance in speech recognition outcomes. Interestingly, a more recent study demonstrated that Visual Monitoring Task scores significantly predicted accuracy in music recognition, suggesting similar neurocognitive demands in tasks of speech recognition and music perception.19

These early studies of predictors of speech recognition performance in CI users supported a role for rapid processing of sequentially presented stimuli. The first goal of the current study was to examine several other neurocognitive abilities in a group of adult CI users. The study was designed to test the hypothesis that non‐auditory neurocognitive skills contribute to sentence recognition scores in postlingually deafened adult CI users. Several neurocognitive skills likely come into play when performing a task of sentence recognition under degraded listening conditions. In particular, these skills include sustaining controlled attention to the task,20 exerting controlled fluency (the ability to process stimuli rapidly under concentration demands),21 and exerting inhibition‐concentration (the ability to concentrate on task‐relevant information while suppressing prepotent or automatic responses that are irrelevant to the task).22 Support for the role of inhibitory control comes from studies demonstrating that reductions in older adults' abilities to ignore task‐irrelevant information are an important contributor to their difficulty recognizing words in noise.23, 24, 25, 26 Inhibitory processes may also facilitate the identification of correct lexical items and the suppression of incorrect responses.27

In CI users, it is possible that neurocognitive abilities play an even greater role in speech recognition than for individuals with normal hearing (NH) listening under degraded conditions (e.g., noise), because CI users face even greater degrees of degradation of the spectro‐temporal details of speech delivered by their implants. The second goal of this study was to examine whether the relations among non‐auditory measures of neurocognitive skills and sentence recognition were different between CI users and NH age‐matched peers listening to sentences in noise.

To address the above goals, a group of postlingually deafened adult experienced CI users, alongside a group of age‐matched peers with NH, were tested using several measures of recognition of words in sentences, along with non‐auditory measures of neurocognitive function, including global fluid intelligence (problem‐solving), working memory, controlled fluency, and inhibition‐concentration abilities. Neurocognitive scores were analyzed for their relationships with sentence recognition.

Addressing these two goals should have clinical ramifications: identifying neurocognitive factors that contribute to speech recognition outcomes, which can be tested in a non‐auditory fashion, could suggest novel diagnostic predictors of outcomes for patients considering cochlear implantation. Moreover, findings could suggest potential neurocognitive intervention targets for poorly performing patients.

MATERIALS AND METHODS

Participants

Sixty adults were enrolled. Thirty were experienced CI users, between the ages of 50 and 82 years, recruited from the Otolaryngology department at The Ohio State University. Implant users had varying etiologies of hearing loss and ages at implantation; however, all had experienced progressive declines in hearing during adulthood, and all received their implants at the age of 35 years or later. Participants had CI‐aided thresholds better than 35 dB HL at 0.25, 0.5, 1, and 2 kHz, as measured by clinical audiologists within the year before study enrollment. All patients had used their CIs for at least 9 months. All used Cochlear devices with the Advanced Combination Encoder (ACE) processing strategy. Thirteen CI users had a right CI, nine a left CI, and eight bilateral CIs. A contralateral hearing aid was worn by 13 patients. During testing, participants wore their devices in their everyday mode, including use of hearing aids, and kept the same settings throughout the testing session. Residual hearing in each ear was assessed immediately before testing.

Thirty normal‐hearing (NH) controls, matched as closely as possible to the chronological ages of the CI users, were also tested. Controls were evaluated for NH immediately before testing; NH was defined as a four‐tone (0.5, 1, 2, and 4 kHz) pure‐tone average (PTA) better than 25 dB HL in the better ear. This criterion was relaxed to a 30 dB HL PTA for participants over age 60 years, and only three had a PTA poorer than 25 dB HL. NH control participants were recruited from patients in the Otolaryngology department with non‐otologic complaints, or through ResearchMatch, a research recruitment database.
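For concreteness, the NH inclusion rule just described can be expressed as a short calculation. The Python sketch below is an illustration of the screening logic only (function names are ours, not from the study), not the clinical audiometry procedure itself:

```python
def four_tone_pta(thresholds_db_hl: dict[float, float]) -> float:
    """Average of pure-tone thresholds (dB HL) at 0.5, 1, 2, and 4 kHz."""
    return sum(thresholds_db_hl[f] for f in (0.5, 1.0, 2.0, 4.0)) / 4.0

def meets_nh_criterion(better_ear_pta: float, age_years: int) -> bool:
    """PTA better (lower) than 25 dB HL, relaxed to 30 dB HL over age 60."""
    cutoff = 30.0 if age_years > 60 else 25.0
    return better_ear_pta < cutoff

# Example: thresholds of 15, 20, 25, and 30 dB HL give a PTA of 22.5 dB HL.
pta = four_tone_pta({0.5: 15, 1.0: 20, 2.0: 25, 4.0: 30})
print(pta, meets_nh_criterion(pta, age_years=58))  # 22.5 True
```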

All participants underwent screening to ensure no evidence of cognitive impairment, using the Mini‐Mental State Examination (MMSE), a validated assessment of verbal working memory, attention, and the ability to follow instructions.28 Raw scores were converted to T scores based on age and education, with a T score below 29 considered concerning for cognitive impairment. All participants had MMSE T scores greater than 29.

Participants were also assessed for basic word‐reading ability using the Word Reading subtest of the Wide Range Achievement Test, 4th edition (WRAT),29 which served as a metric of general language proficiency. All participants demonstrated a standard score of ≥ 85; that is, no participant scored more than one standard deviation below the mean. Because some tasks required looking at a computer monitor or paper forms, a final screening test of near vision was performed; all participants had corrected near vision of 20/30 or better, the criterion for passing vision screens in educational settings.

Participants in both the CI and NH groups were adults with spoken American English as their first language. All had a high school diploma, except for one CI user with a GED. A measure of socioeconomic status (SES) was obtained, because SES may predict access to vocabulary and language. SES was quantified using a metric defined by Nittrouer and Burton,30 based on occupational and educational levels, each rated on a scale from 1 to 8 (8 being the highest level). The two ratings were multiplied, yielding composite scores between 1 and 64. No significant group differences were found for age or SES, but CI participants scored significantly more poorly on the reading and cognitive screening tasks. Demographic and audiologic data for the CI users are shown in Table 1. Mean demographic measures for the CI and NH groups are shown in Table 2.
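As a minimal sketch of the SES metric just described (two 1-to-8 ratings multiplied into a 1-to-64 composite), the following hypothetical Python function shows only the arithmetic; the actual occupational and educational rating rubrics come from Nittrouer and Burton.30

```python
def ses_composite(occupational_level: int, educational_level: int) -> int:
    """Multiply two 1-8 ratings into an SES composite between 1 and 64."""
    for level in (occupational_level, educational_level):
        if not 1 <= level <= 8:
            raise ValueError("each rating must fall between 1 and 8")
    return occupational_level * educational_level

# Example: occupational level 5 and educational level 7 yield a composite of 35.
print(ses_composite(5, 7))  # 35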

Table 1.

Cochlear implant participant demographics. Sentence recognition tasks were performed at a +3 dB SNR for long, complex sentences and for short, meaningful sentences, and in quiet for nonsense sentences.

Participant | Gender | Age (years) | Implantation Age (years) | SES | Side of Implant | Hearing Aid | Etiology of Hearing Loss | Better Ear PTA (dB HL) | Long, Complex (% correct words) | Short, Meaningful (% correct words) | Nonsense (% correct words)
1 | F | 64 | 54 | 24 | B | N | Genetic | 120.0 | 70.7 | 96.0 | 93.0
2 | F | 66 | 62 | 35 | R | Y | Genetic, progressive, adult onset | 78.8 | 32.6 | 59.2 | 86.0
3 | M | 66 | 61 | 18 | L | N | Noise, Meniere's | 82.5 | 44.3 | 66.4 | 91.0
4 | F | 66 | 58 | 12 | R | Y | Genetic, progressive, adult onset | 98.8 | 62.2 | 92.0 | 83.0
6 | M | 69 | 65 | 24 | R | N | Genetic, progressive, adult onset | 88.8 | 20.4 | 76.0 | 84.0
7 | M | 58 | 52 | 36 | B | N | Rubella, progressive | 115.0 | 6.5 | 25.6 | 40.0
8 | F | 56 | 48 | 25 | R | Y | Genetic, progressive | 82.5 | 51.4 | 84.0 | 77.0
9 | M | 79 | 67 | 49 | L | N | Genetic | 120.0 | 0.7 | 0.0 | 46.0
10 | M | 79 | 76 | 36 | R | Y | Progressive, adult onset, noise | 70.0 | 34.0 | 73.6 | 71.0
12 | F | 68 | 56 | 12 | B | N | Otosclerosis | 112.5 | 12.7 | 25.6 | 92.0
13 | M | 54 | 50 | 24 | B | N | Progressive, adult onset | 120.0 | 58.2 | 84.8 | 90.0
16 | F | 62 | 59 | 35 | R | N | Progressive, adult onset | 115.0 | 7.9 | 17.6 | 69.0
19 | F | 75 | 67 | 36 | L | N | Progressive, adult onset, autoimmune | 120.0 | 1.9 | 1.6 | 48.0
20 | M | 78 | 74 | 15 | L | N | Ear infections | 108.8 | 4.2 | 0.0 | 57.0
21 | M | 82 | 58 | 42 | L | Y | Meniere's | 71.3 | 29.4 | 55.2 | 72.0
23 | F | 80 | 73 | 30 | R | N | Progressive, adult onset | 87.5 | 26.2 | 35.2 | 75.0
25 | M | 58 | 57 | 24 | R | Y | Autoimmune, sudden | 120.0 | 7.2 | 3.2 | 72.0
28 | M | 77 | 72 | 12 | B | N | Progressive, adult onset | 120.0 | 0.9 | 0.8 | 41.0
31 | F | 67 | 62 | 25 | L | Y | Progressive as child | 102.5 | 8.6 | 16.8 | 68.0
34 | M | 60 | 54 | 42 | L | Y | Noise, Meniere's, sudden | 98.8 | 7.5 | 1.6 | 83.0
35 | M | 68 | 62 | 42 | B | N | Genetic, progressive, adult onset | 120.0 | 31.3 | 68.8 | 74.0
37 | F | 50 | 35 | 35 | B | N | Progressive as child | 120.0 | 76.8 | 97.6 | 92.0
38 | M | 75 | 74 | 35 | L | Y | Ototoxicity | 96.3 | 1.4 | 3.2 | 31.0
39 | F | 63 | 61 | 30 | R | N | Progressive, adult onset | 107.5 | 16.0 | 16.0 | 82.0
40 | F | 66 | 59 | 15 | B | N | Genetic, Meniere's | 120.0 | 31.5 | 73.6 | 89.0
41 | F | 59 | 56 | 15 | R | Y | Sudden HL | 87.5 | 37.1 | 60.8 | 80.0
42 | M | 82 | 76 | 42 | R | Y | Progressive, adult onset, noise | 68.8 | 38.9 | 61.6 | 74.0
44 | F | 72 | 66 | 25 | R | N | Progressive, adult onset | 98.8 | 10.6 | 7.2 | 77.0
46 | M | 75 | 74 | 42 | L | Y | Progressive, adult onset | 87.5 | 0.0 | 0.0 | 27.0
48 | F | 78 | 48 | 15 | R | Y | Progressive, adult onset | 110.0 | 7.6 | 12.0 | 53.0

Notes: SES: socioeconomic status; PTA: pure‐tone average; HL: hearing level

Table 2.

Participant demographics

Measure | Normal Hearing (N = 30), Mean (SD) | Cochlear Implant (N = 30), Mean (SD) | t value | p value
Age (years) | 68.3 (9.4) | 68.4 (8.9) | 0.03 | .98
Reading (standard score) | 107.0 (12.5) | 100.5 (11.1) | 2.13 | .04
MMSE (T score) | 55.8 (10.7) | 49.8 (9.4) | 2.29 | .03
SES | 34.0 (13.9) | 28.2 (11.3) | 1.74 | .09

Equipment

Audiometry was performed using a Welch Allyn TN262 audiometer with TDH‐39 headphones. For the MMSE and WRAT screening tasks, as well as the sentence recognition and neurocognitive tasks, participant responses were video‐ and audio‐recorded. Participants wore vests holding FM transmitters that sent signals to receivers, which provided input directly into the video camera. Responses were live‐scored but could also be scored later, allowing two staff members to score responses independently as a reliability check. Participants were tested while using their usual devices (one CI, two CIs, or a CI plus contralateral hearing aid) or no devices (for NH controls); device function was confirmed at the beginning of testing by having the tester verify that the participant could detect sound. Speech samples for the sentence recognition measures were recorded from a male talker directly onto the computer hard drive via an AKG C535 EB microphone, a Shure M268 amplifier, and a Creative Laboratories Soundblaster soundcard.

Stimuli‐specific Procedures

All tasks were performed in a soundproof booth or a sound‐treated testing room.

Sentence Recognition

Three measures examining the recognition of words in sentences were included: long, syntactically complex sentences (“long, complex” sentences); short, highly constrained, meaningful sentences (“short, meaningful” sentences); and short sentences that were syntactically correct but semantically anomalous (“nonsense” sentences). To avoid ceiling and floor effects, participants were tested in different levels of speech‐shaped noise based on pilot testing of three NH and three CI listeners, with signal and noise presented at 68 dB SPL. For CI participants, the signal‐to‐noise ratio (SNR) was +3 dB for the long, complex and short, meaningful sentences, and the nonsense sentences were presented in quiet; NH listeners were tested at −3 dB SNR for all sentence recognition tasks. The percentage of words correctly repeated for each sentence type served as the measure of interest.
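As an illustration of how a target SNR can be realized digitally, the sketch below scales a noise waveform so that its RMS level sits a specified number of dB below the speech RMS before mixing. This is a generic mixing sketch under stated assumptions (RMS‐based levels, noise at least as long as the speech); the study itself presented signal and noise acoustically at 68 dB SPL through calibrated equipment.

```python
import numpy as np

def _rms(x: np.ndarray) -> float:
    return float(np.sqrt(np.mean(np.square(x))))

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale `noise` so speech RMS exceeds noise RMS by `snr_db` dB, then mix."""
    noise = noise[: len(speech)]  # assumes noise is at least as long as speech
    target_noise_rms = _rms(speech) / (10.0 ** (snr_db / 20.0))
    return speech + noise * (target_noise_rms / _rms(noise))

# Example: a 1-second, 16-kHz tone standing in for speech, mixed at +3 dB SNR.
t = np.linspace(0.0, 1.0, 16000, endpoint=False)
speech = 0.1 * np.sin(2.0 * np.pi * 440.0 * t)
noise = np.random.default_rng(0).normal(0.0, 0.05, 16000)
mixed = mix_at_snr(speech, noise, snr_db=3.0)
```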

Recognition of Words in Long, Complex Sentences

These were long, syntactically complex sentences originally designed to assess comprehension of complex syntax in children with dyslexia (e.g., “The stars that the sailor saw came out at midnight”). The set included three types of syntax: compound clauses, subject‐object constructions, and object‐subject constructions.

Recognition of Words in Short, Meaningful Sentences

Fifty‐four of the 72 five‐word sentences used by Nittrouer and Lowenstein31 were included (four for practice, 50 for testing). These sentences are semantically predictable and syntactically correct, and they follow a subject‐predicate structure (e.g., “Flowers grow in the garden”).

Recognition of Words in Nonsense Sentences

These sentences were four words in length, syntactically correct, but semantically anomalous (e.g., “Soft rocks taste red”), used by Nittrouer and colleagues.32

Non‐auditory Measures of Neurocognitive Functioning

Non‐auditory tasks from the Leiter‐3 International Performance Scale were used to assess global intelligence (“Figure Ground,” “Form Completion,” and “Visual Patterns”), controlled fluency (“Attention Sustained”), and working memory (“Forward/Reverse Memory”).33 A non‐auditory computerized measure of inhibition‐concentration (Stroop) was also collected.

Leiter‐3

The Leiter‐3 is a standardized battery designed to assess neurocognitive functions in children and adults, with age norms extending to 75+ years. Because all measures are non‐auditory in nature, the Leiter‐3 can be used with patients with hearing loss; all instructions are given to the participant through pantomime and gesture. The following measures from the Leiter‐3 were included. The first three (Figure Ground, Form Completion, and Visual Patterns) were used as measures of global intellectual ability related to fluid reasoning, and were collected to ensure that these global intellectual skills were equivalent between the CI and NH groups; it was predicted that these measures would not demonstrate relations with speech recognition abilities. The remaining tasks were Attention Sustained (considered a task of controlled fluency in this paper) and Forward and Reverse Memory (non‐auditory measures of working memory). All tasks were administered as described in the Leiter‐3 manual. Raw scores were converted into standard scores, which were used in analyses.

Global Intellectual Skills

During the Figure Ground task, participants pointed to where figures depicted on cards were located on a larger picture. As the task proceeded, the pictures and figures became more detailed, and abstract images were included, increasing the difficulty of the task. During the Form Completion task, three blocks with fragments of a complete picture were placed on a table in front of the participant. Participants were required to put the blocks in the corresponding slots of an easel to complete the target form. During the Visual Patterns task, participants selected blocks in an appropriate sequence to complete a visual pattern. For each of these tasks, correct responses were counted.

Controlled Fluency

During the Attention Sustained subtest, participants were given 30 or 60 seconds to cross out as many figures as possible on a piece of paper matching a target figure shown at the top of the page. Correct responses were counted, and errors were subtracted.

Working Memory

During the Forward Memory and Reverse Memory subtests, an easel was shown with several pictures of animals in squares. The tester pointed to a sequence of pictures, and participants were required to point to the corresponding pictures in the same order or in the reverse order. Correct responses were counted.

Inhibition‐Concentration

A non‐auditory, computerized version of a verbal Stroop task was used, which is publicly available (http://www.millisecond.com). Participants were presented with color words one at a time on a computer monitor and were asked to name the color of the text in which each word was shown. Participants entered responses directly by pressing buttons corresponding to the colors, and scoring was performed automatically at the time of testing. Response times were computed for correct responses to congruent words (automatic word reading; e.g., the word “Red” shown in red ink) and to incongruent words (which require participants to inhibit word reading and concentrate on the ink color; e.g., the word “Red” shown in blue ink).
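A minimal sketch of the scoring just described, assuming a simple trial log of (condition, correct, response time) tuples with made‐up values; real data came from the Millisecond task software, and only correct trials contribute to each condition's mean response time.

```python
from statistics import mean

# Hypothetical trial log: (condition, response_correct, response_time_seconds).
trials = [
    ("congruent", True, 1.10), ("congruent", True, 1.30), ("congruent", False, 1.90),
    ("incongruent", True, 1.65), ("incongruent", False, 2.10), ("incongruent", True, 1.80),
]

def mean_rt(trials, condition):
    """Mean response time over correct trials in the given condition."""
    return mean(rt for cond, correct, rt in trials if cond == condition and correct)

congruent_rt = mean_rt(trials, "congruent")      # automatic word reading
incongruent_rt = mean_rt(trials, "incongruent")  # requires inhibiting word reading
print(congruent_rt, incongruent_rt)              # 1.2 1.725
```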

General Procedures

All procedures were approved by The Ohio State University Institutional Review Board. Participants were tested in a single two‐hour session. First, hearing thresholds and screening measures were obtained. Participants then completed sentence recognition testing, with the different sentence materials presented in blocks and sentence order randomized. Lastly, participants completed the neurocognitive testing, with task order randomized.

Data Analyses

Independent‐samples t‐tests were performed to identify differences in neurocognitive scores between the CI and NH groups. Pearson product‐moment correlation analyses were performed among the neurocognitive and sentence recognition measures.
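Both analyses can be reproduced with standard tools. The sketch below uses SciPy on simulated placeholder data (random draws loosely matched to the group means and SDs reported in Table 3), since the individual‐level data are not included here; it is an illustration of the analysis pipeline, not the study's actual computation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated placeholder group scores (see Table 3 for the real summary stats).
nh_forward_memory = rng.normal(13.0, 2.3, 30)
ci_forward_memory = rng.normal(11.8, 2.3, 30)

# Independent-samples t-test comparing the NH and CI groups.
t, p = stats.ttest_ind(nh_forward_memory, ci_forward_memory)

# Pearson product-moment correlation, e.g., Stroop incongruent response time
# versus a sentence recognition score within the CI group (simulated values).
stroop_rt = rng.normal(1.72, 0.48, 30)
sentence_score = 70.0 - 15.0 * (stroop_rt - 1.72) + rng.normal(0.0, 10.0, 30)
r, p_r = stats.pearsonr(stroop_rt, sentence_score)

print(f"t = {t:.2f} (p = {p:.3f}); r = {r:.2f} (p = {p_r:.3f})")
```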

RESULTS

For the CI group, side of implantation (right, left, or bilateral) did not influence any of the neurocognitive or sentence recognition scores (p > .50). Additionally, no differences in performance were found between CI users who wore only CIs and those who wore a CI plus a hearing aid (p > .50). Therefore, all CI users were analyzed together in subsequent analyses.

On screening measures, CI users performed significantly more poorly than NH peers on word reading (WRAT) and cognitive functioning (MMSE), though all participants scored within the normal range. Item analyses of the MMSE revealed that 74% of the errors in CI users' responses occurred on questions requiring verbal working memory (e.g., recalling a three‐word list). The CI and NH groups did not differ on global nonverbal intelligence (Figure Ground, Form Completion, and Visual Patterns), controlled fluency (Attention Sustained), reverse working memory (Reverse Memory), or inhibition‐concentration (Verbal Stroop; see Table 3). CI users scored more poorly than NH participants on forward working memory (Forward Memory), although their scores remained within the normal range. Scores on the sentence recognition assessments were not normally distributed; therefore, arcsine transformations were computed and used for all subsequent analyses. Sentence recognition scores were not directly compared between the CI and NH groups, because the groups were tested at different SNRs, but mean scores are shown in Table 3.
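The paper does not specify which arcsine variant was used; a common choice for proportion data, sketched below as an assumption, is the variance‐stabilizing arcsine square‐root transform.

```python
import numpy as np

def arcsine_transform(percent_correct) -> np.ndarray:
    """Arcsine square-root transform for proportion scores: 2 * arcsin(sqrt(p))."""
    p = np.clip(np.asarray(percent_correct, dtype=float) / 100.0, 0.0, 1.0)
    return 2.0 * np.arcsin(np.sqrt(p))

# Example: raw percent-correct scores mapped onto the transformed (radian) scale.
print(arcsine_transform([0.0, 24.6, 70.6, 100.0]))  # ~[0.00, 1.04, 2.00, 3.14]
```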

Table 3.

Group mean neurocognitive and sentence recognition scores and results of independent‐samples t‐tests. Sentence recognition scores were not compared between groups, because the signal‐to‐noise ratio (SNR) differed between groups: for CI users, long, complex and short, meaningful sentences were presented at +3 dB SNR and nonsense sentences in quiet, whereas for NH listeners all sentence materials were presented at −3 dB SNR.

Measure | NH N | NH Mean (SD) | CI N | CI Mean (SD) | t value | p value
Figure Ground (scaled score) | 30 | 11.6 (5.2) | 30 | 11.2 (3.2) | .36 | .72
Form Completion (scaled score) | 30 | 10.9 (2.4) | 30 | 11.0 (2.9) | .10 | .92
Visual Patterns (scaled score) | 30 | 12.4 (2.6) | 30 | 11.8 (2.5) | .89 | .38
Attention Sustained (scaled score) | 30 | 10.2 (1.9) | 30 | 9.6 (2.0) | 1.20 | .24
Forward Memory (scaled score) | 30 | 13.0 (2.3) | 30 | 11.8 (2.3) | 2.08 | .04
Reverse Memory (scaled score) | 30 | 13.5 (2.4) | 30 | 12.7 (2.2) | 1.44 | .16
Verbal Stroop, Congruent (response time, s) | 30 | 1.22 (.30) | 28 | 1.34 (.47) | 1.15 | .26
Verbal Stroop, Incongruent (response time, s) | 30 | 1.57 (.47) | 28 | 1.72 (.48) | 1.16 | .25
Sentence Recognition, Long, Complex (% words correct) | 30 | 66.7 (14.4) | 30 | 24.6 (22.4) | n/a | n/a
Sentence Recognition, Short, Meaningful (% words correct) | 30 | 81.7 (9.3) | 30 | 40.5 (35.0) | n/a | n/a
Sentence Recognition, Nonsense (% words correct) | 30 | 38.8 (11.7) | 30 | 70.6 (19.0) | n/a | n/a

The first goal of this study was to examine whether neurocognitive skills, assessed using non‐auditory tasks, were associated with sentence recognition performance. Correlations between neurocognitive scores and sentence recognition scores are shown in Table 4. For CI users, only one of the neurocognitive domains, inhibition‐concentration, was significantly associated with all three sentence recognition scores (p = .02 – .03 across sentence measures). Specifically, the response times from the “incongruent” condition correlated with sentence recognition scores (see Figure 1), but response times from the “congruent” condition did not. This finding suggests that speed of inhibitory control, but not general response speed, was associated with sentence recognition in CI users. For NH controls, none of the neurocognitive scores were associated with sentence recognition. Because word reading (WRAT) and cognitive functioning (MMSE) scores were poorer for CI users than NH peers, these were also examined for correlations with sentence recognition scores; no significant correlations were identified.

Table 4.

r values from correlation analyses with recognition of words in sentences. CI users were tested at +3 dB SNR for long, complex and short, meaningful sentences, and in quiet for nonsense sentences. NH listeners were tested at −3 dB SNR for all sentence materials.

Measure | NH: Long, Complex | NH: Short, Meaningful | NH: Nonsense | CI: Long, Complex | CI: Short, Meaningful | CI: Nonsense
Figure Ground (scaled score) | .05 | .02 | .09 | .15 | .13 | -.03
Form Completion (scaled score) | .13 | -.11 | .01 | -.09 | -.16 | -.17
Visual Patterns (scaled score) | .24 | -.03 | .32 | .33 | .26 | .23
Attention Sustained (scaled score) | .14 | .07 | -.08 | .14 | .19 | .29
Forward Memory (scaled score) | -.10 | -.35 | .17 | .23 | .23 | .14
Reverse Memory (scaled score) | .06 | -.11 | .08 | .20 | .20 | .04
Verbal Stroop, Congruent (response time) | -.04 | .20 | .07 | -.28 | -.29 | -.36
Verbal Stroop, Incongruent (response time) | -.14 | -.05 | -.03 | -.41* | -.43* | -.43*

* p < .05; ** p < .01

Figure 1. Correlations between sentence recognition scores and inhibition‐concentration response times for cochlear implant users. Participants were tested at +3 dB SNR for long, complex sentences and short, meaningful sentences, and in quiet for nonsense sentences.

The second goal of the study was to determine whether the relations between neurocognitive skills and sentence recognition differed between the CI and NH groups. It was predicted that the pattern of correlations would differ for CI users relative to NH peers, because of the greater degree of spectro‐temporal degradation experienced by CI listeners. As shown in Table 4, no significant correlations between neurocognitive scores and sentence recognition were found for the NH participants. Thus, it can be concluded that inhibition‐concentration skills contributed to sentence recognition in CI users, but not in NH peers.

DISCUSSION

This study was designed to examine whether the neurocognitive abilities of postlingually deafened adults with contemporary CIs, as assessed using non‐auditory measures, would be associated with the ability to recognize words in sentences. Moreover, the study aimed to examine whether relationships among neurocognitive measures and sentence recognition differed between CI and NH listeners.

Results of this study demonstrated that neurocognitive functions were generally similar for CI users and their NH age‐matched peers. CI users scored more poorly than our sample of NH peers on Forward Memory and the MMSE (the latter primarily reflecting relative deficits on MMSE items requiring verbal working memory), although CI users' scores on both measures were within the normal range. Reading scores were also poorer for the CI group than for NH peers. However, we cannot necessarily attribute these differences to hearing loss or CI use. Recent studies have suggested that neurocognitive functions decline with worsening hearing loss, and some even suggest that cochlear implantation may reverse these declines.34 Future studies are required to examine these effects in detail.

Turning to the relations between neurocognitive functions and speech recognition, our first hypothesis was supported: the inhibition‐concentration skills of CI users were significantly correlated with recognition of words in all three types of sentence materials, with faster inhibition responses associated with better sentence recognition. Although inhibition‐concentration skills have not previously been examined in adult CI users, these results are consistent with findings by Sommers and Danielson, who identified individual differences in inhibitory control as contributing to sentence recognition performance in older adults with NH.27 We speculate that inhibition‐concentration abilities may be particularly important for CI users during speech recognition, in which they must ignore irrelevant stimuli (noise) and/or inhibit the perception of incorrect lexical items. This explanation is consistent with models of speech perception that emphasize the role working memory plays in inhibiting interference from irrelevant information, or in inhibiting prepotent but incorrect responses.35 For example, in the Ease of Language Understanding (ELU) model, successful speech perception under degraded listening conditions requires a shift from rapid automatic processing to more effortful, controlled processing, which is heavily dependent on working memory capacity.36 The relations among inhibition‐concentration, working memory capacity, and speech recognition processes deserve further exploration.

In contrast to inhibition‐concentration, controlled fluency and non‐auditory working memory skills were not associated with speech recognition scores. At least two conclusions may be drawn from these findings. First, exerting executive control over linguistic representations, as opposed to visual representations, may relate most strongly to speech recognition skills; however, our results are not consistent with those of Knutson and colleagues, who demonstrated relations between speech recognition measures and visually presented sequential processing tasks.16, 17, 18 Alternatively, the measures of neurocognitive functioning from the Leiter‐3 may not be sufficiently sensitive to the neurocognitive abilities that underlie spoken language recognition, or our sample sizes may not have been large enough to detect significant relations. Further research is necessary to disentangle these possibilities.

The second hypothesis tested was that relations among neurocognitive skills and sentence recognition would differ between CI users and NH listeners. This hypothesis was supported: faster inhibition was associated with better sentence recognition only for CI users.

Several possibilities may explain the lack of significant correlations between neurocognitive functions and speech recognition for NH listeners. One explanation is that NH listeners' ranges of performance on the speech recognition tasks were much narrower than those of the CI users; this restricted variance in speech recognition scores may have weakened the observed relationships with neurocognitive scores. A second explanation is that neurocognitive functioning relates differently to speech recognition for CI and NH listeners. Such a differential relation is consistent with recent findings: Füllgrabe and Rosen demonstrated that neurocognitive skills (particularly working memory capacity) contribute little to NH listeners' performance on speech recognition in noise,37 in contrast with several studies of adults with hearing loss.8, 9, 10 Third, testing listeners under noise conditions that provide greater informational masking (e.g., multi‐talker babble), rather than the energetic masking provided by the speech‐shaped noise used here, might better reveal top‐down processing contributions to speech recognition. Finally, although our primary analyses correlated sentence recognition with non‐auditory neurocognitive skills, we also correlated five additional measures obtained from testing (global intellectual skills: Figure Ground, Form Completion, and Visual Patterns; reading skills: WRAT; and the cognitive impairment screen: MMSE) with the neurocognitive assessments, thereby providing clinicians with more comprehensive information about functioning following hearing loss and cochlear implantation. However, these additional correlations increased our risk of experiment‐wise error and should be interpreted with caution. Additional studies will be required to better understand the differential relations between NH listeners and patients with hearing loss, including those with CIs.

CONCLUSION

Our findings indicate that inhibition‐concentration skills contribute to CI users' abilities to recognize words in sentences, whereas the other neurocognitive measures employed in this study did not predict word recognition. These findings provide further evidence for the role of neurocognitive processing in CI users and suggest potential benefits of developing clinical aural rehabilitation programs that target inhibition‐concentration skills.

Acknowledgments

Research reported in this publication was supported by the Triological Society Career Development Award and the American Speech‐Language‐Hearing Foundation/Acoustical Society of America Speech Science Award to Aaron Moberly. Normal‐hearing participants were recruited through ResearchMatch, which is funded by the NIH Clinical and Translational Science Award (CTSA) program, grants UL1TR000445 and 1U54RR032646‐01. The authors would like to acknowledge Susan Nittrouer and Joanna Lowenstein for their development of sentence recognition materials used, and Lauren Boyce and Taylor Wucinich for assistance in data collection and scoring. The authors declare no conflicts of interest.

Data from this study were presented at the 2016 Triological Society annual meeting of the Combined Otolaryngology Spring Meetings (COSM), May 20‐21, 2016, in Chicago, IL.

Financial Disclosures: Research reported in this publication was supported by the Triological Society Career Development Award and the American Speech‐Language‐Hearing Foundation Speech Science Award to Aaron Moberly. Normal‐hearing participants were recruited through ResearchMatch, which is funded by the NIH Clinical and Translational Science Award (CTSA) program, grants UL1TR000445 and 1U54RR032646‐01.

Conflicts of Interest: None

Bibliography

1. Firszt JB, Holden LK, Skinner MW, et al. Recognition of speech presented at soft to loud levels by adult cochlear implant recipients of three cochlear implant systems. Ear Hear 2004;25:375–387.
2. Gifford RH, Shallop JK, Peterson AM. Speech recognition materials and ceiling effects: Considerations for cochlear implant programs. Audiol Neurotol 2008;13:193–205.
3. Holden LK, Finley CC, Firszt JB, et al. Factors affecting open‐set word recognition in adults with cochlear implants. Ear Hear 2013;34:342–360.
4. Moberly AC, Houston DM, Castellanos I, Boyce L, Nittrouer S. Linguistic knowledge and working memory in adults with cochlear implants. Under review.
5. Holden LK, Reeder RM, Firszt JB, Finley CC. Optimizing the perception of soft speech and speech in noise with the Advanced Bionics cochlear implant system. Int J Audiol 2011;50:255–269.
6. Kenway B, Tam YC, Vanat Z, Harris F, Gray R, Birchall J, et al. Pitch discrimination: An independent factor in cochlear implant performance outcomes. Otol Neurotol 2015;36:1472–1479.
7. Srinivasan AG, Padilla M, Shannon RV, Landsberger DM. Improving speech perception in noise with current focusing in cochlear implant users. Hear Res 2013;299:29–36.
8. Akeroyd MA. Are individual differences in speech reception related to individual differences in cognitive ability? A survey of twenty experimental studies with normal and hearing‐impaired adults. Int J Audiol 2008;47:53–71.
9. Arehart KH, Souza P, Baca R, Kates J. Working memory, age and hearing loss: Susceptibility to hearing aid distortion. Ear Hear 2013;34:251–260.
10. Rönnberg J, Lunner T, Zekveld A, et al. The Ease of Language Understanding (ELU) model: Theoretical, empirical, and clinical advances. Front Syst Neurosci 2013;7:1–17.
11. Heald SLM, Nusbaum HC. Speech perception as an active cognitive process. Front Syst Neurosci 2014;8:1–15.
12. Pisoni DB, Cleary M. Measures of working memory span and verbal rehearsal speed in deaf children after cochlear implantation. Ear Hear 2003;24(Suppl 1):106S–120S.
13. Jerger J, Jerger S, Pirozzolo F. Correlational analysis of speech audiometric scores, hearing loss, age, and cognitive abilities in the elderly. Ear Hear 1991;12:103–109.
14. Kidd GR, Watson CS, Gygi B. Individual differences in auditory abilities. J Acoust Soc Am 2007;122:418–435.
15. Surprenant AM, Watson CS. Individual differences in the processing of speech and nonspeech sounds by normal‐hearing listeners. J Acoust Soc Am 2001;110:2085–2095.
16. Knutson JF, Hinrichs JV, Tyler RS, Gantz BJ, Schartz HA, Woodworth G. Psychological predictors of audiological outcomes of multichannel cochlear implants: Preliminary findings. Ann Otol Rhinol Laryngol 1991;100:817–822.
17. Gantz BJ, Woodworth GG, Knutson JF, Abbas PJ, Tyler RS. Multivariate predictors of success with cochlear implants. Adv Otorhinolaryngol 1993;48:153–167.
18. Gantz BJ, Woodworth G, Abbas P, Knutson JF, Tyler RS. Multivariate predictors of audiological success with cochlear implants. Ann Otol Rhinol Laryngol 1993;102:909–916.
19. Gfeller K, Oleson J, Knutson JF, Breheny P, Driscoll V, Olszewski C. Multivariate predictors of music perception and appraisal by adult cochlear implant users. J Am Acad Audiol 2008;19:120–134.
20. Amitay S. Forward and reverse hierarchies in auditory perceptual learning. Learn Percept 2009;1:59–68.
21. Humes LE, Floyd SS. Measures of working memory, sequence learning, and speech recognition in the elderly. J Speech Lang Hear Res 2005;48:224–235.
22. Cahana‐Amitay D, Spiro A III, Sayers JT, et al. How older adults use cognition in sentence‐final word recognition. Neuropsychol Dev Cogn B Aging Neuropsychol Cogn 2015;16:1–27.
23. Janse E. A non‐auditory measure of interference predicts distraction by competing speech in older adults. Neuropsychol Dev Cogn B Aging Neuropsychol Cogn 2012;19:741–758.
24. Pichora‐Fuller MK. Processing speed and timing in aging adults: Psychoacoustics, speech perception, and comprehension. Int J Audiol 2003;42:S59–S67.
25. Tun PA, McCoy S, Wingfield A. Aging, hearing acuity, and the attentional costs of effortful listening. Psychol Aging 2009;24:761–766.
26. Wingfield A, Tun PA. Cognitive supports and cognitive constraints on comprehension of spoken language. J Am Acad Audiol 2007;18:548–558.
27. Sommers MS, Danielson SM. Inhibitory processes and spoken word recognition in young and older adults: The interaction of lexical competition and semantic context. Psychol Aging 1999;14:458–472.
28. Folstein MF, Folstein SE, McHugh PR. "Mini‐mental state": A practical method for grading the cognitive state of patients for the clinician. J Psychiatr Res 1975;12:189–198.
29. Wilkinson GS, Robertson GJ. Wide Range Achievement Test. 4th ed. Lutz, FL: Psychological Assessment Resources; 2006.
30. Nittrouer S, Burton LT. The role of early language experience in the development of speech perception and phonological processing abilities: Evidence from 5‐year‐olds with histories of otitis media with effusion and low socioeconomic status. J Commun Disord 2005;38:29–63.
31. Nittrouer S, Lowenstein JH. Learning to perceptually organize speech signals in native fashion. J Acoust Soc Am 2010;127:1624–1635.
32. Nittrouer S, Tarr E, Bolster V, Caldwell‐Tarr A, Moberly AC, Lowenstein JH. Low‐frequency signals support perceptual organization of implant‐simulated speech for adults and children. Int J Audiol 2014;53:270–284.
33. Roid GH, Miller LJ, Pomplun M, Koch C. Leiter International Performance Scale (Leiter‐3). Los Angeles: Western Psychological Services; 2013.
34. Cosetti MK, Pinkston JB, Flores JM, et al. Neurocognitive testing and cochlear implantation: Insights into performance in older adults. Clin Interv Aging 2016;11:603–613.
35. Wingfield A. Evolution of models of working memory and cognitive resources. Ear Hear 2016;37:35S–43S.
36. Rönnberg J, Lunner T, Zekveld A, et al. The Ease of Language Understanding (ELU) model: Theoretical, empirical, and clinical advances. Front Syst Neurosci 2013;7:1–17.
37. Füllgrabe C, Rosen S. Investigating the role of working memory in speech‐in‐noise identification for listeners with normal hearing. In: van Dijk P, et al., eds. Physiology, Psychoacoustics and Cognition in Normal and Impaired Hearing. Advances in Experimental Medicine and Biology. 2016;894:29–36.
