Abstract
Hypothesis:
Clinical adult cochlear implant (CI) candidacy evaluations rely heavily on measures of sentence recognition under the best-aided listening conditions. The hypothesis tested in this study was that nonauditory measures of neurocognitive processes would contribute to scores on preoperative sentence recognition for CI candidates, above and beyond hearing ability as assessed using pure-tone average (PTA). Support for this hypothesis would suggest that best-aided sentence recognition is not simply a measure of hearing ability; rather, neurocognitive functions contribute to performance and should be considered while counseling patients during CI candidacy evaluation about postoperative rehabilitative and outcome expectations.
Background:
Neurocognitive functions, such as working memory capacity, inhibition-concentration, information processing speed, and nonverbal reasoning, contribute to aided speech recognition outcomes in adults with hearing loss. This study examined the roles of these neurocognitive factors in preoperative speech recognition performance in adults evaluated for CI candidacy.
Methods:
Thirty-one postlingually deafened adult CI candidates were enrolled. Participants were assessed using nonauditory measures of working memory capacity, inhibition-concentration, information processing speed, and nonverbal reasoning. Measures of sentence recognition in quiet and in multitalker babble (AzBio sentences) as well as sentences from the City University of New York in quiet were collected under best-aided conditions.
Results:
AzBio sentence recognition scores in babble were predicted significantly by scores of working memory capacity after accounting for PTA. Similarly, the City University of New York sentence recognition scores were predicted significantly by nonverbal reasoning after accounting for PTA.
Conclusions:
Findings support the idea that clinical measures of sentence recognition may be affected to varying degrees by neurocognitive functions, and these functions should be considered during evaluation for CI candidacy.
Keywords: Cochlear implants, Cognition, Speech perception
When evaluating adults with moderate-to-profound sensorineural hearing loss in clinical settings, we currently rely on a few basic audiologic measures to determine cochlear implant (CI) candidacy. This audiologic information is used to determine whether a patient would most likely benefit from a CI in terms of functional hearing status. Specifically, measures of sentence recognition under the best-aided listening conditions serve as the primary measure that determines an individual patient’s CI candidacy. It is generally accepted that best-aided sentence recognition serves as a reasonable measure of functional auditory processing and global communication ability in these patients. Using traditional CI criteria, those patients who score more poorly than a specified value on best-aided sentence recognition (typically 60% words correct in sentences or 40% words correct by Medicare standards) are deemed reasonable CI candidates, and implantation is typically recommended.
However, an important assumption made in treating best-aided sentence recognition as the diagnostic measure for CI candidacy is that this assessment provides a reliable estimate of auditory processing abilities. That is, clinicians generally assume that if an individual scores below the defined percent words correct criterion under best-aided conditions, restoration of auditory input through a CI should consistently result in improved speech recognition and communicative abilities for that patient. Although this is a logical assumption, variability in postoperative speech recognition outcomes is well established and has proven difficult to explain. There is now increasing evidence to suggest that upstream neurocognitive processes contribute to a listener’s ability to understand degraded auditory speech signals. In fact, a number of recent studies in patients with mild-to-moderate sensorineural hearing loss, as well as in experienced adult CI users, have supported the notion that linguistic and neurocognitive abilities contribute to speech recognition (1–4). In the latter group, measures of information processing speed, inhibition-concentration, and nonverbal reasoning (Moberly et al., under review) have all been found to correlate with sentence recognition abilities (3–5). Thus, it is reasonable to predict that these (and/or other) neurocognitive functions would also contribute to the ability of adult CI candidates to recognize degraded sentence materials under best-aided conditions in the preoperative period. Although CIs provide novel acoustic–phonetic signals that deliver highly degraded spectral representations of speech to the listener, listeners with moderate-to-severe sensorineural hearing loss who are undergoing CI evaluation under best-aided listening conditions also face degraded acoustic–phonetic speech signals, albeit of different quality than those delivered by a CI. As a result, it is likely that similar neurocognitive functions contribute to listeners’ sentence recognition during the best-aided CI candidacy evaluation.
The purpose of the current study was to examine the neurocognitive functions of adult patients with moderate-to-severe sensorineural hearing loss who were found to be traditional CI candidates in a tertiary adult CI center. Patients who met our center’s CI candidacy evaluation criteria (typically ≤60% words correct for AzBio sentences in quiet or at a +10 dB signal-to-noise ratio [SNR], or ≤40% for Medicare patients) were invited to participate in a session of neurocognitive testing using a battery of visual measures of working memory capacity, inhibition-concentration, information processing speed, and nonverbal reasoning. Participants also underwent an additional sentence recognition assessment using research materials (the City University of New York [CUNY] sentences) to more broadly assess sentence recognition, including materials with which participants would not be familiar from clinical testing at our site. Preoperative neurocognitive measures were examined for relations with best-aided AzBio and CUNY sentence recognition after accounting for hearing ability (using pure-tone audiometry), with the hypothesis that neurocognitive functions would contribute to scores of sentence recognition for CI candidates. Support for this hypothesis would suggest that our clinical measures of best-aided sentence recognition for CI candidacy are not solely measures of hearing ability that can be restored by a CI; instead, these clinical CI candidacy measures should be considered complex measures of a combination of auditory processing and neurocognitive functioning. If so, we should recognize that preoperative best-aided sentence recognition testing is not simply an assessment of the functioning of the peripheral auditory system (i.e., how bad a listener’s hearing is). Moreover, we should not necessarily expect that restoration of auditory input via a CI would be sufficient to enhance sentence recognition processing for these patients. Instead, it may be important to develop more specific CI candidacy measures that separate the contributions of auditory processing and higher level neurocognitive processing in patients undergoing CI candidacy evaluation.
METHODS
Participants
Thirty-one postlingually deaf adults were enrolled and underwent testing before CI surgery. Patients from the Otolaryngology Department who had undergone a full CI candidacy evaluation and had been found to meet candidacy criteria for implantation were invited to enroll. Meeting CI candidacy meant that the patient demonstrated a moderate-to-profound sensorineural hearing loss, had undergone at least a 1-month hearing aid trial, and demonstrated ≤60% words correct in sentence recognition testing using AzBio (6) sentences in quiet with speech presented at 60 dBA, or at +10 dB SNR in multitalker babble, under binaural best-aided conditions; for Medicare patients, this criterion was set to ≤40%. Inclusion criteria included being a native English speaker with postlingual deafness, meaning that participants should have developed reasonably proficient language skills before losing their hearing. Twenty-eight (90.3%) participants reported onset of hearing loss after the age of 12 years (i.e., normal hearing until the time of puberty). The other three (9.7%) reported some degree of congenital hearing loss or onset of hearing loss during childhood. However, all participants had experienced early hearing aid intervention and typical auditory-only spoken language development during childhood, had been mainstreamed in conventional education programs, and had experienced progressive hearing losses into adulthood. Exclusion criteria for participation in the study included prelingual deafness, inner ear malformation on preoperative imaging (either computed tomography or magnetic resonance imaging), history of stroke or neurological disorder that might impact CI functioning (e.g., multiple sclerosis), or history of diagnosed cognitive impairment.
All enrolled participants were assessed at the beginning of the research testing session using several measures to be considered as covariates in analyses. A screening task for cognitive impairment was completed using a written version of the Mini-Mental State Examination (MMSE) (7), with an MMSE raw score ≥26 typically suggesting normal cognitive function. In the current study, five CI candidates had MMSE raw scores of ≤25 (the lowest was 22). A second test of basic word reading was completed using the Wide Range Achievement Test (WRAT) (8). All participants demonstrated corrected near vision better than 20/50, because all of the cognitive measures were presented visually. Socioeconomic status (SES) of participants was also collected because it may serve as a proxy for speech and language abilities. This was accomplished by quantifying SES based on a metric developed by Nittrouer and Burton (9), consisting of occupational and educational levels. Each of the two scales, occupational and educational level, ranged from one to eight, with eight being the highest level. These two numerical scores were then multiplied, resulting in scores between 1 and 64. Finally, a screening audiogram of unaided residual hearing was performed for each ear separately for all participants.
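As a worked illustration of this composite (a minimal sketch in Python; the function name and range checks are ours, not part of the original instrument):

```python
def ses_composite(occupation_level: float, education_level: float) -> float:
    """Nittrouer and Burton (9) SES metric: the product of the occupational
    and educational levels, each rated on a scale from one to eight
    (eight = highest), yielding composite scores between 1 and 64."""
    for level in (occupation_level, education_level):
        if not 1 <= level <= 8:
            raise ValueError("each level must fall between 1 and 8")
    return occupation_level * education_level

# Example: occupational level 6 and educational level 7 -> SES of 42,
# matching several of the scores reported in Table 1.
print(ses_composite(6, 7))  # 42
```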
Enrolled CI candidates were between the ages of 49 and 94 years (mean 69.6; SD 10.9). Duration of hearing loss ranged from 10 to 61 years (mean 33.7; SD 14.4). Details of individual CI participants can be found in Table 1. Group mean demographic and screening measure scores for participants are shown in Table 2.
TABLE 1.
Participant | Gender | Age (Years) | SES | Duration of Hearing Loss (Years) | Hearing Aid | Etiology of Hearing Loss | Better ear PTA (dB HL) |
---|---|---|---|---|---|---|---|
300001 | F | 76 | 12 | 61 | Yes | Ménière’s disease | 101.3 |
300002 | M | 82 | – | 37 | Yes | Noise exposure | 78.8 |
300003 | M | 77 | 9 | 37 | Yes | Progressive as adult, noise exposure | 120 |
300004 | M | 56 | 36 | 13 | Yes | Ménière’s disease | 80 |
300005 | M | 60 | 18 | 48 | Yes | Physical trauma | 112.5 |
300007 | M | 94 | 42 | 39 | Yes | Noise exposure | 82.5 |
300011 | M | 55 | 42 | 42 | Yes | Progressive as adult | 80 |
300012 | M | 67 | 7.5 | 27 | Yes | Noise exposure | 77.5 |
300014 | M | 72 | 12 | 12 | Yes | Progressive as adult, noise exposure | 75 |
300015 | F | 65 | 15 | 27 | Yes | Genetic, progressive as adult | 81.3 |
300017 | M | 78 | 6 | 58 | Yes | Progressive as adult, noise exposure | 87.5 |
300018 | M | 82 | 36 | 44 | Yes | Genetic, progressive, noise exposure | 90 |
300019 | M | 65 | 42 | 35 | Yes | Genetic, noise exposure | 91.3 |
300020 | M | 74 | 36 | 11 | Yes | Genetic, progressive, noise exposure | 53.4 |
300021 | M | 77 | 42 | 27 | Yes | Unknown | 77.5 |
300022 | F | 61 | – | 41 | Yes | Noise exposure, chronic ear infections | 73.7 |
300023 | M | 68 | 9 | 28 | Yes | Physical trauma | 81.3 |
300024 | F | 58 | 15 | 52 | Yes | Genetic, progressive as adult | 120 |
300025 | M | 68 | 42 | 15 | Yes | Progressive as adult, noise exposure | 71 |
300026 | M | 79 | – | 10 | No | Progressive as adult, noise exposure | 97.5 |
300027 | M | 75 | 15 | 35 | Yes | Progressive as adult, noise exposure | 76 |
300028 | F | 54 | 42 | 54 | Yes | Congenital progressive, genetic | 106.3 |
300029 | F | 75 | 39 | 18 | Yes | Progressive as adult | 67 |
300030 | M | 91 | 28 | 43 | Yes | Genetic, progressive as adult, noise exposure | 114 |
300031 | M | 67 | 49 | 22 | Yes | Genetic | 78 |
300032 | M | 65 | 9 | 40 | Yes | Progressive as adult, Ménière’s | 65 |
300033 | M | 65 | 20 | 15 | Yes | Progressive as adult, noise exposure | 80 |
300034 | M | 76 | 30 | 31 | Yes | Genetic, noise exposure | 77.5 |
300035 | F | 53 | 30 | 41 | Yes | Genetic, progressive as adult | 80 |
300036 | M | 73 | 32.5 | 33 | Yes | Genetic, progressive as adult | 66 |
300038 | F | 49 | 42 | 49 | Yes | Congenital progressive | 96 |
Where SES is not reported, participant reported “retired” but did not specify previous occupation. HL indicates hearing level; PTA, pure-tone average; SES, socioeconomic status.
TABLE 2.
Participants (N = 31) | Mean | (SD) |
---|---|---|
Demographics | ||
Age (years) | 69.6 | (10.9) |
Total duration of hearing loss (years) | 33.7 | (14.4) |
Reading (standard score) | 95.3 | (11.9) |
MMSE (raw score) | 27.8 | (2.1) |
SES | 27.1 | (14.0) |
Better ear PTA (dB HL) | 85.2 | (16.7) |
HL indicates hearing level; MMSE, Mini-Mental State Examination; PTA, pure-tone average (0.5, 1, 2, and 4 kHz); SES, socioeconomic status.
EQUIPMENT AND MATERIALS
All testing took place in sound-proof booths and acoustically insulated rooms. All research tests requiring verbal responses were audiovisually recorded for later scoring. Participants wore frequency modulation (FM) transmitters in specially designed vests, which routed their responses directly into the camera and permitted later off-line scoring of tasks. For 25% of responses, each task was scored independently by two individuals to ensure reliable results; reliability exceeded 95% for all measures.
Visual stimuli for neurocognitive measures were presented on paper or a touch screen monitor made by Keytec Inc. (Garland, TX), placed 2 ft in front of the participant. Auditory stimuli were presented in the clinic (AzBio sentences in quiet and in babble) at 60 dBA, or they were presented in the laboratory (CUNY sentences) via a Roland MA-12C (Los Angeles, CA) speaker placed 1 m in front of the participant at 0° azimuth, calibrated to 68 dB SPL using a sound level meter. The measures outlined below were collected.
Measures of Sentence Recognition
Participants were tested in their best-aided condition (typically binaural with hearing aids). Two sets of speech recognition materials were included for all participants to assess recognition of sentences under three different conditions. The clinical measures of AzBio sentences in quiet and in 10-talker babble at +10 dB SNR were used, because they are the standard clinical measures administered during our CI candidacy evaluations under best-aided listening conditions. Additionally, research measures of CUNY sentence recognition under auditory-only presentation were collected and are presented here:
AzBio sentences in quiet – All participants were tested using these sentences, presented in quiet at 60 dBA (6). Twenty sentences were presented. Scores were percentage of key words repeated correctly.
AzBio sentences in 10-talker babble at +10 dB SNR – Sixteen CI candidates also underwent AzBio sentence recognition testing in babble, at the discretion of the clinical audiologist. Twenty sentences were presented at 60 dBA, with babble presented at 50 dBA.
CUNY sentences in quiet – Sentences from the City University of New York (CUNY) corpus (10) were presented in auditory-only, combined audiovisual, and visual-only conditions, but only auditory-only performance is discussed here. Twelve sentences, recorded by a single female talker, were presented at 68 dB SPL in quiet over the loudspeaker. Scores were percentage of words repeated correctly.
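Scoring for these sentence materials reduces to a percent-correct computation pooled over the scored (key) words; the following is a minimal sketch with illustrative counts only, not the clinical scoring software:

```python
def percent_words_correct(words_scored: list, words_correct: list) -> float:
    """Percentage of scored words repeated correctly, pooled across
    sentences. Each list holds one count per sentence presented."""
    if len(words_scored) != len(words_correct):
        raise ValueError("one count per sentence is expected in both lists")
    return 100.0 * sum(words_correct) / sum(words_scored)

# Example: 20 sentences with 5 key words each, 56 key words repeated
# correctly overall -> 56.0% key words correct.
print(percent_words_correct([5] * 20, [3] * 16 + [2] * 4))  # 56.0
```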
Measures of Neurocognitive Functioning
Four measures of neurocognitive functioning were collected. Instructions for all measures were given in written form.
Verbal working memory capacity – Visual digit span, object span, and symbol span – These tasks assessed verbal working memory capacity using visual presentation. The digit span task was based on the original auditory digit span task from the Wechsler Intelligence Scale for Children, Fourth Edition, Integrated (11). To familiarize participants with the stimuli, a single digit (one through nine) first appeared on a computer monitor, followed by a screen with a 3×3 matrix of all nine digits; participants were asked to touch the digit that had appeared. During the task proper, participants watched a sequence of digits presented one at a time on the screen and, once the matrix screen appeared, were asked to reproduce the sequence by touching the digits in the correct serial order. The string length began at two digits and increased gradually as the participant continued to answer correctly, up to a maximum of seven digits. Each string length was presented twice (different stimuli, same length), and when the participant failed to reproduce two strings of the same length correctly, the task automatically terminated. Total correct items served as the performance score (a procedural sketch of this adaptive logic appears after the list of measures below).
For visual object span and symbol span, the procedures were identical, except that easily named objects, or nonsense symbols without easily assigned verbal labels, respectively, were used.
Inhibition-concentration – Stroop – This task evaluated inhibitory control abilities (12), and the computerized version is publicly available (http://www.millisecond.com). Participants were shown a color word on the computer screen, presented in either the same or a different color font. The participant was asked to press the computer key on the keyboard that corresponded with the color of the font of the word, not the color name represented by the word. The Stroop task was divided into congruent “concentration” trials (color and color word matched), incongruent “inhibition” trials (color and color word did not match), and control “processing-speed” trials (a colored box on the screen). Response times were computed for each condition, with longer response times (slower processing) reflected by larger Stroop scores.
Information processing speed for lexical/phonological access – Test of Word Reading Efficiency, Version 2 – The Test of Word Reading Efficiency, Version 2 is a measure of word reading accuracy and fluency, and can be considered an assessment of the speed of a participant’s lexical and phonological access (13). The test assesses two types of reading skills: the ability to accurately recognize and identify familiar real words, and the ability to “sound out” nonwords via phonologically decoding the nonwords. The participants read as many words as they could from the 108-word list in 45 seconds, followed by reading as many nonwords as they could from the 66-nonword list in 45 seconds. Two scores were computed: percent whole words correct and percent whole nonwords correct.
Nonverbal fluid reasoning – Raven’s Progressive Matrices – A computerized version of the Raven’s test was used to assess nonverbal intelligence or reasoning (14,15). The Raven’s presents visual displays of geometric designs in a matrix in which each design contains a missing piece, and participants must select a response box to complete the pattern. Participants completed as many items as possible in 10 minutes, and scores were total number of correct items.
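As referenced above for the span tasks, the adaptive presentation logic can be summarized in a brief procedural sketch. This is our reading of the described procedure, not the actual test software: the presentation-and-response step is a placeholder, and per-digit credit is only one plausible interpretation of the "total correct items" score.

```python
import random

def run_span_task(present_and_score, min_len=2, max_len=7, trials_per_len=2):
    """Sketch of the adaptive visual span procedure described above.

    `present_and_score(sequence)` stands in for the real presentation and
    touch-response step; it should return True when the participant
    reproduces `sequence` in the correct serial order.
    """
    total_correct = 0
    for length in range(min_len, max_len + 1):
        failures = 0
        for _ in range(trials_per_len):  # two strings per length, different stimuli
            sequence = random.sample(range(1, 10), length)  # digits 1-9 (no repeats assumed)
            if present_and_score(sequence):
                total_correct += length  # per-digit credit: one reading of "total correct items"
            else:
                failures += 1
        if failures == trials_per_len:
            break  # both strings of the same length failed -> terminate
    return total_correct
```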
General Approach
The study protocol was approved by the local Institutional Review Board. All participants provided informed, written consent, and were reimbursed $15 per hour for participation. Research testing was completed over a single 2-hour session, with frequent breaks to prevent fatigue. During testing, participants were tested in the best-aided condition, including any hearing aids, except during the unaided audiogram.
Data Analyses
A multistep approach to analysis was performed. First, bivariate correlations were computed among the different neurocognitive measures to identify issues of collinearity. For measures that correlated at r > 0.80, a composite neurocognitive measure was created by summing z-transformed scores on the individual assessments. Next, bivariate correlations were performed between each sentence recognition measure and each neurocognitive measure. Multivariable regression analyses were then performed with each of the sentence recognition measures as outcome variables. In each blockwise regression analysis, unaided better ear pure-tone average (PTA) across four frequencies (0.5, 1, 2, and 4 kHz) was entered as the first predictor in Block 1. Only neurocognitive variables that correlated significantly with sentence recognition scores at p < 0.05 were entered into the regression in Block 2. This approach served to reduce the number of variables in the regression analyses.
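For illustration, the collinearity screen and composite construction might look as follows (a sketch only; the data frame and column names are ours, and the original analyses were presumably run in a standard statistics package):

```python
import pandas as pd
from scipy import stats

def collinearity_composites(df: pd.DataFrame, threshold: float = 0.80) -> pd.DataFrame:
    """Screen pairs of measures for collinearity (|r| > threshold) and
    build a summed z-score composite for each flagged pair, mirroring
    the screening step described above."""
    r = df.corr()  # pairwise Pearson correlations
    composites = {}
    cols = list(df.columns)
    for i, a in enumerate(cols):
        for b in cols[i + 1:]:
            if abs(r.loc[a, b]) > threshold:
                # Sum of z-transformed scores on the two collinear measures
                composites[f"{a}+{b} composite"] = (
                    stats.zscore(df[a]) + stats.zscore(df[b])
                )
    return pd.DataFrame(composites, index=df.index)

# Usage sketch: in these data, only the Stroop control and congruent
# response times exceeded r = 0.80, yielding a single composite column.
# composites = collinearity_composites(neurocog_df)
```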
RESULTS
Group mean scores for sentence recognition and neurocognitive measures among CI candidates are shown in Table 3. Results demonstrate variability among CI candidates in both sentence recognition and neurocognitive scores.
TABLE 3.
Sentence Recognition | Mean | (SD) | N |
---|---|---|---|
AzBio sentences in quiet (% key words correct) | 28.0 | (23.1) | 31 |
AzBio sentences in babble (% key words correct) | 19.7 | (16.0) | 16 |
CUNY sentences in quiet, auditory-only (% words correct) | 29.1 | (29.0) | 31 |
Neurocognitive Measures | |||
Digit span (total correct) | 36.9 | (19.3) | 31 |
Object span (total correct) | 29.4 | (12.8) | 31 |
Symbol span (total correct) | 6.1 | (6.7) | 31 |
Stroop congruent response time (ms) | 1432.6 | (474.6) | 31 |
Stroop incongruent response time (ms) | 2006.3 | (1071.9) | 31 |
Stroop control response time (ms) | 1550.2 | (751.9) | 31 |
TOWRE words (percent correct) | 68.9 | (10.7) | 31 |
TOWRE nonwords (percent correct) | 54.4 | (19.9) | 31 |
Raven’s nonverbal fluid reasoning (total correct) | 9.3 | (5.2) | 31 |
ms indicates millisecond; TOWRE, Test of Word Reading Efficiency.
Before completing our analyses of interest, Pearson correlation analyses were completed between sentence recognition measures and demographic and audiologic measures to identify whether any of these scores should serve as covariates in our analyses, with results shown in Table 4. Of the measures included, only better ear PTA correlated significantly with best-aided sentence recognition scores, for both AzBio sentences in quiet and CUNY sentences.
TABLE 4.
Sentence Recognition | ||||||
---|---|---|---|---|---|---|
AzBio in Quiet (% Key Words Correct) (N = 31) | AzBio in Babble (% Key Words Correct) (N = 16) | CUNY (% Words Correct) (N = 31) | ||||
r Value | (95% Confidence Interval) | r Value | (95% Confidence Interval) | r value | (95% Confidence Interval) | |
Demographic/Audiologic Measures | ||||||
Age (years) | 0.14 | (−0.22 to 0.46) | −0.36 | (−0.71 to 0.16) | −0.13 | (−0.46 to 0.23) |
Better ear pure-tone average (dB HL) | −0.44* | (−0.68 to −0.10) | −0.18 | (−0.61 to 0.34) | −0.42* | (−0.67 to −0.08) |
MMSE (raw score) | 0.17 | (−0.19 to 0.49) | 0.34 | (−0.18 to 0.70) | 0.35 | (−0.05 to 0.62) |
Reading ability (WRAT score) | 0.18 | (−0.18 to 0.50) | 0.01 | (−0.48 to 0.49) | 0.22 | (−0.14 to 0.53) |
SES | 0.20 | (−0.16 to 0.51) | 0.16 | (−0.35 to 0.59) | 0.21 | (−0.15 to 0.52) |
Total duration hearing loss (years) | −0.09 | (−0.42 to 0.27) | 0.19 | (−0.33 to 0.61) | −0.16 | (−0.48 to 0.20) |
Duration hearing loss until CI (years) | −0.06 | (−0.40 to 0.30) | 0.23 | (−0.29 to 0.64) | −0.10 | (−0.43 to 0.26) |
*p < 0.05.
HL indicates hearing level; MMSE, Mini-Mental State Examination; SES, Socioeconomic status; WRAT, Wide Range Achievement Test.
Next, Pearson correlations were computed among the different neurocognitive measures, with results shown in Table 5. Only Stroop control and Stroop congruent response times demonstrated r > 0.80, raising concern for collinearity, so a composite Stroop control-congruent score (the sum of z-scores for Stroop control and Stroop congruent response times) was computed for use in subsequent analyses.
TABLE 5.
MMSE | Digit Span | Object Span | Symbol Span | Stroop Congruent | Stroop Incongruent | Stroop Control | TOWRE Words | TOWRE Nonwords | Raven’s | |
---|---|---|---|---|---|---|---|---|---|---|
MMSE (raw score) | 1 | 0.41* | 0.25 | −0.10 | −0.03 | −0.27 | −0.11 | 0.33 | 0.48** | 0.37* |
Digit span (items correct) | 1 | 0.65** | 0.29 | 0.06 | −0.10 | −0.06 | 0.05 | 0.53** | 0.30 | |
Object span (items correct) | 1 | 0.28 | −0.10 | −0.21 | −0.09 | −0.21 | 0.34 | 0.11 | ||
Symbol span (items correct) | 1 | −0.32 | −0.24 | −0.35 | −0.06 | 0.18 | 0.13 | |||
Stroop congruent (ms) | 1 | 0.63** | 0.92** | 0.15 | −0.04 | −0.33 | ||||
Stroop incongruent (ms) | 1 | 0.62** | −0.01 | −0.27 | −0.35 | |||||
Stroop control (ms) | 1 | −0.04 | −0.20 | −0.33 | ||||||
TOWRE words (% correct) | 1 | 0.53** | −0.06 | |||||||
TOWRE nonwords (% correct) | 1 | 0.21 | ||||||||
Raven’s (items correct) | 1 |
*p < 0.05.
**p < 0.01.
MMSE indicates Mini-Mental State Examination; ms, millisecond; TOWRE, Test of Word Reading Efficiency.
Next, Pearson correlation analyses were performed among the sentence recognition scores and the neurocognitive measures, with results shown in Table 6. For AzBio sentences in quiet, no neurocognitive measure correlated significantly with speech recognition scores. AzBio sentence scores in babble correlated with digit span scores (p < 0.001). CUNY scores correlated with digit span (p = 0.044) and Raven’s (p = 0.042) scores.
TABLE 6.
Sentence Recognition | ||||||
---|---|---|---|---|---|---|
AzBio in Quiet (% Key Words Correct) (N = 31) | AzBio in Babble (% Key Words Correct) (N = 16) | CUNY (% Words Correct) (N = 31) | ||||
r Value | (95% Confidence Interval) | r Value | (95% Confidence Interval) | r value | (95% Confidence Interval) | |
Neurocognitive Measures | ||||||
Digit span (items correct) | 0.31 | (−0.05 to 0.59) | 0.78** | (0.45 to 0.92) | 0.38* | (0.03 to 0.64) |
Object span (items correct) | −0.02 | (−0.37 to 0.33) | 0.41 | (−0.10 to 0.74) | 0.10 | (−0.26 to 0.43) |
Symbol span (items correct) | 0.10 | (−0.26 to 0.43) | −0.01 | (−0.49 to 0.48) | 0.25 | (−0.11 to 0.55) |
Stroop control-congruent composite (summed z-scores) | 0.10 | (−0.26 to 0.43) | 0.11 | (−0.40 to 0.56) | −0.04 | (−0.38 to 0.31) |
Stroop incongruent (ms) | 0.07 | (−0.29 to 0.41) | −0.26 | (−0.66 to 0.26) | −0.19 | (−0.50 to 0.17) |
TOWRE words (% correct) | 0.10 | (−0.26 to 0.43) | −0.15 | (−0.59 to 0.36) | 0.08 | (−0.28 to 0.42) |
TOWRE nonwords (% correct) | 0.11 | (−0.25 to 0.44) | 0.27 | (−0.25 to 0.66) | 0.22 | (−0.14 to 0.53) |
Raven’s (items correct) | 0.27 | (−0.09 to 0.56) | 0.30 | (−0.22 to 0.68) | 0.39* | (0.04 to 0.65) |
*p < 0.05.
**p < 0.01.
ms indicates millisecond; TOWRE, Test of Word Reading Efficiency.
The next step of analyses was to perform a separate blockwise regression analysis for each sentence recognition measure entered as the outcome variable. Better ear PTA was entered as a predictor in Block 1 for all the regression analyses. Next, the neurocognitive measures that were significantly correlated with each sentence recognition measure (from Table 6) were entered in stepwise fashion in Block 2. For AzBio sentences in quiet, no neurocognitive measures were correlated with sentence recognition, so only PTA was entered as a predictor, with results shown in Table 7. The model was significant, with AzBio sentences in quiet predicted by PTA (F [1,29] = 6.38; p = 0.018). For AzBio sentences in babble, PTA and digit span were entered as predictors, with results shown in Table 8. The model was significant (F [1,14] = 11.65; p = 0.002), predicting 66% of outcome variance, and only digit span was a significant independent predictor. Finally, for CUNY sentences, PTA, digit span, and Raven’s scores were all entered as predictors, with results in Table 9. The model was significant (F [1,27] = 5.46; p = 0.011). The effect of PTA was significant, as was the effect of Raven’s scores. That is, PTA predicted 19% of the variance in CUNY sentences; when Raven’s was added in Block 2, PTA and Raven’s together predicted 31% of the variance in CUNY scores.
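To make the blockwise structure concrete, a short regression sketch of the R²-change logic is shown below (illustrative only; column names such as `better_ear_pta` are placeholders, and statsmodels is assumed rather than the package actually used):

```python
import pandas as pd
import statsmodels.api as sm

def blockwise_r2(df: pd.DataFrame, outcome: str, block1: list, block2: list) -> dict:
    """Fit the Block 1 model and the Block 1 + Block 2 model with OLS,
    then report the R-squared change attributable to Block 2 and an
    F-test on that change, mirroring the structure of Tables 7-9."""
    y = df[outcome]
    m1 = sm.OLS(y, sm.add_constant(df[block1])).fit()
    m2 = sm.OLS(y, sm.add_constant(df[block1 + block2])).fit()
    f_change, p_change, _ = m2.compare_f_test(m1)  # F-test for the added block
    return {
        "block1_r2": m1.rsquared,
        "full_r2": m2.rsquared,
        "r2_change": m2.rsquared - m1.rsquared,
        "f_change": f_change,
        "p_change": p_change,
    }

# e.g., blockwise_r2(data, "cuny_pct_correct", ["better_ear_pta"], ["ravens_total"])
```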
TABLE 7.
AzBio Sentences in Quiet (% Words Correct) | B | SE (B) | β | t | Sig. (p) | R2 | R2 Change |
---|---|---|---|---|---|---|---|
Predictor: | 0.197 | 0.197 | |||||
PTA (dB HL) | −0.653 | 0.259 | −0.444 | −2.53 | 0.018 |
HL indicates hearing level; PTA, pure-tone average.
TABLE 8.
AzBio Sentences in Noise (% Words Correct) | B | SE (B) | β | t | Sig. (p) | R2 | R2 Change |
---|---|---|---|---|---|---|---|
Block 1 | 0.660 | ||||||
PTA (dB HL) | 0.025 | 0.189 | 0.023 | 0.13 | 0.899 | 0.033 | |
Block 2 | |||||||
Digit span (items correct) | 0.676 | 0.144 | 0.818 | 4.71 | 0.001 | 0.627 |
HL indicates hearing level; PTA, pure-tone average.
TABLE 9.
CUNY Sentences (% Words Correct) | B | SE (B) | β | t | Sig. (p) | R2 | R2 Change |
---|---|---|---|---|---|---|---|
Block 1 | 0.313 | ||||||
PTA (dB HL) | −0.684 | 0.316 | −0.371 | −2.16 | 0.041 | 0.185 | |
Block 2 | |||||||
Raven’s (items correct) | 2.598 | 1.232 | 0.362 | 2.11 | 0.046 | 0.156 | |
Digit span (items correct) | 0.286 | 0.287 | 0.196 | 0.99 | 0.328 |
HL indicates hearing level; PTA, pure-tone average.
DISCUSSION
Clinical evaluation for CI candidacy in adult patients with moderate-to-profound sensorineural hearing loss relies primarily on measures of best-aided sentence recognition testing. Although these measures have some face validity related to the likely auditory communication skills of the patient undergoing the CI candidacy evaluation, it is generally assumed in clinical settings that sentence recognition assessments reflect the general auditory processing abilities of these patients. That is, for patients who meet candidacy criteria using sentence recognition materials under best-aided conditions, “correction” of auditory input through a CI should result in improved auditory processing and generally good CI speech recognition outcomes. However, the general assumption that best-aided sentence recognition in CI candidates simply reflects auditory processing may be unfounded, and this study sought to investigate that assumption.
Results demonstrated that, after accounting for hearing ability using better ear PTA, performance by adult CI candidates on our current typical best-aided measure of sentence recognition, AzBio sentences, relates to performance on a nonauditory measure of verbal working memory capacity (digit span), at least when testing sentence recognition in babble. Specifically, each additional digit span item answered correctly predicted an improvement of approximately 0.68 percentage points in the AzBio-in-babble score (unstandardized B = 0.676; Table 8). With the maximum number of digit span items being 108, a difference of 20 items correct on digit span would predict a difference of roughly 13.5 percentage points, which would be clinically significant. In other words, AzBio sentence recognition performance in multitalker babble appears to be associated with working memory capacity in addition to the ability to process the auditory speech information heard by the listener. In contrast, analyses failed to reveal effects of neurocognitive skills on performance for the AzBio sentences in quiet, at least with the methods incorporated in this study. This is likely because sentence recognition in quiet, as compared with recognition in babble, does not require participants to exert a large amount of behavioral control. Although implemented less commonly in clinical settings, performance for the CUNY auditory sentence materials did relate to neurocognitive functioning, namely nonverbal reasoning. It is unclear why neurocognitive functions related differentially to the AzBio in babble and CUNY sentence materials. Nonetheless, best-aided sentence recognition testing should perhaps not always be considered simply an assessment of auditory processing that can be corrected by restoration of speech signal input through cochlear implantation. Instead, measures of speech recognition may tap into higher order neurocognitive skills, and future clinical CI candidacy evaluation measures should ideally attempt to separate the contributions of auditory processing and neurocognitive functions. This is especially important in light of the high variability demonstrated among patients in postoperative speech recognition outcomes (16). It is plausible that current sentence recognition testing, both in the best-aided preoperative candidacy evaluation setting and the postoperative outcome setting, actually captures a whole complex series of factors – from peripheral auditory nerve function to brainstem and cortical processing to further upstream neurocognitive functions. Thus, although current CI candidacy determinations are required to be based on best-aided sentence recognition testing, we conjecture that best-aided isolated word recognition testing (or even phoneme recognition) would provide a more accurate representation of the auditory speech processing abilities of patients considering cochlear implantation, because the contributions of top-down linguistic and neurocognitive processes to performance on these measures would be much more limited; this consideration warrants explicit study. Similarly, additional separate testing of neurocognitive process measures, such as working memory capacity, information processing speed, inhibition-concentration, and nonverbal reasoning skills, may provide a more complete assessment of a CI candidate, which can be used to better prognosticate postoperative speech recognition outcomes, to counsel patients preoperatively, and perhaps to tailor postoperative rehabilitation approaches to optimize performance.
This study has several limitations that should be considered. First, a few adult CI candidates were included who did not pass the MMSE cognitive screening examination. We decided to include these participants in data analyses because we wanted to incorporate as many individual participants as possible who would be representative of CI candidates evaluated in our clinical CI program. It is possible that excluding the patients who failed the MMSE would yield different relations between sentence recognition and neurocognitive performance, but it seems important from a clinical standpoint to examine these relations in a representative clinical sample. Second, it is unclear why there were differential associations between neurocognitive tests and our sentence recognition tasks – AzBio in babble and CUNY sentences. One possible explanation is that participants completed CUNY testing toward the end of a 2-hour block of research testing, whereas AzBio testing was completed during a clinical visit that may not have been as cognitively demanding. Thus, it is conceivable that cognitive fatigue came into play during CUNY testing but not AzBio testing, warranting randomization of task order in future studies. Third, it should be noted that several of our selected neurocognitive measures did not relate to any of our speech recognition measures, which raises the question of whether our positive findings for digit span and Raven’s were simply a result of performing a large number of correlation analyses. However, the magnitude of the effects in the multivariable regression analyses argues against this idea, and it is more likely that the neurocognitive measures that did not demonstrate relations with speech recognition simply did not tap closely into abilities required for speech processing. Fourth, CI candidates in this study were generally older (mean age 69.6 yr), so it is possible that findings would not generalize to younger CI candidates. Fifth, correlations and regression models relating neurocognitive measures to sentence recognition do not imply causation, an inherent limitation of all cross-sectional studies. Finally, our overall sample size of adult CI candidates was relatively small, especially for the analyses of AzBio in babble scores (i.e., only 16 participants); however, even in this small sample, significant relations were identified between sentence recognition and neurocognitive functions, suggesting genuine relationships that should be considered when interpreting the results of CI candidacy evaluation assessments.
CONCLUSION
Findings of this study suggest that preoperative adult CI candidacy evaluations using best-aided sentence recognition testing may assess more than simple auditory processing. Rather, sentence recognition testing using some speech materials for CI candidates may also relate to neurocognitive functions like working memory capacity and nonverbal reasoning. These results suggest that perhaps additional measures should be incorporated to assess the relative contributions of auditory processes and neurocognitive functions during the clinical CI evaluation process.
Acknowledgments:
The authors would like to thank Kara Vasil, AuD, and Jessica Lewis, BA, for their study assistance, and David Pisoni, PhD, for his mentorship on this project.
Funding: This work was supported by the American Otological Society Clinician-Scientist Award to A.C.M.
A.C.M. receives grant funding support from Cochlear Americas for an unrelated investigator-initiated research study.
Footnotes
Presentation of Data: Data from this manuscript were presented at the 151st Annual Meeting of the American Otological Society, April 20–21, 2018, National Harbor, Maryland.
Conflicts of Interest Statement: The authors disclose no conflicts of interest.
REFERENCES
1. Akeroyd MA. Are individual differences in speech reception related to individual differences in cognitive ability? A survey of twenty experimental studies with normal and hearing-impaired adults. Int J Audiol 2008;47(Suppl 2):S53–71.
2. Moberly AC, Castellanos I, Vasil KJ, et al. “Product” versus “process” measures in assessing speech recognition outcomes in adults with cochlear implants. Otol Neurotol 2018;39:e195–202.
3. Moberly AC, Houston DM, Castellanos I. Non-auditory neurocognitive skills contribute to speech recognition in adults with cochlear implants. Laryngoscope Investig Otolaryngol 2016;1:154–62.
4. Moberly AC, Houston DM, Harris MS, et al. Verbal working memory and inhibition-concentration in adults with cochlear implants. Laryngoscope Investig Otolaryngol 2017;2:254–61.
5. Moberly AC, Harris MS, Boyce L, et al. Speech recognition in adults with cochlear implants: the effects of working memory, phonological sensitivity, and aging. J Speech Lang Hear Res 2017;60:1046–61.
6. Spahr AJ, Dorman MF, Litvak LM, et al. Development and validation of the AzBio sentence lists. Ear Hear 2012;33:112–7.
7. Folstein MF, Folstein SE, McHugh PR. “Mini-mental state”. A practical method for grading the cognitive state of patients for the clinician. J Psychiatr Res 1975;12:189–98.
8. Wilkinson G, Robertson G. Wide Range Achievement Test. 4th ed. Lutz, FL: Psychological Assessment Resources; 2006.
9. Nittrouer S, Burton LT. The role of early language experience in the development of speech perception and phonological processing abilities: evidence from 5-year-olds with histories of otitis media with effusion and low socioeconomic status. J Commun Disord 2005;38:29–63.
10. Boothroyd A, Hnath-Chisolm T, Hanin L, et al. Voice fundamental frequency as an auditory supplement to the speechreading of sentences. Ear Hear 1988;9:306–12.
11. Wechsler D. WISC-IV: Wechsler Intelligence Scale for Children, Integrated: Technical and Interpretive Manual. Harcourt Brace and Company; 2004.
12. Stroop JR. Studies of interference in serial verbal reactions. J Exp Psychol 1935;18:643–62.
13. Torgesen JK, Wagner RK, Rashotte CA. Test of Word Reading Efficiency. Austin, TX: Pro-Ed; 1999.
14. Raven JC. Advanced Progressive Matrices, Set II. London: H. K. Lewis; 1962.
15. Raven JR, Court JH. Manual for Raven’s Progressive Matrices and Vocabulary Scales. Oxford: Oxford Psychologists Press; 1998.
16. Pisoni DB, Broadstock A, Wucinich T, et al. Verbal learning and memory after cochlear implantation in postlingually deaf adults: some new findings with the CVLT-II. Ear Hear 2018;39:720–45.