Abstract
Purpose
Speech recognition relies upon a listener's successful pairing of the acoustic–phonetic details from the bottom-up input with top-down linguistic processing of the incoming speech stream. When the speech is spectrally degraded, such as through a cochlear implant (CI), the role of top-down processing is poorly understood. This study explored the interactions of top-down processing, specifically the use of semantic context during sentence recognition, and the relative contributions of different neurocognitive functions during speech recognition in adult CI users.
Method
Data from 41 experienced adult CI users were collected and used in analyses. Participants were tested for recognition and immediate repetition of speech materials in the clear. They were asked to repeat 2 sets of sentence materials, 1 that was semantically meaningful and 1 that was syntactically appropriate but semantically anomalous. Participants also were tested on 4 visual measures of neurocognitive functioning to assess working memory capacity (Digit Span; Wechsler, 2004), speed of lexical access (Test of Word Reading Efficiency; Torgesen, Wagner, & Rashotte, 1999), inhibitory control (Stroop; Stroop, 1935), and nonverbal fluid reasoning (Raven's Progressive Matrices; Raven, 2000).
Results
Individual listeners' inhibitory control predicted recognition of meaningful sentences when controlling for performance on anomalous sentences, our proxy for the quality of the bottom-up input. Additionally, speed of lexical access and nonverbal reasoning predicted recognition of anomalous sentences.
Conclusions
Findings from this study identified inhibitory control as a potential mechanism at work when listeners make use of semantic context during sentence recognition. Moreover, speed of lexical access and nonverbal reasoning were associated with recognition of sentences that lacked semantic context. These results motivate the development of improved comprehensive rehabilitative approaches for adult patients with CIs to optimize use of top-down processing and underlying core neurocognitive functions.
For many patients with hearing loss, particularly cochlear implant (CI) users, recognizing speech can be difficult, even when listening under ideal quiet conditions, and broad variability in speech recognition outcomes exists. The difficulty in speech recognition for CI users arises in large part because the signals delivered by their implants result in highly degraded acoustic–phonetic “bottom-up” representations of speech, particularly in the spectral domain (Zeng, 2004). For individual listeners with CIs, semantic and pragmatic constraints (i.e., “top-down” processing) facilitate comprehension of the degraded signals transmitted through their implants. For example, listeners can utilize context cues to disambiguate “fork and spoon” from “torque and loom” when asked to set the table. The extent to which individual listeners take advantage of such context cues may depend, in part, on their neurocognitive resources, such as working memory (WM). Although general models of speech recognition acknowledge this interplay between bottom-up and top-down processing, there exists variability within individuals in the neurocognitive resources available to make sense of the incoming speech signal. The current study explores this variability among individual adult CI users in order to understand the relations between top-down processing and neurocognitive functions and their contributions to speech recognition abilities.
Bottom-Up Processing of the Signal and Top-Down Use of Context
Speech recognition is an interactive process between the incoming acoustic–phonetic features of the auditory signal and the long-term language knowledge of the listener (e.g., Tuennerhoff & Noppeney, 2016). Shared among most models of word recognition is the idea that long-term linguistic knowledge assists the listener in selecting the appropriate phonological and lexical candidates, allowing the listener to progressively home in on the correct selection based on the acoustic–phonetic features of the incoming speech signal (logogen model: Morton, 1969; TRACE and TRACE II: McClelland & Elman, 1986; adaptive resonance theory: Grossberg & Stone, 1986; neighborhood activation model: Luce & Pisoni, 1998; Bayesian approaches: Norris, McQueen, & Cutler, 2016). Generally speaking, listeners' relative reliance on bottom-up and top-down processes appears to depend on the quality of the input; speech signal degradation is associated with less robust lexical selection based on phonological information and greater dependence on semantic and syntactic contexts (Kalikow, Stevens, & Elliott, 1977; Luce & Pisoni, 1998).
Although it is generally accepted that signal degradation, such as when listening through a CI, results in greater dependence on context, the details of the interactions of top-down processing with neurocognitive functions in individual listeners are poorly understood. One explanatory framework, the ease of language understanding model, was specifically developed to capture the interactive process of speech recognition for individuals with hearing loss (Rönnberg, 2003; Rönnberg et al., 2013). In this framework, the auditory speech input is rapidly and automatically bound by the listener into a phonological representation in a short-term memory buffer. If this information matches phonological representations in long-term memory, relatively automatic and effortless lexical access occurs. On the other hand, if there is a mismatch, effortful controlled processing comes into play using higher level linguistic knowledge (e.g., semantic or syntactic context). Thus, controlled processing and the use of sentence context primarily “kick in” when bottom-up processing fails, and this controlled processing is believed to rely heavily on neurocognitive processes.
The Roles of Neurocognitive Functions in Top-Down Processing
Individual listeners approach the task of recognizing degraded speech through a CI equipped with differing neurocognitive resources. It is increasingly clear that neurocognitive functions allow a listener to capitalize on contextual constraints. For example, the foundation of the ease of language understanding model's controlled processing pathway is that the use of top-down knowledge and sentential context depends on neurocognitive functions, primarily WM capacity, which are called into play under conditions of mismatch (Classon, Rudner, & Rönnberg, 2013; Zekveld, Rudner, Johnsrude, Heslenfeld, & Rönnberg, 2012). More broadly, there is growing evidence that the ability to recognize degraded speech by individuals with hearing loss requires a number of additional neurocognitive resources (Akeroyd, 2008; Moberly, Houston, & Castellanos, 2016; Rönnberg et al., 2013). Thus, the main goal of this study was to examine the roles of neurocognitive processes during top-down processing, specifically the use of semantic context during sentence recognition. From a theoretical standpoint, investigating which neurocognitive functions contribute to top-down processing during speech recognition in CI users will help us to better understand how listeners recognize degraded speech in general. From a clinical standpoint, identifying which neurocognitive functions contribute most strongly to sentence recognition and the use of sentence context will contribute to development of novel targets for rehabilitation (e.g., through cognitive training therapies) to improve speech recognition performance in CI users.
The most studied neurocognitive factor implicated in speech processing, particularly when the speech signal is degraded, is WM, which is responsible for storing and processing information temporarily and allows that information to be manipulated in the moment (Baddeley, 1992; Daneman & Carpenter, 1980). Mounting evidence suggests that WM capacity supports the listener in making better use of semantic cues. For example, among adults with mild-to-moderate hearing loss, WM capacity significantly predicted listeners' recognition of sentences in babble processed with frequency compression, accounting for 29% of the variance in recognition scores (Arehart, Souza, Baca, & Kates, 2013). Similarly, Schvartz, Chatterjee, and Gordon-Salant (2008) demonstrated a correlation between verbal WM capacity and spectrally degraded speech perception skills in normal hearing (NH) listeners.
Unlike the traditional model of immediate competition among lexical items during speech processing among NH listeners, research by Farris-Trimble and colleagues suggests that CI users adopt a delayed “wait and see” approach to lexical selection (Farris-Trimble, McMurray, Cigrand, & Tomblin, 2014; McMurray, Farris-Trimble, & Rigler, 2017). In those studies, eye tracking was used within the “visual world paradigm,” which reveals the time course of speech processing as it unfolds. Participants saw an array of objects while hearing a target word. Individuals with NH shifted their gaze to the target more quickly than individuals with CIs (or NH individuals listening to vocoded speech). When processing degraded input, listeners instead committed their visual fixations relatively late, consistent with a “wait and see” approach. Consequently, WM may be particularly critical for CI users when listening to highly degraded speech as they store and process the incoming signal in order to “make sense” of the speech.
A second neurocognitive function that likely contributes to sentence recognition under degraded listening conditions is information-processing speed, which is related to performance on complex cognitive tasks such as reasoning and language comprehension (Salthouse, 1996; Verhaeghen & Salthouse, 1997; Wingfield, 1996). Carroll, Uslar, Brand, and Ruigendijk (2016) manipulated both sentence complexity and intelligibility, such that NH listeners and those with mild-to-moderate bilateral hearing loss listened to canonical and noncanonical sentence structures presented in silence and in background noise. They assessed reaction time to identify different parts of speech (e.g., subject, verb) and highlighted the important role of information-processing speed, particularly when the acoustic signal was degraded because of hearing loss. Information-processing speed for linguistic information, specifically speed of lexical access, is a likely contributor to successful speech recognition. In most models of lexical access, both semantic context and acoustic information contribute to activation of lexical candidates and selection of the most compatible candidate during speech recognition (Marslen-Wilson, 1993; McClelland & Elman, 1986). Thus, it was predicted here that speed of lexical access would contribute to the ability to recognize degraded sentences.
As listeners process the incoming speech stream, lexical competitors (i.e., items in lexical neighborhoods; Luce & Pisoni, 1998) are activated and must be inhibited. Consequently, inhibitory control may be a third neurocognitive factor that supports sentence recognition under degraded conditions, by blocking activation of lexical competitors or by ignoring noise. When asked to identify nonwords that differed in terms of lexical neighborhood density and phonotactic probability, older listeners' ability to complete the Trail-Making Test (Reitan, 1958) predicted their nonword identification performance (Janse & Newman, 2013). A measure of task switching, the Trail-Making Test requires participants to inhibit one strategy to adhere to a second set of rules. Similarly, Sörqvist and Rönnberg (2012) demonstrated that inhibition processes may act to resolve semantic confusions under adverse listening conditions. Koelewijn, Zekveld, Festen, Rönnberg, and Kramer (2012) demonstrated a relation between scores reflecting the ability to inhibit irrelevant linguistic information and speech reception thresholds in noise. When it comes to adult CI users, Moberly, Houston, et al. (2016) examined nonauditory neurocognitive functions and found that faster response times on a task of inhibition predicted better sentence recognition scores in speech-shaped noise. However, that study did not specifically investigate how neurocognitive functions contribute to top-down processing and use of semantic context during speech recognition in CI users.
In addition to the specific neurocognitive skills explored above, we also consider the role of nonverbal fluid reasoning (i.e., IQ) in speech processing. Nonverbal reasoning tasks measure the ability of the participant to solve problems, using awareness of the relations between multiple items in a task. For example, scores on the Raven's Progressive Matrices Test (Raven, 1938, 2000) have been thought to relate to participants' perception of wholes, memory, and speed of perception (Rimoldi, 1948). Although other general tests of IQ, such as the Wechsler Adult Intelligence Scale–Revised, generally have failed to demonstrate significant relations with speech recognition abilities, nonverbal reasoning has received little research attention as a factor contributing to performance in adult CI users, with the exception of three studies. Knutson et al. (1991) found that scores on a Raven's Progressive Matrices task were moderately predictive (r = .44) of audiovisual consonant recognition in a group of adults with early multichannel CIs. Holden et al. (2013) found a correlation between a composite cognitive score (including verbal memory, vocabulary, similarities, and matrix reasoning) and word recognition outcomes in adult CI users; however, it was unclear in that study which component of the cognitive measure drove this relationship. Finally, a recent study by Mattingly, Castellanos, and Moberly (2018) demonstrated a relation between scores on the Raven's Matrices task and recognition scores for different meaningful sentence types (rs = .35–.47) in adult CI users.
Thus, this study aimed to investigate the relations of individual CI listeners' neurocognitive functions—specifically WM capacity, speed of lexical access, inhibitory control, and nonverbal reasoning—with sentence recognition and their ability to use sentential context when listening to speech. Experienced adult CI users were asked to recognize two sets of sentences in the clear: (a) highly meaningful sentences and (b) sentences that retained appropriate syntactic structure but lacked semantic context (anomalous). The primary difference between the two sentence types was the degree to which semantic context could be used to support sentence recognition: meaningful sentences conveyed top-down semantic context, whereas anomalous sentences relied predominantly on bottom-up input, based on the listener's ability to pair the acoustic–phonetic details of the signal with lexical knowledge, without any available constraints of semantic context. We hypothesized that neurocognitive functions would contribute to meaningful sentence recognition performance. Moreover, we predicted that neurocognitive functions would predict meaningful sentence recognition while controlling for anomalous sentence recognition, supporting a role for neurocognitive functions in the use of top-down semantic context. Testing these hypotheses will help clarify the factors at play when adult CI users recognize meaningful sentences broadly and, more specifically, the contribution of neurocognitive functions to top-down processing in the form of semantic context use.
Materials and Method
Participants
Data from 41 adults were included in analyses. An additional four participants were tested, but their data were excluded from analyses: Three participants were unable to complete all testing due to time constraints, and one was unable to complete testing due to computer error. All participants were experienced CI users, between the ages of 50 and 83 years, who were recruited from the otolaryngology department at The Ohio State University. All included participants met audiologic, cognitive, and reading ability criteria described below. Participants had varying underlying etiologies of hearing loss and ages of implantation. Mean age at onset of hearing loss was 28.6 years (SD = 20.6). All but nine of the CI users reported onset of hearing loss after the age of 12 years, meaning they were postlingually deaf and had relatively normal language development prior to the onset of hearing loss (suggested by their NH until the time of puberty). The other nine CI users reported some degree of congenital hearing loss or onset of hearing loss during childhood. However, all of these participants experienced early hearing aid intervention and auditory-only language development during childhood, were mainstreamed in education, and experienced progressive hearing losses into adulthood. All participants received their implants at or after the age of 35 years, with a mean age at implantation of 59.8 years (SD = 12.0). Mean duration of hearing loss (computed as age at first CI minus reported age at onset of hearing loss) was 38.2 years (SD = 19.4). Participants had CI-aided thresholds better than 35 dB HL at 0.25, 0.5, 1, and 2 kHz, as measured by clinical audiologists within 1 year before enrollment in the study. All had used their implants for at least 18 months, with a mean duration of CI use of 7.3 years (SD = 6.7).
All except one participant used Cochlear Corporation devices with an Advanced Combination Encoder processing strategy; one CI user had an Advanced Bionics device and used a HiRes Optima-S processing strategy. Seventeen participants had a right CI, 11 used a left device, and 13 had bilateral CIs. Fourteen participants wore a contralateral hearing aid.
All participants underwent screening to ensure no evidence of cognitive impairment. The Mini-Mental State Examination (MMSE; Folstein, Folstein, & McHugh, 1975), a validated assessment of memory, attention, and the ability to follow instructions, was administered. During this test, participants read the instructions to avoid effects caused by poor audibility. A raw score below 26 raises concern for possible cognitive impairment; all participants whose data were included in analyses demonstrated scores ≥ 26 on the MMSE.
All participants also were assessed for basic word-reading ability as a metric of general language proficiency, using the Word Reading subtest of the Wide Range Achievement Test 4 (WRAT4; Wilkinson & Robertson, 2006). All participants whose data were included in analyses demonstrated standard scores of ≥ 80. Because some tasks required participants to look at a computer monitor or complete paper forms, a final screening test of near-vision was performed; all but nine participants had corrected near-vision of 20/30 or better. The participants whose near-vision was worse than 20/30 all had reading standard scores on the WRAT4 above 80, suggesting sufficient visual abilities for inclusion in data analyses. All participants spoke American English as their native language, and all had at least a high school diploma. Because socioeconomic status (SES) has been linked to language abilities, a measure of SES was obtained, quantified using a metric defined by Nittrouer and Burton (2005): occupation and education levels are each rated on a scale from 1 to 8 (with 8 being the highest level), and the two ratings are multiplied, yielding scores between 1 and 64. Individual demographic and audiologic data for the 41 participants with CIs included in analyses are shown in Table 1. Averaged demographics, including age, duration of hearing loss, duration of CI use, reading, and MMSE scores, are shown in Table 2.
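As an illustration of the SES metric described above, the calculation can be sketched as follows (the function name `ses_score` is our own; the 1–8 scales and the multiplication rule follow Nittrouer & Burton, 2005):

```python
def ses_score(occupation_level: int, education_level: int) -> int:
    """Compute the Nittrouer & Burton (2005) SES metric.

    Occupation and education are each rated on a 1-8 scale
    (8 = highest); the two ratings are multiplied, yielding
    a score between 1 and 64.
    """
    for level in (occupation_level, education_level):
        if not 1 <= level <= 8:
            raise ValueError("levels must be between 1 and 8")
    return occupation_level * education_level
```

For example, a participant with an occupation level of 5 and an education level of 6 would receive an SES score of 30.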
Table 1.
Participant demographics for individual cochlear implant users.
| Participant | Gender | Age (years) | SES | Implantation age (years) | Side of implant | Hearing aid | Etiology of hearing loss | Better ear PTA (dB HL) |
|---|---|---|---|---|---|---|---|---|
| 1 | F | 65 | 24 | 54 | Bilateral | No | Genetic, progressive | 120 |
| 2 | F | 66 | 35 | 62 | Right | Yes | Progressive loss as adult, noise exposure | 78.75 |
| 3 | F | 67 | 12 | 58 | Right | Yes | Genetic, progressive as an adult | 103.75 |
| 4 | F | 55 | 15 | 44 | Left | No | Sudden, otosclerosis, progressive as adult | 120 |
| 5 | M | 70 | 30 | 65 | Right | No | Genetic, progressive as an adult | 88.75 |
| 6 | M | 60 | 36 | 52 | Bilateral | No | Measles | 105 |
| 7 | F | 57 | 25 | 48 | Right | Yes | Genetic, progressive as a child | 82.5 |
| 8 | M | 79 | 48 | 76 | Right | Yes | Progressive as adult, noise exposure | 70 |
| 9 | F | 69 | 10.5 | 56 | Bilateral | No | Otosclerosis, progressive as adult | 112.5 |
| 10 | M | 55 | 30 | 50 | Bilateral | No | Progressive loss as adult | 120 |
| 11 | F | 76 | 30 | 68 | Left | No | Progressive loss as adult, probable autoimmune | 108.75 |
| 12 | M | 79 | 10 | 74 | Left | No | Unknown | 108.75 |
| 13 | F | 81 | 30 | 71 | Right | No | Progressive as adult, sudden hearing loss | 88.75 |
| 14 | M | 59 | 24 | 57 | Bilateral | No | Sudden hearing loss | 120 |
| 15 | M | 78 | 12.5 | 72 | Bilateral | No | Progressive loss as adult | 120 |
| 16 | M | 69 | 56 | 62 | Bilateral | No | Genetic, progressive loss as child | 120 |
| 17 | F | 50 | 32.5 | 35 | Bilateral | No | Progressive loss as child | 117.5 |
| 18 | F | 64 | 30 | 61 | Right | No | Progressive loss as adult | 103.75 |
| 19 | F | 67 | 9 | 58 | Bilateral | No | Ménière's disease | 120 |
| 20 | M | 83 | 42 | 76 | Right | Yes | Progressive loss as adult, noise exposure | 68.75 |
| 21 | F | 73 | 15 | 67 | Right | No | Progressive loss as child | 98.75 |
| 22 | M | 76 | 49 | 73 | Left | Yes | Progressive loss as adult, noise exposure | 72.5 |
| 23 | F | 79 | 15 | 45 | Right | Yes | Progressive loss as adult | 57.5 |
| 24 | M | 66 | 18 | 60 | Left | No | Ménière's disease | 80 |
| 25 | M | 77 | 24 | 75 | Bilateral | No | Chronic ear infections, cholesteatoma | 105 |
| 26 | F | 65 | 36 | 63 | Right | No | Genetic, progressive as adult | 86.25 |
| 27 | F | 62 | 14 | 59 | Bilateral | No | Sepsis, ototoxic medications | 95 |
| 28 | M | 80 | 9 | 79 | Left | No | Genetic, progressive as adult, noise exposure | 76.25 |
| 29 | M | 61 | 24 | 55 | Right | Yes | Unknown | 111.25 |
| 30 | M | 60 | 10.5 | 55 | Left | No | Genetic, progressive as adult | 115 |
| 31 | F | 64 | 24 | 59 | Left | Yes | Progressive as a child | 95 |
| 32 | M | 59 | 6 | 54 | Right | No | Genetic, progressive as adult | 108.75 |
| 33 | M | 57 | 6 | 55 | Right | No | Genetic, progressive as adult | 101.25 |
| 34 | M | 65 | 12 | 38 | Right | Yes | Progressive as adult, noise exposure | 92.5 |
| 35 | M | 50 | 25 | 35 | Bilateral | No | Genetic, progressive as adult | 120 |
| 36 | M | 81 | 49 | 80 | Left | Yes | Progressive as adult | 75 |
| 37 | M | 70 | 42 | 68 | Left | Yes | Progressive as adult | 93.75 |
| 38 | M | 53 | 20 | 36 | Bilateral | No | Progressive as adult | 120 |
| 39 | F | 64 | 28 | 61 | Right | Yes | Ménière's disease | 22.5 |
| 40 | F | 74 | 35 | 72 | Left | Yes | Genetic, progressive as adult | 60 |
| 41 | M | 68 | 30 | 65 | Right | No | Progressive as adult, noise exposure | 100 |
Note. SES = socioeconomic status; PTA = pure-tone average; F = female; M = male.
Table 2.
Participant demographics for 41 cochlear implant (CI) users.
| Demographics | M | SD |
|---|---|---|
| Age (years) | 67.2 | 9.1 |
| Duration of hearing loss (years) | 38.2 | 19.4 |
| Duration of CI use (years) | 7.3 | 6.7 |
| Reading (standard score) | 97.6 | 12.1 |
| MMSE (raw score) | 28.7 | 1.3 |
Note. MMSE = Mini-Mental State Examination.
Equipment
All tasks were performed in a soundproof booth or sound-treated testing room. Audiometry was performed using a Welch Allyn TN262 audiometer with TDH-39 headphones. For the MMSE and WRAT4 screening tasks, tests of sentence recognition, and speed of lexical access, participant responses were video- and audio-recorded to allow later scoring. Participants wore vests with frequency modulation transmitters that sent signals directly to receivers connected to the video camera. Responses for these tasks were scored offline. Two experimenters independently scored 25% of responses to check reliability. For the computerized tasks of WM, inhibitory control, and nonverbal reasoning, participants entered their responses directly into the computer, which computed output scores. During testing, participants wore devices in their usual everyday modes, including any use of hearing aids, and settings were kept the same throughout the entire testing session. Residual hearing was assessed for each ear immediately before testing.
Stimuli and Stimuli-Specific Procedure
Speech Recognition
Speech stimuli were presented to participants in quiet via a Roland MA-12C speaker calibrated to 68 dB SPL using a sound-level meter positioned 1 m in front of the speaker at 0° azimuth. Two measures of recognition of words in sentences were included. Meaningful Sentences were long, complex, semantically meaningful sentences taken from the Institute of Electrical and Electronics Engineers (IEEE) corpus (IEEE, 1969), such as “The wharf could be seen from the opposite shore.” Anomalous Sentences were modified versions of sentences from the IEEE corpus (Herman & Pisoni, 2000; Loebach & Pisoni, 2008); each was phonetically balanced, syntactically correct, and semantically meaningless, such as “The deep buckle walked the old crowd.” For each sentence type, listeners were presented with two training sentences without feedback and then 28 test sentences spoken by the same male talker. Each sentence type was presented within a single block, and the order of blocks was counterbalanced across participants. Scores were computed as percent total correct words.
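A minimal sketch of the scoring rule (the function name is our own, and we assume correct words are pooled across all test sentences, as “percent total correct words” implies, rather than averaging per-sentence percentages):

```python
def percent_words_correct(correct_per_sentence, total_per_sentence):
    """Percent total correct words across all test sentences:
    pooled correct words / pooled words presented x 100."""
    return 100.0 * sum(correct_per_sentence) / sum(total_per_sentence)
```

For instance, 4 of 5 words correct in one sentence and 5 of 5 in another yields 9/10 = 90% total words correct.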
Neurocognitive Measures
Visual digit span
This computerized task was used to measure WM capacity, based on the original auditory digit span from the Wechsler Intelligence Scale for Children–Fourth Edition, Integrated (Wechsler, 2004). Visual stimuli were used to eliminate the effects of audibility on performance. This task has been used before with CI users, and details can be found in that report (AuBuchon, Pisoni, & Kronenberger, 2015). Sequences of digits were presented visually on a computer screen, one at a time, and participants were asked to reproduce the lists of digits in correct serial order by touching the screen. The total number of correct digits in correct serial order was used in analyses.
Speed of lexical access
The Test of Word Reading Efficiency (TOWRE; Torgesen, Wagner, & Rashotte, 1999) was used to assess participants' speed of verbal processing for written materials. Participants were asked to read aloud, as quickly and accurately as possible, as many words as they could from a list of 108 words within 45 s. Percent correct words served as the measure used in analyses.
Visual Stroop
This computerized task was used to measure inhibitory control and has been used previously with adult CI users (Moberly, Houston, et al., 2016). A computerized visual version of the verbal Stroop task, based on the original version (Stroop, 1935), was used; it is publicly available (http://www.millisecond.com). Participants were presented with color words one at a time on a computer screen and were asked to press a keyboard button identifying the color of the text of the word shown. Participants entered responses directly, and scoring was performed automatically by the computer at the time of testing. Response times were computed for correct responses to congruent words (automatic word reading; e.g., the word “Green” shown in green ink) and to incongruent words (inhibition of word reading to concentrate on ink color; e.g., the word “Red” shown in green ink). An interference score was computed as the response time to incongruent words minus the response time to congruent words, with larger scores representing greater interference (i.e., poorer inhibitory control); this interference score was used in analyses.
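The interference score above reduces to a simple difference of mean correct-trial response times, sketched here (the function name is our own; we assume means over correct trials, consistent with the description above):

```python
def stroop_interference(congruent_rts_ms, incongruent_rts_ms):
    """Stroop interference score: mean correct-trial response time
    to incongruent words minus mean correct-trial response time to
    congruent words. Larger values indicate poorer inhibitory control."""
    mean = lambda rts: sum(rts) / len(rts)
    return mean(incongruent_rts_ms) - mean(congruent_rts_ms)
```

For example, mean congruent and incongruent response times of 610 ms and 810 ms give an interference score of 200 ms.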
Nonverbal reasoning
A computerized version of the Raven's Progressive Matrices (Raven, 2000) was used. This task presented geometric designs in a matrix where each design contained a missing piece, and participants were asked to complete the pattern by selecting a response box that completed the design. Participants were encouraged to guess if they were unable to determine the correct response. An abbreviated version of the Raven's test was conducted over 10 min. Raw score (items correct) was used as the measure of nonverbal reasoning.
General Procedure
Procedures were approved by The Ohio State University Institutional Review Board. Participants were tested in one session lasting approximately 2 hr. First, audiometric thresholds and screening measures were collected, followed by the TOWRE task and then the Digit Span task. Next, participants completed Meaningful Sentence and Anomalous Sentence Recognition testing, each in its own block with order counterbalanced across participants. Lastly, participants completed the Stroop and Raven's Progressive Matrices tasks.
Data Analysis Plan
To address our research question of which neurocognitive functions predict meaningful sentence recognition when controlling for performance on anomalous sentences (thereby isolating the gain in scores attributable to the availability of sentential context), a blockwise multiple linear regression analysis was performed in two blocks. Percent correct scores on Meaningful Sentences served as the dependent measure. In the first block, appropriate covariates and Anomalous Sentence scores were entered to control for the quality of bottom-up input. In the second block, neurocognitive measures (WM capacity, speed of lexical access, inhibitory control, and nonverbal reasoning) were entered together as the main predictors of interest.
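The logic of the two-block (hierarchical) regression can be sketched with a small helper: fit the Block 1 predictors alone, then Blocks 1 and 2 together, and examine the R² change. This is an illustrative sketch only (the function names are our own, and the actual analysis was run in SPSS, not Python):

```python
import numpy as np

def r_squared(y, X):
    """R^2 from an ordinary least squares fit of y on X (intercept added)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

def blockwise_r2_change(y, block1, block2):
    """Fit Block 1 alone, then Blocks 1 + 2, and return both R^2 values
    plus the R^2 change attributable to the Block 2 predictors."""
    r2_block1 = r_squared(y, block1)
    r2_full = r_squared(y, np.column_stack([block1, block2]))
    return r2_block1, r2_full, r2_full - r2_block1
```

Because the Block 1 model is nested within the full model, the R² change is always nonnegative; its significance is what tests whether the neurocognitive block adds predictive value.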
Results
Statistical analyses were performed using SPSS software Version 25 (IBM). All data were screened for normal distributions and homogeneity of variances using Kolmogorov–Smirnov and Shapiro–Wilk tests of normality, as well as by review of Q-Q plots of standardized residuals. Scores on Meaningful Sentences were not normally distributed and demonstrated negative skew; following an arcsine transformation, which is used as a variance-stabilizing transformation, scores on this variable were normally distributed. The transformed variable was used in all subsequent analyses. For all analyses, an α of .05 was set for significance. Based on a series of one-way analyses of variance, side of implantation (left, right, or bilateral) did not influence any speech recognition or neurocognitive performance scores. Additionally, no differences in scores were found by independent-samples t tests for participants who wore only CIs compared to those who wore a CI plus a hearing aid. Consequently, data were collapsed across all CI users in the analyses that follow. Results on sentence recognition measures and neurocognitive assessments are shown in Table 3. Mean Sentence Recognition scores were 72.2% (SD = 18.5) for Meaningful Sentences and 45.4% (SD = 19.5) for Anomalous Sentences. Individual Sentence Recognition scores are plotted in Figure 1, demonstrating consistently better scores for Meaningful Sentences than for Anomalous Sentences but with substantial variability across participants.
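The arcsine transformation applied above is the standard arcsine-square-root transform for proportion data, which maps a proportion p to asin(√p). A minimal sketch (the function name is our own; we assume the transform was applied to percent-correct scores expressed as proportions):

```python
import math

def arcsine_transform(percent_correct):
    """Variance-stabilizing arcsine-square-root transform for a
    percent-correct score (0-100): asin(sqrt(p)), with p in [0, 1]."""
    p = percent_correct / 100.0
    return math.asin(math.sqrt(p))
```

The transform compresses the upper end of the scale, counteracting the negative skew of near-ceiling percent-correct scores: 0% maps to 0, 50% to π/4, and 100% to π/2.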
Table 3.
Speech recognition and neurocognitive scores for 41 cochlear implant users.
| Measure | M | SD | Minimum | Maximum |
|---|---|---|---|---|
| Speech recognition | | | | |
| Meaningful Sentences (% words correct) | 72.2 | 18.5 | 12.3 | 90.7 |
| Anomalous Sentences (% words correct) | 45.4 | 19.5 | 0.9 | 75.1 |
| Neurocognitive tasks | | | | |
| Digit Span (no. items correct) | 42.9 | 16.9 | 14 | 96 |
| TOWRE Words (% words correct) | 71.4 | 12.0 | 42 | 100 |
| Stroop Interference (ms) | 385.5 | 590.9 | 0 | 3573 |
| Raven's Nonverbal Reasoning (no. items correct) | 9.9 | 4.9 | 1 | 20 |
Note. TOWRE = Test of Word Reading Efficiency.
Figure 1.
Sentence recognition accuracy scores for individual cochlear implant users. Scores for Meaningful Sentences are plotted as black circles, and scores for Anomalous Sentences are plotted as gray squares. For illustration purposes, participants are ordered from poorest (left) to best (right) Meaningful Sentence score.
Reliability
Interscorer reliability was assessed for tests that involved audiovisual recording and offline scoring of responses. All responses were scored by one trained scorer and then scored again by a second scorer for 25% of all participants (n = 11). With interscorer reliability greater than 90% (range: 93%–100%) for the MMSE, Word Reading, Sentence Recognition, and neurocognitive tests, the scores from the initial scorer were used in further analyses.
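As a minimal sketch, the word-level percent agreement between two scorers can be computed as below; the binary scoring vectors are hypothetical:

```python
# 1 = word scored as correctly repeated; values are illustrative only.
scorer1 = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]
scorer2 = [1, 1, 0, 1, 1, 1, 1, 1, 0, 1]

# Percentage of scoring decisions on which the two scorers matched.
agreement = sum(a == b for a, b in zip(scorer1, scorer2)) * 100 / len(scorer1)
print(agreement)  # 90.0
```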
Determination of Covariates
Prior to performing our main analysis of interest, bivariate correlation analyses were run between our outcome measure of Meaningful Sentence recognition and the demographic factors of age and SES to identify whether either of these measures should be treated as a covariate in the main analysis. Meaningful Sentence scores did not correlate with age (p = .121) or SES (p = .794). An independent-samples t test was then performed to determine whether Meaningful Sentence scores differed between male and female participants; female participants scored significantly higher (78.9% words correct, SD = 11.0) than male participants (66.8% words correct, SD = 21.6), t(38) = −2.170, p = .036. Therefore, gender was included as a covariate in our main analysis of interest.
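A pooled-variance independent-samples t test of the kind reported here can be sketched as follows; the group sizes and data are synthetic, chosen only so that the degrees of freedom match the reported t(38):

```python
import numpy as np

def independent_t(x, y):
    """Pooled-variance (Student's) two-sample t statistic and degrees of freedom."""
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    t = (np.mean(x) - np.mean(y)) / np.sqrt(sp2 * (1.0 / nx + 1.0 / ny))
    return t, nx + ny - 2

rng = np.random.default_rng(2)
male = rng.normal(66.8, 21.6, 18)      # hypothetical group sizes
female = rng.normal(78.9, 11.0, 22)
t, df = independent_t(male, female)    # df = 18 + 22 - 2 = 38
```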
For our main analysis, a blockwise multiple linear regression analysis was run to predict our dependent measure of Meaningful Sentences. In Block 1, participants' gender was entered as a covariate along with Anomalous Sentence scores to control for the quality of bottom-up input. In Block 2, neurocognitive skill scores (Visual Digit Span, TOWRE, Stroop, and Raven's) were entered together as predictors of interest. Results are shown in Table 4. The model for Block 1 was significant, F(2, 36) = 49.44, p < .001, with Anomalous Sentence scores predicting Meaningful Sentence scores. The model for Block 2 was also significant, F(6, 32) = 20.07, p < .001; here, gender and Anomalous Sentence scores independently predicted Meaningful Sentence scores. In addition, of the neurocognitive factors, only Stroop score predicted Meaningful Sentence recognition independently of Anomalous Sentence scores, with the neurocognitive block providing a small but significant R² change of .057.
Table 4.
Results of blockwise multiple linear regression analyses for cochlear implant participants, with Meaningful Sentence score as the dependent measure.
| Dependent measure: Meaningful Sentence Recognition (% words correct) | Unstandardized B | Coefficient SE | Standardized β | t | Sig. (p) | R² change |
|---|---|---|---|---|---|---|
| Block 1 predictors | | | | | | .733 |
| Gender (male or female) | 0.108 | 0.073 | .132 | 1.492 | .144 | |
| Anomalous Sentence recognition score (% words correct) | 0.018 | 0.002 | .817 | 9.249 | < .001 | |
| Block 2 predictors | | | | | | .057 |
| Gender (male or female) | 0.143 | 0.070 | .174 | 2.051 | .049 | |
| Anomalous Sentence recognition score (% words correct) | 0.019 | 0.002 | .908 | 8.493 | < .001 | |
| Digit Span (no. items correct) | 0.001 | 0.002 | −.010 | −0.114 | .910 | |
| TOWRE Words (% words correct) | −0.297 | 0.359 | −.081 | −0.828 | .414 | |
| Stroop Interference (ms) | 0.001 | 0.001 | −.259 | −2.842 | .008 | |
| Raven's Nonverbal Reasoning (no. items correct) | −0.012 | 0.009 | −.141 | −1.393 | .173 |
Note. In Block 1, gender was entered as a covariate and Anomalous Sentence score was entered as a predictor. In Block 2, neurocognitive measures were entered as predictors of interest. Values are bolded where p < .05. Sig. = significance; TOWRE = Test of Word Reading Efficiency.
These results, which demonstrated that the Stroop score of inhibitory control was the only neurocognitive factor contributing to Meaningful Sentence recognition when controlling for Anomalous Sentence scores, led to the consideration that the other neurocognitive factors might still be at play through their potential effects on Anomalous Sentence recognition. This possibility was tested by performing another blockwise multiple linear regression analysis, now using Anomalous Sentence scores as the dependent measure. In Block 1, gender was entered as a covariate. In Block 2, neurocognitive skill scores (Visual Digit Span, TOWRE, Stroop, and Raven's) were entered together as predictors of interest. Results of this analysis are shown in Table 5. The model for Block 1 was not significant, p = .171. The model for Block 2 was significant, F(5, 33) = 4.907, p = .002; here, both TOWRE and Raven's independently predicted Anomalous Sentence score. The neurocognitive block provided an R² change of .377.
Table 5.
Results of blockwise multiple linear regression analyses for cochlear implant participants, with Anomalous Sentence score as the dependent measure.
| Dependent measure: Anomalous Sentence Recognition (% words correct) | Unstandardized B | Coefficient SE | Standardized β | t | Sig. (p) | R² change |
|---|---|---|---|---|---|---|
| Block 1 predictor | | | | | | NS |
| Gender (male or female) | 8.574 | 6.149 | .223 | 1.394 | .171 | |
| Block 2 predictors | | | | | | .377 |
| Gender (male or female) | 2.480 | 5.288 | .065 | 0.335 | .740 | |
| Digit Span (no. items correct) | 0.059 | 0.177 | .046 | 0.335 | .740 | |
| TOWRE Words (% words correct) | 67.061 | 24.621 | .391 | 2.724 | .010 | |
| Stroop Interference (ms) | 0.005 | 0.005 | .163 | 1.116 | .273 | |
| Raven's Nonverbal Reasoning (no. items correct) | 1.709 | 0.600 | .421 | 2.847 | .008 |
Note. In Block 1, gender was entered as a covariate. In Block 2, neurocognitive measures were entered as predictors of interest. Sig. = significance; NS = not significant; TOWRE = Test of Word Reading Efficiency.
Discussion
This study was designed to examine the interactions among top-down linguistic and neurocognitive processes during speech recognition by adults with CIs. Neurocognitive functions were examined for their associations with the ability to use sentence context. The hypothesis tested in this study was that better neurocognitive functioning would predict better meaningful sentence recognition when controlling for Anomalous Sentence scores. This hypothesis was supported, but only for one of the neurocognitive functions tested: inhibitory control, which provided a small but significant R² change of .057.
The relation between use of semantic context and inhibitory control is consistent with previous findings that performance on the Stroop task was related to recognition of sentences in speech-shaped noise among CI users (Moberly, Houston, et al., 2016). It is likely that better ability to inhibit incorrect lexical competitors supports better recognition of meaningful sentences. These findings are also consistent with those of Sörqvist and Rönnberg (2012) and Koelewijn et al. (2012), who found inhibitory control ability to be related to speech recognition in competing speech in NH listeners. Moreover, Sommers and Danielson (1999) demonstrated that older NH individuals had poorer inhibitory control than young NH peers and that individual differences in these inhibitory abilities contributed to spoken word recognition accuracy. Thus, adult CI users may recruit inhibitory control processes to resolve semantic confusions and capitalize on semantic context because the sentence stimuli are spectrotemporally degraded. The mechanisms underlying this role of inhibitory control in the use of semantic context deserve further exploration.
In contrast, we did not find any expected relations between Meaningful Sentence recognition and WM capacity when controlling for Anomalous Sentence scores. However, we did find that speed of lexical access and nonverbal reasoning predicted scores on Anomalous Sentence recognition in our follow-up analysis, with the neurocognitive factors contributing to an R² change of .377. Speed of lexical access has not previously been evaluated in postlingual adult CI users, although basic word reading ability using the untimed WRAT4 Word Reading task was not found to be associated with speech recognition abilities in previous studies (Moberly, Lowenstein, & Nittrouer, 2016). We hypothesized that the TOWRE Word Reading task would more directly assess participants' speed of lexical access, which should be a critically relevant information-processing operation during rapid processing of sentences. This would be consistent with Carroll et al.'s (2016) findings that highlighted the important role of information-processing speed in sentence processing. It is possible that speed of lexical access contributes to recognition of a sequence of words generally, but not specifically to the use of semantic context.
Similarly, nonverbal reasoning independently predicted Anomalous Sentence recognition. This finding is consistent with findings by Mattingly et al. (2018), in which Raven's scores were found to predict recognition scores for high–talker-variability Perceptually Robust English Sentence Test Open-Set sentences by adult CI users. The current study expands those findings by identifying an association between nonverbal reasoning and Anomalous Sentence recognition. Thus, it may be that, like speed of lexical access, nonverbal reasoning plays a more general role in sentence recognition, more than the ability to “figure out” the semantic context of degraded sentences.
WM capacity using visual digit span did not predict Meaningful Sentence recognition, which is a finding consistent with some previous studies in CI users (Moberly, Harris, Boyce, & Nittrouer, 2017; Moberly, Houston, et al., 2016) but which is inconsistent with other studies suggesting a link between WM capacity and speech recognition in patients with lesser degrees of hearing loss (Akeroyd, 2008; Rönnberg et al., 2013). Interestingly, a recent study by Nagaraj (2017) demonstrated a relation between sentence recognition and WM capacity using a visual Reading Span task, but only for low-context sentences, not for semantically meaningful passages. Our lack of a relation between WM capacity and sentence recognition may be a result of testing WM capacity using a visual digit span task for two reasons. First, it has been shown previously in children with CIs that the relation of WM capacity and speech and language measures may be specific to the auditory modality (Cleary, Pisoni, & Kirk, 2000). However, other studies have demonstrated a relation between speech recognition and scores on visually presented tasks of sequential processing in adult CI users (Gantz, Woodworth, Knutson, Abbas, & Tyler, 1993; Gfeller et al., 2008; Knutson et al., 1991). Second, some would consider forward digit span to be a better assessment of short-term memory as opposed to WM because the mental manipulation (i.e., processing) demands of forward digit span are minimal. In contrast, a measure such as reverse digit span or a more complicated measure, such as the reading span, may provide a more accurate assessment of WM capacity. Thus, additional studies will be required to further elucidate the role of visual and verbal WM capacity in speech recognition for patients with CIs.
Several limitations of this study should be noted. First, although the two types of sentences used in this study primarily differed in their presence or absence of semantic content, it is possible that the sentence materials also differed systematically in lexical content and syntactic structure. As such, the semantic contributions to differences in performance between sentence types could be confounded with other linguistic factors. Second, although the meaningful IEEE sentences chosen for this study certainly provide more semantic context than the anomalous sentences, they are relatively lower in context than other sentence materials in use, such as the Revised Speech Perception in Noise (Bilger, Nuetzel, Rabinowitz, & Rzeczkowski, 1984) materials or the City University of New York (Boothroyd, Hanin, & Hnath, 1985) sentences. Thus, our findings may underestimate the contribution of neurocognitive functions to the use of semantic context. Finally, although we identified several neurocognitive scores as significant predictors of Meaningful or Anomalous Sentence recognition, the primarily observational design of this study limits our ability to conclude that these relations are causal in nature.
The clinical significance of this study is twofold. First, patients with even relatively poor signal quality may be able to capitalize greatly on linguistic context and top-down predictive coding. This finding suggests that, even if the bottom-up signal cannot be improved (e.g., through reprogramming of the device), rehabilitative training might be useful particularly to improve CI listeners' use of linguistic context in understanding speech. Prospective interventional rehabilitation studies will be required to test this prediction. Second, findings from this study provide additional converging evidence that clinical speech recognition testing using sentence materials may tap into core neurocognitive functions that contribute to more realistic real-world speech recognition scenarios (e.g., sentences in running connected speech) that listeners face in their daily lives. Although our listeners were tested in quiet listening conditions, which is rare from an ecological standpoint, use of several different types of sentence materials in clinical assessments, as compared to isolated word recognition, at least provides a better window into the operations of the entire speech-processing network (bottom-up and top-down) of patients with CIs.
Conclusion
Findings from this study demonstrate that inhibitory control may play an important role in using semantic context for adult CI users. Speed of lexical access and nonverbal reasoning were also associated with recognition ability for sentences lacking semantic context. Some patients experience poor outcomes with their CIs, and these poor outcomes may in part be attributable to suboptimal neurocognitive functions. These findings motivate the development of improved comprehensive rehabilitative approaches for patients with CIs who are not performing as well as expected even after several years of device use.
Acknowledgments
This work was supported by the American Otological Society Clinician–Scientist Award and the National Institute on Deafness and Other Communication Disorders Career Development Award 5K23DC015539-02 to A. C. M. ResearchMatch, used to recruit some normal hearing participants, is supported by National Center for Advancing Translational Sciences Grant UL1TR001070. The authors would like to thank David Pisoni, Derek Houston, and Christin Ray for their helpful recommendations in article preparation and Kara Vasil for her overseeing of participant data collection. A. C. M. designed the study, analyzed data, and prepared the article, and receives grant funding support from Cochlear Americas for an unrelated investigator-initiated research study. J. R. contributed to hypothesis development, data analyses, and preparation and revision of the article.
References
- Akeroyd M. A. (2008). Are individual differences in speech reception related to individual differences in cognitive ability? A survey of twenty experimental studies with normal and hearing-impaired adults. International Journal of Audiology, 47(Suppl. 2), S53–S71. [DOI] [PubMed] [Google Scholar]
- Arehart K. H., Souza P., Baca R., & Kates J. (2013). Working memory, age and hearing loss: Susceptibility to hearing aid distortion. Ear and Hearing, 34, 251–260. [DOI] [PMC free article] [PubMed] [Google Scholar]
- AuBuchon A. M., Pisoni D. B., & Kronenberger W. G. (2015). Short-term and working memory impairments in early-implanted, long-term cochlear implant users are independent of audibility and speech production. Ear and Hearing, 36, 733–737. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Baddeley A. (1992). Working memory. Science, 255, 556–559. [DOI] [PubMed] [Google Scholar]
- Bilger R. C., Nuetzel J. M., Rabinowitz W. M., & Rzeczkowski C. (1984). Standardization of a test of speech perception in noise. Journal of Speech and Hearing Research, 27(1), 32–48. [DOI] [PubMed] [Google Scholar]
- Boothroyd A., Hanin L., & Hnath T. (1985). CUNY laser videodisk of everyday sentences. New York: Speech and Hearing Sciences Research Center, City University of New York. [Google Scholar]
- Carroll R., Uslar V., Brand T., & Ruigendijk E. (2016). Processing mechanisms in hearing-impaired listeners: Evidence from reaction times and sentence interpretation. Ear and Hearing, 37, e391–e401. [DOI] [PubMed] [Google Scholar]
- Classon E., Rudner M., & Rönnberg J. (2013). Working memory compensates for hearing related phonological processing deficit. Journal of Communication Disorders, 46(1), 17–29. [DOI] [PubMed] [Google Scholar]
- Cleary M., Pisoni D. B., & Kirk K. I. (2000). Working memory spans as predictors of spoken word recognition and receptive vocabulary in children with cochlear implants. The Volta Review, 102(4), 259–280. [PMC free article] [PubMed] [Google Scholar]
- Daneman M., & Carpenter P. A. (1980). Individual differences in working memory and reading. Journal of Verbal Learning and Verbal Behavior, 19, 450–466. [Google Scholar]
- Farris-Trimble A., McMurray B., Cigrand N., & Tomblin J. B. (2014). The process of spoken word recognition in the face of signal degradation. Journal of Experimental Psychology: Human Perception and Performance, 40, 308–327. https://doi.org/10.1037/a0034353 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Folstein M. F., Folstein S. E., & McHugh P. R. (1975). “Mini-mental state”: A practical method for grading the cognitive state of patients for the clinician. Journal of Psychiatric Research, 12(3), 189–198. [DOI] [PubMed] [Google Scholar]
- Gantz B. J., Woodworth G. G., Knutson J. F., Abbas P. J., & Tyler R. S. (1993). Multivariate predictors of success with cochlear implants. In Fraysse B. & Deguine O. (Eds.), Cochlear implants: New perspectives (pp. 153–167). Basel, Switzerland: Karger Publishers. [DOI] [PubMed] [Google Scholar]
- Gfeller K., Oleson J., Knutson J. F., Breheny P., Driscoll V., & Olszewski C. (2008). Multivariate predictors of music perception and appraisal by adult cochlear implant users. Journal of the American Academy of Audiology, 19(2), 120–134. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Grossberg S., & Stone G. (1986). Neural dynamics of word recognition and recall: Attentional priming, learning, and resonance. Psychological Review, 93(1), 46–74. [PubMed] [Google Scholar]
- Herman R., & Pisoni D. B. (2000). Perception of “elliptical speech” by an adult hearing impaired listener with a cochlear implant: Some preliminary findings on coarse-coding in speech perception. Research on Spoken Language Processing, 24, 87–112. [Google Scholar]
- Holden L. K., Finley C. C., Firszt J. B., Holden T. A., Brenner C., Potts L. G., … Skinner M. W. (2013). Factors affecting open-set word recognition in adults with cochlear implants. Ear and Hearing, 34, 342–360. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Institute of Electrical and Electronics Engineers. (1969). IEEE recommended practice for speech quality measurements. New York, NY: Author. [Google Scholar]
- Janse E., & Newman R. S. (2013). Identifying nonwords: Effects of lexical neighborhoods, phonotactic probability, and listener characteristics. Language and Speech, 56(4), 421–441. [DOI] [PubMed] [Google Scholar]
- Kalikow D. N., Stevens K. N., & Elliott L. L. (1977). Development of a test of speech intelligibility in noise using sentence materials with controlled word predictability. The Journal of the Acoustical Society of America, 61(5), 1337–1351. [DOI] [PubMed] [Google Scholar]
- Knutson J. F., Hinrichs J. V., Tyler R. S., Gantz B. J., Schartz H. A., & Woodworth G. (1991). Psychological predictors of audiological outcomes of multichannel cochlear implants: Preliminary findings. Annals of Otology, Rhinology & Laryngology, 100(10), 817–822. [DOI] [PubMed] [Google Scholar]
- Koelewijn T., Zekveld A. A., Festen J. M., Rönnberg J., & Kramer S. E. (2012). Processing load induced by informational masking is related to linguistic abilities. International Journal of Otolaryngology, 1, 1–11. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Loebach J. L., & Pisoni D. B. (2008). Perceptual learning of spectrally degraded speech and environmental sounds. The Journal of the Acoustical Society of America, 123(2), 1126–1139. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Luce P. A., & Pisoni D. B. (1998). Recognizing spoken words: The neighborhood activation model. Ear and Hearing, 19, 1–36. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Marslen-Wilson W. (1993). Issues of process and representation in lexical access. In Cognitive models of speech processing: The second Sperlonga meeting (pp. 187–210). Mahwah, NJ: Erlbaum. [Google Scholar]
- Mattingly J. K., Castellanos I., & Moberly A. C. (2018). Nonverbal reasoning as a contributor to sentence recognition outcomes in adults with cochlear implants. Otology & Neurotology, 39(10), e956–e963. [DOI] [PMC free article] [PubMed] [Google Scholar]
- McClelland J. L., & Elman J. L. (1986). The TRACE model of speech perception. Cognitive Psychology, 18(1), 1–86. [DOI] [PubMed] [Google Scholar]
- McMurray B., Farris-Trimble A., & Rigler H. (2017). Waiting for lexical access: Cochlear implants or severely degraded input lead listeners to process speech less incrementally. Cognition, 169, 147–164. https://doi.org/10.1016/j.cognition.2017.08.013 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Moberly A. C., Harris M. S., Boyce L., & Nittrouer S. (2017). Speech recognition in adults with cochlear implants: The effects of working memory, phonological sensitivity, and aging. Journal of Speech, Language, and Hearing Research, 60(4), 1046–1061. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Moberly A. C., Houston D. M., & Castellanos I. (2016). Non-auditory neurocognitive skills contribute to speech recognition in adults with cochlear implants. Laryngoscope Investigative Otolaryngology, 1(6), 154–162. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Moberly A. C., Lowenstein J. H., & Nittrouer S. (2016). Word recognition variability with cochlear implants: The degradation of phonemic sensitivity. Otology & Neurotology, 37(5), 470–477. [DOI] [PubMed] [Google Scholar]
- Morton J. (1969). Interaction of information in word recognition. Psychological Review, 76(2), 165–178. [Google Scholar]
- Nagaraj N. K. (2017). Working memory and speech comprehension in older adults with hearing impairment. Journal of Speech, Language, and Hearing Research, 60(10), 2949–2964. [DOI] [PubMed] [Google Scholar]
- Nittrouer S., & Burton L. T. (2005). The role of early language experience in the development of speech perception and phonological processing abilities: Evidence from 5-year-olds with histories of otitis media with effusion and low socioeconomic status. Journal of Communication Disorders, 38(1), 29–63. [DOI] [PubMed] [Google Scholar]
- Norris D., McQueen J. M., & Cutler A. (2016). Prediction, Bayesian inference and feedback in speech recognition. Language, Cognition and Neuroscience, 31(1), 4–18. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Raven J. C. (1938). Raven's Progressive Matrices. Torrance, CA: Western Psychological Services. [Google Scholar]
- Raven J. C. (2000). The Raven's Progressive Matrices: Change and stability over culture and time. Cognitive Psychology, 41, 1–48. [DOI] [PubMed] [Google Scholar]
- Reitan R. M. (1958). Validity of the Trail Making Test as an indicator of organic brain damage. Perceptual and Motor Skills, 8(3), 271–276. [Google Scholar]
- Rimoldi H. J. (1948). A note on Raven's Progressive Matrices Test. Educational and Psychological Measurement, 8(3–1), 347–352. [Google Scholar]
- Rönnberg J. (2003). Cognition in the hearing impaired and deaf as a bridge between signal and dialogue: A framework and a model. International Journal of Audiology, 42, S68–S76. [DOI] [PubMed] [Google Scholar]
- Rönnberg J., Lunner T., Zekveld A., Sörqvist P., Danielsson H., Lyxell B., … Rudner M. (2013). The ease of language understanding (ELU) model: Theoretical, empirical, and clinical advances. Frontiers in Systems Neuroscience, 7, 31. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Salthouse T. A. (1996). The processing-speed theory of adult age differences in cognition. Psychological Review, 103, 403–428. [DOI] [PubMed] [Google Scholar]
- Schvartz K. C., Chatterjee M., & Gordon-Salant S. (2008). Recognition of spectrally degraded phonemes by younger, middle-aged, and older normal-hearing listeners. The Journal of the Acoustical Society of America, 124, 3972–3988. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Sommers M. S., & Danielson S. M. (1999). Inhibitory processes and spoken word recognition in young and older adults: The interaction of lexical competition and semantic context. Psychology and Aging, 14(3), 458–472. [DOI] [PubMed] [Google Scholar]
- Sörqvist P., & Rönnberg J. (2012). Episodic long-term memory of spoken discourse masked by speech: What is the role for working memory capacity? Journal of Speech, Language, and Hearing Research, 55(1), 210–218. [DOI] [PubMed] [Google Scholar]
- Stroop J. R. (1935). Studies of interference in serial verbal reactions. Journal of Experimental Psychology, 18(6), 643–662. [Google Scholar]
- Torgesen J. K., Wagner R. K., & Rashotte C. A. (1999). Test of Word Reading Efficiency. Austin, TX: Pro-Ed. [Google Scholar]
- Tuennerhoff J., & Noppeney U. (2016). When sentences live up to your expectations. NeuroImage, 124, 641–653. [DOI] [PubMed] [Google Scholar]
- Verhaeghen P., & Salthouse T. A. (1997). Meta-analyses of age–cognition relations in adulthood: Estimates of linear and nonlinear age effects and structural models. Psychological Bulletin, 122(3), 231. [DOI] [PubMed] [Google Scholar]
- Wechsler D. (1981). WAIS-R: Manual. New York, NY: The Psychological Corporation. [Google Scholar]
- Wechsler D. (2004). Wechsler Intelligence Scale for Children–Fourth Edition, Integrated (WISC-IV Integrated). San Antonio, TX: Psychological Corporation. [Google Scholar]
- Wilkinson G. S., & Robertson G. J. (2006). Wide Range Achievement Test–Fourth Edition (WRAT4). Lutz, FL: Psychological Assessment Resources. [Google Scholar]
- Wingfield A. (1996). Cognitive factors in auditory performance: Context, speed of processing, and constraints of memory. Journal of the American Academy of Audiology, 7, 175–182. [PubMed] [Google Scholar]
- Zekveld A. A., Rudner M., Johnsrude I. S., Heslenfeld D. J., & Rönnberg J. (2012). Behavioral and fMRI evidence that cognitive ability modulates the effect of semantic context on speech intelligibility. Brain and Language, 122(2), 103–113. [DOI] [PubMed] [Google Scholar]
- Zeng F. G. (2004). Trends in cochlear implants. Trends in Amplification, 8, 1–34. [DOI] [PMC free article] [PubMed] [Google Scholar]

