Journal of Speech, Language, and Hearing Research (JSLHR)
2021 Jul 12;64(8):3330–3342. doi: 10.1044/2021_JSLHR-20-00574

Forward Digit Span and Word Familiarity Do Not Correlate With Differences in Speech Recognition in Individuals With Cochlear Implants After Accounting for Auditory Resolution

Adam K. Bosen, Victoria A. Sevich, Shauntelle A. Cannon
PMCID: PMC8740688  PMID: 34251908

Abstract

Purpose

In individuals with cochlear implants, speech recognition is not associated with tests of working memory that primarily reflect storage, such as forward digit span. In contrast, our previous work found that vocoded speech recognition in individuals with normal hearing was correlated with performance on a forward digit span task. A possible explanation for this difference across groups is that variability in auditory resolution across individuals with cochlear implants could conceal the true relationship between speech and memory tasks. Here, our goal was to determine if performance on forward digit span and speech recognition tasks are correlated in individuals with cochlear implants after controlling for individual differences in auditory resolution.

Method

We measured sentence recognition in 20 individuals with cochlear implants using Perceptually Robust English Sentence Test Open-set sentences. Spectral and temporal modulation detection tasks were used to assess individual differences in auditory resolution, auditory forward digit span was used to assess working memory storage, and self-reported word familiarity was used to assess vocabulary.

Results

Individual differences in speech recognition were predicted by spectral and temporal resolution. A correlation was found between forward digit span and speech recognition, but this correlation was not significant after controlling for spectral and temporal resolution. No relationship was found between word familiarity and speech recognition. Forward digit span performance was not associated with individual differences in auditory resolution.

Conclusions

Our findings support the idea that sentence recognition in individuals with cochlear implants is primarily limited by individual differences in working memory processing, not storage. Studies examining the relationship between speech and memory should control for individual differences in auditory resolution.


Speech recognition outcomes vary substantially across individuals with cochlear implants. This variability can be attributed both to individual differences in the resolution of the signal conveyed by the auditory pathway and to individual differences in the cognitive abilities that support speech recognition. While many studies have examined how either aspects of auditory resolution or aspects of cognition predict speech recognition outcomes, fewer studies have examined how auditory resolution and cognitive ability jointly affect speech recognition and interact with one another. The problem with measuring only one of these attributes is that unaccounted variance in the other weakens our ability to accurately characterize its relationship with speech recognition. For example, some previous studies did not find a significant relationship between performance on speech recognition and simple span tasks, an assessment of cognitive ability. It is possible that a relationship does not exist, but it is also possible that variance in auditory resolution across participants, which was often not accounted for in studies that assessed cognition, added so much variability to speech recognition outcomes that it concealed a real relationship. The current study was designed as a step toward addressing this problem by reassessing whether performance on a simple span task correlates significantly with speech recognition after controlling for differences in auditory resolution across individuals with cochlear implants.

One particular cognitive ability that has been frequently studied in individuals with cochlear implants is working memory, often via complex span tasks such as reading span. In these tasks, participants must store some information in memory while processing other information, such as remembering a list of letters or words while judging the semantic meaning of sentences (Daneman & Carpenter, 1980; Kane et al., 2004). Successfully completing these tasks places a high demand on the ability to process or manipulate information in memory alongside a lower demand on the ability to store information in memory (Unsworth & Engle, 2007), so these tasks are often considered to measure the “working” aspect of working memory. The interleaved processing and storage components of complex span tasks are analogous to the task of recognizing speech in difficult listening conditions in the sense that speech cues must be stored in memory while the listener processes semantic meaning from them (Rönnberg et al., 2013). Previously observed links between complex span tasks and speech recognition in individuals with cochlear implants (Kaandorp et al., 2017; O'Neill et al., 2019) indicate that their ability to recognize speech is limited by individual differences in their ability to process information in working memory.

Another common way to assess memory is through simple span tasks, which require storing a sequence in memory and then repeating it in order without performing an interleaved task. Because simple span tasks rely more on the ability to store sequences of information in memory and less on the ability to concurrently process information than complex span tasks (Unsworth & Engle, 2007), these tasks are often considered to measure short-term memory in the absence of a strong “working” aspect. In contrast to complex span tasks, previous studies have not typically shown a relationship between simple span task performance and speech recognition in individuals with cochlear implants. Sequences of visually presented digits are often used (e.g., Moberly, Vasil, et al., 2018; Moberly & Reed, 2019; Tamati et al., 2020), although some studies have also used spoken digits and words (Moberly, Harris, et al., 2017), pictures (Moberly, Pisoni, & Harris, 2018), and spatial locations (Moberly et al., 2016; Moberly, Houston, et al., 2017). None of these studies have found a relationship between speech recognition outcomes in individuals with cochlear implants and performance on simple span tasks. These results suggest that individual differences in the ability to store information in memory do not limit speech recognition in individuals with cochlear implants.

Studies of young adults with normal hearing suggest different relationships between processing, storage, and speech recognition from the relationships found in individuals with cochlear implants. Specifically, young adults with normal hearing do not tend to show a correlation between complex span tasks and speech recognition in noise (Füllgrabe & Rosen, 2016), which suggests that processing ability is not a major limiter of their ability to recognize speech in noise. In our previous work (Bosen & Barry, 2020), we found that simple digit span was associated with vocoded sentence recognition in young adults with normal hearing, indicating that storage is associated with speech recognition in these individuals. We tested sentence recognition and serial recall of digit sequences with auditory stimuli that were vocoded with 16-, 8-, and 4-channel vocoders to manipulate spectral resolution. Recall accuracy for digits was unaffected by the number of vocoder channels, which indicates that serial recall of digits is unaffected by auditory resolution. Despite not being sensitive to auditory resolution, individual differences in digit span accuracy were nonetheless a predictor of vocoded sentence recognition accuracy across all tested numbers of channels. This correlation between digit span and speech recognition could reflect individual differences in auditory specific storage and rehearsal processes (Kane et al., 2004), verbal articulation (Acheson & Macdonald, 2009; Maidment & Macken, 2012), or linguistic experience (Jones & Macken, 2015) that affect performance on both tasks. In agreement with our findings, Tamati et al. (2013) found that young adults with normal hearing who had low sentence recognition accuracy in noise also had lower performance on simple digit span than individuals who had high sentence recognition accuracy in noise. They also found that individuals with low accuracy reported lower familiarity with a variety of English words than individuals with high accuracy, which supports the notion that linguistic experience facilitates speech recognition.

One possible interpretation of differences in the relationships of processing and storage with speech recognition across individuals with cochlear implants and young adults with normal hearing is that these groups rely on different aspects of working memory to facilitate speech recognition. However, an alternative interpretation is that methodological differences between studies could lead to different results. Specifically, the digit span task used in Bosen and Barry (2020) differed in several ways from the simple span tasks used for individuals with cochlear implants described above. Most simple span tasks start with short lists of items and adaptively increase list length until the participant is unable to recall two sequences of the same length, quantifying individual ability as the total number of items recalled correctly. This measure of serial recall ability has low internal consistency (Woods et al., 2011) and does not consistently capture individual differences at longer list lengths, which more strongly reflect underlying cognitive abilities (Unsworth & Engle, 2006). We addressed this problem by using a digit span task with a fixed number of trials and equal sampling of long and short list lengths (Bosen & Barry, 2020). Digits also have the advantage of being a familiar, closed set of words that can be accurately identified in degraded listening conditions. Individual performance in digit span tasks does not vary between unprocessed and vocoded stimuli (Bosen & Luckasen, 2019) or with the number of channels in a vocoder (Bosen & Barry, 2020), which indicates that digit span should be able to assess serial recall ability in individuals with cochlear implants without being affected by individual differences in auditory resolution.

Another concern when comparing across groups is that the auditory resolution varies greatly across individuals with cochlear implants, whereas vocoding stimuli fixes the auditory resolution of the speech signal across individuals with normal hearing. Spectral (Anderson et al., 2012; Gifford et al., 2018; Litvak et al., 2007) and temporal (Cazals et al., 1994; Fu, 2002; Garadat et al., 2012; Won et al., 2011) modulation detection thresholds vary greatly across individuals with cochlear implants, with some individuals able to detect modulations almost as small as those that individuals with normal hearing can detect and some individuals unable to detect gross modulations. Both spectral and temporal modulation detection thresholds predict speech recognition ability across individuals with cochlear implants. If auditory resolution is not accounted for, then individual differences in resolution will introduce substantial variability into speech recognition outcomes, which could mask a true relationship between cognitive factors and speech recognition. Because modern cochlear implant stimulation strategies rely solely on transmitting spectral and temporal envelope cues, it is likely that a combination of spectral and temporal modulation detection tasks will cover the major auditory sources of variability in speech recognition outcomes. Some studies have measured aspects of auditory resolution alongside working memory, but there is little consensus as to what measures of auditory resolution should be used. For example, O'Neill et al. (2019) used free-field measures of spectral resolution in addition to measures of working memory and fluid intelligence and found that these measures together could account for about 41% of the variance in sentence recognition. Moberly, Vasil, et al. (2018) used a spectrotemporal modulation detection task (Aronoff & Landsberger, 2013) and a battery of cognitive tasks, although they did not find a combination of factors that were substantial predictors of sentence recognition. In a study that did not consider cognitive factors, Won et al. (2011) demonstrated that measuring both free-field spectral and temporal modulation detection thresholds together could predict about 58% of the variance in word recognition scores, whereas spectral and temporal measures in isolation were weaker predictors. Their work indicates that both spectral and temporal resolution should be measured when assessing individual differences in auditory resolution as they relate to speech recognition.

The goal of the current study was to determine whether digit span, as implemented by Bosen and Barry (2020), is a predictor of sentence recognition in individuals with cochlear implants after controlling for individual differences in spectral and temporal resolution. Although the measures of spectral and temporal resolution and serial recall used here have been previously studied in this population, to our knowledge, this is the first study to jointly assess performance on these tasks within participants and examine their relationship to speech recognition and to one another. A significant relationship between serial recall and speech recognition in individuals with cochlear implants would indicate that individual differences in the ability to store verbal materials in working memory also contribute to variability in sentence recognition accuracy in these individuals, in addition to the known contributions of processing. In this study, we implemented a modified version of Won et al.'s (2011) spectral and temporal modulation detection tasks. These tasks were implemented in a free-field listening environment and used participants' everyday cochlear implant processing strategies to avoid the need for specialized hardware and to facilitate their use in future studies. We also included the word familiarity task used by Tamati et al. (2013) to see if their findings in young adults with normal hearing extend to individuals with cochlear implants. Combining measures of auditory resolution and cognitive ability within a study provides a greater opportunity to identify the full set of factors that impact speech recognition outcomes in individuals with cochlear implants.

Method

This study was designed to determine if forward digit span performance predicts sentence recognition after accounting for auditory resolution. We tested the ability of individuals with cochlear implants to recognize sentences, detect spectral and temporal modulations, recall digit sequences, and recognize unfamiliar words. Data from each task and analysis scripts are provided through the Open Science Framework and can be found at https://doi.org/10.17605/OSF.IO/5PY8J at the time of publication.

Participants

Twenty adult cochlear implant (CI) users participated in this study (12 females; ages ranged from 22 to 76 years, with a median age of 63 years). Six participants were bilateral CI users. See Table 1 for a summary of participant demographics. All participants were native speakers of American English. Participants did not report any developmental, intellectual, or neurological disorders that would interfere with speech recognition or short-term memory. All participants were compensated hourly for participation and per mile for travel. The study was approved by the Boys Town National Research Hospital Institutional Review Board.

Table 1.

Participant demographic and device information.

Participant Sex Age at testing (years) Duration of device use (years) Manufacturer Listening mode
CI_01 F 59 9 Cochlear Corporation Bilateral
CI_02 M 72 16 Advanced Bionics Bilateral
CI_03 M 65 6 Cochlear Corporation Bimodal (L)
CI_04 F 76 6 Cochlear Corporation Bimodal (R)
CI_05 F 64 16 Cochlear Corporation Bilateral
CI_06 M 68 19 R: Cochlear Corporation; L: Advanced Bionics Bilateral
CI_07 M 60 19 Cochlear Corporation Bilateral
CI_08 F 73 11 Advanced Bionics Unilateral (L)
CI_09 M 71 3 Advanced Bionics Bimodal (L)
CI_10 M 22 10 Advanced Bionics Unilateral (L)
CI_11 F 62 5 Advanced Bionics Unilateral (L)
CI_12 F 43 18 Advanced Bionics Bilateral
CI_13 M 56 17 Advanced Bionics Unilateral (L)
CI_14 F 72 6 Advanced Bionics Bimodal (R)
CI_15 F 56 14 Cochlear Corporation Unilateral (L)
CI_16 F 46 11 Cochlear Corporation Unilateral (L)
CI_17 F 71 6 Cochlear Corporation Bimodal (L)
CI_18 F 73 17 Advanced Bionics Bimodal (L)
CI_19 F 61 6 Cochlear Corporation Unilateral (L)
CI_20 M 25 1 Advanced Bionics Bimodal (R)

Note. Duration of device use was defined as the year of testing minus the year in which the participant received the implant. M = male; F = female; L = left; R = right.

Experimental Conditions

Experiments were conducted in an echo-attenuated, double-walled, sound-treated booth. Auditory stimuli were presented from a loudspeaker (Genelec 8830APM) located approximately at head height and 0.75 m in front of participants. Presentations and recordings were controlled with custom MATLAB scripts (MathWorks). All participants listened to the stimuli using their own sound processor set to clinical settings. Participants who had bimodal hearing removed their hearing aid and wore an ear plug in the ear with residual acoustic hearing during testing. All bimodal participants had at least moderate hearing loss in their ear with acoustic hearing, so the ear plug ensured that stimuli were inaudible to that ear. Ear plugs were inserted by trained lab staff and participants verbally confirmed that they could not hear out of the plugged ear. Verbal responses were recorded with a microphone (Audio-Technica AT2020) and were used for off-line scoring of responses. The total duration of testing was approximately 2 hr, including breaks.

Sentence Recognition

Perceptually Robust English Sentence Test Open-set (PRESTO) sentences were used due to their high variability and low predictability (Gilbert et al., 2013; Tamati et al., 2013). Sentences in this stimulus set vary in talker, sex, regional dialect, and speaking rate. Participants initiated playback of each sentence with a mouse click on a PC and were instructed to repeat back whatever they heard at their own pace. Sentences were played at a mean level of 65 dB SPL in quiet. Each sentence was followed by a 100-ms, 440-Hz tone at 60 dB SPL to prompt response. Participants were encouraged to guess if unsure, no feedback was provided, and repetitions were not allowed. Three sentences from List 1 were used to familiarize participants with the task. Lists 13 and 17 were used to test sentence recognition for all participants to control for differences in intelligibility across stimulus lists (Faulkner et al., 2015). Each list is composed of 76 keywords distributed across 18 sentences, which provided a total of 152 keywords across 36 sentences in this study. PRESTO accuracy was defined as the proportion of keywords that were correctly identified across both test lists.
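As an illustration of this scoring rule, a minimal Python sketch follows. The precise off-line scoring criteria are not detailed above; the sketch assumes a keyword counts as correct whenever it appears in the transcribed response, and all names are illustrative.

def presto_accuracy(trials):
    # trials: list of (keywords, response_words) pairs of lowercase word lists
    hits = total = 0
    for keywords, response in trials:
        total += len(keywords)
        hits += sum(k in response for k in keywords)  # keyword appears anywhere
    return hits / total

# For this study, 152 keywords across 36 sentences would be pooled into one proportion.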

Temporal and Spectral Modulation Detection

Temporal and spectral modulation detection thresholds were measured with adaptive psychophysical tasks. A three-alternative, three-interval forced-choice psychophysical task was used to measure amplitude modulation detection thresholds for signals modulated over either time or frequency. For both modulation types, the carrier signal was a 60 dB SPL, 400-ms-long noise burst with energy between 350 and 6500 Hz (selected to match the stimuli in Litvak et al., 2007), gated on and off with 20-ms sin² ramps. This carrier was modulated in the target interval and was unmodulated in the other two intervals. In the temporal modulation detection task, the target interval was sinusoidally amplitude modulated at a 100-Hz rate, which has been previously shown to correlate with speech recognition in individuals with cochlear implants when presented via direct stimulation (Fu, 2002; Luo et al., 2008) and via a loudspeaker (Won et al., 2011). In the spectral modulation detection task, the target interval was spectrally modulated at 0.5 ripples per octave, which has also been previously shown to correlate with speech recognition when presented via a loudspeaker (Anderson et al., 2012; Litvak et al., 2007; Saoji et al., 2009). Carrier level for every interval was randomly varied between −3 and +3 dB to diminish the availability of loudness differences between modulated and unmodulated intervals (McKay & Henshall, 2010). Participants selected the interval that they thought was modulated by using a mouse to click on one of three boxes corresponding to each interval on a computer screen.
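A minimal Python sketch of stimuli with these properties follows. This is not the custom MATLAB software used in the study; the sampling rate and frequency-domain synthesis method are assumptions, and calibration to the 60 dB SPL presentation level and the ±3 dB level rove are omitted.

import numpy as np

FS = 44100              # sampling rate (Hz); assumed, not stated above
DUR = 0.4               # 400-ms burst
LO, HI = 350.0, 6500.0  # carrier bandwidth (Hz)

def band_noise():
    # Flat-spectrum noise band-limited to 350-6500 Hz, built in the frequency domain
    n = int(FS * DUR)
    freqs = np.fft.rfftfreq(n, 1 / FS)
    spec = np.zeros(freqs.size, dtype=complex)
    band = (freqs >= LO) & (freqs <= HI)
    spec[band] = np.exp(1j * np.random.uniform(0, 2 * np.pi, band.sum()))
    return np.fft.irfft(spec, n)

def ramp(x, ramp_ms=20):
    # 20-ms sin^2 onset/offset ramps
    k = int(FS * ramp_ms / 1000)
    env = np.sin(np.linspace(0, np.pi / 2, k)) ** 2
    y = x.copy()
    y[:k] *= env
    y[-k:] *= env[::-1]
    return y

def temporal_target(depth, rate=100.0):
    # Sinusoidal amplitude modulation at 100 Hz; depth = 1.0 is 100% modulation
    x = band_noise()
    t = np.arange(x.size) / FS
    return ramp(x * (1 + depth * np.sin(2 * np.pi * rate * t)))

def spectral_target(peak_to_valley_db, density=0.5):
    # Sinusoidal spectral ripple at 0.5 ripples per octave on a log2-frequency axis
    n = int(FS * DUR)
    freqs = np.fft.rfftfreq(n, 1 / FS)
    spec = np.zeros(freqs.size, dtype=complex)
    band = (freqs >= LO) & (freqs <= HI)
    octaves = np.log2(freqs[band] / LO)
    ripple_db = (peak_to_valley_db / 2) * np.sin(
        2 * np.pi * density * octaves + np.random.uniform(0, 2 * np.pi))
    spec[band] = 10 ** (ripple_db / 20) * np.exp(
        1j * np.random.uniform(0, 2 * np.pi, band.sum()))
    return ramp(np.fft.irfft(spec, n))

In the temporal task, the linear modulation depth would be adapted; in the spectral task, the peak-to-valley ratio in dB would be adapted.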

A 2-down, 1-up adaptive procedure was used to measure modulation depth threshold, which converged on 71% detection accuracy (Levitt, 1971). The temporal modulation detection task started at a maximum modulation depth of 100% and was adapted in steps of 4 dB relative to 100% modulation from the first to fourth reversal and 1 dB for the remaining reversals. The spectral modulation detection task started at a maximum peak-to-valley ratio of 40 dB and was adapted in steps of 4 dB from the first to fourth reversal and 1 dB for the remaining reversals. Runs were ended after 55 trials or eight reversals. For each run, the final four reversals were averaged to obtain the modulation threshold. Participants completed three runs each of the spectral and temporal modulation detection tasks, blocked by modulation type. A few participants had thresholds that seemed inconsistent to the experimenter (differences of more than about 10-dB peak-to-valley ratio) across the three runs of the spectral detection tasks, so for these participants, a fourth run was also completed. Spectral and temporal thresholds were defined as the mean threshold across all runs. A break was offered between runs of different modulation types. Participants were given visual feedback after every trial by highlighting the button corresponding to the modulated interval in green if participants selected that interval or in red if participants selected a different interval. Participants were encouraged to guess if unsure.
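A minimal Python sketch of this staircase logic follows. The present_trial callback and the handling of tracks that end with fewer than four reversals are illustrative assumptions, and clamping at the maximum modulation depth is omitted for brevity.

def staircase(present_trial, start_level, step_big=4.0, step_small=1.0,
              max_trials=55, max_reversals=8):
    # present_trial(level) presents one three-interval trial at the given
    # modulation level (dB) and returns True if the listener picked the
    # modulated interval.
    level = start_level
    direction = 0          # +1 = track moving up, -1 = track moving down
    reversals = []
    correct_in_row = 0
    for _ in range(max_trials):
        if present_trial(level):
            correct_in_row += 1
            move = -1 if correct_in_row == 2 else 0  # down after 2 correct in a row
            if move:
                correct_in_row = 0
        else:
            move, correct_in_row = 1, 0              # up after any error
        if move:
            if direction and move != direction:
                reversals.append(level)              # direction change = reversal
                if len(reversals) == max_reversals:
                    break
            direction = move
            # 4-dB steps from the first to fourth reversal, 1-dB steps afterward
            step = step_big if len(reversals) < 4 else step_small
            level += move * step
    final = reversals[-4:]                           # mean of the final reversals
    return sum(final) / len(final)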

Digit Span

Memory was tested with the digit span task described in Bosen and Barry (2020), although the recorded stimuli were not vocoded for the current study. Verbal sequences of between two and nine digits were presented at a rate of one digit onset per second, and participants repeated back the sequence in forward order. This task was preceded by a brief familiarization, in which participants listened to and repeated back digits from one to nine in sequential order, followed by one randomly ordered two-digit list, five-digit list, and nine-digit list. Participants initiated each trial with a mouse click. Each list length was tested 10 times in a pseudorandom order that was fixed across participants, for a total of 80 trials. Participants were unaware of the length of each list before it was presented. This design differs from adaptive digit span tasks in order to capture individual differences in performance at long list lengths. Performance was quantified as the total proportion of digits recalled in the correct position across all list lengths, regardless of whether whole lists were correctly recalled. This scoring method has better internal consistency than scoring performance as the total number of lists or the longest list length that were accurately recalled (Conway et al., 2005; Woods et al., 2011).
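A minimal Python sketch of this position-wise scoring, with a worked example:

def digit_span_score(trials):
    # trials: list of (presented, recalled) digit sequences
    correct = total = 0
    for presented, recalled in trials:
        total += len(presented)
        # a digit counts as correct only if recalled in its original position
        correct += sum(p == r for p, r in zip(presented, recalled))
    return correct / total

# Example: a perfect 2-digit list plus a 5-digit list with two transposed digits:
# digit_span_score([([5, 2], [5, 2]), ([3, 1, 9, 4, 7], [3, 1, 4, 9, 7])])
# -> (2 + 3) / (2 + 5) = 0.71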

Digit order was pseudorandomized within each list, and the lists were designed to ensure that no randomizations produced sequential ordering that could facilitate recall or storage of digits as sequential chunks (Cowan, 2001). Digits were presented at a fixed level of 65 dB SPL. One second after the onset of the last item in a list, a 100-ms, 440-Hz tone was presented at 60 dB SPL to prompt response. Participants were instructed to verbally repeat back items in order at their own pace, and no restrictions were placed on rehearsal. Participants were encouraged to guess if unsure, and no feedback was provided. A break was offered halfway through the task.
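The exact randomization constraint is not fully specified above; the following minimal sketch assumes digits 1 through 9 were sampled without replacement and that "sequential ordering" means adjacent digits that ascend or descend by one.

import random

def make_list(length):
    # Rejection sampling: redraw until no adjacent pair forms a sequential chunk
    while True:
        seq = random.sample(range(1, 10), length)
        if all(abs(a - b) != 1 for a, b in zip(seq, seq[1:])):
            return seq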

Word Familiarity

A word familiarity task (WordFam; Pisoni, 2007) was included as a measure of participants' self-reported familiarity with a set of 150 words in the English language. The questionnaire was originally developed by Lewellen et al. (1993) to quantify individual differences in self-reported lexical knowledge. In this task, participants were instructed to report how familiar they were with 150 English words on a 7-point scale, ranging from 1 (“You have never seen or heard the word before”) to 7 (“You recognize the word and are confident that you know the meaning of the word”). On this scale, keywords in PRESTO sentences had an average familiarity rating of 6.9 out of 7 for each sentence list (Gilbert et al., 2013), ensuring that incorrect keyword recognition was unlikely to arise from unfamiliarity with keywords. Words in the familiarity task were selected to be equally representative of high-, medium-, and low-familiarity English words based on the ratings obtained by Nusbaum et al. (1984). The questionnaire was administered on a PC in a Microsoft Excel spreadsheet and the participants were instructed to type the number corresponding to their familiarity with a given word. Responses were averaged across all three familiarity conditions to obtain a single word familiarity score.

Results

Table 2 provides descriptive statistics for all tasks. Performance on all tasks varied across individuals, and almost every task could be completed to some extent by each participant. One participant had floor performance on the sentence recognition task and two participants had close to floor performance on the temporal modulation detection task.

Table 2.

Descriptive statistics for all tasks.

Task M SD Minimum Maximum
PRESTO 0.54 0.24 0.01 0.78
SMDT 18.6 8.1 7.8 35
AMDT −6.4 4.0 −14.2 −1
Digit Span 0.74 0.11 0.50 0.90
Word Familiarity 4.68 0.90 3.11 6.62

Note. For PRESTO sentence recognition, values are the proportion of keywords correctly identified across sentences. For spectral modulation detection thresholds (SMDT), values are in dB peak-to-valley ratio, with lower values indicating better performance. For temporal modulation detection thresholds (AMDT), values are in dB relative to 100% (full) modulation, with lower (more negative) values indicating better performance. For digit span, values are the proportion of digits recalled in the correct position across all lists. For word familiarity, values are the average familiarity rating across all words, with minimum and maximum possible familiarities of 1 and 7.

Figure 1 and Table 3 show the association between PRESTO sentence recognition and performance on each of the predictor tasks. As shown in Figure 1, spectral modulation detection thresholds and performance on the digit span task were significant predictors of sentence recognition in isolation (p < .05). Linear regression models were fit to the data, with each model shown as a separate row in Table 3. Models were also fit to standardized data, and standardized coefficients are reported to enable comparison of relative effect sizes. As shown in Table 3, including both spectral and temporal modulation detection thresholds yielded a model in which both predictors were significant, which replicates the findings of Won et al. (2011). On its own, performance on the digit span task was significantly correlated with sentence recognition accuracy, and performance on the word familiarity task was marginally correlated with sentence recognition accuracy. However, these relationships were not evident when comparing performance on these tasks with residual sentence recognition accuracy after factoring out the effect of spectral and temporal modulation detection thresholds. This finding indicates that digit span and word familiarity did not account for variability in sentence recognition once individual differences in auditory resolution were accounted for.
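A minimal sketch of this analysis in Python (numpy only; variable names are illustrative, and the full model statistics reported in Table 3 are omitted):

import numpy as np

def ols(X, y):
    # Least-squares fit with an intercept; returns coefficients and residuals
    X1 = np.column_stack([np.ones(len(y)), X])
    b, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return b, y - X1 @ b

def standardized_betas(X, y):
    # Refit after z-scoring predictors and outcome to obtain standardized coefficients
    z = lambda v: (v - v.mean(axis=0)) / v.std(axis=0, ddof=1)
    b, _ = ols(z(X), z(y))
    return b[1:]

# presto, smdt, amdt, digit_span: 1-D arrays, one value per participant
# _, resid = ols(np.column_stack([smdt, amdt]), presto)  # auditory-resolution model
# r = np.corrcoef(digit_span, resid)[0, 1]               # association with residuals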

Figure 1.


Association between PRESTO sentence recognition and predictor tasks. The top four panels show the relationship between PRESTO accuracy and performance on each predictor task, and the bottom two panels show the relationship between digit span and word familiarity task performance with residual PRESTO sentence recognition after factoring out spectral and temporal modulation detection thresholds. Each point represents results from one participant. Correlation coefficients and p values for simple linear correlations are provided in each panel. Lines show the standard major axis regression fit (Legendre, 2013) across individuals for significant relationships in Table 3. PRESTO = Perceptually Robust English Sentence Test Open-set.

Table 3.

Comparison of linear regression models predicting PRESTO sentence recognition accuracy.

Model | Intercept B (p) | Predictor B (p) [β] | F | p | R²
SMDT + AMDT | 0.79 (3.1 × 10⁻⁷) | SMDT: −0.023 (8.2 × 10⁻⁵) [−.76]; AMDT: −0.029 (5.6 × 10⁻³) [−.47] | 15.4 | 1.5 × 10⁻⁴ | .64
DS | −0.24 (.48) | DS: 1.07 (.03) [.47] | 5.88 | .03 | .26
WF | 0.03 (.91) | WF: 0.11 (.08) [.39] | 3.33 | .08 | .16
DS – Residual | −0.20 (.41) | DS: 0.28 (.39) [.12] | 0.77 | .39 | .04
WF – Residual | −0.12 (.51) | WF: 0.03 (.50) [.10] | 0.47 | .50 | .03

Note. Each row represents one model. For each model, the unstandardized coefficients are given as B, with the p value of each unstandardized coefficient in parentheses. Standardized versions of each model were fit as well, and coefficients for standardized models are given in brackets as β for all predictors aside from the intercept. Model significance relative to an intercept-only model is quantified with F and p values, and goodness of fit for unstandardized models is given as variance explained (R²). The first model includes only spectral modulation detection threshold (SMDT) and temporal modulation detection threshold (AMDT) as a replication of previous studies. The next two models include digit span (DS) and word familiarity (WF), respectively, as single predictors to show their relationship to sentence recognition on their own. The last two models show partial regressions of digit span and word familiarity against residual PRESTO sentence recognition after factoring out spectral and temporal modulation detection thresholds. PRESTO = Perceptually Robust English Sentence Test Open-set.

The lack of a relationship between digit span and speech recognition after factoring out individual differences in spectral and temporal resolution differs from our previous findings in young adults with normal hearing (Bosen & Barry, 2020), so we further analyzed the digit span data to ensure that the task yielded meaningful results. Figure 2 shows digit span performance for each participant. Participants recalled lists of two and three digits nearly perfectly, but the proportion of digits recalled in the correct position dropped for longer lists. Accurate recall for short lists indicates that participants could correctly identify the digit stimuli when they were not accompanied by a memory load, indicating that the stimuli were intelligible. Thus, declines in recall accuracy for longer sequences indicate that participants were less able to store longer sequences in memory. On average, the group of participants with cochlear implants had lower recall accuracy at longer list lengths than young adults with normal hearing (Bosen & Barry, 2020), although several participants in this study had better recall than the young normal-hearing average. These results indicate that the range of performance observed across participants in the digit span task would have been sufficient to identify an association between digit span and sentence recognition if one existed.

Figure 2.


Digit span performance across participants. Each thin line represents the proportion of digits that were recalled in the correct position across all trials at each list length for one individual. The thick black line shows the average performance across individuals. The thick gray line shows average performance of young adults with normal hearing averaged across vocoder listening conditions in Bosen and Barry (2020).

It is possible that the residual results in Table 3 could arise from substantial covariance between digit span and the modulation detection thresholds, which could mask an association between digit span and sentence recognition. Figure 3 shows the relationship between digit span and the other predictor variables and demonstrates that no such covariance was present. Simple linear correlations between digit span and the other predictors found that digit span performance was not correlated with spectral (r = −.36, p = .13) or temporal (r = −.15, p = .53) thresholds but was correlated with word familiarity (r = .47, p = .04). The association between word familiarity and digit span suggests that performance on both tasks is influenced by individual differences in a common latent factor, but the lack of a correlation with spectral and temporal thresholds indicates that this latent factor is distinct from auditory resolution.

Figure 3.


Association between digit span and other predictor tasks. Each point represents results from one participant. Correlation coefficients and p values for simple linear correlations are provided in each panel.

Discussion

The goal of this study was to determine whether serial recall of verbal materials is a predictor of sentence recognition in individuals with cochlear implants when using reliable scoring methods and after accounting for individual differences in auditory resolution. We found that free-field measures of auditory resolution were significant predictors of sentence recognition, while digit span and word familiarity were not after factoring out the effects of auditory resolution. Performance on the digit span task was not correlated with auditory resolution and was nearly perfect for short lists, indicating that digit span was not impaired by low resolution auditory input in these individuals. Our results replicate previous findings (Won et al., 2011) that free-field measures of temporal and spectral resolution are significant predictors of speech recognition in individuals with cochlear implants. These results also demonstrate that digit span does not reflect the working memory constructs that are essential for speech recognition in individuals with cochlear implants, in contrast to young adults with normal hearing listening to vocoded speech.

The Role of Working Memory in Speech Recognition Differs Across Populations

In our previous work, we found a significant relationship between performance in serial recall tasks, including the digit span task used here, and vocoded sentence recognition in young adults with normal hearing (Bosen & Barry, 2020). In contrast, the current work indicates that digit span is not a substantial predictor of speech recognition in individuals with cochlear implants.

This pattern is the opposite of what is typically observed with reading span. For example, O'Neill et al. (2019) examined the relationship between speech recognition and reading span in young adults with normal hearing listening to vocoded sentences and in individuals with cochlear implants listening in steady-state noise. In their work, reading span was not a significant predictor of speech recognition in young adults with normal hearing, although it was for the individuals with cochlear implants. The comparison of O'Neill et al.'s findings to our current and prior results needs to be qualified by methodological differences across studies. We used auditory stimuli, whereas reading span typically uses visual stimuli (although see Smith et al., 2016, for an analogous auditory complex span task). Core working memory constructs tend to be general across modalities, but serial recall performance tends to be more modality specific (Kane et al., 2004), so it is possible that visual short-term memory tasks would not be associated with sentence recognition in young adults with normal hearing, in contrast to our previous findings with a verbal digit span task. O'Neill et al. (2019) used different speech stimuli than the current study and included multiple levels of steady-state background noise, which may yield additional differences between their results and ours. They also tested both zero-order correlations between reading span and speech recognition and multiple linear regression using both cognitive and spectral resolution measures as predictors. In contrast to our results, in their multiple regression the cognitive measure remained a significant predictor of speech recognition even with the spectral resolution factor included in the model. Speech recognition ability tends to be consistent across different forms of degradation (Carbonell, 2017), so we would expect any general link between working memory and speech recognition to be consistent across stimuli and listening conditions so long as the spread in performance was large enough to reflect individual differences (e.g., not at floor or ceiling accuracy).

The absence of a correlation between digit span and speech recognition in the current study, after controlling for auditory resolution, differs from the trends observed in young adults with normal hearing listening to vocoded speech. This pattern of results suggests a change in the aspects of working memory that limit speech recognition in individuals with cochlear implants. Working memory contains distinct mechanisms for storing and processing information (Shipstead et al., 2014) that are engaged to varying degrees by different memory tasks (Engle et al., 1999; Unsworth & Engle, 2007). Forward serial recall tasks, such as digit span, predominantly engage storage mechanisms, whereas complex span tasks, such as reading span, engage the ability to alternate between storing and processing information (Daneman & Merikle, 1996). Complex span tasks have been emphasized in studies of individuals with hearing loss because these tasks are believed to reflect aspects of processing that support speech recognition (Akeroyd, 2008), although these tasks generally do not predict speech recognition in young adults with normal hearing (Füllgrabe & Rosen, 2016). The correlations between digit span and vocoded sentence recognition found by Bosen and Barry (2020) indicate that working memory also plays a role in vocoded speech recognition in young adults with normal hearing. The difference between these groups seems to be that the limiting factor associated with speech recognition shifts from storage ability in young adults with normal hearing listening to vocoded speech to processing ability in individuals with cochlear implants.

O'Neill et al. (2019) additionally tested the ability of older adults with normal hearing, age-matched to their participants with cochlear implants, to recognize vocoded sentences and found a correlation between reading span and sentence recognition (r = .45) similar in strength to the one they found in individuals with cochlear implants (r = .43). The similar trend across groups suggests that the difference in which working memory mechanisms associate with speech recognition may have more to do with aging than with hearing loss. Aging leads to a selective deficit in processing in working memory (Bopp & Verhaeghen, 2005; Oberauer, 2005), so processing ability could be the limiting mechanism in speech recognition in older adults regardless of hearing status. On average, participants in this study were slightly worse than young, normal-hearing individuals on the digit span task with either vocoded (Bosen & Barry, 2020) or unprocessed (Bosen & Luckasen, 2019) digits, although there were large individual differences in performance, with several participants in this study performing better than the normal-hearing average. This indicates that many of these participants had an intact ability to store sequences of verbal information, in agreement with previous work (Moberly, Harris, et al., 2017; Moberly, Pisoni, & Harris, 2018).

Hearing loss exacerbates declines in overall cognition (Deal et al., 2015; Loughrey et al., 2018; Yuan et al., 2018) and processing ability (Rönnberg et al., 2011) with age, which could compound the difficulty in using working memory to support speech recognition. In support of this idea, tests of fluid intelligence also predict speech outcomes in individuals with cochlear implants (Mattingly et al., 2018; Moberly et al., 2019; Moberly & Reed, 2019) and fluid intelligence partially mediates the relationship between aging and speech recognition independent of auditory sensitivity (Moberly, Vasil, et al., 2018). Fluid intelligence is a closely related construct to processing in working memory (Wilhelm et al., 2013), so additional work is needed to distinguish how the role of these mechanisms in speech recognition changes with both aging and hearing loss.

As with digit span, the lack of correlation between word familiarity and sentence recognition in this study is also not consistent with previous work in young adults with normal hearing (Tamati et al., 2013). The mean word familiarity rating and range of ratings across individuals were similar to values reported by Tamati et al., indicating that this difference across studies is not due to group-level differences in word familiarity. There is not an obvious reason why differences in age or hearing status across groups would alter the relationship between this task and sentence recognition, so it is likely that other unmeasured factors, such as working memory processing or fluid intelligence, introduce noise that hides a true relationship if it exists. Tamati et al. found an effect of word familiarity by splitting groups of good and poor performers, which can overestimate the magnitude of cross-group differences (Preacher et al., 2005). As a result, it is unclear what the true magnitude of the association between word familiarity and sentence recognition should be, although it seems likely that they should be associated to some extent.

Auditory and Cognitive Factors Should Be Considered Together

Our free-field measures of temporal and spectral modulation detection thresholds jointly explained approximately half of the variance in speech recognition, in agreement with previous work (Won et al., 2011). Spectral modulation detection threshold was the strongest single predictor, which is consistent with the practice of using only measures of spectral resolution to predict speech recognition outcomes in individuals with cochlear implants (e.g., Anderson et al., 2012; Litvak et al., 2007; O'Neill et al., 2019). Most previous work that has reported relationships between speech recognition and temporal modulation detection has precisely controlled the stimulation pattern applied to electrodes within the cochlear implant (e.g., Fu, 2002; Garadat et al., 2012; Luo et al., 2008), although our findings along with those of Won et al. (2011) show that free-field temporal modulation detection also partially accounts for speech recognition outcomes. Our results differ from those of Won et al. in that the correlation between temporal modulation detection thresholds and speech recognition was not significant on its own, but the similar direction and magnitude of the trend between their work and ours lead us to believe that our lack of significance was due to our relatively small sample size.

Many studies of the relationship between cognitive abilities and speech recognition in individuals with cochlear implants have not accounted for the role of auditory resolution in speech recognition. The original goal of this study was to determine if a significant correlation would emerge between digit span and sentence recognition after we controlled for the effects of auditory resolution. Instead, we found the opposite pattern: digit span was a significant predictor of sentence recognition only before we controlled for auditory resolution (see Table 3). The pattern of our results would have been problematic had we not controlled for auditory resolution. If we had only tested digit span, we would likely have concluded that a significant relationship exists with sentence recognition in individuals with cochlear implants. Including measures of auditory resolution better accounted for individual differences in sentence recognition, to the point where digit span was no longer a significant predictor. One participant had poor performance on sentence recognition, the spectral and temporal modulation detection tasks, and digit span. This individual could account for the apparent correlation between digit span and sentence recognition, but at a group level, their poor sentence recognition is better attributed to poor auditory resolution. Similar problems would likely arise in any study of the link between cognition and speech recognition that does not control for auditory resolution.

One of the reasons auditory resolution has not been more widely measured is because direct control of electrical stimuli requires specialized hardware and training to implement. This control is often desirable because it enables the experimenter to precisely manipulate the pattern of electrical stimulation provided to each electrode, which can facilitate separating different aspects of auditory resolution from one another. In contrast, free-field listening is dependent on the signal processing strategy and mapping that each participant's cochlear implant processor has been programmed with. As a result, in free-field conditions, the processing strategy imposes common signal processing manipulations across different types of acoustic inputs (e.g., dynamic range and gain control), which can vary across individuals. Without direct measurement, it can be difficult to predict the exact pattern of electrical stimulation elicited by free-field acoustic inputs. Despite these limitations, the fact that free-field listening tasks heard through everyday cochlear implant processing strategies predict speech recognition outcomes in these individuals shows that free-field tasks can yield meaningful results. The ability to obtain meaningful results with free-field tasks allows researchers to sidestep the technical barriers associated with direct stimulation. Therefore, subsequent research on the role of cognition in speech recognition in individuals with cochlear implants should consider using free-field measures of auditory resolution to factor out individual differences in hearing that are not related to the cognitive variables of interest. In individuals with acoustic hearing loss, both acoustic and cognitive factors play distinct roles in speech recognition (Humes, 2002; Rönnberg et al., 2016; van Rooij & Plomp, 1990), so similar distinct roles should also be evident in individuals with cochlear implants. Recent work supports the notion that auditory resolution and cognition are distinct factors in speech recognition (Moberly, Vasil, et al., 2018; O'Neill et al., 2019; Tamati et al., 2020), although additional work is needed to develop a complete picture of the critical acoustic and cognitive factors underlying speech recognition in individuals with cochlear implants.

Verbal Digit Span Is Insensitive to Auditory Resolution

Short digit lists (two and three items long) were almost always correctly recalled, which indicates that digits could be identified regardless of individual differences in hearing. Overall performance on the digit span task was also not associated with temporal or spectral modulation detection. These results are consistent with the fact that vocoding digit span stimuli does not impair recall (Bosen & Barry, 2020; Bosen & Luckasen, 2019), and generally indicates that verbal digit span is an appropriate measure of serial recall in individuals with hearing loss. Working memory has modality specific and modality general components (Camos et al., 2013; Nees, 2016), so using both verbal and visual measures of serial recall could help distinguish the contributions of these components to speech recognition.

Accurate recall of short digit lists indicates that individuals with cochlear implants use their knowledge of the set of possible stimuli to overcome limitations in their auditory input and successfully recognize words in closed-set speech identification tasks. This ability to use contextual information should be considered when designing experiments. If the goal of an experiment is to measure an underlying cognitive process without having to control for individual differences in auditory resolution, then it is advantageous to use a closed stimulus set to provide strong contextual information. For instance, a study by Moberly, Harris, et al. (2017) used a closed set of words for a serial recall task and found that recall ability was similar across individuals with normal hearing and individuals with cochlear implants. On the other hand, if the goal of an experiment is to assess overall speech recognition ability, then open-set tasks should be used, because the contextual information provided in closed-set tasks can mask individual differences in auditory resolution.

Limitations

We found that spectral and temporal modulation detection thresholds were significant predictors of sentence recognition but did not find a relationship between sentence recognition and digit span or word familiarity after factoring out the effects of spectral and temporal modulation detection thresholds. The choice to test only digit span was motivated by our previous work and the desire to keep experimental sessions short, although our point about considering auditory and cognitive factors in the same study would likely be stronger had we included a significant cognitive predictor, such as reading span. A better approach would be to use a battery of tasks designed to estimate underlying cognitive constructs (e.g., Engle et al., 1999). Doing so would help distinguish task-specific variation in cognitive measures from individual differences in the cognitive constructs themselves, enabling direct examination of the relationship of those constructs with speech recognition.

There is a generally weak correlation between speech recognition and a variety of cognitive tasks across different domains, such as inhibitory control, working memory, and processing speed (Dryden et al., 2017). These correlations indicate that there is a relatively small (r ≈ .3) but consistent relationship between speech perception and general cognitive ability. Although digit span and word familiarity were not correlated with sentence recognition after factoring out the effects of auditory resolution in this study, a weak relationship likely exists; reliably detecting a relationship of this size, however, would require a much larger sample of individuals with cochlear implants.

In addition to cognitive ability, some demographic factors may contribute to variability in speech recognition. Although speech recognition outcomes tend to asymptote after around a year of device use (Wilson & Dorman, 2008), research in individuals with hearing aids indicates that the relationship between speech recognition and working memory can continue to evolve over years of device use (Ng & Rönnberg, 2020), which could also be the case in individuals with cochlear implants. In addition to the role of aging described above, the duration of deafness prior to implantation can affect speech recognition outcomes (Holden et al., 2013), and socioeconomic status can affect the relationship between cognitive ability and hearing loss (Kramer et al., 2018). The potential effects of these factors should also be taken into account when examining predictors of speech recognition in individuals with cochlear implants.

Conclusions

Our results, in conjunction with previous studies, demonstrate differences in the working memory constructs that are associated with speech recognition across groups of individuals that differ in age and/or hearing status. Future work aiming to examine the relationship between cognition and speech recognition should consider measuring individual differences in auditory resolution, assessing underlying cognitive constructs using multiple tasks, and examining how these factors interact with age.

Acknowledgments

This work was supported by Centers of Biomedical Research Excellence (COBRE) Grant NIH-NIGMS/5P20GM109023-05 and a Student Training Grant NIH-NIDCD /5T35DC008757-14. David Pisoni provided recordings of the Perceptually Robust English Sentence Test Open-set (PRESTO) sentences and materials for the word familiarity task. Marc Brennan provided the software for the spectral and temporal modulation detection tasks. Angela AuBuchon and Monita Chatterjee provided feedback on experiment design and data analysis. Aditya Kulkarni assisted with participant recruitment. Elizabeth Schneider spoke the digit stimuli for this study.


