The Journal of the Acoustical Society of America. 2021 Mar 4;149(3):1488–1497. doi: 10.1121/10.0003573

Access to semantic cues does not lead to perceptual restoration of interrupted speech in cochlear-implant users

Brittany N Jaekel 1,a), Sarah Weinstein 1, Rochelle S Newman 1,b), Matthew J Goupell 1,c)
PMCID: PMC7935498  PMID: 33765790

Abstract

Cochlear-implant (CI) users experience less success in understanding speech in noisy, real-world listening environments than normal-hearing (NH) listeners. Perceptual restoration is one method NH listeners use to repair noise-interrupted speech. Whereas previous work has reported that CI users can use perceptual restoration in certain cases, they failed to do so under listening conditions in which NH listeners can successfully restore. Providing increased opportunities to use top-down linguistic knowledge is one possible method to increase perceptual restoration use in CI users. This work tested perceptual restoration abilities in 18 CI users and varied whether a semantic cue (presented visually) was available prior to the target sentence (presented auditorily). Results showed that whereas access to a semantic cue generally improved performance with interrupted speech, CI users failed to perceptually restore speech regardless of the semantic cue availability. The lack of restoration in this population directly contradicts previous work in this field and raises questions of whether restoration is possible in CI users. One reason for speech-in-noise understanding difficulty in CI users could be that they are unable to use tools like restoration to process noise-interrupted speech effectively.

I. INTRODUCTION

Cochlear-implant (CI) users struggle to understand speech in noisy, real-world listening environments, potentially impacting adults' professional and personal lives and affecting children's ability to acquire language (Busch et al., 2017). According to the data logs of 2.4 × 10⁶ listening hours in 1501 CI users of all ages, approximately four hours per day were spent in noisy conditions (Busch et al., 2017). Listening to speech in noise is difficult with a CI because the device's processing schemes can convey only degraded spectral (frequency) information and only some aspects of temporal (timing, intensity) information—that is, the rich acoustic detail of speech is not available to CI users, making it more difficult to separate important speech information from noise (Shannon et al., 1995; Jin et al., 2013; O'Neill et al., 2019).

In everyday listening environments, normal-hearing (NH) listeners may recover from sudden, noise-induced disruptions and losses of information to successfully understand speech by using a repair strategy called perceptual restoration (Warren, 1970; Verschuure and Brocaar, 1983; Bashford et al., 1992; Başkent, 2012). The presence of noise in a signal interruption promotes an illusion of an intact and continuous speech stream, allowing restoration of the speech signal to occur. If the noise were removed from a listening scene and only the silent signal interruptions remained, speech understanding, in most cases, would decrease. Thus, restoration can be thought of as a perceptual process by which the brain fills in missing or imperceptible information from a speech signal with what (logically) should have been there. The perceptual restoration effect is quantified as the increase in speech understanding that a listener achieves when presented with noise-burst interrupted speech compared to silent-gap interrupted speech.
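Expressed as a formula (with scores in percent words correct), the effect is simply the difference between the two interruption conditions:

$$\mathrm{PR} = S_{\mathrm{noise}} - S_{\mathrm{silence}},$$

where $S_{\mathrm{noise}}$ and $S_{\mathrm{silence}}$ are speech-understanding scores for noise-burst and silent-gap interrupted speech, respectively; $\mathrm{PR} > 0$ indicates a restoration benefit, and $\mathrm{PR} < 0$ indicates noise interference.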

Perceptual restoration is thought to involve an interaction of top-down factors with bottom-up acoustic information (Shinn-Cunningham and Wang, 2008; Başkent, 2012; Başkent et al., 2016). These top-down factors include context usage (i.e., applying one's knowledge about the context in which the speech is occurring, expectations about the speaker, and the topic of conversation) and linguistic knowledge of vocabulary and grammatical constraints (Samuel, 1987; Shinn-Cunningham and Wang, 2008; Başkent et al., 2016; Ishida and Arai, 2016; Patro and Mendel, 2020). These factors help constrain the possible identities of an interrupted word and increase the potential for restoration. In contrast, a pseudo-word would be harder to restore in such a framework because the listener would have no lexical entry for it and no contextual expectations (unless primed beforehand; see Samuel, 1981).

Perceptual restoration has been shown to be a useful tool for NH adult listeners hearing speech in noisy environments (Warren, 1970; Samuel, 1981; Newman, 2004; Başkent, 2012). The extent to which perceptual restoration is accessible to adults without normal hearing, particularly those who use hearing devices like CIs, is less understood. Much of the research in this area has been conducted with NH listeners presented with simulations of CI processing rather than with CI users themselves. Difficulty understanding speech in noise is one of the chief concerns reported by CI users; thus, ensuring that perceptual restoration is accessible to this group is important.

Bhargava et al. (2014) measured perceptual restoration in CI users and hypothesized that CI users would struggle to restore speech because they would have less access to high-quality bottom-up acoustic information during speech repair. First, limitations in CI processing would lead to lower-quality (i.e., more degraded) signals because CIs encode only limited spectral information and no temporal fine structure or rapid temporal changes to the acoustic information. Furthermore, front-end preprocessing in CI speech processors, such as compression, can distort the shape of temporal envelopes (Başkent et al., 2009). Second, limitations in peripheral auditory encoding could lead to less well-represented sound information because some CI electrodes may have a poor interface with surviving auditory neurons. A poor electrode-to-neural interface can be due to several factors, including the success with which the electrode array was inserted into the cochlea, and because some areas of the cochlea may have no surviving auditory neurons (Long et al., 2014; DeVries et al., 2016; Kan, 2018).

For their experiment, Bhargava et al. (2014) tested 13 Dutch CI users aged 22–65 years and 14 Dutch NH listeners aged 19–28 years. Experimental stimuli were everyday Dutch sentences, interrupted with silent gaps or noise bursts at a rate of 1.5 Hz. Duty cycles were either 50% or 75%, which removed one-half or one-fourth, respectively, of the speech information in each 666-ms segment of the sentence. Signal-to-noise ratios (SNRs) were −10, −5, 0, or +5 dB. CI users listened to speech with their regular default sound-processor settings, and NH listeners were presented stimuli in two conditions: either unprocessed (normal) or eight-channel noise-vocoded.

In the more difficult 50% duty-cycle condition, CI users showed no significant restoration effect (i.e., no improvement in speech understanding from silent-gap to noise-burst interruption conditions). In contrast, NH listeners presented with unprocessed speech achieved a significant restoration benefit. For the same NH listeners presented with vocoded speech, no significant restoration benefits were observed, matching previous research in this area (Başkent, 2012). Thus, when only half of the speech information was available in each segment, neither CI users nor NH listeners presented with a CI simulation could restore speech. This supported the authors' prediction that poorer-quality bottom-up acoustic information impairs restoration, particularly when only short durations of speech information are available. When more speech information was available between interruptions (i.e., in the 75% duty cycle), CI users were able to achieve a significant restoration benefit at all tested SNRs (Bhargava et al., 2014). NH listeners presented with a CI simulation showed significant restoration benefits at one SNR only.

Bhargava et al. (2014) also analyzed the relationships between CI users' individual data and their demographics and hearing histories. CI users were more likely to achieve a restoration benefit in the difficult 50% duty-cycle condition if they had better baseline speech understanding scores (that is, better speech understanding with intact, non-interrupted sentences) or longer durations of CI use. In the easier 75% duty-cycle condition, no participant variables were significantly correlated with a restoration benefit. The researchers concluded that CI users obtaining a restoration benefit at the 50% duty cycle were likely better able to perceive and encode speech information and/or to use the speech information that they had access to, perhaps due to more experience with their devices. The researchers also posited that CI users likely did better overall in the 75% duty cycle because a greater proportion of temporal envelopes were left intact by the interruption parameters, which could have led to more accurate lexical activation. Temporal envelopes are important for CI users as they are one of the few cues available for speech understanding following CI processing (Shannon et al., 1995). Interruptions to temporal speech envelopes via fluctuating noise maskers (which overlapped the speech information) resulted in poor auditory fusion of the available speech information (Nelson and Jin, 2004); however, in that study, contrary to the Bhargava et al. (2014) study, a clear relationship between the speech understanding score and interruption rate (i.e., modulation frequency) or duty cycle was not observed.

In summary, Bhargava et al. (2014) showed that when interruptions obscured greater portions of the speech signal, neither CI users nor NH listeners presented with a CI simulation could obtain a restoration benefit similar to that of NH listeners presented with unprocessed speech, on average. However, some CI users with longer use of their CIs and better overall speech understanding could obtain restoration in this difficult speech condition. Overall, CI users failed to show typical restoration in scenarios where restoration was possible for NH listeners presented non-vocoded speech, likely due to the poorer-quality bottom-up acoustic information this population receives via a combination of CI processing and the integrity of peripheral auditory encoding.

Perceptual restoration involves an important trade-off when it comes to the noise-burst interruption conditions: first, noise-burst interruptions that are similar to the missing speech information can act as powerful, plausible maskers (Warren and Obusek, 1971; Samuel, 1981; Clarke et al., 2016); second, noise interruptions and speech need to be perceptually separable in that the brain needs to be able to detect which portions of the incoming signal are speech segments (Clarke et al., 2016). This latter point may be violated when speech and noise interruptions are too perceptually similar (as they would be with noise-vocoded speech and noise-vocoded noise bursts or with CI processing, in general), reducing restoration. Jaekel et al. (2018) asked if spectral differences between noise interruptions and speech were important for restoration to occur with degraded signals. Perhaps non-noise-vocoded noise bursts could lead to better restoration at lower spectral resolutions because of the greater perceptible difference and, thus, better segregation of noise bursts from the noise-vocoded speech. Jaekel et al. (2018) found that young adult NH listeners presented with CI simulations benefitted from greater spectral differences between speech and noise bursts, whereas older NH listeners (i.e., aged 60 years old and older) did not. Furthermore, when restoration occurred in the CI simulation conditions, older NH listeners obtained significantly greater restoration than did young NH listeners with degraded speech (as is the case with nondegraded speech; Saija et al., 2014). While this “aging benefit” for restoration, especially in degraded conditions, seems like a hopeful sign for CI users, many of whom are older and lost their hearing later in life, the study by Bhargava et al. (2014) found that the older CI users (aged 52–65 years old) showed negligible restoration. Thus, more work in this area is needed to determine how aging and degraded speech interact during speech repair.

In terms of top-down rather than bottom-up factors, priming has been shown to strongly enhance the restoration effect in NH listeners (Samuel, 1981). The present study measures whether providing a semantic cue—here, a single word meaningfully related to the content of the upcoming sentence—can effectively prime the listener for the upcoming sentence and increase restoration. Semantic cues can activate meaningful associations that allow for faster and more efficient processing of upcoming speech (McNamara, 2005), and priming is meant to capture a real-world occurrence: that a given sentence may be tied semantically to an existing topic of conversation, providing the listener with a conceptual cue as to what the speaker is likely to be talking about. Therefore, while priming with a single word itself may not be an ecologically realistic occurrence, the more general phenomenon of knowing the likely topic of conversation is quite common. However, it is possible that semantic cues may enhance restoration differently based on the quality of the bottom-up acoustic information, namely, whether that acoustic information is highly degraded. Experiencing more signal degradation (specifically, having access to fewer channels of spectral information) has been shown to reduce restoration in NH listeners (Başkent, 2012; Bhargava et al., 2014; Clarke et al., 2016).

In the current study, both ears of bilateral CI users were tested separately. This was intended to investigate whether an ear with functionally poorer encoding would fail to repair speech effectively because bottom-up signals would be too degraded for interaction with top-down linguistic knowledge when that knowledge is made available. Poor encoding could be caused by dead regions of auditory neurons in the cochlea, ineffective placement of the CI electrode array, and many other causes (Long et al., 2014; DeVries et al., 2016; Kan, 2018). As the integration of top-down knowledge and bottom-up acoustic information is key to understanding speech in noisy environments (Shinn-Cunningham and Wang, 2008; Başkent, 2012; Patro and Mendel, 2016), it was also hypothesized that CI users would be able to use linguistic knowledge (i.e., a semantic cue) to show a larger restoration effect.

II. METHODS

A. Participants

Eighteen CI users participated in this experiment. Two additional CI users were tested, but their data collection was incomplete due to equipment failure during the experiment presentation. Table I presents information about the tested participants.

Originally, this experiment was intended to include an ear-presentation manipulation, which required designating a functionally better ear and a functionally poorer ear for each participant. These designations were initially to be based on performance with 20 intact baseline sentences presented at 55 dB SPL (sound pressure level), 10 sentences for each ear. The baseline sentences were declarative 5–12 word sentences developed by the experimenters and recorded by a young adult female speaker; they were not repeated in the main experiment. Whichever ear earned the higher sentence score was to be designated the better ear. Baseline sentence scores are presented in Fig. 1; the left panel plots the highest-scoring ear on the left and the lowest-scoring ear on the right. In general, baseline sentence scores were similar across ears; only 6 of 18 participants showed performance differences greater than 10% between ears. Better ears had an average baseline score of 94.3% [standard deviation (SD) = 5.8%], and poorer ears had an average baseline score of 83.3% (SD = 14.1%), an average across-ear difference of 11.7%. In terms of hearing history, better ears tended to have later onsets of non-normal hearing and shorter durations of non-normal hearing than poorer ears. Ten participants' better ear was the right ear, and eight participants' better ear was the left ear.

These performance-based designations often conflicted with participants' self-reports of which ear was their better ear: eight participants reported the opposite ear as their better ear. Baseline sentence scores with ear designations based on self-report are presented in the right panel of Fig. 1. For some of these participants, baseline performance was similar across ears, so it was unsurprising that the opposite ear was self-reported as the better ear. Only three of the eight participants with "mismatched" ear designations had comparatively large performance differences (13%–38%) between their self-reported better ear and their best-performing ear on the baseline test. Because of this lack of clarity about which ear could truly be considered the "better ear," and because most participants showed little performance difference across ears, an accurate analysis of better-ear vs poorer-ear performance was judged not feasible with this sample. While participants were still tested in each ear separately, ear was ultimately not treated as an independent variable in the study.

TABLE I.

Demographics, hearing histories, and cognitive/vocabulary scores for the 18 participants. The cognitive/vocabulary scores are age-corrected standard scores (SD, standard deviation).

Measure                                                               Mean (SD)     Range
Age (years)                                                           63.7 (13.3)   32–81
Average age at onset of non-normal hearing (years)                    23.3 (21.1)   0–70
Average duration of non-normal hearing prior to implantation (years)  31.1 (22.6)   0–68
Average duration of CI use (years)                                    9.3 (4.7)     2–23
Average baseline speech understanding performance (%)ᵃ                89.1 (11.7)   55–100
Processing speed (standard score)                                     103.9 (15.4)  64–130
Working memory (standard score)                                       102.6 (14.2)  82–123
Attention (standard score)                                            102.6 (11.6)  83–123
Vocabulary (standard score)                                           105.9 (12.6)  91–134
ᵃFull results are reported for only 15 of 18 participants. Intact sentence scores were not measured for one participant due to an experimenter error. Intact sentence scores for the left ear in two participants were not included because low performance (<50% words correct) resulted in cancellation of testing in that ear.

FIG. 1. Individual (open circles) and mean (filled circles) performance with baseline intact sentences. The data on the left, under "Functional," present the performance across better and poorer ears with the better ear being designated as such by a more accurate performance on the baseline task. The data on the right, under "Self-report," present the performance across the participant's self-reported better and poorer ears. Two participants could not complete the task in their poorer ear, and no baseline data were collected for one participant due to an experimenter error.

Participants' linguistic knowledge was measured with the Peabody Picture Vocabulary Test, fourth edition (PPVT-4), which measures receptive vocabulary size (Dunn and Dunn, 2007). Scores from this test were considered a proxy measure of the participants' receptive language ability. Furthermore, participants completed a battery of cognitive tests available via the National Institutes of Health (NIH) Toolbox Cognition Battery iPad application (Gershon et al., 2013; Tulsky et al., 2014). For attention and executive functioning, the Flanker Inhibitory Control and Attention Test Age 12+ was used; for working memory, the List Sorting Working Memory Test Age 7+ was used; and for processing speed, the Pattern Comparison Processing Speed Test Age 7+ was used. Vocabulary, attention, working memory, and processing speed scores were, generally, near age-corrected standard scores of 100 with the greatest SDs observed for processing speed and working memory (Table I). Thus, participants had average executive functioning and average vocabulary knowledge. Finally, 16 of the 18 participants passed the Montreal Cognitive Assessment (MoCA) with scores of 26 or greater, indicating a lack of mild cognitive impairment (Nasreddine et al., 2005). Two participants scored 24, which is slightly below the recommended cutoff score. The mean MoCA score was 27.1.

B. Stimuli

Stimuli were 240 Institute of Electrical and Electronics Engineers (IEEE) sentences, which are declarative sentences containing 5–12 words (Rothauser et al., 1969). The sentences were recorded by a young adult male speaker using Standard American English dialect. The two interruption types (silent gaps, noise bursts) were applied to sentences in a manner such that 120 sentences were interrupted with silent gaps and 120 sentences were interrupted with noise bursts. Sentences were interrupted with silent gaps by applying a 5-Hz periodic nominally square wave with an 80% duty cycle to the signal with 1-ms raised cosine on/off ramps. The 80% duty cycle resulted in each 200-ms long speech segment having its first 160 ms left intact and the following 40 ms replaced with a silent gap. This duty cycle and interruption rate were chosen based on pilot testing with four adult CI users. On average, this duty cycle (compared to 50%, 60%, 70%, and 90% duty cycles) produced a perceptual restoration effect of 6% among pilot testers presented with interrupted sentences, which was the most positive effect elicited from the tested duty cycles. Parameters resulting in perceptual restoration in the pilot test were chosen because the present study was designed to detect if higher-quality bottom-up acoustic information and/or additional top-down linguistic information could enhance rather than simply elicit perceptual restoration in this population. Meyer et al. (2011) reported average English phoneme durations to be between 103 and 205 ms, depending on speaking rate; thus, the 80% duty cycle removed short-duration phonemes like /b/ and /ɛ/ while having less effect on longer-duration phonemes like most vowels and fricatives.
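The gating described above can be made concrete with a short signal-processing sketch. This is a minimal illustration in Python (the experiment itself was administered in MATLAB), assuming a mono `speech` array at sample rate `fs`; the function and parameter names are illustrative, not the authors' code.

```python
import numpy as np

def gap_interrupt(speech, fs, rate_hz=5.0, duty=0.8, ramp_ms=1.0):
    """Silence the last (1 - duty) of each gating cycle, with raised-cosine ramps."""
    period = int(round(fs / rate_hz))           # 200-ms cycle at 5 Hz
    on_len = int(round(period * duty))          # 160 ms of intact speech per cycle
    ramp = int(round(fs * ramp_ms / 1000.0))    # 1-ms ramp length in samples
    rise = 0.5 * (1 - np.cos(np.pi * np.arange(ramp) / ramp))  # raised cosine, 0 -> 1

    gate = np.zeros(len(speech))
    for start in range(0, len(speech), period):
        seg = min(on_len, len(speech) - start)  # intact portion of this cycle
        gate[start:start + seg] = 1.0
        gate[start:start + min(ramp, seg)] *= rise[:min(ramp, seg)]  # on-ramp
        if seg == on_len:                       # off-ramp just before the silent gap
            gate[start + seg - ramp:start + seg] *= rise[::-1]
    return speech * gate
```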

To create noise-burst interrupted sentences, sentences were interrupted in the same manner as outlined above but with speech-shaped noise bursts rather than silent gaps. Hence, portions of the original speech were replaced (i.e., not overlapped) with the speech-shaped noise bursts. The noise bursts were not modulated by the speech envelope that would have appeared in the missing speech segment. Although speech-envelope-modulated noise bursts have been shown to increase restoration over and above non-modulated noise bursts (Shinn-Cunningham and Wang, 2008; Miller et al., 2018), noise bursts encountered in a naturalistic listening environment would be non-modulated by the missing speech signal and, thus, the present study's method provided a realistic challenge to participants attempting to restore speech. Noise bursts were presented at 65 dB SPL with a −10-dB SNR, meaning that noise bursts were 10 dB more intense than the average level of the target speech signal. This negative SNR was chosen because previous literature has shown that negative SNRs are typically necessary for the strongest restoration effects to occur and are more likely to prompt the auditory illusion of speech “continuing” through noise (Başkent, 2012). Logically, a noise that is less intense than speech would not illusorily “mask” the speech (if the speech was truly present—in the restoration paradigm, the speech is removed, and the noise actually masks a silent gap) and, therefore, the illusion of continuity would be less likely to occur.
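Continuing the sketch above, the noise-burst condition fills exactly the gated-out samples with noise scaled to −10 dB SNR. White noise stands in here for the speech-shaped noise used in the study, and the gating function is recovered by gating a constant signal; again, this is an assumed reconstruction, not the authors' implementation.

```python
def noise_interrupt(speech, fs, snr_db=-10.0, **gate_kwargs):
    """Replace (not overlap) the silent gaps with noise at the given SNR."""
    gate = gap_interrupt(np.ones(len(speech)), fs, **gate_kwargs)  # recover the gate
    noise = np.random.randn(len(speech))           # stand-in for speech-shaped noise
    speech_rms = np.sqrt(np.mean(speech ** 2))
    target_rms = speech_rms * 10 ** (-snr_db / 20.0)  # -10 dB SNR: noise 10 dB above speech
    noise *= target_rms / np.sqrt(np.mean(noise ** 2))
    return speech * gate + noise * (gate == 0)     # noise only where the gate is fully off
```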

A semantic cue (a single word meaningfully related to the content of the sentence about to be presented) was presented visually on a computer monitor prior to each sentence for 120 sentences (60 of which were silent-gap interrupted and 60 of which were noise-burst interrupted). Semantic cues were generated in the following way. Three assistants unfamiliar with the experiment were provided a list of the 720 IEEE sentences and were asked to generate 1–2 words for each sentence that were meaningfully related to the sentence content. The answers were compiled, and the most commonly reported related word, or the word judged most appropriate by the experimenter, was chosen as that sentence's "semantic cue" word. For example, the word "fish" was chosen for the IEEE sentence "A rod is used to catch pink salmon." The assistants were instructed that words in the target sentence could not serve as cues, nor could any conjugation of a verb in the target sentence. One cue word was associated with each of the 720 sentences through this method, and the first author selected the 240 sentences judged most appropriate for use in the experiment.

Each participant in the main experiment was presented with 120 sentences randomly selected from this set of 240 sentences with related cue words. For the remaining 120 sentences (not duplicates of the first 120 sentences), no semantic cue was provided. Instead, during these trials, a series of "X" symbols (equal in length to the target sentence's associated semantic cue word) was presented prior to the target sentence. All visual text was presented in Courier font, in which all characters are the same width.

C. Equipment

Participants were seated in a soundproof booth one meter in front of a computer monitor located at eye level at 0°. Sentences were presented over a pair of loudspeakers located at ±45°. MATLAB 2018b (MathWorks, Natick, MA) was used to administer the experiment. The NIH Toolbox Cognition Battery was administered on an iPad 2 (Apple, Inc., Cupertino, CA) in a quiet location. The test battery was completed in 15 minutes or less.

D. Procedure

Independent variables manipulated in this experiment were ear presentation (2 levels: better ear, poorer ear), semantic cue (2 levels: present, absent), and interruption type (2 levels: silent gap, noise burst) for a total of 8 conditions with 30 sentences per condition. For each participant, each of the 240 sentences was randomly assigned to 1 of the 8 conditions. Sentences were presented in 2 blocks: one block presented 120 sentences to the better ear only, and one block presented the remaining 120 sentences to the poorer ear only. Within blocks, sentences were presented in a randomized order. The order of ear presentation was also randomized for each participant.
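As a sketch of this design, the random assignment of sentences to conditions could be done as follows; the condition labels and sentence IDs are illustrative, not taken from the authors' code.

```python
import itertools
import random

# 2 ears x 2 cue conditions x 2 interruption types = 8 conditions
conditions = list(itertools.product(("better", "poorer"),
                                    ("cue", "no_cue"),
                                    ("silent_gap", "noise_burst")))
sentences = list(range(240))        # the 240 IEEE sentence IDs
random.shuffle(sentences)           # fresh random assignment per participant
assignment = {cond: sentences[i * 30:(i + 1) * 30]  # 30 sentences per condition
              for i, cond in enumerate(conditions)}
```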

Before the presentation of each sentence, participants focused on a crosshair presented in the middle of the computer screen. In the semantic cue "present" condition, a word semantically related to the sentence (e.g., "FISH") replaced the crosshair for two seconds and disappeared. The sentence was then immediately presented auditorily (e.g., "A rod is used to catch pink salmon."). In the semantic cue "absent" condition, a series of X characters (equal in length to the semantic cue word associated with that sentence, e.g., "XXXX") replaced the crosshair instead. Participants were instructed to read any text appearing on the screen, listen to each sentence, and repeat aloud what they heard into a voice recorder. When finished, participants pressed the space bar on the computer keyboard to begin the next trial. The experiment was self-paced. Two experimenters graded responses separately, one live and one from the voice recording, each recording the number of words correct for each sentence. Inter-rater reliability for the full dataset (n = 18) was 82.3%, based on the number of sentences agreed on; inconsistencies were resolved by averaging the two graders' scores for that trial. The experimenters' ability to grade responses may have been affected by difficulties understanding the speech produced by some participants.

III. RESULTS

Two participants did not complete both ear presentation blocks of the experiment due to low baseline intact speech scores in one ear (see Table I). Thus, bilateral ear data were available for only 16 of 18 participants.

Data were analyzed using a multilevel model. The following participant variables were considered for the analysis beyond the independent variables manipulated in the study: working memory score, processing speed score, attention score, vocabulary score, baseline intact speech score (one score per ear), age at onset of non-normal hearing (one value per ear), duration of non-normal hearing prior to implantation (one value per ear), duration of CI use (one value per ear), and age. Previous work has reported age effects for perceptual restoration; therefore, including age in the present analysis was theoretically justified (Saija et al., 2014; Jaekel et al., 2018). Aging generally appears to impact speech understanding in CI users as well (Jin et al., 2014; Sladen and Zappler, 2015; Goupell et al., 2017; Xie et al., 2019).

Furthermore, the inclusion of hearing history variables and baseline speech understanding scores in the analysis was theoretically justified because Bhargava et al. (2014) found that longer durations of CI use and better baseline sentence scores were predictive of perceptual restoration benefits among CI users in the difficult 50% duty-cycle condition. Among the three hearing history variables (age at onset of non-normal hearing, duration of non-normal hearing, and duration of CI use), only age at onset and duration of non-normal hearing were significantly correlated with one another per Pearson's correlations Bonferroni-corrected for three tests (α = 0.017). Specifically, with later (older) ages of onset of non-normal hearing, the duration of non-normal hearing prior to implantation decreased [r(34) = −0.82, p < 0.001]. Because multi-collinearity among variables in a multilevel model can lead to convergence problems, only age at onset of non-normal hearing and duration of CI use were retained for the model. The age at onset was considered an important variable as it could interact with language development (as in the case of prelingually deafened listeners) and, thus, could have an impact on top-down knowledge.
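A small sketch of this collinearity screen, using placeholder data with one value per tested ear (36 ears, so r has 34 degrees of freedom); the variable names are assumptions:

```python
import numpy as np
from itertools import combinations
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
variables = {                          # placeholder data, one value per tested ear
    "onset_age": rng.uniform(0, 70, 36),
    "dur_non_normal": rng.uniform(0, 68, 36),
    "dur_ci_use": rng.uniform(2, 23, 36),
}
alpha = 0.05 / 3                       # Bonferroni correction for three tests
for (name_a, a), (name_b, b) in combinations(variables.items(), 2):
    r, p = pearsonr(a, b)
    flag = "significant" if p < alpha else "n.s."
    print(f"{name_a} vs {name_b}: r({len(a) - 2}) = {r:.2f}, p = {p:.3f} ({flag})")
```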

Cognitive and vocabulary scores were entered as covariates. Per Pearson's correlations Bonferroni-corrected for four tests (α = 0.013), no cognitive or vocabulary scores were significantly correlated with one another. All included participant variables were centered and standardized for the multilevel model analysis.

The multilevel model was built in R (version 4.0.0) using RStudio (RStudio, Inc., Boston, MA) and the buildmer package (version 1.6). The independent variables (interruption type and cue) were effect-coded: codes of +0.5 indicated the noise-burst or cue condition, and codes of −0.5 indicated the silent-gap or no-cue condition. The dependent variable was percent words correct per sentence. The final model, determined using backward stepwise elimination based on each effect's contribution to the change in log-likelihood (starting from a maximal model), was specified as follows:

Percent words correct ~ 1 + Cue + Interruption type + Baseline speech understanding score + (1 + Interruption type | Participant/Ear) + (1 + Interruption type | Sentence)
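The specification uses lme4-style formula syntax. For illustration only, here is a minimal sketch of fitting the same structure from Python through the pymer4 wrapper around lme4 (the authors fit the model in R with buildmer; the data file and column names are hypothetical, and covariates are assumed already centered and standardized):

```python
import pandas as pd
from pymer4.models import Lmer  # Python wrapper around R's lme4

df = pd.read_csv("interrupted_speech_scores.csv")  # hypothetical trial-level data

model = Lmer(
    "pct_words_correct ~ 1 + cue + interruption_type + baseline_score"
    " + (1 + interruption_type | participant/ear)"
    " + (1 + interruption_type | sentence)",
    data=df,
)
model.fit()         # fixed effects as in Table II, random effects as in Table III
print(model.coefs)  # estimates, standard errors, t- and p-values
```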

The results of the statistical analysis are presented in Table II (fixed effects) and Table III (random effects). The percent correct scores across the listening conditions are presented in Fig. 2. The performance with noise bursts was significantly worse than the performance with silent gaps (p < 0.001; Table II). Performance with noise bursts was, on average, 43.5% correct, and performance with silent gaps was 49.0% correct. Thus, in general, the perceptual restoration benefit was not observed.

TABLE II.

Multilevel model analysis fixed effect results.

Fixed effect                         Estimate  Standard error  t      p
Intercept                            0.443     0.049           8.96   <0.001
Cue                                  0.050     0.008           6.50   <0.001
Interruption type                    −0.055    0.015           −3.76  <0.001
Baseline speech understanding score  0.111     0.025           4.37   <0.001

TABLE III.

Multilevel model analysis random effect results.

Random effect                                    Variance  SD
Participant: Intercept                           0.034     0.183
Ear within participant: Intercept                0.013     0.112
Ear within participant: Interruption type slope  0.004     0.062
Sentence: Intercept                              0.011     0.106
Sentence: Interruption type slope                0.010     0.099
Residual                                         0.052     0.228

FIG. 2. Percent words correct per sentence across cue/no cue listening conditions. Open circles indicate the performance with silent-gap interrupted sentences, and filled circles indicate the performance with noise-burst interrupted sentences. Error bars indicate standard error.

Semantic cues significantly improved performance overall (p < 0.001; Table II), and this benefit was not dependent on the interruption type (i.e., cue did not significantly interact with interruption type). On average, performance increased from 43.7% correct with no cue to 48.8% correct with a cue, an increase of approximately 5%. Access to a cue did not increase performance with noise bursts specifically—that is, cues did not help elicit a perceptual restoration benefit.

Higher baseline speech understanding scores were associated with better overall interrupted speech understanding performance (p < 0.001; Table II), regardless of interruption type (Fig. 4). No other participant variables had a significant effect on performance.

FIG. 4. Percent words correct for experimental, interrupted sentences plotted against percent words correct for baseline intact sentences. Open circles indicate performance with silent-gap interrupted sentences, and filled circles indicate performance with noise-burst interrupted sentences. Data are plotted for the 32 ears tested in the study.

The random effects listed in Table III were included to control for variance in performance across participants, ears within participants, and sentences, particularly in their responses to the interruption types. The largest known source of variability in the model was among individual participants (27%, calculated via intra-class correlation coefficient; Table III), followed by ears within participants (10%; Table III). Individual sentences accounted for 9% of the variability in the model (Table III).
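These proportions can be recovered from Table III by expressing each variance component as a share of the total (the assumption here being that the slope variances and the residual are included in the denominator):

$$\mathrm{ICC}_{\mathrm{participant}} = \frac{0.034}{0.034 + 0.013 + 0.004 + 0.011 + 0.010 + 0.052} = \frac{0.034}{0.124} \approx 0.27,$$

with $0.013/0.124 \approx 0.10$ for ears within participants and $0.011/0.124 \approx 0.09$ for sentences.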

Perceptual restoration effects, calculated as the difference between noise-burst interrupted speech performance and silent-gap interrupted speech performance, for each cue condition are presented in Fig. 3. Since performance with silent-gap interrupted speech was often better than performance with noise-burst interrupted speech, most perceptual restoration effects were negative (Fig. 3).

FIG. 3. Perceptual restoration effects across cue/no cue listening conditions. Perceptual restoration effects were calculated as the performance with silent-gap interrupted sentences subtracted from the performance with noise-burst interrupted sentences for each participant. Open circles indicate individual data, filled circles indicate mean data, and error bars indicate the SD. Perceptual restoration effects below zero indicate a noise interference effect, and effects above zero indicate a restoration benefit.

IV. DISCUSSION

This experiment aimed to measure the extent to which different levels of degradation in bottom-up acoustic information affected integration with top-down linguistic knowledge during perceptual restoration in CI users. Poorer ears (which likely experience greater signal degradation) were expected to show relatively less restoration or even fail to restore, whereas better ears (which likely experience less signal degradation) were expected to interact successfully with top-down linguistic knowledge to prompt relatively more restoration. A three-way interaction of interruption type, ear presentation, and provision of a semantic cue was predicted: performance with noise bursts was expected to be highest (and higher than performance with silent gaps) in the better ear when a semantic cue was available to prime the upcoming sentence.

On average, no positive restoration effects were observed in this sample (Fig. 3). This lack of an average restoration benefit among CI users was confirmed by the multilevel model analysis (i.e., the main effect of interruption type had a negative coefficient; Table II). No significant improvement with noise-burst interrupted speech over silent-gap interrupted speech was observed in either the cue or no cue conditions. In addition, the hypothesized three-way interaction could not be evaluated: "better" and "poorer" ears could not adequately be defined for this sample and, therefore, this variable was not included in the analysis. To summarize, performance with noise-burst interrupted speech was, on average, significantly poorer than performance with silent-gap interrupted speech.

What was found, instead, was an overall beneficial effect of semantic cues for repairing interrupted speech in general—whether those interruptions were silent gaps or noise bursts. Thus, top-down linguistic knowledge appeared to be used by CI users whenever interrupted speech was encountered. Overall, the benefit of access to a cue prior to an interrupted sentence was an approximate 5% increase in speech understanding on average. Cue benefits varied across participants, however, with one participant showing a slight decrease (−1.8%) in interrupted speech understanding when cues were present, and three participants showing a >10% increase in performance with cues. One participant from the study offered an explanation for why the presence of a cue might decrease performance: first, the visual text presentation was sometimes distracting; second, when a cue was present, the participant felt he was expending extra effort to not only listen to the interrupted speech but to also store and maintain the cue word in memory. Processing noisy sentences (compared to sentences with gaps) may require greater effort and drain cognitive resources faster in general (Finke et al., 2015). While anecdotal, this participant's report could inspire future work investigating whether highly effortful listening situations result in reduced restoration ability.

Higher baseline intact speech understanding scores were predictive of better overall interrupted speech understanding (Fig. 4) whether semantic cues were present or absent and whether interruptions were silent gaps or noise bursts. It was originally hypothesized that restoration would be more likely in a better ear or an ear with comparatively (to the CI user's own other ear) high baseline speech understanding scores, as bottom-up acoustic cues were expected to be of higher quality. High-quality bottom-up cues have been purported to be important for successful speech repair (Başkent, 2012; Bhargava et al., 2014; Jaekel et al., 2018) and stimulating context usage (Patro and Mendel, 2016). Characteristics of the current sample precluded the ability to categorize each participant's ear as a better or poorer ear and, hence, baseline speech understanding across all ears tested in the study was investigated instead. However, Fig. 4 illustrates that higher baseline speech understanding scores, in general, did not elicit particular improvements for the listener in either interruption condition; that is, restoration was not more likely among ears with high baseline scores, arguing against the notion that restoration would have been likely had participants had clear ear differences. In addition, the lack of any mediating effect from cognition or linguistic knowledge on task performance may be evidence that the incoming signal was generally of too poor quality to engage these higher-level skills.

To summarize, CI users on average showed an interference effect rather than a restoration benefit when speech was interrupted with noise bursts. This finding contradicts previous literature (Bhargava et al., 2014); however, there were several notable differences in the stimuli and participant demographics across the two studies. The present study used a faster interruption rate than the previous study used (5 Hz vs 1.5 Hz), a briefer raised cosine on/off ramp (1 ms vs 5 ms), a slightly different duty cycle (80% vs 75%), a different corpus in a different language (English vs Dutch), and tested older participants (average age of 63.7 vs 49 years). The interruption rate, on/off ramps, and duty-cycle differences further impacted how much intact speech material was available between each interruption: in the study by Bhargava et al. (2014), every cycle was 666.7 ms in duration with 500 ms of speech (2% of the 500 ms was on/off ramping); for the current study, every cycle was 200 ms in duration with 160 ms of speech (1.25% of the 160 ms was on/off ramping). Future work should consider the impacts of such differences on perceptual restoration. A slower interruption rate could potentially allow CI users to use a semantic cue more effectively as greater amounts of intact speech information would be available for top-down integration. For example, the 1.5-Hz interruption rate used in Bhargava et al. (2014) with the 80% duty cycle used in the present study would provide listeners with 533 ms of intact speech, followed by 133 ms of interruption, potentially providing several intact phonemes to a listener within each segment. Speech glimpses longer than 500 ms were posited by Nelson et al. (2003) as potentially being necessary for CI users to successfully integrate speech segments across interruptions and form a stable auditory image. The present study provided speech glimpses of only 160 ms.
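The glimpse and gap durations above follow directly from the interruption rate $f$ and duty cycle $d$:

$$t_{\mathrm{glimpse}} = \frac{d}{f}, \qquad t_{\mathrm{gap}} = \frac{1 - d}{f},$$

giving $0.8/5~\mathrm{Hz} = 160$ ms of speech and $0.2/5~\mathrm{Hz} = 40$ ms of interruption per cycle in the present study, vs $0.75/1.5~\mathrm{Hz} = 500$ ms and $0.25/1.5~\mathrm{Hz} \approx 167$ ms in Bhargava et al. (2014), and $0.8/1.5~\mathrm{Hz} \approx 533$ ms for the hypothetical combination described above.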

Another avenue for future work is to examine how the different interruption parameters of the Bhargava studies and the present study particularly affected the noise-burst interruption condition. Bhargava et al. (2016) reported silent-gap interrupted speech perception for CI users with a 75% duty cycle and a 6-Hz interruption rate; performance in that study was quite similar to silent-gap interrupted speech perception in the current study, which used the slightly different parameters of an 80% duty cycle and a 5-Hz interruption rate. Thus, the present study differs from the literature largely in terms of its noise-burst interrupted speech perception findings. The brief noise bursts employed in the present study (40 ms) may have been misinterpreted as an unidentified phoneme and/or a spurious cue, leading to poorer performance.

Finally, the participants in the Bhargava study were trained prior to performing the perceptual restoration task: specifically, CI users were presented an interrupted sentence, repeated what they heard, and then were presented the uninterrupted sentence with a display of the text of the sentence for a total of 13 sentences. This procedure differs from the current study in which no training with the interrupted speech task was provided. Whereas previous work has shown that training failed to elicit phonemic restoration benefits in NH listeners presented with vocoded speech (Benard and Başkent, 2014), it is possible that even a short training period could improve perceptual restoration in CI users and further account for the different results between the current study and the study by Bhargava et al. (2014).

Analysis of perceptual restoration benefits and the participant variables shown to be important for restoration by Bhargava et al. (2014) revealed that neither intact sentence understanding nor duration of CI use was relevant here. Per the multilevel modeling analysis, only baseline speech understanding scores were predictive of overall interrupted speech understanding (see Table II) among all of the participant variables. Whereas age was not predictive of the perceptual restoration benefit in the present study (and, thus, not included in the model), previous work has shown that even when presented vocoded speech, older NH listeners are more likely to benefit from perceptual restoration (Saija et al., 2014; Jaekel et al., 2018). However, even when restricting the current dataset to the older age ranges reported in Saija et al. (2014) or Jaekel et al. (2018), a perceptual restoration effect was not observed among older CI users. The lack of an interaction between age and cue in the present study was also surprising because a cue supplied additional context, which older listeners are more likely to use during speech perception (Pichora-Fuller, 2008) and which could have helped CI users overcome age-related temporal processing difficulties that affect speech perception. Such difficulties are particularly relevant for older CI users (Goupell et al., 2017; Xie et al., 2019) because CI users largely rely on temporal envelope information to perceive speech (Shannon et al., 1995). Future work should investigate the effects of other participant-related variables, such as listening effort and/or affective response, on perceptual restoration in CI users, as some participants in the current study remarked on the difficulty of processing noisy stimuli compared to silent-gap interrupted speech, as well as feelings of irritation and frustration when interrupting noise was present.

Providing a semantic cue prior to sentence presentation generally improved interrupted speech understanding by about five percentage points. This indicates that additional top-down context is useful for CI users attempting to understand interrupted speech regardless of the quality of the bottom-up acoustic information (with quality indexed in the present study by baseline intact speech understanding ability). Testing a large sample of bilateral CI users with more severe quality differences between ears could confirm whether this finding holds in the broader bilateral CI population rather than only in the highly symmetrical listeners in the present study's sample.

Overall, CI users were more successful at understanding silent-gap interrupted rather than noise-burst interrupted sentences, and showed no strong evidence of performing speech repair. The provision of a semantic cue failed to elicit a restoration effect, although semantic cues did improve overall performance with interrupted speech.

V. CONCLUSION

CI users failed to consistently repair noisy speech signals using this study's paradigm. Factors like bottom-up signal quality and top-down linguistic knowledge use, whose integration is believed to be the key to repairing speech, did not produce restoration benefits in this study's sample. Whereas the perceptual restoration framework involving the interaction of top-down and bottom-up factors appears to apply to NH listeners, the processing of noise-burst interrupted speech may be qualitatively different in CI users for whom noise generally served as an interferer rather than a promoter of speech repair. Perceptual restoration, then, may not currently be a useful perceptual tool for CI users attempting to understand speech in noisy environments, and the inability to use restoration may be a contributor to this population's general difficulties understanding speech in noise.

ACKNOWLEDGMENTS

Research reported in this publication was supported by the National Institute on Deafness and Other Communication Disorders (NIDCD) of the NIH under Award Nos. F31DC017362 (B.N.J.), T32DC000046 (trainee, B.N.J.), and R01DC014948 (M.J.G.). The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. Thank you to Alyssa Giammetta, Stefanie Kuchinsky, Jan Edwards, Samira Anderson, Catherine Carr, Maureen Shader, Kristina Milvae, Nicole Nguyen, Olga Stahkovskaya, Julie Cohen, Elizabeth Kolberg, Ginny Alexander, Will Bologna, Zilong Xie, Emily Shroads, Debbie Moon, Emma Peterson, Bobby Gibbs, and Kelly Miller. Portions of this work were presented at the 2019 Conference on Implantable Auditory Prostheses.

References

1. Bashford, J. A., Riener, K. R., and Warren, R. M. (1992). "Increasing the intelligibility of speech through multiple phonemic restorations," Percept. Psychophys. 51, 211–217. doi: 10.3758/BF03212247
2. Başkent, D. (2012). "Effect of speech degradation on top-down repair: Phonemic restoration with simulations of cochlear implants and combined electric-acoustic stimulation," J. Assoc. Res. Otolaryngol. 13, 683–692. doi: 10.1007/s10162-012-0334-3
3. Başkent, D., Clarke, J., Pals, C., Benard, M. R., Bhargava, P., Saija, J. D., Sarampalis, A., Wagner, A., and Gaudrain, E. (2016). "Cognitive compensation of speech perception with hearing impairment, cochlear implants, and aging: How and to what degree can it be achieved?," Trends Hear. 20, 2331216516670279. doi: 10.1177/2331216516670279
4. Başkent, D., Eiler, C., and Edwards, B. (2009). "Effects of envelope discontinuities on perceptual restoration of amplitude-compressed speech," J. Acoust. Soc. Am. 125, 3995–4005. doi: 10.1121/1.3125329
5. Benard, M. R., and Başkent, D. (2014). "Perceptual learning of temporally interrupted spectrally degraded speech," J. Acoust. Soc. Am. 136, 1344–1351. doi: 10.1121/1.4892756
6. Bhargava, P., Gaudrain, E., and Başkent, D. (2014). "Top-down restoration of speech in cochlear-implant users," Hear. Res. 309, 113–123. doi: 10.1016/j.heares.2013.12.003
7. Bhargava, P., Gaudrain, E., and Başkent, D. (2016). "The intelligibility of interrupted speech: Cochlear implant users and normal hearing listeners," J. Assoc. Res. Otolaryngol. 17, 475–491. doi: 10.1007/s10162-016-0565-9
8. Busch, T., Vanpoucke, F., and van Wieringen, A. (2017). "Auditory environment across the life span of cochlear implant users: Insights from data logging," J. Speech Lang. Hear. Res. 60, 1362–1377. doi: 10.1044/2016_JSLHR-H-16-0162
9. Clarke, J., Başkent, D., and Gaudrain, E. (2016). "Pitch and spectral resolution: A systematic comparison of bottom-up cues for top-down repair of degraded speech," J. Acoust. Soc. Am. 139, 395–405. doi: 10.1121/1.4939962
10. DeVries, L., Scheperle, R., and Bierer, J. A. (2016). "Assessing the electrode-neuron interface with the electrically evoked compound action potential, electrode position, and behavioral thresholds," J. Assoc. Res. Otolaryngol. 17, 237–252. doi: 10.1007/s10162-016-0557-9
11. Dunn, L. M., and Dunn, D. M. (2007). The Peabody Picture Vocabulary Test, 4th ed. (NCS Pearson, Inc., Bloomington, MN).
12. Finke, M., Sandmann, P., Kopp, B., Lenarz, T., and Büchner, A. (2015). "Auditory distraction transmitted by a cochlear implant alters allocation of attentional resources," Front. Neurosci. 9, 1–16. doi: 10.3389/fnins.2015.00068
13. Gershon, R. C., Wagster, M. V., Hendrie, H. C., Fox, N. A., Cook, K. F., and Nowinski, C. J. (2013). "NIH toolbox for assessment of neurological and behavioral function," Neurology 80, S2–S6. doi: 10.1212/WNL.0b013e3182872e5f
14. Goupell, M. J., Gaskins, C. R., Shader, M. J., Walter, E. P., Anderson, S., and Gordon-Salant, S. (2017). "Age-related differences in the processing of temporal envelope and spectral cues in a speech segment," Ear Hear. 38, e335–e342. doi: 10.1097/AUD.0000000000000447
15. Ishida, M., and Arai, T. (2016). "Missing phonemes are perceptually restored but differently by native and non-native listeners," SpringerPlus 5(1), 713. doi: 10.1186/s40064-016-2479-8
16. Jaekel, B. N., Newman, R. S., and Goupell, M. J. (2018). "Age effects on perceptual restoration of degraded interrupted sentences," J. Acoust. Soc. Am. 143, 84–97. doi: 10.1121/1.5016968
17. Jin, S. H., Liu, C., and Sladen, D. P. (2014). "The effects of aging on speech perception in noise: Comparison between normal-hearing and cochlear-implant listeners," J. Am. Acad. Audiol. 25, 656–665. doi: 10.3766/jaaa.25.7.4
18. Jin, S. H., Nie, Y., and Nelson, P. (2013). "Masking release and modulation interference in cochlear implant and simulation listeners," Am. J. Audiol. 22, 135–146. doi: 10.1044/1059-0889(2013/12-0049)
19. Kan, A. (2018). "Improving speech recognition in bilateral cochlear implant users by listening with the better ear," Trends Hear. 22, 2331216518772963. doi: 10.1177/2331216518772963
20. Long, C. J., Holden, T. A., McClelland, G. H., Parkinson, W. S., Shelton, C., Kelsall, D. C., and Smith, Z. M. (2014). "Examining the electro-neural interface of cochlear implant users using psychophysics, CT scans, and speech understanding," J. Assoc. Res. Otolaryngol. 15, 293–304. doi: 10.1007/s10162-013-0437-5
21. McNamara, T. P. (2005). Semantic Priming: Perspectives from Memory and Word Recognition (Psychology Press, New York).
22. Meyer, B. T., Brand, T., and Kollmeier, B. (2011). "Effect of speech-intrinsic variations on human and automatic recognition of spoken phonemes," J. Acoust. Soc. Am. 129, 388–403. doi: 10.1121/1.3514525
23. Miller, R. E., Gibbs, B. E., II, and Fogerty, D. (2018). "Glimpsing speech interrupted by speech-modulated noise," J. Acoust. Soc. Am. 143, 3058. doi: 10.1121/1.5038273
24. Nasreddine, Z. S., Phillips, N. A., Bedirian, V., Charbonneau, S., Whitehead, V., Collin, I., Cummings, J. L., and Chertkow, H. (2005). "The Montreal Cognitive Assessment, MoCA: A brief screening tool for mild cognitive impairment," J. Am. Geriatr. Soc. 53, 695–699. doi: 10.1111/j.1532-5415.2005.53221.x
25. Nelson, P. B., and Jin, S. H. (2004). "Factors affecting speech understanding in gated interference: Cochlear implant users and normal-hearing listeners," J. Acoust. Soc. Am. 115, 2286–2294. doi: 10.1121/1.1703538
26. Nelson, P. B., Jin, S. H., Carney, A. E., and Nelson, D. A. (2003). "Understanding speech in modulated interference: Cochlear implant users and normal-hearing listeners," J. Acoust. Soc. Am. 113, 961–968. doi: 10.1121/1.1531983
27. Newman, R. S. (2004). "Perceptual restoration in children versus adults," Appl. Psycholinguist. 25, 481–493. doi: 10.1017/S0142716404001237
28. O'Neill, E. R., Kreft, H. A., and Oxenham, A. J. (2019). "Speech perception with spectrally non-overlapping maskers as measure of spectral resolution in cochlear implant users," J. Assoc. Res. Otolaryngol. 20, 151–167. doi: 10.1007/s10162-018-00702-2
29. Patro, C., and Mendel, L. L. (2016). "Role of contextual cues on the perception of spectrally reduced interrupted speech," J. Acoust. Soc. Am. 140, 1336–1345. doi: 10.1121/1.4961450
30. Patro, C., and Mendel, L. L. (2020). "Semantic influences on the perception of degraded speech by individuals with cochlear implants," J. Acoust. Soc. Am. 147, 1778–1789. doi: 10.1121/10.0000934
31. Pichora-Fuller, M. K. (2008). "Use of supportive context by younger and older adult listeners: Balancing bottom-up and top-down information processing," Int. J. Audiol. 47(Suppl. 2), S72–S82. doi: 10.1080/14992020802307404
32. Rothauser, E., Chapman, W., Guttman, N., Nordby, K., Silbiger, H., Urbanek, G., and Weinstock, M. (1969). "IEEE recommended practice for speech quality measurements," IEEE Trans. Audio Electroacoust. 17, 225–246. doi: 10.1109/TAU.1969.1162058
33. Saija, J. D., Akyürek, E. G., Andringa, T. C., and Başkent, D. (2014). "Perceptual restoration of degraded speech is preserved with advancing age," J. Assoc. Res. Otolaryngol. 15, 139–148. doi: 10.1007/s10162-013-0422-z
34. Samuel, A. G. (1981). "Phonemic restoration: Insights from a new methodology," J. Exp. Psychol. Gen. 110, 474–494. doi: 10.1037/0096-3445.110.4.474
35. Samuel, A. G. (1987). "Lexical uniqueness effects on phonemic restoration," J. Mem. Lang. 26, 36–56. doi: 10.1016/0749-596X(87)90061-1
36. Shannon, R. V., Zeng, F. G., Kamath, V., Wygonski, J., and Ekelid, M. (1995). "Speech recognition with primarily temporal cues," Science 270, 303–304. doi: 10.1126/science.270.5234.303
37. Shinn-Cunningham, B. G., and Wang, D. (2008). "Influences of auditory object formation on phonemic restoration," J. Acoust. Soc. Am. 123, 295–301. doi: 10.1121/1.2804701
38. Sladen, D. P., and Zappler, A. (2015). "Older and younger adult cochlear implant users: Speech recognition in quiet and noise, quality of life, and music perception," Am. J. Audiol. 24, 31–39. doi: 10.1044/2014_AJA-13-0066
39. Tulsky, D. S., Carlozzi, N., Chiaravalloti, N. D., Beaumont, J. L., Kisala, P. A., Mungas, D., Conway, K., and Gershon, R. (2014). "NIH Toolbox Cognition Battery (NIHTB-CB): List sorting test to measure working memory," J. Int. Neuropsychol. Soc. 20, 599–610. doi: 10.1017/S135561771400040X
40. Verschuure, J., and Brocaar, M. P. (1983). "Intelligibility of interrupted meaningful and nonsense speech with and without intervening noise," Percept. Psychophys. 33, 232–240. doi: 10.3758/BF03202859
41. Warren, R. M. (1970). "Perceptual restoration of missing speech sounds," Science 167, 392–393. doi: 10.1126/science.167.3917.392
42. Warren, R. M., and Obusek, C. J. (1971). "Speech perception and phonemic restorations," Atten. Percept. Psychophys. 9, 358–362. doi: 10.3758/BF03212667
43. Xie, Z., Gaskins, C. R., Shader, M. J., Gordon-Salant, S., Anderson, S., and Goupell, M. J. (2019). "Age-related temporal processing deficits in word segments in adult cochlear-implant users," Trends Hear. 23, 2331216519886688. doi: 10.1177/2331216519886688
