Author manuscript; available in PMC: 2019 Jan 1.
Published in final edited form as: Ear Hear. 2018 Jan-Feb;39(1):101–109. doi: 10.1097/AUD.0000000000000469

Linguistic Context Versus Semantic Competition in Word Recognition by Younger and Older Adults with Cochlear Implants

Nicole M Amichetti 1, Eriko Atagi 1,2, Ying-Yee Kong 2, Arthur Wingfield 1
PMCID: PMC5741484  NIHMSID: NIHMS879233  PMID: 28700448

Abstract

Objectives

The increasing numbers of older adults now receiving cochlear implants raises the question of how the novel signal produced by cochlear implants may interact with cognitive aging in the recognition of words heard spoken within a linguistic context. The objective of this study was to pit the facilitative effects of a constraining linguistic context against a potential age-sensitive negative effect of response competition on effectiveness of word recognition.

Design

Younger (n = 8; M age = 22.5 years) and older (n = 8; M age = 67.5 years) adult implant recipients heard 20 target words as the final words in sentences that manipulated the target word’s probability of occurrence within the sentence context. Data from published norms were also used to measure response entropy, calculated as the total number of different responses and the probability distribution of the responses suggested by the sentence context. Sentence-final words were presented to participants using a word-onset gating paradigm, in which a target word was presented with increasing amounts of its onset duration in 50-ms increments until the word was correctly identified.

Results

Results showed that for both younger and older adult implant users the amount of word onset information needed for correct recognition of sentence-final words was inversely proportional to their likelihood of occurrence within the sentence context, with older adults gaining differential advantage from the contextual constraints offered by a sentence context. On the negative side, older adults’ word recognition was differentially hampered by high response entropy, with this effect being driven primarily by the number of competing responses that might also fit the sentence context.

Conclusions

Consistent with previous research with normal-hearing younger and older adults, the present results showed older adult implant users’ recognition of spoken words to be highly sensitive to linguistic context. This sensitivity, however, also resulted in a greater degree of interference from other words that might also be activated by the context, with negative effects on ease of word recognition. These results are consistent with an age-related inhibition deficit extending to the domain of semantic constraints on word recognition.

Keywords: Cochlear implants, Aging, Linguistic context, Response competition, Cognition

INTRODUCTION

The perception of everyday speech is based on a continuous interaction between bottom-up processing (analysis of the physical properties of the acoustic signal), and top-down information (context, knowledge, expectancy). Top-down and bottom-up processes ordinarily have a reciprocal relationship, such that the more probable a word is within a linguistic or real-world context, the less bottom-up information the listener will need for accurate word recognition (Morton 1969). This fundamental relationship underlies the ease of speech comprehension when listeners are faced with speech that is under-articulated (Lindblom et al. 1992; Pollack & Pickett 1963; Wingfield et al. 1994), when listening to accented speech (Van Engen & Peelle 2014), when speech is heard in a noisy background (Dubno et al. 2000; Cox et al. 2008), or for listeners with reduced hearing acuity (Benichov et al. 2012).

Cochlear implants represent a special case of speech distortion, in which word recognition must succeed with a bottom-up signal that has limited spectral resolution and temporal fine-structure cues (Loizou 1999; Wilson & Dorman 2008a). In spite of these limitations, both younger and older adults typically adapt well to this novel signal, culminating in relatively good speech intelligibility (Friedland et al. 2010; Budenz et al. 2011; Roberts et al. 2013). In part, this success is due to effective use of linguistic context, such that, for example, with spectrally-degraded speech signals, word intelligibility scores are higher for words heard in a sentence context than when words are presented in isolation, both in quiet and in noisy backgrounds (e.g., Fishman et al. 1997; Friesen et al. 2001; Eisenberg et al. 2002; Sheldon et al. 2008; Wilson & Dorman 2008b; Nittrouer et al. 2014; Kong et al. 2015; Oh et al. 2016; Winn 2016). Such findings emphasize the role of a cognitive, top-down contribution to successful perceptual operations in listeners using cochlear implants (e.g., Bhargava et al. 2014; Başkent et al. 2016; Clarke et al. 2016).

This latter point raises a special issue in the case of older adults, many of whom, in their 60s, 70s, and even 80s, are currently receiving cochlear implants in increasing numbers (Gifford et al. 2010; Lin 2011; Lin et al. 2012; Dillon et al. 2013). That is, balanced against well-preserved linguistic knowledge in healthy aging (Wingfield & Stine-Morrow 2000), older adults typically show age-related reductions in working memory capacity (Salthouse 1994), attentional resources (McCabe et al. 2010), and processing speed (Cerella 1994; Salthouse 1996), all of which may impact speech comprehension and recall for what has been heard (Wingfield et al. 2003; Tun et al. 2009; DeCaro et al. 2016). An additional feature of cognitive aging is an inhibition deficit characterized by greater susceptibility to interference from off-target activations (Hasher & Zacks 1988; Zacks et al. 1999; Lustig et al. 2007; Lash & Wingfield 2014). For example, older adults, independent of hearing acuity, show more interference in the recognition of a word when there are large numbers of words that share its phonology (Sommers 1996; Sommers & Danielson 1999). Older adults also have more difficulty than younger adults in recognizing a word when it is one of a large number of words that might also be suggested by a particular linguistic context (Lash et al. 2013).

Thus, for the older adult there are two competing forces in operation when a word is heard within a linguistic context. The first of these is a positive effect of stimulus expectancy that facilitates word recognition, with past work with non-implant listeners—presented with natural speech or noise-vocoded speech to simulate cochlear implant listening—showing that older adults generally gain as much as, or even more facilitation from a linguistic context than younger adults (Cohen & Faulkner 1983; Wingfield et al. 1991; Pichora-Fuller et al. 1995; Sheldon et al. 2008; Benichov et al. 2012). At the same time, older adults, regardless of hearing acuity, tend to show greater interference in word recognition from response competition engendered by activation of additional words that might also be activated by the linguistic context (Lash et al. 2013).

Stimulus Expectancy and Response Competition from “Cloze” Norms

Consider, for example, hearing a sentence fragment such as, “He wondered if the storm had done much…” This sentence-beginning would lead one to expect a highly probable sentence-final word such as damage. This can be compared with a less constraining sentence, such as, “He was soothed by the gentle…,” which would have a greater degree of response uncertainty. Using a “cloze” procedure (Taylor 1953), in which a large sample of young adults was asked to complete the first sentence with a likely ending, Bloom and Fischler (1980) found in the first example that 97% of respondents gave the word damage. The most frequent response to the second example was music, but this was given by only 23% of respondents, with various participants giving a range of other responses. Using the information from such norms, affirmed for younger, middle-aged, and older adults (Lahar et al. 2004), it can be shown that the amount of word onset duration (Wingfield et al. 1991) or the signal-to-noise ratio (Benichov et al. 2012) necessary for the correct recognition of sentence-final words is inversely proportional to their expectancy based on their preceding linguistic context.

A feature of norms such as Bloom and Fischler (1980) and Lahar et al. (2004) is that, in addition to listing the dominant sentence completions, these norms also list the full range of alternative responses given by respondents, and the percentage of respondents giving each of these responses. In the first illustrative sentence above, for example, the sentence-final word harm was also given, although by very few respondents. By contrast, the second sentence context above produced over 12 different responses, with varying degrees of popularity. These published norms thus allow one to know not only the transition probability of a target word in a sentence context, but also the uncertainty implied by the number and probability distribution of alternative possibilities drawn from respondents’ sentence completions. This latter measure has been referred to as response entropy (Shannon 1948; Shannon & Weaver 1949; Treisman 1965; van Rooij & Plomp 1991).

Word Recognition and Word-onset Gating

In the experiment to be described, we made use of the word-onset gating technique (Grosjean 1980, 1996; Wingfield et al. 1991), in which participants hear increasing amounts of word-onset information in a series of increasingly longer temporal “gates” until the word can be correctly identified*. Studies using an onset gating procedure have shown that words can often be recognized long before their full acoustic duration has been heard, in many cases when as little as 330 ms of the word’s onset has been heard (Grosjean 1980; Marslen-Wilson 1984). It has been argued that correct word identification in the absence of the complete acoustic signal is made possible because of the rapid decrease in the number of possible word candidates that share the same initial sounds with a target word as more and more of the word’s onset duration is heard (Tyler 1984; Wayland et al. 1989). In some cases correct recognition can occur even when more than one word candidate remains in the onset cohort. In such cases recognition may be aided by matching prosodic information, such as syllabic stress, available in the word onset, or the presence of coarticulation cues that may supply information about sounds yet to be heard (Warren & Marslen-Wilson 1987; Lindfield et al. 1999; Wingfield et al. 2000; Magnuson et al. 2001).

Onset-gating has shown itself to be sensitive to a number of factors that are known to affect ease of word recognition. Gating studies have confirmed, for example, that the beginnings of words in English typically contain a greater amount of discriminative phonological information than in the latter parts of words (Wingfield et al. 1997) and that there is an advantage for words with a high frequency of occurrence in the language relative to low-frequency words (Grosjean 1980). Relevant to our present interests, gating studies have also shown that a greater amount of a word’s onset will be needed for recognition if speech quality is poor (Nooteboom & Doodeman 1984; Grosjean 1985; Moradi et al. 2014), and that words heard within a semantically-constraining sentence context can often be recognized with as little as the first 150 to 200 ms of their onset duration (Grosjean 1980; Marslen-Wilson 1984).

The present experiment was designed to use the gating paradigm to test the interplay between context-based expectancy and response competition in younger and older adult implant recipients’ recognition of sentence-final target words. Extrapolating from data obtained from younger adults and from older adults with normal hearing and mild-to-moderate hearing loss (Lash et al., 2013), one would expect that older adult implant users would require a larger word-onset duration than younger adult implant users to identify a spoken word when there is limited linguistic context. One might also expect this difference to be reduced or eliminated by presenting the target word with increasing amounts of linguistic context (cf. Wingfield et al., 1991; Pichora-Fuller et al., 1995; Dubno et al., 2000; Benichov et al., 2012).

Of special interest, however, is the extent to which older adult implant users’ word recognition may be differentially hampered by a postulated inhibition deficit that must be taken into account along with the beneficial effects of a linguistic context. To the extent that older adults show such an age-related inhibition deficit (Hasher & Zacks 1988; Lustig et al., 2007), one would hypothesize a differentially greater effect of interference on older relative to younger adult implant users’ recognition of a sentence-final word when there is a large number of words that might also fit a sentence context.

MATERIALS AND METHODS

Participants

Participants were 16 adult users of cochlear implants. Eight were younger adults (4 men, 4 women) ranging in age from 18–29 years (M = 22.5 years, SD = 4.7) and eight were older adults (3 men, 5 women) ranging in age from 63–74 years (M = 67.5 years, SD = 3.5). All participants were in self-reported good health, with no known history of stroke, Parkinson’s disease, or other neurologic involvement that might negatively impact their performance in the experimental task. The two groups were similar in vocabulary knowledge as assessed by the Shipley vocabulary test (Zachary, 1991), with mean scores of 14.3 (SD = 1.9) and 15.5 (SD = 2.7) for the younger and older adults, respectively.

Table 1 gives detailed demographic information for the younger and older adults, including age, gender, hearing history, hearing device(s), and consonant-nucleus-consonant (CNC, Peterson & Lehiste 1962) word recognition scores. It can be seen that the majority of the younger adults were diagnosed with severe-to-profound hearing loss during infancy and were implanted with one or two cochlear implants before the age of five years. The older adults had later onset of severe-to-profound hearing loss. The younger adults were predominantly implanted with Cochlear Nucleus implants (CI22 or CI24), whereas the older adults were implanted with Advanced Bionics implants (Clarion CI, Clarion CII, or HiRes90K).

TABLE 1.

Participant Demographic Information

ID Age Gender Age of HL onset* Age severe-to-profound HL Etiology CI use (yrs)** Left device Right device CNC score (%)
Y1 22 F L:birth; R:7 L:birth; R:14 Cogan’s Syndrome 12 CI24 (N6) CI24 (N6) 98
Y2 20 F birth birth Usher Syndrome 19 CI24 (N6) CI24 (N6) 80
Y3 29 M birth 5 Unknown 13 HiRes90k (Naida) Clarion CII (Naida) 64
Y4 18 M birth birth Connexin-26 15 CI24 (N5) CI24 (N5) 90
Y5 28 M birth birth Unknown 24 CI22 (N6) CI22 (N6) 76
Y6 25 M birth birth Unknown 23 CI22 (G3) NA 80
Y7 18 F birth birth Unknown 15 CI24 (N5) NA 78
Y8 20 F birth birth Unknown 15 CI24 (N5) CI24 (N5) 56

O1 65 F 35 L:51; R:40 Unknown 14 NA Clarion CII (Harmony) 68
O2 68 M 40 55 Unknown 6 HiRes90k (Naida) NA 78
O3 69 F 5 27 Hereditary 25 CI24 (Freedom) CI22 (Freedom) 96
O4 63 F 15 34 Otosclerosis & Meniere 16 Clarion CII (Naida) HiRes90k (Naida) 84
O5 70 F L:29; R:birth L:50; R:birth L:Infection; R:Unknown 13 Clarion CII (Naida) NA 80
O6 66 F 16 22 Hereditary 17 HiRes90k (Harmony) Clarion CI (Harmony) 80
O7 74 M mid 30s 66 Otosclerosis 2 (HA: Phonak Naida SP) HiRes90k (Naida) 64
O8 65 M 5 mid 40s Premature birth 14 HiRes90k (Naida) (HA: Phonak Naida SP) 66

Notes:

* Age of onset of hearing loss. Left and right ear reported separately in case of asymmetry.

** Number of years of cochlear implant experience.

Hearing device used at time of testing, with type of speech processor indicated in parentheses. (Note that participants “O7” and “O8” used a Phonak Naida Super Power hearing aid [HA] in the non-implanted ear. “NA” indicates no hearing device used in the non-implanted ear for unilateral cochlear implant recipients.)

Percent correct monosyllabic word recognition in quiet listening using a 50-word CNC list (Peterson & Lehiste 1962).

For the testing sessions, participants were encouraged to use the device(s) they used on a regular basis, including implants and/or hearing aids. Among those who had only one implant, three participants (Y6, Y7, and O2) had a profound hearing loss in the non-implanted ear and did not wear a hearing aid in that ear. Participant O5 had a severe-to-profound hearing loss in the non-implanted ear; although she had a hearing aid for that side, she did not use it regularly and was tested without it.

Despite these differences, which occur in typical implant populations, we ensured that the younger and older adults were closely matched on CNC word-recognition scores (younger adults: 77.8%; older adults: 77.0%; t(14) = 0.12, p = 0.90).

Stimuli

The stimuli consisted of 20 one- and two-syllable common nouns and adjectives with high frequencies of occurrence in English, with mean log-transformed Hyperspace Analogue to Language (HAL) frequencies ranging from 9.31 to 13.58 (Lund & Burgess 1996; Balota et al. 2007). The mean duration of the recorded target words was 605.3 ms, representing an average of 12.6 50-ms gates for full word inclusion. Accepting truncated but just-identifiable word-final phonemes as marking word endings reflected a mean word duration of 515 ms.

Each of the words was presented as the final word in three sentence contexts that varied in the degree to which they would be likely sentence endings. The sentence and target-word sets, which were taken from Lash et al. (2013), were selected from published cloze data such that each target word could be heard as the final word in a sentence in which the target word had a low expectancy (probability [prob] range: 0.02 to 0.05; M = 0.03; e.g., “The cigar burned a hole in the FLOOR” [prob = 0.03]), a medium expectancy (probability range: 0.09 to 0.21, M = 0.13; e.g., “The boys helped Jane wax her FLOOR” [prob = 0.10]), or a high expectancy (probability range: 0.23 to 0.85, M = 0.52; e.g., “Some of the ashes dropped on the FLOOR” [prob = 0.43]).

These transition probabilities were based on the published responses of 100 young adults who were given sentences with a final word missing and asked to complete the sentence with a likely final word, keeping the sentence syntactically and semantically coherent (Bloom & Fischler 1980). Such “cloze” norms are presumed to reflect the combined influence of the syntactic and semantic constraints imposed by the sentence contexts (e.g., Treisman 1965). Although originally generated by young adults, the Bloom and Fischler (1980) norms have been shown to be predictive of word selection by children (Stanovich et al. 1985), university undergraduates (Block & Baldwin 2010), and young, middle-aged, and older adults in the United States and Canada (Lahar et al. 2004). In addition to the high, medium, and low context conditions, each target word was also prepared in a neutral context, in which the target word was preceded by the carrier phrase “The word is….”

The sentences and target words were spoken by a female speaker of American English and recorded via Sound Studio software (FeltTip Inc.), which digitized the speech at a sampling rate of 22 kHz. To avoid potential effects of coarticulatory cues that could be tied to the ends of specific sentence frames, or accidental effects of differences in the way the target word was spoken, a single rendition of each target word was recorded and spliced onto the end of each of its sentence contexts and the neutral carrier phrase. Splicing was carried out so as to avoid an artificial pause and to produce a natural-sounding, continuous sentence.

Procedures

Each sentence context and target word was presented in a series of successive presentations, with the onset gate size of the target word increased in 50-ms increments until the target word was correctly identified. The recognition threshold was defined as the gate size (in ms) at which the participant first gave the correct response.
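The gated-presentation loop described above can be sketched in a few lines of Python. This is a minimal illustration, not the study's actual presentation software; `identify` is a hypothetical stand-in for playing the first portion of the recorded target word and collecting the listener's response.

```python
def gating_trial(target_word, identify, gate_ms=50, word_ms=605):
    """Present increasing word-onset gates until the target is correctly
    identified; return the recognition threshold in ms.

    identify(duration_ms) stands in for playing the first duration_ms of
    the target word and returning the listener's guess.
    """
    duration = 0
    while duration < word_ms:
        duration = min(duration + gate_ms, word_ms)
        if identify(duration) == target_word:
            return duration  # recognition threshold (ms)
    return word_ms  # the full word duration was needed
```

For a listener who first recognizes "floor" at 200 ms of onset, for example, the loop presents 50-, 100-, and 150-ms gates before returning a threshold of 200 ms.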

Each participant heard each of the 20 target words in only one of its sentence contexts (low, medium, high, or neutral context). The particular combinations of context sentences and target words were varied between participants such that, across the eight participants in each age group, each target word was heard in combination with each of its preceding contexts an equal number of times. This design was employed to counterbalance potential effects of individual word factors such as word frequency and the number of phonological neighbors, both of which are known to affect ease of word recognition (e.g., Howes 1957; Grosjean 1980; Luce & Pisoni 1998; Sommers 1996; Taler et al., 2010). The particular order of presentation of the target words (and hence context types) was varied randomly between participants.
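The balancing constraint (each target word paired with each of its four contexts equally often across the eight participants in a group) can be met with a simple rotation. The sketch below assumes a Latin-square-style rotation; the study's exact assignment scheme is not specified beyond the balancing requirement.

```python
from collections import Counter

def counterbalance(words, contexts, n_participants):
    """Rotate the word-to-context assignment across participants so that,
    across the group, every word occurs with every context equally often."""
    return [{word: contexts[(i + p) % len(contexts)]
             for i, word in enumerate(words)}
            for p in range(n_participants)]

# With 8 participants and 4 context types, each word-context pairing
# occurs exactly twice across the group.
assignments = counterbalance(["floor", "music"],
                             ["low", "medium", "high", "neutral"], 8)
pair_counts = Counter((w, a[w]) for a in assignments for w in a)
```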

The stimuli were presented from an external sound card (TASCAM US-366) and transmitted through a soundfield loudspeaker in a soundproofed booth. Participants were seated 1 m from the loudspeaker at zero azimuth and used their everyday program settings. All stimuli (including the CNC words) were presented at a root-mean-square (RMS) level of 67–68 dB A, which was reported to be a comfortable listening level by all participants. The experimental protocol was approved by the Brandeis University Institutional Review Board (IRB). Written informed consent was obtained from all participants prior to the start of the experiment.

RESULTS

Effects of Stimulus Expectancy

Figure 1 shows the mean gate size (ms) required by the younger and older adults for the correct recognition of target words when they were preceded by a linguistic context with a relatively low, medium, or high probability of suggesting the sentence-final word. These data include only those cases where participants repeated the context sentence correctly, as the ability to correctly hear the sentence frame is a necessary precondition for evaluating the effectiveness of contextual constraints on target word recognition (Craig et al., 1993; Janse & Ernestus, 2011; Kidd & Humes, 2012; Molis et al., 2015). Figure 1 and subsequent analyses exclude 0.04% of the data for the younger adults and an equivalent 0.04% of the data for the older adults where the participant failed to accurately repeat the context sentence frame.

Figure 1.

Figure 1

Mean gate size required for correct word recognition with low, medium, or high degrees of linguistic context for older and younger adults. Numbers on the abscissa are mean cloze probability values of the target words. Error bars represent one standard error.

These data show cochlear implant listeners to follow the two features that have been found for context effects on word recognition: first, that the ease of word recognition increases progressively with increasing cloze probability; and second, that age differences progressively decrease with increasing stimulus expectancy (Cohen & Faulkner, 1983; Wingfield et al., 1991; Pichora-Fuller et al., 1995; Benichov et al., 2012; Lash et al., 2013). This pattern was confirmed by a 3 (Expectancy: low, medium, high) × 2 (Participant group: younger, older) mixed-design analysis of variance (ANOVA), with expectancy as a within-participants variable and participant group as a between-participants variable. The appropriateness of using parametric statistics was supported by inspection of Q-Q plots and examination of the unstandardized residuals with the Shapiro-Wilk test for normality. The S-W test failed to reveal significant departures from normality for either group’s data (younger: W = 0.959, df = 8, p = 0.796; older: W = 0.963, df = 8, p = 0.836), nor was there significant evidence of skewness (younger: 0.278; older: 0.445) or kurtosis (younger: −0.892; older: −0.360).

The appearance of fewer gates being required for word recognition as the level of stimulus expectancy increased was confirmed by a significant main effect of expectancy, F(2,28) = 61.74, p < 0.001, ηp2 = 0.82. An absence of a main effect of participant group, F(1,14) = 2.45, p = 0.14, ηp2 = .15, was moderated by a significant Expectancy × Participant group interaction, F(2,28) = 4.13, p = 0.027, ηp2 = 0.23. This interaction was driven by a difference in recognition thresholds between the two participant groups at the low expectancy level, where the older adults required a significantly larger gate size for correct word recognition than the younger adults, t(14) = 2.85, p = 0.013, with this age difference diminishing with increasing contextual constraints.

Figure 1 does not show the gate-size data for the neutral context condition where target words were preceded only by the non-informational carrier phrase, “The word is…” These gate sizes, especially for the older adults, approached the discriminative point for many words, thus representing a functional “ceiling” effect, which, as indicated previously, can be as little as 330 ms. This had the effect of reducing the potential size of an age difference, with the older adults achieving correct recognition with a mean onset gate size of 383.44 ms (SD = 51.98) and the younger adults with a mean gate size of 324.58 ms (SD = 56.03), t(14) = 2.18, p = 0.047. As noted previously, the two participant groups were initially matched for CNC score, t(14) = 0.12, p = 0.91. The small but significant age difference in gate size observed for recognition of words in the neutral context condition was not predicted by the CNC score, r(16) = −0.018, p = 0.95.

Effects of Response Entropy

Response entropy was calculated for each of the target words in each of the three context conditions (low, medium, and high context), with entropy (H) calculated from the total number of different responses given in the Bloom and Fischler (1980) norms and the probability distribution of those responses. Following Lash et al. (2013), entropy is given by the following equation:

H = −∑_{i=1}^{n} p(x_i) log_b p(x_i)

where x is a response, for which there are n possible responses (x1, x2, …, xn). For each xi there is a probability p that xi will occur. The subscript b represents the base of the logarithm used, where we use the base 2 in keeping with the traditional measurement of statistical information represented in bits (Shannon 1948; Shannon & Weaver 1949). For example, when asked to complete the sentence beginning “Bob would often sleep during his lunch … “ with a likely sentence-final word, 54% of respondents gave the word hour (prob = 0.54), 41% gave break (prob = 0.41) and 5% gave period (prob = 0.05). Using the above equation, H = − [(0.54 log2 0.54) + (0.41 log2 0.41) + (0.05 log2 0.05)], or 1.22 bits. This can be compared to a higher entropy situation where the distribution of probabilities is more uniform, such as “My aunt likes to read the daily …,” where “paper,” “newspaper,” and “news” were produced with probabilities of 0.47, 0.31, and 0.22, respectively, yielding an entropy value of 1.52 bits.
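The arithmetic of the two worked examples above can be verified directly; this snippet simply reproduces the entropy calculation from the text.

```python
import math

def response_entropy(probs, base=2):
    """Shannon entropy of a cloze-response probability distribution,
    in bits when base = 2."""
    return -sum(p * math.log(p, base) for p in probs)

# "Bob would often sleep during his lunch ..." (hour, break, period)
h_lunch = response_entropy([0.54, 0.41, 0.05])   # ≈ 1.22 bits
# "My aunt likes to read the daily ..." (paper, newspaper, news)
h_daily = response_entropy([0.47, 0.31, 0.22])   # ≈ 1.52 bits
```

The more uniform the distribution of completions, the higher the entropy: the second distribution spreads probability more evenly over its three responses and so yields the larger value.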

Figure 2 shows mean recognition thresholds as a function of response entropy for the younger and older adults for low entropy (0.40 – 1.60; M = 1.18), medium entropy (1.70 – 2.39; M = 2.01) and high entropy (2.41 – 3.27; M = 2.74) conditions. These data were submitted to a 3 (Entropy: low, medium, high) × 2 (Participant group: younger, older) mixed-design ANOVA, with entropy as a within-participants variable and participant group as a between-participants variable. The appearance of more gates being required for correct recognition as response entropy increased was confirmed by a main effect of entropy, F(2,28) = 4.65, p = 0.018, ηp2 = 0.25. The main effect of age showed a non-significant trend, F(1,14) = 3.67, p = 0.076, ηp2 = 0.21. The suggestion in Figure 2 of a differentially larger age difference when response entropy was high was not reflected by an overall Entropy × Participant group interaction, F(2,28) = 0.73, p = 0.49, ηp2 = 0.05. However, planned comparison testing yielded a significant age difference in the high entropy condition, t(14) = 2.31, p = 0.037.

Figure 2.

Figure 2

Mean gate size required for correct word recognition with low, medium, or high degrees of response entropy for older and younger adults. Numbers on the abscissa are mean calculated entropy values of the target words. Error bars represent one standard error.

Number of Competitors versus Strongest Competitor

It should be noted that entropy is a single index that reflects not only the number of different responses given by participants in the published norms but also the distribution of the frequencies (i.e., probabilities) with which they were given. Because these factors tend to be inter-correlated, we examined their independent effects using a partial correlation analysis (Bruning & Kintz, 1997). This analysis showed a significant correlation between recognition thresholds and the size of the competitor pool (the number of unique responses given in the published norms), with the correlation between recognition thresholds and the probability of the strongest competitor partialed out. This held for both the younger adults, r(115) = 0.32, p < 0.001, and the older adults, r(109) = 0.39, p < 0.001. By contrast, the correlation between recognition threshold and the response probability of the strongest competitor, with the correlation between recognition threshold and pool size partialed out, failed to reach significance for either the younger adults, r(115) = 0.15, p = 0.099, or the older adults, r(109) = 0.17, p = 0.070.
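A first-order partial correlation of this kind can be illustrated with the standard residual formulation: correlate the least-squares residuals of each variable after regressing out the control variable. This is a generic sketch, not the authors' Bruning and Kintz (1997) computation, though the residual method is mathematically equivalent for first-order partial correlations.

```python
import numpy as np

def partial_corr(x, y, z):
    """First-order partial correlation between x and y, controlling for z:
    correlate the least-squares residuals of x on z and of y on z."""
    x, y, z = (np.asarray(v, dtype=float) for v in (x, y, z))

    def residuals(a, control):
        # Residuals of a simple linear regression of a on the control variable
        slope, intercept = np.polyfit(control, a, 1)
        return a - (slope * control + intercept)

    return np.corrcoef(residuals(x, z), residuals(y, z))[0, 1]
```

Applied to per-item recognition thresholds (x) and competitor-pool size (y), with strongest-competitor probability as the control (z), this would yield the pool-size correlation with competitor strength partialed out.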

For these data the size of the competitor pool thus had a greater effect on recognition thresholds than the strength of the strongest competitor. To the extent that the entropy effect was driven primarily by the size of the competitor pool, an age-dependent inhibition deficit would predict that a large number of semantically-suggested competitors would have a greater adverse effect on word recognition for the older adults than for the younger adults. To test this prediction we calculated the mean recognition thresholds for the younger and older adults using a median split to select cases with few competitors (2 to 7 unique responses) and cases when there was a larger number of competitors (8 to 18 unique responses).

These data are shown in Table 2, where a 2 (Competitor pool: small, large) × 2 (Participant group: younger, older) mixed design ANOVA confirmed main effects of the size of the competitor pool, F(1,14) = 15.68, p = 0.001, ηp2 = 0.53, and of participant group, F(1,14) = 17.81, p = 0.001, ηp2 = 0.56. Of primary interest was a significant Competitor pool size × Participant group interaction, F(1,14) = 5.33, p = 0.037, ηp2 = 0.28, confirming that a large number of response competitors had a differentially larger negative effect on recognition thresholds for the older adults relative to the younger adults.

TABLE 2.

Recognition Gate Size for Smaller and Larger Numbers of Competing Responses

Younger Adults Older Adults
Number of Competitors M (SD) M (SD)
Small (2–7) 120.25 (21.76) 137.78 (42.39)
Large (8–18) 140.93 (23.36) 215.79 (49.91)

Target expectancy and the number of potential competitors will often show a reciprocal relationship, with higher-probability targets having fewer response competitors than target words with lower probability. To ensure that the demonstrated effect of the size of the competitor pool on recognition thresholds was not simply a consequence of this relationship, we calculated a second series of partial product-moment correlations for each of the two participant groups, in which the predictive effect of each of these factors was examined while partialing out the effect of the other. We first confirmed a significant correlation between stimulus expectancy and recognition threshold, with the relationship between recognition threshold and competitor pool size held constant (younger: r = −0.19, p = 0.045; older: r = −0.32, p = 0.001). There was also a significant correlation between the size of the competitor pool and recognition threshold, with the relationship between recognition threshold and stimulus expectancy held constant (younger: r = 0.23, p = 0.011; older: r = 0.28, p = 0.003).

We thus conclude that both stimulus expectancy and response competition operated on the recognition thresholds for the sentence-final target words, with the latter having a larger detrimental effect on the older adults than on the younger adults.

DISCUSSION

We earlier cited the basic principle of perception that the higher the probability of a stimulus, the less sensory information is needed for its accurate recognition. Considerable research conducted with young adults with age-normal hearing has confirmed this basic principle, with long-standing demonstrations that the signal-to-noise ratio needed for correct recognition of spoken words (or the necessary exposure duration of printed words) is inversely proportional to the logarithm of their probability within a linguistic context (Tulving & Gold 1963; Morton 1964a,b, 1969; see also Miller et al. 1951; Black 1952; Bruce 1958). Subsequent research has shown this general principle to hold for older adults as well as for younger adults, for both spoken and written materials, with a tendency for older adults to make especially good use of linguistic context in word recognition relative to young adults (Cohen & Faulkner 1983; Madden 1988; Nittrouer & Boothroyd 1990; Wingfield et al. 1991; Perry & Wingfield 1994; Pichora-Fuller et al. 1995; Dubno et al. 2000; Grant & Seitz 2000). This general principle is embodied in traditional Speech Perception in Noise (SPIN) tests in audiometric assessment, in which intelligibility is tested for words presented in noise with or without a constraining linguistic context (e.g., Kalikow et al. 1977; Wilson et al. 2007).

Using materials drawn from the Revised Speech Perception in Noise (R-SPIN) test (Bilger et al. 1984), Humes and colleagues (2007) affirmed the facilitative effect of a sentence context on word recognition for younger and older adults for both spoken and written stimuli. By comparing the two modalities within a single experiment, however, they found that while the magnitude of the contextual benefit was larger in the auditory than in the visual modality, speeded presentation was more detrimental to older adults’ performance in the auditory than in the visual modality.

Although noting several differences inherent in the manner of the two presentations, Humes and colleagues suggested that two factors must be considered in word recognition in adult aging: a modality-specific effect, as observed in their study, and amodal cognitive factors that affect both visual and auditory processing. An example of the latter is that while older adults make at least as good use of a prior linguistic context in spoken word recognition as younger adults (e.g., Cohen & Faulkner 1983; Wingfield et al. 1991; Pichora-Fuller et al. 1995; Benichov et al. 2012; Lash et al. 2013), older adults are less effective than younger adults in identifying an indistinct word retrospectively from the context that follows that word (Wingfield et al. 1994). This latter ability requires maintaining a temporary memory trace of the indistinct stimulus, and potentially the surrounding context, in working memory—a cognitive capacity which, as previously noted, is typically limited in adult aging (Hasher & Zacks 1988; Salthouse 1994; Zacks et al. 1999).

The facilitating effects of linguistic context with gated stimuli observed in this study are compatible with most word recognition models that assume a reciprocal balance between bottom-up information affected by stimulus quality and top-down information supplied by a system of linguistic knowledge (e.g., McClelland & Elman 1986; Marslen-Wilson & Zwitserlood 1989; Rönnberg et al. 2013). An early model that captures many of the features of word recognition was developed by Morton (1964a,b, 1969, 1979), who postulated a “dictionary” of “units” (later named “logogens”) representing lexical entries stored in long-term semantic memory. Each stored lexical unit has an initial level of activation (a “resting potential”) based on its a priori probability, derived from the frequency with which the word has been encountered in past experience; this resting level underlies the word frequency effect in ease of written (Howes & Solomon 1951) and spoken (Howes 1957; Grosjean 1980) word recognition. In the Morton model, as a sentence unfolds in time and progressively increases the likelihood of hearing (or seeing) a given word, the level of activation of the corresponding logogen rises until a threshold of activation is exceeded. At this point the unit “fires” and the corresponding word in the mental lexicon becomes available as a response. (A corollary of this proposition is that when a word has an especially high probability of occurrence, recognition may be triggered with little or no sensory information. This can result in contextually congruent misidentifications, often expressed with inappropriately high confidence [Rogers et al. 2012; Rogers & Wingfield 2015].)
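The threshold-activation idea can be illustrated with a deliberately simplified toy sketch, loosely in the spirit of the logogen model. All numbers and parameter names here are invented for illustration and are not the model's actual parameterization:

```python
# Toy threshold-activation sketch: activation = resting level (from word
# frequency) + contextual boost + accumulated sensory evidence, with
# recognition once activation crosses a fixed threshold. All values invented.

THRESHOLD = 1.0

def gates_to_recognition(resting, context_boost, evidence_per_gate):
    """Number of gates of sensory input needed before activation
    (resting + context boost + accumulated evidence) reaches THRESHOLD."""
    gates = 0
    activation = resting + context_boost
    while activation < THRESHOLD:
        gates += 1
        activation += evidence_per_gate
    return gates

# The same word needs fewer gates in a constraining context than in a
# neutral one, because context raises its pre-stimulus activation.
print(gates_to_recognition(resting=0.2, context_boost=0.6, evidence_per_gate=0.1))
print(gates_to_recognition(resting=0.2, context_boost=0.0, evidence_per_gate=0.1))
```

The sketch also captures the corollary noted above: a large enough contextual boost pushes activation over threshold with zero gates of sensory input, which is precisely the condition under which confident contextually congruent misidentifications can arise.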

Our present results using word-onset gating show, for both younger and older adult implant listeners, more than a simple context versus no-context distinction in ease of word recognition. Consistent with models such as Morton (1979), the effect of context is a graded one, with recognition thresholds progressively decreasing as the likelihood of hearing a target word within its sentence context increases. Also seen in these data is older adults’ effective use of a linguistic context to compensate for their initial disadvantage in recognizing words presented in the neutral or low context conditions (see also Cohen & Faulkner 1983; Wingfield et al. 1991; Dubno et al. 2000; Pichora-Fuller 2008; Sheldon et al. 2008).

In this latter regard it is especially notable that the older adults required a greater amount of word onset duration for correct recognition than the younger adults in the neutral and low context conditions, even though the younger and older participants did not differ in word recognition scores for CNC words. This distinction suggests that the gating task taps not only listeners’ perceptual capability but also the operation of matching the detected word onset against potential candidates that share this onset, and it is in this operation that we saw a significant age difference. The age deficit observed for gated stimuli would thus appear to reside at the interface between the sensory input and the matching of this input against lexical possibilities that share, or nearly share, its phonological onset. This matching may represent the earliest stage at which cognitive aging begins to exert an influence on linguistic processing.
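The matching operation described here, narrowing a cohort of lexical candidates as more of the word onset is heard, can be sketched as a simple prefix filter. The mini-lexicon below is invented for illustration, and spelling stands in for phonological form:

```python
def onset_cohort(heard_onset, lexicon):
    """Return the cohort of lexical candidates whose onset (approximated
    here by spelling) matches the portion of the word heard so far."""
    return [word for word in lexicon if word.startswith(heard_onset)]

# Invented mini-lexicon for illustration.
lexicon = ["candle", "candy", "canvas", "cannon", "captain", "table"]

# As more of the word onset is heard, the cohort shrinks toward a single
# candidate, the point at which recognition can occur.
for onset in ["ca", "can", "cand", "candl"]:
    print(onset, onset_cohort(onset, lexicon))
```

On this view, an age difference in gated recognition could arise even with equivalent perceptual encoding if older listeners are slower or less efficient at winnowing such a cohort, particularly when many candidates survive each increment of onset information.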

An additional feature of the present data was the finding that, within the overall positive effects of stimulus expectancy on word recognition, older adults’ recognition was differentially impaired when a relatively large number of semantically congruent words might also be activated by a sentence context. This finding is consistent with the suggestion that the effects of cognitive aging include reduced efficiency in inhibiting potential sources of interference (Lustig et al. 2007). In this case the interference took the form of competition from other words that also fit a given sentential context.

We did not in this experiment have an independent measure of inhibition. Numerous studies, however, have made the case for an inhibition deficit in older adulthood that results in older adults showing a greater negative effect of interference on retaining material in working memory (Hasher & Zacks 1988; Zacks et al. 1999), in visual object recognition (Lindfield et al. 1994), and—relevant to our present interests—in spoken word recognition (Sommers 1996; Sommers & Danielson 1999; Lash et al. 2013; Lash & Wingfield 2014). The present data add to these studies analogous findings for older adults who, in this case, receive their sensory information via a cochlear implant.

It should be noted that, on average, older adults tend to have larger vocabularies than their younger adult counterparts (Nicholas et al. 1997; Verhaeghen 2003). One might thus ask whether a larger vocabulary contributes to older adults’ greater susceptibility to verbal competitors. As previously noted, however, the vocabulary scores of the younger and older adults taking part in the present study were similar. Although the question of whether vocabulary size in general may influence the size of interference effects remains a topic for inquiry, it would not appear to be a factor in the present case.

It is well known that untreated hearing loss can interfere with effective communication and quality of life and is associated with accelerated cognitive decline (Lin 2011; Lin et al. 2013; Humes et al. 2013). Such findings have raised the question of whether fitting severely hearing-impaired older adults with cochlear implants will not only improve social activity and quality of life but may also yield a positive impact on cognitive function (Mosnier et al. 2015). It is thus encouraging that older adults, as well as their younger adult counterparts, can show impressive adaptation to the novel signal produced by cochlear implants, culminating in good speech intelligibility (Friedland et al. 2010; Budenz et al. 2011; Roberts et al. 2013). Our present results add to this picture by demonstrating the importance of taking into account sensory-cognitive interactions, both positive effects (in this case, effective use of top-down linguistic context as an aid to word recognition) and negative effects (interference deriving from an age-related inhibition deficit) that may underlie older adults’ performance on word recognition with cochlear implants.

Acknowledgments

The authors are grateful to Andrew Oxenham and Heather Kreft for their assistance with recruiting participants. This work was supported by NIH grants R01 AG019714 from the National Institute on Aging (A.W.) and R01 DC012300 (Y-Y.K.) from the National Institute on Deafness and Other Communication Disorders, and NIH training grants T32 AG000204 (N.M.A.) and T32 NS007292 (E.A.). Portions of this article were presented at the 43rd Annual Scientific and Technology Meeting of the American Auditory Society, Scottsdale, Arizona; and the 6th Aging and Speech Communication Research Conference, Bloomington, Indiana.

Sources of Funding: This research was funded by NIH grants NIA R01 AG019714 (A.W.) and NIDCD R01 DC012300 (Y-Y.K.).

This work was prepared while Ying-Yee Kong was employed at Northeastern University. The opinions expressed in this article are the authors’ own and do not reflect the view of the National Institutes of Health, the Department of Health and Human Services, or the United States government.

Footnotes

*

In the original use of the technique, an electronic gate was opened at the start of a recorded word and closed after a set time as measured from the acoustic beginning of the word (Grosjean 1980). Although computer speech editing is now used to control the amount of word onset information a participant will hear, the term “gating” is still used.
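Gating by digital truncation, as described in this footnote, can be sketched as successive cuts of a sampled waveform. The 44.1-kHz sampling rate and the silent 220-ms "word" below are assumptions for illustration only:

```python
import numpy as np

SAMPLE_RATE = 44_100          # samples per second (illustrative assumption)
GATE_MS = 50                  # gate increment, as used in the study

def gated_presentations(waveform, gate_ms=GATE_MS, rate=SAMPLE_RATE):
    """Yield successive word-onset gates: the first 50 ms, 100 ms, ...
    of the recorded word, emulating electronic gating by truncation.
    The final gate is the whole word if its duration is not a multiple
    of the gate size."""
    samples_per_gate = int(rate * gate_ms / 1000)
    for end in range(samples_per_gate, len(waveform) + samples_per_gate,
                     samples_per_gate):
        yield waveform[:end]

word = np.zeros(int(0.22 * SAMPLE_RATE))   # a 220-ms "word" of silence
gates = list(gated_presentations(word))
print([len(g) / SAMPLE_RATE * 1000 for g in gates])  # gate durations in ms
```

Each successive element of `gates` would be played until the listener correctly identifies the word, with the duration of the last gate played taken as the recognition threshold.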

Conflicts of Interest: There are no conflicts of interest to report.

References

  1. Balota DA, Yap MJ, Cortese MJ, et al. The English Lexicon project. Behav Res Meth. 2007;39:445–459. doi: 10.3758/bf03193014. [DOI] [PubMed] [Google Scholar]
  2. Başkent D, Clarke J, Pals C, et al. Cognitive compensation of speech perception with hearing impairment, cochlear implants, and aging: How and to what degree can it be achieved? Trends Hear. 2016;20 doi: 10.1177/2331216516670279. [DOI] [Google Scholar]
  3. Benichov J, Cox LC, Tun PA, et al. Word recognition within a linguistic context: Effects of age, hearing acuity, verbal ability, and cognitive function. Ear Hear. 2012;33:250–256. doi: 10.1097/AUD.0b013e31822f680f. [DOI] [PMC free article] [PubMed] [Google Scholar]
  4. Bhargava P, Gaudrain E, Başkent D. Top-down restoration of speech in cochlear-implant users. Hear Res. 2014;309:113–123. doi: 10.1016/j.heares.2013.12.003. [DOI] [PubMed] [Google Scholar]
  5. Bilger RC, Nuetzel MJ, Rabinowitz WM, Rzeckowski C. Standardization of a test of speech perception in noise. J Speech Hear Res. 1984;27:32–48. doi: 10.1044/jshr.2701.32. [DOI] [PubMed] [Google Scholar]
  6. Black JW. Accompaniments of word intelligibility. J Speech Dis. 1952;17:409–418. doi: 10.1044/jshd.1704.409. [DOI] [PubMed] [Google Scholar]
  7. Block CK, Baldwin CL. Cloze probability and completion norms for 498 sentences: Behavioral and neural validation using event-related potentials. Behav Res Meth. 2010;42:665–670. doi: 10.3758/BRM.42.3.665. [DOI] [PubMed] [Google Scholar]
  8. Bloom PA, Fischler I. Completion norms for 329 sentence contexts. Mem Cognit. 1980;8:631–642. doi: 10.3758/bf03213783. [DOI] [PubMed] [Google Scholar]
  9. Bruce DJ. The effects of listeners’ anticipation on the intelligibility of heard speech. Lang Speech. 1958;1:79–97. [Google Scholar]
  10. Bruning JL, Kintz BL. Computational Handbook of Statistics. 4th. Boston, MA: Allyn & Bacon; 1997. [Google Scholar]
  11. Budenz CL, Cosetti MK, Coelho DH, et al. The effects of cochlear implants on speech perception in older adults. J Am Geriatrics Soc. 2011;59:446–453. doi: 10.1111/j.1532-5415.2010.03310.x. [DOI] [PubMed] [Google Scholar]
  12. Cerella J. Generalized slowing and Brinley plots. J Gerontol B Psychol Sci Soc Sci. 1994;49:65–71. doi: 10.1093/geronj/49.2.p65. [DOI] [PubMed] [Google Scholar]
  13. Clarke J, Başkent D, Gaudrain E. Pitch and spectral resolution: a systematic comparison of bottom-up cues for top-down repair of degraded speech. J Acoust Soc Am. 2016;139:395–405. doi: 10.1121/1.4939962. [DOI] [PubMed] [Google Scholar]
  14. Cohen G, Faulkner D. Word recognition: Age differences in contextual facilitation effects. Brit J Psychol. 1983;74:239–251. doi: 10.1111/j.2044-8295.1983.tb01860.x. [DOI] [PubMed] [Google Scholar]
  15. Cox LC, McCoy SL, Tun PA, et al. Monotic auditory processing disorder tests in the older adult population. J Am Acad Audiol. 2008;19:293–308. doi: 10.3766/jaaa.19.4.3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Craig CH, Kim BW, Rhyner PMP, Chirillo TKB. Effects of word predictability, child development, and aging on time-gated speech recognition performance. J Speech Lang Hear Res. 1993;36(4):832–841. doi: 10.1044/jshr.3604.832. [DOI] [PubMed] [Google Scholar]
  17. DeCaro R, Peelle JE, Grossman M, et al. The two sides of sensory-cognitive interactions: effects of age, hearing acuity, and working memory span on sentence comprehension. Front Psychol. 2016;7:236. doi: 10.3389/fpsyg.2016.00236. [DOI] [PMC free article] [PubMed] [Google Scholar]
  18. Dillon MT, Buss E, Adunka MC, et al. Long-term speech perception in elderly cochlear implant users. JAMA Otolaryngol Head Neck Surg. 2013;139:279–283. doi: 10.1001/jamaoto.2013.1814. [DOI] [PubMed] [Google Scholar]
  19. Dubno JR, Ahlstrom JB, Horwitz AR. Use of context by young and aged adults with normal hearing. J Acoust Soc Am. 2000;107:538–546. doi: 10.1121/1.428322. [DOI] [PubMed] [Google Scholar]
  20. Eisenberg LS, Martinez AS, Holowecky SR, et al. Recognition of lexically controlled words and sentences by children with normal hearing and children with cochlear implants. Ear Hear. 2002;23:450–462. doi: 10.1097/00003446-200210000-00007. [DOI] [PubMed] [Google Scholar]
  21. Fishman KE, Shannon RV, Slattery WH. Speech recognition as a function of the number of electrodes used in the SPEAK cochlear implant speech processor. J Speech Lang Hear Res. 1997;40:1201–1215. doi: 10.1044/jslhr.4005.1201. [DOI] [PubMed] [Google Scholar]
  22. Friesen LM, Shannon RV, Başkent D, et al. Speech recognition in noise as a function of the number of spectral channels: comparison of acoustic hearing and cochlear implants. J Acoust Soc Am. 2001;110:1150–1163. doi: 10.1121/1.1381538. [DOI] [PubMed] [Google Scholar]
  23. Friedland DR, Runge-Samuelson C, Baig H, et al. Case-control analysis of cochlear implant performance in elderly patients. Arch Otolaryngol. 2010;136:432–438. doi: 10.1001/archoto.2010.57. [DOI] [PubMed] [Google Scholar]
  24. Gifford RH, Dorman MF, Shallop JK, et al. Evidence for the expansion of adult cochlear implant candidacy. Ear Hear. 2010;31:186–194. doi: 10.1097/AUD.0b013e3181c6b831. [DOI] [PMC free article] [PubMed] [Google Scholar]
  25. Grant KW, Seitz PF. The recognition of isolated words and words in sentences: Individual variability in the use of sentence context. J Acoust Soc Am. 2000;107:1000–1011. doi: 10.1121/1.428280. [DOI] [PubMed] [Google Scholar]
  26. Grosjean F. Spoken word recognition processes and the gating paradigm. Percept Psychophys. 1980;28:267–283. doi: 10.3758/bf03204386. [DOI] [PubMed] [Google Scholar]
  27. Grosjean F. The recognition of words after their acoustic offset: Evidence and implications. Percept Psychophys. 1985;38:299–310. doi: 10.3758/bf03207159. [DOI] [PubMed] [Google Scholar]
  28. Grosjean F. Gating. Lang Cognitive Proc. 1996;11:597–604. [Google Scholar]
  29. Hasher L, Zacks RT. Working memory, comprehension and aging: A review and a new view. Psychol Learn Motiv. 1988;22:193–225. [Google Scholar]
  30. Howes D. On the relation between the intelligibility and frequency of occurrence of English words. J Acoust Soc Am. 1957;29:296–305. [Google Scholar]
  31. Howes D, Solomon RL. Visual duration threshold as a function of word probability. J Exp Psychol. 1951;41:401–410. doi: 10.1037/h0056020. [DOI] [PubMed] [Google Scholar]
  32. Humes LE, Burk MH, Coughlin MP, Busey TA, Strauser LE. Auditory speech recognition and visual text recognition in younger and older adults: Similarities and differences between modalities and the effects of presentation rate. J Speech Lang Hear Res. 2007;50:283–303. doi: 10.1044/1092-4388(2007/021). [DOI] [PubMed] [Google Scholar]
  33. Humes LE, Busey TA, Craig J, et al. Are age-related changes in cognitive function driven by age-related changes in sensory processing? Atten Percept Psychophys. 2013;75:508–524. doi: 10.3758/s13414-012-0406-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  34. Janse E, Ernestus M. The roles of bottom-up and top-down information in the recognition of reduced speech: Evidence from listeners with normal and impaired hearing. J Phonetics. 2011;39:330–343. [Google Scholar]
  35. Kalikow DN, Stevens KN, Elliott LL. Development of a test of speech intelligibility in noise using sentence material with controlled word predictability. J Acoust Soc Am. 1977;61:1337–1351. doi: 10.1121/1.381436. [DOI] [PubMed] [Google Scholar]
  36. Kidd GR, Humes LE. Effects of age and hearing loss on the recognition of interrupted words in isolation and in sentences. J Acoust Soc Am. 2012;131:1434–1448. doi: 10.1121/1.3675975. [DOI] [PMC free article] [PubMed] [Google Scholar]
  37. Kong YY, Donaldson G, Somarowthu A. Effects of contextual cues on speech recognition in simulated electric-acoustic stimulation. J Acoust Soc Am. 2015;137:2846–2857. doi: 10.1121/1.4919337. [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. Lahar C, Tun PA, Wingfield A. Sentence-final word completion norms for young, middle-aged, and older adults. J Gerontol Psychol Sci. 2004;59B:7–10. doi: 10.1093/geronb/59.1.p7. [DOI] [PubMed] [Google Scholar]
  39. Lash A, Rogers CS, Zoller A, et al. Expectation and entropy in spoken word recognition: Effects of age and hearing acuity. Exp Aging Res. 2013;39:235–253. doi: 10.1080/0361073X.2013.779175. [DOI] [PMC free article] [PubMed] [Google Scholar]
  40. Lash A, Wingfield A. A Bruner-Potter effect in audition? Spoken word recognition in adult aging. Psychol Aging. 2014;29:907–912. doi: 10.1037/a0037829. [DOI] [PMC free article] [PubMed] [Google Scholar]
  41. Lin FR. Hearing loss and cognition among older adults in the United States. J Gerontol Med Sci. 2011;66A:1131–1136. doi: 10.1093/gerona/glr115. [DOI] [PMC free article] [PubMed] [Google Scholar]
  42. Lin FR, Chien WW, Li L, et al. Cochlear implants in older adults. Medicine. 2012;91:229–241. doi: 10.1097/MD.0b013e31826b145a. [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Lin FR, Yaffe K, Xia J, et al. Hearing loss and cognitive decline in older adults. JAMA Int Med. 2013;173:293–299. doi: 10.1001/jamainternmed.2013.1868. [DOI] [PMC free article] [PubMed] [Google Scholar]
  44. Lindblom B, Brownlee S, Davis B, et al. Speech transforms. Speech Commun. 1992;11:357–368. [Google Scholar]
  45. Lindfield KC, Wingfield A, Bowles NL. Identification of fragmented pictures under ascending versus fixed presentation in young and elderly adults: Evidence for the inhibition-deficit hypothesis. Aging Cognition. 1994;1:282–291. [Google Scholar]
  46. Lindfield KC, Wingfield A, Goodglass H. The contribution of prosody to spoken word recognition. Appl Psycholinguist. 1999;20:395–405. [Google Scholar]
  47. Loizou PC. Signal-processing techniques for cochlear implants. IEEE Eng Med Biol Mag. 1999;18:34–46. doi: 10.1109/51.765187. [DOI] [PubMed] [Google Scholar]
  48. Luce PA, Pisoni DB. Recognizing spoken words: The neighborhood activation model. Ear Hear. 1998;19:1–36. doi: 10.1097/00003446-199802000-00001. [DOI] [PMC free article] [PubMed] [Google Scholar]
  49. Lund K, Burgess C. Producing high-dimensional semantic spaces from lexical co-occurrence. Behav Res Meth Instr. 1996;28:203–208. [Google Scholar]
  50. Lustig C, Hasher L, Zacks RT. Inhibitory deficit theory: Recent developments in a “new view”. In: Gorfein DS, MacLeod CM, editors. The Place of Inhibition in Cognition. Washington, DC: American Psychological Association; 2007. pp. 145–162. [Google Scholar]
  51. Madden DJ. Adult age differences in the effects of sentence context and stimulus degradation during visual word recognition. Psychol Aging. 1988;3:167–172. doi: 10.1037//0882-7974.3.2.167. [DOI] [PubMed] [Google Scholar]
  52. Magnuson JS, Tanenhaus MK, Hogan EM. Subcategorical mismatches and the time course of lexical access: Evidence for lexical competition. Lang Cognitive Proc. 2001;16:507–534. [Google Scholar]
  53. Marslen-Wilson WD. Function and process in spoken word recognition. In: Bouma H, Bouwhuis DG, editors. Attention and performance X. Hillsdale, NJ: Erlbaum; 1984. [Google Scholar]
  54. Marslen-Wilson WD, Zwitserlood P. Accessing spoken words: The importance of word onsets. J Exp Psychol Hum Percept Perform. 1989;15:576–585. [Google Scholar]
  55. McCabe DP, Roediger HL, McDaniel MA, et al. The relationship between working memory capacity and executive functioning: Evidence for a common executive attention construct. Neuropsychology. 2010;24:222–243. doi: 10.1037/a0017619. [DOI] [PMC free article] [PubMed] [Google Scholar]
  56. McClelland JL, Elman JL. The TRACE model of speech recognition. Cognitive Psychol. 1986;18:1–86. doi: 10.1016/0010-0285(86)90015-0. [DOI] [PubMed] [Google Scholar]
  57. McMurray B, Farris-Trimble A, Seedorff M, et al. The effect of residual acoustic hearing and adaptation to uncertainty on speech perception in cochlear implant users: evidence from eye-tracking. Ear Hear. 2016;37:e37–e51. doi: 10.1097/AUD.0000000000000207. [DOI] [PMC free article] [PubMed] [Google Scholar]
  58. Miller GA, Heise GA, Lichten W. The intelligibility of speech as a function of the context of test materials. J Exp Psychol. 1951;41:329–335. doi: 10.1037/h0062491. [DOI] [PubMed] [Google Scholar]
  59. Molis MR, Kampel SD, McMillan GP, Gallun FJ, Dann SM, Konrad-Martin D. Effects of hearing and aging on sentence-level time-gated word recognition. J Speech Lang Hear Res. 2015;58:481–496. doi: 10.1044/2015_JSLHR-H-14-0098. [DOI] [PMC free article] [PubMed] [Google Scholar]
  60. Moradi S, Lidestam B, Hällgren M, Rönnberg J. Gated auditory speech perception in elderly hearing aid users and elderly normal-hearing individuals: Effects of hearing impairment and cognitive capacity. Trends Hear. 2014;18:1–12. doi: 10.1177/2331216514545406. [DOI] [PMC free article] [PubMed] [Google Scholar]
  61. Morton J. A preliminary functional model for language behavior. Int Audiol. 1964a;3:1–9. [Google Scholar]
  62. Morton J. A model for continuous language behavior. Lang Speech. 1964b;7:450–470. [Google Scholar]
  63. Morton J. Interaction of information in word recognition. Psychol Rev. 1969;76:165–178. [Google Scholar]
  64. Morton J. Facilitation in word recognition: experiments causing change in the logogen model. In: Kolers PA, Wrolstad ME, Bouma H, editors. Processing Visual Language. New York: Plenum Press; 1979. pp. 259–268. [Google Scholar]
  65. Mosnier I, Bebear JP, Marx M, et al. Improvement of cognitive function after cochlear implantation in elderly patients. JAMA Otolaryngol Head Neck Surg. 2015 doi: 10.1001/jamaoto.2015.129. [DOI] [PubMed] [Google Scholar]
  66. Nicholas M, Barth C, Obler LK, Au R, Albert ML. Naming in normal aging and dementia of the Alzheimer’s type. In: Goodglass H, Wingfield A, editors. Anomia: Neuroanatomical and Cognitive Correlates. San Diego, CA: Academic Press; 1997. pp. 166–188. [Google Scholar]
  67. Nittrouer S, Boothroyd A. Context effects in phoneme and word recognition by young children and older adults. J Acoust Soc Am. 1990;87:2705–2715. doi: 10.1121/1.399061. [DOI] [PubMed] [Google Scholar]
  68. Nittrouer S, Tarr E, Bolster V, et al. Very low-frequency signals support perceptual organization of implant-simulated speech for adults and children. Int J Audiol. 2014;53:270–284. doi: 10.3109/14992027.2013.871649. [DOI] [PMC free article] [PubMed] [Google Scholar]
  69. Nooteboom SG, Doodeman GJN. Speech quality and the gating paradigm. In: van den Broake MPR, Cohen A, editors. Proceedings of the Tenth International Congress of Phonetic Sciences. Dordrecht: Foris; 1984. [Google Scholar]
  70. Oh SH, Donaldson GS, Kong YY. Top-down processes in simulated electric-acoustic hearing: the effect of linguistic context on bimodal benefit for temporally interrupted speech. Ear Hear. 2016;37:582–592. doi: 10.1097/AUD.0000000000000298. [DOI] [PMC free article] [PubMed] [Google Scholar]
  71. Perry AR, Wingfield A. Contextual encoding by young and elderly adults as revealed by cued and free recall. Aging Cogn. 1994;1:120–139. [Google Scholar]
  72. Peterson GE, Lehiste I. Revised CNC lists for auditory tests. J Speech Hear Dis. 1962;27:62–70. doi: 10.1044/jshd.2701.62. [DOI] [PubMed] [Google Scholar]
  73. Pichora-Fuller MK. Use of supportive context by younger and older adult listeners: Balancing bottom-up and top-down information processing. Intl J Audiol. 2008;47:S72–S82. doi: 10.1080/14992020802307404. [DOI] [PubMed] [Google Scholar]
  74. Pichora-Fuller MK, Schneider BA, Daneman M. How young and old adults listen to and remember speech in noise. J Acoust Soc Am. 1995;97:593–608. doi: 10.1121/1.412282. [DOI] [PubMed] [Google Scholar]
  75. Pollack I, Pickett JM. The intelligibility of excerpts from conversation. Lang Speech. 1963;6:165–171. [Google Scholar]
  76. Roberts DS, Lin HW, Herrmann BS, Lee DJ. Differential cochlear implant outcomes in older adults. Laryngoscope. 2013;123:1952–1956. doi: 10.1002/lary.23676. [DOI] [PubMed] [Google Scholar]
  77. Rogers CS, Jacoby LL, Sommers MS. Frequent false hearing by older adults: The role of age differences in metacognition. Psychol Aging. 2012;27:33–45. doi: 10.1037/a0026231. [DOI] [PMC free article] [PubMed] [Google Scholar]
  78. Rogers CS, Wingfield A. Stimulus-independent semantic bias misdirects word recognition in older adults. J Acoust Soc Am. 2015;138:EL26–EL30. doi: 10.1121/1.4922363. http://dx.doi.org/10.1121/1.4922363. [DOI] [PMC free article] [PubMed] [Google Scholar]
  79. Rönnberg J, Lunner T, Zekveld A, et al. The Ease of Language Understanding (ELU) model: theoretical, empirical, and clinical advances. Front Syst Neurosci. 2013;7:31. doi: 10.3389/fnsys.2013.00031. [DOI] [PMC free article] [PubMed] [Google Scholar]
  80. Salthouse TA. The aging of working memory. Neuropsychology. 1994;8:535–543. [Google Scholar]
  81. Salthouse TA. The processing-speed theory of adult age differences in cognition. Psychol Rev. 1996;103:403–428. doi: 10.1037/0033-295x.103.3.403. [DOI] [PubMed] [Google Scholar]
  82. Shannon CE. A mathematical theory of communication. Bell Syst Tech J. 1948;27:379–423. 623–656. [Google Scholar]
  83. Shannon CE, Weaver W. The Mathematical Theory of Communication. Urbana, IL: University of Illinois Press; 1949. [Google Scholar]
  84. Sheldon S, Pichora-Fuller MK, Schneider BA. Priming and sentence context support listening to noise-vocoded speech by younger and older adults. J Acoust Soc Am. 2008;123:489–499. doi: 10.1121/1.2783762. [DOI] [PubMed] [Google Scholar]
  85. Sommers MS. The structural organization of the mental lexicon and its contribution to age-related declines in spoken-word recognition. Psychol Aging. 1996;11:333–341. doi: 10.1037//0882-7974.11.2.333. [DOI] [PubMed] [Google Scholar]
  86. Sommers MS, Danielson SM. Inhibitory processes and spoken word recognition in young and older adults: The interaction of lexical competition and semantic context. Psychol Aging. 1999;14:458–472. doi: 10.1037//0882-7974.14.3.458. [DOI] [PubMed] [Google Scholar]
  87. Stanovich KE, Nathan RG, West R, et al. Children’s word recognition in context: Spreading activation, expectancy, and modularity. Child Dev. 1985;56:1418–1428. [Google Scholar]
  88. Tyler LK. The structure of the initial cohort: evidence from gating. Percept Psychophys. 1984;36:417–427. doi: 10.3758/bf03207496. [DOI] [PubMed] [Google Scholar]
  89. Taler V, Aaron GP, Steinmetz LG, Pisoni DB. Lexical neighborhood density effects on spoken word recognition and production in healthy aging. J Gerontol Psychol Sci. 2010;65B:551–560. doi: 10.1093/geronb/gbq039. [DOI] [PMC free article] [PubMed] [Google Scholar]
  90. Taylor WL. “Cloze” procedure: A new tool for measuring readability. Journalism Quart. 1953;30:415–433. [Google Scholar]
  91. Treisman AM. Effect of verbal context on latency of word selection. Nature. 1965;206:218–219. doi: 10.1038/206218a0. [DOI] [PubMed] [Google Scholar]
  92. Tulving E, Gold C. Stimulus information and contextual information as determinants of tachistoscopic recognition for words. J Exp Psychol. 1963;66:319–327. doi: 10.1037/h0048802. [DOI] [PubMed] [Google Scholar]
  93. Tun PA, McCoy S, Wingfield A. Aging, hearing acuity, and the attentional costs of effortful listening. Psychol Aging. 2009;24:761–766. doi: 10.1037/a0014802. [DOI] [PMC free article] [PubMed] [Google Scholar]
  94. Van Engen KJ, Peelle JE. Listening effort and accented speech. Front Hum Neurosci. 2014 doi: 10.3389/fnhum.2014.00577. http://dx.doi.org/10.3389/fnhum.2014.00577. [DOI] [PMC free article] [PubMed]
  95. van Rooij JCGM, Plomp R. The effect of linguistic entropy on speech perception in noise in young and elderly listeners. J Acoust Soc Am. 1991;90:2985–2991. doi: 10.1121/1.401772. [DOI] [PubMed] [Google Scholar]
  96. Verhaeghen P. Aging and vocabulary score: a meta-analysis. Psychol Aging. 2003;18:332–339. doi: 10.1037/0882-7974.18.2.332. [DOI] [PubMed] [Google Scholar]
  97. Warren P, Marslen-Wilson WD. Continuous uptake of acoustic cues in spoken word recognition. Percept Psychophys. 1987;41:262–275. doi: 10.3758/bf03208224. [DOI] [PubMed] [Google Scholar]
  98. Wayland SC, Wingfield A, Goodglass H. Recognition of isolated words: The dynamics of cohort reduction. Appl Psycholinguist. 1989;10:475–487. [Google Scholar]
  99. Wilson BS, Dorman MF. Cochlear implants: current designs and future possibilities. J Rehab Res Dev. 2008a;45:695–730. doi: 10.1682/jrrd.2007.10.0173. [DOI] [PubMed] [Google Scholar]
  100. Wilson BS, Dorman MF. Cochlear implants: a remarkable past and a brilliant future. Hear Res. 2008b;242:3–21. doi: 10.1016/j.heares.2008.06.005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  101. Wilson RH, McArdle RA, Smith SL. An evaluation of the BKB-SIN, HINT, QuickSin, and WIN materials on listeners with normal hearing and listeners with hearing loss. J Speech Lang Hear Res. 2007;50:844–856. doi: 10.1044/1092-4388(2007/059). [DOI] [PubMed] [Google Scholar]
  102. Wingfield A, Aberdeen JS, Stine EAL. Word onset gating and linguistic context in spoken word recognition by young and elderly adults. J Gerontol Psychol Sci. 1991;46:127–129. doi: 10.1093/geronj/46.3.p127. [DOI] [PubMed] [Google Scholar]
  103. Wingfield A, Alexander AH, Cavigelli S. Does memory constrain utilization of top-down information in spoken word recognition? Evidence from normal aging. Lang Speech. 1994;37:221–235. doi: 10.1177/002383099403700301. [DOI] [PubMed] [Google Scholar]
  104. Wingfield A, Goodglass H, Lindfield KC. Word recognition from acoustic onsets and acoustic offsets: Effects of cohort size and syllabic stress. Appl Psycholinguist. 1997;18:85–100. [Google Scholar]
  105. Wingfield A, Lindfield KC, Goodglass H. Effects of age and hearing sensitivity on the use of prosodic information in spoken word recognition. J Speech Lang Hear Res. 2000;43:915–925. doi: 10.1044/jslhr.4304.915. [DOI] [PubMed] [Google Scholar]
  106. Wingfield A, Peelle JE, Grossman M. Speech rate and syntactic complexity as multiplicative factors in speech comprehension by young and older adults. J Aging Neuropsychol Cognit. 2003;10:310–322. [Google Scholar]
  107. Wingfield A, Stine-Morrow EAL. Language and speech. In: Craik FIM, Salthouse TA, editors. Handbook of Aging and Cognition. 2nd. Mahwah, NJ: Erlbaum; 2000. pp. 359–416. [Google Scholar]
  108. Winn M. Rapid release from listening effort resulting from semantic context, and effects of spectral degradation and cochlear implants. Trends Hear. 2016;20 doi: 10.1177/2331216516669723. [DOI] [PMC free article] [PubMed] [Google Scholar]
  109. Zachary RA. Shipley Institute of Living Scale: Revised Manual. Los Angeles, CA: Western Psychological Services; 1991. [Google Scholar]
  110. Zacks RT, Hasher L, Li KZH. Human memory. In: Craik FIM, Salthouse TA, editors. Handbook of Aging and Cognition. Mahwah, NJ: Erlbaum; 1999. pp. 200–230. [Google Scholar]
