The Journal of the Acoustical Society of America
2015 Jul 9;138(1):EL26–EL30. doi: 10.1121/1.4922363

Stimulus-independent semantic bias misdirects word recognition in older adults

Chad S Rogers 1,a), Arthur Wingfield 1
PMCID: PMC4499053  PMID: 26233056

Abstract

Older adults' normally adaptive use of semantic context to aid in word recognition can have the negative consequence of causing misrecognitions, especially when the word actually spoken sounds similar to a word that more closely fits the context. Word-pairs were presented to young and older adults, with the second word of the pair masked by multi-talker babble varying in signal-to-noise ratio. Results confirmed older adults' greater tendency to misidentify words based on their semantic context compared to the young adults, and to do so with a higher level of confidence. This age difference was unaffected by differences in the relative level of acoustic masking.

1. Introduction

A basic principle of perception is that the more probable a spoken word, the less sensory information will be needed for its correct identification (Morton, 1969). A correlate of this principle is that a word with a similar sound to a highly probable word within a semantic context will often be misidentified as that word—often with considerable confidence (Rogers et al., 2012). This “false hearing” is a type of illusory perception, similar to phonemic restoration except operating at the word level (cf. Samuel, 1981; Warren, 1970).

Rogers et al. (2012) presented young and older adults with recorded word-pairs, with the second word of the pair presented in background noise. A single signal-to-noise ratio (SNR) was used, adjusted to each participant's 50% Speech Reception Threshold (SRT) to control for age differences in hearing acuity. The participants' task was to say aloud the second, noise-masked word (the target word). The first word (the prime word) could be unrelated to the target (neutral prime condition; e.g., Jaw – PASS), a semantic associate of the target word (semantic prime condition; e.g., Row – BOAT), or a semantic associate of a similar sounding target word (semantic lure condition; e.g., Row – GOAT), thus putting the semantic context in conflict with successful perception. Interestingly, older adults in the semantic lure condition were significantly more likely than young adults to misidentify a target word as a similar sounding high associate of the prime word (Rogers et al., 2012).

A critical concern raised by such results is whether older adults' higher incidence of semantically driven misrecognitions reflects a flexible response to an aging auditory system, or whether the ordinarily adaptive use of semantic context to support word recognition leads to a maladaptive over-use of context. In addressing this concern, we conducted a direct test to determine whether the incidence of older adults' semantic-based misrecognitions varies with, or is independent of, the acoustic clarity of a stimulus word. This was accomplished by presenting word-pairs to young and older adults in a similar manner to that of Rogers et al. (2012), but with target words acoustically masked at several different SNRs relative to each individual's 50% SRT (the only SNR level used by Rogers et al., 2012).

Our special interest was the recognition accuracy of target words in a semantic lure condition and the types of perceptual errors young and older adults produce as the acoustic clarity of the target word is systematically increased. To the extent that older adults' perceptual decisions are more heavily influenced by a word's semantic context than young adults', one would expect to see the older adults differentially more likely to be lured into misrecognitions due to a target word's semantic association with the prime word. The experimental question was whether the magnitude of this difference, and participants' degree of confidence in their recognition responses, would hold to the same degree for young and older adults across different levels of stimulus clarity.

2. Method

2.1. Participants

Participants were 15 university undergraduates (ages 18–21; M = 18.93 years) and 15 healthy older adults (ages 65–85; M = 74.67 years). Prior to the main experiment we determined the minimal SNR that allowed for 50% open-set identification accuracy for monosyllabic words in six-talker babble. The older adults' SRTs were higher than the young adults' [older adult M = 9.67, standard deviation (SD) = 5.16; young adult M = 5.27, SD = 1.98; t(28) = 3.08, p < 0.01]. Listeners' threshold data were used to adjust SNRs to equate the young and older listeners' baseline performance in a neutral prime condition.
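The reported t value can be reproduced from these summary statistics alone. A minimal Python sketch of the standard pooled-variance independent-samples t test (illustrative arithmetic, not the authors' analysis code):

```python
import math

# Reported summary statistics (n = 15 per group)
n = 15
older_m, older_sd = 9.67, 5.16   # older adults' 50% SRT (dB SNR)
young_m, young_sd = 5.27, 1.98   # young adults' 50% SRT (dB SNR)

# Pooled-variance independent-samples t test
sp2 = ((n - 1) * older_sd**2 + (n - 1) * young_sd**2) / (2 * n - 2)
se = math.sqrt(sp2 * (1 / n + 1 / n))
t = (older_m - young_m) / se
df = 2 * n - 2

print(round(t, 2), df)  # prints: 3.08 28, matching the reported t(28) = 3.08
```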

2.2. Stimuli and procedures

The stimuli consisted of 234 prime – target pairs. Of these, each participant heard equal numbers in neutral prime (e.g., Jaw – PASS), semantic prime (e.g., Row – BOAT), and semantic lure conditions (e.g., Row – GOAT). Semantic associations were taken from published norms (Nelson et al., 2004; M forward association strength of primes to targets in the association conditions = 0.49). Target words in all conditions were balanced for Hyperspace Analog to Language (HAL) word frequency, neighborhood density, and frequency distributions of phonological neighbors. All words were consonant-vowel-consonant words, recorded by an adult speaker of American English with no heavy accent, and normalized for root mean square intensity. Targets for the semantic prime and semantic lure conditions differed only by place of articulation of one consonant (e.g., BOAT vs GOAT). Each word was recorded in isolation with consistent pitch-accent type and stress.

Prime words were always presented in the clear at 65 dB sound pressure level, followed by the target word masked by six-talker babble. The participant's task was to say aloud the target word after each word-pair had been presented, and to rate confidence in their identification on a scale from zero to 100%. Participants were explicitly told to respond only on the basis of what they heard and not on the basis of semantic association between words. Prior to the main experiment participants received six practice trials, designed to familiarize them with the task. None of these words was used in the main experiment.
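The paper does not describe the mixing procedure itself, but masking a target at a given SNR amounts to scaling the babble relative to the target's RMS level. A minimal sketch of that standard dB relation (Python with NumPy; the function name and signature are illustrative assumptions, not from the study):

```python
import numpy as np

def mix_at_snr(target, babble, snr_db):
    """Scale `babble` so the target-to-babble RMS ratio equals `snr_db`
    (the standard 20*log10 dB relation), then add it to the target.
    Both inputs are float arrays of equal length. Illustrative only."""
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    gain = rms(target) / (rms(babble) * 10 ** (snr_db / 20))
    return target + gain * babble
```

Lowering `snr_db` raises the babble gain, so "heavy noise" simply corresponds to a smaller SNR value.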

The three levels of noise used to mask target words were the SNR that had yielded that individual's 50% SRT in the pre-test (medium noise), 4 dB below that level (heavy noise), and 4 dB above that level (light noise). Of the 78 word-pairs in each condition, 26 were given at each of the three SNRs for that individual. Each word-pair was heard only once by any participant, with word-pair and SNRs counterbalanced across participants and age groups such that, by the end of the experiment, each word-pair had been heard at each SNR by young and older adults an equal number of times. Word-pair conditions and SNRs were intermixed in presentation.
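Concretely, the three masking levels follow directly from each participant's pre-test threshold, and the 78 word-pairs per condition split evenly across them. A small sketch (Python; names are illustrative, not from the study):

```python
def snr_conditions(srt_db):
    """Return the three per-participant masking levels described above:
    heavy noise = SRT - 4 dB, medium noise = SRT, light noise = SRT + 4 dB."""
    return {"heavy": srt_db - 4.0, "medium": srt_db, "light": srt_db + 4.0}

# 78 word-pairs per condition, split evenly across the three SNRs
trials_per_snr = 78 // 3  # 26
```

For example, a participant whose 50% SRT fell at 9.67 dB SNR would hear targets at roughly 5.67, 9.67, and 13.67 dB SNR.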

3. Results

3.1. Correct responses

The three upper panels in Fig. 1 show the proportion of target words correctly identified by the young and older adults at each SNR in each of the three conditions. The accuracy scores in the neutral prime condition [panel (a)] demonstrate that the individual SNR adjustments were successful in placing the young and older adults on the same intelligibility baseline for all three SNRs. A two-factor analysis of variance conducted on these data, with age group as a between-subjects variable and SNR as a within-subjects variable, showed a significant main effect of SNR [F(2,56) = 116.10, p < 0.001], confirming the effectiveness of the SNR manipulation. An absence of a main effect of age (p > 0.46) or an Age × SNR interaction (p > 0.52) reflected the effectiveness of the baseline adjustment for both age groups.
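The degrees of freedom reported throughout follow directly from the mixed design (2 age groups of 15 participants, 3 SNR levels). A quick check using the standard df formulas for a two-factor mixed-design ANOVA (illustrative arithmetic, not the authors' code):

```python
a = 2         # between-subjects levels (age group)
b = 3         # within-subjects levels (SNR)
n_per = 15    # participants per group
N = a * n_per

df_between_effect = a - 1            # age main effect
df_between_error = N - a             # subjects within groups
df_within_effect = b - 1             # SNR main effect
df_within_error = (b - 1) * (N - a)  # SNR x subjects-within-groups

print(df_within_effect, df_within_error)    # prints: 2 56 -> matches F(2,56)
print(df_between_effect, df_between_error)  # prints: 1 28 -> matches F(1,28)
```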

Fig. 1.

Upper panels show mean proportion of correct recognitions of target words as a function of acoustic masking when preceded by (a) neutral primes, (b) semantic primes, and (c) semantic lures for young and older adults. Lower panels [(d)–(f)] show listeners' corresponding confidence ratings in their responses. Sample size for each age group was 15. Error bars represent one standard error.

As might be expected, with a semantic prime [panel (b)] the accuracy for both age groups increased to a near-ceiling level, with only a suggestion of a departure between the two groups at the most difficult (−4 dB) SNR. This pattern was seen in a significant main effect of SNR [F(2,56) = 19.34, p < 0.001] without a significant main effect of age (p > 0.42) or an Age × SNR interaction (p > 0.97).

A different picture appears for the critical, semantic lure condition [panel (c)]. The main effect of SNR remains significant [F(2,56) = 80.35, p < 0.001], but in this case a significant main effect of age appears [F(1,28) = 7.23, p < 0.05], reflecting a greater negative effect on the older adults' recognition accuracy when the target word had a phonological similarity to a semantic associate of the prime word. This age difference was constant across SNRs, reflected in the absence of an Age × SNR interaction (p > 0.64).

The lower three panels of Fig. 1 show mean confidence ratings for participants' correct responses in the three word-pair conditions. In the neutral prime condition [panel (d)] both age groups show increasing confidence as the perceptual task became easier with an increasing SNR, with a significant main effect of SNR [F(2,56) = 60.14, p < 0.001]. The suggestion of the young adults being more confident in their correct identifications than the older adults did not reach significance (p > 0.21), nor was there a significant Age × SNR interaction (p > 0.98).

When the target word was a semantic associate of the prime word [panel (e)], listeners' confidence in correct responses also increased with an increase in SNR, showing a significant main effect of SNR [F(2,56) = 15.17, p < 0.001], although with a shallower increase than in the neutral prime condition. Overall, the older adults were more confident in their correct responses than the young adults, confirmed by a significant main effect of age [F(1,28) = 11.85, p < 0.01]. As suggested by the nearly parallel effects of SNR on participant confidence, the Age × SNR interaction was not significant (p > 0.69).

Confidence ratings for correct responses in the semantic lure condition [panel (f)] show both young and older listeners' confidence increasing with increasing SNR, and to roughly the same degree. This pattern resulted in a significant main effect of SNR [F(2,56) = 19.28, p < 0.001] and a marginal main effect of age [F(1,28) = 3.60, p < 0.07], in the absence of a significant Age × SNR interaction (p > 0.31).

3.2. Errors to semantic lures

Virtually all errors fell into one of two categories. Panel (a) in Fig. 2 shows data for the most common type of error: the percentage of errors in the semantic lure condition in which listeners, instead of reporting the target word correctly, gave a semantic associate of the prime word that shared phonology with the target word (e.g., mishearing Row – GOAT as Row – BOAT). This category of errors, which was more common for the older than the young adults, increased in frequency with increased SNR, presumably because listeners gained more phonological information from the target word, thus reinforcing the phonological overlap with the activated associate of the prime word. This pattern produced significant main effects of SNR [F(2,56) = 5.24, p < 0.01] and age [F(1,28) = 9.65, p < 0.01], with no significant Age × SNR interaction (p > 0.61).

Fig. 2.

Upper panels show the mean percentage of misidentifications of target words as a function of stimulus quality in the semantic lure condition that (a) bore a semantic relation to the prime word or (b) that closely approximated the phonology of the target word for young and older adults. Lower panels [(c), (d)] show listeners' corresponding confidence ratings in their responses. Sample size for each age group was 15. Error bars represent one standard error.

This picture is reversed in panel (b), which shows the incidence with which participants' error responses followed the sound of the target word, in spite of its phonological similarity to a semantic associate of the prime word (e.g., hearing Barn – PAY and responding with SAY). These errors made up a small proportion of errors relative to those seen in panel (a), and SNR had a minimal effect on them: the main effect of SNR did not reach significance (p > 0.10). Regardless of SNR, the older adults produced this type of misidentification less often than did the young adults, resulting in a significant main effect of age [F(1,28) = 17.91, p < 0.001] in the absence of a significant Age × SNR interaction (p > 0.64).

Listeners' confidence in their semantic-based errors [panel (c)] tended to increase with increasing SNR, and hence with access to the target word phonology that was also shared with the activated semantic associate of the prime word, making the illusory perception more compelling. This effect was captured by a significant main effect of SNR [F(2,56) = 4.47, p < 0.05]. Importantly, the older adults showed a higher degree of confidence in these misidentifications lured by semantic context than did the young adults, as revealed in a main effect of age [F(1,28) = 4.35, p < 0.05]. This age difference was independent of SNR, as seen in the absence of a significant Age × SNR interaction (p > 0.82).

As suggested by visual inspection of panel (d), confidence ratings for misrecognitions that followed the sound of the target word were low and unaffected by SNR (p > 0.24) or age (p > 0.15), with no significant Age × SNR interaction (p > 0.23).

4. Discussion

Context can influence how acoustic information is perceived in a variety of circumstances, even by young adults in the absence of noise masking (e.g., Kleber and Niebuhr, 2010). The critical condition in the present experiment for examining age differences in context effects was the semantic lure condition, in which a target word was similar in sound to a semantic associate of the prime word. As found by Rogers et al. (2012), our analysis of misidentifications showed older adults to be more prone than young adults to report hearing a word that differed from the target but that was semantically associated with the prime word. They did so, moreover, with significantly greater confidence in these misidentifications than the young adults. It should be noted that these misidentifications were not due to differences in hearing acuity for speech in noise, as SNRs were individually adjusted to equate baseline speech-in-noise intelligibility. This semantic bias had, as its consequence, reduced identification accuracy for target words in the semantic lure condition.

A critical advance in our current experiment was the finding that the age-related increases in semantically driven misidentifications were independent of stimulus clarity. That is, the present results confirm that context has a moderating effect on the impact of stimulus clarity on recognition accuracy and confidence, but we now show that this relationship is age invariant across degrees of stimulus clarity referenced to individual baselines for speech-in-noise intelligibility.

The source of the semantic bias that differentially leads to a greater number of misrecognitions by older adults may be a perceptual “style” developed over years of successful use of semantic context to aid speech recognition in everyday listening in acoustically varied environments (Pichora-Fuller, 2008). Older adults are also less likely than young adults to inhibit a highly accessible response (Jacoby et al., 2005; Lash et al., 2013). This inhibition deficit could well account for the increase in word recognition errors by older adults when a stimulus word has phonetic similarity with a highly probable semantic associate. In either case it has now been shown that older adults' increased incidence of semantically driven misrecognitions relative to their young adult counterparts is more than a simple response to an age difference in ordinary stimulus clarity.

Acknowledgments

This work was supported by NIH Grant No. AG019714 from the National Institute on Aging (A.W.) and Training Grant No. T32 NS007292 (C.S.R.).

References and links

1. Jacoby, L. L., Bishara, A. J., Hessels, S., and Toth, J. P. (2005). “Aging, subjective experience, and cognitive control: Dramatic false remembering by older adults,” J. Exp. Psychol.: General 134(2), 131–148. doi: 10.1037/0096-3445.134.2.131
2. Kleber, F., and Niebuhr, O. (2010). “Semantic context effects on lexical stress and syllable prominence,” in Proceedings of the 5th International Conference of Speech Prosody, Chicago, IL, pp. 1–4.
3. Lash, A., Rogers, C. S., Zoller, A., and Wingfield, A. (2013). “Expectation and entropy in spoken word recognition: Effects of age and hearing acuity,” Exp. Aging Res. 39, 235–253. doi: 10.1080/0361073X.2013.779175
4. Morton, J. (1969). “Interaction of information in word recognition,” Psychol. Rev. 76, 165–178. doi: 10.1037/h0027366
5. Nelson, D. L., McEvoy, C. L., and Schreiber, T. A. (2004). “The University of South Florida free association, rhyme, and word fragment norms,” Behav. Res. Meth. 36, 402–407. doi: 10.3758/BF03195588
6. Pichora-Fuller, K. M. (2008). “Use of supportive context by younger and older adult listeners: Balancing bottom-up and top-down information processing,” Int. J. Audiol. 47, S72–S82. doi: 10.1080/14992020802307404
7. Rogers, C. S., Jacoby, L. L., and Sommers, M. S. (2012). “Frequent false hearing by older adults: The role of age differences in metacognition,” Psychol. Aging 27, 33–45. doi: 10.1037/a0026231
8. Samuel, A. G. (1981). “Phonemic restoration: Insights from a new methodology,” J. Exp. Psychol. Gen. 110, 474–494. doi: 10.1037/0096-3445.110.4.474
9. Warren, R. M. (1970). “Perceptual restoration of missing speech sounds,” Science 167(3917), 392–393. doi: 10.1126/science.167.3917.392
