American Journal of Audiology. 2019 May 14;28(2):369–375. doi: 10.1044/2019_AJA-18-0173

Variations Within Normal Hearing Acuity and Speech Comprehension: An Exploratory Study

Nicole D. Ayasse, Lana R. Penn, Arthur Wingfield
PMCID: PMC6802869; PMID: 31091111

Abstract

Purpose

Many young adults with a mild hearing loss can appear unaware of, or unconcerned about, their loss or its potential effects. A question that has not been raised in prior research is whether slight variability in acuity, even within the range of clinically normal hearing, may have a detrimental effect on the comprehension of spoken sentences, especially sentences that pose an additional cognitive challenge. The purpose of this study was to address this question.

Method

An exploratory analysis was conducted on data from 3 published studies that included young adults, ages 18 to 29 years, with audiometrically normal hearing acuity (pure-tone average ≤ 15 dB HL) who were tested for comprehension of sentences that conveyed their meaning with simpler or more complex linguistic structures. Product–moment correlations were conducted between individuals' hearing acuity and their comprehension accuracy.

Results

A significant correlation appeared between hearing acuity and comprehension accuracy for syntactically complex sentences, but not for sentences with a simpler syntactic structure. A partial correlation confirmed that this relationship held independently of participant age within this relatively narrow age range.

Conclusion

These findings suggest that slight elevations in hearing thresholds, even among young adults who pass a screen for normal hearing, can affect comprehension accuracy for spoken sentences when combined with cognitive demands imposed by sentences that convey their meaning with a complex linguistic structure. These findings support limited resource models of attentional allocation and argue for routine baseline hearing evaluations of young adults with current age-normal hearing acuity.


There has been continuing concern that the high amplification levels common at concert venues and on personal music players may lead to an increased incidence of hearing loss among university-aged young adults (Le Prell, Hensley, Campbell, Hall, & Guire, 2011; Rota-Donahue & Levey, 2016; Shargorodsky, Curhan, Curhan, & Eavey, 2010). When Le Prell et al. tested the hearing of university students who, in a prestudy telephone interview, had expressed the belief that they had normal hearing, they found a number of students who, in fact, had some degree of hearing loss (Le Prell et al., 2011; see also Widén, Holmes, Johnson, Bohlin, & Erlandsson, 2009). Such data suggest that there are students listening to lectures in university classrooms, and going about their daily activities, who are entirely unaware that they have a hearing impairment. A critical question, and one we address here, is whether even small differences in hearing acuity may have consequences for comprehension of speech input.

It is obviously the case that reduced hearing acuity can cause one to miss or mishear words, with detrimental effects on recall or comprehension of what has been heard. As we outline below, however, there is also evidence that a degraded input can have detrimental effects even when word recognition per se is successful, but where perceptual effort has been required for this success.

Some 50 years ago, Patrick Rabbitt (1968) conducted an experiment in which participants with normal hearing were presented with spoken eight-digit lists separated into 2 four-digit sets by a 2-s pause after the first four digits. Participants were asked to listen to and remember both sets of digits but were required to recall only one set. Critical to the experiment, listeners were told which set they were to recall only after both sets had been presented. In one of the conditions of interest, Rabbitt presented the second set of digits masked by background noise, with the noise level adjusted such that the digits could still be recognized, but only with effort. Rabbitt found that the first four digits, even when those digits were presented clearly, were less well recalled when the second four digits were partially masked by noise than when both sets were presented in the clear. Rabbitt suggested that the increased effort required to successfully identify the noise-masked digits may have deprived listeners of processing resources that would otherwise have been available for effectively encoding the first digit set in memory.

This relatively simple memory experiment led to what came to be called an effortfulness hypothesis, with subsequent studies confirming poorer recall for speech that requires listening effort, whether because the speech was masked by noise (Murphy, Craik, Li, & Schneider, 2000; Surprenant, 1999) or because speech presented in the clear was heard by individuals with mild–moderate hearing loss (McCoy et al., 2005; Rabbitt, 1991; van Boxtel et al., 2000). Relevant to our present interests, the perceptual effort attendant on reduced hearing acuity can also interfere with comprehending the meaning of spoken sentences, especially when that meaning is expressed with a complex syntactic structure (Wingfield, McCoy, Peelle, Tun, & Cox, 2006). The crucial feature of the effortfulness effect is that such downstream effects can occur even when it can be demonstrated that the words themselves have been correctly identified.

Findings such as these are consistent with general resource models in cognitive psychology such as that proposed by Kahneman (1973), who postulated a limited pool of attentional resources that must be allocated among concurrent or closely successive tasks. Because resources are limited, a heavy resource demand imposed by one task will leave fewer resources for other tasks. This general principle has been adapted by Pichora-Fuller et al. (2016) to the detrimental effects of listening effort on speech understanding in their “framework for understanding effortful listening” model.

A recent imaging study with participants whose hearing ranged from normal acuity (pure-tone averages [PTAs] ≤ 15 dB HL) to a slight hearing loss (PTAs of 16 to 25 dB HL) showed that, while participants listened to sentences for comprehension, even this modest variation in hearing acuity was associated with distinct patterns of activation in a brain network associated with executive attention (Lee et al., 2018). These imaging findings, combined with Le Prell et al.'s (2011) discovery of university students who were unaware of a hearing loss, motivated us to look closely at the potential consequences of minor variations in hearing acuity for the accuracy of speech comprehension, especially when perceptual effort is combined with the additional cognitive demands required for comprehension of sentences that express their meaning with complex syntax.

Unlike studies in which participants have had some degree of hearing loss, our focus here was specifically on university-aged young adults, all of whom met an audiometric criterion for normal hearing acuity (viz., PTA ≤ 15 dB HL), albeit with some variation from individual to individual within this normal range. A demonstration that individual differences even within this range can affect accuracy in understanding the meaning of linguistically challenging sentences would be a strong test of Rabbitt's effortfulness hypothesis (1968, 1991) and associated limited resource models (Kahneman, 1973; Pichora-Fuller et al., 2016). It would also have implications for public attitudes toward the importance of hearing integrity.

In several past studies, we have asked these questions in the context of older adults with age-related mild–moderate hearing loss using sentences that differed in syntactic complexity (Amichetti, White, & Wingfield, 2016; Ayasse & Wingfield, 2018; DeCaro, Peelle, Grossman, & Wingfield, 2016). In all three of these experiments, however, we also had a comparison group of young adults with normal hearing acuity to serve as a reference against which to compare comprehension accuracy for older adults with hearing loss. We reasoned that combining the data of the young participants with normal hearing from these three experiments would allow us to explore the question of whether small variations in hearing acuity—even among young adults who meet the criterion of normal hearing acuity (PTA ≤ 15 dB HL)—might be reflected in comprehension accuracy when confronted by sentences whose meaning is expressed with complex syntax.

These three studies were selected because (a) they reported audiometric screening for normal hearing for all participants, (b) sentences were presented at a specified normal speech rate, (c) audibility was confirmed by testing, and (d) all three studies (Amichetti et al., 2016; Ayasse & Wingfield, 2018; DeCaro et al., 2016) contrasted sentence types that differed in syntactic complexity. Together, these three studies yielded acuity and sentence comprehension data for 60 young adults with normal hearing for this exploratory analysis. Our specific question was whether accuracy in sentence comprehension would be correlated with hearing acuity, especially for sentences with syntactic structures known to place a heavier demand on cognitive resources for their comprehension.

Method

The data for this analysis were taken from the above-cited three studies published between 2016 and 2018 (Study 1: Amichetti et al., 2016 [Exp. 2]; Study 2: DeCaro et al., 2016; Study 3: Ayasse & Wingfield, 2018).

Audiometric Testing

Audiometric assessment was conducted using standard audiometric procedures in a sound-attenuating testing room via a GSI 61 clinical audiometer (Grason-Stadler, Inc.) for Studies 1 and 2 and an AudioStar Pro clinical audiometer (Grason-Stadler, Inc.) for Study 3. In all three studies, audiometric assessment and experimental stimuli were presented via calibrated Eartone 3A insert earphones (E-A-R Auditory Systems, Aero Company), after an otoscopic examination confirmed normal canal anatomy and an absence of any blockage in the ear canal.

Syntactic Manipulation

All three studies contrasted two sentence types known to differ in their ease of comprehension. The simpler sentence type expressed its meaning with a subject-relative structure (e.g., “The girl that helped the boy was generous.”). Such sentences follow a common pattern in English, in which the first noun in the sentence indicates an agent performing an action, the first verb identifies that action, and the second noun names the recipient of the action.

Sentences of this type were contrasted with sentences that contained the same words, and conveyed the same agency, but with an object-relative structure (e.g., “The boy that the girl helped was generous.”). It is still the girl who was the agent of the action (helped) and the boy who is the recipient of this action. In this case, however, the order of thematic roles is not canonical in that the first noun is no longer the agent of the action. Understanding the meaning of such sentences thus requires a more extensive thematic integration than subject-relative sentences (Gibson, Bergen, & Piantadosi, 2013) and places a heavier demand on cognitive resources for their resolution (Carpenter, Miyake, & Just, 1994). Consistent with these processing challenges, object-relative sentences reliably produce more comprehension errors than subject-relative sentences (Carpenter et al., 1994; Wingfield et al., 2006), increased patterns of neural activation in functional imaging studies (Just, Carpenter, Keller, Eddy, & Thulborn, 1996; Peelle, McMillan, Moore, Grossman, & Wingfield, 2004; Peelle, Troiani, Wingfield, & Grossman, 2010), and slower self-pacing times when listeners are given control of input rate (Fallon, Peelle, & Wingfield, 2006).

Study Details

In Study 1 (Amichetti et al., 2016, Exp. 2), each participant heard 64 sentences, eight words in length. Half of the sentences had a subject-relative structure (e.g., “The eagle that attacked the rabbit was large.”), and half had an object-relative structure (e.g., “The rabbit that the eagle attacked was large.”). In addition, half of the subject-relative sentences and half of the object-relative sentences were also grammatically correct but were semantically less plausible (e.g., “The rabbit that attacked the eagle was large.”; “The eagle that the rabbit attacked was large.”). For the purposes of this analysis, the subject-relative and object-relative data were averaged across the two plausibility conditions. After each sentence was presented, the participant was asked to name either the agent or the recipient of the action.

Stimuli were presented at 65 dB HL, approximately equivalent to everyday conversational levels. An audibility check was conducted by asking participants to repeat one- and two-syllable common words presented at the same 65 dB HL sound level as would be used in the main experiment (M = 98.3% words correct).

In Study 2 (DeCaro et al., 2016), each participant heard 144 sentences consisting of six- and 10-word subject-relative and object-relative sentences. The 10-word sentences were created by adding a four-word adjectival phrase to a six-word sentence (e.g., “Brothers that sisters with short brown hair assist are generous.”). A particular interest of this study was the placement of the adjectival phrases. In each sentence, a male agent (e.g., boy, uncle, king) or a female agent (e.g., girl, aunt, queen) performed an action (e.g., pushed, helped, teased). Agency was tested by asking participants to indicate whether the agent of the action was male or female. For the present analysis, we excluded the short six-word baseline sentences, leaving 96 ten-word test sentences: 48 subject-relative and 48 object-relative sentences, collapsed across adjectival phrase placement.

In contrast to Studies 1 and 3, two sound levels were employed in this experiment: Half of the stimuli were presented at 20 dB above the individual's auditory threshold (i.e., at 20 dB sensation level [SL]), and half were presented at 65 dB HL. To be comparable with Study 1, only the data for the 10-word sentences presented at 65 dB HL were included in this analysis. Prior to the main experiment, audibility was tested by asking participants to repeat words presented one at a time at this sound level. Accuracy was 100% correct for all participants.

The focus of Study 3 (Ayasse & Wingfield, 2018) was on the use of the task-evoked pupillary response as an index of processing effort. The test stimuli in this experiment were 96 ten-word subject-relative and object-relative sentences based on the 10-word sentences used in Study 2 (DeCaro et al., 2016). The participants' task was to indicate the agent of the action described in the sentence. As in Study 2, the data used for this analysis were collapsed across adjectival phrase placement. All stimuli were presented at 20 dB SL, with audibility confirmed by 100% accuracy in repeating short sentences at this sound level.

Stimuli in all three studies were recorded by a native speaker of American English at a natural speaking rate with natural prosody using Sound Studio Version 2.2.4 (Macromedia, Inc.), digitized at 16 bits with a sampling rate of 44.1 kHz. Recordings were equalized within and across sentence types for root-mean-square intensity using MATLAB (MathWorks). In all three studies, stimuli were presented binaurally over calibrated Eartone 3A insert earphones (E-A-R Auditory Systems, Aero Company) via a clinical audiometer (as described above; Grason-Stadler, Inc.) in the same sound-isolated testing room in which hearing acuity was tested.

The three studies used within-participant counterbalanced designs in which no participant heard the same core sentence (a particular combination of agent, action, and recipient) more than once. Across participants, however, by the end of each experiment, every core sentence had been heard an equal number of times in each of the sentence-type conditions. Each of the three studies also included filler sentences with different linguistic properties to discourage listeners from developing incidental processing strategies based on a limited set of sentence types. Only the data for the young adults with normal hearing in these studies are considered here; data for the older adults can be found in the published articles.

Participants

Study 1 had 24 young adult participants (five males, 19 females), Study 2 had 18 young adults (three males, 15 females), and Study 3 had 18 young adults (three males, 15 females). Table 1 gives, for each of the three studies, the mean, standard deviation, and range of participants' PTAs, averaged over 0.5, 1, 2, and 4 kHz. All participants fell within a range traditionally considered to be clinically normal hearing (≤ 15 dB HL; Harrell, 2000). Table 1 also gives participants' ages in each of the three studies. (In Study 3, the published article reported data for 14 young adult participants; an additional four participants were tested following the same protocol to bring the total to 18, equating the numbers with Study 2.) No participant took part in more than one of the three studies.

Table 1.

Participants' better-ear pure-tone averages (PTAs) and ages.

Measure        Statistic   Study 1        Study 2        Study 3
PTA (dB HL)    M           8.5            6.7            7.3
               SD          3.1            3.1            3.8
               Range       1.3 to 13.8    2.5 to 12.5    −1.3 to 13.8
Age (years)    M           19.7           20.4           21.0
               SD          1.9            2.7            2.0
               Range       18 to 27       18 to 29       18 to 24

Note. PTAs averaged over 0.5, 1.0, 2.0, and 4.0 kHz.
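
As a concrete illustration of this computation, the short R sketch below (R being the software used for the correlational analyses reported later) computes a four-frequency better-ear PTA; the threshold values are hypothetical, not taken from any of the three studies.

    # Hypothetical audiometric thresholds (dB HL) at 0.5, 1, 2, and 4 kHz
    # for one listener; the values are illustrative only.
    right_ear <- c(10, 5, 10, 15)
    left_ear  <- c(5, 10, 15, 20)

    # Four-frequency PTA for each ear; the better-ear PTA is the lower mean.
    pta_better <- min(mean(right_ear), mean(left_ear))  # 10 dB HL here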

The young adults in all three studies were university undergraduates, graduate students, or university-graduate research staff. All reported themselves to be native speakers of American English with no history of speech or language disorders. Written informed consent was obtained prior to each study using a protocol approved by the Brandeis University Institutional Review Board.

Statistical Analyses

Correlational analyses were performed in R Version 3.4.4 using the cor.test function, and all tests were two tailed. Accuracy scores were correlated with pure-tone thresholds averaged over 0.5, 1, 2, and 4 kHz. Because of differences in experimental conditions, the percentages of correct responses from the three studies were scaled and centered within each study using the scale function. Correlations were Pearson product–moment correlations and were calculated separately for the subject-relative and object-relative sentence data. All t tests were conducted in SPSS Statistics Version 24.
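
This pipeline can be sketched in R as follows. This is a minimal, illustrative reconstruction, assuming a hypothetical data frame dat with one row per participant and columns study, pta, acc_sr, and acc_or (percent correct for subject- and object-relative sentences); the column names are ours, not taken from the published studies.

    # Center and scale accuracy within each study, as described above
    dat$acc_sr_z <- ave(dat$acc_sr, dat$study,
                        FUN = function(v) as.numeric(scale(v)))
    dat$acc_or_z <- ave(dat$acc_or, dat$study,
                        FUN = function(v) as.numeric(scale(v)))

    # Two-tailed Pearson product-moment correlations with hearing acuity
    cor.test(dat$pta, dat$acc_sr_z)  # subject-relative sentences
    cor.test(dat$pta, dat$acc_or_z)  # object-relative sentences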

Results

As might be expected for the simpler subject-relative sentences, determining the agent or recipient of the action in the sentence offered little challenge to these young adults: Study 1, 94.5% correct (SD = 5.7); Study 2, 96.8% correct (SD = 5.6); Study 3, 99.0% correct (SD = 2.5). Figure 1a shows scaled comprehension accuracy for the subject-relative sentences as a function of individuals' better-ear PTAs. For these sentences, where participants were at near-ceiling accuracy, there was no systematic relationship between hearing acuity and comprehension accuracy, r(58) = −.02, p = .86.

Figure 1. Comprehension accuracy and hearing acuity. Scaled accuracy scores for the simpler subject-relative sentences (a) and for the more complex object-relative sentences (b) as a function of better-ear hearing acuity. Triangles show data from Study 1, squares show data from Study 2, and circles show data from Study 3.

For the more complex object-relative sentences, comprehension accuracy, although still relatively good, was nevertheless significantly poorer than for the subject-relative sentences in all three studies: Study 1, 85.1% correct (SD = 8.6), t(23) = 5.33, p < .001; Study 2, 87.3% correct (SD = 11.7), t(17) = 3.19, p = .005; Study 3, 89.4% correct (SD = 8.9), t(17) = 4.40, p < .001. Figure 1b shows the scaled accuracy scores for the object-relative sentences plotted as a function of the same individuals' better-ear PTAs. In this case, we observed a significant relationship between hearing acuity and comprehension accuracy, with better hearing acuity (i.e., lower PTAs) associated with better comprehension accuracy, r(58) = −.40, p = .002.

Although, as previously indicated, most of the participants were in their early 20s, their ages across the three studies ranged from 18 to 29 years. We examined the data for potential effects of age and found, for this sample, no significant correlation between participants' age and hearing acuity, r(58) = .04, p = .768, nor between participants' age and comprehension accuracy, r(58) = .23, p = .079. Because of this small, albeit nonsignificant, trend between age and comprehension accuracy, as a final step we calculated a partial correlation, which confirmed the relationship between hearing acuity and comprehension accuracy with the age–accuracy relationship partialed out, r(58) = −.42, p = .001.
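
For readers wishing to reproduce this final step, a first-order partial correlation can be computed from the three pairwise Pearson correlations via the standard formula r_xy.z = (r_xy − r_xz·r_yz) / √[(1 − r_xz²)(1 − r_yz²)]. The R sketch below uses the same hypothetical data frame as above, with an assumed age column; it is an illustration of the method, not the authors' analysis script.

    # Partial correlation between PTA (x) and scaled object-relative
    # accuracy (y), controlling for age (z)
    r_xy <- cor(dat$pta, dat$acc_or_z)
    r_xz <- cor(dat$pta, dat$age)
    r_yz <- cor(dat$acc_or_z, dat$age)

    r_xy.z <- (r_xy - r_xz * r_yz) /
              sqrt((1 - r_xz^2) * (1 - r_yz^2))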

Discussion

In considering the finding that hearing acuity affected comprehension accuracy only for the object-relative sentences, it is important to emphasize that all participants passed a screen for normal hearing; that the object-relative and subject-relative sentences always contained the same words, differing only in word order; and that both sentence types within each experiment were presented at the same sound levels. The difference in comprehension accuracy between the subject-relative and object-relative sentences could thus not be attributed to any lexical or audibility differences between the two sentence types. That is, the two sentence types differed only in the relative cognitive demands they placed on the listener as he or she attempted to determine the core meaning of the sentence.

Although there are differences in postulated linguistic operations required for agency determination among sentence processing theories, all converge on the expectation that object-relative sentences represent a greater processing challenge than subject-relative sentences (Carpenter et al., 1994; Gibson et al., 2013). It thus follows that a greater allocation of limited resources at the perceptual level due to an elevated hearing threshold would have a greater detrimental impact on successful comprehension of the more resource-demanding object-relative sentences than for subject-relative sentences.

The hypothesis that perceptual effort at the word recognition level draws resources that would otherwise be available for higher level linguistic operations has been supported in past studies of adults with hearing losses in the 25–50 dB HL range (a mild–moderate loss), the most common degree of loss among older adults with hearing impairment. With this degree of loss, speech perception can be shown to come at the cost of measurable effort (Ayasse, Lash, & Wingfield, 2017; Kramer, Kapteyn, Festen, & Kuik, 1997; Kuchinsky et al., 2013). The present study shows a surprising, analogous effect for healthy young adults with only small threshold elevations that still fall within the range classified in audiology as normal hearing. This finding makes two points: one theoretical and one of public health concern.

At the theoretical level, the results of this exploratory study are well aligned with developments of the effortfulness hypothesis that began with Rabbitt's seminal work demonstrating detrimental effects of effortful perception on recall of verbal materials (Rabbitt, 1968, 1991). This hypothesis in its current form (Pichora-Fuller et al., 2016) rests on the notion that human information processing is a limited resource system. Within this system, the commitment of resources to support difficult perception at the word level may still allow adequate resources for comprehension of simple speech materials that compose much of what we hear on a daily basis (Goldman-Eisler, 1968). This may, in part, underlie the reports of university students who are unaware that they have a hearing loss. As we also show, however, this picture can change with more complex speech materials. That is, errors in comprehension begin to appear for more difficult speech materials with surprisingly small variations in hearing acuity that do not appear for simpler speech materials. Importantly, we show that this is so even for young adults who pass a screen for normal hearing.

Terms such as resources, attention, and effort remain largely placeholder terms for processes not yet fully defined (McGarrigle et al., 2014; Wingfield, 2016). The present data do not address the mechanisms underlying the effortfulness effect, although it is possible that a degraded input slows the processing of sequential verbal materials to the detriment of effective encoding and higher level linguistic operations (Cousins, Dar, Wingfield, & Miller, 2014). The current work, however, suggests that the detrimental effects of perceptual effort may be far more sensitive to variation in hearing acuity than might have been expected from studies that have looked at effortful listening through the lens of a significantly degraded acoustic signal.

Two methodological limitations of this study should be noted. The first is that the magnitude of any correlation obtained will be constrained by the size of the measurement unit employed. The cited studies, following standard clinical procedures, obtained their thresholds in 5-dB increments; ±5 dB thus encompasses a large portion of the range traditionally classified as normal hearing, and hence of the range of the present sample (Harrell, 2000). Future studies along these lines could benefit from audiometric measurements in smaller-than-standard increments. A second caveat follows from the unequal sex distribution in the present sample, a limitation that should be addressed in any future studies.
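
The measurement-unit point can be made concrete with a small simulation, sketched below in R under purely illustrative assumptions (a true continuous association of modest size; thresholds uniform over the normal-hearing range): quantizing the predictor to 5-dB steps will, on average, attenuate the observed correlation.

    # Illustrative simulation; none of these values are study data.
    set.seed(1)
    n    <- 60
    pta  <- runif(n, min = 0, max = 15)        # continuous "true" acuity
    acc  <- as.numeric(-0.4 * scale(pta)) +    # built-in modest association
            rnorm(n, sd = 0.9)
    pta5 <- round(pta / 5) * 5                 # recorded in 5-dB steps

    cor(pta, acc)    # correlation with continuous measurement
    cor(pta5, acc)   # typically smaller in magnitude after quantization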

At the level of public health, consider studies such as that of Widén et al. (2009), who found that, of 258 university undergraduates between the ages of 17 and 21 years who were screened for hearing acuity, only 4.2% self-reported a hearing difficulty, yet 25.9% failed a PTA cutoff of 20 dB HL. Our present results show that, even among young adults who would clearly pass such a screen, and who performed well when comprehending sentences with a simple syntactic structure, those with even slightly elevated hearing thresholds showed a higher incidence of comprehension failures for sentences with noncanonical syntax than those with better hearing.

One should, of course, be cautious about overinterpreting moderate, albeit statistically significant, correlations and about proposing a causal link underlying this relationship. The present data nevertheless show exactly what one would predict from the principles expressed in the “framework for understanding effortful listening” model (Pichora-Fuller et al., 2016) and its account of the effects of listening effort on language comprehension. No less important, these data offer a strong argument for routine baseline hearing evaluations, even among healthy young adults who currently show age-normal hearing.

Acknowledgments

This study was supported by National Institute on Deafness and Other Communication Disorders Grant R01 DC016834, awarded to A. W. The studies from which the data for this analysis were taken were supported by National Institutes of Health Grant R01 AG019714, awarded to A. W. N. D. A. acknowledges support from National Institutes of Health Training Grant T32 GM084907. We also gratefully acknowledge support from the W. M. Keck Foundation.


References

Amichetti N. M., White A. G., & Wingfield A. (2016). Multiple solutions to the same problem: Strategies of sentence comprehension by older adults with impaired hearing. Frontiers in Psychology, 7, 789. https://doi.org/10.3389/fpsyg.2016.00789
Ayasse N. D., Lash A., & Wingfield A. (2017). Effort not speed characterizes comprehension of spoken sentences by older adults with mild hearing impairment. Frontiers in Aging Neuroscience, 8, 329. https://doi.org/10.3389/fnagi.2016.00329
Ayasse N. D., & Wingfield A. (2018). A tipping point in listening effort: Effects of linguistic complexity and age-related hearing loss on sentence comprehension. Trends in Hearing, 22, 2331216518790907. https://doi.org/10.1177/2331216518790907
Carpenter P. A., Miyake A., & Just M. A. (1994). Working memory constraints in comprehension: Evidence from individual differences, aphasia, and aging. In Gernsbacher M. (Ed.), The handbook of psycholinguistics (pp. 1075–1122). San Diego, CA: Academic Press.
Cousins K. A., Dar H., Wingfield A., & Miller P. (2014). Acoustic masking disrupts time-dependent mechanisms of memory encoding in word-list recall. Memory & Cognition, 42, 622–638.
DeCaro R., Peelle J. E., Grossman M., & Wingfield A. (2016). The two sides of sensory–cognitive interactions: Effects of age, hearing acuity, and working memory span on sentence comprehension. Frontiers in Psychology, 7, 236. https://doi.org/10.3389/fpsyg.2016.00236
Fallon M., Peelle J. E., & Wingfield A. (2006). Spoken sentence processing in young and older adults modulated by task demands: Evidence from self-paced listening. Journals of Gerontology: Series B: Psychological Sciences and Social Sciences, 61, P10–P17.
Gibson E., Bergen L., & Piantadosi S. T. (2013). Rational integration of noisy evidence and prior semantic expectations in sentence interpretation. Proceedings of the National Academy of Sciences of the United States of America, 110, 8051–8056.
Goldman-Eisler F. (1968). Psycholinguistics. London, England: Academic Press.
Harrell R. W. (2000). Puretone evaluation. In Katz J. (Ed.), Handbook of clinical audiology (pp. 71–87). Philadelphia, PA: Lippincott Williams & Wilkins.
Just M. A., Carpenter P. A., Keller T. A., Eddy W. F., & Thulborn K. R. (1996). Brain activation modulated by sentence comprehension. Science, 274, 114–116.
Kahneman D. (1973). Attention and effort. Englewood Cliffs, NJ: Prentice Hall.
Kramer S. E., Kapteyn T. S., Festen J. M., & Kuik D. J. (1997). Assessing aspects of auditory handicap by means of pupil dilation. Audiology, 36, 155–164.
Kuchinsky S. E., Ahlstrom J. B., Vaden K. I. Jr., Cute S. L., Humes L. E., Dubno J. R., & Eckert M. A. (2013). Pupil size varies with word listening and response selection difficulty in older adults with hearing loss. Psychophysiology, 50, 23–34.
Le Prell C. G., Hensley B. N., Campbell K. C., Hall J. W. III, & Guire K. (2011). Evidence of hearing loss in a “normally-hearing” college-student population. International Journal of Audiology, 50, S21–S31.
Lee Y. S., Wingfield A., Min N.-E., Kotloff E., Grossman M., & Peelle J. E. (2018). Differences in hearing acuity among “normal-hearing” young adults modulate the neural basis for speech comprehension. eNeuro, 5(3). https://doi.org/10.1523/ENEURO.0263-17.2018
McCoy S. L., Tun P. A., Cox L. C., Colangelo M., Stewart R. A., & Wingfield A. (2005). Hearing loss and perceptual effort: Downstream effects on older adults' memory for speech. The Quarterly Journal of Experimental Psychology: Section A, Human Experimental Psychology, 58, 22–33.
McGarrigle R., Munro K. J., Dawes P., Stewart A. J., Moore D. R., Barry J. G., & Amitay S. (2014). Listening effort and fatigue: What exactly are we measuring? A British Society of Audiology Cognition in Hearing Special Interest Group “white paper.” International Journal of Audiology, 53, 433–440.
Murphy D. R., Craik F. I. M., Li K. Z. H., & Schneider B. A. (2000). Comparing the effects of aging and background noise on short-term memory performance. Psychology and Aging, 15, 323–334.
Peelle J. E., McMillan C., Moore P., Grossman M., & Wingfield A. (2004). Dissociable patterns of brain activity during comprehension of rapid and syntactically complex speech: Evidence from fMRI. Brain and Language, 91, 315–325.
Peelle J. E., Troiani V., Wingfield A., & Grossman M. (2010). Neural processing during older adults' comprehension of spoken sentences: Age differences in resource allocation and connectivity. Cerebral Cortex, 20, 773–782.
Pichora-Fuller M. K., Kramer S. E., Eckert M. A., Edwards B., Hornsby B. W., Humes L. E., … Wingfield A. (2016). Hearing impairment and cognitive energy: The framework for understanding effortful listening (FUEL). Ear and Hearing, 37(Suppl. 1), 5S–27S.
Rabbitt P. M. (1968). Channel capacity, intelligibility and immediate memory. The Quarterly Journal of Experimental Psychology, 20, 241–248.
Rabbitt P. M. (1991). Mild hearing loss can cause apparent memory failures which increase with age and reduce with IQ. Acta Oto-Laryngologica Supplementum, 111(476), 167–176.
Rota-Donahue C., & Levey S. (2016). Noise-induced hearing loss in the campus. The Hearing Journal, 69, 38–39.
Shargorodsky J., Curhan S. G., Curhan G. C., & Eavey R. (2010). Change in prevalence of hearing loss in US adolescents. Journal of the American Medical Association, 304, 772–778.
Surprenant A. M. (1999). The effect of noise on memory for spoken syllables. International Journal of Psychology, 34, 328–333.
van Boxtel M. P., van Beijsterveldt C. E., Houx P. J., Anteunis L. J., Metsemakers J. F., & Jolles J. (2000). Mild hearing impairment can reduce verbal memory performance in a healthy adult population. Journal of Clinical and Experimental Neuropsychology, 22, 147–154.
Widén S. E., Holmes A. E., Johnson T., Bohlin M., & Erlandsson S. I. (2009). Hearing, use of hearing protection, and attitudes towards noise among young American adults. International Journal of Audiology, 48, 537–545.
Wingfield A. (2016). Evolution of models of working memory and cognitive resources. Ear and Hearing, 37(Suppl. 1), 35S–43S.
Wingfield A., McCoy S. L., Peelle J. E., Tun P. A., & Cox L. C. (2006). Effects of adult aging and hearing loss on comprehension of rapid speech varying in syntactic complexity. Journal of the American Academy of Audiology, 17, 487–497.
