Neurobiology of Language. 2020 Oct 1;1(4):452–473. doi: 10.1162/nol_a_00021

Age-Related Differences in Auditory Cortex Activity During Spoken Word Recognition

Chad S Rogers 1,*, Michael S Jones 2, Sarah McConkey 3, Brent Spehar 4, Kristin J Van Engen 5, Mitchell S Sommers 6, Jonathan E Peelle 7
PMCID: PMC8318202  NIHMSID: NIHMS1668156  PMID: 34327333

Abstract

Understanding spoken words requires the rapid matching of a complex acoustic stimulus with stored lexical representations. The degree to which brain networks supporting spoken word recognition are affected by adult aging remains poorly understood. In the current study we used fMRI to measure brain responses to spoken words in two conditions: an attentive listening condition, in which no response was required, and a repetition task. Listeners were 29 young adults (aged 19–30 years) and 32 older adults (aged 65–81 years) without self-reported hearing difficulty. We found largely similar patterns of activity during word perception for both young and older adults, centered on the bilateral superior temporal gyrus. As expected, the repetition condition resulted in significantly more activity in areas related to motor planning and execution (including the premotor cortex and supplemental motor area) than the attentive listening condition. Importantly, however, older adults showed significantly less activity in probabilistically defined auditory cortex than young adults when listening to individual words in both the attentive listening and repetition tasks. Age differences in auditory cortex activity were seen selectively for words (no age differences were present for single-channel vocoded speech, used as a control condition), and could not be easily explained by accuracy on the task, movement in the scanner, or hearing sensitivity (available for a subset of participants). These findings indicate largely similar patterns of brain activity for young and older adults listening to words in quiet, but suggest less recruitment of auditory cortex by the older adults.

Keywords: speech perception, cognitive aging, speech production

INTRODUCTION

Understanding spoken words requires mapping complex acoustic signals onto a listener’s stored lexical representations. Neuropsychology and cognitive neuroscience provide increasingly convergent evidence about the roles of the bilateral temporal cortex (particularly the superior temporal gyrus and the middle temporal gyrus) in processing speech acoustics and recognizing single words (Binder et al., 2000; Hickok & Poeppel, 2007; Peelle, Johnsrude, & Davis, 2010). However, the degree to which the networks supporting spoken word recognition change across the lifespan remains unclear. The goals of the current study were to test whether young and older adults rely on different brain networks during successful spoken word recognition, and whether any age differences are related to the specific task.

Important themes when considering older adults’ language processing include the degree to which linguistic processing is preserved, and whether older adults may adopt different strategies when understanding language compared to young adults (Peelle, 2019; Wingfield & Stine-Morrow, 2000). Particularly important for spoken word recognition is that adult aging frequently brings changes to both hearing sensitivity (Peelle & Wingfield, 2016) and cognitive ability (Park et al., 2002). Thus, it is not surprising that older adults’ spoken word perception differs from that of young adults, particularly in the presence of background noise (Humes, 1996). Older adults tend to take longer to recognize words (Lash, Rogers, Zoller, & Wingfield, 2013; Wingfield, Aberdeen, & Stine, 1991), make more recognition errors than young adults, and show increased sensitivity to factors such as the number of phonological neighbors (competitors) associated with a given target word (Sommers & Danielson, 1999). An open question centers on the brain networks on which older adults rely during spoken word recognition. Of particular interest is whether additional regions may be recruited to support successful recognition, compared to those engaged by young adults.

A number of studies have investigated neural activity during older adults’ processing of speech in noise or under other forms of acoustic degradation, using an assortment of tasks and testing participants with varying levels of hearing (Bilodeau-Mercure, Lortie, Sato, Guitton, & Tremblay, 2015; Hwang, Li, Wu, Chen, & Liu, 2007; Manan, Yusoff, Franz, & Mukari, 2017; Manan, Franz, Yusoff, & Mukari, 2015; Wong et al., 2009). Harris, Dubno, Keren, Ahlstrom, and Eckert (2009), for example, examined spoken word recognition in young and older adults, varying the intelligibility of the target items using low-pass filtering of the acoustic signal. During scanning, participants were asked to repeat back the word they heard. The authors found increased activity in regions associated with word processing, including the auditory cortex and the premotor cortex, when words were more intelligible; these intelligibility-related changes did not statistically differ between young and older adults. However, older adults did show more activation in the anterior cingulate cortex and the supplemental motor area than young adults did, suggesting a possible increase in top-down executive control.

Age differences in speech understanding have also been studied in the context of sentence comprehension. One common finding is that during successful sentence processing, older adults show additional activity compared to young adults (e.g., in contralateral homologs to regions seen in young adults, or in regions beyond the network activated by young adults; Peelle, Troiani, Wingfield, & Grossman, 2010; Tyler et al., 2010). These findings have been interpreted in a compensation framework in which older adults are less efficient using a core speech network and need to recruit additional regions to support successful comprehension (Wingfield & Grossman, 2006). However, at least some of this additional activity has been shown to be related to the tasks performed by participants in the scanner, which frequently contain metalinguistic decisions not required during everyday conversation (Davis, Zhuang, Wright, & Tyler, 2014). Thus, it may be that core language computations are well-preserved in aging (Campbell et al., 2016; Shafto & Tyler, 2014).

The role of executive attention in older adults’ spoken word recognition has also been of significant interest. Listening to speech that is acoustically degraded can result in perception errors, after which listeners must re-engage attention systems to support successful listening. The cingulo-opercular network, an executive attention network (Neta et al., 2015; Power & Petersen, 2013), shows increased activity following perception errors (similar to error-related activity in other domains). Crucially, when listening to spoken words in background noise, increased cingulo-opercular activity following one trial is associated with recognition success on the following trial (Vaden et al., 2016; Vaden et al., 2013), consistent with a role in maintaining task-related attention (Eckert et al., 2009).

An important challenge when considering the performance of listeners with hearing loss is that words may not be equally intelligible to all listeners. A common measure of accuracy in spoken word recognition is to ask listeners to repeat each word after hearing it; however, this type of task requires motor responses, which may obscure activations related to speech perception and increase participant motion in the scanner (Gracco, Tremblay, & Pike, 2005). In addition, differences in the brain regions coordinating speech production in older adults (Bilodeau-Mercure & Tremblay, 2016; Tremblay, Sato, & Deschamps, 2017) may interfere with clear measurements of activity during perception and recognition. The degree to which motor effects resulting from word repetition may obscure activity related to speech perception is unclear. In sentence processing tasks, task effects can be significant (Davis et al., 2014), and if not accounted for may obscure what are actually consistent patterns of language-related activity across the lifespan (Campbell et al., 2016).

In the current study we investigated spoken word processing in young and older adult listeners in the absence of background noise, comparing a paradigm requiring words to be repeated with “attentive listening” (no motor response required). Our interest was, first, whether age differences exist in the brain networks supporting spoken word recognition and, second, whether any such differences depend on the choice of task. Our primary analyses therefore focus on activity for words (greater than noise) in the experimental conditions.

The influence of psycholinguistic factors on spoken word recognition has long been appreciated. In a secondary set of analyses, we investigate whether word frequency and phonological neighborhood density modulate activity during spoken word recognition. Although behavioral and electrophysiological studies suggest that high-frequency words are processed more quickly than low-frequency words, the degree to which this might be captured in fMRI is unclear. Similarly, although neighborhood density effects are widely reported in behavioral studies (with words from dense neighborhoods typically being more difficult to process), the degree to which lexical competition effects may differ with age is unclear.

MATERIALS AND METHODS

Stimuli, data, and analysis scripts are available from https://osf.io/vmzag/.

Participants

We recruited two groups of participants (young and older adults) for this study. The young adults were 29 self-reported healthy, right-handed adults, aged 19–30 years (M = 23.8, SD = 2.9, 19 female), recruited via the Washington University in St. Louis Department of Psychological and Brain Sciences Subject Pool. The older adults were 32 self-reported healthy, right-handed adults, aged 65–81 years (M = 71.0, SD = 5.0, 17 female). All participants reported being native speakers of American English with no history of neurological difficulty and with normal hearing (and no history of a diagnosed hearing problem). Participants were compensated for their participation, and all provided informed consent commensurate with practices approved by the Washington University in St. Louis Institutional Review Board.

Audiograms were collected for a subset of eight young and nine older participants using pure-tone audiometry (Figure 1a). We summarized hearing ability using a better-ear pure-tone average (PTA) at 1, 2, and 4 kHz. PTAs in participants’ better-hearing ears ranged from −3.33 to 8.33 dB HL in young adults (M = 2.92, SD = 4.15), and from 8.33 to 23.3 dB HL in older adults (M = 23.3, SD = 9.17).
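For readers unfamiliar with the measure, the better-ear PTA can be computed in a few lines of R. This is a minimal sketch, not the study's analysis code; the data frame and column names (audio, left_1k, and so on) are hypothetical stand-ins for per-ear thresholds in dB HL at 1, 2, and 4 kHz.

# Hypothetical per-participant thresholds (dB HL) at 1, 2, and 4 kHz per ear
audio <- data.frame(
  left_1k  = c(5, 15), left_2k  = c(0, 20), left_4k  = c(10, 30),
  right_1k = c(0, 10), right_2k = c(5, 25), right_4k = c(5, 35)
)

# Average across the three test frequencies within each ear
pta_left  <- rowMeans(audio[, c("left_1k", "left_2k", "left_4k")])
pta_right <- rowMeans(audio[, c("right_1k", "right_2k", "right_4k")])

# Better-ear PTA: the lower (better) of the two per-ear averages
audio$pta_better <- pmin(pta_left, pta_right)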

Figure 1.

Experiment overview. (a) Audiograms for the subset of participants for whom hearing data were available, for left and right ears. Individual participants are shown as thin lines, group means as thick lines. (b) Frequency of occurrence and phonological neighborhood density for the 240 experimental items. (c) Task design for the attentive listening and word repetition tasks. (d) Behavioral accuracy in the repetition condition for young and older adults. HAL = Hyperspace Analogue to Language; EPI = echo planar imaging.

Materials

Stimuli for this study were 375 monosyllabic consonant-vowel-consonant words. The auditory stimuli were recorded in a quiet room at 48 kHz with a 16-bit analog-to-digital converter, using an Audio Technica 2035 microphone. Words were spoken by a female speaker with a standard American dialect. Root-mean-square (RMS) amplitude of the stimuli was equated.

Out of the full set of words, 75 were vocoded using a single channel with white noise as a carrier signal (Shannon, Zeng, Kamath, Wygonski, & Ekelid, 1995), using jp_vocode.m from http://github.com/jpeelle/jp_matlab. These stimuli were used for an unintelligible baseline “noise” condition. The remaining 300 words were divided into five lists of 60 words using MATCH software (Van Casteren & Davis, 2007), balanced for word frequency (as measured by the log of the Hyperspace Analogue to Language dataset), orthographic length, concreteness (Brysbaert, Warriner, & Kuperman, 2014), and familiarity (Balota et al., 2007). The distributions of word frequency and phonological neighborhood density are shown in Figure 1b.
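For readers unfamiliar with noise vocoding, the following R sketch shows the core idea of the single-channel version used for the baseline condition. The actual stimuli were created with jp_vocode.m in MATLAB; this translation is illustrative only, and the function name and moving-average envelope smoothing are our own simplifications. The amplitude envelope of the word modulates a white-noise carrier, removing spectral detail while preserving the temporal envelope, after which RMS amplitude is matched to the original.

# x: mono waveform (numeric vector); fs: sampling rate in Hz
vocode_1ch <- function(x, fs, env_cutoff_hz = 30) {
  # Crude envelope: rectify, then low-pass via a moving average whose
  # window length approximates the envelope cutoff frequency
  win <- max(1, round(fs / env_cutoff_hz))
  env <- stats::filter(abs(x), rep(1 / win, win), sides = 2)
  env[is.na(env)] <- 0
  carrier <- rnorm(length(x))              # white-noise carrier
  y <- as.numeric(env) * carrier
  y * sqrt(mean(x^2) / mean(y^2))          # match RMS to the original word
}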

One of these lists was combined with 15 of the noise vocoded words and used for word repetition task practice outside of the scanner. The remaining four lists of 60 words served as the critical items inside the scanner, with half of the lists used for attentive listening (120 total words) and the other half for word repetition (120 total words). Word lists were counterbalanced such that each word was presented in both “listen” and “repeat” conditions across participants.

Procedure

Prior to scanning, participants were taken to a quiet room. (The room was not sound isolated, and low-frequency noise from the building heating, ventilation, and air conditioning system was typically present.) During that time participants provided informed consent and completed demographic questionnaires, and a subset had their hearing tested by an audiologist-trained researcher using a calibrated Maico MA40 portable audiometer (Maico Diagnostics, Inc., Eden Prairie, MN).

Participants were then instructed on the two tasks they would perform in the scanner: attentive listening and word repetition. During attentive listening, participants were asked to stay alert and still and to keep their eyes focused on a fixation cross while listening to a sequence of sounds, including words, silence, and noise (single-channel noise-vocoded words). During word repetition, participants were asked to do the same, with the addition of repeating aloud the word they had just heard. Participants were instructed to repeat each word following the volume acquisition after that word (Figure 1c), and to give their best guess if they could not understand a word. Participants practiced a simulation of the word repetition task until the experimenter was confident that they understood the pacing and nature of the task. Sound levels were adjusted to achieve audible presentation at the beginning of the study and were not adjusted thereafter.

Functional MRI scanning took place over the course of four scanning blocks, where participants alternated between blocks of attentive listening and word repetition (Figure 1c). The order of blocks was counterbalanced such that participants were equally likely to begin with a word repetition or an attentive listening block. During word repetition, participants’ spoken responses were recorded using an in-bore Fibersound optical microphone. These responses were scored for accuracy offline by a research assistant (Figure 1d).

MRI Data Acquisition and Processing

The MRI data collected in this study are available from https://openneuro.org/datasets/ds002382 (Poldrack et al., 2013). MRI data were acquired using a Siemens Prisma 3 T scanner (Siemens Medical Systems) equipped with a 32-channel head coil. Scan sequences began with a T1-weighted structural volume using an MPRAGE sequence (repetition time [TR] = 2.4 s, echo time [TE] = 2.2 ms, flip angle = 8°, 300 × 320 matrix, voxel size = 0.8 mm isotropic). Blood oxygenation level-dependent fMRI images were acquired using a multiband echo planar imaging sequence (Feinberg et al., 2010; TR = 3.07 s, acquisition time [TA] = 0.770 s, TE = 37 ms, flip angle = 37°, voxel size = 2 mm isotropic, multiband factor = 8). (The flip angle was suboptimal due to an error setting up the sequences; although the error was discovered partway through the study, we left the sequence unchanged to keep data quality consistent across participants. With a TR of ~3 s, a flip angle of 90° would be expected to give a better signal-to-noise ratio.) We used a sparse imaging design in which the TR was longer than the acquisition time, leaving a 2.3 s gap between acquisitions; this minimized scanner noise during stimulus presentation and audio recording of participant responses (Edmister, Talavage, Ledden, & Weisskoff, 1999; Hall et al., 1999).

Analysis of the MRI data was performed using Automatic Analysis version 5.4.0 (Cusack et al., 2015; RRID:SCR_003560), which scripted a combination of SPM12 (Wellcome Trust Centre for Neuroimaging) version 7487 (RRID:SCR_007037) and FMRIB Software Library (FSL; FMRIB Analysis Group; Jenkinson, Beckmann, Behrens, Woolrich, & Smith, 2012) version 6.0.1 (RRID:SCR_002823).

Data were realigned using rigid-body image registration, and functional data were coregistered with the bias-corrected T1-weighted structural image. Structural and functional images were normalized to MNI space using a unified segmentation approach (Ashburner & Friston, 2005) and resampled to 2 mm isotropic voxels. Finally, the functional data were smoothed with an 8 mm full width at half maximum Gaussian kernel.

For the attentive listening condition, we did not have measures of accuracy, so we analyzed all trials. For the repetition condition, we analyzed only trials associated with correct responses. For both tasks, we modeled the noise condition in addition to words. Finally, we included three parametric modulators for word events: word frequency, phonological neighborhood density, and their interaction. To avoid order effects (Mumford, Poline, & Poldrack, 2015), these were not orthogonalized.

Motion effects were of particular concern given that participants were speaking during the repetition condition. To mitigate the effects of motion, we used a thresholding approach in which high-motion frames were individually modeled for each subject using a delta function in the general linear model (see, e.g., Siegel et al., 2014). Motion was quantified using framewise displacement (FD), calculated from the six motion parameters estimated during realignment, assuming the head is a sphere with a radius of 50 mm (Power, Barnes, Snyder, Schlaggar, & Petersen, 2012). We then chose a single FD threshold (0.561 mm) that we applied to all participants. Our rationale was that some participants move more, and thus produce worse data; using a single threshold for all participants therefore results in more data exclusion from high-motion participants. This threshold excluded 2.2–19.4% of frames (M = 6.21, SD = 4.45) for the young adults and 2.8–58.4% of frames (M = 22.6, SD = 15.3) for the older adults. For each frame exceeding the threshold, we added a column to that participant’s design matrix consisting of a delta function at the time point in question, which effectively excludes that frame’s variance from the model.
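The censoring approach can be sketched in R as follows. This is a schematic rather than the analysis code itself: rp stands in for the (nframes × 6) realignment parameter matrix (three translations in mm, three rotations in radians), and the spike-regressor construction mirrors the delta functions described above.

rp <- matrix(rnorm(600, sd = 0.1), ncol = 6)   # placeholder realignment parameters

# Framewise displacement following Power et al. (2012): sum of absolute
# frame-to-frame changes, with rotations converted to arc length on a
# 50 mm sphere
fd_from_rp <- function(rp, radius = 50) {
  d <- diff(rp)
  d[, 4:6] <- d[, 4:6] * radius
  c(0, rowSums(abs(d)))                        # FD in mm; first frame set to 0
}

fd  <- fd_from_rp(rp)
bad <- which(fd > 0.561)                       # single threshold for all participants

# One delta-function ("spike") regressor per high-motion frame
spikes <- matrix(0, nrow = length(fd), ncol = length(bad))
spikes[cbind(bad, seq_along(bad))] <- 1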

Contrast images from single subject analyses were analyzed at the second level using permutation testing (FSL randomise; 5,000 permutations; https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/FSL), with a cluster-forming threshold of p < 0.001 (uncorrected) and results corrected for multiple comparisons based on cluster extent (p < 0.05). Images (contrast images and unthresholded t maps) are available from https://identifiers.org/neurovault.collection:6735 (Gorgolewski et al., 2015). Anatomical localization was performed using converging evidence from author experience (Devlin & Poldrack, 2007) viewing statistical maps overlaid in MRIcroGL (Rorden & Brett, 2000), supplemented by atlas labels (Tzourio-Mazoyer et al., 2002).

For region of interest (ROI) analysis of primary auditory cortex, we used probabilistic maps based on postmortem human histological staining (Morosan et al., 2001), available in the SPM Anatomy toolbox (Eickhoff et al., 2005; RRID:SCR_013273). We created a binary mask for regions Te1.0 and Te1.1 and then extracted parameter estimates for noise and word contrasts for the attentive listening and repetition conditions from each participant’s first-level analyses by averaging over all voxels in each ROI (left auditory, right auditory).
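A minimal sketch of this extraction step in R is shown below (the actual extraction was performed within the SPM/MATLAB pipeline; this version uses the oro.nifti package, the file names are hypothetical, and we assume the mask and contrast images share the same space and voxel grid):

library(oro.nifti)

mask <- readNIfTI("Te10_Te11_left.nii.gz")    # binary mask for Te1.0/Te1.1
con  <- readNIfTI("sub-01_con_words.nii.gz")  # first-level contrast image

# Mean parameter estimate over all voxels in the ROI
roi_mean <- mean(con[mask > 0.5], na.rm = TRUE)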

Outputs from analysis stages used for quality control are available from https://osf.io/vmzag/ in the aa_report folder.

RESULTS

Behavioral Data

We analyzed the accuracy data using a linear mixed effects analysis, implemented using the lme4 and lmerTest packages in R version 3.6.2 (Bates, Mächler, Bolker, & Walker, 2015; Kuznetsova, Brockhoff, & Christensen, 2017; RRID:SCR_001905). Because the trial-level accuracy data were binary, we used logistic regression. We first tested for age differences using a model that included age group as a fixed factor and subject as a random factor:

m0 <- glmer(accuracy ~ age_group + (1 | subject),

   data = df, family = "binomial")

The estimate for age_group was −1.4929 (SE = 0.1902), p = 4.24e−15, consistent with a main effect of age (older adults performing more poorly). Because of ceiling effects and the lack of variability in the young adults’ data, we ran an additional analysis in the older adults only, using a model that included neighborhood density and word frequency as fixed factors, and item and subject as random factors:

m1 <- glmer(accuracy ~ neighborhood_density * log_freq +

   (1 | word) + (1 | subject),

   data = df, family = "binomial")

The model failed to converge when including a more complex random effect structure. The results of this model are shown in Table 1. There were no significant effects of neighborhood density or word frequency in the accuracy data.

Table 1.

Fixed effects results for accuracy data model

  Estimate SE Z p  
Intercept 1.45917 0.20998 6.95 3.67e−12 ***
neighborhood density 0.06476 0.08944 0.724 0.469  
log word frequency 0.05480 0.08852 0.619 0.536  
density:frequency −0.07036 0.09054 −0.777 0.437  

Note. *** p < 0.001; ** p < 0.01; * p < 0.05

There are many reasons a participant might make an incorrect response, and our primary interest is in the processes supporting successful comprehension. Thus, for the fMRI analyses, we restricted our analyses to correct trials only.

Imaging Data

We began by looking at activity in the auditory cortex, followed by whole-brain analyses. Activity in left and right auditory cortex for the noise and word conditions in young and older adults is shown in Figure 2a. We analyzed these data using a linear mixed effects analysis, implemented using the lme4 and lmerTest packages in R version 3.6.2 (RRID:SCR_001905). The full model included task (listen, repeat), stimulus (word, noise), hemisphere (left, right), age group (young, older), and accuracy on the repetition task as fixed factors, with by-subject random intercepts and random slopes for stimulus, task, and their interaction:

m1 <- lmer(activity ~ task * stimulus * hemisphere * agegroup + accuracy +

   (1 + stimulus * task | subject),

   data = dfleftright)

When including hemisphere as an additional random factor, the model failed to converge, and as our main interests lay elsewhere we settled on the above model.

Figure 2.

Activity in auditory cortex regions of interest. (a) Activity (parameter estimates, arbitrary units) in left and right auditory cortex as a function of age group and task. Individual participants are indicated by dots; error bars indicate mean ± standard error. (b) Activity in left and right auditory cortex during the word repetition task in older adults as a function of accuracy and hearing (hearing data available for only a subset of participants).

Full model results are shown in Table 2. The p values were obtained from the lmerTest package using Satterthwaite’s approximation for degrees of freedom. We found a significant interaction between task and stimulus, consistent with a greater degree of activation for words relative to noise in the repetition task compared to the attentive listening task. Importantly, there was also a significant interaction between stimulus and age group, consistent with greater age differences for words relative to noise. We verified this with follow-up t tests, collapsing over hemisphere, which showed a significant difference in activity between young and older adults for words (attentive listening: t(57.973) = 3.1428, p = 0.0026; repetition: t(56.619) = 2.5583, p = 0.0132), but not for noise (attentive listening: t(58.559) = 0.66361, p = 0.5095; repetition: t(57.241) = 0.19028, p = 0.8498). None of the other main effects or interactions were significant.
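For concreteness, these follow-up comparisons amount to Welch two-sample t tests on per-participant ROI averages; the fractional degrees of freedom reported above reflect the Welch correction, which R's t.test applies by default. A sketch, assuming a data frame roi with columns activity, age_group, stimulus, and task (names hypothetical):

# Words in the attentive listening task, young vs. older adults
words_listen <- subset(roi, stimulus == "word" & task == "listen")
t.test(activity ~ age_group, data = words_listen)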

Table 2.

Fixed effects results for auditory cortex model

  Estimate SE df t p  
Intercept 1.76976 0.84161 60.91 2.103 0.03962 *
task −0.21168 0.32741 85.14 −0.647 0.51966  
stimulus 0.35568 0.32741 125.70 1.518 0.13156  
hemisphere −0.01502 0.19203 236.00 −0.078 0.93771  
age −0.02927 0.39170 82.77 −0.075 0.94061  
accuracy 0.65940 1.09603 58.00 0.602 0.54977  
task:stimulus 1.66089 0.37515 105.02 4.427 2.34e−05 ***
task:hemisphere 0.29972 0.27157 236.00 1.104 0.27087  
stimulus:hemisphere 0.44485 0.27157 236.00 1.638 0.10274  
task:age −0.15187 0.47484 85.14 −0.320 0.74988  
stimulus:age 1.13313 0.33986 125.70 3.334 0.00112 **
hemisphere:age 0.19873 0.27851 236.00 0.714 0.47620  
task:stimulus:hemisphere −0.46865 0.38406 236.00 −1.22 0.22359  
task:stimulus:age 0.25428 0.54409 105.02 0.467 0.64122  
task:hemisphere:age −0.23793 0.39387 236.00 −0.604 0.54636  
stimulus:hemisphere:age −0.29720 0.39387 236.00 −0.755 0.45126  
task:stimulus:hemisphere:age 0.50413 0.55701 236.00 0.905 0.36635  

Note. *** p < 0.001; ** p < 0.01; * p < 0.05

To explore the possible contribution of other factors to older adults’ reduced activity in the auditory cortex, we conducted a series of exploratory correlation analyses relating auditory cortex activity to accuracy, hearing, and movement in the scanner (median FD). None of these analyses showed significant correlations with auditory cortex activity. Correlations with accuracy and hearing in the word repetition task are shown in Figure 2b. Overall, we interpret these results as consistent with less auditory activity in the older adults during spoken word perception (but not during our nonspeech control condition).
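These exploratory analyses amount to simple bivariate correlations on per-participant summaries. A sketch, assuming a data frame older (older adults only) with columns roi_activity, accuracy, pta_better, and median_fd (names hypothetical; the hearing correlation uses only the audiometric subset):

cor.test(older$roi_activity, older$accuracy)    # accuracy on repetition task
cor.test(older$roi_activity, older$pta_better)  # hearing (audiometric subset)
cor.test(older$roi_activity, older$median_fd)   # movement in the scanner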

To complement the ROI analyses, we next performed whole-brain analyses for all conditions of interest. Activity for word perception in the attentive listening condition (greater than the noise baseline) is shown in Figure 3 (with maxima listed in Tables 3–5). As expected, both young and older adults showed significant activity in the bilateral superior temporal cortex. Young adults showed significantly stronger activity in the superior temporal cortex near the auditory cortex. There were no regions in which older adults showed greater activity than young adults.

Figure 3.

Whole-brain activity for the attentive listening condition. Top: Unthresholded parameter estimates. Middle: Unthresholded t maps. Bottom: Thresholded t maps (p < 0.05, cluster corrected). White ovals highlight left and right auditory cortex.

Table 3.

Peak activations for attentive listening condition greater than noise, young adults

Region Size (μl) t score x y z
Left superior temporal gyrus 38,936 13.6 −62 −16 6
 Left inferior frontal gyrus   5.44 −52 12 20
 Left inferior frontal gyrus   5.08 −38 28 0
 Left inferior frontal gyrus   5.0 −50 20 −6
 Left inferior frontal gyrus   4.75 −52 28 16
 Left insula   4.08 −30 24 −4
 Left inferior frontal gyrus   3.89 −34 32 −14
 Left inferior frontal gyrus   3.7 −38 26 −14
 Left precentral gyrus   3.46 −60 2 26
Right superior temporal gyrus 23,432 12.4 60 −8 4
 Right superior temporal gyrus   10.2 66 −18 0
 Right superior temporal gyrus   4.07 48 12 −12
Supplemental motor area 9,144 5.39 −2 6 56
 Supplemental motor area   4.33 0 −4 64
 Supplemental motor area   4.01 −8 0 76
 Left paracentral lobule   3.93 −4 −28 78
 Left paracentral lobule   3.84 −4 −16 80
 Left postcentral gyrus   3.61 −18 −28 76
 Supplemental motor area   3.6 0 8 72

Table 5.

Peak activations for attentive listening condition greater than noise, young > older adults

Region Size (μl) t score x y z
Left superior temporal gyrus 8,472 6.29 −62 −16 8
 Left Heschl’s gyrus   4.18 −40 −30 10
 Left Heschl’s gyrus   4.08 −42 −24 12
Right superior temporal gyrus 3,400 4.84 52 −16 10
 Right superior temporal gyrus   3.37 62 8 2

Table 4.

Peak activations for attentive listening condition greater than noise, older adults

Region Size (μl) t score x y z
Right superior temporal gyrus 11,152 7.99 62 −8 2
 Right superior temporal gyrus   7.78 60 0 −4
 Right superior temporal gyrus   7.47 64 −20 0
Left superior temporal gyrus 9,664 7.87 −60 −10 0
 Left superior temporal gyrus   7.83 −64 −24 2

In addition to the overall pattern associated with word perception, we examined psycholinguistic effects of word frequency and phonological neighborhood density using a parametric modulation analysis. There were no significant effects of either word frequency or neighborhood density in the attentive listening condition.

Activity for word perception in the repetition condition (relative to a noise baseline) is shown in Figure 4 (with maxima in Tables 6–8). Again, both young and older adults showed significant activity in the bilateral temporal cortex, as well as in frontal regions related to articulatory planning, including the premotor cortex and the supplemental motor area. As with the attentive listening condition, young adults showed significantly more activity in superior temporal regions near the auditory cortex. There were no regions where older adults showed more activity than young adults.

Figure 4.

Whole-brain activity for the repetition condition (correct responses only). Top: Unthresholded parameter estimates. Middle: Unthresholded t maps. Bottom: Thresholded t maps (p < 0.05, cluster corrected). White ovals highlight left and right auditory cortex.

Table 6.

Peak activations for repetition condition greater than noise, young adults

Region Size (μl) t score x y z
Left superior temporal gyrus 581,432 14 −60 −14 4
 Left postcentral gyrus   13.9 −42 −16 38
 Left postcentral gyrus   13.8 −48 −14 40
 Right postcentral gyrus   13.7 44 −12 36
 Left postcentral gyrus   13.4 −52 −8 30
 Left putamen   12.9 −24 0 4
 Right superior temporal gyrus   12.5 56 −10 4
 Supplemental motor area   12.4 0 0 58
 Right superior temporal gyrus   12.3 52 −16 6
 Right superior temporal gyrus   11.4 66 −20 2
 Right putamen   10.6 28 0 −4
 Right precentral gyrus   10.6 20 −28 60
 Left paracentral lobule   10.5 −18 −30 60
 Left Heschl’s gyrus   9.89 −36 −30 14
 Left inferior frontal gyrus   9.58 −52 10 22
 Left inferior temporal gyrus   9.13 −44 −56 −10
 Left insula   8.84 −32 22 4
 Left inferior parietal cortex   8.41 −38 −36 42
 Right insula   8.19 34 20 6
 Dorsal anterior cingulate   8.11 −8 12 38

Table 8.

Peak activations for repetition condition greater than noise, young > older adults

Region Size (μl) t score x y z
Right Heschl’s gyrus 5,224 5.22 48 −20 10
 Right Heschl’s gyrus   4.99 40 −26 16
 Right superior temporal sulcus   4.18 60 −30 0
 Right superior temporal gyrus   4.08 68 −26 8
 Right superior temporal gyrus   3.79 56 −8 4
 Right superior temporal gyrus   3.62 48 −34 10
Left Heschl’s gyrus 4,600 4.92 −36 −30 14
 Left superior temporal gyrus   4.17 −64 −18 8
 Left superior temporal gyrus   3.78 −62 −32 14
 Left superior temporal gyrus   3.66 −60 −18 −4
 Left Heschl’s gyrus   3.47 −46 −24 6
 Left superior temporal gyrus   3.29 −62 −38 8
 Left superior temporal gyrus   3.19 −52 −16 2
Left postcentral gyrus 4,248 5.26 −50 −14 44
 Left postcentral gyrus   5.21 −46 −16 42
 Left postcentral gyrus   5.12 −42 −18 40
 Left postcentral gyrus   3.88 −54 −8 28

Table 7.

Peak activations for repetition condition greater than noise, older adults

Region Size (μl) t score x y z
Left postcentral gyrus 278,528 10.1 −44 −14 34
 Supplemental motor area   9.49 −2 4 56
 Right postcentral gyrus   9.35 42 −12 36
 Left postcentral gyrus   9.26 −56 −4 24
 Right superior temporal gyrus   9.19 64 −18 0
 Right postcentral gyrus   8.14 56 −4 28
 Left superior temporal gyrus   8.01 −60 −14 2
 Left superior temporal gyrus   7.84 −44 −22 10
 Left superior parietal cortex   7.68 −26 −66 52
 Left inferior frontal gyrus   7.51 −44 8 26
 Left superior temporal gyrus   7.39 −62 −28 4
 Right precentral gyrus   6.87 20 −28 60
 Right insula   6.76 32 26 0
 Right putamen   6.72 18 16 0
 Left insula   6.48 −30 24 4
 Left postcentral gyrus   6.44 −18 −30 58
 Left caudate   6.44 −16 14 8
 Right insula   6.32 36 18 8
 Left inferior parietal cortex   6.25 −46 −32 40
 Fornix   5.97 6 0 6
Left thalamus 3,784 4.13 −12 −18 0
 Superior cerebellar pedunculus   3.98 0 −24 2
 Left superior cerebellar pedunculus   3.6 −4 −28 −14
 Right superior cerebellar pedunculus   3.56 2 −14 −8
 Right thalamus   3.41 12 −20 2
 Right thalamus   3.35 16 −18 −2
 Left superior cerebellar pedunculus   3.33 −6 −34 −2
 Right thalamus   3.3 14 −18 8
 Left superior cerebellar pedunculus   3.04 −10 −34 −20

In addition to the overall pattern associated with word perception, we examined psycholinguistic effects of word frequency and phonological neighborhood density using a parametric modulation analysis. There were no significant effects of either word frequency or neighborhood density in the repetition condition.

Finally, we directly compared the attentive listening and repetition conditions, shown in Figure 5 (with maxima in Tables 9 and 10). Compared to the attentive listening condition, during the repetition condition both young and older listeners showed increased activity in motor and premotor cortex. There were no significant differences between young and older adults.

Figure 5.

Whole-brain activity for the repetition condition > attentive listening. Top: Unthresholded parameter estimates. Middle: Unthresholded t maps. Bottom: Thresholded t maps (p < 0.05, cluster corrected). White ovals highlight left and right auditory cortex. There were no significant differences between young and older adults in the repetition > listening contrast.

Table 9.

Peak activations for word recognition in the repetition condition greater than the listening condition, young adults

Region Size (μl) t score x y z
Left postcentral gyrus 311,408 13.8 −44 −14 36
 Right postcentral gyrus   13.8 44 −12 36
 Left putamen   10.4 −24 0 4
 Supplemental motor area   9.97 0 −2 60
 Right postcentral gyrus   9.81 20 −28 60
 Left Heschl’s gyrus   9.36 −36 −32 14
 Right superior temporal gyrus   9.11 46 −20 8
 Left postcentral gyrus   9.06 −18 −30 60
 Right Heschl’s gyrus   8.96 38 −26 14
 Right putamen   8.91 28 0 −4
 Left insula   7.16 −32 20 8
 Left inferior frontal gyrus   7.13 −46 8 20
 Right insula   7.11 34 22 4
 Left superior temporal sulcus   6.23 −52 −42 6
 Anterior cingulate   6 −8 12 38
 Right superior temporal gyrus   5.61 64 −24 4
 Left inferior parietal cortex   5.6 −38 −36 42
 Right calcarine sulcus   5.47 18 −66 8
 Right cerebellum   5.31 24 −60 −22

Table 10.

Peak activations for word recognition in the repetition condition greater than the listening condition, older adults

Region Size (μl) t score x y z
Left postcentral gyrus 238,712 10.21 −44 −14 34
 Right postcentral gyrus   9.04 42 −12 36
 Supplemental motor area   7.56 −2 2 56
 Right postcentral gyrus   7.43 54 −4 26
 Left Heschl’s gyrus   6.45 −42 −24 10
 Left superior parietal cortex   6.43 −28 −64 54
 Left postcentral gyrus   6.03 −18 −30 60
 Right precentral gyrus   6.02 20 −28 60
 Left precentral gyrus   6 −46 2 38
 Right putamen   5.88 20 16 0
 Right caudate   5.84 18 14 6
 Right insula   5.73 34 28 0
 Left inferior parietal cortex   5.69 −44 −32 40
 Anterior cingulate   5.49 −6 14 42
 Left postcentral gyrus   5.44 −26 −40 62
 Left insula   5.38 −46 14 −2
 Right superior parietal cortex   5.29 18 −68 56
 Left insula   5.28 −32 24 4
 Left caudate   5.16 −14 14 8
 Left precentral gyrus   5.13 −32 −2 64
Right superior temporal gyrus 4,632 4.12 58 −22 0
 Right Heschl’s gyrus   4.11 34 −24 4
 Right superior temporal gyrus   4.1 64 −16 0
 Right superior temporal gyrus   4.09 52 −16 4
 Right posterior insula   3.14 32 −24 16

DISCUSSION

We used fMRI to examine neural activity during spoken word recognition in quiet for young and older adult listeners. In both ROI and whole-brain analyses, we found converging evidence for reduced activity in the auditory cortex for the older adults. The age differences in auditory cortex activation were present in both the attentive listening task and the word repetition task: Although the repetition task resulted in more widespread activation overall, patterns of age-related differences in the auditory cortex were comparable.

There are a number of possible explanations for older adults’ reduced activity during spoken word recognition. One possibility is that age differences in intelligibility might play a role. Intelligible speech is associated with increased activity in a broad network of frontal and temporal regions (Davis & Johnsrude, 2003; Kuchinsky et al., 2012), and in prior studies of older adults, intelligibility has correlated with auditory cortex activity (Harris et al., 2009). We restricted our analyses to correct responses in the repetition condition, and found no statistical support for a relationship between intelligibility and auditory cortex activation (although numerically, participants with better accuracy showed more activity than participants with worse accuracy).

The fact that young and older adults showed comparable activity in the auditory cortex during noise trials, with age differences emerging only for word recognition trials, is noteworthy. Group differences in activation can be driven not only by neural processing but also by factors such as neurovascular coupling, the goodness of fit of a canonical hemodynamic response, or movement within the scanner: artifacts that might differentially affect model parameter estimates in young and older adults but are not of theoretical interest in this context. Although such contributions are impossible to rule out completely, the selective age differences for speech (but not noise) are consistent with a condition-specific, and thus we argue neural, interpretation.

Recent evidence suggests that age-related changes in temporal sensitivity in auditory regions can be detected with fMRI (Erb, Schmitt, & Obleser, 2020). Although our current stimuli do not allow us to explore specific acoustic features, one possibility is that the age-related differences in auditory activity we observed reflect well-known changes in auditory cortical processing that occur in normal aging (Peelle & Wingfield, 2016). Given the greater acoustic complexity of the words relative to the noise, differences in acoustic processing might drive overall response differences. Such changes may also reflect decreased stimulation resulting from hearing loss, a possibility we lacked sufficient hearing data to rule out. Relatedly, we cannot completely exclude audibility effects: even though we limited our analyses to correctly identified trials, specific acoustic features may still have been less audible to the older adults. It remains an open question whether varying the presentation level of the stimuli would change the age effects we observed.

Age differences in auditory processing are not the only explanation for our results. The auditory cortex is positioned in a hierarchy of speech processing regions that include both ascending and descending projections (Davis & Johnsrude, 2007; Peelle, Johnsrude, et al., 2010). The auditory cortex not only is sensitive to changes in acoustic information, but also reflects top-down effects of expectation and prediction (Signoret, Johnsrude, Classon, & Rudner, 2018; Sohoglu, Peelle, Carlyon, & Davis, 2012; Wild et al., 2012). Thus, the observed age differences in the auditory cortex may reflect differential top-down modulation of auditory activity in young and older adult listeners.

Indeed, prior to conducting this study, we expected to observe increased activity (e.g., in the prefrontal cortex) for older adults relative to young adults, reflecting top-down compensation for reduced auditory sensitivity. Such activity would be consistent with increased cognitive demand during speech perception in listeners with hearing loss or other acoustic challenges (Peelle, 2018; Pichora-Fuller et al., 2016). Although we were somewhat surprised not to see this, in retrospect it is perhaps unsurprising: the stimuli in the current study were presented in quiet, and thus may not have challenged perception sufficiently to robustly engage frontal brain networks. We conclude that during perception of acoustically clear words, older adults do not seem to require additional resources from the frontal cortex; whether this changes with increasing speech demands (either acoustic or linguistic) remains an open question.

We did not observe significant effects of either word frequency or phonological neighborhood density on activity during spoken word recognition. These results stand in contrast to prior fMRI studies showing frequency effects in visual word perception (Hauk, Davis, & Pulvermüller, 2008; Kronbichler et al., 2004) and word frequency effects in electrophysiological responses (Embick, Hackl, Schaeffer, Kelepir, & Marantz, 2001). Prior fMRI studies of lexical competition (including phonological neighborhood density) have been mixed, with some finding effects (Zhuang, Randall, Stamatakis, Marslen-Wilson, & Tyler, 2011) and others not (Binder et al., 2003). It may be that a wider range of frequency or density values, or a greater number of stimuli, would be needed to identify such effects.

Finally, we found largely comparable age differences in the attentive listening and repetition conditions in the auditory cortex. The similarity of the results suggests that using a repetition task may be a reasonable choice in studies of spoken word recognition: Although repetition tasks necessarily engage regions related to articulation and hearing one’s own voice, in our data these were not differentially affected by age. An advantage of using a repetition task, of course, is that trial-by-trial accuracy measures can be obtained, which are frequently useful. It is worth noting that our finding of comparable activity in young and older adults for attentive listening and repetition tasks may not generalize to other stimuli or tasks (Campbell et al., 2016; Davis et al., 2014).

A significant limitation of our current study is that we only collected hearing sensitivity data on a minority of our participants. Thus, although we saw a trend toward poorer hearing being associated with reduced auditory cortex activation, it is challenging to draw any firm conclusions regarding the relationship between hearing sensitivity and brain activity. Prior studies using sentence-level materials have found relationships between hearing sensitivity and brain activity in both young (Lee et al., 2018) and older (Peelle, Troiani, Grossman, & Wingfield, 2011) adults. Future investigations with a larger sample of participants with hearing data will be needed to further explore the effects of hearing in spoken word recognition.

From a broader perspective, the link between spoken word recognition and everyday communication is not always straightforward. Much of our everyday communication occurs in the context of semantically meaningful, coherent sentences, frequently with the added availability of visual speech and gesture cues. Given potential age differences in reliance on many of these cues—including older adults’ seemingly greater reliance on semantic context (Rogers, 2016; Rogers, Jacoby, & Sommers, 2012; Wingfield & Lindfield, 1995)—it seems likely that our findings using isolated spoken words cannot be extrapolated to richer naturalistic settings.

In summary, we observed largely overlapping brain regions supporting spoken word recognition in young and older adults in the absence of background noise. Older adults showed less activity than young adults in the auditory cortex when listening to words, but not noise. These patterns of age difference were present regardless of the task (attentive listening vs. repetition).

ACKNOWLEDGMENTS

Research reported here was funded by grant R01 DC014281 from the US National Institutes of Health. The multiband echo planar imaging sequence was provided by the University of Minnesota Center for Magnetic Resonance Research. We are grateful to Linda Hood for assistance with data collection, and to Henry Greenstein, Ben Muller, Olivia Murray, Connor Perkins, and Tracy Zhang for help with data scoring.

FUNDING INFORMATION

Jonathan E. Peelle, National Institute on Deafness and Other Communication Disorders (http://dx.doi.org/10.13039/100000055), Award ID: R01 DC014281.

AUTHOR CONTRIBUTIONS

Chad S. Rogers: Conceptualization: Equal; Data curation: Equal; Investigation: Equal; Project administration: Equal; Supervision: Supporting; Validation: Equal; Writing–Review & Editing: Equal. Michael S. Jones: Formal analysis: Lead; Methodology: Equal; Software: Lead; Validation: Lead; Writing–Review & Editing: Equal. Sarah McConkey: Investigation: Equal; Project administration: Equal; Writing–Review & Editing: Equal. Brent Spehar: Conceptualization: Equal; Investigation: Supporting; Resources: Supporting; Writing–Review & Editing: Equal. Kristin J. Van Engen: Conceptualization: Equal; Funding acquisition: Supporting; Project administration: Equal; Writing–Review & Editing: Equal. Mitchell S. Sommers: Conceptualization: Equal; Funding acquisition: Supporting; Project administration: Supporting; Writing–Review & Editing: Equal. Jonathan E. Peelle: Conceptualization: Equal; Data curation: Equal; Formal analysis: Equal; Funding acquisition: Lead; Project administration: Equal; Supervision: Lead; Visualization: Lead; Writing–Original Draft: Lead; Writing–Review & Editing: Equal.

Contributor Information

Chad S. Rogers, Department of Psychology, Union College, Schenectady, NY, USA.

Michael S. Jones, Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO, USA.

Sarah McConkey, Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO, USA.

Brent Spehar, Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO, USA.

Kristin J. Van Engen, Department of Psychological and Brain Sciences, Washington University in St. Louis, St. Louis, MO, USA.

Mitchell S. Sommers, Department of Psychological and Brain Sciences, Washington University in St. Louis, St. Louis, MO, USA.

Jonathan E. Peelle, Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO, USA.

REFERENCES

  1. Ashburner, J. , & Friston, K. J. (2005). Unified segmentation. NeuroImage, 26, 839–851. DOI: https://doi.org/10.1016/j.neuroimage.2005.02.018, PMID: 15955494 [DOI] [PubMed] [Google Scholar]
  2. Balota, D. A. , Yap, M. J. , Cortese, M. J. , Hutchison, K. A. , Kessler, B. , Loftis, B. , … Treiman, R. (2007). The English lexicon project. Behavior Research Methods, 39, 445–459. DOI: https://doi.org/10.3758/BF03193014, PMID: 17958156 [DOI] [PubMed] [Google Scholar]
  3. Bates, D. , Mächler, M. , Bolker, B. M. , & Walker, S. C. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1–48. DOI: 10.18637/jss.v067.i01 [DOI] [Google Scholar]
  4. Bilodeau-Mercure, M. , Lortie, C. L. , Sato, M. , Guitton, M. , & Tremblay, P. (2015). The neurobiology of speech perception decline in aging. Brain Structure & Function, 220, 979–997. DOI: https://doi.org/10.1007/s00429-013-0695-3, PMID: 24402675 [DOI] [PubMed] [Google Scholar]
  5. Bilodeau-Mercure, M. , & Tremblay, P. (2016). Age differences in sequential speech production: Articulatory and physiological factors. Journal of the American Geriatrics Society, 64(11), e177–e182. DOI: https://doi.org/10.1111/jgs.14491, PMID: 27783395 [DOI] [PubMed] [Google Scholar]
  6. Binder, J. R. , Frost, J. A. , Hammeke, T. A. , Bellgowan, P. S. , Springer, J. A. , Kaufman, J. N. , & Possing, E. T. (2000). Human temporal lobe activation by speech and nonspeech sounds. Cerebral Cortex, 10(5), 512–528. DOI: https://doi.org/10.1093/cercor/10.5.512, PMID: 10847601 [DOI] [PubMed] [Google Scholar]
  7. Binder, J. R. , McKiernan, K. A. , Parsons, M. E. , Westbury, C. F. , Possing, E. T. , Kaufman, J. N. , & Buchanan, L. (2003). Neural correlates of lexical access during visual word recognition. Journal of Cognitive Neuroscience, 15(3), 373–393. DOI: https://doi.org/10.1162/089892903321593108, PMID: 12729490 [DOI] [PubMed] [Google Scholar]
  8. Brysbaert, M. , Warriner, A. B. , & Kuperman, V. (2014). Concreteness ratings for 40 thousand generally known English word lemmas. Behavior Research Methods, 46(3), 904–911. DOI: https://doi.org/10.3758/s13428-013-0403-5, PMID: 24142837 [DOI] [PubMed] [Google Scholar]
  9. Campbell, K. L. , Samu, D. , Davis, S. W. , Geerligs, L. , Mustafa, A. , Tyler, L. K. , & for Cambridge Centre for Aging and Neuroscience. (2016). Robust resilience of the frontotemporal syntax system to aging. Journal of Neuroscience, 36, 5214–5227. DOI: https://doi.org/10.1523/JNEUROSCI.4561-15.2016, PMID: 27170120, PMCID: PMC4863058 [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Cusack, R. , Vicente-Grabovetsky, A. , Mitchell, D. J. , Wild, C. J. , Auer, T. , Linke, A. C. , & Peelle, J. E. (2015). Automatic analysis (aa): Efficient neuroimaging workflows and parallel processing using Matlab and XML. Frontiers in Neuroinformatics, 8, 90. DOI: https://doi.org/10.3389/fninf.2014.00090, PMID: 25642185, PMCID: PMC4295539 [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Davis, M. H. , & Johnsrude, I. S. (2003). Hierarchical processing in spoken language comprehension. Journal of Neuroscience, 23(8), 3423–3431. DOI: https://doi.org/10.1523/JNEUROSCI.23-08-03423.2003, PMID: 12716950, PMCID: PMC6742313 [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Davis, M. H. , & Johnsrude, I. S. (2007). Hearing speech sounds: Top-down influences on the interface between audition and speech perception. Hearing Research, 229, 132–147. DOI: https://doi.org/10.1016/j.heares.2007.01.014, PMID: 17317056 [DOI] [PubMed] [Google Scholar]
  13. Davis, S. W. , Zhuang, J. , Wright, P. , & Tyler, L. K. (2014). Age-related sensitivity to task-related modulation of language-processing networks. Neuropsychologia, 63, 107–115. DOI: https://doi.org/10.1016/j.neuropsychologia.2014.08.017, PMID: 25172389, PMCID: PMC4410794 [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Devlin, J. T. , & Poldrack, R. A. (2007). In praise of tedious anatomy. NeuroImage, 37(4), 1033–1041. DOI: https://doi.org/10.1016/j.neuroimage.2006.09.055, PMID: 17870621, PMCID: PMC1986635 [DOI] [PMC free article] [PubMed] [Google Scholar]
  15. Eckert, M. A. , Menon, V. , Walczak, A. , Ahlstrom, J. , Denslow, S. , Horwitz, A. , & Dubno, J. R. (2009). At the heart of the ventral attention system: The right anterior insula. Human Brain Mapping, 30, 2530–2541. DOI: https://doi.org/10.1002/hbm.20688, PMID: 19072895, PMCID: PMC2712290 [DOI] [PMC free article] [PubMed] [Google Scholar]
  16. Edmister, W. B. , Talavage, T. M. , Ledden, P. J. , & Weisskoff, R. M. (1999). Improved auditory cortex imaging using clustered volume acquisitions. Human Brain Mapping, 7, 89–97. DOI: https://doi.org/10.1002/(SICI)1097-0193(1999)7:2<89::AID-HBM2>3.0.CO;2-N, PMID: 9950066, PMCID: PMC6873308 [DOI] [PMC free article] [PubMed] [Google Scholar]
  17. Eickhoff, S. B. , Stephan, K. E. , Mohlberg, H. , Grefkes, C. , Fink, G. R. , Amunts, K. , & Zilles, K. (2005). A new SPM toolbox for combining probabilistic cytoarchitectonic maps and functional imaging data. NeuroImage, 25(4), 1325–1335. DOI: https://doi.org/10.1016/j.neuroimage.2004.12.034, PMID: 15850749 [DOI] [PubMed] [Google Scholar]
  18. Embick, D. , Hackl, M. , Schaeffer, J. , Kelepir, M. , & Marantz, A. (2001). A magnetoencephalographic component whose latency reflects lexical frequency. Cognitive Brain Research, 10(3), 345–348. DOI: https://doi.org/10.1016/s0926-6410(00)00053-7, PMID: 11167059 [DOI] [PubMed] [Google Scholar]
  19. Erb, J. , Schmitt, L.-M. , & Obleser, J. (2020). Temporal selectivity declines in the aging human auditory cortex. In bioRxiv, 2020.01.24.919126. DOI: 10.1101/2020.01.24.919126 [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Feinberg, D. A. , Moeller, S. , Smith, S. M. , Auerbach, E. , Ramanna, S. , Glasser, M. F. , … Yacoub, E. (2010). Multiplexed echo planar imaging for sub-second whole brain FMRI and fast diffusion imaging. PLOS One, 5, e15710. DOI: https://doi.org/10.1371/journal.pone.0015710, PMID: 21187930, PMCID: PMC3004955 [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. Gorgolewski, K. J. , Varoquaux, G. , Rivera, G. , Schwarz, Y. , Ghosh, S. S. , Maumet, C. , … Margulies, D. S. (2015). NeuroVault.org: A web-based repository for collecting and sharing unthresholded statistical maps of the human brain. Frontiers in Neuroinformatics, 9, 8. DOI: https://doi.org/10.3389/fninf.2015.00008, PMID: 25914639, PMCID: PMC4392315 [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Gracco, V. L. , Tremblay, P. , & Pike, B. (2005). Imaging speech production using fMRI. NeuroImage, 26(1), 294–301. DOI: https://doi.org/10.1016/j.neuroimage.2005.01.033, PMID: 15862230 [DOI] [PubMed] [Google Scholar]
  23. Hall, D. A. , Haggard, M. P. , Akeroyd, M. A. , Palmer, A. R. , Summerfield, A. Q. , Elliott, M. R. , … Bowtell, R. W. (1999). “Sparse” temporal sampling in auditory fMRI. Human Brain Mapping, 7(3), 213–223. DOI: https://doi.org/10.1002/(SICI)1097-0193(1999)7:3<213::AID-HBM5>3.0.CO;2-N, PMID: 10194620, PMCID: PMC6873323 [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. Harris, K. C. , Dubno, J. R. , Keren, N. I. , Ahlstrom, J. B. , & Eckert, M. A. (2009). Speech recognition in younger and older adults: A dependency on low-level auditory cortex. Journal of Neuroscience, 29, 6078–6087. DOI: https://doi.org/10.1523/JNEUROSCI.0412-09.2009, PMID: 19439585, PMCID: PMC2717741 [DOI] [PMC free article] [PubMed] [Google Scholar]
  25. Hauk, O. , Davis, M. H. , & Pulvermüller, F. (2008). Modulation of brain activity by multiple lexical and word form variables in visual word recognition: A parametric fMRI study. NeuroImage, 42(3), 1185–1195. DOI: https://doi.org/10.1016/j.neuroimage.2008.05.054, PMID: 18582580 [DOI] [PubMed] [Google Scholar]
  26. Hickok, G. , & Poeppel, D. (2007). The cortical organization of speech processing. Nature Reviews Neuroscience, 8, 393–402. DOI: https://doi.org/10.1038/nrn2113, PMID: 17431404 [DOI] [PubMed] [Google Scholar]
  27. Humes, L. E. (1996). Speech understanding in the elderly. Journal of the American Academy of Audiology, 7, 161–167. PMID: 8780988 [PubMed] [Google Scholar]
  28. Hwang, J.-H. , Li, C.-W. , Wu, C.-W. , Chen, J.-H. , & Liu, T.-C. (2007). Aging effects on the activation of the auditory cortex during binaural speech listening in white noise: An fMRI study. Audiology & Neuro-Otology, 12(5), 285–294. DOI: https://doi.org/10.1159/000103209, PMID: 17536197 [DOI] [PubMed] [Google Scholar]
  29. Jenkinson, M. , Beckmann, C. F. , Behrens, T. E. J. , Woolrich, M. W. , & Smith, S. M. (2012). FSL. NeuroImage, 62(2), 782–790. DOI: https://doi.org/10.1016/j.neuroimage.2011.09.015, PMID: 21979382 [DOI] [PubMed] [Google Scholar]
  30. Kronbichler, M. , Hutzler, F. , Wimmer, H. , Mair, A. , Staffen, W. , & Ladurner, G. (2004). The visual word form area and the frequency with which words are encountered: Evidence from a parametric fMRI study. NeuroImage, 21(3), 946–953. DOI: https://doi.org/10.1016/j.neuroimage.2003.10.021, PMID: 15006661 [DOI] [PubMed] [Google Scholar]
  31. Kuchinsky, S. E. , Vaden, K. I., Jr. , Keren, N. I. , Harris, K. C. , Ahlstrom, J. B. , Dubno, J. R. , & Eckert, M. A. (2012). Word intelligibility and age predict visual cortex activity during word listening. Cerebral Cortex, 22, 1360–1371. DOI: https://doi.org/10.1093/cercor/bhr211, PMID: 21862447, PMCID: PMC3357178 [DOI] [PMC free article] [PubMed] [Google Scholar]
  32. Kuznetsova, A. , Brockhoff, P. , & Christensen, R. (2017). lmerTest Package: Tests in linear mixed effects models. Journal of Statistical Software, 82(13), 1–26. DOI: 10.18637/jss.v082.i13 [DOI] [Google Scholar]
  33. Lash, A., Rogers, C. S., Zoller, A., & Wingfield, A. (2013). Expectation and entropy in spoken word recognition: Effects of age and hearing acuity. Experimental Aging Research, 39, 235–253. DOI: https://doi.org/10.1080/0361073X.2013.779175
  34. Lee, Y. S., Wingfield, A., Min, N. E., Kotloff, E., Grossman, M., & Peelle, J. E. (2018). Differences in hearing acuity among “normal-hearing” young adults modulate the neural basis for speech comprehension. eNeuro, 5(3), e0263-17.2018. DOI: https://doi.org/10.1523/ENEURO.0263-17.2018
  35. Manan, H. A., Franz, E. A., Yusoff, A. N., & Mukari, S. Z.-M. S. (2015). The effects of aging on the brain activation pattern during a speech perception task: An fMRI study. Aging Clinical and Experimental Research, 27(1), 27–36. DOI: https://doi.org/10.1007/s40520-014-0240-0
  36. Manan, H. A., Yusoff, A. N., Franz, E. A., & Mukari, S. Z.-M. S. (2017). Effects of aging and background babble noise on speech perception processing: An fMRI study. Neurophysiology, 49(6), 441–452. DOI: https://doi.org/10.1007/s11062-018-9707-5
  37. Morosan, P., Rademacher, J., Schleicher, A., Amunts, K., Schormann, T., & Zilles, K. (2001). Human primary auditory cortex: Cytoarchitectonic subdivisions and mapping into a spatial reference system. NeuroImage, 13, 684–701. DOI: https://doi.org/10.1006/nimg.2000.0715
  38. Mumford, J. A., Poline, J.-B., & Poldrack, R. A. (2015). Orthogonalization of regressors in fMRI models. PLOS ONE, 10, e0126255. DOI: https://doi.org/10.1371/journal.pone.0126255
  39. Neta, M., Miezin, F. M., Nelson, S. M., Dubis, J. W., Dosenbach, N. U. F., Schlaggar, B. L., & Petersen, S. E. (2015). Spatial and temporal characteristics of error-related activity in the human brain. Journal of Neuroscience, 35, 253–266. DOI: https://doi.org/10.1523/JNEUROSCI.1313-14.2015
  40. Park, D. C., Lautenschlager, G., Hedden, T., Davidson, N. S., Smith, A. D., & Smith, P. K. (2002). Models of visuospatial and verbal memory across the adult life span. Psychology and Aging, 17(2), 299–320. DOI: https://doi.org/10.1037/0882-7974.17.2.299
  41. Peelle, J. E. (2018). Listening effort: How the cognitive consequences of acoustic challenge are reflected in brain and behavior. Ear and Hearing, 39(2), 204–214. DOI: https://doi.org/10.1097/AUD.0000000000000494
  42. Peelle, J. E. (2019). Language and aging. In G. I. de Zubicaray & N. O. Schiller (Eds.), The Oxford handbook of neurolinguistics (pp. 295–316). Oxford: Oxford University Press.
  43. Peelle, J. E., Johnsrude, I. S., & Davis, M. H. (2010). Hierarchical processing for speech in human auditory cortex and beyond. Frontiers in Human Neuroscience, 4, 51. DOI: https://doi.org/10.3389/fnhum.2010.00051
  44. Peelle, J. E., Troiani, V., Grossman, M., & Wingfield, A. (2011). Hearing loss in older adults affects neural systems supporting speech comprehension. Journal of Neuroscience, 31(35), 12638–12643. DOI: https://doi.org/10.1523/JNEUROSCI.2559-11.2011
  45. Peelle, J. E., Troiani, V., Wingfield, A., & Grossman, M. (2010). Neural processing during older adults’ comprehension of spoken sentences: Age differences in resource allocation and connectivity. Cerebral Cortex, 20, 773–782. DOI: https://doi.org/10.1093/cercor/bhp142
  46. Peelle, J. E., & Wingfield, A. (2016). The neural consequences of age-related hearing loss. Trends in Neurosciences, 39(7), 486–497. DOI: https://doi.org/10.1016/j.tins.2016.05.001
  47. Pichora-Fuller, M. K., Kramer, S. E., Eckert, M. A., Edwards, B., Hornsby, B. W. Y., Humes, L. E., … Wingfield, A. (2016). Hearing impairment and cognitive energy: The framework for understanding effortful listening (FUEL). Ear and Hearing, 37, 5S–27S. DOI: https://doi.org/10.1097/AUD.0000000000000306
  48. Poldrack, R. A., Barch, D. M., Mitchell, J. P., Wager, T. D., Wagner, A. D., Devlin, J. T., … Milham, M. P. (2013). Toward open sharing of task-based fMRI data: The OpenfMRI project. Frontiers in Neuroinformatics, 7, 12. DOI: https://doi.org/10.3389/fninf.2013.00012
  49. Power, J. D., Barnes, K. A., Snyder, A. Z., Schlaggar, B. L., & Petersen, S. E. (2012). Spurious but systematic correlations in functional connectivity MRI networks arise from subject motion. NeuroImage, 59, 2142–2154. DOI: https://doi.org/10.1016/j.neuroimage.2011.10.018
  50. Power, J. D., & Petersen, S. E. (2013). Control-related systems in the human brain. Current Opinion in Neurobiology, 23, 223–228. DOI: https://doi.org/10.1016/j.conb.2012.12.009
  51. Rogers, C. S. (2016). Semantic priming, not repetition priming, is to blame for false hearing. Psychonomic Bulletin & Review, 24, 1194–1204. DOI: https://doi.org/10.3758/s13423-016-1185-4
  52. Rogers, C. S., Jacoby, L. L., & Sommers, M. S. (2012). Frequent false hearing by older adults: The role of age differences in metacognition. Psychology and Aging, 27, 33–45. DOI: https://doi.org/10.1037/a0026231
  53. Rorden, C., & Brett, M. (2000). Stereotaxic display of brain lesions. Behavioural Neurology, 12, 191–200. DOI: https://doi.org/10.1155/2000/421719
  54. Shafto, M. A., & Tyler, L. K. (2014). Language in the aging brain: The network dynamics of decline and preservation. Science, 346, 583–587. DOI: https://doi.org/10.1126/science.1254404
  55. Shannon, R. V., Zeng, F.-G., Kamath, V., Wygonski, J., & Ekelid, M. (1995). Speech recognition with primarily temporal cues. Science, 270, 303–304. DOI: https://doi.org/10.1126/science.270.5234.303
  56. Siegel, J. S., Power, J. D., Dubis, J. W., Vogel, A. C., Church, J. A., Schlaggar, B. L., & Petersen, S. E. (2014). Statistical improvements in functional magnetic resonance imaging analyses produced by censoring high-motion data points. Human Brain Mapping, 35, 1981–1996. DOI: https://doi.org/10.1002/hbm.22307
  57. Signoret, C., Johnsrude, I., Classon, E., & Rudner, M. (2018). Combined effects of form- and meaning-based predictability on perceived clarity of speech. Journal of Experimental Psychology: Human Perception and Performance, 44(2), 277–285. DOI: https://doi.org/10.1037/xhp0000442
  58. Sohoglu, E., Peelle, J. E., Carlyon, R. P., & Davis, M. H. (2012). Predictive top-down integration of prior knowledge during speech perception. Journal of Neuroscience, 32(25), 8443–8453. DOI: https://doi.org/10.1523/JNEUROSCI.5069-11.2012
  59. Sommers, M. S., & Danielson, S. M. (1999). Inhibitory processes and spoken word recognition in young and older adults: The interaction of lexical competition and semantic context. Psychology and Aging, 14, 458–472. DOI: https://doi.org/10.1037/0882-7974.14.3.458
  60. Tremblay, P., Sato, M., & Deschamps, I. (2017). Age differences in the motor control of speech: An fMRI study of healthy aging. Human Brain Mapping, 38(5), 2751–2771. DOI: https://doi.org/10.1002/hbm.23558
  61. Tyler, L. K., Shafto, M. A., Randall, B., Wright, P., Marslen-Wilson, W. D., & Stamatakis, E. A. (2010). Preserving syntactic processing across the adult life span: The modulation of the frontotemporal language system in the context of age-related atrophy. Cerebral Cortex, 20, 352–364. DOI: https://doi.org/10.1093/cercor/bhp105
  62. Tzourio-Mazoyer, N., Landeau, B., Papathanassiou, D., Crivello, F., Etard, O., Delcroix, N., … Joliot, M. (2002). Automated anatomical labeling of activations in SPM using a macroscopic anatomical parcellation of the MNI MRI single-subject brain. NeuroImage, 15, 273–289. DOI: https://doi.org/10.1006/nimg.2001.0978
  63. Vaden, K. I., Jr., Kuchinsky, S. E., Ahlstrom, J. B., Teubner-Rhodes, S. E., Dubno, J. R., & Eckert, M. A. (2016). Cingulo-opercular function during word recognition in noise for older adults with hearing loss. Experimental Aging Research, 42, 67–82. DOI: https://doi.org/10.1080/0361073X.2016.1108784
  64. Vaden, K. I., Jr., Kuchinsky, S. E., Cute, S. L., Ahlstrom, J. B., Dubno, J. R., & Eckert, M. A. (2013). The cingulo-opercular network provides word-recognition benefit. Journal of Neuroscience, 33, 18979–18986. DOI: https://doi.org/10.1523/JNEUROSCI.1417-13.2013
  65. Van Casteren, M., & Davis, M. H. (2007). Match: A program to assist in matching the conditions of factorial experiments. Behavior Research Methods, 39(4), 973–978. DOI: https://doi.org/10.3758/BF03192992
  66. Wild, C. J., Yusuf, A., Wilson, D. E., Peelle, J. E., Davis, M. H., & Johnsrude, I. S. (2012). Effortful listening: The processing of degraded speech depends critically on attention. Journal of Neuroscience, 32(40), 14010–14021. DOI: https://doi.org/10.1523/JNEUROSCI.1528-12.2012
  67. Wingfield, A., Aberdeen, J. S., & Stine, E. A. (1991). Word onset gating and linguistic context in spoken word recognition by young and elderly adults. Journal of Gerontology, 46(3), P127–P129. DOI: https://doi.org/10.1093/geronj/46.3.P127
  68. Wingfield, A., & Grossman, M. (2006). Language and the aging brain: Patterns of neural compensation revealed by functional brain imaging. Journal of Neurophysiology, 96, 2830–2839. DOI: https://doi.org/10.1152/jn.00628.2006
  69. Wingfield, A., & Lindfield, K. C. (1995). Multiple memory systems in the processing of speech: Evidence from aging. Experimental Aging Research, 21(2), 101–121. DOI: https://doi.org/10.1080/03610739508254272
  70. Wingfield, A., & Stine-Morrow, E. A. L. (2000). Language and speech. In F. I. M. Craik & T. A. Salthouse (Eds.), The handbook of aging and cognition (2nd ed., pp. 359–416). Mahwah, NJ: Lawrence Erlbaum.
  71. Wong, P. C. M., Jin, J. X., Gunasekera, G. M., Abel, R., Lee, E. R., & Dhar, S. (2009). Aging and cortical mechanisms of speech perception in noise. Neuropsychologia, 47(3), 693–703. DOI: https://doi.org/10.1016/j.neuropsychologia.2008.11.032
  72. Zhuang, J., Randall, B., Stamatakis, E. A., Marslen-Wilson, W. D., & Tyler, L. K. (2011). The interaction of lexical semantics and cohort competition in spoken word recognition: An fMRI study. Journal of Cognitive Neuroscience, 23, 3778–3790. DOI: https://doi.org/10.1162/jocn_a_00046
