Abstract
There is broad consensus that listening effort is an important outcome for measuring hearing performance. However, there remains debate on the best ways to measure listening effort. This study sought to measure neural correlates of listening effort using functional near-infrared spectroscopy (fNIRS) in experienced adult hearing aid users. The study evaluated impacts of amplification and signal-to-noise ratio (SNR) on cerebral blood oxygenation, with the expectation that easier listening conditions would be associated with less oxygenation in the prefrontal cortex. Thirty experienced adult hearing aid users repeated sentence-final words from low-context Revised Speech Perception in Noise Test sentences. Participants repeated words at a hard SNR (individual SNR-50) or easy SNR (individual SNR-50 + 10 dB), while wearing hearing aids fit to prescriptive targets or without wearing hearing aids. In addition to assessing listening accuracy and subjective listening effort, prefrontal blood oxygenation was measured using fNIRS. As expected, easier listening conditions (i.e., easy SNR, with hearing aids) led to better listening accuracy, lower subjective listening effort, and lower oxygenation across the entire prefrontal cortex compared to harder listening conditions. Listening accuracy and subjective listening effort were also significant predictors of oxygenation.
Keywords: functional near-infrared spectroscopy, hearing aids, listening effort, speech in noise
Introduction
Routine assessment of audiological status does not necessarily reflect all the challenges faced by those with hearing loss. Difficulty understanding speech in noise is among the most common challenges reported by those with hearing loss, often leading to increased reports of listening fatigue (Alhanbali et al., 2017; Bess & Hornsby, 2014; Nachtegaal et al., 2009). Despite this, the audiological test battery relies primarily on the detection of pure tones and repetition of words in quiet, with considerably less attention directed to speech-in-noise abilities. The speech-in-noise challenge forces listeners to allocate significantly more cognitive resources than they would when listening to speech in quiet. The cognitive effort involved in resolving speech from background noise has been labeled “listening effort,” and has been formally defined as “the deliberate allocation of mental resources to overcome obstacles in goal pursuit when carrying out a [listening] task” (Pichora-Fuller et al., 2016). Measuring listening effort, and more specifically measuring how hearing aid interventions may reduce listening effort, has garnered considerable interest from researchers and clinicians alike. In the clinic, measures of listening effort may eventually complement standard audiological techniques by informing new counselling strategies and characterizing the benefits of amplification (McGarrigle et al., 2014). However, the subjectivity involved in self-assessment of listening effort may make some clinicians hesitant to embrace it fully.
Measuring Listening Effort
While standardized measures of listening effort are lacking in clinical practice (Pichora-Fuller et al., 2016), many techniques for measuring listening effort have been implemented in hearing research. Systematic reviews by McGarrigle et al. (2014) and Ohlenforst et al. (2017) characterized the state of research investigating effects of hearing impairment and hearing aids on listening effort across a variety of different techniques. Both reviews broadly grouped existing techniques into three categories: (a) self-report or subjective measures, which use questionnaires, surveys, or ratings (Johnson et al., 2015); (b) behavioral measures, which generally assess performance on a secondary task (e.g., accuracy or reaction time) while the listener is engaged in a primary task (Gagné et al., 2017); and (c) physiological measures, which gauge listening effort by tracking changes in the peripheral (e.g., heart rate or pupil dilation) or central nervous system (e.g., hemodynamic activity) during different listening tasks (McGarrigle et al., 2014).
For subjective measures of listening effort, Ohlenforst et al.’s (2017) review found a modest majority of the literature reporting reductions in listening effort with the use of hearing aids, and lower listening effort in individuals with normal hearing than in those with hearing loss. These findings were echoed by more recent investigations (Bakkum et al., 2023; Wu et al., 2019). While advantageous due to their ease of administration, subjective measures are sometimes criticized for their difficulty generalizing across paradigms, especially when different rating scales or questionnaires are used. Similarly, the meta-cognitive challenge involved in reflecting on cognitive effort may bias some participants toward conflating perceived accuracy with listening effort (Moore & Picou, 2018).
Behavioral measures overcome some limitations of subjective measures because they rely on objective indices of task performance, such as reaction time. Ohlenforst et al.’s (2017) review found that a majority of studies incorporating behavioral measures showed that listeners with hearing loss experienced more listening effort than those with normal hearing, and that hearing aid use reduced listening effort. However, caution was urged against over-interpreting behavioral listening effort outcomes, given the wide variety of study parameters and design choices across studies. Furthermore, different study paradigms may pinpoint different components of listening effort, owing to its multidimensional nature (Alhanbali et al., 2019). Perhaps the largest limitation of behavioral methods is the assumption that the listener's attention is entirely devoted to the task at hand (McGarrigle et al., 2014); there is no way to characterize how mental resources are divided among competing tasks. The listener's attention may also be divided and influenced by motivation and biases, thus clouding the relationship between listening effort and behavioral performance.
Physiological measures gauge listening effort by tracking changes in the nervous system during different listening tasks and are thought to avoid the participant subjectivity inherent in subjective and behavioral methods. Increased autonomic arousal is typically associated with increased listening difficulty (Mackersie & Calderon-Moultrie, 2016) and can be tracked from peripheral or central processes (McGarrigle et al., 2014). Peripheral measures include pupil dilation (Kuchinsky et al., 2013; Neagu et al., 2023), heart rate (Seeman & Sims, 2015), and electrodermal activity (Mackersie & Cones, 2011), among others. Central measures include functional magnetic resonance imaging (fMRI), which uses magnetic fields to detect hemodynamic activity (i.e., changes in blood oxygenation levels) in the brain, and electroencephalography (EEG), which measures electrical activity in the brain via event-related potentials (ERPs) or changes in band power in response to task demands.
Despite the directness of physiological measures, they appear much less frequently in listening effort research than subjective and behavioral measures, likely due in part to their costs and technical limitations (McGarrigle et al., 2014; Ohlenforst et al., 2017). For example, metal objects are prohibited near MRI machines, precluding ecologically valid study of hearing aids. In addition, scanner noise can be well above 100 dB SPL (Ravicz et al., 2000), leading to distraction and uncontrolled masking. While sparse sampling and other techniques have been developed to facilitate presentation of auditory stimuli in the scanner (Gaab et al., 2007; Peelle, 2014), concerns about the influence of scanner noise in auditory tasks are entirely reasonable. One example is the potential for scanner noise to induce incremental changes in network connectivity and drowsiness over time (Pellegrino et al., 2022). These limitations have prompted interest in other physiological methods.
Functional Near Infrared Spectroscopy
Functional near infrared spectroscopy (fNIRS) is a noninvasive optical neuroimaging method that resolves some of the limitations other physiological methods present for hearing research. fNIRS emits near-infrared light into the superficial cortex to measure changes in blood oxygenation via cerebral blood flow as a function of time. fNIRS capitalizes on the fact that oxygenated (HbO) and deoxygenated (HbR) hemoglobin have different absorption properties in the near-infrared range. Practically speaking, each channel is formed by a light source and a detector (the optodes). The near-infrared light is emitted from the source and travels through the skull and cerebrospinal fluid, eventually reaching brain tissue between 1 and 2 cm beneath the scalp. The detector receives the backscattered light that has travelled through cerebral tissue, and HbO/HbR concentrations can then be determined for the channel using the modified Beer-Lambert law (Kocsis et al., 2006). Higher HbO concentrations and lower HbR concentrations are assumed to reflect increased neurovascular coupling (Devor et al., 2012), due to the increased oxygen needed to recruit more neural resources. fNIRS is attractive for listening effort research involving hearing aids due to its relative silence, low electromagnetic interference, and relatively low cost (Shatzer & Russo, 2023). Like EEG, it also lends itself well to mobile brain/body imaging, enabling the possibility of future real-world testing in the field (Jungnickel et al., 2019).
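For reference, the modified Beer-Lambert law (stated here in a standard textbook form with conventional symbols, rather than quoted from Kocsis et al., 2006) relates the change in optical density at wavelength λ to the underlying concentration changes:

ΔOD(λ) = [ε_HbO(λ) · Δ[HbO] + ε_HbR(λ) · Δ[HbR]] · d · DPF(λ)

where ε(λ) denotes the molar extinction coefficient of each chromophore, d is the source-detector separation, and DPF(λ) is the differential pathlength factor accounting for the scattered (longer-than-straight-line) photon path. Measuring ΔOD at two wavelengths (here, 760 and 850 nm) yields two such equations, which can be solved for Δ[HbO] and Δ[HbR].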
Neural Correlates of Listening Effort
To use fNIRS meaningfully, prior knowledge of the neural sources of listening effort is needed: the device should be placed over cortical areas hypothesized to be sensitive to changes in listening conditions associated with differing listening effort. A meta-analysis by Alain et al. (2018) aggregated common neural activation patterns across brain imaging studies implementing speech-in-noise tasks. The analysis revealed that most speech-in-noise studies reported higher activation in the left inferior frontal gyrus (IFG), which contains Broca's language processing area and forms part of the prefrontal cortex, under more adverse listening conditions (Binder et al., 2004; Du et al., 2014; Golestani et al., 2013; Vaden et al., 2013; Zekveld et al., 2012). The analysis also revealed that the left IFG showed the largest activation compared to the other implicated areas: the right insula and left inferior parietal lobule. IFG activation is thought to recruit a network of prefrontal areas that facilitates parsing of speech from distractors when primary auditory input from peripheral regions is ambiguous (Cabeza et al., 2002). This network is believed to contribute speech-motor representations of the acoustic input in question, as well as the integration of prior world and linguistic knowledge (Davis & Johnsrude, 2003; Wild et al., 2012), to assist with resolving the acoustic input provided by auditory cortex.
fNIRS has implicated left prefrontal areas (including the left IFG) in a handful of listening effort studies with normal hearing listeners. Wijayasiri et al. (2017) investigated whether fNIRS-measured frontal lobe activation was sensitive to changes in speech clarity and listener attention. Normal hearing listeners recalled and repeated sentences varying in intelligibility, with a distractor present or absent. Greater HbO concentration (i.e., activation) was observed for degraded vs. clean speech in the left inferior frontal gyrus when attention was directed towards the speech. Rowland et al. (2018) instructed participants to rate perceived intelligibility and effort of speech in a variety of naturalistic scenes varying in signal-to-noise ratio. Greater brain activation was observed using fNIRS in the bilateral auditory cortices and in regions of the prefrontal cortex for lower-SNR scenes compared to higher-SNR scenes. Other studies have demonstrated fNIRS sensitivity to listening effort via increased activation in the left inferior frontal cortex associated with increasingly degraded speech (Lawrence et al., 2018), increased activation in the left lateral prefrontal cortex as a function of poor SNR and low context (Rovetti et al., 2022), and reduced activation in the left auditory and lateral frontal cortices in response to vocoded and word-shuffled sentences (Zhou et al., 2022).
Much less fNIRS research on listening effort has focused on the potential benefits of hearing aid use. The majority of this research has focused on real and simulated cochlear implant outcomes (Chen et al., 2017; Harrison et al., 2021; McKay et al., 2016; White & Langdon, 2021), with considerably less attention devoted to hearing aid users. We are aware of three exceptions published to date. The first was a study by Rovetti et al. (2019) that investigated prefrontal cortex activation in hearing aid users engaged in an auditory n-back task, with and without their hearing aids. Increased prefrontal cortex activation was observed as task complexity increased, but no differences were observed between the aided and unaided conditions. Limitations of this study were that the n-back task does not necessarily reflect real-world comprehension and that participants wore their own hearing aids, rather than having hearing aid fitting controlled across participants. The second study investigated speech understanding in a small sample of children across auditory scenes that varied with respect to reverberation time, distractor location, and distractor pitch (Bell et al., 2020). Six of the children had normal hearing and three wore hearing aids. The normal hearing children showed increased oxygenation in the left inferior frontal gyrus with increasing reverberation time. Only one hearing-aided child demonstrated cortical activation that aligned with the normal hearing controls. Behaviorally, the hearing-aided children performed poorly across all conditions, which was accompanied by more uniform fNIRS responses across conditions compared to the normal hearing group. While informative, this study is limited by its small sample size, its lack of aided/unaided comparisons within a single group, and the fact that findings from children do not necessarily generalize to adults. The third study was an unpublished dissertation by Oeding (2022) that investigated changes in fNIRS responses in young normal hearing adult participants attributable to differences in omnidirectional and directional hearing aid processing. Stimuli varied with respect to levels of syntactic complexity and vocoding. Participants were instructed to listen to sentences in noise and report whether actions were spoken by a male or female talker. Brain activation in the left inferior frontal gyrus was found to be influenced by syntactic complexity but not by hearing aid program. A significant limitation of this study, however, was that all listeners had normal hearing; the benefit of amplification was therefore uncertain and did not reflect real-world hearing aid use.
Current Study
The primary objective of the current study was to assess the sensitivity of prefrontal fNIRS to differences in amplification strategies and listening difficulty. This study sought to overcome limitations in the aforementioned hearing aid studies and to build on the studies by Rovetti et al. (2019, 2022) by recruiting a population of older adult, experienced hearing aid users and by controlling hearing aid fitting across participants. Consistent with Rovetti et al. (2022), participants were instructed to repeat sentence-final words in noise using sentences from the Revised Speech Perception in Noise Test (R-SPIN; Bilger et al., 1984) in a 2 × 2 factorial design, with oxygenation measured across the prefrontal cortex. Unlike Rovetti et al. (2022), in which SNR and context were used as factors, the independent variables here were amplification (aided-directional vs. unaided) and SNR (hard vs. easy). The stimulus parameters for hard-SNR and easy-SNR were determined psychometrically for each participant: SNR-50 and SNR-50 + 10 dB, respectively. The primary dependent variable was cerebral oxygen exchange (HbDiff = HbO − HbR). HbDiff reflects composite changes in HbO and HbR, thus describing the extent of activation as oxygenated blood enters an area and deoxygenated blood leaves it (Lassnigg et al., 1999). HbDiff is a highly sensitive measure of cerebral activation that has been used to characterize cognitive effort (Liang et al., 2016; Lu et al., 2015; Saleh et al., 2018), inclusive of listening effort (Rovetti et al., 2022). In addition to HbDiff, dependent measures included subjective ratings and listening accuracy. Given the sensitivity of the left IFG to changes in listening conditions that impact listening effort, the left prefrontal cortex was the primary region of interest. However, given that both the left and right lateral prefrontal cortices have been shown to be sensitive to differences in SNR (Rovetti et al., 2022), and that reductions in hemispheric asymmetry may occur in older adults (Cabeza, 2002; Cabeza et al., 2002), we assessed fNIRS sensitivity across the entire prefrontal cortex. We hypothesized that increased listening difficulty, represented by no amplification and the hard SNR, would produce poorer listening accuracy, higher listening effort ratings, and elevated oxygenation across the prefrontal cortex compared to less difficult conditions.
Materials and Methods
Participants
Data are reported for 30 adult participants (16 females, 14 males) who were recruited from the Sonova Innovation Centre Toronto research database; 26 were native English speakers and the remaining four learned English during childhood. Ages ranged from 33–83 years (M = 67.8, SD = 10.5). On average, participants presented with symmetrical, mild sloping to severe hearing loss. Audiometric thresholds are illustrated in Figure 1. All participants were experienced hearing aid users, with experience ranging from 2–46 years (M = 10.3, SD = 10.0). An additional four participants completed the study, but their data were omitted due to technical errors during data collection. The study was approved by the Western Copernicus Group Institutional Review Board and participants were financially compensated.
Figure 1.
Mean air conduction pure-tone thresholds for participants’ left ears (left panel) and right ears (right panel). Thin lines show individual data and the thick lines show the group mean. Error bars represent one standard deviation.
Design
The experiment followed a 2 × 2 within-subjects design with independent variables SNR [SNR-50 (hard) vs. SNR-50 + 10 dB (easy)] and amplification (unaided vs. aided). The SNR-50 and SNR-50 + 10 dB were chosen as the hard-SNR and easy-SNR conditions because they represent the peak and upper tail, respectively, of psychometric functions for listening effort as measured by reaction times (Wu et al., 2016) and pupil dilation (Zekveld & Kramer, 2014). We assumed that the peak and upper tail of the psychometric function would elicit the largest possible difference in oxygenation between SNR conditions. The dependent variables were listening accuracy, subjective listening effort, and HbDiff. Subjective listening effort was measured on a 7-point scale ranging from no effort to extreme effort (1 = no effort, 2 = very little effort, 3 = little effort, 4 = moderate effort, 5 = considerable effort, 6 = much effort, 7 = extreme effort; Johnson et al., 2015). Listening accuracy was measured as percent correct in the speech understanding task. HbDiff was measured using a prefrontal fNIRS system consisting of eight channels positioned across the prefrontal cortex.
Test Materials
Stimuli consisted of sentences from the Revised Speech Perception in Noise Test (R-SPIN; Bilger et al., 1984). The R-SPIN test is composed of sentence lists each containing 50 sentences: 25 high-context sentences in which semantic contextual information can aid performance, and 25 low-context sentences in which it cannot. Since the impact of context on performance was not of interest, only the low-context sentences from lists 1 through 8 were included (200 sentences total). The speech was spoken by a male with a North American accent, and 12-talker babble background noise played throughout each clip. The sentences-in-noise were approximately 5 s long. Speech was presented from 0 degrees and noise from 180 degrees to maximize the benefit of directional microphones. Each condition consisted of 25 low-context sentences drawn from a single list. The list/condition combinations were randomized across participants.
Participants were instructed to repeat the final word from each sentence. If they were unsure but might have heard part of the word, they were encouraged to guess. However, if they had no idea what the word was, or did not hear the sentence at all, they were instructed to say “pass.”
Equipment
Stimulus Playback
The experiment took place in a double-walled sound attenuated booth. The participant sat on a chair in the center of the booth. The R-SPIN test was implemented using a custom MATLAB script (version R2021b) installed on a Windows computer outside the booth. The MATLAB script was written to trigger stimulus playback, record participant responses, and send stimulus onset time markers to the fNIRS recording software. The Windows computer was connected to an RME Fireface UCX sound card, which routed stimuli into the booth to two self-powered Genelec 8030C speakers at 0 and 180 degrees relative to the participant chair, each at a distance of 1.5 m.
fNIRS Device
Cerebral oxygenation was recorded using an Octamon (Artinis Medical Systems, Netherlands) continuous wave fNIRS device with eight channels (formed by eight laser diode emitters at wavelengths of 760 and 850 nm and two photodiode detectors) sampled at 10 Hz. Light emitters and detectors were spaced approximately 3.5 cm apart, with light penetrating approximately 1.75 cm into the cortex. The device was worn over the participant's prefrontal cortex. Figure 2 (left) illustrates the fNIRS device as worn on a participant's head, whereas Figure 2 (right) illustrates the optode montage superimposed on a 10–20 coordinate system. The base of the fNIRS headband was placed just above the nasion (Nz) such that the lower medial light emitters were symmetrical around the FpZ coordinate (Acharya et al., 2016). In practice, the fNIRS device sat on the participant's forehead, between the hairline and eyebrows. During recording, data were transmitted via Bluetooth to OxySoft (version 3.2.70) recording software installed on a Windows computer. OxySoft saved recordings in .oxy4 format, which were then converted to .snirf format for further processing in MATLAB-integrated Homer3 scripts.
Figure 2.
Left: illustration of the fNIRS device used in this study. Right: Optode montage. Eight light emitter sources (magenta circles) and two light detectors (magenta circles) forming eight channels (violet lines) were placed over the prefrontal cortex. Prefrontal regions were subdivided with respect to channel position: left upper lateral (LUL), left lower lateral (LLL), left upper medial (LUM), left lower medial (LLM), right upper medial (RUM), right lower medial (RLM), right upper lateral (RUL), and right lower lateral (RLL).
Hearing Aids
Hearing aids were programmed prior to the experiment using the participants’ audiometric data and were worn for the aided conditions. The hearing aids were Unitron Moxi Blu B9-312 receiver-in-the-canal devices, coupled to participants’ ears using occluding silicone power domes. The hearing aids were individually prescribed to NAL-NL2 targets (Keidser et al., 2011) by a licensed audiologist and set to a fixed directional microphone mode with all remaining features disabled.
Procedure
SNR-50 Search
Participants first completed an SNR-50 search procedure using R-SPIN sentences without wearing hearing aids. The SNR-50 search procedure was used to find the signal-to-noise ratio at which participants could repeat 50% of words correctly. The average SNR-50 over two lists would later be used as the hard-SNR condition per participant. The SNR-50 search was completed using modified SNR adaptation rules from Nilsson et al. (1994) and the Oldenburg Matrix Test's manual adaptive level controls (Kollmeier et al., 2015). Using these rules, participants repeated the final word from 20 low-context sentences, with correct and incorrect responses leading to smaller and larger SNRs, respectively, on subsequent sentences. During the search, the overall stimulus level (speech plus noise) was always maintained at 70 dBA. The SNR adjustments for the first 5 of 20 sentences were ±4 dB, and ±2 dB for the remaining sentences. The SNR-50 per list was calculated as the average of the SNRs for sentences 12 through 20 and the SNR computed after the last presented sentence. Twenty sentences for each of the two SNR-50 lists were randomly sampled from the 25 sentences in R-SPIN lists 7 and 8, with sentences from a single list being used for each run. For each sentence, the experimenter entered the repeated word in real time to track correctness and trigger the presentation of the next sentence. The average SNR-50 was +4.9 dB (SD = 6.1).
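To make the adaptive rule concrete, the following is a minimal sketch in R (the study's implementation was a custom MATLAB script; the starting SNR and the scoring helper present_and_score are hypothetical):

```r
# Minimal sketch of the SNR-50 staircase (illustrative only).
# `present_and_score(snr)` is a hypothetical helper that plays a sentence at
# the given SNR and returns TRUE if the final word was repeated correctly.
snr50_run <- function(present_and_score, start_snr = 0) {
  n_trials <- 20
  snr <- numeric(n_trials + 1)          # snr[t] = SNR used on trial t
  snr[1] <- start_snr                   # starting SNR is an assumption
  for (t in seq_len(n_trials)) {
    correct <- present_and_score(snr[t])
    step <- if (t <= 5) 4 else 2        # +/-4 dB for trials 1-5, +/-2 dB after
    snr[t + 1] <- snr[t] + (if (correct) -step else step)
  }
  # SNR-50 = mean of the SNRs for trials 12-20 plus the SNR computed after
  # the final trial (10 values in total)
  mean(snr[12:(n_trials + 1)])
}

# Example with a simulated listener whose true SNR-50 is +5 dB:
set.seed(1)
snr50_run(function(snr) runif(1) < plogis(snr - 5))
```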
Subjective Listening Effort
Next, participants completed the listening effort rating task. Participants were presented with sentences blocked by condition. As in the SNR-50 search, the overall stimulus presentation level (speech plus noise) was always maintained at 70 dBA. The four conditions were divided into two aided/unaided blocks, each subdivided into easy-SNR/hard-SNR blocks. Amplification order, and SNR order within each amplification condition, were randomized across participants. Participants repeated the final word from each sentence and then provided the number corresponding to how much subjective listening effort was required. Listening effort rating descriptors were listed on a sheet of paper accessible to participants. Correctness and ratings were tracked in real time. Six low-context sentences randomly sampled without replacement from R-SPIN lists 5 and 6 were used for each condition, for a total of 24 sentences in this section of the experiment. During aided conditions, participants wore hearing aids; during unaided conditions, they did not.
Listening Accuracy and fNIRS
Next, the experimenter placed the fNIRS device on the participant's head and paired the device to the recording software. The experimenter then verified that the signal quality was acceptable using OxySoft's signal quality index algorithm (Sappia et al., 2020), which allowed the experimenter to confirm that the fNIRS recording SNR was above the noise floor and that the signal was not saturated by ambient lighting. In the case of poor signal quality, adjustments were made to the device fitting. To minimize the potential for movement artifacts, the experimenter instructed participants to visually fixate on a cross below the front speaker and to minimize movements during this part of the experiment.
Participants were again presented with sentences blocked by condition, following the same randomization rules as in the rating task. As in the SNR-50 search and subjective listening effort tasks, the overall stimulus level (speech plus noise) was always maintained at 70 dBA. Participants continued repeating the final word in each sentence so that listening accuracy could be measured, but stopped providing listening effort ratings so as to minimize cognitive activity extraneous to listening effort while fNIRS was recording. Listening accuracy was assessed in real time by the experimenter. All 25 low-context sentences from each of lists 1 through 4 were used for each condition, for a total of 100 sentences. Sentence lists were randomly assigned to conditions between participants. Unlike the prior tasks, a 30-s period of silence was presented prior to the first condition and between conditions. Further, a silent period of 6–9 s separated consecutive sentences. On average, the duration of each block was 5.7 min (SD = 0.3).
Analysis
fNIRS Data Processing
fNIRS HbO and HbR signals were analyzed in MATLAB using scripts adapted from Homer3 (v1.31.2; Huppert et al., 2009). fNIRS signals were processed using a bandpass filtering method adapted from Zhou et al. (2020). While Zhou et al.'s (2020) method implemented two bandpass filters, before and after short-channel reduction, short-channel optodes were not included in the current study design; therefore, only the latter bandpass filter was implemented. The processing method occurred as follows (a consolidated code sketch appears after the final step):
Removal of step-like noise. Momentary losses of contact between optodes and skin can introduce unwanted noise during data collection. To remove this noise, the derivative of each channel's raw fNIRS intensity was estimated. Any absolute values of the derivative that were greater than two standard deviations above the mean absolute value were set to zero. The channel's raw fNIRS time series was recalculated using the cumulative sum of the updated derivative.
Exclude channels of poor quality. The scalp coupling index (SCI; Pollonini et al., 2014) was used to assess data quality per channel. The SCI assesses fNIRS data quality by filtering the heartbeat signal out of the raw fNIRS intensity time series using a third-order Butterworth bandpass filter between 0.5 and 1.5 Hz and correlating the filtered signal between the two wavelengths. Since the heartbeat signal is a characteristic component of fNIRS recordings, a poor correlation would suggest that the fNIRS signal did not contain a robust heartbeat and would likely not contain other meaningful physiological responses. This study applied the same threshold as Pollonini et al. (2014), rejecting channels with SCI below 0.75. As a result of this threshold, 8% of channels were rejected across participants.
Conversion of light intensity to optical density. The method by which light intensity is converted to optical density is described by Huppert et al. (2009, pp. 4–5).
Correction of motion artifacts. Participant movement during testing, such as scratching, speaking, or changing facial expressions, might cause physical displacement of the optodes from the participant's head, risking the introduction of noise into the data. Therefore, the wavelet decomposition method (Molavi & Dumont, 2012) was used to correct motion artifacts (which appear as outlying coefficients in the wavelet domain) by setting to zero any wavelet coefficients falling outside 0.1 times the interquartile range, the same threshold used by Zhou et al. (2020).
Conversion of optical density to hemoglobin concentration. The modified Beer-Lambert law (Delpy et al., 1988) was used to convert optical density data into HbO and HbR time series data.
Band-pass filter (0.01–0.09 Hz). A third-order Butterworth band-pass filter between 0.01 and 0.09 Hz was applied to the HbO and HbR time series data to remove low-frequency noise such as drift and high-frequency physiological noise such as breathing and heart rate.
Averaging. fNIRS responses were calculated by subtracting a 20-s baseline average prior to the start of each condition from the time series of the entire condition.
Data for analysis. Each prefrontal region's fNIRS response used for statistical analysis was calculated as the cerebral oxygen exchange (HbDiff = HbO − HbR), computed as the grand average across the fNIRS time-series responses of the channels corresponding to that region. Four subregions, each combining its upper and lower channels, were used for analysis: the left lateral, left medial, right medial, and right lateral prefrontal cortices.
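The sketch below translates the step-noise removal, scalp coupling index, band-pass filtering, baseline correction, and HbDiff steps into R for illustration (the actual pipeline ran in MATLAB with Homer3 routines; the sampling rate is the device's 10 Hz, but all variable names and the simulated signals are assumptions):

```r
# Illustrative R translation of selected processing steps (toy signals only).
library(signal)                          # butter(), filtfilt()

fs <- 10                                 # Octamon sampling rate (Hz)
set.seed(1)
i760 <- cumsum(rnorm(3000)); i850 <- cumsum(rnorm(3000))  # toy raw intensities

# Removal of step-like noise: zero out extreme derivative values, then rebuild
remove_steps <- function(x) {
  d <- c(0, diff(x))
  d[abs(d) > mean(abs(d)) + 2 * sd(abs(d))] <- 0
  x[1] + cumsum(d)
}
i760 <- remove_steps(i760); i850 <- remove_steps(i850)

# Scalp coupling index: correlate the cardiac band (0.5-1.5 Hz) across wavelengths
cardiac <- butter(3, c(0.5, 1.5) / (fs / 2), type = "pass")
sci <- cor(filtfilt(cardiac, i760), filtfilt(cardiac, i850))
keep_channel <- sci >= 0.75              # channels with SCI below 0.75 were rejected

# (Optical density conversion, wavelet motion correction, and the modified
# Beer-Lambert conversion to HbO/HbR were handled by Homer3 routines.)
hbo <- rnorm(3000); hbr <- rnorm(3000)   # stand-ins for the converted signals

# Band-pass filter the hemoglobin time series (0.01-0.09 Hz)
bp <- butter(3, c(0.01, 0.09) / (fs / 2), type = "pass")
hbo <- filtfilt(bp, hbo); hbr <- filtfilt(bp, hbr)

# Baseline-correct against the 20 s preceding condition onset, then compute
# cerebral oxygen exchange (HbDiff = HbO - HbR)
onset <- 500                             # sample index of condition onset (toy)
rebase <- function(x) x - mean(x[(onset - 20 * fs):(onset - 1)])
hbdiff <- rebase(hbo) - rebase(hbr)
```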
Statistical Modelling
To model changes in subjective listening effort, listening accuracy, and blood oxygenation, data were modelled using multi-level modeling (MLM). The decision to use MLM was motivated by clustering effects that are inherent to repeated measures designs (Nezlek & Mroziński, 2020), which imply a violation of the assumption of independence of observations (Quené & Van Den Bergh, 2004). This violation can bias estimates of standard errors, impacting slope estimates and p-values (McCoach & Adelson, 2010). MLM can overcome this violation by modelling the data with random effects (slopes or intercepts); thus, all models used in our analysis included a random slope per participant. MLM is also robust against missing data (Baayen et al., 2008), such as rejected channels falling below the SCI threshold.
All analyses were conducted in RStudio. MLMs were run using the lme4 package (Bates et al., 2015), which estimates coefficients by maximizing a log-likelihood function. Degrees of freedom were estimated using the Satterthwaite approximation, and we used two-tailed p-values with the alpha level for statistical significance set to .05.
To determine which model best fit the data, a nested modeling approach was used. This approach iteratively adds a predictor variable to the model and assesses whether the additional variable contributes to the overall fit using a chi-square test. This approach tends to yield a parsimonious model that best accounts for the variance in the data. The Akaike information criterion (AIC) was also used to assess model goodness-of-fit, with lower AIC values reflecting better fits to the data (Burnham & Anderson, 2004).
The main outcome variables in our models were subjective listening effort, listening accuracy, and block-averaged cerebral oxygen exchange (HbDiff). Listening accuracy scores were transformed to rationalized arcsine units to reduce the risk of clustering of scores near ceiling performance (Studebaker, 1985). The main predictor variables were amplification (aided vs. unaided), signal-to-noise ratio (SNR; easy vs. hard), and an interaction term between amplification and SNR. The categorical variables, amplification and SNR, were dummy coded as 0s and 1s. For amplification, aided was used as the reference (i.e., 0). For SNR, the easy-SNR condition was used as the reference. If the best-fitting model contained a significant interaction term, simple slopes were decomposed to determine specific trends. For the HbDiff model only, prefrontal cortex region was also included as a predictor variable.
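Schematically, the full model for outcome y of participant j in condition i can be written as:

y_ij = β0 + β1 · SNR_ij + β2 · Amp_ij + β3 · (SNR_ij × Amp_ij) + u_j + ε_ij

where SNR and Amp are the 0/1 dummy codes described above, u_j is the by-participant random term, and ε_ij is the residual error.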
A final model was used to assess the relationships of subjective listening effort and listening accuracy with HbDiff. The main outcome variable was HbDiff, the predictor variables were subjective listening effort and listening accuracy, and each participant was assigned a random slope.
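Below is a minimal sketch of the nested-comparison logic using lme4 (the data frame, column names, and the exact form of the by-participant random slope, which the text does not specify, are assumptions; the toy data exist only to make the sketch runnable):

```r
# Sketch of the nested model comparisons (toy data for illustration only)
library(lme4)       # lmer()
library(lmerTest)   # Satterthwaite-approximated df for the t-tests

dat <- expand.grid(id = factor(1:30), amp = 0:1, snr = 0:1, region = 1:4)
dat$hbdiff <- rnorm(nrow(dat))           # stand-in outcome variable

m0 <- lmer(hbdiff ~ 1 + (1 + amp | id), data = dat, REML = FALSE)  # intercept-only
m1 <- update(m0, . ~ . + amp + snr)                                # + main effects
m2 <- update(m1, . ~ . + amp:snr)                                  # + interaction

anova(m0, m1)       # chi-square test: do the main effects improve fit?
anova(m1, m2)       # does adding the interaction improve fit? (also reports AIC)
summary(m1)         # coefficients with Satterthwaite df and two-tailed p-values
```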
Results
Subjective Listening Effort
Listening effort results are illustrated in Figure 3 (left panel) and summary statistics are listed in Table 1. Model comparison results revealed that a model containing only main effects of amplification and SNR provided a significantly better fit than an intercept-only model, χ2(2) = 50.2, p < .001. A model which contained the interaction term between amplification and SNR was not a better fit, χ2(1) = 0.52, p = .47.
Figure 3.
Boxplots of outcome measures for self-reported listening effort (left panel) and listening accuracy (right panel). Box edges represent the interquartile range whereas whiskers represent the interquartile range multiplied by 1.5, the red lines represent the median, and the X symbols represent the mean.
Table 1.
Nested Model Performance for the Impact of Amplification and SNR on Subjective Listening Effort.
| Model / term | Estimate | SE | df | t | p | χ2 | AIC | Δdf | p |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Baseline | | | | | | | 400.9 | | |
| Intercept | 3.0 | 0.15 | 30 | 20.1 | <.001 | | | | |
| + main effects | | | | | | 50.2 | 354.7 | 2 | <.001 |
| Intercept | 2.03 | 0.19 | 67.9 | 11.0 | <.001 | | | | |
| SNR | 0.8 | 0.16 | 90 | 5.1 | <.001 | | | | |
| Amplification | 1.02 | 0.16 | 90 | 6.4 | <.001 | | | | |
| + interaction | | | | | | 0.52 | 356.2 | 1 | .5 |
| Intercept | 2.1 | 0.20 | 84.2 | 10.4 | <.001 | | | | |
| SNR | 0.71 | 0.22 | 90 | 4.0 | <.001 | | | | |
| Amplification | 0.9 | 0.22 | 90 | 4.1 | <.001 | | | | |
| SNR × Amplification | 0.23 | 0.32 | 90 | 0.7 | .47 | | | | |

Note. Model-comparison statistics (χ2, AIC, Δdf, and the comparison p) are shown on the row naming each model. SE = standard error; df = degrees of freedom; AIC = Akaike information criterion.
Results from the final model revealed a significant main effect of amplification, b = 1.02, t(90) = 6.4, p < .001: holding all other variables in the model constant, subjective listening effort was 1.02 units lower when hearing aids were worn than when they were not. There was also a significant main effect of SNR, b = 0.8, t(90) = 5.1, p < .001: holding all other variables constant, subjective listening effort was 0.8 units lower in the easy-SNR than in the hard-SNR conditions.
Listening Accuracy
Listening accuracy results are illustrated in Figure 3 (right panel) and summary statistics are listed in Table 2. Model comparison results revealed that a model containing only main effects of amplification and SNR provided a significantly better fit than an intercept-only model, χ2(2) = 102.5, p < .001. A model which contained the interaction term between amplification and SNR was not a better fit, χ2(1) = 0.27, p = .6.
Table 2.
Nested Model Performance for the Impact of Amplification and SNR on Listening Accuracy Scores.
| Model / term | Estimate | SE | df | t | p | χ2 | AIC | Δdf | p |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Baseline | | | | | | | 1123.5 | | |
| Intercept | 76.5 | 2.3 | 120 | 32.9 | <.001 | | | | |
| + main effects | | | | | | 102.5 | 1025.0 | 2 | <.001 |
| Intercept | 101.9 | 2.9 | 76.4 | 35.0 | <.001 | | | | |
| SNR | −18.9 | 2.7 | 90 | −7.1 | <.001 | | | | |
| Amplification | −31.7 | 2.7 | 90 | −11.8 | <.001 | | | | |
| + interaction | | | | | | 0.27 | 1026.8 | 1 | .6 |
| Intercept | 101.1 | 3.2 | 94.2 | 31.6 | <.001 | | | | |
| SNR | −17.5 | 3.8 | 90 | −4.6 | <.001 | | | | |
| Amplification | −30.3 | 3.8 | 90 | −8.0 | <.001 | | | | |
| SNR × Amplification | −2.8 | 5.3 | 90 | −0.5 | .60 | | | | |

Note. Model-comparison statistics (χ2, AIC, Δdf, and the comparison p) are shown on the row naming each model. SE = standard error; df = degrees of freedom; AIC = Akaike information criterion.
Results from the final model revealed a significant main effect of amplification, b = −31.7, t(90) = −11.8, p < .001: holding all other variables in the model constant, listening accuracy scores were 31.7 rationalized arcsine units higher when hearing aids were worn than when they were not. There was also a significant main effect of SNR, b = −18.9, t(90) = −7.1, p < .001: holding all other variables constant, listening accuracy scores were 18.9 rationalized arcsine units higher in the easy-SNR than in the hard-SNR conditions.
Brain
Figure 4 provides the HbO time series data averaged across participants for each subregion. Across subregions, difficult listening conditions (unaided listening and listening at the more challenging SNR) were associated with greater HbO than easier listening conditions. For purposes of comparison, HbR data are provided in Figure A1; as may be expected, the HbR data loosely mirror the HbO data.
Figure 4.
HbO time series for each subregion, averaged across participants.
HbDiff results are plotted in Figure 5 and summary statistics are listed in Table 3. Analysis of the initial model revealed residuals with negative skewness; therefore, a reflect-and-square-root transformation was applied to the data to satisfy the MLM assumption of normally distributed residuals. Model comparison results revealed that a model containing only main effects of amplification and SNR provided a significantly better fit than an intercept-only model, χ2(2) = 17.7, p < .001. A model which contained the interaction term between amplification and SNR was not a better fit, χ2(1) = 0.015, p = .90, nor was a model containing region as a main effect term, χ2(3) = 0.25, p = .97.
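For reference, the reflect-and-square-root transform takes the standard form y′ = √((max(y) + 1) − y), where y is HbDiff (the reflection constant max(y) + 1 is the conventional choice and is an assumption here). Because the scale is reflected, larger transformed values correspond to smaller HbDiff values, so negative coefficients in Table 3 correspond to increases in oxygenation.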
Figure 5.
Boxplots of cerebral oxygen exchange (HbDiff; HbO − HbR) by prefrontal cortex region (LL = left lateral, LM = left medial, RM = right medial, RL = right lateral). Box edges represent the interquartile range, whiskers represent 1.5 times the interquartile range, the red line represents the median, the red crosses represent outliers, and the X symbols represent the mean.
Table 3.
Nested Model Performance for the Impact of Amplification and SNR on Cerebral Oxygen Exchange (HbO-HbR) Results.
| Model / term | Estimate | SE | df | t | p | χ2 | AIC | Δdf | p |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Baseline | | | | | | | 2083.3 | | |
| Intercept | 13.0 | 0.13 | 29.8 | 98.7 | <.001 | | | | |
| + main effects | | | | | | 17.7 | 2069.6 | 2 | <.001 |
| Intercept | 13.7 | 0.22 | 180.4 | 63.1 | <.001 | | | | |
| SNR | −0.85 | 0.24 | 408.4 | −3.46 | <.01 | | | | |
| Amplification | −0.60 | 0.24 | 408.4 | −2.47 | <.05 | | | | |
| + interaction | | | | | | 0.015 | 2071.6 | 1 | .90 |
| Intercept | 13.8 | 0.25 | 255.2 | 55.1 | <.001 | | | | |
| SNR | −0.88 | 0.35 | 408.4 | −2.54 | <.05 | | | | |
| Amplification | −0.63 | 0.35 | 408.4 | −1.8 | .07 | | | | |
| SNR × Amplification | 0.06 | 0.49 | 408.4 | 0.12 | .90 | | | | |
| + region | | | | | | 0.25 | 2075.4 | 3 | .97 |
| Intercept | 13.7 | 0.30 | 346.9 | 46.1 | <.001 | | | | |
| SNR | −0.85 | 0.24 | 408.3 | −3.5 | <.01 | | | | |
| Amplification | −0.60 | 0.24 | 408.3 | −2.5 | <.05 | | | | |
| Region LM | 0.09 | 0.35 | 418.5 | 0.26 | .80 | | | | |
| Region RL | 0.08 | 0.34 | 415.2 | 0.24 | .81 | | | | |
| Region RM | −0.06 | 0.34 | 421.5 | −0.17 | .86 | | | | |

Note. Model-comparison statistics (χ2, AIC, Δdf, and the comparison p) are shown on the row naming each model. The + region model was compared against the + main effects model, since the + interaction comparison was not significant. SE = standard error; df = degrees of freedom; AIC = Akaike information criterion.
Results from the final model revealed a significant main effect of amplification, b = −0.60, t(408.4) = −2.5, p < .05: holding all other variables in the model constant, HbDiff was lower when hearing aids were worn than when they were not (a difference of 17.1 units on the original, untransformed scale). There was also a significant main effect of SNR, b = −0.85, t(408.4) = −3.5, p < .01: holding all other variables constant, HbDiff was 21.1 units lower (on the original scale) in the easy-SNR than in the hard-SNR conditions.
Brain-Behavior Interactions
Given that the HbDiff model did not uncover a significant effect of region, the models investigating the relationship between HbDiff, subjective listening effort, and listening accuracy were collapsed across regions. Furthermore, given the inverse relationship between subjective listening effort and listening accuracy illustrated in Figure 3, we assessed the correlation between these two variables. A moderate correlation was obtained (r = −.56, p < .001). Given this outcome, a nested model approach was used in which subjective listening effort was independently compared to HbDiff, followed by a model in which listening accuracy was independently compared to HbDiff.
This modelling approach indicated that subjective listening effort was a significant predictor of HbDiff when compared against an intercept-only model, χ2(1) = 13.6, p < .001, as was listening accuracy when compared against the intercept-only model, χ2(1) = 19.2, p < .001. The models revealed that HbDiff decreased as subjective listening effort decreased and as listening accuracy increased.
Discussion
Self-Report and Behavior
As expected, subjective listening effort decreased and listening accuracy increased as listening conditions improved. In the task employed here, SNR was increased by reducing the level of background noise and increasing the level of speech. Similarly, functional SNR at the level of the ear was increased through the use of binaural directionally-fit hearing aids in a scenario where signals were presented from the front and noise was presented from the back (Derleth et al., 2021; Ricketts, 2001). These outcomes align with psychoacoustic principles about the impact of noise masking on the threshold at which a stimulus can be detected or understood (Yost, 2013), as well as cognitive hearing theory regarding the effects of noise on subjective listening effort (Peelle, 2018; Pichora-Fuller et al., 2016; Picou et al., 2013).
fNIRS: Effect of SNR
The results of this study also demonstrate that oxygenation in the prefrontal cortex is sensitive to changes in SNR. Specifically, greater oxygenation was observed across the prefrontal cortex at the hard SNR compared to the easy SNR. This finding was consistent with our theoretically derived hypothesis and with prior investigations concerning the impact of SNR changes on prefrontal hemodynamic activity. The current study design was most similar to that of Rovetti et al. (2022), in which young normal hearing listeners completed the R-SPIN test at hard (−2 dB) and easy (+4 dB) SNRs while prefrontal cortex oxygenation was measured using fNIRS; oxygenation across the prefrontal cortex increased to compensate for the degrading impacts of a poor SNR. Using fMRI, Du et al. (2014) also found increased hemodynamic activity in prefrontal areas under challenging conditions among young normal hearing listeners. Specifically, they observed increased activity in left ventral premotor cortices and Broca's areas at more challenging SNRs in the context of a phoneme identification task. Although Rowland et al. (2018) also used fNIRS, they did not observe changes in prefrontal cortical activity corresponding to SNR. This null finding may be attributable to the use of normal hearing participants or to the specific task employed, which involved a high degree of ecological validity: participants listened to speech embedded in dynamic naturalistic scenes that varied in SNR. While the current study also retained some level of ecological validity, considerably more experimental control was imposed on the listening conditions, with blocking of SNR and amplification across conditions and the consistent use of multitalker babble as compared with real-world binaural recordings of background sounds. Another important difference between the studies was the use of hearing impaired participants in the current study compared with normal-hearing participants in Rowland et al. (2018).
fNIRS: Effect of Amplification
The results of the current experiment demonstrated that overall prefrontal cortex oxygenation was sensitive to changes in listening conditions. Specifically, less oxygenation was observed across the prefrontal cortex when participants wore hearing aids compared to not wearing hearing aids. These results were also consistent with our theoretically derived hypothesis, but inconsistent with the two prior fNIRS studies that specifically evaluated hearing aid outcomes. Rovetti et al. (2019) found no difference in oxygenation between hearing aid use and non-use in older adults while completing an auditory n-back task. Oeding (2022) also found no difference between an omnidirectional and directional microphone program while asking young normal hearing listeners to identify a talker's gender in noise. To reconcile these differences in outcomes, it is important to consider methodological differences across these studies.
First, the previous studies implemented tasks that differ considerably from a speech understanding task. While the n-back task employed by Rovetti et al. (2019) is widely used for scientific and clinical assessment of working memory (Jaeggi et al., 2010; Kirchner, 1958), it does not reflect real-world situations in which hearing aids might be used. The gender identification by voice task employed by Oeding (2022) is ecologically valid, but it may not recruit the same cortical networks involved in speech understanding. Second, participants in Rovetti et al. (2019) wore their own hearing aids, as opposed to having hearing aid fitting controlled for in the study design. This design decision had the advantage of eliminating acclimatization factors, but it may have inadvertently introduced variability in hearing aid benefit due to expected variability in fitting quality (Aazh et al., 2012; Baker & Jenstad, 2017; Dao et al., 2021; Polonenko et al., 2010). Finally, the participants in Oeding (2022) were normal hearing listeners who were not candidates for amplification, nor did they have experience with it, which may explain the absence of hearing aid benefits. The discrepancy in experimental parameters among the studies described echoes the inconsistency of hearing aid benefit across studies summarized in the review by Ohlenforst et al. (2017). Based on the current results, we suggest that future assessments of hearing aid benefit employing fNIRS would benefit from the use of an ecologically valid task with experienced hearing aid users (wearing systematically controlled hearing aids).
A final consideration for the effect of amplification is why its benefit was greater than that of SNR in terms of listening accuracy and subjective listening effort ratings, but not in terms of prefrontal cortex oxygenation. Amplification increased listening accuracy scores by 31.7 rationalized arcsine units and reduced listening effort ratings by 1.02 units, compared to 18.9 and 0.8, respectively, for SNR. By contrast, amplification reduced oxygenation by 17.1 units compared to 21.1 units for an increase in SNR. Recall that most hearing aids implement a beamformer, which selectively amplifies speech in front of the listener while attenuating noise from the rear, thus providing an SNR advantage in the hearing aid condition compared to the unaided condition. In ideal conditions, beamforming can offer an SNR advantage of approximately 3–6 dB (Derleth et al., 2021; Ricketts, 2001). Beamforming is coupled with individualized, frequency-specific gain compensation for hearing loss, restoring audibility of spectral cues that further support speech understanding. The coupling of gain compensation and beamforming may explain why the difference in behavioral scores was greater when comparing amplification conditions vs. SNR conditions. This raises the question: should amplification have reduced oxygenation in the prefrontal cortex by a greater magnitude than a change of SNR in the stimulus? The meta-analysis by Alain et al. (2018) identified cortical networks that were associated with different types of distortions. Studies focusing on speech-in-noise tasks (i.e., those in which SNR is manipulated) were associated with activation of the left prefrontal cortex, along with the parietal and insular cortices. In contrast, studies focusing on spectrally degraded speech (i.e., degradations due to impacts of hearing loss on speech audibility) were associated only with activation in temporal cortices, not the prefrontal cortex investigated here. This may explain why the magnitude of oxygenation change in the prefrontal cortex was smaller for amplification than for a change in SNR, as spectral degradation in the unaided condition may have been associated with greater activation of other, unmeasured cortices compared to clearer speech. Further research is needed to delineate the relationship between different forms of acoustic distortion and activation of different cortical regions, specifically in populations with hearing loss.
Although we did not observe a significant interaction between amplification and SNR, inspection of Figure 5 suggests that the main effect of amplification was driven primarily by the easy-SNR condition. It seems likely that we did not have sufficient statistical power to observe a statistically significant interaction effect. Future studies wishing to investigate this specific interaction should avoid drawing definitive conclusions based solely on these findings and should consider employing larger sample sizes or alternative experimental designs to enhance statistical power and the reliability of the results.
Functional Role of the Prefrontal Cortex in Speech-in-Noise Tasks
This study sought to illustrate the sensitivity of the prefrontal cortex to changes in listening conditions associated with listening effort, as numerous studies have done before (Rovetti et al., 2019, 2022; Rowland et al., 2018; Wijayasiri et al., 2017). We observed a significant negative correlation between prefrontal oxygenation and listening accuracy, and a significant positive correlation between prefrontal oxygenation and subjective listening effort. These findings support hemodynamic activity of the prefrontal cortex as an index of listening effort. However, given that listening accuracy and effort ratings were only moderately correlated, the two outcomes are not interchangeable. The moderate correlation between these two behavioral outcomes may be explained by prior research suggesting that subjective ratings of listening effort are partially driven by perception of accuracy (Moore & Picou, 2018). This finding should also be considered alongside that of Zhou et al. (2022), in which auditory and left frontal cortical activation was negatively correlated with self-reported difficulty and positively correlated with speech intelligibility. Although this finding is contrary to the current findings, it should be noted that the study by Zhou et al. (2022) used sentences that were distorted using a vocoder and word-shuffling. Since different forms of speech distortion have been associated with different cortical activation patterns (Alain et al., 2018), some level of uncertainty exists. Future work is needed to better understand the relationship between subjective listening effort and oxygenation, but for the time being, we would advise caution when interpreting subjective listening effort results independent of accuracy results.
A final consideration regarding activation patterns attributed here to the prefrontal cortex lies in the time series data reported in Figure 4. If we assume that increases in oxygenation are associated with task demands, then why did a rapid decrease occur in the first 30 s of the task, followed by a rapid recovery to a sub-baseline plateau? The answer may lie in the cingulo-opercular network (COPN), which comprises the dorsal anterior cingulate cortex, anterior insula, and frontal operculum (areas adjacent to and associated with the prefrontal cortex; Dosenbach et al., 2007; Hausman et al., 2021). The COPN is thought to be engaged during the top-down attentional control required in goal-directed behavior, particularly in anticipation of upcoming task demands (Dosenbach et al., 2007), as well as during the processing of degraded speech (Peelle, 2018). While most of the COPN is out of reach given the depth of penetration possible in fNIRS, the dorsal anterior cingulate cortex abuts the superficial cortex, falling within 2 cm of the scalp (Bush et al., 2002). In the current study, the COPN may have been particularly active during baseline recording while participants anticipated the test stimuli, followed by deactivation once participants listened to the first few stimuli. Beyond these first few stimuli, the network may have reactivated, particularly in the context of more difficult listening conditions requiring a stronger demand to sustain attention (Peelle, 2018). We propose that these activations, deactivations, and reactivations likely influenced our prefrontal fNIRS recordings.
Another point to consider is the difference between the pattern of activations observed here and in Rovetti et al. (2022), despite the overlap in task demands (R-SPIN). In the current study, there was an initial drop in prefrontal activity, followed by a rapid recovery. An initial drop was also evident in Rovetti et al. (2022) and may be attributable to the calibration of effort at the outset of the block. However, the recovery from the initial drop in Rovetti et al. (2022) was more gradual, taking place over the course of the entire block. This more gradual recovery may reflect a reduced need for engagement of the prefrontal cortex in a younger normal hearing sample. Moreover, while all oxygenation levels were negative relative to baseline in the current study, they were all positive in Rovetti et al. (2022), including the initial drop. The differences in the overall extent of prefrontal activation (with contribution from the abutting COPN) across studies may again be due to differences in the study samples. The current study consisted of older adults with hearing loss, who experience communicative difficulties and may therefore have been under greater anticipatory threat while waiting for the task to begin; this would likely have increased COPN activation during the baseline period. The younger normal-hearing participants tested by Rovetti et al. (2022) likely experienced less anticipatory threat, as they had normal hearing and were for the most part experienced test participants, having been drawn in part from an undergraduate psychology participant pool.
Limitations
This study treated listening effort as an isolated construct unaffected by confounding factors. However, Pichora-Fuller et al. (2016) posited that listening effort is a dynamic construct that interacts with motivation, fatigue, task demands, and time, and those authors further assumed that cognitive resources are finite within a single session. In the current study, it is possible that motivation and fatigue varied over time. Future research should investigate the relationship between oxygenation and listening effort in paradigms where motivation, fatigue, and task demands are systematically manipulated.
We had put forward a tentative prediction concerning the localization of activation patterns. Specifically, we expected that the left prefrontal cortex might be most sensitive to differences in listening conditions, as has been shown in several prior investigations (Lawrence et al., 2018; Rovetti et al., 2022; Rowland et al., 2018; Wijayasiri et al., 2017). However, this prediction was tempered by the knowledge that our participants were older adults, and thus less likely to show hemispheric specialization (Cabeza, 2002). In addition, our participants were hearing impaired, which may have led to compensatory patterns of activity that disrupt the localization patterns previously observed in younger adults with normal hearing.
The lack of hemispheric specialization may also be explained by the lack of short-channel optodes. fNIRS responses consist of hemodynamic activity of both neural and non-neural origins (i.e., extracerebral blood flow; Kirilina et al., 2012; Scholkmann et al., 2022). Short-channel optodes are recommended to measure extracerebral blood flow so that it can be removed from the fNIRS response (Yücel et al., 2021). Like neural tissue, co-located extracerebral areas may show task-evoked responses or recording artifacts due to local circulatory networks supplying and draining cerebral blood flow (Kirilina et al., 2012), and these extracerebral areas may be broader in size than the cortical regions of interest. Because the extracerebral space lies much closer to the optodes, its response is much stronger than the cerebral response, which risks masking hemodynamic activity originating from an intended cortical area, in this case the left prefrontal cortex. Therefore, given that the current study observed increased activation across the entire prefrontal cortex rather than just its left portion, it is possible that the observed effect was derived from the extracerebral space maintaining blood flow to the cortex rather than from the cortex itself.
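As context for this limitation, the sketch below illustrates how short-channel regression is commonly performed when short-separation channels are available; it was not part of the current study's pipeline. This is a minimal sketch only: the function name, array names, and simulated signals are hypothetical, and the ordinary-least-squares scaling shown here is one of several published variants.

```python
import numpy as np

def short_channel_regress(long_hbo: np.ndarray, short_hbo: np.ndarray) -> np.ndarray:
    """Subtract the extracerebral component captured by a co-located
    short-separation channel from a long-channel HbO time series,
    using ordinary-least-squares scaling."""
    # Center both signals so the regression acts on fluctuations only.
    y = long_hbo - long_hbo.mean()
    s = short_hbo - short_hbo.mean()
    # OLS estimate of the extracerebral contribution to the long channel.
    beta = np.dot(s, y) / np.dot(s, s)
    # The residual approximates the cerebral component.
    return y - beta * s

# Hypothetical demonstration: 60 s of data sampled at 10 Hz.
rng = np.random.default_rng(0)
t = np.arange(0.0, 60.0, 0.1)
systemic = 0.5 * np.sin(2 * np.pi * 0.1 * t)   # Mayer-wave-like oscillation
cerebral = 0.2 * np.sin(2 * np.pi * 0.02 * t)  # slow task-evoked response
short_ch = systemic + 0.05 * rng.standard_normal(t.size)
long_ch = cerebral + 0.8 * systemic + 0.05 * rng.standard_normal(t.size)
cleaned = short_channel_regress(long_ch, short_ch)  # mostly the cerebral part
```

The key design point is that the short channel samples only superficial tissue, so whatever it shares with the long channel is assumed to be extracerebral and is removed; without such channels, as in the current study, that component remains in the recording.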
Another methodological limitation may have added noise to the data. Recall that the study procedure required participants to verbally repeat a word, causing facial movement and risking slight repositioning of the fNIRS optodes. To mitigate this risk, motion artifacts were corrected in line with fNIRS best practices (Yücel et al., 2021) using the wavelet decomposition method (Molavi & Dumont, 2012).
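For readers unfamiliar with this class of correction, the sketch below gives a simplified, hypothetical version in the spirit of Molavi and Dumont (2012): the channel is decomposed with a discrete wavelet transform, detail coefficients that are statistical outliers (assumed to reflect abrupt motion) are zeroed, and the signal is reconstructed. The interquartile-range rule stands in for their probabilistic threshold, and all names and parameter values are illustrative rather than those used in the present analysis.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_motion_correct(signal: np.ndarray,
                           wavelet: str = "db2",
                           iqr_k: float = 1.5) -> np.ndarray:
    """Decompose the channel with a discrete wavelet transform, zero the
    detail coefficients flagged as outliers (assumed to reflect abrupt
    motion), and reconstruct the signal."""
    coeffs = pywt.wavedec(signal, wavelet)
    cleaned = [coeffs[0]]  # keep approximation coefficients untouched
    for detail in coeffs[1:]:
        q1, q3 = np.percentile(detail, [25, 75])
        lo, hi = q1 - iqr_k * (q3 - q1), q3 + iqr_k * (q3 - q1)
        d = detail.copy()
        d[(d < lo) | (d > hi)] = 0.0  # remove spike-like coefficients
        cleaned.append(d)
    # waverec can pad by one sample when the input length is odd.
    return pywt.waverec(cleaned, wavelet)[: signal.size]
```

Only the detail coefficients are thresholded because abrupt motion produces large, sparse coefficients at fine scales, whereas the slow hemodynamic response is carried mainly by the approximation coefficients, which are left intact.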
Conclusion
In conclusion, this study was among the first to investigate the sensitivity of fNIRS to hearing aid use and SNR in a speech understanding/listening effort task with older adult experienced hearing aid users. The study found that a degradation in listening conditions (represented by the removal of hearing aids and a reduction in SNR) was associated with increased oxygenation across the prefrontal cortex. These results suggest that the use of hearing aids may reduce the cognitive resources required during communication, thus lessening objective listening effort (i.e., the deliberate allocation of mental resources to overcome obstacles in goal pursuit when carrying out a [listening] task). Future research should investigate the relationship between oxygenation and other measures associated with listening effort in paradigms where listening accuracy, task demands, and motivation are systematically manipulated. Overall, the current findings support the efficacy of fNIRS for measuring the impacts of hearing aid use on listening effort in older adults.
Appendix
Figure A1.
HbR time series for each subregion averaged across participants.
Footnotes
The authors declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: Authors J. M. V. and J. Q. are employees of Sonova. Author F. A. R. receives grant funding from Sonova.
Funding: The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was funded by Sonova Canada Inc. Funding for this study was derived in part from the NSERC-Sonova Senior Industrial Research Chair awarded to author F. A. R.
ORCID iD: Frank A. Russo https://orcid.org/0000-0002-2939-6358
References
- Aazh H., Moore B. C. J., Prasher D. (2012). The accuracy of matching target insertion gains with open-fit hearing aids. American Journal of Audiology, 21(2), 175–180. 10.1044/1059-0889(2012/11-0008) [DOI] [PubMed] [Google Scholar]
- Acharya J. N., Hani A., Cheek J., Thirumala P., Tsuchida T. N. (2016). American clinical neurophysiology society guideline 2: Guidelines for standard electrode position nomenclature. Journal of Clinical Neurophysiology, 33(4), 308–311. 10.1097/WNP.0000000000000316 [DOI] [PubMed] [Google Scholar]
- Alain C., Du Y., Bernstein L. J., Barten T., Banai K. (2018). Listening under difficult conditions: An activation likelihood estimation meta-analysis. Human Brain Mapping, 39(7), 2695–2709. 10.1002/hbm.24031 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Alhanbali S., Dawes P., Lloyd S., Munro K. J. (2017). Self-reported listening-related effort and fatigue in hearing-impaired adults. Ear & Hearing, 38(1), e39–e48. 10.1097/AUD.0000000000000361 [DOI] [PubMed] [Google Scholar]
- Alhanbali S., Dawes P., Millman R. E., Munro K. J. (2019). Measures of listening effort are multidimensional. Ear and Hearing, 40(5), 1084–1097. 10.1097/AUD.0000000000000697 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Baayen R. H., Davidson D. J., Bates D. M. (2008). Mixed-effects modeling with crossed random effects for subjects and items. Journal of Memory and Language, 59(4), 390–412. 10.1016/j.jml.2007.12.005 [DOI] [Google Scholar]
- Baker S., Jenstad L. (2017). Matching real-ear targets for adult hearing aid fittings: NAL-NL1 and DSL v5.0 prescriptive formulae. Canadian Journal of Speech-Language Pathology and Audiology, 41(2), 227–235. [Google Scholar]
- Bakkum K. H. E., Teunissen E. M., Janssen A. M., Lieu J. E. C., Hol M. K. S. (2023). Subjective fatigue in children with unaided and aided unilateral hearing loss. The Laryngoscope, 133(1), 189–198. 10.1002/lary.30104 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bates D., Mächler M., Bolker B., Walker S. (2015). Fitting linear mixed-effects models using lme4. Journal of Statistical Software, 67(1), 1–48. 10.18637/jss.v067.i01 [DOI] [Google Scholar]
- Bell L., Peng Z. E., Pausch F., Reindl V., Neuschaefer-Rube C., Fels J., Konrad K. (2020). fNIRS assessment of speech comprehension in children with normal hearing and children with hearing aids in virtual acoustic environments: Pilot data and practical recommendations. Children, 7(11), 1–25. 10.3390/children7110219 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bess F. H., Hornsby B. W. Y. (2014). Commentary: Listening can be exhausting—fatigue in children and adults with hearing loss. Ear & Hearing, 35(6), 592–599. 10.1097/AUD.0000000000000099 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Bilger R. C., Nuetzel J. M., Rabinowitz W. M., Rzeczkowski C. (1984). Standardization of a test of speech perception in noise. Journal of Speech, Language, and Hearing Research, 27(1), 32–48. 10.1044/jshr.2701.32 [DOI] [PubMed] [Google Scholar]
- Binder J. R., Liebenthal E., Possing E. T., Medler D. A., Ward B. D. (2004). Neural correlates of sensory and decision processes in auditory object identification. Nature Neuroscience, 7(3), 295–301. 10.1038/nn1198 [DOI] [PubMed] [Google Scholar]
- Burnham K. P., Anderson D. R. (2004). Multimodel inference: Understanding AIC and BIC in model selection. Sociological Methods & Research, 33(2), 261–304. 10.1177/0049124104268644 [DOI] [Google Scholar]
- Bush G., Vogt B. A., Holmes J., Dale A. M., Greve D., Jenike M. A., Rosen B. R. (2002). Dorsal anterior cingulate cortex: A role in reward-based decision making. Proceedings of the National Academy of Sciences of the United States of America, 99(1), 523–528. 10.1073/pnas.012470999 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Cabeza R. (2002). Hemispheric asymmetry reduction in older adults: The HAROLD model. Psychology and Aging, 17(1), 85–100. 10.1037/0882-7974.17.1.85 [DOI] [PubMed] [Google Scholar]
- Cabeza R., Anderson N. D., Locantore J. K., McIntosh A. R. (2002). Aging gracefully: Compensatory brain activity in high-performing older adults. NeuroImage, 17(3), 1394–1402. 10.1006/nimg.2002.1280 [DOI] [PubMed] [Google Scholar]
- Chen L. C., Stropahl M., Schönwiesner M., Debener S. (2017). Enhanced visual adaptation in cochlear implant users revealed by concurrent EEG-fNIRS. NeuroImage, 146, 600–608. 10.1016/j.neuroimage.2016.09.033 [DOI] [PubMed] [Google Scholar]
- Dao A., Folkeard P., Baker S., Pumford J., Scollie S. (2021). Fit-to-targets and aided speech intelligibility index values for hearing aids fitted to the DSL v5-adult prescription. Journal of the American Academy of Audiology, 32(2), 90–98. 10.1055/s-0040-1718707 [DOI] [PubMed] [Google Scholar]
- Davis M. H., Johnsrude I. S. (2003). Hierarchical processing in spoken language comprehension. The Journal of Neuroscience, 23(8), 3423–3431. 10.1523/JNEUROSCI.23-08-03423.2003 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Delpy D. T., Cope M., Zee P. V. D., Arridge S., Wray S., Wyatt J. (1988). Estimation of optical pathlength through tissue from direct time of flight measurement. Physics in Medicine and Biology, 33(12), 1433–1442. 10.1088/0031-9155/33/12/008 [DOI] [PubMed] [Google Scholar]
- Derleth P., Georganti E., Latzel M., Courtois G., Hofbauer M., Raether J., Kuehnel V. (2021). Binaural signal processing in hearing aids. Seminars in Hearing, 42(03), 206–223. 10.1055/s-0041-1735176 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Devor A., Boas D., Einevoll G., Buxton R., Dale A. (2012). Neuronal basis of non-invasive functional imaging: From microscopic neurovascular dynamics to BOLD fMRI. In Neural metabolism in vivo (Advances in Neurobiology, pp. 433–450). Springer. 10.1007/978-1-4614-1788-0_15 [DOI] [Google Scholar]
- Dosenbach N. U. F., Fair D. A., Miezin F. M., Cohen A. L., Wenger K. K., Dosenbach R. A. T., Fox M. D., Snyder A. Z., Vincent J. L., Raichle M. E., Schlaggar B. L., Petersen S. E. (2007). Distinct brain networks for adaptive and stable task control in humans. Proceedings of the National Academy of Sciences of the United States of America, 104(26), 11073–11078. 10.1073/pnas.0704320104 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Du Y., Buchsbaum B. R., Grady C. L., Alain C. (2014). Noise differentially impacts phoneme representations in the auditory and speech motor systems. Proceedings of the National Academy of Sciences, 111(19), 7126–7131. 10.1073/pnas.1318738111 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gaab N., Gabrieli J. D. E., Glover G. H. (2007). Assessing the influence of scanner background noise on auditory processing. II. An fMRI study comparing auditory processing in the absence and presence of recorded scanner noise using a sparse design. Human Brain Mapping, 28(8), 721–732. 10.1002/hbm.20299 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Gagné J.-P., Besser J., Lemke U. (2017). Behavioral assessment of listening effort using a dual-task paradigm: A review. Trends in Hearing, 21, 1–25. 10.1177/2331216516687287 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Golestani N., Hervais-Adelman A., Obleser J., Scott S. K. (2013). Semantic versus perceptual interactions in neural processing of speech-in-noise. NeuroImage, 79, 52–61. 10.1016/j.neuroimage.2013.04.049 [DOI] [PubMed] [Google Scholar]
- Harrison S. C., Lawrence R., Hoare D. J., Wiggins I. M., Hartley D. E. H. (2021). Use of functional near-infrared spectroscopy to predict and measure cochlear implant outcomes: A scoping review. Brain Sciences, 11(1439), 1–22. 10.3390/brainsci11111439 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hausman H. K., Hardcastle C., Albizu A., Kraft J. N., Evangelista N. D., Boutzoukas E. M., Langer K., O’Shea A., Van Etten E. J., Bharadwaj P. K., Song H., Smith S. G., Porges E., DeKosky S. T., Hishaw G. A., Wu S., Marsiske M., Cohen R., Alexander G. E., Woods A. J. (2021). Cingulo-opercular and frontoparietal control network connectivity and executive functioning in older adults. GeroScience, 44(2), 847–866. 10.1007/s11357-021-00503-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Huppert T. J., Diamond S. G., Franceschini M. A., Boas D. A. (2009). HomER: A review of time-series analysis methods for near-infrared spectroscopy of the brain. Applied Optics, 48(10), D280. 10.1364/AO.48.00D280 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Jaeggi S. M., Buschkuehl M., Perrig W. J., Meier B. (2010). The concurrent validity of the N-back task as a working memory measure. Memory, 18(4), 394–412. 10.1080/09658211003702171 [DOI] [PubMed] [Google Scholar]
- Johnson J., Xu J., Cox R., Pendergraft P. (2015). A comparison of two methods for measuring listening effort as part of an audiologic test battery. American Journal of Audiology, 24(3), 419–431. 10.1044/2015 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Jungnickel E., Gehrke L., Klug M., Gramann K. (2019). MoBI—Mobile brain/body imaging. In H. Ayaz & F. Dehais (Eds.), Neuroergonomics (pp. 59–63). Academic Press. 10.1016/B978-0-12-811926-6.00010-5 [Google Scholar]
- Keidser G., Dillon H. R., Flax M., Ching T., Brewer S. (2011). The NAL-NL2 prescription procedure. Audiology Research, 1(1), e24. 10.4081/audiores.2011.e24 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kirchner W. K. (1958). Age differences in short-term retention of rapidly changing information. Journal of Experimental Psychology, 55(4), 352–358. 10.1037/h0043688 [DOI] [PubMed] [Google Scholar]
- Kirilina E., Jelzow A., Heine A., Niessing M., Wabnitz H., Brühl R., Ittermann B., Jacobs A. M., Tachtsidis I. (2012). The physiological origin of task-evoked systemic artefacts in functional near infrared spectroscopy. Neuroimage, 61(1), 70–81. 10.1016/j.neuroimage.2012.02.074 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kocsis L., Herman P., Eke A. (2006). The modified Beer-Lambert law revisited. Physics in Medicine and Biology, 51, N91–N98. 10.1088/0031-9155/51/5/N02 [DOI] [PubMed] [Google Scholar]
- Kollmeier B., Warzybok A., Hochmuth S., Zokoll M. A., Uslar V., Brand T., Wagener K. C. (2015). The multilingual matrix test: Principles, applications, and comparison across languages: A review. International Journal of Audiology, 54(Suppl. 2), 3–16. 10.3109/14992027.2015.1020971 [DOI] [PubMed] [Google Scholar]
- Kuchinsky S. E., Ahlstrom J. B., Vaden K. I., Cute S. L., Humes L. E., Dubno J. R., Eckert M. A. (2013). Pupil size varies with word listening and response selection difficulty in older adults with hearing loss. Psychophysiology, 50(1), 23–34. 10.1111/j.1469-8986.2012.01477.x [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lassnigg A., Hiesmayr M., Keznickl P., Müllner T., Ehrlich M., Grubhofer G. (1999). Cerebral oxygenation during cardiopulmonary bypass measured by near-infrared spectroscopy: Effects of hemodilution, temperature, and flow. Journal of Cardiothoracic and Vascular Anesthesia, 13(5), 544–548. 10.1016/S1053-0770(99)90005-8 [DOI] [PubMed] [Google Scholar]
- Lawrence R. J., Wiggins I. M., Anderson C. A. (2018). Cortical correlates of speech intelligibility measured using functional near-infrared spectroscopy (fNIRS). Hearing Research, 370, 53–64. 10.1016/j.heares.2018.09.005 [DOI] [PubMed] [Google Scholar]
- Liang L.-Y., Shewokis P. A., Getchell N. (2016). Brain activation in the prefrontal cortex during motor and cognitive tasks in adults. Journal of Behavioral and Brain Science, 6(12), 463–474. 10.4236/jbbs.2016.612042 [DOI] [Google Scholar]
- Lu C.-F., Liu Y.-C., Yang Y.-R., Wu Y.-T., Wang R.-Y. (2015). Maintaining gait performance by cortical activation during dual-task interference: A functional near-infrared spectroscopy study. PLOS ONE, 10(6), e0129390. 10.1371/journal.pone.0129390 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Mackersie C. L., Calderon-Moultrie N. (2016). Autonomic nervous system reactivity during speech repetition tasks: Heart rate variability and skin conductance. Ear & Hearing, 37(1), 118S–125S. 10.1097/AUD.0000000000000305 [DOI] [PubMed] [Google Scholar]
- Mackersie C. L., Cones H. (2011). Subjective and psychophysiological indexes of listening effort in a competing-talker task. Journal of the American Academy of Audiology, 22(02), 113–122. 10.3766/jaaa.22.2.6 [DOI] [PMC free article] [PubMed] [Google Scholar]
- McCoach D. B., Adelson J. L. (2010). Dealing with dependence (part I): Understanding the effects of clustered data. Gifted Child Quarterly, 54(2), 152–155. 10.1177/0016986210363076 [DOI] [Google Scholar]
- McGarrigle R., Munro K. J., Dawes P., Stewart A. J., Moore D. R., Barry J. G., Amitay S. (2014). Listening effort and fatigue: What exactly are we measuring? A British society of audiology cognition in hearing special interest group “white paper.” International Journal of Audiology, 53(7), 433–445. 10.3109/14992027.2014.890296 [DOI] [PubMed] [Google Scholar]
- McKay C. M., Shah A., Seghouane A. K., Zhou X., Cross W., Litovsky R. (2016). Connectivity in language areas of the brain in cochlear implant users as revealed by fNIRS. Advances in Experimental Medicine and Biology, 894, 327–335. 10.1007/978-3-319-25474-6_34 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Molavi B., Dumont G. A. (2012). Wavelet-based motion artifact removal for functional near-infrared spectroscopy. Physiological Measurement, 33(2), 259–270. 10.1088/0967-3334/33/2/259 [DOI] [PubMed] [Google Scholar]
- Moore T. M., Picou E. M. (2018). A potential bias in subjective ratings of mental effort. Journal of Speech, Language, and Hearing Research, 61(9), 2405–2421. 10.1044/2018_JSLHR-H-17-0451 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Nachtegaal J., Kuik D. J., Anema J. R., Goverts S. T., Festen J. M., Kramer S. E. (2009). Hearing status, need for recovery after work, and psychosocial work characteristics: Results from an internet-based national survey on hearing. International Journal of Audiology, 48(10), 684–691. 10.1080/14992020902962421 [DOI] [PubMed] [Google Scholar]
- Neagu M.-B., Kressner A. A., Relaño-Iborra H., Bækgaard P., Dau T., Wendt D. (2023). Investigating the reliability of pupillometry as a measure of individualized listening effort. Trends in Hearing, 27, 233121652311532. 10.1177/23312165231153288 [DOI] [Google Scholar]
- Nezlek J. B., Mroziński B. (2020). Applications of multilevel modeling in psychological science: Intensive repeated measures designs. L’Année Psychologique, 120(1), 39–72. 10.3917/anpsy1.201.0039 [DOI] [Google Scholar]
- Nilsson M., Soli S. D., Sullivan J. A. (1994). Development of the hearing in noise test for the measurement of speech reception thresholds in quiet and in noise. Journal of the Acoustical Society of America, 95(2), 1085–1099. 10.1121/1.408469 [DOI] [PubMed] [Google Scholar]
- Oeding K. A. (2022). Improving hearing aid outcomes in background noise: An investigation of outcome measures and patient factors. University of Minnesota. [Google Scholar]
- Ohlenforst B., Zekveld A. A., Jansma E. P., Wang Y., Naylor G., Lorens A., Lunner T., Kramer S. E. (2017). Effects of hearing impairment and hearing aid amplification on listening effort: A systematic review. Ear and Hearing, 38(3), 267–281. 10.1097/AUD.0000000000000396 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Peelle J. E. (2014). Methodological challenges and solutions in auditory functional magnetic resonance imaging. Frontiers in Neuroscience, 8, 1–13. 10.3389/fnins.2014.00253 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Peelle J. E. (2018). Listening effort: How the cognitive consequences of acoustic challenge are reflected in brain and behavior. Ear and Hearing, 39(2), 204–214. 10.1097/AUD.0000000000000494 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Pellegrino G., Schuler A. L., Arcara G., Di Pino G., Piccione F., Kobayashi E. (2022). Resting state network connectivity is attenuated by fMRI acoustic noise. NeuroImage, 247, 118791. 10.1016/j.neuroimage.2021.118791 [DOI] [PubMed] [Google Scholar]
- Pichora-Fuller M. K., Kramer S. E., Eckert M. A., Edwards B., Hornsby B. W. Y., Humes L. E., Lemke U., Lunner T., Matthen M., Mackersie C. L., Naylor G., Phillips N. A., Richter M., Rudner M., Sommers M. S., Tremblay K. L., Wingfield A. (2016). Hearing impairment and cognitive energy: The framework for understanding effortful listening (FUEL). Ear & Hearing, 37, 5S–27S. 10.1097/AUD.0000000000000312 [DOI] [PubMed] [Google Scholar]
- Picou E. M., Ricketts T. A., Hornsby B. W. Y. (2013). How hearing aids, background noise, and visual cues influence objective listening effort. Ear and Hearing, 34(5), e52–e64. 10.1097/AUD.0b013e31827f0431 [DOI] [PubMed] [Google Scholar]
- Pollonini L., Olds C., Abaya H., Bortfeld H., Beauchamp M. S., Oghalai J. S. (2014). Auditory cortex activation to natural speech and simulated cochlear implant speech measured with functional near-infrared spectroscopy. Hearing Research, 309, 84–93. 10.1016/j.heares.2013.11.007 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Polonenko M. J., Scollie S. D., Moodie S., Seewald R. C., Laurnagaray D., Shantz J., Richards A. (2010). Fit to targets, preferred listening levels, and self-reported outcomes for the DSL v5.0 a hearing aid prescription for adults. International Journal of Audiology, 49(8), 550–560. 10.3109/14992021003713122 [DOI] [PubMed] [Google Scholar]
- Quené H., Van Den Bergh H. (2004). On multi-level modeling of data from repeated measures designs: A tutorial. Speech Communication, 43(1–2), 103–121. 10.1016/j.specom.2004.02.004 [DOI] [Google Scholar]
- Ravicz M. E., Melcher J. R., Kiang N. Y.-S. (2000). Acoustic noise during functional magnetic resonance imaging. The Journal of the Acoustical Society of America, 108(4), 1683–1696. 10.1121/1.1310190 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ricketts T. A. (2001). Directional hearing aids. Trends in Amplification, 5(4), 139–176. 10.1177/108471380100500401 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Rovetti J., Goy H., Pichora-Fuller M. K., Russo F. A. (2019). Functional near-infrared spectroscopy as a measure of listening effort in older adults who use hearing aids. Trends in Hearing, 23, 233121651988672. 10.1177/2331216519886722 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Rovetti J., Goy H., Zara M., Russo F. A. (2022). Reduced semantic context and signal-to-noise ratio increase listening effort as measured using functional near-infrared spectroscopy. Ear & Hearing, 43(3), 836–848. 10.1097/AUD.0000000000001137 [DOI] [PubMed] [Google Scholar]
- Rowland S. C., Hartley D. E. H., Wiggins I. M. (2018). Listening in naturalistic scenes: What can functional near-infrared spectroscopy and intersubject correlation analysis tell us about the underlying brain activity? Trends in Hearing, 22, 233121651880411. 10.1177/2331216518804116 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Saleh S., Sandroff B. M., Vitiello T., Owoeye O., Hoxha A., Hake P., Goverover Y., Wylie G., Yue G., DeLuca J. (2018). The role of premotor areas in dual tasking in healthy controls and persons with multiple sclerosis: An fNIRS imaging study. Frontiers in Behavioral Neuroscience, 12, 296. 10.3389/fnbeh.2018.00296 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Sappia M. S., Hakimi N., Colier W. N. J. M., Horschig J. M. (2020). Signal quality index: An algorithm for quantitative assessment of functional near infrared spectroscopy signal quality. Biomedical Optics Express, 11(11), 6732. 10.1364/BOE.409317 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Scholkmann F., Tachtsidis I., Wolf M., Wolf U. (2022). Systemic physiology augmented functional near-infrared spectroscopy: A powerful approach to study the embodied human brain. Neurophotonics, 9(3), 030801. 10.1117/1.NPh.9.3.030801 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Seeman S., Sims R. (2015). Comparison of psychophysiological and dual-task measures of listening effort. Journal of Speech, Language, and Hearing Research, 58(6), 1781–1792. 10.1044/2015_JSLHR-H-14-0180 [DOI] [PubMed] [Google Scholar]
- Shatzer H. E., Russo F. A. (2023). Brightening the study of listening effort with functional near-infrared spectroscopy: A scoping review. Seminars in Hearing, 44(2), 188–210. 10.1055/s-0043-1766105 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Studebaker G. A. (1985). A “rationalized” arcsine transform. Journal of Speech, Language, and Hearing Research, 28(3), 455–462. 10.1044/jshr.2803.455 [DOI] [PubMed] [Google Scholar]
- Vaden K. I., Kuchinsky S. E., Cute S. L., Ahlstrom J. B., Dubno J. R., Eckert M. A. (2013). The Cingulo-Opercular network provides word-recognition benefit. The Journal of Neuroscience, 33(48), 18979–18986. 10.1523/JNEUROSCI.1417-13.2013 [DOI] [PMC free article] [PubMed] [Google Scholar]
- White B. E., Langdon C. (2021). The cortical organization of listening effort: New insight from functional near-infrared spectroscopy. NeuroImage, 240, 118324. 10.1016/j.neuroimage.2021.118324 [DOI] [PubMed] [Google Scholar]
- Wijayasiri P., Hartley D. E. H., Wiggins I. M. (2017). Brain activity underlying the recovery of meaning from degraded speech: A functional near-infrared spectroscopy (fNIRS) study. Hearing Research, 351, 55–67. 10.1016/j.heares.2017.05.010 [DOI] [PubMed] [Google Scholar]
- Wild C. J., Yusuf A., Wilson D. E., Peelle J. E., Davis M. H., Johnsrude I. S. (2012). Effortful listening: The processing of degraded speech depends critically on attention. Journal of Neuroscience, 32(40), 14010–14021. 10.1523/JNEUROSCI.1528-12.2012 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Wu Y.-H., Stangl E., Chipara O., Hasan S. S., DeVries S., Oleson J. (2019). Efficacy and effectiveness of advanced hearing aid directional and noise reduction technologies for older adults with mild to moderate hearing loss. Ear & Hearing, 40(4), 805–822. 10.1097/AUD.0000000000000672 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Wu Y. H., Stangl E., Zhang X., Perkins J., Eilers E. (2016). Psychometric functions of dual-task paradigms for measuring listening effort. Ear & Hearing, 37(6), 660–670. 10.1097/AUD.0000000000000335 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Yost W. A. (2013). Fundamentals of hearing: An introduction (5th ed.). Emerald Group Publishing Limited. [Google Scholar]
- Yücel M. A., Lühmann A. V., Scholkmann F., Gervain J., Dan I., Ayaz H., Boas D., Cooper R. J., Culver J., Elwell C. E., Eggebrecht A., Franceschini M. A., Grova C., Homae F., Lesage F., Obrig H., Tachtsidis I., Tak S., Tong Y. (2021). Best practices for fNIRS publications. Neurophotonics, 8(1), 012101. 10.1117/1.NPh.8.1.012101 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Zekveld A. A., Kramer S. E. (2014). Cognitive processing load across a wide range of listening conditions: Insights from pupillometry. Psychophysiology, 51(3), 277–284. 10.1111/psyp.12151 [DOI] [PubMed] [Google Scholar]
- Zekveld A. A., Rudner M., Johnsrude I. S., Heslenfeld D. J., Rönnberg J. (2012). Behavioral and fMRI evidence that cognitive ability modulates the effect of semantic context on speech intelligibility. Brain and Language, 122(2), 103–113. 10.1016/j.bandl.2012.05.006 [DOI] [PubMed] [Google Scholar]
- Zhou X., Burg E., Kan A., Litovsky R. Y. (2022). Investigating effortful speech perception using fNIRS and pupillometry measures. Current Research in Neurobiology, 3, 100052. 10.1016/j.crneur.2022.100052 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Zhou X., Sobczak G., McKay C. M., Litovsky R. Y. (2020). Comparing fNIRS signal qualities between approaches with and without short channels. PLoS ONE, 15(12), e0244186. 10.1371/journal.pone.0244186 [DOI] [PMC free article] [PubMed] [Google Scholar]