Abstract
We investigate the short-term association between multidimensional acoustic characteristics of everyday ambient sound and continuous mean heart rate. We used in-market data from hearing aid users who logged ambient acoustics via smartphone-connected hearing aids and continuous mean heart rate in 5 min intervals from their own wearables. We find that acoustic characteristics explain approximately 4% of the fluctuation in mean heart rate throughout the day. Specifically, increases in ambient sound pressure intensity are significantly related to increases in mean heart rate, corroborating prior laboratory and short-term real-world data. In addition, increases in ambient sound quality—that is, more favourable signal-to-noise ratios—are associated with decreases in mean heart rate. Our findings document a previously unrecognized mixed influence of everyday sounds on cardiovascular stress and show that the relationship is more complex than an examination of sound intensity alone would suggest. Thus, our findings highlight the relevance of ambient environmental sound in models of human ecophysiology.
Keywords: real-world heart rates, data logging, everyday sounds, linear mixed-effects models, hearing loss, hearing aids
1. Introduction
Humans constantly face a multifaceted soundscape. Some content is encountered by choice (e.g. listening to music), while other content is not (e.g. traffic noise). Despite environmental sounds being an integral part of everyday human life, there is very little knowledge about how acoustic aspects of the soundscape encountered in everyday life, beyond intensity, affect the human body [1]. There is, however, increasing recognition that sounds classified as ‘noise' have harmful effects on the human auditory [2,3] and cardiovascular systems [4,5] despite being at intensity levels below those known to cause physical damage. A recent report from the WHO summarized the biological mechanisms of cardiovascular reactions to noise [6]. In brief, it suggested that auditory noise causes both a direct (i.e. through subcortical connections of the brain) and an indirect (involving projections from the auditory thalamus to the auditory cortex) stress reaction by disrupting the autonomic nervous system (ANS) balance, with elevated activity in the sympathetic nervous system branch and reduced activity in the parasympathetic nervous system branch [6–10], both of which control everyday fluctuations in heart rate (HR). Elevation in sympathetic control leads to slow (approx. 5 s delay) increases in HR [11], while elevated parasympathetic control leads to fast (within milliseconds) decreases in HR [12,13]. In the case of noise exposure, the elevation of sympathetic control can have acute effects that lead to momentarily elevated HR and blood pressure [14] and can cause long-term damage, such as elevating the risk of heart disease due to sustained periods of hypertension [15]. What is more, the effects of noise are not always perceivable, as illustrated in a study [16] in which clerical workers were exposed to simulated open-office noise.
Compared with a control group, they showed increased levels of adrenaline, indicative of elevated sympathetic activity, but did not report higher stress than the control group. Thus, to fully understand the impact of the everyday acoustic environment on the human body, subjective reporting of stress needs to be supported by objective measures such as cardiovascular reactions toward changes in the environment.
In a recent laboratory study, Shoushtarian et al. [17] documented a causal link between short-term noise exposure and acute cardiovascular system reactions. They found that relative to complete silence, noise segments presented at 40 and 15 dBA led to a decrease in HR, while those presented at levels of 65 and 90 dBA led to significant increases in HR. These changes were seen within 2–5 s post exposure. This timing is in line with the dynamic response of sympathetic ANS modulation of HR [12], rather than the faster-acting parasympathetic control [11], suggesting that, indeed, changes in the intensity of acoustic noise induce a ‘fight or flight' stress response in the sympathetic nervous system branch [5]. Pairwise comparisons of the changes revealed significant differences in HRs between all comparison levels except for between 65 and 90 dBA, suggesting a possible ceiling effect. The authors interpreted the 2% reduction in mean HR from baseline for the 15 dBA noise to be due to a fast-acting orienting response elicited by the onset of low intensity noise [18], and their finding suggests that noise can have diverse effects on the ANS depending on its intensity.
Diverse ANS reactions to sound have also been linked to the type of sound, psychological evaluation and task-related listening. For example, Sim et al. [19] reported that type of noise (e.g. traffic noise, background noise, speech noise) when presented at equal intensity differentially impacts heart rate variability (HRV). They argued that, besides activating a sympathetic stress reaction, noise consisting of speech also activates the parasympathetic branch, leading to a more stable balance of the ANS, typically represented by an increased HRV [20] and decreased HR [13]. Umemura & Honda [21] found that listening to classical music suppressed a sympathetic stress reaction (i.e. by keeping a stable ANS balance) compared with rock music or noise—both of which increased activity in the sympathetic control (measured by Mayer wave-related sinus arrhythmia) while simultaneously decreasing activity in the parasympathetic control (measured by respiratory sinus arrhythmia). Moreover, the degree of suppression was positively correlated with subjective reports of comfort [21]. On top of this, individual differences in HR responses to unpleasant noise (a high-speed dental engine) have been explained by individuals' familiarity and experience with the acoustic stimuli [22], indicating adaptation to commonly occurring noise sources in daily life. Regarding active task-related listening, elevated stress levels are typically associated with listening in adverse conditions such as noise, which degrade speech recognition and facilitate recruitment of executive cognitive resources [23]. This was illustrated by Holube et al. [24], who found that stress, as measured with electrodermal activity, correlated significantly with subjective ratings of listening effort and reported stress when listening to speech at different signal-to-noise ratios at a constant sound intensity (55 dB), indicating that stress was induced by the effort of listening.
In addition to this, the reward or success importance of a given listening task has been found to modulate cardiovascular activity by sympathetic arousal [25]. These findings are in line with the framework for understanding effortful listening (FUEL), which dictates that the effort expended during active listening is modulated by task demand (i.e. listening condition) and personal motivation for engagement [26]. This again highlights the need for considering sound dimensions beyond intensity when assessing the effects of the everyday acoustic environment on the cardiovascular system.
Measurements of the association between human ANS balance and noise intensity have typically been performed under highly controlled conditions (i.e. in the laboratory). This eliminates the potential impact of daily life contexts, which as pointed out above, are expected to play a significant role in ANS reactions. For example, individual sensitivity towards noise might fluctuate with time-of-day, location (e.g. home versus at work) or personal motivation, thus limiting the generalizability of effects measured in the laboratory [26,27]. Notably, in a recent real-world study [28], short-term increases in sound intensity over a 7-day period were associated with a concomitant increase in HR and HRV parameters, but a delayed decrease in the overall HRV reflecting the withdrawal of parasympathetic and elevation of sympathetic control [20], and confirming the role of real-world noise intensity in disrupting ANS stability. In addition, the strength of the association between sound intensity and change in HR was significantly moderated by place/mobility contexts. This highlights the importance of conducting real-world studies examining the effects of everyday sound immersion on the human cardiovascular system [28,29]. This notion has long been acknowledged in city planning [30,31] for example, but has not yet been applied in the study of human stress reactions to everyday sound immersion.
In this paper, we use real-world longitudinal and observational data from hearing aid users to investigate how everyday ANS dynamics are associated with acoustic characteristics of the sound environment. First, we describe the typical daily sound environments encountered by the participants, and second, we model how short-term changes in acoustic characteristics are associated with changes in HR throughout the day. We expand upon the current scientific evidence of how noise affects cardiovascular stress by including dimensions of the acoustic environment related to real-world listening experiences. Data were obtained from the participants' own hearing aids and wearables during daily life.
The sound environment is described using four parameters: sound pressure level (SPL), sound modulation level (SML), signal-to-noise ratio (SNR) and soundscape class, representing distinct characteristics of the momentary sound immersion. SPLs represent the sound intensity and are the most commonly used indicator of the sound wave strength. SPL correlates well with human perception of loudness [32]. SMLs represent temporal amplitude modulation—that is, the degree by which the sound wave amplitude oscillates over short periods of time as found for speech and music [33]. Thus, SML represents the short-term dynamics of the sound wave. SNRs represent a spectral dimension of the sound by differentiating between the level of background noise relative to the level of the signal in decibels. A more positive value indicates less noise relative to the signal. Finally, soundscapes are a qualitative dimension of the acoustic environment assumed to relate to how effortful it is to listen to speech-like sources in the presence of different levels of background noise [34,35].
2. Methods
2.1. Participants and ethics
Participants were users of Oticon Opn hearing aids (Oticon A/S, Smørum, Denmark) who had signed up for the HearingFitness™ feature via the Oticon ON™ remote control app. When signing up for HearingFitness™, participants submitted their consent agreeing to the use of their anonymized data (i.e. no personal identifiers were available) for research purposes at aggregated levels (i.e. no single-case investigations are performed) and agreed that data could be stored on Oticon A/S-owned secure servers. In addition, participants gave specific consent permitting access to their GPS location (as a relative measure in metres between each sample) and health data from Apple HealthKit. All data collection and storage were conducted in a ‘privacy by design' manner in accordance with the General Data Protection Regulation (EU regulation 2016/679). No ethical approval was necessary for this study according to the Danish National Scientific Ethical Committee (https://www.nvk.dk/forsker/naar-du-anmelder/hvilke-projekter-skal-jeg-anmelde).
2.2. Data sources and apparatus
Data were obtained from the commercially available HearingFitness™ feature [36], which is offered to users of Oticon A/S Internet-connected hearing aids. All participants had paired their Oticon Opn™ hearing aids with the Oticon ON™ iOS smartphone app for remote control with HearingFitness™ enabled. The HearingFitness™ program activates automatic logging of sound data from the hearing aid microphones together with extraction of health data from the Apple HealthKit app associated with a user-owned wearable. No information regarding the type and model of the user-owned wearable was available; however, health data from consumer wearables represent a promising source of data for observational studies [37,38].
We extracted a convenience sample of real-world data from 98 in-market users between June and December 2019. Given the real-world nature of the data and the privacy-preserved extraction, no personal information characterizing the participants (e.g. age, gender) was available. However, the participants represent a random sample of typical hearing aid users; thus, we expect that roughly 6 in 10 are male and that the average age is around 74 years [39].
2.3. Variables collected
2.3.1. Sound data
Sound data concerning the ambient acoustic environment were logged by the smartphone-connected hearing aids. Data consist of three continuous acoustic variables and one discrete soundscape variable. The continuous data represent acoustic characteristics of the momentary sound wave sensed by calibrated hearing aid microphones at ear level. These variables are SPL, SML and SNR, all estimated in a broadband frequency range (0–10 kHz) in decibel units. SPL is the level output estimate from a low-pass infinite impulse-response filter with a time constant of 63 ms. SML is then derived as the difference between a top and bottom tracker (peak and valley detector) of the SPL. The bottom tracker is implemented with a slow dynamic attack time of 1 to 5 s and a fast release time of 30 ms and the top tracker is implemented with the reverse (figs. 3–10 in [40]). SNR is the difference between the bottom tracker and the immediate SPL. Thus, values of SPL and SNR are dynamically changing on the same time scale whereas the SML changes with a slower time constant of up to 5 s. The discrete soundscape variable classifies the momentary sound environment into four soundscapes by a proprietary hearing aid algorithm using SPL, SML and SNR values. The soundscapes are: ‘Quiet', ‘Noise', ‘Speech' and ‘Speech in Noise'. Similar soundscape classification from hearing aids has been reported as a potential source for data mining [41]. Data of the acoustic environment were logged every 60 s together with timestamps indicating when the hearing aids were turned on and connected via Bluetooth to a smartphone.
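The tracker implementation in the hearing aids is proprietary, but the qualitative behaviour described above can be sketched with first-order smoothers. Everything below—the time constants, the toy signal and the helper names—is an illustrative assumption, not the device's actual algorithm:

```python
import numpy as np

def smoothing_coef(tau_s, fs):
    """Map a time constant in seconds to a one-pole smoothing coefficient."""
    return np.exp(-1.0 / (tau_s * fs))

def one_pole(x, c, init=0.0):
    """First-order IIR low-pass: y[n] = c*y[n-1] + (1-c)*x[n]."""
    y = np.empty_like(x)
    prev = init
    for i, v in enumerate(x):
        prev = c * prev + (1.0 - c) * v
        y[i] = prev
    return y

def tracker(x, rise_c, fall_c):
    """Envelope tracker with separate smoothing for rising and falling input."""
    y = np.empty_like(x)
    prev = x[0]
    for i, v in enumerate(x):
        c = rise_c if v > prev else fall_c
        prev = c * prev + (1.0 - c) * v
        y[i] = prev
    return y

fs = 1000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
# toy input: a 200 Hz tone whose amplitude switches at 4 Hz, plus a weak noise floor
amp = 0.5 + 0.45 * (np.sin(2 * np.pi * 4 * t) > 0)
x = amp * np.sin(2 * np.pi * 200 * t) + 0.01 * np.random.default_rng(0).standard_normal(t.size)

# short-term level estimate (63 ms time constant), in dB
level_db = 10 * np.log10(one_pole(x ** 2, smoothing_coef(0.063, fs), 1e-6) + 1e-12)
# top tracker: fast attack (~30 ms), slow release (seconds)
top = tracker(level_db, smoothing_coef(0.030, fs), smoothing_coef(3.0, fs))
# bottom tracker: the reverse (slow attack, fast release), following the noise floor
bottom = tracker(level_db, smoothing_coef(3.0, fs), smoothing_coef(0.030, fs))

sml = top - bottom        # modulation depth in dB
snr = level_db - bottom   # momentary level above the noise-floor tracker
```

Run on the modulated toy signal, the top tracker hugs the peaks and the bottom tracker the valleys, so `sml` is large for modulated input and `snr` reflects how far the momentary level sits above the noise floor.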
2.3.2. Heart rate data
A 5 min running mean HR was extracted from the Apple HealthKit storage (https://developer.apple.com/documentation/healthkit) approximately every 7 min (observed mean = 7.6 min, s.d. = 2.8 min) when a wearable with a heart rate sensor was connected to the smartphone.
2.4. Data selection and filtration
In total, 1 115 332 acoustic environment data logs and 522 715 heart rate logs were obtained, which represents approximately 9000 h of bilateral hearing aid use and 61 000 h of data from wearables, respectively. Only data logged between 06.00 and 24.00 were considered valid to avoid confounds from night-time logs that were probably collected when neither the hearing aids nor wearables were being worn.
Data were examined in two stages. First, in order to maximize the data available, the acoustic environment was described using all observations regardless of temporal overlap with HR logs. Second, for subsequent statistical modelling, the data were pre-processed to ensure full overlap between acoustic variables and HR logs. Pre-processing consisted of selecting time-windows of 5 min prior to each HR log and computing the arithmetic average of each acoustic variable within that window. Thus, the fully overlapping data consists of data records with the value of each acoustic data variable estimated from the same time-window as the running mean HR logs. We excluded data records with HRs that were below the 5th percentile and above the 95th percentile of the group mean HR to avoid potential confounds from low-incident HRs [37]. Exclusion of these data ensured normality of residuals from statistical modelling but did not affect the order of regression coefficients or the statistical significance of the included statistical models. Further, we only included participants for whom there were at least 50 overlapping HR and acoustic data logs. After pre-processing, our data consisted of 25 193 data records from 56 participants and 971 unique participant-days. Figure 3a shows the count of data records separated by time-of-day and weekday, and figure 3b shows the density distributions of each variable in the data records.
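The windowing step can be illustrated in pandas. The data below are simulated and the column names are hypothetical; the actual pipeline is not published:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
# hypothetical minute-level acoustic log and ~7 min HR log for one participant
acoustic = pd.DataFrame({
    "time": pd.date_range("2019-06-01 06:00", periods=600, freq="min"),
    "spl": rng.normal(60, 6, 600),
    "sml": rng.normal(15, 5, 600),
    "snr": rng.normal(8, 4, 600),
}).set_index("time")
hr = pd.DataFrame({
    "time": pd.date_range("2019-06-01 06:05", periods=80, freq="7min"),
    "hr": rng.normal(76, 7, 80),
}).set_index("time")

# average each acoustic variable over the 5 min preceding each HR log
windows = [acoustic.loc[t - pd.Timedelta(minutes=5): t].mean() for t in hr.index]
records = pd.concat([hr, pd.DataFrame(windows, index=hr.index)], axis=1).dropna()

# exclude records outside the 5th-95th percentile of HR
lo, hi = records["hr"].quantile([0.05, 0.95])
records = records[records["hr"].between(lo, hi)]
```

Each row of `records` then pairs a running mean HR with acoustic variables estimated over the same 5 min window, mirroring the fully overlapping data records described above.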
Figure 3.
Overview of data records for statistically associating mean HR and ambient sound. (a) Total counts of data records for each hour of the day (i) and weekday (ii) for each soundscape class (colours). (b) Density distributions for each acoustic data variable and the heart rates. Note that data records with HR below the 5th or above the 95th percentile were excluded prior to visualizing (see text for details).
The effective sampling period of data records was dependent on the proportion of time that a participant simultaneously wore their hearing aid(s) and a wearable while each was connected to a smartphone. As is typical of hearing aid users, participants wore their hearing aids for different durations throughout the day, both within and across participants. Thus, we estimated the effective sampling frequency (i.e. representativeness of samples across time) by computing the average number of data records per hour between 6.00 and 24.00 per participant across the period covered in the data. The grand average sampling frequency across all 971 participant-days and hours was 2.11 data records per hour (s.d. = 2.42) with a peak at 19.00 (2.21 data records per hour, s.d. = 1.72).
2.5. Adjusting for movement
A subset of the data records contained GPS information associated with each HR measure. This was used to examine the possible confounding effect of movement on the relationship between HRs and acoustic environment data. Movement was estimated within the database and extracted for analysis. Specifically, the distance in metres between two consecutive latitude and longitude coordinates was computed using the haversine method. Using this method, for each pair of subsequent latitude (φ1, φ2) and longitude (λ1, λ2) coordinates the distance between them in metres, d, is estimated as

d = 2r arcsin( √( sin²((φ2 − φ1)/2) + cos(φ1) cos(φ2) sin²((λ2 − λ1)/2) ) ), (2.1)
with r being equal to the radius of the earth. Movement in m s−1 was then computed by dividing d by the 1 min time-window between each observation and averaging across the 5 min time-window preceding each HR measure. To ensure that only movement due to physical activity was considered, data records with movement that exceeded cycling speed (10 m s−1) were excluded.
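The haversine distance and the speed screen can be sketched as follows (the coordinates are made up for illustration):

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean earth radius, r

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points, eq. (2.1)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

# speed over one 60 s logging interval; discard implausible (> 10 m/s) movement
d = haversine_m(55.7400, 12.3000, 55.7405, 12.3000)  # ~0.0005 deg of latitude
speed = d / 60.0
keep = speed <= 10.0
```

A 0.0005° step in latitude is roughly 56 m, giving a walking-range speed of about 0.9 m s−1, which passes the cycling-speed cut-off.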
2.6. Statistical analysis
Due to the unbalanced (unequal samples per participant, day, hour, etc.) and hierarchical multi-level nature of the data records, associations between variables were quantified using linear mixed-effects (LME) models. All statistical analyses and visualizations were performed in R v. 3.6.1. For the application of mixed-effects models, the ‘nlme—Linear and Nonlinear Mixed-Effects Models' package (v. 3.1) was used [43]. Visualizations were done with the ‘ggplot2' package (v. 3.3.2).
2.6.1. Associating sound with heart rate
LME models were applied separately to associate mean HR with either the categorical soundscape data or the continuous acoustic data (SPL, SML and SNR). The random-effects structure accounted for individual differences in baseline HR and in sensitivity to each fixed effect (i.e. random intercepts and slopes), and for baseline offsets in HR due to time-of-day (in hours) and weekday, nested within individuals [44]. In separate models, adjustment for movement was applied by adding an additional nested random effect equal to the estimated movement (see the previous section) quantized into 10 equal-sized bins (i.e. deciles).
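The models themselves were fitted in R with nlme, so the following is only a dependency-free illustration of why random intercepts and slopes matter: a crude two-stage analogue fits one regression per simulated participant (participant-specific baseline and sensitivity are assumptions of the simulation) and then pools the individual slopes:

```python
import numpy as np

rng = np.random.default_rng(2)
n_subj, n_obs = 20, 60
slopes = []
for _ in range(n_subj):
    base = rng.normal(75.0, 5.0)       # participant-specific baseline HR (bpm)
    sens = rng.normal(1.5, 0.5)        # participant-specific HR sensitivity to SPL
    spl = rng.normal(0.0, 1.0, n_obs)  # standardized SPL records
    hr = base + sens * spl + rng.normal(0.0, 3.0, n_obs)
    # stage 1: ordinary least-squares slope per participant (random-slope analogue)
    slopes.append(np.polyfit(spl, hr, 1)[0])

# stage 2: population-level slope = mean of the individual slopes,
# with the spread of slopes playing the role of the random-slope variance
pop_slope = float(np.mean(slopes))
between_sd = float(np.std(slopes, ddof=1))
```

A full LME model estimates both quantities jointly (with shrinkage and unbalanced-data handling), but the two-stage sketch conveys the same idea: a population-level effect plus individual variation around it.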
2.6.2. Testing the moderating effect of soundscape
An interaction model was applied to test the hypothesis that soundscape moderates the association between acoustic data and mean HR. That is, we expect associations between acoustic characteristics and HR to be moderated by listening condition, which is represented by the soundscape data. The interaction model was applied with the same random effects structure as the simpler models testing for main effects. Note that the movement adjustment was not applied for the interaction model due to convergence issues in the model fitting procedure.
2.6.3. Model diagnostics
Model diagnostics were conducted in accordance with recommendations in the literature [45], and models adhered to assumptions of normality of residuals and homogeneity of variance among random effect groups (see the electronic supplementary material). The autocorrelation of residuals (i.e. HR observations are not independent) was addressed with a first-order autoregressive structure, which has previously been shown to improve the goodness-of-fit with similar data of mean HRs [28]. To assess the degree of multicollinearity within the models, particularly with the addition of acoustic characteristics and their interaction with soundscape, the generalized variance inflation factor (GVIF) was computed using the ‘car' package v. 3.0.8 [46]. The GVIF is a generalization of the variance inflation factor (VIF) that can be applied to categorical explanatory variables [47]. Values of GVIF less than 4 are usually considered to be acceptable [48].
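The GVIF generalizes the plain VIF to factor terms; for continuous predictors the two coincide. As a sketch of the underlying computation (with a made-up correlation structure loosely resembling figure 5, not the study's data):

```python
import numpy as np

def vif(X):
    """Variance inflation factor per column of X:
    VIF_j = 1 / (1 - R^2_j), with R^2_j from regressing column j on the rest."""
    X = np.asarray(X, dtype=float)
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(len(y)), others])  # include intercept
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ beta
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(4)
spl = rng.normal(0, 1, 2000)
sml = 0.3 * spl + rng.normal(0, 1, 2000)  # mildly correlated with SPL
snr = 0.2 * sml + rng.normal(0, 1, 2000)  # mildly correlated with SML
vifs = vif(np.column_stack([spl, sml, snr]))
```

With only mild pairwise correlations, each VIF stays close to 1, i.e. far below the usual acceptability threshold of 4, matching the pattern reported for SPL, SML and SNR in the results.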
2.6.4. Effect size estimation
Besides inspecting the magnitude and confidence intervals of the standardized LME model coefficients, the partial variance explained by the fixed effects, R²_fixed, was estimated based on recommendations for multi-level hierarchical models [49]. Briefly, the residual sum-of-squares at each grouping level of the models including both fixed and random effects were compared with those from intercept-only models with identical random effects. The explained variance of the fixed effects was then computed as the proportional reduction in prediction error [50],

R²_fixed = 1 − (SS1,full + SS2,full + SS3,full) / (SS1,null + SS2,null + SS3,null), (2.2)

where SS1,full represents the level-one (individual) residual sum-of-squares for the full model; SS2,full represents the level-two (weekday) residual sum-of-squares for the full model and SS3,full represents the level-three (time-of-day) residual sum-of-squares for the full model. The terms in the denominator represent the residual sum-of-squares for the same three levels but for the intercept-only (null) model. To estimate the explained variance of the full model including random-effect terms, R²_total, the denominator in equation (2.2) was replaced with the total sum-of-squares from the observations (i.e. the ‘natural' variance).
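Equation (2.2) amounts to a simple ratio of summed residual sums-of-squares. A worked example with made-up values (chosen only to illustrate the arithmetic, not taken from the study):

```python
# Proportional reduction in prediction error, eq. (2.2), with hypothetical
# residual sums-of-squares at the three grouping levels
ss_full = {"individual": 820.0, "weekday": 140.0, "hour": 95.0}   # full model
ss_null = {"individual": 855.0, "weekday": 146.0, "hour": 101.0}  # intercept-only model

r2_fixed = 1.0 - sum(ss_full.values()) / sum(ss_null.values())

# R^2 for the full model (fixed + random effects): replace the denominator
# with the total sum-of-squares of the raw observations ('natural' variance)
ss_total = 1402.0  # hypothetical
r2_total = 1.0 - sum(ss_full.values()) / ss_total
```

With these numbers, R²_fixed ≈ 0.043 (a few per cent, the same order as the 4.25% reported below) and R²_total is necessarily larger, since the random effects absorb additional variance.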
3. Results
3.1. Descriptive statistics
Prior to examining associations between ambient sound data and HR, descriptive analyses and comparisons with previously reported data were conducted to assess data validity.
3.1.1. Soundscape and acoustic data
The grand median SPL across all participants was 54.42 dB (s.d. = 6.68 dB), which corresponds to a level just below normal conversational speech (approx. 60 dB). The values of each acoustic characteristic varied according to the soundscape as classified by the hearing aid (figure 1). Specifically, as expected, ‘Speech' environments consisted of higher values of raw SNR and SML, while ‘Speech in Noise' and ‘Noise' had the highest SPL.
Figure 1.
Cumulative distribution functions of acoustic data separated by soundscape. Median sound pressure level (a), sound modulation level (b) and signal-to-noise ratio (c) for each percentile across participants. Shaded area represents the 95% confidence interval.
We also examined the variation in sound data as a function of time of day. Figure 2a–c shows quartiles of each acoustic characteristic (grand median across participants), while figure 2d shows the relative occurrence of each soundscape by the time of day. As would be expected, the SPL is lowest late in the evening (22.00 and 23.00), slightly higher in the morning (6.00 to 8.00), and highest around lunchtime (12.00) and dinner time (18.00). The proportion of time the sound environment was classified as being ‘Quiet' was also highest in the evening (from 20.00) and early morning (6.00 to 9.00). Conversely, the proportion of time the ambient acoustic environment was classified as ‘Speech' was greatest early evening (16.00 to 21.00), which corresponds in time to when the SML was also greatest. Similar findings have been reported in other studies [51].
Figure 2.
Everyday acoustic environment. (a–c) quartiles of the continuous acoustic data for each hour of the day computed as the grand median across all participants (solid, dashed and dotted lines). (d) Relative occurrence of each soundscape for each hour of the day computed as the mean percentage across all participants. Shaded area represents the standard error.
We interpret the temporally fluctuating acoustic environment characteristics as reflecting differing everyday contexts, with figure 2d suggesting that for approximately 65–70% of the day, the acoustic environment is classified as ‘Quiet' or ‘Speech'. This is similar to the findings of Humes et al. [51], who found that older adults with hearing aids typically spent 60% of their time in ‘quiet' or ‘speech-only' conditions and around 10% of their time in ‘pure noise'. This, combined with the face validity of the data, provides evidence that our acoustic data reflect real-world characteristics.
3.1.2. Heart rate data
The grand mean HR was 75.60 bpm (s.d. = 6.89 bpm), which is lower than the normative value of 79.1 bpm (s.d. = 14.5 bpm) for individuals aged 18 years and older [52]. Our values might differ due to the age distribution of our population combined with the fact that HR declines with age [53]. Our study population consists of hearing aid users and the average age of first-time hearing aid users is around 69 years [39]. When comparing our data with the mean real-world HR for people aged 71 to 80 years, the values are much closer: 74.2 bpm (s.d. = 11.1 bpm) versus 75.60 bpm (s.d. = 6.89 bpm) here.
3.2. Association between ambient sound and heart rate
The association between ambient sound characteristics and mean HRs was investigated with the subset of data with overlapping acoustic and HR information. We first show descriptive statistics and then the results of LME modelling to formally associate changes in ambient sound with changes in mean HR.
Figure 3 shows summary distributions of the temporally overlapping data records of HR and sound data. Figure 3a indicates a close to uniform sampling of observations from 10.00 to 20.00 and across weekdays except for Friday and Sunday, which exhibit approximately 500–1000 fewer records than the rest of the week. When averaged across all participants, 35% (s.d. = 20%) of data records were registered as being a ‘Quiet' soundscape, 32% (s.d. = 18%) as a ‘Speech' soundscape, 18% (s.d. = 14%) as a ‘Speech in Noise' soundscape, and 15% (s.d. = 12%) as a ‘Noise' soundscape.
Density distributions of the continuous acoustic parameters associated with each data record are shown in figure 3b(i, ii, iii). Not surprisingly, the sound intensity (SPL) was highest for ‘Speech in Noise' and ‘Noise’ soundscapes with median SPL Leq being 72.47 dB (s.d. = 3.87 dB) and 65.97 dB (s.d. = 7.34 dB), respectively. For comparison, the same value in ‘Quiet' was 48.79 dB (s.d. = 5.25 dB). These values correspond well with those reported by El Aarbaoui & Chaix [28] for differing contextual locations (i.e. ‘Public' and ‘Transportation’ versus ‘Home'). Regarding SNR, as would be expected, the ‘Quiet' and ‘Noise' soundscapes exhibited lower median values (Quiet: 4.89 dB, s.d. = 3.86 dB, Noise = 3.81 dB, s.d. = 2.50 dB) compared with ‘Speech' (12.50 dB, s.d. = 5.14 dB). These values are in line with previously reported real-world data [54,55]. Lastly, median SML was highest for ‘Speech’ and ‘Quiet' (Speech = 21.77 dB, s.d. = 6.45 dB; Quiet = 15.28 dB, s.d. = 5.84 dB) and lowest for ‘Noise' (11.57 dB, s.d. = 4.69 dB), indicating that, indeed, SML represents modulated sound with low levels of noise.
Figure 3b(iv) shows the distribution of HR across all records. Mean HR differed by soundscape with values for ‘Quiet': 74.6 bpm (s.d. = 6.9 bpm); ‘Speech': 75.9 bpm (s.d. = 7.0 bpm); ‘Speech in Noise': 78.2 bpm (s.d. = 8.6 bpm); and ‘Noise’: 77.6 bpm (s.d. = 6.9 bpm). A repeated-measures ANOVA shows a significant main effect of soundscape (F(1.94,103.03) = 9.39, p = 0.034), with post hoc comparisons using Bonferroni correction indicating that the mean HR for ‘Quiet' was significantly lower than for ‘Speech' (p = 0.017), ‘Speech in Noise' (p < 0.001), and ‘Noise' (p < 0.001); the mean HR for ‘Speech' was significantly lower than for ‘Speech in Noise' (p = 0.019), but not for ‘Noise' (p = 0.12). HR for ‘Speech in Noise' and ‘Noise' did not significantly differ.
Marginal mean (i.e. across factors) HR at increasing levels of each acoustic characteristic are shown in figure 4. The marginal mean HR was computed by first standardizing each participant's HR and acoustic data (centring and scaling) and then computing the pooled average HR within non-overlapping bins for deciles of SPL, SNR and SML. For example, the first bin in figure 4 (at the 5% quantile on the x-axis) represents the average standardized HR for values of acoustic data falling between the 0% and 10% quantile. The standardization of acoustic characteristics was done to prevent confounds from inter-individual differences (e.g. hearing aid microphone placement and offset) when computing the marginal mean HR.
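The standardize-then-bin procedure can be sketched in pandas with simulated data (the participant-level offsets and the positive SPL-HR slope below are assumptions of the simulation):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
# hypothetical records: participant ID, SPL, and HR with a weak SPL dependence
df = pd.DataFrame({
    "subject": rng.integers(0, 30, 6000),
    "spl": rng.normal(60, 6, 6000),
})
df["hr"] = 70 + 0.3 * (df["spl"] - 60) + rng.normal(0, 5, 6000)

# standardize HR and SPL within each participant to remove individual offsets
z = lambda s: (s - s.mean()) / s.std()
df[["spl_z", "hr_z"]] = df.groupby("subject")[["spl", "hr"]].transform(z)

# pool across participants and average HR within non-overlapping decile bins of SPL
df["bin"] = pd.qcut(df["spl_z"], 10, labels=False)
marginal = df.groupby("bin")["hr_z"].mean()
```

Plotting `marginal` against the decile bin centres reproduces the kind of monotone trend shown in figure 4a: higher SPL deciles carry higher standardized mean HR.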
Figure 4.
Marginal mean heart rate (HR) grouped in non-overlapping decile bins (bin-centres on x-axis) of the acoustic characteristics. Solid lines represent the best-fitting linear regression with 95% CI for the prediction (shaded area). (a) HR versus SPL Leq (β = 0.05, F1,8 = 886.70, p < 0.001, R2 = 0.99). (b) HR versus SML at low intensities (β = −0.01, F1,8 = 2.30, p = 0.132, R2 = 0.27). (c) HR versus SML at high intensities (β = −0.04, F1,8 = 61.86, p < 0.001, R2 = 0.86). (d) HR versus SNR at low intensities (β = 0.04, F1,8 = 45.39, p < 0.001, R2 = 0.85). (e) HR versus SNR at high intensities (β = −0.02, F1,8 = 20.76, p = 0.002, R2 = 0.72). See text for details about computing the marginal means.
Since high levels of SNR and SML can occur for both low and high levels of SPL (figure 5), we computed the marginal mean HR versus SML and SNR in two regions of SPL: SPLs below and above the overall median (i.e. median Leq = 60 dB SPL).
Figure 5.
Relationship between each acoustic variable and the soundscape in the data records as two-dimensional density distributions. (a) SML versus SPL. (b) SNR versus SPL. (c) SNR versus SML.
The marginal means and fitted linear trends in figure 4 indicate a strong association between acoustic characteristics and HRs. In addition, figure 4b–e reveals that the association between derived moments of the acoustic signal (i.e. SNR and SML) and mean HR is conditional on intensity. That is, a comparison of figure 4d,e shows that the direction of the association between SNR and HR depends on intensity (positive when the SPL Leq is below 60 dB but negative above it), while figure 4b,c indicate that SML is only weakly associated with HR at lower SPLs but clearly negatively associated with HR at higher SPLs.
3.2.1. Statistical modelling
As stated in the introduction, everyday sounds are not only modulated by intensity but also by dynamic characteristics, perceptual quality (i.e. noisy or clean signals) and type (e.g. conversation, music, traffic). These distinct traits can be approximated by SPL, SML, SNR and soundscape, respectively. However, these acoustic dimensions inherently correlate. For example, high-intensity speech signals (high SPLs) also have a high SML, and if not masked by noise, a high SNR. This is evident from figure 5, which shows the interrelations between SPL, SML, SNR and soundscape as two-dimensional density distributions. The potential confound of multicollinearity in the LME model testing for main effects of acoustic data was assessed by computing the generalized variance inflation factor (GVIF, see Methods). The GVIF was 1.05, 1.17 and 1.22 for SPL, SML and SNR, respectively, indicating that multicollinearity was not a problem [48]. However, given the high degree of clustering of the continuous acoustic data with soundscape (figure 5), we included the two types of data as independent variables in two separate LME models to predict mean HR. We next fitted the models to a subset of the observations which included GPS data to investigate the influence of controlling for movement activity (adjusted model in table 1) estimated from relative GPS coordinates (see Methods).
Table 1.
Regression coefficients (β) and 95% confidence intervals of the change in mean heart rate (bpm) associated with a change in soundscape (soundscape model) or a 1 s.d. change in either SPL, SML or SNR (acoustic data model). Models were fitted to either all data records (n = 25 193) or to only those records that contained movement data (n = 5613). In the latter case, movement was included as a nested random effects term in the adjusted model and left out in the non-adjusted model. Note that regression coefficients are considered significant as long as the 95% confidence interval does not cross zero [42].
| | all data records: non-adjusted β | 95% CI | with movement: adjusted β | 95% CI | with movement: non-adjusted β | 95% CI |
|---|---|---|---|---|---|---|
| soundscape model | | | | | | |
| S versus Q | +1.07 | [+0.81 to +1.33] | +1.46 | [+0.91 to +2.00] | +1.49 | [+0.96 to +2.02] |
| SN versus Q | +2.23 | [+1.89 to +2.57] | +1.85 | [+1.17 to +2.53] | +2.00 | [+1.33 to +2.65] |
| N versus Q | +2.61 | [+2.25 to +2.97] | +1.84 | [+1.03 to +2.66] | +1.96 | [+1.17 to +2.75] |
| acoustic data model | | | | | | |
| SPL | +1.47 | [+1.29 to +1.76] | +1.39 | [+0.89 to +1.90] | +1.44 | [+0.95 to +1.93] |
| SML | +0.72 | [+0.49 to +0.95] | +0.53 | [+0.06 to +0.99] | +0.53 | [+0.09 to +0.98] |
| SNR | −1.03 | [−1.26 to −0.79] | −0.86 | [−1.21 to −0.51] | −0.88 | [−1.21 to −0.55] |

Note: Q, quiet; S, speech; SN, speech in noise; N, noise.
Including sound data as predictors of heart rate significantly improved model goodness-of-fit relative to intercept-only models, as assessed with likelihood-ratio tests (acoustic data: χ2(6) = 723, p < 0.001; soundscape data: χ2(3) = 277, p < 0.001), confirming a significant association between changes in mean HR and the acoustic environment. The estimated partial variance explained was 4.25% for the acoustic data model and 1.43% for the soundscape model (table 2).
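The likelihood-ratio comparison against an intercept-only model follows the standard recipe: twice the difference in log-likelihoods, referred to a χ² distribution with degrees of freedom equal to the number of added parameters. A minimal sketch, with hypothetical log-likelihood values chosen only so the statistic matches the reported χ2(6) = 723 (the survival function below is exact only for even degrees of freedom):

```python
import math

def chi2_sf_even_df(x, df):
    """Chi-square survival function, closed form for even df = 2k:
    P(X > x) = exp(-x/2) * sum_{i<k} (x/2)^i / i!"""
    assert df % 2 == 0 and df > 0
    k = df // 2
    term = sum((x / 2) ** i / math.factorial(i) for i in range(k))
    return math.exp(-x / 2) * term

def likelihood_ratio_test(ll_null, ll_full, df):
    """LRT statistic and p-value for nested models."""
    stat = 2 * (ll_full - ll_null)
    return stat, chi2_sf_even_df(stat, df)

# hypothetical log-likelihoods: intercept-only vs acoustic data model
stat, p = likelihood_ratio_test(ll_null=-90000.0, ll_full=-89638.5, df=6)
print(f"chi2(6) = {stat:.0f}, p = {p:.3g}")
```

In the actual analysis this comparison would be done with the fitted `nlme` objects in R (e.g. via `anova()`); the point here is only the arithmetic behind the reported statistics.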
Table 2.
Explained variance by the fixed effects alone (marginal R²) and by the full model including both fixed and random effects (conditional R²). Note that data in the interaction model were collapsed across ‘speech in noise' and ‘noise' soundscapes.
| | marginal R² | conditional R² |
|---|---|---|
| all data records (n = 25 715) | | |
| soundscape model | 1.43% | 56.28% |
| acoustic data model | 4.25% | 57.22% |
| interaction model | 4.54% | 57.30% |
| data records with movement (n = 5919) | | |
| soundscape model, adjusted for movement | 0.61% | 72.12% |
| soundscape model, non-adjusted for movement | 0.78% | 62.11% |
| acoustic data model, adjusted for movement | 2.52% | 72.56% |
| acoustic data model, non-adjusted for movement | 2.76% | 62.86% |
The acoustic data model revealed distinct associations between HR and the acoustic characteristics. Specifically, the regression coefficient for SPL is larger than those for SML and SNR (non-overlapping confidence intervals), and the regression coefficient for SNR has a negative sign, whereas both SPL and SML are positively associated with HR. The movement-adjusted acoustic data model explained approximately 10 percentage points of additional variance (table 2); however, the regression coefficients did not differ between the adjusted and non-adjusted models (table 1).
The regression coefficients for the soundscape model confirmed the trend present in figure 3b(iv) that HR increases as the complexity of the soundscape increases from ‘Quiet', to ‘Speech', to ‘Speech in Noise' and finally ‘Noise'. Note that complexity is defined as the interaction between SNR and SPL. High complexity is assigned to soundscapes with low SNR and high SPL (i.e. ‘Noise', figure 5). Thus, despite ‘Speech in Noise' having the highest SPL (figure 1a), HRs were overall higher in soundscapes classified as noise. The movement-adjusted soundscape model likewise yielded an approximately 10-percentage-point increase in explained variance (table 2), and the coefficients (table 1) suggest that a change from ‘Quiet' to ‘Speech in Noise' and from ‘Quiet' to ‘Noise' results in approximately the same change in mean HR. However, these changes are lower in magnitude than in the non-adjusted model. This indicates that movement might indeed have influenced the change in mean HR when contrasting ‘Speech' with the more complex (i.e. noisier) ‘Speech in Noise' and ‘Noise' soundscapes.
The median movement in the time-window preceding each HR observation was 0.15 m s−1 (s.d. = 1.55 m s−1), suggesting that most of these data records were associated with little movement although a significant linear relationship was observed between movement and mean HR (LME adjusted for the participant, β = 0.39, 95% CI = [+0.22 to +0.57], t = 4.40, p < 0.001).
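The movement estimate from relative GPS coordinates is not fully specified in this section; as a generic stand-in, per-interval speed can be derived from consecutive GPS fixes using the haversine great-circle distance. The coordinates and timestamps below are invented for illustration:

```python
import math
from statistics import median

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6_371_000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def speeds(fixes):
    """fixes: list of (unix_seconds, lat, lon); returns m/s per segment."""
    out = []
    for (t0, la0, lo0), (t1, la1, lo1) in zip(fixes, fixes[1:]):
        out.append(haversine_m(la0, lo0, la1, lo1) / (t1 - t0))
    return out

# hypothetical fixes over two minutes
fixes = [(0, 55.6761, 12.5683), (60, 55.6762, 12.5683), (120, 55.6770, 12.5690)]
print(f"median speed: {median(speeds(fixes)):.2f} m/s")
```

Summarizing the segment speeds in the window preceding each HR observation (here with the median) yields a movement covariate of the kind reported above.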
3.3. Moderating effect of soundscape
The data in table 1 suggest that changes in mean HR throughout the day are significantly associated with changes in the soundscape, and that overall mean HRs are lower in less complex soundscapes (e.g. ‘Quiet' versus ‘Speech in Noise'). Here, we investigated the extent to which the soundscape moderated the strength of the association between HR and its acoustic characteristics. That is, were participants more sensitive towards acoustic characteristics in certain soundscapes? We would expect this to be true since differences among soundscapes proxy differences in listening conditions. For instance, it is less effortful to understand speech in a quiet environment than in a noisy one. To examine this, we fitted an interaction model for data from ‘Quiet', ‘Speech' and ‘Noisy' soundscapes, reflecting increasing levels of soundscape complexity. The ‘Noisy' category combined data from the ‘Speech in Noise' and ‘Noise' soundscapes, since these were highly overlapping in terms of acoustic characteristics (figure 5) and together contained the fewest observations (figure 3).
We fitted an interaction model with each acoustic parameter being allowed to interact with soundscape while keeping the same random effects variance structure as before. For this model, we used all data records and did not include movement as a random effect term since the additional degrees-of-freedom would otherwise render the model unidentifiable. The interaction model produced better predictions than both the soundscape model (ΔAIC = 270.5, χ2(11) = 292.54, p < 0.001) and the acoustic data model (ΔAIC = 12.6, χ2(8) = 28.6, p < 0.001) and explained slightly more of the HR variance (table 2). Despite the potentially high covariance between soundscape and acoustic data (figure 5), the largest GVIF was estimated to be 1.79 (the interaction between soundscape and SNR), which again indicates that multicollinearity was not an issue [48].
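The ΔAIC and χ² comparisons above are linked: for nested models, ΔAIC equals the likelihood-ratio statistic minus twice the number of added parameters. A sketch with hypothetical log-likelihoods chosen only to reproduce the reported acoustic-model comparison (χ2(8) = 28.6, ΔAIC = 12.6):

```python
def aic(log_lik, n_params):
    """Akaike information criterion: 2k - 2 ln L."""
    return 2 * n_params - 2 * log_lik

# hypothetical fits: acoustic data model vs. interaction model, which
# adds 8 parameters; log-likelihood values are illustrative placeholders
aic_simple = aic(log_lik=-89638.5, n_params=10)
aic_interact = aic(log_lik=-89624.2, n_params=18)
delta = aic_simple - aic_interact
print(f"delta AIC = {delta:.1f} (positive favours the interaction model)")
```

Here the log-likelihood gain is 14.3, so the LRT statistic is 2 × 14.3 = 28.6 and ΔAIC = 28.6 − 2 × 8 = 12.6, consistent with the values reported in the text.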
The main effects replicated the outcomes of fitting the acoustic data and soundscape models separately (table 1). That is, HRs are higher for more complex soundscapes (F(2) = 26.39, p < 0.001), more intense and modulated sound environments (SPL: F(1) = 68.48, p < 0.001; SML: F(1) = 30.04, p < 0.001), and environments with more background noise (SNR: F(1) = 15.35, p < 0.001). Additionally, the interactions of soundscape with SPL and with SML were significant (SPL: F(2) = 5.21, p = 0.006; SML: F(2) = 15.64, p < 0.001).
Figure 6 shows each level of the three interactions between acoustic variables and soundscape. It indicates that interactions were driven by differences in the strength of association between ‘Quiet'/'Speech' and ‘Noisy' soundscapes. Thus, observed HRs were more strongly associated with sound intensities and modulation levels when the ambient acoustic environment was classified as being favourable for listening (i.e. in quiet and speech-dominated soundscapes).
Figure 6.
Predicted regression lines from the LME interaction model of the coefficients SPL (a), SML (b) and SNR (c). Shaded area represents the standard error of prediction.
We assessed the significance of each pairwise interaction by contrasting each coefficient (i.e. slope) with the ‘Quiet' soundscape as baseline. For SPLs, the slope for ‘Noisy' soundscape was significantly lower than for ‘Quiet' (β = −0.60, s.e. = 0.23, p = 0.008) whereas the slope for ‘Speech' did not differ from the slope for ‘Quiet' (β = 0.16, s.e. = 0.24, p = 0.51). For SNRs, the slope for ‘Noisy' was significantly less steep than for ‘Quiet' (β = 0.79, s.e. = 0.24, p < 0.001). Finally, for SMLs, slopes for both ‘Speech' and ‘Noisy' were significantly lower than for ‘Quiet' (β = −0.74, s.e. = 0.18, p < 0.001; β = −1.08, s.e. = 0.21, p < 0.001).
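Under treatment coding with ‘Quiet' as the reference level, each soundscape-specific slope is simply the baseline slope plus the corresponding interaction contrast. A sketch using the SPL contrasts reported above (the baseline slope itself is an assumed placeholder, not a value from the study):

```python
# treatment-coded interaction model: slope in each soundscape equals the
# reference ('Quiet') slope plus the soundscape-specific contrast
baseline_slope = 1.5  # hypothetical SPL slope in 'Quiet' (bpm per s.d.)
interaction = {"Speech": 0.16, "Noisy": -0.60}  # contrasts vs. 'Quiet', from the text

slopes = {"Quiet": baseline_slope}
for level, contrast in interaction.items():
    slopes[level] = baseline_slope + contrast

print(slopes)
```

This is why a significantly negative contrast for ‘Noisy' (−0.60) implies a flatter SPL–HR slope in noisy soundscapes than in quiet ones, as figure 6 shows.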
3.4. Effect size considerations
The explained variance of the acoustic data model (table 2) suggests that ambient acoustics (i.e. SPL, SML and SNR) explain around 4% of the within-individual variation in 5 min mean HRs. In order to assess the magnitude of the regression coefficients, and to compare our findings from hearing-impaired individuals with those from El Aarbaoui & Chaix [28] using normal-hearing individuals, we re-fitted the acoustic data model after log-transforming the HRs while keeping predictors (SPL, SML and SNR) in their original dB scale. Back-transformation of the coefficients, β, with (exp(β) − 1) * 100 yields percentage change in 5 min mean HR with 1 dB change in the level of either SPL, SML or SNR. After re-fitting, the coefficients were SPL: β = 0.154%, 95% CI [+0.127 to +0.181]; SML: β = 0.112%, 95% CI = [+0.078 to +0.145]; and SNR: β = −0.169%, 95% CI = [−0.209 to −0.130]. For comparison, El Aarbaoui & Chaix [28] documented a 0.141% (95% CI = [+0.135 to +0.148]) change in mean HR (5 min window preceding each HR measure) from a 1 dBA change in SPL.
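The back-transformation used above can be written out directly: a coefficient β from the log-HR model corresponds to a (exp(β) − 1) × 100 per cent change in mean HR per 1 dB change in the predictor. The log-scale inputs below are approximate values implied by the reported percentages, included only to illustrate the arithmetic:

```python
import math

def pct_change_per_db(beta_log):
    """Per cent change in HR per 1 dB, from a log-HR regression coefficient."""
    return (math.exp(beta_log) - 1) * 100

# approximate log-scale coefficients implied by the reported percentage
# changes (SPL +0.154%, SML +0.112%, SNR -0.169% per dB)
for name, beta in [("SPL", 0.00154), ("SML", 0.00112), ("SNR", -0.00169)]:
    print(f"{name}: {pct_change_per_db(beta):+.3f}% per dB")
```

For small β, exp(β) − 1 ≈ β, which is why the percentage changes are numerically close to 100 × β.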
4. Discussion
Using real-world data from in-market hearing aids, we were able to investigate the association between exposure to everyday acoustic characteristics and short-term (i.e. within 5 min) changes in HR. The data reflect hearing aid use over several weeks and as such represent the acoustic environment expected from everyday-life activities (figures 1 and 2), with characteristics of intensity and soundscape occurrences that agree with previous research [1,51,55].
We found that characteristics of the sound environment were significantly associated with changes in heart rate. Specifically, higher SPLs and SMLs were associated with increased HRs, whereas more favourable (higher) SNRs were associated with lowered HRs. These results reproduce earlier findings from other research groups [28] and expand the evidence to other dimensions (i.e. SML, SNR and soundscape) of the acoustic environment. These effects were not caused by movement (i.e. sound and HR changing in response to latent physical activity). While the movement-adjusted model explained approximately 10 percentage points of additional HR variance (table 2), the movement-adjusted and non-adjusted acoustic data models had similar coefficient magnitudes and overlapping confidence intervals (table 1). Thus, the additional variance explained by the adjusted model did not covary with the acoustic data, suggesting that the movement predictor captured HR variance unrelated to that captured by the acoustic predictors.
The documented associations between acoustic data and HR were stronger in simple listening soundscapes (‘Quiet' and ‘Speech') than in soundscapes classified as containing noise, while marginal means revealed that HR moderation by SMLs and SNRs was distinct, depending on sound intensity. Specifically, marginal means suggested that the negative association between SNR and mean HR was most pronounced in sound environments where the SPL Leq exceeded 60 dB, whereas sound modulation was more strongly associated with increases in HR at intensity levels below 60 dB. SPLs are directly associated with loudness perception [32] and have been found to induce activation of the sympathetic branch of the human ANS [5,17,28]. Our study suggests that even everyday SPLs (i.e. levels well below those typically defined as hazardous to the auditory system) affect heart rate. Thus, we speculate that the ANS is moderated by sound pressure, regardless of the level of sound intensity.
We also documented a positive, albeit smaller, association between HR and SML. SML reflects sound wave modulation. Highly modulated sound is typically characterized by fast oscillating SPLs, which are indicative of speech or music. We speculate that the positive association between SMLs and HRs is, to some extent, caused by conversational task demands [56]. That is, in highly modulated sound environments, listening and speech demands are increased, which leads to slightly increased sympathetic ANS activity. This speculation is corroborated by the fact that the positive association between SML and HR is stronger in low-intensity, simpler acoustic environments than in louder noisy environments (figures 4b,c and 6b), which suggests the association is driven by task-related activities.
We also documented a negative association between real-world HRs and SNRs. This finding corroborates laboratory research within the hearing sciences, which shows that difficult listening conditions (e.g. characterized by low SNRs) increase the listening effort needed for speech understanding and thus either elevate sympathetic stress levels or decrease parasympathetic ANS activity, seen as increased pupil dilation [57,58] or skin conductance [56] and decreased HR variability [59]. This suggests that the ambient SNR, as measured by the hearing aids, could proxy as a real-life indicator of momentary and contextual listening difficulty, while associated changes in HR might indicate the level of mobilized listening effort [26]. However, future studies should include subjective reporting of listening intentions and experiences to reveal specifically the contribution of listening effort and fatigue to changes in everyday ANS activity. For example, experience sampling methods such as ecological momentary assessments for assessing everyday-life listening experiences in hearing aid users [60–62] could be expanded with monitoring of physiological signals.
The sound data used in this study were measured by commercial hearing devices. These devices are typically optimized for low power consumption, which would usually entail lower resolution and accuracy of sound estimators. Despite this, summary statistics of our data agree with other studies using devices specifically developed for research purposes [28]. This highlights a potential for exploiting commercial devices, and in particular hearing aids, for obtaining real-world and truly ecological evidence about human behavioural and biological reactions to environmental sound stimuli. For example, future studies using a similar set-up could investigate how sensitivity towards specific acoustic characteristics differs between populations with various underlying health pathologies or different degrees of hearing loss [63]. Indeed, previous research has already documented the use of similar hearing aid data logging for policy-making within the hearing or public health domains [64,65].
5. Limitations
The participants in our dataset do not represent the general population: all were hearing aid users and thus have some degree of hearing loss and are probably older than the general population. Some studies suggest that older people with hearing loss spend more time in quiet sound environments than younger and normal-hearing individuals [66]. Moreover, hearing loss can lead to increased sensitivity towards noisy and loud sound environments, which might have impacted the associations listed in table 1. However, comparing the magnitude of the regression coefficient for SPL with that of El Aarbaoui & Chaix [28] suggests that age and hearing abilities did not affect the associations found here. Further, hearing aids process ambient sound with the goal of improving the SNR by changing how signals are amplified and noise is reduced. Detailed information about these parameters under varying real-world conditions is unavailable in our data. Thus, the effective SNR might differ from the logged ambient SNR presented in figures 1–5. It is thus possible that the findings presented in figure 4e and table 1 would differ among people with normal hearing and/or with unaided impaired hearing. However, as noted in the introduction, laboratory testing of people with unaided hearing impairment also shows negative associations between the SNR of listening tasks and stress reactions measured as electrodermal activity [24].
Heart rates are sensitive to both sympathetic and parasympathetic ANS influence. However, in the current study, only mean HRs are reported. This means that an in-depth investigation of the extent to which stress is caused by an elevation in sympathetic activity or withdrawal of parasympathetic activity is not possible here. Future studies could consider leveraging data available from commercial wearables that measure continuous HRV. This might reveal distinct contributions from the sympathetic and parasympathetic branch of the ANS [20] in response to different sound exposures, representing distinct cognitive and physiological processes. Indeed, short recordings of HRV have been shown to be stable across longitudinal studies [67].
Finally, the data assessed in the current study are real-world in nature, which means that there was no control over who used which devices, when and how often. In addition, we did not have information regarding data quality from each wearable device manufacturer and thus cannot give estimates regarding the clinical validity of the HR data. However, we do not expect data quality to have biased our findings, since the validity of real-world data has been documented [37], and because the filtering applied during data extraction and the use of mixed-effects models for the statistical analyses minimized the impact of unbalanced observations and inter-individual differences in measures of HR.
6. Conclusion
This study is the first to examine the association between real-world human HRs and multidimensional characteristics of the ambient acoustic environment with longitudinal data. Our results indicate that ambient sound intensity is positively associated with heart rates. In addition, we document that the real-world ambient signal-to-noise ratios are associated with lowered HRs, suggesting that sound conditions which reduce the auditory perceptual load and listening effort de-stress the human cardiovascular system [35,57,58,68]. This finding is supported by a documented effect of soundscape on the strength of the association between acoustic characteristics and ANS reactions. That is, in favourable listening conditions, acoustic characteristics have the strongest association with changes in HR.
In summary, our findings suggest a possible mixed influence of everyday sounds on cardiovascular stress, and that the relationship is more complex than is seen from an examination of sound intensity alone. Furthermore, our findings highlight the importance of including exposure to ambient sound in models predicting human physiology and demonstrate that data logging with commercially available devices can be used to study how ecological everyday acoustic environments impact human physiological reactions.
Supplementary Material
Data accessibility
There are ethical restrictions on publicly sharing the dataset. The consent given by users did not explicitly detail sharing of the data in any format; this limitation is in keeping with EU General Data Protection Regulation and is imposed by the Research Ethics Committees of the Capital Region of Denmark. Data can be obtained by contacting the corresponding author and signing a non-disclosure agreement. Code for conducting the analysis presented in the study can be accessed via the Open Science Framework (doi:10.17605/OSF.IO/RC37Z).
Authors' contributions
J.H.C. wrote the manuscript, analysed the data, produced the figures and came up with the initial hypothesis. G.H.S. critically reviewed and contributed to the manuscript. M.P. collected and made available the data used. N.H.P. critically reviewed the manuscript and supplied technical aspects related to the hearing aid technology.
Competing interests
We declare we have no competing interests.
Funding
The work of J.H.C., G.H.S. and N.H.P. was partly funded from the European Union's Horizon 2020 research and innovation programme under grant agreement no. 727521.
References
- 1. Flamme GA, Stephenson MR, Deiters K, Tatro A, van Gessel D, Geda K, Wyllys K, McGregor K. 2012. Typical noise exposure in daily life. Int. J. Audiol. 51(Suppl.), S3-11. (doi:10.3109/14992027.2011.635316)
- 2. Kirchner DB, Evenson E, Dobie RA, Rabinowitz P, Crawford J, Kopke R, Warner Hudson T. 2012. Occupational noise-induced hearing loss: ACOEM task force on occupational hearing loss. J. Occup. Environ. Med. 54, 106-108. (doi:10.1097/JOM.0b013e318242677d)
- 3. Kujawa SG. 2006. Acceleration of age-related hearing loss by early noise exposure: evidence of a misspent youth. J. Neurosci. 26, 2115-2123. (doi:10.1523/JNEUROSCI.4985-05.2006)
- 4. Burns K, Sun K, Fobil J, Neitzel R. 2016. Heart rate, stress, and occupational noise exposure among electronic waste recycling workers. Int. J. Environ. Res. Public Health 13, 140. (doi:10.3390/ijerph13010140)
- 5. Munzel T, Gori T, Babisch W, Basner M. 2014. Cardiovascular effects of environmental noise exposure. Eur. Heart J. 35, 829-836. (doi:10.1093/eurheartj/ehu030)
- 6. Eriksson C, Pershagen G, Nilsson M. 2019. Biological mechanisms related to cardiovascular and metabolic effects by environmental noise. Copenhagen, Denmark: World Health Organization.
- 7. Belojevic G, Jakovljevic B, Stojanov V, Paunovic K, Ilic J. 2008. Urban road-traffic noise and blood pressure and heart rate in preschool children. Environ. Int. 34, 226-231. (doi:10.1016/j.envint.2007.08.003)
- 8. Björ B, Burström L, Karlsson M, Nilsson T, Näslund U, Wiklund U. 2007. Acute effects on heart rate variability when exposed to hand transmitted vibration and noise. Int. Arch. Occup. Environ. Health 81, 193-199. (doi:10.1007/s00420-007-0205-0)
- 9. Robinson BF, Epstein SE, Beiser GD, Braunwald E. 1966. Control of heart rate by the autonomic nervous system: studies in man on the interrelation between baroreceptor mechanisms and exercise. Circ. Res. 19, 400-411. (doi:10.1161/01.RES.19.2.400)
- 10. Tzaneva L, Danev S, Nikolova R. 2001. Investigation of noise exposure effect on heart rate variability parameters. Cent. Eur. J. Public Health 9, 130-132.
- 11. Draghici AE, Taylor JA. 2016. The physiological basis and measurement of heart rate variability in humans. J. Physiol. Anthropol. 35, 22. (doi:10.1186/s40101-016-0113-7)
- 12. Berger RD, Saul JP, Cohen RJ. 1989. Transfer function analysis of autonomic regulation. I. Canine atrial rate response. Am. J. Physiol. Heart Circ. Physiol. 256, H142-H152. (doi:10.1152/ajpheart.1989.256.1.H142)
- 13. Glick G, Braunwald E, Lewis RM. 1965. Relative roles of the sympathetic and parasympathetic nervous systems in the reflex control of heart rate. Circ. Res. 16, 363-375. (doi:10.1161/01.RES.16.4.363)
- 14. Lusk SL, Gillespie B, Hagerty BM, Ziemba RA. 2004. Acute effects of noise on blood pressure and heart rate. Arch. Environ. Health Int. J. 59, 392-399. (doi:10.3200/AEOH.59.8.392-399)
- 15. Babisch W. 2014. Updated exposure-response relationship between road traffic noise and coronary heart diseases: a meta-analysis. Noise Health 16, 1-9. (doi:10.4103/1463-1741.127847)
- 16. Evans GW, Johnson D. 2000. Stress and open-office noise. J. Appl. Psychol. 85, 779-783. (doi:10.1037/0021-9010.85.5.779)
- 17. Shoushtarian M, Weder S, Innes-Brown H, McKay CM. 2019. Assessing hearing by measuring heartbeat: the effect of sound level. PLoS ONE 14, e0212940. (doi:10.1371/journal.pone.0212940)
- 18. Graham FK, Clifton RK. 1966. Heart-rate change as a component of the orienting response. Psychol. Bull. 65, 305-320. (doi:10.1037/h0023258)
- 19. Sim CS, Sung JH, Cheon SH, Lee JM, Lee JW, Lee J. 2015. The effects of different noise types on heart rate variability in men. Yonsei Med. J. 56, 235. (doi:10.3349/ymj.2015.56.1.235)
- 20. Rajendra Acharya U, Paul Joseph K, Kannathal N, Lim CM, Suri JS. 2006. Heart rate variability: a review. Med. Biol. Eng. Comput. 44, 1031-1051. (doi:10.1007/s11517-006-0119-0)
- 21. Umemura M, Honda K. 1998. Influence of music on heart rate variability and comfort: a consideration through comparison of music and noise. J. Hum. Ergol. (Tokyo) 27, 30-38.
- 22. Gang MJ, Teft L. 1975. Individual differences in heart rate responses to affective sound. Psychophysiology 12, 423-426. (doi:10.1111/j.1469-8986.1975.tb00016.x)
- 23. Peelle JE. 2018. Listening effort: how the cognitive consequences of acoustic challenge are reflected in brain and behavior. Ear Hear. 39, 204-214. (doi:10.1097/AUD.0000000000000494)
- 24. Holube I, Haeder K, Imbery C, Weber R. 2016. Subjective listening effort and electrodermal activity in listening situations with reverberation and noise. Trends Hear. 20, 233121651666773. (doi:10.1177/2331216516667734)
- 25. Richter M. 2016. The moderating effect of success importance on the relationship between listening demand and listening effort. Ear Hear. 37, 111S-117S. (doi:10.1097/AUD.0000000000000295)
- 26. Pichora-Fuller MK, et al. 2016. Hearing impairment and cognitive energy: the framework for understanding effortful listening (FUEL). Ear Hear. 37, 5S-27S. (doi:10.1097/AUD.0000000000000312)
- 27. Di Nisi J, Muzet A, Ehrhart J, Libert JP. 1990. Comparison of cardiovascular responses to noise during waking and sleeping in humans. Sleep 13, 108-120. (doi:10.1093/sleep/13.2.108)
- 28. El Aarbaoui T, Chaix B. 2019. The short-term association between exposure to noise and heart rate variability in daily locations and mobility contexts. J. Expo. Sci. Environ. Epidemiol. 30, 383-393. (doi:10.1038/s41370-019-0158-x)
- 29. Can A, Aumond P, Michel S, De Coensel B, Ribeiro C, Botteldooren D, Lavandier C. 2016. Comparison of noise indicators in an urban context. In Inter-Noise 2016, 45th Int. Congress and Exposition of Noise Control Engineering, Hamburg, Germany. See https://hal.archives-ouvertes.fr/hal-01373857.
- 30. Nilsson ME, Berglund B. 2006. Soundscape quality in suburban green areas and city parks. Acta Acust. United Acust. 92, 903-911.
- 31. Schafer RM. 1993. The soundscape: our sonic environment and the tuning of the world. Rochester, VT: Simon and Schuster.
- 32. Long M. 2014. Fundamentals of acoustics. In Architectural acoustics, pp. 39-79. Oxford, UK: Elsevier. See https://linkinghub.elsevier.com/retrieve/pii/B9780123982582000027.
- 33. Elliott TM, Theunissen FE. 2009. The modulation transfer function for speech intelligibility. PLoS Comput. Biol. 5, e1000302. (doi:10.1371/journal.pcbi.1000302)
- 34. Brown VA, Strand JF. 2019. Noise increases listening effort in normal-hearing young adults, regardless of working memory capacity. Lang. Cogn. Neurosci. 34, 628-640. (doi:10.1080/23273798.2018.1562084)
- 35. Picou EM, Gordon J, Ricketts TA. 2016. The effects of noise and reverberation on listening effort in adults with normal hearing. Ear Hear. 37, 1-13. (doi:10.1097/AUD.0000000000000222)
- 36. Laplante-Lévesque A, Pontoppidan NH, Mazevski A, Schum D, Behrens T, Porsbo M. 2017. Data-driven hearing care with HearingFitness™: Oticon shares openly its innovative vision and roadmap, pp. 1-7. Copenhagen, Denmark: Oticon A/S. Report No. 27896UK.
- 37. Hicks JL, Althoff T, Sosic R, Kuhar P, Bostjancic B, King AC, Leskovec J, Delp SL. 2019. Best practices for analyzing large-scale health data from wearables and smartphone apps. Npj Digit. Med. 2, 45. (doi:10.1038/s41746-019-0121-1)
- 38. Witt DR, Kellogg RA, Snyder MP, Dunn J. 2019. Windows into human health through wearables data analytics. Curr. Opin. Biomed. Eng. 9, 28-46. (doi:10.1016/j.cobme.2019.01.001)
- 39. Kochkin S. 2010. MarkeTrak VIII: the efficacy of hearing aids in achieving compensation equity in the workplace. Hear. J. 63, 19-28. (doi:10.1097/01.HJ.0000389923.80044.e6)
- 40. Kates JM. 2008. Digital hearing aids. San Diego, CA: Plural Publishing.
- 41. Mellor J, Stone MA, Keane J. 2018. Application of data mining to a large hearing-aid manufacturer's dataset to identify possible benefits for clinicians, manufacturers, and users. Trends Hear. 22, 233121651877363. (doi:10.1177/2331216518773632)
- 42. du Prel J-B, Hommel G, Röhrig B, Blettner M. 2009. Confidence interval or p-value? Dtsch. Ärztebl. Int. 106, 335-339.
- 43. Pinheiro J, Bates D, DebRoy S, Sarkar D, R Core Team. 2013. nlme: linear and nonlinear mixed effects models. R package v. 3.111.
- 44. Barr DJ, Levy R, Scheepers C, Tily HJ. 2013. Random effects structure for confirmatory hypothesis testing: keep it maximal. J. Mem. Lang. 68, 255-278. (doi:10.1016/j.jml.2012.11.001)
- 45. Harrison XA, et al. 2018. A brief introduction to mixed effects modelling and multi-model inference in ecology. PeerJ 6, e4794. (doi:10.7717/peerj.4794)
- 46. Fox J. 2009. car: companion to applied regression. R package version 1.2-16. See http://CRAN.R-project.org/package=car (accessed 20 January 2020).
- 47. Fox J, Monette G. 1992. Generalized collinearity diagnostics. J. Am. Stat. Assoc. 87, 178-183. (doi:10.1080/01621459.1992.10475190)
- 48. O'Brien RM. 2007. A caution regarding rules of thumb for variance inflation factors. Qual. Quant. 41, 673-690. (doi:10.1007/s11135-006-9018-6)
- 49. Lorah J. 2018. Effect size measures for multilevel models: definition, interpretation, and TIMSS example. Large-Scale Assess. Educ. 6, 8. (doi:10.1186/s40536-018-0061-2)
- 50. Snijders TA, Bosker RJ. 2011. Multilevel analysis: an introduction to basic and advanced multilevel modeling. Thousand Oaks, CA: Sage.
- 51. Humes LE, Rogers SE, Main AK, Kinney DL. 2018. The acoustic environments in which older adults wear their hearing aids: insights from datalogging sound environment classification. Am. J. Audiol. 27, 594-603. (doi:10.1044/2018_AJA-18-0061)
- 52. Avram R, et al. 2019. Real-world heart rate norms in the Health eHeart study. Npj Digit. Med. 2, 58. (doi:10.1038/s41746-019-0134-9)
- 53. Gellish RL, Goslin BR, Olson RE, McDonald A, Russi GD, Moudgil VK. 2007. Longitudinal modeling of the relationship between age and maximal heart rate. Med. Sci. Sports Exerc. 39, 822-829. (doi:10.1097/mss.0b013e31803349c6)
- 54. Smeds K, Wolters F, Rung M. 2015. Estimation of signal-to-noise ratios in realistic sound scenarios. J. Am. Acad. Audiol. 26, 183-196. (doi:10.3766/jaaa.26.2.7)
- 55. Wu Y-H, Stangl E, Chipara O, Hasan SS, Welhaven A, Oleson J. 2018. Characteristics of real-world signal to noise ratios and speech listening situations of older adults with mild to moderate hearing loss. Ear Hear. 39, 293-304. (doi:10.1097/AUD.0000000000000486)
- 56. Mackersie CL, Calderon-Moultrie N. 2016. Autonomic nervous system reactivity during speech repetition tasks: heart rate variability and skin conductance. Ear Hear. 37, 118S-125S. (doi:10.1097/AUD.0000000000000305)
- 57. Wendt D, Hietkamp RK, Lunner T. 2017. Impact of noise and noise reduction on processing effort: a pupillometry study. J. Acoust. Soc. Am. 141, 4040. (doi:10.1121/1.4989330)
- 58. Zekveld AA, Koelewijn T, Kramer SE. 2018. The pupil dilation response to auditory stimuli: current state of knowledge. Trends Hear. 22, 233121651877717. (doi:10.1177/2331216518777174)
- 59. Seeman S, Sims R. 2015. Comparison of psychophysiological and dual-task measures of listening effort. J. Speech Lang. Hear. Res. 58, 1781-1792. (doi:10.1044/2015_JSLHR-H-14-0180)
- 60. Shiffman S, Stone AA, Hufford MR. 2008. Ecological momentary assessment. Annu. Rev. Clin. Psychol. 4, 1-32. (doi:10.1146/annurev.clinpsy.3.022806.091415)
- 61. Burke LA, Naylor G. 2020. Daily-life fatigue in mild to moderate hearing impairment: an ecological momentary assessment study. Ear Hear. 41, 1518-1532. (doi:10.1097/AUD.0000000000000888)
- 62. Wu Y-H, Stangl E, Zhang X, Bentler RA. 2015. Construct validity of the ecological momentary assessment in audiology research. J. Am. Acad. Audiol. 26, 872-884. (doi:10.3766/jaaa.15034)
- 63. Caduff A, Feldman Y, Ben Ishai P, Launer S. 2020. Physiological monitoring and hearing loss: toward a more integrated and ecologically validated health mapping. Ear Hear. 41, 120S. (doi:10.1097/AUD.0000000000000960)
- 64.Christensen JH, et al. 2019. Fully synthetic longitudinal real-world data from hearing aid wearers for public health policy modeling. Front. Neurosci. Audit Cogn. Neurosci. 13, 850. ( 10.3389/fnins.2019.00850) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 65.Saunders GH, et al. 2020. Application of big data to support evidence-based public health policy decision-making for hearing. Ear Hear 41, 1057. ( 10.1097/AUD.0000000000000850) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 66.Wu Y-H, Bentler RA. 2012. Do older adults have social lifestyles that place fewer demands on hearing? J. Am. Acad. Audiol. 23, 697-711. ( 10.3766/jaaa.23.9.4) [DOI] [PubMed] [Google Scholar]
- 67.Sinnreich R, Kark JD, Friedlander Y, Sapoznikov D, Luria MH. 1998. Five minute recordings of heart rate variability for population studies: repeatability and age–sex characteristics. Heart 80, 156-162. ( 10.1136/hrt.80.2.156) [DOI] [PMC free article] [PubMed] [Google Scholar]
- 68.Howard CS, Munro KJ, Plack CJ. 2010. Listening effort at signal-to-noise ratios that are typical of the school classroom. Int. J. Audiol. 49, 928-932. ( 10.3109/14992027.2010.520036) [DOI] [PubMed] [Google Scholar]
Data Availability Statement
There are ethical restrictions on publicly sharing the dataset. The consent given by users did not explicitly cover sharing of the data in any format; this restriction is in keeping with the EU General Data Protection Regulation and is imposed by the Research Ethics Committees of the Capital Region of Denmark. Data can be obtained by contacting the corresponding author and signing a non-disclosure agreement. Code for conducting the analysis presented in the study can be accessed via the Open Science Framework (doi:10.17605/OSF.IO/RC37Z).