Noise & Health. 2026 Feb 28;28(130):250–256. doi: 10.4103/nah.nah_248_25

‘Better Ear’ Self-Perception Aligns with Audiometry: Insights from Machine-Learning Modeling

Georgios P Georgiou 1,2
PMCID: PMC13095089  PMID: 41800690

Abstract

Background/Objectives:

Accurate self-perception of interaural hearing asymmetry is crucial for clinical decision-making and communication strategies, yet the relationship between objective audiometric patterns and subjective awareness remains poorly characterized. Using nationally representative data from the U.S. National Health and Nutrition Examination Survey, this study employed a machine learning approach to model the probability that an individual’s self-reported “better-hearing” ear matches the audiometrically defined one.

Methods:

A Light Gradient Boosting Machine classifier was trained exclusively on objective measures—ear-specific pure-tone averages (PTA) and interaural asymmetry metrics—to predict correct subjective identification.

Results:

The model demonstrated robust performance on a held-out test set, with an accuracy of 0.85, precision of 0.84, recall of 0.98, an F1-score of 0.91, and an area under the receiver operating characteristic curve of 0.83. Explainable artificial intelligence analysis revealed that the absolute magnitude of interaural PTA asymmetry was the dominant predictor of correct self-report, while the signed direction of asymmetry contributed minimally.

Conclusion:

The results indicate that subjective awareness is strongly tied to the size of the hearing difference between ears rather than its direction and becomes more reliable with greater asymmetry. These findings indicate that a simple “better ear” self-report item captures meaningful audiometric information, supporting its potential use in clinical triage and public health surveillance, while also highlighting the need for caution in cases of mild asymmetry where misclassification is more likely.

Keywords: audiometry, hearing, machine learning, self-assessment

KEY MESSAGES

  1. Self-reported identification of the “better-hearing” ear generally aligns well with audiometric reality in a large, nationally representative sample.

  2. A machine-learning classifier using only ear-specific PTAs and asymmetry metrics accurately predicted correct self-report.

  3. The magnitude of interaural PTA asymmetry—not its direction—was the strongest determinant of accurate awareness.

  4. Simple “better ear” self-report items can meaningfully inform clinical triage, but caution is warranted when asymmetry is mild.

Introduction

Interaural differences in hearing sensitivity can have substantial consequences for everyday communication. Normal binaural hearing enables accurate sound localization, segregation of competing talkers, and improved speech understanding in noise, benefits that depend on the brain’s ability to compare inputs from the two ears.[1] When hearing is asymmetric, these binaural advantages are reduced: listeners may struggle to localize sound sources, experience greater listening effort, and report poorer communication in complex acoustic environments.[2] Asymmetric hearing loss is, therefore, clinically important, both as a potential marker of underlying pathology and as a determinant of functional limitations that may not be captured by bilateral average thresholds alone.[3]

Hearing loss in general is highly prevalent and increases sharply with age, with population-based studies in the United States and elsewhere reporting that roughly one in five adults has at least mild audiometric impairment, and a much higher prevalence among older adults.[4,5,6] Within this broader burden, asymmetric hearing loss is relatively common, especially among men and older age groups; estimates from large community samples suggest that 6–15% of adults show clinically meaningful asymmetry depending on frequency range and definition.[7] Yet most epidemiologic work has focused on the presence or severity of hearing loss per se, rather than on how accurately individuals perceive asymmetry between ears or identify a “better-hearing” ear. This distinction matters because self-perception shapes help-seeking, hearing-aid uptake, and daily communication strategies, particularly in children and older adults who rely on self-report or caregiver observations to access services.[8,9]

A substantial literature has compared self-reported hearing difficulty with objectively measured audiometric loss, generally finding only moderate agreement and systematic biases by age, sex, socioeconomic position, and cultural context.[10,11] Individuals often under- or overestimate the severity of their impairment, and the accuracy of self-report can vary across population subgroups and across different self-report instruments.[8,10] However, prior work has primarily treated hearing as a global construct (e.g., trouble hearing in general), without examining whether people can reliably identify which ear hears better when a measurable asymmetry exists. Understanding subjective awareness of interaural asymmetry is important for several reasons. Clinically, misperception of the better-hearing ear could influence hearing-aid fitting decisions, cochlear-implant candidacy discussions, and counseling about communication strategies. In research and public health surveillance, reliance on simple self-report items about a “better ear” may misclassify asymmetry status if subjective awareness is inaccurate or systematically biased.

Nationally representative survey data provide a unique opportunity to address these questions. The U.S. National Health and Nutrition Examination Survey (NHANES) includes detailed audiometric assessments and standardized pre-exam questions about hearing, administered using a complex, multistage probability sampling design that yields estimates generalizable to the civilian and non-institutionalized US population.[12] In the 2017–2018 cycle, NHANES collected pure-tone air-conduction thresholds across a range of frequencies in children, adolescents, and older adults using the Audiometric Research Tool system, along with age-appropriate questions about whether one ear hears better than the other.[6] These data allow direct comparison of a simple self-reported “better ear” designation with an objective reference based on standard speech-frequency pure-tone averages (PTAs).

At the same time, advances in machine learning (ML) offer new tools for characterizing how objective audiometric patterns are related to subjective awareness. Several recent studies illustrate how ML can be used to map objective audiometric patterns onto subjective hearing awareness or handicap. Gathman et al.[13] used gradient-boosted trees to predict PTAs from demographics, clinical factors, and a categorical self-report of hearing status, showing that subjective hearing carries systematic information about objective loss and can be integrated into ML models to estimate audiometric thresholds. Yang et al.[14] combined items from the Hearing Handicap Inventory for the Elderly (HHIE) questionnaire with demographic variables in a LightGBM framework, using SHapley Additive exPlanations (SHAP) values to interpret how specific self-report items contribute to predicting age-related hearing risk, thereby explicitly linking subjective handicap profiles to audiometrically defined impairment. In a clinical sample of chronic otitis media patients, Yoon et al.[15] evaluated logistic regression and ML models using HHIE scores to infer hearing levels when full audiometry was unavailable, again using self-reported handicap as a proxy for objective function. The authors found that self-reported handicap can reasonably approximate hearing level, though there is still substantial misclassification. Finally, Ellis and Souza[16] applied several classification algorithms to NHANES audiometric data combined with demographic variables and self-reported hearing difficulty to classify individuals into audiometric hearing-loss categories. The authors concluded that self-report–driven ML models classify hearing loss only modestly well, but still estimate audiograms accurately enough to support clinically acceptable hearing-aid gain prescriptions. 
Together, these studies support the use of modern ML methods to characterize relationships between objective pure-tone measures and subjective hearing reports, and they motivate our focus on modeling subjective awareness of interaural asymmetry from ear-specific PTAs and asymmetry metrics.

In this study, we used publicly available NHANES audiometry data to model the probability that self-report correctly matches the objectively better ear, using an ML classifier trained on ear-specific PTAs and interaural asymmetry measures, and we used SHAP values to interpret the relative importance of these audiometric predictors for subjective awareness of asymmetry. Unlike earlier ML studies that combined self-reported hearing difficulty or handicap with often limited audiometric information to predict hearing loss or risk, we used self-report only to define the correctness label, that is, agreement between the self-reported and PTA-defined better ear. The ML model was trained solely on objective ear-specific PTAs and asymmetry measures, thereby focusing on the audiometric determinants of subjective awareness of interaural asymmetry. Using nationally representative audiometric data and modern ML interpretability techniques, we aimed to clarify how objective patterns of interaural hearing asymmetry translate into subjective perception across the lifespan. We further aimed to inform the use and interpretation of “better ear” self-report items in clinical practice and population research.

MATERIALS AND METHODS

Participants

This cross-sectional study used publicly available data from NHANES 2017–2018, specifically the Audiometry Examination component (AUX_J).[12] NHANES is a nationally representative survey of the civilian, noninstitutionalized US population conducted by the National Center for Health Statistics using a complex, multistage probability sampling design. The audiometry component includes pre-exam hearing questions, otoscopy, tympanometry, and pure-tone air-conduction audiometry collected in a sound-isolating booth by trained examiners following standardized NHANES procedures.

The AUX_J audiometry examination was administered to participants aged 6–19 years and 70 years or older (both males and females), in line with NHANES eligibility criteria for this cycle. For the present analysis, participants were included if they had a valid response to the self-reported “better ear” question and valid air-conduction thresholds at 500, 1000, and 2000 Hz in both ears sufficient to compute PTAs. Participants were excluded if the better-ear self-report was missing, refused, or not interpretable, or if PTA could not be computed for one or both ears after threshold cleaning. Applying these criteria yielded a final analytic sample of 2585 participants.

Measures

Subjective awareness of hearing asymmetry was derived from NHANES pre-exam audiometry items. Children aged 6–11 years were administered AUQ051 (“Is it easier for you to hear out of one ear than the other?”), while adolescents aged 12–19 years and adults aged 70 years or older were administered AUQ050 (“Do you hear better in one ear than the other?”). Both variables use the same coding: 1 indicates the right ear hears better, 2 indicates the left ear hears better, and 9 indicates no difference or uncertainty (“no/don’t know”).

Objective hearing sensitivity was assessed using pure-tone air-conduction thresholds (dB HL) measured at 500 Hz (AUXU500R, AUXU500L), 1000 Hz (AUXU1K1R, AUXU1K1L), and 2000 Hz (AUXU2KR, AUXU2KL). These three frequencies were selected to compute a standard speech-frequency PTA for each ear. Right-ear PTA (PTA_R) and left-ear PTA (PTA_L) were calculated as the mean of valid thresholds at 500, 1000, and 2000 Hz for the corresponding ear, with lower values indicating better hearing.

Threshold data were cleaned prior to PTA computation in accordance with NHANES analytic guidance. Values coded as 666 (“no response”) or 888 (“could not obtain”) were treated as missing. Additionally, extremely small floating-point numbers resulting from SAS missing-value artefacts during file import were treated as missing, and thresholds outside plausible audiometric limits (below −15 dB HL or above 150 dB HL) were set to missing. PTAs were computed using available frequency values when at least two of the three PTA frequencies were valid; participants lacking a PTA for either ear after cleaning were excluded from analysis.
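Under these rules, the threshold cleaning and PTA computation can be sketched as follows. The paper's analysis was implemented in R; this Python sketch uses hypothetical function names and simply illustrates the stated cleaning and averaging rules.

```python
import numpy as np

MISSING_CODES = {666, 888}  # NHANES "no response" / "could not obtain"

def clean_threshold(t):
    """Return a valid air-conduction threshold in dB HL, or NaN if missing."""
    if t is None or t in MISSING_CODES:
        return np.nan
    if t < -15 or t > 150:  # outside plausible audiometric limits
        return np.nan
    return float(t)

def pta(thresholds_500_1k_2k):
    """Speech-frequency PTA: mean of valid 500/1000/2000 Hz thresholds,
    requiring at least two of the three frequencies to be valid."""
    vals = np.array([clean_threshold(t) for t in thresholds_500_1k_2k])
    if np.sum(~np.isnan(vals)) < 2:
        return np.nan  # participant excluded downstream
    return float(np.nanmean(vals))
```

For example, `pta([10, 15, 888])` averages only the two valid thresholds, while a participant with fewer than two valid thresholds in an ear would be excluded.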

Interaural PTA asymmetry was quantified as PTA_diff = PTA_L − PTA_R and as an absolute magnitude PTA_diff_abs = |PTA_L − PTA_R|. An objective better-ear category (obj_better) was defined using a clinically meaningful asymmetry cutoff of 10 dB. If PTA_diff was ≥+10 dB, hearing was classified as better in the right ear (left worse); if PTA_diff was ≤−10 dB, hearing was classified as better in the left ear (right worse); and if PTA_diff lay between −10 and +10 dB, hearing was classified as having no meaningful difference. This produced three objective categories directly comparable to self-report.
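A minimal sketch of the better-ear classification and the resulting correctness label (function names are hypothetical; the exact handling of the "no difference/don't know" category as a matchable class is an assumption based on the three-way comparability described above):

```python
def objective_better_ear(pta_r, pta_l, cutoff=10.0):
    """Classify the objectively better ear from ear-specific PTAs.
    PTA_diff = PTA_L - PTA_R; lower PTA means better hearing."""
    diff = pta_l - pta_r
    if diff >= cutoff:
        return "right"  # left ear worse by >= 10 dB
    if diff <= -cutoff:
        return "left"   # right ear worse by >= 10 dB
    return "none"       # no clinically meaningful difference

def self_report_category(code):
    """Map NHANES AUQ050/AUQ051 codes: 1 = right, 2 = left, 9 = no difference."""
    return {1: "right", 2: "left", 9: "none"}.get(code)

def correct_label(code, pta_r, pta_l):
    """Binary outcome: 1 if self-report matches the PTA-defined category."""
    return int(self_report_category(code) == objective_better_ear(pta_r, pta_l))
```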

Machine Learning Model Training

To model subjective awareness of interaural asymmetry, we trained a supervised binary classifier with the outcome variable correct (1 = self-reported better ear matched the PTA-defined better ear; 0 = mismatch). The predictor set consisted of objective audiometric summary measures: right-ear PTA (PTA_R), left-ear PTA (PTA_L), signed PTA difference between ears (PTA_diff = PTA_L − PTA_R), and absolute asymmetry magnitude (PTA_diff_abs = |PTA_L − PTA_R|). The analysis dataset was randomly partitioned into training (70%) and testing (30%) subsets using stratified sampling on the correct variable to preserve class proportions.

Model training was performed using Light Gradient Boosting Machine (LightGBM) with gradient-boosted decision trees, implemented in the lightgbm package in R (R Foundation for Statistical Computing, Vienna, Austria).[17] Predictors were supplied to LightGBM as numeric matrices, and the binary outcome was encoded as 1 for TRUE (correct) and 0 for FALSE (incorrect). Because tree-based boosting is insensitive to monotonic transformations and does not require feature scaling, no standardization was applied.

Hyperparameters were specified a priori to balance model complexity and generalization. The objective function was binary logistic loss with area under the curve (AUC) as the optimization metric. We used a learning rate of 0.05, maximum tree depth of 5, and 15 leaves per tree, with feature subsampling (feature_fraction = 0.9) and row subsampling (bagging_fraction = 0.8, bagging_freq = 5) to reduce overfitting. Minimum leaf size was set to 20 observations. L1 and L2 regularizations (lambda_l1 = 0.1; lambda_l2 = 0.5) were included to further penalize overly complex solutions. The model was trained for up to 2000 boosting iterations, with early stopping triggered if validation AUC did not improve for 50 consecutive rounds. The held-out test set was used as the validation monitor for early stopping and was not used for parameter tuning.

After training, the final model generated predicted probabilities of correctness for each observation in the test set. These probabilities were converted to class labels using a default threshold of 0.50. Performance was evaluated on the test set using a confusion matrix and standard classification metrics: accuracy, precision, recall (sensitivity), and F1 score, with TRUE (correct) treated as the positive class. Discriminative ability independent of threshold was quantified using receiver operating characteristic (ROC) analysis and the area under the ROC curve, computed with the pROC package. All ML training and evaluation steps were implemented in R using caret for data partitioning and confusion-matrix summaries, lightgbm for model fitting, and pROC for ROC/AUC computation. Table 1 presents participant characteristics and hearing-threshold summary measures.
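The threshold-based metrics described here follow directly from the confusion matrix; a minimal sketch with the positive class being a correct self-report (hypothetical function name, independent of any particular ML library):

```python
import numpy as np

def classification_metrics(y_true, prob, threshold=0.50):
    """Accuracy, precision, recall, and F1 from predicted probabilities,
    treating correct self-report (1) as the positive class."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(prob) >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # sensitivity
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
    }
```

Threshold-independent discrimination (AUC) would be computed separately, as with the pROC package in the original analysis.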

Table 1.

Participant characteristics and hearing-threshold summary measures

Characteristic Value
Age n (%)
6–11 years 707 (27)
12–19 and ≥70 years 1878 (73)
PTA Mean dB (SD)
Right-ear PTA 13.87 (15.38)
Left-ear PTA 14.58 (15.59)
PTA difference 0.71 (8.47)
PTA difference absolute 5.26 (6.68)
Objective better ear n (%)
Right better 350 (13)
Left better 43 (2)
No meaningful difference 2192 (85)
Self-reported better ear n (%)
Right 166 (6)
Left 152 (6)
No difference/don’t know 2267 (88)

Note: dB, decibel; PTA, pure-tone averages; SD, standard deviation.

RESULTS

On the held-out test set, the classifier was best at identifying true positives, as evidenced by very high recall. The F1-score indicated a correspondingly strong balance between precision and recall, and overall accuracy was also high. Precision indicated that most predicted positive cases aligned with the reference standard. Threshold-independent discrimination, evaluated using AUC, demonstrated solid separability between correct and incorrect classifications. The scores for each metric are presented in Table 2.

Table 2.

Evaluation results of the ML classifier

Metric Score
Accuracy 0.85
Precision 0.84
Recall 0.98
F1-score 0.91
AUC 0.83

Note: AUC, area under the curve.

Figure 1 displays the ROC curves for the LightGBM classifier on the training and testing subsets. As is typical, the training curve lies slightly above the testing curve, reflecting the model’s closer fit to the data on which it was trained. Importantly, the two curves remain closely aligned and follow a similar shape across the range of false-positive rates, and both lie well above the chance line. This pattern indicates little evidence of overfitting: the model maintains strong discriminative performance on unseen test data that is consistent with its behavior on the training set. Overall, the figure illustrates robust performance across both datasets.

Figure 1.


Receiver operating characteristic (ROC) curves and area under the curve (AUC) values of the training and testing subsets. The gray dashed diagonal represents chance-level performance (no discrimination; AUC = 0.5).

Mean absolute SHAP values were calculated for all predictors included in the model, indicating their relative contribution to the prediction of whether participants correctly identified their better-hearing ear. Absolute interaural asymmetry (PTA_diff_abs) was the dominant contributor, followed by the individual PTAs for the left (PTA_L) and right (PTA_R) ears. The signed asymmetry measure (PTA_diff) contributed minimally. Larger SHAP values reflect a stronger influence on model output (see Figures 2 and 3).

Figure 2.


SHapley Additive exPlanations (SHAP) values for each of the predictor variables: right-ear pure-tone average (PTA) (PTA_R), left-ear PTA (PTA_L), signed PTA difference between ears (PTA_diff = PTA_L − PTA_R), and absolute asymmetry magnitude (PTA_diff_abs = |PTA_L − PTA_R|).

Figure 3.


SHapley Additive exPlanations (SHAP) beeswarm plot illustrating feature contributions to the model predicting correct better-ear identification. Each point represents an individual observation. The x-axis shows SHAP values, indicating the contribution of each feature to the model output, with positive values increasing and negative values decreasing the predicted probability of a correct response. Features are ordered by mean absolute SHAP value (top to bottom). Point color encodes the original feature value (low to high).

DISCUSSION

Main Findings

This study used nationally representative audiometry data to model how accurately individuals identify their “better-hearing” ear based on objective ear-specific PTAs and interaural asymmetry measures. The LightGBM classifier achieved high recall, strong F1-score, and good accuracy and AUC, indicating that relatively simple audiometric summaries capture much of the information underlying agreement between self-report and PTA-defined better ear. These findings suggest that subjective awareness of interaural asymmetry is systematic rather than random in nature, as evidenced by the general alignment of judgments with PTA patterns when a better ear is reported.

SHAP analyses showed that absolute interaural PTA asymmetry was the dominant predictor of accurate self-identification, consistent with fundamental principles of binaural hearing.[18] Accuracy was driven primarily by the magnitude of asymmetry rather than its direction, indicating that once interaural differences exceed a clinically meaningful range, listeners reliably recognize that one ear is better. Model uncertainty was concentrated in near-symmetric cases, aligning with clinical evidence that small interaural differences (<10–15 dB) are often unnoticed or inconsistently reported, whereas larger asymmetries are perceptually salient.[19,20] These results extend prior work showing only moderate agreement between global self-reported hearing difficulty and audiometric thresholds, with accuracy varying by sociodemographic factors.[11,21] Unlike earlier studies that treated hearing as a unitary construct, our focus on ear-specific judgments demonstrates that subjective spatial awareness is strongly determined by objective binaural patterns—particularly asymmetry magnitude—rather than reflecting purely noisy introspection. However, the fact that AUC did not approach 1.0 indicates that non-audiometric factors (e.g., cognition, health beliefs) likely contribute to residual error.[22,23] Our approach complements recent ML studies that use subjective information to predict objective hearing status.[13,14,15] In contrast, we rely solely on objective PTAs and asymmetry metrics, using self-report only to define correctness. That such a model achieves good discrimination without demographic or psychosocial inputs suggests that the structure of interaural hearing plays a dominant role in awareness, even if other factors modulate this relationship.

Mechanistic evidence from studies of asymmetric cochlear injury supports this interpretation, because asymmetric cochlear injury from unequal noise exposure or differential vulnerability disrupts binaural cue processing and induces central auditory reorganization, often emerging early in hearing-loss trajectories.[24,25,26] Interaural asymmetry is, therefore, a biologically meaningful marker linked to functional decline in spatial hearing and speech perception in noise, explaining why larger asymmetries are more reliably perceived as indicating a better ear.[24,25] Clinically, the high recall observed here suggests that patient reports of a better ear are usually reliable when moderate-to-large PTA asymmetries are present. In such cases, a simple “better ear” question may serve as a reasonable starting point for triage or device discussions.[27] Errors are most likely when asymmetry is small, underscoring the need for caution when relying on self-report near the 10 dB threshold, and reinforcing the importance of ear-specific audiometry for irreversible decisions. Nevertheless, the incorrect classifications likely reflect perceptual ambiguity near the ≥10 dB cutoff rather than true misidentifications.

More broadly, accurate identification of a better-hearing ear reflects a general principle of sensory awareness: spatial asymmetries that meaningfully disrupt information processing are more likely to reach conscious awareness and guide behavior.[28,29,30,31] From a public health perspective, our findings suggest that survey-based “better ear” items capture meaningful but imperfect information about interaural asymmetry. ML-based calibration models could help correct misclassification when audiometry is unavailable, improving population-level estimates of asymmetric hearing burden.[32]

Limitations and Future Work

Several limitations apply. Because the data are cross-sectional, we cannot determine how awareness of asymmetry changes over time (e.g., with progression of loss or after device fitting/counseling). NHANES audiometry includes children and older adults but not midlife adults, limiting generalizability to the age range when loss often begins. We also restricted inputs to speech-frequency PTAs (0.5–2 kHz), so high-frequency asymmetries were not evaluated. Finally, demographic, cognitive, and psychosocial factors known to affect self-report accuracy were intentionally excluded to keep the model audiometry-only. Future work should (i) use multi-class models to predict the three self-report categories (right better/left better/no difference) from richer audiometric profiles, including high-frequency thresholds and frequency-specific/broader-band asymmetry measures; (ii) use longitudinal designs to test when awareness emerges and whether counseling improves alignment with audiometry; and (iii) extend to clinical cohorts with asymmetric pathology (e.g., unilateral/bimodal cochlear-implant candidates, chronic otitis media) to inform counseling and the practical use of brief “better ear” questions.

CONCLUSION

The findings show that simple audiometric measures—ear-specific speech-frequency PTAs and interaural asymmetry—are potentially reliable predictors of whether individuals correctly identify their better-hearing ear. Using an ML classifier trained solely on objective audiometric inputs, we found high recall and good overall discrimination, indicating that subjective awareness of a “better ear” is closely tied to the magnitude of interaural PTA differences rather than their direction. At the same time, model performance falling short of perfect classification underscores that awareness is imperfect and that non-audiometric factors also shape whether the self-report aligns with objective asymmetry.

Availability of Data and Materials

The data analyzed in this study are publicly available from the U.S. National Health and Nutrition Examination Survey (NHANES), maintained by the National Center for Health Statistics (NCHS), Centers for Disease Control and Prevention (CDC). NHANES data are released for public use in accordance with NCHS/CDC public disclosure and data release policies and are provided as de-identified public-use files. This study used the NHANES 2017–2018 Audiometry Examination component and related publicly available documentation. No restricted-use data was accessed. Data are available at: https://wwwn.cdc.gov/nchs/nhanes/search/datapage.aspx?Component=Examination&Cycle=2017-2018

Author Contributions

GG: conceptualization, methodology, software, validation, formal analysis, investigation, resources, data curation, writing − original draft, writing − review & editing, visualization, supervision, project administration.

Ethics Approval and Consent to Participate

The study is based on the publicly available, de-identified dataset provided by the National Health and Nutrition Examination Survey (NHANES) 2017–2018 database (National Center for Health Statistics [NCHS], Centers for Disease Control and Prevention [CDC]). The NHANES protocol was reviewed and approved by the NCHS Ethics Review Board (Protocol #2018-01; with a continuation of Protocol #2011-17 effective through October 26, 2017). Informed consent was obtained from all participants prior to data collection, and the present study has obtained the appropriate authorization for secondary use of the publicly released NHANES data.

Conflicts of Interest

There is no conflict of interest.

Acknowledgment

The author thanks the Phonetic Lab of the University of Nicosia.

Funding Statement

No funding was received for this work.

REFERENCES

  • 1.Avan P, Giraudet F, Büki B. Importance of binaural hearing. Audiol Neurotol. 2015;20:3–6. doi: 10.1159/000380741. [DOI] [PubMed] [Google Scholar]
  • 2.Firszt JB, Reeder RM, Holden LK. Unilateral hearing loss: understanding speech recognition and localization variability—implications for cochlear implant candidacy. Ear Hear. 2017;38:159–173. doi: 10.1097/AUD.0000000000000380. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Segal N, Shkolnik M, Kochba A, Segal A, Kraus M. Asymmetric hearing loss in a random population of patients with mild to moderate sensorineural hearing loss. Ann Otol Rhinol Laryngol. 2007;116:7–10. doi: 10.1177/000348940711600102. [DOI] [PubMed] [Google Scholar]
  • 4.Hoffman HJ, Dobie RA, Losonczy KG, Themann CL, Flamme GA. Declining prevalence of hearing loss in US adults aged 20 to 69 years. JAMA Otolaryngol Head Neck Surg. 2017;143:274–285. doi: 10.1001/jamaoto.2016.3527. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Mattos LC, Veras RP. The prevalence of hearing loss in an elderly population in Rio de Janeiro: a cross-sectional study. Rev Bras Otorrinol. 2007;73:654–659. doi: 10.1016/S1808-8694(15)30126-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6.Mo F, Zhu S, Jia H, Xia Y, Lang L, Zheng Q, et al. Trends in prevalence of hearing loss in adults in the USA 1999-2018: a cross-sectional study. BMC Public Health. 2024;24:976. doi: 10.1186/s12889-024-18426-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Suen JJ, Betz J, Reed NS, Deal JA, Lin FR, Goman AM. Prevalence of asymmetric hearing among adults in the United States. Otol Neurotol. 2021;42:e111–e113. doi: 10.1097/MAO.0000000000002931. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Goman AM, Reed NS, Lin FR, Willink A. Variations in prevalence and number of older adults with self-reported hearing trouble by audiometric hearing loss and sociodemographic characteristics. JAMA Otolaryngol Head Neck Surg. 2020;146:201–203. doi: 10.1001/jamaoto.2019.3584. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Im GJ, Ahn JH, Lee JH, Do Han K, Lee SH, Kim JS, et al. Prevalence of severe-profound hearing loss in South Korea: a nationwide population-based study to analyse a 10-year trend (2006–2015) Sci Rep. 2018;8:9940. doi: 10.1038/s41598-018-28279-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Humes LE. US population data on hearing loss, trouble hearing, and hearing-device use in adults: National Health and Nutrition Examination Survey, 2011-12, 2015-16, and 2017–20. Trends Hear. 2023;27:23312165231160978. doi: 10.1177/23312165231160978. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11.Kamil RJ, Genther DJ, Lin FR. Factors associated with the accuracy of subjective assessments of hearing impairment. Ear Hear. 2015;36:164–167. doi: 10.1097/AUD.0000000000000075. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.2021 https://wwwn.cdc.gov/nchs/nhanes/search/datapage.aspx?Component=Examination&Cycle= 2017-2018 [Google Scholar]
  • 13.Gathman TJ, Choi JS, Vasdev RM, Schoephoerster JA, Adams ME. Machine learning prediction of objective hearing loss with demographics, clinical factors, and subjective hearing status. Otolaryngol Head Neck Surg. 2023;169:504–513. doi: 10.1002/ohn.288. [DOI] [PubMed] [Google Scholar]
  • 14.Yang TH, Chen YF, Cheng YF, Huang JN, Wu CS, Chu YC. Optimizing age-related hearing risk predictions: an advanced machine learning integration with HHIE-S. BioData Min. 2023;16:35. doi: 10.1186/s13040-023-00351-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 15.Yoon HS, Kim MJ, Lim KH, Kim MS, Kang BJ, Rah YC, et al. Evaluating prediction models with hearing handicap inventory for the elderly in chronic otitis media patients. Diagnostics (Basel) 2024;14:2000. doi: 10.3390/diagnostics14182000. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Ellis GM, Souza PE. Using machine learning and the national health and nutrition examination survey to classify individuals with hearing loss. Front Digit Health. 2021;3:723533. doi: 10.3389/fdgth.2021.723533. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17. R Core Team. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing; 2025. https://www.R-project.org/
  • 18. Mishra SK, Dey R. Unilateral auditory deprivation in humans: effects on frequency discrimination and auditory memory span in the normal ear. Hear Res. 2021;405:108245. doi: 10.1016/j.heares.2021.108245.
  • 19. Firszt JB, Holden LK, Reeder RM, Cowdrey L, King S. Cochlear implantation in adults with asymmetric hearing loss. Ear Hear. 2012;33:521–533. doi: 10.1097/AUD.0b013e31824b9dfc.
  • 20. Forli F, Berrettini S, Bruschini L, Canelli R, Lazzerini F. Cochlear implantation in patients with asymmetric hearing loss: reporting and discussing the benefits in speech perception, speech reception threshold, squelch abilities, and patients’ reported outcomes. J Laryngol Otol. 2022;136:964–969. doi: 10.1017/S0022215121004333.
  • 21. Curti SA, Taylor EN, Su D, Spankovich C. Prevalence of and characteristics associated with self-reported good hearing in a population with elevated audiometric thresholds. JAMA Otolaryngol Head Neck Surg. 2019;145:626–633. doi: 10.1001/jamaoto.2019.1020.
  • 22. Kamerer AM, Harris SE, Kopun JG, Neely ST, Rasetshwane DM. Understanding self-reported hearing disability in adults with normal hearing. Ear Hear. 2022;43:773–784. doi: 10.1097/AUD.0000000000001161.
  • 23. Wang D, Zhuang Y, Wu Y, Ma H, Peng Y, Xu H, et al. Analysis of influential factors of self-reported hearing loss deviation in young adults. J Public Health (Berl). 2020;28:455–461.
  • 24. Kumpik DP, King AJ. A review of the effects of unilateral hearing loss on spatial hearing. Hear Res. 2019;372:17–28. doi: 10.1016/j.heares.2018.08.003.
  • 25. Anderson SR, Burg E, Suveg L, Litovsky RY. Review of binaural processing with asymmetrical hearing outcomes in patients with bilateral cochlear implants. Trends Hear. 2024;28:23312165241229880. doi: 10.1177/23312165241229880.
  • 26. Klingel M, Kopčo N, Laback B. Reweighting of binaural localization cues induced by lateralization training. J Assoc Res Otolaryngol. 2021;22:551–566. doi: 10.1007/s10162-021-00800-8.
  • 27. Dillon MT, Buss E, Rooth MA, King ER, McCarthy SA, Bucker AL, et al. Cochlear implantation in cases of asymmetric hearing loss: subjective benefit, word recognition, and spatial hearing. Trends Hear. 2020;24:2331216520945524. doi: 10.1177/2331216520945524.
  • 28. Craig AD. How do you feel—now? The anterior insula and human awareness. Nat Rev Neurosci. 2009;10:59–70. doi: 10.1038/nrn2555.
  • 29. Vallar G, Maravita A. Personal and extrapersonal spatial awareness: neuropsychological evidence. Prog Brain Res. 2009;176:3–28.
  • 30. Shinn-Cunningham BG. Object-based auditory and visual attention. Trends Cogn Sci. 2008;12:182–186. doi: 10.1016/j.tics.2008.02.003.
  • 31. Best V, Thompson ER, Mason CR, Kidd G Jr. An energetic limit on spatial release from masking. J Acoust Soc Am. 2015;137:2883–2893.
  • 32. Humes L. Audiograms and prevalence of hearing loss in US children and adolescents 6–19 years of age. J Speech Lang Hear Res. 2024;67:3178–3200. doi: 10.1044/2024_JSLHR-24-00050.


Data Availability Statement

The data analyzed in this study are publicly available from the U.S. National Health and Nutrition Examination Survey (NHANES), maintained by the National Center for Health Statistics (NCHS), Centers for Disease Control and Prevention (CDC). NHANES data are released for public use in accordance with NCHS/CDC public disclosure and data release policies and are provided as de-identified public-use files. This study used the NHANES 2017–2018 Audiometry Examination component and related publicly available documentation. No restricted-use data were accessed. Data are available at: https://wwwn.cdc.gov/nchs/nhanes/search/datapage.aspx?Component=Examination&Cycle=2017-2018
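For readers working from the same public files, the study's feature set (abstract: ear-specific pure-tone averages and interaural asymmetry metrics) can be reproduced with simple arithmetic on the per-ear thresholds. The sketch below is illustrative, not the author's code: it assumes a conventional four-frequency PTA (0.5, 1, 2, and 4 kHz, dB HL) and uses made-up threshold values rather than NHANES records; the exact frequencies and NHANES variable names used in the paper are not specified in this section.

```python
# Minimal sketch (assumed, not the study's code): ear-specific PTA and
# interaural asymmetry features of the kind described in the abstract.

def pta(thresholds_db):
    """Pure-tone average (dB HL) over the supplied frequencies."""
    return sum(thresholds_db) / len(thresholds_db)

# Illustrative 0.5/1/2/4 kHz thresholds (dB HL) for one hypothetical
# participant; real values would come from the NHANES audiometry file.
right = [15, 20, 25, 40]
left = [10, 15, 20, 25]

pta_right = pta(right)               # 25.0 dB HL
pta_left = pta(left)                 # 17.5 dB HL

signed_asym = pta_right - pta_left   # >0 means the right ear is worse
abs_asym = abs(signed_asym)          # magnitude: the dominant predictor
                                     # of correct self-report in the study
better_ear = "left" if signed_asym > 0 else "right"

print(pta_right, pta_left, signed_asym, abs_asym, better_ear)
```

Keeping the signed and absolute asymmetry as separate features mirrors the paper's finding: the model leaned on the magnitude of the difference, while its direction contributed little.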


Articles from Noise & Health are provided here courtesy of Wolters Kluwer – Medknow Publications
