Sleep Med. Author manuscript; available in PMC 2026 Feb 27.
Published before final editing as: Sleep Med. 2026 Feb 9;141:108828. doi: 10.1016/j.sleep.2026.108828

Evaluation of an automated sleep apnea scoring algorithm via the Wesper Lab home sleep apnea test

Chelsie Rohrscheib a,*, Antonio Artur Moura a, Janna Raphelson b, Jeremy E Orr b, Ruchir P Patel c, Atul Malhotra b
PMCID: PMC12945445  NIHMSID: NIHMS2149162  PMID: 41671846

Abstract

This study evaluated the performance of the Wesper Lab home sleep apnea test (HSAT) artificial intelligence (AI) automated scoring algorithm under both in-laboratory and real-world conditions. We conducted a multi-tiered validation using two datasets and three analyses. The primary analysis compared apnea-hypopnea index (AHI) and central apnea index (CAI) from Wesper Lab HSATs with simultaneous polysomnography (PSG) scored by blinded technologists (n = 44). The secondary analysis evaluated blinded scoring of raw Wesper Lab signals from the same 44 patients: first by a single scorer, then by two additional scorers to assess inter-scorer consistency. The tertiary analysis examined clinical HSATs (n = 139) in which algorithm-derived AHI was compared with expert rescoring across 11 independent clinics. Agreement metrics included Pearson correlation, Bland-Altman analysis, and confusion matrices. Primary analysis: the algorithm showed strong correlation with PSG for AHI (r = 0.90) and CAI (r = 0.82) with minimal bias on Bland-Altman analysis. Secondary analysis: the correlation was r = 0.95 with minimal bias. Across three scorers, correlation remained ≥0.93. Tertiary analysis: correlation was r = 0.98 with minimal bias. These findings demonstrate that the Wesper Lab autoscoring algorithm is a reliable tool for obstructive sleep apnea and central apnea event detection, supporting its role as an HSAT platform that enhances accessibility to sleep apnea diagnosis.

Keywords: Sleep medicine, Obstructive sleep apnea, Central sleep apnea, Home sleep apnea test, Artificial intelligence, Algorithms

1. Introduction

Obstructive sleep apnea (OSA) is a common sleep-related breathing disorder characterized by recurrent episodes of upper airway obstruction during sleep, leading to intermittent hypoxia and sleep fragmentation [1]. If left untreated, OSA is associated with a range of serious comorbidities, including cardiovascular disease, type 2 diabetes, cognitive impairment, and depression [2]. Despite its high prevalence, OSA remains underdiagnosed, in part due to barriers associated with traditional in-lab polysomnography (PSG), such as cost, limited access, and patient discomfort [3–5].

To overcome the limitations of traditional sleep apnea diagnostics, there has been increasing interest in the use of artificial intelligence (AI) to improve the accuracy and efficiency of sleep testing, particularly through home-based approaches [6,7]. AI-enabled algorithms are now commonly integrated into home sleep apnea tests (HSATs), where they automate the scoring of respiratory events, reduce inter-scorer variability, and support scalable, multi-night interpretation of sleep data [8,9]. One such system is Wesper Lab, an FDA-cleared, Type III HSAT that uses AI-driven scoring to detect obstructive or central apneas and hypopneas in accordance with American Academy of Sleep Medicine (AASM) 3% and 4% desaturation criteria [10,11]. The Wesper Lab system records thoracic and abdominal respiratory effort and derives airflow using a summed respiratory signal analogous to respiratory inductance plethysmography, providing physiologic information relevant to distinguishing obstructive events (preserved effort) from central events (absent effort).
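As a rough illustration of this signal design (not the vendor's implementation), the derived airflow can be thought of as a weighted sum of the two effort bands, analogous to an RIPsum signal. The sketch below uses hypothetical signal names and calibration weights.

```python
import numpy as np

def ripsum_flow(thoracic: np.ndarray, abdominal: np.ndarray,
                k_thor: float = 1.0, k_abd: float = 1.0) -> np.ndarray:
    """Weighted sum of calibrated effort bands as a surrogate airflow signal.

    Central events show absent excursions in both effort bands, whereas
    obstructive events show reduced surrogate flow with preserved effort.
    """
    return k_thor * thoracic + k_abd * abdominal
```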

The objective of this study was to evaluate the diagnostic performance of the Wesper Lab automated scoring algorithm across both controlled laboratory conditions and real-world home testing, extending prior device-level validation work by focusing specifically on algorithm performance across complementary contexts. To achieve this, we conducted three complementary analyses: (1) comparison of expert-scored in-laboratory PSG with concurrent Wesper Lab HSATs scored automatically, (2) comparison of blinded expert rescoring of raw in-laboratory Wesper Lab HSAT signals with automated scoring, and (3) comparison of automated scoring with expert rescoring of real-world clinical Wesper Lab HSATs. Together, these analyses provide a multi-tiered assessment of the algorithm’s diagnostic accuracy, reliability, and clinical applicability across diverse testing environments.

2. Materials and methods

2.1. Analysis overview

To evaluate the performance of the Wesper Lab automated scoring algorithm, we conducted a multi-tiered validation using two datasets and three distinct analyses, each designed to assess different aspects of algorithmic accuracy relative to human scoring. The datasets included (1) concurrent Wesper Lab HSATs and in-laboratory polysomnography (PSG) studies, and (2) real-world clinical Wesper Lab HSATs. Across all analyses, respiratory event detection and classification were performed using AASM-standard scoring definitions, and algorithm-derived indices were compared with those scored by trained human technologists using standard AASM criteria (3% desaturation threshold) [11].

Primary endpoints included agreement between algorithm-derived and PSG-derived apnea-hypopnea index (AHI) and central apnea index (CAI), assessed using Pearson correlation and Bland-Altman limits of agreement. CAI was derived as a subclassification of apnea events within the AHI framework using identical physiologic signals and scoring rules. This study did not assess respiratory-effort-related arousals. Blinding procedures were implemented to minimize scorer bias and ensure independence between automated and human scoring across all analyses, in accordance with the Institutional Review Board (IRB)-approved study protocol and prior validation methodology.
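For concreteness, the agreement metrics used throughout this study can be computed as in the minimal sketch below; variable names (ahi_algo, ahi_ref) are illustrative, not taken from the study's analysis code.

```python
import numpy as np
from scipy import stats

def agreement(ahi_algo: np.ndarray, ahi_ref: np.ndarray) -> dict:
    """Pearson correlation plus Bland-Altman bias and 95% limits of agreement."""
    r, p = stats.pearsonr(ahi_algo, ahi_ref)
    diff = ahi_algo - ahi_ref                # algorithm minus reference (events/h)
    bias = diff.mean()                       # mean bias
    half_width = 1.96 * diff.std(ddof=1)     # 1.96 x SD of the differences
    return {"pearson_r": r, "p_value": p, "bias": bias,
            "loa_lower": bias - half_width, "loa_upper": bias + half_width}
```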

To control the overall Type I error rate associated with multiple hypothesis testing, a hierarchical gate-keeping strategy was employed. Two primary hypotheses were prespecified, corresponding to agreement between algorithm-derived and PSG-derived AHI and CAI. Statistical significance testing proceeded sequentially, such that success of the primary objective required both primary endpoints to achieve statistical significance at the two-sided α = 0.05 level. Formal hypothesis testing of secondary objectives was conducted only if the primary objective was met. Tertiary and exploratory analyses were performed without adjustment for multiplicity; accordingly, associated p-values are reported descriptively and are intended for hypothesis generation rather than confirmatory inference.
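The fixed-sequence logic of this gate-keeping strategy is simple to express; the sketch below is illustrative only, with placeholder p-values rather than study results.

```python
ALPHA = 0.05  # two-sided significance level for the primary objective

def gatekeeping(p_ahi: float, p_cai: float, secondary_pvalues: list) -> dict:
    """Secondary endpoints are formally tested only if BOTH primary endpoints
    (AHI and CAI agreement) are significant at the two-sided alpha level."""
    primary_met = (p_ahi < ALPHA) and (p_cai < ALPHA)
    # If the gate is not passed, secondary p-values are reported descriptively.
    secondary = [p < ALPHA for p in secondary_pvalues] if primary_met else None
    return {"primary_objective_met": primary_met,
            "secondary_significant": secondary}
```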

2.2. Population and primary analysis: Concurrent in-lab PSG comparison

In the primary validation analysis, we used data from a previously published 2023 IRB-approved clinical study by Raphelson and colleagues, in which 44 participants underwent simultaneous in-lab PSG and Wesper Lab testing [10]. The cohort included 26 males and 18 females, with a mean age of 48.8 years (SD = 14.7) and a mean BMI of 33.73 kg/m2 (SD = 9.57).

PSG recordings were scored by board-certified sleep technologists who were blinded to all Wesper Lab signals and algorithm-derived outputs. Wesper HSAT data were processed independently by the automated algorithm, which had no access to PSG signals, annotations, or scores. Importantly, the technologists were not involved in patient recruitment, PSG acquisition, or data collection. Wesper Lab algorithm-derived AHI and CAI values were compared to the AHI and CAI scored from PSG signals. Statistical analyses followed a pre-specified analysis plan and were conducted using locked algorithm outputs.

2.3. Secondary analysis: Blinded rescoring of raw Wesper Lab signals

In the secondary analysis, we evaluated the accuracy of the Wesper Lab algorithm independently of polysomnography by using blinded human scoring of raw Wesper Lab signals from the same 44 patients included in the concurrent PSG study. As in the primary analysis, Wesper Lab HSAT data were processed independently by the automated algorithm, which had no access to PSG signals, annotations, or scores.

Human scoring was conducted in two stages. First, a single board-certified sleep technologist, blinded to the Wesper Lab algorithm outputs, PSG results, and patient identifiers, manually scored the raw Wesper signals. Second, to assess inter-scorer reliability, two additional board-certified technologists, also blinded to algorithm outputs, PSG data, and to each other’s scores, independently rescored the same Wesper Lab recordings. Human scorers were not involved in data collection or enrollment. For both stages, the agreement between automated and human scoring was assessed using Pearson correlation, Bland-Altman analysis, and confusion matrix classification across OSA severity categories.
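The severity-category comparisons can be sketched as below, assuming the conventional AHI cut-points (<5 normal, 5 to <15 mild, 15 to <30 moderate, ≥30 severe); the exact binning code used in the study is not restated here.

```python
from sklearn.metrics import confusion_matrix

SEVERITY_LABELS = ["normal", "mild", "moderate", "severe"]

def severity(ahi: float) -> str:
    """Map AHI (events/h) to a severity category using conventional cut-points."""
    if ahi < 5:
        return "normal"
    if ahi < 15:
        return "mild"
    if ahi < 30:
        return "moderate"
    return "severe"

def severity_confusion(ahi_algorithm, ahi_human):
    """Rows: human-scored severity; columns: algorithm-scored severity."""
    y_true = [severity(a) for a in ahi_human]
    y_pred = [severity(a) for a in ahi_algorithm]
    return confusion_matrix(y_true, y_pred, labels=SEVERITY_LABELS)
```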

2.4. Tertiary analysis: Clinical HSAT rescoring

The tertiary analysis used a real-world dataset of 139 patients who underwent home sleep apnea testing with Wesper Lab in routine clinical practice across 11 independent sleep clinics in 8 U.S. states. Expert rescoring was performed on raw Wesper signals as part of standard clinical workflows and therefore was not conducted under fully blinded conditions; in some cases, scorers may have had access to automated algorithm outputs during review.

Inclusion criteria for the tertiary real-world clinical analysis required that at least 75% of each HSAT recording undergo either manual rescoring or substantive review by a trained sleep technologist, defined as event-level assessment with confirmation, modification, or rejection of algorithm-detected respiratory events rather than simple report sign-off. In addition, studies were required to demonstrate physiologic concordance between respiratory event frequency and oxygen desaturation burden, defined by an apnea-hypopnea index (AHI) to oxygen desaturation index (ODI) ratio between 0.8 and 1.5. This ratio constraint was prespecified as a data-quality and physiologic plausibility criterion, consistent with prior work demonstrating close coupling between AHI and ODI in desaturation-based scoring paradigms [12]. Exclusion of recordings with marked AHI-ODI discordance reduced the influence of implausible outliers likely attributable to scoring variability or signal artifact rather than true disease severity. All comparative analyses were performed retrospectively using locked, algorithm-derived AHI values that were not modified based on human review.
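A minimal sketch of this prespecified plausibility filter follows; the function name and handling of a zero ODI are illustrative assumptions, not the study's implementation.

```python
def passes_ahi_odi_check(ahi: float, odi: float,
                         lower: float = 0.8, upper: float = 1.5) -> bool:
    """Retain a study only when the AHI-to-ODI ratio falls within [0.8, 1.5]."""
    if odi <= 0:
        return False  # undefined ratio; treat as failing the quality check
    return lower <= ahi / odi <= upper
```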

Of the 256 real-world clinical HSAT recordings initially eligible for tertiary analysis, 116 were excluded based on the AHI/ODI ratio criterion, resulting in a final analytic cohort of 139 studies. Excluded recordings were not uniformly distributed across contributing sites, with a disproportionate number originating from a small subset of clinics. The final cohort included 59 males, 68 females, and 12 patients with sex not reported, with a mean age of 45.9 years (SD = 17.1) and a mean BMI of 28.2 kg/m2 (SD = 6.4). For each patient, algorithm-derived AHI values were directly compared to the manually rescored values. Agreement was assessed using Pearson correlation, Bland-Altman analysis (mean bias and 95% limits of agreement), and confusion matrix classification across OSA severity categories (normal, mild, moderate, severe).

3. Results

3.1. Primary analysis

In the comparison between Wesper Lab algorithm scoring and blinded human scoring of in-lab PSG data, the algorithm showed an AHI Pearson correlation of r = 0.90 (95% CI: [0.83, 0.95]; p < 0.0001). Bland-Altman analysis revealed a mean bias of 0.59 (95% CI: [−1.24, 2.41]), with 95% limits of agreement from −11.17 (95% CI: [−14.31, −8.03]) to 12.35 (95% CI: [9.21, 15.49]), indicating minimal overall bias (Fig. 1A and 1B). The confusion matrix showed good classification for normal and mild OSA, but slightly reduced detection of moderate and severe OSA, likely due to the lower representation of high-severity cases in this cohort (Fig. 1C).

Fig. 1. Comparison of AHI and AHI severity between the Wesper algorithm and human scoring of concurrent PSG signals. A. Pearson’s correlation. B. Bland-Altman analysis. C. Confusion matrix comparing OSA severity classification.

For the second primary endpoint, CAI agreement between the AI autoscoring algorithm and PSG was evaluated. For CAI, the Pearson correlation was r = 0.82 (95% CI: [0.68, 0.90]; p < 0.0001). Bland-Altman analysis demonstrated a mean bias of −0.53 (95% CI: [−1.05, −0.02]), with 95% limits of agreement from −3.83 (95% CI: [−4.70, −2.95]) to 2.76 (95% CI: [1.88, 3.63]), indicating minimal overall bias (Fig. 2).

Fig. 2. Comparison of CAI between the Wesper algorithm and human scoring of concurrent PSG signals. A. Pearson’s correlation. B. Bland-Altman analysis.

3.2. Secondary analysis

In the comparison between the Wesper Lab algorithm and blinded rescoring of the raw Wesper Lab signals, the correlation remained strong, with r = 0.95 (95% CI: [0.90, 0.97]; p < 0.0001). Bland-Altman analysis showed a mean bias of 1.44 (95% CI: [−0.02, 2.90]), with 95% limits of agreement from −7.95 (95% CI: [−10.46, −5.44]) to 10.83 (95% CI: [8.32, 13.34]). These results reflect scoring by a single primary blinded reader. The corresponding confusion matrix demonstrated good agreement across severity levels. Misclassifications were infrequent and largely limited to adjacent categories, particularly in the moderate and normal ranges (Fig. 3C).

Fig. 3. Comparison of AHI and AHI severity between the Wesper Lab algorithm and blinded human scoring of Wesper Lab signals. A. Pearson’s correlation. B. Bland-Altman analysis. C. Confusion matrix comparing OSA severity classification.

To further evaluate inter-reader consistency, we conducted an additional analysis using three independent blinded readers. Accuracy was consistently maintained across scorers with an AHI Pearson correlation of r ≥ 0.93 (Table 1).

Table 1.

Agreement between the Wesper Lab autoscoring algorithm and three blinded human scorers of raw Wesper Lab signals, showing correlation and Bland-Altman bias and limits of agreement.

Scorer | Pearson r | Mean bias (events/h) | Lower 95% LOA (events/h) | Upper 95% LOA (events/h)
1 | 0.95 | 1.44 | −7.95 | 10.83
2 | 0.93 | 4.90 | −8.12 | 17.92
3 | 0.95 | 1.59 | −8.09 | 11.26

3.3. Tertiary analysis

When compared to manually rescored Wesper Lab HSATs, the Wesper Lab algorithm demonstrated excellent agreement, with a Pearson correlation coefficient of r = 0.98 (95% CI: [0.97, 0.98]; p < 0.0001). Bland-Altman analysis revealed a mean bias of −0.28 (95% CI: [−1.02, 0.46]), with 95% limits of agreement from −8.88 (95% CI: [−10.14, −7.62]) to 8.32 (95% CI: [7.06, 9.58]), indicating minimal systematic error and strong consistency across the AHI spectrum (Fig. 4A and 4B).

Fig. 4. Comparison of AHI and AHI severity between the Wesper algorithm and human rescoring of Wesper Lab HSATs. A. Pearson’s correlation. B. Bland-Altman analysis. C. Confusion matrix comparing OSA severity classification.

The confusion matrix demonstrated high agreement across all OSA severity categories, with 79 of 85 patients without OSA correctly identified. Classification of moderate and severe cases was also accurate, with 14 of 16 severe cases and 10 of 14 moderate cases correctly detected. Misclassifications were infrequent and generally limited to adjacent categories, supporting the algorithm’s ability to reliably stratify OSA severity (Fig. 4C).
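As a quick arithmetic check, per-category detection rates follow directly from the counts reported above; the short snippet below uses only the figures given in the text (other cells of the confusion matrix are not restated here).

```python
# Per-category detection rates from the tertiary confusion matrix counts.
reported = {"normal": (79, 85), "moderate": (10, 14), "severe": (14, 16)}
for category, (correct, total) in reported.items():
    print(f"{category}: {correct}/{total} = {correct / total:.0%}")
# -> normal ≈93%, moderate ≈71%, severe ≈88%
```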

4. Discussion

This study demonstrates that the Wesper Lab automated scoring algorithm provides clinically robust detection of respiratory apnea events across multiple validation settings. In the primary analysis using simultaneous PSG data, the algorithm maintained high correlation and diagnostic performance. In the secondary analysis, where blinded technologists rescored raw Wesper Lab signals, the algorithm showed strong agreement across scorers, confirming that its performance is consistent with expert interpretation even without PSG as a reference. Finally, in the tertiary real-world HSAT analysis, the algorithm demonstrated strong correlation with expert rescoring and minimal systematic bias. Together, these results underscore the algorithm’s reliability both in controlled laboratory settings and in routine clinical use, while also supporting the validity of Wesper Lab as a standalone diagnostic platform.

Across all three blinded scorers, agreement remained consistent, with correlation coefficients ≥ 0.93 and minimal systematic bias. These findings suggest that the algorithm not only performs comparably to human experts but also does so with minimal variability across scorers, addressing a long-standing issue of inter-scorer inconsistency in manual sleep study interpretation [13,14]. As AI-enabled HSATs continue to evolve, validated systems like Wesper Lab represent a scalable, accessible solution to address the diagnostic gap in obstructive sleep apnea. Given growing evidence that single-night testing can misclassify OSA severity, multi-night home testing combined with automated scoring may improve longitudinal characterization while reducing scoring burden [15]. Future work should explore the algorithm’s utility in longitudinal monitoring and its integration into clinical workflows for therapy management.

While the present study focused on diagnostic agreement, future validation efforts should extend beyond accuracy metrics to evaluate clinically meaningful downstream outcomes. Particularly relevant endpoints include the accuracy of treatment initiation decisions (e.g., PAP prescription and modality selection), longitudinal treatment response as reflected by changes in AHI, ODI, or symptom burden, and concordance with clinician-directed management pathways. Evaluation of these outcomes would help establish the clinical utility of automated HSAT not only for diagnosis but also for therapy guidance and monitoring.

Despite our study’s strengths, we acknowledge several limitations. First, the sample size was limited, and thus we strongly support additional efforts to confirm or refute our findings. Second, although demographic variables including race, ethnicity, and Fitzpatrick skin phototype were collected as part of the parent clinical study [10], the present analysis focused on overall algorithm agreement and did not examine performance stratified by these characteristics. Because skin pigmentation may influence pulse oximetry accuracy [16], future studies could further evaluate algorithm performance across diverse populations. Third, we did not evaluate hard clinical outcomes [17,18], such as myocardial infarction or stroke, so future prospective studies will be needed to determine the predictive value of this algorithm in relation to sleep apnea–related health risks.

Although we were careful to maintain blinding, the process of rescoring in the tertiary analysis required some inspection of automated outputs, which could modestly inflate accuracy estimates. However, this also suggests that the addition of expert review to algorithmic scoring has minimal effect, reinforcing the reliability of automated scoring as a standalone method. Further, because the tertiary analysis reflects routine clinical practice, an ODI/AHI ratio inclusion criterion was applied as a data-quality safeguard to limit the influence of extreme discordance unlikely to reflect true disease severity. While this may introduce selection bias by excluding uncommon non-desaturating phenotypes, the criterion was confined to the real-world dataset and did not affect the blinded primary or secondary validation analyses. Finally, our analysis of CAI in Wesper Lab-only datasets was limited by variability among human scorers in identifying central events. This is a well-recognized challenge in manual scoring, and while our findings are encouraging in comparison to PSG, they highlight the need for future CSA-focused validation studies, rather than suggesting a systematic shortcoming of the algorithm.

5. Conclusion

The Wesper Lab automated scoring algorithm demonstrates strong agreement with expert scoring across both in-laboratory PSG and real-world diagnostic HSATs, supporting its role as a reliable and scalable tool for OSA detection and central apnea event classification. By addressing inter-scorer variability and maintaining robust accuracy in routine clinical practice, Wesper Lab represents a validated AI-driven platform that can enhance access, standardization, and efficiency in sleep medicine. Future work should evaluate its performance in longitudinal monitoring, diverse populations, and integration into clinical workflows for therapy management.

Acknowledgments

This study was funded by Wesper Inc. The authors thank Kyle Schwab, MD, for providing feedback on the completed manuscript. We also acknowledge the authors of Raphelson et al. (2023, J Clin Sleep Med) for their original clinical trial, which provided data supporting this validation study. Additionally, we would like to thank Colleen Kelly, PhD, PStat, for her assistance with the statistical analysis.

Declaration of competing interest

The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Atul Malhotra reports financial support was provided by Wesper Inc. Atul Malhotra reports a relationship with Wesper Inc that includes: consulting or advisory. Ruchir P Patel is the Principal Investigator (PI) for an unrelated clinical trial sponsored by Wesper, Inc. The remaining authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Footnotes

CRediT authorship contribution statement

Chelsie Rohrscheib: Writing - review & editing, Writing - original draft, Visualization, Validation, Supervision, Project administration, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. Antonio Artur Moura: Writing - review & editing, Writing - original draft, Visualization, Validation, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. Janna Raphelson: Writing - review & editing, Writing - original draft, Formal analysis. Jeremy E. Orr: Writing - review & editing, Writing - original draft, Formal analysis. Ruchir P. Patel: Writing - review & editing, Writing - original draft, Methodology, Formal analysis. Atul Malhotra: Writing - review & editing, Writing - original draft, Methodology, Formal analysis, Conceptualization.

References

[1] Abbasi A, Gupta SS, Sabharwal N, Meghrajani V, Sharma S, Kamholz S, et al. A comprehensive review of obstructive sleep apnea. Sleep Sci 2021;14:142–54. 10.5935/1984-0063.
[2] Gleeson M, McNicholas WT. Bidirectional relationships of comorbidity with obstructive sleep apnoea. Eur Respir Rev 2022;31:210256. 10.1183/16000617.0256-2021.
[3] Benjafield AV, Ayas NT, Eastwood PR, Heinzer R, Ip MSM, Morrell MJ, et al. Estimation of the global prevalence and burden of obstructive sleep apnoea: a literature-based analysis. Lancet Respir Med 2019;7:687–98. 10.1016/S2213-2600(19)30198-5.
[4] Peppard PE, Young T, Barnet JH, Palta M, Hagen EW, Hla KM. Increased prevalence of sleep-disordered breathing in adults. Am J Epidemiol 2013;177:1006–14. 10.1093/aje/kws342.
[5] Natsky AN, Vakulin A, Coetzer CLC, McEvoy RD, Adams RJ, Kaambwa B. Economic evaluation of diagnostic sleep studies for obstructive sleep apnoea: a systematic review protocol. Syst Rev 2021;10:104. 10.1186/s13643-021-01651-3.
[6] Malhotra A, Ayappa I, Ayas N, Collop N, Kirsch D, Kryger M, et al. Metrics of sleep apnea severity: beyond the apnea-hypopnea index. Sleep 2020;43:zsaa045. 10.1093/sleep/zsaa045.
[7] Korkalainen H, Aakko J, Duce B, Kainulainen S, Leino A, Nikkonen S, et al. Deep learning enables sleep staging from photoplethysmogram for patients with suspected sleep apnea. Sleep 2020;43:zsaa098. 10.1093/sleep/zsaa098.
[8] Takita H, Kabata D, Walston SL, Tatekawa H, Saito K, Tsujimoto Y, et al. A systematic review and meta-analysis of diagnostic performance comparison between generative AI and physicians. NPJ Digit Med 2025;8:175. 10.1038/s41746-025-01543-z.
[9] Hussein O, Alkhader A, Gohar A, Bhat A. Home sleep apnea testing for obstructive sleep apnea. Mo Med 2024;121:60–5.
[10] Raphelson JR, Ahmed IM, Ancoli-Israel S, Ojile J, Pearson S, Bennett N, et al. Evaluation of a novel device to assess obstructive sleep apnea and body position. J Clin Sleep Med 2023;19:1643–9. 10.5664/jcsm.10644.
[11] American Academy of Sleep Medicine. The AASM manual for the scoring of sleep and associated events: rules, terminology and technical specifications. Version 3. Darien, IL: American Academy of Sleep Medicine; 2023.
[12] Veugen CCAFM, Teunissen EM, den Otter LAS, Kos MP, Stokroos RJ, Copper MP. Prediction of obstructive sleep apnea: comparative performance of three screening instruments on the apnea-hypopnea index and the oxygen desaturation index. Sleep Breath 2021;25:1267–75. 10.1007/s11325-020-02219-6.
[13] Lee YJ, Lee JY, Cho JH, Choi JH. Interrater reliability of sleep stage scoring: a meta-analysis. J Clin Sleep Med 2022;18:193–202. 10.5664/jcsm.9538.
[14] Penzel T, Zhang X, Fietze I. Inter-scorer reliability between sleep centers can teach us what to improve in the scoring rules. J Clin Sleep Med 2013;9:89–91. 10.5664/jcsm.2352.
[15] Lechat B, Nguyen DP, Reynolds A, Loffler K, Escourrou P, McEvoy RD, Adams R, et al. Single-night diagnosis of sleep apnea contributes to inconsistent cardiovascular outcome findings. Chest 2023;164:231–40. 10.1016/j.chest.2023.01.027.
[16] Sjoding MW, Dickson RP, Iwashyna TJ, Gay SE, Valley TS. Racial bias in pulse oximetry measurement. N Engl J Med 2020;383:2477–8. 10.1056/NEJMc2029240. Erratum in: N Engl J Med 2021;385:2496. 10.1056/NEJMx210003.
[17] Agrawal R, Sharafkhaneh A, Nambi V, BaHammam A, Razjouyan J. Obstructive sleep apnea modulates clinical outcomes post-acute myocardial infarction: a large longitudinal veterans’ dataset report. Respir Med 2023;211:107214. 10.1016/j.rmed.2023.107214.
[18] Yeghiazarians Y, Jneid H, Tietjens JR, Redline S, Brown DL, El-Sherif N, et al. Obstructive sleep apnea and cardiovascular disease: a scientific statement from the American Heart Association. Circulation 2021;144:e56–67. 10.1161/CIR.0000000000000988.
