Abstract
Background
Diagnostic guidelines for pediatric ARDS (PARDS) were developed at the 2015 Pediatric Acute Lung Injury Consensus Conference (PALICC). Although this was an improvement in creating pediatric-specific diagnostic criteria, there remains potential for variability in identification of PARDS.
Research Question
What is the interrater reliability of the 2015 PALICC criteria for diagnosing moderate to severe PARDS? What clinical criteria and patient factors are associated with diagnostic disagreements?
Study Design and Methods
Patients with acute hypoxic respiratory failure admitted from 2016 to 2021 who received invasive mechanical ventilation were retrospectively reviewed by two PICU physicians. Reviewers evaluated whether the patient met the 2015 PALICC definition of moderate to severe PARDS and rated their diagnostic confidence. Interrater reliability was measured using Gwet’s agreement coefficient.
Results
Thirty-seven of 191 encounters had a diagnostic disagreement. Interrater reliability was substantial (Gwet’s agreement coefficient, 0.74; 95% CI, 0.65-0.83). Disagreements were caused by different interpretations of chest radiographs (56.8%), ambiguity in the origin of pulmonary edema (37.8%), or lack of clarity about whether the patient’s current condition was significantly different from baseline (27.0%). Disagreement was more likely in patients who were chronically ventilated (OR, 4.66; 95% CI, 2.16-10.08; P < .001), had a primary cardiac admission diagnosis (OR, 3.36; 95% CI, 1.18-9.53; P = .02), or underwent cardiothoracic surgery during the admission (OR, 4.90; 95% CI, 1.60-15.00; P = .005). Reviewers were at least moderately confident in their decision 73% of the time; however, they were less likely to be confident if the patient had cardiac disease or chronic respiratory failure.
Interpretation
The interrater reliability of the 2015 PALICC criteria for diagnosing moderate to severe PARDS in this cohort was substantial, with diagnostic disagreements commonly caused by differences in chest radiograph interpretations. Patients with cardiac disease or chronic respiratory failure were more vulnerable to diagnostic disagreements. More guidance is needed on interpreting chest radiographs and diagnosing PARDS in these subgroups.
Key Words: ARDS; PALICC criteria; pediatric ARDS
Take-home Point.
Study Question: What is the interrater reliability of the 2015 Pediatric Acute Lung Injury Consensus Conference criteria for diagnosing moderate to severe pediatric ARDS, and what clinical criteria and patient factors are associated with diagnostic disagreements?
Results: Interrater reliability was substantial; diagnostic disagreements were most often caused by differing interpretations of chest radiographs and were more frequent in patients with cardiac disease or chronic respiratory failure.
Interpretation: Additional guidance on diagnosing pediatric ARDS in these subgroups, along with instructive parameters for interpreting radiographic imaging, would improve identification of these patients and allow future research to better delineate how treatment interventions affect patient outcomes.
Pediatric ARDS (PARDS) occurs in approximately 3% of patients admitted to PICUs and 6% of those on mechanical ventilation based on international data.1 Despite advances in recognition and treatment in the last few decades, PARDS continues to have significant morbidity and mortality in PICUs worldwide.1, 2, 3
Until recently, the standard of care for diagnosing and managing PARDS was the 2015 Pediatric Acute Lung Injury Consensus Conference (PALICC) guidelines, developed by a panel of experts.4 Major variations from the adult Berlin definition include the following: (1) the ability to include children without arterial blood gas measurements, (2) utilization of the oxygenation (saturation) index as opposed to the Pao2/Fio2 ratio, (3) simplification of radiographic criteria, and (4) mention of the special populations of children with chronic lung disease and congenital heart disease.5,6
An updated version of the PALICC guidelines was published in February 2023.7 Major highlights include new categories of possible PARDS and at-risk for PARDS, grouping mild and moderate PARDS into nonsevere vs severe, and instituting a 4-h waiting period prior to risk stratification. However, many of the guidelines are relatively unchanged because of a lack of new literature since 2015 (eg, how to diagnose PARDS in patients with chronic cardiorespiratory illness).
Although these guidelines are an improvement in creating pediatric-specific diagnostic criteria, as with any syndrome that lacks a criterion standard laboratory or histopathologic diagnosis, there is still potential for significant variability in identification of the disease. Delayed, missed, or inappropriate recognition of PARDS may influence management strategies and clinical outcomes for pediatric patients.
In this study, we aimed to answer the following questions: (1) What is the interrater reliability of the 2015 PALICC criteria for diagnosing moderate to severe PARDS? and (2) What major clinical criteria and patient factors are associated with diagnostic disagreements? Based on prior literature in both adult and pediatric patients, we hypothesized that reviewer agreement would be fair to moderate, with frequent differences in chest radiograph interpretations.8,9 Additionally, we predicted that diagnostic disagreements would be more frequent in the subpopulations with chronic cardiorespiratory disease.
Study Design and Methods
A single-center retrospective cohort study was conducted at Cohen Children’s Medical Center, a tertiary care children’s hospital in the greater New York City area, of all patients who received invasive mechanical ventilation for acute hypoxic respiratory failure in the PICU from November 2016 through April 2021. The electronic health record was queried for all invasively mechanically ventilated patients within the study period who had Fio2, Spo2, and mean airway pressure recorded. The oxygen saturation index (OSI) (mean airway pressure × Fio2 × 100/Spo2) was recalculated whenever any of these three variables was updated. Missing data at these time points were imputed by forward-filling the last recorded value for the missing variable. OSI was then used to stratify PARDS severity per the 2015 PALICC guidelines.4 Only patients who had an OSI > 7.5 (corresponding to moderate PARDS) for at least 8 consecutive hours were included.10 The threshold of at least moderate PARDS was chosen because the 2015 PALICC guidelines recommend considering aggressive management strategies (eg, high-frequency oscillatory ventilation, ECMO) in this population. Patients were excluded if they did not receive invasive mechanical ventilation for at least 48 h or if they spent any time in the neonatal ICU during the hospital admission of interest. Patients were classified as obese if their BMI was > 95th percentile for age and sex (> 2 y of age) or for weight-for-length (< 2 y of age).11
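As an illustration of the screening calculation described above (a minimal sketch, not the study's actual code; the variable names and sample time series are hypothetical):

```python
# OSI = mean airway pressure x FiO2 x 100 / SpO2, recalculated whenever any
# input changes; missing values are imputed by forward-filling the last
# recorded value, as described in the Methods.

def forward_fill(values):
    """Replace None entries with the most recent recorded value."""
    last, out = None, []
    for v in values:
        if v is not None:
            last = v
        out.append(last)
    return out

def osi(mean_airway_pressure, fio2, spo2):
    """Oxygen saturation index; FiO2 expressed as a fraction (0.21-1.0)."""
    return mean_airway_pressure * fio2 * 100 / spo2

# Hypothetical time series with SpO2 missing at the second timestamp.
map_vals = [10, 12, 12]
fio2_vals = [0.5, 0.6, 0.6]
spo2_vals = [95, None, 92]

spo2_filled = forward_fill(spo2_vals)
osis = [osi(m, f, s) for m, f, s in zip(map_vals, fio2_vals, spo2_filled)]
# Screening flag: OSI > 7.5 corresponds to at least moderate PARDS severity.
moderate_or_worse = [x > 7.5 for x in osis]
```

In the study, the flag additionally had to persist for at least 8 consecutive hours before an encounter entered the review queue.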
Two pediatric critical care physicians reviewed each subject and evaluated whether the patient met the PALICC definition of moderate to severe PARDS. Reviewer experience ranged from the second year of fellowship to 7 years in practice as an attending. Reviewers were provided with the PALICC guidelines, a brief clinical summary with any references to PARDS removed (e-Appendix 1), a spreadsheet containing all recorded OSIs, and access to the patient’s chest imaging. To limit bias in their determination of whether the patient had PARDS, reviewers did not have access to patient notes. A questionnaire was developed in REDCap (hosted at Northwell Health; version 12.5.8) that prompted the reviewer to specify whether the patient met moderate to severe PARDS criteria and, if not, to select which component was not met. If the reviewer thought the patient had PARDS, they were asked to document the date and time when all criteria were first met. Reviewers were then asked to rate their diagnostic confidence (equivocal, slightly, moderately, or highly confident).
Interrater reliability was measured using Gwet’s agreement coefficient (AC1) to assess agreement between the two raters for each subject. AC1 is a modification of the well-known kappa statistic, which makes an adjustment to the calculation of chance-corrected agreement to overcome the paradoxes that lead to biased estimates under certain conditions.12, 13, 14, 15, 16 The Landis-Koch interpretation of agreement coefficients was used as the benchmark scale, which defines an AC1 of 0.8 to 1 as almost perfect agreement, 0.6 to 0.8 as substantial agreement, 0.4 to 0.6 as moderate agreement, 0.2 to 0.4 as fair agreement, 0 to 0.2 as slight agreement, and < 0 as poor agreement.17
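For two raters and a binary determination, Gwet's AC1 has a simple closed form: chance agreement is 2q(1 − q), where q is the overall proportion of "yes" ratings across both raters, rather than kappa's product of per-rater marginals. A sketch of the calculation, using the agreement counts reported later in the Results:

```python
def gwet_ac1(both_yes, both_no, disagree):
    """Gwet's AC1 for two raters and a binary (yes/no) classification.

    Chance agreement is 2q(1-q), where q is the overall proportion of
    'yes' ratings across both raters. This adjustment avoids the
    high-prevalence paradox that can bias Cohen's kappa downward.
    """
    n = both_yes + both_no + disagree
    p_observed = (both_yes + both_no) / n
    # Each disagreement contributes exactly one 'yes' rating.
    q = (2 * both_yes + disagree) / (2 * n)
    p_chance = 2 * q * (1 - q)
    return (p_observed - p_chance) / (1 - p_chance)

# Counts from this study: 145 both-yes, 9 both-no, 37 disagreements.
ac1 = gwet_ac1(both_yes=145, both_no=9, disagree=37)
# -> ~0.74, i.e., substantial agreement on the Landis-Koch scale.
```

Note that this point estimate omits the variance calculation behind the reported 95% CI, which accounts for subject-level variability.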
Logistic regression was used to assess agreement status between reviewers as a function of each explanatory variable of interest. ORs and 95% CIs were computed. Descriptive statistics were computed for categorical data. χ2 tests were performed to assess the association of confidence level with selected variables. All analyses were conducted using SAS version 9.4 (SAS Institute Inc).
The Institutional Review Board of Northwell approved the study (No. 21-0355).
Results
One hundred ninety-one patient encounters were analyzed (Fig 1). Of these, 145 met moderate to severe PARDS criteria by both reviewers, nine did not meet criteria by either reviewer, and there were 37 diagnostic disagreements. Table 1 displays characteristics of the cohort. The median number of days of invasive mechanical ventilation was 13.0 (quartile 1-quartile 3, 7.2-21.9). The overall in-hospital mortality rate was 20.4%.
Figure 1.
Consolidated Standards of Reporting Trials-style flow diagram. The electronic health record (EHR) was queried for all patients meeting an oxygen saturation index ≥ 7.5 for at least 8 consecutive hours. NICU patients were excluded. Eighteen patients were not mechanically ventilated but had data recorded in erroneous EHR fields and therefore had been screened in incorrectly. Finally, patients who were ventilated for only a short period of time, or who had no daily notes to refer to (for the purpose of creating clinical vignettes), were removed. NICU = neonatal ICU.
Table 1.
Characteristics of Patients in the Cohort (N = 191)
| Characteristic | Value |
|---|---|
| Age, y | 2.4 (0.54-9.04) |
| Male sex | 116 (60.7) |
| Obese | 62 (32.5) |
| Primary admission diagnosis | |
| Cardiac | 17 (8.9) |
| Cardiothoracic surgery during admission | 14 (7.3) |
| Respiratory | 116 (60.7) |
| Infectious disease | 12 (6.3) |
| Neurologic | 8 (4.2) |
| GI | 5 (2.6) |
| Endocrine | 1 (0.5) |
| Oncologic | 10 (5.2) |
| Hematologic | 5 (2.6) |
| Postcardiac arrest | 9 (4.7) |
| Multitrauma | 1 (0.5) |
| Other | 7 (3.7) |
| History of prematurity ≤ 32 wk GA | 42 (22.0) |
| Presence of preexisting comorbidities | 148 (77.5) |
| Type of comorbidity | |
| Cardiac | 54 (28.3) |
| Cyanotic disease | 12 (6.3) |
| Respiratory | 89 (46.6) |
| Tracheostomy dependent | 51 (26.7) |
| Mechanical ventilator dependent | 44 (23.0) |
| Neurologic | 68 (35.6) |
| GI | 42 (22.0) |
| Endocrine | 5 (2.6) |
| Oncologic | 18 (9.4) |
| Hematologic | 12 (6.3) |
| Renal | 5 (2.6) |
| Genetic | 12 (6.3) |
| Other | 13 (6.8) |
| Length of invasive mechanical ventilation, d | 13.0 (7.2-21.9) |
| Hospital mortality | 39 (20.4) |
Values are median (quartile 1-quartile 3) or No. (%).
Interrater reliability of the PALICC criteria for moderate to severe PARDS in this retrospective cohort was found to be substantial (AC1, 0.74; 95% CI, 0.65-0.83). When there was a diagnostic disagreement, reviewers were most likely to disagree about whether the chest radiograph had qualifying findings (56.8%), including whether any opacity was present (35.1%). Other common causes of disagreement included whether pulmonary edema could be fully explained by cardiac failure or fluid overload (37.8%) and whether the patient’s current condition was significantly different from their baseline clinical status (27.0%). For all disagreements in which patients were admitted with a primary cardiac diagnosis (n = 7), underwent cardiac surgery during the admission (n = 7), or had congenital cyanotic heart disease (n = 4), raters disagreed on whether pulmonary edema could be explained by cardiac failure or fluid overload. Among disagreements involving chronically mechanically ventilated patients (n = 18), raters disagreed on whether the patient had qualifying chest radiograph findings (55.6%) and/or whether the condition was significantly different from baseline (50.0%).
Univariate logistic regression was performed to determine which factors were associated with a higher likelihood of disagreement between the reviewers (Table 2). Disagreement was more likely in patients who were chronically mechanically ventilated (OR, 4.66; 95% CI, 2.16-10.08; P < .001), had a primary cardiac admission diagnosis (OR, 3.36; 95% CI, 1.18-9.53; P = .02), or underwent cardiothoracic surgery during the admission (OR, 4.90; 95% CI, 1.60-15.00; P = .005). The association for congenital cyanotic heart disease did not reach significance (OR, 2.21; 95% CI, 0.63-7.79; P = .22). There were no associations between agreement status and current age, history of prematurity, or obesity status. A multivariable analysis was performed using demographic variables and select variables from the cardiac and respiratory domains (e-Table 1). No significant deviations were found compared with the univariate analyses.
Table 2.
Univariate Analysis of Disagreement Status and Confidence in Rating
| Characteristic | OR for Disagreement (95% CI), P Value | % Moderate+ Confidence in PARDS Determination, Characteristic Present vs Not Present (P Value) |
|---|---|---|
| Age < 2 y | 1.34 (0.65-2.75), .43 | 68.5% vs 77.3% (.06) |
| Obese | 1.16 (0.55-2.47), .70 | 71.0% vs 74.0% (.54) |
| Preexisting comorbidities | 2.80 (0.93-8.40), .07 | 70.6% vs 81.4% (.05) |
| History of prematurity | 1.18 (0.51-2.74), .70 | 70.2% vs 73.8% (.58) |
| Respiratory | ||
| Respiratory admit diagnosis | 0.54 (0.26-1.12), .10 | 78.9% vs 64.0% (.002)a |
| Tracheostomy dependent | 3.47 (1.64-7.36), .001a | 60.8% vs 77.5% (.002)a |
| Ventilator dependent | 4.66 (2.16-10.08), < .001a | 59.1% vs 77.2% (.002)a |
| Cardiac | ||
| Cardiac admit diagnosis | 3.36 (1.18-9.53), .02a | 32.4% vs 77.0% (< .001)a |
| Cyanotic heart disease | 2.21 (0.63-7.79), .22 | 45.8% vs 74.9% (.003)a |
| Cardiac surgery during admission | 4.90 (1.60-15.00), .005a | 39.3% vs 75.7% (< .001)a |
Characteristics were compared with their converse in likelihood of diagnostic disagreement using logistic regression, and moderate to high confidence in PARDS determination was compared against equivocal to slight confidence using the χ2 test. For example, a disagreement was 4.66 times more likely in patients who were chronically ventilated compared with those who were not, and reviewers reported being at least moderately confident in their PARDS determination 61.4% of the time for patients who were chronically ventilated compared with 74.6% of the time for patients who were not. PARDS = pediatric ARDS.
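For a binary exposure, each unadjusted OR in Table 2 reduces to the cross-product of a 2 × 2 table, with a Wald 95% CI computed on the log scale. As an illustrative sketch (the cell counts are inferred from the reported figures: 18 of the 44 chronically ventilated patients vs 19 of the remaining 147 had a disagreement), this reproduces the published estimate for ventilator dependence:

```python
import math

def odds_ratio_wald(a, b, c, d):
    """Unadjusted OR with Wald 95% CI for a 2x2 table:
    a/b = outcome/no-outcome with the exposure, c/d = without it."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se_log)
    hi = math.exp(math.log(or_) + 1.96 * se_log)
    return or_, lo, hi

# Disagreement vs agreement: chronically ventilated (18/26) vs not (19/128).
or_, lo, hi = odds_ratio_wald(a=18, b=26, c=19, d=128)
# -> OR ~4.66 (95% CI, ~2.16-10.08), matching the Table 2 row.
```

This arithmetic equivalence holds only for the univariate analyses; the multivariable model in e-Table 1 adjusts each estimate for the other covariates.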
Reviewer confidence in their diagnosis was analyzed. Although they rated their confidence level as moderate to high 73.0% of the time, reviewers were less likely to be confident for many of the same patient factors as those associated with a higher likelihood of disagreement (Table 2). However, reviewers were more confident in their decision if the patient had a primary respiratory diagnosis compared with those who did not (78.9% vs 64.0%, respectively; P = .002). When reviewers were not at least moderately confident in their PARDS determination, lack of confidence in interpreting chest radiographs and origin of pulmonary edema were the most common reasons overall (56.3% and 47.6%, respectively).
Discussion
Pediatric critical care physicians had substantial agreement when diagnosing moderate to severe PARDS using the PALICC criteria in this retrospective cohort. The main driver of diagnostic disagreement was variations in chest radiograph interpretations, with origin of pulmonary edema and determining whether the patient had a significant change from their baseline clinical status as other common factors.
Despite this overall high level of interrater reliability, children with cardiac disease or chronic respiratory failure were significantly more likely to have disagreements in PARDS diagnosis, specifically those who were chronically mechanically ventilated, those who had a primary cardiac admission diagnosis, and those who underwent cardiothoracic surgery during the hospitalization. This is not particularly surprising because patients with congenital heart disease and chronic lung disease are unique populations that often have alterations in their baseline oxygen saturation and respiratory requirements and have a multitude of reasons for abnormal chest imaging. Our results did not demonstrate a statistically significant association between congenital cyanotic heart disease and disagreement; however, it is possible that we were underpowered to detect a difference because only 12 patients were included in this analysis.
The 2015 and 2023 PALICC guidelines aim to include patients with cardiac disease or chronic respiratory failure; however, they both lack clarity on how to overcome these obstacles. For example, guidance is not provided on how to adjust the OSI for a child who is chronically mechanically ventilated prior to an acute insult, or what specific objective echocardiogram data to use to assist with classifying chest imaging opacifications. Prior researchers have alluded to this conundrum.2 To our knowledge, this study is the first that quantifies how lack of clarification for these subgroups leads to diagnostic disagreement and therefore could theoretically lead to variations in clinical management and patient outcomes.
Sjoding et al8 investigated the interrater reliability of the Berlin criteria for ARDS in a study of adults and found only moderate agreement among reviewers. Similar to the current study, most disagreements were caused by variability in chest radiograph interpretations. López-Fernández et al9 performed a subanalysis in a pediatric cohort and found only slight physician agreement that chest imaging demonstrated bilateral infiltrates. The current study demonstrated similar findings: a relatively high percentage of disagreements (35%) arose from determining whether the patient had any opacity on chest radiographs. Future studies should consider investigating how physicians make this determination.
Strengths of this study include the large sample size with a variety of underlying medical and surgical diagnoses. Compared with the Pediatric Acute Respiratory Distress Syndrome Incidence and Epidemiology (PARDIE) study, an international prospective study from 2016 to 2017, the cohort in our study had a similar makeup in terms of age, rates of prematurity, comorbidity, and mortality rates.1 However, a unique aspect of our study is the inclusion of children with cyanotic heart disease and chronic lung disease because many retrospective and prospective studies have excluded them in analyses.
This study has several limitations. As a single-center cohort study, the population analyzed may not be generalizable to the PARDS population as a whole. Although our PICU treats a wide variety of patient pathology, we do not treat most solid organ transplant recipients. Full echocardiogram reports were not specifically provided to the raters but were summarized in the clinical vignette where appropriate. If raters had access to the echocardiogram reports, reliability might have been higher in the cardiac population. These data were withheld to avoid information leakage that might have biased the raters, but a future study might include only predetermined, specific, objective data from echocardiograms. Similarly, routine laboratory values were not provided to reviewers because there was variability in which laboratory studies were obtained and when in the time course they were collected. This lack of standardization would have introduced further bias but could be standardized in future research studies.
Patients with PARDS were retrospectively identified and stratified. We opted to select for moderate to severe PARDS because this population was specifically singled out for more aggressive management considerations in the 2015 PALICC guidelines, making it a potentially more important group in which to identify factors leading to diagnostic disagreements. However, this differs from the 2023 guidelines, which group patients into mild to moderate vs severe categories. If mild PARDS had been included in this study, it is possible that the interrater reliability would have decreased. The use of a screening algorithm may have artificially increased the overall interrater reliability because reviewers did not have to determine degree of hypoxia for all intubated patients. Finally, it is unknown whether the interrater reliability would be similar if patients were analyzed prospectively, while their course of illness was ongoing.
Interpretation
In this study, we found that the interrater reliability of the 2015 PALICC criteria for moderate to severe PARDS was substantial. Diagnostic disagreements were most commonly caused by differences in chest radiograph interpretations. However, certain populations of pediatric patients were more vulnerable to diagnostic disagreements, specifically those with cardiac disease or chronic respiratory failure. The more recently published 2023 PALICC guidelines do not provide new guidance on how to overcome diagnostic obstacles in these patients. Additional guidance on diagnosing PARDS in these subgroups, along with instructive parameters for interpreting radiographic imaging, would improve identification of these patients and allow future research to better delineate how treatment interventions affect patient outcomes.
Funding/Support
The authors have reported to CHEST that no funding was received for this study.
Financial/Nonfinancial Disclosures
None declared.
Acknowledgments
Author contributions: S. S. is responsible for all content of this manuscript. L. S., D. K., and S. S. are responsible for conception of the work, analysis and interpretation of the data, and writing and revising the manuscript. J. F. contributed to study design, data analysis, and revising of the manuscript. I. M. and J. A. contributed to conception of the work, interpretation of the data, and revising of the manuscript.
Additional information: The e-Appendix and e-Table are available online under "Supplementary Data."
Supplementary Data
References
- 1. Khemani R.G., Smith L., Lopez-Fernandez Y.M., et al. Paediatric acute respiratory distress syndrome incidence and epidemiology (PARDIE): an international, observational study. Lancet Respir Med. 2019;7(2):115-128. doi: 10.1016/S2213-2600(18)30344-8.
- 2. Parvathaneni K., Belani S., Leung D., Newth C.J., Khemani R.G. Evaluating the performance of the Pediatric Acute Lung Injury Consensus Conference definition of acute respiratory distress syndrome. Pediatr Crit Care Med. 2017;18(1):17-25. doi: 10.1097/PCC.0000000000000945.
- 3. Wong J.J., Phan H.P., Phumeetham S., et al. Risk stratification in pediatric acute respiratory distress syndrome: a multicenter observational study. Crit Care Med. 2017;45(11):1820-1828. doi: 10.1097/CCM.0000000000002623.
- 4. Pediatric Acute Lung Injury Consensus Conference Group. Pediatric acute respiratory distress syndrome: consensus recommendations from the Pediatric Acute Lung Injury Consensus Conference. Pediatr Crit Care Med. 2015;16(5):428-439. doi: 10.1097/PCC.0000000000000350.
- 5. Beltramo F., Khemani R.G. Definition and global epidemiology of pediatric acute respiratory distress syndrome. Ann Transl Med. 2019;7(19):502. doi: 10.21037/atm.2019.09.31.
- 6. Gupta S., Sankar J., Lodha R., Kabra S.K. Comparison of prevalence and outcomes of pediatric acute respiratory distress syndrome using Pediatric Acute Lung Injury Consensus Conference criteria and Berlin definition. Front Pediatr. 2018;6:93. doi: 10.3389/fped.2018.00093.
- 7. Pediatric Acute Lung Injury Consensus Conference Group. Executive summary of the second international guidelines for the diagnosis and management of pediatric acute respiratory distress syndrome (PALICC-2). Pediatr Crit Care Med. 2023;24(2):143-168. doi: 10.1097/PCC.0000000000003147.
- 8. Sjoding M.W., Hofer T.P., Co I., Courey A., Cooke C.R., Iwashyna T.J. Interobserver reliability of the Berlin ARDS definition and strategies to improve the reliability of the ARDS diagnosis. Chest. 2018;153(2):361-367. doi: 10.1016/j.chest.2017.11.037.
- 9. López-Fernández Y.M., Smith L.S., Kohne J.G., et al. Prognostic relevance and inter-observer reliability of chest-imaging in pediatric ARDS: a pediatric acute respiratory distress incidence and epidemiology (PARDIE) study. Intensive Care Med. 2020;46(7):1382-1393. doi: 10.1007/s00134-020-06074-7.
- 10. Yehya N., Thomas N.J., Khemani R.G. Risk stratification using oxygenation in the first 24 hours of pediatric acute respiratory distress syndrome. Crit Care Med. 2018;46(4):619-624. doi: 10.1097/CCM.0000000000002958.
- 11. Centers for Disease Control and Prevention. Clinical growth charts. https://www.cdc.gov/growthcharts/clinical_charts.htm
- 12. Gwet K. Handbook of Inter-Rater Reliability: The Definitive Guide to Measuring the Extent of Agreement Among Raters. 4th ed. Advanced Analytics, LLC; 2014.
- 13. Zec S., Soriani N., Comoretto R., Baldi I. High agreement and high prevalence: the paradox of Cohen's kappa. Open Nurs J. 2017;11:211-218. doi: 10.2174/1874434601711010211.
- 14. Feinstein A.R., Cicchetti D.V. High agreement but low kappa: I. The problems of two paradoxes. J Clin Epidemiol. 1990;43:543-549. doi: 10.1016/0895-4356(90)90158-l.
- 15. Cicchetti D.V., Feinstein A.R. High agreement but low kappa: II. Resolving the paradoxes. J Clin Epidemiol. 1990;43:551-558. doi: 10.1016/0895-4356(90)90159-m.
- 16. Xie Q. Agree or disagree? A demonstration of an alternative statistic to Cohen’s kappa for measuring the extent and reliability of agreement between observers. Proceedings of the Federal Committee on Statistical Methodology Research Conference. 2013;4.
- 17. Landis J.R., Koch G.G. The measurement of observer agreement for categorical data. Biometrics. 1977;33(1):159-174.