eClinicalMedicine. 2024 Sep 13;75:102814. doi: 10.1016/j.eclinm.2024.102814

The literacy barrier in clinical trial consents: a retrospective analysis

Fatima N Mirza a, Eric Wu b, Hael F Abdulrazeq a, Ian D Connolly c, Oliver Y Tang d, Cheryl K Zogg e, Theresa Williamson c, Paul F Galamaga f, G Dean Roye a, Prakash Sampath a, Albert E Telfeian a, Abrar A Qureshi a, Michael W Groff e, John H Shin c, Wael F Asaad a, Tiffany J Libby a, Ziya L Gokaslan a, Isaac S Kohane c, James Zou b, Rohaid Ali a
PMCID: PMC11701435  PMID: 39763593

Summary

Background

Historically, the readability of consent forms in medicine has been above the average reading level of patients. This creates challenges in obtaining truly informed consent, but the implications for clinical trial participant retention are not fully explored. To address this gap, we sought to analyze clinical trial consent forms by determining their readability and its relationship with the associated trial's participant dropout rate. Additionally, we explored a potential method for simplifying these forms.

Methods

We analyzed the readability of consent forms of federally funded interventional clinical trials, which were completed in the United States on or before January 1, 2023, and were posted online and made accessible on ClinicalTrials.gov. We correlated their readability with trial dropout rates. As an exploratory analysis, a subset of these forms was simplified using a large language model, with expert medicolegal review.

Findings

Across 798 included federally funded trials, the mean (±SD) Flesch-Kincaid Grade Level of their consent forms was 12.0 ± 1.3, equivalent to a high school graduate reading level and significantly higher than the 8th grade average reading level of adults in the United States (U.S.) (P < 0.001). In risk-adjusted analyses, each additional Flesch-Kincaid Grade Level increase in a clinical trial's consent form was associated with a 16% higher dropout rate (incidence rate ratio, 1.16; 95% confidence interval, 1.12–1.22; P < 0.001). Our exploratory analysis of a simplification method showed promising results in lowering the reading level while preserving medicolegal content.

Interpretation

The average readability of informed consent forms of federally funded clinical trials exceeds the reading comprehension skills of the majority of adults in the U.S., potentially undermining clinical trial participant retention. Future work should explore the use of large language models and other tools as possible means to close this literacy barrier and potentially enhance clinical trial participation.

Funding

This research received no sources of funding. The authors have no conflicts of interest to report.

Keywords: Clinical trials, Readability, Artificial intelligence


Research in context.

Evidence before this study

A systematic review of literature published before January 1, 2023, from databases including PubMed, Scopus, and Google Scholar, identified a consistent trend: clinical trial consent forms are frequently written at reading levels exceeding the comprehension of the average U.S. adult. These studies highlight the readability gap but often lack a large-scale quantitative analysis connecting this issue to trial outcomes like participant retention.

Added value of this study

This study provides a comprehensive analysis of nearly 800 federally funded clinical trial consent forms, demonstrating a clear association between higher readability levels and increased dropout rates. Moreover, it explores the potential of AI-driven simplification using large language models (LLMs) to improve readability while retaining essential medicolegal content, offering a practical solution to this persistent problem.

Implications of all the available evidence

The evidence suggests a critical need to address the literacy barrier in clinical trial consent forms to improve participant comprehension and potentially enhance retention. AI-powered tools, particularly LLMs, hold promise as a scalable and efficient method for simplifying these documents. Future research should focus on refining these tools, assessing their impact on diverse populations, and integrating them into clinical trial workflows to promote inclusivity and ethical research conduct.

Introduction

Effective communication is vital in clinical human subject research, particularly to ensure appropriate informed consent, a process that includes the use of written consent forms that are comprehensible to participants.1 The Belmont Report of 1979 established a global standard for informed consent, emphasizing “respect for persons” and necessitating that information be presented in a manner that supports informed decision-making.2 Despite this, a notable disconnect persists: the majority of adults in the United States (U.S.) read below a 6th-grade level, yet studies indicate that human subject research consent forms across several disciplines are written, on average, at a late high school or undergraduate level, representing a chasm of over half a decade of education.1,3,4,5,6,7 Such a literacy gap is concerning given the growing evidence in clinical medicine that the readability of patient-facing literature significantly affects patient care-seeking behaviors and adherence to clinical recommendations.8,9,10

The 2019 Common Rule revision has created new opportunities for analyzing informed consent in clinical trials.11 This change in United States federal law (45 CFR 46.116 [h]) requires federally funded clinical trials to post a copy of an Institutional Review Board-approved consent form used in trial enrollment on a federal website following trial completion. As a result, ClinicalTrials.gov has become a significant repository for these consent forms.12 This growing collection of consent forms on ClinicalTrials.gov now allows for an in-depth and large-scale analysis of their readability. Moreover, by assessing these consent documents in the context of existing clinical trial information on ClinicalTrials.gov, we can analyze potential interplays between clinical trial consent form readability and participant behavior.

Thus, the objectives of this study were threefold. First, using data from ClinicalTrials.gov, we conducted a comprehensive evaluation of the readability of consent forms from federally funded interventional clinical trials. In doing so, we sought to determine how the average readability of these forms changed (if at all) over time, by type of intervention and by study design. Second, we investigated the association between the readability of these consent forms and participant behavior, specifically focusing on the correlation between the readability of a clinical trial's consent form and the trial's dropout rate. We used dropout rates as an indicator of participant engagement for this analysis. Finally, building on our group's prior work,13,14 we evaluated to what extent artificial intelligence (AI), in particular large language models (LLMs), paired with expert human oversight could serve as one possible means to simplify the reading complexity of clinical trial consent forms.

Methods

We systematically searched ClinicalTrials.gov for all interventional clinical trials completed in the United States on or before January 1, 2023. Included studies were required to have their results posted online and to have provided accessible informed consent forms. To ensure a comprehensive dataset, we did not restrict our search based on participant demographics, disease condition, or trial phase. To focus on federally funded research, studies not sponsored by the National Institutes of Health (NIH) or other U.S. federal agencies were excluded.
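
For readers who wish to reproduce the cohort selection, a query of this kind can be scripted. The sketch below is a minimal Python illustration; the endpoint and parameter names assume the publicly documented ClinicalTrials.gov v2 API, and the filters shown are illustrative rather than the study's exact retrieval pipeline.

```python
import requests

# Minimal sketch: page through completed interventional trials on
# ClinicalTrials.gov. Endpoint and parameter names assume the public v2 API;
# sponsor/location filtering would be applied afterward, per the Methods.
BASE = "https://clinicaltrials.gov/api/v2/studies"

params = {
    "query.term": "AREA[StudyType]INTERVENTIONAL",  # Essie filter expression
    "filter.overallStatus": "COMPLETED",
    "pageSize": 100,
}

studies = []
while True:
    resp = requests.get(BASE, params=params, timeout=30)
    resp.raise_for_status()
    payload = resp.json()
    studies.extend(payload.get("studies", []))
    token = payload.get("nextPageToken")
    if not token:
        break
    params["pageToken"] = token

print(f"Retrieved {len(studies)} study records")
```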

Using downloaded consent forms, we calculated readability metrics using the Readability Calculator from Online-Utility.org,15 as recommended by the National Cancer Institute.16 Chief among these was the Flesch-Kincaid Grade Level, a metric that estimates the U.S. school grade needed to understand a text based on its constituent parts, the average number of words per sentence and the average number of syllables per word. Scores correspond directly to grade levels: a score of 8.0 indicates an 8th grade reading level, while 12.0 indicates a high school graduate level. Generally, scores of 8.0 or below are considered easily readable by the general public, scores between 8.0 and 12.0 moderately difficult, and scores above 12.0 difficult for most people.17 We also collected other readability metrics, including word-level characteristics (the average number of characters per word) and document-level characteristics (the number of characters, words, sentences, and pages per consent form).
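
Because the Flesch-Kincaid Grade Level is a simple function of sentence and word counts, it can be reproduced directly. The following is a minimal Python sketch using the standard formula and a naive vowel-group syllable counter; the study itself used the Online-Utility.org calculator, so exact scores may differ slightly.

```python
import re

def count_syllables(word: str) -> int:
    """Rough syllable count: contiguous vowel groups, minus a common silent 'e'."""
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    n = len(groups)
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_kincaid_grade(text: str) -> float:
    """Standard formula: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

sample = ("You are being asked to take part in a research study. "
          "Participation is voluntary, and you may withdraw at any time.")
print(round(flesch_kincaid_grade(sample), 1))
```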

To calculate the dropout rate for each clinical trial, we divided the number of participants who did not complete the study by the number of participants initially enrolled, as recorded on ClinicalTrials.gov. To account for potential confounding, we also collected available information on participant age, participant gender, primary intervention, study design, study phase, and oncologic trial status.
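
As a simple illustration of this outcome definition (with hypothetical NCT identifiers and column names standing in for the per-trial enrollment and completion counts):

```python
import pandas as pd

# Hypothetical records; ClinicalTrials.gov participant-flow modules report
# the number of participants started and the number completed.
trials = pd.DataFrame({
    "nct_id":    ["NCT00000001", "NCT00000002", "NCT00000003"],
    "enrolled":  [120, 58, 200],
    "completed": [96, 58, 150],
})

# Dropout rate = participants who did not complete / participants enrolled.
trials["dropout_rate"] = (trials["enrolled"] - trials["completed"]) / trials["enrolled"]
print(trials)
```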

Finally, we set out to evaluate the potential of AI in streamlining the language used in clinical trial consent forms. Our objective was to determine whether an AI model could simplify the text of these forms without losing the content's medical and legal meaning. We focused on 6 key sections commonly required by the Code of Federal Regulations for clinical trial consent forms: Purpose, Benefits, Risks, Alternatives, Voluntariness, and Confidentiality. These sections are clearly delineated in clinical trial consent forms and are applicable across a wide range of clinical trial studies. We began by selecting a random sample of consent forms collected using the methods above from ClinicalTrials.gov, aiming for a sample size representative of the larger population of included documents. The 6 specified sections were manually extracted from the selected forms and then processed by the GPT-4 LLM (OpenAI, San Francisco, CA)18 with the following prompt: “While preserving content and meaning, convert this text to the average American reading level by using simpler words and limiting sentence length to 10 or fewer words.” To assess the effectiveness of the AI's simplification, we calculated the Flesch-Kincaid Grade Level and word count both before and after the AI intervention. To ensure that the simplified text maintained its legal and medical integrity, we arranged for subsequent reviews by a healthcare lawyer (P.G.) and a panel of four clinicians (F.N.M., R.A., H.A., and I.D.C.). Each reviewer independently confirmed whether the AI-modified sections remained medically and legally sound compared with their originals. Of note, this research received no sources of funding.
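
In code, this simplification step reduces to one prompt per extracted section. The sketch below uses the openai Python client with the study's published prompt; the client setup and model identifier follow current library conventions and are illustrative, not a released analysis script.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = ("While preserving content and meaning, convert this text to the "
          "average American reading level by using simpler words and "
          "limiting sentence length to 10 or fewer words.")

def simplify_section(section_text: str) -> str:
    """Send one consent-form section to GPT-4 and return the simplified text."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": PROMPT},
            {"role": "user", "content": section_text},
        ],
    )
    return response.choices[0].message.content

# Hypothetical excerpt from a Risks section.
risks_original = "Participation may involve risks that are currently unforeseeable."
print(simplify_section(risks_original))
```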

Statistics

Data were summarized using descriptive statistics: frequencies and percentages for categorical variables, and means and standard deviations (SD) for continuous variables (medians and interquartile ranges [IQR] for non-normally distributed continuous variables). Given the wide variance between federally funded clinical trials and the high percentage of clinical trials with no loss to follow-up (137 of 798 trials, or 17.2%, had a 0% dropout rate), crude and risk-adjusted (accounting for the influence of other known external variables) zero-inflated negative binomial models were used to assess the association between readability metrics and clinical trial dropout rate. Results are presented as incidence rate ratios (IRR) with corresponding 95% confidence intervals (95% CI). For the AI-simplified text, paired before-and-after nonparametric Mann–Whitney U tests were used to assess the statistical extent of simplification. For all statistical analyses, two-sided P-values <0.05 were considered significant.
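
A minimal sketch of this modeling approach in Python, assuming dropout counts with enrollment as the exposure and synthetic placeholder data; the covariate set and model options follow the study description rather than a released script.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import mannwhitneyu
from statsmodels.discrete.count_model import ZeroInflatedNegativeBinomialP

# Synthetic placeholder data; the real analysis used 798 trials and
# additional covariates (age, gender, intervention, design, phase, etc.).
rng = np.random.default_rng(0)
n = 798
trials = pd.DataFrame({
    "fk_grade": rng.normal(12.0, 1.3, n),
    "enrolled": rng.integers(20, 300, n),
})
trials["dropouts"] = rng.binomial(trials["enrolled"], 0.18)

X = sm.add_constant(trials[["fk_grade"]])
model = ZeroInflatedNegativeBinomialP(
    trials["dropouts"], X,
    exog_infl=np.ones((n, 1)),    # intercept-only zero-inflation component
    exposure=trials["enrolled"],  # models the dropout *rate* per enrollee
)
fit = model.fit(method="bfgs", maxiter=500, disp=False)
print(np.exp(fit.params["fk_grade"]))  # incidence rate ratio per FK grade

# Nonparametric before/after comparison of readability, as for the
# AI-simplified sections (illustrative values).
before = rng.normal(12.0, 1.0, 119)
after = rng.normal(5.0, 0.6, 119)
print(mannwhitneyu(before, after))
```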

Role of the funding source

This study received no funding.

Results

A total of 798 trials met our inclusion criteria (Table 1). The median enrollment was 58 participants (IQR, 28–150). The majority of trials involved participants ≥18 years (65.3%). The most common primary interventions were behavioral (39.1%) and pharmacological (29.5%). More than half (55.9%) utilized a randomized parallel group design. Among trials reporting their phase, a plurality were in Phase 2 (43.5%). The average dropout rate among included clinical trials was 18.0% (SD 19.6%).

Table 1.

Summary of U.S. federally funded clinical trials used in this study.

Study population characteristics (number of clinical trials, N = 798); values are N (%) unless noted otherwise
Study enrollment (clinical trial size), median (IQR): 58 (28–150)
Participant gender
 All comers 702 (88.0)
 Female only 66 (8.3)
 Male only 30 (3.8)
Participant age
 Child (<18 years) 53 (6.6)
 Adult (18–64 years) 119 (14.9)
 Older Adult (>64 years) 26 (3.3)
 Child and Adult 40 (5.0)
 Adult and Older Adult 521 (65.3)
 Child, Adult, and Older Adult 39 (4.9)
Primary intervention
 Drug 235 (29.5)
 Behavioral 312 (39.1)
 Biological 80 (10.1)
 Device 89 (11.2)
 Other 82 (10.3)
Study design
 Randomized (Parallel Group) 446 (55.9)
 Randomized (Crossover) 55 (6.9)
 Randomized (Other) 26 (3.3)
 Non-Randomized 77 (9.7)
 Other 194 (24.3)
Study phase
 Early Phase 15 (1.9)
 Phase 1 73 (9.2)
 Phase 1/2 37 (4.6)
 Phase 2 154 (19.3)
 Phase 2/3 10 (1.3)
 Phase 3 36 (4.5)
 Phase 4 29 (3.6)
 N/A 444 (55.6)
Oncologic trials
 Total 133 (16.7)
 By intervention:
 Behavioral 20 (2.5)
 Drug 60 (7.5)
 Procedure 7 (0.9)
 Biological 26 (3.3)
 Device 4 (0.5)
 Laboratory biomarker 4 (0.5)
 Radiation 3 (0.4)
 Other 9 (1.1)
Consent form metrics, median (IQR)
 Number of Characters 21,987 (14,938, 31,042)
 Number of Words 4,637 (3,070, 6,610)
 Number of Sentences 235 (161, 314)
 Average Number of Characters per Word 4.8 (4.7, 4.9)
 Average Number of Syllables per Word 1.7 (1.6, 1.7)
 Average Number of Words per Sentence 19.8 (18.2, 21.6)
 Number of Pages 11 (8, 15)
 Flesch-Kincaid Grade Level 12.0 (11.1, 12.8)
Outcome metric, median (IQR)
 Dropout Rate (%) 12.5 (2.9, 25.8)

Examining the readability of informed consent forms, we found the average Flesch-Kincaid Grade Level to be 12.0 (SD 1.3), equivalent to that of a high school graduate and significantly above the average American reading level of 8th grade (P < 0.001).19 The consent forms, on average, contained 24,492 characters (SD 15,540), 5139 words (SD 3299), and spanned 12.1 pages (SD 7.0). Analysis of potential changes in Flesch-Kincaid Grade Level over time revealed no significant differences (P = 0.189), with the complexity of consent forms remaining relatively stable between 2000 and 2023 (Fig. 1A). Similarly high reading levels were seen when stratifying by clinical trial primary intervention (Fig. 1B) and study design (Fig. 1C).

Fig. 1.

The readability of federally funded clinical trial consent forms: (A) changes over time (no significant difference, P = 0.189), (B) differences by primary intervention, and (C) differences by study design. Dashed lines show a 6th grade reading level; 54% of Americans have a reading proficiency below this point. Each dot represents one clinical trial consent form (n = 798).

Our analysis revealed a significant association between the Flesch-Kincaid Grade Level of consent forms and the clinical trial dropout rate (Table 2; Supplemental Table S1; Supplemental Table S3). A risk-adjusted negative binomial model that accounted for baseline differences in clinical trials across 8 parameters indicated that each additional Flesch-Kincaid Grade Level increase in a clinical trial's consent form was associated with a 16% increase in the trial's dropout rate (IRR, 1.16; 95% CI, 1.12–1.22; P < 0.001). In contrast, the number of pages did not reach statistical significance in the risk-adjusted negative binomial model in predicting a clinical trial's dropout rate (IRR, 1.01; 95% CI, 0.99–1.03; P = 0.305). This relationship held in a sensitivity analysis in which only studies with a non-zero dropout rate were assessed (Supplemental Table S2). See the Discussion section below for further considerations.

Table 2.

Association between consent form metrics and federally funded clinical trial dropout rate.

Clinical trial consent form characteristic | Unadjusted IRR [95% CI] | P-value | Risk-adjusted IRR [95% CI] | P-value
Number of pages | 1.29 (1.27–1.30) | P < 0.001 | 1.01 (0.99–1.03)∗ | P = 0.305
Flesch-Kincaid Grade Level | 1.27 (1.26–1.28) | P < 0.001 | 1.16 (1.11–1.21)+ | P < 0.001

Abbreviations: IRR, incidence rate ratio; 95% CI, 95% confidence interval.

Results taken from risk-adjusted negative binomial models that accounted for potential confounding due to baseline differences in enrollment (clinical trial size), participant gender, participant age, primary intervention (drug, behavioral, biological, device, other), study design, study phase, oncologic trial status, and, additionally, the Flesch-Kincaid Grade Level (∗) or the number of pages (+) of a clinical trial's consent form.

The incidence rate ratio represents the change in the dropout rate associated with each one-unit increase in either the Flesch-Kincaid Grade Level or the number of pages of a clinical trial's consent form. For example, for every 1-grade increase in the Flesch-Kincaid Grade Level, the associated dropout rate increased by approximately 16% according to the risk-adjusted model.
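
As a worked example of how this ratio compounds (an illustrative extrapolation under the fitted model, not a causal claim):

```python
irr = 1.16  # risk-adjusted IRR per Flesch-Kincaid grade level

# Rewriting a consent form from a 12th- to an 8th-grade reading level
# (4 grade levels lower) would multiply the expected dropout rate by:
multiplier = irr ** -4
print(f"{multiplier:.2f}")  # ~0.55, i.e., a ~45% relative reduction

# A baseline 18% dropout rate at grade 12 would then project to ~10%.
print(f"{0.18 * multiplier:.3f}")
```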

Finally, we examined the impact of an AI-mediated language simplification process paired with expert human oversight on the readability of consent forms (Table 3). Each of the 6 federally mandated sections of 119 randomly selected consent forms (approximately 15% of the original 798) was manually extracted and simplified using the GPT-4 AI language model. The original sections had an average Flesch-Kincaid Grade Level ranging from 10.33 for Alternatives to 14.5 for the Confidentiality clause, indicating a complexity level exceeding the average American reading level (P < 0.001). After simplification, the Flesch-Kincaid Grade Level significantly decreased across all sections, with Alternatives falling to a grade level of 4.48 (P < 0.001) and the Confidentiality clause to 5.70 (P < 0.001). This simplification also resulted in a notable decrease in word count across all sections (P < 0.001). Importantly, a medical and legal review of 20 randomly chosen simplified forms (approximately one-sixth of the 119-form subset) confirmed that they remained medically and legally sufficient relative to their originals.

Table 3.

Original and AI-simplified consent form readability.

Section | Original word count | GPT summary word count | P value (word count) | Original FK grade | GPT summary FK grade | P value (FK grade)
Alternatives | 56.49 | 34.70 | <0.001 | 10.33 | 4.48 | <0.001
Benefits | 67.12 | 40.02 | <0.001 | 11.29 | 4.74 | <0.001
Confidentiality clause | 165.90 | 86.40 | <0.001 | 14.50 | 5.70 | <0.001
Purpose | 122.69 | 69.64 | <0.001 | 12.67 | 4.76 | <0.001
Risks | 381.84 | 160.38 | <0.001 | 12.14 | 5.11 | <0.001
Voluntary clause | 65.57 | 38.39 | <0.001 | 10.63 | 4.48 | <0.001

Original and AI-simplified clinical trial sections, representing data on 6 sections each taken from 119 clinical trial consent forms randomly selected from the original 798 analyzed in this study.

Discussion

In the domain of clinical trials, informed consent is imperative, and the written consent form is a critical instrument in this process. It marks the initial step in a participant's involvement in research that seeks to extend the boundaries of current medical knowledge. The clarity of these forms is of utmost importance, perhaps now more than ever. As stated by members of the first WHO Global Clinical Trials Forum, which took place in November 2023, trust in science has been challenged by the COVID-19 pandemic. Accordingly, every effort must be made by the clinical trial community to seek participant engagement and build inclusivity in trial design.20 Unfortunately, our analysis of clinical trial consent forms, which to the best of our knowledge is the largest to date, reveals that considerable progress is still needed to achieve these aims.

Across nearly 800 federally funded clinical trials, we found that their consent forms are, on average, a dozen pages long and written at the reading comprehension level of a high school graduate, despite over half of adults in the U.S. having a reading comprehension level at or below the 6th grade.5 In fact, a handful of studies had considerably longer consent forms, from 40 to 50 and as high as 80 pages altogether, which could serve as a barrier to facilitating truly informed consent. This trend has persisted across over two decades of federally funded clinical trial research and is pervasive across clinical trials of varying study designs and forms of intervention. Importantly, we found an association between the reading complexity of a clinical trial's consent form, as measured by Flesch-Kincaid Grade Level, and the trial's dropout rate. This association remained significant even when controlling for a variety of external factors, including differences in trial enrollment, subject age, subject gender, primary intervention, study design, study phase, oncologic trial status, and the number of pages of its consent form.

The observed association between consent form complexity and participant dropout rates could be attributed to several factors. More complex consent forms may lead to poorer initial understanding of the trial's requirements and procedures, potentially resulting in participants enrolling without fully grasping what will be expected of them. This misalignment of expectations could lead to higher dropout rates as the trial progresses. Participants who struggle to understand complex consent forms may be more likely to experience surprise or dissatisfaction with study requirements, side effects, or procedures as the trial progresses. While the complexity of the consent form might serve as a proxy for the overall complexity of the trial itself, our analysis controlled for several trial characteristics, suggesting that consent form readability may have an independent effect on retention. Complex language in consent forms could erode trust between researchers and participants, particularly among populations with lower health literacy. This erosion of trust might manifest as increased dropout rates over time. Additionally, participants who struggle to understand the initial consent form may feel less empowered to ask questions or seek clarification throughout the trial, leading to disengagement and eventual dropout. While these hypotheses are speculative and require further investigation, they underscore the importance of clear, accessible communication in clinical research, not just at the point of enrollment but throughout the entire trial process.

The implications of these findings extend beyond participant comprehension. Poor representation of the general American patient population in clinical trials, particularly in terms of age, sex, and race, has been well-documented and poses significant consequences for the applicability of trial results to real-world clinical practice.21 This issue is especially acute for trials involving rare clinical entities, where the dropout of even a few participants can markedly affect the trial's outcomes and the robustness of its findings.22 Furthermore, the FDA's increasing emphasis on diversity action plans for trial planning signifies that these concerns may soon become integral to regulatory compliance.23 Therefore, the need to simplify consent forms and make clinical trials more inclusive and representative is not only a matter of ethical responsibility and scientific integrity but is also becoming a regulatory necessity.

Prior efforts have been undertaken to address the complexity of clinical trial consent form language, ranging from the utilization of standardized templates with simplified verbiage to intensive manual re-writing, among others.1,24 However, our data reveal a persistent (Fig. 1A) and widespread (Fig. 1B and C) pattern of clinical trial consent forms written with prohibitive complexity for the majority of U.S. adults, implicating shortcomings in the widespread adoption of existing language simplification methods. As evidenced in the current study, the use of AI, in particular LLMs, represents an effective method to simplify the language of clinical trial consent forms while preserving their medical and legal content. Indeed, the real-world experience of a number of the authors at Rhode Island's largest healthcare system with a simplified universal surgical consent form, re-written with an LLM under expert human clinical and legal oversight, mirrors these findings.13 Moreover, given the rapidly growing public awareness and adoption of LLMs,25 the “activation energy” required to utilize these easily accessible tools to simplify clinical trial consent form language may be lower than that of other methods seeking to do the same. Additionally, the application of AI extends beyond merely revising existing forms. The AI-human expert collaboration provides a scalable model for the integration of AI in clinical and research contexts. Given the swift pace of clinical trial evolution, there is an opportunity for dynamic, AI-facilitated development of research consent forms, representing an exciting future area of study.

It is important to acknowledge our study's limitations. First, our analysis was confined to federally funded clinical trials within the United States, which may not fully represent the diversity of consent forms used in privately funded or international studies. This limitation could affect the generalizability of our findings to a broader range of clinical trials. Second, while we employed the Flesch-Kincaid Grade Level and other readability metrics, these tools have inherent limitations and may not fully capture the complexity of medical and legal language in consent forms.26 Additionally, our approach to simplifying consent forms with AI, particularly LLMs, might not account for the nuanced understanding and cultural sensitivities required in patient communication. The AI's performance in this context was validated by human experts, but the long-term effectiveness and patient comprehension of these AI-simplified forms in real-world settings, including in mitigating participant dropout, remain to be fully assessed. Furthermore, our study primarily focused on the readability of consent forms without delving deeply into other aspects of the consent process, such as participant comprehension or the effectiveness of consent discussions with healthcare providers. Our study also lacked data on initial refusal rates and participants' comprehension of consent forms, which could provide valuable insights into the relationship between informed consent quality and participant retention.

Lastly, while we observed an association between consent form complexity and trial dropout rates, this correlation does not necessarily imply causation, and other unmeasured factors may contribute to participant dropout. Our analysis does not distinguish between different types of attrition, such as active withdrawal and loss to follow-up, which may have distinct underlying causes and potentially different relationships to consent form readability. For instance, active withdrawal might be more directly linked to a participant's comprehension of the study, while loss to follow-up could be influenced by a broader range of factors, including socioeconomic barriers or changes in health status. Additionally, our analysis does not account for potential variations in how individual trials report completion rates, particularly regarding the inclusion or exclusion of deaths, which may affect the interpretation of dropout rates across different types of studies. We acknowledge that trial complexity and duration may correlate with both consent form readability and dropout rates, potentially confounding our main findings and warranting further investigation in future studies. Acknowledging these limitations is essential for contextualizing our findings and guiding future research in this area.

In conclusion, this study highlights a significant mismatch between the reading level of consent forms and the average American's literacy, with analysis of nearly 800 federally funded clinical trials revealing that these forms are predominantly written well above the comprehension level of over half of the American population. In a risk-adjusted model, we found that higher reading complexity of these documents was associated with higher rates of participant dropout, underscoring the need for simpler, more accessible consent forms to potentially enhance participant engagement and inclusivity in clinical research. Finally, our research demonstrates the potential of AI, particularly LLMs, in successfully simplifying these forms. By employing LLMs to reduce complexity while preserving medicolegal sufficiency, and coupling this with human expert oversight, we envision a promising path forward in improving participant understanding and, potentially, participation in clinical research.

Contributors

Fatima N. Mirza: Study Design, Conceptualization, Formal Analysis, Writing–Original Draft, Writing–Review and Editing; Eric Wu: Data Curation, Formal Analysis, Validation; Hael F. Abdulrazeq: Data Collection, Data Interpretation, Writing–Review and Editing; Ian D. Connolly: Data Collection, Data Interpretation, Writing–Review and Editing, Figures; Oliver Y. Tang: Data Collection, Data Interpretation, Writing–Review and Editing, Figures; Cheryl K. Zogg: Formal Analysis, Figures; Theresa Williamson: Data Interpretation, Writing–Review and Editing; Paul F. Galamaga: Data Interpretation, Writing–Review and Editing; G. Dean Roye: Data Interpretation, Writing–Review and Editing; Prakash Sampath: Data Interpretation, Writing–Review and Editing; Albert E. Telfeian: Data Interpretation, Writing–Review and Editing; Abrar A. Qureshi: Data Interpretation, Writing–Review and Editing; Michael W. Groff: Data Interpretation, Writing–Review and Editing; John H. Shin: Data Interpretation, Writing–Review and Editing; Wael F. Asaad: Data Interpretation, Writing–Review and Editing, Supervision; Tiffany J. Libby: Data Interpretation, Writing–Review and Editing; Ziya L. Gokaslan: Data Interpretation, Writing–Review and Editing; Isaac S. Kohane: Data Interpretation, Writing–Review and Editing; James Zou: Data Interpretation, Writing–Review and Editing; Rohaid Ali: Study Design, Conceptualization, Formal Analysis, Writing–Original Draft, Writing–Review and Editing, Supervision, Resources.

Fatima N. Mirza, Eric Wu, Hael Abdulrazeq, James Zou, and Rohaid Ali accessed and verified the underlying data.

Data sharing statement

Additional data analyzed for this research project will be available on reasonable request via email to the corresponding author.

Declaration of interests

The authors have no conflicts of interest to disclose.


Appendix A

Supplementary data related to this article can be found at https://doi.org/10.1016/j.eclinm.2024.102814.

Supplementary Figure and Tables: mmc1.docx (26.8 KB)

References
