Author manuscript; available in PMC: 2025 Jun 4.
Published in final edited form as: J Clin Epidemiol. 2024 Aug 27;175:111507. doi: 10.1016/j.jclinepi.2024.111507

Quantitative bias analysis methods for summary-level epidemiologic data in the peer-reviewed literature: a systematic review

Xiaoting Shi a, Ziang Liu b, Mingfeng Zhang c, Wei Hua c, Jie Li c, Joo-Yeon Lee d, Sai Dharmarajan e, Kate Nyhan a,f, Ashley Naimi g, Timothy L Lash g, Molly M Jeffery h,i, Joseph S Ross j,k,l, Zeyan Liew a, Joshua D Wallach g,*
PMCID: PMC12135099  NIHMSID: NIHMS2085515  PMID: 39197688

Abstract

Objectives:

Quantitative bias analysis (QBA) methods evaluate the impact of biases arising from systematic errors on observational study results. This systematic review aimed to summarize the range and characteristics of QBA methods for summary-level data published in the peer-reviewed literature.

Study Design and Setting:

We searched MEDLINE, Embase, Scopus, and Web of Science for English-language articles describing QBA methods. For each QBA method, we recorded key characteristics, including applicable study designs, bias(es) addressed, bias parameters, and publicly available software. The study protocol was preregistered on the Open Science Framework (https://osf.io/ue6vm/).

Results:

Our search identified 10,249 records, of which 53 were articles describing 57 QBA methods for summary-level data. Of the 57 QBA methods, 53 (93%) were explicitly designed for observational studies, and 4 (7%) for meta-analyses. There were 29 (51%) QBA methods that addressed unmeasured confounding, 19 (33%) misclassification bias, 6 (11%) selection bias, and 3 (5%) multiple biases. Thirty-eight (67%) QBA methods were designed to generate bias-adjusted effect estimates and 18 (32%) were designed to describe how bias could explain away observed findings. Twenty-two (39%) articles provided code or online tools to implement the QBA methods.

Conclusion:

In this systematic review, we identified a total of 57 QBA methods for summary-level epidemiologic data published in the peer-reviewed literature. Future investigators can use this systematic review to identify different QBA methods for summary-level epidemiologic data.

Keywords: Quantitative bias analysis, Epidemiological biases, Systematic bias, Confounding, Information bias, Selection bias

Plain Language Summary

Quantitative bias analysis (QBA) methods can be used to evaluate the impact of biases on observational study results. However, little is known about the full range and characteristics of available methods in the peer-reviewed literature that can be used to conduct QBA using information reported in manuscripts and other publicly available sources without requiring the raw data from a study. In this systematic review, we identified 57 QBA methods for summary-level data from observational studies. Overall, there were 29 methods that addressed unmeasured confounding, 19 that addressed misclassification bias, six that addressed selection bias, and three that addressed multiple biases. This systematic review may help future investigators identify different QBA methods for summary-level data.

1. Introduction

Randomized controlled trials (RCTs) are often considered the gold standard for estimating causal effects in clinical research. However, RCTs are not feasible for all clinical questions (eg, when randomization is not ethical or when estimating treatment effects in populations not included in efficacy trials), often have strict inclusion and exclusion criteria, face recruitment and retention difficulties, and take a long time to complete [1]. These limitations and operational challenges, which can lead to higher costs and lower generalizability to real-world settings, highlight the important role of observational studies [2]. Although these study designs can overcome some of the challenges faced by RCTs, they are more susceptible to systematic errors (ie, uncontrolled confounding, misclassification, and selection bias), which can contribute to the uncertainty of a study’s results [3]. Therefore, analytical methods are needed to help assess the impact of systematic errors on the findings from observational studies.

Quantitative bias analysis (QBA) methods estimate the direction, magnitude, and uncertainty resulting from systematic errors in a study, and can be used to explore how sensitive study findings are to assumptions and bias parameters [4,5]. QBA methods, which are often classified across 6 categories—simple sensitivity analysis, multidimensional analysis, probabilistic analysis, direct bias modeling and missing data methods, Bayesian analysis, and multiple bias modeling—can be used to estimate what the observed association from a study would have been in the absence of systematic errors (Table 1) [4,5]. Although numerous QBA methods have been published in the literature, there are several challenges that have limited the widespread application of QBA methods in observational studies. First, some QBA methods require more extensive statistical and programming expertise [4,6–8]. Second, it may be difficult to assign reasonable values to the bias parameters and priors for QBA methods. Third, some QBA methods can only be conducted using the individual participant-level data from a study [4]. However, certain QBA methods can be conducted using simple equations and summary-level data based on published study results, including 2-by-2 contingency tables, effect estimates and corresponding 95% CIs, bias parameters from the literature, assumptions, and educated guesses [4]. These QBA methods for summary-level data may be more straightforward to include as sensitivity analyses in observational studies. However, little is known about the full range and characteristics of available QBA methods in the peer-reviewed literature that only require summary-level data.
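As a concrete illustration of the kind of simple, summary-level calculation described above, the sketch below applies a Bross-type external adjustment for a single binary unmeasured confounder. The function name and all parameter values are illustrative assumptions, not drawn from this review, and the exact formula varies across the identified methods.

```python
def bias_adjusted_rr(rr_obs, p1, p0, rr_cd):
    """Simple sensitivity analysis for a single binary unmeasured confounder.

    rr_obs : observed exposure-outcome risk ratio
    p1     : assumed prevalence of the confounder among the exposed
    p0     : assumed prevalence of the confounder among the unexposed
    rr_cd  : assumed confounder-outcome risk ratio
    """
    # Bias factor comparing the exposed and unexposed groups
    bias_factor = (p1 * (rr_cd - 1) + 1) / (p0 * (rr_cd - 1) + 1)
    return rr_obs / bias_factor

# With an observed RR of 2.0 and illustrative bias parameters,
# the bias-adjusted RR moves toward the null (here, to about 1.45):
rr_adj = bias_adjusted_rr(2.0, p1=0.6, p0=0.3, rr_cd=3.0)
```

Because one fixed value is assigned to each bias parameter, a single bias-adjusted estimate is returned; repeating the calculation over a grid of parameter values would correspond to a multidimensional analysis.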

Table 1.

Common classification system for quantitative bias analysis methods (adapted from previous sources [4,6])

Classification Assignment of bias parameters Number of biases accounted for Output
Simple sensitivity analysis One fixed value assigned to each bias parameter One at a time Single bias-adjusted effect estimate
Multidimensional analysis More than 1 value assigned to each bias parameter One at a time Range of bias-adjusted effect estimates
Probabilistic analysis Probability distributions assigned to each bias parameter One at a time Frequency distribution of bias-adjusted effect estimates
Direct bias modeling and missing data methods Estimate and variance obtained from information internal or external to dataset One at a time Distribution of bias-adjusted effect estimates
Bayesian analysis Probability distributions assigned to each bias parameter Multiple biases at a time Distribution of bias-adjusted effect estimates
Multiple bias modeling Probability distributions assigned to each bias parameter Multiple biases at a time Frequency distribution of bias-adjusted effect estimates
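To make the distinction between the simple and probabilistic rows of Table 1 concrete, the sketch below runs a Monte Carlo version of the same single-confounder adjustment: instead of one fixed value per bias parameter, each parameter is drawn from an assumed distribution, yielding a frequency distribution of bias-adjusted estimates. All distributions and values here are illustrative assumptions.

```python
import random
import statistics

def probabilistic_bias_analysis(rr_obs, n_iter=10000, seed=42):
    """Probabilistic bias analysis sketch for a single binary unmeasured
    confounder: sample bias parameters from assumed distributions and
    summarize the resulting distribution of bias-adjusted risk ratios."""
    rng = random.Random(seed)
    adjusted = []
    for _ in range(n_iter):
        p1 = rng.uniform(0.4, 0.7)     # confounder prevalence, exposed
        p0 = rng.uniform(0.2, 0.4)     # confounder prevalence, unexposed
        rr_cd = rng.uniform(1.5, 4.0)  # confounder-outcome risk ratio
        bias = (p1 * (rr_cd - 1) + 1) / (p0 * (rr_cd - 1) + 1)
        adjusted.append(rr_obs / bias)
    adjusted.sort()
    # Report the median and a 95% simulation interval of adjusted RRs
    return {"median": statistics.median(adjusted),
            "2.5th": adjusted[int(0.025 * n_iter)],
            "97.5th": adjusted[int(0.975 * n_iter)]}
```

Bayesian and multiple bias modeling approaches extend this idea by combining prior distributions with the data likelihood or by chaining adjustments for several biases at a time.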

To address these knowledge gaps, we conducted a systematic review to comprehensively identify and summarize QBA methods for summary-level data from observational studies that have been proposed in the peer-reviewed literature.

2. Materials and methods

This review was reported following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses 2020 statement (Appendix 1) [9]. We preregistered our study protocol on the Open Science Framework (https://osf.io/ue6vm/) and have included a document outlining the dates of our protocol deviations before journal submission (eText Appendix 2).

2.1. Literature search and study selection

Working with an experienced librarian (KN), we developed a systematic literature search capturing the broad concepts of bias analysis and epidemiologic methods. The full search strategy, which prioritized sensitivity over specificity, is provided in Appendix 2 (eTable 1). On January 10, 2022, a research librarian (KN) performed a comprehensive search of multiple databases: MEDLINE (Ovid ALL, from 1949), Embase (Ovid, from 1974), Scopus, and Web of Science Core Collection as licensed by Yale University. No date limit was applied. The search retrieved a total of 13,356 records, which we pooled in EndNote (https://endnote.com/), deduplicated, and uploaded to Covidence (https://covidence.org/). On October 14, 2022, 2702 additional records were retrieved through backward reference chaining. At least two independent investigators (XS, ZLiu, and/or JDW) screened records at the title-abstract and then full-text level. All uncertainties were discussed and reviewed by two additional authors (ZLiew and JDW).

2.2. Eligibility criteria

Articles were considered eligible for inclusion if they were peer-reviewed English-language publications describing, evaluating, and/or comparing QBA methods for summary-level data from observational studies (cohort, case-control, cross-sectional, single-arm with or without external controls, and quasi-experimental designs) or meta-analyses of observational studies, with no date limits applied. We included methodological articles that described significant or slight modifications of previously published QBA methods focused on unmeasured confounding, misclassification (information bias), and selection bias. We excluded all conference abstracts, corrigenda, and non-peer-reviewed articles. Analytical methods that were designed to address biases but were not defined as QBA methods were excluded (ie, inverse probability weighting, marginal structural models, g-estimation, covariate regression adjustment, propensity scores, missing data imputation, negative controls, instrumental variable analyses, restriction, and mediation) [5]. We further excluded articles that only applied QBA methods in primary or sensitivity analyses but did not propose new methods or modifications of existing methods, since a previous systematic review already focused on the application of QBA in observational studies [5]. QBA methods that required individual participant-level data (ie, record-level data, raw data) were also excluded.

2.3. Data collection

For all eligible articles, two independent investigators (XS and ZLiu) abstracted the following article characteristics: study title, first author, publication year, and digital object identifier.

For each eligible QBA method for summary-level data, we recorded the following study characteristics: name of the method; applicable study design scenarios (ie, cohort only, case-control only, cohort and case-control, or meta-analyses); sources of bias(es) addressed (ie, unmeasured confounding, misclassification bias, selection bias, or multiple biases); bias parameters required to conduct the analysis; required data format for the exposure, confounder, and outcome (ie, categorical, continuous, time-to-event/survival, multiple data types, or unclear); effect measure of interest (ie, ratio measures [eg, risk ratio, odds ratio, rate ratio, hazard ratio] and/or absolute measures [eg, mean difference, risk difference]); output and type of output obtained from each method (ie, explain away [ie, whether the observed exposure-outcome relationship is explained away by the bias] or a corrected effect estimate); stated data assumptions and additional required features to implement each method; and the availability of publicly available software, tools, or websites to conduct the analyses. For study design, effect measure of interest, and stated data assumptions, we only recorded the information explicitly mentioned or used by the authors. Next, we recorded the main formulas and any explicitly mentioned considerations relevant to each QBA method. We then determined the relevant interpretation of the output of each method. Each article describing the eligible QBA methods was then reviewed for explicit discussions regarding the key similarities and differences between the eligible QBA methods. Last, we reviewed a prominent textbook on QBA methods to determine which of the identified methods were referenced and/or explained [4]. All data collection tools were pilot tested (eTable 2 in Appendix 2) and abstractions were reviewed and arbitrated by two reviewers (ZLiew and JDW).

2.4. Analyses

Eligible QBA methods were classified into previously developed categories: simple sensitivity analysis, multidimensional analysis, probabilistic analysis, Bayesian analysis, direct bias modeling and missing data methods, and multiple bias modeling (Table 1) [4]. Key characteristics were summarized using descriptive statistics.

2.5. QBA method clusters

We chronologically grouped clusters of QBA methods that addressed the same systematic errors and whose authors noted that they were derivations or modified versions of previously developed methods. Methods that only mentioned or compared previous methods, but neither gave rise to nor were derived from other methods, were not considered. Next, we classified the QBA methods based on their study characteristics (Appendix 3): study design (ie, cohort study, case-control study, other observational study, or meta-analysis), bias type (ie, unmeasured confounding, misclassification bias, or selection bias), result of interest (ie, whether the goal was to explain away or to bias-adjust the observed effect estimates), and the exposure, outcome, and confounder data types. Appendix 4 provides a demonstration of how Appendix 3 could be used to identify potential QBA methods.

3. Results

3.1. Study selection

Of the 16,058 records that were identified (Figure), 5,809 were excluded as duplicates, leaving 10,249 articles for initial screening. We excluded 9,662 articles based on title and abstract. Among the 587 full-text articles assessed for eligibility, 534 articles were excluded, mostly because they focused on non-QBA methods (eg, propensity scores and other approaches: 218, 41%) or described QBA methods for individual participant-level data (170, 32%). We were left with 53 articles describing 57 QBA methods that met the inclusion criteria. Appendix 3 contains detailed supplementary tables that outline any explicitly described assumptions, required bias parameters, formulas, and characteristics necessary to interpret the results from QBA methods.

Figure. PRISMA flowchart. PRISMA, Preferred Reporting Items for Systematic Reviews and Meta-Analyses; QBA, quantitative bias analysis.

3.2. Description of included QBA methods

The 53 eligible articles described 57 QBA methods for summary-level data (Table 2). Of these, we classified 35 (61%) as simple sensitivity analysis methods, 8 (14%) as multidimensional analysis methods, 4 (7%) as Bayesian analysis methods, 3 (5%) as probabilistic analysis methods, 3 (5%) as multiple bias modeling methods, and 1 (2%) as direct bias modeling (classification scheme in Table 1). There were 3 (5%) methods that were classified as either simple or multidimensional analysis methods because it was possible to assign one or multiple values to their bias parameters. Overall, 21 (37%) methods were referenced in a QBA textbook [4], of which 11 (52%) were also described in detail.

Table 2.

Summary of quantitative bias analysis methods for summary-level data (n = 57 methods)

Study characteristics n %
Publication year
 Median (IQR) 2007 (1993–2018)
 Full range 1959–2021
Method classification
 Simple sensitivity analysis 35 61
 Multidimensional analysis 8 14
 Bayesian analysis 4 7
 Probabilistic analysis 3 5
 Multiple bias modeling 3 5
 Direct bias modeling and missing data methods 1 2
 Simple or multidimensional analysis 3 5
Applicable study scenarios
 Observational study designs only 53 93
  Cohort only 7 13
  Case-control only 13 25
  Cohort and case-control 32 60
  Other observational studies 1 2
 Meta-analyses 4 7
Sources of bias addressed
 Unmeasured confounding 29 51
  Single confounder only 22 76
  Single and multiple confounders 7 24
 Misclassification bias 19 33
  Exposure misclassification 13 68
   Differential only 1 8
   Nondifferential only 6 46
   Both differential and nondifferential 6 46
  Confounder misclassification 2 11
   Differential only 0 0
   Nondifferential only 2 100
   Both differential and nondifferential 0 0
  Outcome misclassification 5 26
   Differential only 1 20
   Nondifferential only 2 40
   Both differential and nondifferential 2 40
 Selection bias 6 11
 Multiple biases 3 5
Referenced by prominent QBA textbooka
 Not referenced 36 63
 Referenced but not explained 10 18
 Referenced and explained 11 19
Exposure data type
 Categorical 47 82
 Continuous only 0 0
 Multiple data types 10 18
Outcome data type
 Categorical 37 65
 Continuous only 2 4
 Time-to-event only 0 0
 Multiple data types 18 32
Confounder data type
 Categorical 29 51
 Continuous only 0 0
 Multiple data types 14 25
 Unclear or not applicable 14 25
Effect estimates
 Ratio measuresb 42 74
 Difference measures 1 2
 Both 11 19
 Otherc 2 4
 Uncleard 1 2
Result of interest
 Corrected estimates 38 67
 Explain away 18 32
 Bias illustrationd 1 2
Recommended software/tools
 One tool 16 28
  Code/software package (R, SAS) only 10 63
  Online tools only 1 6
  Excel only 5 31
 Multiple tools 6 11
 No tool 35 61

QBA, quantitative bias analysis.

a

Fox et al 2021, Applying Quantitative Bias Analysis to Epidemiologic Data (second edition).

b

Ratio measures include relative risk, rate ratio, odds ratio, and hazard ratio; difference measures include mean difference, risk difference and attributable fraction.

c

Two methods work for regression coefficients.

d

This method uses bias plots to illustrate the potential range of bias, and has an unclear result of interest.

There were 53 (93%) QBA methods that were explicitly described as being suitable to use for observational studies and 4 (7%) for meta-analyses (Table 2).

3.3. Sources of bias

There were 29 (51%) QBA methods designed to address unmeasured confounding, of which 22 (76%) were for studies that focused on examining a single unmeasured confounder (Table 2). There were 19 (33%) methods for misclassification bias, of which 13 (68%) were for exposure misclassification, 2 (11%) for confounder misclassification, and 5 (26%) for outcome misclassification. There were 6 (11%) methods for selection bias and 3 (5%) for multiple biases at a time.

3.4. Data types and effect estimates

There were 47 (82%) methods explicitly designed for studies where the exposure can be treated as a categorical variable and 10 (18%) for studies where the exposure can be treated as either categorical or continuous (ie, multiple data types) (Table 2). There were 37 (65%) methods that were explicitly designed to accommodate only categorical outcome variables and 18 (32%) methods were described for multiple data types.

There were 42 (74%) QBA methods explicitly designed for studies with only ratio measures, 1 (2%) for only difference measures, and 11 (19%) for studies with both ratio and difference measures. Two-thirds (38, 67%) of the methods were designed to generate bias-adjusted effect estimates and 18 (32%) to describe how bias could fully explain away observed findings (ie, to determine whether bias could shift non-null findings to the null).

3.5. Software, tools, and code

Among the 53 articles describing the 57 QBA methods, 22 (39%) provided publicly available supplementary code or tools to implement the QBA methods; three noted that their code was available upon request.

3.6. QBA method clusters

We identified two distinct clusters of QBA methods with the same fundamental form: confounding methods derived from Cornfield 1959 and Bross 1966 (15 [52%] of the 29 methods for unmeasured confounding) and matrix correction methods for misclassification bias (6 [32%] of the 19 methods for misclassification bias) (eTables 3 and 4 in Appendix 2) [10,11].

4. Discussion

In this systematic review, we identified 53 articles describing 57 QBA methods for summary-level data from observational studies in the peer-reviewed literature. Over 50% of these methods were designed to address unmeasured confounding, whereas only 11% were for selection bias. Approximately two-thirds of the QBA methods for summary-level data were designed to generate bias-adjusted effect estimates and one-third were designed to describe how bias can explain away the observed findings from a study. Although this systematic review can be used to identify different QBA methods for summary-level epidemiologic data, investigators should carefully review the original manuscripts to ensure that any assumptions are fulfilled, that the necessary bias parameters are available and accurate, and that all interpretations and conclusions are made with caution.

We found that most QBA methods for summary-level epidemiologic data were for unmeasured confounding. In fact, over half of the QBA methods for confounding identified outlined that they were derived from Cornfield (1959), who described methods to assess the impact of uncontrolled confounding when evaluating the association between smoking and lung cancer in 2-by-2 tables [10], and Bross (1966), who introduced a framework to analyze the bias due to a binary unmeasured confounder by relating an observed effect estimate to a bias-adjusted effect estimate (ie, the size rule/array approach) [11]. Subsequent approaches that build upon these methods account for additional types of effect estimates [12,13], data types [14], and assumptions [15,16]. More recent QBA methods, including the E-value, require fewer assumptions and specifications, and allow for the estimation of the minimum strength of association, on the relative risk scale, that an unmeasured confounder would need to have with both the exposure and outcome to fully explain away observed findings [17,18]. Since 2017, the E-value method has been extended to further minimize the number of required assumptions [17,18].
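The E-value described above has a simple closed form: for an observed risk ratio RR > 1, E-value = RR + sqrt(RR * (RR - 1)), with protective estimates inverted first. A minimal sketch (function name ours):

```python
import math

def e_value(rr):
    """E-value for an observed risk ratio: the minimum strength of
    association, on the risk ratio scale, that an unmeasured confounder
    would need with both the exposure and the outcome to fully explain
    away the observed association."""
    if rr < 1:
        rr = 1 / rr  # protective estimates are inverted first
    return rr + math.sqrt(rr * (rr - 1))

# An observed RR of 2.0 yields an E-value of about 3.41: a confounder
# associated with both exposure and outcome by RR >= 3.41 could, on its
# own, shift the observed association to the null.
ev = e_value(2.0)
```

Analogous formulas exist for confidence interval limits and other effect measures; the original publications [17,18] should be consulted before use.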

Evidence suggests that many epidemiologic studies conducting bias analyses evaluate how strong a potential unmeasured confounder would need to be to fully explain away an observed non-null effect estimate [5]. Unlike QBA methods that generate bias-adjusted effect estimates, which can be compared with the crude estimates from a study to determine the potential magnitude and direction of the bias [4,5], explain-away methods may not always be as informative (eg, when the magnitude of the effect estimate in a study is large or when the objective is to determine whether potential confounders are likely to change effect estimates by a specific amount) [5]. Given that many of the related methods for unmeasured confounding require relatively few assumptions and parameters, and have online tools to facilitate analyses [17–20], investigators should carefully consider the characteristics of their studies before determining whether it is more appropriate to measure the potential magnitude and direction of unmeasured confounding or describe how unmeasured confounding could explain away observed findings.

In our study, a third of the QBA methods for summary-level data that we identified were for misclassification bias and approximately 11% were for selection bias. Many QBA methods for misclassification fall under the matrix correction cluster of methods and can be used to adjust for the effects of nondifferential or differential misclassification using summary-level data from contingency tables. The earliest approach that we identified was the matrix correction method from Barron (1977) [21], which can be used to evaluate the effect of nondifferential misclassification in studies with categorical exposures and outcomes. Subsequent methods have extended this approach to accommodate differential misclassification bias, matched data, arbitrary 2-way tables [22], and multilevel exposures [23]. Given the availability of methods for misclassification bias, it may not be surprising that misclassification bias is modeled more often than selection bias in epidemiologic studies [5]. Selection bias is often considered more challenging to understand than confounding or misclassification bias [24], with parameters that may not be as easy to identify [5]. These findings highlight the need for greater guidance on the approaches, assumptions, required parameters, and interpretations of QBA methods for selection bias.
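The matrix-style back-calculation described above can be sketched for the simplest case: nondifferential exposure misclassification in a 2-by-2 table, given assumed sensitivity and specificity of exposure classification. The function name and example counts below are illustrative, not drawn from the review.

```python
def misclassification_corrected_or(a, b, c, d, se, sp):
    """Matrix-style correction for nondifferential exposure misclassification
    in a 2-by-2 table (a, b = observed exposed/unexposed cases;
    c, d = observed exposed/unexposed controls), given an assumed
    sensitivity (se) and specificity (sp) of exposure classification."""
    n_cases, n_controls = a + b, c + d
    # Back-calculate the true cell counts from the observed ones:
    # observed exposed = se * true_exposed + (1 - sp) * true_unexposed
    A = (a - (1 - sp) * n_cases) / (se + sp - 1)      # corrected exposed cases
    C = (c - (1 - sp) * n_controls) / (se + sp - 1)   # corrected exposed controls
    B, D = n_cases - A, n_controls - C
    if min(A, B, C, D) <= 0:
        raise ValueError("bias parameters imply impossible (negative) cell counts")
    return (A * D) / (B * C)
```

With perfect classification (se = sp = 1) the function returns the crude odds ratio; with imperfect classification the corrected odds ratio for a non-null association typically moves away from the null, consistent with nondifferential misclassification biasing toward the null.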

For this review, we generated detailed supplementary tables that outline any explicitly described assumptions, required bias parameters, formulas, and characteristics necessary to interpret the results from QBA methods. Previous studies have highlighted that one of the possible reasons for the limited application of the QBA methods in the epidemiologic literature is the fact that investigators may not be aware of the methods that are most straightforward to conduct [5,6]. We hope that our review will help researchers identify methods that may be appropriate for their studies, including those with publicly available code or online tools. However, it is worth noting that not all assumptions and parameters are explicitly specified in the manuscripts describing the identified QBA methods. This is particularly concerning because it can lead to the misuse and misinterpretation of QBA analyses. Moving forward, it is crucial that manuscripts describing QBA methods clearly outline their assumptions and required parameters. Furthermore, anyone considering using QBA methods, including those identified by this review, should carefully review the original manuscripts to ensure the approach is appropriate given the study characteristics, that any assumptions are fulfilled, that the necessary bias parameters are available and accurate, and that interpretations and conclusions are made with caution [6]. According to previous resources describing good practices for QBA, medium or large studies (ie, those with sufficient sample size and small standard errors) that consider alternative hypotheses and draw inferences are well-suited for bias analysis [6]. Multiple bias analysis methods are particularly important when a study is focused on making policy recommendations [6]. For additional explicit guidance on how and when to apply QBA methods, authors should consult the QBA textbook and other resources [6].

This study has a few limitations. First, the terminology used to describe various biases and bias analysis methods is largely unstandardized, which can make it difficult to identify articles developing QBA methods. However, we conducted a comprehensive search, with broad concepts across multiple databases, and performed reference chaining. Second, we restricted our search for QBA methods to the peer-reviewed literature, which did not include QBA methods described in preprints, conference abstracts, working papers, dissertations, or textbooks. Third, we did not include QBA methods that can only be conducted using individual participant-level data. Fourth, we relied on the information explicitly described in the manuscripts for each QBA method. However, it is possible that QBA methods could be extended to accommodate different study designs and data formats that are not described in the articles. Therefore, the information reported in the supplementary tables describing the QBA methods could change based on more comprehensive statistical evaluations. Fifth, our review does not provide information outlining the most appropriate QBA method for specific scenarios. Although the appendix tables describing each method include information on the strengths and limitations identified in studies comparing or commenting on the QBA methods, authors selecting QBA methods should carefully consider the stated and unstated assumptions, the feasibility of identifying required parameters, and the actual performance of the methods.

5. Conclusion

In this systematic review, we identified a total of 57 QBA methods for summary-level epidemiologic data published in the peer-reviewed literature. Future investigators can use this review to identify potential QBA methods that could be evaluated and then used for different study designs and biases. However, appropriate interpretation and implementation of these methods is necessary.

Supplementary Material

Prisma Checklist
Detailed search characteristics
Example identification and application of QBA methods
Classification of QBA methods

Supplementary data to this article can be found online at https://doi.org/10.1016/j.jclinepi.2024.111507.

What is new?

Key findings

  • This systematic review identified 57 quantitative bias analysis (QBA) methods for summary-level data from observational and nonrandomized interventional studies.

  • Overall, there were 29 QBA methods that addressed unmeasured confounding, 19 that addressed misclassification bias, 6 that addressed selection bias, and 3 that addressed multiple biases.

What this adds to what is known?

  • This systematic review provides an overview of the range and characteristics of QBA methods for summary-level epidemiologic data that are published in the peer-reviewed literature and that can be used by researchers within the field of clinical epidemiology.

What is the implication and what should change now?

  • This systematic review may help future investigators identify different QBA methods for summary-level data. However, investigators should carefully review the original manuscripts to ensure that any assumptions are fulfilled, that the necessary bias parameters are available and accurate, and that all interpretations and conclusions are made with caution.

Funding:

This work was supported by the United States Food and Drug Administration of the US Department of Health and Human Services as part of a financial assistance award [U01FD005938] totaling $250,000 with 100 percent funded by the United States Food and Drug Administration/the US Department of Health and Human Services.

Declaration of competing interest

In the past 36 months, T.L.L. served as a member of the Amgen Methods Advisory Council, for which he received consulting fees and travel support. J.S.R. reported receiving grants from the US Food and Drug Administration; Johnson and Johnson; Medical Device Innovation Consortium; Agency for Healthcare Research and Quality; National Heart, Lung, and Blood Institute; and Arnold Ventures outside the submitted work and is also an expert witness at the request of relator attorneys, the Greene Law Firm, in a qui tam suit alleging violations of the False Claims Act and Anti-Kickback Statute against Biogen Inc. that was settled in September 2022. M.M.J. reported receiving grants from the US Food and Drug Administration; Agency for Healthcare Research and Quality; National Heart, Lung, and Blood Institute; National Center for Advancing Translational Sciences; National Institute on Drug Abuse; and American Cancer Society. J.D.W. is supported by Arnold Ventures, Johnson & Johnson through the Yale Open Data Access project, and the National Institute on Alcohol Abuse and Alcoholism of the National Institutes of Health under award 1K01AA028258 and previously served as a consultant to Hagens Berman Sobol Shapiro LLP and Dugan Law Firm APLC.

Footnotes

Disclaimer: The contents are those of the authors and do not necessarily represent the official views of, nor an endorsement, by the Food and Drug Administration/the US Department of Health and Human Services, or the US Government. The authors relied on the information explicitly described in the manuscripts for each quantitative bias analysis method. However, it is possible that quantitative bias analysis methods could be extended to accommodate different study designs and data formats that are not outlined in the articles. The information reported in the supplementary tables describing the quantitative bias analysis method could change based on more comprehensive statistical evaluations.

Trial registration number: https://osf.io/ue6vm.

CRediT authorship contribution statement

Xiaoting Shi: Writing – review & editing, Writing – original draft, Methodology, Investigation, Formal analysis, Data curation, Conceptualization. Ziang Liu: Writing – review & editing, Validation, Methodology, Investigation, Data curation. Mingfeng Zhang: Writing – review & editing, Methodology, Investigation, Conceptualization. Wei Hua: Writing – review & editing, Methodology, Investigation, Conceptualization. Jie Li: Writing – review & editing, Methodology, Investigation, Conceptualization. Joo-Yeon Lee: Writing – review & editing, Methodology, Investigation, Conceptualization. Sai Dharmarajan: Writing – review & editing, Methodology, Investigation. Kate Nyhan: Writing – review & editing, Resources, Methodology, Investigation. Ashley Naimi: Writing – review & editing, Methodology, Investigation. Timothy L. Lash: Writing – review & editing, Methodology, Investigation. Molly M. Jeffery: Writing – review & editing, Methodology, Investigation, Funding acquisition. Joseph S. Ross: Writing – review & editing, Project administration, Methodology, Investigation, Funding acquisition, Conceptualization. Zeyan Liew: Writing – review & editing, Validation, Supervision, Resources, Project administration, Methodology, Investigation, Funding acquisition, Formal analysis, Data curation, Conceptualization. Joshua D. Wallach: Writing – review & editing, Validation, Supervision, Resources, Project administration, Methodology, Investigation, Funding acquisition, Formal analysis, Data curation, Conceptualization.

Data availability

All data are available in the supplementary documents.



Supplementary Materials

Prisma Checklist
Detailed search characteristics
Example identification and application of QBA methods
Classification of QBA methods

