Medical Decision Making. 2025 Jan 23;45(3):223–231. doi: 10.1177/0272989X241310898

Directed Acyclic Graphs in Decision-Analytic Modeling: Bridging Causal Inference and Effective Model Design in Medical Decision Making

Stijntje W Dijk, Maurice Korf, Jeremy A Labrecque, Ankur Pandya, Bart S Ferket, Lára R Hallsson, John B Wong, Uwe Siebert, M G Myriam Hunink
PMCID: PMC11894903  PMID: 39846352

Abstract

Decision-analytic models (DAMs) are informative yet complex tools for answering questions in medical decision making. As their complexity grows, causal relationships between variables become harder to discern, and the need for causal inference techniques becomes evident. In this methodological commentary, we argue that graphical representations of the assumptions about such relationships, directed acyclic graphs (DAGs), can enhance the transparency of decision models and aid in parameter selection and estimation by visually specifying backdoor paths (i.e., potential biases in parameter estimates) and by clarifying structural modeling choices regarding frontdoor paths (i.e., the effect of the model structure on the outcome). This commentary discusses the benefit of integrating DAGs and DAMs in medical decision making, and in particular health economics, with 2 applications: the first examines statin use for prevention of cardiovascular disease, and the second considers mindfulness-based interventions for students’ stress. Despite the potential of DAGs within the decision science framework, challenges remain, including simplicity, defining the scope of a DAG, unmeasured confounding, noncausal aspects, and limited data availability or quality. Broader adoption of DAGs in decision science requires full-model applications and further debate.

Highlights

  • Our commentary proposes the application of directed acyclic graphs (DAGs) in the design of decision-analytic models, offering researchers a valuable and structured tool to enhance transparency and accuracy by bridging the gap between causal inference and model design in medical decision making.

  • The practical examples in this article showcase the transformative effect DAGs can have on model structure, parameter selection, and the resulting conclusions on effectiveness and cost-effectiveness.

  • This methodological article invites a broader conversation on decision-modeling choices grounded in causal assumptions.

Keywords: biomedical technology assessment, causality, costs and cost analysis, decision making, decision support techniques, epidemiologic factors, epidemiologic methods, research design


Decision-analytic models (DAMs) play a vital role in medical decision making and serve as indispensable tools to inform policy makers and clinicians about the potential consequences of their decisions and existing uncertainty. 1–3 DAMs integrate relevant data, uncertainties, and preferences into computational representations that quantify and compare the potential outcomes of different interventions. These models are often complex and require significant expertise to develop. They are also prone to bias through hidden causal assumptions. As DAMs become more complex, there is a growing need for tools that facilitate reflection on the causal and noncausal assumptions underlying the model. These assumptions can be made explicit and explored through the use of causal inference techniques.

In causal inference, directed acyclic graphs (DAGs) provide formal graphical representations of the assumed causal relationships between variables, 4,5 depicted by the presence or absence of directed edges (arrows) (Figure 1). For more detailed information on DAG construction, interpretation, and the underlying theory, readers are encouraged to consult existing tutorials and textbooks. 4,6–8

Figure 1. Illustration of a directed acyclic graph (DAG). An intervention influences an outcome variable through a “frontdoor path,” where the arrows (usually) lead from the intervention to the outcome. The green arrows (edges) from the exposure to the outcome represent the frontdoor path on the direct causal pathway, often referred to as the “direct effect.” When the exposure influences the outcome through a mediator, this is known as the “indirect effect” (purple). The total effect would combine both the direct and indirect effects, representing the overall impact of the exposure on the outcome across all pathways. DAGs can help to identify and adequately address bias that arises through so-called “open backdoor paths” between the intervention and the outcome, that is, a bias in the estimation of a valid causal effect estimate (blue arrows). To identify appropriate adjustments and statistical methods that can minimize bias, one requires a causal diagram that accurately represents the data-generating process. A path is closed when it is blocked, for example, as shown by the red arrows leading from the exposure and the outcome to a collider. Bias can then arise when this previously blocked path is inadvertently opened by inappropriate adjustment.
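For readers who want to experiment, the frontdoor/backdoor distinction can be made concrete in a few lines of code. The sketch below (our illustration; node names are hypothetical, and it assumes the Python package networkx is installed) encodes a DAG similar to Figure 1 and labels each exposure–outcome path by whether its first edge leaves or enters the exposure. Note that this labeling alone does not determine whether a path is open or closed; that additionally depends on colliders and on which variables are adjusted for.

```python
import networkx as nx  # assumption: networkx is available

# Hypothetical DAG mirroring Figure 1: a direct effect, a mediated
# (indirect) effect, and a confounder that opens a backdoor path.
dag = nx.DiGraph([
    ("exposure", "outcome"),     # direct (frontdoor) effect
    ("exposure", "mediator"),    # indirect effect via a mediator
    ("mediator", "outcome"),
    ("confounder", "exposure"),  # backdoor: exposure <- confounder -> outcome
    ("confounder", "outcome"),
])

def classify_paths(dag, exposure, outcome):
    """Label every exposure-outcome path as frontdoor or backdoor."""
    undirected = dag.to_undirected()
    for path in nx.all_simple_paths(undirected, exposure, outcome):
        # Frontdoor if the first edge points away from the exposure,
        # backdoor if it points into the exposure.
        kind = "frontdoor" if dag.has_edge(path[0], path[1]) else "backdoor"
        print(f"{kind}: {' - '.join(path)}")

classify_paths(dag, "exposure", "outcome")
# Prints (in some order):
#   frontdoor: exposure - outcome
#   frontdoor: exposure - mediator - outcome
#   backdoor: exposure - confounder - outcome
```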

Causal inference and decision-analytic modeling share a common goal: they can help to explore counterfactual scenarios—hypothetical alternatives to the observed scenario—in a quantitative manner that mimics the real world 9 and answers the crucial “what if?” question. 4 The decision node in the DAM splits into the counterfactual and observed scenarios—in DAM terminology, the strategies. The causal question is: what happens if we follow a counterfactual scenario compared with the observed scenario?

The application of causal inference in general decision science is not new. 9 This commentary, however, explicitly proposes an integration of DAGs in decision modeling to enhance the accuracy and transparency of DAM design. We will discuss the potential advantages and limitations of using DAGs to inform 1) model structure and 2) model parameters, and we will 3) illustrate their practical relevance through several examples.

How DAGs Could Support DAMs

In the following sections, we will argue that DAGs can

  • improve visualization and transparency of the causal assumptions underlying DAMs,

  • guide choices in model structure,

  • aid choices in the selection and application of model input parameters,

  • help to verify whether underlying causal assumptions (exchangeability) are met,

  • clarify the estimands the model evaluates, and

  • guide sensitivity analyses and causal bias analyses.

Visualization and Transparency of Causal Assumptions Underlying DAMs

First and foremost, DAGs can serve as tools to make the complex causal relationships that underlie DAMs explicit. By doing so within a formally defined framework, DAGs facilitate informed discussions among authors, stakeholders, readers, and reviewers. 10 This transparency not only aids in model reproducibility but also ensures a shared understanding of the structural choices. Lastly, DAGs serve an instructive purpose by helping readers connect observed interventions or actions to their corresponding counterfactual scenarios and consequences.

Choices in Model Structure

The DAG’s ability to visually depict causal pathways helps modelers make informed choices in structuring their models. The different paths on a DAG leading from the intervention to the outcome variable can be identified as “frontdoor paths” or “backdoor paths.” In a frontdoor path, the arrows most commonly point away from the intervention and toward the outcome variable (although some exceptions exist). These paths are all the ways in which a change in the intervention is associated with a change in the outcome. In a decision model, this change would typically be captured by the structure of the model stemming from the decision node. An open backdoor path represents bias in estimating valid causal effects, which we discuss in the sections on selecting input parameters and on clarifying the estimands and target trial emulation.

Box 1 illustrates a first example, in which we show why we think the effect of statins on cardiovascular disease (CVD) may be modeled more accurately by modeling the total effect rather than the effect of treatment on individual risk factors.

Box 1.

First Example of an Application of Directed Acyclic Graphs in Decision-Analytic Modeling a

Example 1: The cost-effectiveness of statin therapy on cardiovascular disease
Figure 2. Simplified visualization (parts of models A and B) of 2 different decision-analytic model approaches. The objective of both models is to identify the cost-effectiveness of statin therapy on CVD outcomes. Costs and utilities were directly linked to health states and other outcomes/events and are not visualized in the image. Model A models, for a person with a specific risk profile, the change in each individual risk factor (such as cholesterol, blood pressure, and others) with and without statins and, based on the resulting RP, applies the corresponding risk of CVD. Model B models the probability of CVD based on a relative risk with and without statins. BP, blood pressure; CVD, cardiovascular disease; RF, risk factors; RP, risk profile from the Framingham Study; RR, relative risk with versus without statins.
Figure 3. Directed acyclic graphs (DAGs) of the effect of statin therapy on coronary heart disease. On the left is the DAG (A) that fits the assumptions made in model A above, where we would assume that all of the effect of statins is mediated through lipid-level modification. However, the second DAG (B) is a better representation of current knowledge: the total effect of statins is in part an indirect effect through lipid-level modification and in part a direct effect through other mechanisms, such as their anti-inflammatory, vascular endothelial, plaque-stabilizing, and platelet aggregation–inhibiting effects. 11 CVD, cardiovascular disease.
a The modified example is based on work by van Kempen et al. 12

If we are interested in the cost-effectiveness of statin therapy, a cholesterol-lowering treatment, on CVD, then there are several modeling approaches that we can take, each involving alternative effectiveness assumptions. van Kempen et al., 12 who set out to model this effect, found that some research groups modeled the effect of statin therapy merely through the modification of lipid levels (model A in Figure 2), and others used the observed risk reductions of CVD from trials (model B in Figure 2). 12 We will use a DAG to visualize what the assumptions underlying these 2 modeling approaches could look like.

For model A, we could use a microsimulation model with states “well,” “CVD,” and “death” and use tracker variables for the risk factors. We assign each patient a set of risk factors, and we simulate CVD events based on multivariable transition probabilities, for example, derived from regression analyses of the Framingham Study data (the atherosclerotic cardiovascular disease [ASCVD] risk estimator score). 13 The effect of statin use is modeled through its expected lipid-level modification from systematic reviews of statin trials. This means we model the changes in risk factors for our population, most importantly the change in lipid levels through statin use, and as a result we update the transition probability of CVD events based on the updated ASCVD score.

Now we draw the DAG that we think reflects our decision problem (Figure 3), forcing us to consider all possible ways statins affect our outcomes. The DAG based on our subject knowledge shows that statins beneficially affect our outcomes not only through lipid levels (mediated effect) but possibly also through direct effects, such as anti-inflammatory, vascular endothelial, plaque-stabilizing, and platelet aggregation–inhibiting effects, together giving the total effect. 11 Model A assumes that the effect of statins operates only through lipids. If that assumption is wrong, the analysis will be biased. Model A will therefore capture only part of the effect and most likely underestimate the (cost-)effectiveness of statin use. Model B, on the other hand, models the total effect from systematic reviews of the same statin trials as a fixed CVD risk reduction. Model B also requires a less complex decision model structure than if we modeled changes in individual risk factors over time.

Exploring this issue, van Kempen et al. 12 modeled both structures and compared the model outputs while holding all other characteristics constant. They found that the choice of model structure had an important influence on model outcomes (model A: €56,642/quality-adjusted life-year [QALY]; model B: €22,131/QALY). When the effect was modeled through lipid modification alone, QALYs were underestimated and costs overestimated compared with modeling the total risk reduction (Box 1). For a willingness-to-pay threshold of €50,000/QALY, these choices would have led to different decisions derived from cost-effectiveness. For another, more elaborate example using a DAG for statin treatment and a DAM in a hypothetical target randomized controlled trial (RCT), see the article by Kuehne et al. 14
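To give a feel for how the two structures can diverge, the following deliberately simplified Python sketch contrasts them on a single risk profile. The risk equation, its coefficients, the LDL change, and the relative risk are all invented for illustration and are not taken from van Kempen et al. 12 The point is only that re-evaluating a risk equation after modifying lipids (model A) need not reproduce the total trial effect (model B).

```python
import math

def cvd_risk(ldl, sbp, age):
    """Toy logistic risk equation standing in for the Framingham/ASCVD score.
    All coefficients are invented for illustration only."""
    lp = -8.5 + 0.005 * ldl + 0.015 * sbp + 0.05 * age
    return 1 / (1 + math.exp(-lp))

profile = dict(ldl=160.0, sbp=140.0, age=60.0)
baseline = cvd_risk(**profile)

# Model A: statins act only via lipid lowering (hypothetical -40 mg/dL LDL),
# so the risk equation is simply re-evaluated with the modified risk factor.
risk_a = cvd_risk(profile["ldl"] - 40.0, profile["sbp"], profile["age"])

# Model B: statins act via the total relative risk observed in trials
# (hypothetical RR = 0.75), applied directly to the baseline risk.
risk_b = baseline * 0.75

print(f"baseline risk:                {baseline:.3f}")  # ~0.069
print(f"model A (lipid pathway only): {risk_a:.3f}")    # ~0.057
print(f"model B (total effect):       {risk_b:.3f}")    # ~0.052
# When statins also act through non-lipid pathways, model A captures only
# part of the benefit, mirroring the underestimation described in Box 1.
```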

Choices in the Selection and Application of Model Input Parameters

Selecting causal versus noncausal parameters

DAGs can help to identify, prioritize, and assess relevant model input parameters for important underlying biases. This approach can avoid common noncausal choices of model input parameters based on prediction scores or mere statistical (noncausal) associations in places where the model aims to derive causal effects of a decision, thereby refining the accuracy of the model. There are situations in which using a prediction model within the DAM could be an appropriate choice, such as determining baseline cardiac event rates, which are unaffected by the model’s intervention and are the same in both counterfactual worlds. Model A in Box 1, however, used a noncausal tool (a prediction model to calculate the observed risk among patients observed to take statins) for a causal purpose (calculating the revised risk if our intervention causes patients to take statins). In addition, a causal inference analysis published more than 2 decades ago using Framingham data showed that the results from valid causal effect estimation may differ substantially from the coefficients of a regression analysis. 15,16

Consider the following hypothetical illustration, which makes the importance of distinguishing model A from model B in Box 1 Figure 2 even more apparent. Suppose an observational dataset shows that having yellow fingers predicts long-term lung cancer and mortality; the true cause of both, and thus the confounder of this association, is smoking. If we model an intervention that “cleans fingers” without addressing the confounder, smoking, this model (using a prediction-model and change-in-risk-factor approach similar to model A) would erroneously suggest that the intervention prevents lung cancer and would probably appear very cost-effective. In a model similar to model B, the intervention “cleaning fingers” would likely show no difference in effect on mortality between the intervention and control strategies. Therefore, we would prefer to model the causal impact of smoking-related interventions using model B’s structure to accurately estimate the change in lung cancer.
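The following toy simulation (all probabilities are invented) makes this numerically explicit: yellow fingers strongly predicts lung cancer in the observational data, yet an intervention on fingers changes nothing, because smoking drives both the marker and the outcome.

```python
import random

random.seed(0)
N = 200_000

# Observational world: smoking causes both yellow fingers and lung cancer.
rows = []
for _ in range(N):
    smoker = random.random() < 0.3
    yellow = smoker                                         # marker of smoking
    cancer = random.random() < (0.10 if smoker else 0.01)   # caused by smoking only
    rows.append((yellow, cancer))

def cancer_rate(yellow_status):
    sel = [c for y, c in rows if y == yellow_status]
    return sum(sel) / len(sel)

# Confounded association: a prediction model would flag yellow fingers.
print(f"P(cancer | yellow fingers)    = {cancer_rate(True):.3f}")   # ~0.10
print(f"P(cancer | no yellow fingers) = {cancer_rate(False):.3f}")  # ~0.01

# Model-A-style reasoning: set "yellow fingers" to 0 and re-apply the
# prediction model -> risk spuriously appears to drop to ~0.01.
# Model-B-style reasoning: apply the causal effect of finger cleaning (none),
# so risk stays at the population rate in both strategies:
print(f"population risk (unchanged by cleaning) = {sum(c for _, c in rows) / N:.3f}")  # ~0.037
```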

In addition to the challenges posed by confounding variables, observational data often present another layer of complexity due to historical trajectories influencing variables. For instance, changing from being a smoker to a former smoker or reducing lipid levels may not solely reflect interventions. These individuals may have inherently different risk profiles or different exposures to interventions, leading to potentially biased associations when not controlling for their history. Even with meticulous adjustment or advanced analyses addressing time-varying confounding, such as considering changes in cholesterol levels over time, identifying a purely causal association remains challenging. The findings from model B’s structure would likely reflect the intervention effect more closely than the findings from model A.

In addition, DAGs could save time in model development if not every risk factor has to be modeled individually. When full datasets are available that allow for adjustments in the analysis plan, traditional causal inference methods can guide new analyses based on where the parameter falls within the causal pathways of the DAG and DAM structures.

Applying data to the model and adjusting the model to the data

In Box 2, we present a second example for which we have original data available from an RCT and use external data sources to build our model. In this example, we are interested in the effect of mindfulness-based interventions (MBIs) on perceived stress in medical students and, derived from that, students’ quality of life and the associated costs to the university.

Box 2.

Second Example of an Application of Directed Acyclic Graphs in Decision-Analytic Modeling a

Example 2: The effect of mindfulness-based interventions on perceived stress in medical students
Figure 4. Directed acyclic graph representing the (naïve expectation of the) trial.
Figure 5. Directed acyclic graph representing the actual trial that investigates the effect of mindfulness-based interventions on our outcomes of interest. The trial outcome is perceived stress, which is used as an intermediate outcome for the decision model; quality-adjusted life-years and costs are based on perceived stress levels.
a The example is based on the work by Lu et al. 17 and Dijk et al. 10,18 and the DEcrease STress through RESilience training for Students (DESTRESS) study.

Our initial, naïve DAG in Figure 4 is simple: we randomize students to receive MBIs or not, and through the trial outcome of perceived stress, we infer the costs and QALY outcomes. We build a Markov model with the states “healthy,” “symptoms,” “graduated,” and “dropout.” We assign the utility and cost of being stressed (through dropout and study delay) to the states. We assume there is no immediate effect of MBIs on dropout itself; the intervention influences only perceived stress.
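A minimal cohort-level sketch of this Markov structure, with invented transition probabilities and utilities (not the DESTRESS estimates), might look as follows; the MBI effect would enter by lowering the probability of moving from “healthy” to “symptoms” via perceived stress.

```python
STATES = ["healthy", "symptoms", "graduated", "dropout"]

# Monthly transition probabilities; every row sums to 1 (hypothetical values).
P = {
    "healthy":   {"healthy": 0.80, "symptoms": 0.10, "graduated": 0.10, "dropout": 0.00},
    "symptoms":  {"healthy": 0.30, "symptoms": 0.50, "graduated": 0.05, "dropout": 0.15},
    "graduated": {"graduated": 1.00},  # absorbing state
    "dropout":   {"dropout": 1.00},    # absorbing state
}
UTILITY = {"healthy": 0.90, "symptoms": 0.70, "graduated": 0.95, "dropout": 0.60}

def run(cycles=12):
    """Propagate a cohort through monthly cycles; return QALYs and end state."""
    cohort = {s: (1.0 if s == "healthy" else 0.0) for s in STATES}
    qalys = 0.0
    for _ in range(cycles):
        qalys += sum(cohort[s] * UTILITY[s] for s in STATES) / 12  # annual utilities
        nxt = {s: 0.0 for s in STATES}
        for state, fraction in cohort.items():
            for target, p in P[state].items():
                nxt[target] += fraction * p
        cohort = nxt
    return qalys, cohort

qalys, final = run()
print(f"QALYs over 1 year: {qalys:.3f}")
print({s: round(f, 3) for s, f in final.items()})
```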

However, drawing the DAG makes us realize that the underlying study data we intend to use are not perfect. We use our revised DAG (Figure 5) to make adjustments to our data analysis plan, highlight limitations to our audience, or select other sources for our input parameters that better reflect our assumptions.

First, the DAG shows why we prefer to use RCT data over observational data: there are too many unmeasured confounders that would have an arrow going into the intervention, as someone’s motivation can influence whether or not they are exposed to MBIs, opening a backdoor path. Second, the DAG includes the “actual treatment,” as not all trial participants adhered to the intervention. This forces us to review how we analyzed the data and how we modeled adherence. For example, if we use an intention-to-treat analysis, we should not integrate adherence into our DAM structure, but we should if we use a per-protocol analysis.

Another major threat to the reliability of our findings stems from incomplete participant follow-up. We will have data only on those who complete their follow-up (censoring), and the likelihood that people complete follow-up depends on many (unmeasured) factors. By conditioning on complete follow-up, we open a backdoor path from the intervention through these confounders to the outcome. We can attempt to address this issue by performing missing-data imputation and by using traditional causal inference methods, such as inverse probability-of-censoring weighting, to close this backdoor path using the original data. Alternatively, because conditioning on complete follow-up introduces bias, we could model only the subgroup of our population with complete follow-up and use sensitivity analyses for alternative scenarios with both higher and lower effectiveness for the remaining groups.
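To show how inverse probability-of-censoring weighting can close such a backdoor path, here is a toy example in which motivation affects both completing follow-up and the stress outcome. The censoring probabilities are known by construction; in a real analysis they would be estimated (e.g., by logistic regression), and all numbers are invented.

```python
import random

random.seed(1)
N = 50_000

# Motivation affects both follow-up completion and perceived stress, so
# restricting to completers opens a backdoor path (selection bias).
data = []
for _ in range(N):
    motivated = random.random() < 0.5
    complete = random.random() < (0.9 if motivated else 0.4)  # censoring mechanism
    stress = random.gauss(20 - 4 * motivated, 3)              # outcome (stress score)
    data.append((motivated, complete, stress))

completers = [(m, s) for m, c, s in data if c]

# Naive completer mean is biased toward the motivated, low-stress group.
naive = sum(s for _, s in completers) / len(completers)

# IPCW: weight each completer by 1 / P(complete | covariates).
def weight(motivated):
    return 1 / (0.9 if motivated else 0.4)

ipcw = sum(weight(m) * s for m, s in completers) / sum(weight(m) for m, _ in completers)

truth = sum(s for _, _, s in data) / N
print(f"naive completer mean: {naive:.2f}")  # biased low (~17.2)
print(f"IPCW-weighted mean:   {ipcw:.2f}")   # ~18.0, close to the truth
print(f"full-population mean: {truth:.2f}")  # ~18.0
```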

Verify Whether Common Causal Assumptions Are Met

Both causal inference and decision sciences make assumptions that affect their results and validity, such as assumptions on exchangeability, consistency, and positivity 4 as well as correct model specification and lack of measurement error. Here we discuss the first 3 of these assumptions, since the latter 2 speak for themselves. In the following discussion, consider the example of a model of an RCT comparing usual care with a computerized decision support system (CDSS) intervention intended to increase the appropriateness of physicians’ diagnostic imaging requests. 19 Exchangeability intuitively refers to having no selection or confounding bias. It is exemplified by the decision node in the model, where the same individuals are compared under different strategies. For instance, in the CDSS example, physicians with and without CDSS support should be comparable in terms of imaging requests. If only junior doctors use decision support, and we compare them with a control group of experienced radiologists without decision support, we expect these 2 groups to differ in the appropriateness of their imaging requests, and exchangeability is violated. In our DAM, this assumption affects the validity of our data choices for the comparator and intervention parameters. The plausibility of the exchangeability assumption can be directly assessed using a DAG, provided all backdoor paths are visualized and the minimal adjustment set is identified. At the same time, the DAG can indicate whether important covariates are missing.

While the remaining causal assumptions cannot be directly assessed using a DAG, they still need to be considered in order to create a representative causal DAM. Consistency20,21 requires a well-defined exposure in the DAG and the DAM. This would be violated if the intervention is the use of CDSS, but there are different versions of CDSS use that lead to different outcomes, for example, when physicians in one randomization group delegate their imaging requests to physician assistants. Unlike exchangeability, a DAG cannot directly assess the plausibility of the consistency assumption. Nevertheless, it can encourage researchers to explicitly consider this assumption. The same applies to the positivity assumption. Positivity asserts that to draw a causal conclusion, there must be a nonzero probability of both receiving and not receiving the intervention conditional on the adjusted variables. If in a DAM we used data from an observational study in which all participants were exposed to the CDSS, positivity is compromised, and we would not have sufficient information to inform our intervention parameter.
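Although positivity cannot be read off a DAG, checking it in data is straightforward: within every stratum of the adjustment set, both intervention levels must occur. A sketch with hypothetical CDSS records:

```python
from collections import Counter

# Hypothetical observational records: (stratum of the adjustment set, treated?).
records = [
    ("academic", True), ("academic", False),
    ("community", True), ("community", True),  # no untreated community sites
]

counts = Counter(records)
for stratum in sorted({s for s, _ in records}):
    treated, untreated = counts[(stratum, True)], counts[(stratum, False)]
    if treated == 0 or untreated == 0:
        print(f"positivity violated in stratum '{stratum}': "
              f"treated={treated}, untreated={untreated}")
```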

Clarifying the Estimands the Model Evaluates

DAMs should be guided by a clear research question, including a specified objective, population, intervention, comparator, consequences, and outcome of interest and its corresponding target estimands, as well as the related tradeoffs, such as incremental cost-effectiveness. 1,2 This estimand represents what the study aims to estimate and what medical decision the study results can inform. 22 However, the formulation of target estimands is underacknowledged within the decision science literature. An important question to consider is whether the observed estimand, that is, what was actually estimated, matches the DAM’s target estimand. DAGs can help evaluate discrepancies between the target and observed estimands. For each causal DAM input parameter there could be a corresponding DAG, and for each DAG a corresponding causal estimand that describes precisely what is estimated. For example, the DAGs in Box 2 illustrate how the estimand for the effectiveness of MBIs may be prone to selection bias due to differential loss to follow-up (censoring), leading to violation of the exchangeability assumption.

Target trial emulation offers a framework for structuring an analysis in such a way that it resembles an RCT designed to obtain our target estimands. RCTs are often considered the gold standard for answering questions about the comparative effectiveness of interventions. 23 However, they are not always feasible, desirable, or ethical, and when they are performed, they are often imperfect, as illustrated by Box 2. Some claim that every RCT is an observational study after the point of randomization. 24 Causal inference from observational data and the use of DAMs can be viewed as attempts to emulate a target RCT that would answer the question of interest. 23 In this context, DAGs aid in checking whether exchangeability holds when defining the target trial. When the causal question cannot be translated into a target trial, the research question is likely not well-defined and should be redefined. In fact, to maximize transparency, a target trial protocol should be published even before the data are analyzed or the DAM is structured. The first published target trial protocol appeared in 2019 and addressed the dynamic treatment question of when to start statin treatment. 14

Guiding Sensitivity Analyses and Causal Bias Analyses

A DAG can clarify whether the estimate entering a model, and hence the output from the model, is prone to bias. Generally, the idea of quantitatively assessing the systematic error of an estimate is referred to as sensitivity analysis. Here, however, we want to make a distinction between causal and noncausal input parameters and thereby reserve the term sensitivity analysis for noncausal input parameters and causal bias analysis for causal input parameters (although we should note that in instances such as selection bias, even noncausal input parameters may necessitate a form of causal bias analysis). Making this distinction helps in selecting the appropriate tools for the respective input parameters. For causal input parameters, specifically designed methods can be used that, for example, estimate bias from unmeasured confounders or selection bias. 25 DAGs can help us think through potential sources of bias. In addition, when a DAG shows important weaknesses in our analysis that cannot be addressed by reanalyzing the data, it can suggest input parameters to use in uncertainty and scenario analysis.

Another important distinction is between sensitivity analysis for uncertainty due to the limited sample size of the population that a parameter was based on and sensitivity analysis for systematic error due to biased estimates. The former can be addressed using standard techniques in DAMs, such as probabilistic analysis and value-of-information analysis. 1 The latter requires consideration of structural sources of bias, for which scenario analysis or threshold analysis can support decision makers. DAGs could visualize assumptions about input parameters or DAMs that are likely to introduce bias, as well as the direction of the bias, helping in the selection of scenario analyses, such as for the incomplete follow-up in the example from Box 2.
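As one concrete instance of causal bias analysis, the simple bias formula for a single binary unmeasured confounder (one of the methods described by Fox et al. 25) can turn assumptions about the confounder into bias-adjusted input parameters for scenario analysis. All parameter values below are hypothetical.

```python
def rr_adjusted(rr_obs, rr_cd, p1, p0):
    """Bias-adjust an observed relative risk for one binary unmeasured
    confounder. rr_cd: confounder-outcome relative risk; p1/p0: confounder
    prevalence among the exposed/unexposed."""
    bias_factor = (rr_cd * p1 + (1 - p1)) / (rr_cd * p0 + (1 - p0))
    return rr_obs / bias_factor

# Feed the adjusted RRs into the DAM as scenarios alongside the observed RR
# to see whether the cost-effectiveness conclusion is robust to the bias.
for p1 in (0.2, 0.4, 0.6):
    adj = rr_adjusted(rr_obs=0.75, rr_cd=2.0, p1=p1, p0=0.2)
    print(f"confounder prevalence among exposed = {p1}: adjusted RR = {adj:.2f}")
```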

Discussion of Limitations

While DAGs offer a promising framework for enhancing DAMs, we also see challenges and limitations that require further discussion. These constraints arise from both the inherent nature of DAGs and the practicalities of applying them in decision science.

  • Simplicity and common sense: When the DAM structure and parameters align with intuitive reasoning, creating a DAG might be perceived as unnecessary. However, we argue that it still provides transparency to the reader.

  • A DAG of what? Our article did not provide explicit guidance on what exactly the DAG should represent: the full model, a specific parameter, or the target trial? While we could argue for a comprehensive approach encompassing all variables and their relationships, others contend that the decision model’s complexity might not always be accurately captured in a single DAG. The DAG for the overall DAM will be most useful if all frontdoor paths relevant to costs and benefits are present; for input parameters, the backdoor paths leading into those parameters will be essential.

  • Noncausal aspects in modeling: DAGs, like decision models, imply causal assumptions. However, these assumptions are often implicit in decision models, whereas DAGs force explicit consideration. DAGs by nature describe causal relationships; however, there may be useful noncausal associations worth including in DAMs (examples in Box 2). A combination of an influence diagram and a highlighted causal diagram could offer the reader information on causal assumptions while still showing a representation of the full model.

  • Data availability and model simplification: Both DAGs and DAMs inherently involve simplifications of reality, and decision modelers may not always have data precisely fitting the DAG. However, DAGs can still serve the purpose of highlighting potential sources of unobserved confounding in decision models resulting from practical concessions and provide readers with a better understanding of the model’s limitations.

  • Missing proof: Our article offers a discussion based on our underlying assumptions and our expectations of the DAG’s added value, yet the article itself does not prove this value. More applications of a DAG-driven approach in modeling are needed to identify where and how the approach is most useful.

Conclusion and Future Directions

This commentary advocates for the integration of DAGs into decision-analytic modeling. We argue that DAGs can significantly enhance the transparency, reliability, and informativeness of decision models by visually representing underlying assumptions, identifying potential biases, and clarifying structural modeling choices. The 2 applications presented—statin use for CVD prevention and MBIs for student stress—demonstrate the practical utility of DAGs in medical decision making and health economics.

However, our article also invites a more nuanced exploration of their potential applications and limitations. Looking ahead, the increasing demand for real-world evidence in health economics underscores the potential importance of DAGs in identifying and addressing potential bias in decision modeling. To foster broader adoption, it is crucial to identify situations in which DAGs notably improve clarity and guide decision-analytic modelers in their modeling choices. Furthermore, the acceptance of DAGs in decision science relies on demonstrating their practical impact through real, full-model applications, transcending theoretical discussions. 7 Continuous engagement within the scientific community, collaborative efforts, educational initiatives, and the incorporation of causal inference inquiries into future versions of reporting and quality assessment frameworks such as CHEERS 26 and CHEQUE 27,28 can lead to broader adoption and understanding of DAGs in decision science.

Footnotes

The authors declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: Apart from the submitted work, the authors receive (or received in the past 36 mo) the following funding: Dr. Dijk receives research funding from the Gordon and Betty Moore Foundation and the German Innovation Committee at the Federal Joint Committee. Dr. Korf has no conflicts of interest to report. Dr. Labrecque is supported by a NWO/ZonMW Veni grant (09150162010213). Dr. Pandya has no conflicts of interest to report. Dr. Hallsson has no conflicts of interest to report. Dr. Ferket has no conflicts of interest to report. Dr. Wong has no conflicts of interest to report. Dr. Siebert has no conflicts of interest to report. Dr. Hunink receives (or received in the past 36 mo) royalties from Cambridge University Press for a textbook on medical decision making, reimbursement of expenses from the European Society of Radiology (ESR) for work on the ESR guidelines for imaging referrals, and research funding from the American Diabetes Association, the Netherlands Organization for Health Research and Development, Netherlands Educational Grant (“Studievoorschotmiddelen”), the German Innovation Committee at the Federal Joint Committee, and the Gordon and Betty Moore Foundation. The authors received no financial support for the research, authorship, and/or publication of this article.

Author Contributions: Concept and design: Dijk, Korf, Labrecque, Pandya, Siebert, Ferket, Hunink; acquisition and analysis: Dijk, Hunink; drafting of the manuscript: Dijk, Hunink; critical revision of the manuscript for important intellectual content: Dijk, Korf, Labrecque, Pandya, Ferket, Hallsson, Wong, Siebert, Hunink; supervision: Hunink.

Ethics Approval: This article is a methodological commentary. No ethical approval is required.

Data Availability: This article is a methodological commentary. No data were collected or made available.

Contributor Information

Stijntje W. Dijk, Department of Epidemiology, Erasmus MC University Medical Center, Rotterdam, The Netherlands; Department of Radiology and Nuclear Medicine, Erasmus MC University Medical Center, Rotterdam, The Netherlands; Department of Gastroenterology and Hepatology, HagaZiekenhuis, The Hague, The Netherlands; Department of Radiology, Elisabeth-Tweesteden Ziekenhuis, Tilburg, The Netherlands.

Maurice Korf, Department of Epidemiology, Erasmus MC University Medical Center, Rotterdam, The Netherlands.

Jeremy A. Labrecque, Department of Epidemiology, Erasmus MC University Medical Center, Rotterdam, The Netherlands.

Ankur Pandya, Center for Health Decision Science, Harvard T.H. Chan School of Public Health, Boston, USA.

Bart S. Ferket, Institute for Healthcare Delivery Science, Department of Population Health Science and Policy, Icahn School of Medicine at Mount Sinai, New York, NY, USA.

Lára R. Hallsson, Department of Public Health, Health Services Research and Health Technology Assessment, UMIT TIROL – University for Health Sciences and Technology, Hall in Tirol, Austria.

John B. Wong, Division of Clinical Decision Making, Tufts Medical Center, Boston, USA.

Uwe Siebert, Center for Health Decision Science, Harvard T.H. Chan School of Public Health, Boston, USA; Department of Public Health, Health Services Research and Health Technology Assessment, UMIT TIROL – University for Health Sciences and Technology, Hall in Tirol, Austria; Institute for Technology Assessment and Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA; Department of Epidemiology, Harvard T.H. Chan School of Public Health, Boston, USA.

M. G. Myriam Hunink, Department of Epidemiology, Erasmus MC University Medical Center, Rotterdam, The Netherlands; Department of Radiology and Nuclear Medicine, Erasmus MC University Medical Center, Rotterdam, The Netherlands; Center for Health Decision Science, Harvard T.H. Chan School of Public Health, Boston, USA.

References

  • 1. Hunink MGM, Weinstein MC, Wittenberg E, et al. Decision Making in Health and Medicine: Integrating Evidence and Values. 2nd ed. Cambridge (UK): Cambridge University Press; 2014.
  • 2. Siebert U. When should decision-analytic modeling be used in the economic evaluation of health care? HEPAC. 2003;4:143–50.
  • 3. Caro JJ, Briggs AH, Siebert U, Kuntz KM. Modeling good research practices—overview: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force–1. Med Decis Making. 2012;32(5):667–77.
  • 4. Hernán MA, Robins JM. Causal Inference: What If. Boca Raton (FL): Chapman & Hall/CRC; 2020. Available from: www.hsph.harvard.edu/miguel-hernan/causal-inference-book/
  • 5. Greenland S, Pearl J, Robins JM. Causal diagrams for epidemiologic research. Epidemiology. 1999;10:37–48.
  • 6. Digitale JC, Martin JN, Glymour MM. Tutorial on directed acyclic graphs. J Clin Epidemiol. 2022;142:264–7.
  • 7. Imbens GW. Potential outcome and directed acyclic graph approaches to causality: relevance for empirical practice in economics. J Econ Lit. 2020;58(4):1129–79.
  • 8. Tennant PWG, Murray EJ, Arnold KF, et al. Use of directed acyclic graphs (DAGs) to identify confounders in applied health research: review and recommendations. Int J Epidemiol. 2021;50(2):620–32.
  • 9. Kühne F, Schomaker M, Stojkov I, et al. Causal evidence in health decision making: methodological approaches of causal inference and health decision science. GMS Ger Med Sci. 2022;20:Doc12.
  • 10. Dijk SW, Caulley LM, Hunink M, Labrecque J. From complexity to clarity: how directed acyclic graphs enhance the study design of systematic reviews and meta-analyses. Eur J Epidemiol. 2024;39:7–33.
  • 11. Morofuji Y, Nakagawa S, Ujifuku K, et al. Beyond lipid-lowering: effects of statins on cardiovascular and cerebrovascular diseases and cancer. Pharmaceuticals. 2022;15(2):151.
  • 12. van Kempen BJH, Ferket BS, Hofman A, Spronk S, Steyerberg E, Hunink MGM. Do different methods of modeling statin treatment effectiveness influence the optimal decision? Med Decis Making. 2012;32(3):507–16.
  • 13. Kannel WB, McGee DL. Diabetes and cardiovascular disease: the Framingham study. JAMA. 1979;241(19):2035–8.
  • 14. Kuehne F, Jahn B, Conrads-Frank A, et al. Guidance for a causal comparative effectiveness analysis emulating a target trial based on big real world evidence: when to start statin treatment. J Comp Eff Res. 2019;8(12):1013–25.
  • 15. Siebert U, Hernán MA, Robins JM. Monte Carlo simulation of the direct and indirect impact of risk factor interventions on coronary heart disease. An application of the g-formula. In: Proceedings of the 8th Biennial Conference of the European Society for Medical Decision Making (abstract). Taormina, Sicily, Italy, 2–5 June 2002. p 51.
  • 16. Robins JM, Hernán MA, Siebert U. Effects of multiple interventions. In: Ezzati M, Lopez AD, Rodgers A, Murray CJL, eds. Comparative Quantification of Health Risks: Global and Regional Burden of Disease Attributable to Selected Major Risk Factors. Vol 1. Geneva (Switzerland): World Health Organization; 2004. p 2191–230.
  • 17. Lu C, Dijk SW, Pandit A, Kranenburg L, Luik AI, Hunink MGM. The effect of mindfulness-based interventions on reducing stress in future health professionals: a systematic review and meta-analysis of randomized controlled trials. Appl Psychol Health Well-Being. 2024;16(2):765–92.
  • 18. Dijk SW, Steijlen OFM, Kranenburg LW, et al. DEcrease STress through RESilience training for Students (DESTRESS) study: protocol for a randomized controlled trial nested in a longitudinal observational cohort study. Contemp Clin Trials. 2022;122:106928.
  • 19. Dijk SW, Kroencke T, Wollny C, et al. Medical Imaging Decision And Support (MIDAS): study protocol for a multi-centre cluster randomized trial evaluating the ESR iGuide. Contemp Clin Trials. 2023;135:107384.
  • 20. Hernán MA. Does water kill? A call for less casual causal inferences. Ann Epidemiol. 2016;26(10):674–80.
  • 21. VanderWeele TJ. Concerning the consistency assumption in causal inference. Epidemiology. 2009;20(6):880–3.
  • 22. Luijken K, van Eekelen R, Gardarsdottir H, Groenwold RHH, van Geloven N. Tell me what you want, what you really really want: estimands in observational pharmacoepidemiologic comparative effectiveness and safety studies. Pharmacoepidemiol Drug Saf. 2023;32(8):863–72.
  • 23. Hernán MA, Robins JM. Using big data to emulate a target trial when a randomized trial is not available. Am J Epidemiol. 2016;183(8):758–64.
  • 24. Hernán MA, Hernández-Díaz S, Robins JM. Randomized trials analyzed as observational studies. Ann Intern Med. 2013;159:560–2.
  • 25. Fox MP, MacLehose RF, Lash TL. Applying Quantitative Bias Analysis to Epidemiologic Data. Cham (Switzerland): Springer; 2022.
  • 26. Husereau D, Drummond M, Augustovski F, et al. Consolidated Health Economic Evaluation Reporting Standards 2022 (CHEERS 2022) statement: updated reporting guidance for health economic evaluations. Int J Technol Assess Health Care. 2022;38(1):e13.
  • 27. Kim DD, Do LA, Synnott PG, et al. Developing criteria for health economic quality evaluation tool. Value Health. 2023;26(8):1225–34.
  • 28. Dijk SW, Essafi S, Hunink MGM. An application of the Checklist for Health Quality Evaluations (CHEQUE) in a systematic review setting. Value Health. 2024.
