Abstract
Background
Developing health economic decision-analytic models requires making modelling choices to simplify reality while addressing the decision context. Finding the right balance between a decision-analytic model’s simplicity and its adequacy is important but can be challenging.
Objective
We aimed to develop a tool that supports the systematic reporting and justification of modelling choices in a decision-analytic model, ensuring it is adequate and only as complex as necessary for addressing the decision context.
Methods
We identified decision-analytic model features from the key literature and our expertise. For each feature, we defined both simple and complex modelling choices that could be selected, and the consequences of simplifying a feature contrary to requirements of the decision context. Next, we designed the tool and assessed its clarity and completeness through interviews and expert workshops. To ensure consistency of use, we developed a glossary sheet and applied the tool in an illustrative case: a decision-analytic model on a repurposed drug for treatment-resistant hypertension.
Results
We conducted five interviews and two workshops with 18 decision-analytic model experts. The resulting tool, SMART (Systematic Model adequacy Assessment and Reporting Tool), consists of a framework of 28 model features, allowing users to select a modelling choice per feature and then assess the consequences of their choices for validity and transparency. SMART also includes a glossary sheet. The treatment-resistant hypertension case example is provided separately.
Conclusions
SMART supports decision-analytic model development and assessment, by promoting clear reporting and justification of modelling choices, and highlighting their consequences for model validity and transparency. Thoughtful and well-justified modelling choices can help optimise the use of resources and time for model development, while ensuring the model is adequate to support decision making.
Supplementary Information
The online version contains supplementary material available at 10.1007/s40273-025-01515-x.
Key Points for Decision Makers
| Developing decision-analytic models in health economic evaluations requires making many modelling choices to create a model that adequately represents reality to address the decision context while accounting for constraints in time, data and resources. |
| SMART (Systematic Model adequacy Assessment and Reporting Tool) supports the systematic reporting and justification of modelling choices, highlights their consequences for validity and transparency, and helps find the right balance between decision-analytic models simplicity and complexity, to ensure that decision-analytic models are fit for purpose and only as complex as necessary. |
| Well-justified and clearly reported modelling choices can optimise resource and time allocation in decision-analytic model development while ensuring the model’s adequacy in informing healthcare decision making. |
Introduction
“Everything should be made as simple as possible, but not simpler.” Often attributed to Einstein, this quote is frequently used to emphasise the importance of balancing simplification with adequacy in complex subjects. However, achieving this balance can be challenging, and the development and assessment of decision-analytic models (DAMs) for health economic evaluations is no exception. A DAM for a health economic evaluation is used to assess and compare a set of alternative health interventions based on their potential costs, effects and cost effectiveness. As such, DAMs are a vital tool for decision makers in identifying the best course of action at various development stages of healthcare interventions [1–3].
In early development stages, DAMs can help anticipate the maximum acceptable price for an innovation to be considered cost effective in what is referred to as a headroom analysis [2]. Early-stage simple DAMs are also helpful in estimating unmet need and potential societal value in settings with constraints in clinical and economic data and time. Such settings include low- and middle-income countries [4], and rare diseases [5]. These early-stage DAMs are often simpler than their counterparts used for reimbursement decisions at health technology assessment (HTA) agencies. Additionally, the shift towards personalised medicine, and specifically mechanism-based drug repurposing, where a vast number of drug-indication pairs need to be screened for their potential societal value, provides a setting where simple DAMs can support research prioritisation [6]. Simplification of reality is inherent in all models, but especially in such settings. At later development stages, as more evidence emerges, DAMs can quantify the intervention’s potential performance in different placements in the care pathway, inform pricing and reimbursement decisions, and identify areas where further research is needed. Whilst more resources are typically available at this stage, model simplifications will surely still be made.
Developing DAMs is an iterative process involving structured steps and continuous input from subject matter experts [2, 7]. As models are inevitably a simplification of reality [8], many choices are made during their development to create an adequate model for addressing the decision context; that is, a DAM that sufficiently addresses a decision problem while considering the available options, as well as the time and resource constraints within the setting of the decision problem. Examples of modelling choices include selecting a simpler approach (e.g., a decision tree instead of a Markov model for decision problems with a limited number of outcomes and a short time horizon [7]), conducting a rapid literature review, using only the evidence at hand to populate the DAM, and not assessing the value of collecting more information [9]. These modelling choices create a spectrum of simplicity (or complexity) in DAMs, ranging from "back-of-the-envelope" calculations that rapidly inform decisions to very complex and data-intensive models [10].
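To make this spectrum concrete, the sketch below implements one of the simpler model forms mentioned above — a three-state Markov cohort model comparing standard care with a new treatment. It is illustrative only: the states, transition probabilities, costs, utilities and discount rate are all invented for this example and are not taken from any published evaluation.

```python
# Minimal Markov cohort model sketch (states: Well, Post-event, Dead).
# All inputs below are hypothetical, for illustration only.

def run_markov(p_transition, costs, utilities, cycles=40, discount=0.035):
    """Trace a cohort through the model; return discounted total cost and QALYs."""
    n = len(costs)
    state = [1.0] + [0.0] * (n - 1)           # whole cohort starts in "Well"
    total_cost = total_qaly = 0.0
    for t in range(cycles):
        d = 1.0 / (1.0 + discount) ** t        # discount factor for this cycle
        total_cost += d * sum(s * c for s, c in zip(state, costs))
        total_qaly += d * sum(s * u for s, u in zip(state, utilities))
        # Multiply the state vector by the transition matrix for the next cycle
        state = [sum(state[i] * p_transition[i][j] for i in range(n))
                 for j in range(n)]
    return total_cost, total_qaly

# Hypothetical inputs: the new treatment lowers the annual event risk 10% -> 7%
p_soc = [[0.88, 0.10, 0.02], [0.0, 0.90, 0.10], [0.0, 0.0, 1.0]]
p_new = [[0.91, 0.07, 0.02], [0.0, 0.90, 0.10], [0.0, 0.0, 1.0]]
costs_soc = [500.0, 4000.0, 0.0]
costs_new = [1500.0, 4000.0, 0.0]              # adds the drug's annual cost
utilities = [0.85, 0.60, 0.0]

c0, q0 = run_markov(p_soc, costs_soc, utilities)
c1, q1 = run_markov(p_new, costs_new, utilities)
icer = (c1 - c0) / (q1 - q0)
print(f"ICER: {icer:,.0f} per QALY gained")
```

Each feature in such a model (number of states, memory, timing of events, time horizon) is a point on the simplicity–complexity spectrum discussed above; a decision tree would simplify further, while a patient-level simulation would add complexity.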
Ideally, DAMs should adequately address the decision context; however, their development can be constrained by time, data and resources. Although discouraged [11], many DAM modelling choices are driven by these constraints [9, 11, 12]. Moreover, modelling choices may be driven more by the modellers' expertise than by their potential consequences [12, 13]. This may in turn affect key elements of DAMs. Some complexity may be necessary to ensure the validity of the model in capturing the complexity of clinical practice and disease trajectories [7]. However, excessively complex DAMs can be difficult and time consuming to develop and run. In contrast, simpler modelling choices allow for quicker and easier development, but the validity of the DAM and its results can be unclear [7, 10, 14, 15]. Finding the optimal trade-off between the simplicity and validity of DAMs can be difficult. Transparency, another key element of DAMs, complicates this balance even further. While transparency can help stakeholders understand a DAM and its various features, oversimplification for the sake of transparency risks ignoring the intricacies of reality and could negatively affect the validity of the DAM and its results [7, 8, 12, 16, 17]. Moreover, a simpler model is not always more transparent. For instance, if an aggregated structure relies on numerous calculations and assumptions, the model and the calculations performed may not be transparent to external reviewers or model users. This raises a fundamental question: how can we strike the right balance between a DAM's simplicity, validity and transparency to ensure its adequacy for decision making?
Currently, there are several tools and checklists to support the development, reporting and validation of DAMs, each focusing on optimising individual (and crucial) aspects such as DAM reporting (CHEERS) [18], technical verification (TECH-VER) [19], detailed validation (AdViSHE) [20], and uncertainty identification and assessment (TRUST) [21]. The ISPOR-SMDM Modelling Good Research Practices Task Force also offers recommendations on best practices for conceptualising, parameterising and validating DAMs [7, 11, 15, 22, 23], as well as a questionnaire to help reviewers assess the credibility of modelling studies [24]. However, these tools do not directly address the critical challenge of systematically balancing the simplicity and complexity of all DAM elements by considering the consequences of modelling choices for validity and transparency. In this article, we address this gap by developing a tool that supports the systematic reporting and justification of modelling choices, highlights their consequences, and helps in finding the balance between simplicity and complexity in DAMs to ensure their adequacy for the decision context.
Methods
Developing the Tool
To develop the theoretical framework of the tool, we reviewed the key literature and guidelines on developing DAMs. We used the articles by Barton et al. [1] and Brennan et al. [25], Breeze et al. [17] and the TRUST tool for assessing uncertainties in health economic decision models as a starting point [21]. We then reviewed the citation lists of these articles to identify additional relevant articles [8–10, 12–14]. We also used the ISPOR-SMDM Modelling Good Research Practices Task Force reports [7, 11, 15, 22, 23].
Based on the above-mentioned literature and our expertise, we first identified the steps involved in DAM development. Next, we compiled a list of all DAM features within these development steps to identify decision points in the modelling process. For each of these identified features, there is a wide spectrum of modelling choices, which range from simple to complex. For usability’s sake, we selected one choice deemed relatively simple and another deemed relatively complex for each feature. We then outlined the consequences of simplifying a DAM feature, when the decision context at hand requires the complex choice. Finally, we implemented our findings (the theoretical framework) as a first version of the tool using Microsoft Excel.
We used a case example of an early DAM for treatment-resistant hypertension to demonstrate how the tool aids in DAM development. Based on the decision context, we filled in the tool’s different sections and provided justifications for our modelling choices.
In-Depth Interviews and Expert Workshops for Assessing the First Version of the Tool
In-Depth Interviews
To assess the relevance, clarity and comprehensiveness of the first version of the tool, we conducted online in-depth semi-structured interviews with DAM developers and reviewers. A convenience sampling approach was followed to identify potential participants for the interviews based on their expertise, availability and accessibility. Invitations for the interviews, along with an information sheet describing the study and the aim of the interview, were sent via e-mail to potential participants. The participants were also asked to fill in a consent form if they accepted the invitation. Microsoft Teams was used to conduct and record the interviews. The interview recordings were summarised and analysed by themes, and all feedback, including references to additional literature [26], was used to improve the tool based on agreement among the authors. The interview structure and questions can be found in the Electronic Supplementary Material (ESM).
Expert Workshops
To further assess the completeness, relevance and clarity of the first version of the tool, we conducted two 2-hour online workshops with experts in DAMs. Participants were identified using a convenience sample. An invitation was sent via e-mail along with an information sheet describing the study, the aim of the workshop and a consent form. Each workshop was structured into three parts:
1. The participants were introduced to the tool and provided with an overview of the steps undertaken to develop it.
2. The participants were introduced to the case of treatment-resistant hypertension: an early DAM to evaluate the cost effectiveness of a novel treatment for patients with treatment-resistant hypertension. The new treatment is being developed by a clinical research team at an academic institution that must provide evidence, within a short timeframe and given limited evidence, about the cost effectiveness of this new treatment. In a group exercise, participants were asked to use the tool to plan the model development in the given decision context. A moderator guided the participants in selecting the different model features and assessing the consequences of their modelling choices. The participants were encouraged to discuss the rationales for their choices while completing the tasks.
3. In the final part, the participants were asked to reflect on their experiences with using the tool. This included discussing DAM features, the consequences of modelling choices, and the overall relevance and usefulness of the tool.
The structure of the workshops and the questions can be found in the ESM. The workshops were recorded, the video recordings were transcribed and summarised, and the feedback was used to further improve the first version of the tool upon agreement among the authors. The protocol, including the structure and questions that were used in the interviews and workshops, as well as the invitation e-mails and consent forms, has been approved by the FHML-REC, the Ethics Review Committee of the Faculty of Health, Medicine, and Life Sciences at Maastricht University.
To ensure that all users have a consistent understanding of the features and consequences in the tool, we developed a glossary sheet and integrated it into the tool. We used the identified literature on developing DAMs to inform the definitions in the glossary sheet. The definitions were reviewed by MJ and SG to reach a consensus.
Results
SMART (Systematic Model adequacy Assessment and Reporting Tool)
Steps of Decision-Analytic Model (DAM) Development and DAM Features
We grouped the steps of DAM development from the literature [7–9] into three overarching labels: model conceptualisation, model inputs and model analysis. (1) Model conceptualisation involves conceptualising the decision problem by translating the knowledge about the disease and its care pathway into a structured representation [7]. It also includes conceptualising the model structure, where elements of the decision problem are represented in a mathematical structure [7], and selecting modelling approaches that align with the conceptualised model [9]. (2) The second step, model inputs, involves identifying and populating all inputs [9]. (3) In the model analysis step, the DAM is run to generate outcomes, and uncertainties within the model are identified and evaluated. Within these steps, we identified 28 DAM features, each with both a simple and a complex modelling choice. For example, in the case of population heterogeneity, if the population is deemed heterogeneous, modelling all subgroups and/or including patient characteristics through individual patient-level modelling can lead to a more complex model compared to one that treats the population as homogeneous. Table 1 summarises all DAM features and their simple and complex modelling choices.
Table 1.
Features in decision-analytic models and their simple and complex modelling choices
| Model development steps | Model features | | Simple | Complex |
|---|---|---|---|---|
| Model conceptualisation | Population | Heterogeneity | Homogeneous | Heterogeneous |
| | | Dynamics | Closed population | Open population |
| | Intervention | Intervention–outcome relationship | Direct | Multifaceted |
| | | Placement in the care pathway | Clear/single placement | Unclear/multiple placements |
| | | Capacity constraints | No | Yes |
| | Comparator | Number of relevant comparators | Single or few | Many |
| | Outcomes | Number of outcomes | Few | Many |
| | | Type of outcomes | Intermediate | Long-term/final |
| | Time | Time horizon | Short | Long |
| | Perspective | Model perspective | Narrow | Broad |
| | Audience | Intended audience(s)/objective(s) | Single | Multiple |
| | Model structure | Number of health states/events | Few | Many |
| | | Connections between health states/events | Few | Many |
| | Modelling approach | Combined modelling approaches | No | Yes |
| | | Inclusion of memory | No | Yes |
| | | Interaction between individuals | No | Yes |
| | | Explicit modelling of time | No | Yes |
| | | Timing of modelled events and consequences | Approximate | Exact |
| Model inputs | | Literature review | Rapid and targeted | Systematic |
| | | Synthesis | No | Yes |
| | | Clinical data collection | No | Yes |
| | | Structured expert elicitation | No | Yes |
| | | Data analysis | Limited | Extensive |
| Model analysis | | Patient-level stochastic simulations (if applicable) | Few | Many |
| | | Deterministic analyses | Few | Many |
| | | Probabilistic analysis | No | Yes |
| | | Decision uncertainty analysis | No | Yes |
Consequences of Simplifying a DAM Feature
We have identified two key consequences of simplifying a DAM feature when the decision context requires a complex modelling choice: consequences for validity and for transparency. With validity, we refer to the degree to which a model accurately represents real-world scenarios, makes relevant predictions and produces reliable outcomes to inform the decision problem [15, 20]. Fully assessing a DAM's validity involves several steps, including identifying and correcting errors in model implementation, assessing conceptual validity, validating input data and ensuring that the model's predictions align sufficiently well with real-world data [15, 27]. In SMART (Systematic Model adequacy Assessment and Reporting Tool), we focus on assessing validity as the model's adequacy and the applicability of its results to the decision context. For example, if the decision context involves a heterogeneous patient population that consists of different subgroups with differences in the disease trajectory, modelling the population as homogeneous may produce misleading results, thus adversely impacting the validity of the model.
With transparency, we refer to the stakeholders' and reviewers' ability to understand and evaluate the model structure, parameters and results [15]. Transparency can also influence the replicability of the model results, depending on whether the technical information and details are reported clearly [15, 28]. In the same example as above, modelling a homogeneous population, when the actual population of the decision problem is heterogeneous, could have both positive and negative impacts on the model's transparency. For example, fewer states or branches in the DAM could make it easier for external reviewers to understand and evaluate the model, while the intended audience might prefer to see more details on treatment effectiveness based on the different characteristics of the population.
SMART
SMART integrates the theoretical framework of DAM features, their simple and complex modelling choices, and a section for evaluating the consequences of choices that deviate from the requirements of the decision context (Fig. 1). The tool includes three sections. Section 1 (Features of a decision analytic model) lists DAM features, with each row in the tool corresponding to a specific feature. Section 2 (Fill in the modelling choices) allows users to select each modelling choice from predefined options, first based on the decision context requirements and then based on their own choice. Requirements of the decision context can be informed by the decision problem, i.e. PICOTPA (population, intervention, comparator, outcomes, time horizon, perspective and audience of the decision problem), disease management guidelines, relevant reference cases and/or health economic guidelines. If the user deviates from these requirements, they must assess the consequences of their choices in Section 3. If a modelling choice simplifies a feature without fully capturing the necessary complexity, the conservative approach is to select the simpler option from the drop-down list in Section 2 and then assess its consequences in Section 3. Users may also select "Not applicable" if a feature is irrelevant to their decision context. In Section 3 (Fill in the consequences and provide justifications for your modelling choices), the user must identify the consequences of their deviating choices for validity and transparency and provide justifications for their modelling choice and the rationale used to assess its consequences. The user can also include supporting evidence (e.g., literature or published DAMs for the same disease).
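The core logic of the three sections — a consequence assessment becomes mandatory whenever the user's choice deviates from what the decision context requires — can be sketched as a small data structure. This is a hypothetical illustration of that logic only; the field names, option labels and example rows are invented and are not part of the published tool.

```python
# Hypothetical sketch of SMART's section logic: a feature row records the
# context-required choice and the modeller's choice (Section 2); the
# consequence assessment (Section 3) is required only when they differ.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FeatureRow:
    step: str                              # e.g. "Model conceptualisation"
    feature: str                           # e.g. "Number of outcomes"
    required: str                          # choice implied by the decision context
    chosen: str                            # modeller's actual choice
    validity: Optional[str] = None         # e.g. "Likely moderate impact"
    transparency: Optional[str] = None     # e.g. "No impact"
    justification: Optional[str] = None

    @property
    def needs_assessment(self) -> bool:
        """Section 3 is mandatory whenever the choice deviates."""
        return self.chosen != self.required and self.chosen != "Not applicable"

rows = [
    FeatureRow("Model conceptualisation", "Number of outcomes", "Many", "Few",
               validity="Likely moderate impact", transparency="No impact",
               justification="Only stroke and MI modelled due to constraints."),
    FeatureRow("Model conceptualisation", "Heterogeneity",
               "Homogeneous", "Homogeneous"),
]

# Flag deviating choices whose Section 3 fields were left empty
incomplete = [r.feature for r in rows
              if r.needs_assessment and not (r.validity and r.justification)]
print("Deviating choices missing justification:", incomplete)
```

Framing each row this way makes the tool's reporting discipline explicit: a deviation without a documented consequence and justification is an incomplete entry.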
Fig. 1.
SMART (systematic model adequacy assessment and reporting tool)
The steps required to complete the tool are explained in the user instructions sheet provided with the tool. A glossary sheet is also integrated into the tool to ensure consistent use among different users (also available in the ESM). SMART, including the glossary and user instructions sheets, can be found at https://osf.io/4j8qx/?view_only=7c32744f833845a596629e467f7c71ac.
Case Example: Treatment-Resistant Hypertension Early Decision Model
We used SMART in a case example on treatment-resistant hypertension (Table 2). The decision context involved developing an early DAM to assess the potential cost effectiveness of a new combination treatment and the associated digital biomarker for treating patients with treatment-resistant hypertension, compared to standard of care, from the Dutch societal perspective. The DAM needed to be developed within a limited budget and short timeline to support the funding decision for proceeding with the phase II trial, the first-in-human study.
Table 2.
SMART as used in the treatment-resistant hypertension case study example
| Model development steps | Model features (Section 1) | | Requirements of the decision context (Section 2.1) | Your modelling choices (Section 2.2) | Do you need to assess the consequences of your choice? | Validity (Section 3) | Transparency (Section 3) | Justification/further explanation (Section 3) |
|---|---|---|---|---|---|---|---|---|
| Model conceptualisation | Population | Heterogeneity | Homogeneous | Homogeneous | No | Please select | Please select | |
| | | Dynamics | Closed population | Closed population | No | Please select | Please select | |
| | Intervention | Intervention–outcome relationship | Direct | Direct | No | Please select | Please select | |
| | | Placement in the care pathway | Clear/single placement | Clear/single placement | No | Please select | Please select | |
| | | Capacity constraints | No | No | No | Please select | Please select | |
| | Comparator | Number of relevant comparators | Single or few | Single or few | No | Please select | Please select | |
| | Outcomes | Number of outcomes | Many | Few | Yes | Likely moderate impact | No impact | Due to time and data constraints, the model outcome for number of CV events will only focus on stroke and MI, the most common and costly CV events. This may moderately impact validity by underestimating treatment effectiveness, but transparency may not be affected by the simplified outcomes. |
| | | Type of outcomes | Long-term/final | Long-term/final | No | Please select | Please select | |
| | Time | Time horizon | Long | Long | No | Please select | Please select | |
| | Perspective | Model perspective | Broad | Narrow | Yes | Likely low impact | No impact | Dutch guidelines require the societal perspective; however, the model will only include the healthcare perspective due to time and data constraints. Including the societal perspective might present the treatment as more cost effective; therefore, the model's validity will not be significantly affected. No impact is expected on the model's transparency. |
| | Audience | Intended audience(s)/objective(s) | Single | Single | No | Please select | Please select | |
| | Model structure | Number of health states/events | Many | Few | Yes | Likely moderate impact | Less transparent | Including multiple health states is resource-intensive, so our model focuses on stroke and MI. While this may moderately impact validity by overestimating the ICER, it could also reduce transparency due to implicit assumptions. |
| | | Connections between health states/events | Many | Few | Yes | Likely moderate impact | More transparent | Using an aggregated structure and ignoring recurring events reduces connections among health states and events, likely having a moderate impact on validity by oversimplifying the complexity of the patient trajectory. However, it may enhance transparency and improve understanding for the model's audience. |
| | Modelling approach | Combined modelling approaches | Yes | Yes | No | Please select | Please select | |
| | | Inclusion of memory | Yes | No | Yes | Likely moderate impact | Less transparent | Due to time constraints, incorporating memory into the model is not feasible, potentially oversimplifying disease progression and overestimating treatment effectiveness, as health states and costs depend on prior events over time. This may moderately impact validity and reduce transparency regarding patient progression. |
| | | Interaction between individuals | No | No | No | Please select | Please select | |
| | | Explicit modelling of time | Yes | Yes | No | Please select | Please select | |
| | | Timing of modelled events and consequences | Exact | Approximate | Yes | Likely severe impact | More transparent | Time-dependent health states (post-stroke, post-MI, and HF Year 1 and Year 2) could be included to apply different risks, but due to time constraints, event timing and consequences will be modelled at a convenient point. This may reduce validity by oversimplifying time-sensitive risks, potentially overestimating treatment effectiveness. This approach can make it easier for the audience to review and understand the model. |
| Model inputs | | Literature review | Rapid and targeted | Rapid and targeted | No | Please select | Please select | |
| | | Synthesis | No | No | No | Please select | Please select | |
| | | Clinical data collection | No | No | No | Please select | Please select | |
| | | Structured expert elicitation | No | No | No | Please select | Please select | |
| | | Data analysis | Extensive | Limited | Yes | Likely moderate impact | Less transparent | Given time and resource constraints, we will use data and information from previous decision models that are already in a usable format for our model. While this approach might have a moderate impact on the validity, it may reduce transparency for our model reviewers and users. Without reading and checking original data sources, reviewers might find it difficult to fully understand the calculations behind this data. |
| Model analysis | | Patient-level stochastic simulations (if applicable) | Not applicable | Not applicable | No | Please select | Please select | |
| | | Deterministic analyses | Few | Few | No | Please select | Please select | |
| | | Probabilistic analysis | Yes | Yes | No | Please select | Please select | |
| | | Decision uncertainty analysis | Yes | No | Yes | Likely low impact | Less transparent | Dutch guidelines require VOI analysis, but due to time constraints, it will not be conducted. While this does not affect model validity, it reduces transparency by omitting information on the value of further research and the impact of uncertain data inputs and assumptions. |
We completed the tool based on the decision context by first filling in the requirements for modelling choices, then selecting our own modelling choices in Sect. 2. In Sect. 3, we assessed the consequences for validity and transparency of choices that deviated from the decision context requirements. Deviating choices were made for the number of outcomes, model perspective, number of health states/events, connections between health states and events, inclusion of memory, timing of modelled events and consequences, data analysis and decision uncertainty analysis. We also provided justifications for our modelling choices and the consequences assessment. The full decision context and the completed tool are provided in the ESM and can be accessed at https://osf.io/4j8qx/?view_only=7c32744f833845a596629e467f7c71ac.
Feedback from the Interviews and Expert Workshops
Five online in-depth interviews were conducted with experts in DAMs using Microsoft Teams. Each interview lasted approximately 60 minutes. Three of the participants were DAM developers, while two were DAM reviewers. Three participants worked in academic settings, one in an HTA agency, and one in both an academic setting and an HTA agency.
In addition, two online expert workshops were conducted using Microsoft Teams, each lasting 120 minutes. Five participants attended the first workshop on 28 November 2024, and seven attended the second workshop on 4 December 2024. Among all participants, nine were DAM developers and three were DAM reviewers. Eight participants worked in academic settings, two in HTA agencies, and two in industry settings.
Feedback and recommendations from the in-depth interviews and expert workshops were used to update and improve the tool (the evolution of the SMART framework based on feedback from interviews and workshops is detailed in the ESM). Table 3 summarises key findings, recommendations and corresponding adaptations from the interviews and workshops.
Table 3.
Key feedback and recommendations from the interviews and expert workshops and the adaptation made to address them
| Feedback/recommendation | Adaptation |
|---|---|
| Model features | |
| Many of the model features are interconnected, which means that the model choice for some features may affect choices selected for others and that some features may no longer be relevant when a simplification was made elsewhere. It was suggested to design a flow diagram in SMART, where features only appear when they are relevant | None. In SMART, we provide a structured overview of all DAM features to ensure they are systematically considered when making modelling choices, with space to note interconnections among them. A different interface with a flow diagram, where features appear only when relevant, can be explored in future research |
| Consequences | |
| Judging the validity of results when simplifications are made is difficult without building a complex model | While SMART cannot formally test the impact of modelling choices on validity, it can identify the direction and magnitude of bias introduced. We also added the option to select "Unknown impact on validity" in Column H when assessing the consequences for validity |
| A simple model does not always equate to a transparent model. When assessing transparency, the SMART user should consider the audience of the model | In the SMART user instructions, we indicated that users should assess the consequences for transparency from the perspective of the model audience |
| Time and resources were originally included as consequences. However, they were considered rather as a justification for any simplifications | We removed the section originally dedicated to assessing the consequences for time and resources |
| Generalisability/transferability of DAMs to other settings was considered vital, and this may also be influenced by simplifications in some model features. It was recommended to explore the feasibility of adding this as another consequence | While generalisability/transferability can be an important consequence of the developed model, this depends on whether the model is needed to address broader questions (e.g. public health policy). This might not always be the case. In future research, we may assess the applicability and relevance of generalisability/transferability within the tool |
| General feedback | |
| SMART is valuable for both building and assessing models, especially in settings with limited resources. Other use cases mentioned include planning model development for grant applications and teaching | We specified the intended users (DAM developers, reviewers and users) in the SMART user instructions |
| Maintain a standardised tool structure and provide a free space for additional user comments | We standardised the steps for completing the tool to avoid confusion among different SMART users |
| Including a glossary sheet with the tool, with a definition for each model feature, can ensure the clarity of the tool and help users fill in these features properly | We attached a glossary sheet to SMART |
| It is important to illustrate how the tool is used in a case study | We developed and included a case example for an early model of treatment-resistant hypertension in the ESM |
| SMART should remain simple to ensure its clarity and accessibility for non-experts in DAMs | We kept the structure of SMART concise. The user instructions and glossary sheet that were integrated into the tool provide clear descriptions of the tool elements and guidance for completing the different sections |
| Explore implementing SMART in alternative software or a web-based interface to enhance usability | We will explore an alternative interface for the tool in future research |
DAM decision-analytic model, SMART Systematic Model adequacy Assessment and Reporting Tool
Discussion
In this article, we developed SMART, which supports the systematic reporting and justification of modelling choices and the assessment of their consequences, to help balance the simplicity and adequacy of DAMs, thereby helping make DAMs as simple as possible, but not simpler. SMART is the first tool to aid users with the delicate balance between a DAM’s simplicity and complexity, ensuring it is adequate for informing the decision context. We believe SMART will be valuable for: (1) DAM developers in developing DAMs that are suitable for their intended use, ensuring the efficient utilisation of their time and resources; (2) DAM reviewers (i.e. regulatory bodies, HTA agencies and peer reviewers) and DAM users (i.e. decision makers, policymakers, clinicians, government agencies and healthcare providers) by making the consequences of modelling choices explicit, thereby enabling them to form an informed judgement on the suitability of the DAM for healthcare decision making. SMART and its output can be used in health economic assessment plans (e.g. grant applications for resource planning and budgeting, and study protocols), as well as supporting documentation for models submitted to journals or HTA agencies. In addition, SMART can help teach DAMs or support non-experts in health economic evaluations by providing a structured step-by-step approach for making modelling choices from predefined options, encouraging the assessment and justification of these choices. The tool can be updated iteratively alongside DAM development to reflect evolving evidence and the consequences of different modelling choices at each stage.
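The structured step-by-step approach described above, in which a modelling choice is recorded per feature together with its justification and its consequences for validity and transparency, could be sketched as a simple data structure. The sketch below is purely illustrative: the field names, feature names and summary function are our own assumptions and do not correspond to SMART's actual column labels, apart from the “Unknown impact on validity” option mentioned in the expert feedback.

```python
from dataclasses import dataclass

# Illustrative sketch only: field names are assumptions, not SMART's actual columns.
@dataclass
class FeatureAssessment:
    feature: str               # one of the 28 DAM features
    choice: str                # "simple" or "complex" modelling choice
    justification: str         # why this modelling choice was made
    validity_impact: str       # e.g. a described bias, or "Unknown impact on validity"
    transparency_impact: str   # assessed from the model audience's perspective

def summarise(assessments):
    """Return (number of simplified features, number with unknown validity impact)."""
    simplified = [a for a in assessments if a.choice == "simple"]
    unknown = [a for a in simplified
               if a.validity_impact == "Unknown impact on validity"]
    return len(simplified), len(unknown)

# Hypothetical records for two features of an early DAM
records = [
    FeatureAssessment("Time horizon", "simple",
                      "Short-term decision context",
                      "Unknown impact on validity",
                      "Easier for the clinical audience to follow"),
    FeatureAssessment("Comparators", "complex",
                      "All relevant comparators required by the HTA body",
                      "None expected",
                      "More effort needed to present clearly"),
]
print(summarise(records))  # → (1, 1)
```

Such a record per feature makes explicit which simplifications still carry an unassessed risk to validity, which is the kind of overview SMART aims to give reviewers and decision makers.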
SMART builds on the existing literature on DAM best practices. Key articles include Barton et al. and Brennan et al. [1, 25], who provided guidance on selecting model structures, the ISPOR Task Force reports [7, 11, 15, 22, 23] on best practices for developing DAMs, and Breeze et al. on developing complex system models for public health interventions [17]. Existing tools and checklists, including CHEERS, TECH-VER, AdViSHE and TRUST, among others, also support the quality and transparency of DAMs. However, none addresses the challenge of systematically balancing simplicity with validity and transparency for individual elements of DAMs. SMART expands on the existing literature by incorporating input from model developers, reviewers and users from the interviews and workshops to provide a detailed framework of all DAM features for which modelling choices can be made. It also enables its users to assess the consequences of these modelling choices, helping them determine whether the developed DAM can adequately address the decision context.
SMART will complement existing tools to enhance the quality of DAMs in health economic evaluations. By supporting the systematic consideration of model simplifications and their consequences, SMART enables a verdict on a DAM’s adequacy for a given decision context. In addition, reporting validation efforts (AdViSHE), transparent reporting of the model and the methods used (CHEERS), and systematically identifying uncertainty, its sources and its impact (TRUST) are important in their own right. Together, they create a comprehensive framework for building, evaluating and transparently communicating DAMs that support informed decision making.
SMART offers valuable support for decision-analytic modelling; however, it is important to acknowledge some limitations. The list of DAM features and their simple and complex choices was derived from the key literature on DAMs and our expertise; however, some degree of subjectivity remains in their selection. To mitigate this, interviews and workshops were conducted to evaluate the clarity and completeness of the list. Future research will involve usability testing to further improve the included DAM features and their modelling choices. Another consideration is that decision-analytic modelling in health economic evaluations is an evolving field, with continuous advancements in methodologies. As such, SMART should be periodically revisited to ensure its relevance and accuracy. Additionally, while Microsoft Excel enhances the tool’s accessibility, it is worth exploring the value of alternative software that might offer greater flexibility and better highlight the interrelations among DAM features. Future research should also test SMART’s application in real case examples, including real decision contexts across different development stages and settings.
Conclusions
Simplifying DAMs is sometimes necessary to help decision makers better understand complex problems and/or to accommodate time and resource constraints. However, these modelling choices often come with trade-offs in a DAM’s validity and transparency. SMART promotes transparent reporting and justification of modelling choices, and the assessment of their consequences for validity and transparency at the model feature level, making DAMs as simple as possible. SMART is designed for DAM developers, reviewers and users. Further research could involve conducting usability tests, exploring alternative software for implementing SMART and applying SMART across various decision-making settings. Ultimately, we aim for SMART to facilitate discussions among stakeholders on the optimal allocation of time and resources when developing DAMs for healthcare decision making.
Supplementary Information
Below is the link to the electronic supplementary material.
Acknowledgements
We express our gratitude to Joseph Loscalzo (Brigham and Women’s Hospital and Harvard Medical School), Adam J.N. Raymakers (Brigham and Women’s Hospital and Harvard Medical School), Anna Vassall (The London School of Hygiene & Tropical Medicine), Ed Wilson (University of Exeter), Elisabeth Fenwick (OPEN Health), Hazel Squires (University of Sheffield), Mathyn Vervaart (Oslo University Hospital/University of Oslo), Mohamed El Alili (Zorginstituut Nederland/Vrije Universiteit Amsterdam), Natalia Kunst (University of York), Penny Breeze (University of Sheffield), Ron Handels (Maastricht University), Saskia Knies (Zorginstituut Nederland), Talitha Feenstra (University of Groningen), Thea van Asselt (University of Groningen), Xavier G.L.V. Pouwels (University of Twente) and the anonymous participants for their valuable contributions to this paper.
Declarations
Funding
This project is funded by the European Union under grant agreement No. 101057619. The views and opinions expressed are, however, those of the authors only and do not necessarily reflect those of the European Union or European Health and Digital Executive Agency. Neither the European Union nor the granting authority can be held responsible for them.
Conflicts of Interest
Teebah Abu-Zahra, Sabine Grimm, Mirre Scholte and Manuela Joore have no competing financial or non-financial interests that are directly or indirectly related to the content of this work.
Ethics Approval
The protocol of this research article, including the structure and questions that were used in the interviews and workshops, as well as the invitation e-mails and consent forms, has been approved as a non-WMO study by the FHML-REC, the Ethics Review Committee of the Faculty of Health, Medicine, and Life Sciences at Maastricht University.
Consent to Participate
Not applicable.
Consent for Publication
Not applicable.
Availability of Data and Material
SMART and the treatment-resistant hypertension case example are available at the following link: https://osf.io/4j8qx/?view_only=7c32744f833845a596629e467f7c71ac.
Code Availability
Not applicable.
Authors’ Contributions
TA-Z: writing, conceptualisation, reviewing and editing of the original draft, writing and editing of the research protocol, methodology development, investigation, visualisation, validation, data curation and formal analysis. SEG: conceptualisation, review and editing of the original draft, validation, visualisation, supervision, project administration and funding acquisition. MS: conceptualisation, review and editing of original draft, investigation, visualisation and validation. MJ: conceptualisation, review and editing of the original draft, validation, visualisation, supervision, project administration and funding acquisition. We hereby confirm that all authors have read and approved the final version of the article prior to submission.
References
- 1.Barton P, Bryan S, Robinson S. Modelling in the economic evaluation of health care: selecting the appropriate approach. J Health Serv Res Policy. 2004;9(2):110–8. 10.1258/135581904322987535. [DOI] [PubMed] [Google Scholar]
- 2.Bouttell J, Briggs A, Hawkins N. A toolkit of methods of development-focused health technology assessment. Int J Technol Assess Health Care. 2021;37(1): e84. 10.1017/S0266462321000507. [DOI] [PubMed] [Google Scholar]
- 3.Grutters JPC, et al. Methods for early assessment of the societal value of health technologies: a scoping review and proposal for classification. Value Health. 2022;25(7):1227–34. 10.1016/j.jval.2021.12.003. [DOI] [PubMed] [Google Scholar]
- 4.Nemzoff C, et al. Adaptive health technology assessment to facilitate priority setting in low- and middle-income countries. BMJ Glob Health. 2021;6(4): e004549. 10.1136/bmjgh-2020-004549. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 5.Pearson I, et al. Economic modeling considerations for rare diseases. Value Health. 2018;21(5):515–24. 10.1016/j.jval.2018.02.008. [DOI] [PubMed] [Google Scholar]
- 6.Abu-Zahra T, et al. How health technology assessment can help to address challenges in drug repurposing: a conceptual framework. Drug Discov Today. 2024;29(6): 104008. 10.1016/j.drudis.2024.104008. [DOI] [PubMed] [Google Scholar]
- 7.Roberts M, et al. Conceptualizing a model: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force-2. Value Health. 2012;15(6):804–11. 10.1016/j.jval.2012.06.016. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8.Tappenden P, Chilcott J. Avoiding and identifying errors and other threats to the credibility of health economic models. Value Health. 2014;17(7):A585. 10.1016/j.jval.2014.08.1991. [DOI] [PubMed] [Google Scholar]
- 9.Kunst N, et al. A guide to an iterative approach to model-based decision making in health and medicine: an iterative decision-making framework. Pharmacoeconomics. 2024;42(4):363–71. 10.1007/s40273-023-01341-z. [DOI] [PubMed] [Google Scholar]
- 10.Abel L, et al. Early economic evaluation of diagnostic technologies: experiences of the NIHR diagnostic evidence co-operatives. Med Decis Making. 2019;39(7):857–66. 10.1177/0272989X19866415. [DOI] [PubMed] [Google Scholar]
- 11.Caro JJ, et al. Modeling good research practices: overview: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force-1. Value Health. 2012;15(6):796–803. 10.1016/j.jval.2012.06.012. [DOI] [PubMed] [Google Scholar]
- 12.Chilcott J, et al. Avoiding and identifying errors in health technology assessment models: qualitative study and methodological review. Health Technol Assess. 2010;14(25):1–107. 10.3310/hta14250. [DOI] [PubMed] [Google Scholar]
- 13.Jin H, et al. Overview and use of tools for selecting modelling techniques in health economic studies. Pharmacoeconomics. 2021;39(7):757–70. 10.1007/s40273-021-01038-1. [DOI] [PubMed] [Google Scholar]
- 14.Squires H, et al. A framework for developing the structure of public health economic models. Value Health. 2016;19(5):588–601. 10.1016/j.jval.2016.02.011. [DOI] [PubMed] [Google Scholar]
- 15.Eddy DM, et al. Model transparency and validation: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force-7. Med Decis Making. 2012;32(5):733–43. 10.1177/0272989X12454579. [DOI] [PubMed] [Google Scholar]
- 16.Kaltenthaler E, Tappenden P, Paisley S. Reviewing the evidence to inform the population of cost-effectiveness models within health technology assessments. Value Health. 2013;16(5):830–6. 10.1016/j.jval.2013.04.009. [DOI] [PubMed] [Google Scholar]
- 17.Breeze PR, et al. Guidance on the use of complex systems models for economic evaluations of public health interventions. Health Econ. 2023;32(7):1603–25. 10.1002/hec.4681. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 18.Husereau D, et al. Consolidated Health Economic Evaluation Reporting Standards 2022 (CHEERS 2022) statement: updated reporting guidance for health economic evaluations. Health Policy Open. 2022;3: 100063. 10.1016/j.hpopen.2021.100063. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19.Buyukkaramikli NC, et al. TECH-VER: a verification checklist to reduce errors in models and improve their credibility. Pharmacoeconomics. 2019;37(11):1391–408. 10.1007/s40273-019-00844-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 20.Vemer P, et al. AdViSHE: a validation-assessment tool of health-economic models for decision makers and model users. Pharmacoeconomics. 2016;34(4):349–61. 10.1007/s40273-015-0327-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21.Grimm SE, et al. Development and validation of the TRansparent uncertainty ASsessmenT (TRUST) tool for assessing uncertainties in health economic decision models. Pharmacoeconomics. 2020;38(2):205–16. 10.1007/s40273-019-00855-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 22.Briggs AH, et al. Model parameter estimation and uncertainty analysis: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force Working Group-6. Med Decis Making. 2012;32(5):722–32. 10.1177/0272989X12458348. [DOI] [PubMed] [Google Scholar]
- 23.Weinstein MC, et al. Principles of good practice for decision analytic modeling in health-care evaluation: report of the ISPOR Task Force on Good Research Practices-Modeling Studies. Value Health. 2003;6(1):9–17. 10.1046/j.1524-4733.2003.00234.x. [DOI] [PubMed] [Google Scholar]
- 24.Jaime Caro J, et al. Questionnaire to assess relevance and credibility of modeling studies for informing health care decision making: an ISPOR-AMCP-NPC Good Practice Task Force report. Value Health. 2014;17(2):174–82. 10.1016/j.jval.2014.01.003. [DOI] [PubMed] [Google Scholar]
- 25.Brennan A, Chick SE, Davies R. A taxonomy of model structures for economic evaluation of health technologies. Health Econ. 2006;15(12):1295–310. 10.1002/hec.1148. [DOI] [PubMed] [Google Scholar]
- 26.Palmer AJ, et al. Computer modeling of diabetes and its transparency: a report on the Eighth Mount Hood Challenge. Value Health. 2018;21(6):724–31. 10.1016/j.jval.2018.02.002. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27.Corro Ramos I, et al. Evaluating the validation process: embracing complexity and transparency in health economic modelling. Pharmacoeconomics. 2024;42(7):715–9. 10.1007/s40273-024-01364-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28.Bermejo I, Tappenden P, Youn JH. Replicating health economic models: firm foundations or a house of cards? Pharmacoeconomics. 2017;35(11):1113–21. 10.1007/s40273-017-0553-x. [DOI] [PubMed] [Google Scholar]