Author manuscript; available in PMC: 2014 Oct 23.
Published in final edited form as: Value Health. 2012 Sep-Oct;15(6):804–811. doi: 10.1016/j.jval.2012.06.016

Conceptualizing a Model: A Report of the ISPOR-SMDM Modeling Good Research Practices Task Force-2

Mark Roberts 1,*, Louise B Russell 2, A David Paltiel 3, Michael Chambers 4, Phil McEwan 5, Murray Krahn, on Behalf of the ISPOR-SMDM Modeling Good Research Practices Task Force6
PMCID: PMC4207095  NIHMSID: NIHMS394740  PMID: 22999129

Abstract

The appropriate development of a model begins with understanding the problem that is being represented. The aim of this article was to provide a series of consensus-based best practices regarding the process of model conceptualization. For the purpose of this series of articles, we consider the development of models whose purpose is to inform medical decisions and health-related resource allocation questions. We specifically divide the conceptualization process into two distinct components: the conceptualization of the problem, which converts knowledge of the health care process or decision into a representation of the problem, followed by the conceptualization of the model itself, which matches the attributes and characteristics of a particular modeling type with the needs of the problem being represented. Recommendations are made regarding the structure of the modeling team, agreement on the statement of the problem, the structure, perspective, and target population of the model, and the interventions and outcomes represented. Best practices relating to the specific characteristics of model structure and which characteristics of the problem might be most easily represented in a specific modeling method are presented. Each section contains a number of recommendations that were iterated among the authors, as well as among the wider modeling taskforce, jointly set up by the International Society for Pharmacoeconomics and Outcomes Research and the Society for Medical Decision Making.

Keywords: conceptualization, best practices, methods, modeling

Introduction

Perhaps no other word in the policy analyst’s lexicon inspires greater confusion among lay observers than the word “model.” Most would agree that a model is a simplified representation of reality. Beyond that description, the term may lead in various directions. The Task Force has agreed that for its context, a model’s purpose is to inform medical decisions and health-related resource allocation questions. Thus, this article is restricted to models as normative decision-making aids, and recommendations apply most directly to models that structure evidence on clinical and economic outcomes in a form that helps decision makers choose from among competing courses of action and allocate limited resources. It excludes from consideration several useful, scientifically sound modeling forms. For example, regression models lie outside the scope of this report. While regression is of critical importance in generating inputs for models, it is a descriptive method that explains and predicts the relationship between inputs and outputs. A regression model, however, cannot give normative direction regarding policy options. An infectious disease transmission model is beyond this report’s scope if it is about what epidemics do but is within scope if it uses that information to evaluate what can be done to affect epidemics.

This article describes two distinct components of the modeling process (Fig. 1): the problem conceptualization, which converts knowledge of the health care process or decision into a representation of the problem, followed by model conceptualization, in which the components of the problem are represented by using a particular analytic method (1 in figure). The model’s conceptual representation will usually direct the decision as to which modeling technique to use (2, 3, and 4 in figure). This article covers the process up to technique selection.

Fig. 1. Development and construction of a model.

Fig. 1

The numbers in the figure represent the methods papers in this series: 1) this conceptualization paper, which describes the conceptualization of both the problem and the model; 2), 3), and 4) the papers describing the three main kinds of modeling methods addressed: state-transition models, discrete event and agent-based models, and dynamic transmission models; 5) parameter estimation and calibration of the models; and 6) the transparency and validation of a model. See text for details.

Conceptualizing the Problem

Statement of problem and objectives

Before constructing a model, it is important to be clear about the nature of the problem under consideration and the project objectives, which will usually fall into one of several categories:

Guide clinical practice

A study involving six models designed to support the recommendations of the US Preventive Services Task Force (USPSTF) on mammography screening [7] will be used as an ongoing example of how the objectives, scope, and policy context of a modeling exercise are described (see Box).

Box.

Defining the objectives, scope, and policy context of a model (here, six models): Effects of mammography screening under different screening schedules [7]

Decision problem/decision objective: To evaluate US breast cancer screening strategies.
Policy context: This analysis was used to inform the 2009 US Preventive Services Task Force recommendations on breast cancer screening.
Funding source: AHRQ, NCI.
Disease: Breast cancer. Four models included ductal carcinoma in situ, two did not; cancer was characterized by estrogen receptor status, tumor size, and stage in all models and by calendar year in three.
Perspective: Stated as societal. Health outcomes are breast cancer outcomes for patients. Limited modeling of resources used (see below). The US Preventive Services Task Force does not consider costs in making its recommendations.
Target population: Cohort of US women born in 1960. Subgroups were defined by age and the disease characteristics noted above. Subgroups mentioned in the report but not analyzed: BRCA1 and BRCA2, black, comorbidities, HRT, obese.
Health outcomes: Reduction in breast cancer deaths and life-years gained; false-positive results; overdiagnosis. Explicitly not included: morbidity from unnecessary biopsies or from treatment.
Strategies/comparators: Screening: twenty mammography screening strategies defined by frequency (annual or biennial), starting age (40, 45, 50, 55, or 60 y), and stopping age (69, 74, 79, or 84 y); no screening. Assumed 100% compliance. Follow-up treatment: ideal and observed patterns.
Resources/costs: Number of mammograms, unnecessary biopsies.
Time horizon: Remaining lifetime of women.

AHRQ, Agency for Healthcare Research and Quality; HRT, hormone replacement therapy; NCI, National Cancer Institute.

Inform a funding decision or reimbursement rate for a new intervention

For example, the cost-effectiveness of multidisciplinary heart failure clinics was evaluated to guide the Ontario Health Technology Advisory Committee’s decision regarding their widespread diffusion [8].

Optimize use of scarce resources

For example, a model of the US organ allocation system was developed to guide policy around the use of livers for transplantation [9].

Guide public health practice

For example, a model was developed to assess the cost-effectiveness of universal vaccination for epidemic influenza [10].

The problem’s nature will have important implications for model structure, data requirements, analytic strategy, and reporting. Components of the problem, including factors such as disease or condition, patient populations, diagnostic or therapeutic actions and interventions, and outcomes, will be addressed below.

Although the problem’s general nature may seem clear, there is often some ambiguity leading to variation in understanding of the problem by stakeholders. For example, while it seems clear that a model of a genetic test aiding patient selection for adjuvant breast cancer therapy [11] was developed to inform the decision whether to cover it, it subsequently became apparent that the problem could be understood in several ways. One was to ask what the consequences of a positive decision were likely to be in practice regarding health outcomes and costs. A model answering this question would represent practice regarding clinical risk stratification, the new test’s use, and chemotherapy use conditional on test results. The potential benefits of testing are then compared with current practice. A second way is to ask about the optimal circumstances of test use to maximize patient outcomes. A model answering this question must explore benefits of testing in a wide variety of risk groups and treatment options conditional on test results, irrespective of how the test is currently used.

Early specification of the decision problem and modeling objectives will improve model building efficiency. Defining the modeling objective is an iterative process, and specific objectives may change as understanding of the problem deepens.

Best practices

II-1 The modeling team should consult widely with subject experts and stakeholders to assure that the model represents disease processes appropriately and adequately addresses the decision problem

It is important to read and consult widely and to refine the problem definition early in model development. Existing models addressing related problems should be reviewed, and the modeling team should understand the clinical and policy literature describing the problem. Clinical, epidemiologic, policy, and methodological experts should be consulted. Clinical experts are central to developing a representation of clinical practice, and policy experts should be consulted when the model addresses a health policy decision. Consultations with patients may deepen understanding of the values and preferences relevant to the problem.

Best practices

II-2 A clear, written statement of the decision problem, modeling objective, and scope should be developed. This should include: disease spectrum considered, analytic perspective, target population, alternative interventions, health and other outcomes, and time horizon

It is very useful to state the problem in writing early in model formulation. It “lets the problem stakeholders and decision makers provide direct input into the model. . . . Once complete, the narrative . . . serves as a reference point for further discussion and refining the problem description” [12]. The process of creating a problem statement may uncover variations in stakeholders’ conceptualization and aid the development of clear, shared, modeling objectives, which should be included in the written statement.

To build a model, the analyst must choose a structure appropriate for the problem and identify data to populate it. Thus, the next step is to make the problem more specific [13–15]. The appropriate perspective must be carefully defined, as must the target population, the health outcomes of importance for that population, the technologies (new or old) and settings to be considered for addressing the disease, whether and how costs will be represented, and the time horizon over which all outcomes will be projected. The experts and stakeholders who helped frame the problem should be involved in defining the model specifications. The development of statements characterizing model objectives and specifications can be simultaneous.

Best practices

II-2a A model’s scope and structure should be consistent with, and adequate to address, the decision problem and policy context

The condition specified in the problem plays a critical part in determining relevant interventions and health outcomes. Typically, a single disease (e.g., breast cancer) or a set of closely related diseases (e.g., cardiac, cerebrovascular, and peripheral vascular disease) is of interest. Other conditions may be included if they are sequelae of the disease of interest or common comorbidities that affect its course. However, the decision problem, and thus the model, can encompass a broad range of conditions (e.g., Statistics Canada maintains a population health model that simulates the effects of risk factors such as smoking and weight on the development and course of a wide range of diseases, including osteoarthritis, cancer, diabetes, and heart disease [16]).

The availability of data may constrain model development, but the initial discussion of the problem should range broadly and encompass features of the disease and its outcomes for which data may be poor or unavailable. It is important to have a complete picture of the problem, regardless of data availability. It is also often possible to conduct sensitivity analyses on model features for which no data exist in order to investigate their influence on the results [5] (e.g., the breast cancer screening models used in the USPSTF modeling exercise [7] include a component that represents the unobserved preclinical stages of breast cancer). Various methods are available for inferring possible values for unobserved model parameters [17].

Best practices

II-2b The analytic perspective should be stated and defined. Outcomes modeled should be consistent with the perspective. Analyses which take a perspective narrower than societal should report which outcomes are included and which excluded

Perspectives commonly considered are those of the patient, the health plan or insurer, and society. In some cases, the employer’s perspective (responsible for health insurance premiums and interested in workforce productivity) may be important. The Panel on Cost-Effectiveness in Health and Medicine recommended the societal, or public interest, perspective [18]. This includes all significant health outcomes and costs, no matter who experiences them or whether the costs are matched by budgetary outlays. Perhaps because of this recommendation, analysts sometimes assert that they used the societal perspective, even when the outcomes and costs included are those of a narrower (“health care payer”) perspective [19].

When a model simulates disease without assigning costs, the perspective is typically left unstated. Most models focus on health outcomes accruing to patients who have, or are at risk of, the disease of interest and receive the interventions modeled. Effects on the health of others are not included. Although widely used, this perspective has not been explicitly defined or named; we will call it the medical sector perspective. This perspective is closest to that of clinical decision making, where health outcomes associated with treatment options for the presenting patient are considered on the basis of evidence from cohorts of similar patients.

When costs are included, modelers usually state the perspective explicitly. Because the perspective for health outcomes has commonly been left unstated, it is often not recognized that all outputs should be analyzed from the same perspective. In practice, the great majority of analyses adopt the medical sector perspective for both health and cost effects, a perspective conventionally, if inaccurately, described as the health care payer perspective [19]. This perspective includes only those health outcomes experienced by patients receiving the interventions modeled; costs are those of the medical services required to provide the intervention. These differ from costs to a health plan or insurer if patients are responsible for co-pays and co-insurance. Resources provided without payment, such as the time of volunteers, family members, and patients themselves, as well as costs incurred outside the medical sector, are not included.

The USPSTF mammography evaluation stated that it adopted the societal perspective (Box), but the outcomes modeled suggest that it might be better described as the medical sector perspective. The evaluation modeled breast cancer outcomes and limited costs to those of mammograms and unnecessary biopsies, in keeping with the charge to base recommendations only on medical effectiveness.

Choice of perspective, and the care with which it is carried through, is a source of variation within and across studies. Its impact on a study’s results could be explored through sensitivity analysis, but whether or not this is done, it is important to be accurate in describing and correctly applying the chosen perspective.

Best practices

II-2c The target population should be defined in terms of features relevant to the decision (e.g., geography, patient characteristics, including comorbid conditions, disease prevalence and stage)

The target population consists of patients who have, or might develop, the disease(s) and who will receive the interventions being modeled. The population is also defined by geography, as the patients live in specific communities and countries. The disease stage and timing, or route of access to the intervention, often affects the definition. A vaccine for children necessarily implies that the target population is children, but the options can be more complex. An evaluation of rubella vaccine, for example, considered the vaccination of children or women of childbearing age, and therefore had two rather different target populations [20].

In some cases, people who are not the intervention’s target will be affected by it. An obvious example is vaccination, which often confers benefits (herd immunity) on unvaccinated people [21]. Folic acid fortification of grains aims to prevent neural tube defects in infants but may harm the elderly [22]. Health outcomes and consequences of introducing interventions may confer (or reduce) substantial responsibilities on families and friends, which can generate costs and affect their health. In such cases, consideration should be given to these additional effects.

The target population may need to be classified into subgroups to reflect characteristics that differentially affect disease course or the intervention’s impact, and thereby costs and other model outcomes. These groups may be characterized by age (older than 65 years, younger than 65 years), prior disease course (presence of a complication or not), health behaviors (smokers vs. nonsmokers), comorbidities (patients with and without diabetes), and genetic predisposition or family history. The variety and levels of these characteristics can affect the choice of model [21]. When there are relatively few, models based on group averages (“cohort models”) might be used. A greater number of characteristics may require several cohort models for different comorbidity and age strata. As the number of characteristics (and the number of levels required for each) increases, models based on individuals will become the more practical choice. Such “microsimulations” can record individuals’ initial characteristics, how these change over time, and historical factors such as prior health states or interventions. There is no absolute limit to the number of states in a state-transition model (the Coronary Heart Disease Policy Model contains many thousands of states). In practice, however, cohort models with more than 30 to 50 states become unwieldy; few current cohort models include more than 100 health states.
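The trade-off between group-average (cohort) states and individual-level simulation can be illustrated with a minimal state-transition sketch. The three states and all transition probabilities below are hypothetical, chosen only for illustration:

```python
# Minimal three-state cohort (Markov) model sketch: Well -> Sick -> Dead.
# All transition probabilities are hypothetical illustration values.

def run_cohort(p_well_sick=0.10, p_sick_dead=0.20, p_well_dead=0.02, cycles=10):
    """Track the fraction of a homogeneous cohort in each state per cycle."""
    well, sick, dead = 1.0, 0.0, 0.0
    trace = [(well, sick, dead)]
    for _ in range(cycles):
        new_well = well * (1 - p_well_sick - p_well_dead)
        new_sick = well * p_well_sick + sick * (1 - p_sick_dead)
        new_dead = dead + well * p_well_dead + sick * p_sick_dead
        well, sick, dead = new_well, new_sick, new_dead
        trace.append((well, sick, dead))
    return trace

trace = run_cohort()
```

Each additional patient characteristic (age stratum, comorbidity, smoking status) multiplies the number of states such a trace must carry, which is what eventually makes an individual-level microsimulation the more practical choice.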

The target population can be modeled as open (new members can enter as time progresses) or closed (members enter only at the beginning) [23]. The open approach can represent an ongoing intervention program and is often the basis for budget impact calculations. The closed approach corresponds more closely to the medical sector perspective and is often used in health technology assessments. The USPSTF modeled a closed cohort of women born in 1960, with screening starting no sooner than the year they turned 40. Modeling a series of cohorts can bridge the two approaches.
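The open/closed distinction can be sketched numerically; the initial size, exit probability, and entrant count below are hypothetical:

```python
# Closed population: members enter only at the start and the cohort shrinks.
# Open population: new members enter every cycle, so the program is ongoing.
# All numbers are hypothetical illustration values.

def closed_cohort(n0=1000.0, p_exit=0.05, cycles=20):
    sizes = [n0]
    for _ in range(cycles):
        sizes.append(sizes[-1] * (1 - p_exit))
    return sizes

def open_cohort(n0=1000.0, entrants=50.0, p_exit=0.05, cycles=20):
    sizes = [n0]
    for _ in range(cycles):
        sizes.append(sizes[-1] * (1 - p_exit) + entrants)
    return sizes
```

The closed cohort decays toward zero, while the open cohort approaches a steady state where entrants balance exits (entrants / p_exit = 1000 with these illustrative numbers), which is why the open form suits budget impact calculations for an ongoing program.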

Best practices

II-2d Health outcomes, which may be events, cases of disease, deaths, life-years gained, quality-adjusted life-years, disability-adjusted life-years, or other measures important to stakeholders, should be directly relevant to the question being asked

Health outcomes can be represented in many ways. They may be clinically defined states or events (e.g., myocardial infarction, hepatitis B infection, and cancer death); changes in physiologic parameters (e.g. glomerular filtration rate); or health indices (e.g. quality-adjusted life-years, disability-adjusted life-years [21]) that characterize health using a vector composed of separate measures of quality and quantity of life, and possibly other factors (age or equity adjustments). Outcomes may be subjective (e.g., anxiety while waiting for biopsy results) or objective (biopsy results). Broader metrics are popular with funding agencies as they facilitate budgetary allocations across disease areas.
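As a concrete instance of such a health index, a quality-adjusted life-year total is simply time spent in each health state weighted by a utility between 0 (dead) and 1 (full health). The states and utility weights below are hypothetical:

```python
# Sketch of a QALY calculation: years in each state weighted by a utility.
# States and utility weights are hypothetical illustration values.

def qalys(years_in_state, utilities):
    """Sum years x utility over health states."""
    return sum(y * u for y, u in zip(years_in_state, utilities))

# e.g., 5 years well (utility 1.0), 3 years in a chronic state (0.8),
# 2 years in a severe state (0.6): 5 + 2.4 + 1.2 = 8.6 QALYs
total = qalys([5.0, 3.0, 2.0], [1.0, 0.8, 0.6])
```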

Event outcomes are usually selected because they are associated with better health. They may be referred to as “intermediate outcomes,” but this must be distinguished from “intermediate” physiological or biologic measures (e.g., tumor response, blood pressure) that may be used to project “final outcomes” in a model using predictive equations. Some models (addressing issues of process efficiency in health care delivery) may not explicitly represent health outcomes at all, but only processes (waiting times, number of visits, length of stay) that are indirectly linked to health. It is generally recommended that “models should include long-term or final outcomes” [14].

Modeling relevant outcomes related to final end points usually requires a series of intermediate disease states that track the condition’s progress and effects of interventions. A realistic model will include each disease aspect that may result in significantly different outcomes. In the mammography evaluation [7], these intermediate states were the breast cancer stages detected clinically or through screening.

In addition to beneficial effects, the adverse consequences of interventions should be modeled to produce an accurate picture. If adverse effects are not automatically captured, as in mortality rates associated with treatment, they must be modeled separately. The mammography evaluation included false-positive screens, unnecessary biopsies, and overdiagnosis as adverse screening effects but did not include morbidity from biopsies or treatment.

Best practices

II-2e Interventions modeled in the analysis should be clearly defined in terms of frequency, component services, dose or intensity, duration, and any variations required for subgroups, and should include standard care and other strategies routinely considered and in use

It is critically important to model all practical interventions and their variations [14]. Nevertheless, the range of interventions considered is bounded by the problem. Although there are many breast cancer interventions, the USPSTF evaluation addressed only mammography screening. The choice of comparators has a major impact on estimated effectiveness and efficiency, and the results are meaningful only in relation to the interventions considered. The mammography evaluation investigated 20 screening strategies defined by mammogram frequency, age at screening start, and age at end and considered two treatment patterns: ideal and actually observed in the United States.

The form interventions take will differ across countries and often across settings within countries. Thus, despite the same label (e.g., “breast cancer screening”), the effects may differ, depending on the practice patterns in the target population area. Although the model should reflect the applicable practice patterns, it is important to specify their components in detail so that users can determine how well the analysis reflects their situations. When evaluating results of a model from another setting, even if costs are transformed to the appropriate local currency, practice patterns and prices of drugs and services may be significantly different and hinder generalizability of results.

Best practices

II-3 Although data are essential to a model, the conceptual structure should be driven by the decision problem or research question and not determined by data availability

Nevertheless, the model’s credibility will be evaluated, at least in part, by the quality of data it employs, particularly for key parameters such as treatment effectiveness or diagnostic test characteristics. If what the field regards as key evidence is omitted, the model’s credibility diminishes. Thus, data selection requires attention to the sometimes competing criteria of fidelity to the problem, representativeness, and data quality.

Best practices

II-3a The choice of comparators crucially affects results and should be determined by the problem, not by data availability or quality. All feasible and practical strategies should be considered. Constraining the range of strategies should be justified

Comparisons should address all interventions relevant to the problem. These may be specific alternatives, or a distribution that reflects routine (or “standard”) practice, or even no intervention (the “natural” disease course). When the latter is standard practice, it should be a comparator. If an intervention can take different forms, these should be included and compared with each other.

Best practices

II-3b The time horizon of the model should be long enough to capture relevant differences in outcomes across strategies. A lifetime time horizon may be required

The choice between closed and open population affects the time horizon choice. A cohort simulation is implicitly constrained by the cohort’s lifetime. This is not the case for open models, where the modeler needs to make separate decisions about program duration and how long the model should be run to capture program effects.

Modeling over patients’ lifetimes usually requires extrapolating well beyond available data, since trials and observational studies rarely cover such long periods. Thus, short-term effects and costs may be based on primary data, while longer-term ones must be extrapolated. Discounting of future costs and health outcomes limits the impact of using a lengthy time horizon [15]. Sensitivity analyses should be performed examining upper and lower boundary cases for assumptions used in the extrapolations.
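The point that discounting limits the impact of a lengthy time horizon can be made concrete with a short present-value sketch; the 3% rate and 40-year horizon are illustrative:

```python
# Sketch of discounting a stream of yearly health outcomes or costs.
# The 3% rate and 40-year horizon are hypothetical illustration values.

def discounted_sum(values, rate=0.03):
    """Present value of a yearly stream (year 0 undiscounted)."""
    return sum(v / (1 + rate) ** t for t, v in enumerate(values))

undiscounted = sum([1.0] * 40)           # 40 life-years
discounted = discounted_sum([1.0] * 40)  # ~23.8 at 3%: distant years count less
```

Because a life-year 40 years out contributes only about 0.31 of a present-year equivalent at 3%, errors in long-range extrapolations matter less than the raw horizon length suggests, though they should still be bounded by sensitivity analysis.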

There may also be secular trends in the disease over patients’ lifetimes. For example, when vaccination is successful against covered serotypes, those not covered may over time become more widespread (“serotype replacement”). If these trends are likely to significantly affect the disease or intervention, they should be incorporated in the model, at least for sensitivity analysis. It is, however, generally not useful (or feasible) to project trends in treatment beyond the introduction of the intervention of interest. Of note, the time horizon considered important by the decision maker may not incorporate the entire time horizon of the disease. The choice of time horizon and its justification should be explicitly stated in model development.

Valuing outcomes

Models require a value structure—a way of valuing outcomes. Quality-adjusted life-years [2426], or in some developing countries, disability-adjusted life-years, are common ways of expressing value [18]. Resources used by the interventions modeled should be described in detail so that decision makers can tell how closely the model resembles the interventions they are considering. Resource use is typically valued in monetary terms [27,28]. In the short term, certain resources (e.g., number of hospital beds or mammography technicians) are fixed, and it can be prohibitively expensive or impossible to increase them. In the longer term, these resources could be increased or decreased as necessary. Most clinical guidelines are intended for long-term use, and thus the long-term approach is appropriate for guidelines development.

Best practices

II-4 The problem conceptualization should be used to identify key uncertainties in model structure where sensitivity analyses could inform their impact

Each decision made in problem conceptualization has the potential to alter the results. During the conceptualization, experts and modelers should identify assumptions that should be evaluated through structural sensitivity analysis. This may impact the choice of modeling type: some sensitivity analyses require a change in structure in one type but reduce to a parameter sensitivity analysis in another.
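As a minimal sketch of how a structural assumption in one model type can reduce to a parameter sensitivity analysis in another, consider a hypothetical model feature governed by a single probability, where setting it to zero removes the feature entirely:

```python
# Structural sensitivity analysis reduced to a parameter sweep: a hypothetical
# feature (here, detection of preclinical disease) is controlled by one
# probability; p_detect = 0 corresponds to a structure without the feature.
# The outcome function and all numbers are hypothetical illustration values.

def deaths_averted(p_detect, base_deaths=100.0, benefit_if_detected=0.3):
    """Hypothetical outcome that scales with the assumed detection probability."""
    return base_deaths * p_detect * benefit_if_detected

sweep = {p: deaths_averted(p) for p in (0.0, 0.25, 0.5, 0.75, 1.0)}
```

Comparing results across the sweep, including the zero case, shows how sensitive conclusions are to including the assumed structural feature at all.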

Best practices

II-5 The policy context of the model should be clearly stated. This includes the funder, developer, whether the model is for single or multiple application, and the policy audience

The development of models is often explicitly linked to policy questions. Health technology assessment (HTA) agencies often commission models to evaluate the cost-effectiveness of interventions. The close linkage between model development and policy has implications for model construction, as modelers will work within the context of specific methodological guidelines that reflect the decision makers’ views, priorities, and values [29,30]. Other models may be developed unrelated to a specific policy application with objectives such as scientific discovery, evaluation of broadly relevant clinical strategies, or a platform to address many policy questions [31]. Models may facilitate policy development [32], as well as implementation, by providing the architecture for organizing evidence for a specific policy initiative, and helping generate policy questions [33].

The policy context may also have an undesirable effect. Manufacturers have strong financial incentives to gain access to specific markets and, thus, to reach a favorable conclusion in model-based economic analyses. Evidence suggests that sponsorship bias exists [3436]. Whether these effects are mediated through the selection of products with substantial effectiveness for economic evaluation [37], of comparators, of parameters, or study interpretation or publication bias is uncertain. Sponsorship bias may also be present in analyses funded by health systems. Payers, including governments, have large incentives to constrain costs. New technologies with the potential for widespread diffusion and high costs may be analyzed differently than technologies with a lower potential system impact.

Conceptualizing the Model

The appropriate model type is determined by purpose, level of detail, and complexity. To illustrate, consider a coin toss [38]. If one sought to portray the coin’s real-world behavior faithfully, one might construct a descriptive model taking into account such considerations as gravity, angular momentum, air resistance, force applied, and height from which the coin was dropped. But, if one’s aim were to advise a team captain whether to call “heads” or “tails” before kickoff, one might adopt a model that treats the coin toss as a random event with a 50% likelihood of each outcome.
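The coin-toss simplification can itself be sketched: for the captain's decision, a hypothetical Bernoulli model is all that is required, and the physics of the toss adds detail without changing the advice:

```python
import random

# Sketch of the coin toss modeled as a random event with 50% likelihood of
# each outcome, ignoring the physics. Sample size and seed are arbitrary.

def simulate_tosses(n=100_000, seed=42):
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(n))
    return heads / n

share_heads = simulate_tosses()
# The estimated share of heads is close to 0.5, so either call is equally good.
```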

Best practices

II-6 An explicit process (expert consultations, influence diagrams, concept mapping, or similar method) should be used to convert the problem conceptualization into an appropriate model structure, ensuring it reflects current disease knowledge and the process modeled

Although the formality of the process of moving from conceptualization to structuring may vary substantially with the problem scope [39], there are substantial benefits to making it explicit. Decisions taken define the simplifications and assumptions used to create the problem representation. There should be a written, explicit record of the process by which the conceptualization is instantiated, using methods such as influence diagrams [4042] and concept mapping [43,44]. One advantage of adopting an explicit process is that it supports focused discussions between content experts, policymakers, and modelers on what should be included and the simplifying assumptions made in representing the problem and the treatment/disease process.

Best practices

II-7 Several model types may be suitable. Some problems are more naturally represented in some types than others

Virtually any problem can be represented in any type of model, and therefore, these recommendations are not prescriptive. Some methods are designed for particular problem types, however. There are several modeling techniques available [21,45]: individual or cohort, deterministic or stochastic. Common model types include decision trees, state-transition models [2], discrete event simulation (DES) [3], agent-based simulation, and dynamic transmission models [4]. Decision trees are useful for problems with short time horizons where the estimation of outcomes is straightforward. State-transition models are useful for problems with longer time frames or when probabilities vary over time. DES is useful for representing what happens to individuals, particularly when there are resource constraints or interactions among individuals. Dynamic transmission models are useful when interactions occurring between groups have an impact on the results.

Characteristics that affect model selection

Several problem characteristics should be considered to decide which modeling method is most appropriate: will the model represent individuals or groups; are there interactions among individuals; what time horizon is appropriate; should time be represented as continuous or discrete; do events occur more than once; are resource constraints to be considered.

Unit of representation: individuals versus groups

Models can represent patients as individuals or as members of a homogeneous cohort. Decision trees, Markov processes, and infectious disease compartment models represent populations as cohorts that are homogeneous within each state or component. State-transition microsimulation, DES, and agent-based models represent each patient individually and calculate outcomes by aggregating across individuals. Modeling individuals does not automatically imply greater accuracy. Cohort models can be detailed regarding subgroup characteristics and very specific regarding the impact of a decision on those cohorts. It is easier, however, to represent the biology of a process using an individual technique. The choice of unit is also important because it changes the way that individuals or groups may interact in the model. Whether individuals can be regarded as independent will in part determine the most efficient modeling method [46].

Another reason for representing patients as individuals or groups is the level of detail required for the variables that predict outcomes: the more detailed, the more reason to select individual representation. For example, consider a model in which blood creatinine levels (a measure of kidney function) are important in predicting the occurrence of a particular event. When modeled as a group (i.e., used to define different health states), creatinine must be a categorical variable (e.g., creatinine ≤2.0 mg/dL or >2.0 mg/dL). For variables not used to define the group (health state), a representative value will need to be obtained for each group. Although there is no limit to the number of such groups that may be created, valuable information may be lost by categorizing variables or using representative values for specific groups. Individual modeling is not so constrained: patient characteristics may be retained as continuous variables with specific values as required over time. Representing these changes over time may add complexity, but if risks of events are determined by such values, a model that represents individuals should be used.
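The information loss from categorization can be illustrated with a small sketch. The logistic risk function, the 2.0 mg/dL cutoff, and the group-level risks below are invented for illustration only:

```python
import math

def risk_continuous(creatinine_mg_dl):
    """Individual-level representation: event risk is a smooth
    (hypothetical logistic) function of the patient's actual value."""
    return 1.0 / (1.0 + math.exp(-2.0 * (creatinine_mg_dl - 2.0)))

def risk_categorical(creatinine_mg_dl, cutoff=2.0):
    """Cohort-level representation: one representative risk per group.
    Patients at 1.9 and 2.1 mg/dL fall into different groups, while
    patients at 2.1 and 4.0 mg/dL are treated identically."""
    return 0.7 if creatinine_mg_dl > cutoff else 0.3
```

Two patients just either side of the cutoff have nearly identical continuous risks but sharply different categorical risks, which is exactly the information loss the text describes.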

Interactions between individuals and other components of the model

A second aspect to consider is whether interactions among individuals need to be represented. For example, if the problem is evaluating the appropriate treatment for a patient with HIV, it is not necessary to include the treatment effect on the epidemic itself [47]. The results of such a model do not depend on, and therefore do not require modeling, HIV transmission between individuals. When the problem requires modeling the effect of an intervention on disease spread, methods designed for patient interaction should be selected, such as dynamic transmission models [48,49], DES [50,51], and agent-based models [52]. Similarly, these methods are appropriate when individuals interact with other components of the model, such as competing for limited resources [53,54].

Time horizon and time measurement

The time horizon—how far into the future outcomes are modeled—is dictated by the problem scope. Decision trees may be appropriate for models with very short time horizons; longer horizons require more dynamic modeling methods such as state-transition, DES, or dynamic transmission. Similarly, the modeler needs to assess whether time should be modeled continuously or in discrete cycles. As only a single transition may occur within a cycle in state-transition models, very short cycle times are required if the likelihood of events is high.

Best practices

II-7a For simple models, or problems with special characteristics (e.g., very short time horizons, very few outcomes), a decision tree may be appropriate

Although decision trees are less common now, they present several advantages [39,55]. They can be simple to conceptualize, create, and modify and can be useful tools to rapidly outline the components of a particular problem. They are most suitable when the outcome set is small and defined, the time horizon is short [56], or when the consequences of a decision are known with some certainty [57].
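Analyzing a decision tree reduces to folding back expected values at chance nodes. The sketch below uses invented probabilities and short-horizon QALY payoffs purely to show the mechanics:

```python
def expected_value(branches):
    """Fold back one chance node. branches is a list of
    (probability, payoff) pairs whose probabilities sum to 1."""
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9
    return sum(p * v for p, v in branches)

# Hypothetical two-arm decision (probabilities and QALYs are illustrative):
treat = expected_value([(0.90, 0.95), (0.10, 0.40)])      # -> 0.895
no_treat = expected_value([(0.60, 0.95), (0.40, 0.40)])   # -> 0.730
preferred = "treat" if treat > no_treat else "no treat"
```

Because every branch terminates in a known payoff, the whole analysis is a handful of multiplications, which is why trees suit small, well-defined outcome sets and short horizons.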

Best practices

II-7b If the conceptualization involves representing the disease or treatment process as a series of health states, state-transition models are appropriate. Their primary disadvantage, the Markovian assumption that transition probabilities do not depend on history, can be addressed by increasing the number of states. Individual state-transition models, which do not require this assumption, are an alternative when the number of states grows too large

State-transition models are ubiquitous because they may be simple to develop, debug, communicate, and analyze, and they readily accommodate the evaluation of parameter uncertainty. They make sense when the problem has been conceptualized as a series of homogeneous states. They are consistent with a categorical clinical view in which disease is divided into distinct stages (as in cancer), classified by presence or absence (e.g., diabetes), or by on/off treatment status. Transitions between states define disease progression over time. After choosing a state-transition framework, the decision whether to model a series of cohorts or individuals is primarily pragmatic: if the number of states required to represent the problem becomes unmanageably large, use individual simulation, which allows representation of substantial heterogeneity in characteristics.
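A minimal cohort state-transition model is nothing more than repeated multiplication of a state-occupancy vector by a transition matrix. The states, annual probabilities, and cycle count below are hypothetical:

```python
# States: 0 = Well, 1 = Sick, 2 = Dead (absorbing). Each row sums to 1.
P = [
    [0.90, 0.08, 0.02],  # transitions from Well per annual cycle
    [0.00, 0.85, 0.15],  # transitions from Sick
    [0.00, 0.00, 1.00],  # Dead is absorbing
]

def run_cohort(cycles=40):
    """Propagate a cohort starting entirely in Well; accumulate
    undiscounted life-years as the alive fraction per cycle."""
    dist = [1.0, 0.0, 0.0]
    life_years = 0.0
    for _ in range(cycles):
        life_years += dist[0] + dist[1]
        dist = [sum(dist[i] * P[i][j] for i in range(3)) for j in range(3)]
    return dist, life_years
```

The Markovian assumption is visible in the code: each row of P depends only on the current state, so encoding history (e.g., "Sick for two consecutive cycles") requires adding states, exactly as the recommendation notes.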

Best practices

II-7c When the disease or treatment process includes interactions between individuals, the methods should be able to represent those interactions and evaluate their effects

Dynamic-transmission, DES, or agent-based models, given their ability to represent interactions between individuals (e.g., transmission of disease from infected to uninfected) or with other aspects of the model (e.g., allocation of organs to individuals on a waiting list), should be chosen when the problem conceptualization involves interactions. Furthermore, these models are able to represent time continuously, rather than in discrete cycles, and therefore more accurately implement continuous risk functions and incorporate time-to-event data.

Dynamic-transmission models, which require the definition of “compartments” that classify people (e.g., susceptible, infectious, or immune), become analytically complex with more detailed problem characterization and are prone to state expansion. When models become numerically intractable because of a very large number of states, or when the conceptualization represents geography or spatial proximity, DES or agent-based models are more appropriate.
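The compartment structure can be sketched as a discrete-time SIR model. The transmission and recovery rates below are arbitrary illustrative values, with the population normalized to 1:

```python
def sir_step(s, i, r, beta=0.3, gamma=0.1):
    """One time step of a simple SIR compartment model. New infections
    scale with the product s*i, which is what makes the model dynamic:
    risk to the susceptible depends on current prevalence, unlike a
    fixed-probability cohort model."""
    new_infections = beta * s * i
    new_recoveries = gamma * i
    return (s - new_infections,
            i + new_infections - new_recoveries,
            r + new_recoveries)

def run_sir(i0=0.01, steps=300):
    s, i, r = 1.0 - i0, i0, 0.0
    peak_prevalence = i
    for _ in range(steps):
        s, i, r = sir_step(s, i, r)
        peak_prevalence = max(peak_prevalence, i)
    return s, i, r, peak_prevalence
```

Because an intervention that lowers beta changes everyone's subsequent risk, effects such as herd protection emerge from the model rather than having to be assumed.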

Best practices

II-7d When the problem involves resource constraints, the modeling method should be able to represent them and evaluate their effects

Similar to interactions between individuals, some problem conceptualizations require that individuals interact with other parts of the model. Questions regarding scarce resource allocation (e.g., organ allocation for transplantation, distribution of antiretroviral medications in resource-poor environments, scheduling of operating rooms to minimize surgeon wait time, or the number and location of distribution sites for vaccination during a pandemic) require the ability to incorporate competition for resources and the development of waiting lists or queues. DES and agent-based simulation were designed for these types of problems.
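The essence of such a queueing problem can be sketched as a minimal discrete event simulation of a single constrained resource. The arrival rate, service time, and seed below are illustrative assumptions:

```python
import random

def mean_wait(n_patients=2000, arrival_rate=1.0, service_time=0.8, seed=1):
    """Patients arrive at random (Poisson process); one resource (e.g.,
    a single operating room) serves them in arrival order, so a queue
    forms whenever an arrival finds the resource busy."""
    rng = random.Random(seed)
    clock, resource_free_at, total_wait = 0.0, 0.0, 0.0
    for _ in range(n_patients):
        clock += rng.expovariate(arrival_rate)   # time of next arrival
        start = max(clock, resource_free_at)     # wait if resource busy
        total_wait += start - clock
        resource_free_at = start + service_time
    return total_wait / n_patients
```

At 80% utilization waits are substantial; halving the service time nearly eliminates them. A cohort or simple state-transition model has no way to represent this competition for the resource.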

Best practices

II-7e For some problems, combinations of model types, hybrid models, and other modeling methodologies are appropriate

The model types described in these articles are not exhaustive. Some health care problems are not easily represented in these commonly used platforms. There has been recent interest in developing physiologic models, and these “in-silico” simulations do not fit precisely into the standard modeling types [58–60]. Hybrid models utilizing various techniques, including multiple differential equations, have also appeared [61,62].

Best practices

II-8 Model simplicity is desirable for transparency, ease of analysis, validation, and description. However, the model must be complex enough to ensure that differences in value (e.g., health or cost) across the strategies considered are faithfully represented. Some degree of model complexity may be desirable to preserve face validity with clinical experts. Greater complexity may be necessary in policy models that are intended to be used for many problems

Selecting the correct level of detail is one of the most difficult decisions a modeler faces. Models that are too simple may lose face validity because they do not incorporate aspects that content experts feel are required, but models that are too complex may be difficult to build, debug, analyze, understand, and communicate. As Einstein said, “everything should be made as simple as possible, but not simpler” [63]. Scope, perspective, target population, outcomes, and the interventions considered in the evaluation all contribute to the level of detail required to appropriately model the particular problem.

Source of financial support: This Task Force was supported by ISPOR.


Background to the Task Force.

A new Good Research Practices in Modeling Task Force was approved by the ISPOR Board of Directors in 2010, and the Society for Medical Decision Making was invited to join the effort. The Task Force cochairs and members are expert developers and experienced model users from academia, industry, and government, with representation from many countries. Several teleconferences and hosted information sessions during scientific meetings of the Societies culminated in an in-person meeting of the Task Force as a whole, held in Boston in March 2011. Draft recommendations were discussed and subsequently edited and circulated to the Task Force members in the form of a survey where each one was asked to agree or disagree with each recommendation, and if the latter, to provide the reasons. Each group received the results of the survey and endeavored to address all issues. The final drafts of the seven articles were available on the ISPOR and Society for Medical Decision Making Web sites for general comment. A second group of experts was invited to formally review the articles. The comments received were addressed, and the final version of each article was prepared. (A copy of the original draft article, as well as the reviewer comments and author responses, is available at the ISPOR Web site: http://www.ispor.org/workpaper/Conceptualizing-A-Model.asp.) A summary of these articles was presented at a plenary session at the ISPOR 16th Annual International Meeting in Baltimore, MD, in May 2011, and again at the 33rd Annual Meeting of the Society for Medical Decision Making in Chicago, IL, in October 2011. These articles are jointly published in the Societies’ respective journals, Value in Health and Medical Decision Making. Other articles in this series [16] describe best practices for building and applying particular types of models, addressing uncertainty, and ensuring transparency and validity. This article addresses best practices for conceptualizing models. 
Examples are cited throughout, without implying endorsement or preeminence of the articles referenced, and an appendix in Supplemental Materials found at http://dx.doi.org/10.1016/j.jval.2012.06.016 provides a detailed example.

Footnotes

Supplemental Material

Supplemental material accompanying this article can be found in the online version as a hyperlink at http://dx.doi.org/10.1016/j.jval.2012.06.016 at www.valueinhealthjournal.com/issues (select volume, issue, and article).

REFERENCES

  • 1.Caro JJ, Briggs AH, Siebert U, et al. Modeling good research practices— overview: a report of the ISPOR-SMDM modeling good research practices task force-1. Value Health. 2012;15:796–803. doi: 10.1016/j.jval.2012.06.012. [DOI] [PubMed] [Google Scholar]
  • 2.Siebert U, Alagoz O, Bayoumi AM, et al. State-transition modeling: a report of the ISPOR-SMDM modeling good research practices task force-3. Value Health. 2012;15:812–820. doi: 10.1016/j.jval.2012.06.014. [DOI] [PubMed] [Google Scholar]
  • 3.Karnon J, Stahl J, Brennan A, et al. Modeling using discrete event simulation: a report of the ISPOR-SMDM modeling good research practices task force-4. Value Health. 2012;15:821–827. doi: 10.1016/j.jval.2012.04.013. [DOI] [PubMed] [Google Scholar]
  • 4.Pitman R, Fisman D, Zaric GS, et al. Dynamic transmission modeling: a report of the ISPOR-SMDM modeling good research practices task force-5. Value Health. 2012;15:828–834. doi: 10.1016/j.jval.2012.06.011. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Briggs AH, Weinstein M, Fenwick E, et al. Model parameter estimation and uncertainty: a report of the ISPOR-SMDM modeling good research practices task force-6. Value Health. 2012;15:835–842. doi: 10.1016/j.jval.2012.04.014. [DOI] [PubMed] [Google Scholar]
  • 6.Eddy DM, Hollingworth W, Caro JJ, et al. Model transparency and validation: a report of the ISPOR-SMDM modeling good research practices task force-7. Value Health. 2012;15:843–850. doi: 10.1016/j.jval.2012.04.012. [DOI] [PubMed] [Google Scholar]
  • 7.Mandelblatt JS, Cronin KA, Bailey S, et al. Effects of mammography screening under different screening schedules: model estimates of potential benefits and harms. Ann Intern Med. 2009;151:738–747. doi: 10.1059/0003-4819-151-10-200911170-00010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Wijeysundera HC, Machado M, Wang X, et al. Cost-effectiveness of specialized multidisciplinary heart failure clinics in Ontario, Canada. Value Health. 2010;13:915–921. doi: 10.1111/j.1524-4733.2010.00797.x. [DOI] [PubMed] [Google Scholar]
  • 9.Shechter SM, Bryce CL, Alagoz O. A clinically based discrete-event simulation of end-stage liver disease and the organ allocation process. Med Decis Making. 2005;25:199–209. doi: 10.1177/0272989X04268956. [DOI] [PubMed] [Google Scholar]
  • 10.Sander B, Bauch CT, Fisman D. Is a mass immunization program for pandemic (H1N1) 2009 good value for money? Evidence from the Canadian Experience. Vaccine. 2010;28:10–20. doi: 10.1016/j.vaccine.2010.07.010. [DOI] [PubMed] [Google Scholar]
  • 11.Medical Advisory Secretariat. Gene expression profiling for guiding adjuvant chemotherapy decisions in women with early breast cancer: an evidence-based and economic analysis. [[Accessed June 24, 2012]];Ont Health Technol Assess Ser [Internet] 2010 Dec;10(23):1–57. Available from: http://www.health.gov.on.ca/english/providers/program/mas/tech/reviews/pdf/gep_20101213.pdf. [PMC free article] [PubMed] [Google Scholar]
  • 12.Ramwadhdoebe S, Buskens E, Sakkers RJ, Stahl JE. A tutorial on discrete-event simulation for health policy design and decision making: optimizing pediatric ultrasound screening for hip dysplasia as an illustration. Health Policy. 2009;93:143–150. doi: 10.1016/j.healthpol.2009.07.007. [DOI] [PubMed] [Google Scholar]
  • 13.Philips Z, Bojke L, Sculpher M, Claxton K, Golder S. Good practice guidelines for decision-analytic modelling in health technology assessment: a review and consolidation of quality assessment. Pharmacoeconomics. 2006;24:355–371. doi: 10.2165/00019053-200624040-00006. [DOI] [PubMed] [Google Scholar]
  • 14.Philips Z, Ginnelly L, Sculpher M, et al. Review of guidelines for good practice in decision-analytic modelling in health technology assessment. Health Technol Assess. 2004;8:1–158. doi: 10.3310/hta8360. [DOI] [PubMed] [Google Scholar]
  • 15.Torrance GW, Siegel JE, Luce BR. Framing and designing the cost-effectiveness analysis. In: Gold MR, Siegel JE, Russell LB, Weinstein MC, editors. Cost-Effectiveness in Health and Medicine. New York: Oxford University Press; 1996. [Google Scholar]
  • 16.Health Models [Internet]. [cited 2011 Feb 6; accessed June 24, 2012]. Available from: http://www.statcan.gc.ca/microsimulation/health-sante/health-sante-eng.htm#a2.
  • 17.Erenay FS, Alagoz O, Banerjee R, Cima RR. Estimating the unknown parameters of the natural history of metachronous colorectal cancer using discrete-event simulation. Med Decis Making. 2011;31:611–624. doi: 10.1177/0272989X10391809. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Gold MR, Siegel JE, Russell LB, Weinstein MC, editors. Cost Effectiveness Analysis in Health and Medicine. New York: Oxford University Press; 1996. [Google Scholar]
  • 19.Neumann PJ. Costing and perspective in published cost-effectiveness analysis. Med Care. 2009;47(Suppl.):S28–S32. doi: 10.1097/MLR.0b013e31819bc09d. [DOI] [PubMed] [Google Scholar]
  • 20.Knox EG. Strategy for rubella vaccination. Int J Epidemiol. 1980;9:13–23. doi: 10.1093/ije/9.1.13. [DOI] [PubMed] [Google Scholar]
  • 21.Kim SY, Goldie SJ. Cost-effectiveness analyses of vaccination programmes: a focused review of modelling approaches. Pharmacoeconomics. 2008;26:191–215. doi: 10.2165/00019053-200826030-00004. [DOI] [PubMed] [Google Scholar]
  • 22.Kelly AE, Haddix AC, Scanlon KS, et al. Cost-effectiveness of strategies to prevent neural tube defects. Appendix B. In: Gold MR, Siegel JE, Russell LB, et al., editors. Cost-Effectiveness in Health and Medicine. New York: Oxford University Press; 1996. [Google Scholar]
  • 23.Rutter CM, Zaslavsky AM, Feuer EJ. Dynamic microsimulation models for health outcomes: a review. Med Decis Making. 2011;31:10–18. doi: 10.1177/0272989X10369005. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Smith MD, Drummond M, Brixner D. Moving the QALY forward: rationale for change. Value Health. 2009;12(Suppl.):S1–S4. doi: 10.1111/j.1524-4733.2009.00514.x. [DOI] [PubMed] [Google Scholar]
  • 25.Weinstein MC, Torrance G, McGuire A. QALYs: the basics. Value Health. 2009;12(Suppl.):S5–S9. doi: 10.1111/j.1524-4733.2009.00515.x. [DOI] [PubMed] [Google Scholar]
  • 26.Kahneman DA. different approach to health state valuation. Value Health. 2009;12(Suppl):S16–S17. doi: 10.1111/j.1524-4733.2009.00517.x. [DOI] [PubMed] [Google Scholar]
  • 27.Eisenberg JM. Clinical economics: a guide to the economic analysis of clinical practices. JAMA. 1989;262:2879–2886. doi: 10.1001/jama.262.20.2879. [DOI] [PubMed] [Google Scholar]
  • 28.Ramsey S, Willke R, Briggs A, et al. Good research practices for cost-effectiveness analysis alongside clinical trials: the ISPOR RCT-CEA task force report. Value Health. 2005;8:521–533. doi: 10.1111/j.1524-4733.2005.00045.x. [DOI] [PubMed] [Google Scholar]
  • 29.Canadian Agency for Drugs and Technologies in Health. 3rd ed. vii, 46. Ottawa: Canadian Agency for Drugs and Technologies in Health; 2006. [[Accessed June 24, 2012]]. Guidelines for the economic evaluation of health technologies: Canada [Internet] A-17. Available from: http://www.cadth.ca/media/pdf/186_EconomicGuidelines_e.pdf. [Google Scholar]
  • 30.Henry D. Economic analysis as an aid to subsidisation decisions: the development of Australian guidelines for pharmaceuticals. Pharmacoeconomics. 1992;1:54–67. doi: 10.2165/00019053-199201010-00010. [DOI] [PubMed] [Google Scholar]
  • 31.Weinstein MC, Coxson PG, Williams LW, et al. Forecasting coronary heart disease incidence, mortality, and cost: the Coronary Heart Disease Policy Model. Am J Pub Health. 1987;77:1417–1426. doi: 10.2105/ajph.77.11.1417. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.Chapman G, Sonnenberg F, editors. Decision Making in Health Care: Theory, Psychology, and Applications. Cambridge: Cambridge University Press; 2003. [Google Scholar]
  • 33.Matchar DB, Samsa GP, Matthews JR, et al. The Stroke Prevention Policy Model: linking evidence and clinical decisions. Ann Intern Med. 1997;127:704–711. doi: 10.7326/0003-4819-127-8_part_2-199710151-00054. [DOI] [PubMed] [Google Scholar]
  • 34.Baker C, Johnsrud M, Crismon M, et al. Quantitative analysis of sponsorship bias in economic studies of antidepressants. Br J Psychiatry. 2003;183:498–506. doi: 10.1192/bjp.183.6.498. [DOI] [PubMed] [Google Scholar]
  • 35.Bell C, Urbach D, Ray J, et al. Bias in published cost effectiveness studies: systematic review. BMJ. 2006;332:699–703. doi: 10.1136/bmj.38737.607558.80. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Chauhan D, Miners AH, Fischer AJ. Exploration of the difference in results of economic submissions to the National Institute of Clinical Excellence by manufacturers and assessment groups. Int J Technol Assess Health Care. 2007;23:96–100. doi: 10.1017/S0266462307051628. [DOI] [PubMed] [Google Scholar]
  • 37.Barbieri M, Drummond M. Conflict of interest in industry-sponsored economic evaluations: real or imagined? Curr Oncol Rep. 2001;3:410–413. doi: 10.1007/s11912-001-0027-2. [DOI] [PubMed] [Google Scholar]
  • 38.Strzalko J, Grabski J, Stefanski A, et al. Understanding coin-tossing. Mathematical Intelligencer. 2010;32:54–58. [Google Scholar]
  • 39.Detsky AS, Naglie G, Krahn MD, et al. Primer on medical decision analysis, Part 1--getting started. Med Decis Making. 1997;17:123–125. doi: 10.1177/0272989X9701700201. [DOI] [PubMed] [Google Scholar]
  • 40.Helfand M. Influence diagrams: a new dimension for decision models. Med Decis Making. 1997;17:351–352. doi: 10.1177/0272989X9701700312. [DOI] [PubMed] [Google Scholar]
  • 41.Nease RF, Jr, Owens DK. Use of influence diagrams to structure medical decisions. Med Decis Making. 1997;17:263–275. doi: 10.1177/0272989X9701700302. [DOI] [PubMed] [Google Scholar]
  • 42.Owens DK, Shachter RD, Nease RF., Jr Representation and analysis of medical decision problems with influence diagrams. Med Decis Making. 1997;17:241–262. doi: 10.1177/0272989X9701700301. [DOI] [PubMed] [Google Scholar]
  • 43.Novak JD, Cañas AJ. Florida Institute for Human and Machine Cognition [Online]. 2008 [cited 2011 Dec 26; accessed June 24, 2012]. Available from: http://cmap.ihmc.us/Publications/ResearchPapers/TheoryCmaps/TheoryUnderlyingConceptMaps.htm.
  • 44.Ruiz-Primo MA, Shavelson RJ. Problems and issues in the use of concept maps in science assessment. J Res Sci Teaching. 1996;33:569–600. [Google Scholar]
  • 45.Stahl JE. Modelling methods for pharmacoeconomics and health technology assessment: an overview and guide. Pharmacoeconomics. 2008;26:131–148. doi: 10.2165/00019053-200826020-00004. [DOI] [PubMed] [Google Scholar]
  • 46.Barton P, Bryan S, Robinson S. Modelling in the economic evaluation of health care: selecting the appropriate approach. J Health Serv Res Policy. 2004;9:110–118. doi: 10.1258/135581904322987535. [DOI] [PubMed] [Google Scholar]
  • 47.Freedberg KA, Scharfstein JA, Seage GR, et al. The cost-effectiveness of preventing AIDS-related opportunistic infections. JAMA. 1998;279:130–136. doi: 10.1001/jama.279.2.130. [DOI] [PubMed] [Google Scholar]
  • 48.Long EF, Brandeau ML, Owens DK. Potential population health outcomes and expenditures of HIV vaccination strategies in the United States. Vaccine. 2009;27:5402–5410. doi: 10.1016/j.vaccine.2009.06.063. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 49.Richter A, Loomis B. Health and economic impacts of an HIV intervention in out of treatment substance abusers: evidence from a dynamic model. Health Care Manag Sci. 2005;8:67–79. doi: 10.1007/s10729-005-5218-1. [DOI] [PubMed] [Google Scholar]
  • 50.Leslie WD, Brunham RC. The dynamics of HIV spread: a computer simulation model. Comput Biomed Res. 1990;23:380–401. doi: 10.1016/0010-4809(90)90028-b. [DOI] [PubMed] [Google Scholar]
  • 51.Porco TC, Small PM, Blower SM. Amplification dynamics: predicting the effect of HIV on tuberculosis outbreaks. J Acquir Immune Defic Syndr. 2001;28:437–444. doi: 10.1097/00042560-200112150-00005. [DOI] [PubMed] [Google Scholar]
  • 52.Auchincloss A, Diez Roux A. A new tool for epidemiology: the usefulness of dynamic-agent models in understanding place effects on health. Am J Epidemiol. 2008;168:1–8. doi: 10.1093/aje/kwn118. [DOI] [PubMed] [Google Scholar]
  • 53.Linas BP, Losina E, Rockwell A, et al. Improving outcomes in state AIDS drug assistance programs. J Acquir Immune Defic Syndr. 2009;51:513–521. doi: 10.1097/QAI.0b013e3181b16d00. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 54.Zaric GS, Barnett PG, Brandeau ML. HIV transmission and the cost-effectiveness of methadone maintenance. Am J Public Health. 2000;90:1100–1111. doi: 10.2105/ajph.90.7.1100. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 55.Detsky AS, Naglie G, Krahn MD, et al. Primer on medical decision analysis, part 2--building a tree. Med Decis Making. 1997;17:126–135. doi: 10.1177/0272989X9701700202. [DOI] [PubMed] [Google Scholar]
  • 56.Carlson K, Mulley A. Management of acute dysuria: a decision-analysis model of alternative strategies. Ann Intern Med. 1985;102:244–249. doi: 10.7326/0003-4819-102-2-244. [DOI] [PubMed] [Google Scholar]
  • 57.Krumholz HM, Pasternak RC, Weinstein MC, et al. Cost effectiveness of thrombolytic therapy with streptokinase in elderly patients with suspected acute myocardial infarction. N Engl J Med. 1992;327:7–13. doi: 10.1056/NEJM199207023270102. [DOI] [PubMed] [Google Scholar]
  • 58.Clermont G, Bartels J, Kumar R, et al. In silico design of clinical trials: a method coming of age. Crit Care Med. 2004;32:2061–2070. doi: 10.1097/01.ccm.0000142394.28791.c3. [DOI] [PubMed] [Google Scholar]
  • 59.Daun S, Rubin J, Vodovotz Y, Clermont G. Equation-based models of dynamic biological systems. J Crit Care. 2008;23:585–594. doi: 10.1016/j.jcrc.2008.02.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 60.Kumar R, Chow CC, Bartels JD, et al. A mathematical simulation of the inflammatory response to anthrax infection. Shock. 2008;29:104–111. doi: 10.1097/SHK.0b013e318067da56. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 61.Eddy DM, Schlessinger L. Validation of the Archimedes diabetes model. Diabetes Care. 2003;26:3102–3110. doi: 10.2337/diacare.26.11.3102. [DOI] [PubMed] [Google Scholar]
  • 62.Schlessinger L, Eddy DM. Archimedes: a new model for simulating health care systems-the mathematical formulation. J Biomed Inform. 2002;35:37–50. doi: 10.1016/s1532-0464(02)00006-0. [DOI] [PubMed] [Google Scholar]
  • 63.Quote Investigator [Online]. [cited 2011 Dec 27; accessed June 24, 2012]. Available from: http://quoteinvestigator.com/2011/05/13/einstein-simple. [Google Scholar]
