Author manuscript; available in PMC: 2012 Jul 25.
Published in final edited form as: Med Decis Making. 2010 May 18;31(1):10–18. doi: 10.1177/0272989X10369005

Dynamic Microsimulation Models for Health Outcomes: A Review

Carolyn M Rutter, Alan Zaslavsky, Eric Feuer
PMCID: PMC3404886  NIHMSID: NIHMS393222  PMID: 20484091

Abstract

Background

Microsimulation models (MSMs) for health outcomes simulate individual event histories associated with key components of a disease process; these simulated life histories can be aggregated to estimate population-level effects of treatment on disease outcomes and the comparative effectiveness of treatments. Although MSMs are used to address a wide range of research questions, methodological improvements in MSM approaches have been slowed by the lack of communication among modelers. In addition, there are few resources to guide individuals who may wish to use MSM projections to inform decisions.

Methods

This article presents an overview of microsimulation modeling, focusing on the development and application of MSMs for health policy questions. We discuss MSM goals, overall components of MSMs, methods for selecting MSM parameters to reproduce observed or expected results (calibration), methods for MSM checking (validation), and issues related to reporting and interpreting MSM findings (sensitivity analyses, reporting of variability, and model transparency).

Conclusions

MSMs are increasingly being used to provide information to guide health policy decisions. This increased use brings with it the need both for better understanding of MSMs by policy researchers and for continued improvement in the methods used to develop and apply MSMs.

Keywords: Discrete Event Simulation, Decision Analytic Models

1.0 Introduction

In the late 1950s, Guy Orcutt proposed microsimulation models (MSMs) as a method for exploring social policy questions by simulating the effect of policy on decision-making units (e.g., individuals, families, or corporations).1 During the 1970s, MSMs were developed to guide U.S. social policy decisions.2 Models of traffic flow are used to plan transit projects.3, 4 MSMs are used in operations research5 to describe queuing systems, where they are referred to as discrete event simulations.6–8 MSMs for the transmission of infectious diseases such as HIV,6, 7 influenza,8–10 smallpox,11 onchocerciasis,12, 13 and schistosomiasis,14 are used to examine the effects of intervention and policy change on disease transmission, including the cost-effectiveness of vaccination programs.15 MSMs for traffic and infectious diseases allow interactions among 'agents' (e.g., cars or individuals). Such 'agent-based models'16, 17 are needed when agent interactions are critical to downstream outcomes, for example, when modeling epidemics. To simplify discussion, this article focuses on somewhat simpler MSMs that assume independence across individuals, though the issues raised here are equally relevant to agent-based models.

MSMs were applied to health policy questions as early as 1985, when Habbema and colleagues introduced the "Microsimulation Screening Analysis" (MISCAN) model18 to examine the impact of cancer screening on morbidity and mortality. MISCAN models have been developed to describe the effects of screening for cervical,19 breast,20 colorectal,21 and prostate cancer.22 In 1994, the Population Health Model was introduced to evaluate costs of diagnosis and treatment of lung and breast cancer,23, 24 as part of a broader microsimulation effort by Statistics Canada.25 MSMs have also been developed for diabetes,26, 27 cardiovascular disease,28, 29 stroke,30 reoperation rates after aortic valve replacement,31 organ transplant,32, 33 osteoporosis,34–36 and end-stage liver disease.37 Karnon and colleagues review some of the models used to evaluate screening programs.38 Several funding agencies now support the development and application of MSMs, including the National Cancer Institute's Cancer Intervention and Surveillance Modeling Network (CISNET),39 the National Institute of General Medical Sciences' Models of Infectious Disease Agent Study,40 and the Robert Wood Johnson Foundation and National Institutes of Health's Childhood Obesity Modeling Network.41

Many of the MSMs used to examine health policy questions are written by researchers for specific disease processes, an endeavor requiring a high level of programming expertise and a large time commitment. Software programs, such as TreeAge42 and Arena,43 allow users to implement discrete event MSMs with relative ease,44–47 though there are few publications to guide this work. With the increasing use of MSMs, there is a growing need for both modelers and end users of MSMs to consider issues related to their application. This article provides an overview of MSMs, focusing on model development and applications to health policy questions.

2.0 Goals of Microsimulation Modeling

MSMs describe events and outcomes at the person level, with the ultimate goal of providing information that can guide policy decisions. (In contrast, mechanistic models, such as those that simulate the behavior of cells, are developed to provide insight into underlying processes. Biological models, discussed in section 3.0, can incorporate mechanistic models into policy-focused MSMs.) Examples of policy-relevant findings from MSMs include overdiagnosis of prostate cancer among PSA-detected cases;48 identification of efficient cervical cancer screening policies;49 and the impact of modifiable risk factors, screening, and treatment on the colorectal cancer (CRC) mortality rate.50

MSMs provide policy-relevant information by generating predictions. For example, MSMs can be used to predict trends in disease incidence and mortality under alternative health policy scenarios50, 51 or to compare the effectiveness and cost-effectiveness of treatments.52 Using MSMs for prediction often requires integration of results across studies, and may require extension of results, for example, extending cross-sectional results to longitudinal predictions, or extending results to more diverse patient populations or to comparisons not made in available randomized controlled trials (RCTs). For example, RCTs demonstrate that fecal occult blood testing (FOBT) leads to reduced CRC mortality,53–55 and a case-control study found that flexible sigmoidoscopy was also associated with reduced CRC mortality.56 While there is no direct evidence that either optical colonoscopy or CT colonography reduces CRC mortality, several studies have estimated their sensitivity and specificity for detecting colorectal adenomas.57–59 MSMs for colorectal cancer have combined available information about CRC screening tests to compare the effectiveness and cost-effectiveness of all four of these screening modalities.60

Thus, the explicit goal of using MSMs for prediction is met by developing models that combine results from randomized controlled trials, epidemiologic studies (e.g., case-control and cohort studies), meta-analyses, and expert opinion. This combining of information becomes an implicit modeling goal and is conducted in conjunction with model calibration.

3.0 Developing a New MSM

MSMs have two components: a natural history model and an intervention model. Our focus is on the natural history model, which describes the disease process in the absence of intervention. The natural history model may be complex, but once developed it can be combined with different intervention models to answer policy questions. The appropriate level of model complexity depends on the disease process, the scientific questions to be addressed by the model, and the data available to inform model parameters. There is a tension between the simplicity of a model and the complexity of the disease process. Simpler models include fewer parameters, making it more likely that model parameters can be estimated from observed data, and are easier to describe, making them more transparent. However, more complex MSMs can be used to address a wider range of scientific questions. One approach is to begin with a relatively simple model, extending it as needed to address specific research questions.61

When considering model complexity, it is useful to distinguish between biological models, which describe the underlying disease processes, and epidemiological models, which focus attention on the observable portion of the disease process.62 For example, Luebeck and colleagues63 predicted the effect of folate on colorectal cancer risk using a biological model that describes three phases of carcinogenesis: stem cell initiation (acquisition of one or more mutations), cell proliferation, and malignant conversion. Their model allowed folate both to reduce initiation rates and to increase proliferation rates, and predicted that early folate initiation (e.g., at age 2) decreases colorectal cancer risk while late folate initiation (e.g., at age 65) may increase risk. While complex biological models can describe the disease process more completely than epidemiological models, they also include parameters that cannot be directly informed by data (such as the impact of folate on cell initiation and proliferation rates).

There are three essential steps in developing any MSM: 1) Identifying a fixed number of distinct states and characteristics associated with these states; 2) Specifying stochastic rules for transition through states; and 3) Setting values for model parameters.
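To fix ideas, below is a minimal sketch of these three steps for a hypothetical three-state process (healthy, preclinical, clinical); all state names, parameter values, and function names are illustrative assumptions, not drawn from any published model.

```python
import random

# Step 1: a fixed set of distinct states (hypothetical three-state process).
STATES = ("healthy", "preclinical", "clinical")

# Step 3: values for the model parameters (illustrative annual probabilities).
PARAMS = {"p_onset": 0.002, "p_progress": 0.05}

def transition(state, params, rng):
    """Step 2: stochastic rule governing movement between states each cycle."""
    u = rng.random()
    if state == "healthy" and u < params["p_onset"]:
        return "preclinical"
    if state == "preclinical" and u < params["p_progress"]:
        return "clinical"
    return state

def simulate_history(n_cycles, params, seed=0):
    """Simulate one individual's annual state sequence (an event history)."""
    rng = random.Random(seed)
    history = ["healthy"]
    for _ in range(n_cycles):
        history.append(transition(history[-1], params, rng))
    return history

# Aggregating many simulated histories yields population-level estimates,
# here the fraction in the clinical state after 80 simulated years.
print(sum(simulate_history(80, PARAMS, seed=i)[-1] == "clinical"
          for i in range(10_000)) / 10_000)
```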

3.1 Identify Distinct States

The specification of distinct states included in an MSM depends on the disease process, the research questions of interest, and the availability of data. The number of states included in a model may also depend on how characteristics are attributed to states. For example, an MSM could describe lesion size categorically with different size categories considered different states, or it could model a single lesion state with size treated as a characteristic of the lesion.
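As a concrete sketch of this choice, the fragment below shows both representations for a hypothetical lesion; the size thresholds are illustrative assumptions.

```python
from dataclasses import dataclass

# Representation 1: size categories treated as distinct model states.
SIZE_STATES = ("lesion_small", "lesion_medium", "lesion_large")

# Representation 2: a single lesion state, with size carried as a
# characteristic that a growth rule can update over time.
@dataclass
class Lesion:
    size_mm: float

    def category(self) -> str:
        """Map the continuous attribute back to a category when needed."""
        if self.size_mm < 5:
            return "lesion_small"
        return "lesion_medium" if self.size_mm < 10 else "lesion_large"

print(Lesion(size_mm=7.5).category())   # -> lesion_medium
```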

3.2 Specify Transition Rules

MSMs are either state-transition or continuous-time models. State-transition models allow individuals to move between states at fixed time intervals (possibly dependent on the state), with transition probabilities specific to this cycle length. In continuous-time MSMs, intervals between transitions have continuous distributions. Both types of MSMs allow transition rules to depend on individual characteristics. MSM transition probabilities may be specified as Markov,64 meaning that the probabilities of the next transition depend only on the current state and not on the previous history.65 Many models relax this assumption by carrying forward relevant past information into the current state.
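To make the two timing conventions concrete, the sketch below contrasts a per-cycle transition draw with a continuous-time waiting-time draw, and shows the constant-hazard relationship linking a per-cycle probability to a rate; all numbers are illustrative.

```python
import math
import random

rng = random.Random(1)

# State-transition (discrete cycles): one Bernoulli draw per cycle, with the
# transition probability tied to the cycle length.
def discrete_cycle(state, p_transition):
    if state == "preclinical" and rng.random() < p_transition:
        return "clinical"
    return state

# Continuous time: sample the interval until the next transition directly,
# here an exponential waiting time with a constant rate per year.
def time_to_transition(rate_per_year):
    return rng.expovariate(rate_per_year)

# Under a constant hazard, a per-cycle probability p over cycle length dt
# corresponds to rate = -ln(1 - p) / dt, which links the two formulations.
p, dt = 0.05, 1.0
rate = -math.log(1.0 - p) / dt
print(rate, time_to_transition(rate))
```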

3.3 Specify Model Parameters

Given the structure of an MSM, it is necessary to specify the values of parameters that determine transitions through disease states. Parameters associated with observable processes can be estimated directly from biological, clinical, or epidemiological evidence. For example, the distribution of survival time following cancer diagnosis may be estimated from Surveillance, Epidemiology, and End Results (SEER) data,66 and other-cause mortality probabilities may be based directly on life tables derived by removing deaths attributable to the modeled disease from the Berkeley mortality databases. Rosenberg,67 for example, demonstrates calculation of non-breast cancer mortality using the Berkeley mortality databases, with breast cancer deaths estimated using data from the National Center for Health Statistics databases.68
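For instance, directly specified other-cause mortality can be implemented by walking a life table of annual death probabilities; the table below is a toy placeholder with a rough Gompertz-like shape, not actual Berkeley or NCHS values.

```python
import random

# Toy life table: annual probability of other-cause death by age (placeholder
# values, clamped to 1.0 at extreme ages).
LIFE_TABLE = {age: min(1.0, 0.0005 * 1.09 ** age) for age in range(0, 101)}

def sample_other_cause_death_age(start_age, rng):
    """Walk the life table year by year until a simulated death occurs."""
    for age in range(start_age, 101):
        if rng.random() < LIFE_TABLE[age]:
            return age
    return 100  # truncate at the table's maximum age

rng = random.Random(42)
print(sample_other_cause_death_age(50, rng))
```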

When such direct estimation is not possible, model parameters are selected so that the model reproduces observed results, a process called 'calibration'. Calibration is typically necessary for complex models involving transitions that are observed only through indirect consequences. For example, an observed rate of detected tumors is a function of two unobserved processes: the rate of tumor initiation and the growth of tumors to a detectable size. In this case, the functional relationships between MSM parameters and observable calibration data are complex, since MSM parameters describe transitions between specific states, which might not all be observed, while calibration data generally describe observed states resulting from a series of transitions. As a result, MSM parameters may be nonidentifiable.69 Parameter identifiability is determined by the model in the context of available data. A nonidentifiable parameter cannot be estimated, even with an infinite amount of data, although the parameter could become identifiable if a new type of data became available. When parameters are nonidentifiable, it is still possible to find values that provide good fit to calibration data, but these parameter values are not unique. For example, (observed) rates of detected tumors at screening might depend on (unobservable) rates of initiation and growth to detectable size, with different combinations of these parameters producing the same rate of detection. Whether or not a parameter is identifiable may not be obvious; a parameter associated with an unobservable process can be identifiable if there is sufficient information about the entire process. Complete description of a disease process may necessitate specification of an MSM with nonidentifiable parameters. In this case, parameters can be identified by adding information to the model, either by setting some parameters to fixed values and carrying out sensitivity analyses (described in section 5.2) or by specifying prior distributions for parameters and carrying out Bayesian calibration (described below).
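A toy calculation makes the nonidentifiability concrete: if the only calibration target is the product of two unobservable rates, distinct parameter pairs fit the target equally well. The values are purely illustrative.

```python
# Nonidentifiability in miniature: the observable detection rate equals
# (initiation rate) x (probability of growing to detectable size), so
# calibration data on detections alone cannot separate the two parameters.
for init_rate, p_grow in [(0.004, 0.50), (0.008, 0.25), (0.016, 0.125)]:
    print(init_rate * p_grow)   # 0.002 every time: equally good fits
```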

Early MSMs were calibrated by perturbing parameters one at a time and subjectively judging agreement with available data.70 Subjective judgment has largely been replaced by the use of statistics that measure how well the model fits calibration data, and ad hoc parameter perturbation has been replaced by grid search methods.71–74 An undirected grid search selects parameter values based on evaluation of model fit at every node in a grid of parameter values or at a random set of values.73, 75 Undirected searching is based only on MSM results, and therefore can be applied directly to virtually any MSM. However, grid searches are not computationally feasible for highly parameterized models, since the number of grid nodes grows exponentially with the number of model parameters. Furthermore, even a dense grid or a large random sample of parameters might miss regions of good fit.
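A minimal sketch of an undirected grid search follows, with a stand-in run_model and a chi-square-like fit statistic; the parameter names, grid, and targets are all illustrative assumptions, not a specific published procedure.

```python
import itertools

def run_model(params):
    """Stand-in for a full MSM run; returns two predicted rates (toy algebra)."""
    return (params["init"] * params["grow"], params["init"])

def fit_statistic(params, targets):
    """Chi-square-like distance between model output and calibration targets."""
    preds = run_model(params)
    return sum((p - t) ** 2 / max(t, 1e-9) for p, t in zip(preds, targets))

def undirected_grid_search(grid, targets):
    """Evaluate fit at every node of the parameter grid; note the number of
    nodes grows exponentially with the number of parameters."""
    best_score, best_params = float("inf"), None
    for values in itertools.product(*grid.values()):
        params = dict(zip(grid.keys(), values))
        score = fit_statistic(params, targets)
        if score < best_score:
            best_score, best_params = score, params
    return best_score, best_params

grid = {"init": [i / 1000 for i in range(1, 21)],
        "grow": [g / 20 for g in range(1, 20)]}
print(undirected_grid_search(grid, targets=(0.002, 0.008)))
```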

Directed searches move through the parameter space by 'hill climbing', that is, moving in a direction of improving goodness of fit. (The Nelder–Mead simplex algorithm76 is an example of a commonly used directed search method.) The direction taken by a directed search is based on the derivative of the likelihood function, which describes the probability of the calibration data given the MSM and a set of parameter values. These derivatives provide information about the rate of change of the objective function, and so direct the algorithm to move in the direction of most rapid improvement ('up the hill'). For MSMs, there is usually no closed-form expression for these derivatives, so directed searches must rely on approximations.77 In addition, directed searches may find parameter values that provide locally good fit, but not the best fit across all possible parameter sets (globally good fit). To avoid locally, but not globally, good solutions, directed searches should be initiated at widely dispersed points within the parameter space. In spite of these difficulties, directed search methods for MSM calibration are generally more computationally efficient than grid search approaches, requiring fewer runs of the MSM for parameter estimation. Some modelers have focused on model simplification to allow likelihood-based estimation of MSM parameters that parallels usual frequentist or Bayesian estimation approaches.74, 78–86
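The sketch below illustrates a directed search using the Nelder–Mead method from SciPy on a toy Monte Carlo objective, started from several dispersed points. The targets, parameters, and sample size are illustrative assumptions, and a fixed seed (common random numbers) keeps the noisy objective deterministic across evaluations.

```python
import numpy as np
from scipy.optimize import minimize

TARGETS = np.array([0.002, 0.05])    # hypothetical calibration targets

def run_model(theta, n=50_000, seed=7):
    """Toy stand-in for an MSM: Monte Carlo estimates of two event rates.
    The fixed seed makes repeated evaluations comparable."""
    rng = np.random.default_rng(seed)
    p1, p2 = theta
    x = rng.random((2, n))
    return np.array([(x[0] < p1).mean(), (x[1] < p2).mean()])

def objective(theta):
    """Squared deviation from the targets; out-of-range values are penalized."""
    if np.any(theta <= 0.0) or np.any(theta >= 1.0):
        return np.inf
    return float(np.sum((run_model(theta) - TARGETS) ** 2))

# Initiate the search at widely dispersed points to avoid a merely local fit.
starts = [np.array([0.01, 0.10]), np.array([0.001, 0.02]), np.array([0.10, 0.50])]
best = min((minimize(objective, s, method="Nelder-Mead") for s in starts),
           key=lambda r: r.fun)
print(best.x, best.fun)
```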

Another challenge associated with calibration is the combination of data from different sources, collected at different times, from different populations. This problem, essentially one of meta-analysis, arises from the implicit MSM goal of integrating results from randomized controlled trials, observational studies, and expert opinion. If study data include information on participant characteristics, this information can be incorporated into models to adjust for between-study differences, but such covariate adjustment is not always possible. In addition, some data may carry more weight than others because of sample size, the relevance of the population studied, or subjectively perceived data quality. Thus, for example, the modeler might have to assign relative weights to a study of moderate size conducted under current conditions and a much larger study conducted during an earlier period.
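One simple way to encode such judgments is a weighted fit statistic, as in the hypothetical sketch below; the targets and weights are illustrative subjective choices, not recommendations.

```python
# Each calibration source contributes to the overall fit statistic in
# proportion to a subjective weight reflecting size, relevance, and quality.
SOURCES = [
    {"name": "recent moderate study", "target": 0.031, "weight": 2.0},
    {"name": "large older study",     "target": 0.024, "weight": 1.0},
]

def weighted_fit(pred):
    """Weighted squared deviation of a model prediction from all targets."""
    return sum(s["weight"] * (pred - s["target"]) ** 2 for s in SOURCES)

print(weighted_fit(0.028))
```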

4.0 Simulating Populations

Once developed, MSMs are used to simulate a hypothetical population with specific characteristics, such as a specific age-sex distribution, or a specific risk factor profile. An MSM can be structured to simulate a population directly, taking distributions of population characteristics at baseline as inputs (e.g., age, gender, and relevant risk factors). Alternatively, cohort models simulate individuals for a relatively narrow age range and specify uniform age distributions within the age range, with hypothetical populations generated by combining multiple simulated cohorts.
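The fragment below sketches both approaches with placeholder distributions: direct population simulation from assumed baseline inputs, and a narrow cohort with a uniform age distribution that would be weighted and combined with others.

```python
import random

rng = random.Random(3)

# Direct population simulation: draw each simulated person's baseline
# characteristics from assumed input distributions (all placeholders).
def draw_person():
    return {"age": rng.randint(0, 99),
            "sex": rng.choice(("F", "M")),
            "high_risk": rng.random() < 0.10}

population = [draw_person() for _ in range(10_000)]

# Cohort alternative: simulate a narrow age band with a uniform age
# distribution; a hypothetical population is assembled by combining many
# such cohorts with appropriate weights.
cohort_50_54 = [{"age": rng.randint(50, 54), "sex": rng.choice(("F", "M"))}
                for _ in range(2_000)]
print(len(population), len(cohort_50_54))
```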

MSMs are sometimes used to simulate cohorts rather than representative populations. The choice of the target set (population or cohort) can affect the conclusions drawn from the MSM when costs, effectiveness, or cost-effectiveness vary across cohorts.87 For example, because cervical cancer screening is less cost-effective for older women, simulations targeted to a cohort of 15- to 20-year-olds will find that shortening the screening interval from 3 to 2 years is more cost-effective than will simulations targeted to a population that includes more older women.88

5.0 Model Assessment

Assessment of a calibrated MSM includes model validation, examination of sensitivity to untestable assumptions, and incorporation of variability.

5.1 Model Validation

Model validation is the process of assessing whether a model is consistent with data not used for calibration, a process also called `external validation'. Because validation requires that data be held out of the calibration process, MSMs may not be validated, or validation may be based on simulation of randomized trials that are not directly related to the processes of interest. A related problem is that the models generally cannot be validated against the outcomes of greatest interest, since the models focus on unobserved or unobservable phenomena. In some cases, important outcomes, such as survival, may be set aside and used as validation points.89

Comparison of model outputs to calibration data is sometimes called 'internal validation'. There is a grey area between internal and external validation that involves using detailed calibration data for model assessment. For example, suppose that calibration data include incidence rates by decade of life; these rates can be used to internally validate the MSM. If incidence rates were also available by gender and year of life, then further model validation could be based on these more detailed rates, which were not directly used for calibration.

5.2 Sensitivity Analysis

Sensitivity analysis refers to estimation and presentation of model results under various scenarios, often corresponding to varying values of model parameters that are inestimable or poorly estimable. Sensitivity analyses can also provide insight into the impact of specific model assumptions. For example, sensitivity analysis can be used to explore whether adenoma regression, which cannot be directly observed, is plausible by comparing predictions under specific scenarios, e.g., 'no regression' and '10% of lesions regress'.72 Probabilistic sensitivity analysis places distributions on unknown parameters, providing a range of possible results: parameters are sampled from specified distributions, and multiple MSM runs are used to infer the variability in model results that arises from variability in model parameters.83, 90–93 Sensitivity analyses are common, largely because most models include unobservable components.
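A minimal probabilistic sensitivity analysis might look like the following sketch, which places a uniform distribution on a single uncertain parameter (here, the fraction of lesions that regress) and summarizes the induced spread in a toy model's output; the model and range are illustrative assumptions.

```python
import random
import statistics

rng = random.Random(11)

def toy_model(p_regress):
    """Stand-in MSM output, e.g., a predicted incidence rate."""
    return 0.050 * (1.0 - p_regress)

# Sample the uncertain parameter from its distribution and rerun the model.
results = [toy_model(rng.uniform(0.0, 0.10)) for _ in range(1_000)]

cuts = statistics.quantiles(results, n=40)   # cut points at 2.5%, 5%, ..., 97.5%
print("median prediction:", statistics.median(results))
print("95% interval:", (cuts[0], cuts[-1]))
```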

5.3 Between-Model Comparisons

Another type of model assessment compares predictions from different models that are calibrated using the same data. Such comparative modeling studies allow exploration of uncertainty due to model structure. Examples of this approach include estimation of the combined effects of screening and treatment on breast cancer mortality based on 7 CISNET models for breast cancer89 and the Mt. Hood Challenge comparing diabetes models.26, 27 Each of these groups compared models only after standardizing the calibration data. Without such cooperation to simulate and present results, it can be difficult to directly compare model results. For example, four different simulation studies examining the cost-effectiveness of spiral CT for lung cancer screening based on results from the Early Lung Cancer Action Project (ELCAP)94, 95 reported a wide range of cost-effectiveness estimates, from $2,500 per life year gained to $154,000 per quality-adjusted life year gained.96–99 The reasons for these large differences include differences in the screening frequency examined, lung cancer risk in the hypothetical population, and costs attributed to screening, diagnosis, and treatment. Differences in the underlying assumptions of the microsimulation models are another important source of variability, but the impact of these differences is difficult to determine without a systematic comparison of both model assumptions and model predictions.

While comparison of results across independently developed models provides an important avenue for addressing variability due to model structure, these comparisons are very time consuming and are only practical for major policy questions. In addition, it can be difficult to determine which models provide the best fit to available data. Bayesian model averaging provides a formal framework for model comparison that could, in theory, be used to compare different parameterizations of a single model, but this approach requires a maximized likelihood for each competing model.100

5.4 Sources of Variability

Sources of MSM variability and uncertainty include inherent variability in the population of interest, variability due to estimation of unknown parameters, selection of calibration data, sampling variability of the selected calibration data, simulation (Monte Carlo) variability, and variability due to model structure assumptions.83, 101 Bayesian calibration methods parallel Bayesian estimation, and provide interval estimates that describe the variability in both parameter estimates and model predictions due to parameter estimation, sampling variability of the selected calibration data, and simulation variability.85 Repeated model runs that systematically vary parameter values can be used to assess the relationship between parameter variability and variability in model predictions, with findings used to direct model improvements (e.g., additional data collection or modifications to the model structure) toward reducing the variability of those parameters that have the greatest impact on prediction variability.102, 103
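In the spirit of such Bayesian calibration (a loose accept/reject sketch, not the specific algorithm of reference 85), the fragment below draws parameters from a prior, runs a toy model, and retains draws whose output lies near a calibration target; accepted draws approximate the posterior, and their spread yields interval estimates. All targets, priors, and tolerances are illustrative.

```python
import random

rng = random.Random(5)
TARGET, TOL = 0.020, 0.002        # hypothetical detection-rate target, tolerance

def toy_model(init_rate, p_grow, n=5_000):
    """Monte Carlo estimate of the detection rate implied by the parameters."""
    return sum(rng.random() < init_rate * p_grow for _ in range(n)) / n

accepted = []
for _ in range(1_000):
    theta = (rng.uniform(0.0, 0.05), rng.uniform(0.0, 1.0))   # prior draws
    if abs(toy_model(*theta) - TARGET) < TOL:                 # accept near target
        accepted.append(theta)

print(len(accepted), "accepted draws")
if len(accepted) > 40:
    inits = sorted(t[0] for t in accepted)
    lo, hi = inits[int(0.025 * len(inits))], inits[int(0.975 * len(inits))]
    print("approximate 95% posterior interval for initiation rate:", lo, hi)
```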

To illustrate the use of replication in error analysis for calibration, suppose the target population includes N individuals. Rather than simulating a single population of 1000N individuals, more information can be gained by simulating results independently in 1000 samples of size N; while a precise point estimate is obtained as the mean of the 1000 simulations, an interval estimate can be obtained by calculating the variability in model results across simulations.
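A sketch of this replication strategy follows, with a placeholder event probability standing in for a model-generated outcome.

```python
import random
import statistics

# Simulate 1,000 independent populations of size N rather than one population
# of size 1,000*N; the event probability is an illustrative placeholder.
N, REPS, P_EVENT = 5_000, 1_000, 0.03
rng = random.Random(2024)

estimates = [sum(rng.random() < P_EVENT for _ in range(N)) / N
             for _ in range(REPS)]

cuts = statistics.quantiles(estimates, n=40)
print("point estimate:", statistics.fmean(estimates))   # mean over replicates
print("interval estimate:", (cuts[0], cuts[-1]))        # spread across replicates
```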

Although these methods can be used to summarize and explain uncertainty in model predictions, computational and conceptual limitations mean that MSM results are routinely provided without measures of precision. Further development of computationally efficient methods for estimating the uncertainty in MSM estimates and predictions is an important area of research.

6.0 MSM Transparency

Model transparency refers to the ability to convey model assumptions. There are divergent views on what transparency means and what level of transparency is optimal.104, 105 Different levels of disclosure can be consistent with model transparency, such as describing MSM assumptions (including relative costs), providing algorithms used in the MSM, providing equations used to program the MSM, and, ultimately, releasing the computer code underlying the model. While release of computer code is seemingly the most transparent approach, this strategy is time consuming and ultimately uninformative to the vast majority of end users, so that code release may obscure rather than clarify the model. While transparency often refers to model structure (assumptions, algorithms, equations), it should also incorporate the data used to calibrate or estimate model parameters, goodness of fit to calibration data (including fit to subgroups of interest), and validation results.

MSM transparency can be difficult to achieve. In experimental or observational research, transparency is often understood as the hypothetical ability to repeat an experiment or analysis. Yet many published articles use complex data analytic models that cannot be completely described within page limits, so that even these analyses are not completely transparent. Furthermore, there are very few venues for the complete model description and review necessary for full evaluation of a model. Technical appendices, including supplemental material published online, can be critical; for example, the CISNET modeling group provides online model profiles.106 Finally, there are disincentives to fully transparent models. Models take years to develop, resulting in hesitation to fully disclose this 'intellectual property', and there are explicit financial disincentives for making proprietary models fully transparent. In spite of these difficulties, model transparency is necessary if MSMs are to be used to inform policy decisions.

7.0 Conclusions

Microsimulation models combine expert opinion with observational and experimental results, providing a relatively inexpensive way to estimate population-level effects (including costs and benefits) of interventions, policy changes, or shifts in risk factors. Microsimulation modeling is beginning to coalesce as a defined area of expertise, as evidenced by the publication of modeling guidelines,107, 108 funding of microsimulation modeling efforts by NCI and NIH, microsimulation-focused conferences such as the Conference of the International Microsimulation Association sponsored by Statistics Canada109 and the General Conference of the International Microsimulation Association,110 and the recently launched International Journal of Microsimulation. Given this interest, we anticipate both increased use of microsimulation models to address research questions and parallel improvements in the methods used to develop and apply them.

Acknowledgments

Supported by NCI U01 CA97427

9.0 References

  • 1.Orcutt GH. A new type of socio-economic system. Review of Economics and Statistics. 1957;80:1081–1100. [Google Scholar]
  • 2.Citro CF, Hanushek EA, editors. Review and Recommendations. Vol I. National Research Council; 1991. Improving Information for Social Policy Decisions: The Uses of Microsimulation Modeling. [Google Scholar]
  • 3.Lemp JD, McWethy LB, Kockelman KM. From aggregate methods to microsimulation - Assessing benefits of microscopic activity-based models of travel demand. Transportation Research Record. 2007;1994:80–88. [Google Scholar]
  • 4.Hoogendoorn SP, Bovy PHL. State-of-the-art of vehicular traffic flow modelling. Proceedings of the Institution of Mechanical Engineers. 2001;215(14):283–303. [Google Scholar]
  • 5.Hollocks BW. Forty years of discrete event simulation - a personal reflection. Journal of the Operational Research Society. 2006;57:1383–1399. [Google Scholar]
  • 6.Cassels S, Clark SJ, Morris M. Mathematical models for HIV transmission dynamics: tools for social and behavioral science research. J Acquir Immune Defic Syndr. 2008 Mar 1;47(Suppl 1):S34–39. doi: 10.1097/QAI.0b013e3181605da3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Castiglione F, Pappalardo F, Bernaschi M, Motta S. Optimization of HAART with genetic algorithms and agent-based models of HIV infection. Bioinformatics. 2007 Dec 15;23(24):3350–3355. doi: 10.1093/bioinformatics/btm408. [DOI] [PubMed] [Google Scholar]
  • 8.Ackerman E, Longini IM, Jr., Seaholm SK, Hedin AS. Simulation of mechanisms of viral interference in influenza. Int J Epidemiol. 1990 Jun;19(2):444–454. doi: 10.1093/ije/19.2.444. [DOI] [PubMed] [Google Scholar]
  • 9.Ackerman E, Zhuo Z, Altmann M, et al. Simulation of stochastic micropopulation models--I. The SUMMERS simulation shell. Comput Biol Med. 1993 May;23(3):177–198. doi: 10.1016/0010-4825(93)90020-2. [DOI] [PubMed] [Google Scholar]
  • 10.Peterson D, Gatewood L, Zhuo Z, Yang JJ, Seaholm S, Ackerman E. Simulation of stochastic micropopulation models--II. VESPERS: epidemiological model implementations for spread of viral infections. Comput Biol Med. 1993 May;23(3):199–213. doi: 10.1016/0010-4825(93)90021-r. [DOI] [PubMed] [Google Scholar]
  • 11.Longini IM, Jr., Halloran ME, Nizam A, et al. Containing a large bioterrorist smallpox attack: a computer simulation approach. Int J Infect Dis. 2007 Mar;11(2):98–108. doi: 10.1016/j.ijid.2006.03.002. [DOI] [PubMed] [Google Scholar]
  • 12.Plaisier AP, van Oortmarssen GJ, Habbema JD, Remme J, Alley ES. ONCHOSIM: a model and computer simulation program for the transmission and control of onchocerciasis. Comput Methods Programs Biomed. 1990 Jan;31(1):43–56. doi: 10.1016/0169-2607(90)90030-d. [DOI] [PubMed] [Google Scholar]
  • 13.Habbema JD, Alley ES, Plaisier AP, van Oortmarssen GJ, Remme JH. Epidemiological modelling for onchocerciasis control. Parasitol Today. 1992 Mar;8(3):99–103. doi: 10.1016/0169-4758(92)90248-z. [DOI] [PubMed] [Google Scholar]
  • 14.Habbema JD, De Vlas SJ, Plaisier AP, Van Oortmarssen GJ. The microsimulation approach to epidemiologic modeling of helminthic infections, with special reference to schistosomiasis. Am J Trop Med Hyg. 1996 Nov;55(5 Suppl):165–169. doi: 10.4269/ajtmh.1996.55.165. [DOI] [PubMed] [Google Scholar]
  • 15.Kim SY, Goldie SJ. Cost-effectiveness analyses of vaccination programmes : a focused review of modelling approaches. Pharmacoeconomics. 2008;26(3):191–215. doi: 10.2165/00019053-200826030-00004. [DOI] [PubMed] [Google Scholar]
  • 16.Bonabeau E. Agent-based modeling: methods and techniques for simulating human systems. Proc Natl Acad Sci U S A. 2002 May 14;99(Suppl 3):7280–7287. doi: 10.1073/pnas.082080899. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17.Tesfatsion L, Judd KL. Handbook of Computational Economics. Vol 2. Elsevier; North-Holland, Amsterdam: 2006. [Google Scholar]
  • 18.Habbema JD, van Oortmarssen GJ, Lubbe JT, van der Maas PJ. The MISCAN simulation program for the evaluation of screening for disease. Comput Methods Programs Biomed. 1985 May;20(1):79–93. doi: 10.1016/0169-2607(85)90048-3. [DOI] [PubMed] [Google Scholar]
  • 19.Habbema JD, van Oortmarssen GJ, Lubbe JT, van der Maas PJ. Model building on the basis of Dutch cervical cancer screening data. Maturitas. 1985 May;7(1):11–20. doi: 10.1016/0378-5122(85)90030-1. [DOI] [PubMed] [Google Scholar]
  • 20.van Oortmarssen GJ, Habbema JD, van der Maas PJ, et al. A model for breast cancer screening. Cancer. 1990 Oct 1;66(7):1601–1612. doi: 10.1002/1097-0142(19901001)66:7<1601::aid-cncr2820660727>3.0.co;2-o. [DOI] [PubMed] [Google Scholar]
  • 21.Loeve F, Boer R, van Oortmarssen GJ, van Ballegooijen M, Habbema JD. The MISCANCOLON simulation model for the evaluation of colorectal cancer screening. Comput Biomed Res. 1999;32(1):13–33. doi: 10.1006/cbmr.1998.1498. [DOI] [PubMed] [Google Scholar]
  • 22.Draisma G, Boer R, Otto SJ, et al. Lead times and overdetection due to prostate-specific antigen screening: estimates from the European Randomized Study of Screening for Prostate Cancer. J Natl Cancer Inst. 2003 Jun 18;95(12):868–878. doi: 10.1093/jnci/95.12.868. [DOI] [PubMed] [Google Scholar]
  • 23.Wolfson MC. POHEM--a framework for understanding and modelling the health of human populations. World Health Stat Q. 1994;47(3–4):157–176. [PubMed] [Google Scholar]
  • 24.Will BP, Berthelot JM, Nobrega KM, Flanagan W, Evans WK. Canada's Population Health Model (POHEM): a tool for performing economic evaluations of cancer control interventions. Eur J Cancer. 2001 Sep;37(14):1797–1804. doi: 10.1016/s0959-8049(01)00204-0. [DOI] [PubMed] [Google Scholar]
  • 25.Statistics Canada. Microsimulation. 2009. http://www.statcan.ca/english/spsd/. Accessed June 12, 2009.
  • 26.Brown JB, Palmer AJ, Bisgaard P. The Mount Hood Challenge: cross-testing two diabetes simulation models. Diabetes Res Clin Pract. 2000;50(Suppl 3):S57–64. doi: 10.1016/s0168-8227(00)00217-5. [DOI] [PubMed] [Google Scholar]
  • 27.The Mount Hood 4 Modeling Group. Computer modeling of diabetes and its complications: a report on the Fourth Mount Hood Challenge Meeting. Diabetes Care. 2007;30:1638–1646. doi: 10.2337/dc07-9919. [DOI] [PubMed] [Google Scholar]
  • 28.Kuntz KM, Tsevat J, Goldman L, Weinstein MC. Cost-effectiveness of routine coronary angiography after acute myocardial infarction. Circulation. 1996 Sep 1;94(5):957–965. doi: 10.1161/01.cir.94.5.957. [DOI] [PubMed] [Google Scholar]
  • 29.Smolen HJ, Cohen DJ, Samsa GP, et al. Development, validation, and application of a microsimulation model to predict stroke and mortality in medically managed asymptomatic patients with significant carotid artery stenosis. Value Health. 2007 Nov-Dec;10(6):489–497. doi: 10.1111/j.1524-4733.2007.00204.x. [DOI] [PubMed] [Google Scholar]
  • 30.Matchar DB, Samsa GP, Matthews JR, et al. The Stroke Prevention Policy Model: linking evidence and clinical decisions. Ann Intern Med. 1997 Oct 15;127(8 Pt 2):704–711. doi: 10.7326/0003-4819-127-8_part_2-199710151-00054. [DOI] [PubMed] [Google Scholar]
  • 31.Puvimanasinghe JP, Takkenberg JJ, Edwards MB, et al. Comparison of outcomes after aortic valve replacement with a mechanical valve or a bioprosthesis using microsimulation. Heart. 2004 Oct;90(10):1172–1178. doi: 10.1136/hrt.2003.013102. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.Ouwens JP, Groen H, TenVergert EM, Koeter GH, de Boer WJ, van der Bij W. Simulated waiting list prioritization for equitable allocation of donor lungs. J Heart Lung Transplant. 2002 Jul;21(7):797–803. doi: 10.1016/s1053-2498(02)00385-6. [DOI] [PubMed] [Google Scholar]
  • 33.Groen H, van der Bij W, Koeter GH, TenVergert EM. Cost-effectiveness of lung transplantation in relation to type of end-stage pulmonary disease. Am J Transplant. 2004 Jul;4(7):1155–1162. doi: 10.1111/j.1600-6143.2004.00479.x. [DOI] [PubMed] [Google Scholar]
  • 34.Vanness DJ, Tosteson AN, Gabriel SE, Melton LJ., 3rd The need for microsimulation to evaluate osteoporosis interventions. Osteoporos Int. 2005 Apr;16(4):353–358. doi: 10.1007/s00198-004-1826-8. [DOI] [PubMed] [Google Scholar]
  • 35.Liu H, Michaud K, Nayak S, Karpf DB, Owens DK, Garber AM. The cost-effectiveness of therapy with teriparatide and alendronate in women with severe osteoporosis. Arch Intern Med. 2006 Jun 12;166(11):1209–1217. doi: 10.1001/archinte.166.11.1209. [DOI] [PubMed] [Google Scholar]
  • 36.Schousboe JT, Taylor BC, Fink HA, et al. Cost-effectiveness of bone densitometry followed by treatment of osteoporosis in older men. JAMA. 2007 Aug 8;298(6):629–637. doi: 10.1001/jama.298.6.629. [DOI] [PubMed] [Google Scholar]
  • 37.Alagoz O, Bryce CL, Shechter S, et al. Incorporating biological natural history in simulation models: empirical estimates of the progression of end-stage liver disease. Med Decis Making. 2005 Nov-Dec;25(6):620–632. doi: 10.1177/0272989X05282719. [DOI] [PubMed] [Google Scholar]
  • 38.Karnon J, Goyder E, Tappenden P, et al. A review and critique of modelling in prioritising and designing screening programmes. Health Technol Assess. 2007 Dec;11(52):iii–iv. ix–xi, 1–145. doi: 10.3310/hta11520. [DOI] [PubMed] [Google Scholar]
  • 39.National Cancer Institute. Cancer Intervention and Surveillance Modeling Network (CISNET) 2008 http://cisnet.cancer.gov/
  • 40.Models of Infectious Disease Agent Study (MIDAS). 2009. http://www.nigms.nih.gov/Initiatives/MIDAS/. Accessed June 12, 2009.
  • 41.COMnet (Childhood Obesity Modeling Network). 2009. http://obesitymodeling.net/beta/. Accessed June 12, 2009.
  • 42.TreeAge [computer program] TreeAge Software, Inc.; Williamstown, MA: 2008. [Google Scholar]
  • 43.Kelton DW, Sadowski RP, Sturrock DT. Simulation with Arena. 4th ed McGraw-Hill; 2007. [Google Scholar]
  • 44.McEwan P, Williams JE, Griffiths JD, et al. Evaluating the performance of the Framingham risk equations in a population with diabetes. Diabet Med. 2004 Apr;21(4):318–323. doi: 10.1111/j.1464-5491.2004.01139.x. [DOI] [PubMed] [Google Scholar]
  • 45.Bellsey J, Sheldon AM. Healthcare Applications of Decision Analytic Modeling. Health Economics in Prevention and Care. 2000;1:37–43. [Google Scholar]
  • 46.Hernandez R, Vale L. The value of myocardial perfusion scintigraphy in the diagnosis and management of angina and myocardial infarction: a probabilistic economic analysis. Med Decis Making. 2007 Nov-Dec;27(6):772–788. doi: 10.1177/0272989X07306111. [DOI] [PubMed] [Google Scholar]
  • 47.Delaney G, Jacob S, Featherstone C, Barton M. The role of radiotherapy in cancer treatment: estimating optimal utilization from a review of evidence-based clinical guidelines. Cancer. 2005 Sep 15;104(6):1129–1137. doi: 10.1002/cncr.21324. [DOI] [PubMed] [Google Scholar]
  • 48.Etzioni R, Penson DF, Legler JM, et al. Overdiagnosis due to prostate-specific antigen screening: lessons from U.S. prostate cancer incidence trends. J Natl Cancer Inst. 2002 Jul 3;94(13):981–990. doi: 10.1093/jnci/94.13.981. [DOI] [PubMed] [Google Scholar]
  • 49.van den Akker-van Marle ME, van Ballegooijen M, van Oortmarssen GJ, Boer R, Habbema JDF. Cost-effectiveness of cervical cancer screening: comparison of screening policies. J Natl Cancer Inst. 2002;94:193–204. doi: 10.1093/jnci/94.3.193. [DOI] [PubMed] [Google Scholar]
  • 50.Vogelaar I, Van Ballegooijen M, Schrag D, et al. How much can current interventions reduce colorectal cancer mortality in the U.S.? Cancer. 2006;107:1623–1633. doi: 10.1002/cncr.22115. [DOI] [PubMed] [Google Scholar]
  • 51.National Cancer Institute Colorectal Cancer Mortality Projections. 2007 Dec; http://www.cisnet.cancer.gov/projections/colorectal/index.php, 2008.
  • 52.Zauber AG, Lansdorp-Vogelaar I, Knudsen AB, Wilschut J, van Ballegooijen M, Kuntz KM. Evaluating test strategies for colorectal cancer screening: a decision analysis for the U.S. Preventive Services Task Force. Ann Intern Med. 2008 Nov 4;149(9):659–669. doi: 10.7326/0003-4819-149-9-200811040-00244. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 53.Hardcastle JD, Chamberlain JO, Robinson MH, et al. Randomised controlled trial of faecal-occult-blood screening for colorectal cancer. Lancet. 1996;348(9040):1472–1477. doi: 10.1016/S0140-6736(96)03386-7. [DOI] [PubMed] [Google Scholar]
  • 54.Kronborg O, Fenger C, Olsen J, Jorgensen OD, Sondergaard O. Randomised study of screening for colorectal cancer with faecal-occult-blood test. Lancet. 1996;348(9040):1467–1471. doi: 10.1016/S0140-6736(96)03430-7. [DOI] [PubMed] [Google Scholar]
  • 55.Towler B, Irwig L, Glasziou P, Kewenter J, Weller D, Silagy C. A systematic review of the effects of screening for colorectal cancer using the faecal occult blood test, hemoccult. BMJ. 1998;317:559–565. doi: 10.1136/bmj.317.7158.559. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 56.Selby JV, Friedman GD, Quesenberry CP, Jr., Weiss NS. A case-control study of screening sigmoidoscopy and mortality from colorectal cancer. N Engl J Med. 1992;326(10):653–657. doi: 10.1056/NEJM199203053261001. [DOI] [PubMed] [Google Scholar]
  • 57.Hixson LJ, Fennerty MB, Sampliner RE, McGee D, Garewal H. Prospective study of the frequency and size distribution of polyps missed by colonoscopy. J Natl Cancer Inst. 1990;82(22):1769–1772. doi: 10.1093/jnci/82.22.1769. [DOI] [PubMed] [Google Scholar]
  • 58.Rex DK, Cutler CS, Lemmel GT, et al. Colonoscopic miss rates of adenomas determined by back-to-back colonoscopies. Gastroenterology. 1997;112(1):24–28. doi: 10.1016/s0016-5085(97)70214-2. [DOI] [PubMed] [Google Scholar]
  • 59.Johnson CD, Chen MH, Toledano AY, et al. Accuracy of CT colonography for detection of large adenomas and cancers. N Engl J Med. 2008 Sep 18;359(12):1207–1217. doi: 10.1056/NEJMoa0800996. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 60.HHS.gov. Centers for Medicare & Medicaid Services (CMS) 2008 http://www.cms.hhs.gov/mcd/viewmcac.asp?from2=viewmcac.asp&where=index&mid=45&, 2008.
  • 61.Lee RC, Donaldson C, Cook LS. The need for evolution in healthcare decision modeling. Med Care. 2003 Sep;41(9):1024–1033. doi: 10.1097/01.MLR.0000083746.54410.CF. [DOI] [PubMed] [Google Scholar]
  • 62.Feuer EJ, Etzioni R, Cronin KA, Mariotto A. The use of modeling to understand the impact of screening on U.S. mortality: examples from mammography and PSA testing. Stat Methods Med Res. 2004 Dec;13(6):421–442. doi: 10.1191/0962280204sm376ra. [DOI] [PubMed] [Google Scholar]
  • 63.Luebeck EG, Moolgavkar SH, Liu AY, Boynton A, Ulrich CM. Does folic acid supplementation prevent or promote colorectal cancer? Results from model-based predictions. Cancer Epidemiol Biomarkers Prev. 2008 Jun;17(6):1360–1367. doi: 10.1158/1055-9965.EPI-07-2878. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 64.Cox DR, Miller HD. The theory of stochastic processes. Chapman and Hall Ltd.; London: 1965. [Google Scholar]
  • 65.Beck JR, Pauker SG. The Markov process in medical prognosis. Med Decis Making. 1983;3(4):419–458. doi: 10.1177/0272989X8300300403. [DOI] [PubMed] [Google Scholar]
  • 66.National Cancer Institute Surveillance, Epidemiology and End Results - SEER 9 Regs Public Use. 2004 www.seer.cancer.gov 2004.
  • 67.Rosenberg MA. Competing risks to breast cancer mortality. J Natl Cancer Inst Monogr. 2006;36:15–19. doi: 10.1093/jncimonographs/lgj004. [DOI] [PubMed] [Google Scholar]
  • 68.National Center for Health Statistics US Life Tables. 2000:1960–2000. www.cdc.gov/nchs/products/pubs/pubd/lftbls/life/1966.htm.
  • 69.Bickel PJ, Doksum KA. Mathematical Statistics: Basic Ideas and Selected Topics. Holden-Day; San Francisco: 1977. [Google Scholar]
  • 70.Ramsey SD, McIntosh M, Etzioni R, Urban N. Simulation modeling of outcomes and cost effectiveness. Hematol Oncol Clin North Am. 2000;14(4):925–938. doi: 10.1016/s0889-8588(05)70319-1. [DOI] [PubMed] [Google Scholar]
  • 71.Ness RM, Holmes AM, Klein R, Dittus R. Cost-utility of one-time colonoscopic screening for colorectal cancer at various ages. Am J Gastroenterol. 2000;95(7):1800–1811. doi: 10.1111/j.1572-0241.2000.02172.x. [DOI] [PubMed] [Google Scholar]
  • 72.Loeve F, Boer R, Zauber AG, et al. National Polyp Study data: evidence for regression of adenomas. Int J Cancer. 2004 Sep 10;111(4):633–639. doi: 10.1002/ijc.20277. [DOI] [PubMed] [Google Scholar]
  • 73.Salomon JA, Weinstein MC, Hammitt JK, Goldie SJ. Empirically calibrated model of hepatitis C virus infection in the United States. Am J Epidemiol. 2002 Oct 15;156(8):761–773. doi: 10.1093/aje/kwf100. [DOI] [PubMed] [Google Scholar]
  • 74.Chia YL, Salzman P, Plevritis SK, Glynn PW. Simulation-based parameter estimation for complex models: a breast cancer natural history modelling illustration. Stat Methods Med Res. 2004 Dec;13(6):507–524. doi: 10.1191/0962280204sm380ra. [DOI] [PubMed] [Google Scholar]
  • 75.Kim JJ, Kuntz KM, Stout NK, et al. Multiparameter calibration of a natural history model of cervical cancer. Am J Epidemiol. 2007 Jul 15;166(2):137–150. doi: 10.1093/aje/kwm086. [DOI] [PubMed] [Google Scholar]
  • 76.Press WH, Teukolsky SA, Vetterling WT, Flannery BP. Numerical Recipes in Fortran: The Art of Scientific Computing. Cambridge University Press; New York: 1992. [Google Scholar]
  • 77.Tan SY, van Oortmarssen GJ. Estimating parameters of a microsimulation model for breast cancer screening using the score function method. Annals of Operations Research. 2003;119:43–61. [Google Scholar]
  • 78.Lee SJ, Zelen M. Scheduling periodic examinations for the early detection of disease: Applications to breast cancer. JASA. 1998;93:1271–1281. [Google Scholar]
  • 79.Shen Y, Zelen M. Parametric estimation procedures for screening programmes: Stable and nonstable disease models for multimodality case finding. Biometrika. 1999;86(3):503–515. [Google Scholar]
  • 80.Shen Y, Zelen M. Robust modeling in screening studies: estimation of sensitivity and preclinical sojourn time distribution. Biostatistics. 2005 Oct;6(4):604–614. doi: 10.1093/biostatistics/kxi030. [DOI] [PubMed] [Google Scholar]
  • 81.Tsodikov A, Szabo A, Wegelin J. A population model of prostate cancer incidence. Stat Med. 2006 Aug 30;25(16):2846–2866. doi: 10.1002/sim.2257. [DOI] [PubMed] [Google Scholar]
  • 82.Flehinger BJ, Kimmel M. The natural history of lung cancer in a periodically screened population. Biometrics. 1987 Mar;43(1):127–144. [PubMed] [Google Scholar]
  • 83.Parmigiani G. Measuring uncertainty in complex decision analysis models. Stat Methods Med Res. 2002 Dec;11(6):513–537. doi: 10.1191/0962280202sm307ra. [DOI] [PubMed] [Google Scholar]
  • 84.Berry DA, Inoue L, Shen Y, et al. Modeling the impact of treatment and screening on U.S. breast cancer mortality: a Bayesian approach. J Natl Cancer Inst Monogr. 2006(36):30–36. doi: 10.1093/jncimonographs/lgj006. [DOI] [PubMed] [Google Scholar]
  • 85.Rutter CM, Miglioretti DL, Savarino JE. Bayesian calibration of microsimulation models. J Am Stat Assoc. 2009;104(488):1338–1350. doi: 10.1198/jasa.2009.ap07466. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 86.Craig BA, Fryback DG, Klein R, Klein BE. A Bayesian approach to modelling the natural history of a chronic condition from observations with intervention. Stat Med. 1999;18(11):1355–1371. doi: 10.1002/(sici)1097-0258(19990615)18:11<1355::aid-sim130>3.0.co;2-k. [DOI] [PubMed] [Google Scholar]
  • 87.Gold MR, Siegel JE, Russell LB, Weinstein MC. Cost-effectiveness in health and medicine. Oxford University Press; New York: 1996. [Google Scholar]
  • 88.Dewilde S, Anderson R. The cost-effectiveness of screening programs using single and multiple birth cohort simulations: a comparison using a model of cervical cancer. Med Decis Making. 2004 Sep-Oct;24(5):486–492. doi: 10.1177/0272989X04268953. [DOI] [PubMed] [Google Scholar]
  • 89.Berry DA, Cronin KA, Plevritis SK, et al. Effect of screening and adjuvant therapy on mortality from breast cancer. N Engl J Med. 2005;353(17):1784–1792. doi: 10.1056/NEJMoa050518. [DOI] [PubMed] [Google Scholar]
  • 90.Cronin KA, Legler JM, Etzioni RD. Assessing uncertainty in microsimulation modelling with application to cancer screening interventions. Stat Med. 1998;17(21):2509–2523. doi: 10.1002/(sici)1097-0258(19981115)17:21<2509::aid-sim949>3.0.co;2-v. [DOI] [PubMed] [Google Scholar]
  • 91.Doubilet P, Begg CB, Weinstein MC, Braun P, McNeil BJ. Probabilistic sensitivity analysis using Monte Carlo simulation. A practical approach. Med Decis Making. 1985 Summer;5(2):157–177. doi: 10.1177/0272989X8500500205. [DOI] [PubMed] [Google Scholar]
  • 92.Critchfield GC, Willard KE. Probabilistic analysis of decision trees using Monte Carlo simulation. Med Decis Making. 1986 Apr-Jun;6(2):85–92. doi: 10.1177/0272989X8600600205. [DOI] [PubMed] [Google Scholar]
  • 93.Griffin S, Claxton K, Hawkins N, Sculpher M. Probabilistic analysis and computationally expensive models: Necessary and required? Value Health. 2006 Jul-Aug;9(4):244–252. doi: 10.1111/j.1524-4733.2006.00107.x. [DOI] [PubMed] [Google Scholar]
  • 94.Henschke CI, McCauley DI, Yankelevitz DF, et al. Early Lung Cancer Action Project: overall design and findings from baseline screening. Lancet. 1999 Jul 10;354(9173):99–105. doi: 10.1016/S0140-6736(99)06093-6. [DOI] [PubMed] [Google Scholar]
  • 95.Henschke CI, McCauley DI, Yankelevitz DF, et al. Early lung cancer action project: a summary of the findings on baseline screening. Oncologist. 2001;6(2):147–152. doi: 10.1634/theoncologist.6-2-147. [DOI] [PubMed] [Google Scholar]
  • 96.Wisnivesky JP, Mushlin AI, Sicherman N, Henschke C. The cost-effectiveness of low-dose CT screening for lung cancer: preliminary results of baseline screening. Chest. 2003 Aug;124(2):614–621. doi: 10.1378/chest.124.2.614. [DOI] [PubMed] [Google Scholar]
  • 97.Marshall D, Simpson KN, Earle CC, Chu CW. Economic decision analysis model of screening for lung cancer. Eur J Cancer. 2001 Sep;37(14):1759–1767. doi: 10.1016/s0959-8049(01)00205-2. [DOI] [PubMed] [Google Scholar]
  • 98.Chirikos TN, Hazelton T, Tockman M, Clark R. Screening for lung cancer with Ct. Chest. 2002;121:1507–1514. doi: 10.1378/chest.121.5.1507. [DOI] [PubMed] [Google Scholar]
  • 99.Mahadevia PJ, Fleisher LA, Frick KD, Eng J, Goodman SN, Powe NR. Lung cancer screening with helical computed tomography in older adult smokers: a decision and cost-effectiveness analysis. JAMA. 2003;289(3):313–322. doi: 10.1001/jama.289.3.313. [DOI] [PubMed] [Google Scholar]
  • 100.Jackson CH, Thompson SG, Sharples LD. Accounting for uncertainty in health economic decision models by using model averaging. JRSS-A. 2009;172:282–404. doi: 10.1111/j.1467-985X.2008.00573.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 101.Parmigiani G, Ancukiewicz M, Matchar DB. Decision models in clinical recommendations development: the Stroke Prevention Policy Model. In: Berry DA, Stangl DK, editors. Bayesian Biostatistics. Vol 151. Marcel Dekker; New York: 1996. pp. 207–233. [Google Scholar]
  • 102.Thurston SW, Zaslavsky AM. Variance estimation in microsimulation models of the food stamp program. Paper presented at: American Statistical Association, Social Statistics Section; 1996. [Google Scholar]
  • 103.Zaslavsky AM, Thurston SW. Error analysis of food stamp microsimulation models: further results. Paper presented at: American Statistical Association, Social Statistics Section; 1995; Orlando, FL. [Google Scholar]
  • 104.Eddy DM. Accuracy versus transparency in pharmacoeconomic modelling: finding the right balance. Pharmacoeconomics. 2006;24(9):837–844. doi: 10.2165/00019053-200624090-00002. [DOI] [PubMed] [Google Scholar]
  • 105.Fendrick AM. The future of health economic modeling: have we gone too far or not far enough? Value Health. 2006 May-Jun;9(3):179–180. doi: 10.1111/j.1524-4733.2006.00115.x. [DOI] [PubMed] [Google Scholar]
  • 106.CISNET. 2009. http://cisnet.cancer.gov. Accessed June 12, 2009.
  • 107.Weinstein MC, O'Brien B, Hornberger J, et al. Principles of good practice for decision analytic modeling in health-care evaluation: report of the ISPOR Task Force on Good Research Practices--Modeling Studies. Value Health. 2003;6(1):9–17. doi: 10.1046/j.1524-4733.2003.00234.x. [DOI] [PubMed] [Google Scholar]
  • 108.American Diabetes Association Consensus Panel. Guidelines for computer modeling of diabetes and its complications. Diabetes Care. 2004;27(9):2262–2265. doi: 10.2337/diacare.27.9.2262. [DOI] [PubMed] [Google Scholar]
  • 109.Statistics Canada. 2009. http://www.statcan.gc.ca/start-debut-eng.html. Accessed June 12, 2009.
  • 110.International Microsimulation Association. 2009. www.microsimulation.org. Accessed June 12, 2009.
