Published in final edited form as: Med Decis Making. 2020 Aug 5;40(6):718–721. doi: 10.1177/0272989X20945308

Dissemination science to advance the use of simulation modeling: Our obligation moving forward

Bohdan Nosyk 1,2, Janet A Weiner 3, Emanuel Krebs 1,2, Xiao Zang 4, Benjamin Enns 2, Czarina N Behrends 5, Daniel J Feaster 6, Hawre J Jalal 7, Brandon DL Marshall 4, Ankur N Pandya 8, Bruce R Schackman 5, Zachary F Meisel 3

The COVID-19 pandemic has thrust simulation models into the public discourse at an unprecedented speed. The projections of simulation models have been discussed, parsed, accepted, and discounted by the press, policymakers, pundits and politicos. While the story of how simulation models have been translated and digested during the pandemic is still unfolding, even at this stage it is worth considering what we can learn from the experience of using infectious disease models to make public policy decisions.

On vivid display is the core inherent challenge facing the field of simulation modeling: namely, the wide range of assumptions, qualifications, and caveats that accompanies every model. As scientists, we are trained not to overstate our conclusions; the modelers’ mantra (or crutch, if you’re prone to cynicism) of ‘all models are wrong, some models are useful’ is appropriate from our perspective, though decision-makers and the general public may tune out after the first part of the phrase. At their best, health simulation models are designed to produce nuanced recommendations from rigorous analysis, using high-quality input data, while decision-makers and the public demand simple stories with clear recommendations.

Despite this challenge, modelers leading the charge on COVID-19 have been of great service to the public’s health. The popularization of ‘flattening the curve’ – essentially the translation of an incidence density plot under increasing levels of behavioral mitigation – has been highly successful in focusing the public on the constraints of the health care system and in inspiring personal behavior changes.(1) Recent estimates suggest large-scale mitigation policies have averted over 500 million infections across China, South Korea, Italy, Iran, France and the United States to date.(2) But when these models prove to be less than accurate, their usefulness is questioned by policymakers and a public reeling from isolation, record levels of job loss, and a barrage of false or misleading information about the pandemic. For every example of how models have been successfully received and absorbed by the public, there is a counterexample of models and modelers being criticized for either under- or over-estimating outcomes. There will be important opportunities for the field to learn from comparative evaluations of the structural design of the various COVID-19 models once more is known about their key parameters: the course of the virus, its transmission, and the effectiveness of the mitigation strategies implemented in different jurisdictions across the globe.
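To make the ‘flattening the curve’ translation concrete, the minimal sketch below compares daily new infections from a deliberately simple SIR-type model under two transmission rates. It is an illustration only: the model structure and all parameter values are arbitrary placeholders, not those of any published COVID-19 model.

```python
# Minimal SIR sketch (illustrative only; parameter values are arbitrary and not
# drawn from any of the COVID-19 models discussed here). Lowering the
# transmission rate lowers and delays the peak of daily new infections.
import numpy as np

def sir_incidence(beta, gamma=0.1, n_days=300, pop=1_000_000, i0=10):
    """Discrete-time SIR model; returns the series of daily new infections."""
    s, i, r = pop - i0, float(i0), 0.0
    new_cases = []
    for _ in range(n_days):
        infections = beta * s * i / pop   # new infections this day
        recoveries = gamma * i
        s -= infections
        i += infections - recoveries
        r += recoveries
        new_cases.append(infections)
    return np.array(new_cases)

unmitigated = sir_incidence(beta=0.30)   # no mitigation (illustrative value)
mitigated = sir_incidence(beta=0.15)     # mitigation halves transmission (illustrative value)

print(f"Unmitigated peak: {unmitigated.max():,.0f} new infections/day on day {unmitigated.argmax()}")
print(f"Mitigated peak:   {mitigated.max():,.0f} new infections/day on day {mitigated.argmax()}")
```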

Large-scale, population-based simulation models are complex to produce and even harder to explain succinctly. Models synthesize multiple forms of evidence and make simplifying assumptions that are often implicit in their design. How models are calibrated to fit real-world conditions remains a matter of debate, and even the most rigorous methods of ‘validation’ are partial at best. These qualifications pose a distinct communication challenge, even in scientific publications, let alone in news articles or policy briefs. Contrast these communication challenges with the randomized controlled trial, for which the key pieces of information can be conveyed in a sentence or two using the “PICO” mnemonic: the population, intervention, comparator, and outcome under study. Simulation models often have the same points to convey, but must also communicate a range of other information regarding assumptions, qualifications, and degrees of uncertainty. Not surprisingly, communication of model results is often either too brief to express fully what they mean, or overwhelming in the volume of information conveyed.

Our team’s experience in developing, translating, and disseminating the results of a simulation model for policy decisions in HIV prevention and treatment has been instructive (and humbling). In an effort to set a new standard for transparency, we published separate manuscripts on the evidence synthesis (3), calibration and validation (4), baseline projections (5), cost-effectiveness of individual interventions (6), and finally, our cost-effectiveness analysis of different strategies to combat HIV/AIDS in disparate geographic regions (7). We followed these efforts with a closer examination of the uncertainty in our findings, and the value of collecting more information to reduce this uncertainty (8). Altogether, this effort took more than 55,000 words, 154 tables, and 45 figures to communicate (with 59%, 93%, and 60% of these, respectively, buried in appendices). And while we hope this effort has advanced the science, it has been challenging to distill our findings for decision-makers, who are the ultimate audience for this work. Although we based our model on the best available local data, we faced mixed reactions when initially presenting our results to local stakeholders. Even condensing our presentation to a reasonable length has posed challenges given our audience’s competing priorities, time constraints, and apprehensions about simulation models. However, several public health departments were already engaged with other modeling efforts to guide their decision-making, and others expressed interest in our approach. We remain hopeful this work will advance despite the substantial efforts still required of our audience in the SARS-CoV-2 response.

Of course, simulation models allow us to do what we cannot with other study designs. They create an in silico environment where we can change the conditions of the world (if only a simplified version of it) and observe how the outcomes of that change play out into the future. The benefits for decision-making in health policy are indisputable. Therefore, translation cannot be an afterthought. As a research community, we need to improve the way we disseminate, translate, and communicate the results of our models to both decision-makers and the public. These efforts must be multi-pronged and well-intentioned, and they must engage audiences early in the process.

The effort must start within our ranks. We need to integrate the science of dissemination into our approach to drive best practices for communicating and ranking the array of scientific evidence – and assumptions – that go into our models. The NIH defines Dissemination Research as “the study of targeted distribution of information and intervention materials to a specific public health or clinical practice audience. The intent is to understand how to best spread and sustain knowledge and the associated evidence-based interventions.”(9) Similarly, the NIH defines Implementation Research as “the study of the use of strategies to adopt and integrate evidence-based health interventions into clinical and community settings to improve patient outcomes and benefit population health.”(9) Taken together, Dissemination and Implementation (D&I) science represents investigations of the processes by which scientific evidence and evidence-based practices are communicated, adopted, implemented, spread, and sustained.(10)

The steps taken by D&I researchers often include the following stages: 1) identifying and characterizing the implementation or dissemination setting, 2) identifying and characterizing key stakeholders, 3) characterizing barriers and facilitators to implementation and dissemination, and 4) problem solving to address those barriers.(11–13)

In the setting of economic modeling, D&I science would conceivably include evaluations of the mechanisms by which models are shared, incorporated, and used by key stakeholders in determining how best to allocate resources, make policies, and deliver prevention and care to at-risk communities. When appropriate and feasible, we should work harder to engage decision-makers early in the model building and development process. We need standardized approaches for describing calibration and validation, and new ways to communicate how we construct our models. These new modes of communication might include intuitive model diagrams or ‘model stories’ that illustrate some of our structural decisions. In addition, we need to improve how we convey the uncertainty inherent to model-based recommendations and how this uncertainty may be better incorporated in decision making.
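As one hedged illustration of conveying that uncertainty, the sketch below propagates sampled parameter values through a toy epidemic model and reports a median projection with a 95% interval rather than a single number. The model, the sampling distributions, and all values are placeholders chosen for the example, not a prescribed reporting standard.

```python
# Illustrative probabilistic sensitivity analysis on a toy epidemic model.
# All distributions and parameter values are placeholders chosen for the example.
import numpy as np

rng = np.random.default_rng(seed=1)

def peak_daily_infections(beta, gamma, pop=1_000_000, i0=10, n_days=300):
    """Toy discrete-time SIR model; returns the peak number of daily new infections."""
    s, i, peak = pop - i0, float(i0), 0.0
    for _ in range(n_days):
        new = beta * s * i / pop
        s -= new
        i += new - gamma * i
        peak = max(peak, new)
    return peak

# Sample uncertain inputs (placeholder ranges) and propagate them through the model.
n_draws = 1000
betas = rng.uniform(0.15, 0.35, n_draws)    # transmission rate: highly uncertain
gammas = rng.uniform(0.08, 0.12, n_draws)   # recovery rate: better characterized
peaks = np.array([peak_daily_infections(b, g) for b, g in zip(betas, gammas)])

# Report an interval, not a single point estimate.
lo, med, hi = np.percentile(peaks, [2.5, 50, 97.5])
print(f"Projected peak: {med:,.0f} new infections/day (95% interval {lo:,.0f} to {hi:,.0f})")
```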

Modelers might consider building an interface for their models that would allow the end-user to “play” with key assumptions to assess how the results would change. For example, the Wharton School at the University of Pennsylvania recently produced an economic and epidemiological model of COVID-19 that estimated deaths and job losses from various state strategies for reopening.(14) The modelers displayed the results of the “Coronavirus Policy Response Simulator” on a publicly accessible site in which key assumptions about social distancing can be changed, allowing the end-user to experience uncertainty rather than simply hear about it. Other interfaces have been developed more for educational purposes, to teach policymakers how models work and how to interpret their projections.(15) Recent advancements in meta-modeling of agent-based simulation models serve as another example where easy-to-use interfaces could help close the gap between modeling and policymaking.(16, 17) These interfaces should be developed and tested to assess their ability to translate complex information effectively.
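A minimal sketch of what such an interface could look like, assuming a Jupyter notebook and the ipywidgets package: the model below is a stand-in placeholder (not the Penn Wharton simulator or any published model), and the exposed parameter, slider range, and values are hypothetical.

```python
# Hedged sketch: exposing a single model assumption to the end-user as a slider
# (assumes a Jupyter notebook with ipywidgets installed). The model is a
# stand-in placeholder, not any published COVID-19 or policy model.
from ipywidgets import interact

def run_model(contact_reduction=0.25):
    """Stand-in for a full simulation model, re-run under the chosen assumption."""
    r0, gamma, pop, i0, n_days = 3.0, 0.1, 1_000_000, 10, 300  # placeholder values
    beta = r0 * gamma * (1 - contact_reduction)
    s, i, peak = pop - i0, float(i0), 0.0
    for _ in range(n_days):
        new = beta * s * i / pop
        s -= new
        i += new - gamma * i
        peak = max(peak, new)
    print(f"Assumed contact reduction: {contact_reduction:.0%} -> "
          f"projected peak of {peak:,.0f} new infections/day")

# Dragging the slider re-runs the model, letting the user 'experience' how
# sensitive the projection is to this single assumption.
interact(run_model, contact_reduction=(0.0, 0.9, 0.05))
```

The same pattern, one exposed assumption per control wrapped around the model function, extends naturally to a web dashboard of the kind described above.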

As modelers, we must also increase the transparency of what we do.(18) By externally validating our model code (19), developing publicly accessible statistical packages, making our models publicly available (20) via cloud platforms (21), or developing webpages to simplify the illustration of complex models (22), we can provide opportunities for further validation by other modelers and increase public confidence in our models. To this end, we can look to examples in other fields, astronomy and physics in particular, which have used modeling for centuries to explain complex planetary and galactic dynamics, fine-tuning their models as new evidence becomes available.(23) Finally, we have previously argued that modeling collaborations, at their best, can increase transparency and rigor (24), as well as help disseminate findings beyond the reach of individual modelers.

In all dissemination activities, we must be explicit about the goals of modeling, emphasize the need for simplicity and transparency of assumptions when time is of the essence, and underscore when data to inform model parameters are inaccurate or incomplete. When model outputs are highly uncertain (as in the case of COVID-19), we must be careful not to oversell their accuracy and precision; at the same time, we should highlight the added complexity and rigor that become possible when high-quality data are available to model a more precise public health response.

We salute our colleagues, no doubt working around the clock to continually integrate new information into their projections to help the world navigate the COVID-19 pandemic. Perhaps one hurdle to the use of models to guide policies — that of mere recognition — has been overcome. The resolve of the broader health simulation modeling community should be strengthened as we see our profession’s profound impact on a global scale. Health simulation modeling has come of age; we must now ensure that our work is heard and understood at a level that promotes its best use.

References

1. Betsch C. How behavioural science data helps mitigate the COVID-19 crisis. Nat Hum Behav. 2020;4(5):438.
2. Hsiang S, Allen D, Annan-Phan S, Bell K, Bolliger I, Chong T, et al. The effect of large-scale anti-contagion policies on the COVID-19 pandemic. Nature. 2020.
3. Krebs E, Enns B, Wang L, Zang X, Panagiotoglou D, Del Rio C, et al. Developing a dynamic HIV transmission model for 6 U.S. cities: An evidence synthesis. PLOS ONE. 2019;14(5):e0217559.
4. Zang X, Krebs E, Min JE, Pandya A, Marshall BD, Schackman BR, et al. Development and calibration of a dynamic HIV transmission model for 6 US cities. Medical Decision Making. 2020;40(1):3–16.
5. Nosyk B, Zang X, Krebs E, Min JE, Behrends CN, Del Rio C, et al. Ending the Epidemic in America Will Not Happen if the Status Quo Continues: Modeled Projections for Human Immunodeficiency Virus Incidence in 6 US Cities. Clinical Infectious Diseases. 2019;69(12):2195–8.
6. Krebs E, Zang X, Enns B, Behrends C, Del Rio C, Dombrowski J, et al. The impact of localized implementation: determining the cost-effectiveness of HIV prevention and care interventions across six U.S. cities. AIDS. 2020;34(3):447–58.
7. Nosyk B, Zang X, Krebs E, Enns B, Min J, Behrends C, et al. What will it take to ‘End the HIV epidemic’ in the US? An economic modeling study in 6 cities. The Lancet HIV. 2020; In Press.
8. Zang X, Jalal H, Krebs E, Pandya A, Zhou H, Enns B, et al. Prioritizing additional data collection to reduce decision uncertainty in the HIV/AIDS response in 6 US cities: a value of information analysis. Value in Health. 2020; R&R.
9. National Institutes of Health. PAR-18-007. Available from: https://grants.nih.gov/grants/guide/pa-files/PAR-18-007.html.
10. Estabrooks PA, Brownson RC, Pronk NP. Dissemination and Implementation Science for Public Health Professionals: An Overview and Call to Action. Prev Chronic Dis. 2018;15:E162.
11. Tabak RG, Khoong EC, Chambers DA, Brownson RC. Bridging research and practice: models for dissemination and implementation research. Am J Prev Med. 2012;43(3):337–50.
12. Koorts H, Eakin E, Estabrooks P, Timperio A, Salmon J, Bauman A. Implementation and scale up of population physical activity interventions for clinical and community settings: the PRACTIS guide. Int J Behav Nutr Phys Act. 2018;15(1):51.
13. Brownson RC, Eyler AA, Harris JK, Moore JB, Tabak RG. Getting the Word Out: New Approaches for Disseminating Public Health Science. J Public Health Manag Pract. 2018;24(2):102–11.
14. Penn Wharton Budget Model. Coronavirus Policy Response Simulator: Health and Economic Effects of State Reopenings. University of Pennsylvania; 2020 [cited 2020 July 3]. Available from: https://budgetmodel.wharton.upenn.edu/issues/2020/5/1/coronavirus-reopening-simulator.
15. Salathé M, Case N. What Happens Next? COVID-19 Futures, Explained With Playable Simulations. 2020. Available from: https://ncase.me/covid-19/.
16. Jalal H, Dowd B, Sainfort F, Kuntz KM. Linear regression metamodeling as a tool to summarize and present simulation model results. Med Decis Making. 2013;33(7):880–90.
17. Friedman LW. The simulation metamodel. Springer Science & Business Media; 2012.
18. Barton CM, Alberti M, Ames D, Atkinson JA, Bales J, Burke E, et al. Call for transparency of COVID-19 models. Science. 2020;368(6490):482–3.
19. Eglen S, Nust D, editors. CODECHECK: An open-science initiative to facilitate sharing of computer programs and results presented in scientific publications. Septentrio Conference Series; 2019.
20. Alarid-Escudero F, Krijkamp EM, Pechlivanoglou P, Jalal H, Kao S-YZ, Yang A, et al. A need for change! A coding framework for improving transparency in decision modeling. PharmacoEconomics. 2019;37(11):1329–39.
21. Adibi A, Sadatsafavi M. PRISM Model Repository. 2020. Available from: http://resp.core.ubc.ca/ipress/prism.
22. Marshall Research Group. TITAN Model. 2020 [cited 2020 May 27]. Available from: https://www.titanmodel.org/.
23. Vogelsberger M, Marinacci F, Torrey P, Puchwein E. Cosmological simulations of galaxy formation. Nature Reviews Physics. 2020;2(1):42–66.
24. The Opioid Use Disorder Modeling Writing Group. How simulation modeling can support the public health response to the opioid crisis in North America: Setting priorities and assessing value. International Journal of Drug Policy. 2020; In Press.
