(A) A typical workflow that creates and uses a dynamical model. In “aggregate data,” a modeler collects data from papers, public data sources, and/or private experiments. In “construct model,” they use the data, their biological knowledge, assumptions, and modeling methods to create a model. In “estimate parameters,” they produce a complete and self-consistent set of input parameter values from the data. In “simulate model,” they integrate the model over time. In “store and analyze results,” they store the simulation results and analyze them. In “verify & validate model,” they check that the model and its predictions are consistent with experimental data. In “document artifacts,” they annotate and provide human-readable descriptions (tan rectangles) for the model artifacts produced at each stage. In “package artifacts and documentation,” they combine all model artifacts and documentation into archive(s) to be shared publicly. Finally, in “publish and disseminate,” they publish their novel scientific findings and deposit the archive(s) in open-source repositories, where independent researchers can access them to reproduce, understand, and reuse the model. Black arrows indicate the transitions between workflow stages.
(B) Software tools and data formats for reproducible modeling: Tools and data formats that enhance reproducibility are listed in a diagram that parallels the workflow in (A). They are divided into recommendations for the standards-based and general-purpose approaches to modeling presented in the text. Tools that are useful in multiple modeling stages are listed under each of those stages.
A table with links to the tools shown in Figure 1 is included in the Supplemental Information (see Table S1).
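To make the “simulate model” and “package artifacts” stages of the standards-based approach more concrete, the following is a minimal sketch, assuming a Python environment with the tellurium simulator installed; the toy model, its parameter values, and the output file name are illustrative assumptions, not artifacts from the figure or the text.

```python
# Minimal illustrative sketch of the "simulate model" stage with tellurium
# (an SBML/Antimony-based simulator); all model details below are hypothetical.
import tellurium as te

# Define a toy two-species model in Antimony (illustrative, not from the paper).
model = te.loada("""
    model toy_decay
        S1 -> S2; k1 * S1      // first-order conversion of S1 to S2
        k1 = 0.5               // illustrative rate constant
        S1 = 10; S2 = 0        // illustrative initial conditions
    end
""")

# Integrate the model over time (the "simulate model" stage) and keep the
# results so they can be stored and analyzed later.
result = model.simulate(0, 20, 201)   # time 0 to 20, 201 output points

# Export the model as SBML so it can be documented, packaged into an
# archive, and shared alongside the simulation results.
with open("toy_decay.xml", "w") as f:
    f.write(model.getSBML())
```

The exported SBML file and the stored simulation results would then be annotated and combined with the other model artifacts in the packaging and publication stages shown in (A).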