PLoS ONE. 2014 Mar 19;9(3):e91989. doi: 10.1371/journal.pone.0091989

Disease Prediction Models and Operational Readiness

Courtney D Corley*, Laura L Pullum, David M Hartley, Corey Benedum, Christine Noonan, Peter M Rabinowitz, Mary J Lancaster
Editor: Niko Speybroeck
PMCID: PMC3960139  PMID: 24647562

Abstract

The objective of this manuscript is to present a systematic review of biosurveillance models that operate on select agents and can forecast the occurrence of a disease event. We define a disease event to be a biological event, with a focus on the One Health paradigm. These events are characterized by evidence of infection and/or a disease condition. We reviewed models that attempted to predict a disease event, not merely its transmission dynamics, and we considered models involving pathogens of concern as determined by the US National Select Agent Registry (as of June 2011). We searched commercial and government databases and harvested Google search results for eligible models, using terms and phrases provided by public health analysts relating to biosurveillance, remote sensing, risk assessments, spatial epidemiology, and ecological niche modeling. After removal of duplicates and extraneous material, a core collection of 6,524 items was established, and these publications along with their abstracts are presented in a semantic wiki at http://BioCat.pnnl.gov. From this collection, we systematically reviewed 44 papers, and the results are presented in this analysis. We identified 44 models, classified as one or more of the following: event prediction (4), spatial (26), ecological niche (28), diagnostic or clinical (6), spread or response (9), and reviews (3). The model parameters (e.g., etiology, climatic, spatial, cultural) and data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) were recorded and reviewed. A component of this review is the identification of the verification and validation (V&V) methods applied to each model, where any V&V method was reported. All models were classified as either having undergone Some Verification or Validation, or No Verification or Validation. We close by outlining an initial set of operational readiness level guidelines for disease prediction models based upon established Technology Readiness Level definitions.

Introduction

A rich and diverse field of infectious disease modeling has emerged in the past 60 years and has advanced our understanding of population- and individual-level disease transmission dynamics, including risk factors, virulence, and spatio-temporal patterns of disease spread [1]–[4]. These modeling techniques span domains from biostatistical methods, to massive agent-based, biophysical, and ordinary differential equation (ODE) models, to ecological-niche models [5]–[8]. Diverse data sources are being integrated into these models as well, such as demographics, remotely sensed measurements and imaging, environmental measurements, and surrogate data such as news alerts and social media [9]–[11]. Moreover, nascent research is occurring at the omics level to aid in forecasting future epidemics; such research includes phylogenetic techniques for predicting pathogen mutations, algorithms for microbial identification with next-generation technologies, meta-genomics, and multi-scale systems biology [12], [13]. Yet emerging infectious diseases continue to impact health and economic security across the globe. There remains a gap in the sensitivity and specificity of modeling forecasts designed not only to track infectious disease events but also to predict disease occurrence [14]–[16]. For an example, one need look no further than the 2009 H1N1 influenza pandemic: the latency between identification of the virus and characterization of its pathogenicity and transmissibility led, perhaps, to unnecessary mitigation measures such as school and business closures [17]–[19]. Moreover, there are strong indicators that the dynamics and emergence of vector-borne diseases are in flux because of, among other factors, changes in land use, human behavior, and climate [20]–[22].

The goal of this systematic review was to identify areas for research to characterize the viability of biosurveillance models to provide operationally relevant information to decision makers about disease events. We define a disease event to be a biological event characterized by evidence of infection and/or disease in humans, animals, and plants (i.e., the One Health paradigm). These disease events are neither mutually exclusive nor limited to the following examples of evidence of infection: person-to-person transmission (e.g., Mycobacterium tuberculosis), zoonoses (e.g., Francisella tularensis), food-borne pathogens (e.g., Salmonella), vector-borne pathogens (e.g., equine encephalitis virus), waterborne pathogens (e.g., Vibrio cholerae), airborne pathogens (e.g., influenza), veterinary pathogens (e.g., Aphtae epizooticae), and plant pathogens (e.g., soybean and wheat rusts). Examples of evidence of condition include accidental or deliberate events affecting air or water quality (e.g., volcanic ash, pesticide runoff), economically motivated adulteration of the food and pharmaceutical supply, and intentional exposure. In the context of this article, a biosurveillance model is broadly defined as an abstract computational, algorithmic, statistical, or mathematical representation that produces informative output related to event detection or event risk [23]. The model is formulated with a priori knowledge and may ingest, process, and analyze data. A biosurveillance model may be proactive or anticipatory (e.g., used to detect or forecast an event, respectively), it may assess risk, or it may be descriptive (e.g., used to understand the dynamics or drivers of an event) [23].

There is also a marked lack of implementation of such models in routine surveillance and control activities; as a result, there is no active effort to build and improve capacity for such model implementation in the future [24]–[27]. Most emerging infectious disease events, as well as intentional or accidental releases of bioterrorism agents, involve pathogens that are zoonotic (transmitted from animals to humans) in origin [28]–[30]. Therefore, in assessing disease prediction models for biosurveillance preparedness, it is reasonable to include a focus on agents of zoonotic origin that could arise from wildlife or domestic animal populations or could affect such animal populations concurrently with human populations [31]. To date, the development of surveillance systems for tracking disease events in animals and humans has proceeded largely in isolation, leading to calls for better integration of human and animal disease surveillance data streams [32] to better prepare for emerging and existing disease threats. Recent reports have shown some utility for such linkage [26], [33].

Two critical characteristics differentiate this work from other infectious disease modeling systematic reviews (e.g., [34]–[38]). First, we reviewed models that attempted to predict or forecast the disease event (not simply predict transmission dynamics). Second, we considered models involving pathogens of concern as determined by the U.S. National Select Agent Registry as of June 2011 (http://www.selectagents.gov).

Methods

Subject matter experts were asked to supply keywords and phrases salient to the research topic. A sample of the keywords and phrases used is shown in Table 1. Multiple searches were conducted in bibliographic databases covering the broad areas of medicine, physical and life sciences, the physical environment, government, and security. No restrictions were placed on publication date or language of publication. Abstracts and citations of journal articles, books, books in a series, book sections or chapters, edited books, theses and dissertations, conference proceedings and abstracts, and technical reports containing the keywords and phrases were reviewed. The publication dates of the search results returned are bounded by the dates of coverage of each database and the date on which each search was performed; all searching was completed by December 31, 2010. The database queries yielded 12,152 citations, including irrelevant citations on topics such as sexually transmitted diseases, cancer, and diabetes. We de-duplicated the results and removed extraneous studies, producing a collection of 6,503 publications. We also collected 13,767 web documents based on Google queries, a practice often referred to as Google harvesting. We down-selected the web documents to theses and dissertations, reducing this number to 21. After removal of citations not relevant to the study of select agents, such as those on sexually transmitted diseases, cancer, and diabetes, the combined collection comprised 6,524 documents. See Table S1 for a list of the information sources used in this study.

Table 1. A Sampling of Keywords and Phrases.

Keywords and Phrases Used (not exhaustive):
Biosurveillance; Disease forecast; Infectious disease surveillance; Remote sensing + disease forecast
Bioterror* and model; Disease outbreak origin; Pathogen detection; Spatial disease model
CBRN model*; Epidemic model*; Population dynamic + outbreak; Vector-borne disease model

+ Is used to link phrases or keywords with the Boolean operator “and”.

* Is used as truncation to search for words that begin with the same letters or to replace any number of characters.

Next, we filtered citations by hand based upon the definition of a biosurveillance model presented in the introduction and upon the select agents, which resulted in 117 curated papers. Of these 117 papers, 54 were considered relevant to the study based on our selection criteria; however, 10 of these dealt purely with disease spread models, inactivation of bacteria, or the modeling of human immune system responses to pathogens. As a result, we systematically reviewed 44 papers, and the results are presented in this analysis. See Figure S1 for a graphic summary of the data reduction methodology and Checklist S1 for the PRISMA guidelines used for the evaluation of the 44 papers. To enable real-time collaboration and sharing of the literature, the citations were exported to the Biosurveillance Model Catalog housed at http://BioCat.pnnl.gov.

The models in the selected publications were classified in the categories listed below. These categories are not mutually exclusive and publications involving multiple modeling and analytic approaches were assigned to multiple categories.

Risk Assessment models correlate risk factors for a specific location, based upon weather and other covariates, to calculate disease risk, analogous to a forest fire warning. This type of model is commonly referred to as ecological niche modeling or disease risk mapping [39]–[41].

Event Prediction models assign a probability for when and where a disease event is likely to occur based upon specific data sources and variables. The difference between event prediction and risk assessment lies in the end product of the model: in the former, the output is the location and time period in which a disease outbreak will occur, while a risk assessment model provides the risk of an outbreak occurring under specified conditions [42]–[44].

Spatial models forecast the geographic spread of a disease after it occurs, based primarily upon the relationship between the outbreak and geospatial factors. It should be noted that spatial models can be considered dynamical models in that they change in time (e.g., spatial patch models) [45]–[47].

Dynamical models examine how a specific disease moves through a population. These models may include parameters, such as movement restrictions, that capture the effect of interventions on the severity of an epidemic or epizootic. They may be used to predict and understand how a disease will spread through a naïve population or when its pathogenicity will change [48], [49]; a minimal sketch of this category is shown after these definitions.

Event Detection models attempt to identify outbreaks either through sentinel groups or through the collection of real-time diagnostic, clinical, or syndromic data, detecting spikes in signs, symptoms, or syndromes that are indicative of an event (e.g., event-based biosurveillance) [50], [51].
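To illustrate the dynamical category referenced above, the following is a minimal sketch of a susceptible-infectious-recovered (SIR) compartmental model integrated with a forward-Euler step. The parameter values (beta, gamma, population size) are illustrative assumptions, not values drawn from any reviewed model.

```python
# Minimal SIR sketch of the "Dynamical" model category.
# All parameter values are illustrative assumptions.

def simulate_sir(s0, i0, r0, beta, gamma, days, dt=0.1):
    """Integrate dS/dt = -beta*S*I/N, dI/dt = beta*S*I/N - gamma*I,
    dR/dt = gamma*I with a forward-Euler step."""
    n = s0 + i0 + r0
    s, i, r = float(s0), float(i0), float(r0)
    trajectory = [(0.0, s, i, r)]
    for k in range(1, int(days / dt) + 1):
        new_infections = beta * s * i / n * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        trajectory.append((k * dt, s, i, r))
    return trajectory

if __name__ == "__main__":
    # Hypothetical parameters: R0 = beta/gamma = 2.5, 10-day infectious period.
    traj = simulate_sir(s0=9990, i0=10, r0=0, beta=0.25, gamma=0.1, days=160)
    t_peak, _, i_peak, _ = max(traj, key=lambda row: row[2])
    print(f"Peak prevalence of ~{i_peak:.0f} infectious at day {t_peak:.0f}")
```

A model of this kind becomes an intervention model when, for example, beta is reduced after a simulated movement restriction takes effect.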

The disease agents examined in this study were taken from the U.S. National Select Agent Registry and include human, plant, and animal pathogens. The agents described within these models are grouped non-exclusively by their mode of transmission: direct contact, vector-borne, water- or soil-borne, and non-specific.

Next, we analyzed the data sources in order to find ways to improve the operational use of biosurveillance models. The non-mutually exclusive data source categories were: “Epidemiological Data from the Same Location”; “Epidemiological Data from a Different Location”; “Governmental and Non-Governmental Organizations”; “Satellite (Remote Sensing)”; “Simulated”; “Laboratory Diagnostic”; “Expert Opinion”; and “Literature.” If a paper cited any form of literature that was not epidemiological, weather, or population data, it was categorized in the literature group; an example is a reference to the preferred natural habitat or survival requirements of a disease agent. Papers that cited epidemiological data from a location independent of the validation data were grouped under “Epidemiological Data from a Different Location,” and simulated and experimental data were grouped under their respective categories. Papers categorized as “Expert Opinion” did not explicitly state from whom the opinion came or what type of data was used.

In addition to the model data sources, twelve non-mutually exclusive variable categories were identified to facilitate understanding of how these models could be used effectively by the research and operational communities. Models with variables describing location or distance and rainfall or temperature were categorized as “Geospatial” and “Climatic,” respectively. Models that took into account the epidemiological (population-level) characteristics of the disease were grouped together as “Epidemiological.” Variables that dealt specifically with the agent or etiology were categorized under “Etiology.” Population size, density, and other related variables were grouped into either “Affected Population” (i.e., the animal, plant, or human population affected by the disease) or “Vectors and Other Populations” (i.e., populations of the vector or any other population considered within the model but not affected by the disease). Models that utilized remote sensing data, such as the normalized difference vegetation index (NDVI), a measurement of the amount of living green vegetation in a targeted area, were grouped within “Satellite (Remote Sensing).” “Agricultural” techniques, such as tillage systems, were also identified as variables in some models, as were “Clinical” and “Temporal” variables. The final two variable types identified were “Topographic and Environmental,” such as altitude or forest type, and “Social, Cultural, and Behavioral,” which included religious affiliations and education.
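For concreteness, NDVI is conventionally computed from the red and near-infrared (NIR) surface reflectance bands; the sketch below shows the standard formula. The reflectance values in the example are illustrative assumptions, not data from any reviewed model.

```python
def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index: (NIR - Red) / (NIR + Red).
    Values near +1 indicate dense green vegetation; values near zero or
    below indicate bare soil, water, or built surfaces."""
    return (nir - red) / (nir + red + eps)  # eps guards against 0/0

# Illustrative (assumed) band reflectances for two pixels:
print(f"{ndvi(nir=0.50, red=0.08):.2f}")  # ~0.72: densely vegetated pixel
print(f"{ndvi(nir=0.12, red=0.10):.2f}")  # ~0.09: sparse vegetation
```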

There are many verification and validation (V&V) standards (e.g., ISO/IEC 15288-2008 [52], IEEE Std 1012-2012 [53], ISO/IEC/IEEE 12207 [54]) and definitions, including some that are specifically focused on modeling and simulation: NASA-STD-7009 [55], the Verification, Validation, and Accreditation Recommended Practices Guide from the U.S. Department of Defense (U.S. DoD) Modeling & Simulation Coordination Office [56], U.S. Army TRADOC Reg 5-11 [57], the U.S. Navy Best Practices Guide for Verification, Validation, and Accreditation of Legacy Modeling and Simulation [58], and U.S. DoD MIL-STD-3022 [59]. For instance, the U.S. DoD definition of verification for modeling and simulation is “the process of determining that a model implementation and its associated data accurately represent the developer's conceptual description and specifications” [56]. The U.S. DoD definition of validation for modeling and simulation is “the process of determining the degree to which a model and its associated data provide an accurate representation of the real world from the perspective of the intended uses of the model” [56]. In the words of Boehm, verification answers the question “Did we build the system right?” and validation answers, “Did we build the right system?” [60]. Further, the “official certification that a model, simulation, or federation of models and simulations and its associated data is acceptable for use for a specific purpose” is its accreditation [56], which answers the question of whether the model or simulation is credible enough to be used.

All models were classified as either a) having undergone Some V&V method, or b) No V&V, based only on the paper(s) cited for that model. Those models classified as having undergone Some V&V were further classified based upon the type of V&V method(s) applied. The V&V method classifications used were “Statistical Verification”; “Sensitivity Analysis (verification)”; “Specificity and Sensitivity (verification)”; “Verification using Training Data”; “Validation using Temporally Independent Data”; and “Validation using Spatially and Temporally Independent Data.” In general, no conclusions on model credibility can be drawn from the types of V&V methods used, given that a) none of the papers focused on model V&V, and b) seldom are all aspects of V&V reported upon in the types of papers surveyed.

The most frequently used verification method was some form of statistical verification. It is important to note that verification methods do not necessarily imply that a model is correct. In this type of verification, methods such as Kappa (used to assess the degree to which two or more persons, examining the same data, agree on the assignment of data to categories), area under the receiver operating characteristic (ROC) curve, goodness of fit, and other statistical values are examined to help measure the ability of the model to accurately describe or predict the outbreak. Several models plotted observed data against predicted data as a V&V technique. This technique was further delineated depending on whether the observed data were part of the model's training data (verification), temporally independent of the training data (validation), or temporally and spatially independent of the training data (validation). The remaining models applied verification methods such as sensitivity analysis, which examined whether a model functioned as expected when different values were input for important variables, or specificity and sensitivity metrics, which measure the ability to determine true positives and true negatives. We acknowledge that not all of these V&V techniques are applicable to every model type. Also note that the use of a verification or validation method does not constitute complete verification or validation of the model. For instance, the IEEE standard for software verification and validation (IEEE Std 1012-2005) includes five V&V processes, supported by ten V&V activities, in turn implemented by 79 V&V tasks. To put this in the perspective of this study, the V&V methods noted herein are at or below the level of a task. Assessment of inherent biases present within the source documents and models reviewed is beyond the scope of this study.
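To make the metric-based methods above concrete, the sketch below computes sensitivity, specificity, and Cohen's kappa from a binary confusion matrix. The weekly outbreak flags are fabricated for illustration only.

```python
def confusion_counts(actual, predicted):
    """Tally true/false positives and negatives for binary outbreak flags."""
    tp = sum(1 for a, p in zip(actual, predicted) if a and p)
    tn = sum(1 for a, p in zip(actual, predicted) if not a and not p)
    fp = sum(1 for a, p in zip(actual, predicted) if not a and p)
    fn = sum(1 for a, p in zip(actual, predicted) if a and not p)
    return tp, tn, fp, fn

def sensitivity_specificity(actual, predicted):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp, tn, fp, fn = confusion_counts(actual, predicted)
    return tp / (tp + fn), tn / (tn + fp)

def cohens_kappa(actual, predicted):
    """Agreement beyond chance: kappa = (p_o - p_e) / (1 - p_e)."""
    tp, tn, fp, fn = confusion_counts(actual, predicted)
    n = tp + tn + fp + fn
    p_o = (tp + tn) / n                                            # observed
    p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2   # by chance
    return (p_o - p_e) / (1 - p_e)

# Fabricated weekly flags: observed outbreaks vs. model predictions.
observed  = [1, 0, 0, 1, 1, 0, 0, 0, 1, 0]
predicted = [1, 0, 1, 1, 0, 0, 0, 0, 1, 0]
sens, spec = sensitivity_specificity(observed, predicted)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} "
      f"kappa={cohens_kappa(observed, predicted):.2f}")
```

In the taxonomy above, such calculations count as verification of a model against its data; they do not by themselves validate the model against independent observations.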

Results and Analysis

The publications' models were categorized as follows (see Table 2): event prediction (n = 4), spatial (n = 26), ecological niche (n = 28), diagnostic or clinical (n = 6), spread or response (n = 9), and reviews (n = 3). The event prediction type includes only four models, possibly explained by the difficulty of creating a model that truly predicts disease events. In general, these models were applied to (or involved) small or special populations (e.g., populations with chronic diseases). According to Favier et al., the lack of prediction models could be addressed by taking a “toy model” and developing it into a predictive model [61]. If models that are similar to predictive models, such as risk assessment models, could be modified in this way, the number of predictive models could be increased.

Table 2. Citations categorized by model type.

Model Type Citations Total
Dynamical [74]–[82] 9
Event Detection [80], [83]–[87] 6
Event Prediction [74], [88]–[90] 4
Review Articles [91]–[93] 3*
Risk Assessment [21], [62], [75]–[78], [88], [91], [94]–[107] 28
Spatial [61], [74], [75], [78], [79], [83]–[85], [89], [94]–[99], [108], [109] 26

The categories are not mutually exclusive.

* The authors acknowledge others' significant work in event-based biosurveillance, such as that of the G-7 Global Health Security Action Group [110], which is not cited in this table because of the selection criteria.

Transmission Mode

The transmission modes of the models' disease agents spanned the following: direct contact (n = 24), vector-borne (n = 15), water- or soil-borne (n = 7), and non-specific (n = 3) (see Table 3). Direct contact and vector-borne models accounted for approximately 84% of all of the evaluated models.

Table 3. The citations placed in each mode of transmission group.

Agent Mode of Transmission Citations Total
Direct Contact [74], [75], [77]–[84], [88], [90], [93], [97], [99]–[101], [104], [105], [107], [108], [111]–[113] 24
Non-Specific [86], [92], [114] 3
Vector-Borne [61], [62], [76], [85], [87], [91], [94], [95], [97], [102]–[107], [109] 15
Water-, Soil-Borne [21], [84], [89], [91], [96], [98] 7

If a model involved multiple agents in different categories, the paper was placed in multiple groups.

Data Sources and Variables

The data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological) and variable parameters (e.g., etiology, climatic, spatial, cultural) for each model were recorded and reviewed (see Table 4). The two categories that contained the most data sources were “Epidemiological Data from the Same Location” (n = 25), such as data from a previous outbreak, and data gathered from a governmental or non-governmental organization (n = 25), such as census data. Fourteen models used some type of “Literature.” An important fact is that the majority of data used in the models were scientifically measured.

Table 4. Citation categorized by Data Source.

Data Source Citations Total
Epidemiological Data Different Location [79], [93] 2
Epidemiological Data Same Location [21], [75], [77], [79], [81]–[83], [85], [87], [89], [90], [94]–[97], [99], [101], [102], [104], [105], [108], [111]–[113] 25
Expert Opinion [93], [115] 2
Governmental or Non-Governmental Organization [21], [74]–[76], [78], [80], [83], [84], [87]–[89], [93], [94], [96], [97], [99]–[102], [105], [106], [111], [113], [115] 25
Laboratory Diagnostic [82], [84], [95] 3
Literature [61], [62], [77], [82], [83], [86], [88], [90], [92], [94], [98], [103], [106], [109] 14
Satellite (Remote Sensing) [21], [87], [88], [94], [95], [99], [102]–[105], [107], [115] 12
Simulated [74], [82], [93], [108] 4

If a model utilized data from multiple categories, it was placed in each.

Categories of variables and parameters utilized in the models supplemented the data sources. The two largest groupings were “Geospatial” and “Climatic” variables. According to Eisen et al. [62], models that do not use epidemiological data produce results with lower confidence, such that users may not trust the results or may not trust that the findings are relevant. Similarly, users may not have faith that models are structured in a biologically meaningful way if biologic or epidemiologic data do not appear in a model [63]. Nonetheless, before incorporating epidemiological data into disease event prediction models, further research is needed to determine whether such data will increase a model's robustness, sensitivity, and specificity. Factors such as the accuracy and precision of the epidemiological data will influence this analysis.

To better understand the relationship between the variables and the disease agents' modes of transmission, a graph (Figure 1) was created to show the distribution of transmission modes cited for each variable type used in the evaluated models. Table 5 shows the distribution of citations for each variable type. It was noted, without surprise, that as more research was done on a mode of transmission, more variables were examined. Furthermore, the variables “Vectors or Other Populations” and “Social, Cultural, Behavioral” were underutilized in the evaluated models. This is unfortunate: vector and other host populations typically have seasonal abundance patterns, human socio-cultural behaviors greatly impact the interactions between human and vector populations, and seasonal meteorological variation can strongly affect vector abundance and competence [64]. Relatively few disease prediction models were identified in which the causative agent was water- or soil-borne [20], [65].

Figure 1. The Percentage of Citations Placed in Each Variable Group by Transmission Mode (if a model contained variables from multiple groups, it was placed in each respective group).


Table 5. Citations Organized by Variable Group.

Variable Group Citations Total
Affected Population [61], [75], [76], [79], [82]–[85], [88], [93], [95], [99], [101], [115] 14
Agricultural [74], [79], [83], [90], [95] 5
Climatic [21], [75], [76], [78], [80], [85], [87], [89]–[91], [94], [97], [98], [100], [102], [106], [107], [111], [115] 19
Clinical [62], [76], [79], [92], [93], [101], [112]–[114] 10
Epidemiological [61], [79], [81], [89], [92], [99], [111] 7
Etiology [61], [62], [74], [77], [81], [82], [89], [111], [113], [114] 13
Geospatial [62], [74], [75], [79], [80], [83], [84], [91], [93]–[96], [102], [104]–[106], [108], [109], [111], [115] 20
Remote Sensing [21], [83], [87], [88], [94], [99], [102]–[105], [107] 11
Social, Cultural, Behavioral [75], [93], [96], [98] 4
Time [74], [82], [87], [94], [99], [100], [102], [108], [112], [113] 12
Topography or Environment [78], [85], [88], [91], [95], [98]–[100], [102]–[106] 13
Vector or Other Populations [61], [62], [75], [85], [87], [95], [115] 7

If a model contained variables from multiple groups, it was placed in each respective group.

Verification and Validation Methods

The V&V methods applied to each model, if any, were also analyzed; see Table 6. Among the types of papers surveyed, few aspects of V&V are typically reported. The majority of models selected for this study were subjected to some method of verification or validation. Publications on applications of predictive models typically report statistical, sensitivity analysis, and training data test results. These are necessary, though insufficient, methods for determining the credibility, verification, or validation of a model. For instance, the IEEE standard for software verification and validation (IEEE Std 1012-2005) includes five V&V processes, supported by ten V&V activities, which are in turn implemented by 79 V&V tasks. To put this into perspective, the V&V methods noted herein are at or below the level of a task. The papers reported the use of V&V methods for many models but not for others; in the latter case it is unclear whether V&V methods were not used or merely went unreported. A positive observation is the significant use of real epidemiological data to examine aspects of model validity. Even though “Validation using Spatially and Temporally Independent Data” was used for one of the smallest sets of models, comparison of actual versus predicted data for validation was reported for approximately 33% of the models. The reader is reminded that the use of a verification or validation method does not constitute complete verification or validation of the model [66]–[68].

Table 6. Grouping of Citations by Verification and Validation (V&V) Methods.

V&V Method Citations Total
No V&V [61], [78], [86], [92], [109] 5
Sensitivity Analysis (verification) [79]–[82], [94], [99], [112], [113] 8
Specificity and Sensitivity (verification) [1], [75], [84], [95] 4
Statistical Verification [21], [75], [77], [79], [82], [83], [88], [94]–[97], [99]–[106], [108], [115] 21
Validation using Spatially and Temporally Independent Data [79], [90] 2
Validation using Temporally Independent Data [84], [88], [97], [102], [103], [111] 6
Verification using Training Data [21], [75], [81], [85], [89], [97], [104], [105], [107], [112], [115] 11

If a model used multiple methods for its verification or validation, it was categorized in each respective group.

Operational Readiness

Given the importance of these models to national and international health security [69], we note the need for a categorization scheme that defines a model's viability for use in an operational setting. To our knowledge, none exists, but below we illustrate one possibility based upon the “technology readiness level” (TRL) originally defined by NASA [70] to evaluate the technology readiness of space development programs. It is important to note that NASA TRLs were not developed to cover modeling and simulation, much less biosurveillance models, so the definitions require modification. In the public health domain, TRLs can assist decision makers in understanding the operational readiness, maturity, and utility of a disease event prediction model. Advantages of utilizing the TRL paradigm are that it can provide a common understanding of biosurveillance model maturity, inform risk management, support decision making concerning government-funded research and technology investments, and support decisions concerning the transition of technology. We also point out characteristics of TRLs that may limit their utility: the operational readiness of a model does not necessarily track its technology maturity (V&V); a mature disease prediction or forecasting model may possess a greater or lesser degree of readiness for use in a particular geographic region than one of lower maturity; and numerous additional factors must be considered, including the relevance of the model's operational environment, cost, technological accessibility, and sustainability.

"Operational readiness" is a concept that is user and intended use dependent. A model that one user may consider ready may not suffice for readiness with another user. Different users have different needs according to their missions. For example, in the case of surveillance models, some will need to see everything reported by event-based surveillance systems (i.e., they are unconcerned with specificity but sensitivity is of high value to them), while other users may demand low false alarm rates (i.e., specificity is important for their needs) [71], [72]. The Operational Readiness Level rating of any given model will thus depend upon the diverse questions and purposes to which any given model is applied.

An initial scheme modifying these definitions is shown in Table 7. In such a scheme, the models would be characterized based on how the model was validated, what type of data was used to validate the model, and the validity of the data used to create the model. The V&V of predictive models, regardless of realm of application, is an area that requires better definition and techniques. The results of model V&V can be used in the definition of model operational readiness; however, the readiness level definitions must also be accompanied by data validation, uncertainty quantification, and model fitness-for-use evaluations, many of which are areas of active research [73].

Table 7. Initial Definitions of Operational Readiness Levels for Disease Prediction Models.

Level Definition
1 Research only reported on observed information
2 A constructed model that has yet to be applied to data (or the model theory has been developed based on observed or hypothesized information)
3 The model has been created but has not been validated
4 The model has been verified and validated
5 The model has been demonstrated as useful but for only its original location (pathogen or population) and is still being updated to accommodate additional locations
6 The model has been demonstrated as useful in both its original location (pathogen or population) as well as an independent location (pathogen or population), but not all requisite locations
7…n Further study is needed to explicitly delineate the criteria for all levels.
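As a sketch of how such a scheme might be applied, the rule-based assignment below maps evidence about a model to the levels in Table 7. The predicates are hypothetical restatements of the level definitions, not an endorsed or validated rubric.

```python
# Hypothetical rule-based assignment of the Table 7 readiness levels.
# The evidence flags and their ordering are illustrative assumptions.

def operational_readiness_level(evidence):
    """evidence: dict of booleans describing what the model has demonstrated."""
    if evidence.get("useful_in_independent_location"):
        return 6  # useful in original and independent locations
    if evidence.get("useful_in_original_location"):
        return 5  # useful, but only for its original location so far
    if evidence.get("verified") and evidence.get("validated"):
        return 4
    if evidence.get("model_constructed"):
        return 3  # created but not validated
    if evidence.get("theory_developed"):
        return 2
    return 1      # research only reported on observed information

# Example: a model validated against data from its original study area
# (all flags below are assumptions for illustration).
example = {"theory_developed": True, "model_constructed": True,
           "verified": True, "validated": True,
           "useful_in_original_location": True}
print(operational_readiness_level(example))  # -> 5
```

Any operational rubric would also need the data validation, uncertainty quantification, and fitness-for-use evaluations noted above, which simple boolean flags cannot capture.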

Discussion

Our study was conducted to characterize published select-agent pathogen models that are capable of predicting disease events, in order to determine opportunities for expanded research and to define operational readiness levels [38]. Out of an initial collection of 6,524 items, 44 papers met the inclusion criteria and were systematically reviewed. Models were classified as one or more of the following: event prediction, spatial, ecological niche, diagnostic or clinical, spread or response, and reviews. Model parameters (e.g., etiology, climatic, spatial, cultural), data sources (e.g., remote sensing, non-governmental organizations, expert opinion, epidemiological), and the V&V methods applied to each model, if any, were identified. Moreover, an initial set of operational readiness level guidelines for disease prediction models, based upon established Technology Readiness Level definitions, was suggested.

In the majority of the models we examined, few aspects of V&V were reported. Although many models underwent some level of V&V, few if any demonstrated validation, and thus readiness, in a general sense that would find credibility with operational users. Such V&V is difficult to implement in general, for the reasons discussed in the previous section. However, if users are to apply the models and have confidence in model results, it is imperative to advance model V&V. Similarly, the suggested operational readiness level guidelines are meant to spur additional investigation, as our literature review uncovered no operational model readiness metrics. Better definition of readiness levels, clear means to achieve upper operational readiness levels, and the ability to consistently assign confidence in readiness level assignments will lead to enhanced value for decision makers. To test operational readiness levels, we suggest further development of the criteria and application of the levels to existing disease prediction models to evaluate their usefulness in an operational environment. Public health analysts and decision makers are in need of evidence-based advice, and the value of operational readiness levels for the models on which they depend cannot be overstated.

Supporting Information

Figure S1

The PRISMA Flow Diagram.

(PDF)

Table S1

Information sources used in biosurveillance study.

(DOCX)

Checklist S1

The PRISMA Checklist.

(DOC)

Acknowledgments

The authors thank Dr. Dylan George (Department of Health and Human Services), Mr. Chase Dowling (Pacific Northwest National Laboratory), and Dr. Andrew Cowell (Pacific Northwest National Laboratory) for critical feedback, information analysis, and helpful discussions during manuscript development. The authors also thank management and analysts at DHS's National Biosurveillance Integration Center for their operational biosurveillance subject matter expertise. Disclaimer: Oak Ridge National Laboratory is operated by UT-Battelle, LLC, for the U.S. Department of Energy under contract DE-AC05-00OR22725. Pacific Northwest National Laboratory is operated by Battelle for the U.S. Department of Energy under contract DE-AC05-76RL01830. The United States Government retains, and the publisher, by accepting the article for publication, acknowledges that the United States Government retains, a non-exclusive, paid-up, irrevocable, world-wide license to publish or reproduce the published form of this manuscript, or allow others to do so, for United States Government purposes. The authors' opinions do not necessarily reflect those of their organizations.

Funding Statement

This study was supported through a contract to Pacific Northwest National Laboratory from the National Biosurveillance Integration Center, Office of Health Affairs, and the Science and Technology Directorate, Chemical and Biological Division, Threat Characterization and Attribution Branch, of the U.S. Department of Homeland Security (DHS). The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

References

  • 1.Anderson RM, May RM (1991) Infectious diseases of humans: dynamics and control: Oxford University Press.
  • 2. Boily MC, Masse B (1997) Mathematical models of disease transmission: a precious tool for the study of sexually transmitted diseases. Can J Public Health 88: 255–265. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3. Angulo JJ (1987) Interdisciplinary approaches in epidemic studies—II: Four geographic models of the flow of contagious disease. Soc Sci Med 24: 57–69. [DOI] [PubMed] [Google Scholar]
  • 4. Riley S (2007) Large-scale Spatial-transmission Models of Infectious Disease. Science 316: 1298–1301. [DOI] [PubMed] [Google Scholar]
  • 5. Perez L, Dragicevic S (2009) An agent-based approach for modeling dynamics of contagious disease spread. Int J Health Geogr 8: 50. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 6. Van den Broeck W, Gioannini C, Goncalves B, Quaggiotto M, Colizza V, et al. (2011) The GLEaMviz computational tool, publicly available software to explore realistic epidemic spreading scenarios at the global scale. BMC Infect Dis 11: 50. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7. Miller JC, Slim AC, Volz EM (2012) Edge-based compartmental modelling for infectious disease spread. J Royal Soc Interface 9: 890–906. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8. Siettos CI, Russo L (2013) Mathematical modeling of infectious disease dynamics. Virulence 4: 297–306. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9. Brownstein JS, Holford TR, Fish D (2003) A climate-based model predicts the spatial distribution of the Lyme disease vector Ixodes scapularis in the United States. Environmental Health Perspectives 111: 1152–1157. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10. Vasquez-Prokopec GM, Bisanzio D, Stoddard ST, Paz-Soldan V, Morrison AC, et al. (2013) Using GPS technology to quantify human mobility, dynamic contacts and infectious disease dynamics in a resource-poor urban environment. PLoS One 8: e58802. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11. Chan EH, Sahai V, Conrad C, Bronstein JS (2011) Using Web Search Query Data to Monitor Dengue Epidemics: A New Model for Neglected Tropical Disease Surveillance. PLoS Negl Trop Dis 5: e1206. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12. Bush RM, Bender CA, Subbarao K, Cox NJ, Fitch WM (1999) Predicting the Evolution of Human Influenza A. Science 286: 1921–1925. [DOI] [PubMed] [Google Scholar]
  • 13. Liao Y-C, Lee M-S, Ko C-Y, Hsiung CA (2008) Bioinformatics models for predicting antigenic variants of influenza A/H3N2 virus. Bioinformatics 24: 505–512. [DOI] [PubMed] [Google Scholar]
  • 14. Johnson DA, Alldredge JR, Vakoch DL (1996) Potato late blight forecasting models for the semiarid environment of south-central Washington. Phytopathology 86: 480–484. [Google Scholar]
  • 15. Yuen JE, Hughes G (2002) Bayesian analysis of plant disease prediction. Plant Pathology 51: 407–412. [Google Scholar]
  • 16. Hashimoto S, Murakami Y, Taniguchi K, Nagaid M (2000) Detection of epidemics in their early stage through infectious disease surveillance. International Journal of Epidemiology 29: 905–910. [DOI] [PubMed] [Google Scholar]
  • 17. Jackson C, Vynnycky E, Hawker J, Olowokure B, Mangtani P (2013) School closures and influenza: systematic review of epidemiological studies. BMJ Open 3: e002149. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18. Kawaguchi R, Miyazono M, Noda T, Takayama Y, Sasai Y, et al. (2009) Influenza (H1N1) 2009 outbreak and school closure, Osaka Prefecture, Japan. Emerg Infect Dis 15: 1685. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19. Brown ST, Tai JH, Bailey RR, Cooley PC, Wheaton WD, et al. (2011) Would school closure for the 2009 H1N1 influenza epidemic have been worth the cost? A computational simulation of Pennsylvania. BMC Public Health 11: 353. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20. Ford TE, Colwell RR, Rose JB, Morse SS, Rogers DJ, et al. (2009) Using satellite images of environmental changes to predict infectious disease outbreaks. Emerging Infectious Diseases 15: 1341–1346. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21. de Magny GC, Murtugudde R, Sapiano MRP, Nizam A, Brown CW, et al. (2008) Environmental signatures associated with cholera epidemics. Proceedings of the National Academy of Sciences of the United States of America 105: 17676–17681. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22. Linthicum KJ, Anyamba A, Tucker CJ, Kelley PW, Myers MF, et al. (1999) Climate and satellite indicators to forecast Rift Valley fever epidemics in Kenya. Science 285: 397–400. [DOI] [PubMed] [Google Scholar]
  • 23. Corley CD, Lancaster MJ, Brigantic RT, Chung JS, Walters RA, et al. (2012) Assessing the continuum of event-based biosurveillance through an operational lens. Biosecur Bioterror 10: 131–141. [DOI] [PubMed] [Google Scholar]
  • 24. Halliday J, Daborn C, Auty H, Mtema Z, Lembo T, et al. (2012) Bringing together emerging and endemic zoonoses surveillance: shared challenges and a common solution. Philosophical Transactions of the Royal Society B: Biological Sciences 367: 2872–2880. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25. Halliday JEB, Meredith AL, Knobel DL, Shaw DJ, Bronsvoort BMDC, et al. (2007) A framework for evaluating animals as sentinels for infectious disease surveillance. Journal of the Royal Society Interface 4: 973–984. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26. Scotch M, Brownstein J, Vegso S, Galusha D, Rabinowitz P (2011) Human vs. Animal Outbreaks of the 2009 Swine-Origin H1N1 Influenza A epidemic. EcoHealth 8: 376–380. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27. Scotch M, Odofin L, Rabinowitz P (2009) Linkages between animal and human health sentinel data. Bmc Veterinary Research 5: 1–9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28. Woolhouse MEJ, Matthews L, Coen P, Stringer SM, Foster JD, et al. (1999) Population dynamics of scrapie in a sheep flock. Philosophical Transactions of the Royal Society of London Series B-Biological Sciences 354: 751–756. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29. Woolhouse M, Gaunt E (2007) Ecological origins of novel human pathogens. Critical Reviews in Microbiology 33: 231–242. [DOI] [PubMed] [Google Scholar]
  • 30. Daszak P, Cunningham AA, Hyatt AD (2000) Emerging Infectious Diseases of Wildlife— Threats to Biodiversity and Human Health. Science 287: 443–449. [DOI] [PubMed] [Google Scholar]
  • 31. Rabinowitz P, Gordon Z, Chudnov D, Wilcox M, Odofin L, et al. (2006) Animals as sentinels of bioterrorism agents. Emerging Infectious Diseases 12: 647–652. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32.Institute of Medicine (2012) Information Sharing and Collaboration: Applications to Integrated Biosurveillance: Workshop Summary. The National Academies Press. [PubMed]
  • 33. Daszak P (2009) A Call for “Smart Surveillance”: A Lesson Learned from H1N1. EcoHealth 6: 1–2. [DOI] [PubMed] [Google Scholar]
  • 34. Lloyd-Smith JO, George D, Pepin KM, Pitzer VE, Pulliam JRC, et al. (2009) Epidemic Dynamics at the Human-Animal Interface. Science 326: 1362–1367. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35. Feighner BH, Eubank S, Glass RJ, Davey VJ, Chrétien JP, et al. (2009) Infectious disease modeling and military readiness. Emerging Infectious Diseases 15: e1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36. Bravata DM, Sundaram V, McDonald KM, Smith WM, Szeto H, et al. (2004) Evaluating detection and diagnostic decision support systems for bioterrorism response. Emerg Infect Dis 10: 100–108. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 37. Rolka H, Burkom H, Cooper GF, Kulldorff M, Madigan D, et al. (2007) Issues in applied statistics for public health bioterrorism surveillance using multiple data streams: Research needs ‡. Statistics in Medicine 26: 1834–1856. [DOI] [PubMed] [Google Scholar]
  • 38. Prieto DM, Das TK, Savachkin AA, Uribe A, Izurieta R, et al. (2012) A systematic review to identify areas of enhancements of pandemic simulation models for operational use at provincial and local levels. BMC Public Health 12: 251. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39. Costa J, Peterson AT, Beard CB (2002) Ecologic niche modeling and differentiation of populations of Triatoma brasiliensis Neiva, 1911, the most important Chagas' disease vector in northeastern Brazil (Hemiptera, Reduviidae, Triatominae). Am J Trop Med Hyg 67: 516–520. [DOI] [PubMed] [Google Scholar]
  • 40. Knorr-Held L, Besag J (1998) Modelling risk from a disease in time and space. Stat Med 17: 2045–2060. [DOI] [PubMed] [Google Scholar]
  • 41. Jentes ES, Poumerol G, Gershman MD, Hill DR, Lemarchand J, et al. (2011) The revised global yellow fever risk map and recommendations for vaccination, 2010: consensus of the Informal WHO Working Group on Geographic Risk for Yellow Fever. The Lancet Infectious Diseases 11: 622–632. [DOI] [PubMed] [Google Scholar]
  • 42. Eisen L, Eisen RJ (2011) Using geographic information systems and decision support systems for the prediction, prevention and control of vector-borne diseases. Annu Rev Entomol 56: 41–61. [DOI] [PubMed] [Google Scholar]
  • 43. Tatem AJ, Hay SI, Rogers DJ (2006) Global traffic and disease vector dispersal. Proceedings of the National Academy of Sciences of the United States of America 103: 6242–6247. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 44. Matsuda F, Ishimura S, Wagatsuma Y, Higashi T, Hayashi T, et al. (2008) Prediction of epidemic cholera due to Vibrio cholerae O1 in children younger than 10 years using climate data in Bangladesh. Epidemiology and Infection 136: 73–79. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45. Brooker S, Hotez PJ, Bundy DAP (2010) The Global Atlas of Helminth Infection: Mapping the Way Forward in Neglected Tropical Disease Control. PLoS Negl Trop Dis 4: e779. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46. Murray KA, Retallick RWR, Puschendorf R, Skerratt LF, Rosauer D, et al. (2011) Assessing spatial patterns of disease risk to biodiversity: implications for the management of the amphibian pathogen, Batrachochytrium dendrobatidis. Journal of Applied Ecology 49: 163–173. [Google Scholar]
  • 47. Tatem AJ, Baylis M, Mellor PS, Purse BV, Capela R, et al. (2003) Prediction of bluetongue vector distribution in Europe and north Africa using satellite imagery. Veterinary Microbiology 97: 13–29. [DOI] [PubMed] [Google Scholar]
  • 48. Estrada-Peña A, Zatansever Z, Gargili A, Aktas M, Uzun R, et al. (2007) Modeling the spatial distribution of crimean-congo hemorrhagic fever outbreaks in Turkey. Vector Borne & Zoonotic Diseases 7: 667–678. [DOI] [PubMed] [Google Scholar]
  • 49. Liccardo A, Fierro A (2013) A lattice model for influenza spreading. PLoS One 8: e63935. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50. Keller M, Blench M, Tolentino H, Freifeld CC, Mandl KD, et al. (2009) Use of unstructured event-based reports for global infectious disease surveillance. Emerg Infect Dis 15: 689–695. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51. Takla A, Velasco E, Benzler J (2012) The FIFA Women's World Cup in Germany 2011: A practical example for tailoring an event-specific enhanced infectious disease surveillance system. BMC Public Health 12: 576. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 52.ISO/IEC (2008) ISO/IEC 15288:2008 Systems and software engineering — System life cycle processes. International Organization for Standardization/International Electrotechnical Commission.
  • 53.IEEE (2012) IEEE Std 1012-2012 IEEE Standard for System and Software Verification and Validation. IEEE Standards Association.
  • 54.ISO/IEC-IEEE (2008) ISO/IEC 12207:2008, IEEE Std 12207-2008 Systems and Software Engineering – Software Life Cycle Processes. International Organization for Standardization/International Electrotechnical Commission and Institute of Electrical and Electronics Engineers.
  • 55.NASA (2008) NASA-STD-7009 Standard for Models and Simulations. National Aeronautics and Space Administration.
  • 56.U.S DoD (2011) Verification, Validation, and Accreditation (VV&A) Recommended Practices Guide (RPG). Modeling & Simulation Coordination Office, U.S. Department of Defense.
  • 57.U.S Army (1998) TRADOC Reg 5-11 U.S. Army Training and Doctrine Command (TRADOC) Models and Simulations (M&S) and Data Management. United States Army Training and Doctrine Command.
  • 58.U.S Navy (2005) Best Practices Guide for Verification, Validation, and Accreditation of Legacy Modeling and Simulation. Department of the Navy, Navy Modeling & Simulation Office.
  • 59.U.S DoD (2008) MIL-STD-3022 Documentation of Verification, Validation & Accreditation (VV&A) for Models and Simulations. U.S. Department of Defense, Modeling and Simulation Coordination Office.
  • 60.Boehm BW (1981) Software Engineering Economics; Yeh RT, editor: Prentice-Hall.
  • 61. Favier C, Chalvet-Monfray K, Sabatier P, Lancelot R, Fontenille D, et al. (2006) Rift Valley fever in West Africa: the role of space in endemicity. Trop Med Int Health 11: 1878–1888. [DOI] [PubMed] [Google Scholar]
  • 62. Eisen RJ, Eisen L (2008) Spatial Modeling of human risk of exposure to vector-borne pathogens based on epidemiological versus arthropod vector data. Journal of Medical Entomology 45: 181–192. [DOI] [PubMed] [Google Scholar]
  • 63. Margevicius KJ, Generous N, Taylor-McCabe KJ, Brown M, Daniel WB, et al. (2014) Advancing a framework to enable characterization and evaluation of data streams useful for biosurveillance. PLoS ONE 9: e83730. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 64. Hartley DM, Barker CM, Le Menach A, Niu T, Gaff HD, et al. (2012) Effects of Temperature on Emergence and Seasonality of West Nile Virus in California. The American Journal of Tropical Medicine and Hygiene 86: 884–894. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 65. Pascual M (2000) Cholera Dynamics and El Nino-Southern Oscillation. Science 289: 1766–1769. [DOI] [PubMed] [Google Scholar]
  • 66.Pullum LL, Cui X. Techniques and Issues in Agent-Based Model Validation; 2012; Boston, MA.
  • 67.Pullum LL, Cui X. A Hybrid Sensitivity Analysis Approach for Agent-based Disease Spread Models; 2012; Boston, MA.
  • 68. Koopman J (2004) Modeling Infection Transmission. Annual Review of Public Health 25: 303–326. [DOI] [PubMed] [Google Scholar]
  • 69. Bernard KW (2013) Health and national security: A contemporary collision of cultures. Biosecurity and Bioterrorism: Biodefense Strategy, Practice, and Science 11: 157–162. [DOI] [PubMed] [Google Scholar]
  • 70.Mankins JC (1995) Technology Readiness Levels.
  • 71.Hartley DM (2014) Using social media and other Internet data for public health surveillance: The importance of talking. Milbank Quarterly In press. [DOI] [PMC free article] [PubMed]
  • 72. Métras R, Collins LM, White RG, Alonso S, Chevalier V, et al. (2011) Rift Valley fever epidemiology, surveillance, and control: what have models contributed? Vector-Borne and Zoonotic Diseases 11: 761–771. [DOI] [PubMed] [Google Scholar]
  • 73. Pitman R, Fisman D, Zaric GS, Postma M, Kretzschmar M, et al. (2012) Dynamic Transmission Modeling: A Report of the ISPOR-SMDM Modeling Good Research Practices Task Force-5. Value in health: the journal of the International Society for Pharmacoeconomics and Outcomes Research 15: 828–834. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 74. Jewell CP, Kypraios T, Christley RM, Roberts GO (2009) A novel approach to real-time risk prediction for emerging infectious diseases: A case study in Avian Influenza H5N1. Preventive Veterinary Medicine 91: 19–28. [DOI] [PubMed] [Google Scholar]
  • 75.Erraguntla M, Ramachandran S, Chang-Nien W, Mayer RJ (2010) Avian Influenza Datamining Using Environment, Epidemiology, and Etiology Surveillance and Analysis Toolkit (E3SAT). 2010 43rd Hawaii International Conference on System Sciences (HICSS-43); Honolulu, HI. IEEE. 7 pp. [Google Scholar]
  • 76.Hadorn DC, Racloz V, Schwermer H, Stark KDC (2009) Establishing a cost-effective national surveillance system for Bluetongue using scenario tree modelling. Veterinary Research 40: Article 57. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 77. Hutber AM, Kitching RP, Pilipcinec E (2006) Predictions for the timing and use of culling or vaccination during a foot-and-mouth disease epidemic. Research in Veterinary Science 81: 31–36. [DOI] [PubMed] [Google Scholar]
  • 78. Mayer D, Reiczigel J, Rubel F (2008) A Lagrangian particle model to predict the airborne spread of foot-and-mouth disease virus. Atmospheric Environment 42: 466–479. [Google Scholar]
  • 79. Martínez-López B, Ivorra B, Ramos AM, Sánchez-Vizcaíno JM (2011) A novel spatial and stochastic model to evaluate the within- and between-farm transmission of classical swine fever virus. I. General concepts and description of the model. Veterinary Microbiology 147: 300–309. [DOI] [PubMed] [Google Scholar]
  • 80. Rubel F, Fuchs K (2005) A decision-support system for real-time risk assessment of airborne spread of the foot-and-mouth disease virus. Methods Inf Med 44: 590–595. [PubMed] [Google Scholar]
  • 81. Bos MEH, Van Boven M, Nielen M, Bouma A, Elders ARW, et al. (2007) Estimating the day of highly pathogenic avian influenza (H7N7) virus introduction into a poultry flock based on mortality data. Veterinary Research 38: 493–504. [DOI] [PubMed] [Google Scholar]
  • 82. Verdugo C, Cardona CJ, Carpenter TE (2009) Simulation of an early warning system using sentinel birds to detect a change of a low pathogenic avian influenza virus (LPAIV) to high pathogenic avian influenza virus (HPAIV). Preventive Veterinary Medicine 88: 109–119. [DOI] [PubMed] [Google Scholar]
  • 83. Mongkolsawat C, Kamchai T (2009) GIS Modeling for Avian Influenza Risk Areas. International Journal of Geoinformatics 5: 7–12. [Google Scholar]
  • 84. Ortiz-Pelaez A, Pfeiffer DU, Tempia S, Otieno FT, Aden HH, et al. (2010) Risk mapping of Rinderpest sero-prevalence in Central and Southern Somalia based on spatial and network risk factors. Bmc Veterinary Research 6: 22. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 85. Racloz V, Venter G, Griot C, Stark KDC (2008) Estimating the temporal and spatial risk of bluetongue related to the incursion of infected vectors into Switzerland. BMC Vet Res 4: 42. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 86. Radosavljevic V, Belojevic G (2009) A new model of bioterrorism risk assessment. Biosecur Bioterror 7: 443–451. [DOI] [PubMed] [Google Scholar]
  • 87. Purse BV, Baylis M, Tatem AJ, Rogers DJ, Mellor PS, et al. (2004) Predicting the risk of bluetongue through time: climate models of temporal patterns of outbreaks in Israel. Revue Scientifique Et Technique-Office International Des Epizooties 23: 761–775. [DOI] [PubMed] [Google Scholar]
  • 88.Cappelle J, Girard O, Fofana B, Gaidet N, Gilbert M. Ecological Modeling of the Spatial Distribution of Wild Waterbirds to Identify the Main Areas Where Avian Influenza Viruses are Circulating in the Inner Niger Delta, Mali. EcoHealth ePub: 1–11. In Press. [DOI] [PubMed]
  • 89.Mubangizi M, Mwebaze E, Quinn JA (2009) Computational Prediction of Cholera Outbreaks; Kampala. ICCIR.
  • 90. Schaafsma AW, Hooker DC (2007) Climatic models to predict occurrence of Fusarium toxins in wheat and maize. Int J Food Microbiol 119: 116–125. [DOI] [PubMed] [Google Scholar]
  • 91. Marechal F, Ribeiro N, Lafaye M, Guell A (2008) Satellite imaging and vector-borne diseases: the approach of the French National Space Agency (CNES). Geospatial Health 3: 1–5. [DOI] [PubMed] [Google Scholar]
  • 92. Wagner MM (2002) Models of computer-based outbreak detection. The Reference Librarian 39: 343–362. [Google Scholar]
  • 93. Yamamoto T, Tsutsui T, Nishiguchi A, Kobayashi S (2008) Evaluation of surveillance strategies for bovine brucellosis in Japan using a simulation model. Preventive Veterinary Medicine 86: 57–74. [DOI] [PubMed] [Google Scholar]
  • 94. Fichet-Calvet E, Rogers DJ (2009) Risk maps of lassa fever in West Africa. Plos Neglected Tropical Diseases 3: e388. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 95. Green AL, Dargatz DA, Herrero MV, Seitzinger AH, Wagner BA, et al. (2005) Risk factors associated with herd-level exposure of cattle in Nebraska, North Dakota, and South Dakota to bluetongue virus. American Journal of Veterinary Research 66: 853–860. [DOI] [PubMed] [Google Scholar]
  • 96. Kim DR, Ali M, Thiem VD, Park JK, von Seidlein L, et al. (2008) Geographic analysis of shigellosis in Vietnam. Health and Place 14: 755–767. [DOI] [PubMed] [Google Scholar]
  • 97. Kolivras KN, Comrie AC (2003) Modeling valley fever (coccidioidomycosis) incidence on the basis of climate conditions. Int J Biometeorol 47: 87–101. [DOI] [PubMed] [Google Scholar]
  • 98. Lipp EK, Huq A, Colwell R (2002) Effects Of Global Climate On Infectious Disease: The Cholera Model. Clinical Microbiology Reviews 15: 757–770. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 99.Lockhart CY (2008) Surveillance for diseases of poultry with specific reference to avian influenza: Massey University.
  • 100. Baptista-Rosas RC, Hinojosa A, Riquelme M (2007) Ecological Niche Modeling of Coccidioides spp. in Western North American Deserts. Annals of the New York Academy of Sciences 1111: 35–46. [DOI] [PubMed] [Google Scholar]
  • 101. Chhetri BK, Perez AM, Thurmond MC (2010) Factors associated with spatial clustering of foot-and-mouth disease in Nepal. Tropical Animal Health and Production 42: 1441–1449. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 102.Cooke III WH, Grala K, Wallis RC (2006) Avian GIS models signal human risk for West Nile virus in Mississippi. International Journal of Health Geographics 5: Article 36. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 103. Daniel M, Kolar J, Zeman P, Pavelka K, Sadlo J (1998) Predictive map of Ixodes ricinus high-incidence habitats and a tick-borne encephalitis risk assessment using satellite data. Experimental & Applied Acarology 22: 417–433. [DOI] [PubMed] [Google Scholar]
  • 104. Eisen RJ, Griffith KS, Borchert JN, MacMillan K, Apangu T, et al. (2010) Assessing human risk of exposure to plague bacteria in northwestern Uganda based on remotely sensed predictors. American Journal of Tropical Medicine and Hygiene 82: 904–911. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 105. Eisen RJ, Reynolds PJ, Ettestad P, Brown T, Enscore RE, et al. (2007) Residence-linked human plague in New Mexico: A habitat-suitability model. American Journal of Tropical Medicine and Hygiene 77: 121–125. [PubMed] [Google Scholar]
  • 106. Adjemian JCZ, Girvetz EH, Beckett L, Foley JE (2006) Analysis of Genetic Algorithm for Rule-Set Production (GARP) Modeling Approach for Predicting Distributions of Fleas Implicated as Vectors of Plague, Yersinia pestis, in California. Journal of Medical Entomology 43: 93–103. [DOI] [PubMed] [Google Scholar]
  • 107. Anyamba A, Chretien J-P, Small J, Tucker CJ, Formenty PB, et al. (2009) Prediction of a Rift Valley fever outbreak. Proceedings of the National Academy of Sciences of the United States of America 106: 955–959. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 108. Kleinman K, Lazarus R, Platt R (2004) A generalized linear mixed models approach for detecting incident clusters of disease in small areas, with an application to biological terrorism. Am J Epidemiol 159: 217–224. [DOI] [PubMed] [Google Scholar]
  • 109. Munar-Vivas O, Morales-Osorio JG, Castañeda-Sánchez DA (2010) Use of field-integrated information in GIS-based maps to evaluate Moko disease (Ralstonia solanacearum) in banana growing farms in Colombia. Crop Protection 29: 936–941. [Google Scholar]
  • 110.Hartley DM, Nelson NP, Walters RA, Arthur R, Yangarber R, et al. (2010) The landscape of international event-based biosurveillance. Emerging Health Threats Journal 3: Article e3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 111.Kong X, Wallstrom GL, Hogan WR (2008) A temporal extension of the Bayesian aerosol release detector. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Raleigh, NC. Springer Verlag. pp. 97–107.
  • 112.Lu HM, Zeng D, Chen H (2008) Bioterrorism event detection based on the Markov switching model: A simulated anthrax outbreak study; Taipei, Taiwan. IEEE. pp. 76–81.
  • 113. Nordin JD, Goodman MJ, Kulldorff M, Ritzwoller DP, Abrams AM, et al. (2005) Simulated anthrax attacks and syndromic surveillance. Emerg Infect Dis 11: 1394–1398. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 114.Martin PAJ, Cameron AR, Greiner M (2006) Demonstrating freedom from disease using multiple complex data sources 1: A new methodology based on scenario trees. Preventive Veterinary Medicine. [DOI] [PubMed]
  • 115. Kolivras KN (2010) Changes in dengue risk potential in Hawaii, USA, due to climate variability and change. Climate Research 42: 1–11. [Google Scholar]
