Abstract
Outbreaks of transboundary animal diseases (TADs) have the potential to cause significant detriment to animal, human, and environmental health; severe economic consequences; and threats to national security. Challenges concerning data sharing, model development, decision support, and disease emergence science have recently been articulated, and these challenges and recommendations have been recognized and advocated across the disciplines that intersect with outbreak prediction and forecast modeling of infectious diseases. To advance the effective application of computation and risk communication, analytical products ought to follow a collaboratively agreed common plan for implementation. Research articles should seek to inform and assist prioritization of national and international strategies by developing established criteria to identify and follow best practice standards for assessing risk model attributes and performance. A well-defined framework for eliminating gaps in policy, process, and planning knowledge areas would help meet the pressing need for a comprehensive strategy for countering TAD outbreak risks. A quantitative assessment that accurately captures the risk of introduction of a TAD through various pathways can be a powerful tool in guiding where government, academic, and industry resources ought to be allocated, whether implementation of additional risk management solutions is merited, and where research efforts should be directed to minimize risk. This review outlines part of a process for developing quantitative risk analyses to collect, analyze, and communicate this knowledge. A more comprehensive and unabridged manual was also developed. The framework supporting the application of aligned computational tools for readiness continues our approach of applying a preparedness mindset to challenges concerning threats to global biosecurity, secure food systems, and risk-mitigated agricultural economies.
Keywords: emerging diseases, epidemiology, preparedness, pathways risk analysis, quantitative risk assessment, veterinary public health
Review
Consequences associated with outbreaks include animal suffering and death (due to sickness, starvation resulting from restrictions on feed movement, and culling efforts used to eradicate the disease), psychosocial impact to those affected by the disease, and varying environmental effects resulting from disposal of animal carcasses and waste by rendering, incineration, or burial.1 All layers of agricultural production from small holder livestock operations to large integrated corporations would be impacted by an outbreak event. Furthermore, most people with even tertiary connections to agricultural and livestock employees would be affected. Economic costs result not only from control and eradication efforts, including replacing lost livestock, but also from export and tourism losses.2 Prominent examples in the western world include the 2001 outbreak of foot-and-mouth disease in the United Kingdom and the 1997–1998 outbreak of classical swine fever in the Netherlands, both of which cost their respective governments billions of dollars and required the slaughter of millions of animals to control disease spread.3,4 There is always a need for review and strengthening of measures for the prevention of such events.5 To minimize risk of transboundary animal disease (TAD) incursion, these measures must be well informed by knowledge of potential entry points and pathways of exposure to a pathogen.6
There are limitations in the existing bodies of knowledge for establishing a common set of tools for collecting data, computing, and communicating quantitative risk regarding TAD. Codifying and promoting the use of a common approach would be a worthwhile purpose for organizations such as the World Organization for Animal Health (OIE) and nations with significant commerce and security impetuses. The process outlined in this article will help the timely creation of meaningful quantitative risk assessments as informative components of risk analysis.
Risk Analysis
Risk analysis consists of hazard identification, risk assessment, and risk management components.7 Risk analysis can demonstrate which diseases and entry pathways carry levels of risk above an acceptable threshold, guiding risk management measures.
Hazard identification recognizes mechanisms by which an adverse event could occur. This consists of the pathways by which a pathogen could be introduced to a susceptible livestock population. Articulating these pathways relies on a comprehensive understanding of host, agent, and environmental interactions plus existing risk management procedures in place.
Risk assessments are a tool for systematically evaluating the probability of a consequential event, such as the entry of a TAD through various pathways recognized in a hazard identification, as well as the consequences of the event.6,7 Application of risk assessments to animal diseases is valuable in demonstrating gaps in a nation's biosecurity, such as poorly enforced import regulations of potential disease-carrying products or ineffective quarantine protocols at agricultural sites.1 Risk assessments may inform other risk management policies, such as training and awareness programs, for farmers and veterinarians or implementation of surveillance techniques in high-risk animal populations.7–9
Risk assessments are a critical component of risk analysis as they pertain to international trade. When importing from another country or region that is perceived to have a higher risk, the Agreement on the Application of Sanitary and Phytosanitary Measures of the World Trade Organization (WTO) stipulates that protective measures must be based on analysis of the risks associated with continued trade with that country.10
For risk analysis concerned with TAD introduction, risk management may include techniques such as inspection of imported animals and products, quarantines, and trade restrictions. To appropriately guide regulatory decision making, risk analysis must take into account “available scientific evidence.”11,12 The OIE lays out a set of standards for risk analysis in the Terrestrial Animal Health Code.6
Quantitative Risk Assessment
The magnitude of risk estimated in an assessment can be expressed qualitatively or quantitatively. In a qualitative assessment, the probability of an event is expressed using descriptors such as negligible, low, moderate, or high. There is inherently a low level of detail to this output, and interpretations of a descriptive term may vary considerably. However, if a highly detailed expression of risk is not required, qualitative risk assessments can be useful and less expensive tools. A quantitative assessment expresses the likelihood of an event in numerical terms. This can give a much clearer indication of the magnitude of risk and is not prone to the linguistic limitations of a qualitative assessment, but a quantitative assessment is not appropriate in all situations.7,13 If the data for a quantitative model are largely unknown, the output may provide no more detail than a qualitative assessment. Quantitative models also take more time and resources to create, meaning a qualitative assessment may be more appropriate for certain tasks. In general, the objective and scope of a risk assessment should guide the method of estimating the likelihood of an event.6,14
For quantitative assessments, the risk estimate should be presented in units that are rational for the objective of the study. For risk assessments of TAD introduction, this is often given in terms of likelihood of an outbreak per year, or the inverse, expected years until an outbreak,8,15–21 although not always.22 While this approach is rational and intuitive, a better output also considers the consequence of an outbreak, multiplying the probability by the cost to give an output in economic value per year. This allows for directly evaluating the cost-effectiveness of risk management techniques by comparing the estimated reduction in risk with the cost of implementing the protocol. It also encourages balancing the benefits of trade when weighed against restrictive management techniques.23
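The probability-times-cost framing above can be illustrated with a short calculation. All figures below are invented for demonstration and carry no empirical weight; they simply show how an expected annual loss supports direct cost-effectiveness comparison of a risk management measure.

```python
# Hypothetical illustration: expressing risk as expected annual cost.
# All figures are invented for demonstration, not drawn from the review.

p_outbreak_per_year = 0.004           # assumed annual probability of an outbreak
outbreak_cost = 2_500_000_000         # assumed economic consequence of an outbreak (USD)

expected_annual_loss = p_outbreak_per_year * outbreak_cost

# A candidate risk management measure, assumed to halve the entry
# probability at an assumed implementation cost per year.
mitigated_p = p_outbreak_per_year * 0.5
measure_cost_per_year = 3_000_000

risk_reduction_value = (p_outbreak_per_year - mitigated_p) * outbreak_cost
net_benefit = risk_reduction_value - measure_cost_per_year

print(f"Expected annual loss: ${expected_annual_loss:,.0f}")
print(f"Annual value of risk reduction: ${risk_reduction_value:,.0f}")
print(f"Net annual benefit of the measure: ${net_benefit:,.0f}")
```

Under these assumed numbers, the measure pays for itself; with the output in economic units, the same comparison extends naturally to weighing trade benefits against restrictive management techniques.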
Quantitative models consist of a series of events that must occur as prerequisites for an outbreak. Each event is assigned a probability based on the best knowledge available at the time. Predicting the likelihood of future events entails uncertainty and these inputs should be entered as a distribution of possible values. The uncertainty associated with a model's output should be conserved and demonstrated clearly.24 Depending on the distributions utilized, this can be presented in a variety of ways, including probability density functions or giving the shape of the probability distribution along with key parameters. Presenting the uncertainty associated with all inputs of a model is also important in demonstrating knowledge gaps in areas with high uncertainty. This information may be useful for directing research efforts.
There were several considerations in designing a process for developing risk analyses with quantitative risk assessments. Our process focuses on creation of models that numerically express risk of introduction of a TAD to a susceptible livestock population. It is important that these risk analyses be produced in a timely manner to stay relevant and up to date with current available information. This process ensures that models can be created from a template based on similar diseases and updated by altering input parameters for any given step of a scenario tree.
There are seven steps in our developed process for the development of a risk analysis using a quantitative risk assessment. The quantitative modeling involved is not a newly developed statistical model but is employed as a step within the process:
(1) Identify entry routes.
(2) Develop a scenario tree outlining the pathways culminating in the identified hazard, with each event as a branch, or “node.”
(3) Define units and assign them to each node of the scenario tree.
(4) Gather data and compute numerical quantities for each node.
(5) Perform calculations to capture the overall risk of the series of events in the pathway.
(6) Evaluate impact of risk management options at appropriate nodes of the scenario tree.
(7) Communicate results.
Step 1. A thorough understanding of the pathogen, affected species, and environmental interactions should be established to facilitate identification of potential entry scenarios. Agent factors include routes of transmission, geographic distribution, hardiness, number of strains, and mutability. Host factors include the species affected, clinical signs presentation, morbidity, and mortality. Environmental factors such as whether the pathogen is vector borne (and if so, the geographic distribution of suitable vectors) and survival of the pathogen in varying climatic conditions (i.e., wind impacts, temperature–humidity factors, surface moisture, and terrain features) should be considered as well. An understanding of these factors should guide consideration of potential routes of entry.
Step 2. Once identified, scenario trees for each pathway should be created. These scenario trees should show each step in a pathway for an outbreak, starting with the introduction of the pathogen to a country or region of origin and terminating with the exposure of an infective dose to a susceptible animal. It may be helpful to consider these steps in two phases: a release phase and an exposure phase. For release, the disease must be present in the country of origin being considered and the agent must be effectively sent out to its destination country. For exposure, the agent must survive transit (including counter measures) and make effective contact with a susceptible host in the destination country.
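The release/exposure decomposition above can be sketched as a simple data structure, with the pathway probability taken as the product of its node probabilities. The node descriptions and point probabilities below are hypothetical placeholders; in a real assessment each node would carry an elicited or empirically derived distribution rather than a single value.

```python
# A minimal sketch of a single-pathway scenario tree for TAD entry via an
# imported-product route. Names, phases, and probabilities are hypothetical.
import math

scenario_tree = [
    # (node description,                         phase,      probability)
    ("Disease present in country of origin",     "release",  0.10),
    ("Pathogen present in exported consignment", "release",  0.05),
    ("Pathogen survives transit and inspection", "exposure", 0.20),
    ("Effective contact with susceptible host",  "exposure", 0.15),
]

def pathway_probability(nodes):
    """All events must occur in sequence, so the pathway probability is the product."""
    return math.prod(p for _, _, p in nodes)

release = pathway_probability([n for n in scenario_tree if n[1] == "release"])
exposure = pathway_probability([n for n in scenario_tree if n[1] == "exposure"])

print(f"Release phase probability:   {release:.4g}")
print(f"Exposure phase probability:  {exposure:.4g}")
print(f"Overall pathway probability: {release * exposure:.3g}")
```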
Step 3. Once a pathway scenario tree is in place, appropriate units should be assigned to each node in a diagram. The product of the units should be a rational value with utility in decision making, and kept consistent across pathways for ease of comparison. We recommend that scenario trees be set up to give an output in terms of outbreaks per year (or the inverse, average years between outbreaks) or in average annual economic cost due to outbreaks. The latter units have the added utility of being readily compared with estimated annual cost for additional risk management efforts, including added cost for implementation. It also helps to demonstrate the importance of consequence factors in a risk analysis, important for providing clearer guidance from data-derived analysis. Ideally, all values will be based on previous well-performed scientific studies or well-kept records. Even if the necessary data exist, locating them is often not a trivial task.
Step 4. Many pathways (e.g., illegal imports) involve information with high uncertainty. For values that are not available from well-documented studies, a best scientific judgment should be elicited from experts in the relevant field.25 Estimates should be based on all available data that might inform a rational judgment. Expert elicitation is a powerful tool for obtaining data through synthesis of expert prediction where little data is available or obtainable to inform timely risk analysis.26 It should be acknowledged that expert predictions are imperfect, and the expert elicitation process should seek to minimize the impact of biases and heuristics in these judgments. A breakdown of expert elicitation techniques is discussed later.
Step 5. Next, simulate to estimate the overall risk of introduction of the TAD by the given pathway. Since many, if not all, of the nodes will be represented by a distribution of possible values, simulation modeling is necessary to incorporate the variation exhibited by these factors. A Monte Carlo simulation performs the risk assessment by selecting a random value from each node that has inherent uncertainty and then calculating the result. New randomly selected values are chosen and results calculated repeatedly until the results converge to form a distribution of possible outcome values. The number of iterations needed depends on factors such as the number of nodes in a pathway, the uncertainty at each of the nodes, and the desired margin of error.
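As a concrete sketch of the Monte Carlo step, the following uses only Python's standard library to sample each node from a triangular(min, mode, max) distribution and propagate the product. All node parameters and the iteration count are illustrative assumptions, not values from any published assessment.

```python
# Sketch of a Monte Carlo simulation over a hypothetical four-node pathway.
# Each node's uncertainty is represented by a triangular distribution.
import random
import statistics

random.seed(42)  # fixed seed for a reproducible run

# (min, mode, max) per-node annual probabilities -- illustrative only
nodes = [
    (0.05, 0.10, 0.20),
    (0.01, 0.05, 0.10),
    (0.10, 0.20, 0.40),
    (0.05, 0.15, 0.30),
]

def one_iteration():
    """Sample a value for each node and return the pathway product."""
    p = 1.0
    for lo, mode, hi in nodes:
        p *= random.triangular(lo, hi, mode)  # note argument order: low, high, mode
    return p

results = sorted(one_iteration() for _ in range(50_000))

mean = statistics.fmean(results)
p5, p95 = results[int(0.05 * len(results))], results[int(0.95 * len(results))]
print(f"Mean annual probability: {mean:.2e} (90% interval {p5:.2e}-{p95:.2e})")
print(f"Mean years between outbreaks: {1 / mean:,.0f}")
```

Note that the output is itself a distribution, preserving the uncertainty of the inputs as recommended above, rather than a single point estimate.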
To investigate the robustness of a model for a given pathway, sensitivity analysis should be performed. Sensitivity analysis is used to understand the degree to which the overall output of a model is dependent on uncertainty or variability in an individual factor. Alternative scenarios at a given cell can be simulated on the model to evaluate the impact on the model results. The type and extent of sensitivity analysis that should be used depend on the type and degree of uncertainty and variability expected for a value. The results of sensitivity analysis may be valuable in demonstrating where more investigation is needed to increase confidence in a model if there is high uncertainty at a node that greatly impacts the overall result.
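One common, simple form of sensitivity analysis is a one-at-a-time ("tornado") comparison: swing each node between its lower and upper bound while holding the other nodes at their modal values, then rank nodes by the resulting output swing. The bounds below are hypothetical and reuse the illustrative four-node pathway structure.

```python
# One-at-a-time sensitivity sketch over a hypothetical pathway.
# (min, mode, max) bounds per node -- illustrative only.
node_bounds = {
    "present in origin": (0.05, 0.10, 0.20),
    "in consignment":    (0.01, 0.05, 0.10),
    "survives transit":  (0.10, 0.20, 0.40),
    "effective contact": (0.05, 0.15, 0.30),
}

def pathway(values):
    """Pathway probability is the product of node values."""
    p = 1.0
    for v in values:
        p *= v
    return p

baseline = pathway(mode for (_, mode, _) in node_bounds.values())

swings = {}
for name in node_bounds:
    # Vary one node across its range while others stay at their modes.
    low_out = pathway(node_bounds[k][0] if k == name else node_bounds[k][1] for k in node_bounds)
    high_out = pathway(node_bounds[k][2] if k == name else node_bounds[k][1] for k in node_bounds)
    swings[name] = high_out - low_out

# Nodes with the largest swing dominate output uncertainty and may merit
# further data gathering or research effort.
for name, swing in sorted(swings.items(), key=lambda kv: -kv[1]):
    print(f"{name:18s} output swing: {swing:.2e}")
```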
Step 6. Alternative sets of input variables can be substituted to simulate use of a new or different risk management process. If an action is expected to change the probability of an event required for an outbreak to occur by a certain percentage, the probability at that node can be adjusted accordingly and the new output of the model compared with the nonadjusted parameters. If the output of the model is given in economic terms such as average cost due to outbreaks per year, then the economic benefit can be evaluated by this method. An alternative application of this approach is calculating the minimum required efficacy of a risk management technique to justify the investment required for its implementation.
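The minimum-required-efficacy idea mentioned above can be expressed in a few lines: the break-even efficacy is the measure's annual cost divided by the baseline expected annual loss. All figures are invented for illustration.

```python
# Sketch of the "minimum required efficacy" calculation: the smallest
# fractional reduction in entry probability a measure must deliver for its
# expected benefit to cover its annual cost. Figures are hypothetical.
baseline_p = 0.004             # assumed baseline annual outbreak probability
outbreak_cost = 2_500_000_000  # assumed cost of an outbreak (USD)
measure_cost = 3_000_000       # assumed annual cost of the candidate measure

expected_annual_loss = baseline_p * outbreak_cost

# The benefit of efficacy e is e * expected_annual_loss; break-even sets
# that benefit equal to the measure's annual cost.
min_efficacy = measure_cost / expected_annual_loss

print(f"Measure must reduce entry probability by at least {min_efficacy:.0%}")
```

Under these assumptions, a measure reducing entry probability by less than the break-even fraction would cost more per year than the risk it removes.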
Step 7. The final step in the process of development of a risk analysis is communication of the findings to relevant parties. Quantitative risk assessment can be used to inform not only regulatory decision making but also research efforts, by highlighting important areas of uncertainty that indicate gaps in knowledge.
Communicating Results
Developing a quantitative risk assessment as part of a risk analysis is not a minor task. It requires substantial investment and input from several parties to be accomplished. A well-performed quantitative risk assessment can provide a level of detail in informing decision making that cannot be matched by qualitative assessments. A certain quality of information must be available as input parameters of a quantitative model for its output to be meaningful. This principle should guide not only the decision of whether a quantitative approach is appropriate, but also encourage a healthy dose of caution when judging whether the data to be used in a model truly represent reality.
Models are never perfect, but to the extent that they accurately represent the conditions of the system they are meant to replicate, they can be useful. An analyst has a duty to document all assumptions, variability, and uncertainty inherent in the model in order to provide elucidative guidance for decision makers. With regard to assessing risk of TAD introduction, poorly estimating the probability or consequence of an outbreak can have significant economic, political, and (through the SPS agreement of the WTO) legal ramifications. Producing risk analyses in a timely manner is therefore important to ensure that the data used are current and applicable for decision making. However, by its very nature creating quantitative assessments is not a quick and easy process, and time must be allocated for identifying important and realistic pathways and gathering data.
Expert Elicitation
The elicitation of quantitative judgments in the form of subjective probability distributions from experts in the relevant field is a useful tool for informing decision making where empirical data are not available. There is always uncertainty associated with estimating unknown values, which these elicited probabilities must reflect. As the amount of evidence available increases, these distributions will approach reality.27 In areas where there is little knowledge, a high amount of uncertainty will be represented by wide probability intervals.
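As a toy illustration of the point that distributions narrow as evidence accumulates, a Beta distribution representing belief about an unknown proportion tightens as observations are added. The prior and observation counts below are hypothetical.

```python
# Illustration: a Beta(a, b) belief distribution narrows as (hypothetical)
# observations are added while the mean stays fixed at 0.2.
import math

def beta_sd(a, b):
    """Standard deviation of a Beta(a, b) distribution."""
    var = (a * b) / ((a + b) ** 2 * (a + b + 1))
    return math.sqrt(var)

prior = (2, 8)                # weak prior belief, mean 0.2
after_10 = (2 + 2, 8 + 8)     # +10 observations in the same proportion
after_100 = (2 + 20, 8 + 80)  # +100 observations

for label, (a, b) in [("prior", prior), ("after 10 obs", after_10),
                      ("after 100 obs", after_100)]:
    print(f"{label:14s} mean={a / (a + b):.2f}  sd={beta_sd(a, b):.3f}")
```

The widening-interval behavior runs in reverse: with little knowledge, the interval is wide; each round of evidence shrinks it toward the true value.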
Expert judgment, just like any other, is subject to a variety of biases and heuristics. Some of the most prominent and frequently discussed are representativeness, anchoring and adjustment, and availability.28 Each of these heuristics and the biases that go along with them should be considered when designing an expert elicitation protocol.29
During the elicitation process, it should also be considered that experts tend to express overconfidence in their estimates. Ideally, the proportion of true values that lie within a confidence interval should be equal to the probability assigned by the expert. In reality, it has been shown repeatedly that the proportion of true values that lie outside experts' confidence intervals tends to be too high when judging difficult values (the reverse has been shown to be true of simple general knowledge values).30–33
Properly formatting questions can minimize the effect of overconfidence. Asking for an upper and lower limit to a confidence interval before eliciting a "most likely value" helps ensure that an expert does not overestimate the likelihood of a single point value (this relates to the anchoring and adjustment heuristic). It is also recommended that the elicitor prompt the expert to think of situations that would result in the actual value being above or below the upper and lower limits given by the expert.26 If the expert can think of a feasible way that the actual value falls outside those limits, they may reconsider and expand their confidence interval.
Elicitors should be aware of these heuristics and biases as well as methods for minimizing their effects.
A general rule for selecting experts for the elicitation process is that they should, as a group, have a knowledge base that is balanced, credible, and as extensive as possible on the subject matter.
It is of particular importance that all scientific views on a subject are represented by the panel of experts chosen. Perhaps more than any other factor, this should play a role in determining the number of experts chosen for elicitation. If opinion in the field is relatively uniform, then a smaller panel is likely to be sufficient. It is recommended that at least five to six experts should be included in a given study.26 If diverse views exist on questions of interest, then more experts are needed.34 Adding more experts past 15 may provide marginal returns on the additional cost and time.35
Ultimately, financial and time constraints play a major part in determining the number of experts chosen. However, all relevant views and knowledge ought to be included in the study to ensure that the results are meaningful and not misleading to decision makers.
A first step in identifying experts for a study should involve a collaborator who is highly knowledgeable in the field and relevant literature and who can identify and nominate potential experts. Depending on the formality of the selection process, experts selected to take part in the study may offer their own nominations for other experts in the field. This "cascade methodology" can result in a network of a large number of experts nominated for a study,36 although caution should be taken to ensure that this does not lead to exclusive participation by only a subset of experts of one background or viewpoint. A more formal method of selecting participants may be used in situations where an impression of transparency and fairness is necessary. Some approaches that have been used include literature counts and formal nomination by peers. While these methods have the attractive quality of added perceived legitimacy, they may exclude industry professionals, government officials, and others who could be considered experts but do not fit the mold of highly published researchers in academia.
Face-to-face interviews are generally held to be preferable due to benefits of active interaction by the elicitor in reducing the impact of biases discussed earlier.37 Additionally, face-to-face interviews may motivate experts to feel more responsible for providing rational, thoughtful judgments as opposed to an anonymous survey.34 However, these benefits need to be balanced with the additional cost and time associated with personal interviews both for the expert and researcher.38 For remote elicitation, possibilities include paper or electronic questionnaires or use of software specific for elicitation. If remote elicitation is chosen, similar pre-elicitation preparation should still be provided as for personal interviews.39
Another consideration for elicitation procedures using personal interviews is whether to elicit in an individual or group setting. If done individually, the elicited opinions of each expert are aggregated. The results of individual experts can be weighted equally or be based on a principle of indifference. Unequal weighting can be defined from peer opinion, number of literature citations, or performance on elicitation of a series of calibration variables whose quantities are known to the elicitor but not the experts. Cooke's classical model40 is an example of the latter; there is mixed evidence and opinion on the utility of using such methods for performance-based weighting of results over equal weighting.41–43
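The equal-versus-unequal weighting choice can be sketched as follows. The expert estimates and calibration-derived weights are hypothetical, and this is a simplified stand-in for performance-based schemes such as Cooke's classical model, not an implementation of any of them.

```python
# Sketch of aggregating individually elicited point estimates: equal
# weighting versus unequal (e.g., calibration-based) weighting.
# All estimates and weights are hypothetical.
experts = {
    "expert A": 0.012,
    "expert B": 0.008,
    "expert C": 0.020,
}

# Equal weighting (principle of indifference)
equal_weighted = sum(experts.values()) / len(experts)

# Unequal weighting, e.g., from calibration-question performance;
# weights must sum to 1.
weights = {"expert A": 0.5, "expert B": 0.3, "expert C": 0.2}
performance_weighted = sum(weights[k] * v for k, v in experts.items())

print(f"Equal-weighted estimate:       {equal_weighted:.4f}")
print(f"Performance-weighted estimate: {performance_weighted:.4f}")
```

In practice whole probability distributions, not single points, are aggregated, but the weighting logic is the same.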
Group elicitations can take several forms. The group can discuss and come to a consensus on each question, or experts may discuss the topics but answer the questions individually.
Group elicitation settings carry the benefit of allowing experts to share knowledge and viewpoints, which can be especially useful where the scope of the study crosses lines between disciplines of research. However, group interaction introduces a tendency of certain influential experts to dominate discussion.37 This can hide disagreement between experts that can demonstrate and even explain levels of uncertainty surrounding a parameter. If the group is asked to discuss the questions but answer individually, the effects of these influential experts may be minimized; however, the views of many individuals are still likely to be shaped by predominant personalities during the discussion.
The Delphi method is a format for individual elicitation that allows for restricted interaction between experts. Usually performed remotely, experts are asked to respond to questions with rationales for each response and send them back to the elicitor/analyst. The results of each expert are then presented anonymously in the next iteration of the questions, with the hope that the experts will alter their responses based on the merits of the feedback of their peers without being pressured by social cues. The previously demonstrated tendency for the accuracy of judgments to increase with each round is presumably because experts with the least knowledge of a topic tend to change their estimates more than those with greater knowledge.44 After the final round, the responses are aggregated mathematically.
Connecting the names of elicited experts with their predictions in any publications may be an uncomfortable prospect for them, since there is a substantial degree of uncertainty that goes with subjective estimates. This may deter them from providing their best judgment in light of corporate, political, or other ramifications. Furthermore, inclusion of each individual source of data is counterproductive toward the creation of an assessment that concisely conveys results. For these reasons, it is not recommended that experts be directly linked to their predictions.
There are also problems associated with full anonymity of experts. Some degree of responsibility for the outcomes of the study ensures that responses are considered carefully. Connecting the experts to the overall results of the study makes certain that they take their responses seriously. Following common tradition of acknowledging contributors and collaborators as well as any conflicts of interest should be the standard upheld in communicating data from risk analyses.
In general, face-to-face interviews should be able to be completed in a few hours, including preparation time for explaining the scope and purpose of the study and familiarizing experts with the elicitation process. The quantitative risk assessment development process outlined and discussed above helps achieve these objectives. The design of scenario trees and the assignment of values based on available data in previous steps will make apparent the knowledge gaps to be filled by expert elicitation.
There are several ways to ask experts to estimate unknown values of interest. The upper and lower extremes of the probability distribution should be investigated. Depending on the desired confidence interval (C), experts can be asked for a value for which they estimate that there is only a (1 − C)/2 chance of the actual value being larger (or smaller). This does not allow the elicitor to ask for a specific confidence interval, which might be necessary for model parameters, but in certain situations it may reduce overconfidence in judgments and can help to clarify probability estimates with experts.37 A general rule for any type of elicitation is that multiple iterations of questions with varying phrasing should be prepared in advance of the procedure to best match the metrics with which the expert is most familiar.45
The number of estimates that should be elicited for a given value depends on the nature of the value and knowledge of that value. If the shape of the estimated probability distribution for a value is known, then only the parameters to define that distribution are needed. Directly asking for parameters such as the standard deviation or variance is not recommended, as this requires that the expert go through one or more extra calculations from an intuitive estimate to reach the requested parameter.45 Instead, the elicitor should derive parameters of variance from the upper and lower bounds of the expert's estimated distribution. For common distributions, experts may even be more comfortable being initially asked for values 1, 2, or 3 standard deviations above (below) the mean.
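For example, if a quantity is judged to be roughly normally distributed, the standard deviation can be derived from an elicited 90% interval rather than requested directly. The interval below is hypothetical, and the symmetric-normal assumption is a simplification.

```python
# Sketch: deriving normal-distribution parameters from an expert's elicited
# 90% interval instead of asking for the variance directly. If the 5th and
# 95th percentiles of a normal lie z = 1.645 standard deviations from the
# mean, the interval width determines sigma. Elicited values are hypothetical.
Z_90 = 1.645  # standard normal z-score for the 95th percentile

elicited_low, elicited_high = 120.0, 480.0  # expert's 90% interval
mean = (elicited_low + elicited_high) / 2   # symmetric-distribution assumption
sd = (elicited_high - elicited_low) / (2 * Z_90)

print(f"Implied normal parameters: mean={mean:.1f}, sd={sd:.1f}")
```

The same approach generalizes to other common distributions whose parameters can be recovered from a small number of elicited quantiles.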
Refining efforts and methods to better harmonize progress in reducing gaps in worldwide risks from TADs, drawing on already existing substantial expertise, can focus bodies of knowledge from relevant disciplines on the creation of useful, valuable work products.
One very useful next step is the development of additional planning-oriented articles to continue identifying and addressing gaps and to bring the dissemination of the state of the science regarding TAD planning and response up to date.
Following an established, collaborative process with guidance from worldwide authorities will generate common pathways to practical evaluations of risk and provide powerful, consequential comparisons for global communities involved in diminishing risks to health, food, and security.
Abbreviations Used
- CSF
classical swine fever
- FMD
foot-and-mouth disease
- OIE
World Organization for Animal Health
- TAD
transboundary animal disease
- WTO
World Trade Organization
Acknowledgments
This material is based upon work supported by the State of Kansas, National Bio and Agro-Defense Facility (NBAF) Transition Fund and The Institute for Infectious Animal Diseases (IIAD), a Department of Homeland Security Science and Technology Center of Excellence.
Author Disclosure Statement
No competing financial interests exist.
References
- 1.Otte MJ, Nugent R, McLeod A. Transboundary Animal Diseases: Assessment of Socio-Economic Impacts and Institutional Responses. Food and Agriculture Organization: Rome, Italy, 2004 [Google Scholar]
- 2.Morgan N, Prakash A. International livestock markets and the impact of animal disease. Rev Sci Tech. 2006;25:517–528 [PubMed] [Google Scholar]
- 3.Anderson I. Foot and Mouth Disease 2001: Lessons to be Learned Inquiry Report. Stationery Office: London, 2002; HC 888 [Google Scholar]
- 4.Elber AR, Stegeman A, Moser H, et al. The classical swine fever epidemic 1997–1998 in The Netherlands: descriptive epidemiology. Prev Vet Med. 1999;42:157–184 [DOI] [PubMed] [Google Scholar]
- 5.Roberts H, Carbon M, Hartley M, et al. Assessing the risk of disease introduction in imports. Vet Rec. 2011;168:447–448 [DOI] [PubMed] [Google Scholar]
- 6.OIE. Terrestrial Animal Health Code, Vol. 1. 2014. Paris, France: World Organisation for Animal Health; Available at: https://www.oie.int/doc/ged/D13850.PDF (accessed July 31, 2015) [Google Scholar]
- 7.MacDiarmid SC, Pharo HJ. Risk analysis: assessment, management and communication. Rev Sci Tech. 2003;22:397–408 [DOI] [PubMed] [Google Scholar]
- 8.Martínez-López B, Perez AM, De la Torre A, et al. Quantitative risk assessment of foot-and-mouth disease introduction into Spain via importation of live animals. Prev Vet Med. 2008;86:43–56 [DOI] [PubMed] [Google Scholar]
- 9.Taylor N. Review of the Use of Models in Informing Disease Control Policy Development and Adjustment: A Report for DEFRA. Veterinary Epidemiology and Economics Research Unit, University of Reading: Reading, United Kingdom, 2003 [Google Scholar]
- 10.WTO. Agreement on the Application of Sanitary and Phytosanitary Measures. World Trade Organization: Geneva, 1995 [Google Scholar]
- 11.Dearfield KL, Hoelzer K, Kause JR. Review of various approaches for assessing public health risks in regulatory decision making: choosing the right approach for the problem. J Food Prot. 2014;77:1428–1440 [DOI] [PubMed] [Google Scholar]
- 12.Peel J. Risk Regulation Under the WTO SPS Agreement: Science as an International Normative Yardstick? Jean Monnet Working Paper 02/04. 2004
- 13.Miller L, McElvaine MD, McDowell RM, et al. Developing a quantitative risk assessment process. Rev Sci Tech. 1993;12:1153–1164 [DOI] [PubMed] [Google Scholar]
- 14.Peeler EJ, Reese RA, Thrush MA. Animal disease import risk analysis—a review of current methods and practice. Transbound Emerg Dis. 2015;62:480–490 [DOI] [PubMed] [Google Scholar]
- 15.Yu P, Habtemariam T, Wilson S, et al. A risk-assessment model for foot and mouth disease (FMD) virus introduction through deboned beef importation. Prev Vet Med. 1997;30:49–59 [DOI] [PubMed] [Google Scholar]
- 16.Asseged B, Tameru B, Nganwa D, et al. A quantitative assessment of the risk of introducing foot and mouth disease virus into the United States via cloned bovine embryos. Rev Sci Tech. 2012;31:15. [DOI] [PubMed] [Google Scholar]
- 17.Bronsvoort BMdC, Alban L, Greiner M. Quantitative assessment of the likelihood of the introduction of classical swine fever virus into the Danish swine population. Prev Vet Med. 2008;85:226–240 [DOI] [PubMed] [Google Scholar]
- 18.De Vos CJ, Saatkamp HW, Nielen M, et al. Scenario tree modeling to analyze the probability of classical swine fever virus introduction into member states of the European Union. Risk Anal. 2004;24:237–253 [DOI] [PubMed] [Google Scholar]
- 19.Hartnett E, Adkin A, Seaman M, et al. A quantitative assessment of the risks from illegally imported meat contaminated with foot and mouth disease virus to Great Britain. Risk Anal. 2007;27:187–202 [DOI] [PubMed] [Google Scholar]
- 20.McElvaine MD, McDowell RM, Fite RW, et al. An assessment of the risk of foreign animal disease introduction into the United States of America through garbage from Alaskan cruise ships. Rev Sci Tech. 1993;12:1165–1174 [DOI] [PubMed] [Google Scholar]
- 21.USDA APHIS Veterinary Services. Risk Analysis for Importation of Classical Swine Fever Virus in Swine and Swine Products from the European Union. USDA National Agricultural Library, 2000 [Google Scholar]
- 22.Delgado J, Pollard S, Snary E, et al. A systems approach to the policy-level risk assessment of exotic animal diseases: network model and application to classical swine fever. Risk Anal. 2013;33:1454–1472 [DOI] [PubMed] [Google Scholar]
- 23.Williams RA, Thompson KM. Integrated analysis: combining risk and economic assessments while preserving the separation of powers. Risk Anal. 2004;24:1613–1623 [DOI] [PubMed] [Google Scholar]
- 24.Joint FAO/WHO Food Standards Programme. Principles and Guidelines for the Conduct of Microbiological Risk Assessment. Codex Alimentarius Food Hygiene—Basic Texts, 2nd ed. Rome, 2001 [Google Scholar]
- 25.Bolger F, Wright G. Assessing the quality of expert judgment: issues and analysis. Decis Support Syst. 1994;11:1–24 [Google Scholar]
- 26.Morgan MG. Use (and abuse) of expert elicitation in support of decision making for public policy. Proc Natl Acad Sci U S A. 2014;111:7176–7184 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27.Estes WK. Research and Theory on the Learning of Probabilities. J Am Stat Assoc. 1972;67:81–102 [Google Scholar]
- 28.Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science. 1974;185:1124–1131 [DOI] [PubMed] [Google Scholar]
- 29.Kahneman D, Tversky A. On the study of statistical intuitions. Cognition. 1982;11:123–141 [DOI] [PubMed] [Google Scholar]
- 30.Harvey N. Confidence in judgment. Trends Cogn Sci. 1997;1:78–82 [DOI] [PubMed] [Google Scholar]
- 31.Klayman J, Soll JB, González-Vallejo C, et al. Overconfidence: it depends on how, what, and whom you ask. Organ Behav Hum Decis Process. 1999;79:216–247 [DOI] [PubMed] [Google Scholar]
- 32.Lichtenstein S, Fischhoff B, Phillips L. Calibration of probabilities: the state of the art to 1980. In: Judgment Under Uncertainty, 1st ed. Kahneman D, Slovic P, Tversky A, (eds.) Cambridge University Press: Cambridge; pp. 306–334; 1982 [Google Scholar]
- 33.McClelland AGR, Bolger F. The calibration of subjective probability: theories and models 1980–94. In: Subjective Probability. Ayton GWP, (ed.) John Wiley & Sons: Oxford, England; pp. 453–482; 1994 [Google Scholar]
- 34.Clemen RT, Winkler RL. Limits for the precision and value of information from dependent sources. Oper Res. 1985;33:427–442 [Google Scholar]
- 35.Aspinall W. A route to more tractable expert advice. Nature. 2010;463:294–295 [DOI] [PubMed] [Google Scholar]
- 36.Wentholt MTA, Cardoen S, Imberechts H, et al. Defining European preparedness and research needs regarding emerging infectious animal diseases: results from a Delphi expert consultation. Prev Vet Med. 2012;103:81–92 [DOI] [PubMed] [Google Scholar]
- 37.Expert Elicitation Task Force White Paper (Draft Version, 2009). 2011. Washington, DC: U.S. EPA; Available at: https://yosemite.epa.gov/sab/sabproduct.nsf/fedrgstr_activites/F4ACE05D0975F8C68525719200598BC7/$File/Expert_Elicitation_White_Paper-January_06_2009.pdf (accessed July31, 2015) [Google Scholar]
- 38.EFSA. Guidance on expert knowledge elicitation in food and feed safety risk assessment. EFSA J 2014;12:278 [Google Scholar]
- 39.Devilee J, Knol A. Software to Support Expert Elicitation: An Exploratory Study of Existing Software Packages. Dutch National Institute of Public Health and Environment: Bilthoven, The Netherlands, 2011; No. 630003001/2011 [Google Scholar]
- 40.Cooke RM. Experts in Uncertainty: Opinion and Subjective Probability in Science. Oxford University Press: Oxford, 1991 [Google Scholar]
- 41.Clemen RT. Comment on Cooke's classical method. Reliab Eng Syst Saf. 2008;93:760–765 [Google Scholar]
- 42. Cooke RM, Goossens LHJ. TU Delft Expert Judgment Data Base. Reliab Eng Syst Saf. 2008;93:657–674.
- 43. Lin SW, Cheng CH. The reliability of aggregated probability judgments obtained through Cooke's classical model. J Model Manag. 2009;4:149–161.
- 44. Rowe G, Wright G. The Delphi technique as a forecasting tool: issues and analysis. Int J Forecast. 1999;15:353–375.
- 45. Kynn M. The ‘heuristics and biases’ bias in expert elicitation. J R Stat Soc Ser A Stat Soc. 2008;171:239–264.
Cite this article as: Miller J, Burton K, Fund J, Self A (2017) Process review for development of quantitative risk analyses for transboundary animal disease to pathogen free territories, BioResearch Open Access 6:1, 133–140, DOI: 10.1089/biores.2016.0046.