Author manuscript; available in PMC 8 December 2017. Published in final edited form as: J Appl Ecol. 2017;54(6):2063–2068. doi: 10.1111/1365-2664.12887

Embracing uncertainty in applied ecology

E.J. Milner-Gulland and K. Shea
PMCID: PMC5722456  NIHMSID: NIHMS851780  PMID: 29225369

Summary

  1. Applied ecologists often face uncertainty that hinders effective decision-making.

  2. Common traps that may catch the unwary are: ignoring uncertainty, acknowledging uncertainty but ploughing on, focussing on trivial uncertainties, believing your models, and unclear objectives.

  3. We integrate research insights and examples from a wide range of applied ecological fields to illustrate advances that are generally underused, but could facilitate ecologists’ ability to plan and execute research to support management.

  4. Recommended approaches to avoid uncertainty traps are: embracing models, using decision theory, using models more effectively, thinking experimentally, and being realistic about uncertainty.

  5. Synthesis and applications. Applied ecologists can become more effective at informing management by using approaches that explicitly take account of uncertainty.

Keywords: harvesting, pest management, epidemiology, conservation, modelling, adaptive management, management strategy evaluation, value of information, decision theory, structured decision-making

Introduction

Environmental managers are constantly required to make difficult decisions in the face of uncertainty, learning from experience and thereby reducing the unknowns in the system. A key role of applied ecologists is to conduct structured, hypothesis-driven research that reduces uncertainty more efficiently and comprehensively than such contingent learning can. A number of typologies of uncertainty in social-ecological systems have been published (e.g. Regan et al. 2002). Here we focus on four sources: process uncertainty (the inherent variation in natural and human systems); observation uncertainty (introduced when attempting to measure quantities, because all social-ecological systems are only partially observable); model, or structural, uncertainty (limitations in our representation of the real world in conceptual or computer models, arising from a lack of understanding of the system); and linguistic uncertainty (lack of clarity or agreement in how uncertainty is conceptualised and expressed).

Given the pervasiveness of uncertainty and the need to make decisions regardless, it can be useful to conceptualise these different sources of uncertainty in terms of whether they are controllable and important (Table 1). “Important” uncertainty has a significant and qualitative effect on management outcomes, and “controllable” uncertainty can be minimised or managed. It is all too easy to focus applied ecological research on uncertainties that are tractable to study, but unimportant or uncontrollable, limiting the relevance of science to management.

Table 1.

The dimensions of uncertainty relevant to resource managers, with examples in each cell.

| Sources of uncertainty | Controllable: Yes | Controllable: No |
| --- | --- | --- |
| Important: Yes | Resource users’ compliance with rules | Environmental change at ecologically relevant scales |
| Important: No | Survival of juvenile stages in long-lived species | Autocorrelation in daily movement patterns |

When faced with uncertainty, applied ecologists may fall into some common traps. These limit our ability not just to appreciate the degree and nature of the uncertainty, but also to plan research to support management. There has been considerable thought about how to avoid these traps in a range of applied ecological research fields (harvesting, conservation, pest and disease management). However, the subsequent advances are underappreciated and underused outside their specific areas. We outline these traps, with examples, and suggest solutions, before highlighting some overarching principles to help applied ecologists deal more effectively with uncertainty.

Dealing with uncertainty: Common traps and how to avoid them

Putting things in the “Too difficult” box: Ignoring the uncertainty

It is easy to ignore uncertainty. For example, saiga antelopes suffered a precipitous population decline from the late 1990s. Following conservation action, reported numbers in one population increased rapidly and consistently. However, these population estimates were based on simple extrapolations of numbers seen in aerial surveys, ignoring biases and uncertainties caused by changes in density and group size; once these uncertainties were properly accounted for, no significant population trend could be detected (McConville et al. 2008).

Ignoring uncertainty: solutions

Power analysis is routinely used in study design to ensure that the expected level of uncertainty is not so high as to render the analysis uninformative. It could be much more widely used in applied ecology, to inform research prioritisation and management action in advance. Field et al. (2004) show that monitoring koalas to reduce uncertainty about trends before investing in conservation action is unjustified. In this case, the species is so valuable that the cost of a type 2 error (concluding that there is no population decline when there actually is) far outweighs the cost of a type 1 error (concluding that there is a population decline, and therefore acting, when there is not). The analysis suggests that the correct action is to ignore uncertainty and act anyway; the crucial point is that this result came from a cost-benefit analysis rather than from simply ignoring the issue.
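To make the logic concrete, here is a minimal simulation sketch of power analysis for trend detection. All values (initial abundance, decline rate, observation error, survey length) are illustrative assumptions, not taken from the koala or saiga analyses; the question is simply how often a real decline would be detected given noisy counts.

```python
# Simulation-based power analysis: what is the probability of detecting a
# 5% annual decline from n years of counts with lognormal observation error?
# All parameter values are illustrative, not from any real monitoring programme.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def detection_power(n_years=10, r=-0.05, cv_obs=0.3, n_sims=2000, alpha=0.05):
    """Proportion of simulated surveys in which a declining trend is detected."""
    years = np.arange(n_years)
    true_abundance = 1000.0 * np.exp(r * years)          # deterministic decline
    sigma = np.sqrt(np.log(1.0 + cv_obs**2))             # lognormal sd for given CV
    detected = 0
    for _ in range(n_sims):
        counts = true_abundance * rng.lognormal(-0.5 * sigma**2, sigma, n_years)
        slope, _, _, p_value, _ = stats.linregress(years, np.log(counts))
        if slope < 0 and p_value < alpha:
            detected += 1
    return detected / n_sims

for cv in (0.1, 0.3, 0.5):
    print(f"CV of counts = {cv:.1f}: power = {detection_power(cv_obs=cv):.2f}")
```

Running the sketch shows how quickly power erodes as observation error grows, which is exactly the information needed before deciding whether monitoring is worth the investment.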

Hoping it doesn’t matter: Acknowledging uncertainty, but ploughing on

Some uncertainty is evident, but is nevertheless ignored in the hope that it may not be important. Many long-term counts of wildlife populations only include the most observable demographic class. For example, estimates of grey seal numbers in UK waters are based on counts of pups on the shore, because other population stages are usually at sea. The uncertainties inherent in this approach were acknowledged, but were felt to be unimportant, and prohibitively expensive to control, while the grey seal population was increasing exponentially. However, increases in pup numbers slowed and became more regionally heterogeneous (SCOS 2007). Different plausible assumptions about density dependence led to estimates of population size that varied by a factor of 2–3 (Lonergan et al. 2011). Simply continuing with the long-term programme of monitoring this species, without investing in independent information on whether the observed changes in pup counts are due to changes in fecundity or mortality, would lead to increasingly unhelpful scientific policy advice.

Ploughing on: solutions

It is possible to set up management specifically to support learning about a system. In some fields of applied ecology (such as pest management and wildlife harvesting) experimental research can be carried out in advance of broad policy implementation. Even if prior experimentation is not possible, it is still possible to integrate research and management via adaptive management (AM) approaches which specifically include a plan for learning (Walters 1986; Shea et al 2002). This allows management actions to be updated based on information gained during management. However, AM is still underused (Keith et al. 2011), and uncertainties still tend to be swept under the carpet, rather than confronted.
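As a concrete illustration of the learning component of AM, the following sketch updates belief weights on two competing models of population response from annual monitoring data, using a simple Bayesian update. The models, observation error and parameter values are invented for illustration; a real AM application would also link the weights back to the choice of action each year.

```python
# Passive adaptive management sketch: two competing models of how a managed
# population responds; belief weights are updated from monitoring data each year.
# Models, likelihoods and parameter values are illustrative only.
import numpy as np

rng = np.random.default_rng(7)

# Competing models: each predicts a different population growth rate under management
model_predictions = {"density_dependent": 0.02, "harvest_limited": 0.10}
weights = {m: 0.5 for m in model_predictions}          # initial belief in each model
obs_sd = 0.05                                          # sd of observed growth rate

true_growth = 0.10                                     # unknown to the manager
for year in range(10):
    observed = true_growth + rng.normal(0, obs_sd)     # one year of monitoring data
    # Bayes update: (unnormalised) likelihood of the observation under each model
    likes = {m: np.exp(-0.5 * ((observed - mu) / obs_sd) ** 2)
             for m, mu in model_predictions.items()}
    total = sum(weights[m] * likes[m] for m in weights)
    weights = {m: weights[m] * likes[m] / total for m in weights}

print("Posterior model weights after 10 years of monitoring:")
for m, w in weights.items():
    print(f"  {m}: {w:.2f}")
```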

Whether AM is worthwhile depends partly on the characteristics of the uncertainty limiting managers’ ability to make decisions. Managing to learn is less productive if the uncertainty is irreducible (e.g. environmental variation), if the system generates very slow feedback, or if decisions are one-off; in such cases some of the wide array of approaches in the decision-theoretic literature may help (e.g. risk analysis or decision trees; Cohrssen & Covello 1999; Rokach & Maimon 2015).

If managers are effectively playing a “one-shot” game, in which prior experimentation and real-time learning are not possible for ethical or practical reasons, setting up a “virtual experiment” within a modelling environment is a powerful approach. Fisheries scientists have developed Management Strategy Evaluation (MSE) as a tool for exploring uncertainties a priori, and as a component of AM in the longer term (Butterworth & Punt 1999). MSE is a framework linking an “operating model”, describing the researchers’ best understanding of system dynamics, with an “observation model” that mimics the observation process to produce estimates of population parameters with associated uncertainty, an “assessment model” that simulates how managers use the information they collect to produce rules, and an “implementation model” that describes the process by which these rules are translated into management actions. This approach has great potential for broader application to conservation and resource management (Bunnefeld et al. 2011).
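The following toy example sketches the MSE loop just described: an operating model for the “true” dynamics, an observation model adding survey error, an assessment model applying a harvest control rule to the noisy index, and an implementation model adding compliance error. The logistic dynamics, the control rule and all parameter values are illustrative assumptions, not any published management procedure.

```python
# Toy Management Strategy Evaluation (MSE) loop. All components and
# parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(1)

K, r = 10_000.0, 0.3            # carrying capacity and growth rate (operating model)
cv_survey, cv_impl = 0.25, 0.15 # observation and implementation error levels

def operating_model(n, catch):
    """True logistic dynamics with environmental noise."""
    growth = r * n * (1 - n / K) * rng.lognormal(0, 0.1)
    return max(n + growth - catch, 1.0)

def observation_model(n):
    """Survey index with lognormal observation error."""
    return n * rng.lognormal(0, cv_survey)

def assessment_and_rule(index):
    """Harvest control rule applied to the noisy survey index:
    take 10% of the estimated stock, reduced linearly below 40% of K."""
    frac = 0.10 * min(1.0, index / (0.4 * K))
    return frac * index

def implementation_model(quota):
    """Realised catch deviates from the quota (e.g. compliance, discards)."""
    return quota * rng.lognormal(0, cv_impl)

n = 0.6 * K
for year in range(50):
    index = observation_model(n)
    quota = assessment_and_rule(index)
    catch = implementation_model(quota)
    n = operating_model(n, catch)

print(f"Final depletion (N/K) after 50 years: {n / K:.2f}")
```

In a full MSE, many replicate simulations and several candidate control rules would be compared on performance measures such as average catch and risk of depletion.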

Fiddling while Rome burns: Focussing on trivial uncertainties

When an obvious problem arises in a social-ecological system, there is a strong urge to “do something” without necessarily evaluating the action’s likely efficacy. For example, many marine turtles are endangered, and a relatively easy management action is to protect eggs in the nest. However, sensitivity analyses of which life-stage changes would most affect population growth indicate that focusing on this part of the life cycle has very little effect for loggerhead sea turtles (Crouse et al. 1987). Instead, improvements in the survival rates of older juveniles and sub-adults are most likely to increase population growth rates; these insights resulted in fishery bycatch policy changes that are credited with saving loggerhead turtles from extinction (Crowder et al. 1995).
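The kind of calculation underlying such conclusions is an elasticity analysis of a stage-structured matrix model. The sketch below uses an invented three-stage matrix (not the published loggerhead parameter values) to show how the proportional contribution of each transition to population growth can be computed.

```python
# Elasticity analysis of a stage-structured matrix model: which vital rates
# most affect the population growth rate (the dominant eigenvalue)?
# The 3-stage matrix (eggs/hatchlings, juveniles, adults) is invented for
# illustration and is not the published loggerhead model.
import numpy as np

A = np.array([
    [0.0,   0.0,  100.0],   # adult fecundity
    [0.005, 0.7,    0.0],   # survival into and within the juvenile stage
    [0.0,   0.05,   0.85],  # maturation into and survival within the adult stage
])

# Dominant eigenvalue (lambda), right eigenvector (stable stage structure, w)
# and left eigenvector (reproductive values, v).
eigvals, right = np.linalg.eig(A)
i = np.argmax(eigvals.real)
lam = eigvals[i].real
w = np.abs(right[:, i].real)

eigvals_l, left = np.linalg.eig(A.T)
j = np.argmax(eigvals_l.real)
v = np.abs(left[:, j].real)

# Sensitivities and elasticities of lambda to each matrix element
sensitivity = np.outer(v, w) / (v @ w)
elasticity = sensitivity * A / lam

print(f"lambda = {lam:.3f}")
print("Elasticity matrix (which transitions most affect population growth):")
print(np.round(elasticity, 3))
```

With numbers of this general shape, the elasticities for juvenile and adult survival dwarf those for fecundity and egg survival, which is the structure of the argument made for loggerheads.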

Conversely, decision-makers may delay action until further information on uncertainties is available from researchers. For example, monitoring programmes may take the place of action to conserve endangered species (Lindenmayer et al., 2013) or action may be postponed beyond the point at which meaningful intervention is possible.

Addressing trivial uncertainties: solutions

Model-based experimentation (sensitivity analyses, structured decision-making, MSE, scenario modelling) is a powerful way to explore which uncertainties are likely to be trivial or uncontrollable and which are crucial to address, before interventions are put in place. Value of Information (VoI) analyses may be used in advance to predict how useful resolving a particular uncertainty would be for the decision at hand, thereby focussing research effort. VoI is widely used in medical research to quantify the likelihood that additional information from a piece of research would change a decision, and the marginal payoff from that change (Yokota & Thompson 2004). It has been used sporadically in the wildlife management literature (e.g. Williams 2001), and more recently in disease management (Shea et al. 2014), but is a powerful tool that deserves far wider acknowledgement (Canessa et al. 2015).
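A minimal sketch of the core VoI calculation (the expected value of perfect information) is given below, for a toy decision table with invented payoffs and hypothesis weights; it compares the expected outcome of acting under current uncertainty with the expected outcome if the true hypothesis could be learned before acting.

```python
# Expected value of perfect information (EVPI) for a toy management decision.
# Rows = candidate management actions, columns = competing hypotheses about the
# system; entries are illustrative management outcomes (invented for this sketch).
import numpy as np

payoff = np.array([
    #  H1     H2     H3
    [ 120.0,  40.0,  60.0],   # action A
    [  80.0,  90.0,  50.0],   # action B
    [  70.0,  60.0, 100.0],   # action C
])
prior = np.array([0.5, 0.3, 0.2])   # current belief in each hypothesis

# Best we can do now: pick the single action with the highest expected payoff.
best_now = (payoff @ prior).max()

# With perfect information we would learn the true hypothesis first,
# then pick the best action for it; average that over the prior.
best_with_pi = (payoff.max(axis=0) * prior).sum()

evpi = best_with_pi - best_now
print(f"Expected payoff acting now:            {best_now:.1f}")
print(f"Expected payoff with perfect info:     {best_with_pi:.1f}")
print(f"Expected value of perfect information: {evpi:.1f}")
```

If the EVPI (or the partial VoI for a specific uncertainty) is small relative to the cost of research, gathering more information is not worth delaying the decision.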

Runge et al. (2011) explored VoI for a range of uncertainties besetting managers of whooping cranes in North America. The species suffered from extremely poor reproduction, but there was considerable disagreement about the cause, and hence about the appropriate mitigation actions. Expert elicitation was used to define multiple competing hypotheses, and partial VoI analyses were conducted to identify which uncertainties most hampered decision-making. The process gave initial management recommendations, helped to prioritise research, and ultimately motivated an adaptive management plan for this endangered bird.

Hubris: Believing your models or rules of thumb are telling you the truth

Solutions that appear “optimal” within our frame of thinking may not actually be optimal for the reasons we imagine. For example, deer managers in Scotland had a rule of thumb of culling 14% of the hinds on their land, which appeared to work relatively well in terms of keeping numbers at an appropriate density. However, the rule only appeared to work because observation errors in counting deer led to underestimation of population size, so the realised cull rate was lower than intended; if managers had actually culled 14% of the true number of hinds, deer density would have declined substantially (Milner-Gulland et al. 2004).
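A simple simulation illustrates how such a rule of thumb can interact with biased counts; the growth rate, degree of under-counting and time horizon below are invented for illustration and are not the Scottish red deer estimates.

```python
# How observation bias changes the realised effect of a "cull 14% of counted hinds"
# rule of thumb. Parameter values (bias, growth rate) are illustrative only.
import numpy as np

r = 0.12             # annual population growth rate in the absence of culling
cull_rule = 0.14     # intended cull fraction, applied to the *counted* population
count_bias = 0.7     # counts detect only 70% of animals (negative bias)

def project(bias, years=30, n0=1000.0):
    """Project the population forward under the proportional cull rule."""
    n = n0
    for _ in range(years):
        counted = bias * n
        cull = cull_rule * counted          # quota set from the biased count
        n = max((n - cull) * (1 + r), 0.0)
    return n

print(f"Final population, culling 14% of biased counts:    {project(count_bias):7.0f}")
print(f"Final population, culling 14% of the true number:  {project(1.0):7.0f}")
```

With these toy numbers the biased counts keep the realised cull below the growth rate, so the rule appears sustainable; applied to the true population, the same rule drives a steady decline.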

It is vital not to succumb to the temptation of believing model answers and forgetting about the “unknown unknowns”, however sophisticated the model used to carry out a priori experiments, and however extensive the model testing. For example, Milner-Gulland et al. (2001) carried out extensive model-based testing of strategies for harvesting saiga antelopes under a range of model structures, and suggested that a robust harvest strategy would be relatively strongly male-biased. Two years later, males had been so heavily and selectively hunted that the few remaining males could not mate with all the females, and a collapse in fecundity occurred, an eventuality not conceived of in even the most extreme of the model tests (Milner-Gulland et al. 2003).

Believing your own assumptions: solutions

Models are only an expression of a researcher’s assumptions and can never replace field-based observation and experimentation. Instead, a synergistic approach is required, in which models are confronted with data to test and refine hypotheses in an iterative process (Hilborn & Mangel 1997). Scenario analysis structures thinking about the future in a way that encourages contemplation of uncertainties and their potential implications. It has been widely used in climate science, but is uncommon in applied ecology; one example is Davies et al.’s (2015) analysis of likely futures for the Indian Ocean tuna fishery.
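As a minimal sketch of scenario analysis, the example below projects one simple harvested-population model under a handful of contrasting, hand-specified futures and compares the outcomes; the scenarios and parameter values are invented, not drawn from the tuna fishery analysis.

```python
# Scenario analysis sketch: project one simple harvested-population model under
# contrasting assumptions about future conditions and compare outcomes.
# Scenario names and parameter values are invented for illustration.
def project(r, K, harvest, years=30, n0=5000.0):
    """Logistic growth with a constant annual harvest."""
    n = n0
    for _ in range(years):
        n = max(n + r * n * (1 - n / K) - harvest, 0.0)
    return n

scenarios = {
    "status quo":             dict(r=0.20, K=10_000, harvest=150),
    "climate-driven decline": dict(r=0.12, K=7_000,  harvest=150),
    "increased demand":       dict(r=0.20, K=10_000, harvest=400),
    "decline + demand":       dict(r=0.12, K=7_000,  harvest=400),
}

for name, pars in scenarios.items():
    print(f"{name:24s} final N = {project(**pars):7.0f}")
```

The point is not the numbers themselves but the spread of outcomes: a harvest level that looks safe under the status quo collapses the population when reduced productivity and increased demand coincide.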

Managing with unattainable, unclear or no objectives: Sidestepping assessment of the impact of uncertainties

Being completely clear about objectives is fundamentally important, yet is often overlooked. Caughley & Sinclair (1994) give the example of the New Zealand government’s rationale for its red deer hunting quota. The stated objective of hunting varied over time from 1920 onwards, but was never clearly spelt out. Thus neither the reasoning behind the assignment of quotas, nor the effectiveness of the management measure, could be evaluated. The benefit of research to reduce uncertainties (e.g. on the role of hunting in reducing population growth rates in the context of environmental variation or habitat trends) was therefore hard to assess. Similar issues have been found with the objectives of harvest management for North American waterfowl (Williams 2012).

The Convention on Biological Diversity includes commitments to reduce the global loss of biodiversity, and has agreed indicators for evaluating progress towards this target. However, the indicators suffer from substantial biases and uncertainties, the target (“to achieve by 2010 a significant reduction of the current rate of biodiversity loss”) was almost certainly unachievable when it was set in 2002, and “significant” was never defined (Butchart et al. 2016). The extent to which different forms of uncertainty impede the ability of policy makers to report meaningful progress against such targets using relevant indicators can be quantified, but little work of this type has yet been done (Nicholson et al. 2012).

Sidestepping uncertainty: solutions

The field of robust decision-making explores how to make decisions that are good enough, given uncertainty, rather than finding optimal solutions that may be less robust to change or error. A range of approaches to setting objectives that are robust to uncertainty is available, including satisficing (Schwartz et al. 2011), bet hedging (Boyce et al. 2002), rules of thumb (Leung et al. 2005) and info-gap theory for extreme uncertainty (e.g. Regan et al. 2005). All of these approaches can be set within a decision-theoretic framework (Shea et al. 1998). Explicitly acknowledging the potential for linguistic uncertainty in objective-setting, so as to expose and resolve it, is also an important step (Shea et al. 2010; Probert et al. 2016).
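The contrast between optimising and robust objectives can be made concrete with a toy decision table, as in the sketch below; the actions, scenarios, payoffs and “good enough” threshold are all invented for illustration.

```python
# Optimal vs robust action choice under uncertainty. Rows = actions,
# columns = plausible scenarios; entries are invented management outcomes.
import numpy as np

payoff = np.array([
    [100.0, 10.0, 95.0],   # action A: high payoff in most scenarios, fails badly in one
    [ 60.0, 55.0, 65.0],   # action B: moderate payoff everywhere
])
weights = np.array([0.4, 0.2, 0.4])   # (uncertain) scenario weights
target = 50.0                         # "good enough" outcome for satisficing

expected = payoff @ weights
print("Expected-value choice:  ", "A" if expected[0] > expected[1] else "B")

worst_case = payoff.min(axis=1)
print("Maximin (robust) choice:", "A" if worst_case[0] > worst_case[1] else "B")

meets_target = (payoff >= target).all(axis=1)
print("Actions meeting the target in every scenario:",
      [name for name, ok in zip("AB", meets_target) if ok])
```

Here the expected-value criterion favours action A, while both the maximin and satisficing criteria favour action B, because A fails badly in one plausible scenario.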

The way forward for tackling uncertainty in applied ecology

Embrace modelling

The power of models as tools for decision-making remains underappreciated (Addison et al. 2013). Typical objections are that models cannot be trusted because they are bound to misrepresent reality, that the issues of concern are so specific that they need to be tackled case-by-case rather than through general frameworks, or that modelling is too difficult or technical. Seeking out collaborators with modelling expertise is useful, but even the simplest conceptual models, which may be no more than a flow chart, can greatly enhance managers’ understanding of the ramifications of uncertainty. Participatory modelling, in which interest groups are brought together to develop a model of the system, first conceptually and then as a computer model, is becoming more accessible due to advances in computer software and visualisation. This process allows groups whose interests may not coincide to reach a common understanding of the underlying processes, uncertainties and assumptions, enabling them to set objectives and explore management options together (e.g. Redpath et al. 2004).

Embed modelling in a decision-making framework

There are numerous approaches to help the applied ecologist make useful decisions, some of which help to address multiple traps (e.g. VoI). Many of these, such as AM and MSE, fall within the general purview of structured decision-making (SDM; Williams et al. 2002). These approaches have been adopted piecemeal into different fields of applied ecology at different times, leading to a lack of appreciation of the rich literature on decision theory. In part, these different approaches arise from different traditions with different preoccupations (e.g. MSE arose via fisheries science, AM via wildlife and natural resource management). All involve stating the objective, identifying possible management actions, and constructing alternative models to explicitly acknowledge key uncertainties.
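In code, that shared skeleton can be made explicit, as in the minimal sketch below: a stated objective, a set of candidate actions, and alternative system models weighted by current belief. All names and numbers are invented for illustration.

```python
# Skeleton of a structured decision: explicit objective, candidate actions,
# and alternative models weighted by current belief. Everything here
# (models, predicted outcomes, weights) is invented for illustration.

# Predicted outcome (e.g. population size in 10 years) of each action
# under each alternative model of how the system works.
predicted_outcome = {
    "no action":      {"model_1": 400, "model_2": 900},
    "reduce harvest": {"model_1": 800, "model_2": 950},
    "habitat work":   {"model_1": 600, "model_2": 1200},
}
model_weight = {"model_1": 0.6, "model_2": 0.4}   # current belief in each model

def objective(outcomes):
    """Stated objective: maximise the expected population size across models."""
    return sum(model_weight[m] * n for m, n in outcomes.items())

scores = {action: objective(outs) for action, outs in predicted_outcome.items()}
best = max(scores, key=scores.get)
print("Expected performance of each action:", scores)
print("Preferred action under current beliefs:", best)
```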

Use models more effectively

Models are particularly valuable in enabling researchers to explore the implications of a range of uncertainties and assumptions. Simple simulation-based model exploration is an underused tool for exploring the potential range of outcomes that different assumptions produce. At a minimum, models can be used to encapsulate what is, and is not, known about a system, as a useful first step in addressing uncertainty. They enable learning through experimentation that would be challenging or impossible in the field (“virtual ecologist” models; Zurell et al. 2010), the testing of experimental methods or hypotheses, and exploration of the ramifications of novel situations (e.g. climate change). They can also be used to examine the effectiveness of proxies and indicators for biodiversity change (Nicholson et al. 2012).
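A minimal “virtual ecologist” sketch is shown below: a simulated multi-species community with a known overall decline is monitored through a partial-coverage, error-prone indicator, to ask whether the indicator tracks the true trend. The community, monitoring design and error levels are invented for illustration.

```python
# "Virtual ecologist" sketch: simulate a multi-species community with a known
# overall decline, then test whether an indicator based on a monitored subset
# of species tracks the true trend. Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(3)

n_species, n_years = 50, 20
# Species-specific annual growth rates: overall mean decline of ~2% per year
growth = rng.normal(-0.02, 0.05, n_species)
abundance = np.empty((n_years, n_species))
abundance[0] = rng.uniform(100, 1000, n_species)
for t in range(1, n_years):
    abundance[t] = abundance[t - 1] * np.exp(growth + rng.normal(0, 0.02, n_species))

# True state: geometric mean abundance index across all species
true_index = np.exp(np.log(abundance / abundance[0]).mean(axis=1))

# Indicator: only 10 monitored species, with 20% CV observation error
monitored = rng.choice(n_species, size=10, replace=False)
obs = abundance[:, monitored] * rng.lognormal(0, 0.2, (n_years, len(monitored)))
indicator = np.exp(np.log(obs / obs[0]).mean(axis=1))

print("Year  true index  indicator")
for t in range(0, n_years, 5):
    print(f"{t:4d}  {true_index[t]:10.2f}  {indicator[t]:9.2f}")
```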

Having an experimental frame of mind

Having an experimental frame of mind is vital when managing systems under uncertainty. There is a continuum from experimentation within a virtual world, prior to implementation (e.g. MSE; Butterworth & Punt 1999), through experimentation in the laboratory but external to the model (e.g. competing harvesting strategies for saiga; Milner-Gulland et al. 2001), to experimentation in the field that informs model development (e.g. active AM; Walters 1986). Different systems need different levels of model-based learning prior to field experimentation or implementation. In some instances (e.g. pest management), field-based experimentation and testing at a reasonable spatio-temporal scale is possible, and modelling is a less critical component of the toolkit. However, even here modelling will become increasingly important as local climates change and the status quo no longer applies (e.g. Teller et al. 2016). In many other cases (e.g. fisheries, conservation), not getting management right first time may have serious implications for human wellbeing or species survival; in these cases experimentation in a model system prior to management intervention is vital. In all cases, models and real-world evidence need to inform each other, allowing better integration of research and management.

Being realistic about uncertainty

Even with the most effective approaches to minimising and managing uncertainty, there will be an irreducible element. It is important to realise that even the best approach to managing for uncertainty may not succeed; this means that a focus on robustness rather than optimality may be appropriate. This realism can also help avoid decision paralysis, in which decisions are needlessly postponed while research is conducted. It does not necessarily require the use of sophisticated models. For example, linguistic uncertainty can be addressed by stakeholders spending time ensuring that they mean the same thing (for example, “pest control” means different things to different people, from outright eradication to maintaining densities below an economic damage threshold), and that they set clear and agreed objectives. Nonetheless, major benefits could be realised by prioritising capacity-building that enables resource managers to take advantage of the types of quantitative tools and approaches outlined here.

It is increasingly being realised that all resource management problems involve a range of stakeholders, and that only through the acknowledgement and inclusion of trade-offs between competing objectives can management have any hope of being sustainable (Redpath et al. 2013). Uncertainty is not, however, always included in these calls for inclusivity; it is important for all parties to realise that uncertainties need to be understood and addressed. Otherwise, hard-won compromises and trade-offs can be derailed when the unexpected happens.

We have illustrated our points using examples from a wide range of applied ecological disciplines: epidemiology, pest management, fisheries and conservation (Table 2). Although we have not carried out a systematic review, our experience is that the tools and approaches highlighted here are not being applied as widely or as frequently as they could be. One of the impediments to improving the treatment of uncertainty in applied ecology is the continued failure to break down disciplinary barriers. This journal is one of the few that explicitly covers the whole range of applied management problems, and thus can act as a forum for cross-fertilisation of ideas. We need to bring the study of the causes, implications and control of uncertainty into the mainstream of the discipline, and ensure that methods such as those discussed here are more broadly applied. This will reduce the power of “uncertainty traps” to catch the unwary.

Table 2.

Uncertainty traps, and methods for addressing them.

| Uncertainty trap | Description | Example | Useful methods |
| --- | --- | --- | --- |
| Ignoring uncertainty: put it in the “too difficult” box | Treating systems as deterministic when uncertainty actually compromises management | Saiga population estimates without confidence intervals have no power to detect change | Power analysis; Value of Information (VoI) analysis |
| Acknowledging uncertainty: plough on | Recognising there is uncertainty but assuming/hoping that it doesn’t make a qualitative difference to management | Monitoring an uninformative life stage for seals because it is too expensive to do otherwise | Manage for learning (adaptive management); virtual experiments (e.g. Management Strategy Evaluation, MSE) |
| Focussing on trivial uncertainties: fiddle while Rome burns | Addressing uncertainties, but not the ones that make the most difference to management outcomes | Nest protection and head-starting turtles when the major issue for population viability is adult survival at sea | Model-based experimentation to highlight key uncertainties (VoI, MSE) |
| Believing models or rules of thumb: hubris | Management accounts for the uncertainties highlighted in models, e.g. through rules of thumb, but without challenging them | Red deer rule of thumb works because it cancels out two uncertainties; model-based experimentation for saiga management fails to account for reproductive collapse | Cycling between field-based experimentation and modelling; scenario analysis to broaden horizons |
| Sidestepping uncertainty: unclear objectives | If objectives are unclear, assessing performance against them is difficult, so when uncertainties cause management inefficiency they are missed | Invasive species management through culling in New Zealand without defined goals; international sustainability goals not SMART | Decision analysis, explicit consideration of trade-offs, rules of thumb, satisficing, stakeholder engagement |

Acknowledgments

KS acknowledges the EEID program of the NSF/NIH (award number 1 R01 GM105247-01) and the NSF RAPID program (award DEB-1514704). Thanks to P. Addison, C. Baker, S. Li, S. Lloyd, A. McKee, M. Runge, L. Russo and Y. Tao for comments.

Footnotes

Authors’ contributions

EJMG and KS conceived and wrote the paper together.

Data accessibility

Data have not been archived because this article does not contain data.

References

  1. Addison PF, Rumpff L, Bau SS, Carey JM, Chee YE, Jarrad FC, McBride MF, Burgman MA. Practical solutions for making models indispensable in conservation decision-making. Diversity and Distributions. 2013;19:490–502.
  2. Boyce MS, Kirsch EM, Servheen C. Bet-hedging applications for conservation. Journal of Biosciences. 2002;27:385–392. doi: 10.1007/BF02704967.
  3. Bunnefeld N, Hoshino E, Milner-Gulland EJ. Management Strategy Evaluation: A powerful tool for conservation? Trends in Ecology and Evolution. 2011;26:441–447. doi: 10.1016/j.tree.2011.05.003.
  4. Butchart SHM, Di Marco M, Watson JEM. Formulating Smart Commitments on Biodiversity: Lessons from the Aichi Targets. Conservation Letters. 2016. doi: 10.1111/conl.12278.
  5. Butterworth DS, Punt AE. Experiences in the evaluation and implementation of management procedures. ICES Journal of Marine Science. 1999;56:985–998.
  6. Canessa S, Guillera-Arroita G, Lahoz-Monfort JJ, Southwell DM, Armstrong DP, Chadès I, Lacy RC, Converse SJ. When do we need more data? A primer on calculating the value of information for applied ecologists. Methods in Ecology and Evolution. 2015;6:1219–1228.
  7. Caughley G, Sinclair ARE. Wildlife Ecology & Management. Blackwell Science; Oxford: 1994.
  8. Cohrssen JJ, Covello VT. Risk analysis: a guide to principles and methods for analyzing health and environmental risks. DIANE Publishing; Washington DC: 1999.
  9. Crouse DT, Crowder LB, Caswell H. A stage-based population model for loggerhead sea turtles and implications for conservation. Ecology. 1987;68:1412–1423.
  10. Crowder LB, Hopkins-Murphy SR, Royle JA. Effects of Turtle Excluder Devices (TEDs) on Loggerhead Sea Turtle Strandings with Implications for Conservation. Copeia. 1995;4:773–779.
  11. Davies TK, Mees CC, Milner-Gulland EJ. Second-guessing uncertainty: Scenario planning for management of the Indian Ocean tuna purse seine fishery. Marine Policy. 2015;62:169–177.
  12. Field SA, Tyre AJ, Jonzen N, Rhodes JR, Possingham HP. Minimizing the cost of environmental management decisions by optimizing statistical thresholds. Ecology Letters. 2004;7:669–675.
  13. Hilborn R, Mangel M. The Ecological Detective: Confronting Models with Data. Princeton University Press; 1997.
  14. Keith DA, Martin TG, McDonald-Madden E, Walters C. Uncertainty and adaptive management for biodiversity conservation. Biological Conservation. 2011;144:1175–1178.
  15. Leung B, Finnoff D, Shogren JF, Lodge D. Managing invasive species: rules of thumb for rapid assessment. Ecological Economics. 2005;55:24–36.
  16. Lindenmayer D, Piggott M, Wintle B. Counting the books while the library burns: why conservation monitoring programs need a plan for action. Frontiers in Ecology and the Environment. 2013;11:549–555.
  17. Lonergan ME, Thompson D, Thomas LJ, Duck CD. An approximate Bayesian method applied to estimating the trajectories of four British grey seal (Halichoerus grypus) populations from pup counts. Journal of Marine Biology. 2011;2011:597424.
  18. McConville AJ, Grachev IuA, Keane A, Coulson T, Bekenov A, Milner-Gulland EJ. Reconstructing the observation process to correct for changing detection probability of a critically endangered species. Endangered Species Research. 2008;6:231–237.
  19. Milner-Gulland EJ, Shea K, Possingham H, Coulson TN, Wilcox C. Competing harvesting strategies in a simulated population under uncertainty. Animal Conservation. 2001;4:157–167.
  20. Milner-Gulland EJ, Bukreeva OM, Coulson TN, Lushchekina AA, Kholodova MV, Bekenov AB, Grachev IuA. Reproductive collapse in saiga antelope harems. Nature. 2003;422:135. doi: 10.1038/422135a.
  21. Milner-Gulland EJ, Coulson TN, Clutton-Brock TH. Sex differences and data quality as determinants of income from hunting red deer. Wildlife Biology. 2004;10:187–201.
  22. Nicholson E, Collen B, Barausse A, Blanchard J, Costelloe B, Sullivan K, et al. Robust policy decisions with global biodiversity indicators. PLoS One. 2012;7(7):e41128. doi: 10.1371/journal.pone.0041128.
  23. Probert WJM, Shea K, Fonnesbeck CJ, Runge MC, Carpenter TE, Dürr S, et al. Decision-making for foot-and-mouth disease control: objectives matter. Epidemics. 2016;15:10–19. doi: 10.1016/j.epidem.2015.11.002.
  24. Redpath SM, Arroyo BE, Leckie FM, Bacon P, Bayfield N, Gutierrez RJ, Thirgood SJ. Using decision modeling with stakeholders to reduce human–wildlife conflict: a raptor–grouse case study. Conservation Biology. 2004;18:350–359.
  25. Redpath SM, Young J, Evely A, Adams WM, Sutherland WJ, Whitehouse A, et al. Understanding and managing conservation conflicts. Trends in Ecology and Evolution. 2013;28:100–109. doi: 10.1016/j.tree.2012.08.021.
  26. Regan HM, Colyvan M, Burgman MA. A taxonomy and treatment of uncertainty for ecology and conservation biology. Ecological Applications. 2002;12:618–628.
  27. Regan HM, Ben-Haim Y, Langford W, Wilson WG, Lundberg P, Andelman SJ, Burgman MA. Robust decision-making under severe uncertainty for conservation management. Ecological Applications. 2005;15:1471–1477.
  28. Rokach L, Maimon O. Data mining with decision trees: theory and applications. World Scientific; Singapore: 2015.
  29. Runge MC, Converse SJ, Lyons JE. Which uncertainty? Using expert elicitation and expected value of information to design an adaptive program. Biological Conservation. 2011;144:1214–1223.
  30. Schwartz B, Ben-Haim Y, Dacso C. What makes a good decision? Robust satisficing as a normative standard of rational decision making. Journal for the Theory of Social Behaviour. 2011;41:209–227.
  31. SCOS. Scientific Advice on Matters Related to the Management of Seal Populations: 2007. NERC Special Committee on Seals; 2007. http://www.smru.st-andrews.ac.uk/documents/SCOS_2007_FINAL_ADVICE_1.pdf
  32. Shea K, the NCEAS Working Group on Population Management. Management of populations in conservation, harvesting and control. Trends in Ecology and Evolution. 1998;13:371–375. doi: 10.1016/S0169-5347(98)01381-0.
  33. Shea K, Possingham HP, Murdoch WW, Roush R. Active adaptive management in insect pest and weed control: intervention with a plan for learning. Ecological Applications. 2002;12:927–936.
  34. Shea K, Jongejans E, Skarpaas O, Kelly D, Sheppard A. Optimal management strategies to control local population growth or population spread may not be the same. Ecological Applications. 2010;20:1148–1161. doi: 10.1890/09-0316.1.
  35. Shea K, Tildesley MJ, Runge MC, Fonnesbeck CJ, Ferrari MJ. Adaptive management and the value of information: learning via intervention in epidemiology. PLoS Biology. 2014;12:e1001970. doi: 10.1371/journal.pbio.1001970.
  36. Teller BJ, Zhang R, Shea K. Seed release in a changing climate: Initiation of movement increases spread of an invasive species under simulated climate warming. Diversity and Distributions. 2016;22:708–716.
  37. Walters C. Adaptive management of renewable resources. Blackburn Press; USA: 1986.
  38. Williams BK. Uncertainty, learning and the optimal management of wildlife. Environmental and Ecological Statistics. 2001;8:269–288.
  39. Williams BK, Nichols JD, Conroy MJ. Analysis and management of animal populations. Academic Press; San Diego, California, USA: 2002.
  40. Williams BK. Reducing uncertainty about objective functions in adaptive management. Ecological Modelling. 2012;225:61–65.
  41. Yokota F, Thompson KM. Value of information literature analysis: A review of applications in health risk management. Medical Decision Making. 2004;24:287–298. doi: 10.1177/0272989X04263157.
  42. Zurell D, Berger U, Cabral JS, Jeltsch F, Meynard CN, Münkemüller T, et al. The virtual ecologist approach: simulating data and observers. Oikos. 2010;119:622–635.
