Conservation Biology. 2022 Jan 28;36(1):e13868. doi: 10.1111/cobi.13868

An introduction to decision science for conservation

Victoria Hemming 1,, Abbey E Camaclang 1, Megan S Adams 1, Mark Burgman 2, Katherine Carbeck 1, Josie Carwardine 3, Iadine Chadès 3, Lia Chalifour 4,1, Sarah J Converse 5, Lindsay N K Davidson 6, Georgia E Garrard 7, Riley Finn 1, Jesse R Fleri 8,1, Jacqueline Huard 1, Helen J Mayfield 9,11, Eve McDonald Madden 9, Ilona Naujokaitis‐Lewis 10, Hugh P Possingham 11, Libby Rumpff 7, Michael C Runge 12, Daniel Stewart 1, Vivitskaia J D Tulloch 1, Terry Walshe 7, Tara G Martin 1
PMCID: PMC9302662  PMID: 34856010

Abstract

Biodiversity conservation decisions are difficult, especially when they involve differing values, complex multidimensional objectives, scarce resources, urgency, and considerable uncertainty. Decision science embodies a theory about how to make difficult decisions and an extensive array of frameworks and tools that make that theory practical. We sought to improve conceptual clarity and practical application of decision science to help decision makers apply decision science to conservation problems. We addressed barriers to the uptake of decision science, including a lack of training and awareness of decision science; confusion over common terminology and which tools and frameworks to apply; and the mistaken impression that applying decision science must be time consuming, expensive, and complex. To aid in navigating the extensive and disparate decision science literature, we clarify the meaning of common terms: decision science, decision theory, decision analysis, structured decision‐making, and decision‐support tools. Applying decision science does not have to be complex or time consuming; rather, it begins with knowing how to think through the components of a decision using decision analysis (i.e., define the problem, elicit objectives, develop alternatives, estimate consequences, and perform trade‐offs). This is best achieved by applying a rapid‐prototyping approach. At each step, decision‐support tools can provide additional insight and clarity, whereas decision‐support frameworks (e.g., priority threat management and systematic conservation planning) can aid navigation of multiple steps of a decision analysis for particular contexts. We summarize key decision‐support frameworks and tools and describe to which step of a decision analysis, and to which contexts, each is most useful to apply. Our introduction to decision science will aid in contextualizing current approaches and new developments, and help decision makers begin to apply decision science to conservation problems.

Keywords: conservation, decision analysis, decision science, decision‐making, prioritization, social science, structured decision‐making, uncertainty, values, análisis de decisiones, ciencias de la decisión, ciencias sociales, conservación, incertidumbre, priorización, toma de decisiones, toma estructurada de decisiones, valores

Short abstract

Article impact statement: An introduction to decision science is provided to aid in conceptual clarity and practical application for conservation decisions.

INTRODUCTION

Effective conservation requires making decisions about when, where, and how to respond to threats affecting species, ecosystems, and the services they provide in a timely, informed, and defensible manner (Martin, Nally, et al., 2012; Possingham et al., 2001; Soule, 1985). These decisions are difficult. Cumulative anthropogenic pressures, compounded by chronic underfunding (e.g., Bottrill et al., 2008; McCarthy et al., 2012; Wintle et al., 2019), are exacerbating rates of species extinction and ecosystem degradation (CBD, 2020; WWF, 2020). Adding to the challenge, conservation decisions are situated in broad social and economic contexts that require the consideration of multiple, often conflicting, values (McShane et al., 2011). However, many involved in these decisions have not been trained to identify and logically consider the interplay of these different factors when identifying and choosing actions (Johnson et al., 2015; Rose et al., 2019; Wright et al., 2020).

Ad hoc decision‐making in conservation has been linked to poor decision outcomes, such as delayed and suboptimal investment in research, monitoring, and action (e.g., Martin, Nally, et al., 2012; Possingham et al., 2012; Wilson et al., 2009). Many decisions have inadequately included those affected by the decisions and overlooked potential trade‐offs, resulting in missed opportunities, disenfranchisement, and failed implementation (e.g., McShane et al., 2011; Turner et al., 2008; Wright et al., 2020). Effective stewardship of species and ecosystems depends on adopting better processes for making decisions.

Decision science structures thinking so that decisions are informed, transparent, and defensible and alternatives identified improve the chance of achieving desired outcomes (Gregory et al., 2012; Possingham et al., 2001; Raiffa, 2002). Decision science embodies a theory (decision theory) that focuses on how to proceed when confronting a decision involving differing values, competing objectives, and uncertainty. Decision science is applied through an array of practical frameworks and tools (McDaniels, 2021; Raiffa, 2002). It has a long history of application to conservation and natural resource problems (e.g., Conroy and Peterson, 2013; Gregory et al., 2012; Maguire, 1986; Possingham et al., 2001). Application has led to conservation outcomes, including informing spatial policies (e.g., protected areas) to better meet conservation objectives and other values (Fernandes et al., 2005; Margules and Sarkar, 2007; Sinclair et al., 2018); identification of cost‐effective conservation actions for recovery of large numbers of threatened species (Brazill‐Boast et al., 2018; Carwardine et al., 2019); and inclusion of cultural, social, and economic values of decision makers, titleholders, and stakeholders (Gregory et al., 2012; Runge et al., 2020; Turner et al., 2008).

However, we posit that widespread application of decision science has been hampered by lack of training and awareness of decision science for conservation (Fuller et al., 2020; Johnson et al., 2015; Wright et al., 2020); confusion over terminology (e.g., decision theory, decision science, decision analysis, structured decision‐making, decision‐support frameworks, and decision‐support tools) and over which tools or frameworks to apply (Bower et al., 2018; Schwartz et al., 2018; Walshe et al., 2019); and the impression that applying decision science is too time consuming, expensive, and complex (Fuller et al., 2020; Rose et al., 2019).

We sought to provide an entry point to decision science to enhance understanding and application and to improve the likelihood of achieving positive conservation outcomes. Based on the peer‐reviewed literature and our own diverse experiences with decision science, we reviewed common challenges to conservation decisions that merit the application of decision science; key terminology, frameworks, and tools to provide conceptual clarity; and the process of applying decision science to conservation problems via decision analysis to enhance capacity of conservation practitioners to confront complex decisions in a feasible way.

We believe beneficial outcomes in conservation arise primarily from knowing how to think through decisions to clarify the decision to be made and the values of importance and to creatively identify and compare how alternatives could meet these values (Keeney, 2004; Raiffa, 2002). The tools can be simple or complex, and the time invested can range from a few minutes to years (Garrard et al., 2017; Gregory et al., 2012). The conceptual clarity and practical strategies outlined in this paper will help practitioners navigate the vast and sometimes disparate decision science literature and to improve the rigor and feasibility of decision science being applied in conservation contexts.

DIFFICULTY OF CONSERVATION DECISIONS

A decision is a choice between alternative options (Howard, 1966). One makes decisions every day, many of which require little thought (Keeney, 2004) (Figure 1). However, conservation decisions are challenging, especially if resources and capacities are limited. Consider the following common conservation questions: What actions should be implemented to protect endangered species and ecosystems within a region? How can a protected area network be designed to effectively conserve whole ecosystems under a changing climate? What actions will protect species and ecosystems while also considering the cultural and livelihood needs of people? How should an action be chosen when uncertainty impedes knowledge of the best action? When should researching a problem stop and implementation of a solution start?

FIGURE 1. A model for how decisions should be made. As suggested by Keeney (2004), out of 10,000 decisions, many (∼9,000) can be made intuitively or have small consequences and do not warrant more thought or application of decision science. The remaining 1,000 decisions are worthy of more thought (challenges in Table 1). Many decisions (∼750) could be improved by simply thinking through the decision consistent with the steps of decision analysis. The remaining decisions (∼250) may require additional analysis, the level of which will be identified by further rapid prototyping of the decision and application of a few simple tools. Very few, typically the most complex decisions (∼50 [0.5%]), will require a full decision analysis and would benefit from more time and resources.

Decisions for questions like these are difficult because they involve multiple value judgments, considerable uncertainty, potentially irreversible consequences, and other challenging characteristics common to conservation decisions (Table 1). These decisions could benefit from more thought and the application of decision science (Figure 1).

TABLE 1.

Description of challenges common to conservation decisions

Challenge Description
Unclear decision problem: It can be challenging to determine what the decision is and who has the authority to make it. As a result, one may focus on the wrong problem, miss important objectives, or design and consider poor or incomplete alternatives. For example, one may assume the main decision is to determine the most effective action to take (i.e., to recover a species) and in doing so narrowly focus decision‐making on effectiveness, without considering how alternatives could be developed to meet other objectives of importance, such as cost and social acceptability.
Complicated governance structures: Decision makers have the authority to act. In many conservation problems, this authority may be shared by multiple decision makers. Often multiple decision makers have overlapping or conflicting mandates. For example, a decision maker tasked with endangered species conservation and one tasked with natural resource management may have different ideas about the problem or the objectives to be achieved. In the most challenging cases, governance is contested, that is, the (potential) decision makers may not agree on who holds the authority over the decision.
Multiple stakeholders and titleholders: Decisions can affect the interests of many interested and diverse people (i.e., titleholders and stakeholders, see “Define the Problem”). If their values are not considered, the alternatives may not address their concerns, and the decision may be contested or rejected.
Differing value judgments: Value judgments are inherent in decision‐making. They are implicit in social and cultural identities and shape the problems one focuses on, the objectives one sets out to achieve, the actions one is willing to consider, how one measures and weighs the achievement of objectives, and how one deals with uncertainty (risk attitude). Value judgments are difficult because people are seldom taught how to identify, discuss, and logically include them in decisions transparently, and because they may differ among decision makers, titleholders, and stakeholders.
Multiple competing objectives: Decisions often have multiple objectives (even when there is a single decision maker), including those that go beyond conventional conservation objectives such as species recovery. For example, important considerations may include species recovery, ecosystem health, ecosystem services, habitat quality and quantity, cost, feasibility, social acceptability, economic effects, equity in all its forms, and cultural values. When these objectives conflict, decisions will be more difficult.
Intangible objectives: Intangible objectives are those that are difficult to measure or quantify, such as ecosystem functioning and biodiversity resilience, as well as many social, cultural, and spiritual objectives. They can be vitally important to the decision, and it is necessary to consider them alongside more easily quantified objectives, such as species abundance and cost.
Scarce resources: Resources (e.g., time, staff capacity, money, space) available for conservation are often limited, requiring consideration of how to best allocate resources to achieve objectives.
Complex alternatives: In complex ecological decisions, the range of possible alternative actions is often very large and multifaceted.
Irreversible consequences and tipping points: Conservation decisions sometimes involve tipping points between different system states or irreversible outcomes to be avoided. For example, many decisions involve imperiled species and ecosystems for which a negative outcome could lead to extinction.
Uncertainty: Uncertainty is ubiquitous in decision‐making. Its presence means one may not be sure what the problem is, what alternatives could be implemented, or their efficacy. Lack of data and understanding of ecological processes in conservation are major causes of uncertainty. Uncertainty can lead to difficult choices between delaying decisions to collect more data versus implementing a decision while there are still time and resources to act.
Risk: When uncertainty cannot be resolved, it can create a difficult choice between alternatives (i.e., weighing the chance of an uncertain, excellent outcome against a certain but less beneficial outcome). Making a good choice requires characterizing the risk and understanding the risk attitudes of the decision maker and all those affected.
Cognitive biases: Many decisions are made intuitively relying on mental shortcuts (i.e., heuristics). Heuristics can be helpful for small everyday decisions; however, for more complex decisions, they can lead to poor intuition (i.e., cognitive biases), such as overconfidence, outcome bias, and confirmation bias, which can result in poor judgments and poor decisions.

DEFINITION OF DECISION SCIENCE

Decision science is the field of research that focuses on decision‐making (Figure 2). It consists of theories, frameworks, and tools for informing decisions (e.g., Kleindorfer et al., 1993; Morgenstern and Von Neumann, 1953; Raiffa, 2002). It arose to support decisions involving uncertainty, multiple values, and other challenges (Table 1) and integrates many fields, including operations research, economics, mathematics, risk analysis, philosophy, and psychology (Gregory et al., 2012; McDaniels, 2021; Raiffa, 2002).

FIGURE 2. A conceptual overview of decision science and the relationship between key terms. Prescriptive decision theory, which combines insights from normative and descriptive decision theory, guides decision analysis (see “Decision theory”). Pr, problem; O, objectives; A, alternatives; C, consequences; T, trade‐offs; D, deciding and implementing; M, monitoring; Pr, O, A, C, and T precede D and M. Decision‐support tools provide insight at each component; decision‐support frameworks help to step through multiple components (see “Decision‐support frameworks and tools”).

The main branch of decision science, decision theory, focuses on cases in which there are one or more decision makers with the authority to make the decision who can engage in a full, open, and truthful exchange (i.e., a single decision‐making body). Game theory (Colyvan et al., 2011), negotiation analysis (Sebenius, 2007), and conflict resolution (Redpath et al., 2013) extend to cases with multiple, possibly competing, decision makers or contested objectives. We did not consider these here (but see Appendix S1).

Decision theory

Decision theory is the theory of how people do, and can, make better decisions (Figure 2). A better decision is one that is informed, transparent, logically coherent, and identifies alternatives that are more likely to achieve values of importance (i.e., objectives). Decision theory can be divided into 3 interconnected areas of inquiry (Bell et al., 1988). Normative theory provides ideas for how people should inform their decisions. It suggests decisions should be guided by the consequences of each alternative and the value decision makers place on those consequences. These can be combined to create a measure of desirability (i.e., value or utility) for each alternative (Appendix S1). The alternative with the highest desirability should be preferred (Keeney, 1982).
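For a decision under uncertainty with a single objective, this normative prescription is often written as an expected utility calculation. The formula below is a standard textbook formulation shown for concreteness (expected utility theory is described further in Appendix S1); the notation is generic rather than taken from the article:

$$\mathrm{EU}(a) = \sum_{s \in S} p(s)\, u\big(c(a, s)\big)$$

where S is the set of possible states of the world, p(s) is the probability of state s, c(a, s) is the consequence of choosing alternative a when state s occurs, and u(·) is the decision maker's utility for that consequence. Normative theory prescribes choosing the alternative with the highest expected utility.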

Descriptive theory (i.e., behavioral theory) examines how people actually inform their decisions. It finds that people rarely behave as normative theory assumes they should, partly because of mental shortcuts under uncertainty (cognitive heuristics and biases, such as overconfidence bias and the availability and status quo heuristics) and partly because many real‐world complexities (Table 1) can make the application of normative theories challenging (Hammond et al., 1998; Kahneman et al., 1982; Raue and Scholl, 2018).

Prescriptive theory combines insights from normative and descriptive theories (Figure 2) to develop practical frameworks, tools, and advice to help decision makers improve the chance of achieving desired outcomes. We focused on the prescriptive frameworks, tools, and advice that support the application of decision science to conservation decisions.

Decision analysis

Decision analysis has a range of meanings (McDaniels, 2021; Raiffa, 2002). Most often, it refers to a process for decomposing decisions into their key components to support the application of normative decision theory to real decisions (Howard, 1966). The formulation and order of the steps differ among practitioners and applications; however, the basic components are represented by the acronym PrOACT (Hammond et al., 2015) (Figure 3): define the Problem, elicit Objectives, develop Alternatives, estimate Consequences, and evaluate Trade‐offs.

FIGURE 3. Decision analysis (commonly referred to as structured decision‐making) follows the PrOACT steps (steps 1–5) to help inform decisions. Once a decision is made (step 6), monitoring is often used (step 7) to evaluate the outcomes of the decision or to continue to learn about the consequences (link between 7 and 4) or the problem (link between 7 and 1) (dashed arrows, process is often iterative and return to a previous step may be needed as new information is obtained; white boxes, decision‐support tools available for a step). Appendix S1 describes these tools and provides useful references for their application. Figure adapted from Garrard et al. (2017).

In conservation, decision analysis centered around these components (Figure 3) is often referred to as “structured decision‐making” (Gregory et al., 2012; Ralls and Starfield, 1995; Runge et al., 2020). The two terms are often used interchangeably, although structured decision‐making is also used by some practitioners specifically to refer to a framework for facilitating decision analysis that emphasizes group deliberation, is often centered around multi‐criteria decision analysis, and incorporates steps for monitoring and evaluation (Gregory et al., 2012).

For most decisions, rapidly iterating through the steps of a decision analysis (Figure 1) helps promote clear thinking about the nature of the problem, the aspects of the decision that pertain to what one might do (alternatives), the possible outcomes of decisions (consequences), and how one values those outcomes (objectives and trade‐offs). At each step, a range of decision‐support tools are available (Figure 3). For more complex decisions, uncertainty, risk, and links between decisions may need to be considered to solve the decision problem (Hammond et al., 2015). In conservation, monitoring and learning can be vital components and extensions of the decision process (Figure 3).

Decision‐support frameworks and tools

In conservation and natural resource management, an extensive selection of decision‐support tools and decision‐support frameworks is available to support decision‐making. Decision‐support frameworks are structured approaches that guide decision makers from problem formulation to action, monitoring, and reporting (Figure 2) (Bower et al., 2018; Schwartz et al., 2018). Decision‐support tools help provide insight for individual components of a decision (Walshe et al., 2019) (Figure 2). A useful distinction is that decision‐support tools and frameworks, broadly defined, may include approaches from outside the field of decision science that are designed to support other aspects of decision‐making, such as project planning and implementation (e.g., conservation standards [CMP, 2020]), rather than the application of decision theory to inform choices between alternative actions. Although these approaches are beyond the scope of this article, they form part of the broader toolbox (i.e., “systematic conservation decision‐making” [Bower et al., 2018]) and provide support for translating conservation science into action (i.e., implementation, engagement, accountability, and monitoring). As such, there are opportunities to explore complementarities between these approaches and those from decision science to help achieve beneficial conservation outcomes (CMP, 2020; Runge and Bean, 2020; Schwartz et al., 2018).

With these definitions in mind, decision analysis can also be viewed as a decision‐support framework that helps inform choices between alternatives by facilitating group deliberations as required and drawing on approaches consistent with decision theory to help identify solutions (Figure 2) (Runge et al., 2020, 2013). Table 2 provides an overview of some common decision‐support frameworks for a range of challenging contexts in conservation (noting others exist, e.g., Pannell et al., 2012). Each of these frameworks largely follows the steps of a decision analysis, with variations to suit specific contexts (Appendix S1). Thus, learning to decompose decisions with decision analysis is a widely applicable skill for choosing and applying these frameworks.

TABLE 2.

Examples of decision‐support frameworks that help with decision analysis for a range of conservation problems or contexts*

Framework Description
Project prioritization protocol (PPP) (Joseph et al., 2009) Which species‐specific projects are most cost‐effective?
A form of decision analysis developed to help with resource allocation decisions (Table 4), where the decision is to decide which species‐focused projects to invest in, with the broad objective of maximizing the number of species recovered for a given budget. It may be adapted to address other conservation objectives. Alternatives include all the possible portfolios of the recovery projects, each comprising the set of actions required to manage a particular species. Evaluating the consequences involves estimating the cost of implementing each project, the feasibility of implementation, and the expected benefits. Cost‐effectiveness analysis is used to rank projects. Projects are often selected by applying constrained ranking until the budget has been exhausted, an approach (or algorithm) that provides an approximation to the optimal solution.
Priority threat management (PTM) (Carwardine et al., 2019) Which actions recover the most species within a region for the least cost?
A form of decision analysis to help with resource allocation decisions (Table 4), where the decision involves prioritizing actions to manage threats for the protection and recovery of multiple species and ecosystems at a regional scale. The broad aim is to identify feasible combinations of actions (i.e., portfolios) that maximize persistence of multiple species per dollar spent. The approach identifies critical threats to species then uses group deliberative approaches to design actions for each threat. It deploys cost‐effectiveness analysis and complementarity analysis to sequentially identify collections of alternative actions that recover the most species or ecosystems per dollar invested.
Systematic conservation planning (SCP) (Margules and Sarkar, 2007) What collection of spatial areas can meet conservation and other objectives?
A framework that draws on decision theory to inform spatial planning problems, often used for resource allocation problems (Table 4). This framework aids in locating, configuring, and implementing spatially explicit management alternatives (e.g., protected areas, reserve designs, management areas, or policy changes). Objectives are first phrased as qualitative goals and include conservation and socioeconomic concerns, as well as reserve design features (i.e., comprehensive, adequate, representative). Goals are linked to quantitative hard or soft targets. Alternatives represent sets of spatial areas that are sequentially selected to best meet the objectives. The development of alternatives is aided by tools for spatial conservation prioritization (often confused for the broader framework). Alternatives may be further compared and refined with multi‐criteria decision analysis and deliberation.
Adaptive management (AM) (Williams et al., 2009) How can one manage and learn under uncertainty?
A form of decision analysis for recurrent decisions in which uncertainty impedes the choices of action and learning during early decisions can improve later decisions (uncertainty and information problems in Table 4). Adaptive management aims to reduce uncertainty by iterating through the decision analysis, implementing management alternatives, monitoring the response of the system related to the key uncertainties, and updating management strategies accordingly (single‐loop learning, link 4–7, Figure 3). It can also be used to reduce uncertainty related to problem framing and objectives (double‐loop learning) or the context of the decision (triple‐loop learning) (link 7–1, Figure 3) (Runge et al., 2013). The term has evolved to refer to almost any framework for recurrent decisions in which learning is used to update the understanding of the system dynamics, the alternatives available, the objectives being sought, or governance structure (McFadden et al., 2011; Williams et al., 2009).
*Refer to the supporting information for a detailed overview of each of these frameworks.

Selecting the most appropriate decision‐support tool depends on the nature of the decision and the resources available and can be aided by knowing the decision analysis stage in which certain tools may be most helpful (Figure 3; descriptions and citations for each tool are in Appendix S1). Qualitative tools often help structure the problem; these include conceptual models, stakeholder mapping, and the Delphi technique (an approach to structuring discussion between individuals). Quantitative tools, including quantitative models, structured expert elicitation, multi‐criteria decision analysis, and multi‐objective programming, are often used to estimate consequences and evaluate trade‐offs.

APPLYING DECISION ANALYSIS

Improving the chance of good outcomes for difficult conservation decisions (challenges listed in Table 1) arises from first knowing how to think through decisions with the foundational concepts of decision theory (Keeney, 2004; Raiffa, 2002; Smith, 2020a). Although this entails more careful consideration of the decision problem, most decisions are improved by working through the components of a decision analysis rapidly with the information at hand (Keeney, 2004) (Figure 1). This rapid‐prototyping process helps build an initial frame of the decision problem, identify potential obstacles or knowledge gaps early, and provide the opportunity to make changes that improve the analysis in subsequent iterations (Garrard et al., 2017). Many decisions do not require analysts to cycle through all components and few will require a full decision analysis (Figure 1). Carefully thinking about the problem, important objectives, and possible alternatives can lead to sufficient clarity for a decision to be made (Keeney, 2004). For more complex decisions, the prototype will help identify whether subsequent iterations, tools, or frameworks are useful and ensure that the right amount of time is spent on the decision. Rapid prototyping through a decision analysis can be applied to a broad array of decisions, largely undertaken with some critical thinking, without relying on too many tools. In each step, there are potential obstacles and useful tools if subsequent iterations are required. In the following sections italic type identifies tools or sets of tools for which references and descriptions are in Appendix S1.

Define the problem

A common mistake when making decisions is to frame the problem too narrowly or attempt to solve vaguely defined problems (Keeney, 2004; Raiffa, 2002; Smith, 2020a). Problem framing (Figure 3, step 1) is where the decision is defined to ensure the right questions are asked and all parties involved agree on the decision. First a “decision sketch” or summary of the key characteristics of the decision is developed (Garrard et al., 2017; Gregory et al., 2012). Ideally, this sketch is undertaken with the information available, without getting too caught up in the details. Table 3 provides a selection of useful problem framing questions. Although the answers to all questions may not be known, the initial sketch can be refined in subsequent stages.

TABLE 3.

Useful problem framing questions (adapted from Converse and Grant [2019] and Smith [2020a])

Decision component: Who needs to be involved?
Who are the decision makers and under what authority do they act?
Who else needs to be involved or considered in the analysis and what are their values?

Decision component: What is the problem?
What needs to be decided?
What is the spatial scale and temporal scale of the decision?
What is the trigger for the decision?
Why does the decision matter?
What is stopping the decision from being made?
What constraints need to be considered? Are they real or perceived?
What are the decision makers trying to achieve?
What are the key uncertainties?
What are the linked decisions?

Decision component: How should the decision be made?
When does a decision need to be made by?
What is the legal and regulatory context that guides the decision?
What resources are available to investigate and then implement the decision?
What deliverable is required from the decision process?
What analytical methods and tools might be needed?

Problem framing also includes identifying who may need to be involved in a decision (Appendix S1). They include the decision makers (those who have the authority to act), titleholders (those with unique rights and protocols for consultation in a decision [e.g., Indigenous groups]), stakeholders (those affected by or who may affect a decision), facilitators, analysts, and technical advisors or subject matter experts. Although decisions are ultimately the responsibility of decision makers, the analysis and success of implementation can be improved by considering the views and values of those who may be affected by the decision (titleholders and stakeholders) (e.g., Bennett et al., 2019; Gregory et al., 2012; Sarkar and Illoldi‐Rangel, 2010).

Useful decision‐support tools for problem framing include (Figure 3 & Appendix S1): futures tools to help identify unforeseen decision contexts and possible and preferred alternative futures; conceptual models to describe the conceptual understanding of the problem; stakeholder mapping to identify who needs to be included or considered in a decision; and the Delphi technique to facilitate discussion among individuals to avoid group biases and myopic thinking. Existing information describing the context such as threat assessments, status assessments, and spatial data can also be useful. Time should not be spent at this stage trying to collect new data. Rather, information needs should be documented and the analysis should progress to the next step. Although this may be uncomfortable, the rapid prototyping process allows a decision to be made without obtaining further information (McDonald‐Madden et al., 2010; Runge et al., 2011) or identification of critical information needs for the decision to be resolved (Nichols and Williams, 2006; Reynolds et al., 2016).

With practice it is possible for analysts to begin to recognize the key impediments to the decision at the problem‐formulation stage; these impediments can indicate decision classes that the decision most likely falls into. Decisions in the same decision class often follow the same structure and use similar tools (Runge and Bean, 2020). Table 4 lists six decision classes commonly encountered in conservation and natural resource management (Runge et al., 2020). Although the list is not definitive, and many real problems are often hybrids of challenges, there are often 1–2 key impediments to a decision, and identifying these impediments facilitates identification of the decision class (Table 4). Identifying the key decision class or classes during the initial prototype can help identify possible tools, frameworks, or previous successful approaches that may be useful to apply if subsequent iterations and analysis are required.

TABLE 4.

Classes of decisions often encountered in conservation and natural resource management (adapted from Runge et al. [2020])

Problem structuring. Key impediment: taking a complex problem and decomposing it into tractable components (applies to most decisions and is usually achieved through the problem, objectives, and alternatives steps). Useful resources: Gregory et al., 2012; Hammond et al., 2015; Smith, 2020a.

Multi‐objective problems. Key impediment: making trade‐offs among multiple objectives (involves choosing a single alternative from a small set of discrete alternatives [multi‐criteria decisions] or an alternative from a large or implicitly defined [i.e., continuous] set of alternatives [multi‐objective optimization or programming problems]; see “Evaluate trade‐offs”). Useful resources: Converse, 2020; Gregory et al., 2012; Williams and Kendall, 2017.

Resource allocation problems. Key impediment: choosing the best collection of actions from a large number of possible combinations, often while considering resource constraints (see “Evaluate trade‐offs”; includes budget allocation problems, reserve design problems [spatial allocation], portfolio problems, and prioritization problems; these are often multi‐objective but can include single‐objective problems and are the focus of the project prioritization protocol, priority threat management, and systematic conservation planning decision‐support frameworks [Table 2]). Useful resources: Lyons, 2020.

Uncertainty and information problems. Key impediment: uncertainty impedes the choice of action, and an early decision is whether to take action in the face of uncertainty or delay action to collect more information (typically associated with value of information problems [see “Evaluate trade‐offs”], monitoring design, and research proposals [see “Monitor”]; if uncertainty is worth learning about and the decision is recurrent, then adaptive management [Table 2] can be used to design monitoring and management to learn about the uncertainty while implementing ongoing actions). Useful resources: Canessa et al., 2015; Runge et al., 2011; Smith, 2020b.

Risk problems. Key impediment: decisions need to be made in the face of uncertainty that cannot be practically reduced (see “Evaluate trade‐offs”; often handled with tools from the field of risk analysis). Useful resources: Burgman, 2005; Runge and Converse, 2020.

Linked decisions. Key impediment: decisions are linked through space or time, and choices made in one decision affect the choices or outcomes of another decision (solving these decisions requires identifying the individual decisions to be made and how the outcomes are realized over the whole series of actions; see “Evaluate trade‐offs” for useful tools). Useful resources: Runge, 2020.

Elicit objectives and performance measures

Objectives (Figure 3, step 2) define what is valued in a decision, for example, to maximize species persistence (i.e., higher is better), maximize social acceptance (i.e., higher is better), and minimize cost (i.e., lower is better) (Table 4). Performance measures (often called attributes) assess the extent to which the objectives are achieved under different alternatives. For example, cost might be measured in dollars or in staff time, whereas species recovery may be measured as probability of persistence or a change in abundance (Failing and Gregory, 2003) (Table 5).

TABLE 5.

A consequence table for a decision involving the removal of introduced cows from Gabo Island, Australia (Walshe and Hemming, 2019), in which no single alternative outperforms all others across all objectives

| Alternative | 1. Penguin population (active nests; maximize): best (lower–upper) | 2. Visitor experience (qualitative scale; maximize): best | 3. Management costs (AUD; minimize): best |
|---|---|---|---|
| 1. Retain cows (status quo) | 20,000 (12,500–25,000) | 1: cows present; weeds uncontrolled (Low) (b) | 76,000 (a) |
| 2. Remove cows | 16,500 (b) (10,000–23,500) | 2: cows absent; weeds uncontrolled (Medium) | 81,000 |
| 3. Remove cows + prescribed burning | 20,000 (7,500–30,000) | 3: cows absent; weeds controlled (High) (a) | 783,000 (b) |
| 4. Remove cows + spray | 21,000 (a) (14,500–30,000) | 3: cows absent; weeds controlled (High) (a) | 410,000 |

(a) Best performing alternative on an objective. (b) Worst performing alternative on an objective.

Setting objectives and performance measures involves value judgments about what is important. Different people and organizations hold different ideas of importance, and, in different contexts, the objectives and their importance might change. Good decisions therefore depend on thoughtful consideration of the objectives and performance measures. The process of identifying objectives and performance measures entails the following substeps (Appendix S1): first, identify what is important to achieve in a decision (i.e., objectives, values); second, rephrase values into succinct statements about what is to be achieved or avoided (e.g., maximize species persistence); third, separate fundamental objectives (those of primary importance) from means objectives (the means of achieving fundamental objectives); fourth, clarify what is intended by each objective; and fifth, define the performance measure for each objective (e.g., dollars for cost, number of distinct species for biodiversity).

Problems that commonly arise when setting objectives and performance measures include failing to identify fundamental objectives (i.e., assuming actions are the fundamental objectives); framing objectives too narrowly, excluding or ignoring values that cannot be easily quantified (e.g., cultural, social, and spiritual values); selecting inappropriate performance measures (i.e., those that are not sensitive or meaningful to the decision); selecting proxy measures and constructed scales when a direct measure is available; and phrasing objectives as constraints (targets, thresholds, or goals) too early in the process (Failing and Gregory, 2003; Game et al., 2013; Gregory and Keeney, 2002). Working through the substeps in more detail, as outlined in Appendix S1, will help to avoid many of these problems. Common tools to aid the process include (Figure 3 & Appendix S1): brainstorming, the Delphi technique, means‐ends diagrams (working backward from ends to means), and objective hierarchies.

Develop alternatives

Alternatives (Figure 3, step 3) refer to the options among which the decision maker can choose to achieve the objectives. They may consist of single actions or multiple actions combined into strategies (i.e., collections of actions) and may be discrete (e.g., restore, protect, eradicate) or a range of continuous values (e.g., size of a protected area, amount of time to spend monitoring, rate of resource harvest). Common cognitive traps are to assume that existing or current actions are the only or the best options (“status quo bias”) or that the alternatives offered prior to the analysis are the only alternatives that can meet the objectives (“either–or bias”) (Gregory and Keeney, 2002; Hammond et al., 1998). Creative thinking affords the best opportunity to overcome these cognitive traps to uncover novel and more effective decision alternatives.

Initially, it is useful to retain all suggested alternatives and avoid elimination of new ideas simply because they are unfamiliar. Useful tools at this step include (Figure 3 & Appendix S1): conceptual models, brainstorming, means‐ends diagrams, Delphi technique, and strategy tables.

A good starting point for generating alternatives might include a mix of the following (Gregory and Keeney, 2002): alternatives developed to best meet a single objective or subset of objectives; a business‐as‐usual (or status quo) alternative; no‐action alternative (imagining that all current funding and programs are removed); and speculative or extreme alternatives that can be used to understand the bounds of what could be achieved (plausible and best‐ and worst‐case scenarios).

The list of alternatives may be further improved by subsequent creative thinking. For example, an alternative that performs well on a single objective could be adjusted to better meet other objectives, possibly revealing an alternative that meets all objectives (Converse and Sipe, 2021). Table 5 provides an example of four alternatives to meet three objectives of importance (from Walshe and Hemming, 2019). Prior to the analysis, only alternatives 1 and 2 had been considered by decision makers, largely because one of the objectives (visitor experience) had been submerged in their thinking. Explicit capture and description of fundamental objectives stimulated exploration of the additional two alternatives.

Estimate consequences

Consequences (Figure 3, step 4) are the estimated performance of each alternative against each objective. For example, what population size is expected if alternative 1 is implemented (Table 5)? Data‐driven methods (e.g., quantitative models) are often deployed at this step. However, decision makers and analysts are encouraged first to use a simple tool to determine whether the decision can be made without further analysis. For example, sometimes even sketching a simple conceptual model can help convey the possible outcomes of decision alternatives or rule out implausible ones. Alternatively, initial estimates may be made by those developing the prototype; the accuracy of these estimates can be improved by adopting steps from structured expert elicitation (Hemming et al., 2018; Martin, Burgman, et al., 2012).

If the initial rapid prototype does not provide a clear solution, subsequent iterations applying additional decision‐support tools may help. Here, quantitative models can simulate and predict the effect of alternative actions on species, ecosystem, and socioeconomic values, whereas methods from structured expert elicitation and evidence synthesis can help synthesize multiple lines of evidence and inform parameters and model structures (Figure 3 & Appendix S1).

Estimating the consequences of proposed alternatives involves predictions about future events and often untried management actions. Transparency about the source of estimates is critical. Documenting the basis of evidence, unresolved uncertainties, and expected theories of change via conceptual models can aid this process. Quantifying uncertainties can help identify actions that are robust to those uncertainties. This may be as simple as providing upper and lower credible bounds on a point estimate (e.g., objective 1, Table 5) or a degree of belief about an event (Hemming et al., 2018; Tulloch et al., 2015). For a useful primer on decision uncertainty, refer to Gregory et al. (2012).

Placing estimates in a consequence table (Table 5) can help clearly describe the performance of each alternative against each objective, an advantage over common pros‐and‐cons lists typically used to compare alternatives without explicit reference to the objectives (Appendix S1). A quick glance across a consequence table can make the key trade‐offs impeding the decision explicit and may be sufficient to identify a preferred alternative without any further analysis.

Evaluate trade‐offs

Identifying a preferred alternative often involves consideration of trade‐offs. The extent to which a loss in one objective is compensated by a gain in another varies from person to person. Dealing with trade‐offs therefore requires balancing consequences and values across objectives (multiple objective trade‐offs) (Gregory and Keeney, 2002). There may also be other trade‐offs involved in reducing uncertainties, managing risks, or dealing with linked decisions.

Understanding the key trade‐offs faced in a decision can further help to identify possible decision classes (Table 4), decision‐support tools (Figure 3 & Appendix S1), and decision‐support frameworks (Table 2) if subsequent iterations are required. In this section, we have identified common types of trade‐offs involved in choosing a preferred alternative.

Many decisions involve trade‐offs among multiple objectives (i.e., multi‐objective problems [Table 4]), for example, between maximizing species persistence, maximizing consumptive benefits (e.g., natural resources), and minimizing cost. The tools one uses depend on whether one is choosing a single alternative from among a small set of discrete alternatives (multi‐criteria decisions, a subset of multi‐objective problems [Table 4]) or an alternative from a large or implicitly defined (e.g., continuous) set of alternatives (Converse, 2020).

In multi‐criteria decisions, the alternatives are often compared in a consequence table. The consequence table can simplify the problem by revealing the performance of each alternative against each fundamental objective and identifying irrelevant objectives and dominated alternatives (Gregory et al., 2012). Irrelevant objectives are those for which the expected performance of alternatives does not vary meaningfully. Dominated alternatives are those that perform the same as or worse than another alternative across all objectives. For example, in Table 5, alternative 3 performs no better and at times worse than alternative 4 across all objectives (it is dominated by alternative 4). Removing alternative 3 from further consideration simplifies the decision space. Deliberative trade‐off techniques can help facilitate this process and identify the preferred alternative. If the trade‐offs are still difficult to navigate, then techniques from multi‐criteria decision analysis are commonly used (Converse, 2020; Gregory et al., 2012). These approaches weight objectives according to the decision maker's values, resulting in a measure of overall value for each alternative. Objective weights can also be elicited from titleholders and stakeholders to identify important trade‐offs (Walshe and Slade, 2020). In contrast, multi‐objective optimization (or programming) problems (Converse, 2020; Williams and Kendall, 2017) often draw on multi‐objective programming tools to help identify a solution (Converse et al., 2011).
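As an illustration of the kind of calculation involved, the sketch below applies simple additive weighting (one common multi‐criteria decision analysis technique) and a dominance check to the best estimates in the Table 5 consequence table. It is a minimal example in Python, not an implementation used in the original analysis; the objective weights are hypothetical placeholders that would, in practice, be elicited from the decision maker, and the visitor‐experience values reuse the table's constructed 1–3 scale.

```python
# Minimal sketch: simple additive weighting plus a dominance check for the
# Gabo Island consequence table (Table 5). Weights are hypothetical.

# Best estimates per alternative for each objective (visitor experience uses
# the constructed 1-3 scale from Table 5: 1 = Low, 2 = Medium, 3 = High).
consequences = {
    "Retain cows (status quo)":         {"penguins": 20000, "visitors": 1, "cost": 76000},
    "Remove cows":                      {"penguins": 16500, "visitors": 2, "cost": 81000},
    "Remove cows + prescribed burning": {"penguins": 20000, "visitors": 3, "cost": 783000},
    "Remove cows + spray":              {"penguins": 21000, "visitors": 3, "cost": 410000},
}
directions = {"penguins": "max", "visitors": "max", "cost": "min"}
weights = {"penguins": 0.5, "visitors": 0.2, "cost": 0.3}  # hypothetical; elicit in practice


def is_dominated(alt, all_alts):
    """True if another alternative is at least as good on every objective and
    strictly better on at least one, given each objective's direction."""
    at_least = lambda x, y, d: x >= y if d == "max" else x <= y
    strictly = lambda x, y, d: x > y if d == "max" else x < y
    for other in all_alts:
        if other is alt:
            continue
        if all(at_least(other[o], alt[o], d) for o, d in directions.items()) and \
           any(strictly(other[o], alt[o], d) for o, d in directions.items()):
            return True
    return False


# Normalize each objective to a 0-1 scale on which 1 is always best.
lows = {o: min(c[o] for c in consequences.values()) for o in directions}
highs = {o: max(c[o] for c in consequences.values()) for o in directions}

def normalized(value, obj):
    span = highs[obj] - lows[obj]
    score = (value - lows[obj]) / span if span else 1.0
    return score if directions[obj] == "max" else 1.0 - score

for name, cons in consequences.items():
    total = sum(weights[o] * normalized(cons[o], o) for o in directions)
    note = " (dominated)" if is_dominated(cons, consequences.values()) else ""
    print(f"{name}: weighted value = {total:.2f}{note}")
```

Run as written, the script flags alternative 3 as dominated by alternative 4, consistent with the example above; the weighted values themselves depend entirely on the hypothetical weights.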

A difficult class of trade‐off (so‐called taboo trade‐offs) involves trade‐offs between values people regard as inviolate or sacred, such as cultural or spiritual values, and secular values, such as monetary cost (Converse, 2020; Schwartz, 2021; Tetlock et al., 2000). Many people regard the persistence of a species as a sacred value (Wilson and Law, 2016). The mere mention of trade‐offs between species persistence and other values, such as cost, can be offensive or provoke moral outrage (Schwartz, 2021). Although decision analysis cannot magically solve these disputes, it will help uncover them. The process can then be used to facilitate a dialogue about the nature of trade‐offs and stimulate creative thinking to determine whether trade‐offs can be reduced by adjusting the decision scope or developing alternatives that better consider these values (Converse, 2020).

Some decisions involve identifying an optimal collection of actions (where a single collection represents an alternative) from a large set of actions to meet the objectives. These collections are often termed "portfolios" and are commonly associated with prioritization problems. These resource‐allocation problems (Table 4) often include multiple objectives and typically have constraints (e.g., a budget constraint) (Lyons, 2020). To solve these problems, the analysis focuses on identifying the best collection of actions that achieve the objectives given the constraint (i.e., a knapsack problem).

Frameworks like the project prioritization protocol (Joseph et al., 2009) (Table 2) do this by applying constrained ranking to sequentially select projects based on their cost‐effectiveness scores until funding has been exhausted. This approach (sometimes called a greedy algorithm for solving knapsack problems) is simple but can overlook the value provided by alternative combinations of projects (Lyons, 2020). Frameworks such as priority threat management and systematic conservation planning overcome this problem by applying multi‐objective programming tools and accounting for complementarity, whereby strategies are sequentially added depending on the contribution they make to achieving objectives relative to those already in the portfolio (Chadés et al., 2014; Margules and Sarkar, 2007). In the case of systematic conservation planning, spatial conservation prioritization tools (e.g., Marxan, Prioritizr, Zonation) perform these analyses for spatially explicit data.
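For concreteness, the sketch below shows the constrained‐ranking (greedy) logic described above in Python. The projects, their benefit and feasibility scores, and the budget are invented for illustration; the project prioritization protocol itself uses more detailed species‐specific benefit, feasibility, and cost estimates (Joseph et al., 2009).

```python
# Minimal sketch of constrained ranking by cost-effectiveness (a greedy
# approximation to the knapsack problem). All project data are invented.

projects = [
    # (name, expected benefit, feasibility, cost)
    ("Project A", 0.60, 0.90, 120_000),
    ("Project B", 0.45, 0.70, 40_000),
    ("Project C", 0.80, 0.50, 300_000),
    ("Project D", 0.30, 0.95, 25_000),
]
budget = 200_000

# Rank projects by cost-effectiveness: expected benefit x feasibility / cost.
ranked = sorted(projects, key=lambda p: p[1] * p[2] / p[3], reverse=True)

selected, spent = [], 0
for name, benefit, feasibility, cost in ranked:
    if spent + cost <= budget:  # fund the next-best project that still fits
        selected.append(name)
        spent += cost

print("Funded portfolio:", selected, "| total cost:", spent)
# A greedy ranking like this can miss higher-value combinations of projects
# (Lyons, 2020); complementarity-based or exact methods evaluate whole portfolios.
```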

Sometimes uncertainty impedes knowledge of the best action, and decision makers may delay commitment to a single course of action (Rose et al., 2019). For these decisions (often termed uncertainty and information problems [Table 4]), the trade‐off is between maximizing the benefits of spending time and resources to reduce uncertainty versus minimizing delays in taking action (Martin et al., 2017). One way to resolve this is to consider whether reducing uncertainty will change the decision and the extent to which delaying action to reduce uncertainty will reduce the chances of achieving objectives (i.e., species recovery, minimize cost) (Runge et al., 2011). A value of information analysis (Figure 3 & Appendix S1) can be used to examine these trade‐offs. If the uncertainty is worth learning about and the decision is recurrent, then adaptive management (Table 2) can be used to design monitoring and management to learn about the uncertainty.
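The sketch below illustrates one common form of value of information analysis, the expected value of perfect information (EVPI), for a toy problem with two candidate actions and two competing hypotheses about the system. The actions, hypotheses, outcome values, and prior probabilities are invented for illustration; see Runge et al. (2011) and Canessa et al. (2015) for formal treatments.

```python
# Minimal sketch of an expected value of perfect information (EVPI) calculation.
# Actions, hypotheses, outcomes, and prior beliefs are invented for illustration.

# Predicted outcome (e.g., probability of persistence) for each action under
# two competing hypotheses about how the system works.
outcomes = {
    "action 1": {"hypothesis A": 0.70, "hypothesis B": 0.40},
    "action 2": {"hypothesis A": 0.55, "hypothesis B": 0.60},
}
prior = {"hypothesis A": 0.5, "hypothesis B": 0.5}  # current degree of belief

# Acting now: choose the action with the best expected outcome under the prior.
expected = {a: sum(prior[h] * v for h, v in by_h.items()) for a, by_h in outcomes.items()}
value_acting_now = max(expected.values())

# Acting with perfect information: pick the best action under each hypothesis,
# then average those best outcomes over the prior.
value_with_info = sum(prior[h] * max(outcomes[a][h] for a in outcomes) for h in prior)

evpi = value_with_info - value_acting_now
print(f"Expected outcome acting now: {value_acting_now:.3f}")
print(f"Expected outcome with perfect information: {value_with_info:.3f}")
print(f"EVPI: {evpi:.3f}")  # a small EVPI suggests learning would rarely change the choice
```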

When uncertainty cannot practically be reduced, then the decision must be made in the face of uncertainty, resulting in risk (i.e., the chance of a negative outcome) (Burgman, 2005; Runge and Converse, 2020). In these problems (termed risk problems [Table 4]), there is an additional type of value judgment that needs to be considered: the risk attitude of the decision maker (i.e., risk seeking, risk neutral, or risk averse). Risk analysis tools and theories, including expected utility theory (Appendix S1), can reveal the nature of the trade‐off and how the risk attitude of the decision maker, along with the predicted consequences, results in preferences for various alternatives.
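A small sketch of how expected utility theory captures risk attitude is given below. The two alternatives (a certain but modest outcome versus a gamble with a higher expected value) and the utility functions are illustrative assumptions, not drawn from any case in the article.

```python
# Minimal sketch: the same two alternatives can be ranked differently by
# risk-neutral and risk-averse decision makers. All numbers are illustrative.

import math

certain_outcome = 0.60               # e.g., a sure 0.60 probability of persistence
gamble = [(0.5, 0.90), (0.5, 0.35)]  # (probability, outcome) pairs; mean = 0.625

def expected_utility(lottery, utility):
    return sum(p * utility(x) for p, x in lottery)

utilities = {
    "risk neutral": lambda x: x,            # linear utility: only the mean matters
    "risk averse": lambda x: math.sqrt(x),  # concave utility: penalizes poor outcomes
}

for attitude, u in utilities.items():
    eu_certain = u(certain_outcome)
    eu_gamble = expected_utility(gamble, u)
    preferred = "gamble" if eu_gamble > eu_certain else "certain alternative"
    print(f"{attitude}: EU(certain) = {eu_certain:.3f}, "
          f"EU(gamble) = {eu_gamble:.3f} -> prefers the {preferred}")
```

With these particular numbers, the risk‐neutral decision maker prefers the gamble (it has the higher expected outcome), whereas the risk‐averse decision maker prefers the certain alternative.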

Some decisions are linked to others through time, or space, and finding the optimal action for one decision requires an understanding of how it affects the choices and outcomes for others (termed linked [or sequential] decisions [Table 4]). Here, the trade‐off is often between costs now and benefits later. Useful tools for solving linked decisions include decision trees, stochastic dynamic programming, and partially observable Markov decision processes (Appendix S1). Adaptive management, in which the optimal policy may change over time as learning accrues, falls in this class of decisions (Table 2).
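To make the idea of a linked decision concrete, the sketch below solves a small, invented two‐stage decision tree by backward induction, the same logic that underlies stochastic dynamic programming. The tree structure, outcome values, and probabilities are illustrative assumptions only.

```python
# Minimal sketch: backward induction on a toy two-stage decision tree.
# A node is a terminal value, a ("decision", {choice: child}) node, or a
# ("chance", [(probability, child), ...]) node. All numbers are illustrative.

tree = ("decision", {
    "act now": ("chance", [(0.6, 0.85), (0.4, 0.35)]),
    "monitor first, then decide": ("chance", [
        (0.6, ("decision", {"act": 0.82, "do nothing": 0.30})),  # good state revealed
        (0.4, ("decision", {"act": 0.40, "do nothing": 0.52})),  # poor state revealed
    ]),
})

def solve(node):
    """Return (expected value, best choice) by working backward through the tree."""
    if isinstance(node, (int, float)):
        return node, None
    kind, body = node
    if kind == "decision":
        values = {choice: solve(child)[0] for choice, child in body.items()}
        best = max(values, key=values.get)
        return values[best], best
    if kind == "chance":
        return sum(p * solve(child)[0] for p, child in body), None
    raise ValueError(f"unknown node type: {kind}")

value, first_choice = solve(tree)
print(f"Best first choice: {first_choice} (expected value {value:.2f})")
```

In this toy example, monitoring first has the higher expected value because the second‐stage choice can adapt to what is learned; changing the numbers (e.g., adding an explicit cost of delay) can reverse that conclusion, which is exactly the cost‐now versus benefit‐later trade‐off described above.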

Decide and implement

After an initial assessment of trade‐offs, the decision maker may have the information required to choose and implement an alternative (Figure 3, step 6). To examine the sensitivity of the decision to various assumptions or inputs, robustness and sensitivity analyses are often used (Appendix S1). Alternatively, subsequent iteration of the decision analysis may be required.

For some decisions, there may be no alternative that satisfies all objectives. In these cases, the solution is to find the alternative that is most likely to deliver tolerable outcomes to those involved (termed satisficing), rather than optimal outcomes on all objectives, while considering the robustness of the alternatives to uncertainties (i.e., robust satisficing) (Gregory et al., 2012; Regan et al., 2005).

Decision analysis aims to identify alternatives that would best meet objectives of importance. However, in conservation, implementation can still be challenging (Rose et al., 2019; Wright et al., 2020). Rapid prototyping can reveal these barriers early, allowing subsequent iterations to focus on refining the decision to improve the chance of implementation, for example, by including short‐term milestones to track and report project success within political funding cycles, refining the decision alternative to make it more feasible to implement (including the addition of monitoring and reporting), and explicitly developing funding and communication strategies (Wright et al., 2020). Many decision‐support frameworks developed for conservation and natural resource management include suggestions for some or all of these steps (e.g., Carwardine et al., 2019; CMP, 2020; Gregory et al., 2012).

Monitor

There are many reasons to monitor (Figure 3, step 7) (McDonald‐Madden et al., 2010; Nichols and Williams, 2006), but the key motivations for monitoring for decision‐making are to assess the achievement of the objectives; measure the state of the system for state‐dependent decision‐making; and provide information to inform future decision‐making by reducing critical uncertainties (i.e., adaptive management) (Lyons et al., 2008). Through decision analysis, decision makers can better understand the key uncertainties, whether they could be reduced by monitoring, and how monitoring can be optimized (Reynolds et al., 2016). Several tools are useful to help guide monitoring, including conceptual models, quantitative models, project tracking tools, and impact evaluation (Appendix S1).

DISCUSSION

Timely and proactive decision‐making better achieves desired conservation outcomes (Martin, Nally, et al., 2012). Yet decision‐making in conservation is difficult, and many have not been trained in how to navigate the challenges that commonly arise (Table 1). This has resulted in delayed or suboptimal decisions. Theories, frameworks, and tools from decision science can structure thinking and proactively identify solutions to difficult conservation problems (e.g., Carwardine et al., 2019; Runge et al., 2020; Sinclair et al., 2018). The application of decision science can also provide underappreciated ancillary benefits, including mitigating common cognitive traps; imparting defensibility, trust, and collaboration in decision‐making; and improving the likelihood of implementation (Bottrill and Pressey, 2012; Gregory et al., 2012; Wright et al., 2020). Despite this, the widespread application of decision science has been impeded by a lack of training, confusion over common terminology, and the impression that decision science is too complex or time consuming to be applied. The entry point to the field of decision science we provided will improve conceptual understanding and practical application.

Rather than deploying complex and time‐consuming analyses, making better decisions begins with knowing how to think through decisions with the foundational components of decision analysis (Figure 1). These components are largely common sense but likely require changing how one approaches many decisions in conservation (Keeney, 2004; Williams et al., 2020). We contend that all decisions should begin by rapidly working through the components of a decision analysis at least once (Keeney, 2004); doing so may solve the problem (Figure 1) and will help determine whether an alternative tool, framework, or more detailed analysis is required. The steps we outlined will help decision makers and practitioners begin to apply decision analysis.

If decisions are to be converted into outcomes for biodiversity, then decision makers need to be involved in the process. This does not preclude researchers from applying decision analysis to develop a proof‐of‐concept to begin the conversation, but researchers should be aware that such solutions are unlikely to be implemented without involving decision makers in the process (Wright et al., 2020).

Related to this point, the steps we outlined can be undertaken by a single person; however, the quality of decisions will be improved by drawing on the advice of others, elicited in a structured way, starting at the problem‐formulation stage. The multi‐objective nature of many decisions means that input should not be limited to the project team or to those with conservation or Western science backgrounds. Indigenous peoples, social scientists, economists, project managers, and those who hold diverse interests and perspectives on the problem can better inform decisions (Ban et al., 2018; Burgman et al., 2011; Converse and Sipe, 2021; Robinson et al., 2019). Likewise, applications must be adjusted to respect multiple ways of knowing and the local context of a decision (Adams et al., 2014; Reid et al., 2021).

In some contexts, decisions may not have buy‐in from all decision makers (i.e., they may be unwilling to participate in the process) or emotions may run too high to apply decision analysis meaningfully. In these cases, alternative branches of decision science such as negotiation theory, conflict resolution, or game theory can be deployed (Appendix S1). Even here, the steps of decision analysis are often useful for first revealing the underlying causes of disputes. For these types of problems, a competent facilitator and analyst can be invaluable.

Due to the multi‐objective nature of many conservation problems, a clearly preferable alternative may not exist (McShane et al., 2011). However, decision analysis can illuminate trade‐offs, which can help remove the worst alternatives and enable decision makers to focus discussion on a limited selection of viable and well‐vetted options (Keeney, 2004). If uncertainty is quantified, the analysis can further identify alternatives that are more likely to achieve desired outcomes despite future uncertainties, or that reduce the risk of unacceptable outcomes. Monitoring and evaluation can further guard against unexpected outcomes by detecting changes in a timely manner and providing a basis for adapting management accordingly.
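One simple, generic way to "remove the worst alternatives" is dominance screening, sketched below in Python with hypothetical alternatives and scores: an alternative is set aside if another performs at least as well on every objective and strictly better on at least one. This is offered as an illustrative technique, not as the specific procedure recommended by this article.

```python
# A minimal sketch (hypothetical scores) of screening out dominated alternatives
# from a consequence table before deliberating over trade-offs.

consequences = {
    "alt_1": {"persistence": 0.8, "cost_saving": 0.3, "support": 0.7},
    "alt_2": {"persistence": 0.5, "cost_saving": 0.3, "support": 0.6},  # dominated by alt_1
    "alt_3": {"persistence": 0.6, "cost_saving": 0.9, "support": 0.4},
}

def dominates(a, b):
    """True if alternative a scores at least as well as b on every objective and strictly better on one."""
    return all(a[k] >= b[k] for k in a) and any(a[k] > b[k] for k in a)

non_dominated = [
    name for name, scores in consequences.items()
    if not any(dominates(other, scores) for other_name, other in consequences.items() if other_name != name)
]
print("Alternatives worth deliberating over:", non_dominated)
```

Choosing among the remaining non‐dominated alternatives still requires value judgments about the trade‐offs they embody.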

In our experience, decision analysis and the broader field of decision science provide a useful lens through which to address conservation problems, a greater awareness of the roles of values and science in these decisions, and a process for identifying alternatives that are more likely to achieve multiple important values in a timely manner. It is our hope that this article helps practitioners and decision makers better understand and navigate this expansive field, begin to apply decision science, and find a tractable pathway for turning difficult conservation problems into timely, effective, and beneficial outcomes.

Supporting information

Appendix S1: Hemming et al. (Accepted Manuscript) An Introduction to Decision Science for Conservation. Conservation Biology

Table S1 Common decision‐support tools

Figure S1 A summary of decision‐support frameworks listed in the main text and expanded upon here.

Figure S2 The nine steps of the Project Prioritization Protocol as described by Joseph et al. (2009), and the tools drawn on throughout the process

Figure S3 The four steps and related substeps of the Priority Threat Management Framework, and associated tools commonly used when implementing Priority Threat Management (Carwardine et al., 2019).

Figure S4 The steps of Systematic Conservation Planning as described by Pressey and Bottrill (2009) (pink), with additional steps from Sarkar and Illoldi‐Rangel (2010) (green), and modifications (in italics) to better include social and cultural objectives as outlined by Sarkar and Montoya (2011).

Figure S5 Three different types of learning supported by the ‘decision theoretical approach’ to adaptive management (adapted from Pahl‐Wostl (2009))

Table S2 Outlines some of the people who may need to be involved in a decision. Stakeholder mapping can help identify who to include (Table S1).

Table S3 Types of objectives in decision‐making (Keeney, 2007)

Figure S6 A means‐ends diagram of the objectives. Refer to Gregory et al. (2012), Keeney (2007), Runge and Walshe (2014) for additional examples

Table S4 Three types of scales that might be used to develop performance measures

ACKNOWLEDGMENTS

The authors received funding and support from their affiliated institutions. In addition, V.H. was supported by Environment and Climate Change Canada, T.G.M. and L.C. by The Natural Sciences and Engineering Research Council of Canada (NSERC, RGPIN‐2019‐04535, CGSD3‐534335‐2019), L.N.K. by the Beaty Biodiversity Postdoctoral Fellowship from the University of British Columbia, L.R. by the Australian Government's National Environmental Science Program through the Threatened Species Recovery Hub, E.M.M. by an ARC Future Fellowship, and T.G.M. by the Liber Ero Chair in Conservation. Any use of trade, firm, or product names is for descriptive purposes only and does not imply endorsement by the U.S. Government.

Hemming, V. , Camaclang, A. E. , Adams, M. , et al. (2022). An introduction to decision science for conservation. Conservation Biology, 36, e13868. 10.1111/cobi.13868.

Article impact statement: An introduction to decision science is provided to aid in its conceptual clarity and practical application for conservation decisions.

LITERATURE CITED

  1. Adams, M. S. , Carpenter, J. , Housty, J. A. , Neasloss, D. , Paquet, P. C. , Service, C. , Walkus, J. , & Darimont, C. T. (2014). Toward increased engagement between academic and indigenous community partners in ecological research. Ecology and Society, 19, 5. [Google Scholar]
  2. Ban, N. C. , Frid, A. , Reid, M. , Edgar, B. , Shaw, D. , & Siwallace, P. (2018). Incorporate Indigenous perspectives for impactful research and effective management. Nature Ecology & Evolution, 2, 1680–1683. [DOI] [PubMed] [Google Scholar]
  3. Bell, D. E. , Raiffa, H. , & Tversky, A. (1988). Decision making: Descriptive, normative, and prescriptive interactions. Cambridge University Press. [Google Scholar]
  4. Bennett, N. J. , Di Franco, A. , Calò, A. , Nethery, E. , Niccolini, F. , Milazzo, M. , & Guidetti, P. (2019). Local support for conservation is associated with perceptions of good governance, social impacts, and ecological effectiveness. Conservation Letters, 12, e12640. [Google Scholar]
  5. Bottrill, M. C. , Joseph, L. N. , Carwardine, J. , Bode, M. , Cook, C. , Game, E. T. , Grantham, H. , Kark, S. , Linke, S. , McDonald‐Madden, E. , Pressey, R. L. , Walker, S. , Wilson, K. A. , & Possingham, H. P. (2008). Is conservation triage just smart decision making? Trends in Ecology & Evolution, 23, 649–654. 10.1016/j.tree.2008.07.007 [DOI] [PubMed] [Google Scholar]
  6. Bottrill, M. C. , & Pressey, R. L. (2012). The effectiveness and evaluation of conservation planning. Conservation Letters, 5, 407–420. [Google Scholar]
  7. Bower, S. D. , Brownscombe, J. W. , Birnie‐Gauvin, K. , Ford, M. I. , Moraga, A. D. , Pusiak, R. J. P. , Turenne, E. D. , Zolderdo, A. J. , Cooke, S. J. , & Bennett, J. R. (2018). Making tough choices: Picking the appropriate conservation decision‐making tool: Choosing conservation decision‐making tools. Conservation Letters, 11, e12418. [Google Scholar]
  8. Brazill‐Boast, J. , Williams, M. , Rickwood, B. , Partridge, T. , Bywater, G. , Cumbo, B. , Shannon, I. , Probert, W. J. M. , Ravallion, J. , Possingham, H. , & Maloney, R. F. (2018). A large‐scale application of project prioritization to threatened species investment by a government agency. PLOS ONE, 13, e0201413. 10.1371/journal.pone.0201413 [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Burgman, M. (2005). Risks and decisions for conservation and environmental management. Cambridge University Press. [Google Scholar]
  10. Burgman, M. , Carr, A. , Godden, L. , Gregory, R. , McBride, M. , Flander, L. , & Maguire, L. (2011). Redefining expertise and improving ecological judgment. Conservation Letters, 4, 81–87. [Google Scholar]
  11. Canessa, S. , Guillera‐Arroita, G. , Lahoz‐Monfort, J. J. , Southwell, D. M. , Armstrong, D. P. , Chadès, I. , Lacy, R. C. , & Converse, S. J. (2015). When do we need more data? A primer on calculating the value of information for applied ecologists. Methods in Ecology and Evolution, 6, 1219–1228. [Google Scholar]
  12. Carwardine, J. , Martin, T. G. , Firn, J. , Reyes, R. P. , Nicol, S. , Reeson, A. , Grantham, H. S. , Stratford, D. , Kehoe, L. , & Chadès, I. (2019). Priority Threat Management for biodiversity conservation: A handbook. Journal of Applied Ecology, 56, 481–490. [Google Scholar]
  13. CBD . (2020). Global biodiversity outlook 5. Secretariat of the Convention on Biological Diversity. [Google Scholar]
  14. Chadès, I. , Nicol, S. , van Leeuwen, S. , Walters, B. , Firn, J. , Reeson, A. , Martin, T. G. , & Carwardine, J. (2014). Benefits of integrating complementarity into priority threat management. Conservation Biology, 29, 525–536. [DOI] [PubMed] [Google Scholar]
  15. CMP . (2020). Open standards for the practice of conservation (Version 4.0). Author. [Google Scholar]
  16. Colyvan, M. , Justus, J. , & Regan, H. M. (2011). The conservation game. Biological Conservation, 144, 1246–1253. [Google Scholar]
  17. Conroy, M. J. , & Peterson, J. T. (2013). Decision making in natural resource management: A structured, adaptive approach. John Wiley & Sons. [Google Scholar]
  18. Converse, S. J. (2020). Introduction to multi‐criteria decision analysis. In Runge M. C., Converse S. J., Lyons J. E., & Smith D. R. (Eds.), Structured decision making: Case studies in natural resource management (pp. 51–61). Johns Hopkins University Press. [Google Scholar]
  19. Converse, S. J. , & Grant, E. H. C. (2019). A three‐pipe problem: Dealing with complexity to halt amphibian declines. Biological Conservation, 236, 107–114. [Google Scholar]
  20. Converse, S. J. , Shelley, K. J. , Morey, S. , Chan, J. , LaTier, A. , Scafidi, C. , Crouse, D. T. , & Runge, M. C. , (2011). A decision‐analytic approach to the optimal allocation of resources for endangered species consultation. Biological conservation, 144, 319–329. [Google Scholar]
  21. Converse, S. J. , & Sipe, H. A. (2021). Finding the win‐win strategies in endangered species conservation. Animal Conservation, 24, 161–162. [Google Scholar]
  22. Failing, L. , & Gregory, R. (2003). Ten common mistakes in designing biodiversity indicators for forest policy. Journal of Environmental Management, 68, 121–132. [DOI] [PubMed] [Google Scholar]
  23. Fernandes, L. , Day, J. , Lewis, A. , Slegers, S. , Kerrigan, B. , Breen, D. , Cameron, D. , Jago, B. , Hall, J. , Lowe, D. , Innes, J. , Tanzer, J. , Chadwick, V. , Thompson, L. , Gorman, K. , Simmons, M. , Barnett, B. , Sampson, K. , De'ath, G. , Mapstone, B. , Marsh, H. , Possingham, H. , Ball, I. , Ward, T. , Dobbs, K. , Aumend, J. , Slater, D. , & Stapleton, K. (2005). Establishing representative no‐take areas in the Great Barrier Reef: Large‐scale implementation of theory on marine protected areas. Conservation Biology, 19, 1733–1744. 10.1111/j.1523-1739.2005.00302.x [DOI] [Google Scholar]
  24. Fuller, A. K. , Decker, D. J. , Schiavone, M. V. , & Forstchen, A. B. (2020). Ratcheting up rigor in wildlife management decision making. Wildlife Society Bulletin, 44, 29–41. [Google Scholar]
  25. Game, E. T. , Kareiva, P. , & Possingham, H. P. (2013). Six common mistakes in conservation priority setting: Priority‐setting mistakes. Conservation Biology, 27, 480–485. 10.1111/cobi.12051 [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. Garrard, G. E. , Rumpff, L. , Runge, M. C. , & Converse, S. J. (2017). Rapid prototyping for decision structuring: An efficient approach to conservation decision analysis. In Bunnefeld N., E. Nicholson, & E. J. Milner‐Gulland (Eds.), Decision‐making in conservation and natural resource management (pp. 46–64). Cambridge University Press. [Google Scholar]
  27. Gregory, R. , Failing, L. , Harstone, M. , Long, G. , McDaniels, T. , & Ohlson, D. (2012). Structured decision making: A practical guide to environmental management choices. John Wiley & Sons. [Google Scholar]
  28. Gregory, R. S. , & Keeney, R. L. (2002). Making smarter environmental management decisions. JAWRA Journal of the American Water Resources Association, 38, 1601–1612. [Google Scholar]
  29. Hammond, J. S. , Keeney, R. L. , & Raiffa, H. (2015). Smart choices: A practical guide to making better decisions. Harvard Business Review Press. [Google Scholar]
  30. Hammond, J. S. , Keeney, R. L. , & Raiffa, H. (1998a). The hidden traps in decision making. Harvard Business Review, 76, 47–58. [PubMed] [Google Scholar]
  31. Hemming, V. , Burgman, M. A. , Hanea, A. M. , McBride, M. F. , & Wintle, B. C. (2018). A practical guide to structured expert elicitation using the IDEA protocol. Methods in Ecology and Evolution, 9, 169–180. [Google Scholar]
  32. Howard, R. A. (1966). Decision analysis: Applied decision theory. Stanford Research Institute. [Google Scholar]
  33. Johnson, F. , Eaton, M. , Williams, J. , Jensen, G. , & Madsen, J. (2015). Training conservation practitioners to be better decision makers. Sustainability, 7, 8354–8373. [Google Scholar]
  34. Joseph, L. N. , Maloney, R. F. , & Possingham, H. P. (2009). Optimal allocation of resources among threatened species: A project prioritization protocol. Conservation Biology, 23, 328–338. [DOI] [PubMed] [Google Scholar]
  35. Kahneman, D. , Slovic, P. , & Tversky, A. (1982). Judgment under uncertainty: Heuristics and biases. Cambridge University Press. [DOI] [PubMed] [Google Scholar]
  36. Keeney, R. L. (2004). Making better decision makers. Decision Analysis, 1, 193–204. [Google Scholar]
  37. Keeney, R. L. (1982). Feature article—Decision analysis: An overview. Operations Research, 30, 803–838. 10.1287/opre.30.5.803 [DOI] [PubMed] [Google Scholar]
  38. Kleindorfer, P. R. , Kunreuther, H. , & Schoemaker, P. J. (1993). Decision sciences: An integrative perspective. Cambridge University Press. [Google Scholar]
  39. Lyons, J. E. (2020). Introduction to resource allocation. In Runge M. C., Converse S. J., Lyons J. E., & Smith D. R. (Eds.), Structured decision making: Case studies in natural resource management. Johns Hopkins University Press. [Google Scholar]
  40. Lyons, J. E. , Runge, M. C. , Laskowski, H. P. , & Kendall, W. L. (2008). Monitoring in the context of structured decision‐making and adaptive management. Journal of Wildlife Management, 72, 1683–1692. [Google Scholar]
  41. Maguire, L. A. (1986). Using decision analysis to manage endangered species populations. Journal of Environmental Management, 22, 345–360. [Google Scholar]
  42. Margules, C. R. , & Sarkar, S. (2007). Systematic conservation planning. Cambridge University Press. [Google Scholar]
  43. Martin, T. G. , Burgman, M. A. , Fidler, F. , Kuhnert, P. M. , Low‐Choy, S. , McBride, M. , & Mengersen, K. (2012). Eliciting expert knowledge in conservation science. Conservation Biology, 26, 29–38. [DOI] [PubMed] [Google Scholar]
  44. Martin, T. G. , Camaclang, A. E. , Possingham, H. P. , Maguire, L. A. , & Chadès, I. (2017). Timing of protection of critical habitat matters. Conservation Letters, 10, 308–316. [Google Scholar]
  45. Martin, T. G. , Nally, S. , Burbidge, A. A. , Arnall, S. , Garnett, S. T. , Hayward, M. W. , Lumsden, L. F. , Menkhorst, P. , McDonald‐Madden, E. , & Possingham, H. P. (2012). Acting fast helps avoid extinction: Acting fast avoids extinctions. Conservation Letters, 5, 274–280. [Google Scholar]
  46. McCarthy, D. P. , Donald, P. F. , Scharlemann, J. P. W. , Buchanan, G. M. , Balmford, A. , Green, J. M. H. , Bennun, L. A. , Burgess, N. D. , Fishpool, L. D. C. , Garnett, S. T. , Leonard, D. L. , Maloney, R. F. , Morling, P. , Schaefer, H. M. , Symes, A. , Wiedenfeld, D. A. , & Butchart, S. H. M. (2012). Financial costs of meeting global biodiversity conservation targets: Current spending and unmet needs. Science, 338, 946–949. 10.1126/science.1229803 [DOI] [PubMed] [Google Scholar]
  47. McDaniels, T. (2021). Four decades of transformation in decision analytic practice for societal risk management. Risk Analysis, 41, 491–502. [DOI] [PubMed] [Google Scholar]
  48. McDonald‐Madden, E. , Baxter, P. W. J. , Fuller, R. A. , Martin, T. G. , Game, E. T. , Montambault, J. , & Possingham, H. P. (2010). Monitoring does not always count. Trends in Ecology & Evolution, 25, 547–550. [DOI] [PubMed] [Google Scholar]
  49. McFadden, J. E. , Hiller, T. L. , & Tyre, A. J. (2011). Evaluating the efficacy of adaptive management approaches: Is there a formula for success? Journal of Environmental Management, 92, 1354–1359. [DOI] [PubMed] [Google Scholar]
  50. McShane, T. O. , Hirsch, P. D. , Trung, T. C. , Songorwa, A. N. , Kinzig, A. , Monteferri, B. , Mutekanga, D. , Thang, H. V. , Dammert, J. L. , Pulgar‐Vidal, M. , Welch‐Devine, M. , Peter Brosius, J. , Coppolillo, P. , & O'Connor, S. (2011). Hard choices: Making trade‐offs between biodiversity conservation and human well‐being. Biological Conservation, 144, 966–972. [Google Scholar]
  51. Morgenstern, O. , & Von Neumann, J. (1953). Theory of games and economic behavior. Princeton University Press. [Google Scholar]
  52. Nichols, J. D. , & Williams, B. K. (2006). Monitoring for conservation. Trends in Ecology & Evolution, 21, 668–673. [DOI] [PubMed] [Google Scholar]
  53. Pannell, D. J. , Roberts, A. M. , Park, G. , Alexander, J. , Curatolo, A. , & Marsh, S. (2012). Integrated assessment of public investment in land‐use change to protect environmental assets in Australia. Land Use Policy, 29, 377–387. [Google Scholar]
  54. Possingham, H. P. , Andelman, S. J. , Noon, B. R. , Trombulak, S. , Pulliam, H. R. , Soule, M. E. , & Orians, G. H. (2001). Making smart conservation decisions. In Soule M. E. & Orians G. H. (Eds.), Conservation biology: Research priorities for the next decade (pp. 225–244). Island Press. [Google Scholar]
  55. Possingham, H. P. , Wintle, B. A. , Fuller, R. A. , & Joseph, L. N. (2012). The conservation return on investment from ecological monitoring. In Lindenmayer D. B. & Gibbons P. (Eds.), Biodiversity monitoring in Australia (pp. 49–61). CSIRO Publishing. [Google Scholar]
  56. Raiffa, H. (2002). Decision analysis: A personal account of how it got started and evolved. Operations Research, 50, 179–185. [Google Scholar]
  57. Ralls, K. , & Starfield, A. M. (1995). Choosing a management strategy: Two structured decision‐making methods for evaluating the predictions of stochastic simulation models. Conservation Biology, 9, 175–181. [Google Scholar]
  58. Raue, M. , & Scholl, S. G. (2018). The use of heuristics in decision making under risk and uncertainty. In Raue M., Lermer E., & Streicher B. (Eds.), Psychological perspectives on risk and risk analysis (pp. 153–179). Springer. [Google Scholar]
  59. Redpath, S. M. , Young, J. , Evely, A. , Adams, W. M. , Sutherland, W. J. , Whitehouse, A. , Amar, A. , Lambert, R. A. , Linnell, J. D. C. , Watt, A. , & Gutiérrez, R. J. (2013). Understanding and managing conservation conflicts. Trends in Ecology & Evolution, 28, 100–109. 10.1016/j.tree.2012.08.021 [DOI] [PubMed] [Google Scholar]
  60. Regan, H. M. , Ben‐Haim, Y. , Langford, B. , Wilson, W. G. , Lundberg, P. , Andelman, S. J. , & Burgman, M. A. (2005). Robust decision‐making under severe uncertainty for conservation management. Ecological Applications, 15, 1471–1477. [Google Scholar]
  61. Reid, A. J. , Eckert, L. E. , Lane, J.‐F. , Young, N. , Hinch, S. G. , Darimont, C. T. , Cooke, S. J. , Ban, N. C. , & Marshall, A. (2021). "Two‐Eyed Seeing": An Indigenous framework to transform fisheries research and management. Fish and Fisheries, 22, 243–261. [Google Scholar]
  62. Reynolds, J. H. , Knutson, M. G. , Newman, K. B. , Silverman, E. D. , & Thompson, W. L. (2016). A road map for designing and implementing a biological monitoring program. Environmental Monitoring and Assessment, 188, 1–25. [DOI] [PMC free article] [PubMed] [Google Scholar]
  63. Robinson, K. F. , Fuller, A. K. , Stedman, R. C. , Siemer, W. F. , & Decker, D. J. (2019). Integration of social and ecological sciences for natural resource decision making: Challenges and opportunities. Environmental Management, 63, 565–573. [DOI] [PubMed] [Google Scholar]
  64. Rose, D. C. , Amano, T. , González‐Varo, J. P. , Mukherjee, N. , Robertson, R. J. , Simmons, B. I. , Wauchope, H. S. , & Sutherland, W. J. (2019). Calling for a new agenda for conservation science to create evidence‐informed policy. Biological Conservation, 238, 108222. [Google Scholar]
  65. Runge, M. C. (2020). Introduction to linked and dynamic decisions. In Runge M. C., Converse S. J., Lyons J. E., & Smith D. R. (Eds.), Structured decision making: Case studies in natural resource management (pp. 227–233). Johns Hopkins University Press. [Google Scholar]
  66. Runge, M. C. , & Bean, E. A. (2020). Decision analysis for managing public natural resources. In Runge M. C., Converse S. J., Lyons J. E., & Smith D. R. (Eds.), Structured decision making: Case studies in natural resource management (pp. 1–11). Johns Hopkins University Press. [Google Scholar]
  67. Runge, M. C. , & Converse, S. J. (2020). Introduction to risk analysis. In Runge M. C., Converse S. J., Lyons J. E., & Smith D. R. (Eds.), Structured decision making: Case studies in natural resource management (pp. 149–155). Johns Hopkins University Press. [Google Scholar]
  68. Runge, M. C. , Converse, S. J. , & Lyons, J. E. (2011). Which uncertainty? Using expert elicitation and expected value of information to design an adaptive program. Biological Conservation, 144, 1214–1223. [Google Scholar]
  69. Runge, M. C. , Converse, S. J. , Lyons, J. E. , & Smith, D. R. (2020). Structured decision making: Case studies in natural resource management. Johns Hopkins University Press. [Google Scholar]
  70. Runge, M. C. , Grand, J. B. , & Mitchell, M. S. (2013). Structured decision making. In Krausman P. R. & Cain J. W. III (Eds.), Wildlife management and conservation: Contemporary principles and practices (pp. 51–72). Johns Hopkins University Press. [Google Scholar]
  71. Sarkar, S. , & Illoldi‐Rangel, P. (2010). Systematic conservation planning: An updated protocol. Natureza & Conservação, 8, 19–26. [Google Scholar]
  72. Schwartz, M. W. (2021). Conservation lessons from taboos and trolley problems. Conservation Biology, 35, 794–803. [DOI] [PubMed] [Google Scholar]
  73. Schwartz, M. W. , Cook, C. N. , Pressey, R. L. , Pullin, A. S. , Runge, M. C. , Salafsky, N. , Sutherland, W. J. , & Williamson, M. A. (2018). Decision support frameworks and tools for conservation: Decision support for conservation. Conservation Letters, 11, e12385. [Google Scholar]
  74. Sebenius, J. (2007). Negotiation analysis: Between decisions and games. In Edwards W., Miles R. F. J., & von Winterfeldt D. (Eds.), Advances in decision analysis: From foundations to applications (pp. 469–488). Cambridge University Press. [Google Scholar]
  75. Sinclair, S. P. , Milner‐Gulland, E. J. , Smith, R. J. , McIntosh, E. J. , Possingham, H. P. , Vercammen, A. , & Knight, A. T. (2018). The use, and usefulness, of spatial conservation prioritizations. Conservation Letters, 11, e12459. [Google Scholar]
  76. Smith, D. R. (2020a). Introduction to structuring decisions. In Runge M. C., Converse S. J., Lyons J. E., & Smith D. R. (Eds.), Structured decision making: Case studies in natural resource management (pp. 15–22). Johns Hopkins University Press. [Google Scholar]
  77. Smith, D. R. (2020b). Introduction to prediction and the value of information. In Runge M. C., Converse S. J., Lyons J. E., & Smith D. R. (Eds.), Structured decision making: Case studies in natural resource management (pp. 189–224). Johns Hopkins University Press. [Google Scholar]
  78. Soule, M. E. (1985). What is conservation biology? BioScience, 35, 727–734. [Google Scholar]
  79. Tetlock, P. E. , Kristel, O. V. , Elson, S. B. , Green, M. C. , & Lerner, J. S. (2000). The psychology of the unthinkable: Taboo trade‐offs, forbidden base rates, and heretical counterfactuals. Journal of Personality and Social Psychology, 78, 853–870. [DOI] [PubMed] [Google Scholar]
  80. Tulloch, V. J. D. , Tulloch, A. I. T. , Visconti, P. , Halpern, B. S. , Watson, J. E. M. , Evans, M. C. , Auerbach, N. A. , Barnes, M. , Beger, M. , Chadès, I. , Giakoumi, S. , McDonald‐Madden, E. , Murray, N. J. , Ringma, J. , & Possingham, H. P. (2015). Why do we map threats? Linking threat mapping with actions to make better conservation decisions. Frontiers in Ecology and the Environment, 13, 91–99. 10.1890/140022 [DOI] [Google Scholar]
  81. Turner, N. J. , Gregory, R. , Brooks, C. , Failing, L. , & Satterfield, T. (2008). From invisibility to transparency: Identifying the implications. Ecology and Society, 13, 7. [Google Scholar]
  82. Walshe, T. , Dempster, F. , & Pascoe, S. (2019). Review of decision support tools and their potential application in the management of Australian Marine Parks . National Environmental Science Program, Marine Biodiversity Hub. [Google Scholar]
  83. Walshe, T. , & Slade, S. (2020). Coral reef fin fish spawning closures for coral reef fin fish. In Runge M. C., Converse S. J., Lyons J. E., & Smith D. R. (Eds.), Structured decision making: Case studies in natural resource management (pp. 72–82). Johns Hopkins University Press. [Google Scholar]
  84. Walshe, T. V. , & Hemming, V. (2019). Gabo Island Structured Decision‐Making: Report on outcomes from a workshop held October 2011 and penguin surveys conducted 2008–2016 . School of Biosciences, The University of Melbourne. [Google Scholar]
  85. Williams, B. K. , Szaro, R. C. , & Shapiro, C. D. (2009). Adaptive management: The U.S. Department of the Interior Technical Guide. Adaptive Management Working Group. [Google Scholar]
  86. Williams, D. R. , Balmford, A. , & Wilcove, D. S. (2020). The past and future role of conservation science in saving biodiversity. Conservation Letters, 13, e12720. [Google Scholar]
  87. Williams, P. J. , & Kendall, W. L. (2017). A guide to multi‐objective optimization for ecological problems with an application to cackling goose management. Ecological Modelling, 343, 54–67. [Google Scholar]
  88. Wilson, K. A. , Carwardine, J. , & Possingham, H. P. (2009). Setting conservation priorities. Annals of the New York Academy of Sciences, 1162, 237–264. [DOI] [PubMed] [Google Scholar]
  89. Wilson, K. A. , & Law, E. A. (2016). Ethics of conservation triage. Frontiers in Ecology and Evolution, 4, 112. [Google Scholar]
  90. Wintle, B. A. , Cadenhead, N. C. R. , Morgain, R. A. , Legge, S. M. , Bekessy, S. A. , Cantele, M. , Possingham, H. P. , Watson, J. E. M. , Maron, M. , Keith, D. A. , Garnett, S. T. , Woinarski, J. C. Z. , & Lindenmayer, D. B. (2019). Spending to save: What will it cost to halt Australia's extinction crisis? Conservation Letters, 12. 10.1111/conl.12682 [DOI] [Google Scholar]
  91. Wright, A. D. , Bernard, R. F. , Mosher, B. A. , O'Donnell, K. M. , Braunagel, T. , DiRenzo, G. V. , Fleming, J. , Shafer, C. , Brand, A. B. , Zipkin, E. F. , & Campbell Grant, E. H. (2020). Moving from decision to action in conservation science. Biological Conservation, 249, 108698. 10.1016/j.biocon.2020.108698 [DOI] [Google Scholar]
  92. WWF . (2020). Living Planet Report 2020 ‐ Bending the curve of biodiversity loss. Author. [Google Scholar]
