Ecological Applications. 2021 Sep 21;31(8):e02446. doi: 10.1002/eap.2446

Evaluating online and tangible interfaces for engaging stakeholders in forecasting and control of biological invasions

Devon A Gaydos 1, Chris M Jones 2, Shannon K Jones 2, Garrett C Millar 2, Vaclav Petras 2, Anna Petrasova 2, Helena Mitasova 2,3, Ross K Meentemeyer 2,4
PMCID: PMC9285687  PMID: 34448316

Abstract

Ecological forecasts will be best suited to inform intervention strategies if they are accessible to a diversity of decision‐makers. Researchers are developing intuitive forecasting interfaces to guide stakeholders through the development of intervention strategies and visualization of results. Yet, few studies to date have evaluated how user interface design facilitates the coordinated, cross‐boundary management required for controlling biological invasions. We used a participatory approach to develop complementary tangible and online interfaces for collaboratively forecasting biological invasions and devising control strategies. A diverse group of stakeholders evaluated both systems in the real‐world context of controlling sudden oak death, an emerging forest disease killing millions of trees in California and Oregon. Our findings suggest that while both interfaces encouraged adaptive experimentation, tangible interfaces are particularly well suited to support collaborative decision‐making. Reflecting on the strengths of both systems, we suggest workbench‐style interfaces that support simultaneous interactions and dynamic geospatial visualizations.

Keywords: adaptive management, ecological forecasting, geospatial, interface design, participatory modeling, plant pathogen

Introduction

Ecological forecasts are important tools for understanding and controlling biological invasions (Dietze et al. 2018). They have been used to identify high‐risk areas, expose dispersal patterns, and assess how human interventions are likely to impact spread trajectories, making them quite valuable for crafting control strategies (Gilligan and van den Bosch 2008, Cunniffe et al. 2016, Miller et al. 2017, Jones et al. 2021a). Yet, these capabilities may be moot if decision‐makers themselves cannot interact directly with the forecasts. Researchers have recognized a knowledge‐practice gap, where insights from models are rarely translated into on‐the‐ground decisions because: (1) models do not directly address the operational or policy problem at hand, (2) decision‐makers do not fully understand the model or its results, and (3) decision‐makers cannot use the model without considerable time and expertise (Bayliss et al. 2013, Matzek et al. 2014, Cunniffe et al. 2015, Knight et al. 2016, Voinov et al. 2016, Muscatello et al. 2017). Participatory modeling, or designing and applying models in collaboration with stakeholders, has been proposed to address these shortcomings (Voinov and Bousquet 2010, Voinov et al. 2016). By guiding model development, stakeholders ensure that the outcomes are relevant to decision‐making while simultaneously learning about model processes and uncertainties, how to apply the model, and how to interpret findings (Blades et al. 2016, Olabisi et al. 2016, Jordan et al. 2018). Additionally, modelers can capitalize on the diverse perspectives and local knowledge that stakeholders from government (federal, state, and local), tribal institutions, industry, nonprofits, and other branches of academia provide. Participatory approaches have been used to support decision‐making in some cases of natural resource management (Voinov and Gaddis 2008, White et al. 2010, Blades et al. 2016), but are still relatively new in forecasting biological invasions (Miller et al. 2017, Crimmins et al. 2019, Gaydos et al. 2019).

Although participatory modeling empowers stakeholders to become amateur modelers, forecasts are often written in esoteric code, inherently limiting direct stakeholder interaction (Cunniffe et al. 2015, Tonini et al. 2017). User‐friendly interfaces that enable forecasting without coding are therefore key to increasing stakeholder involvement. Interfaces can come in different forms, but should allow users to quickly and intuitively: (1) carry out model functions, (2) vary parameters to explore forecast uncertainty, (3) visualize ecological trajectories, and (4) assess how interventions alter these trajectories (Cunniffe et al. 2015, Petrasova et al. 2018, Jones et al. 2021a ). With these capacities, interfaces can unmask the analytical power of forecasts, especially for less tech‐savvy stakeholders, enabling them to easily contribute their perspectives (Gaydos et al. 2019). Essentially, user‐friendly interfaces can enable a bottom‐up approach in which researchers crowdsource diverse opinions to develop new intervention strategies (Voinov et al. 2016).

Researchers must also critically consider how the interface facilitates collaboration. The control of large‐scale environmental problems, such as biological invasions, relies heavily on cooperation between affected stakeholders, often across jurisdictional boundaries (Epanchin‐Niell et al. 2010). These stakeholders bring a diversity of knowledge, values, resources, and regulatory requirements to the table that inform scenario development and decision‐making (Crowley et al. 2017, Miller et al. 2017). Therefore, researchers should consider how interfaces can support collaboration between stakeholders with different technical and professional backgrounds. Forecasting interfaces that encourage discussion may function as boundary objects (Star 2010), common points of reference that mediate communication and knowledge transfer between diverse groups (Star and Griesemer 1989, Blades et al. 2016). Often, however, only the predetermined outcomes of forecasts are used in this boundary object capacity, limiting stakeholders' abilities to fully explore dynamic processes (Cash et al. 2006, Blades et al. 2016, Miller et al. 2017). Forecasting interfaces that allow intuitive, on‐the‐fly interventions may better facilitate decision‐making by enabling stakeholders to work together to develop dynamic and adaptive scenarios (Gaydos et al. 2019, Vukomanovic et al. 2019).

Yet, little is known about how different forecasting interfaces function as boundary objects. Graphical user interfaces (GUIs) are a common and compelling option because they have long been the paradigm for human‐computer interaction and can be web‐based, enabling widespread access (Dix et al. 2004, Cunniffe et al. 2015, Jones et al. 2021a ). However, it is unclear how GUIs function as boundary objects. Specifically, by confining users' inputs to a mouse and keyboard, and visual feedback to 2D graphics, GUIs limit users' abilities to perceive and relate spatial information (Ishii 2008, Millar et al. 2018). In response, tangible user interfaces (TUIs) were designed so that users could see, feel, and directly experience data through the movement of physical objects (Ullmer and Ishii 2000, Ishii 2008). Theories of embodied cognition assert that by engaging our physical senses, TUIs inherently support intuitive interaction, which may be especially important with less tech‐savvy users (Ratti et al. 2004, Ishii 2008, Kirsh 2013). Interestingly, some TUIs are designed to mimic traditional tabletop workbenches to encourage multiuser interactions, which could naturally fit into a boundary object framework (Coors et al. 1999, Underkoffler and Ishii 1999, Higgins et al. 2012, Petrasova et al. 2018). To date, comparisons of these different interface types are lacking, and it is unclear which is more conducive for group exploration of forecasts. A better understanding of how such tools resonate with stakeholders could help to guide the development of forecasting interfaces.

Under the umbrella of the broader Pest or Pathogen Spread (PoPS) Forecasting Initiative (Jones et al. 2021b ), we designed two interfaces to enable group exploration of invasion trajectories: PoPS Tangible Landscape (a TUI) and PoPS Forecasting Platform (a web‐based GUI) (Fig. 1). Both interfaces permit running the simulation, adjusting parameters, visualizing spread trajectories, and applying interventions, but differ in their interaction modality, which may, in turn, influence group dynamics. We evaluated how these technologies shape collaborative exploration of forecasts in the real‐world context of sudden oak death (SOD) management in Oregon. SOD, caused by the oomycete Phytophthora ramorum, is a disruptive forest disease responsible for widespread tree mortality along the Pacific Coast (Hansen et al. 2008, Cobb et al. 2012, Peterson et al. 2015). Uniquely, southwestern Oregon is the only place in the world infected by both European‐1 (EU1) and North American‐1 (NA1), two distinct pathogen strains that exhibit different spread patterns, complicating control efforts (Manter et al. 2010, Grünwald et al. 2016, Søndreli et al. 2019, Fig. 2). While EU1 is considered more aggressive, continued spread of either strain could inhibit the timber trade through one of Oregon's largest shipping ports, jeopardizing 1,200 jobs and US$57.9 million in annual wages (LeBoldus et al. 2017, Highland Economics and Mason, Bruce, Girard, Inc. 2019). In response, diverse stakeholder interests banded together to form the Oregon SOD Task Force aimed at preventing spread into new territories (Peterson et al. 2015, Hansen et al. 2019). While the circumstances of SOD in Oregon are unique, the challenges presented are common among biological invasions and the need for collective response is broadly characteristic of landscape‐scale environmental management (Epanchin‐Niell et al. 2010, Mills et al. 2011).

Fig. 1. Schematic showing the complementary functionality of the online and tangible interfaces. The systems are linked by a centralized database and forecast. With the tangible interface (a), multiple users physically designate treatments on a 3D representation of the study area and resulting infection statistics are displayed on an accompanying monitor. With the online interface (b), a single user draws treatments with a computer mouse based on verbal input from others. Dashed lines indicate the parallel functionalities of the interfaces.

Fig. 2. The distribution of both strains (EU1 and NA1) of Phytophthora ramorum in Curry County, Oregon in 2020. Stakeholders are concerned with preventing northward spread into Coos County, which could have economic implications for timber trade.

We held a workshop with members of the Oregon SOD Task Force to explore near‐term trajectories of disease spread, assess stakeholder perceptions of the forecast and interfaces, and compile suggestions for further research and development. We reflect on how both forecasting interfaces encouraged adaptive experimentation and social learning. Further, given the strengths of our interface designs, we suggest that workbench‐style tangible interfaces that support multiuser interaction and dynamic geospatial visualization may be best suited for collective decision‐making. Our research demonstrates how forecasting interfaces can facilitate the collaborative exploration of alternative ecological trajectories of biological invasions.

Methods

Participatory approach

Routine and consistent engagement with stakeholders is a central feature of participatory modeling (Voinov et al. 2016). Our partnership with the Oregon SOD Task Force began in 2017 when we held an initial participatory modeling workshop to better understand stakeholders' forecasting needs (Gaydos et al. 2019). At this first workshop, the Task Force's Core Science Team explored a pre‐existing SOD forecast using the PoPS Tangible Landscape interface and suggested how to tailor the system to Oregon's unique conditions. Throughout subsequent forecast development, we continued our collaboration with the Core Science Team via regularly scheduled meetings, research symposiums, and semistructured interviews to ensure that new developments accurately reflected stakeholders' expectations. Additionally, the Core Science Team maintained a bidirectional flow of information between researchers and stakeholders, which strengthened our participatory engagement by bolstering stakeholders' sense of involvement and improving researchers' understanding of the operational and political intricacies of management. This collaborative partnership with Oregon stakeholders inspired several key forecasting developments, including adding yearly interventions, modeling disease strains separately, and developing the web‐based PoPS Forecasting Platform. These updates not only improved the forecast for SOD management but also have broad relevance for other pest and pathogen systems. With these improvements, we held a second participatory modeling workshop with all members of the Task Force to generate treatment scenarios, compare interfaces, and gather further suggestions for improvement. In this paper, we report findings from this second workshop, including how stakeholder‐driven developments supported collaborative forecasting and analysis.

Forecasts

Based on local observations provided by the Oregon SOD Task Force, we adapted a forecast of SOD spread in California (Meentemeyer et al. 2011) to reflect Oregon epidemiology (Gaydos et al. 2019). The forecast is a stochastic, spatially explicit, susceptible‐infected‐removed (SIR) simulation that uses raster‐based initial infection locations, relative host density, and weekly weather conditions to mimic pathogen dispersal and establishment across the landscape. Forecasts are largely controlled by two parameters, reproductive rate and dispersal distance, which were calibrated for the EU1 and NA1 strains separately. Mirroring field observations, EU1 is forecasted to spread faster than NA1. For this second workshop, we forecasted a three‐year period from 2020 to 2022, starting with 2019 infections. Infection locations, obtained from the Oregon Department of Forestry, represented the distribution of each strain in September 2019. Some infections were still pending genotyping and could not be definitively assigned to a strain. For these cases, the closest genotyped strain served as a proxy as most SOD dispersal is local (Peterson et al. 2015). Host data representing the proportion of tanoak in a cell were adapted from forest species maps created by Ohmann et al. (2011) and can be found at our Github repository (Jones and Gaydos 2021). We assumed an average future weather scenario generated by resampling historical Daymet weather data (Thornton et al. 2020) using the process described in Meentemeyer et al. (2011). All raster data were processed at 100‐m resolution and represented the extent of Curry County, Oregon.
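To make this structure concrete, the following is a minimal Python sketch of the kind of raster‐based, stochastic dispersal and establishment step described above. The function, array names, and parameter values are illustrative assumptions only; they are not the calibrated PoPS forecast, which is implemented in R and C++.

```python
import numpy as np

# Illustrative sketch of one weekly step of a raster-based, stochastic
# susceptible-infected spread model (hypothetical names and values,
# not the calibrated PoPS code).
def weekly_spread_step(infected, host, weather, rho=0.5, mean_dist_m=250,
                       cell_size_m=100, rng=None):
    """infected: proportion of hosts infected per cell; host: relative host
    density (0-1); weather: weekly weather suitability (0-1)."""
    rng = rng or np.random.default_rng()
    rows, cols = infected.shape
    new_infected = infected.copy()
    for r, c in zip(*np.nonzero(infected > 0)):
        # Propagule production scales with infection level, reproductive
        # rate, and weekly weather suitability.
        n_spores = rng.poisson(rho * infected[r, c] * weather[r, c])
        for _ in range(n_spores):
            # Dispersal: random direction, exponential distance kernel.
            dist = rng.exponential(mean_dist_m) / cell_size_m
            theta = rng.uniform(0, 2 * np.pi)
            rr = int(round(r + dist * np.sin(theta)))
            cc = int(round(c + dist * np.cos(theta)))
            if 0 <= rr < rows and 0 <= cc < cols:
                # Establishment requires susceptible hosts in the target cell.
                p_establish = host[rr, cc] * (1 - new_infected[rr, cc])
                if rng.random() < p_establish:
                    new_infected[rr, cc] = min(1.0, new_infected[rr, cc] + 0.1)
    return new_infected
```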

Treatments have the same impact on the forecast regardless of which interface is used to design them. The user designates treatment polygons using either felt markers in PoPS Tangible Landscape or a point‐and‐click draw tool in PoPS Forecasting Platform (Petrasova et al. 2018, Jones et al. 2021a, Fig. 1). If a cell intersects a treatment polygon, all infected trees and a proportion of susceptible trees (relative to the cell area covered by the polygon) are removed. By conducting treatments in this way, we assume that all infected trees in a partially treated cell are located and removed, but that the cell could become reinfected later. Treatment locations are determined at the beginning of the year but are not applied until the following December, to account for realistic time lags between detection and treatment (Kanaskie et al. 2011). Treatments cost US$1.23/m² (Highland Economics and Mason, Bruce, and Girard, Inc. 2019), but no cost was incurred where treatments overlapped or no hosts were present.
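A simplified sketch of this treatment logic is shown below, assuming the drawn polygons have already been rasterized to a per‐cell treated fraction; the function and variable names are hypothetical and the actual PoPS implementation may differ.

```python
import numpy as np

# Illustrative treatment step: treated_frac is the fraction of each 100-m cell
# covered by treatment polygons (0-1), produced by a hypothetical rasterization
# of the polygons drawn on either interface.
def apply_treatments(infected, host, treated_frac, cell_area_m2=100 * 100,
                     cost_per_m2=1.23):
    coverage = np.clip(treated_frac, 0, 1)
    # Overlapping polygons count only once (coverage is capped at 1), and
    # cells without hosts incur no expenditure.
    billable = np.where(host > 0, coverage, 0.0)
    cost = float(billable.sum() * cell_area_m2 * cost_per_m2)
    treated = coverage > 0
    # All infected trees in a touched cell are assumed found and removed ...
    new_infected = np.where(treated, 0.0, infected)
    # ... while susceptible hosts are removed in proportion to the covered area.
    new_host = host * (1 - coverage)
    return new_infected, new_host, cost
```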

In previous studies by our research team, treatments could only be conducted in the first year of the forecast (Tonini et al. 2017). With guidance from Oregon stakeholders, we developed an adaptive management capacity that allows for yearly interventions (Gaydos et al. 2019, Petrasova et al. 2020). To accommodate the model's stochastic nature, we developed an iterative framework in which management is conducted on a single stochastic iteration, but future disease spread is visualized as a probability, allowing users to understand the potential range of outcomes. Users place treatments based on the current 2019 infections and then run ten stochastic iterations until the final year (2022). The single iteration that is closest to the average of all iterations is selected and replaces 2019 as the new starting point, essentially the new “simulated reality.” To see how the trajectory may play out into the future without any new treatments, users can visualize the infection probabilities in 2021 and 2022. With this information, users decide how to treat the 2020 infections, repeating this process until the final management year (2021) and the final visualization year (2022). This adaptive management capacity allows a user to walk through a simulated reality and explore the direct effect of treatments while simultaneously forecasting likely disease spread and impacts (Petrasova et al. 2020).
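The iteration‐selection logic can be sketched as follows. Here `run_one_year` is a hypothetical stand‐in for a single stochastic forecast run, and the distance metric used to pick the representative run is an assumption for illustration rather than the exact PoPS criterion.

```python
import numpy as np

# Sketch of the iterative "simulated reality" selection (illustrative only).
def advance_one_year(current_infected, run_one_year, n_runs=10, rng=None):
    rng = rng or np.random.default_rng()
    runs = [run_one_year(current_infected, rng) for _ in range(n_runs)]
    stack = np.stack(runs)                  # shape: (n_runs, rows, cols)
    probability = (stack > 0).mean(axis=0)  # per-cell infection probability
    ensemble_mean = stack.mean(axis=0)
    # Pick the single stochastic run closest to the ensemble mean; it becomes
    # the new starting point for the next year's treatments.
    distances = [np.abs(run - ensemble_mean).sum() for run in runs]
    representative = runs[int(np.argmin(distances))]
    return representative, probability
```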

Forecast interfaces

The forecast, composed of R and C++ code, can be run via two open‐source interfaces: PoPS Tangible Landscape and PoPS Forecasting Platform (Fig. 1). The interfaces share the underlying simulation, can be linked via the Forecasting Platform, and were designed with complementary functionality for visualizing spread trajectories and applying interventions (Fig. 1). Importantly, stakeholder feedback guided the development of both systems to improve usability (Gaydos et al. 2019, Jones et al. 2021a).

Tangible Landscape is a TUI in which users guide a geospatial simulation via physical actions (Fig. 1). It was designed as a general geospatial modeling tool and has been used in several applications besides pest and pathogen forecasting (Petrasova et al. 2018). The system consists of a scanner, a projector, a physical representation of the study area, and a computer equipped with GRASS GIS and Tangible Landscape software (GRASS Development Team 2019, Jones et al. 2021b, Fig. 1a). Infection locations, host density, and aerial imagery are projected onto the physical landscape for visualization. Stakeholders design scenarios by arranging felt treatment polygons (using either predefined or customized shapes) on the physical landscape. Cumulative cost metrics are automatically updated as new treatments are proposed to inform scenario development. Users run the forecast one year into the future with the click of a physical USB‐enabled button and scroll through past observations and future forecasts using a clicker. Detailed statistical summaries of scenarios are stored on the PoPS Forecasting Platform to allow quick comparisons of all previous scenarios. This interface has been used in multiple case studies (Tonini et al. 2017, Vukomanovic et al. 2019), including our first participatory modeling workshop with Oregon stakeholders (Gaydos et al. 2019).

Stakeholders' requests from this first workshop sparked the development of the PoPS Forecasting Platform (Fig. 1b), an online interface with similar functionality as PoPS Tangible Landscape (Gaydos et al. 2019). Here, users design scenarios by drawing treatment polygons on a web map of the study area. Users can either select predefined treatment shapes, or create unique shapes using a point‐and‐click draw tool. As with PoPS Tangible Landscape, cumulative costs are displayed to inform scenario development. Users can run the forecast one year into the future through the click of a button and can scroll through past observations and future projections with the display year toggle. The Forecasting Platform stores detailed statistical information about all scenarios, and this capacity can be linked with PoPS Tangible Landscape so users can quickly and intuitively compare scenarios developed on either interface (Fig. 1).

Although the interfaces are complementary, key differences exist. Because PoPS Tangible Landscape is a TUI, it could naturally facilitate group interaction as users gather around the physical landscape to negotiate scenarios and work together to designate treatment locations (Coors et al. 1999, Underkoffler and Ishii 1999, Geller 2006, Petrasova et al. 2018; Fig. 1a). Due to the technical complexity of multiuser web interactions, the PoPS Forecasting Platform is controlled by a single user, so groups must designate an individual to draw the proposed treatments (Fig. 1b). So although the forecast is the same, the group interaction caused by the interfaces could be quite different. The flexibility of the study area is another key distinction. In PoPS Tangible Landscape, the physical landscape cannot be altered, so the study area is fixed. Additionally, increasing the scale of the study area also increases the scale of treatments, so to conduct realistic treatments, the study area must remain relatively compact (Petrasova et al. 2018). In comparison, the PoPS Forecasting Platform offers more flexibility to pan and zoom around the data with no limits on study area size. Because of these differences in study extent, we modeled the widely dispersed NA1 strain using PoPS Forecasting Platform and the confined EU1 outbreak using PoPS Tangible Landscape.

Workshop

With the help of the Core Science Team, we organized our second participatory modeling workshop on September 10th, 2019 at the Oregon Department of Forestry Headquarters in Salem, Oregon (Fig. 3). Our goals were to: (1) provide decision support for SOD managers in Oregon, (2) assess stakeholder perceptions of the forecast and interfaces, and (3) collect suggestions to guide further research and development. We invited the entire Oregon SOD Task Force (including our collaborators on the Core Science Team) to capture a diversity of perspectives. Participants were invited from each of the 37 Task Force organizations, starting with the main contacts provided by Task Force organizers. If a participant could not attend, they were encouraged to recommend another individual to represent their organization. All organizations were contacted at least three times: twice by researchers and once by Task Force administrators.

Fig. 3. Workshop participants developing intervention scenarios using PoPS Tangible Landscape (a) and PoPS Forecasting Platform (b).

Research suggests that smaller groups promote participatory interaction (Kaner et al. 2007, Blades et al. 2016, Gaydos et al. 2019). Prior to the workshop, we split participants into two subgroups, A and B, using a stratified random sample. We categorized each organization into the following groups: federal government, state and local government, tribal government, timber industry, nonprofits, and academia. Within each organization type, participants were randomly assigned to either group A or B until the participant list was exhausted. This stratified random approach allowed us to create a small group dynamic while preserving the Task Force's organizational diversity.
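A minimal sketch of this stratified assignment, with hypothetical participant data, is shown below.

```python
import random

# Hypothetical sketch of the stratified random split into Groups A and B.
def stratified_split(participants, seed=42):
    """participants: list of (name, organization_type) tuples."""
    rng = random.Random(seed)
    groups = {"A": [], "B": []}
    by_type = {}
    for name, org_type in participants:
        by_type.setdefault(org_type, []).append(name)
    for members in by_type.values():
        rng.shuffle(members)
        # Alternate assignments within each organization type so both groups
        # retain the Task Force's organizational diversity.
        for i, name in enumerate(members):
            groups["A" if i % 2 == 0 else "B"].append(name)
    return groups
```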

We partitioned the full‐day workshop into several distinct activities, starting with the pre‐workshop questionnaire. Then researchers presented an overview of the forecast and interfaces and explained how the Core Science Team influenced development. To avoid the perception of a “black box” system, participants were encouraged to ask questions and challenge the forecast assumptions during this time (Voinov et al. 2016). Stakeholders were then split into the stratified groups, with Group A assessing EU1 spread at the PoPS Tangible Landscape station and Group B assessing NA1 spread at the PoPS Forecasting Platform station. Researchers walked participants through the functionality of the interfaces, demonstrated the adaptive management workflow, and encouraged participants to test management scenarios. Following lunch, the groups switched stations to ensure that all participants used both interfaces. Each modeling session lasted 90 min, the upper limit recommended for group activities (Kaner et al. 2007). Once all participants had used both interfaces, we administered the post‐workshop questionnaire followed by a semistructured group discussion to summarize key insights and promote convergent thinking, a creative‐thinking activity shown to aid group decision‐making (Cropley 2006, Kaner et al. 2007). Immediately following the workshop, the Task Force held a meeting to review the current treatment plan in light of what was learned via the forecast.

Questionnaires

We administered questionnaires before and after the workshop, a common technique in participatory modeling (White et al. 2010, Blades et al. 2016, Morisette et al. 2017). Questionnaires were adapted from our prior participatory work with PoPS Tangible Landscape (Gaydos et al. 2019), and were a combination of open‐ended questions, 5‐point Likert‐items, and yes/no questions (Dillman et al. 2014, Fowler 2014).

The pre‐workshop questionnaire gauged participants' background knowledge of GIS and disease forecasts, as technological expertise may influence perceptions of the forecasts and interfaces (Harvey and Chrisman 1998, Gaydos et al. 2019). The post‐workshop questionnaire collected participants' perceptions of the forecast, the interfaces, and the workshop using three common constructions of preference: credibility (scientific accuracy), salience (usefulness), and legitimacy (respectfulness and mediation of different viewpoints) (Van Voorn et al. 2016).

Stakeholders assessed forecast credibility by rating the accuracy of simulation processes, assumptions, and host data. We focused on the host data specifically because stakeholders previously identified this as an area for improvement (Gaydos et al. 2019). Salience was measured by interface usability (described in detail below), how the interface enabled treatment prioritization, and whether stakeholders could test all relevant treatment scenarios with the forecast. Legitimacy was assessed by how the interfaces facilitated discussion and collaboration, the perception that treatment functionality was designed to meet stakeholder needs, and whether the workshop facilitated open discussion. As the forecast and treatment type (culling tanoak) were the same for both interfaces, questions about processes, assumptions, underlying host data, and treatments were asked only once. In contrast, questions about usability, treatment prioritization, and enabling discussion and collaboration could vary by interface, so we assessed them separately for PoPS Tangible Landscape and PoPS Forecasting Platform.

We integrated the System Usability Scale into the post‐workshop questionnaire to assess the usability of both interfaces. Often described as a “quick and dirty” survey, the System Usability Scale provides a broad, subjective measure of usability, which has proven to be a reliable and valid industry standard (Brooke 1996, Bangor et al. 2008). It can be applied to a wide range of technologies and has been used to assess differences between interfaces, making it highly applicable to our case (Bangor et al. 2008). It is composed of ten 5‐point Likert‐items, alternating between positive and negative concepts to limit response bias (Dillman et al. 2014, Fowler 2014). Total scores are converted to a 0–100 scale, with 100 being the most positive (Brooke 1996), which relates to a qualitative scale with categories including: best imaginable, excellent, good, okay, poor, awful, and worst imaginable (Brooke 1996, Bangor et al. 2008).
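For reference, the standard Brooke (1996) scoring can be computed as in the short sketch below; the example responses are invented, not workshop data.

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses
    (standard Brooke 1996 scoring): odd-numbered items are positively worded,
    even-numbered items negatively worded."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # rescale the 0-40 sum to the 0-100 SUS scale

# Example: answering 4 on every positive item and 2 on every negative item
# yields 75, "good" on the Bangor et al. (2008) adjective scale.
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```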

Additionally, we collected open‐ended responses to better understand collaborative learning, a key aspect of participatory modeling, and to guide future research and development (Voinov et al. 2016, Jordan et al. 2018). By asking participants to self‐report what they learned, we gathered rich contextual information about stakeholders' perceptions of the workshop. Additionally, to expand our forecast functionality, participants were asked to describe any relevant management scenarios that could not be tested as‐is. Lastly, participants were asked to give broad feedback on the forecast, interfaces, and workshop to inform future collaborative forecasting workshops. All open‐ended answers were inductively coded into emergent themes by two researchers. Categorization differences were thoroughly investigated and discussed until both researchers reached consensus.

Results

At the Sudden Oak Death Management Scenarios Workshop, 34 participants represented 24 organizations spanning state and local government [8], federal government [6], nonprofits [5], timber industry [3], academia [1], and tribal government [1]. Three participants were members of the Core Science Team of the Task Force and had attended the first participatory workshop (Gaydos et al. 2019). A stratified random sample split participants into two smaller subgroups, A and B. Group A had 15 participants representing the federal government [4], state/local government [4], academia [3], timber industry [2], and nonprofits [2]. Importantly, the three members of the Core Science Team who were present at our first workshop were all randomly assigned to Group A. Group B had 17 participants representing the federal government [4], state/local government [6], academia [2], timber industry [1], nonprofits [3], and tribal government [1]. Additionally, two participants were not assigned a group because they could only attend one of the two modeling sessions. Rather, these participants briefly participated in both groups to interact with both interfaces.

Stakeholder preferences

Of the 34 participants, 32 (94%) returned the pre‐workshop questionnaire and 29 (85%) returned the post‐workshop questionnaire. One post‐workshop survey was incomplete, so comparisons of PoPS Tangible Landscape and PoPS Forecasting Platform had a sample size of 28. Responses were coded from 1 to 5, where 3 is neutral and 5 is the most positive. While Groups A and B returned a similar number of questionnaires (Group A: 15 pre, 14 post; Group B: 15 pre, 13 post), Group B had a slightly lower response rate due to a larger group size (Group A: 100% for pre, 93% for post; Group B: 88% for pre, 76% for post). Participants who missed any part of either the pre‐ or post‐workshop surveys were given two opportunities to respond to an online version. While nonresponses could potentially bias our results, we find this response rate comparable with other similar workshops (Blades et al. 2016).

We characterized participants' familiarities with GIS and disease simulations, which may impact their perceptions of the forecast and interfaces (Harvey and Chrisman 1998). Most stakeholders considered themselves familiar with GIS (average = 4.38), but only a third (34%) had used it for forest disease management. Participants were less familiar with disease forecasts (average = 3.34), and fewer than a third (28%) had used them. Only 7 of the 32 participants who completed the pre‐workshop questionnaire had used both GIS and disease simulations, suggesting that most participants were new to the geospatial disease forecasts used in the workshop.

Participants predominantly found the forecast credible, especially simulation processes (average = 4.20) and assumptions (average = 4.12), which were widely regarded as accurate representations of the system. Responses were positive, but more varied, regarding host data (average = 3.85), suggesting that this may still be an area for improvement. Notably, a handful of participants (4 or 5) chose the “don't know” option, so while most felt confident in the scientific accuracy of the forecast and data, others felt unable to comment. We conducted Mann‐Whitney U tests to compare whether participants with background knowledge of GIS and disease forecasts rated aspects of credibility differently than those without this technical background. Results revealed no significant differences (Table 1), suggesting that technological expertise had little effect on how credible participants found the forecast and data.
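For readers who wish to reproduce this kind of comparison, a minimal example using SciPy is shown below; the rating values are invented for illustration and are not the workshop data.

```python
from scipy.stats import mannwhitneyu

# Illustrative comparison of credibility ratings between participants with and
# without prior GIS experience (made-up Likert responses).
with_gis = [5, 4, 4, 5, 4, 3, 4]
without_gis = [4, 4, 3, 5, 4, 4, 3, 4]
stat, p = mannwhitneyu(with_gis, without_gis, alternative="two-sided")
print(f"U = {stat}, P = {p:.2f}")  # P > 0.05 suggests no detectable difference
```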

Table 1.

Evaluation of how participants' technical background affected ratings of the forecast and interfaces.

Credibility
Forecast processes: with/without GIS experience 3.8 (0.67)/4.3 (0.77), P = 0.66; with/without disease model experience 4.3 (0.71)/4.2 (0.75), P = 0.89.
Forecast assumptions: with/without GIS experience 4.3 (0.48)/4.0 (0.58), P = 0.22; with/without disease model experience 4.1 (0.35)/4.1 (0.64), P = 0.94.
Host data: with/without GIS experience 3.8 (1.09)/4.0 (0.88), P = 0.64; with/without disease model experience 4.1 (0.83)/3.8 (1.01), P = 0.52.

Salience
Management capabilities (forecast): with/without GIS experience 4.4 (0.70)/4.0 (0.84), P = 0.23; with/without disease model experience 4.0 (0.83)/4.2 (0.76), P = 0.53.
System Usability Scale (tangible interface): with/without GIS experience 75.3 (8.70)/71.0 (9.78), P = 0.35; with/without disease model experience 74.1 (9.25)/71.9 (9.73), P = 0.76.
System Usability Scale (web interface): with/without GIS experience 68.61 (11.18)/70.75 (12.90), P = 0.14; with/without disease model experience 74.38 (10.50)/67.38 (12.42), P = 0.21.
Prioritizing treatments (tangible interface): with/without GIS experience 4.6 (0.52)/4.5 (0.51), P = 0.63; with/without disease model experience 4.5 (0.53)/4.5 (0.51), P = 0.83.
Prioritizing treatments (web interface): with/without GIS experience 4.5 (0.53)/4.1 (0.58), P = 0.10; with/without disease model experience 4.4 (0.52)/4.2 (0.62), P = 0.53.

Legitimacy
Facilitating discussion (tangible interface): with/without GIS experience 4.6 (0.52)/4.5 (0.78), P = 0.84; with/without disease model experience 4.6 (0.52)/4.6 (0.76), P = 0.10.
Facilitating discussion (web interface): with/without GIS experience 4.7 (0.48)/4.2 (0.43), P = 0.01*; with/without disease model experience 4.5 (0.53)/4.4 (0.49), P = 0.48.
Enabling collaboration (tangible interface): with/without GIS experience 4.4 (0.52)/4.5 (0.51), P = 0.64; with/without disease model experience 4.5 (0.53)/4.5 (0.51), P = 0.84.
Enabling collaboration (web interface): with/without GIS experience 4.5 (0.53)/3.9 (0.73), P = 0.04*; with/without disease model experience 4.2 (0.71)/4.1 (0.72), P = 0.66.
Management capabilities designed for stakeholders (forecast): with/without GIS experience 3.9 (0.57)/3.8 (0.65), P = 0.60; with/without disease model experience 3.9 (0.64)/3.8 (0.62), P = 0.79.

Participants were divided by whether they had used either geospatial information systems or disease simulations prior to the workshop. The System Usability Scale ranges from 0 to 100 with 100 being most positive. All other metrics range from 1 to 5, with 5 being the most positive. We report average scores with standard deviation in parentheses. Significant differences between those with and without prior experience are denoted by an *.

Studies have determined that 69.69 is an average rating on the System Usability Scale (Bangor et al. 2008). Here, we find that both PoPS Tangible Landscape (average = 71.34) and PoPS Forecasting Platform (average = 68.26) had average usability (Table 2). When converted to the qualitative scale described by Bangor et al. (2008), both systems were considered "good." While the usability ratings were similar, PoPS Tangible Landscape received higher scores from 16 participants. Users' interface ratings were compared via a paired Wilcoxon signed‐rank test, and we found a significant preference for PoPS Tangible Landscape (Table 2). However, for both interfaces, prior GIS or disease forecasting experience did not significantly affect usability ratings (Mann‐Whitney U tests; Table 1).
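A comparable paired test on per‐participant usability scores can be run as follows, again with invented numbers rather than the workshop data.

```python
from scipy.stats import wilcoxon

# Illustrative paired comparison of SUS scores for the two interfaces
# (one pair of scores per participant; values are made up).
tangible = [72.5, 80.0, 67.5, 75.0, 70.0, 77.5, 65.0, 82.5]
web = [70.0, 72.5, 65.0, 77.5, 62.5, 70.0, 60.0, 75.0]
stat, p = wilcoxon(tangible, web)
print(f"W = {stat}, P = {p:.3f}")
```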

Table 2.

Comparison of how participants perceived PoPS Tangible Landscape and PoPS Forecasting Platform.

Salience, System Usability Scale: tangible interface 71.34 (9.48), web interface 68.26 (12.15); paired Wilcoxon test P = 0.04*.
Salience, prioritizing treatments: tangible interface 4.53 (0.51), web interface 4.25 (0.59); paired Wilcoxon test P = 0.02*.
Legitimacy, facilitating discussion: tangible interface 4.57 (0.69), web interface 4.39 (0.50); paired Wilcoxon test P = 0.20.
Legitimacy, enabling collaboration: tangible interface 4.45 (0.51), web interface 4.14 (0.71); paired Wilcoxon test P = 0.04*.

The System Usability Scale ranges from 0 to 100 with 100 being most positive. All other metrics range from 1 to 5, with 5 being the most positive. We report average scores with standard deviation in parentheses. * denotes significant differences.

Other measures of salience were also rated favorably. Participants found both interfaces useful for prioritizing treatment locations (tangible average = 4.53, web average = 4.25), although paired Wilcoxon tests found that participants preferred PoPS Tangible Landscape (Table 2). The forecast's ability to address all relevant scenarios was positive but was the lowest rated aspect of salience (average = 4.06). A majority of participants (58%) did not offer suggestions for additional management capabilities. For those that did, comments were inductively coded into seven themes by two researchers: (1) scenarios including landowner information [3], (2) additional treatment customizations [3], (3) other case studies [2], (4) additional funding scenarios [1], (5) extended time frames [1], (6) other SOD hosts [1], and (7) adding new infection locations [1] (Appendix S1: Table S1). We found that participants' GIS and disease simulation backgrounds did not affect ratings, nor how likely a person was to suggest additional scenarios (Table 1).

Overall, stakeholders thought the treatment functionality was designed to meet their needs (average = 3.79), but this rating was only weakly positive. However, ratings of the other legitimacy metrics were strongly positive. Both interfaces were rated more highly in their ability to facilitate discussion (tangible average = 4.57, web average = 4.39) than in their ability to enable collaboration (tangible average = 4.45, web average = 4.14). Stakeholders preferred PoPS Tangible Landscape in both cases, but this preference was only significant for the collaboration metric (Table 2). We found no effect of prior GIS or disease simulation experience on ratings of PoPS Tangible Landscape (Table 1). However, we found that those with GIS experience were statistically more likely to consider PoPS Forecasting Platform effective for facilitating discussion, although this relationship was not seen with disease simulation experience (Table 1).

Additionally, we assessed participants' perceptions of the workshop itself. Participants overwhelmingly agreed that the workshop encouraged open discussion of ideas (average = 4.44) and promoted learning (average = 4.58). Furthermore, 86% of participants provided comments about what they learned, which were inductively coded into five themes: (1) the forecasting tools [14], (2) the effectiveness of treatments [7], (3) the importance of collaborative learning [5], (4) the educational potential [3], and (5) SOD in general [3] (Appendix S1: Table S2). All organization types mentioned the forecasting tools and the effectiveness of treatments, suggesting that these were the most ubiquitous takeaways.

Ten participants (34%) suggested improvements to the forecast, interfaces, or workshop relating to five themes: (1) additional data visualizations [6], (2) structuring group modeling activities [5], (3) additional outcome metrics [3], (4) additional scenarios [3], and (5) additional treatment features [2] (Appendix S1: Table S3). Some comments overlapped with the aforementioned suggestions for additional management capacities. Several suggestions reflect a desire for additional case‐specific data and treatment options. Stakeholders also offered suggestions on the workshop format, including shortening the forecasting sessions and incorporating more structured scenarios, which will inform future workshops. Lastly, several participants offered compliments that contained no suggestions [8]. In most cases, both interfaces were mentioned, but PoPS Tangible Landscape [3] was highlighted in comments more than PoPS Forecasting Platform [1]. Following the workshop, some participants inquired about using the interfaces for other case studies, including teaching, citizen science, and other pest and pathogen systems. Intriguingly, more participants expressed interest in PoPS Forecasting Platform [3] than PoPS Tangible Landscape [1], likely because it can be accessed from a standard laptop.

Scenario outcomes

The interfaces enabled stakeholders to generate 13 scenarios of SOD management (Table 3). Interestingly, Group B produced more scenarios, but Group A had a very productive exchange of information that led to several key suggestions for system improvement. A majority of the scenarios [9] focused on the aggressive EU1 strain, which has been of particular concern to the Task Force; all of these were conducted with PoPS Tangible Landscape. The remaining four scenarios focused on the NA1 strain and were conducted using PoPS Forecasting Platform. Scenarios were analyzed by total costs, infected area in 2022, management tactics, and budget distribution (Table 3). Strategies were categorized into the following tactics described in Filipe et al. (2012) and Cunniffe et al. (2016): (1) wave front: removing infections along the spreading‐front of the disease, (2) foci: removing densely infected areas near the disease epicenter, (3) eradication: removing all or nearly all infections in a single year, (4) host barrier: removing a line of hosts ahead of the spreading‐front, and (5) preemptive host removal: removing uninfected areas with high host density that were nonadjacent to current infections. Importantly, stakeholders often combined tactics or applied different tactics in different years of the simulation. Additionally, stakeholders examined how to distribute their three‐year budget. Some modeling studies have demonstrated that front‐loading a budget (spending most of the budget early in disease management) is a cost‐effective way to reduce long‐term disease impacts (Cunniffe et al. 2016, Thompson et al. 2018). Stakeholders tested this front‐loading strategy in four of the scenarios. Additionally, stakeholders evaluated some extreme budget scenarios, such as NA1 Scenario 4, which were unrealistic but provided insights into disease control.

Table 3.

Stakeholders developed 13 scenarios of SOD management in Oregon, with nine focusing on the EU1 strain and four focusing on the NA1 strain.

EU1 No Management Scenario: no treatments; 1,282 acres infected in 2022.
EU1 Scenario 1 (Group A, Tangible): year 1 eradication; year 2 eradication; year 3 eradication. Budget evenly divided; US$2,801,314 spent; 66 acres infected in 2022.
EU1 Scenario 2 (Group A, Tangible): year 1 eradication, host barrier; year 2 eradication, host barrier, preemptive host removal; year 3 eradication. Budget evenly divided; US$2,780,255 spent; 27 acres infected in 2022.
EU1 Scenario 3 (Group A, Tangible): year 1 wave front; year 2 wave front; year 3 wave front, foci. Budget evenly divided; US$2,117,724 spent; 596 acres infected in 2022.
EU1 Scenario 4 (Group B, Tangible): year 1 host barrier, wave front; year 2 wave front; year 3 wave front, foci. Budget evenly divided; US$2,812,770 spent; 511 acres infected in 2022.
EU1 Scenario 5 (Group B, Tangible): year 1 eradication; year 2 wave front; year 3 eradication. Budget evenly divided; US$2,736,891 spent; 160 acres infected in 2022.
EU1 Scenario 6* (Group B, Tangible): year 1 wave front; year 2 wave front; year 3 wave front, host barrier. Budget evenly divided; US$1,344,419 spent; 487 acres infected in 2022.
EU1 Scenario 7 (Group B, Tangible): years 1–3 eradication. Budget front loaded; US$2,384,983 spent; 34 acres infected in 2022.
EU1 Scenario 8† (Group B, Tangible): years 1–3 eradication. Budget front loaded; US$5,038,509 spent; 54 acres infected in 2022.
EU1 Scenario 9 (Group B, Tangible): years 1–3 eradication. Budget front loaded; US$4,624,446 spent; 34 acres infected in 2022.
NA1 No Management Scenario: no treatments; 6,766 acres infected in 2022.
NA1 Scenario 1* (Group A, Web): years 1–3 wave front. Budget evenly divided; US$7,979,514 spent; 6,009 acres infected in 2022.
NA1 Scenario 2 (Group B, Web): years 1–3 host barrier, wave front. Budget evenly divided; US$7,152,207 spent; 6,401 acres infected in 2022.
NA1 Scenario 3 (Group B, Web): years 1–3 wave front. Budget evenly divided; US$8,320,519 spent; 5,974 acres infected in 2022.
NA1 Scenario 4† (Group B, Web): year 1 eradication; years 2–3 no management. Budget front loaded; US$520,626,142 spent; 91 acres infected in 2022.

Scenarios are differentiated by money spent, area infected in 2022, tactics applied, and how budget was distributed. The most cost‐effective and least cost‐effective scenarios for each strain are designated with an * and †, respectively.

Discussion

Forecasts are effective for exploring alternative trajectories of biological invasions but will have less impact on control strategies if inaccessible to decision‐makers (Voinov and Bousquet 2010). Reflecting other studies, our surveys show that despite familiarity with forecasting tools, there has been little adoption among SOD stakeholders in Oregon (Gaydos et al. 2019). This knowledge‐practice gap is common and limits forecasts' potential to address ecological challenges (Matzek et al. 2014, Cunniffe et al. 2015, Knight et al. 2016). We demonstrate how user‐friendly interfaces and participatory approaches can reduce this gap by enabling adaptive experimentation, driving future research, and promoting collaborative learning.

On‐the‐fly collaborative exploration of intervention strategies would have been tedious, if not nearly impossible, with the code‐based versions of the forecast. Both interfaces were a substantial improvement in this regard. Both were broadly considered credible, salient, and legitimate, and had "good" usability scores, reflecting a participant's comment that the forecast was "extremely useful from a planning perspective." For most metrics, ratings were unaffected by prior experience with GIS and disease forecasts, suggesting that the interfaces successfully reduced traditional technological barriers to use (Table 1). Further, the adaptive management capacity, which arose from our previous participatory workshop (Gaydos et al. 2019, Petrasova et al. 2020), empowered stakeholders to generate novel intervention scenarios (Table 3). Although the management tactics themselves (e.g., wave front, host barrier) were not novel (Cunniffe et al. 2016, Filipe et al. 2012), stakeholders combined the tactics in unique ways. Most forecasting studies apply only one tactic per scenario, which prevents the kind of adaptive experimentation that is vital for natural resource management (Serrouya et al. 2019). Here, stakeholders combined several tactics and changed their approaches through time, which may better reflect the realistic complexities of control (Walters and Holling 1990, Kanaskie et al. 2011). These results highlight how interactive forecasting interfaces can open new analytical doors for stakeholders and researchers alike.

Furthermore, we saw broad benefits from the collaborative interactions. Participants reported that the workshop encouraged open discussion and learning, fundamental aspects of participatory modeling (Voinov et al. 2016, Jordan et al. 2018). Comments suggest that participants learned about a broad range of topics, with the effectiveness of treatment scenarios as one of the most ubiquitous takeaways (Appendix S1: Table S2). One participant remarked that “it made more sense that a wide buffer may not be a silver bullet for stopping the spread northeast” (Appendix S1: Table S2), a particularly compelling comment because host barrier strategies had been proven ineffective for SOD by other forecasting analyses (Cunniffe et al. 2016, Filipe et al. 2012). The information from these studies had not reached this decision‐maker, highlighting a classic knowledge‐practice gap. However, through interactive forecasting, participants quickly discovered the effectiveness (or in this case, ineffectiveness) of tactics, demonstrating the power of experimental learning. When collaborative, stakeholders may also learn about each other as they negotiate their diverse perspectives to design scenarios (Jordan et al. 2018, White et al. 2018). Several participants commented that they “learned a lot about what stakeholders value most in containing SOD,” suggesting that there may be ancillary socio‐political outcomes of the participatory process (Appendix S1: Table S2). These results demonstrate how user‐friendly interfaces, regardless of mode of interaction, can empower stakeholders to collectively learn about control tactics and others' perspectives.

We observed some key differences, however, in how stakeholders used and perceived the two interfaces. Both groups generated more scenarios using PoPS Tangible Landscape, which exceeded PoPS Forecasting Platform in all survey comparisons (Table 2). Further, this preference was statistically significant for three metrics: system usability, prioritizing treatments, and enabling collaboration. So, while both interfaces provided advantages over code‐based versions, participants preferred PoPS Tangible Landscape for collaborative forecasting. Our observations suggest that this preference was driven by the physical setup and the multiuser interaction. The physical setup energized the group dynamic as participants physically gathered around the system (Fig. 3a), echoing suggestions that workbench‐style tangible interfaces naturally promote group interaction (Coors et al. 1999, Underkoffler and Ishii 1999, Geller 2006, Millar et al. 2018, Petrasova et al. 2018). Additionally, this workbench layout enabled several participants to access the interface at once, paving the way for the multiuser input that was likely the greatest collaborative advantage of PoPS Tangible Landscape (Figs. 1a and 3a). Essentially, stakeholders could construct interventions simultaneously, which facilitated teamwork and reduced the time needed to develop scenarios. We also found that, like other tangible systems (Underkoffler and Ishii 1999), PoPS Tangible Landscape stimulated playful interactions, with one participant commenting, "the Tangible Landscape really gets stakeholders talking and engaging in a fun and constructive manner." This playful dynamic may be especially important, as studies have shown that stakeholders are more likely to prioritize fun or novel engagement activities (White et al. 2018). These factors combined to make PoPS Tangible Landscape a particularly effective forecasting boundary object.

However, the PoPS Forecasting Platform outperformed PoPS Tangible Landscape in some areas. Both researchers and participants observed that PoPS Forecasting Platform provoked in‐depth dialogue, with one participant commenting, "I was fascinated with the Tangible Landscape, but PoPS Forecasting Platform seemed to trigger more inclusive discussion." As with Tangible Landscape, this is partly due to how the interface influenced group dynamics. The single‐user input necessitated greater verbal communication as groups dictated scenarios to the individual drawing treatments (Figs. 1b and 3b). However, we also observed how the dynamic data visualization fueled communication. As participants panned and zoomed around the flexible study extent, they analyzed fine‐scale details of the treatments and the underlying GIS data. This sparked conversations about the patterns of disease spread, the accuracy of the underlying host data, the spatial configuration of treatments, and what other data would be relevant for decision‐making, which ultimately inspired participants' recommendations. Essentially, the web interface became a platform to analyze the GIS data, which may explain why those with GIS experience rated it more favorably for facilitating discussion (Table 1). As mediating knowledge transfer is a central function of boundary objects, this aspect of the PoPS Forecasting Platform warrants recognition.

Given the strengths of both systems, we suggest developing workbench‐style interfaces that support intuitive multiuser interactions and dynamic geospatial visualizations. Multitouch tables, like those commonly found in interactive museum displays, naturally fit this context because they are intuitive, visually dynamic, responsive to touch, and enable multiuser data exploration (Geller 2006, Antle et al. 2011, Higgins et al. 2012). We are currently exploring how to customize such tables for the collaborative forecasting of biological invasions. However, we acknowledge that such workbench‐style interfaces will be most impactful when stakeholders are collocated and can physically convene. When this is not the case, forecasting tools should seek to enable distributed, virtual collaboration. With this in mind, we are exploring how websockets (Lombardi 2015), commonly used to host online chat rooms, may enable multiuser input for the PoPS Forecasting Platform. This virtual collaboration may be especially beneficial for agencies that coordinate pest management and research at national and international scales, such as the United States Department of Agriculture's (USDA) Animal and Plant Health Inspection Service (APHIS) and the United States Forest Service (USFS). We will continue to work alongside such partners as we expand our multiuser web capabilities. Additionally, Web3D is becoming a vital tool in distributed science, and we are exploring how 3D visualizations could enhance the web platform's interactivity and increase its similarity to PoPS Tangible Landscape (Zhang et al. 2007). By linking dynamic data visualization with multiuser interactions, both the multitouch tables and the collaborative web portal are promising alternatives for increasing the collaborative capacity of biological forecasts.

Notably, we have seen continued interest in the forecast and interfaces following the workshop. Four participants inquired about using PoPS Tangible Landscape and PoPS Forecasting Platform in other applications, including teaching, citizen science, and forecasting other socio‐ecological systems. Additionally, the Oregon SOD Task Force has been mobilizing to expand their management capacity in response to what was learned at the workshop (Sarah Navarro, personal communication). Although continued engagement with these stakeholders will be necessary to fully incorporate such forecasting tools into the decision‐making process, we consider these positive indications that stakeholders saw direct value in forecasting “what if” management scenarios. By documenting our process and sharing our data and code, we provide a path for other researchers to build upon these forecasting systems or to develop interactive forecasting systems of their own. Our work demonstrates how interactive forecasting systems can enable analytical scenario experimentation by diverse users, thereby empowering stakeholders to collaboratively develop novel and adaptive solutions to biological invasions.

Supporting information

Appendix S1

Acknowledgments

We give special thanks to the workshop participants who contributed their time and insights to the project. We particularly thank Sarah Navarro, Mark Labhart, and the Oregon Department of Forestry staff for workshop assistance. We thank Nick Kruskamp for developing the host data, Payam Tabrizian for his contributions to the tangible and web interfaces, Megan Skrip for editorial assistance, and members of the Center for Geospatial Analytics who provided feedback throughout the research. All authors contributed significantly to the conception and implementation of this interdisciplinary research. D.A.G., C.M.J., A.P., V.P., G.C.M., and R.K.M. facilitated the workshop. D.A.G. designed outreach materials, developed questionnaires, and analyzed workshop outcomes with input from G.C.M. C.M.J. and D.A.G. developed the SOD forecast. With significant input from all authors, A.P. developed PoPS Tangible Landscape, and C.M.J. and S.K.J. developed the PoPS Forecasting Platform. S.K.J. designed Fig. 1, D.A.G. designed Fig. 2, and V.P. designed Fig. 3. D.A.G. wrote the manuscript and G.C.M. contributed substantially to revisions. We declare we have no competing interests. This project was supported, in part, by the USDA Forest Service Pacific Southwest Research Station Forest Health Protection Special Technology Development Program (award number R5‐2016‐05), the joint National Science Foundation‐National Institutes of Health Ecology and Evolution of Infectious Diseases Program (grant number 2015‐67013‐23818), the Southeast Climate Adaptation Science Center Global Change Fellowship Program, and Cooperative Agreements from the United States Department of Agriculture's Animal and Plant Health Inspection Service (APHIS). This work may not necessarily express APHIS's views. Additionally, this work was supported by Google Cloud and NVIDIA.

Gaydos, D. A. , Jones C. M., Jones S. K., Millar G. C., Petras V., Petrasova A., Mitasova H., and Meentemeyer R. K.. 2021. Evaluating online and tangible interfaces for engaging stakeholders in forecasting and control of biological invasions. Ecological Applications 31(8):e02446. 10.1002/eap.2446

Corresponding Editor: E. Natasha Stavros.

Open Research

Software and tools developed through this research are open‐source and freely available under the GNU General Public License. The PoPS Forecast code, along with version history, is compiled and available via the Open Science Framework at https://doi.org/10.5281/zenodo.5015078. The PoPS Forecasting Platform can be accessed via the rpops package (Jones et al. 2021b) on Zenodo at https://doi.org/10.5281/zenodo.5015078 or GRASS GIS (Petras et al. 2018) at https://doi.org/10.5281/zenodo.5160178. Detailed instructions on how to set up and use Tangible Landscape (Petrasova et al. 2021) can be found at https://doi.org/10.5281/zenodo.5160087. Data and code (Jones and Gaydos 2021) specific to the case study are provided on Zenodo at https://doi.org/10.5281/zenodo.4708142. Questionnaire data can be found in the electronic supplemental material, Appendix S1.

Literature Cited

1. Antle, A. N., Tanenbaum J., Bevans A., Seaborn K., and Wang S. 2011. Balancing act: enabling public engagement with sustainability issues through a multi‐touch tabletop collaborative game. Pages 194–211 in IFIP Conference on Human‐Computer Interaction. Springer, Berlin, Heidelberg, Germany. 10.1007/978-3-642-23771-3_16
2. Bangor, A., Kortum P. T., and Miller J. T. 2008. An empirical evaluation of the system usability scale. International Journal of Human‐Computer Interaction 24:574–594. 10.1080/10447310802205776
3. Bayliss, H., Stewart G., Wilcox A., and Randall N. 2013. A perceived gap between invasive species research and stakeholder priorities. NeoBiota 19:67. 10.3897/neobiota.19.4897
4. Blades, J. J., Klos P. Z., Kemp K. B., Hall T. E., Force J. E., Morgan P., and Tinkham W. T. 2016. Forest managers' response to climate change science: Evaluating the constructs of boundary objects and organizations. Forest Ecology and Management 360:376–387. 10.1016/j.foreco.2015.07.020
5. Brooke, J. 1996. SUS‐A quick and dirty usability scale. Usability Evaluation in Industry 189:4–7.
6. Cash, D. W., Borck J. C., and Patt A. G. 2006. Countering the loading‐dock approach to linking science and decision making: comparative analysis of El Niño/Southern Oscillation (ENSO) forecasting systems. Science, Technology, & Human Values 31:465–494. 10.1177/0162243906287547
7. Cobb, R. C., Filipe J. A., Meentemeyer R. K., Gilligan C. A., and Rizzo D. M. 2012. Ecosystem transformation by emerging infectious disease: loss of large tanoak from California forests. Journal of Ecology 100:712–722. 10.1111/j.1365-2745.2012.01960.x
8. Coors, V., Jasnoch U., and Jung V. 1999. Using the virtual table as an interaction platform for collaborative urban planning. Computers & Graphics 23:487–496. 10.1016/S0097-8493(99)00068-0
9. Crimmins, T. M., Rosemartin A., Gerst K., Posthumus E., Morelli T. L., and Wallace C. 2019. Better together: A story of co‐production between the USA National Phenology Network and Invasive Species Managers. Page PA53A‐08 in AGU Fall Meeting Abstracts (Vol. 2019). https://agu.confex.com/agu/fm19/meetingapp.cgi/Paper/525114
10. Cropley, A. 2006. In praise of convergent thinking. Creativity Research Journal 18:391–404. 10.1207/s15326934crj1803_13
11. Crowley, S. L., Hinchliffe S., and McDonald R. A. 2017. Conflict in invasive species management. Frontiers in Ecology and the Environment 15:133–141. 10.1002/fee.1471
12. Cunniffe, N. J., Cobb R. C., Meentemeyer R. K., Rizzo D. M., and Gilligan C. A. 2016. Modeling when, where, and how to manage a forest epidemic, motivated by sudden oak death in California. Proceedings of the National Academy of Sciences of the United States of America 113:5640–5645. 10.1073/pnas.1602153113
13. Cunniffe, N. J., Stutt R. O., DeSimone R. E., Gottwald T. R., and Gilligan C. A. 2015. Optimising and communicating options for the control of invasive plant disease when there is epidemiological uncertainty. PLoS Computational Biology 11:e1004211. 10.1371/journal.pcbi.1004211
14. Dietze, M. C., et al. 2018. Iterative near‐term ecological forecasting: Needs, opportunities, and challenges. Proceedings of the National Academy of Sciences of the United States of America 115:1424–1432. 10.1073/pnas.1710231115
15. Dillman, D. A., Smyth J. D., and Christian L. M. 2014. Internet, phone, mail, and mixed‐mode surveys: the tailored design method. Fourth edition. John Wiley & Sons, Hoboken, New Jersey, USA. ISBN‐13: 978‐1118456149.
16. Dix, A., Finlay J., Abowd G. D., and Beale R. 2004. Human‐computer interaction. Third edition. Pearson Education Limited, Harlow, Essex, England. ISBN‐13: 978‐0‐13‐046109‐4.
17. Epanchin‐Niell, R. S., Hufford M. B., Aslan C. E., Sexton J. P., Port J. D., and Waring T. M. 2010. Controlling invasive species in complex social landscapes. Frontiers in Ecology and the Environment 8:210–216. 10.1890/090029
18. Filipe, J. A., Cobb R. C., Meentemeyer R. K., Lee C. A., Valachovic Y. S., Cook A. R., Rizzo D. M., and Gilligan C. A. 2012. Landscape epidemiology and control of pathogens with cryptic and long‐distance dispersal: sudden oak death in northern Californian forests. PLoS Computational Biology 8(1):e1002328. 10.1371/journal.pcbi.1002328
19. Fowler Jr., F. J. 2014. Survey research methods. Fifth edition. Sage Publications, Thousand Oaks, California, USA. ISBN: 978‐1‐4833‐1240‐8.
20. Gaydos, D. A., Petrasova A., Cobb R. C., and Meentemeyer R. K. 2019. Forecasting and control of emerging infectious forest disease through participatory modelling. Philosophical Transactions of the Royal Society B 374:20180283. 10.1098/rstb.2018.0283
21. Geller, T. 2006. Interactive tabletop exhibits in museums and galleries. IEEE Computer Graphics and Applications 26:6–11. 10.1109/MCG.2006.111
22. Gilligan, C. A., and van den Bosch F. 2008. Epidemiological models for invasion and persistence of pathogens. Annual Review of Phytopathology 46:385–418. 10.1146/annurev.phyto.45.062806.094357
23. GRASS Development Team. 2019. Geographic Resources Analysis Support System (GRASS) Software, Version 7.6. Open Source Geospatial Foundation, Beaverton, Oregon, USA. http://grass.osgeo.org
24. Grünwald, N. J., Larsen M. M., Kamvar Z. N., Reeser P. W., Kanaskie A., Laine J., and Wiese R. 2016. First report of the EU1 clonal lineage of Phytophthora ramorum on tanoak in an Oregon forest. Plant Disease 100(5):1024. 10.1094/PDIS-10-15-1169-PDN
25. Hansen, E. M., et al. 2008. Epidemiology of Phytophthora ramorum in Oregon tanoak forests. Canadian Journal of Forest Research 38(5):1133–1143. 10.1139/X07-217
26. Hansen, E., Reeser P., Sutton W., Kanaskie A., Navarro S., and Goheen E. M. 2019. Efficacy of local eradication treatments against the sudden oak death epidemic in Oregon tanoak forests. Forest Pathology 49:e12530. 10.1111/efp.12530
27. Harvey, F., and Chrisman N. 1998. Boundary objects and the social construction of GIS technology. Environment and Planning A 30:1683–1694. 10.1068/a301683
28. Higgins, S., Mercier E., Burd L., and Joyce‐Gibbons A. 2012. Multi‐touch tables and collaborative learning. British Journal of Educational Technology 43:1041–1054. 10.1111/j.1467-8535.2011.01259.x
29. Highland Economics & Mason, Bruce & Girard, Inc. 2019. Sudden oak death: Economic impact assessment. Highland Economics & Mason, Bruce & Girard, Inc., Portland, Oregon, USA. https://www.oregon.gov/ODF/Documents/ForestBenefits/sudden‐oak‐death‐economic‐impact‐assessment.pdf
30. Ishii, H. 2008. Tangible bits: beyond pixels. Pages xv–xxv in Proceedings of the 2nd International Conference on Tangible and Embedded Interaction. 10.1145/1347390.1347392
31. Jones, C., and Gaydos D. 2021. ncsu‐landscape‐dynamics/sod‐oregon‐casestudy: Sod Oregon Casestudy (v1.0). Zenodo. 10.5281/zenodo.4708142
32. Jones, C. M., Jones S., Petrasova A., Petras V., Gaydos D., Skrip M. M., Takeuchi Y., Bigsby K., and Meentemeyer R. 2021a. Iteratively forecasting biological invasions with PoPS and a little help from our friends. Frontiers in Ecology and the Environment 19(7):411–418. 10.1002/fee.2357
33. Jones, C. M., Kruskamp N. F., Petras V., Gaydos D. A., and Lawrimore M. 2021b. ncsu‐landscape‐dynamics/rpops: PoPS 1.1.0 (1.1.0). Zenodo. 10.5281/zenodo.5015078
34. Jordan, R., et al. 2018. Twelve questions for the participatory modeling community. Earth's Future 6:1046–1057. 10.1029/2018EF000841
35. Kanaskie, A., Hansen E., Sutton W., Reeser P., and Choquette C. 2011. Application of phosphonate to prevent sudden oak death in south‐western Oregon tanoak (Notholithocarpus densiflorus) forests. New Zealand Journal of Forestry Science 41S:S177–S187.
36. Kaner, S., Lind L., Toldi C., Fisk S., and Berger D. 2007. Facilitator's guide to participatory decision‐making. Second edition. John Wiley & Sons, San Francisco, California, USA. ISBN‐13: 978‐0‐7879‐8266‐9.
37. Kirsh, D. 2013. Embodied cognition and the magical future of interaction design. ACM Transactions on Computer‐Human Interaction (TOCHI) 20:1–30. 10.1145/2442106.2442109
38. Knight, G. M., Dharan N. J., Fox G. J., Stennis N., Zwerling A., Khurana R., and Dowdy D. W. 2016. Bridging the gap between evidence and policy for infectious diseases: How models can aid public health decision‐making. International Journal of Infectious Diseases 42:17–23. 10.1016/j.ijid.2015.10.024
39. LeBoldus, J. M., Sondreli K. L., Sutton W., Reeser P., Navarro S., Kanaskie A., and Grünwald N. J. 2017. First report of Phytophthora ramorum lineage EU1 infecting Douglas fir and grand fir in Oregon. Plant Disease 102:455. 10.1094/PDIS-05-17-0681-PDN
40. Lombardi, A. 2015. WebSocket: lightweight client‐server communications. O'Reilly Media, Inc., Sebastopol, California, USA. ISBN 978‐1‐449‐36927‐9.
41. Manter, D. K., Kolodny E. H., Hansen E. M., and Parke J. L. 2010. Virulence, sporulation, and elicitin production in three clonal lineages of Phytophthora ramorum. Physiological and Molecular Plant Pathology 74:317–322. 10.1016/j.pmpp.2010.04.008
42. Matzek, V., Covino J., Funk J. L., and Saunders M. 2014. Closing the knowing–doing gap in invasive plant management: accessibility and interdisciplinarity of scientific research. Conservation Letters 7:208–215. 10.1111/conl.12042
43. Meentemeyer, R. K., Cunniffe N. J., Cook A. R., Filipe J. A., Hunter R. D., Rizzo D. M., and Gilligan C. A. 2011. Epidemiological modeling of invasion in heterogeneous landscapes: spread of sudden oak death in California (1990–2030). Ecosphere 2:1–24. 10.1890/ES10-00192.1
44. Millar, G. C., Tabrizian P., Petrasova A., Petras V., Harmon B., Mitasova H., and Meentemeyer R. K. 2018. Tangible landscape: a hands‐on method for teaching terrain analysis. Pages 1–12 in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 10.1145/3173574.3173954
45. Miller, B. W., Symstad A. J., Frid L., Fisichelli N. A., and Schuurman G. W. 2017. Co‐producing simulation models to inform resource management: a case study from southwest South Dakota. Ecosphere 8:e02020. 10.1002/ecs2.2020
46. Mills, P., et al. 2011. Integrating natural and social science perspectives on plant disease risk, management and policy formulation. Philosophical Transactions of the Royal Society of London B: Biological Sciences 366:2035–2044. 10.1098/rstb.2010.0411
47. Morisette, J. T., Cravens A. E., Miller B. W., Talbert M., Talbert C., Jarnevich C., Fink M., Decker K., and Odell E. A. 2017. Crossing boundaries in a collaborative modeling workspace. Society & Natural Resources 30:1158–1167. 10.1080/08941920.2017.1290178
48. Muscatello, D. J., Chughtai A. A., Heywood A., Gardner L. M., Heslop D. J., and MacIntyre C. R. 2017. Translation of real‐time infectious disease modeling into routine public health practice. Emerging Infectious Diseases 23(5):e161720. 10.3201/eid2305.161720
49. Ohmann, J. L., Gregory M. J., Henderson E. B., and Roberts H. M. 2011. Mapping gradients of community composition with nearest‐neighbour imputation: extending plot data for landscape analysis. Journal of Vegetation Science 22:660–676. 10.1111/j.1654-1103.2010.01244.x
50. Olabisi, L. S., Blythe S., Levine R., Cameron L., and Beaulac M. 2016. Participatory, dynamic models: a tool for dialogue. Pages 99–116 in Climate in context: Science and society partnering for adaptation (Vol. 1). John Wiley & Sons, Ltd., Philadelphia, Pennsylvania, USA. 10.1002/9781118474785.ch5
51. Peterson, E. K., Hansen E. M., and Kanaskie A. 2015. Temporal epidemiology of sudden oak death in Oregon. Phytopathology 105:937–946. 10.1094/PHYTO-12-14-0348-FI
52. Petras, V., Petrasova A., Chen Z., and Tonini F. 2018. r.pops.spread ‐ PoPS model implemented as a GRASS GIS module (29074a1). Zenodo. 10.5281/zenodo.5160179
53. Petrasova, A., Gaydos D. A., Petras V., Jones C. M., Mitasova H., and Meentemeyer R. K. 2020. Geospatial simulation steering for adaptive management. Environmental Modelling and Software 133:104801. 10.1016/j.envsoft.2020.104801
54. Petrasova, A., Harmon B., Petras V., Tabrizian P., and Mitasova H. 2018. Tangible modeling with open source GIS. Second edition. Springer International Publishing, New York, New York, USA. 10.1007/978-3-319-89303-7
55. Petrasova, A., Petras V., and Harmon B. 2021. Tangible‐landscape/grass‐tangible‐landscape: Tangible Landscape 1.2.0 (v1.2.0). Zenodo. 10.5281/zenodo.5160088
56. Ratti, C., Wang Y., Ishii H., Piper B., and Frenchman D. 2004. Tangible User Interfaces (TUIs): a novel paradigm for GIS. Transactions in GIS 8:407–421. 10.1111/j.1467-9671.2004.00193.x
57. Serrouya, R., Seip D. R., Hervieux D., McLellan B. N., McNay R. S., Steenweg R., Heard D. C., Hebblewhite M., Gillingham M., and Boutin S. 2019. Saving endangered species using adaptive management. Proceedings of the National Academy of Sciences of the United States of America 116:6181–6186. 10.1073/pnas.1816923116
58. Søndreli, K. L., Kanaskie A., Keriö S., and LeBoldus J. M. 2019. Variation in susceptibility of tanoak to the NA1 and EU1 lineages of Phytophthora ramorum, the cause of sudden oak death. Plant Disease 103:3154–3160. 10.1094/PDIS-04-19-0831-RE
59. Star, S. L. 2010. This is not a boundary object: Reflections on the origin of a concept. Science, Technology, & Human Values 35:601–617. 10.1177/0162243910377624
60. Star, S. L., and Griesemer J. R. 1989. Institutional ecology, 'translations' and boundary objects: amateurs and professionals in Berkeley's Museum of Vertebrate Zoology, 1907–39. Social Studies of Science 19:387–420. 10.1177/030631289019003001
61. Thompson, R. N., Gilligan C. A., and Cunniffe N. J. 2018. Control fast or control smart: When should invading pathogens be controlled? PLoS Computational Biology 14:e1006014. 10.1371/journal.pcbi.1006014
62. Thornton, M. M., Shrestha R., Wei Y., Thornton P. E., Kao S., and Wilson B. E. 2020. Daymet: daily surface weather data on a 1‐km grid for North America, Version 4. ORNL DAAC, Oak Ridge, Tennessee, USA. 10.3334/ORNLDAAC/1840
63. Tonini, F., Shoemaker D., Petrasova A., Harmon B., Petras V., Cobb R. C., Mitasova H., and Meentemeyer R. K. 2017. Tangible geospatial modeling for collaborative solutions to invasive species management. Environmental Modelling & Software 92:176–188. 10.1016/j.envsoft.2017.02.020
64. Ullmer, B., and Ishii H. 2000. Emerging frameworks for tangible user interfaces. IBM Systems Journal 39:915–931. 10.1147/sj.393.0915
65. Underkoffler, J., and Ishii H. 1999. Urp: a luminous‐tangible workbench for urban planning and design. Pages 386–393 in Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. 10.1145/302979.303114
66. van Voorn, G. A. K., Verburg R. W., Kunseler E. M., Vader J., and Janssen P. H. M. 2016. A checklist for model credibility, salience, and legitimacy to improve information transfer in environmental policy assessments. Environmental Modelling & Software 83:224–236. 10.1016/j.envsoft.2016.06.003
67. Voinov, A., and Bousquet F. 2010. Modelling with stakeholders. Environmental Modelling & Software 25:1268–1281. 10.1016/j.envsoft.2010.03.007
68. Voinov, A., and Gaddis E. J. B. 2008. Lessons for successful participatory watershed modeling: a perspective from modeling practitioners. Ecological Modelling 216:197–207. 10.1016/j.ecolmodel.2008.03.010
69. Voinov, A., Kolagani N., McCall M. K., Glynn P. D., Kragt M. E., Ostermann F. O., Pierce S. A., and Ramu P. 2016. Modelling with stakeholders–next generation. Environmental Modelling & Software 77:196–220. 10.1016/j.envsoft.2015.11.016
70. Vukomanovic, J., Skrip M. M., and Meentemeyer R. K. 2019. Making it spatial makes it personal: engaging stakeholders with geospatial participatory modeling. Land 8:38. 10.3390/land8020038
71. Walters, C. J., and Holling C. S. 1990. Large‐scale management experiments and learning by doing. Ecology 71:2060–2068. 10.2307/1938620
72. White, D. D., Wutich A., Larson K. L., Gober P., Lant T., and Senneville C. 2010. Credibility, salience, and legitimacy of boundary objects: water managers' assessment of a simulation model in an immersive decision theater. Science and Public Policy 37:219. 10.3152/030234210X497726
73. White, R. M., Young J., Marzano M., and Leahy S. 2018. Prioritising stakeholder engagement for forest health, across spatial, temporal and governance scales, in an era of austerity. Forest Ecology and Management 417:313–322. 10.1016/j.foreco.2018.01.050
74. Zhang, J., Gong J., Lin H., Wang G., Huang J. L., Zhu J., Xu B., and Teng J. 2007. Design and development of distributed virtual geographic environment system based on web services. Information Sciences 177:3968–3980. 10.1016/j.ins.2007.02.049
