Author manuscript; available in PMC: 2022 Apr 8. Published in final edited form as: GeoJournal. 2020 Jul 1;87(1):261–275. doi: 10.1007/s10708-020-10251-y

Post-survey Likert constructions: an adaptive method for generalizing perceptions of environmental variability

Kalli F Doubleday 1, Kelley A Crews 1, Amelia C Eisenhart 1, Kenneth R Young 1
PMCID: PMC8992389  NIHMSID: NIHMS1664314  PMID: 35400795

Abstract

Environmental perceptions are inherently based on an individual’s existing knowledge, experiences, and future expectations. Methods for measuring environmental perception, therefore, must capture a range of experiences while also being flexible enough to integrate these experiences into a coherent unit for analysis. Many research topics require cross-cultural comparisons in order to corroborate findings; however, assessments of environmental perception are often place- and context-specific. We propose here post-survey Likert constructions (PSLCs), using semi-structured interviews to construct a five-point scale system from multiple household responses after the completion of interviews. This method is able to capture the natural variability in the population using the respondents’ own language and characterizations of phenomena. We applied this method to measure the perceived environmental variability of residents living in a dynamic flooding landscape in the Okavango Delta, Botswana. The PSLC method captures the differences in environmental perception in a location with different settlement and cultural histories, multiple language groups, and different environmental conditions. The method easily transfers to other environments and populations, allowing for potential cross-cultural comparisons of perceived environmental variability. This publication responds to calls for increased transparency in reporting the development, execution, advantages, and disadvantages of methods related to environmental change.

Keywords: Adaptive methodology, Environmental perception, Climate change, Environmental variability

Introduction

Perceptions of the existence, drivers, and impacts of climate change are widely varied and are said to be dependent on a multitude of cultural, political, and social factors (Etkin and Ho 2007; Mertz et al. 2009; Lorenzoni et al. 2007; Weber 2010). The difficulties inherent in evaluating stakeholder or resident perception on a specific topic are amplified when the topic has been as widely debated and discussed as climate change has been, often because these discourses influence people’s responses (Mertz et al. 2009). Research regarding people’s perceptions of the physical environment, why those perceptions exist, and how those perceptions influence environmentally-related actions has utilized a variety of measures of environmental perception (Arbuthnot 1977; Bamberg 2003; Finger 1994; Hines et al. 1987; Truelove et al. 2014). However, the nuances of understanding environmental perception have largely not translated into evaluations of perceptions of climate-related events or phenomena. Perceptions of climate-related events or phenomena have instead been evaluated primarily through macro-scale surveys that by necessity of design have little local sociocultural and biophysical context (Capstick and Pidgeon 2014), with a few notable exceptions (Hamilton and Keim 2009; Wolf and Moser 2011).

The aim of documenting perceptions of climatic changes in specific regions or communities is to identify cultural, political, economic, technical, and social complexities relevant to implementing adaptation to climatic changes (Dodman and Mitlin 2015; Etzold et al. 2014; Mikulewicz 2018; Mitchell and Laycock 2017; Reid and Huq 2014; Wright et al. 2014). Though scientists have used a variety of methods to document vulnerability and adaptation related to climate change, this variety of methods, in conjunction with social and cultural differences, makes comparison across studies extremely difficult and their results less generalizable. Qualitative studies that provide rich but highly context-specific detail can, in turn, undermine categorizations of environmental perceptions that would have broader applicability to other settings. This limitation has also stalled a widely-accepted common language of vulnerability (Barnett 2001), creating a feedback loop that further impedes a common methodological approach to assessing and responding to vulnerability.

We introduce here a method termed “post-survey Likert constructions” (PSLC) that offers a remedy for these limitations. The PSLC method converts a collection of semi-structured interviews to a five-point Likert scale after all of the data have been collected. This method uses both first and second cycle coding (Saldaña 2016) in the spirit of grounded theory (Glaser and Strauss 1967) to move beyond description and original researcher intent to incorporate aspects of contextualization, emergence, and unanticipated or newly sought research directions (Glaser 2001, 2003). The use of PSLC offers four particular advantages: (1) the scale is ultimately constructed using the participants’ own statements, creating a scale from local characterizations of how and when things change and avoiding the problems of using a pre-constructed Likert scale for data collection; (2) the PSLC conversion is completed after the data collection stage of the project, allowing the researcher to incorporate unanticipated interview results; (3) because the method has been designed to incorporate a wide range of potential responses, it is also highly generalizable to other geographic areas and case studies with potentially different trajectories and impacts of climate change; and (4) the PSLC method can evaluate qualitative data using quantitative methods. The use of PSLC as a method for analyzing qualitative data on environmental variability, therefore, addresses concerns about methodological consistency for comparable evaluations of study results while simultaneously accommodating the variety of potential climate-based perceptions of change.

Predominant methods used to assess perceptions of climate change

Many of the most widely-cited works on climate change perception have utilized telephone or written (mail or internet) surveys that seek responses from a large number of people (Rabe and Borick 2012; Spence et al. 2011; Stedman 2004). The nuances associated with dynamic and complex topics, such as climate change perception, are potentially addressed by using a sufficiently large, and ideally random, sample of the population (Capstick et al. 2015). The primary advantages of these types of survey methodologies are consistency, the ability for direct comparison among respondents within a survey, and time series analysis using statistical measures across multiple similar surveys (Brulle et al. 2012; Stimson 1999). Though these surveys may be ideal for streamlining analysis, the use of open-ended questioning methods (both semi-structured and qualitative/ethnographic) produces richer research material (Wolf and Moser 2011). Furthermore, research has shown that predictors that are often significant under closed-question methods become poor, statistically insignificant predictors when respondents are able to produce more nuanced answers (Tvinnereim and Fløttum 2015).

Survey-based research has been focused in prominent Western nations in which climate change is a major political topic and may be hotly contested, such as the US, the UK, Canada, and the EU (Brulle et al. 2012; Capstick and Pidgeon 2014; Howe et al. 2015; La Jeunesse et al. 2016; Leiserowitz 2005, 2006; Lorenzoni and Pidgeon 2006; Lorenzoni et al. 2007; Spence et al. 2011; Stedman 2004). While this approach may be relevant in the scientist’s home country, the one-size-fits-all style disregards documented regional differences in perspective (Hamilton and Keim 2009; Lee et al. 2002) and scale (Donatti et al. 2018; Rademacher-Schulz et al. 2014; Singh et al. 2018), as well as individual differences that are based in previous experience and local conditions of place (Brody et al. 2004, 2008; Eisenhart et al. 2018; Rao et al. 2017; Stott and Huq 2014; Teklewold et al. 2018). More importantly in terms of climate change, Mertz et al. (2009) found that household survey results did not uncover relationships between drivers of environmental change as easily as other methods did.

Researchers documenting more localized community knowledge have moved in the direction of documenting richer local perspectives (McDowell et al. 2016; van Aalst et al. 2008). Evaluations of environmental perception are therefore achieved through individual and household interviews (Phuong et al. 2018; Sakdapolrak et al. 2014), focus groups (Jost et al. 2016), and participant observation (Arnall 2018), in addition to surveys (McDowell et al. 2016). The vast majority of qualitative studies of climate change (92%) that were published in Global Environmental Change between 2000 and 2012 used a type of in-person interview (Nielsen and D’haen 2014). Such interviews are able to solicit information that may not yet be known to the researchers as well as elicit unanticipated linkages among environmental, social, and political factors following extreme events (Walch 2018). The more open-ended types of interview, such as semi-structured group interviews, have been described as the “most valuable” method in understanding household perceptions and strategies of adaptation to climate change (Mertz et al. 2009, p. 812), in part due to their ability to maximize comparability between interviews while allowing for rich, unrestricted answers (Bernard 2000).

In particular, qualitative interview approaches are valued because their more open-ended nature allows respondents to communicate their own understandings of the links between cultural, socioeconomic, and environmental drivers of change (Mertz et al. 2009; Thomas et al. 2007). However, synthesis or comparison of the interviews becomes difficult as these reports become richer and more detailed, in part due to this exact advantage: the more open-ended an interview, the smaller the likelihood that all participants were able to give information on the same topics as all other interviewees. Therefore, while interviews may capture rich detail about the perceived changes in the environment and potential adaptations (Paerregaard 2018), these same questionnaires are often not as transferable to other areas with potentially differing climate and natural resource issues (La Jeunesse et al. 2016). In the case of climate change and its myriad of effects, scientists are attempting not only to measure responses in a locally-specific way (Afifi et al. 2014; Milan and Ruano 2014), but also to document and assess changing extremes (Murali and Afifi 2014; Phuong et al. 2018). There remain uncertainties about how the advantages of a survey and the advantages of semi-structured interviews can be captured together in methods that are replicable across case studies.

The development of the PSLC protocol provided here is part of the larger call for transparency in methodology development and application (Nielsen and D’haen 2014), and therefore for greater generalizability across studies, regions, times, and stakeholders to integrate disparate findings into a coherent and actionable framework of co-production of environmental knowledge and effective governance (Lebel et al. 2006; Reed et al. 2015). The technique contributes to growing interest in adaptive methodologies that bridge the divide between qualitative and quantitative research designs, theoretical frameworks, and methodologies (DeLyser and Sui 2014; Nielsen and D’haen 2014). We propose that this second cycle coding method (Saldaña 2016) can effectively be used to more consistently compare the varying perspectives and experiences that may impact communication about and preparation for climate change (MacInnis et al. 2015).

Methods

Study area

The application of this method was performed as part of a larger study in the Okavango Delta, Botswana. The Okavango Delta is an internationally recognized UNESCO World Heritage site (#1000, established 2014), a Ramsar Wetland of International Importance (site 879, established 1997), a biodiversity hotspot, a major tourist destination, and a source of resource extraction for local communities. The wetland is situated within the Kalahari Desert of northern Botswana and is fed both by upstream precipitation that falls in the Angolan highlands and by local precipitation, with relative contributions varying with climatic oscillation shifts (McCarthy et al. 2000; Murray-Hudson et al. 2006). People in the Delta region have often confronted climatic variability at intra-annual, inter-annual, and decadal timescales (Neuenschwander and Crews 2008). Temporal variability in the arrival and magnitude of the floodwaters is dependent upon upstream precipitation, localized precipitation, and local tectonic changes, which alter the flow patterns of the water (McCarthy et al. 2000). The five major villages in which interviews were conducted are located in parts of the wetland with different flooding patterns due to their relative position within the floodplain (Motsholapheko et al. 2011). In addition to variability in flooding, there are also local ramifications of changes in precipitation timing and amount (Kgathi et al. 2007), temperatures (Murray-Hudson et al. 2006), and fire patterns (Heinl et al. 2007).

Data collection

A total of 113 semi-structured household interviews were conducted in five villages. Of this total, six interviews did not produce enough dialog to assess and score onto our PSLC; thus, 107 interviews were placed onto PSLCs. In this study area, household-level interviews consisted of individuals, multiple household members, and even members of multiple surrounding compounds when they were present with the household members at the time of the interview. Sixty-nine interviews were conducted on the northwestern edge of the Okavango Delta in the villages of Etsha 1, Etsha 6, and Etsha 13. Nineteen interviews were conducted in the southeastern region of the Delta in the village of Mababe, and twenty-nine interviews took place in the northeastern region in the village of Seronga. Interviews were conducted from late May through mid-June to ensure seasonal consistency of flooding and rainfall patterns; fieldwork took place in 2011 in the Etsha villages and in 2012 in Mababe and Seronga.

The interview instrument and subsequent semi-structured interviews were focused on six general themes: (1) household composition and residential history, (2) livelihoods practiced within the household, (3) perceived environmental changes, (4) the use of natural resources and the potential impact of environmental changes on resource availability and collection, (5) potential livelihood adjustments to perceived environmental change, and (6) constraints due to external factors, such as governmental institutions or rules. Notably, during the dialog on perceived environmental changes, participants were not specifically asked if general “climate” or “weather” had changed, but were asked questions more specific to their locality. These questions maintained the focus on environments in which the participants regularly engaged and to which they could relate.

In the Okavango Delta, hydrological variables were of the utmost importance. Therefore, interview questions inquired if floods or rains had changed since the time that the respondent had moved to the area. Previous studies in the region have noted that both positive and negative impacts of specific environmental variables are reported by different individuals and that these perceptions are spatially related to environmental expectations (Eisenhart et al. 2018). For instance, King et al. (2018) noted that flooding was desirable for some livelihoods in the western floodplain where the seasonal floods are expected every year. However, interviews in the distal portion of the southern floodplain (where flooding events are much rarer) noted that floods can cause devastation to homes and reworking of human-wildlife interactions (Yurco et al. 2017). The semi-structured interview format was explicitly used to elicit all manner of reactions about different environmental variables for specific analyses pertaining to those variables.

Additionally, questions and follow-ups explored perceptions of how rain and flooding variability impacted natural resource collection and availability. By answering questions such as “Since you have lived here, has it become easier or more difficult to collect natural resources?” participants provided primary insight into resource collection while simultaneously expanding on their previous answers regarding environmental variability. Thus, questions not regarding variability often garnered responses that included participant insight into environmental variability. This iterative process of questioning allowed for qualitative assessment of response consistency as well as clarification through multiple opportunities to gather perceptions of environmental variability without potentially introducing bias by asking for the cause.

Scale development

Climate-related science on risk, hazards, awareness, and perceptions often employs Likert-type scale surveys (e.g., Bird 2009; Cvitanovic et al. 2014; Frondel et al. 2017; Martin et al. 2009; Visschers 2018) for “consistent and reliable results” (Weber et al. 2000, p. 30). However, existing research indicates that typical Likert scales may lack efficacy for research on topics such as climate change perceptions, which aims to evaluate causes of human behavior and is often conducted in countries with cultural and educational differences between participants and researchers. Though the use of fewer response choices (three, for example) may be advantageous to non-readers (Chachamovich et al. 2009; Williams and Swanson 2001) and may help to ensure consistency across individual responses, Gore and Kahler (2015) indicated the problems associated with inference from scales with fewer than five options. This was particularly applicable to programs designed to influence human behavior, or to evaluate program efficacy, that were either confirmatory or longitudinal in nature (Gore and Kahler 2015). Scales with more than 10 response options, however, have their own critics, who argue that such scales are not consistent across different individuals (e.g., Preston and Colman 2000).

Likert scale design and implementation for data collection has a long history of debate across many fields, notably psychology and medical research (e.g., Carifio and Perla 2008; Heine et al. 2002). However, the practice of using Likert scales as a form of data collection (independently or as part of interviews, etc.) has persisted and proven productive. Therefore, the PSLC method described here uses the Likert scale as a concept indicating that different levels of extremity can be distinguished among responses. As proposed here, the method moves beyond asking participants to agree with a particular, pre-defined ranking and instead leverages an open-ended format that provides richer, more nuanced, and sometimes unanticipated responses. This alteration of a typical Likert scale methodology provides insight into the “why” and “how” of local characterizations of environmental phenomena that qualitative data provide, while also allowing for quantitative analysis of the rankings that are constructed from the qualitative data.

To construct the PSLC of perceived environmental variability, all 107 interview transcripts were read a total of three times by two researchers. The first read-through was conducted as a conceptual precursor, or first cycle coding (Saldaña 2016) (Step 1, Fig. 1). In the second cycle of reading, attention was focused on specific questions from the full interview that were more consistently related to environmental perceptions (Step 2, Fig. 1). Excerpts that offered unique statements related to perceptions of environmental variability were identified and then clustered into five groups. The groups were then ranked in order of increasing perceived environmental variability from 1 to 5 by each researcher to create a scale (Step 2, Fig. 1). Results were compared and found to be highly consistent between the two researchers. The full interviews were then read for a third time by a single researcher and a Likert score from 1 to 5 was assigned based on the interview’s comparison with the scale created (Step 3, Fig. 1). These steps are explained in more detail below.

Fig. 1.

Fig. 1

Visual representation of the PSLC process with three specific stages of first and second cycle coding broken down into five distinct methodology steps

In the first round of readings, each researcher read the entirety of all interviews without making detailed comments; instead, the primary purpose was to gather enough information for background contextualization for the following readings (first cycle coding per Saldaña 2016). During this round, the researchers determined which questions of the original full interview set offered the most insight into perceptions of climatic change, resulting in a focus on twelve questions (Fig. 2). These questions regularly garnered answers that included comments specifically on perceptions of past, current, or future environmental variability, including slow- and long-term changes in precipitation, flooding, fires, and/or wildlife presence. The second cycle of readings was then conducted on only the answers to these twelve questions, read in light of unanticipated responses and of differences across study sites that became apparent only after all interviews had been completed. If an answer to one or several of these questions provided a strong indication of perceived environmental variability, the excerpt was coded for further analysis. A total of fifty-seven excerpts were coded for further analysis.

Fig. 2.

Fig. 2

Interviews contained 41 primary questions. Twelve questions, above, were primarily responsible for direct participant reflection on perceived environmental variability. Answers to these twelve questions were the focus of the second cycle read through that provided excerpts to create scales in steps 3 and 4 (Fig. 1)

All fifty-seven excerpts (hereafter referred to as ‘representative statements’) were first clustered into similar narratives and opinions (Step 3, Fig. 1). Statements were clustered based not only on stated content, but also on the extremity of the language used and on non-verbal cues noted in the interview transcript, a contextualization process often associated with second cycle coding (Glaser 2001, 2003; Saldaña 2016). Patterns emerged from this process that were later helpful in the final coding of interviews. The lowest and highest perceived environmental variability (PEV) clusters (corresponding to 1 and 5 on the final scale) were clear from the clustering process, acting as ad hoc adjective- or phrase-based post-survey anchors (Step 4, Fig. 1). The other three clusters were then compared to one another to create a full ordinal ranking. Thus, these representative statements were used in relation to one another to build a 5-point scale of PEV; this allowed for contextual sensitivity to what should be considered ‘neutral’ or ‘extreme’ PEV specifically from the study communities’ perspectives. As a result, and importantly, the authors did not pre-define the scale as a five-category scale; rather, the five categories emerged from the general clustering of all fifty-seven excerpts. In this fashion, participants collectively co-produced the scale categories through their collective perspectives.
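The bookkeeping behind this clustering step can be sketched in code. The sketch below is purely illustrative: the excerpt texts are quotations drawn from the Results section, the interview identifiers are hypothetical, and the cluster assignments are made by the researchers, not by an algorithm; the code merely records and groups them.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Excerpt:
    """One coded representative statement from a household interview."""
    interview_id: str              # hypothetical identifier
    text: str
    cluster: Optional[int] = None  # 1-5, assigned manually by the researchers

# Three stand-ins for the fifty-seven coded representative statements.
excerpts = [
    Excerpt("E6-012", "These are natural things... it is up to God."),
    Excerpt("MB-004", "What the elephants harvested is more than what I harvested."),
    Excerpt("SR-019", "Now we are starving... the crops are scorched."),
]

# Researchers assign each excerpt to one of five ordinal clusters.
excerpts[0].cluster = 1
excerpts[1].cluster = 3
excerpts[2].cluster = 5

# Group excerpt texts by cluster to form the anchors of the 5-point scale.
scale_anchors = {}
for e in excerpts:
    scale_anchors.setdefault(e.cluster, []).append(e.text)
```

Keeping the interpretive judgment (the cluster assignment) separate from the mechanical grouping mirrors the method's insistence that ranking decisions remain with the researchers rather than with any automated procedure.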

Each cluster of statements was then synthesized to provide a general guideline for each of the five classes, resulting in the final scale key (Step 4, Fig. 1). This key comprised the theme of each number on the scale and corresponding representative excerpts, so that each category had many representative examples. The first and second authors used the key to read and independently index ten interviews. Seven out of ten interviews were scored the same; the remaining three differed only in their classification between a 4 and a 5 on the scale, leading to a clarification of the fifth ranking. Five more interviews were subsequently independently double-coded, and 100% equivalency was reached. The second author was present for the original field campaigns, while the first was not. This inter-rater reliability therefore speaks to the scale's capacity to produce convergent analyses between those who designed the study and team members who joined after the design and field campaigns, a major goal of this particular manuscript. One author then reread all interviews and indexed each onto the participant-driven scale (Table 1) to maintain reliability across the entire sample.
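The agreement check in this step can be quantified with simple reliability statistics. The sketch below computes raw percent agreement and, as an optional chance-corrected supplement that the paper itself does not report, Cohen's kappa; the score vectors are hypothetical values constructed only to match the reported round-one outcome (seven of ten identical, with the three disagreements falling between 4 and 5).

```python
from collections import Counter

def percent_agreement(a, b):
    """Fraction of interviews given identical scores by two coders."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement for two coders scoring the same items."""
    n = len(a)
    po = percent_agreement(a, b)               # observed agreement
    ca, cb = Counter(a), Counter(b)            # marginal score frequencies
    pe = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical round-one scores consistent with the reported outcome.
coder1 = [1, 2, 3, 3, 4, 4, 5, 5, 5, 2]
coder2 = [1, 2, 3, 3, 4, 5, 4, 4, 5, 2]
print(percent_agreement(coder1, coder2))  # 0.7
```

Reporting a chance-corrected statistic alongside raw percent agreement is a common hedge against agreement arising by chance, which matters when one scale category dominates the sample.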

Table 1.

This participant-driven five-point scale was created based on the natural clustering of representative excerpts from steps 2 and 3 of the PSLC methodology (Fig. 1)

Perceived Environmental Variability (EV) PSLC Scale

Thematic categories within the data that represent each score 1–5:

Score 1: Absent
Score 2: Neutral; citing EV but without any parallel difficulty or benefit; EV recognition but without reactive planning, or with very limited intention for reactive planning
Score 3: Citing EV but unable to adapt due to lack of knowledge; citing distinct environmental patterns forming and/or changing
Score 4: EV cited before interviewers prompt with any EV-specific questions; citing changes in farming to adapt to EV; strong pattern recognition plus prompting family members to prepare
Score 5: Floods, rains, and/or fire described as hazards to health and safety; very strong reaction to EV (e.g., complete disinvestment)
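For downstream analysis, the scale key can be encoded as a small lookup structure. The sketch below is a minimal Python encoding; the grouping of themes under each score is an interpretive reading of Table 1 against the Results section, not a mapping published in the table itself.

```python
# Participant-driven PSLC scale key for perceived environmental variability (EV).
# The score-to-theme grouping is an interpretive assumption, not a published mapping.
SCALE_KEY = {
    1: ["Absent: no EV reported, or respondent does not know"],
    2: ["Neutral: citing EV but without any parallel difficulty or benefit",
        "EV recognition without (or with very limited) reactive planning"],
    3: ["Citing EV but unable to adapt due to lack of knowledge",
        "Citing distinct environmental patterns forming and/or changing"],
    4: ["EV cited before interviewers prompt with EV-specific questions",
        "Citing changes in farming to adapt to EV",
        "Strong pattern recognition plus prompting family members to prepare"],
    5: ["Floods, rains, and/or fire described as hazards to health and safety",
        "Very strong reaction to EV (e.g., complete disinvestment)"],
}

def themes_for(score):
    """Return the representative themes used to justify a given PSLC score."""
    return SCALE_KEY[score]
```

An explicit key of this kind makes the final reading auditable: each assigned score can be traced back to the themes (and, in the full key, the representative excerpts) that justify it.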

During the third reading, the presence of one of the representative statements did not automatically place an interview into a specific PEV category. Instead, the content of the entire interview (not only the 12 questions examined for representative statements) was used to assign the household respondent to a PEV category (Step 4, Fig. 1), per the “contextualization” noted as important both in second cycle coding (Saldaña 2016) and in many grounded theory approaches (Glaser 2001, 2003). Any comment on perception of change was therefore noted and considered holistically to determine the final score for that interview; a single phrase did not control the interviewee’s score, an important advantage over automated coding methods. The entire interview is a rich source of data regarding perceptions of the environment and corresponding change, as interviewees responded to questions of all types (demographic, environment, natural resources, tourism) with clear reference to relevant changes. By considering the interview as a whole, responses that noted perceptions of environmental change without being prompted about this topic specifically could be appropriately ranked higher. The dynamic, compounding, and sometimes contradictory interview answers were analyzed within the context of the entire interview, as well as within the collective group of interviews, to assess the prevailing perception.

Finally, meetings with resident-stakeholders were held in 2016, after the findings of the 2011 and 2012 interviews had been determined. These meetings were held with permission and cooperation from each main village’s Chief and governing body. The purpose of these “report-out” meetings was to include the resident-stakeholders in the production of the science and to explicitly ask, “Did we hear you correctly?”, which was affirmed by all three villages’ attendees and Chiefs/governing bodies.

Results

Ordinal scaling of perception

The 107 semi-structured interviews indicated differences in expressed PEV that allowed for a post-data collection ranking, even though the interviews were conducted without the respondent being guided to assign themselves a ranking, whether numerically (Likert scale) or normatively (more or less than your neighbors).

Most easily distinguished were the extreme ends of the ranking, notably those of the lowest perceptions of variability (an assigned ranking of 1). An assigned ranking of 1 indicated that the household reported no changes in the environment, flatly stated that they did not know if the environment had changed, or said that the environment is outside their control and provided no further elaboration. For instance, when asked “Do you think the flooding, rain, or the fires will change in the future?” a woman responded, “These are natural things…it is up to God.”1 (Female, Etsha 6 Village).

A ranking of 2 on the scale was assigned to those households that reported environmental changes but expressed a neutral perspective about them, with no parallel difficulty or benefit ascribed to the changes. Furthermore, for a ranking of 2, respondents only discussed environmental changes when specifically asked about them, not during other questions regarding natural resources or livelihoods. When variability was reported during the specific questions on environmental changes, it was in a manner that simply stated that things had changed but did not describe specific patterns. When pattern recognition was noted, the household interview was assigned a ranking of 3 on the scale. This ranking indicated that not only was variability reported, but also that the variability followed a pattern of some sort that the respondent recognized and perceived to be affecting people's lives. However, people often reported an inability to do anything about the changes due to their own lack of knowledge or resources. For instance, a woman from Mababe stated, “What the elephants harvested is more than what I harvested.” The woman then expressed her awareness of the growing change in elephant presence and crop raiding but stated that she lacked the knowledge to prevent it.

An interview that was assigned a ranking of 4 contained answers indicating environmental variability with a relatively predictable pattern, discussed the impacts of these changes during other questions in the interview, and also expressed changing their behaviors based on these patterns. A male from Etsha 1 stated this succinctly: “I once knew when the floods would come and could start plowing based on that. Now I wait for the grass to grow ankle-high after the first rain before I plow.” The changes in the environment are anticipated, and a measure is used to determine how and when to change livelihood patterns based on the perceptions of the environmental change.

Finally, a ranking of 5 on the scale indicated the most extreme responses of households regarding environmental change. The interviews assigned a ranking of 5 indicated that not only was the environment changing, but that it was changing to such an extent that it threatened the lives of individuals or that the community was in danger to the point that it required extensive government help because these changes were too extreme for ordinary preparations. In Seronga, a woman stated, “Now we are starving. We have nowhere to go because the river is flooded and the crops are scorched.” The indications of a loss of livelihood and a threat to the family were used to characterize the most extreme perceptions of environmental change.

Responses otherwise difficult to characterize

In general, the PSLC protocol avoids condensing the variety of valid expressions that impact livability. In this section, we discuss some of the responses that would have been difficult to capture by using a pre-determined Likert scale approach. These responses include examples such as “I don’t know” that might not fit easily within an ordinal ranking system (complete lack of knowledge is not equivalent to very little knowledge). Additionally, we include here more complex responses that indicated potentially contradictory perceptions or the distinction between individual perceptions and what one has been told by the community or the government.

By employing semi-structured interviews and reviewing these narratives, any unexpected need to add a “don’t know” or other previously unanticipated category to the scale is easily accommodated. This adaptability means that these valuable and notable scale categories (i.e., the ‘absence of’ category) can be included. These responses are placed in context based on the current research rather than on the original research expectations and premise under which the data were collected. Gore and Kahler (2015) found that the inclusion of a “don’t know” option on their scales would have been appropriate, as respondents in Namibia and Madagascar occasionally answered that they simply “did not know.”
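As a hypothetical illustration only (the interview IDs, codes, and mapping rules below are invented for this sketch and are not the authors’ actual codebook), a post-survey scale that keeps an emergent “don’t know” label separate from the ordinal rankings might look like:

```python
from collections import Counter

# Hypothetical coded interviews: each carries analyst-assigned codes
# distilled from the full transcript (illustrative names only).
interviews = [
    {"id": "E1-07", "codes": ["no_change"]},
    {"id": "E6-12", "codes": ["change_noted", "new_pattern", "behavior_adapted"]},
    {"id": "S-03",  "codes": ["dont_know"]},
    {"id": "S-11",  "codes": ["change_noted", "livelihood_threatened"]},
]

def assign_category(codes):
    """Map an interview's codes to a post-survey category.

    The category set stays open: 'dont_know' is kept as its own label
    rather than forced onto the 1-5 ordinal scale, because a complete
    lack of knowledge is not equivalent to very little change."""
    if "dont_know" in codes:
        return "dont_know"          # emergent, non-ordinal category
    if "livelihood_threatened" in codes:
        return 5                    # most extreme perceptions
    if "behavior_adapted" in codes:
        return 4                    # change anticipated, behavior adjusted
    if "change_noted" in codes:
        return 3                    # variability recognized
    return 1                        # no change reported

ranks = {iv["id"]: assign_category(iv["codes"]) for iv in interviews}
print(Counter(ranks.values()))
```

The point of the sketch is only that the category set is derived from the data after collection, so an unanticipated response type becomes a new bin rather than a coding error.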

In some cases, participants described their perceptions of events, such as flooding, in complementary, inconsistent, or augmented ways. For instance, a man in Etsha 6 said early in the interview that the rains had not changed, but further into the interview, when questions about livelihoods were discussed, he said that in fact the rains are now greatly delayed and starting to show a new pattern. This interview was assigned a 3, reflecting a recognition of environmental variability: despite the self-contradiction, the detailed new pattern he described outweighed his first, quick response. Using PSLCs captured this complicated nature of perception through semi-structured interviews that allow for authentic and dynamic perceptions. Further, using answers to multiple questions to rank an interview incorporated the realism of differing answers given under different types of questions. Instead of being problematic, an interview containing inconsistent information is simply treated as a complex perspective that appears through some lenses as change in one respect but not in others.

The use of more open-ended techniques also enabled a distinction between when an individual reported their own experiences of environmental change and when an individual acknowledged an outside influence that supported or contradicted that perspective. In one such case, a woman in Etsha 13 indicated she “doesn’t know” what will happen in the future but had heard from a government official that the floods would persist for five years, though she was not “sure.” This allowed respondents’ own perceptions to be differentiated from, and considered alongside, what external sources had explicitly told them.

Report out meetings

In 2016, preliminary results were presented to each major community at meetings with community members, village Chiefs, and the local governing body. When we presented our preliminary findings and asked if they were representative of what the community experienced, in each instance the answer was “yes.” Essentially, the verification with the residents of the villages ensured that the clustering of respondents’ views was not driven by the researchers’ perceptions of the respondents’ perceptions (Glaser 2001). When respondents were asked if they would like to add any additional information, they noted that the patterns of flooding, precipitation, and temperature change were now very different from when the interviews initially took place, an important consideration in areas likely to experience greater variability in precipitation as per the IPCC 5th Assessment (Stocker et al. 2013). This statement supports the use of PSLCs as described in the discussion below.

Discussion

Advantages of PSLCs

Climate change research presents inherent difficulties because both the environmental conditions and people’s perceptions of those conditions are expected to be reciprocal and mutually reinforcing. The rate at which perceptions of variability change is extremely uncertain due to wide ranges in human behavior. Therefore, methodologies that are adaptive, that not only allow the unanticipated to be incorporated into the results but plan explicitly to include the unexpected, have an advantage over more stringent (or more stringently a priori) methodologies and expectations. In a field evolving so rapidly relative to the pace of academic research, researchers should expect that their projects will necessarily uncover undocumented results that need to be reported within context. These results might include new extremes of climate variables, impacts, or drivers due to local contextualization of climate change within that place (Paerregaard 2018).

Increasingly, dynamic systems are understood as the norm (Allen and Hoekstra 2015). Therefore, if an evaluation of perceptions of variability presents results as though the system were static, the applicability of those results may be very short-lived. If systems exhibit dynamic social-ecological changes, evaluation methods must allow for this dynamism. If the research presented here had focused specifically on documenting particular environmental changes that had occurred, the research would have been obsolete after only a few years. As exhibited by the positive feedback and the updates from communities during the follow-up stakeholder meetings, a dynamic system requires a shift from a static ranking to a more flexible model such as offered here by PSLCs.

In a specific example of this from the Okavango Delta, respondents from the village of Seronga did not consider flooding in the same way as residents of the Etsha villages. Because of Seronga’s location along the Okavango River, homes would not typically flood as the water entered the Delta. Therefore, when beginning the interview process in Seronga the year after the previous field season in the Etsha villages, local members of our research team explained that we needed to use a different word for flooding than had been used in the Etsha region. This adjustment dramatically improved data collection, but it bears emphasizing that this type of critical fine-tuning would not have been possible with an imposed Likert scale. The PSLC method allows for more flexible and iterative collection of data that supports scalability to other settings, including other field seasons and other locations. This flexibility offers a more nuanced set of research results for qualitative and quantitative analyses while operating within a transferable methodological framework.

The construction of the scale following data collection creates a participant-driven scale, saves time during the data collection stages, and allows unanticipated or emergent future questions to be investigated more robustly. In this case, researchers do not need to agonize over which questions might elicit specific insights, but can instead focus on translating concepts clearly and within the sociocultural context necessary for climate change research. As previously noted, Gore and Kahler (2015) reported difficulties in their use of visual scales. Many of the respondents in that study were not satisfied with their option of ‘high risk’ but instead adamantly answered, “azure toda,” which translates to “very high” (Gore and Kahler 2015, p. 163). In the PSLCs presented here, this “very high” category was the highest ranking on the scale, clearly delineated by the differences in the way that respondents spoke and emphasized their suffering. Here the post-survey scale anchors were derived from the contextualization of responses and not simply from keywords.

PSLCs could therefore be applied to other geographic regions or other timeframes, with different sociocultural perspectives and environmental factors of interest. Using the dataset itself to construct a scale ranging from relative extremes to average perceptions allows this method to be implemented in case studies worldwide. A method that is transferable to multiple areas under multiple conditions, such as PSLC, will aid in accurate syntheses of studies. The full methodological process and reasoning provided within this paper increase the potential for use of the method and for replication studies; future researchers will not be hindered by a lack of explanation. The difficulties in replicating environmental research that is time-, space-, and context-dependent argue for a transparent methodology that recognizes and accounts for the difficulties of mixed-methods, multi-disciplinary research. Geographic realities that differ across regions influence not only policy but also beliefs and risk perception (Howe et al. 2015; Leiserowitz 2005). Perceptions cannot be extrapolated from data collected outside of the sociocultural and political context, nor can they be assumed to correlate with instrumental records (Niles and Mueller 2016). As researchers seek to understand human understandings of and responses to climate change, even modelling at the global scale needs to take into account the more local and place-dependent realities of perceptions (Eisenhart et al. 2018; Verburg et al. 2016).

Drawbacks of PSLCs

As with any method, the use of PSLCs is not without disadvantages. Because of the strong focus on the language and expressions documented in the interviews, a person who did not wish to talk to the interviewers was probably less likely to receive a ranking of 4 or 5 even if their perceptions of environmental variability were high. Therefore, people who are less inclined to speak with strangers, particularly strangers of a different culture, ethnicity, gender, or language, could have their viewpoints emphasized less than their neighbors’. In this case, that would result in a potential under-estimation, but not likely an over-estimation, of the strength of response to change. It bears noting, however, that no interviews were refused, no reticence was noted on the part of the interviewers or translators, and the standard protocol ensured that informed verbal consent was given in the person’s preferred language prior to the commencement of each interview.

Similarly, if people being interviewed only answered the questions asked of them and did not elaborate, their assigned ranking of environmental variability may be lower. The PSLC method assumes that individuals who have stronger perceptions of change will be more vocal about those changes. Therefore, if a person observes a specific type of environmental change, but the interviewers do not ask about that type of change, the person may not mention it out of a perception that it is not the purpose of the discussion. These problems are not limited to the PSLC method; they can influence the results of many interview and survey methodologies. However, PSLCs do focus more on the wording and the quantity of statements than other methodologies might.

The coding and verification process of PSLCs also requires an extensive time commitment from multiple researchers, though it allows for, and may explicitly benefit from, the involvement of “fresh eyes” combined with the perspective of those who participated in the interviews and/or worked in the area for an extended period. Likewise, report-out meetings with stakeholders as part of this process require time and resources that are often limited, which often results in a general neglect of data verification with research participants.
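The paper does not name a specific agreement statistic for the multi-coder verification step, but one common way to check that independent coders converge on the same rankings is a chance-corrected measure such as Cohen’s kappa; the ten rankings below are invented purely for illustration:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Chance-corrected agreement between two coders' rankings."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed proportion of interviews ranked identically.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a = Counter(coder_a)
    freq_b = Counter(coder_b)
    # Expected agreement if the coders ranked independently at these rates.
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical PSLC rankings of ten interviews by two independent coders.
coder_a = [3, 4, 5, 3, 2, 1, 4, 3, 5, 2]
coder_b = [3, 4, 5, 3, 2, 1, 3, 3, 5, 2]
print(round(cohens_kappa(coder_a, coder_b), 3))  # high agreement, one disagreement
```

Disagreements flagged this way would then be resolved in discussion, consistent with the two-step coding protocol the authors describe.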

Applying PSLCs to further research

The PSLC presented here is an adaptive methodology suited to learning from tangible difficulties faced by communities most at risk of negative consequences from climate variability. Cognitive psychologists consistently report that future estimations of climate change are often incorrect (Rachlinski 2000), but those perceptions and predictions are nonetheless critical to understanding how, when, and whether people will react under climate change disturbances (MacInnis et al. 2015). These results have implications for how we move forward in understanding, mitigating, and especially implementing policy regarding climate change disturbances and how we communicate this information to the people at most risk of extreme events (Lee et al. 2015).

The typical practice of using Likert scales as a form of data collection has proven productive in climate change science (e.g., Khanian et al. 2017; Spence and Pidgeon 2010). The use of PSLCs flips this construction, applying it after data collection so that perceptions of environmental change can be contextualized across different hydroecological sub-systems, in consideration of each. This methodology also uncovered nuances that would have been missed by Likert surveys administered during data collection. Through its inherent adaptability and transferability, PSLC allows for comparison of environmental perception within, and potentially across, social-ecological systems. We posit this method offers a way to compare studies from different researchers and different regions—perhaps not perfectly—as a means to compile, review, and evaluate research in order to build towards advancing theory.

Final thoughts

The PSLC method engages an adaptive protocol to contextualize responses in consideration of the broader perspective offered across the entirety of an interview. The PSLC method thus provides a transferable means of re-reading previous surveys/interviews collected at disparate times and places for a variety of purposes, which can then be holistically and systematically integrated through this two-step coding protocol. The method described here was completed in a population with multiple cultural groups, a wide range of residence times, varied age and household demographics, and differentially lived recent environmental and political shocks. This methodology provides a participant-driven, co-produced Likert scale through which researchers can rank households’ perceived environmental changes.

It is true that societies are “responding, some are resisting, and some are caught in the middle” (Molnar 2010, p. 1) with regard to climate and environmental change more broadly. However, there are a number of reactions that do not fall within even that spectrum, meaning that researchers need a broader framework from which to draw. Ultimately, some people are unaware, some are disengaged, some recognize patterns, some even make conscious preparation decisions, some welcome help, and in the worst-case scenario, some people have lost all hope. These perspectives should not be discarded because they do not fit neatly within existing theories. However, as shown here, these results can be made commensurate post-survey across study sites to move beyond pseudo-comparative case studies toward openly sharing and building a more comprehensive and resident-informed record of climate change perception. Thus, the advancement of the field lies in methodologies that consider the multitude of perspectives on climate change as equally valid and worth reporting—even when these perspectives might be difficult to capture on a single, simple scale.
We believe this type of approach is critical for moving beyond comparative case studies to building a more integrated inquiry into the locally lived experiences of climate change.

Acknowledgements

This research was supported by the National Science Foundation under grant numbers BCS/GSS-0964596 and RAPID-0942211. The work was also supported by grant P2CHD042849, awarded to the Population Research Center at The University of Texas at Austin by the Eunice Kennedy Shriver National Institute of Child Health and Human Development. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The authors thank those involved in the household and livelihood data collection for this project, including Reverend Emmanuel, Fuata John, Japhet John, Kentse Madise, Evan Griffin, Allison White, Jamie Shinn, and Brian King.

Footnotes

Compliance with ethical standards

Ethical approval Research involving human participants was approved through the University of Texas at Austin IRB process. All participants gave oral consent before participating in interviews.

1

Some of the most obvious representative statements are reported here for a balance of clarity and brevity. A single statement did not automatically assign a score to an interview; instead, the group of similarly ranked statements was used to create a representative archetype of the category on the PEV scale.

References

  1. Afifi T, Liwenga E, & Kwezi L (2014). Rainfall-induced crop failure, food insecurity and out-migration in Same-Kilimanjaro, Tanzania. Climate and Development, 6(1), 53–60. [Google Scholar]
  2. Allen TFH, & Hoekstra TW (2015). Toward a unified ecology (2nd ed.). New York: Columbia University Press. [Google Scholar]
  3. Arbuthnot J (1977). The roles of attitudinal and personality variables in the prediction of environmental behavior and knowledge. Environment and Behavior, 9(2), 217–232. [Google Scholar]
  4. Arnall A (2018). Resettlement as climate change adaptation: What can be learned from state-led relocation in rural Africa and Asia? Climate and Development, 11(3), 253–263. [Google Scholar]
  5. Bamberg S (2003). How does environmental concern influence specific environmentally related behaviors? A new answer to an old question. Journal of Environmental Psychology, 23(1), 21–32. [Google Scholar]
  6. Barnett J (2001). Adapting to climate change in Pacific Island countries: The problem of uncertainty. World Development, 29(6), 977–993. [Google Scholar]
  7. Bernard HR (2000). Social research methods: Qualitative and quantitative approaches. Thousand Oaks: Sage Publications. [Google Scholar]
  8. Bird DK (2009). The use of questionnaires for acquiring information on public perception of natural hazards and risk mitigation—A review of current knowledge and practice. Natural Hazards and Earth System Sciences, 9(4), 1307–1325. [Google Scholar]
  9. Brody SD, Highfield W, & Alston L (2004). Does location matter? Measuring environmental perceptions of creeks in two San Antonio watersheds. Environment and Behavior, 36(2), 229–250. [Google Scholar]
  10. Brody SD, Zahran S, Vedlitz A, & Grover H (2008). Examining the relationship between physical vulnerability and public perceptions of global climate change in the United States. Environment and Behavior, 40(1), 72–95. [Google Scholar]
  11. Brulle RJ, Carmichael J, & Jenkins JC (2012). Shifting public opinion on climate change: An empirical assessment of factors influencing concern over climate change in the US, 2002–2010. Climatic Change, 114(2), 169–188. [Google Scholar]
  12. Capstick SB, & Pidgeon NF (2014). Public perception of cold weather events as evidence for and against climate change. Climatic Change, 122(4), 695–708. [Google Scholar]
  13. Capstick S, Whitmarsh L, Poortinga W, Pidgeon N, & Upham P (2015). International trends in public perceptions of climate change over the past quarter century. WIREs Climate Change, 6(1), 35–61. [Google Scholar]
  14. Carifio J, & Perla R (2008). Resolving the 50-year debate around using and misusing Likert scales. Medical Education, 42(12), 1150–1152. [DOI] [PubMed] [Google Scholar]
  15. Chachamovich E, Fleck MP, & Power M (2009). Literacy affected ability to adequately discriminate among categories in multipoint Likert Scales. Journal of Clinical Epidemiology, 62(1), 37–46. [DOI] [PubMed] [Google Scholar]
  16. Cvitanovic C, Marshall NA, Wilson SK, Dobbs K, & Hobday AJ (2014). Perceptions of Australian marine protected area managers regarding the role, importance, and achievability of adaptation for managing the risks of climate change. Ecology and Society, 19(4). [Google Scholar]
  17. DeLyser D, & Sui D (2014). Crossing the qualitative–quantitative chasm III: Enduring methods, open geography, participatory research, and the fourth paradigm. Progress in Human Geography, 38(2), 294–307. [Google Scholar]
  18. Dodman D, & Mitlin D (2015). The national and local politics of climate change adaptation in Zimbabwe. Climate and Development, 7(3), 223–234. [Google Scholar]
  19. Donatti CI, Harvey CA, Martinez-Rodriguez MR, Vignola R, & Rodriguez CM (2018). Vulnerability of smallholder farmers to climate change in Central America and Mexico: Current knowledge and research gaps. Climate and Development, 3(2019), 264–286. [Google Scholar]
  20. Eisenhart AC, Crews Meyer KA, King B, & Young KR (2018). Environmental perception, sense of place, and residence time in the Okavango Delta, Botswana. The Professional Geographer, 71(1), 109–122. [Google Scholar]
  21. Etkin D, & Ho E (2007). Climate change: Perceptions and discourses of risk. Journal of Risk Research, 10(5), 623–641. [Google Scholar]
  22. Etzold B, Ahmed AU, Hassan SR, & Neelormi S (2014). Clouds gather in the sky, but no rain falls. Vulnerability to rainfall variability and food insecurity in Northern Bangladesh and its effects on migration. Climate and Development, 6(1), 18–27. [Google Scholar]
  23. Finger M (1994). From knowledge to action? Exploring the relationships between environmental experiences, learning, and behavior. Journal of Social Issues, 50(3), 141–160. [Google Scholar]
  24. Frondel M, Simora M, & Sommer S (2017). Risk perception of climate change: Empirical evidence for Germany. Ecological Economics, 137, 173–183. [Google Scholar]
  25. Glaser BG (2001). The grounded theory perspective: Conceptualization contrasted with description (p. 232). Mill Valley, CA: Sociology Press. [Google Scholar]
  26. Glaser BG (2003). The grounded theory perspective II: Description’s remodeling of grounded theory methodology (p. 217). Mill Valley, CA: Sociology Press. [Google Scholar]
  27. Glaser BG, & Strauss A (1967). The discovery of grounded theory (p. 271). Chicago: Aldine. [Google Scholar]
  28. Gore ML, & Kahler JS (2015). Using visual scales in researching global human dimensions of wildlife. Human Dimensions of Wildlife, 20(2), 159–166. [Google Scholar]
  29. Hamilton LC, & Keim BD (2009). Regional variation in perceptions about climate change. International Journal of Climatology: A Journal of the Royal Meteorological Society, 29(15), 2348–2352. [Google Scholar]
  30. Heine SJ, Lehman DR, Peng K, & Greenholtz J (2002). What’s wrong with cross-cultural comparisons of subjective Likert scales?: The reference-group effect. Journal of Personality and Social Psychology, 82(6), 903. [PubMed] [Google Scholar]
  31. Heinl M, Frost P, Vanderpost C, & Silva J (2007). Fire activity on drylands and floodplains in the southern Okavango Delta, Botswana. Journal of Arid Environments, 68(1), 77–87. [Google Scholar]
  32. Hines JM, Hungerford HR, & Tomera AN (1987). Analysis and synthesis of research on responsible environmental behavior: A meta-analysis. The Journal of Environmental Education, 18(2), 1–8. [Google Scholar]
  33. Howe PD, Mildenberger M, Marlon JR, & Leiserowitz A (2015). Geographic variation in opinions on climate change at state and local scales in the USA. Nature Climate Change, 5(6), 596. [Google Scholar]
  34. Jost C, Kyazze F, Naab J, Neelormi S, Kinyangi J, Zougmore R, et al. (2016). Understanding gender dimensions of agriculture and climate change in small-holder farming communities. Climate and Development, 8(2), 133–144. [Google Scholar]
  35. Kgathi DL, Ngwenya BN, & Wilk J (2007). Shocks and rural livelihoods in the Okavango Delta, Botswana. Development Southern Africa, 24(2), 289–308. [Google Scholar]
  36. Khanian M, Serpoush B, & Gheitarani N (2017). Balance between place attachment and migration based on subjective adaptive capacity in response to climate change: The case of Famenin County in Western Iran. Climate and Development, 11(1), 69–82. [Google Scholar]
  37. King B, Shinn JE, Yurco K, Young KR, & Crews KA (2018). Political ecologies of dynamic wetlands: Hydrosocial waterscapes in the Okavango Delta. The Professional Geographer, 71(1), 29–38. [Google Scholar]
  38. La Jeunesse I, Cirelli C, Aubin D, Larrue C, Sellami H, Afifi S, et al. (2016). Is climate change a threat for water uses in the Mediterranean region? Results from a survey at a local scale. Science of the Total Environment, 543, 981–996. [DOI] [PubMed] [Google Scholar]
  39. Lebel L, Anderies JM, Campbell B, Folke C, Hatfield-Dodds S, Hughes TP, et al. (2006). Governance and the capacity to manage resilience in regional social–ecological systems. Ecology and Society, 11(1), 19. [Google Scholar]
  40. Lee JW, Jones PS, Mineyama Y, & Zhang XE (2002). Cultural differences in responses to a Likert scale. Research in Nursing & Health, 25(4), 295–306. [DOI] [PubMed] [Google Scholar]
  41. Lee TM, Markowitz EM, Howe PD, Ko C-Y, & Leiserowitz AA (2015). Predictors of public climate change awareness and risk perception around the world. Nature Climate Change, 5(11), 1014–1020. [Google Scholar]
  42. Leiserowitz AA (2005). American risk perceptions: Is climate change dangerous? Risk Analysis: An International Journal, 25(6), 1433–1442. [DOI] [PubMed] [Google Scholar]
  43. Leiserowitz A (2006). Climate change risk perception and policy preferences: The role of affect, imagery, and values. Climatic Change, 77(1–2), 45–72. [Google Scholar]
  44. Lorenzoni I, Nicholson-Cole S, & Whitmarsh L (2007). Barriers perceived to engaging with climate change among the UK public and their policy implications. Global Environmental Change, 17(3–4), 445–459. 10.1016/j.gloenvcha.2007.01.004. [DOI] [Google Scholar]
  45. Lorenzoni I, & Pidgeon NF (2006). Public views on climate change: European and USA perspectives. Climate Change, 77(1–2), 73–95. [Google Scholar]
  46. MacInnis B, Krosnick JA, Abeles A, Caldwell MR, Prahler E, & Dunne DD (2015). The American public’s preference for preparation for the possible effects of global warming: Impact of communication strategies. Climatic Change, 128(1), 17–33. [Google Scholar]
  47. Martin WE, Martin IM, & Kent B (2009). The role of risk perceptions in the risk mitigation process: The case of wildfire in high risk communities. Journal of Environmental Management, 91(2), 489–498. [DOI] [PubMed] [Google Scholar]
  48. McCarthy TS, Cooper GRJ, Tyson PD, & Ellery WN (2000). Seasonal flooding of the Okavango Delta, Botswana: Recent history and future prospects. South African Journal of Science, 96(1), 25–33. [Google Scholar]
  49. McDowell G, Ford J, & Jones J (2016). Community-level climate change vulnerability research: Trends, progress, and future directions. Environmental Research Letters, 11(1–2), 033001. [Google Scholar]
  50. Mertz O, Mbow C, Reenberg A, & Diouf A (2009). Farmers’ perceptions of climate change and agricultural adaptation strategies in rural Sahel. Environmental Management, 43(5), 804–816. [DOI] [PubMed] [Google Scholar]
  51. Mikulewicz M (2018). Politicizing vulnerability and adaptation: On the need to democratize local responses to climate impacts in developing countries. Climate and Development, 10(1), 18–34. [Google Scholar]
  52. Milan A, & Ruano S (2014). Rainfall variability, food insecurity and migration in Cabricán, Guatemala. Climate and Development, 6(1), 61–68. [Google Scholar]
  53. Mitchell CL, & Laycock KE (2017). Planning for adaptation to climate change: Exploring the climate science-to-practice disconnect. Climate and Development, 1(2019), 60–68. [Google Scholar]
  54. Molnar JJ (2010). Climate change and societal responses: Livelihoods, communities, and the environment. Rural Sociology, 75(1), 1–16. [Google Scholar]
  55. Motsholapheko MR, Kgathi DL, & Vanderpost C (2011). Rural livelihoods and household adaptation to extreme flooding in the Okavango Delta, Botswana. Physics and Chemistry of the Earth, Parts A/B/C, 36(14–15), 984–995. [Google Scholar]
  56. Murali J, & Afifi T (2014). Rainfall variability, food security and human mobility in the Janjgir-Champa district of Chhattisgarh state, India. Climate and Development, 6(1), 28–37. [Google Scholar]
  57. Murray-Hudson M, Wolski P, & Ringrose S (2006). Scenarios of the impact of local and upstream changes in climate and water use on hydro-ecology in the Okavango Delta, Botswana. Journal of Hydrology, 331(1–2), 73–84. [Google Scholar]
  58. Neuenschwander AL, & Crews KA (2008). Disturbance, management, and landscape dynamics. Photogrammetric Engineering & Remote Sensing, 74(6), 753–764. [Google Scholar]
  59. Nielsen JØ, & D’haen SAL (2014). Asking about climate change: Reflections on methodology in qualitative climate change research published in Global Environmental Change since 2000. Global Environmental Change, 24, 402–409. [Google Scholar]
  60. Niles MT, & Mueller ND (2016). Farmer perceptions of climate change: Associations with observed temperature and precipitation trends, irrigation, and climate beliefs. Global Environmental Change, 39, 133–142. [Google Scholar]
  61. Paerregaard K (2018). The climate-development nexus: Using climate voices to prepare adaptation initiatives in the Peruvian Andes. Climate and Development, 10(4), 360–368. [Google Scholar]
  62. Phuong LTH, Biesbroek GR, Sen LTH, & Wals AEJ (2018). Understanding smallholder farmers’ capacity to respond to climate change in a coastal community in Central Vietnam. Climate and Development, 10(8), 701–716. [Google Scholar]
  63. Preston CC, & Colman AM (2000). Optimal number of response categories in rating scales: Reliability, validity, discriminating power, and respondent preferences. Acta Psychologica, 104(1), 1–15. [DOI] [PubMed] [Google Scholar]
  64. Rabe BG, & Borick CP (2012). Fall 2011 national survey of American public opinion on climate change. Issues in Governance Studies, Brookings Institution. Washington, D.C. [Google Scholar]
  65. Rachlinski JJ (2000). The psychology of global climate change. University of Illinois Law Review, 2000(1), 299–320. [Google Scholar]
  66. Rademacher-Schulz C, Schraven B, & Mahama ES (2014). Time matters: Shifting seasonal migration in Northern Ghana in response to rainfall variability and food insecurity. Climate and Development, 6(1), 46–52. [Google Scholar]
  67. Rao N, Lawson ET, Raditloaneng WN, Solomon D, & Angula MN (2017). Gendered vulnerabilities to climate change: Insights from the semi-arid regions of Africa and Asia. Climate and Development, 1(2019), 14–26. [Google Scholar]
  68. Reed SO, Friend R, Jarvie J, Henceroth J, Thinphanga P, Singh D, et al. (2015). Resilience projects as experiments: Implementing climate change resilience in Asian cities. Climate and Development, 7(5), 469–480. [Google Scholar]
  69. Reid H, & Huq S (2014). Mainstreaming community-based adaptation into national and local planning. Climate and Development, 6(4), 291–292. [Google Scholar]
  70. Sakdapolrak P, Promburom P, & Reif A (2014). Why successful in situ adaptation with environmental stress does not prevent people from migrating? Empirical evidence from Northern Thailand. Climate and Development, 6(1), 38–45. [Google Scholar]
  71. Saldaña J (2016). The coding manual for qualitative researchers (3rd ed., p. 339). Thousand Oaks, CA: SAGE. [Google Scholar]
  72. Singh C, Daron J, Bazaz A, Ziervogel G, Spear D, Krishnaswamy J, et al. (2018). The utility of weather and climate information for adaptation decision-making: Current uses and future prospects in Africa and India. Climate and Development, 10(5), 389–405. [Google Scholar]
  73. Spence A, & Pidgeon N (2010). Framing and communicating climate change: The effects of distance and outcome frame manipulations. Global Environmental Change, 20(4), 656–667. [Google Scholar]
  74. Spence A, Poortinga W, Butler C, & Pidgeon NF (2011). Perceptions of climate change and willingness to save energy related to flood experience. Nature Climate Change, 1(1), 46–49. [Google Scholar]
  75. Stedman RC (2004). Risk and climate change: Perceptions of key policy actors in Canada. Risk Analysis, 24(5), 1395–1406. [DOI] [PubMed] [Google Scholar]
  76. Stimson JA (1999). Public opinion in America: Moods, cycles, and swings (2nd ed.). Boulder, Colorado: Westview Press. [Google Scholar]
  77. Stocker TF, Qin D, Plattner GK, Tignor M, Allen SK, Boschung J, Nauels A, Xia Y, Bex V, & Midgley PM (2013). Climate change 2013: The physical science basis. Working Group 1 (WG1) Contribution to the Intergovernmental Panel on Climate Change (IPCC) 5th Assessment Report (AR5). Cambridge, UK; and New York, New York, USA. [Google Scholar]
78. Stott C, & Huq S (2014). Knowledge flows in climate change adaptation: Exploring friction between scales. Climate and Development, 6(4), 382–387.
79. Teklewold H, Mekonnen A, & Kohlin G (2018). Climate change adaptation: A study of multiple climate-smart practices in the Nile Basin of Ethiopia. Climate and Development, 11(2), 180–192.
80. Thomas DS, Twyman C, Osbahr H, & Hewitson B (2007). Adaptation to climate change and variability: Farmer responses to intra-seasonal precipitation trends in South Africa. Climatic Change, 83(3), 301–322.
81. Truelove HB, Carrico AR, Weber EU, Raimi KT, & Vandenbergh MP (2014). Positive and negative spillover of pro-environmental behavior: An integrative review and theoretical framework. Global Environmental Change, 29, 127–138.
82. Tvinnereim E, & Fløttum K (2015). Explaining topic prevalence in answers to open-ended survey questions about climate change. Nature Climate Change, 5(8), 744.
83. Van Aalst MK, Cannon T, & Burton I (2008). Community level adaptation to climate change: The potential role of participatory community risk assessment. Global Environmental Change, 18(1), 165–179.
84. Verburg PH, Dearing JA, Dyke JG, van der Leeuw S, Seitzinger S, Steffen W, et al. (2016). Methods and approaches to modeling the Anthropocene. Global Environmental Change, 39, 328–340.
85. Visschers VH (2018). Public perception of uncertainties within climate change science. Risk Analysis, 38(1), 43–55.
86. Walch C (2018). Adaptive governance in the developing world: Disaster risk reduction in the State of Odisha, India. Climate and Development, 11(3), 238–252.
87. Weber EU (2010). What shapes perceptions of climate change? Wiley Interdisciplinary Reviews: Climate Change, 1(3), 332–342.
88. Weber JM, Hair JF Jr., & Fowler CR (2000). Developing a measure of perceived environmental risk. The Journal of Environmental Education, 32(1), 28–35.
89. Williams SA, & Swanson MS (2001). The effect of reading ability and response formats on patients’ abilities to respond to a patient satisfaction scale. The Journal of Continuing Education in Nursing, 32(2), 60–67.
90. Wolf J, & Moser SC (2011). Individual understandings, perceptions, and engagement with climate change: Insights from in-depth studies across the world. WIREs Climate Change, 2(4), 547–569. 10.1002/wcc.120.
91. Wright H, Vermeulen S, Laganda G, Olupot M, Ampaire E, & Jat ML (2014). Farmers, food and climate change: Ensuring community-based adaptation is mainstreamed into agricultural programmes. Climate and Development, 6(4), 318–328.
92. Yurco K, King B, Young KR, & Crews KA (2017). Human–wildlife interactions and environmental dynamics in the Okavango Delta, Botswana. Society & Natural Resources, 30(9), 1112–1126.