Translational Behavioral Medicine. 2019 Nov 25;10(1):136–145. doi: 10.1093/tbm/ibz170

Conceptualizing and measuring sustainability of prevention programs, policies, and practices

Lawrence A Palinkas 1, Suzanne E Spear 2, Sapna J Mendon 1, Juan Villamar 3, Charles Reynolds 4, Costella D Green 4, Charlotte Olson 4, Audrey Adade 5, C Hendricks Brown 3
PMCID: PMC7020391  PMID: 31764968

Assessing the sustainability of prevention programs and initiatives is better achieved by a standardized process of measuring determinants and outcomes than by using a single standardized instrument.

Keywords: Sustainability, Methodology, Qualitative, Measurement, Behavioral health, Prevention

Abstract

A large knowledge gap exists regarding the measurement of sustainability of evidence-based prevention programs for mental and behavioral health. We interviewed 45 representatives of 10 grantees and 9 program officers within 4 Substance Abuse and Mental Health Services Administration prevention grant initiatives to identify experiences with implementation and sustainability barriers and facilitators; what “sustainability” means and what it will take to sustain their programs; and which Consolidated Framework for Implementation Research (CFIR) elements are important for sustainability. Lists of sustainability determinants and outcomes were then compiled from each data set and compared with one another. Analysis of themes from interviews and free lists revealed considerable overlap between sustainability determinants and outcomes. Four sustainability elements were identified by all three data sets (ongoing coalitions, collaborations, and networks and partnerships; infrastructure and capacity to support sustainability; community need for program; and ongoing evaluation of performance and outcomes), and 11 elements were identified by two of three data sets (availability of funding; consistency with organizational culture; evidence of positive outcomes; development of a plan for implementation and sustainment; presence of a champion; institutionalization and integration of program; institutional support and commitment; community buy-in and support; program continuity; supportive leadership; and opportunities for staff training). All but one of the CFIR domain elements (pressure from other states, tribes, or communities) were endorsed as important to sustainability by 50% or more of participants. It may be more important to implement a standardized process of eliciting determinants and outcomes of sustainability than to implement a single standardized instrument.


Implications.

Policy: By systematically collecting information on the most relevant domains identified here, a funding agency such as Substance Abuse and Mental Health Services Administration could develop a sustainment measurement system that would help achieve higher levels of sustainment once the formal funding period has ended.

Practice: A valid and reliable means of assessing predictors and outcomes of sustainable prevention programs and initiatives could form the basis for improved monitoring and feedback to grantees about their progress toward sustainment during the grant cycle.

Research: It may be more important to implement a standardized process of eliciting determinants and outcomes of sustainability than to implement a standardized instrument containing the same list of sustainability determinants and outcomes.

BACKGROUND

There exist numerous implementation theories, models, and frameworks [1,2], many of which consider sustainability to be the final stage of the process of implementation of evidence-based practices, policies, and programs (EBPs) [3,4]. In recent years, there has been a proliferation of frameworks that focus specifically on sustainability [5–9]. However, despite the growing consensus as to how sustainability should be defined [7–9], the underdeveloped state of measurement of sustainment poses one of the most serious methodological challenges to understanding and facilitating the sustainability of evidence-based practices and programs [6,9,10]. Scheirer and Dearing [7, p. 2060] defined sustainability as “the continued use of program components and activities for the continued achievement of desirable program and population outcomes.” Moore et al. [8] provided a revised definition with five characteristics: (a) after a defined period of time, (b) a program, clinical intervention, and/or implementation strategies continue to be delivered and/or (c) individual behavior change (i.e., clinician, patient) is maintained; (d) the program and individual behavior change may evolve or adapt while (e) continuing to produce benefits for individuals/systems. However, there are no uniform or agreed-upon criteria for determining whether something has been sustained or not [11,12]. This may be due to the fact that what is to be sustained differs from one program to the next. For instance, with respect to the community coalitions supporting drug and suicide prevention activities, some definitions of sustainability focus on the coalition itself, while others focus on the activities and impacts of the coalition [12]. It may also be due to the fact that sustainability is increasingly being viewed as a dynamic process with shifting outcomes [5]. Furthermore, with few exceptions [13–15], most studies of EBP implementation have focused on earlier stages of implementation progress (exploration, adoption, and routine use) and not on sustainability [6,16].

In addition to uncertainty as to how to define sustainability, there is a lack of consensus as to how to measure it. Stirman et al. [17] distinguished between studies that measure the sustainability of a specific intervention and studies that measure a broader ecological approach to sustainability. An illustration of the former approach to sustainability measurement is the Stages of Implementation Completion (SIC), an eight-stage assessment tool developed as part of a large-scale randomized implementation trial [18]. The stages range from “Engagement with the developers” to “Practitioner competency” and map onto three well-accepted phases of implementation (Pre-Implementation, Implementation, and Sustainability); the Sustainability phase is currently measured by only a single Stage 8 certification step. While the SIC has been used successfully as a measure of earlier stages of implementation [19], its ability to measure intervention sustainability across different interventions has not yet been validated [20]. Further, the SIC is designed to measure implementation and sustainability outcomes but not determinants.

An example of a broader ecological approach to measurement is the Program Sustainability Assessment Tool (PSAT) [21,22], which contains 40 items across eight sustainability domains, with five items per domain. The instrument developers reported high internal consistency reliability and some evidence of validity; however, the instrument has been used largely to evaluate chronic disease prevention programs and appeared to perform poorly in assessing a program’s effect on the health attitudes, perceptions, and behaviors of the area it serves (public health impacts). The developers concluded that future research and evaluation work is needed to ascertain the validity and reliability of the instrument across different fields and types of interventions [22]. The PSAT has been used primarily to assess capacity and to plan for sustainability [22,23]. With the exception of the SIC, which measures sustainability as an outcome, most attempts to develop sustainability measures have focused on determinants, or factors that influence sustainability. These include the PSAT, the Program Sustainability Index [23], and the Sustained Implementation Support Scale [24]. To our knowledge, no tool at present assesses both sustainability determinants and outcomes.

The Substance Abuse and Mental Health Services Administration (SAMHSA) supports a wide array of prevention grant initiatives administered by the Center for Substance Abuse Prevention (CSAP) and the Center for Mental Health Services (CMHS), including initiatives for the prevention of behavioral disorders and suicidal behaviors. Each of SAMHSA’s prevention grant initiatives has specific sets of goals and objectives, and each has different prevention approaches to be sustained once funding from SAMHSA ends [25]. However, supporting and monitoring progress toward sustainability for every program funded by these grant initiatives requires a flexible measurement system that assesses sustainability determinants as well as outcomes and identifies both the unique requirements for improving sustainability of each program and the core components of a generalizable sustainability framework spanning diverse prevention approaches. In this study, as a first step in developing such a measurement system, we examined the sustainability of four prevention grant initiatives selected by SAMHSA for their wide diversity in approaches and content. We then used the same investigative procedures across the four grant initiatives to determine what is meant by the term “sustainability” in order to identify and support the requirements for improving sustainability of each grant initiative through the measurement of process and outcomes.

METHODS

Background

Funded by SAMHSA’s CSAP, the Strategic Prevention Framework State Incentive Grant (SPF-SIG) Initiative awarded 60 block grants to states, tribes, and territories and has three goals: (a) prevent the onset and reduce the progression of substance abuse for youth and adults, (b) reduce substance abuse-related problems, and (c) build prevention capacity and infrastructure at the state, tribal, territory, and community levels through SAMHSA’s Strategic Prevention Framework (SPF) steps. These SPF steps require that grantees: (a) assess their prevention needs based on epidemiological data; (b) build their prevention capacity; (c) develop a strategic plan; (d) implement effective community prevention programs, policies, and practices; and (e) evaluate their efforts for outcomes.

The Sober Truth on Preventing Underage Drinking Act (STOP Act) Initiative awarded 696 grants to community organizations. This initiative works to achieve two goals: (a) establish and strengthen collaboration among communities; public and private nonprofit agencies; and federal, state, local, and tribal governments to support the efforts of community coalitions working to demonstrate a long-term commitment to preventing and reducing alcohol use among youth and young adults aged 12–20 and (b) reduce alcohol use among youth by addressing the community factors that increase the risk of alcohol use among youth and promoting the factors that minimize the risk of alcohol abuse.

The Implementing Evidence-Based Prevention Practices in Schools (PPS) Initiative awarded grants directly to 21 school districts to implement the Good Behavior Game (GBG), a universal classroom-based preventive intervention designed to help first-grade students exhibiting early aggressive/disruptive behavior and to reduce the risk of onset of mental health disorders, use of illicit drugs, underage drinking, underage smoking, suicidal ideation, and juvenile justice involvement [26]. Through this classroom management strategy, implemented by classroom teachers, young children learn to work together and develop prosocial behaviors through activities requiring group contingency and subsequent rewards.

The Garrett Lee Smith (GLS) State and Tribal Youth Suicide Prevention Grants provided grant funding to 53 states, tribes, and territories and required that grantees use funds for program development addressing substance abuse and other behavioral health problems (e.g., depression), risks that are directly linked to suicide [27]. The grant initiative is community based and has six goals: (a) increased development and implementation of community-based suicide prevention programs for youth and adults; (b) training for recognition of at-risk behaviors; (c) improvement in access to and linkages with substance abuse and mental health services; (d) improvement and expansion of surveillance of suicide-related outcomes; (e) increased awareness of suicide as a public health problem; and (f) development and implementation of strategies for reducing stigma associated with mental health services and suicide prevention activities. A comparison of the aims, organization, requirements, and target populations of the four grant initiatives is provided in Supplementary Table 1.

Participants

In collaboration with SAMHSA’s CMHS and CSAP associate directors and senior program staff, we identified two or three grantees within each of the four SAMHSA-funded grant initiatives and solicited their study participation. These 10 sites included 2 PPS grantees that had implemented the Good Behavior Game, 3 SPF-SIG grantees, 2 STOP Act grantees, and 3 GLS grantees. These sites were purposefully sampled [28] to reflect diversity with respect to race/ethnicity, geography, quality of evidence supporting funded activities (i.e., the extent to which they are evidence based or “evidence informed”), and perceived level of success in achieving sustainability of program activities, infrastructure, or outcomes. Five of the grantees (two PPS, two SPF-SIG, and one STOP Act) were no longer funded by the respective SAMHSA program initiative. The other five grantees (three GLS, one STOP Act, and one SPF-SIG) were in their first or second year of funding by the respective SAMHSA program initiative. Participation rate for both grantees and grantee representatives invited to participate in the study was 100%.

During a 2–3 day visit at each site, investigators conducted individual semistructured interviews with the grantee principal investigator (PI), the program coordinator, and a minimum of two key informants representing coalition or community partners, purposefully sampled on the basis of the site PI’s assessment of their level of engagement in the program. A total of 45 individuals (14 men and 31 women) were interviewed across the four grant initiatives (6 from PPS, 17 from SPF-SIG, 15 from GLS, and 7 from STOP Act grantees).

Interviews were also conducted with nine government program officers (GPOs) (seven women and two men) representing the four SAMHSA grant initiatives (two from PPS, two from SPF-SIG, three from STOP Act, and two from GLS). All of these interviews were conducted by telephone; responses remain anonymous and are reported only in the aggregate by program.

The study was approved by the appropriate institutional review boards prior to participant recruitment, and written informed consent was obtained prior to conducting interviews. Participants were contacted via email for recruitment to the study. All participants were told that their answers would remain anonymous and only reported in the aggregate by program.

Data collection

The hour-long interviews were conducted with the use of an interview guide and comprised three parts: (a) a series of semistructured questions relating to experience with implementing and sustaining the grantee’s program; (b) a free-list exercise [29]; and (c) a template of Consolidated Framework for Implementation Research (CFIR) domains and components [2]. The CFIR is not designed to assess sustainability outcomes, but because sustainability is often viewed as the final stage of implementation [3,4], we were interested in whether constructs believed to be predictive of successful implementation were also perceived to be predictive of successful sustainability.

In the first part of the interview, a series of semistructured questions were asked about experience with implementation and sustainability, what specifically about their program grantees wanted to sustain, what criteria were used to determine whether the program or programs had been or were likely to be sustained, what was required to sustain the program, barriers and facilitators to sustainability, and recommendations for overcoming barriers. In the second part of the interview, participants were asked to provide as many responses as possible to the following: (a) all of the things you can think of when you hear the words “sustainment” or “sustainability”; (b) all of the things about your program you would like to see sustained [once/now that] SAMHSA funding has come to an end; and (c) all of the things you think will be necessary to ensure that your program is sustained [once/now that] SAMHSA funding has come to an end. In the third part of the interview, participants were asked to rate each of the domains and elements of the CFIR as being unimportant (0), somewhat important (1), important (2), or very important (3) to the sustainability of their program. Participants were also asked to explain the basis for their assessment of each component’s importance to sustainability. Due to time constraints, SAMHSA GPOs were not asked to complete the CFIR checklist.
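For readers who want a concrete picture of how the three interview components could be organized for analysis, the following is a minimal sketch in Python; the field names and example values are hypothetical and are not drawn from the study’s actual data management system.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class InterviewRecord:
    """One participant's data from the three-part interview."""
    participant_id: str
    initiative: str                        # e.g., "SPF-SIG", "STOP Act", "PPS", "GLS"
    transcript: str                        # semistructured component, later coded in Dedoose
    free_lists: Dict[str, List[str]] = field(default_factory=dict)
    # keys: "definitions", "priorities", "requirements" (the three free-list prompts)
    cfir_ratings: Dict[str, int] = field(default_factory=dict)
    # CFIR element -> rating: 0 unimportant ... 3 very important

# A hypothetical record illustrating the structure
record = InterviewRecord(
    participant_id="site03_pi",
    initiative="GLS",
    transcript="...",
    free_lists={
        "definitions": ["ongoing funding", "community buy-in"],
        "priorities": ["gatekeeper training", "coalitions"],
        "requirements": ["funding", "partnerships"],
    },
    cfir_ratings={"needs and resources of population": 3,
                  "pressure from other states": 1},
)
```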

Data analysis

Transcripts of the semistructured interview component were analyzed in the following manner. First, each transcript was reviewed by study investigators to develop a broad understanding of content and to identify topics for discussion and observation. Investigators prepared short descriptive statements to document initial impressions of themes and their relationships and to define the boundaries of specific codes. Second, the empirical material contained in the interviews was independently coded by the investigators to condense the data into analyzable units. Segments of text ranging from a phrase to several paragraphs were assigned codes based on a priori or emergent themes (open coding) [30]. Codes were then assigned to describe connections between categories and between categories and subcategories (axial coding) [30]. Lists of codes developed by each investigator were integrated into a single codebook. Third, each text was independently coded by at least two investigators. Disagreements in assignment or description of codes were resolved through discussion between investigators. The final codebook, constructed through a consensus of team members, consisted of a numbered list of themes, issues, accounts of behaviors, and opinions that relate to organizational and system characteristics that influence sustainability. Fourth, the cloud-based qualitative data management system Dedoose (www.dedoose.com) was used to generate a series of categories arranged in a treelike structure connecting text segments grouped into separate categories of codes or “nodes.” These nodes and trees were used to further the process of axial or pattern coding to examine the association between different a priori and emergent categories and identify the existence of new and specific examples of co-occurrence illustrated with transcript texts. Fifth, by comparing these categories with each other, the different categories were further condensed into broad themes and subthemes.

Responses to the free-list exercise were analyzed in two different ways. First, the procedure of constant comparison was used to identify meaningful clusters of items representing similar constructs. Items were rank ordered based on the number of participants who mentioned them. Items were further weighted based on their rank ordering on the lists of individual participants. However, because the two indices are highly correlated [29], we report here only the percentage of participants who mentioned an item. The lists of items in each of the three response categories (meaning of the term sustainment or sustainability, what program elements should be sustained, and what was required to sustain them) were further compared to determine which elements appeared on more than one list. To gauge the importance of items shared across programs, we sorted these based on the overall rate of elicitation. Endorsement of CFIR elements was assessed as the percentage of participants rating an element as important (2) or very important (3).
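As a worked illustration of the scoring described above, the sketch below computes the percentage of participants mentioning each free-list item and a rank-weighted index (here Smith’s salience index, a standard choice in free-list analysis following Weller and Romney [29]; the paper does not name its exact weighting, so this is an assumption), plus the CFIR endorsement percentage. The item lists and ratings are invented for illustration.

```python
from collections import defaultdict

# Hypothetical free lists: one ordered list of items per participant.
free_lists = [
    ["funding", "coalitions", "community buy-in"],
    ["coalitions", "funding"],
    ["funding", "training"],
]

n = len(free_lists)
mention_pct = defaultdict(float)   # % of participants mentioning the item
salience = defaultdict(float)      # rank-weighted index (Smith's S)

for items in free_lists:
    length = len(items)
    for rank, item in enumerate(items, start=1):
        mention_pct[item] += 100.0 / n
        # Items named earlier on a participant's list receive more weight.
        salience[item] += (length - rank + 1) / length / n

for item in sorted(mention_pct, key=mention_pct.get, reverse=True):
    print(f"{item}: mentioned by {mention_pct[item]:.1f}%, salience {salience[item]:.2f}")

# CFIR endorsement: percentage of participants rating an element
# as important (2) or very important (3) on the 0-3 scale.
ratings = [3, 2, 1, 3, 0, 2]  # hypothetical ratings for one element
endorsement = 100.0 * sum(r >= 2 for r in ratings) / len(ratings)
print(f"Endorsement: {endorsement:.1f}%")
```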

Finally, the data sets containing sustainability determinants and outcomes from each of the three components of the interview (semistructured interview, free lists, and CFIR checklist) were then compared through a process of data triangulation (i.e., determining consistency of findings obtained from different sources of data) [31] to identify items that were elicited from more than one data set. Items were then placed into three groups: (a) those that appeared in only one of the three data sets; (b) those that appeared on two of the three data sets; and (c) those that appeared on all three data sets.
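The triangulation step can be expressed as a simple set operation: count how many of the three data sets an element appears in, then group accordingly. A minimal sketch, with illustrative element names rather than the study’s full code lists:

```python
# Elements elicited from each data source (illustrative subsets only).
interview = {"coalitions", "funding", "champion", "community need"}
free_list = {"coalitions", "funding", "training", "community need"}
cfir      = {"coalitions", "community need", "evaluation"}

sources = [interview, free_list, cfir]
all_elements = set().union(*sources)

# Group elements by the number of data sets in which they appear.
groups = {1: [], 2: [], 3: []}
for element in sorted(all_elements):
    count = sum(element in s for s in sources)
    groups[count].append(element)

for k in (3, 2, 1):
    print(f"Identified by {k} data set(s): {', '.join(groups[k])}")
```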

RESULTS

Semistructured interview themes

Analysis of the transcripts of the semistructured component of the interview revealed two general themes with a number of subthemes. The two primary themes were sustainability determinants and sustainability outcomes. Sustainability determinants included seven subthemes: availability of funding to support program activities and infrastructure, consistency with or fit between the program and the organizational culture of the agency or agencies and supporting coalitions, evidence of positive outcomes associated with the program, development of and adherence to a plan for sustainability at the early stages of program implementation, ongoing involvement of a program champion, organizational capacity to sustain the program, and embedding the program within the institutional framework of the participating organizations. Sustainability outcomes included five subthemes: continued institutional support and commitment, evidence of available and ongoing funding to support the program, continued community support and buy-in, whether the coalition developed to support the program continues to exist in some form, and whether there is evidence that the activities initially supported by the program through SAMHSA funding continue to operate, albeit with a different source of financial support.

Although these two separate themes were based on responses to interview questions related to what participants wished to sustain and what they thought would be required to sustain it, the analysis revealed considerable overlap between determinants and outcomes such that most of the subthemes in each category could be viewed as both a determinant and an outcome. However, outcomes were distinguished by their continued presence and operation once funding from SAMHSA had come to an end. Availability of continued funding was both a determinant and an outcome of sustainability in programs funded by all four SAMHSA grant initiatives. With the exception of PPS programs, a coalition was viewed by most but not all of the programs as a requirement for sustainability and its continued presence and operation were considered as evidence that the program was being sustained. On the other hand, the planning process, presence of a champion, and monitoring and evaluation were viewed as important determinants of sustainability but not viewed as outcomes.

Free-list themes

With respect to definitions of sustainability, funding was mentioned by slightly less than half (46.1%) of the participants. Each of the other defining elements was identified by roughly one in four or one in five participants. Program-specific activities were identified by the majority (84.6%) of participants as the aspect of the program they most wished to see sustained, followed by training needed to conduct those activities (41%) and coalitions, collaborations, and networking (33.3%). Ongoing funding was considered to be the most important determinant of sustainability (59%), followed by coalitions, collaborations, and networking (38.5%) and partnerships (25.6%).

As with the overlapping of the two themes revealed in the analysis of the semistructured component of the interview, there were several items that appeared on all three free lists, including funding, coalitions/collaborations/networking, partnerships, and positive outcomes. Evaluation and monitoring were listed both as a characteristic that defined sustainability and something that should be sustained. Utility/translation/value (i.e., addressing a community need), capacity/infrastructure, and community support/buy-in were listed both as a characteristic that defined sustainability and a determinant of sustainability. Training was listed as a program element that should be sustained and a determinant of sustainability.

CFIR domains

Within the program itself, the characteristics considered to be most important for sustainability were strength and quality of evidence supporting this particular approach (90%) and the ability to adapt the program to meet the needs of the target population and the organizations serving that population (82%). Within the domain of the outer setting of implementation, the characteristics considered to be most important for sustainability were the needs and resources of the population being served (97%) and the degree to which the organization or agency responsible for sustaining the program is networked with other organizations (87%). Within the domain of the inner setting of implementation, seven of the nine items were deemed important by at least three quarters of the respondents, including access to knowledge about the program (92%), the nature and quality of networks and communications between organizations (90%), perception of the current situation as intolerable or needing change (90%), establishment of clear goals and mechanisms for providing feedback (90%), engagement of leaders in implementing and sustaining the program (90%), availability of resources dedicated to implementing and sustaining the program (85%), and shared perception of program importance (77%). Characteristics of individuals involved in the program believed to be important for sustainability included sufficient knowledge of program goals and mechanisms (91%) and self-efficacy for sustainability (84%). Finally, characteristics of the implementation process believed to be important for sustainability included ongoing evaluation of progress made toward implementation and sustainability (95%), presence of opinion leaders in the organization or coalition/partnership (85%), and formally appointed implementation leaders and program champions (82%). The only CFIR element not endorsed as important to sustainability by a majority of participants was pressure from other states, tribes, and communities (21.1%).

Integration of three data sets

Four sustainability elements were identified by all three data sets: (a) ongoing coalitions, collaborations, networks, and partnerships; (b) infrastructure and capacity to support sustainability; (c) community need for program; and (d) ongoing evaluation of performance and outcomes (Table 1). An additional 11 elements were identified by two of three data sets: (a) availability of funding; (b) consistency with organizational culture; (c) evidence of positive outcomes; (d) development of a plan for implementation and sustainment; (e) presence of a champion; (f) institutionalization and integration of program; (g) institutional support and commitment; (h) community buy-in and support; (i) program continuity; (j) supportive leadership; and (k) opportunities for staff training. Each of these 15 elements appeared to be relevant to grantees funded by all four SAMHSA grant initiatives, although the degree of relevance varied somewhat.

Table 1.

Themes and constructs identified from semistructured interviews, free-list exercises, and Consolidated Framework for Implementation Research (CFIR) endorsements

| Construct | Interview themes | Free lists: Definitions (%) | Free lists: Priorities (%) | Free lists: Requirements (%) | CFIR domain and construct | CFIR endorsement (%) |
| --- | --- | --- | --- | --- | --- | --- |
| Identified in all three data sets | | | | | | |
| Coalitions/collaboration/networking | O, D | 25.6 | 33.3 | 38.5 | II: Degree to which an organization is networked with other organizations | 87.2 |
| Partnership | | 28.2 | 10.3 | 25.6 | III: Nature and quality of networks and communications between organizations | 89.5 |
| Evaluation | D | 17.9 | 17.9 | | V: Evaluation of progress made toward implementation and sustainability | 94.7 |
| Infrastructure/capacity | D | 25.6 | | 15.4 | III: Available resources dedicated for implementing and sustaining program | 84.6 |
| | | | | | III: Structural characteristics of organizations responsible for implementing program | 65.8 |
| Need/utility/value | O | 23.1 | | 12.8 | II: Needs and resources of population being served | 97.4 |
| | | | | | III: Perception of current situation as intolerable or needing change | 89.5 |
| Identified in two data sets | | | | | | |
| Funding | O, D | 46.1 | 12.8 | 59.0 | | |
| Consistency with organizational culture | D | | | | III: Shared perception of program importance | 76.9 |
| | | | | | III: Norms, values, and guiding principles | 69.1 |
| Positive outcomes | D | 23.1 | 10.3 | 12.8 | | |
| Planning | D | | | | V: Degree to which tasks for implementing and sustaining the program are developed in advance and the quality of those plans | 72.7 |
| Champion | D | | | | V: Engaging formally appointed internal implementation leaders | 81.6 |
| Institutionalization | D | | | 12.8 | | |
| Institutional support/commitment | O | 17.9 | 20.5 | | | |
| Community support | O | 17.9 | | 17.9 | | |
| Ongoing/continuity | O | 25.6 | | | | |
| Leadership support | | | | 20.5 | III: Engagement of leaders in implementing and sustaining program | 89.5 |
| Training | | | 41.0 | 15.4 | III: Access to knowledge and information about the program | 92.3 |
| | | | | | IV: Knowledge and beliefs about the intervention/program | 91.4 |
| Identified in one data set | | | | | | |
| Program-specific activities | | | 84.6 | | | |
| Approach/strategies | | | 23.1 | | | |
| Media campaign | | | 12.8 | | | |
| Staffing | | | 10.3 | 15.4 | | |
| | | | | | I: Strength and quality of evidence supporting this particular approach | 89.5 |
| | | | | | I: Ability to adapt the program to meet own (local) needs | 82.1 |
| | | | | | I: Costs associated with implementing the program | 72.2 |
| | | | | | I: Perceived difficulty of implementation | 70.3 |
| | | | | | I: Perceived excellence in how program is bundled, presented, and assembled | 68.4 |
| | | | | | I: Ability to test program on a small scale and reverse course if warranted | 67.6 |
| | | | | | I: Relative advantage of implementing versus an alternative solution | 62.2 |
| | | | | | I: Where the idea for the program came from | 61.5 |
| | | | | | II: External policies and incentives | 68.4 |
| | | | | | III: Goals and feedback | 89.7 |
| | | | | | IV: Self-efficacy | 84.2 |
| | | | | | IV: Other personal attributes | 70.0 |
| | | | | | IV: Individual identification with organization | 67.6 |
| | | | | | V: Engaging opinion leaders in organization or coalition/partnership | 84.6 |
| | | | | | V: Engaging external change agents | 73.7 |

Semistructured interview themes: O = outcome; D = determinant. CFIR domains: I = intervention/program; II = outer setting; III = inner setting; IV = characteristics of individuals; V = process.

DISCUSSION

Adopting the recommendation of collaborating with stakeholders to specify key dimensions of sustainability [32], we identified 15 common elements of sustainability of programs funded by four SAMHSA program initiatives. These elements have also been identified in several other studies and sustainability frameworks. For instance, infrastructure and capacity to support sustainability, community buy-in and support, availability of funding and resources, leadership, and presence of a champion are also found in the list of influences on sustainability found in Stirman et al. [17]; the inner and outer contextual factors of the Integrated Sustainability Framework (ISF) [9]; the intervention, practice setting, and ecological system of the Dynamic Sustainability Framework (DSF) [5]; and the constructs of the PSAT [21,22]. Training is also found in the ISF [9] and DSF [5]. Coalitions, collaborations, partnerships, and networks, community need for program, and ongoing evaluation of performance and outcomes are also found in the ISF [9]. Organizational culture is also found in the DSF [5]. Evidence of positive outcomes is also listed by Stirman et al. [17], Moore et al. [8] (i.e., continuing to produce benefits for individuals/systems), the ISF [9], and the DSF [5].

However, many of these studies and frameworks point to elements of sustainability that were not prominent in this study. For instance, several sustainability studies and frameworks have pointed to the ability to adapt an intervention to meet the needs of a specific population or organization [5,9,13,18]. In this study, over 80% of participants endorsed the importance of the CFIR element of the ability to adapt the program to meet one’s own needs; however, adaptation was mentioned by only 5% of participants in the free lists and by none in the semistructured component of the interview. Although adaptation did not emerge as a significant predictor of sustainability in the context of the studies funded by the four SAMHSA grant initiatives, it might be more relevant in other settings and contexts, including low- and middle-income countries [5,6,8]. Communication is a domain of the PSAT but was mentioned (in reference to media campaigns) by only 12.8% of participants as something to be sustained in the free-list exercise. Strategic planning, another domain of the PSAT and an influence listed by Stirman et al. [17], likewise did not figure prominently in this study despite appearing on two lists: it was mentioned by only one participant in the semistructured component of the interview and by 18% of participants in the free-list exercise.

The findings also have important implications for using measures that predict implementation as predictors or determinants of sustainability. Although all of the CFIR domains but one (pressure from other states, tribes, or communities) were identified as important to sustainability by study participants, only a few were “validated” through triangulation with semistructured interview and free-list data, suggesting that not every domain considered to be important for implementation is equally important for sustainability. This finding supports the premise of the Exploration, Preparation, Implementation, and Sustainment (EPIS) framework that features of the inner or outer setting or intervention that predict earlier stages of implementation may not necessarily predict sustainment, the final stage [3].

As noted earlier, “the continued use of program components and activities for the continued achievement of desirable program and population outcomes” [7, p. 2060] is a frequently cited definition of sustainability. In this study, continuity was considered to be an element of sustainability in the free-list exercises; it is not an element of the CFIR. Program components and activities include elements that are specific to each program (e.g., suicide screening and training in the GLS grants, teacher coaching in the PPS grants) and elements that are common across grantees funded by all four SAMHSA grant initiatives, such as continued funding, partnerships, utility or value in addressing a community need, and capacity or infrastructure. Desirable program outcomes include the continued existence of a supportive coalition and community buy-in, while desirable population outcomes include reduced rates of suicide and suicidal behaviors, binge and underage drinking, and disruptive behavior in schools.

However, the conceptualization of sustainability as described by study participants raises the question of whether it should be conceptualized as a process, an outcome, or both. Schell et al. [21] note that determining the point at which a program is sustained may prove difficult given programs’ varying sizes, fidelity, and stage in the life cycle. Scheirer and Dearing [7] point to the definition of sustainability used by some as a set of processes [33,34]. However, they note: “a process definition of sustainability presents challenges for planning research and evaluation on this topic. Without explicit definition of outcome variables, along with measures of hypothesized influences on those outcomes, research often cannot accumulate or disconfirm findings about predictors of sustainability.” They further note: “sustainability, like implementation, is not necessarily a steady state. Thus, although sustainability may usefully be considered as a set of outcomes, it is variable and can unfold as a set of processes that can incorporate recursive learning in an organization and community over time” [7, p. 2060]. We agree that sustainability is a continuing process that has no common endpoint, yet there are specific elements of sustainment that can be used in research to identify whether sustainability is continuing. For example, the continued delivery of program elements 1 year past initial funding would be an important marker of sustainability.

This also raises the question of whether the characteristics of sustainability serve as predictors or as criteria used to determine whether a program is being sustained. For instance, in looking at the free lists of what defined sustainability, what should be sustained, and what was required to sustain those elements, considerable overlap was noted, including continued funding, partnerships, coalitions, community support, utility and value, evaluation, monitoring and data collection, and positive outcomes. The sustainability literature suggests that organizational capacity and support [5,8,12,35], characteristics of the implementers or program being implemented [36–38], and sustainability planning [13] may be key factors in predicting whether an EBP will be sustained [12]. Scheirer and Dearing [7] recommended six criterion or dependent variables to assess outcomes: (a) whether benefits or outcomes for consumers, clients, or patients are continued (when the intervention provides services to individuals), (b) continuing the program activities or components of the original intervention, (c) maintaining community-level partnerships or coalitions developed during the funded program, (d) maintaining new organizational practices, procedures, and policies that were started during program implementation, (e) sustaining attention to the issue or problem, and (f) program diffusion and replication in other sites. In this study, participants made references to all of these criterion variables except for diffusion or replication in other sites. Study participants also cited elements of the three sets of factors that influence sustainability identified by Scheirer and Dearing: (a) characteristics of the intervention, (b) factors in the organizational setting, and (c) factors in the community environment of each intervention site.

Nevertheless, the results of this study suggest that the identification of a variable as a determinant or predictor of sustainability or as an outcome of sustainability depends on the goals of the program as perceived by the grantee and by the funder (i.e., what it is that they want sustained). In sustaining a specific EBP, continued operation of coalitions and networks may be perceived as a means to an end, while, in sustaining a strategic or ecological model of prevention, the coalitions and networks may be perceived as the end in itself. It may, thus, be more important to implement a standardized process of eliciting determinants and outcomes of sustainability, such as the three forms of data collection and triangulation used in this study, than to implement a standardized instrument containing the same list of sustainability determinants and outcomes. This approach is also more consistent with the DSF [5] and the growing consensus of sustainability as a dynamic process with changes in both determinants and outcomes over time.

In considering the findings from this study and their implications for measuring sustainability, several limitations should be kept in mind. First, although the study sample included both specific interventions (Good Behavior Game) and broader ecological approaches to prevention (e.g., SPF), the generalizability of study findings is limited to SAMHSA-funded programs that target prevention of behavioral health problems. Second, due to the small sample, comparisons across the four grant initiatives were unlikely to be statistically significant because of limited power; such a comparison with a larger sample of grantees will appear in a subsequent manuscript. Similarly, the small sample size precluded comparisons of responses by the participant’s role in the grantee program, which may influence perceptions regarding what sustainability is and what factors are important. Such perceptions of sustainability may be biased in some ways and are less preferable than empirically derived predictors of sustainability; however, eliciting them is an important step in the development of a measure of sustainability that can then be empirically validated with a larger sample. Although data triangulation is a tool for validating qualitative data, different forms of elicitation (e.g., semistructured interviews, free lists, and checklists) may affect the likelihood that a specific element does or does not appear in a data set; for instance, the CFIR is designed to measure implementation determinants and not outcomes. Finally, potentially important elements of sustainability may not have been recognized by grantees that had not yet reached the end of their SAMHSA funding, and perceptions about sustainability determinants and outcomes were not necessarily based on experience with having to sustain their currently funded programs.

CONCLUSIONS

Despite these limitations, this qualitative evaluation of what sustainability of prevention programs and initiatives means and how it should be measured identified 15 common elements across grantees funded by four SAMHSA prevention grant initiatives, as well as several program-specific elements. The common elements reflect both determinants and outcomes of sustainability. Future research will examine the validity and reliability of these constructs as either dependent criteria of sustainability outcomes, independent predictors of sustainability determinants, or both.

Supplementary Material

ibz170_suppl_Supplementary_Table_S1

Acknowledgments

We thank our colleagues in the Substance Abuse and Mental Health Services Administration (SAMHSA) for their help in connecting with their grantees and for the grantees themselves for the time and information they provided for this project. The content of this article is solely the responsibility of the authors and does not necessarily represent the official views of the funding agency, SAMHSA, or Community Anti-Drug Coalitions of America (CADCA).

Funding: This work is supported by the National Institute on Drug Abuse (1 R34 DA037516-01A1 and P30 DA02878).

Compliance with Ethical Standards

Conflicts of Interest: All authors declare that they have no conflicts of interest.

Authors’ Contributions:

Overall integrity of the work from inception to publication: L.A.P. Design and acquisition of data: L.A.P., S.E.S., S.J.M., and J.V. Analysis and interpretation of the data: L.A.P., S.E.S., S.J.M., J.V., and C.H.B. Preparation and review of manuscript for important intellectual content: L.A.P., S.E.S., S.J.M., J.V., C.H.B., C.R., C.D.G., C.O., and A.A. Final approval of the version to be published: L.A.P., S.E.S., S.J.M., J.V., C.H.B., C.R., C.D.G., C.O., and A.A.

Ethical Approval: All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. The study was approved by the Institutional Review Boards at the University of Southern California and Northwestern University prior to participant recruitment. This article does not contain any studies with animals performed by any of the authors.

Informed Consent: Informed consent was obtained from all individual participants included in the study.

References

1. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.
2. Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10:53.
3. Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Adm Policy Ment Health. 2011;38(1):4–23.
4. Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: Systematic review and recommendations. Milbank Q. 2004;82(4):581–629.
5. Chambers DA, Glasgow RE, Stange KC. The dynamic sustainability framework: Addressing the paradox of sustainment amid ongoing change. Implement Sci. 2013;8:117.
6. Proctor E, Luke D, Calhoun A, et al. Sustainability of evidence-based healthcare: Research agenda, methodological advances, and infrastructure support. Implement Sci. 2015;10:88.
7. Scheirer MA, Dearing JW. An agenda for research on the sustainability of public health programs. Am J Public Health. 2011;101(11):2059–2067.
8. Moore JE, Mascarenhas A, Bain J, Straus SE. Developing a comprehensive definition of sustainability. Implement Sci. 2017;12(1):110.
9. Shelton RC, Cooper BR, Stirman SW. The sustainability of evidence-based interventions and practices in public health and health care. Annu Rev Public Health. 2018;39:55–76.
10. Walugembe DR, Sibbald S, Le Ber MJ, Kothari A. Sustainability of public health interventions: Where are the gaps? Health Res Policy Syst. 2019;17(1):8.
11. Scheirer MA. Linking sustainability research to intervention types. Am J Public Health. 2013;103(4):e73–e80.
12. Cooper BR, Bumbarger BK, Moore JE. Sustaining evidence-based prevention programs: Correlates in a large-scale dissemination initiative. Prev Sci. 2015;16(1):145–157.
13. Feinberg ME, Bontempo DE, Greenberg MT. Predictors and level of sustainability of community prevention coalitions. Am J Prev Med. 2008;34(6):495–501.
14. Gloppen KM, Arthur MW, Hawkins JD, Shapiro VB. Sustainability of the communities that care prevention system by coalitions participating in the community youth development study. J Adolesc Health. 2012;51(3):259–264.
15. Rhew IC, Brown EC, Hawkins JD, Briney JS. Sustained effects of the communities that care system on prevention service system transformation. Am J Public Health. 2013;103(3):529–535.
16. Johnson AM, Moore JE, Chambers DA, Rup J, Dinyarian C, Straus SE. How do researchers conceptualize and plan for the sustainability of their NIH R01 implementation projects? Implement Sci. 2019;14(1):50.
17. Wiltsey Stirman S, Kimberly J, Cook N, Calloway A, Castro F, Charns M. The sustainability of new programs and innovations: A review of the empirical literature and recommendations for future research. Implement Sci. 2012;7:17.
18. Chamberlain P, Brown CH, Saldana L. Observational measure of implementation progress in community based settings: The Stages of Implementation Completion (SIC). Implement Sci. 2011;6:116.
19. Saldana L, Chamberlain P, Wang W, Hendricks Brown C. Predicting program start-up using the stages of implementation measure. Adm Policy Ment Health. 2012;39(6):419–425.
20. Saldana L. The stages of implementation completion for evidence-based practice: Protocol for a mixed methods study. Implement Sci. 2014;9(1):43.
21. Schell SF, Luke DA, Schooley MW, et al. Public health program capacity for sustainability: A new framework. Implement Sci. 2013;8:15.
22. Luke DA, Calhoun A, Robichaux CB, Elliott MB, Moreland-Russell S. The program sustainability assessment tool: A new instrument for public health programs. Prev Chronic Dis. 2014;11:130184.
23. Calhoun A, Mainor A, Moreland-Russell S, Maier RC, Brossart L, Luke DA. Using the program sustainability assessment tool to assess and plan for sustainability. Prev Chronic Dis. 2014;11:130185.
24. Hodge LM, Turner KMT, Sanders MR, Filus A. Sustained implementation support scale: Validation of a measure of program characteristics and workplace functioning for sustained program implementation. J Behav Health Serv Res. 2017;44(3):442–464.
25. Palinkas LA, Spear SE, Mendon SJ, et al. Measuring sustainability of prevention programs and initiatives: A study protocol. Implement Sci. 2016;11(1):95.
26. Kellam SG, Brown CH, Poduska JM, et al. Effects of a universal classroom behavior management program in first and second grades on young adult behavioral, psychiatric, and social outcomes. Drug Alcohol Depend. 2008;95(suppl 1):S5–S28.
27. Goldston DB, Walrath CM, McKeon R, et al. The Garrett Lee Smith memorial suicide prevention program. Suicide Life Threat Behav. 2010;40(3):245–256.
28. Palinkas LA, Horwitz SM, Green CA, Wisdom JP, Duan N, Hoagwood K. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Adm Policy Ment Health. 2015;42(5):533–544.
29. Weller SC, Romney AK. Systematic Data Collection. Newbury Park, CA: Sage; 1988.
30. Strauss AL, Corbin J. Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory. Thousand Oaks, CA: Sage; 1998.
31. Patton MQ. Qualitative Evaluation and Research Methods. 3rd ed. Thousand Oaks, CA: Sage; 2002.
32. Shelton RC, Lee M. Sustaining evidence-based interventions and policies: Recent innovations and future directions in implementation science. Am J Public Health. 2019;109(S2):S132–S134.
33. Johnson K, Hays C, Center H, Daley C. Building capacity and sustainable prevention innovations: A sustainability planning model. Eval Program Plann. 2004;27:135–149.
34. Pluye P, Potvin L, Denis JL, Pelletier J. Program sustainability: Focus on organizational routines. Health Promot Int. 2004;19(4):489–500.
35. Tibbits MK, Bumbarger BK, Kyler SJ, Perkins DF. Sustaining evidence-based interventions under real-world conditions: Results from a large-scale diffusion project. Prev Sci. 2010;11(3):252–262.
36. Han SS, Weiss B. Sustainability of teacher implementation of school-based mental health programs. J Abnorm Child Psychol. 2005;33(6):665–679.
37. Scheirer MA. Is sustainability possible? A review and commentary on empirical studies of program sustainability. Am J Evaluation. 2005;26:320–347.
38. Mancini JA, Marek LI. Sustaining community-based programs for families: Conceptualization and measurement. Fam Relat. 2004;53:339–347.
