
This is a preprint.

It has not yet been peer reviewed by a journal.


Research Square
[Preprint]. 2023 May 19:rs.3.rs-2783056. [Version 1] doi: 10.21203/rs.3.rs-2783056/v1

Action Planning for Building Program Sustainability: Results from a Group-Randomized Trial

Sarah Moreland-Russell 1, Todd Combs 2, Jessica Gannon 3, Eliot Jost 4, Louise Farah Saliba 5, Kimberly Prewitt 6, Douglas Luke 7, Ross C Brownson 8
PMCID: PMC10246119  PMID: 37292588

Abstract

Background:

Public health programs are charged with implementing evidence-based interventions to support public health improvement; however, to achieve long-term, population-based benefit, these interventions must be sustained. Empirical evidence suggests that program sustainability can be improved through training and technical assistance, but few resources are available to support public health programs in building capacity for sustainability.

Methods:

This study sought to build capacity for sustainability among state tobacco control programs through a multiyear, group-randomized trial that developed, tested, and evaluated a novel Program Sustainability Action Planning Model and Training Curricula. Using Kolb’s experiential learning theory, we developed this action-oriented training model to address the program-related domains proven to impact capacity for sustainability as outlined in the Program Sustainability Framework. We evaluated the intervention with a longitudinal mixed-effects model using Program Sustainability Assessment Tool (PSAT) scores from three time points. The main predictors in our model were group (control vs. intervention) and type of dosage (active and passive). Covariates included state-level American Lung Association score (a proxy for the tobacco control policy environment) and percent of CDC-recommended funding (a proxy for program resources).

Results:

Twenty-three of the 24 state tobacco control programs were included in the analyses: 11 received the training intervention and 12 served as controls. Results of the longitudinal mixed-effects linear regression model, with the annual PSAT score as the outcome, showed that states in the intervention condition reported significantly higher PSAT scores. The effects of CDC-recommended funding and American Lung Association smoke-free scores (a proxy for the policy environment) were small but statistically significant.

Conclusion:

This study found that the Program Sustainability Action Planning Model and Training Curricula was effective in building capacity for sustainability. The training was most beneficial for programs that had made less policy progress than others, implying that tailored training may be most appropriate for programs struggling to make progress. Finally, while funding had a small, statistically significant effect in our model, it made virtually no difference for the average program in our study. This suggests that other factors may be equally or more important than the level of funding a program receives.

Trial registration:

NCT03598114, Registered on July 26, 2018 (clinicaltrials.gov/NCT03598114)

Keywords: sustainability, action planning, sustainability plans

Introduction

Public health programs are charged with implementing evidence-based interventions to support public health improvement. For a population to receive the full benefits of implementing an evidence-based intervention, the intervention must be sustained over time. While empirical evidence has established that program sustainability can be improved through training and technical assistance,(1, 2) few resources are available to support public health programs in building capacity for sustainability. To date, no evidence-based sustainability training curriculum exists to assist public health programs.

Sustainability is the presence of adaptive structures and processes which enable a program to effectively implement and institutionalize evidence-based policies and activities over time.(3) This definition goes beyond the characteristics of the program itself and encompasses the organizational and system characteristics surrounding it. There is a growing body of research on the factors affecting sustainability,(1, 4–9) but sparse work has been done to translate the components of program sustainability capacity into practical guides and tools that help practitioners plan how best to increase their capacity for sustaining evidence-based programs and policies.(1, 10)

Only a few conceptual models focus exclusively on the ‘how’, or the programmatic process, of building capacity for sustainability. The Dynamic Sustainability Framework offered by Chambers et al. considers the context in which an evidence-based intervention is implemented and operationalized within a system.(11) However, it does not offer an explicit implementation strategy or mechanism based on the alignment of programs with their contexts, nor any detailed strategies for actually sustaining programs once they have been implemented. May et al.’s Normalization Process Theory explains how new ideas, ways of acting, and ways of working become routinely embedded, or normalized, in practice settings.(12) It has been utilized in studying program implementation and sustainability(13) and found useful in identifying processes likely to enhance sustainability, but again it does not offer a mechanism by which programs can engage to improve sustainability.

The Program Sustainability Framework,(14) which was utilized for our study, outlines eight domains of sustainability: organizational capacity, funding stability, strategic planning, external environment, partnerships, communication, program adaptation, and program evaluation. These domains have been proven to affect the capacity for sustainability among public health programs;(3) however, how these domains interact to improve program sustainability, and whether success in one domain improves capacity in other domains, is not yet understood. In addition, while these frameworks exist, few are actually referenced in implementation research; few researchers funded by the National Institutes of Health have referenced frameworks with sustainability constructs, and those who did offered limited information on how they operationalized the frameworks (Johnson, 2019).

The “how” of planning for sustainability, or of increasing programmatic capacity for sustainability, has become increasingly important in the past five years as funders have become more concerned with, or have outright required, sustainability plans.(15) For example, the Centers for Disease Control and Prevention (CDC)’s Office on Smoking and Health has required all state-level tobacco control programs that it supports (DP15-1509 funding announcement) to design and implement a sustainability plan. However, little has been done to translate the components of program sustainability capacity into practical guides and tools for public health practitioners. Empirical evidence has established that program sustainability can be improved through in-person, hands-on, action-oriented training and technical assistance.(1, 2, 16, 17) Research also highlights the importance of creating an action plan to move sustainability progress forward, and such planning has been shown to predict program survival and post-launch funding;(18) however, to date, no evidence-based sustainability training curriculum exists. Because state tobacco control program funding, like that of several other public health programs, is consistently at risk of being diminished or eliminated,(19, 20) it is important for state programs to engage in planning for sustainability. In addition, state tobacco control programming involves comprehensive plans, implementation of multiple interventions (health communications, cessation, policy, etc.), and many types of stakeholders, including coalitions and state- and local-level partners. There is therefore an immense need to use the Program Sustainability Framework to understand the various components of these programs and to develop an action-oriented planning intervention for improving these programs’ capacity for sustainability.

The Plans, Actions, and Capacity to Sustain Tobacco Control (PACT) study sought to build capacity for sustainability among evidence-based state tobacco control programs (TCPs) through a multiyear, group-randomized trial that developed, tested, and evaluated a novel Program Sustainability Action Planning Model and Training Curricula.(21) Using Kolb’s experiential learning theory,(22) we developed this action-oriented training model to address the internal and external program-related domains proven to impact capacity for sustainability of public health programs as outlined in the Program Sustainability Framework.(3) This paper aims to evaluate the effectiveness of the Program Sustainability Action Planning Model and Training Curricula. To accomplish this, we employed the following research questions:

  1. Will the intervention group state TCPs increase their capacity for sustainability more than the control group state TCPs?

  2. Does the amount of dosage (i.e., active engagement time) have an effect on the sustainability outcomes measured?

  3. Is the Program Sustainability Action Planning Model and Training Curricula(21) more effective when provided in states with lower tobacco control policy progress than those with higher policy progress?

Methods

The PACT study utilized a multiphase outcome evaluation incorporating a group-randomized experimental design to test the effectiveness of a novel intervention, the Program Sustainability Action Planning Model and Training Curricula, in increasing the capacity for sustainability among state-level tobacco control programs. This study was approved by the Institutional Review Board of Washington University in St. Louis (reference number 201801196). This study also received approval under Washington University’s Protocol Review and Monitoring Committee. This study was also registered retrospectively on July 26, 2018, as a clinical trial (ClinicalTrials.gov/NCT03598114).

Intervention Development and implementation

The primary goal of the PACT was to provide an in-person, manualized training for sustainability action planning and assessment in public health programs. We used a multiphase approach over five years (2018–2023) to develop and implement an assessment of the effectiveness of the Program Sustainability Action Planning Model and Training Curricula. In the first phase of the PACT study, the intervention was developed through a rigorous multidisciplinary literature review process and a series of expert consultations. We used the SCOPUS, ERIC (ProQuest), PubMed, Education Full Text, and PsycINFO databases to conduct formative reviews informing the development and evaluation of the training intervention. Specifically, we performed literature reviews regarding experiential models of learning (i.e., duration and components) and technical assistance (type and duration) to design the intervention. To design the evaluation of the intervention, we conducted formative reviews to assess previous metrics of experiential learning and technical assistance effectiveness. We also consulted with two academic experts in sustainability, two state tobacco control program directors, and three officials from the CDC Office on Smoking and Health to determine the final Program Sustainability Action Planning Model and Training Curricula.

In the second phase of this study, a multiyear, group-randomized trial was conducted to assess the effectiveness of the intervention in improving the capacity for sustainability among state-level tobacco control programs (TCPs). Ultimately, 11 intervention and 12 control TCPs participated. The Program Sustainability Action Planning Model and Training Curricula was delivered to the 11 intervention TCPs. The intervention consisted of a two-day workshop to design a program sustainability action plan, two years of tailored technical assistance for implementing the action plan, and sustainability outcome assessment. Workshop participants actively engaged in developing state TCP-specific sustainability action plans. Each state action plan outlined one or two domain-focused objectives, matched with time-specific activities to be shared across the stakeholders present. One person at each workshop claimed responsibility for overseeing the implementation process. Sustainability plans were designed to be implemented over the course of two years. All Program Sustainability Action Planning Training workshops followed the same structure but were tailored to each state depending on the Program Sustainability Framework domain chosen for the action plan. The two-day workshop involved the TCP staff as well as a number of stakeholders (i.e., advocates, coalition members, voluntary organizations, grantees, local-level health department staff) actively participating to design a sustainability action plan and develop an implementation strategy. Inclusion of and participation by all engaged stakeholders was an important component of the sustainability action plan development process; tailored workshops at baseline and ongoing, robust technical assistance across the three-year participation ensured that all components of the state TCPs were considered.(23)

Our main hypotheses for the trial included:

  1. H1: Intervention group states will increase their capacity for sustainability more than the control group.

  2. H2: There will be a positive interaction effect between group and the amount of dosage, meaning those in the intervention group will benefit more as dosage increases.

  3. H3: The intervention will be more effective for states with lower policy progress (as proxied by the ALA smoke-free score) than those with higher policy progress.

Participating states and randomization

Our original sample consisted of the 50 US state tobacco control programs. Power analyses (at α = 0.05) revealed that between 9 (power = 0.8) and 12 (power = 0.9) states per group (control and intervention) would be appropriate. To randomize the two groups, we stratified the 50 states into four quadrants based on states’ needs (measured as adult smoking rates) and tobacco control policy environments (measured as 2015 American Lung Association (ALA) smoke-free scores).(24) The ALA score is a grade assigned to all 50 US states and the federal government that assesses the state of tobacco control on four key tobacco control policies: tobacco control and prevention spending, smokefree air, tobacco taxes, and cessation coverage. In Fig. 1, smoking rates are on the x-axis and ALA scores are on the y-axis. We created the quadrants using the mean scores (black horizontal and vertical lines). The state markers are sized by the percentage of CDC-recommended funding the states spend. We chose three states with different degrees of meeting the percentage of CDC-recommended funding(25) from each quadrant. We then chose the closest match (pair) for each chosen state based on the three characteristics displayed. Finally, we randomized states by pairs into the control or intervention group.
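The stratification and pair-randomization steps above can be sketched as follows. The state values and the nearest-match heuristic here are purely illustrative assumptions, not the study's actual data or matching algorithm:

```python
import random
from statistics import mean

# Hypothetical state data: (name, adult smoking rate, ALA smoke-free score,
# % of CDC-recommended funding spent). Values are illustrative only.
states = [("A", 21.0, 55, 40), ("B", 14.5, 62, 75), ("C", 23.1, 30, 15),
          ("D", 15.2, 28, 20), ("E", 20.4, 58, 35), ("F", 14.9, 60, 70),
          ("G", 22.7, 33, 18), ("H", 15.5, 25, 22)]

smoke_mean = mean(s[1] for s in states)
ala_mean = mean(s[2] for s in states)

# Stratify into four quadrants split at the mean smoking rate and mean ALA score
def quadrant(state):
    return (state[1] >= smoke_mean, state[2] >= ala_mean)

# Simplified "closest match" pairing: sort within a quadrant on the three
# characteristics and take adjacent states as pairs (a stand-in heuristic)
def closest_pairs(group):
    pool = sorted(group, key=lambda s: (s[1], s[2], s[3]))
    return [pool[i:i + 2] for i in range(0, len(pool) - 1, 2)]

random.seed(42)
quads, assignment = {}, {}
for s in states:
    quads.setdefault(quadrant(s), []).append(s)
for members in quads.values():
    for pair in closest_pairs(members):
        if len(pair) == 2:
            # Randomize each matched pair: one state to each condition
            flip = random.random() < 0.5
            assignment[pair[0][0]] = "intervention" if flip else "control"
            assignment[pair[1][0]] = "control" if flip else "intervention"
print(assignment)
```

Pair-wise randomization guarantees the two groups stay balanced on the stratification variables regardless of the coin flips.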

Figure 1.

Figure 1

Quadrant stratification for state selection

Measures

Data metrics were defined following recommendations from the advisory board and tobacco control experts and included organizational indicators, Program Sustainability Assessment Tool (PSAT) scores, and intervention dosage. Organizational data were collected via record abstraction from annual state-level reports to the CDC Office on Smoking and Health. These reports address fulfillment criteria for the DP15-1509 funding announcement and describe the infrastructure, personnel, and activities of state tobacco control programs in detail. The funding announcements are a requirement set by the CDC for state programs to complete yearly reports of progress, goals, and challenges in order to receive federal funding. In addition to the CDC reports, other data were collected via secondary data sources, including the ALA’s annual State of Tobacco Control report(24) and the annual Healthy Americans report issued by Trust for America’s Health. The specific organizational metrics collected are described in a previously published manuscript.(21)

In addition, we collected two primary sources of data. First, because it was not feasible to collect all data points through CDC program records, the study team developed a key informant interview tool to collect the remaining information. The interviews were conducted by phone with state program managers or another qualified surrogate and lasted 15–20 minutes. Responses were recorded, transcribed, and reviewed for completeness and accuracy. An online Qualtrics survey was developed with identical questions for the convenience of state program managers who preferred not to complete a phone interview.

Data from the Program Sustainability Assessment Tool (PSAT) were also collected at three time points (baseline, 1-year post intervention, and 2-years post intervention). The PSAT consists of 40 seven-point Likert-scale items organized into the eight domains of the program sustainability framework (environmental support, funding stability, partnerships, organizational capacity, program evaluation, program adaptation, communications, and strategic planning). The PSAT was emailed to all stakeholders who participated in the sustainability action planning process in each state. The range of participants per state was 7–15. To complete the PSAT, respondents rated the extent (from 1, little or no extent, to 7, a very great extent) to which the program has or does what the item describes (e.g., “Diverse community organizations are invested in the success of the program”). We calculated state-specific means for each of the 40 items. State-specific domain scores were obtained by averaging item scores within a domain. Overall domain scores were obtained by averaging the scores from all participating stakeholders for each domain, and standard deviations were calculated to show variability by state. These scores were used as the outcome in our analyses. The PSAT is a reliable instrument developed to evaluate capacity for sustainability of public health, social service, and clinical care programs.(2, 26)
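The scoring procedure just described can be sketched in a few lines. The respondent ratings below are hypothetical, and only two of the eight domains (with two items each rather than five) are shown for brevity:

```python
from statistics import mean

# Hypothetical input for one state: each of the PSAT items carries its domain
# label and the 1-7 ratings given by each stakeholder respondent
items = [("funding stability", [4, 5, 3]), ("funding stability", [5, 6, 4]),
         ("partnerships", [6, 6, 5]), ("partnerships", [4, 5, 5])]

# Step 1: state-specific mean for each item (average across respondents)
item_means = [(domain, mean(ratings)) for domain, ratings in items]

# Step 2: domain score = average of that domain's item means
domains = {d for d, _ in item_means}
domain_scores = {d: mean(m for dd, m in item_means if dd == d) for d in domains}

# Step 3: overall PSAT score = average of the domain scores
overall = mean(domain_scores.values())
print(round(overall, 2))  # → 4.83
```

With all respondents answering every item, averaging respondents first and then items within a domain gives the same domain score as averaging in the opposite order.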

Active dosage was measured in hours spent in sustainability training, technical assistance, or workshops delivered in-person or virtually. All programs (including control and intervention state TCPs) were given access to online sustainability resources (https://sustaintool.org/psat/resources/ and https://prcstl.wustl.edu/pact-resources/), referred to as passive dosage. A summary of intervention and control group activities can be seen in Table 1.

Table 1.

Intervention and control group activities

Activity | Frequency | Intervention | Control
Initial PSAT meeting (active dosage) | 1 time (baseline) | ✓ | ✓
Annual PSAT assessment | 2 times (years 1, 2) | ✓ | ✓
Sustaintool.org resource access (passive dosage) | Ongoing | ✓ | ✓
Technical assistance (active dosage) | 10–12 times (3–4/year) | ✓ | --
Two-day Sustainability Action Planning Training (active dosage) | 1 time | ✓ | --

Data and Analyses

We tested these hypotheses using longitudinal mixed-effects models with data from the three annual time points of the intervention. The main predictors were group (Hypothesis 1) and two types of dosage (active and passive). Active dosage was measured in contact hours spent in sustainability training, technical assistance, or workshops delivered in-person or virtually. Passive dosage was measured as binary (0 = no resource use, 1 = any resource use), as reported in annual surveys of programs. Other covariates included the percentage of CDC-recommended funding(27) (as a proxy for level of program resources), ALA smoke-free score(28) (as a proxy for tobacco control policy progress), and program manager tenure, as reported in annual surveys, to represent program staff turnover or stability. This variable was measured categorically (vacant, less than 1 year, 1–3 years, 3–5 years, and more than 5 years). In addition, we included interaction terms between group and each type of dosage (Hypothesis 2), and one between group and ALA smoke-free score (Hypothesis 3).
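A minimal sketch of this model specification, using statsmodels on synthetic data, is shown below. The variable names are ours, the data are fabricated for illustration, and the program manager tenure covariate is omitted for brevity:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic panel: 23 state programs (11 intervention, 12 control) x 3 years
rng = np.random.default_rng(1)
rows = []
for state in range(23):
    group = int(state < 11)
    ala = rng.uniform(10, 60)            # ALA smoke-free score (policy proxy)
    for year in range(3):                # baseline, year 1, year 2
        rows.append(dict(
            state=state, year=year, group=group, ala=ala,
            active=rng.uniform(0, 8) if group else rng.uniform(0, 1),
            passive=int(rng.integers(0, 2)),   # any online resource use
            funding=rng.uniform(5, 80),        # % of CDC-recommended funding
            psat=4.4 + 0.3 * group + rng.normal(0, 0.3)))
df = pd.DataFrame(rows)

# Longitudinal mixed-effects model: random intercept per state program;
# fixed effects mirror the predictors and interactions described above
# (group x each dosage type, group x ALA score, plus funding)
model = smf.mixedlm(
    "psat ~ group * active + group * passive + group * ala + funding",
    df, groups=df["state"])
fit = model.fit()
print(fit.params["group"])
```

The random intercept absorbs stable between-state differences so that the fixed effects are estimated from the repeated annual measures.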

Results

Twenty-three of the 24 state programs were included in the analyses; one state dropped out of the study before data could be collected. Descriptive statistics are shown in Table 2. Average PSAT scores increased from 4.6 (SD: 0.4) to 4.8 (SD: 0.7) for the intervention group and from 4.4 (SD: 0.7) to 4.7 (SD: 0.7) for the control group. Active dosage hours ranged from 1.1 to 7.6 for the intervention group and from 0 to 0.8 for the control group. Average ALA smoke-free scores and their variances were similar across groups and years, as were percentages of CDC-recommended funding. In years 1 and 2 respectively, 3 and 5 programs in the intervention group took advantage of Sustaintool.org resources, compared with 4 and 1 control programs in the same years. Across years, most programs had managers with at least 1 year of experience: Year 0, 21 of 23 (91%); Year 1, 19 (83%); Year 2, 21 (91%).

Table 2.

Descriptive statistics for variables used in the regression model with 11 state programs in the intervention group and 12 in the control group for a total of 69 program-years.

Variable | Year 0 | Year 1 | Year 2
Continuous variables: Mean (SD)
PSAT, Intervention | 4.6 (0.4) | 4.5 (0.7) | 4.8 (0.7)
PSAT, Control | 4.4 (0.7) | 4.7 (0.9) | 4.7 (0.7)
Dosage: Active, Intervention | 7.6 (1.2) | 1.5 (1.1) | 1.5 (1.1)
Dosage: Active, Control | 0.8 (0.0) | 0.0 (0.0) | 0.4 (0.7)
ALA score, Intervention | 35.3 (10.9) | 34.7 (11.4) | 34.7 (11.4)
ALA score, Control | 32.1 (15.4) | 33.4 (15.0) | 33.6 (15.1)
CDC rec. funding (%), Intervention | 22.9 (18.2) | 21.7 (14.9) | 26.5 (25.6)
CDC rec. funding (%), Control | 27.6 (27.6) | 24.4 (26.1) | 25.5 (27.3)
Categorical variables: n (%)
Dosage: Passive: None, Intervention | 11 (100) | 8 (73) | 6 (55)
Dosage: Passive: None, Control | 12 (100) | 8 (67) | 11 (92)
Dosage: Passive: Any, Intervention | -- | 3 (27) | 5 (45)
Dosage: Passive: Any, Control | -- | 4 (33) | 1 (8)
PM tenure: Vacant, Intervention | 1 (9) | -- | --
PM tenure: Vacant, Control | -- | -- | --
PM tenure: < 1 year, Intervention | -- | 4 (36) | 2 (18)
PM tenure: < 1 year, Control | 1 (8) | 1 (8) | 2 (17)
PM tenure: 1–3 years, Intervention | 5 (45) | -- | 2 (18)
PM tenure: 1–3 years, Control | 1 (8) | 2 (17) | 2 (17)
PM tenure: 3–5 years, Intervention | 5 (45) | 3 (27) | 2 (18)
PM tenure: 3–5 years, Control | 4 (33) | 1 (8) | --
PM tenure: > 5 years, Intervention | -- | -- | 5 (45)
PM tenure: > 5 years, Control | 6 (50) | 8 (67) | 8 (67)

Notes: “--” = 0 programs in category; ALA = American Lung Association, PM = program manager.

Table 3 contains the results of the longitudinal mixed-effects linear regression model, with the annual PSAT score as the outcome. States in the intervention condition reported significantly higher PSAT scores, suggesting greater capacity for sustainability after receiving the PACT training. The effects of CDC-recommended funding and ALA smoke-free scores were small but statistically significant, indicating that (1) as a program’s funding rose by 1 percentage point, its PSAT score would increase by 0.01 (95% CI: 0.01–0.02), and (2) as a program’s ALA score increased by 1 (regardless of group), its PSAT score would increase by 0.04 (95% CI: 0.02–0.05), all else equal. Finally, in states with a higher ALA score, and therefore a stronger policy environment, the intervention mattered less; this effect is explored further below. The remaining variables were not statistically significant. We also experimented with collapsing the program manager tenure variable to three and four categories; results were similar and remained statistically nonsignificant.

To complement the results in Table 3, Fig. 2 illustrates the influence of the percentage of CDC-recommended funding that a program receives and of the ALA score on a program’s capacity for sustainability, after controlling for all other covariates. The left panel examines funding and illustrates that, for the average state program, funding levels had virtually no influence on capacity for sustainability. The right panel examines ALA scores, as a measure of strength of tobacco control policy, and indicates that the difference between groups – or the effect of being in the intervention group – was larger for programs with relatively low ALA scores. After a program’s score passes a threshold around 20, the impact of the score on its capacity for sustainability diminishes.

Figure 2.

Figure 2

Predicted differences between PSAT scores (Intervention minus Control) across the ranges of CDC-recommended funding percentage and ALA smoke-free score. All other effects held constant at mean or mode.

Discussion

This study is significant for developing the first evidence-based training, the Program Sustainability Action Planning Model and Training Curricula, to increase the sustainability capacity of tobacco control programs. There is a growing body of research on factors affecting sustainability;(4, 6–9, 29) however, little has been done to translate the components of program sustainability capacity into practical guides and tools for practitioner use. We developed the Program Sustainability Action Planning Model and Training Curricula based on expert consultation, extensive literature reviews, Kolb’s experiential learning model, and the Program Sustainability Framework. The main goal was to show that tailored training involving experiential learning and action planning could be effective in increasing the capacity for sustainability of recipient programs. We also hypothesized that the amount of dosage – defined mainly as active hours of in-person or virtual engagement with the training – would positively correlate with increases in sustainability capacity. Finally, we investigated whether the training would be more beneficial to programs that had made relatively less tobacco control policy progress than others.

We found that the in-person, action-oriented Program Sustainability Action Planning Model and Training Curricula was effective for those in the intervention group, regardless of dosage. This suggests that no matter the intensity or frequency of engagement with the training, receiving any amount can influence a program’s capacity for sustainability. Empirical evidence has established that program sustainability can be improved through in-person, hands-on, action-oriented training and technical assistance.(1, 2, 16, 17) Research also highlights the importance of creating an action plan to move sustainability progress forward.(18) Our results further indicate the importance of action-oriented training and technical assistance.

We also found that the training was most beneficial for those state programs that had made less policy progress than others, implying that tailored training may be most appropriate for programs that may be struggling to make progress. States with relatively higher success in policy progress benefited less, as demonstrated by the declining difference in sustainability capacity between groups at higher ALA scores in our study. However, research consistently indicates that even effectively implemented interventions risk failure when funding, planning, or training ends.(6, 29–31) Given that our study included only three years of sustainability tracking, continued research is needed to determine whether policy progress truly serves as a protective factor.

Despite many years of research on other factors that relate to program sustainability,(15) many observers still equate sustainability with funding. We found that while funding had a small, statistically significant effect in our model, it made virtually no difference for the average program in our study. This is not to say that programs do not need funding to survive and sustain themselves, only that other factors may be equally or more important than the level of funding a program receives. For example, the Program Sustainability Framework highlights seven other components important to building capacity for sustainability. Studies have shown that several of these non-funding components, including partnerships, external support, and strong organizational capacity among local health departments(32) and program adaptation, environmental support, and organizational capacity among state-level chronic disease programs,(33) were more important to maintaining program sustainability.

Many also perceive that staff turnover is a major threat to program sustainability. In a scoping review by Pascoe et al.(34) assessing the effects of workforce turnover on program sustainability, 29 of 30 articles reported that workforce turnover potentially threatened components of program sustainability, including through loss of organizational knowledge, lack of evidence-based program fidelity, and financial stress. In addition, according to the Public Health Workforce Interests and Needs Survey Report (2022),(35) adequate staff capacity is fundamental to providing sustained services in every community. We proxied staff turnover with program manager tenure and found that it had no effect on a program’s sustainability capacity. Again, this is not to claim that high levels of staff turnover do not affect sustainability; rather, we believe there is a need to further study the relationship between staff turnover and program sustainability.

Limitations

Our study has a handful of limitations that deserve mention. One state program in the intervention group dropped out, leaving us with 23 rather than 24 programs. However, power analyses conducted before the study estimated the power for this sample size (at least 11 per group) at between 0.85 and 0.90. We also proxied staff turnover with program manager tenure, due to data availability issues, and these two phenomena may be less related than we assume. Future studies should focus directly on the relationship between sustainability and staff turnover to further illuminate the mechanisms at work.

Finally, while this study analyzed sustainability data over three years, we believe that, to determine the true effectiveness of the Program Sustainability Action Planning Model and Training Curricula, programs that utilize the training should track sustainability measures over a longer period.

Conclusions and next steps

Research consistently indicates that even effectively implemented interventions risk failure when funding, planning, or training ends.(6, 29–31) In fact, it is estimated that up to 40% of programs end within 2 years of losing funding.(36) Failure to sustain an implemented program negatively impacts communities through loss of trust in public health initiatives and waste of valuable resources.(37) The findings from this study have the potential to improve public health programs by introducing the Program Sustainability Action Planning Model and Training Curricula to improve sustainability over time. Program sustainability will benefit not only the state programs themselves but also the health of state populations through the continuation of evidence-based public health initiatives. This study’s findings will contribute to the field of implementation science by providing knowledge on “how” to mature and sustain activities over time, thereby achieving the full benefit of significant public health investments. Future research is needed to further validate the results of this study. First, research testing the implementation of the Program Sustainability Action Planning Model and Training Curricula within other public health and chronic disease program areas would extend the utility of our work. Second, given the growth in online training driven by the COVID-19 pandemic and the need for social distancing, testing the intervention in an online format could allow more programs to access the training and more stakeholders to participate in the planning, especially in more rural states.

Implications for public health practice

  • Implementation of newly funded programs does not guarantee long-term sustainment; programs and evaluators should focus more comprehensively on the factors that influence sustainability.

  • Targeting tailored trainings at relatively lower-performing programs may conserve resources for programs, evaluators, and implementation scientists.

  • Training curricula and materials provided to broader public health audiences are associated with increased sustainability of evidence-based practices.

Contributions to the literature

  1. This study describes the development of the first evidence-based sustainability training, the Program Sustainability Action Planning Model and Training Curricula, designed to increase the sustainability capacity of tobacco control programs.

  2. Knowing "how" to plan for sustainability and build programmatic capacity for it has become increasingly important over the past five years, as funders increasingly expect or require sustainability plans, yet few resources exist to guide this work.

  3. This study highlights future steps in sustainability training research, including the need to test other implementation formats (in person vs. online) and to assess utility among public health and chronic disease prevention programs outside of tobacco control.

Acknowledgement

We would like to acknowledge Linda Dix for her work in coordinating the implementation of training in each intervention state. We would also like to acknowledge the rest of the PACT study team, including the advisory panel members and Rachel Hackett, for work related to completing PACT study activities.

Funding

The present study is funded by the National Cancer Institute of the National Institutes of Health (award number R01CA203844), the Centers for Disease Control and Prevention (award number U48DP006395), and the Foundation for Barnes-Jewish Hospital. The findings and conclusions in this article are those of the authors and do not necessarily represent the official positions of the National Institutes of Health or the Centers for Disease Control and Prevention.

Footnotes

Ethics approval and consent to participate

Ethical approval for this study was provided by the Washington University in St Louis Institutional Review Board (IRB #201801196). All participants were emailed a copy of the consent form prior to interviews, and verbal consent was obtained. This trial is registered at ClinicalTrials.gov (NCT03598114).

Consent for publication

Consent was obtained from participants to use data collected as part of publications. This is included in our Washington University in St Louis Institutional Review Board (IRB#201801196) approved consent form.

Competing interests: The authors declare that they have no competing interests.

Contributor Information

Sarah Moreland-Russell, Washington University in St Louis.

Todd Combs, Washington University in St Louis.

Jessica Gannon, Washington University in St Louis.

Eliot Jost, Washington University in St Louis.

Louise Farah Saliba, Washington University in St Louis.

Kimberly Prewitt, Washington University in St Louis.

Douglas Luke, Washington University in St Louis.

Ross C Brownson, Washington University in St Louis.

Availability of data and materials

The datasets generated and/or analyzed during the current study are not publicly available due to privacy protections but are available from the corresponding author on reasonable request.

References

  • 1. Johnson K, Collins D, Wandersman A. Sustaining innovations in community prevention systems: a data-informed sustainability strategy. J Community Psychol. 2013;41(3).
  • 2. Calhoun A, Mainor A, Moreland-Russell S, Maier RC, Brossart L, Luke DA. Using the program sustainability assessment tool to assess and plan for sustainability. Prev Chronic Dis. 2014;11.
  • 3. Schell SF, Luke DA, Schooley MW, Elliott MB, Herbers SH, Mueller NB, et al. Public health program capacity for sustainability: a new framework. Implementation Science. 2013;8(1).
  • 4. Scheirer MA. Linking sustainability research to intervention types. Am J Public Health. 2013;103(4).
  • 5. Bodkin A, Hakimi S. Sustainable by design: a systematic review of factors for health promotion program sustainability. BMC Public Health. 2020 Jun 19;20(1).
  • 6. Wiltsey Stirman S, Kimberly J, Cook N, Calloway A, Castro F, Charns M. The sustainability of new programs and innovations: a review of the empirical literature and recommendations for future research. Implementation Science. 2012;7.
  • 7. Gruen RL, Elliott JH, Nolan ML, Lawton PD, Parkhill A, McLaren CJ, et al. Sustainability science: an integrated approach for health-programme planning. The Lancet. 2008;372(9649).
  • 8. Scheirer MA. Is sustainability possible? A review and commentary on empirical studies of program sustainability. American Journal of Evaluation. 2005;26(3).
  • 9. Hanson HM, Salmoni AW, Volpe R. Defining program sustainability: differing views of stakeholders. Canadian Journal of Public Health. 2009;100(4).
  • 10. Johnson K, Hays C, Center H, Daley C. Building capacity and sustainable prevention innovations: a sustainability planning model. Eval Program Plann. 2004;27(2).
  • 11. Chambers D, Glasgow RE, Stange KC. The dynamic sustainability framework: addressing the paradox of sustainment amid ongoing change. Implement Sci. 2013;8(1):117.
  • 12. May CR, Mair F, Finch T, MacFarlane A, Dowrick C, Treweek S, et al. Development of a theory of implementation and integration: Normalization Process Theory. Implement Sci. 2009;4(1):1–9.
  • 13. Hooker L, Small R, Humphreys C, Hegarty K, Taft A. Applying normalization process theory to understand implementation of a family violence screening and care model in maternal and child health nursing practice: a mixed method process evaluation of a randomised controlled trial. Implement Sci. 2015;10(1):1–13.
  • 14. Schell S, Luke D, Schooley M, Elliott M, Mueller N, Bunger AC. Public health program capacity for sustainability: a new framework. Implement Sci. 2013;8(15).
  • 15. Walugembe DR, Sibbald S, Le Ber MJ, Kothari A. Sustainability of public health interventions: where are the gaps? Health Research Policy and Systems. 2019;17.
  • 16. Feinberg ME, Ridenour TA, Greenberg MT. The longitudinal effect of technical assistance dosage on the functioning of Communities That Care prevention boards in Pennsylvania. Journal of Primary Prevention. 2008;29(2).
  • 17. Perkins DF, Feinberg ME, Greenberg MT, Johnson LE, Chilenski SM, Mincemoyer CC, et al. Team factors that predict to sustainability indicators for community-based prevention teams. Eval Program Plann. 2011;34(3).
  • 18. Feinberg ME, Bontempo DE, Greenberg MT. Predictors and level of sustainability of community prevention coalitions. Am J Prev Med. 2008;34(6).
  • 19. Pizacani BA, Dent CW, Maher JE, Rohde K, Stark MJ, Biglan A, et al. Smoking patterns in Oregon youth: effects of funding and defunding of a comprehensive state tobacco control program. Journal of Adolescent Health. 2009;44(3).
  • 20. Nelson DE, Reynolds JH, Luke DA, Mueller NB, Eischen MH, Jordan J, et al. Successfully maintaining program funding during trying times: lessons from tobacco control programs in five states. Journal of Public Health Management and Practice. 2007;13(6).
  • 21. Vitale R, Blaine T, Zofkie E, Moreland-Russell S, Combs T, Brownson RC, et al. Developing an evidence-based program sustainability training curriculum: a group randomized, multi-phase approach. Implement Sci. 2018;13(1).
  • 22. Kolb DA. Experiential Learning: Experience as the Source of Learning and Development. Prentice Hall; 1984.
  • 23. Moreland-Russell S, Jost E, Gannon J. A conceptual model for building program sustainability in public health settings: learning from the implementation of the program sustainability action planning model and training curricula. Frontiers in Health Services. 2023 Mar 29;3. doi: 10.3389/frhs.2023.1026484.
  • 24. American Lung Association. State of Tobacco Control Report 2015 [Internet]. Washington, DC; 2015 [cited 2023 Mar 29]. Available from: http://www.stateoftobaccocontrol.org/
  • 25. Centers for Disease Control and Prevention. State Tobacco Activities Tracking & Evaluation (STATE) System [Internet]. 2015 [cited 2015 May 11]. Available from: http://www.cdc.gov/tobacco/data_statistics/state_data/state_system/index.htm
  • 26. Center for Public Health Systems Science. Sustainability [Internet]. [cited 2023 Mar 29]. Available from: https://cphss.wustl.edu/items/program-sustainability-assessment-tool-project/
  • 27. Christopher GC, Harris CM, Harris RT, Fleming D, Martinez ON, Mcguire CK, et al. A Funding Crisis for Public Health and Safety: Public Health Report [Internet]. 2017 [cited 2018 Feb 20]. Available from:
  • 28. American Lung Association. Smokefree Air Laws [Internet]. [cited 2018 Feb 20]. Available from: https://www.lung.org/research/sotc/state-grades/state-rankings/smokefree-air-laws
  • 29. Wright C, Catty J, Watt H, Burns T. A systematic review of home treatment services: classification and sustainability. Social Psychiatry and Psychiatric Epidemiology. 2004;39:789–96.
  • 30. Scheirer MA, Dearing JW. An agenda for research on the sustainability of public health programs. Am J Public Health. 2011;101(11).
  • 31. Mâsse LC, McKay H, Valente M, Brant R, Naylor PJ. Physical activity implementation in schools: a 4-year follow-up. Am J Prev Med. 2012;43(4).
  • 32. Tabak RG, Duggan K, Smith C, Aisaka K, Moreland-Russell S, Brownson RC. Assessing capacity for sustainability of effective programs and policies in local health departments. Journal of Public Health Management and Practice. 2016;22(2).
  • 33. Moreland-Russell S, Combs T, Polk L, Dexter S. Assessment of the sustainability capacity of a coordinated approach to chronic disease prevention. Journal of Public Health Management and Practice. 2018;24(4).
  • 34. Pascoe KM, Petrescu-Prahova M, Steinman L, Bacci J, Mahorter S, Belza B, et al. Exploring the impact of workforce turnover on the sustainability of evidence-based programs: a scoping review. Implement Res Pract. 2021 Jan;2:263348952110345.
  • 35. de Beaumont Foundation; ASTHO. Rising Stress and Burnout in Public Health [Internet]. [cited 2023 Mar 29]. Available from: https://debeaumont.org/wp-content/uploads/dlm_uploads/2022/03/Stress-and-Burnout-Brief_final.pdf
  • 36. Savaya R, Spiro S, Elran-Barak R. Sustainability of social programs: a comparative case study analysis. American Journal of Evaluation. 2008;29(4).
  • 37. Akerlund KM. Prevention program sustainability: the state's perspective. J Community Psychol. 2000;28(3):353–62.


