Author manuscript; available in PMC: 2016 May 1.
Published in final edited form as: Am J Prev Med. 2015 May;48(5):543–551. doi: 10.1016/j.amepre.2014.11.015

Understanding Mis-implementation in Public Health Practice

Ross C Brownson 1, Peg Allen 1, Rebekah R Jacob 1, Jenine K Harris 1, Kathleen Duggan 1, Pamela R Hipp 1, Paul C Erwin 1
PMCID: PMC4405656  NIHMSID: NIHMS648155  PMID: 25891053

Abstract

Introduction

A better understanding of mis-implementation in public health (ending effective programs and policies or continuing ineffective ones) may provide important information for decision makers. The purpose of this study is to describe the frequency and patterns in mis-implementation of programs in state and local health departments in the U.S.

Methods

A cross-sectional study of 944 public health practitioners was conducted. The sample included state (n=277) and local health department employees (n=398) and key partners from other agencies (n=269). Data were collected from October 2013 through June 2014 (analyzed in May through October 2014). Online survey questions focused on ending programs that should continue, continuing programs that should end, and reasons for endings.

Results

Among state health department employees, 36.5% reported that programs often or always end that should have continued, compared with 42.0% of respondents in local health departments and 38.3% of respondents working in other agencies. In contrast, 24.7% of state respondents reported programs often or always continuing when they should have ended, compared with 29.4% of local health department respondents and 25.0% of respondents working in other agencies. Certain reasons for program endings differed at the state versus local level (e.g., policy support, support from agency leadership), suggesting that actions to address mis-implementation are likely to vary.

Conclusions

The current data suggest a need to focus on mis-implementation in public health practice in order to make the best use of scarce resources.

Introduction

Mis-implementation in public health practice refers both to the de-adoption of effective programs, policies, or other interventions that should continue and to the continuation of ineffective interventions that should end. It is important to understand the mis-implementation of public health programs for several reasons. First and most importantly, public health resources are limited and decreasing in many settings.1-3 Resources are most efficiently used when effective programs are continued and ineffective programs are discontinued. Second, understanding reasons for mis-implementation can help practitioners design and implement more effective programs. For example, if having a program champion or “spark plug” is essential for continuing an effective program,4 this knowledge can help shape how a program is staffed and managed. Third, building in part on Diffusion of Innovations Theory,5 a significant gap in the field of dissemination and implementation science is guidance on how to “de-implement,” or reduce the use of, interventions that are not evidence-based, have been widely adopted prematurely, or are detrimental.6,7 Building this knowledge in public health may translate to other areas (e.g., healthcare delivery, education, social services) and provide new frameworks for action.8

Mis-implementation in public health and closely related settings involves two seemingly opposite scenarios. In some cases, programs are ended that are effective and should be continued. An example is the VERB campaign, a national program to promote physical activity using a social marketing approach.9 The VERB campaign delivered positive messages to youth aged 10-13 years via multiple channels (e.g., mass media, school and community promotions, the Internet).10 VERB positively influenced children’s physical activity, and campaign effects persisted into their adolescent years. Despite these benefits, national-scale efforts to continue VERB have not been successful, and sustained community commitment to VERB-like efforts is challenging.11,12 In other cases, programs are continued that are not effective and should be ended. A prominent example is the Drug Abuse Resistance Education (D.A.R.E.) program, one of the most widely used school-based drug use prevention programs in the U.S., disseminated to over half of U.S. school districts.13-15 Although not typically carried out by public health departments, D.A.R.E. addresses a core public health topic (substance abuse) and uses public resources. Systematic reviews of D.A.R.E. program evaluations have shown the program is ineffective in preventing substance use behavior.16,17 Although these examples may appear opposite, they have the same consequence: inefficient use of scarce resources.

To date, much of our knowledge related to mis-implementation comes from public policy and medicine. In policy settings, mis-implementation is commonly framed as policy “termination,” an important part of the policy process. Policy termination has been described for several decades; this literature suggests that a specific policy should be regularly evaluated and, in some cases, ended if it is redundant or outmoded.18-21 Most of the literature on policy termination consists of case studies,22 with sparse quantitative research. In medicine and health services research, attention has focused on underuse of beneficial medical care (e.g., not prescribing aspirin after myocardial infarction), misuse of care (e.g., prescribing a drug to which the patient is allergic), and overuse of medical services that lack benefit or cause harm (e.g., treating a simple viral infection with antibiotics).23-25 It is estimated that overuse may account for up to 30% of U.S. healthcare spending.23

Given its importance and the sparse empirical literature, this article reports on the frequency and reasons for mis-implementation among state and local public health departments.

Methods

The reported data were derived from two cross-sectional surveys of the state and local public health workforce as part of ongoing research projects (the State Survey and Local Survey described below).26-28 Participants were state and local public health department employees and key partners from other agencies who were identified by leaders in the public health agencies (e.g., coalitions, voluntary health organizations, advocacy organizations, healthcare organizations, universities). Human subjects approval was obtained from Washington University in St. Louis.

State Survey

A sample of 596 public health practitioners was drawn from six randomly selected states (Arizona, Delaware, Minnesota, South Dakota, Washington, and Wisconsin) that are part of a larger ongoing cluster randomized trial examining dissemination and implementation of evidence-based practices in state health departments.26 Potential respondents were selected by state health department chronic disease directors and their leadership teams of program managers. Represented program areas were cancer prevention and screening, obesity prevention, physical activity, healthy eating, tobacco control, heart health, diabetes, school health, and related areas. The State Survey contained 63 items and asked about evidence-based public health skills and resources. The 596 State Survey participants included 277 state health department employees, 50 from local health departments, and 269 from other partnering health organizations or agencies. Most participants worked in chronic disease prevention and health promotion.

Local Survey

The local data came from a follow-up 66-item survey on evidence-based public health among local health departments in selected states. The sampling frame was a merged database from two national surveys previously conducted by the research team.27,28 In the original data collection, a random sample of 1,067 U.S. local health departments was drawn from the database of 2,565 local health departments maintained by the National Association of County and City Health Officials. The baseline sample consisted of 849 local health department directors (or their designees) and program managers. The follow-up survey was part of evidence-based public health capacity-building activities in four intervention states (Michigan, North Carolina, Ohio, and Washington), along with a set of controls from the original two national samples.29 In this follow-up survey, a subsample of the local health department directors and program managers was invited to participate. For this study, post-intervention data were used from 348 local public health practitioners drawn from 34 states, including the four intervention states.

Measures

In addition to the core questions in each survey, a set of new questions on program mis-implementation was developed for both the State and Local Surveys. First, the literature was reviewed to identify reasons for potential mis-implementation from medicine and policy. An initial set of items was reviewed by the research team, revised, and then reviewed with a larger team of faculty and practice-oriented partners. The final instrument included three mis-implementation items. A definition of a program was provided that included any type of organized public health action: direct service interventions, community mobilization efforts, policy implementation, environmental changes, outbreak investigations, health communication campaigns, or health promotion programs. Two Likert-scale items ascertained mis-implementation, covering de-adoption of effective programs (In your opinion, how often do programs end that should not have ended?) and continuation of ineffective programs (In your opinion, how often do programs continue that should have ended?). A third item focused on common reasons for mis-implementation (When you think about public health programs that have ended, what are the most common reasons for programs ending? [Respondents were asked to rank the top three from a list.]). The reported reasons were categorized relative to three domains proposed by Bauer and Knill30: (1) external factors (e.g., state-level policy support changed); (2) institutional constraints and opportunities (e.g., support from leaders in agency changed); and (3) situational factors (e.g., funding diverted to a higher-priority program).
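
To make the categorization concrete, the sketch below shows how the ten reason-for-ending response options (worded as in Table 3) map onto the three Bauer and Knill domains reported in this study. This is an illustrative reconstruction, not the authors' instrument or analysis code; the function name and data structure are our own.

```python
# Illustrative sketch (not the authors' code): mapping each reason-for-ending
# option to the three domains proposed by Bauer and Knill, per Table 3.
REASON_DOMAINS = {
    "Grant funding ended": "situational",
    "Funding was diverted to a higher-priority program": "situational",
    "Insurance funding/coverage ended": "situational",
    "Support from policymakers changed": "external",
    "Program was adopted or continued by other organizations": "external",
    "Support from the general public changed": "external",
    "Support from leaders in your agency changed": "institutional",
    "Program champion left the agency": "institutional",
    "Program was never evaluated": "institutional",
    "Program was evaluated but did not demonstrate impact": "institutional",
}

def categorize(top_three_reasons):
    """Return the domain for each of a respondent's top-three ranked reasons."""
    return [REASON_DOMAINS[reason] for reason in top_three_reasons]

# Example: a respondent ranking funding, a champion departure, and policymaker
# support would contribute to the situational, institutional, and external domains.
print(categorize(["Grant funding ended",
                  "Program champion left the agency",
                  "Support from policymakers changed"]))
```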

Data Collection

All data were self-reported and collected online using Qualtrics survey software (Provo, UT). In both the State and Local Surveys, a unique link was e-mailed to each participant, and non-respondents received e-mail and phone reminders to bolster response rates. No financial incentive was provided in the State Survey; in the Local Survey, a $10 Amazon gift card was provided upon completion. The State Survey data were collected from January through June 2014 and the Local Survey data were collected from October through December 2013. The median completion time was 16 minutes for the State Survey and 20 minutes for the Local Survey. Response rates were 75.1% for the State Survey (596/794) and 75.7% for the Local Survey (348/460).

Data Analysis

Descriptive statistics were calculated for each of the three core questions, reported as percentages with 95% CIs. The sample characteristics were derived from the survey data and archival data for each health department using population size of jurisdiction and local health department governance structure.31 Statistical testing for differences in proportions across the surveys or for the full combined sample was not conducted owing to the lack of independence of observations (e.g., nesting of individuals in a small number of state and local health departments and other organizations).
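
The paper does not state which interval method was used for the 95% CIs; a simple normal-approximation (Wald) interval reproduces the main Table 2 estimates (e.g., the overall state value of 36.5%, 95% CI=30.8, 42.3, with n=277), with truncation at 0% and 100% consistent with the bounds shown for the small governance subgroups. A minimal sketch under those assumptions:

```python
import math

def percent_with_wald_ci(successes: int, n: int, z: float = 1.96):
    """Proportion and normal-approximation (Wald) 95% CI, as percentages."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    lower = max(0.0, p - half_width)  # truncate at 0%, as in Table 2 footnote d
    upper = min(1.0, p + half_width)  # truncate at 100%
    return 100 * p, 100 * lower, 100 * upper

# Example: roughly 101 of 277 state respondents answering "often" or "always"
# yields about 36.5 (30.8-42.1), close to the overall state estimate in Table 2.
print(percent_with_wald_ci(101, 277))
```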

Results

Of the 944 respondents, 29.3% (n=277) worked in a state health department, 42.2% (n=398) worked in a local health department, and 28.5% (n=269) worked in another agency (e.g., a voluntary health organization, university extension) (Table 1). The largest proportion of respondents held a master’s degree (43.4% total, including 16.4% with an MPH) as their highest degree earned. Bachelor’s degrees were the highest earned by another 27.6% and 15.6% held a doctorate (e.g., PhD, MD). Respondents had worked in public health for an average of 15 years (SD=9.7).

Table 1.

Characteristics of the sample, program mis-implementation, U.S. 2013-2014

Characteristic | State health department (n=277), n (%)a | Local health department (n=398), n (%)a | Other agencies combined (n=269), n (%)a
Age (yrs)
 20-39 | 74 (27.0) | 100 (26.2) | 90 (33.6)
 40-49 | 61 (22.3) | 101 (26.4) | 62 (23.1)
 50-59 | 88 (31.8) | 135 (35.3) | 70 (26.1)
 60 and older | 51 (18.6) | 46 (12.0) | 46 (17.2)
Gender
 Female | 224 (81.2) | 289 (75.7) | 206 (76.9)
 Male | 52 (18.8) | 93 (24.3) | 62 (23.1)
Years worked in public health, mean (SD) | 15.5 (9.9) | 15.5 (9.3) | 13.9 (9.9)
Highest degree
 Doctoral | 59 (21.6) | 21 (5.5) | 64 (23.8)
 Master of Public Health | 65 (23.8) | 71 (18.7) | 15 (5.6)
 Other master’s degree | 65 (23.8) | 111 (29.3) | 72 (26.8)
 Nursing | 17 (6.2) | 56 (14.8) | 17 (6.3)
 Bachelor’s degree or less | 67 (24.5) | 120 (31.7) | 101 (37.6)
Program area
 Obesity, physical activity, nutrition | 36 (13.0) | N/A | 41 (15.2)
 Tobacco | 26 (9.4) | N/A | 34 (12.6)
 Cancer | 32 (11.6) | N/A | 30 (11.2)
 Diabetes/cardiovascular disease | 19 (6.9) | N/A | 15 (5.6)
 Other single primary program areab | 82 (29.6) | N/A | 68 (25.3)
 Multiple program areasc | 82 (29.6) | N/A | 81 (30.1)
Population of jurisdictiond
 <25,000 | N/A | 34 (11.1) | N/A
 25,000 to 49,999 | N/A | 72 (23.5) | N/A
 50,000 to 99,999 | N/A | 62 (20.2) | N/A
 100,000 to 499,999 | N/A | 111 (36.2) | N/A
 500,000 or larger | N/A | 28 (9.1) | N/A
Governance structure
 State governed | N/A | 7 (1.8) | N/A
 Locally governed | N/A | 385 (96.7) | N/A
 Shared governance | N/A | 6 (1.5) | N/A

a Percentages reported for valid, non-missing cases.
b Examples of other single primary program areas are maternal and child health, communicable diseases, and injury and violence prevention.
c Multiple program areas represents respondents who selected that they primarily worked across several program areas.
d Population size of health department jurisdiction was assessed only among the local health department participants in the Local Survey.

Among state health department employees, 36.5% reported that programs often or always end that should have continued, compared with 42.0% of respondents in local health departments and 38.3% of respondents working in other agencies (Table 2). Among state respondents, those working in cancer programs reported less frequent inappropriate program endings than those in other program areas, whereas those working in diabetes and cardiovascular disease programs were the most likely to report programs being discontinued when they should have been continued. Local respondents from jurisdictions with populations between 25,000 and 99,999 reported more programs inappropriately ending. Local agencies with a state governance structure (i.e., local health departments operating within a centralized state administrative unit) may be less likely to end programs inappropriately, although the denominators were small for the state-governed (n=7) and shared governance (n=6) categories. Among those from the local health department survey, managers and other employees were more likely than administrators to perceive programs ending that should have continued (46.8% vs 30.5%, p=0.003) (data not shown).
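
The manager-versus-administrator contrast (46.8% vs 30.5%, p=0.003) is a comparison of two proportions; the paper reports neither the test used nor the subgroup denominators. A minimal sketch of a pooled two-proportion z-test is shown below, with hypothetical group sizes chosen only to match the reported proportions, so the resulting p-value is illustrative rather than a reproduction of the published figure.

```python
from math import erf, sqrt

def two_proportion_z_test(x1: int, n1: int, x2: int, n2: int):
    """Two-sided pooled two-proportion z-test; returns (z, p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 1 - erf(abs(z) / sqrt(2))  # two-sided normal tail area
    return z, p_value

# Hypothetical counts (110/235 = 46.8%, 47/154 = 30.5%); the actual
# manager/administrator denominators are not given in the text.
print(two_proportion_z_test(x1=110, n1=235, x2=47, n2=154))
```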

Table 2.

Perceived frequencya of program mis-implementation, U.S. 2013-2014

Programs ending that should have continued, % (95% CI)

Variable | State health department (n=277) | Local health department (n=398) | Other agencies combined (n=269)
Overall | 36.5 (30.8-42.3) | 42.0 (37.1-46.8) | 38.3 (32.3-44.3)
Program area
 Obesity, physical activity, nutrition | 31.4 (15.3-47.6) | N/A | 41.0 (24.9-57.2)
 Tobacco | 36.0 (15.8-56.2) | N/A | 50.0 (32.3-67.7)
 Cancer | 18.8 (4.5-33.1) | N/A | 37.9 (19.2-56.7)
 Diabetes/cardiovascular disease | 47.4 (22.6-72.1) | N/A | 35.7 (7.0-64.4)
 Other single primary program areab | 33.8 (23.2-44.3) | N/A | 28.6 (17.1-40.0)
 Multiple program areasc | 46.2 (35.1-57.4) | N/A | 40.3 (29.1-51.5)
Population of jurisdiction of local health departments
 <25,000 (n=34) | N/A | 32.4 (15.8-48.9) | N/A
 25,000 to 49,999 (n=72) | N/A | 49.3 (37.4-61.2) | N/A
 50,000 to 99,999 (n=62) | N/A | 50.8 (38.1-63.5) | N/A
 100,000 to 499,999 (n=111) | N/A | 34.2 (25.5-42.9) | N/A
 500,000 or larger (n=28) | N/A | 24.2 (8.8-39.7) | N/A
Governance structure of local health departments
 State governed (n=7) | N/A | 14.3 (0.0-49.2)d | N/A
 Locally governed (n=385) | N/A | 42.3 (37.4-47.3) | N/A
 Shared governance (n=6) | N/A | 50.0 (0.0-100.0)d | N/A

Programs continuing that should have ended, % (95% CI)

Variable | State health department (n=277) | Local health department (n=398) | Other agencies combined (n=269)
Overall | 24.7 (19.6-29.9) | 29.4 (24.9-33.9) | 25.0 (19.7-30.3)
Program area
 Obesity, physical activity, nutrition | 28.6 (12.8-44.3) | N/A | 23.1 (9.2-36.9)
 Tobacco | 16.0 (0.6-31.4) | N/A | 32.4 (15.8-48.9)
 Cancer | 18.8 (4.5-33.1) | N/A | 17.2 (2.6-31.9)
 Diabetes/cardiovascular disease | 26.3 (4.5-48.1) | N/A | 50.0 (20.0-80.0)
 Other single primary program areab | 22.5 (13.2-31.9) | N/A | 15.9 (6.6-25.2)
 Multiple program areasc | 30.0 (19.7-40.3) | N/A | 28.6 (18.3-38.9)
Population of jurisdiction of local health departments
 <25,000 (n=34) | N/A | 17.7 (4.2-31.2) | N/A
 25,000 to 49,999 (n=72) | N/A | 29.6 (18.7-40.5) | N/A
 50,000 to 99,999 (n=62) | N/A | 33.3 (21.4-45.3) | N/A
 100,000 to 499,999 (n=111) | N/A | 33.3 (24.7-42.0) | N/A
 500,000 or larger (n=28) | N/A | 36.4 (19.0-53.7) | N/A
Governance structure of local health departments
 State governed (n=7) | N/A | 28.6 (0.0-73.7)d | N/A
 Locally governed (n=385) | N/A | 29.4 (24.8-33.9) | N/A
 Shared governance (n=6) | N/A | 33.3 (0.0-87.5)d | N/A

a Percentage reporting “often” or “always” for valid non-missing cases.
b Examples of other single primary program areas are maternal and child health, communicable diseases, and injury and violence prevention.
c Multiple program areas represents respondents who selected that they primarily worked across several program areas.
d Very few local health department respondents represented states where local public health governance is shared or state governed, which resulted in wide confidence intervals.

In contrast to ending programs that should have continued, 24.7% of state respondents reported programs often or always continuing when they should have ended, compared with 29.4% of local health department respondents and 25.0% of respondents working in other agencies (Table 2). Among state health department respondents, staff in tobacco and cancer programs may be less likely to report inappropriate continuation of programs than staff in other program areas. Among local public health practitioners, there were no clear patterns of continuing programs when they should have ended based on population of jurisdiction or governance structure. Respondents from other agencies working in diabetes/cardiovascular disease prevention and management reported a higher rate of continuing programs when they should have ended than other-agency respondents conducting other types of programs.

The ranking of the reasons for ending programs showed a wide range in frequencies (Table 3). The ending of grant funding was the top reason for all three respondent groups, with nearly equal proportions among state and local practitioners (87.4% and 88.0%) and a lower frequency among respondents from other agencies (79.2%). Two reasons (support from policymakers changed, support from leaders in your agency changed) were more common among state respondents than among local respondents and respondents from other agencies. One reason (program was adopted or continued by other organizations) was more commonly reported by local respondents (30.3%) than by state respondents (16.7%) or respondents from other agencies (15.6%). Two reasons (program champion left the agency, insurance funding/coverage ended) were more frequently reported among respondents from other agencies than among those in state or local public health agencies. Among the three broad (and overlapping) categories of reasons for ending programs (i.e., external, institutional, and situational), the situational factors ranked most highly, although all three domains appeared to be important. Among those from the local health department survey, managers and other employees were more likely than administrators to rank a change in support from agency leaders among their top three reasons for why programs end (27.9% vs 18.3%, p=0.045) (data not shown).

Table 3.

The most common reasons for programs ending, U.S., 2013-2014

Ranked as 1st, 2nd, or 3rd most common reason, % (95% CI)

Reason | Categorya | State health department (n=277) | Local health department (n=398) | Other agency (n=269)
Grant funding ended | S | 87.4 (83.4-91.4) | 88.0 (84.8-91.3) | 79.2 (74.1-84.3)
Funding was diverted to a higher-priority program | S | 60.0 (54.1-65.9) | 61.1 (56.2-65.9) | 62.8 (56.8-68.8)
Support from policymakers changed | E | 45.9 (39.9-51.9) | 33.8 (29.1-38.5) | 38.4 (32.3-44.5)
Support from leaders in your agency changed | I | 35.6 (29.8-41.3) | 23.7 (19.4-27.9) | 20.0 (15.0-25.0)
Program was adopted or continued by other organizations | E | 16.7 (12.2-21.1) | 30.3 (25.7-34.8) | 15.6 (11.1-20.1)
Program champion left the agency | I | 16.3 (11.9-20.7) | 12.5 (9.2-15.8) | 26.8 (21.3-32.3)
Program was never evaluated | I | 12.2 (8.3-16.2) | 14.8 (11.2-18.3) | 12.0 (7.9-16.1)
Program was evaluated but did not demonstrate impact | I | 11.1 (7.3-14.9) | 14.2 (10.8-17.7) | 17.2 (12.5-21.9)
Support from the general public changed | E | 3.3 (1.2-5.5) | 6.9 (4.4-9.4) | 6.8 (3.7-9.9)
Insurance funding/coverage ended | S | 3.0 (0.9-5.0) | 4.3 (2.3-6.4) | 8.0 (4.6-11.4)

a Categories: E = external factors; I = institutional constraints and opportunities; S = situational factors.30
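
The percentages in Table 3 tally how many respondents placed a given reason anywhere in their top three. A minimal sketch of that tally, using invented toy responses rather than the study data:

```python
from collections import Counter

def top_three_percentages(rankings, n_respondents):
    """Percent of respondents ranking each reason 1st, 2nd, or 3rd.

    `rankings` holds one list of up to three distinct reasons per respondent.
    """
    counts = Counter(reason for top3 in rankings for reason in top3)
    return {reason: 100 * count / n_respondents for reason, count in counts.items()}

# Toy example with three hypothetical respondents (for illustration only):
sample = [
    ["Grant funding ended", "Support from policymakers changed",
     "Program champion left the agency"],
    ["Grant funding ended", "Funding was diverted to a higher-priority program",
     "Support from leaders in your agency changed"],
    ["Grant funding ended", "Program was never evaluated",
     "Support from policymakers changed"],
]
print(top_three_percentages(sample, n_respondents=3))
# "Grant funding ended" appears for all three respondents -> 100.0
```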

Discussion

To the authors’ knowledge, this exploratory study is the first to report data on program mis-implementation among state and local public health departments in the U.S. It begins to fill what appears to be a significant void in the literature—addressing this gap has the potential to make more efficient use of scarce public health resources. There are several key findings from the current study of perceived mis-implementation:

  • In both state and local health departments, a substantial proportion of staff report that programs are either ending when they should continue or are being continued when they should be terminated.

  • For both state and local health departments, there are higher percentages of programs ending that should be continued than of those continuing when they should be ended.

  • The problem of mis-implementation in public health may be slightly larger at the local level than at the state level.

  • Many of the reasons for mis-implementation relate to funding (e.g., grant funding ended, funding was diverted to a higher-priority program).

  • Certain reasons for ending programs differ at the state versus local level, suggesting that actions to address mis-implementation are likely to vary accordingly.

  • Although sample sizes for subgroup analyses were small, there may be important variations in mis-implementation according to program area, local population jurisdiction size, and local governance structure.

In part, the reasons for public health mis-implementation resemble those reported in the literature on mental health interventions. Massatti and colleagues32 studied the de-adoption of innovative mental health practices among 12 mental health providers in Ohio. They found that the top reasons for de-adoption included lack of financial resources, lack of community support, difficulty in attracting and retaining staff, and lack of tangible outcomes or benefits. The current findings are consistent with these, and also emphasize the importance of support from policymakers and agency leaders (especially among state public health practitioners). Partnerships are critical to success in public health,33-36 which is underscored by the current results showing that local practitioners are much more likely to report that programs were adopted or continued by other organizations. Study findings also highlight the importance of evaluation within public health37-39: programs end either because they were never evaluated or because evaluation showed a lack of impact, although both of these reasons were reported less frequently than most others.

Some of the current findings and future work can benefit from related research in policy and medical settings. Literature on policy termination highlights potential areas of importance for public health practice. For example, deLeon18 suggests a hierarchy of termination, in which functions are the most difficult to terminate, organizations are the next most difficult, policies are intermediate, and programs are the easiest. The current study did not attempt to differentiate these macro- to micro-level distinctions. Policy termination may garner the greatest attention during times of budget austerity,40 which, given the funding challenges in public health practice, may help explain the current findings on reasons for program endings. In addition, policy termination can be induced, often via media coverage.21 Similarly, in medical practice, mis-implementation can sometimes occur rapidly because of highly publicized negative trial data.41 The impact of the media on mis-implementation in public health is worthy of future inquiry.

As the inverse to inappropriate program endings, there are also likely to be lessons from the literature on sustainability and scale-up of effective programs in public health. For example, maintenance of a public health program is likely to relate closely to having one or more of the following: a sufficient budget, a favorable political climate, sufficient organizational capacity, careful attention to context, and ongoing evaluation allowing for mid-course corrections.42-48 There now is a small set of measurement tools for sustainability,45,49-51 which needs to be adapted and developed to address the various mis-implementation scenarios (e.g., continuing programs that should be ended, ending programs that should be continued).

There are several limitations to the current study. The study relies on a limited set of self-reported questions that lacked extensive psychometric testing and on individual perceptions of program effectiveness. Study estimates are based on the concept of programs, a term that may be interpreted broadly or narrowly. In future work, it will be important to delineate what may be subtle differences among a program, an intervention, and a policy. The sample includes only selected states participating in or serving as controls in other studies and is not a representative, national sample of state or local health departments. Study data are cross-sectional, limited to certain program areas, and do not allow for assessing predictors of mis-implementation or how findings might be affected by macro-trends such as reductions in funding to public health. Finally, to fully understand the issue of mis-implementation, a mixed-methods approach (quantitative and qualitative)52 is likely to be the most useful.

It is likely that research on public health mis-implementation is in its first generation. This suggests a wide range of future research needs: developing reliable and valid measures of mis-implementation; understanding variables that predict mis-implementation; describing mediators and moderators (e.g., state versus local differences, how healthcare reform efforts may affect mis-implementation of clinical services, variations by program area, the role of media attention); and developing qualitative case studies of programs that were appropriately or inappropriately ended or continued.

Developing the evidence base for mis-implementation in public health practice will allow practitioners and policymakers to bolster efforts to continue effective programs and target ineffective programs for discontinuation, diverting these resources to promising programs that are not being fully implemented, evaluated, or scaled up. Such actions are likely to make the most effective use of resources and improve the health of the public.

Acknowledgments

This study was supported in part by Robert Wood Johnson Foundation (grant No. 69964, Public Health Services and Systems Research), National Cancer Institute (grant No. R01CA160327), and National Institute of Diabetes and Digestive and Kidney Diseases (grant No. 1P30DK092950). This article is a product of a Prevention Research Center and was also supported by Cooperative Agreement No. U48/DP001903 from CDC. The findings and conclusions in this article are those of the authors and do not necessarily represent the official position of CDC. Human subjects approval was obtained from the Washington University IRB.

Footnotes


No financial disclosures were reported by the authors of this paper.
