INTRODUCTION
Scholars have emphasized the importance of using evidence-based programs to promote health and prevent disease (Brownson, Fielding, & Maylahn, 2009; Fielding & Briss, 2006). While definitions of evidence-based vary, researchers suggest that a program must have been tested in at least one experimental trial and found to have the predicted effect on the outcomes for which it was designed (Gottfredson et al., 2015; Puddy & Wilkins, 2011). Although theoretically and empirically based programs may be effective in carefully controlled conditions, many fail to achieve desired outcomes when implemented in real-world settings (Kilbourne, Neumann, Pincus, Bauer, & Stall, 2007). Variations in delivery related to program design, the community or organizational context, and implementation processes may influence the extent to which a program is implemented as intended (Damschroder et al., 2009; Proctor et al., 2011). Aspects of delivery such as fidelity, dose delivered, dose received, and program quality have profound effects on program outcomes (Baranowski & Stables, 2000; Durlak, 1998; Durlak & DuPre, 2008), but detailed information about delivery is often lacking. The purpose of this paper is to present methods we used to document and assess implementation of Youth Empowerment Solutions (YES) to unpack the black box of program delivery and improve processes for dissemination.
Ensuring high-quality implementation of health promotion programs is critically important, as variation in implementation is closely associated with program effectiveness (Durlak & DuPre, 2008; Wilson, Lipsey, & Derzon, 2003). In five meta-analyses, analyzing over 500 prevention and health promotion programs, Durlak and DuPre (2008) found that mean effect sizes for programs delivered with high fidelity and intensity were two to three times greater than effect sizes for programs that faced significant implementation challenges. The strong association between quality of delivery and effectiveness underscores the need to 1) monitor and understand variations in aspects of implementation, and 2) use information gained to guide program improvements. Proctor et al. (2011) suggest that measuring implementation outcomes is vital to understanding what makes a program successful or unsuccessful. Yet measurements of implementation vary greatly across studies, revealing a need to document program delivery more systematically.
Assessing implementation outcomes is especially challenging as health promotion programs become more complex, multi-component, and transportable (Dusenbury, Brannigan, Hansen, Walsh, & Falco, 2005; Steckler & Linnan, 2002). Implementation becomes particularly salient when a program moves from clinical efficacy trials to community effectiveness trials, where many contextual factors can influence fidelity (Bauer, Damschroder, Hagedorn, Smith, & Kilbourne, 2015; Kilbourne et al., 2007). Given the close association between implementation quality and effectiveness, Curran et al. (2012) recommend combining outcome and process measures in blended implementation-effectiveness designs. These authors note that a hybrid approach allows investigators to examine potential problems associated with delivery, identify needed modifications, and tailor implementation strategies to specific contexts.
Researchers have developed comprehensive frameworks to promote quality implementation across the lifecycle of a program (Domitrovich et al., 2008; Kilbourne et al., 2007). Meyers et al. (2012) suggest that strategies to promote implementation can be grouped into four temporal phases: Phase 1 strategies seek to establish pre-conditions for quality implementation in the host organization and setting; Phase 2 strategies create structures to promote quality implementation prior to delivery; Phase 3 strategies facilitate implementation during delivery; and Phase 4 strategies support continuous learning to improve implementation during future iterations. It is crucial to conduct a process evaluation during Phase 3, the delivery phase, to document whether a program has been implemented as intended and to assess its quality (Kilbourne et al., 2007; Meyers, Durlak, & Wandersman, 2012; Moore et al., 2015).
While scholars identify as many as 11 aspects of implementation during the delivery stage (Dane & Schneider, 1998; Dusenbury et al., 2005; Linnan & Steckler, 2002), many process evaluations only monitor fidelity and dose delivered (Durlak & DuPre, 2008). Other aspects thought to be influential, including participant engagement, are understudied (Durlak & DuPre, 2008). We present a model for gathering and assessing information using multiple measures of four aspects of program delivery: 1) fidelity, the extent to which the program was delivered as intended by the developers (Durlak & DuPre, 2008); 2) dose delivered, the quantity of the program units, hours, or sessions delivered (Baranowski & Stables, 2000; Durlak & DuPre, 2008); 3) dose received, the degree to which participants were receptive to, engaged with, or utilized program offerings (Baranowski & Stables, 2000); and 4) program quality, how well the various components of the program were carried out (Durlak & DuPre, 2008). Measuring these delivery aspects provides a foundation for future research to examine the relationships between delivery and effectiveness and to identify areas for improvement.
Youth Empowerment Solutions
Youth Empowerment Solutions (YES) is a community-level violence prevention and positive youth development program that engages adolescents in carrying out community change projects of their own design, assisted by supportive adults (Zimmerman, Stewart, Morrel-Samuels, Franzen, & Reischl, 2011). YES is based on psychological empowerment theory, which includes both empowering processes and empowered outcomes (Zimmerman, 2000). The YES program focuses on participatory, youth-driven approaches to build skills, develop intergenerational partnerships and provide opportunities for participants to use their skills to effect community change (Zimmerman et al., 2011).
The initial concept for the YES program emerged from discussions with a steering committee of community representatives that advised Flint’s Youth Violence Prevention Center (Griffith et al., 2008). The Centers for Disease Control and Prevention provided funding for the development and implementation of YES (Cooperative Agreement U49/CE000348) over four years (2004–2008). The YES curriculum was created through an iterative process that involved collaboration among program staff, research team members, and youth themselves (Franzen, Morrel-Samuels, Reischl, & Zimmerman, 2009; Zimmerman et al., 2011). The resulting curriculum included six themed units: 1) Youth as Leaders; 2) Learning about Our Community; 3) Improving Our Community; 4) Building Intergenerational Partnerships; 5) Planning for Change; and 6) Action and Reflection (Zimmerman et al., 2011). We identified core content components associated with each of the YES sessions based on the program’s theoretical model (Eisman et al., 2016; Freire, Perkinson, Morrel-Samuels, & Zimmerman, 2015). These core content components are: self-esteem; leadership efficacy; civic efficacy; adult mentoring relationships; adult resources; resource mobilization; leadership behavior; community engagement; and school engagement. Activities designed to help youth understand their cultural identities and appreciate those of others are built into the units. The curriculum includes a community change project designed by youth participants as a culminating activity (Zimmerman et al., 2017).
During the initial grant period, the curriculum was implemented with several groups of youth and its outcomes were tested in a quasi-experimental design. The results of the study indicated that participants in the YES program were less likely to report victimization and more confident that they could avoid or resolve conflicts non-violently than were youth in the same school system who had not participated in YES (Reischl et al., 2011).
Building on the YES pilot study results, the next phase was to test the program in a randomized trial (NIH R01HD062565). In this study, the program was delivered by regular afterschool teachers, as opposed to research staff. We provided training for teachers and their supervisors prior to implementation and research staff members were available for ongoing technical assistance. To examine implementation, we gathered extensive data, both prospectively and retrospectively, using multiple methods. The protocol included regular fidelity observations, school and teacher records, and participant surveys. Zimmerman et al. (2017) reported the results of this control group design indicating that YES was effective in empowering youth, reducing problem behaviors, and enhancing positive youth development outcomes.
Delivery Context
The YES study was conducted in 12 middle and elementary schools in Flint, Michigan and surrounding Genesee County. Flint and its suburbs have suffered economically for several decades, a result of the loss of over 70,000 manufacturing jobs. These circumstances have had especially severe effects on the school systems where the study took place. During the study period, school systems were in a state of flux, due to budgetary constraints and declining enrollment. Schools were closed or consolidated with ongoing turnover of teachers and administrators. In addition, federal grants some districts had received to support their afterschool programming ended and several schools were forced to discontinue afterschool programs altogether. These disruptions led to five schools being dropped from the study and five new schools being added.
As with many school-based programs, we experienced barriers to recruitment and delivery due to staff changes and interruptions to school schedules. Added to these challenges were the life circumstances of the participants themselves, many from families experiencing residential instability, frequent personal crises or economic hardships that prevented them from having their children attend after-school programs on a regular basis. All of these factors threatened the ability to deliver the YES program as designed, but these challenges are common across settings where youth programming is provided, especially in economically challenged communities serving youth most in need. For these reasons, it was vital to examine the implementation of YES to understand variation in delivery and what factors contributed to successful delivery.
METHODS
We collected process evaluation data on 25 YES groups from 12 schools over four years. Four groups (from four different schools) were eliminated from our analyses because of incomplete data, resulting in a final sample of 21 groups. One eliminated group was missing attendance data, two groups were missing core content component data, and one group was missing both attendance data and core content component data. All of the schools with eliminated groups had subsequent groups that were included in the analysis so the contexts of these schools were represented in our final sample.
The process evaluation assessed four key aspects of delivery as defined by Durlak and DuPre (2008) and Baranowski and Stables (2000): fidelity; dose delivered; dose received; and program quality. Table 1 summarizes the types and sources of data that we used to assess these aspects of program delivery. While several measures might apply to more than one aspect of delivery, we assigned them to the aspect they most closely assess.
TABLE 1.
Assessment Methods for Aspects of YES Program Delivery
| | Participant/Teacher Interaction | Core Content Components | Sessions Offered | Attendance | Participant Engagement | Participant Satisfaction | Teacher Training | Quality Summary Score |
|---|---|---|---|---|---|---|---|---|
| Source | Observation | Teacher self-report | School records & teacher self-report | School records & teacher self-report | Participant self-report | Participant self-report | Study records | Calculated |
| Fidelity | X | X | | | | | | |
| Dose Delivered | | | X | X | | | | |
| Dose Received | | | | | X | X | | |
| Program Quality | | | | | | | X | X |
Fidelity
Teacher/Participant Interaction
Given the empowerment focus of YES, a critical component of delivery was teacher and participant interaction. Fidelity observations included two items concerning youth opportunities to lead activities (“no youth had an opportunity” to “all youth had an opportunity”), and the extent to which the teacher shared control with youth (“never shared control” to “always shared control”). Both items were rated on a five-point scale, with 5 being the highest possible score. These items were rated by pairs of trained research staff who observed approximately four sessions per group. The ratings were averaged over all observations for each group. We assessed inter-rater reliability by calculating percent agreement between the two raters on a subset of 15 observations. Percent agreement for all ratings ranged from 86.6% to 100%.
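For readers unfamiliar with this reliability statistic, percent agreement is simply the share of paired observations on which both raters gave identical scores. A minimal sketch with made-up ratings (the actual reliability subset comprised 15 paired observations):

```python
def percent_agreement(rater_a, rater_b):
    """Share of paired observations on which the two raters gave identical scores."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100 * matches / len(rater_a)

# Hypothetical ratings on the five-point "shared control" item:
print(percent_agreement([5, 4, 5, 5, 3], [5, 4, 5, 4, 3]))  # 80.0
```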
Core Content Components
To assess the coverage of core content components, we used logs that the teachers were asked to complete for each session delivered. For each component, we computed the percentage covered by dividing the number of delivered sessions addressing that component by the total number of curriculum sessions addressing it.
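The calculation can be sketched as follows. This is an illustration only: the component names and session counts are hypothetical, not the actual YES curriculum mapping.

```python
# Number of curriculum sessions addressing each core content component
# (hypothetical counts for illustration).
POSSIBLE_SESSIONS = {"self-esteem": 6, "leadership efficacy": 8, "adult mentoring": 4}

def coverage(delivered_sessions):
    """Percent of sessions addressing each component that a group delivered."""
    return {
        component: 100 * delivered_sessions.get(component, 0) / possible
        for component, possible in POSSIBLE_SESSIONS.items()
    }

# A group that delivered 3 self-esteem sessions and 2 leadership sessions:
print(coverage({"self-esteem": 3, "leadership efficacy": 2}))
# {'self-esteem': 50.0, 'leadership efficacy': 25.0, 'adult mentoring': 0.0}
```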
Dose Delivered
Sessions Offered
The YES curriculum had 41 possible sessions. Of these, 22 were classified as essential, 15 as recommended, and 4 as optional, based on practical considerations from pilot testing. We used school records to determine the number of sessions offered for 15 of the groups. For the 6 groups where school records were unavailable, we used teacher logs documenting sessions completed.
Attendance
The average attendance per group was based on school records for 15 groups and teacher logs for the 6 groups for which school records were unavailable.
Dose Received
Participant Engagement
On post-test surveys, we asked youth whether they had done several key YES activities: participating in a photovoice project; working with a neighborhood advocate; learning about cultural traditions; identifying community assets; and planning a summer project. Participants’ scores were calculated by summing each of the activities they reported. Possible scores ranged from 0 (participated in no key activities) to 5 (participated in all key activities). We calculated a group score by averaging scores of all participants.
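In code, this scoring reduces to summing each participant's yes/no responses across the five key activities and averaging within the group. A sketch under those assumptions (activity labels are shorthand for the items listed above):

```python
KEY_ACTIVITIES = ["photovoice", "neighborhood advocate", "cultural traditions",
                  "community assets", "summer project"]

def participant_score(responses):
    """Count of the five key activities a participant reported doing (0-5)."""
    return sum(bool(responses.get(activity, False)) for activity in KEY_ACTIVITIES)

def group_engagement(participants):
    """Group engagement score: mean of participant scores."""
    scores = [participant_score(p) for p in participants]
    return sum(scores) / len(scores)

# Two hypothetical participants: one reported 3 activities, one reported 1.
group = [
    {"photovoice": True, "community assets": True, "summer project": True},
    {"cultural traditions": True},
]
print(group_engagement(group))  # 2.0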
Participant Satisfaction
We administered a satisfaction questionnaire to youth participants following completion of the program (Franzen et al., 2009). We included two items to assess how helpful youth felt the program was for improving their social skills and empowering them to prevent violence. These items used a five-point scale from 1 (not at all helpful) to 5 (extremely helpful). We also asked whether participants would recommend the program to a friend. The response choices were yes, no and maybe.
Quality
Teacher Training
Training was conducted by research staff members for teachers and their supervisors annually. Each group was scored on whether the teacher did, or did not, participate in training. Groups with at least one trained teacher received a score of 1 and groups with untrained teachers received a score of 0.
Summary Quality Score
We used an index approach to combine the measures described above into one summary quality score, based on whether group scores on the individual measures were above or below the mean of the entire sample, and whether teachers had received training. Groups below the mean on each individual indicator received a 0, and those above the mean received a 1. Groups without a trained teacher received a 0; those with a trained teacher received a 1. The highest possible score was 10.
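The index calculation above can be sketched as a short function. The measure names here are illustrative placeholders (the actual index combined the nine delivery measures plus training, for a maximum of 10): each continuous measure contributes 1 point if a group scores above the sample mean, and the training indicator contributes its 0/1 value directly.

```python
from statistics import mean

def summary_quality_scores(groups):
    """Index approach: 1 point per measure on which a group scores above the
    sample mean, plus the 0/1 teacher-training indicator."""
    measures = [k for k in groups[0] if k != "trained"]
    sample_means = {m: mean(g[m] for g in groups) for m in measures}
    return [
        g["trained"] + sum(1 for m in measures if g[m] > sample_means[m])
        for g in groups
    ]

# Two hypothetical groups scored on two measures plus training
# (sample means: sessions = 20, attendance = 8.0):
groups = [
    {"trained": 1, "sessions": 30, "attendance": 12.0},
    {"trained": 0, "sessions": 10, "attendance": 4.0},
]
print(summary_quality_scores(groups))  # [3, 0]
```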
RESULTS
We found wide variations in delivery for some measures, while others were more consistent from group to group. Table 2 presents scores for each delivery measure.
TABLE 2.
Measures of YES Implementation by Group
Data sources by column: teacher training from study records; sessions offered and attendance from school records/teacher logs; youth lead opportunities and shared control from fidelity observations; helpfulness and recommendation items from youth post-test surveys; youth participation from youth self-report; core content coverage from teacher logs; the composite quality score is calculated.

| Group ID | Cohort | Teacher Attended Training | Number of Sessions Offered | Mean Attendance | Youth Lead Opportunities Mean (S.D.) | Shared Control Mean (S.D.) | YES helpful in building social skills Mean (S.D.) | YES helpful in empowering violence prevention Mean (S.D.) | % who would recommend to friends | Youth Participation Mean (S.D.) | Percent of Core Content Components Covered | Composite Quality Score |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 3 | 1 | 24 | 13 | 4.00 (2.00) | 4.00 (1.41) | 4.33 (0.89) | 4.14 (1.40) | 75.0% | 2.00 (1.64) | 46% | 5 |
| 2 | 3 | 0 | 11 | 1.54 | 4.67 (0.58) | 4.33 (1.16) | 4.33 (0.52) | 3.67 (1.37) | 83.3% | 1.44 (1.43) | 8% | 2 |
| 3 | 3 | 1 | 14 | 3.44 | 4.33 (1.16) | 4.33 (1.16) | 4.00 (1.00) | 3.20 (1.48) | 100.0% | 1.33 (1.56) | 38% | 2 |
| 4 | 4 | 1 | 21 | 8.33 | 4.17 (1.60) | 4.83 (0.41) | 4.20 (0.84) | 4.40 (0.55) | 80.0% | 1.50 (1.38) | 18% | 4 |
| 5 | 4 | 1 | 10 | 2 | 3.67 (1.50) | 4.11 (0.93) | 4.15 (1.14) | 4.42 (0.79) | 69.2% | 2.62 (1.89) | 8% | 3 |
| 6 | 4 | 1 | 8 | 4.27 | 4.77 (0.44) | 4.85 (0.38) | 4.36 (0.67) | 4.36 (1.29) | 90.9% | 2.27 (1.42) | 8% | 7 |
| 7 | 4 | 0 | 24 | 7.42 | 4.67 (0.52) | 4.17 (0.98) | 3.69 (1.44) | 3.23 (1.54) | 69.2% | 1.73 (1.45) | 56% | 3 |
| 8 | 4 | 1 | 27 | 17.75 | 4.00 (2.00) | 5.00 (0.00) | 4.17 (0.75) | 3.67 (1.37) | 100.0% | 2.00 (1.26) | 37% | 5 |
| 9 | 4 | 1 | 22 | 4.5 | 4.80 (0.78) | 4.73 (0.60) | 4.60 (0.52) | 4.50 (0.53) | 80.0% | 2.50 (1.30) | 61% | 8 |
| 10 | 4 | 0 | 23 | 7.27 | 4.63 (1.06) | 4.88 (0.35) | 4.55 (0.93) | 4.18 (1.17) | 63.6% | 2.55 (2.11) | 42% | 6 |
| 11 | 5 | 1 | 7 | 3.71 | 5.00 (0.00) | 4.00 (1.41) | 4.17 (0.75) | 3.67 (0.52) | 100.0% | 2.00 (1.79) | 14% | 3 |
| 12 | 5 | 1 | 14 | 9.27 | 4.67 (0.82) | 4.00 (0.89) | 4.55 (0.69) | 5.00 (0.00) | 100.0% | 2.63 (1.69) | 52% | 8 |
| 13 | 5 | 1 | 17 | 11.5 | 1.75 (0.96) | 3.00 (1.83) | 4.60 (0.89) | 4.60 (0.55) | 80.0% | 2.20 (1.10) | 52% | 6 |
| 14 | 5 | 1 | 9 | 3.83 | 5.00 (0.00) | 4.50 (0.71) | 3.80 (1.79) | 4.00 (1.73) | 80.0% | 1.40 (1.34) | 17% | 3 |
| 15 | 5 | 1 | 31 | 9.8 | 5.00 (0.00) | 4.88 (0.35) | 4.29 (0.76) | 4.57 (0.78) | 85.7% | 1.56 (1.74) | 79% | 8 |
| 16 | 5 | 1 | 46 | 24.83 | 4.50 (0.58) | 5.00 (0.00) | 4.17 (0.84) | 4.17 (0.82) | 91.7% | 3.67 (1.07) | 86% | 9 |
| 17 | 5 | 1 | 31 | 18.5 | 4.67 (0.58) | 4.67 (0.58) | 4.30 (0.82) | 3.90 (0.88) | 80.0% | 2.75 (1.76) | 73% | 8 |
| 18 | 6 | 1 | 16 | 4.67 | 5.00 (0.00) | 5.00 (0.00) | 4.00 (0.78) | 4.25 (1.04) | 87.5% | 2.11 (1.45) | 61% | 7 |
| 19 | 6 | 1 | 22 | 14.88 | 5.00 (0.00) | 5.00 (0.00) | 4.57 (0.79) | 4.71 (0.76) | 100.0% | 2.71 (1.38) | 58% | 10 |
| 20 | 6 | 1 | 8 | 3.56 | 5.00 (0.00) | 5.00 (0.00) | 4.38 (1.41) | 4.38 (1.41) | 75.0% | 1.44 (1.51) | 45% | 6 |
| 21 | 6 | 1 | 6 | 3.3 | 5.00 (0.00) | 5.00 (0.00) | 4.44 (0.88) | 4.33 (0.87) | 88.9% | 1.78 (1.72) | 34% | 6 |
| M | | | 18.62 | 8.45 | 4.36 | 4.44 | 4.30 | 4.16 | 85% | 2.06 | 43% | 5.67 |
| SD | | | (10.07) | (6.28) | (0.81) | (0.54) | (0.30) | (0.50) | (0.11) | (0.56) | (0.24) | (2.39) |
With regard to fidelity, observers scored almost all teachers highly on providing youth opportunities to lead and sharing control. We found great variation, however, in the proportion of curriculum core content components covered by each group, ranging from 8% to 86%. Core components that were heavily concentrated early in the curriculum (e.g. self-esteem, leadership efficacy, civic efficacy) received more coverage, as many groups did not have time to complete all sessions.
Dose delivered also varied widely. The number of sessions offered ranged from 7 to 46, with an average of 18.6 (10.1) sessions per implementation. It is important to note that the teachers were sometimes inconsistent in maintaining logs. The number of sessions recorded is therefore likely to be an underestimate in some cases. Average attendance showed wide variation across groups. The highest mean number of participants per group was 25 and the lowest was 2, with an overall mean of 8.45 (6.3) participants.
Our measures of dose received were based on participant self-report. The group means for improving social skills ranged from 3.7 (1.4) to 4.6 (0.5), and for empowering to prevent violence from 3.2 (1.5) to 5.0 (0.0), with 5 being the maximum possible rating. Overall, 84% of students reported that they would recommend the program to others (average group scores ranged from 64% to 100%). To assess participant engagement, we calculated a score based on the mean of key activities reported by all participants in each group. The mean group score for participation was 2.1 (0.6), with group scores ranging from 1.4 (1.4) to 3.7 (1.1).
Out of 15 teachers, some of whom led more than one group, 4 did not attend training. Teachers who did not attend training, however, received informal technical assistance from program staff.
Variations in summary quality scores of program delivery were striking, ranging from 2 to 10, with an average of 5.7. Notably, of the 12 groups that received quality scores of 6 or above, 5 were led by the same teacher, and 2 others were led by the same team of 2 teachers. The remaining high-scoring groups were led by teachers who taught only one time. Of the 9 groups with quality scores of 5 or less, 4 were led by teachers who taught 2 groups each, and the rest by teachers who taught only once.
DISCUSSION
The results of this study provide program developers and implementers with a framework to measure fidelity, dose delivered, dose received, and quality to inform process evaluation and program improvement. Our results indicated that delivery may have been impeded for many groups by inadequate time to complete the program. Although we recommended that the program begin in January, a few groups started as late as April. As a result, some teachers truncated the curriculum, while others dropped sessions in order to cover the final units. In particular, Unit 4, Building Intergenerational Partnerships, was largely skipped by many of the teachers. Teacher feedback indicated that this unit was passed over because recruiting volunteers to work in schools required background checks and training, and more time was needed for youth to complete their community projects. Unsurprisingly, we found substantial variation in the coverage of core components, especially when time to complete the curriculum was inadequate. Teachers needed more guidance regarding which sessions to omit if they could not complete them all. Notably, an analysis of program outcomes at post-test found that youth who participated in a greater number of key curriculum activities reported more psychological empowerment and prosocial outcomes and fewer antisocial outcomes than youth who received fewer of the intervention components (Zimmerman et al., in press).
We found wide variations in attendance, as some program groups had just a handful of participants. These findings point to the need to identify barriers to recruitment and promote timely initiation of the program. They suggest that greater attention must be paid to pre-conditions for quality implementation, such as organizational capacity and readiness (Meyers et al., 2012). Our results suggest that intensive collaboration with school administrators in advance of program delivery is important to obtain support and commitment.
We found less variation in self-reports of youth satisfaction and observations of teachers’ interactions with participants, as compared with measures including sessions offered and core components covered. This suggests that measures based on objective counts may be more informative with regard to variation in implementation than measures based on participant or observer perceptions. Yet, participant and observer perceptions are also important to collect as they reflect program engagement more proximally (Moore, et al., 2015). Despite any shortcomings in delivery, participants’ experiences with the program appeared to be positive, with the majority reporting gains in social and violence prevention skills. Indeed, almost 85% reported that they would recommend YES to their peers.
The summary quality index appeared to reliably identify successful and less successful teachers. Of the 5 teachers who led more than one group, 3 had consistently high quality scores, while 2 others always scored below the mean. This effect did not appear to be associated with the schools where the programs took place, as one of the most highly scoring teachers led the program in three different schools. Although most groups led by highly scoring teachers did not score above the mean on all measures, in sum, the quality score differentiated them from less successful teachers. Research staff noted that the highly scoring teachers seemed to have great enthusiasm for the program and put significant effort into community projects.
These implementation findings informed several changes to the second edition of the curriculum: 1) the total number of sessions was reduced; 2) the theoretical model of the program was emphasized in training and the curriculum text; 3) the core components covered by each session were explicitly listed; and 4) a condensed curriculum guide was included so that groups unable to complete all sessions may include all core components to some extent (Zimmerman et al., 2017). Finally, the results pointed to the need to provide teachers with stronger scaffolding through training and ongoing technical assistance during delivery of the program. We organized monthly meetings to both provide technical assistance and create a learning community so teachers could share experiences about implementing the program and learn from each other.
CONCLUSION
To determine why a program is, or is not, effective, researchers and practitioners must first understand how the program components were actually delivered, to which participants, and with what degree of fidelity to the program’s established curriculum and methods (Linnan & Steckler, 2002; Meyers et al., 2012). Additionally, practitioners need detailed implementation information to initiate improvements and to help ensure the external validity of the program.
Our results suggest that combining process measures into a summary quality score may be more useful for identifying successful delivery than focusing on any single measure. Each of the measures we used has limitations on its own, but collectively they painted a more complete picture of how the program was delivered. Administrative records may be incomplete, self-reported data may suffer from inaccurate recall or desirability bias, and observations may be inconsistent. Yet, taken together these sources of information helped to triangulate data to establish a quality measure for implementation. These indicators of program delivery provided a strong basis for evaluating program implementation, taking actions to improve it, and ultimately, deepening interpretation of program effectiveness. This study provides a model that may add dimension to our understanding of what makes evidence-based programs successful in real-world settings.
Acknowledgment
This research was supported by the Youth Empowerment Solutions for Positive Youth Development Grant Number 5R01HD062565-03 (PI, Zimmerman) from the National Institute of Child Health and Human Development and the Michigan Youth Violence Prevention Center Cooperative Agreement Number 1U01CE002698-01 (PI, Zimmerman) from the Centers for Disease Control and Prevention. We thank the Genesee County and Flint Community Schools, and the teachers, volunteers, and youth who participated in YES.
Contributor Information
Susan Morrel-Samuels, Managing Director of the Michigan Youth Violence Prevention Center in the Department of Health Behavior and Health Education at the University of Michigan School of Public Health in Ann Arbor, Michigan.

Laney A. Rupp, Research Specialist in the Department of Health Behavior and Health Education at the University of Michigan School of Public Health in Ann Arbor, Michigan.

Andria B. Eisman, Research Assistant Professor of Health Behavior and Health Education at the University of Michigan School of Public Health in Ann Arbor, Michigan.

Alison L. Miller, Associate Professor of Health Behavior and Health Education at the University of Michigan School of Public Health in Ann Arbor, Michigan.

Sarah A. Stoddard, Assistant Professor of Nursing at the University of Michigan School of Nursing in Ann Arbor, Michigan.

Susan P. Franzen, Research Specialist in the Department of Health Behavior and Health Education at the University of Michigan School of Public Health in Ann Arbor, Michigan.

Peter Hutchison, Program Manager for the Youth Empowerment Solutions Program at the University of Michigan School of Public Health in Ann Arbor, Michigan.

Marc A. Zimmerman, Professor of Health Behavior and Health Education and the Director of the Michigan Youth Violence Prevention Center at the University of Michigan School of Public Health in Ann Arbor, Michigan.
REFERENCES
- Baranowski T, & Stables G (2000). Process Evaluations of the 5-a-Day Projects. Health Education & Behavior, 27(2), 157–166. 10.1177/109019810002700202 [DOI] [PubMed] [Google Scholar]
- Bauer MS, Damschroder L, Hagedorn H, Smith J, & Kilbourne AM (2015). An introduction to implementation science for the non-specialist. BMC Psychology, 3, 32. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Brownson RC, Fielding JE, & Maylahn CM (2009). Evidence-Based Public Health: A Fundamental Concept for Public Health Practice. Annual Review of Public Health, 30(1), 175–201. 10.1146/annurev.publhealth.031308.100134 [DOI] [PubMed] [Google Scholar]
- Curran GM, Bauer M, Mittman B, Pyne JM, & Stetler C (2012). Effectiveness-implementation Hybrid Designs. Medical Care, 50(3), 217–226. 10.1097/MLR.0b013e3182408812 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, & Lowery JC (2009). Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implementation Science, 4, 50. 10.1186/1748-5908-4-50 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Dane AV, & Schneider BH (1998). Program Integrity in Primary and Early Secondary Prevention: Are Implementation Effects Out of Control? Clinical Psychology Review, 18(1), 23–45. 10.1016/S0272-7358(97)00043-3 [DOI] [PubMed] [Google Scholar]
- Domitrovich CE, Bradshaw CP, Poduska JM, Hoagwood K, Buckley JA, Olin S, Ialongo NS (2008). Maximizing the Implementation Quality of Evidence-Based Preventive Interventions in Schools: A Conceptual Framework. Advances in School Mental Health Promotion, 1(3), 6–28. 10.1080/1754730X.2008.9715730 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Durlak JA (1998). Why Program Implementation is Important. Journal of Prevention & Intervention in the Community, 17(2), 5–18. 10.1300/J005v17n02_02
- Durlak JA, & DuPre EP (2008). Implementation Matters: A Review of Research on the Influence of Implementation on Program Outcomes and the Factors Affecting Implementation. American Journal of Community Psychology, 41(3–4), 327–350. 10.1007/s10464-008-9165-0
- Dusenbury L, Brannigan R, Hansen WB, Walsh J, & Falco M (2005). Quality of implementation: developing measures crucial to understanding the diffusion of preventive interventions. Health Education Research, 20(3), 308–313. 10.1093/her/cyg134
- Eisman AB, Zimmerman MA, Kruger D, Reischl TM, Miller AL, Franzen SP, & Morrel-Samuels S (2016). Psychological Empowerment Among Urban Youth: Measurement Model and Associations with Youth Outcomes. American Journal of Community Psychology, 58(3–4), 410–421. 10.1002/ajcp.12094
- Fielding JE, & Briss PA (2006). Promoting Evidence-Based Public Health Policy: Can We Have Better Evidence And More Action? Health Affairs, 25(4), 969–978. 10.1377/hlthaff.25.4.969
- Franzen S, Morrel-Samuels S, Reischl TM, & Zimmerman MA (2009). Using process evaluation to strengthen intergenerational partnerships in the Youth Empowerment Solutions program. Journal of Prevention & Intervention in the Community, 37(4), 289–301. 10.1080/10852350903196290
- Freire KE, Perkinson L, Morrel-Samuels S, & Zimmerman MA (2015). Three Cs of Translating Evidence-Based Programs for Youth and Families to Practice Settings. New Directions for Child and Adolescent Development, 2015(149), 25–39. 10.1002/cad.20111
- Gottfredson DC, Cook TD, Gardner FEM, Gorman-Smith D, Howe GW, Sandler IN, & Zafft KM (2015). Standards of Evidence for Efficacy, Effectiveness, and Scale-up Research in Prevention Science: Next Generation. Prevention Science, 16(7), 893–926. 10.1007/s11121-015-0555-x
- Griffith DM, Allen JO, Zimmerman MA, Morrel-Samuels S, Reischl TM, Cohen SE, & Campbell KA (2008). Organizational empowerment in community mobilization to address youth violence. American Journal of Preventive Medicine, 34(3 Suppl), S89–99.
- Kilbourne AM, Neumann MS, Pincus HA, Bauer MS, & Stall R (2007). Implementing evidence-based interventions in health care: application of the replicating effective programs framework. Implementation Science, 2, 42. 10.1186/1748-5908-2-42
- Meyers DC, Durlak JA, & Wandersman A (2012). The Quality Implementation Framework: A Synthesis of Critical Steps in the Implementation Process. American Journal of Community Psychology, 50(3–4), 462–480. 10.1007/s10464-012-9522-x
- Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, … Baird J (2015). Process evaluation of complex interventions: Medical Research Council guidance. BMJ, 350, h1258. 10.1136/bmj.h1258
- Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, … Hensley M (2011). Outcomes for Implementation Research: Conceptual Distinctions, Measurement Challenges, and Research Agenda. Administration and Policy in Mental Health and Mental Health Services Research, 38(2), 65–76. 10.1007/s10488-010-0319-7
- Puddy RW, & Wilkins N (2011). Understanding Evidence Part 1: Best Available Research Evidence. Retrieved from https://calio.dspacedirect.org/handle/11212/227
- Reischl TM, Zimmerman MA, Morrel-Samuels S, Franzen SP, Faulk M, Eisman AB, & Roberts E (2011). Youth empowerment solutions for violence prevention. Adolescent Medicine: State of the Art Reviews, 22(3), 581–600, xiii.
- Steckler A, & Linnan L (2002). Process Evaluation for Public Health Interventions and Research. San Francisco, CA: Jossey-Bass.
- Wilson SJ, Lipsey MW, & Derzon JH (2003). The effects of school-based intervention programs on aggressive behavior: A meta-analysis. Journal of Consulting and Clinical Psychology, 71(1), 136–149. 10.1037/0022-006X.71.1.136
- Zimmerman M (2000). Empowerment theory: Psychological, organizational, and community levels of analysis. In Rappaport J & Seidman E (Eds.), Handbook of community psychology (pp. 43–63). Dordrecht, Netherlands: Kluwer Academic Publishers.
- Zimmerman M, Stewart SE, Morrel-Samuels S, Franzen S, & Reischl TM (2011). Youth Empowerment Solutions for Peaceful Communities: Combining theory and practice in a community-level violence prevention curriculum. Health Promotion Practice, 12(3), 425–439. 10.1177/1524839909357316
- Zimmerman M, Stewart S, Hurd N, Morrel-Samuels S, Hutchison P, Reischl T, & Roberts E (2017). Youth Empowerment Solutions Curriculum (Second Edition). Ann Arbor, MI: The University of Michigan.
- Zimmerman MA, Eisman AB, Reischl TM, Morrel-Samuels S, Stoddard S, Miller AL, … Rupp L (in press). Youth Empowerment Solutions: Evaluation of an after-school program to engage middle school students in community change. Health Education and Behavior.
