Author manuscript; available in PMC: 2013 Mar 4.
Published in final edited form as: Prev Sci. 2011 Sep;12(3):223–234. doi: 10.1007/s11121-011-0226-5

Effects of Communities That Care on the Adoption and Implementation Fidelity of Evidence-Based Prevention Programs in Communities: Results from a Randomized Controlled Trial

Abigail A Fagan *, Michael W Arthur **, Koren Hanson **, John S Briney **, J David Hawkins **
PMCID: PMC3587334  NIHMSID: NIHMS362213  PMID: 21667142

Abstract

This paper describes findings from the Community Youth Development Study (CYDS), a randomized controlled trial of the Communities That Care (CTC) prevention system, on the adoption and implementation fidelity of science-based prevention programming in 24 communities. Data were collected using the Community Resource Documentation (CRD), which entailed a multi-tiered sampling process and phone and web-based surveys with directors of community-based agencies and coalitions, school principals, service providers, and teachers. Four years after the initiation of the CTC prevention system, the results indicated increased use of tested, effective prevention programs in the 12 CTC intervention communities compared to the 12 control communities, and significant differences favoring the intervention communities in the numbers of children and families participating in these programs. Few significant differences were found regarding implementation quality; respondents from both intervention and control communities reported high rates of implementation fidelity across the services provided.

Keywords: community coalitions, adoption, implementation fidelity, dissemination


Increasing the scope and delivery of tested and effective prevention programming is a major goal of prevention science (Glasgow, Lichtenstein, & Marcus, 2003; Rohrbach, Grana, Sussman et al., 2006; Saul, Duffy, Noonan et al., 2008). Prevention strategies with evidence of effectiveness in reducing problem behaviors have been identified and are available to be implemented by a range of community service providers (Hawkins & Catalano, 2004; National Research Council and Institute of Medicine, 2009), but such programs have not been widely adopted (Kumpfer & Alvarado, 2003; Ringwalt, Hanley, Vincus et al., 2008). Even when selected, programs are often implemented with low levels of implementation fidelity (Gottfredson & Gottfredson, 2002; Hallfors & Godette, 2002), which is problematic given that stronger adherence to program guidelines has been associated with greater program success (Durlak & DuPre, 2008; Fixsen, Naoom, Blase et al., 2005). This paper examines the extent to which use of the Communities That Care prevention system increased the adoption and effective implementation of tested, effective programs in a randomized controlled evaluation involving 24 communities.

To What Extent are Tested and Effective Programs Adopted and Well Implemented?

There is some evidence to suggest that the adoption of effective prevention programs is increasing in schools (Komro, Perry, Veblen-Mortenson et al., 2008; Ringwalt, Vincus, Hanley et al., 2010), likely reflecting the emphasis on using research-based drug prevention curricula in the Safe and Drug Free Schools and No Child Left Behind legislation. However, although Ringwalt et al. (2010) reported an increase in the proportion of middle schools nationally that implemented a tested and effective drug prevention program from 2005 to 2008 (from 43% to 47%), only 26% of schools reported using such programs “the most” out of all of the prevention activities they implemented. In addition, only about 10% of high schools in these districts reported the use of effective curricula (Ringwalt et al., 2008). There have been fewer systematic attempts to document the spread of evidence-based practices to other types of community agencies, but rates of adoption appear to be low (Printz, Sanders, Shapiro et al., 2009; Saul, Wandersman, Flaspohler et al., 2008). Kumpfer and Alvarado (2003) estimated that only about 10% of practitioners implement effective family-based prevention programs. Most rely on locally developed or commercially available programs that have not been evaluated using rigorous scientific methods.

Even when community organizations select a tested, effective prevention program, their implementation practices often lack fidelity (Gottfredson et al., 2002; Hallfors et al., 2002; Kumpfer et al., 2003). That is, local agencies often fail to purchase and use program materials; ensure that all implementers receive training from program developers; deliver the majority of program sessions and content; ensure that staff are prepared, enthusiastic, and supervised; or monitor and evaluate implementation procedures (Dusenbury, Brannigan, Falco et al., 2003; Fixsen et al., 2005; Gottfredson et al., 2002). A study based on surveys of 81 Safe and Drug Free School coordinators in 11 states concluded that only 19% of school districts faithfully implemented effective curricula (Hallfors et al., 2002). There have been fewer large-scale studies examining program implementation by other types of community organizations, but there is some evidence that program deviations can be significant. In particular, staff adaptations such as deleting or adding to the program content can threaten program adherence (Dariotis, Bumbarger, Duncan et al., 2008; Gottfredson, Kumpfer, Polizzi-Fox et al., 2006; Griner Hill, Maucione, & Hood, 2006; Kumpfer et al., 2003; Polizzi Fox, Gottfredson, Kumpfer et al., 2004).

Glasgow and colleagues (2003) attribute the lower rates of implementation fidelity found during local replications to the fact that programs created and tested by scientists tend to be more intense, complex, and standardized than the practices favored by community agencies. Practitioners must often serve large, diverse populations with limited time, resources, training, and oversight (Fixsen et al., 2005; Glasgow et al., 2003). Staff supervision and feedback tend to emphasize administrative issues (e.g., documenting numbers served, recording billable hours) rather than scientific principles such as implementation fidelity (Fixsen et al., 2005). These structural and organizational barriers likely reduce the ability of community practitioners to achieve high levels of implementation fidelity. Methods are needed to enhance both the adoption and the implementation fidelity of local prevention practices, to help ensure that communities can achieve widespread reductions in targeted problem behaviors.

Methodological Challenges in Measuring Adoption and Implementation Fidelity

The lack of research assessing the dissemination and effective replication of prevention strategies likely reflects the difficulty in reliably and validly measuring these processes. Without accurate measurement of the use of tested and effective prevention practices in communities, however, it is impossible to assess the degree to which advances in prevention science are being translated to local communities or to examine the factors associated with the adoption and high quality delivery of evidence-based practices. Assessment is also needed to control for potential contamination when evaluating new prevention initiatives. Sloboda and colleagues (2008) caution that scientists can no longer assume the presence of “pure” control sites and emphasize that research trials must measure participant exposure to all prevention services operating in the school or community; however, there are very few models for doing so.

Prior research investigating program adoption and fidelity has typically relied on mail or phone surveys with agency administrators or program providers, observation of program sessions, and reports from technical assistance providers (Dariotis et al., 2008; Elliott & Mihalic, 2004; Gottfredson et al., 2006; Hallfors et al., 2002; Ringwalt et al., 2010). All of these measures have advantages and disadvantages. Hallfors and Godette (2002) found that school district administrators tended to report higher rates of program adoption on mail surveys than in follow-up phone interviews, and that these “secondary informants” were too far removed to accurately assess the implementation practices of program staff. Sloboda and colleagues (2008) found that students under-reported exposure to evidence-based drug prevention programs, compared to school administrators. Other research has suggested that self-reports of implementation fidelity may be inflated due to social desirability (Lillehoj, Griffin, & Spoth, 2004; Melde, Esbensen, & Tusinski, 2006). Although more objective measures, such as live or videotaped observations, may be preferred, they can be expensive, and their introduction may alter the natural setting of the intervention (Johnson, 2009; Melde et al., 2006).

The Current Study

The goal of the current study was to measure and evaluate the degree to which intervention communities participating in a randomized trial increased their adoption and high quality implementation of tested and effective prevention strategies, relative to control communities, through the use of the Communities That Care (CTC) operating system. CTC involves the formation of diverse and broad-based coalitions that receive training and technical assistance in how to identify elevated risk factors and depressed protective factors faced by community youth, target these needs using prevention strategies that have previously been tested and proven effective in reducing problem behaviors, and monitor the implementation quality of selected interventions (Hawkins, Catalano, & Arthur, 2002). In prior papers (Fagan, Hanson, Hawkins et al., 2008; Fagan, Hanson, Hawkins et al., 2009), we have detailed the procedures used to assess the implementation fidelity of prevention programs adopted by intervention communities using research funding. These measures included self-reports from program implementers, independent observations of program sessions by community members, collection of attendance records, and pre- and post-surveys of participants. This set of measures was selected in order to provide in-depth data on fidelity as a part of the process evaluation of the study, and also to serve as a model that could be used by these communities in the future to monitor their own implementation practices.

In the current study, we describe a different methodology, the Community Resource Documentation surveys, which were conducted in order to compare rates of adoption and fidelity across intervention and control communities. While data from the process evaluation indicated high rates of implementation fidelity in intervention sites (Fagan, Hanson, Hawkins et al., 2008; Fagan, Hanson et al., 2009), that evaluation did not assess program adoption or implementation fidelity of all tested and effective prevention programs operating in the intervention communities, nor did it assess the extent of program adoption or fidelity in control communities. The current investigation provides a more extensive evaluation of the CTC system, by testing the degree to which the CTC emphasis on the adoption and high quality implementation of tested and effective strategies spread throughout the intervention communities, the degree of “contamination” in control communities (i.e., the degree to which these sites adopted and implemented tested and effective prevention strategies), and the degree to which intervention and control communities differed in these processes. To our knowledge, there have been no other randomized controlled evaluations assessing differences in community rates of program adoption and fidelity. The specific aims of the current study are to provide a scientific evaluation of:

  1. The degree to which use of the CTC prevention system increased the adoption of tested and effective prevention strategies in intervention versus control communities.

    1. Hypothesis: Intervention communities would implement a greater number of tested, effective prevention strategies compared to control communities.

  2. The degree to which the use of the CTC prevention system increased the implementation fidelity of selected tested and effective prevention strategies.

    1. Hypothesis: Intervention communities would demonstrate higher implementation fidelity compared to control communities.

METHODS

Study Description

This study utilizes data from the Community Youth Development Study (CYDS), a ten-year study involving 12 pairs of small- to medium-sized communities from Colorado, Illinois, Kansas, Maine, Oregon, Utah, and Washington matched within state with regard to size, poverty, diversity, and crime indices. In the fall of 2002, one member of each matched pair was randomized to either the CTC intervention or the control condition (Hawkins, Catalano, Arthur et al., 2008). The 12 control communities conducted prevention program planning and implementation according to their usual practices and received no resources or services encouraging them to adopt or effectively implement tested and effective prevention programs. From spring 2003 through spring 2008, each of the 12 intervention communities was provided with training in the CTC model and proactive technical assistance via telephone calls, e-mail correspondence, and site visits at least once annually. They also received funding for a full-time CTC coordinator and up to $75,000 annually in Years 2–5 of the study to implement tested and effective programs, policies, and practices that targeted fifth- to ninth-grade students (the focus age group of the study) and their families.

Data Collection Process

The Community Resource Documentation (CRD) surveys were developed to measure the implementation of evidence-based prevention strategies in communities as part of a previous study, the Diffusion Project (Arthur, Glaser, Hawkins et al., 2005). The CRD was administered in 2001–2002 as part of that project in 41 communities, which included the 24 communities participating in the Community Youth Development Study [1]. The 2001–2002 surveys were conducted 1.5 years prior to the start of the CYDS, and data from the 24 CYDS sites collected at this time point provide (pre-)baseline data for the current investigation. The next CRD was administered in 2004–2005, 1.5 years after the CYDS began, and the third administration occurred 3.5 years post-baseline (in 2006–2007). For ease of interpretation, we refer to the three time points as 2002, 2005, and 2007. It should be noted that each assessment measured the adoption and fidelity of prevention programs offered during the prior year, and the three time points can be considered repeated, cross-sectional ‘snapshots’ of these outcomes, rather than longitudinal data assessing changes in the adoption or implementation of particular programs.

A three-tiered snowball sampling approach was used to generate the CRD sample, which included directors of community agencies and coalitions, program coordinators and staff, school administrators, and teachers. In Tier 1, interviews were conducted in each community with 10 positional leaders (e.g., mayors, school superintendents, police chiefs) and 5 community leaders identified by the positional leaders as being most knowledgeable about community prevention efforts. Respondents were asked to name community agencies, organizations, and coalitions providing prevention services and to provide contact information for their directors.

In Tier 2, telephone interviews were conducted with nominated agency directors and prevention coalition leaders [2]. These respondents were asked to name the prevention programs their organizations had delivered or sponsored in the past year, with ‘programs’ defined as “a defined set of services with set activities (sessions, classes, or meeting times) that are provided to a defined group of people (members of the community, customers, or participants).” Respondents were asked to nominate only programs that focused on the prevention of problem behaviors, rather than treatment of those with pre-existing problems, and that served residents in the target community, and to focus on four program types:

  1. Parent Training: Programs that use curricula to teach parents skills for effective parenting.

  2. Programs to Promote Social and Emotional Competence: Programs that use curricula to teach emotional, social and behavioral skills.

  3. Mentoring: Programs that match adults or older teens with children in a supervised one-on-one relationship for at least one school year.

  4. Tutoring: Programs that link children with trained tutors (older children or adults) to improve academic skills or performance.

For each program that met these criteria, respondents provided contact information for the program coordinator(s), who were later interviewed. Also during Tier 2, principals of all public elementary, middle, and high schools in the 24 communities were mailed surveys and asked to identify prevention programs offered by their schools and to provide contact information for program coordinators, who were later interviewed.

As shown in Table 1, response rates for the agency and coalition interviews were comparable across intervention and control communities, and over 90% of eligible respondents participated in each administration. Of the eligible population, about 300 agency and coalition directors were interviewed at each time point, for overall response rates of 91% in 2002, 94% in 2005 and 95% in 2007. Response rates for the principal surveys were also high and comparable across conditions, with 76–87% of administrators completing the surveys at the three time points.

Table 1.

The Community Resource Documentation (CRD) Components and Response Rates, 2002, 2005, and 2007

AGENCY & COALITION INTERVIEWS (phone interview to generate sample; respondent: director of agency or coalition)
  2002: 598 identified, 347 eligible, 317 completed; response rates 91.7% (CTC), 91.0% (Control)
  2005: 442 identified, 315 eligible, 296 completed; response rates 96.0% (CTC), 91.4% (Control)
  2007: 523 identified, 337 eligible, 320 completed; response rates 96.8% (CTC), 92.8% (Control)

PRINCIPAL SURVEY (mail survey to generate sample; respondent: principal or vice-principal)
  2002: 179 identified, 165 eligible, 132 completed; response rates 83.0% (CTC), 76.6% (Control)
  2005: 153 identified, 153 eligible, 131 completed; response rates 86.8% (CTC), 83.9% (Control)
  2007: 153 identified, 153 eligible, 125 completed; response rates 80.2% (CTC), 83.8% (Control)

PROGRAM INTERVIEW (phone interview to measure program adoption and fidelity; respondent: program director or staff person)
  2002: 465 identified, 357 eligible, 337 completed; response rates 92.6% (CTC), 96.7% (Control)
  2005: 640 identified, 372 eligible, 339 completed; response rates 92.2% (CTC), 89.8% (Control)
  2007: 799 identified, 326 eligible, 300 completed; response rates 91.7% (CTC), 92.4% (Control)

TEACHER SURVEY (internet-based survey to measure program adoption and fidelity; respondent: teacher)
  2005–2006: 2,236 identified, 1,989 eligible, 1,607 completed; response rates 81.9% (CTC), 79.5% (Control)
  2007: 2,218 identified, 1,983 eligible, 1,579 completed; response rates 80.6% (CTC), 78.7% (Control)

Tier 3 of the CRD process was designed to verify the adoption and assess the implementation fidelity of nominated evidence-based programs. Program coordinators and staff identified during Tier 2 were surveyed using computer-assisted telephone interviews (CATI). Response rates for these Program Interviews were 90% to 97% across the three waves and involved 337 respondents in 2002, 339 respondents in 2005, and 300 respondents in 2007 in the intervention and control communities combined (see Table 1). For each identified program, respondents were asked to verify that the program was currently being offered, was prevention-focused, and served at least one parent or youth in the targeted age group in the study community. Respondents in the 2005 and 2007 Parent Training, Social Competence, and Mentoring interviews were presented with a list of prevention programs previously demonstrated in at least one high quality research trial to reduce problem behaviors, as identified in the CTC Prevention Strategies Guide (http://www.sdrg.org/ctcresource/) or through reviews conducted by the research team [3]. Respondents were asked: “In the past year, did your program use any of the following curricula?” and (in the Parent Training and Social Competence interviews): “Is there a primary curriculum from which [your program] draws?” Respondents who identified one of the listed programs in response to either question were considered adopters [4]. Adoption of a tested and effective Tutoring program was contingent on affirmative responses to five items indicating that tutors were screened before acceptance and trained, tutors were supervised, tutoring sessions occurred at least twice a week, there was a tutor-to-tutee ratio of less than 1 to 5, and changes in tutees’ performance were evaluated.
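To make these adoption-coding rules concrete, the sketch below shows one way the decision logic could be expressed. It is an illustrative sketch only, not the study's actual coding procedure; the field names and the example curriculum menu are hypothetical.

```python
# Illustrative sketch only: one way to express the Program Interview adoption rules
# described above. Field names and the example menu are hypothetical, not the
# study's actual instrument or code.

TESTED_PROGRAM_MENU = {
    "Guiding Good Choices",
    "Strengthening Families Program 10-14",
    "Life Skills Training",
}

# The five structural criteria required for a Tutoring program to count as adopted.
TUTORING_CRITERIA = (
    "tutors_screened_and_trained",
    "tutors_supervised",
    "sessions_at_least_twice_weekly",
    "tutor_tutee_ratio_under_1_to_5",
    "tutee_progress_evaluated",
)

def is_adopter(response: dict) -> bool:
    """Code a Program Interview response as adoption of a tested, effective program."""
    if response["program_type"] == "Tutoring":
        # Tutoring adoption requires affirmative answers to all five items.
        return all(response.get(item, False) for item in TUTORING_CRITERIA)
    # Parent Training, Social Competence, and Mentoring: adoption is coded when a
    # listed curriculum was used in the past year or named as the primary curriculum.
    used = set(response.get("curricula_used_past_year", []))
    primary = response.get("primary_curriculum")
    return bool(used & TESTED_PROGRAM_MENU) or primary in TESTED_PROGRAM_MENU
```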

In the 2002 CRD Program Interviews, the Parent Training and Social Competence interviews included shorter lists of tested and effective programs, because evidence of ‘what works’ in prevention was less complete at that time. Respondents in 2002 were asked to identify their program by name, and those naming a program included on the 2002, 2005, or 2007 lists were considered adopters. Those completing the Mentoring interview were asked to identify the program they used, and only those using the Big Brothers/Big Sisters or Across Ages programs were coded as adopters. Adoption of Tutoring programs in 2002 was assessed using criteria similar to those used in 2005 and 2007. All respondents, regardless of program type, were also asked at each time point to report the total number of participants served by the program in the past year.

An internet-based survey for teachers was used to measure adoption and fidelity of tested and effective programs delivered in participating schools in mid-to-late spring of 2005 and 2007 (i.e., towards the end of each school year). Eligible teachers were those who taught students in Grades 5–9 and whose principals allowed them to participate in the survey. In 2005, 20 schools refused to participate, including all schools in one community; these schools were re-approached in 2006 and 16 agreed to be surveyed. Combining the 2005 and 2006 samples, about 82% of eligible teachers in CTC communities and 80% of eligible teachers in control communities completed the surveys (see Table 1). Comparable rates were achieved in 2007. In this year, one school district in one community refused to participate, so data from this site and its matched control community were not analyzed.

To assess the adoption of classroom-based programs, teachers first had to report that they had taught prevention curricula to students in Grades 5–9, and were then asked whether or not they used each of the tested, effective programs listed on a menu derived from the CTC Prevention Strategies Guide and reviews conducted by the research team. For each program, teachers were presented with the program name, logo (if available), name of the program developer and/or distribution company, and a short description of the program. Program adoption was considered to have occurred when teachers reported delivery of one of these programs in the current school year, excepting those who reported delivery of: 1) universal programs (i.e., programs designed to be taught to all students in a classroom and/or school) to fewer than 10 children (unless they taught special education populations); 2) programs to zero students; or 3) 10 or more programs. If the Olweus Bullying Prevention Program was reported by only one teacher in a school, it was not considered to have been adopted unless confirmed on the Principal Survey (for 2007 only), given that this is a school-wide strategy and usage by only one teacher was considered too large a deviation from the program as designed. Teachers were also asked to report, using open-ended questions, the number of students receiving each identified program.
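To make the teacher-survey exclusion rules concrete, the sketch below shows one way they could be applied when counting a school's adopted programs. It is an illustrative sketch only, not the study's code; the field names are hypothetical.

```python
# Illustrative sketch only: the teacher-survey adoption rules described above,
# applied to one school's reports. Field names are hypothetical.

OLWEUS = "Olweus Bullying Prevention Program"

def school_adoptions(reports: list[dict], principal_confirmed_olweus: bool = False) -> set[str]:
    """Return the set of tested, effective programs counted as adopted in a school."""
    adopted = set()
    olweus_reporters = 0
    for r in reports:
        # Teachers reporting 10 or more programs were excluded.
        if r["n_programs_reported_by_teacher"] >= 10:
            continue
        # Programs reportedly delivered to zero students were excluded.
        if r["n_students"] == 0:
            continue
        # Universal programs taught to fewer than 10 students were excluded,
        # unless the teacher served a special education population.
        if r["is_universal"] and r["n_students"] < 10 and not r["special_education"]:
            continue
        if r["program"] == OLWEUS:
            olweus_reporters += 1  # school-wide program handled after the loop
            continue
        adopted.add(r["program"])
    # Olweus counted only if reported by more than one teacher, or (2007 only)
    # if a single teacher's report was confirmed on the Principal Survey.
    if olweus_reporters > 1 or (olweus_reporters == 1 and principal_confirmed_olweus):
        adopted.add(OLWEUS)
    return adopted
```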

Program Fidelity Measures

Program adopters were asked additional questions to assess implementation fidelity on the 2005 and 2007 Program Interviews and the 2007 Teacher Surveys [5]. Multiple items and constructs were used to measure program adherence, dosage, participant responsiveness, and program oversight (i.e., monitoring and evaluation), with minor differences in items across the four Program Interviews given variation in their requirements and delivery methods. Because teachers were asked about more than 30 programs, and in order to reduce respondent burden, they were asked fewer questions to assess fidelity. In most cases, respondents rated aspects of implementation fidelity using dichotomous (yes/no) responses, which were averaged across all respondents in each community to produce an overall score. Respondents were asked to rate implementation practices occurring during the past year and, if programs were offered multiple times during the year, to consider all program offerings in their responses.

Adherence

Adherence measured the extent to which implementers received required trainings, purchased required materials, and delivered specified program components.

Training

Respondents on the Parent Training and Social Competence Interviews and Teacher Surveys reported whether or not they or program staff had received training in how to deliver the program, either from the program developer or licensed distributor of the curriculum or through a structured training from another training organization. Those who responded affirmatively to either question were coded as having been trained in the program, while those reporting that training consisted only of reading the curriculum prior to leading sessions or being mentored by other staff who had used the curriculum before were not. For Mentoring programs, respondents were asked whether or not mentors received training. For Tutoring programs, respondents had to report that staff received training in teaching skills, lesson planning, and content knowledge (three items).

Materials

The 2007 Parent Training and Social Competence Interviews included two items asking if staff manuals and participant materials had been purchased from the program developer or licensed distributor. Those answering ‘yes’ to each question were coded as having bought the required materials.

Delivery of appropriate content

The Parent Training, Social Competence, and Mentoring Interviews each included items describing content and components that have been identified in tested and effective programs in these areas, and respondents were asked to report whether or not their program included each item. Parent Training interviews included 18 items in 2005 and 16 items in 2007, asking, for example, whether parents were trained in how to give approval and reinforcement of children’s positive behavior, skills to control their own anger, and helping their children to refuse offers to use drugs. Ten items were asked in Social Competence Interviews, with sample items including teaching students skills for making clear requests, refusal skills, and conflict resolution skills. Mentoring Interviews included 12 items asking, for example, whether mentors are screened, have contact with program staff at least once a month, and whether mentors and youth share in deciding how they will spend their time. To calculate the delivery of appropriate content, the number of items the respondent endorsed was divided by the total number of items. This measure was not calculated for Tutoring programs because these types of items were used as screening tools to differentiate effective and non-effective tutoring programs.

Dosage

Dosage measured the percentage of required program sessions actually delivered. For the Social Competence, Parent Training, and Tutoring Interviews and the Teacher Surveys, dosage was based on two items asking the number of sessions in the program and the number of sessions delivered to participants in the past year. For Mentoring programs, dosage was calculated as the number of meetings between mentors and mentees that occurred divided by the number of expected meetings. Dosage could not be calculated for teachers, as too many respondents were still delivering the program at the time of the surveys (mid-spring).
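As a small worked illustration of the two ratio-based scores defined above (delivery of appropriate content and dosage), the following sketch uses hypothetical item and session counts; it does not use study data.

```python
# Minimal worked example of the two ratio-based fidelity scores described above.
# The item and session counts are hypothetical, not study data.

def content_delivered_score(items_endorsed: int, items_total: int) -> float:
    """Delivery of appropriate content: endorsed content items / items asked."""
    return items_endorsed / items_total

def dosage_score(sessions_delivered: int, sessions_required: int) -> float:
    """Dosage: program sessions (or mentor meetings) delivered / sessions required."""
    return sessions_delivered / sessions_required

# A hypothetical Parent Training respondent endorsing 14 of 16 content items and
# delivering 10 of 12 required sessions in the past year:
print(content_delivered_score(14, 16))  # 0.875
print(dosage_score(10, 12))             # ~0.83
```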

Participant Responsiveness

Participant responsiveness was based on attendance information. For the Parent Training and Social Competence Interviews, regular attendance was reported by respondents as the percentage of participants who attended at least two-thirds of the program sessions. For Tutoring programs, respondents reported the percentage of students who completed the program in the past year. For Mentoring programs, the attendance score was based on respondents’ reports of the percentage of all mentor/mentee matches whose relationship had lasted for a year or longer. Teachers did not report attendance.

Program Oversight

Program oversight measured the degree to which program implementation was monitored and evaluated, staff received feedback on their performance, and information about implementation was used to improve the quality of prevention activities.

Fidelity Monitoring

The Program Interviews and 2007 Teacher Surveys asked respondents: Is there a system in place to monitor if the curriculum is being delivered as designed? Follow-up items asked about the specific types of monitoring procedures utilized. All respondents who reported using checklists of program content or observations of program sessions or staff feedback/coaching (on the Parent Training, Social Competence, and Tutoring Interviews and on the Teacher Surveys for the first two items), or site visits or supervision meetings (for Mentoring programs) were coded as having a monitoring system.

Program Evaluation

All Program Interviews and the Teacher Survey asked respondents: Is your program being evaluated or has it been evaluated during the past year? Those who reported on follow-up items using pre/post surveys or archival indicators were coded as having evaluated their program; those who reported less rigorous methods, such as using numbers served or staff written or verbal opinion, were not.

Staff support/Coaching

Parent Training and Social Competence Interviews asked: Is ongoing coaching, facilitation, or support provided for those who deliver the prevention program or practices? Mentoring Interviews asked whether or not mentors had contact with program staff at least once a month, while the Tutoring Interview asked if tutors received supervision by the teacher/coordinator. Those who reported ‘yes’ to these items were coded as providing coaching.

Quality Assurance

In 2007 only, the Parent Training and Social Competence Interviews asked: Are the results of program monitoring and evaluation used to make changes to program delivery or teaching practices? Those who answered affirmatively were considered to engage in quality assurance.

Statistical Procedures

Program adoption was calculated as the total number of tested and effective programs identified by respondents in each of the 24 study communities. The total number of programs adopted in all 12 intervention communities and all 12 control communities was also calculated. To calculate implementation fidelity scores, if multiple respondents in the same community or school indicated implementation of the same program, their responses were averaged to calculate the overall fidelity score for that program. Fidelity scores for each community were calculated by averaging scores across all programs operating in each community, and for CTC versus control communities by averaging scores across intervention conditions. Adoption and fidelity outcomes were also examined within each of the four program types included in Program Interviews, based on averaging scores across all programs in each category. These results are mentioned when they indicated major differences in implementation across program types, but findings were not tabled or discussed in detail given the small number of programs identified within each program type.
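The averaging steps described above (multiple respondents' reports on the same program, then programs within a community, then communities within a condition) can be illustrated with the following sketch. It is illustrative only, not the study's code, and the data frame and values are hypothetical.

```python
# Illustrative sketch only (not the study's code) of the averaging steps described
# above: respondent reports -> program score -> community score -> condition score.
# The input table and values are hypothetical.

import pandas as pd

# Each row is one respondent's dichotomous (1 = yes, 0 = no) report on one program.
responses = pd.DataFrame({
    "condition": ["CTC", "CTC", "CTC", "Control", "Control"],
    "community": ["A",   "A",   "B",   "C",       "C"],
    "program":   ["P1",  "P1",  "P2",  "P1",      "P3"],
    "monitoring": [1,     0,     1,     1,         0],
})

# 1. Average multiple respondents reporting on the same program in a community.
program_scores = responses.groupby(["condition", "community", "program"])["monitoring"].mean()

# 2. Average across all programs operating in each community.
community_scores = program_scores.groupby(level=["condition", "community"]).mean()

# 3. Average community scores within each intervention condition.
condition_scores = community_scores.groupby(level="condition").mean()
print(condition_scores)
```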

Tests of statistical significance were conducted using the Wilcoxon Signed Ranks Test (http://www.fon.hum.uva.nl/Service/Statistics/Signed_Rank_Test.html), a non-parametric test that makes paired comparisons using two-tailed tests. It was chosen to account for the non-normally distributed data and the nesting of programs within communities. This test allowed investigation of the degree of difference in program adoption and fidelity between each intervention community and its matched control community.
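For readers who prefer a scriptable equivalent to the online calculator cited above, the sketch below runs a two-tailed Wilcoxon signed-rank test on hypothetical matched-pair adoption counts using scipy. It illustrates the form of the comparison only; the values are not study data and the code is not the study's software.

```python
# Illustrative sketch only: a two-tailed Wilcoxon signed-rank test on matched
# community pairs using scipy, as a scriptable stand-in for the online calculator
# cited above. The per-pair adoption counts are hypothetical, not study data.

from scipy.stats import wilcoxon

# One value per matched community pair, in the same order in both lists.
ctc_adoption     = [4, 3, 5, 2, 6, 3, 4, 2, 5, 3, 4, 3]
control_adoption = [2, 3, 2, 1, 4, 2, 3, 2, 3, 2, 2, 1]

# Pairs with a zero difference are dropped under the 'wilcox' rule, which is one
# reason the n reported for some comparisons can be smaller than the 12 pairs.
stat, p_value = wilcoxon(ctc_adoption, control_adoption,
                         zero_method="wilcox", alternative="two-sided")
print(stat, p_value)
```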

Rates of missing data were minimal and somewhat higher in control communities compared to intervention communities for the Program Interviews. In 2005, 2.4% of all data were missing in the Program Interviews (1.8% in CTC communities and 3.4% in control communities) and 0.3% were missing on the Teacher Surveys. In 2007, rates of missing data were 6.1% for the Program Interviews (3.7% in intervention communities and 11.1% in control communities) and 0.2% for the Teacher Surveys. Missing data were not included in the analyses.

RESULTS

Adoption of Tested and Effective Programs and Program Participation

Table 2 presents the findings related to the adoption of tested and effective programs in CTC and control communities and the total number of participants served by these programs. As shown, the CTC intervention communities adopted more tested and effective programs compared to control communities at each time point; this difference was statistically significant (p<.05) only in the 2007 Program Interviews. In 2002, respondents in CTC communities reported the adoption of 17 tested and effective programs compared to 11 in control communities; 36 tested and effective programs were reported by intervention communities compared to 24 in control communities in 2005; and 44 tested and effective programs were reported in intervention sites versus 19 in control communities in 2007 [6].

Table 2.

Adoption of and Number of Participants Served by Tested and Effective Programs in the Intervention (CTC) and Control (C) Communities

ADOPTION: Number of tested and effective programs
  Program Interviews, 2002: CTC 17, Control 11 (W+=15.50, W−=5.50, n=6, p<0.32)
  Program Interviews, 2005: CTC 36, Control 24 (W+=48, W−=18, n=11, p<0.21)
  Program Interviews, 2007: CTC 44, Control 19 (W+=60, W−=6, n=11, p<0.05)
  Teacher Surveys, 2005: CTC 56, Control 55 (W+=40, W−=38, n=12, p<0.97)
  Teacher Surveys, 2007: CTC 40, Control 32 (W+=38, W−=17, n=10, p<0.33)

PARTICIPATION: Number of participants served
  Program Interviews, 2002: CTC 3,454, Control 3,333 (W+=36, W−=30, n=11, p<0.83)
  Program Interviews, 2005: CTC 5,522, Control 6,084 (W+=23, W−=32, n=10, p<0.70)
  Program Interviews, 2007: CTC 11,261, Control 3,864 (W+=49, W−=6, n=10, p<0.05)
  Teacher Surveys, 2005: CTC 6,092, Control 9,143 (W+=30, W−=48, n=12, p<0.52)
  Teacher Surveys, 2007: CTC 13,146, Control 3,263 (W+=56, W−=10, n=11, p<0.05)

Note: Scores were summed across all programs in the Intervention (CTC) and Control (C) communities. Significance tests were conducted using the Wilcoxon Signed Ranks Test (http://www.fon.hum.uva.nl/Service/Statistics/Signed_Rank_Test.html); all tests were two-tailed, and differences with p<.05 were considered statistically significant.

Intervention differences in program adoption were greatest for Parent Training programs (results not shown), as more Parent Training interventions were reported compared to any other program type in the intervention communities in 2005 and 2007 (N=13 and N=15, respectively), while they were least likely to be adopted in control communities (N=4 at each time point). According to the Teacher Surveys, the number of tested and effective programs reported in CTC and control communities was similar in 2005 (56 versus 55). Intervention communities implemented more tested and effective programs in schools (N=40) than did control communities (N=32) in 2007, but this difference was not significant.

Regarding program participation, the 2002 and 2005 Program Interviews showed that CTC and control communities delivered tested and effective programs to similar numbers of participants. However, in 2007, the fourth year of the study, participation in tested and effective programs was significantly higher in CTC (N=11,261) than control (N=3,864) sites (see Table 2). The intervention effect was largely due to greater participation in CTC communities and lesser participation in control communities in Social Competence programs in 2007 compared to 2005 (results not shown). Across both conditions, Social Competence and Tutoring programs reached more community members than did Parent Training and Mentoring programs. Teachers in the control communities reported serving a greater number of students with tested and effective school-based programs in 2005 (a non-significant difference), but by 2007, CTC communities served significantly more participants (13,146 students) compared to control communities (3,263 students) [7].

Table 3.

Implementation Fidelity of Tested and Effective Programs in the Intervention (CTC) and Control (C) Communities

ADHERENCE

Staff Training
  Program Interviews, 2005: CTC 89%, Control 72% (W+=26, W−=10, n=8, p<0.32)
  Program Interviews, 2007: CTC 76%, Control 79% (W+=7, W−=8, n=5, p<1.0)
  Teacher Surveys, 2007: CTC 42%, Control 53% (W+=13, W−=32, n=9, p<0.30)
Teacher Manual
  Program Interviews, 2007: CTC 96%, Control 80% (W+=1, W−=0, n=1, p<1.0)
Participant Materials
  Program Interviews, 2007: CTC 92%, Control 100% (no valid comparisons for the Wilcoxon test)
Content Delivered
  Program Interviews, 2005: CTC 92%, Control 70% (W+=27, W−=1, n=7, p<0.05)
  Program Interviews, 2007: CTC 91%, Control 81% (W+=16, W−=5, n=6, p<0.32)

DOSAGE

Lessons Taught
  Program Interviews, 2005: CTC 98%, Control 71% (W+=10, W−=0, n=4, p<0.13)
  Program Interviews, 2007: CTC 97%, Control 90% (W+=3, W−=3, n=3, p<1.0)

PARTICIPANT RESPONSIVENESS

Good Attendance
  Program Interviews, 2005: CTC 86%, Control 82% (W+=34, W−=11, n=9, p<0.21)
  Program Interviews, 2007: CTC 82%, Control 81% (W+=21, W−=15, n=8, p<0.75)

PROGRAM OVERSIGHT

Monitoring System
  Program Interviews, 2005: CTC 90%, Control 75% (W+=18, W−=3, n=6, p<0.16)
  Program Interviews, 2007: CTC 82%, Control 90% (W+=4, W−=11, n=5, p<0.44)
  Teacher Surveys, 2007: CTC 53%, Control 28% (W+=36, W−=9, n=9, p<0.13)
Evaluation System
  Program Interviews, 2005: CTC 90%, Control 82% (W+=4.5, W−=1.5, n=3, p<0.50)
  Program Interviews, 2007: CTC 90%, Control 79% (W+=9, W−=1, n=4, p<0.25)
  Teacher Surveys, 2007: CTC 28%, Control 24% (W+=25.5, W−=29.5, n=10, p<0.85)
Staff Coaching
  Program Interviews, 2005: CTC 97%, Control 76% (W+=15, W−=0, n=5, p<0.07)
  Program Interviews, 2007: CTC 88%, Control 90% (W+=6, W−=9, n=5, p<0.82)
Quality Assurance
  Program Interviews, 2007: CTC 82%, Control 56% (W+=13.50, W−=7.50, n=6, p<0.57)

Note: Implementation fidelity scores were averaged across all programs in the Intervention (CTC) and Control (C) communities. Significance tests were conducted using the Wilcoxon Signed Ranks Test (http://www.fon.hum.uva.nl/Service/Statistics/Signed_Rank_Test.html); all tests were two-tailed, and differences with p<.05 were considered statistically significant. There were no valid comparisons on which to conduct the Wilcoxon significance test for Participant Materials.

Program Fidelity

Adherence

The findings in Table 3 are mixed regarding the degree to which the intervention affected program adherence. Respondents in both conditions reported relatively high compliance in terms of buying the required staff manuals and participant materials. The majority of respondents in the Program Interviews also reported receiving structured training workshops from program developers, but teachers in both conditions were less likely to report training. There was one statistically significant (p<.05) intervention difference, with CTC sites more likely to report delivery of appropriate content according to the 2005 Program Interviews. For the remaining adherence measures, some findings favored CTC communities while others favored control communities, but none of these differences was statistically significant.

Dosage

The Program Interview findings showed that CTC communities delivered a higher dosage in 2005, providing 98% of program sessions (or Mentoring meetings), compared to 71% in control communities (see Table 3); however, this difference was not statistically significant. In 2007, both groups reported high rates of dosage, with 97% of sessions taught in CTC communities and 90% in control sites. Dosage outcomes were similar across the four program types, although the results (not shown) were more difficult to obtain for Mentoring programs, as many respondents reported that they did not know or refused to report how often matches were meeting.

Participant Responsiveness

Program attendance was nearly the same in CTC and control communities according to 2005 and 2007 Program Interviews (see Table 3), with over 80% of participants reported as attending regularly. Attendance rates were similar and high across program types at each time point, with at least 60% of participants attending regularly (results not shown).

Program Oversight

Relatively high rates of program oversight were reported in both CTC and control communities. Results generally favored the CTC sites, although none of the differences were statistically significant (p<.05): a greater percentage of respondents in the CTC communities reported monitoring program implementation, evaluating the results of program implementation, providing staff with supervision and coaching, and using information about implementation to make changes and improve the quality of delivery of programs (see Table 3). The outcome relating to staff coaching was marginally significant (p<.10) in the 2005 Program Interviews, with CTC communities more likely to provide staff with regular coaching. Program oversight was less likely to be reported by teachers in each condition: only about one-fourth of teachers reported using pre/post surveys to evaluate program effectiveness, and only 28% of teachers in control sites and 53% of teachers in CTC communities reported having monitoring systems.

DISCUSSION

This paper investigated the degree to which use of the Communities That Care system enhanced program adoption and fidelity in communities participating in a randomized trial. CTC emphasizes the selection, adoption, and high-quality delivery of programs that have been scientifically evaluated and shown to be effective in reducing rates of adolescent problem behaviors (Hawkins et al., 2002), and it was hypothesized that communities using the CTC system would have higher rates of adoption and implementation fidelity compared to control communities.

The results indicated significant positive effects of CTC on the adoption of tested, effective programs in 2007, four years after the CTC system was initiated, with CTC communities implementing a greater number of tested and effective Parent Training, Social Competence, Mentoring, and Tutoring programs compared to control communities. In addition, significantly more participants were reached by school- and community-based programs in 2007 in CTC communities than in control communities.

The data collected through the Community Resource Documentation did not, however, demonstrate intervention differences in rates of implementation fidelity. According to the Program Interviews, communities in both conditions reported good to very good levels of implementation fidelity. Only one aspect of fidelity showed statistically significant (p<.05) differences across conditions: programs operating in the CTC communities were reported to deliver more appropriate content compared to programs in the control communities. Program oversight was generally higher among CTC sites, particularly regarding the provision of staff support/coaching and in monitoring intervention processes, but not significantly so.

Across conditions, the results indicated lower rates of implementation fidelity in the school-based compared to community-based programs. Although these findings are consistent with research reporting poor implementation of school-based prevention activities (Gottfredson et al., 2002; Hallfors et al., 2002), few studies have assessed the degree to which schools have unique challenges that make fidelity more difficult compared to other contexts. An exception is work by Dariotis and colleagues (2008), who found that, compared to community agencies, schools had more difficulty cultivating champions and prioritizing prevention activities. Additional research identifying barriers to and facilitators of implementation fidelity across program types could help promote effective community-based prevention programming.

The current findings are both consistent with and different from our previous publications describing the results of a process evaluation focused on the 12 CTC communities (Fagan et al., 2008; Fagan, Hanson et al., 2009). The previous work was based on frequent communication with the intervention sites and collection of detailed data regarding the implementation fidelity of tested and effective programs funded by the research study. Those data are consistent with the findings from the current CRD surveys that the CTC communities had significantly greater adoption and participation in tested and effective programs in 2007 but not 2005 relative to the control condition. Most intervention communities initially adopted only 1 or 2 new programs when study funding was first made available in 2004–2005, then added programs (which also increased participation) over time, as they became more successful with the CTC model. For example, only 5 of the 12 CTC communities adopted school curricula in 2004–2005 using study funding, but all had done so by the end of the project (Fagan, Brooke-Weiss, Cady et al., 2009).

The high rates of implementation fidelity seen in this study among the 12 intervention sites are consistent with the prior process evaluation (Fagan, Hanson et al., 2009). However, the high rates of fidelity reported in control sites are surprising, given that other research has suggested that implementation tends to be poor when programs are locally replicated without the provision of external oversight and technical assistance (Glasgow et al., 2003; Gottfredson et al., 2002; Hallfors et al., 2002). It is possible that practitioners in control communities have also become aware of the need to adhere to program guidelines and monitor prevention practices. Alternatively, the results may reflect social desirability (Lillehoj et al., 2004; Melde et al., 2006), given that questions regarding fidelity were self-reported by program staff. It is also possible that some of the items assessing fidelity were too general to capture true differences in the quality of implementation between sites. For example, items asked whether or not and how staff were trained, but questions did not assess if every implementer received training.

These issues underscore the difficulty in measuring the implementation fidelity, as well as the adoption, of a variety of tested and effective programs across multiple communities. Few models have been developed to do so, particularly systems that can produce valid and reliable data while minimizing human and financial costs. Although our process evaluation of CTC implementation in the intervention sites employed multiple and detailed measures of program fidelity (e.g., observations of sessions and self-reported information collected during implementation), such methods were not feasible for the CRD-based analyses of the prevalence of tested and effective prevention programs in both CTC and control communities, given the number of agencies and potential prevention practices operating across these sites. Given the costs associated with the Community Resource Documentation, it could only be administered periodically. As a result, it could not capture changes occurring in intervening years. Finally, the CRD asked respondents to report retrospectively on program adoption and implementation in the past year, which may have affected the reliability of the data.

Despite these limitations, the primary purpose of the CRD was to provide a ‘snapshot’ of implementation activities at given points during the intervention and to assess differences in these activities between intervention and control conditions. The methodology was adequate for achieving these aims. Other strengths of the CRD methodology are its use of a three-tiered sampling design that attempted to survey all individuals and agencies that provided four types of prevention services to the specified age group. It utilized a combination of mail, phone, and web surveys, administered to those considered most knowledgeable about program implementation, and attempted to verify information using follow-up questions and multiple informants when possible. We found that all of the programs funded by the study in CTC communities were identified through the CRD Program Interviews and all but one were identified through the Teacher Surveys, which provides some confidence in the validity of the findings.

Methods for measuring and improving the dissemination and high quality replication of tested and effective prevention programs are generally lacking (Flay, Biglan, Boruch et al., 2005; Rohrbach et al., 2006; Sloboda et al., 2008), and the findings from this study help to advance the field of prevention science in several ways. The CRD methodology offers a potential means of measuring the implementation of a range of evidence-based programs when they are widely disseminated. Accurate measurement of the adoption of tested and effective prevention practices is necessary in order to document the degree to which such services are being used in communities, assess the quality of their delivery, identify factors related to program adoption and fidelity, and evaluate the potential for contamination during research trials. The findings from this study also indicate that use of the CTC system can help to increase the dissemination and widespread use of tested and effective programs in communities, which, in turn, increases the potential to promote healthy youth development and prevent the development of problem behaviors among young people community-wide.

Acknowledgments

This work was supported by research grants from the National Institute on Drug Abuse (R01 DA10768-01A1 and R01 DA015183-01A1) with co-funding from the National Cancer Institute, the National Institute of Child Health and Human Development, the National Institute of Mental Health, and the Center for Substance Abuse Prevention.

The authors gratefully acknowledge contributions made to this paper by Eric Brown and other members of the Community Youth Development Study research team, as well as by residents of the 24 communities participating in the study.

Footnotes

[1] At the conclusion of the Diffusion Project, 13 pairs of communities were identified as eligible for the Community Youth Development Study, and 12 pairs agreed to participate; see Hawkins et al. (2008) for details.

[2] Coalitions eligible for inclusion were those defined as “a group of community leaders, service providers, and/or community residents representing different organizations and sectors of the community that meet to plan, coordinate, and integrate the community’s prevention activities.” The 2005 and 2007 coalition interviews in the intervention communities included the 12 coalitions that were trained in the CTC system, as well as any other coalitions operating in these communities. Most control communities also had prevention-oriented coalitions, though their characteristics tended to differ from CTC coalitions (Arthur, Hawkins, Brown et al., 2010).

[3] A list of the programs included on the Program Interviews and in the Teacher Surveys is available upon request from Blair Brooke-Weiss (bbrooke@u.washington.edu) at the Social Development Research Group.

[4] If multiple respondents from the same community reported implementation of the same program, the program was counted only once in the adoption measure.

[5] Fidelity scores were not calculated for the 2002 Program Interview or the 2005 Teacher Survey data, given that fewer items assessing fidelity were included on these surveys and because only a few programs were identified in each of the four Program Interviews conducted in 2002.

[6] These figures represent the total number of programs reported in the year prior to the survey administration. Given the possibility that programs identified at earlier time points were also identified at later time points, adding these results across the three times could over-estimate the total number of programs implemented in communities. The same is true of the program participation results.

[7] Respondents on the Social Competence Interviews could have reported adoption of the same programs as teachers, given that community/school collaborations did occur; thus, adding up the participation rates across the Program Interviews and Teacher Surveys would over-estimate the total number of residents served.

References

  1. Arthur MW, Glaser RR, Hawkins JD, Peters RD, Leadbeater B, McMahon RJ. Steps towards community-level resilience: Community adoption of science-based prevention programming. In: Peters RD, Leadbeater B, McMahon RJ, editors. Resilience in children, families, and communities: Linking context to practice and policy. New York, NY: Kluwer Academic/Plenum Publishers; 2005.
  2. Arthur MW, Hawkins JD, Brown EC, Briney JS, Oesterle S, Abbott RD. Implementation of the Communities That Care prevention system by coalitions in the Community Youth Development Study. Journal of Community Psychology. 2010;38:245–258. doi: 10.1002/jcop.20362.
  3. Dariotis JK, Bumbarger BK, Duncan LG, Greenberg M. How do implementation efforts relate to program adherence? Examining the role of organizational, implementer, and program factors. Journal of Community Psychology. 2008;36:744–760.
  4. Durlak JA, DuPre EP. Implementation matters: A review of the research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology. 2008;41:327–350. doi: 10.1007/s10464-008-9165-0.
  5. Dusenbury L, Brannigan R, Falco M, Hansen WB. A review of research on fidelity of implementation: Implications for drug abuse prevention in school settings. Health Education Research. 2003;18:237–256. doi: 10.1093/her/18.2.237.
  6. Elliott DS, Mihalic S. Issues in disseminating and replicating effective prevention programs. Prevention Science. 2004;5:47–53. doi: 10.1023/b:prev.0000013981.28071.52.
  7. Fagan AA, Brooke-Weiss B, Cady R, Hawkins JD. If at first you don’t succeed … keep trying: Strategies to enhance coalition/school partnerships to implement school-based prevention programming. Australian and New Zealand Journal of Criminology. 2009;42:387–405. doi: 10.1375/acri.42.3.387.
  8. Fagan AA, Hanson K, Hawkins JD, Arthur MW. Bridging science to practice: Achieving prevention program fidelity in the community youth development study. American Journal of Community Psychology. 2008;41:235–249. doi: 10.1007/s10464-008-9176-x.
  9. Fagan AA, Hanson K, Hawkins JD, Arthur MW. Implementing effective community-based prevention programs in the community youth development study. Youth Violence and Juvenile Justice. 2008;6:256–278.
  10. Fagan AA, Hanson K, Hawkins JD, Arthur MW. Translational research in action: Implementation of the communities that care prevention system in 12 communities. Journal of Community Psychology. 2009;37:809–829. doi: 10.1002/jcop.20332.
  11. Fixsen DL, Naoom SF, Blase KA, Friedman RM, Wallace F. Implementation research: A synthesis of the literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network; 2005. (FMHI Publication #231).
  12. Flay BR, Biglan A, Boruch RF, Castro FG, Gottfredson DC, Kellam SG, et al. Standards of evidence: Criteria for efficacy, effectiveness, and dissemination. Prevention Science. 2005;6:151–176. doi: 10.1007/s11121-005-5553-y.
  13. Glasgow RE, Lichtenstein E, Marcus AC. Why don’t we see more translation of health promotion research to practice? Rethinking the efficacy-to-effectiveness transition. American Journal of Public Health. 2003;93:1261–1267. doi: 10.2105/ajph.93.8.1261.
  14. Gottfredson DC, Gottfredson GD. Quality of school-based prevention programs: Results from a national survey. Journal of Research in Crime and Delinquency. 2002;39:3–35.
  15. Gottfredson DC, Kumpfer K, Polizzi-Fox D, Wilson D, Puryear V, Beatty P, et al. The Strengthening Families Washington D.C. Families project: A randomized effectiveness trial of family-based prevention. Prevention Science. 2006;7:57–73. doi: 10.1007/s11121-005-0017-y.
  16. Griner Hill L, Maucione K, Hood BK. A focused approach to assessing program fidelity. Prevention Science. 2006;8:25–34. doi: 10.1007/s11121-006-0051-4.
  17. Hallfors D, Godette D. Will the “Principles of effectiveness” improve prevention practice? Early findings from a diffusion study. Health Education Research. 2002;17:461–470. doi: 10.1093/her/17.4.461.
  18. Hawkins JD, Catalano RF. Communities That Care prevention strategies guide. South Deerfield, MA: Channing Bete Company, Inc; 2004.
  19. Hawkins JD, Catalano RF, Arthur MW. Promoting science-based prevention in communities. Addictive Behaviors. 2002;27:951–976. doi: 10.1016/s0306-4603(02)00298-8.
  20. Hawkins JD, Catalano RF, Arthur MW, Egan E, Brown EC, Abbott RD, et al. Testing Communities That Care: Rationale and design of the Community Youth Development Study. Prevention Science. 2008;9:178–190. doi: 10.1007/s11121-008-0092-y.
  21. Johnson SL. Improving the school environment to reduce school violence: A review of the literature. Journal of School Health. 2009;79:451–465. doi: 10.1111/j.1746-1561.2009.00435.x.
  22. Komro K, Perry CL, Veblen-Mortenson S, Farbakhsh K, Toomey TL, Stigler MH, et al. Outcomes from a randomized controlled trial of a multi-component alcohol use preventive intervention for urban youth: Project Northland Chicago. Addiction. 2008;103:606–618. doi: 10.1111/j.1360-0443.2007.02110.x.
  23. Kumpfer KL, Alvarado R. Family-strengthening approaches for the prevention of youth problem behaviors. American Psychologist. 2003;58:457–465. doi: 10.1037/0003-066X.58.6-7.457.
  24. Lillehoj CJ, Griffin KW, Spoth R. Program provider and observer ratings of school-based preventive intervention implementation: Agreement and relation to youth outcomes. Health Education and Behavior. 2004;31:242–257. doi: 10.1177/1090198103260514.
  25. Melde C, Esbensen FA, Tusinski K. Addressing program fidelity using onsite observations and program provider descriptions of program delivery. Evaluation Review. 2006;30:714–740. doi: 10.1177/0193841X06293412.
  26. National Research Council and Institute of Medicine. Preventing mental, emotional, and behavioral disorders among young people: Progress and possibilities. Committee on the Prevention of Mental Disorders and Substance Abuse Among Children, Youth, and Young Adults: Research Advances and Promising Interventions. Washington, DC: Board on Children, Youth, and Families, Division of Behavioral and Social Sciences and Education, The National Academies Press; 2009.
  27. Polizzi Fox D, Gottfredson DC, Kumpfer KL, Beatty P. Challenges in disseminating model programs: A qualitative analysis of the Strengthening Washington D.C. Families program. Clinical Child and Family Psychology Review. 2004;7:165–176. doi: 10.1023/b:ccfp.0000045125.68018.61.
  28. Printz RJ, Sanders MR, Shapiro CJ, Whitaker DJ, Lutzker JR. Population-based prevention of child maltreatment: The US Triple P system population trial. Prevention Science. 2009;10:1–12. doi: 10.1007/s11121-009-0123-3.
  29. Ringwalt C, Hanley S, Vincus AA, Ennett ST, Rohrbach LA, Bowling JM. The prevalence of effective substance use prevention curricula in the nation’s high schools. Journal of Primary Prevention. 2008;29:479–488. doi: 10.1007/s10935-008-0158-4.
  30. Ringwalt C, Vincus AA, Hanley S, Ennett ST, Bowling JM, Haws S. The prevalence of evidence-based drug use prevention curricula in U.S. middle schools in 2008. Prevention Science. 2010. doi: 10.1007/s11121-010-0184-3.
  31. Rohrbach LA, Grana R, Sussman S, Valente TW. Type II translation: Transporting prevention interventions from research to real-world settings. Evaluation and the Health Professions. 2006;29:302–333. doi: 10.1177/0163278706290408.
  32. Saul J, Duffy J, Noonan R, Lubell K, Wandersman A, Flaspohler P, et al. Bridging science and practice in violence prevention: Addressing ten key challenges. American Journal of Community Psychology. 2008;41:197–205. doi: 10.1007/s10464-008-9171-2.
  33. Saul J, Wandersman A, Flaspohler P, Duffy J, Lubell K, Noonan R. Research and action for bridging science and practice in prevention. American Journal of Community Psychology. 2008;41:165–170. doi: 10.1007/s10464-008-9169-9.
  34. Sloboda Z, Pyakuryal A, Stephens PC, Teasdale B, Forrest D, Stephens RC, et al. Reports of substance abuse prevention programming available in schools. Prevention Science. 2008;9:276–287. doi: 10.1007/s11121-008-0102-0.
