Published in final edited form as: Eval Program Plann. 2014 Feb 14;44:89–97. doi: 10.1016/j.evalprogplan.2014.02.002

Implementation Assessment of Widely Used but Understudied Prevention Programs: An Illustration from the Common Sense Parenting Trial

Robert G Oats a,d, Wendi F Cross b, W Alex Mason a, Mary Casey-Goldstein c, Ronald W Thompson a, Koren Hanson c, Kevin P Haggerty c
PMCID: PMC4073790  NIHMSID: NIHMS576101  PMID: 24632185

Abstract

Common Sense Parenting is a parent-training program that is widely disseminated, has promising preliminary support, and is being tested in a randomized controlled trial that targets lower-income, urban 8th-grade students and their families (recruited in two annual cohorts) to improve the transition to high school. The workshop-based program is being tested in both standard 6-session (CSP) and modified 8-session (CSP Plus) formats; CSP Plus adds adolescent-skills training activities. To offer a comprehensive picture of implementation outcomes in the CSP trial, we describe the tools used to assess program adherence, quality of delivery, program dosage, and participant satisfaction, and report the implementation data collected during the trial. Results indicated that workshop leaders had high adherence to the program content and manual-stated goal times of the CSP/CSP Plus curriculum and delivered the intervention with high quality. The majority of intervention families attended some or all of the sessions. Participant satisfaction ratings for the workshops were high. There were no significant cohort differences for adherence, quality, and dosage; however, there were significant cohort improvements for participant satisfaction. Positive fidelity results may be due to the availability of detailed workshop leader guides, in addition to ongoing training and supervision, which included performance-based feedback.

Keywords: parent-training, implementation, fidelity, assessment, program adherence, quality of delivery, program dosage, participant satisfaction


The assessment of program implementation is a critical consideration in randomized prevention trials (Snyder et al., 2006). Without an assurance that preventive interventions are being implemented properly, judgments about the potential causal impact of tested programs are compromised. Research demonstrates that better outcomes are achieved when evidence-based preventive interventions are implemented as designed (Durlak & DuPre, 2008; Dusenbury, Brannigan, Falco, & Hansen, 2003). Moreover, poor implementation is one reason why positive intervention outcome findings from many tightly controlled efficacy trials have not been replicated in effectiveness trials conducted under real world conditions (Cross & West, 2011). Fortunately, the collection of at least some implementation data has become commonplace in prevention trials. However, there is considerable variability in the type and amount of implementation data that are collected and reported by researchers (Chaudoir, Dugan, & Barr, 2013), even in well-designed efficacy studies.

Most prevention trials have been developed as part of a traditional evaluation research cycle that moves from efficacy to effectiveness to dissemination studies (Flay, 1986), the latter of which examine methods to scale up supported programs for widespread distribution and implementation. This traditional evaluation research cycle has come under scrutiny in recent years, partly because it has encouraged a focus on dissemination only as an afterthought to efficacy and effectiveness studies. As a consequence, a science-to-practice gap has emerged (Institute of Medicine (U.S.) Committee on Crossing the Quality Chasm: Adaptation to Mental Health and Addictive Disorders, 2006) characterized by a proliferation of efficacious and effective programs that are not being widely disseminated and implemented for public health impact (Glasgow et al., 2012). Mason and colleagues (2013) outline one framework for helping to close the science-to-practice gap by selecting promising preventive interventions that are already being used within community settings and rigorously testing those interventions (Rotheram-Borus & Duan, 2003). The goal of the framework is to promote expanded use of promising community interventions, if supported by rigorous testing, by building on existing dissemination and implementation infrastructures and resources (Curran, Bauer, Mittman, Pyne, & Stetler, 2012).

Testing community programs that are already in use presents challenges to evaluators with regard to implementation assessment. Due to practical constraints, providers typically lack the rigorous implementation assessment tools and procedures that are called for in prevention trials. For example, it can be overly burdensome and cost-prohibitive for providers to collect and analyze observational data of their program implementation efforts on a routine basis. Thus, before embarking on tests of these programs, implementation measurement development work often is needed. Such work is critical within the context of testing a program that is already in use. To ensure external validity while maximizing internal validity (Glasgow, Vogt, & Boles, 1999), it is essential to have an assurance that tested programs are being implemented within experimental trials in a manner that is consistent with how the programs are manualized and offered in routine practice within community settings.

This article describes both the implementation assessment procedures used in the Common Sense Parenting (CSP) trial and the implementation data collected during the intervention phase of the project. The CSP trial is an experimental test of the widely-used but understudied CSP program (Burke, Herron, & Barnes, 2006). CSP is a workshop-based parent-training program developed at and provided by Boys Town, a national service provision organization. In 2012, CSP served 3,509 children from 1,756 families across 11 Boys Town sites. Since 2004, Boys Town has provided CSP training-of-trainers to agencies in four countries and 28 states. Having accrued positive support in prior small scale, non-experimental and quasi-experimental studies (Griffith, 2010; Thompson, Ruma, Schuchmann, & Burke, 1996; Thompson, Grow, Ruma, Daly, & Burke, 1993; Thompson, Ruma, Brewster, Besetsney, & Burke, 1997), CSP is an ideal candidate for further testing in a larger experimental trial. Given the importance of implementation assessment (Durlak & DuPre, 2008), the primary goal of this article is to provide an illustration that can serve as a model for both the collection and reporting of implementation data when rigorously testing promising preventive interventions that are already in use.

Implementation refers to the various considerations involved in the delivery of a program within a given setting (Durlak & DuPre, 2008). Dane and Schneider (1998) outlined five such considerations or dimensions of implementation, including (1) adherence to key program components, (2) competence or quality of delivery, (3) amount of the program delivered or dosage, (4) interest in the program or participant responsiveness/satisfaction, and (5) uniqueness from other programs or differentiation. Of these five implementation dimensions, the first four can be readily measured at some point during the course of a prevention trial and have received the most attention. To offer a comprehensive picture of implementation outcomes (Proctor et al., 2011) in the CSP trial, we describe the tools used to assess program adherence, quality of delivery, program dosage, and participant satisfaction, and report the implementation data collected during the trial. Importantly, implementation data were collected from multiple sources, including objective observational coding of videotaped program sessions by individuals not involved in CSP delivery. This type of multimethod implementation assessment has become a hallmark of rigorous program evaluations (Fisher & Chamberlain, 2000; Schoenwald, Henggeler, Brondino, & Rowland, 2000).

Study Hypotheses

A unique feature of the CSP trial is that participants were recruited in two annual cohorts, with intervention implementation occurring for each respective cohort in the initial two years of the ongoing project. This design feature provides an opportunity to compare implementation outcomes across the two cohorts (Breitenstein et al., 2010b). Strict training protocols were implemented to standardize program delivery and ensure a high degree of implementation, and efforts were made to retain CSP workshop leaders throughout the intervention phase of the project. Ongoing supervision and coaching occurred, and performance-based feedback was provided throughout the intervention phases of the study and between cohorts. Still, variations in the levels of program adherence, quality, dosage, and participant satisfaction are expected. Moreover, it is anticipated that implementation may have improved over time with experience and ongoing training, although such improvements are expected to have been modest given the high implementation standards established at the outset and maintained throughout the intervention phase of this efficacy trial.

Method

Participants

Families participating in the CSP trial include a target parent and a target adolescent, in which the adolescent is an eighth grade student attending one of five Tacoma public middle schools. Over the course of two academic years (2010–2011 and 2011–2012), 321 families were recruited into the study through classroom visits and presentations at school-wide events for students and their parents. Families completed pretesting and were randomly assigned to one of three conditions: a standard CSP program condition (n = 118), a modified CSP Plus program condition (n = 95), or a control condition (n = 108). A small majority of parents in the two intervention conditions (n = 213) were Caucasian (52%), whereas 26% were African American, 8% were Hispanic, 4% were Asian, 4% were Pacific Islander, 1% were Native American, and 5% were mixed race. Most parents were female (i.e., mothers or maternal caregivers), and their average age at the outset of the study was 40.65 years (SD = 7.69). Forty-one percent of the parents reported annual incomes below $24,000 for their families and 60% received food stamps. The mean age of students at enrollment was 13.46 years (SD = .53); 55% of 8th-graders were female.

In the coming years, complete posttest, 1-year follow-up, and 2-year follow-up data will be collected from participating parents and adolescents to address the primary aims of the study, which are to test the efficacy of CSP and CSP Plus for improving parenting and family relationship quality and reducing risks for substance use and related problem behaviors in the high school years. Analyses of those longitudinal data will be reported in future articles. The focus in the current article is on the procedures used to deliver the interventions and collect implementation data in the CSP and CSP Plus program conditions. All study procedures, including those for obtaining consent/assent, were approved by the human subjects review committees at the University of Washington and Father Flanagan’s Boys’ Home (aka Boys Town).

Interventions

CSP is theoretically grounded in the Teaching Family Model (Fixsen, Phillips, & Wolf, 1973; Minkin et al., 1976), and draws from social learning principles (Bandura, 1977) as well as from social interaction theory (Patterson, Reid, & Dishion, 1992) and coercion theory (Patterson, 1982; Snyder, Edwards, McGraw, Kilgore, & Holton, 1994). The program is fully manualized (Burke et al., 2006) and has components that are found in many existing, evidence-based parenting interventions (Barth et al., 2005; Kaminski, Valle, Filene, & Boyle, 2008). In a series of six weekly, 2-hour group workshops, parents learn and practice skills that address issues of communication, discipline, decision making, relationships, self-control, and school success to promote positive behavior and teach alternatives to problem behavior. CSP sessions are structured according to six fundamental learning activities: introduction (Session 1), review (Sessions 2–6), instruction, modeled examples, skill practice, and summary. Within each session, as outlined in the manual, workshop leaders are expected to spend a prescribed amount of time on each learning activity (e.g., 10 minutes for review), with the largest blocks of time dedicated to modeled examples and skill practice, which are viewed as being the primary active ingredients of the program.

CSP was designed originally for parents of children between ages 6 and 16 years. Because this is a wide developmental range and the program often is used with parents of teenagers, CSP has been modified to focus more on early adolescent development. The 8-session modified program, called CSP Plus, adds two new sessions, one at the beginning and another at the end of the standard CSP program, based on materials from an established curriculum known as Stepping Up To High School (SUTHS), developed as a booster session for the Raising Healthy Children project (Brown, Catalano, Fleming, Haggerty, & Abbott, 2005). Each new session asks parents to attend with their early adolescent-aged children, and includes content focused on preparing for a positive transition to high school and a successful move toward independence. Table 1 provides a brief overview of the CSP/CSP Plus program content.

Table 1.

CSP/CSP Plus Session Content

| Session | Participants | Title | Content |
| --- | --- | --- | --- |
| CSP +1 | Parent & Teen | Planning for Success | Opportunities for high school success; parent and teen check-ups; setting goals for high school |
| CSP 1 | Parent only | Parents are Teachers | Effective discipline; describing children’s behaviors; using consequences to change behaviors |
| CSP 2 | Parent only | Encouraging Good Behavior | Giving kids reasons; using Effective Praise to increase positive behaviors |
| CSP 3 | Parent only | Preventing Problems | Teaching social skills to children; using Preventive Teaching to set children up for success |
| CSP 4 | Parent only | Correcting Problem Behavior | Staying calm; using Corrective Teaching to stop problem behaviors and teach alternative behaviors |
| CSP 5 | Parent only | Teaching Self-Control | Safe home plans; using Teaching Self-Control when children are not cooperating |
| CSP 6 | Parent only | Putting It All Together | Holding family meetings; establishing family routines and traditions; developing a parenting plan for using all the Common Sense Parenting skills |
| CSP +8 | Parent & Teen | Letting Loose without Letting Go | Opportunities for independence; trust and freedom; coaching decision-making with teens |

Note: Standard CSP intervention is CSP Sessions 1–6. CSP+ intervention begins with Session +1, continues with CSP Sessions 1 – 6, and ends with Session +8. Each CSP/CSP+ session is 2 hours in duration. CSP = Common Sense Parenting.

Intervention Training Protocols

Thirteen workshop leaders were hired and trained to conduct CSP/CSP Plus workshops: nine in the first year of recruitment and an additional four in the second year. There were 12 female workshop leaders (8 Caucasian, 3 African American, and 1 Hispanic) and one male Caucasian workshop leader. Eight of the workshop leaders were currently parenting or had previously parented teens, three were parents of younger children, and one was not yet a parent. All workshop leaders had experience working with teens.

At the beginning of each year of program implementation, all newly hired workshop leaders attended a 3-day CSP training led by trainers from Boys Town. The goal of the training was to provide workshop leaders with the background knowledge and practical experience needed to deliver the CSP curriculum with a high degree of quality and faithfulness to the program manual. During the first two days of training, group leaders reviewed each of the six sessions of CSP, with time dedicated on both days to practicing the core skills and strategies from the six workshop sessions. The third day focused on conducting effective role plays, with time dedicated to practicing how to lead role-play situations with parents. Workshop leaders had an opportunity to present a portion of the curriculum to others and to receive feedback from the Boys Town certified trainers. Workshop leaders also were provided with tips about organizing and managing a class. Eleven of the thirteen workshop leaders hired for the project successfully completed the training and became certified to lead CSP workshops. All but one of the workshop leaders hired in the first year of the project returned to conduct workshops in the second year (one moved out of the area between the two recruitment years).

After successfully completing the CSP training, five group leaders assigned to conduct the CSP Plus workshops received two additional training sessions, one for each of the two newly developed CSP Plus sessions. During each of these four-hour trainings, workshop leaders had a chance to review a two-hour session and work through all of the activities including role playing practice.

Supervision

After initial training and certification, all workshop leaders participated in three additional group meetings for ongoing training and support. These meetings were led by the project’s intervention coordinator, a certified CSP trainer with a master’s degree and extensive parent training and supervisory experience. During these meetings, the intervention coordinator reviewed challenging learning activities such as modeled examples and skill practice, in addition to discussing methods for improving presentation skills in general. Workshop leaders also received individual supervision from the intervention coordinator where they were given performance feedback and suggestions for improvement based upon observations of in-person and videotaped sessions.

Implementation Data Collection Procedures and Instruments

We collected data to reflect four major dimensions of implementation as described by Dane and Schneider (1998): program adherence, quality of delivery, program dosage, and participant satisfaction. Implementation data were collected primarily via observational coding of videotaped group workshops, but also using self-reports from workshop leaders and participants as well as standard documentation procedures (e.g., attendance records). Implementation observation rating forms were developed for the original six CSP sessions based upon the CSP Trainer’s Guide (Burke, Schuchmann, & Barnes, 2006) and for the two CSP Plus sessions based upon the SUTHS curriculum (Haggerty, Casey-Goldstein, & Barber, 2000a; Haggerty, Casey-Goldstein, & Barber, 2000b). Forms were refined through consultation with CSP and SUTHS program developers, who helped identify aspects of sessions – and develop corresponding items – that were essential to core intervention delivery. As described below, the final implementation observation rating forms included a checklist of learning activities to document adherence to key program components, as well as items to assess the quality of program delivery.

Workshop leaders conducted 141 sessions across two cohorts; there were 48 sessions in Cohort One and 93 sessions in Cohort Two. All sessions were videotaped. Blocking by session and workshop leader, we randomly selected 38 (27%) of the videotaped sessions for implementation assessment. Two experienced and certified Boys Town CSP trainers independently rated 14 (29%) sessions in Cohort One and 24 (26%) sessions in Cohort Two for adherence and quality.
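
The blocked random selection of sessions for coding can be sketched as follows; the session inventory, block structure, and sampling fraction here are illustrative assumptions rather than the project's actual records.

```python
import random
from collections import defaultdict

def sample_sessions(sessions, fraction=0.27, seed=1):
    """Draw a random subsample of videotaped sessions, blocking by
    (session number, workshop leader) so every block is represented.
    Small blocks contribute at least one session, so the realized
    sampling fraction can exceed the nominal target."""
    rng = random.Random(seed)
    blocks = defaultdict(list)
    for s in sessions:
        blocks[(s["session"], s["leader"])].append(s)
    sampled = []
    for block in blocks.values():
        k = max(1, round(len(block) * fraction))
        sampled.extend(rng.sample(block, k))
    return sampled

# Hypothetical inventory of videotaped sessions (not the project's records).
inventory = [{"session": n, "leader": l, "cohort": c}
             for c in (1, 2) for n in range(1, 7) for l in range(1, 7)]
coded = sample_sessions(inventory)
print(f"Selected {len(coded)} of {len(inventory)} sessions for coding")
```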

Program Adherence

Program adherence, regarding both content and time allocation, was measured using implementation observation rating forms developed for each session. Each form contained several items (average of 28 items per form, range = 18 – 41) describing the six primary session learning activities, including: (1) introducing the CSP/CSP Plus approach used to address parenting skill development and providing an overview of subsequent session topics (first session only), (2) reviewing skills taught in the previous session (after first session), (3) instructing parents in new skills, (4) viewing and discussing videotaped modeled examples of the new skill, (5) practicing new skills using role-playing exercises with performance feedback, and (6) summarizing the session. Content adherence, the degree to which the workshop leader adhered to the core components of the CSP/CSP Plus curriculum, was defined as a rating of 1 or 2, from both raters, using the following 3-point scale: 1 = Yes, workshop leader fully adhered to the task; 2 = Partial, workshop leader partially adhered to the task; 3 = No, workshop leader did not adhere to the task. Overall inter-rater agreement was 96%, as calculated by dividing the rating agreements by the sum of rating agreements and disagreements.
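
As a minimal sketch of the agreement calculation described above (agreements divided by the sum of agreements and disagreements), assuming the two raters' item-level adherence codes are stored as parallel lists:

```python
def percent_agreement(rater1, rater2):
    """Overall inter-rater agreement: proportion of checklist items on
    which both raters gave the same code (1 = yes, 2 = partial, 3 = no)."""
    if len(rater1) != len(rater2):
        raise ValueError("Both raters must code the same items.")
    agreements = sum(a == b for a, b in zip(rater1, rater2))
    return agreements / len(rater1)

# Illustrative codes for one session's checklist items (not study data).
coder_a = [1, 1, 2, 1, 1, 3, 1, 1, 1, 1]
coder_b = [1, 1, 2, 1, 1, 1, 1, 1, 1, 1]
print(f"Agreement: {percent_agreement(coder_a, coder_b):.0%}")  # 90%
```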

As noted, each CSP/CSP Plus session is two hours in duration, and goal times are established for the various learning activities to help ensure that all of the session content is adequately reviewed. Time adherence was measured by calculating the difference between the actual time spent and the goal time, both for each learning activity and for the session overall. A value of zero indicates that the actual time matched the goal time; a negative value indicates the actual time was less than the goal time, and a positive value indicates the actual time was more than the goal time. Time adherence data were consistently collected by one coder, thus analyses are based only upon this coder’s data; the objective nature of the time-stamp data made double coding unnecessary for time adherence calculation.
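
A small worked example of this computation (actual minus goal time, so negative values indicate an activity ran short); the goal and actual times below are hypothetical, not the manual's published allocations:

```python
# Hypothetical goal and actual times (in minutes) for one 2-hour session.
goals   = {"review": 10, "instruction": 25, "modeled examples": 30,
           "skill practice": 45, "summary": 10}
actuals = {"review": 12, "instruction": 27, "modeled examples": 34,
           "skill practice": 36, "summary": 6}

# Time adherence = actual - goal: 0 on target, negative under, positive over.
for activity in goals:
    print(f"{activity:>16}: {actuals[activity] - goals[activity]:+d} min")
print(f"{'session overall':>16}: "
      f"{sum(actuals.values()) - sum(goals.values()):+d} min")
```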

Quality

Intervention delivery quality was measured with a 19-item instrument using the following 5-point scale: 1 = Disagree, 2 = Slightly Disagree, 3 = Neither, 4 = Slightly Agree, 5 = Agree. The instrument contained two subscales, quality of skill practice (α = .90; e.g., “The skill practice was introduced correctly”) and quality of workshop leader attributes (α = .93; e.g., “The trainer was enthusiastic”), as well as a total quality score (α = .96). Scale scores were computed as the mean of the ratings from the two coders. Workshop leaders also completed quality self-ratings (quality of skill practice, α = .60; quality of workshop leader attributes, α = .64; total quality, α = .76) using the same 19-item instrument.
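
For illustration, a sketch of how the subscale scores (mean of the two coders' ratings) and internal-consistency estimates could be computed; the ratings below are fabricated placeholders, and only the formulas follow from the text.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (observations x items) rating matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical 5-point quality ratings: rows = sessions, columns = items.
coder1 = np.array([[5, 4, 5, 4], [4, 4, 3, 4], [5, 5, 5, 4], [3, 4, 3, 3]])
coder2 = np.array([[4, 4, 5, 5], [4, 3, 3, 4], [5, 5, 4, 4], [3, 3, 3, 4]])

# Scale score for each session = mean of the two coders' item means.
quality = (coder1.mean(axis=1) + coder2.mean(axis=1)) / 2
print("Session quality scores:", quality)
print(f"alpha (coder 1): {cronbach_alpha(coder1):.2f}")
```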

Dosage

Dosage refers to the number of intervention sessions attended by the families, which was recorded by workshop leaders throughout the intervention period of the study. Session attendance was categorized as attendance at none of the sessions, less than half of the sessions, or half or more of the sessions.

Participant satisfaction

Participant satisfaction was measured using a participant-completed workshop evaluation. At the last session of each CSP and CSP Plus group, parents completed a 10-item workshop evaluation instrument using the following 4-point scale: 1 = Strongly Disagree, 2 = Disagree, 3 = Agree, 4 = Strongly Agree. This instrument contained subscales measuring respondents’ satisfaction with the quality of the workshop content (5 items; α = .80) and the quality of the workshop leader (5 items; α = .79), in addition to a workshop evaluation total score (α = .84).

Results

Adherence

Mean content adherence by session, learning activity, and cohort is presented in Table 2. Overall adherence of objectively rated sessions was 95%, indicating a high degree of adherence to the core components of the CSP and CSP Plus curricula. For the total sample, the degree of adherence across learning activities within each session ranged from 89% (Summary) to 99% (Introduction). Across sessions, overall adherence ranged from 91% (CSP 6 and CSP Plus 8) to 99% (CSP 2 and 3). There was a trend indicating that overall adherence improved somewhat over time (92% in Cohort 1 and 97% in Cohort 2); tests of independent proportions indicated that the cohort differences in overall adherence (z = −0.65, p = 0.95) and across learning activities (results available upon request) were not statistically significant.
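
The cohort comparison of adherence proportions uses a standard two-sample test of proportions; a minimal sketch with hypothetical counts (the denominators behind the reported percentages are not shown here):

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two independent
    proportions, using the pooled-proportion standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))

# Hypothetical counts: fully adherent sessions out of sessions coded, by cohort.
z, p = two_proportion_z(13, 14, 23, 24)
print(f"z = {z:.2f}, p = {p:.2f}")
```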

Table 2.

Mean (standard deviation) content adherence by session, learning activity, and cohort

| Learning Activity | S+1 (n = 4) | S1 (n = 5) | S2 (n = 5) | S3 (n = 5) | S4 (n = 5) | S5 (n = 5) | S6 (n = 5) | S+8 (n = 4) | S Avg. (n = 38) | C1 (n = 14) | C2 (n = 24) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Intro. | 98% (15%) | 100% (0%) | n/a | n/a | n/a | n/a | n/a | n/a | 99% (11%) | 100% (0%) | 99% (12%) |
| Review | n/a | 17% (41%) | 98% (14%) | 98% (15%) | 97% (18%) | 95% (22%) | 86% (35%) | 50% (53%) | 91% (29%) | 80% (40%) | 98% (15%) |
| Instruct. | 88% (33%) | 100% (0%) | 100% (0%) | 99% (8%) | 100% (0%) | 98% (16%) | 100% (0%) | 95% (21%) | 97% (18%) | 96% (20%) | 97% (17%) |
| Mod Ex. | 100% (0%) | 100% (0%) | 100% (0%) | 90% (31%) | 100% (0%) | 92% (27%) | 100% (0%) | 100% (0%) | 97% (18%) | 91% (28%) | 100% (0%) |
| Skill Pract. | 100% (0%) | 94% (23%) | 100% (0%) | 100% (0%) | 90% (30%) | 97% (16%) | 90% (30%) | 90% (31%) | 95% (22%) | 95% (21%) | 95% (23%) |
| Summary | 100% (0%) | 100% (0%) | 93% (25%) | 100% (0%) | 97% (18%) | 73% (45%) | 75% (45%) | 81% (40%) | 89% (31%) | 80% (40%) | 93% (26%) |
| Average | 92% (27%) | 97% (17%) | 99% (11%) | 99% (12%) | 97% (17%) | 93% (26%) | 91% (29%) | 91% (29%) | 95% (22%) | 92% (27%) | 97% (18%) |

S = Session; Intro = Introduction; Instruct = Instruction; Mod Ex = Modeled Example; Pract = Practice; n/a = not applicable.

Mean time adherence by session, learning activity, and cohort is presented in Table 3. Overall session mean time adherence was −27 sec, indicating that workshop leaders closely adhered to the two-hour session time (i.e., only 27 seconds under the targeted time allotment, on average). Results for the learning activities indicated that average actual times were longer than the goal times for the introduction (19 sec), review (1 min, 27 sec), instruction (2 min, 18 sec), and modeled examples (4 min, 12 sec) activities, and shorter than the goal times for the skill practice (−9 min, 32 sec) and summary (−3 min, 55 sec) activities. Across sessions, average time adherence ranged from 3 min and 34 sec under the allotted time for CSP Session 6 to 51 seconds over the allotted time for CSP Session 5. Independent-samples t-tests were conducted to compare average time adherence overall and across the learning activities by cohort; there were no statistically significant differences for overall time adherence, t (36) = −0.86, p = 0.39, or by learning activity (results available upon request).
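
The cohort comparison reported above is an ordinary independent-samples t-test; a sketch with fabricated per-session overall time deviations (in seconds), sized to match the 14 and 24 coded sessions:

```python
from scipy.stats import ttest_ind

# Hypothetical overall time-adherence deviations per coded session (seconds);
# negative values mean the session ran under the two-hour target.
cohort1 = [-263, 45, -10, -120, 30, -95, 12, -40, -75, 20, -15, -60, 5, -90]
cohort2 = [-30, 15, -5, 40, -20, 10, -45, 25, 0, -10, 35, -25, 8, -12,
           -18, 22, -7, 14, -33, 9, -28, 16, -4, 11]

t, p = ttest_ind(cohort1, cohort2, equal_var=True)
print(f"t({len(cohort1) + len(cohort2) - 2}) = {t:.2f}, p = {p:.2f}")
```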

Table 3.

Mean (standard deviation) time adherence by session, learning activity, and cohort

| Learning Activity | S+1 (n = 4) | S1 (n = 5) | S2 (n = 5) | S3 (n = 5) | S4 (n = 5) | S5 (n = 5) | S6 (n = 5) | S+8 (n = 4) | S Avg. (n = 38) | C1 (n = 14) | C2 (n = 24) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Intro. | 02:05 (03:36) | −00:45 (03:55) | n/a | n/a | n/a | n/a | n/a | n/a | 00:19 (03:50) | 04:23 (01:00) | 01:53* (02:56) |
| Review | n/a | −01:33 (03:00) | −00:44 (05:05) | 04:19 (04:51) | 00:22 (04:11) | 00:02 (02:17) | 09:14 (13:34) | −10:30 (06:22) | 01:27 (07:56) | 00:09 (07:06) | 02:30 (08:38) |
| Instruct. | 01:22 (04:37) | 05:47 (08:27) | 00:55 (04:36) | 04:55 (03:14) | 00:51 (04:17) | 03:11 (07:09) | −07:14 (04:12) | 04:22 (09:18) | 02:18 (06:55) | 01:41 (05:56) | 02:38 (07:26) |
| Mod Ex. | 02:45 (02:40) | 03:47 (05:10) | 04:06 (04:43) | 08:55 (00:53) | 15:47 (11:04) | 03:49 (06:18) | −02:45 (02:28) | −01:43 (01:32) | 04:12 (06:43) | 02:48 (05:34) | 05:02 (07:15) |
| Skill Pract. | −12:29 (01:42) | −06:55 (09:02) | −05:51 (06:41) | −22:51 (05:31) | −12:30 (09:28) | −06:12 (16:05) | −11:28 (14:34) | −09:15 (05:37) | −09:32 (10:17) | −07:15 (11:34) | −10:55 (09:22) |
| Summary | −06:38 (00:45) | −03:26 (02:59) | 02:34 (02:56) | −03:17 (03:03) | −06:22 (02:20) | −04:17 (01:27) | −08:05 (04:31) | −02:55 (04:37) | −03:55 (04:09) | −05:48 (03:57) | −02:59 (04:00) |
| Average | −00:45 (06:09) | 00:04 (08:11) | 00:01 (06:10) | −00:36 (11:40) | −00:22 (11:37) | 00:51 (08:47) | −03:34 (11:11) | −01:08 (08:48) | −00:27 (08:58) | −01:01 (08:07) | −00:07 (09:15) |

S = Session; C = Cohort; Intro = Introduction; Instruct = Instruction; Mod Ex = Modeled Example; Pract = Practice; n/a = not applicable; time format = mm:ss.

* Sample size too small for valid comparison.

Quality

Table 4 displays the mean of the two subscales, quality of skill practice and workshop leader attributes, by session and cohort. The mean total quality score was 4.07 (SD = 0.98, on a 5-point scale), indicating high quality of implementation. Mean skill practice and workshop leader attributes were also high at 4.08 (SD = 0.93) and 4.07 (SD = 1.02), respectively. Three of the eight sessions had an average quality rating less than four: CSP1, CSP3, and CSP6. Independent-samples t-tests were conducted to compare quality by cohort; there were no statistically significant differences for total quality (t (36) = −1.74, p = 0.09), quality of skill practice (t (34) = −0.50, p = 0.62), or quality of workshop leader attributes (t (36) = −1.92, p = 0.06).

Table 4.

Mean (standard deviation) quality of skill practice and workshop leader attributes by session and cohort

| Quality Subscale | S+1 (n = 4) | S1 (n = 5) | S2 (n = 5) | S3 (n = 5) | S4 (n = 5) | S5 (n = 5) | S6 (n = 5) | S+8 (n = 4) | S Avg. (n = 38) | C1 (n = 14) | C2 (n = 24) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Skill Practice | 4.47 (0.69) | 3.73 (1.22) | 4.20 (0.66) | 3.86 (0.74) | 4.34 (0.98) | 4.06 (0.92) | 3.75 (1.17) | 4.19 (0.59) | 4.08 (0.93) | 4.01 (0.93) | 4.12 (0.92) |
| Workshop Leader | 4.52 (0.74) | 3.79 (1.13) | 4.22 (0.82) | 3.64 (1.02) | 4.30 (0.94) | 3.98 (1.14) | 3.75 (1.18) | 4.40 (0.71) | 4.07 (1.02) | 3.78 (1.09) | 4.22 (0.95) |
| Average | 4.50 (0.72) | 3.77 (1.17) | 4.21 (0.75) | 3.73 (0.92) | 4.32 (0.96) | 4.02 (1.04) | 3.75 (1.17) | 4.32 (0.67) | 4.07 (0.98) | 3.87 (1.03) | 4.17 (0.94) |

S = Session; C = Cohort; n/a = not applicable

Workshop leader quality self-ratings were available only for Cohort Two because the quality measure items were refined after Cohort One, which reduced the pool of potential comparisons between self-rated sessions and independent coder-rated sessions. Sixteen self-rated Cohort Two sessions had a match in the random sample of 38 Cohort One and Cohort Two sessions rated by the independent coders. Pearson product-moment correlations between the matched self-rated and coder-rated sessions were not statistically significant for total quality, quality of skill practice, or quality of workshop leader attributes (Table 5). Despite the small sample size for these analyses, it is important to note that the effect sizes represented by the correlation coefficients were small in magnitude.
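
A minimal sketch of the matched-session correlation, with fabricated ratings standing in for the 16 matched sessions:

```python
from scipy.stats import pearsonr

# Hypothetical matched total-quality ratings for 16 sessions (not study data).
coder = [4.1, 3.8, 4.5, 4.0, 3.6, 4.3, 4.2, 3.9,
         4.4, 3.7, 4.0, 4.6, 3.5, 4.1, 4.2, 3.8]
self_ = [4.4, 4.5, 4.3, 4.4, 4.2, 4.5, 4.3, 4.6,
         4.2, 4.4, 4.5, 4.3, 4.4, 4.6, 4.2, 4.3]

r, p = pearsonr(coder, self_)
print(f"r = {r:.2f}, p = {p:.2f}")
```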

Table 5.

Independent coder vs. workshop leader self-rated quality ratings

| Quality Subscale | Rater | N | Mean | SD | r |
| --- | --- | --- | --- | --- | --- |
| Skill Practice | Coder | 16 | 3.99 | 0.70 | −0.01 |
|  | W. Leader | 16 | 4.38 | 0.34 |  |
| Workshop Leader Attributes | Coder | 16 | 4.11 | 0.66 | −0.33 |
|  | W. Leader | 16 | 4.39 | 0.30 |  |
| Average | Coder | 16 | 4.07 | 0.65 | −0.28 |
|  | W. Leader | 16 | 4.38 | 0.34 |  |

W. Leader = Workshop Leader; all p > .05.

Dosage

Session attendance by cohort is presented in Table 6. As expected, there was variability in attendance. Overall, 80% of families assigned to the intervention conditions of this randomized trial attended some or all of the CSP/CSP Plus sessions, leaving 20% who did not engage in the programming at all. Attendance rates were fairly similar across cohorts, although 64% of parents attended half or more of the sessions in Cohort One compared with 56% in Cohort Two. A chi-square test of independence examining the difference in the number of sessions attended by cohort was statistically non-significant, χ²(2, N = 213) = 1.397, p = 0.50.
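
This test can be reproduced directly from the published counts in Table 6; the sketch below recovers the reported statistic.

```python
from scipy.stats import chi2_contingency

# Sessions-attended counts by cohort from Table 6
# (rows: none, less than half, half or more; columns: Cohort 1, Cohort 2).
table = [[14, 29],
         [15, 29],
         [52, 74]]

chi2, p, dof, expected = chi2_contingency(table)
n = sum(sum(row) for row in table)
print(f"chi2({dof}, N = {n}) = {chi2:.3f}, p = {p:.2f}")  # 1.397, p = 0.50
```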

Table 6.

Session attendance by cohort

| Sessions Attended | Total n (%) | Cohort 1 n (%) | Cohort 2 n (%) |
| --- | --- | --- | --- |
| None | 43 (20%) | 14 (17%) | 29 (22%) |
| Less than half | 44 (21%) | 15 (19%) | 29 (22%) |
| Half or more | 126 (59%) | 52 (64%) | 74 (56%) |
| Total | 213 (100%) | 81 (100%) | 132 (100%) |

All p > .05

Participant satisfaction

Table 7 displays the means and standard deviations for the workshop evaluations. The mean workshop evaluation total score was 3.84 (SD = 0.23, on a 4-point scale), indicating high satisfaction with the workshop as a whole. Mean workshop content and workshop leader quality subscale scores were also high at 3.75 (SD = 0.33) and 3.92 (SD = 0.20), respectively. Independent-samples t-tests were conducted to compare workshop evaluations by cohort. There were significant differences between Cohort One and Cohort Two in the mean scores for the overall workshop evaluation (t (59) = −3.47, p < 0.05), the quality of session content subscale (t (65) = −3.32, p < 0.05), and the quality of the workshop leader subscale (t (50) = −2.55, p < 0.05). Parents in Cohort Two had higher scores in all areas, suggesting a higher degree of satisfaction.

Table 7.

Mean (standard deviation) workshop evaluation scores by cohort

| Subscale | Total (n = 107) | Cohort 1 (n = 42) | Cohort 2 (n = 65) |
| --- | --- | --- | --- |
| Workshop Content Subscale | 3.75 (0.33) | 3.62 (0.38) | 3.84* (0.26) |
| Workshop Leader Quality Subscale | 3.92 (0.20) | 3.85 (0.28) | 3.96* (0.12) |
| Workshop Evaluation Total | 3.84 (0.23) | 3.73 (0.28) | 3.90* (0.17) |

* p < .05

Discussion

This article describes the implementation assessment procedures used in a randomized test of CSP, a widely used but understudied parent-training program. Assessments focused on four components of implementation (Dane & Schneider, 1998): program adherence, quality of delivery, program dosage, and participant satisfaction. We expected that there would be modest but non-significant cohort improvements in these implementation components. Results indicated that workshop leaders had high adherence to the program content (i.e., 95% overall adherence) and manual-stated goal times (i.e., overall session time was 27 sec shorter than the goal time) of the CSP/CSP Plus curriculum. The majority of intervention families attended some or all of the sessions, and participant satisfaction ratings for the workshops were high.

There were non-significant cohort improvements in content adherence for the review and summary learning activities; content adherence increased for both activities from Cohort One to Cohort Two (review: 80% to 98%; summary: 80% to 93%). In a meta-analysis examining implementation and its relationship to program outcomes, Durlak and DuPre (2008) note that perfect implementation is atypical, and that studies with implementation levels as low as 60% have reported positive findings. Furthermore, some studies have used 60% as the minimum required content delivered in order to test an intervention (Sholomskas et al., 2005). Our higher adherence rates may have resulted from our training and supervision protocols, including performance-based feedback. Specifically, ongoing supervision and coaching occurred throughout the intervention phases of the trial, with review of selected videotaped workshops. In addition, a booster training session was held after Cohort One to share results and recommendations from the initial adherence and quality ratings with the workshop leaders, review some minor session content revisions (e.g., using more adolescent-related examples during the sessions), and address any questions or concerns of the three new and five returning workshop leaders. For this study, the lack of significant cohort differences is a desirable outcome because it supports the notion that the intervention was delivered consistently across cohorts and that all participants essentially received the same intervention.

We were also interested in how closely workshop leaders adhered to the manual’s time allocations. Time adherence data indicated that workshop leaders adhered closely to the two-hour overall session time but divided that time differently than the manual specified, spending slightly more time on activities earlier in the session (i.e., review, instruction, modeled examples) and less on activities later in the session (i.e., skill practice, summary). One possible explanation for this result is that all workshop leaders were new to CSP and were likely still developing their time management skills with intervention delivery. It is important to note, however, that the learning activity goal times are essentially guidelines that have not undergone formal testing to establish parameters for acceptable time deviations. With this caveat in mind, time adherence data used in conjunction with content adherence data can provide useful information. For example, as described above, review and summary content adherence were each 80% for Cohort One, but overall time adherence for these activities differed (+1 min, 27 sec and −3 min, 55 sec, respectively), indicating that the content coverage was similar while the amount of time spent on the content varied. Examining the videotaped sessions revealed that workshop leaders sometimes allowed off-topic conversations to extend the review time at the beginning of the session and typically had little time left at the end, which resulted in a somewhat “rushed” summary.

Likewise, the overall content adherence for the modeled examples (97%) and skill practice (95%) learning activities was high, yet their overall time adherence was +4 min, 12 sec and −9 min, 32 sec, respectively, again indicating a discrepancy between content adherence and time adherence. Further examination of the videotaped sessions revealed that parents were usually eager to discuss live and videotaped modeled examples of skills they were learning, hence the extended amount of time spent on this learning activity. Moreover, demonstrating new skills in front of other parents during the skill practice learning activity can be anxiety provoking for parents, and providing accurate and encouraging feedback can be challenging for new workshop leaders. One possible explanation for the lower skill practice time adherence is that both workshop leaders and parents may have avoided spending time on this essential, yet at times difficult, learning activity. Of course, other factors such as group size can affect the amount of time spent practicing skills, with smaller groups needing less time to demonstrate mastery of a given skill. There were 5 – 6 parents attending each session in this study, fewer than the 8 – 12 parents typically recommended for CSP sessions. These examples demonstrate that monitoring the time spent on learning activities can add another helpful dimension to program adherence when evaluating intervention implementation.

Independent coder-rated quality of intervention implementation was high for skill practice, workshop leader attributes, and overall quality. While still meeting our quality standards, the first (Parents are Teachers), third (Preventing Problems), and sixth (Putting it all Together) CSP sessions (Table 1) had the lowest mean quality ratings, suggesting that the content for these sessions may be slightly more difficult for workshop leaders to learn and/or effectively communicate to parents. These sessions may need additional attention when training and supervising new workshop leaders. For example, steps might be taken to improve the quality of delivery for these sessions by adding training components focused on intervention delivery techniques, such as skills for helping participants feel comfortable during practice and providing effective feedback. As with content adherence, there were non-significant cohort improvements in quality, likely due to some degree of growing familiarity and practice with the CSP curriculum experienced by the five returning Cohort Two workshop leaders.

Workshop leader mean self-rated quality ratings were higher than independent coder-rated mean quality ratings, supporting research indicating that individuals tend to rate themselves higher than third-party raters (Breitenstein et al., 2010a). Not surprisingly, the correlations between self-rated and coder-rated quality ratings were non-significant and small in magnitude. Self-ratings of adherence and quality can be a practical and cost-efficient alternative to using independent coders in applied settings, provided that workshop leaders have some training in recording their own behavior and have demonstrated acceptable levels of reliability. Research is needed to develop and evaluate training protocols that can improve the self-ratings of workshop leaders.

Program dosage findings indicated that the majority of intervention families (80%) attended some or all of the sessions, an attendance rate that is similar to or higher than rates typically found in family-focused intervention research (Spoth, Clair, Greenberg, Redmond, & Shin, 2007). More parents attended half or more of the sessions in Cohort One (64%) than in Cohort Two (56%), although this difference was non-significant. A detailed analysis of factors affecting participation and retention rates (e.g., race of parents, sex of students, family income) in the present study is described by Fleming and colleagues (2013).

In terms of participant satisfaction, results from parent-completed workshop evaluations revealed high satisfaction with the quality of workshop content and workshop leader attributes, with significantly higher ratings in Cohort Two than in Cohort One. It is important to note that parents rated their satisfaction with the quality of the content and workshop leader attributes for the entire workshop, rather than rating each individual session as the independent coders did. Five of the eight workshop leaders in Cohort Two had conducted sessions in Cohort One and had the benefit of receiving performance feedback from the intervention coordinator during that time. Skill practice (i.e., intervention delivery) combined with performance feedback is an effective method of improving skills in a variety of areas (Beidas & Kendall, 2010; Burns, Peters, & Noell, 2008; Hepner, Hunter, Paddock, Zhou, & Watkins, 2011) and, together with the between-cohort booster training described above, may be responsible for the increase in workshop evaluation ratings.

Limitations and Directions for Future Research

There are several limitations to the present study. One important limitation is that we did not collect workshop leader self-ratings of content adherence: at the start of intervention delivery, the final versions of the implementation observation rating forms (i.e., the content adherence items) had not yet been finalized and thus were unavailable to workshop leaders. Future CSP studies will incorporate workshop leader self-ratings of content adherence, as they may be more practical for real world fidelity assessment, particularly if the self-ratings have demonstrated reliability with independent ratings. A second limitation is that no training protocol was employed for workshop leaders to conduct self-ratings of quality, which limited our ability to compare independent coder ratings of the quality of program delivery with workshop leader self-reports. Additional research is needed to develop practical tools and training protocols for workshop leaders to conduct self-ratings of content adherence and quality. As noted, observational methods, although essential for randomized trials, are impractical for real world settings; ratings from program providers themselves are more feasible for real world applications, but may be subject to bias. Finally, participants completed a workshop evaluation only at the last session, although participants could also provide adherence and quality information after each session. Given the tight time constraints of each intervention session, such forms would need to be brief, and perhaps focused on key elements such as skill practice. Despite these limitations, we believe the illustration reported herein will contribute to the ongoing development of implementation assessment and reporting methods for the prevention field.

Lessons Learned

There are several important lessons learned from the current study. As noted, the development of observation forms is a critical process in rigorous implementation assessment and uniquely challenging for the case of testing widely used but understudied programs. The availability of detailed intervention manuals, in addition to consultation with CSP/CSP Plus program experts, helped us to identify core intervention elements for each session and keep the forms at a reasonable length. Whereas it is common to examine content adherence, our results showed that it also is valuable to examine time adherence; covered content can deviate from expectations in terms of time allocation, potentially resulting in reduced time for key components, such as skill practice. Finally, incorporating the observation forms in supervision gives workshop leaders valuable performance feedback to help improve adherence and quality of intervention delivery. Given the time and expense of videotaping sessions and training independent coders to rate implementation, it may be more practical and cost effective to train workshop leaders to self-rate adherence and quality in real world settings.

Highlights.

  • We report fidelity assessment in the test of a widely used but understudied program.

  • Workshop leaders had high adherence to the content and goal times of the curriculum.

  • Workshop leaders delivered the intervention with high quality.

  • Participant satisfaction ratings for the workshops were high.

  • Positive findings may be due to detailed intervention manuals and ongoing training.

Acknowledgements

The project described was supported by National Institute on Drug Abuse Grant # 1R01DA025651. The content is solely the responsibility of the authors and does not necessarily represent the official views of the funding agency or the National Institutes of Health.

Biographies

Robert G. Oats, M.A., is a Senior Research Analyst at the Boys Town National Research Institute for Child and Family Studies. His research interests include implementation assessment, evaluation of applied research, and behavioral consultation in applied settings.

W. Alex Mason, Ph.D., is the Associate Director of the Boys Town National Research Institute for Child and Family Studies. His research interests include the developmental etiology and family-based prevention of adolescent and young adult substance misuse and co-occurring problems. He also has interests in longitudinal and intervention-related methods and analytic techniques.

Wendi F. Cross, Ph.D., is an Associate Professor of Psychiatry (Psychology) in the Department of Psychiatry at the University of Rochester Medical Center. Her research focuses on developing and testing models of training and transfer of training in the implementation of interventions in real world settings.

Mary Casey-Goldstein, M.S. Ed, is an Intervention Specialist at the Social Development Research Group at the University of Washington. Her work includes developing programs for parents and youth. She also oversees the intervention components of research trials evaluating the effectiveness of various programs.

Koren Hanson, M.A., is a Data Manager at the Social Development Research Group, School of Social Work, University of Washington, and works on a variety of studies examining family-, school- and community-based prevention programs and policies.

Kevin P. Haggerty, Ph.D., is the Associate Director of the Social Development Research Group and a faculty member of the University of Washington School of Social Work. He has specialized in the development and implementation of prevention programs at the community, school, and family levels.

Ronald W. Thompson, Ph.D., is the Director of the Boys Town National Research Institute for Child and Family Studies. He has forty years of experience as a clinician, program administrator, consultant, applied researcher, and research administrator and has held faculty positions at the University of Nebraska Department of Special Education, Creighton University School of Medicine, and the University of Kansas Department of Human Development.


Reference List

  1. Bandura Albert. Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review. 1977;84:191–215. doi: 10.1037//0033-295x.84.2.191.
  2. Barth RP, Landsverk J, Chamberlain P, Reid JB, Rolls JA, Hurlburt MS, et al. Parent-training programs in child welfare services: Planning for a more evidence-based approach to serving biological parents. Research on Social Work Practice. 2005;15:353–371.
  3. Beidas Rinad S, Kendall Philip C. Training therapists in evidence-based practice: A critical review of studies from a systems-contextual perspective. Clinical Psychology: Science and Practice. 2010;17:1–30. doi: 10.1111/j.1468-2850.2009.01187.x.
  4. Breitenstein Susan M, Fogg Louis, Garvey Christine, Hill Carri, Resnick Barbara, Gross Deborah. Measuring implementation fidelity in a community-based parenting intervention. Nursing Research. 2010a;59:158. doi: 10.1097/NNR.0b013e3181dbb2e2.
  5. Breitenstein Susan M, Gross Deborah, Garvey Christine A, Hill Carri, Fogg Louis, Resnick Barbara. Implementation fidelity in community-based interventions. Research in Nursing & Health. 2010b;33:164–173. doi: 10.1002/nur.20373.
  6. Brown Eric C, Catalano Richard F, Fleming Charles B, Haggerty Kevin P, Abbott Robert D. Adolescent substance use outcomes in the Raising Healthy Children Project: A two-part latent growth curve analysis. Journal of Consulting and Clinical Psychology. 2005;73:699–710. doi: 10.1037/0022-006X.73.4.699.
  7. Burke R, Herron R, Barnes BA. Common Sense Parenting: Using your head as well as your heart to raise school-aged children. 3rd ed. Boys Town: Boys Town Press; 2006.
  8. Burke R, Schuchmann LF, Barnes BA. Common Sense Parenting Trainer Guide. Boys Town: Boys Town Press; 2006.
  9. Burns Matthew K, Peters Rebecca, Noell George H. Using performance feedback to enhance implementation fidelity of the problem-solving team process. Journal of School Psychology. 2008;46:537–550. doi: 10.1016/j.jsp.2008.04.001.
  10. Chaudoir Stephenie, Dugan Alicia, Barr Colin H. Measuring factors affecting implementation of health innovations: a systematic review of structural, organizational, provider, patient, and innovation level measures. Implementation Science. 2013;8:22. doi: 10.1186/1748-5908-8-22.
  11. Cross Wendi, West Jennifer. Examining implementer fidelity: Conceptualising and measuring adherence and competence. Journal of Children’s Services. 2011;6:18–33. doi: 10.5042/jcs.2011.0123.
  12. Curran Geoffrey M, Bauer Mark, Mittman Brian, Pyne Jeffrey M, Stetler Cheryl. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Medical Care. 2012;50:217–226. doi: 10.1097/MLR.0b013e3182408812.
  13. Dane Andrew V, Schneider Barry H. Program integrity in primary and early secondary prevention: Are implementation effects out of control? Clinical Psychology Review. 1998;18:23–45. doi: 10.1016/s0272-7358(97)00043-3.
  14. Durlak Joseph A, DuPre Emily P. Implementation matters: A review of research on the influence of implementation on program outcomes and the factors affecting implementation. American Journal of Community Psychology. 2008;41:327–350. doi: 10.1007/s10464-008-9165-0.
  15. Dusenbury Linda, Brannigan Rosalind, Falco Mathea, Hansen William B. A review of research on fidelity of implementation: Implications for drug abuse prevention in school settings. Health Education Research. 2003;18:237–256. doi: 10.1093/her/18.2.237.
  16. Fisher Philip A, Chamberlain Patricia. Multidimensional treatment foster care: A program for intensive parenting, family support, and skill building. Journal of Emotional and Behavioral Disorders. 2000;8:155–164.
  17. Fixsen Dean L, Phillips Elery L, Wolf Montrose M. Achievement Place: Experiments in self-government with pre-delinquents. Journal of Applied Behavior Analysis. 1973;6:31–47. doi: 10.1901/jaba.1973.6-31.
  18. Flay Brian R. Efficacy and effectiveness trials (and other phases of research) in the development of health promotion programs. Preventive Medicine. 1986;15:451–474. doi: 10.1016/0091-7435(86)90024-1.
  19. Fleming Charles B, Mason WA, Thompson Ronald W, Haggerty KP, Fernandez Kate, Casey-Goldstein M, et al. Predictors of participation in parenting workshops for improving adolescent behavioral and mental health: Results from the Common Sense Parenting trial. 2013. Manuscript submitted for publication. doi: 10.1007/s10935-015-0386-3.
  20. Glasgow Russell E, Vinson Cynthia, Chambers David, Khoury Muin J, Kaplan Robert M, Hunter Christine. National Institutes of Health approaches to dissemination and implementation science: current and future directions. American Journal of Public Health. 2012;102:1274–1281. doi: 10.2105/AJPH.2012.300755.
  21. Glasgow Russell E, Vogt Thomas M, Boles Shawn M. Evaluating the public health impact of health promotion interventions: The RE-AIM framework. American Journal of Public Health. 1999;89:1322–1327. doi: 10.2105/ajph.89.9.1322.
  22. Griffith Annette K. The use of a behavioral parent training program for parents of adolescents. Journal of At-Risk Issues. 2010;15:1–8.
  23. Haggerty KP, Casey-Goldstein M, Barber LM. Letting Loose without Letting Go [unpublished manual]. Social Development Research Group, School of Social Work, University of Washington; 2000a.
  24. Haggerty KP, Casey-Goldstein M, Barber LM. Stepping Up to High School [unpublished manual]. Social Development Research Group, School of Social Work, University of Washington; 2000b.
  25. Hepner Kimberly A, Hunter Sarah B, Paddock Susan M, Zhou Annie J, Watkins Katherine E. Training addiction counselors to implement CBT for depression. Administration and Policy in Mental Health and Mental Health Services Research. 2011;38:313–323. doi: 10.1007/s10488-011-0359-7.
  26. Institute of Medicine (U.S.) Committee on Crossing the Quality Chasm: Adaptation to Mental Health and Addictive Disorders. Improving the quality of health care for mental and substance-use conditions. Washington, DC: National Academies Press; 2006.
  27. Kaminski Jennifer W, Valle Linda A, Filene Jill H, Boyle Cynthia L. A meta-analytic review of components associated with parent training program effectiveness. Journal of Abnormal Child Psychology. 2008;36:567–589. doi: 10.1007/s10802-007-9201-9.
  28. Mason WA, Fleming Charles B, Thompson Ronald W, Haggerty Kevin P, Snyder James J. A framework for testing and promoting expanded dissemination of promising preventive interventions that are being implemented in community settings. Prevention Science. 2013:1–10. doi: 10.1007/s11121-013-0409-3.
  29. Minkin Neil, Braukman Curtis J, Minkin Bonnie L, Timbers Gary D, Timbers Barbara J, Phillips Elery L, et al. The social validation and training of conversational skills. Journal of Applied Behavior Analysis. 1976;9:127–139. doi: 10.1901/jaba.1976.9-127.
  30. Patterson GR. Coercive family process. Eugene, OR: Castalia; 1982.
  31. Patterson GR, Reid JB, Dishion TJ. A social interactional approach, Volume 4: Antisocial boys. Eugene, OR: Castalia Publishing; 1992.
  32. Proctor Enola, Silmere Hiie, Raghavan Ramesh, Hovmand Peter, Aarons Greg, Bunger Alicia, et al. Outcomes for implementation research: Conceptual distinctions, measurement challenges, and research agenda. Administration and Policy in Mental Health. 2011;38:65–76. doi: 10.1007/s10488-010-0319-7.
  33. Rotheram-Borus Mary J, Duan Naihua. Next generation of preventive interventions. Journal of the American Academy of Child & Adolescent Psychiatry. 2003;42:518–526. doi: 10.1097/01.CHI.0000046836.90931.E9.
  34. Schoenwald Sonja K, Henggeler Scott W, Brondino Michael J, Rowland Melisa. Multisystemic therapy: Monitoring treatment fidelity. Family Process. 2000;39:83–103. doi: 10.1111/j.1545-5300.2000.39109.x.
  35. Sholomskas Diane E, Syracuse-Siewert Gia, Rounsaville Bruce J, Ball Samuel A, Nuro Kathryn F, Carroll Kathleen M. We don’t train in vain: A dissemination trial of three strategies of training clinicians in cognitive-behavioral therapy. Journal of Consulting and Clinical Psychology. 2005;73:106. doi: 10.1037/0022-006X.73.1.106.
  36. Snyder James, Edwards Patty, McGraw Kate, Kilgore Kim, Holton Angie. Escalation and reinforcement in mother-child conflict: Social processes associated with the development of physical aggression. Development and Psychopathology. 1994;6:305–321.
  37. Snyder James, Reid John, Stoolmiller Mike, Howe George, Brown Hendricks, Dagne Getachew, et al. The role of behavior observation in measurement systems for randomized prevention trials. Prevention Science. 2006;7:43–56. doi: 10.1007/s11121-005-0020-3.
  38. Spoth Richard, Clair Scott, Greenberg Mark, Redmond Cleve, Shin Chungyeol. Toward dissemination of evidence-based family interventions: maintenance of community-based partnership recruitment results and associated factors. Journal of Family Psychology. 2007;21:137. doi: 10.1037/0893-3200.21.2.137.
  39. Thompson Ronald W, Grow Crystal R, Ruma Penney R, Daly Daniel L, Burke Raymond V. Evaluation of a practical parenting program with middle- and low-income families. Family Relations: Interdisciplinary Journal of Applied Family Studies. 1993;42:21–25.
  40. Thompson Ronald W, Ruma Penney R, Brewster Albert L, Besetsney Leasley K, Burke Raymond V. Evaluation of an Air Force child physical abuse prevention project using the reliable change index. Journal of Child and Family Studies. 1997;6:421–434.
  41. Thompson Ronald W, Ruma Penney R, Schuchmann Linda F, Burke Raymond V. A cost-effectiveness evaluation of parent training. Journal of Child and Family Studies. 1996;5:415–429.
