Author manuscript; available in PMC: 2014 Mar 1.
Published in final edited form as: J Exp Criminol. 2013 Jan 30;9(1):119–126. doi: 10.1007/s11292-013-9173-4

Assessing findings from the Fast Track study

Conduct Problems Prevention Research Group
PMCID: PMC3607637  NIHMSID: NIHMS443992  PMID: 23539066

Abstract

Objectives

The aim of this paper is to respond to the Commentary, “Reassessing Findings from the Fast Track Study: Problems of Methods and Analysis” provided by E. Michael Foster (Foster, this issue) to our article “Fast Track Intervention Effects on Youth Arrests and Delinquency” (Conduct Problems Prevention Research Group 2010, Journal of Experimental Criminology, 6, 131–157). Our response begins with a description of the mission and goals of the Fast Track project, and how they guided the original design of the study and continue to inform outcome analyses. Then, we respond to the Commentary's five points in the order they were raised.

Conclusions

We agree with the Commentary that efforts to prevent crime and delinquency are of high public health significance because the costs of crime and delinquency to society are indeed enormous. We believe that rigorous, careful intervention research is needed to accumulate evidence that informs prevention programs and activities. We have appreciated the opportunity to respond to the Commentary and to clarify the procedures and results that we presented in our paper on Fast Track effects on youth arrests and delinquency. Our response has clarified the framework for the number of statistical tests made, has reiterated the randomization process, has supported our tests for site-by-intervention effects, has provided our rationale for assuming missing at random, and has clarified that the incarceration variable was not included as a covariate in the hazard analyses. We stand by our conclusion that random assignment to Fast Track had a positive impact in preventing juvenile arrests, and we echo our additional caveat that it will be essential to determine whether intervention produces any longer-term effects on adult arrests as the sample transitions into young adulthood. We also appreciate the opportunity for open scientific debate on the values and risks associated with multiple analyses in long-term prevention program designs such as Fast Track. We believe that, once collected, completed longitudinal intervention datasets should be fully used to understand the impact, process, strengths, and weaknesses of the intervention approach. At the same time, we argue that it is important to balance the need to maintain awareness and caution regarding potential risks in the design or approach that may confound interpretation of findings, in the manner raised by the Commentator, with the need for extended analyses of the available data so we can better understand over time how antisocial behavior and violence can be effectively reduced.

Keywords: Prevention, Conduct problems, Criminal offenses


We appreciate the opportunity to respond to the Commentary, “Reassessing Findings from the Fast Track Study: Problems of Methods and Analysis” provided by E. Michael Foster (Foster, this issue) to our article “Fast Track Intervention Effects on Youth Arrests and Delinquency” (Conduct Problems Prevention Research Group 2010). We agree with the Commentary that efforts to prevent crime and delinquency are of high public health significance because the costs of crime and delinquency to society are indeed enormous. We believe that rigorous, careful intervention research is needed to accumulate evidence that informs prevention programs and activities. We begin our response by reflecting on the mission and goals of the Fast Track project, and how they guided the original design of the study and continue to inform outcome analyses. Then, we respond to the Commentary's five points in the order they were raised.

Twenty years ago, when the Fast Track project was originally designed, it was the first study of its kind. Our attempt to implement a sustained 10-year intervention with multiple levels (universal and indicated) across multiple settings (home and school) in the context of a long-term, longitudinal study at four diverse sites was (and is) unprecedented. From the start, the purpose was to explore and understand the process of change as well as outcomes for participants. We sought to evaluate the potential of this multifaceted preventive intervention to alter the negative developmental cascade that so often characterizes the life course of children with early-starting conduct problem behaviors (Conduct Problems Prevention Research Group 1992, 2002; Dodge et al. 2008) and to explore the ways in which intervention impact occurs. As such, this approach differs in important ways from that of a time-limited clinical trial that seeks to deliver a “yes/no” test of program efficacy on a single outcome (for example, whether drug X vs. control reduces a specific symptom or set of symptoms). This study was guided by two theoretical orientations. The first comprised emerging ideas in prevention science, an interdisciplinary field of formal inquiry (Coie et al. 1993). The second was the growing field of developmental psychopathology (Sroufe and Rutter 1984), which forged new ideas and developmental models that focused on how risk and protective factors operate in altering the developmental trajectories of health and disorder. Fast Track used developmental theory and longitudinal research to inform its multifaceted prevention program design (Conduct Problems Prevention Research Group 1992). As a long-term prevention trial, rather than a short-term clinical trial, the study had multiple aims. One was to evaluate intervention effects at each developmental period on the multiple competencies as well as risk factors targeted by the intervention, and to evaluate impact on the targeted adolescent outcomes, especially multiple antisocial behaviors. Additional aims, as articulated in the original and subsequent grant proposals, were to determine whether characteristics of the participant sample moderated the effects of the intervention, to understand the factors that mediated successful intervention, to identify factors that influenced individual differences in intervention engagement and response, and to inform theoretical models of the development of early and late-starting conduct problems.

Hence, in the article of interest (Conduct Problems Prevention Research Group 2010), as in other outcome papers, we report analyses that test the main effect of the intervention on primary outcomes, but we also include exploratory analyses designed to describe the nature and process of change in more detail. As noted by the Commentator, these descriptive analyses must be understood in the context in which they are performed, as they represent a different paradigmatic view of science than that of the straightforward FDA model of a clinical trial. Brown and his colleagues in the Prevention Science Methodology Research Group (Brown et al. 2008) have well articulated the richer orientation toward RCTs that is characteristic of the prevention science paradigm. Following this paradigm, the intention in our analytic models in Fast Track is to provide important insights that can test developmental models and theories (Conduct Problems Prevention Research Group 2004) and aid in our scientific quest to understand and prevent the developmental escalation of antisocial behavior. Even though exploratory, such analyses may fuel hypothesis generation for subsequent validation and suggest areas of promise (or areas in need of modification), thereby strengthening future prevention program and research designs. Thus, we note that the Commentator's perspective represents only one view and only one of the broad goals within which Fast Track was designed, measured, analyzed, and interpreted. We have adopted a different view that is more consistent with the history of prevention science (Brown et al. 2008; Coie et al. 1993). With this context in mind, we address the specific points raised by the Commentator.

The first concern raised in the Commentary was that the Fast Track article included an “enormous” number of statistical tests, referring to the tests presented in tables 6 and 7 (Conduct Problems Prevention Research Group 2010). It is important to recognize that there are only three outcome constructs presented in the article (i.e., court records of juvenile arrests; court records of adult arrests; self-reported delinquent offenses). The two tables report two different approaches to examining these three constructs: table 6 shows the results of an ordered logit and negative binomial approach, which addresses the impact of intervention on the rate of offending, while table 7 shows the results of a discrete time hazard analysis, which addresses the impact of intervention on the timing of onset of offending. Because they represent two different conceptual approaches to the assessment of intervention impact on the same phenomena, tables 6 and 7 should be considered separately.
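To make the distinction concrete, here is a minimal sketch, using simulated data and Python's statsmodels, of the two kinds of models described above: a count model for the rate of offending (as in table 6, which also used an ordered logit specification) and a discrete-time hazard model for the timing of onset (as in table 7). All variable names, sample sizes, and model settings below are hypothetical and do not reproduce the Fast Track specifications or covariates.

```python
# Illustrative only: simulated data, not the Fast Track dataset or model specification.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
treat = rng.integers(0, 2, n)                      # 1 = intervention, 0 = control (hypothetical)

# Rate of offending (table 6 style): negative binomial regression on offense counts.
lam = np.exp(0.3 - 0.4 * treat)                    # hypothetical offense rates
counts = rng.negative_binomial(2, 2.0 / (2.0 + lam))
X = sm.add_constant(treat)
rate_model = sm.GLM(counts, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()
print(rate_model.params)                           # treatment coefficient on the log-rate scale

# Timing of onset (table 7 style): discrete-time hazard fit as a logit on person-period data.
years = np.arange(12, 19)                          # hypothetical ages 12-18
onset_age = np.where(rng.random(n) < 0.6, rng.choice(years, n), 99)   # 99 = never arrested
rows = []
for i in range(n):
    for age in years:
        event = int(age == onset_age[i])
        rows.append((treat[i], age, event))
        if event:                                  # youths stop contributing after first arrest
            break
pp = pd.DataFrame(rows, columns=["treat", "age", "event"])
X_pp = pd.get_dummies(pp["age"], prefix="age", drop_first=True, dtype=float)
X_pp["treat"] = pp["treat"].astype(float)
X_pp = sm.add_constant(X_pp)
hazard_model = sm.Logit(pp["event"], X_pp).fit(disp=False)
print(hazard_model.params["treat"])                # log-odds of first arrest per at-risk year
```

The count model summarizes how many offenses a youth accrues over the follow-up, whereas the person-period logit estimates, for each year a youth remains arrest-free, the conditional probability that a first arrest occurs in that year.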

In table 6, the left-hand column indicates the intervention findings on the severity of the three offense constructs. Of these three outcomes, there was a significant intervention main effect for Juvenile Arrests, and non-significant intervention effects for Adult Arrests and for Self-Reported Offenses. As shown in the left-hand column of table 7, when these same three constructs were examined in terms of the timing of onset, an identical pattern emerges: intervention delays the onset of Juvenile Arrests, but does not have an effect on the onset of Adult Arrests or on Self-Reported Offenses. Thus, these two sets of analyses provide replication, documenting direct and straightforward intervention effects on one of the three principal outcomes (i.e., juvenile arrest severity and onset)—well above chance levels.

The three right-hand columns in tables 6 and 7 are supplementary analyses exploring intervention effects in more nuanced ways within the different levels of offense severity. The Commentator assumes incorrectly that all outcome variables have equal importance and that experiment-wide statistical adjustments must be made each time a new variable is analyzed. In the case of this article, the left-hand columns test the impact of intervention on three measures of the primary outcome (antisocial offending), and the other columns show exploratory analyses designed to describe the nature of those primary effects. For example, in the top line of table 6, the results indicate that the intervention had a significant effect in producing relatively lower rates of moderate severity Juvenile Arrests (Severity 3), but not in reducing high severity (Severity 4/5) or low severity (Severity 1/2) Juvenile Arrests. The results in the three right-hand columns of Juvenile Arrests are examining subsets of the overall Juvenile Arrest Severity Index (in the left-hand column), and thus must be considered separately to avoid combining analyses of the variable and its derivatives. These supplementary analyses were not part of the original version of this paper submitted to the Journal of Experimental Criminology, which examined intervention effects only on the lifetime severity weighted frequency measures of arrests and delinquency (i.e., the severity indices in table 6 and the onset variables in table 7). However, as part of the review process, a reviewer and editor suggested that it would be useful to separate out serious offenses on the self-report of delinquency to determine whether the lack of an overall intervention effect on this construct might be due to intervention effects that would emerge only on serious offenses, as we had speculated in the Discussion section of the manuscript first submitted. As a result of this editorial feedback, we reanalyzed intervention effects at the three levels of severity for each of the three outcome variables (low, moderate, and serious offenses). Of the nine supplementary results reported in table 6, only one was significant (for moderate level Juvenile Arrests), and similarly only one of the nine supplementary analyses reported in table 7 was significant (for severe self-reported offenses, as queried by the reviewer). Thus, although somewhat above chance levels, these specific findings should be regarded as descriptive and exploratory, and distinct from the primary analyses of intervention impact and overall intervention main effect found for Juvenile Arrests.

The Commentator also questioned our reporting of standard errors rather than standard deviations. To recap, we included standard errors to provide perspective on the magnitude of the differences between the offending rates of intervention and control youth. That is, we calculated a negative binomial regression using the multiply imputed datasets, and showed that among high-risk youth (those in the top 13 % of the normative initial risk score distribution) intervention decreased the expected number of severity 4 or 5 adult arrests by 47 % (p value=0.05). To understand the magnitude of this intervention effect, we calculated descriptive statistics for the high-risk intervention and control samples using the imputed datasets by estimating an intercept-only regression model for each sample separately. We reported the standard errors for the means, but as the Commentator indicates, we should have converted them to standard deviations. To translate the standard errors into standard deviations, the reported SEs (0.04 and 0.04) must be multiplied by the square root of each sample size, yielding standard deviations of 0.57 and 0.55 for the intervention and control samples, respectively. The results and implications remain the same as those reported in the article. The frequency distributions provide an additional (and perhaps a more illustrative) description of the sample differences. Among high-risk intervention youth, 93.6 % had no severe arrests, 5.8 % had one severe arrest, and 0.6 % had two or more severe arrests, whereas among high-risk control youth, 90.1 % had no severe arrests, 6.4 % had one severe arrest, and 3.6 % had two or more severe arrests.
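For readers who wish to check the conversion, the relevant relation between the standard error of a mean and the sample standard deviation is the standard one:

\[ \mathrm{SD} \;=\; \mathrm{SE} \times \sqrt{n} \qquad \Longleftrightarrow \qquad n \;=\; \left(\mathrm{SD}/\mathrm{SE}\right)^{2}, \]

where n is the number of youth in the group. Because the group sizes are not reported in this passage and the SEs are rounded to two decimals, any back-calculation of n from the figures above is approximate only.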

In response to the concern raised by the Commentator regarding the randomization process, the purpose of the CONSORT diagram was to provide a flow chart of individual participant selection and retention across time. The randomization of school pairs into intervention and control conditions is accurately described at the beginning of the Methods section.

We agree with the third concern noted by the Commentator that context matters in intervention research, that potential site effects should be considered in analyses, and that, as noted in the Commentary, the handling of site in this paper is an appropriate option—although not the only option—for analyses. Implementing the Fast Track intervention at four different sites across the country is a strength of the study, increasing the generalizability of the findings. We do agree with the Commentator's point that other methods for considering site effects would be useful; however, the decision to treat site as a fixed factor versus a random factor is a controversial one. We chose to treat site as a fixed factor and to address site-by-intervention effects through standard statistical tests. With regard to the tangential point raised about the Fast Track screening procedure, the description of the screening system in the first paragraph of the Methods section is accurate and complete. As the Commentator indicates, we have recognized over time the importance of describing the screening process and sample more elaborately and precisely than we did in initial publications, such that papers published over the last 10 years provide a more complete and precise description than those published in the prior decade.

As his fourth concern, the Commentator wondered whether the missing at random assumption was plausible for Juvenile Arrests in our article. As noted in the participants section, the parental consents necessary to collect Juvenile Arrest data were not obtained for 9 % of the intervention children and 14 % of the control children. These are relatively low overall levels of attrition across this long follow-up period, which extended from participant identification in kindergarten through age 18. However, attrition bias may be evident even when attrition rates are low. The participants section of the article summarized the attrition bias findings and noted that complete tables for attrition bias are available upon request. To recap, the intervention and control youth with missing and non-missing juvenile court records were compared on the 11 baseline variables that served as study covariates, including SES, five caregiver variables (caregiver depression, satisfaction with family and friend social support, stressful life events, use of verbal and physical punishment with the child), one observer variable (observed caretaker warmth toward the child), and three child variables (reading achievement, emotion understanding, and social competence). Control youth with missing Juvenile Arrest records experienced higher levels of physical punishment at baseline than intervention youth with missing records (1 of 11 variables significant). Control youth with Juvenile Arrest records had lower reading achievement scores at baseline than intervention youth with records (1 of 11 variables significant). Within the intervention group, those missing Juvenile Arrest records had lower caregiver satisfaction with family and friend support and experienced less physical punishment at baseline than those with records (3 of 11 variables significant, with 2 favoring youth with records and 1 favoring youth without records). Within the control group, there were no significant baseline differences between those with and without records (0 of 11 variables). Thus, overall, only 5 of the 44 attrition bias tests were significant, and there was no systematic pattern of attrition that would contraindicate the missing at random assumption.
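As a rough illustration of the kind of attrition check summarized above, the sketch below compares baseline covariates between youth with and without arrest records, separately within the intervention and control groups, using simulated data. The covariate names and the use of Welch two-sample t-tests are assumptions made for illustration; the article does not specify the exact tests used for the 44 comparisons.

```python
# Illustrative only: simulated data and hypothetical covariate names, not the Fast Track data.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(1)
n = 800
df = pd.DataFrame({
    "intervention": rng.integers(0, 2, n),          # 1 = intervention, 0 = control
    "has_record": rng.random(n) > 0.12,             # False = arrest record missing (no consent)
    "ses": rng.normal(0, 1, n),                     # hypothetical baseline covariates
    "caregiver_depression": rng.normal(0, 1, n),
    "reading_achievement": rng.normal(0, 1, n),
})
covariates = ["ses", "caregiver_depression", "reading_achievement"]

results = []
for cond, label in [(1, "intervention"), (0, "control")]:
    grp = df[df["intervention"] == cond]
    for cov in covariates:
        res = stats.ttest_ind(grp.loc[grp["has_record"], cov],
                              grp.loc[~grp["has_record"], cov],
                              equal_var=False)       # Welch t-test (assumed choice)
        results.append((label, cov, round(res.statistic, 2), round(res.pvalue, 3)))

print(pd.DataFrame(results, columns=["group", "covariate", "t", "p"]))
```

A low and unpatterned proportion of significant differences across comparisons of this kind is what the article takes as support for the missing at random assumption.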

The Commentator suggests that a violation of the missing at random assumption resulted in a positive intervention finding for Juvenile Arrest data that was not replicated on the Adult Arrest data. However, given the lack of any evidence that the missing at random assumption was violated, a simpler and more plausible interpretation is that both findings are valid: random assignment to intervention had a positive effect on preventing juvenile arrests and no significant effect on preventing adult arrests by age 18. The failure to find an intervention effect on Adult Arrests at this young age may reflect the low base rate of such arrests and the possibility that the intervention impact was not powerful enough to alter this low base rate variable when measured in the late adolescent period. Future analyses of adult arrest data collected once the sample has moved through early adulthood should provide a better basis on which to evaluate intervention impact on adult arrests.

In response to a tangential point made in the Commentary regarding the possibility that outcome data were biased due to data collection by intervention staff members, we can clarify that interventionists were involved as research staff members collecting data only during the first 3 years of the project, and only from families in the normative developmental study, whom they did not know. All of the data used in this paper were collected by non-intervention research staff members who were blind to participants’ intervention condition.

The fifth concern raised in the Commentary was that the hazard analyses (summarized in table 7) had a problematic covariate assessing whether youth spent time incarcerated in grades 8–12. The inclusion of an incarceration indicator as a covariate was recommended by a JEC reviewer for the analyses of intervention effects on the frequency of offenses, and it was included as a covariate only in the analyses summarized in table 6. It was not used in, and should not have been listed for, the analyses reported in table 7, and we have submitted an erratum to correct that entry.

In conclusion, we appreciate the opportunity to respond to the Commentary and to clarify the procedures and results that we presented in our paper on Fast Track effects on youth arrests and delinquency. We stand by our conclusion that random assignment to Fast Track had a positive impact in preventing juvenile arrests, and we echo our additional caveat that it will be essential to determine whether intervention produces any longer-term effects on adult arrests as the sample transitions into young adulthood.

We also appreciate the opportunity for open scientific debate on the values and risks associated with multiple analyses in long-term prevention program designs such as Fast Track. Intervention research of this kind is extremely challenging. By moving outside the parameters of clinical trial-based research, and opting for population-based sampling and recruitment along with long-term longitudinal designs, prevention research of this kind strives to engage the most vulnerable children and families. As in the Fast Track program sample, this strategy often means that a substantial proportion of the participating children and families lead lives characterized by multiple disadvantages and live in contexts of significant risk. Identifying, recruiting, engaging, and maintaining these participants is a challenge, as is the delivery of high-fidelity intervention across multiple and diverse communities and sites. Once accomplished, prudence dictates the full use of the completed datasets to achieve as much understanding as possible about the nature of parameters affecting developing psychopathology (or resilience) and about the impact, process, strengths, and weaknesses of the intervention approach. Balancing this desire to learn from extended analyses of the available data is the need to maintain awareness and caution regarding potential risks in the design or approach that may confound interpretation of findings, in the manner raised by the Commentator. Given the paramount importance of moving forward in our capacity as a society to reduce antisocial behavior and prevent violence, the practical and scientific challenges associated with this kind of preventive intervention research are well worth ongoing effort and investment.

Acknowledgments

This work was supported by National Institute of Mental Health (NIMH) grants R18 MH48043, R18 MH50951, R18 MH50952, and R18 MH50953. The Center for Substance Abuse Prevention and the National Institute on Drug Abuse also have provided support for Fast Track through a memorandum of agreement with the NIMH. This work was also supported in part by Department of Education grant S184U30002, NIMH grants K05MH00797 and K05MH01027, and NIDA grants DA16903, DA017589, and DA015226.

Footnotes

Members of the Conduct Problems Prevention Research Group, in alphabetical order, include Karen L. Bierman, Department of Psychology, Pennsylvania State University; John D. Coie, Department of Psychology and Neuroscience, Duke University; Kenneth A. Dodge, Center for Child and Family Policy, Duke University; Mark T. Greenberg, Department of Human Development and Family Studies, Pennsylvania State University; John E. Lochman, Department of Psychology, The University of Alabama; Robert J. McMahon, Department of Psychology, University of Washington; and Ellen E. Pinderhughes, Department of Child Development, Tufts University.

We are grateful for the close collaboration of the Durham Public Schools, the Metropolitan Nashville Public Schools, the Bellefonte Area Schools, the Tyrone Area Schools, the Mifflin County Schools, the Highline Public Schools, and the Seattle Public Schools. We greatly appreciate the hard work and dedication of the many staff members who implemented the project, collected the evaluation data, and assisted with data management and analyses. We particularly express appreciation to Jennifer Godwin for her work on data analyses for this paper.

References

  1. Brown CH, Wang W, Kellam SG, Muthen BO, Petras H, et al. Methods for testing theory and evaluating impact in randomized field trials: intent-to-treat analyses for integrating the perspective of person, place, and time. Drug and Alcohol Dependence. 2008;95S:S74–S104. doi: 10.1016/j.drugalcdep.2007.11.013.
  2. Coie JD, Watt NF, West SG, Hawkins JD, Asarnow JR, Markman HJ, Ramey SL, Shure MB, Long B. The science of prevention: a conceptual framework and some directions for a national research program. American Psychologist. 1993;48:1013–1022. doi: 10.1037//0003-066x.48.10.1013.
  3. Conduct Problems Prevention Research Group. A developmental and clinical model for the prevention of conduct disorders: the FAST Track Program. Development and Psychopathology. 1992;4:509–527.
  4. Conduct Problems Prevention Research Group. Using the Fast Track randomized prevention trial to test the early-starter model of the development of serious conduct problems. Development and Psychopathology. 2002;14:927–945. doi: 10.1017/s0954579402004133.
  5. Conduct Problems Prevention Research Group. The Fast Track experiment: translating the developmental model into a prevention design. In: Kupersmidt JB, Dodge KA, editors. Children's peer relations: from development to intervention. American Psychological Association; Washington, DC: 2004. pp. 181–208.
  6. Conduct Problems Prevention Research Group. Fast Track intervention effects on youth arrests and delinquency. Journal of Experimental Criminology. 2010;6:131–157. doi: 10.1007/s11292-010-9091-7.
  7. Dodge KA, Greenberg MT, Malone PS, the Conduct Problems Prevention Research Group. Testing an idealized dynamic cascade model of the development of serious violence in adolescence. Child Development. 2008;79:1907–1927. doi: 10.1111/j.1467-8624.2008.01233.x.
  8. Sroufe LA, Rutter M. The domain of developmental psychopathology. Child Development. 1984;55:17–29.
