Author manuscript; available in PMC: 2009 Oct 27.
Published in final edited form as: J Rural Health. 2007 Fall;23(Suppl):29–36. doi: 10.1111/j.1748-0361.2007.00121.x

Examining the Effects of School-based Drug Prevention Programs on Drug Use in Rural Settings: Methodology and Initial Findings

C Hendricks Brown, Jing Guo, L Terri Singer, Katheryne Downes, Joseph M Brinales
PMCID: PMC2768124  NIHMSID: NIHMS145876  PMID: 18237322

Abstract

Context

Although there have been substantial advances in knowledge about drug prevention over the last decade, the majority of school-based drug prevention studies have been conducted in urban settings. There is little knowledge about the effectiveness of such programs when they are implemented in rural populations.

Purpose

To examine the prevention effects of school-based drug prevention programs implemented in rural populations.

Methods

Mixed model or two-level meta-analysis of trials based on school-based drug prevention programs that included rural populations. A total of 182 trials were coded for urbanicity of schools and 22 separate trials were selected for the analysis conducted in this paper. A total of 435 distinct analyses were examined from these 22 trials.

Findings

We found a modest but consistent beneficial impact of drug prevention programs on later use as well as level of use. Regarding later drug use, the largest impact was on those who were not using at baseline and those exposed to an interactive program; the results were much larger for marijuana and other drugs compared to alcohol or tobacco, while inhalant use was less affected than other drug categories. Regarding level of use, the impact was greatest six months after the trial ended, with diminishing effects thereafter.

Conclusions

Evidence exists for a small but systematic beneficial effect of drug prevention programs in rural settings. It is likely that these programs have produced a mild reduction in new use of substances but have had little impact on those already using substances.

Suggested keywords: drug prevention, meta-analysis

Introduction

Despite the last decade’s substantial advances in knowledge about drug prevention, comparatively little is known about the effectiveness of drug prevention programs in rural as opposed to urban settings. Most prevention researchers are located in urban universities, so it is not surprising that the intense efforts required of prevention researchers to establish working community partnerships and conduct rigorous trials frequently occur in urban settings. In this paper we present an initial look at the effects of school-based drug prevention programs that were identified by Tobler in an earlier meta-analysis.1 These trials were classified into rural/urban categories, and we then focused on the 22 trials that included rural settings. We also devote some attention to enhancing meta-analytic techniques to carry out multilevel models that distinguish effects of community and trial design, intervention, and the timing and type of drug use outcome. We provide initial estimates of the overall and drug-specific impact of such school-based programs in rural settings.

While the overall magnitude of impact in rural communities is important, examining the variation in these estimates of impact can inform us about conditions that allow for higher or lower impact and lead to more generally valid theories of intervention. Epidemiologic studies frequently identify distinct patterns of drug use across rural communities. The first step in the Communities that Care model,2 for example, is to develop a profile of drug use and other risk and protective factors that can guide the community in choosing a prevention program that addresses their unique problems. Since environmental factors such as drug availability, peer influences, community norms, and the availability of community services all play major roles in youth drug use, it is possible that prevention programs can also have differential impact across different rural communities. We hypothesize several factors that relate to the magnitude of intervention impact. First, as Tobler found,1 intervention programs that rely on interactive dialogues with youth should have larger effects than those delivered through lectures. This relationship is expected to hold in rural settings as well.

Secondly, we anticipate that program effects should vary more when the study design is of poor quality compared to high quality. Two advantages of well-conducted trials, where schools and children are assigned to intervention using randomization and blocking, are that both bias and variance should be reduced compared to studies with less rigorous intervention assignment, higher attrition, and the like.3, 4 Thus, we predict intervention impact to be higher and less variable when the trial is conducted more rigorously.5

We also expect that rural communities experiencing high substance use will show higher rates of impact than those with lower substance use. Our reasoning for predicting this is twofold. Communities with high rates of substance use have more opportunity to reduce these rates. Also, we would predict that many rural communities with high rates of drug use are ones that have had little exposure to good prevention programs in the past. Thus the implementation of a novel prevention effort in the school could lead to greater reduction in drug use compared to communities whose “controls” are already exposed to some prevention elements.

To examine these twin objectives of characterizing overall prevention effects and their sources of variation in rural settings, we needed to develop new methodology for carrying out these meta-analyses. First, some of the drug use outcomes are dichotomies, such as “ever use cigarettes,” while others are based on ordinal scales, such as “number of times you have used cigarettes in the last month.” Since we did not have access to the original data and can only rely on summaries of findings in this meta-analysis, we chose to perform separate analyses for dichotomous variables, using log odds ratios, and for continuous-level variables, using effect sizes. Additionally, our multilevel analyses include standard errors that take into account clustering within a trial, where schools are typically the unit of assignment to the intervention, as well as clustering caused by multiple outcome measures taken on the same individuals within a trial. This new method is described in the appendix and applied to school-based intervention trials that were identified by Tobler1 and tested in rural settings.

Methods

For this paper, we began with a well-established although somewhat dated list of school-based drug prevention programs, those identified by Tobler and her colleagues in 2000. The literature on school-based drug prevention programs has expanded considerably since this publication, so it would be possible to extend the number of trials for our analyses. However, it would require a vast amount of work to obtain a complete up-to-date search of this literature, and we recognized that we could easily introduce systematic bias if we did an incomplete search through the new literature. Thus we chose to limit our analyses to the carefully completed enumeration provided nearly six years ago. In that meta-analysis, Tobler and colleagues reviewed the impact of school-based prevention in 182 papers. We then applied our own criteria for inclusion in our meta-analysis: all programs had to be school-based, take place in the United States, specifically target prevention of drug use, and be published in a form other than a dissertation. From this list, we coded each paper as to the urbanicity of the schools, based first on author descriptions in the paper. Typically, authors identified the study locale and indicated the number of urban and rural schools in the sample. Approximately 80% of the papers identified in Tobler’s review could be excluded based on evidence from the report indicating that the sites only included non-rural settings. When such descriptions by the authors were not available, we attempted to contact the author; for the few cases where we were unsuccessful in determining rurality, we identified the locale and relied on current classifications based on the U.S. Agriculture Department’s Economic Research Service.6 A total of 24 of the 182 papers were classified as including rural populations.7-30 We noted that two distinct pairs of these papers were derived from the same studies. Thus the data on 22 separate trials were examined further.

The 22 trials were conducted between the years 1978 and 1995 and took place in a variety of different geographical locations such as California, Oregon, New York, Vermont, South Carolina, western Washington, and southeastern Michigan. The intervention programs included Drug Abuse Resistance Education (DARE), Project Alert, Here’s Looking at You, Here’s Looking at You Two, Project Model Health, Project Northland, and Life Skills Training. The types of interventions ranged from substance abuse/misuse prevention, intensive in-school health promotion, and social-influence prevention to life-skills training and educational presentations. One program specifically tailored its intervention to Native American students to address the unique cultural issues faced by this group.

Measures at the level of community/intervention/trial

Our data were further classified into primarily rural or not primarily rural based on the proportion of rural schools in the sample. Again, this criterion of primarily rural was taken either from the author’s explicit statement or based on a majority (more than half) of the schools being classified by the author as rural. The average age of the youth at the beginning of the study was also computed so we could assess the impact based on the target’s grade level.

We coded the intervention across two dimensions. The intervention could be directed only at tobacco (cigarettes and/or smokeless tobacco), only at alcohol, or a combination of substances. It could be classified as either an interactive or primarily a non-interactive program. The interactive coding came from Tobler’s original coding. All but one of the interventions were universal, so no variation across type of intervention target was available for analyses.

To assess the quality of a trial, we coded each trial across nine dimensions using a modified scale originating from meta-analyses done by the Cochrane Collaboration and then applied by us in previous work on interventions on children aged 0–6.5,31

Our measure of trial quality is based on a nine-point scale. A somewhat similar tool has been used by the Cochrane Collaboration and others to determine whether major elements of a trial are present.32 The current “Cochrane” scale used in this report gives one point to every positive response on these items: (1) Aims stated clearly, (2) “Randomized controlled trial” or a trial with a “comparable comparison group,” (3) an intervention that is described sufficiently to be replicable, (4) the number of recruited subjects is provided, (5) pre-intervention data are provided, (6) the level of attrition is discussed, (7) results of all measured outcomes are discussed, (8) post-intervention results are provided for all intervention groups, and (9) the number of intervention and control groups is provided. Higher scores indicate a better quality study. All relevant statements by the authors were taken at face value. For example, if the author used the words “randomized assignment,” this was sufficient to receive a positive score on this item. It should be noted that the coding used in this study may differ from others using similar items. A coding manual was developed with flow charts to maintain reliability in the coding.

Also, at this stage, individual reports were hand-linked if they were based on the same intervention trial. It should be noted that the coding of the trial quality scale was assessed at the level of individual reports and then combined across all reports for each trial. For example, if “attrition was discussed” appeared in one report, then the trial was coded positive for this item regardless of whether it was present or absent in any of the other reports on this trial.
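As a sketch, the nine-item quality score and the any-report combining rule described above can be expressed as follows; the item names are hypothetical shorthand for the nine elements listed in the text, not codes from the original manual:

```python
# Hypothetical shorthand names for the nine quality-scale items.
QUALITY_ITEMS = (
    "aims_stated", "randomized_or_comparable_comparison",
    "replicable_intervention", "n_recruited_reported",
    "pretest_data_provided", "attrition_discussed",
    "all_outcomes_reported", "posttest_all_groups",
    "n_groups_reported",
)

def report_score(report):
    # report: dict mapping item name -> bool, coded at face value from
    # the paper's text. One point per positive item; higher is better.
    return sum(bool(report.get(item, False)) for item in QUALITY_ITEMS)

def trial_score(reports):
    # A trial combines all of its linked reports: an item is positive for
    # the trial if it is positive in ANY report on that trial.
    return sum(any(r.get(item, False) for r in reports)
               for item in QUALITY_ITEMS)
```

For example, a trial with one report discussing attrition and another stating its aims would score 2, even though neither report alone covers both items.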

Because of its importance to the conduct of rigorous research evaluations, we included as a predictor whether or not random assignment to intervention took place. For nearly all the studies, these randomizations were at the level of school (that is, they were group-based randomized trials).

Measures at the Outcome Level

For every trial all outcomes in these 24 papers pertaining to substance use after the intervention period had ended were coded using a standardized format. There were a total of 435 outcome analyses in 22 trials; these differed by type of substance use, time of follow-up, whether it was conducted on a subgroup (for example, those who were nonusers at baseline or for males only) or on the entire population. We coded each outcome on the measurement properties of the outcome variable. We found that it was possible to classify nearly every outcome as either dichotomous or (close to) continuous. For the few trichotomous outcomes, we created a dichotomy by separating the highest use category from the others.

For the 93 distinct analyses involving continuous variables reported in the 22 trials, we calculated an effect size by dividing the difference between intervention and control outcomes by the total standard deviation in the sample. Occasionally, more than one standard deviation was computable. We selected that of the control group if available; if not, we selected a standard deviation based on pooling both intervention and control groups.

The standard error for effect size within the experiment was computed using the Delta Method as described in the Appendix. Since many of the statistical analyses in the original papers did not correctly account for the effects of the group-based assignment, we computed two standard errors: one based on an independence assumption at the individual level and one based on group assignment. Since the latter quantity depends on the intra-class correlation, which was generally not available from the text, we assumed an intra-class correlation of 0.05, which in drug prevention work is considered a reasonable high-end value for such data. For our analyses we used the larger of these two quantities. Details are provided in the appendix.
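Under the assumptions above (and the appendix formulas), the two candidate standard errors and the conservative choice between them can be sketched as follows; the function names are illustrative, not from the original analysis:

```python
import math

def se_effect_size_independent(es, n1, n2):
    # Delta-method SE of a standardized mean difference assuming
    # independent individual-level responses (n1, n2 = youths per arm).
    return math.sqrt(1.0 / n1 + 1.0 / n2 + es ** 2 / (2.0 * (n1 + n2)))

def se_effect_size_clustered(m1, m2, icc=0.05):
    # Approximate SE under school-level assignment with many youths per
    # school (m1, m2 = schools per arm): the effect-size variance reduces
    # to (1/m1 + 1/m2) * ICC, with ICC = 0.05 assumed as in the text.
    return math.sqrt((1.0 / m1 + 1.0 / m2) * icc)

def conservative_se(es, n1, n2, m1, m2, icc=0.05):
    # The analysis uses the larger of the two candidate standard errors.
    return max(se_effect_size_independent(es, n1, n2),
               se_effect_size_clustered(m1, m2, icc))
```

With few schools per arm, the clustered SE typically dominates even when the individual-level sample is large, which is why ignoring group assignment overstates precision.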

A similar approach was taken for the analysis of the 342 dichotomous outcomes, but in the place of effect sizes we used log odds ratios. Similarly, two standard errors for these log odds ratios were computed based on 1) an assumption of independence at the level of the child and 2) independence at the level of the cluster (typically school), with an intra-class correlation of 0.05 as before. The larger of these two values was used to assess the variability of these log odds ratios.
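A parallel sketch for the dichotomous outcomes, assuming the appendix approximation for the between-school variance component; again the function names and parameterization are illustrative:

```python
import math

def se_log_or_independent(n11, n12, n13, n14):
    # Standard error of log(OR) from the four cell counts of the
    # two-by-two table, assuming independent subjects (Woolf's formula).
    return math.sqrt(1.0 / n11 + 1.0 / n12 + 1.0 / n13 + 1.0 / n14)

def se_log_or_clustered(n11, n12, n13, n14, n_schools, p_control, icc=0.05):
    # Approximate SE allowing for non-independence within schools: the
    # independence variance plus a between-school component
    # ICC / (s * p_c(1 - p_c)), per the appendix derivation.
    var_ind = 1.0 / n11 + 1.0 / n12 + 1.0 / n13 + 1.0 / n14
    var_between = icc / (n_schools * p_control * (1.0 - p_control))
    return math.sqrt(var_ind + var_between)
```

As with effect sizes, the larger of the two values would be carried into the meta-analysis.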

Both effect size and log odds ratio values were recoded so that a value greater than 0 indicated beneficial impact of the intervention compared to the control condition.

We used four covariates that were codable at the level of the individual outcome evaluation. First, each outcome was assigned to a general drug class: alcohol, inhalants, marijuana, tobacco, and other drugs. Secondly, we coded the length of time of follow-up in years after the intervention period had ended. We anticipated that the impact could differ across time. We also distinguished a small number of subgroups that were repetitively examined in this study: males and females, nonusers at baseline, and users at baseline. Our analytical question was whether the interventions affect subjects differentially.

Analyses

We followed standard methods for mixed model or empirical Bayes meta-analyses as originally developed by Hedges, et al.33 Such methods are more conservative than the fixed effects alternative models, but because they incorporate heterogeneity in impact, this conservatism is warranted. All analyses were conducted using Mplus version 3.13 (Muthén & Muthén, Los Angeles) using two-level models. These two-level analyses allowed for the inclusion of standard errors computed for each outcome, as well as additional correlation of different outcomes within the same study. We used random slopes to account for the former and residual unexplained variance for the latter. To compensate for a potentially different scale for effect size versus log odds ratio in combined analyses with continuous and discrete variables, we allowed both the mean and the variance to vary by this factor.
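The two-level models above were fit in Mplus; as an illustrative stand-in only, a simple DerSimonian-Laird random-effects pool shows why incorporating between-trial heterogeneity yields a more conservative standard error than a fixed-effects (inverse-variance-only) combination:

```python
import math

def random_effects_pool(estimates, ses):
    # Illustrative DerSimonian-Laird random-effects pooling (not the
    # paper's Mplus model): estimate heterogeneity tau^2 from Cochran's
    # Q, then reweight each outcome by 1 / (se^2 + tau^2).
    w = [1.0 / s ** 2 for s in ses]           # fixed-effects weights
    total_w = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / total_w
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    k = len(estimates)
    c = total_w - sum(wi ** 2 for wi in w) / total_w
    tau2 = max(0.0, (q - (k - 1)) / c)        # method-of-moments tau^2
    w_re = [1.0 / (s ** 2 + tau2) for s in ses]
    pooled = sum(wi * yi for wi, yi in zip(w_re, estimates)) / sum(w_re)
    se_pooled = math.sqrt(1.0 / sum(w_re))    # >= fixed-effects SE
    return pooled, se_pooled, tau2
```

When the trial effects disagree by more than their internal standard errors allow, tau-squared is positive and the pooled standard error grows, whereas a fixed-effects pool would (misleadingly) keep shrinking as trials are added.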

A small proportion of effect sizes and log odds ratios were not precisely computable given the summary statistics; for example, when a report indicated only that the effect was non-significant. Had a specific significance level been given, it would generally be possible to calculate an effect size directly. Since the number of these uncertain outcomes was small, we dropped these from our analyses.
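When a report gave a precise significance level and sample sizes, an effect size can be back-calculated; a minimal sketch under a normal approximation (the function and its approximation are ours, not the paper's):

```python
import math
from statistics import NormalDist

def es_from_pvalue(p_two_sided, n1, n2, sign=1.0):
    # Back-calculate a standardized effect size from a reported
    # two-sided p-value: z = Phi^{-1}(1 - p/2), then
    # ES ~ z * sqrt(1/n1 + 1/n2). The sign (direction of effect) must
    # be supplied from the report, since p-values are unsigned.
    z = NormalDist().inv_cdf(1.0 - p_two_sided / 2.0)
    return sign * z * math.sqrt(1.0 / n1 + 1.0 / n2)
```

Reports stating only "non-significant," with no numeric p-value, give this calculation nothing to work with, which is why those outcomes were dropped.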

Results

The overall mean effect size for all drug use outcomes for the 22 prevention programs conducted in rural settings was 0.11 (SE = 0.045, p < 0.05). Thus, there was evidence of a significant effect of these school-based drug prevention interventions on continuous outcomes. Also, fully three-quarters of the effect sizes were positive. In contrast, there was a negligible effect overall on binary outcomes since the log odds ratios had an overall mean of 0.041 (SE = 0.41, p > 0.50). For both continuous and binary outcomes, there was substantial variability in the outcomes above and beyond the internal standard errors, even with our conservative recalculations taking into account the group randomizations for most of these trials. We first included covariates at the level of the individual analysis to explain this variation then added covariates at the level of the trial. Because of the lower power for trial-level variables, we report findings that are significant at the 0.10 level.

For outcomes involving continuous variables, two outcome analysis-level factors significantly predicted effect size: the duration of follow-up and the type of drug examined at outcome. First, there was a significantly larger impact on outcomes a half-year after the intervention had ended compared to immediate follow-up (effect size difference of 0.187, SE = 0.054, p < 0.001) and a strong asymptotic decline towards zero in effect size thereafter (p < 0.01). The intervention impact was similar across all drug categories except inhalants, which showed significantly less effect than did alcohol and other drugs (effect size difference −0.137, SE = 0.044, p < 0.002). With these predictors in the model, there was little remaining variance at the individual outcome level to be explained (p = 0.05).

At the level of the trial, the effect sizes appeared suggestively larger in samples that had a majority of schools that were rural compared to those where rural schools were in the minority (effect size difference 0.117, SE = 0.072, p = 0.10). Also, compared to interactive interventions, programs that were noninteractive appeared slightly stronger (effect size difference −0.165, SE = 0.100, p = 0.10).

In analyzing the binary outcomes using log odds ratios, we found that impact across outcomes differed by drug category, with impact on tobacco and alcohol significantly lower than that of all other drugs (log odds ratio 0.124, SE = 0.028, p <0.05 for other drugs versus alcohol). The impact on tobacco was non-significant and less in magnitude than that for alcohol (log odds ratio −0.056, SE = 0.034, p = 0.10). The strongest predictor of impact of these interventions, however, was on nonusers at baseline compared to users at baseline (log odds ratio 0.124, SE = 0.028, p < 0.0001). In contrast to analyses involving continuous variables, we found no significant impact on duration of follow-up, and lack of data precluded us from examining inhalant use alone.

In examining trial-level covariates for binary outcomes, only one factor appeared significant. Interactive programs showed a significant benefit over noninteractive ones (log odds ratio 0.217, SE = 0.103, p < 0.05). All other trial-level characteristics, including whether there was a majority of rural schools in the sample, were non-significant predictors.

Discussion

We found a consistent pattern of beneficial impact of these prevention programs on rural youth, although the average effect sizes and log odds ratios were small. For continuous outcomes, the overall impact was minimal immediately after the intervention period had ended, then increased to a maximum effect at half a year, then dropped slowly over the next two years. For binary outcomes, the impact was greatest for non-users and on other drugs besides alcohol and tobacco; thus impact was on the substances used least during adolescence. Interactive programs were found to be more beneficial for binary outcomes while they appeared slightly less so for continuous outcomes.

Two reasons can explain these differing findings by type of outcome. First, the limited amount of continuous-level analysis that distinguished baseline users from non-users prevented us from examining whether this factor contributed any explanation to the effect sizes; similarly, there was only one dichotomous measure that examined inhalant use. Second, all of the continuous measures related to level of substance use, which is distinct from user/non-user status.

The fact that school-based preventive interventions show consistent evidence of positive effects in rural settings provides initial support for the potential benefit of existing programs. While some of these programs no doubt incorporated elements that related to the community’s own norms, values, and cultures, based on the curriculum of these studies, most of these programs made no special distinctions that dealt with rural society. In the midst of the current debate regarding the degree to which prevention programs need to maintain fidelity to the original program versus ab initio development or local adaptation to the specific target community, 34 these findings suggest our current programs do provide some benefit. Indeed, the data we now have provide some support that some of these interventions could have a slightly higher effect in rural areas compared to suburban or urban areas.

Another new indication from our data is that in primarily rural settings, interactive programs show benefit on use/non-use measures but less evidence on level of use. Furthermore, impact on use/non-use is much stronger in delaying initiation among non-users than in promoting cessation among users. The fact that impact appears least on alcohol and tobacco use, the two substances most commonly used by adolescents, compared to all other substances suggests that the currently tested programs have some primary preventive effect but little secondary effect. It may be that rural communities had so few indigenous prevention efforts that even non-interactive programs did some good.

From a methodologic perspective, our multilevel meta-analysis provided a more general approach to examining intervention impact than what is often done. Unlike the Tobler, et al. meta-analysis, which performs a general least squares regression with each observation weighted by the inverse of its internal variance, the full multilevel model or empirical Bayes method that we have conducted here provides a more appropriate, conservative measure of variance that takes into account both within and between study variance. For example, the Tobler, et al. methodology necessarily requires that overall precision of intervention impact increase whenever a new trial is added, even if that intervention effect is substantially different from all other trials’ intervention effects. In contrast, our methodology incorporates both the within-trial and between-trial variability in calculating the uncertainty regarding intervention impact. This more conservative mixed model approach is recommended because it takes into account variability in intervention impact, an empirical finding that we clearly find in this work. 35

One surprising methodologic issue occurred because we found dramatic differences in the results from continuous outcomes compared to dichotomous outcomes. We had planned to incorporate both types of outcomes in the same analysis using scaling, but this strategy appears to be of little value at the current time. There have been several approaches to this problem in the literature. Tobler and colleagues, for example, converted all evaluations to effect sizes and did not make any distinctions in their analysis. Such equivalences can lead to misinterpretations, so we have kept our two sets of analyses distinct from one another.

There were two major limitations to this study. First, this meta-analysis suffers from all the known limitations of such studies. For example, there may be publication bias against rigorously conducted studies that were never published because of null findings. Our analyses are likely to be most sensitive to publication bias in our examination of intervention impact within subgroups, such as baseline non-users. Also, meta-analyses are limited to the variables that are available; here we had to rely on relatively coarse information in the reported analyses. Most importantly, there was only modest information about the urbanicity-rurality continuum. While actual population statistics would have been ideal, we had to rely on the authors’ counts of schools located in either rural or urban settings. Most of these trials included both rural and non-rural schools, and few of the reported analyses are specific to the rural settings alone. Indeed, not a single author reported impact by rurality, thus limiting our ability to pinpoint whether the prevention impact varied across such schools. Also, we were unable to code some important measures, such as the level of baseline substance use, consistently across all studies. Thus an important predictive variable was not available for this analysis.

Second, the studies included here were based only on those reported by Tobler and colleagues; the vast expansion in effectiveness and efficacy prevention trials that occurred since that time is not included here. A more exhaustive literature review is now likely to reveal additional studies, especially those published after her meta-analyses. Future plans are to perform literature searches by both controlled vocabulary and free-text terms in various databases, as well as searching individual authors listed in these studies to expand our database of studies.

Acknowledgments

We gratefully acknowledge the support of the National Institute of Drug Abuse through a supplement to grant number R01-MH40859. An earlier version of this paper was presented at the NIDA-sponsored conference on Rural Drug Prevention in Bethesda, MD, December 2004.

Appendix

  • (1) We used the following equation for the standard error of the effect size, itself defined by

      $ES = \dfrac{\bar{X}_1 - \bar{X}_2}{SD_{pooled}}$,

    where $\bar{X}_1$ is the mean of the treatment group and $\bar{X}_2$ is the mean of the control group.

    1. First, we assume that all youths’ responses within a trial are independent. By the Delta method, the standard error of the effect size can be expressed as

       $se(ES) = \sqrt{\dfrac{1}{N_1} + \dfrac{1}{N_2} + \dfrac{ES^2}{2(N_1 + N_2)}}$,

       a well-known result.

    2. Second, we assume that within a trial, responses at the youth level are not independent but those across different schools, or other pertinent level of random assignment, are independent.

       We will assume that the number of youth per school is the same and large relative to the number of schools. Then, with $\sigma_s^2$ representing the variance across schools and $\sigma_k^2$ that across youth,

       $\bar{X}_1 \sim N\!\left(\mu_1, \dfrac{\sigma_s^2}{N_1}\right), \quad \bar{X}_2 \sim N\!\left(\mu_2, \dfrac{\sigma_s^2}{N_2}\right), \quad SD^2 \approx \sigma_s^2 + \sigma_k^2, \quad var(SD^2) \approx \dfrac{2(\sigma_s^2 + \sigma_k^2)^2}{N_1 + N_2},$

       where the sample sizes $N_1$ and $N_2$ are now the numbers of schools. Then, writing $f = (\bar{X}_1 - \bar{X}_2)/SD_{pooled}$,

       $var(f) \approx \left(\dfrac{\partial f}{\partial \bar{X}_1}\right)^2 var(\bar{X}_1) + \left(\dfrac{\partial f}{\partial \bar{X}_2}\right)^2 var(\bar{X}_2) + \left(\dfrac{\partial f}{\partial SD^2}\right)^2 var(SD^2)$

       $\approx \dfrac{1}{\sigma_s^2 + \sigma_k^2}\cdot\dfrac{\sigma_s^2}{N_1} + \dfrac{1}{\sigma_s^2 + \sigma_k^2}\cdot\dfrac{\sigma_s^2}{N_2} = \left(\dfrac{1}{N_1} + \dfrac{1}{N_2}\right)\dfrac{\sigma_s^2}{\sigma_s^2 + \sigma_k^2} = \left(\dfrac{1}{N_1} + \dfrac{1}{N_2}\right) \times ICC.$
  • (2) The following equation is developed for the standard error of log(OR):

    1. First, we assume all subjects’ outcomes within a trial are independent; then the standard error of log(OR) can be calculated as

       $se(\log(OR)) = \sqrt{\dfrac{1}{n_{11}} + \dfrac{1}{n_{12}} + \dfrac{1}{n_{13}} + \dfrac{1}{n_{14}}}$,

       where the sample sizes correspond to total cell counts in the two-by-two table.

    2. If there is non-independence across schools, an approximate value for the standard error of the log odds ratio can be expressed using $s$, the total number of schools in the study. For cluster $i$,

       $\hat{\theta}_i = \log(OR) \text{ for cluster } i \sim N\!\left(\mu_i,\; s\left(\dfrac{1}{n_{11}} + \dfrac{1}{n_{12}} + \dfrac{1}{n_{13}} + \dfrac{1}{n_{14}}\right)\right),$

       and their average is

       $\bar{\hat{\theta}} = \dfrac{\sum_i \hat{\theta}_i}{s} \sim N\!\left(\mu,\; \dfrac{\sigma_s^2}{s} + \left(se(\log(OR))\right)^2\right), \quad \text{where } \sigma_s^2 = var(\mu_i).$

       Let the within-school variability in the control-group proportion estimate be given by $p_c(1 - p_c)$, and let the between-school variance be $\sigma_p^2 = var(p_{ci})$. Then, by the Delta method,

       $\sigma_s^2 \approx \dfrac{\sigma_p^2}{\left(p_c(1 - p_c)\right)^2} \quad \text{and} \quad var(\bar{\hat{\theta}}) = se^2(\log(OR)) + \dfrac{ICC}{s \cdot p_c(1 - p_c)}.$

References

  • 1.Tobler NS, Roona MR, Ochshorn P, Marshall DG, Streke AV, Stackpole KM. School-based adolescent drug prevention programs: 1998 meta-analysis. Journal of Primary Prevention. 2000 Sum;20(4):275–336. [Google Scholar]
  • 2.Hawkins JD, Arthur MW, Olson JJ. Community interventions to reduce risks and enhance protection against antisocial behavior. In: Stoff DW, Breiling J, Masers JD, editors. Handbook of antisocial behaviors. New York: John Wiley and Sons, Inc; 1998. pp. 365–374. [Google Scholar]
3. Shadish WR, Cook TD, Campbell DT. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Boston, MA: Houghton Mifflin Co; 2002.
4. Shadish WR, Heinsman DT. Experiments versus quasi-experiments: do they yield the same answer? NIDA Research Monograph. 1997;170:147–164.
5. Brown CH, Berndt D, Brinales JM, Zong X, Bhagwat D. Evaluating the evidence of effectiveness for preventive interventions: using a registry system to influence policy through science. Addictive Behaviors. 2000 Nov–Dec;25(6):955–964. doi:10.1016/s0306-4603(00)00131-3.
6. United States Department of Agriculture, Economic Research Service [web site]. January 17, 2006. Available at: http://ers.usda.gov/. Accessed January 26, 2006.
7. Ary DV, Biglan A, Glasgow R, et al. The efficacy of social-influence prevention programs versus “standard care”: are new initiatives needed? Journal of Behavioral Medicine. 1990 Jun;13(3):281–296. doi:10.1007/BF00846835.
8. Battistich V, Schaps E, Watson M, Solomon D. Prevention effects of the Child Development Project: early findings from an ongoing multisite demonstration trial. Journal of Adolescent Research. 1996 Jan;11(1):12–35.
9. Bell RM, Ellickson PL, Harrison ER. Do drug prevention effects persist into high school? How Project ALERT did with ninth graders. Preventive Medicine. 1993 Jul;22(4):463–483. doi:10.1006/pmed.1993.1038.
10. Botvin GJ, Baker E, Dusenbury L, Tortu S, Botvin EM. Preventing adolescent drug abuse through a multimodal cognitive-behavioral approach: results of a 3-year study. Journal of Consulting & Clinical Psychology. 1990 Aug;58(4):437–446. doi:10.1037//0022-006x.58.4.437.
11. Clarke JH, MacPherson B, Holmes DR, Jones R. Reducing adolescent smoking: a comparison of peer-led, teacher-led, and expert interventions. Journal of School Health. 1986 Mar;56(3):102–106. doi:10.1111/j.1746-1561.1986.tb05707.x.
12. Collins D, Cellucci T. Effects of a school-based alcohol education program with a media prevention component. Psychological Reports. 1991 Aug;69(1):191–197. doi:10.2466/pr0.1991.69.1.191.
13. Ellickson PL, Bell RM. Drug prevention in junior high: a multi-site longitudinal test. Science. 1990 Mar;247(4948):1299–1305. doi:10.1126/science.2180065.
14. Ennett ST, Rosenbaum DP, Flewelling RL, Bieler GS, Ringwalt CL, Bailey SL. Long-term evaluation of Drug Abuse Resistance Education. Addictive Behaviors. 1994 Mar–Apr;19(2):113–125. doi:10.1016/0306-4603(94)90036-1.
15. Gilchrist LD, Schinke SP, Trimble JE, Cvetkovich GT. Skills enhancement to prevent substance abuse among American Indian adolescents. International Journal of the Addictions. 1987 Sep;22(9):869–879. doi:10.3109/10826088709027465.
16. Harmon MA. Reducing the risk of drug involvement among early adolescents: an evaluation of Drug Abuse Resistance Education (DARE). Evaluation Review. 1993 Apr;17(2):221–239.
17. Hopkins RH, Mauss AL, Kearney KA, Weisheit RA. Comprehensive evaluation of a model alcohol education curriculum. Journal of Studies on Alcohol. 1988 Jan;49(1):38–50. doi:10.15288/jsa.1988.49.38.
18. Kim S, McLeod JH, Shantzis C. An outcome evaluation of Here’s Looking At You 2000. Journal of Drug Education. 1993;23(1):67–81. doi:10.2190/WF2C-72QJ-TR9G-UNH1.
19. Moberg DP, Piper DL. An outcome evaluation of Project Model Health: a middle school health promotion program. Health Education Quarterly. 1990 Spring;17(1):37–51. doi:10.1177/109019819001700106.
20. Perry CL, Williams CL, Veblen-Mortenson S, et al. Project Northland: outcomes of a communitywide alcohol use prevention program during early adolescence. American Journal of Public Health. 1996 Jul;86(7):956–965. doi:10.2105/ajph.86.7.956.
21. Ringwalt C, Ennett ST, Holt KD. An outcome evaluation of Project DARE (Drug Abuse Resistance Education). Health Education Research. 1991 Sep;6(3):327–337.
22. Rosenbaum DP, Flewelling RL, Bailey SL, Ringwalt CL, Wilkinson DL. Cops in the classroom: a longitudinal evaluation of Drug Abuse Resistance Education (DARE). Journal of Research in Crime and Delinquency. 1994 Feb;31(1):3.
23. Schinke SP, Orlandi MA, Botvin GJ, Gilchrist LD, Locklear VS. Preventing substance abuse among American-Indian adolescents: a bicultural competence skills approach. Journal of Counseling Psychology. 1988 Jan;35(1):87–90.
24. Schinke SP, Schilling RF, Gilchrist LD. Prevention of drug and alcohol abuse in American Indian youths. Social Work Research & Abstracts. 1986 Win;22(4):18–19.
25. Shope JT, Copeland LA, Maharg R, Dielman TE. Effectiveness of a high school alcohol misuse prevention program. Alcoholism: Clinical & Experimental Research. 1996 Aug;20(5):791–798. doi:10.1111/j.1530-0277.1996.tb05253.x.
26. Shope JT, Copeland LA, Marcoux BC, Kamp ME. Effectiveness of a school-based substance abuse prevention program. Journal of Drug Education. 1996;26(4):323–337. doi:10.2190/E9HH-PBUH-802D-XD6U.
27. Stevens MM, Freeman DH, Mott LA, Youells FE, Linsey SC. Smokeless tobacco use among children: the New Hampshire study. American Journal of Preventive Medicine. 1993 May–Jun;9(3):160–167.
28. Swisher JD, Nesselroade C, Tatanish C. Here’s Looking at You Two is looking good: an experimental analysis. Journal of Humanistic Counseling, Education & Development. 1985 Mar;23(3):111–119.
29. Williams CL, Perry CL, Dudovitz B, et al. A home-based prevention program for sixth-grade alcohol use: results from Project Northland. Journal of Primary Prevention. 1995;16(2):125–147. doi:10.1007/BF02407336.
30. Wodarski JS. Evaluating a social learning approach to teaching adolescents about alcohol and driving: a multiple variable evaluation. Journal of Social Service Research. 1987 Winter–Summer;10(2–4):121–144.
31. Brown CH. Design principles and their application in preventive field trials. In: Bukoski WJ, Sloboda Z, editors. Handbook of Drug Abuse Prevention: Theory, Science, and Practice. New York: Kluwer Academic/Plenum Press; 2003. pp. 523–540.
32. Oakley A, Fullerton D, Holland J. Behavioural interventions for HIV/AIDS prevention. AIDS. 1995 May;9(5):479–486.
33. Hedges LV, Olkin I. Statistical Methods for Meta-Analysis. Orlando: Academic Press; 1985.
34. Elliott DS, Mihalic S. Issues in disseminating and replicating effective prevention programs. Prevention Science. 2004 Mar;5(1):47–53. doi:10.1023/b:prev.0000013981.28071.52.
35. DerSimonian R, Laird N. Meta-analysis in clinical trials. Controlled Clinical Trials. 1986 Sep;7(3):177–188. doi:10.1016/0197-2456(86)90046-2.
36. Tobler NS. Meta-analysis of adolescent drug prevention programs: results of the 1993 meta-analysis. NIDA Research Monograph. 1997;170:5–68.
37. Tobler NS. Meta-analysis of 143 adolescent drug prevention programs: quantitative outcome results of program participants compared to a control or comparison group. Journal of Drug Issues. 1986 Fall;16(4):537–567.