. Author manuscript; available in PMC: 2013 Jul 1.
Published in final edited form as: J Subst Abuse Treat. 2011 Nov 25;43(1):1–11. doi: 10.1016/j.jsat.2011.10.005

Meta-Analyses of Seven of NIDA’s Principles of Drug Addiction Treatment

Frank S Pearson a,*, Michael L Prendergast b, Deborah Podus b, Peter Vazan a, Lisa Greenwell c, Zachary Hamilton d
PMCID: PMC3290709  NIHMSID: NIHMS333144  PMID: 22119178

Abstract

Seven of the 13 Principles of Drug Addiction Treatment disseminated by the National Institute on Drug Abuse (NIDA) were meta-analyzed as part of the Evidence-based Principles of Treatment (EPT) project. By averaging outcomes over the diverse programs included in EPT, we found that five of the NIDA principles examined are supported: matching treatment to the client’s needs; attending to the multiple needs of clients; behavioral counseling interventions; treatment plan reassessment; and counseling to reduce risk of HIV. Two of the NIDA principles are not supported: remaining in treatment for an adequate period of time and frequency of testing for drug use. These weak effects could be the result of the principles being stated too generally to apply to the diverse interventions and programs that exist or of unmeasured moderator variables being confounded with the moderators that measured the principles. Meta-analysis should be a standard tool for developing principles of effective treatment for substance use disorders.

Keywords: drug addiction treatment services, drug use outcomes, meta-analysis, principles of drug addiction treatment

1. Introduction

Decades of scientific research and clinical practice have stimulated some groups to identify principles and practices that are associated with effective treatment. The assumption is that developing and implementing treatment programs, modalities, or techniques based on research-based principles will lead to improved client outcomes and more efficient use of treatment resources. The National Institute on Drug Abuse (NIDA) developed a set of treatment principles, which was based on discussions among experts at the National Conference on Drug Addiction Treatment: From Research to Practice, held in 1998. The resulting 13 principles were disseminated in the booklet Principles of Drug Addiction Treatment: A Research-Based Guide (NIDA, 1999); an updated edition (NIDA, 2009) is available at http://www.drugabuse.gov/PODAT/.

The empirical basis for the NIDA Principles was not limited to randomized trials. It included cohort studies like the Drug Abuse Treatment Outcome Study (DATOS), as well as outcomes obtained from quasi-experimental studies of residential treatment (therapeutic communities), outpatient programs, various pharmacotherapies, and behavioral therapies, e.g., cognitive behavioral therapy and contingency management (NIDA, 1999; 2009). Prior to 1999, there were few meta-analyses available to inform the development of the NIDA principles (Brewer et al., 1998; Stanton & Shadish, 1997; Marsch, 1998). Subsequently, a number of meta-analyses have been published, e.g., on relapse prevention (Irvin, Bowers, Dunn, & Wang, 1999), co-occurring disorders (Dumaine, 2003), motivational interviewing (Burke, Arkowitz, & Menchola, 2003), naltrexone maintenance treatment (Johansson, Berglund, & Lindgren, 2006), HIV risk reduction among IDUs (Copenhaver, Lee, Harman, Johnson, & Carey, 2006), voucher-based reinforcement therapy (Lussier, Heil, Mongeon, Badger, & Higgins, 2006), contingency management (Prendergast, Podus, Finney, Greenwell, & Roll, 2006), case management (Vanderplasschen, Wolf, Rapp, & Broekaert, 2007), behavioral couples therapy (Powers, Vedel, & Emmelkamp, 2008), and cognitive-behavioral therapy (Magill & Ray, 2009).

The objective of the Evidence-based Principles of Treatment (EPT) project was to assess, using meta-analysis, the effectiveness of seven of the NIDA principles. The reasons why some of the principles were not assessed in the EPT project are mentioned at the beginning of the Results section. The EPT project involved the efforts of two teams: the National Development Research Institutes, Inc. (NDRI), and the UCLA Integrated Substance Abuse Programs (ISAP). The teams collaborated on every stage of the EPT project.

2. Methods

2.1. Literature search

Using the ISI Web of Knowledge, four bibliographic databases were searched: PsycINFO, Current Contents, Web of Science (which contains the Social Sciences Citation Index), and MEDLINE. Searches covered the period 1995 to 2007; in previous meta-analysis projects, each of the research organizations had collected relevant literature for the period before 1995. In each of the 52 search runs (13 years times 4 databases), approximately 30 keywords were used. Three different sets of terms were used to enhance coverage: (a) general terms referring to substance abuse and drug dependence, (b) specific drug-related terms (e.g., heroin, cocaine), and (c) terms related to addiction treatment and treatment outcomes. Citations obtained from previous meta-analyses by NDRI and ISAP were added to the search results. More than 8,000 unduplicated citations with bibliographic information and abstracts were retrieved for the given time period.

These cited reports were divided between the two teams and assessed for whether each met EPT study eligibility criteria (see below). After a two-stage screening process, which included retrieval of full-text documents when a judgment of eligibility could not be made based on the abstract alone, the citations that met the eligibility criteria were coded and entered into a Microsoft Access™ database.

2.2 Criteria for inclusion and exclusion of studies

Studies (published or unpublished) were included if the study: (1) was in English; (2) appeared between January 1965 (1995 for articles identified through the Web of Knowledge searches) and April 30, 2007; (3) was conducted in the United States or Canada; (4) assessed a treatment, rehabilitation, or intervention program with an apparent interest in reducing drug use (even if another goal, e.g., reducing crime, was the main emphasis); (5) used a treatment-comparison or treatment-control group design; and (6) reported statistical data on outcomes that permitted calculation of an effect size. Investigations limited to the study of a non-FDA–approved medication undergoing clinical trials at the time of the literature search were not included in EPT.

2.3 Screening and coding the study documents

The studies were coded by M.A.- or Ph.D.-level coders who received training at the beginning of the study and met regularly with senior research staff to discuss coding problems and to establish policies on particular questions. Within each team, each study was coded by two coders, who resolved differences through discussion; unresolved issues were decided by senior staff. When findings from a study were reported in more than one document, we drew information from all of the documents associated with that study.

NDRI and ISAP conducted a formal reliability assessment by independently double coding 16 studies. The reliability checks of effect sizes yielded a Cronbach’s α of .90, and the various nominal level codes we checked yielded agreement rates ranging from .88 to 1.00.

For the analyses reported below, the following types of information extracted from the studies were used: (1) drug use based on both self-report (e.g., Addiction Severity Index) and non-self-report (e.g., urinalysis results), (2) outcome data collected for as many as three follow-up periods, and (3) quantitative data on outcomes for as many as five different types of illicit drugs. About 40% of the studies also reported findings for non-drug-use outcomes, but these are not covered in this article.

2.4. Method of statistical analysis

We do not treat each meta-analysis of the principles of effective treatment as a null-hypothesis significance test (NHST) posed at the level of meta-analysis. Many authors have pointed out the disadvantages of NHST (e.g., Harlow, Mulaik, & Steiger, 1997; Krantz, 1999; Loftus, 1996; Schmidt & Hunter, 2002; Wilkinson & Task Force on Statistical Inference, 1999). In addition, problems of interpretation of the NHST approach (i.e., do or do not reject the null hypothesis) are compounded when the same or subsequent researchers conduct a 2nd (or 3rd, or 4th) NHST involving some (or all) of the same data tested in the original NHST. These “re-tests” may be in the form of secondary analysis or meta-analysis; indeed, some of the studies assessed for EPT had already been included in one (or possibly more) prior meta-analyses that may have used an NHST approach. A similar problem occurs when the same independent comparisons are used to test more than one of the principles of effective treatment of interest. It is not feasible to produce an adequate statistical adjustment for the multiplicity of tests involved in these NHSTs. The literature critical of NHST recommends using an “effect size with confidence interval” method, making clear that the precision or imprecision of the confidence intervals depends on the number of cases in the analysis, not just on the effect size. We thus report that information here. Because there are likely to be other sources of variability in addition to the random variability captured in that confidence interval, we do not claim that the confidence interval is a sufficiently accurate measure of the true variation that may exist in future replications. The confidence interval should be interpreted as a rough indicator of the minimal variability that may exist in the effect size.

The analyses reported below involve two types of comparisons. Some of the EPT analyses aggregate data from multiple studies in which the independent variable was manipulated through random assignment or other assignment process. In these cases, the effect size represents the magnitude of the difference between a treatment condition and some type of comparison condition. However, most of the principles of effective treatment examined in EPT involve independent variables that were not manipulated experimentally (e.g., length of treatment participation, frequency of drug testing); that is, they are characteristics or attributes of programs or services that were reported in primary studies and can be investigated as potential moderators of the effect sizes observed. As Lipsey (2003) has pointed out, potential moderator variables may be statistically confounded with one another, with the result that this type of meta-analytical evidence is weaker than evidence from meta-analyses of manipulated variables. In EPT we tried to advance the state of knowledge about these non-manipulated principles of treatment by using correlations and statistical controls for some other key moderator variables that are plausible competing explanations (e.g., the methodological quality of primary studies).

We do not claim that EPT has “proven” these non-manipulated principles; rather, we use meta-regressions or meta-analyses to assess the balance of evidence for each principle of effective treatment.

Coding the documents yielded 232 studies. If, in a study, one experimental (E) group was compared with more than one comparison (C) group, or one comparison group was compared with more than one experimental group, we used our best judgment to pick the fairest, most pertinent comparison group relative to each particular experimental group and analyzed only that “independent comparison” from that study. There were 243 independent comparisons.

In some studies, effect sizes for some outcomes may be so atypical that there is a risk of distorting the overall results. A practical criterion for detecting such outliers is a value that is more than 1.5 times the interquartile range above the 3rd quartile or more than 1.5 times the interquartile range below the 1st quartile (Crawley, 2005; Walfish, 2006). Calculating the quartiles for the effect sizes in our study and applying this criterion led us to Winsorize a few extremely large positive effect sizes that were greater than +1.20 to the value of +1.20 and a couple of extremely negative effect sizes that were less than −0.48 to the value of −0.48.
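The outlier rule just described can be sketched in code. The following is an illustrative Python sketch, not the EPT project’s actual program (the project used R and SPSS macros); the bounds of +1.20 and −0.48 are the values reported for this study, not general constants:

```python
# Illustrative sketch of the 1.5 x IQR outlier fence (Tukey's rule) and the
# Winsorizing step described in the text; not the EPT project's actual code.

def iqr_fences(values):
    """Return the (lower, upper) fences: Q1 - 1.5*IQR and Q3 + 1.5*IQR."""
    s = sorted(values)
    n = len(s)
    def quartile(p):
        # simple linear-interpolation quantile; software packages differ slightly
        idx = p * (n - 1)
        lo, hi = int(idx), min(int(idx) + 1, n - 1)
        frac = idx - lo
        return s[lo] * (1 - frac) + s[hi] * frac
    q1, q3 = quartile(0.25), quartile(0.75)
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

def winsorize(values, lower, upper):
    """Pull any effect size outside [lower, upper] back to the nearest bound."""
    return [min(max(v, lower), upper) for v in values]

# Example with one extreme positive and one extreme negative effect size;
# here we apply the bounds reported in the text (+1.20 and -0.48).
effect_sizes = [0.10, 0.25, 0.02, 1.80, -0.90, 0.33]
adjusted = winsorize(effect_sizes, -0.48, 1.20)
```

Winsorizing (rather than deleting) keeps the outlying studies in the analysis while limiting their leverage on the pooled estimate.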

Effect sizes of d = 0.2 and r = 0.10 have been labeled “small,” and d = 0.5 and r = 0.30 “medium” (Cohen, 1988). However, Rosenthal, Rosnow, and Rubin (2000, pp. 15–28) caution against mechanically applying those verbal labels and show how “small” effect sizes can reflect important findings in some contexts. Our primary effect size was Hedges’ g standardized mean difference, which corrects for a bias in Cohen’s d that exists when sample sizes are small. To aid interpretation of the Hedges’ g values, we also report the corresponding correlation effect size, r. We also used meta-regression analysis, which provides a weighted linear regression of the effect size on a predictor variable (a moderator). The unstandardized meta-regression coefficient gives the change in effect size associated with a one-unit change in the predictor, e.g., from 0 to 1. The standardized meta-regression coefficient (usually termed “beta”) gives the change in effect size, in standard deviation units, associated with a one standard deviation increase in the predictor. In a bivariate linear regression, the standardized regression coefficient, beta, equals the correlation coefficient, so it provides a correlation as a measure of the strength of association.
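The two effect size conversions mentioned above can be illustrated with the standard textbook formulas; this is an assumed sketch, not code from the EPT project, and the example values are chosen only to echo the g = 0.24, r = 0.12 pattern reported later:

```python
import math

def hedges_g(d, n1, n2):
    """Small-sample correction of Cohen's d: g = J * d,
    with J approximately 1 - 3 / (4*df - 1) and df = n1 + n2 - 2."""
    df = n1 + n2 - 2
    j = 1 - 3 / (4 * df - 1)
    return j * d

def d_to_r(d, n1, n2):
    """Convert a standardized mean difference to a point-biserial r.
    For equal group sizes this reduces to d / sqrt(d**2 + 4)."""
    a = (n1 + n2) ** 2 / (n1 * n2)  # adjustment for unequal group sizes
    return d / math.sqrt(d ** 2 + a)

# Example: d = 0.24 with two groups of 50 participants each
g = hedges_g(0.24, 50, 50)  # slightly smaller than d (about 0.238)
r = d_to_r(0.24, 50, 50)    # about 0.12
```

The correction factor J is close to 1 except for small samples, which is why g and d are nearly identical in large trials.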

In EPT, three strength-of-methods variables showed significant relationships with the effect sizes in the coded studies: (1) random assignment to E and C (of the 243 independent comparisons in EPT, 74.5% used random assignment); (2) intent-to-treat analysis (82.7% used intent to treat analysis); and (3) no selection bias favoring the E group (89.7% had no apparent selection bias favoring the E group). These three variables were combined to form an aggregate strength-of-methods variable, experimental rigor, that had a value of 1 if the study had (a) random assignment AND (b) intent to treat AND (c) no selection bias favoring E; otherwise the value was 0.
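The composite experimental rigor variable amounts to a logical AND of the three strength-of-methods codes; a minimal illustrative sketch (not the project’s code):

```python
def experimental_rigor(random_assignment, intent_to_treat, no_selection_bias):
    """Code a study 1 only if all three strength-of-methods criteria are met:
    random assignment AND intent-to-treat analysis AND no selection bias
    favoring the E group; otherwise 0."""
    return int(random_assignment and intent_to_treat and no_selection_bias)

# A study with random assignment and intent-to-treat analysis but apparent
# selection bias favoring the E group is coded 0.
```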

For each principle, we report results from two types of analyses. (1) In a main analysis, we identified a set of independent comparisons of E and C groups (for convenience we labeled them “studies”) pertaining to the principle and report (1a) a point estimate of the relationship between the principle and the drug abuse outcomes, such as Hedges’ g, (1b) the nominal 95% confidence interval computed for that effect size, and (1c) the Pearson correlation to provide a more familiar interpretation of the effect size. (2) In auxiliary analyses, we conducted (2a) an analysis comparing the studies coded as using higher vs. lower experimental rigor, and we assessed (2b) the type of outcome measure used, i.e., a self-reported measure versus a non-self-report measure, (2c) the timing of the outcome measure used, i.e., drug use measured during treatment versus at the end of treatment versus some time after treatment, and (2d) whether the study findings appeared in a publication or in an unpublished report. Tables summarizing the findings of the auxiliary analyses can be obtained from the corresponding author.

All of the meta-analyses and meta-regressions reported here use the random effects model, which allows for the possibility that there may be more than one true effect size underlying different types of studies. The analyses were conducted using Comprehensive Meta-analysis™ software (Borenstein, Hedges, Higgins, & Rothstein, 2005), David Wilson’s meta-analysis SPSS macros (Wilson, 2006), and R statistical software 2.10.0 (R Development Core Team, 2009), specifically, the package “metafor” version 0.5-5 (Viechtbauer, 2009).

3. Results

Over 90% of the studies were published between 1980 and 2006. In terms of gender, 13% of the samples were exclusively male, 10% were exclusively female, and the remainder included both males and females. The percentages of Black, White, and Hispanic participants in study samples varied widely; averages over all of the studies were 42% Black, 42% White, and 16% Hispanic. The mean age of research samples ranged from 15 to 45, with an overall mean of 33. The primary drugs most commonly mentioned were heroin, cocaine, and crack.

The EPT project was able to conduct informative meta-analyses on seven of the NIDA principles of effective drug abuse treatment. EPT was not able to address some of the principles because the identified studies did not meet the EPT selection criteria. Principles 1, 3, and 10 do not lend themselves to the EPT approach of analyzing a set of treatment-control group intervention studies with data bearing on a given principle: (1) “…drugs of abuse alter the brain’s structure and function….”; (3) “… the earlier treatment is offered in the disease process, the greater the likelihood of positive outcomes….”; and (10) “Medically assisted detoxification is only the first stage of addiction treatment and by itself does little to change long-term drug abuse.”

Principle 7 (“Medications are an important element of treatment for many patients, especially when combined with counseling and other behavioral therapies”) was not investigated using the EPT approach to meta-analysis because (1) due to ethical requirements, primary research studies do not withhold methadone from a comparison group while it is being provided to an experimental group, and (2) studies of approved medications for addiction treatment have evaluated types of services or dosing schedules, which relate to the optimal use of the medication, not to its effectiveness (for meta-analyses supporting medication-assisted treatment using different selection criteria than EPT, see Farre, Mas, Torrens, Moreno, & Cami, 2002; Gowing, Farrell, Bornemann, Sullivan, & Ali, 2006; Mattick, Breen, Kimber, & Davoli, 2009).

Principle 9 (“…And when these problems [addiction and mental disorders] co-occur, treatment should address both (or all), including the use of medications as appropriate….”) was not included because one inclusion criterion for EPT was that the study must report on a program located in the United States or Canada, and many of the reported studies were of programs in Australia or the United Kingdom; another criterion was that the program must have an apparent interest in reducing drug use, but several more of the studies had no focus on drug abuse (they focused only on mental disorders). This left only 3 independent comparisons, which we considered insufficient for meta-analysis. Other overviews (which used different eligibility criteria than EPT) of studies of co-occurring disorders are a systematic review by Drake, O’Neal, & Wallach (2008) and a meta-analysis by Dumaine (2003).

Principle 11 (“Treatment does not need to be voluntary to be effective”) was excluded because coercion is typically examined using data from single-group studies or as secondary analyses of comparison group designs rather than in experimental studies that manipulate referral status (coerced vs. voluntary; for a meta-analysis of coerced treatment with offenders, see Parhar, Wormith, Derkzen, & Beauregard, 2008).

3.1 NIDA Principle 2: Matching treatment to the client’s needs

Principle 2 is stated as: “Matching treatment settings, interventions, and services to an individual’s particular problems and needs is critical to his or her ultimate success in returning to productive functioning in the family, workplace, and society.” The EPT project found 12 independent comparisons in which the “matching” of clients to treatment programs based on client characteristics and/or program services was the main focus of the study. The statistics from the meta-analysis are presented in Table 1. The overall point estimate for g is 0.24. This corresponds to a correlation of 0.12. This finding is consistent with the inference that, on average, there is a positive effect for treatment matching interventions.

Table 1.

Summary Statistics of the Findings from the EPT Project^a

Selected NIDA Principles                             k^b    g^c    r^d    b^e    95% CI^f
(and related subdomains)                                                        Lower   Upper
#2  Matching treatment to the client’s needs          12    0.24   0.12    –     0.11    0.38
#4  Attending to the multiple needs of clients       236     –     0.16   0.02   0.01    0.04
#5  Remaining in treatment
      Completion                                     185     –     0.14   0.02   0.00    0.03
      Duration (<12 vs. ≥12 weeks)                   230     –     0.02   0.01  −0.08    0.10
#6  Counseling:
      Contingency Management                          42    0.21   0.10    –     0.13    0.29
      Cognitive Behavioral Therapy                    26    0.11   0.05    –     0.02    0.20
      Therapeutic Communities                         10    0.36   0.18    –     0.18    0.53
#8  Treatment plan reassessment                        8    0.25   0.12    –     0.05    0.46
#12 Testing for drug use (yes/no)                    225     –     0.06   0.04  −0.05    0.13
#13 Counseling to reduce risk of HIV                  10    0.19   0.09    –     0.002   0.38

a  Some of the primary research studies differ substantially in methodological rigor, outcome measures, etc. Tables summarizing the findings of the auxiliary analyses can be obtained from the corresponding author.

b  Number of studies (and their independent comparisons) used in the meta-analysis.

c  Hedges’ g standardized mean difference (effect size).

d  Correlation coefficient obtained either from g or from beta (standardized meta-regression coefficient).

e  Unstandardized meta-regression coefficient.

f  Confidence intervals of b (unstandardized coefficient) or of g.

As noted above, we conducted auxiliary analyses to see whether there were any patterns that seem to exceed random variability, focusing on experimental rigor, biological measure vs. self-report measure, timing of outcome, and publication status. The auxiliary analyses for the treatment matching principle suggest that two of the four moderators were associated with clear differences in effect size. The effect sizes were 0.10 for studies with more experimental rigor (k = 4) compared with 0.34 for those with less rigor (k = 8); similarly, 0.12 for published studies (k = 5) compared with 0.31 for unpublished (k = 7).

3.2 NIDA Principle 4: Attending to the multiple needs of clients

The principle says that “To be effective, treatment must address the individual’s drug abuse and any associated medical, psychological, social, vocational, and legal problems.” Because most studies did not provide a measure of whether clients’ needs were actually addressed, we operationalized this principle in terms of the number of services that a program provided, on the assumption that a program that provided a greater number of services was more likely to be able to address a greater variety of client needs. We coded each study in terms of which of 38 types of drug abuse treatment services were reported in the study description. (A frequency table of these services can be obtained from the corresponding author.) Each treatment service was given a code of 0, indicating that it was not reported in the study, or 1, indicating that it was reported, doing so for the E program/intervention and for the C condition. For each treatment service, the code for C was subtracted from the code for E. A “treatment difference” of 0 for a service indicated either that the service was present in both groups (E’s 1 minus C’s 1 = 0) or absent in both groups (E’s 0 minus C’s 0 = 0). A treatment difference of 1 indicated that it was provided in the E group but not in the C group. The indicator used in the EPT study for treatment attending to multiple needs was the variable “number of additional treatment services that the E group received,” which is the sum of the treatment differences over all 38 different types of drug abuse treatment services. As an example, one study had a sum of treatment differences equal to 3; for this study, the experimental condition had three services not found in the comparison condition.
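The service-count indicator described above can be sketched as follows; this is an illustrative Python sketch (not the project’s code), assuming parallel 0/1 codes for each of the 38 service types:

```python
def additional_services(e_codes, c_codes):
    """Sum of per-service treatment differences (E code minus C code).

    e_codes and c_codes are parallel sequences of 0/1 indicators, one per
    service type (38 types in the EPT codebook). A difference of 0 means the
    service was present in both groups or absent in both; a difference of 1
    means it was provided to the E group but not the C group."""
    return sum(e - c for e, c in zip(e_codes, c_codes))

# Example with 5 service types: E reports services 1, 2, 4, and 5;
# C reports only service 2, so E has three additional services.
e = [1, 1, 0, 1, 1]
c = [0, 1, 0, 0, 0]
```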

The meta-regression summarized in Table 1 shows that a best-fitting regression line to the 236 studies has an unstandardized slope of 0.02. This means that one additional treatment service in E relative to C is associated with a very small increase of 0.02 in the effect size (g). Note, however, that the standardized meta-regression coefficient, beta, is 0.16, indicating that an increase of 1 standard deviation in the number of additional treatment services that the E group received is associated with an increase of 0.16 standard deviations in the g effect size. (In a bivariate regression beta is equivalent to r.) Findings from the auxiliary analyses showed no substantial differences from those of the main analysis.

3.3 NIDA Principle 5: Completion and duration of treatment

Principle 5 includes statements that “Research indicates that most addicted individuals need at least 3 months in treatment to significantly reduce or stop their drug use and that the best outcomes occur with longer durations of treatment” and “Because individuals often leave treatment prematurely, programs should include strategies to engage and keep patients in treatment.” Given that Principle 5 refers both to remaining in treatment and to remaining in treatment for an adequate period of time, our main analysis first assesses the relationship of the effect sizes with the percentage of E subjects who completed treatment and then the relationship of the effect sizes with the average length of time in weeks that E subjects actually participated in treatment.

In the examination of treatment completion, a 1 percentage point increase in the percentage of clients completing treatment (e.g., 45% to 46%) is a very small change, so the predictor variable was expressed in 10-percentage-point units, so that a change such as 45% to 55% or 85% to 95% in treatment completion would be a meaningful change. Table 1 shows the summary statistics from the meta-regression of the effect sizes on the percentage of E subjects who completed treatment. The unstandardized slope (based on 185 studies) is 0.02: that is, a 10 percentage point increase in treatment completion is associated with a 0.02 increase in g. The standardized coefficient, beta, is 0.14, i.e., the correlation of the effect sizes with the percentage of subjects who completed treatment, but the confidence interval includes zero. The auxiliary findings do not provide a rationale for inferring a stronger or weaker relationship than that found in the main analysis.

The next analysis concerns the relationship of the effect sizes with the average length of time in weeks that E subjects actually participated in treatment, dichotomized at < 12 weeks versus ≥ 12 weeks (i.e., the three-month threshold specified in Principle 5). Table 1 shows that the meta-regression coefficient for Planned Treatment Duration of 3 Months or More (dichotomized: 0 = No, 1 = Yes) is 0.01. The standardized (beta) coefficient is 0.02. Thus, the estimated effect is virtually zero.

The single most important moderator variable was our measure of experimental rigor. Studies with higher experimental rigor have a point estimate g = 0.09, whereas studies with lower experimental rigor have a point estimate g = −0.10, but the confidence intervals for these subcategories overlap somewhat.

3.4 NIDA Principle 6: Behavioral/counseling therapies

Principle 6 includes the statements that “Counseling—individual and/or group—and other behavioral therapies are the most commonly used forms of drug abuse treatment” and “Also, participation in group therapy and other peer support programs during and following treatment can help maintain abstinence.” There are at least three types of treatment approaches that fit within this principle: (1) many scientists consider contingency management to be one type of behavioral therapy; (2) NIDA considers cognitive behavioral therapy a research-based counseling treatment (National Institute on Drug Abuse, 1999, p. 35); and (3) NIDA considers the therapeutic community to be a research-based counseling treatment (National Institute on Drug Abuse, 2002).

3.4.1 Contingency management

The research question here is whether treatments that included positive reinforcement for desired behavior produced a larger mean effect size than the treatment provided to the comparison group, e.g., treatment as usual. Table 1 shows that the overall point estimate across 42 studies is an effect size (g) of about 0.21. The 95% confidence interval excludes the zero effect-size point. The correlation coefficient is about 0.10. These statistics are consistent with the inference that, on average, there is a substantial positive effect of contingency management interventions. In the auxiliary analyses for this principle, there were no differences in the effect sizes beyond those that could be due to random variability.

3.4.2 Cognitive behavioral therapy

The EPT project coded 26 independent comparisons that were experimental evaluations of cognitive behavioral treatment (CBT) interventions for substance abuse. Table 1 shows that the overall point estimate for g for CBT is about 0.11. The 95% confidence interval excludes the zero effect-size point, indicating that the effect size calculated for CBT is significantly different from zero. The g effect size corresponds to a correlation, r, of about 0.05. These statistics are consistent with the inference that, on average, there is a small positive effect for CBT interventions. The auxiliary analyses did not show any variations in the effect size beyond that expected from random variability.

3.4.3 Therapeutic communities

This type of intensive treatment is investigated using those studies in which the E condition had a therapeutic community (TC) intervention and the C condition did not. The meta-analysis results (based on 10 studies) shown in Table 1 include a point estimate of g = 0.36. This corresponds to a correlation of 0.18. The confidence interval excludes the zero effect point. The auxiliary analyses showed similar results.

3.5 NIDA Principle 8: Treatment plan reassessment

This principle is stated as “An individual’s treatment and services plan must be assessed continually and modified as necessary to ensure that it meets his or her changing needs.” To operationalize this principle, the EPT study codebook included this item: “Rate the Principle, individualized treatment plan reassessment and modification, as present in the treatment for the E group. 1 = Treatment plan reassessment seems poorer than average (or not done at all); 2 = Nothing reported to indicate anything other than average individualized treatment plan reassessment; 3 = Treatment plan reassessment seems better than average”. The same item was also coded for the C group. There were 23 studies that included information for both E and C that would allow coding of whether treatment plan reassessment was poorer or better than average. In 15 of the studies, the coders rated the E and C groups the same on treatment plan reassessment, so there was no variability in the reassessment moderator in those 15 studies. This left eight studies for the main meta-analysis shown in Table 1: (a) in four studies E had a coding 1 level higher than C on the item shown above (2 vs. 1 or 3 vs. 2) and (b) in the other four studies E had a coding 2 levels higher than C (3 vs. 1).

The overall point estimate of g for the treatment reassessment moderator is 0.25. The 95% confidence interval excludes the zero effect-size point, indicating that the effect size calculated for treatment reassessment is significantly different from zero. This corresponds to a correlation of 0.12.

It did not appear that studies with better or poorer research quality had higher or lower effect sizes (given random sampling variability). However, the timing of the outcome measurement in the studies may be suggestive: The effect size (g) is 0.49 for studies with the outcome measured during treatment, 0.25 at end of treatment, and 0.15 after treatment.

Given the small number of studies in which this feature of treatment has been reported and the variability in the smaller subsets of studies in the auxiliary analyses, the point estimate in the main analysis may be taken as provisional support for the principle of treatment reassessment.

3.6 NIDA Principle 12: Testing for drug use

Principle 12 states, “Drug use during treatment must be monitored continuously, as lapses during treatment do occur. Knowing their drug use is being monitored can be a powerful incentive for patients and can help them withstand urges to use drugs.” As shown in Table 1, in a meta-regression of 225 studies with the predictor testing vs. no testing, the unstandardized meta-regression coefficient is g = 0.04, but the confidence interval extends below the zero point. The correlation is 0.06.

As an auxiliary analysis for this principle, the drug testing variable was expanded from a dichotomy to a rough numeric scale estimating the drug test frequency per 28-day period. The constructed values for frequency of drug testing per month are as follows: 28 = daily; 20 = four to six times a week; 10 = two to three times a week; 4 = once a week; 2.5 = two to three times a month; 1 = once a month; 0.5 = less than once a month; 0 = no drug testing done. Of the 225 studies in the main analysis, 33 had to be excluded because the code value was “testing was done, but the frequency was not reported.” The auxiliary analysis had an unstandardized regression coefficient of 0.005 and an r = 0.09, but the confidence interval includes the zero effect point. The unstandardized regression coefficient indicates that 1 additional day of testing in a 28-day period would be associated with an increase of 0.005 in g for drug use.
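To make the size of this coefficient concrete, the scale values above can be combined with the reported slope to predict the difference in g between two testing schedules. This is a sketch of the linear model's implication only; the meta-regression itself is not reproduced here:

```python
# Constructed testing-frequency scale (tests per 28-day period), from the text.
FREQ_PER_MONTH = {
    "daily": 28, "4-6_per_week": 20, "2-3_per_week": 10,
    "weekly": 4, "2-3_per_month": 2.5, "monthly": 1,
    "less_than_monthly": 0.5, "none": 0,
}

SLOPE = 0.005  # unstandardized meta-regression coefficient reported in the text

def predicted_g_difference(freq_a: str, freq_b: str) -> float:
    """Predicted difference in g between two testing frequencies
    under the linear meta-regression model."""
    return SLOPE * (FREQ_PER_MONTH[freq_a] - FREQ_PER_MONTH[freq_b])

# Moving from no testing to weekly testing predicts only a tiny shift in g:
print(round(predicted_g_difference("weekly", "none"), 3))  # 0.02
# Even daily testing vs. no testing predicts a modest shift:
print(round(predicted_g_difference("daily", "none"), 3))   # 0.14
```

Under this model, even the most extreme contrast on the scale (daily testing vs. none) shifts g by only about 0.14, consistent with the paper's conclusion that testing frequency has little effect on outcomes.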

3.7 NIDA Principle 13: Counseling to reduce risk of HIV

Principle 13 states, “Treatment programs should assess patients for the presence of HIV/AIDS, hepatitis B and C, tuberculosis, and other infectious diseases as well as provide targeted risk-reduction counseling to help patients modify or change behaviors that place them at risk of contracting or spreading infectious diseases.” EPT study inclusion criteria required that if the program was a corrections-based drug abuse treatment program, at least 80% of the subjects had to be identified as users of illicit drugs; if it was a non-corrections-based program, over 98% had to be users of illicit drugs. Many studies that focused on HIV risk reduction did not meet these eligibility criteria. There were, however, 10 studies aimed at reducing HIV risk behavior from injection drug use that were screened into the EPT study.

As shown in Table 1, the g is 0.19. This corresponds to r = 0.09, indicating that targeting risk reduction was associated with a decrease in risk behavior. The confidence interval excludes the zero point. In this set of 10 studies, there was variability on the experimental rigor moderator variable: the five studies with lower experimental rigor had a g = 0.12, while the five with higher experimental rigor had a g = 0.27.

4. Discussion

Some of the meta-analytic methods in EPT use the same approach as prior meta-analyses, that is, comparing outcomes between a treatment group and a comparison group (e.g., the analyses of contingency management, cognitive behavioral therapy, and therapeutic communities). For the other NIDA principles covered in EPT, meta-regressions were used to analyze characteristics or attributes of treatment that typically extend across different types of programs. The latter involves focusing on specific moderators of effect size, which were not manipulated in the primary studies, and thus the findings from these analyses should be considered associational (correlational), not necessarily causal.

The obtained results can be summarized as follows. NIDA Principle 2 refers to matching clients with settings, interventions, and services that are appropriate to their needs and circumstances in order to achieve optimal outcomes. The 12 studies available for analysis of treatment matching yielded a g of 0.24, indicating that the use of matching is associated with improved outcomes. As a cautionary note, only 4 of the 12 studies were rated as having high experimental rigor, and in that set g = 0.10.

NIDA Principle 4 has to do with the importance of addressing clients’ multiple needs beyond just their drug use. For the 236 relevant studies available, the difference in the number of services provided to the E group compared with the C group was used as a predictor of effect size. We found that the addition of one treatment service was associated with only a very small increase in effect size. It is important to bear in mind that the data used to test this principle pertained to the number of services made available to clients, not whether clients actually received those services. Also, the data did not seem adequate to assess whether specific services produced larger effect sizes than others. Nevertheless, mainly because the confidence interval excludes the zero point and the correlation effect size is 0.16, our assessment is that the principle of treating multiple client needs has been supported in the EPT project.

NIDA Principle 5 states that optimal outcomes depend on clients’ completing treatment or remaining in treatment a minimum length of time. Based on our findings, our assessment is that the treatment completion principle was not supported by the studies in the EPT project. Note, however, that treatment completion was determined from data in the primary studies, in which the expected length of treatment varied depending on the program or intervention being evaluated. We do not think that the finding means that “completion of treatment doesn’t matter at all.” Rather, it means that the percentage of clients who complete treatment is not a very important moderator of effectiveness across the diverse collection of programs and interventions. Future research may establish that treatment completion is an important moderator in some particular subset of programs or interventions.

With respect to duration, meta-regression of 230 studies indicated that studies in which clients participated in treatment 12 weeks or more had virtually the same drug use outcomes as those with participation of less than 12 weeks. The confidence interval includes the zero point, so our assessment is that the treatment duration principle was not supported in these studies. A scatterplot of the effect sizes in relation to duration of treatment as a continuous variable (not shown) did not reveal any clear linear (or curvilinear) relationship. Presumably, intensive post hoc exploratory analyses might find some subset of types of treatment for which this NIDA principle would indeed be a reasonable approximation.

Under NIDA Principle 6, we conducted meta-analyses on three specific behavioral counseling approaches: contingency management (CM), cognitive behavioral therapy (CBT), and therapeutic community (TC). Contingency management was supported in the meta-analysis. However, the average effect size found in the EPT study, g = 0.21, is smaller than that found in other studies. For example, Lussier, Heil, Mongeon, Badger, and Higgins (2006), in an analysis of 30 studies, reported an effect size for abstinence of r = 0.32, which is equivalent to a g effect size of about 0.60. Prendergast, Podus, Finney, Greenwell, and Roll (2006) found a mean effect size across 47 studies of g = 0.49. Finally, Dutra et al. (2008) found a mean effect size for 14 CM studies of 0.58. Methodological differences among the meta-analyses, e.g., search strategy, selection criteria, and outcomes, probably account for the different magnitudes of effect sizes. However, CM clearly has a strong effect on drug use outcomes.

Even though CBT is based on learning theory principles, there is no one CBT protocol for drug abuse treatment; rather, different CBT treatments address different goals (e.g., skills training, relapse prevention) and make use of different combinations of techniques (e.g., skills rehearsal, cognitive restructuring, behavioral analysis). In EPT, the 26 studies of CBT yielded a small effect size of g = 0.11. However, the confidence interval does exclude the zero point, and our assessment is that CBT received at least marginal support in the meta-analysis. A recent meta-analysis of CBT treatment for alcohol and illicit drug users (Magill & Ray, 2009), based on 53 randomized controlled trials, found a somewhat higher average effect size of g = 0.15. Another meta-analysis (Dutra et al., 2008) that examined randomized studies of psychosocial interventions for substance use disorders (not alcohol) reported an effect size of 0.28 for 13 studies of CBT. Thus, EPT and the two other meta-analyses found significant positive effect sizes, with the differences in magnitude possibly accounted for by the selection criteria or other methodological features of each meta-analysis.

In the 10 therapeutic community studies analyzed, we found that this approach was supported in the meta-analysis. Most studies were rated as having lower experimental rigor. Most drug use outcomes were measured by self-report. All studies measured drug use at some time after treatment. Among the published studies, the effect size was slightly lower than that for the full set of studies. Most previous meta-analyses of TCs, including those conducted in prison, have found that TCs have positive effects compared with no treatment or alternative treatments (Lees, Manning, & Rawlings, 2004; Lipton, Pearson, Cleland, & Yee, 2003; Mitchell, Wilson, & MacKenzie, 2006; Prendergast, Podus, Chang, & Urada, 2002), although a meta-analysis of seven random assignment studies of TCs (Smith, Gates, & Foxcroft, 2006) found little evidence for the effectiveness of TCs.

NIDA Principle 8 recommends that a client’s initial treatment plan be reassessed and modified as necessary to ensure that the plan meets the client’s changing needs. Based on our analysis of eight independent comparisons in which the E condition had a higher rating on treatment plan reassessment than the C condition, we found that treatment reassessment was supported. However, given the small number of studies and the fact that the moderator analysis was suggestive of a smaller g for post-treatment outcomes, the principle regarding treatment plan reassessment should be regarded as receiving only provisional confirmation in the studies available. Research that directly manipulated whether, and how frequently, treatment plans were reassessed would provide a stronger test of this principle.

NIDA’s Principle 12 recommends testing clients for drug use as an important clinical tool during treatment. Because the confidence interval extended below the zero point, our assessment is that the relationship between testing for drug use and drug use outcomes is not supported in this set of studies. When the drug testing variable was analyzed in terms of frequency of testing per month (28 days), the effect of increasing the number of days of testing was very small, with an r of 0.09. From the studies in the EPT project, it appears that whether testing is done at all, or how frequently it is done, has little effect on drug use outcomes. For some studies, testing may have occurred but not been reported, although given the importance of testing as a valid measure of drug use, such non-reporting of testing, when it was in fact done, is probably rare. It is possible that the effects of drug testing found in primary research are limited to particular characteristics of those studies, for example, the context of contingency management (e.g., Griffith, Rowan-Szal, Roark, & Simpson, 2000). An experiment contrasting low and high frequencies of urine or other types of testing (holding constant the severity of the consequences of positive tests) could fill in this knowledge gap.

Ten studies that met the EPT project’s selection criteria focused on reducing risk behavior for HIV infection, e.g., injection drug use (Principle 13). Our assessment is that the principle of reducing the risk behavior for HIV infection was supported by the meta-analysis. The evidence is also encouraging in that studies rated as having higher experimental rigor had larger effect sizes than those rated as having lower rigor. The studies were very heterogeneous in their treatment approaches, which included network-oriented peer outreach intervention, social learning, node-link mapping counseling, African American focused counseling, cognitive behavioral therapy, and therapeutic communities. Although findings from the EPT meta-analysis lead to confidence that some HIV risk reduction interventions are associated with effect sizes indicative of reduction in drug-related HIV risk, the small number of studies, together with their high heterogeneity, prevents us from identifying which types of HIV intervention are more successful or less successful.

Two previous meta-analyses are relevant. Prendergast, Urada, and Podus (2001) found an overall effect size of 0.31 for all types of outcomes reported in 16 studies of HIV risk-reduction interventions conducted in drug abuse treatment programs; however, for injection practices (5 studies), which was the outcome most similar to the drug use outcome reported in the EPT study, the effect size was 0.04 (non-significant). A more recent meta-analysis by Copenhaver, Lee, Harman, Johnson, and Carey (2006), which focused on randomized controlled trials of behavioral risk-reduction interventions intended for people who injected drugs (but who were not necessarily in drug treatment), found a g effect size for reductions in injection drug use (30 studies) of 0.08 (significant) and for reductions in non-injection drug use (11 studies) of 0.18 (significant).

Limitations

Meta-analysts are sometimes criticized for combining a heterogeneous set of studies (the “apples and oranges” problem; Sharpe, 1997). We tried to deal with some possibly important sources of heterogeneity in two ways. First, we included only studies with an experimental group and a control/comparison group. Second, in our analyses, we included as moderator variables some potential sources of variation in effect sizes: (1) a measure of the experimental rigor of the primary research studies, (2) the type of outcome measure used, (3) the timing of the outcome measure, and (4) whether study findings appeared in a publication or in an unpublished report. Of course, there may be other potential sources of heterogeneity, and that remains a limitation of this EPT meta-analytic study.

5. Conclusions

We urge the reader to use his or her own background knowledge of the treatments and moderators discussed here to assess whether any of our analyses have a systematic bias of some sort and, if so, to assess how our nominal confidence intervals may be inadequate. The following are our conclusions from these analyses. Five of the NIDA principles find satisfactory meta-analytic support: (1) matching treatment to the client’s needs, (2) attending to the multiple needs of clients, (3) utilization of CM, CBT, or TC treatment, all prominent behavioral/counseling approaches, (4) treatment plan reassessment, and (5) counseling to reduce risk of HIV. We believe that these findings are substantive, and we are not aware of any specific source of systematic error.

Taken individually, many of the principles could be regarded as of rather small clinical significance, even though they reached statistical significance. However, if a program were to introduce several of the NIDA principles of effective treatment, the overall effect on outcomes could be substantial.
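As a purely illustrative back-of-the-envelope calculation of why stacking principles could matter clinically, suppose the supported principles' point estimates from this article were independent and additive on the g scale; the EPT data do not establish additivity, so this is a hypothetical upper-bound sketch, not an EPT finding:

```python
import math

# Point estimates for supported principles reported in this article (g scale).
# (Principle 4, multiple needs, is omitted: only its r of 0.16 is reported.)
supported = {
    "treatment_matching": 0.24,
    "behavioral_counseling_avg": 0.21,
    "treatment_plan_reassessment": 0.25,
    "hiv_risk_reduction_counseling": 0.19,
}

# Naive additivity assumption: combined g is the sum of the parts.
combined_g = sum(supported.values())
# Convert to r assuming equal group sizes: r = g / sqrt(g^2 + 4).
combined_r = combined_g / math.sqrt(combined_g ** 2 + 4)

print(round(combined_g, 2))  # 0.89
print(round(combined_r, 2))  # 0.41
```

Under these (strong) assumptions, several individually small effects would accumulate into a moderate-to-large combined effect, which is the intuition behind the claim that implementing several principles at once could substantially improve outcomes.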

The other two principles (remaining in treatment and frequency of testing for drug use) have 95% confidence intervals that include the zero effect point. Two interpretations may be offered for these findings: (1) the EPT study findings for these principles are approximately accurate and the NIDA principles were stated too generally (e.g., specific conditions of applicability should have been included in the statement of these principles), and/or (2) the methodology of the EPT study was flawed in some way regarding these particular principles, which led to underestimation of their effect sizes. In line with the first interpretation, averaging over all of these diverse programs, (1) outcomes are approximately comparable whether smaller or larger proportions of clients complete the programs and whether clients stay in the program for shorter or longer durations, and (2) adding drug testing to a program, or increasing the frequency of testing already present, has only a weak effect.

More generally, the EPT study has shown that meta-analysis is an important way to assess principles of effective treatment as well as the effectiveness of treatment programs and interventions. For a new area of research or for a controversial topic, a meta-analysis, focusing on even a small number of relevant studies, could add quantitative findings to qualitative assessment and policy considerations. We support NIDA’s work in assembling principles of effective treatment. When any group works to develop principles of effective treatment, we recommend that they conduct meta-analyses as an important aid in that process. A specific suggestion we have is that NIDA encourage researchers (through funding opportunities) to conduct studies that directly manipulate other variables involved in the principles of effective treatment, e.g., manipulating a specific type of program to have a contrasting duration (shorter vs. longer planned duration of treatment) and conducting a randomized experiment on that treatment difference. Theoretical principles, primary research studies, and meta-analyses should be used together to move the field forward.

Acknowledgements

This study was funded by the National Institute on Drug Abuse, grant R01 DA016600. The contents of this report are solely the responsibility of the authors and do not necessarily represent the views of the Department of Health and Human Services or the National Institute on Drug Abuse. We greatly appreciate the contributions of Aaron Brownstein, Ph.D., Anna Hyun, Ph.D., and Stephanie Kovalchik, Ph.D., the coders on the EPT project at UCLA, and of Stacy Calhoun, M.A., a research associate on the UCLA team.

Footnotes

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final citable form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

References

  1. Borenstein M, Hedges L, Higgins J, Rothstein H. Comprehensive Meta-analysis. Biostat; Englewood, NJ: 2005.
  2. Brewer DD, Catalano RF, Haggerty K, Gainey RR, Fleming CB. A meta-analysis of predictors of continued drug use during and after treatment for opiate addiction. Addiction. 1998;93(1):73–92.
  3. Burke BL, Arkowitz H, Menchola H. The efficacy of motivational interviewing: A meta-analysis of controlled clinical trials. Journal of Consulting and Clinical Psychology. 2003;71(5):843–861. doi: 10.1037/0022-006X.71.5.843.
  4. Cohen J. Statistical Power Analysis for the Behavioral Sciences. Lawrence Erlbaum Associates; Hillsdale, NJ: 1988.
  5. Copenhaver M, Lee IC, Harman J, Johnson BT, Carey M. Behavioral HIV risk reduction among injection drug users: Meta-analytic evidence of efficacy. Journal of Substance Abuse Treatment. 2006;31:163–171. doi: 10.1016/j.jsat.2006.04.002.
  6. Crawley MJ. Statistics: An Introduction Using R. Wiley; Chichester, England: 2005.
  7. Drake RE, O’Neal EL, Wallach MA. A systematic review of psychosocial research on psychosocial interventions for people with co-occurring severe mental and substance use disorders. Journal of Substance Abuse Treatment. 2008;34(1):123–138. doi: 10.1016/j.jsat.2007.01.011.
  8. Dumaine M. Meta-analysis of interventions with co-occurring disorders of severe mental illness and substance abuse: Implications for social work practice. Research on Social Work Practice. 2003;13(2):142–165.
  9. Dutra L, Stathopoulou G, Basden SL, Leyro TM, Powers MB, Otto MW. A meta-analytic review of psychosocial interventions for substance use disorders. American Journal of Psychiatry. 2008;165(2):179–187. doi: 10.1176/appi.ajp.2007.06111851.
  10. Farre M, Mas A, Torrens M, Moreno V, Cami J. Retention rate and illicit opioid use during methadone maintenance interventions: A meta-analysis. Drug and Alcohol Dependence. 2002;65(3):283–290. doi: 10.1016/s0376-8716(01)00171-5.
  11. Gowing LR, Farrell M, Bornemann R, Sullivan LE, Ali RL. Methadone treatment of injecting opioid users for prevention of HIV infection. Journal of General Internal Medicine. 2006;21(2):193–195. doi: 10.1111/j.1525-1497.2005.00287.x.
  12. Griffith JD, Rowan-Szal GA, Roark RR, Simpson DD. Contingency management in outpatient methadone treatment: A meta-analysis. Drug and Alcohol Dependence. 2000;58:55–66. doi: 10.1016/s0376-8716(99)00068-x.
  13. Harlow LL, Mulaik SA, Steiger JH, editors. What If There Were No More Significance Tests? Lawrence Erlbaum Associates; Mahwah, NJ: 1997.
  14. Irvin JE, Bowers CA, Dunn ME, Wang MC. Efficacy of relapse prevention: A meta-analytic review. Journal of Consulting and Clinical Psychology. 1999;67(4):563–570. doi: 10.1037//0022-006x.67.4.563.
  15. Johansson BA, Berglund M, Lindgren A. Efficacy of maintenance treatment with naltrexone for opioid dependence: A meta-analytical review. Addiction. 2006;101(4):491–503. doi: 10.1111/j.1360-0443.2006.01369.x.
  16. Krantz DH. The null hypothesis testing controversy in psychology. Journal of the American Statistical Association. 1999;94(448):1372–1381.
  17. Lees J, Manning N, Rawlings B. A culture of enquiry: Research evidence and the therapeutic community. Psychiatric Quarterly. 2004;75(3):279–294. doi: 10.1023/b:psaq.0000031797.74295.f8.
  18. Lipsey MW. Those confounded moderators in meta-analysis: Good, bad, and ugly. Annals of the American Academy of Political and Social Science. 2003;587(May):69–81.
  19. Lipton DS, Pearson FS, Cleland CM, Yee D. The effects of therapeutic communities and milieu therapy on recidivism: Meta-analytic findings from the Correctional Drug Abuse Treatment Effectiveness (CDATE) study. In: McGuire J, editor. Offender Rehabilitation and Treatment: Effective Programmes and Policies to Reduce Re-offending. John Wiley; London: 2003. pp. 39–77.
  20. Loftus GR. Psychology will be a much better science when we change the way we analyze data. Current Directions in Psychological Science. 1996;5:161–171.
  21. Lussier JP, Heil SH, Mongeon JA, Badger GJ, Higgins ST. A meta-analysis of voucher-based reinforcement therapy for substance use disorders. Addiction. 2006;101:192–203. doi: 10.1111/j.1360-0443.2006.01311.x.
  22. Magill M, Ray LA. Cognitive-behavioral treatment with adult alcohol and illicit drug users: A meta-analysis of randomized controlled trials. Journal of Studies on Alcohol and Drugs. 2009;70:516–527. doi: 10.15288/jsad.2009.70.516.
  23. Marsch LA. The efficacy of methadone maintenance interventions in reducing illicit opiate use, HIV risk behaviors and criminality: A meta-analysis. Addiction. 1998;93(4):515–532. doi: 10.1046/j.1360-0443.1998.9345157.x.
  24. Mattick RP, Breen C, Kimber J, Davoli M. Methadone maintenance therapy versus no opioid replacement therapy for opioid dependence. Cochrane Database of Systematic Reviews. 2009;(3). Art. No.: CD002209. doi: 10.1002/14651858.CD002209.pub2.
  25. Mitchell O, Wilson DB, MacKenzie DL. The effectiveness of incarceration-based drug treatment on criminal behavior. Campbell Collaboration Systematic Review. 2006. Available at http://www.campbellcollaboration.org/frontend2.asp?ID=122.
  26. National Institute on Drug Abuse. Principles of Drug Addiction Treatment: A Research-Based Guide. 1999. Retrieved June 20, 2002, from http://www.drugabuse.gov/PODAT/PODATIndex.html.
  27. National Institute on Drug Abuse. Research Report Series: Therapeutic Community. 2002. Retrieved April 17, 2011, from http://www.nida.nih.gov/researchreports/therapeutic/default.html.
  28. National Institute on Drug Abuse. Principles of Drug Addiction Treatment: A Research-Based Guide (Second Edition). 2009. Retrieved April 11, 2011, from http://www.drugabuse.gov/PODAT/PODATIndex.html.
  29. Parhar KK, Wormith JS, Derkzen DM, Beauregard AM. Offender coercion in treatment: A meta-analysis of effectiveness. Criminal Justice and Behavior. 2008;35:1109–1135.
  30. Powers MB, Vedel E, Emmelkamp PMG. Behavioral couples therapy (BCT) for alcohol and drug use disorders: A meta-analysis. Clinical Psychology Review. 2008;28(6):952–962. doi: 10.1016/j.cpr.2008.02.002.
  31. Prendergast ML, Podus D, Chang E, Urada D. The effectiveness of drug abuse treatment: A meta-analysis of comparison group studies. Drug and Alcohol Dependence. 2002;67(1):53–72. doi: 10.1016/s0376-8716(02)00014-5.
  32. Prendergast ML, Urada D, Podus D. Meta-analysis of HIV risk-reduction interventions within drug abuse treatment programs. Journal of Consulting and Clinical Psychology. 2001;69(3):389–405. doi: 10.1037//0022-006x.69.3.389.
  33. Prendergast M, Podus D, Finney J, Greenwell L, Roll J. Contingency management for treatment of substance use disorders: A meta-analysis. Addiction. 2006;101:1546–1560. doi: 10.1111/j.1360-0443.2006.01581.x.
  34. R Development Core Team. R: A Language and Environment for Statistical Computing (reference index version 2.10.0). R Foundation for Statistical Computing; Vienna, Austria: 2009. URL http://www.R-project.org.
  35. Rosenthal R, Rosnow RL, Rubin DB. Contrasts and Effect Sizes in Behavioral Research: A Correlational Approach. Cambridge University Press; New York: 2000.
  36. Schmidt F, Hunter J. Are there benefits from NHST? American Psychologist. 2002;57(1):65–71.
  37. Sharpe D. Of apples and oranges, file drawers and garbage: Why validity issues in meta-analysis will not go away. Clinical Psychology Review. 1997;17:881–901. doi: 10.1016/s0272-7358(97)00056-1.
  38. Smith LA, Gates S, Foxcroft D. Therapeutic communities for substance related disorder. Cochrane Database of Systematic Reviews. 2006;(1). Art. No.: CD005338. doi: 10.1002/14651858.CD005338.pub2.
  39. Stanton MD, Shadish WR. Outcome, attrition, and family-couples treatment for drug abuse: A meta-analysis and review of the controlled, comparative studies. Psychological Bulletin. 1997;122(2):170–191. doi: 10.1037/0033-2909.122.2.170.
  40. Vanderplasschen W, Wolf J, Rapp RC, Broekaert E. Effectiveness of different models of case management for substance-abusing populations. Journal of Psychoactive Drugs. 2007;39(1):81–85. doi: 10.1080/02791072.2007.10399867.
  41. Viechtbauer W. Package “metafor”. 2009. Retrieved November 4, 2009, from http://www.wvbauer.com/.
  42. Walfish S. A review of statistical outlier methods. Pharmaceutical Technology. 2006. Retrieved December 4, 2009, from http://www.statisticaloutsourcingservices.com/Outlier2.pdf.
  43. Wilkinson L, Task Force on Statistical Inference. Statistical methods in psychology journals: Guidelines and explanations. American Psychologist. 1999;54(8):594–604.
  44. Wilson DB. Meta-Analysis Stuff. 2006. Retrieved October 26, 2007, from http://mason.gmu.edu/~dwilsonb/ma.html.
