Abstract
Aims
With the increased need for sanctioning behavioral addiction treatments to guide key stakeholders, focus has shifted to developing and applying criteria for establishing empirically-supported treatments (EST). Among the many criteria offered, demonstration of incremental efficacy over a placebo or comparison in at least two independent randomized clinical trials (RCTs) has been the gold standard. While necessary, the present EST criteria are not sufficient. The present work: (1) argues for empirically supported specificity in behavioral addiction treatment, (2) explores the limitations of empirical support for EST efficacy without evidence of specificity, and (3) discusses implications and recommendations for ultimately raising the bar for status as an EST.
Methods
The authors review relevant literature on ESTs, evidence-based practice, and clinical trial design in the addictions and related disciplines.
Results
We clarify that the additional bar of specificity does not denote uniqueness in causal processes and we argue that specificity should not be inferred only via the nature of the experimental contrast. Rather, a treatment has specificity if its active ingredients are identified and empirically validated as predictors of subsequent treatment-related outcomes. Within this new definition, there are implications for clinical research and other key stakeholders.
Conclusions
A heightened centrality of empirically-supported addiction treatment ingredients moving forward will advance clinical knowledge and evaluation methodology at a far greater pace.
Keywords: Active Ingredients, Empirically-Supported Treatment, Evidence-Based Practice, Mechanisms of Behavior Change, Randomized Clinical Trials
Introduction
With the increased need for sanctioning behavioral addiction treatments to guide key stakeholders, focus has shifted to developing and applying criteria for establishing empirically-supported treatments (EST). Additional standards of clinician accountability in evidence-based practice (EBP) [1, 2] include setting measurable treatment goals, engaging in a dialogue of informed consent, assessing intervention outcomes, and providing feedback on progress. Treatment selection in EBP is informed by, though not exclusive to, information on ESTs [3]. Among the many criteria offered for establishing an EST, demonstration of incremental efficacy over a placebo or comparison in at least two independent randomized clinical trials (RCTs) has been the gold standard; when support is found in only one study or in two RCTs by a single research team, a treatment achieves status as possibly efficacious [4]. Meeting the gold standard criterion is necessary, but should not be sufficient. Without identifying the active ingredients of a given treatment, utilization of the EST criteria may inadvertently promote irrelevant or even contraindicated practices. In this article, we argue for an additional evidential criterion when judging status as an EST. Our arguments are informed by recent movements in the addictions as well as the broader field of psychotherapy research. Although complementary considerations can be made for pharmacological interventions, we restrict our discussion to behavioral treatments.
Background
Brief history
Interest in ESTs evolved out of a combination of advancing scientific knowledge on clinical care and increased reliance on technological/managerial systems more generally. Two originating movements advocating an integration of research in clinical training and practice were the United States' Boulder Scientist-Practitioner Model of training in clinical psychology and Sackett and colleagues' [2] model of Evidence-Based Medicine in the United Kingdom. A call for research-informed practice then required greater research accessibility. Internationally, organizations emerged dedicated to research dissemination and supporting research-informed health care (e.g., The United Kingdom's Cochrane and Campbell Collaborations). In mental health, the 1995 American Psychological Association Task Force on the Promotion and Dissemination of Psychological Procedures (now termed The Council on Science and Practice) provided a provisional list of ESTs and criteria to be met when attempting to achieve EST status. Subsequent reports provided further detail on criteria for treatment validation [4, 5]; though never perfect nor complete, its proponents have stated appropriately: "the field must start somewhere" [6] (p. 37).
Key definitions
Demonstration of a treatment's efficacy in two independent and methodologically rigorous RCTs establishes a treatment as empirically-supported. Dimensions of methodological rigor include well-justified and described sample characteristics, outcome measurement with validated measures, treatment manualization and demonstrated fidelity, and appropriate statistical analyses. The treatment manual is the document that, in combination with training, teaches the clinician how to deliver the treatment. It describes the conceptual foundations underlying the treatment and procedures for its delivery. The manual further selectively describes what are considered to be the key features of the treatment, that is, its hypothesized active ingredients. Active ingredients are the processes and interventions within the treatment that predict incremental outcomes, and we distinguish these from client mechanisms, which are the causal change processes that occur within the person and account for a given treatment and/or ingredient's effect.
Chambless and Hollon [4] speak to efficacy over no treatment (i.e., waitlist, assessment only) as the minimum threshold for empirical support, which they describe as efficacy without specificity. In other words, the conclusion can be drawn that something occurred within the treatment that was beyond what would have occurred without the treatment. Efficacy with specificity is achieved when a treatment demonstrates superiority under conditions where nonspecific and/or treatment-specific processes, known to affect behavior change, have been controlled. Table One summarizes degrees of specificity when achieved within the RCT framework; it provides exemplars of experimental contrasts and the corresponding causal inferences. Chambless and Hollon [4] state that head-to-head contrasts of 'bona-fide' interventions provide the most stringent test of all and "…have implications for theory because they increase confidence in the specific explanatory model on which the treatment is based" (p. 8). In the present work, we define specificity differently. A totally specified treatment is one in which all of its active ingredients (nonspecific and/or specific) have been identified and empirically validated. Given this definition it will never be possible to totally specify a treatment; rather, a continuum of specificity is denoted and, as we discuss in further detail later, successive levels are achieved via a range of research strategies.
Table One.
Degree of specificity when achieved by RCT contrast

| Status | Contrast | Causal Inference |
|---|---|---|
| Efficacy without specificity¹ | Waitlist, assessment only | Causal processes occurred that were superior to what would have occurred due to research involvement, assessment, and/or time. |
| Efficacy with specificity¹ | Attention placebo, inert treatment | Causal processes occurred that were superior to what would have occurred due to research involvement, assessment, time, and nonspecific predictors (e.g., placebo, attention, information). |
| | ‘Treatment as usual’ | Causal processes occurred that were superior to what would have occurred due to research involvement, assessment, time, nonspecific predictors, and usual care². |
| | ‘Bona fide treatment’ | Causal processes occurred that were superior to what would have occurred due to research involvement, assessment, time, nonspecific predictors, and the effects of a competing treatment. |

Notes.
1. As defined by Chambless & Hollon (1998).
2. We separate ‘treatment as usual’ under the assumption that there are unique causal processes of usual care that are yet to be defined, but that are beyond nonspecific processes.
Current guidelines establishing an EST with minimum levels of specificity allow the field to move forward clinically while science continues to examine key features of ESTs. Ultimately, however, direct tests rather than inferred validation of a treatment's causal theory, and consequently its active ingredients, are necessary. Moreover, concerns identified in the literature as to the applicability (i.e., transportability of RCT findings to the clinical context; e.g., [7]), adaptability (i.e., degree of flexibility in EST implementation; e.g., [8, 9]), and clinical utility (i.e., readiness of the knowledge base to inform therapist decision making; e.g., [10, 11]) of the evidence provide valuable recommendations for the future of ESTs and subsequently, EBP. We believe that future efforts to empirically test the hypothesized active ingredients of ESTs will address a number of these concerns [12-14]. The intent of the present work is to: (1) argue for empirically-supported specificity in behavioral addiction treatment, (2) explore the limitations of empirical support for EST efficacy without evidence of specificity, and (3) discuss implications and recommendations for ultimately raising the bar for status as an EST.
The importance of expanding the evidence for ESTs
Efficacy, fidelity, and the reification of active treatment ingredients
In a traditional RCT, causal logic asserts that if the treatment is found to be efficacious in relation to a control or comparison condition and the clinical team has demonstrated fidelity to the treatment, then a relationship between the treatment and the outcome has been demonstrated. Within this approach, we implicitly and perhaps erroneously conclude that the experimental treatment has provided a clinical benefit due to the therapists' implementation of the treatment manual. The assumption is also that the treatment "package", or delivered combination of ingredients, defines the treatment and thus is responsible for its efficacy. As Figure One illustrates, the traditional RCT will demonstrate that the experimental treatment is associated with main effect outcomes, and that the experimental treatment was associated with the therapist's delivery of its hypothesized ingredients. How or if those ingredients affected client outcomes has not been established.
Figure One.
Causal paths tested and validated in the traditional RCT paradigm
Notes. In the traditional RCT paradigm, if the path from the Treatment A vs. B contrast to outcome is significant, then path a (treatment to ingredients) and path c (direct effect of treatment on outcome) are taken as demonstrated; the b path, from treatment ingredients to outcome, remains untested.
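The untested b path can be made concrete with a small simulation. The sketch below is illustrative only: the data are simulated rather than drawn from any trial, and names such as `ingredients` are hypothetical stand-ins for a measured dose-delivered process. It shows how the conventional analysis recovers paths a and c, while path b is revealed only if it is modeled directly.

```python
import numpy as np

# Illustrative generative model (hypothetical, not trial data): assignment
# raises ingredient delivery (path a), and delivered ingredients, not
# assignment per se, drive the client outcome (path b).
rng = np.random.default_rng(0)
n = 400
treatment = rng.integers(0, 2, n)                    # 0 = comparison, 1 = experimental
ingredients = 0.8 * treatment + rng.normal(0, 1, n)  # therapist dose-delivered
outcome = 0.5 * ingredients + rng.normal(0, 1, n)    # client improvement

def ols(y, X):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta  # [intercept, slopes...]

a = ols(ingredients, treatment)[1]   # path a: treatment -> ingredients
c = ols(outcome, treatment)[1]       # main effect: treatment -> outcome
# Path b: ingredients -> outcome, controlling for assignment; this is the
# regression the traditional RCT report omits.
b = ols(outcome, np.column_stack([treatment, ingredients]))[2]
print(f"path a = {a:.2f}, main effect c = {c:.2f}, path b = {b:.2f}")
```

In this simulated case the treatment "works" (c is positive) entirely through its delivered ingredients, but only the regression estimating b makes that visible.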
If the treatment works, does it matter why it works?
Psychosocial treatments that produce comparative efficacy when applied by independent investigators to a specified disorder have evidence of efficacy with inferred specificity. In actuality, tests of fidelity to the treatment, coupled with a demonstration that providers of the alternative treatment enacted fewer of the behaviors defining the more effective treatment, provide evidence only that treatment A is both more effective than treatment B and different from B. Across several RCTs with different populations and contextual circumstances, consistent superiority of treatment A over alternatives would be all the specificity that is required. However, for most behavioral treatments for addictive disorders, invariant efficacy has not been achieved. In other words, implementation of the hypothesized active ingredients has not invariantly assured superior improvement to an alternative that is hypothesized to not include those ingredients. At minimum, this leads to a judgment that the treatment is not incrementally effective under the conditions in which it was delivered, and/or for this defined patient population. A more fundamental judgment could be that the ingredients of the treatment that are active in enhancing outcomes are not the ones hypothesized [15, 16]. To the extent that future RCTs yield results of no difference, the need for specificity as to how the treatment works becomes more compelling.
There are a number of alternative explanations for the effectiveness, and even experimental superiority, of particular treatments that have been thoughtfully raised in the literature. Wampold [17], in his meta-analytic review of psychotherapy studies, has shown that a good portion of treatment effect magnitude is accounted for by researcher allegiance to the experimental treatment, and thus the independent replication criterion for an EST is well-founded. The superiority of the more effective treatment may be a result of any number of sources of bias in RCT design. It may also be that we are partially correct in hypothesized ingredients, but we rarely know which part. There is a well-established literature on the importance of nonspecific, or common, processes (e.g., therapist empathy, working alliance, instillation of client hope) in relation to psychotherapy outcomes [18-21]. Yet, the role of these common processes in RCTs has received relatively minimal attention in the literature. For example, even when control over nonspecific factors is the goal of the clinical trial, a completely defined and tested control condition is exceedingly rare. In other words, what is controlled by the control condition is an inference [16]. Finally, what if two purportedly different treatments perform the same? These two established treatments may be equally efficacious because their causal elements are common, or the elements may be unique to each treatment but equally effective [22]. Taken in sum, these scenarios underscore what little we know about how behavioral addiction treatments work, and subsequently, how little we have to advocate to frontline providers of addiction services.
Effectiveness, training, and real world implications
Therapists and researchers share the goal of aiding patient improvement through effective treatment delivery. However, few therapists have the luxury of implementing a perfectly linear succession of intervention components as would be the case, and therefore empirically supported, in a tightly controlled clinical trial. Differential responding to multiple presenting concerns is the rule and not the exception [23]. Demonstration of effectiveness of ESTs in dissemination research informs providers of the degree of flexibility in treatment delivery while remaining evidence based. Yet, treatment dissemination research in the addictions has historically had mixed results. Provider attitudes toward ESTs, and even specific models, are often positive, and adoption in frontline settings is possible, but the evidence on EST implementation in relation to main effect outcomes is not straightforward [24]. Studies show a positive effect of training combined with consultation, feedback, or supervisory contact on increased therapist adherence to the treatment [25-27], but trained community therapist adherence has not necessarily demonstrated differential effects on client outcomes when compared to treatment as usual [e.g., 28]. This suggests that while frontline providers can effectively implement ESTs, similar outcomes across degrees of adherence or in contrast with usual services call into question the central importance of the putative mode of delivery as identified in the manual. To justify adoption of a specific EST, we must identify the degree of flexibility in implementation that is possible while also remaining efficacious over existing service provision. These essential data might also engender greater buy-in from frontline providers, thus closing the research-to-treatment gap. This can be accomplished only after we have specified not just the active ingredients, but also the factors that may moderate their effectiveness.
If our ESTs were consistently both efficacious and greater than moderately effective, the questions raised thus far would be of intellectual interest only. It is the case that 100% of outcome variance will never be explained. However, by specifying a treatment’s active ingredients we will then be able to examine the possible conditions that affect their efficacy. Patient characteristics and behaviors as well as their contextual environments may affect the strength or direction of effect of delivered ingredients, as may the context in which the treatment is provided. Clinicians need to know how these factors will impact their implementation of an EST. Therefore, we advocate a shift in scientific inquiry that has been an emerging topic of interest [12, 15, 29-31]. Studies targeting treatment efficacy, effectiveness, and ingredients or client mechanisms, have to date existed largely in tandem, yielding valuable findings that are often difficult to integrate into a unified knowledge base for a given treatment [32]. The treatment is nevertheless touted for frontline implementation. Until a more concerted effort occurs, the RCT preponderance of evidence supports the efficacy of the treatment and it is deemed an EST, but as long as the failure rate of the treatment is non-negligible, patients continue to suffer unnecessarily and the society that supports the treatment is not optimizing its cost-benefit ratio.
The implications of expanding the evidence for ESTs
Criterion definition
We propose an extended definition and therefore additional criterion for status as an EST. The field is at a critical juncture where continued treatment development followed by randomized control comparisons will yield diminishing returns [15, 22]. This may be with regard to effect size magnitude in ‘bona fide’ contrasts and/or redundancy of intervention ingredients [33]. The additional criterion that a given behavioral treatment for addictive disorders should meet is the specification of the important processes or components that carry that treatment’s effects. These processes or components, i.e., ingredients, would be empirically validated, i.e., deemed active, as causal predictors of subsequent client mechanisms and/or main treatment effects, and such analyses would occur as an extension of clinical trials methodology, as well as in other complementary experimental tests (e.g., dismantling designs, analogue experiments). We note that to have evidence of specificity does not equate with evidence of uniqueness; rather, it is to be defined in empirical terms. Initially, research contrasts may be in relation to an inert comparison, but over time, comparative examination of differing treatment theories would bring to light where ESTs do and do not actually differ. To summarize, we define the additional bar of empirical support for a given EST as: the specification and empirical validation of a given efficacious treatment’s active ingredients (nonspecific and/or specific) as incremental predictors of subsequent client mechanisms and/or main effect outcomes.
Implications and recommendations for clinical research
When a higher bar for EST specificity is set, a central implication for clinical research is that the breadth of our gaps in knowledge will be revealed. At present, the essential aim of the RCT is to test the efficacy of one or more experimental treatments. When causal ingredients are of interest, they are most often relegated to secondary aims. A shift in priority that puts confirmation (or refutation) of a treatment's underlying causal theory at the same level of importance as demonstrated treatment efficacy will more rapidly increase our knowledge and the subsequent clinical utility of our treatments than the unmodified RCT paradigm. Table Two illustrates this point. Column one portrays the typical RCT design where treatment efficacy is either supported or unsupported. Regardless, without testing for active ingredients of the experimental treatment, we have no empirically-supported rationale for the observed effects. In contrast, when tests of active ingredients are included as a co-primary aim of the design, we can differentiate theoretically-validated effects from Type One and Type Two theoretical errors. In cell 1.2, the research findings support both efficacy and the theory underlying hypothesized incremental effectiveness over comparison; the treatment in this case is specified (or more likely, partially specified to the extent of outcome variance explained). In cell 2.3, there is no support for incremental effectiveness or the underlying causal theory. In both of these cases the evidence is convergent. In cell 1.3, treatment efficacy is supported but its causal theory is not. This Type One theoretical error indicates that whatever the causes of treatment effectiveness, they are not those supposed a priori. In the absence of such a theoretical test, this treatment would move forward as an EST, carrying with it an unproven rationale. In cell 2.2, the treatment is not observed to have incremental effectiveness, while the theory underlying its rationale is supported.
Because of the lack of demonstrated efficacy, this treatment may not move forward as an EST despite its theoretical support, a Type Two theoretical error. In this latter case, the treatment produces a benefit through its hypothesized ingredients, but these particular causal processes do not produce a differential benefit over those occurring within the contrast condition. Without a concurrent theoretical test of both, or all, conditions, clinical providers and researchers are at risk for perpetuating ineffective, inefficient, or redundant practices while missing the opportunity to capitalize on others.
Table Two.
Knowledge derived from RCT design with and without causal modeling

| | Theory not tested | Theory tested and supported | Theory tested and unsupported |
|---|---|---|---|
| Efficacy supported | 1.1 Unknown causal effects | 1.2 Treatment specified | 1.3 Theory failure |
| Efficacy unsupported | 2.1 Unknown causal effects | 2.2 Treatment not validated as EST | 2.3 Treatment and theory failure |
When empirical validation of treatment theory holds the same value as empirical validation of treatment efficacy, clinical research will require greater attention to research design factors that can impact experimental treatment effect sizes, for better or worse. A full methodological review of such factors is beyond the scope of this work, but we argue for an incremental approach to measuring all sources of participant change. By incremental, we underscore that at each point in the clinical trial, change is incurred, even if at some points it is incurred equally in all conditions. For example, in the pre-treatment/pre-randomization phase, change may occur due to agreement to engage in a clinical research study [34] as well as subsequent research assessment procedures [35]. Although this phenomenon may affect study groups equally, it can mute differential effect sizes, thus reducing power for subsequent causal analyses. By incorporating measurement intervals from screening to assessment and from assessment to the first treatment session, RCT designs could differentiate between ‘true’ responders and pre-treatment responders. Next, randomization will hold pre-treatment influences constant, but cannot continue subsequent control during the active treatment phase [31]. When attempting to isolate the influence of treatment ingredients, the issue of dose-received becomes critical. Therefore, useful covariates may include: number of sessions attended, a threshold variable of minimum dose, or other indices of compliance. Finally, and also during the active treatment phase, the possible influence of individual therapists on ingredient delivery must be empirically examined [36]. The overarching theme here is sensitivity of measurement at each pre-follow-up phase of the study.
By doing so, we are attempting to dismantle key sources of variance in the measured treatment effect size (or lack thereof) thus optimizing subsequent efforts to conduct what can be complex and underpowered tests of active ingredients.
While the above recommendations relate most often to sources of variance to control, future clinical research should also incorporate correlational process models and experimental methods to test a priori causal theories, and this includes a greater recognition of change processes occurring in contrast conditions. This can be achieved in a number of ways, but we will highlight some key options. When attempting to specify active ingredients of treatments, a central approach is correlational mediation analysis (for discussion of mediation design in intervention research see e.g., [37, 38]). Here, active ingredients can be measured in two primary ways: via measurement of enacted behaviors of the therapist (dose-delivered) or by assessing client changes theorized to result from intervention (dose-received). Either way, the approach reduces the treatment manual to measurable and testable elements. For example, important gains have been achieved in the motivational interviewing literature via observational process coding of therapy sessions to examine prescribed (e.g., questions, reflections) and proscribed (e.g., confrontations) therapist behaviors as well as theory-driven client proximal responses (e.g., statements of commitment or anti-commitment to change; for review, see [39]). Where this work has been limited is in comparable testing for the alternative hypothesis that parallel processes are occurring within contrast conditions. In other words, even greater knowledge gains regarding differential efficacy are possible when what is ‘active’ in comparison conditions is also empirically demonstrated [40]. Finally, experimental methods that compare treatments with and without key ingredients (i.e., dismantling), that manipulate treatment prescribed practices, or that target a single process in an analogue laboratory test, provide the strongest test for active treatment ingredients, and should represent a much higher priority for future clinical research [32].
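As one concrete illustration of the correlational mediation approach described above, the sketch below estimates the a path (assignment to a dose-delivered process measure) and the b path (process measure to outcome, adjusting for assignment), and forms a Sobel-style z for their product. All data are simulated, and the names `mi_skill` and `drinking_change` are hypothetical placeholders rather than measures from any actual trial.

```python
import numpy as np

def slope_and_se(y, X, idx):
    """OLS slope and standard error for column `idx` of X (intercept added)."""
    X = np.column_stack([np.ones(len(y)), np.atleast_2d(X).reshape(len(y), -1)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])          # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)                   # coefficient covariance
    return beta[idx + 1], np.sqrt(cov[idx + 1, idx + 1])

# Simulated trial (hypothetical variables): assignment raises a coded
# therapist process (dose-delivered), which in turn predicts outcome change.
rng = np.random.default_rng(1)
n = 300
tx = rng.integers(0, 2, n)
mi_skill = 0.7 * tx + rng.normal(0, 1, n)          # e.g., coded reflections
drinking_change = 0.6 * mi_skill + rng.normal(0, 1, n)

a, se_a = slope_and_se(mi_skill, tx, 0)                                      # path a
b, se_b = slope_and_se(drinking_change, np.column_stack([tx, mi_skill]), 1)  # path b
indirect = a * b
z = indirect / np.sqrt(b**2 * se_a**2 + a**2 * se_b**2)    # Sobel-style z
print(f"indirect effect = {indirect:.2f}, z = {z:.2f}")
```

In practice, bootstrap confidence intervals are generally preferred over the Sobel approximation for indirect effects, and the same machinery should be applied to process measures coded in the contrast condition, per the parallel-process point above.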
Implications for other stakeholders
When current addiction treatments are held to the standard of EST with specificity, it might be that few will immediately qualify. This circumstance need not result in the field moving from a place of knowing to unknowing. That is, adoption of a bar of efficacy with specificity does not negate knowledge derived to date and the capacity of current ESTs to promote client improvement. Existing treatment guidelines (e.g., United States’ Committee on Science and Practice; United Kingdom’s National Institute for Health and Clinical Excellence) would remain key resources for consumer and provider education. Concurrently, however, given the necessity of establishing specificity, RCT designs would change markedly to achieve a higher threshold, and this would require a paradigm shift in funding priority. A broad-scale emphasis would increase pre-study deliberation, leading to more informative clinical trials; information gleaned from such studies would establish treatment ingredients and subsequent client mechanisms, and this feedback loop would promote treatment modifications that enhance treatment efficiency sooner rather than later.
At present, addiction treatments such as motivational interviewing, due to progress in the area of process analysis, and contingency management, due to the relative simplicity in manipulating its key causal ingredient (i.e., the incentive), would near or meet the bar of specificity [32], and for those that do not, achieved evidence of efficacy would be maintained (e.g., behavioral couples therapy; twelve-step facilitation). As important ingredients are identified, clinician and consumer educators would be positioned to offer a more detailed rationale for treatment efficacy, and we forecast this would hold a higher intuitive appeal to recipients of a prescribed course of action. Others argue that empirically supported principles, not therapies, should be the dissemination and training goals of the future (e.g., [33, 41, 42]). This latter approach may also resonate with agency, clinician, and consumer stakeholders. Empirically supported principles and empirically supported ingredients are highly compatible areas of knowledge, and only through concerted efforts toward their discovery will we be in the position to determine which should be the future priority.
Treatments achieving the goal of efficacy with specificity, and therefore empirically containing specified principles of delivery, will also require tests of clinical transportability. As the efficacious treatment is delivered to more heterogeneous patient populations in diverse settings by more diverse therapists, the causal properties of specified ingredients might not remain consistently robust. The treatment theorist must then think through the likely set of conditions under which treatment effects hold, and for what patient populations. The resulting knowledge base would feed back to EBP practitioners as well as clinical training programs, with the ultimate goal of addressing a 43-year-old question posed by Gordon Paul [43]:
“What treatment, by whom, is most effective for this individual with what specific problem, under which set of circumstances, and how does it come about?” (p. 44)
Conclusions
The movement toward understanding active treatment ingredients and client change mechanisms is a force in motion. We hypothesize that adding the requirement of establishing specificity as an essential criterion for a fully established EST will greatly accentuate the importance of understanding how behavioral addiction treatments work, leading to more effective clinical training and treatment. Until validation of active ingredients is achieved, all treatments falling short of this high bar should be designated with the provisional efficacy without specificity status, that is, an efficacious treatment with an unproven set of underlying causal processes. Obviously, we should not withhold efficacious treatments because they have not achieved this level of support, but demonstration of the active ingredients of a treatment in tandem with demonstrations of efficacy should be recognized as the gold standard for status as an EST.
ACKNOWLEDGEMENT
The present work is supported by a grant awarded by the National Institute on Alcohol Abuse and Alcoholism (NIAAA; K23AA018126) to Dr. Molly Magill. Both authors contributed equally to the completion of this manuscript. This work is the responsibility of the authors, reflects the positions of the authors, and does not represent an official position of NIAAA or the National Institutes of Health.
References
- [1]. APA Presidential Task Force on Evidence-Based Practice. Evidence-based practice in psychology. American Psychologist. 2006;61(4):271–85. doi: 10.1037/0003-066X.61.4.271.
- [2]. Sackett DL, Rosenberg WMC, Muir Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn’t. British Medical Journal. 1996;312:71–2. doi: 10.1136/bmj.312.7023.71.
- [3]. Levant RF, Hasan NT. Evidence-based practice in psychology. Professional Psychology: Research and Practice. 2008;39(6):659–62.
- [4]. Chambless DL, Hollon SD. Defining empirically supported therapies. Journal of Consulting and Clinical Psychology. 1998;66(1):7–18. doi: 10.1037/0022-006x.66.1.7.
- [5]. Chambless DL, Sanderson WC, Shoham V, Bennett-Johnson S, Pope KS, Crits-Christoph P, et al. An update on empirically validated therapies. Clinical Psychologist. 1996;49:5–18.
- [6]. DeRubeis RJ, Crits-Christoph P. Empirically supported individual and group psychological treatments for adult mental disorders. Journal of Consulting and Clinical Psychology. 1998;66(1):37–52. doi: 10.1037/0022-006x.66.1.37.
- [7]. Seligman M. The effectiveness of psychotherapy: The Consumer Reports study. American Psychologist. 1995;50(12):965–74. doi: 10.1037/0003-066x.50.12.965.
- [8]. Goldfried M, Wolfe B. Psychotherapy practice and research: Repairing a strained relationship. American Psychologist. 1996;51(10):1007–16. doi: 10.1037/0003-066x.51.10.1007.
- [9]. Silverman WH. Cookbooks, manuals, and paint-by-numbers: Psychotherapy in the 90s. Psychotherapy. 1996;33(2):207–15.
- [10]. Hunsley J. Addressing key challenges in evidence-based practice in psychology. Professional Psychology: Research and Practice. 2007;38(2):113–21.
- [11]. Stricker G. Evidence-based practice: The wave of the past. Counseling Psychologist. 2003;31(5):546–54.
- [12]. Longabaugh R, Donovan DM, Karno MP, McCrady BS, Morgenstern J, Tonigan JS. Active ingredients: how and why evidence-based alcohol behavioral treatment interventions work. Alcoholism: Clinical and Experimental Research. 2005;29(2):235–47. doi: 10.1097/01.alc.0000153541.78005.1f.
- [13]. Longabaugh R, Magill M. Recent advances in behavioral addiction treatments: focusing on mechanisms of change. Current Psychiatry Reports. 2011;13(5):382–9. doi: 10.1007/s11920-011-0220-4.
- [14]. Magill M. The future of evidence in evidence-based practice: Who will answer the call for clinical relevance? Journal of Social Work. 2006;6(2):101–15.
- [15]. Kazdin AE, Nock MK. Delineating mechanisms of change in child and adolescent therapy: Methodological issues and research recommendations. Journal of Child Psychology and Psychiatry. 2003;44(8):1116–29. doi: 10.1111/1469-7610.00195.
- [16]. Lohr JM, DeMaio C, McGlynn FD. Specific and nonspecific treatment factors in the experimental analysis of behavioral treatment effects. Behavior Modification. 2003;27:322–68. doi: 10.1177/0145445503027003005.
- [17].Wampold BE. The great psychotherapy debate: Model, methods and findings. Lawrence Erlbaum; Mahwah, NJ: 2001. [Google Scholar]
- [18].Imel ZE, Wampold BE, Miller SD, Fleming RR. Distinctions without a difference: Direct comparisons of psychotherapies for alcohol use disorders. Psychology of Addictive Behaviors. 2008;22(4):533–43. doi: 10.1037/a0013171. [DOI] [PubMed] [Google Scholar]
- [19].Lambert MJ, Barley DE. Psychotherapy relationships that work: Therapist contributions and responsiveness to patients. Oxford University press; New York: 2002. Research summary on the therapeutic relationship and psychotherapy outcome; pp. 17–32. [Google Scholar]
- [20].Messer S, Wampold BE. Let’s face facts: Common factors are more potent than specific therapy ingredients. Clinical Psychology: Science and Practice. 2002;9(1):21–5. [Google Scholar]
- [21].Norcross JC. Psychotherapy relationships that work: Therapist contributions and responsiveness to patients. Oxford University Press; New York: 2002. [DOI] [PubMed] [Google Scholar]
- [22].Longabaugh R. The search for mechanisms of change in behavioral treatments for alcohol use disorders: A commentary. Alcoholism: Clinical and Experimental Research. 2007;31:21S–32S. doi: 10.1111/j.1530-0277.2007.00490.x. [DOI] [PubMed] [Google Scholar]
- [23].Persons JB, Silberschatz G. Are results of randomized controlled trials useful to psychotherapists? Journal of Consulting and Clinical Psychology. 1998;66(1):126–35. doi: 10.1037//0022-006x.66.1.126. [DOI] [PubMed] [Google Scholar]
- [24].Garner BR. The diffusion of evidence-based treatments in substance abuse treatment: A systematic review. 2009;36:376–99. doi: 10.1016/j.jsat.2008.08.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [25].Moyers TB, Manuel JK, Wilson PG, Talcott W, Durand P, Hendrickson SML. A randomized trial investigating training in motivational interviewing for behavioral health providers. Behavioural and Cognitive Psychotherapy. 2008;36(2):149–62. [Google Scholar]
- [26].Sholomskas DE, Syracuse-Siewert G, Rounsaville BJ, Ball SA, Nuro KF, Carroll KM. We don’t train in vain: A Dissemination trial of three strategies of training clinicians in cognitive-behavioral therapy. Journal of Consulting and Clinical Psychology. 2005:106–15. doi: 10.1037/0022-006X.73.1.106. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [27].Sholomskas DE, Carroll KM. One small step for manuals: Computer-assisted training in twelve-step facilitation. Journal of Studies on Alcohol and Drugs. 2006;67(6):939–45. doi: 10.15288/jsa.2006.67.939. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [28].Morgenstern J, Keller DS, Morgan TJ, McCrady BS, Carroll K. Manual-guided cognitive-behavioral therapy training: A promising method for disseminating empirically supported substance abuse treatments to the practice community. Psychology of Addictive Behaviors. 2001;15(2):83–8. [PubMed] [Google Scholar]
- [29].Huebner RB, Tonigan JS. The search for mechanisms of behavior change in evidence-based behavioral treatments for alcohol use disorders: Overview. Alcoholism: Clinical and Experimental Research. 2007;31(10):1S–3S. doi: 10.1111/j.1530-0277.2007.00487.x. [DOI] [PubMed] [Google Scholar]
- [30].Morgenstern J, McKay JR. Rethinking the paradigms that inform behavioral treatment research for substance use disorders. Addiction. 2007;102:1377–89. doi: 10.1111/j.1360-0443.2007.01882.x. [DOI] [PubMed] [Google Scholar]
- [31].Tucker JA, Roth DL. Extending the evidence hierarchy to enhance evidence-based practice for substance use disorders. Addiction. 2006;101:918–32. doi: 10.1111/j.1360-0443.2006.01396.x. [DOI] [PubMed] [Google Scholar]
- [32].Longabaugh R, Magill M, Morgenstern J, Huebner RB. Mechanisms of behavior change in treatment for alcohol and other drug use disorders. In: McCrady BS, Epstein EE, editors. Addictions: A Comprehensive Guidebook. Oxford University Press; USA: In Press. [Google Scholar]
- [33].Rosen GM, Davison GC. Psychology should list empirically supported principles of change (ESPs) and not credential trademarked therapies or other treatment packages. Behavior Modification. 2003;27:300–12. doi: 10.1177/0145445503027003003. [DOI] [PubMed] [Google Scholar]
- [34].Sobell LC, Sobell MB, Connors GJ, Agrawal S. Assessing drinking outcomes in alcohol Treatment Efficacy Studies: Selecting a Yardstick of Success. Alcoholism: Clinical and Experimental Research. 2003;27(10):1661–6. doi: 10.1097/01.ALC.0000091227.26627.75. [DOI] [PubMed] [Google Scholar]
- [35].Clifford PR, Maisto SA. Subject reactivity effects and alcohol treatment outcome research. Journal of Studies on Alcohol and Drugs. 2000;61(6):781–93. doi: 10.15288/jsa.2000.61.787. [DOI] [PubMed] [Google Scholar]
- [36].Crits-Cristoph P, Baranackie K, Kurcias C, Beck A, Carroll K, Perry K, et al. Meta-analysis of therapist effects in psychotherapy outcome studies. Psychotherapy Research. 1991;1(2):81–91. [Google Scholar]
- [37].Judd CM, Kenny DA. Process analysis: Estimating mediation in treatment evaluations. Evaluation Review. 1981;5(5):602–19. [Google Scholar]
- [38].MacKinnon DP, Fairchild AJ, Fritz MS. Mediation analysis. Annual Review of Psychology. 2007;58:593–614. doi: 10.1146/annurev.psych.58.110405.085542. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [39].Miller WR, Rose GS. Toward a theory of motivational interviewing. American Psychologist. 2009;64(6):527–37. doi: 10.1037/a0016830. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [40].Bernstein J, Bernstein E, Heeren T. Mechanisms of change in control group drinking in clinical trials of brief alcohol intervention: Implications for bias toward the null Drug and Alcohol Review. 2010;29:498–507. doi: 10.1111/j.1465-3362.2010.00174.x. [DOI] [PubMed] [Google Scholar]
- [41].Beutler LE. Empirically Based decision making in clinical practice. Prevention and Treatment. 2000;3(1):6–22. [Google Scholar]
- [42].Manuel JK, Hagedorn HJ, Finney JW. Implementing evidence-based psychosocial treatment in specialty substance use disorder care. Journal of Substance Abuse Treatment. 2011:225–37. doi: 10.1037/a0022398. [DOI] [PMC free article] [PubMed] [Google Scholar]
- [43].Paul GL. Behavior modification research: Design and tactics. In: Franks CM, editor. Behavior therapy: Appraisal and status. McGraw-Hill; New York: 1969. pp. 29–62. [Google Scholar]
- [44].Franks CM. Behavior therapy: Appraisal and status. McGraw-Hill; New York: 1969. [Google Scholar]