Abstract
The National Institute on Drug Abuse (NIDA) Clinical Trials Network (CTN) is designed to test drug abuse treatment interventions in multisite clinical trials and to support the translation of effective interventions into practice. In this study, qualitative methods were applied to examine adoption of motivational interviewing and motivational enhancement therapy (MI/MET) in five clinics where these interventions were tested. Participants were clinic staff (n = 17) who were interviewed about the MI/MET study, and about whether MI/MET was adopted after the study ended. Although clinics’ participation in a clinical trial includes many elements thought to be necessary for later adoption of the intervention, we found that there was “adoption” in one clinic, “partial adoption” in one clinic, “counselor adoption” in one clinic, and “no adoption” in two clinics. These findings highlight a distinction between adoption at the organizational and counselor levels, and suggest that a range of adoption outcomes may be observed in the field. Findings are relevant to clinical staff, program directors, administrators and policy makers concerned with improvement of drug abuse treatment systems through adoption of evidence-based practices.
Keywords: adoption, clinical trials, dissemination, drug abuse treatment, implementation
The National Institute on Drug Abuse (NIDA) Clinical Trials Network (CTN) was established in 1999 with the goal of improving drug abuse treatment through systematic scientific effort (Hanson, Leshner & Tai 2002; O’Connor 2001). The CTN was NIDA’s response to an Institute of Medicine report which observed that new drug abuse treatments were often developed with little regard for the needs of community treatment programs and then tested in research environments where scientific control was high but similarity to routine clinical settings was low (Lamb, Greenlick & McCarty 1998). As a consequence, the report commented, community-based programs were slow to adopt research-based interventions.
The CTN is a national network comprising 16 research centers and more than 120 community-based treatment programs. Collaborative effort, or “bidirectionality,” is a guiding principle operationalized through equal representation of research and clinical leaders on the national steering committee, and in protocol development, planning, and implementation teams. The CTN aims to improve drug abuse treatment by determining the effectiveness of promising interventions in multisite trials, and by supporting the transfer of effective interventions into clinical practice (Hanson, Leshner & Tai 2002).
Now in its tenth year, the CTN has enrolled more than 10,000 participants (CTN 2009) into over 30 multisite studies in various stages of completion. Findings are available for randomized trials of buprenorphine (Ling et al. 2005), contingency management (Pierce et al. 2006; Roll et al. 2006; Petry et al. 2005), motivational interviewing (Carroll et al. 2006), motivational enhancement therapy (Ball et al. 2007), telephone-based continuing care (Hubbard et al. 2007), HIV risk reduction intervention (Calsyn et al. 2009), and smoking cessation (Reid et al. 2008). To support transfer of tested interventions into practice, the CTN developed cooperative efforts with Addiction Technology Transfer Centers (ATTCs) and “Blending Teams” to disseminate CTN findings to community-based treatment settings (Whitten 2005), and developed a series of training tools designed to support such dissemination (NIDA 2005–2007).
Concurrent with the development of the CTN has been a trend toward the use of evidence-based practice (EBP) or evidence-based treatment (EBT) in drug abuse treatment. The Iowa single state authority for drug and alcohol treatment developed a plan to support community-based programs in the adoption of EBPs, with the aim of making treatment funding contingent on use of these practices (Iowa Consortium for Substance Abuse Research and Evaluation 2003). Oregon Senate Bill 267 mandated use of EBPs in drug treatment settings and required that annually increasing proportions of all funds awarded through the state office of Addictions and Mental Health Services (AMH 2006) be used to provide EBPs. At the federal level, the Substance Abuse and Mental Health Services Administration (SAMHSA) is expanding the National Registry of Evidence-Based Programs and Practices (NREPP) beyond its current focus on drug abuse prevention to include drug abuse treatment (Federal Register 2005), with the goal that provider agencies can use NREPP to select EBPs (Clark 2006). Similarly, Requests for Applications for treatment expansion funds available through the Center for Substance Abuse Treatment, which are popular with treatment providers, require applicants to demonstrate the effectiveness of the interventions included in their proposals.
While the NIDA CTN works to identify effective interventions, and while state and national agencies press for accountability through use of EBPs, there is increasing interest in how research-based interventions are adopted into routine practice (Glasgow, Lichtenstein & Marcus 2003). Rogers observed that characteristics of the social system, such as its structure, norms, and decision-making processes, can affect diffusion (the process by which an innovation is communicated to others), and that characteristics of the intervention—such as trialability (how much an intervention can be used or tried in a limited way), compatibility with current practice, and complexity—influence adoption (Rogers 2003, 2002). Management and organizational literature has conceptualized adoption of new interventions in terms of organizational change, focusing on the organization rather than the individual, and considering factors such as organizational size, communication style, and culture (Burke & Litwin 1992). In an analysis relevant to current efforts to expand health care and to provide addiction services on parity with other healthcare services (Federal Register 2009), Scott and colleagues (2000) describe levels of impact on adoption that occur outside the organizational level, including the organizational set (defined by the relationship between one organization and other organizations important to its performance), organizational population (for example, drug abuse treatment programs within a specific county or state) and organizational field (including organizational populations and related agencies concerned with funding, regulation, and oversight).
Regarding the study of the adoption of addictions treatment specifically, most writings begin with Rogers’ (2003) landmark work on diffusion of innovation, first published in 1962. Rogers drew lessons from many different fields, including substance abuse, from which he cited studies of adoption of alcoholism counseling in employee assistance programs (Fennell 1984) and adoption of an early intervention for persons with addiction problems (Turner, Martin & Cunningham 1998). Backer (1995) added that availability of resources and the human dynamics of change, particularly organizational readiness for change, influence adoption. Simpson (2002) offered a conceptual model of how innovation occurs in drug abuse treatment through the sequential steps of exposure, adoption (defined as the intention to try a new practice), implementation, and practice. In conceptualizing adoption of new interventions, each of these models suggests that adoption is more likely to occur when programs have knowledge of the intervention, are exposed to the intervention through training, and have opportunity to practice the intervention so that they can evaluate it in their own setting.
Garner (2008) recently reviewed the literature concerning diffusion of EBTs in substance abuse and classified papers as to whether they concerned attitudes toward EBTs (25 studies), adoption of EBTs (31 studies), or implementation of EBTs (nine studies). Of the 65 papers reviewed, at least eight concerned the CTN directly or included CTN clinics as part of the sample, reflecting the contribution of the CTN to adoption literature in the area of substance abuse. Ducharme and colleagues (2007), for example, studied adoption of buprenorphine and motivational incentives in 1,006 programs that had differing levels of exposure to CTN studies. Some programs had participated directly in CTN clinical trials of these interventions, some were CTN programs that had not participated in these specific protocols, and some were programs outside the CTN. Programs that participated in CTN clinical trials of buprenorphine were five times more likely to report adoption of buprenorphine than were other programs, suggesting that exposure to this intervention through a CTN clinical trial may have increased buprenorphine adoption. However, this CTN exposure effect on adoption was not seen for clinics that participated in CTN studies of motivational incentives (Ducharme et al. 2007). The difference in adoption for these two interventions may suggest that, compared to pharmacotherapy interventions, psychosocial interventions are more difficult to adopt, or that their adoption is more difficult to measure. It is also possible, however, that characteristics of the interventions themselves are more or less compatible with current practices. Use of pharmacotherapy to treat opiate addiction has a long history, whereas use of material incentives to achieve positive outcomes may have less support in addictions treatment, both in terms of practitioner attitudes and in terms of how the use of incentives can be billed and reimbursed.
Treatment programs that are involved in testing new interventions meet some of the criteria believed to be important in adoption. They can assess trialability, compatibility, and complexity of the intervention in their own setting (Rogers 2003, 2002). They demonstrate some readiness to change when they volunteer to participate in such trials, and the trial itself provides resources for the intervention (Backer 1995). Through the clinical trial, they gain exposure to MI/MET and the opportunity to implement and practice it (Simpson 2002). For these reasons, adoption of new treatments seems most likely in clinics that are involved in testing those treatments (Guydish et al. 2007). There are very few studies, however, of adoption in clinics that have participated in clinical trials.
This article reports on a qualitative study of adoption in two CTN clinical trials designed to test motivational interviewing and motivational enhancement therapy (MI/MET). The overall goal was to explore the organizational level of adoption demonstrated through organizational commitment to post-trial use of the intervention, as reflected in comments by the counselor, supervisor, and director in each clinic.
METHODS
Motivational Interviewing and Motivational Enhancement Therapy Protocols
Motivational interviewing (MI) and motivational enhancement therapy (MET) are designed to enhance motivation for behavioral change. While MI represents a broader therapeutic approach, MET includes specific emphasis on personalized assessment, feedback, and change plans. MI/MET address client ambivalence toward change and are guided by collaboration between the interviewer and client, and the presumption that the resources for change reside in the client (Miller & Rollnick 2002). There is literature supporting the efficacy of MI (Hettema, Steele & Miller 2005; Burke, Arkowitz & Menchola 2003; Miller & Willbourne 2002), although at least one study failed to show better outcomes for MI compared to standard treatment (Miller, Yahne & Tonigan 2003).
In developing the MI/MET clinical trials, the CTN planning team confronted a real-world problem. Although motivational interventions are traditionally delivered in individual counseling, many treatment clinics rely heavily on a group format. To address this, two similar protocols were developed. The MET protocol, used in programs having individual counseling, included three individual sessions of MET compared to three individual sessions of standard care. In the MI protocol, used in programs that relied on group treatment but conducted the initial assessment as an individual session, a single MI session was attached to the initial assessment, and this MI-enhanced assessment condition was compared to standard assessment (Carroll et al. 2002).
A total of 11 sites participated in these protocols, with five sites testing MI and six sites testing MET. MI was associated with greater treatment retention at the 28-day assessment, but substance abuse outcomes did not differ between groups (Carroll et al. 2006). In the MET study, both the MET and control conditions showed decreased days of use during the four-week therapy phase. During the 12 weeks following the therapy phase, however, these drug use reductions were sustained in the MET condition but not for controls (Ball et al. 2007). In the current study of adoption, for logistic and cost reasons, program directors and staff at five study sites in the Western United States were interviewed. These included three clinics that tested MI and two clinics that tested MET.
The Multilevel Assessment Protocol (MAP)
The levels of organization supporting a multisite trial can vary between protocols; however, in the MI/MET protocols we identified seven organizational levels: intervention developer, clinical trial funder, lead node, protocol team, clinic director, clinic supervisor, and clinic counselor. These organizational levels map onto the infrastructure supporting CTN trials. In the CTN, executive functions of planning, decision-making, and oversight for any given trial are managed by a protocol team. The protocol team will often include the intervention developer and a NIDA project officer, who represents the funder. The lead node is responsible for study design and, later, for study execution, and coordinates staff training, supervision, data collection, and data management. The lead node is represented on the protocol team and leads its work. At each clinic participating in the trial there is a clinic director, supervisor, and one or more counselors who are involved in implementing study procedures. A qualitative approach was used to study adoption in an organizational context, and respondents were selected to represent each level of organization supporting the MI/MET trials.
Participants and Procedures
Study sample and recruitment
The person who designed the intervention, the project officer who represented the funding agency, and directors of individual clinics where the study was implemented were easily identifiable. Lead node and protocol team members were listed on the written study protocol. Clinic directors identified staff who were involved in the clinical trial and were still with the clinic or, if they had left the clinic, for whom contact information was available. In cases where individuals had roles in more than one organizational level, participants were selected using two criteria: first, that they had not already been interviewed as a member of a different organizational level and, second, that they had a leadership role in the study.
Initial contact was by mail, with telephone follow-up to assess willingness to participate and schedule interviews. Informed consent procedures were completed prior to each interview. Participants who had left their clinical site by the time of the interview were located and contacted using the same procedures. For each clinic, the goal was to interview one counselor who had delivered the MI/MET intervention and one who had delivered the treatment-as-usual intervention. A treatment-as-usual counselor was included because their perspective may indicate steps taken by the agency to adopt MI/MET. For example, since the trial had ended, they may have attended MI/MET trainings, may have been supervised concerning use of MI/MET, or may be aware of other ways the clinic was supporting MI/MET. Any of these could suggest an organizational shift toward adoption of MI/MET, and the absence of these events could suggest no organizational shift toward MI/MET.
Data collection
Semistructured interview guides (available from the first author) were developed by the study team, informed by organizational theory and reflecting domains that can influence adoption of interventions (Guydish et al. 2005). These include, for example, organizational structure and culture (Lamb, Greenlick & McCarty 1998; Burke & Litwin 1992), organizational readiness for change (Backer 1995), perception of the intervention (Rogers 2003; Backer 1991), and resources (Rogers 1995; Backer 1991). All interviews were audiotaped. Twenty-five interviews were conducted in person and four were done by phone.
Interview guides were developed for each level of assessment within the organizational structure of the clinical trial (e.g., intervention developer, clinical trial funder, clinic directors). While all respondents were asked questions regarding their role, the impact of the clinical trial, their perspective of the intervention, and their knowledge of adoption at clinic sites, each interview guide was designed to gather responses unique to one level. For the leadership and management levels (developer, funder, clinic directors), respondents were asked about adequacy of resources, their perspectives on organizational dynamics related to adoption, staff training on the intervention after the trial was completed, and dissemination of study results. Clinic staff, including clinic directors, were asked about current treatment practices, their perspectives on the effectiveness of the intervention, their knowledge of study results, and continued use of the intervention by the program.
The study plan was to interview participants after some time had elapsed following the completion of the trial, so that clinics had time to consider whether to continue to provide the treatment. MI/MET study recruitment was completed in all clinics in February 2003, and the study intervention phase was completed one month later. For the current study, interviews were conducted over an 18-month period and occurred, in any given clinic, from four to 13 months after the clinic had stopped providing MI/MET for research purposes. One counselor interview was conducted earlier, after study recruitment ended but before all MI/MET sessions were completed.
Clinics received $1,000 for study participation. At the discretion of the clinic director, this amount was paid to the clinic (four sites) or distributed to respondents within the clinic (one site). Respondents who were formerly employed by a clinic but who had since left the clinic received a cash reimbursement of $50. Financial incentives were not offered to respondents outside the clinic level (e.g., intervention developer, clinical trial funder, lead node), as these respondents were remunerated for their efforts in the context of the study award. All procedures were approved by the University of California, San Francisco, Institutional Review Board.
Data analysis
Analysis was conducted using a theoretical analytic framework derived from literature on organizational functioning and change theory (Rogers 2003; Backer 1995, 1991; Rogers 1995; Burke & Litwin 1992; Bulmer 1979). The framework allowed for use of key domains as analytic categories to examine participant perspectives. Analytic categories included organizational structure and culture, readiness for change, attitudes toward research, perceptions of intervention, resources, dissemination of study results, and reinvention. Closed codes were developed using these categories and codes were added as analysis proceeded (Boyle 1991).
Transcribed interviews were coded using ATLAS.ti™, a qualitative data analysis program. Initial codes were developed in the context of prior work (Guydish et al. 2005), where consistency between raters was supported by coding the first 14 interviews as a team to obtain agreement, and then having the primary coders (BT, SM) independently code five interviews, with a review for consistency by a third team member (MJ). Each interview was coded by two primary coders, meaning that each interview was coded twice, and these two sets of codes were merged prior to analysis. The final MI/MET codebook included 64 codes (available from the first author). Data attached to each code were discussed by team members in weekly meetings. Decisions on data coding and related questions involved discussion and agreement on the meaning and limits of a particular code, and assignment of the text in question to a particular code or to multiple codes. New codes were added when the team discovered meaningful data outside the existing coding scheme. Analytic memos, constant comparison, ongoing discussion of the data, and member checks were applied to ensure trustworthiness of the data. Simultaneous data collection and analysis supported dependability, and, in the interpretation phase, reflexivity of team members regarding participant narratives was used to enhance trustworthiness (Creswell 1994; Lipson 1991; Lincoln & Guba 1985).
Data for 29 respondents were coded, reviewed, and used to inform the analysis. However, as the current study concerns adoption of the MI/MET intervention at the level of each clinic, results reported below rely on interviews from 17 respondents who were clinic directors, counselors, and supervisors. The aim was to include one supervisor from each clinic; however, in two clinics the supervisor and clinic director were the same person, and one MI/MET clinic supervisor had left the clinic and could not be located. Similarly, all interview material in all study codes was reviewed and discussed by the study team and used to inform the analysis. However, four codes contained most of the data used in this report (adoption, adaptation, partial adoption, and planning for adoption).
RESULTS
A total of 31 interviews were planned and 29 were completed. One selected participant did not respond to recruitment efforts, and one declined participation. Results reported in the present study are drawn from 17 interviews conducted with directors, supervisors, and counselors in each of the five participating programs. These 17 participants included 10 women and seven men. Among the five clinic directors, two held doctoral degrees, two held master’s degrees, and educational attainment was unknown for one director. Among the two supervisors, one held an MD degree and one held a master’s degree. Among the 10 counselors, three held master’s degrees, one held a bachelor’s degree, three had some college, and educational attainment was unknown for three counselors.
Extent of Adoption
Excerpts below, edited to conserve space without changing meaning, are used to describe the range of adoption outcomes observed, including adoption (one clinic), partial adoption (one clinic), counselor adoption (one clinic), and no adoption (two clinics).
Adoption
In the quote below, the director of the clinic that adopted the MI/MET intervention describes efforts to develop MI experience in the clinic even predating the clinical trial, including a commitment of organizational resources by reducing staff workload to allow time to read book chapters about an intervention based on the stages of change (Prochaska, Norcross & DiClemente 1994). This reflects organizational readiness to change or a culture that encourages innovation.
You need to be oriented towards customer satisfaction…. The customers were the referral sources, as well as the clients…. So that was kind of always the philosophy…. I never had heard the term “motivational interviewing” until … my partner went to a conference … and came back with [the book] … Changing for Good…. We saw something … that fit our philosophy about how you treat people. And, we trained our staff … in the Changing for Good book. [We] … did a chapter a month, had different people leading the discussions each time we met, and assigned people reading time and reduced some of their workload in order to give them time to do it … and then about a year later … we did the same thing with the motivational interviewing book, and then challenged the staff … to operationalize it … and come up with a model for doing treatment that would work better than what we were doing, that everybody could believe in … and do….
—Clinic Director
Consistent with the report of the clinic director, the clinical supervisor noted continued use of MI through participation in a second MI study, which was used by the clinic to expand training of staff, and offered direct plans to incorporate MI at the close of this second clinical trial.
Even with the data that we already got in [the clinic], we feel good enough to say, “Well, it’s not a bad intervention,” … at least it’s equal. But it actually shows a little better … retention and completion rates, and … we want to be able to actually implement the MI way of doing the assessment for the whole clinic. But … we have to wait now to finish this other [study] … so we can implement it. At that point, many more … counselors will have been trained, we will have done the training for everybody . . . . And then, we actually want to do … just like we were doing in the … MI piece, and make that be our assessment. And I want to do that…. We’re going to really implement it. Probably not with the rigors of research, but … having had that experience of … doing the research, a lot more people will be already trained…
—Clinical Supervisor
Also consistent with the above reports of the clinic director and supervisor, the counselor reported using MI in clinical assessments, as was the practice in the MI/MET protocol, and commented on clinical use of client ambivalence and written assignments. The counselor also reported adapting the MI assessment protocol to the specific needs of domestic violence clients.
In my assessments … even the design of the domestic violence part of the assessment is very motivational interviewing. And since I’ve been … coordinator … I’ve redesigned most of the … assignments to really work with … the … ambivalence … and have really tried to use motivational interviewing techniques in the written assignments …
—Counselor
In this clinic, where the MI/MET intervention was adopted, what Rogers described as characteristics of the social system (structure, norms, and decision-making process) appeared to support adoption (Rogers 2003). The clinic director reported that the MI philosophy “fit our philosophy about how you treat people,” suggesting that the practice was consistent with clinic norms. The clinic conducted systematic MI training in advance of the MI/MET protocol and so demonstrated what Backer (1995) called organizational readiness for change, at least with respect to MI. The clinic director fits Rogers’ (2002) description of an influential “champion” of the intervention, and the clinic may have self-selected into two sequential MI study protocols as a way to further develop MI capabilities. Key resources remained available in the post-study period, in that the clinical supervisor was a regional trainer for the MI/MET protocol, so that additional MI training was available in the clinic. Finally, the clinical supervisor continued to supervise MI in the context of a second study and planned to incorporate MI more widely. In the five clinics studied, this was the only clinic where all three informants affirmatively reported adoption of the tested intervention.
Partial adoption
Rogers (2003: 181) uses the term reinvention to describe “the degree to which an innovation is changed or modified by the user in the process of its adoption and implementation.” He acknowledges that there are other terms for reinvention, and elsewhere uses the term “adaptation” (Rogers 1995). The term “partial adoption” is used here to indicate a special case of reinvention, in which parts of the intervention were adopted systematically and at the organizational level. In this setting, the clinic director, who was also the clinical supervisor, did not have an active client caseload and did not practice MI with clients. This supervisor did not know whether counselors used MI, indicating that MI was not a framework used in supervision, and also noted that staff turnover had eroded MI capability in the agency.
I don’t know [if counselors are using MI currently]…. I think most of our counselors do. The sad thing is that there’s been so much staff turnover…. [A]ll of our MI [trained people] … except for [one] … are gone…. I just talked to [the MI trainer] … and she’s going to do [MI] training for the whole agency again in May…. So that’ll give us a good startup….
—Clinic Director/Supervisor
However, the same respondent noted that use of American Society for Addiction Medicine (ASAM) Patient Placement Criteria (Mee-Lee et al. 2001) in clinical assessments is required by the state and reported that these criteria were recently revised to include a MI philosophy, so that MI is reflected in clinic assessment practices.
We already have done some things, partly because the ASAM placement criteria… [is] what we have to use, by state requirements, to decide where people need to go to treatment—what level of treatment they need … and that criteria was revised recently to include motivational interviewing stages of change theory. So, we already have to incorporate it into our system. … When assessing people, we have to look at their readiness to change and, and if they’re highly resistant, we put them in a lower level of care, which is something that we didn’t used to do. … [W]e have to consider their willingness … and what level of treatment they’re ready for…. So we already have integrated some of the motivational interviewing into our assessment, placement…. [W]e have one primary assessment counselor. … I’d want to invest some training money and time in really helping our primary assessment person utilize that approach … because that’s what the research was; but I’d like all the staff to use it.
—Clinic Director/Supervisor
As it was practiced in the MI/MET trial in this clinic, the MI intervention was a brief intervention attached to the initial assessment, and included information and training on stages of change. So it is reasonable that a recent change in assessment practices to systematically incorporate stages of change may bear similarity to MI as studied in the protocol. At the same time, incorporation of stages of change is not the same as incorporation of MI, and so this change may reflect only a partial adoption of MI. Although this change in practice was externally generated, the agency was better positioned to make the change as a result of participation in the clinical trial, and there is a plan “to do [MI] training for the whole agency.”
Consistent with the report of the Clinic Director/Supervisor, the counselor used MI in practice and was about to start a precontemplation group, suggesting that MI principles had affected clinical practice at the agency level.
I’m gonna be starting a group next week … to deal specifically with precontemplative Level I people. You know, “Send me … your resistant, your angry, your frustrated … and I’ll talk to ‘em for a little while,” … But [it will be] primarily educational … because the thing with a DUI [driving under the influence] … which tend to be our most resistant folks … you can break through their resistance. And, see, I still use terms like “breaking through resistance.” … you can engage them despite their ambivalence …
—Counselor
Two factors affecting adoption can be seen in the partial adoption clinic. First, high rates of staff turnover observed in community-based treatment programs (McLellan & Meyers 2004; Gallon, Gabriel & Knudsen 2003) militate against adoption, because those who are trained in the intervention may leave the clinic. Second, the external factor of state-mandated use of ASAM criteria, which, as the respondent explained, incorporated an MI approach into the assessment, supported adoption. Although MI was not supported at the clinic director or supervisory levels in this clinic, one component of MI (a focus on stages of change) became part of the assessment because of the ASAM requirement.
Counselor adoption
It is possible for an individual counselor to seek out trainings or study materials such as books and manuals, and then incorporate innovative techniques into that counselor’s practice. This is described here as “counselor adoption” and, as the supervisor comments below about MI/MET, “it becomes one of the tools in the kit.” However, the supervisor was uncertain about the degree to which counselors used MI, signaling no current supervision of MI and little organizational investment in the use of MI. Like the clinical supervisor, the counselor reported some use of MI in their own practice.
I know I use [MI]…. I think that [counselors] do use it. I think that it becomes one of the tools in the kit. Do they use it all the time, just the same way it was in the protocol? No. I know they don’t do that. First of all, once the study was over, we dropped that meeting with your counselor … for an hour-and-a-half before you started treatment. They still will meet for intake with their counselor, and I think that the MI counselors are doing some MI…. I hope when you talk to them, that the MI counselors are gonna say, “Yeah, that was a good thing for me. I got a lot out of that, and I use it.” … Do I absolutely know that? I know it’s not the protocol, but I think the tools and techniques are still there.
—Clinical Supervisor
Okay. I’m trying to remember as I’m talking. No, I use, I do use motivational interviewing … with my clients today, I don’t think it’s necessarily all MI. But … I do use the techniques.
—Counselor
While adoption and partial adoption outcomes reflect some organizational commitment to MI/MET, counselor adoption reflects a change in counselor practices without broader organizational support. While counselor adoption is necessary for organizational adoption to occur, counselor efforts require support at leadership and supervisory levels, and diffusion through the counseling staff, before the intervention can become organizational practice.
No adoption
Two clinics reported no adoption of MI/MET following the clinical trial. Respondents in the first clinic reported a philosophical shift in their approach to treatment, as well as an intention to adopt MI/MET, but had not adopted the practices at the time of the interviews. Below, the clinic director describes a change in the general approach to clients from “you’re in denial” to “Of course you’re ambivalent. So what do we do about that?” He acknowledged that such a change will take time and require additional staff training, and that he plans to use treatment engagement groups, but he also noted that these groups are not yet implemented.
It was still the more, “you’re in denial, you’re resistive” kind of attitude. And slowly … through the [clinic] handbook, and then we’re gonna do more and more training … there’ll be a culture change in terms of “This is normal. Of course you’re ambivalent. So what do we do about that?” We’ve even set up groups … but I don’t think it’s fully implemented, which we even call Treatment Decision Groups … [for] nonmotivated patients … They’re not ready to engage in treatment. So what we hope is that we can give them a place where we can use things like motivational interviewing techniques … saying “You have choices here, and what do you really want right now?” And so, again, we have to get more of the staff trained in that way.
—Clinic Director
In the same clinic, the counselor was willing to use MI/MET techniques learned in the trial, but used the future tense (“I will”) because, at the time of this interview, some clients were still completing the MI/MET protocol.
Definitely, I would [use MI/MET]. I will! Because I still see myself using some techniques with other patients, regular patients. I might … reflect back what they say or try to de-escalate when they are … angry … so I think … that the tools that I learned from MI/MET helped me to … be with the patient and understand where they are at.
—Counselor
In this first no adoption clinic, the director discussed the role of program norms (Rogers 2003) and described a cultural shift away from confrontation and toward motivational approaches. Adoption in this clinic was influenced by clinic size, a factor previously shown to be related to adoption (Burke & Litwin 1992). The clinic director reported: “As you know, just a few counselors were actually [trained for the study]: three, and we have [over 100] employees. So … hardly anybody was actually trained in MI/MET through this study.” Adoption was also influenced at this clinic by the unavailability of resources (Backer 1995), as the clinic director commented in regard to a different (not MI/MET) training: “it’s four … or five Wednesdays in a row, three or four hours each time; … can I afford all the staff to go down there to do all that?”
In the second no adoption clinic, the supervisor first suggested that some components of MI/MET may have been integrated into their program, but later noted a lack of interest in “doing more and learning more” about MI/MET.
There was a three-session manual, but people are always gonna want to do what they think is in the best interest of the client…. [I]f the client comes in and says, I had a horrible thing happen and I really want to spend the session today talking about that, and it says in the manual, well, you have to do blah blah blah, homework … people who do treatment aren’t gonna do it. But I do think that the manuals could provide really good outlines, really good suggestions … really good techniques. And that, I think, does get integrated.
—Clinical Supervisor
I think … if [the treatment as usual counselor] heard about an interesting conference … and it was … how to do motivational interviewing … she would be interested in that. She would think back: “Yeah, I know a little bit about MET, and that study was a disaster, but it seemed really interesting when it worked with people.” … There is that piece of it. The study—disaster. MET is interesting … and I’m interested in doing more … but see, nobody cares about that.
—Clinical Supervisor
In the quote below, the counselor reported using MI/MET but demonstrated a misunderstanding of MET principles. When a client reported having been drug free for one week, the counselor suggested “do you think maybe you can go another week.” This direct suggestion from counselor to client is not consistent with MI principles (express empathy, develop discrepancy, roll with resistance, and support self-efficacy) and is not supportive of autonomy because it tells the patient what to do. The absence of an organizational or supervisory effort to support MI/MET, and the vague example offered by the counselor, suggest little ongoing practice of MI/MET in this setting.
I tried to use [MET] in the groups, too. And it worked really well…. I think the clients … gave better responses to questions … when I repeated back what they had said. Like … if I said, well, how long has it been since you last used? Well, I used maybe a week ago … Oh, okay. Only about a week ago. Do you think you could do a little bit better the next time? Do you think maybe you can go another week or something? … Why don’t you see if you can go for two weeks …
—Counselor
In this second no adoption clinic, the outcome was likely influenced by the experience of conducting the trial itself. The clinical supervisor felt that there was poor support for the study from clinic leadership, which resulted in MI/MET counselors not fully attending to the project: “They didn’t make it a priority to see people; … they didn’t make it a priority at all to work with … the [research assistants]…. They didn’t make it a priority to get their paper work done…. They didn’t make it a priority to meet with me.” Recruitment for the protocol faltered, and the supervisor summarized the experience of doing the study as a “disaster.” Slow study recruitment also meant that the intervention counselors implemented MI/MET infrequently. While smooth and successful implementation of the research protocol does not ensure later adoption of the intervention, poor implementation of the research protocol may signal a disinterested or disorganized environment where adoption is unlikely.
DISCUSSION
In this study, adoption of MI/MET was examined in five clinics in the context of a CTN multisite clinical trial. The main finding is that, even though participation in the clinical trial created a number of conditions supportive of adoption, adoption of MI/MET occurred in only two of five clinics studied. Because the programs participated in the clinical trial, several factors believed to support adoption were in place. Programs had resources to implement the intervention (Backer 1995) and experienced the sequential steps of exposure, implementation, and practice (Simpson 2002). Programs could assess the trialability, compatibility with current practice, and complexity of the intervention (Rogers 2002). With one exception, these conditions were not sufficient to ensure adoption after the end of the trial. Adoption of MI/MET occurred in one clinic, and partial adoption occurred in another, both reflecting a change in practice at the organizational level. Counselor adoption, reflecting changes in individual counselor practice but without organizational support, occurred in one clinic, and no adoption was seen in two clinics.
The current study adds to prior similar research in several ways. It offers the distinction between organizational-level adoption and counselor-level adoption. Counselor-level adoption appears to be the aim of many clinical training efforts, and is a necessary condition for organizational adoption. However, if the clinic director is unaware that counselors are practicing a specific intervention, and if the supervisor is not supervising that intervention, then counselor-level adoption is unsupported. Counselor-level practices in this situation can closely adhere to the manualized intervention (Guydish et al. 2005), but they may also be unreliably reported, as in the case of the counselor in one of the no adoption clinics who reported practicing MI/MET but offered a questionable example. The disciplined implementation of an intervention by a single counselor does not meet the test of organizational adoption unless that effort is supported by clinic leadership, enhanced through supervision, and extended to other counselors through training and supervision. Organizational adoption may also support the use of EBPs in the context of staff turnover, since practices that are embedded in the culture of the organization are less vulnerable to the departure of individual staff members.
Second, a range of adoption outcomes is suggested, including adoption, partial adoption, counselor adoption, and no adoption. This range may offer a more nuanced framework than the simpler question of whether or not an intervention was adopted. Considering a range of adoption outcomes can contribute to efforts to disseminate and implement EBPs by enabling clinicians, researchers, and policy makers to identify the type of adoption they want to achieve, to gear their intervention toward the adoption goal, and to better measure whether a specific kind of adoption occurred. For example, many current clinical training approaches may aim for counselor adoption outcomes, whereas EBP initiatives may aim for organizational-level adoption or partial adoption outcomes. Specifying these aims in advance can drive the design of efforts to implement EBPs and to determine whether the type of adoption desired is also the type of adoption achieved.
In the context of prior research on adoption in the wake of clinical trials, current findings may offer a benchmark for how frequently adoption occurs as a direct outcome of clinical trials research in those clinics where the intervention was tested. Guydish and colleagues (2005) reported on adoption of the Matrix intervention for methamphetamine abuse in one of six clinics where it was tested and where conditions permitted adoption, and the present study found adoption of MI/MET at an organizational level (including adoption and partial adoption) in two of five clinics. Together, these studies found adoption in three of 11 clinics studied, or 27% of clinics participating in those trials. These rates of adoption among clinics participating in clinical trials appear low and are consistent with at least one other report where the presence of smoking cessation clinical trials did not lead to measurable change in how staff in those clinics addressed smoking (Chun, Guydish & Delucchi 2009). However, these low rates of adoption occur within the specific paradigm of examining whether participation in clinical trials may lead to adoption, and in the context of small sample case studies. Consequently, the quality of this estimate of the rate of adoption is a matter for further research.
Whether and how clinical trials research can support downstream adoption of effective interventions has been discussed both outside and inside the CTN. Tunis, Stryer and Clancy (2003) have advocated redesign of the clinical research enterprise to focus on practical clinical trials that would compare alternative treatments, include diverse study populations and a range of practice settings to support generalizability, and include measurement of multiple health outcomes. Glasgow, Lichtenstein and Marcus (2003) have suggested that effectiveness trials should monitor adoption of the tested intervention in the post-study period, and that funders require that study protocols include implementation and sustainability components. Considering the CTN specifically, Guydish and colleagues (2007) suggested that CTN protocols could be developed using strategies to support post-trial adoption by planning for adoption when the protocol is developed, training senior clinicians to implement the intervention, and using regional trainers and local supervisors to train counselors and monitor intervention implementation. At the same time, Jessup and colleagues (2008) noted that researchers and clinicians in the CTN do not have a shared understanding of the role of the CTN in supporting adoption, and that some features of CTN research, including the necessary firewall between counselors who deliver experimental and control interventions, militate against adoption in the context of the CTN.
Last, factors believed to support adoption that are embedded in the conduct of a clinical trial, such as resource availability (Backer 1995), exposure to and practice of the intervention (Simpson 2002), and ability to assess compatibility and complexity of the intervention (Rogers 2002), are not sufficient to ensure adoption in the post-study period. In the clinic adopting MI/MET, other factors believed to support adoption included favorable norms and decision-making processes (Rogers 2003), organizational readiness to change (Backer 1995), and the presence of an influential intervention champion (Rogers 2002). All of these factors were internal to the clinic. In the case of the partial adoption clinic, an external factor was influential, as the agency funding the clinic mandated the use of a procedure that incorporated MI components into the assessment process. In one no adoption clinic, the clinic director reported a shift in norms toward MI philosophy that was due partly to participation in the clinical trial, but the resources needed to train and supervise a large staff in MI were seen as barriers to adoption. In the second no adoption clinic, the experience of conducting the clinical trial itself appeared negative, and this experience likely affected the subsequent ability of the clinic to adopt MI/MET. Some level of adoption was observed in each of the three MI clinics, where the MI intervention was delivered in a single session at the time of assessment. No adoption was observed in the two MET clinics, where the intervention was delivered in a series of three individual counseling sessions. Although speculative, it is possible that the single session MI intervention was more easily incorporated into usual clinic practice.
Limitations
A major limitation of this work is the small number of clinics studied, which significantly limits generalizability. The strengths of the approach used here are the ability to study adoption in depth in a small number of clinics, where the clinical trial itself has created conditions supportive of adoption, and to formulate conclusions based on reports from multiple levels in the organization (director, supervisor, counselor). However, adoption research will benefit from better and more economical measures of adoption that can be applied in a large number of clinics, and from designs that experimentally manipulate factors believed to support adoption across similarly large samples.
Only psychosocial interventions were studied, and adoption outcomes and factors affecting adoption may be different in medication trials. Only five of 11 clinics involved in the MI/MET trials were studied, and findings may not generalize to all 11 clinics. Absent data from those remaining clinics, this study cannot comment on their adoption of MI/MET in the post-study period. Interviews were conducted four to 13 months after completion of the MI/MET trial, and so refer to a limited timeframe. Longer follow-up with clinics may lead to different conclusions. Due to the cross-sectional design, associations were observed between factors influencing adoption and actual adoption outcomes, but the causal nature of those factors cannot be assessed. Observations reported here reflect a point in time rather than a longitudinal process. Accordingly, a range of adoption outcomes has been suggested, and this is distinct from a process of adoption, or stages that a clinic may pass through in the course of adoption (Fixsen et al. 2005).
Adoption outcomes were determined based on comments made by respondents, the inter-relationship of comments made by different respondents within the same clinic, and judgments of the study team. There is at present no gold standard against which to measure adoption of psychosocial interventions (Garner 2008), so that determining adoption outcomes may include a degree of subjectivity and imprecision. “Partial adoption” is used in this report to reflect a broad range in which some organizational change may be observed. Yet the amount of change necessary to constitute partial adoption is unspecified and different observers may make different judgments. Nevertheless, efforts to conceptualize adoption using a range of outcomes may move discussion in the direction of better measurement and greater precision.
Also, study interviews were conducted at a time when participating clinics did not yet know the results of the study in which they participated and, arguably, clinics should not adopt interventions until effectiveness is known. Interventions selected for testing in multisite clinical trials are selected in part because there are compelling efficacy data from multiple single-site studies. Interventions selected for testing within the CTN are selected in partnership with the clinical practice community, and the study protocols are shaped to increase their relevance to practice settings. The cost and effort required to implement multisite clinical trials, coupled with the challenges in moving research-based interventions into practice, have led to the suggestion that post-study adoption should be considered not only before results of the clinical trial are known, but even during protocol planning stages and before the clinical trial is implemented (Guydish et al. 2007; Glasgow, Lichtenstein & Marcus 2003).
Conclusion
The work reported here offers a benchmark for how frequently programs involved in clinical trials later adopt the tested intervention. It distinguishes between the concepts of counselor-level and organizational-level adoption, the latter of which may be of more interest to funding and regulatory agencies currently moving the treatment field toward EBPs. A range of adoption outcomes was observed and factors were considered that, at least from an observational and cross-sectional perspective, appear to be associated with different adoption outcomes. Further research must determine what kind of adoption is wanted when clinics are pressed to adopt EBPs (e.g., adoption, partial adoption, counselor adoption), how those outcomes may be operationalized and reliably measured and, in experimental designs, whether factors believed to be associated with adoption can be manipulated to achieve specific adoption outcomes.
In the context of a clinical trial, clinics receive training and other supports necessary to implement EBPs, and actually do implement EBPs, for a limited time period. From a policy perspective, it may be helpful to note that such time-limited support for EBPs may not lead to a sustained change in practice. In one clinic where adoption did occur, these supports were accompanied by organizational readiness to change and the presence of an intervention champion. External factors may also play a role, as in the clinic where the funding agency mandated incorporating a particular practice. State initiatives, such as Oregon Senate Bill 267 (AMH 2006), and federal EBP registry efforts, such as NREPP, provide external support for use of EBPs, although their effectiveness is not yet demonstrated. Sustained implementation of EBPs requires that programs support counselors with training and supervision in the new practices, and that programs have a number of theoretically important supports, including resources, exposure, and practice. Sustained implementation of EBPs, in many programs, likely requires external support from county or other healthcare treatment systems, state-funded drug abuse treatment systems, or federal agencies, and support in the form of policy leadership or guidance, regulatory or funding mandates, or changes in reimbursement strategies that enable and support EBPs.
Footnotes
This work was supported by the National Institute on Drug Abuse (R01 DA-14470), by the California-Arizona research node of the NIDA Clinical Trials Network (U10 DA-015815), and by the NIDA San Francisco Treatment Research Center (P50 DA-009253). At the time of this writing, Dr. Guydish was a Visiting Researcher in the School of Health and Social Care, Oxford Brookes University, Oxford, England.
References
- Addictions and Mental Health Services (AMH) 2006 Final Versions of the Evidence-Based Practices Report to the Judiciary Committee. Salem, OR: State of Oregon: Addictions and Mental Health Services; 2006. Available at: http://www.oregon.gov/DHS/mentalhealth/ebp/report2jud-com.pdf. [Google Scholar]
- Backer TE. Assessing and enhancing readiness for change: Implications for technology transfer. In: Backer TE, David SL, Saucy G, editors. Reviewing the Behavioral Science Knowledge Base on Technology Transfer. NIDA Research Monograph. Rockville, MD: National Institute on Drug Abuse; 1995. p. 155. [Google Scholar]
- Backer TE. Drug Abuse Technology Transfer. Rockville, MD: National Institute on Drug Abuse; 1991. [Google Scholar]
- Ball SA, Martino S, Nich C, Frankforter TL, Van Horn D, Crits-Christoph P, Woody GE, Obert JL, Farentinos C, Carroll KM National Institute on Drug Abuse Clinical Trials Network. Site matters: Multisite randomized trial of motivational enhancement therapy in community drug abuse clinics. Journal of Consulting and Clinical Psychology. 2007;75:556–67. doi: 10.1037/0022-006X.75.4.556. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Boyle JS. Field research: A collaborative model for practice and research. In: Morse JM, editor. Qualitative Nursing Research: A Contemporary Dialogue. 2nd ed. Newbury Park, CA: Sage; 1991.
- Bulmer M. Concepts in the analysis of qualitative data. Sociological Review. 1979;27:651–77.
- Burke BL, Arkowitz H, Menchola M. The efficacy of motivational interviewing: A meta-analysis of controlled clinical trials. Journal of Consulting and Clinical Psychology. 2003;71:843–61. doi: 10.1037/0022-006X.71.5.843.
- Burke WW, Litwin GH. A causal model of organizational performance and change. Journal of Management. 1992;18:532–45.
- Calsyn DA, Hatch-Maillette M, Tross S, Doyle SR, Crits-Christoph P, Song YS, Harrer JM, Lalos G, Berns SB. Motivational and skills training HIV/sexually transmitted infection sexual risk reduction groups for men. Journal of Substance Abuse Treatment. 2009;37:138–50. doi: 10.1016/j.jsat.2008.11.008.
- Carroll KM, Ball SA, Nich C, Martino S, Frankforter TL, Farentinos C, Kunkel LE, Mikulich-Gilbertson SK, Morgenstern J, Obert JL, Polcin D, Snead N, Woody GE; National Institute on Drug Abuse Clinical Trials Network. Motivational interviewing to improve treatment engagement and outcome in individuals seeking treatment for substance abuse: A multisite effectiveness study. Drug and Alcohol Dependence. 2006;81:301–12. doi: 10.1016/j.drugalcdep.2005.08.002.
- Carroll KM, Farentinos C, Ball SA, Crits-Christoph P, Libby B, Morgenstern J, Obert JL, Polcin D, Woody GE. MET meets the real world: Design issues and clinical strategies in the Clinical Trials Network. Journal of Substance Abuse Treatment. 2002;23:73–80. doi: 10.1016/s0740-5472(02)00255-6.
- Chun J, Guydish J, Delucchi K. Does the presence of a smoking cessation clinical trial affect staff practices related to smoking? Journal of Drug Issues. 2009;39:385–400. doi: 10.1177/002204260903900209.
- Clark W. Treatments that work: The vision of the Substance Abuse and Mental Health Services Administration. Presentation at Treatments that Work: A Substance Abuse Forum; San Francisco, CA. 2006.
- Clinical Trials Network. CTN Bulletin 09–10. 2009. Available at: http://ctndisseminationlibrary.org/bulletin/20090522.pdf.
- Creswell J. Research Design: Qualitative and Quantitative Approaches. Thousand Oaks, CA: Sage; 1994.
- Ducharme LJ, Knudsen HK, Roman PM, Johnson JA. Innovation adoption in substance abuse treatment: Exposure, trialability, and the Clinical Trials Network. Journal of Substance Abuse Treatment. 2007;32:321–29. doi: 10.1016/j.jsat.2006.05.021.
- Federal Register. Request for information regarding the Paul Wellstone and Pete Domenici Mental Health Parity and Addiction Equity Act of 2008. Vol. 74. Internal Revenue Service, Department of the Treasury; Employee Benefits Security Administration, Department of Labor; Centers for Medicare & Medicaid Services, Department of Health and Human Services; 2009. pp. 19155–58.
- Federal Register. Notice: Request for Comments; National Registry of Evidence-Based Programs and Practices (NREPP). Vol. 70. Department of Health and Human Services, Substance Abuse and Mental Health Services Administration (SAMHSA); 2005. pp. 50381–90.
- Fennell ML. Synergy, influence and information in the adoption of administrative innovations. Academy of Management Journal. 1984;27:113–29.
- Fixsen DL, Naoom SF, Blase KA, Friedman RM, Wallace F. Implementation Research: A Synthesis of the Literature. FMHI Publication #231. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network; 2005.
- Gallon SL, Gabriel RM, Knudsen JRW. The toughest job you’ll ever love: A Pacific Northwest treatment workforce survey. Journal of Substance Abuse Treatment. 2003;24:183–96. doi: 10.1016/s0740-5472(03)00032-1.
- Garner BR. Research on the diffusion of evidence-based treatments within substance abuse treatment: A systematic review. Journal of Substance Abuse Treatment. 2008;36:376–99. doi: 10.1016/j.jsat.2008.08.004.
- Glasgow RE, Lichtenstein E, Marcus AC. Why don’t we see more translation of health promotion research to practice? Rethinking the efficacy-to-effectiveness transition. American Journal of Public Health. 2003;93:1261–67. doi: 10.2105/ajph.93.8.1261.
- Guydish J, Tajima B, Manser S, Jessup M. Strategies to encourage adoption in multi-site clinical trials. Journal of Substance Abuse Treatment. 2007;32:177–88. doi: 10.1016/j.jsat.2006.08.001.
- Guydish J, Manser ST, Jessup M, Tajima B, Sears C, Montini T. Multi-level assessment protocol (MAP) for adoption in multi-site clinical trials. Journal of Drug Issues. 2005;35:529–46. doi: 10.1177/002204260503500306.
- Hanson GR, Leshner AI, Tai B. Putting drug abuse research to use in real-life settings. Journal of Substance Abuse Treatment. 2002;23:69–70. doi: 10.1016/s0740-5472(02)00269-6.
- Hettema J, Steele J, Miller WR. Motivational interviewing. Annual Review of Clinical Psychology. 2005;1:91–111. doi: 10.1146/annurev.clinpsy.1.102803.143833.
- Hubbard RL, Leimberger JD, Haynes L, Patkar AA, Holter J, Liepman MR, Lucas K, Tyson B, Day T, Thorpe EA, Faulkner B, Hasson A; National Institute on Drug Abuse. Telephone Enhancement of Long-term Engagement (TELE) in continuing care for substance abuse treatment: A NIDA Clinical Trials Network (CTN) study. American Journal on Addictions. 2007;16:495–502. doi: 10.1080/10550490701641678.
- Iowa Consortium for Substance Abuse Research and Evaluation. Evidence-Based Practices: An Implementation Guide for Community-Based Substance Abuse Treatment Agencies. Iowa City: Iowa Consortium for Substance Abuse Research and Evaluation; 2003.
- Jessup MA, Guydish J, Manser ST, Tajima B. The place of adoption in the NIDA Clinical Trials Network. Journal of Drug Issues. 2008;38:1083–1104. doi: 10.1177/002204260803800408.
- Lamb S, Greenlick MR, McCarty D, editors. Bridging the Gap between Practice and Research: Forging Partnerships with Community-Based Drug and Alcohol Treatment. Washington, DC: National Academy Press; 1998.
- Lincoln YS, Guba EG. Naturalistic Inquiry. Beverly Hills, CA: Sage; 1985.
- Ling W, Amass L, Shoptaw S, Annon JJ, Hillhouse M, Babcock D, Brigham G, Harrer J, Reid M, Muir J, Buchan B, Orr D, Woody G, Krejci J, Ziedonis D; Buprenorphine Study Protocol Group. A multi-center randomized trial of buprenorphine-naloxone versus clonidine for opioid detoxification: Findings from the National Institute on Drug Abuse Clinical Trials Network. Addiction. 2005;100:1090–1100. doi: 10.1111/j.1360-0443.2005.01154.x.
- Lipson JG. The use of self in ethnographic research. In: Morse JM, editor. Qualitative Nursing Research: A Contemporary Dialogue. 2nd ed. Newbury Park, CA: Sage; 1991.
- McLellan AT, Meyers K. Contemporary addiction treatment: A review of systems problems for adults and adolescents. Biological Psychiatry. 2004;56:764–70. doi: 10.1016/j.biopsych.2004.06.018.
- Mee-Lee D, Shulman GD, Fishman M, Gastfriend DR, Griffith JH, editors. ASAM Patient Placement Criteria for the Treatment of Substance-Related Disorders (ASAM PPC-2R). 2nd ed. Chevy Chase, MD: American Society of Addiction Medicine; 2001.
- Miller WR, Rollnick S. Motivational Interviewing: Preparing People for Change. 2nd ed. New York: Guilford Press; 2002.
- Miller WR, Wilbourne PL. Mesa Grande: A methodological analysis of clinical trials of treatments for alcohol use disorders. Addiction. 2002;97:265–77. doi: 10.1046/j.1360-0443.2002.00019.x.
- Miller WR, Yahne CE, Tonigan JS. Motivational interviewing in drug abuse services: A randomized trial. Journal of Consulting and Clinical Psychology. 2003;71:754–63. doi: 10.1037/0022-006x.71.4.754.
- National Institute on Drug Abuse (NIDA). The Science of Treatment: Dissemination of Research-Based Drug Addiction Treatment Findings [CD-ROM set]. Bethesda, MD: NIDA/SAMHSA Blending Initiative; 2005–2007.
- O’Connor E. Real-world research: The Clinical Trials Network. Monitor on Psychology, American Psychological Association. 2001;32:28–29.
- Petry NM, Peirce JM, Stitzer ML, Blaine J, Roll JM, Cohen A, Obert J, Killeen T, Saladin ME, Cowell M, Kirby KC, Sterling R, Royer-Malvestuto C, Hamilton J, Booth RE, Macdonald M, Liebert M, Rader L, Burns R, DiMaria J, Copersino M, Stabile PQ, Kolodner K, Li R. Effect of prize-based incentives on outcomes of stimulant abusers in outpatient psychosocial treatment programs: A National Drug Abuse Treatment Clinical Trials Network study. Archives of General Psychiatry. 2005;62:1148–56. doi: 10.1001/archpsyc.62.10.1148.
- Pierce JM, Petry NM, Stitzer ML, Blaine J, Kellogg S, Satterfield F, Schwartz M, Krasnansky J, Pencer E, Silva-Vazquez L, Kirby K, Royer-Malvestuto C, Roll JM, Cohen A, Copersino ML, Kolodner K, Li R. Effects of lower-cost incentives on stimulant abstinence in methadone maintenance treatment: A National Drug Abuse Treatment Clinical Trials Network study. Archives of General Psychiatry. 2006;63:201–8. doi: 10.1001/archpsyc.63.2.201.
- Prochaska JO, Norcross JC, DiClemente CC. Changing for Good: The Revolutionary Program That Explains the Six Stages of Change and Teaches You How to Free Yourself from Bad Habits. New York: W. Morrow; 1994.
- Reid MS, Fallon B, Sonne S, Flammino F, Nunes EV, Jiang H, Kourniotis E, Lima J, Brady R, Burgess C, Arfken C, Pihlgren E, Giordano L, Starosta A, Robinson J, Rotrosen J. Smoking cessation treatment in community-based substance abuse rehabilitation programs. Journal of Substance Abuse Treatment. 2008;35:68–77. doi: 10.1016/j.jsat.2007.08.010.
- Rogers EM. Diffusion of Innovations. 5th ed. New York: Free Press; 2003.
- Rogers EM. Diffusion of preventive innovations. Addictive Behaviors. 2002;27:989–93. doi: 10.1016/s0306-4603(02)00300-3.
- Rogers EM. Diffusion of drug abuse prevention programs: Spontaneous diffusion, agenda setting, and reinvention. In: Backer TE, David SL, Soucy G, editors. Reviewing the Behavioral Science Knowledge Base on Technology Transfer. NIDA Research Monograph 155. Rockville, MD: National Institute on Drug Abuse; 1995.
- Roll JM, Petry NM, Stitzer ML, Brecht ML, Peirce JM, McCann MJ, Blaine J, MacDonald M, DiMaria J, Lucero L, Kellogg S. Contingency management for the treatment of methamphetamine use disorders. American Journal of Psychiatry. 2006;163:1993–99. doi: 10.1176/ajp.2006.163.11.1993.
- Scott WR, Ruef M, Mendel PJ, Caronna CA. Institutional Change and Healthcare Organizations: From Professional Dominance to Managed Care. Chicago: The University of Chicago Press; 2000.
- Simpson DD. A conceptual framework for transferring research to practice. Journal of Substance Abuse Treatment. 2002;22:171–82. doi: 10.1016/s0740-5472(02)00231-3.
- Tunis SR, Stryer DB, Clancy CM. Practical clinical trials: Increasing the value of clinical research for decision making in clinical and health policy. Journal of the American Medical Association. 2003;290:1624–32. doi: 10.1001/jama.290.12.1624.
- Turner BJ, Martin GW, Cunningham JA. The effectiveness of demonstrations in disseminating research-based counseling programs. Science Communication. 1998;19:349–65.
- Whitten L. CTN update: Blending initiative introduces two new training programs. NIDA Notes. 2005;20(1):13.