Author manuscript; available in PMC 2010 Sep 29. Published in final edited form as: J Drug Issues. 2005 Jul 1;35(3):529–546. doi: 10.1177/002204260503500306

Multi-level assessment protocol (MAP) for adoption in multi-site clinical trials

J Guydish, ST Manser, M Jessup, B Tajima, C Sears, T Montini
PMCID: PMC2947142  NIHMSID: NIHMS230088  PMID: 20890376

Abstract

The National Institute on Drug Abuse (NIDA) Clinical Trials Network (CTN) is intended to test promising drug abuse treatment models in multi-site clinical trials, and to support adoption of new interventions into clinical practice. Using qualitative research methods, we asked: How might the technology of multi-site clinical trials be modified to better support adoption of tested interventions? A total of 42 participants, representing 8 organizational levels ranging from clinic staff to clinical trial leaders, were interviewed about their role in the clinical trial, its interactions with clinics, and intervention adoption. Among the eight clinics participating in the clinical trial, we found adoption of the tested intervention in only one clinic. In analyzing the interview data, we identified four conceptual themes that are likely to affect adoption and that may be informative for future multi-site clinical trials. We offer the conclusion that planning for adoption in the early stages of protocol development will better serve the aim of integrating new interventions into practice.

Keywords: Drug abuse treatment, clinical trials, qualitative analysis, organizational research


Historically, much of the research supporting effectiveness of drug abuse treatment relied on uncontrolled studies (Institute of Medicine, 1990). In the past 10 years the paradigm has shifted toward randomized clinical trials and, most recently through the National Institute on Drug Abuse (NIDA) Clinical Trials Network (CTN), toward multi-site clinical trials. The NIDA CTN aims to improve drug abuse treatment by conducting multi-site trials of promising interventions, and by developing models for integrating new interventions into clinical practice (National Institute on Drug Abuse [NIDA], n.d.). We view the latter aim, integrating new interventions into practice, in terms of adoption. In this study, we ask the broad question of whether multi-site clinical trials can support adoption of research-based interventions. Treatment programs decide whether or not to continue research-based interventions using their own resources when study protocols near completion. Because the CTN did not yet have study protocols at the completion stage when the present study began, we instead studied adoption in the context of the Methamphetamine Treatment Project (MTP), funded by the Center for Substance Abuse Treatment (CSAT). The work reported in this paper explores adoption in the context of multi-site clinical trials research in drug abuse treatment, and may be informative to clinical trials both within and outside of the NIDA CTN.

The process of adopting a research-based intervention can be conceptualized as organizational change. The organizational change perspective, developed within management studies and the social sciences, is helpful for studying adoption because it treats the organization (i.e., the substance abuse clinic), rather than the individual, as the unit of analysis. This perspective acknowledges clinics' resistance to innovation as a common occurrence, and facilitates identification of internal organizational factors and external environmental factors that may nonetheless facilitate change and intervention adoption.

Rogers (1995a) describes the potential for adoption of an innovation as a function of diffusion (the process by which an innovation is communicated) and the rate of adoption (the decision to use the innovation). First, intervention characteristics that support adoption include advantage relative to current practice, compatibility with existing values, low complexity, amenability to a trial period, and observable effects. Second, an innovation is more likely to be adopted if innovator/adoptee communication occurs between people of similar social location (e.g., similar education and social status). Third, time can influence diffusion because it affects innovativeness and level of familiarity. Last, characteristics of the social system, such as its structure, norms, and decision-making processes, can affect diffusion (Rogers, 1995a). Focusing on adoption of innovations in drug treatment, Backer (1991) also identifies four influential factors. Two of these, dissemination of information about the innovation and evidence that it will improve practice, overlap with Rogers. However, Backer adds that availability of resources and the human dynamics of change, particularly organizational readiness for change, also influence adoption (Backer, 1995).

Reviewing the organizational change literature, including the work of Backer and Rogers, Simpson (2002) proposes a conceptual model of the organizational change that occurs when innovative treatments are introduced into substance abuse programs. The model centers on four sequential steps: exposure, adoption, implementation, and practice. Exposure to the innovation frequently involves training; sufficient institutional resources and motivation for change are required at this stage. However, as this paper illustrates, exposure may also occur through participation in a clinical trial. Adoption of the intervention refers to the individual's or group's intent to try an innovation; perceptions of the innovation's usefulness and compatibility with accepted therapeutic approaches are key influences at this stage. Implementation of the innovation refers to a trial period; resources, institutional supports, and an organizational climate that encourages change are required. Practice refers to the incorporation of the innovation into regular clinical practice; staff attributes that are supportive of change are necessary at this stage. In this paper, we define adoption as analogous to Simpson's implementation and practice steps.

The adoption of research-based interventions is part of the research/practice relationship, and the slow pace of adoption reflects the gap between research and practice (Marinelli-Casey, Domier, and Rawson, 2002; Rawson, Marinelli-Casey, and Ling, 2002; U.S. Department of Health, NIDA, 1995). Lamb, Greenlick, and McCarty (1998) identified barriers to closing this gap, including structural barriers (e.g., time constraints), financial barriers (e.g., funding regulations that require avoidance of controversial treatments), and educational barriers (e.g., resistance to new treatment approaches among paraprofessionals compared to professional staff). Factors that may facilitate change include organizational size (Pfeffer, 1997), communication style, and organizational culture.

The present study was designed to investigate adoption in the context of multi-site randomized clinical trials (RCTs). For this study we developed a multi-level assessment protocol (MAP) in which clinical trial participants at all organizational levels were interviewed about their experience with the trial. The MAP approach was developed with reference to the organizational change and related literature, which provided key content domains for the investigation. To our knowledge, MAP is not an established approach to the study of organizational change.

Other researchers have described general research-to-practice issues in the MTP study (Brown, in press; Rawson, McCann, Huber, Marinelli-Casey, and Williams, 2000). The work reported in this paper examines the research-to-practice arena through the more specific lens of intervention adoption. Our question, intended to inform the CTN in particular and other clinical trials in general, was: How might the technology of multi-site clinical trials be modified to better support adoption of tested interventions? We reasoned that adoption of new treatments was most likely in clinics that had been involved in testing those treatments, because these clinics would have been exposed to the intervention and would have received related training, materials, and supervision. Conversely, if a new intervention is not adopted in a clinic where it was tested for one or more years, then its adoption in other settings seems less likely.

Methods

The Methamphetamine Treatment Project (MTP)

The Matrix intervention was developed in the 1980s in response to the cocaine epidemic (Obert et al., 2000). An early protocol involved 6 months of intensive outpatient treatment conducted in group and individual formats. The model was later abbreviated to 16 weeks, the number of individual sessions was reduced, and most content was delivered in groups. Sessions are held 3 times per week, each lasting 90 minutes to 2 hours, and address early recovery skills, relapse prevention, and family education. Participation in self-help meetings is prescribed, and urinalyses are conducted once per week (Obert et al., 2000). Uncontrolled trials showed favorable outcomes for level of cocaine use, stimulant-free urines, and treatment completion (Rawson, Obert, McCann, and Mann, 1986; Rawson, Obert, McCann, and Ling, 1991), and a randomized trial found a dose-response relationship between treatment amount and cocaine-negative urinalyses for the Matrix group, but not for the comparison group (Rawson et al., 1995). A retrospective cohort study, showing similar outcomes for cocaine- and methamphetamine-using participants, suggested extending the model to methamphetamine treatment (Huber et al., 1997).

On the strength of these data, and confronted with increasing methamphetamine use nationally, the Center for Substance Abuse Treatment (CSAT) developed the MTP to test the effectiveness of the Matrix model for the treatment of methamphetamine abuse. Seven research sites and one Coordinating Center were funded through a Cooperative Agreement to conduct the study. Each site had a principal investigator and an evaluator to oversee local tasks, and the study was directed by a Steering Committee (Herrell, Taylor, Gallagher, and Dawud-Nouri, 2000).

MTP study measures and procedures are reported in Huber et al. (2000). Briefly, across all sites a total of 978 participants were randomly assigned either to the 16-week Matrix condition or to treatment-as-usual (Rawson et al., in press). Treatment-as-usual generally reflected the non-manualized group and/or individual counseling interventions in place at participating clinics when the study began. While there were 7 geographic sites, one site included 2 clinics, giving a total of 8 clinics. Additional descriptive information about the participating programs, including treatment philosophy and the nature of the treatment-as-usual condition in each, can be found in Galloway et al. (2000). Participants were interviewed using standard treatment outcome measures (see Huber et al., 2000) at baseline, at discharge, and at 6 and 12 months post admission. Recruitment was completed in July 2001. The first reported results found improvement over time in both conditions and no differential improvement by condition, but better retention in the Matrix condition (Rawson et al., in press).

As a multi-site clinical trial of a drug abuse treatment intervention, the MTP study foreshadowed the work of the CTN. It was developed as an effectiveness trial of a cognitive-behavioral drug abuse treatment intervention (Matrix) that had shown promise in prior research. The decision-making body was a steering committee composed of researchers, community treatment programs, and coordinating center and funder representatives, and many early design issues were decided by this committee. As with some (but not all) CTN protocols, the MTP study relied on comparison conditions defined by treatment-as-usual (Galloway et al., 2000). Like the CTN, the MTP study invested in disseminating descriptive information about the Matrix intervention and the study itself (e.g., Herrell et al., 2000; Obert et al., 2000) and, when results became available, in dissemination through scientific journals (Rawson et al., in press). In this way, the MTP formed a ready testing ground on which to explore intervention adoption among clinic partners in the period after the study ended.

The Multi-level Assessment Protocol (MAP) Study of Adoption

Study sample and recruitment

Participants represented the multiple levels of (a) the broad organizational structure of the MTP clinical trial and (b) the local and clinic levels at participating study sites (see Table 1). Participants included 23 women and 19 men with varied levels of education (14 had doctoral degrees, 17 had master's degrees, 6 had bachelor's degrees, 1 had a high school education only, and educational status was unknown for 4 participants).

Table 1.

Organizational Levels and Number of Study Participants.

Level N
Intervention Designer 1
Clinical Trial Funder 2
Steering Committee Member 3
Coordinating Center 4
Site Principal Investigator 7
Site Evaluator 5
Clinic Director 6
Clinical Staff 14
Total Interviews 42

Individuals at the levels of Intervention Designer, Funder, Site Principal Investigator, Evaluator, and Clinic Director were easily identifiable by their role, and all persons in these groups were invited to participate. Representatives of the Steering Committee and Coordinating Center levels were selected using two criteria: first, that they had not already been interviewed as a member of a different assessment level (e.g., they were not also a Site Principal Investigator or Clinic Director) and, second, that they had a leadership role in the MTP study. At the site level, Clinic Directors identified counseling staff who had been employed at least one year during the MTP study.

Participants were initially contacted by mail, with telephone follow-up to assess willingness to participate and to schedule interviews. Informed consent procedures were completed prior to each interview. Eleven participants (one Principal Investigator, one Clinic Director, nine Clinic Staff) who had left their MTP site were located and contacted using the same procedures. For each clinic, our goal was to interview one counselor who had provided the Matrix intervention and one who had provided the treatment-as-usual intervention. In one case these counselors had left the clinic and could not be located. Among those invited into the study, one person declined to participate and one failed to respond to recruitment contacts.

Clinics were offered $1,000 for study participation, using one of three reimbursement options: 1) a lump sum payment to the clinic's fiscal entity, 2) individual payment of interviewees, or 3) staff training on a topic of their choice. We considered that a menu of reimbursement strategies offered more choice to participating clinics. The Clinic Director selected from among these options and, in practice, all clinics selected the option of a lump sum payment to the clinic. Respondents who had formerly been employed by a clinic but had since left could not use the clinic payment to defray the cost of the time they invested in completing the interview, and so these participants received a cash reimbursement of $50. Financial incentives were not offered to the Intervention Designer, Funder, Steering Committee members, or Coordinating Center representatives, as these respondents were remunerated for their efforts in the context of the Matrix study award. For the same reason, Site Principal Investigators and Site Evaluators were not offered reimbursement independent of clinic payments. All study procedures were approved by the University of California, San Francisco, institutional review board.

Data collection

Semi-structured interview guides (available from the first author) were developed by the study team, informed by organizational theory and reflecting seven domains that can influence adoption of research-based interventions (see Table 2). Interview guides included questions concerning the respondent's role in the MTP, perspectives on the clinical trial and its interactions with clinics, and intervention adoption. Audiotaped interviews lasted 1 to 2 hours each; most were conducted in person, and six were conducted by telephone.

Table 2.

Domains that may influence a substance abuse treatment program’s decision to adopt a research-based intervention.

Domain: Organizational Structure and Culture
  Components: organizational characteristics; staffing patterns; organizational mission; organizational norms; leadership style
  Literature: De Smet (1998); Burke and Litwin (1992); Lamb et al. (1998); Rogers (1995a)

Domain: Organizational Readiness for Change
  Components: perceived need for change; perceived capacity for change; previous experience with change
  Literature: Backer (1995)

Domain: Attitudes Towards Research
  Components: opinion of scientific research; familiarity with research process; experience of research/practice collaboration
  Literature: Lamb et al. (1998)

Domain: Perception of Intervention
  Components: effectiveness; innovativeness; compatibility; complexity
  Literature: Backer (1991); Rogers (1995a)

Domain: Resources
  Components: resource conflicts; adequacy of resources
  Literature: Backer (1991); Rogers (1995b); Lamb et al. (1998)

Domain: Dissemination of Study Results
  Components: methods of dissemination; use of results
  Literature: Sorensen et al. (1988); Rogers (1995b); Backer (1991); Lamb et al. (1998)

Domain: Reinvention
  Components: partial adoption; modification of intervention
  Literature: Lamb et al. (1998); Rogers (1995b)

MTP study recruitment ended at different times in different clinics and was completed in all participating clinics on July 1, 2001. Delivery of the Matrix intervention likewise ended at different times in different clinics, with all clinics completing Matrix treatment, for study purposes, on November 1, 2001. Qualitative data collection interviews for the current adoption study were conducted between January and December 2002, and occurred, in any given clinic, from 2 to 12 months after the clinic had stopped providing Matrix treatment for research purposes. The study plan was to interview participants close to the time clinics stopped delivering Matrix treatment for study purposes, but after some time had elapsed in which clinics could have considered whether to continue providing Matrix treatment. Beyond these general considerations, the actual timing of interviews was determined by when the qualitative study was funded and by the logistics of planning and conducting interviews in multiple study sites.

Data Analysis

Interviews were transcribed and compared for completeness against the audiotapes by two members of the research team (SM, BT), and were then read, reviewed, and discussed by all team members. Analysis was conducted using a theoretical analytic framework (Bulmer, 1979) derived from the research literature on organizational functioning and change theory (see Table 2). The framework allowed these domains to be used as analytic categories for examining participants' perspectives on organizational functioning during and after participation in an MTP study. The analytic categories included organizational structure and culture, readiness for change, attitudes toward research, perception of intervention, resources, dissemination of study results, and reinvention. Initially, closed codes were developed using these categories. As content analysis proceeded, other, more specific codes were developed (Boyle, 1991). Using analytic software, a mapped display of the information was developed to indicate relationships between codes and to aid in conceptualizing factors that may influence adoption.

Two team members (SM, BT) coded interviews using ATLAS.ti, a qualitative analysis program. Inter-rater reliability was supported by coding 14 interviews as a team to obtain agreement, and then having the two team members independently code 5 interviews, with a review for consistency by a third team member. A total of 69 codes emerged, and each transcript was coded using those codes. The codebook was refined as the research proceeded, and data attached to each code were discussed by all team members. Analytic memos, constant comparison, ongoing discussion of the data, and member checks were applied to ensure trustworthiness of the data. Simultaneous data collection and analysis supported dependability and, in the interpretation phase, reflexivity of team members regarding participant narratives was used to enhance trustworthiness (Creswell, 1994; Lipson, 1991; Lincoln & Guba, 1985).
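The consistency check described above can be illustrated with a simple coder-agreement calculation. The sketch below is not drawn from the study itself: the code labels, segment identifiers, and agreement computation are hypothetical, and the actual review in ATLAS.ti was a qualitative consensus process rather than a numeric index.

```python
# Hypothetical sketch of a coder-agreement check for independently coded
# transcript segments. The MAP study resolved disagreements through a
# third-coder consistency review rather than by computing a statistic.

def percent_agreement(coder_a: dict, coder_b: dict) -> float:
    """Share of segments assigned the same code by both coders."""
    shared = coder_a.keys() & coder_b.keys()
    if not shared:
        return 0.0
    matches = sum(coder_a[seg] == coder_b[seg] for seg in shared)
    return matches / len(shared)

# Hypothetical code assignments for five transcript segments.
coder_a = {"seg01": "resources", "seg02": "integration", "seg03": "reinvention",
           "seg04": "attitudes_toward_research", "seg05": "readiness_for_change"}
coder_b = {"seg01": "resources", "seg02": "integration", "seg03": "dissemination",
           "seg04": "attitudes_toward_research", "seg05": "readiness_for_change"}

agreement = percent_agreement(coder_a, coder_b)
disagreements = [seg for seg in coder_a if coder_a[seg] != coder_b.get(seg)]
print(f"Agreement: {agreement:.0%}; flag for third-coder review: {disagreements}")
```

In this illustration, segments on which the two coders disagree are the ones that would be routed to the third team member for review.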

Results

Preliminary findings are based on analysis of 42 interviews representing CSAT MTP study participants at eight organizational levels. We report on the extent of adoption of the Matrix intervention at each site, and then describe four salient themes likely to affect adoption.

Extent of Adoption

Site-level participants (site principal investigators, site evaluators, clinic directors, clinic staff) were asked whether the Matrix intervention was adopted in their site after the conclusion of the MTP study. Participants from one clinic reported having adopted the Matrix model in the sense of reinvention, which is described by Rogers (1995b) as a type of adoption in which the intervention is modified or adapted to the particular needs of the setting. The agency had opened a new outpatient program to accommodate the clinical trial, and both the Matrix intervention and the treatment-as-usual intervention (derived from practice in the parent clinic) were provided in this setting. The new clinic, which served Drug Court participants, was supported by the referring judge, who identified the clinic as providing the Matrix treatment and who was interested in continuing to refer participants to that treatment. The counselor who was trained to implement Matrix continued to work at the program after the clinical trial ended. The site was described by respondents as having a "philosophy of evidence-based practices" with "a commitment to training" and "an organizational culture of initiative."

While respondents from this site viewed the Matrix intervention as effective, they modified the intervention to address concerns raised by clients but not addressed in the Matrix manual. These included socioeconomic issues such as poverty and gang violence, as well as individual mental health issues such as trauma, abuse, and dual diagnosis. The program also added psychological assessments to augment the manualized Matrix intervention, increased the number of individual sessions when needed, and made outside referrals to focused skill-building groups such as anger management. In this clinic, several factors believed to support adoption were in place. A new program had been opened to accommodate the study, rather than having the Matrix intervention brought into an existing program. Matrix-trained staff remained with the program after the MTP trial ended, so that some of the resources needed to support adoption were available (Backer, 1991). Clinic staff believed in their own capacity to change (Backer, 1995), were familiar with the research process and interested in research-based practice (Lamb et al., 1998), and perceived the Matrix intervention as effective (Backer, 1991).

In a second site, respondents reported adoption limited to a single counselor. The organizational culture at this clinic was described as resistant to change: "this is our standard way of doing it - it doesn't matter what you find out there - this is how we do it." Separation had also occurred between MTP staff and other staff and administrators not involved in the clinical trial. Staff not involved in the MTP study regarded the Matrix intervention with skepticism and ambivalence, in part because it did not match State definitions of reimbursable intensive outpatient service. The Matrix-trained counselor continued to work at the clinic after the MTP study ended, and continued the intervention on her own. However, because the intervention was not continued on an organizational basis, we consider this partial adoption.

In two MTP sites there was no opportunity for adoption. One clinic was defunded by its parent organization near the end of MTP recruitment and, while able to complete the study, was unable to provide any further drug abuse treatment services (Matrix or otherwise) after the study ended. The second clinic already used Matrix as its usual treatment and developed a non-Matrix comparison condition for study purposes; because Matrix was already its usual care, there was nothing new to adopt.

At the four remaining clinics there was little evidence of adoption. In each case, the Matrix-trained staff left the clinic after the MTP study ended. Respondents from these sites, at the time of interview, reported no current use of Matrix and no plan to adopt the intervention. Two clinics did modify practices by drawing on their Matrix experience. One clinic added evening groups and increased the length of its groups. The second clinic incorporated Matrix visual aids into standard treatment and increased consumer focus by offering snacks and child care. While these may represent meaningful practice modifications based on the Matrix intervention, they do not represent reinvention, adoption, or partial adoption.

Factors Likely to Affect Adoption

Drawing on qualitative analyses of interview data, we identified four conceptual themes that are likely to affect adoption and that may be informative in future multi-site clinical trials: Spaceship RCT, Integration, Waiting for Godot, and Planning for Adoption.

Spaceship RCT

For any clinic participating in a multi-site randomized clinical trial (RCT), and particularly for clinics not experienced in research, the requirements of the clinical trial may seem foreign and invasive. A multi-site trial arrives at a clinic site bringing financial resources, often additional staff positions, staff training and supervision, and the experimental intervention. Depending on how well these components are integrated into the clinic (see Integration, below), the termination of the clinical trial takes away those same resources, staff, and innovative practices necessary to sustain the intervention. The two key dimensions of this theme, the invasive impact on the clinic and the bringing and taking away of resources, are reflected in the respondents' comments below.

… the intrusive alien research force…

Site PI

…you staff the research study with other people that come in …the grant ends, and they go away…. they as practitioners may take what they feel, what they learned in here…off someplace else…

Site PI

Integration

Integration refers to the degree to which the intervention was integrated within the clinic organizations both horizontally, across project and usual clinical staff, and vertically, within the clinic administration. Respondents reported that the lack of integration of the research initiative created isolation of the MTP project staff and separation between MTP staff and those staff members not involved in the clinical trial. In addition, the project tended to be isolated from clinic administration, which often lacked investment in the intervention and knowledge of its usefulness.

…and if (the intervention) doesn’t get written into the P&P’s [Policies and Procedures] of “This is how we do it,” or “This is the structure we put into place,” it lasts only as long as the people who did it and believe in it are still in the positions.

Clinic Director

…having a model come in that … is the new model, the other counselors, because they weren’t… pulled in enough were resentful of that model. And there almost became egos involved there that said, “Oh, well, that’s the new model, and I don’t see that it’s different.” But there needs to be a way that model can get infused in more than just one therapist if you really wanted it to stay long term. That kind of organizational culture issues are major, I think, whether it becomes adopted long term or not.

Site PI

Waiting for Godot

Clinical trial data are typically inaccessible to the participating programs until long after the trial has ended. Interviews reported here occurred on average 9 months after study recruitment had ended and, while respondents described having intuitions about the effectiveness of the intervention, none had seen outcome data. Like the characters in Samuel Beckett's play, the site- and clinic-level participants who implemented the clinical trial were waiting at length. During this interval, if a clinic had to decide whether or not to continue the intervention, the decision was made without effectiveness data.

…that’s a long time. If you’re a community treatment provider and you participated. You know, a long time to find out, well, was it worth doing? And in the meantime, should I be making changes in my model? Well, they’re probably only making changes because they liked some of the things that they saw, but certainly not based on data.

Steering Committee Member

I’m concerned about even the first papers coming out for this project, you know, because I had - people are going to forget about this program. They’re going to say, “Oh, Matrix! So is that the one… oh, really? Did it work?” I mean, just because it’s, it’ll be three years now, you know. And who knows when the first paper is going to come out?

Site PI

Planning for Adoption

Planning for Adoption refers to any administrative, clinical, or funding strategies or activities directed toward the adoption of the innovative intervention at the site or clinic level. Participants were asked how they might have planned for adoption before or during the course of the clinical trial. The common response across all interviews was that no planning for adoption had occurred at any level of the MTP project.

And the simple truth is that there wasn’t a whole lot of thought in the beginning of any of our studies about what happens next. The presumption has been, “We’ll find truth out there, and we’ll publish the truth, and we’ll let people know what happened, and something will magically happen, and the truth will be used.” I’m being sarcastic obviously. But there still isn’t a plan for “once we find out what works, how do we put it into better practice?”

Steering Committee Member

…I think doing a demonstration project, showing a good or bad outcome and then removing the money isn’t the way to get practices changed. I mean, it gave us data.

Steering Committee Member

Discussion

Concerning adoption of the Matrix intervention following the MTP study, we found full adoption in one clinic, partial adoption in another clinic, no adoption in four clinics, and no opportunity for adoption in two clinics. Drawing on qualitative analysis of interviews, we identified four themes to inform adoption in the context of multi-site clinical trials.

The Spaceship RCT theme is familiar in that clinicians often complain about researchers who "parachute" into a clinic setting, get data, and get out. On one hand, Spaceship RCT reflects the same concern, elevated beyond a single researcher to the elaborate and interconnected organization of a multi-site clinical trial. On the other hand, it also describes how a large clinical trial organization descends onto the clinic setting, bringing extensive resources with it and then, later, taking those resources away. Integration is a related theme. Clinical trial efforts that are not integrated into the clinic setting are less likely to support adoption after the clinical trial, while those that are more integrated offer a hedge against the end of the trial and the loss of resources. For example, in several MTP clinics new staff were hired to conduct the Matrix intervention. When MTP funding ended, those staff often left the clinic, leaving the clinic without Matrix-trained staff and limiting the potential to continue the intervention. Recruiting from among current staff to conduct the experimental intervention, especially staff with some history at the clinic, would increase the likelihood that knowledge and skills developed in the course of the trial would remain longer within the clinic. The MTP clinical trial also used a centralized conference call supervision model, in which Matrix counselors in all clinics were supervised by staff in the coordinating center. An alternative approach may be to train supervisors within each clinic, so that the capacity to supervise the Matrix intervention could also remain with the clinic once the trial ended.

The Waiting for Godot theme reflects a basic conflict between clinical trials research and intervention adoption. The practical demands of adoption are that clinics need effectiveness data near the end of the clinical trial, to guide their decision on whether or not to adopt the tested intervention. The opposing demands of clinical trials research are to protect scientific integrity, and avoid bias, by withholding effectiveness data until all data collection has been completed. In practice, withholding of effectiveness data continues through data analysis, through some levels of internal or organizational vetting, and up to the time that findings are submitted, accepted for publication, or actually published.

This conflict is extended by the need for long-term outcome data. Findings that one intervention is more effective than another at the end of treatment are undermined when later comparisons (e.g., 6 or 12 months post-treatment) show no differences between groups. When studies are extended to long-term follow-up periods, the traditional practice of protecting the integrity of the data until the final data collection point is extended as well. Our recommendation for multi-site clinical trial leaders and funders is to carefully reconsider the balance between the demands of science and of practice, and to develop standards for data release that bring effectiveness data back to the participating clinics, and to the field, as soon as possible.

The Planning for Adoption theme assumes that such planning could be a regular part of clinical trials research, but the traditional approach is to conduct clinical trials to assess effectiveness and then use other strategies to disseminate interventions. In the context of the CTN, a large national drug abuse research network with the explicit aim not only to test effectiveness but also to integrate effective interventions into practice, we suggest that planning for adoption can be included in the developmental stages of every protocol. This can be accomplished by designing protocols that increase clinical trial integration in the clinic setting (e.g., selecting existing staff to deliver new treatments, using on-site supervision models, providing intervention training to all staff at the conclusion of the clinical trial), and by setting aggressive standards for the rapid return of effectiveness data to participating clinics. Such a planning intervention may add value to the CTN by improving the capacity of clinics to adopt tested interventions, to apply research-based practices and, where tested interventions are effective, to implement best practice models.

While we interviewed 42 participants representing multiple organizational levels and treatment clinics, results are nevertheless derived from a case study of a single multi-site clinical trial, and generalize only to that trial. Results reported here refer to events at one point in time, and additional adoption or partial adoption may have occurred in MTP clinics after these interviews. We also investigated a non-CTN clinical trial in an effort to draw inferences informative for the NIDA CTN. While the organization of clinical trials within the CTN has some similarities to the MTP trial, it also has numerous differences. One is that, while 5 of the 8 MTP clinics were research-naive, many CTN clinics were selected for having an organizational culture that was research-experienced. We are currently extending this research to include a protocol within the CTN organization, and this work will inform the study of adoption in important ways. Notwithstanding these limitations, we offer the conclusion that planning for adoption in the early stages of CTN protocol development will better serve the CTN aim of integrating new interventions into practice.

Acknowledgments

This work was supported by National Institute on Drug Abuse grant R01 DA-14470, by the California-Arizona node of the NIDA Clinical Trials Network (U10 DA-015815), and by the NIDA San Francisco Treatment Research Center (P50 DA-09253).

References

1. Backer TE. Drug abuse technology transfer. Rockville, MD: National Institute on Drug Abuse; 1991.
2. Backer TE. Assessing and enhancing readiness for change: Implications for technology transfer. In: Backer TE, David SL, Saucy G, editors. Reviewing the behavioral science knowledge base on technology transfer. NIDA Research Monograph 155. Rockville, MD: National Institute on Drug Abuse; 1995.
3. Boyle JS. Field research: A collaborative model for practice and research. In: Morse JM, editor. Qualitative nursing research: A contemporary dialogue. Newbury Park, CA: Sage; 1991.
4. Brown AH. Integrating research and practice in the CSAT Methamphetamine Treatment Project. Journal of Substance Abuse Treatment. 2004;26:103–108. doi: 10.1016/S0740-5472(03)00163-6.
5. Bulmer M. Concepts in the analysis of qualitative data. Sociological Review. 1979;27:651–677.
6. Burke WW, Litwin GH. A causal model of organizational performance and change. Journal of Management. 1992;18:532–545.
7. Creswell J. Research design: Qualitative and quantitative approaches. Thousand Oaks, CA: Sage; 1994.
8. De Smet AL. Adaptive models of organization for substance abuse treatment (Issue Paper). NIDA Resource Center for Health Services Research; 1998.
9. Galloway G, Marinelli-Casey P, Stalcup J, Lord R, Christian D, Cohen J, Reiber C, Vandersloot D. Treatment-as-usual in the Methamphetamine Treatment Project. Journal of Psychoactive Drugs. 2000;32:165–176. doi: 10.1080/02791072.2000.10400225.
10. Herrell JM, Taylor J, Gallagher C, Dawud-Nouri S. A multisite study of the effectiveness of methamphetamine treatment: An initiative of the Center for Substance Abuse Treatment. Journal of Psychoactive Drugs. 2000;32:143–147. doi: 10.1080/02791072.2000.10400222.
11. Huber A, Ling W, Shoptaw SJ, Gulati V, Brethen P, Rawson RA. Integrating treatments for methamphetamine abuse: A psychosocial perspective. Journal of Addictive Diseases. 1997;16:41–50. doi: 10.1080/10550889709511142.
12. Huber A, Lord R, Gulati V, Marinelli-Casey P, Rawson R, Ling W. The CSAT methamphetamine treatment program: Research design accommodation for "real world" application. Journal of Psychoactive Drugs. 2000;32:149–156. doi: 10.1080/02791072.2000.10400223.
13. Institute of Medicine. Treating drug problems: A study of the evolution, effectiveness, and financing of public and private drug treatment systems. Vol. 1. Washington, DC: National Academy Press; 1990.
14. Lamb S, Greenlick MR, McCarty D. Bridging the gap between practice and research: Forging partnerships with community-based drug and alcohol treatment. Washington, DC: National Academy Press; 1998.
15. Lincoln YS, Guba EG. Naturalistic inquiry. Beverly Hills, CA: Sage; 1985.
16. Lipson JG. The use of self in ethnographic research. In: Morse JM, editor. Qualitative nursing research: A contemporary dialogue. Rev. ed. Newbury Park, CA: Sage; 1991.
17. Marinelli-Casey P, Domier CP, Rawson RA. The gap between research and practice in substance abuse treatment. Psychiatric Services. 2002;53(8):984–987. doi: 10.1176/appi.ps.53.8.984.
18. National Institute on Drug Abuse. NIDA CTN: About the CTN. (n.d.). Retrieved December 14, 2003, from http://www.nida.nih.gov/CTN/about.html.
19. National Institute on Drug Abuse. Reviewing the behavioral science knowledge base on technology transfer (Backer T, David S, Saucy G, editors). NIDA Research Monograph 155. Rockville, MD: National Institute on Drug Abuse; 1995.
20. Obert JL, McCann MJ, Marinelli-Casey P, Weiner A, Minsky S, Brethen P, Rawson RA. The Matrix model of outpatient stimulant abuse treatment: History and description. Journal of Psychoactive Drugs. 2000;32:157–164. doi: 10.1080/02791072.2000.10400224.
21. Pfeffer J. New directions for organizational theory: Problems and prospects. New York: Oxford University Press; 1997.
22. Rawson RA, Marinelli-Casey P, Anglin MD, Dickow A, Frazier Y, Gallagher C, Galloway G, Herrell JM, Huber A, McCann MJ, Obert J, Pennell S, Reiber C, Vandersloot D, Zweben J. A multi-site comparison of psychosocial approaches for the treatment of methamphetamine dependence. Addiction. 2004;99:708–717. doi: 10.1111/j.1360-0443.2004.00707.x.
23. Rawson RA, Marinelli-Casey P, Ling W. Dancing with strangers: Will U.S. substance abuse practice and research organizations build mutually productive relationships? Addictive Behaviors. 2002;27:941–949. doi: 10.1016/s0306-4603(02)00292-7.
24. Rawson RA, McCann MA, Huber A, Marinelli-Casey P, Williams L. Moving research into community settings in the CSAT Methamphetamine Treatment Project: The coordinating center perspective. Journal of Psychoactive Drugs. 2000;32:201–208. doi: 10.1080/02791072.2000.10400229.
25. Rawson RA, Obert JL, McCann MJ, Ling W. Psychological approaches to the treatment of cocaine dependency. Journal of Addictive Diseases. 1991;11:97–120. doi: 10.1300/j069v11n02_07.
26. Rawson RA, Obert JL, McCann MJ, Mann AJ. Cocaine treatment outcome: Cocaine use following inpatient, outpatient, and no treatment. In: Harris LS, editor. Problems of drug dependence: Proceedings of the 47th Annual Scientific Meeting. NIDA Research Monograph 67. DHHS Pub. No. (ADM) 86-1448. Rockville, MD: National Institute on Drug Abuse; 1986.
27. Rawson RA, Shoptaw SJ, Obert JL, McCann MJ, Hasson AL, Marinelli-Casey P, Brethen PR, Ling W. An intensive outpatient approach for cocaine abuse treatment: The Matrix model. Journal of Substance Abuse Treatment. 1995;12:117–127. doi: 10.1016/0740-5472(94)00080-b.
28. Rogers EM. Diffusion of innovations. 4th ed. New York: Free Press; 1995a.
29. Rogers EM. Diffusion of drug abuse prevention programs: Spontaneous diffusion, agenda setting, and reinvention. In: Backer TE, David SL, Saucy G, editors. Reviewing the behavioral science knowledge base on technology transfer. NIDA Research Monograph 155. Rockville, MD: National Institute on Drug Abuse; 1995b.
30. Simpson DD. A conceptual framework for transferring research to practice. Journal of Substance Abuse Treatment. 2002;22:171–182. doi: 10.1016/s0740-5472(02)00231-3.
