Author manuscript; available in PMC: 2008 Sep 1.
Published in final edited form as: J Subst Abuse Treat. 2007 Apr 16;33(2):193–199. doi: 10.1016/j.jsat.2007.01.005

Counselor Assessments of Training and Adoption Barriers

Norma G Bartholomew 1, George W Joe 1, Grace A Rowan-Szal 1, D Dwayne Simpson 1
PMCID: PMC1989156  NIHMSID: NIHMS28741  PMID: 17434707

Abstract

The prevailing emphasis on adoption of evidence-based practices suggests that more focused training evaluations are needed to capture the factors in clinician decisions to use new techniques. This includes relationships of post-conference evaluations with subsequent adoption of training materials. Training assessments were therefore collected at two time points from substance abuse treatment counselors who attended training on dual diagnosis and on therapeutic alliance as part of a state-sponsored conference. Customized evaluations were collected to assess counselor perceptions of training quality, relevance, and resources in relation to utilization of the training during the 6 months following the conference. Higher ratings for relevance of training concepts and materials to the service needs of clients, desire for additional training, and level of program support were each related to greater trial usage during the follow-up period. Primary resource-related and procedural barriers cited by counselors included lack of time and redundancy with existing practices.

Keywords: Training evaluation, Training assessments, Trial adoption, Implementation barriers, Technology transfer

1. Introduction

Manuals are a preferred tool for guiding delivery of interventions and improving their fidelity, but Fixsen, Naoom, Blase, Friedman, and Wallace (2005) indicate that “practice” (e.g., role-playing, behavior rehearsal, coaching) is an essential training component for optimizing their effectiveness. As such, training events are one of the primary ways substance abuse treatment counselors learn about new practices. With the growing emphasis on encouraging treatment programs to adopt evidence-based practices, the influence of training workshops on clinician decisions to use and apply new techniques deserves more careful attention (Gotham, 2004; Simpson, 2006). Correspondingly, Fixsen and his colleagues reviewed the implementation literature and found an absence of evaluation research on the effectiveness of training procedures. More systematic study of training components in the adoption and implementation process is therefore needed. In particular, it would be helpful to know how counselor evaluations of training attributes, especially in relation to their perceptions of organizational realities, might influence the adoption and implementation of new interventions or techniques. Simpson (2002, 2004) suggests that both personal counseling dispositions and organizational factors are involved in the initial adoption and trial use of new treatment practices.

Staffing and funding limitations often dictate the types and frequency of training that treatment professionals can attend. Moreover, state or licensure requirements are major considerations when these professionals make choices about continuing education (Taleff, 1996). For example, in a distance-learning program conducted by the Addiction Technology Transfer Center of New England (Hagberg, Love, Bryant, & Storti, 2000), 55% of clinicians reported they applied the training toward certification/licensure. About 57% of enrollees said they participated in fewer than 40 hours of training a year, and 61% were reimbursed for training. Cost was a factor in their decisions to act on training opportunities (Hagberg et al., 2000). Few programs have the luxury of being able to free up more than a few staff at a time for intensive training (5 working days or longer), and high costs can prevent staff from attending on their own time (Brown, 2006; Hagberg et al., 2000).

It is not unusual for state substance abuse authorities to hold annual training conferences that include in-depth coverage of therapeutic strategies or “best practices” (e.g., 3-hour workshops and specialized “tracks” that may continue over one or more days of the training event). Clinicians attending these mainstream workshops are usually asked to complete what Kirkpatrick (1977) has described as “customer satisfaction” questionnaires. These offer simple feedback on content and trainer performance but seldom shed light on participant intentions and reasons for actually using the training materials, and so offer little help in understanding the underlying factors that drive implementation decisions. In addition, such evaluations often do not include systematic follow-up surveys to assess progress with the actual use of training information (Walters, Matson, Baer, & Ziedonis, 2005). Exceptions include some of the training activities conducted by regional Addiction Technology Transfer Centers (ATTCs) across the country (Hagberg et al., 2000) and funded research focused on technology transfer (Lewis, Record, & Young, 1998; Miller, Yahne, Moyers, Martinez, & Pirritano, 2004).

Even fewer conference evaluations take a broader view of this process by including both post-training and follow-up questionnaires that consider organizational factors (resources, time, staff) in addition to counselor comfort and satisfaction with the materials in relation to decisions about adoption and implementation. Backer, Liberman, and Kuehnel (1986) have discussed the interactive importance of practitioner attributes and organizational support in efforts to disseminate and adopt new technologies. Practical experiences and recommendations for meeting training challenges in using a cognitive intervention technique (visual communication mapping; Dansereau & Dees, 2002) and a family therapy program for adolescents (Multidimensional Family Therapy; Liddle et al., 2002) address the value of hands-on practice, feedback and rewards for progress, being realistic about skill requirements and limitations, organizational team building and peer support, and empirical evaluations of results. More systems-level attention to these and related training components in the adoption and implementation process is needed.

The current study reports on a two-step approach for assessing the impact of clinical training and trial adoption of training material by counselors. As suggested by Fixsen et al. (2005) and Simpson and Flynn (this issue), it focuses particularly on whether staff can readily see the relevance and benefits that an innovation offers to recipients, along with implementation barriers such as the common difficulty of finding release time and financial resources for sufficient staff training. Participants were surveyed immediately following their training to ascertain their personal reactions and intentions to use the materials presented, along with perceptions of organizational factors that may affect applications of the training. In the second step, follow-up surveys were collected 6 months later to ask participants about their progress in using the materials and any barriers they had encountered. The study represents a practical approach for evaluating the penetration and impact of clinical training workshops in cases where comprehensive, in-depth analyses of training outcomes are not feasible or affordable (e.g., Miller, Moyers, Arciniega, Ernst, & Forcehimes, 2005).

2. Method

2.1. Workshop training

In 2002 a state office of drug and alcohol services sought assistance from the Institute of Behavioral Research at Texas Christian University (TCU) and its regional ATTC to assess the training needs of its workforce of treatment staff. The plan also included evaluating a training event (i.e., a state-sponsored training conference) based on the training needs staff had identified using the TCU Program Training Needs (PTN) assessment (see Rowan-Szal, Greener, Joe, & Simpson, this issue; Rowan-Szal, Joe, Greener, & Simpson, 2005). The 3-day conference was offered twice, over two separate 3-day periods, to address scheduling conflicts so that the majority of state workers could attend. The conference theme was dedicated to key issues identified by program directors and staff in the training needs survey: working with dually diagnosed clients, improving counseling skills (therapeutic alliance and client engagement), and working with adolescents and their families. Three day-long training sessions devoted to these specific topics were offered in accordance with staff training preferences obtained as part of the needs survey.

Close to 300 counselors and administrators attended one or more days of the training conference. The majority of clinical staff attended sessions on working with dual diagnosis1 and on improving therapeutic alliance and treatment engagement,2 but about 70 participants attended one of two other specialty breakout sessions (i.e., working with adolescents, or a workgroup for regional and program administrators). Participants received a full-day session (about 7 hours) on these topics. In addition, day-long “booster” sessions were offered regionally following the conference for further review and rehearsal of the clinical strategies presented in the therapeutic alliance workshop.

2.1.1. Participants and data collection procedures

Training-specific assessments were collected from conference participants at two time points. Respondents read a “passive” informed consent statement at the beginning of the conference, following procedures approved by the Institutional Review Board; the statement explained that completing the survey indicated their willingness to participate in the conference evaluation study.

Evaluation forms were collected by research personnel in February 2003 from 214 participants who attended the Dual Diagnosis (DD) workshop and from 293 participants who attended the workshop on Therapeutic Alliance (TA). Follow-up surveys were mailed 6 months later (in August 2003) to workshop attendees. Instructions indicated that the survey should be completed and mailed directly to TCU using an enclosed envelope. A total of 156 follow-up surveys (73%) were returned for the DD training and 173 (59%) for the TA training. These return rates were comparable to or higher than the 56% to 64% rates generally reported for employees surveyed by mail in the organizational literature (Schneider, Parkington, & Buxton, 1980; Schneider, White, & Paul, 1998).

A 4-character anonymous linking code (first letter of mother's first name, first letter of father's first name, first digit of SS#, last digit of SS#) was included in an effort to cross-link the evaluation forms. There were 88 matched (conference-to-follow-up) surveys for the DD training respondents and 115 for the TA training respondents. These successful matching rates were 41% and 39%, respectively, relative to the total number of counselors attending the original training 6 months earlier.
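For illustration, the minimal Python sketch below shows how such a self-generated code might be assembled from respondent-provided fields; the function and its inputs are hypothetical conveniences, not part of the study's instruments.

def link_code(mother_first, father_first, ssn):
    # Assemble the 4-character anonymous code described above: first
    # letters of the mother's and father's first names, plus the first
    # and last digits of the Social Security number.
    return (mother_first[0] + father_first[0] + ssn[0] + ssn[-1]).upper()

# Example with hypothetical values: link_code("Mary", "John", "123456789") -> "MJ19"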

Demographic information available for the 253 counselors who attended the statewide workshops indicated that 65% were female, 65% were Caucasian, 30% were African American, and 5% were of other backgrounds. The average age of the counselors was 45 years.

2.2. Instruments

2.2.1. Workshop evaluation form

The 22-item TCU Workshop Evaluation (WEVAL) form was used to collect counselor ratings on (1) relevance of the training, (2) desire to obtain more training, and (3) program resources supporting the training and implementation. The WEVAL was completed by participants immediately following the DD and TA training sessions, and item responses were made on a 5-point Likert scale (1 = not at all, 2 = a little, 3 = some, 4 = a lot, 5 = very much).

Ratings for each workshop were factor analyzed using principal factor analysis with squared multiple correlations as communality estimates, and three factors were identified for both workshops. The first factor was relevance of the training (i.e., materials were seen as relevant and “doable”). It was defined by the following items: “material is relevant to the needs of your clients,” “you expect things learned will be used in program soon,” “you were satisfied with the material and procedures,” and “you would feel comfortable using them in your program.” Coefficient alpha reliabilities were .72 and .82 for the DD and TA workshops, respectively. The second factor was labeled training engagement, reflecting behavioral interest in obtaining further training on the materials. The items were: “you would attend a follow-up training session,” “you would invite other staff from agency to attend follow-up training session,” and “follow-up training session would facilitate implementation of material.” Coefficient alpha reliabilities for this scale were .89 and .88, respectively. The third factor represented program support, based on program resources available to implement the materials. Its marker items were “your program has enough staff to implement material,” “your program has sufficient resources to implement material,” and “you have the time to do set-up work required to use this material.” Coefficient alpha reliabilities were .78 and .80.
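For readers wishing to reproduce the reliability figures, a minimal sketch of the coefficient alpha computation is given below, assuming a respondents-by-items array of Likert ratings; the function name and input are hypothetical.

import numpy as np

def coefficient_alpha(items):
    # items: (respondents x items) array of 5-point Likert ratings
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()
    total_score_var = items.sum(axis=1).var(ddof=1)
    # Standard formula: alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    return (k / (k - 1.0)) * (1.0 - sum_item_vars / total_score_var)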

2.2.2. Workshop assessment follow-up

The 14-item TCU Workshop Assessment Follow-Up (WAFU) form contained a 6-item section on post-training evaluation and trial adoption of workshop materials and an 8-item inventory about implementation barriers. The evaluation items were (1) How satisfied were you with the training provided? (2) Have you used any of the ideas or materials from the workshop? (3) If so, how useful were they? (4) Have you recommended or discussed them with others? (5) Do you expect to use these materials in the future? and (6) Are you interested in further, more specialized training? Item responses were made on a 5-point Likert scale (1 = not at all, 2 = a little, 3 = some, 4 = a lot, 5 = very much).

Although all items in this section were represented by a single factor in a principal factor analysis, Items 1 and 6 dealt with satisfaction and interest, while the other four were concerned with applications of the materials. Analyses for this study focused specifically on the 4-item subset that addressed trial use of workshop materials; it had a coefficient alpha reliability of .90.

For the items on barriers or reasons why materials had not been used, respondents were asked, “What has kept you from making more use of the materials?” They marked all problems they had encountered. Resource barriers included lack of time, lack of resources, and not enough training. Procedural barriers included already using similar materials, not my style, strategies won't work here, materials are difficult to use, and materials conflict with agency philosophy.

3. Results

To examine the evaluations of workshop training in relation to subsequent usage of the materials at follow-up, separate analyses were completed for the DD and TA training workshops. Pearson correlations were computed between the measures from the WEVAL (relevance, training engagement, and program support) and the WAFU follow-up (trial use), and this was followed by multiple regression analysis in which trial usage of training materials was predicted by relevance, training engagement, and program support.
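A minimal sketch of this analysis pipeline, assuming complete matched data in hypothetical arrays (X holding the three WEVAL scale scores and y the WAFU trial-use score), might look as follows.

import numpy as np

def correlations_and_regression(X, y):
    # Pearson correlation of each WEVAL scale with trial use
    r = [np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])]
    # Ordinary least squares regression of trial use on all three scales
    Xd = np.column_stack([np.ones(len(y)), X])    # prepend intercept column
    b, *_ = np.linalg.lstsq(Xd, y, rcond=None)    # unstandardized B weights
    resid = y - Xd @ b
    r2 = 1.0 - resid.var() / y.var()              # proportion of variance explained
    return r, b, r2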

Table 1 shows results for the two workshops. Training evaluation measures (relevance, training engagement, and program support) are listed in the left margin, and trial use of each workshop's materials during the follow-up period is arrayed across the top. The correlations show that each training evaluation measure was significantly related to trial use of materials for each workshop. That is, more favorable workshop ratings with regard to relevance, training engagement, and program support were significantly related to more trial usage of the training in the follow-up period for both workshops.

Table 1.

Means, Correlations, and Multiple Regression Analyses for Trial Use of Training during Follow-up Period

WEVAL measure    Trial Use of Dual Diagnosis (DD)              Trial Use of Therapeutic Alliance (TA)
                 mean (sd)  r       beta  B     t              mean (sd)  r       beta  B     t
Intercept                                  −.38  −.61                                  1.21  1.94
Relevance        4.2 (.5)   .48***  .30    .47   2.75**        4.5 (.5)   .30**   .17    .25   1.76
Engagement       4.0 (.8)   .43***  .25    .28   2.39*         4.3 (.7)   .32***  .26    .29   2.94**
Support          2.9 (.9)   .30***  .17    .15   1.74          3.3 (1.0)  .26**   .18    .13   1.91

Sample size      89 (correlations), 88 (regression)            115 (correlations), 114 (regression)
Multiple R       .55                                           .42
R²               .30                                           .16
F-test           F(3, 84) = 12.07, p = .0001                   F(3, 111) = 8.15, p = .0001

p < .10; * p < .05; ** p < .01; *** p < .001

The table also presents the multiple regression analysis of post-training trial use. Due to moderate intercorrelations among the ratings for relevance, training engagement, and program support, not all of these predictors received statistically significant regression weights. For the DD workshop, relevance and training engagement were statistically significant predictors, with program support significant at the p < .09 level. The amount of variance accounted for by these measures was 30%. Together, the correlation and regression results suggest that being comfortable with using what was taught in the workshop, interest in obtaining more training, and belief that one's treatment program had the resources needed to support what was taught about dual diagnosis were important in predicting reports of subsequent trial usage of the training. For the TA workshop, a significant amount of variance (16%) was also predicted by the WEVAL measures. Based on the regression weights, only the workshop rating of training engagement contributed significant independent information to this prediction, with program support significant at the p < .06 level and relevance at the p < .08 level. Again, based on the correlation and regression results, actual trial use of the TA materials was related to counselor interest in obtaining more training, program resources, and comfort with using what was taught in the workshop. For both workshops, the results indicate that counselors with more favorable attitudes toward the relevance and quality of training were more likely to try the materials following the workshop.

3.1. Barriers to using training materials

Reasons cited by participants for not using the workshop training materials are summarized in Table 2. With regard to the DD training, the most frequent resource-related barrier was lack of time (46%), followed by not enough training (15%) and lack of resources (12%). The most common procedural reasons included already using something similar (30%) and conflict with agency philosophy (7%). Reasons such as not my style (2%), strategies won't work (1%), and materials were difficult (1%) were rarely cited.

Table 2.

Barriers Reported by Counselors to Using Training during Follow-up Period (%)

Barrier                                          Dual Diagnosis Training (n = 96)   Therapeutic Alliance Training (n = 115)
Resource barriers:
 Lack of time                                    46                                 46
 Lack of resources                               12                                 15
 Not enough training                             15                                 10
Procedural barriers:
 Already use similar materials                   30                                 31
 Materials conflict with agency philosophy       7                                  15
 Strategies won't work here                      1                                  7
 Not my style                                    2                                  3
 Materials difficult to use                      1                                  1
 Other reasons                                   18                                 17

Barriers to using the TA materials were highly similar to those found for the DD training. The most frequently cited resource barriers were lack of time (46%), lack of resources (15%), and not enough training (10%). Among the procedural barriers, the most cited reason was already using something similar (31%), followed by materials conflict with agency philosophy (15%) and strategies won't work (7%). Reasons that would suggest personal conflict with the material – such as not my style (3%) and materials were difficult (1%) – were among the lowest reported. Roughly 17–18% of the participants mentioned “other reasons.” Closer examination of these responses showed they were related to agency leadership issues (upper management or supervisors were not supportive), short tenure of clients (clients not in treatment long enough to implement techniques), and modality conflicts (strategies would not work in group counseling, with inpatient clients, etc.). In addition, several counselors noted that they were not currently counseling clients directly because they were supervisors or in administrative positions.

In addition, workshop ratings of program support were compared with the barriers staff reported, particularly those involving lack of resources. In the DD workshop, lower ratings of program support were related to more barriers reflecting lack of resources (r = −.31). For the TA workshop participants, poorer program support ratings were likewise related to more barriers representing lack of resources (r = −.35) and lack of time (r = −.34). Logistic regressions (with the program support variable dichotomized at the median) showed that the odds of citing lack of resources as a barrier were 4.4 (χ2(1) = 4.42, p < .04) and 3.4 (χ2(1) = 4.04, p < .05) times higher for staff in the lower half of the program support distribution than for those in the upper half, for the DD and TA participants, respectively. These results indicated there was consistency, as expected, between staff ratings of program resources and the types of implementation barriers they reported.
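Because a logistic regression with a single dichotomized predictor yields the same odds ratio as the corresponding 2 × 2 table, the computation can be sketched directly, as below; the data arrays are hypothetical stand-ins for the study's records.

import numpy as np
from scipy.stats import chi2_contingency

def median_split_odds_ratio(support, cited_barrier):
    # support: WEVAL program-support scores, one per counselor
    # cited_barrier: True where the counselor marked "lack of resources"
    low = np.asarray(support) < np.median(support)        # median split
    cited = np.asarray(cited_barrier, dtype=bool)
    table = np.array([[np.sum(low & cited),  np.sum(low & ~cited)],
                      [np.sum(~low & cited), np.sum(~low & ~cited)]])
    chi2, p, _, _ = chi2_contingency(table, correction=False)
    odds_ratio = (table[0, 0] * table[1, 1]) / (table[0, 1] * table[1, 0])
    return odds_ratio, chi2, p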

4. Discussion

As discussed by Simpson and Flynn (this issue), the road from training to full implementation of an innovation as “routine practice” is not a straight line. As shown by Rowan-Szal et al. (this issue), some degree of program-level planning and preparation should precede a targeted training event focused on introducing staff to counseling innovations deemed desirable. Ideally, this would include a realistic understanding of current program challenges (for example, high rates of co-occurring mental health problems or early client dropout) and related staff training needs, along with awareness of emerging “best practices” for addressing the challenges. Possible training solutions and goals logically emerge from these considerations, and the process of implementing an innovation begins. Well-executed training that respects the requirements of adult learners, is relevant to the needs of the trainee, and counts toward much-needed continuing education hours has the best chance of garnering a favorable “first impression.” However, the decision-making process begins during the training itself, as counselor trainees make a series of personal judgments about actually using new or revised ideas about delivering treatment. For instance, is there an administrative push at their program for these new ideas? Will leadership support using them? Can the program afford it? How will other staff react, and can they collectively use these ideas effectively? Is it a good fit with the prevailing program philosophy about client care and needs?

This study examined counselor perceptions about training and whether they were related to its use in the following months. The general objective was to identify factors predictive of subsequent utilization. In accord with the literature reviewed by Fixsen et al. (2005), favorable attitudes toward the quality and relevance of training were found to predict reports of its later use. Specifically, engagement with the materials at the time of training, comfort with their applicability, and perceived availability of program resources were predictive of later usage. Reported barriers to trial usage showed that almost half of the counselors cited lack of time as a problem. This speaks to several issues relating to client overload, preparation time, and other work-related duties that can derail innovation adoption. These concerns also were reinforced by the written responses of the roughly one-in-five respondents who had noted “other reasons” as barriers. Some counselors, for instance, listed specific time-related reasons such as “I have too much paperwork” and “I need time to change my style and integrate new ideas.”

Additionally, redundancy of materials with similar ideas already being used was mentioned as a concern by nearly a third of those trained. These factors raise procedural questions about program readiness, strategic planning or selecting innovations appropriate to address needs, and commitments to dedicate time and energy needed for change (see Courtney, Joe, Rowan-Szal, & Simpson, this issue, and Rowan-Szal et al., this issue). In a similar vein, Saldana, Chapman, Henggeler, and Rowland (this issue) point out that caseload size was cited by program counselors as a particularly important resource-related barrier to finding time for training and implementation of innovations. They also suggest that levels of formal clinical training and experience of counselors are related to readiness for innovation applications.

Limitations of the current study include a final sample that might not have been representative of all counselors who attended the statewide conference, due to the voluntary nature of participation in the evaluation process. In addition, there was a sizeable number of participants for whom matched information was unavailable. This may have been due to participants not providing complete link code information on either the training or follow-up evaluation forms, or to counselors leaving their programs in the 6 months before the follow-up evaluation. This study suggests that the information used to define linking codes should be re-examined to find ways to improve this method of linking longitudinal data. More emphasis might also be given to instructing participants to fill in this information completely on all forms, and to clarifying how their responses will remain truly anonymous. Despite this limitation, the matched and unmatched samples did not appear to differ much with respect to demographics and the initial training evaluations. Based on the limited demographic records available, the matched sample differed from the nonmatched sample on race but not on age or gender. In examining biases in the follow-up evaluation scores, the only difference found was on relevance for the TA (Miller) workshop participants. That is, matched survey participants reported higher scores on “relevance” than did counselors who did not have a linked follow-up survey.

Furthermore, this study relied on self-report data from counselors, considered by some to be a liability when observable outcomes are not included as well. Because counselor perceptions about training and its potential applications help formulate their behavioral intentions and action plans, however, cognitive elements related to this programmatic change process need closer examination. As emphasized in the program change model used to guide this research (Simpson, 2002; Simpson & Flynn, this issue), decisions about adoption and implementation of innovations are expected to be influenced both by personal and program-level factors. Most studies in the present volume address program-level predictors of innovation change, based on aggregated staff responses. Importantly, Simpson, Joe, and Rowan-Szal (this issue) have demonstrated that program-level indicators of agency needs and resources are associated with subsequent counselor responses to training, which in turn predict client-reported therapeutic engagement differences across programs. However, the role of individual-level perceptions of counselors about innovations and efforts to implement them over time remains less clear. Findings from this study begin to establish discrete dimensions of counselor cognitions that are potentially important for monitoring and improving innovation training-to-implementation steps. Self-report measures are needed and appropriate for assessing these personal cognitive formulations.

In conclusion, there is encouraging support for the training, decision, and action (i.e., adoption) stages studied in this special volume. The present findings contribute by showing that relevant and feasible counseling innovations, coupled with a satisfactory training experience, encourage trial adoption and increase the desire of participating staff to learn more. What happens at the staff and organizational levels after this interest has been piqued likely holds the answer to successful transfer into regular practice.

Acknowledgements

The authors would like to thank Mr. Michael Duffy, Director of the Louisiana Office for Addictive Disorders, and his staff for their leadership in conducting this statewide training conference in early 2003 and for their collaborative assistance in completing the series of data collection activities between 2002 and 2004. Dr. Richard Spence (Director of the Gulf Coast Addiction Technology Training Center) and his associates at the GCATTC also provided a crucial partnership in carrying out this long-range evaluation project. Especially important was their assistance in managing portions of the data collection, and coordination with the network of treatment programs in Louisiana for sustaining agency participation. We would also like to thank the individual programs (staff and clients) in Louisiana who participated in the assessments and training.

This work was funded by the National Institute on Drug Abuse (Grant R37 DA13093). The interpretations and conclusions, however, do not necessarily represent the position of NIDA or the Department of Health and Human Services. More information (including intervention manuals and data collection instruments that can be downloaded without charge) is available on the Internet at www.ibr.tcu.edu, and electronic mail can be sent to ibr@tcu.edu.

Footnotes


1. David Mee-Lee, M.D. – “Dual Diagnosis: Clinical Dilemmas in Assessment and Treatment”

2. Scott Miller, Ph.D. – “Heart and Soul of Change: What Works in Therapy”

References

  1. Backer TE, Liberman RP, Kuehnel TG. Dissemination and adoption of innovative psychosocial interventions. Journal of Consulting and Clinical Psychology. 1986;54(1):111–118. doi: 10.1037//0022-006x.54.1.111.
  2. Brown BS. Evidence-based treatment: Why, what, where, and how. Journal of Substance Abuse Treatment. 2006;30:87–89. doi: 10.1016/j.jsat.2005.11.002.
  3. Courtney KO, Joe GW, Rowan-Szal GA, Simpson DD. Using organizational assessment as a tool for program change. Journal of Substance Abuse Treatment. this issue. doi: 10.1016/j.jsat.2006.12.024.
  4. Dansereau DF, Dees SM. Mapping training: The transfer of a cognitive technology for improving counseling. Journal of Substance Abuse Treatment. 2002;22:219–230. doi: 10.1016/s0740-5472(02)00235-0.
  5. Fixsen DL, Naoom SF, Blase KA, Friedman RM, Wallace F. Implementation research: A synthesis of the literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute; 2005. Publication #231.
  6. Gotham HJ. Diffusion of mental health and substance abuse treatments: Development, dissemination, and implementation. Clinical Psychology: Science and Practice. 2004;11(2):160–176.
  7. Hagberg S, Love C, Bryant MD, Storti SA. Evaluation of on-line learning at the Addiction Technology Transfer Center of New England. Providence, RI: Brown University, Addiction Technology Transfer Center of New England, Center for Alcohol and Addiction Studies; 2000.
  8. Kirkpatrick DL. Evaluating training programs: Evidence versus proof. Training and Development Journal. 1977;31(11):9–12.
  9. Lewis YP, Record NS, Young PA. Reaping the benefits of research: Technology transfer. Knowledge, Technology, and Policy. 1998;11(12):24–40.
  10. Liddle HA, Rowe CL, Quille TJ, Dakof GA, Mills DS, Sakran E, Biaggi H. Transporting a research-based adolescent drug treatment into practice. Journal of Substance Abuse Treatment. 2002;22(4):231–243. doi: 10.1016/s0740-5472(02)00239-8.
  11. Miller WR, Moyers TB, Arciniega L, Ernst D, Forcehimes A. Training, supervision, and quality monitoring of the COMBINE study behavioral interventions. Journal of Studies on Alcohol. 2005;(Supplement 15):188–195. doi: 10.15288/jsas.2005.s15.188.
  12. Miller WR, Yahne CE, Moyers TB, Martinez J, Pirritano M. A randomized trial of methods to help clinicians learn motivational interviewing. Journal of Consulting and Clinical Psychology. 2004;72(6):1050–1062. doi: 10.1037/0022-006X.72.6.1050.
  13. Rowan-Szal GA, Greener JM, Joe GW, Simpson DD. Assessing program needs and planning change. Journal of Substance Abuse Treatment. this issue. doi: 10.1016/j.jsat.2006.12.028.
  14. Rowan-Szal GA, Joe GW, Greener JM, Simpson DD. Assessment of the TCU Program Training Needs (PTN) Survey. Poster presented at the annual Addiction Health Services Research Conference; Santa Monica, CA; October 2005.
  15. Saldana L, Chapman JE, Henggeler SW, Rowland MD. Organizational Readiness for Change in adolescent programs: Criterion validity. Journal of Substance Abuse Treatment. this issue. doi: 10.1016/j.jsat.2006.12.029.
  16. Schneider B, Parkington JJ, Buxton VM. Employee and customer perceptions of service in banks. Administrative Science Quarterly. 1980;25:252–267.
  17. Schneider B, White SS, Paul MC. Linking service climate and customer perceptions of service quality: Test of a causal model. Journal of Applied Psychology. 1998;83(2):150–163. doi: 10.1037/0021-9010.83.2.150.
  18. Simpson DD. A conceptual framework for transferring research to practice. Journal of Substance Abuse Treatment. 2002;22(4):171–182. doi: 10.1016/s0740-5472(02)00231-3.
  19. Simpson DD. A conceptual framework for drug treatment process and outcomes. Journal of Substance Abuse Treatment. 2004;27:99–121. doi: 10.1016/j.jsat.2004.06.001.
  20. Simpson DD. A plan for planning treatment. Counselor: A Magazine for Addiction Professionals. 2006;7(4):20–28.
  21. Simpson DD, Flynn PM. Moving innovations into treatment: A state-based approach to program change. Journal of Substance Abuse Treatment. this issue. doi: 10.1016/j.jsat.2006.12.023.
  22. Simpson DD, Joe GW, Rowan-Szal GA. Linking the elements of change: Program and client responses to innovation. Journal of Substance Abuse Treatment. this issue. doi: 10.1016/j.jsat.2006.12.022.
  23. Taleff MJ. A survey of training needs of experienced certified addictions counselors. Journal of Drug Education. 1996;26(2):199–205. doi: 10.2190/DUX8-KMW9-EE7D-L5BT.
  24. Walters ST, Matson SA, Baer JS, Ziedonis DM. Effectiveness of workshop training for psychosocial addiction treatments: A systematic review. Journal of Substance Abuse Treatment. 2005;29(4):283–293. doi: 10.1016/j.jsat.2005.08.006.
