NIHPA Author Manuscripts
Author manuscript; available in PMC 2008 Sep 1.
Published in final edited form as: J Subst Abuse Treat. 2007 Apr 12;33(2):131–137. doi: 10.1016/j.jsat.2006.12.024

Using Organizational Assessment as a Tool for Program Change

Katherine Ortega Courtney 1, George W Joe 1, Grace A Rowan-Szal 1, D Dwayne Simpson 1
PMCID: PMC2001278  NIHMSID: NIHMS28750  PMID: 17433861

Abstract

Organizational functioning within substance abuse treatment organizations is important to the transfer of research innovations into practice. Programs should be performing well for new interventions to be implemented successfully. The present study examined characteristics of treatment programs that participated in an assessment and training workshop designed to improve organizational functioning. The workshop was attended by directors and clinical supervisors from 53 community-based treatment units in a single state in the Southwest. Logistic regression analysis was used to examine attributes related to program-level decisions to engage in a structured process for making organizational changes. Findings showed that programs reporting higher needs and pressures, more limited institutional resources, and poorer ratings of staff attributes and organizational climate were the most likely to engage in a change strategy. Furthermore, organizations with greater staff consensus (i.e., smaller standard deviations) on ratings of organizational climate were also more likely to engage in change.

Keywords: Organizational assessment, Organizational functioning, Organization change, Change process

1. Introduction

Organizational functioning in substance abuse treatment programs is important because of its links to program health and client engagement in the treatment process, including client and counselor rapport (e.g., Broome, Flynn, Knight, & Simpson, this issue; Greener, Joe, Simpson, Rowan-Szal, & Lehman, this issue; Lehman, Greener, & Simpson, 2002; Simpson, 2004). It is also a factor that deserves consideration by programs preparing to implement new treatment innovations in clinical practices (Simpson & Dansereau, in press; Simpson, 2002). Historically, a shortcoming in the research community has been the assumption that simply conducting treatment research and publishing it in journals will lead to widespread use by substance abuse treatment organizations (Backer, 2000). It is now clear that this is not enough. There is increasing agreement that organizational factors (e.g., stress, communication, financial pressures) may be more important in transferring research to practice than how the materials are distributed (Backer, David, & Soucy, 1995; Simpson, 2002). Thus, in order to transfer new treatment interventions and techniques more effectively, it is first important to better understand a program's functional dynamics.

Simpson (2002) has observed that key elements in program change involve training, adoption, implementation, and practice. As elaborated in the studies included in the present volume, these stages are influenced by staff perceptions of program needs and pressures, resources, staff attributes, and organizational climate. Poorly functioning organizations have less success in transferring technology. Therefore, changing the functioning of an organization is a way to increase the probability of innovation transfer.

One approach for doing so is to provide feedback on the organization's functioning to those who are in a position to improve it. Feedback information, modeled on concepts originally applied to machine systems (Hinrichs, 1996; Nadler, 1977), can be a motivating catalyst for change (Nadler, 1977) and help focus energy on solving specific problems (Born & Mathieu, 1996). Moreover, negative feedback that is viewed as accurate and unbiased can serve as a stressor that prompts a response toward resolving the problems (McGrath, 1976). A feedback process commonly used with organizations is based on staff surveys, representing a summary of employee perceptions and attitudes (Born & Mathieu, 1996; Nelson & Quick, 1994; Nicholas, 1982). Indeed, it is helpful to include comparisons or norms involving similar organizations (Nadler, 1996). In the present study, feedback concerning organizational functioning was provided to directors and clinical supervisors in the participating organizations with the expectation that they would respond by considering ways to make needed adjustments and remedy the issues raised. The Change Book (Addiction Technology Transfer Center, 2004) was used to help guide this process. Previous studies have found technology transfer using these materials to be effective (McCarty, Rieckmann, Green, Gallon, & Knudsen, 2004), and the steps presented in The Change Book appeared to be a viable strategy for improving organizational functioning. The present research focused on a method for improving organizational functioning in drug treatment programs.

With this goal in mind, a workshop was organized by a statewide association of treatment programs in the Southwest. Staff perceptions of program needs and functioning were assessed using the Texas Christian University (TCU) Organizational Readiness for Change (ORC; Lehman et al., 2002). This was followed by an invitation from the association for programs to participate in a workshop designed to discuss findings from the ORC assessment and encourage positive organizational change in areas where weaknesses were identified. Survey results (along with norms for other organizations) were presented in the context of treatment effectiveness evidence and the role of program functioning. It was anticipated that feedback would serve to motivate and to help program staff and leaders in engaging their organizations in change.

It was of primary interest to evaluate this process with respect to which programs engaged and followed through with making plans for change. There were three main hypotheses. First, it was expected that mean scores on the ORC scales representing program motivations (i.e., needs and pressures) would be positively associated with their responsiveness (Backer, 1995, 2000; Lehman et al., 2002; Simpson, 2004; Yahne & Miller, 1999). Second, it was expected that “adequacy of functioning” as indicated by the three remaining ORC domains (program resources, climate, and staff attributes) would be related to responsiveness. That is, the directors and clinical supervisors attending the workshop should act on improving their programs after viewing their program profiles, particularly in areas where their staffs rated their programs poorly (Nadler, 1977; Kraut, 1996). Third, organizations with more internally consistent staff ratings (i.e., smaller standard deviations) on the ORC climate scales were expected to be more responsive than those with a diversity of staff opinions. Specifically, small standard deviations on ORC scales indicate greater similarity and uniformity of ratings from the staff members, and suggest higher agreement (consensus) about the state of the organization (Hause, 2001; Malamut, 2002). As such, organizations with staff who were in agreement about the state of their functioning should be more likely to be responsive and engage in change.

2. Method

2.1 Procedure

This study was conducted in collaboration with a state Association of Substance Abuse Programs (ASAP) and the Gulf Coast Addiction Technology Transfer Center (GCATTC). In October 2004, the Organizational Readiness for Change (ORC) assessment was administered by GCATTC staff via the internet using PsychData (an online survey collection tool) to counseling staff of participating programs who were members of ASAP. A month later, program directors and clinical supervisors representing the participating organizations were invited to attend a 2-day "TCU Model Training - Making it Real" workshop. The goal of the workshop was to allow the participants to work with their own assessment information (feedback from the ORCs) to develop treatment quality improvement plans for their respective organizations.

On the first day of the workshop, conceptual overview presentations of the TCU Treatment Process Model (Simpson, 2004) and the TCU Program Change Model (Simpson, 2002) were given. These lectures also included information about how the ORC data were collected and analyzed. This was followed by a presentation describing The Change Book (Addiction Technology Transfer Center, 2004). Participants were then given personalized feedback consisting of ORC scores for their respective agencies, along with graphical representations of 25th and 75th percentile scores based on ORC administrations from previous studies. They were then encouraged to chart their organization’s data on these graphs to provide comparisons with other agencies.

On the second day, participants worked in groups of 7-9 people to develop quality improvement plans, using the 10 steps presented in The Change Book as a guide. The composition of the small groups varied; in most cases, all individuals from a program stayed together as part of a group. Each workgroup developed a composite list of problem areas, and a specific target was chosen to focus on in group practice exercises (e.g., stress and communication were popular topics). Next, the groups discussed possible ways to improve the problem, along with positive and negative influences on the target. At the completion of this stage, participants were asked if they wished to continue the change process and participate in a follow-up telephone interview in which they would be asked in depth about their customized change plans and goals for their organizations. Individuals representing each program that attended the workshop were asked to volunteer to engage in the follow-up process and evaluation. The decision to participate in the follow-up was used as an indicator of engagement in program change.

2.2. Participants

This study was carried out in two stages. First, the TCU Organizational Readiness for Change (ORC) assessment was completed by the staff of the substance abuse treatment programs recruited to participate in the workshop entitled “TCU Model Training - Making it Real.”

Second, a group of directors and clinical supervisors from these participating programs attended the workshop to review results of the staff assessments and to consider potential "corrective action" plans. There were 309 counselors in the participating programs who completed the ORC prior to the training workshop. These data served as the basis for workshop feedback on program functioning and were used for the analyses described below. These 309 ORC assessments were matched to the respective treatment units using address and treatment type information from ASAP. Because address or treatment type information was missing for some respondents, it was not possible to classify 25 of the ORC records according to treatment units. These cases were dropped from the analyses. Therefore, there were 284 individual pre-workshop ORC assessments representing 53 treatment units (from 24 multisite parent organizations) included in the analyses.

Average age of these 284 counselors was 47 years and 65% were women; 62% were Caucasian, 21% African American, 15% Hispanic, and 2% other. About 42% had a bachelor's degree or higher level of education. Around two-thirds were currently certified (68%) and had at least 5 years of experience in drug abuse counseling (64%). Approximately half (52%) had been in their present job for 3 years or less (with about 16% in their present job for less than six months), 15% for 3-5 years, and 28% for at least 5 years. Fifty percent of the counselors had a client load of fewer than 10 clients, while 10% had over 40 clients.

With regard to program type, over half of the treatment units represented were residential (46% inpatient programs, 8% therapeutic communities), a third were outpatient (20% intensive outpatient, 13% regular outpatient), and 12% were classified as other. Sixty-one percent of the programs served an urban area, 20% suburban, and 19% rural. About half (54%) reported being part of a larger parent organization. In terms of clients being treated, 99% of the programs included both alcohol and other drug users.

The workshop was attended by directors and clinical supervisors from the 53 programs, and they were responsible for making decisions relating to program engagement and follow-up participation in the organizational change process. Procedurally, they received personalized program feedback from the ORC, accompanied by personal consulting with TCU and GCATTC staff about interpretations. They then participated in group discussions and exercises for identifying and prioritizing issues, and planning a change process. Each program was asked to decide if there was interest in continuing the process over time, with follow-up discussions and interviews to monitor activities.

2.3. Measures

2.3.1. Organizational Readiness for Change (ORC) assessment

Organizational functioning was assessed by the TCU Organizational Readiness for Change (ORC; Lehman et al., 2002). It was designed to measure program needs and pressures, attributes of organizational leaders and staff, as well as institutional resources and climate. The rationale, scale descriptions, and psychometric properties of the ORC were previously reported in detail by Lehman et al. (2002). The ORC included 115 items representing 18 scales covering four major areas: Motivation, Resources, Staff Attributes, and Organizational Climate. Items use 5-point response categories (disagree strongly, disagree, uncertain, agree, agree strongly); scale scores are computed by reverse-scoring negatively worded items, averaging the item responses, and multiplying the mean by 10. Thus, 30 represents a neutral score; scores above 30 indicate stronger agreement, and scores below 30 indicate stronger disagreement.
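This scoring rule can be illustrated with a short sketch (a hypothetical illustration, not the official TCU scoring program; the function name and reverse-item convention are assumptions):

```python
# Illustrative ORC-style scale scoring: reverse-score flagged items,
# average the 1-5 responses, and multiply the mean by 10.
def score_orc_scale(responses, reverse_items=()):
    """responses: list of item scores 1-5; reverse_items: indices to reverse."""
    adjusted = [6 - r if i in reverse_items else r
                for i, r in enumerate(responses)]
    return 10 * sum(adjusted) / len(adjusted)

# A respondent answering "uncertain" (3) to every item scores the neutral 30.
print(score_orc_scale([3, 3, 3, 3]))  # 30.0
```

Under this rule, uniform "agree" responses (4) would yield 40, consistent with scores above 30 indicating stronger agreement.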

The Motivation scales included the dimensions of Program Needs for Improvement (8 items, coefficient alpha = .87), Immediate Training Needs (8 items, coefficient alpha = .84), and Pressures for Change (7 items, coefficient alpha = .70).

The scales in the Institutional Resources domain covered Offices (4 items, coefficient alpha = .62), Staffing (6 items, coefficient alpha = .70), Training Resources (4 items, coefficient alpha = .57), Computer Access (7 items, coefficient alpha = .60), and Electronic Communication (4 items, coefficient alpha = .69).

The dimensions measured in the Staff Attributes domain addressed Growth (5 items, coefficient alpha = .62), Efficacy (5 items, coefficient alpha = .71), Influence (6 items, coefficient alpha = .79), and Adaptability (4 items, coefficient alpha = .66). And finally, the scales of Organizational Climate included Clarity of Mission and Goals (5 items, coefficient alpha = .70), Staff Cohesiveness (6 items, coefficient alpha = .84), Staff Autonomy (5 items, coefficient alpha = .57), Openness of Communication (5 items, coefficient alpha = .80), Stress (4 items, coefficient alpha = .79), and Openness to Change (5 items, coefficient alpha = .73).

In addition to ORC mean scale scores, standard deviations for each program unit were calculated as indicators of staff consensus (or congruence) in ratings. Smaller standard deviations reflect greater agreement among staff on the ORC scales, and these measures were examined in relation to program change efforts.
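The consensus measure can be sketched as follows (a hypothetical illustration with made-up data; the function name is an assumption, and the population standard deviation is used for simplicity):

```python
# Per-unit mean and standard deviation of staff scale scores:
# a smaller SD indicates greater staff consensus on that scale.
from statistics import mean, pstdev

def unit_summaries(scores_by_unit):
    """scores_by_unit: dict mapping unit id -> list of staff scale scores."""
    return {unit: (mean(s), pstdev(s)) for unit, s in scores_by_unit.items()}

# Two hypothetical units with similar means but very different consensus.
units = {"A": [28, 29, 30, 31], "B": [15, 30, 38, 45]}
for unit, (m, sd) in unit_summaries(units).items():
    print(unit, round(m, 1), round(sd, 1))
```

Here unit A's staff agree closely (small SD) while unit B's ratings are widely dispersed, even though the two unit means are similar.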

2.3.2. Program engagement in organizational change

The measure of program engagement in the organizational change process was defined by whether the staff representatives at the workshop committed to participate in the follow-up implementation and evaluation phase. Commitment was made by the clinical leaders or directors for each treatment unit. Programs complied with the early steps of the consulting and planning activities, although the quality and thoroughness of follow-up participation were not evaluated as part of this study.

3. Results

Eleven of the 53 programs chose not to engage in and complete the program change process, while the other 42 decided to participate. Scale means and standard deviations from the ORC were examined as possible variables that would discriminate the non-engagers from those who agreed to continue the change process in their programs. Figure 1 shows mean scores on the ORC scales for both groups of programs. When compared to 25th-75th percentile norms based on previous research with over 2,000 treatment programs that completed this assessment (see www.ibr.tcu.edu), these programs on average fall in the middle 50% of the normed profiles. However, the motivation scales (i.e., for program needs and training needs) fall below the neutral score of 30 (representing lower levels of perceived needs, based on staff ratings) and approach the cutoff for the lower 25th percentile score values. With regard to Resources, the "engaged" group of programs reported noticeably lower scores than the non-participating programs on staff and training resources. They also differed on the Organizational Climate scales for communication and stress. On average, programs that decided to remain engaged in the organizational change process were lower in communication and higher in stress levels.

Figure 1. Program means and 25th-75th percentile norms for ORC scale profiles.

Logistic regressions were used to identify the variables that discriminated between these groups. Initially, this involved using each pre-workshop ORC scale treatment unit mean and each standard deviation in a separate regression. Each mean was weighted by the number of individuals in the agency who completed the ORC. The odds ratios corresponding to the logistic regression weights (b-weights) are also presented because they represent the effect sizes of the weights and aid in interpreting the significance of the findings.
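The odds-ratio transformation itself is simple: the odds ratio for a one-unit increase in a predictor is the exponential of its logistic b-weight. A minimal sketch, using the Program Needs coefficient reported below (variable names are illustrative):

```python
# Converting a logistic regression b-weight into an odds ratio and a
# percent change in the odds per one-unit increase in the predictor.
import math

def odds_ratio(b):
    """Odds ratio for a one-unit increase in the predictor."""
    return math.exp(b)

b_program_needs = 0.10  # b-weight for Program Needs (Table 1)
or_val = odds_ratio(b_program_needs)
print(round(or_val, 2))              # 1.11
print(f"{or_val - 1:.0%} increase")  # ~11% increase in odds per unit
```

This matches the interpretation given in the text: each unit increase on program needs raises the odds of continuing in the change process by about 11%.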

The results for the separate logistic regressions predicting responsiveness from each ORC scale mean are shown in Table 1. The first hypothesis was that staff perceptions about levels of needs and pressures should be positively related to program discussions about continuing in the change process. Support for this hypothesis was found in the results from the program needs scale analysis (χ2 = 10.90, p = .001). The positive weight and odds ratio (OR) suggested that for each unit increase on program needs the odds of continuing in the change process increased by 11%. This suggests that programs whose staffs were viewed as having more needs were more likely to continue with the organization change process.

Table 1.

Results of Logistic Regressions Predicting Program Engagement in Change Process from ORC Means

ORC Scale                Intercept   B weight   Odds ratio   p
Motivation
 Program Needs             -1.69       0.10       1.11       .001
Institutional Resources
 Staffing                   9.35      -0.25       0.78       < .0001
 Training                   3.20      -0.06       0.94       .039
 Computer Access            5.27      -0.11       0.89       .003
 E-communication            6.64      -0.14       0.87       .001
Staff Attributes
 Growth                     6.39      -0.14       0.87       .003
 Influence                  7.12      -0.16       0.85       .002
Climate
 Cohesion                   3.84      -0.08       0.93       .01
 Communication              7.48      -0.19       0.83       < .0001
 Stress                    -9.30       0.34       1.41       < .0001
 Openness to Change         9.07      -0.23       0.80       < .0001

The second hypothesis was that the programs more likely to continue in the change process would be those with ORC profiles on institutional resources, staff attributes, and organizational climate that suggested they were functioning less adequately. As shown in Table 1, there was statistical support for this hypothesis from each domain, but it appeared strongest for organizational climate. Four individual scales from this domain were significant, including cohesion (χ2 = 5.97, p = 0.01), communication (χ2 = 25.37, p < 0.0001), openness to change (χ2 = 22.07, p < 0.0001), and stress (χ2 = 73.37, p < 0.0001). Lower scores on cohesion, communication, and openness to change were associated with a higher likelihood of further work in making improvements in the area(s) identified in the workshop, while higher levels of stress increased those odds. Stress had the strongest effect size of any of the analyses: for every unit of increase on the stress scale, the odds increased by 41%.

Results from the analyses for the resources domain provided further support for the second hypothesis. Programs whose staffs perceived fewer resources regarding training (χ2 = 4.26, p = .039), staffing (χ2 = 41.54, p < 0.0001), computers (χ2 = 8.60, p = 0.003), and e-communication (χ2 = 10.02, p = 0.001) were more likely to continue in the organization change process. Further support for this second hypothesis also was found in the staff attributes domain in growth (χ2 = 8.51, p = 0.003) and influence (χ2 = 9.06, p = 0.002). Lower scores on growth and influence increased the odds of continuing in the process.

The third hypothesis was that higher staff consensus (i.e., smaller standard deviations) on ORC scale scores would be associated with higher engagement in the process of organizational change. As shown in Table 2, the results of the analyses of the Organizational Climate domain strongly supported this hypothesis. Four program scale standard deviations were negatively associated with continuing in working at making improvements; these included cohesion (χ2 = 18.67, p < .0001), autonomy (χ2 = 3.95, p = 0.05), stress (χ2 = 5.95, p = 0.01), and openness to change (χ2 = 7.40, p = 0.006). Thus, more staff consensus was related to increased likelihood that their program continued in the change process.

Table 2.

Results of Logistic Regressions Predicting Program Engagement in Change Process from ORC Standard Deviations

ORC Scale                  Intercept   B weight   Odds ratio   p
Motivation
 SD Program Needs            -0.88       0.27       1.31       < .0001
 SD Pressures for Change      4.33      -0.56       0.57       < .0001
Institutional Resources
 SD Staffing                  2.08      -0.15       0.86       .01
Staff Attributes
 SD Influence                -0.25       0.24       1.27       .004
Climate
 SD Cohesion                  2.75      -0.21       0.81       < .0001
 SD Autonomy                  1.74      -0.12       0.89       .05
 SD Stress                    2.33      -0.15       0.87       .01
 SD Openness to Change        1.77      -0.10       0.90       .006

Results for the other ORC domains were less consistent with respect to this third hypothesis. In the Motivation domain, the program standard deviation for program needs was positively related to continuing (χ2 = 16.60, p < 0.0001), while the program standard deviation for pressures for change was negatively related (χ2 = 36.42, p < 0.0001). For this domain, only the result for pressures for change supported this third hypothesis. For the Institutional Resources domain, only the program standard deviation for staffing resources was significant, with smaller standard deviations on this scale associated with increased odds of continuing with the change process (χ2 = 6.23, p = 0.01). For the Staff Attributes domain, there was no support for the hypothesis. The only scale with a program standard deviation that was significant in this domain was for influence, but it was the larger standard deviations on this scale that related to the higher likelihood of continuing the change process (χ2 = 8.27, p = 0.004).

4. Discussion

The current study examined a workshop and feedback process designed to initiate improvement in organizational functioning for treatment programs. Counseling staff ratings of organizational climate, program motivation (needs and pressures), resources, and their perceptions of professional opportunities and abilities to perform their work were given to directors and clinical supervisors in a workshop to identify and prioritize issues and plan for change. The workshop was structured to focus on organizational functioning and its relationship with client outcomes, and it provided a means for initiating the organizational change process. It was anticipated that providing percentile reference points showing where their programs stood would serve as a source of motivation to the attending program directors and clinical supervisors. Indeed, the results of the current research supported the expectation that most programs under these conditions would be motivated to change, and those most likely to opt to continue with the process to improve their functioning were those that appeared to be functioning less well.

The first of three hypotheses was that more needs and pressures on the programs would be a motivating force for change. Results supported this hypothesis and findings were consistent with literature on the importance of recognizing a need for improvement in the change process (e.g., Backer, 1995, 2000; Lehman et al., 2002).

The second hypothesis concerned the effects of staff ratings on organizational climate, resources, and their professional opportunities and abilities. There was statistical support for the expectation that poorer ratings would be related to decisions by program leaders to work on these issues. Specifically, those programs whose directors and clinical supervisors were made aware of their lower scores in the ORC domains of institutional resources, staff attributes, and organizational climate were more likely to want to participate further in the change process. The two ORC profile elements demonstrating the strongest effects were stress and communication. Other significant scales were cohesion and openness to change from the organizational climate domain, staffing resources, computers, and e-communications from the institutional resources domain, and growth and influence from the staff attributes domain.

These findings are consistent with the literature indicating that feedback on inadequate functioning can motivate programs to make changes (Nadler, 1977). They also correspond with the stress paradigm proposed by McGrath (1976), in which a stress situation is created (in this case, by feedback showing inadequacy of functioning) and a decision is then made about how to resolve it. In this instance, one such response is to engage in change.

Moreover, this also is indicative of program readiness and potential for implementing new training and interventions into clinical practices. Those more likely to be ready to perform technology transfer have higher levels of organizational functioning and are less in need of organization change. Correspondingly, these programs are likely to be functioning more effectively with respect to their clients’ engagement and participation in treatment (Broome et al., this issue; Greener et al., this issue).

The third hypothesis, involving staff consensus or agreement about their ratings within a program, was supported by the organizational climate ratings, but less so in the other domains. That is, standard deviations on four of the climate scales (cohesion, autonomy, stress, and openness to change) were negatively related to responsiveness in continuing with change activities. As expected, more staff consensus about climate issues was related to the directors and clinical supervisors of those programs being more likely to continue in the change process. Smaller standard deviations, however, do not necessarily signify that the organization is in a good or bad condition. For example, if the staff generally agreed that stress is high, this might reflect recognition of a problem and a foundation of readiness within the organization to make improvements. If, on the other hand, there is a large standard deviation on stress reflecting a wide array of beliefs about the state of stress in the organization, there may not be agreement about the situation and responsiveness may be less likely. These findings are congruent with previous research showing that level of staff agreement on organizational functioning measures is predictive of organizational effectiveness (Hause, 2001). Based on the results of the current study, it appears that the use of standard deviations, or level of staff consensus on ORC climate scales, is a viable area to examine when assessing organizational functioning and change.

A limitation of the current study is the outcome variable. It basically represents an engagement or leadership decision by each participating program, and the decision-making process may have been different for each program. Namely, the decision to participate may have been affected by consultation of staff in some cases but not in others. It also was likely to have been affected by the satisfaction of participants with the workshop and their acceptance that the areas identified by the ORC as potential “problem areas” were valid. How much of the variance of the outcome decision was based on various aspects of the feedback, including the types and number of issues raised, was not examined. In addition, there may have been other administrative or procedural reasons that affected the ability of some programs to continue participation in the change process.

Another limitation is sample size. While the findings highlighted some predictors of participation, the power to test other hypotheses was low. Studies with larger samples might allow more adequate examination of interactions between standard deviations and mean scores on the ORC scales. However, establishing larger databases with longitudinal and cross-linked assessments involving programs, counselors, clients, and training is challenging (see Simpson, Joe, & Rowan-Szal, this issue). Because these data were collected electronically, no information was available on the total population to be surveyed in the participating programs. However, because the workshop was planned and organized by the professional association to which participating programs belonged, it is expected that representation was reasonably good.

The study sample might appropriately be regarded as “early adopters” as referred to in the organizational change literature (Rogers, 1995; Simpson, 2002). That is, all of the programs displayed initiative and a desire to improve by participating in the workshop. Additionally, based on their ORC profiles, most were “functioning reasonably well overall” in comparison to ORC scale score norms established on the basis of previous research with this assessment (see www.ibr.tcu.edu). For instance, only five had any of their organizational climate scores fall in the negative range of ratings (i.e., below 30), and another three had comparatively high staff stress scores (i.e., 35). Therefore, the need for change in these organizations may not have been as great as the need in other organizations that did not participate. In future studies, programs that may not be highly motivated to change might be an important comparison group to include.

This study suggests that staff assessments of program needs and functioning can influence decisions by program leadership to make changes. It also suggests some types of information appear to be more significant than others. If workshop organizers can identify organizations most likely to examine the change process seriously and continue through the change process, then those who are unlikely to do so could receive more strategic encouragement and attention before and during the workshop. The process of organizational change is not an easy one, but it is sometimes necessary if an organization is to remain successful in today’s world of competitive health care services. This study sheds some light on our understanding of the change process in substance abuse treatment organizations. Identifying organizations that are likely to be responsive is an important first step.

Acknowledgements

The authors would like to thank Cynthia Humphry, the Association of Substance Abuse Programs (ASAP), and the Gulf Coast Addiction Technology Transfer Center (GCATTC) for their assistance with recruitment and training. We would also like to thank the individual programs (staff and clients) who participated in the assessments and training in the DATAR Project.

This work was funded by the National Institute on Drug Abuse (Grant R37 DA13093). The interpretations and conclusions, however, do not necessarily represent the position of NIDA or the Department of Health and Human Services. More information (including intervention manuals and data collection instruments that can be downloaded without charge) is available on the Internet at www.ibr.tcu.edu, and electronic mail can be sent to ibr@tcu.edu.


References

  1. Addiction Technology Transfer Centers. The Change Book: A blueprint for technology transfer. ATTC National Office; Kansas City, MO: 2004.
  2. Backer TE. Assessing and enhancing readiness for change: Implications for technology transfer. In: Backer TE, David SL, Soucy G, editors. Reviewing the behavioral science knowledge base on technology transfer (NIDA Research Monograph 155, NIH Publication No. 95-4035). National Institute on Drug Abuse; Rockville, MD: 1995.
  3. Backer TE. The failure of success: Effective substance abuse prevention programs. Journal of Community Psychology. 2000;28(3):363–373.
  4. Backer TE, David SL, Soucy G. Reviewing the behavioral science knowledge base on technology transfer. In: Backer TE, David SL, Soucy G, editors. Reviewing the behavioral science knowledge base on technology transfer (NIDA Research Monograph 155, NIH Publication No. 95-4035). National Institute on Drug Abuse; Rockville, MD: 1995.
  5. Born DH, Mathieu JE. Differential effects of survey-guided feedback: The rich get richer and the poor get poorer. Group & Organization Management. 1996;21(4):388–403.
  6. Broome KM, Flynn PM, Knight DK, Simpson DD. Program structure, staff perceptions, and client engagement in treatment. Journal of Substance Abuse Treatment. (this issue). doi: 10.1016/j.jsat.2006.12.030.
  7. Greener JM, Joe GW, Simpson DD, Rowan-Szal GA, Lehman WEK. The influence of organizational functioning on client engagement in treatment. Journal of Substance Abuse Treatment. (this issue). doi: 10.1016/j.jsat.2006.12.025.
  8. Hause OR Jr. Relationships between organizational culture strength and organizational effectiveness in an electrical utility company (Doctoral dissertation, University of Georgia, 2001). Dissertation Abstracts International. 2001;61(11B):6172.
  9. Hinrichs JR. Feedback, action planning, and follow-through. In: Kraut AI, editor. Organizational surveys: Tools for assessment and change. Jossey-Bass; San Francisco: 1996.
  10. Kraut AI. Organizational surveys: Tools for assessment and change. Jossey-Bass; San Francisco: 1996.
  11. Lehman WEK, Greener JM, Simpson DD. Assessing organizational readiness for change. Journal of Substance Abuse Treatment. 2002;22(4):197–209. doi: 10.1016/s0740-5472(02)00233-7.
  12. Malamut AB. Socialization and perceptual agreement: Testing a bottom-up emergence model of organizational climate formation (Doctoral dissertation, The George Washington University, 2002). Dissertation Abstracts International. 2002;63(3B):1598.
  13. McCarty D, Rieckmann T, Green C, Gallon S, Knudsen J. Training rural practitioners to use buprenorphine: Using The Change Book to facilitate technology transfer. Journal of Substance Abuse Treatment. 2004;26(3):203–208. doi: 10.1016/S0740-5472(03)00247-2.
  14. McGrath JE. Stress and behavior in organizations. In: Dunnette MD, editor. Handbook of industrial and organizational psychology. Rand McNally College Publishing Company; Chicago: 1976. pp. 1351–1395.
  15. Nadler DA. Feedback and organization development: Using data-based methods. Addison-Wesley Publishing Company; Reading, MA: 1977.
  16. Nadler DA. Setting expectations and reporting results: Conversations with top management. In: Kraut AI, editor. Organizational surveys: Tools for assessment and change. Jossey-Bass; San Francisco: 1996. pp. 177–203.
  17. Nelson DL, Quick JC. Organizational behavior: Foundations, realities and challenges. West Publishing Company; St. Paul, MN: 1994.
  18. Nicholas JM. The comparative impact of organization development interventions on hard criteria measures. Academy of Management Review. 1982;7(4):531–542.
  19. Rogers EM. Diffusion of innovations. 4th ed. The Free Press; New York: 1995.
  20. Simpson DD. A conceptual framework for transferring research to practice. Journal of Substance Abuse Treatment. 2002;22(4):171–182. doi: 10.1016/s0740-5472(02)00231-3.
  21. Simpson DD. A conceptual framework for drug treatment process and outcomes: Applications for improving treatment effectiveness. Journal of Substance Abuse Treatment. 2004;27(2):99–121. doi: 10.1016/j.jsat.2004.06.001.
  22. Simpson DD, Dansereau DF. Assessing organizational functioning as a step toward innovation. NIDA Science & Practice Perspectives. (in press).
  23. Simpson DD, Joe GW, Rowan-Szal GA. Linking the elements of change: Program and client responses to innovation. Journal of Substance Abuse Treatment. (this issue). doi: 10.1016/j.jsat.2006.12.022.
  24. Yahne CE, Miller WR. Enhancing motivation for treatment and change. In: McCrady BS, Epstein EE, editors. Addictions: A comprehensive guidebook. Oxford University Press; New York: 1999. pp. 235–249.