Author manuscript; available in PMC: 2014 Jan 1.
Published in final edited form as: J Dual Diagn. 2013 May 3;9(2):165–170. doi: 10.1080/15504263.2013.779051

Measuring Organizational Capacity to Treat Co-Occurring Psychiatric and Substance Use Disorders

Gary R Bond 1, Mark P McGovern 1
PMCID: PMC3780454  NIHMSID: NIHMS463555  PMID: 24072988

Behavioral health program leaders, practitioners, policy makers, and researchers have a keen interest in understanding how best to implement and sustain evidence-based practices. Among the most important determinants of successful implementation are organizational factors, such as an organization’s structure, climate, resources, and leadership. Yet despite the recognition that organizational characteristics matter, the field remains largely in a prescientific stage of development. For example, experienced leaders believe they know, by reputation, which service agencies in their state or region are the most competent, and further, which ones are most competent in implementing innovative practices. But how accurate are these beliefs? Are reputational ratings predictive of success in implementing a new program? Over a half-century ago, Meehl (1954) showed that clinical judgment was often unreliable in predicting client outcome: in nearly every study, actuarial calculation based on objective indicators was a better predictor of outcome than clinical prediction. Thus global clinical judgments, whether about predicting improvement in client outcomes or an organization’s readiness to change, are poorer guides to action than are objective scales measuring discrete aspects of performance. But the development and validation of scales require hard work, and much has yet to be done at the organizational level in integrated treatment.

The impetus for this special section in the Journal of Dual Diagnosis on the measurement of organizational capacity for integrated treatment is two-fold. First, the development of a cumulative science on the implementation of evidence-based services for people with co-occurring disorders requires psychometrically valid measures that can be used to assess implementation and outcomes. Second, these measures are likewise important for quality improvement purposes. Systematic collection and review of key indicators can promote the implementation and long-term sustainability of evidence-based services. The need for practical implementation measures is underscored by the gap between routine access to effective integrated treatment and the organizational capacity of service providers to deliver it (McGovern, Lambert-Harris, Gotham, Claus, & Xie, 2012). In a study using a standardized measure with over 200 community providers in the United States, only 18% of addiction treatment programs and 9% of mental health programs met criteria for dual diagnosis capable or integrated treatment services. To close this gap, service providers need practical implementation guidelines, as well as a set of practice-specific instruments and checklists to make the process concrete (McGovern, McHugo, Drake, Bond, & Merrens, 2013). Although theoretical writings sometimes imply a bright line between scientific research and quality improvement, these two activities often overlap, as the papers in this special section demonstrate. The comments in this introduction refer equally to both enterprises.

Among the various types of organizational measures in common use, some assess fidelity to an evidence-based service and are designed to evaluate processes at the team level. Some measure adherence to guidelines derived from research evidence and expert consensus. Others measure an organization’s readiness and capacity to adopt evidence-based therapies.

Organizational measures may be particularly useful in behavioral health. Both specialty mental health and addiction treatment have a fluid workforce, remarkable for high rates of turnover within the first two years of employment (Garner, Hunter, Modisette, Ihnes, & Godley, 2012; Knudsen, Abraham, Roman, & Studts, 2011). Several studies on translating evidence-based therapies into community settings found that training front-line clinicians was ultimately ineffective because of the instability of the workforce, as well as the limited authority of clinicians to change a program’s day-to-day procedures (Manuel, Hagedorn, & Finney, 2011). In contrast, although behavioral health organizations are also fluid and dynamic, they are far more likely than individual clinicians to still be in place after two years. Most implementation frameworks hypothesize that the “inner setting,” the organization in which implementation takes place, is most associated with success (Damschroder et al., 2009). Thus, organizational measures are useful for their potential durability and validity, both in the present and for the future.

Systems of care are composed of sets of organizations, so organizational measures can be used in aggregate to help systems and policymakers understand patterns and practice variation and subsequently offer technical assistance to under-performing sites (Rapp, Goscha, & Carlson, 2010). Similarly, if a system has variation in capacity across organizations, it could use the data to understand the determinants of that variation and reallocate resources. For example, it might be that the more capable programs are in urban areas, or are affiliated with hospitals or academic medical centers. Would policymakers want all enhanced services in one place or region, or would there be a rationale for equivalent distribution? Much as with Level I trauma centers, system administrators may want an even distribution of integrated services based on population, access, region, or other factors. In some systems, organizations at higher levels of verified capability are eligible for enhanced reimbursement rates for services.

Finally, individuals and families (i.e., consumers of services) can use organizational measures of capacity, especially of integrated treatment capability, to make informed choices about organizations or programs where they seek care. At present, reliable and valid information for consumers is unavailable, and essentially a caveat emptor marketplace prevails.

This special section of the Journal of Dual Diagnosis includes three papers providing a sampling of current work on the development and psychometric validation of organizational measures. These papers examine three tools. The Dual Diagnosis Capability in Addiction Treatment (DDCAT) Index (McGovern, Matzkin, & Giard, 2007) was developed for addiction treatment programs, while the Dual Diagnosis Capability in Mental Health Treatment (DDCMHT) Index (Gotham, Claus, Selig, & Homer, 2010) and the Tool for Measurement of Assertive Community Treatment (TMACT) (Monroe-DeVita, Teague, & Moser, 2011) were developed for mental health treatment programs. The first two papers are descriptive studies with large samples. One examines associations in a statewide sample between integrated treatment capability and duration in treatment (Chaple, Sacks, Melnick, McKendrick, & Brandau, 2013). This study illustrates research examining the relationship between the measure and an outcome indicator. The second article examines associations in a county sample between program funding source and integrated treatment capability (Padwa, Larkins, Crevecoeur-MacPhail, & Grella, 2013). This paper illustrates research on practice variation, drawing on a database for a large system composed of different types of programs. Together, the two studies add to the literature documenting the predictors and impact of integrated treatment capacity, and in so doing illustrate the practical utility of the measures. The third paper in this section describes a scale in an early stage of development (Moser, Monroe-DeVita, & Teague, 2013). This paper gives the rationale for incorporating systematic measurement of substance abuse treatment in a recently published fidelity scale that is itself an adaptation of a widely used scale. The TMACT assesses fidelity to a specific type of mental health treatment team. Like the DDCAT and DDCMHT, the TMACT assesses the capacity to deliver appropriate substance abuse services.

A limitation of all three instruments is that they do not assess the quality of treatment services, policies, or workforce, but rather the quantity, presence, and adherence to a benchmark. These scales thus differ from psychotherapy rating scales designed to measure therapist competence, which pertains to the quality of treatment. Like psychotherapy rating scales, however, each of the measures in the studies described here is based on observational rather than self-report data. Each involves site visits by independent assessors and therefore a substantial commitment to the assessment process. In a later section, we provide some data suggesting that observational assessments have greater reliability and validity, and therefore utility, than treatment provider self-assessments or surveys. An investment in independent fidelity assessments and other forms of technical assistance may be a prerequisite for achieving initial high fidelity to an organizational practice or benchmark and for sustaining these services over time (Bond et al., 2012; Bond, Drake, McHugo, Rapp, & Whitley, 2009).

As a counterpoint to the preceding papers, the final paper in the special section is neither devoted to a specific measure nor does it examine measurement issues per se. Instead, Sylvain and Lamothe (2013) challenge conventional assumptions about the critical role of quantitative measures. They argue that the current preoccupation with fidelity scales and other quantitative methods in implementation research may be impeding our progress in understanding the dynamic implementation process as it unfolds in real-world practice. After reviewing 14 studies examining the implementation of programs serving persons with dual diagnoses, the authors conclude that the stages of implementation are not well understood because of the static prediction models used in most studies. As an alternative, they advocate idiographic qualitative methods for evaluating the implementation process, in which individual programs are studied intensively over time to identify the salience of different factors at different stages of implementation.

In the sections which follow, we consider several issues pertinent to the assessment of organizational capacity to serve persons with co-occurring disorders: organizational measures and implementation research, the scientific base of organizational measures, generic versus specific measures of organizations, and balancing the costs and benefits of independent objective organizational assessment versus self-report. We conclude with five recommendations for further research and development in this field.

Organizational Measures and Implementation Research

Implementation research in behavioral health care is in its infancy (McGovern, Saunders, & Kim, 2013). The field has neither adequately described the stages of implementation and sustainability, nor developed a consensus battery of standardized implementation instruments (Schell et al., 2013). It is timely to appraise how far we have come and where we go from here. A discussion about measuring organizational capacity of services for people with co-occurring disorders leads inevitably to the broader literature on implementation of evidence-based practices in mental health and addiction services.

Within the implementation literature, organizational factors are conceptualized both as predictors of successful implementation and as outcomes of the implementation process. In the first instance, organizational factors are not components of the treatment program, but are necessary conditions for implementation. Many different organizational constructs, such as organizational readiness (Saldana, Chapman, Henggeler, & Rowland, 2007), absorptive capacity (Maharaj, 2010), organizational barriers and facilitators (Torrey, Bond, McHugo, & Swain, 2012), organizational culture (Damschroder & Hagedorn, 2011), and structural characteristics of the organization (Gotham, Claus, Selig, & Homer, 2010), have been found to predict successful implementation.

Other organizational measures are “implementation outcomes” (Proctor et al., 2011). Organizational capacity for treating co-occurring psychiatric and substance use disorders is an implementation outcome in that effective services require coordination and commitment from an entire organization, not just a single practitioner or team within an agency. Thus the DDCAT is a fidelity scale and is classified as an implementation outcome in the Proctor et al. (2011) taxonomy.

Not surprisingly, the development of implementation measures is as much in its infancy as the implementation field itself. Progress in implementation research is not possible until we have adequate measures. Among the many complex and vexing issues impeding progress in measurement development, we briefly discuss three salient ones.

The Scientific Quality of Organizational Measures in Behavioral Health

A cursory review of the implementation literature shows no shortage of measures. Quantity does not imply quality, however. An ambitious effort in progress to develop a repository of current implementation measures has documented the explosion of new scales and measures in the implementation research field (www.seattleimplementation.org). The repository organizers have documented the proliferation of hundreds of measures, over 40% of which are homegrown instruments, which the organizers characterize as “…developed in haste without systematically using theory, not engaging in the necessary steps of appropriate instrument development…” (Lewis, Martinez, & Comtois, 2012, p. 24). Similarly, a review of organizational factors affecting implementation of evidence-based practices found that most organizational measures were developed for a single study and not used thereafter (Emmons, Weiner, Fernandez, & Tu, 2012). Clearly, progress has been impeded by the failure to develop, refine, and extend the adoption of promising instruments.

Generic Versus Specific Organizational Measures

In contrast to homegrown scales developed expressly for a specific project, a number of general-purpose measures are also popular in implementation research. These are instruments designed for administration in any setting, without regard to the specific practice being implemented. The appeal of these instruments is obvious: They are ready to go, they appear straightforward, and their brevity minimizes the burden on respondents and researchers alike.

One popular instrument is the Organizational Readiness for Change (ORC) assessment (Simpson, 2002). Despite its grounding in theory, an extensive supporting literature, and simplicity of administration, the ORC lacks specificity to the evidence-based practice being implemented. Consider this ORC item: “Learning and using new procedures are easy for you.” The item is so broad, and so susceptible to social desirability bias, as to cast doubt on its utility for implementation research.

Another well-known attitude scale used in implementation research is the Evidence-Based Practice Attitude Scale (EBPAS) (Aarons, 2004; Aarons, Cafri, Lugo, & Sawitzky, 2012). It assesses the general acceptability of evidence-based practices with a checklist completed by practitioners and program leaders. While the EBPAS is simple to administer and intuitively appealing, it is limited to measuring global reactions to the concept of implementing an evidence-based practice, without pinpointing the specific features that clinicians find objectionable (Borntrager, Chorpita, Higa-McMillan, & Weisz, 2009). Like other self-report attitude scales, evidence for the validity of the EBPAS derives primarily from associations with other self-report measures (Aarons et al., 2012) and not from hard indicators of implementation outcomes.

An implicit assumption of inventories that ask program leaders and practitioners to make judgments about their organization’s capacity for change or to identify barriers and strategies to change is that respondents have the requisite knowledge to make these judgments. But what if leaders misjudge the organizational response to the implementation of a project? Prior to implementing a new practice, attitudes are based on preconceptions and not on direct experience. Thus, one study found that prior to adoption of new practices, program leaders systematically underestimated the importance of funding and staffing barriers that loomed large once implementation was under way (Seffrin, Panzano, & Roth, 2008). Another study found no evidence that staff attitudes during the first year of implementation predicted successful implementation of the practice (Torrey, Bond, McHugo, & Swain, 2012). As repeatedly documented in the literature, attitudinal measures are poor predictors of actions taken by individuals or groups, especially when attitudes are not based on direct experience.

Objective Assessment by Independent Raters Versus Self-Assessment

While implementation research has relied primarily on self-report, another important trend has been the use of independent raters to obtain objective ratings of implementation processes and outcomes. The use of independent raters has been most prominent in fidelity measurement (McHugo et al., 2007). Assessing fidelity in this fashion is labor intensive, requiring site visits by trained fidelity assessors not employed by the provider agency. Recently, some researchers have proposed that self-assessment procedures could substitute for independent assessments under some circumstances (McGrew, White, Stull, & Wright-Berryman, 2013). While we understand the motivation for this short-cut approach, self-report bias is a serious danger. For example, in the best-known study comparing DDCAT ratings obtained from self-assessment with those obtained from independent raters, Lee and Cameron (2009) found that self-ratings were 30% to 40% higher than independent ratings for 13 addiction treatment programs in Australia. Even after two days of training in using the DDCAT measure, agency self-assessment scores remained significantly higher.

Another study also documents the discrepancy between provider survey responses and independently obtained systematic data (McGovern & Giard, 2007). Self-report data indicated that well over 75% of organizations offered integrated treatment and provided services to persons with co-occurring disorders. Standardized data obtained via site visits, in contrast, yielded the inverse ratio: no more than 25% of addiction or mental health programs met criteria for dual diagnosis capable services. The 25% finding corresponds more closely to data obtained from surveys of community members or consumers of services with co-occurring disorders (SAMHSA, 2010). In these SAMHSA surveys, less than 10% reported receiving simultaneous treatment for both their mental health and substance use disorders. Estimates of whether the “simultaneous” treatment was delivered by the same program, by the same clinicians, or using an integrated approach are not available, but one would assume this number would be much lower.

Our broader concern with the endorsement of any shift from independent assessment to self-assessment of organizational capacity, program fidelity, or other implementation outcomes is that such short-cuts readily appeal to state administrators and program leaders in under-resourced service systems. Studies such as those by McGrew et al. (2013) might be used as justification for wholesale adoption of self-assessment as an expedient alternative to independent assessments. Most worrisome are self-report assessments conducted by users who have no direct experience with the assessment procedures and further have a poor understanding of the rationale and underlying theory. Unfortunately, misapplication of implementation scales by unqualified users is ubiquitous. The research literature is filled with evaluations of purportedly “high-fidelity” programs that bear little resemblance to the original models. Inaccurate self-labeling of programs was widespread decades ago before dissemination of fidelity scales, and, unfortunately, this remains true today (Michie, Fixsen, Grimshaw, & Eccles, 2009).

An interesting hybrid between using independent assessors from outside the agency and self-assessment is the establishment of an independent quality improvement team within an agency to evaluate its individual treatment teams (Davis et al., 2012). The degree to which an agency-employed quality improvement team can maintain objectivity depends on several factors, including the assessment team’s autonomy, the training and supervision it receives, and the intended use of the data (e.g., whether ratings are used to determine eligibility for higher reimbursement rates).

Conclusion

The four papers included in this special section represent a glimpse into a huge, ever-burgeoning literature, with its cacophony of promising instruments as well as untried instruments untethered to theory and often orphaned after an initial study. The three instruments showcased here have avoided most of the pitfalls outlined above. All three are grounded in theory and in prior research on the team and organizational models of the services they measure. All prescribe administration by independent raters who are well trained in the use of the instruments, as illustrated in the papers in this section. A further strength of all three measures is that each has been examined in multiple studies and is widely used in quality improvement efforts. For these reasons we included these scales in a list of recommended measures in an implementation handbook for practitioners and program leaders (McGovern, McHugo, Drake, Bond, & Merrens, 2013).

Nevertheless, far more psychometric validation is needed on all three instruments. For example, it would be useful to have normative benchmarks of successful implementation, which could then be used for comparison across studies. Benchmarks for integrated treatment capability have not yet achieved universally agreed-upon standards: readers will note that the two papers in this section examining the DDCAT (Chaple, Sacks, Melnick, McKendrick, & Brandau, 2013; Padwa, Larkins, Crevecoeur-MacPhail, & Grella, 2013) used different quantitative criteria for integrated capacity, inhibiting comparison between the studies. These criteria were chosen for statistical convenience, based on the distribution of scores in each sample. Using the categories described in the DDCAT manual (Addiction Only Services or AOS, Dual Diagnosis Capable or DDC, and Dual Diagnosis Enhanced or DDE) would allow greater comparability across studies (Giard et al., 2011).
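
As a purely illustrative sketch of what a shared categorical benchmark might look like in a cumulative database, the Python snippet below maps per-dimension DDCAT scores to the AOS/DDC/DDE categories. The dimension names follow the DDCAT toolkit, but the numeric cut-points are hypothetical placeholders for demonstration only, not the manual's scoring rules; the actual decision criteria are given in Giard et al. (2011).

```python
# Illustrative only: classify a program's DDCAT dimension scores (1-5 scale)
# into the manual's categories. Thresholds below are hypothetical assumptions,
# NOT the DDCAT manual's official decision rules (see Giard et al., 2011).

from statistics import mean

DDCAT_DIMENSIONS = [
    "Program Structure",
    "Program Milieu",
    "Clinical Process: Assessment",
    "Clinical Process: Treatment",
    "Continuity of Care",
    "Staffing",
    "Training",
]

def classify_program(dimension_scores: dict[str, float]) -> str:
    """Map per-dimension scores to a capability category (illustrative cut-points)."""
    scores = [dimension_scores[d] for d in DDCAT_DIMENSIONS]
    overall = mean(scores)
    if overall >= 4.5 and min(scores) >= 4.0:   # hypothetical DDE cut-point
        return "Dual Diagnosis Enhanced (DDE)"
    if overall >= 3.0 and min(scores) >= 2.0:   # hypothetical DDC cut-point
        return "Dual Diagnosis Capable (DDC)"
    return "Addiction Only Services (AOS)"

# Example: studies reporting category counts, rather than ad hoc numeric
# cut-offs chosen per sample, could be compared directly.
example_program = {d: 3.4 for d in DDCAT_DIMENSIONS}
print(classify_program(example_program))   # -> Dual Diagnosis Capable (DDC)
```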

Sylvain and Lamothe’s (2013) critical review of the integrated treatment implementation literature is an important reminder that quantitative methods have their limitations. Our reading of their provocative paper is that it is not a dismissal of quantitative efforts such as those in the preceding papers, but rather a recommendation for a complementary approach. Both kinds of research are needed.

We conclude with five recommendations:

  1. In future implementation research, include the DDCAT, the DDCMHT, and the TMACT as part of a core battery of instruments, when suitable to the setting. For researchers, policymakers, or those involved in program development around the integration of behavioral health care into routine medical settings, the counterpart to the DDCAT and DDCMHT, the Dual Diagnosis Capability in Health Care Settings (DDCHCS) index, is an emerging measure (McGovern, Urada, Lambert-Harris, Sullivan, & Mazade, 2012). In making this recommendation, we join many others calling for the development of a consensus battery of measures. These instruments represent a start.

  2. Use these instruments without modification, as indicated in their manuals. The instrument developers have previously made major refinements to their scales, and certainly all three measures will continue to warrant improvements. But wholesale variations by other users undermine interpretation of findings in individual studies (Bond, 2007), prevent direct comparisons to benchmarks, and inhibit the growth of a cumulative science. Scale revisions are inevitable and necessary, but once a scale has been initially vetted, future revisions should be made only infrequently. One need only recall annoying updates to word processing software to recognize the pitfalls of constant change.

  3. Obtain training and supervision from established experts in the administration of the instruments. Measures that are poorly administered, typically by self-taught users, undermine the credibility of the measures and of the studies in which they are used.

  4. Share data and compile cumulative databases from different user groups. Large databases provide opportunities for benchmarking and statistical analyses not possible in individual, small-scale studies. This recommendation is especially apt given the challenges of collecting organizational-level data.

  5. Continue to update critical reviews. The ongoing review process allows investigators to identify next steps in implementation research.

ACKNOWLEDGMENTS

This review and summary was funded in part by NIDA (R01DA027650) (PI: McGovern).

Footnotes

DISCLOSURES

The authors report no financial relationships with commercial interests and have no additional income or compensation to declare.

REFERENCES

  1. Aarons GA. Mental health provider attitudes toward adoption of evidence-based practice: The Evidence-Based Practice Attitude Scale (EBPAS). Mental Health Services Research. 2004;6:61–74. doi: 10.1023/B:MHSR.0000024351.12294.65
  2. Aarons GA, Cafri G, Lugo L, Sawitzky A. Expanding the domains of attitudes towards evidence-based practice: The Evidence Based Practice Attitude Scale-50. Administration and Policy in Mental Health and Mental Health Services Research. 2012;39:331–340. doi: 10.1007/s10488-010-0302-3
  3. Aarons GA, Glisson C, Green PD, Hoagwood K, Kelleher KJ, Landsverk JA. The organizational social context of mental health services and clinician attitudes toward evidence-based practice: A United States national study. Implementation Science. 2012;7:56. doi: 10.1186/1748-5908-7-56
  4. Bond GR. Modest implementation efforts, modest fidelity, and modest outcomes. Psychiatric Services. 2007;58:334. doi: 10.1176/appi.ps.58.3.334
  5. Bond GR, Drake RE, McHugo GJ, Peterson AE, Jones AM, Williams J. Long-term sustainability of evidence-based practices in community mental health agencies. Administration and Policy in Mental Health and Mental Health Services Research. 2012;39. doi: 10.1007/s10488-012-0461-5
  6. Bond GR, Drake RE, McHugo GJ, Rapp CA, Whitley R. Strategies for improving fidelity in the National Evidence-Based Practices Project. Research on Social Work Practice. 2009;19:569–581. doi: 10.1177/1049731509335531
  7. Borntrager CF, Chorpita BF, Higa-McMillan C, Weisz JR. Provider attitudes toward evidence-based practices: Are the concerns with the evidence or with the manuals? Psychiatric Services. 2009;60:677–681. doi: 10.1176/appi.ps.60.5.677
  8. Chaple M, Sacks S, Melnick G, McKendrick K, Brandau S. Exploring the predictive validity of the DDCAT Index. Journal of Dual Diagnosis. 2013;9.
  9. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implementation Science. 2009;4:50. doi: 10.1186/1748-5908-4-50
  10. Damschroder LJ, Hagedorn HJ. A guiding framework and approach for implementation research in substance use disorders treatment. Psychology of Addictive Behaviors. 2011;25:194–205. doi: 10.1037/a0022284
  11. Davis K, O’Neill S, Devitt T, Baerentzen B, Little N, Wilkniss S. Consulting in action: A case study of six community support teams sustaining integrated dual disorder treatment. American Journal of Psychiatric Rehabilitation. 2012;15:313–333. doi: 10.1080/15487768.2012.733284
  12. Emmons KM, Weiner B, Fernandez ME, Tu SP. Systems antecedents for dissemination and implementation: A review and analysis of measures. Health Education and Behavior. 2012;39:87–105. doi: 10.1177/1090198111409748
  13. Garner BR, Hunter BD, Modisette KC, Ihnes PC, Godley SH. Treatment staff turnover in organizations implementing evidence-based practices: Turnover rates and their association with client outcomes. Journal of Substance Abuse Treatment. 2012;42:134–142. doi: 10.1016/j.jsat.2011.10.015
  14. Giard J, Kincaid R, Gotham HJ, Claus R, Lambert-Harris C, McGovern MP, Brown JL. Dual Diagnosis Capability in Addiction Treatment (DDCAT) toolkit, Version 4.0. Rockville, MD: SAMHSA; 2011. Retrieved December 21, 2011, from http://www.samhsa.gov/co-occurring/ddcat
  15. Gotham HJ, Claus RE, Selig K, Homer AL. Increasing program capability to provide treatment for co-occurring substance use and mental disorders: Organizational characteristics. Journal of Substance Abuse Treatment. 2010;38:160–169. doi: 10.1016/j.jsat.2009.07.005
  16. Knudsen HK, Abraham AJ, Roman PM, Studts JL. Nurse turnover in substance abuse treatment programs affiliated with the National Drug Abuse Treatment Clinical Trials Network. Journal of Substance Abuse Treatment. 2011;40:307–312. doi: 10.1016/j.jsat.2010.11.012
  17. Lee N, Cameron J. Differences in self and independent ratings on an organisational dual diagnosis capacity measure. Drug and Alcohol Review. 2009;28:682–684. doi: 10.1111/j.1465-3362.2009.00116.x
  18. Lewis CC, Martinez RG, Comtois KA. Measurement issues in implementation science. Paper presented at the HSR&D Cyberseminar; 2012, November 1. www.hsrd.research.va.gov/cyberseminars/archives/eis-110112.pdf
  19. Maharaj R. Organizational culture, absorptive capacity, and the change process: Influences on the fidelity of implementation of integrated dual disorder treatment in community-based mental health organizations. Unpublished doctoral dissertation. Washington, DC: Catholic University; 2010.
  20. Manuel JK, Hagedorn HJ, Finney JW. Implementing evidence-based psychosocial treatment in specialty substance use disorder care. Psychology of Addictive Behaviors. 2011;25:225–237. doi: 10.1037/a0022398
  21. McGovern MP, Giard J. Services research with co-occurring disorders: Applications of the DDCAT Index. Paper presented at the Addiction Health Services Research Conference; Atlanta, GA; 2007, October.
  22. McGovern MP, Lambert-Harris C, Gotham HJ, Claus RE, Xie H. Dual diagnosis capability in mental health and addiction treatment services: An assessment of programs across multiple state systems. Administration and Policy in Mental Health and Mental Health Services Research. 2012. doi: 10.1007/s10488-012-0449-1
  23. McGovern MP, Matzkin AL, Giard J. Assessing the dual diagnosis capability of addiction treatment services: The Dual Diagnosis Capability in Addiction Treatment (DDCAT) Index. Journal of Dual Diagnosis. 2007;3:111–123. doi: 10.1300/J374v03n02_13
  24. McGovern MP, McHugo GJ, Drake RE, Bond GR, Merrens MR. Implementing evidence-based practices in behavioral health. Center City, MN: Hazelden; 2013.
  25. McGovern MP, Saunders EC, Kim E. Substance abuse treatment implementation research. Journal of Substance Abuse Treatment. 2013;44:1–3. doi: 10.1016/j.jsat.2012.09.006
  26. McGovern MP, Urada D, Lambert-Harris C, Sullivan ST, Mazade NA. Development and initial feasibility of an organizational measure of behavioral health integration in medical care settings. Journal of Substance Abuse Treatment. 2012;43:402–409. doi: 10.1016/j.jsat.2012.08.013
  27. McGrew J, White L, Stull L, Wright-Berryman J. A comparison of self-reported and phone-based fidelity for assertive community treatment (ACT): A pilot study in Indiana. Psychiatric Services. 2013. Advance online publication. doi: 10.1176/appi.ps.001252012
  28. McHugo GJ, Drake RE, Whitley R, Bond GR, Campbell K, Rapp CA, Finnerty MT. Fidelity outcomes in the National Implementing Evidence-Based Practices Project. Psychiatric Services. 2007;58:1279–1284. doi: 10.1176/appi.ps.58.10.1279
  29. Meehl PE. Clinical versus statistical prediction: A theoretical analysis and review of the evidence. Minneapolis, MN: University of Minnesota Press; 1954.
  30. Michie S, Fixsen D, Grimshaw JM, Eccles MP. Specifying and reporting complex behaviour change interventions: The need for a scientific method. Implementation Science. 2009;4:40. doi: 10.1186/1748-5908-4-40
  31. Monroe-DeVita M, Teague GB, Moser LL. The TMACT: A new tool for measuring fidelity to assertive community treatment. Journal of the American Psychiatric Nurses Association. 2011;17:17–29. doi: 10.1177/1078390310394658
  32. Moser LL, Monroe-DeVita M, Teague GB. Evaluating integrated dual disorders treatment within assertive community treatment: A tool to guide quality improvement. Journal of Dual Diagnosis. 2013;9.
  33. Padwa H, Larkins S, Crevecoeur-MacPhail D, Grella C. Measuring dual diagnosis capacity in community substance use disorder and mental health treatment agencies. Journal of Dual Diagnosis. 2013;9. doi: 10.1080/15504263.2013.778441
  34. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, Hensley M. Outcomes for implementation research: Conceptual distinctions, measurement challenges, and research agenda. Administration and Policy in Mental Health and Mental Health Services Research. 2011;38:65–76. doi: 10.1007/s10488-010-0319-7
  35. Rapp CA, Goscha RJ, Carlson LS. Evidence-based practice implementation in Kansas. Community Mental Health Journal. 2010;46:461–465. doi: 10.1007/s10597-010-9311-7
  36. Saldana L, Chapman J, Henggeler S, Rowland M. Organizational readiness for change in adolescent programs: Criterion validity. Journal of Substance Abuse Treatment. 2007;33:159–169. doi: 10.1016/j.jsat.2006.12.029
  37. SAMHSA. Results from the 2009 National Survey on Drug Use and Health: Mental health findings. Rockville, MD: Center for Behavioral Health Statistics and Quality; 2010.
  38. Schell SF, Luke DA, Schooley MW, Elliott MB, Herbers SH, Mueller NB, Bunger AC. Public health program capacity for sustainability: A new framework. Implementation Science. 2013. Advance online publication. doi: 10.1186/1748-5908-8-15
  39. Seffrin B, Panzano PC, Roth D. What gets noticed: How barrier and facilitator perceptions relate to the adoption and implementation of innovative mental health practices. Community Mental Health Journal. 2008;44:475–484. doi: 10.1007/s10597-008-9151-x
  40. Simpson DD. A conceptual framework for transferring research into practice. Journal of Substance Abuse Treatment. 2002;22:171–182. doi: 10.1016/S0740-5472(02)00231-3
  41. Sylvain C, Lamothe L. Dual diagnosis services: Toward a better understanding of their implementation. Journal of Dual Diagnosis. 2013;9.
  42. Torrey WC, Bond GR, McHugo GJ, Swain K. Evidence-based practice implementation in community mental health settings: The relative importance of key domains of implementation activity. Administration and Policy in Mental Health and Mental Health Services Research. 2012;39:353–364. doi: 10.1007/s10488-011-0357-9
