This essay takes the position that mental health (MH) services for youth are unlikely to improve without a system of measurement that is administered frequently, is concurrent with treatment, and provides feedback. The system, which I characterize as a measurement feedback system (MFS), should include clinical processes (mediators), contexts (moderators), outcomes, and feedback to clinicians and supervisors. Despite routine calls to collect and use outcome data in real-world treatment, progress has been painfully slow.1–3 For example, Garland and colleagues found that even when outcome assessments were required, over 90% of the clinicians surveyed relied on their own judgment and paid little heed to the data.4 A more recent national survey of MH service organizations serving children and families indicated that almost 75% reported collecting some standardized outcome data.5 However, merely collecting data on an annual basis will not result in improvement.
Measurement is not enough
Feedback from clients and families occurs naturally in treatment, but it is highly filtered, biased, and subject to distortions caused by the use of cognitive heuristics and schemas.6 This informal and flawed feedback needs to be supplemented by a MFS that uses valid, reliable, and standardized measures. Such a system is central to quality improvement and professional development, as well as to accountability.
Feedback has been successfully applied outside of MH for several decades.7–8 However, the application of a fully implemented MFS is in its infancy in mental health. A MFS has been shown to improve outcomes in adult MH, especially for clients who were either not improving or deteriorating while in therapy.9 It has rarely been applied in children's mental health. Yet researchers have demonstrated that the benefits of feedback can be substantial and replicable.10 The idea of using systematic data in treatment is not new,11 but computer technology and advances in psychometrics make a MFS more feasible. Despite this, use of a MFS has not been widely accepted; it is often subtly rejected with arguments about scarcity of resources rather than met with overt opposition. Learning to what extent MH services are a good investment, and implementing quality improvement efforts to make them better, should be among the highest priorities. However, there are many barriers to implementing a MFS and few incentives to do so.
What are the barriers to the adoption of a MFS?
Practitioners and managers report several reasons for not using a MFS, including the amount of paperwork, the large time burden, insufficient resources, low clinical usefulness, confidentiality, potential misuse, low scientific merit, and value differences.2,12–13 Five less obvious barriers to the adoption of a MFS are particularly important.
1. Improving MH outcomes has no obvious financial value
Payment for services is typically based solely on the number of hours or days and the location of the services (e.g., hospital, outpatient clinic). Occasionally, more experienced or educated providers are paid more. In general, however, one unit of MH service, be it a visit or a day, is treated as equivalent to another, which makes it a commodity. Because measures of the effectiveness or quality of services are not in widespread use, this commoditization results in competition based primarily on price. That is an advantage to funders, since it should result in lower prices. However, it may also result in less effective services and a disincentive to improve them. Although pay-for-performance (P4P) schemes are growing in popularity in the general health sector, they are rare in MH.14 P4P should not be undertaken unless a mature measurement system with high integrity and security is in place. It is likely that MFSs themselves will not succumb to a similar commoditization process, because indicators of measurement quality, such as validity and reliability, exist to differentiate them.
There is good evidence that the price of MH services has fallen, especially compared with other health services. The value of behavioral health benefits reportedly decreased 54 percent from 1988 to 1998.15 From 1980 to 1997, the share of total claims accounted for by MH and substance abuse declined from 7.8% to 1.9%.16 The number of specialty MH providers grew at half the average annual rate of non-specialty providers from 1993 to 2003.17 Overall, annual costs per youth decreased $157 (14.4%) between 1997 and 2000, driven by a combination of fewer outpatient visits (−1.3%) and a decline in payments per outpatient visit (−6.1%).18 The point here is not that we should pay more for MH services of unknown effectiveness, but that to reverse the downward price spiral, services must be differentiable on meaningful indicators of effectiveness.
2. Organizational and psychological factors
Currently, everyone except the client appears to benefit from not having a MFS. Since effectiveness data are not available, states (typically the funder) can claim they are meeting the needs of their citizens without dealing with ineffective and problematic (as opposed to scandalous) services. Service providers can maintain that their use of public funds is justified because they use evidence-based treatments (EBTs), which may or may not be effective or properly implemented. Supervisors can continue to supervise, paradoxically, based primarily on what their supervisees tell them. Clinicians can avoid disconfirmation of their effectiveness, and its potential negative psychological effects, by not considering sources of information other than their own observations and intuition. All participants in the system can avoid the political, financial, and organizational problems of implementing a MFS and dealing with the data it produces. These barriers have an immediate impact, whereas the benefits of a MFS accrue mostly in the long term and depend on clinically effective treatments as well as on solving many issues in developing, implementing, and sustaining the system. From this perspective, it is understandable why MFSs are slow to be adopted; one hopes the long-term benefits will eventually prevail.
3. Evidence-based treatments (EBT) and practice guidelines
Strong confidence in the putative effectiveness of EBTs, both pharmacological and psychosocial, may be misplaced. For example, Weisz and his colleagues found that although EBTs for youth were modestly superior to usual care, the advantage occurred only when the EBT was evaluated by its developer.19 The conditions under which EBTs are effective remain to be clearly identified. Even if the effectiveness of EBTs is taken at face value, there are additional problems with depending on them to improve outcomes in the real world.
In an extensive review of the literature on continuing medical education, researchers found the quality of the evidence for its effects on knowledge, attitudes, skills, practice, and outcomes to be low or very low.20 In mental health, Ganju concluded, "a key finding is that training alone, even when it is fairly intensive, appears to increase knowledge but has a limited impact on practice."21(p4) It would be imprudent to depend on the training of clinicians in EBTs alone to attain and maintain high-fidelity implementation. Moreover, a national survey found little evidence of monitoring of the ongoing effectiveness of EBTs.22
Without a MFS, it is unlikely that whatever was learned in a workshop will continue to be used reliably. In addition, even if a treatment model is strictly followed, there is little assurance that the treatment will be equally effective in different contexts. Ongoing measurement is necessary to maintain fidelity and to understand how treatments may be successfully adapted to different environments. MFSs do not compete with EBTs but help to sustain and improve them. Assessing the results of any treatment is a key step in the classic evidence-based medicine approach to practice.
Without monitoring an EBT, it is illusory to claim that the EBT is effective. Of greater concern is that EBTs will be used as a substitute for a MFS; that is, training in EBTs becomes the goal and displaces the original purpose of providing effective treatment. It is much easier to claim success because some number of clinicians have been trained than to demonstrate improved client outcomes. There is no longer any excuse for confusing inputs with outcomes. EBTs are not structured in a way that allows them to be mechanically implemented without variations introduced by the clinician and the service organization. MH services will not succeed in removing the influence of the clinician, or "clinician proofing" treatments, any more than the field of education has succeeded in "teacher proofing" the curriculum.23
4. Accreditation and licensing
There are no substantial empirical data showing that licensing and accreditation affect the outcome of services. For example, a national survey of state licensing, regulating, and monitoring of residential facilities for children did not list evidence of effectiveness or measurement among the procedures required to obtain licensure or certification.24 The National Committee for Quality Assurance (NCQA) has only two measures in its HEDIS system that specifically target youth: follow-up after hospitalization for mental illness, and follow-up for children prescribed ADHD medication. In 1998, the Joint Commission on Accreditation of Healthcare Organizations (JCAHO) expanded its ORYX™ performance measurement initiative to behavioral health,25 but there are no specific performance measures for child and adolescent behavioral health and no specific criteria for measures. The primary purpose of these mechanisms appears to be to ensure a minimal standard of quality and safety. In contrast, the Commission on Accreditation of Rehabilitation Facilities (CARF) has comprehensive standards for meeting its performance improvement criterion, which include the measurement of reliable and valid clinical data at intake, discharge, post-discharge, and other intervals, with the data used at least annually in a performance analysis.26 Whether these more rigorous standards are followed and affect outcomes is yet to be determined, but CARF is moving in the right direction.
5. Clinical experience and judgment
Clinicians, like all other professionals, see themselves as competent. Why stay in a profession if you think you are ineffective? One implication of this sense of efficacy is lowered motivation to adopt anything new. Anecdotal feedback from some clinicians in our current MFS study (described below) is that a MFS is probably good for new clinicians but not needed for experienced ones.
There is little research support for the belief that more experienced clinicians produce better outcomes. The number of years of experience, amount of supervision, and accreditation have been found to be unrelated to judges' ratings of competence.27 A review of the literature did not find a consistent relationship between "clinical competence" and outcomes.28 Michael and his colleagues conducted a meta-analysis of child and adolescent treatment studies for depression and found that professionals and graduate students produced equivalent outcomes.29 That we have yet to find substantial evidence that general training or experience affects outcomes does not imply that clinicians do not affect those outcomes. A multilevel analysis of real-world data on 1,198 psychotherapy outpatients and 60 clinicians found that 17% of the variance in rates of improvement was explained by therapist differences.30 This was twice as high as that found in similar studies,31–33 which suggests that naturalistic samples may include a wider range of therapist skill than controlled clinical trials, since the best-managed trials appear to show the smallest therapist effects.34
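To make the variance-partitioning logic behind such therapist-effect estimates concrete, the following sketch fits a random-intercept mixed model and reports the share of outcome variance attributable to therapists. It is only an illustration of the general approach, not the three-level model of Lutz and colleagues: the data are simulated, and the variance parameters are assumptions chosen so the true therapist share is roughly 17%.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_therapists, patients_per = 60, 20
# Assumed between-therapist SD of 0.45 vs. residual SD of 1.0,
# giving a true therapist share of about 0.2025 / 1.2025 = 17%.
therapist_effect = rng.normal(0.0, 0.45, n_therapists)

rows = [
    {"therapist": t, "improvement": 1.0 + therapist_effect[t] + rng.normal(0.0, 1.0)}
    for t in range(n_therapists)
    for _ in range(patients_per)
]
data = pd.DataFrame(rows)

# Random intercept per therapist; REML estimates of the two variance components.
result = smf.mixedlm("improvement ~ 1", data, groups=data["therapist"]).fit()
between = result.cov_re.iloc[0, 0]  # between-therapist variance
within = result.scale               # residual (patient-level) variance
print(f"Estimated therapist share of variance: {between / (between + within):.1%}")
```

The point of the sketch is simply that "therapist effects" are a variance component, one that the routine data an MFS collects would make estimable in ordinary practice rather than only in research studies.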
The barriers to the adoption of MFSs are not solely attributable to service providers. Researchers have been slow to develop valid and feasible measures, feedback systems, and methods to integrate feedback into clinical practice, and the low demand for MFSs has likely contributed to this slow pace of development. Consumers seem more focused on access to services than on their effectiveness, although this may be changing as consumers come to see information on the effectiveness of their treatment as a right rather than a management option. Most importantly, payers have not made the implementation of a MFS a serious funding priority.
Moving in the right direction
The practice-based evidence approach advocated in this paper promotes the systematic and frequent measurement of treatment progress and process within a continuous quality improvement framework.35–36 This orientation is usually aligned with common factors, strengths-based, and client-centered values.37–38 In addition, there has recently been an increased emphasis on good measurement in what has been called evidence-based assessment, an approach that requires not only good psychometric quality but also clinical usefulness.39–40
The Substance Abuse and Mental Health Services Administration (SAMHSA) has had a major role in funding meetings, roundtables, and pilot research. With SAMHSA's support, several states are implementing some form of a measurement system. For example, Ohio has spent more than a decade developing and implementing a measurement system that is now required state-wide, and Massachusetts has also instituted measurement systems.14 Lambert and Burlingame, pioneers in the field of real-time outcome measurement, have collaborated with the state of Utah to implement a very brief concurrent measurement system in which data are entered on PDAs and are immediately available to the clinician.41 Since 2003, the MacArthur Foundation has also invested in research to develop a MFS with the cooperation of the state of Hawaii.42 MH services in the U.S. are not alone in promoting measurement. Great Britain has been at the forefront of routine measurement of adult and child and adolescent MH services, yet its child services are struggling to implement a voluntary measurement system.43 Australia was one of the first countries to support routine mental health outcome measurement; however, implementation has been problematic.44–45
CFIT, an example of a MFS
The Center for Evaluation and Program Improvement at Peabody College has developed an evidence-based, outcome-driven continuous quality improvement system called Contextualized Feedback Intervention and Training (CFIT). It can be used for continuing professional development and quality improvement. CFIT enables provider organizations to make data-based decisions and transform themselves into learning organizations.46 CFIT is based on a theory of change, and grounded in psychological and organizational research.47–48
CFIT has four major components: organizational assessment, treatment progress measurement, feedback, and training. CFIT is designed to affect, and be affected by, the culture of the organization. Each application of CFIT begins with an assessment of the organization's needs and readiness for change; this information can be used to tailor the implementation of the system to the specific organizational context.49 Although research on CFIT may lead to a better understanding of the contextual influences on implementation, the field has not yet developed scientifically based interventions to improve implementation and sustain it in the long run. Even so, assessing the organizational context is a great improvement over simply ignoring the important role of context and personnel.50
To make measurement concurrent with treatment feasible, brief instruments were developed to collect weekly data on multiple domains from youth, caregiver, and clinician. The instrument battery assesses both processes (therapeutic alliance, treatment motivation, and session impact) and clinical outcomes (life satisfaction, hope, symptoms, and functioning). It is available free at http://peabody.vanderbilt.edu/ptpb.
The feedback can be used at all levels of the organization: by the clinician, supervisor, and administrator. The goal is to revolutionize the way these groups operate by providing them with shared information. The feedback is provided online in a user-friendly format. Individual scores can be compared with organization-specific norms, and the data can be aggregated to compare clinicians, clinics, provider organizations, and types of treatment. Such information can help transform MH services from a commodity into a service selected for its effectiveness.
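As a purely hypothetical illustration of the kind of computation underlying such feedback (CFIT's actual reports are web-based, and the column names, cutoff, and scoring direction below are assumptions, not CFIT's design), a minimal sketch might compare each client's latest weekly score with organization-specific norms for the same week of treatment and flag clients falling well below the expected trajectory:

```python
import pandas as pd

def flag_off_track(scores: pd.DataFrame, z_cutoff: float = -1.0) -> pd.DataFrame:
    """scores has columns client_id, week, score (higher = doing better)."""
    # Organization-specific norms: mean and SD of scores at each week of treatment.
    # (A real MFS would build norms from historical data, not the current caseload.)
    norms = (scores.groupby("week")["score"]
                   .agg(["mean", "std"])
                   .rename(columns={"mean": "norm_mean", "std": "norm_sd"}))
    # Each client's most recent observation.
    latest = scores.sort_values("week").groupby("client_id").tail(1)
    latest = latest.join(norms, on="week")
    latest["z"] = (latest["score"] - latest["norm_mean"]) / latest["norm_sd"]
    latest["off_track"] = latest["z"] < z_cutoff  # flag for clinician and supervisor
    return latest[["client_id", "week", "score", "z", "off_track"]]

# Toy example: client 3 improves much more slowly than the organization's norm.
demo = pd.DataFrame({"client_id": [1, 1, 2, 2, 3, 3],
                     "week":      [1, 2, 1, 2, 1, 2],
                     "score":     [40, 55, 42, 58, 38, 30]})
print(flag_off_track(demo))
```

The same aggregation step that produces the norms also supports the clinician-, clinic-, and organization-level comparisons described above; the clinical value lies in routing the resulting flags to the clinician and supervisor while treatment is still under way.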
CFIT's clinical training is based on a common factors approach, focusing on factors shared across almost all therapies rather than on a specific therapeutic school.51 CFIT uses a combination of online modules, written manuals, teletraining, and on-site training sessions. The main objective is to provide clinicians and supervisors with information on how to optimize therapeutic processes. Clinicians and supervisors can evaluate the success of their interventions, and refine them, through continuing review of the feedback reports. With the support of an NIMH grant, the effectiveness of CFIT in influencing outcomes is currently being evaluated in a large-scale field experiment.
Conclusions
Developing, implementing, and sustaining a MFS is neither simple nor a total solution for obtaining client improvement. Yes, it can be used with a broad range of clients, clinicians, treatments, and contexts. Yes, it measures implementation and outcomes, so problems are less likely to stay hidden. However, all of the barriers noted earlier will affect its implementation and sustainability, and much more needs to be learned about how to implement a MFS successfully. While the importance of the organizational and cultural context of a MFS is recognized, it is not known in which settings a MFS will be feasible and effective. Measurement of client improvement is both conceptually and logistically complex: there is concern about how to measure change, about what should be measured, and about the lack of agreement among respondents. It is unlikely that the level of commitment and funding will be ideal. Computer programs, which to the inexperienced seem simple, are likely to remain immensely complicated and expensive to develop and test. All of these problems are likely to occur, and there is not yet sufficient knowledge and technology to deal with them.21 However, the only way to learn how to solve implementation problems is through the continued implementation of MFSs.
Dalton Conley, a sociologist and winner of a $500,000 National Science Foundation award, is quoted as saying, “I would like to argue that sociology is among the hardest sciences of all — harder than the proverbial rocket science”.52 Establishing and sustaining a MFS is even more difficult. Real change in the real world is really hard.
Acknowledgments
Preparation of this article was partially supported by grants from NIMH (MH 068589-01) and the Leon Lowenstein Foundation.
Footnotes
Disclosure: Vanderbilt University licenses CFIT to Qualifacts Corporation, and both Vanderbilt University and Dr. Bickman can receive funds from commercial sales of the system.
References
1. APA Presidential Task Force on Evidence-Based Practice. Evidence-based practice in psychology. Am Psychol. 2006;61(4):271–285. doi: 10.1037/0003-066X.61.4.271.
2. Johnston C, Gowers S. Routine outcome measurement: a survey of UK child and adolescent mental health services. Child Adolesc Ment Health. 2005;10(3):133–139. doi: 10.1111/j.1475-3588.2005.00357.x.
3. Phelps R, Eisman EJ, Kohout J. Psychological practice and managed care: results of the CAPP Practitioner Survey. Prof Psychol Res Pr. 1998;29(1):31–36.
4. Garland AF, Kruse M, Aarons GA. Clinicians and outcome measurement: what's the use? J Behav Health Serv Res. 2003;30(4):393–405. doi: 10.1007/BF02287427.
5. Schoenwald SK, Chapman JE, Kelleher K, et al. A survey of the infrastructure for children's mental health services: implications for the implementation of empirically supported treatments (ESTs). Adm Policy Ment Health. 2008;35(1–2):84–97. doi: 10.1007/s10488-007-0147-6.
6. Ægisdóttir S, White MJ, Spengler PM, et al. The meta-analysis of clinical judgment project: fifty-six years of accumulated research on clinical versus statistical prediction. Couns Psychol. 2006;34(3):341–382.
7. Kluger AN, DeNisi A. The effects of feedback interventions on performance: a historical review, a meta-analysis and a preliminary feedback intervention theory. Psychol Bull. 1996;119(2):254–284.
8. Rose DJ, Church RJ. Learning to teach: the acquisition and maintenance of teaching skills. J Behav Educ. 1998;8(1):5–35.
9. Hawkins EJ, Lambert MJ, Vermeersch D, Slade K, Tuttle K. The therapeutic effects of providing patient progress information to therapists and patients. Psychother Res. 2004;14(3):308–327.
10. Lambert MJ, Whipple JL, Hawkins EJ, Vermeersch D, Nielsen SL, Smart DW. Is it time for clinicians to routinely track patient outcome? A meta-analysis. Clin Psychol Sci Pract. 2003;10(3):288–301.
11. Stricker G, Trierweiler SJ. The local clinical scientist: a bridge between science and practice. Am Psychol. 1995;50(12):995–1002. doi: 10.1037/0003-066X.50.12.995.
12. Hatfield DR, Ogles BM. The use of outcome measures by psychologists in clinical practice. Prof Psychol Res Pr. 2004;35(5):485–491.
13. Meehan T, McCombes S, Hatzipetrou L, Catchpoole R. Introduction of routine outcome measures: staff reactions and issues for consideration. J Psychiatr Ment Health Nurs. 2006;13:581–587. doi: 10.1111/j.1365-2850.2006.00985.x.
14. Bachman J. Pay for performance in primary and specialty behavioral health care: two "concept" proposals. Prof Psychol Res Pr. 2006;37(4):384–388.
15. Center for Mental Health Services. Mental Health, United States, 2000. Washington, DC: U.S. Government Printing Office; 2001. DHHS Pub No. (SMA) 01-3537.
16. Foote SM, Jones SB. Trends: consumer choice markets: lessons from FEHB mental health coverage. Health Aff. 1999;18(5):125–130. doi: 10.1377/hlthaff.18.5.125.
17. Mark TL, Levit KR, Coffey RM, et al. National Expenditures for Mental Health Services and Substance Abuse Treatment, 1993–2003. Rockville, MD: Substance Abuse and Mental Health Services Administration; 2007. SAMHSA Publication No. SMA 07-4227.
18. Martin A, Leslie D. Psychiatric inpatient, outpatient, and medication utilization and costs among privately insured youths, 1997–2000. Am J Psychiatry. 2003;160(4):757–764. doi: 10.1176/appi.ajp.160.4.757.
19. Weisz JR, Jensen-Doss A, Hawley KM. Evidence-based youth psychotherapies versus usual clinical care: a meta-analysis of direct comparisons. Am Psychol. 2006;61(7):671–689. doi: 10.1037/0003-066X.61.7.671.
20. Marinopoulos SS, Dorman T, Ratanawongsa N, et al. Effectiveness of Continuing Medical Education. Evidence Report/Technology Assessment No. 149. Rockville, MD: Agency for Healthcare Research and Quality; 2007. AHRQ Publication No. 07-E006.
21. Ganju VK. The Need for an Evidence-Based Culture: Lessons Learned from Evidence-Based Practices Implementation Initiatives. Alexandria, VA: NASMHPD Research Institute, Inc.; 2006.
22. National Association of State Mental Health Program Directors. Results of a Survey of State Directors of Adult and Child Mental Health Services on Implementation of Evidence-Based Practices. Alexandria, VA: NASMHPD Research Institute, Inc.; 2005.
23. Sawyer RK. Creative teaching: collaborative discussion as disciplined improvisation. Educ Res. 2004;33(2):12–20.
24. Teich JL, Ireys HT. A national survey of state licensing, regulating, and monitoring of residential facilities for children with mental illness. Psychiatr Serv. 2007;58(7):991–998. doi: 10.1176/ps.2007.58.7.991.
25. Joint Commission on the Accreditation of Healthcare Organizations (JCAHO). Performance measurement initiatives. Available at http://www.jointcommission.org/PerformanceMeasurement/PerformanceMeasurement/. Accessed February 26, 2008.
26. Commission on Accreditation of Rehabilitation Facilities (CARF). 2007 Behavioral Health Standards Manual. Tucson, AZ: CARF International; 2007.
27. Brosan L, Reynolds S, Moore RG. Factors associated with competence in cognitive therapists. Behav Cogn Psychother. 2007;35(2):179–190.
28. Barber JP, Sharpless BA, Klostermann S, McCarthy KS. Assessing intervention competence and its relation to therapy outcome: a selected review derived from the outcome literature. Prof Psychol Res Pr. 2007;38(5):493–500.
29. Michael KD, Huelsman TJ, Crowley SL. Interventions for child and adolescent depression: do professional therapists produce better results? J Child Fam Stud. 2005;14(2):223–236.
30. Lutz W, Leon SC, Martinovich Z, Lyons JS, Stiles WB. Therapist effects in outpatient psychotherapy: a three-level growth curve approach. J Couns Psychol. 2007;54(1):32–39.
31. Wampold BE, Brown GS. Estimating therapist variability: a naturalistic study of outcomes in managed care. J Consult Clin Psychol. 2005;73:914–923. doi: 10.1037/0022-006X.73.5.914.
32. Kim DM, Wampold BE, Bolt DM. Therapist effects in psychotherapy: a random-effects modeling of the National Institute of Mental Health Treatment of Depression Collaborative Research Program data. Psychother Res. 2006;16(2):161–172.
33. Crits-Christoph P, Baranackie K, Kurcias JS, et al. Meta-analysis of therapist effects in psychotherapy outcome studies. Psychother Res. 1991;1:81–91.
34. Elkin I. Rejoinder to commentaries by Stephen Soldz and Paul Crits-Christoph on therapist effects. Psychother Res. 2006;16:182–183.
35. Howard KI, Moras K, Brill PL, Martinovich Z, Lutz W. Evaluation of psychotherapy: efficacy, effectiveness and patient progress. Am Psychol. 1996;51(10):1059–1064. doi: 10.1037/0003-066X.51.10.1059.
36. Margison FR, McGrath G, Barkham M, et al. Measurement and psychotherapy: evidence-based practice and practice-based evidence. Br J Psychiatry. 2000;177(2):123–130. doi: 10.1192/bjp.177.2.123.
37. Bickman L, ed. A common factors approach to improving mental health services [special issue]. Ment Health Serv Res. 2005;7(1). doi: 10.1007/s11020-005-1961-7.
38. Duncan BL, Miller SD, Sparks J. Common factors and the uncommon heroisms of youth. Psychother Australia. 2007;13(2):34–43.
39. Mash EJ, Hunsley J. Evidence-based assessment of child and adolescent disorders: issues and challenges. J Clin Child Adolesc Psychol. 2005;34(3):362–379. doi: 10.1207/s15374424jccp3403_1.
40. Hunsley J, Mash EJ. Evidence-based assessment. Annu Rev Clin Psychol. 2007;3:29–51. doi: 10.1146/annurev.clinpsy.3.022806.091419.
41. Lambert MJ, Burlingame GM. Using practice-based evidence with evidence-based practice. Behav Healthc. 2007;27(10):16.
42. Chorpita BF, Bernstein A, Daleiden EL, The Research Network on Youth Mental Health. Driving with roadmaps and dashboards: using information resources to structure the decision models in service organizations. Adm Policy Ment Health. 2008;35(1–2):114–123. doi: 10.1007/s10488-007-0151-x.
43. Wolpert M, Cooper L, Tingay K, Young K, Svanberg E, the CORC Committee. CAMHS Outcomes Research Consortium Handbook (Version 2.0). London: The Consortium; 2007. Available at http://www.networks.nhs.uk/uploads/user/handbook_2007.pdf. Accessed January 10, 2008.
44. Bickman L, Nurcombe B, Townsend C, Belle M, Schut J, Karver M. Consumer Measurement System in Child and Adolescent Mental Health. Canberra, ACT: Department of Health and Family Services; 1999.
45. Pirkis J, Burgess P, Coombs T, Clarke A, Jones-Ellis D, Dickson R. Routine measurement of outcomes in Australia's public sector mental health services. Aust New Zealand Health Policy. 2005;2(1):8. doi: 10.1186/1743-8462-2-8.
46. Birleson P. Turning child and adolescent mental-health services into learning organizations. Clin Child Psychol Psychiatry. 1999;4(2):265–274.
47. Riemer M, Rosof-Williams J, Bickman L. Theories related to changing clinician practice. Child Adolesc Psychiatr Clin N Am. 2005;14(2):241–254. doi: 10.1016/j.chc.2004.05.002.
48. Sapyta J, Riemer M, Bickman L. Feedback to clinicians: theory, research, and practice. J Clin Psychol. 2005;61(2):145–153. doi: 10.1002/jclp.20107.
49. Glisson C, Landsverk J, Schoenwald SK, et al. Assessing the Organizational Social Context (OSC) of mental health services: implications for implementation research and practice. Adm Policy Ment Health. 2008;35(1–2):98–113. doi: 10.1007/s10488-007-0148-5.
50. Peterson KA, Bickman L. Program personnel: the missing ingredient in describing the program environment. In: Conrad KJ, Roberts-Gray C, eds. Evaluating Program Environments. San Francisco: Jossey-Bass; 1988. pp. 83–92.
51. Karver MS, Handelsman JB, Fields S, Bickman L. A theoretical model of common process factors in youth and family therapy. Ment Health Serv Res. 2005;7(1):35–51. doi: 10.1007/s11020-005-1964-4.
52. Reichhardt T. Harder than rocket science. Nature. 2005;435(7045):1024–1025. doi: 10.1038/4351024a.