Driven by recent policy developments that emphasize accountability and data sharing (e.g., Patient Protection and Affordable Care Act, 2010; HITECH Act, 2009), health information technologies (HIT) are rapidly becoming ubiquitous within the contemporary healthcare landscape. Specific subtypes of HIT support service quality monitoring and can be classified within the “quality management” set of implementation strategies articulated by Powell et al. (2012, 2015). Despite their potential to function as an implementation strategy, these technologies often require their own strategic implementation supports to be successfully installed and effectively used in new service systems (Cohen, 2015; Ruud, 2015). Although there is widespread recognition of the potential for HIT to usher in new cost savings in healthcare, issues related to technology design and implementation processes have limited the extent to which those savings have been realized (Carroll, 2015; Leviss, 2011; Ribitzky, Sterling, & Bradley, 2010).
Measurement feedback systems (MFS; Bickman, 2008) are one type of HIT that supports service quality monitoring and directly informs care decisions through (a) the ongoing collection of intervention process and outcome data and (b) data-based feedback to providers (i.e., routine outcome monitoring; ROM). Because it is a core component of numerous evidence-based psychotherapies (e.g., Cognitive Behavioral Therapy, Interpersonal Psychotherapy) and in light of the burgeoning evidence for its positive impact on service recipient outcomes, ROM has been identified as a potential minimal intervention needed for change (MINC; Glasgow et al., 2014; Scott & Lewis, 2015). MFS represent a leading strategy for enabling ROM in mental health service delivery, offering two basic functions: (1) they include, or allow users to input, quantitative measures that are administered regularly throughout treatment to collect ongoing information about the process and progress of the intervention; and (2) they automate the presentation of that information to provide timely, clinically useful feedback to mental health providers about their cases (Bickman, 2008). Importantly, these functions do not necessitate a freestanding system. Although MFS technologies often exist as stand-alone products, they may also be integrated into other HIT, such as electronic health records (Douglas, Button, & Casey, 2014; Steinfeld, Franklin, Mercer, Fraynt, & Simon, this issue). MFS have seen rapid proliferation, with close to 50 such systems identified (Lyon, Lewis, Boyd, Hendrix, & Liu, this issue).
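These two functions can be made concrete with a short sketch. The Python fragment below is a minimal illustration, not a representation of any system described in this section; the PHQ-9 measure, the no-improvement flagging rule, and all identifiers are assumptions chosen only to show routine score collection paired with automated feedback.

```python
from dataclasses import dataclass, field

@dataclass
class MeasurementFeedbackRecord:
    """Minimal illustration of the two core MFS functions."""
    client_id: str
    measure: str = "PHQ-9"  # assumed instrument; real MFS support many measures
    scores: list = field(default_factory=list)

    def record_session(self, score: int) -> None:
        """Function 1: routinely collect an outcome score during treatment."""
        self.scores.append(score)

    def feedback(self) -> str:
        """Function 2: automatically present the data as provider feedback.

        Flagging rule (an assumption for illustration only): alert when the
        most recent score shows no improvement over the intake score.
        """
        if len(self.scores) < 2:
            return f"{self.client_id}: not enough data for feedback yet."
        baseline, latest = self.scores[0], self.scores[-1]
        if latest >= baseline:  # higher PHQ-9 scores indicate worse symptoms
            return (f"{self.client_id}: {self.measure} not improving "
                    f"({baseline} at intake, {latest} now); consider reviewing the plan.")
        return f"{self.client_id}: {self.measure} improving ({baseline} -> {latest})."

# Usage: scores are entered after each session; feedback is generated on demand.
record = MeasurementFeedbackRecord(client_id="client-001")
for score in (17, 16, 18):
    record.record_session(score)
print(record.feedback())  # flags non-improvement (17 -> 18)
```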
Despite the growth of MFS and their widespread endorsement in the mental health services literature (e.g., Garland et al., 2013; Halford et al., 2012; SAMHSA, 2012), little empirical work has examined the strategies through which these systems are developed (e.g., contextual inquiry, software design, system usability testing) and implemented (e.g., training, consultation, use of incentives) in real-world mental health service delivery settings. The integration and effective use of MFS are beset with the typical implementation challenges common across behavior change efforts (Cohen, 2015). However, technology development and rollout are perhaps uniquely complicated and often unexpectedly protracted tasks in which mental health professionals (e.g., administrators, researchers, practitioners) and resource-poor settings are rarely equipped to engage. Unfortunately, no matter how well intentioned or conceptualized a system may be, problematic design and implementation can impede use and undermine the otherwise effective practices that technologies are intended to support (Karsh, 2004; Littlejohns, Wyatt, & Garvican, 2003; Maguire, 2001). Presently, there are few examples and scarce guidance available to inform design/implementation processes as they relate to the popular technology of MFS – including how to effectively engage key business/technology partners who may not be primarily focused on a system’s clinical utility. As a result, development teams tend to remain highly “siloed,” which limits the extent to which innovative and effective approaches are developed and shared, leading to redundant work and wasted resources. Although some authors have begun advocating for the application of user-centered design principles to MFS development (Bickman, Kelley, & Athay, 2012; Lyon, Wasse et al., this issue), scant literature exists to describe the iterative process of system design, testing, and revision. Furthermore, although implementation science frameworks have identified key variables and outcomes related to the introduction of innovations into novel contexts (e.g., Aarons, Hurlburt, & Horwitz, 2011; Damschroder et al., 2009; Proctor et al., 2011), they have only recently begun to be applied to the domain of MFS technologies. These issues have significantly hampered the identification and dissemination of “best practice” models for MFS design and implementation.
Special Section Contents
As a component of the larger special issue on outcome assessment and feedback processes, this special section focuses on the design, development, refinement, and implementation of MFS technologies across mental health service delivery sectors. This collection of articles reflects projects in which new technologies or capabilities were developed (e.g., Bruns, Hyde, Sather, Hook, & Lyon, this issue; Steinfeld et al., this issue) as well as examples of adapting existing technologies to meet local needs (e.g., Lyon, Wasse et al., this issue; Nadeem, Cappella, Holland, Coccaro, & Crisonino, this issue). In addition, the articles present process (e.g., Bruns et al., this issue; Gleacher et al., this issue; Lyon, Wasse et al., this issue; Nadeem et al., this issue) and outcome (e.g., Bickman et al., this issue; Gleacher et al., this issue; Steinfeld et al., this issue) data from MFS development and implementation efforts that reflect the full range of success: from the decommissioning of the Contextualized Feedback System due to ongoing technological and cost issues (Bickman et al., this issue), to moderate uptake of the Team Management System WrapLogic (Bruns et al., this issue), to 90% penetration of the mental health progress monitoring tool embedded within the electronic health record at Group Health Cooperative (Steinfeld et al., this issue).
Specifically, Bruns et al. (this issue) describe their user-centered design-informed process of developing an electronic behavioral health information system (EBHIS) to support Wraparound care coordination for youth. They detail the core functions of electronic health records, acknowledging the critical importance of integrating measurement-feedback and care coordination functions into the existing (required) infrastructure and workflow. Bruns et al. (this issue) break down their process into replicable phases (e.g., literature review, input from the field, development) and put forth a theory of impact as it pertains to Wraparound-specific EBHIS.
Nadeem and colleagues (this issue) present a school-based, community-partnered method for developing and piloting an MFS that supports teachers’ use of classroom practices and provides feedback for addressing student emotional and behavioral issues. They present an iterative approach to integrating user feedback into the adaptation of a relatively low-tech dashboard MFS (e.g., using Microsoft Excel), designed to fit the implementation context per a baseline needs assessment. Pilot data indicated a dose-response relationship between the amount of feedback provided and student outcomes.
Lyon, Wasse et al. (this issue) introduce the Contextualized Technology Adaptation Process (CTAP), a systematic approach to HIT development that draws heavily on human-centered design principles and models from implementation science. The CTAP posits that optimizing innovation-organization fit is vital to MFS implementation and that user-centered design provides methods and techniques for making the necessary adaptations. Phase 1 (of 5 phases) – contextual evaluation – is presented in the context of MFS development for use in school-based mental health as an illustration of this mixed-methods approach.
Steinfeld and colleagues (this issue) highlight the importance of emphasizing the clinical utility of the measurement tool featured in an MFS. This emphasis led to their development of the mental health progress monitoring tool (MHPMT) – a combination of established measures (e.g., Patient Health Questionnaire) and new items (e.g., regarding therapeutic alliance) deemed relevant to the majority of their population. After ceasing work with an external vendor (due to cost, time, and privacy issues), their team prioritized integrating the MHPMT into their electronic health record system to streamline workflow, relying on patient completion of a paper copy to be entered by clinicians. Using lean process improvement tools (Steinfeld et al., 2014) following a pilot implementation at a single site, Steinfeld and colleagues present quantitative and qualitative data from providers and patients on the utility of the MFS and its impact on patient progress.
Bickman et al. (this issue) present data from a randomized experiment aimed at determining the impact of feedback amount and timing on youth outcomes in a rural versus an urban clinic setting. Critical issues regarding software delays, in particular, were noted, ultimately resulting in the decommissioning of their longstanding MFS: the Contextualized Feedback System. Despite the implementation challenges, the trial yielded an implementation index (dose of feedback) that could be used across studies to build what appears to be growing evidence of a strong dose-response relationship between feedback and outcomes, but one that may vary according to the implementation approach (e.g., manual versus electronic data entry).
Finally, Gleacher et al. (this issue) describe a qualitative analysis of interviews with clinicians at the clinics included in the above-mentioned randomized trial (Bickman et al., this issue), undertaken to identify barriers and facilitators that might explain the differential MFS implementation outcomes. Interestingly, the ratio of barriers to facilitators did not seem to explain the differential implementation success; rather, organizational and leadership supports appeared critical in the clinic that demonstrated higher implementation rates. Their study suggests that a tipping point may be a useful concept when examining the ratio of barriers to facilitators, but that some categories of contextual factors (e.g., leadership) may carry more weight than others.
de Jong (this issue) offers a critical analysis of the special section’s articles and future directions for MFS implementation research, placing them in the context of key challenges that emerged from quality improvement efforts in the United Kingdom. Specifically, de Jong (this issue) suggests that the issues likely to be most important to MFS development and implementation are: design planning of the improvement intervention (given the heavy technical emphasis of MFS); organizational context (with a focus on leveraging appropriate leadership approaches); sustainability (ensured by establishing the relative advantage of the MFS over existing systems); and unintended consequences (the possibility that MFS, or ROM more generally, has not been sufficiently tested across populations and settings and may in some cases have iatrogenic effects).
Chorpita, Daleiden, and Bernstein (this issue) encourage a step back from MFS themselves to shed light on the broader intersection of technology and decision-making. They contend that four evidence bases should be considered if design and implementation are to achieve their full potential: case-specific historical information, local aggregate evidence, general services research, and causal mechanism research. They present a case example to illustrate the interplay and interdependency of these four evidence bases, which MFS have yet to fully address and integrate, revealing the complexity of the real-world clinical context and the rather large design challenges that remain.
Themes from Special Section Articles
Purpose of MFS Implementation
MFS have the potential to support full intervention or service delivery model implementation (e.g., Bruns et al., this issue; Nadeem et al., this issue). In these cases, MFS capabilities move well beyond the collection of patient data and provision of feedback to include functions such as communication among providers, documentation, billing, and tracking of practices. Importantly, MFS that are built to facilitate communication among providers, such as the Mental Health Integrated Tracking System, demonstrate improved patient outcomes via collaborative and coordinated care (Unützer et al., 2002, 2012). Although not yet empirically established as superior to more basic MFS, the addition of tracking practice elements or intervention processes may afford providers greater opportunity for individualizing care, particularly in the face of suboptimal patient outcomes. As alluded to in Chorpita and colleagues’ commentary (this issue), MFS may one day have the capacity to inform, support, and enhance real-time decision making about how best to proceed with each unique client given the relevant and existing evidence bases.
Simultaneously, each additional MFS capability has the potential to increase system complexity. Rogers’ (2010) Diffusion of Innovations theory conceptualizes innovation complexity as a predictor of adoption: the more complex the innovation, the less likely it is to be adopted. In support of this notion, we suggest that MFS complexity (i.e., number of system capabilities) may be inversely related to system spread (Lyon, Lewis et al., this issue). Indeed, the majority of the articles in this special section focus on the relatively “simple” task of collecting patient outcomes and providing feedback to providers. Nevertheless, implementation challenges were frequent, suggesting that parsimony may be a critical starting point for this work going forward.
MFS Technology Platform
Most of the contributions in this special section reflect freestanding MFS technologies; only the Steinfeld et al. paper (this issue) describes the integration of MFS capabilities into an existing electronic health record (EHR) technology. As noted by Lyon, Wasse et al. (this issue), the lack of MFS integration within the existing electronic health record is likely a key barrier to adoption, implementation, and sustainment. Specifically, there is concern that duplicative data entry may be required for each clinical encounter. Moreover, simply having to navigate two separate systems simultaneously in the context of clear time limitations could undermine MFS potential for impact. Interoperability can technically be achieved using products such as middleware (Bernstein, 1996), which supports communication between freestanding MFS and EHRs. However, these solutions typically retain system independence in that providers must still move between systems/screens. This may be suboptimal, considering that the number of clicks required to complete a task is one commonly used metric for “ease of use” in HIT (Clauson, Marsh, Polen, Seamon, & Ortiz, 2007). Perhaps for these reasons, Steinfeld et al. (this issue) demonstrated great success when they moved away from a separate vendor to support ROM and instead made revisions directly to their EHR. Bruns et al. (this issue) likewise saw value in, and pursued, the creation of an EHR that would optimize MFS functions and integrate many additional capabilities, although this approach is perhaps more costly up front.
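To illustrate the middleware pattern mentioned above, the sketch below shows a minimal adapter that copies completed outcome measures from a freestanding MFS into an EHR over HTTP. Every endpoint, field name, and payload shape here is a hypothetical assumption chosen for illustration; production integrations would use the systems’ actual interfaces (e.g., an HL7/FHIR exchange) and, as noted, would still leave providers moving between screens unless feedback is surfaced inside the EHR itself.

```python
import requests  # widely used third-party HTTP client

# Hypothetical endpoints for illustration; no real MFS or EHR exposes these URLs.
MFS_API = "https://mfs.example.org/api/outcomes"
EHR_API = "https://ehr.example.org/api/observations"

def sync_outcomes(client_id: str) -> int:
    """Copy completed MFS outcome records into the EHR; returns count synced."""
    # Read completed measures from the freestanding MFS.
    response = requests.get(MFS_API, params={"client": client_id}, timeout=10)
    response.raise_for_status()
    outcomes = response.json()

    # Translate each record into the EHR's (assumed) observation format and post it.
    synced = 0
    for outcome in outcomes:
        observation = {
            "patient": client_id,
            "code": outcome["measure"],          # e.g., "PHQ-9"
            "value": outcome["score"],
            "effectiveDate": outcome["administered_on"],
        }
        requests.post(EHR_API, json=observation, timeout=10).raise_for_status()
        synced += 1
    return synced
```

Note that this adapter keeps the two systems independent, which is exactly the limitation discussed above: data flow is automated, but the clinician-facing feedback still lives outside the EHR workflow.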
Further, although the present special section is focused on the development and implementation of digital technologies that support outcome monitoring and feedback, we caution against racing to “go digital” when analog methods may also be appropriate. In addition to the potential for increased complexity that accompanies some digital solutions (e.g., computer/internet access, system compatibility), many accounts of implementing MFS or EHRs reflect a continued need for access to paper measures or charts. In the study by Bickman et al. (this issue), for instance, many practitioners opted for paper-and-pencil administration even when a digital option was available. Studies of EHR use have similarly found that paper-based workarounds are common (Flanagan, Saleem, Militello, Russ, & Doebbeling, 2013).
Stakeholder Input into System Development
Each article in this special section that describes an MFS development process references some degree of stakeholder information and feedback gathering. Nadeem et al. (this issue) engaged in an adaptation process that made use of strong community partnerships. Their approach represents rapid MFS tailoring that is especially accessible when working with a relatively simple (and easily altered) MFS. Bruns and colleagues (this issue) sought to generate a more widely applicable EHR that could support Wraparound services in numerous settings, and so they carefully sought input via a national stakeholder survey, followed by an interactive webinar. Subsequently, they engaged in formative usability testing, much like what is suggested by Lyon, Wasse, and colleagues (this issue). At its core, user-centered design (UCD) emphasizes that direct interactions with (and, frequently, observations of) end users are essential to successful technology development and integration. Importantly, the process of gathering user input also requires some degree of “pruning” to determine which points of feedback are most in need of attention. Lyon, Wasse et al. (this issue) describe their decision-making process surrounding which components of stakeholder feedback to incorporate into a revision of their technology, which may serve as a useful guide to others new to UCD-informed procedures.
Ultimately, the goal of MFS design and development is to optimize both innovation-organization and innovation-intervention fit. To some, the costs associated with stakeholder testing during design may appear prohibitive; however, personnel and cost savings are likely realized when this approach is taken. For instance, Johnson, Johnson, and Zhang (2005) reported that costs increase exponentially depending on the phase in which problems are identified. Specifically, fixing a problem in the initial design phase is 10 times cheaper than fixing one in the development/programming phase. It will be useful for future MFS development research to share information on cost in general, and on cost savings associated with incorporating stakeholder testing in the design phases, especially given the potential need for adaptation to fit each unique organization or context (Lyon, Wasse et al., this issue).
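To make the arithmetic of this claim explicit, the cost pattern reported by Johnson, Johnson, and Zhang (2005) can be written as geometric growth in repair cost across phases; the functional form below is our illustrative gloss, not a formula from their paper:

$$C_k \approx C_0 \, r^{k}, \qquad r \approx 10,$$

where $C_0$ is the cost of fixing a problem identified during initial design and $k$ indexes subsequent phases (design, $k = 0$; development/programming, $k = 1$; and so on). Under this reading, a usability problem that costs \$500 to fix on a design mockup would cost on the order of \$5,000 to fix once programmed, which is why early stakeholder testing can pay for itself.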
Conclusion and Future Directions
The articles contained within this special section reflect a current snapshot of evolving efforts to use digital technologies to support ROM practices in mental and behavioral health. Indeed, MFS technologies are constantly appearing and disappearing from the behavioral health service landscape (Lyon, Lewis et al., this issue), ensuring that the accuracy of any published account of the state of this field will rapidly diminish. Therefore, in addition to documenting characteristics of specific existing systems, it may be particularly important to identify the overarching and generalizable mechanisms through which MFS impact provider behavior and client outcomes. Considering the centrality of feedback processes to MFS functioning, initial direction in mechanism identification may most appropriately come from well-articulated feedback theories (e.g., Kluger & DeNisi, 1996; Riemer, Rosof-Williams, & Bickman, 2005). As noted in our review of MFS capabilities (Lyon, Lewis et al., this issue), mechanistic research of this sort could ultimately support the development of more streamlined, targeted, and effective technologies.
Of more immediate concern is the question of what potential individual- and system-level MFS users might look for when making adoption decisions. It may be unlikely that MFS technologies are available “off the shelf” and ready to implement in a novel context. Similar to most psychosocial interventions (Stirman, Miller, Toder, & Calloway, 2013), MFS implementation in a new setting is likely to require a process of adaptation as well as a period during which an organization receives support surrounding effective system use. The major difficulty, however, is that such a process has the potential to result in considerable system “scope creep,” and ultimately a bloated technology, unless clear parameters are placed on the system from the outset. In contrast, more streamlined and parsimonious innovations are likely to be more readily adopted (Rogers, 2010). Simpler MFS also decrease the potential for overlap or redundancy with existing EHR technologies, perhaps improving system acceptability. Clearer articulation of the behaviors and functions that MFS should support (as highlighted by Chorpita et al., this issue) may be one promising pathway toward this type of simplification and specification. In the meantime, potential adopters may be well advised to question the allure of a system that boasts the most functions, measurement tools, or data presentation options, and instead think explicitly about the practices that they are hoping to support.
In sum, we hope readers of this section (and the accompanying section in this special issue; Edbrooke-Childs, Wolpert, & Deighton, this issue) will find considerable utility in the content of the papers presented. Whether MFS technologies persist for decades to come, or are ultimately integrated into more fully-functional EHRs and disappear from the HIT landscape, may depend on the ability of system developers and implementers to supply adopters with user-centered, cost-effective, and parsimonious products that provide meaningful information to service providers, recipients, and administrators at the moment when it is most needed.
Acknowledgments
Funding: Work on this publication was supported by the National Institute of Mental Health under award numbers K08MH095939 and R01MH103310.
Footnotes
Conflict of interest: Both authors declare that they have no conflicts of interest.
Ethical approval: This article does not contain any studies with human participants performed by either of the authors.
References
- Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Administration and Policy in Mental Health and Mental Health Services Research. 2011;38(1):4–23. doi: 10.1007/s10488-010-0327-7.
- Bernstein PA. Middleware: A model for distributed system services. Communications of the Association for Computing Machinery. 1996;39(2):86–98.
- Bickman L. A measurement feedback system (MFS) is necessary to improve mental health outcomes. Journal of the American Academy of Child & Adolescent Psychiatry. 2008;47(10):1114–1119. doi: 10.1097/CHI.0b013e3181825af8.
- Bickman L, Douglas SR, De Andrade ARV, Tomlinson M, Gleacher A, Olin S, Hoagwood K. Implementing a measurement feedback system: A tale of two sites. Administration and Policy in Mental Health and Mental Health Services Research (this issue). doi: 10.1007/s10488-015-0647-8.
- Bickman L, Kelley SD, Athay M. The technology of measurement feedback systems. Couple and Family Psychology: Research and Practice. 2012;1:274–284. doi: 10.1037/a0031022.
- Bruns EJ, Hyde KL, Sather A, Hook AN, Lyon AR. Applying user input to the design and testing of an electronic behavioral health information system for wraparound care coordination. Administration and Policy in Mental Health and Mental Health Services Research (this issue). doi: 10.1007/s10488-015-0658-5.
- Carroll AE. How health information technology is failing to achieve its full potential. JAMA Pediatrics. 2015;169(3):201–202. doi: 10.1001/jamapediatrics.2014.3115.
- Chorpita BF, Daleiden EL, Bernstein A. At the intersection of health information technology and decision support: Measurement feedback systems…and beyond. Administration and Policy in Mental Health and Mental Health Services Research (this issue). doi: 10.1007/s10488-015-0702-5.
- Clauson KA, Marsh WA, Polen HH, Seamon MJ, Ortiz BI. Clinical decision support tools: Analysis of online drug information databases. BMC Medical Informatics and Decision Making. 2007;7(1):7. doi: 10.1186/1472-6947-7-7.
- Cohen D. Assessing the effect of an electronic decision support system on children’s mental health service outcomes. Journal of Technology in Human Services. 2015;33(3):225–240.
- Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: A consolidated framework for advancing implementation science. Implementation Science. 2009;4(1):50. doi: 10.1186/1748-5908-4-50.
- de Jong K. Challenges in the implementation of measurement feedback systems. Administration and Policy in Mental Health and Mental Health Services Research (this issue).
- Douglas S, Button S, Casey SE. Implementing for sustainability: Promoting use of a measurement feedback system for innovation and quality improvement. Administration and Policy in Mental Health and Mental Health Services Research. 2014:1–6. doi: 10.1007/s10488-014-0607-8.
- Edbrooke-Childs J, Wolpert M, Deighton J. Introduction to the special section on implementing routine outcome monitoring in child and adult mental health services. Administration and Policy in Mental Health and Mental Health Services Research (this issue). doi: 10.1007/s10488-016-0726-5.
- Flanagan ME, Saleem JJ, Militello LG, Russ AL, Doebbeling BN. Paper- and computer-based workarounds to electronic health record use at three benchmark institutions. Journal of the American Medical Informatics Association. 2013;20(e1):e59–e66. doi: 10.1136/amiajnl-2012-000982.
- Garland AF, Haine-Schlagel R, Brookman-Frazee L, Baker-Ericzen M, Trask E, Fawley-King K. Improving community-based mental health care for children: Translating knowledge into action. Administration and Policy in Mental Health and Mental Health Services Research. 2013;40(1):6–22. doi: 10.1007/s10488-012-0450-8.
- Glasgow RE, Fisher L, Strycker LA, Hessler D, Toobert DJ, King DK, Jacobs T. Minimal intervention needed for change: Definition, use, and value for improving health and health research. Translational Behavioral Medicine. 2014;4(1):26–33. doi: 10.1007/s13142-013-0232-1.
- Gleacher AA, Olin SS, Nadeem E, Pollock M, Ringle V, Bickman L, Hoagwood K. Implementing a measurement feedback system in community mental health clinics: A case study of multilevel barriers and facilitators. Administration and Policy in Mental Health and Mental Health Services Research (this issue). doi: 10.1007/s10488-015-0642-0.
- Halford WK, Hayes S, Christensen A, Lambert M, Baucom DH, Atkins DC. Toward making progress feedback an effective common factor in couple therapy. Behavior Therapy. 2012;43(1):49–60. doi: 10.1016/j.beth.2011.03.005.
- Health Information Technology for Economic and Clinical Health (HITECH) Act, Title XIII of Division A and Title IV of Division B of the American Recovery and Reinvestment Act of 2009 (ARRA), Pub. L. No. 111-5, 123 Stat. 226 (Feb. 17, 2009), codified at 42 U.S.C. §§300jj et seq.; §§17901 et seq.
- Johnson CM, Johnson TR, Zhang J. A user-centered framework for redesigning health care interfaces. Journal of Biomedical Informatics. 2005;38(1):75–87. doi: 10.1016/j.jbi.2004.11.005.
- Karsh BT. Beyond usability: Designing effective technology implementation systems to promote patient safety. Quality and Safety in Health Care. 2004;13(5):388–394. doi: 10.1136/qshc.2004.010322.
- Kluger AN, DeNisi A. The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin. 1996;119(2):254–284.
- Leviss J. HIT or Miss–Studying failures to enable success. Applied Clinical Informatics. 2011;2(3):345–349. doi: 10.4338/ACI-2011-03-IE-0020.
- Littlejohns P, Wyatt JC, Garvican L. Evaluating computerised health information systems: Hard lessons still to be learnt. British Medical Journal. 2003;326(7394):860–863. doi: 10.1136/bmj.326.7394.860.
- Lyon AR, Lewis CC, Boyd MR, Hendrix E, Liu F. Capabilities and characteristics of digital measurement feedback systems: Results from a comprehensive review. Administration and Policy in Mental Health and Mental Health Services Research (this issue). doi: 10.1007/s10488-016-0719-4.
- Lyon AR, Wasse JK, Ludwig K, Zachry M, Bruns EJ, Unützer J, McCauley E. The Contextualized Technology Adaptation Process (CTAP): Optimizing health information technology to improve mental health systems. Administration and Policy in Mental Health and Mental Health Services Research (this issue). doi: 10.1007/s10488-015-0637-x.
- Maguire M. Methods to support human-centred design. International Journal of Human-Computer Studies. 2001;55(4):587–634.
- Nadeem E, Cappella E, Holland S, Coccaro C, Crisonino G. Development and piloting of a classroom-focused measurement feedback system. Administration and Policy in Mental Health and Mental Health Services Research (this issue). doi: 10.1007/s10488-015-0651-z.
- Patient Protection and Affordable Care Act of 2010, Pub. L. No. 111-148, § 6301, 124 Stat. 727 (2010).
- Powell BJ, McMillen JC, Proctor EK, Carpenter CR, Griffey RT, Bunger AC, York JL. A compilation of strategies for implementing clinical innovations in health and mental health. Medical Care Research and Review. 2012;69(2):123–157. doi: 10.1177/1077558711430690.
- Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, Kirchner JE. A refined compilation of implementation strategies: Results from the Expert Recommendations for Implementing Change (ERIC) project. Implementation Science. 2015;10(1):21. doi: 10.1186/s13012-015-0209-1.
- Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, Hensley M. Outcomes for implementation research: Conceptual distinctions, measurement challenges, and research agenda. Administration and Policy in Mental Health and Mental Health Services Research. 2011;38(2):65–76. doi: 10.1007/s10488-010-0319-7.
- Ribitzky R, Sterling MA, Bradley V. EHR usability pain points survey Q4 2009. Presentation at the 2010 Annual HIMSS Conference & Exhibition, March 2010.
- Riemer M, Rosof-Williams J, Bickman L. Theories related to changing clinician practice. Child and Adolescent Psychiatric Clinics of North America. 2005;14(2):241–254. doi: 10.1016/j.chc.2004.05.002.
- Rogers EM. Diffusion of innovations. Simon and Schuster; 2010.
- Ruud T. Routine outcome measures in Norway: Only partly implemented. International Review of Psychiatry. 2015;27(4):338–344. doi: 10.3109/09540261.2015.1054268.
- Scott K, Lewis CC. Using measurement-based care to enhance any treatment. Cognitive and Behavioral Practice. 2015;22(1):49–59. doi: 10.1016/j.cbpra.2014.01.010.
- Steinfeld B, Franklin A, Mercer B, Fraynt R, Simon G. Progress monitoring in an integrated health care system: Tracking behavioral health vital signs. Administration and Policy in Mental Health and Mental Health Services Research (this issue). doi: 10.1007/s10488-015-0648-7.
- Steinfeld B, Scott J, Vilander G, Marx L, Quirk M, Lindberg J, Koerner K. The role of lean process improvement in implementation of evidence-based practices in behavioral health care. The Journal of Behavioral Health Services & Research. 2014:1–15. doi: 10.1007/s11414-013-9386-3.
- Stirman SW, Miller CJ, Toder K, Calloway A. Development of a framework and coding system for modifications and adaptations of evidence-based interventions. Implementation Science. 2013;8(1):65. doi: 10.1186/1748-5908-8-65.
- Substance Abuse and Mental Health Services Administration (SAMHSA). Partners for Change Outcome Management System (PCOMS): International Center for Clinical Excellence. SAMHSA’s National Registry of Evidence-based Programs and Practices. January 2012. Retrieved from http://www.nrepp.samhsa.gov/ViewIntervention.aspx?id=249.
- Unützer J, Chan YF, Hafer E, Knaster J, Shields A, Powers D, Veith RC. Quality improvement with pay-for-performance incentives in integrated behavioral health care. American Journal of Public Health. 2012;102(6):e41–e45. doi: 10.2105/AJPH.2011.300555.
- Unützer J, Katon W, Callahan CM, Williams JW Jr, Hunkeler E, Harpole L, IMPACT Investigators. Collaborative care management of late-life depression in the primary care setting: A randomized controlled trial. Journal of the American Medical Association. 2002;288(22):2836–2845. doi: 10.1001/jama.288.22.2836.