Published in final edited form as: Adm Policy Ment Health. 2016 May;43(3):441–466. doi: 10.1007/s10488-016-0719-4

Capabilities and Characteristics of Digital Measurement Feedback Systems: Results from a Comprehensive Review

Aaron R. Lyon 1,*, Cara C. Lewis 1,2, Meredith R. Boyd 2, Ethan Hendrix 1, Freda Liu 1
PMCID: PMC4833592  NIHMSID: NIHMS759274  PMID: 26860952

Abstract

Measurement Feedback Systems (MFS) are a class of Health Information Technology (HIT) that functions as an implementation support strategy for integrating measurement-based care or routine outcome monitoring into clinical practice. Although many MFS have been developed, little is known about their functions. This paper reports findings from an application of the Health Information Technology – Academic and Commercial Evaluation (HIT-ACE) methodology, a systematic and consolidated evaluation method, to MFS designed for use in behavioral healthcare settings. Forty-nine MFS were identified and subjected to systematic characteristic and capability coding. Results are presented with respect to the representation of characteristics and capabilities across MFS.

Keywords: Health Information Technology, Measurement Feedback System, Measurement Based Care, Routine Outcome Monitoring

Introduction

Measurement feedback systems (MFS) are technologies with the ability to capture service recipient data from regular assessment of treatment progress (e.g., functional outcomes, symptom changes) or processes (e.g., therapeutic alliance, services delivered) and then deliver that information to clinicians and other relevant parties to support decision-making (Bickman, 2008). MFS have emerged as an implementation strategy with great potential to support provider implementation of evidence-based assessment practices such as measurement-based care (MBC). MBC is defined as the use of data collected throughout treatment to drive clinical decisions (Scott & Lewis, 2015) and is often used interchangeably with terms such as routine outcome monitoring. Although MBC has been touted as a “minimal intervention needed for change” in behavioral health service delivery (Scott & Lewis, 2015), studies have consistently documented that mental health practitioners infrequently apply MBC in their practice (Garland, Kruse, & Aarons, 2003; Hatfield & Ogles, 2004), underscoring a critical need for effective MFS to promote MBC use.

MFS have rapidly proliferated in recent years, presumably because of their ability to support the implementation of MBC and to encourage accountability and efficiency within service systems. It is important to acknowledge that MFS alone typically cannot support the full integration of MBC; rather, MFS are one implementation strategy that may be used in a multi-faceted protocol along with strategies such as needs assessment, training, technical assistance, and guidelines (see Nadeem et al., this issue; Lyon et al., this issue; Steinfeld et al., this issue). More research is necessary to determine which strategies are needed to effectively support MFS; however, the articles in this special issue present useful preliminary evidence in this regard.

Unfortunately, one of the factors limiting MFS advancement as an effective implementation strategy is that, similar to other types of health information technologies, MFS development has typically been confined to “proprietary silos” (Brailer, 2005) with little information shared across development teams. This siloing inhibits the advancement of MFS as well as MBC implementation by restricting consumer and researcher access to information about this emerging class of digital technology. This dearth of information prevents consumers from making informed decisions when choosing between systems and prevents researchers from delineating essential capabilities of MFS that best support MBC. Without specific methods for integrating information about different MFS, rapid advancement is unlikely.

A comprehensive mapping and synthesis of MFS technologies is needed that systematically identifies extant systems, details their common and unique functions, and evaluates the ways in which they are designed to support MBC. Such a review has relevance to potential users, MFS developers, and researchers alike. For instance, prior to making adoption decisions, individual providers and service system administrators are often interested in the types of assessment tools contained within MFS, capacity for integration with other technologies (e.g., electronic health records), and costs (Bruns, Hyde, Sather, Hook, & Lyon, this issue). Furthermore, the results of such a synthesis can be fed back to developers to guide additional product innovations, reduce system redundancies, and facilitate interoperability, thus improving relative advantage and the likelihood of adoption (Rogers, 2010). Researchers interested in how MFS can support MBC would also benefit from systematically collected information about MFS because the resulting information could be used to organize MFS capabilities for evaluation (e.g., different types of alerts), develop models for how MFS could ideally function, or drive emerging inquiry into the mechanisms through which they influence professional practice and service recipient outcomes (Douglas et al., 2015). Moreover, it is unclear whether the rapid proliferation of MFS by independent research and commercial development teams stems from necessity (i.e., whether MFS must be tailored to each service setting, EMR system, population, and treatment model in order to be effective), a lack of awareness of existing MFS capabilities (i.e., teams do not have easy or sufficient access to details about existing MFS), underestimation of the time and resources required to develop new MFS, or a combination of these factors.

In light of the anticipated benefits of a review of MFS technologies and the lack of substantive work in this area, our team initiated a project to identify and evaluate existing MFS. To accomplish these goals, we developed the Health Information Technologies – Academic and Commercial Evaluation (HIT-ACE) methodology (Lyon et al., under review), which is guided by theories and frameworks related to feedback processes (Kluger & DeNisi, 1996; Riemer, Rosof-Williams, & Bickman, 2005), user-centered design (Courage & Baxter, 2005; Norman & Draper, 1986), and implementation science (Rogers, 2010), and is intended to evaluate key system capabilities and characteristics. The HIT-ACE methodology is available in a separate manuscript that details its development and includes broad MFS review results (e.g., number of systems, system representation in the scientific literature) as an example application (Lyon et al., under review). The current paper, in contrast, provides a detailed report on MFS capabilities identified through the HIT-ACE methodology, including those that explicitly support the implementation of MBC in service systems.

Method

Scope of the Review

In the current review, MFS were defined as “digital technologies that (1) include, or provide the ability to input into the system, quantitative measures that are administered regularly throughout treatment to collect ongoing information about the process and progress of treatment, and (2) provide an automated presentation of the information described above in order to supply timely and clinically useful feedback to mental health providers about their cases” (Lyon et al., under review). This review is limited to MFS that address behavioral health for several reasons. First, recent research – including multiple reviews of studies – has provided strong evidence of benefits of using MFS in this context (Bickman, Kelley, Breda, de Andrade, & Riemer, 2011; Gondek, Edbrooke-Childs, Fink, Deighton, & Wolpert, in press; Krägeloh, Czuba, Billington, Kersten, & Siegert, 2015; Lambert et al., 2003). Second, MFS for behavioral health have flooded the market in recent years in response to increasing demands for accountability and evidence of positive treatment outcomes. However, there are no means for interested consumers, researchers, and developers to compare and differentiate among MFS and make decisions regarding adoption/use. Third, MFS intended for behavioral health may require and possess capabilities or processes distinct from those used to support the treatment of physical illness.

Identification of Systems and Associated Materials

Because MFS originate from both the academic and commercial sectors, a comprehensive search was conducted via multiple channels (Google searches, research database searches, and queries to members of relevant implementation science-focused behavioral health professional listservs) to identify MFS that fit the above-mentioned definition (for a full description of search strings and search method, see Lyon et al., under review). All available materials for each MFS were collected for analysis (e.g., academic articles, websites, MFS brochures).

Inclusion and exclusion criteria were developed to further refine the list of MFS for coding. MFS were included if their descriptions aligned with the definition of MFS and they facilitated MBC in behavioral health care. MFS were excluded if they did not appear to facilitate MBC or if it was not possible to locate websites or literature associated with the system. As of December 31st, 2014, the final list included 49 MFS for review.

A trained member of the research team reviewed all available material collected for each MFS, and the team collaboratively selected the most information-rich source for coding. When available, MFS websites were given preference for coding, as they likely contained the most up-to-date information. However, if the information on a website was sparse (e.g., contained only the name of the product and a logo without any additional information), then an in-depth article was also coded, if available.

Codebook Development

The coding scheme was created with the purpose of capturing the capabilities and characteristics of each MFS in order to classify and describe extant MFS. A capability of an MFS is defined as the ability to perform or achieve certain actions or outcomes through a set of controllable and measurable faculties, features, functions, processes, or services. An example of an MFS capability included in the coding scheme is “tracks standardized outcomes.” A characteristic of a system is a distinguishing trait, quality, or property. An example of an MFS characteristic included in the coding scheme is “internet based” (e.g., cloud-based on a remote server), as opposed to “software based” (i.e., loaded onto a single computer).

Capability and characteristic codes were developed both deductively and inductively. The deductive approach involved a review of the literature associated with electronic health records (EHR) and health information technologies (HIT) more generally, as well as a review of Feedback Intervention Theory (FIT; Kluger & DeNisi, 1996) and Contextualized Feedback Intervention Theory (Riemer et al., 2005), which propose how and when feedback is effective depending on its content, mode of delivery, and timing. The inductive approach involved applying the preliminary coding scheme to representative MFS websites to evaluate the scheme’s comprehensiveness; new codes were then added to capture capabilities and characteristics not yet reflected in the scheme. Finally, consumers and experts (researchers who have created or studied MFS technologies) provided feedback on the coding scheme, resulting in additional codes. Approximately 60% of our codes were developed internally, drawing from theory or the research team’s existing knowledge; approximately 25% were generated during the initial review of MFS materials and the coding process; and the remainder (~15%) were developed following feedback gathered from external experts (see Lyon et al., under review, for a more complete description of the origin of each code).

The investigative team then piloted the resulting coding scheme on another MFS website. Each characteristic and capability was coded as either present (“1”) or absent (“0”). A code of “1” was given only when the capability or characteristic was explicitly discussed; a code of “0” was given if the capability or characteristic was not discussed or if the description was vague. This coding approach was adopted because source materials never explicitly indicated that an MFS did not possess a given capability.
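For readers who want the coding logic in computational form, the scheme reduces to a binary system-by-capability matrix from which the prevalence figures reported in the Results (e.g., the “Total (%)” rows of Tables 3–6) follow directly. The following is a minimal sketch only; the system names and codes are invented for illustration and are not the study’s data.

```python
# A minimal sketch of the dichotomous coding scheme described above.
# The systems and codes here are hypothetical, not the study's data.

N = 49  # total number of reviewed MFS; all percentages use this denominator

# One entry per MFS: 1 = capability explicitly described in the coding
# source; 0 = not discussed, or described only vaguely (a "0" never
# asserts that the capability is absent from the product itself).
codes = {
    "System A": {"tracks standardized outcomes": 1, "alerts to provider": 0},
    "System B": {"tracks standardized outcomes": 1, "alerts to provider": 1},
}

def prevalence(capability: str) -> tuple[int, float]:
    """Count systems coded present and express that count as a % of N."""
    count = sum(system_codes[capability] for system_codes in codes.values())
    return count, round(100 * count / N, 1)

for capability in ("tracks standardized outcomes", "alerts to provider"):
    count, pct = prevalence(capability)
    print(f"{capability}: {count} ({pct})")  # mirrors the "Total (%)" rows
```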

After coding the MFS, authors met to compare discrepant codes. Using this information, the coding scheme was refined (e.g., redundant codes were removed, wording was edited for clarity), the coding approach was formalized (e.g., a process was established for systematically reviewing information-rich materials, especially websites with many links), and the definitions of capabilities and characteristics were clarified to promote reliable coding.

Codes were also divided into four categories of capabilities (Tracking, Feedback, Customizability, and Data) and five categories of characteristics (Technology, Training and Technical Support, Administration and Use Options, System Acquisition, and Accessibility) for ease of coding as well as ease of interpretation of coding results. Tables 1 and 2 contain all capabilities and characteristics (and associated definitions), respectively. The Tracking category consists of capabilities associated with the MFS’s ability to capture outcomes and processes that are relevant to a service recipient’s progression through treatment. An example capability from this category is “tracks interventions delivered by the provider.” The Feedback category consists of capabilities related to an MFS’s capacity to give feedback based on data inputted into the system and to provide alerts containing this feedback, as well as prompts based on use (or lack of use) of the system. An example capability from this category is “compares treatment outcomes to user defined goals.” The Customizability category contains capabilities associated with how and what aspects of the MFS can be altered to fit a site, provider, or service recipient’s unique needs. An example of a capability that makes a system customizable is “provider can add new tools directly.” The Data category contains capabilities of the MFS related to how data can be displayed, disseminated, and manipulated. Example capabilities in this category include “aggregates data at multiple levels” and “displays outcomes as graphs.” It is important to note that different data inputs and displays may themselves be considered types of feedback. For example, the aggregation of data by individual treatment provider could act as feedback on provider performance; simply inputting service recipient scores into the system and noticing the responses is also a type of feedback. However, for the purposes of coding, the Feedback category contains capabilities that involve the system actively delivering feedback, and the Data category contains capabilities related to how the data can be utilized.

Table 1.

Capability codes and definitions

Category Capability Definition
Feedback
Outcome monitoring for provider is a prime function System's prime function is noted here
Immediate feedback timing System provides immediate feedback (i.e. within seconds; available upon screen refresh) to service provider upon data collection as opposed to a couple hours/days later, by mail or email, etc.
Provides standard gap feedback Standard-gap feedback provides information to a user that compares data contained within system to information derived from an external source. This includes standard gaps to norms, prior expectation, past performance, performance of other groups, ideal goal.
Alerts to provider Alerts are made to service provider in order to bring critical information to the user’s attention in ways that circumvent the usual pathway of providing information. May include emails, pop-ups, flags, etc.
Corrective feedback from system System provides corrective feedback (i.e. feedback aimed at changing a provider's approach, strategy or treatment decision) to service provider with the aim of producing a more positive treatment outcome
Makes referrals System facilitates referrals for additional services (i.e., those other than the reason why the MFS-facilitated contact occurred, such as a referral to a primary physician) either in-house (within an agency) or to a different organization.
Compares service providers to other providers System is able to compare users to other providers in various ways, e.g. how often providers use system, how compliant they are to system.
Alerts to others Alerts are made to individuals other than the service provider, i.e. supervisors, guardians, etc.
Compares treatment outcomes to user defined goals System is able to compare treatment outcomes across time to previously established individual treatment targets.
Data
Summary reports System creates a static snapshot of relevant information, likely designed for (1) paper chart documentation or (2) sharing with some party (e.g., supervisor, insurance company, client). This report will likely include only a subset of the information available in system.
Displays outcomes as graphs System has ability to produce a graphic display of various outcomes.
Aggregates data at multiple levels* System is able to present data on various levels beyond the individual treatment recipient level, e.g. by treatment provider, center, measure, etc.
View option of treatment recipient System gives service provider the ability to view a single client’s relevant information.
Summary reports for service recipient A static summary report specifically designed to be shared with the service recipient.
Customizability
Library of measures to choose from System provides 2 or more measures that users can choose to utilize on a case-by-case or program-by-program basis.
Provider determines frequency of measure administration Service provider has the ability to determine how often measures are administered by system; frequency is not set by system.
New tools and measures can be added New outcome monitoring tools, instruments, or measures can be added to system.
Ability to create idiographic tracking mechanisms System has ability to create idiographic tracking mechanisms that may be used to measure progress related to the individual treatment targets recorded by system.
Customizable dashboard System user is able to customize and determine what information appears on/in system dashboard.
Provider can add new tools directly Individual service providers are able to add new outcome monitoring tools themselves rather than other parties, i.e. supervisors or system administrators.
Ability to customize alerts System allows for customizable alerts, e.g. timing of alerts, mode of alert delivery, types of alerts, etc.
Tracking
Tracks standardized outcomes Outcomes are specified, quantitative treatment targets that may reasonably be believed to result from the intervention. May include mental/behavioral health (e.g., depression, conduct problems, other symptoms), client functioning across domains (e.g., work, school, social, etc.), physical health, etc. Outcomes may include standardized (i.e., norm-referenced) assessment scales or idiographic (i.e., individualized) outcomes.
Tracks idiographic measures relevant to treatment process System is able to track idiographic/non-standardized outcomes (e.g. OCD compulsions, tantrums, self-injury incidents).
Tracks therapeutic processes System tracks therapeutic processes related to treatment, e.g. therapeutic alliance, engagement/motivation.
Tracks interventions delivered by providers System allows for tracking over time of specified treatment protocol or intervention element/subcomponent use (e.g. exposure therapy, mindfulness exercises, etc.).
Tracks/ measures individual treatment targets (goals) System is able to track and measure the individual treatment targets/goals that were recorded by system.
Records treatment goals System is able to explicitly record defined individual treatment goals for the service recipient.
Tracks critical events for service recipient System allows for indicating the occurrence of important/clinically-relevant events (e.g., suicide attempt, fights with significant others) at discrete points in time regardless of whether these have been previously identified for ongoing monitoring.

Table 2.

Characteristic codes and definitions

Category Characteristic Definition
Technology
Reports system as evidence-based Coding source states that any aspect of system (e.g., measures, entire systems) is evidence-based.
HIPAA compliant Coding source explicitly states that system and its components are HIPAA compliant.
HL7 compliant Coding source explicitly states that system is HL7 Compliant.
Adaptive measures Measures included in system and their included questions are adaptive based on service recipient's responses.
Generate invoices for the purposes of billing System generates invoices based on information within itself.
System is an EHR System explicitly states that it is an electronic health record (EHR).
Reports fulfilling "Meaningful Use" criteria Coding source explicitly states that system fulfills “Meaningful Use” criteria.
Reports system as Blue Button Compliant Coding source explicitly states that system is Blue Button Compliant.
Training and Technical Support
Available training for system use other than demo There is available training for use of the system (e.g. in person training, webinars, etc.)
Available technology support Tech support involves the availability of individuals with extensive experience in the navigation/use of the system and problem solving related to issues with the technology itself.
Available instruction manual for system There is an available and freely accessible instruction manual for system.
Ongoing support beyond technical support System or its creating organization provides ongoing support for the implementation of system and its integration into provider workflows, organizational policies, etc. (e.g., continued consultation about its use in clinical care, administrator decision-making based on aggregated data). This support is ongoing over time.
Administration and Use Options
Internet based System is fully web-based, accessible via a browser, and updated without requiring a download to a local machine or device.
Free standing software System is software that "lives" on a local machine/device (e.g. Microsoft Word) that must be updated by user.
Ability to use on mobile devices System has ability to be used on mobile devices, e.g. PDA, phone, tablet, etc.
Available service recipient portal for data entry Service recipients are able to enter data directly into system via a dedicated portal (e.g. log-in in wait room to complete measures before therapy session).
System Acquisition
Available for purchase/acquisition System is currently available for purchase or acquisition.
Price listed in source materials Coding source provides the price of the system for those interested in purchasing.
Available demo of system for promotional purposes A demo of system is available without requiring purchase or acquisition of system.
Contact information of developer Coding source provides contact information for system’s developer.
Accessibility
Provisions for special populations System contains built-in, automatic capabilities to support its accessibility to special populations such as populations with particular diagnoses
Available in other languages System has built-in, automatic availability in at least 1 language other than English.
Provisions for disabled populations System contains built-in, automatic capabilities to support its accessibility to disabled populations without the need for additional assistive devices (e.g. visually impaired).

The Technology category contains characteristics such as HIPAA compliance and HL7 compliance. The Training and Technical Support category provides information regarding available training, support, and instruction manuals for MFS. The Administration and Use Options category contains characteristics related to how and where the MFS can be used, such as compatibility with mobile device platforms. The System Acquisition category includes characteristics related to the ability of an interested consumer to purchase or acquire an MFS. Finally, the Accessibility category includes characteristics related to ease of use by specific types of users (e.g., disabled populations, non-English-speaking users).

Coding

Dichotomous codes

All MFS were reviewed by two independent coders who then met to come to consensus about discrepant codes through open dialogue. As coders reviewed each MFS, they collected information (website links, copied and pasted text) to justify their coding decisions, which aided in the consensus process.

Descriptive subcategory codes

When applicable, descriptive information was collected and coded for capabilities to provide context and detail for the dichotomous codes. For example, if an MFS was coded “1” for the capability “tracks standardized outcomes,” the coder would also document information regarding the specific types of outcomes the system tracked. When collecting descriptive information, coders copied and pasted text directly from the coding source for subsequent coding. The qualitative information was then coded using conventional content analysis (Hsieh & Shannon, 2005), which focuses on describing phenomena of interest based on the content of materials reviewed. This allowed for a more detailed characterization of capabilities that may make MFS more or less compelling to consumers and inform adoption decisions. The two coders independently coded the qualitative information by allowing the information collected to determine the subcategory codes. The coders then met to discuss the codes they created and decide on a final set of subcategories. Subsequent recoding occurred using a consensus process similar to that described by Hill and colleagues (Hill, Knox, Thompson, Nutt Williams, & Hess, 2005; Hill, Thompson, & Nutt Williams, 1997), in which materials were coded independently by two different raters who then met to arrive at consensus judgments through open dialogue (DeSantis & Ugarriza, 2000; Hill et al., 2005). The consensus coding process is designed to circumvent biases, better capture data complexity, avoid errors, and reduce groupthink (Hill et al., 1997). Through this process, coders recoded the information to fit the agreed-upon subcategories and met to address any discrepancies through consensus discussions. For example, for the capability “tracks standardized outcomes,” information related to the types of outcomes tracked was collected. Upon independent coding followed by discussion between the two coders, four subcategories emerged: psychological outcomes, physical/biological outcomes, outcomes related to functioning (e.g., social functioning), and outcomes related to interactions with treatment (e.g., satisfaction with treatment, engagement in treatment).
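As a rough sketch of how this two-stage process flows from verbatim excerpts to the subcategory tallies reported later (e.g., Tables 8–14), consider the following; the systems, excerpts, and subcategory assignments are invented for illustration, with labels echoing those that emerged for “tracks standardized outcomes.”

```python
# An illustrative sketch of the descriptive subcategory coding, using
# invented systems, excerpts, and consensus assignments.
from collections import defaultdict

# Text copied verbatim from each coding source whenever the dichotomous
# code was "1"; consensus coders then assign emergent subcategories.
excerpts = {
    "System A": "Graphs depression and anxiety symptom scales over time.",
    "System B": "Monitors sleep, pain, and satisfaction with treatment.",
}
subcategory_codes = {
    "System A": {"psychological"},
    "System B": {"physical/biological", "interaction with treatment"},
}

for system, text in excerpts.items():
    print(f"{system}: {text!r} -> {sorted(subcategory_codes[system])}")

# Tally subcategories; percentages are reported against all 49 systems.
N = 49
tallies = defaultdict(int)
for categories in subcategory_codes.values():
    for category in categories:
        tallies[category] += 1
for category, count in sorted(tallies.items()):
    print(f"{category}: {count} ({100 * count / N:.1f}% of all MFS)")
```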

Results

Results from HIT-ACE coding are presented below. First, we provide a summary of the characteristics (i.e., distinguishing traits, qualities, or properties) of systems and then describe system capabilities (i.e., abilities to perform or achieve certain actions/outcomes through faculties, features, functions, processes, or services) in greater detail.

Characteristics

With respect to Technology, the majority of systems reported having an evidence base (83.7%). A minority of systems reported being HIPAA compliant (34.7%), even fewer reported HL7 compliance (6.1%), and 28.6% reported other means of integration with technologies such as electronic health records (EHRs). Few MFS possessed adaptive measures (16.4%), meaning the questions asked change based on user input, and even fewer MFS (10.3%) were able to generate invoices for the purposes of billing. Two systems primarily functioned as EHRs with outcome monitoring as a secondary feature. Three systems reported fulfilling “Meaningful Use” criteria, meaning they were identified and approved for use in the Medicare EHR incentive program (“Electronic Health Records [EHR] Incentive Programs,” 2015). Blue Button compliance was not reflected in any MFS.

With respect to Training and Technical Support, fewer than one-half of systems referenced providing specific training in system use (44.9%) or technical support of some kind (44.9%). In just over one-third of cases (36.7%), a manual was available to provide guidance on MFS use. A small number of MFS provided ongoing support beyond technical support (24.5%), such as consultation, data cleaning services, and custom data query writing services.

With respect to Administration and Use Options, the majority of MFS were internet based (83.7%) as opposed to freestanding software (22.4%), with some MFS (11.9%) offering both an internet and a software option. Fewer than one-half (40.8%) of MFS could be administered on mobile devices (e.g., phone, tablet), and 36.7% contained a portal for service recipients to enter their own data.

With respect to System Acquisition, a little over one-half (55.1%) were clearly available for purchase; of those, not quite two-thirds (63.0%) listed the price in the source materials, and 74.1% provided a web-based promotional demonstration. The majority of MFS (87.8%) provided developer contact information.

Finally, with respect to Accessibility, a minority of MFS possessed provisions for special populations, with 4.1% of MFS (i.e., two MFS) offering accommodations for disabled populations (such as service recipients with developmental disabilities) and 32.7% providing language options beyond English (most typically Spanish, with some MFS indicating as many as 90 language options).

Capabilities

Results of the capability coding for the 49 MFS are presented by category (i.e., Tracking, Feedback, Data, Customizability) in Tables 3–6.

With respect to Tracking (see Table 3), and consistent with the scope of the review, the vast majority (93.9%) of MFS tracked standardized outcomes, while only 28.6% offered the capability of tracking individualized/idiographic measures relevant to the treatment process (e.g., OCD compulsions, tantrums, self-injury incidents). A quarter or fewer of the MFS (24.5% or less) offered tracking of other aspects or processes of treatment (e.g., interventions delivered by providers, individual treatment goals, critical events for the service recipient).

With respect to Feedback (see Table 4), the majority of MFS (91.8%) provided feedback on service recipient outcomes and progress to providers as a primary function of the MFS. While over half (55.1%) of the MFS provided immediate feedback to providers upon service recipient completion of assessments, just under half (44.9%) provided feedback about how a service recipient’s current status related to some standard or norm (i.e., standard gap feedback). A large minority of MFS (42.9%) also provided some type of alert to providers, meaning the system brings critical information to the user’s attention in ways that circumvent the usual pathway of providing information.

In terms of Data capabilities (see Table 5), just over two-thirds of the MFS provided summary reports (a static snapshot of relevant information; 67.4%) or displayed data graphically (67.4%). The majority (59.2%) of the MFS aggregated data at multiple levels, such as aggregation of data by treatment provider or service center.

For the Customizability category (Table 6), a large majority of MFS (71.4%) offered a library of standardized assessments from which to choose; however, only about a third of these (34.3%) specified who is able to choose which measures in the library to administer. Of the MFS that reported this information, eight specified that the provider can directly choose the measures, one specified a “user with appropriate permissions,” two specified the practice or organization, and one specified that the administrator or provider could choose measures from the library to administer. About one-fifth of the MFS (20.4%) allowed new measures that do not already exist in the library to be added. However, of these, only a select few (8.2% of all MFS) allowed providers to add the new tools themselves; typically, systems only allow MFS developers or administrators with special permissions to add tools that providers might request. About one-fifth of MFS (20.4%) allowed providers to specify the frequency of assessments. Even fewer MFS allowed providers to customize dashboards (10.2%) or alerts (6.1%).

Table 3.

Tracking capabilities possessed by each MFS

System | Tracks standardized outcomes | Tracks idiographic measures relevant to treatment process | Tracks therapeutic processes | Tracks interventions delivered by providers | Tracks/measures individual treatment targets (goals) | Records treatment goals | Tracks critical events for service recipient | Total capabilities possessed by each system
ACORN 1 0 1 1 0 0 0 3
AKQUASI 1 0 1 0 0 0 0 2
ALERT 1 0 0 0 0 0 0 1
Assessment Center 1 0 0 0 0 0 0 1
BASIS-24 1 0 0 0 0 0 0 1
Behavior Monitoring Assessment System (BIMAS) 1 0 0 0 1 1 0 3
Brief Problem Monitoring (BPM) 1 0 0 1 0 0 0 2
Carepaths 1 0 0 0 0 0 0 1
CelestHealth System 1 0 1 0 0 0 0 2
Centervention 1 0 0 0 0 0 0 1
CFS 0 0 0 0 0 0 0 0
Child Health & Development Interactive System (CHADIS) 1 0 0 1 0 0 0 2
Computer-based Health Evaluation System (CHES) 1 1 0 0 0 0 0 2
Clinical Dashboard 1 1 1 1 1 1 1 7
Care Management Tracking System (CMTS) 1 0 0 1 1 1 0 4
COMMEND 1 1 0 0 1 0 0 3
CORE 1 1 1 0 1 1 0 5
CROMIS 1 0 0 0 0 0 0 1
DIALOG 1 0 0 0 0 0 0 1
Evidence-Based Assessment System for Clinicians (EAS-C) 1 1 1 0 1 0 0 4
Functional Assessment System (FAS) 1 0 0 0 1 1 0 3
Innerlife 0 0 0 0 0 0 0 0
Intra/Compass 1 0 0 0 0 0 0 1
MHITS 1 0 0 1 1 1 0 4
Mobile Therapy 1 1 0 1 0 0 0 3
My Outcomes 1 0 1 1 0 0 0 3
OQ Measures 1 1 1 0 0 0 1 4
Outcome Tracker 1 0 0 0 0 0 0 1
Owl Outcomes 1 0 0 0 0 0 0 1
Partners for Change Outcome Management (PCOMS) 1 0 1 0 0 0 0 2
Penelope 1 1 1 1 1 1 0 6
Polaris-BH 1 0 0 0 0 0 0 1
Polaris-CD 1 0 1 0 0 0 1 3
PQRS PRO 1 0 0 0 0 0 0 1
PracticeWise 1 0 0 1 0 0 0 2
Psychological Outcome Profiles (PSYCHLOPS) 1 1 0 0 0 0 0 2
Systemic Therapy Inventory of Change (STIC) 1 1 1 0 0 0 0 3
SumOne for Kids 1 0 0 0 0 0 0 1
Texas Children's Mental Health Plan (TCMHP) 1 0 0 0 0 0 0 1
The Schwartz Outcome Monitoring 1 0 0 0 0 0 0 1
Therapy Rewind 1 0 0 1 1 1 0 4
Telesage Outcome Measurement System (TOMS) 1 0 1 0 0 0 0 2
Tool Kit 1 1 0 1 0 0 0 3
Treatment Outcome Package (TOP) 1 1 0 0 1 1 0 4
Treatment Progress Indicator (TPI) 1 0 0 0 0 0 0 1
Treatment Response Assessment for Children (TRAC) 1 1 0 0 0 0 0 2
Valant 1 0 0 0 0 0 0 1
VitalHealth 1 1 0 0 0 0 0 2
Wrap Around Team Monitoring 0 0 0 0 0 0 0 0

Total (%) systems that possess the capability 46 (93.9) 14 (28.6) 13 (26.6) 12 (24.5) 11 (22.5) 9 (18.4) 3 (6.2)

Table 6.

Customizability capabilities possessed by each MFS

System | Library of measures to choose from | Provider determines frequency of measure administration | New tools and measures can be added | Ability to create idiographic tracking mechanisms | Customizable dashboard | Provider can add new tools directly | Ability to customize alerts | Total capabilities possessed by each system
ACORN 1 0 1 0 1 0 0 3
AKQUASI 1 1 0 0 0 0 0 2
ALERT 1 0 0 0 0 0 0 1
Assessment Center 1 1 1 1 0 1 0 5
BASIS-24 1 0 0 0 0 0 0 1
BIMAS 1 0 0 0 0 0 0 1
BPM 1 1 1 0 0 0 0 3
Carepaths 1 0 0 0 0 0 0 1
CelestHealth System 1 0 0 0 0 0 0 1
Centervention 0 0 0 0 0 0 0 0
CFS 0 0 0 0 0 0 0 0
CHADIS 1 0 0 0 0 0 0 1
CHES 1 0 1 0 0 0 0 2
Clinical Dashboard 0 0 0 0 0 0 0 0
CMTS 0 0 0 0 0 0 0 0
COMMEND 0 0 0 1 0 0 0 1
CORE 1 0 1 0 0 0 0 2
CROMIS 0 0 0 0 0 0 0 0
DIALOG 1 0 0 0 0 0 0 1
EAS-C 1 1 1 1 0 1 0 5
FAS 1 0 0 0 0 0 0 1
Innerlife 0 0 0 0 0 0 0 0
Intra/Compass 1 0 0 0 0 0 0 1
MHITS 1 0 0 0 0 0 0 1
Mobile Therapy 1 1 0 0 0 0 0 2
My Outcomes 0 0 0 0 1 0 1 2
OQ Measures 1 1 0 0 0 0 0 2
Outcome Tracker 1 1 0 0 0 0 0 2
Owl Outcomes 1 0 1 1 0 0 0 3
PCOMS 1 0 0 0 0 0 0 1
Penelope 1 0 1 1 1 1 1 6
Polaris-BH 0 0 0 0 0 0 1 1
Polaris-CD 1 0 1 0 0 0 0 2
PQRS PRO 1 0 0 0 0 0 0 1
PracticeWise 1 0 0 0 1 0 0 2
PSYCHLOPS 0 0 0 0 0 0 0 0
STIC 1 0 0 0 0 0 0 1
SumOne for Kids 0 0 0 0 0 0 0 0
TCMHP 1 0 0 0 0 0 0 1
The Schwartz Outcome Monitoring 0 0 0 0 0 0 0 0
Therapy Rewind 0 0 0 0 0 0 0 0
TOMS 1 0 0 0 0 0 0 1
Tool Kit 1 0 0 1 0 0 0 2
TOP 1 1 0 0 0 0 0 2
TPI 0 1 0 0 0 0 0 1
TRAC 1 0 0 0 0 0 0 1
Valant 1 0 0 0 0 0 0 1
VitalHealth 1 1 1 1 1 1 0 6
Wrap Around Team Monitoring 1 0 0 0 0 0 0 1

Total (%) systems that possess the capability 35 (71.4) 10 (20.4) 10 (20.4) 7 (14.3) 5 (10.2) 4 (8.2) 3 (6.1)

Table 4.

Feedback capabilities possessed by each MFS

System | Outcome monitoring for provider is a prime function | Immediate feedback timing | Provides standard gap feedback | Alerts to provider | Corrective feedback from system | Makes referrals | Compares service providers to other providers | Alerts to others | Compares treatment outcomes to user defined goals | Total capabilities possessed by each MFS
ACORN 1 0 1 0 0 0 1 0 0 3
AKQUASI 1 1 1 1 1 0 0 0 0 5
ALERT 1 0 1 1 1 0 1 0 0 5
Assessment Center 1 1 1 0 0 0 0 0 0 3
BASIS-24 1 1 1 0 0 0 0 0 0 3
BIMAS 1 0 1 0 0 0 0 0 1 3
BPM 1 0 1 1 0 0 0 0 0 3
Carepaths 0 1 0 1 0 1 0 0 0 3
CelestHealth System 1 1 0 0 0 0 0 0 0 2
Centervention 1 1 0 1 0 0 0 0 0 3
CFS 1 1 0 0 0 0 0 0 0 2
CHADIS 1 0 0 1 1 1 0 0 0 4
CHES 1 1 1 0 0 0 0 0 0 3
Clinical Dashboard 1 1 0 0 1 0 0 0 0 3
CMTS 1 0 0 1 0 1 0 0 1 4
COMMEND 1 0 0 0 0 0 1 0 0 2
CORE 1 1 1 1 0 0 0 0 0 4
CROMIS 1 0 0 0 1 0 0 0 0 2
DIALOG 1 0 0 0 0 0 0 0 0 1
EAS-C 1 1 1 0 0 0 0 1 0 4
FAS 1 1 0 1 1 1 0 0 0 5
Innerlife 1 0 0 0 0 0 0 0 0 1
Intra/Compass 1 1 1 0 0 0 0 0 0 3
MHITS 1 1 1 1 0 1 0 1 0 6
Mobile Therapy 1 0 0 1 0 0 0 0 0 2
My Outcomes 1 1 1 1 1 0 0 1 0 6
OQ Measures 1 1 1 1 1 0 0 0 0 5
Outcome Tracker 1 0 0 1 0 0 0 0 0 2
Owl Outcomes 1 1 1 0 0 0 0 0 0 3
PCOMS 1 1 0 0 0 0 0 0 0 2
Penelope 1 1 1 1 1 1 1 1 1 9
Polaris-BH 1 1 1 1 0 1 1 0 0 6
Polaris-CD 1 1 1 1 1 1 0 1 0 7
PQRS PRO 0 1 0 0 0 0 0 0 0 1
PracticeWise 1 1 0 0 1 0 1 0 0 4
PSYCHLOPS 1 0 0 0 0 0 0 0 0 1
STIC 1 1 0 0 0 0 0 0 0 2
SumOne for Kids 1 0 0 0 0 0 0 0 0 1
TCMHP 1 0 0 0 0 0 0 0 0 1
The Schwartz Outcome Monitoring 1 0 1 0 0 0 0 0 0 2
Therapy Rewind 1 0 0 1 0 0 0 0 0 2
TOMS 1 1 0 1 0 0 0 0 0 3
Tool Kit 1 0 0 0 0 0 0 0 0 1
TOP 1 1 1 1 1 1 1 0 0 7
TPI 1 1 1 0 0 0 0 0 0 3
TRAC 1 0 0 0 0 0 0 0 0 1
Valant 0 0 0 0 0 0 0 0 0 0
VitalHealth 1 0 1 1 1 0 0 0 0 4
Wrap Around Team Monitoring 0 0 0 0 0 0 0 0 0 0

Total (%) systems that possess the capability 45 (91.8) 27 (55.1) 22 (44.9) 21 (42.9) 13 (26.5) 9 (18.4) 7 (14.3) 5 (10.2) 3 (6.1)

Table 5.

Data capabilities possessed by each MFS

System | Summary reports | Displays outcomes as graphs | Aggregate data at multiple levels* | View option of treatment recipient | Summary reports for service recipient | Total capabilities possessed by each system
AKQUASI 1 1 1 1 0 4
ALERT 1 0 1 0 0 2
Assessment Center 1 1 1 1 1 5
BASIS-24 1 1 1 1 0 4
BIMAS 1 1 1 1 0 4
BPM 1 1 1 1 0 4
Carepaths 1 1 0 1 1 4
CelestHealth System 1 1 1 1 0 4
Centervention 1 0 1 1 0 3
CFS 1 1 0 1 0 3
CHADIS 1 1 1 1 1 5
CHES 1 1 1 0 0 3
Clinical Dashboard 1 1 1 1 1 5
CMTS 1 0 1 1 0 3
COMMEND 1 1 1 1 0 4
CORE 1 1 1 1 1 5
CROMIS 1 0 0 1 1 3
DIALOG 0 1 1 0 0 2
EAS-C 0 1 1 1 0 3
FAS 1 1 1 1 1 5
Innerlife 1 0 0 0 0 1
Intra/Compass 0 0 0 0 0 0
MHITS 1 0 1 0 0 2
Mobile Therapy 1 1 0 1 0 3
My Outcomes 1 1 1 1 1 5
OQ Measures 1 1 0 1 1 4
Outcome Tracker 0 1 1 1 0 3
Owl Outcomes 0 1 0 0 0 1
PCOMS 0 0 0 0 0 0
Penelope 1 1 1 1 0 4
Polaris-BH 1 0 0 0 1 2
Polaris-CD 1 1 1 1 1 5
PQRS PRO 1 0 0 0 0 1
PracticeWise 0 1 1 0 0 2
PSYCHLOPS 0 0 0 0 0 0
STIC 1 1 0 1 0 3
SumOne for Kids 0 0 1 0 0 1
TCMHP 0 1 1 0 0 2
The Schwartz Outcome Monitoring 0 0 1 0 0 1
Therapy Rewind 0 1 0 1 0 2
TOMS 1 1 1 0 0 3
Tool Kit 1 1 0 1 0 3
TOP 1 1 1 0 0 3
TPI 1 1 0 1 0 3
TRAC 0 1 0 0 0 1
Valant 1 0 0 0 1 2
VitalHealth 0 1 1 0 0 2
Wrap Around Team Monitoring 0 0 0 0 0 0

Total (%) systems that possess the capability 33 (67.4) 33 (67.4) 29 (59.2) 27 (55.2) 12 (24.5)

The ten most frequently possessed capabilities are listed in Table 7. Subcategory data are provided for the top ten capabilities when the coding materials consistently provided sufficiently detailed and relevant information associated with the capability (see also Tables 8–12). With respect to the types of outcomes tracked by the MFS (associated with the capability “tracks standardized outcomes”), four broad subcategories emerged: behavioral/mental health outcomes (tracked by 57.1% of MFS; e.g., depression symptoms, anxiety symptoms, self-harm/suicidality), physical/biological health outcomes (24.5%; e.g., sleep, pain, and mobility), life/social functioning (46.9%; e.g., work functioning, social/interpersonal functioning, and life functioning), and interaction with treatment (16.3%; e.g., engagement in treatment, satisfaction with treatment, and therapeutic alliance). Full details of these coding results can be found in Table 8. With respect to the library of measures, great variability in the number of available measures was observed. Specifically, of the MFS that had a library of measures to choose from (71.4%), 13 had between two and five measures, five had between six and ten measures, five had 11–40 measures, four had over 40 measures, and eight did not specify the number of measures in their libraries. See Table 9 for a breakdown of measures in the library.

Table 7.

Top ten most frequently possessed capabilities by MFS

Capabilities Number (%) Qualitative data collected
Tracks standardized outcomes 46 (93.9) Outcomes tracked
Outcome monitoring for provider is a prime function 45 (91.9) N/A
Library of measures to choose from 35 (71.5) Number of measures in library
Summary reports 34 (69.4) N/A
Displays outcomes as graphs 34 (69.4) N/A
Aggregate data at multiple levels 30 (61.3) Levels at which data can be aggregated
View option of treatment recipient 28 (57.2) N/A
Immediate feedback timing 27 (55.2) N/A
Provides standard gap feedback 22 (44.9) Types of standard gap feedback provided
Alerts to provider 21 (42.9) Types of alerts to provider; delivery mode of alerts

Table 8.

Types of outcomes tracked

Psychological (Behavioral/Mental): Anxiety, Depression, Self-harm/Suicidality. Physical/Biological: Pain, Mobility/functioning, Sleep. Functioning: Work, Life, Social/interpersonal. Interaction with Treatment: Satisfaction, Engagement, Therapeutic Alliance.

System | Anxiety | Depression | Self-harm/Suicidality | Pain | Mobility/functioning | Sleep | Work functioning | Life functioning | Social/interpersonal functioning | Satisfaction | Engagement | Therapeutic Alliance
ACORN 0 0 0 0 0 0 0 0 0 0 0 0
AKQUASI 1 1 0 1 1 0 1 1 1 1 1 1
ALERT 0 1 0 0 0 0 0 1 0 0 0 0
Assessment Center 0 1 0 1 1 1 0 1 1 0 0 0
BASIS-24 0 1 1 0 0 0 0 0 1 0 0 0
BIMAS 0 0 0 0 0 0 0 0 1 0 0 0
BPM 0 0 0 0 0 0 0 0 0 0 0 0
Carepaths 0 0 0 0 0 0 0 0 0 0 0 0
CelestHealth 0 0 1 0 0 0 0 1 0 0 0 0
Centervention 0 0 0 0 0 0 0 0 0 0 0 0
CHADIS 1 1 0 0 0 0 0 0 1 0 0 0
CHES 1 1 0 1 1 0 0 0 0 0 0 0
Clinical Dashboard 0 0 0 0 0 0 0 0 0 0 0 0
CMTS 0 0 0 0 0 0 0 0 0 0 0 0
COMMEND 0 0 0 0 0 0 0 0 0 0 0 0
CORE 0 0 1 0 0 0 0 1 0 0 0 0
CROMIS 0 0 0 0 0 0 0 0 0 0 0 0
DIALOG 0 0 0 0 0 0 1 1 1 1 0 0
EAS-C 0 0 0 0 0 0 0 0 0 0 0 0
FAS 0 0 0 0 0 0 0 0 0 0 0 0
Intra/Compass 1 1 0 0 1 0 1 1 1 0 0 0
MHITS 0 0 0 0 0 0 0 0 0 0 0 0
Mobile Therapy 1 1 0 0 0 1 0 0 1 0 0 0
MyOutcomes 0 0 0 0 0 0 0 1 1 0 0 0
OQMeasures 1 1 0 0 0 0 0 0 1 0 0 0
Outcome Tracker 1 1 0 0 0 0 0 0 0 0 0 0
Owl Outcomes 0 0 0 0 0 0 0 0 0 0 0 0
PCOMS 0 0 0 0 0 0 0 0 1 0 0 0
Penelope 0 0 0 0 0 0 0 0 0 0 0 0
Polaris-BH 0 0 0 0 0 0 0 0 0 0 0 0
Polaris-CD 1 1 1 0 0 0 1 1 1 1 1 1
PQRS PRO 0 1 1 1 1 1 0 0 0 0 0 0
PracticeWise 0 0 0 0 0 0 0 0 0 0 0 0
PSYCHLOPS 0 0 0 0 0 0 0 1 0 0 0 0
STIC 1 1 0 0 0 0 0 0 1 0 0 1
SumOne for Kids 0 0 0 0 0 0 0 0 0 0 0 0
TCMHP 0 0 0 0 0 0 0 0 1 1 0 0
The Schwartz Outcome Monitoring 0 0 0 0 0 0 0 0 1 0 0 0
Therapy Rewind 0 0 0 0 0 0 0 0 0 0 0 0
TOMS 1 1 0 0 0 0 1 1 1 0 0 0
Tool Kit 0 0 0 0 0 0 0 0 0 0 0 0
TOP 1 1 1 0 0 1 1 1 1 1 1 0
TPI 1 1 0 0 0 0 0 0 1 0 0 0
TRAC 0 0 0 0 0 0 0 0 0 0 0 0
Valant 1 1 1 0 1 0 0 0 0 0 0 0

Total (%) systems that track this outcomea 13 (26.6) 17 (34.7) 7 (14.3) 4 (8.2) 6 (12.3) 4 (8.2) 6 (12.3) 12 (24.5) 18 (36.8) 5 (10.3) 3 (6.2) 3 (6.2)

Note. Only MFS that possess the capability "tracks standardized outcomes" were included in the table. Systems that do not have information in the table, did not include details about the specific outcomes coded in their materials

a Percentages calculated based on total number of systems (N=49)

Table 12.

Type and delivery mode of alerts to providers

Types of alerts (columns a–d); delivery mode of alert (columns e–h)

System | High risk/critical items^a | Workflow or case management^b | Measure completion (or lack of)^c | Gradations in improvement or decline^d | Cue/reminder/flag^e | Dashboard/console/report^f | Colors/highlighting^g | Emails^h
AKQUASI 0 0 0 0 0 0 1 1
ALERT 1 0 0 0 0 0 0 0
BPM 0 0 1 0 0 0 0 1
Carepaths 0 0 0 0 0 0 0 0
Centervention 0 0 0 0 0 0 0 0
CHADIS 0 1 0 0 1 1 0 0
CMTS 1 1 0 0 1 0 0 0
CORE 1 1 0 0 1 0 0 0
FAS 1 0 0 0 0 1 0 0
MHITS 0 0 0 0 0 0 0 0
Mobile Therapy 0 0 0 0 0 0 0 0
My Outcomes 1 0 1 1 0 1 1 0
OQ Measures 1 0 1 1 0 1 1 0
Outcome Tracker 0 1 0 0 0 0 0 0
Penelope 0 1 0 0 0 0 0 1
Polaris-BH 0 0 0 0 0 0 0 0
Polaris-CD 1 0 0 0 1 0 0 0
Therapy Rewind 0 0 0 0 1 0 0 0
TOMS 0 1 0 0 0 0 0 0
TOP 1 0 0 0 0 0 0 0
VitalHealth 0 0 0 0 0 0 0 0

Total (%) systems that give alerts to providers in the specified way^i 8 (16.3) 6 (12.2) 3 (6.1) 2 (4.1) 5 (10.2) 4 (8.2) 3 (6.1) 3 (6.1)

Note. Only MFS that possess the capability "alerts to provider" were included in the table. Systems without information in the table did not include details about the specific alerts coded in their materials.

a Alerts provider when the service recipient endorses suicidality or other critical items on a measure
b Alerts providers of specific tasks they must complete
c Alerts provider when service recipient fails to complete a measure assigned to him/her
d Alerts provider of service recipient improvement or decline based on a specific measure
e Information is brought to the provider’s attention via a cue, flag or reminder
f Information is brought to provider’s attention in the dashboard, console, or report
g Information is brought to the provider’s attention using specific colors or highlights
h Information is brought to the provider’s attention via email
i Percentages calculated based on total number of systems (N=49)

Table 9.

Number of measures to choose from in MFS library

System | 2 to 5 | 6 to 10 | 11 to 40 | 40+ | Multiple, unspecified^a
ACORN 0 0 0 0 1
AKQUASI 0 0 1 0 0
ALERT 1 0 0 0 0
Assessment Center 0 1 0 0 0
BASIS-24 1 0 0 0 0
BIMAS 1 0 0 0 0
BPM 1 0 0 0 0
Carepaths 0 0 0 0 1
CelestHealth 1 0 0 0 0
CHADIS 0 0 0 1 0
CHES 0 0 0 0 1
CORE 0 0 1 0 0
DIALOG 0 1 0 0 0
EAS-C 1 0 0 0 0
FAS 1 0 0 0 0
Intra/Compass 1 0 0 0 0
MHITS 0 0 0 0 1
Mobile Therapy 0 0 0 1 0
OQ Measures 0 1 0 0 0
Outcome Tracker 0 0 1 0 0
Owl Outcomes 0 0 0 0 1
PCOMS 1 0 0 0 0
Penelope 0 0 0 0 1
Polaris-CD 0 0 0 1 0
PQRS PRO 0 0 0 1 0
PracticeWise 0 0 0 0 1
STIC 1 0 0 0 0
TCMHP 0 1 0 0 0
TOMS 0 1 0 0 0
Tool Kit 0 0 1 0 0
TOP 1 0 0 0 0
TRAC 1 0 0 0 0
Valant 0 0 1 0 0
VitalHealth 0 0 0 0 1
Wrap Around Team Monitoring 1 0 0 0 0

Total (%) systems that have specified range of measures in library^b 13 (26.5) 5 (10.2) 5 (10.2) 4 (8.2) 8 (16.3)

Note. Only MFS that possess the capability "library of measures to choose from" were included in the table.

a Coding source references a library of multiple measures to choose from but does not specify the exact number of measures
b Percentages calculated based on total number of systems (N=49)

With respect to the MFS capability to aggregate data at multiple levels, eight levels were observed. Specifically, aggregation across a site, multiple sites, or an entire organization (system level) was possible in 22.5% of MFS; aggregation by a single provider or provider caseload (individual provider level) in 14.3%; aggregation by multiple providers or providers’ caseloads (multiple provider level) in 10.3%; aggregation of data related to a single service recipient (individual recipient level) in 28.6%; aggregation of data across multiple service recipients by a single variable such as diagnosis or demographic information (multiple recipient level) in 30.7%; aggregation by date or range of dates (date range level) in 10.3%; aggregation by a single measure or item on a measure (measure/item level) in 18.4%; and customized aggregation at any level or criteria specified by the user (custom level) in 8.2%. See Table 10 for a breakdown of aggregation data by MFS.
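To make concrete what aggregation at different levels entails, the toy sketch below (hypothetical records, not drawn from any reviewed system) rolls the same assessment data up by site, by provider, or by recipient.

```python
# A toy illustration of "aggregates data at multiple levels": the same
# assessment records rolled up at the site, provider, or recipient level.
from collections import defaultdict
from statistics import mean

# Hypothetical assessment records: (site, provider, recipient, score)
records = [
    ("Site 1", "Provider A", "Client 1", 18),
    ("Site 1", "Provider A", "Client 2", 11),
    ("Site 1", "Provider B", "Client 3", 25),
    ("Site 2", "Provider C", "Client 4", 9),
]

def aggregate(level_index: int) -> dict[str, float]:
    """Mean score grouped by one level: 0=site, 1=provider, 2=recipient."""
    groups = defaultdict(list)
    for record in records:
        groups[record[level_index]].append(record[3])
    return {key: mean(scores) for key, scores in groups.items()}

print(aggregate(0))  # system/site level
print(aggregate(1))  # individual provider level
print(aggregate(2))  # individual recipient level
```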

Table 10.

Levels at which data can be aggregated

System | System Level^a | Provider: Unspecified^b | Provider: Individual^c | Provider: Multiple^d | Recipient: Unspecified^e | Recipient: Individual^f | Recipient: Multiple^g | Date range^h | Measure/Item Level^i | Custom^j | Other
ACORN 0 0 1 0 0 1 1 0 0 0 0
AKQUASI 0 0 0 0 0 1 1 0 1 0 0
ALERT 0 0 0 0 0 0 1 0 0 1 0
Assessment Center 0 0 0 0 0 0 0 0 1 0 1
BASIS-24 0 0 0 0 0 1 1 0 0 0 0
BIMAS 0 0 0 0 0 1 1 0 1 0 0
BPM 1 1 0 1 0 0 1 1 1 0 0
CelestHealth 1 0 0 0 0 1 0 0 0 0 0
Centervention 1 0 0 0 0 0 0 0 0 0 0
CHADIS 0 0 0 0 0 0 1 0 0 0 0
CHES 0 0 0 0 0 0 0 0 0 1 0
Clinical Dashboard 1 0 1 0 0 1 0 0 0 0 0
CMTS 0 0 0 0 0 1 1 0 0 0 0
COMMEND 0 0 1 1 1 0 0 1 1 0 0
CORE 0 0 0 0 0 0 0 0 0 1 0
DIALOG 0 0 0 0 0 0 0 1 0 0 1
EAS-C 0 0 0 0 1 1 0 0 0 0 0
FAS 0 0 0 0 0 0 1 1 0 0 0
MHITS 1 0 0 0 0 1 1 0 0 0 1
MyOutcomes 1 0 1 0 0 0 1 0 1 0 0
Outcome Tracker 0 0 1 0 0 1 1 0 1 0 0
Penelope 1 0 1 1 0 0 1 0 0 1 0
Polaris-CD 1 0 0 1 0 1 0 1 0 0 1
PracticeWise 1 0 0 1 0 0 1 0 0 0 0
SumOne for Kids 0 0 0 0 0 0 0 0 0 0 1
TCMHP 0 0 0 0 0 1 0 0 1 0 0
The Schwartz Outcome Monitoring 0 0 0 0 0 1 0 0 0 0 1
TOMS 1 0 0 0 0 0 0 0 1 0 0
TOP 1 0 1 0 0 0 0 0 0 0 0
VitalHealth 0 0 0 0 0 1 1 0 0 0 0

Total (%) systems that aggregate at the level specified^k 11 (22.5) 1 (2.1) 7 (14.3) 5 (10.3) 2 (4.1) 14 (28.6) 15 (30.7) 5 (10.3) 9 (18.4) 4 (8.2) 6 (12.3)

Note. Only MFS that possess the capability "aggregates data at multiple levels" were included in the table.

a Aggregates data across a site, multiple sites or an entire organization
b Aggregates data by providers, could be individual or multiple providers
c Aggregates data of a single provider or provider caseload
d Aggregates data of multiple providers
e Aggregates data by service recipients, could be individual or multiple service recipients
f Aggregates data of an individual service recipient
g Aggregates data of multiple service recipients based on a specific variable such as diagnosis or demographic information
h Aggregates data by a specified date or date range
i Aggregates data about a single measure or item on a measure completed by a service recipient multiple times
j Aggregates data by any level or based on any criteria of interest
k Percentages calculated based on total number of systems (N=49)

Within the descriptive data available for the types of standard gap feedback provided, four subcategories emerged: (1) expected progression through treatment (i.e., milestones, trajectories, and expected outcomes) was represented in 16.3% of MFS; (2) published norms and clinical cut off scores for measures (i.e., clinical norms) in 26.5% of MFS; (3) other service recipients in the same area or system (i.e., local norms) in 4.1%; and (4) other service recipients with similar diagnoses, baseline scores, or symptom severity (i.e., matched/specified norms) in 8.2%. See Table 11 for a breakdown of types of standard gap feedback by MFS. Finally, two types of subcategory data were collected for the capability “alerts to provider”: types of alerts and delivery mode of alerts. Subcategories for types of alerts included high risk or critical items (e.g., suicidality; 16.3%), workflow alerts (e.g., reminders to complete tasks; 12.2%), alerts about service recipient measure completion or lack of completion (6.1%), and alerts regarding service recipient improvement or decline (4.1%). Delivery modes of the alerts included cues, reminders, or flags (10.2%); the dashboard, console, or summary reports (8.2%); colors/highlighting (6.1%); and emails (6.1%). See Table 12 for full details.
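As a small illustration of the mechanics behind standard gap feedback (the cut-off value, expected improvement rate, and scores below are invented, not taken from any reviewed system), the sketch compares an observed score to a clinical norm and to an expected treatment trajectory.

```python
# A toy illustration of two common kinds of standard gap feedback:
# comparison to a clinical cut-off ("clinical norms") and comparison
# to an expected treatment trajectory. All values are hypothetical.

CLINICAL_CUTOFF = 10             # hypothetical cut-off for a symptom measure
EXPECTED_DROP_PER_SESSION = 1.0  # hypothetical expected improvement rate

def clinical_norm_feedback(score: float) -> str:
    """Report the gap between an observed score and the clinical cut-off."""
    gap = score - CLINICAL_CUTOFF
    status = "above" if gap > 0 else "at or below"
    return f"{status} clinical cut-off by {abs(gap):g} points"

def trajectory_feedback(baseline: float, score: float, session: int) -> str:
    """Flag a case as off track when the score exceeds the expected value."""
    expected = baseline - EXPECTED_DROP_PER_SESSION * session
    if score <= expected:
        return "on track"
    return f"off track (expected <= {expected:g}, observed {score:g})"

print(clinical_norm_feedback(14))              # above cut-off by 4 points
print(trajectory_feedback(18, 16, session=4))  # off track at session 4
```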

Table 11.

Types of standard gap feedback from MFS

System | Milestones/trajectory/expected outcome^a | Norms unspecified^b | Clinical norms^c | Local norms^d | Matched/specified^e
ACORN 1 0 1 0 0
AKQUASI 0 1 0 0 0
ALERT 1 0 1 0 0
Assessment Center 0 0 0 0 0
BASIS-24 0 0 1 1 0
BIMAS 0 0 1 0 0
BPM 0 0 0 0 1
CHES 0 0 1 0 0
CORE 0 0 1 0 0
EAS-C 0 0 1 1 0
Intra/Compass 1 0 1 0 0
MHITS 1 1 1 0 0
My Outcomes 1 0 0 0 1
OQ Measures 1 0 0 0 0
Owl Outcomes 0 1 0 0 0
Penelope 1 0 1 0 0
Polaris-BH 0 0 1 0 0
Polaris-CD 1 0 0 0 1
The Schwartz Outcome Monitoring 0 0 1 0 0
TOP 0 0 1 0 0
TPI 0 0 0 0 1
VitalHealth 0 1 0 0 0

Total (%) systems that provide standard gap feedback in the specified way^f 8 (16.3) 4 (8.2) 13 (26.5) 2 (4.1) 4 (8.2)

Note. Only MFS that possess the capability "provides standard gap feedback" were included in the table. Systems without information in the table did not include details about the specific standard gap feedback coded in their materials.

a Comparison of service recipient progress to expected progress through treatment
b Comparison of service recipient to unspecified norms, could be clinical or local
c Comparison to published norms and clinical cut off scores for a measure
d Comparison to other service recipients in the same area or system
e Comparison to other service recipients with similar diagnoses, baseline scores or symptom severity
f Percentages calculated based on total number of systems (N=49)

Subcategories were also coded for capabilities possessed by more than 20% of MFS (about ten systems) because this number of MFS provided enough information for qualitative coding. Capabilities with associated subcategory data included “tracks interventions delivered by providers” (24.5%) and “corrective feedback from system” (26.5%). In association with the capability “tracks interventions delivered by providers,” descriptive data regarding the types of interventions tracked were collected. These included treatment history (8.1%), in-session strategies or notes about strategies (6.1%), medication (4.1%), treatment recipient response to intervention (2.0%), and referrals (2.0%) (see Table 13). In association with the capability “corrective feedback from system,” descriptive data regarding the types of corrective feedback were collected. These included recommendations, strategies, and next steps for treatment (12.2%), general decision-making support (6.1%), service recipient fit with treatment (2.0%), and direction to outside sources or materials for useful information (2.0%) (see Table 14).

Table 13.

Types of interventions delivered by provider tracked by MFS

System | Treatment history^a | In session strategies/notes^b | Medication^c | Response to intervention^d | Referrals^e
ACORN 1 0 0 0 0
BPM 0 0 0 1 0
CHADIS 0 0 0 0 0
Clinical Dashboard 0 0 0 0 0
CMTS 1 0 1 0 0
MHITS 1 0 0 0 1
Mobile Therapy 1 0 1 0 0
My Outcomes 0 0 0 0 0
Penelope 0 1 0 0 0
PracticeWise 0 0 0 0 0
Therapy Rewind 0 1 0 0 0
Tool Kit 0 1 0 0 0

Total (%) systems that track the specified intervention^f 4 (8.1) 3 (6.1) 2 (4.1) 1 (2.0) 1 (2.0)

Note. Only MFS that possess the capability "tracks interventions delivered by provider" were included in the table.

a Tracks treatment received from current or past providers
b Tracks therapeutic strategies/interventions the provider uses in session and relevant notes taken
c Tracks medications prescribed to service recipient by current or past providers
d Tracks service recipient response to interventions delivered by the provider
e Tracks referrals the provider makes for the service recipient (e.g., referral for physical examination, referral to a psychiatrist, etc.)
f Percentages calculated based on total number of systems (N=49)

Table 14.

Types of corrective feedback given by system

System | Recommendations/strategies/next steps^a | Decision support^b | Fit with treatment^c | Outside source/materials^d
AKQUASI 1 0 0 0
ALERT 0 0 0 0
CHADIS 0 1 0 1
Clinical Dashboard 1 0 0 0
CROMIS 1 0 0 0
FAS 0 0 0 0
My Outcomes 0 0 1 0
OQ Measures 1 0 0 0
Penelope 0 0 0 0
Polaris-CD 0 1 0 0
PracticeWise 1 0 0 0
TOP 1 0 0 0
VitalHealth 0 1 0 0
Total (%) systems that provide the specified type of feedback^e 6 (12.2) 3 (6.1) 1 (2.0) 1 (2.0)

Note. Only MFS that possess the capability "corrective feedback from system" were included in the table. Systems without information in the table did not include details about the specific feedback types in their source materials.

^a Provides recommendations based on data input into the system (e.g., terminating treatment early)
^b Provides decision support based on data input into the system
^c Provides feedback on fit with treatment based on service recipient improvement (or lack thereof) and therapeutic alliance with the provider
^d Provides helpful materials and information based on service recipient measure results
^e Percentages calculated based on the total number of systems (N=49)
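To make the denominator convention in Tables 13 and 14 explicit, the sketch below reproduces the tally arithmetic: each subcategory percentage is the count of systems coded 1 divided by the full sample (N = 49), not by the subset of systems shown in the table. This is a minimal illustration rather than our actual coding pipeline, and the system names and codes shown are a placeholder subset drawn from Table 13.

```python
# Minimal sketch of the subcategory tally underlying Tables 13-14.
# The data below are illustrative placeholders, not the full coded dataset.
TOTAL_SYSTEMS = 49  # full MFS sample; denominator for all percentages

# Hypothetical binary codes: 1 = subcategory documented in source materials, 0 = not
intervention_tracking = {
    "ACORN":          {"treatment_history": 1, "in_session_notes": 0, "medication": 0},
    "CMTS":           {"treatment_history": 1, "in_session_notes": 0, "medication": 1},
    "Mobile Therapy": {"treatment_history": 1, "in_session_notes": 0, "medication": 1},
}

def subcategory_percentages(codes: dict) -> dict:
    """Count systems coded 1 for each subcategory and express as % of N=49."""
    subcategories = next(iter(codes.values())).keys()
    return {
        sub: round(100 * sum(system[sub] for system in codes.values()) / TOTAL_SYSTEMS, 1)
        for sub in subcategories
    }

print(subcategory_percentages(intervention_tracking))
# {'treatment_history': 6.1, 'in_session_notes': 0.0, 'medication': 4.1}
# (values differ from Table 13 only because this is a three-system subset)
```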

Discussion

MFS Capabilities in Context

There exist numerous MFS (N=49) for use in behavioral health, reflecting rapid and consistent proliferation since 1995. Interestingly, the representation of capabilities within and across categories was quite variable, with no MFS possessing all possible coded capabilities (28 capability codes; range of coverage = 1 to 25). In fact, only two capabilities (“tracks standardized outcomes” and “outcome monitoring for provider is a prime function”) were present in more than three-quarters of the identified MFS. This is not necessarily reflective of a lack of sophistication or underspecified design. Rather, the number of capabilities possessed by an MFS is likely correlated with its degree of complexity, which is theorized to be inversely related to adoption (Rogers, 2010). Strategic and parsimonious design that focuses on key capabilities is likely optimal and may explain why the highest mean proportion of capabilities possessed within any category was 56%. Indeed, the capabilities represented in the greatest number of MFS within each of the four categories (Tracking; Feedback; Customizability; Data; see Table 15) – and in general (see Table 7) – reflect core MFS features (e.g., “tracks standardized outcomes”), theory-guided functions (e.g., “immediate feedback timing;” Kluger & DeNisi, 1996), or key components of MBC fidelity (e.g., “displays outcomes as graph;” Lewis et al., 2015b).

Table 15.

Most frequently possessed and least frequently possessed capability by category

Category  Most frequent  Least frequent  System(s) with most capabilities in category  Number of capabilities in category  Mean  Range
Tracking category “Tracks standardized outcomes” “Tracks critical events relevant to the service recipient” Clinical Dashboard 7 2.2 0–7
Feedback category “Outcome monitoring to provider is the prime function” “Compares service providers to other providers” Penelope 9 3.1 0–9
Customizability category “Library of measures to choose from” “Ability to customize alerts” Penelope, VitalHealth 7 1.5 0–6
Data category “Summary reports,” “Displays outcomes as graphs” “Summary report for treatment recipient” CHADIS, CORE, Clinical Dashboard, FAS, My Outcomes, Polaris-CD, Assessment Center 5 2.8 0–5

However, a closer look at the capabilities represented within categories reveals that many additional capabilities that the empirical literature and/or relevant theory would suggest are important are largely absent from MFS. For instance, within the Tracking category, “tracking critical events” was least represented by MFS. By definition, this capability supports tracking the occurrence of clinically relevant and important events such as suicide attempts. It may be problematic that so few MFS appear to support tracking such critical events given that a history of suicide attempts is the strongest and most robust predictor of future suicide (Suominen et al., 2004). Moreover, integration of “tracking critical events” and “tracking standardized outcomes” capabilities may aid service recipients (and providers) in detecting rises in symptom severity that likely precede costly and dangerous coping behaviors.
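The integration suggested here can be made concrete. Below is a minimal sketch, under invented names and thresholds, of how a single record joining critical-event history with standardized outcome scores could flag the symptom rises discussed above; nothing in it reflects the design of any reviewed system.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RecipientRecord:
    """Illustrative record joining standardized outcome scores with critical events."""
    scores: list = field(default_factory=list)           # (date, symptom score) tuples
    critical_events: list = field(default_factory=list)  # (date, description) tuples

    def flag_elevated_risk(self, rise_threshold: float = 5.0) -> bool:
        """Flag recipients with a documented critical event whose most recent
        score rose by more than `rise_threshold` since the prior session."""
        if not self.critical_events or len(self.scores) < 2:
            return False
        (_, prev), (_, latest) = self.scores[-2], self.scores[-1]
        return latest - prev > rise_threshold

record = RecipientRecord()
record.critical_events.append((date(2014, 3, 2), "past suicide attempt"))
record.scores += [(date(2014, 5, 1), 11), (date(2014, 5, 15), 18)]
print(record.flag_elevated_risk())  # True: event history plus a sharp symptom rise
```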

Within the Feedback category, the ability to “compare service providers to other providers” was least represented by MFS. Given that potential provider concerns about performance evaluations are sometimes cited when discussing MBC and MFS (e.g., De Jong & De Goede, 2015), this omission may actually increase the acceptability of the systems. However, Feedback Intervention Theory (Kluger & DeNisi, 1996) posits that a discrepancy between an observed state and a goal state (i.e., the standard) creates motivation to close the gap. Peers represent an influential reference group and, in this case, peer data could provide an important standard for comparison (Landis-Lewis, Brehaut, Hochheiser, Douglas, & Jacobson, 2015). Unfortunately, it is unclear how to reconcile these literatures to guide feedback capability prioritization, revealing a critical gap in the study of MFS core capabilities and associated mechanisms. Future research should explicitly evaluate the impact of different standards on provider feedback interpretations and behavior.
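Feedback Intervention Theory's standard-gap idea can be stated very simply in code. The sketch below compares an observed value against two possible standards (an expected-progress benchmark and a peer-caseload mean); all numbers are invented for illustration and do not come from the review.

```python
def standard_gap(observed: float, standard: float) -> float:
    """Feedback Intervention Theory frames feedback as the gap between an
    observed state and a standard; the sign tells the provider which way."""
    return observed - standard

# Assumed, illustrative standards for one provider's caseload average
expected_progress_benchmark = 12.0  # e.g., expected symptom score at session 6
peer_caseload_mean = 10.5           # e.g., mean score across peer providers

observed_caseload_mean = 14.0
print(standard_gap(observed_caseload_mean, expected_progress_benchmark))  # 2.0
print(standard_gap(observed_caseload_mean, peer_caseload_mean))           # 3.5
```

Which standard an MFS surfaces (expected progress, norms, or peers) changes the gap the provider sees, which is precisely the prioritization question raised above.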

Within the Customizability category, the ability to “customize alerts” was least common among MFS, meaning that, for the majority of MFS, the features of the alert system are predetermined by developers and cannot be altered by the consumer (e.g., agency administrator) to suit agency-specific needs. Alerts represent another key feature of Feedback Intervention Theory in that the timing, mode, and type of alerts may strengthen or weaken the effectiveness of feedback (Kluger & DeNisi, 1996). Importantly, “alert fatigue” is a common unintended consequence of clinical decision support systems such as MFS (Ash, Sittig, Campbell, Guappone, & Dykstra, 2007). Given the importance of alerts for informing provider behaviors, coupled with the danger of alert fatigue, the option to customize alerts to the agency’s preferences may be important. However, while offering users the ability to customize alerts could reduce unnecessary alerts, it could also exacerbate alert fatigue, depending on who is allowed to make these customization decisions (e.g., service system administrators pushing more alerts out to front-line practitioners), or could simply tax IT resources.
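As a concrete illustration of what agency-level alert customization might look like, the sketch below exposes three hypothetical parameters (trigger threshold, recipients, and a weekly cap meant to guard against alert fatigue); the parameters and values are assumptions, not features of any reviewed system.

```python
from dataclasses import dataclass

@dataclass
class AlertRule:
    """Hypothetical agency-configurable alert: which scores trigger it, who
    receives it, and a per-week cap intended to limit alert fatigue."""
    score_threshold: float = 15.0        # trigger when a symptom score meets/exceeds this
    recipients: tuple = ("clinician",)   # e.g., add "supervisor" if the agency opts in
    max_alerts_per_week: int = 3         # cap to guard against alert fatigue

    def should_alert(self, score: float, alerts_sent_this_week: int) -> bool:
        return (score >= self.score_threshold
                and alerts_sent_this_week < self.max_alerts_per_week)

rule = AlertRule(score_threshold=20.0, recipients=("clinician", "supervisor"))
print(rule.should_alert(score=22.0, alerts_sent_this_week=1))  # True
print(rule.should_alert(score=22.0, alerts_sent_this_week=3))  # False: cap reached
```

Note how the same mechanism cuts both ways: an administrator who raises max_alerts_per_week or lowers score_threshold increases alert volume for front-line practitioners, which is the fatigue risk described above.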

Finally, within the Data category, the “provision of summary reports” for service recipients was least represented in MFS. Absence of this capability also has the potential to limit the effectiveness of MBC. For instance, qualitative data from a study by Dowrick et al. (2009) indicated that service recipients were overwhelmingly positive about the use of depression screening measures because it helped them to better understand their symptoms. It seems a summary report for service recipients would only enhance self-understanding, but few MFS possess this capability. Nevertheless, like the previously discussed underrepresented capabilities, there is no direct empirical evidence that these summary reports are necessary to optimize MFS impact. Instead, it may be just as effective for the provider to verbally review the score trajectory, hand-draw a graph depicting scores over time, or print the score summary from the clinician view.
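To illustrate the related “displays outcomes as graphs” capability, the sketch below renders a printable, recipient-facing progress plot; the measure, scores, and cutoff are invented for illustration and do not correspond to any reviewed system's output.

```python
import matplotlib.pyplot as plt

# Invented example data: session-by-session scores on a symptom measure
sessions = [1, 2, 3, 4, 5, 6]
scores = [18, 16, 17, 13, 11, 9]
clinical_cutoff = 10  # assumed published cutoff for the measure

plt.plot(sessions, scores, marker="o", label="Symptom score")
plt.axhline(clinical_cutoff, linestyle="--", label="Clinical cutoff")
plt.xlabel("Session")
plt.ylabel("Score (lower is better)")
plt.title("Progress summary for service recipient")
plt.legend()
plt.savefig("recipient_summary.png")  # printable summary report
```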

In sum, it appears as though MFS developers are prioritizing core capabilities that allow MFS to serve their intended purpose, but related capabilities that may further support this purpose are overlooked by the majority of systems. Determining which capabilities to prioritize may be especially challenging given the almost complete lack of empirical data on which of the 28 capabilities are critical to MFS optimization. That is, despite literature that loosely informs MFS capability prioritization (e.g., suicide attempts predict suicide, and thus MFS would benefit from tracking critical events), there is virtually no mechanistic research regarding either the processes through which MBC improves usual care or those through which MFS improve the implementation of MBC (Douglas et al., 2015). Identification of mechanisms will allow for optimization of MFS as an implementation support technology by focusing on capabilities that are most likely to impact key processes and eliminating unnecessary ones. A parallel process is needed to refine MBC’s focus on the core data elements and procedures that should be supported by MFS (Chorpita, Daleiden, & Bernstein, this issue). Presently, there is a dearth of evidence for the relative importance of the various data elements that an MFS may track. As another critical example, the central capability of MFS is the provision of feedback. Our study suggests MFS offer many different forms of standard-gap feedback – such as expected progression through treatment, comparison to published clinical norms, comparison to other local service recipients, and comparison to a specified subset of service recipients included in a larger database – but it is unclear which kind of feedback will optimize the impact of MFS. Research comparing the effects of these feedback types on provider recognition, interpretation, and internalization of feedback messages – and, ultimately, on MBC fidelity – would do much to optimize MFS technologies and push the field forward. Until this mechanistic research agenda advances (Lewis et al., 2015a), however, the sheer number of capabilities and the diversity in MFS capability representation will likely continue to yield technological redundancies, wasted development resources, and inadequate implementation of MBC in community practice.

Implications for MFS Development and Implementation

Despite their potential utility, it is unclear whether standalone MFS will be able to persist in the face of enormous contextual constraints and ongoing difficulties in achieving innovation-organization fit. For instance, in a qualitative analysis of two clinics that attempted to implement the same MFS for youth, Gleacher et al. (this issue) identified 119 unique barriers, 48% of which reflected characteristics of the implemented technology. Some teams have attempted to align MFS with existing workflow requirements by incorporating user-centered design principles in development or adaptation processes (Doherty, Coyle, & Matthews, 2010; Lyon et al., this issue), though little empirical evidence currently exists to determine whether improved implementation and sustainment will result. Three pathways are envisioned for the future of MFS development: (1) plug-and-play MFS are built to seamlessly integrate into existing EHRs, focusing on common usability metrics such as minimizing “clicks” between interfaces and facilitating rapid task completion (Clauson, Marsh, Polen, Seamon, & Ortiz, 2007); (2) MBC-supportive capabilities (e.g., “tracks standardized outcomes,” “immediate feedback to clinician,” “displays outcomes as graphs”) are built into existing electronic health records (EHR; Steinfeld, Franklin, Mercer, Fraynt, & Simon, this issue); or (3) new EHRs are built around the identified need for a digital strategy to support the implementation of MBC and other intervention components (Bruns et al., this issue). Regardless of the platform (standalone plug-and-play, EHR-integrated, or EHR-MFS-enhanced), there is a critical need to identify core capabilities in order to reduce resource demands and streamline MFS development.

In addition, largely in response to the Patient Protection and Affordable Care Act (2010), it may be especially important for MFS to support integrated care in which both health and mental health outcomes can be tracked simultaneously, with relevance to a larger multi-disciplinary care team addressing comorbid conditions. Although this review explicitly focused on MFS that support MBC and related functions in behavioral health settings (57.1% tracked behavioral/mental health outcomes; 46.9% tracked social functioning), 24.5% had the capacity to measure and provide feedback on both health and behavioral health symptoms, suggesting movement toward this goal. Moreover, MFS will need to be responsive to other movements in the field, such as HL7 standards (for the exchange, integration, sharing, and retrieval of electronic health information), Blue Button compliance (to support portable medical histories and facilitate dialog among health care providers, caregivers, and other entities), and “Meaningful Use” requirements (use of certified EHR technology to achieve specific objectives, such as outcome monitoring). As of December 2014, only 10.2% and 6.1% of MFS met the requirements for HL7 and Meaningful Use, respectively; no MFS met requirements for Blue Button compliance. As these policy changes exert great influence on administrative decisions and provider behaviors, MFS that do not incorporate these features are ultimately likely to struggle in the marketplace, especially as EHRs develop feedback capabilities and become direct competitors. Relatedly, 55.1% of identified MFS are proprietary. Although cost is typically one of the critical factors consumers consider in the adoption process (Bruns et al., this issue), MFS source materials did not consistently report on cost (37.0% of MFS available for purchase did not provide cost information on their websites). This is unfortunate given that MFS discontinuation may occur because the cost of MFS development and upkeep is unmanageable (e.g., Bickman et al., this issue). As a subsequent step in our HIT-ACE evaluation (Lyon et al., under review), we are completing interviews with MFS developers in which we are collecting detailed information on system cost.
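As one concrete illustration of what HL7 alignment could mean for an MFS, the sketch below expresses an outcome score as an HL7 FHIR Observation resource, built here as a Python dictionary. FHIR is only one of several HL7 standards an MFS might target, and the patient reference, date, and score are invented; the LOINC code shown is the published code for the PHQ-9 total score.

```python
import json

# Hypothetical example: a PHQ-9 total score expressed as an HL7 FHIR
# Observation so an EHR or care-team dashboard could consume it.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "44261-6",  # LOINC: PHQ-9 total score
            "display": "Patient Health Questionnaire 9 item (PHQ-9) total score",
        }]
    },
    "subject": {"reference": "Patient/example-id"},  # invented identifier
    "effectiveDateTime": "2014-12-01",
    "valueInteger": 14,
}

print(json.dumps(observation, indent=2))
```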

Going forward, developers are encouraged to attend to the empirical literature and relevant theory to identify core capabilities for prioritization and to consider the relative advantage and complexity of new MFS early in development. Providing systems that focus on a specific subset of capabilities that are empirically based, maintain MFS parsimony, and provide an advantage over competitors may allow developers to offer their customers “more for less.” Consumers are encouraged to consult Tables 3 through 6 when making decisions about MFS adoption and to think carefully about innovation-organization fit prior to selecting an MFS. Finally, researchers are encouraged to investigate the mechanisms supporting both MBC and MFS in behavioral health settings so as to illuminate intervention targets and expedite implementation.

Limitations

There are several limitations of the current review of MFS. First, since completing our MFS identification process, we have become aware of a number of additional systems that were not included. Our Google search, review of library databases, and solicitation for MFS via listservs and experts did not cast a wide enough net to capture all examples of MFS and, in particular, omitted a number of internationally based systems. In addition, new MFS are emerging all the time, consistent with the nature of HIT in general; since the end of 2014 (the coding cutoff date for the current study), no fewer than 10 additional MFS have come to our attention. Second, and similarly, capability frequencies may be somewhat outdated, as capabilities can be, and frequently are, added to make an MFS more functional once its infrastructure has been established. Third, the capabilities and characteristics coded are not an exhaustive list of all possible capabilities and characteristics of systems; creating an exhaustive list is nearly impossible given the diversity of system complexity and capacity. Fourth, we recognize that our Phase 1 coding method likely underrepresents the capabilities that actually exist for each MFS, given that we coded only capabilities that were explicitly stated in the best available information source. As we complete the HIT-ACE developer interviews mentioned above, we intend to confirm and update our coding as needed. Fifth, it was beyond the scope of the current review to evaluate the psychometric strength of the measures included in each MFS library, the majority of which did not offer numerous measures from which to choose (55.1%). This is a limitation because some may argue that an MFS’s clinical utility is largely a function of the quality of the measures used to monitor progress. Without establishing each measure’s validity and reliability, the extent to which MFS collect data and provide feedback on their intended clinical constructs, and do so consistently over time (a key requirement for MFS), remains unclear. Finally, although it is also a component of our forthcoming developer interviews, we do not currently have information on each system’s market share, making it difficult to link the capabilities identified to system spread at this time.

Summary and Conclusions

There are currently well over 50 MFS (49 reviewed here) designed to support the implementation of MBC in behavioral healthcare, with great diversity in their characteristics and capabilities. The results of this review provide a clearer picture of the current landscape of MFS that support MBC in behavioral health. It was not our intention to identify “winning” or “losing” systems, but to systematically provide detailed and summary information to stakeholders (developers, researchers, consumers) interested in the MFS technology space. The majority of MFS track standardized outcomes and deliver feedback to providers to support progress monitoring as a primary function; they display outcomes in the form of graphs and offer a library of standardized measures. These four capabilities likely represent core features of MFS currently available. However, consensus stops there, and the variability in characteristics and capabilities among existing MFS likely reflects both the relatively nascent developmental stage of MFS as a technology for supporting MBC implementation and the sizable number of potentially good ideas developers have had for improving service quality and efficiency. Moreover, most MFS do not include training or support to facilitate implementation of the MFS itself, despite MFS being one of a larger set of strategies likely necessary for implementing MBC in behavioral healthcare. This is a glaring weakness of most MFS that should concern developers, healthcare agencies, and researchers alike, given that implementation failures (e.g., weak penetration) greatly reduce the benefit of MFS and the public health impact of MBC. The findings of the present study provide an overview of the current landscape of MFS by gathering much-needed information from disparate sources and bringing transparency and clarity to the current state of MFS development. We hope that these data will assist healthcare agencies in their decision-making processes for choosing an MFS, promote competition and innovation among MFS developers, and spur future research in this field.

Acknowledgments

Funding: Work on this publication was supported by the Seattle Children’s Research Institute, Center for Child Health, Behavior and Development (CCHBD); and the National Institute of Mental Health (NIMH) under award numbers K08MH095939 and R01MH103310.

Footnotes

Conflict of interest: All authors declare that they have no conflicts of interest.

Ethical approval: This article does not contain any studies with human participants performed by any of the authors.

References

1. Ash JS, Sittig DF, Campbell EM, Guappone KP, Dykstra RH. Some unintended consequences of clinical decision support systems. American Medical Informatics Association (AMIA) Annual Symposium Proceedings. 2007:26–30.
2. Bickman L. A measurement feedback system (MFS) is necessary to improve mental health outcomes. Journal of the American Academy of Child & Adolescent Psychiatry. 2008;47(10):1114–1119. doi:10.1097/CHI.0b013e3181825af8.
3. Bickman L, Douglas SR, De Andrade ARV, Tomlinson M, Gleacher A, Olin S, Hoagwood K. Implementing a measurement feedback system: A tale of two sites. Administration and Policy in Mental Health and Mental Health Services Research. (in press). doi:10.1007/s10488-015-0647-8.
4. Bickman L, Kelley SD, Breda C, de Andrade AR, Riemer M. Effects of routine feedback to clinicians on mental health outcomes of youths: Results of a randomized trial. Psychiatric Services. 2011;62(12):1423–1429. doi:10.1176/appi.ps.002052011.
5. Brailer DJ. Interoperability: The key to the future health care systems. Health Affairs. 2005:w5–w19. doi:10.1377/hlthaff.w5.19.
6. Bruns EJ, Hyde KL, Sather A, Hook AN, Lyon AR. Applying user input to the design and testing of an electronic behavioral health information system for wraparound care coordination. Administration and Policy in Mental Health and Mental Health Services Research. (in press). doi:10.1007/s10488-015-0658-5.
7. Chorpita BF, Daleiden EL, Bernstein AD. At the intersection of health and information technology and decision support: Measurement feedback systems…and beyond. Administration and Policy in Mental Health and Mental Health Services Research. (in press). doi:10.1007/s10488-015-0702-5.
8. Clauson KA, Marsh WA, Polen HH, Seamon MJ, Ortiz BI. Clinical decision support tools: Analysis of online drug information databases. BMC Medical Informatics and Decision Making. 2007;7(1):1–7. doi:10.1186/1472-6947-7-7.
9. Courage C, Baxter K. Understanding your users: A practical guide to user requirements methods, tools, and techniques. San Francisco, CA: Elsevier; 2005.
10. De Jong K, De Goede M. Why do some therapists not deal with outcome monitoring feedback? A feasibility study on the effect of regulatory focus and person–organization fit on attitude and outcome. Psychotherapy Research. 2015;25(6):661–668. doi:10.1080/10503307.2015.1076198.
11. DeSantis L, Ugarriza DN. The concept of theme as used in qualitative nursing research. Western Journal of Nursing Research. 2000;22:351–372. doi:10.1177/019394590002200308.
12. Douglas SR, Jonghyuk B, de Andrade ARV, Tomlinson MM, Hargraves RP, Bickman L. Feedback mechanisms of change: How problem alerts reported by youth clients and their caregivers impact clinician-reported session content. Psychotherapy Research. 2015;25(6):678–693. doi:10.1080/10503307.2015.1059966.
13. Dowrick C, Leydon GM, McBride A, Howe A, Burgess H, Clarke P, et al. Patients’ and doctors’ views on depression severity questionnaires incentivised in UK quality and outcomes framework: Qualitative study. British Medical Journal. 2009;338:b663. doi:10.1136/bmj.b663.
14. Electronic Health Records (EHR) Incentive Programs. 2015, October 29. Retrieved November 12, 2015, from https://www.cms.gov/Regulations-and-Guidance/Legislation/EHRIncentivePrograms/index.html?redirect=/ehrincentiveprograms
15. Garland AF, Kruse M, Aarons GA. Clinicians and outcome measurement: What’s the use? The Journal of Behavioral Health Services & Research. 2003;30(4):393–405. doi:10.1007/BF02287427.
16. Gleacher AA, Olin SS, Nadeem E, Pollock M, Ringle V, Bickman L, Hoagwood K. Implementing a measurement feedback system in community mental health clinics: A case study of multilevel barriers and facilitators. Administration and Policy in Mental Health and Mental Health Services Research. (in press). doi:10.1007/s10488-015-0642-0.
17. Gondek D, Edbrooke-Childs J, Fink E, Deighton J, Wolpert M. Feedback from outcome measures and treatment effectiveness, treatment efficiency, and collaborative practice: A systematic review. Administration and Policy in Mental Health and Mental Health Services Research. (in press). doi:10.1007/s10488-015-0710-5.
18. Hatfield DR, Ogles BM. The use of outcome measures by psychologists in clinical practice. Professional Psychology: Research and Practice. 2004;35(5):485.
19. Hill CE, Knox S, Thompson BJ, Nutt Williams E, Hess SA. Consensual qualitative research: An update. Journal of Counseling Psychology. 2005;52:196–205.
20. Hill CE, Thompson BJ, Nutt Williams E. A guide to conducting consensual qualitative research. The Counseling Psychologist. 1997;25:517–572.
21. Hsieh H-F, Shannon SE. Three approaches to qualitative content analysis. Qualitative Health Research. 2005;15(9):1277–1288. doi:10.1177/1049732305276687.
22. Kluger AN, DeNisi A. The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin. 1996;119(2):254–284.
23. Krägeloh CU, Czuba KJ, Billington DR, Kersten P, Siegert RJ. Using feedback from patient-reported outcome measures in mental health services: A scoping study and typology. Psychiatric Services. 2015;66(3):224–241. doi:10.1176/appi.ps.201400141.
24. Lambert MJ, Whipple JL, Hawkins EJ, Vermeersch DA, Nielsen SL, Smart DW. Is it time for clinicians to routinely track patient outcome? A meta-analysis. Clinical Psychology: Science and Practice. 2003;10(3):288–301.
25. Landis-Lewis Z, Brehaut JC, Hochheiser H, Douglas GP, Jacobson RS. Computer-supported feedback message tailoring: Theory-informed adaptation of clinical audit and feedback for learning and behavior change. Implementation Science. 2015;10(1):12. doi:10.1186/s13012-014-0203-z.
26. Lewis CC, Boyd M, Beidas RS, Lyon AR, Chambers D, Aarons G, Mittman B. A research agenda for mechanistic dissemination and implementation research. Presentation at the Conference on the Science of Dissemination and Implementation; Bethesda, MD; December 2015.
27. Lewis CC, Scott K, Marti CN, Marriott BR, Kroenke K, Putz JW, Rutkowski D. Implementing measurement-based care (iMBC) for depression in community mental health: A dynamic cluster randomized trial study protocol. Implementation Science. 2015;10(1):1–14. doi:10.1186/s13012-015-0313-2.
28. Lyon AR, Wasse JK, Ludwig K, Zachry M, Bruns EJ, Unützer J, McCauley E. The Contextualized Technology Adaptation Process (CTAP): Optimizing health information technology to improve mental health systems. Administration and Policy in Mental Health and Mental Health Services Research. (in press). doi:10.1007/s10488-015-0637-x.
29. Lyon AR, Lewis CC, Boyd M, Melvin A, Nicodimos S, Liu F, et al. Health Information Technologies—Academic and Commercial Evaluation (HIT-ACE) methodology: Description and application to clinical feedback systems. (under review). doi:10.1186/s13012-016-0495-2.
30. Norman DA, Draper SW, editors. User centered system design: New perspectives on human-computer interaction. Hillsdale, NJ: Lawrence Erlbaum Associates; 1986.
31. Patient Protection and Affordable Care Act of 2010, Pub. L. No. 111–148, § 6301, 124 Stat. 727. 2010.
32. Riemer M, Rosof-Williams J, Bickman L. Theories related to changing clinician practice. Child and Adolescent Psychiatric Clinics of North America. 2005;14(2):241–254. doi:10.1016/j.chc.2004.05.002.
33. Rogers EM. Diffusion of innovations. 6th ed. New York, NY: Free Press; 2010.
34. Scott K, Lewis CC. Using measurement-based care to enhance any treatment. Cognitive and Behavioral Practice. 2015;22(1):49–59. doi:10.1016/j.cbpra.2014.01.010.
35. Steinfeld B, Franklin A, Mercer B, Fraynt R, Simon G. Progress monitoring in an integrated health care system: Tracking behavioral health vital signs. Administration and Policy in Mental Health and Mental Health Services Research. (in press). doi:10.1007/s10488-015-0648-7.
36. Suominen K, Isometsä E, Suokas J, Haukka J, Achte K, Lönnqvist J. Completed suicide after a suicide attempt: A 37-year follow-up study. American Journal of Psychiatry. 2004;161(3):562–563. doi:10.1176/appi.ajp.161.3.562.
