Author manuscript; available in PMC: 2023 Jan 1.
Published in final edited form as: Evid Based Pract Child Adolesc Ment Health. 2021 Oct 13;7(4):439–451. doi: 10.1080/23794925.2021.1981178

Measurement-Based Care in the Adolescent Partial Hospital Setting: Implementation, Challenges, & Future Directions

Jessica Lavender 1,2, Margaret M Benningfield 1,2,3, Jessica A Merritt 1,2,3, Rachel L Gibson 1,2, Alexandra H Bettis 1
PMCID: PMC9683479  NIHMSID: NIHMS1741898  PMID: 36439894

Abstract

In this paper, we describe the process of implementing measurement-based care (MBC) in the adolescent partial hospital program setting. First, we outline the rationale for incorporating MBC in this treatment setting. Second, we describe the partial hospital setting in which implementation took place, including the patient population, treatment providers, and structure of programming. Next, we outline the initial implementation of standardized assessments into our programming, including key initial considerations and challenges during implementation. We describe the importance of considering the primary symptom presentations of the patient population when selecting assessment tools, the importance of leveraging existing electronic health record tools to efficiently track and record data collection, and the ability to integrate assessments into clinical workflows. Fourth, we present data describing compliance with implementation, patient outcomes, and providers’ attitudes towards and knowledge of MBC following implementation. We found that, after the initial implementation period, compliance was high. We also found that providers had an overall positive perception of the use of MBC, reporting that they perceived it to be helpful to both their clinical practice and patient outcomes. Finally, we discuss future directions for best utilizing standardized assessments in intensive treatment settings.

Introduction

Partial hospital programs in the U.S. provide care for children and adolescents with a broad range of mental health problems. These programs are variable with regard to their structure, programming, and the clinical concerns addressed. Furthermore, whether a program or clinician’s treatment approach demonstrates effectiveness is often not empirically tested through routine monitoring with evidence-based assessment (Bickman et al., 2000; Garland et al., 2003). Measurement of treatment outcomes in routine clinical care in acute settings provides an opportunity to demonstrate treatment effectiveness and provide feedback to clinicians and program administrators regarding what works for their patient populations. The current paper aims to describe the process of implementing a measurement-based care (MBC) protocol into a real-world intensive treatment setting. We provide a rationale for utilizing standardized assessment in routine clinical care, describe the process of implementing this into an adolescent partial hospital treatment program at a large medical center in the southeastern United States, present data on providers’ perceptions of the MBC process, and review challenges and future directions for programs related to assessment.

The case for MBC in intensive treatment settings.

MBC, also referred to as standardized assessment, progress monitoring, or outcome monitoring, is the process of collecting data using formal measures to assess patients’ clinical progress over the course of treatment and using this data to inform clinical decision making (Scott & Lewis, 2015). Standardized assessment of clinical outcomes throughout the treatment process provides benefits at the provider, program, and patient level.

First, these assessments can help clinicians and treatment teams to identify primary treatment targets at the start of treatment and to track progress on a patient’s treatment targets over the course of a treatment program (Scott & Lewis, 2015). Quantitative assessment of patients’ symptoms and functioning, in addition to clinical judgment and experience, may provide a more comprehensive picture of whether patients are benefitting from a given program and can aid in critical clinical decisions, such as discharge planning (Valenstein et al., 2009). These considerations are especially important in the intensive treatment setting, as the transition from daily, intensive therapy to standard outpatient care (e.g., once-a-week sessions) is significant. This transition not only results in dramatically reduced levels of clinical support, but also often includes a transition back to school and/or work. Thus, standardized, repeated assessments of symptoms may help to identify when these transitions are most appropriate for a patient.

In addition, there is mounting evidence suggesting the use of measurement-based assessment systems is associated with improved patient outcomes (e.g., Anker et al., 2009; Lambert et al., 2002). For example, in a randomized controlled trial in a community clinic setting, youth whose clinicians received weekly feedback improved more quickly than those whose clinicians did not receive feedback (Bickman et al., 2011). In an outpatient psychotherapy trial comparing no feedback, clinician feedback, and clinician and patient feedback, patients in both feedback conditions showed more improvement over the course of treatment (Hawkins et al., 2006). Providing patients with feedback regarding their symptom progress may help to facilitate an environment where conversations about treatment progress and patient-provider collaboration are a routine part of the therapeutic process. Indeed, a recent systematic review of 51 studies found that the use of standardized symptom measures in clinical care results in positive outcomes for patients (Fortney et al., 2017). These studies, however, have been primarily conducted in adult samples and in standard (i.e., non-intensive) outpatient treatment settings. In the intensive treatment setting, there is an added opportunity to monitor patients more closely, as clinicians may have contact with a patient daily or several times per week. Whereas in standard outpatient care providers may only see patients weekly or bi-weekly and have limited time during sessions to administer measures, in a day treatment program there is expanded opportunity to collect these data and collaborate with patients using them. Furthermore, given that patients may vary regarding their length of stay in intensive treatment settings (e.g., some patients may attend for 1–2 weeks, and others for 1–2 months), having a standardized process for administering measures regularly (e.g., at admission, every 5th treatment day/once a week, and at discharge) ensures all patients receive regular and consistent monitoring to inform progress and discharge planning, regardless of length of stay.

In intensive treatment settings, such as day treatment programs or inpatient programs, patients may also be readmitted to the program over time. Collecting consistent data on patient symptom severity and outcomes during an initial admission may also help to inform treatment decisions upon readmission, and could provide a benchmark for expectations of progress during subsequent hospital courses.

The use of standardized assessments over time also presents added benefits for healthcare systems and programs. Over time, symptom data across all patients in a given treatment program or clinic can inform whether the program is achieving its clinical goals. Data can be used to determine how a program is performing broadly across all patients, and data may directly inform areas for programs to target for quality improvement or training initiatives (Bickman, 2008). These data may be used to assess the effectiveness of treatment modalities employed by a program, and provide critical opportunities to assess empirically-supported treatments that have often only been studied under formal, controlled research protocols with select patients.

Taken together, there is strong evidence to support the use of MBC in behavioral healthcare systems, as it has the potential for widespread benefits from the individual patient to the organization as a whole. (For a comprehensive research review of the potential and demonstrated benefits of MBC, see Jensen-Doss et al., 2020.) However, healthcare systems across the US have been slow to adopt MBC, and among those who have adopted this system, quality of the implementation of MBC is relatively low (Fortney et al., 2017). Given the need for greater use of standardized assessment in intensive treatment settings, below we describe the implementation of MBC in an adolescent partial hospital program.

Barriers to implementation of MBC.

As highlighted above, there is an underutilization of MBC in healthcare systems across the US and worldwide. Furthermore, some evidence suggests that even when assessments are completed, clinical providers may not incorporate that information into their clinical care (Garland et al., 2003). Researchers have identified multiple barriers at the provider and organizational levels that may impede implementation efforts (Boswell et al., 2015; Hatfield & Ogles, 2007; Jensen-Doss et al., 2020). Clinical providers may hold negative attitudes towards MBC, with studies finding that some clinicians report uncertainty about the benefits of using standardized assessments in clinical practice or that they simply do not find them to be useful to their practice (Hatfield & Ogles, 2004). Notably, attitudes towards and the use of MBC may vary by provider type (Oslin et al., 2019). Further, providers frequently report practical and logistical concerns about such assessments, including the additional time and paperwork required amidst already busy schedules (Hatfield & Ogles, 2004, 2007).

These practical concerns can also emerge at the organizational level. For example, organizations that lack an electronic medical record system, or that are unable to integrate MBC into their electronic medical record, may have difficulties implementing MBC over time. MBC also requires organizational support to implement, as it takes time to train staff in utilizing measures, monitor progress and engagement, and refine MBC practices based on a setting’s needs (Lewis et al., 2019). In intensive treatment settings specifically, there are often multiple staff (e.g., social workers, psychologists, psychiatrists) who are involved in providing behavioral health care, and building an organizational workflow that clearly delineates responsibility for administering and documenting measurement-based assessments is also critical.

Despite these potential barriers, the past two decades have seen an increase in efforts to implement MBC in behavioral health (for a recent review, see Lewis et al., 2019). The process of implementation can take many forms, including both tailored, flexible approaches and standardized approaches (Powell et al., 2012). While there is no singular, superior implementation strategy or framework for all behavioral health organizations to follow in implementing MBC, several useful implementation approaches and tools have been identified in the literature (Damschroder et al., 2009; Nilsen, 2015; Proctor et al., 2009). Lewis and colleagues (2019) recently published a 10-point research agenda to aid in improving the evidence base for MBC implementation, particularly for psychotherapy programs, highlighting the need to “identify discrete evidence-based strategies needed to support implementation regardless of setting” (p. 332). Given the range of approaches and tools available to aid in implementation, there are also important methodological challenges in implementation science (e.g., inconsistencies in terminology and definitions used in the field; see Proctor et al., 2011, 2013; Powell et al., 2017 for further discussion).

In many instances, however, treatment programs choose to implement MBC systems without the support of a formally trained implementation scientist or research team. As an example of such a case, we describe below how our program implemented MBC in direct response to accreditation guidelines, including the challenges, successes, and experiences of providers in this process.

Implementation of MBC: The Vanderbilt Psychiatric Hospital Adolescent Partial Hospital Program.

In 2018, The Joint Commission, a national accreditation organization that identifies standards to support safe and effective healthcare delivery, implemented a new standard requiring implementation of an MBC system for organizations surveyed for their Behavioral Health accreditation (The Joint Commission, 2018). This standard requires that programs select rating scales that are evidence-based for the population served and utilize them to measure treatment outcomes throughout an episode of care. Furthermore, programs are required to aggregate this data and utilize it for program evaluation and improvement.

Prior to 2018, the Adolescent Partial Hospital Program (PHP) at Vanderbilt Psychiatric Hospital did not utilize a standardized assessment system within our treatment program. Indeed, most PHP and intensive outpatient programs (IOP) did not have any standardized systems in place to evaluate the outcomes of their treatment prior to this new standard. The Association for Ambulatory Behavioral Health surveyed its membership in March 2018, three months after the new standard went into effect, and found that only 42% of their PHP and IOP program membership reported they were in compliance with the new standard (Rosser, 2018). Further, only 49% of the membership reported understanding the new initiative, and 77% of the membership reported needing more training and assistance to achieve compliance (Rosser, 2018). Thus, while a push towards implementing MBC was evident, how to effectively implement such programming remained a challenge for many programs nationwide. Furthermore, there were limited examples available to programs describing how to implement such a process, including important considerations, challenges, and successes. Below, we describe our treatment program and the process of implementing MBC in the behavioral health setting, to provide a framework for other programs moving forward.

The Vanderbilt Adolescent PHP treats adolescents ages 13–18 presenting with primary symptoms of severe mood and anxiety disorders. The program curriculum consists of assessment, individual, group, and family psychotherapy, psychoeducation groups, recreation therapy, medication management, and discharge planning. The staff consists of eight mental health professionals: psychiatrists, licensed social workers, a registered nurse, a behavioral health specialist, and a discharge planner. The average length of stay is nine treatment days. From July 1, 2019 through June 30, 2020, the program treated a total of 326 adolescents. Forty-four percent of those patients transitioned to PHP from our adolescent inpatient setting, twenty-three percent were admitted to PHP directly from an emergency setting, and the remaining thirty-three percent were referred directly to PHP by their outpatient providers. The structure of this program is representative of the hospital’s other intensive outpatient treatment programs, including our adult PHP, adult co-occurring disorders IOP, and young adult IOP.

Implementation of Measurement-Based Assessment: Initial Considerations.

After the Joint Commission standard was announced, our program started the process of implementing MBC. The Program Director and Medical Director met to discuss plans to respond adequately to these standards. When designing the system, we determined there were three primary considerations for implementation: (1) selecting assessment tools relevant to our diagnostic population; (2) integrating available technology to efficiently administer, track, and analyze data collected; and (3) ensuring data collection could meaningfully and efficiently be integrated into clinical workflows.

First, to achieve the goals set forth by the Joint Commission standards, we sought assessment tools that would capture the primary symptom domains of the patients seen in our program. To do this, we analyzed historical program data to identify the most common psychiatric diagnoses of patients presenting for treatment in the PHP and IOP. Program data from the two years prior to implementation indicated that the most common primary diagnoses seen in our programs are Major Depressive Disorder and Generalized Anxiety Disorder. We then revisited the Joint Commission’s rationale for their new MBC standard, which recommended sources to guide programs in selecting appropriate tools to meet this new standard (Wrenn et al., n.d.). Based upon these recommended lists, we selected two measures that (a) had strong psychometric properties; (b) captured the primary symptom presentations seen in our programs; (c) were brief to administer, thus reducing clinician and patient burden; and (d) were publicly available and free to administer. The two measures selected for initial implementation of the measurement-based assessment system were the Patient Health Questionnaire – 9 (PHQ-9; Kroenke et al., 2001), a 9-item measure assessing past 2-week depressive symptoms, and the Generalized Anxiety Disorder-7 (GAD-7; Spitzer et al., 2006), a 7-item measure assessing past 2-week symptoms of generalized anxiety. The PHQ-9 has demonstrated good psychometric properties in adolescent samples, with specificity and sensitivity found to be comparable to use in adult samples (Borghero et al., 2018; Richardson et al., 2010). Notably, these studies recommend using a clinical cut-off score of 11 in adolescents, compared to a score of 10 in adults. The GAD-7 also shows acceptable specificity and sensitivity in adolescents, with a cut-off score of 11 or greater to identify moderate to severe anxiety (Mossman et al., 2017).
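
To make the scoring rules concrete, the sketch below shows how a PHQ-9 total might be computed and checked against the adolescent cut-off reported above. This is a minimal illustration only, not our program’s actual software; the helper names are hypothetical.

```python
# Hypothetical sketch of PHQ-9/GAD-7 scoring with the adolescent cut-offs
# reported above; not the program's actual implementation.

PHQ9_ADOLESCENT_CUTOFF = 11  # Richardson et al., 2010 (vs. 10 in adults)
GAD7_CUTOFF = 11             # Mossman et al., 2017 (moderate to severe anxiety)

def total_score(item_responses):
    """Sum item responses; each PHQ-9 or GAD-7 item is rated 0-3."""
    if any(r not in (0, 1, 2, 3) for r in item_responses):
        raise ValueError("Each item must be rated 0-3.")
    return sum(item_responses)

def screen_depression(phq9_items):
    """Return the PHQ-9 total and whether it meets the adolescent cut-off."""
    score = total_score(phq9_items)
    return score, score >= PHQ9_ADOLESCENT_CUTOFF

# Example: one 9-item PHQ-9 administration
score, positive = screen_depression([2, 1, 2, 1, 1, 2, 1, 1, 1])
print(f"PHQ-9 total = {score}, meets adolescent cut-off: {positive}")  # 12, True
```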

Second, it was critical to integrate our assessment tools into existing technology to ensure that we could efficiently administer, track, and analyze the data collected across our patient programs. We had a strong preference to integrate the measurements into our existing electronic health record system, EPIC. This required that the tools we selected for assessment were publicly available and free to administer, as noted above. We collaborated with the Vanderbilt University Medical Center’s Health IT team to install these measures into our existing EPIC templates, and we worked with the Enterprise Analytics Team at Vanderbilt University Medical Center to discuss options for easily accessing and viewing program-level data. Using Tableau, an interactive data visualization tool, our program could receive auto-generated reports directly from EPIC describing both the overall trends in patient outcomes and individual patient-level data. Data are presented on a user-friendly dashboard, helping our program easily visualize both the degree to which the measurement-based assessment system is being implemented (i.e., how many patients have received the surveys) and clinical outcomes based on these measurements. The individual-level data allow us to easily identify, for treatment team review, patients who are not progressing well in treatment. The data can be sorted by date and by pre-treatment, mid-treatment, and post-treatment scores, and are color coded to easily identify patients who are not progressing or whose symptoms are worsening.
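
As a rough illustration of the kind of flagging logic such a dashboard might apply, the sketch below computes each patient’s admission-to-latest change and assigns an improving/worsening/no-change label. The column names and the 3-point change threshold are assumptions for illustration; they are not our actual Tableau logic.

```python
# Illustrative sketch, assuming long-format score data; not the program's
# actual dashboard logic.
import pandas as pd

scores = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "assessment": ["admission", "mid", "discharge", "admission", "mid"],
    "phq9": [15, 11, 7, 12, 16],
})

def flag_progress(group, threshold=3):
    """Compare a patient's most recent score against their admission score."""
    change = group["phq9"].iloc[-1] - group["phq9"].iloc[0]
    if change <= -threshold:
        return "improving"
    if change >= threshold:
        return "worsening"
    return "no change"

# One flag per patient, e.g., for color coding dashboard rows
flags = {pid: flag_progress(g) for pid, g in scores.groupby("patient_id")}
print(flags)  # {1: 'improving', 2: 'worsening'}
```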

Third, we aimed to incorporate data collection that would meaningfully and efficiently integrate into existing clinical workflows. To ensure that the clinical staff were able to integrate these new assessments into their existing workflows, we needed to develop a system that would minimize burden and maximize ease of administration of these new measures. We also needed to efficiently communicate the scores across the treatment team and to discuss them with the patients. We describe this consideration and the initial installation of MBC into our adolescent PHP further below.

Implementation of Measurement-Based Assessment: Installation and Challenges.

With the considerations outlined above in mind, the two selected measures were implemented into the clinical workflow of the adolescent PHP starting in November 2017. To begin implementation, the Program Director and Medical Director met with program staff over the course of several weeks to provide education on the rationale for this new standard, the proposed changes to the workflow, and the value that would be added to our patient care. We communicated the urgency of the initiative in light of the new Joint Commission standard, as we were only a few months away from our survey period. However, the communication strategy also emphasized enhancing the patient care experience and improving our ability to measure patient outcomes.

Importantly, staff were included in the decisions around how to make the process most efficient in their workflow. Together, we evaluated our current treatment review timelines, and selected timeframes for measure administration based on these timelines and average patient stay. Because our social workers meet with every patient on the day of admission, every fifth program day (once per week), and the day of discharge, we created a protocol to administer the PHQ-9 and GAD-7 on these days. To ensure the measures are completed, the program nurse provides the patient with a paper questionnaire to complete during their morning check-in. The patient’s social worker then reviews the scores with the patient during their meeting that day as a part of discussing treatment goals and progress. The social worker then documents the patient’s scores in their note in the electronic medical record, adding minimal additional documentation to their pre-existing workflow. The scores are then aggregated into Tableau and discussed in full treatment team meetings, which are scheduled twice per week, so that all disciplines on the team are involved in tracking patients’ symptom progress. When these steps were initially implemented, a checklist was provided to each staff member that clearly communicated the steps of administering and documenting the PHQ-9 and GAD-7.
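
A minimal sketch of this administration schedule is shown below. It assumes 1-indexed treatment days and one reading of “every fifth program day” (days 5, 10, 15, ...); the helper name is hypothetical and the exact day-counting convention is an assumption.

```python
# Hypothetical sketch of the administration rule described above: measures on
# the admission day, every fifth program day, and the discharge day.

def is_measure_day(program_day, discharge_day=None):
    """program_day is 1-indexed, counting treatment days (day 1 = admission)."""
    if program_day == 1:                  # day of admission
        return True
    if discharge_day is not None and program_day == discharge_day:
        return True                       # day of discharge
    return program_day % 5 == 0           # every fifth program day (assumed)

# For the program's average nine-treatment-day stay:
print([d for d in range(1, 10) if is_measure_day(d, discharge_day=9)])
# -> [1, 5, 9]
```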

The primary challenge upon informing staff of these changes was adequately explaining the value these measures would provide to the program. For many years, practitioners had relied on their own clinical judgment, experience, and observations to determine treatment progress, and many were not familiar with these instruments, thus doubting their added value to the work they had already been doing. Over the course of several meetings, the program leadership team was intentional about providing education to support the use of these measures and patient examples to demonstrate how these measures could aid in clinical care. We also provided space for clinical staff to express their concerns and ask questions prior to implementation. Importantly, we highlighted how this system had the potential to improve the experience for both patients and providers, and also emphasized that indications of a lack of treatment progress would not be used punitively towards providers. Ultimately, these ongoing, collaborative discussions were critical to the success of implementation.

For the first six months of implementation (installation phase), the program focused on compliance with the new workflow. Starting in February 2018, we began conducting random chart audits and requested that scores be reported in each treatment team meeting to ensure measures were completed. Based on monthly chart audits, we found that compliance with completing measurements at the desired intervals was initially low. Chart audits completed between February and April 2018 found that just 67% of patients had completed the new measures. Therefore, we made compliance with the new workflow a priority in monthly documentation audits and worked with individual providers who were struggling to adjust to this new workflow. In treatment team meetings, we used patient stories to portray the value of sharing measurements with patients to illustrate their progress or lack of progress. For example, a common scenario that we encountered when first implementing MBC was a patient who struggled to identify their own progress, despite the clinical team recognizing treatment gains. Providing such a patient with their PHQ-9 and GAD-7 scores helped them quantify their symptom change in a concrete way and aided in improving morale during treatment. Another example was a patient whose scores worsened over the course of their admission. Based on scores alone, this would be concerning. However, through discussion of these scores, the patient identified that they were being more open and honest with their treatment team than at admission. This allowed the treatment team to form a more accurate picture of the patient’s symptoms, and facilitated discussion around potential modifications to the patient’s treatment plan. Finally, we also highlighted how MBC can be helpful for therapists when summarizing a patient’s progress in treatment to a caregiver or family. Using these discussions of real patient scenarios, we worked collaboratively with staff to problem-solve the challenges that were resulting in noncompliance. Overall, we found providers were incredibly receptive to feedback and worked as a team to improve compliance over time.

After identifying challenges in initial compliance, we provided visualizations of both compliance with the workflow and patient outcomes in our monthly staff meetings using the Tableau dashboard. Giving feedback during these meetings helped in continuing to build an understanding of the system and how these measures were contributing to patient care. In cases where patients demonstrated poor outcomes based on the assessment measures, program leadership reviewed these cases and met with the corresponding treatment teams to problem-solve and provide additional support for patient care. At the time of our Joint Commission survey in May 2018, six months after the initial implementation, we were fully compliant with the new standard (100% compliance in May 2018 based on random chart audits).

From May to December 2018 (full implementation phase), we found that staff were consistently performing the new workflow with few reminders or missing data; compliance during this period rose to 98%. In 2019, average compliance based on random chart audits was 88%, and in 2020 average compliance was 93%. We noted that compliance challenges occurred most often during periods when primary program staff were more likely to use paid time off (e.g., winter holidays) and during training transitions (e.g., when the academic year changes at the end of June/early July). Compliance rates based on random chart audits, conducted on a monthly basis from February 2018 to December 2020, are presented in Figure 1.
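
For illustration, a compliance rate like those reported above could be computed from audit records as in the sketch below; the record format and field names are assumptions, not our actual audit tooling.

```python
# Hypothetical sketch: monthly compliance rate from random chart audits.
# Each record marks whether a chart had measures completed at the
# required intervals (admission, every fifth program day, discharge).

audits = [  # (chart_id, measures_completed_at_required_intervals)
    ("A101", True), ("A102", True), ("A103", False),
    ("A104", True), ("A105", True),
]

completed = sum(ok for _, ok in audits)          # booleans sum as 0/1
compliance = 100 * completed / len(audits)
print(f"Monthly compliance: {compliance:.0f}%")  # -> 80%
```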

Figure 1.

Rates of compliance with MBC assessments, assessed by monthly random chart audits conducted from February 2018 to December 2020.

After the first year of implementation, we expanded our measurement battery based on provider feedback and clinical need. While keeping our core measures (PHQ-9 and GAD-7), providers also had the option of administering the following additional measures when clinically indicated: Brief Addiction Monitor (Cacciola et al., 2013), Dimensions of Psychosis (Levinson et al., 2002), Screen for Child Anxiety Related Disorders (Birmaher et al., 1999), and the Young Mania Rating Scale (Young et al., 1978). As the adolescent PHP (and our other intensive outpatient clinical program) often treats patients with comorbidities, including substance use, psychosis, and bipolar disorder, these measures allowed clinical staff to track a broader range of relevant symptoms over the course of treatment based on patients’ symptom profiles. In addition, we made changes to the structure of our Tableau dashboard to ensure easier tracking of patient progress. This included adding filters to allow providers to sort by program, date of assessment, and admission scores on the primary symptom measures. We also added visual indicators of progress, including color coding to indicate which patients were improving, worsening, or not demonstrating any change over the course of treatment. These features allowed the program leadership and treatment teams to quickly assess program needs, identify areas of progress and challenge, and communicate more efficiently and effectively.

Implementation of MBC: PHQ-9 and GAD-7 scores.

We examined patient outcomes over four implementation periods: installation stage (months 1–6 of implementation, spanning 11/17/17–5/17/18); the initial full implementation period (months 7–12 of implementation, spanning 5/18/18–12/31/18); year 2 of full implementation/maintenance (1/1/19–12/31/19); and year 3 of full implementation/maintenance (1/1/20–12/31/20). The adolescent PHP largely serves youth with primary mood and anxiety disorders; over the period of MBC implementation, 73–74% of patients each year presented with a primary mood disorder diagnosis and 21–24% of patients presented with a primary anxiety disorder diagnosis. We present mean GAD-7 and PHQ-9 scores at admission and discharge for these four time periods in Table 1 as descriptive information about the program sample; of note, we do not present patient outcome data, as the focus of this paper is to highlight the process of MBC implementation. (Notably, the COVID-19 pandemic began in March 2020, and the program transitioned in and out of virtual formatting from March 2020-December 2020.)

Table 1.

Primary diagnoses and mean PHQ-9 and GAD-7 scores for the adolescent partial hospital program at admission and discharge from treatment.

Period | N | PHQ-9 Admission Mean (SD) | PHQ-9 Discharge Mean (SD) | GAD-7 Admission Mean (SD) | GAD-7 Discharge Mean (SD) | MBC Compliance (average % of charts with pre- and post-data)
11/17/17–5/17/18 | 238 | 12.81 (6.64) | 6.12 (6.17) | 10.64 (5.46) | 5.43 (5.35) | 74.92%
5/18/18–12/31/18 | 372 | 14.05 (7.37) | 8.06 (7.95) | 11.83 (6.05) | 6.90 (6.45) | 96.76%
1/1/19–12/31/19 | 726 | 14.08 (7.50) | 8.59 (7.85) | 11.54 (5.91) | 7.34 (6.30) | 87.83%
1/1/20–12/31/20 | 672 | 15.53 (7.37) | 9.81 (8.08) | 12.63 (5.76) | 8.48 (7.03) | 92.97%

N = Number of patients. SD = Standard deviation. MBC = Measurement based care. Note: Time frames are divided by the installation phase (11/17/17–5/17/18), initial implementation phase (5/18/18–12/31/18), and the maintenance stages (1/1/19–12/31/19; 1/1/20–12/31/20).

Implementation of MBC: Provider Attitudes & Perceptions.

To assess providers’ perception of applying evidence-based measurement (such as PHQ-9 and GAD-7), we administered a one-time, anonymous questionnaire to all providers in the Vanderbilt Psychiatric partial hospital and intensive outpatient treatment settings. Study data were collected and managed using REDCap (Research Electronic Data Capture) hosted at Vanderbilt University Medical Center (Harris et al., 2019, 2009). REDCap is a secure, web-based software platform designed to support data capture for research studies, providing 1) an intuitive interface for validated data capture; 2) audit trails for tracking data manipulation and export procedures; 3) automated export procedures for seamless data downloads to common statistical packages; and 4) procedures for data integration and interoperability with external sources. Data collection was approved by the Vanderbilt University Medical Center Institutional Review Board.

Of the 20 providers offered the survey, 16 completed it. Importantly, only 8 providers work in the adolescent partial hospital setting described above. Because of this small number of providers, we chose to survey providers across our adolescent and adult partial hospital programs and our adult intensive outpatient program, to protect providers’ confidentiality and increase comfort with completing the survey.

The 19-item measure was adapted from the Screening, Brief Intervention and Referral to Treatment (SBIRT) Training Surveys (Putney et al., 2017); see Appendix for complete measure. The survey assessed providers’ knowledge of evidence-based measurement, ease and appropriateness of application, impact on ability to be effective in their job, and their impact on quality of care and patients’ progression toward goals while incorporating the evidence-based measures into their clinical care settings. Questions were rated from 1 (strongly disagree) to 7 (strongly agree). Ratings of 1–3 indicated providers disagreed with a statement; ratings of 4 indicated a neutral stance (neither agree nor disagree); and ratings of 5–7 indicated providers agreed with a statement.

Providers also answered two open-ended questions: (1) Please describe what has been most helpful or valuable about using standardized assessments in your work, and (2) Please describe what has been the most challenging or unhelpful about using standardized assessments in your work. Below, we provide a summary of providers’ attitudes towards and knowledge of the use of this measurement-based system.

Providers’ knowledge and understanding of MBC.

All of the providers (100%; n = 16) reported that they have a good understanding of the rationale for including evidence-based measurement of symptoms in their treatment settings, with more than two-thirds (68.8%; n = 11) strongly agreeing with this statement. Despite a strong reported understanding of these measurements, 62.5% of providers (n = 10) expressed a strong desire for more training in the use of evidence-based assessment in intensive psychiatric treatment settings. Furthermore, all of the providers indicated that they wanted to use evidence-based methods for directly assessing patients’ symptoms and treatment progress (100%; n = 16).

Perceived impact on providers.

The majority of providers (75%; n = 12) reported the use of these measures has not been burdensome to their work, whereas only one provider (6%) found them somewhat burdensome. Further, over half of providers (62.5%; n = 10) did not find the tools cumbersome to administer; less than one third (31.25%; n = 5) felt neutral about whether administering the measures was burdensome, and one provider (6%) felt the measures were burdensome. However, four providers (25%) indicated there was not enough time to complete the measures during the treatment day, four (25%) neither agreed nor disagreed that time was a concern, and eight (50%) reported they did have time to complete these measures with patients. Half of providers (50%; n = 8) reported a neutral stance on whether measurement-based assessment of symptoms helps them to accomplish clinical tasks more quickly and efficiently, with some providers reporting it made their job easier (25%; n = 4) and others reporting it made their job harder (25%; n = 4). Ultimately, the majority of providers (81.25%; n = 13) indicated that the use of standardized assessments has been valuable to their role as a clinical provider and that these assessments make it easier to do their job.

Perceived impact on quality of care and patients’ progression toward goals.

Over two-thirds of providers (68.75%; n = 11) reported use of evidence-based assessment tools has improved the quality of care their patients receive. Four providers (25%; n = 4) neither agreed nor disagreed that quality of care was improved by these measures, and one (6%; n = 1) did not feel that quality of care was improved. The majority of providers (81.25%; n = 13) indicated these measures are critical in meeting the healthcare needs of their patients. Many providers (68.75%; n = 11) believe these tools have improved interactions with patients and patients’ families, whereas two providers (12.5%) felt neutral and three providers (18.75%) disagreed that these measures facilitate improved interactions between staff and patients. Nearly every provider reported evidence-based assessment as ultimately helpful for patients seen in their treatment setting (87.5%; n = 14).

Qualitative responses.

With regard to what has been helpful or valuable to providers in using standardized assessments, several providers reported that they like being able to quantify patients’ progress or lack of progress, and that these indices help to start conversations with patients about their treatment progress. Further, one provider noted they value “the impact it makes when a patient is able to see improved outcomes based on the comparisons of assessments completed throughout the program.” Two providers also noted that having standardized assessments can help when obtaining and/or extending insurance coverage for the program. Conversely, providers also noted several challenges regarding the use of these measures. Providers most frequently noted that the measures do not capture all of the patients’ symptoms or areas of difficulty, and therefore may not reflect their full progress to date. In addition, providers noted that remembering to administer the measures and reminding patients to complete the surveys can be a challenge.

Discussion.

The use of standardized tools affords a means for objective reflection on how behavioral health systems can better serve patients at an individual and systemic level. In the current paper, we describe how our programs implemented standardized assessment procedures following the 2018 Joint Commission standards calling for the use of MBC in behavioral health. Based on the presentation and needs of our patient population, we successfully implemented two primary measures to assess anxiety and depressive symptoms in patients at the time of admission, across treatment, and at the time of discharge from our intensive outpatient treatment programs. Despite providers’ initial wariness regarding the use of these measures, over a period of 6 months we were able to fully implement this system into our adolescent PHP. After successful implementation, we expanded our assessment battery to better capture the needs of our treatment population. Furthermore, in this paper, we describe providers’ current attitudes regarding the use of standardized assessments. Overall, nearly 3 years following initial implementation, providers report these measures are low burden, easy to use, informative for clinical care and decision-making, and improve patient interactions and outcomes.

Notably, the implementation of MBC into our partial hospital setting was a direct response to Joint Commission standards in 2018, rather than a pre-planned initiative driven by our own program interests. Therefore, we had a limited period of time in which we needed to change our programming to implement these processes into clinical care. Moving forward, we aim to proactively consider ways we can continue to implement best practices into our clinical care, from assessment to treatment.

Based on our experience during the MBC implementation process, we encourage programs to engage relevant stakeholders (e.g., clinical providers and staff) early in the process to ensure maximum buy-in and to help to anticipate any clinical challenges that may arise during implementation. Furthermore, assessing stakeholders’ (e.g., providers, program leadership) experiences over time will add value to our understanding of how to best implement these practices into treatment settings; we only collected provider-level data several years after initial implementation, which may not reflect the initial challenges providers faced when the changes were introduced or capture how providers’ knowledge and attitudes may have improved over time. (For a comprehensive review of MBC in behavioral health and a 10-point agenda to improve the integration of these systems into clinical practice, see Lewis et al., 2019.)

Future directions.

The implementation of two core measures into our treatment settings, in addition to the four supplemental measures, has improved our ability to track and respond to symptom changes in patients. However, we acknowledge that patients also present with symptoms and functional impairments that are not captured by this small battery of questionnaires. Systematic data collection on both a broader range of symptoms and areas of impairment, as well as patients’ strengths (e.g., coping capacity, future orientation), may further improve our understanding of how and when patients benefit most from our treatment programs. In addition, given that our program treats adolescent patients, standardized data collection from caregivers or other key family members has the potential to further improve clinical care. Given that many empirically-supported interventions for child and adolescent mental health include some caregiver or family component (e.g., Dardas et al., 2018; Taboas et al., 2015), these data may provide important information about progress across multiple domains. In integrating multiple data sources, programs will also need to contend with discrepancies across reporters as they make decisions about next steps in clinical care, including discharge disposition (for a review of these issues in treatment, see Marsh et al., 2020).

Thus, a critical next step in the development of our adolescent PHP MBC system will be to expand the data we collect at admission, during treatment, and upon discharge from treatment. There are several important considerations for this next step in this work, including the length of the assessment battery, feasibility of integration into the current workflow, and the added clinical utility. As we embark upon this next step, we will integrate feedback from clinicians, data from electronic health records, data from the broader literature focused on partial hospital treatment program outcomes, and input from patients. These data streams will guide us in expanding our battery in a way that is clinically useful, feasible, and impactful.

Outcomes measures not only demonstrate the utility of services and bolster clinical and programmatic decision making, but they may also increase public support for effective treatment settings and advocacy for cost appropriate care (Mirin & Namerow, 1991). Consistent collection of empirically-supported measures provides an opportunity to disseminate program findings to the public and to provide more transparency to potential consumers of these treatment programs. This may be particularly important for populations which have historically experienced discrimination or harm in the mental health system. Transparent reporting of treatment outcomes across different groups of individuals, in addition to transparency about the treatment program itself, may help to rebuild these relationships.

Standardized assessment of clinical outcome data may also permit researchers to pool data across treatment programs to ask critical clinical questions that may require larger datasets. For example, a single treatment program alone may not generate enough data to examine questions about treatment efficacy for subgroups of patients that are known to be at higher risk for severe clinical outcomes, such as sexual and gender minority individuals. Indeed, there is a dearth of evidence to indicate whether evidence-based treatment programs are efficacious for LGBTQ+ patients (Bochicchio et al., 2020). Data from one cognitive behavioral- and dialectical behavioral therapy-focused PHP suggest their programming is equally efficacious for heterosexual and sexual minority patients; however, this study did not evaluate gender minority individuals, and findings need replication (Beard et al., 2017). Combining samples across programs may help to inform whether programs should consider adaptations for specific subpopulations of patients to improve treatment outcomes. When standard assessments are employed across similar treatment settings, we may answer these important questions about treatment outcomes to guide future clinical care.

The utilization of standard measures also permits the evaluation of patient care during unexpected and sudden changes to programming, such as the necessity of incorporating telehealth options in the context of the rapidly evolving COVID-19 pandemic. Over the past year, programs were required to be flexible and shift into virtual formats despite a lack of guidelines for effective virtual partial programming and the potential loss of vital service delivery components. Indeed, our program has shifted throughout the year between virtual, mixed, and in-person programming, with little preparation or guidance in making these changes. The regular use of standard measures prior to and after an unprecedented event provides a critical opportunity to compare care as usual (i.e., in-person programming) to service changes made during the pandemic (i.e., virtual or blended programming). During periods of unanticipated change, the continued use of standardized measures can guide programs in navigating quality improvement, measuring the continued efficacy of services, and directing attention to service gaps during these adjustment periods.

Conclusion.

In summary, MBC systems provide several advantages to intensive treatment settings, including aiding in clinical decision making, providing opportunities for feedback to treatment programs and clinicians, providing an opportunity for the integration of clinical practice and research, and opening the door for greater transparency with patients and the public. In an adolescent PHP, we demonstrated that the use of these standardized assessment systems is acceptable to providers, and over a relatively brief period of time (6 months), these systems can be effectively implemented into intensive behavioral health settings.

Funding:

The Vanderbilt REDCap Project, utilized for data collection, is supported by a grant from NCATS/NIH (UL1 TR000445). Alex Bettis is supported by funding from NIMH (K23MH122737).

APPENDIX I.

VPH Provider Survey: Provider Perceptions of Measurement-Based Care

Introduction:

We are interested in learning more about your experiences delivering standardized assessments in your treatment setting. All responses will be anonymous and confidential.

Instructions: Please rate how much you agree with the following statements from 1 (Strongly DISAGREE) to 7 (Strongly AGREE).

  • 1

    I have a good understanding of the rationale for including evidence-based measurement of symptoms in our treatment setting.

  • 2

    The use of evidence-based assessment tools has been valuable to my role in delivering patient care.

  • 3

    The use of evidence-based assessment tools has improved the quality of care my patients receive.

  • 4

    The use of evidence-based assessment tools has been burdensome to me as a provider.

  • 5

    The use of evidence-based assessment tools has been burdensome to my patients.

  • 6

    Using an evidence-based approach to assess patient outcomes helps me in my current job.

  • 7

    I want to use effective methods for directly assessing my patients’ symptoms and treatment progress.

  • 8

    Incorporating evidence-based assessment of psychiatric symptoms is critical to meet the healthcare needs of patients.

  • 9

    I would like more training in the use of evidence-based assessment in intensive psychiatric treatment settings.

  • 10

    I do not think there is enough time in the intensive treatment setting to complete and utilize measurement-based assessment.

  • 11

    My interaction with a patient and their family is improved by having a standardized assessment of their symptoms over the course of their treatment.

  • 12

    Evidence-based assessment is helpful for patients seen in my treatment setting.

  • 13

    I believe that the measurement-based assessment system has been cumbersome to use.

  • 14

    Using the measurement-based assessment system has improved the quality of work I do.

  • 15

    Using measurement-based assessment of symptoms makes it easier to do my job.

  • 16

    Using measurement-based assessment of symptoms requires a lot of mental effort.

  • 17

    Using measurement-based assessment of symptoms helps me to accomplish clinical tasks more quickly and efficiently.

Instructions: The following questions are open-ended.

  • 18

    Please describe what has been most helpful or valuable about using standardized assessments in your work:

  • 19

    Please describe what has been the most challenging or unhelpful about using standardized assessments in your work:

References

  1. Anker MG, Duncan BL, & Sparks JA (2009). Using client feedback to improve couple therapy outcomes: a randomized clinical trial in a naturalistic setting. Journal of Consulting and Clinical Psychology, 77, 693.
  2. Beard C, Kirakosian N, Silverman AL, Winer JP, Wadsworth LP, & Björgvinsson T (2017). Comparing Treatment Response between LGBQ and Heterosexual Individuals Attending a CBT- and DBT-Skills-Based Partial Hospital. Journal of Consulting and Clinical Psychology, 85, 1171–1181. 10.1037/ccp0000251
  3. Bickman L (2008). A measurement feedback system (MFS) is necessary to improve mental health outcomes. Journal of the American Academy of Child and Adolescent Psychiatry, 47, 1114.
  4. Bickman L, Kelley SD, Breda C, Regina de Andrade A, & Riemer M (2011). Effects of Routine Feedback to Clinicians on Mental Health Outcomes of Youths: Results of a Randomized Trial. Psychiatric Services, 62, 1423–1429.
  5. Bickman L, Rosof-Williams J, Salzer MS, Summerfelt W, Noser K, Wilson SJ, & Karver MS (2000). What information do clinicians value for monitoring adolescent client progress and outcomes? Professional Psychology: Research and Practice, 31, 70.
  6. Birmaher B, Brent DA, Chiappetta L, Bridge J, Monga S, & Baugher M (1999). Psychometric properties of the Screen for Child Anxiety Related Emotional Disorders (SCARED): a replication study. Journal of the American Academy of Child & Adolescent Psychiatry, 38, 1230–1236.
  7. Bochicchio L, Reeder K, Ivanoff A, Pope H, & Stefancic A (2020). Psychotherapeutic interventions for LGBTQ+ youth: a systematic review. Journal of LGBT Youth, 1–28.
  8. Borghero F, Martínez V, Zitko P, Vöhringer PA, Cavada G, & Rojas G (2018). [Screening depressive episodes in adolescents. Validation of the Patient Health Questionnaire-9 (PHQ-9)]. Revista Medica de Chile, 146, 479–486. 10.4067/s0034-98872018000400479
  9. Boswell JF, Kraus DR, Miller SD, & Lambert MJ (2015). Fostering collaboration between researchers and clinicians through building practice-oriented research: An introduction. Psychotherapy Research, 25, 6–19.
  10. Cacciola JS, Alterman AI, DePhilippis D, Drapkin ML, Valadez C Jr, Fala NC, Oslin D, & McKay JR (2013). Development and initial evaluation of the Brief Addiction Monitor (BAM). Journal of Substance Abuse Treatment, 44, 256–263.
  11. The Joint Commission. (2018). Measurement-Based Care – Standardized Tools and Instruments. https://www.jointcommission.org/standards/standard-faqs/behavioral-health/care-treatment-and-services-cts/000002332/
  12. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, & Lowery JC (2009). Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implementation Science, 4, 1–15.
  13. Dardas LA, van de Water B, & Simmons LA (2018). Parental involvement in adolescent depression interventions: A systematic review of randomized clinical trials. International Journal of Mental Health Nursing, 27, 555–570.
  14. Fortney JC, Unützer J, Wrenn G, Pyne JM, Smith GR, Schoenbaum M, & Harbin HT (2017). A tipping point for measurement-based care. Psychiatric Services, 68, 179–188.
  15. Garland AF, Kruse M, & Aarons GA (2003). Clinicians and outcome measurement: what’s the use? The Journal of Behavioral Health Services & Research, 30, 393–405.
  16. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, & Conde JG (2009). Research Electronic Data Capture (REDCap) - A metadata-driven methodology and workflow process for providing translational research informatics support. Journal of Biomedical Informatics, 42, 377–381.
  17. Harris P, Taylor R, Minor B, Elliott V, Fernandez M, O’Neal L, McLeod L, Delacqua G, Delacqua F, Kirby J, & Duda S, REDCap Consortium (2019). The REDCap consortium: Building an international community of software partners. Journal of Biomedical Informatics.
  18. Hatfield DR, & Ogles BM (2004). The Use of Outcome Measures by Psychologists in Clinical Practice. Professional Psychology: Research and Practice, 35, 485.
  19. Hatfield DR, & Ogles BM (2007). Why some clinicians use outcome measures and others do not. Administration and Policy in Mental Health and Mental Health Services Research, 34, 283–291.
  20. Hawkins EJ, Lambert MJ, Vermeersch DA, Slade KL, & Tuttle KC (2006). The therapeutic effects of providing patient progress information to therapists and patients. Psychotherapy Research.
  21. Jensen-Doss A, Douglas S, Phillips DA, Gencdur O, Zalman A, & Gomez NE (2020). Measurement-based Care as a Practice Improvement Tool: Clinical and Organizational Applications in Youth Mental Health. Evidence-Based Practice in Child and Adolescent Mental Health, 5, 233–250. 10.1080/23794925.2020.1784062
  22. Kroenke K, Spitzer RL, & Williams JBW (2001). The PHQ-9: validity of a brief depression severity measure. Journal of General Internal Medicine, 16, 606–613.
  23. Lambert MJ, Whipple JL, Vermeersch DA, Smart DW, Hawkins EJ, Nielsen SL, & Goates M (2002). Enhancing psychotherapy outcomes via providing feedback on client progress: A replication. Clinical Psychology & Psychotherapy, 9, 91–103.
  24. Levinson DF, Mowry BJ, Escamilla MA, & Faraone SV (2002). The Lifetime Dimensions of Psychosis Scale (LDPS): description and interrater reliability. Schizophrenia Bulletin, 28, 683–695.
  25. Lewis CC, Boyd M, Puspitasari A, Navarro E, Howard J, Kassab H, Hoffman M, Scott K, Lyon A, Douglas S, Simon G, & Kroenke K (2019). Implementing Measurement-Based Care in Behavioral Health: A Review. JAMA Psychiatry, 76, 324–335. 10.1001/jamapsychiatry.2018.3329
  26. Marsh JK, Zeveney AS, & De Los Reyes A (2020). Informant Discrepancies in Judgments About Change During Mental Health Treatments. Clinical Psychological Science, 8, 318–332. 10.1177/2167702619894905
  27. Mirin SM, & Namerow MJ (1991). Why study treatment outcome? Psychiatric Services, 42, 1007–1013.
  28. Mossman SA, Luft MJ, Schroeder HK, Varney ST, Fleck DE, Barzman DH, Gilman R, DelBello MP, & Strawn JR (2017). The Generalized Anxiety Disorder 7-item scale in adolescents with generalized anxiety disorder: Signal detection and validation. Annals of Clinical Psychiatry, 29, 227–234A.
  29. Nilsen P (2015). Making sense of implementation theories, models and frameworks. Implementation Science, 10, 1–13.
  30. Oslin DW, Hoff R, Mignogna J, & Resnick SG (2019). Provider attitudes and experience with measurement-based mental health care in the VA Implementation Project. Psychiatric Services, 70, 135–138.
  31. Powell BJ, Beidas RS, Lewis CC, Aarons GA, McMillen JC, Proctor EK, & Mandell DS (2017). Methods to improve the selection and tailoring of implementation strategies. The Journal of Behavioral Health Services & Research, 44, 177–194.
  32. Powell BJ, McMillen JC, Proctor EK, Carpenter CR, Griffey RT, Bunger AC, Glass JE, & York JL (2012). A compilation of strategies for implementing clinical innovations in health and mental health. Medical Care Research and Review, 69, 123–157.
  33. Proctor EK, Landsverk J, Aarons G, Chambers D, Glisson C, & Mittman B (2009). Implementation research in mental health services: An emerging science with conceptual, methodological, and training challenges. Administration and Policy in Mental Health and Mental Health Services Research, 36, 24–34. 10.1007/s10488-008-0197-4
  34. Proctor EK, Powell BJ, & McMillen JC (2013). Implementation strategies: recommendations for specifying and reporting. Implementation Science, 8, 1–11.
  35. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, Griffey R, & Hensley M (2011). Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Administration and Policy in Mental Health and Mental Health Services Research, 38, 65–76.
  36. Putney JM, O’Brien KH, Collin C, & Levine A (2017). Evaluation of alcohol screening, brief intervention, and referral to treatment (SBIRT) training for social workers. Journal of Social Work Practice in Addictions, 17, 169–187.
  37. Richardson LP, McCauley E, Grossman DC, McCarty CA, Richards J, Russo JE, Rockhill C, & Katon W (2010). Evaluation of the Patient Health Questionnaire-9 Item for detecting major depression among adolescents. Pediatrics, 126, 1117–1123.
  38. Rosser J (2018). Standardized Measurement in Partial and Outpatient Programs. AABH National Conference.
  39. Scott K, & Lewis CC (2015). Using measurement-based care to enhance any treatment. Cognitive and Behavioral Practice, 22, 49–59.
  40. Spitzer RL, Kroenke K, Williams JBW, & Löwe B (2006). A brief measure for assessing generalized anxiety disorder: the GAD-7. Archives of Internal Medicine, 166, 1092–1097.
  41. Taboas WR, McKay D, Whiteside SPH, & Storch EA (2015). Parental involvement in youth anxiety treatment: Conceptual bases, controversies, and recommendations for intervention. Journal of Anxiety Disorders, 30, 16–18. 10.1016/j.janxdis.2014.12.005
  42. Valenstein M, Adler DA, Berlant J, Dixon LB, Dulit RA, Goldman B, Hackman A, Oslin DW, Siris SG, & Sonis WA (2009). Implementing standardized assessments in clinical care: now’s the time. Psychiatric Services, 60, 1372–1375.
  43. Wrenn G, Kennedy P, Harbin H, Carneal G, Daviss S, Heiman HJ, Simon K, Sladek R, Unützer J, & Vinzon S (n.d.). A Core Set of Outcome Measures for Behavioral Health Across Service Settings. http://thekennedyforum-dot-org.s3.amazonaws.com/documents/MBC_supplement.pdf
  44. Young RC, Biggs JT, Ziegler VE, & Meyer DA (1978). A rating scale for mania: reliability, validity and sensitivity. The British Journal of Psychiatry, 133, 429–435.