Author manuscript; available in PMC: 2025 Aug 29.
Published in final edited form as: Evid Based Pract Child Adolesc Ment Health. 2020 Feb 21;5(1):67–82. doi: 10.1080/23794925.2020.1727795

Developing Measurement-Based Care for Youth in an Outpatient Psychiatry Clinic: The Penn State Psychiatry Clinical Assessment and Rating Evaluation System for Youth (PCARES-Youth)

Daniel A Waschbusch a, Amanda Pearl a, Dara E Babinski a, Jamal H Essayli b, Sujatha P Koduvayur a, Duanping Liao c, Dahlia Mukherjee a, Erika F H Saunders a
PMCID: PMC12393003  NIHMSID: NIHMS2054412  PMID: 40894514

Abstract

Measurement-based care (MBC) is an evidence-based approach to improving outcomes of mental health services. There is considerable interest in implementing it in clinical practice settings. However, relatively little attention has been paid to the pragmatics of developing and implementing MBC in youth mental health. This paper describes one effort to accomplish this goal. After a brief discussion of the advantages of MBC, the process, and initial implementation of the Penn State Psychiatry Clinical Assessment and Rating Evaluation System for Youth (PCARES-Youth) is described. Results of a pilot project of PCARES-Youth are presented, including a survey of clinicians about their experiences with the initiative. Finally, lessons learned and next steps of the project are discussed.


Psychiatric and behavioral illnesses are common in the United States, with estimates suggesting that over a 12-month period approximately 13% of youth meet diagnostic criteria for one or more serious mental illnesses that impair quality of life (Polanczyk, Salum, Sugaya, Caye, & Rohde, 2015). Fortunately, there are empirically supported treatments that effectively address the major mental health concerns exhibited by youth. However, many empirically supported treatments were developed and examined in well-controlled research settings and often exhibit diminished effects when translated into “real life” practice settings (Weisz et al., 2017). In contrast to research studies, most clinics do not use objective measures to systematically assess treatment outcomes, which may contribute to the reduced effectiveness of empirically supported treatments in these clinical practice settings (Hoagwood, Burns, Kiser, Ringeisen, & Schoenwald, 2001; Weisz, Weiss, Han, Granger, & Morton, 1995). These and similar findings emphasize the need to incorporate systematic measurement into routine clinical care, an effort that has been referred to as measurement-based care (MBC), patient reported outcome measurement, or measurement feedback systems (Rush, 2015).

The goal of MBC is to improve the accuracy and efficiency of diagnosing patients to facilitate the implementation and monitor the impact of effective and personalized treatments. In mental health settings, this usually consists of patient and/or caregiver completion of established psychometric rating scales as a routine part of their care for the purpose of clinical assessment (e.g., to determine whether symptoms or impairments are present) and/or evaluation of treatment progress (e.g., to measure effects of treatments on symptoms and impairments). Typically, rating scales are completed ahead of clinical services so that they are available to the clinician during the appointment, and there is continuity in the measures over time to clarify patient change (or lack thereof). Importantly, the results of these assessments drive data-based clinical decision-making (Fortney et al., 2015). Providing mental health professionals with standardized, reliable, and valid data on individual youth patients should help achieve these aims (Bickman, Kelley, Breda, de Andrade, & Riemer, 2011; Knaup, Koesters, Schoefer, Becker, & Puschner, 2009).

Advantages of MBC for youth patients

MBC provides several advantages for the assessment and treatment of youth patients (Boswell, Kraus, Miller, & Lambert, 2015). First, MBC helps ensure that clinicians conduct a thorough and accurate assessment of symptoms at intake and over the course of treatment. This is accomplished by developing a measurement model to conceptualize the constructs of interest (i.e., the major areas of concern for youth, caregivers, and clinicians), then selecting psychometrically sound assessments (usually rating scales) to operationalize the constructs described in the measurement model (Bickman, 2008; Fortney et al., 2017). This helps minimize biases when clinicians evaluate their youth patients. For example, in the absence of such standardized data, clinicians may show a bias by overly focusing assessments on areas in which they have more expertise, or demonstrate a self-serving bias by evaluating treatment progress in a falsely positive manner (Boswell et al., 2015). In contrast, when clinical judgment is complemented by rating scale data, evaluations of progress are more likely to be comprehensive and accurate (Youngstrom & Van Meter, 2016).

Second, youth patients and/or caregivers usually complete measures prior to seeing the clinician, which increases the efficiency of the assessment process. Rather than spending valuable in-session time reviewing multiple areas of functioning and psychopathology to ensure that important areas are not missed, the clinician can quickly scan ratings that were previously completed by a patient and then follow up on specific areas as needed. This may also allow the provider to spend more time directly with the youth and/or caregiver. Moreover, because the same measures are used across youth patients, clinicians soon develop familiarity with the assessment measures, and reviewing them becomes increasingly routine and efficient over time. By improving the efficiency of the assessment process, clinicians are able to spend more time assessing areas of need with each youth patient or provide services to additional youth patients.

Third, by using the same assessment measures over time, MBC tracks change in symptoms longitudinally and hence monitors treatment effectiveness (Fortney et al., 2017). Increased awareness of symptom profiles over time may have advantages whether it is a change for the better (positive response to treatment), change for the worse (deterioration during treatment), or no change at all. For caregivers of children in psychiatric care, this may have particular salience. In situations where there is an improvement in symptoms, the completion of measures by caregivers may make these changes visible when they might otherwise have gone unnoticed. This may in turn reinforce behaviors leading to change and facilitate caregiver-level improvements such as decrease in overall stress, increase in hope and optimism for improvement, and more positive interactions with the youth, all of which may serve to enhance and encourage further progress. When caregivers notice deterioration, it may prompt them to intervene more quickly by advocating for input from the treatment provider, thereby preventing further deterioration. A similar process seems to occur with treatment providers as there is evidence that clinicians are more likely to notice patient deterioration when MBC is implemented (Sussman, 2007).

Fourth, MBC may enhance communication between youth, caregivers, and clinicians. Completing ratings may provide youth and caregivers with commonly understood vocabulary for describing problems with mood or behavior when they might otherwise struggle to describe the problems in a manner that the treatment provider would readily understand. Moreover, ratings can be perceived by youth and caregivers as relatively non-threatening and thereby provide a safe way, particularly for youth, to bring up topics that they may have otherwise been too embarrassed or defensive to discuss with a provider. Likewise, caregivers may be more comfortable introducing areas of concern by endorsing them in a rating scale versus bringing them up during a face-to-face encounter with a treatment provider.

Fifth, by directly assessing the problems or concerns that are the key reasons for seeking treatment, MBC sends an implicit message to patients that the treatment provider (and the larger health-care system) understands and prioritizes their primary concern and is taking active steps to address it. Often, health-care systems evaluate patient outcomes by focusing on process measures, such as days to appointment, time spent in the waiting room, and patient reports of satisfaction. Process measures such as these are clearly important, especially for improving clinics or health-care systems, but they may not directly relate to the outcomes that bring patients into care (Porter, Larsson, & Lee, 2016). For example, an outpatient clinic may successfully reduce the number of days it takes for a new patient to receive services, but this may have limited impact on whether an individual patient’s symptoms of depression are reduced once engaged in treatment.

Importantly, these advantages seem to translate into better patient outcomes. This was clearly demonstrated in a study of adults treated for clinically significant depression who were randomized to receive MBC or routine care (Guo et al., 2015). Results showed that rates of remission from depression were significantly higher for those treated with MBC (73.8%) as compared to those in the treatment as usual (TAU) group (28.8%), and remission was achieved about twice as fast in the MBC group (MBC = 10.2 weeks, TAU = 19.2 weeks). These advantages of MBC were obtained even though the groups did not differ on treatment adherence (MBC = 99.8%, TAU = 99.7%) or number of treatment sessions (MBC = 8.2, TAU = 7.8). Of note, youth treated with MBC seem to achieve similar benefits; one study found that youth patients improved significantly faster if they were randomly assigned to MBC as compared to standard care (Bickman et al., 2011).

Advantages of MBC for clinics

An important part of MBC, in our view, is that there is consistency in measures used across patients. In other words, all patients who present for services complete at least some of the same measures, resulting in data that are comparable across patients and clinicians, thereby providing clinic-level data. In addition to patient-level benefits, this approach offers several advantages to the clinic as a system. First, in clinics that have specialty services, clinic-level data can help match patients to the appropriate clinician. For example, youth patients whose assessments show elevated rates of anxiety can be offered treatment by a provider with expertise in anxiety, even if the caregiver indicates the primary concern is noncompliance or aggression, which may be secondary symptoms that follow panic and behavioral avoidance. Second, clinic-level data help determine the optimal allocation of resources within clinics, such as when staffing changes may be needed in response to emerging trends among patients. For example, there is evidence that ADHD, autism, and intellectual disabilities have increased over the past decade, especially among older youth (ages 12 to 17), but other developmental disabilities have not (Zablotsky et al., 2019). Collecting clinic-wide data using MBC could help identify whether these trends are apparent locally, and if so, allow for a data-based response. Third, clinic-level data can help identify clinicians who need additional support or education, such as those whose patients routinely show an insufficient response to treatment. Fourth, clinic-level data provide objective information to illustrate how clinics are performing as a system – what percentage of patients remits, for example, or the average time to remission.
The ability for a clinic to demonstrate the value of services provided with objective data can mean the difference between continuing to receive funding and ceasing to exist, especially as health care turns away from procedure-based reimbursement and toward outcome- or value-based reimbursement (Burwell, 2015).

Barriers to implementation of MBC

As a result of these advantages, MBC has been described as a necessary part of improving mental health care in youth (Bickman, 2008). Even so, MBC remains relatively rare in psychiatric services (Bickman, Lyon, & Wolpert, 2016; Boswell et al., 2015). Surveys of clinicians who provide services to children and families of youth with mental health problems show that a minority (23–37%) of providers use MBC (Bickman et al., 2000; Hatfield & Ogles, 2004). Use of MBC does not appear to be increasing over time; a more recent survey of approximately 1,500 providers of youth mental health care reported that 77% of providers almost exclusively used unstandardized assessments of client functioning (Cook, Hausman, Jensen-Doss, & Hawley, 2017). However, each of these surveys also reported that clinicians not using MBC were interested in it but did not personally use it due to practical barriers and lack of knowledge.

The Penn State Psychiatry Clinical Assessment and Rating Evaluation System for Youth (PCARES-Youth) is an MBC program that is being implemented in an outpatient psychiatry clinic. We describe the development of this program, as well as preliminary clinic-level data, to provide information on how MBC may be started in similar outpatient mental health clinics and to illustrate what clinic-level information might be learned from it. It is hoped that this paper will guide and encourage others who are interested in MBC to take similar steps, building on our experiences. PCARES-Youth was designed for use at an academic medical center in which a wide range of mental health services are provided to diverse populations of youth. Assessment and intervention services are available in outpatient, inpatient, and partial hospitalization clinics that provide both generalized and specialized services delivered by many different mental health providers (psychiatrists, psychologists, social workers, etc.). The current project was piloted in an outpatient clinic within the larger healthcare system; as such, the data presented below reflect MBC in this environment alone. The project was initiated by the clinical, research, and administrative leadership of the Department of Psychiatry and Behavioral Health. Steering committees and working groups, guided by an advisory board of experts from other institutions, were formed to develop and refine the measurement models and processes (discussed in the following sections). Separate approaches were developed for adult patients (defined as ages 18 and older) and for youth patients (defined as younger than 18); this paper focuses on the approach used for youth patients.

Development of PCARES-Youth

A flow chart showing steps used to develop and implement PCARES-Youth is presented in Figure 1. The first step was to formulate the goals of the project. The first and most important goal was that PCARES-Youth should improve patient care at the individual level. The second goal was to improve the functioning of the clinic by providing clinic-level data. The third goal was to enhance research and training. This goal is consistent with the scientist-practitioner model that emphasizes the connection between research, training, and clinical services; that is, research and training initiatives provide long-term benefit to patients through improvements in assessment and treatment, and by enhancing the expertise and performance of mental health providers.

Figure 1.

Illustration of the development and implementation of PCARES-Youth.

Second, a measurement model was developed by defining the specific constructs to measure. These were conceptually organized into three domains: (1) signs and symptoms of psychopathology, (2) level of impairment and daily functioning, and (3) treatment moderators (i.e., factors that might influence the delivery or course of treatment). Within each of these broad domains, specific constructs were elaborated, as shown in Table 1. The specific constructs were selected based on their prevalence in the clinic and nationally, interest among providers, and their impact on patient care.

Table 1.

Summary of PCARES-Youth measures.

Phase/Measure Conceptual Domain Specific Constructs Reference

Pre-visit Assessment
 Child Behavior Checklist Psychopathology Several Constructs Achenbach and Rescorla (2001)
Intake Assessment
 Demographic Information Form Demographics Several Constructs New measure
 Brief Problem Rating Form Psychopathology Several Constructs New measure
 Disruptive Behavior Disorders Rating Scale Psychopathology ADHD, ODD, CD Pelham, Gnagy, Greenslade, and Milich (1992)
 Screen for Child Anxiety Related Emotional Disorders – 5 Psychopathology Anxiety Birmaher et al. (1999)
 Checklist for Autism Spectrum Disorders – Short Form Psychopathology Autism Mayes (2012)
 Mood and Feelings Questionnaire – Short Form Psychopathology Depression Angold, Costello, Messer, and Pickles (1995)
 Affective Reactivity Index Psychopathology Irritability/Mood Dysregulation Stringaris et al. (2012)
 Limited Prosocial Emotions Questionnaire Psychopathology Limited Prosocial Emotions New measure
 Impairment Rating Scale Impairment/Treatment Need Functional Impairment Fabiano et al. (2006)
 ADHD Self Report Scale DSM-5 Treatment Moderator ADHD in Caregiver Ustun et al. (2017)
 Brief Measure of Caregiver Strain Treatment Moderator Caregiver Strain Brannan, Athay, and de Andrade (2012)
Treatment Monitoring
 Brief Problem Monitoring Form Psychopathology Several Constructs New measure
 Impairment Rating Scale Impairment/Treatment Need Functional Impairment Fabiano et al. (2006)
 Brief Measure of Caregiver Strain Treatment Moderator Caregiver Strain Brannan et al. (2012)
 Youth Treatment Monitoring Form Change from Treatment Improvement or Deterioration Pelham, Gnagy, and Greiner (2000)

ADHD = attention-deficit/hyperactivity disorder; ODD = oppositional defiant disorder; CD = conduct disorder.

Third, methods of assessing constructs were discussed, and rating scales were selected as the primary measurement tool and caregivers as the primary informant. Rating scales were chosen because they are feasible to implement (low financial and time cost), provide psychometrically sound information, and facilitate the integration of data across informants and time. A standard set of ratings was selected for use across all youth patients because it limited the chance of missed data, reduced the training and cognitive burden on treatment providers (who would only need to learn to use and interpret one set of measures), was pragmatically easier, and allowed for comparisons across time, patient, and provider. Caregivers were selected as the primary informant because they typically have the most knowledge of their child, can provide psychometrically sound data across all ages of youth, and are often a primary determinant of whether youth seek and access treatment.

Fourth, timing of the assessments was determined. In general, our goal was to map assessments onto stages of treatment (Youngstrom & Van Meter, 2016). Toward that end, assessment was divided into three stages: (a) pre-visit; (b) initial visit; and (c) treatment monitoring. The purpose of the pre-visit assessment was to provide clinicians with information about the youth and family before meeting them. The advantage of this was to improve the efficiency of the in-person visit by providing the clinician with data about areas that are likely to require more or less attention. The purpose of the initial visit assessment was to provide the clinician with data directly relevant to their diagnostic and functional conceptualization of the youth. As such, this assessment needed to be fairly comprehensive, covering most areas of psychopathology and impairment. The purpose of the treatment monitoring assessment was to provide the clinician with data about the youth’s response to treatment over time. Given that caregivers would be asked to complete this assessment at each clinic visit, this assessment needed to minimize patient burden while maximizing treatment-specific information for providers. These assessment stages are described in more detail below.

Finally, the outcome of this process was to select specific measures to use in PCARES-Youth that would provide appropriate patient-level data and clinic-level data. Table 1 summarizes the selected measures and how they relate to the important features of the measurement model. The majority of measures had well-supported psychometrics as judged by conventional standards (Youngstrom et al., 2017), including (for treatment monitoring measures) sensitivity to treatment. However, a few new measures were created specifically for PCARES-Youth. This was done when existing measures were judged as not suitable for the project (e.g., too long, confusing for parents or clinicians, questionable psychometrics, etc.). For example, limited prosocial emotions was considered an important construct to assess because it is a specifier of conduct disorder in the current (5th) edition of the Diagnostic and Statistical Manual of Mental Disorders (American Psychiatric Association, 2013). However, no well-supported yet brief measure of this construct was available when PCARES-Youth was being developed. As such, a new measure of this construct was created for use in PCARES-Youth and research reporting its psychometric performance is currently under review (Castagnia, Babinski, Pearl, Waxmonsky & Waschbusch, under review).

Having selected measures, the next step was to test them in routine practice. To accomplish this we conducted a pilot project of PCARES-Youth. Below we discuss the assessment stages as implemented in the pilot project. We then illustrate the value of PCARES-Youth as a tool for learning not just about individual patients but also about the base rates and demographics in a specific clinic. It is important to understand base rates at specific clinics because they often deviate from general base rates found in the literature; knowing this can then inform the diagnostic process (Youngstrom & Duax, 2005). We illustrate the value of MBC for learning about the landscape of our specific clinic as a system by examining four exemplar questions: (1) What types of psychopathology are exhibited by youth seeking mental health services in our clinic? (2) Are there age and/or sex differences in psychopathology among youth in our clinic? (3) What proportion of youth have multiple types of psychopathology? and (4) Are mother and father ratings of the same youth associated? These questions were addressed using the pre-visit, pilot data. Finally, we summarize the results of a survey of clinicians who participated in PCARES-Youth.

Pilot project

Overview

The PCARES-Youth pilot project was implemented in a single outpatient mental health clinic in central Pennsylvania that is affiliated with the Penn State Health Milton S. Hershey Medical Center Department of Psychiatry and Behavioral Health. The clinic is staffed by approximately 32 treatment providers who work with youth (both full-time and part-time, including psychiatrists, psychologists, licensed social workers, and nurse practitioners). The clinic provides about 10,000 clinical service visits per year to youth patients, with about 800 of those to new patients. Youth patients served by the clinic are diverse in terms of their mental health diagnoses, age, race, income, and geography (rural, suburban, or urban).

The pilot project was conducted from May 2016 through February 2017. The goal was to collect PCARES-Youth data from caregivers of all new youth patients, where “new patient” was defined as anyone who had not received services in the clinic within the past six months, and “youth patient” was defined as any individual under age 18 years. A few patient populations were excluded from the pilot study: (a) youth with Autism Spectrum Disorder (ASD) due to concern with the validity of some measures with this population; (b) youth in research studies in which the assessment might interfere with ongoing research; and (c) youth referred primarily for psychological adjustment to health conditions (e.g., problems secondary to eating, irritable bowel syndrome, etc.) because the primary presenting problem was not psychopathology. The specifics of each assessment stage (pre-visit, initial visit, and treatment monitoring) are described next. The institutional review board approved the use of PCARES-Youth data for research purposes.

Assessment stages

Pre-visit assessment

The pre-visit assessment was initiated when caregivers first contacted the clinic to inquire about services and were asked if they would be willing to complete a rating scale about their child or adolescent before they arrived at the clinic. Those who agreed were then invited to complete the rating scale electronically if they were willing to provide an e-mail address. All parents reported intent to complete the ratings (although not all followed through, as shown below), and more than 95% of caregivers provided an e-mail to do so electronically. The 5% of parents who were not willing to complete the measures electronically or who declined to provide an e-mail address were sent paper copies of the same ratings to complete and return during their first appointment. The invitation to complete ratings and the recording of e-mail addresses were handled by intake staff as a routine part of contacting new patients. Ancillary staff whose job duties included supporting PCARES-Youth were responsible for emailing ratings and providing caregivers the information needed to complete the ratings. The PCARES-Youth staff were also responsible for scoring completed ratings (using published software), then printing the results and distributing them to clinicians prior to the youth’s appointment.

As shown in Table 1, the Child Behavior Checklist (CBCL; Achenbach & Rescorla, 2001) was selected as the measure for the pre-visit assessment. This measure was selected because it has excellent psychometric properties and norms, strong infrastructure support, is available in many languages, and can be administered and completed electronically (via HIPAA compliant web page) or with paper/pencil. In the pilot project, the mean response time between sending a request to the caregiver to complete the CBCL and receiving the completed ratings was 5.2 days for female caregivers (range: 0 to 63 days) and 5.7 days for male caregivers (range: 0 to 32 days). A total of 302 invitations were sent and 217 were completed for an overall response rate of 71.9%, with a monthly response rate that ranged from a low of 57% to a high of 95%. ANOVA and Chi-square tests showed that neither child age (p = .795) nor child sex (p = .590) differed between youth who were not rated (57.8% male; Age: M = 11.19, SD = 3.35, range: 6 to 17) and youth who were rated (54.4% male; Age: M = 11.08, SD = 3.28, range: 6 to 17).

Initial visit assessment

The initial visit assessment occurred on the day of the patient’s first visit to the clinic. Caregivers were scheduled to arrive 30 to 40 minutes before their appointment to complete the assessment and this was sufficient for most caregivers. Paper copies of ratings were given to caregivers by front desk staff as part of check-in procedures, with each caregiver present at the appointment (e.g., mother, father) encouraged to complete their own ratings. There were 11 measures (see Table 1) that collectively included about 150 items, which is similar to the number of items on other widely used rating scales that broadly measure psychopathology, such as the third edition of the Behavior Assessment System for Children (Reynolds & Kamphaus, 2015). Clinicians were encouraged to ask caregivers for the completed ratings at the start of the appointment and integrate them into the assessment by reviewing them with caregivers and youth patients and following up as needed.

Treatment monitoring assessment

Treatment monitoring assessments occurred at every in-person visit after the first appointment. As part of check-in procedures with front-desk staff, each caregiver was given a rating scale that included 50 items. The ratings were designed to be completed within 15 minutes because patients are routinely asked to arrive 15 minutes before each appointment. The majority of items for these ratings were taken directly from the intake assessment, providing measures of patient change over the course of treatment. Ratings also included items that directly asked caregivers whether their child’s functioning had improved, declined, or not changed in response to treatment. Although one measure used for treatment monitoring is new and currently undergoing psychometric evaluation, other measures were selected based on their reliability, validity, and sensitivity to treatment response (e.g., Evans, Sibley, & Serpell, 2009; Pelham et al., 2016). Treatment monitoring ratings were collected throughout the pilot project but these data were not examined for the current study in order to maintain focus on clinic-level data, as described next.

Pre-assessment visit: Clinic-level data example

Overview

To illustrate how MBC programs, such as PCARES-Youth, provide useful information for answering clinic-level questions, we examined ratings on the CBCL, which was administered at the pre-assessment visit. The CBCL consists of 113 items that are rated as 0 (not true), 1 (somewhat or sometimes true), or 2 (very true or often true). Items are summed to compute subscale scores that measure a wide range of psychopathology. For this study, the six DSM-5 subscales were computed: Affect Problems, Anxiety, Somatic Problems, Attention-Deficit Hyperactivity Disorder (ADHD), Oppositional Defiant, and Conduct Problems. Both raw scores and T-scores, which are computed using age and sex norms, were used. T-scores were examined as continuous measures and as categorical measures, with T-scores of 65 or above used to indicate the presence of psychopathology. A T-score cutoff of 65 represents approximately the top 7% of the general population and is described as “borderline clinically elevated” in the CBCL manual. This cutoff was applied independently for each subscale.
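The cutoff logic described above can be sketched in a few lines. This is an illustrative sketch only: the subscale scores below are hypothetical, and the normal-tail calculation simply shows why a T-score of 65 (1.5 SD above a mean of 50) corresponds to roughly the top 7% of the normative population. It is not the published scoring software used in the pilot.

```python
import math

T_CUTOFF = 65  # "borderline clinically elevated" per the CBCL manual


def t_to_z(t, mean=50.0, sd=10.0):
    """Convert a T-score (mean 50, SD 10 by construction) to a z-score."""
    return (t - mean) / sd


def upper_tail(z):
    """P(Z >= z) for a standard normal variable, via the complementary error function."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))


def flag_psychopathology(t_scores):
    """Apply the T >= 65 cutoff independently to each subscale."""
    return {scale: t >= T_CUTOFF for scale, t in t_scores.items()}


# A T-score of 65 is 1.5 SD above the mean, i.e. roughly the top 7%
# of the normative population.
print(round(upper_tail(t_to_z(T_CUTOFF)), 3))  # ~0.067

# Hypothetical subscale T-scores for one youth (illustrative only).
ratings = {"Affect Problems": 67, "Anxiety": 66, "Somatic Problems": 58,
           "ADHD": 64, "Oppositional Defiant": 62, "Conduct Problems": 60}
print(flag_psychopathology(ratings))
```

Because the cutoff is applied per subscale, counting the flags for a given youth directly yields the number of co-occurring elevations, which is how clinic-level comorbidity questions such as question (3) can be tabulated.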

The CBCL was available for 217 youth who were rated by their caregivers as part of the pre-visit assessment. Of the 217 youth, 131 were rated only by female caregivers (hereafter referred to as mothers), 18 were rated only by male caregivers (hereafter referred to as fathers), and 68 were rated by both mothers and fathers. The majority of the youth who were rated were between 6 and 12 years old (65.2%) and were boys (55.3%). As introduced earlier, we used these data to ask four questions about youth served by the psychiatry clinic: (1) What types of psychopathology are exhibited by youth seeking mental health services in our clinic? (2) Are there age (child: 6 to 12 vs. adolescent: 13 to 17) and/or sex differences in psychopathology among youth in our clinic? (3) What proportion of youth in our clinic have multiple types of psychopathology? and (4) Are mother and father ratings of the same youth associated? Only mother ratings were used for the first three questions because of the larger sample size, but data from father ratings are descriptively presented.

What types of psychopathologies are exhibited by youth seeking services in our clinic?

Relative severity of different types of psychopathology within each child was examined by computing a one-way ANOVA with scale (Affect Problems vs. Anxiety vs. Somatic Problems vs. ADHD vs. Oppositional Defiant vs. Conduct Problems) as a within-subjects factor and T-scores as the dependent variable. The ANOVA was significant (F (5, 990) = 35.42, p < .001) with follow-up tests showing that each scale differed from every other (p < .05), with the exception of the Affect Problems and Anxiety scales, which did not differ (p = .672). Examination of means (see Table 2) showed that scores were highest (most severe) for the Affect Problems and Anxiety scales and lowest for the Somatic Problems scale.

Table 2.

Descriptive statistics (top) and rates of psychopathology (bottom) for caregiver ratings on the Child Behavior Checklist.

Mother Ratings | Father Ratings
Scale N M SD Max N M SD Max

Affect Problems 199 66.98  8.94 91.00 86 64.90  9.00  84.00
Anxiety 199 66.67  11.87 97.00 86 64.22  12.31  97.00
Somatic Problems 199 58.27  8.58 87.00 86 55.99  9.40  100.00
ADHD 199 63.88  8.75 80.00 86 62.12  8.74  80.00
Oppositional Defiant 199 62.15  8.91 80.00 86 60.34  9.35  80.00
Conduct Problems 199 60.22  9.45 96.00 86 58.24  9.22  86.00

Mother Ratings (Not Present | Present) | Father Ratings (Not Present | Present)
Scale N % N % N % N %

Affect Problems 76 38.2 123 61.8 44 51.2 42 48.8
Anxiety 82 41.2 117 58.8 44 51.2 42 48.8
Somatic Problems 151 75.9 48 24.1 71 82.6 15 17.4
ADHD 91 45.7 108 54.3 49 57.0 37 43.0
Oppositional Defiant 115 57.8 84 42.2 57 66.3 29 33.7
Conduct Problems 135 67.8 64 32.2 62 72.1 24 27.9

Scales are DSM-5 subscales from the Child Behavior Checklist (CBCL). The minimum score for all scales is 50. Psychopathology was defined as present if T-scores were at or above 65, which is considered borderline clinically elevated according to the CBCL manual.

Are there age and sex differences in psychopathology in youth seeking services in our clinic?

Age and sex differences on the CBCL were examined by computing an ANOVA with sex (female vs. male) and age (child vs. adolescent) as between-subjects factors and scale (Affect Problems vs. Anxiety vs. Somatic Problems vs. ADHD vs. Oppositional Defiant vs. Conduct Problems) as a within-subjects factor. Raw scores on the CBCL were used for this analysis because T-scores correct for age and sex differences. There was a significant scale × sex interaction, F(5, 975) = 8.93, p < .001, and a significant scale × age interaction, F(5, 975) = 8.67, p = .004. Follow-up tests of scale × sex showed that males were rated significantly higher than females on the ADHD, Oppositional Defiant, and Conduct Problems scales (ps < .018), whereas females were rated higher than males on the Somatic Problems scale (p = .035). Males and females did not differ on the Affect Problems (p = .361) or Anxiety (p = .054) scales. Follow-up tests of scale × age showed that adolescents were rated higher on the Affect Problems scale (p < .001), whereas children were rated higher on the ADHD scale (p = .043). Adolescents and children did not differ on the other scales (ps ≥ .089).

What proportion of youth who present to our clinic have multiple types of psychopathology?

Rates of psychopathology (see Table 2) ranged from 24.1% for Somatic Problems to 61.8% for Affect Problems according to mother ratings. The average number of elevated areas of psychopathology was 2.73 (SD = 1.59), with the distribution as follows: 11.6% of youth were elevated in one area, 24.1% in two areas, 23.1% in three areas, 15.1% in four areas, 12.6% in five areas, 3.5% in six areas, and 10.1% had no elevations. The overlap between specific types of psychopathology was examined by computing a series of 2 (psychopathology 1: no vs. yes) × 2 (psychopathology 2: no vs. yes) chi-square tests. These results showed that Affect Problems, Anxiety, and Somatic Problems significantly overlapped with one another (ps ≤ .031), as did ADHD, Oppositional Defiant, and Conduct Problems (ps < .001). In contrast, Affect Problems, Anxiety, and Somatic Problems were not associated with ADHD, Oppositional Defiant, or Conduct Problems (ps ≥ .133).
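
Each 2 × 2 overlap test above is a Pearson chi-square on a presence/absence contingency table. The sketch below hand-rolls the statistic rather than relying on a stats library; the counts shown are hypothetical illustrations, not the clinic's data.

```python
import numpy as np

def chi2_2x2(table):
    """Pearson chi-square (no continuity correction) for a 2x2 table of
    counts, plus the phi coefficient (an effect-size measure for 2x2
    association)."""
    t = np.asarray(table, dtype=float)
    n = t.sum()
    # Expected counts under independence: outer product of margins / n.
    expected = np.outer(t.sum(axis=1), t.sum(axis=0)) / n
    chi2 = ((t - expected) ** 2 / expected).sum()
    return chi2, np.sqrt(chi2 / n)

# Hypothetical counts: rows = elevation 1 no/yes, cols = elevation 2 no/yes.
chi2, phi = chi2_2x2([[50, 26], [32, 91]])
print(round(chi2, 2), round(phi, 2))
```

The same function yields the phi coefficients reported later in Table 4, since phi for a 2 × 2 table is the square root of chi-square over N.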

Are mother and father ratings of the same youth who present to our clinic associated?

The association between mother and father ratings was examined by computing Pearson correlations between T-scores (see Table 3). As shown, mother and father ratings of the same construct were significantly and highly correlated, with rs ranging from .57 to .75. The overlap between mother and father ratings was also examined when psychopathology was defined categorically (no vs. yes). As shown in Table 4, mother and father ratings were again associated. Of note, few children were above psychopathology cutoffs on father ratings but not mother ratings, with rates ranging from 2.9% to 10.3%.

Table 3.

Correlations within and between mother and father rating on the Child Behavior Checklist.

Mother Ratings
Father Ratings
1 2 3 4 5 6 7 8 9 10 11

Mother
1. Affect Problems
2. Anxiety .53*
3. Somatic Problems .31* .37*
4. ADHD .06 .02 −.01
5. Oppositional Defiant .23* .12 .02 .51*
6. Conduct Problems .24* .04 −.01 .51* .76*
Father
7. Affect Problems .71* .46* .21 −.04 .20 .16
8. Anxiety .32* .59* .22 .15 .25* .24 .57*
9. Somatic Problems .25* .35* .57* .05 .21 .28* .36* .39*
10. ADHD .02 .03 −.01 .71* .35* .28* .15 .32* .08
11. Oppositional Defiant .21 .16 .09 .53* .75* .67* .44* .42* .20 .57*
12. Conduct Problems .19 .14 .12 .62* .75* .75* .36* .36* .19 .59* .85*

Values in the table are Pearson correlations of T-scores. Scales are DSM-5 subscales of the Child Behavior Checklist. Sample sizes were 199 for mother-mother correlations, 86 for father-father correlations, and 68 for mother-father correlations.

* = p < .05.

Table 4.

Agreement between mother and father categorical definitions of psychopathology on the Child Behavior Checklist.

Neither | Mother Only | Father Only | Both
Scale N % N % N % N % χ2 phi

Affect Problems 17 25.0 17 25.0 2  2.9 32 47.1 16.43* .49*
Anxiety 19 27.9 16 23.5 7 10.3 26 38.2  7.87* .34*
Somatic Problems 44 64.7 10 14.7 4  5.9 10 14.7 14.99* .47*
ADHD 27 39.7 12 17.6 3  4.4 26 38.2 23.39* .59*
Oppositional Defiant 41 60.3  4  5.9 4  5.9 19 27.9 36.96* .74*
Conduct Problems 44 64.7  5  7.4 3  4.4 16 23.5 35.13* .72*

Scales are DSM-5 Subscales on the Child Behavior Checklist. Neither = neither mother nor father ratings exceeded cutoff; Mother Only = exceeded cutoff for mother but not father ratings; Father Only = exceeded cutoffs for father but not mother ratings; Both = exceeded cutoffs for mother and father ratings.

* = p < .05.

Clinician survey about PCARES-Youth

Approximately one year after the introduction of PCARES-Youth, clinicians were asked to complete an anonymous ten-question survey assessing their knowledge of and attitudes toward PCARES-Youth. The survey included two items about the clinicians themselves (primary type of clinical work, type of highest degree) and eight items about PCARES-Youth. Additional demographic information (age, sex, experience, time since highest degree) was collected on a separate form that was kept apart from the main survey responses to preserve anonymity. Of the 32 clinicians who were eligible, 29 completed the survey (90.6% response rate). Respondents were 62.1% female, had an average age of 45 years (range: 29 to 71 years), had earned their highest degree an average of 14.9 years prior to taking the survey (range: 1 to 46 years), and had worked in the clinic an average of 5.8 years (range: <1 to 38 years). When asked about their primary clinical activities, 14.3% conducted diagnostic assessments, 21.4% provided treatment, and 64.3% provided both services. About one-third (37.9%) indicated that they prescribe medication.

The first question asked whether the clinician and their patients participated in PCARES-Youth, with 22 of the 29 respondents (75.9%) indicating yes and 7 indicating no. Reasons for not participating in PCARES-Youth were: prefer their own assessment measures (n = 3), measures were not helpful or relevant (n = 2), not sure why (n = 1), and no reason given (n = 1). The remaining questions were evaluated using Likert scales with the following anchors: strongly disagree (−2), disagree (−1), neutral (0), agree (1), strongly agree (2). For ease of interpretation, items were scored by computing the percentages of respondents who disagreed, agreed, or were neutral about each statement, and the percentages were compared using single-sample chi-square tests, which were significant for nearly every item, indicating that some response options were endorsed more strongly than others (see Table 5). These data suggest that most clinicians were familiar with PCARES-Youth, could interpret and understand results, did not receive strong feedback (positive or negative) from caregivers, were usually provided completed PCARES-Youth ratings, and judged the ratings to be worth the time and effort required from both the clinician and caregiver. The lone item that did not show significantly varying responses assessed whether PCARES-Youth was helpful; on this item, one-half of clinicians agreed with the statement, about one-third were neutral, and the remainder disagreed.

Table 5.

Clinician responses to anonymous survey about PCARES-Youth.

Item N % Disagree % Neutral % Agree χ2

I am familiar with the PCARES-Youth assessment project 28 10.7% 10.7% 78.6% 25.79*
I can interpret and understand the results of parent/caregiver ratings on the PCARES-Youth measures. 28 21.4%  7.1% 71.4% 19.14*
Parents/caregivers make positive comments about the PCARES-Youth measures. 26 23.1% 69.2%  7.7% 16.00*
Parents/caregivers make negative comments about the PCARES-Youth measures. 26 11.5% 57.7% 30.8%  8.39*
PCARES-Youth measures are usually completed by parents/caregivers and completed ratings are usually provided to you. 25 12.0% 20.0% 68.0% 13.76*
PCARES-Youth is helpful in my work with children, parents, or families 26 19.2% 30.8% 50.0%  3.77
Overall, PCARES-Youth assessments are worth the time and effort they require from my patients and from me. 27 22.2% 22.2% 55.6%   6.00*

Items were evaluated using Likert scales with the following anchors: strongly disagree (−2), disagree (−1), neutral (0), agree (1), strongly agree (2). % disagree = percent of respondents who endorsed “disagree” or “strongly disagree” response options; % neutral = percent of respondents who endorsed the “neutral response option”; % agreement = percent of respondents who endorsed “agree” or “strongly agree” response options.

* = p < .05 in a one-sample chi-square test with two degrees of freedom.

Finally, stakeholders (i.e., clinic staff, providers, caregivers) were invited to share their thoughts, opinions, and advice about PCARES-Youth in response to an open-ended question. A few themes emerged from these responses. First, comments across all stakeholder groups were supportive of the project and encouraged it to continue. Second, comments suggested the assessments were too long, included redundant questions, or contained questions that were not relevant. Third, comments suggested improvements that could be made to PCARES-Youth. Most notable in this regard were suggestions for providing clinicians with detailed findings more quickly, in graphical form, and integrated into the patient's medical record.

Discussion

Despite the numerous benefits of MBC, fewer than a third of community outpatient mental health clinics implement MBC systems (Bickman et al., 2000; Cook et al., 2017; Hatfield & Ogles, 2004). This is in part due to a lack of existing research to guide the development and implementation of MBC systems. The purpose of this paper was to address this gap by describing the development and initial implementation of PCARES-Youth. Development of PCARES-Youth involved a multi-step process (see Figure 1) that included defining goals for the system, developing measurement models and constructs, selecting appropriate assessment methods, determining the flexibility and timing of the assessments, and selecting specific validated and new measures. The program was piloted for ten months and demonstrated high levels of patient engagement, provided psychometrically sound clinic-level data about youth presenting problems, and yielded encouraging levels of clinician satisfaction.

Several findings from this pilot project are worth noting. First, caregivers were clearly willing to participate in the MBC system, as virtually all caregivers indicated intent to complete the measures prior to their first visit. The fact that all indicated a willingness to do so, and 72% carried through, is consistent with other similarly designed MBC projects (Bickman et al., 2016). This response rate was achieved despite a multi-step process that included checking e-mail, following the link provided, registering for an account (creating a username and password), and finally completing ratings about their child that consisted of 113 problem behavior items plus additional items measuring adaptive behaviors. This is a clear demonstration that caregivers were not only willing to engage in MBC but also put forth considerable time and effort to do so. At the same time, a number of potential barriers that may lower the response rate were identified. Many of these barriers were logistical and related to challenges integrating a new system into standard clinic procedures. For example, there were difficulties communicating procedures to the large team in the clinic (both providers and administrative staff) as well as to new team members or "float" front-desk staff who fill in between various clinics in our health system. Likewise, families often arrive late for appointments, and caregivers may be accompanied by youth who exhibit demanding behaviors in the waiting room or by toddlers who require supervision, leaving little to no time to complete the measures between arriving at the clinic and starting the appointment.

Second, about 95% of parents preferred completing measures electronically rather than by paper and pencil when offered a choice. This suggests MBC systems that include an option to complete measures digitally are likely to be more accepted and completed by caregivers. This preference is encouraging because a system in which caregivers directly respond electronically eliminates the need to enter, verify, and file paper responses, thereby reducing staffing costs and providing the opportunity for clinicians to have scored results immediately available.

Third, examination of clinic-level data suggests that our clinic is quite similar to other outpatient clinics in terms of age, gender, and presenting concerns. For example, most youth in our sample were elevated on more than one area of psychopathology, consistent with evidence that comorbidity is the rule rather than the exception in clinics serving youth with mental health problems (Caron & Rutter, 1991). On the other hand, internalizing symptoms (anxiety, affect problems) were the most highly rated area of concern on the pre-visit assessment (see Table 2). This was somewhat surprising as externalizing symptoms are typically reported as the most common reason youth are referred to mental health clinics (Hawley & Weisz, 2003; Weisz & Weiss, 1991). This illustrates one advantage of an MBC system – clinic-level data aids in understanding who is served in the clinic, providing a better understanding of how to best serve the needs of patients. It is important to collect such data because mental health trends identified in clinic-level data can differ in meaningful ways from mental health trends identified in population-level data (Youngstrom & Duax, 2005).

Fourth, results also provide information about female versus male caregiver ratings of youth psychopathology. Although it is well established that mother and father ratings of the same youth tend to be significantly but modestly correlated (Achenbach, McConaughy, & Howell, 1987; De Los Reyes et al., 2015; Duhig, Renk, Epstein, & Phares, 2006), relatively few studies have examined how much mother and father ratings of the same youth overlap when psychopathology is defined categorically; that is, when examining data from a person-centered rather than variable-centered approach. Results of this analysis showed that few youth were uniquely identified by ratings from male caregivers, whereas a substantial portion were uniquely identified by ratings from female caregivers (see Table 4). These results raise the possibility that, if forced to choose between ratings from a mother or a father, mother ratings may be advantageous if the goal is to identify psychopathology. We hasten to add that this does not suggest father ratings are not useful; father ratings may be superior in ways not examined in this study. Instead, the point of these analyses is to again demonstrate how MBC data can be helpful in generating hypotheses about improving the assessment and treatment process in a clinic, including how to streamline the process for patients and administrators.

Fifth, eliciting stakeholder feedback was a necessary and informative way of improving the PCARES-Youth battery and process. There is evidence to support this approach (Wiering, de Boer, & Delnoij, 2017). A survey assessing clinician engagement and opinions of PCARES-Youth (Table 5) provided mixed results, with about one-half of respondents agreeing that PCARES-Youth is helpful and worth the time and effort required, but other respondents indicating that they did not agree with these statements. These mixed results parallel findings from prior research, some of which found general satisfaction with MBC (Burr, Fowler, Allen, Wiltgen, & Madan, 2017), and some of which found neither strong positive nor strong negative attitudes toward these systems (Norman, Dean, Hansford, & Ford, 2014). The providers at this pilot site had various backgrounds (e.g., clinical psychologists, psychiatrists, master's-level therapists); it may be that some providers are more or less averse to the scientist-practitioner model that guides this MBC system.

Most importantly, employing this feedback approach allowed us to identify a significant roadblock in our PCARES-Youth implementation: the time and effort burden associated with the pre-visit assessment was unacceptably high for both staff and caregivers. The burden on clinic support staff included generating an e-mail invitation to complete the measure, sending reminders when it was not completed after a short time, printing results when completed, and distributing these to the appropriate clinician, all prior to the patient's first visit to the clinic. Likewise, caregivers indicated that they were willing to complete the ratings but did not like the steps they were required to take (create an account and generate a password) to get to the ratings. For instance, if different members of the same household (mothers and fathers, for example) wanted to complete ratings independently, each caregiver had to provide their own e-mail, generate their own password, and then log on and complete the measures separately. Moreover, the measure we selected for the pre-intake assessment (the CBCL) carries separate costs for administration and scoring and has highly restrictive rules about its use, at a time when assessments are increasingly moving toward open-source approaches. Considering these factors, a decision was made to discontinue the use of this measure. This decision in turn prompted a modest revision of the measurement model: the pre-visit assessment was dropped, and PCARES-Youth assessments now begin with the in-person assessment. What is clear from these results is that there is considerable room to improve PCARES-Youth.

Future directions

Apart from identifying workflow issues, our feedback system identified several directions in which to move forward with PCARES-Youth. First, and in many ways most pressing, we are in the process of psychometrically evaluating some of the ratings used to operationalize our measurement model. Specifically, the psychometric characteristics of newly created measures are being investigated, and steps for reducing the length of the battery are being outlined based on evidence that shorter assessments often perform as well as longer ones (e.g., De Boer et al., 2004; De La Garza, Rush, Grannemann, & Trivedi, 2017) and are almost universally recommended by MBC advocates (Boswell et al., 2015; Lambert & Hawkins, 2004). Relatedly, discussions about implementing a measurement model for collecting ratings directly from youth, as well as from other collateral reporters such as teachers, and incorporating it into PCARES-Youth are ongoing. Finally, adapting PCARES-Youth for implementation in partial and inpatient hospitalization settings is an important next step in the improvement and implementation of this MBC system.

To streamline the feedback process and increase acceptance from patients and caregivers, we intend to conduct focus groups of caregivers and youth to inform the current PCARES-Youth program and to modify the system moving forward. Our intention is to make this a routine part of PCARES-Youth, as recommended (Wiering et al., 2017), rather than a one-and-done effort. We will also conduct separate focus groups of clinicians and support staff (e.g., front-desk receptionists, schedulers, administrative professionals) to systematically elicit their input on PCARES-Youth.

Finally, we are pursuing electronic capture of data and real-time scoring to give clinicians the opportunity to visualize results within the electronic medical record. These processes will allow more timely and specific feedback to clinicians, which, evidence suggests, is the critical ingredient in MBC for improving patient outcomes (Bickman et al., 2011; Mellor-Clark, Cross, Macdonald, & Skjulsvik, 2016). Several platforms are available that offer these capabilities, but they are typically costly, offer limited ability to make revisions once they are established, and pose challenges for integration into the existing electronic medical record system. Overcoming these barriers is important because the benefits of capturing data electronically are likely to be substantial and may lead to considerable improvement in the usefulness of the data collected.

As our findings indicate, designing and implementing an MBC system is challenging. It takes substantial time and effort from all involved, including patients, providers, and administrative personnel, as well as a determination to persist despite expected and unexpected challenges. However, using data collected within a clinical setting as opposed to research settings to inform assessment and treatment decisions, which is the heart of MBC, shows great promise of benefit to providers, clinics, and most importantly, to patients.

Acknowledgments

Errol Aksu, Pevitr Bansal, Edward Bixler, Trish Cain, Evelyn Hernandez, Steven Sinderman

Footnotes

Disclosure statement

The project described was supported by the National Center for Advancing Translational Sciences, National Institutes of Health, through Grant UL1 TR002014 and Grant UL1 TR00045. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. No potential conflict of interest was reported by the authors.

References

1. Achenbach TM, McConaughy SH, & Howell CT (1987). Child/adolescent behavioral and emotional problems: Implications of cross-informant correlations for situational specificity. Psychological Bulletin, 101(2), 213–232. doi: 10.1037/0033-2909.101.2.213
2. Achenbach TM, & Rescorla LA (2001). Manual for ASEBA school-age forms & profiles. Burlington: University of Vermont Research Center for Children, Youth and Families.
3. American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). Arlington, VA: Author.
4. Angold A, Costello EJ, Messer SC, & Pickles A. (1995). Development of a short form questionnaire for use in epidemiological studies of depression in children and adolescents. International Journal of Methods in Psychiatric Research, 5(4), 237–249.
5. Bickman L. (2008). A measurement feedback system (MFS) is necessary to improve mental health outcomes. Journal of the American Academy of Child and Adolescent Psychiatry, 47(10), 1114–1119. doi: 10.1097/CHI.0b013e3181825af8
6. Bickman L, Kelley SD, Breda C, de Andrade AR, & Riemer M. (2011). Effects of routine feedback to clinicians on mental health outcomes of youths: Results of a randomized trial. Psychiatric Services, 62(12), 1423–1429. doi: 10.1176/appi.ps.002052011
7. Bickman L, Lyon AR, & Wolpert M. (2016). Achieving precision mental health through effective assessment, monitoring, and feedback processes: Introduction to the special issue. Administration and Policy in Mental Health, 43(3), 271–276. doi: 10.1007/s10488-016-0718-5
8. Bickman L, Rosof-Williams J, Salzer MS, Summerfelt WT, Noser K, Wilson SJ, & Karver MS (2000). What information do clinicians value for monitoring adolescent client progress and outcomes? Professional Psychology: Research and Practice, 31(1), 70–74. doi: 10.1037/0735-7028.31.1.70
9. Birmaher B, Brent DA, Chiappetta L, Bridge J, Monga S, & Baugher M. (1999). Psychometric properties of the Screen for Child Anxiety Related Emotional Disorders (SCARED): A replication study. Journal of the American Academy of Child and Adolescent Psychiatry, 38(10), 1230–1236. doi: 10.1097/00004583-199910000-00011
10. Boswell JF, Kraus DR, Miller SD, & Lambert MJ (2015). Implementing routine outcome monitoring in clinical practice: Benefits, challenges, and solutions. Psychotherapy Research, 25(1), 6–19. doi: 10.1080/10503307.2013.817696
11. Brannan AM, Athay MM, & de Andrade AR (2012). Measurement quality of the Caregiver Strain Questionnaire-Short Form 7 (CGSQ-SF7). Administration and Policy in Mental Health, 39(1–2), 51–59. doi: 10.1007/s10488-012-0412-1
12. Burr SK, Fowler JC, Allen JG, Wiltgen A, & Madan A. (2017). Patient-reported outcomes in practice: Clinicians' perspectives from an inpatient psychiatric setting. Journal of Psychiatric Practice, 23(5), 312–319. doi: 10.1097/PRA.0000000000000250
13. Burwell SM (2015). Setting value-based payment goals—HHS efforts to improve US health care. New England Journal of Medicine, 372(10), 897–899. doi: 10.1056/NEJMp1500445
14. Caron C, & Rutter M. (1991). Comorbidity in child psychopathology: Concepts, issues, and research strategies. Journal of Child Psychology and Psychiatry, 32(7), 1061–1080. doi: 10.1111/jcpp.1991.32.issue-7
15. Cook JR, Hausman EM, Jensen-Doss A, & Hawley KM (2017). Assessment practices of child clinicians. Assessment, 24(2), 210–221. doi: 10.1177/1073191115604353
16. De Boer A, Van Lanschot J, Stalmeier P, Van Sandick J, Hulscher JB, De Haes J, & Sprangers M. (2004). Is a single-item visual analogue scale as valid, reliable and responsive as multi-item scales in measuring quality of life? Quality of Life Research, 13(2), 311–320. doi: 10.1023/B:QURE.0000018499.64574.1f
17. De La Garza N, Rush JA, Grannemann BD, & Trivedi MH (2017). Toward a very brief self-report to assess the core symptoms of depression (VQIDS-SR5). Acta Psychiatrica Scandinavica, 135(6), 548–553. doi: 10.1111/acps.12720
18. De Los Reyes A, Augenstein TM, Wang M, Thomas SA, Drabick DA, Burgers DE, & Rabinowitz J. (2015). The validity of the multi-informant approach to assessing child and adolescent mental health. Psychological Bulletin, 141(4), 858–900. doi: 10.1037/a0038498
19. Duhig AM, Renk K, Epstein MK, & Phares V. (2006). Interparental agreement on internalizing, externalizing, and total behavior problems: A meta-analysis. Clinical Psychology: Science and Practice, 7(4), 435–453. doi: 10.1093/clipsy.7.4.435
20. Evans SW, Sibley MH, & Serpell ZN (2009). Changes in caregiver strain over time in adolescents with ADHD. Journal of Attention Disorders, 12(6), 516–524. doi: 10.1177/1087054708322987
21. Fabiano GA, Pelham WE Jr., Waschbusch DA, Gnagy EM, Lahey BB, Chronis AM, … Burrows-MacLean L. (2006). A practical impairment measure: Psychometric properties of the impairment rating scale in samples of children with attention-deficit/hyperactivity disorder and two school-based samples. Journal of Clinical Child and Adolescent Psychology, 35(3), 369–385. doi: 10.1207/s15374424jccp3503_3
22. Fortney JC, Slack R, Unutzer J, Kennedy K, Harbin HT, Emmet B, … Carneal G. (2015). Fixing behavioral health care in America: A national call for measurement-based care in the delivery of behavioral health services. Retrieved from www.thekennedyforum.org
23. Fortney JC, Unutzer J, Wrenn G, Pyne JM, Smith GR, Schoenbaum M, & Harbin HT (2017). A tipping point for measurement-based care. Psychiatric Services, 68(2), 179–188. doi: 10.1176/appi.ps.201500439
24. Guo T, Xiang YT, Xiao L, Hu CQ, Chiu HF, Ungvari GS, … Geng Y. (2015). Measurement-based care versus standard care for major depression: A randomized controlled trial with blind raters. American Journal of Psychiatry, 172(10), 1004–1013. doi: 10.1176/appi.ajp.2015.14050652
25. Hatfield DR, & Ogles BM (2004). The use of outcome measures by psychologists in clinical practice. Professional Psychology: Research and Practice, 35(5), 485–491. doi: 10.1037/0735-7028.35.5.485
26. Hawley KM, & Weisz JR (2003). Child, parent and therapist (dis)agreement on target problems in outpatient therapy: The therapist's dilemma and its implications. Journal of Consulting and Clinical Psychology, 71(1), 62–70. doi: 10.1037/0022-006x.71.1.62
27. Hoagwood K, Burns BJ, Kiser L, Ringseisen H, & Schoenwald SK (2001). Evidence-based practice in child and adolescent mental health services. Psychiatric Services, 52(9), 1179–1189. doi: 10.1176/appi.ps.52.9.1179
28. Knaup C, Koesters M, Schoefer D, Becker T, & Puschner B. (2009). Effect of feedback of treatment outcome in specialist mental healthcare: Meta-analysis. The British Journal of Psychiatry, 195(1), 15–22. doi: 10.1192/bjp.bp.108.053967
29. Lambert MJ, & Hawkins EJ (2004). Measuring outcome in professional practice: Considerations in selecting and using brief outcome instruments. Professional Psychology: Research and Practice, 35(5), 492–499. doi: 10.1037/0735-7028.35.5.492
30. Mayes SD (2012). Checklist for autism spectrum disorder. Wood Dale, IL: Stoelting.
31. Mellor-Clark J, Cross S, Macdonald J, & Skjulsvik T. (2016). Leading horses to water: Lessons from a decade of helping psychological therapy services use routine outcome measurement to improve practice. Administration and Policy in Mental Health and Mental Health Services Research, 43(3), 279–285. doi: 10.1007/s10488-014-0587-8
32. Norman S, Dean S, Hansford L, & Ford T. (2014). Clinical practitioner's attitudes towards the use of routine outcome monitoring within child and adolescent mental health services: A qualitative study of two child and adolescent mental health services. Clinical Child Psychology and Psychiatry, 19(4), 576–595. doi: 10.1177/1359104513492348
33. Pelham WE Jr., Fabiano GA, Waxmonsky JG, Greiner AR, Gnagy EM, Pelham WE 3rd, … Murphy SA (2016). Treatment sequencing for childhood ADHD: A multiple-randomization study of adaptive medication and behavioral interventions. Journal of Clinical Child and Adolescent Psychology, 45(4), 396–415. doi: 10.1080/15374416.2015.1105138
34. Pelham WE Jr., Gnagy EM, Greenslade KE, & Milich R. (1992). Teacher ratings of DSM-III-R symptoms for the disruptive behavior disorders. Journal of the American Academy of Child and Adolescent Psychiatry, 31(2), 210–218. doi: 10.1097/00004583-199203000-00006
35. Pelham WE Jr., Gnagy EM, Greiner AR, & MTA Cooperative Group. (2000, November). Parent and teacher satisfaction with treatment and evaluation of effectiveness. Paper presented at the Association for the Advancement of Behavior Therapy, New Orleans, LA.
36. Polanczyk GV, Salum GA, Sugaya LS, Caye A, & Rohde LA (2015). Annual research review: A meta-analysis of the worldwide prevalence of mental disorders in children and adolescents. Journal of Child Psychology and Psychiatry, 56(3), 345–365. doi: 10.1111/jcpp.12381
37. Porter ME, Larsson S, & Lee TH (2016). Standardizing patient outcomes measurement. New England Journal of Medicine, 374(6), 504–506. doi: 10.1056/NEJMp1511701
38. Reynolds CR, & Kamphaus RW (2015). Behavior assessment system for children (3rd ed.). New York, NY: Pearson.
39. Rush AJ (2015). Isn't it about time to employ measurement-based care in practice? [Editorial]. American Journal of Psychiatry, 172(10), 934–936. doi: 10.1176/appi.ajp.2015.15070928
40. Stringaris A, Goodman R, Ferdinando S, Razdan V, Muhrer E, Leibenluft E, & Brotman MA (2012). The affective reactivity index: A concise irritability scale for clinical and research settings. Journal of Child Psychology and Psychiatry, 53(11), 1109–1117. doi: 10.1111/j.1469-7610.2012.02561.x
41. Sussman N. (2007). Translating science into service. The Primary Care Companion to the Journal of Clinical Psychiatry, 9(5), 331–337. doi: 10.4088/PCC.v09n0501
42. Ustun B, Adler LA, Rudin C, Faraone SV, Spencer TJ, Berglund P, … Kessler RC (2017). The World Health Organization adult attention-deficit/hyperactivity disorder self-report screening scale for DSM-5. JAMA Psychiatry, 74(5), 520–526. doi: 10.1001/jamapsychiatry.2017.0298
  43. Weisz JR, Kuppens S, Ng MY, Eckshtain D, Ugueto AM, Vaughn-Coaxum R, … Fordwood SR (2017). What five decades of research tells us about the effects of youth psychological therapy: A multilevel meta-analysis and implications for science and practice. American Psychologist, 72(2), 79–117. doi: 10.1037/a0040360 [DOI] [PubMed] [Google Scholar]
  44. Weisz JR, & Weiss B. (1991). Studying the “referability” of child clinical problems. Journal of Consulting and Clinical Psychology, 59(2), 266–273. doi: 10.1037/0022-006X.59.2.266 [DOI] [PubMed] [Google Scholar]
  45. Weisz JR, Weiss B, Han SS, Granger DA, & Morton T. (1995). Effects of psychotherapy with children and adolescents revisited: A meta-analysis of treatment outcome studies. Psychological Bulletin, 117(3), 450–468. doi: 10.1037/0033-2909.117.3.450 [DOI] [PubMed] [Google Scholar]
  46. Wiering B, de Boer D, & Delnoij D. (2017). Patient involvement in the development of patient-reported outcome measures: A scoping review. Health Expectations, 20(1), 11–23. doi: 10.1111/hex.12442 [DOI] [PMC free article] [PubMed] [Google Scholar]
  47. Youngstrom EA, & Duax J. (2005). Evidence-based assessment of pediatric bipolar disorder, part I: Base rate and family history. Journal of the American Academy of Child and Adolescent Psychiatry, 44(7), 712–717. doi: 10.1097/01.chi.0000162581.87710.bd [DOI] [PubMed] [Google Scholar]
  48. Youngstrom EA,& Van Meter A. (2016). Empirically supported assessment of children and adolescents. Clinical Psychology-Science and Practice, 23(4), 327–347. doi: 10.1111/cpsp.12172 [DOI] [Google Scholar]
  49. Youngstrom EA, Van Meter A, Frazier TW, Hunsley J, Prinstein MJ, Ong ML, & Youngstrom JK (2017). Evidence-based assessment as an integrative model for applying psychological science to guide the voyage of treatment. Clinical Psychology-Science and Practice, 24(4), 331–363. doi: 10.1111/cpsp.12207 [DOI] [Google Scholar]
  50. Zablotsky B, Black LI, Maenner MJ, Schieve LA, Danielson ML, Bitsko RH, … Boyle CA (2019). Prevalence and trends of developmental disabilities among children in the United States: 2009–2017. Pediatrics, 144(4), e20190811. doi: 10.1542/peds.2019-0811 [DOI] [PMC free article] [PubMed] [Google Scholar]