Author manuscript; available in PMC: 2023 Jul 1.
Published in final edited form as: J Clin Child Adolesc Psychol. 2021 Jul 16;51(4):453–468. doi: 10.1080/15374416.2021.1941057

A Multidimensional Examination of the Measurement of Treatment Engagement: Implications for Children’s Mental Health Services and Research

Davielle Lakind 1, W Joshua Bradley 2, Ajay Patel 3, Bruce F Chorpita 4, Kimberly D Becker 2
PMCID: PMC8761203  NIHMSID: NIHMS1726258  PMID: 34269632

Abstract

Objective:

The gap between rates of children’s mental health problems and their participation in services highlights the need to address concerns related to engagement in mental health services more effectively. Identifying, understanding, and resolving engagement concerns appropriately requires effective measurement. In this study, we employed a multidimensional conceptual framework of engagement to examine the measurement of engagement in intervention studies focused on improving children’s and/or families’ engagement in services.

Method:

We coded 52 randomized controlled trials (RCTs) of interventions designed to enhance treatment engagement published between 1974 and 2019 to examine what engagement constructs have been measured, how these constructs have been measured, who has provided information about engagement, and when and why engagement measures have been administered.

Results:

Attendance was measured in 94.2% of studies, and 59.6% of studies measured only attendance. Furthermore, most studies (61.5%) measured only one engagement dimension. One hundred twelve unique indicators of treatment engagement were used (61.6% measuring attendance). Infrequent measurement of youth (19.2% of studies) or caregiver (26.9%) perspectives was apparent. About half (54.7%) of measures were completed on one occasion, with 53.7% of measures completed after treatment was concluded.

Conclusions:

Results highlight how the field’s measurement of engagement has focused narrowly on attendance and on interventions that improve attendance. We consider promising new directions for capturing the multidimensional, dynamic, and subjective aspects of engagement, and for leveraging measurement in research and practice settings to feasibly and effectively identify, monitor, and address engagement challenges.

Keywords: engagement, attendance, measurement, mental health services, children


Roughly one in five youth in the U.S. has emotional or behavioral challenges significant enough to warrant treatment (Perou et al., 2013), yet less than half enroll in mental health services (Merikangas et al., 2010), and over 50% of those who enroll then terminate prematurely (Nock & Ferriter, 2005; Pellerin et al., 2010). The cost of this unmet need is considerable, as treatment dropout predicts poorer youth outcomes (Danko et al., 2016), and poor youth mental health can contribute to detriments in functioning and wellbeing into adulthood (Chen et al., 2006). Given the gap between need and participation, addressing problems related to treatment engagement – i.e., individuals’ and families’ commitment and capacity to participate effectively in treatment – could have a significant public health impact by improving the effectiveness of mental health services.

Fortunately, there is a substantial and growing evidence base of effective interventions designed to increase involvement in youth mental health services. These interventions have been summarized in several previous reviews (see Becker et al., 2018), and range considerably in terms of their characteristics, focus, and use within interventions. Briefly, they have been embedded within mental health clinics, but also in emergency departments and pediatric settings. They have been used to promote engagement in a variety of interventions, including parent training programs, multisystemic therapy, and child psychotherapy in general. They represent many approaches, from strategies to improve screening and referral processes, to those that orient youth and/or caregivers to treatment, to motivational enhancement strategies. Interventions also incorporate discrete clinical procedures in varying combinations, including appointment reminders, psychoeducation, and goal setting (Becker et al., 2018; Lindsey et al., 2014).

In psychology broadly, assessment is integral to intervention from start to finish. According to Meehl (1954/1996), standardized and/or statistical approaches can offer added value beyond pure clinical decision-making; using standardized measures in assessment both increases the accuracy of prediction and provides context and comparison for the idiographic presentation of the client. As formulated by Youngstrom (2013), assessment can strengthen intervention through prediction (providing information regarding a diagnosis or other criterion of interest), prescription (informing the choice of treatment), or process (progress over the course of treatment, outcomes of treatment). The utility of an assessment approach can be evaluated based on its contributions to prognosis and treatment across these “Three P’s.”

The evidence base now affords us the opportunity to examine the field’s instrumentation related to treatment engagement. As interventions continue to mature and the collective body of research reveals what works for what purpose (cf. Paul, 1967; Becker et al., 2018), we should consider whether measurement strategies have similarly kept pace to allow us to evaluate how well our interventions are working and identify opportunities to advance measurement. The purpose of this paper was to characterize the measurement of engagement within the context of a sample of studies testing engagement interventions. Specifically, we examined what engagement constructs are measured, how these constructs are measured, who reports about engagement, and when and why engagement measures are administered. We discuss each of these parameters in turn.

Consensus has emerged among researchers that the construct of engagement is best characterized as multidimensional (e.g., Becker et al., 2015; Becker et al., 2018; Chacko et al., 2016; Haine-Schlagel et al., 2019; Pullman et al., 2013), including domains that are social (e.g., therapeutic alliance), cognitive (e.g., expectancies related to treatment effectiveness, understanding of treatment approach), affective (e.g., emotions related to treatment such as hopefulness or frustration), and behavioral (e.g., attendance, active participation in session, homework completion). Reflecting these multiple domains, the nature of the engagement problems that occur in treatment varies (e.g., De Haan et al., 2013; Garibaldi et al., 2020; Karver et al., 2018; Wright et al., 2019). Thus, what is measured is a critical question. Ideally, the field’s instrumentation should have the capacity to accurately and reliably measure multiple domains of engagement. Prior research suggests, however, markedly disproportionate measurement of different domains; for example, in a review of engagement interventions using a five-dimensional conceptual framework for engagement (also used in the current study), attendance was measured in 94% of studies, whereas the other domains (relationship, expectancy, clarity, and homework) were each examined in less than 30% of studies (Becker et al., 2018). Given that attendance alone does not predict treatment outcomes (Nock & Ferriter, 2005), and that attendance and other engagement domains may be interrelated, a focus on measuring attendance might occur at the expense of advancing measurement of other social, cognitive, and/or affective treatment engagement factors and our ability to assess whether interventions improve them. Even within specific domains of engagement, definitions and methods of operationalization may diverge, leading to divergent outcomes, as Warnick et al. (2012) demonstrated in a study in which rates and predictors of attrition from youth psychotherapy varied depending on the method by which attrition was defined.

How dimensions of engagement are measured determines which inferences can be drawn about the construct and its components. The field of psychology has a tradition of ensuring that measures have strong psychometric properties and clearly defined parameters for use in various populations and contexts (Meehl, 1996). Psychometric properties such as reliability and validity provide important information about the strengths and limits of measures, which in turn have implications for their use in research and practice (Cook & Beckman, 2006; Wood et al., 2007). Advancing research on the predictors of treatment engagement and the outcomes of engagement interventions requires reliable, valid, and theoretically coherent measurement (cf. Hock et al., 2015). An additional consideration is whether a measure is available to users beyond the study in which it was originally used. For a measure to be used in practice, it must be accessible; proprietary research measures, and measures offered at prohibitively high cost, cannot be adopted widely.

Sound assessment is also predicated on informants; who the informant is shapes what we know about the construct of interest, and different informants are likely to provide diverging information. Kraemer et al. (2003) proposed that various informants can be identified to represent unique contexts and perspectives, such that informant discrepancies provide complementary information that improves our capacity to comprehend a phenomenon more fully. Similarly, De Los Reyes et al. (2013) offered the Operations Triad Model (OTM) for identifying circumstances under which informant discrepancies yield meaningful information about behavior; in particular, the model suggests informant discrepancies are useful when a behavior expresses itself differently across a factor of interest (e.g., the severity of a behavior varies across settings), and the discrepant reports meaningfully reflect this variation. Integrating and building on the conceptual foundations of Kraemer et al. (2003) and the OTM (De Los Reyes et al., 2013), Makol et al. (2020) used principal components analysis to derive “trait scores” (components that reflect concerns across contexts and perspectives) from multiple informants’ assessments of social anxiety, and found that trait scores explained more variance in observed adolescent social anxiety, and better predicted adolescent referral status, than individual informants’ reports or a mean-derived composite of informants’ reports.
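The trait-score idea can be illustrated with a small sketch: standardize each informant's ratings, then weight informants by their shared variance via the first principal component. All data, variable names, and the three-informant setup below are hypothetical illustrations, not materials from Makol et al. (2020) or the present study.

```python
# Minimal sketch (hypothetical data) of a cross-informant "trait score":
# the first principal component of standardized informant ratings.
import numpy as np

# Rows = clients; columns = informants (youth, caregiver, therapist),
# e.g., ratings of the same construct on a common scale (hypothetical).
ratings = np.array([
    [3.0, 4.0, 3.5],
    [1.0, 2.0, 1.5],
    [4.0, 5.0, 4.0],
    [2.0, 2.5, 2.0],
])

# Standardize each informant's column (z-scores).
z = (ratings - ratings.mean(axis=0)) / ratings.std(axis=0)

# First principal component of the informant correlation matrix.
corr = np.corrcoef(z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)   # eigenvalues in ascending order
loadings = eigvecs[:, -1]                 # component with largest eigenvalue
if loadings.sum() < 0:                    # resolve the arbitrary sign
    loadings = -loadings

trait_scores = z @ loadings               # one composite score per client
```

Unlike a simple mean composite, which weights all informants equally, the component weights informants by how much they covary with the others, which is one way to read the incremental validity findings described above.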

Several studies offer direct evidence of discrepant perspectives on treatment engagement. In a study examining concordance between therapist and client understanding of why clients terminated treatment, Hunsley and colleagues (1999) found that therapists underestimated how frequently clients’ reasons for termination involved dissatisfaction with therapy or therapist. In an examination of perceptions of barriers to youth mental health treatment (Champine et al., 2019), providers identified culturally competent care as a way to improve access and quality of treatment, yet cultural competence was not a concern raised by caregivers; however, caregivers identified barriers not noted by providers, including discomfort with in-home services, conflicting information from schools and providers regarding treatment, and inflexibility of services. As both De Los Reyes et al. (2013) and Kraemer and colleagues (2003) emphasized, discrepancies such as these highlight opportunities for refined lines of inquiry and enhanced decision-making by capturing complex constructs holistically. If we conceptualize treatment engagement as multidimensional, transactional, and inclusive of both concrete and subjective components, youth, caregiver, and provider perspectives all likely offer unique and valuable information to understand and address the array of engagement challenges that may arise over treatment.

Given that engagement is understood to be dynamic (i.e., it changes over time), when we measure engagement may similarly yield different information and serve different purposes (i.e., why). For example, Nock and Ferriter (2005) distinguished between preparatory enhancement and continuous enhancement strategies for engagement; assessment at the beginning of treatment would indicate the effectiveness of any preparatory enhancement strategies implemented, but may not predict engagement at subsequent timepoints. Whenever it occurs, assessing engagement on one occasion provides a snapshot, and assessing engagement shortly after implementing an enhancement strategy provides information about the short-term effectiveness of that strategy; however, it may not be appropriate to extrapolate from this single measurement.

Illustratively, Langer et al. (2017) found that early in treatment, observers rated youth-therapist alliance to be higher for youth receiving manualized treatment than youth receiving non-manualized care; however, observer ratings for the two groups converged over time such that no differences between groups were observed in middle or late treatment. The authors speculated that early differences in alliance could suggest that properties of manualized treatment approaches (e.g., clear treatment structure and goals) might contribute to early alliance-building, whereas diminished differences subsequently could reflect increasing familiarity and comfort with session structure and therapist qualities for youth across conditions, or diminished enthusiasm over time for a manualized approach. These hypotheses highlight how measurement at different timepoints can be leveraged to understand different engagement phenomena. Relatedly, repeated measurement can facilitate the detection of emerging problems, and can be integrated as a dynamic piece of information to consider and respond to over the course of treatment. An engagement concern could be identified through measurement, then measured again over time to evaluate change, and whether changes are linked temporally with intervention components including those designed specifically to address the identified engagement problem and those with unintended consequences for engagement.

Current Study

Our goal was to gain insight about opportunities for advancing instrumentation around treatment engagement. We aimed to characterize the measurement of engagement within the context of intervention studies focused on improving children’s and/or families’ engagement in mental health services. We chose to focus on randomized controlled trials (RCTs) that explicitly targeted treatment engagement to understand how the most advanced children’s mental health research on the topic of engagement approached conceptualization and instrumentation; further, because participant flow is included in the Consolidated Standards of Reporting Trials, we suspected that a broader examination of engagement within children’s mental health interventions could result in a spurious inflation of studies that report treatment attrition and/or completion (i.e., attendance) as their only measures of engagement.

Specifically, this study examined the following questions: (1) What are the measurement patterns of engagement domains? (2) How have those domains been measured? (3) Whose perspectives and experiences regarding treatment engagement have been assessed? (4) When and for what purposes has treatment engagement been measured? As previous work has demonstrated that attendance is the most commonly measured engagement outcome (Lindsey et al., 2014), we hypothesized that attendance would be assessed with greater frequency than other engagement domains. Because attendance is often assessed via clinic records, we hypothesized that clinic records would serve as the most frequent information source. We hypothesized that most studies would measure engagement on a single occasion, as an intervention outcome.

Method

Selection Criteria and Sample

Forty-eight of the 52 studies in this review were initially compiled for a systematic review of engagement research undertaken to identify practice elements associated with improvements across specific domains of engagement (Becker et al., 2018). The literature search and selection process are detailed in Becker et al. (2018). In brief, the systematic review included RCTs that tested psychosocial interventions intended to improve youth or family engagement in children’s mental health services and reported outcomes for at least one measure of engagement, with a final sample of 48 articles published between 1974 and 2016.

We then conducted an independent literature search to identify additional articles published between January 2016 and December 2020. We searched PsycINFO, SocINDEX, and PubMed using the following terms: engagement OR retention OR attrition, plus an exploded “mental health services” term; we further specified “Randomized Controlled Trial” under “Article Type” in PubMed. We reviewed 563 records from PsycINFO, 181 from SocINDEX, and 380 from PubMed. We also conducted a PubMed search with the same search terms, but specifying “Review” and “Systematic Review” for “Article Type.” Of the 209 articles returned, we reviewed abstracts for 33 potentially relevant articles, then closely examined the references of five (Forman-Hoffman et al., 2017; Godoy et al., 2019; Moore, 2018; Werlen et al., 2019; Yasui et al., 2017). We examined references in four additional systematic reviews identified through PsycINFO and SocINDEX (Georgeson et al., 2020; Greef et al., 2017; Pereira & Barros, 2019; Petts & Shahidullah, 2020). Through this process we identified four additional articles appropriate for inclusion. Our final sample included 52 RCTs published between 1974 and 2019.

Across the 52 RCTs, data were included for a total of 6,340 participants between the ages of 0 and 21 years (M = 10.51, SD = 1.92; reported in 53.8% of studies). Studies tested engagement interventions in a variety of settings (or multiple settings), including clinics (67.3%), homes (42.3%), hospitals (5.7%), and the community (3.8%).1

Coding Procedures and Reliability

Across studies, we identified 112 distinct measures of engagement, and coded them using eight codes: availability, psychometric indicator, information source, target, administration count, timing, purpose, and informed treatment (see Table 1 for code definitions and levels). Each study was analyzed independently by two coders. Coders met regularly to review code applications, focusing on clarifying coding nuances, preventing coder drift, ensuring overall consistency of code application, and resolving discrepancies through consensus (Palinkas, 2014), with the first author serving as auditor (Hill et al., 2005). Interrater reliability (see Table 1) was calculated prior to discrepancy resolution; kappas were all above published standards (i.e., at least 0.40; Fleiss, 1981).
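As a concrete illustration of the reliability check, Cohen's kappa for two coders applying one categorical code can be computed as below. The coder labels and ratings are hypothetical, not this study's actual coding data.

```python
# Minimal sketch of Cohen's kappa for two independent coders.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' categorical codes."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of exact agreements.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical "Timing" codes applied by two coders to six measures.
coder_1 = ["pre", "post", "repeat", "post", "pre", "post"]
coder_2 = ["pre", "post", "repeat", "pre", "pre", "post"]
print(round(cohens_kappa(coder_1, coder_2), 2))  # -> 0.74
```

Values of at least 0.40 meet the published standard cited above (Fleiss, 1981), and the kappas in Table 1 are all computed on ratings collected before consensus discussion, as the text notes.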

Table 1.

Study Codes with Definitions, Levels, and Inter-Rater Reliability

Research Question | Code | Code Definition | Code Levels | Kappa
2 | Availability | Availability of measure for use beyond the original research study setting | Available in the public domain with no associated cost; Available with associated cost; Unavailable | .84
2 | Psychometric Indicator | Was at least one psychometric indicator reported for the measure? | Yes; No | .98
3 | Information Source | Source providing information / completing the measure | Service Records/Setting; Youth; Caregiver; Provider; Research Staff/Observer | .89
3 | Target | Individual about whom the measure was completed | Youth; Caregiver; Family; Provider | .76
4 | Administration Count | Number of times measure was completed over course of study | Count | 1.00
4 | Timing | The timing of measurement within the course of treatment | Before treatment; During/after 1st treatment session; Repeatedly (measured multiple times over course of treatment); Pre/post (measured before treatment and after conclusion of treatment); After conclusion of treatment; Other | .78
4 | Purpose | The purpose for which the measure was collected in the study | Initial Evaluation (completed prior to engagement intervention); Progress Monitoring (to evaluate change in an engagement domain over the course of an intervention); Outcome Measure (dependent variable to determine intervention efficacy); Other | .57
4 | Informed Treatment | Were measure data used in treatment planning, intervention selection, intervention adaptation, or another component of treatment/intervention delivery? | Yes; No | 1.00

Note. “Engagement Domain” was coded for 48 studies in Becker et al. (2018). For the additional four studies coded in the current study, the kappa value for engagement domain was 1.00.

A ninth code, engagement domain, was reliably coded for the previously completed systematic review (Becker et al., 2018) and referred to a multidimensional measurement framework developed by Becker and Chorpita (2016) with five domains of treatment engagement organized around the acronym REACH: relationship (e.g., therapeutic alliance); expectancy (e.g., beliefs and/or expectations about how helpful treatment will be); attendance (e.g., barriers to attending or being on time for treatment sessions); clarity (e.g., shared understanding of treatment goals and approach); and homework (e.g., active participation and completion of work assigned as part of therapy by youth and family, both in and between sessions). Engagement domain was coded along with the eight codes described above for the four articles published after 2016.

Analytic Plan

Analyses were conducted using SPSS statistical software (v. 26). We addressed our four research questions as follows: (1) to assess the measurement patterns of engagement domains, we examined the frequency with which each of the five REACH domains was measured in the sample of studies, the frequency and modal number of domains measured across studies, and conditional probabilities for pairs of domains assessed across studies; (2) to assess how engagement domains have been measured, we examined the number and breadth of different types of measures within each domain, as well as features of the measures including availability, cost, and reported psychometric properties; (3) to assess whose perspectives and experiences regarding treatment engagement have been assessed, we examined the frequencies of information sources and measure targets used in studies, the mean number of information sources per study, the proportion of information sources and targets associated with each domain, and the co-occurrence of information sources, and of targets, with one another; and (4) to assess the timing of measure administration and the purposes for which treatment engagement has been measured, we examined the frequency of measure administration, the modal number of times measures were completed, the timing of measurement within the course of treatment, frequencies for the purpose of data collection overall and within REACH domains (i.e., whether a measure was collected for the purpose of initial evaluation or as an outcome of interest), and whether the data collected were used to inform the delivery of the intervention.

Results

What are the measurement patterns of engagement domains?

The most frequently assessed engagement domain across studies was attendance (94.2% of studies), followed by homework (21.2%), expectancy (13.5%), clarity (11.5%), and relationship (11.5%). The modal number of REACH domains measured within a study was one: 61.5% of studies measured one domain, 26.9% of studies measured two domains, 9.6% of studies measured three domains, and 1.9% of studies measured four.

Table 2 presents the conditional probabilities of each other REACH domain being measured in a study when a given domain was measured. Fifty-nine percent of studies measured only attendance to assess engagement. When attendance was measured (i.e., the given domain in Table 2), the probability of another domain being measured ranged from 8.2% for relationship or clarity to 22.4% for homework. Attendance had a high probability of being measured (i.e., the conditional domain) when each other engagement domain was measured (i.e., when another domain was the given domain): relationship (66.7%), expectancy (71.4%), clarity (66.7%), and homework (100.0%). Homework was not measured concurrently with clarity or relationship in any studies.

Table 2.

Conditional Probabilities for Co-occurrence of REACH Domains within Studies

Given Domain | Relationship | Expectancy | Attendance | Clarity | Homework
Relationship | - | 66.7 | 66.7 | 50.0 | 0.0
Expectancy | 57.1 | - | 71.4 | 28.6 | 28.6
Attendance | 8.2 | 10.2 | - | 8.2 | 22.4
Clarity | 50.0 | 33.3 | 66.7 | - | 0.0
Homework | 0.0 | 18.2 | 100.0 | 0.0 | -

Note. This table presents the conditional probability of other REACH domains being measured in studies when a given domain is measured. Numerals are percentages.
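Each entry in Table 2 is the percentage of studies measuring the conditional domain among those measuring the given domain. A minimal sketch of this computation, using hypothetical study-level coding rather than the study's actual data:

```python
# Minimal sketch of the conditional probabilities behind Table 2:
# P(conditional domain measured | given domain measured), as a percentage.

# Each set lists the REACH domains measured in one study (hypothetical).
studies = [
    {"Attendance"},
    {"Attendance", "Homework"},
    {"Attendance"},
    {"Relationship", "Attendance"},
    {"Expectancy", "Attendance"},
]

def conditional_prob(given, conditional, studies):
    """Percent of studies measuring `conditional` among those measuring `given`."""
    with_given = [s for s in studies if given in s]
    if not with_given:
        return None
    return 100.0 * sum(conditional in s for s in with_given) / len(with_given)

print(conditional_prob("Attendance", "Homework", studies))   # -> 20.0
print(conditional_prob("Homework", "Attendance", studies))   # -> 100.0
```

Note the asymmetry in the toy output, which mirrors the paper's finding: attendance is almost always measured when another domain is, but not vice versa.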

How have engagement domains been measured?

Indicators used to assess engagement across studies are presented in Table 3; aggregated across studies and domains, there were 112 unique indicators of engagement in this sample. Attendance was assessed using the greatest number of indicators, with 69 unique indicators across studies (61.6% of indicators). Total appointments attended was used in 22 studies; attendance at first session was used in 14 studies; and the proportion of youth/families that completed treatment (which varied according to how completion was defined) was used in 10 studies. Fifty-four indicators of attendance were used in one study each. Eighteen unique indicators were used to assess homework (16.1% of indicators), 13 indicators for expectancy (11.6%), nine for relationship (8.0%), and six for clarity (5.4%).2

Table 3.

Indicators and Measures Used to Assess Engagement by REACH Domain, with Frequency of Use Across Studies

Domain Measures
Relationship 1 study: Adaptation to Therapy (AT), total score; AT, Relationship with Therapist subscale; AT, Child Communication subscale; Attraction-Receptivity Questionnaire*; Client Evaluation of Motivational Interviewing Scale (CEMI)*, Client Perceived Quality of Motivational Interview subscalea; Treatment Beliefs Questionnaire (TBQ) a*; Cross-Cultural Counseling Inventory-Revised*; Tolan Adapted Process Measure; Working Alliance Inventory-Short Form*.
Expectancy 2 studies: Parent Motivation Inventory (PMI)* total score. 1 study: Brief Development Survey*; CEMI*, Client Perceived Quality of Motivational Interview subscalea; Expectations of Therapy Outcome Scale*; Mental Health Services Grid Interview, Perceived Need for Treatment; Parenting Self-Agency Measure*; PMI, Readiness to Change subscale*; PMI Desire for Child Change subscale*; PMI, Perceived Ability to Change subscale*; Readiness to Change (% rating)*; Readiness Ruler*; TBQa*; Understanding Mood Disorders Questionnaire.
Attendance 22 studies: Total appointments attended. 14 studies: Attended first appointment. 10 studies: Completed treatment. 8 studies: Dropped out of treatment. 7 studies: Attended at least one session. 3 studies: Attended all scheduled appointments; Attended second appointment; No-showed to first appointment. 2 studies: Appointments attended on time; Attended intake and one therapy session; Attended first appointment or called to reschedule; Cancelled appointments (% of sample); Cancelled appointments (raw number); Families attended at least 4 sessions. 1 study: # clinical telephone support sessions; # days between appointments; # days between ER discharge and treatment initiation; # days between referral and initial screening appointment; # late arrivals to appointments; # online modules completed; # sessions within six months of ER discharge; # sessions within two months of ER discharge; Attended second visit to psychiatry or addiction program within 30 days of the first; Attended one session within six months of ER discharge; Attended one session within two months of ER discharge; Attended subsequent appointments; Attended third appointment; Barriers to Services Questionnaire (BSQ)*, # families who experienced at least one family barrier; BSQ*, # families who experienced at least one service barrier; BSQ*, # family barriers; BSQ*, # service barriers; Canceled first appointment; Changed or cancelled first appointment; Cases who used any medical services; Cases who used any mental health service; Completed all eight online modules; Completed first online module; Completed seven sessions; Duration of clinical telephone support sessions; Endorsed “other” as barrier to treatment; Enrolled in treatment; Family attended 4+ sessions; Family available for scheduled call times; Families who never attended one outpatient appointment; General Continuing Care Adherence scale*; Hours of intervention received; Mean # of minutes late to appointments; Mean # of medical visits; 
Mean # of mental health visits; Mental Health Services Grid Interview, # services used by family; No-showed for appointment (% of sample); No-showed for appointment (raw number); Outpatient appointments attended; Refused services; Service Assessment for Children and Adolescents (SACA), attended one session within six months of ER discharge; SACA, attended one session within two months of ER discharge; Services for Children and Adolescents - Parent Interview (SCAPI)*, cases who visited emergency room; SCAPI*, cases who had family therapy; SCAPI*, cases who had other therapy; SCAPI*, cases with overnight stay; SCAPI*, cases who visited psychiatrist; SCAPI*, cases who visited teen clinic; SCAPI*, mean # emergency room visits; SCAPI*, mean # family therapy visits; SCAPI*, mean # other therapy visits; SCAPI*, mean # visits to psychiatrist; SCAPI*, mean # visits to teen clinic; Overall attrition (% individuals who did not complete all eight online modules); Pretreatment attrition (% individuals who did not complete any of the eight online modules); Time spent completing online modules
Clarity 1 study: Therapy Expectations Questionnaire; Therapy Knowledge Survey; Therapy Survey*, Client Expectations of Therapy Structure subscale; TBQa*; Understanding of Therapy Questionnaire; Understanding of Therapy Test.
Homework 5 studies: Homework assignments completed. 1 study: Adherence Questionnaire (AQ), Quality of Parent Adherence in Past week subscale; AQ, Quantity of Parent Adherence subscale; Caregivers who provided name and address to staff; Caregivers who returned program registration; Dyadic Parent-Child Interaction Skills Coding System (DPICS)*, Clean Up Do Skills; DPICS*, Clean Up Don’t Skills; DPICS*, Negative Parenting; DPICS*, Positive Parenting; DPICS*, Play Do Skills; DPICS*, Play Don’t Skills; Days caregiver cooperates with treatment assignments; Parent Participation Engagement in Child Psychotherapy Observational Coding System; Parenting Practices Scale*; Therapist rating of Cooperation; Therapist rating of extent to which client participated; Therapist rating of participation quality*; Therapist-report measure of parent engagement*.
a These measures assessed multiple engagement domains.

* Indicates that a measure is free and available in the public domain.

Most measures (72.2%) were available in the public domain with no associated cost; the remaining measures were not available for use (i.e., could not be accessed in the public domain). Most scale or survey measures (77.8%) were reported with at least one psychometric indicator (e.g., intraclass correlation coefficient, Cronbach’s alpha, Cohen’s kappa). Availability of measures in the public domain varied across REACH domains: attendance had the greatest proportion of measures available (93.8%), followed by expectancy (84.6%), homework (69.2%), relationship (55.6%), and clarity (33.3%). Among measures for which psychometrics are appropriate and expected (e.g., scales, coding systems), all measures used to assess attendance had at least one reported psychometric indicator (100%), followed by expectancy (83.3%), homework (83.3%), clarity (50.0%), and relationship (44.4%).

Whose perspectives and experiences regarding treatment engagement have been assessed?

Across studies, five sources provided information regarding engagement. Aggregated across domains, the most frequent information source was clinic records (86.5% of studies). Caregivers (26.9%), providers (23.1%), youth (19.2%), and research staff/observers (7.7%) also served as information sources. The mean number of information sources per study was 1.63 (SD = 0.82), and the modal number of information sources was one. Most studies relied on one information source (55.8%), followed by two sources (26.9%), three sources (15.4%), and four sources (1.9%). Assessments were completed regarding four targets (i.e., the person about whom assessments were completed). The most frequent target was the family (51.9% of studies), followed by caregiver (40.4%), youth (36.5%), and provider (5.8%). The mean number of targets per study was 1.35 (SD = 0.59), and the modal number of targets was one. Most studies (71.2%) examined one target, followed by two targets (23.1%) and three targets (5.8%). Table 4 presents the proportion of information sources and measure targets associated with each domain. Caregivers provided information related to all five domains, while youth provided information related to all domains except homework. Caregivers represented the primary information source for measures of relationship (66.7%) and expectancy (76.9%), and caregivers and youth were primary information sources for measures of clarity (each represented in 66.7% of measures). Most measures (59.4%) used to assess attendance relied on information from clinic records, and measures used to assess homework were most frequently completed by the provider (44.4%). The family unit was the target of measurement for 39.1% of measures of attendance but was not a target for other domains. Youth were measure targets for all domains except homework. Caregivers were targets for measures assessing all five engagement domains; they were the only identified targets of measures assessing homework, and the predominant target for measures assessing expectancy (61.5%) and clarity (66.7%). Providers were the measure target for 44.4% of measures assessing relationship and 7.7% of measures assessing expectancy.

Table 4.

Proportion of Measures Assessing REACH Domains by Information Source and Target

Domain          Information Source
                Clinic   Youth   Caregiver   Provider   Research Staff/Observer
Relationship      0.0     44.4      66.7       22.2       0.0
Expectancy        0.0     30.8      76.9        7.7       0.0
Attendance       59.4     29.0      17.4        7.2       8.7
Clarity           0.0     66.7      66.7        0.0       0.0
Homework         16.7      0.0      16.7       44.4      38.9

Domain          Target
                Family   Youth   Caregiver   Provider
Relationship      0.0     44.4      22.2       44.4
Expectancy        0.0     30.8      61.5        7.7
Attendance       39.1     52.2      31.9        0.0
Clarity           0.0     66.7      66.7        0.0
Homework          0.0      0.0     100.0        0.0

Note. This table presents the proportion of measures assessing REACH domains by respondent and target (e.g., 30.8% of measures assessing expectancy used information from the youth). Numerals are percentages. Row percentages may exceed 100%, as some measures assessed multiple domains.

Table 5 presents the percentage and number of studies in which information sources and targets co-occurred. The highest degree of co-occurrence for information sources was between clinic and caregiver, with 21.2% of studies collecting information from both sources. Five studies (9.6%) collected information from the youth and caregiver concurrently. The greatest co-occurrence for measure targets was caregiver and family (17.3%). Caregiver and youth were both targets in 11.5% of studies.

Table 5.

Co-occurrence of Information Sources and Targets in Studies

Information Sources       Clinic   Youth   Caregiver   Provider   Research Staff/Observer
Clinic                      __      11.5     21.2        15.4       5.8
Youth                       __       __       9.6         5.8       0.0
Caregiver                   __       __       __          9.6       1.9
Provider                    __       __       __          __        3.8
Research Staff/Observer     __       __       __          __        __

Targets        Family   Youth   Caregiver   Provider
Family           __      1.9      17.3        1.9
Youth            __      __       11.5        3.8
Caregiver        __      __        __         3.8
Provider         __      __        __         __

Note. This table presents the percentage and number of studies in which information sources and targets co-occurred (e.g., 11.5% of studies used clinic and youth as information sources). Numerals are percentages.

When and for what purposes has treatment engagement been measured?

Within a study, the number of times an engagement measure was administered ranged from one to twelve; the modal number of times was one (54.7%). Measures were completed twice in 25.0% of studies and three or more times in 20.3% of studies. Most measures were completed after termination of treatment (53.7%), followed by completion in the first session (15.3%), repeated completion across the treatment episode (10.8%), completion before and after treatment (i.e., pre-post; 8.4%), completion prior to treatment initiation (8.4%), and completion at other defined time points (e.g., once after second or third session; 3.4%).

Most measures (54.5%) assessing relationship were completed multiple times over the course of treatment. Expectancy was assessed prior to treatment initiation in 35.7% of studies and before and after treatment in 35.7% of studies. Most measures (72.6%) used to assess attendance were compiled following treatment termination; most measures (66.7%) of clarity were completed before treatment initiation; and homework was most frequently assessed repeatedly across the treatment episode (46.2%). Most indicators (90.6%, n = 184) were used as outcome measures to assess the efficacy of interventions. Fewer measures were used for progress monitoring (7.9%, n = 16) or initial evaluation of engagement (1.5%, n = 3). No measures were used to inform treatment.

Discussion

In this study, we examined how treatment engagement was measured in 52 RCTs of interventions targeting engagement in children’s mental health services. Using the REACH framework, a multidimensional conceptual framework for treatment engagement used in previous studies (Becker et al., 2018; Becker et al., 2019), we scrutinized (1) what the measurement patterns were across engagement domains; (2) how those domains have been measured; (3) whose perspectives and experiences of treatment engagement have been assessed; and (4) when and for what purpose treatment engagement has been measured. Our intention was to examine how the field’s instrumentation related to engagement shapes what we know about effective interventions, and to identify opportunities for advancing science and for assessing and addressing engagement problems in practice.

In examining what the measurement patterns are across engagement domains, we found that attendance was measured in 94.2% of studies – far more frequently than other domains, but consistent with previous reviews (Lindsey et al., 2014; Becker et al., 2015). Most studies (61.5%) examined a single engagement domain, and 26.9% examined two domains, mostly attendance plus one other domain. Examining attendance may be a widely adopted strategy because it is cost-effective and efficient. Absence at first appointments and treatment attrition might also be conceptualized as adverse events; much like monitoring hospitalization or suicide in intervention studies, measuring these attendance outcomes is critical because their negative downstream impact on treatment is close to absolute. However, assessing attendance without assessing other domains may result in missed opportunities to understand the nature of the engagement problem, as inconsistent or flagging attendance may be a superficial manifestation of a problem rooted in cognitive or social barriers, or may appear after the optimal window for intervention (Kazdin & Wassell, 1999; Spirito et al., 2002). In other words, attendance is not synonymous with engagement; no single indicator or domain is. Instead, engagement is likely a multidimensional construct. Examining just a single domain limits our understanding of engagement as well as of the relations between domains.

In our examination of how engagement has been measured, we found considerable heterogeneity in the operationalization and assessment of engagement in each domain, from six methods for measuring clarity to 69 methods for measuring attendance. We found that most scales, questions, or behavioral observation coding systems are free and publicly available (72.2%) and have at least one reported psychometric indicator (77.8%), indicating that accessible measures exist for assessing engagement, with information that consumers can use to judge whether a measure provides satisfactory reliability and/or validity.

Examining who supplied information regarding engagement, we found that 45 studies used information generated by the clinic, largely in keeping with the focus on attendance. Youth provided information in 10 studies. Although it seems redundant to state that a client’s own experiences and perspectives are integral to their engagement, this truism is not reflected in the instrumentation. For services targeting younger children, it may be that caregivers are the primary participants in treatment. Further, regardless of client age, caregiver involvement in children’s treatment contributes substantially to youth outcomes (Dowell & Ogles, 2010), and caregiver participation is an important component of engagement given that caregivers often decide whether to initiate and/or remain in treatment. Yet caregivers provided information about engagement in only 13 studies. Caregiver and youth perspectives about treatment also often diverge (Garland et al., 2000), and it may therefore be useful to gather information from both; yet caregiver and youth were both reporters in only 9.6% of studies. “Family” was identified as the target of assessment in 51.9% of studies, but exclusively regarding attendance. Several studies examined “family” attendance without articulating whether and how youth and/or caregivers were meant to participate in treatment, obscuring the potential contributions of specific actors’ participation in the treatment process, as well as the distinct treatment challenges and implications for treatment outcomes that may be related to engagement problems with youth versus caregivers. Further, only 23.1% of studies included providers as an information source, despite evidence suggesting that therapists’ perspectives – for example, their view of the therapeutic alliance (Bachelor, 2013) and expectancies regarding client improvement (Meyer et al., 2002) – are associated with outcomes. Lastly, the modal reliance on a single information source suggests missed opportunities to generate nuanced and holistic understandings of the engagement processes and concerns addressed in these studies, given the value of strategically integrating multiple data sources to assess a construct of interest (De Los Reyes et al., 2013; Kraemer et al., 2003; Makol et al., 2020).

Examining when engagement was measured, we found that roughly half of the studies in our sample (54.7%) measured engagement on one occasion; another 25.0% of studies measured engagement twice. Most studies assessed engagement following the conclusion of treatment, consistent with this sample’s reliance on attendance outcomes. Other domains were associated with different timing patterns; for example, expectancy and clarity were measured most frequently before treatment initiation, in keeping with the emphasis of the interventions in those studies on improving expectancy and clarity at the beginning of treatment through preparatory enhancement strategies (Nock & Ferriter, 2005). Relationship and homework were most often assessed repeatedly, perhaps reflecting in the first case a more established conceptualization that the therapeutic relationship changes over time, and in the second case the relative ease of tracking concrete behavioral indicators such as homework completion (Lindsey et al., 2014).

Our examination of why engagement was measured demonstrated that in 90.6% of studies measures were used to assess the outcome of an intervention, with fewer studies assessing initial levels of engagement or monitoring changes over time. Outcomes-focused studies have clarified a number of specific strategies that can improve engagement – particularly attendance at the first treatment session (e.g., Breland-Noble, 2012; McKay et al., 1998) and across the course of treatment (e.g., Kutash et al., 2011) – for groups at heightened risk for engagement problems. The field has less evidence to specify how to understand which individuals demonstrate varying degrees and types of engagement problems across the course of treatment. Research to date has also not examined how engagement can be measured to inform subsequent intervention with a specific individual. This is notable given that the ultimate outcome of interest in treatment is symptom reduction or improved functioning, and engagement is a proximal, enabling factor.

Implications

These findings highlight opportunities for advancing the science and practice of treatment engagement by designing measurement that more explicitly targets questions of what, how, who, when, and why. An approach to the measurement of treatment engagement that captures more of its multidimensional, interactional, and dynamic nature could more accurately reflect the current consensus and extend our understanding of what comprises treatment engagement. A multidimensional approach to measurement may be particularly important because of evidence suggesting that while some empirically supported engagement interventions (e.g., psychoeducation) are associated with improvements across all engagement domains, others are associated with improved engagement in just one domain (e.g., appointment reminders, which improve attendance), and most are associated with improvements in two to three domains (Becker et al., 2018). That certain practices may be better suited to improving engagement in certain domains highlights the importance not just of assessing engagement problems broadly but of identifying domain-specific problems.

Characteristics of future potential measurement tools are also important (i.e., how engagement is measured). Consistent with the literature on pragmatic measurement (Glasgow & Riley, 2013), the measurement tool must be perceived by users as both useful and easy to use. Glasgow and Riley articulate a number of additional required and recommended criteria for pragmatic measures: a measure must be 1) important to stakeholders; 2) low burden for both respondents and staff; 3) actionable; and 4) sensitive to change (p. 238). Research to develop new engagement measurement tools that meet these criteria can facilitate translation into practice. Further, the relative dearth to date of studies utilizing consumer and provider perspectives, and drawing on multiple perspectives, suggests new directions for future research related to who provides data. This could include multi-informant measurement drawing on perspectives of those “closest to the action,” with assessment strategies designed to leverage discrepancies as a means to more fully understand the relevant constructs (De Los Reyes et al., 2013; Kraemer et al., 2003). Designing studies that prompt consumers (youth and caregivers) and providers to represent their experiences with service utilization and barriers would advance research by identifying who specifically might demonstrate an engagement concern; it could also illuminate transactional and interpersonal elements of engagement undetectable by examining one perspective.

Because treatment engagement is a dynamic, interactional, process-oriented construct, our field may benefit from research that approaches when to measure as “early and often.” If a concern related to treatment engagement is identified and an intervention implemented to address it, repeated assessment would facilitate evaluation of whether the problem improved following intervention, or whether a different approach might be needed. Repeated measurement can also contribute to the evidence around when different engagement domains may be activated or challenged in treatment. Given that changes in engagement over time are to be expected across domains, the interpretation of repeated assessment results may vary.

There may also be added value if studies expand why engagement is being measured. For example, accurate detection of engagement problems via multidimensional, multi-informant approaches can be used as an information-gathering step within an action cycle (Deming, 1993), a goal-directed process in which actors make deliberate and informed plans, implement those plans, evaluate implementation outcomes, and make adaptations based on results that are themselves then implemented, evaluated, and so on. Broadly, instrumentation may be conceptualized and applied most usefully in the context of a coordinated decision-making framework, in which a set of organizing principles defines and guides how various components of service provision – treatments, clients, providers, etc. – function together in a system (Chorpita & Daleiden, 2014). Such a framework links the questions we ask in this paper. Future research on engagement instrumentation may be most useful if it is not considered separately from the system in which it is utilized; instrumentation questions of what, how, who, when, and why will be most valuable to consider in relation to one another, as part of a larger whole.

The current study examined resource-intensive intervention trials specifically focused on enhancing engagement; thus, we would expect that our results represent the “upper bound” in conceptualization and instrumentation related to measuring engagement. Although much thoughtful and rigorous research has been done to develop interventions to enhance engagement in treatment, our results highlight promising new directions for instrumentation and assessment approaches. We can design multidimensional measures to better understand when in treatment problems arise and their course over time. We can also design studies to examine the effects of engagement interventions on targeted dimensions and whether there is generalization across dimensions – for example, if an intervention targets relationship, do client expectancies improve as well? We can also seek to reduce the assessment burden by conducting item-response analyses and identifying specific items/indicators with predictive validity for downstream engagement outcomes (e.g., attendance or attrition).

The import of this work also extends to the broader field of interventions research. Despite the significant threat to high-fidelity treatment delivery posed by low engagement, interventions research in general tends to omit meaningful measures of engagement. Illustratively, an examination of Behavioral Parent Training studies (Chacko et al., 2016) found that very few studies assessed within- or between-session engagement; most studies measured engagement only via attrition. This is a clinical context characterized by high attrition – 51% of individuals did not complete treatment across Behavioral Parent Training studies – but measuring engagement in relation to treatment outcome occurred infrequently. Measuring engagement in interventions research multidimensionally, repeatedly, across multiple informants, and within a dynamic decision-making framework can illuminate opportunities to improve engagement and thereby enhance treatment delivery and improve outcomes. Including multiple measures of engagement in interventions research can also leverage and contribute to the science of engagement, helping us strengthen conceptual models of how different engagement domains relate to one another.

In practice, we see even more striking patterns of attrition than in treatment outcomes research. As just one example, in a large study using a nationally representative sample, approximately 40% of youth enrolled in services attended only a single mental health visit, and only one third participated in a “minimally adequate” number of sessions (Saloner et al., 2014). Yet the small body of services research focused on engagement suggests that providers rarely measure engagement at all.

In a study in which providers described their experiences detecting clients’ engagement problems, they overwhelmingly reported relying on their own observation, and none used a formal measure of engagement (Becker et al., 2021); this is not ideal, given the limitations of relying solely on clinical judgment (Meehl, 1954/1996) and the documented challenges therapists have with accurately detecting engagement concerns (Hunsley et al., 1999). However, these results are also not surprising, given that the field has not yet provided a feasible (i.e., low burden) measurement approach for providers to assess multiple dimensions of engagement. There are, however, still opportunities to enhance practice. First, recognizing that engagement is multidimensional, providers can consider how to measure the subtler signs of social and/or cognitive engagement rather than focusing on attendance as the primary indicator. It may not be necessary, and certainly may not be feasible, to utilize a burdensome array of measures reflecting different dimensions, but providers could, for example, administer an established alliance measure at multiple timepoints, and supplement with questions to assess other domains. In the absence of any formal measures, providers could ask questions that elicit client perspectives related to different engagement dimensions – for example, a provider could assess expectancy by asking a client to give a subjective scale rating in response to the question, “How optimistic are you that in the end, therapy will help make things better?” Importantly, providers could connect the results of these assessments to possible solutions for intervention when problems are detected. In short, providers may benefit from a measurement approach that is low burden, detects multiple types of engagement problems, can be administered early and often, and that informs decision-making.

Limitations

This study provides a descriptive characterization of the measurement of treatment engagement in intervention research targeting engagement in children’s mental health services. Because we reviewed interventions aggregated over decades, this study does not characterize the current state of the field, and may not reflect important advances in measurement. Yet certain findings hold true across the sample, and therefore highlight new directions for future research; for example, no studies used measurement to inform intervention. Using the REACH framework could also be interpreted as privileging one perspective on the varied dimensions of treatment engagement. However, the framework is grounded in prior research (see Becker et al., 2018). Consensus in the field also holds that engagement is multidimensional, and all measures were coded as belonging to one domain or another; thus, this study still highlights that intervention research on engagement in children’s mental health has not utilized a multidimensional operationalization. Lastly, this study is limited to providing observed values; we cannot suggest expected values or benchmarks until other research addresses certain key questions: whether a hierarchy or causal structure links engagement domains; whether all engagement domains should receive equal attention; optimal schedules for assessing various domains; and optimal respondents or information sources for various domains. As the field progresses to examine these questions, we can synthesize what may be most important to measure, how and when to measure it, who is best suited to provide the information, and the best uses for that measurement to address engagement-related challenges.

Conclusion

This study examined how the measurement of treatment engagement has shaped our understanding of the construct and of effective interventions to improve it. Results highlight that the field’s instrumentation has supported research examining universally applied interventions that improve engagement in one or two domains, with a preponderance of studies examining attendance. Recent scholarship conceptualizing engagement as multidimensional, interpersonal, and dynamic clarifies new opportunities to build on current intervention research to expand our understanding of this crucial and complex phenomenon. There may be particular promise in research that measures engagement to detect and address potential engagement concerns, as that may create opportunities to leverage research evidence in practice settings. Developing a research base that utilizes multidimensional, multi-informant, repeated assessment of treatment engagement can help us identify 1) valid, reliable, and sensitive indicators of different types of engagement problems; 2) optimal timing for detecting different engagement problems and variations in different dimensions of engagement over time; 3) the associations among various facets of engagement; 4) what procedures increase which dimensions of engagement; and 5) how different engagement dimensions relate to treatment process and outcome. Research that focuses on the development of pragmatic measurement, and that considers how the assessment of treatment engagement can be folded into practice in order to help providers and agencies more effectively identify and address problems, will contribute even more to our shared pursuit of effectively supporting youth and families to benefit from mental health services. Gaining a fuller understanding of how engagement functions, and integrating actionable measurement, should be a shared priority for all researchers and providers focused on addressing children’s mental health concerns.

Acknowledgments

We have no known conflicts of interest to disclose. During preparation of this manuscript, W.J. Bradley was supported by NIH-NIGMS (T32-GM081740). Publication contents are solely the responsibility of the authors and do not necessarily represent the official views of the NIGMS or NIH. We wish to acknowledge Eric L. Daleiden, Ph.D., of Practicewise, LLC, and Richard P. Barth, Dean of the University of Maryland School of Social Work, for their leadership and contributions to the foundational work for this review.

Footnotes

1. The sum of percentages exceeds 100% because some studies were conducted in multiple settings.

2. The sum of total unique indicators is 115 rather than 112 because two measures (noted in Table 3) assessed multiple domains.

References

References marked with an asterisk indicate studies included in the meta-analysis.

  1. Bachelor A (2013). Clients' and therapists' views of the therapeutic alliance: Similarities, differences and relationship to therapy outcome. Clinical Psychology & Psychotherapy, 20, 118–135. 10.1002/cpp.792 [DOI] [PubMed] [Google Scholar]
  2. Becker KD, Boustani MM, Gellatly R, & Chorpita BF (2018). Forty years of engagement research in children’s mental health services: Multidimensional measurement and practice elements. Journal of Clinical Child & Adolescent Psychology, 47(1), 1–23. 10.1080/15374416.2017.1326121 [DOI] [PubMed] [Google Scholar]
  3. Becker KD, Buckingham SL, Rith-Najarian L, & Kline ER (2015). The Common Elements of treatment engagement for clinically high-risk youth and youth with first-episode psychosis. Early Intervention in Psychiatry, 10(6), 455–467. 10.1111/eip.12283 [DOI] [PubMed] [Google Scholar]
  4. Becker KD, & Chorpita BF (2016, August). Enhancing the design of engagement interventions to enhance the public health impact of mental health treatments for youth. In Becker K (Chair), Extending the reach and impact of science on clinical care for youth and families: Looking for new models for the old challenges [Symposium]. 23rd NIMH Conference on Mental Health Services Research: Harnessing Science to Strengthen the Public Health Impact, Bethesda, MD. [Google Scholar]
  5. Becker KD, Lee BR, Daleiden EL, Lindsey M, Brandt NE, & Chorpita BF (2015). The common elements of engagement in children’s mental health services: Which elements for which outcomes? Journal of Clinical Child & Adolescent Psychology, 44(1), 30–43. 10.1080/15374416.2013.814543 [DOI] [PubMed] [Google Scholar]
  6. Becker KD, Dickerson K, Boustani M & Chorpita BF (2021). Knowing what to do and when to do it: Mental health professionals and the evidence base for treatment engagement. Administration and Policy in Mental Health and Mental Health Services Research, 48, 201–218. 10.1007/s10488-020-01067-6 [DOI] [PubMed] [Google Scholar]
  7. Becker KD, Park AL, Boustani MM, & Chorpita BF (2019). A pilot study to examine the feasibility and acceptability of a coordinated intervention design to address treatment engagement challenges in school mental health services. Journal of School Psychology, 76. 78–88. 10.1016/j.jsp.2019.07.013 [DOI] [PubMed] [Google Scholar]
  8. *Bonner B, & Everett F (1986). Influence of client preparation and problem severity on attitudes and expectations in child psychotherapy. Professional Psychology: Research and Practice, 17(3), 223–229. 10.1037/0735-7028.17.3.223 [DOI] [Google Scholar]
  9. *Breland-Noble A, The AAKOMA Project Adult Advisory Board (2012). Community and treatment engagement for depressed African American youth: The AAKOMA FLOA pilot. Journal of Clinical Psychology in Medical Settings, 19(1), 41–48. 10.1007/s10880-011-9281-0 [DOI] [PubMed] [Google Scholar]
  10. Chacko A, Jensen SA, Lowry LS, Cornwell M, Chimklis A, Chan E, Lee D, & Pulgarin B (2016). Engagement in Behavioral Parent Training: Review of the Literature and Implications for Practice. Clinical Child and Family Psychology Review, 19(3), 204–215. 10.1007/s10567-016-0205-2 [DOI] [PubMed] [Google Scholar]
  11. *Chacko A, Wymbs BT, Wymbs FA, Pelham WE, Swanger-Gagne MS, Girio E, Pirvics L, Herbst L, Guzzo J, Phillips C, & O'Connor B (2009). Enhancing traditional behavioral parent training for single mothers of children with ADHD. Journal of Clinical Child and Adolescent Psychology, 38(2), 206–218. 10.1080/15374410802698388
  12. Champine RB, Shaker AH, Tsitaridis KA, Whitson ML, & Kaufman JS (2019). Service-related barriers and facilitators in an early childhood system of care: Comparing the perspectives of parents and providers. Community Mental Health Journal, 55(6), 942–953. 10.1007/s10597-019-00418-4
  13. Chen H, Cohen P, Kasen S, Johnson JG, Berenson K, & Gordon K (2006). Impact of adolescent mental disorders and physical illnesses on quality of life 17 years later. Archives of Pediatrics & Adolescent Medicine, 160(1), 93. 10.1001/archpedi.160.1.93
  14. Chorpita BF, & Daleiden EL (2014). Structuring the collaboration of science and service in pursuit of a shared vision. Journal of Clinical Child & Adolescent Psychology, 43(2), 323–338. 10.1080/15374416.2013.828297
  15. *Coatsworth JD, Santisteban DA, McBride CK, & Szapocznik J (2001). Brief strategic family therapy versus community control: Engagement, retention, and an exploration of the moderating role of adolescent symptom severity. Family Process, 40(3), 313–332. 10.1111/j.1545-5300.2001.4030100313.x
  16. *Coker TR, Porras-Javier L, Zhang L, Soares N, Park C, Patel A, Tang L, Chung PJ, & Zima BT (2019). A telehealth-enhanced referral process in pediatric primary care: A cluster randomized trial. Pediatrics, 143(3), 1–10. 10.1542/peds.2018-2738
  17. *Coleman DJ, & Kaplan MS (1990). Effects of pretherapy videotape preparation on child therapy outcomes. Professional Psychology: Research and Practice, 21(3), 199–203. 10.1037/0735-7028.21.3.199
  18. Cook DA, & Beckman TJ (2006). Current concepts in validity and reliability for psychometric instruments: Theory and application. American Journal of Medicine, 119(2), 166.e7–166.e16. 10.1016/j.amjmed.2005.10.036
  19. Danko C, Garbacz L, & Budd K (2016). Outcomes of Parent–Child Interaction Therapy in an urban community clinic: A comparison of treatment completers and dropouts. Children and Youth Services Review, 60, 42–51. 10.1016/j.childyouth.2015.11.007
  20. *Day JJ, & Sanders MR (2018). Do parents benefit from help when completing a self-guided parenting program online? A randomized controlled trial comparing Triple P Online with and without telephone support. Behavior Therapy, 49(6), 1020–1038. 10.1016/j.beth.2018.03.002
  21. *Dean S, Britt E, Bell E, Stanley J, & Collings S (2016). Motivational interviewing to enhance adolescent mental health treatment engagement: A randomized clinical trial. Psychological Medicine, 46(9), 1961–1969. 10.1017/S0033291716000568
  22. Deming WE (1993). The new economics for industry, government, education (2nd ed.). MIT Press.
  23. De Haan AM, Boon AE, de Jong JT, Hoeve M, & Vermeiren RR (2013). A meta-analytic review on treatment dropout in child and adolescent outpatient mental health care. Clinical Psychology Review, 33(5), 698–711. 10.1016/j.cpr.2013.04.005
  24. De Los Reyes A, Thomas SA, Goodman KL, & Kundey SM (2013). Principles underlying the use of multiple informants' reports. Annual Review of Clinical Psychology, 9(1), 123–149. 10.1146/annurev-clinpsy-050212-185617
  25. *Donohue B, Azrin N, Lawson H, Friedlander J, Teichner G, & Rindsberg J (1998). Improving initial session attendance of substance abusing and conduct disordered adolescents: A controlled study. Journal of Child & Adolescent Substance Abuse, 8(1), 1–13. 10.1300/J029v08n01_01
  26. *Dorsey S, Pullmann MD, Berliner L, Koschmann E, McKay M, & Deblinger E (2014). Engaging foster parents in treatment: A randomized trial of supplementing trauma-focused cognitive behavioral therapy with evidence-based engagement strategies. Child Abuse & Neglect, 38(9), 1508–1520. 10.1016/j.chiabu.2014.03.020
  27. Dowell KA, & Ogles BM (2010). The effects of parent participation on child psychotherapy outcome: A meta-analytic review. Journal of Clinical Child & Adolescent Psychology, 39(2), 151–162. 10.1080/15374410903532585
  28. *Dumas JE, Begle AM, French B, & Pearl A (2010). Effects of monetary incentives on engagement in the PACE parenting program. Journal of Clinical Child and Adolescent Psychology, 39(3), 302–313. 10.1080/15374411003691792
  29. *Eyberg SM, & Johnson SM (1974). Multiple assessment of behavior modification with families: Effects of contingency contracting and order of treated problems. Journal of Consulting and Clinical Psychology, 42(4), 594–606. 10.1037/h0036723
  30. *Fabiano GA, Chacko A, Pelham WE Jr, Robb J, Walker KS, Wymbs F, Sastry AL, Flammer L, Keenan JK, Visweswaraiah H, Shulman S, Herbst L, & Pirvics L (2009). A comparison of behavioral parent training programs for fathers of children with attention-deficit/hyperactivity disorder. Behavior Therapy, 40(2), 190–204. 10.1016/j.beth.2008.05.002
  31. *Fleischman MJ (1979). Using parenting salaries to control attrition and cooperation in therapy. Behavior Therapy, 10(1), 111–116. 10.1016/S0005-7894(79)80014-3
  32. Fleiss JL (1981). Statistical methods for rates and proportions (2nd ed., pp. 38–46). Wiley.
  33. Forman-Hoffman VL, Middleton JC, McKeeman JL, Stambaugh LF, Christian RB, Gaynes BN, Kane HL, Kahwati LC, Lohr KN, & Viswanathan M (2017). Quality improvement, implementation, and dissemination strategies to improve mental health care for children and adolescents: A systematic review. Implementation Science, 12(1), 93. 10.1186/s13012-017-0626-4
  34. Garibaldi PM, Abel MR, Snow RL, & Schleider JL (2020). Does failure help or harm? Linking parents’ treatment histories, views of failure, and expectancies for child psychotherapy. Child & Youth Care Forum, 49(1), 151–169. 10.1007/s10566-019-09523-7
  35. Garland AF, Saltzman MD, & Aarons GA (2000). Adolescent satisfaction with mental health services: Development of a multidimensional scale. Evaluation and Program Planning, 23(2), 165–175. 10.1016/S0149-7189(00)00009-4
  36. Georgeson A, Highlander A, Loiselle R, Zachary C, & Jones D (2020). Engagement in technology-enhanced interventions for children and adolescents: Current status and recommendations for moving forward. Clinical Psychology Review, 78, 101858. 10.1016/j.cpr.2020.101858
  37. Glasgow RE, & Riley WT (2013). Pragmatic measures: What they are and why we need them. American Journal of Preventive Medicine, 45(2), 237–243. 10.1016/j.amepre.2013.03.010
  38. *Godley MD, Godley SH, Dennis ML, Funk RR, & Passetti LL (2007). The effect of assertive continuing care on continuing care linkage, adherence and abstinence following residential treatment for adolescents with substance use disorders. Addiction, 102(1), 81–93. 10.1111/j.1360-0443.2006.01648.x
  39. Godoy L, Hodgkinson S, Robertson HA, Sham E, Druskin L, Wambach CG, Beers LS, & Long M (2019). Increasing mental health engagement from primary care: The potential role of family navigation. Pediatrics, 143(4), e20182418. 10.1542/peds.2018-2418
  40. Greef M, Pijnenburg H, Hattum M, McLeod B, & Scholte R (2017). Parent-professional alliance and outcomes of child, parent, and family treatment: A systematic review. Journal of Child and Family Studies, 26(4), 961–976. 10.1007/s10826-016-0620-5
  41. *Grupp-Phelan J, Stevens J, Boyd S, Cohen DM, Ammerman RT, Liddy-Hicks S, Heck K, Marcus SC, Stone L, Campo JV, & Bridge JA (2019). Effect of a motivational interviewing-based intervention on initiation of mental health treatment and mental health after an emergency department visit among suicidal adolescents: A randomized clinical trial. JAMA Network Open, 2(12), e1917941. 10.1001/jamanetworkopen.2019.17941
  42. Haine-Schlagel R, Dickson KS, Shapiro AF, May GC, & Cheng P (2019). Parent mental health problems and motivation as predictors of their engagement in community-based child mental health services. Children and Youth Services Review, 104. 10.1016/j.childyouth.2019.06.005
  43. *Haine-Schlagel R, Martinez JI, Roesch SC, Bustos CE, & Janicki C (2018). Randomized trial of the Parent and Caregiver Active Participation Toolkit for child mental health treatment. Journal of Clinical Child & Adolescent Psychology, 47(sup1), S150–S160. 10.1080/15374416.2016.1183497
  44. *Heinrichs N (2006). The effects of two different incentives on recruitment rates of families into a prevention program. The Journal of Primary Prevention, 27(4), 345–365. 10.1007/s10935-006-0038-8
  45. Hill C, Knox S, Thompson B, Williams E, Hess S, & Ladany N (2005). Consensual qualitative research: An update. Journal of Counseling Psychology, 52(2), 196–205. 10.1037/0022-0167.52.2.196
  46. Hock R, Priester M, Iachini A, Browne T, DeHart D, & Clone S (2015). A review of family engagement measures for adolescent substance use services. Journal of Child and Family Studies, 24(12), 3700–3710. 10.1007/s10826-015-0178-7
  47. *Holmes DS, & Urie RG (1975). Effects of preparing children for psychotherapy. Journal of Consulting and Clinical Psychology, 43(3), 311–318. 10.1037/h0076735
  48. Hunsley J, Aubry TD, Verstervelt CM, & Vito D (1999). Comparing therapist and client perspectives on reasons for psychotherapy termination. Psychotherapy: Theory, Research, Practice, Training, 36(4), 380–388. 10.1037/h0087802
  49. *Jensen SA, & Grimes LK (2010). Increases in parent attendance to behavioral parent training due to concurrent child treatment groups. Child & Youth Care Forum, 39(4), 239–251. 10.1007/s10566-010-9101-y
  50. *Jones DJ, Forehand R, Cuellar J, Parent J, Honeycutt A, Khavjou O, Gonzalez M, Anton M, & Newey GA (2014). Technology-enhanced program for child disruptive behavior disorders: Development and pilot randomized control trial. Journal of Clinical Child and Adolescent Psychology, 43(1), 88–101. 10.1080/15374416.2013.822308
  51. Karver MS, De Nadai AS, Monahan M, & Shirk SR (2018). Meta-analysis of the prospective relation between alliance and outcome in child and adolescent psychotherapy. Psychotherapy, 55(4), 341. 10.1037/pst0000176
  52. Kazdin AE, & Wassell G (1999). Barriers to treatment participation and therapeutic change among children referred for conduct disorder. Journal of Clinical Child Psychology, 28(2), 160–172. 10.1207/s15374424jccp2802_4
  53. *Kourany RF, Garber J, & Tornusciolo G (1990). Improving first appointment attendance rates in child psychiatry outpatient clinics. Journal of the American Academy of Child and Adolescent Psychiatry, 29(4), 657–660. 10.1097/00004583-199007000-00022
  54. Kraemer HC, Measelle JR, Ablow JC, Essex MJ, Boyce WT, & Kupfer DJ (2003). A new approach to integrating data from multiple informants in psychiatric assessment and research: Mixing and matching contexts and perspectives. American Journal of Psychiatry, 160(9), 1566–1577. 10.1176/appi.ajp.160.9.1566
  55. Kutash K, Duchnowski AJ, Green AL, & Ferron JM (2011). Supporting parents who have youth with emotional disturbances through a parent-to-parent support program: A proof of concept study using random assignment. Administration and Policy in Mental Health, 38(5), 412–427. 10.1007/s10488-010-0329-5
  56. Langer DA, McLeod BD, & Weisz JR (2011). Do treatment manuals undermine youth–therapist alliance in community clinical practice? Journal of Consulting and Clinical Psychology, 79(4), 427–432. 10.1037/a0023821
  57. Lindsey MA, Brandt NE, Becker KD, Lee BR, Barth RP, Daleiden EL, & Chorpita BF (2014). Identifying the common elements of treatment engagement interventions in children’s mental health services. Clinical Child and Family Psychology Review, 17(3), 283–298. 10.1007/s10567-013-0163-x
  58. *MacLean LM, Greenough T, Jorgenson V, & Couldwell M (1989). Getting through the front door: Improving initial appointment attendance at a mental-health clinic. Canadian Journal of Community Mental Health, 8, 123–133. 10.7870/cjcmh-1989-0009
  59. Makol BA, Youngstrom EA, Racz SJ, Qasmieh N, Glenn LE, & De Los Reyes A (2020). Integrating multiple informants’ reports: How conceptual and measurement models may address long-standing problems in clinical decision-making. Clinical Psychological Science, 8(6), 953–970. 10.1177/2167702620924439
  60. *McCabe K, & Yeh M (2009). Parent-child interaction therapy for Mexican Americans: A randomized clinical trial. Journal of Clinical Child and Adolescent Psychology, 38(5), 753–759. 10.1080/15374410903103544
  61. *McKay M, & Bannon W (2004). Engaging families in child mental health services. Child & Adolescent Psychiatric Clinics of North America, 13(4), 905–921. 10.1016/j.chc.2004.04.001
  62. *McKay MM, Gopalan G, Franco L, Dean-Assael K, Chacko A, Jackson JM, & Fuss A (2011). A collaboratively designed child mental health service model: Multiple family groups for urban children with conduct difficulties. Research on Social Work Practice, 21(6), 664–674. 10.1177/1049731511406740
  63. *McKay M, McCadam K, & Gonzales J (1996). Addressing the barriers to mental health services for inner city children and their caretakers. Community Mental Health Journal, 32(4), 353–361. 10.1007/BF02249453
  64. *McKay MM, Nudelman R, McCadam K, & Gonzales J (1996). Evaluating a social work engagement approach to involving inner-city children and their families in mental health care. Research on Social Work Practice, 6(4), 462–472. 10.1177/104973159600600404
  65. *McKay M, Stoewe J, McCadam K, & Gonzales J (1998). Increasing access to child mental health services for urban children and their caregivers. Health & Social Work, 23(1), 9–15. 10.1093/hsw/23.1.9
  66. Meehl PE (1996). Clinical versus statistical prediction: A theoretical analysis and a review of the evidence. Jason Aronson. (Original work published 1954)
  67. *Mendenhall AN, Fristad MA, & Early TJ (2009). Factors influencing service utilization and mood symptom severity in children with mood disorders: Effects of multifamily psychoeducation groups (MFPGs). Journal of Consulting and Clinical Psychology, 77(3), 463–473. 10.1037/a0014527
  68. Merikangas KR, He JP, Brody D, Fisher PW, Bourdon K, & Koretz DS (2010). Prevalence and treatment of mental disorders among US children in the 2001–2004 NHANES. Pediatrics, 125(1), 75–81. 10.1542/peds.2008-2598
  69. Meyer B, Pilkonis PA, Krupnick JL, Egan MK, Simmens SJ, & Sotsky SM (2002). Treatment expectancies, patient alliance and outcome: Further analyses from the National Institute of Mental Health Treatment of Depression Collaborative Research Program. Journal of Consulting and Clinical Psychology, 70(4), 1051–1055. 10.1037/0022-006X.70.4.1051
  70. *Miller GE, & Prinz RJ (2003). Engagement of families in treatment for childhood conduct problems. Behavior Therapy, 34(4), 517–534. 10.1016/S0005-7894(03)80033-3
  71. Moore KL (2018). Mental health service engagement among underserved minority adolescents and young adults: A systematic review. Journal of Racial and Ethnic Health Disparities, 5(5), 1063–1076. 10.1007/s40615-017-0455-9
  72. Nock MK, & Ferriter C (2005). Parent management of attendance and adherence in child and adolescent therapy: A conceptual and empirical review. Clinical Child and Family Psychology Review, 8(2), 149–166. 10.1007/s10567-005-4753-0
  73. *Nock M, & Kazdin A (2005). Randomized controlled trial of a brief intervention for increasing participation in parent management training. Journal of Consulting and Clinical Psychology, 73(5), 872–879. 10.1037/0022-006X.73.5.872
  74. *Noel PE (2006). The impact of therapeutic case management on participation in adolescent substance abuse treatment. The American Journal of Drug and Alcohol Abuse, 32(3), 311–327. 10.1007/s10826-006-9022-4
  75. *Ougrin D, Zundel T, Ng A, Banarsee R, Bottle A, & Taylor E (2011). Trial of Therapeutic Assessment in London: Randomised controlled trial of Therapeutic Assessment versus standard psychosocial assessment in adolescents presenting with self-harm. Archives of Disease in Childhood, 96(2), 148–153. 10.1136/adc.2010.188755
  76. Palinkas LA (2014). Qualitative and mixed methods in mental health services and implementation research. Journal of Clinical Child & Adolescent Psychology, 43(6), 851–861. 10.1080/15374416.2014.910791
  77. *Parrish JM, Charlop MH, & Fenton LR (1986). Use of a stated waiting list contingency and reward opportunity to increase appointment keeping in an outpatient pediatric psychology clinic. Journal of Pediatric Psychology, 11, 81–89. 10.1093/jpepsy/11.1.81
  78. Paul G (1967). Strategy of outcome research in psychotherapy. Journal of Consulting Psychology, 31, 109–118. 10.1037/h0024436
  79. Pellerin K, Costa N, Weems C, & Dalton R (2010). An examination of treatment completers and non-completers at a child and adolescent community mental health clinic. Community Mental Health Journal, 46(3), 273–281. 10.1007/s10597-009-9285-5
  80. Pereira A, & Barros L (2019). Parental cognitions and motivation to engage in psychological interventions: A systematic review. Child Psychiatry and Human Development, 50(3), 347–361. 10.1007/s10578-018-0852-2
  81. Perou R, Bitsko RH, Blumberg SJ, Pastor P, Ghandour RM, Gfroerer JC, Hedden SL, Crosby AE, Visser SN, Schieve LA, Parks SE, Hall JE, Brody D, Simile CM, Thompson WW, Baio J, Avenevoli S, Kogan MD, Huang LN, & Centers for Disease Control and Prevention (CDC) (2013). Mental health surveillance among children – United States, 2005–2011. MMWR Supplements, 62(2), 1–35.
  82. Petts R, & Shahidullah J (2020). Engagement interventions delivered in primary care to improve off-site pediatric mental health service initiation: A systematic review. Families, Systems, & Health, 38(3), 310–322. 10.1037/fsh0000521
  83. *Planos R, & Glenwick DS (1986). The effects of prompts on minority children’s screening attendance at a community mental health center. Child & Family Behavior Therapy, 8(2), 5–13. 10.1300/J019v08n02_02
  84. *Prinz RJ, & Miller GE (1994). Family-based treatment for childhood antisocial behavior: Experimental influences on dropout and engagement. Journal of Consulting and Clinical Psychology, 62(3), 645–650. 10.1037/0022-006X.62.3.645
  85. Pullmann M, Ague S, Johnson T, Lane S, Beaver K, Jetton E, & Rund E (2013). Defining engagement in adolescent substance abuse treatment. American Journal of Community Psychology, 52, 347–358. 10.1007/s10464-013-9600-8
  86. Saloner B, Carson N, & Cook BL (2014). Episodes of mental health treatment among a nationally representative sample of children and adolescents. Medical Care Research and Review, 71(3), 261–279. 10.1177/1077558713518347
  87. *Santisteban D, Szapocznik J, Perez-Vidal A, Kurtines W, Murray E, & LaPerriere A (1996). Efficacy of intervention for engaging youth and families into treatment and some variables that may contribute to differential effectiveness. Journal of Family Psychology, 10(1), 35–44. 10.1037/0893-3200.10.1.35
  88. *Saxe GN, Ellis BH, Fogler J, & Navalta CP (2012). Innovations in practice: Preliminary evidence for effective family engagement in treatment for child traumatic stress–trauma systems therapy approach to preventing dropout. Child and Adolescent Mental Health, 17(1), 58–61. 10.1111/j.1475-3588.2011.00626.x
  89. *Shuman A, & Shapiro J (2002). The effects of preparing parents for child psychotherapy on accuracy of expectations and treatment attendance. Community Mental Health Journal, 38(1), 3–16. 10.1023/A:1013908629870
  90. *Sibley MH, Graziano PA, Kuriyan AB, Coxe S, Pelham WE, Rodriguez L, Sanchez F, Derefinko K, Helseth S, & Ward A (2016). Parent–teen behavior therapy + motivational interviewing for adolescents with ADHD. Journal of Consulting and Clinical Psychology, 84(8), 699–712. 10.1037/ccp0000106
  91. *Smith DC, Davis JP, Ureche DJ, & Tabb KM (2015). Normative feedback and adolescent readiness to change: A small randomized trial. Research on Social Work Practice, 25(7), 801–814. 10.1177/1049731514535851
  92. *Spirito A, Boergers J, Donaldson D, Bishop D, & Lewander W (2002). An intervention trial to improve adherence to community treatment by adolescents after a suicide attempt. Journal of the American Academy of Child & Adolescent Psychiatry, 41(4), 435–442. 10.1097/00004583-200204000-00016
  93. *Sterling S, Kline-Simon AH, Jones A, Satre DD, Parthasarathy S, & Weisner C (2017). Specialty addiction and psychiatry treatment initiation and engagement: Results from an SBIRT randomized trial in pediatrics. Journal of Substance Abuse Treatment, 82, 48–54. 10.1016/j.jsat.2017.09.005
  94. *Stern SB, Walsh M, Mercado M, Levene K, Pepler DJ, Carr A, … Lowe E (2015). When they call, will they come? A contextually responsive approach for engaging multistressed families in an urban child mental health center: A randomized clinical trial. Research on Social Work Practice, 25(5), 549–563. 10.1177/1049731514548038
  95. *Sterrett E, Jones DJ, Zalot A, & Shook S (2010). A pilot study of a brief motivational intervention to enhance parental engagement: A brief report. Journal of Child and Family Studies, 19(6), 697–701. 10.1007/s10826-010-9356-9
  96. *Stevens J, Klima J, Chisolm D, & Kelleher KJ (2009). A trial of telephone services to increase adolescent utilization of health care for psychosocial problems. Journal of Adolescent Health, 45(6), 564–570. 10.1016/j.jadohealth.2009.04.003
  97. *Szapocznik J, Perez-Vidal A, Brickman A, Foote F, Santisteban D, Hervis O, & Kurtines W (1988). Engaging adolescent drug abusers and their families in treatment: A strategic structural systems approach. Journal of Consulting and Clinical Psychology, 56(4), 552–557. 10.1037/0022-006X.56.4.552
  98. Warnick E, Gonzalez A, Weersing VR, Scahill L, & Woolston J (2012). Defining dropout from youth psychotherapy: How definitions shape the prevalence and predictors of attrition. Child and Adolescent Mental Health, 17(2), 76–85. 10.1111/j.1475-3588.2011.00606.x
  99. *Warzak WJ, Parrish JM, & Handen BL (1987). Effects of telephone intake procedures on initial appointment keeping in a child behavior management clinic. Journal of Compliance in Health Care, 2(2), 143–154. Retrieved from http://psycnet.apa.org/psycinfo/1989-16529-001
  100. *Watt BD, Hoyland M, Best D, & Dadds MR (2007). Treatment participation among children with conduct problems and the role of telephone reminders. Journal of Child and Family Studies, 16(4), 522–530. 10.1007/s10826-006-9103-4
  101. *Weinstein M (1988). Preparation of children for psychotherapy through videotaped modeling. Journal of Clinical Child Psychology, 17(2), 131–136. 10.1207/s15374424jccp1702_4
  102. Werlen L, Gjukaj D, Mohler-Kuo M, & Puhan MA (2019). Interventions to improve children's access to mental health care: A systematic review and meta-analysis. Epidemiology and Psychiatric Sciences, 29, e58. 10.1017/S2045796019000544
  103. *Wiseman M, & McBride M (1998). Increasing the attendance rate for first appointments at child and family psychiatry clinics: An opt-in system. Child Psychology & Psychiatry Review, 3(2), 68–71. 10.1017/S136064179700141X
  104. Wood JM, Garb HN, & Nezworski MT (2007). Psychometrics: Better measurement makes better clinicians. In Lilienfeld SO & O'Donohue WT (Eds.), The great ideas of clinical science: 17 principles that every mental health professional should understand (pp. 77–92). Routledge/Taylor & Francis Group.
  105. Wright B, Lau AS, & Brookman-Frazee L (2019). Factors associated with caregiver attendance in implementation of multiple evidence-based practices in youth mental health services. Psychiatric Services, 70(9), 808–815. 10.1176/appi.ps.201800443
  106. *Yasui M, & Henry DB (2014). Shared understanding as a gateway for treatment engagement: A preliminary study examining the effectiveness of the culturally enhanced video feedback engagement intervention. Journal of Clinical Psychology, 70(7), 658–672. 10.1002/jclp.22058
  107. Yasui M, Pottick KJ, & Chen Y (2017). Conceptualizing culturally infused engagement and its measurement for ethnic minority and immigrant children and families. Clinical Child and Family Psychology Review, 20(3), 250–332. 10.1007/s10567-017-0229-2
  108. Youngstrom EA (2013). Future directions in psychological assessment: Combining evidence-based medicine innovations with psychology’s historical strengths to enhance utility. Journal of Clinical Child and Adolescent Psychology, 42, 139–159. 10.1080/15374416.2012.73635
