Abstract
Treatment engagement is a primary challenge to the effectiveness of evidence-based treatments for children and adolescents. One proposed solution to this challenge is technology, which has been put forward as an enhancement to, or replacement for, standard clinic-based, therapist-delivered services. This review summarizes the current state of the field regarding technology’s promise to enhance engagement. A review of this literature suggests that, although technology has been the focus of much theoretical consideration, as well as of funding priorities, relatively little empirical research has been published on its role as a vehicle to enhance engagement in particular. Moreover, the lack of consistency in constructs, designs, and measures makes it difficult to draw useful comparisons across studies and, in turn, to determine if and what progress has been made toward more definitive conclusions. At this point in the literature, we can say only that we do not yet definitively know whether technology does (or does not) enhance engagement in evidence-based treatments for children and adolescents. Recommendations are provided with the aim of more definitively assessing technology’s capacity to improve engagement, including more studies explicitly designed to address this research question, as well as greater consistency across studies in the measures used to assess, and the designs used to test, engagement.
Keywords: Engagement, Technology, Child, Adolescent
1. Introduction
Evidence-based treatments serving children, adolescents, and their families have been identified (see Southam-Gerow & Prinstein, 2014 for a review). Indeed, many of the most common mental health issues facing youth, including internalizing (i.e., anxiety, depression), externalizing (i.e., inattention/hyperactivity, oppositionality, conduct problems), and interpersonal (e.g., social skills, family conflict) problems, can be effectively treated with our existing approaches to intervention. In spite of such empirical success, the clinical utility of our evidence base is limited by disappointing rates of engagement among the children, adolescents, and families who need services (or who need them the most). For example, while an estimated 20% of U.S. youth (15 million) under the age of 18 have at least one emotional or behavioral disorder, half of this group never receives medication or psychological services, and fewer than 10% report receiving services in the past year (see Freeman & Kendziora, 2017 for a review). Moreover, a sizeable proportion (28 to 75%) of families who do engage in mental health services prematurely drop out or receive only half the number of recommended sessions (see de Haan, Boon, de Jong, Hoeve, & Vermeiren, 2013 for a review). Thus, policy-makers, researchers, and clinicians alike agree that identifying and testing innovative strategies for increasing engagement in services is crucial if we are to better serve the health and well-being of children and their families (see Gopalan et al., 2010 for a review).
One proposed strategy for addressing the challenge of engagement, technology, has been the focus of substantive attention by grant-funders and researchers alike (see Gopalan et al., 2010; Jones, 2014; Fairburn & Patel, 2017 for reviews). Although terminology continues to evolve (e.g., telemental health, mobile health, digital health), various agencies have demonstrated a serious commitment to funding empirical inquiries into the promise of technology as a delivery vehicle for mental health care. For example, the National Institute of Mental Health (NIMH, 2017) reports that it awarded 404 grants totaling 445 million dollars for technology-enhanced mental health interventions between fiscal years 2009 and 2015. Yet, while such work has continued for more than a decade, surprisingly little data has been reported on whether and/or how technology improves engagement in particular.
Accordingly, this paper aims to provide a status update on the evidence for technology as a strategy to increase engagement in mental health services for children and adolescents. We summarize themes reflective of the current state of the evidence base and make recommendations regarding the necessary next steps if we are to draw more definitive conclusions about the hypothesized promise of technology for increasing engagement in treatments known to be effective. Indeed, the answer to this research question is clinically compelling. While technology is posited to increase the cost-effectiveness of services by improving engagement in, and, in turn, the clinical outcomes of, those services, research on technology as a mental health service delivery vehicle is itself quite costly (see Jones, 2014 for a review). Costs sunk into the design, development, and testing of technology would be not only exorbitant but counterproductive if empirical inquiry and funding should, in fact, be better directed elsewhere. Conversely, the cost savings that could result from greater engagement and, in turn, greater opportunity for children, adolescents, and families to benefit from treatment make such work a clinical and public health imperative.
2. Scope of the review
Given the relatively small empirical literature on this topic, we kept our search terms fairly broad, including those related to “engagement.” Although there remains no single, generally agreed upon definition of engagement, there seems to be some consensus regarding its constituent elements, which we drew on to guide our review (see Becker et al., 2015; Becker, Boustani, Gellatly, & Chorpita, 2018; Gopalan et al., 2010; Nock & Ferriter, 2005; Staudt, 2007; Tetley, Jinks, Huband, & Howells, 2011 for reviews). Broadly, engagement has been measured as a function of time or phase, such as a family’s initiation of (e.g., scheduling a first session), progress in (e.g., session attendance, homework completion), and/or completion of (e.g., a full course of treatment and/or some prespecified number of sessions) services. Engagement has also been measured as a series of dimensions, including those that are behavioral (e.g., initiation, attendance, adherence), as well as attitudinal or cognitive dimensions that seek to capture the emotional investment individuals have in treatment and/or their evaluation of the treatment they received (e.g., satisfaction, usability, acceptability, and therapeutic alliance).
With these conceptualizations in mind, we included studies if 1) the authors explicitly referred to a variable of interest as a marker or component of engagement and/or 2) the study included measures of constructs that we or others define as reflective of, and/or a proxy for, treatment engagement, even if the authors did not explicitly refer to the measure as one intended to assess engagement. Such terms may include, but are not limited to, elements characterizing attendance (i.e., measurement of participants’ presence at a particular therapeutic contact, such as a weekly session), adherence (i.e., measurement of participants’ active demonstration of a prespecified behavior, such as home practice), and/or cognitive or attitudinal characteristics (i.e., measurement of attitudes, expectations, or perceptions of the treatment process or outcome, such as therapeutic alliance) (Becker et al., 2015; Becker et al., 2018). Consistent with this approach, studies may have assessed these constructs as they related to the standard therapy process (e.g., did technology increase family attendance at weekly clinic-based sessions?) and/or as they related to the technology in particular (e.g., did the family adhere to the prescribed use of the technology?). Moreover, engagement may or may not have been the primary outcome variable in the study; the study simply had to measure a construct the authors referred to as engagement or a valid proxy of engagement as defined above. Although there is promising research in the pipeline that will continue to inform our understanding of if (and how) technology has the potential to increase engagement (e.g., Chacko, Isham, Cleek, & McKay, 2016), studies reviewed here had to be empirical and have a quantitative component. As such, studies using qualitative methods or descriptive case studies alone were excluded.
In addition to engagement, we also kept our definition of technology quite broad, encompassing treatments ranging from: 1) stand-alone technological interventions in which technology is the primary service delivery vehicle (e.g., web-based sessions, sometimes referred to as “self-administered” or “self-guided”) and there is no involvement of a clinician, to 2) technology-enhanced service delivery models (also referred to as “telemental health” or simply “telehealth”), which are generally posited to increase support for, connection to, and generalizability of traditional clinic-based services by virtually connecting the therapist and therapy setting with the child and family’s everyday life (see Anton & Jones, 2017; Jones, 2014; Muñoz, 2010; Tate & Zabinski, 2004 for reviews). Although there is some evidence to suggest that technology-enhanced approaches may be better suited to children and adolescents with clinically significant symptomatology (see Anton & Jones, 2017; Mohr, Burns, Schueller, Clarke, & Klinkman, 2013; Nelson & Bui, 2010; Tate & Zabinski, 2004 for reviews), we include both types of approaches given that engagement is also relevant in evidence-based stand-alone technological interventions. That said, the majority of the studies discussed were technology-enhanced treatments (i.e., 31 were technology-enhanced and 11 were stand-alone), and our review therefore focuses more heavily on such treatments.
Related to our search criteria for technology, the increasing and broad use of games, whether as the sole delivery vehicle or as one component of a technology-enhanced treatment approach, merited some consideration. Given our focus on evidence-based treatments in particular, we included games only if they were developed to function as analogs to the standard, non-technology-enhanced evidence-based treatment approach. For example, we included Wols, Lichtwarck-Aschoff, Schoneveld, and Granic (2018) because the authors specified that the game they studied was developed using evidence-based cognitive-behavioral principles and, in our assessment, therefore functioned as an analog to the standard treatment approach. On the other hand, we did not include studies like Bul et al. (2015), which is based on theories that may have informed existing evidence-based interventions but for which the authors did not make that link explicit. We also excluded games that targeted cognitive skills broadly (e.g., attention, memory, executive functioning) in the absence of the aforementioned search criteria.
Finally, our search criteria included evidence-based treatments targeting child or adolescent (2 to 19 years old) mental health and, as such, a broad range of presenting issues most typical of youth in this age range, including broad categories of internalizing disorders (e.g., anxiety, depression) and specific diagnoses (e.g., obsessive-compulsive disorder), externalizing disorders (e.g., attention-deficit/hyperactivity disorder, behavior disorders), and developmental disorders (e.g., autism spectrum disorder). Consistent with our focus on evidence-based treatments in particular, which tend to be developed for and tested with specific disorders of childhood and adolescence, we did not include technology-enhanced interventions targeting child adjustment or well-being more broadly (e.g., academic achievement, social skills) unless the study enrolled a clinical sample. Of note, we did not include studies that primarily targeted parent mental health, as we were most interested in technology’s capacity to increase engagement of children, adolescents, and/or families in services directly targeting child or adolescent mental health in particular. Consistent with the broader literature on parent involvement in children’s mental health services (Forehand, Jones, & Parent, 2013), however, treatments for young children most often focused on parental behavior change as the primary means for influencing children’s internalizing and externalizing symptoms, while treatments for older children, and those focused on internalizing symptoms, more often included parents as a supportive agent through separate technology or portals to the technology.
Studies that met these review criteria are the focus of this status update and are summarized in Table 1 with respect to the type of technology, study design, and measures of engagement used. In the sections that follow, we do not discuss each study in detail (i.e., this is not a systematic review); rather, we identify themes that summarize the state of the literature at this point in time, along with representative studies reflective of those themes. We turn next to these overarching themes.
Table 1. Studies included in the review, summarized by diagnosis, technology type, study design, control group, and engagement measures (see key below).
Study | Diagnosis | Technology type | Study design | Control group | 1. Satisfaction | 2. Time | 3. Homework | 4. Attendance | 5. Tech-specific | 6. Ther. Alliance |
---|---|---|---|---|---|---|---|---|---|---|
Franke, Keown, & Sanders, 2016 | ADHD* | Web application | RCT | Delayed treatment | X | X | ||||
Myers et al., 2015 | ADHD | Videoconferencing | RCT | Augmented Primary Care (Primary Care Provider and telepsychiatry consultation) | X | X | ||||
Sibley et al., 2017 | ADHD | Videoconferencing | Feasibility | None | X | X | X | X | ||
Tse et al., 2015 | ADHD | Videoconferencing | RCT | TAU (Pharmacotherapy and parent behavior training) | X | X ||||
Wegrzyn et al., 2012 | ADHD | Videogame | Feasibility | None | X | |||||
Xie et al., 2013 | ADHD | Videoconferencing | RCT | TAU (Face to Face) | X |||||
DuPaul et al., 2017 | ADHD | Web application | Feasibility | None | X | X | X | X | ||
Carpenter et al., 2018 | Anxiety | Videoconferencing | Pilot | None | X | X | X | X | X | X |
Khanna & Kendall, 2010 | Anxiety | Computer program | RCT | Individual CBT and Computer only | X | X | ||||
March et al., 2018 | Anxiety | Web application | Feasibility | None | X | X | X | |||
Spence et al., 2011 | Anxiety | Computer/web application | RCT | Same treatment, but in clinic | X | X | X | |||
Storch et al., 2015 | Anxiety | Computer Program, videogame | RCT | TAU (client’s choice of treatment) | X | X | ||||
Vigerland et al., 2016 | Anxiety | Web application | RCT | Waitlist | X | X | ||||
Wols et al., 2018 | Anxiety | Internet based Videogame | RCT – only analyzed treatment group | None | X | X | ||||
Wuthrich et al., 2012 | Anxiety | Computer Program | RCT | Waitlist | X | X | X | |||
Ingersoll & Berger, 2015 | ASD** | Website and Skype sessions with therapist | Pilot | Self-directed | X | X | X | X | X | X |
Ingersoll et al., 2017 | ASD | Website and Skype sessions with therapist | RCT | Self-directed and informational control | X | X | X | X | X | |
Kobak et al., 2011 | ASD | Website | Feasibility | None | X | |||||
Law et al., 2018 | ASD | Mobile application, videoconferencing, video uploading | Feasibility | None | X | |||||
Meadan et al., 2013 | ASD | Videoconferencing, video uploading, video recording | Feasibility | None | X | |||||
Nefdt et al., 2010 | ASD | DVD-based training | RCT | Waitlist | X | X | ||||
Vismara et al., 2013 | ASD | Teleconferencing, website | Feasibility | None | X | X | X | |||
Vismara et al., 2012 | ASD | Videoconferencing | Feasibility | None | X | |||||
Vismara et al., 2016 | ASD | Website and videoconferencing | RCT | TAU (community early intervention program) | X | X | ||||
Wainer & Ingersoll, 2013 | ASD | Website | Feasibility | None | X | X | ||||
Wainer & Ingersoll, 2015 | ASD | Website and videoconferencing | Feasibility | None | X | X | X | X | ||
Bearss et al., 2018 | ASD | Videoconferencing, telephone calls | Feasibility | None | X | X | ||||
Enebrink et al., 2012 | Behavior Problems | Website | RCT | Waitlist | X | X | ||||
Högström et al., 2015 | Behavior Problems | Website | RCT | Waitlist | X | X | ||||
Baggett et al., 2010 | Children at risk for Developmental Disability | Internet and telephone | RCT | Internet Only | X | X | X | |||
Merry et al., 2012 | Depression | CD-ROM videogame | RCT | TAU (face-to-face therapy, treatment by general practitioners) | X | X | X | X ||
Stasiak et al., 2014 | Depression | CD-ROM videogame | RCT | Computerized psychoeducation (computer only) | X ||||||
Comer et al., 2017 | Disruptive behaviors | Online | RCT | TAU (Clinic-based Parent-Child Interaction Therapy) | X | |||||
Day & Sanders, 2018 | Disruptive behaviors | Online, telephone support | RCT | TAU (no telephone support) | X | X | X | X | ||
Jones et al., 2014 | Disruptive behaviors | Smartphone application, telephone calls, video uploading | Pilot | TAU (non-technology-enhanced) | X | X | X | |||
Yasui & Henry, 2014 | Disruptive behaviors | Video recordings | RCT | TAU (behavioral management and parent training) | X | X | ||||
Taylor et al., 2008 | Elevated Disruptive Behaviors | Computer/web application | RCT | Waitlist | X | X | X | |||
Demaso et al., 2006 | Mood Disorders | Computer/web application (journal) | Feasibility | None | X | |||||
Lenhard et al., 2017 | OCD*** | Web application | RCT | Waitlist | X | X | X | |||
McGrath et al., 2011 | ODD****, ADHD, and anxiety | Videos, telephone | RCT | TAU | X | X | ||||
Davidson et al., 2019 | Trauma Symptoms | Tablet | Pilot | TAU (Trauma-Focused CBT) | X | X | X | |||
Stewart et al., 2017 | Trauma Symptoms | Telehealth | Pilot | None | X | X |
*ADHD = Attention-Deficit/Hyperactivity Disorder;
**ASD = Autism Spectrum Disorder;
***OCD = Obsessive-Compulsive Disorder;
****ODD = Oppositional Defiant Disorder.
1. Satisfaction: satisfaction with or acceptability of treatment;
2. Time: total time spent in treatment or time spent in each session;
3. Homework: homework or assigned module/activity completion;
4. Attendance: attendance at or adherence to treatment (sessions attended, completion of sessions);
5. Technology-specific measure: a measure that is uniquely possible, or made more precise, through the use of technology;
6. Therapeutic alliance.
To summarize our search procedure, we searched the PsycINFO database using the terms “engagement OR homework OR attendance OR satisfaction” and “technology OR computer OR tablet OR mobile phone” and “treatment OR intervention OR therapy,” with the age range specified as childhood. This broad search, intended to capture relevant articles that may not have used the same terminology, returned 686 results, many of which could be excluded on title or abstract alone (e.g., physical diagnoses, educational studies). Subsequent targeted searches included specific terms for diagnoses, and these largely overlapped with our broader search. We also took note of any citing articles and references that may have met our criteria. Our final sample for the review consisted of 42 studies.
2.1. Themes characterizing technology’s capacity to enhance engagement
Our review of the studies included in Table 1 suggests several themes that characterize the current state of empirical research on engagement in technology-enhanced or stand-alone interventions for children, adolescents, and/or their families.
2.2. Few studies examine engagement as a primary study variable or outcome
While engagement has been cited as a primary rationale for the inclusion of technology in mental health services research (Jones, 2014; Price et al., 2014), relatively few studies in this review were designed to test engagement explicitly or considered engagement the primary outcome variable of interest, and findings are mixed or inconclusive (e.g., DuPaul et al., 2017; Ingersoll, Shannon, Berger, Pickard, & Holtz, 2017; Jones et al., 2014; Yasui & Henry, 2014). As an example of work in which engagement was the primary outcome, Ingersoll et al. (2017) examined a technology-enhanced treatment for parents of children with autism. They present findings across various experimental trials; however, the primary theme of the findings is that engagement (i.e., measured by the percentage of learning activities completed) was significantly related to the amount of knowledge parents retained 6 months after treatment. Such findings support the hypothesis that greater engagement leads to better outcomes, in this case knowledge; however, the design did not provide an opportunity to glean the role of technology in improving engagement in particular. In a study that focused on enhancing engagement specifically for ethnic minority families, Yasui and Henry (2014) found that the technology-enhanced intervention led to increases in client-rated clinician cultural competence and therapeutic alliance compared to treatment as usual. This example highlights a promising outcome for engagement defined as therapeutic alliance, but the study did not report on other measures of engagement, such as attendance. In another example, DuPaul et al. (2017) tested the feasibility of modifying both the format and content of traditional Behavioral Parent Training (BPT), the standard of care for children with early onset behavior disorders, in order to cut the required time for parents by 50% with the goal of improving engagement. They tested both a face-to-face and a web-based version of the condensed program. Of most relevance to the current review, parents of children with ADHD, the target population, had higher rates of engagement (i.e., parent completion of at least half of the sessions) in the abbreviated face-to-face version (75%) than would be expected based on typical drop-out rates in BPT in particular and other family-based treatments in general; web-based engagement (95%) was higher still. Although such findings are promising, the feasibility and iterative nature of the study design and implementation, as well as the relative lack of improvement in child functioning (i.e., post-treatment behavior still in the clinical range for most children), make it difficult to determine whether the gains in engagement linked to the shortened BPT program in general, and the web version in particular, come at the expense of improvements in parenting or child outcomes.
In contrast to the aforementioned examples, engagement was more typically examined as a secondary outcome in the bulk of the studies we reviewed (e.g., Baggett et al., 2010; Day & Sanders, 2018; Myers, Vander Stoep, Zhou, McCarty, & Katon, 2015; Vismara, McCormick, Young, Nadhan, & Monlux, 2013), or the authors purportedly included, but did not report on or formally test, a construct or measure that met our search criteria related to engagement (e.g., Tse, McCarty, Vander Stoep, & Myers, 2015; Wuthrich et al., 2012; Xie et al., 2013). Day and Sanders (2018), in a comparison of a self-guided online program with and without telephone support from a clinician, offer one potential explanation for the lack of full reporting: they noted that they did not expect to find large differences in engagement, since engagement was a secondary outcome not actually targeted in the intervention. They did, however, go on to find that the group with telephone support completed significantly more modules, and that more of its parents completed the full program, compared to the self-directed individuals. As another example, Myers et al. (2015) randomized children to receive either a telehealth service model or an augmented primary care model for the treatment of ADHD. Although findings revealed that children in the telehealth condition showed greater and faster behavioral improvement than children in the standard intervention condition, the manuscript makes it difficult to compare the two conditions on engagement: while the authors report the average number of sessions attended by those in the telehealth arm (M = 5.2; range 0 to 6), they report only the percentage (94.6%) of those in the primary care arm who attended the initial consultation session. In a third example, McGrath et al. (2011) examined telephone-based interventions for child disruptive behavior and anxiety disorders and collected data on adherence as well as satisfaction. The authors reported adherence rates and satisfaction data only for the treatment group, despite having a treatment-as-usual control, and thus did not provide statistical tests comparing the groups. Finally, in a study examining a computer-based version of the Incredible Years parenting program, Taylor et al. (2008) included control groups but offered only descriptive comparisons to the treatment-as-usual group with respect to participation in their discussion, without a statistical comparison. As such, discerning the potential role of technology for engagement in particular is impossible in these examples.
2.3. Variations in research designs
In addition to individual studies being designed in ways that make examining the link between technology and engagement difficult, the broad range of designs across studies further complicates the identification of clear trends. For example, studies meeting our criteria ranged from feasibility studies (e.g., Bearss et al., 2018; Carpenter, Pincus, Furr, & Comer, 2018; Kobak et al., 2011; March, Spence, Donovan, & Kenardy, 2018; Meadan, Meyer, Snodgrass, & Halle, 2013; Wainer & Ingersoll, 2013), to those with wait-list controls (e.g., Enebrink, Högström, Forster, & Ghaderi, 2012; Lenhard et al., 2017; Vigerland et al., 2016; Wuthrich et al., 2012), to fully powered randomized controlled trials comparing the standard evidence-based treatment approach to one enhanced by technology (e.g., Jones et al., 2014; Storch et al., 2015; Xie et al., 2013). We also found that three of the studies utilized a “computer only” or “internet only” control group rather than (or in addition to) a treatment-as-usual control group (Baggett et al., 2010; Khanna & Kendall, 2010; Stasiak, Hatcher, Frampton, & Merry, 2014). Each of these designs, of course, has strengths and limitations. For example, while feasibility studies allow us to determine the usability and acceptability of the technology, they tell us little about statistically or clinically meaningful changes in engagement, given inadequate sample sizes and the lack of control groups. That said, some feasibility studies had quite large samples: March et al. (2018), for example, examined the feasibility and acceptability of an online Cognitive Behavioral Therapy (CBT) program using an open trial of 4425 children and adolescents. Similarly, studies that use a wait-list control or some equivalent may have clinical validity (i.e., we know most children do not receive mental health services), yet such designs make it difficult to disentangle the specific role of technology (versus treatment in general) in enhancing engagement. For example, Nefdt, Koegel, Singer, and Gerber (2010) compared parents of children with ASD randomly assigned to a self-directed DVD-based program to wait-list controls. While they did collect data related to engagement, they were unable to compare these data to the control group, since those individuals did not complete any treatment. On the other hand, in a follow-up to Enebrink et al. (2012), Högström, Enebrink, Melin, and Ghaderi (2015) found that homework compliance predicted whether conduct problems continued to decrease 18 months after study completion. This example highlights that, even when there is only a wait-list control group, within-group analyses can provide useful information about engagement.
Thus, only studies that directly compare the technology-enhanced or technology-delivered treatment to the standard evidence-based approach provide a true comparison of rates or levels of engagement (e.g., Jones et al., 2014; Spence et al., 2011; Xie et al., 2013). For example, Jones et al. (2014) randomized families to either the standard treatment for early onset (3 to 8 years old) behavior disorders, BPT (i.e., weekly clinic-based sessions, mid-week calls, home practice), or a technology-enhanced BPT group, which received the standard BPT program plus technology enhancements focused on engagement in particular (e.g., reminders regarding appointments, video call check-ins regarding skill practice and progress). Results indicated that the technology-enhanced BPT yielded larger effect sizes for all measures of engagement (i.e., session attendance, home practice, mid-week call participation) and greater cost-effectiveness, as families in the technology-enhanced program required fewer sessions to complete treatment without any compromise to patient satisfaction. Similarly, Khanna and Kendall (2010) reported that, in addition to comparable levels of symptom improvement between their computer- and clinic-based programs for childhood anxiety, parents and children reported comparable levels of satisfaction (i.e., an attitudinal measure of engagement) in both groups. Yet, given that both studies had relatively small sample sizes, findings must be interpreted cautiously until similar patterns can be replicated in fully powered randomized controlled trials. In an RCT comparing in-clinic Parent-Child Interaction Therapy to an internet-delivered version, Comer et al. (2017) noted greater levels of symptom improvement for the internet group and showed that satisfaction with treatment was slightly elevated for the technology group. While they report in the discussion that the internet-delivered treatment had “high engagement” (p. 915), it is not clear how this was operationalized in the study.
2.4. Lack of measurement consensus
Studies included in this review also used a broad array of measures of engagement, a problem that has been the focus of much discussion in the clinical literature more broadly as well. Measures in the reviewed studies captured phase (e.g., initiation, completion), dimension (i.e., behavioral, attitudinal), and element (i.e., attendance, adherence, cognition/attitude), and some trends were observed. Among the attitudinal measures, client satisfaction (e.g., Baggett et al., 2010; Jones et al., 2014; Stasiak et al., 2014) and/or usability or feasibility of the treatment (e.g., Sibley, Comer, & Gonzalez, 2017; Vismara, Young, & Rogers, 2012) were common. Moreover, findings across studies suggested that participants had relatively high levels of satisfaction with technology-enhanced or stand-alone technological interventions, and at least as high or higher levels of satisfaction with the technology-enhanced condition relative to the standard treatment condition, as noted above (e.g., Jones et al., 2014; Khanna & Kendall, 2010). A similar pattern of findings emerged for usability and feasibility as well. Such comparisons are important, given clinician concern that technology will impede therapeutic alliance and, in turn, client satisfaction with treatment (Anton & Jones, 2019; see Anton & Jones, 2017 for a review).
Related to the breadth of engagement constructs examined within and between studies, there is a notable lack of consistency in the measures used to assess, as well as the terminology used to reflect, those constructs across studies. For example, homework or home practice (i.e., one behavioral measure of adherence) is a central or common component of most (if not all) evidence-based treatments for children and adolescents (see Chorpita, Daleiden, & Weisz, 2005; Chorpita & Daleiden, 2009 for reviews). While several of the studies in Table 1 explicitly mention homework as part of the treatment, there was considerable variability in how much the authors described, operationalized, and reported on homework completion as a primary outcome variable (or variable of interest). For example, Ingersoll et al. (2017) noted that parents in their web-based program for Autism Spectrum Disorder were encouraged to complete or use a series of between-session modules or content (e.g., self-check questions, video-based exercises, a homework plan, reflection questions); however, if and how these activities were included in their analyses of program engagement (i.e., percent of learning activities visited at least once across the lessons) and completion (i.e., visiting 75% or more of the learning activities) is unclear. Alternatively, Merry et al. (2012) reported on homework completion as the percentage of adolescents completing most or all of the homework challenges in their web-based treatment for depression; however, it was unclear whether homework factored into their overall measure of engagement (i.e., percentage of participants completing all modules). In a pilot study of an ADHD treatment program for teens, Sibley et al. (2017) reported that home activities were coded as not completed, partially completed, or fully completed, and an average proportion was then taken. Thus, an individual who fully completed 5 of 10 homework assignments and partially completed 3 of 10 would have a proportion score of 0.5 under a binary completion coding and 0.65 under Sibley, Comer, and Gonzalez’s partial-credit coding. As a final example, Wuthrich et al. (2012) did not report homework completion rates in particular (or include them in a broader measure of engagement) in their web-based study of CBT for adolescent anxiety; however, they did note that the time required to complete homework was identified as a barrier to treatment completion. Thus, homework completion is but one example of how inconsistency of measurement, and/or lack of measurement, of generally agreed upon elements of engagement makes definitive conclusions about technology’s role in engagement even more difficult, if not impossible, to glean.
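To make the scoring discrepancy concrete, the following minimal sketch contrasts the two coding schemes described above. The function names and example data are ours, chosen for illustration, and are not drawn from any of the reviewed studies.

```python
# Two ways of scoring the same homework record, per the example above:
# 10 assignments, 5 fully completed, 3 partially completed, 2 not completed.

def binary_proportion(full: int, partial: int, total: int) -> float:
    """Binary coding: only fully completed assignments count."""
    return full / total

def partial_credit_proportion(full: int, partial: int, total: int) -> float:
    """Partial-credit coding (as described in Sibley et al., 2017):
    full = 1, partial = 0.5, not completed = 0, averaged over all assignments."""
    return (full * 1.0 + partial * 0.5) / total

print(binary_proportion(5, 3, 10))          # 0.5
print(partial_credit_proportion(5, 3, 10))  # 0.65
```

The same raw behavior thus yields meaningfully different engagement scores depending on the coding scheme, underscoring why cross-study comparisons require a shared operationalization.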
We did not find that the same measures were used across studies, and in many cases items used to measure satisfaction were project-specific. However, there does appear to be at least one instrument developed for measuring engagement across a variety of interventions. In a pilot study of a tablet-based intervention for children experiencing symptoms related to trauma, Davidson et al. (2019) measured child engagement using the Child Involvement Rating Scale, an observational coding instrument. For this measure, sessions were recorded and coders rated each treatment session on a number of items related to engagement, including positive indicators (e.g., initiation of discussion and enthusiasm), as well as negative indicators (e.g., withdrawal and avoidance). While this measure could be implemented in a range of studies, the feasibility of relying on recordings in technology-enhanced or technology-delivered approaches may be limited, particularly given privacy constraints around protected health information and the expense inherent in accounting for them in technology design and development.
2.5. Technology-specific engagement measures
As a final theme to highlight, some, but not most (i.e., 11 of 42), studies included technology-specific measures of engagement (e.g., Baggett et al., 2010; Ingersoll & Berger, 2015; Vismara et al., 2016). For example, Vismara et al. (2016) recorded the number of logins to (i.e., time stamps), and the time spent viewing, each website tool (e.g., parent discussion board, goal-tracking program, modules) in their videoconferencing- and website-based treatment for parents of children with ASD. This trend toward new constructs aimed at capturing engagement with the technology in particular is perhaps not surprising, as such an approach affords new opportunities for the collection of a great amount of metadata (such as number of logins, time spent viewing certain materials, and the variety of materials accessed), which could allow researchers to refine treatments in ways that were not previously possible. This is exciting because it not only allows researchers to know more about a particular treatment and how individuals use it, but also presents more opportunities for testing theories about the mechanisms of engagement. For example, a researcher could use these measurements in a model to compare the effect of the amount of time spent using a computer program with the effect of the variety of activities accessed. More sophisticated analytic methods could allow researchers to examine patterns of use for successful participants versus dropouts.
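As one illustration of the kind of model just described, the following sketch regresses a post-treatment outcome on two usage metrics derived from platform logs. This is a minimal, hypothetical example: the variable names, the toy data, and the choice of ordinary least squares are our assumptions, not an analysis reported in any reviewed study.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-participant usage metadata exported from a treatment platform:
# total minutes on the program, number of distinct activity types accessed,
# and a post-treatment symptom score (lower = better).
data = pd.DataFrame({
    "total_minutes":    [120, 45, 300, 210, 90, 150, 60, 240],
    "n_activity_types": [5, 2, 8, 6, 3, 5, 2, 7],
    "post_symptoms":    [12, 20, 6, 9, 17, 11, 19, 7],
})

# Entering both predictors in one model lets us ask whether time-on-program
# or breadth of use is more strongly associated with the outcome.
model = smf.ols("post_symptoms ~ total_minutes + n_activity_types", data=data).fit()
print(model.summary())
```

With real data, such a model would also need to adjust for baseline severity and overall treatment dose, consistent with the dose confound discussed later in this review.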
One drawback of these new measures, however, is that they further decrease the likelihood of consistent measurement across studies. For example, while, as noted above, Vismara et al. (2016) assessed engagement in their online treatment for parents of children with ASD in two ways (i.e., number of logins and amount of time per website tool), Ingersoll and Berger (2015), who were also working with parents of children with ASD, also examined logins; however, instead of time per module, they assessed average time across all of the modules, as well as the percent of learning activities visited at least once and program completion (defined as visiting 75% or more of the learning activities). In another example, Wols et al. (2018) coded a range of behaviors that participants engaged in during the course of their intervention, a game developed to deliver an evidence-based intervention for anxiety. Engagement in this study was defined as the extent to which participants engaged with the anxiety treatment, as assessed through behaviors in the game; for example, individuals were engaging with the evidence-based principle of exposure if they explored in the game. The authors found that certain changes in in-game behaviors (e.g., duration of exploration, duration of hiding inside a chest) were predictive of improvement in symptoms post-treatment. The coding for this study was completed manually rather than being automated in the game; however, it would be feasible to program the game so that these behaviors were automatically coded (as sketched below). This example speaks to the richness of information that could potentially be provided through the use of technology. Finally, the ubiquity of wearable technologies creates the potential for psychophysiological measures of engagement that do not require participants to be at a particular location. While none of the articles we reviewed included psychophysiological measures of engagement, this is an interesting avenue for future work. Taken together, these examples highlight not only the variability possible in technology-specific measures of engagement, even in studies targeting similar populations with the same platform (i.e., web delivery), but also how such variability will likely only become more pronounced across different technologies such as videogames (e.g., Wegrzyn, Hearrington, Martin, & Randolph, 2012) or open-ended journals (e.g., Demaso, Marcus, Kinnamon, & Gonzalez-Heydrich, 2006).
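A minimal sketch of the automated coding idea mentioned above: an in-game logger that timestamps behavior events so that durations (e.g., time spent exploring or hiding) can be tallied without manual coding. The event names and class design are hypothetical and are not taken from Wols et al. (2018).

```python
import time
from collections import defaultdict

class EngagementLogger:
    """Accumulates time spent in named in-game states (hypothetical design)."""

    def __init__(self):
        self.durations = defaultdict(float)  # seconds per behavior
        self._current = None                 # (behavior, start_time)

    def start(self, behavior: str):
        """Call when the player enters a state, e.g., 'exploring' or 'hiding'."""
        self.stop()  # close any open state first
        self._current = (behavior, time.monotonic())

    def stop(self):
        """Close the current state and add its elapsed time to the tally."""
        if self._current is not None:
            behavior, started = self._current
            self.durations[behavior] += time.monotonic() - started
            self._current = None

# Usage: the game engine calls start()/stop() from its state transitions,
# and per-session totals are exported for analysis at the end of play.
logger = EngagementLogger()
logger.start("exploring")
# ... gameplay elapses ...
logger.start("hiding")   # implicitly closes "exploring"
logger.stop()
print(dict(logger.durations))
```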
Building upon the aforementioned points regarding technology-specific measures of engagement, it is difficult to conceive of whether and how to make comparisons on such measures between interventions that use technology and standard treatment approaches (i.e., is a login comparable to session attendance?). Similarly, is an hour spent in person with a therapist equivalent to an hour spent watching videos, completing exercises, or doing other activities on a computer or smartphone? Related to this question, Merry et al. (2012) compared the average session completion by adolescents randomized to the clinic-based intervention for depressive symptoms to the percentage of those randomized to the computerized version of the treatment who completed all seven modules. Notably, the authors operationalized other study variables more similarly between groups, making those comparisons possible and more meaningful (e.g., remission rates and mean changes on clinical measures were found to be higher in the technology group than in the TAU group, while rates of response to treatment did not differ). This raises the question of whether it would also be possible to operationalize engagement similarly (or as similarly as possible) across the two groups (e.g., average attendance or module completion).
That said, one strategy for capitalizing on these technology-specific measures of engagement is to examine within-group outcomes. For example, Anton et al. (2016) examined if and how variability in use of the components of a technology-enhanced BPT treatment program (Jones et al., 2014), including completing surveys, watching skills videos, and/or video-recording home practice, was associated with variability in treatment outcome. Of note, findings revealed that families with higher levels of engagement with each aspect of the technology evidenced better treatment outcomes (e.g., greater improvement in child problem behavior) than families who used the technology enhancements less frequently.
Finally, another type of technology-specific measure that could affect engagement is the quality of the treatment delivery and whether problems were encountered when using the technology (e.g., Carpenter et al., 2018; March et al., 2018; Stewart, Orengo-Aguayo, Cohen, Mannarino, & De Arellano, 2017). In some cases, this information was obtained in the form of qualitative feedback and described accordingly, which is fairly typical of preliminary and feasibility studies. Others, such as Carpenter et al. (2018), recorded the number of minutes spent addressing technical issues during each session of their videoconferencing-based CBT program, yet did not report those findings.
3. Summary, conclusions, and future directions
In summary, while a great deal of theoretical discussion and federal funding has been devoted to technology as a mental health service delivery vehicle over the past decade, the results of our review suggest that we cannot yet definitively conclude that technology does (or does not) fulfill its promise to enhance engagement in child and adolescent mental health services. As reflected in Table 1, empirical inquiry has been devoted to this topic; however, the still preliminary nature of much of this work, as well as the lack of consistency in study designs, constructs, and measures, makes it difficult to draw comparisons across studies. Our findings echo some of the points made by Hollis et al. (2017) in a review of 30 RCTs featuring digital health interventions used in mental health treatment, in which the authors concluded that methodological limitations created a challenge for drawing definitive conclusions. Given that it is difficult to determine if and what progress has been made toward more definitive conclusions, we use this status update as an opportunity to make the following recommendations regarding next steps in this area of research. Recommendations fall into the categories described in the following subsections.
3.1. Research on technology-enhanced treatments should collect and report data on engagement
Given that engagement has been a central rationale for the push toward technology-enhanced or telehealth services (see Gopalan et al., 2010; Jones, 2014; Fairburn & Patel, 2017 for reviews), it will be important for future work on technology to include measures of, and to analyze data on, engagement, whether or not engagement is the central focus of the study. Moreover, analysis of the aspects of treatment that allow us to capture and measure engagement, particularly those that are common to most if not all evidence-based treatments (e.g., homework completion), will increase the pace at which we can decide with certainty whether technology has the capacity to address the challenge of engagement or whether our efforts should be directed elsewhere.
To this end, we also recommend establishing a common battery of engagement constructs and measures to be included in research examining technology as a delivery vehicle in mental health intervention work with children and adolescents. Such a battery does not preclude investigators from including study-specific measures; rather, it simply creates a common point of comparison across studies that is otherwise lacking. One seemingly feasible way to facilitate progress toward a common battery is to preregister studies, including which engagement measure(s) will be used. Creating a battery may involve reviewing a large pool of items related to the particular dimension of interest and then choosing the best items. In some cases, measures may already exist; for example, Bangor, Kortum, and Miller (2008) provide a review of a general scale used to rate a product’s usability in very diverse settings and show that its scores are robust and interpretable. In turn, if there is a compelling theoretical and/or methodological reason to use different engagement measures in a particular study, and/or between two conditions in the same study, detailing this beforehand has the potential to increase confidence and credibility in the researchers’ interpretation of results. Preregistration also has the potential to provide context for what may otherwise be viewed as seemingly arbitrary cutoffs on various measures of engagement, e.g., visiting 75% or more of the learning activities (Ingersoll et al., 2017; Ingersoll & Berger, 2015) or parent completion of at least 50% of sessions (DuPaul et al., 2017).
Building upon the broader literature on engagement, common factors could include those that tap into both phase (e.g., initiation, attendance) and dimension (i.e., behavioral, attitudinal). That said, related work on non-completion (i.e., drop-out, attrition), which is essentially a behavioral marker of phase, suggests it may not be an ideal common measure of engagement across technology-enhanced or telehealth versus standard treatment approaches. The rationale for this assertion is that if drop-out is not equal across groups, it may be a function of the technology and/or unsuccessful efforts at random assignment and/or some unaccounted-for third variable (Gupta, 2011). If attrition is similar across groups, however, it suggests that technology neither helps nor hinders attrition, as there are likely other external factors that account for treatment completion or drop-out (Acierno et al., 2016).
Instead, more useful common measures across studies may be other behavioral, as well as attitudinal, markers of engagement. For example, time spent in treatment, measured in minutes, hours, sessions, or weeks, would be easily comparable between telehealth and clinic-based approaches. Similarly, completion of a handout assigned for homework, whether assigned and completed via technology or during an in-person session, would allow direct and useful comparison as well. As noted earlier, common attitudinal measures important to such work likely include constructs such as usability, satisfaction, and alliance. An important reason for including multiple engagement measures is that behavioral measures are often confounded with the dose of treatment: the more behaviors that are completed, such as homework or attendance, the larger the dose of therapy. It could therefore appear that engagement was responsible for improvement in certain outcomes when the actual improvement was caused by greater exposure to therapy.
3.2. Testing the capacity for technology to enhance engagement requires comparison
Based on our review of the literature to date, we recommend that more research include some point of comparison that allows for determination of the unique role of technology in engagement in particular. Such a guideline may be met with a randomized controlled trial (e.g., does the technology-enhanced treatment increase attendance at weekly sessions relative to the standard, evidence-based treatment approach?); however, we acknowledge that RCTs are not always ideal, given the expense and the time lag to determination and publication of results (see Jones, Anton, Zachary, & Loiselle, 2018). Thus, alternative designs should also be considered, including comparing the rate of engagement in technology-enhanced approaches to the established literature on engagement in a particular evidence-based approach (i.e., benchmarking).
Regardless of how the comparison group is conceived, measures of engagement need to be consistent between the experimental group (i.e., technology-enhanced or stand-alone technology) and the “control” group broadly defined (e.g., standard treatment) in order to allow comparisons and, in turn, conclusions regarding the unique role of technology in engagement in particular. If investigators posit that technology will improve the number of sessions completed by children, adolescents, or their parents, for example, then the design must include a standard and equivalent assessment of session completion in both the technology-enhanced and standard treatment groups. Spence et al.’s (2011) study is a good example of such an approach, as they compared the number of sessions completed at 12 weeks and 12 months in their web-based treatment group for adolescent anxiety to a clinic-based control group. Their findings revealed that while both groups experienced improvement, the pace of session completion was significantly slower for adolescents in the web-based condition. Given that efficiency of service delivery is an indicator of cost-effectiveness, one may interpret such patterns as less supportive of a telehealth approach. Another example is provided by Davidson et al. (2019), in which a tablet-enhanced version of Trauma-Focused Cognitive Behavioral Therapy was compared to standard trauma-focused CBT. Using the Child Involvement Rating Scale (Chu & Kendall, 2004), the authors were able to compare engagement directly across groups and found that it was equivalent for some treatment components (e.g., psychoeducation, relaxation, affective regulation, and cognitive coping) and better in the tablet condition for others (e.g., in vivo exposure, enhancing safety). While their samples were likely too small to provide sufficient power to detect differences, their graphical representation of results is helpful for getting a sense of general trends. If, however, most adolescents would not otherwise have access to or engage in treatment for anxiety, as reflected by the current data on engagement in children’s mental health, then perhaps the pros of engagement outweigh issues of efficiency or pace.
Inherent within this recommendation is a caveat regarding technology-specific engagement measures (e.g., number of logins, time spent viewing a particular webpage, etc.), which may not be generalizable to, or useful for, the control condition; however, we contend that they do have a place in this broader literature. Studies also need to include the more general measures of engagement discussed in the broader treatment outcome literature (i.e., phases, dimensions) in order to facilitate comparisons. Moreover, technology-specific engagement measures may be particularly important for within-group analyses aimed at determining if and how use of the technological components and/or telehealth approach is associated with variability in treatment process or outcomes. That said, although these are fairly intuitive measures of engagement in research using technology, some have noted the difficulties in using such metrics, given that electronic or user error can artificially inflate or deflate engagement data (Law, Neihart, & Dutt, 2018; Wainer & Ingersoll, 2015).
3.3. Determining whether technology can enhance engagement requires attention to type and quality
Related to the aforementioned point regarding type of technology, it is also conceivable that certain types of technology may be particularly helpful for improving engagement. The relative nascency of this literature precluded us from organizing the review by type of technology or from drawing broad conclusions about the capacity of one type of technology to improve engagement more than another. As an example of what such an organization might look like, Granic, Lobel, and Engels (2014) reviewed the cognitive, emotional, motivational, and social benefits of gaming for children and adolescents; building upon this framework, one could envision a similar structure for a review of the literature on technology-enhanced interventions with a category of outcomes entitled “engagement.” One caveat to this approach is that we also need to consider the types of technologies most accessible to our target populations, because without access there is simply less opportunity for impact. Technology-enhanced interventions that include gaming, for example, can be delivered via a smartphone, a relatively cost-effective platform that bridges the digital divide (see Jones, 2014 for a review). Enhancements that rely on other, still less accessible, platforms and applications (e.g., virtual or augmented reality) may be less useful, regardless of efficacy, if not widely available to therapists or consumers.
In addition to type of technology and accessibility, future work on technology-enhanced engagement must continue to attend, and perhaps pay even more attention, to the quality of the technology enhancements. One example that we will give here, which we have used elsewhere, is specific to Behavioral Parent Training (BPT) for early-onset behavior disorders (Jones et al., 2013). Interestingly, a number of smartphone applications that map onto the theory and/or practice elements of BPT are widely available to consumers. Most relevant to engagement is the array of time-out applications directed at parents of young children. Generally, these applications include all or some variation of the following: 1) the child’s name; 2) the child’s age; 3) a recommended time-out length based on the child’s age; and/or 4) a timer for parents to track the length of the time-out. Yet, when you ask parents why they are or are not doing time-out, a core component of BPT, it is rarely (if ever) because they have difficulty figuring out how long the time-out should be or keeping track of the time once they decide. Rather, it is getting the child into, and to stay in, time-out that is challenging. Thus, simply enhancing standard BPT with a time-out timer application is unlikely to increase the likelihood that parents will do time-out and/or that children’s behavior will in turn improve.
In addition to functionality and utility, quality may also be determined and evaluated in a more comprehensive way within and across studies testing technology-enhanced interventions. Generally, assessing quality may include measuring facets like usability and satisfaction. For example, Anton and Jones (2017) reported that parents were more likely to use the technology enhancements that they viewed as most useful, and greater use likely informs judgments of usefulness as well. What researchers tend to report less frequently, yet what seems equally if not more important, are any implementation glitches related to design or development, as well as any problems consumers have using the technology enhancements as a function of factors like geographic area and inconsistent, sparse, or no access to the internet. As such, a true evaluation of the efficacy of technology enhancements for engagement seems to depend on more consistent assessment and reporting of consumer ratings of multiple aspects of quality.
3.4. If, how, and for whom can technology-enhanced treatment improve engagement
Looking to the future, we think there are important questions that are not yet being asked in technology-enhanced or stand-alone technological treatment studies of engagement. The first question concerns the mechanism of engagement in treatment. When we talk about technology as a tool for engagement, we are often conceptualizing a mediational relationship (i.e., technology enhances engagement, which, in turn, improves outcomes). However, statistical mediation analyses were not used in the articles reviewed here, possibly due to challenges posed by the current designs and relatively small sample sizes (Dunn et al., 2015). Testing the mechanism is important because it could be the case that technology itself is not the specific variable affecting engagement. As a hypothetical example, if appointment reminders sent via mail were just as effective at increasing engagement as appointment reminders sent via a smartphone app, the actual causal mechanism would be the reminders, not the technology. However, technology potentially has the added advantage of scalability, which makes it easier not only to implement multiple methods for increasing engagement (such as appointment reminders or homework prompts), but also to track whether these methods are working, allowing for a more granular understanding of how the treatment works. The second question asks for whom technology increases engagement; that is, what characteristics or other covariates (i.e., moderators) might interact with the type of treatment to bolster the capacity for technology to enhance engagement? Combining these questions, one might propose a model in which covariates and baseline characteristics moderate the effect of treatment assignment on the level of engagement, which in turn predicts the clinical outcome; this model would test moderation of the mediating effect of engagement (MacKinnon, 2011), as formalized below. Given the lack of evidence we found for engagement being a central research question at this point, it is unclear whether and when the field will move in this direction. One barrier is that the sample sizes required for these types of analyses might be greater than for a t-test or a multiple regression without interactions, depending on the size of the effects (Cohen, Cohen, West, & Aiken, 2003; Fritz & MacKinnon, 2007). This means that current pilot studies and even some RCTs likely lack the power to detect these effects. However, Carpenter et al. (2018), in a multi-site pilot study of a videoconferencing CBT intervention, reported differences in their engagement measures with respect to site of treatment, such that their Boston site had 100% retention of participants while their Miami site had 60%. They reported on and discussed these differences even though their sample size was only 13, precluding any formal analyses. We include this example to speak to the possibility of gleaning potentially useful patterns even when sample sizes are small.
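To make the mediational logic above concrete, a standard single-mediator formulation (following MacKinnon, 2011) can be written as follows, where T is treatment assignment, M is engagement, Y is the clinical outcome, and W is a baseline moderator. The specific symbols are ours, chosen for illustration.

```latex
% Simple mediation: engagement M transmits the effect of treatment T on outcome Y
M = i_1 + aT + e_1
Y = i_2 + c'T + bM + e_2
% Indirect (mediated) effect of T on Y through engagement: ab

% Moderated mediation: baseline characteristic W moderates the T -> M path
M = i_1 + a_1 T + a_2 W + a_3 (T \times W) + e_1
% Conditional indirect effect at a given value of W: (a_1 + a_3 W)\, b
```

Detecting the interaction term and the conditional indirect effect is precisely where the sample-size demands noted above arise.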
HIGHLIGHTS.
Many studies of technology-enhanced treatments did not set out to examine the impact of technology on treatment engagement.
The concept of engagement is not well defined or consistently measured in studies featuring technology.
Study design is a limiting factor in our ability to determine whether technology-enhanced treatments increase engagement.
Acknowledgments
The authors would like to thank the anonymous reviewers for their helpful feedback on earlier drafts of the manuscript.
Role of funding sources
Support for this project was provided by grants R01MH100377, R21MH113887-01A1, and R34MH082956 from the National Institute of Mental Health (Deborah J. Jones, Principal Investigator). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Footnotes
Declaration of Competing Interest
There are no conflicts of interest for any of the authors.
References
- Acierno R, Gros DF, Ruggiero KJ, Hernandez-Tejada MA, Knapp RG, Lejuez CW, … Tuerk PW (2016). Behavioral activation and therapeutic exposure for posttraumatic stress disorder: A noninferiority trial of treatment delivered in person versus home-based telehealth. Depression and Anxiety, 33(5), 415–423. 10.1002/da.22476.
- Anton MT, & Jones DJ (2017). Adoption of technology-enhanced treatments: Conceptual and practical considerations. Clinical Psychology: Science and Practice, 24, 223–240. 10.1111/cpsp.12197.
- Anton MT, & Jones DJ (2019). Parent-therapist alliance and technology use in behavioral parent training: A brief report. Psychological Services, 16(2), 260–265. 10.1037/ser0000303.
- Anton MT, Jones DJ, Cuellar J, Forehand R, Gonzales M, Honeycutt A, … Pittman S (2016). Cognitive and Behavioral Practice, 23(2), 194–204. 10.1016/j.cbpra.2015.04.005.
- Baggett KM, Davis B, Feil EG, Sheeber LL, Landry SH, Carta JJ, & Leve C (2010). Technologies for expanding the reach of evidence-based interventions: Preliminary results for promoting social-emotional development in early childhood. Topics in Early Childhood Special Education, 29(4), 226–238. 10.1177/0271121409354782.
- Bangor A, Kortum PT, & Miller JT (2008). An empirical evaluation of the system usability scale. International Journal of Human–Computer Interaction, 24(6), 574–594. 10.1080/10447310802205776.
- Bearss K, Burrell TL, Challa SA, Postorino V, Gillespie SE, Crooks C, & Scahill L (2018). Feasibility of parent training via telehealth for children with autism spectrum disorder and disruptive behavior: A demonstration pilot. Journal of Autism and Developmental Disorders, 48(4), 1020–1030. 10.1007/s10803-017-3363-2.
- Becker KD, Boustani M, Gellatly R, & Chorpita BF (2018). Forty years of engagement research in children's mental health services: Multidimensional measurement and practice elements. Journal of Clinical Child & Adolescent Psychology, 47, 1–23. 10.1080/15374416.2017.1326121.
- Becker KD, Lee BR, Daleiden EL, Lindsey M, Brandt NE, & Chorpita BF (2015). The common elements of engagement in children's mental health services: Which elements for which outcomes? Journal of Clinical Child & Adolescent Psychology, 44, 30–43. 10.1080/15374416.2013.814543.
- Bul KCM, Franken IHA, Van der Oord S, Kato PM, Danckaerts M, Vreeke LJ, … Maras A (2015). Development and user satisfaction of "Plan-It Commander," a serious game for children with ADHD. Games for Health Journal, 4(6), 502–512. 10.1089/g4h.2015.0021.
- Carpenter AL, Pincus DB, Furr JM, & Comer JS (2018). Working from home: An initial pilot examination of videoconferencing-based cognitive behavioral therapy for anxious youth delivered to the home setting. Behavior Therapy, 49(6), 917–930. 10.1016/j.beth.2018.01.007.
- Chacko A, Isham A, Cleek AF, & McKay MM (2016). Using mobile health technology to improve behavioral skill implementation through homework in evidence-based parenting intervention for disruptive behavior disorders in youth: Study protocol for intervention development and evaluation. Pilot and Feasibility Studies, 2, 1–11.
- Chorpita BF, & Daleiden EL (2009). Mapping evidence-based treatments for children and adolescents: Application of the distillation and matching model to 615 treatments from 322 randomized trials. Journal of Consulting and Clinical Psychology, 77(3), 566–579. 10.1037/a0014565.
- Chorpita BF, Daleiden EL, & Weisz JR (2005). Identifying and selecting the common elements of evidence based interventions: A distillation and matching model. Mental Health Services Research, 7(1), 5–20. 10.1007/s11020-005-1962-6.
- Chu BC, & Kendall PC (2004). Positive association of child involvement and treatment outcome within a manual-based cognitive-behavioral treatment for children with anxiety. Journal of Consulting and Clinical Psychology, 72(5), 821. 10.1037/0022-006X.72.5.821.
- Cohen J, Cohen P, West SG, & Aiken LS (2003). Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.). Lawrence Erlbaum.
- Comer JS, Furr JM, Miguel EM, Cooper-Vince CE, Carpenter AL, Elkins RM, … Chase R (2017). Remotely delivering real-time parent training to the home: An initial randomized trial of Internet-delivered parent–child interaction therapy (I-PCIT). Journal of Consulting and Clinical Psychology, 85(9), 909–917. 10.1037/ccp0000230.
- Davidson TM, Bunnell BE, Saunders BE, Hanson RF, Danielson CK, Cook D, … Ruggiero KJ (2019). Pilot evaluation of a tablet-based application to improve quality of care in child mental health treatment. Behavior Therapy, 50(2), 367–379. 10.1016/j.beth.2018.07.005.
- Day JJ, & Sanders MR (2018). Do parents benefit from help when completing a self-guided parenting program online? A randomized controlled trial comparing Triple P Online with and without telephone support. Behavior Therapy, 49(6), 1020–1038. 10.1016/j.beth.2018.03.002.
- Demaso DR, Marcus NE, Kinnamon C, & Gonzalez-Heydrich J (2006). The depression experience journal: A computer-based intervention for families facing childhood depression. Journal of the American Academy of Child & Adolescent Psychiatry, 45(2), 158–165. 10.1097/01.chi.0000190353.98570.fe.
- Dunn G, Emsley R, Liu H, Landau S, Green J, White I, & Pickles A (2015). Evaluation and validation of social and psychological markers in randomised trials of complex interventions in mental health: A methodological research programme. Health Technology Assessment, 19(93). 10.3310/hta19930.
- DuPaul GJ, Kern L, Belk G, Custer B, Hatfield A, Daffner M, & Peek D (2017). Promoting parent engagement in behavioral intervention for young children with ADHD: Iterative treatment development. Topics in Early Childhood Special Education, 38, 42–53. 10.1177/0271121417746220.
- Enebrink P, Högström J, Forster M, & Ghaderi A (2012). Internet-based parent management training: A randomized controlled study. Behaviour Research and Therapy, 50, 240–249. 10.1016/j.brat.2012.01.006.
- Fairburn CG, & Patel V (2017). The impact of digital technology on treatments and their dissemination. Behaviour Research and Therapy, 88, 19–25. 10.1176/appi.focus.16405.
- Forehand R, Jones DJ, & Parent J (2013). Behavioral parenting interventions for child disruptive behaviors and anxiety: What's different and what's the same. Clinical Psychology Review, 33(1), 133–145. 10.1016/j.cpr.2012.10.010.
- Franke N, Keown LJ, & Sanders MR (2016). An RCT of an online parenting program for parents of preschool-aged children with ADHD symptoms. Journal of Attention Disorders. 10.1177/1087054716667598.
- Freeman EV, & Kendziora KT (2017). Mental health needs of children and youth: The benefits of having schools assess available programs and services. Washington, DC: American Institutes for Research.
- Fritz MS, & MacKinnon DP (2007). Required sample size to detect the mediated effect. Psychological Science, 18, 233–239. 10.1111/j.1467-9280.2007.01882.x.
- Gopalan G, Goldstein L, Klingenstein K, Sicher C, Blake C, & McKay MM (2010). Engaging families into child mental health treatment: Updates and special considerations. Journal of the Canadian Academy of Child and Adolescent Psychiatry, 19, 182–196. PMC2938751.
- Granic I, Lobel A, & Engels RC (2014). The benefits of playing video games. American Psychologist, 69(1), 66–78. 10.1037/a0034857.
- Gupta SK (2011). Intention-to-treat concept: A review. Perspectives in Clinical Research, 2(3), 109. 10.4103/2229-3485.83221.
- de Haan AM, Boon AE, de Jong JTVM, Hoeve M, & Vermeiren RJM (2013). A meta-analytic review on treatment dropout in child and adolescent outpatient mental health care. Clinical Psychology Review, 33, 698–711. 10.1016/j.cpr.2013.04.005.
- Högström J, Enebrink P, Melin B, & Ghaderi A (2015). Eighteen-month follow-up of internet-based parent management training for children with conduct problems and the relation of homework compliance to outcome. Child Psychiatry and Human Development, 46(4), 577–588. 10.1007/s10578-014-0498-7.
- Hollis C, Falconer CJ, Martin JL, Whittington C, Stockton S, Glazebrook C, & Davies EB (2017). Annual research review: Digital health interventions for children and young people with mental health problems – a systematic and meta-review. Journal of Child Psychology and Psychiatry, 58(4), 474–503. 10.1111/jcpp.12663.
- Ingersoll B, & Berger NI (2015). Parent engagement with a telehealth-based parent-mediated intervention program for children with autism spectrum disorders: Predictors of program use and parent outcomes. Journal of Medical Internet Research, 17, e227. 10.2196/jmir.4913.
- Ingersoll B, Shannon K, Berger N, Pickard K, & Holtz B (2017). Self-directed telehealth parent-mediated intervention for children with autism spectrum disorder: Examination of the potential reach and utilization in community settings. Journal of Medical Internet Research, 19, e248. 10.2196/jmir.7484.
- Jones DJ (2014). Future directions in the design, development, and investigation of technology as a service delivery vehicle. Journal of Clinical Child & Adolescent Psychology, 43(1), 128–142. 10.1080/15374416.2013.859082.
- Jones DJ, Anton MT, Zachary C, & Loiselle R (2018). Conducting psychological intervention research in the information age: Reconsidering the "state of the field". Journal of Technology in Behavioral Science, 4(3), 210–218. 10.1007/s41347-018-0072-4.
- Jones DJ, Forehand R, Cuellar J, Kincaid C, Parent J, Fenton N, & Goodrum N (2013). Harnessing innovative technologies to advance children's mental health: Behavioral parent training as an example. Clinical Psychology Review, 33(2), 241–251. 10.1016/j.cpr.2012.11.003.
- Jones DJ, Forehand R, Cuellar J, Parent J, Honeycutt A, Khavjou O, et al. (2014). Technology-enhanced program for child disruptive behavior disorders: Development and pilot randomized control trial. Journal of Clinical Child and Adolescent Psychology, 43, 88–101. 10.1080/15374416.2013.822308.
- Khanna MS, & Kendall PC (2010). Computer-assisted cognitive behavioral therapy for child anxiety: Results of a randomized clinical trial. Journal of Consulting and Clinical Psychology, 78, 737–745. 10.1037/a0019739.
- Kobak KA, Stone WL, Wallace E, Warren Z, Swanson A, & Robson K (2011). A web-based tutorial for parents of young children with autism: Results from a pilot study. Telemedicine Journal and E-Health, 17, 804–808. 10.1089/tmj.2011.0060.
- Law GC, Neihart M, & Dutt A (2018). The use of behavior modeling training in a mobile app parent training program to improve functional communication of young children with autism spectrum disorder. Autism, 22(4), 424–439. 10.1177/1362361316683887.
- Lenhard F, Andersson E, Mataix-Cols D, Rück C, Vigerland S, Högström J, … Serlachius E (2017). Therapist-guided, internet-delivered cognitive-behavioral therapy for adolescents with obsessive-compulsive disorder: A randomized controlled trial. Journal of the American Academy of Child and Adolescent Psychiatry, 56(1), 10–19.e2. 10.1016/j.jaac.2016.09.515.
- MacKinnon DP (2011). Integrating mediators and moderators in research design. Research on Social Work Practice, 21(6). 10.1177/1049731511414148.
- March S, Spence SH, Donovan CL, & Kenardy JA (2018). Large-scale dissemination of internet-based cognitive behavioral therapy for youth anxiety: Feasibility and acceptability study. Journal of Medical Internet Research, 20(7), 1–15. 10.2196/jmir.9211.
- McGrath PJ, Lingley-Pottie P, Thurston C, MacLean C, Cunningham C, Waschbusch DA, … Chaplin W (2011). Telephone-based mental health interventions for child disruptive behavior or anxiety disorders: Randomized trials and overall analysis. Journal of the American Academy of Child and Adolescent Psychiatry, 50, 1162–1172. 10.1016/j.jaac.2011.07.013.
- Meadan H, Meyer LE, Snodgrass MR, & Halle JW (2013). Coaching parents of young children with autism in rural areas using internet-based technologies: A pilot program. Rural Special Education Quarterly, 32(3), 3–10. 10.1177/875687051303200302.
- Merry SN, Stasiak K, Shepherd M, Frampton C, Fleming T, & Lucassen MF (2012). The effectiveness of SPARX, a computerized self help intervention for adolescents seeking help for depression: Randomized controlled non-inferiority trial. BMJ, 344, e2598. 10.1136/bmj.e2598.
- Mohr DC, Burns MN, Schueller SM, Clarke G, & Klinkman M (2013). Behavioral intervention technologies: Evidence review and recommendations for future research in mental health. General Hospital Psychiatry, 35, 332–338. 10.1016/j.genhosppsych.2013.03.008.
- Muñoz RF (2010). Using evidence-based internet interventions to reduce health disparities worldwide. Journal of Medical Internet Research, 12(5), e60. 10.2196/jmir.1463.
- Myers K, Vander Stoep A, Zhou C, McCarty CA, & Katon W (2015). Effectiveness of a telehealth service delivery model for treating attention deficit hyperactivity disorder: A community-based randomized control trial. Journal of the American Academy of Child and Adolescent Psychiatry, 54, 263–274. 10.1016/j.jaac.2015.01.009.
- National Institute of Mental Health (2017). Technology and the future of mental health treatment. https://www.nimh.nih.gov/health/topics/technology-and-the-future-of-mental-health-treatment/index.shtml.
- Nefdt N, Koegel R, Singer G, & Gerber M (2010). The use of a self-directed learning program to provide introductory training in pivotal response treatment to parents of children with autism. Journal of Positive Behavior Interventions, 12(1), 23–32. 10.1177/1098300709334796.
- Nelson EL, & Bui T (2010). Rural technology services for children and adolescents. Journal of Clinical Psychology, 66, 490–501. 10.1002/jclp.20682.
- Nock MK, & Ferriter C (2005). Parent management of attendance and adherence in child and adolescent therapy: A conceptual and empirical review. Clinical Child and Family Psychology Review, 8, 149–166. 10.1007/s10567-005-4753-0.
- Price M, Yuen EK, Goetter EM, Herbert JD, Forman EM, Acierno R, & Ruggiero KJ (2014). mHealth: A mechanism to deliver more accessible, more effective mental health care. Clinical Psychology & Psychotherapy, 21(5), 427–436. 10.1002/cpp.1855.
- Sibley MH, Comer JS, & Gonzalez J (2017). Delivering parent-teen therapy for ADHD via videoconferencing: A preliminary investigation. Journal of Psychopathology and Behavioral Assessment, 39, 467–485. 10.1007/s10862-017-9598-6.
- Southam-Gerow MA, & Prinstein MJ (2014). Evidence base updates: The evolution of the evaluation of psychological treatments for children and adolescents. Journal of Clinical Child & Adolescent Psychology, 43(1), 1–6. 10.1080/15374416.2013.855128.
- Spence SH, Donovan CL, March S, Gamble A, Anderson RE, Prosser S, & Kenardy J (2011). A randomized controlled trial of online versus clinic-based CBT for adolescent anxiety. Journal of Consulting and Clinical Psychology, 79, 629–642. 10.1037/a0024512.
- Stasiak K, Hatcher S, Frampton C, & Merry SN (2014). A pilot double blind randomized placebo controlled trial of a prototype computer-based cognitive behavioral therapy program for adolescents with symptoms of depression. Behavioural and Cognitive Psychotherapy, 42(4), 385–401. 10.1017/S1352465812001087.
- Staudt M (2007). Treatment engagement with caregivers of at-risk children: Gaps in research and conceptualization. Journal of Child and Family Studies, 16, 183–196. 10.1007/s10826-006-9077-2.
- Stewart RW, Orengo-Aguayo RE, Cohen JA, Mannarino AP, & De Arellano MA (2017). A pilot study of trauma-focused cognitive-behavioral therapy delivered via telehealth technology. Child Maltreatment, 22(4), 324–333. 10.1177/1077559517725403.
- Storch EA, Salloum A, King MA, Crawford EA, Andel R, McBride NM, & Lewin AB (2015). A randomized controlled trial in community mental health centers of computer-assisted cognitive behavioral therapy versus treatment as usual for children with anxiety. Depression and Anxiety, 32, 843–852. 10.1002/da.22399.
- Tate DF, & Zabinski MF (2004). Computer and Internet applications for psychological treatment: Update for clinicians. Journal of Clinical Psychology, 60, 209–220. 10.1002/jclp.10247.
- Taylor TK, Webster-Stratton C, Feil EG, Broadbent B, Widdop CS, & Severson HH (2008). Computer-based intervention with coaching: An example using the Incredible Years program. Cognitive Behaviour Therapy, 37, 233–246. 10.1080/16506070802364511.
- Tetley A, Jinks M, Huband N, & Howells K (2011). A systematic review of measures of therapeutic engagement in psychosocial and psychological treatment. Journal of Clinical Psychology, 67, 927–941.
- Tse YJ, McCarty CA, Vander Stoep A, & Myers KM (2015). Teletherapy delivery of caregiver behavior training for children with attention deficit hyperactivity disorder. Telemedicine Journal and E-Health, 21, 451–458. 10.1089/tmj.2014.0132.
- Vigerland S, Ljótsson B, Thulin U, Öst LG, Andersson G, & Serlachius E (2016). Internet-delivered cognitive behavioral therapy for children with anxiety disorders: A randomized controlled trial. Behaviour Research and Therapy, 76, 47–56. 10.1016/j.brat.2015.11.006.
- Vismara LA, McCormick C, Young GS, Nadhan A, & Monlux K (2013). Preliminary findings of a telehealth approach to parent training in autism. Journal of Autism and Developmental Disorders, 43, 2953–2969. 10.1007/s10803-013-1841-8.
- Vismara LA, McCormick CE, Wagner AL, Monlux K, Nadhan A, & Young GS (2016). Telehealth parent training in the Early Start Denver Model: Results from a randomized controlled study. Focus on Autism and Other Developmental Disabilities, 33, 67–79. 10.1177/1088357616651064.
- Vismara LA, Young GS, & Rogers SJ (2012). Telehealth for expanding the reach of early autism training to parents. Autism Research and Treatment, 121878. 10.1155/2012/121878.
- Wainer AL, & Ingersoll BR (2013). Disseminating ASD interventions: A pilot study of a distance learning program for parents and professionals. Journal of Autism and Developmental Disorders, 43, 11–24. 10.1007/s10803-012-1538-4.
- Wainer AL, & Ingersoll BR (2015). Increasing access to an ASD imitation intervention via a telehealth parent training program. Journal of Autism and Developmental Disorders, 45(12), 3877–3890. 10.1007/s10803-014-2186-7.
- Wegrzyn SC, Hearrington D, Martin T, & Randolph AB (2012). Brain games as a potential nonpharmaceutical alternative for the treatment of ADHD. Journal of Research on Technology in Education, 45(2), 107–130. 10.1080/15391523.2012.10782599.
- Wols A, Lichtwarck-Aschoff A, Schoneveld EA, & Granic I (2018). In-game play behaviors during an applied video game for anxiety prevention predict successful intervention outcomes. Journal of Psychopathology and Behavioral Assessment, 40(4), 655–668. 10.1007/s10862-018-9684-4.
- Wuthrich VM, Rapee RM, Cunningham MJ, Lyneham HJ, Hudson JL, & Schniering CA (2012). A randomized controlled trial of the Cool Teens CD-ROM computerized program for adolescent anxiety. Journal of the American Academy of Child & Adolescent Psychiatry, 51(3), 261–270. 10.1016/j.jaac.2011.12.002.
- Xie Y, Dixon JF, Yee OM, Zhang J, Chen YA, Deangelo S, … Schweitzer JB (2013). A study on the effectiveness of videoconferencing on teaching parent training skills to parents of children with ADHD. Telemedicine Journal and E-Health, 19, 192–199. 10.1089/tmj.2012.0108.
- Yasui M, & Henry DB (2014). Shared understanding as a gateway for treatment engagement: A preliminary study examining the effectiveness of the culturally enhanced video feedback engagement intervention. Journal of Clinical Psychology, 70(7), 658–672. 10.1002/jclp.22058.