Abstract
Psychotherapy is on the verge of a technology-inspired revolution. The concurrent maturation of communication, signal processing, and machine learning technologies calls for an earnest look at how these technologies may be used to improve the quality of psychotherapy. Here, we discuss three research domains where technology is likely to have a significant impact: (1) mechanism and process, (2) training and feedback, and (3) technology-mediated treatment modalities. For each domain, we describe current and forthcoming examples of how new technologies may change established applications. Moreover, for each domain we present research questions that touch on theoretical, systemic, and implementation issues. Ultimately, psychotherapy is a decidedly human endeavor, and thus the application of modern technology to therapy must capitalize on – and enhance – our human capacities as counselors, students, and supervisors.
Keywords: technology, psychotherapy, future, machine learning, training
“New directions in science are launched by new tools much more often than by new concepts. The effect of a concept-driven revolution is to explain old things in new ways. The effect of a tool-driven revolution is to discover new things that have to be explained.”
– Freeman Dyson, Imagined Worlds (Dyson, 1997)
Around the time psychotherapy became a formal medical subspecialty, the Wright brothers were building the first airplane (McCullough, 2015). In the early days of aviation, flying was done by feel, which contributed to accidents and other limitations that prevented flight from becoming a primary commercial form of transportation. Now, a glance into the cockpit of a modern aircraft reveals numerous instruments that aid pilots. Moreover, fighter pilots have heads-up displays that push information into their field of view in real time (Kinney, 2006). There may be flying purists who yearn for the days when there was a more intimate connection between the pilot, the plane, and the air, but few would dispute that the technological augmentation of pilot skill has dramatically benefited travelers – making air travel among the safest forms of transportation available.
While aviation and psychotherapy are notably different tasks, both are complex activities with high-stakes outcomes. However, the contrast in the use of assistive technology between aviation and psychotherapy is stark. Just as it was a century ago, psychotherapy largely remains a conversation between two individuals in the same room - unaided by external tools. Unlike aviation, we do not yet fully understand how psychotherapy works, and thus, instrumentation and technology do not provide counselors with direct feedback on how to improve sessions or avoid negative outcomes (Tracey, Wampold, Lichtenberg, & Goodyear, 2014).
Admittedly, there have been important innovations in psychotherapy, including the adoption of clinical trials, the invention of audio recording and direct observation of clinical interactions, formal evidence for the importance of the therapeutic alliance, recognition of the importance of cultural processes, the development of many evidence-based treatments, meta-analysis for the aggregation of studies, advanced statistics to model change and estimate performance differences between counselors, and large naturalistic effectiveness studies with hundreds of thousands of clients. In addition, mainstream psychotherapy has evolved from early long-term analysis to briefer, goal-directed and active models (Wampold & Imel, 2015). However, when a therapist walks into the consulting room, they are mostly on their own. Training and experience inform counselors’ choices within the session, but in-the-moment decisions still rely on the felt sense of counselors to respond appropriately. Research remains dependent on standard, decades-old methodologies to evaluate the counselor-client interaction (e.g., surveys of the client or counselor, observer ratings).
This is changing. In this article, we argue that the next leap forward in psychotherapy will not come from new treatments or theories of change, but from new technology that will allow scientists to upend standard assumptions about the practice of psychotherapy and the way we evaluate it. We describe three areas where technology could notably enhance psychotherapy research: 1) Mechanism and process, 2) Training and feedback, and 3) Technology-mediated treatment modalities. For each area, we present an initial example of how technology may impact the practice and science of psychotherapy, and then discuss potential research questions. In sum, we argue that technology is on the cusp of facilitating a deluge of gauges and tools that counselors may use to inform - and in some cases, fundamentally alter - their practice. These tools have the potential to shape the course of the next generation of psychotherapy research, allowing researchers to address problems that were previously intractable, and revealing new questions that the field had not, or could not, previously consider.
How Does Psychotherapy Work?
In a large comparative psychotherapy trial with 600 clients, each client receives 20 sessions of treatment. All 12,000 sessions are recorded for the purposes of quality assurance. To check internal validity, approximately 200 sessions are coded by humans for adherence to treatment models as well as various common factors (e.g., empathy). In combination with sessions from prior trials coded by humans, these 200 sessions are used to train machine learning models that automatically label each utterance from all 12,000 sessions, classifying utterances into specific, theory-based categories. Combined with low-level indicators of emotion and session topics derived from automatically transcribed text, these utterance-level annotations are used to conduct detailed process analyses exploring both common and treatment-specific predictors of outcomes, such as symptom change, life functioning, and well-being.
There is no shortage of hypotheses about why psychotherapy helps clients change. For example, during a Motivational Interviewing session, the counselor is supposed to express understanding of the client and make strategic reflections of what she hears – particularly statements about changing substance use behaviors – all while maintaining a compassionate and nonjudgmental stance. This interpersonal context is hypothesized to reduce defensiveness and provide the freedom and support necessary for an ambivalent client to explore, and choose, alternative behaviors. Beyond treatment-specific theories, general models of ‘common factors’ propose different conceptualizations of how the therapeutic relationship and other elements of the therapeutic encounter lead to clinical improvement (Wampold & Imel, 2015).
For decades, the evaluation of psychotherapy mechanisms has relied on either self-report measures or behavioral coding systems to map the complex verbal and interpersonal raw data of a session to a simpler numeric representation (Kirschenbaum, 2004). A session could comprise thousands of words and a variety of vocal features, which is dramatically simplified to a small number of behavioral codes or a single client or counselor rating. Analyses have generally focused on either highly localized, within-session associations (e.g., does a counselor intervention predict a client behavior later in the session?) or broad summary associations (e.g., do session-level adherence or competence ratings predict treatment response?). Unfortunately, these research strategies have often led to equivocal findings, providing only marginal support for some hypothesized relationships (Magill et al., 2014), and none for others (Webb, DeRubeis, & Barber, 2010).
There are two critical limitations of current mechanism research: (1) scale and (2) specificity. Focusing on scale, a recent meta-analysis of all outcome studies of Motivational Interviewing included 119 studies and 204,415 clients (Lundahl, Kunz, Brownell, Tollefson, & Burke, 2010). However, a recent meta-analysis of Motivational Interviewing mechanism studies included a total of 783 clients across 10 studies for the critical mechanism analyses (Magill et al., 2014) -- roughly 4/10ths of 1% of the clients studied in the general MI outcome literature cited above. The current technology of human-based coding dramatically hinders mechanism research and, with it, our understanding of why psychotherapies work.
Similarly, the specificity of measures that assess treatment processes, like the quality of the therapeutic relationship, is limited by human-based assessment. Across interventions and disorders, hundreds of studies suggest the therapeutic alliance is robustly correlated with treatment outcomes (r ≈ .30; Horvath, Del Re, Flückiger, & Symonds, 2011). Although coding manuals provide detailed descriptions of processes and behaviors, behavioral coding ultimately relies on the interpretation and “felt sense” of raters to reflect on the psychological processes embedded in the therapeutic relationship. While felt sense is important, it may be limited in its ability to detect and characterize more nuanced processes that occur outside the momentary awareness of the participants. A lack of direct indicators of relational processes as they unfold during the interaction impedes our ability to test how they work and the manner in which they are related to treatment outcomes.
The fundamental problem underlying traditional research is the reliance on humans and human judgment as the assessment tool. Current technologies such as automated speech recognition, natural language processing algorithms, and machine learning models offer the possibility of replacing human evaluators and greatly enhancing scale and specificity in studying treatment mechanisms. This section’s opening vignette is one example of such an integration in motion. In the past two decades, major strides have been made across various domains of machine learning. At a basic level, machine learning models are simply prediction models, which includes traditional regression models. However, more modern machine learning was motivated by research areas with vast quantities of data and predictors (e.g., genetics) and greatly aided by increases in computing power. As two examples, Support Vector Machines and Maximum Entropy models can take large numbers of predictors or features to classify an outcome using statistical optimization (e.g., Do the words in this utterance indicate it is a reflection or an open question? See Jurafsky and Martin (2009) for an overview of machine learning approaches to natural language processing). More recently, deep learning models based on neural networks have led to dramatic improvements for some tasks, such as speech recognition and computer vision (Graves, Mohamed & Hinton, 2013).
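As a concrete illustration of the kind of utterance classifier described above (not the specific models used in the work cited here), the following minimal sketch trains a linear support vector classifier on a handful of hypothetical human-coded utterances and applies it to a new utterance. The labels, example sentences, and feature choices are illustrative placeholders; in practice the training set would contain thousands of coded utterances, and model quality would be checked against held-out human codes.

```python
# Minimal sketch: train an utterance-level classifier on human-coded utterances,
# then label utterances from new, uncoded sessions. All data here are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Human-coded training data: (utterance text, behavioral code)
coded_utterances = [
    ("It sounds like you are torn about cutting back on your drinking.", "reflection"),
    ("What would be different if you made that change?", "open_question"),
    ("Did you drink at all this week?", "closed_question"),
    ("You are not sure the cost is worth it.", "reflection"),
    # ... in practice, thousands of utterances from human-coded sessions
]
texts, labels = zip(*coded_utterances)

# Bag-of-words (unigram + bigram) features feeding a linear support vector classifier
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(texts, labels)

# Automatically label an utterance from a new session
print(model.predict(["Tell me more about what happened after the argument."]))
```

Applied to every utterance in every recorded session of a trial, a pipeline of this general shape is what would make the process analyses in the opening vignette feasible.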
These types of tools can dramatically scale up the evaluation of psychotherapy and offer the possibility to test more sophisticated hypotheses about change. Preliminary work in this vein has begun, in which natural language processing and machine learning models have been applied to session transcripts to automatically classify psychotherapies and intervention types (Imel et al., 2014), generate fidelity codes (Tanana et al., 2016; see also the following section on training), and test basic theories of empathy in the context of psychotherapy (Lord, Sheng, Imel, Baer, & Atkins, 2014).
The use of these tools presents a number of opportunities for psychotherapy process research. Combined with session-to-session standardized assessment of client outcome in large-scale naturalistic outcome studies (Saxon & Barkham, 2012), and automatically estimated metrics of counselor interventions (see training and feedback section below), the use of raw semantic and vocal data could spur mechanism research – based not on 100 or fewer sessions as is common in current mechanism studies, but on thousands or hundreds of thousands of sessions. Large-scale process studies like the one described in the opening vignette may help to finally offer adequately powered tests of treatment process. For example, if a large clinical trial failed to provide evidence that one treatment was superior to another, automatic coding of therapist interventions might make it possible to compare what specific interventions therapists utilized. At a basic level, researchers could make stronger claims about the internal validity of their interventions (i.e., that therapists from one condition did not utilize interventions from the other; see Leichsenring & Salzer, 2013). Similarly, in a sample of sufficient scale, it would be possible to examine detailed hypotheses about the timing and accuracy of different interventions (e.g., therapist interpretations in dynamic therapies).
There may also be an opportunity to identify and test new and more complex mechanisms when indicators of basic psychological processes are extracted from the raw data. For example, estimates of emotional expression extracted from voice could be fed into dynamic systems models (Gelo & Salvatore, 2016) that examine how dyadic influence within relationships maintains homeostatic balance (e.g., dyadic co-regulation and stress buffering), or alternatively pushes partners out of balance (e.g., emotional processes that systematically trend toward more extreme emotional expression; Butler & Randall, 2013). Once the infrastructure to automatically collect and transcribe thousands of psychotherapy sessions becomes more common, researchers may be able to predict response to therapy using statistical models that only rely on the words that are said in a session (and how they are said) in a way that is not limited by our current theoretical understanding of the mechanisms of change.
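To make the dynamic systems idea concrete, the following toy sketch fits a lag-1 vector autoregression, one simple operationalization of dyadic influence, to simulated per-turn arousal series for a client and therapist. The coupling coefficients, noise levels, and data are invented for illustration; real analyses would use arousal estimates extracted from session audio and more elaborate models.

```python
# Toy sketch: lag-1 vector autoregression of client/therapist vocal arousal,
# one common operationalization of dyadic influence. Data here are simulated;
# in practice the series would come from per-turn acoustic arousal estimates.
import numpy as np

rng = np.random.default_rng(0)
n_turns = 200
client = np.zeros(n_turns)
therapist = np.zeros(n_turns)
for t in range(1, n_turns):  # simulate two coupled arousal series
    client[t] = 0.5 * client[t - 1] + 0.3 * therapist[t - 1] + rng.normal(0, 0.1)
    therapist[t] = 0.4 * therapist[t - 1] - 0.2 * client[t - 1] + rng.normal(0, 0.1)

# Stack lagged predictors and estimate influence coefficients by least squares
X = np.column_stack([client[:-1], therapist[:-1]])
coef_client, *_ = np.linalg.lstsq(X, client[1:], rcond=None)
coef_therapist, *_ = np.linalg.lstsq(X, therapist[1:], rcond=None)
print("client(t) ~ client(t-1), therapist(t-1):", coef_client)
print("therapist(t) ~ client(t-1), therapist(t-1):", coef_therapist)
```

In this framing, the off-diagonal coefficients (how strongly one partner's previous arousal predicts the other's current arousal) would index co-regulation or escalation.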
It is also possible that machine learning models will improve our ability to predict response to psychotherapy, but may not necessarily improve our understanding. One of the interesting findings of the last two decades in the fields of machine learning and natural language processing is that often predictive models perform better with more training data and fewer theoretical constraints (with ‘deep learning’ representing the best example of this, e.g., Krizhevsky, Sutskever, & Hinton, 2012). Frederick Jelinek, an NLP researcher, famously quipped ‘Every time I fire a linguist the performance of the recognizer improves’ (Jurafsky & Martin, 2009, p.83). Thus, our ability to predict outcomes from in-session data may primarily be limited by sample size and not our ability to correctly conceptualize the session. Specifically, it may be possible to build statistical models that take hundreds, thousands, or millions of inputs that can reliably predict whether a client responds. While such an outcome would not directly provide actionable information for understanding care, it may serve a practical use in helping systems triage resources and quickly intervene when there is a risk of treatment failure.
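As a sketch of what such a purely predictive model might look like, the example below fits a regularized logistic regression over a high-dimensional, sparse feature matrix (simulated data standing in for word counts from thousands of transcribed sessions) and converts its predictions into risk scores that a clinic could use for triage. On the random data used here the model performs at chance; the point is only the shape of the pipeline, and all names and numbers are hypothetical.

```python
# Minimal sketch: predict treatment response from high-dimensional session
# features (e.g., word counts from automatically transcribed sessions).
# Data, features, and outcome labels are simulated placeholders.
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_sessions, n_features = 2000, 10000          # vocabulary-sized feature space
X = sparse_random(n_sessions, n_features, density=0.01, random_state=0, format="csr")
y = rng.integers(0, 2, n_sessions)             # 1 = client responded to treatment

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000).fit(X_train, y_train)

# Risk scores could flag cases for review rather than explain *why* they are at risk
risk_of_nonresponse = 1 - clf.predict_proba(X_test)[:, 1]
print(roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))  # ~0.5 on random data
```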
Therapist Training and Feedback
A mental health clinic trains providers in a specific treatment (e.g., Motivational Interviewing; MI). Training includes a typical workshop with clinical content – case examples, vignettes, and role-plays. In addition, the training incorporates ongoing feedback from a clinical software support tool. From a web-based portal on their computer, a provider records their sessions, which are then immediately processed by machine learning algorithms that generate visual feedback. The feedback is sent to the provider and includes ratings of the counselor’s use of specific interventions, summaries of session content, and comparisons to standardized norms of proficiency. In addition, there is a transcript of the session generated by automated speech recognition wherein each utterance is labeled with different treatment ratings. The transcript is annotated with other low-level feedback, including summaries of emotion and session topics. This feedback can be presented to the provider either at the session level or aggregated over time, providing visualizations of skills, topics, etc., over many different sessions.
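One small piece of such a support tool is easy to sketch: turning automatically labeled utterances into a session-level summary and comparing it against proficiency benchmarks. The code below is a toy illustration; the label names are hypothetical classifier outputs, and the thresholds are placeholders rather than established MI proficiency norms.

```python
# Toy sketch: turn automatically labeled utterances into simple session-level
# feedback. Labels are hypothetical outputs of an utterance classifier, and the
# proficiency thresholds are illustrative placeholders, not established norms.
from collections import Counter

def session_feedback(utterance_labels, thresholds=None):
    """Summarize counselor skill use for one session and compare to thresholds."""
    thresholds = thresholds or {"reflection_to_question_ratio": 1.0,
                                "percent_open_questions": 0.5}
    counts = Counter(utterance_labels)
    questions = counts["open_question"] + counts["closed_question"]
    summary = {
        "reflections": counts["reflection"],
        "questions": questions,
        "reflection_to_question_ratio": counts["reflection"] / max(questions, 1),
        "percent_open_questions": counts["open_question"] / max(questions, 1),
    }
    summary["meets_thresholds"] = {
        name: summary[name] >= value for name, value in thresholds.items()
    }
    return summary

labels = ["reflection", "open_question", "closed_question", "reflection", "reflection"]
print(session_feedback(labels))
```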
Human capital is the economic value of a person’s skill set, and “capitalization rate” refers to how well a society is able to fully utilize the skills of its people (see Malcolm Gladwell’s focus on the failure of modern American society to realize the vast potential of academically gifted youth from low SES backgrounds; Gladwell, 2008). Given the current state of training and continuing education in psychotherapy, we might consider what the capitalization rate is with regard to psychotherapists. Specifically, of the many counselors who begin their training, what percentage reach their maximum capacity to help clients change? While psychotherapy is a demonstrably effective intervention, and many counselors achieve remarkable outcomes with their clients, how many become the best counselor they can possibly be?
Beyond requirements for APA accreditation and licensure regarding amounts of supervision, there is relatively limited information available on the standard type or content of supervision and feedback received by counselors. Clinical supervision is mostly based on verbal reports from providers to their supervisors about relevant clients and clinical issues (Amerikaner & Rose, 2012; Goodyear & Nelson, 1997). The supervisee tells the supervisor a brief narrative from memory (or notes) about clients in their caseload and may focus in detail on one or two cases, or perhaps a particularly difficult case. Occasionally, the supervisor and trainee may watch a session recording. Based on this information, the supervisor makes recommendations to the provider on future directions for treatment. Given various time constraints, it is likely that the majority of cases are never discussed in detail. In addition, feedback only occurs after the completion of the therapy session.1 Perhaps as a result, feedback is often general and highly selective in nature, as opposed to targeting specific behaviors (Milne & Westerman, 2001; Beutler & Harwood, 2004). When treatment-related feedback is provided, it is typically based on patients’ self-reported symptoms, which may include specific warnings of treatment failure (Lambert, Harmon, Slade, Whipple & Hawkins, 2005). This type of measurement-based care has been shown to improve client outcomes, though it provides no direct information about therapist interventions or behaviors (Slade, Lambert, Harmon, Smart & Bailey, 2008). However, even this feedback is the exception, not the rule, and post-licensure feedback and supervision are often completely absent. Accordingly, counselors often work in a vacuum with little to no direct feedback or objective input on their work.
Across professions and tasks (e.g., sports, chess, etc.), performance-based feedback is critical to the development of expertise. Psychotherapists may receive supervision and feedback during their training, but it is rare for a therapist to receive detailed in-the-moment feedback on how exactly to respond to a difficult client, and how to select from different potential interventions.2 More generally, cognitive science (Kahneman & Klein, 2009) suggests that specific, immediate feedback is critical to the development of expertise across a broad swath of domains and skills (Kluger & DeNisi, 1996). When this sort of feedback is done correctly it can even outweigh the effects of cognitive ability and socioeconomic influences (Hattie & Timperley, 2007).
Tracey et al. (2014) note that specific, immediate feedback is precisely what is missing from the practice of psychotherapy, and as a result “expertise” in psychotherapy is challenging to attain. This is consistent with a body of work that suggests individuals are poor evaluators of their own skill – both generally and specifically within psychotherapy. For example, counselor reports of treatment procedures are unrelated to objective, third-party ratings of their practice (Santa Ana et al., 2008; Brosan, Reynolds, & Moore, 2008). In addition, providers on average are overly optimistic about their own clinical effectiveness (Walfish, McAlister, O’Donnell, & Lambert, 2012), and supervisees may not disclose details of cases when they are performing less competently (Ladany, Hill, & Corbett, 1996). Given the lack of ongoing feedback, we can expect that the capitalization rate for counselors is not currently being maximized and may even be quite low. For all these reasons, the rich, immediate feedback described in this section’s opening vignette may improve the quality of therapist skill learning and maintenance over time.
The new computational technologies described in the above section may soon provide performance-based feedback, thereby increasing the number of counselors who operate at full effective capacity. Over the past several years, machine learning technologies have been used to automatically generate session ratings that are competitive with human inter-rater reliability, and proof-of-concept research has already been completed for a computational, clinical support tool that can provide automated feedback to counselors using Motivational Interviewing (Xiao et al., 2015; Gibson et al., 2016). Using session audio alone, the support tool provides ratings of counselor empathy, specific counseling skills at the utterance level (e.g., reflections and questions), and emotional arousal (Gibson et al., 2016).
These technologies that evaluate psychotherapy interactions are focused on assisting and scaffolding human processes, that is, the delivery and supervision of psychotherapy. As we have noted above, historically, humans were the assessment tool for gathering data on psychotherapy sessions - via trainee report, supervisor review of a recording, or objective ratings applied by an independent coder. For example, a psychology intern might see 20 clients in a week. Previously, the intern would think about their caseload before supervision and pick the cases they want to discuss in more detail - usually just one or two. The supervisor might focus on micro-level processes such as attending, use of counseling skills (e.g., reflections, response to client expression of emotion), or review how the therapist used specific components of a treatment (e.g., teaching a specific skill in cognitive therapy or whether they assigned homework). As in the example noted at the beginning of the section, if all sessions were automatically coded on some selection of these attributes, a supervisor or supervisee could quickly review session summaries and pull out sessions that seemed important to review. Thus, automated feedback may facilitate additional immersion in the raw data of the interaction, such that both providers and supervisors can effectively focus their discussions and decisions about how to maximize the effectiveness of training and treatment. Human judgment via clinical expertise will inform, guide, and refine the technology, but humans will not need to be the sole assessment tool.
As these technologies develop and become available in typical clinical work, there are a number of specific research questions regarding dissemination and implementation. If giving detailed performance-based feedback to counselors were possible, how acceptable and adoptable is the technology to providers and the clinics where they work? What particular design considerations are necessary when disseminating machine learning systems for human evaluation such that providers feel engaged in the process rather than threatened or put off (Hirsch et al., under review)? Moreover, there are a variety of practical issues surrounding the collection of raw video and/or audio data of psychotherapy sessions. Quite understandably, providers may fear evaluation and observation, and suspect that data on what they are doing with their clients may be used punitively or to limit their ability to practice flexibly. Thus, early work on designing systems to evaluate psychotherapy on a large scale must directly involve providers so they can express their concerns and provide input on system design. Such research might determine what sort of feedback, and on what specific targets, counselors find most helpful. Qualitative research might also focus on what particular barriers counselors may have to engaging with feedback regularly, how such feedback would be incorporated into traditional human-based supervision, and how counselors incorporate new technology into existing workflows that usually allow only 10 minutes between sessions. Work on barriers to providers using simple progress monitoring tools is still ongoing (Tasca et al., 2015), and thus the direct evaluation of psychotherapy is likely to represent an even more complicated and lengthy dissemination process. We would hypothesize that clinical settings that engage providers directly and utilize support tools in non-punitive ways (e.g., not basing performance reviews on what emerges from support tool data) will be more successful in getting providers to regularly utilize and engage with session-based feedback.
One of the reasons that feedback in psychotherapy happens in such a delayed manner is that immediate “real-time” feedback would normally require an expert in the room giving feedback in front of a client, or using a “bug in the ear” (Gallant & Thyer, 1989) feedback system with an expert watching remotely. These methods involve the time of an expert and some level of awkwardness and distraction. Recent technological innovations may present solutions for delivering feedback more immediately. For example, wearable technology like Google’s now-defunct “Glass,” Fitbit-like wristbands, and now Microsoft’s “HoloLens” present an opportunity for information to be fed to counselors during the session (e.g., suggestions for specific interventions, or estimates of client emotional states, could be sent directly to a counselor). Certainly, the integration of conspicuous tools, such as the counselor wearing a large semi-opaque visor throughout the session, is not a realistic near-term target for practice integration. However, technologies will likely become less obtrusive and reach the point of being able to provide feedback on what is happening in the session in real time. Thus, a line of research will be needed to evaluate how and whether to provide such feedback.
Early-stage feasibility research might focus on determining how such tools might be used, and what sorts and amounts of information should be provided so that feedback does not distract the counselor from attending to the client. To illustrate, consider the possibilities around having both therapist and client wear wireless wrist-worn heart rate monitors in session, which could inform the therapist about the client’s physiological state as well as potential synchrony between therapist and client. Does the therapist receive data on client heart rate in the moment or after the session? How would in-the-moment data be provided? Using a screen behind the client, streaming real-time arousal data? Or via a silent vibration of the tracker when the client’s heart rate exceeds a certain threshold? Moreover, would there be any benefit to the client receiving similar data on their therapist’s heart rate? Would a technology-mediated ‘sixth sense’ of each other’s arousal facilitate greater attunement? Or would any momentary feedback be too distracting? Clearly, the introduction of any new technological ingredient to the therapy setting must be carefully vetted. In addition to the emerging integrations discussed throughout this article, technologies such as video recording of sessions for supervision (Bailey & Sowder, Jr., 1970; Chodoff, 1972; Gelso, 1973) and virtual reality exposure therapy (Glantz & Durlach, 1997; Hodges, Rothbaum, Watson, Kessler, & Opdyke, 1996) had their own eras of trial, error, and refinement before they became widely accepted (Goodyear & Nelson, 1997; Morina, Ijntema, Meyerbroker, & Emmelkamp, 2015; Riva, 2005). Thus, knowing when, how, and how much feedback to give is critical. Many factors, from the prior experience of the counselor (Kalyuga, Ayres, Chandler, & Sweller, 2010) to the cognitive load of feedback (Sweller, 1994), may influence how well a person can learn from feedback. A failure to consider these factors could result in sophisticated platforms that overload the counselor with information, or provide too little structure.
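As a minimal sketch of one of the in-the-moment options raised above (a silent alert when the client's heart rate stays above a threshold), consider the logic below. The data stream, threshold values, and notification step are hypothetical stand-ins for whatever a particular wearable platform would expose.

```python
# Minimal sketch: silently alert the counselor when the client's heart rate
# stays above a threshold for several consecutive readings. The readings and
# the notification step stand in for a hypothetical wearable API.
from typing import Iterable, List

def monitor_heart_rate(bpm_stream: Iterable[int], threshold: int = 100,
                       sustained_readings: int = 3) -> List[int]:
    """Return the indices at which an alert would be triggered."""
    alerts, consecutive = [], 0
    for i, bpm in enumerate(bpm_stream):
        consecutive = consecutive + 1 if bpm > threshold else 0
        if consecutive == sustained_readings:  # debounce: require sustained elevation
            alerts.append(i)                   # here the tracker would vibrate silently
            consecutive = 0
    return alerts

readings = [82, 88, 95, 104, 110, 112, 101, 96, 90, 108, 111, 115]
print(monitor_heart_rate(readings))  # -> [5, 11]
```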
Beyond questions focused on the implementation and design of feedback systems, the most critical research questions involve determining the impact of regular feedback on the course of treatment. There is already substantial evidence that simply providing feedback to counselors on client symptom change can improve client outcomes (Reese & Norsworthy, 2009). Yet, this feedback carries no information about what is happening in the session and does not identify any specific behaviors. One might expect feedback based on direct observation to provide more actionable information to the counselor. Specific targets for evaluation would include the impact of feedback on: 1) client-reported measures of treatment quality (e.g., satisfaction, working alliance), 2) treatment-specific measures of adherence and competence (e.g., whether feedback increases the likelihood a counselor is implementing a CBT-based protocol appropriately, or determining if the counselor is using here-and-now process comments in a dynamic/relational therapy; see Imel, Steyvers, & Atkins, 2014 for initial evidence that natural language processing technology can identify counselor process comments), 3) decreasing provider burnout, and 4) improvements in client outcomes. In each case, long-term observational and experimental work will be necessary. First, researchers might utilize simple pre-post designs where client symptoms or provider behavior are measured before and after feedback protocols are implemented. Eventually, experimental designs should be used in which providers and/or whole clinics are randomized to feedback vs. non-feedback conditions.
The focus of this section has mostly been on feedback given to the provider, but it is straightforward to imagine a similar process of feedback provided to the client. Here, researchers might examine how this information could be formatted so that it empowers clients to participate more fully in treatment. Thus, the same underlying technology that supports provider feedback could be used to provide clients summaries of sessions, skills discussed, key moments, etc. – effectively extending the therapeutic hour out into the client’s life. We might hypothesize that clients who actively utilize such support tools will be more engaged in treatment, learn new coping skills more quickly, and respond to treatment faster.
Technology-Mediated Psychological Treatment
The crisis service at a University Counseling Center offers a text line. The text service functions via a phone app through which participants can message a crisis counselor when they need support or are feeling unsafe. On a given shift, a licensed social worker may handle up to 10 or 20 different crisis interactions, sometimes working with 5–7 clients simultaneously. Counselor-client interactions can unfold over several hours and involve transitions between counselors. As the counselor is interacting with the client via text (and not required to respond in real time as they would on the phone or in person), the types of responses are different (e.g., minimal encouragers like ‘mm-hmm…’ are used rarely if ever). For the clinical provider, this format allows real-time consultation with colleagues (i.e., it is possible to show the dialogue you are having with a client to a colleague and ask “what should I write next?” without disrupting the interaction).
A client seeking treatment for a mental health problem is typically required to visit a professional in their office and engage in an in-person conversation, often over a number of weeks. Indeed, this process is seen as gold-standard treatment, and many counselors would shy away from technologically mediated interactions that are restricted to video conferencing or text. This means that a client who may have severe problems with motivation, energy, hope, anxiety, etc., is required to repeatedly overcome their primary symptoms in order to get treatment. In many instances, this is equivalent to requiring a client with a broken leg to walk to the doctor to mend the fracture. In addition, this basic structure of in-person treatment can lead to treatment access problems – getting off work, childcare, transportation, limited supply of quality providers, wait times, etc. While the work of psychotherapy ultimately does require clients to approach what they fear, try new strategies, and engage with life when they are pulled to retreat, technology is beginning to provide treatment options that do not require these dramatic efforts at the outset – meeting the client closer to where they are.
Unlike the two prior sections, where new technology has the potential to impact the traditional practice of psychotherapy, forms of technology-mediated counseling have been around for almost as long as the modern computer. Although Weizenbaum (1966) famously created a simple but surprisingly engaging computer chat program called ELIZA that simulated a Rogerian psychotherapist, it was not taken seriously as a viable modality. Today, a multitude of apps are available that purport to implement specific components of psychotherapy (Huguet et al., 2016). While many of these apps represent ‘gamified’ CBT worksheets, mindfulness exercises, or mood tracking, many others provide a decidedly psychotherapeutic experience wrapped in novel packaging. For example, platforms such as Ginger.io (https://www.ginger.io/) and TAO Connect (https://www.taoconnect.org/) furnish a suite of services that provide brief interventions and feedback tools. Both services leverage mobile apps where counselors and clients can chat via text message or video conference. Moreover, Ginger.io includes options for clients to recruit an entire “care team,” including a coach available for 24/7 chat, a licensed counselor for video sessions, and a psychiatrist for video sessions and medication management. TAO Connect’s service also includes informational videos on therapy modalities, web-based interactive interventions, and outcomes tracking. In sum, the features provided by such services emulate the tasks and intended outcomes of traditional psychotherapy, but in an entirely decentralized venue.
Other text-based forms of therapy provide near-instant support, but without a licensed professional at the other end of the line. Online discussion boards such as mental health-related subreddits (https://www.reddit.com/), SupportGroups.com, and 7 Cups (https://www.7cups.com/) present an anonymous venue in which laypeople can offer empathic listening, problem solving, and support (e.g., Choudhury, Kiciman, Dredze, Coppersmith, & Kumar, 2016). A more advanced approach to peer support is taken by Koko (https://itskoko.com/), a mobile platform that provides emotional and cognitive reframing through crowd-sourced responses sorted and presented to clients by machine learning algorithms.
Somewhere between the suite of services embedded in Ginger.io and TAO Connect and the peer-to-peer discussion boards seen on Reddit lie text-based services such as Breakthrough (https://www.breakthrough.com/), Talkspace (https://www.talkspace.com/), and Crisis Text Line (https://www.crisistextline.org/). These platforms provide contact with a licensed mental health professional (or a paraprofessional with some minimal training, in the case of Crisis Text Line), on demand, in decentralized contexts (e.g., the client interacts via text on a phone app). These services are flexible with regard to client demographics and presenting concerns, are not limited by time of day or session length, and are either free or billed at a monthly flat rate.
Online and text-based mental health services are exceedingly popular. Primary anxiety- and depression-support subreddits have over 90,000 and 150,000 subscribers, respectively. Support group topics on SupportGroups.com such as ADHD, Bulimia, and Emotional Abuse have tens of thousands of members. Talkspace has over 300,000 users and recently received 15 million dollars in startup capital (Crook, 2016; Ferguson, 2016). This volume of users has led some to believe that these types of services have the potential to truly disrupt the mental healthcare system, just as Uber and Lyft have disrupted the taxi industry.
Beyond the demand for these services are the questions that text-based interactions raise about the process of counseling and what may or may not facilitate effectiveness. It is understandable that therapists and clients might prefer traditional in-person psychotherapy over, for example, the text-based therapy described at the beginning of this section. The non-verbal aspects of in-person therapy, such as tone of voice and body language, give the client and therapist more channels through which to connect. The one-dimensional relationship between the text-based therapist and their client is all the more profound when considering clients who are at risk of suicide or are actively in crisis. Unlike an in-person therapist, the text-based therapist cannot access and interpret their client’s long sigh, lack of eye contact, or trembling fingers. Depending on the system, if a client ‘disconnects,’ it may not be possible to find them or issue a safety call (indeed, it is possible that this level of anonymity is one reason some clients might access care in this way). However, it is also possible that traditional and technology-mediated therapy have their own strengths and shortcomings and are best understood as complementary, rather than competing, methods. Indeed, rather than suggesting that technology-mediated treatment is somehow lesser, there may even be surprising benefits to these interventions. For example, there is evidence that videoconferencing-based psychotherapy can be as effective as in-person treatment (Gros et al., 2013) and that computerized CBT can have substantial benefits (Adelman, Panza, Bartley, Bontempo, & Bloch, 2014).
However, it is important to explore the details of interactions between a text-based crisis line therapist and their clients. As outlined in the example, it is possible for therapists to have “sessions” with multiple clients at the same time. How might this influence the nature of individual client engagement and the development of the therapeutic relationship? Treatment interactions may be briefer, but could also be more frequent (perhaps equaling or surpassing the total amount of time spent in typical in-person psychotherapy, which can be relatively limited; Simon & Ludman, 2010). In addition, one might expect text-based interactions to be less intimate, but it is also possible that increased anonymity increases how willing clients are to disclose aspects of themselves they consider shameful.3 More broadly, we know very little about how client-therapist interactions may differ - do counselors make similar sorts of interventions, do sessions progress in the same way typical therapy sessions might, or is there more focus on specific problems that can be addressed in the course of a 10- or 15-minute interaction?
The text-based therapy paradigm also has important consequences for psychotherapy at a systemic level. Given the popularity of emerging text-based modalities, should text-based therapy be integrated into training programs, and if so, how? What are the ethical ramifications of text-based therapy? Ethics codes require that sufficient training in a given modality be undertaken before practicing that modality, but do we have enough data on text-based therapy to ensure competence? In addition, it may be difficult to adhere to legal and ethical statutes when therapists only have access to client user names (see discussion of implications for duty-to-warn requirements; Ferguson, 2016). Finally, how do text-based therapy outcomes compare with those from in-person therapy? How do we, as a field, address the issue if there is a disparity between the modalities in either direction?
Beyond questions that assess the impact of changing from an in-person to a technology-mediated interaction that provides anonymity, text-based therapy may present a unique opportunity that would not be feasible in typical psychotherapy contexts: crowdsourcing. In-person counseling makes real-time consultation during a session awkward and impractical. Therefore, a single counselor must rely only on his or her internal resources to provide insightful and personalized care for their client. In text-based therapy, however, there is no such limitation on the number of minds or resources that may be funneled into the therapeutic exchange. A therapist could simply lean across the cubicle to ask a colleague how to respond to a difficult client.
Perhaps more radically, the capabilities of text-based interactions suggest it may be possible to explore the impact of truly “crowdsourced” psychotherapy. While there may be no one “correct” therapist intervention in any given moment, research on crowdsourcing suggests that pooling judgments may be one means of minimizing the adverse effects of low-performing individual therapists (see Imel, Sheng, Baldwin, & Atkins, 2015). For example, the potential integration of applications like the crowdsourced CBT in Koko noted above and text-based counseling services suggests it could be possible to examine the impact of having a client interact with a team of several or hundreds of lay counselors who are responding in real time to a multitude of different client concerns. A single client utterance might elicit hundreds or thousands of “counselor” responses. It would be possible for this collection of counselors to collectively rate the quality and appropriateness of interventions and present only the most highly rated utterances to the client, thereby reducing the possibility of the client seeing a problematic intervention. Indeed, there is strong evidence from basic cognitive science that pooling individual judgments to reach a consensus yields far better judgments than relying on any individual prediction. Essentially, idiosyncratic (uncorrelated) errors cancel when the group consensus is used in lieu of an individual response. In a classic example, approximately 800 individuals guessed the weight of a slaughtered ox; the median guess was accurate to within 9 pounds, less than 1% of the ox’s true weight (Galton, 1907). This general finding has been replicated in various domains (e.g., Ariely et al., 2000; Surowiecki, 2005) and is now known as the “wisdom of the crowd” effect. The Koko app provides a proof of concept that crowds of lay individuals can provide support, but future research should address the relative impact of these sorts of crowdsourced interventions compared to treatments administered by counselors with standard credentials. Similarly, the potential availability of different responses to the same client utterance presents an opportunity to examine the characteristics of those interventions that are rated highly both by other counselors and by clients.
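The error-cancellation argument is easy to demonstrate with a small simulation. Under the simple (and idealized) assumption that individual guesses are unbiased with uncorrelated noise, the crowd median lands far closer to the truth than a typical individual judge; the numbers below are invented for illustration and are not Galton's data.

```python
# Small simulation of the 'wisdom of the crowd' effect: with unbiased,
# uncorrelated individual errors, the error of the crowd median is far smaller
# than a typical individual's error. Numbers are illustrative, not Galton's data.
import numpy as np

rng = np.random.default_rng(42)
true_value = 1200                      # e.g., the true weight of an ox in pounds
n_judges, n_trials = 800, 1000

guesses = true_value + rng.normal(0, 60, size=(n_trials, n_judges))  # individual noise
median_error = np.abs(np.median(guesses, axis=1) - true_value).mean()
individual_error = np.abs(guesses - true_value).mean()

print(f"typical individual error: {individual_error:.1f} lbs")  # roughly 48 lbs
print(f"crowd-median error:       {median_error:.1f} lbs")      # roughly 2-3 lbs
```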
With the advent of many new methodologies that alter the way psychotherapy is delivered, there are many potential challenges and ethical considerations that need to be addressed. For example, what is the minimal involvement that a human therapist should have with different types of treatment populations? Clinicians might agree that an online curriculum or CBT app could potentially benefit a patient with mild depression as an initial component of stepped care, while few would likely consider this a solitary long-term solution for a suicidal patient with chronic mental health problems. Thus, it is not surprising that many current apps used in practice are ones that enhance access or support the work of a human clinician.
Conclusions
Contemporary society is in the midst of a technological revolution that is now beginning to impact the practice of psychotherapy. How this dynamic shifts, or improves, traditional models of care remains to be seen. While mental health has thus far avoided the technological disruption that has led to major shifts in transportation, journalism, and other areas of medicine, the status quo seems unlikely to continue unaltered. Yet, in whatever form, it is likely that any future mental health system will continue to involve interpersonal interactions. Thus, research will be needed to understand how to: 1) train providers, 2) adapt to the challenges and opportunities presented by new models of practice, and 3) understand what it is about those interactions that helps. The ultimate goal is that technology will enhance the talents that humans bring to counseling. A plane flown by feel was limited to relatively short flights with few passengers. Modern aircraft can fly hundreds of passengers around the world with a small crew. The difference is technology. It is our hope that a fuller integration of technology into the human process of psychotherapy may serve a similar function, resulting in more effective providers and new ways of reducing suffering.
Public Significance Statement.
We discuss the opportunities and challenges of applying spoken language and machine learning technologies to psychotherapy. These technologies may enhance the care received by individual clients through improved implementation and theoretical understanding of treatment.
Acknowledgments
Funding for the preparation of this manuscript was provided by the National Institutes of Health/National Institute on Alcohol Abuse and Alcoholism (NIAAA) under award number R01/AA018673 and the National Institute on Drug Abuse under award number R34/DA034860.
Footnotes
1. Even research on technology and supervision is focused on this same human process, but mediated over things like video teleconference (Barnett, 2011).
2. Some readers may wonder about the analogy of psychotherapy to activities such as sports or chess, where it is far simpler to define good vs. bad moves relative to an ultimate outcome. Certainly, there is a tension between idiographic and nomothetic views and approaches within psychotherapy. The fundamental point here is simply that specific, performance-based feedback is critical for skill development and training and that it is sorely lacking in current psychotherapy training and supervision.
3. For a video where Irvin Yalom discusses the potential of technology-mediated psychotherapy interactions, see https://www.youtube.com/watch?v=DozICXlrvN0.
The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Some arguments presented in this manuscript were presented at the Annual Convention of the Association for Behavioral and Cognitive Therapies in New York City and the Annual Convention of the American Psychological Association in Denver, Colorado. Also note that both Dr. Imel and Dr. Atkins have minority equity stakes in a technology company, Behavioral Informatix (http://www.behavioralinformatix.com/), that is focused on developing computational models that quantify aspects of patient-provider interactions.
References
- Adelman CB, Panza KE, Bartley CA, Bontempo A, Bloch MH. A meta-analysis of computerized cognitive-behavioral therapy for the treatment of DSM-5 anxiety disorders. The Journal of Clinical Psychiatry. 2014;75(7):e695–e704. doi: 10.4088/JCP.13r08894.
- Amerikaner M, Rose T. Direct observation of psychology supervisees’ clinical work: A snapshot of current practice. The Clinical Supervisor. 2012;31(1):61–80. doi: 10.1080/07325223.2012.671721.
- Ariely D, Au WT, Bender RH, Budescu DV, Dietz CB, Gu H, et al. The effects of averaging subjective probability estimates between and within judges. Journal of Experimental Psychology: Applied. 2000;6(2):130–147. doi: 10.1037//1076-898x.6.2.130.
- Bailey KG, Sowder WT Jr. Audiotape and videotape self-confrontation in psychotherapy. Psychological Bulletin. 1970;74(2):127–137. doi: 10.1037/h0029633.
- Barnett JE. Utilizing technological innovations to enhance psychotherapy supervision, training, and outcomes. Psychotherapy. 2011;48(2):103–108. doi: 10.1037/a0023381.
- Brosan L, Reynolds S, Moore RG. Self-evaluation of cognitive therapy performance: Do therapists know how competent they are? Behavioural and Cognitive Psychotherapy. 2008;36:581–587.
- Butler EA, Randall AK. Emotional coregulation in close relationships. Emotion Review. 2013;5(2):202–210. doi: 10.1177/1754073912451630.
- Crook J. Talkspace online therapy platform raises $15 million Series B. Techcrunch.com. 2016 Jun 14. https://techcrunch.com/2016/06/14/talkspace-online-therapy-platform-raises-15-million-series-b/
- Choudhury MD, Kiciman E, Dredze M, Coppersmith G, Kumar M. Discovering shifts to suicidal ideation from mental health content in social media. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16); New York, NY, USA: ACM; 2016. pp. 2098–2110. doi: 10.1145/2858036.2858207.
- Chodoff P. Supervision of psychotherapy with videotape: Pros and cons. American Journal of Psychiatry. 1972;128(7):819–823.
- Decety J, Jackson PL. A social-neuroscience perspective on empathy. Current Directions in Psychological Science. 2006;15(2):54–58.
- Dyson F. Imagined worlds: The Jerusalem-Harvard lectures. Cambridge, MA: Harvard University Press; 1997.
- Ferguson C. Breakdown: Inside the messy world of anonymous therapy app Talkspace. The Verge. 2016. Retrieved from http://www.theverge.com/2016/12/19/14004442/talkspace-therapy-app-reviews-patient-safety-privacy-liability-online.
- Gallant JP, Thyer BA. The “bug-in-the-ear” in clinical supervision: A review. The Clinical Supervisor. 1989;7(2–3):43–58. doi: 10.1300/J001v07n02_04.
- Galton F. Vox populi. Nature. 1907;75:450–451.
- Gawande A. Complications: A surgeon’s notes on an imperfect science. New York, NY: Picador; 2002.
- Gelo OCG, Salvatore S. A dynamic systems approach to psychotherapy: A meta-theoretical framework for explaining psychotherapy change processes. Journal of Counseling Psychology. 2016;63:379–395. doi: 10.1037/cou0000150.
- Gelso CJ. Effect of audiorecording and videorecording on client satisfaction and self-expression. Journal of Consulting and Clinical Psychology. 1973;40(3):455–461. doi: 10.1037/h0034548.
- Gibson J, Gray G, Hirsch T, Imel ZE, Narayanan SS, Atkins DC. Developing an automated report card for addiction counseling: The Counselor Observer Ratings Expert for MI (CORE-MI). Paper presented at the Computer Human Interaction (CHI) Mental Health Workshop; San Jose, CA, USA; 2016. pp. 1–4.
- Gladwell M. Outliers: The story of success. Boston, MA: Little, Brown and Company; 2008.
- Glantz K, Durlach NI. Virtual reality (VR) and psychotherapy: Opportunities and challenges. Presence: Teleoperators & Virtual Environments. 1997;6(1):87–106. doi: 10.1162/pres.1997.6.1.87.
- Goodyear RK, Nelson ML. The major formats of psychotherapy supervision. In: Watkins CE Jr, editor. Handbook of psychotherapy supervision. Hoboken, NJ: John Wiley & Sons; 1997. pp. 328–344.
- Graves A, Mohamed A, Hinton G. Speech recognition with deep recurrent neural networks. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 2013:6645–6649.
- Gros DF, Morland LA, Greene CJ, Acierno R, Strachan M, Egede LE, … Frueh CB. Delivery of evidence-based psychotherapy via video telehealth. Journal of Psychopathology and Behavioral Assessment. 2013;35(4):506–521. doi: 10.1007/s10862-013-9363-4.
- Hattie J, Timperley H. The power of feedback. Review of Educational Research. 2007;77(1):81–112. doi: 10.3102/003465430298487.
- Hirsch T, Merced K, Imel Z, Narayanan S, Atkins D. Designing contestability: Interaction design, machine learning, and mental health. ACM Conference on Designing Interactive Systems (DIS ’17); Edinburgh. (under review)
- Hodges LF, Rothbaum BO, Watson B. A virtual airplane for fear of flying therapy. Paper presented at the Proceedings of the 1996 Virtual Reality Annual International Symposium; 1996.
- Horvath AO, Del Re AC, Flückiger C, Symonds D. Alliance in individual psychotherapy. Psychotherapy. 2011;48(1):9–16. doi: 10.1037/a0022186.
- Huguet A, Rao S, McGrath PJ, Wozney L, Wheaton M, Conrod J, Rozario S. A systematic review of cognitive behavioral therapy and behavioral activation apps for depression. PLoS ONE. 2016;11(5):e0154248. doi: 10.1371/journal.pone.0154248.
- Imel ZE, Sheng E, Baldwin SA, Atkins DC. Removing very low-performing therapists: A simulation of performance-based retention in psychotherapy. Psychotherapy. 2015;52(3):329–336. doi: 10.1037/pst0000023.
- Imel ZE, Steyvers M, Atkins DC. Computational psychotherapy research: Scaling up the evaluation of patient–provider interactions. Psychotherapy. 2014. doi: 10.1037/a0036841.
- Jurafsky D, Martin JH. Speech and language processing. Upper Saddle River, NJ: Prentice Hall; 2009.
- Kahneman D, Klein G. Conditions for intuitive expertise: A failure to disagree. American Psychologist. 2009;64(6):515–526. doi: 10.1037/a0016755.
- Kalyuga S, Ayres P, Chandler P, Sweller J. The expertise reversal effect. Educational Psychologist. 2010;38(1):23–31. doi: 10.1207/S15326985EP3801_4.
- Kinney JR. Airplanes: The life story of a technology. Baltimore, MD: Johns Hopkins University Press; 2006.
- Kirschenbaum H. Carl Rogers’s life and work: An assessment on the 100th anniversary of his birth. Journal of Counseling & Development. 2004;82(1):116–124.
- Kluger AN, DeNisi A. The effects of feedback interventions on performance: A historical review, a meta-analysis, and a preliminary feedback intervention theory. Psychological Bulletin. 1996;119(2):254–284. doi: 10.1037/0033-2909.119.2.254.
- Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems. 2012.
- Ladany N, Hill C, Corbett M. Nature, extent, and importance of what psychotherapy trainees do not disclose to their supervisors. Journal of Counseling Psychology. 1996;43(1):10–24.
- Leichsenring F, Salzer S. Response to Clark. American Journal of Psychiatry. 2013;170(11):1365–1366. doi: 10.1176/appi.ajp.2013.13060744r.
- Lord SP, Sheng E, Imel ZE, Baer J, Atkins DC. More than reflections: Empathy in motivational interviewing includes language style synchrony between therapist and client. Behavior Therapy. 2014;46(3):296–303. doi: 10.1016/j.beth.2014.11.002.
- Pace BT, Tanana M, Xiao B, Dembe A, Soma CS, Steyvers M, Narayanan S, Imel ZE, Atkins DC. What about the words? Natural language processing in psychotherapy. Psychotherapy Bulletin. 2016.
- Lundahl BW, Kunz C, Brownell C, Tollefson D, Burke BL. A meta-analysis of motivational interviewing: Twenty-five years of empirical studies. Research on Social Work Practice. 2010;20(2):137–160. doi: 10.1177/1049731509347850.
- Magill M, Gaume J, Apodaca TR, Walthers J, Mastroleo NR, Borsari B, Longabaugh R. The technical hypothesis of motivational interviewing: A meta-analysis of MI’s key causal model. Journal of Consulting and Clinical Psychology. 2014. doi: 10.1037/a0036833.
- McCullough D. The Wright Brothers. New York, NY: Simon and Schuster; 2015.
- Morina N, Ijntema H, Meyerbroker K, Emmelkamp PMG. Can virtual reality exposure therapy gains be generalized to real-life? A meta-analysis of studies applying behavioral assessments. Behaviour Research and Therapy. 2015;74:18–24. doi: 10.1016/j.brat.2015.08.010.
- Preston SD, de Waal FBM. Empathy: Its ultimate and proximate bases. Behavioral and Brain Sciences. 2002;25(1):1–20. doi: 10.1017/s0140525x02000018.
- Preston SD, Hofelich AJ. The many faces of empathy: Parsing empathic phenomena through a proximate, dynamic-systems view of representing the other in the self. Emotion Review. 2012;4(1):24–33. doi: 10.1177/1754073911421378.
- Reese RJ, Norsworthy LA. Does a continuous feedback system improve psychotherapy outcome? Psychotherapy. 2009;46(4):418–431. doi: 10.1037/a0017901.
- Riva G. Virtual reality in psychotherapy: A review. CyberPsychology & Behavior. 2005;8(3):220–240. doi: 10.1089/cpb.2005.8.220.
- Santa Ana EJ, Martino S, Ball SA, Nich C, Frankforter TL, Carroll KM. What is usual about “treatment-as-usual”? Data from two multisite effectiveness trials. Journal of Substance Abuse Treatment. 2008;35(4):369–379. doi: 10.1016/j.jsat.2008.01.003.
- Saxon D, Barkham M. Patterns of therapist variability: Therapist effects and the contribution of patient severity and risk. Journal of Consulting and Clinical Psychology. 2012;80(4):535–546. doi: 10.1037/a0028898.
- Simon GE, Ludman E. Predictors of early dropout from psychotherapy for depression in community practice. Psychiatric Services. 2010;61(7):684–689. doi: 10.1176/ps.2010.61.7.684.
- Surowiecki J. The wisdom of crowds. New York, NY: Anchor; 2005.
- Sweller J. Cognitive load theory, learning difficulty, and instructional design. Learning and Instruction. 1994;4(4):295–312. doi: 10.1016/0959-4752(94)90003-5.
- Tasca GA, Sylvestre J, Balfour L, Chyurlia L, Evans J, Fortin-Langelier B, et al. What clinicians want: Findings from a psychotherapy practice research network survey. Psychotherapy. 2015;52(1):1–11. doi: 10.1037/a0038252.
- Tracey TJG, Wampold BE, Lichtenberg JW, Goodyear RK. Expertise in psychotherapy: An elusive goal? American Psychologist. 2014;69(3):218–229. doi: 10.1037/a0035099.
- Walfish S, McAlister B, O’Donnell P, Lambert MJ. An investigation of self-assessment bias in mental health providers. Psychological Reports. 2012;110(2):639–644. doi: 10.2466/02.07.17.PR0.110.2.639-644.
- Wampold BE, Imel ZE. The great psychotherapy debate: The evidence for what makes psychotherapy work. New York, NY: Routledge; 2015.
- Webb CA, DeRubeis RJ, Barber JP. Therapist adherence/competence and treatment outcome: A meta-analytic review. Journal of Consulting and Clinical Psychology. 2010;78(2):200–211. doi: 10.1037/a0018912.
- Xiao B, Imel ZE, Georgiou PG, Atkins DC, Narayanan SS. “Rate my therapist”: Automated detection of empathy in drug and alcohol counseling. PLOS ONE. 2015. doi: 10.1371/journal.pone.0143055.