Over the past two decades, research on digital mental health interventions has grown rapidly. They hold promise for addressing existing gaps in mental health care, with numerous studies demonstrating their effectiveness across various psychiatric and somatic conditions 1 . Their formats vary, ranging from video‐based therapy to text‐based approaches delivered via secure email or chat. However, most research has focused on smartphone‐ and web‐based applications – either fully automated and self‐guided or therapist‐guided – that typically provide structured cognitive behavioral therapy content through self‐help modules. More recently, conversational agents (i.e., chatbots) have gained attention as innovative tools for providing therapeutic support.
Despite the growing evidence of the effectiveness of digital mental health interventions, several challenges remain. A significant problem with self‐guided interventions is low user engagement and high dropout rates, often leading to poor treatment outcomes. In contrast, therapist‐guided interventions typically show higher adherence and improved effectiveness 1 , 2 . Reflecting this, the roles of therapist support and the therapeutic alliance in these interventions have emerged as central themes, and indeed as top research priorities 3 .
A substantial body of research has investigated the therapeutic alliance in digital mental health interventions, frequently measuring it through self‐report questionnaires, most commonly the Working Alliance Inventory (WAI). This tool assesses the alliance through three dimensions: agreement on therapeutic goals, agreement on tasks to reach these goals, and the emotional bond. The WAI is often adapted to reflect the digital context. For instance, in guided interventions, the emotional bond can refer to the patient's relationship with the therapist, while agreement on goals and tasks primarily relates to the digital self‐help program 4 .
Evidence suggests that patients can form therapeutic alliances in digital environments – regardless of the communication modality (e.g., videoconferencing or text‐based), diagnosis, or even minimal‐contact formats such as guided self‐help interventions – with alliance ratings often comparable to those observed in traditional face‐to‐face therapies 5 . Furthermore, although findings vary across studies, meta‐analyses show that the strength of the therapeutic alliance in digital contexts positively correlates with outcomes, with effect sizes that are likely somewhat smaller than, but still broadly similar to, those observed in face‐to‐face settings 6 , 7 .
Beyond therapist‐patient relationships, research increasingly recognizes that even the emotional bond can extend to self‐help apps and, more recently, conversational agents. This insight is not entirely new. As early as 1966, it was observed that people interacting with the chatbot ELIZA, a simple conversational agent mimicking Rogerian psychotherapy, attributed human‐like empathy and understanding to the program – a phenomenon now known as the “ELIZA effect”. This effect highlights people's tendency to connect emotionally with digital entities, even when these interactions lack genuine human qualities. Parallel findings have emerged in research on bibliotherapy, where relational elements such as empathically addressing emotional distress, collaboratively setting goals, and normalizing setbacks have been suggested to strengthen readers' sense of alliance with therapeutic materials 8 .
The comparability of alliance ratings with traditional therapies has helped legitimize digital mental health interventions. However, caution is warranted when directly transferring alliance concepts from face‐to‐face to digital settings. As the following examples illustrate, similar alliance scores across treatment formats do not necessarily indicate equivalent experiences or underlying processes.
Reflecting on the therapeutic bond in guided self‐help programs, some users commented that the bond with the therapist was surprisingly strong, given that it was a digital intervention with minimal contact 5 . Such statements underline that expectations shape therapeutic experiences, and highlight the importance of context when interpreting self‐report ratings. Just as a five‐star rating for a budget hotel means something different from one for a luxury hotel, similar alliance ratings across treatment formats should be interpreted with caution.
The same caution applies to ratings of goal and task agreement. While similar scores suggest comparability, they may result from fundamentally different processes. Imagine a mental health system in which, instead of broadly trained psychotherapists who collaboratively define goals and adjust treatment over time, patients choose from a list of specialists offering pre‐defined interventions for specific disorders. This model – typical of many digital mental health interventions – relies on matching diagnosis to the intervention from the outset. In such cases, high levels of agreement may indicate an initial fit rather than a co‐created therapeutic process. While this structured approach can effectively align treatment with disorder‐specific goals, its lack of flexibility may contribute to the engagement challenges seen in digital interventions when they fail to address a patient's broader or evolving needs.
Furthermore, the rise of chatbots and their increasingly sophisticated “empathic” responses raises questions about forming meaningful relationships with digital tools. Chatbots can convincingly simulate empathy and respond appropriately to users' emotional states. However, they can neither share another person's feelings nor genuinely care about that person's well‐being. As one of our students once put it, engaging with a chatbot is like hearing a warm, reassuring voice in an empty room. The words may be comforting, the tone just right – but in the end, no one is truly there. Human empathy requires real presence and investment (in terms of time, effort, and emotional energy), which are factors that also encourage mutual engagement. The lack of such investment in automated treatments may partly explain persistent challenges with user engagement.
Overall, the therapeutic alliance in digital mental health interventions tends to be less rich and reciprocal than in traditional therapies, partly because certain digital communication formats – such as text‐based or asynchronous communication – inherently lack the nonverbal and paraverbal cues needed for nuanced emotional attunement. However, digital formats also offer distinct advantages. For some therapeutic tasks, such as psychoeducation, face‐to‐face interactions may be overly complex and overwhelming. In these cases, the simplified nature of digital communication can reduce relational demands, increase focus, and promote more effective learning. Similarly, digital communication can facilitate quicker openness when discussing sensitive issues. This aligns with the intimacy equilibrium model 9 , which proposes that physical distance and fewer nonverbal cues can encourage verbal openness, unlike situations of close physical proximity to strangers – in an elevator, for example – where people tend to avoid intimate conversations.
Our current understanding of the therapeutic alliance in digital mental health interventions is rooted in models developed for face‐to‐face therapies. Applying these models to digital interventions may obscure more than it reveals. Rather than asking whether the “digital alliance” is “as good as” the traditional one, it may be more fruitful to ask how it differs. Digital interventions may even help to broaden our understanding of what constitutes the therapeutic alliance. Is it the subjective sense of being understood that matters most, or the actual presence of an empathic mind? If a chatbot – despite lacking genuine empathy – can still promote symptom relief, this also raises questions about the underlying mechanisms through which the alliance facilitates therapeutic change. For instance, can a chatbot reduce a patient's emotional dysregulation as effectively as a human therapist?
Furthermore, not all clients value a therapeutic relationship. Some prefer more autonomy and only minimal interpersonal interaction. This highlights the need to identify moderators of the alliance‐outcome relationship in digital mental health interventions. Are there patient characteristics or intervention types for which a strong alliance or therapist presence is more important?
Ultimately, digital mental health interventions encourage us to rethink not only how we deliver therapy, but also what fundamentally makes therapy effective and for whom, providing unique insights into the nature of therapeutic alliances and the nature of healing itself.
REFERENCES
1. Hedman‐Lagerlöf E, Carlbring P, Svärdman F et al. World Psychiatry 2023;22:305‐14.
2. Krieger T, Bur OT, Weber L et al. Internet Interv 2023;32:100617.
3. Hollis C, Sampson S, Simons L et al. Lancet Psychiatry 2018;5:845‐54.
4. Gómez Penedo JM, Berger T, Grosse Holtforth M et al. J Clin Psychol 2019;76:973‐86.
5. Berger T. Psychother Res 2017;27:511‐24.
6. Aafjes‐van Doorn K, Spina DS, Horne SJ et al. Clin Psychol Rev 2024;110:102430.
7. Probst GH, Berger T, Flückiger C. Verhaltenstherapie 2019;32:135‐46.
8. Richardson R, Richards DA, Barkham M. Behav Cogn Psychother 2010;38:67‐81.
9. Argyle M, Dean J. Sociometry 1965;28:289‐304.
