Table 1.
Authors (Year) | Duration | Conversational Agent | Unconstrained Input | n | Mean Age | Sex (%M) |
---|---|---|---|---|---|---|
Fulmer et al. 19 (2018) | 2 and 4 Weeks | Tess | Yes | 74 | 22.9 | 28 |
Description: Assessed the feasibility and efficacy of Tess in reducing self-identified symptoms of depression and anxiety in college students. A randomized controlled trial showed a reduction in symptoms of depression and anxiety. | ||||||
Inkster et al. 20 (2018) | 2 Months | Wysa | Yes | 129 | N/A | N/A |
Description: Evaluated the effectiveness and engagement levels of an AI-enabled, empathetic, text-based conversational mobile mental well-being app in users with self-reported symptoms of depression; 67% of users found the app helpful and encouraging. More engaged users showed significantly greater average mood improvement than less engaged users. | ||||||
Jungmann et al. 21 (2019) | 3 to 6 Hours | Ada | No | 6 | 34 | 50 |
Description: Investigated the diagnostic quality of a health app for a broad spectrum of mental disorders and its dependence on expert knowledge. For adult cases, psychotherapists achieved higher diagnostic agreement between the main diagnosis of a textbook case vignette and the result given by the app. | ||||||
Martínez-Miranda et al. 22 (2019) | 8 Weeks | HelPath | No | 18 | 31.5 | 63 |
Description: Assessed acceptability of, perception of, and adherence to the conversational agent. Participants perceived the embodied conversational agent as emotionally competent, and a positive level of adherence was reported. | ||||||
Philip et al. 23 (2020) | N/A | Unnamed virtual medical agent | No | 318 | 45.01 | 45 |
Description: Measured engagement with, and perceived acceptance and trust of, the virtual medical agent in the diagnosis of addiction and depression. Although 68.2% of participants reported being very satisfied with the agent, only 57.23% were willing to interact with it in the future. | ||||||
Provoost et al. 24 (2019) | N/A | Sentiment mining algorithm tailored to the Dutch language | Yes | 52 | N/A | N/A |
Description: Evaluated the accuracy of automated sentiment analysis against human judgment. User texts from online cognitive behavioral therapy patients were evaluated for overall sentiment and for the presence of 5 specific emotions by an algorithm and by psychology students. Results showed moderate agreement between the algorithm and human judgment for overall sentiment (positive vs. negative) but low agreement for specific emotions. | ||||||
Suganuma et al. 25 (2018) | 1 Month, minimum 15 days | SABORI | No | 454 | 38.04 | 30.8 |
Description: Evaluated the feasibility and acceptability of the conversational agent. Results showed improvement in WHO-5 and Kessler 10 scores. | ||||||
 | Mean: 4.6 Weeks | | | Median: 74 | Mean: 34.29 | Mean: 43.36% |
Note. Unconstrained input refers to whether the user of the conversational agent is able to freely converse with the agent, as opposed to constrained input where specific dialogue options are presented from which the user must select a choice to continue the conversation. N/A = not available; AI = artificial intelligence; WHO = World Health Organization.
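As a quick consistency check (not part of the original analyses), the summary row's sample-size, age, and sex statistics can be reproduced from the values in the table; studies reporting N/A are excluded from the corresponding statistic, and the duration mean is omitted here because it would require converting heterogeneous duration formats to a common unit.

```python
# Sketch: reproducing the summary statistics reported in Table 1
# from the values transcribed in the rows above.
from statistics import mean, median

n = [74, 129, 6, 18, 318, 52, 454]     # sample sizes, all 7 studies
ages = [22.9, 34, 31.5, 45.01, 38.04]  # mean ages (2 studies N/A)
pct_male = [28, 50, 63, 45, 30.8]      # sex, %M (2 studies N/A)

print(median(n))                 # 74
print(round(mean(ages), 2))      # 34.29
print(round(mean(pct_male), 2))  # 43.36
```

Note that the median is reported for sample size (robust to the wide spread between n = 6 and n = 454), whereas means are reported for age and sex.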