Abstract
Objectives
Despite the proliferation of mobile mental health apps, evidence of their efficacy for anxiety or depression is inadequate, as most studies lack appropriate control groups. Because apps are designed to be scalable and reusable tools, their efficacy can also be assessed uniquely by comparing different implementations of the same app. This exploratory analysis reports a preliminary effect size of an open-source smartphone mental health app, mindLAMP, on the reduction of anxiety and depression symptoms by comparing a control implementation of the app focused on self-assessment with an intervention implementation of the same app focused on cognitive behavioral therapy (CBT) skills.
Methods
A total of 328 participants were eligible and completed the study under the control implementation, and 156 completed the study under the intervention implementation of the mindLAMP app. Both use cases offered access to the same in-app self-assessments and therapeutic interventions. Multiple imputation was used to impute the missing Generalized Anxiety Disorder-7 (GAD-7) and Patient Health Questionnaire-9 (PHQ-9) survey scores of the control implementation.
Results
Post hoc analysis revealed small between-group effect sizes of Hedges' g = 0.34 for the Generalized Anxiety Disorder-7 and Hedges' g = 0.21 for the Patient Health Questionnaire-9.
Conclusions
mindLAMP shows promising results in improving anxiety and depression outcomes in participants. Though our results mirror the current literature in assessing mental health apps’ efficacy, they remain preliminary and will be used to inform a larger, well-powered study to further elucidate the efficacy of mindLAMP.
Keywords: Smartphone, app, anxiety, depression, effect, digital
Introduction
Digital mental health tools present a scalable and affordable solution to address the current treatment gap in behavioral health.1,2 Though synchronous telehealth has been vital in delivering behavioral health services since the pandemic,3 asynchronous self-help interventions such as smartphone apps have garnered particular interest because they can expand access beyond the limited number of clinicians and augment ongoing care. With smartphone ownership at near-universal levels in the United States,4 patients report high interest in downloading a mental health app,5,6 and many have already used a mental health app to monitor their condition.5–8 Understanding the efficacy of apps to improve mental health outcomes is therefore important.
Though many people are downloading apps, little is known about the efficacy of these apps in improving mental health conditions.9 Most apps available on public app marketplaces lack any scientific evidence.10,11 The research studies that have been conducted report promising but at times conflicting results. Several meta-analyses summarizing the efficacy of mental health apps demonstrate that these interventions have a small effect (g = 0.30–0.33) on the reduction of anxiety symptoms12,13 and a small effect (g = 0.27–0.38) on depression,13–15 leading some researchers to conclude that these apps hold clinical promise.13,16 Yet many of the analyzed studies are limited by high risk of bias17 and lack sufficiently rigorous control groups.18 Few studies assessing mental health apps' efficacy incorporate active control groups, instead relying on waitlist19 or treatment-as-usual controls.20 Given the potential of a digital placebo effect,21 which may improve outcomes in the group offered the digital intervention relative to the group that is not,22 assessing the effect size of apps remains challenging today. For example, a meta-review of 14 meta-analyses points to the importance of considering the rigor of comparison groups: the magnitude of mental health apps' effects diminished as control groups became more robust,18 a trend further corroborated by two meta-analyses.12,15 With no universal standard for how to study mental health apps' efficacy,20,23 studies might continue to use weak comparison groups. There is a clear need for more research utilizing active control groups to help elucidate the efficacy of mental health apps.
Another challenge in generating efficacy data is low patient engagement with mental health apps, which is most pervasive in real-world usage.24–26 Studies of naturalistic use patterns show that few users engage with a mental health app after five days.27 Though engagement is typically higher in research settings, many studies still struggle to maintain it. Sufficient engagement with a mental health app must precede any analysis of, or claims about, its efficacy.28 Consequently, improving app engagement has become a central focus of researchers.25 Strategies to boost patient engagement have included monetary compensation29 and improved app design,30 but these have yet to demonstrate a significant sustained impact. Human support, often in the form of coaching, has long been cited as one strategy to improve engagement rates31–34 and is now offered routinely as a part of app interventions.
In this article, we explore how the scalable and reusable nature of apps can support different studies that complement each other and offer novel methods to explore the role of the digital placebo, app interventions, and the impact of human support. Using the open-source mental health application mindLAMP, we compare data collected from two studies that feature two distinct implementations of the same app: an unguided mood monitoring implementation versus a coached implementation. Through this comparison, we generate a potential effect size of mindLAMP with human support on the reduction of anxiety and depression symptoms, which will be used to help power a formal subsequent study.
Methods
Recruitment
Both the control and intervention studies recruited, through online posts, participants who had elevated levels of stress or anxiety.
The methods for the control implementation have been published.35 Inclusion criteria were English fluency, a mindLAMP-compatible smartphone, a college email address, a student ID card, and a score of 14 or higher on the Perceived Stress Scale36 on a screening survey. Six hundred and ninety-five participants were recruited entirely through online posts and met all the requirements above. Eighty-three participants were excluded for never downloading the app. Participants who did not complete any of the weekly Patient Health Questionnaire-9 (PHQ-9) or Generalized Anxiety Disorder-7 (GAD-7) surveys were further excluded from the study.
For the six-week intervention implementation37 (Camacho et al., 2023 [Forthcoming]), individuals with anxiety and depression were recruited through Researchmatch.org from July 2021 to February 2022. To participate, individuals needed to be at least 18 years old, own a mindLAMP-compatible smartphone, and score a minimum of 5 on the GAD-7 scale.38
Protocol
During the intake of the control implementation, a trained research assistant introduced mindLAMP to the participant and answered any relevant questions about this 28-day study. Participants completed a survey consisting of the Perceived Stress Scale (PSS)39 questionnaire, demographic questions, and a question asking if they had ever had COVID-19. The survey was completed and stored on REDCap. Throughout the study, mindLAMP sent push notifications each day for a brief survey and bi-weekly for a longer survey to be completed in the app. The daily survey consisted of 11 questions selected from the Patient Health Questionnaire-9,40 the Generalized Anxiety Disorder-7 scale, the Prodromal Questionnaire-16 (PQ-16),41 and the PSS. The bi-weekly survey included all questions from the PHQ-9, GAD-7, PSS, UCLA Loneliness Scale,42 PQ-16, Pittsburgh Sleep Quality Index (PSQI),43 and D-WAI.44 The mindLAMP app also offered access to the interventions available in the intervention version (see below). However, no engagement support, such as scheduled notifications or human coaches, was provided in the control version. This lack of engagement support resulted in low app usage despite the therapeutic interventions being readily accessible in the app. After the study concluded, a research assistant emailed participants an exit survey and instructions to uninstall the app. Participants in the control implementation were compensated up to $50 depending on completion of the bi-weekly surveys alone.
After starting the intervention implementation, participants met virtually twice (for up to 20 minutes per session) with a digital navigator, or coach, for app-related and engagement support but not for clinical advice. The digital navigators were trained through our standardized and published training to support app use in both care and research.34 After every meeting, participants completed the PHQ-9, GAD-7, PSQI, PSS, SIAS,45 UCLA Loneliness Scale, Flourishing Scale,46 SUS,47 and the D-WAI, all of which were completed and stored on REDCap. Participants in the intervention implementation were given $75 after completing the study, with payment not tied to app use.
Both groups had access to the full mindLAMP application, which, in addition to the surveys described above, included mindfulness activities such as meditations and CBT-based interventions. Both studies were approved by the Institutional Review Board of Beth Israel Deaconess Medical Center, and written informed consent was obtained from all participants.
mindLAMP
Both studies utilized mindLAMP, an open-source smartphone app developed by our team. mindLAMP's customizable platform couples in-app interventions, such as mindfulness activities and meditations, with robust digital phenotyping capabilities. A more detailed description of the mindLAMP application's development is reported elsewhere,48 and screenshots of the app are included in Figure 1.
Figure 1.
Screenshots from mindLAMP
The leftmost screenshot depicts the “feed” tab, which contains activities assigned for that day. The second image is a screenshot of a Patient Health Questionnaire-9 (PHQ-9) question in a weekly survey. The third image depicts the “Manage” tab, and the last image depicts one of the many available interventions.
Analysis
To extract survey data from mindLAMP, Cortex,49 an open-source data analysis pipeline for mindLAMP, was used. All analyses excluded PHQ-9 or GAD-7 results if the corresponding initial score was less than 5. IterativeImputer from the scikit-learn library was applied to impute the missing final PHQ-9 or GAD-7 scores, with initial score, age, gender, and race and ethnicity serving as the predictor variables. Only the control implementation's data set required this multiple imputation. Data from the intervention implementation were analyzed at both weeks 4 and 6, since that study lasted 6 weeks, while data from the control implementation were analyzed only at week 4, since that study lasted 4 weeks. Non-parametric statistical tests were used because the pre-post data of both groups were not all normally distributed; we confirmed this by visually inspecting histograms (Figure 2) and through the Shapiro-Wilk test. The Wilcoxon signed-rank test (scipy.stats.wilcoxon) was applied to detect any statistically significant longitudinal improvement in scores within the intervention and control studies. The Wilcoxon rank-sum test (scipy.stats.ranksums) was used to detect any statistically significant difference in the percentage improvement of clinical scores between the two cohorts. A complete case analysis was also performed to compare the findings with the imputed results. The Pearson correlation (scipy.stats.pearsonr) was used to measure any linear relationship between demographic and baseline variables and outcomes. All analysis was completed in Python on Jupyter Notebook.
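The pipeline above can be sketched in Python. The column names and toy values below are illustrative assumptions rather than the study's actual schema, and demographic variables are assumed to have been numerically encoded before imputation:

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (enables IterativeImputer)
from sklearn.impute import IterativeImputer
from scipy.stats import wilcoxon, ranksums, pearsonr, shapiro

def impute_final_scores(df: pd.DataFrame) -> pd.DataFrame:
    """Impute missing final survey scores from baseline score and demographics."""
    cols = ["initial", "age", "gender", "race_ethnicity", "final"]
    out = df.copy()
    out[cols] = IterativeImputer(random_state=0).fit_transform(df[cols])
    return out

# Toy cohort (hypothetical values; gender/race_ethnicity as numeric codes).
df = pd.DataFrame({
    "initial": [12, 9, 15, 8, 11, 14, 10, 13],
    "age": [21, 22, 20, 23, 21, 24, 22, 20],
    "gender": [0, 1, 1, 0, 1, 0, 1, 0],
    "race_ethnicity": [0, 1, 2, 0, 1, 2, 0, 1],
    "final": [8.0, 7.0, np.nan, 6.0, 9.0, np.nan, 8.0, 10.0],
})
imputed = impute_final_scores(df)

# Percentage change per participant (negative = symptom improvement).
pct_change = 100 * (imputed["final"] - imputed["initial"]) / imputed["initial"]

# Normality check motivating the non-parametric tests (Shapiro-Wilk).
_, p_normal = shapiro(pct_change)

# Within-group pre/post change: Wilcoxon signed-rank test.
_, p_within = wilcoxon(imputed["initial"], imputed["final"])

# Between-group difference in percentage improvement: Wilcoxon rank-sum test.
other_group = [-30.0, -10.0, -25.0, 5.0, -15.0, -20.0]  # toy comparison cohort
_, p_between = ranksums(pct_change, other_group)

# Linear relationship between baseline severity and outcome.
r, p_corr = pearsonr(imputed["initial"], pct_change)
```

This is a sketch, not the study's code: the actual analysis drew data through Cortex and applied the same imputation and tests separately to PHQ-9 and GAD-7.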
Figure 2.
Histograms depicting baseline symptom severity and clinical outcomes
A set of histograms depicting baseline symptom severity (PHQ-9 or GAD-7) and percentage change for both the intervention group (top) and the active control group (bottom).
GAD-7: Generalized Anxiety Disorder-7; PHQ-9: Patient Health Questionnaire-9.
Results
Demographics
Demographic characteristics of the control and intervention implementation participants included in the analysis are depicted in Table 1.
Table 1.
Participant characteristics in each study.
| Sample characteristics | Control | Intervention | p-value |
|---|---|---|---|
| Age (years), mean (SD) | 21.5 (3.9) | 35.4 (12.5) | <0.001 a |
| Gender | | | <0.001 b |
| Male | 104 (31.7%) | 26 (16.7%) | |
| Female | 211 (64.3%) | 120 (76.9%) | |
| Non-binary | 12 (3.7%) | 6 (3.8%) | |
| Did not disclose | 1 (0.3%) | 4 (2.6%) | |
| Race and ethnicity | | | <0.001 b |
| White | 162 (49.4%) | 112 (71.8%) | |
| Asian | 64 (19.5%) | 10 (6.4%) | |
| Latinx | 55 (16.8%) | 10 (6.4%) | |
| Black or African American | 30 (9.1%) | 13 (8.3%) | |
| Native Hawaiian or Pacific Islander | 3 (0.9%) | 1 (0.6%) | |
| Other | 14 (4.3%) | 10 (6.4%) | |
| Total | 328 | 156 | |

a Kruskal-Wallis rank sum test.
b Freeman-Halton test.
Outcomes
Participants in the intervention implementation improved from the beginning of the study to both weeks 4 and 6 for both GAD-7 and PHQ-9. The mean change for each is noted in Table 2. The mean change for all psychological assessments administered in the intervention implementation is provided in Supplemental Appendix Table 1.
Table 2.
Mean percentage change in GAD-7 and PHQ-9 values in the intervention study.
| | GAD-7 (SD) (n = 142) | PHQ-9 (SD) (n = 140) |
|---|---|---|
| Week 4 | −7.80* (30.18) | −8.14* (31.69) |
| Week 6 | −13.42* (41.23) | −14.19* (37.85) |
GAD-7: Generalized Anxiety Disorder-7; PHQ-9: Patient Health Questionnaire-9.
*Indicates the percentage change is statistically significant at p < 0.01.
While the control implementation participants had a significant reduction in PHQ-9 scores, there was no significant reduction in GAD-7 scores over the four weeks. After the missing PHQ-9 and GAD-7 outcomes were imputed, there was a statistically significant change in both PHQ-9 and GAD-7. The mean changes for both the complete cases and all cases including imputed values are described in Table 3. There was no difference in baseline symptom severity between those who completed the full four weeks and those who did not in the control implementation.
Table 3.
Mean percentage change in GAD-7 and PHQ-9 scores in the control study.
| | Complete case: n | Complete case: mean (SD) | Imputed: n | Imputed: mean (SD) |
|---|---|---|---|---|
| GAD-7 % change | 44 | 4.61 (55.99) | 267 | 3.79 (35.52)* |
| PHQ-9 % change | 52 | −6.02 (51.63)* | 298 | −1.39 (32.79)** |
GAD-7: Generalized Anxiety Disorder-7; PHQ-9: Patient Health Questionnaire-9.
*Indicates the percentage change is statistically significant at p < 0.05.
**Indicates the percentage change is statistically significant at p < 0.01.
In both the intervention and control implementations, baseline demographics did not correlate with outcomes for either GAD-7 or PHQ-9. However, in both groups, baseline symptom severity had a slight negative correlation with the percentage change in scores (intervention: −0.24 for PHQ-9 and −0.22 for GAD-7; control: −0.37 for PHQ-9 and −0.35 for GAD-7). Histograms depicting baseline symptom severity and outcomes are included in Figure 2.
Effect size calculations results
For analyses including the imputed control implementation values, there was a significant difference in percentage improvement between the control group's and the intervention group's GAD-7 scores at week 4. This was not the case for the PHQ-9. However, when the intervention cohort's PHQ-9 improvements at week 6 were compared, their percentage changes were significantly different from those of the control cohort. For the complete case analysis, the only significant difference was between the intervention group's week 6 GAD-7 and the control group's week 4 GAD-7. A more detailed breakdown of the corresponding p-values and Hedges' g values is available in Table 4.
Table 4.
Effect size results.
| | Intervention week 4 GAD-7 | Intervention week 6 GAD-7 | Intervention week 4 PHQ-9 | Intervention week 6 PHQ-9 |
|---|---|---|---|---|
| Control GAD-7 | p = 0.12 (g = 0.33) | p < 0.05 (g = 0.39) | – | – |
| Control PHQ-9 | – | – | p = 0.58 (g = 0.06) | p = 0.73 (g = 0.19) |
| Imputed control GAD-7 | p < 0.01 (g = 0.34) | p < 0.01 (g = 0.45) | – | – |
| Imputed control PHQ-9 | – | – | p = 0.07 (g = 0.21) | p < 0.01 (g = 0.37) |
GAD-7: Generalized Anxiety Disorder-7; PHQ-9: Patient Health Questionnaire-9.
p-values were obtained with the Wilcoxon rank-sum test; Hedges' g (effect size) values are shown in parentheses. Since the intervention implementation lasted up to 6 weeks and the control implementation only 4 weeks, intervention data were compared at both week 4 and week 6, as the data were readily available. Results at week 4 are the main focus of this article.
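The Hedges' g values in Table 4 are standardized mean differences with a small-sample bias correction. A minimal sketch of that calculation, using toy percentage-change vectors rather than the study's data:

```python
import numpy as np

def hedges_g(a, b):
    """Hedges' g: Cohen's d scaled by the small-sample correction factor J."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    na, nb = len(a), len(b)
    # Pooled standard deviation (ddof=1 for sample variances).
    sp = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                 / (na + nb - 2))
    d = (a.mean() - b.mean()) / sp
    # Common approximation of the exact gamma-based correction.
    J = 1 - 3 / (4 * (na + nb) - 9)
    return d * J

# Toy percentage-change vectors (negative = improvement; not study data).
intervention = [-20.0, -15.0, -5.0, -30.0, -10.0, -25.0]
control = [-5.0, 0.0, -10.0, 5.0, -2.0, 3.0]
g = hedges_g(intervention, control)  # negative: intervention improved more
```

The correction factor J shrinks Cohen's d slightly for small samples; at the group sizes in this study (hundreds of participants), J is close to 1 and g is nearly identical to d.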
Discussion
This exploratory analysis illustrates that implementing mindLAMP as a coached intervention, versus as an unguided mood monitoring tool, has a small effect on the reduction of anxiety and depression symptoms. Engagement with the app was higher in the intervention study than in the control mood monitoring case, likely due to the support offered in the two digital navigator check-ins. Even though the mood monitoring condition had full access to the interventions, its participants' use of those interventions was negligible, allowing us to treat it as the control condition. Though preliminary, our reported effect sizes of g = 0.34 for anxiety and g = 0.21 for depression (week 4 results) are in line with other research and useful for informing and powering a future study to further elucidate mindLAMP's effect size while featuring an appropriate control group.
The potential effect size of mindLAMP aligns with other reported effect sizes of mobile health apps for improving anxiety and depression12,15 and with the effect size of transdiagnostic treatments.50 This effect size is exciting given the scalability of apps and the asynchronous delivery of the intervention. However, its magnitude is of course lower than that of traditional care,51 suggesting an adjunctive role for apps. While our study does not allow us to assess the impact of the two 20-minute digital navigator meetings in isolation, the use of coaches as a scalable means to drive engagement with apps appears promising given the shortage of licensed mental health clinicians52 and has been widely cited.
Our results also suggest that mental health apps like mindLAMP have the potential to help individuals from various backgrounds. In our analysis, baseline demographic characteristics did not correlate with outcomes. This indicates that a variety of patients can benefit from the use of mindLAMP, which aligns with a previous study conducted by our team.53 In addition, the negative correlation between initial PHQ-9 or GAD-7 severity and outcome suggests that mindLAMP can help individuals with more severe symptoms as much as, if not more than, patients with milder presentations. These results support the ability of the app to increase access to care for patients with the highest needs.
Our results also raise questions for future research. While the effect size of g = 0.34 for mindLAMP, comparing the intervention group's GAD-7 changes at week 4 with the control group's, was significant, this was not the case for the PHQ-9 (g = 0.21). This result does not necessarily mean that mindLAMP is ineffective at improving depression outcomes: when the control group's PHQ-9 outcomes were compared to the intervention group's PHQ-9 outcomes at week 6, there was a significant effect size of g = 0.37. Rather, it indicates that mindLAMP can be effective but may require longer sustained usage to improve depressive symptoms. Our results also highlight the importance of controlling for a digital placebo effect, as those in the control implementation had a significant reduction in depression, but not anxiety, symptoms; this could contribute to mindLAMP's non-significant effect size on depression at week 4. Furthermore, the potentially delayed effect of mindLAMP on depression symptoms could reflect the short duration of this study: compared to mindfulness meditations and deep breathing exercises, which might bring more immediate relief of anxiety symptoms, internalizing and applying the CBT exercises in mindLAMP to improve depressive symptoms may require more time than the study's short duration could capture.
A strength of this study is that it reflects a real-world use case of a mental health app with additional methodological rigor. Specifically, by using mindLAMP in both the control and intervention implementations, our study was able to control for access to the app, app reminders, app aesthetics, access to the interventions, and other confounders often not controlled for in research studies. Given that mindLAMP is free, open-source software that is easy to configure into different versions, other teams can use our results to assess their own effect sizes or build upon them to create more effective interventions. Furthermore, the human support provided in the intervention study conformed to peer-reviewed and publicly available training,34 which is rare, as often little information is reported about digital navigators' training, qualifications, and responsibilities.54,55 While our approach does not replace the need for rigorous randomized controlled trials, being able to assess the potential digital placebo effect before conducting a costly and time-consuming study offers numerous advantages for agile research. It also offers a useful baseline for assessing the preliminary impact of cultural adaptations and additional interventions that could be added to mindLAMP in the future.
Limitations
Our study is limited by several factors. First, to calculate the preliminary effect size of mindLAMP, we combined two studies that were conducted separately. Because of this, the demographics of the two cohorts differed, as noted in Table 1. However, current literature suggests that app usage does not differ between younger and middle-aged adults.32 Another limitation is that the protocols differed slightly: the control implementation lasted 4 weeks, while the intervention lasted 6 weeks. This limitation was mitigated by analyzing and reporting data at both time points. Furthermore, the control implementation required a score of 14 or higher on the PSS, whereas the intervention implementation required a score of 5 or higher on the GAD-7 survey. In future studies, the inclusion criteria will be the same for both the control and intervention groups. To partially address this limitation, significant PSS and GAD-7 correlations are provided in Supplemental Tables 2 and 3. Lastly, there was high missingness of data in the control condition, which is common in many digital health research studies, including those that have received FDA approval.25 To help counteract this missingness, multiple imputation was used due to its ability to handle large fractions of missing information, and a complete case analysis was included for comparison. In light of these limitations, we consider this analysis exploratory, as reflected in the title of this article. The results will inform a randomized controlled trial featuring similar participant demographics between groups, adherence to the same protocol and psychological assessments, and digital navigator support to promote participant engagement.
Conclusion
In this exploratory analysis, we present a means of utilizing the scalable, reusable, and customizable nature of apps to explore the potential effect size of mindLAMP in reducing anxiety and depression symptoms. These promising results will be used to inform the study design of a large, well-powered study to be conducted in the future. Despite the nontraditional methods presented here, our team implemented some of the most cited methodological strengths in digital health (i.e., an active digital control, human support, and replicable materials) in a manner that others can use to advance their own research today.
Supplemental Material
Supplemental material, sj-docx-1-dhj-10.1177_20552076231187244 for An exploratory analysis of the effect size of the mobile mental health Application, mindLAMP by Sarah Chang, Noy Alon and John Torous in DIGITAL HEALTH
Acknowledgements
None to declare.
Footnotes
Contributorship: JT designed the study, wrote the protocol, and completed data collection. SC and JT analyzed the data. SC and NA drafted the first draft of the manuscript. All authors contributed to revising the manuscript and have approved the final draft.
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Ethical approval: Both studies were approved by the Institutional Review Board of Beth Israel Deaconess Medical Center (protocols 2020P000310 and 2020P000589).
Funding: The author(s) received no financial support for the research, authorship, and/or publication of this article.
Guarantor: JT
ORCID iD: John Torous https://orcid.org/0000-0002-5362-7937
Supplemental material: Supplemental material for this article is available online.
References
- 1. Insel TR. Digital phenotyping: a global tool for psychiatry. World Psychiatry 2018; 17: 276–277.
- 2. Torous J, Jän Myrick K, Rauseo-Ricupero N, et al. Digital mental health and COVID-19: using technology today to accelerate the curve on access and quality tomorrow. JMIR Ment Health 2020; 7: e18848.
- 3. Whaibeh E, Mahmoud H, Naal H. Telemental health in the context of a pandemic: the COVID-19 experience. Curr Treat Options Psychiatry 2020; 7: 198–202.
- 4. Pew Research Center. Mobile fact sheet, https://www.pewresearch.org/internet/fact-sheet/mobile (2022, accessed 27 January 2023).
- 5. Beard C, Silverman AL, Forgeard M, et al. Smartphone, social media, and mental health app use in an acute transdiagnostic psychiatric sample. JMIR Mhealth Uhealth 2019; 7: e13364.
- 6. Buck B, Chander A, Tauscher J, et al. Mhealth for young adults with early psychosis: user preferences and their relationship to attitudes about treatment-seeking. J Technol Behav Sci 2021; 6: 667–676.
- 7. Borghouts J, Eikey EV, Mark G, et al. Understanding mental health app use among community college students: web-based survey study. J Med Internet Res 2021; 23: e27745.
- 8. Cohen KA, Stiles-Shields C, Winquist N, et al. Traditional and nontraditional mental healthcare services: usage and preferences among adolescents and younger adults. J Behav Health Serv Res 2021; 48: 537–553.
- 9. Marshall JM, Dunstan DA, Bartik W. Clinical or gimmickal: the use and effectiveness of mobile mental health apps for treating anxiety and depression. Aust N Z J Psychiatry 2020; 54: 20–28.
- 10. Larsen ME, Nicholas J, Christensen H. Quantifying app store dynamics: longitudinal tracking of mental health apps. JMIR Mhealth Uhealth 2016; 4: e96.
- 11. Larsen ME, Huckvale K, Nicholas J, et al. Using science to sell apps: evaluation of mental health app store quality claims. NPJ Digit Med 2019; 2: 18.
- 12. Firth J, Torous J, Nicholas J, et al. Can smartphone mental health interventions reduce symptoms of anxiety? A meta-analysis of randomized controlled trials. J Affect Disord 2017; 218: 15–22.
- 13. Linardon J, Cuijpers P, Carlbring P, et al. The efficacy of app-supported smartphone interventions for mental health problems: a meta-analysis of randomized controlled trials. World Psychiatry 2019; 18: 325–336.
- 14. Six SG, Byrne KA, Tibbett TP, et al. Examining the effectiveness of gamification in mental health apps for depression: systematic review and meta-analysis. JMIR Ment Health 2021; 8: e32199.
- 15. Firth J, Torous J, Nicholas J, et al. The efficacy of smartphone-based mental health interventions for depressive symptoms: a meta-analysis of randomized controlled trials. World Psychiatry 2017; 16: 287–298.
- 16. Lecomte T, Potvin S, Corbière M, et al. Mobile apps for mental health issues: meta-review of meta-analyses. JMIR Mhealth Uhealth 2020; 8: e17458.
- 17. Eisenstadt M, Liverpool S, Infanti E, et al. Mobile apps that promote emotion regulation, positive mental health, and well-being in the general population: systematic review and meta-analysis. JMIR Ment Health 2021; 8: e31170.
- 18. Goldberg SB, Lam SU, Simonsson O, et al. Mobile phone-based interventions for mental health: a systematic meta-review of 14 meta-analyses of randomized controlled trials. PLOS Digit Health 2022; 1: e0000002.
- 19. Donker T, Cornelisz I, van Klaveren C, et al. Effectiveness of self-guided app-based virtual reality cognitive behavior therapy for acrophobia: a randomized clinical trial. JAMA Psychiatry 2019; 76: 682.
- 20. Mohr DC, Azocar F, Bertagnolli A, et al. Banbury forum consensus statement on the path forward for digital mental health treatment. Psychiatr Serv 2021; 72: 677–683.
- 21. Torous J, Firth J. The digital placebo effect: mobile mental health meets clinical psychiatry. Lancet Psychiatry 2016; 3: 100–102.
- 22. Ghaemi SN, Sverdlov O, van Dam J, et al. A smartphone-based intervention as an adjunct to standard-of-care treatment for schizophrenia: randomized controlled trial. JMIR Form Res 2022; 6: e29154.
- 23. Torous J, Stern AD, Bourgeois FT. Regulatory considerations to keep pace with innovation in digital health products. NPJ Digit Med 2022; 5: 121.
- 24. Aziz M, Erbad A, Almourad MB, et al. Did usage of mental health apps change during COVID-19? A comparative study based on an objective recording of usage data and demographics. Life 2022; 12: 1266.
- 25. Nwosu A, Boardman S, Husain MM, et al. Digital therapeutics for mental health: is attrition the Achilles heel? Epub ahead of print 2022.
- 26. Torous J, Lipschitz J, Ng M, et al. Dropout rates in clinical trials of smartphone apps for depressive symptoms: a systematic review and meta-analysis. J Affect Disord 2020; 263: 413–419.
- 27. Baumel A, Muench F, Edan S, et al. Objective user engagement with mental health apps: systematic search and panel-based usage analysis. J Med Internet Res 2019; 21: e14567.
- 28. Lattie EG, Cohen KA, Hersch E, et al. Uptake and effectiveness of a self-guided mobile app platform for college student mental health. Internet Interv 2022; 27: 100493.
- 29. Shreekumar A, Vautrey PL. Managing emotions: the effects of online mindfulness meditation on mental health and economic behavior. Boston, MA: MIT, 2022.
- 30. Huberty J, Green J, Puzia M, et al. Evaluation of mood check-in feature for participation in meditation mobile app users: retrospective longitudinal analysis. JMIR Mhealth Uhealth 2021; 9: e27106.
- 31. Ben-Zeev D, Drake R, Marsch L. Clinical technology specialists. Br Med J 2015; 350: h945.
- 32. Jacob C, Sezgin E, Sanchez-Vazquez A, et al. Sociotechnical factors affecting patients' adoption of mobile health tools: systematic literature review and narrative synthesis. JMIR Mhealth Uhealth 2022; 10: e36284.
- 33. Noel VA, Carpenter-Song E, Acquilano SC, et al. The technology specialist: a 21st century support role in clinical care. NPJ Digit Med 2019; 2: 61.
- 34. Wisniewski H, Gorrindo T, Rauseo-Ricupero N, et al. The role of digital navigators in promoting clinical care and technology integration into practice. Digit Biomark 2020; 4: 119–135.
- 35. Melcher J, Patel S, Scheuer L, et al. Assessing engagement features in an observational study of mental health apps in college students. Psychiatry Res 2022; 310: 114470.
- 36. Cohen S, Kamarck T, Mermelstein R. A global measure of perceived stress. J Health Soc Behav 1983; 24: 385.
- 37. Camacho E, Chang S, Currey D, et al. The impact of guided vs supportive coaching on mental health app engagement and clinical outcomes: a feasibility study [submitted to Health Informatics J].
- 38.Spitzer RL, Kroenke K, Williams JBW, et al. Generalized Anxiety Disorder 7. Epub ahead of print 10 October 2011.
- 39.Lee EH. Review of the psychometric evidence of the perceived stress scale. Asian Nurs Res (Korean Soc Nurs Sci) 2012; 6: 121–127. [DOI] [PubMed] [Google Scholar]
- 40.Kroenke K, Spitzer RL, Williams JBW. The PHQ-9: validity of a brief depression severity measure. J Gen Intern Med 2001; 16: 606–613. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 41.Ising HK, Veling W, Loewy RL, et al. The validity of the 16-item version of the prodromal questionnaire (PQ-16) to screen for ultra-high risk of developing psychosis in the general help-seeking population. Schizophr Bull 2012; 38: 1288–1296. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 42.Russell D, Peplau LA, Ferguson ML. Developing a measure of loneliness. J Pers Assess 1978; 42: 290–294. [DOI] [PubMed] [Google Scholar]
- 43.Buysse DJ, Reynolds CF, Monk TH, et al. The Pittsburgh sleep quality index: a new instrument for psychiatric practice and research. Psychiatry Res 1989; 28: 193–213. [DOI] [PubMed] [Google Scholar]
- 44.Henson P, Wisniewski H, Hollis C, et al. Digital mental health apps and the therapeutic alliance: initial review. BJPsych Open 2019; 5: e15. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 45.Mattick RP, Clarke JC. Development and validation of measures of social phobia scrutiny fear and social interaction anxiety. Behav Res Ther 1998; 36: 455–470. [DOI] [PubMed] [Google Scholar]
- 46.Diener E, Wirtz D, Biswas-Diener R, et al. New measures of well-being. In: Diener E. (ed.) Assessing well-being. Dordrecht: Springer Netherlands, 4 June 2009, pp.247–266. [Google Scholar]
- 47.Brooke J. SUS: a quick and dirty usability scale. Usability Eval Ind 1995 Dec; 189(3): 189–194. [Google Scholar]
- 48.Torous J, Wisniewski H, Bird B, et al. Creating a digital health smartphone app and digital phenotyping platform for mental health and diverse healthcare needs: an interdisciplinary and collaborative approach. J Technol Behav Sci 2019; 4: 73–85. [Google Scholar]
- 49.Division of Digital Psychiatry. What is cortex?: Lamp platform, https://docs.lamp.digital/data_science/cortex/what_is_cortex/ (2020, accessed 27 January 2023).
- 50.Cuijpers P, Miguel C, Ciharova M, et al. Transdiagnostic treatment of depression and anxiety: a meta-analysis. Psychol Med 2023: 1–12. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 51.Cuijpers P, Cristea IA, Karyotaki E, et al. How effective are cognitive behavior therapies for major depression and anxiety disorders? A meta-analytic update of the evidence. World Psychiatry 2016; 15: 245–258. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 52.Substance Abuse and Mental Health Services Association (SAMHSA). Behavioral Health Workforce Report 2020: 1–38. [Google Scholar]
- 53.Chang S, Gray L, Torous J. Smartphone app engagement and clinical outcomes in a hybrid clinic. Psychiatry Res 2023; 319: 115015. [DOI] [PubMed] [Google Scholar]
- 54.Bernstein EE, Weingarden H, Wolfe EC, et al. Human support in app-based cognitive behavioral therapies for emotional disorders: scoping review. J Med Internet Res 2022; 24: e33307. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 55.Meyer A, Wisniewski H, Torous J. Coaching to support mental health apps: exploratory narrative review. JMIR Hum Factors 2022; 9: e28301. [DOI] [PMC free article] [PubMed] [Google Scholar]
Associated Data
Supplementary Materials
Supplemental material, sj-docx-1-dhj-10.1177_20552076231187244, for An exploratory analysis of the effect size of the mobile mental health application, mindLAMP, by Sarah Chang, Noy Alon and John Torous in DIGITAL HEALTH