Journal of Medical Internet Research. 2019 Sep 25;21(9):e14567. doi: 10.2196/14567

Objective User Engagement With Mental Health Apps: Systematic Search and Panel-Based Usage Analysis

Amit Baumel 1, Frederick Muench 2, Stav Edan 1, John M Kane 3
Editor: Gunther Eysenbach
Reviewed by: Armando Rotondi, Beenish Chaudhry, Lisa Vizer, Muhammad Shahzad Aslam, Lucia Bonet
PMCID: PMC6785720  PMID: 31573916

Abstract

Background

Understanding patterns of real-world usage of mental health apps is key to maximizing their potential to increase public self-management of care. Although developer-led studies have published results on the use of mental health apps in real-world settings, no study has yet systematically examined the usage patterns of a large sample of mental health apps using independently collected data.

Objective

Our aim is to present real-world objective data on user engagement with popular mental health apps.

Methods

A systematic engine search was conducted using Google Play to identify Android apps with 10,000 installs or more targeting anxiety, depression, or emotional well-being. Coding of apps included primary incorporated techniques and mental health focus. Behavioral data on real-world usage were obtained from a panel that provides aggregated nonpersonal information on user engagement with mobile apps.

Results

In total, 93 apps met the inclusion criteria (installs: median 100,000, IQR 90,000). The median percentage of daily active users (open rate) was 4.0% (IQR 4.7%) with a difference between trackers (median 6.3%, IQR 10.2%) and peer-support apps (median 17.0%) versus breathing exercise apps (median 1.6%, IQR 1.6%; all z≥3.42, all P<.001). Among active users, daily minutes of use were significantly higher for mindfulness/meditation (median 21.47, IQR 15.00) and peer support (median 35.08, n=2) apps than for apps incorporating other techniques (tracker, breathing exercise, psychoeducation: medians range 3.53-8.32; all z≥2.11, all P<.05). The medians of app 15-day and 30-day retention rates were 3.9% (IQR 10.3%) and 3.3% (IQR 6.2%), respectively. On day 30, peer support (median 8.9%, n=2), mindfulness/meditation (median 4.7%, IQR 6.2%), and tracker apps (median 6.1%, IQR 20.4%) had significantly higher retention rates than breathing exercise apps (median 0.0%, IQR 0.0%; all z≥2.18, all P≤.04). The pattern of daily use presented a descriptive peak toward the evening for apps incorporating most techniques (tracker, psychoeducation, and peer support) except mindfulness/meditation, which exhibited two peaks (morning and night).

Conclusions

Although the number of app installs and daily active minutes of use may seem high, only a small portion of users actually used the apps for a long period of time. More studies using different datasets are needed to understand this phenomenon and the ways in which users self-manage their condition in real-world settings.

Keywords: user engagement, usage, adherence, retention, mental health, depression, anxiety, mHealth

Introduction

The wide dissemination of mobile phones and the leap in the development and distribution of mobile health (mHealth) apps have altered the ways in which scholars conceptualize care management in the behavioral health domain. The conversation has shifted from patients and providers to individuals who can now engage in self-care around the clock outside of traditional health care settings (eg, [1-3]). Approximately 77% of the US adult population, and more than 89% of those younger than 50 years, now own a mobile phone [4,5] on which they can install and use apps. This widespread use has established a market for mHealth apps. Accordingly, a 2015 World Health Organization survey identified approximately 15,000 mobile apps for health care, with at least 29% designed for mental health [6].

The use of unguided apps has the potential to increase access to care in a scalable manner by reducing the costs associated with service uptake [7,8]. However, the impact of digital interventions is limited by their ability to engage users in therapeutic activities and to support user adherence to the therapeutic process [9,10]. Digital interventions require individuals to engage with self-care outside of traditional settings; therefore, individuals’ engagement must compete with other events in their daily lives and endure fluctuating motivation to be involved in effortful behavior [11]. As a result, user engagement with mobile apps and websites across the behavior change spectrum is low in the absence of human support [12-14]. Furthermore, various studies have suggested that most users of unguided Web-based programs exit websites before the full completion of the offered program [9,10,15,16]. For example, Christensen and colleagues [17] reported that less than 1% of users completed all modules in MoodGym, an open-access website for depression. In a systematic review of published articles reporting real-world user engagement with unguided programs for depression, anxiety, or mood enhancement, Fleming and colleagues [18] reported that 7% to 42% of users of Web- and app-based programs engaged in moderate use (completing between 40% and 60% of modular fixed-length programs or continuing to use the app after 4 weeks). For example, the developers of the PTSD Coach mobile app reported a usage decline over time, with 41.6% continuing to use the app 1 month after installation and 19.4% after 6 months [19]. Among Happify mobile app users, 3.5% completed a 6-week assessment. However, the authors noted that these users might have completed assessments without engaging in other content [20] (see [18] for a review).

Understanding patterns of real-world usage of e-mental health apps outside of empirical trials is key to maximizing the potential of apps to increase the public self-management of care. Utilization in real-world settings may differ from that in study settings for several reasons. First, empirical study settings include enrollment and assessment procedures that are not part of real-world utilization of the app, as trials largely emphasize internal validity over real-world generalizability [13]. Ebert and Baumeister [21] claim, for example, that within randomized trials “the securing of commitment represents an adherence-promoting element in self-help interventions.” It is reasonable to assume that the human contact provided by research coordinators, provision of ongoing assessments, and reimbursement to incentivize the completion of assessments—none of which are available in real-world use—impact engagement patterns with the interventions. Second, from an external validity perspective, recruitment challenges in trials are often addressed by increasing the reach to potential participants through the expansion of participating venues and the refinement of social media strategies [13]. In this way, researchers unintentionally recruit people who are much more likely to adhere to e-mental health technologies than people in the general population who download and try available programs “in the wild.” Such assumptions are supported by a systematic review of internet interventions for anxiety and depression, which found that the rates of attrition in randomized controlled trials were lower than the reported dropout rates from open-access websites [22].

Overall, there is a need to understand how the general population engages with the most popular unguided mobile apps targeting anxiety, depression, or emotional well-being, and whether there is a difference in how individuals engage with these apps depending on the mental health focus or incorporated techniques. Although some developer-led studies have published results on the use of individual mental health apps deployed in real-world settings, to the best of our knowledge, no study has examined a large sample of mental health apps relying on independently collected data. This investigation becomes feasible by leveraging the big data commonly generated and stored by digital platforms that record user traffic in the wild [23,24]. Drawing on such data, this examination provides benchmarks of app usage in the real world, where the general public is expected to benefit from engagement with unguided programs. This information could shed light on specific engagement problems and opportunities for new intervention development and may offer a resource for researchers and developers who want to study and compare their app performance with similar apps.

For this study, a panel provided objective aggregated nonpersonal data on user engagement with mobile apps to analyze patterns of mental health app usage. The three primary aims were to (1) describe common usage patterns of popular unguided apps based on available metrics, (2) identify patterns of user retention over the first 30 days after app installation, and (3) explore whether these patterns differ based on the app’s mental health focus and primary incorporated techniques.

Methods

Search Strategy

The search strategy aimed at identifying the most-installed unguided apps targeting depression, anxiety-related problems, or mental health. We used keywords related to depression and anxiety because of the high prevalence of these conditions [25,26]. We also included mental health apps that focused on happiness or the enhancement of mental health (ie, mindfulness meditations) because our previous work identified them as highly popular mental health tools [27,28]. We conducted a systematic engine search of the Google Play Store in November 2018 using the following terms: “depression” OR “mood” OR “anxiety” OR “panic attack” OR “phobia” OR “social phobia” OR “PTSD” OR “posttraumatic stress disorder” OR “stress reduction” OR “worry relief” OR “OCD” OR “obsessive compulsive disorder” OR “mental health” OR “emotional well-being” OR “happiness.” One researcher documented all the apps emerging from the first 100 search results of each keyword, removed duplicates, and sorted them alphabetically. We also included a manual search of apps presented on MindTools.io [27] and PsyberGuide [29].
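The documentation and deduplication step is mechanical. Below is a minimal sketch of it, assuming the first 100 results per keyword were exported to a CSV with keyword, app_id, and app_title columns (a hypothetical layout, since Google Play offers no official search API):

```python
import pandas as pd

# Hypothetical export: one row per (keyword, result) pair taken from
# the first 100 Google Play results for each search term.
results = pd.read_csv("play_search_results.csv")  # columns: keyword, app_id, app_title

# Union across keywords, drop duplicates by app id, sort alphabetically.
unique_apps = (
    results.drop_duplicates(subset="app_id")
           .sort_values("app_title")
           .reset_index(drop=True)
)
print(f"{len(unique_apps)} unique apps to screen")
```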

App Screening and Inclusion Criteria

Determining Apps’ Number of Installs Threshold

To avoid including apps without a representative number of users, and to determine a minimum threshold for inclusion, we assessed the install categories presented by Google Play based on the number of app installs (eg, 10,000, 50,000 installs). Table 1 presents a preliminary analysis of the number of identified apps in each install category and the aggregated minimum number of app installs and corresponding percentages. This preliminary analysis considered apps with at least 5000 installs, after removing clearly nonrelevant apps based on their titles (ie, apps that were clearly not targeted at emotional well-being, such as Heart Rate Monitor & Pulse Checker, 7 Minute Workout, 30 Day Fitness Challenge). Adding all the apps in the 5000-install category would have increased the total sample of users by less than 0.5%. Therefore, we set an inclusion threshold of 10,000 app installs. Table 1 also shows that a small number of apps within the higher install categories accounted for most app installs. To make sure that including a large portion of apps with a relatively small number of installs would not bias the results, we also examined whether the pattern of results differed based on the number of app installs; this analysis is described in the data analysis section.

Table 1. Analysis of install categories based on the number of apps in each category.

| Install category | Apps identified, n | Minimum identified app installs within this category^a, n | Cumulative frequency of app installs based on category threshold^b, n | Added percentage of installs to the overall sample^c, % |
|---|---|---|---|---|
| ≥10,000,000 | 2 | 20,000,000 | 20,000,000 | 100.00 |
| 5,000,000-9,999,999 | 6 | 30,000,000 | 50,000,000 | 60.00 |
| 1,000,000-4,999,999 | 21 | 21,000,000 | 71,000,000 | 29.58 |
| 500,000-999,999 | 23 | 11,500,000 | 82,500,000 | 13.94 |
| 100,000-499,999 | 69 | 6,900,000 | 89,400,000 | 7.72 |
| 50,000-99,999 | 33 | 1,650,000 | 91,050,000 | 1.81 |
| 10,000-49,999 | 103 | 1,030,000 | 92,080,000 | 1.12 |
| 5000-9999 | 66 | 330,000 | 92,410,000 | 0.36 |

^a The number of apps multiplied by the minimum number of installs based on the install category.

^b The accumulated number of app installs in all install categories above and including the current install category.

^c The added percentage of installs to the total sample if the current install category is added to the analysis; it represents the percentage of the total number of app installs within this category divided by the accumulated number of app installs based on the current category threshold.
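The cumulative columns of Table 1 follow directly from the category minima. A short sketch that reproduces the minimum-installs, cumulative-frequency, and added-percentage columns from the per-category app counts (values taken from Table 1):

```python
# (category minimum installs, number of apps identified), ordered high to low.
categories = [
    (10_000_000, 2), (5_000_000, 6), (1_000_000, 21), (500_000, 23),
    (100_000, 69), (50_000, 33), (10_000, 103), (5_000, 66),
]

cumulative = 0
for minimum, n_apps in categories:
    in_category = minimum * n_apps               # minimum identified installs in this category
    cumulative += in_category                    # cumulative frequency down to this threshold
    added_pct = 100 * in_category / cumulative   # share this category adds to the sample
    print(f">={minimum:>10,}: {in_category:>12,} installs, "
          f"cumulative {cumulative:>12,}, adds {added_pct:6.2f}%")
```

Running this reproduces the table, including the less than 0.5% contribution (0.36%) of the 5000-9999 category that motivated the 10,000-install threshold.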

Inclusion and Exclusion Criteria

To be included in this review, apps had to:

  1. Be in English;

  2. Have at least 10,000 installs documented on Google Play;

  3. Focus on mental illness, mental health, or emotional well-being not specifically related to another medical condition (for example, we excluded apps specifically focused on stress reduction due to a physical medical issue such as heart attack); and

  4. Incorporate recognized techniques aimed at promoting self-management of mental health problems such as coping with negative symptoms (eg, feeling nervous, loss of energy), achieving positive results (eg, feeling better), or symptom management (eg, mood tracking). We excluded apps focused on the incorporation of sham techniques (see Multimedia Appendix 1 for a definition of sham techniques).

We excluded apps that:

  1. Required payment for installation or provided a free trial for only a limited amount of time, because this would be expected to bias program usage (free-to-install apps that included in-app purchases were not excluded);

  2. Were therapist-based (eg, telepsychiatry) because the study was focused on unguided interventions; and

  3. Were not meant to be used more than a few times (eg, tests, a one-time exposure technique) or were merely magazines.

Two independent reviewers screened the apps based on the inclusion and exclusion criteria. All disagreements were discussed with a third author with reference to the apps until consensus was reached.

Coding

Two independent reviewers coded the apps’ incorporated techniques based on the following categories: mindfulness/meditation, tracker (including diary or journal), psychoeducation, peer support, and breathing exercise (not practiced as part of a meditation program). These categories were based on previous work on the therapeutic components of mental health apps [27,30], drawing on the thematic analysis method suggested by Braun and Clarke [31]. The categories were designed to represent nonoverlapping components of potential therapeutic engagement (see Multimedia Appendix 2 for definitions of categories). Although our goal was to identify how specific techniques related to patterns of app use, our metrics did not enable us to differentiate between various techniques incorporated within the same app (ie, we could not tell which parts of the app users were using). Therefore, we also coded a “primary technique” in cases where the app mostly incorporated one technique that was deemed the main reason for the app’s use (eg, mindfulness/meditation). Notably, this limitation meant we could not account for app features that might influence user engagement but were not identified as a primary incorporated technique. Similarly, it was not feasible to target specific theoretical modalities, such as cognitive behavioral therapy: because nearly all apps included some of its components, these were impossible to dismantle given our data.

An app’s mental health focus was determined in the following manner: first, the app’s description had to explicitly state that it targeted people with [mental health focus] and, second, most of the techniques used within the app had to have been built to help users cope with or manage their symptoms directly related to the mental health focus. We grouped apps based on several mental health foci. Under “mental health problems,” we included apps that were focused on supporting people coping with depression, anxiety-related disorders, and emotional difficulties. We also subcoded the app with the terms (a) anxiety-related disorders or (b) depression if the app specifically targeted only one of these aims. (During our coding process, we did not identify another theme for the remaining apps.) Under “happiness,” we included apps that focused on nurturing happiness or general positivity (eg, exercising gratitude, happiness assessment, suggestions for activities nurturing positive feelings), rather than the management of mental health states or problems.

During our coding process, we found greater ambiguity in the descriptions of apps with a primary incorporated technique of mindfulness/meditation, which leaned more toward enhancing emotional well-being (ie, helping users achieve a positive sense of experience and good mental health) but also aimed at stress reduction. Therefore, we grouped mindfulness and meditation apps separately and did not attribute either of the two mental health foci to them. For this reason, and to enable a proper comparison between categories, we present the mindfulness/meditation category in both the mental health focus and technique outcomes, although the results are identical.

A Cohen kappa interrater agreement of .92 was obtained for coding the variables of interest (incorporated technique, primary technique, and mental health focus). All disagreements were discussed with a third author with reference to the apps until consensus was reached.
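For readers who want to reproduce the agreement statistic, here is a minimal sketch computing Cohen kappa for two raters' codes with scikit-learn; the labels below are invented for illustration and are not the study's actual coding data:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical primary-technique codes assigned independently by two reviewers.
rater_1 = ["mindfulness", "tracker", "tracker", "psychoeducation", "breathing"]
rater_2 = ["mindfulness", "tracker", "peer_support", "psychoeducation", "breathing"]

# Chance-corrected agreement between the two raters.
kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen kappa: {kappa:.2f}")
```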

Behavioral Data on User Engagement in the Real World

Information on user traffic was obtained from SimilarWeb’s Pro panel data [32]. The panel provides aggregated nonpersonal information on user engagement with websites and mobile apps worldwide to enable Web and mobile app traffic research and analytics. The panel is based on several sources of anonymized usage data, such as data obtained from consenting users of mobile apps (ie, products). A dedicated product team at SimilarWeb is responsible for building and partnering with hundreds of high-value consumer products that make up the panel. According to SimilarWeb, the products are used across diverse audiences without cluttering the user with advertisements. In return for the products’ benefits, users contribute to the panel by enabling the seamless, anonymous documentation of their online and mobile app usage [32]. The data are not used by SimilarWeb or provided to any third parties for the purposes of marketing, advertising, or targeting of individual subjects. The data-gathering procedures comply with data privacy laws in the way data are collected, anonymized, stored, secured, and used, and they are updated regularly based on evolving data privacy legislation and requirements, such as the European Union’s General Data Protection Regulation [33].

The validity of these data was examined in a previous study [28]. An Oath researcher [34] (RW) examined 30 randomly selected mobile apps with data on SimilarWeb and usage data in Oath’s independent records. The researcher examined the correlation between the average number of user sessions per day in the two datasets, finding a very strong Spearman correlation (N=30, r=.77, P<.001). In this study, we also examined the Spearman correlation between the app install categories presented on Google Play (eg, 10,000, 50,000) and the number of downloads documented on SimilarWeb, and found a very strong correlation (N=93, r=.81, P<.001). These findings suggest sufficient convergent validity, for which values above .70 are recommended [35].
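This convergent-validity check is a rank correlation between paired per-app values from the two sources. A minimal sketch with SciPy, using invented numbers in place of the study's actual per-app data:

```python
from scipy.stats import spearmanr

# Hypothetical paired observations for the same five apps.
play_install_category = [10_000, 50_000, 100_000, 500_000, 1_000_000]
similarweb_downloads  = [12_400, 61_000,  95_000, 640_000, 1_200_000]

# Spearman rank correlation between the two data sources.
rho, p_value = spearmanr(play_install_category, similarweb_downloads)
print(f"Spearman r = {rho:.2f}, P = {p_value:.3f}")
```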

The study was approved by the University of Haifa Institutional Review Board, Haifa, Israel. The measures were set to include data gathered over a 12-month period, from August 1, 2017, to July 31, 2018. For each app, available panel metrics included the app open rate (the average percentage of daily active users out of the total sample of people who currently had the app installed), the average number of sessions per day per daily active user, and the average daily minutes of use per daily active user. User 30-day retention comprised the percentage of users who opened the app on each day between day 1 and day 30 out of the number of users who installed and opened the app on day 0. Usage patterns by time were available only for apps with a very large number of users, represented by two metrics, average percentage of use per hour (24 hours; eg, 7:00 am, 8:00 am) and per day (7 days; eg, Sunday, Monday), both calculated based on total app usage.
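To make these definitions concrete, here is a minimal sketch, under assumed inputs (a hypothetical table of app-open events and an install count; the panel's actual pipeline is not public), showing how the open rate and day-n retention reduce to simple ratios:

```python
import pandas as pd

# Hypothetical inputs: one row per (user_id, day) on which the app was opened,
# plus the number of users who currently have the app installed.
opens = pd.DataFrame({
    "user_id": [1, 2, 1, 3, 1],
    "day":     [0, 0, 1, 0, 30],
})
installed_users = 50

# Open rate: daily active users divided by the current install base.
dau = opens.groupby("day")["user_id"].nunique()
open_rate = dau / installed_users

# Day-n retention: share of day-0 openers who also opened on day n.
day0 = set(opens.loc[opens["day"] == 0, "user_id"])
day30 = set(opens.loc[opens["day"] == 30, "user_id"])
retention_30 = len(day0 & day30) / len(day0)
print(open_rate)
print(f"30-day retention: {retention_30:.0%}")
```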

Data Analysis

We did not assume a normal distribution of the metrics; therefore, medians and interquartile ranges (IQRs) were used as descriptive statistics. In cases in which a category included a small number of apps (n≤5), we used the range instead of the IQR. To examine differences in usage metrics between apps with different mental health foci or techniques, a Kruskal-Wallis one-way analysis of variance (ANOVA) was performed, followed by Mann-Whitney U tests to identify the source of the difference. To examine dependencies in the distribution of categorical values in relevant cases, we used chi-square tests. Because most app installs came from a small number of apps with a large number of installs (see Table 1), we conducted a sensitivity analysis to examine whether including apps with a smaller number of installs would bias the results. Mann-Whitney U tests were conducted to compare the distributions of the usage patterns of the top 5 installed apps and the remaining apps in each category presented in the results section (among categories that included more than five apps). We picked the top 5 apps based on their install category in Google Play. In cases in which several apps “competed” for fifth place within the same install category, the app with the higher number of downloads (as documented in the SimilarWeb user panel) was chosen.
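As an illustration of this omnibus-then-pairwise flow, here is a minimal sketch with SciPy, using invented daily-minutes values for three hypothetical technique groups:

```python
from scipy.stats import kruskal, mannwhitneyu

# Hypothetical daily minutes of use per active user, by primary technique.
mindfulness = [18.2, 25.4, 21.5, 30.1, 19.8]
tracker     = [6.1, 7.3, 8.8, 5.2, 9.4]
breathing   = [8.0, 4.5, 12.3, 7.7, 6.9]

# Omnibus Kruskal-Wallis test across the groups...
h_stat, p_omnibus = kruskal(mindfulness, tracker, breathing)
print(f"Kruskal-Wallis H = {h_stat:.2f}, P = {p_omnibus:.3f}")

# ...followed by pairwise Mann-Whitney U tests to locate the difference.
if p_omnibus < .05:
    u_stat, p_pair = mannwhitneyu(mindfulness, tracker, alternative="two-sided")
    print(f"mindfulness vs tracker: U = {u_stat:.1f}, P = {p_pair:.3f}")
```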

Results

Screening

Figure 1 presents the app inclusion flow diagram. The engine search and manual searches produced a total of 386 apps with 10,000 installs or more. Through the first screening process, 299 apps were identified and assessed for detailed evaluation, and 93 apps were finally included in the study analysis (see Multimedia Appendix 3 for a complete list of included apps).

Figure 1. App inclusion flow diagram.

Description of Apps

The mental health focus of 59 (63%) apps was a mental health problem. Of these, 19 focused specifically on anxiety-related disorders and 4 focused specifically on depression. In addition, 8 (9%) apps focused on happiness, and 26 (28%) apps focused on the enhancement of emotional well-being through mindfulness/meditation. The distribution of apps based on incorporated techniques is presented in Table 2. Overall, 60 of 93 (65%) apps had a primary incorporated technique, and 33 (36%) apps had two or more incorporated techniques, none of which was primary. Mindfulness/meditation was the most frequent primary technique (26/93, 28%), followed by use of a tracker (22/93, 24%). Psychoeducation was the most frequent salient technique used not as a primary technique (35/93, 38%), followed by use of a tracker (28/93, 30%).

Table 2. Distribution of incorporated techniques in the app sample (N=93).

| Incorporated technique | Primary technique, n (%) | Cotechnique^a, n (%) | Total, n (%) |
|---|---|---|---|
| Mindfulness/meditation | 26 (28) | 14 (15) | 40 (43) |
| Tracker | 22 (24) | 28 (30) | 50 (54) |
| Breathing exercise | 7 (8) | 20 (22) | 27 (29) |
| Psychoeducation | 3 (3) | 35 (38) | 38 (41) |
| Peer support | 2 (2) | 7 (8) | 9 (10) |

^a The technique is saliently presented in the app but is not considered a primary technique.

App Usage by Daily Active Users

All apps had complete metrics on app usage by daily active users. Medians and IQRs of daily app usage are presented in Table 3 based on the app’s mental health focus and in Table 4 based on the app’s incorporated techniques. As shown in Table 3, the median app open rate was 4.0% (IQR 4.7%), with medians of 3.28 (IQR 2.53) daily sessions and 13.03 (IQR 14.27) minutes of app use per active user. Daily active usage of mindfulness/meditation apps (median 21.47, IQR 15.00) was significantly higher than the usage of apps for mental health problems (median 10.02, IQR 10.60; z=4.64, P<.001) or for happiness (median 7.77, IQR 6.90; z=3.82, P<.001). No other significant difference in app usage was found between mental health foci, including between anxiety- and depression-related apps. As seen in Table 4, daily minutes of use were significantly higher for mindfulness/meditation (median 21.47, IQR 15.00) and peer support (median 35.08, n=2) apps than for other techniques (all z≥2.11, all P<.05). In addition, tracker (median 6.3%, IQR 10.2%) and peer support (median 17.0%, n=2) apps had significantly higher open rates than breathing exercise apps (median 1.6%, IQR 1.6%; all z≥3.42, all P<.001). No significant differences in usage patterns were found for apps without a primary technique that incorporated more than one technique.

Table 3. App usage based on app mental health focus (N=93).

| Mental health focus | Apps, n | Installation category, median (IQR) | Open rate (%), median (IQR) | Daily sessions per active user, median (IQR) | Daily minutes of use per active user, median (IQR)^a |
|---|---|---|---|---|---|
| All apps | 93 | 100,000 (90,000) | 4.0 (4.7) | 3.28 (2.53) | 13.03 (14.27) |
| Mental health problems | 59 | 50,000 (90,000) | 4.0 (5.1) | 3.77 (3.15) | 10.02 (10.60)* |
| — Anxiety | 19 | 10,000 (40,000) | 2.6 (2.5) | 3.58 (3.49) | 8.17 (9.42) |
| — Depression | 4 | 100,000 (50,000-100,000^b) | 4.8 (3.0-6.8^b) | 5.22 (3.97-6.55^b) | 6.97 (2.05-15.12^b) |
| Happiness | 8 | 100,000 (50,000) | 3.7 (5.3) | 3.50 (4.18) | 7.77 (6.90)* |
| Mindfulness/meditation^c | 26 | 100,000 (650,000) | 4.1 (3.3) | 2.96 (1.66) | 21.47 (15.00)** |

^a Categories with different numbers of asterisks (*, **) within a column are significantly different (P<.05) based on our analytical approach, which included Kruskal-Wallis one-way ANOVA at the variable level, followed by Mann-Whitney U tests.

^b Due to the small number of included apps, brackets in this cell reflect the range (minimum-maximum) and not the IQR.

^c Mindfulness/meditation is presented as a separate mental health focus because these apps were not attributed to another focus; they target enhancement of well-being as well as stress reduction.

Table 4. App usage based on app incorporated technique (N=93).

| Incorporated technique | Apps, n | Installation category, median (IQR) | Open rate (%), median (IQR)^a | Sessions per active user, median (IQR) | Daily minutes of use per active user, median (IQR)^a |
|---|---|---|---|---|---|
| Primary technique | | | | | |
| — Mindfulness/meditation | 26 | 100,000 (650,000) | 4.1 (3.3) | 2.96 (1.66) | 21.47 (15.00)* |
| — Tracker | 22 | 50,000 (90,000) | 6.3 (10.2)* | 4.58 (4.47) | 7.27 (8.83)** |
| — Breathing exercise^b | 7 | 10,000 (40,000) | 1.6 (1.6)** | 2.19 (1.23) | 8.32 (19.02)** |
| — Psychoeducation | 3 | 10,000 (10,000-100,000^c) | 3.0 (2.5-3.3^c) | 4.16 (2.57-4.80^c) | 3.53 (2.07-19.23^c)** |
| — Peer support^d | 2 | 300,000 (N/A^e) | 17.0 (N/A)* | 8.67 (N/A) | 35.08 (N/A)* |
| No primary technique | | | | | |
| — 2 techniques | 17 | 50,000 (90,000) | 4.0 (5.6) | 3.18 (1.40) | 7.83 (11.93) |
| — ≥3 techniques^f | 16 | 100,000 (50,000) | 3.2 (3.1) | 4.06 (3.91) | 12.88 (7.13) |

^a Categories with different numbers of asterisks (*, **) within a column are significantly different (P<.05) based on our analytical approach, which included Kruskal-Wallis one-way ANOVA at the variable level, followed by Mann-Whitney U tests.

^b Not including mindfulness/meditation.

^c Due to the small number of included apps, brackets in this cell reflect the range (minimum-maximum) and not the IQR.

^d Due to the small number of included apps, IQR or range could not be calculated (marked N/A).

^e N/A: not applicable.

^f Includes two apps that use a chatbot (Wysa, Woebot); their results did not show a consistently different pattern.

User 30-Day Retention

Fifty-nine apps (63%) had data on user retention. Chi-square tests for independence revealed no difference between apps with or without user retention data in the distribution of mental health foci (χ²(2)=2.1, P=.36) or primary incorporated techniques (χ²(4)=3.8, P=.44). Figure 2 presents user 30-day retention by the app’s mental health focus; Figure 3 presents user 30-day retention by the app’s incorporated technique. In both figures, there is a sharp decline of more than 80% in app open rates between day 1 and day 10, whereas the differences between day 15 and day 30 are smaller and represent a decline of approximately 20% in app open rates. Figure 2 reveals that, relative to users who opened the app on day 0, the median app open rate was as follows: 69.4% (IQR 27.8%) of users opened it on day 1, 3.9% (IQR 10.3%) on day 15, and 3.3% (IQR 6.2%) on day 30. Kruskal-Wallis one-way ANOVAs revealed no significant differences in app open rates on day 30 based on mental health focus (H(2)=1.88, P=.39) and a significant difference in app open rates on day 30 based on incorporated technique (H(5)=11.31, P=.046). Mann-Whitney U tests revealed that on day 30 peer support (median 8.9%), mindfulness/meditation (median 4.7%, IQR 6.2%), and tracker/diary apps (median 6.1%, IQR 20.4%) had significantly higher retention rates than breathing exercise apps (median 0.0%, IQR 0.0%; all z≥2.18, all P≤.04). This pattern of difference is also descriptively apparent in 15-day retention, in which the median retention for breathing exercise apps was 0.0% (IQR 0.0%), whereas the medians for peer support, mindfulness/meditation, and tracker/diary apps ranged from 4.9% (IQR 7.1%) to 11.9% (IQR 0.7%).

Figure 2. App 30-day retention by mental health focus. The percentages reflect the number of users who opened the app from day 1 to day 30 out of the number of users who installed and opened the app on day 0.

Figure 3. App 30-day retention by primary incorporated technique. The percentages reflect the number of users who opened the app from day 1 to day 30 out of the number of users who installed and opened the app on day 0.

Usage Pattern by Hours and Days

Sixteen apps had data on hourly and daily app usage. Figure 4 presents the hourly usage patterns of apps and Figure 5 presents the daily usage patterns. Because the number of apps with available data was small, we only present categories with data on more than three apps and did not conduct statistical testing to compare usage among the different categories. For hourly usage, the results pointed to a peak in app usage in the evening (8:00 pm) for apps targeting mental health problems. The results also showed that mindfulness/meditation apps had two usage peaks: one in the morning (7:00 am-9:00 am) and the other in the late evening (10:00 pm-midnight). In terms of daily usage, the results showed a peak in app usage on Thursday for mindfulness/meditation apps.

Figure 4. Hourly usage pattern. Usage is presented by hour out of the total app usage; therefore, the sum of percentages within each category is 100%. Note: only the subset of apps for which data were available is included; “All apps” includes both categories and one app targeting happiness.

Figure 5. Daily usage pattern. Percentage of app usage is presented by day out of the total app usage; therefore, the sum of percentages within each category is 100%. Note: only the subset of apps for which data were available is included; “All apps” includes both categories and one app targeting happiness.

Sensitivity Analysis

We conducted a series of Mann-Whitney U tests to examine differences in app open rate, number of sessions, daily minutes of use, and 30-day retention between the top 5 installed apps and the remaining apps per mental health focus and incorporated technique. We found a significant difference in the open rate of apps targeting mental health problems, favoring the top 5 installed apps (z=1.68, P≤.05; top 5 installed apps: median 9.0%, IQR 6.9%; remaining apps: n=54, median 4.0%, IQR 4.7%). Among these five apps, one incorporated online peer support and three incorporated mood trackers. No other differences were found. A series of Mann-Whitney U tests was also conducted to examine whether app usage (app open rates, daily number of sessions, daily minutes of use) in each app category (mental health focus, incorporated technique) differed between apps with and without in-app purchases; no significant differences were found (all P>.05).

Discussion

Principal Findings

This is the first study to report the usage and retention metrics of a large number of frequently installed, unguided mental health apps as recorded “in the wild” and independent of developer-led data. Based on Google Play Store data (using keyword search terms), there were over 90 million mental health app installs documented by the end of 2018 (ie, reach). Although our findings revealed that daily active users use apps for a substantial amount of time during the day (median daily usage of 13.03 minutes), most people with the app installed on their device do not open it on any given day (median open rate of 4.0%). Furthermore, general user retention is poor, with a median 15-day retention of 3.9% and a median 30-day retention of 3.3%. These findings reflect the lower ranges of real-world retention rates reported in developer-led studies [17-20,22].

Our results also indicate that there are significant differences in app usage and user retention that are associated with the app’s incorporated techniques. Daily minutes of use were significantly higher for mindfulness/meditation (median 21.47) and peer support (median 35.08) apps than for apps incorporating other techniques. Daily open rates were significantly lower for breathing exercise apps (median 1.6%) than for apps incorporating the two techniques with the highest open rates (tracker: median 6.3%; peer support: median 17.0%). User 30-day retention was significantly lower for breathing exercise apps (median 0.0%) than for all other incorporated techniques (mindfulness/meditation: 4.7%; trackers: 6.1%; peer support: 8.9%), except for psychoeducation, which exhibited a pattern similar to the breathing exercise apps at 30-day retention. These patterns could be explained using the notion of effective engagement described by Yardley and colleagues [36], wherein there is “sufficient engagement with the intervention to achieve intended outcomes.” From this perspective, it might be that once people acquire the desired skills (breathing exercise) or knowledge (psychoeducation) they no longer use the app, thus affecting the pattern of retention over a longer period. By contrast, mindfulness/meditation apps often include guided meditations designed for repeated use over longer periods of time, while not fostering learning or direct skill acquisition.

Our findings on user retention highlight the low engagement with these apps. Although this warrants a re-evaluation of current engagement and retention strategies, it does not necessarily suggest that these apps are only helpful for a small number of users. First, we do not have data implying that users engage only with one app in the self-management of their states or conditions. However, it is difficult to assume that users are knowledgeable about the different apps available, which apps to use, and when to use them. Although there are some recommender websites [27,29,37] and approaches to help users identify the right apps [38-41], a therapeutic framework that provides guidance to users about how to use the right app at the right time could be useful. For example, in their novel study of IntelliCare—a suite of 13 apps and one Hub app accompanied by 8 weeks of coaching to encourage participants to try the apps recommended to them through the Hub app—Mohr and colleagues [42] found that 95% of participants eventually downloaded five or more of the IntelliCare apps as part of their therapeutic process. In another study, patients with schizophrenia spectrum disorders received 6 months of treatment that included health technology coaching around the use of three digital tools that were offered to patients based on their needs; 96% of patients rated the program as beneficial [43]. Future studies are needed to examine the feasibility of executing a scalable framework of care in which users receive the right app recommendation at the right time as part of a self-management routine.

Second, user retention patterns might also reflect the low burden associated with app installation (ie, the simplicity of opening the Google Play Store and clicking the app download and installation buttons), which implies that user context, motivation, and ability to engage [44] with these apps were not tested before app installation. The poor active user rates found in our analysis (median open rate of 4%) suggest that the numbers of app installs available in app stores do not provide a proper estimate of the proportion of users who actually self-manage their state by using the app. These issues further justify a previous call for the development of models to conceptualize the relationships between user state, need, ability, and motivation to engage with early interventions in the digital public space [8]. Although we need to significantly improve our ability to engage users who have made initial attempts at help-seeking, taking a public health engagement approach that is also focused on sustainability represents an important step forward in scaling effective care.

Finally, we identified that the two apps that incorporated peer support as a primary technique had relatively high engagement and retention rates. In our previous work, we defined a program’s relatability as “a good representation of a human factor that is easily relatable within the therapeutic context/process” [38]. Relational factors have also been previously acknowledged to nurture a therapeutic alliance with users [45-47] and have been demonstrated to be a quality aspect that predicts user engagement with mobile health interventions [28]. Future studies are needed to determine whether technology has a special advantage as an infrastructure that connects users and results in better engagement rates.

Limitations

This study has several limitations that should be considered. First, because we used an anonymous user panel, we did not have data about how different users used the apps and which parts of the apps were more engaging. The absence of such data means that some apps might have been more engaging due to the characteristics of their users, a phenomenon suggested previously by Ernsting and colleagues [48]. Due to this limitation, we were also only able to focus on primary incorporated techniques within the apps and not on the way different design features (not deemed to be a primary technique) may have impacted the results. Additionally, because we relied on off-the-shelf programs available to the public, we could not manipulate the programs themselves to account for aspects that lacked variability in our data, such as the impact of theoretical modalities on usage. That is, although our study’s advantage is that it presents benchmarks of real-world use independent of trial settings, direct experiments have the advantage of controlling participant identity and manipulating intervention modalities and features to identify the set of active components leading to the best outcome (eg, [49]). Such experiments could also be helpful in determining causal relationships between intervention modalities and user behaviors, based on the context of use.

Second, some techniques, such as peer support, were incorporated as a primary technique by only a small number of highly installed apps (median installation category of 300,000). However, our results did not indicate a significant difference between incorporated techniques in terms of app installs, which suggests that these usage patterns are not merely a function of app popularity.

Third, because we were limited to the metrics available on the platform, we could not examine retention rates after the first 30 days. The retention slope presented a slower decline in app open rates between day 15 and day 30; based on previous reports, it is reasonable to assume a continuous usage decline over time (eg, [19,50]), but more studies are needed to determine the magnitude of the decline.

Finally, this study was based only on Android users. Current estimates suggest that Android’s market share is approximately 88% of mobile phone users globally [51] and approximately 42.7% of mobile phone users in the United States [52]. Although these figures suggest that a substantial portion of users use the Android operating system, it would be beneficial to validate these results with datasets from the Apple market.

Conclusions

The use of digital platforms that record user traffic “in the wild” enables us to examine patterns of app usage outside of study settings and to assess real-world public engagement. Although we found daily active minutes of use to be relatively high, only a small portion of users actually used popular apps regularly. More studies leveraging different datasets are needed to understand these phenomena. On a broader level, the findings point to the importance of the ways we measure, report, and address user engagement in the real world. It would be helpful to track the context of users who eventually use apps, ideally through digital footprints, while also tracking the use of multiple apps and websites across time. Naturally, aspects related to data security and privacy have to be addressed. In addition, new studies are needed to better conceptualize users’ contexts and the ways they search for and engage with beneficial services outside of traditional health care settings.

Acknowledgments

This study was supported by the Donald & Barbara Zucker Foundation.

Abbreviations

ANOVA

analysis of variance

IQR

interquartile range

Multimedia Appendix 1

Definition of sham techniques.

Multimedia Appendix 2

Definition of coded techniques.

Multimedia Appendix 3

List of included apps.

Footnotes

Conflicts of Interest: None declared.

References

1. Krishna S, Boren S, Balas E. Healthcare via cell phones: a systematic review. Telemed J E Health. 2009 Apr;15(3):231–240. doi: 10.1089/tmj.2008.0099.
2. Norman GJ, Zabinski MF, Adams MA, Rosenberg DE, Yaroch AL, Atienza AA. A review of eHealth interventions for physical activity and dietary behavior change. Am J Prev Med. 2007 Oct;33(4):336–345. doi: 10.1016/j.amepre.2007.05.007.
3. Naslund J, Marsch L, McHugo G, Bartels S. Emerging mHealth and eHealth interventions for serious mental illness: a review of the literature. J Ment Health. 2015;24(5):321–332. doi: 10.3109/09638237.2015.1019054.
4. Pew Research Center. Mobile fact sheet. 2019 Jun 12 [accessed 2019-01-07]. http://www.pewinternet.org/fact-sheet/mobile/
5. WalkerSands Communications. How technology is expanding the scope of online commerce beyond retail. 2019 [accessed 2019-01-07]. https://www.walkersands.com/resources/the-future-of-retail-2018/
6. Anthes E. Mental health: there's an app for that. Nature. 2016 Apr 7;532(7597):20–23. doi: 10.1038/532020a.
7. Kazdin A. Addressing the treatment gap: a key challenge for extending evidence-based psychosocial interventions. Behav Res Ther. 2017 Jan;88:7–18. doi: 10.1016/j.brat.2016.06.004.
8. Baumel A, Baker J, Birnbaum ML, Christensen H, De Choudhury M, Mohr DC, Muench F, Schlosser D, Titov N, Kane JM. Summary of key issues raised in the Technology for Early Awareness of Addiction and Mental Illness (TEAAM-I) meeting. Psychiatr Serv. 2018 May 01;69(5):590–592. doi: 10.1176/appi.ps.201700270.
9. Eysenbach G. The law of attrition. J Med Internet Res. 2005;7(1):e11. doi: 10.2196/jmir.7.1.e11.
10. Christensen H, Mackinnon A. The law of attrition revisited. J Med Internet Res. 2006;8(3):e20. doi: 10.2196/jmir.8.3.e20.
11. Baumeister R, Vohs K. Self-regulation, ego depletion, and motivation. Soc Personal Psychol Compass. 2007 Nov;1(1):115–128. doi: 10.1111/j.1751-9004.2007.00001.x.
12. Kohl LF, Crutzen R, de Vries NK. Online prevention aimed at lifestyle behaviors: a systematic review of reviews. J Med Internet Res. 2013;15(7):e146. doi: 10.2196/jmir.2665.
13. Mohr DC, Weingardt KR, Reddy M, Schueller SM. Three problems with current digital mental health research…and three things we can do about them. Psychiatr Serv. 2017 May 01;68(5):427–429. doi: 10.1176/appi.ps.201600541.
14. Day J, Sanders M. Do parents benefit from help when completing a self-guided parenting program online? A randomized controlled trial comparing Triple P Online with and without telephone support. Behav Ther. 2018;49(6):1020–1038. doi: 10.1016/j.beth.2018.03.002.
15. Glasgow RE. eHealth evaluation and dissemination research. Am J Prev Med. 2007 May;32(5 Suppl):S119–S126. doi: 10.1016/j.amepre.2007.01.023.
16. Farvolden P, Denisoff E, Selby P, Bagby RM, Rudy L. Usage and longitudinal effectiveness of a Web-based self-help cognitive behavioral therapy program for panic disorder. J Med Internet Res. 2005;7(1):e7. doi: 10.2196/jmir.7.1.e7.
17. Christensen H, Griffiths KM, Korten AE, Brittliffe K, Groves C. A comparison of changes in anxiety and depression symptoms of spontaneous users and trial participants of a cognitive behavior therapy website. J Med Internet Res. 2004 Dec 22;6(4):e46. doi: 10.2196/jmir.6.4.e46.
18. Fleming T, Bavin L, Lucassen M, Stasiak K, Hopkins S, Merry S. Beyond the trial: systematic review of real-world uptake and engagement with digital self-help interventions for depression, low mood, or anxiety. J Med Internet Res. 2018 Jun 06;20(6):e199. doi: 10.2196/jmir.9275.
19. Owen JE, Jaworski BK, Kuhn E, Makin-Byrd KN, Ramsey KM, Hoffman JE. mHealth in the wild: using novel data to examine the reach, use, and impact of PTSD Coach. JMIR Ment Health. 2015;2(1):e7. doi: 10.2196/mental.3935.
20. Carpenter J, Crutchley P, Zilca RD, Schwartz HA, Smith LK, Cobb AM, Parks AC. Seeing the "big" picture: big data methods for exploring relationships between usage, language, and outcome in internet intervention data. J Med Internet Res. 2016 Aug 31;18(8):e241. doi: 10.2196/jmir.5725.
21. Ebert D, Baumeister H. Internet-based self-help interventions for depression in routine care. JAMA Psychiatry. 2017 Aug 01;74(8):852–853. doi: 10.1001/jamapsychiatry.2017.1394.
22. Christensen H, Griffiths K, Farrer L. Adherence in internet interventions for anxiety and depression. J Med Internet Res. 2009 Apr 24;11(2):e13. doi: 10.2196/jmir.1194.
23. Moller A, Merchant G, Conroy D, West R, Hekler E, Kugler KC, Michie S. Applying and advancing behavior change theories and techniques in the context of a digital health revolution: proposals for more effectively realizing untapped potential. J Behav Med. 2017 Feb;40(1):85–98. doi: 10.1007/s10865-016-9818-7.
24. Baumel A, Yom-Tov E. Predicting user adherence to behavioral eHealth interventions in the real world: examining which aspects of intervention design matter most. Transl Behav Med. 2018 Sep 08;8(5):793–798. doi: 10.1093/tbm/ibx037.
25. Demyttenaere K, Bruffaerts R, Posada-Villa J, Gasquet I, Kovess V, Lepine JP, Angermeyer MC, Bernert S, de Girolamo G, Morosini P, Polidori G, Kikkawa T, Kawakami N, Ono Y, Takeshima T, Uda H, Karam EG, Fayyad JA, Karam AN, Mneimneh ZN, Medina-Mora ME, Borges G, Lara C, de Graaf R, Ormel J, Gureje O, Shen Y, Huang Y, Zhang M, Alonso J, Haro JM, Vilagut G, Bromet EJ, Gluzman S, Webb Ch, Kessler RC, Merikangas KR, Anthony JC, Von Korff MR, Wang PS, Brugha TS, Aguilar-Gaxiola S, Lee S, Heeringa S, Pennell B, Zaslavsky AM, Ustun TB, Chatterji S; WHO World Mental Health Survey Consortium. Prevalence, severity, and unmet need for treatment of mental disorders in the World Health Organization World Mental Health Surveys. JAMA. 2004 Jun 02;291(21):2581–2590. doi: 10.1001/jama.291.21.2581.
26. World Health Organization. The World Health Report 2001. Mental Health: New Understanding, New Hope. Geneva: World Health Organization; 2001.
27. MindTools.io. [accessed 2018-06-24]. https://mindtools.io/
28. Baumel A, Kane J. Examining predictors of real-world user engagement with self-guided eHealth interventions: analysis of mobile apps and websites using a novel dataset. J Med Internet Res. 2018 Dec 14;20(12):e11491. doi: 10.2196/11491.
29. PsyberGuide. [accessed 2015-04-10]. http://psyberguide.org/
30. Baumel A, Birnbaum ML, Sucala M. A systematic review and taxonomy of published quality criteria related to the evaluation of user-facing eHealth programs. J Med Syst. 2017;41(8):128. doi: 10.1007/s10916-017-0776-6.
31. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006 Jan;3(2):77–101. doi: 10.1191/1478088706qp063oa.
32. SimilarWeb. 2018 [accessed 2018-06-18]. https://www.similarweb.com/
33. European Commission. Data protection. [accessed 2018-06-18]. https://ec.europa.eu/info/law/law-topic/data-protection_en
34. Oath. [accessed 2018-10-14]. https://www.oath.com/
35. Carlson KD, Herdman AO. Understanding the impact of convergent validity on research results. Organ Res Methods. 2010 Dec 30;15(1):17–32. doi: 10.1177/1094428110392383.
36. Yardley L, Spring BJ, Riper H, Morrison LG, Crane DH, Curtis K, Merchant GC, Naughton F, Blandford A. Understanding and promoting effective engagement with digital behavior change interventions. Am J Prev Med. 2016 Nov;51(5):833–842. doi: 10.1016/j.amepre.2016.06.015.
37. ADAA mental health apps. [accessed 2016-11-01]. https://www.adaa.org/finding-help/mobile-apps
38. Baumel A, Faber K, Mathur N, Kane JM, Muench F. Enlight: a comprehensive quality and therapeutic potential evaluation tool for mobile and web-based eHealth interventions. J Med Internet Res. 2017 Mar 21;19(3):e82. doi: 10.2196/jmir.7270.
39. Baumel A, Muench F. Heuristic evaluation of eHealth interventions: establishing standards that relate to the therapeutic process perspective. JMIR Ment Health. 2016 Jan 13;3(1):e5. doi: 10.2196/mental.4563.
40. Stoyanov SR, Hides L, Kavanagh DJ, Zelenko O, Tjondronegoro D, Mani M. Mobile app rating scale: a new tool for assessing the quality of health mobile apps. JMIR Mhealth Uhealth. 2015;3(1):e27. doi: 10.2196/mhealth.3422.
41. Torous JB, Chan SR, Gipson SY, Kim JW, Nguyen T, Luo J, Wang P. A hierarchical framework for evaluation and informed decision making regarding smartphone apps for clinical care. Psychiatr Serv. 2018 May 01;69(5):498–500. doi: 10.1176/appi.ps.201700423.
42. Mohr DC, Tomasino KN, Lattie EG, Palac HL, Kwasny MJ, Weingardt K, Karr CJ, Kaiser SM, Rossom RC, Bardsley LR, Caccamo L, Stiles-Shields C, Schueller SM. IntelliCare: an eclectic, skills-based app suite for the treatment of depression and anxiety. J Med Internet Res. 2017 Jan 05;19(1):e10. doi: 10.2196/jmir.6645.
43. Baumel A, Correll CU, Hauser M, Brunette M, Rotondi A, Ben-Zeev D, Gottlieb JD, Mueser KT, Achtyes ED, Schooler NR, Robinson DG, Gingerich S, Marcy P, Meyer-Kalos P, Kane JM. Health technology intervention after hospitalization for schizophrenia: service utilization and user satisfaction. Psychiatr Serv. 2016 Jun 1;67(9):1035–1038. doi: 10.1176/appi.ps.201500317.
44. Fogg B. A behavior model for persuasive design. In: Proceedings of Persuasive '09, the 4th International Conference on Persuasive Technology; April 26-29, 2009; Claremont, CA.
45. Cavanagh K, Millings A. (Inter)personal computing: the role of the therapeutic relationship in e-mental health. J Contemp Psychother. 2013 Jul 17;43(4):197–206. doi: 10.1007/s10879-013-9242-z.
46. Holter MT, Johansen A, Brendryen H. How a fully automated eHealth program simulates three therapeutic processes: a case study. J Med Internet Res. 2016 Jun 28;18(6):e176. doi: 10.2196/jmir.5415.
47. Barazzone N, Cavanagh K, Richards DA. Computerized cognitive behavioural therapy and the therapeutic alliance: a qualitative enquiry. Br J Clin Psychol. 2012 Nov;51(4):396–417. doi: 10.1111/j.2044-8260.2012.02035.x.
48. Ernsting C, Dombrowski S, Oedekoven M, O'Sullivan JL, Kanzler M, Kuhlmey A, Gellert P. Using smartphones and health apps to change and manage health behaviors: a population-based survey. J Med Internet Res. 2017 Apr 05;19(4):e101. doi: 10.2196/jmir.6838.
49. Collins L, Murphy S, Strecher V. The multiphase optimization strategy (MOST) and the sequential multiple assignment randomized trial (SMART): new methods for more potent eHealth interventions. Am J Prev Med. 2007 May;32(5 Suppl):S112–S118. doi: 10.1016/j.amepre.2007.01.022.
50. Cheung K, Ling W, Karr C, Weingardt K, Schueller S, Mohr D. Evaluation of a recommender app for apps for the treatment of depression and anxiety: an analysis of longitudinal user engagement. J Am Med Inform Assoc. 2018 Aug 01;25(8):955–962. doi: 10.1093/jamia/ocy023.
51. Statista. Global mobile OS market share in sales to end users from 1st quarter 2009 to 2nd quarter 2018. [accessed 2019-01-06]. https://www.statista.com/statistics/266136/global-market-share-held-by-smartphone-operating-systems/
52. StatCounter GlobalStats. Mobile operating system market share United States of America. [accessed 2019-01-06]. http://gs.statcounter.com/os-market-share/mobile/united-states-of-america
