BJPsych Open
Editorial
. 2020 Aug 11;6(5):e86. doi: 10.1192/bjo.2020.72

Beyond clicks and downloads: a call for a more comprehensive approach to measuring mobile-health app engagement

Heather L O'Brien, Emma Morton, Andrea Kampen, Steven J Barnes, Erin E Michalak
PMCID: PMC7453800  PMID: 32778200

Abstract

Downloading a mobile health (m-health) app on your smartphone does not mean you will ever use it. Telling another person about an app does not mean you like it. Using an online intervention does not mean it has had an impact on your well-being. Yet we consistently rely on downloads, clicks, ‘likes’ and other usage and popularity metrics to measure m-health app engagement. Doing so misses the complexity of how people perceive and use m-health apps in everyday life to manage mental health conditions. This article questions commonly used behavioural metrics of engagement in mental health research and care, and proposes a more comprehensive approach to measuring in-app engagement.

Keywords: Information technologies, mobile health applications, user engagement, mental health, usage data


There is no shortage of mobile health (m-health) apps for people facing mental health challenges; one 2018 study estimated 10 000 available for download.1 m-health apps track symptoms, deliver therapy and promote health behaviour interventions. Their success is often evaluated according to user engagement, typically operationalised as frequency and duration of app use, behavioural interaction with the app (for example downloads, clicks) and popularity (for example user reviews, ratings).2 Usage data is assumed to capture different types and depths of app engagement, yet misses cognitive and emotional responses to the app and is disconnected from behaviour change in real-world settings.3,4

User engagement has become an umbrella term used to describe a host of conceptually unique user-centred outcomes, including usability, acceptability, feasibility and satisfaction.5 These different user experiences are all evaluated with the same kinds of usage statistics: dwell time, bounce rates and number of downloads, logins, visits and specific interactions, such as clicking on links, watching videos, completing modules.2 This creates a schism between the concept of interest and the most salient metrics for its evaluation. It opens the door ‘for [user-engagement indicators] to be selected inappropriately, presented with bias or interpreted incorrectly’ and prevents meaningful comparisons across apps, studies and user groups.5

m-health engagement is narrowly defined as ‘user uptake … and/or ongoing use, adherence, retention, or completion data’,6 and thus focuses on quantifying rather than qualifying user engagement. Although it is seemingly objective, easy and unobtrusive to record usage data, studies have documented inconsistencies and called for standardised reporting practices.2,5,7 In addition, the positive impact of m-health apps on real-world or clinical trial outcomes is inconclusive,2,7 and the ‘beneficial dose [of apps] … or amount of exposure’ at the population level is unknown.2 This calls into question which metrics and what thresholds may be indicative of user engagement for different apps, users or mental health conditions.

Steady and sustained app use is typically viewed as positive, disengagement and non-use as negative. This emphasis on user behaviour in user-engagement evaluation misses important information about users and their contexts. User engagement may be influenced by how content is organised and presented, symptom burden, environmental stressors and supports, and the desire for social connectivity.8 These cognitive, emotional and social factors may be evaluated through surveys, interviews and app reviews,5,8 sometimes independent of usage data. Consequently, data sources may be disconnected and unable to inform each other.

It has been proposed that the field of m-health evaluation could be advanced if we understood the relationship between out-of-app and in-app engagement.3 However, a barrier to this is the way in which in-app user engagement is currently conceptualised and measured.

(Re)Defining user engagement in the m-health space

Human–computer interaction (HCI) researchers define user engagement as capturing and maintaining the attention and interest of technology users,9 and the cognitive, emotional and temporal investment made by users.10 Mental health interventions target not only behaviour, but emotional and cognitive processes,11 and social connectivity may be fundamental for self-management.8 A holistic definition of user engagement with m-health apps goes beyond what people do to how these tools address needs for information/education, social support and personal agency.

HCI researchers have identified attributes of user engagement, including user attention, interest, motivation, control and system usability and aesthetic appeal.9,10 Focusing on user-engagement attributes offers a targeted approach to measurement. For example, we might use eye-tracking or heat maps to gauge in-app attention, or brief self-report instruments to capture users’ sense of control at defined points in time during an interaction. HCI researchers have modelled user engagement as having natural ebbs and flows over the course of users’ interactions with digital tools. The process model of engagement suggests that users move through points of engagement, periods of sustained engagement, disengagement and re-engagement, and that some attributes are more salient at particular stages than others.10 Process-based models reveal that engagement is not an ‘all or nothing’ phenomenon. Thus, it is essential to measure not only interaction outcomes (for example total session duration, total app downloads) but also users’ journeys through an app: the ups and downs in interactivity and varying levels of emotional and cognitive involvement.

Qualifying the quantification of m-health app usage

Interpreting the ‘ups and downs’ of in-app user engagement involves looking at usage data differently, and connecting multiple data sources in meaningful ways. There are ‘unique cognitive, neurological or motor needs arising from mental illness’.5 People with chronic health conditions have fluctuating needs, yet apps do not take into account the diversity of individual lived experiences or the needs of different user groups, for example young people or those newly diagnosed.8 High usage is not necessarily indicative of positive clinical outcomes; it may actually reflect worsening mental health. App usage may also exacerbate negative mood by providing poor-quality or excessive information, presenting technical challenges8 or increasing users’ awareness of distressing symptoms.12

Morton et al13 found that people living with bipolar disorder experienced both negative and positive emotions toward self-monitoring; the practice made them feel that they were managing their condition effectively compared with others, but was also ‘an unpleasant reminder that they were living with bipolar disorder’. Nuanced interpretation is not revealed through usage data alone.

Condition-specific knowledge is critical, and participatory design approaches are essential for gathering insights from people with lived experience and healthcare providers. Participatory design draws upon different methods (focus groups, ethnography) to co-create and co-evaluate prototypes with users throughout the design life cycle.14 Such methods are needed to identify what goals an app should fulfil, how people want to use it and for what purposes, and how design features can reflect user preferences and goals. Such knowledge can aid in the development of appropriate engagement indicators that will facilitate the interpretation of usage and other data sources.

Apps contain different types of content (for example educational articles, quizzes, symptom trackers, social connection tools). Looking at how frequently or for how long users interact with individual content pages or features may be less informative than grouping features according to the function or need they are intended to serve, such as education, symptom tracking or social support. It may be productive to distinguish content that people typically access once (for example a quiz) from content accessed multiple times (for example sleep or exercise logs), and to consider the sequence of content interactions in terms of scaffolding engagement. Rethinking the analysis of usage data would allow for richer interpretations of why people use apps in naturalistic settings, such as for routine maintenance, affirmation or social support. It could also help tailor content according to recovery or illness stage to reduce cognitive load.13
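To make this concrete, the sketch below aggregates raw usage events by the need a feature serves rather than by individual feature. The feature names and groupings are hypothetical, invented purely for illustration; any real mapping would come from the participatory design work described above.

```python
from collections import defaultdict

# Hypothetical mapping from individual app features to the user need
# they are intended to serve; names and groupings are illustrative only.
FEATURE_GROUPS = {
    "intro_quiz": "education",
    "article_view": "education",
    "mood_log": "symptom_tracking",
    "sleep_log": "symptom_tracking",
    "peer_forum": "social_support",
}

def usage_by_function(events):
    """Aggregate raw usage events (feature, seconds) by function
    rather than by individual feature."""
    totals = defaultdict(float)
    for feature, seconds in events:
        group = FEATURE_GROUPS.get(feature, "other")
        totals[group] += seconds
    return dict(totals)

events = [("mood_log", 40), ("sleep_log", 55),
          ("article_view", 120), ("intro_quiz", 300)]
print(usage_by_function(events))
# {'symptom_tracking': 95.0, 'education': 420.0}
```

Grouped this way, the data begin to answer need-oriented questions (is this user tracking symptoms? seeking education?) that raw per-feature counts obscure.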

Algorithms facilitate the identification of engagement patterns from interactions with apps over time, typically characterised by duration (short versus long term) and frequency (low versus high). Such analyses reveal that different users have different use trajectories, but they do not explain why a pattern occurs or its significance for clinical outcomes. Diverse streams of data, including user self-reports, discussion forum transcripts, social network data and scores on symptom severity, functioning or quality of life measures, can be used to make sense of usage data. Cluster analysis is another option, whereby usage patterns are examined in concert with clinical and sociodemographic variables or other data sources, such as in-app text messages between users and coaches, to identify reasons for app use or non-use.15
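As a toy illustration of this kind of pattern analysis, the sketch below clusters invented weekly usage profiles (sessions per week, mean minutes per session) with a minimal k-means. The data and the two-cluster choice are assumptions for demonstration only; as argued above, the resulting ‘low’ and ‘high’ groups say nothing about clinical significance until they are linked with clinical and self-report data.

```python
import random

def kmeans(points, k=2, iters=20, seed=0):
    """Minimal k-means over tuples of numbers; illustrative only."""
    rng = random.Random(seed)
    centres = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its nearest centre (squared distance)
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centres[c])))
            clusters[i].append(p)
        # recompute centres; keep the old centre if a cluster is empty
        centres = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl else centres[i]
                   for i, cl in enumerate(clusters)]
    return centres, clusters

# Hypothetical (sessions/week, mean minutes/session) profiles
profiles = [(1, 3), (2, 4), (1, 5), (12, 15), (10, 18), (14, 12)]
centres, clusters = kmeans(profiles)
```

The clusters separate sparse from intensive users, but interpreting either trajectory still requires the corroborating data sources discussed in the text.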

Users weigh the benefits of using m-health apps against factors such as usability, convenience, personal risk (for example to privacy) and cognitive or emotional effort,13 which may not be accounted for in in-app assessments. Empirical studies utilising self-report data may not use validated self-report inventories, or may not use them systematically.5 Thus, incorporating different data sources must balance the burden and risk of self-reporting against its benefits, emphasise replicability in the selection and use of self-report instruments, and ensure transparency in data collection and reporting.

Toward a more comprehensive view of in-app engagement

Measurement of in-app engagement must speak to the cognitive, emotional and behavioural changes necessary for desired mental health outcomes, including symptom reduction, recovery and quality of life improvements. For this to occur, use and non-use must be connected to broader out-of-app goals, and the value of negative emotions, behaviours and cognitions to overall self-management must be considered. For example, non-use may be indicative of improved mental health and a reduced need for digital interventions.

It is tempting to want a magic, uniform formula for measuring user engagement, and this has been the appeal of usage data.

We argue instead that user-engagement indices for m-health apps must be:

  • Corroborative, where different measures including usage data, symptom severity assessment scales and subjective outcome assessments (for example quality of life) are used to determine what meaningful engagement with the app entails.

  • Outcome, rather than output, oriented. If m-health apps are meant to improve intervention effectiveness, then how they are used becomes more important than how often they are used.

  • Process based, where we expect to see ebbs and flows in usage. Rather than labelling app users as low or high engagers based on algorithmically detected use or non-use, we should adopt participatory design approaches (for example journey mapping) to appreciate how different users interact with different features of the app over time.

  • Expert driven, meaning that the expertise of people with lived experience and clinicians is included throughout the design process to identify salient needs (for example social support) and goals (for example establishing a routine, symptom management) and how these can be met with the app, as well as to inform aesthetic and content design choices.

User-engagement indices should be developed in parallel with the app itself, and draw upon condition-specific knowledge and multiple data sources. The COPE approach (Corroborative, Outcome oriented, Process based, Expert driven) necessitates collaboration among people with mental health conditions, healthcare providers and user-experience designers to develop m-health apps. Documenting each element of this framework would result in greater transparency about how design decisions were made, what is being measured and why, and how the resulting app fits into the broader mental health landscape.

Biographies

Heather O'Brien (pictured) is an Associate Professor in the School of Information at the University of British Columbia in Vancouver.

Emma Morton is an Institute of Mental Health Marshall Fellow in the Department of Psychiatry at the University of British Columbia.

Andrea Kampen is a doctoral student in the School of Information at the University of British Columbia.

Steven Barnes is a Senior Instructor in the Department of Psychology at the University of British Columbia.

Erin Michalak is a Professor in the Department of Psychiatry at the University of British Columbia.

Funding

This research is supported by a Canadian Institutes of Health Research (CIHR) Project Grant, ‘Bipolar Bridges: A Digital Health Innovation Targeting Quality of Life in Bipolar Disorder’.

Author contributions

Conceptualisation and original draft: H.L.O'B.; conceptualisation and review: E.E.M. and E.M.; critical revision of the manuscript for important intellectual content: all authors.

Declaration of interest

H.L.O'B., E.M., A.K. and S.J.B.: none. E.E.M. discloses grant funding from a private company in the 36 months prior to this publication.

Supplementary material

For supplementary material accompanying this paper visit http://dx.doi.org/10.1192/bjo.2020.72.

S2056472420000721sup001.zip (5.6MB, zip)


References

  1. Torous J, Firth J, Huckvale K, Larsen ME, Cosco TD, Carney R, et al. The emerging imperative for a consensus approach toward the rating and clinical recommendation of mental health apps. J Nerv Ment Dis 2018; 206: 662–6.
  2. Fleming T, Bavin L, Lucassen M, Stasiak K, Hopkins S, Merry S. Beyond the trial: systematic review of real-world uptake and engagement with digital self-help interventions for depression, low mood, or anxiety. J Med Internet Res 2018; 20: e199.
  3. Cole-Lewis H, Ezeanochie N, Turgiss J. Understanding health behavior technology engagement: pathway to measuring digital behavior change interventions. JMIR Form Res 2019; 3: e14052.
  4. Weisel KK, Fuhrmann LM, Berking M, Baumeister H, Cuijpers P, Ebert DD. Standalone smartphone apps for mental health: a systematic review and meta-analysis. NPJ Digit Med 2019; 2: 118.
  5. Ng MM, Firth J, Minen M, Torous J. User engagement in mental health apps: a review of measurement, reporting, and validity. Psychiatr Serv 2019; 70: 538–44.
  6. Fletcher K, Foley F, Murray G. Web-based self-management programs for bipolar disorder: insights from the online, recovery-oriented bipolar individualised tool project. J Med Internet Res 2018; 20: e11160.
  7. Torous J, Lipschitz J, Ng M, Firth J. Dropout rates in clinical trials of smartphone apps for depressive symptoms: a systematic review and meta-analysis. J Affect Disord 2019; 263: 413–9.
  8. Nicholas J, Fogarty AS, Boydell K, Christensen H. The reviews are in: a qualitative content analysis of consumer perspectives on apps for bipolar disorder. J Med Internet Res 2017; 19: e105.
  9. Jacques RD. The Nature of Engagement and its Role in Hypermedia Evaluation and Design (Doctoral dissertation). South Bank University, 1996.
  10. O'Brien HL, Toms EG. What is user engagement? A conceptual framework for defining user engagement with technology. J Assoc Inf Sci Technol 2008; 59: 938–55.
  11. Perski O, Blandford A, West R, Michie S. Conceptualising engagement with digital behaviour change interventions: a systematic review using principles from critical interpretive synthesis. Transl Behav Med 2016; 7: 254–67.
  12. Allan S, Mcleod H, Bradstreet S, Beedie S, Moir B, Gleeson J, et al. Understanding implementation of a digital self-monitoring intervention for relapse prevention in psychosis: protocol for a mixed method process evaluation. JMIR Res Protoc 2019; 8: e15634.
  13. Morton E, Hole R, Murray G, Buzwell S, Michalak E. Experiences of a web-based quality of life self-monitoring tool for individuals with bipolar disorder: a qualitative exploration. JMIR Ment Health 2019; 6: e16121.
  14. Spinuzzi C. The methodology of participatory design. Tech Commun 2005; 52: 163–74.
  15. Chen AT, Wu S, Tomasino KN, Lattie EG, Mohr DC. A multi-faceted approach to characterizing user behavior and experience in a digital mental health intervention. J Biomed Inform 2019; 94: e103187.


Articles from BJPsych Open are provided here courtesy of Royal College of Psychiatrists
