Internet Interventions. 2024 Mar 11;36:100735. doi: 10.1016/j.invent.2024.100735

The conceptualisation and measurement of engagement in digital health

Madison Milne-Ives a,b, Sophie Homer c, Jackie Andrade c, Edward Meinert a,b,d
PMCID: PMC10979253  PMID: 38558760

Abstract

Digital tools are an increasingly important component of healthcare, but their potential impact is commonly limited by a lack of user engagement. Digital health evaluations of engagement are often restricted to system usage metrics, which cannot capture a full understanding of how and why users engage with an intervention. This study aimed to examine how theory-based, multifaceted measures of engagement with digital health interventions capture different components of engagement (affective, cognitive, behavioural, micro, and macro) and to consider areas that are unclear or missing in their measurement. We identified and compared two recently developed measures that met these criteria (the Digital Behaviour Change Intervention Engagement Scale and the TWente Engagement with Ehealth Technologies Scale). Despite having similar theoretical bases and being relatively strongly correlated, there are key differences in how these scales aim to capture engagement. We discuss the implications of our analysis for how affective, cognitive, and behavioural components of engagement can be conceptualised and whether there is value in distinguishing between them. We conclude with recommendations for the circumstances in which each scale may be most useful and for how future measure development could supplement existing scales.

Keywords: Engagement, Digital health, Telemedicine, eHealth, Measure, Behaviour change

Highlights

  • Digital health engagement measures are inconsistent and often limited to system use.

  • The conceptualisation and definitions of components of engagement lack clarity.

  • Two recent scales based on similar theory capture engagement differently.

  • Further work is needed to address gaps in the measurement of engagement.

1. Introduction

The role of digital technologies in health is growing exponentially (Huckvale et al., 2019). They have the potential to support patients in improving health behaviours and self-managing health conditions (Digital Implementation Investment Guide (DIIG): Integrating Digital Interventions into Health Programmes, 2020; Forman et al., 2016; Moller et al., 2017). This is a key to addressing the increasing demands on healthcare systems, but the impact of digital health interventions is often limited by a critical factor: lack of engagement (Baumel et al., 2019; Birnbaum et al., 2015; Meyerowitz-Katz et al., 2020; Pratap et al., 2020; Torous et al., 2020a; Yeager and Benight, 2018). Despite a certain degree of engagement being necessary to achieve an intervention's intended outcomes (Yardley et al., 2016), digital health studies often include only a limited evaluation of engagement (Kelders et al., 2020b; Short et al., 2018; Yardley et al., 2016). Over the past decade, there has been a growing focus on engagement in digital health research; there is general agreement that engagement, in a digital health context, is a multifaceted concept (Cole-Lewis et al., 2019; Kelders et al., 2020c; O'Brien, 2016; Perski et al., 2017b; Wannheden et al., 2021), but there is still a lack of established definitions of these facets and agreement on how to evaluate them. This paper examines the conceptualisations of engagement in two recently developed measures as a benchmark for future theoretical and empirical work. It contributes to the literature by providing an in-depth theoretical examination of how the scales capture different components of engagement, how this can inform our conceptualisation of these components, and the strengths and limitations of each approach. It also contributes to future research by providing recommendations for when each scale might be most useful and for how future scale development could improve the measurement of engagement with digital health interventions (DHIs).

Although theoretical conceptualisations of engagement in digital health differ (Kelders et al., 2020c; Nahum-Shani et al., 2022; Perski and Short, 2021), they generally include a few common components. These can be grouped into two main categories, which are visualised in Fig. 1: what users are engaging with (the DHI or the target health behaviour (Cole-Lewis et al., 2019; Yardley et al., 2016)) and how they are engaging with it (e.g. behaviourally, affectively, cognitively). Each of these categories includes various components that are grouped differently depending on the theory. For example, Fig. 1a demonstrates how engagement can refer to micro engagement with the intervention and macro engagement with the target health behaviour that the intervention aims to change, but in some conceptualisations, micro engagement can be broken down into further components of engagement with the intervention interface or its ‘active ingredients’ (contents and features) (Cole-Lewis et al., 2019; Yardley et al., 2016). Likewise, Fig. 1b highlights key components that reflect users' type of engagement with an intervention: affective and cognitive (sometimes considered as one ‘subjective experience’ component) and behavioural (Kelders et al., 2020c; Perski et al., 2017b).

Fig. 1. Summary of key theoretical concepts of engagement with digital health. BCTs: Behaviour Change Techniques, DBCI: digital behaviour change intervention, DHI: digital health intervention, UI: user interface, UX: user experience.

(Based on (Cole-Lewis et al., 2019; Kelders et al., 2020c; O'Brien, 2016; Perski et al., 2017b; Yardley et al., 2016), previously published in (Milne-Ives et al., 2022).)

The evaluation of DHIs has not yet caught up with these theoretical advancements. The various components are rarely examined in digital health research; many studies only capture system use as a (micro behavioural) measure of engagement (Bijkerk et al., 2023; Kelders et al., 2020c; Molloy and Anderson, 2021; Short et al., 2018; Torous et al., 2020b; Yardley et al., 2016). System use metrics cannot capture affective or cognitive components (Torous et al., 2020b) or offline (‘macro’) engagement with the behaviour change process (Short et al., 2018; Yardley et al., 2016). Without context, system use cannot provide a comprehensive understanding of engagement. For example, discontinued use could indicate disengagement from the intervention and behaviour or sufficiently successful behaviour change that the intervention is no longer needed (Yardley et al., 2016). Conversely, long durations of use could reflect meaningful engagement or frustration and difficulties with the system (Torous et al., 2020b).
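To make concrete what such system use metrics usually consist of, the sketch below derives frequency, session, and rough depth-of-use summaries from a hypothetical interaction log. The log format, column names, and 30-minute session threshold are illustrative assumptions rather than a published standard, and, as discussed above, these metrics alone cannot capture why users engage or disengage.

```python
# Minimal sketch (illustrative assumptions, not from any cited study):
# deriving common "system use" (micro behavioural) metrics from an event log.
import pandas as pd

log = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2],
    "timestamp": pd.to_datetime([
        "2024-01-01 09:00", "2024-01-01 09:05", "2024-01-03 20:00",
        "2024-01-02 12:00", "2024-01-02 12:40",
    ]),
    "screen": ["home", "goal_setting", "home", "home", "progress"],
})

log = log.sort_values(["user_id", "timestamp"])
# Treat a gap of more than 30 minutes between a user's events as a new session.
new_session = log.groupby("user_id")["timestamp"].diff() > pd.Timedelta(minutes=30)
log["session"] = new_session.groupby(log["user_id"]).cumsum()

metrics = log.groupby("user_id").agg(
    n_events=("timestamp", "size"),
    n_sessions=("session", "nunique"),
    n_screens_visited=("screen", "nunique"),   # rough proxy for 'depth of use'
    days_active=("timestamp", lambda t: t.dt.date.nunique()),
)
print(metrics)
```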

A challenge with evaluating engagement more comprehensively is capturing it in a way that enables comparison of different DHIs and strategies for increasing engagement. This is made more difficult by the inconsistency in definitions and measures of engagement (Cole-Lewis et al., 2019; Kelders et al., 2020d) and a lack of criteria for what constitutes ‘good’ engagement (Mclaughlin et al., 2021; Ng et al., 2019). One review of the measurement of engagement with mental health apps found that all 40 studies concluded that their interventions had good engagement, but all used different measures and criteria thresholds (Ng et al., 2019). As affective and cognitive components of engagement are more commonly captured through qualitative methods (Milne-Ives et al., 2023), mixed methods research offers the best means of capturing a holistic and contextualised assessment of engagement (Milne-Ives et al., 2023; Yardley et al., 2016), but this is not always feasible. Self-report questionnaires are easy to implement, but there is no standard measure in digital health based on an accepted theory of engagement (Short et al., 2018) and the questionnaires used in previous research vary greatly.

The impetus for this paper arose from the authors' efforts to select and justify the use of a self-report measure of engagement for the evaluation of a DHI. This study identified and examined theory-based, self-report psychometric measures of engagement that conceptualise engagement as a multifaceted concept. The aims were to 1) unpack how the facets of engagement have been conceptualised in psychometric scale development, 2) consider what is still missing from current measures, and 3) explore what can be learnt to improve the measurement of engagement in digital health. The study will summarise the theoretical bases of the included measures and conduct an item-level comparison to assess how they capture cognitive, affective, behavioural, micro, and macro components of engagement. Based on this comparison, the discussion will examine the alignment between the scales and the components of engagement and consider the strengths and limitations of the two scales in the context of broader theory. These insights will be used to generate recommendations for how and in what contexts the scales could be used to provide valuable data about users' engagement with DHIs and how we could build on them to improve future measures of engagement.

2. Methods

2.1. Scale selection

A preliminary search for a theoretically-based, self-report measure to evaluate engagement with a DHI identified two promising scales: the Digital Behaviour Change Intervention Engagement Scale (DBCI-ES) (Perski et al., 2019a, Perski et al., 2019b) and the TWente Engagement with Ehealth Technologies Scale (TWEETS) (Kelders et al., 2020a; Kelders and Kip, 2019a). To identify other potentially relevant measures for inclusion in the analysis and to formalise the method for scale selection, we established a set of eligibility criteria (Table 1) and examined recent reviews of engagement measures. Five recent reviews were identified from searches of Google Scholar and PubMed using keywords relating to ‘engagement’, ‘measure’ or ‘method’, ‘review’, and ‘digital health’ (Bijkerk et al., 2023; Borghouts et al., 2021; Ng et al., 2019; Perski et al., 2019a; Short et al., 2018). All the scales identified from these reviews (which included the DBCI-ES and the TWEETS (Bijkerk et al., 2023)) were assessed against our eligibility criteria for scale selection to determine the scales included in the analysis (Appendix 1). The first author conducted the scale selection process.

Table 1. Criteria for scale selection.

1. Self-report questionnaire measure
2. Theoretically-based on a multifaceted conceptualisation of engagement
3. Aims to capture affective, cognitive, and behavioural components
4. Developed to measure engagement with digital health

2.2. Analysis of included scales

The included scales were assessed and compared by the first author. The comparison was performed by laying out the scale items, grouped by the theoretical components specified by each scale. The results section compares the included scales on an item level within the framework of a three-component conceptualisation of engagement (affective, cognitive, and behavioural). We chose a three-component conceptualisation to structure the paper because it enabled a more in-depth discussion of whether affective and cognitive components should be considered separately. The discussion examines the theoretical implications of individual items and their associations with the different components of engagement in relation to theory and literature more broadly, and provides recommendations for the use and improvement of measures of digital health engagement.
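As a minimal sketch of this grouping step (illustrative only, not the authors' analysis code), the items of each scale can be represented as a mapping from the components specified by the scale developers to item numbers (numbering as in Table 2), against which the item-level comparison then proceeds component by component:

```python
# Item numbers per engagement component, as assigned by the scale developers
# (see Table 2). This is a representation of the comparison framework, not data.
scale_items = {
    "TWEETS": {
        "cognitive": [1, 2, 3],
        "affective": [4, 5, 6],
        "behavioural": [7, 8, 9],
    },
    "DBCI-ES": {
        # The DBCI-ES groups affect and cognition as one 'subjective experience' construct.
        "subjective experience": {"attention": [1, 2, 3], "interest": [4, 5], "enjoyment": [6, 7, 8]},
        "behaviour": {"amount of use": [9], "depth of use": [10]},
    },
}

# The comparison works through the three-component framework one component at a time.
for component in ("cognitive", "affective", "behavioural"):
    print(component, "->", scale_items["TWEETS"][component])
```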

3. Results

3.1. Selection of measures for comparison

After assessing the scales used to measure engagement identified in previous reviews (Appendix 1), only the DBCI-ES and TWEETS met the four inclusion criteria. One other scale - the Website Evaluation Questionnaire (Morrison et al., 2014) - met most of the criteria; however, this questionnaire was developed by its authors for a particular study and a particular type of DHI, and there was insufficient discussion of its theoretical basis and conceptualisation in that paper to justify its inclusion.

3.2. Summary of included multifaceted measures of engagement

The DBCI-ES and the TWEETS were developed to provide validated self-report scales that capture multiple facets of engagement with DHIs in quick and easy-to-use measures (Kelders and Kip, 2019a; Perski et al., 2019a) and are currently the only theory-based scales to do so (Bijkerk et al., 2023; Borghouts et al., 2021; Kelders et al., 2020a; Ng et al., 2019; Perski et al., 2019a, Perski et al., 2019b; Short et al., 2018). The theoretical framework for the DBCI-ES was developed from a systematic review examining studies that measured or discussed engagement with digital health (Perski et al., 2017b). It included two constructs (‘subjective experience’ and ‘behaviour’) that incorporate five elements: "attention, interest, enjoyment, amount of use, and depth of use" (Perski et al., 2017b, Perski et al., 2019a). An initial set of scale items was deductively generated based on these five elements, then classified by experts (n = 20) and non-experts (n = 50) in a content adequacy task (Perski et al., 2017a). Exploratory factor analysis (EFA) identified two main factors - experiential and behavioural - with moderate to high internal reliability and no significant correlation between the two factors (Perski et al., 2019b). The total scale score was significantly associated with subsequent logins, mainly driven by the experiential factor (Perski et al., 2019b). Engagement with the health behaviour was not assessed.

The TWEETS was developed using a similar theoretical conceptualisation, but different methods, to address psychometric limitations of the DBCI-ES, which included a lack of independent association between the behavioural subscale and future behavioural engagement and low criterion and divergent validity between the scale and objective measures of amount of use (Kelders et al., 2020a; Perski et al., 2019a, Perski et al., 2019b). The authors conducted a scoping review of papers that provided a definition, conceptualisation, or theory of engagement in any field and identified three common components: affective, cognitive, and behavioural (Kelders et al., 2020d). Interviews with 20 users and 10 experts were deductively categorised into these three components, then inductively coded to generate key themes, which became 9 scale items (Kelders and Kip, 2019b). Unlike the DBCI-ES, EFA found a one-factor structure (Kelders et al., 2020a). The psychometric evaluation also compared the TWEETS and DBCI-ES (Kelders et al., 2020a; Perski et al., 2019b). Both had good internal consistency (α > 0.80) and moderate test-retest reliability and predictive validity for self-reported behaviour change (Kelders et al., 2020a). The TWEETS and the DBCI-ES experiential subscale were moderately to strongly correlated (although neither had strong correlations with reported app use frequency). The TWEETS was not compared to the DBCI-ES behavioural subscale because the latter was not associated with future app engagement and thus had not been shown to be valid in its own psychometric evaluation (Kelders et al., 2020a; Perski et al., 2019b). Overall, the authors concluded the TWEETS performed similarly to, and occasionally slightly better than, the DBCI-ES (Kelders et al., 2020a).
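To illustrate the kind of psychometric checks summarised above, the sketch below computes Cronbach's alpha and the correlation between two scales' total scores on simulated item responses. The data, sample size, and noise levels are invented for illustration and do not reproduce the published evaluations.

```python
# Minimal sketch (simulated responses, illustrative only) of two checks reported
# for the scales: internal consistency (Cronbach's alpha) and the correlation
# between total scores of two measures.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))                        # shared 'engagement' signal
tweets_like = latent + rng.normal(scale=0.8, size=(200, 9))   # 9 TWEETS-like items
dbci_exp_like = latent + rng.normal(scale=0.8, size=(200, 8)) # 8 experiential items

print("alpha (TWEETS-like):   ", round(cronbach_alpha(tweets_like), 2))
print("alpha (DBCI-ES-like):  ", round(cronbach_alpha(dbci_exp_like), 2))

# Pearson correlation between the two total scores
r = np.corrcoef(tweets_like.sum(axis=1), dbci_exp_like.sum(axis=1))[0, 1]
print("r between total scores:", round(r, 2))
```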

3.3. Comparison of scales

Although the scales are relatively strongly correlated and propose to measure similar components of engagement, their items look quite different (Table 2) (Kelders et al., 2020a; Perski et al., 2019b). The following subsections compare the items addressing the various engagement components.

Table 2. Comparison of DBCI-ES and TWEETS.

DBCI-ES (instruction: "How strongly did you experience the following?"; 7-point scale)

Subjective experience
  Attention: 1. Focus; 2. Inattention (R); 3. Distraction (R)
  Interest: 4. Interest; 5. Intrigue
  Enjoyment: 6. Enjoyment; 7. Annoyance (R); 8. Pleasure

Behaviour
  Amount of use: 9. How much time (in minutes) do you roughly think that you spent on the app?
  Depth of use: 10. Which of the app's components do you remember visiting? (0–100 % of components)

TWEETS (instruction: "Thinking about using [the technology] the last week, I feel that:"; 5-point scale)

Cognition
  1. [This technology] makes it easier for me to work on [my goal]
  2. [This technology] motivates me to [reach my goal]
  3. [This technology] helps me to get more insight into [my behaviour relating to the goal]

Affect
  4. I enjoy using [this technology]
  5. I enjoy seeing the progress I make in [this technology]
  6. [This technology] fits me as a person

Behaviour
  7. [This technology] is part of my daily routine
  8. [This technology] takes me little effort to use
  9. I'm able to use [this technology] as often as needed (to achieve my goals)

(R) denotes items that are reverse-scored.

3.3.1. Cognitive engagement

The DBCI-ES includes two sub-components that relate to cognitive engagement - the interest and attention users experience when using the intervention (micro engagement). Attention involves an investment or focusing of cognitive resources on specific perceptual information at the expense of other information (Fredricks et al., 2004; Hollingshead et al., 2018; Short et al., 2018), which is captured well by its items (1–3). Interest depends on interactions between cognitive and affective factors (Makransky and Petersen, 2021; Short et al., 2018; Silvia, 2006), which highlights the overlap between cognitive and affective engagement. The DBCI-ES does not seek to separate cognitive and affective components, but groups them together under ‘subjective experience’. Compared to the DBCI-ES, the TWEETS cognitive items (1–3) are less obviously aligned with interest and attention to the app (micro engagement) and more with users' motivation to achieve their behavioural goal (macro engagement) (Kelders et al., 2020a). However, the TWEETS items are framed such that the focus is on the space in between micro and macro engagement. Participants are asked how much the DHI supports their motivation, making it more explicit that DHI use is a means to an end.

3.3.2. Affective engagement

The affective component is the most similar across the scales, with both explicitly referencing ‘enjoyment’. Unlike the TWEETS, the DBCI-ES includes both positively- and negatively-valenced items (6–8). Whether negative emotions should be considered part of affective engagement is still an open question, but evidence suggests that negative emotions can play a role in increasing or decreasing engagement (Kelders et al., 2020d; Nahum-Shani et al., 2022; O'Brien and Toms, 2008; Triberti et al., 2018). As above, the concepts within affective engagement are broader in the TWEETS than the DBCI-ES; although TWEETS item 4 also captures enjoyment associated with using the technology, item 5 relates this affective experience to the users' behaviour change and item 6 captures how users feel the technology fits their personal values.

3.3.3. Behavioural engagement

The trend of differing scopes continues with behavioural engagement, where the main difference between the scales is the degree of focus on micro engagement. The DBCI-ES focuses only on intervention use (items 9–10) (Perski et al., 2019a) - data which could be captured objectively by system use metrics - whereas the TWEETS captures behavioural engagement with the intervention in the context of daily routines (item 7), effort (item 8), and goals (item 9). The latter aligns more with Yardley et al.'s concept of ‘effective engagement’ as “sufficient engagement with the intervention to achieve intended outcomes” (Yardley et al., 2016).

4. Discussion

4.1. How many components of engagement are there?

The DBCI-ES and TWEETS used two- and three-component conceptualisations of engagement, respectively; these components relate to how users engage with DHIs (subjective and behavioural vs. affective, cognitive, and behavioural; Fig. 1b). The item-level comparison of the DBCI-ES and TWEETS found that, although both scales have a similar theoretical basis and methods of development, another key difference between the scales is their scope in terms of what users engage with. The DBCI-ES instructs users to answer the questions “with regards to [their] most recent use,” which focuses the scale clearly on micro engagement (Perski et al., 2019a). In contrast, the TWEETS blurs the distinction between micro and macro engagement by relating intervention use to goal achievement (Kelders et al., 2020a). The correlation between the two scales (Kelders et al., 2020a) supports the assumption that the TWEETS is capturing micro engagement to some extent.

When comparing the scale items, we used a 3-component conceptualisation to identify how well these components could be captured, because a challenge with their measurement is that the components have no clear and agreed definitions. First, let us look at how the DBCI-ES and TWEETS items capture cognitive engagement. The DBCI-ES groups affective and cognitive engagement as subjective experience (Perski et al., 2017b), so first we must determine which of its three subjective sub-components to examine. At first glance, attention and interest might be picked out as the ‘cognitive’ components of the DBCI-ES (Perski et al., 2017b; Ben-Eliyahu et al., 2018; Kelders et al., 2020c; Nahum-Shani et al., 2022), but Short et al. define situational interest as “an emotional state brought about by situational stimuli” that is spontaneous and temporary and can play a role in how people direct their attention (Short et al., 2018). This suggests that interest relates to both cognitive and affective engagement. As the DBCI-ES does not aim to distinguish between these components, this is not an issue for the scale, but highlights the close integration between the components.

The TWEETS does aim to distinguish between affective and cognitive engagement. Items 1 and 3 appear clearly aligned with cognitive effort and consideration, but item 2 focuses on motivation, which, like interest, is generally considered to be driven by both cognitive and affective variables (Cook and Artino Jr, 2016; Michie et al., 2011; Schunk and Usher, 2019). Across theories (Cook and Artino Jr, 2016; Rodgers et al., 2014; Ryan and Deci, 2000; Schunk and Usher, 2019; Wigfield and Eccles, 2000), motivation tends to include two main components: self-efficacy or beliefs about competence (“can I do it?” (Cook and Artino Jr, 2016)) and value (“do I want to do it?” (Cook and Artino Jr, 2016)). For example, Expectancy-Value Theory identifies two factors influencing motivation: the degree to which an individual believes they will be successful and the degree to which the task has value (shaped by associated emotions from similar experiences) (Cook and Artino Jr, 2016; Wigfield and Eccles, 2000). Likewise, Social Cognitive Theory identifies goals, perceived progress and expected outcomes, values, and self-efficacy as key motivational factors (Schunk and Usher, 2019). Self-Determination Theory suggests that the type or quality, as well as the degree, of motivation can vary (Cook and Artino Jr, 2016; Ryan and Deci, 2000): extrinsic motivation is driven to varying degrees by social values or requirements, while intrinsic motivation drives behaviour simply because the task is interesting, challenging, or enjoyable (Cook and Artino Jr, 2016; Ryan and Deci, 2000). Motivation has also been defined as “cognitive-affective events” that reflect “desires to reach a goal state” (Kavanagh et al., 2020). Although these definitions capture slightly different factors that drive motivation, they all clearly highlight that it includes both affective and cognitive elements.

Turning to affective engagement, the scales seem more aligned: both capture the positive emotional experience associated with the intervention, although TWEETS item 5 extends this beyond the intervention to macro engagement and DBCI-ES item 7 examines negative emotional experience of micro engagement. TWEETS item 6, however, focuses on the technology's fit with the users' personal values, which has conceptual similarities to the ‘value’ component of motivation (particularly the task value aspect of Expectancy-Value Theory: the “degree to which individuals perceive personal importance, value or intrinsic interest in doing the task”) (Cook and Artino Jr, 2016; Wigfield and Eccles, 2000). This demonstrates again how the concept of motivation integrates the cognitive and affective components.

Both scales consider behavioural engagement separately from cognitive and affective engagement, although they again differ in scope (micro vs. macro engagement). DBCI-ES items 9 and 10 and TWEETS items 7 and 9 focus on use of the intervention, but the other behavioural TWEETS item (8) relates to the effort needed to use the technology. It is interesting that the authors chose to categorise item 8 as behavioural engagement; for able-bodied individuals, the physical effort required to use a DHI is low, so the effort spent behaviourally engaging with the DHI is likely to be primarily cognitive (Tullis and Albert, 2013). The authors state that when users are cognitively engaged, they are willing to spend mental effort to work with the technology (Kelders and Kip, 2019b), which implies that cognitive effort is an intermediary between cognitive and behavioural engagement rather than belonging strictly to one or the other. If this item is interpreted as relating to usability, rather than cognitive effort, it still blurs the boundaries of the categorisation, as the concept of usability includes affective, cognitive, and behavioural elements (such as satisfaction, learnability, and accessibility, among others) (Maqbool and Herold, 2024).

Empirically, EFAs found that the DBCI-ES fit its expected two-factor solution (behavioural and cognitive-affective) (Perski et al., 2019a, Perski et al., 2019b) but that the TWEETS fit a one-factor solution (Kelders et al., 2020a). The TWEETS authors acknowledged the possibility that engagement only includes one component but felt this was at odds with the theory and could be due to conceptual overlap or the small number of items per component (Kelders et al., 2020a). They also argued that the strong loading of items onto the one-factor structure provided evidence that engagement does comprise affective, cognitive, and behavioural aspects (Kelders et al., 2020a). Interestingly, a study of engagement in education found a two-factor solution with yet another combination of components (affective and behavioural-cognitive) (Ben-Eliyahu et al., 2018), which aligns with the overlap between cognitive and behavioural engagement identified in TWEETS item 8 and highlights the difficulty of using factor analyses to define the components of engagement. Given the conceptual overlap observed in the literature and in this analysis of the DBCI-ES and TWEETS, is there value in distinguishing between components of engagement?
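As an illustration of why factor analyses can point to different structures for conceptually similar item sets, the sketch below applies one common heuristic used alongside EFA, counting eigenvalues of the item correlation matrix greater than 1 (the Kaiser criterion), to simulated items generated from two latent factors. The data are invented, and the published evaluations used full EFA rather than this shortcut.

```python
# Minimal sketch (simulated data only): a Kaiser-criterion check of how many
# factors might underlie a set of items, e.g. an 'experiential' and a
# 'behavioural' factor. This is a heuristic, not a replacement for EFA.
import numpy as np

rng = np.random.default_rng(1)
n = 300
f1, f2 = rng.normal(size=(2, n))   # two independent latent factors
items = np.column_stack(
    [f1 + rng.normal(scale=0.7, size=n) for _ in range(5)]    # items loading on f1
    + [f2 + rng.normal(scale=0.7, size=n) for _ in range(5)]  # items loading on f2
)

# Eigenvalues of the 10x10 item correlation matrix; count those greater than 1.
eigenvalues = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))
n_factors = int((eigenvalues > 1).sum())
print("eigenvalues (descending):", np.round(np.sort(eigenvalues)[::-1], 2))
print("suggested number of factors:", n_factors)
```

With small item pools and strongly overlapping components, as the TWEETS authors note, such analyses can just as easily collapse onto a single factor, which is why factor structure alone cannot settle how many components engagement has.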

There is a history of discussion within psychology on the relationship between affect and cognition in various concepts. For example, there are long-standing unresolved questions about whether emotion is affectively or cognitively driven (Lai et al., 2012; Whissell, 2023) and whether it is innate or constructed (Adolphs, 2017; Barrett, 2017; Fox, 2018; Lai et al., 2012). Recently, it has been argued that decoupling affective and cognitive processes does not reflect their close integration at different levels: neural, cognitive, and behavioural (Fox, 2018; Pessoa, 2018). This integration is explicit in some theories of engagement, like Perski and colleagues' (Perski et al., 2017b). It is also a core concept in Cognitive Behavioural Therapy (CBT) (Bertollo et al., 2020; Early and Grady, 2016; Fisher et al., 2023; Smith et al., 2021). The close reciprocal links between affect, cognition, and behaviour are represented in the ‘cognitive triangle’ (Fig. 2), which highlights how each component can dynamically influence the others (Early and Grady, 2016; Smith et al., 2021). The idea that an alteration in one component can affect the other two is a key tool in CBT (Early and Grady, 2016). However, despite these close interactions, CBT does not conflate the components into one (Smith et al., 2021). Similarly, despite the empirical evidence from the factor analyses and their close interactions, there is value in considering affective, cognitive, and behavioural components of engagement separately. Much like how the CBT cognitive triangle emphasises the importance of all three elements in treatment (Early and Grady, 2016; Smith et al., 2021), making an effort to disentangle these components in intervention design and evaluation can generate insights that might not be gleaned if engagement was considered as one factor.

Fig. 2. Cognitive triangle tool in Cognitive Behavioural Therapy.

4.2. Recommendations for researchers evaluating digital health engagement

The comparison of the scales highlights a few key considerations for researchers who are evaluating engagement with a DHI, primarily what question the study is looking to answer and what other data can be captured. The DBCI-ES is recommended for researchers who are specifically looking to capture engagement with a digital tool, rather than engagement with an intervention more broadly, because of its narrower scope focusing on micro engagement. It may also be beneficial for studies that are unable to capture system usage data, as the scale includes a self-reported approximation of system use. The caveat is that evidence for the correlation between objective and self-reported usage data was inconsistent, indicating that the behavioural subscale may lack validity (Perski et al., 2019b). On the other hand, the TWEETS is recommended for researchers who are looking for a holistic measure that captures various elements of engagement but who are not aiming to investigate or compare specific components. This is because its items capture micro and macro engagement, as well as cognitive, affective, and behavioural components, but both our analysis and the authors' EFA indicated that the affective, cognitive, and behavioural elements should be examined as a whole rather than as separate components. The TWEETS can add an additional layer of information for studies that are already capturing system usage data and extend the assessment beyond micro engagement. For all studies, we recommend using mixed methods (if feasible) to enable a more in-depth investigation of the different components of engagement, how they interact, and what factors influence them (Milne-Ives et al., 2023; Yardley et al., 2016).

4.3. Recommendations for future measure development

Although the DBCI-ES and TWEETS provide useful new theoretically-based tools for assessing engagement as a multifaceted construct, the gaps identified in this analysis demonstrate continued room for improvement. In particular, neither scale supports examining the components of engagement as separate constructs. The differing interpretations of the components of engagement across the two scales emphasise the need to establish clarity and agreement around their definitions. Future scale development should include clear, theory-based definitions of the components of engagement that a scale aims to capture. This would facilitate judgments of whether a particular scale is the appropriate measure for a particular study aim.

This analysis also highlighted the relative lack of consideration of macro engagement in digital health engagement measures. The DBCI-ES specifically excluded it, while the TWEETS did not clearly distinguish between micro and macro engagement. When it comes to evaluations of engagement in digital health, measures tend to focus on micro engagement with a DHI (such as system use). This could be because macro engagement is already captured in other ways as behavioural outcomes (e.g. for physical activity, as step count or minutes spent exercising per day) or because it is assumed to be a separate concept. Although macro engagement with a health behaviour may be captured with other measures, these tend to focus on the behavioural component, either via objective data collection measures such as sensor data or via self-report measures that ask “how much” or “how often”. The development and potential benefits of a scale that captured affective, cognitive, and behavioural components of micro and macro engagement separately should be explored, as this could be a valuable tool for understanding how engagement with the intervention and the behaviour are related. Although this would be beneficial for enabling comparison and meta-analysis of engagement across studies, there are also advantages to having a variety of measures of engagement available. Engagement is theoretically complex and multi-faceted; a variety of measures might be needed to address different research questions, intervention characteristics, and implementation contexts (Bijkerk et al., 2023).

Future scale development should also explore the time periods over which it is most appropriate to measure engagement. The DBCI-ES focused on the most recent use - this could potentially reduce recall bias, but might result in inaccurate reporting if users felt their most recent use was not reflective of their typical use. If using a similar measure of micro behavioural engagement, the testing of future scales should assess whether validity could be improved by asking about engagement over a period of time rather than the most recent use. Timing is also a consideration for the TWEETS; initial encounters with a DHI may be important in determining whether users continue to engage with it to try and change their health behaviour, but the framing of the TWEETS items assumes some user experience engaging with the DHI in pursuit of a goal. This could miss critical information about first impressions of the app, highlighting again the importance of having various measures to address particular stages and components of engagement.

The challenge in developing such a scale will be differentiating between the categories it defines. Concepts like motivation, interest, cognitive effort, and flow (a state of intrinsically-rewarding absorption in a task characterised by involvement, enjoyment, and attention (Short et al., 2018)) include elements from various components (Friedrich et al., 2019; Kelders et al., 2018). Some conceptualisations have attempted to address this by removing these elements from their definitions - for example, defining cognitive engagement as “thinking and paying attention” without any motivational aspects (Ben-Eliyahu et al., 2018) - but this is reductive and does not capture the whole of the concept. It will be necessary to thoroughly test developed scales to determine whether these elements are separable. Although any distinctions drawn will be to some degree artificial, a measure with clear components would enable new questions to be examined more broadly - for example, which features of an intervention influence which components of engagement, or whether micro engagement affects macro engagement with behaviour change. This would enable strategies to be incorporated into the intervention to support more effective engagement. For example, an intervention might be cognitively engaging (easy to use with interesting content) but affectively disengaging (aesthetically unappealing interface or negative tone). Some users may be sufficiently cognitively engaged to overcome the affective disengagement, but improving the aesthetic or tone would mitigate barriers to continued engagement with the intervention and its target behaviour.

4.4. Conclusions

This theoretical investigation highlighted the strengths and limitations of currently available measures of engagement with DHIs, which have implications for their use in future studies. Few of the scales being used to evaluate engagement with digital health were developed specifically for digital health or theoretically-based on a multifaceted conceptualisation of engagement. In explicitly considering the multifaceted nature of engagement, both the DBCI-ES and the TWEETS represent an advance in measures of engagement; however, there are still gaps in their ability to capture the different components of engagement. Although the TWEETS divides its items into affective, cognitive, and behavioural components, this analysis indicates that many of them blur these boundaries; likewise, the framing of the items so that they capture intervention engagement in relation to behavioural goals blurs the boundary between micro and macro engagement. In contrast, the DBCI-ES focuses on micro engagement with the intervention and has a two-factor solution that splits the measure into behavioural and subjective sub-scales.

Although progress has been made in addressing the need for theoretical- and evidence-based measures of engagement as a multifaceted concept, the DBCI-ES and TWEETS fit different research needs and there is still a need for further scale development to address gaps in our ability to comprehensively capture and differentiate components of engagement. There are several unanswered questions in our understanding of digital health engagement: whether engagement has distinct components, how those components should be defined, and - if distinct - how each can be measured. Despite evidence from other fields demonstrating substantial integration and overlap between the components of engagement, there are practical benefits that can be obtained through their investigation as distinct elements. Through the process of trying to tease these components apart, the ways in which they are connected can become clearer. This could enable a better understanding of users' processes of engagement and how they could be supported. Although this analysis drew on a variety of theoretical and empirical sources to generate insights on the conceptualisation and measurement of different components of engagement, it was not a systematic review and additional sources of evidence and theory from various fields may shed additional light or challenge some of the conclusions.

Funding

This work was supported by the former Health Education England, which is now the South East School of Public Health, Workforce Training and Education Directorate, NHS England [grant reference number: AM1000393]. The funding body had no editorial control and was not involved in the decision to submit the article for publication. MMI and EM are supported by the NIHR Newcastle Biomedical Research Centre (BRC). The views expressed in the paper are those of the authors and not necessarily those of the South East School of Public Health NHS England, Newcastle University, the University of Plymouth, Imperial College London or the NIHR Newcastle BRC.

CRediT authorship contribution statement

MMI conceived the topic, conducted the analysis, and drafted and revised the protocol. SH, JA, and EM supervised the research and contributed revisions.

Declaration of competing interest

The authors declare the following financial interests/personal relationships which may be considered as potential competing interests:

Edward Meinert reports financial support was provided by the South East School of Public Health, Workforce Training and Education Directorate, NHS England. Madison Milne-Ives reports financial support was provided by the South East School of Public Health, Workforce Training and Education Directorate, NHS England. The other authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Footnotes

Appendix A. Supplementary data to this article can be found online at https://doi.org/10.1016/j.invent.2024.100735.

Appendix A. Supplementary data

Appendix 1

Assessment of scales used to measure engagement in the digital health literature

mmc1.docx (29.8KB, docx)

References

1. Adolphs R. How should neuroscience study emotions? By distinguishing emotion states, concepts, and experiences. Soc. Cogn. Affect. Neurosci. 2017;12:24–31. doi: 10.1093/scan/nsw153.
2. Barrett L.F. The theory of constructed emotion: an active inference account of interoception and categorization. Soc. Cogn. Affect. Neurosci. 2017;12:1–23. doi: 10.1093/scan/nsw154.
3. Baumel A., Muench F., Edan S., Kane J.M. Objective user engagement with mental health apps: systematic search and panel-based usage analysis. J. Med. Internet Res. 2019;21. doi: 10.2196/14567.
4. Ben-Eliyahu A., Moore D., Dorph R., Schunn C.D. Investigating the multidimensionality of engagement: affective, behavioral, and cognitive engagement across science activities and contexts. Contemp. Educ. Psychol. 2018;53:87–105.
5. Bertollo M., Filho E., Terry P.C. Advancements in Mental Skills Training. Routledge; 2020.
6. Bijkerk L.E., Oenema A., Geschwind N., Spigt M. Measuring engagement with mental health and behavior change interventions: an integrative review of methods and instruments. Int. J. Behav. Med. 2023;30:155–166. doi: 10.1007/s12529-022-10086-6.
7. Birnbaum F., Lewis D.M., Rosen R., Ranney M.L. Patient engagement and the design of digital health. Acad. Emerg. Med. 2015;22:754. doi: 10.1111/acem.12692.
8. Borghouts J., Eikey E., Mark G., De Leon C., Schueller S.M., Schneider M., Stadnick N., Zheng K., Mukamel D., Sorkin D.H. Barriers to and facilitators of user engagement with digital mental health interventions: systematic review. J. Med. Internet Res. 2021;23. doi: 10.2196/24387.
9. Cole-Lewis H., Ezeanochie N., Turgiss J. Understanding health behavior technology engagement: pathway to measuring digital behavior change interventions. JMIR Form. Res. 2019;3. doi: 10.2196/14052.
10. Cook D.A., Artino A.R., Jr. Motivation to learn: an overview of contemporary theories. Med. Educ. 2016;50:997–1014. doi: 10.1111/medu.13074.
11. Digital Implementation Investment Guide (DIIG): Integrating Digital Interventions into Health Programmes. World Health Organization; 2020.
12. Early B.P., Grady M.D. Embracing the contribution of both behavioral and cognitive theories to cognitive behavioral therapy: maximizing the richness. Clin. Soc. Work. J. 2016;45:39–48.
13. Fisher L.B., Curreri A.J., Tan E.K., Sprich S.E. Cognitive techniques. The Massachusetts General Hospital Handbook of Cognitive Behavioral Therapy. 2023:19–38.
14. Forman E.M., Evans B.C., Flack D., Goldstein S.P. Could technology help us tackle the obesity crisis? Future Sci. OA. 2016;2. doi: 10.4155/fsoa-2016-0061.
15. Fox E. Perspectives from affective science on understanding the nature of emotion. Brain and Neuroscience Advances. 2018;2. doi: 10.1177/2398212818812628.
16. Fredricks J.A., Blumenfeld P.C., Paris A.H. School engagement: potential of the concept, state of the evidence. Rev. Educ. Res. 2004;74:59–109.
17. Friedrich T., Schlauderer S., Overhage S. The impact of social commerce feature richness on website stickiness through cognitive and affective factors: an experimental study. Electron. Commer. Res. Appl. 2019;36.
18. Hollingshead A., Williamson P., Carnahan C. Cognitive and emotional engagement for students with severe intellectual disability defined by the scholars with expertise in the field. Res. Pract. Persons Severe Disabl. 2018;43:269–284.
19. Huckvale K., Wang C.J., Majeed A., Car J. Digital health at fifteen: more human (more needed). BMC Med. 2019;17:1–4. doi: 10.1186/s12916-019-1302-0.
20. Kavanagh D.J., Teixeira H., Connolly J., Andrade J., May J., Godfrey S., Carroll A., Taylor K., Connor J.P. The Motivational Thought Frequency Scales for increased physical activity and reduced high-energy snacking. Br. J. Health Psychol. 2020;25:558–575. doi: 10.1111/bjhp.12422.
21. Kelders S.M., Kip H. Development and initial validation of a scale to measure engagement with eHealth technologies. In: Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). ACM; New York, NY, USA: 2019.
22. Kelders S.M., Kip H. Development and initial validation of a scale to measure engagement with eHealth technologies. In: Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (CHI ’19). ACM; New York, NY, USA: 2019.
23. Kelders S.M., Sommers-Spijkerman M., Goldberg J. Investigating the direct impact of a gamified versus nongamified well-being intervention: an exploratory experiment. J. Med. Internet Res. 2018;20. doi: 10.2196/jmir.9923.
24. Kelders S.M., Kip H., Greeff J. Psychometric evaluation of the TWente Engagement with Ehealth Technologies Scale (TWEETS): evaluation study. J. Med. Internet Res. 2020;22. doi: 10.2196/17757.
25. Kelders S.M., van Zyl L.E., Ludden G.D.S. The concept and components of engagement in different domains applied to eHealth: a systematic scoping review. Front. Psychol. 2020;11. doi: 10.3389/fpsyg.2020.00926.
26. Kelders S.M., van Zyl L.E., Ludden G.D.S. The concept and components of engagement in different domains applied to eHealth: a systematic scoping review. Front. Psychol. 2020;11. doi: 10.3389/fpsyg.2020.00926.
27. Kelders S.M., van Zyl L.E., Ludden G.D.S. The concept and components of engagement in different domains applied to eHealth: a systematic scoping review. Front. Psychol. 2020;11. doi: 10.3389/fpsyg.2020.00926.
28. Lai V.T., Hagoort P., Casasanto D. Affective primacy vs. cognitive primacy: dissolving the debate. Front. Psychol. 2012;3. doi: 10.3389/fpsyg.2012.00243.
29. Makransky G., Petersen G.B. The cognitive affective model of immersive learning (CAMIL): a theoretical research-based model of learning in immersive virtual reality. Educ. Psychol. Rev. 2021;33:937–958.
30. Maqbool B., Herold S. Potential effectiveness and efficiency issues in usability evaluation within digital health: a systematic literature review. J. Syst. Softw. 2024;208.
31. Mclaughlin M., Delaney T., Hall A., Byaruhanga J., Mackie P., Grady A., Reilly K., Campbell E., Sutherland R., Wiggers J., Wolfenden L. Associations between digital health intervention engagement, physical activity, and sedentary behavior: systematic review and meta-analysis. J. Med. Internet Res. 2021;23. doi: 10.2196/23180.
32. Meyerowitz-Katz G., Ravi S., Arnolda L., Feng X., Maberly G., Astell-Burt T. Rates of attrition and dropout in app-based interventions for chronic disease: systematic review and meta-analysis. J. Med. Internet Res. 2020;22. doi: 10.2196/20283.
33. Michie S., van Stralen M.M., West R. The behaviour change wheel: a new method for characterising and designing behaviour change interventions. Implement. Sci. 2011;6:42. doi: 10.1186/1748-5908-6-42.
34. Milne-Ives M., Homer S., Andrade J., Meinert E. Associations between behavior change techniques and engagement with mobile health apps: protocol for a systematic review. JMIR Res. Protoc. 2022;11. doi: 10.2196/35172.
35. Milne-Ives M., Homer S.R., Andrade J., Meinert E. Potential associations between behavior change techniques and engagement with mobile health apps: a systematic review. Front. Psychol. 2023;14. doi: 10.3389/fpsyg.2023.1227443.
36. Moller A.C., Merchant G., Conroy D.E., West R., Hekler E., Kugler K.C., Michie S. Applying and advancing behavior change theories and techniques in the context of a digital health revolution: proposals for more effectively realizing untapped potential. J. Behav. Med. 2017;40:85–98. doi: 10.1007/s10865-016-9818-7.
37. Molloy A., Anderson P.L. Engagement with mobile health interventions for depression: a systematic review. Internet Interv. 2021;26. doi: 10.1016/j.invent.2021.100454.
38. Morrison L., Moss-Morris R., Michie S., Yardley L. Optimizing engagement with Internet-based health behaviour change interventions: comparison of self-assessment with and without tailored feedback using a mixed methods approach. Br. J. Health Psychol. 2014;19:839–855. doi: 10.1111/bjhp.12083.
39. Nahum-Shani I., Shaw S.D., Carpenter S.M., Murphy S.A., Yoon C. Engagement in digital interventions. Am. Psychol. 2022;77:836–852. doi: 10.1037/amp0000983.
40. Ng M.M., Firth J., Minen M., Torous J. User engagement in mental health apps: a review of measurement, reporting, and validity. Psychiatr. Serv. 2019;70:538–544. doi: 10.1176/appi.ps.201800519.
41. O’Brien H. Theoretical perspectives on user engagement. In: Why Engagement Matters. Springer International Publishing; Cham: 2016. pp. 1–26.
42. O’Brien H.L., Toms E.G. What is user engagement? A conceptual framework for defining user engagement with technology. J. Am. Soc. Inf. Sci. Technol. 2008;59:938–955.
43. Perski O., Short C.E. Acceptability of digital health interventions: embracing the complexity. Transl. Behav. Med. 2021;11:1473–1480. doi: 10.1093/tbm/ibab048.
44. Perski O., Blandford A., Ubhi H.K., West R., Michie S. Smokers’ and drinkers’ choice of smartphone applications and expectations of engagement: a think aloud and interview study. BMC Med. Inform. Decis. Mak. 2017;17:1–14. doi: 10.1186/s12911-017-0422-8.
45. Perski O., Blandford A., West R., Michie S. Conceptualising engagement with digital behaviour change interventions: a systematic review using principles from critical interpretive synthesis. Transl. Behav. Med. 2017;7:254–267. doi: 10.1007/s13142-016-0453-1.
46. Perski O., Blandford A., Garnett C., Crane D., West R., Michie S. A self-report measure of engagement with digital behavior change interventions (DBCIs): development and psychometric evaluation of the “DBCI Engagement Scale”. Transl. Behav. Med. 2019;10:267–277. doi: 10.1093/tbm/ibz039.
47. Perski O., Lumsden J., Garnett C., Blandford A., West R., Michie S. Assessing the psychometric properties of the digital behavior change intervention engagement scale in users of an app for reducing alcohol consumption: evaluation study. J. Med. Internet Res. 2019;21. doi: 10.2196/16197.
48. Pessoa L. Embracing integration and complexity: placing emotion within a science of brain and behaviour. Cognit. Emot. 2018. doi: 10.1080/02699931.2018.1520079.
49. Pratap A., Neto E.C., Snyder P., Stepnowsky C., Elhadad N., Grant D., Mohebbi M.H., Mooney S., Suver C., Wilbanks J., Mangravite L., Heagerty P.J., Areán P., Omberg L. Indicators of retention in remote digital health studies: a cross-study evaluation of 100,000 participants. npj Digital Medicine. 2020;3:1–10. doi: 10.1038/s41746-020-0224-8.
50. Rodgers W.M., Markland D., Selzler A.-M., Murray T.C., Wilson P.M. Distinguishing perceived competence and self-efficacy: an example from exercise. Res. Q. Exerc. Sport. 2014. doi: 10.1080/02701367.2014.961050.
51. Ryan R.M., Deci E.L. Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. Am. Psychol. 2000;55:68–78. doi: 10.1037//0003-066x.55.1.68.
52. Schunk D.H., Usher E.L. Social cognitive theory and motivation. The Oxford Handbook of Human Motivation. 2019. doi: 10.1093/oxfordhb/9780190666453.013.2.
53. Short C.E., DeSmet A., Woods C., Williams S.L., Maher C., Middelweerd A., Müller A.M., Wark P.A., Vandelanotte C., Poppe L., Hingle M.D., Crutzen R. Measuring engagement in eHealth and mHealth behavior change interventions: viewpoint of methodologies. J. Med. Internet Res. 2018;20. doi: 10.2196/jmir.9397.
54. Silvia P.J. Exploring the Psychology of Interest. Oxford University Press; USA: 2006.
55. Smith R., Moutoussis M., Bilek E. Simulating the computational mechanisms of cognitive and behavioral psychotherapeutic interventions: insights from active inference. Sci. Rep. 2021;11:1–16. doi: 10.1038/s41598-021-89047-0.
56. Torous J., Lipschitz J., Ng M., Firth J. Dropout rates in clinical trials of smartphone apps for depressive symptoms: a systematic review and meta-analysis. J. Affect. Disord. 2020;263:413–419. doi: 10.1016/j.jad.2019.11.167.
57. Torous J., Michalak E.E., O’Brien H.L. Digital health and engagement—looking behind the measures and methods. JAMA Netw. Open. 2020;3. doi: 10.1001/jamanetworkopen.2020.10918.
58. Triberti S., Kelders S.M., Gaggioli A. User engagement. In: eHealth Research, Theory and Development. Routledge; 2018. pp. 271–289.
59. Tullis T., Albert B. Performance metrics. In: Measuring the User Experience. Elsevier; 2013. pp. 63–97.
60. Wannheden C., Stenfors T., Stenling A., von Thiele Schwarz U. Satisfied or frustrated? A qualitative analysis of need satisfying and need frustrating experiences of engaging with digital health technology in chronic care. Front. Public Health. 2021. doi: 10.3389/fpubh.2020.623773.
61. Whissell C. Emotion and cognition. Engaging with Emotion. 2023:141–156.
62. Wigfield A., Eccles J.S. Expectancy-value theory of achievement motivation. Contemp. Educ. Psychol. 2000;25. doi: 10.1006/ceps.1999.1015.
63. Yardley L., Spring B.J., Riper H., Morrison L.G., Crane D.H., Curtis K., Merchant G.C., Naughton F., Blandford A. Understanding and promoting effective engagement with digital behavior change interventions. Am. J. Prev. Med. 2016;51:833–842. doi: 10.1016/j.amepre.2016.06.015.
64. Yeager C.M., Benight C.C. If we build it, will they come? Issues of engagement with digital health interventions for trauma recovery. Mhealth. 2018;4:37. doi: 10.21037/mhealth.2018.08.04.
