Abstract
Engagement in electronic health (eHealth) and mobile health (mHealth) behavior change interventions is thought to be important for intervention effectiveness, though what constitutes engagement and how it enhances efficacy has been somewhat unclear in the literature. Recently published detailed definitions and conceptual models of engagement have helped to build consensus around a definition of engagement and improve our understanding of how engagement may influence effectiveness. This work has helped to establish a clearer research agenda. However, to test the hypotheses generated by the conceptual models, we need to know how to measure engagement in a valid and reliable way. The aim of this viewpoint is to provide an overview of engagement measurement options that can be employed in eHealth and mHealth behavior change intervention evaluations, discuss methodological considerations, and provide direction for future research. To identify measures, we used snowball sampling, starting from systematic reviews of engagement research as well as measures used in studies known to the authors. A wide range of methods to measure engagement were identified, including qualitative measures, self-report questionnaires, ecological momentary assessments, system usage data, sensor data, social media data, and psychophysiological measures. Each measurement method is appraised and examples are provided to illustrate possible use in eHealth and mHealth behavior change research. Recommendations for future research are provided, based on the limitations of current methods and the heavy reliance on system usage data as the sole assessment of engagement. The validation and adoption of a wider range of engagement measurements and their thoughtful application to the study of engagement are encouraged.
Keywords: telemedicine, internet, health promotion, evaluation studies, treatment adherence and compliance, outcome and process assessment (health care)
Introduction
Electronic health (eHealth) and mobile health (mHealth) behavioral interventions offer wide-reaching support at a low cost, while retaining the capacity to provide comprehensive, ongoing, tailored, and interactive support necessary for improving public health [1,2]. Although there is evidence that eHealth and mHealth behavior change interventions can be effective, low levels of adherence and high levels of attrition have been commonly reported [1-3]. In response, there have been calls to design and implement more engaging interventions to address these concerns [4-6].
It is generally agreed that a certain level of engagement is necessary for intervention effectiveness. However, there is a lack of clarity on how to conceptualize engagement. Some researchers have defined engagement solely as a psychological process relating to user perceptions and experience, whereas others consider engagement a purely behavioral construct, synonymous with intervention usage [4,7]. Consequently, it is often confused with adherence, which refers to whether the intervention is used as intended by the developers [3,8,9]. There have also been interdisciplinary differences. Behavioral scientists tend to characterize good engagement as high acceptability, satisfaction, or intervention adherence, whereas computer scientists tend to consider high engagement as a mental state associated with increased attention and enjoyment [4]. To consolidate these viewpoints and provide a less fragmented foundation for future research, 2 new conceptual models of engagement have been proposed [4,5].
Using a process of expert consensus, Yardley et al [5] proposed distinguishing between micro- and macrolevel engagement when examining the relationships between the user experience, usage, and behavior change. Microlevel engagement refers to the moment-to-moment engagement with the intervention, including the extent of use of the intervention (eg, number of activities completed) and the user experience (eg, level of user interest and attention when completing activities). Macrolevel engagement is defined as the depth of involvement with the behavior change process (eg, extent of motivation for changing behavior) and is linked to the behavioral goals of the intervention. The timing and relationship between micro and macro forms of engagement depend on the intervention, the user, and the broader context. Yardley’s model suggests that after a period of effective engagement at the microlevel, the user may disengage from the platform but still be immersed in the behavior change process. Perski et al [4] offer a similar but more extensive framework based on a systematic review. Similar to Yardley et al, they define engagement as both the extent of usage and a subjective experience but refine this further by characterizing the subjective experience as being related specifically to attention, interest, and affect. These constructs are said to capture the cognitive and emotional aspects of engagement as they are described in computer science disciplines (eg, flow, immersion, and presence), all of which relate to a level of absorption and preoccupation (see Table 1 for definitions of these constructs). According to Perski et al [4], high engagement influences behavior change through its influence on the determinants of behavior (similar to macroengagement, as described by Yardley et al). Engagement itself is hypothesized to be influenced by intervention features (eg, content and mode of delivery) as well as contextual features such as the physical environment (eg, internet access) and individual characteristics (eg, internet self-efficacy).
Table 1. Definitions of constructs related to engagement in eHealth and mHealth behavior change research.
Construct | Description |
Interest | Individual interest is an enduring preference for certain topics and activities. It is impacted by pre-existing knowledge, personal experiences, and emotions. Situational interest is an emotional state brought about by situational stimuli (eg, the unexpectedness of information). It is evoked spontaneously and is presumed to be transitory. Both types of interest are related to liking and willful engagement in a cognitive activity that affects the use of specific learning strategies and how we allocate attention [15,16]. |
Attention | A state of focused awareness of specific perceptual information [17]. Focalization and concentration of consciousness are the essence of attention. Paying attention implies withdrawal from some perceptual information to deal effectively with others [18]. |
Affect | Affect is an intrinsic part of the sensory experience. It represents how an object or situation impacts how a person feels. It can be described by 2 psychological properties: hedonic valence (pleasure/displeasure) and arousal (activation/sleepiness). It can be a central or background feature of consciousness, depending on where and how attention is applied [19,20]. |
Flow | Flow refers to an optimal state that arises when an individual is deeply absorbed in a task. It is characterized by enjoyment, focused attention, absorption, and distorted time perception and is considered intrinsically rewarding. It assumes the complete absence of negative affect [21]. |
Cognitive absorption | Cognitive absorption is a state of deep involvement, similar to flow, though it does not assume intrinsic motivation or the complete absence of negative affect. Cognitive absorption may still occur when a user is frustrated (and, therefore, the experience is not optimal) or extrinsically motivated (eg, by winning a competition with friends; [22]). |
Immersion | Immersion is also similar to cognitive absorption and flow, though it is often used to describe a less extreme experience of engagement, one where one may still have some awareness of one’s surroundings [22,23]. |
Presence | The term presence has been popular since the development of virtual reality technologies. Definitional consensus for presence is still emerging, though it is often described as the psychological sense of being there [24]. |
Intervention usage | The extent to which the intervention has been observed or interacted with by the user. It is made up of several components, including frequency of use, time spent on the intervention, and the type of interaction participated in. This is distinct from intended usage, which is the way in which users should utilize the intervention to derive the minimum benefit, as defined by the intervention developers [3]. |
Both Perski et al and Yardley et al extend previous models [6,9-11] by considering the interaction between usage and psychological processes. By doing so, both models suggest that intervention usage may be a useful indicator of overall engagement with the intervention but is not a valid indicator of engagement in the behavior change process per se. Perski et al also highlight potential moderators and mediators of the engagement process and outline possible pathways in which engagement can influence overall intervention efficacy. These models serve as useful tools to refine and test hypotheses about how to influence engagement and how engagement impacts efficacy, which is necessary if we are to advance eHealth and mHealth behavioral science. However, an understanding of how to measure engagement is needed to test these models.
Basic overviews of the types of measures to assess engagement in eHealth and mHealth interventions have been provided by Yardley et al [5] as well as Perski et al [4]. Yardley et al briefly described the potential usefulness of different measurement types, including qualitative measures, self-report questionnaires, ecological momentary assessment, system usage data, sensor data, and psychophysiological measures. Perski et al identified over 100 studies related to engagement and noted the data collection methods used (eg, survey, website logs, and face-to-face interviews) in each study. Our aim is to extend their work by providing a comprehensive overview of the measurement options currently available. Our overall goal is to summarize and appraise measures of engagement used in eHealth and mHealth research and to highlight future areas of research when evaluating engagement in eHealth and mHealth behavior change interventions. We anticipate this will serve as a useful primer for those interested in the study of engagement and help to advance the field of eHealth and mHealth behavior change by facilitating the use and validation of a wider range of engagement measurements and their thoughtful application to the study of engagement.
Overview of Methods Used to Identify and Assess Engagement Measures
We used a snowballing approach to identify relevant engagement measures. To begin, we extracted measures identified by Perski et al [4] as well as other systematic reviews and published articles known to us from our previous work in the field [12-14]. A data extraction table (see Multimedia Appendix 1) focusing on measurement type, engagement domain, and validity information was used to extract, sort, and explore measurement information to aid synthesis. During the writing and revision process, we searched for additional articles using Google Scholar and reran the original search strategy of Perski et al [4] on MEDLINE and PsycINFO to identify more recent relevant literature. Readers should, therefore, consider this as a comprehensive, but not exhaustive, overview of the literature.
In line with Yardley et al’s suggestions [5], our overview focuses on a wide range of methods to measure engagement. These include qualitative measures, self-report questionnaires, ecological momentary assessment, psychophysiological measures, as well as the analysis of system usage data, sensor data, and social media data. Methods that capture microlevel constructs were included in our synthesis if they were related to emotional, cognitive, or behavioral aspects of the user experience that could be characterized as interest, attention, affect, or intervention usage. This includes the constructs of flow, cognitive absorption, presence, and immersion, which have been commonly used in other disciplines. An overview of definitions for each of these constructs is provided in Table 1. Macrolevel measures were included if they related specifically to engagement in the behavior change process because of the digital intervention or its features. A single author initially drafted each section below, with all other authors providing a critical review.
Overview of Engagement Measures
Qualitative Methods
Focus Areas
Qualitative measures enable evaluation of micro- and macrolevel engagement and include methods such as focus groups, observations, interviews, and think-aloud activities (Table 2). At the microlevel, they allow for an in-depth account of the users’ experience of the intervention. At the macrolevel, they can be used to explore the users’ perceptions of how the intervention has helped them to engage in the behavior change process.
Table 2. Qualitative approaches to measuring engagement, with example items and key considerations.
Qualitative approach | Description | Example items | Considerations (pros/cons)
Semistructured interviews | Provide an opportunity for sharing of lived experiences and feelings to uncover concealed perceptions related to the digital health intervention or the technology; includes informal conversational interviews (spontaneous; suited to ethnographic research), semistructured interviews (interview guide used to steer otherwise spontaneous conversation), or standardized open-ended interviews (worded questions used for all participants). | Microlevel: Tell us what you think about the content; How did completing that module make you feel?; Please explain your pattern of use; Why did you log on when you did? Macrolevel: Did you notice any change to your thinking as a result of using the …(“app”)?; What impact did using the ... (“website”) have on how you are going about changing your behavior? | Pros: inform modifications to increase acceptability, interactivity, and tailoring to end-user needs; identify a range of issues associated with use (both short and long term); augment interpretation of quantitative evaluation; generally small sample sizes. Cons: subject to bias (eg, recall and social desirability), especially if leading questions are asked; time consuming to collect and transcribe; time consuming to analyze and often requires more than 1 person to decide on and confirm themes.
Think aloud | Aims to capture the experience of using the technology in real time. The user is provided with a specific task to complete and is observed while they perform the task. The user is prompted to think aloud throughout the process. | Microlevel: Tell me what you are thinking; What are you looking at?; What’s on your mind?; How are you feeling?; Why did you click on this?; Why did you frown/smile/sigh? Macrolevel: Are you learning anything new? | Pros: can be used at various stages of development and implementation to understand how intervention features impact on engagement; occurs in real time, so less subject to recall bias. Cons: subject to observer bias; can be cognitively difficult for participants and requires practice; may require additional resources such as video or sound recording equipment to obtain a comprehensive picture; acquired data can be time consuming and complex to analyze; may be most useful for exploring microlevel engagement.
Focus groups | Used to identify the social and contextual factors in specific population subgroups that influence engagement with the digital health intervention, as well as needs for technological characteristics and operations that promote user alignment and functional utility. | Microlevel: What did you think of the intervention?; Which components caught your attention the most?; What about them caught your attention?; Were there any components that caused frustration?; Did any aspects make you feel guilty? Macrolevel: How often did you think of the intervention during the week?; Was the intervention in the back of your mind?; How did the intervention help or hinder you reach your goals? | Pros: allow for spontaneous discussion of topics and subsequent voicing of ideas and perceptions that may go unnoticed in semistructured or structured interviews; can obtain rich data from multiple people at the same time. Cons: subject to group or social desirability bias; some participants may not express themselves as fully in a group situation; requires practice to manage group discussion; can take a long time to transcribe due to interruptions; time consuming to analyze and often requires more than 1 person to decide on and confirm themes.
Current Use and Future Directions
Qualitative methodologies are commonly employed in the digital health setting to inform the development of interventions (ie, usability testing) and as an evaluation measure (eg, [25-29]). In most cases, the focus of the evaluation has been on perceptions of usability and acceptability, rather than engagement. However, there are some notable exceptions. For example, some studies have used think-aloud measures to understand cognitive processes and emotional reactions when navigating the intervention and viewing intervention content in real time [30-33]. Others have explored users’ flow experiences, adherence, and lived experience of technology using qualitative interviews [34-36], focus groups [37], or a combination of think-aloud and interview methods [32].
Along with exploring the direct user experience, qualitative measures are also often used to probe the perceived usefulness of the intervention experience. Although this can relate to macroengagement (eg, by providing insights into how the intervention may have helped the user to achieve behavioral goals), efforts to explore the users’ experience of the behavior change process in more depth are recommended. For example, researchers could explore how certain intervention features impact intentions and self-efficacy and how the relationship between intervention features and changes in psychosocial factors relate to use or disuse. This could be achieved using simple methods such as open-ended items in a questionnaire or more elaborate methods such as postintervention focus groups, which may help users to reflect on how the intervention has or has not engaged them in the behavior change process in more detail. Assessing these constructs at different time points may be particularly fruitful, especially given the cyclical nature of behavior change [38]. Exploring users’ real-time engagement in the behavior change process was achieved in 1 recent study by thematically analyzing participant responses to intervention text messages [39]. By doing so, the authors were able to demonstrate that the study participants frequently gained positive cognitive and behavioral benefits from the text messages.
Considerations
A limitation of qualitative measures is that the results can be difficult to compare between studies. Results are also often not generalizable, mostly due to sampling bias; qualitative measures are often used to collect rich data rather than representative data. For this reason, qualitative methods may be particularly suited to help generate hypotheses about engagement, including how engagement relates to efficacy and effectiveness. They may also be useful for exploring hypotheses, especially when the focus is on understanding engagement on an individual level, such as in n-of-1 studies [40]. In instances where representative data can be collected, such as in the text messaging study described above [39], hypothesis testing at the group level may be possible. However, the time and expertise needed to analyze the data, which would ideally involve more than 1 person, are a barrier. This may be overcome in the future using machine learning tools to automate the coding of qualitative data [41].
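As a hedged illustration of how such automation might look, the sketch below uses supervised text classification (via scikit-learn) to suggest codes for uncoded excerpts. The excerpts and code labels are invented for illustration, and any real application would keep human analysts in the loop to review suggested codes.

```python
# A minimal sketch of machine-assisted qualitative coding: a classifier
# trained on human-coded excerpts suggests codes for new excerpts.
# Assumes scikit-learn is installed; data below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Excerpts already coded by human analysts (illustrative training data).
coded_excerpts = [
    "The reminders kept the app on my mind all week",
    "I lost interest after the second module",
    "Tracking my steps made me want to walk more",
    "The quizzes felt repetitive and I stopped opening them",
]
codes = ["engaged", "disengaged", "engaged", "disengaged"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(coded_excerpts, codes)

# Suggest codes for uncoded excerpts; analysts review rather than accept.
print(model.predict(["I kept thinking about my goal between sessions"]))
```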
To facilitate the use of qualitative measures in the future, a brief overview of example questions by qualitative method type, as well as key considerations are provided in Table 2.
Self-Report Questionnaires
Focus Areas
Questionnaires can be used to assess both experiential and behavioral aspects of microlevel engagement as well as aspects of macrolevel engagement.
Current Use and Future Directions
Self-report questionnaires have most often been used to gain insight into users’ subjective experience of digital platforms. Although questionnaire items have often been purpose-built and not subjected to psychometric testing (see Multimedia Appendix 1), there are a number of more rigorously developed scales. An overview of scales identified by our search [4,12-14] is presented in Multimedia Appendix 2. In brief, most scales have been developed to assess subjective experiential engagement with e-commerce websites or video games. Only 2 scales developed specifically for the eHealth and mHealth setting were identified (ie, the eHealth Engagement Scale [42] and the Digital Behavior Change Intervention Engagement Scale [43]), and only 1 of these has been validated [42], whereas validation of the other is currently underway [43]. Of note, some of the available scales assess attributes posited to predict engagement (eg, aesthetic appeal and usability experience [44-46]) as well as attributes considered to be a part of engagement (interest, attention, and affect). This is particularly the case for scales developed in the e-commerce setting and raises some validity concerns. Several of the scales are also quite long, which may place an undue burden on participants. The development and evaluation of high-quality short questionnaires relevant to eHealth and mHealth are therefore encouraged.
Questionnaires have also been used to assess behavioral aspects of engagement (ie, intervention usage). Although objective behavioral data are often available (see usage data below), questionnaires have been used when this is not the case. For example, a study comparing the relative efficacy of 2 off-the-shelf apps used questionnaires to assess the frequency and time of app use [47]. Although there are several scales with reasonable psychometric properties available for assessing the users’ subjective experience (Multimedia Appendix 2), scales for assessing behavioral aspects of engagement in eHealth and mHealth interventions are lacking. Perski et al’s self-report measure [43], which includes 2 items on behavioral engagement, is an exception. However, the validity of the measure is still being investigated. Perski’s items and the purpose-built items used by other researchers usually have reasonable face validity (eg, “how many times per week did you use the app?”) but might lead to over- or underreporting depending on how items are phrased [48,49]. The validity of the chosen scale should be considered when interpreting the findings of self-reported behavioral data, and we recommend efforts to test the psychometric properties of developed items before use, if not yet available. This could be achieved by comparing the self-reported data with objectively collected data in a controlled setting (eg, [43]). The development of self-reported usage questionnaires that complement and provide useful context for objective usage measures should be considered. For example, if time on site or using an app is of interest, questionnaire data may identify cases where the user has left the program running in the background but has not been actively engaged. Likewise, information on behavioral cues at the point of engagement (eg, “what were you doing before you logged your steps using the app?”) may complement usage data and provide a more comprehensive measure of usage patterns. Lessons may be gleaned from the scales developed to assess social networking intensity [50].
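Where both sources are available, even a simple agreement check between self-reported and logged use can flag over- or underreporting. Below is a minimal sketch of such a check; the data values are invented for illustration, and a full validation would add formal agreement statistics (eg, limits of agreement).

```python
# A minimal sketch comparing self-reported weekly app use with objectively
# logged use for the same participants (illustrative, simulated values).
import numpy as np

self_report = np.array([3, 5, 1, 7, 2, 4])  # "times used last week" (questionnaire)
logged      = np.array([2, 6, 1, 5, 2, 3])  # sessions recorded in system logs

r = np.corrcoef(self_report, logged)[0, 1]  # convergent validity estimate
bias = np.mean(self_report - logged)        # positive values suggest overreporting
print(f"correlation = {r:.2f}, mean bias = {bias:.2f} sessions/week")
```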
The third use of questionnaires relevant to the study of engagement at the macrolevel is the repeated assessment of psychological mechanisms hypothesized to account for behavioral changes (eg, self-efficacy). The assessment of change in these mechanisms and the conduct of formal mediation analyses have been increasingly encouraged in the behavioral sciences [51,52] to investigate whether interventions are working as intended (ie, that the selected eHealth and mHealth strategies are indeed influencing determinants and changes in determinants are influencing behavior, eg, [53]). This methodology can be adopted to study engagement. Arguably, a user who demonstrates favorable changes in 1 or more of these determinants can be considered engaged in the behavior change process (eg, self-efficacy significantly increases over time). Furthermore, someone demonstrating changes at a prespecified cut point or where changes are associated with behavioral outcomes could be said to be engaged effectively. There are a number of pre-existing scales that can be used to assess changes in psychological determinants of behavior (eg, [54-56]) as well as guides for constructing purpose-built questions if existing scales are not suitable (eg, [57,58]). Decisions regarding what psychological constructs to assess changes in should be based on the theoretical underpinning of the intervention and the key intervention objectives and strategies used to achieve them.
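To illustrate the product-of-coefficients logic behind such a mediation analysis, the following is a minimal sketch using simulated data; the variable names, effect sizes, and bootstrap settings are our assumptions, not values from any cited study, and a real analysis would add covariates or use a dedicated mediation package.

```python
# A minimal sketch of a product-of-coefficients mediation analysis testing
# whether intervention exposure (X) changes behavior (Y) via self-efficacy (M).
import numpy as np

rng = np.random.default_rng(1)
n = 300
x = rng.integers(0, 2, n)                   # 0 = control, 1 = intervention
m = 0.5 * x + rng.normal(size=n)            # self-efficacy change (mediator)
y = 0.4 * m + 0.1 * x + rng.normal(size=n)  # behavior change (outcome)

def indirect(x, m, y):
    a = np.polyfit(x, m, 1)[0]              # a-path: X -> M
    # b-path: M -> Y adjusting for X, via ordinary least squares
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]
    return a * b

# Percentile bootstrap confidence interval for the indirect effect.
boot = [indirect(*(arr[idx] for arr in (x, m, y)))
        for idx in (rng.integers(0, n, n) for _ in range(2000))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect(x, m, y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```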
Considerations
Overall, questionnaires can be a useful tool for measuring various aspects of engagement in a systematic, standardized, and convenient way. This can allow for easy comparison across studies and between experimental arms [5]. Limitations include questionnaire length (and, therefore, duration of completion); a lack of experiential measures designed and tested within a health context; a lack of focus on the behavioral aspects of engagement; and in some cases, the inclusion of items that measure predictors of engagement within engagement scales.
To select an appropriate scale, an understanding of the different constructs used to describe engagement across disciplines will be necessary (see Table 1). Reviewing the wording of the items and assessing how they will fit within the context of one’s project may further help with scale selection. To this end, example items for each scale summarized above are provided in Multimedia Appendix 3. Most items will need to be adapted for a health setting, and not all scales will be applicable across study types or useful for assessing all aspects of engagement (ie, interest, attention, affect, intervention usage, and involvement in behavior change process). In some cases, it may be necessary to generate completely new items or a completely new scale. In such cases, researchers are encouraged to report a measure of internal consistency (preferably McDonald omega) and present factor-analytic evidence confirming the dimensionality of the scale [59]. Attention to the length of the scale should also be given. This will likely be necessary to minimize missing data. The perceptions of those who drop out of the study are currently often not captured in evaluations of eHealth and mHealth interventions, which is problematic as those who drop out are usually those who have used the intervention the least. Ecological momentary assessments (EMAs; described in more detail below) may be useful to assess relevant engagement parameters regularly during the intervention and give a better impression of engagement throughout use [60]. Alternatively, selecting a representative subsample to administer surveys to and reimbursing them for their time might be a viable solution.
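For researchers constructing new items, the following is a minimal sketch of estimating McDonald omega (recommended above) from a one-factor model. It assumes the third-party Python package factor_analyzer is available; the item responses are simulated for illustration, and real scale development would first confirm unidimensionality.

```python
# A minimal sketch of estimating McDonald omega for a unidimensional scale
# from a one-factor model (illustrative, simulated item responses).
import numpy as np
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(0)
latent = rng.normal(size=500)                        # true engagement level
items = np.column_stack([0.7 * latent + rng.normal(scale=0.7, size=500)
                         for _ in range(5)])         # 5 items, one factor

fa = FactorAnalyzer(n_factors=1, rotation=None)
fa.fit(items)
loadings = fa.loadings_.ravel()                      # standardized loadings
uniqueness = 1 - loadings ** 2

# Omega total: shared variance over total model-implied variance.
omega = loadings.sum() ** 2 / (loadings.sum() ** 2 + uniqueness.sum())
print(f"McDonald omega = {omega:.2f}")
```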
Ecological Momentary Assessments
Focus Area
EMAs can be used to assess both experiential and behavioral aspects of microlevel engagement as well as aspects of macrolevel engagement. The main objective of EMAs is to assess behaviors, perceptions, or experiences in real time and as they occur in their natural setting [61]. By prompting users to self-report data at varying times per day, EMAs allow these phenomena to be studied in different contexts and times.
Current Use and Future Directions
In EMAs, short surveys can either be accessed by the user on demand (eg, when logging a recent behavior), sent at specific or random intervals (eg, every 2 hours throughout the day: time-based sampling), or triggered by a certain event (eg, only when an activity tracker indicates the user is performing moderate to vigorous physical activity: event-based sampling). The latter is especially useful to capture rare behaviors, perceptions, or experiences. EMAs are often conducted on smartphone screens, but wearable devices can also be used (eg, CamNtech ProDiary, Philips Actiwatch Spectrum Plus, or Samsung Gear Live) [62].
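To make the two sampling schemes concrete, here is a minimal sketch of time-based and event-based EMA triggers. The schedule window, number of prompts, and activity threshold are illustrative assumptions, not recommendations.

```python
# A minimal sketch contrasting time-based and event-based EMA sampling.
import random
from datetime import datetime, time, timedelta

def time_based_schedule(day_start=time(9), day_end=time(21), n_prompts=4):
    """Draw random prompt times within the waking day (time-based sampling)."""
    start = datetime.combine(datetime.today(), day_start)
    window = (datetime.combine(datetime.today(), day_end) - start).total_seconds()
    return sorted(start + timedelta(seconds=random.uniform(0, window))
                  for _ in range(n_prompts))

def event_based_trigger(activity_counts, threshold=100):
    """Fire a prompt when a tracker shows moderate-to-vigorous activity."""
    return activity_counts >= threshold  # True -> deliver the survey now

print(time_based_schedule())
print(event_based_trigger(activity_counts=140))
```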
EMAs have mostly been applied in eHealth and mHealth studies to measure health behavior and determinants (eg, [63,64]). We identified 1 study from previous reviews that used EMA to measure user engagement. This study [65] used event-based sampling to assess breaks in levels of presence with a shooter game (not intended to improve health); the sampled events consisted of several parts of game play, at which players rated their presence using a slider. No validity or reliability information for the slider was explicitly provided.
Despite the limited application of EMAs to measure engagement so far, EMAs may be well suited to study moment-to-moment or microlevel engagement with an intervention [5]. EMAs could provide data-driven insights into reasons for low adherence or dropout. EMAs are usually conducted over a short period with regular measurements over the day or week. However, it is also possible to adjust the timing and measurement intervals to collect longer-term insights into engagement. Contextual data and determinant data provided in EMA may enrich intervention usage data obtained from other sources to provide further insights into reasons for dropout.
Considerations
EMA surveys are intended to be very brief because the purpose is to capture experiences in the moment, often across many data points over time, which can pose a burden to users [61]. Ensuring measures are brief is, therefore, important both for validity and for promoting adherence to the EMA protocol. Recent reviews of adherence to EMA protocols in health settings [66,67] suggest that compliance rates (proportion of EMAs completed) are reasonable (>70%), especially when sampling protocols are easy to follow. This speaks to the feasibility of utilizing this measurement approach; however, data analysis can be challenging for those unfamiliar with intensive longitudinal datasets (for a discussion regarding the challenges of EMA and example analysis approaches, see [68-72]). Advantages of EMAs include less recall bias than retrospective self-reports and potential for high ecological validity, as they capture behaviors and experiences in real-world contexts [60,61].
System Usage Data
Focus Area
System usage data quantitatively capture how the intervention is physically used by each participant. This relates to the behavioral component of microlevel engagement. When paired with other data sources, system usage data can provide insights into how usage patterns, intervention dose, and different adherence rates relate to other aspects of engagement (eg, interest, attention, affect, and changes in determinants) and efficacy and effectiveness outcomes (eg, [73-76]).
Current Use and Future Directions
System usage data are the most commonly collected and reported measures of engagement in eHealth and mHealth interventions [4]. Although the focus has predominantly been on nonusage attrition and overall adherence to the intervention [3,8], more recent studies have begun to explore the multidimensional nature of usage data [77-79], focusing on the depth and type of engagement as well as frequency measures. As the field progresses, it would be helpful to have shared ways of conceptualizing these data, as recent reviews have tended to categorize types of usage data differently using an inductive approach [4,78]. The FITT acronym [80], which stands for frequency, intensity, time, and type and is commonly used in physical activity research, might be a useful tool in this sense, especially for considering usage data as an engagement measure a priori. Specific examples of how usage data could be categorized using this principle are given in Table 3. Frequency provides information on how often a participant visits the intervention site or uses the app. Intensity measures the strength or depth of engagement with the intervention, for example, the proportion of the intervention site or app features used out of the total available features [4]. Type refers to the nature of the engagement; for example, this could be categorized as reflective (eg, self-reporting behavior change), altruistic (eg, helping others), or gamified (eg, participating in a challenge). Type can also be divided into “active” (eg, active input such as when responding to a quiz, self-monitoring, or writing an action plan) or “passive” (eg, an individual can view the intervention without having to interact with it) categories. Time is a measure of the duration of engagement during any single visit or a measure of the level of exposure as an aggregate over the intervention period.
Table 3. Examples of system usage data categorized using the frequency, intensity, time, and type (FITT) principle.
Frequency, intensity, time, and type (FITT) principle | Example usage measures | Example applications
Frequency of engagement with the intervention | Log-ins (number of log-ins recorded per participant, average log-ins per unit of time, or total for intervention duration); visits to the site (number of visits/hits per participant, average per unit of time, or total) | [81-83]
Intensity of engagement | Pages viewed (number); lessons or modules viewed (total number, % of prescribed); posts viewed (eg, lurking); number of emails sent; number of posts written; accessed “Expert forum” (Ask the Expert) to pose a question/seek advice (number); action plan created; number of quizzes attempted | [84-87]
Time or duration of engagement with the program | Amount of time spent at each visit per participant (average and total minutes); number of days between first and last log-in (duration or intervention stickiness) | [88,89]
Type of engagement | Reflective (eg, participant recording of behavior or health status); gamified (eg, accepting challenges and sending gifts); altruistic (eg, helping others) or malevolent (eg, trolling others); didactic (eg, reading posts and taking quizzes); active (eg, recording behavior) versus passive (eg, reading posts) | [90,91]
Examining usage data by aggregating data across the FITT categories can provide greater insights into engagement than focusing on any one domain [77,79,88]. For example, although the total time on site for users may appear similar (time data), their intensity data could be meaningfully different, which could lead to differences in engagement profiles (eg, attention, elaboration, and experience [79,88]). Separating users with similar time on site but markedly different patterns of use in terms of the type of activities may be helpful for identifying which aspects of the intervention are more engaging than others [92], which aspects may be more influential for achieving behavior change, and whether this is moderated by user profiles (eg, [88]). The insight obtained from careful examination of system usage data in this way can assist intervention developers with data-driven solutions to encourage engagement [93].
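As a concrete illustration, the following minimal sketch derives one metric per FITT domain from a raw event log. The log schema and event labels are hypothetical; real platforms will differ.

```python
# A minimal sketch of deriving FITT-style engagement metrics from usage logs.
from datetime import datetime

log = [  # (user, timestamp, event type) -- illustrative records
    ("u1", datetime(2018, 5, 1, 9, 0),   "login"),
    ("u1", datetime(2018, 5, 1, 9, 12),  "quiz_attempted"),
    ("u1", datetime(2018, 5, 3, 20, 5),  "login"),
    ("u1", datetime(2018, 5, 3, 20, 30), "post_written"),
]

ACTIVE = {"quiz_attempted", "post_written", "action_plan_created"}

events = [e for e in log if e[0] == "u1"]
frequency = sum(1 for _, _, t in events if t == "login")         # F: log-ins
intensity = len({t for _, _, t in events if t != "login"})       # I: distinct features used
stickiness = (events[-1][1] - events[0][1]).days                 # T: first-to-last span (days)
active_share = sum(t in ACTIVE for _, _, t in events) / len(events)  # Type: active vs passive mix
print(frequency, intensity, stickiness, f"{active_share:.0%} active events")
```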
Considerations
User behavior in digital health interventions can be tracked by embedding programming code as part of the development process or by using third-party services. For both methods, it is important during software design (or selection) to consider the type of data desired or needed to track behavioral engagement and to ensure the data are adequately captured and can be extracted easily. The most commonly used third-party service is Google Analytics, a service that can be implemented by connecting to the Google Analytics application programming interface. Google Analytics can be used to collect information on the users’ environment (location, browser, and connection speed) and the users’ behavior (eg, number of page visits, time on site, where users came from, and which page they visited last before exiting [94]). Capturing usage data more specific to the intervention platform, such as participation in a quiz or the percentage of answers correct, requires intentional programming and capture at the level of the software, as it does in Google Analytics. Before programming, considerable thought should be given to how the usage data will be analyzed, as good tracking generates a large amount of data (ie, every navigational move that every participant has ever made and even the moves they did not make) that can be hard to make sense of; therefore, an a priori analysis plan is recommended. Visualization tools [82], engagement indices such as those discussed by Baltierra et al [79] and Couper et al [88], or consideration of new data analysis techniques may be useful to gain insights into the data [95,96]. Although system usage data are often considered objective and reliable, some caution is recommended when interpreting them. The increasing use of dynamic internet protocol (IP) addresses and virtual private networks (which change or hide a user’s IP address), the use of IP addresses shared by multiple users (eg, via the family computer or internet cafes), and typical browsing behavior (eg, leaving multiple tabs open) may obscure usage data, especially for applications that do not require a unique log-in. This may be less of an issue for mobile apps compared with websites.
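To illustrate the kind of intentional, software-level capture described above, here is a minimal sketch of a server-side endpoint (assuming the Flask microframework) that records intervention-specific events such as quiz completion. The endpoint name and payload fields are our assumptions; a production system would add authentication and persistent storage.

```python
# A minimal sketch of capturing intervention-specific usage events server
# side, for metrics that generic analytics tools will not record by default.
from datetime import datetime, timezone
from flask import Flask, jsonify, request

app = Flask(__name__)
events = []  # in production, write to a database instead of memory

@app.post("/event")
def record_event():
    payload = request.get_json(force=True)
    events.append({
        "user_id": payload["user_id"],    # pseudonymous study ID
        "event": payload["event"],        # eg, "quiz_completed"
        "detail": payload.get("detail"),  # eg, {"percent_correct": 80}
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    return jsonify(status="ok")

if __name__ == "__main__":
    app.run()  # the client then POSTs JSON usage events to /event
```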
Intervention developers should, wherever possible, collect and analyze system usage data. Compared with the usage of other behavioral interventions (eg, a printed booklet), these data can be easily collected with early planning and good data capture techniques. Although usage data do not provide direct information on the psychological form of user engagement [4,5], they can provide some information to help us understand what is engaging about an intervention, and what is not, in an unobtrusive way. There is also some evidence of predictive validity, with technology usage generally correlating with positive behavior change or health outcomes [81,91,97,98]. However, more research to establish the predictive validity of system usage data is needed, especially given that most analyses to date have lacked a suitable control group.
As with analyzing intensive longitudinal EMA data, the analysis of system usage data can be challenging. This is due to the intensive longitudinal and multidimensional nature of the data as well as the pattern of missingness (which tends to be nonrandom and nonignorable). Recognizing this, a comprehensive analysis plan should be developed before the commencement of the study. Exploration of the data visualization tools, composite engagement metrics, and analysis approaches referenced above might assist with the development of this plan.
It is also recommended that developers consider and outline the intended usage of the intervention. Intended usage is the way in which individuals should experience the intervention to derive maximum benefit, based on the conceptual framework informing intervention design (ie, developers’ views on how the intervention should work best for who). Notably, intended usage may not be the same for all individuals (eg, in adaptive interventions [99,100]). By specifying intended usage a priori and comparing this with observed usage, we can establish whether individuals have adhered to the intervention and, in turn, the impact of adherence on efficacy [3].
Sensor Data
Focus Area
Sensors such as global positioning systems (GPS), cameras (eg, facilitating eye tracking analyses), microphones, and accelerometers can unobtrusively monitor users’ behavior and the physical context in which this behavior takes place. They can be provided by the investigator, but many of them are embedded in smartphones or trackers. This relates to the behavioral component of microlevel (eg, information on intervention fidelity) and macrolevel (eg, tracking behavior in real-life settings) engagement.
Current Use and Future Directions
Analyzing sensor data presents an unobtrusive way of measuring engagement that requires no additional time or effort from users other than the time spent engaging with the program. Their value lies in being able to track the behavior of many users [101], to enrich usage information in real-life situations, and to be combined with other user engagement measures such as EMA. There are calls to evaluate eHealth and mHealth behavior change interventions differently from traditional interventions so as to respond more nimbly to rapidly changing technologies and user preferences for functionalities [102-105]. Adaptations to eHealth and mHealth interventions are likely to be needed soon after first design and again after first implementation. Information from sensors that automatically track usage in real-life situations can help in measuring engagement with these interventions and in distinguishing between successful mastery of intervention goals and the need for continued engagement [5]. For example, in physical activity interventions, accelerometer information could continuously monitor the current activity level and indicate whether lower adherence to the intervention should be considered a successful completion or disengagement. Sensor data paired with usage information may thus provide insights into macrolevel engagement as a mediator of positive intervention outcomes. In a similar vein, GPS information can enrich macrolevel engagement measures. GPS gives information on where people use the intervention and where it is used less often. For example, an app designed to facilitate healthy food choices may be used at home or at grocery stores but show lower usage in restaurants. The GPS data give further insight into offline engagement with the intervention goals.
Sensors can also provide an indication of intervention fidelity. For example, distance traveled as measured by GPS and phone cameras taking pictures of meals can indicate whether the intervention is used in the appropriate manner and context [106]. The combination of usage and commonly included sensors can provide more detailed measures of real-life user engagement than usage information by itself. Sensor data can, moreover, trigger the event-based form of EMA. For example, users may be prompted to indicate their engagement with the intervention when the accelerometer shows the person is physically inactive or assess user engagement when GPS data show the person is in a certain physical context (eg, at a bar where there is a personal risk of smoking or alcohol consumption).
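As a simple illustration of such a sensor-triggered, event-based EMA, the sketch below prompts the user when recent accelerometer readings suggest inactivity. The sampling window and threshold are illustrative assumptions.

```python
# A minimal sketch of an event-based EMA trigger driven by accelerometer
# data: if recent movement magnitude stays near resting levels, prompt.
import math

def inactive(samples, threshold=1.1, window=60):
    """samples: recent (x, y, z) accelerometer readings in g units."""
    recent = samples[-window:]
    mean_mag = sum(math.sqrt(x*x + y*y + z*z) for x, y, z in recent) / len(recent)
    return mean_mag < threshold  # close to 1 g -> device (and user) at rest

stream = [(0.01, 0.02, 0.99)] * 60  # simulated minute of sitting still
if inactive(stream):
    print("Trigger EMA: 'What kept you from being active just now?'")
```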
Considerations
A challenge of using GPS data for this purpose is the time-intensive nature of GPS data preparation and analysis. This will likely get easier in the future as new analysis packages become available to facilitate automation. Sensors, moreover, have the advantage of presenting a low level of respondent burden. However, especially with context-aware sensing using GPS, users are concerned about privacy issues [107,108]. In addition, sensors integrated in smartphones tend to negatively impact the battery life of the mobile device, and users may, therefore, be less compliant with running these sensors on their phones. This may especially be the case when users are skeptical toward the accuracy and relevance of context-aware smartphone sensing [25]. Therefore, communicating research findings about the validity of such measures [109,110] and conducting pilot tests and validity studies of new measures may be necessary to increase their use in future interventions and optimize uptake among participants.
Social Media
Focus Area
Another unobtrusive, low-burden approach to capturing engagement with the intervention is to analyze users’ social media patterns. In social media, users create online communities (eg, social networking sites) via which they share information, opinions, personal messages, or visual material. Despite the interest of behavior change professionals in using social media to increase intervention effectiveness (see, eg, [111]), to our knowledge, little research is available on the use of social media to measure engagement with eHealth and mHealth. The available resources mostly come from marketing and media audience research [112,113]. Social media message threads may provide useful information on user experience (microlevel engagement with the intervention) but might also provide insights into macrolevel engagement (eg, wall posts on behavioral achievements).
Current Use and Future Directions
One study examined the number of wall posts made over time as an indication of engagement with a social networking physical activity intervention [114]. An approach to reduce the analysis burden is to use markers: previously nonexistent words launched exclusively within the intervention [115]. These markers are used to trace any conversation that takes place on social media in relation to the intervention and are a way to measure social proliferation associated with the intervention content. An example comes from a video intervention on cognitive problems that may result from being a victim of violence [115]. To clearly identify all conversations and mentions on social media that would result from this topic, the authors launched the word falterhead to describe how the main character experienced the negative effects on his brain functioning after being violently attacked. This marker allowed a quick identification of all social media content related to the program, as this nonexistent word is unlikely to occur in content unrelated to the intervention. Several social media sources are then searched with text- and data-mining tools (eg, HowardsHome Finchline) for the occurrence and content of messages that contain these markers. The messages are next analyzed in terms of quantity (eg, Is the intervention being talked about? What are patterns of social proliferation over time?) and quality (eg, How is the topic mentioned or discussed? Is this how we wished viewers would think and talk about the intervention?). Social media messages relating to the eHealth and mHealth intervention might also be analyzed for the occurrence of certain social media engagement profiles. On a continuum from passive and uninterested to more active and engaged, profiles of lurkers, casuals, actives, committed, and loyalists can be distinguished. Although to our knowledge this has not yet been applied to analyze engagement with eHealth and mHealth behavior change interventions, interventions attracting more actives, committed, and loyalists on social media might indicate higher user engagement than those attracting more lurkers and casuals [116]. This might be especially useful for assessing engagement in behavior change programs in real-life settings.
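A minimal sketch of the marker-tracing step might look as follows. The messages are simulated; a real pipeline would pull posts from platform APIs or a commercial monitoring tool before applying the same matching and tallying logic.

```python
# A minimal sketch of tracing an intervention marker word (eg, "falterhead")
# across collected social media messages and tallying mentions per day.
import re
from collections import Counter

messages = [  # (date, text) -- simulated posts for illustration
    ("2018-05-01", "Just watched the video... poor guy ended up a falterhead"),
    ("2018-05-01", "anyone else think the falterhead scenes were intense?"),
    ("2018-05-02", "new phone, who dis"),
]

marker = re.compile(r"\bfalterhead\b", re.IGNORECASE)
hits = [(day, text) for day, text in messages if marker.search(text)]

print(Counter(day for day, _ in hits))  # quantity: proliferation over time
for _, text in hits:                    # quality: content for manual coding
    print(text)
```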
Considerations
The vast amount of social media content may make it difficult to extract what is relevant to the intervention. Markers mentioned earlier and audit tools are useful to facilitate such social media analyses. Examples of free audit tools to analyze social media are Sprout Social Simply Measured, Instagram Insights, and Union Metrics. The free statistical software program R also has many packages to analyze social media data. The analysis of these social media patterns requires a combination of qualitative techniques to assess discussion or post sentiment and topic, and quantitative methods, for example, to assess reach by combining number of followers for each mention on social media [117]. Text analytic tools available in many statistical packages such as R and SAS may also be useful here.
Psychophysiological Measures
Focus Area
Psychophysiological methods of measurement are used to examine the relationship between physiology and overt behavior or cognitive processes and variables. Psychophysiological measures are operationalizations of cognitive processes or variables, just as self-report questionnaires are used to measure processes or variables derived from theory [118]. They have been shown to be valuable approaches for measuring the experiential aspects of microengagement [119].
Current Use and Future Directions
There are several types of psychophysiological measures used to study cognitive and affective processes (for a comprehensive overview of measures used in human-computer interaction and user experience research, see [119-121]). We describe the 2 most common methods with a strong temporal resolution (ie, electroencephalography [EEG] and eye-tracking). A strong temporal resolution (ie, precision of measurement with respect to time) is warranted to investigate engagement over time. It needs to be stressed, however, that other methods show promising results as well [122-127]. For example, predicting engagement using a novel visual analysis approach to recognize affect performed significantly better or on par with using self-reports [125]. The methods presented here are noninvasive but obtrusive in comparison with, for example, most measurements of system usage data. These methods are mostly used in laboratory settings and during intervention development (eg, pretesting of a website), but the opportunities to use them in field settings are increasing (eg, [128]). Moreover, it is also possible to use these methods in parallel with a trial or afterward to gain more insight into user engagement and, thereby, shed more light on trial findings.
EEG records electrical activity in the brain using small, flat metal discs (electrodes) attached to a person’s scalp. Using this method requires adequate expertise, both in terms of measurement [129] and analysis [130] of data. Event-related potentials (ERPs) are the average changes in the EEG signal in response to a stimulus, and characteristic ERP responses are referred to as components [131]. For example, Leiker et al [132], in a study on motion-controlled video games, focused on the amplitude of a specific component (labeled eP3a), which is a reliable index of attentional reserve [133,134]. This study revealed that participants who reported higher levels of engagement (as measured by the Intrinsic Motivation Inventory) showed a smaller eP3a, which is indicative of paying more attention to the primary task (eg, playing the game). Another study revealed that late negative slow wave components of the ERP were indicative of attention, which was partly confirmed by findings from self-reports (ie, the Immersive Experience Questionnaire) [123].
Eye-tracking is based on the strong association between eye movements and attention [135]. It is a suitable method to assess the course of attention over time [136]. For example, fixation data of an experimental study revealed that participants’ eye movements in the immersive condition decreased over time, which is indicative of increased attention [137]. Another example is a study comparing a video with a text condition of a physical activity intervention. This study revealed that participants in the video condition displayed greater attention to the physical activity feedback in terms of gaze duration, total fixation duration, and focusing on feedback [138]. Another study using eye-tracking found that participants focused more on certain experimentally manipulated aspects of a health-related website (ie, in terms of frequency and duration), but this did not affect usage data (ie, the number of pages visited or the time on the website) [139]. It might be that these aspects attract attention, but there is a trade-off in the sense that participants then focus less on other aspects of the website. However, it could also be that attention only partly predicts engagement.
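To illustrate the kind of fixation metrics reported in these studies, here is a minimal sketch that aggregates fixations over an area of interest (AOI), such as a feedback panel. The fixation records and AOI bounds are invented for illustration.

```python
# A minimal sketch of summarizing eye-tracking fixations over an area of
# interest (AOI), yielding total fixation duration on that region.
fixations = [  # (x, y, duration in ms) -- illustrative fixation events
    (120, 340, 220), (640, 90, 180), (130, 350, 400), (125, 345, 310),
]
aoi = {"x0": 100, "x1": 300, "y0": 300, "y1": 400}  # feedback panel bounds

in_aoi = [d for x, y, d in fixations
          if aoi["x0"] <= x <= aoi["x1"] and aoi["y0"] <= y <= aoi["y1"]]

print(f"fixations on AOI: {len(in_aoi)}")
print(f"total fixation duration on AOI: {sum(in_aoi)} ms")
```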
Considerations
With regard to both EEG and eye-tracking, it is important to note that attention is only the first appraisal in the process of engagement [139]. There are other psychophysiological methods besides EEG and eye-tracking that are mostly focused on measuring arousal. A previous study, for example, recorded electrodermal activity (EDA) and facial muscle activity (electromyography [EMG]) in addition to a Game Experience Questionnaire [140]. The association between these measures, however, was not straightforward. For example, EMG orbicularis oculi (periocular) is usually used to indicate positive emotions and high arousal but was negatively correlated to competence (which is a positive dimension of the Game Experience Questionnaire). Another study measured engagement in 5 different ways: self-reports using 4 dimensions of the Temple Presence Inventory, content analyses of user videos, EDA, mouse movements, and click logs (the latter 2 are measurements of usage data) [124]. These 5 measures correlated in limited ways. The authors concluded that “engagements as a construct is more complex than is captured in any of these measures individually and that using multiple methods to assess engagement can illuminate aspects of engagement not detectable by a single method of measurement” [124].
This is indicative of the complexity of engagement as a construct and reflects recent calls from the human-computer interaction field for future studies to identify valid combinations of psychophysiological measures that more fully capture the multidimensional nature of engagement [119].
Discussion
It is generally agreed that some form of engagement is necessary for eHealth and mHealth behavior change interventions to be effective. However, cohesive and in-depth knowledge about how to develop engaging interventions and the pathways between engagement and efficacy are lacking. Several models of engagement have been proposed in the literature to address this deficit, but little testing of the models has been conducted. To support research in this area and progress the science of user engagement, we aimed to provide a comprehensive overview of the measurement options available to assess engagement in an eHealth and mHealth behavioral intervention setting. The overview should not be treated as exhaustive; however, it should serve as a useful point of reference when considering engagement measures for behavioral eHealth and mHealth research.
The best measurement approach will likely depend on the stage of research and the specific research context, although there are benefits from using multiple methods and pairing the data (eg, self-report data relating to interest, attention, and affect combined with system usage data). It is also important to check, before data collection, whether the expertise required for the chosen methods (eg, EEG) is available. Given the complexity of engagement as a construct, using multiple methods may be necessary to illuminate it fully [119,124]. At present, most studies in the eHealth and mHealth behavioral intervention space rely on system usage data only. Although system usage data are undoubtedly a valuable marker of engagement, they are not considered a valid measure of micro- or macroengagement on their own [4,5]. Greater efforts are needed to also assess the psychological aspects of engagement to better understand the interplay between perceptions, usage, and efficacy.
Questionnaires are perhaps the most accessible way to assess microlevel engagement in terms of cost. However, there is currently a lack of validated self-report questionnaires specific to the eHealth and mHealth behavior change intervention context. This is reflected in the large number of purpose-built questionnaires (ie, questionnaires designed for a specific study) that have been used to date [4]. As the main benefit of questionnaires is that they allow for the collection of subjective data in a standardized way, greater efforts are needed to develop and implement standard items. Although not yet validated, the questionnaire developed by Perski et al [43] is promising in this regard, as it includes constructs related to both psychological and behavioral aspects of engagement and only focuses on engagement constructs. The other questionnaires identified focus only on the psychological aspects of engagement, and some include constructs more aligned with standard acceptability items (eg, perceived credibility), rather than the constructs of interest, attention, and affect. It may be best to avoid these questionnaires when testing models that hypothesize that acceptability markers influence engagement parameters.
There are several other measures of engagement that may also be used to test engagement models (eg, sensors, social media data, EMA, and psychophysiological measures). Despite their potential advantages, little research has been conducted exploring their use (and validity) in the digital behavior change setting. This is likely due to higher cost, time, and data analysis requirements relative to other measures. To mitigate this, behavioral researchers are increasingly drawing on expertise across other relevant disciplines (eg, informatics, human-computer interaction, and experimental and cognitive psychology). It is hoped that this paper will help to facilitate this research, especially research establishing the criterion, divergent, and predictive validity of these measures.
Overall, establishing the validity of engagement measures across multiple settings and learning how to triangulate measures in a complementary way are necessary next steps for advancing the field. This will allow us to thoroughly test contemporary models of user engagement and, hence, deepen our understanding of the interplay between intervention perceptions, usage, and efficacy across different settings.
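As a sketch of what testing one such pathway could look like, the example below (Python; the data file, variable names, and simple single-mediator structure are illustrative assumptions rather than a recommended analysis) estimates whether system usage mediates the association between engagement perceptions and a behavioral outcome, in the spirit of the mediation methods reviewed by MacKinnon et al [51].

```python
# Illustrative sketch only: a single-mediator model asking whether usage
# (mediator) carries the effect of engagement perceptions (predictor) on a
# behavioral outcome. Hypothetical data; a real test would also need to
# address confounding, measurement error, and temporal ordering.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("trial_data.csv")  # engagement_score, total_minutes, outcome_change

# Path a: do engagement perceptions predict usage?
a = smf.ols("total_minutes ~ engagement_score", data=df).fit().params["engagement_score"]

# Path b: does usage predict the outcome, controlling for perceptions?
b = smf.ols("outcome_change ~ total_minutes + engagement_score",
            data=df).fit().params["total_minutes"]

# Point estimate of the indirect (mediated) effect
indirect = a * b

# Percentile bootstrap for a confidence interval around the indirect effect
rng = np.random.default_rng(seed=1)
boot = []
for _ in range(1000):
    s = df.sample(frac=1.0, replace=True, random_state=int(rng.integers(1_000_000_000)))
    a_s = smf.ols("total_minutes ~ engagement_score", data=s).fit().params["engagement_score"]
    b_s = smf.ols("outcome_change ~ total_minutes + engagement_score",
                  data=s).fit().params["total_minutes"]
    boot.append(a_s * b_s)

low, high = np.percentile(boot, [2.5, 97.5])
print(f"Indirect effect = {indirect:.3f}, 95% bootstrap CI [{low:.3f}, {high:.3f}]")
```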
Acknowledgments
The authors would like to thank Celine Chong for her assistance in extracting data presented in the literature and reviewing engagement measures. Celine was supported by a Freemasons Foundation Centre for Men’s Health summer scholarship. CES was supported by a National Health and Medical Research Council (NHMRC) Early Career Research fellowship (ID 1090517). LP is funded by the Research Foundation Flanders. CV is funded through a Future Leader Fellowship (ID 100427) from the National Heart Foundation of Australia. CM is supported by an NHMRC Career Development fellowship (ID 1125913). AD is supported by a Research Foundation Flanders grant (FWO16/PDO/060, 12H6717N).
Abbreviations
- EDA: electrodermal activity
- EEG: electroencephalography
- eHealth: electronic health
- EMA: ecological momentary assessment
- EMG: electromyography
- ERP: event-related potentials
- FITT: frequency, intensity, time, and type
- IP: internet protocol
- mHealth: mobile health
- NHMRC: National Health and Medical Research Council
Footnotes
Authors' Contributions: CES conceived of the idea for this viewpoint. CES, AD, RC, CW, and SLW defined the scope of the manuscript and drafted the initial sections and revisions. CM, AMM, AM, PAW, CV, LP, and MDH provided critical review, refined the scope, and contributed to redrafting and editing of the manuscript.
Conflicts of Interest: None declared.
References
- 1.Vandelanotte C, Müller AM, Short CE, Hingle M, Nathan N, Williams SL, Lopez ML, Parekh S, Maher CA. Past, present, and future of eHealth and mHealth research to improve physical activity and dietary behaviors. J Nutr Educ Behav. 2016 Mar;48(3):219–228.e1. doi: 10.1016/j.jneb.2015.12.006.S1499-4046(15)00806-4 [DOI] [PubMed] [Google Scholar]
- 2.Michie S, Yardley L, West R, Patrick K, Greaves F. Developing and evaluating digital interventions to promote behavior change in health and health care: recommendations resulting from an international workshop. J Med Internet Res. 2017 Jun 29;19(6):e232. doi: 10.2196/jmir.7126. http://www.jmir.org/2017/6/e232/ v19i6e232 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 3.Kelders SM, Kok RN, Ossebaard HC, Van Gemert-Pijnen JE. Persuasive system design does matter: a systematic review of adherence to web-based interventions. J Med Internet Res. 2012 Nov 14;14(6):e152. doi: 10.2196/jmir.2104. http://www.jmir.org/2012/6/e152/ v14i6e152 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.Perski O, Blandford A, West R, Michie S. Conceptualising engagement with digital behaviour change interventions: a systematic review using principles from critical interpretive synthesis. Transl Behav Med. 2017 Dec;7(2):254–67. doi: 10.1007/s13142-016-0453-1. http://europepmc.org/abstract/MED/27966189 .10.1007/s13142-016-0453-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 5.Yardley L, Spring BJ, Riper H, Morrison LG, Crane DH, Curtis K, Merchant GC, Naughton F, Blandford A. Understanding and promoting effective engagement with digital behavior change interventions. Am J Prev Med. 2016 Dec;51(5):833–42. doi: 10.1016/j.amepre.2016.06.015.S0749-3797(16)30243-4 [DOI] [PubMed] [Google Scholar]
- 6.Short CE, Rebar A, Plotnikoff RC, Vandelanotte C. Designing engaging online behaviour change interventions: a proposed model of user engagement. Eur Health Psychol. 2015;17(1):32–8. https://digital.library.adelaide.edu.au/dspace/bitstream/2440/97646/3/hdl_97646.pdf . [Google Scholar]
- 7.Walton H, Spector A, Tombor I, Michie S. Measures of fidelity of delivery of, and engagement with, complex, face-to-face health behaviour change interventions: a systematic review of measure quality. Br J Health Psychol. 2017 Dec;22(4):872–903. doi: 10.1111/bjhp.12260. http://europepmc.org/abstract/MED/28762607 . [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8.Sieverink F, Kelders SM, van Gemert-Pijnen JE. Clarifying the concept of adherence to eHealth technology: systematic review on when usage becomes adherence. J Med Internet Res. 2017 Dec 06;19(12):e402. doi: 10.2196/jmir.8578. http://www.jmir.org/2017/12/e402/ v19i12e402 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 9.Ryan C, Bergin M, Wells JS. Theoretical perspectives of adherence to web-based interventions: a scoping review. Int J Behav Med. 2018 Dec;25(1):17–29. doi: 10.1007/s12529-017-9678-8.10.1007/s12529-017-9678-8 [DOI] [PubMed] [Google Scholar]
- 10.O'Brien H, Toms EG. What is user engagement? A conceptual framework for defining user engagement with technology. J Am Soc Inf Sci Technol. 2008 Apr;59(6):938–55. doi: 10.1002/asi.20801. [DOI] [Google Scholar]
- 11.Crutzen R, Ruiter R. Interest in behavior change interventions: a conceptual model. Eur Health Psychol. 2015;17(1):6–11. [Google Scholar]
- 12.Saket B, Endert A, Stasko J. Beyond usability and performance: a review of user experience-focused evaluations in visualization. BELIV '16: Proceedings of the Sixth Workshop on Beyond Time and Errors on Novel Evaluation Methods for Visualization; October 24, 2016; Baltimore, MD, USA. 2016. pp. 133–42. http://bahadorsaket.com/publication/BELIV2016.pdf . [DOI] [Google Scholar]
- 13.Denisova A, Nordin AI, Cairns P. The Convergence of Player Experience Questionnaires. Proceedings of the 2016 Annual Symposium on Computer-Human Interaction in Play; CHI PLAY '16; October 16 - 19, 2016; Austin, Texas, USA. 2016. pp. 33–37. [Google Scholar]
- 14.Boyle EA, Connolly TM, Hainey T, Boyle JM. Engagement in digital entertainment games: a systematic review. Comput Human Behav. 2012 May;28(3):771–80. doi: 10.1016/j.chb.2011.11.020. [DOI] [Google Scholar]
- 15.Schiefele U. Interest, learning, and motivation. Educ Psychol. 1991 Jun;26(3-4):299–323. doi: 10.1080/00461520.1991.9653136. [DOI] [Google Scholar]
- 16.Schraw G, Lehman S. Situational interest:a review of the literature and directions for future research. Educ Psychol Rev. 2001;13(1):23–52. doi: 10.1023/A:1009004801455. [DOI] [Google Scholar]
- 17.Gerrig R, Zimbardo P. Psychology and Life. CA, United States: Pearson; 2012. [Google Scholar]
- 18.James W. In: The Principles of Psychology. Miller G, editor. New York: Dover Publications; 1950. [Google Scholar]
- 19.Duncan S, Barrett LF. Affect is a form of cognition: a neurobiological analysis. Cogn Emot. 2007 Sep;21(6):1184–211. doi: 10.1080/02699930701437931. http://europepmc.org/abstract/MED/18509504 . [DOI] [PMC free article] [PubMed] [Google Scholar]
- 20.Barrett LF, Russell JA. The structure of current affect: controversies and emerging consensus. Curr Dir Psychol Sci. 2016 Jun 22;8(1):10–14. doi: 10.1111/1467-8721.00003. [DOI] [Google Scholar]
- 21.Csikszentmihalyi M. Flow: The Psychology of Optimal Experience. New York: Harper Perennial; 1990. [Google Scholar]
- 22.Brockmyer JH, Fox CM, Curtiss KA, McBroom E, Burkhart KM, Pidruzny JN. The development of the Game Engagement Questionnaire: a measure of engagement in video game-playing. J Exp Soc Psychol. 2009 Jul;45(4):624–34. doi: 10.1016/j.jesp.2009.02.016. [DOI] [Google Scholar]
- 23.Baños RM, Botella C, Alcañiz M, Liaño V, Guerrero B, Rey B. Immersion and emotion: their impact on the sense of presence. Cyberpsychol Behav. 2004 Dec;7(6):734–41. doi: 10.1089/cpb.2004.7.734. [DOI] [PubMed] [Google Scholar]
- 24.Lombard M, Ditton T. At the heart of it all: the concept of presence. J Comput Mediat Commun. 1997 Sep 1;3(2) doi: 10.1111/j.1083-6101.1997.tb00072.x. [DOI] [Google Scholar]
- 25.Dennison L, Morrison L, Conway G, Yardley L. Opportunities and challenges for smartphone applications in supporting health behavior change: qualitative study. J Med Internet Res. 2013 Apr 18;15(4):e86. doi: 10.2196/jmir.2583. http://www.jmir.org/2013/4/e86/ v15i4e86 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 26.Milward J, Khadjesari Z, Fincham-Campbell S, Deluca P, Watson R, Drummond C. User preferences for content, features, and style for an app to reduce harmful drinking in young adults: analysis of user feedback in app stores and focus group interviews. JMIR Mhealth Uhealth. 2016 May 24;4(2):e47. doi: 10.2196/mhealth.5242. http://mhealth.jmir.org/2016/2/e47/ v4i2e47 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 27.Kernot J, Olds T, Lewis LK, Maher C. Usability testing and piloting of the Mums Step It Up program--a team-based social networking physical activity intervention for women with young children. PLoS One. 2014;9(10):e108842. doi: 10.1371/journal.pone.0108842. http://dx.plos.org/10.1371/journal.pone.0108842 .PONE-D-14-25805 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 28.Short C, James EL, Rebar AL, Duncan MJ, Courneya KS, Plotnikoff RC, Crutzen R, Bidargaddi N, Vandelanotte C. Designing more engaging computer-tailored physical activity behaviour change interventions for breast cancer survivors: lessons from the iMove More for Life study. Support Care Cancer. 2017 Dec;25(11):3569–85. doi: 10.1007/s00520-017-3786-5.10.1007/s00520-017-3786-5 [DOI] [PubMed] [Google Scholar]
- 29.Kirwan M, Duncan MJ, Vandelanotte C, Mummery WK. Design, development, and formative evaluation of a smartphone application for recording and monitoring physical activity levels: the 10,000 Steps “iStepLog”. Health Educ Behav. 2013 Apr;40(2):140–51. doi: 10.1177/1090198112449460.1090198112449460 [DOI] [PubMed] [Google Scholar]
- 30.Perski O, Blandford A, Ubhi HK, West R, Michie S. Smokers' and drinkers' choice of smartphone applications and expectations of engagement: a think aloud and interview study. BMC Med Inform Decis Mak. 2017 Dec 28;17(1):25. doi: 10.1186/s12911-017-0422-8. https://bmcmedinformdecismak.biomedcentral.com/articles/10.1186/s12911-017-0422-8 .10.1186/s12911-017-0422-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 31.Bradbury K, Morton K, Band R, van Woezik A, Grist R, McManus RJ, Little P, Yardley L. Using the person-based approach to optimise a digital intervention for the management of hypertension. PLoS One. 2018;13(5):e0196868. doi: 10.1371/journal.pone.0196868. http://dx.plos.org/10.1371/journal.pone.0196868 .PONE-D-17-30284 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32.Crane D, Garnett C, Brown J, West R, Michie S. Factors influencing usability of a smartphone app to reduce excessive alcohol consumption: think aloud and interview studies. Front Public Health. 2017;5:39. doi: 10.3389/fpubh.2017.00039. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 33.Alkhaldi G, Modrow K, Hamilton F, Pal K, Ross J, Murray E. Promoting engagement with a digital health intervention (HeLP-Diabetes) using email and text message prompts: mixed-methods study. Interact J Med Res. 2017 Aug 22;6(2):e14. doi: 10.2196/ijmr.6952. http://www.i-jmr.org/2017/2/e14/ v6i2e14 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 34.El-Hilly A, Iqbal SS, Ahmed M, Sherwani Y, Muntasir M, Siddiqui S, Al-Fagih Z, Usmani O, Eisingerich AB. Game On? Smoking cessation through the gamification of mHealth: a longitudinal qualitative study. JMIR Serious Games. 2016 Oct 24;4(2):e18. doi: 10.2196/games.5678. http://games.jmir.org/2016/2/e18/ v4i2e18 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 35.Hwang M, Hong J, Hao Y, Jong J. Elders' usability, dependability, and flow experiences on embodied interactive video games. Educ Gerontol. 2011 Aug;37(8):715–31. doi: 10.1080/03601271003723636. [DOI] [Google Scholar]
- 36.Morrison L, Moss-Morris R, Michie S, Yardley L. Optimizing engagement with Internet-based health behaviour change interventions: comparison of self-assessment with and without tailored feedback using a mixed methods approach. Br J Health Psychol. 2014 Nov;19(4):839–55. doi: 10.1111/bjhp.12083. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 37.Horsch C, Lancee J, Beun RJ, Neerincx MA, Brinkman WP. Adherence to technology-mediated insomnia treatment: a meta-analysis, interviews, and focus groups. J Med Internet Res. 2015 Sep 04;17(9):e214. doi: 10.2196/jmir.4115. http://www.jmir.org/2015/9/e214/ v17i9e214 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 38.Ritterband LM, Thorndike FP, Cox DJ, Kovatchev BP, Gonder-Frederick LA. A behavior change model for internet interventions. Ann Behav Med. 2009 Aug;38(1):18–27. doi: 10.1007/s12160-009-9133-4. http://europepmc.org/abstract/MED/19802647 . [DOI] [PMC free article] [PubMed] [Google Scholar]
- 39.Irvine L, Melson AJ, Williams B, Sniehotta FF, McKenzie A, Jones C, Crombie IK. Real time monitoring of engagement with a text message intervention to reduce binge drinking among men living in socially disadvantaged areas of Scotland. Int J Behav Med. 2017 Dec;24(5):713–721. doi: 10.1007/s12529-017-9666-z. http://europepmc.org/abstract/MED/28702758 .10.1007/s12529-017-9666-z [DOI] [PMC free article] [PubMed] [Google Scholar]
- 40.McDonald S, Quinn F, Vieira R, O'Brien N, White M, Johnston DW, Sniehotta FF. The state of the art and future opportunities for using longitudinal n-of-1 methods in health behaviour research: a systematic literature overview. Health Psychol Rev. 2017 Dec;11(4):307–23. doi: 10.1080/17437199.2017.1316672. [DOI] [PubMed] [Google Scholar]
- 41.Crowston K, Liu X, Allen EE. Machine learning and rule-based automated coding of qualitative data. ASIS&T '10: Proceedings of the 73rd ASIS&T Annual Meeting on Navigating Streams in an Information Ecosystem; October 22-27, 2010; Pittsburgh, Pennsylvania. 2010. pp. 1–2. https://pdfs.semanticscholar.org/2f4a/60cb9abf8da062f362c4c47819cc16471bcb.pdf . [Google Scholar]
- 42.Lefebvre C, Tada Y, Hilfiker SW, Baur C. The assessment of user engagement with eHealth content: the eHealth engagement scale. J Comput Mediat Commun. 2010;15(4):666–81. doi: 10.1111/j.1083-6101.2009.01514.x. [DOI] [Google Scholar]
- 43.Perski O. OSF. 2017. [2018-07-13]. Study protocol: development and psychometric evaluation of a self-report instrument to measure engagement with digital behaviour change interventions. https://osf.io/cj9y7/
- 44.O'Brien HL, Toms EG. The development and evaluation of a survey to measure user engagement. J Am Soc Inf Sci. 2009 Oct 19;61(1):50–69. doi: 10.1002/asi.21229. [DOI] [Google Scholar]
- 45.Laugwitz B, Held T, Schrepp M. Construction and Evaluation of a User Experience Questionnaire. Symposium of the Austrian HCI and Usability Engineering Group USAB 2008: HCI and Usability for Education and Work; November 20-21, 2008; Graz, Austria. 2008. pp. 63–76. [DOI] [Google Scholar]
- 46.Jackson S, Marsh HW. Development and validation of a scale to measure optimal experience: the flow state scale. J Sport Exerc Psychol. 1996 Mar;18(1):17–35. doi: 10.1123/jsep.18.1.17. [DOI] [Google Scholar]
- 47.Direito A, Jiang Y, Whittaker R, Maddison R. Apps for IMproving FITness and increasing physical activity among young people: the AIMFIT pragmatic randomized controlled trial. J Med Internet Res. 2015 Aug 27;17(8):e210. doi: 10.2196/jmir.4568. http://www.jmir.org/2015/8/e210/ v17i8e210 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 48.Boase J, Ling R. Measuring mobile phone use: self-report versus log data. J Comput Mediat Commun. 2013 Jun 10;18(4):508–19. doi: 10.1111/jcc4.12021. [DOI] [Google Scholar]
- 49.Scharkow M. The accuracy of self-reported internet use—a validation study using client log data. Commun Methods Meas. 2016 Mar 24;10(1):13–27. doi: 10.1080/19312458.2015.1118446. [DOI] [Google Scholar]
- 50.Sigerson L, Cheng C. Scales for measuring user engagement with social network sites: A systematic review of psychometric properties. Comput Human Behav. 2018 Jun;83:87–105. doi: 10.1016/j.chb.2018.01.023. [DOI] [Google Scholar]
- 51.MacKinnon D, Fairchild AJ, Fritz MS. Mediation analysis. Annu Rev Psychol. 2007;58:593–614. doi: 10.1146/annurev.psych.58.110405.085542. http://europepmc.org/abstract/MED/16968208 . [DOI] [PMC free article] [PubMed] [Google Scholar]
- 52.Murray J, Brennan SF, French DP, Patterson CC, Kee F, Hunter RF. Mediators of behavior change maintenance in physical activity interventions for young and middle-aged adults: a systematic review. Ann Behav Med. 2018 May 18;52(6):513–29. doi: 10.1093/abm/kay012.4972881 [DOI] [PubMed] [Google Scholar]
- 53.Rhodes R, Pfaeffli LA. Mediators of physical activity behaviour change among adult non-clinical populations: a review update. Int J Behav Nutr Phys Act. 2010 May 11;7:37. doi: 10.1186/1479-5868-7-37. https://ijbnpa.biomedcentral.com/articles/10.1186/1479-5868-7-37 .1479-5868-7-37 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 54.Dewar DL, Lubans DR, Morgan PJ, Plotnikoff RC. Development and evaluation of social cognitive measures related to adolescent physical activity. J Phys Act Health. 2013 May;10(4):544–55. doi: 10.1186/1479-5868-9-36.2011-0199 [DOI] [PubMed] [Google Scholar]
- 55.Rhodes R, Hunt Matheson D, Mark R. Evaluation of social cognitive scaling response options in the physical activity domain. Meas Phys Educ Exerc Sci. 2010 Jul 28;14(3):137–50. doi: 10.1080/1091367X.2010.495539. [DOI] [Google Scholar]
- 56.Hall E, Chai W, Koszewski W, Albrecht J. Development and validation of a social cognitive theory-based survey for elementary nutrition education program. Int J Behav Nutr Phys Act. 2015 Apr 09;12:47. doi: 10.1186/s12966-015-0206-4. https://ijbnpa.biomedcentral.com/articles/10.1186/s12966-015-0206-4 .10.1186/s12966-015-0206-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 57.Francis J, Johnston M, Eccles M, Walker A, Grimshaw JM, Foy R, Kaner EF, Smith L, Bonetti D. Constructing questionnaires based on the theory of planned behaviour: a manual for Health Services Researchers. Newcastle upon Tyne, UK: Centre for Health Service Research; 2004. http://openaccess.city.ac.uk/1735/1/TPB%20Manual%20FINAL%20May2004.pdf . [Google Scholar]
- 58.Bandura A. Guide for constructing self-efficacy scales. In: Pajares F, Urdan T, editors. Self-Efficacy Beliefs of Adolescents. Greenwich, CT: Information Age Publishing; 2006. pp. 307–37. [Google Scholar]
- 59.Crutzen R, Peters GJ. Scale quality: alpha is an inadequate estimate and factor-analytic evidence is needed first of all. Health Psychol Rev. 2017 Dec;11(3):242–47. doi: 10.1080/17437199.2015.1124240. [DOI] [PubMed] [Google Scholar]
- 60.Doherty K, Doherty G. The construal of experience in HCI: understanding self-reports. Int J Hum Comput Stud. 2018 Feb;110:63–74. doi: 10.1016/j.ijhcs.2017.10.006. [DOI] [Google Scholar]
- 61.Reis HT. Why Researchers Should Think “Real World”: A Conceptual Rationale. In: Mehl MR, Conner TS, editors. Handbook of Research Methods for Studying Daily Life. New York: The Guilford Press; 2013. pp. 3–22. [Google Scholar]
- 62.Hernandez J, McDuff D, Infante C, Maes P, Quigley K, Picard R. Wearable ESM: differences in the experience sampling method across wearable devices. MobileHCI '16: Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services; September 6-9, 2016; Florence, Italy. 2016. pp. 195–205. [Google Scholar]
- 63.Dunton GF, Liao Y, Intille SS, Spruijt-Metz D, Pentz M. Investigating children's physical activity and sedentary behavior using ecological momentary assessment with mobile phones. Obesity (Silver Spring) 2011 Jun;19(6):1205–12. doi: 10.1038/oby.2010.302.oby2010302 [DOI] [PubMed] [Google Scholar]
- 64.Fanning J, Mackenzie M, Roberts S, Crato I, Ehlers D, McAuley E. Physical activity, mind wandering, affect, and sleep: an ecological momentary assessment. JMIR Mhealth Uhealth. 2016 Aug 31;4(3):e104. doi: 10.2196/mhealth.5855. http://mhealth.jmir.org/2016/3/e104/ v4i3e104 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 65.Chung J, Gardner HJ. Temporal presence variation in immersive computer games. Int J Hum Comput Interact. 2012 Aug;28(8):511–29. doi: 10.1080/10447318.2011.627298. [DOI] [Google Scholar]
- 66.Wen C, Schneider S, Stone AA, Spruijt-Metz D. Compliance with mobile ecological momentary assessment protocols in children and adolescents: a systematic review and meta-analysis. J Med Internet Res. 2017 Dec 26;19(4):e132. doi: 10.2196/jmir.6641. http://www.jmir.org/2017/4/e132/ v19i4e132 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 67.Cain AE, Depp CA, Jeste DV. Ecological momentary assessment in aging research: a critical review. J Psychiatr Res. 2009 Jul;43(11):987–96. doi: 10.1016/j.jpsychires.2009.01.014. http://europepmc.org/abstract/MED/19272611 .S0022-3956(09)00019-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 68.Modecki KL, Mazza GL. Are we making the most of ecological momentary assessment data? A comment on Richardson, Fuller-Tyszkiewicz, O'Donnell, Ling, & Staiger, 2017. Health Psychol Rev. 2017 Dec;11(3):295–97. doi: 10.1080/17437199.2017.1347513. [DOI] [PubMed] [Google Scholar]
- 69.Richardson B, Fuller-Tyszkiewicz M, O'Donnell R, Ling M, Staiger PK. Regression tree analysis of ecological momentary assessment data. Health Psychol Rev. 2017 Dec;11(3):235–41. doi: 10.1080/17437199.2017.1343677. [DOI] [PubMed] [Google Scholar]
- 70.Ginexi EM, Riley W, Atienza AA, Mabry PL. The promise of intensive longitudinal data capture for behavioral health research. Nicotine Tob Res. 2014 May;16 Suppl 2:S73–5. doi: 10.1093/ntr/ntt273. http://europepmc.org/abstract/MED/24711629 .ntt273 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 71.Hamaker E, Wichers M. No time like the present: discovering the hidden dynamics in intensive longitudinal data. Curr Dir Psychol Sci. 2017 Feb 08;26(1):10–15. doi: 10.1177/0963721416666518. [DOI] [Google Scholar]
- 72.Burke L, Shiffman S, Music E, Styn MA, Kriska A, Smailagic A, Siewiorek D, Ewing LJ, Chasens E, French B, Mancino J, Mendez D, Strollo P, Rathbun SL. Ecological momentary assessment in behavioral research: addressing technological and human participant challenges. J Med Internet Res. 2017 Dec 15;19(3):e77. doi: 10.2196/jmir.7138. http://www.jmir.org/2017/3/e77/ v19i3e77 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 73.Kelders S, Van Gemert-Pijnen JE, Werkman A, Nijland N, Seydel ER. Effectiveness of a Web-based intervention aimed at healthy dietary and physical activity behavior: a randomized controlled trial about users and usage. J Med Internet Res. 2011 Apr 14;13(2):e32. doi: 10.2196/jmir.1624. http://www.jmir.org/2011/2/e32/ v13i2e32 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 74.Danaher BG, Boles SM, Akers L, Gordon JS, Severson HH. Defining participant exposure measures in Web-based health behavior change programs. J Med Internet Res. 2006 Aug 30;8(3):e15. doi: 10.2196/jmir.8.3.e15. http://www.jmir.org/2006/3/e15/ v8i3e15 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 75.Graham ML, Strawderman MS, Demment M, Olson CM. Does usage of an eHealth intervention reduce the risk of excessive gestational weight gain? Secondary analysis from a randomized controlled trial. J Med Internet Res. 2017 Dec 09;19(1):e6. doi: 10.2196/jmir.6644. http://www.jmir.org/2017/1/e6/ v19i1e6 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 76.Mattila E, Lappalainen R, Välkkynen P, Sairanen E, Lappalainen P, Karhunen L, Peuhkuri K, Korpela R, Kolehmainen M, Ermes M. Usage and dose response of a mobile acceptance and commitment therapy app: secondary analysis of the intervention arm of a randomized controlled trial. JMIR Mhealth Uhealth. 2016 Jul 28;4(3):e90. doi: 10.2196/mhealth.5241. http://mhealth.jmir.org/2016/3/e90/ v4i3e90 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 77.Taki S, Lymer S, Russell CG, Campbell K, Laws R, Ong K, Elliott R, Denney-Wilson E. Assessing user engagement of an mHealth intervention: development and implementation of the growing healthy app engagement index. JMIR Mhealth Uhealth. 2017 Jun 29;5(6):e89. doi: 10.2196/mhealth.7236. http://mhealth.jmir.org/2017/6/e89/ v5i6e89 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 78.McCallum C, Rooksby J, Gray CM. Evaluating the impact of physical activity apps and wearables: interdisciplinary review. JMIR Mhealth Uhealth. 2018 Mar 23;6(3):e58. doi: 10.2196/mhealth.9054. http://mhealth.jmir.org/2018/3/e58/ v6i3e58 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 79.Baltierra NB, Muessig KE, Pike EC, LeGrand S, Bull SS, Hightow-Weidman LB. More than just tracking time: Complex measures of user engagement with an internet-based health promotion intervention. J Biomed Inform. 2016 Feb;59:299–307. doi: 10.1016/j.jbi.2015.12.015. https://linkinghub.elsevier.com/retrieve/pii/S1532-0464(15)00295-6 .S1532-0464(15)00295-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 80.Barisic A, Leatherdale ST, Kreiger N. Importance of frequency, intensity, time and type (FITT) in physical activity assessment for epidemiological research. Can J Public Health. 2011;102(3):174–5. doi: 10.1007/BF03404889. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 81.Funk KL, Stevens VJ, Appel LJ, Bauck A, Brantley PJ, Champagne CM, Coughlin J, Dalcin AT, Harvey-Berino J, Hollis JF, Jerome GJ, Kennedy BM, Lien LF, Myers VH, Samuel-Hodge C, Svetkey LP, Vollmer WM. Associations of internet website use with weight change in a long-term weight loss maintenance program. J Med Internet Res. 2010 Jul 27;12(3):e29. doi: 10.2196/jmir.1504. http://www.jmir.org/2010/3/e29/ v12i3e29 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 82.Arden-Close EJ, Smith E, Bradbury K, Morrison L, Dennison L, Michaelides D, Yardley L. A visualization tool to analyse usage of web-based interventions: the example of positive online weight reduction (POWeR) JMIR Hum Factors. 2015 May 19;2(1):e8. doi: 10.2196/humanfactors.4310. http://humanfactors.jmir.org/2015/1/e8/ v2i1e8 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 83.Manwaring JL, Bryson SW, Goldschmidt AB, Winzelberg AJ, Luce KH, Cunning D, Wilfley DE, Taylor CB. Do adherence variables predict outcome in an online program for the prevention of eating disorders? J Consult Clin Psychol. 2008 Apr;76(2):341–6. doi: 10.1037/0022-006X.76.2.341.2008-03290-015 [DOI] [PubMed] [Google Scholar]
- 84.van Mierlo T. The 1% rule in four digital health social networks: an observational study. J Med Internet Res. 2014 Feb 04;16(2):e33. doi: 10.2196/jmir.2966. http://www.jmir.org/2014/2/e33/ v16i2e33 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 85.Forbes C, Blanchard CM, Mummery WK, Courneya KS. Feasibility and preliminary efficacy of an online intervention to increase physical activity in Nova Scotian cancer survivors: a randomized controlled trial. JMIR Cancer. 2015 Nov 23;1(2):e12. doi: 10.2196/cancer.4586. http://cancer.jmir.org/2015/2/e12/ v1i2e12 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 86.Short CE, Rebar A, James EL, Duncan MJ, Courneya KS, Plotnikoff RC, Crutzen R, Vandelanotte C. How do different delivery schedules of tailored web-based physical activity advice for breast cancer survivors influence intervention use and efficacy? J Cancer Surviv. 2017 Feb;11(1):80–91. doi: 10.1007/s11764-016-0565-0.10.1007/s11764-016-0565-0 [DOI] [PubMed] [Google Scholar]
- 87.Hales SB, Davidson C, Turner-McGrievy GM. Varying social media post types differentially impacts engagement in a behavioral weight loss intervention. Transl Behav Med. 2014 Dec;4(4):355–62. doi: 10.1007/s13142-014-0274-z. http://europepmc.org/abstract/MED/25584084 .274 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 88.Couper MP, Alexander GL, Zhang N, Little RJ, Maddy N, Nowak MA, McClure JB, Calvi JJ, Rolnick SJ, Stopponi MA, Cole JC. Engagement and retention: measuring breadth and depth of participant use of an online intervention. J Med Internet Res. 2010 Nov 18;12(4):e52. doi: 10.2196/jmir.1430. http://www.jmir.org/2010/4/e52/ v12i4e52 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 89.Mohr DC, Duffecy J, Ho J, Kwasny M, Cai X, Burns MN, Begale M. A randomized controlled trial evaluating a manualized TeleCoaching protocol for improving adherence to a web-based intervention for the treatment of depression. PLoS One. 2013;8(8):e70086. doi: 10.1371/journal.pone.0070086. http://dx.plos.org/10.1371/journal.pone.0070086 .PONE-D-13-05935 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 90.Cussler EC, Teixeira PJ, Going SB, Houtkooper LB, Metcalfe LL, Blew RM, Ricketts JR, Lohman J, Stanford VA, Lohman TG. Maintenance of weight loss in overweight middle-aged women through the Internet. Obesity (Silver Spring) 2008 May;16(5):1052–60. doi: 10.1038/oby.2008.19.oby200819 [DOI] [PubMed] [Google Scholar]
- 91.Glasgow RE, Christiansen SM, Kurz D, King DK, Woolley T, Faber AJ, Estabrooks PA, Strycker L, Toobert D, Dickman J. Engagement in a diabetes self-management website: usage patterns and generalizability of program use. J Med Internet Res. 2011 Jan 25;13(1):e9. doi: 10.2196/jmir.1391. http://www.jmir.org/2011/1/e9/ v13i1e9 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 92.Davies C, Corry K, Van Itallie A, Vandelanotte C, Caperchione C, Mummery WK. Prospective associations between intervention components and website engagement in a publicly available physical activity website: the case of 10,000 Steps Australia. J Med Internet Res. 2012 Jan 11;14(1):e4. doi: 10.2196/jmir.1792. http://www.jmir.org/2012/1/e4/ v14i1e4 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 93.Kim JY, Wineinger NE, Taitel M, Radin JM, Akinbosoye O, Jiang J, Nikzad N, Orr G, Topol E, Steinhubl S. Self-monitoring utilization patterns among individuals in an incentivized program for healthy behaviors. J Med Internet Res. 2016 Dec 17;18(11):e292. doi: 10.2196/jmir.6371. http://www.jmir.org/2016/11/e292/ v18i11e292 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 94.Crutzen R, Roosjen JL, Poelman J. Using Google Analytics as a process evaluation method for Internet-delivered interventions: an example on sexual health. Health Promot Int. 2013 Mar;28(1):36–42. doi: 10.1093/heapro/das008.das008 [DOI] [PubMed] [Google Scholar]
- 95.Scherer EA, Ben-Zeev D, Li Z, Kane JM. Analyzing mHealth Engagement: joint models for intensively collected user engagement data. JMIR Mhealth Uhealth. 2017 Jan 12;5(1):e1. doi: 10.2196/mhealth.6474. http://mhealth.jmir.org/2017/1/e1/ v5i1e1 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 96.Fan W, Bifet A. Mining big data. SIGKDD Explor Newsl. 2013 Apr 30;14(2):1–5. doi: 10.1145/2481244.2481246. [DOI] [Google Scholar]
- 97.Cugelman B, Thelwall M, Dawes P. Online interventions for social marketing health behavior change campaigns: a meta-analysis of psychological architectures and adherence factors. J Med Internet Res. 2011 Feb 14;13(1):e17. doi: 10.2196/jmir.1367. http://www.jmir.org/2011/1/e17/ v13i1e17 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 98.Donkin L, Christensen H, Naismith SL, Neal B, Hickie IB, Glozier N. A systematic review of the impact of adherence on the effectiveness of e-therapies. J Med Internet Res. 2011 Aug 05;13(3):e52. doi: 10.2196/jmir.1772. http://www.jmir.org/2011/3/e52/ v13i3e52 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 99.Kidwell K, Hyde LW. Adaptive interventions and SMART designs: application to child behavior research in a community setting. Am J Eval. 2016 Sep;37(3):344–63. doi: 10.1177/1098214015617013. http://europepmc.org/abstract/MED/28239254 . [DOI] [PMC free article] [PubMed] [Google Scholar]
- 100.Collins LM, Murphy SA, Bierman KL. A conceptual framework for adaptive preventive interventions. Prev Sci. 2004 Sep;5(3):185–96. doi: 10.1023/B:PREV.0000037641.26017.00. http://europepmc.org/abstract/MED/15470938 . [DOI] [PMC free article] [PubMed] [Google Scholar]
- 101.Althoff T, Sosič R, Hicks JL, King AC, Delp SL, Leskovec J. Large-scale physical activity data reveal worldwide activity inequality. Nature. 2017 Jul 20;547(7663):336–39. doi: 10.1038/nature23018. http://europepmc.org/abstract/MED/28693034 .nature23018 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 102.Mohr D, Schueller SM, Riley WT, Brown CH, Cuijpers P, Duan N, Kwasny MJ, Stiles-Shields C, Cheung K. Trials of intervention principles: evaluation methods for evolving behavioral intervention technologies. J Med Internet Res. 2015 Jul 08;17(7):e166. doi: 10.2196/jmir.4391. http://www.jmir.org/2015/7/e166/ v17i7e166 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 103.Jacobs M, Graham Al. Iterative development and evaluation methods of mHealth behavior change interventions. Curr Opin Psychol. 2016 Jun;9:33–7. doi: 10.1016/j.copsyc.2015.09.001. [DOI] [Google Scholar]
- 104.Vandelanotte C, Duncan MJ, Kolt GS, Caperchione CM, Savage TN, Van Itallie A, Oldmeadow C, Alley SJ, Tague R, Maeder AJ, Rosenkranz RR, Mummery WK. More real-world trials are needed to establish if web-based physical activity interventions are effective. Br J Sports Med. 2018 Jul 03; doi: 10.1136/bjsports-2018-099437. Epub ahead of print.bjsports-2018-099437 [DOI] [PubMed] [Google Scholar]
- 105.Collins L, Murphy SA, Strecher V. The multiphase optimization strategy (MOST) and the sequential multiple assignment randomized trial (SMART): new methods for more potent eHealth interventions. Am J Prev Med. 2007 May;32(5 Suppl):S112–8. doi: 10.1016/j.amepre.2007.01.022. http://europepmc.org/abstract/MED/17466815 .S0749-3797(07)00051-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 106.Shaw R, Steinberg DM, Zullig LL, Bosworth HB, Johnson CM, Davis LL. mHealth interventions for weight loss: a guide for achieving treatment fidelity. J Am Med Inform Assoc. 2014;21(6):959–63. doi: 10.1136/amiajnl-2013-002610. http://europepmc.org/abstract/MED/24853065 .amiajnl-2013-002610 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 107.Cornet VP, Holden RJ. Systematic review of smartphone-based passive sensing for health and wellbeing. J Biomed Inform. 2018 Jan;77:120–32. doi: 10.1016/j.jbi.2017.12.008.S1532-0464(17)30278-2 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 108.Barkhuus L, Dey A. Location-based services for mobile telephony: a study of users' privacy concerns. Proceedings of the 9th IFIP TC13 International Conference on Human-Computer Interaction (INTERACT 2003); September 1-5, 2003; Zürich, Switzerland. 2003. [Google Scholar]
- 109.Case MA, Burwick HA, Volpp KG, Patel MS. Accuracy of smartphone applications and wearable devices for tracking physical activity data. J Am Med Assoc. 2015 Feb 10;313(6):625–6. doi: 10.1001/jama.2014.17841.2108876 [DOI] [PubMed] [Google Scholar]
- 110.Gordon BA, Bruce L, Benson AC. Physical activity intensity can be accurately monitored by smartphone global positioning system 'app'. Eur J Sport Sci. 2016 Aug;16(5):624–31. doi: 10.1080/17461391.2015.1105299. [DOI] [PubMed] [Google Scholar]
- 111.Maher C, Lewis LK, Ferrar K, Marshall S, De Bourdeaudhuij I, Vandelanotte C. Are health behavior change interventions that use online social networks effective? A systematic review. J Med Internet Res. 2014 Feb 14;16(2):e40. doi: 10.2196/jmir.2952. http://www.jmir.org/2014/2/e40/ v16i2e40 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 112.Hale TM, Pathipati AS, Zan S, Jethwani K. Representation of health conditions on Facebook: content analysis and evaluation of user engagement. J Med Internet Res. 2014 Aug 04;16(8):e182. doi: 10.2196/jmir.3275. http://www.jmir.org/2014/8/e182/ v16i8e182 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 113.D'heer E, Verdegem P. What social media data mean for audience studies: a multidimensional investigation of Twitter use during a current affairs TV programme. Inform Comm Soc. 2014 Sep 22;18(2):221–34. doi: 10.1080/1369118X.2014.952318. [DOI] [Google Scholar]
- 114.Ryan J, Edney S, Maher C. Engagement, compliance and retention with a gamified online social networking physical activity intervention. Transl Behav Med. 2017 Dec;7(4):702–8. doi: 10.1007/s13142-017-0499-8. http://europepmc.org/abstract/MED/28523603 .10.1007/s13142-017-0499-8 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 115.Bouman M, Drossaert CH, Pieterse ME. Mark My Words: the design of an innovative methodology to detect and analyze interpersonal health conversations in web and social media. J Technol Hum Serv. 2012 Jul;30(3-4):312–26. doi: 10.1080/15228835.2012.743394. [DOI] [Google Scholar]
- 116.Delahaye Paine K. Measure What Matters: Online Tools for Understanding Customers, Social Media, Engagement, and Key Relationships. United Kingdom: John Wiley and Sons Ltd; 2018. [Google Scholar]
- 117.Murdough C. Social Media Measurement. J Interact Advert. 2009 Sep;10(1):94–99. doi: 10.1080/15252019.2009.10722165. [DOI] [Google Scholar]
- 118.Peters GY, Crutzen R. Pragmatic nihilism: how a Theory of Nothing can help health psychology progress. Health Psychol Rev. 2017 Dec;11(2):103–21. doi: 10.1080/17437199.2017.1284015. [DOI] [PubMed] [Google Scholar]
- 119.Dirican A, Göktürk M. Psychophysiological measures of human cognitive states applied in human computer interaction. Procedia Comp Sci. 2011;3:1361–67. doi: 10.1016/j.procs.2011.01.016. [DOI] [Google Scholar]
- 120.Cowley B, Filetti M, Lukander K, Torniainen J, Henelius A, Ahonen L, Barral O, Kosunen I, Valtonen T, Huotilainen M, Ravaja N, Jacucci G. The Psychophysiology Primer: a guide to methods and a broad review with a focus on human-computer interaction. Found Trends Hum Comp Interact. 2015;9(3-4):151–308. [Google Scholar]
- 121.Ganglbauer E, Schrammel J, Deutsch S, Tscheligi M. Applying psychophysiological methods for measuring user experience: possibilities, challenges and feasibility. User Experience Evaluation Methods in Product Development (UXEM'09) in conjunction with Interact'09; August 24-28, 2009; Sweden. 2009. [Google Scholar]
- 122.Harmat L, de Manzano Ö, Theorell T, Högman L, Fischer H, Ullén F. Physiological correlates of the flow experience during computer game playing. Int J Psychophysiol. 2015 Jul;97(1):1–7. doi: 10.1016/j.ijpsycho.2015.05.001.S0167-8760(15)00168-3 [DOI] [PubMed] [Google Scholar]
- 123.Burns C, Fairclough SH. Use of auditory event-related potentials to measure immersion during a computer game. Int J Hum Comput Stud. 2015 Jan;73(Supplement C):107–14. doi: 10.1016/j.ijhcs.2014.09.002. [DOI] [Google Scholar]
- 124.Martey R, Kenski K, Folkestad J, Feldman L, Gordis E, Shaw A, Stromer-Galley J, Clegg B, Zhang H, Kaufman N, Rabkin AN, Shaikh S, Strzalkowski T. Measuring Game Engagement. Simul Gaming. 2014 Nov 04;45(4-5):528–47. doi: 10.1177/1046878114553575. [DOI] [Google Scholar]
- 125.Dhamija S, Boult TE. Automated mood-aware engagement prediction. Seventh International Conference on Affective Computing and Intelligent Interaction (ACII); October 23-26, 2017; San Antonio, TX, USA. 2017. [Google Scholar]
- 126.Huynh S, Kim S, Ko J, Balan RK, Lee Y. EngageMon: multi-modal engagement sensing for mobile games. Proc ACM Interact Mob Wearable Ubiquitous Technol. 2018 Mar 26;2(1):1–27. doi: 10.1145/3191745. [DOI] [Google Scholar]
- 127.Bevilacqua F, Engström H, Backlund P. Changes in heart rate and facial actions during a gaming session with provoked boredom and stress. Entertainment Computing. 2018 Jan;24:10–20. doi: 10.1016/j.entcom.2017.10.004. [DOI] [Google Scholar]
- 128.Reinecke K, Cordes M, Lerch C, Koutsandréou F, Schubert M, Weiss M, Baumeister J. From lab to field conditions: a pilot study on EEG methodology in applied sports sciences. Appl Psychophysiol Biofeedback. 2011 Dec;36(4):265–71. doi: 10.1007/s10484-011-9166-x. [DOI] [PubMed] [Google Scholar]
- 129.Kayser J, Tenke CE. Issues and considerations for using the scalp surface Laplacian in EEG/ERP research: A tutorial review. Int J Psychophysiol. 2015 Sep;97(3):189–209. doi: 10.1016/j.ijpsycho.2015.04.012. http://europepmc.org/abstract/MED/25920962 .S0167-8760(15)00160-9 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 130.Delorme A, Makeig S. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J Neurosci Methods. 2004 Mar 15;134(1):9–21. doi: 10.1016/j.jneumeth.2003.10.009.S0165027003003479 [DOI] [PubMed] [Google Scholar]
- 131.Luck SJ. An Introduction to the Event-Related Potential Technique. Cambridge, MA: MIT Press; 2005. [Google Scholar]
- 132.Leiker A, Miller M, Brewer L, Nelson M, Siow M, Lohse K. The relationship between engagement and neurophysiological measures of attention in motion-controlled video games: a randomized controlled trial. JMIR Serious Games. 2016 Apr 21;4(1):e4. doi: 10.2196/games.5460. http://games.jmir.org/2016/1/e4/ v4i1e4 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 133.Dyke FB, Leiker AM, Grand KF, Godwin MM, Thompson AG, Rietschel JC, McDonald CG, Miller MW. The efficacy of auditory probes in indexing cognitive workload is dependent on stimulus complexity. Int J Psychophysiol. 2015 Jan;95(1):56–62. doi: 10.1016/j.ijpsycho.2014.12.008.S0167-8760(14)01670-5 [DOI] [PubMed] [Google Scholar]
- 134.Takeda Y, Okuma T, Kimura M, Kurata T, Takenaka T, Iwaki S. Electrophysiological measurement of interest during walking in a simulated environment. Int J Psychophysiol. 2014 Sep;93(3):363–70. doi: 10.1016/j.ijpsycho.2014.05.012.S0167-8760(14)00130-5 [DOI] [PubMed] [Google Scholar]
- 135.Rayner K. Eye movements in reading and information processing: 20 years of research. Psychol Bull. 1998 Nov;124(3):372–422. doi: 10.1037/0033-2909.124.3.372. [DOI] [PubMed] [Google Scholar]
- 136.Hermans D, Vansteenwegen D, Eelen P. Eye movement registration as a continuous index of attention deployment: data from a group of spider anxious students. Cognition & Emotion. 1999 Jul;13(4):419–34. doi: 10.1080/026999399379249. [DOI] [Google Scholar]
- 137.Jennett C, Cox Al, Cairns P, Dhoparee S, Epps A, Tijs T, Walton A. Measuring and defining the experience of immersion in games. Int J Hum Comput Stud. 2008 Sep;66(9):641–61. doi: 10.1016/j.ijhcs.2008.04.004. [DOI] [Google Scholar]
- 138.Alley S, Jennings C, Persaud N, Plotnikoff RC, Horsley M, Vandelanotte C. Do personally tailored videos in a web-based physical activity intervention lead to higher attention and recall? An eye-tracking study. Front Public Health. 2014;2:13. doi: 10.3389/fpubh.2014.00013. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 139.Crutzen R, Cyr D, Larios H, Ruiter RA, de Vries NK. Social presence and use of internet-delivered interventions: a multi-method approach. PLoS One. 2013;8(2):e57067. doi: 10.1371/journal.pone.0057067. http://dx.plos.org/10.1371/journal.pone.0057067 .PONE-D-12-29601 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 140.Nacke L, Grimshaw MN, Lindley CA. More than a feeling: measurement of sonic user experience and psychophysiology in a first-person shooter game. Interact Comput. 2010 Sep;22(5):336–43. doi: 10.1016/j.intcom.2010.04.005. [DOI] [Google Scholar]
Associated Data
Supplementary Materials
- Initial data extraction table.
- Self-report questionnaires for measuring microlevel engagement.
- Example items in self-report questionnaires for measuring microlevel engagement.