Internet Interventions. 2019 Aug 23;18:100267. doi: 10.1016/j.invent.2019.100267

The peril of self-reported adherence in digital interventions: A brief example

Jayde A.M. Flett, Benjamin D. Fletcher, Benjamin C. Riordan, Tess Patterson, Harlene Hayne, Tamlin S. Conner
PMCID: PMC6926264  PMID: 31890620

Abstract

Adherence is an important predictor of intervention outcomes, but not all measures of adherence are created equal. Here, we analyzed whether there was a discrepancy between self-reported adherence and objective adherence in a randomised, controlled trial of digital mindfulness meditation. A sample of 174 young adult undergraduate university students trialled either an app-based or email-based mindfulness meditation program (or an app-based attention control). Participants' adherence (number of sessions completed) and mental health were self-reported. Objective adherence data were provided by the owners of the digital mindfulness programs. We found evidence of inflated self-reported adherence to the app-based intervention and argue that the inflation was not explained by social desirability biases because participants were aware we would have access to objective data and no remuneration was tied to adherence. We also comment on the different conclusions we would have drawn about the effectiveness of the digital interventions on mental health had we used the self-reported adherence data rather than the objective adherence data. We use this example to suggest that it may be perilous to rely on self-reported measures of adherence when assessing the effectiveness of digital interventions.

Abbreviations: RCT, randomized, controlled trial; M, mean; SD, standard deviation

Keywords: Digital interventions, Applications, Mobile phones, Adherence

Highlights

  • Self-reported adherence is not an accurate representation of objective digital intervention adherence.

  • App-based intervention use, but not email-based intervention use, was associated with over-reporting of adherence.

  • Self-reported adherence using experience sampling methods was not more accurate than retrospective self-reported adherence.

1. Introduction

Between 2009 and 2015, yearly publications on e-mental health interventions trebled (Firth et al., 2016), but meta-analytic reviews reveal that self-guided digital interventions often have only modest effects on mental health (Andersson and Cuijpers, 2009; Cuijpers et al., 2011; Spijkerman et al., 2016). One explanation for these modest effects might be that adherence to digital interventions is low (Cuijpers et al., 2011; Eysenbach, 2005). Adherence refers to whether individuals access the content and use it in the manner in which it was designed to be optimally effective (Christensen et al., 2009; Donkin et al., 2011).

Regular practice is considered a key component of mindfulness-based interventions if they are to be optimally effective (Segal et al., 2013), and adherence to practice guidelines is correlated with intervention outcomes (meta-analysis: k = 28, r = 0.264, p < .001; Parsons et al., 2017). Likewise, adherence in self-guided iCBT is associated with lower depressive symptoms and stronger responsiveness to treatment (Karyotaki et al., 2017). Outside of digital interventions, adherence is a strong predictor of intervention outcomes, particularly when the health issue is less serious, chronic, non-medicated, in a pediatric population, or where outcomes are not disease specific (DiMatteo et al., 2002). Counterintuitively, self-reported adherence is also a strong predictor of intervention outcomes (DiMatteo et al., 2002). However, few digital intervention trials report adherence rates, and even fewer report how adherence relates to intervention outcomes (Brown et al., 2016; Donkin et al., 2011).

To date, the majority of research on adherence in digital interventions has focused on operationalizing adherence and identifying predictors of adherence (see: Christensen et al., 2009), but the complexity of adherence is often neglected (see: Sieverink et al., 2017 for a systematic review). Adherence has been operationalized in a number of ways (for practical reasons, metrics such as sessions completed, days used, logins, or a combination of these are often used; Donkin et al., 2011; Donkin et al., 2013; Sieverink et al., 2017), but these measures often fail to capture the quality of engagement with the intervention (e.g., whether skills were acquired), nor do they distinguish between observed adherence (how much the individual experienced the content of the intervention) and prescribed adherence (how much the individual experienced the intervention as recommended or intended; Kelders et al., 2012; Sieverink et al., 2017). Further, few digital interventions report or justify the level of adherence required to make the intervention work, instead relying on a ‘more-is-more’ approach (Sieverink et al., 2017) that presupposes a linear dose-response relationship. However, a linear dose-response relationship between adherence and outcomes is not always the case (e.g., Donkin et al., 2013; Blanck et al., 2018). Researchers have identified a host of additional factors that influence adherence, including persuasive intervention design (Kelders et al., 2012), the amount of support provided (Andersson and Cuijpers, 2009; Christensen et al., 2009), and participant characteristics (Christensen et al., 2009).
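To make these operationalizations concrete, the following minimal sketch (in Python, with invented column names and toy values rather than any program's actual export format) shows how the same raw usage log yields different adherence metrics:

```python
import pandas as pd

# Hypothetical usage log exported by a program owner; columns are illustrative.
log = pd.DataFrame({
    "user_id":   [1, 1, 1, 2, 2],
    "timestamp": pd.to_datetime(["2019-03-01 08:00", "2019-03-01 20:00",
                                 "2019-03-03 08:10", "2019-03-02 09:00",
                                 "2019-03-02 09:30"]),
    "completed": [True, True, False, True, True],  # session finished vs abandoned
})

# Three common operationalisations of adherence from the same log:
metrics = log.groupby("user_id").agg(
    logins=("timestamp", "size"),                            # every access counts
    sessions_completed=("completed", "sum"),                 # finished sessions only
    days_used=("timestamp", lambda t: t.dt.date.nunique()),  # distinct active days
)
print(metrics)
```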

Another important but overlooked issue is the accuracy of self-reported adherence in digital interventions. Although self-reported adherence data are easily collected, they may be subject to the biases that affect all self-report data (e.g., recall bias and response bias; Kimberlin and Winterstein, 2008; Schwarz, 1999), which may lead to inaccurate conclusions about the effectiveness of the intervention. Researchers can mitigate recall and response biases in self-report data by using research designs such as experience sampling or daily diaries to reduce recall time (Schwarz, 2012), or by nonjudgmentally acknowledging the normality of non-adherence² and anonymizing online reports of sensitive topics to reduce socially desirable responding (Gnambs and Kaspar, 2015). A more direct approach is to use objective measures of adherence. In contrast to many other interventions, objective measures of adherence are readily available in digital interventions in the form of the number of logins, sessions completed, or minutes completed (Donkin et al., 2011) and can be used to measure adherence differences across intervention platforms (Morrison et al., 2018). In the current short report, we demonstrate the peril of relying on self-reported adherence by comparing the discrepancies between self-reported and objectively gathered adherence data in web-based and app-based digital mindfulness meditation interventions.

2. Method

2.1. Design

This study was a protocol replication of an earlier study (Flett et al., 2018) with a few minor adaptations. The study was a 40-day randomised, controlled trial (RCT) comparing the use of one of two mindfulness meditation programs (or an attention control) on changes in mental health (University of Otago ethics committee #D15/063). A convenience sample of 174 undergraduate university students (M = 19.76 years, SD = 2.56 years, 79.9% female, 71.8% New Zealand European/Pākehā)³ was randomly assigned to use either an app-based mindfulness program (Headspace, n = 65), an email-based mindfulness program (10 Minute Mind, n = 51), or an app-based attention control program (Evernote, n = 58). We recommended that participants use their program for 10 min per day; this period was equivalent to one session of the intervention and was consistent with previous digital mindfulness research (e.g., Flett et al., 2018; Howells et al., 2016). We measured self-reported and objective adherence over two time periods: 1) Prescribed adherence: a 10-day period where adherence was requested each day, and 2) Discretionary adherence: a 30-day period where adherence was at the discretion of the individual (mimicking more realistic or natural uptake). Both mindfulness interventions (Headspace and 10 Minute Mind) involved similar active therapeutic components (e.g., they introduced mindfulness through a series of brief formal mindfulness practices such as mindful breathing [using the breath as an attentional object of intense focus] and body scanning [systematically focusing attention on certain parts of the body]). Access to the interventions followed a hybrid structure whereby the interventions involved fixed core content with additional optional components (Sieverink et al., 2017); app users had to complete the first 10 sessions consecutively in order to ‘unlock’ other intervention content, whereas email users were emailed new sessions each day but had access to brief mindfulness “top up” sessions (optional 3-min meditations). For email-based participants, all intervention sessions were 10 min long, whereas for app-based participants the first 10 sessions were 10 min long, but longer sessions (up to 45 min long) were available during the 30-day Discretionary adherence period.
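For illustration, here is a minimal sketch of simple random assignment to the three arms; the paper does not describe its randomisation procedure, so this is an assumption rather than the study's actual method:

```python
import random

# The three arms described in the Design section.
conditions = ["Headspace (app)", "10 Minute Mind (email)", "Evernote (control)"]
participants = [f"P{i:03d}" for i in range(1, 175)]  # 174 enrolled participants

random.seed(42)  # fixed seed so the (hypothetical) allocation is reproducible
allocation = {pid: random.choice(conditions) for pid in participants}
print(allocation["P001"])
```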

2.2. Procedure and measures

All participants reported their mental health (depressive symptoms, anxiety, stress, flourishing, resilience, mindfulness, and adjustment to college; measures described in Flett et al., 2018) on Day 0 in the research lab (baseline; demographic and personality characteristics were also measured at baseline using the NEO-FFI 60, Costa and MacCrae, 1992) and online on approximately Day 10 and Day 40. Self-reported and objective adherence were operationalized as the number of intervention sessions completed; this was a pragmatic operationalisation based on the available objective use data. Self-reported adherence was measured daily during the first 10 days, as well as retrospectively on Day 10 and Day 40 with a single-item survey question (“how many times did you access the app in the preceding study period”). Objective adherence data were provided by the owners of the mindfulness programs, who gave us the number of times each user completed a session using their mindfulness program. To reduce the likelihood of socially desirable responding, adherence was not tied to any form of remuneration, and participants were aware that we would be provided with their objective adherence data. Participants received a small portion of course credit tied to survey completion (not app use). Except where specified, we only present results for the mindfulness conditions because we did not have access to objective adherence data for the app-based attention control program.
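As a sketch of how the three adherence counts line up per participant, the snippet below assembles daily-summed, retrospective, and objective session counts, assuming hypothetical variable names and toy values (not the study's actual data):

```python
import pandas as pd

# Hypothetical per-participant survey export; column names are illustrative.
df = pd.DataFrame({
    "id": [1, 2, 3],
    "retro_d0_10": [9, 7, 10],       # retrospective count reported on Day 10
    "objective_d0_10": [7, 7, 6],    # sessions logged by the program owner
})
daily = pd.DataFrame({               # one row per participant per day
    "id": [1] * 10 + [2] * 10 + [3] * 10,
    "sessions_today": [1] * 9 + [0] + [1] * 7 + [0] * 3 + [1] * 10,
})

# Daily self-reported adherence = sum of the ten daily reports.
daily_total = daily.groupby("id")["sessions_today"].sum().rename("daily_d0_10")
df = df.join(daily_total, on="id")

# Discrepancy: self-report minus objective (positive = over-reporting).
df["retro_discrepancy"] = df["retro_d0_10"] - df["objective_d0_10"]
df["daily_discrepancy"] = df["daily_d0_10"] - df["objective_d0_10"]
print(df)
```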

3. Results

Discrepancies between self-reported and objective adherence were calculated by subtracting objective adherence from self-reported adherence (self-report − objective = discrepancy) during the 10 days of prescribed adherence (discrepancies were calculated for both retrospective and daily self-reported adherence) and for the 30 days of discretionary adherence (retrospective only). Positive values indicated over-reporting of adherence and negative values indicated under-reporting of adherence. Adherence discrepancies were assessed using two-way mixed ANOVAs, with measurement type (self-report vs. objective) as the within-subjects factor and platform (app-based vs. email-based) as the between-subjects factor, with Bonferroni adjustment. In Supplementary Tables 2–9 we present descriptive statistics and tests comparing all major outcomes over time within conditions (S Table 2) and between conditions (S Table 3), moderation by adherence (S Tables 4–7), and correlations between adherence measures and demographic, personality, and outcome measures by condition (S Tables 8–9).
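A minimal sketch of this analysis structure, using the pingouin library and invented toy data (the study's actual analysis software is not reported, so this is an assumption):

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per participant per measurement type.
# Column names and values are illustrative, not the study's actual variables.
long_df = pd.DataFrame({
    "id":       [1, 1, 2, 2, 3, 3, 4, 4],
    "platform": ["app", "app", "app", "app", "email", "email", "email", "email"],
    "measure":  ["self_report", "objective"] * 4,
    "sessions": [9, 7, 10, 6, 7, 7, 8, 8],
})

# Two-way mixed ANOVA: measurement type within subjects, platform between.
aov = pg.mixed_anova(data=long_df, dv="sessions",
                     within="measure", subject="id", between="platform")
print(aov.round(3))  # the measure x platform interaction tests differential over-reporting
```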

3.1. Discrepancies between self-reported and objective adherence

As shown in Table 1, the discrepancy between self-reported and objective adherence differed by digital platform (app vs. email), particularly over the longer time period. Only app users showed significant discrepancies between self-reported and objective measures of adherence. App users self-reported completing 1.54 more sessions than they objectively did during the 10-day prescribed use period and 9.13 more sessions than they objectively did during the discretionary 30-day time period (D11–40). By contrast, for email-based intervention users, the discrepancy was negligible during both the 10-day prescribed adherence period (over-reported by about 0.3 sessions for both retrospective and daily self-report, not significant) and the 30-day discretionary adherence period (over-reported by 3.00 sessions, not significant). In fact, app users over-reported their adherence by over three times as much as email users when adherence was discretionary across 30 days (App: M = 9.13, SD = 7.84 vs. Email: M = 2.58, SD = 11.40) and almost 35 times as much overall (App: M = 8.62, SD = 9.24 vs. Email: M = −0.25, SD = 13.91). Furthermore, during the 10-day prescribed adherence period, there were no differences between daily and retrospective self-reports of adherence for the sample overall (t(171) = 0.13, p = 0.899) or within conditions (all ps > 0.581; S Table 1), suggesting that daily reports of adherence were no more accurate than retrospective recall at the end of the 10 days.
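The daily-versus-retrospective comparison reported above is a paired-samples t-test; a minimal sketch with toy values (not the study data):

```python
import numpy as np
from scipy import stats

# Hypothetical per-participant totals over Days 0-10: daily reports summed,
# versus the single retrospective report made on Day 10.
daily_total = np.array([9, 7, 10, 8, 6])
retro_total = np.array([9, 8, 10, 7, 6])

# Paired-samples t-test: were daily reports systematically different
# from retrospective reports of the same period?
t, p = stats.ttest_rel(daily_total, retro_total)
print(f"t = {t:.2f}, p = {p:.3f}")
```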

Table 1.

Means (M) and standard deviations (SD) of self-reported and objective adherence when adherence was prescribed (Days 0–10, daily and retrospective), discretionary (Days 11–40), and overall (Days 0–40) for all conditions (self-report descriptive statistics only for controls). F tests indicate the results of the mixed ANOVA.

| Adherence period | n | Self-report M (SD) | Objective M (SD) | Over-reportᵃ % | Over-reportᵃ M (SD) | Self-report vs objective | App-based vs email-based over-reportingᵇ |
|---|---|---|---|---|---|---|---|
| App- and email-combined | | | | | | | |
| Prescribed D0–10 retro | 116 | 7.87 (2.14) | 6.87 (3.53) | 14.6 | 1.00 (3.44) | F(1, 114) = 8.49, p = 0.004 | F(1, 114) = 3.71, p = 0.057 |
| Prescribed D0–10 daily | 116 | 7.84 (2.50) | 6.87 (3.53) | 14.1 | 0.97 (3.93) | F(1, 114) = 5.99, p = 0.016 | F(1, 114) = 2.70, p = 0.103 |
| Discretionary D11–40 | 89 | 12.42 (8.29) | 5.83 (8.79) | 49.8 | 6.58 (9.83) | F(1, 87) = 36.02, p < 0.001 | F(1, 87) = 9.21, p = 0.003 |
| Overall D0–40 retro | 116 | 17.48 (9.55) | 12.73 (10.87) | 37.3 | 4.75 (11.81) | F(1, 114) = 16.78, p < 0.001 | F(1, 114) = 18.71, p < 0.001 |
| Overall D0–40 dailyᶜ | 116 | 17.45 (9.71) | 12.73 (10.87) | 37.1 | 4.72 (12.30) | F(1, 114) = 15.04, p < 0.001 | F(1, 114) = 16.93, p < 0.001 |
| App-user | | | | | | | |
| Prescribed D0–10 retro | 65 | 8.45 (2.08) | 6.91 (4.29) | 22.3 | 1.54 (3.69) | F(1, 114) = 13.32, p < 0.001 | |
| Prescribed D0–10 daily | 65 | 8.40 (2.08) | 6.91 (4.29) | 21.6 | 1.49 (4.18) | F(1, 114) = 9.51, p = 0.003 | |
| Discretionary D11–40 | 52 | 12.23 (8.31) | 3.10 (6.77) | 294.5 | 9.13 (7.84) | F(1, 87) = 49.10, p < 0.001 | |
| Overall D0–40 retro | 65 | 18.23 (9.78) | 9.57 (8.78) | 90.5 | 8.66 (8.93) | F(1, 114) = 40.33, p < 0.001 | |
| Overall D0–40 dailyᶜ | 65 | 18.18 (9.76) | 9.57 (8.78) | 90.0 | 8.62 (9.24) | F(1, 114) = 36.33, p < 0.001 | |
| Email-user | | | | | | | |
| Prescribed D0–10 retro | 51 | 7.14 (2.61) | 6.82 (2.26) | 4.7 | 0.31 (2.98) | F(1, 114) = 0.43, p = 0.551 | |
| Prescribed D0–10 daily | 51 | 7.12 (2.81) | 6.82 (2.26) | 4.4 | 0.29 (3.51) | F(1, 114) = 0.29, p = 0.591 | |
| Discretionary D11–40 | 37 | 12.68 (8.37) | 9.68 (9.89) | 31.0 | 3.00 (11.25) | F(1, 87) = 3.77, p = 0.050 | |
| Overall D0–40 retro | 51 | 16.53 (9.26) | 16.76 (11.98) | −1.4 | −0.24 (13.17) | F(1, 114) = 0.02, p = 0.879 | |
| Overall D0–40 dailyᶜ | 51 | 16.51 (9.66) | 16.76 (11.98) | −1.5 | −0.25 (13.91) | F(1, 114) = 0.02, p = 0.875 | |
| Attention control (self-report only) | | | | | | | |
| Prescribed D0–10 retro | 58 | 8.62 (2.15) | | | | | |
| Prescribed D0–10 daily | 56 | 8.43 (2.15) | | | | | |
| Discretionary D11–40 | 42 | 11.33 (10.56) | | | | | |
| Overall D0–40 retro | 58 | 17.02 (11.29) | | | | | |
| Overall D0–40 dailyᶜ | 58 | 16.83 (11.15) | | | | | |

ᵃ Over-reporting = Self-report − Objective; % = ((Self-report / Objective) × 100) − 100. Positive values indicate the number of sessions by which adherence was over-reported in a time period.

ᵇ Interaction from the two-way mixed ANOVA, testing whether the difference between self-reported and objective adherence varied by platform.

ᶜ Overall computed using Prescribed D0–10 daily and Discretionary D11–40.
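As a quick arithmetic check of the percentage formula in note a, the app group's discretionary-period figure can be reproduced from the table's means:

```python
# Over-report % = ((self-report / objective) * 100) - 100, using the
# app group's Discretionary D11-40 means from Table 1.
self_report, objective = 12.23, 3.10
over_report_pct = (self_report / objective) * 100 - 100
print(round(over_report_pct, 1))  # 294.5, matching the table
```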

3.2. Effect of intervention and adherence on outcomes

We found no consistent or convincing evidence that mental health changed over time within conditions (S Table 2), nor that intervention condition (app-based or email-based) predicted change in mental health at Day 10 or Day 40 (controlling for Day 0 outcome; all condition model ps > 0.05; S Table 3). Likewise, we found no consistent or convincing evidence that adherence (self-reported or objective) predicted change in mental health at Day 10 or Day 40 (controlling for Day 0 outcome; all adherence model ps > 0.05 following adjustment for multiple comparisons; S Tables 4–7). Finally, there were no consistent predictors of self-reported or objective adherence that helped explain the app-based or email-based differences in over-reported adherence (S Tables 8–9).
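A minimal sketch of an ANCOVA-style model consistent with "controlling for Day 0 outcome" (statsmodels, with invented variable names and toy data; the actual models are reported in the supplementary tables):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical toy data; variable names are illustrative assumptions.
df = pd.DataFrame({
    "depression_d0":       [10, 14, 8, 12, 9, 15, 11, 13],
    "depression_d40":      [9, 13, 8, 11, 10, 14, 12, 12],
    "objective_adherence": [3, 1, 8, 2, 6, 0, 4, 2],
})

# Does adherence predict the Day 40 outcome, controlling for baseline?
model = smf.ols("depression_d40 ~ depression_d0 + objective_adherence",
                data=df).fit()
print(model.summary())  # inspect the coefficient on objective_adherence
```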

4. Discussion

Self-reported adherence, whether reported daily or retrospectively, was not an adequate representation of app-based intervention adherence. When adherence was at the discretion of the user and occurred over a month-long period, which is more representative of realistic mindfulness meditation platform usage, self-reported app adherence was even less reliable. In fact, the discrepancy between self-reported and objective adherence to the app during that longer period was staggering: roughly 12 sessions self-reported versus only 3 sessions actually logged. This discrepancy occurred even though participants were aware that we would receive access to their objective adherence data and that remuneration (i.e., course credit) was not contingent on adherence. The over-reporting therefore does not appear to be the result of social desirability response biases. Over-reporting of app use was also not ameliorated by self-reporting app use each day, which suggests that even daily reporting of adherence is problematic.

Interestingly, self-reported adherence was more accurate for the email-based intervention than for the app-based intervention. This could be because the email intervention was more unusual and required more effort to enact, which would enhance memory for each session. The email-based mindfulness program was delivered to students' university email addresses on a platform that is not particularly mobile-friendly. As a result, participants likely accessed the intervention using a PC or laptop (indeed, several participants reported that they were unable to easily access the email-based intervention on their mobile phones). By contrast, the app intervention integrated seamlessly into participants' lives, requiring less effort to enact, which could reduce memory for each session. Given that young adults spend on average between 2 and 4 h per day on their mobile phones (Montag et al., 2015, sample primarily from Germany; Liao et al., in review, sample from New Zealand) and more than two-thirds of global internet use in 2017 occurred on mobile devices rather than laptops (Enge, 2018), the app-based mindfulness intervention may have been less salient (and more subject to memory biases) than the relatively more unusual email-based mindfulness intervention. The email program's meditation instructor was also a New Zealander, which may have made those sessions more salient to our New Zealand-based participants. Whether other plausible mechanisms explain this modality-based discrepancy in over-reporting requires further research; however, the question may become less relevant as digital interventions continue to increase in technological sophistication.

The objective adherence data may explain the null effects for this particular intervention. We found no evidence that the digital mindfulness interventions improved mental health over time (Day 10 or Day 40; see Supplementary Tables 2 and 3 for detail). In the absence of adherence data, we naturally would have concluded that the interventions were not effective. However, the objective adherence data showed that intervention use was much too low to be effective. Given that face-to-face mindfulness programs typically recommend 45 min of home practice, six days per week (Segal et al., 2013), it is unlikely that three sessions of the mindfulness meditation app over the course of 30 days would constitute a sufficient dose of a psychotherapeutic intervention to produce any lasting or meaningful benefits. This interpretation fits with previous literature suggesting that greater adherence is associated with better intervention outcomes (Karyotaki et al., 2017; Parsons et al., 2017). However, it could also be that any effects were present but short-lived, occurring only on days of use (Schumer et al., 2018), as has been observed in other brief ‘microinterventions’ (Elefant et al., 2017).

In conclusion, our results suggest that self-reported adherence in app-based intervention trials is suspect, particularly over longer time periods. We present these data as a brief cautionary tale about the peril of relying on self-reported adherence when assessing the effectiveness of digital interventions, and app-based interventions in particular. If self-reports of adherence are inflated, researchers and clinicians may both over-estimate the acceptability of a tool (i.e., thinking usership is higher than it is) and under-estimate the effectiveness of a tool (although this only holds provided there is a positive relationship between adherence and outcomes). We recommend collecting objective adherence data to determine whether people access the content and use their digital interventions in the intended manner.

Acknowledgements

The authors thank Joanne Riley, Isabella Rarm, Michael Fan, Sherry Chen, and Tessa Peck for their assistance collecting data. We also thank Headspace and the 10 Minute Mind for providing us with access to their digital mindfulness meditation programs and for providing us with objective adherence data. Finally, we thank the students for their participation in this research study.

Funding

This research was funded by the Office of the Vice-Chancellor, University of Otago [no grant number; 2015].

Declaration of competing interest

The authors declare that there are no conflicts of interest with respect to the authorship or the publication of this article. Headspace and the 10 Minute Mind provided free access to their digital mindfulness meditation programs and provided adherence data (with participants' permission), but they were not involved in the research design or analyses and had no part in the publication of this research. The Memorandum of Understanding between Headspace and the research institution is available on request.

Ethical approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. This article does not contain any studies with animals performed by any of the authors.

Data statement

Deidentified data are available on request.

Footnotes

2. This is a recommended practice in pharmaceutical treatments (see: Stirratt et al., 2015) but could be applied in digital interventions, i.e., telling participants “it's okay if you miss a planned session, just start again when you can”.

3. Given that these analyses concern the accuracy of self-reported data, we followed a per-protocol procedure whereby we analyzed cases where participants provided data. The original sample was 185 young adult undergraduate university students (M = 19.75 years, SD = 2.50 years, 80.5% female, 70.8% New Zealand European/Pākehā). Consistent with previous mindfulness-based self-help interventions (Cavanagh et al., 2014), attrition was low (3.2%, n = 6) at Day 10 but moderate (27.0%, n = 50) at Day 40. An ID error meant we were unable to obtain objective adherence data for 5 email-based participants; these participants were excluded from the adherence-based analyses. Total n at Day 10 = 174; total n at Day 40 = 131. Including all participants randomised at baseline (n = 185), no demographic variables predicted attrition (rs = −0.013–0.093, ps = 0.210–0.861). Completion of the Day 10 and Day 40 surveys was not correlated with any baseline mental health characteristics (rs = −0.001–0.099, ps = 0.179–0.986). There was a negative correlation between completing the Day 10 survey and extraversion (r = −0.196, p = .008) and a positive correlation between completing the Day 40 survey and conscientiousness (r = 0.198, p = .018), although we were not adequately powered to detect correlations this small (Gignac and Szodorai, 2016), so caution is warranted.
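For readers wishing to run the same kind of attrition check, here is a minimal sketch of a point-biserial correlation between survey completion and a baseline characteristic (toy data and illustrative names, not the study's variables):

```python
import numpy as np
from scipy import stats

# Hypothetical toy data: did each participant complete the Day 40 survey (1/0),
# and a continuous baseline trait score.
completed_d40 = np.array([1, 1, 0, 1, 0, 1, 1, 0])
conscientiousness = np.array([3.8, 4.1, 2.9, 3.5, 3.0, 4.4, 3.9, 3.2])

# Point-biserial correlation: binary completion indicator vs. continuous trait.
r, p = stats.pointbiserialr(completed_d40, conscientiousness)
print(f"r = {r:.3f}, p = {p:.3f}")
```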

Appendix A

Supplementary analyses to this article can be found online at https://doi.org/10.1016/j.invent.2019.100267.

Contributor Information

Jayde A.M. Flett, Email: jflett@psy.otago.ac.nz.

Benjamin D. Fletcher, Email: ben.fletcher@postgrad.otago.ac.nz.

Benjamin C. Riordan, Email: benjamin.riordan@sydney.edu.au.

Tess Patterson, Email: tess.patterson@otago.ac.nz.

Harlene Hayne, Email: harlene.hayne@otago.ac.nz.

Tamlin S. Conner, Email: tconner@psy.otago.ac.nz.

Appendix A. Supplementary analyses

Supplementary material

mmc1.docx (63.8KB, docx)

References

  1. Andersson G., Cuijpers P. Internet-based and other computerized psychological treatments for adult depression: a meta-analysis. Cogn. Behav. Ther. 2009;38(4):196–205. doi:10.1080/16506070903318960.
  2. Blanck P., Perleth S., Heidenreich T., Kröger P., Ditzen B., Bents H., Mander J. Effects of mindfulness exercises as stand-alone intervention on symptoms of anxiety and depression: systematic review and meta-analysis. Behav. Res. Ther. 2018;102:25–35. doi:10.1016/j.brat.2017.12.002.
  3. Brown M., O'Neill N., van Woerden H., Eslambolchilar P., Jones M., John A. Gamification and adherence to web-based mental health interventions: a systematic review. JMIR Mental Health. 2016;3(3). doi:10.2196/mental.5710.
  4. Cavanagh K., Strauss C., Forder L., Jones F. Can mindfulness and acceptance be learnt by self-help?: a systematic review and meta-analysis of mindfulness and acceptance-based self-help interventions. Clin. Psychol. Rev. 2014;34(2):118–129. doi:10.1016/j.cpr.2014.01.001.
  5. Christensen H., Griffiths K.M., Farrer L. Adherence in internet interventions for anxiety and depression: systematic review. J. Med. Internet Res. 2009;11(2). doi:10.2196/jmir.1194.
  6. Costa P.T., MacCrae R.R. Revised NEO Personality Inventory (NEO PI-R) and NEO Five-factor Inventory (NEO-FFI): Professional Manual. Psychological Assessment Resources, Incorporated; 1992.
  7. Cuijpers P., Donker T., Johansson R., Mohr D.C., van Straten A., Andersson G. Self-guided psychological treatment for depressive symptoms: a meta-analysis. PLoS One. 2011;6(6). doi:10.1371/journal.pone.0021274.
  8. DiMatteo M.R., Giordani P.J., Lepper H.S., Croghan T.W. Patient adherence and medical treatment outcomes: a meta-analysis. Med. Care. 2002;40(9):794–811. doi:10.1097/00005650-200209000-00009.
  9. Donkin L., Christensen H., Naismith S.L., Neal B., Hickie I.B., Glozier N. A systematic review of the impact of adherence on the effectiveness of e-therapies. J. Med. Internet Res. 2011;13(3). doi:10.2196/jmir.1772.
  10. Donkin L., Hickie I.B., Christensen H., Naismith S.L., Neal B., Cockayne N.L., Glozier N. Rethinking the dose-response relationship between usage and outcome in an online intervention for depression: randomized controlled trial. J. Med. Internet Res. 2013;15(10). doi:10.2196/jmir.2771.
  11. Elefant A.B., Contreras O., Muñoz R.F., Bunge E.L., Leykin Y. Microinterventions produce immediate but not lasting benefits in mood and distress. Internet Interv. 2017;10:17–22. doi:10.1016/j.invent.2017.08.004.
  12. Enge E. Mobile vs Desktop Usage in 2018: Mobile Takes the Lead. 2018, April 27. Retrieved from https://www.stonetemple.com/mobile-vs-desktop-usage-study/
  13. Eysenbach G. The law of attrition. J. Med. Internet Res. 2005;7(1). doi:10.2196/jmir.7.1.e11.
  14. Firth J., Torous J., Yung A.R. Ecological momentary assessment and beyond: the rising interest in e-mental health research. J. Psychiatr. Res. 2016;80:3–4. doi:10.1016/j.jpsychires.2016.05.002.
  15. Flett J.A.M., Hayne H., Riordan B.C., Thompson L.M., Conner T.S. Mobile mindfulness meditation: a randomised controlled trial of the effect of two popular apps on mental health. Mindfulness. 2018:1–14.
  16. Gignac G.E., Szodorai E.T. Effect size guidelines for individual differences researchers. Personal. Individ. Differ. 2016;102:74–78.
  17. Gnambs T., Kaspar K. Disclosure of sensitive behaviors across self-administered survey modes: a meta-analysis. Behav. Res. Methods. 2015;47(4):1237–1259. doi:10.3758/s13428-014-0533-4.
  18. Howells A., Ivtzan I., Eiroa-Orosa F.J. Putting the ‘app’ in happiness: a randomised controlled trial of a smartphone-based mindfulness intervention to enhance wellbeing. J. Happiness Stud. 2016;17(1):163–185.
  19. Karyotaki E., Riper H., Twisk J., Hoogendoorn A., Kleiboer A., Mira A., … Andersson G. Efficacy of self-guided internet-based cognitive behavioral therapy in the treatment of depressive symptoms: a meta-analysis of individual participant data. JAMA Psychiatry. 2017;74(4):351–359. doi:10.1001/jamapsychiatry.2017.0044.
  20. Kelders S.M., Kok R.N., Ossebaard H.C., Van Gemert-Pijnen J.E. Persuasive system design does matter: a systematic review of adherence to web-based interventions. J. Med. Internet Res. 2012;14(6). doi:10.2196/jmir.2104.
  21. Kimberlin C.L., Winterstein A.G. Validity and reliability of measurement instruments used in research. Am. J. Health Syst. Pharm. 2008;65(23):2276–2284. doi:10.2146/ajhp070364.
  22. Liao W., Doad V., Gross J., Hayne H. Put Down Your Smartphone: Preliminary Evidence That Reducing Smartphone Use Improves Psychological Well-being in People With Mild to Moderate Mental Health Difficulties. 2019. (In review)
  23. Montag C., Błaszkiewicz K., Sariyska R., Lachmann B., Andone I., Trendafilov B., … Markowetz A. Smartphone usage in the 21st century: who is active on WhatsApp? BMC Res. Notes. 2015;8(1):331. doi:10.1186/s13104-015-1280-z.
  24. Morrison L.G., Geraghty A.W., Lloyd S., Goodman N., Michaelides D.T., Hargood C., … Yardley L. Comparing usage of a web and app stress management intervention: an observational study. Internet Interv. 2018;12:74–82. doi:10.1016/j.invent.2018.03.006.
  25. Parsons C.E., Crane C., Parsons L.J., Fjorback L.O., Kuyken W. Home practice in mindfulness-based cognitive therapy and mindfulness-based stress reduction: a systematic review and meta-analysis of participants' mindfulness practice and its association with outcomes. Behav. Res. Ther. 2017;95:29–41. doi:10.1016/j.brat.2017.05.004.
  26. Schumer M.C., Lindsay E.K., Creswell J.D. Brief mindfulness training for negative affectivity: a systematic review and meta-analysis. J. Consult. Clin. Psychol. 2018;86(7):569–583. doi:10.1037/ccp0000324.
  27. Schwarz N. Self-reports: how the questions shape the answers. Am. Psychol. 1999;54(2):93–105.
  28. Schwarz N. Why researchers should think “real-time”: a cognitive rationale. In: Conner T.S., Mehl M.R., editors. Handbook of Research Methods for Studying Daily Life. New York: Guilford Press; 2012. pp. 22–42.
  29. Segal Z.V., Williams J.M.G., Teasdale J.D. Mindfulness-based Cognitive Therapy for Depression. 2nd ed. New York: Guilford Publications; 2013.
  30. Sieverink F., Kelders S.M., van Gemert-Pijnen J.E. Clarifying the concept of adherence to eHealth technology: systematic review on when usage becomes adherence. J. Med. Internet Res. 2017;19(12). doi:10.2196/jmir.8578.
  31. Spijkerman M.P.J., Pots W.T.M., Bohlmeijer E.T. Effectiveness of online mindfulness-based interventions in improving mental health: a review and meta-analysis of randomised controlled trials. Clin. Psychol. Rev. 2016;45:102–114. doi:10.1016/j.cpr.2016.03.009.
  32. Stirratt M.J., Dunbar-Jacob J., Crane H.M., Simoni J.M., Czajkowski S., Hilliard M.E., … Ogedegbe G. Self-report measures of medication adherence behavior: recommendations on optimal use. Transl. Behav. Med. 2015;5(4):470–482. doi:10.1007/s13142-015-0315-2.
