Colorectal cancer (CRC) is a major cause of cancer-related morbidity and mortality worldwide.1 Colonoscopy is an important screening tool that has reduced the burden of CRC through early detection and removal of precancerous lesions. Although average-risk CRC screening is advocated by major guidelines, adherence is limited, and the quality of bowel preparation is frequently inadequate for proper endoscopic evaluation. In the United States, where colonoscopy is the predominant CRC screening modality,2 more than one-third of eligible individuals are not up-to-date.3 The bowel preparation process itself, which requires dietary changes, large-volume laxative consumption, and adherence to strict timelines, is a major barrier for many patients. To address this issue, numerous educational and technological interventions have been evaluated to improve both CRC screening adherence and the quality of bowel cleansing. In recent years, smartphones have become nearly ubiquitous, making many routine activities easier and more efficient. Because smartphones now touch almost every aspect of daily life, the possibility of an app to improve the colonoscopy preparation process is appealing.
In this issue of Clinical Gastroenterology and Hepatology, Walter et al4 evaluated the impact of a smartphone application on the quality of bowel preparation for patients undergoing outpatient colonoscopy for CRC screening or surveillance. The authors randomized 500 patients receiving split-dose bowel preparation to (1) a standard written instructions control arm or (2) a reinforced education intervention arm using a smartphone application. This application contained the standard instructions for bowel preparation, as well as pictograms illustrating the required dietary changes. Push notifications were sent to patients daily in the 3 days before colonoscopy to reinforce this information and to provide reminders about the timing of key steps in the preparation process. The authors found that the intervention application was significantly associated with greater reported adherence to laxative and diet instructions (each P < .01), as well as objectively improved bowel preparation per the Boston Bowel Preparation Scale (92% score ≥6 [ie, sufficient] vs 83%; P = .002). The adenoma detection rate was also significantly higher in this group as compared with controls (35.8% vs 26.7%; P = .032). Importantly, in study questionnaires the patients in the intervention arm reported high ratings of application usability, perceived usefulness, and likelihood to use the application for future colonoscopy or recommend it to others.
The results of this study are important not only for understanding the effectiveness of smartphone apps for colonoscopy preparation but also for understanding how digital tools can be leveraged to improve patient outcomes more broadly. There is great promise for technology to improve both patient and clinician behavior, because it can automate manual processes and make clinical care more efficient. An important question arising from this and other studies is whether and how such applications can be adapted to routine clinical care across the world. Key considerations include the methods of evaluation, the clinical context, and the technology tool itself.
Regarding evaluation, the authors are to be commended for conducting a rigorous evaluation of a patient-directed application in a randomized controlled fashion. Similar studies performed as pilots often have very promising results but fail to demonstrate significant improvements over usual care in subsequent randomized controlled trials.5 Although pilot studies are important to assess the feasibility of an intervention and to inform modifications before an effectiveness trial,6 they are also subject to a variety of biases. Volunteer bias may occur through self-selection of patients who are already motivated to engage with their health care. This is particularly important for digital interventions, because those who are able and choose to participate may differ from those who do not. In non-randomized studies, positive results may reflect either the intervention itself or the selection of patients who would have had good outcomes without it. Implementation support bias, in which extra effort is placed into reinforcing the intervention with patients, may also inflate observed pilot outcomes; this level of intervention fidelity may be sustainable in a small pilot program but not in a larger trial. Finally, measurement and outcome biases may occur, where the measures and outcomes studied in a research setting change when implemented in larger populations. In light of these considerations, the randomized design of this study is an important element and one that distinguishes this trial from the digital offerings of many vendors.
Although random assignment to study arms can overcome threats to internal validity, careful implementation and evaluation are needed to ensure similar results are seen as an intervention scales to larger populations.7 In practice, it is challenging to know whether use of this smartphone application in other settings would yield similar results, for several reasons. First, there is significant heterogeneity among centers in the standard patient educational and reminder measures used for colonoscopy. For example, some centers incorporate staff phone calls or automated text messages at baseline. Because the authors compared the smartphone application with written instructions alone, it is not clear whether adding an application to these other standard practices would confer the same incremental benefit. Furthermore, patients in this study were recruited at a pre-procedure office visit, whereas many screening and surveillance colonoscopies are performed in an open-access setting without a preceding encounter with the endoscopist. This study also evaluated the effect of the intervention only among those who consented, whereas at scale the intervention would be offered as standard of care to all patients. Second, it is difficult to isolate the specific elements of the intervention that provided benefit to patients. Did the visual representation of clear liquids and the low-fiber diet through pictograms improve dietary adherence, or were the push notifications as prompts more important? Perhaps both were critical? It is likely that certain features were helpful for some patients but not for others, with further variation across different patient populations. Third, as noted by the authors, a smartphone application by its nature cannot benefit patients who do not have a smartphone.
Although the penetrance of these devices is widespread and ever increasing, many patients have text-capable mobile phones only or no mobile phones at all.8 Furthermore, differences in smartphone ownership are most pronounced across socioeconomic strata.9 Because CRC screening adherence is poorest among low socioeconomic status groups,10,11 a smartphone application may be expected to benefit these patients least.
Ultimately, this type of research highlights the unique challenges of evaluating technology-based interventions. Unlike a pharmaceutical intervention, which can be standardized and administered according to strict criteria, digital interventions are ultimately focused on human behavior. This is complex, because the motivating factors that effect behavior change are highly individualized and further mediated by different environmental and cultural contexts.12 How people engage with and use technology varies widely, and many parts of the experience, ranging from Internet access to enrollment and device choice, may dramatically affect usage. For example, in a similarly designed trial, only 6% of patients even watched an Internet video intended to improve colonoscopy preparation.13 Indeed, the aspects of this study that may translate best to other contexts are the processes by which the authors developed and implemented the intervention, and how they targeted underlying principles of behavior change. In this setting, it is useful to view the constructs affecting behavior change through the lens of a framework such as the technology acceptance model.14 This model identifies intervention usability (ease of use), efficacy (or perceived efficacy), and positive attitude toward use as critical elements for inducing behavior change. As the authors reported through participant questionnaires, each of these elements was satisfied by the intervention. This framework separates the nature of the tool from the perceived role that it plays for the patient. To that point, the authors previously published a randomized controlled trial using a text message–based intervention to improve bowel preparation for CRC screening, which yielded similar results.15 This underscores that the vehicle of the intervention (ie, text messages versus smartphone application) is less important than creating a tool that addresses the principles governing behavior change themselves.
In summary, Walter et al4 provide compelling evidence that a reminder application can be implemented in practice to increase adherence to complete bowel preparation. Moreover, improvements in bowel cleansing may yield higher adenoma detection rates, which would be expected to improve CRC screening outcomes. This study highlights the importance of randomization and rigorous design in evaluating these types of digital interventions, but more work is needed to ensure that these results scale to other populations. Because of the heterogeneity in perceived and actual barriers to improved CRC screening, institution-specific interventions should focus on theories of behavior change that address usability, perceived efficacy, and likelihood of adoption. The vehicle for such an intervention may be a smartphone, as the authors have demonstrated; however, it need not be. Future studies that articulate the process by which a successful intervention is iterated and patterned to address constructs of behavior change may serve as a blueprint for other endoscopy centers to do the same.
Footnotes
Conflicts of interest
The authors disclose no conflicts.
Contributor Information
NADIM MAHMUD, Division of Gastroenterology, Department of Medicine, University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania.
SHIVAN J. MEHTA, Division of Gastroenterology, Department of Medicine, University of Pennsylvania Perelman School of Medicine, Philadelphia, Pennsylvania; Center for Health Care Innovation, University of Pennsylvania, Philadelphia, Pennsylvania.
References
- 1. Vermeer N, Snijders H, Holman F, et al. Colorectal cancer screening: systematic review of screen-related morbidity and mortality. Cancer Treat Rev 2017;54:87–98.
- 2. Centers for Disease Control and Prevention. Vital signs: colorectal cancer screening test use—United States, 2012. MMWR Morb Mortal Wkly Rep 2013;62:881–888.
- 3. Burnett-Hartman AN, Mehta SJ, Zheng Y, et al. Racial/ethnic disparities in colorectal cancer screening across healthcare systems. Am J Prev Med 2016;51:e107–e115.
- 4. Walter B, Frank R, Ludwig L, et al. Smartphone application to reinforce education increases high-quality preparation for colorectal cancer screening colonoscopies in a randomized trial. Clin Gastroenterol Hepatol 2021;19:331–338.
- 5. Beets MW, Weaver RG, Ioannidis JP, et al. Identification and evaluation of risk of generalizability biases in pilot versus efficacy/effectiveness trials: a systematic review and meta-analysis. Int J Behav Nutr Phys Act 2020;17:19.
- 6. Leon AC, Davis LL, Kraemer HC. The role and interpretation of pilot studies in clinical research. J Psychiatr Res 2011;45:626–629.
- 7. Green BB, Coronado GD, Schwartz M, et al. Using a continuum of hybrid effectiveness-implementation studies to put research-tested colorectal screening interventions into practice. Implement Sci 2019;14:53.
- 8. Pew Research Center. Mobile fact sheet. 2020. Available from: https://www.pewresearch.org/internet/fact-sheet/mobile/. Accessed April 17, 2020.
- 9. Gordon NP, Hornbrook MC. Differences in access to and preferences for using patient portals and other eHealth technologies based on race, ethnicity, and age: a database and survey study of seniors in a large health plan. J Med Internet Res 2016;18:e50.
- 10. Vutien P, Mehta R, Saleem N, et al. Neighborhood socioeconomic status (SES) predicts screening colonoscopy adherence for colorectal cancer (CRC): 150. Am J Gastroenterol 2016;111:S71.
- 11. Wheeler DC, Czarnota J, Jones RM. Estimating an area-level socioeconomic status index and its association with colonoscopy screening adherence. PLoS One 2017;12:e0179272.
- 12. Glanz K, Bishop DB. The role of behavioral science theory in development and implementation of public health interventions. Annu Rev Public Health 2010;31:399–418.
- 13. Kakkar A, Jacobson BC. Failure of an Internet-based health care intervention for colonoscopy preparation: a caveat for investigators. JAMA Intern Med 2013;173:1374–1376.
- 14. Lee Y, Kozar KA, Larsen KR. The technology acceptance model: past, present, and future. Commun Assoc Inf Syst 2003;12:50.
- 15. Walter B, Klare P, Strehle K, et al. Improving the quality and acceptance of colonoscopy preparation by reinforced patient education with short message service: results from a randomized, multicenter study (PERICLES-II). Gastrointest Endosc 2019;89:506–513.e4.
