Abstract
Online cancer risk assessment tools, which provide personalized cancer information and recommendations based on personal data input by users, are a promising cancer education approach; however, few tools have been evaluated. A randomized controlled study was conducted to compare user impressions of one tool, Cancer Risk Check (CRC), to non-personalized educational information delivered online as a series of self-advancing slides (the control). CRC users (N=1,452) rated the tool to be as interesting as the control (p>.05), but they were more likely to report that the information was difficult to understand and not applicable to them (p<.05). Information seeking and sharing also were lower among CRC users; thus, although impressions of CRC were favorable, it was not shown to be superior to existing approaches. We hypothesized that CRC was less effective because it contained few visual and graphical elements; therefore, CRC was compared post hoc to a text-based control (an online PDF file). In this comparison, CRC users rated the information to be more interesting, less difficult to understand and better able to hold their attention (p<.05). These post-hoc results suggest that the visual presentation of risk is critical to tool success.
Introduction
Cancer is now the second leading cause of death in the U.S., yet a substantial proportion of cancers could be prevented through behavioral changes and early detection practices [1]. Cancer education and outreach efforts are therefore essential to help communicate prevention recommendations to the larger public; however, translating cancer prevention and control research into practice can be a challenging task [15]. For some time it has been recognized that interactive technologies may be helpful in communicating disease risk information to the larger public [13]. One recent technological approach has been the development of online cancer risk assessment tools, which provide individualized cancer risk information and recommendations based on personal and family health information input by users. Online risk assessment tools may be particularly effective in the cancer context because cancer risk recommendations are often complex, difficult to communicate and based on a variety of individual-level factors.
It has been estimated that more than 40 online tools are available on the web, most often providing risk assessments for breast, lung and colorectal cancer [17]. Although the tools vary in terms of their risk communication format, most provide users with a qualitative risk assessment in absolute terms (e.g., “your risk is high”) or in comparison to other people (e.g., “your risk is higher than average”). Many also communicate cancer prevention messages and provide links to additional information [17].
There is both theoretical and empirical support for the potential effectiveness of these tools. Street and Manning’s [14] model of health promotion using interactive technology suggests that more vivid and interactive messages increase user involvement in message processing, which in turn may improve educational outcomes (e.g., learning) and health behaviors. The interactive nature of these tools also requires individuals to think about their health history and current health practices, which could raise awareness of knowledge gaps and impact health outcomes. In addition, numerous studies have shown that tailored messages (i.e., messages individualized to a particular receiver) are more effective than non-tailored messages at producing health behavior change [10], likely because they enhance cognitive preconditions for message processing and alter psychosocial factors that influence health behavior (e.g., attitudes, self-efficacy and norms) [6]. Thus, we expect online cancer risk assessment tools to have a larger impact on health outcomes than traditional non-interactive and non-personalized approaches.
Evaluation of Cancer Risk Assessment Tools
Despite the increasing availability of online cancer risk assessment tools, a limited number have been rigorously evaluated. Thus, as with many interactive technologies, questions remain regarding the most effective form for health promotion and education [13]. Research with The Ohio State University’s JamesLink tool showed that those with moderate or high risk assessments had higher risk perceptions and greater intentions to talk to a physician about their risk assessment [8]. Studies evaluating the Centers for Disease Control and Prevention’s Family Healthware™ indicated that the receipt of risk messages increased physical activity and fruit and vegetable intake [11], as well as risk perceptions among those who underestimated their colon cancer risk [16]. While these studies suggest online cancer risk assessment tools are a promising health communication approach, we know little about user impressions and experiences using these tools or the extent to which users act on the information obtained. In this study, we present results from an evaluation of MD Anderson Cancer Center’s Cancer Risk Check (CRC).
Cancer Risk Check
CRC was developed in 2009 as a tool for communicating cancer prevention and screening recommendations based on MD Anderson screening practice algorithms [2, 4, 5] and is updated regularly as recommendations change. The version of CRC evaluated here included recommendations for breast, cervical and colorectal cancer; the tool now also includes recommendations for prostate cancer. CRC users respond to a series of questions regarding demographics, personal and family cancer history, and current prevention behaviors (cancer screening, tobacco and alcohol use, sun exposure, physical activity and dietary behaviors). Based on the information users provide, CRC produces a risk profile (Figure 1) that states whether individuals are “more likely” or “not more likely” to develop certain cancers and includes screening recommendations as well as behavioral recommendations regarding tobacco and alcohol use, sun exposure, diet and exercise.
Figure 1.
Sample Cancer Risk Check Profile
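The screening practice algorithms behind CRC [2, 4, 5] are not reproduced in this paper. For readers interested in how a rules-based profile generator of this general kind is typically structured, the minimal sketch below maps questionnaire responses to qualitative risk flags and recommendation lists; the field names, thresholds and rules are hypothetical simplifications for illustration only, not CRC’s actual algorithm.

```python
# Illustrative sketch only: CRC's real logic follows MD Anderson screening
# practice algorithms [2, 4, 5]. The field names, thresholds and rules below
# are hypothetical simplifications showing the general rules-based pattern
# (responses in -> qualitative risk flags and recommendation lists out).
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Responses:
    age: int
    sex: str                          # "female" or "male"
    family_colorectal_cancer: bool    # first-degree relative with colorectal cancer
    current_tobacco_use: bool


@dataclass
class RiskProfile:
    risk_flags: Dict[str, str] = field(default_factory=dict)
    screening_recommendations: List[str] = field(default_factory=list)
    behavioral_recommendations: List[str] = field(default_factory=list)


def build_profile(r: Responses) -> RiskProfile:
    profile = RiskProfile()

    # Qualitative ("more likely" / "not more likely") flag from family history.
    profile.risk_flags["colorectal"] = (
        "more likely" if r.family_colorectal_cancer else "not more likely"
    )

    # Screening recommendation keyed to an average-risk starting age of 50
    # (hypothetical simplification of a screening rule).
    if r.age >= 50 or r.family_colorectal_cancer:
        profile.screening_recommendations.append(
            "Talk with your doctor about colorectal cancer screening."
        )

    # Behavioral recommendation based on current prevention behaviors.
    if r.current_tobacco_use:
        profile.behavioral_recommendations.append("Quit tobacco use.")

    return profile


if __name__ == "__main__":
    example = build_profile(
        Responses(age=58, sex="female",
                  family_colorectal_cancer=False, current_tobacco_use=True)
    )
    print(example)
```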
This study represents the process evaluation portion of a summative evaluation of CRC, looking at how the intervention was conducted and received by participants [3] compared to a low-interactive, non-personalized control group. The degree to which an individual likes a communication message is an indicator of message persuasiveness, as liking may enhance attention and learning [9]. This type of information can also be particularly useful in helping to explain intervention outcomes [12]. We focus on three components of process evaluation: (a) maintenance (i.e., user interest and involvement in the tool), (b) barriers (i.e., problems encountered during use of the tool) and (c) exposure (i.e., the degree to which participants viewed or read the information) [3]. As an indicator of reach, we also investigate the extent to which participants engage in communication behaviors (i.e., seeking additional cancer information and sharing information from CRC) during and immediately following use.
Materials and Methods
This online evaluation study utilized a two-group randomized controlled design, with pre- and post-intervention measures. All participants completed an online baseline survey and were randomized to receive either the intervention, CRC (Figure 1), or standard educational information (the control). The control was a low-interactive and non-personalized cancer risk presentation presented via Adobe Flash as a series of self-advancing slides; it was based on an educational brochure outlining simple steps to reduce cancer risks that was produced by public education staff at MD Anderson. Both CRC and the control were embedded in the online survey.
Participants
Participants were recruited through an online survey research panel coordinated by Qualtrics, Inc. The use of an online panel was optimal because panelists had access to the internet and completed surveys independently, as they would if accessing CRC on their own. A purposive sampling strategy was used to recruit a balanced sample of Whites, Hispanics and African Americans aged 50 and older. Fifty is the recommended age to begin colonoscopy to screen for colorectal cancer among individuals at average risk [4].
Panelists were invited to participate in the study via email (N=35,400). Panelists opted into the study by clicking a link in the recruitment email that took them to the informed consent. Of those who clicked on the link to access the survey (N=4,787; 13.5%), 34% (n=1,620) completed the baseline assessment and viewed CRC (n=811) or the control (n=809). All participants who completed the baseline survey completed the immediate follow-up assessment (N=1,620) and approximately 40% completed the 6-week follow-up assessment (N=655; split nearly evenly between the two groups). Individuals who had viewed CRC prior to participating in the study (2.5%, n=40) were dropped from the analyses, as well as those with missing or questionable data; therefore, our effective sample size was 1,452.
Measures
User impressions
Several single-item questions measuring user impressions were included on the immediate follow-up survey to help assess CRC program maintenance, barriers and exposure [3]. All items were measured on a 1–5 Likert scale. Initial questions assessed participants’ interest (“How interesting was [CRC or control] to you?”), attention (“How well did [CRC or control] hold your attention?”) and trust (“How much did you trust the information provided in [CRC or control]?”). To assess barriers, participants were asked if they had encountered the following scenarios while using CRC or the control: (a) “The information was difficult to understand,” (b) “The information was not applicable to me,” (c) “There was too much information presented,” and (d) “It took too long to [answer questions/view presentation].” Additional questions were presented to CRC users, including: (a) “To what extent did you read the ‘Cancer Risk Check Profile’ that appeared after the questions in CRC?” and (b) “Did you click on each of the tabs within the ‘Cancer Risk Check Profile’ for information about your health, tobacco, diet and exercise, and sun exposure?”
User communication
On the immediate follow-up survey, participants were asked about their information seeking (“During or immediately after [CRC or control], did you seek information about cancer from any source?”) and information sharing (“Did you share information from [CRC or control] with other people?”) behaviors. At 6 weeks, information sharing was assessed again, in addition to family history communication (i.e., “Have you talked to your family about diseases that run in the family?”) and physician communication (i.e., “Have you talked to your doctor about the cancer risk presentation?”). Response options for all questions were yes (1) or no (0).
Data Analyses
Analyses were conducted in SPSS version 19.0. Descriptive statistics (means and standard deviations) were used to describe the sample. To assess differences between groups on the user impression variables (continuous), non-parametric Mann-Whitney U tests were used (rather than independent samples t-tests) because several variables exhibited non-normal distributions. To examine relationships between group membership (binary) and user communication (binary), chi-square tests were used.
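The analyses reported here were run in SPSS; purely as a rough illustration of the same tests, the sketch below shows the equivalent Mann-Whitney U and chi-square tests in Python’s SciPy, using a hypothetical data layout (a file `crc_evaluation.csv` with columns `group`, `interest` and `sought_info` standing in for the study variables).

```python
# Hedged sketch: the study analyses were run in SPSS 19.0. This shows the
# equivalent Mann-Whitney U and chi-square tests in SciPy on a hypothetical
# data file and column names (group, interest, sought_info).
import pandas as pd
from scipy import stats

df = pd.read_csv("crc_evaluation.csv")          # hypothetical file name

crc = df[df["group"] == "CRC"]
control = df[df["group"] == "control"]

# Mann-Whitney U test for a 1-5 user-impression rating (distributions were
# non-normal, so a non-parametric test is used instead of a t-test).
u_stat, u_p = stats.mannwhitneyu(crc["interest"], control["interest"],
                                 alternative="two-sided")
print(f"Interest: U = {u_stat:.2f}, p = {u_p:.3f}")

# Chi-square test of independence for a binary communication outcome
# (e.g., sought information: yes/no) by group membership.
contingency = pd.crosstab(df["group"], df["sought_info"])
chi2, chi_p, dof, expected = stats.chi2_contingency(contingency,
                                                    correction=False)
print(f"Information seeking: chi2({dof}) = {chi2:.2f}, p = {chi_p:.3f}")
```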
Results
The demographic characteristics of participants are presented in Table I. Means, standard deviations and medians for continuous variables are presented in Table II.
Table I.
Participant Demographic Characteristics (N=1,452)
| Characteristic | n (%) |
|---|---|
| Race: Black | 478 (33%) |
| Race: White | 516 (36%) |
| Race: Hispanic | 458 (32%) |
| Female | 867 (60%) |
| Income <$50,000/year | 871 (63%) |
| Education > High School/GED | 1,093 (75%) |
| Married/Life Partner | 791 (55%) |
| Past Cancer | 138 (1%) |
| Family History of Cancer | 736 (51%) |

| Characteristic | M (SD) |
|---|---|
| Age | 58.56 (6.65) |
Table II.
Results of Mann Whitney U-tests Comparing Cancer Risk Check (CRC) to a Control on User Impressions (N=1,452)
| User Impression (a, b) | CRC: M | CRC: SD | CRC: Mdn | Control: M | Control: SD | Control: Mdn | U (c) |
|---|---|---|---|---|---|---|---|
| Interest | 4.33 | 0.98 | 5.00 | 4.35 | 0.87 | 5.00 | 255,933.50 |
| Held Attention | 4.32 | 0.95 | 5.00 | 4.31 | 0.86 | 5.00 | 248,784.50 |
| Trust | 4.10 | 0.95 | 4.00 | 4.42 | 0.79 | 5.00 | 208,710.50** |
| Difficult to Understand | 1.85 | 0.85 | 2.00 | 1.62 | 0.85 | 1.00 | 214,795.00** |
| Not Applicable | 2.19 | 0.99 | 2.00 | 2.10 | 1.06 | 2.00 | 242,961.50* |
| Too Much Information | 1.99 | 0.93 | 2.00 | 1.77 | 0.95 | 2.00 | 219,654.00** |
| Too Long | 2.01 | 1.00 | 2.00 | 1.70 | 0.92 | 1.00 | 211,950.00* |

Note: (a) Measured on a 1–5 scale. (b) Assessed immediately following exposure. (c) Mann-Whitney U tests were used to assess differences between the CRC and control groups. * p<.05, ** p<.01.
User impressions
Mann-Whitney tests (Table II) showed that CRC users rated the information to be as interesting, and as able to hold their attention, as controls did (p>.05). However, CRC users reported significantly lower trust in the information than controls (p<.05). They also were more likely than controls to state that the information was difficult to understand, that it was not applicable to them, that too much information was presented, and that it took too long to view the information (p<.05). Of CRC users, 72% (n=513) read their risk profile somewhat or very carefully. However, only 48% (n=343) clicked on all the informational tabs within the CRC risk profile, and 28% of CRC users (n=67) did not notice the informational tabs.
Information seeking and sharing
Information seeking during or following exposure to the intervention was higher in the control group (8.9%, n=65) than in the CRC group (4.6%, n=33; χ2=10.42, p<.01), although neither condition stimulated much information seeking overall. Immediately after viewing, those in the CRC group shared the risk information with others less often than those in the control group (11.5%, n=82, versus 15%, n=110; χ2=4.02, p<.05). Six weeks following exposure, information sharing did not differ significantly between the control and CRC groups (21%, n=60, versus 18%, n=52; χ2=1.23, p>.05), and CRC users were as likely as controls to have talked to family members about family disease history (23%, n=68, versus 27%, n=68; χ2=1.02, p>.05) or to have talked with their doctor about the information (14%, n=40, versus 11%, n=30; χ2=1.15, p>.05).
Follow-up study
To understand why our results showed CRC to be equally effective as, or in some cases less effective than, the control, a small post-hoc follow-up study was conducted (N=306) comparing CRC (n=153) to a new text-based control (n=153). The new control material remained non-personalized and low in interactivity (i.e., participants scrolled through a PDF), but it contained no visual cues (i.e., photographs or graphics) other than the institutional logo. CRC, despite its personalization and interactivity, is likewise primarily text-based (i.e., it contains no visual cues except the institutional logo). We hypothesized that the control materials in the main study may have been received more positively than CRC because they included visual elements. To better understand the impact of visual elements, as well as the effects of personalization and interactivity, we compared CRC to this new text-based control.
The follow-up study survey and methodology were identical to the main study, except that participants did not complete a 6-week follow-up survey (i.e., baseline and immediate follow-up surveys only). Furthermore, no specific recruitment goals were set for race/ethnicity in the follow-up study; 86% (n=262) of participants were non-Hispanic White. Individuals who had viewed CRC prior to participating in the study (2.3%, n=7) were dropped from the analyses, as well as those with missing or questionable data; therefore, our effective sample size was N=278.
Participants rated CRC to be significantly less difficult to understand (Mdn=2, M=1.85, SD=.92) than the control (Mdn=2, M=2.05, SD=.97; U=8,414.00, p<.05). CRC users also rated the information to be as personally applicable (Mdn=2, M=2.18, SD=1.07) as those in the control group did (Mdn=2, M=2.41, SD=1.08; U=8,514.50, p>.05), and CRC users (Mdn=4, M=4.21, SD=.84) trusted the information as much as controls (Mdn=4, M=4.21, SD=.85; U=9,621.00, p>.05). However, CRC users rated the information to be significantly more interesting (Mdn=5, M=4.28, SD=1.00) than controls did (Mdn=4, M=4.04, SD=.98; U=8,008.50, p<.05) and rated it better able to hold their attention (Mdn=5, M=4.27, SD=.90) than the control group did (Mdn=4, M=3.86, SD=1.04; U=7,447.50, p<.05). CRC users were less likely to say it took too long to view the information (Mdn=2, M=1.93, SD=.99) than controls (Mdn=2, M=2.53, SD=1.19; U=6,764.50, p<.05), and they were also less likely to report that too much information was presented (Mdn=2, M=1.97, SD=.93) than participants in the control group (Mdn=2, M=2.40, SD=1.17; U=7,714.50, p<.05). In terms of communication behaviors, less than 2% (n=2) of CRC users sought information during or immediately after the intervention (versus 5%, n=7, of controls) and less than 10% (n=13) shared information immediately following use (versus 13.5%, n=19, of controls).
Discussion
This study reports results from a summative evaluation of CRC, focusing specifically on process evaluation measures related to maintenance, barriers and exposure [3]. We examined user impressions of the tool and, as an indicator of the intervention’s reach, the extent to which participants engaged in health communication behaviors following use. Main study results show that CRC users had favorable impressions of the tool, but the tool did not appear to be superior to a traditional non-personalized cancer education approach. Although participants rated CRC as highly interesting and as able to hold their attention equally well, CRC users felt significantly more burdened by the amount and difficulty of the information, as well as by the length of time it took to view the information. CRC users were more likely to state that the information was not applicable to them, and their ratings of trust in the information were lower. Furthermore, CRC was as likely (and in some cases less likely) than the control to induce information seeking and sharing, illustrating its limited reach, although neither approach was particularly effective at stimulating communication.
There are several possible explanations for these findings. First, the sheer amount of information in the risk profile (i.e., personalized risk assessment plus screening and behavioral recommendations) may have deterred participants from reading the risk information provided. Because the information was extensive and personalized, users may have needed more time to process the information presented in the risk profile, particularly if the risk assessment did not match their personal perceptions of risk. A recent study with Family Healthware supports this idea, showing that shifts in individual risk perceptions, and the time at which they occur, may vary based on the specific risk information received [7]. Participants who did not fully read the information in the risk profile may also have felt it was less applicable to them personally.
We hypothesized that the visual presentation of risk information within the CRC risk profile may have diminished user impressions of the tool and contributed to our findings. The CRC tool, despite its personalization and interactive capabilities, is primarily text-based. It is easy to assume that increased interactivity (i.e., answering questions about personal and family health history) and personalization (i.e., providing recommendations based on question responses and using words such as “you”) within educational tools negate the need for graphic or visual elements (e.g., photographs, graphics or pictorial representations of risk). However, as our main study results showed, participants often rated the non-personalized, low-interactivity information more highly than CRC.
Results from the follow-up study showed that people responded more positively to CRC when it was compared to text-based cancer risk information that was non-personalized, free of visual and graphic elements, and low in interactivity. These findings suggest that interactivity and personalization do have positive effects on user impressions; however, in light of the main study findings, they also suggest that the lack of visual cues may be limiting the potential effectiveness of CRC as a cancer prevention tool. Further testing must determine whether the inclusion of graphic elements positively impacts user perceptions of CRC and increases its effectiveness relative to traditional cancer education approaches.
The visual aspects of cancer education materials, and particularly of online communication tools, are extremely important to consider; however, they are often overlooked within health communication and computer-mediated communication (CMC) research. Future research efforts must focus on understanding how visual aspects and graphic design impact the effectiveness of online educational approaches in order to develop best practices for the design of computer-mediated cancer information. Additionally, our results suggest that collaboration between health and visual communication experts is essential in the further development of interactive health communication tools.
Structural issues within the CRC tool also may be limiting its effectiveness. The tool requires users to click on different tabs (labeled “your health,” “tobacco,” “diet and exercise,” and “sun exposure”) within the risk profile to obtain information (Figure 1). Our results showed that less than 50% of CRC users clicked on all tabs within the risk profile (and 28% did not even notice that risk information was included in the tabs); thus, many CRC users appear to have missed information within the risk profile, which may have affected their impressions of the tool. The fact that 72% of participants reported reading their risk profile at least somewhat carefully suggests this may have happened unknowingly. Some users (especially those who are less web savvy) may not have known to click on the tabs and therefore viewed only part of the information contained within the risk profile. Further evaluation efforts are needed to understand the CRC user experience more thoroughly and to improve navigation within the tool. These findings again illustrate the importance of optimizing graphic design, in addition to content, when developing online cancer education tools.
Given the significant investment associated with the development and maintenance of online risk assessment tools such as CRC, evaluation research is particularly warranted. The results of this evaluation suggest that the approach has promise, but CRC in its current state should supplement, not replace, traditional cancer education approaches. Additional research is needed to identify specific ways to enhance the relative effectiveness of the tool, including adding visual elements and enhancing tailoring, interactivity and personalization within the tool. Furthermore, we must look not only at user impressions of such tools but also at the extent to which they influence health-related attitudes, beliefs and behaviors.
Acknowledgments
This research was supported by a cancer prevention fellowship awarded to S. Hovick at MD Anderson Cancer Center (National Cancer Institute grant R25T CA057730 to S. Chang) and by the National Institutes of Health through MD Anderson's Cancer Center Support Grant (CA016672 to R. DePinho). Project assistance was provided by MD Anderson’s Patient-Reported Outcomes, Survey, and Population Research (PROSPR) Shared Resource and by e-Health Technology, a resource of the Duncan Family Institute for Cancer Prevention and Risk Assessment at MD Anderson Cancer Center.
Contributor Information
Shelly R. Hovick, The Ohio State University
Therese B. Bevers, The University of Texas MD Anderson Cancer Center
Jennifer Irvin Vidrine, Stephenson Cancer Center, The University of Oklahoma Health Sciences Center.
Stephanie Kim, The University of Texas MD Anderson Cancer Center.
Phokeng M. Dailey, The Ohio State University
Lovell A. Jones, Prairie View A&M University
Susan K. Peterson, The University of Texas MD Anderson Cancer Center
References
1. American Cancer Society. Cancer Facts & Figures 2014. Atlanta: American Cancer Society; 2014.
2. Arun Banu, Bartholomew-Bevers Therese, Bedrosian Isabele, Brewster Abenaa, Coyne Robin, Green Marjorie, Hwang Rosa, Yang Wei. University of Texas MD Anderson Cancer Center Breast Cancer Screening Practice Census Algorithm. Houston, TX: University of Texas MD Anderson Cancer Center; 2010.
3. Baranowski Tom, Stables Gloria. Process evaluations of 5-a-day projects. Health Education & Behavior. 2000;27(2):157–166. doi: 10.1177/109019810002700202.
4. Bartholomew-Bevers Therese, Bresalier Robert, Day Suzanne, Hawk Ernest, Lynch Patrick, Raju Gottumukkala, Vinning David. University of Texas MD Anderson Cancer Center Colorectal Cancer Screening Practice Census Algorithm. Houston, TX: University of Texas MD Anderson Cancer Center; 2010.
5. Bartholomew-Bevers Therese, Dains Joyce E, Lazzaro Marita, Milbourne Andrea, Rhodes Helen, Ramondetta Lois M, Schmeler Kathleen, Dallard Carol Vreeland. University of Texas MD Anderson Cancer Center Cervical Cancer Screening Practice Census Algorithm. Houston, TX: University of Texas MD Anderson Cancer Center; 2010.
6. Hawkins RP, Kreuter M, Resnicow K, Fishbein M, Dijkstra A. Understanding tailoring in communicating about health. Health Educ Res. 2008;23(3):454–466. doi: 10.1093/her/cyn004.
7. Hovick Shelly R, Wilkinson Anna V, Ashida Sato, de Heer Hendrik D, Koehly Laura M. The impact of personalized risk feedback on Mexican Americans’ perceived risk for heart disease and diabetes. Health Education Research. 2014;29(2):222–234. doi: 10.1093/her/cyt151.
8. Kelly Kimberly, Porter Kyle, Remy Amber, Westman Judith A. Promotion of cancer family history awareness: JamesLink cancer risk assessment tool at community health fairs. Journal of Genetic Counseling. 2008;17:274–282. doi: 10.1007/s10897-007-9146-8.
9. McGuire William J. McGuire's classic input-output framework for constructing persuasive messages. In: Atkin Charles K, Rice Ronald E, editors. Public Communication Campaigns. Thousand Oaks, CA: Sage; 2013. pp. 133–146.
10. Noar Seth M, Benac Christina N, Harris Melissa S. Does tailoring matter? Meta-analytic review of tailored print health behavior change interventions. Psychological Bulletin. 2007;133(4):673–693. doi: 10.1037/0033-2909.133.4.673.
11. Ruffin Mack T, Nease Donald E, Sen Ananda, Pace Wilson D, Wang Catharine, Acheson Louise S, Rubinstein Wendy S, O'Neill Suzanne M, Gramling Robert, for The Family History Impact Trial Group. Effect of preventive messages tailored to family history on health behaviors: The Family Healthware Impact Trial. The Annals of Family Medicine. 2011;9(1):3–11. doi: 10.1370/afm.1197.
12. Saunders Ruth P, Evans Martin H, Joshi Praphul. Developing a process-evaluation plan for assessing health promotion program implementation: A how-to guide. Health Promotion Practice. 2005;6(2):134–147. doi: 10.1177/1524839904273387.
13. Street Richard L, Rimal Rajiv N. Health promotion and interactive technology: A conceptual foundation. In: Street Richard L, Gold William R, Manning Timothy, editors. Health Promotion and Interactive Technology: Theoretical Applications and Future Directions. Mahwah, New Jersey: Lawrence Erlbaum Associates; 1997.
14. Street Richard L, Manning Timothy. Information environments for breast cancer education. In: Street Richard L, Gold William R, Manning Timothy, editors. Health Promotion and Interactive Technology: Theoretical Applications and Future Directions. Mahwah, New Jersey: Lawrence Erlbaum Associates; 1997.
15. Vanderpool RC, Gainor SJ, Conn ME, Spencer C, Allen AR, Kennedy S. Adapting and implementing evidence-based cancer education interventions in rural Appalachia: real world experiences and challenges. Rural and Remote Health. 2011;11(4):1807.
16. Wang Catharine, Sen Ananda, Ruffin Mack T IV, Nease Donald E Jr, Gramling Robert, Acheson Louise S, O'Neill Suzanne M, Rubinstein Wendy S. Family history assessment: Impact on disease risk perceptions. American Journal of Preventive Medicine. 2012;43(4):392–398. doi: 10.1016/j.amepre.2012.06.013.
17. Waters Erika A, Sullivan Helen W, Hesse Bradford W, Nelson Wendy. What is my cancer risk? How internet-based cancer risk assessment tools communicate individualized risk estimates to the public. Journal of Medical Internet Research. 2009. doi: 10.2196/jmir.1222.

