Abstract
Objective
Digital communication between patients and healthcare teams is increasing. Most patients find it effective, yet many remain digitally isolated, a social determinant of health. This study investigates patient attitudes toward healthcare's newest digital assistant, the chatbot, and perceptions regarding healthcare access.
Methods
We conducted a mixed methods study among patient users of a large healthcare system's chatbot integrated within an electronic health record. We purposively oversampled by race and ethnicity, surveying 617 of 3089 invited patient users online (20% response rate) using de novo and validated items. In addition, between November 2022 and May 2024 we conducted semi-structured interviews with users (n = 46) purposively sampled for diversity, age, or select survey responses.
Results
In surveys, 213/609 (35.0%) felt they could not understand the chatbot completely, and 376/614 (61.2%) felt the chatbot did not completely understand them. Of 238 users who felt completely understood by the chatbot, 178 (74.8%) believed the chatbot was intended to help them access healthcare; in comparison, of 376 users who felt not completely understood, 155 (41%) believed the chatbot was intended to help access (p < 0.001). In interviews, among themes observed, Black, Hispanic, less educated, younger, and lower-income participants expressed more positivity about the chatbot aiding healthcare access, citing convenience and a perceived absence of judgment or bias.
Conclusion
Patients’ experience with the chatbot appears to shape their perception of the intent behind its implementation; those adept at chatbot communication, or those within groups historically less trusting of healthcare, may prefer a quick, non-judgmental answer to questions via the chatbot rather than human interaction. Although our findings are limited to one health system's existing chatbot users, as patient-facing chatbots expand, attention to these factors can support healthcare systems’ efforts to design chatbots that meet the unique communication needs of all patients, especially those at risk of digital isolation.
Keywords: Patient-facing chatbots, digital literacy, healthcare access, digital isolation, patient experience
Background
Spurred on by COVID-19, patients have increasingly moved to digital communication, such as patient messaging, with their healthcare teams.1–3 Digital literacy, a skill necessary to access digital technologies for healthcare, refers here to the “ability of individuals to acquire, process, communicate, and understand health information and services, make effective health decisions, and promote and improve individual health through the use of digital technologies.”4,5 Such digital literacy, a social determinant of health, typically requires English proficiency and visual acuity and is associated with health outcomes.5–7
Digital isolation refers to “when people find themselves unable to access the internet or digital media and devices as much as other people.”8 Approximately 15% of patients remain digitally isolated, and only one-third of US adults have full digital capability.4 Digital isolation stems more from a lack of adaptability to this mode of communication, or from communication barriers, than from lack of available technology.4 Risk factors for digital isolation include lacking a high school education and age over 65, with patients over age 75 most likely to struggle.4,9 Digitally isolated patients often overlap with those in greatest need of frequent communication due to chronic health conditions.4,9,10
The transition to digital communication has been implemented largely without assessing or educating patients on digital literacy.3 Digital literacy assessment tools now exist and can be incorporated into electronic health records (EHRs).4,9,11,12 When such tools are paired with face-to-face patient education, digital literacy has been shown to improve.13
Chatbots, artificial intelligence (AI) agents meant to mimic human conversation, are a newer form of direct communication with patients. Public chatbots, such as ChatGPT, are frequently used by patients to seek medical information.14–16 Patient-facing chatbots are integrated into the EHR portal interface to assist with patient navigation.17 As their abilities expand, AI chatbots have the potential to support 24/7, high-quality, patient-specific care that can improve engagement and assist in healthcare navigation.18–21 Yet chatbots have been introduced with minimal clarification as to how best to utilize them for patient benefit.22
Here, we report on a mixed methods investigation of how patients perceive a chatbot integrated into their EHR and their attitudes toward this chatbot's role in healthcare access. For this study, we define “healthcare access” as “timely use of personal health services to achieve the best possible health outcomes.” 23
Methods
Study setting
Our study involved a “general” patient-facing chatbot integrated within the EHR of a large US health system. The health system partnered with a third party for the chatbot platform but retains significant latitude over chatbot features and performance.24 In our survey and interviews, we referred to the chatbot by the name the healthcare system chose for it; in this manuscript we refer to it as “the chatbot.” Although the chatbot is also available online to a broader audience, this study recruited patient-users within their EHR portal. Launched in October 2018, the chatbot assists with patient tasks (e.g., reporting results, navigating the portal, scheduling appointments) and is linked to the medical record, allowing it to pull patient-specific information. At the time of our study, the chatbot used natural language processing of patient users’ text-based queries to match user inputs to system intents, providing direct question-and-answer as well as decision-tree-based responses to mimic human conversation. Responses were not generated with a large language model. The chatbot “pops up” automatically as an animated human face upon login to the patient portal. As of September 2024, the chatbot was receiving more than 200,000 queries/month from 35,396 unique users. The health system spans three states, with nineteen affiliated hospitals and hundreds of clinics in rural, suburban, and urban settings. The study was approved as exempt research by the Colorado Multiple Institutional Review Board (21–5127).
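The intent-matching behavior described above can be pictured with a minimal sketch. This is not the vendor's actual implementation, which is proprietary; the intent names and keyword sets below are hypothetical, and production systems use far richer NLP than keyword overlap:

```python
# Illustrative keyword-based intent matching, loosely in the style of
# pre-LLM patient-facing chatbots. All intents and keywords are hypothetical.
INTENTS = {
    "schedule_appointment": {"appointment", "schedule", "book", "visit"},
    "view_results": {"result", "results", "lab", "test"},
    "refill_prescription": {"refill", "prescription", "medication"},
}

def match_intent(user_text: str) -> str:
    """Return the intent whose keyword set best overlaps the query, else 'fallback'."""
    tokens = set(user_text.lower().split())
    best, best_score = "fallback", 0
    for intent, keywords in INTENTS.items():
        score = len(tokens & keywords)
        if score > best_score:
            best, best_score = intent, score
    return best

print(match_intent("I need to schedule an appointment"))  # schedule_appointment
```

A matched intent would then drive a scripted question-and-answer or decision-tree response, which is why queries the system cannot map to an intent (the "fallback" case) tend to produce the "not answering the question" experience participants describe later in this paper.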
Survey sample and procedures
We surveyed patient-users who sent at minimum three back-and-forth messages in one sitting with the chatbot between July 2022 and September 2023. Our sampling procedure was explicitly designed around diversity. We sent a total of 3089 surveys over three phases. In Phase 1, we received 142 responses via random sampling of eligible patient users. To increase diversity in our sample, in Phase 2 (n = 298) and Phase 3 (n = 177) we sampled in a 2:1 ratio of White non-Hispanic users to users identifying as Black, Hispanic, or Asian, allowing exploration of chatbot use by user race and ethnicity. This prioritization of diverse sampling reflected an ethical commitment to justice and inclusivity in research; had we continued as in Phase 1, recruitment would have been predominantly White. There were no repeat participants.
De novo survey questions (Supplemental Materials Survey) asked about patient-users’ experience with the chatbot, as well as trust, bias, privacy, and perceived intent of the chatbot. Survey questions combined freely available items and items created by our team for this novel topic. Questions were reviewed for face and content validity by the Community Board of the University of Colorado Center for Bioethics and Humanities (CBH). Although the full survey includes validated questions, those questions were not included in this analysis. Users were invited to participate by email up to three times and offered $20 compensation. Data were collected, and informed consent appropriate to survey research obtained, using REDCap (version 14.5.19).
Interview sample and procedures
We interviewed chatbot patient-users selected on the basis of chatbot messages or survey answers with ethically significant content (e.g., misidentification of the chatbot as something other than a computer, or extreme trust/distrust). We used purposive sampling to recruit individuals diverse in age, gender, race/ethnicity, rural/urban status, and other sociodemographic traits. Data saturation was met after forty interviews with one exception: as the topic of digital literacy emerged, we sampled five additional older adults to achieve saturation. We also interviewed two healthcare chatbot design engineers for their perspectives.
Semi-structured interviews lasted 30–60 min and asked open-ended questions about the chatbot user experience and ethical concepts. Following an initial review by the Community Board at CBH and a geriatric-focused Patient and Family Research Advisory Council in February 2022, the guide was refined based on emerging findings from each cohort (Supplement Methods Interview Guide). Oral consent was obtained, and interviews were recorded using Zoom (Version 5.0.2) and professionally transcribed.
Analysis
Quantitative analysis
We used descriptive statistics to analyze sociodemographic characteristics. Our primary outcomes were survey questions pertaining to accessibility: “To what extent do you think [the chatbot] is intended to help patients like you navigate the UCHealth system?” and “To what extent do you think [the chatbot] is intended to limit your access to doctors and nurses?”. Both questions had four response options: “great extent,” “some extent,” “very little,” or “not at all.” For analysis of both questions, we combined “very little” and “not at all,” yielding three response categories per outcome. We collapsed these two categories because of the small number of respondents in them for the system navigation question, and for consistency we maintained this categorization for the other access question.
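The recoding step described above can be sketched as follows. Our analyses were performed in Stata; this Python equivalent is illustrative only:

```python
# Collapse the four survey response options into three analysis categories,
# merging "Very little" and "Not at all" because of sparse cell counts.
COLLAPSE = {
    "Great extent": "Great extent",
    "Some extent": "Some extent",
    "Very little": "Very little/not at all",
    "Not at all": "Very little/not at all",
}

responses = ["Great extent", "Not at all", "Some extent", "Very little"]
recoded = [COLLAPSE[r] for r in responses]
print(recoded)
# → ['Great extent', 'Very little/not at all', 'Some extent', 'Very little/not at all']
```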
To assess comfort with the Internet, we asked “Overall, how often do you use the Internet?” with answers ranging from “never” to “most of the day,” and “Overall, how confident do you feel using computers, smartphones, or other electronics to do the things you need to do online?”, with responses ranging from “not at all confident” to “very confident.” 25
For this study, we examined whether patient-users’ perceptions of access might be statistically associated with key variables, such as their understanding of the chatbot (and the chatbot's understanding of them), communication abilities, comfort with the Internet, or the belief that the chatbot was asking them to do something they did not want to do. We used two-tailed Chi-square tests and Fisher's exact tests of independence to determine whether there was a significant association between these categorical variables. A p-value of <0.05 was considered statistically significant. All data cleaning and analyses were conducted using Stata version 18.1 (StataCorp, College Station, TX).
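As an illustration of the test used, the Pearson chi-square statistic for the first association in Table 2 (users’ understanding of what the chatbot wrote vs. perceived intent to help with navigation) can be recomputed from the published cell counts. The paper's analyses were run in Stata; this pure-Python recomputation is a sketch:

```python
# Observed counts from Table 2.
# Rows: (not understood completely, understood completely)
# Cols: (great extent, some extent, very little/not at all)
observed = [[68, 126, 19],
            [261, 124, 11]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)  # 609 respondents answered this item

# Pearson chi-square: sum over cells of (O - E)^2 / E,
# where E = row_total * col_total / grand_total.
chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand
        chi2 += (obs - expected) ** 2 / expected

print(round(chi2, 1))  # → 66.4
```

With df = (2 − 1)(3 − 1) = 2, the 0.001 critical value is 13.82, so the statistic of about 66.4 corresponds to p < 0.001, consistent with the value reported in Table 2.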
Qualitative analysis
Interviews were analyzed using a modified constructivist grounded theory approach.26 Following interview transcription, the codebook was developed using an open coding approach to capture and categorize important themes from interviews. All interviews were analyzed by two independent coders in ATLAS.ti (Version 24.0.0.29576). Differences in coding were resolved by a third adjudicator. Constant comparative techniques were applied to clarify codes and the connections between them.27
Mixed methods integration
The survey initially preceded interviews, but overlapping waves meant the methods were concurrent. For this study, our mixed methods integration began with qualitative themes about users’ perceived impact on accessibility via the chatbot. We then explored survey data to compare the primary outcomes by user sociodemographic characteristics, which we hypothesized were associated with chatbot users’ perceptions of impact on healthcare access. As the survey came first temporally, we present those results first. Qualitative and quantitative researchers met six times to integrate results.
Results
Quantitative results
In total, we received 617 completed surveys, achieving a response rate of 20.0% (617/3089). Respondents’ mean age was 49.3 (SD 16.8) years; 424/600 (70.7%) reported female sex; and 385/617 (62.4%) reported White non-Hispanic race and ethnicity, 51/617 (8.3%) Black non-Hispanic, 143/617 (23.2%) Hispanic of all races, 29/617 (4.7%) Asian non-Hispanic, and 9/617 (1.5%) other races and ethnicities (Table 1).
Table 1.
Demographics of interview and survey participants.
| | Total interview participants (col %) | Total survey respondents (col %)a |
|---|---|---|
| N | 46 (100.0) | 617 (100.0) |
| Genderb | ||
| Male | 15 (32.6) | 166 (27.7) |
| Female | 29 (63.0) | 424 (70.7) |
| Other | 2 (4.3) | 10 (1.7) |
| Age | ||
| 18–34 | 14 (30.4) | 125 (21.2) |
| 35–64 | 22 (47.8) | 339 (57.5) |
| 65+c | 10 (21.7) | 126 (21.4) |
| Income | ||
| Less than $13,590 | 6 (13.0) | 101 (16.4) |
| $13,591 to $44,999 | 10 (21.7) | 178 (28.8) |
| $45,000 and above | 20 (43.5) | 338 (54.8) |
| Not reported | 10 (21.7) | |
| Race and Ethnicity | ||
| White non-Hispanic | 19 (41.3) | 385 (62.4) |
| Black non-Hispanic | 9 (19.6) | 51 (8.3) |
| Hispanicd (all races) | 17 (37.0) | 143 (23.2) |
| Asiane non-Hispanic | 1 (2.2) | 29 (4.7) |
| Native Hawaiian/Oth PacIsl/AIAN | 0 | 9 (1.5) |
| Education | ||
| High school or less | 6 (13.0) | 94 (15.7) |
| Some college | 18 (39.1) | 212 (35.3) |
| 4-year college graduate | 13 (28.3) | 140 (23.3) |
| More than 4-year degree | 9 (19.6) | 154 (25.7) |
| English speaking proficiency | ||
| Not well | N/A | 6 (1.0) |
| Well | N/A | 26 (4.4) |
| Very well | N/A | 563 (94.6) |
| Difficulty communicatingf | ||
| No difficulty | N/A | 536 (90.8) |
| Some difficulty | N/A | 49 (8.3) |
| A lot of difficulty | N/A | 5 (0.8) |
| Difficulty seeing | ||
| No difficulty | N/A | 388 (65.2) |
| Some difficulty | N/A | 176 (29.6) |
| Lots of difficulty | N/A | 30 (5.0) |
| Cannot do at all | N/A | 1 (0.2) |
| Difficulty remembering or concentrating | ||
| No difficulty | N/A | 361 (61.0) |
| Some difficulty | N/A | 180 (30.4) |
| Lots of difficulty | N/A | 50 (8.4) |
| Cannot do at all | N/A | 1 (0.2) |
| Self-reported overall health | ||
| Excellent | N/A | 64 (10.7) |
| Very good | N/A | 180 (30.2) |
| Good | N/A | 217 (36.3) |
| Fair | N/A | 111 (18.6) |
| Poor | N/A | 25 (4.2) |
aColumn totals vary based on missingness in survey responses. Not all questions were mandatory. bInterviews asked participant gender; survey asked participant sex. cInterviewees in the 65+ category were 65–75; survey participants in the 65+ category were 65–85. d“Hispanic” includes the following ethnicity categories: Mexican, Mexican American, Chicano, Puerto Rican, Cuban, or Another Hispanic, Latino, or Spanish Origin. e“Asian” includes the following race categories: Chinese, Filipino, Asian Indian, Vietnamese, Korean, Japanese, or Other Asian. fResponse to the survey question “In your usual language, do you have difficulty communicating, for example, understanding or being understood?”
Overall, the survey sample represented middle-aged, female, healthy, tech-savvy adults. Using the HCAHPS self-reported health scale (where 1 = excellent and 5 = poor), our respondents had an average overall health rating of 2.75, slightly better than the 2023 national average of 2.98.28 Fewer than 1% (5/590) had “a lot of difficulty” communicating in their usual language. Overall, the group had high English-speaking proficiency; 6/595 respondents (1%) reported speaking English “not well” (Table 2). Nearly three quarters (430/594, 72.3%) of users felt very confident using electronic devices, and most (509/594, 85.7%) used the internet multiple times daily or most of the day (Table 2).
Table 2.
Survey responses to questions regarding chatbot healthcare access assistance by communication groups.
| To what extent do you think (the chatbot) is intended to help patients like you navigate the UCHealth system? | Great extent n (row %) | Some extent n (row %) | Very little/not at all n (row %) | Totala n (col %) | pb |
|---|---|---|---|---|---|
| N | 333 (54.1) | 252 (40.9) | 31 (5.0) | 616 (100.0) | |
| How would you rate your understanding of what (the chatbot) wrote? | <0.001 | ||||
| Not understood completelyc | 68 (31.9) | 126 (59.2) | 19 (8.9) | 213 (35.0) | |
| Understood completely | 261 (65.9) | 124 (31.3) | 11 (2.8) | 396 (65.0) | |
| How would you rate (the chatbot)'s understanding of what you wrote? | <0.001 | ||||
| Not understood completelyc | 155 (41.2) | 193 (51.3) | 28 (7.5) | 376 (61.2) | |
| Understood completely | 178 (74.8) | 57 (24.0) | 3 (1.3) | 238 (38.8) | |
| English speaking proficiency | 0.12 | ||||
| Very well | 313 (55.7) | 221 (39.3) | 28 (5.0) | 562 (94.6) | |
| Well | 9 (34.6) | 17 (65.4) | 0 (0.0) | 26 (4.4) | |
| Not well | 4 (66.7) | 2 (33.3) | 0 (0.0) | 6 (1.0) | |
| Difficulty communicating | 0.9 | ||||
| No difficulty | 296 (55.3) | 214 (40.0) | 25 (4.7) | 535 (90.8) | |
| Some difficulty | 25 (51.0) | 21 (42.9) | 3 (6.1) | 49 (8.3) | |
| Lots of difficulty | 3 (60.0) | 2 (40.0) | 0 (0.0) | 5 (0.8) | |
| Overall, how often do you use the internet? | 0.06 | ||||
| Never to once weekly | 9 (52.9) | 8 (47.1) | 0 (0.0) | 17 (2.9) | |
| Up to once daily | 32 (47.1) | 32 (47.1) | 4 (5.9) | 68 (11.4) | |
| Multiple times daily/most of the day | 285 (56.0) | 200 (39.3) | 24 (4.7) | 509 (85.7) | |
| Overall, how confident do you feel using computers, smartphones, or other electronics to do the things you need to do online? | 0.06 | | | | |
| Not at all/a little confident | 13 (44.8) | 14 (48.3) | 2 (6.9) | 29 (4.9) | |
| Somewhat confident | 62 (45.9) | 67 (49.6) | 6 (4.4) | 135 (22.7) | |
| Very confident | 251 (58.4) | 159 (37.0) | 20 (4.7) | 430 (72.4) | |
| It seemed like (the chatbot) wanted me to do something I did not really want to do | <0.001 | ||||
| Disagree/strongly disagree | 252 (62.1) | 143 (35.2) | 11 (2.7) | 406 (66.1) | |
| Neutral | 54 (41.9) | 66 (51.2) | 9 (7.0) | 129 (21.0) | |
| Agree/strongly agree | 25 (31.7) | 43 (54.4) | 11 (13.9) | 79 (12.9) | |
| In your most recent communication with [the chatbot], what were you trying to do? (check all) | |||||
| Get an appt with my doctor(s) | 85 (52.2) | 67 (41.1) | 11 (6.8) | 163 (26.4) | 0.47 |
| Get an appt with a new doctor | 37 (51.4) | 29 (40.3) | 6 (8.3) | 72 (11.7) | 0.40 |
| Check on existing appt | 58 (56.3) | 43 (41.8) | 2 (1.9) | 103 (16.7) | 0.32 |
| Prescription refill | 53 (51.5) | 42 (40.8) | 8 (7.8) | 103 (16.7) | 0.36 |
| Get results of a medical test | 64 (58.7) | 41 (37.6) | 4 (3.7) | 109 (17.7) | 0.56 |
| Get advice on a health issue | 49 (53.3) | 38 (41.3) | 5 (5.4) | 92 (14.9) | 0.93 |
| Get information about a bill | 24 (54.6) | 19 (43.2) | 1 (2.3) | 44 (7.1) | 0.84 |
| Other | 93 (57.4) | 65 (40.1) | 4 (2.5) | 162 (26.3) | 0.18 |
| How would you rate your interaction with (the chatbot)? | <0.001 | ||||
| Very helpful | 179 (73.7) | 63 (25.9) | 1 (0.4) | 243 (39.6) | |
| Mostly helpful | 108 (45.4) | 124 (52.1) | 6 (2.5) | 238 (38.8) | |
| Mostly unhelpful | 34 (34.0) | 50 (50.0) | 16 (16.0) | 100 (16.3) | |
| Very unhelpful | 11 (33.3) | 14 (42.4) | 8 (24.2) | 33 (5.4) | |
aTotals vary based on covariate missingness. One participant was deleted due to missing outcome response. bp-values were obtained using Pearson's chi-square test or Fisher's exact test for values less than 10. cCategory “not understood completely” includes three survey response options “understood most of what (the chatbot) said,” “understood some of what (the chatbot) said,” and “understood none of what (the chatbot) said.”
Most respondents (333/616, 54.1%) felt that the chatbot was intended to help patients navigate the health system to a “great extent,” and an additional 252/616 (40.9%) felt that the chatbot was intended to help to “some extent” (Table 2). When asked about comprehension between the user and the chatbot, 213/609 (35.0%) felt that they could not understand the chatbot completely, and 376/614 (61.2%) felt that the chatbot could not understand them completely.
In bivariate analysis, there was a statistically significant association between perceptions of access and the chatbot's understanding of what the user had inputted. Of those users who felt completely understood by the chatbot, 178/238 (74.8%) believed the chatbot was intended to help them access healthcare, compared to 155/376 (41%) of users who felt not completely understood (p < 0.001). A similar finding was seen regarding users’ ability to understand what the chatbot wrote back and the perceived helpfulness of the chatbot. English proficiency and the use-case of the chatbot were not statistically associated with perceptions of access.
When asked whether the chatbot is intended to limit access to doctors and nurses, there was a similar association with users’ understanding of the chatbot. Overall, 84/615 (13.7%) of respondents said the chatbot was intended to limit their access to doctors and nurses “to a great extent” (Table 3). Among users who understood the chatbot completely, 248/396 (62.6%) felt that the chatbot was not intended to limit access to providers, compared to 101/212 (47.6%) of those who could not fully understand it (p = 0.001). Although not statistically significant, a relationship of similar direction was seen between the chatbot's understanding of the user and the perception that the chatbot was not intended to limit access to providers (Table 3).
Table 3.
Survey responses to questions regarding chatbot limiting access to doctors and nurses/communication.
| To what extent do you think (the chatbot) is intended to limit your access to doctors and nurses? | Great extent n (row %) | Some extent n (row %) | Very little/not at all n (row %) | Totala n (col %) | pb |
|---|---|---|---|---|---|
| N | 84 (13.7) | 177 (28.8) | 354 (57.6) | 615 (100.0) | |
| How would you rate your understanding of what (the chatbot) wrote? | 0.001 | ||||
| Not understood completelyc | 33 (15.6) | 78 (36.8) | 101 (47.6) | 212 (34.9) | |
| Understood completely | 50 (12.6) | 98 (24.8) | 248 (62.6) | 396 (65.1) | |
| How would you rate (the chatbot)'s understanding of what you wrote? | 0.07 | ||||
| Not understood completelyc | 52 (13.9) | 119 (31.7) | 204 (54.4) | 375 (61.2) | |
| Understood completely | 32 (13.5) | 56 (23.5) | 150 (63.0) | 238 (38.8) | |
| English speaking proficiency | 0.29 | ||||
| Very well | 78 (13.9) | 153 (27.3) | 330 (58.8) | 561 (94.6) | |
| Well | 2 (7.7) | 12 (46.2) | 12 (46.2) | 26 (4.4) | |
| Not Well | 0 (0.0) | 2 (33.3) | 4 (66.7) | 6 (1.0) | |
| Difficulty communicating | 0.85 | ||||
| No difficulty | 73 (13.7) | 150 (28.1) | 311 (58.2) | 534 (90.8) | |
| Some difficulty | 6 (12.2) | 14 (28.6) | 29 (59.2) | 49 (8.3) | |
| Lots of difficulty | 1 (20.0) | 2 (40.0) | 2 (40.0) | 5 (0.9) | |
| Overall, how often do you use the internet? | 0.04 | ||||
| Never to once weekly | 5 (29.4) | 6 (35.3) | 6 (35.3) | 17 (2.9) | |
| Up to once daily | 13 (19.1) | 22 (32.4) | 33 (48.5) | 68 (11.5) | |
| Multiple times daily/most of the day | 62 (12.2) | 139 (27.4) | 307 (60.4) | 508 (85.7) | |
| Overall, how confident do you feel using computers, smartphones, or other electronics to do the things you need to do online? | 0.05 | | | | |
| Not at all/a little confident | 9 (31.0) | 9 (31.0) | 11 (37.9) | 29 (4.9) | |
| Somewhat confident | 15 (11.1) | 43 (31.9) | 77 (57.0) | 135 (22.8) | |
| Very confident | 56 (13.0) | 115 (26.8) | 258 (60.1) | 429 (72.3) | |
| It seemed like [the chatbot] wanted me to do something I did not really want to do | <0.001 | ||||
| Disagree/strongly disagree | 38 (9.4) | 105 (25.9) | 263 (64.8) | 406 (66.2) | |
| Neutral | 22 (17.2) | 49 (38.3) | 57 (44.5) | 128 (20.9) | |
| Agree/strongly agree | 23 (29.1) | 23 (29.1) | 33 (41.8) | 79 (12.9) | |
| In your most recent communication with (the chatbot), what were you trying to do? (check all) | |||||
| Get an appt with my doctor(s) | 22 (13.5) | 56 (34.4) | 85 (52.2) | 163 (26.5) | 0.17 |
| Get an appt with a new doctor | 15 (20.8) | 19 (26.4) | 38 (52.8) | 72 (11.7) | 0.17 |
| Check on existing appt | 12 (11.7) | 30 (29.1) | 61 (59.2) | 103 (16.7) | 0.81 |
| Prescription refill | 16 (15.7) | 29 (28.4) | 57 (55.9) | 102 (16.6) | 0.81 |
| Get results of a medical test | 13 (11.9) | 26 (23.9) | 70 (64.2) | 109 (17.7) | 0.30 |
| Get advice on a health issue | 16 (17.4) | 23 (25.0) | 53 (57.6) | 92 (15.0) | 0.44 |
| Get information about a bill | 6 (13.6) | 12 (27.3) | 26 (59.1) | 44 (7.2) | 0.98 |
| Other | 15 (9.3) | 39 (24.1) | 108 (66.7) | 162 (26.3) | 0.02 |
| How would you rate your interaction with (the chatbot)? | <0.001 | ||||
| Very helpful | 30 (12.4) | 53 (21.8) | 160 (65.8) | 243 (39.6) | |
| Mostly helpful | 26 (10.9) | 88 (37.1) | 123 (51.9) | 237 (38.7) | |
| Mostly unhelpful | 17 (17.0) | 27 (27.0) | 56 (56.0) | 100 (16.3) | |
| Very unhelpful | 11 (33.3) | 8 (24.2) | 14 (42.4) | 33 (5.4) | |
aTotals vary based on covariate missingness. Two participants were deleted due to missing outcome response. bp-values were obtained using Pearson's chi-square test or Fisher's exact test for values less than 10. cCategory “not understood completely” includes three survey response options “understood most of what (the chatbot) said,” “understood some of what (the chatbot) said,” and “understood none of what (the chatbot) said.”
Qualitative results
Of 108 interview invitations sent, we completed 46 interviews with patient-users of the chatbot (overall response rate of 42.6%). Interviewee demographics are presented in Table 1.
Our analysis found that perceptions of accessibility fit into four main themes, developed by the authors as they emerged from the qualitative data: accessibility for self, accessibility for others, barriers to accessibility, and ideas to improve accessibility (Table 4).
1. Accessibility for self: Most participants in all demographic groups expressed a positive attitude regarding the chatbot's potential to improve healthcare accessibility. They recognized its ease, efficiency, and 24/7 availability. Some were surprised at how well it functioned and looked forward to how further developments may provide greater benefits for patients able to access this technology. The chatbot's affiliation with the healthcare system increased user confidence.

This is a fast way to get the help that you need. I really wanna get a response right away, especially if it is something very pertinent, and I need something now rather than going through so many personnel over the phone. (Black female, some college, age 35–64, >$45 K)
2. Accessibility for others: Many participants expressed confidence in how this technology will improve access to healthcare needs in an equitable way for all patients, yet others noted concerns about privacy and digital access. Participants of all ages mentioned older relatives or friends who would have difficulty navigating this technology, or would not understand what a “chatbot” is. Adults over age 65 expressed the most concern about digitally isolated patients or those with communication limitations related to vision, cognition, or language.

My mother is 90 years old. There is no way she could interact with [the chatbot]. She would prefer a phone call and talk to a human. (White female, college education, age 65+)
Table 4.
Interviewee quotes by four accessibility themes determined in chatbot interview qualitative analysis.
| Accessibility for self | ||||
|---|---|---|---|---|
| “It's just another way to connect with patients like me who aren't sure if we should e-mail our overworked doctors versus trying to talk to a bot” (White female, college, age 35–64) P1 a | “It’s one more option for better access to care. We don’t have to constantly hear human voices to get the information we need. We don’t have to wait for somebody to give us a call back. We don’t have to listen to busy signals. We can just go online and instantly communicate with chatbot to help go in the direction I need or answer some basic questions.” (Asian male, some college, age 65+, $14,000–35,000) P8 | “I think that she provided really clear, concise answers and that chatbot was just in the UCHealth app—makes me more trustworthy that I had to log on to this app with my credentials, and then she is appearing within that secure app that makes it more trustworthy, yeah, those two things.” (Hispanic female, college, age 18–34; >$45,000) P4 | “The couple of times I've used chatbot in wanting to access information, it was very helpful to not having to call the doctor's office - that I knew was very busy - for the results that I wanted. I was able to just get into chatbot, and after a little bit of sleuthing, access the information that I needed.” (White female, High school education, age 35–64, <$14,000/year) P12 | “I'm not trying to call during certain hours during the day. I can do it in the middle of the night if I need to whenever I think about it.” (White female, college, age 35–64, >$45,000/year) P15 |
| Accessibility for others | ||||
|---|---|---|---|---|
| “She should definitely be able to speak other languages. That would definitely increase access.” (Black female, college +, age18–34, >$45,000) P2 | “We are comfortable and safe with chatbot, just because it's also with UCHealth where I feel comfortable with the company. We wouldn’t put in information in any old chatbot that was on the internet, because I know they can steal information; I don’t feel like the UC Health would employ a chatbot that was going to steal my information.” (White female, high school education, age 35–64). P5 | “Anytime you can get more interactive with your healthcare, I think it's a good thing. There will be some people who are too shy or whatever to send a direct message to their healthcare provider, but they might interact with chatbot, it's non-threatening.” (White female, college +, age 35–64, >$45,000) P9 | “I guess the first thought is to access the chatbots, you have to have a smart device. That means you can’t be poor or at least so poor that you can’t utilize the chatbots… there's already a bias to people who can afford the device that allows them to use the chatbots.” (White male, college +, age 65+) P13 | “I think it is all about gaining the trust of users. I think about my parents, and they don't like to put that sensitive information on a portal.”(White female, college, age 18–34, >$45,000/year), P16 |
| Barriers to accessibility | | | | |
|---|---|---|---|---|
| “My grandparents don’t have smart phones, they wouldn’t be able to use chatbot.” (Hispanic male, college, age 18–34, >$45,000) P3 | “I think the biggest barrier is people's protection of their privacy. They have to be convinced in some way that whatever digitally is utilized with [the chatbot] is protected in the same way as if I was sitting in front of my doctor.” (White male, college +, age 65+, >$45,000) P10 | “Accessibility depends on aptitude… if they know how to work a computer and continue to type away and chat. Some people aren't so good on a computer.” (Hispanic male, some college, age 65+) P6 | “It would save time for the healthcare community if these bots actually could answer questions. If it would work more effectively, I think it could help.” (White female, college, age 65+) P5 | “It would need to change for anyone with hearing or seeing disabilities, patient with those kinds of needs.” (Hispanic female, college, age 18–34, >$45,000) P17 |
| Ideas for accessibility | ||||
|---|---|---|---|---|
| “It would be nice if chatbot could speak to my level of education or to whoever is talking to her.” (White female, college, age 35–64) P1 | “For those with visual impairments, having it be able to talk back would be highly beneficial for them to know where they’re navigating.” (Hispanic female, college, age 35–64, <$14,000) P11 | “Many different languages need to be in the software. Be sure they get the necessary information without a language barrier.” (Black male, college+, age 35–64, >$45,000) P7 | “If you don’t understand this [from chatbot] or you have more questions, press this button and you can send a question (directly) to your doctor or your nurse and ask that question.” (White female, college+, age 35–64, >$45,000) P14 | “I’d be okay talking to chatbot about stuff. If it's minor things, like a symptom checker thing.” (White gender other, some college, age 18–34, $14,000–45,000) P18 |
P numbers refer to anonymized unique interview participants.
A speech-impaired person or blind person needs to have somebody work with them, unless they're using a keyboard that can interact with them and talk to them. (Hispanic male, some college, age 65+)
3. Barriers to accessibility: Participants identified an accessibility barrier when the chatbot underperforms relative to expectations, such as not providing the information they are seeking, “not answering the question.” This observation was reiterated by developers, who noted concerns about patient input into design and about bias. As one said:

The folks that are developing the AI systems tend to not be representative of the general population…we don't see a lot of chatbots outside of primary English speaking high literacy levels. It's biased to folks who can read and write and speak in English at a high level with higher education.
4. Ideas for accessibility improvement: Themes for chatbot improvement centered on assessing digital literacy, adding functionality, supporting multiple languages, offering specific help for those with visual or speech limitations, and adding a talking option.
As part of our analysis, we coded interviewees’ responses about the chatbot's impact on access as “only positive,” “only negative,” or “both.” After sorting these perceptions by demographic group, we observed that interviewees who were younger, less educated, or lower income, or who identified as Black, Asian, or Hispanic, appeared to express more positivity about access than other groups (Figure 1). This emergent finding led us to re-examine our survey data for differences between groups on relevant access questions. While the differences were not statistically significant, Black, Hispanic, lower-income, less educated, and younger interviewees did express more positivity about how the chatbot helps with healthcare access. Along with convenience, these users appeared to cite the lack of human judgment or human bias as reasons (Table 5).
It makes me feel a little better, because as you say, it's a robot, so they can’t judge me. I don’t feel like I’m being judged or looked at in a certain way. (Black race, some college, other gender, age 18–34)
Figure 1.
Interview participants’ responses to whether the chatbot improves healthcare access.
Table 5.
Insight quotes on why chatbot positivity was high in certain demographic groups.
| Convenient access | Lack of judgment |
|---|---|
| “I just like how [chatbot's] always there even in the middle of the night, you can just ask [the chatbot] a question and she helps you, and that's probably the biggest reason why I trust [the chatbot], because it's just easier and less stressful.” (White gender other, some college, age 18–34, $14,000–45,000) P18 | “I feel more comfortable with [chatbot] because nobody will see you asking those questions or looking into those resources. I feel more comfortable going to chatbot to find resources or providers on stigmatized issues versus going to my PCP.” (White male, college, age 18–34, >$45,000) P22 |
| “Sometimes just the waiting process to speak to your doctor or speak to a nurse, sometimes that can last up to half an hour, an hour to two hours. At least with [chatbot], the questions are being answered right away… I think it actually helps. Ever since you guys implemented [the chatbot], and when I first seen it, I love it.” (Black male, high school or less education, age 35–64, <$14,000) P19 | “Someone is seeing, and someone is listening to you, and it is there… I trust it more because it doesn’t have the opinions, and it sticks to the facts… I don’t need certain personnel that are probably exhausted from a lot of things to give me some type of platitude or whatever they might be going through that day.” (Black female, some college, age 35–64, >$45,000) P4 |
| “A lot more people would ask questions than set up an appointment just to ask a question.” (Hispanic female, high school or less education, age 35–64, <$14,000) P20 | “I just wanna be seen as a patient first versus everything else that people see about.” (Asian male, college, age 35–64, income >$45,000) P23 |
| “It helps with even just bandwidth across the board of small questions like how I went to her needing something. I got my answer really quick instead of having to call someone and try and venture out something through that where you’re using up someone else's bandwidth that's needed for different things.” (Hispanic female, college, age 18–34, >$45,000) P17 | “If I get to my doctor and I ask him, ‘Hey can I set up a payment plan for the copay?’ I might get a look, I might get some anxiety around that question, but if I just ask [chatbot], ‘Hey, am I able to set up a payment plan here?’ It's impartial, the doctor knows that they’re going to get paid, the payment plan is set up and everything is good to go.” (Black female, some college, age 18–34, >$45,000) P24 |
| “I feel like people like me, who don't like to talk to people or are uncomfortable with healthcare in general, I think it would help if, “My nose is a little runny. My throat is a little dry. Oh, God. I'm freaking out. Do I have COVID?” No, you don't have COVID. You have allergies. If you could talk to [chatbot] and she could help you go through your symptoms, I think that would only help people. She is my doctor friend that I keep in my pocket.” (Hispanic female, high school or less education, age 35–64) P21 | “I don’t think the bot would put me in an embarrassing situation, whereas sometimes that may be frustrating talking to a person, which would have other complications in life, and not the best day for them or whatever, whereas the chatbot, literally, it just does what it needs to do and it doesn’t really care how they feel; cause they have no feeling.” (Hispanic male, some college, age 65+) P14 |
P numbers refer to anonymized unique interview participants.
This observation was also reiterated by developers:
…if they know that their information is private…then they're more comfortable disclosing things (to chatbot) that the system could help them with health-wise…for substance use or gender-affirming care, seeking abortion…
And when asked whether chatbots might decrease access:
Actually, we’ve seen the opposite. We have a chatbot deployed in sexual and reproductive health topics, and we’ve seen a lot of appointments being booked from the chatbot…after they ask a couple of questions.
Figure 2 summarizes our results, based on survey findings, qualitative themes, and the concepts above, following a design model from earlier research on trust (defined as “a firm belief in the reliability, truth, ability, and good intentions of someone or something”) and patient messaging.29,30
Figure 2.
Optimal system, chatbot, and patient factors to promote chatbot user benefits, and those excluded.
Discussion
Several important findings emerge from this mixed methods study of patient-facing chatbots. First, English-speaking, visually able, digitally literate patients found value in the chatbot for navigating their healthcare needs efficiently and expressed a desire to expand the chatbot's capabilities. These findings support further investment in this technology to aid healthcare efficiency and access. 31
Second, our findings confirm that “most patients” does not mean “all patients.” Although our recruitment, by definition, involved patients who use the chatbot, patients from most demographic groups shared concerns about how patients who are digitally isolated or have other communication barriers can continue to access healthcare in the chatbot age. That older adults expressed the most concern about digitally isolated patients is possibly related to awareness of peers who struggle, or to their own perceived digital challenges relative to others. By exploring factors associated with the nonuse of technology, we can intervene on modifiable factors to reduce digital health disparities. 4 For example, given the overlap between patients most in need of frequent communication with their healthcare team, such as chronically ill, medically complex older adults, and those who are digitally isolated, we can incorporate routine assessment of patients’ digital literacy through EHR-integrated online tools.8,10,11 Pertinent examples of EHR-integrated digital literacy assessments include the eHealth Literacy Scale (eHEALS), 9 the Digital Health Engagement Tool (DHET), 11 and the Mobile Device Proficiency Questionnaire (MDQ-16), 12 which is particularly effective for older adults. Based on the outcome of such an assessment, one can determine whether digital communication is feasible (e.g., through coaching or a help desk, educational classes, or a proxy through a trusted family member or caregiver) or whether an alternative communication plan is required. As another example of removing accessibility barriers, the health system recently added interactive voice response (IVR), recording an additional 94,478 IVR chatbot queries from 15,302 unique users in September 2024, and plans to add Spanish-language support in 2025.
Third, our survey results suggest that when chatbot users cannot understand the chatbot's messages, or feel that the chatbot cannot understand theirs, it not only affects their experience with the chatbot but also worsens their perception of the intent behind the health system's use of the chatbot. Because patients believe in the positive intent of healthcare, they expect easy usability and effectiveness when digital tools are released. In other words, when the chatbot “underperforms” by not meeting patient usability expectations, the technology widens patients’ uncertainty gap, leading patients to question the positive intent of healthcare along with how to communicate effectively with their healthcare team in the digital era. 32 Given that evaluation frameworks for patient-facing chatbots are a current research priority, these findings underscore the importance of assessing patient understandability. Part of this requires, as a matter of justice, as outlined in the White House Blueprint for an AI Bill of Rights and other AI ethics guidelines, ensuring that chatbot messages are equally understandable by all user groups. 33 In addition, patients should be educated on what the chatbot is, how to use it, what its capabilities are, and how and where information is stored and/or communicated to their healthcare team.
Fourth, demographic groups that historically distrust healthcare34–38 appeared positive about the chatbot in our interviews. While surprising at first, this finding has possible explanations: for historically and socially marginalized people, access outside traditional working hours and efficiency may be particularly important. In addition, many of these users discussed the chatbot's favorable traits, including “being seen,” “avoidance of risk of shame,” and providing an alternative to their prior experiences of bias or judgment with healthcare personnel, especially around stigmatized topics. Other studies have found that, in some circumstances, people may prefer chatbots over people for certain topics.39–41 Given the known possible biases lurking in AI, whether our participants are correct regarding the absence of bias in all patient-facing chatbots is unknown and deserves further study.
Limitations
This study occurred at one health system with one chatbot. Our sampling strategy was not meant to represent a general patient population; instead, we oversampled based on diversity to allow subgroup comparisons. As a result, our findings may not generalize to all patient populations or chatbots. The relatively small number of respondents in key categories limited our ability to perform meaningful multivariate analysis; thus, our findings should be seen as exploratory, motivating additional research. The low survey response rate also raises the possibility of non-response bias, our use of self-report (e.g., assessing chatbot understanding) could introduce social desirability bias, and our use of de novo items may limit validity. Last, by recruiting existing chatbot users, we undersampled participants most likely to be digitally isolated, a limitation of the study design. By using digital literacy assessment tools integrated within the EHR, future research could target recruitment to include this essential patient group. Additional publications are in progress related to survey and interview questions not included in this analysis.
Conclusion
EHR-integrated chatbots can improve patients’ access to their healthcare, along with improving the patient experience and the efficiency of healthcare delivery. Further study is essential to understand whether chatbots are a preferred communication method for certain demographic groups and/or medical topics.
Healthcare systems have an ethical obligation to integrate digital literacy assessment tools concomitant with chatbot technology and provide education for patients and families in the use of such tools, while providing communication options for patients at every digital literacy level.
Supplemental Material
Supplemental material, sj-docx-1-dhj-10.1177_20552076251337321 for Patient-facing chatbots: Enhancing healthcare accessibility while navigating digital literacy challenges and isolation risks—a mixed-methods study by Annie A Moore, Jessica R Ellis, Natalia Dellavalle, Marlee Akerson, Matt Andazola, Eric G Campbell and Matthew DeCamp in DIGITAL HEALTH
ORCID iD: Annie A Moore https://orcid.org/0000-0002-1079-9464
Statements and declarations
Ethical considerations: The study was approved as exempt research by the Colorado Multiple Institutional Review Board (21-5127).
Consent to participate: Written informed consent to participate was obtained from survey participants and documented in REDCap; verbal consent to participate was obtained from interview participants after obtaining consent to record the conversation.
Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Greenwald Foundation Making a Difference Program, University of Colorado School of Medicine Brown/Moore Endowed Chair for Excellence in the Patient Experience.
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Data availability: The datasets generated during and/or analyzed during the current study are available from the corresponding author upon reasonable request.
Supplemental material: Supplemental material for this article is available online.
References
- 1. Nath B, Williams B, Jeffery MM, et al. Trends in electronic health record inbox messaging during the COVID-19 pandemic in an ambulatory practice network in New England. JAMA Netw Open 2021; 4: e2131490.
- 2. North F, Luhman KE, Mallmann EA, et al. A retrospective analysis of provider-to-patient secure messages: how much are they increasing, who is doing the work, and is the work happening after hours? JMIR Med Inform 2020; 8: e16521.
- 3. Wetsman N. Digital messages from patients to doctors spiked during the pandemic. The Verge, September 17, 2021. https://www.theverge.com/2021/9/17/22679239/patient-messages-health-record-epic-doctor-burnout.
- 4. Hegeman P, Vader D, Kamke K, et al. Patterns of digital health access and use among US adults: a latent class analysis. Res Sq [Preprint], January 29, 2024. doi: 10.21203/rs.3.rs-3895228/v1. Update in: BMC Digit Health 2024; 2: 42. doi: 10.1186/s44247-024-00100-0.
- 5. Benda NC, Veinot TC, Sieck CJ, et al. Broadband internet access is a social determinant of health! Am J Public Health 2020; 110: 1123–1125.
- 6. Arias López MDP, Ong BA, Borrat Frigola X, et al. Digital literacy as a new determinant of health: a scoping review. PLOS Digit Health 2023; 2: e0000279. doi: 10.1371/journal.pdig.0000279.
- 7. Sieck CJ, Sheon A, Ancker JS, et al. Digital inclusion as a social determinant of health. NPJ Digit Med 2021; 4: 52.
- 8. Quevatre C. Digital isolation: the vulnerable people left behind. BBC News, 2019. https://www.bbc.com/news/uk-england-cornwall-50812576.
- 9. Estrela M, Semedo G, Roque F, et al. Sociodemographic determinants of digital health literacy: a systematic review and meta-analysis. Int J Med Inform 2023; 177: 105124.
- 10. Shimokihara S, Ikeda Y, Matsuda F, et al. Association of mobile device proficiency and subjective cognitive complaints with financial management ability among community-dwelling older adults: a population-based cross-sectional study. Aging Clin Exp Res 2024; 36: 44.
- 11. Rousseau J, Gibbs L, Garcia-Cabrera C, et al. A pioneering EMR-embedded digital health literacy tool reveals healthcare disparities for diverse older adults. J Am Geriatr Soc 2024; 72: S97–S104. doi: 10.1111/jgs.18935.
- 12. Shimokihara S, Tabira T, Maruta M, et al. Smartphone proficiency in community-dwelling older adults is associated with higher-level competence and physical function: a population-based age-specific cross-sectional study. J Appl Gerontol 2025; 44: 52–61. doi: 10.1177/07334648241261885.
- 13. Dong Q, Liu T, Liu R, et al. Effectiveness of digital health literacy interventions in older adults: single-arm meta-analysis. J Med Internet Res 2023; 25: e48166.
- 14. Graylight Imaging. FDA and AI-enabled medical devices: a few statistics. 2024. https://graylight-imaging.com/blog/fda-and-ai-enabled-medical-devices-a-few-statistics/. Accessed September 16, 2024.
- 15. Malak A, Şahin MF. How useful are current chatbots regarding urology patient information? Comparison of the ten most popular chatbots’ responses about female urinary incontinence. J Med Syst 2024; 48: 02.
- 16. Şahin MF, Keleş A, Özcan R, et al. Evaluation of information accuracy and clarity: ChatGPT responses to the most frequently asked questions about premature ejaculation. Sex Med 2024; 12: qfae036.
- 17. Şahin MF, Topkaç EC, Doğan Ç, et al. Still using only ChatGPT? The comparison of five different artificial intelligence chatbots’ answers to the most common questions about kidney stones. J Endourol 2024; 38: 1172–1177.
- 18. Akerson M, Andazola M, Moore A, et al. More than just a pretty face? Nudging and bias in chatbots. Ann Intern Med 2023; 176: 997–998.
- 19. Clark M, Bailey S. Chatbots in health care: connecting patients to information. Emerging Health Technologies. January 2024. https://www.ncbi.nlm.nih.gov/books/NBK602381/.
- 20. Altamimi I, Altamimi A, Alhumimidi AS, et al. Artificial intelligence (AI) chatbots in medicine: a supplement, not a substitute. Cureus 2023; 15: e40922.
- 21. McGreevey JD, Hanson CW, Koppel R. Clinical, legal, and ethical aspects of artificial intelligence-assisted conversational agents in health care. JAMA 2020; 324: 552–553.
- 22. Hindelang M, Sitaru S, Zink A. Transforming health care through chatbots for medical history-taking and future directions: comprehensive systematic review. JMIR Med Inform 2024; 12: e56628.
- 23. Ellis J, Hamer MK, Akerson M, et al. Patient perceptions of chatbot supervision in health care settings. JAMA Netw Open 2024; 7: e248833.
- 24. Institute of Medicine (US) Committee on Monitoring Access to Personal Health Care Services. Access to health care in America. Washington, DC: National Academies Press (US), 1993.
- 25. Avaamo: Generative AI for the Enterprise. https://avaamo.ai/.
- 26. Online Privacy and Security Questionnaire. Georgia Tech GVU Center. https://sites.cc.gatech.edu/gvu/user_surveys/survey-1998-10/questions/privacy.html. Accessed September 8, 2024.
- 27. Charmaz K. Constructing grounded theory. Thousand Oaks, CA: Sage Publications, 2014.
- 28. Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res 2005; 15: 1277–1288.
- 29. Centers for Medicare & Medicaid Services. Patient-mix coefficients for October 2024 (1Q23 through 4Q23 discharges) publicly reported HCAHPS results. https://hcahpsonline.org/globalassets/hcahps/mode-patient-mix-adjustment/october_2024_pma_web_document.pdf. Accessed October 26, 2024.
- 30. Green A. How to build patient trust in healthcare. 2023. https://www.qualityinteractions.com/blog/building-patient-trust-in-healthcare.
- 31. Moore A, Fisher M, Chavez C. Factors enhancing trust in electronic communication among patients from an internal medicine clinic: qualitative results of the RECEPT study. J Gen Intern Med 2022; 37: 3121–3127.
- 32. Federal Communications Commission. Keep Americans Connected. https://www.fcc.gov/keep-americans-connected. Accessed September 8, 2024.
- 33. Sieck CJ, Hefner JL, Schnierle J, et al. The rules of engagement: perspectives on secure messaging from experienced ambulatory patient portal users. JMIR Med Inform 2017; 5: 13.
- 34. Institute B. Unpacking the White House blueprint for an AI Bill of Rights [video]. YouTube, 2022.
- 35. Alsan M, Wanamaker M, Hardeman RR. The Tuskegee study of untreated syphilis: a case study in peripheral trauma with implications for health professionals. J Gen Intern Med 2020; 35: 322–325.
- 36. El-Toukhy S, Méndez A, Collins S, et al. Barriers to patient portal access and use: evidence from the Health Information National Trends Survey. J Am Board Fam Med 2020; 33: 953–968.
- 37. Powell W, Richmond J, Mohottige D, et al. Medical mistrust, racism, and delays in preventive health screening among African-American men. Behav Med 2019; 45: 102–117.
- 38. Smith B, Magnani JW. New technologies, new disparities: the intersection of electronic health and digital health literacy. Int J Cardiol 2019; 292: 280–282.
- 39. Antonio MG, Petrovskaya O, Lau F. Is research on patient portals attuned to health equity? A scoping review. J Am Med Inform Assoc 2019; 26: 871–883.
- 40. Folk D. Can chatbots ever provide more social connection than humans? Collabra: Psychology 2024; 10: 117083.
- 41. Lucas GM, Gratch J, King A, et al. It’s only a computer: virtual humans increase willingness to disclose. Comput Hum Behav 2014; 37: 94–100.