Digital Health. 2022 Mar 24;8:20552076221089084. doi: 10.1177/20552076221089084

A framework for examining patient attitudes regarding applications of artificial intelligence in healthcare

Jordan P Richardson 1, Susan Curtis 1, Cambray Smith 1, Joel Pacyna 1, Xuan Zhu 2, Barbara Barry 2, Richard R Sharp 1
PMCID: PMC8958674  PMID: 35355806

Abstract

Background

While use of artificial intelligence (AI) in healthcare is increasing, little is known about how patients view healthcare AI. Characterizing patient attitudes and beliefs about healthcare AI, and the factors that shape those attitudes, can help ensure that the implementation of these new technologies aligns closely with patient values.

Methods

We conducted 15 focus groups with adult patients who had a recent primary care visit at a large academic health center. Using modified grounded theory, focus-group data were analyzed for themes related to the formation of attitudes and beliefs about healthcare AI.

Results

When evaluating AI in healthcare, we found that patients draw on a variety of factors to contextualize these new technologies, including previous experiences of illness, interactions with health systems and established health technologies, comfort with other information technology, and other personal experiences. We found that these experiences informed normative and cultural beliefs about the values and goals of healthcare technologies, which patients applied when engaging with AI. The results of this study form the basis for a theoretical framework for understanding patient orientation to applications of AI in healthcare, highlighting a number of specific social, health, and technological experiences that will likely shape patient opinions about future healthcare AI applications.

Conclusions

Understanding the basis of patient attitudes and beliefs about healthcare AI is a crucial first step in effective patient engagement and education. The theoretical framework we present provides a foundation for future studies examining patient opinions about applications of AI in healthcare.

Keywords: Artificial intelligence, patient perspectives, bioethics

Introduction

The past few years have been marked by significant innovation in applications of artificial intelligence (AI) to healthcare, including improved imaging and data-powered diagnostics, workflow improvement strategies to enhance provider experiences and decrease errors, and direct-to-patient chatbots that provide behavioral healthcare or chronic disease management. 1 As the biomedical and healthcare sectors consider new applications of AI in healthcare, it is important to consider the broader impact that these technologies will have on patient experiences of medicine.2,3 A major factor influencing the nature and extent of AI innovation in healthcare will be whether, and under what conditions, patients accept AI. 4 As a result, AI creators need to anticipate patient responses to AI applications in healthcare.

In predicting how patients will react to a new technology or intervention, researchers often use behavioral models to examine relationships between key determinants of a person's performance or nonperformance of a behavior. 5 There are a variety of behavior models and theories for describing patient decision-making and behaviors related to use of previously developed healthcare technologies. Two popular models are the Health Belief Model (HBM) and Technology Acceptance Model (TAM). The HBM is a widely used framework for explaining how patients decide whether to adopt a particular health behavior based on their evaluation of associated health risks, the expected benefits of the behavior change, and their ability to carry out the behavior. 6 The TAM, by contrast, is a behavioral model that describes factors that contribute to patient acceptance of new healthcare technologies based on the evaluation of the perceived usefulness and ease of using those technologies.7,8 These two models have been previously used separately and in combination to predict patient acceptance of healthcare technologies.5,9,10
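To make the TAM's core logic concrete, the minimal sketch below encodes its two constructs in a toy scoring function. This is not a validated instrument or anything drawn from this study: the 1–7 Likert scale, the weights, and the linear form are illustrative assumptions only.

```python
# Toy illustration of the TAM's core idea (not a validated instrument):
# intention to use a technology is modeled as a weighted combination of
# perceived usefulness (PU) and perceived ease of use (PEOU). The 1-7
# Likert scale and the weights below are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class TamResponse:
    perceived_usefulness: float    # e.g. mean of PU survey items, 1-7
    perceived_ease_of_use: float   # e.g. mean of PEOU survey items, 1-7


def acceptance_intention(r: TamResponse,
                         w_pu: float = 0.6,
                         w_peou: float = 0.4) -> float:
    """Toy linear predictor of a patient's intention to use a tool."""
    return w_pu * r.perceived_usefulness + w_peou * r.perceived_ease_of_use


# A patient who finds a symptom checker useful but hard to use:
print(acceptance_intention(TamResponse(6.0, 2.0)))  # -> 4.4
```

Even this toy version makes the limitation discussed next visible: the function is only meaningful when a patient actively chooses whether to use a tool.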

Despite their widespread use in assessing patient acceptance of other healthcare technologies, these models have limited relevance to predictions of patient response to healthcare AI. Existing behavioral models such as the HBM and TAM focus on volitional behavior change – the “why” and “how” of patient choices to adopt a health-related technology.6,11 This orientation limits their usefulness to a very small subset of AI applications in healthcare, namely, those AI tools that are used directly by patients. However, a large majority of AI applications in healthcare are not intended to be offered directly to patients, even though they may impact the care that patients receive. These applications of healthcare AI include clinical decision support tools, AI embedded in clinical workflows or electronic health records, and tools that evaluate the quality of care across a health system. These and other applications of AI in healthcare will not be “offered” to individual patients with an understanding that they can either accept or decline to use them. Rather, these AI tools will be deployed by healthcare systems, with varying levels of transparency provided to patients.

However, even these AI tools will have direct impacts on patients and their healthcare, will be developed and marketed using patient data, and will have the potential to harm patients should they be implemented irresponsibly. Thus, AI developers and those who support the implementation of these tools have an ethical responsibility to ensure these tools are deserving of the trust that many patients place in healthcare providers and healthcare systems.12,13 A key component of ethical technology development and implementation is patient-centered research to determine how patients see this new technology impacting their healthcare, including its parameters and limitations. This is particularly important when, as with many of the anticipated healthcare AI tools, the technology is not directly used, or even seen, by the patients themselves, leaving individual patients no opportunity to reject it or opt out. For these “behind-the-scenes” tools, the only way that patient voices are included is through deliberate and proactive patient engagement research.

To address these limitations in applying existing behavioral models to healthcare AI, there is an urgent need to develop alternative approaches to assessing patient attitudes and beliefs about applications of AI in healthcare. To this end, we sought to explore what frames of reference patients use in evaluating the acceptability of healthcare AI, including potential sources of apprehension and excitement about these emerging technologies. Creating a theoretical model of how patients are likely to form attitudes and beliefs about medical applications of AI is critical for developing AI tools that are responsive to patient needs and that anticipate potential patient concerns. Such a framework can also support AI creators in predicting patient responses to new AI applications, assist in clinical implementation planning, and direct AI innovation toward those applications that are of the greatest interest to patients. In this paper, we describe some of the more salient factors that patients consider in relation to healthcare AI and present a framework for predicting patient attitudes and beliefs about applications of AI in medicine and healthcare.

Methods

Recruitment

We contacted 946 patients from Minnesota and Wisconsin by phone between October 2019 and February 2020. Participants were over 18 years of age, conversant in English, and received $50 for their participation. All participants provided verbal consent at the beginning of each focus group. This study was approved by the Mayo Clinic Institutional Review Board.

Data collection

As previously reported, 14 the focus groups followed a semi-structured, case-based format.15,16 Focus groups allowed us to observe the dynamics between participants of diverse generational, social, occupational, and technological backgrounds as they discussed applications of AI in healthcare. Each focus group was preceded by a brief survey to collect demographic information. Focus groups were conducted by three research team members: one led the discussion while two others took notes. 17 Moderators began the discussion by examining general knowledge and opinions about AI, before giving a brief definition of AI. 18 Each group was then presented with a series of case studies illustrating the diversity of ways AI might be applied in medicine. These case studies focused on the use of AI for image analysis, optimizing preventive health, inpatient monitoring, diagnostic support, and engagement with patients during primary-care appointments (a selection of which was discussed at any given focus group). Case studies provided a context in which to examine initial participant reactions and promote in-depth engagement with specific aspects of healthcare AI. 19 Focus groups concluded with a general discussion of themes that emerged during the conversation, including overarching concerns about healthcare AI, comparisons between case studies, and sources of patient interest and support for healthcare AI.

Following each focus group, moderators met to compare field notes and create a summary memo for each focus-group session.20,21 This process was also used to modify the moderator guide for clarity and effectiveness throughout data collection.

Data analysis

A memo describing preliminary findings and emerging themes was generated for each case study after the initial six focus groups were completed. This process was repeated for the case studies used in the subsequent nine focus groups.20,21 Data collection, memoing, and modification of the moderator guide continued until the research team determined that they had reached thematic saturation. 22 Each focus group was audio recorded, transcribed verbatim by a transcription service, and reviewed by the study team for accuracy.

Qualitative data analysis was conducted using a modified inductive approach with constant comparison analysis.23,24 The synthesized memos were used to create a preliminary codebook, which three members of the analysis team applied to three transcripts drawn from different time points in data collection. The codebook was then revised, and a final version was created for use in analyzing the remaining focus groups.24,25 Each transcript was coded in duplicate by a primary coder (JR) and a secondary coder (CS or SC) using NVivo 11 software. The coders then met to resolve any coding discrepancies.
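As an illustration of the duplicate-coding workflow only (the authors report resolving disagreements through discussion, not by script), a minimal sketch of flagging segments where the two coders' codebook labels diverge might look like the following; the segment identifiers and code names are hypothetical.

```python
# Hypothetical sketch of flagging inter-coder discrepancies for discussion.
# Each transcript segment maps to the set of codebook labels a coder applied;
# the segment IDs and code names below are invented for illustration.
primary = {   # codes applied by the primary coder (JR)
    "FG11-023": {"nonmedical_tech", "voice_assistants"},
    "FG11-024": {"trust_in_providers"},
}
secondary = {  # same segments, coded independently by a secondary coder
    "FG11-023": {"nonmedical_tech"},
    "FG11-024": {"trust_in_providers"},
}

for segment in sorted(primary):
    disputed = primary[segment] ^ secondary[segment]  # symmetric difference
    if disputed:
        print(f"{segment}: resolve codes {sorted(disputed)}")
```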

Results

Demographics

Between November 2019 and February 2020, we conducted a total of 15 focus groups with 87 participants. Focus groups included three to seven participants and lasted approximately 90 min. A majority of participants were white (93%), non-Hispanic/Latino (94%), and had an education level above a high school equivalent (87%). Participant ages ranged from 18 to 91 years, with an average age of 53.5 years. Approximately half (49.4%) of our participants were women.

When reacting to the case studies we presented on healthcare AI, we found that patients consistently drew on multiple frames of reference across cases to contextualize this new technology. In our results, we present these contextual points of reference and how patients used them to form opinions about AI in healthcare.

Patients contextualized healthcare AI through their experiences and values associated with non-health technology

In addition to drawing on experiences with the healthcare system, participants saw connections between healthcare AI and non-medical technologies. Participants would often reference technology they use regularly as containing or being akin to AI, including smart phones, search engines, and home smart speakers. These anecdotes were often used as examples of how AI can be helpful but also intrusive.

A lot of us, any of us that have a smart phone are already using a form of AI. For example, voice to text. I use the Samsung version, and the more I use it, the more it can get my words accurately, and actually right now with my phone in the Samsung version, it gets it right I would say 95% of the time so… (FG 11)

Others recalled cutting-edge advances in non-medical AI that had been popularized in media, such as humanoid robots, self-driving cars, state surveillance, or success in strategy-based games. Discussion of these AI possibilities was often accompanied by amazement at the existing capacity of AI technology to perform feats participants had previously thought impossible.

Humans can’t win chess against Big Blue anymore because it has found ways not programmed in initially, and then there’s another… there’s a game that is an Asian game that is very, very popular, and it’s the same thing. The best grand masters of that can’t beat the machines because the artificial intelligence has figured out, so the humans have set template and then the artificial intelligence boom-boom-boom-boom (FG 9)

Some participants contextualized AI as just the newest face of a long history of technological progress. While not always in agreement about whether technological progress was necessarily good, many did describe feeling that it was inevitable and out of their control.

Well, I think that’s progress. I’m all for progress, and regardless what we think, some things are gonna go this way anyway, and it’s progress, and as it comes, sometimes, like I said, you gradually accept this… because I swore years ago I wasn’t going to have a computer. That was just ridiculous… but things change, because as things are changing, we accept it. But technology is here to stay, and there are gonna be things in another 50 years that we wouldn’t dream of and possibly dream of, so that’s where it’s going, and I think it will be accepted. (FG 5)

A minority of participants expressed extreme discomfort with technology, either personally or on behalf of someone they knew. They recalled avoiding popular technologies such as the cell phone or the internet for as long as possible, and feeling that technological advances were making humans and society worse.

Unless you’re like my uncle, my uncle says that we’d all be better off if we went back to the times where all this technology hadn’t been invented and computers hadn’t been invented. He says computers are a fad. (FG 15)

Finally, a very common association participants made when thinking about AI in healthcare was to cultural imagery of AI dystopias, often in the form of recalling a dystopian movie or invoking language of AI “taking over the world” (FG 12). While participants rarely viewed these scenarios as literal threats, they did share them to support a more general sense of warning and a need for caution.

P4: You’d be living in The Matrix.

P3: And that’s someplace in the future, but there has to be a point in our history where somebody, and hopefully it’s done right, that can control. . .the programming is controlled so it doesn’t get out of hand to where the computers take over our whole civilization.

P5: Instead I mean. . .I know it’s a movie, but I mean there have been movies about intelligence becoming intelligent enough to figure out how emotion works and to figure. . .and then computers take over computers, and. . .Humans become completely obsolete. (FG 8)

Patients framed the promise of AI in relation to past illness experiences

When evaluating the AI case studies we presented, participants often recalled personal challenges associated with a past illness or a chronic condition that they or a loved one experienced. Oftentimes, these prior encounters with medicine were crucial in shaping how they engaged with hypothetical AI innovations. Some participants described a past medical experience to illustrate how healthcare AI would not have been helpful in meeting their needs.

You know what, that’s just like for me, I have multiple sclerosis, but I also have fibromyalgia, so the thing is when I hurt, I can’t tell you why it hurts, you know what I’m saying? I can’t tell you, oh my fibromyalgia’s kicking up or I’m having a MS attack or whatever. I can’t tell you the difference, and if I can’t tell you the difference, I know AI cannot tell me the difference. (FG 8)

Participants who were more optimistic about healthcare AI, by contrast, described long and demanding diagnostic journeys involving repetitive testing or multiple visits to specialists. Participants hypothesized that healthcare AI may reduce the burden and uncertainty of these experiences.

I had a strange illness 10, 11 years ago, and I went through PET scans and CT scans and the whole 9 yards over and over and over and over again for 6 months, and I was miserable with it, and what they ended up doing, they had to take out part of my lung to find out what it was, and if there was some type of AI out there that could have helped in that… [I might have] skipped 6 months’ worth of trial and error. (FG 9)

Other participants did not believe healthcare AI would have made a difference in their illness experience. Instead, they felt that an AI might provide additional confidence and help in accepting a diagnosis, thereby increasing emotional comfort during difficult health challenges.

That’s not what she wanted to hear, so she went to another doctor, and she went to … several doctors because she knew there was a problem and she was going to find a doctor who confirmed what she knew. Now if there had been, or she had accepted some artificial intelligence, maybe that would’ve helped her. (FG 5)

Finally, some participants felt that healthcare AI could provide relief or hope in cases where a definitive diagnosis had proven elusive or where conventional therapies had failed.

Patients situated healthcare AI in relation to other trends in health systems and established health technology

One of the strongest frames of reference that patients used to evaluate and contextualize AI in healthcare was their experience with the current healthcare system, including difficulties they had encountered in that context. In considering new forms of healthcare AI, participants pointed to inefficiencies and inconveniences they experienced in prior medical encounters, wondering if new AI technologies might help in addressing those challenges.

Yeah, streamlining some of those questionnaires that aren’t necessary, and that leads to frustration for individuals, for patients, because it’s long days, you’re tired, you’re hungry, you’re far from home, whatever it might be, And then you’re asked the same questions over and over. It’s exhausting, exhausting to say the least… (FG 10)

When describing encounters with various healthcare systems, participants recalled difficulties maintaining long-term relationships with primary care providers. Some participants recalled being frustrated that their doctors moved around often or that they were not able to get an appointment with their preferred provider. This contributed to a general dissatisfaction with healthcare experiences, which shaped participants’ views about the impact of AI on their relationships with providers.

It’s widely known it’s tough to get in to see your primary these days, and it’s a national problem. There’s not enough doctors and nurses going into primary care, but if you can figure out a way to create more efficient use of scheduling and seeing a patient, there’s a huge problem with physician and nurse burnout because people are seeing more… they’re expected to see more and more patients, but in the end, you’re not sort of gaining any ground with satisfaction in some cases or you’re feeling overwhelmed. (FG 11)

Patients also recalled ways that the implementation or use of an electronic health record (EHR) impacted their care. This frequently took the form of complaints about the overdocumentation of health problems and the immortalization in the health record of issues that patients viewed as trivial. Many patients were concerned that healthcare AI might continue this trend, further diverting providers’ attention toward their computers and the electronic documentation of healthcare encounters.

In my last doctor visit, I asked if I could… while I was in there, I just had some minor things done, and I had asked if I could have some skin tags removed, or 1 or 2. She said, well I could certainly do that for you. So I asked her well if you do that, is there gonna be a charge? She said yeah, I’d have to charge you. I said even if I had one done? She said yes, and from my understanding, it was attributable to Epic where everything they do they have to log, and that goes through the system, (FG 1)

When discussing possible applications of AI, many patients referenced other health technologies. One of the most common parallels drawn was with online symptom checkers, with patients often asking how an AI-enabled tool in medicine would be different from those tools or framing healthcare AI as an advanced version of online tools with which they were familiar.

I think it would save time from googling. You know, I use the symptom calculators, I don’t know what they’re called, and you’ll enter your symptoms and see if it. . .what it might add up to, to get a little bit deeper, I think that would maybe give a little more detail. (FG 6)

Participants also mentioned remote care modalities, such as nurse help lines, virtual appointments, patient portal messaging, and mobilized virtual providers, as frames of reference in forming their expectations about AI technologies.

P3: Well I think of this August… all of a sudden, I’m getting headaches, and they weren’t going away, and I don’t like pills, so I was avoiding, and all of a sudden they were getting to the point, so I got on the phone with a nurse and she asked me a thousand questions, and she says, you wanna come in today or tomorrow? Ya gotta know if this question’s answered to get to the next… where you go with the next question. I don’t know if the robot would… artificial intelligence know, okay, this one answered yes, so we gotta go this direction or…

P5: It’d do it even faster. (FG 11)

Patients’ social context shaped their orientation to healthcare AI

Patients’ interpretations of their previous experiences with the healthcare system and non-AI health technology are nested within their broader social context, including the identities they carry and the communities they belong to. These social factors also influenced how patients engaged with, or imagined others would engage with, examples of AI in healthcare. One of the most common social factors that participants noted was generational differences in trust in and comfort with technology. Often this took the form of participants speculating about the opinions of another generation, or of intergenerational dialogue between participants.

For our generation, it is really uncomfortable. But young generations who have grown up with technology as this. . .well I look at my grandkids who are little, and that’s their world. So would they feel as uncomfortable as I might in this? I think, maybe not. Is it. . .are you more secure, do you feel better about what that computer is telling you than I would feel about it? (FG 7)

Participants also expressed awareness that the data used to train AI tools may not be appropriately diverse, and that this may be harmful to some disadvantaged social groups. These conversations occurred along common demographic variables such as race and gender, as well as factors like geographic region and vocation.

And as we become more and more diverse, and even our genealogies become more and more diverse, you want as much data as possible from as many different backgrounds as possible. And I would highly question if all of the data points were coming from one set. I think I’ve also heard statistics where like a lot of studies don’t include women. They are starting to include more women, but a lot of the original studies on medications involved only included men, and so there hasn’t been a lot of data on how it affects women’s bodies. So I would question heavily if it wasn’t more diverse. (FG 6)

Participants also recognized that the manner in which healthcare AI is implemented could differentially impact various populations by reinforcing stigma or limiting access. Some participants recalled examples of when they, or someone they knew, had been discriminated against due to mental illness, alcoholism, weight, or profession, and were concerned about how AI could amplify or nullify these stigmas.

One thing. . .I’m not sure if this is exactly what you’re asking, but just a thought, prejudices that people can have, like it could absorb those or it could be taught to work against them, like a lot of people who are overweight have said that their providers assume that that’s the cause and ignore doing other tests or pursuing other avenues, and if an AI wasn’t going to make the assumption that that was what was the problem, then that would be good, but if it was learning from people around it that it should make that assumption, then it would perpetuate the problem. (FG 13)

Another common concern among participants was whether AI would be accessible and inclusive for all patients. This included various challenges in communication such as strong accents, low literacy levels, or the need to communicate in sign language. Participants were also concerned about the impact on individuals who may be outside of the AI’s capabilities because of atypical sensory or emotional processing or genetic differences.

I guess I’m kinda curious as to wonder how would people with other challenges use that because I live in a boarding lodge, and there’s three or four people that live there that have been there for umpteen years, and the care there is not adequate for them. They were placed there after our state hospital closed, but there’s no way they would be able to understand and communicate a chat box. So I’m just wondering ‘cause you don’t wanna. . .I guess I’d be afraid that they weren’t gonna be helped or get skipped over. (FG 15)

Finally, participants were interested in the impact AI would have on rural communities. While most saw AI as an effective way of disseminating medical knowledge and increasing access through chatbots or telemedicine, a few were concerned that this could create an even larger gap in the quality of care experienced by communities with limited access to medical infrastructure.

Where you have someone stationed somewhere else that’s either videoconferencing in or getting data from another place and able to help make those clinical judgements, which has been a huge blessing. I know I have family in very, very rural areas that have to travel hours to get anywhere, and telehealth has been amazing for them so. (FG 11)

Patients reflected on whether healthcare AI aligns with the values of medicine

When reflecting on whether AI was compatible with healthcare broadly, participants invoked what they believed to be the goals and values of medicine and assessed how well AI could support those goals. Many participants believed healthcare was aimed at curing people of illness and maintaining good health, and they were supportive of AI so long as it was directed towards advancing those goals.

I think the goal of medicine is to either heal you or to live a more fuller, richer life, and I think if AI with all its knowledge can help do that, if it can help me get better, it’s like that extra little thing that. . .you know, that extra training that’ll make me a better runner, that extra math class that’ll help make me smarter in math. It’s just that little extra that maybe will achieve that goal of, like I said, leading healthier lives. (FG 7)

Others felt that one of the core tenets of medicine is innovation and scientific progress. These participants felt that AI offered a new avenue of technological advancement.

I mean what we know now and we think of what we thought we knew 50 or 100 years ago in medicine, and it’s just gonna keep building and building, and the potential to have that added resource to do that is a huge opportunity. It wasn’t that long ago that they used leeches, so. . .you know what I’m saying? I mean really in the span of time and how much exponentially we’ve learned about conditions. . .(FG 13)

It was also common for participants to extol the value of evidence-based medicine, stating that they wanted the maximum amount of data possible when making any decisions.

I think it’s good to have the more input. The more input that comes into here, the better I think it’s going to be able to respond to a situation. We definitely all agree, I think, in this room that the more information that is collected, the better off all people are gonna be. . .(FG 3)

Participants also reported that they believed AI would be well-equipped to help analyze large amounts of data and inform decision making. They felt that there was already a significant amount of medical knowledge in the existing research literature and medical data in health records, and they hoped AI would be able to capitalize on this existing data.

They might come back with okay, these are the tests that need to be run. Then you go and get those tests run, and they input that in, and then they can pinpoint and be more accurate with their diagnosis then, and I could see how they would be more accurate, really, with knowing everything’s that ever been known about modern medicine and all the cases out there and collating all the data, so I could see how they would be more accurate. (FG 12)

Participants also recognized that while medicine is grounded in innovation and evidence, it is also very personal and high stakes. Patients recalled moments of extreme vulnerability where they had to trust their lives to a care provider with little personal control over the situation.

So it’s your. . .how much do you trust this AI tool, but I can also say how much do I trust the existing healthcare system? And in both of those, it’s. . .when you’re really ill, you have to trust it ‘cause there’s no choice. (FG 3)

They also recognized that, while health systems hold a trusted place in communities, they also have commercial interests. Participants discussed how various business considerations might impact the development and implementation of healthcare AI.

Like who decides what’s good or bad? It’s relative depending on whatever company wants to make a bunch of money off their data. That’s what I’m the most nervous is about the corporate side of it. Who is regulating it? Who is saying this algorithm is good to go? There’s no. . .there isn’t that yet. (FG 6)

Discussion

This is one of the first studies to examine patient perspectives on applications of AI in healthcare, using specific case studies to illustrate how AI might be integrated into healthcare in the future. Predicting how patients will form attitudes and opinions about healthcare AI is critical to ensuring its responsible implementation and avoiding unrealistic beliefs about either its promise or its perils. 4 The results we report highlight several important frames of reference that are likely to shape patient views about healthcare AI.

Our results suggest that patients’ opinions about healthcare AI are shaped to a large extent by their evaluation of how well new AI technologies align with the goals and values of medicine. We also found that patient beliefs about healthcare AI were informed by past experiences of illness and healthcare, which often involved interactions with other digital technologies. When patients had experienced a traumatic or complex illness, for example, this influenced their opinions about the urgency of medical diagnosis and shaped their views regarding the potential role of AI in disease diagnosis. These personal experiences with healthcare technologies and illness also invoked implicit or explicit recognition of patients’ own values, and of how these values inform their beliefs about the goals and priorities of medicine generally. The core value of trust in a health system or physician recurred as patients reflected on their comfort with introducing more advanced, but less transparent, data-based AI tools. This complex interplay between patient trust in AI and in health systems or physicians echoes concerns in the literature about the role of trust in AI-enabled healthcare more broadly, 26 but adds a crucial patient perspective to the more common physician viewpoint. 27

Participants also noted that healthcare AI would likely impact patients in a variety of different ways, depending on each person's unique social and medical situation. For example, participants noted the potential for healthcare AI to amplify existing social biases, a concern that has been raised by commentators on healthcare AI. 28 Participants also noted how having a disability or limited access to healthcare may make AI less helpful, raising concerns that healthcare AI might perpetuate existing forms of bias in healthcare, including discrimination related to disability, weight, residence in medically underserved areas, and certain vocations. These considerations suggest that patient opinions about the value of healthcare AI will be shaped by their assessments of the extent to which these tools are implemented in a manner that promotes health equity.

Additionally, we found that patients’ comfort with other digital technologies played a significant role in shaping their views of healthcare AI. In evaluating less familiar healthcare AI, patients noted similarities with several nonmedical AI technologies, including smart phones, self-driving cars, smart home speakers, and culturally prominent AI systems like IBM's Deep Blue or Watson. These conversations not only revealed personal feelings of technological literacy but also extended to broad beliefs about the relative value of technological progress and innovation, which form the bridge between patient experiences of technology and attitude formation toward AI in healthcare. There was significant variability in opinions about the value of emerging AI technologies, ranging from strong positivity about innovation's power to move humanity forward to considerable apprehension about technology's negative impact on individuals and society. Participants also brought up fears associated with popular cultural portrayals of AI. These beliefs formed an important backdrop that participants used to evaluate healthcare AI.

These findings support a novel conceptual framework for understanding patient attitudes and beliefs about healthcare AI (Figure 1). The framework we propose reflects multiple interactions among patient experiences and beliefs related to healthcare technologies, highlighting the role of patient experiences with medical care, health technologies, nonmedical digital technologies, and their broader social context. These experiences are key frames of reference for many patients, shaping their foundational beliefs about healthcare and technology, which in turn shape their initial attitudes towards healthcare AI. Directly engaging with AI in healthcare will likely be uncommon for patients, both because this technology has a certain amount of inherent opacity due to its technological complexity (often described as a “black box” problem) and because implementation is commonly “behind the scenes” from the patient's perspective, as in systems-level optimization, deep workflow integration, or physician-facing tools. 29 As a result, a high-level understanding of patient attitudes and beliefs will be required to preserve patient trust in health systems. This is particularly important for AI innovators and developers, as it indicates the importance of early public engagement to encourage favorable opinions of healthcare AI based on realistic ideas of how it may be used.

Figure 1. Proposed conceptual framework for understanding how patients evaluate AI in healthcare.
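Because the figure itself is not reproduced here, the sketch below restates the framework's structure in code: four frames of reference feed foundational beliefs, which in turn yield an initial attitude. The field names are informal shorthand for the constructs named in the text, not the authors' formal labels.

```python
# Illustrative restatement of the proposed framework's structure (Figure 1);
# field names are informal shorthand for constructs named in the text.
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class FramesOfReference:
    illness_experiences: list[str] = field(default_factory=list)
    health_system_and_health_tech: list[str] = field(default_factory=list)
    nonmedical_technology: list[str] = field(default_factory=list)
    social_context: list[str] = field(default_factory=list)


@dataclass
class FoundationalBeliefs:
    goals_and_values_of_medicine: list[str] = field(default_factory=list)
    value_of_technological_progress: list[str] = field(default_factory=list)


@dataclass
class InitialAttitude:
    frames: FramesOfReference    # lived experience as points of reference
    beliefs: FoundationalBeliefs # normative beliefs those experiences inform
    stance: str                  # e.g. "optimistic", "apprehensive", "mixed"
```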

The framework we propose will be useful in understanding how patients think about healthcare AI and can inform future research examining the implementation of AI into patient care. An important next step in this research is to develop and quantitatively validate a conceptual model of attitude formation using our proposed constructs and outcomes. This would allow for crucial analysis of differences in patient perspectives along important variables like education level, access to healthcare services, income level, or health literacy. Additionally, our study is limited by the lack of racial and professional diversity in our sample; this is another very important area for future patient engagement research. While focus groups did enable important cross-demographic dialogue, they also pose the risk of dominant voices overpowering minority voices or pushing the group to artificial consensus. Moderators actively worked to cultivate a space for open discussion by encouraging disagreement and differing opinions, but this bias may still exist in the data set. As with all qualitative research, the results of this study may not be generalizable to other populations.

Conclusion

As AI is used more widely in healthcare, it is crucial to be able to predict how patients will react to this emerging technology as it becomes a larger part of their healthcare. Our study highlights a range of beliefs, experiences, and values that are likely to influence how patients evaluate and form specific opinions about applications of AI in healthcare. Examining how early attitudes are formed, and monitoring how those views change over time, will be important in assessing the extent to which patients view AI as a useful or threatening addition to their care. The conceptual framework developed from our results and presented here is an important first step in understanding patient attitudes about healthcare AI.

Acknowledgements

The authors thank Sara Watson for her help with moderating the focus groups.

Footnotes

Conflicting interests: The authors have no conflicts of interest to declare.

Contributorship: JPR contributed to study design, data collection, analysis, and writing. CS contributed to data collection, analysis, and writing. SC contributed to data collection, analysis, and writing. JP contributed to analysis and writing. XZ contributed to analysis and writing. BB contributed to analysis and writing. RRS contributed to study design, analysis, and writing.

Ethical approval: The Mayo Clinic Institutional Review Board approved this study (protocol # 19-010128).

Funding: This study was funded by the Mayo Clinic Biomedical Ethics Research Program; no external funding was associated with the work described.

Guarantor: RRS.

Informed consent: All participants provided verbal informed consent at the beginning of each focus group.

Trial Registration: Not applicable, because this article does not contain any clinical trials.

References

1. Matheny ME, Whicher D, Thadaney Israni S. Artificial intelligence in health care: a report from the National Academy of Medicine. JAMA 2020; 323: 509–510.
2. Matheny M, Israni ST, Ahmed M, et al. Artificial intelligence in health care: the hope, the hype, the promise, the peril. Natl Acad Med 2020: 94–97.
3. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med 2019; 25: 44–56.
4. Widen K, Olander S, Atkin B. Links between successful innovation diffusion and stakeholder engagement. J Manage Eng 2014; 30: 7.
5. Kim J, Park HA. Development of a health information technology acceptance model using consumers’ health behavior intention. J Med Internet Res 2012; 14: e133.
6. Champion VL, Skinner CS. The health belief model. Health Behavior and Health Education: Theory, Research, and Practice 2008; 4: 45–65.
7. Venkatesh V, Davis FD. A theoretical extension of the technology acceptance model: four longitudinal field studies. Manage Sci 2000; 46: 186–204.
8. Venkatesh V, Bala H. Technology acceptance model 3 and a research agenda on interventions. Decis Sci 2008; 39: 273–315.
9. Holden RJ, Karsh BT. The technology acceptance model: its past and its future in health care. J Biomed Inform 2010; 43: 159–172.
10. Ahadzadeh AS, Sharif SP, Ong FS, et al. Integrating health belief model and technology acceptance model: an investigation of health-related internet use. J Med Internet Res 2015; 17: 45.
11. Davis FD. User acceptance of information technology: system characteristics, user perceptions and behavioral impacts. Int J Man Mach Stud 1993; 38: 475–487.
12. Holzer JK, Ellis L, Merritt MW. Why we need community engagement in medical research. J Investig Med 2014; 62: 851–855.
13. Yarborough M, Edwards K, Espinoza P, et al. Relationships hold the key to trustworthy and productive translational science: recommendations for expanding community engagement in biomedical research. Clin Transl Sci 2013; 6: 310–313.
14. Richardson JP, Smith C, Curtis S, et al. Patient apprehensions about the use of artificial intelligence in healthcare. NPJ Digit Med 2021; 4: 140.
15. Wilkinson S. Focus group methodology: a review. Int J Soc Res Methodol 1998; 1: 181–203.
16. McLafferty I. Focus group interviews as a data collecting strategy. J Adv Nurs 2004; 48: 187–194.
17. Morrison-Beedy D, Côté-Arsenault D, Feinstein NF. Maximizing results with focus groups: moderator and analysis issues. Appl Nurs Res 2001; 14: 48–53.
18. Deo RC. Machine learning in medicine. Circulation 2015; 132: 1920–1930.
19. Hughes R. Using vignettes in qualitative research. Sociol Health Illn 1998; 20: 381–400.
20. Lempert LB. Asking questions of the data: memo writing in the grounded theory tradition. In: Bryant A, Charmaz K (eds) The SAGE handbook of grounded theory. London: SAGE, 2007.
21. Birks M, Chapman Y, Francis K. Memoing in qualitative research: probing data and processes. J Res Nurs 2008; 13: 68–75.
22. Guest G, Namey E, McKenna K. How many focus groups are enough? Building an evidence base for nonprobability sample sizes. Field Methods 2017; 29: 3–22.
23. Donovan J. The process of analysis during a grounded theory study of men during their partners’ pregnancies. J Adv Nurs 1995; 21: 708–715.
24. Onwuegbuzie AJ, Dickinson WB, Leech NL, et al. A qualitative framework for collecting and analyzing data in focus group research. Int J Qual Methods 2009; 8: 1–21.
25. MacQueen KM, McLellan E, Kay K, et al. Codebook development for team-based qualitative analysis. CAM Journal 1998; 10: 31–36.
26. Panch T, Szolovits P, Atun R. Artificial intelligence, machine learning and health systems. J Glob Health 2018; 8: 020303.
27. Asan O, Bayrak AE, Choudhury A. Artificial intelligence and human trust in healthcare: focus on clinicians. J Med Internet Res 2020; 22: e15154.
28. Ledford H. Millions of black people affected by racial bias in health-care algorithms. Nature 2019; 574: 608–609.
29. Polimanti R, Peterson RE, Ong JS, et al. Evidence of causal effect of major depression on alcohol dependence: findings from the psychiatric genomics consortium. Psychol Med 2019; 49: 1218–1226.

