Journal of the American Medical Informatics Association (JAMIA)
2023 Dec 14;31(3):611–621. doi: 10.1093/jamia/ocad224

Using artificial intelligence to promote equitable care for inpatients with language barriers and complex medical needs: clinical stakeholder perspectives

Amelia K Barwise 1, Susan Curtis 2, Daniel A Diedrich 3, Brian W Pickering 4
PMCID: PMC10873784  PMID: 38099504

Abstract

Objectives

Inpatients with language barriers and complex medical needs suffer disparities in quality of care, safety, and health outcomes. Although in-person interpreters are particularly beneficial for these patients, they are underused. We plan to use machine learning predictive analytics to reliably identify patients with language barriers and complex medical needs to prioritize them for in-person interpreters.

Materials and methods

This qualitative study used stakeholder engagement through semi-structured interviews to understand the perceived risks and benefits of artificial intelligence (AI) in this domain. Stakeholders included clinicians, interpreters, and personnel involved in caring for these patients or in organizing interpreters. Data were coded and analyzed using NVivo software.

Results

We completed 49 interviews. Key perceived risks included concerns about transparency, accuracy, redundancy, privacy, perceived stigmatization among patients, alert fatigue, and supply–demand issues. Key perceived benefits included increased awareness of in-person interpreters, an improved standard of care and prioritization for interpreter utilization, a streamlined process for accessing interpreters, empowered clinicians, and the potential to overcome clinician bias.

Discussion

This is the first study to elicit stakeholder perspectives on the use of AI with the goal of improving clinical care for patients with language barriers. The perceived benefits and risks of AI in this domain overlapped with known hazards and values of AI, but some benefits were unique to addressing the challenges of providing interpreter services to patients with language barriers.

Conclusion

Using artificial intelligence to identify and prioritize patients for interpreter services has the potential to improve the standard of care and address healthcare disparities among patients with language barriers.

Keywords: artificial intelligence, complex care, language barrier, health equity, non-English language preferred, LEP

Introduction

Language barriers cause enormous challenges for patients engaging with the healthcare system.1–5 Patients with language barriers experience disparities in quality of care, safety, and health outcomes, including more harm secondary to medical errors in all healthcare settings than patients who speak English.6–11 These disparities persist for patients with complex care needs, including at end of life and during critical illness.12–14 Our previous work and a population-based study by Yarnell et al demonstrated that immigrants and patients with language barriers have increased rates of healthcare utilization and suboptimal outcomes of critical illness, such as longer intensive care unit (ICU) stays, a higher likelihood of dying in the ICU, increased use of aggressive interventions, and poor symptom management.12,13 Additionally, these patients are less likely to adopt comfort-measures-only orders and do-not-resuscitate orders despite imminent death.13 When these approaches are adopted, there is a significant delay in doing so.

A framework developed by Cooper for delivering equitable healthcare supports leveraging interpreters’ skills to reduce cultural, language, and literacy barriers with the goal of improving communication between patients and clinicians.15 Several studies demonstrate that patients who do not have access to interpreters do poorly.16,17 A substantial body of evidence highlights the benefits of engaging interpreters to promote improved communication, clinical outcomes, and satisfaction with care among patients.16–21

Interpreter inclusion as part of the healthcare team is particularly helpful during discussions of complex care needs and should therefore be prioritized.22–24 A systematic review by Silva et al highlighted that patients with language barriers experienced worse quality of end-of-life care and goals-of-care discussions when professional interpreters were not used.23 In-person interpreters, whether performing verbatim interpretation, cultural brokering, or acting as health literacy guardians, are also perceived as beneficial by clinicians.25,26

US federal mandates entitle patients with language barriers to professional interpreter services during healthcare interactions, and these services may be provided either in-person or using virtual and remote modalities.27–35 Although evidence supports the use of professional interpreters in all healthcare interactions for patients with language barriers, a shortage of interpreters makes this challenging in practice.36,37 In addition to this barrier, reliance on clinicians to take the initiative to call for an interpreter impedes interpreter utilization in some clinical settings even when interpreters are available. Reasons include a lack of awareness among clinicians of the benefits of and legal requirement for interpreters, a perception that using interpreters takes up valuable time, concerns about cost, and systems factors that hamper the timely and appropriate use of interpreters at the bedside.29,38–43 However, studies suggest that clinicians are more likely to use trained professional interpreters when the hospital’s organizational culture supports specific systems to facilitate this process.44,45 Artificial intelligence (AI) deployed as part of the clinical workflow to identify patients with complex care needs and language barriers may enable an improved process.

Artificial intelligence is a potentially powerful technology that is being increasingly deployed in healthcare.46–54 Machine learning and predictive analytics can be leveraged for multiple purposes, including diagnosis, prognostication, and prevention of medication errors.55–58 We plan to use machine learning predictive analytics and workflow integration to support proactive outreach by interpreter services and prioritization of patients with complex care needs and language barriers. We define patients with complex care needs as those with a high burden of disease, those experiencing critical or serious illness, those with a life-limiting illness, and those with palliative care needs.
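The model itself is under development and its features and weights are not specified in this article. Purely as an illustrative sketch, the prioritization logic described above could resemble a simple score that ranks only patients with a documented language barrier, weighted by markers of medical complexity; every field name and weight below is hypothetical, not the study's actual algorithm.

```python
# Hypothetical sketch only: the authors' model, features, and weights
# are not published here. All names and numbers are illustrative.
from dataclasses import dataclass


@dataclass
class InpatientRecord:
    # Assumed EHR-derived fields (not the study's actual schema)
    interpreter_need_flag: bool   # documented non-English language preference
    in_icu: bool                  # currently in an intensive care unit
    has_palliative_care_needs: bool
    comorbidity_count: int        # crude proxy for burden of disease


def interpreter_priority(rec: InpatientRecord) -> float:
    """Return a score for ranking patients for in-person interpreters.

    Patients without a documented language barrier are not ranked at all;
    among those with one, higher medical complexity yields higher priority.
    """
    if not rec.interpreter_need_flag:
        return 0.0
    score = 1.0                                    # base: any language barrier
    score += 2.0 if rec.in_icu else 0.0            # critical illness
    score += 2.0 if rec.has_palliative_care_needs else 0.0
    score += min(rec.comorbidity_count, 5) * 0.5   # capped comorbidity term
    return score


# Rank a ward census so interpreter services can do proactive outreach
census = [
    InpatientRecord(True, True, False, 4),
    InpatientRecord(True, False, True, 1),
    InpatientRecord(False, True, False, 6),
]
ranked = sorted(census, key=interpreter_priority, reverse=True)
```

In a real deployment the score would come from a trained predictive model rather than hand-set weights, and the ranked list would feed the interpreter-services workflow rather than a simple sort.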

Despite the potential of AI, several concerns about the use of AI in healthcare settings exist.59–62 Stakeholder engagement is beneficial for understanding organizational readiness and for gaining insights towards sustainable implementation.63,64 This qualitative study uses stakeholder engagement to understand the perceived risks and benefits of using AI to identify patients with complex medical needs and language barriers to prioritize them for in-person interpreter use.

Methods

We conducted a qualitative study using semi-structured interviews of diverse healthcare team professionals caring for hospitalized patients with complex care needs and language barriers. The study was conducted at Mayo Clinic Rochester, a large quaternary care academic medical center with 2000 beds and 200 ICU beds. The hospital has a substantial proportion of patients with language barriers hospitalized for management of complex care issues.

The use of interpreter services is at the discretion of the healthcare team. In-person interpreters are available for several languages; in-person denotes an interpreter who is physically present at the bedside. However, if a healthcare team member prefers, or an in-person interpreter is not available to be physically present, remote video interpretation via a tablet is commonly utilized. These requests are frequently served remotely by our institutional interpreters and, when that is not possible, by outside vendors. Remote telephone interpretation is rarely used.

Informed oral consent was obtained from participants prior to interview. Mayo Clinic Institutional Review Board approved this research (IRB 22-002974).

Participants and recruitment

Participants were recruited via purposeful sampling65 with an emphasis on both variation (interdisciplinary role) and similarity (role tasks).66 We included ICU and hospitalist physicians, nurses, advanced practice providers, and multidisciplinary therapists, as well as interpreters, interpreter coordinators, and health unit coordinators (all older than 18 years). These stakeholders were selected because they either care for patients with complex medical needs and language barriers or are integral to the process of engaging interpreter services. We sought participants through divisional and departmental meetings as well as word of mouth. We continued to enroll participants until data saturation was achieved within each role type.67,68 We experienced no difficulties with recruitment.

Data collection and analysis

An interview guide with a series of open-ended questions was developed by the multidisciplinary team based on literature review, expert opinion, previously conducted work in AI and language barriers, and clinical experience.25,26,58,69–71 One author (SC) conducted one-on-one semi-structured interviews, in person and by phone, each lasting approximately 30 min, between November 2022 and April 2023.72 Participants heard a brief description of the proposed AI tool under development, which uses electronic health record data to identify patients with complex care needs and language barriers located in the ICU or on the floors. Participants were then asked how they viewed the use of this tool to prioritize these patients for in-person interpreter services and about potential benefits and risks of implementation. Interviews were audio-recorded and transcribed externally by a professional transcription firm, Landmark Associates. The transcripts were subsequently de-identified. The interview questions are included in the appendix interview guide.

Principles of grounded theory were used to analyze the transcripts with open, axial, and selective coding using NVivo Version 12 (QSR Intl, Burlington, MA).73 A codebook was developed using inductive and deductive analysis methods and then iteratively refined using six transcripts deliberately sampled to include diverse participant roles. These were coded independently by two coders, who met to discuss how well the codebook reflected the data and mutually agreed on codebook modifications. All transcripts were then coded independently and in duplicate (AKB, SC), and the coders met to reach consensus.74 The ongoing process of coding led to refinement of the codebook, the addition of some new codes and subcodes, and clearer definitions of existing codes.75 Following completion of coding, the investigators met to develop overarching themes and select representative statements.

Results

We interviewed 49 participants. Demographic characteristics are outlined in Table 1.

Table 1.

Characteristics of participants.

Characteristic                   LS personnel(a)  MD        RN         APP       Other(b)
                                 n=15             n=12      n=11       n=5       n=6
Female sex, n (%)                7 (43)           7 (58)    8 (72)     3 (60)    6 (100)
Age, yr, range                   32-64            32-56     22-50      32-42     34-69
Race
 White                           5                6         10         4         5
 Black                           5                          1          1
 Asian                           0                5                              1
 Native American                                  1
 Choose not to answer            5
Ethnicity
 Non-Hispanic                    6                12        11         5         6
 Hispanic                        5
 Choose not to answer            4
Born in US, yes, n (%)           6 (40)           5 (41)    11 (100)   4 (80)    4 (66)
Years experience(c)
 Average                         12.3             11.2      6.2        6.75      14
 Range                           2-25             5-40      0-17       1-14      3-33
Languages interpreted, n (%)
 Spanish                         6 (40)
 Somali                          4 (26)
 Arabic                          1 (6)
 Mandarin                        1 (6)
 Romanian                        1 (6)

(a) Language services personnel: interpreters and coordinators. Some coordinators speak English only.

(b) Other: health unit coordinators, occupational therapist, and physical therapist.

(c) Years experience: working in the ICU (MD, APP, RN, Other) or years interpreting (language services personnel).

The results are categorized as two major themes: perceived risks and concerns and perceived benefits. These themes each have several sub-themes. Tables 2 and 3 contain representative quotes of the major themes and sub-themes.

Table 2.

Perceived risks and concerns themes and representative quotes.

i. Transparency about use of AI
 Quote 1: “If I’m thinking as a patient, I’m thinking that you’re going to be input into this software machine, and it’s going to decide if you’re a priority or not. That’s kind of scary…” Interpreter 03
ii. Accuracy of AI
 Quote 2: “Taking away that contact or the context of somebody who may need it, and we don’t identify them as one. There might be some missed opportunities potentially.” Physician 04
 Quote 3: “… but how do you know who’s the priority? What’s the base? What do you base it on?” Interpreter 03
iii. Redundancy of AI in this context
 Quote 4: “I think we get those alerts raised with the “needs interpreter” flag on their charts. That information is readily available for their team. They just have to know where to look, and actually look at it. …” Physician 01
 Quote 5: “I feel my assessment on whether a patient qualifies for an interpreter is more than … is better than some algorithms” RN 11
 Quote 6: “for my style of practice I don’t need it, but once again I would respect that people may have a different perspectives and different needs.” Physician 14
iv. Overreliance on AI
 Quote 7: “I think about one of the tools that I know is the ECG AI. I think it even has a comment or something on there that says, “this is a tool … I would say, that (this AI) … this is just a tool” Physician 04
v. Privacy and confidentiality
 Quote 8: “I just don’t know exactly what is being done with that information.” RN 09
vi. Perceived stigmatization
 Quote 9: “what’s the process … because my only question is whether that in some way might mark or stigmatize a patient … sometimes people get upset if we call an interpreter … because some people don’t want an interpreter to be privy to their personal details.” Physician 07
vii. Implementation issues/alert fatigue
 Quote 10: “Where it’s put into practice … I don’t know … I feel we have to try to eventually know how to use it, how best to use it.” Interpreter 03
 Quote 11: “the risk is pretty minimal, but it’s just over-alerting I guess.” Physician 17
viii. Supply and demand
 Quote 12: “I anticipate there will be staffing issues … how do we man that … with enough interpreters in house?” Physician 17
 Quote 13: “we should be a bit more astute and sophisticated about this rather than just saying, “Oh, yeah, I need …, a Spanish interpreter, or an Arabic interpreter” because the dialects make a lot of difference, and especially the older generations, I worry that we may not getting all the nuances. Secondly, we should just have a lot more availability.” Physician 03

Table 3.

Perceived benefits themes and representative quotes.

i. Increase awareness of in-person interpreters
 Quote 1: “I think people just default to the iPads not realizing we do have interpreters in house that can come” RN 02
 Quote 2: “I have to admit that would be really cool if we got an alert—‘cause I know a lot of floors just think, “Oh. We’ll just use the iPad.” Coordinator 03
ii. Improved prioritization and resource allocation for patients
 Quote 3: “AI might actually help us on the flipside, (of) saying these people probably would benefit more or need a prioritization. I think it’s good to look at it from different perspectives.” Physician 04
 Quote 4: “by proactively finding these patients and providing these resources earlier on, I believe would be beneficial, not only in the care that we provide as a hospital but also the way that the patient interprets that care that they are receiving.” RN 04
 Quote 5: “I think it’s great to know who does need [interpreter services], so then we can—‘cause a lot of its lost in transit. I know the hospital’s a busy place.” Coordinator 02
iii. Empower and support bedside team
 Quote 6: “That decision is really one that is led by the bedside team, and so potentially you could have a situation where you have the room nurse empowered to call for an in-person interpreter without needing to go through the team first.” Physician 09
 Quote 7: “I think a prompt would be helpful because sometimes, as the bedside team, I can get wrapped up in a busy morning and not always anticipate that (a patient needs an interpreter). Maybe I realize, “Oh, we’re gonna have a goal of care,” but not always have the mental capacity even to put that into the forefront of my mind. That’s not always my biggest priority. Sometimes there’s medical things that take precedent, but maybe even having that reminder could be helpful.” RN 05
iv. Enhance interpreter preparation and input
 Quote 8: “language services is a prime example of a support for the bedside staff. Something that can make their job better and easier for us to access (it) is what I’m excited for AI to be used for.” RN 12
 Quote 9: “I think alerting the interpreter would be the biggest win … if they are prepared for that conversation and getting alerted that yes, we need them, they are going to be more prepared.” Physician 17
v. Streamline process and increase efficiency
 Quote 10: “I think that it’s always good to have an AI algorithm to reduce the workload of healthcare team members and make you more efficient. Algorithms are very important to avoid redundancies” Physician 15
 Quote 11: “I think what you guys are doing [study] I think is maybe the answer to the entire problem if our office [language services] knows that (there is) a patient in need- then they can call and schedule morning rounds. I think … it’d be a great tool.” Interpreter 01
vi. Improve standard of care
 Quote 12: “Maybe we are missing important patients at risk for this. Maybe that’s something that we’ll be able to learn. Maybe post intervention, maybe the standard would be that we’re having more in person interpreters, versus using the iPad …” Physician 15
 Quote 13: “I think most of it is good when we get interpreter in there in person versus video or the phone. I think in person we’re gonna be more beneficial to the patient, to the team, and to the interpreter as well.” Interpreter 07
vii. Overcome clinician bias
 Quote 14: “I think we do (use in-person interpreters) with certain people, but I think there’s definitely a chunk of people that aren’t (getting in-person interpreters). I feel like sometimes it’s not even a thought. It’s kind of an afterthought to get an in-person interpreter.” Coordinator 03
 Quote 15: “I don't think there are going to be risks and concerns. If anything, it would help us overcome our implicit biases where because of their limited English proficiency, they're—I hate to say it this way, but sort of sidelined and not communicated to. The ability to identify them at least will make us consciously do that. I don’t know that it’s risky to do[study]. It’s risky not to do it. That’s what I think.” Physician 11

Perceived risks and concerns (see Table 2 for themes and representative quotes)

We asked participants for responses to the use of AI for identifying and prioritizing patients with language barriers and complex care needs and then probed to uncover their concerns. When asked specifically about the use of AI, our participants’ responses echoed discussions of the ethical use of AI in healthcare overall.76 Participants’ concerns and identified risks relating to the use of AI for these patients were mentioned less frequently and articulated less strongly than their descriptions of benefits.

Themes related to risk included

(i) Transparency about use of AI

Participants commented on transparency of use, noting that patients might like to be aware that an algorithm was involved in prioritizing them for interpreter services. Participants also noted that patients might wonder why they were selected and what specific criteria are stored and used for this purpose (Quote 1).

(ii) Accuracy of AI

Other participants expressed doubt about the accuracy of the algorithm itself to identify and prioritize the patients most in need of interpreter services (Quote 2, Quote 3).

(iii) Redundancy of AI in this context

Some clinicians objected to the need for AI as we proposed it, pointing out that the information in the electronic medical record (EMR) and observation of the patient’s ability to communicate are sufficient to ascertain whether interpreter services are warranted (Quotes 4 and 5). These providers viewed the AI as superfluous and considered their own use of interpreters appropriate (Quote 6).

(iv) Overreliance on AI

Participants cautioned against overreliance on AI, stressing its use as a tool rather than a replacement for decision making (Quote 7).

(v) Privacy and confidentiality

Participants expressed few concerns related to loss of privacy or confidentiality for patients, but several raised concerns over potential unknown uses of data related to patients’ language proficiency (Quote 8).

(vi) Perceived stigmatization

Some participants noted that occasionally patients reject interpreter services because they feel their English language proficiency is adequate or because they prefer to communicate with the healthcare team independently or through family (Quote 9). Some noted that patients may feel stigmatized by being identified as needing an interpreter and refuse services on that basis.

(vii) Implementation, including alert fatigue

Logistical concerns identified by participants related to how the AI would be integrated into the existing clinical workflow. Many wondered how patient prioritization would be communicated to the patient’s healthcare team (Quote 10) and whether that communication itself would create additional burden. The most common concern among physician participants was alert fatigue (Quote 11).

(viii) Supply–demand issues

Some feared that a lack of sufficient in-person interpreters would disrupt care through long wait times (Quote 12, Quote 13). These participants recalled situations in which they could not obtain an interpreter who spoke the same dialect as their patient, or out-of-hours scenarios in which they perceived interpreters were not available.

Perceived benefits (see Table 3 for themes and representative quotes)

All participants agreed that in-person interpreters improve the patient experience and are especially important during complex care decision making, and all were supportive of increasing the use of interpreters for patients with complex care needs: “I think when teams recognize that an interpreter is going to be key to patient care, that is always a win for our patients and, I think, for our staff” (Interpreter 08).

Themes related to benefit included

(i) Increase awareness of in-person interpreters

Currently, healthcare teams often rely on video interpretation using a tablet (iPad), both out of habit and because of immediacy of need (Quote 1). While nearly all participants noted technical issues and acknowledged lower quality of communication when using a tablet, not all were aware of the availability of in-person interpreters in the hospital setting. This is due in part to pandemic restrictions, which limited the use of in-person bedside interpreters. Many participants, especially interpreters and coordinators, felt that highlighting the availability of in-person interpreters in the hospital setting would benefit everyone (Quote 2).

(ii) Improve prioritization and resource allocation for patients

Participants also recognized the potential ability of AI to identify and prioritize patients (Quote 3). These participants appreciated the idea of simplifying access to a service they view as beneficial to their patients (Quote 4). Improved allocation of interpreter resources to those most in need was noted as a key benefit by participants in all categories (Quote 5).

(iii) Empower and support healthcare team/bedside nurses

Some participants imagined that an AI-generated alert could serve as a nudge, or even an imperative, to take the steps necessary to ensure a patient receives interpreter services. This could potentially empower any member of the healthcare team to arrange in-person interpreter services without hesitating to use a scarce resource and without relying on a physician’s request (Quote 6, Quote 7).

(iv) Enhance interpreter preparation and input

Interpreters noted that using AI to identify patients would provide them with more timely notification and thus allow them to prepare for the interaction with the patient and family (Quote 8). This was seen as beneficial, enhancing their ability to effectively interpret complex discussions and increasing the perceived value of their skills (Quote 9).

(v) Streamline process and improve efficiency

Participants noted the potential for AI to improve the current processes for deploying interpreters throughout the hospital system (Quote 10). These participants were enthusiastic about an application of AI that solves a recognized systemic logistical issue (Quote 11).

(vi) Improve standard of care

Participants noted that in-person interpreters make clinicians’ jobs easier by improving the quality of communication with patients and providing context in situations where cultural approaches to healthcare decision making affect the dialog between patient, family, and clinician. In addition to those who pointed out benefits to individual patients, some participants identified benefits for the institution overall, with improved access to in-person interpreters as a new standard of care (Quote 12, Quote 13).

(vii) Overcome clinician bias

Participants pointed out that an AI tool might be fairer than the current system, which relies on clinician initiative to seek an in-person interpreter. Participants noted that the AI might alert them to consider interpreter services more often and for more patients (Quote 14). A few suggested the tool might overcome implicit bias that causes some clinicians to hesitate to call interpreter services, potentially due to concerns about inherent scheduling challenges (Quote 15).

Discussion

This qualitative research study explored stakeholders’ perceived risks and benefits of using AI to identify inpatients with language barriers and complex medical needs in order to prioritize interpreter services. It is the first study to elicit stakeholder perspectives on the use of AI with the goal of improving clinical care for inpatients with language barriers. Participants noted both risks and benefits of using AI in this domain. Some of these risks and benefits overlap with previously flagged hazards and values of AI, but some were new and unique to this application and are described below.76–78

Risks cited by participants related to the integration of the AI into the workflow and the potential for alert fatigue, redundancy, perceived stigmatization, and supply–demand issues, as well as transparency, accuracy, and privacy concerns. Benefits mentioned by participants included the potential for the AI to increase awareness of in-person interpreters, improve the standard of care and prioritization for interpreter utilization, streamline the process of engaging interpreters, empower clinicians, and overcome clinician bias.

Risks

Several participants were concerned about alert fatigue secondary to notifications about patients needing interpreter services. This concern is also common in other studies in which AI is used for clinical decision support regarding medication interactions.79–81 Related to this were concerns about the implementation of the AI into the workflow. These views appeared to be based on prior experience with inefficient interpreter services and frustration with shortages and delays. Some participants worried that active identification of patients with complex medical needs and language barriers across the institution might cause “supply and demand issues,” particularly for less commonly interpreted languages. Several studies show that AI has the potential to increase clinician workload, and some participants we interviewed conveyed concerns about AI creating more burden through inefficient incorporation into the clinical workflow.82 This exposes a tension between the promotion of AI as a means of addressing shortages in the healthcare workforce and its potential to increase clinician workload or otherwise create inefficiencies.

Some participants commented that the AI was not necessary because they felt they identified the need for interpreter services appropriately. These participants likely did not appreciate the benefit of an AI score for prioritizing service provision from the standpoint of institutional interpreter services, something no single clinician could do even while effectively advocating for interpreter services for their own patients. Although we recognize that health information technology, including AI, cannot be successful unless integrated, implemented, and accepted by the healthcare team, we do not anticipate that the success of the intervention will be determined by the small number of participants who thought the AI was unnecessary. However, further exploration of these perspectives is warranted during implementation.65

In our previous work, machine learning algorithms identifying patients likely to benefit from palliative care served as a nudge to clinicians to engage palliative care services.58,71,83 The proposed AI discussed here will similarly serve as a nudge to promote improved use of interpreters; if a shortage of interpreters occurs, the evaluation of medical complexity can additionally guide interpreter services in allocating resources.

Some participants expressed concern about the reliability and accuracy of the AI and whether it might erroneously flag the wrong patients, in other words, patients without complex medical needs or language barriers. These are common fears about AI in healthcare, and other studies have demonstrated similar perceived risks in other contexts.84 Trust in AI is a fundamental cornerstone of its successful adoption into practice85 and may be related to accuracy and understandability.78 Notably, lack of trust in AI was not articulated as a distinct concept by our participants.

As with most AI, a few participants had concerns about privacy and confidentiality; however, this was not a strong feature of our participants’ responses.86 Participants did voice some concerns about how information would be handled and shared, and who would review it, particularly during implementation of the AI.

A few risks that participants verbalized were not uniquely related to the AI but to identifying interpreter needs among patients generally. For example, a patient’s refusal to use an interpreter out of worry that their medical care will be discussed in their close-knit community relates to the tenets of interpretation ethics and the dual role interpreters often hold.87,88 The concern about perceived stigma among patients, while potentially exacerbated by knowledge that AI had been used, is not unique to AI per se and could also apply if the healthcare team requested interpreter services directly.

Overall, we found that the risks identified were similar to perceived risks of AI in other domains.89 However, one notable difference was that no participants voiced negative perceptions about the onus of legal responsibility if a patient were identified and interpreter services not engaged.90 Where legal responsibility falls when AI integrated into practice is used for clinical decision support is a commonly cited source of disquiet among clinicians.91,92

Perceived benefits

As well as the inherent benefits of in-person interpreters that almost all participants endorsed, many considered the AI a potentially useful mechanism to remind the healthcare team that in-person interpreters were available in the institution and should be used. Anecdotal evidence, as well as work we have conducted, has demonstrated that the COVID-19 pandemic caused a paradigm shift in how interpreter and language services have been provided to patients over the last 3 years.93,94 While many institutions have always relied exclusively on remote phone and video interpretation, institutions that had integrated in-person interpreters into clinical practice switched almost completely to remote interpretation.93,94 With high staff turnover during the pandemic years, many staff have only ever used remote video interpretation, and some are not even aware of the presence of in-person interpreters in the institution. Furthermore, even among those who are aware of these services, previously used them, and vouch for the importance and benefits of in-person interpreters, the usability and immediacy of video interpretation has made it challenging to change clinician behavior to re-adopt in-person interpreters now that pandemic safety measures have been discontinued.

Although bias woven into and perpetuated within AI-generated algorithms is often cited as a potential harm of AI, participants in our study noted the potential for this type of AI to overcome clinician bias that might otherwise prevent them from using interpreter services.95,96 There are several barriers to the use of in-person interpreters, including the challenge of easily arranging and coordinating convenient times.26 This deters many clinicians from using in-person interpreters and can lead to delayed, avoided, or suboptimal communication. The finding that the use of AI in the manner we propose could overcome bias, and thereby increase communication and potentially reduce disparities in communication and care, is encouraging.

Several participants noted that, in their own experience, interpreters may be deployed to a lower-priority setting, depriving a patient in a more critical setting of their services. These participants recognized the value of AI for prioritizing in-need patients across the whole institution, a task not feasible for human interpreter coordinators. AI has been used to improve prioritization in other domains such as radiology and the emergency department.97–99 In this way, stakeholders endorsed the benefit of the AI providing an objective assessment of patient need.

Nurses commented that AI could support their work by acting as a reminder to engage interpreter services. Some participants envisioned bedside nurses empowered by the AI (when integrated into a streamlined process), as it could galvanize them to engage interpreter services without needing to seek input from the healthcare team. Successful integration of AI into fields such as oncology can enhance clinicians’ ability to provide personalized, effective, and evidence-based care.100 Work by Li et al. has indicated that AI has the potential to empower clinicians to collaborate and communicate more effectively when identifying patients with clinical deterioration.101 In our study, integration of the AI into the workflow was perceived by most participants as beneficial and likely to streamline the process of accessing and providing interpreter services and to increase efficiency.101

Many participants, especially physicians, stated that interpreters helped them understand the cultural considerations of patients and families, a key benefit not replicated with remote video interpretation. Our previous qualitative work suggests that clinicians value in-person interpreters for this reason and perceive that poor communication, and a failure by healthcare teams to understand and acknowledge sociocultural differences for patients and families with language barriers, contribute to suboptimal decision making, poor quality care, and disparities in outcomes.69 The importance of in-person interpreters for discussions involving patients with complex medical needs cannot be overstated. National organizations increasingly recognize the importance of good communication for high-quality decision making and the role of language and culture in achieving care that aligns with patients’ cultural and spiritual values.102,103

Wang’s work describes autonomy, beneficence, explainability, justice, and non-maleficence as the five key signals of AI responsibility in healthcare.104 Explainability denotes that an AI system’s workings can be understood.105 The participants in our study alluded to these concepts only in a very general way. Similarly, related terms such as transparency were not widely raised by our participants.106,107

We examined whether responses differed by role. From a risks and concerns perspective, for example, most bedside nurses were very satisfied with video interpretation and expressed concerns about delays during coordination of in-person interpreters; some even worried that access to video interpretation would be taken away from them. In contrast, most physicians reiterated the value of in-person interpreters, but a few voiced concerns about electronic medical record-related alerts, potential stigmatization, and redundancy. Interpreters and coordinators were most likely to voice concerns about using AI for tasks currently done by a human.

In terms of benefits, all roles recognized the value of in-person interpreters and that AI could increase awareness of their potential availability and worth. Since bedside nurses are often charged with arranging in-person interpreters for the healthcare team, they were more likely to support the idea of using AI to streamline the process, although some physicians and interpreters noted this benefit too. Interpreters believed that using AI to prompt healthcare teams to use an in-person interpreter would help coordinators better anticipate needs and demand across the institution, potentially giving interpreters time to prepare for in-person encounters.

Strengths of the study include our robust qualitative methods: two coders coded each transcript independently and in duplicate, and we reached consensus at weekly meetings. Moreover, we were able to triangulate our data by interviewing diverse groups (physicians, bedside nurses, advanced practice providers, and those working in interpreter services).108 These measures enhance scientific rigor and validity and ensure the trustworthiness of our findings.

This study also has several limitations. It was a single-center study; therefore, the findings may not be generalizable to other institutions. Several participants had limited knowledge about AI in general and its potential risks and benefits, with some misunderstanding the function of the AI in this work. A few participants misunderstood the AI algorithm and believed it was using electronic medical data parameters to determine English language proficiency rather than medical complexity.

Some participants, especially bedside nurses, did not have experience with in-person interpreters, so they could not provide much perspective on their benefits or optimal use and were satisfied with remote video interpreters. Future work to gather more focused clinician perceptions might consider the use of patient and clinician scenario vignettes, as well as a graphical user interface in which AI-identified patients with language barriers are prioritized using a complexity score and a proposed workflow for integrating the AI into practice is clearly presented for feedback.

As well as understanding the perceived risks and benefits of the use of AI in this domain, we acknowledge that in order for AI to benefit patients, it must be integrated effectively into clinical practice workflows.51,56,109,110 Unlike some other work, this study did not address specific implementation issues.111 Furthermore, like any clinical intervention, a predictive model, its deployment, and its implementation and impact should be subject to robust evaluation beyond simply assessing predictive accuracy and reliability.112,113

In addition, ascertaining the perspectives of patients and community members about the use of AI in healthcare, both generally and for specific tasks, is important.62 We acknowledge that not including patient voices in this work is a limitation and an area for future study. Although we did not seek patient perspectives in this study, we did engage with our institution’s Health Data and Technology Community Advisory Board (CAB), which meets monthly and invites investigators using AI to present their research. When we discussed this research program, why we believed it was important, and the perceived risks and benefits, we received overall positive feedback from the community members at the CAB. We plan to present again in the future.

A stepped-wedge cluster randomized trial evaluating the efficacy of the AI tool to prioritize patients with language barriers and complex medical needs for interpreter services is currently underway. The unique potential benefits and concerns of the use of AI in this realm, based on those noted by participants, are outlined in the textbox below.

Conclusion

We conducted a qualitative study using semi-structured interviews of diverse clinical stakeholders, who highlighted potential benefits related to the use of AI when caring for patients with language barriers. This specific application of AI may remediate gaps in clinician knowledge and training about when to use in-person interpreters; may effectively support prioritization for interpreters when shortages occur; can highlight an important human resource to optimize face-to-face communication; and may support best practice to address health disparities, quality, and safety. Risks and concerns must be continually evaluated, but these were voiced less frequently and less vociferously than the potential benefits noted by participants.

Textbox: Unique perceived potential benefits and concerns of use of AI to promote equitable care for patients with language barriers.

Benefits

  • Alerts remediate gaps in healthcare team/clinician training on use of in-person interpreters at the bedside.

  • Effectively supports prioritization of interpreters across a large institution if shortages occur.

  • Uses technology to highlight a human resource to optimize face-to-face communication for patients with language barriers.

  • Promotes best practice to improve quality and safety to address health disparities among patients with language barriers.

Concerns

  • Some clinicians already ascertain when to use an interpreter without the use of AI.

  • By identifying the need for interpreters, AI may highlight supply–demand challenges with in-person interpreter availability.

  • Using AI specifically to identify patients needing an interpreter may cause patients to feel stigmatized.

Supplementary Material

ocad224_Supplementary_Data

Acknowledgments

The authors would like to thank Dan J. Tschida-Reuter for providing guidance on understanding and accessing Language Services personnel and Austin Stroud for reviewing the manuscript and providing helpful edits and guidance.

Contributor Information

Amelia K Barwise, Biomedical Ethics Research Program, Pulmonary and Critical Care Medicine, Mayo Clinic, Rochester, MN 55902, United States.

Susan Curtis, Biomedical Ethics Research Program, Mayo Clinic, Rochester, MN 55902, United States.

Daniel A Diedrich, Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55902, United States.

Brian W Pickering, Department of Anesthesiology and Perioperative Medicine, Mayo Clinic, Rochester, MN 55902, United States.

Author contributions

A.K.B. conceived this study, designed, and conducted analysis with advice and critical review from D.A.D. B.W.P. consulted on study design and provided AI expertise. A.K.B. and S.C. conducted data analysis and drafted the manuscript with input from all authors. All authors provided revisions for intellectual content. All authors approved of the final manuscript.

Supplementary material

Supplementary material is available at Journal of the American Medical Informatics Association online.

Funding

This work was supported by the U.S. Department of Health and Human Services and U.S. Public Health Service Agency for Healthcare Research and Quality Grant R21 HS028475.

Conflicts of interest

The authors have no competing interests to report.

Data availability

All data relevant to the analysis are incorporated into the article.

References

  • 1. Flores G, Abreu M, Olivar MA, Kastner B.. Access barriers to health care for Latino children. Arch Pediatr Adolesc Med. 1998;152(11):1119-1125. [DOI] [PubMed] [Google Scholar]
  • 2. Flores G. Language barriers to health care in the United States. N Engl J Med. 2006;355(3):229-231. [DOI] [PubMed] [Google Scholar]
  • 3. Schenker Y, Karter AJ, Schillinger D, et al. The impact of limited English proficiency and physician language concordance on reports of clinical interactions among patients with diabetes: the DISTANCE study. Patient Educ Couns. 2010;81(2):222-228. 10.1016/j.pec.2010.02.005 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 4. Woloshin S, Schwartz LM, Katz SJ, Welch HG.. Is language a barrier to the use of preventive services? J Gen Intern Med. 1997;12(8):472-477. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5. Ferguson WJ, Candib LM.. Culture, Language, and the Doctor–Patient Relationship. FMCH Publications and Presentations; 2002:61. [PubMed] [Google Scholar]
  • 6. Orom H. Nativity and perceived healthcare quality. J Immigr Minor Health. 2016;18(3):636-643. [DOI] [PubMed] [Google Scholar]
  • 7. John‐Baptiste A, Naglie G, Tomlinson G, et al. The effect of English language proficiency on length of stay and in‐hospital mortality. J Gen Intern Med. 2004;19(3):221-228. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8. Karliner LS, Kim SE, Meltzer DO, Auerbach AD.. Influence of language barriers on outcomes of hospital care for general medicine inpatients. J Hosp Med. 2010;5(5):276-282. 10.1002/jhm.658 [DOI] [PubMed] [Google Scholar]
  • 9. Hampers LC, Cha S, Gutglass DJ, Binns HJ, Krug SE.. Language barriers and resource utilization in a pediatric emergency department. Pediatrics. 1999;103(6 Pt 1):1253-1256. [DOI] [PubMed] [Google Scholar]
  • 10. Harmsen JA, Bernsen RM, Bruijnzeels MA, Meeuwesen L.. Patients' evaluation of quality of care in general practice: what are the cultural and linguistic barriers? Patient Educ Couns. 2008;72(1):155-162. 10.1016/j.pec.2008.03.018 [DOI] [PubMed] [Google Scholar]
  • 11. Cheng EM, Chen A, Cunningham W.. Primary language and receipt of recommended health care among Hispanics in the United States. J Gen Intern Med. 2007;22(Suppl 2):283-288. 10.1007/s11606-007-0346-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12. Yarnell CJ, Fu L, Manuel D, et al. Association between immigrant status and end-of-life care in Ontario, Canada. JAMA. 2017;318(15):1479-1488. 10.1001/jama.2017.14418 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13. Barwise A, Jaramillo C, Novotny P, et al. Differences in code status and end-of-life decision making in patients with limited English proficiency in the intensive care unit. Mayo Clin Proc. 2018;93(9):1271-1281. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 14. Partain DK, Sanders JJ, Leiter RE, Carey EC, Strand JJ.. End-of-life care for seriously ill international patients at a global destination medical center. Mayo Clin Proc. 2018;93(12):1720-1727. [DOI] [PubMed] [Google Scholar]
  • 15. Cooper LA, Hill MN, Powe NR.. Designing and evaluating interventions to eliminate racial and ethnic disparities in health care. J Gen Intern Med. 2002;17(6):477-486. 10.1046/j.1525-1497.2002.10633.x [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16. Karliner LS, Jacobs EA, Chen AH, Mutha S.. Do professional interpreters improve clinical care for patients with limited English proficiency? A systematic review of the literature. Health Serv Res. 2007;42(2):727-754. 10.1111/j.1475-6773.2006.00629.x [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17. Flores G. The impact of medical interpreter services on the quality of health care: a systematic review. Med Care Res Rev. 2005;62(3):255-299. 10.1177/1077558705275416 [DOI] [PubMed] [Google Scholar]
  • 18. Karliner LS, Pérez-Stable EJ, Gregorich SE.. Convenient access to professional interpreters in the hospital decreases readmission rates and estimated hospital expenditures for patients with limited English proficiency. Med Care. 2017;55(3):199-206. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19. Wu MS, Rawal S.. “It’s the difference between life and death”: the views of professional medical interpreters on their role in the delivery of safe care to patients with limited English proficiency. PLoS One. 2017;12(10):e0185659. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20. Bagchi AD, Dale S, Verbitsky-Savitz N, Andrecheck S, Zavotsky K, Eisenstein R.. Examining effectiveness of medical interpreters in emergency departments for Spanish-speaking patients with limited English proficiency: results of a randomized controlled trial. Ann Emerg Med. 2011;57(3):248-256. e4. [DOI] [PubMed] [Google Scholar]
  • 21. Green AR, Ngo-Metzger Q, Legedza AT, Massagli MP, Phillips RS, Iezzoni LI.. Interpreter services, language concordance, and health care quality. J Gen Intern Med. 2005;20(11):1050-1056. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 22. Norris WM, Wenrich MD, Nielsen EL, Treece PD, Jackson JC, Curtis JR.. Communication about end-of-life care between language-discordant patients and clinicians: insights from medical interpreters. J Palliat Med. 2005;8(5):1016-1024. [DOI] [PubMed] [Google Scholar]
  • 23. Silva MD, Genoff M, Zaballa A, et al. Interpreting at the end of life: a systematic review of the impact of interpreters on the delivery of palliative care services to cancer patients with limited English proficiency. J Pain Symptom Manage. 2016;51(3):569-580. 10.1016/j.jpainsymman.2015.10.011 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24. Silva MD, Tsai S, Sobota RM, Abel BT, Reid MC, Adelman RD.. Missed opportunities when communicating with limited English-proficient patients during end-of-life conversations: insights from Spanish-speaking and Chinese-speaking medical interpreters. J Pain Symptom Manage. 2020;59(3):694-701. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25. Suarez NRE, Urtecho M, Jubran S, et al. The roles of medical interpreters in intensive care unit communication: a qualitative study. Patient Educ Couns. 2020;104(5):1100-1108. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26. Suarez NRE, Urtecho SM, Nyquist CA, et al. Consequences of suboptimal communication for patients with limited English proficiency in the intensive care unit and suggestions for a way forward: a qualitative study of healthcare team perceptions. J Crit Care. 2021;61:247-251. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27. Jacobs EA, Press VG, Vela MB.. Use of interpreters by physicians. J Gen Intern Med. 2015;30(11):1589-1589. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28. Diamond LC, Schenker Y, Curry L, Bradley EH, Fernandez A.. Getting by: underuse of interpreters by resident physicians. J Gen Intern Med. 2009;24(2):256-262. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29. Kale E, Syed HR.. Language barriers and the use of interpreters in the public health services. A questionnaire-based survey. Patient Educ Couns. 2010;81(2):187-191. [DOI] [PubMed] [Google Scholar]
  • 30. López L, Rodriguez F, Huerta D, Soukup J, Hicks L.. Use of interpreters by physicians for hospitalized limited English proficient patients and its impact on patient outcomes. J Gen Intern Med. 2015;30(6):783-789. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31. Keers-Sanchez A. Mandatory provision of foreign language interpreters in health care services. J Leg Med. 2003;24(4):557-578. [DOI] [PubMed] [Google Scholar]
  • 32. Ginde AA, Clark S, Camargo CA.. Language barriers among patients in Boston emergency departments: use of medical interpreters after passage of interpreter legislation. J Immigrant Minority Health. 2009;11(6):527. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33. Baker DW, Parker RM, Williams MV, Coates WC, Pitkin K.. Use and effectiveness of interpreters in an emergency department. JAMA. 1996;275(10):783-788. [PubMed] [Google Scholar]
  • 34. Ramirez D, Engel KG, Tang TS.. Language interpreter utilization in the emergency department setting: a clinical review. J Health Care Poor Underserved. 2008;19(2):352-362. [DOI] [PubMed] [Google Scholar]
  • 35. Brumbaugh JE, Tschida‐Reuter DJ, Barwise AK.. Meeting the needs of the patient with non‐English language preference in the hospital setting. Health Serv Res. 2023;58(5):965-969. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36. Aitken G. Medical students as certified interpreters. AMA J Ethics. 2019;21(3):232-238. [DOI] [PubMed] [Google Scholar]
  • 37. Dwyer J. Babel, justice, and democracy: reflections on a shortage of interpreters at a public hospital. Hastings Center Rep. 2001;31(2):31-36. [PubMed] [Google Scholar]
  • 38. Gray B, Hilder J, Donaldson H.. Why do we not use trained interpreters for all patients with limited English proficiency? Is there a place for using family members? Aust J Prim Health. 2011;17(3):240-249. [DOI] [PubMed] [Google Scholar]
  • 39. Hadziabdic E, Albin B, Heikkilä K, Hjelm K.. Healthcare staffs perceptions of using interpreters: a qualitative study. Primary Health Care. 2010;11(03):260-270. [Google Scholar]
  • 40. Burkle CM, Anderson KA, Xiong Y, Guerra AE, Tschida-Reuter DA.. Assessment of the efficiency of language interpreter services in a busy surgical and procedural practice. BMC Health Serv Res. 2017;17(1):456. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41. Jacobs EA, Diamond LC, Stevak L.. The importance of teaching clinicians when and how to work with interpreters. Patient Educ Couns. 2010;78(2):149-153. [DOI] [PubMed] [Google Scholar]
  • 42. Jacobs B, Ryan AM, Henrichs KS, Weiss BD.. Medical interpreters in outpatient practice. Ann Fam Med. 2018;16(1):70-76. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 43. Feiring E, Westdahl S.. Factors influencing the use of video interpretation compared to in-person interpretation in hospitals: a qualitative study. BMC Health Serv Res. 2020;20(1):856. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 44. Baurer D, Yonek JC, Cohen AB, Restuccia JD, Hasnain-Wynia R.. System-level factors affecting clinicians’ perceptions and use of interpreter services in California public hospitals. J Immigr Minor Health. 2014;16(2):211-217. [DOI] [PubMed] [Google Scholar]
  • 45. Narang B, Park S-Y, Norrmén-Smith IO, et al. The use of a mobile application to increase access to interpreters for cancer patients with limited English proficiency: a pilot study. Med Care. 2019;57(Suppl 6 Suppl 2):S184-S189. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46. Das N, Topalovic M, Janssens W.. Artificial intelligence in diagnosis of obstructive lung disease: current status and future potential. Curr Opin Pulm Med. 2018;24(2):117-123. [DOI] [PubMed] [Google Scholar]
  • 47. Zhou L-Q, Wang J-Y, Yu S-Y, et al. Artificial intelligence in medical imaging of the liver. World J Gastroenterol. 2019;25(6):672-682. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48. Niel O, Bastard P.. Artificial intelligence in nephrology: core concepts, clinical applications, and perspectives. Am J Kidney Dis. 2019;74(6):803-810. [DOI] [PubMed] [Google Scholar]
  • 49. Thiébaut R, Thiessard F, Artificial intelligence in public health and epidemiology. Yearb Med Inform. 2018;27(1):207-210. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 50. Mayo RC, Leung J.. Artificial intelligence and deep learning – radiology's next frontier? Clin Imaging. 2018;49:87-88. [DOI] [PubMed] [Google Scholar]
  • 51. Miller DD, Brown EW.. Artificial intelligence in medical practice: the question to the answer? Am J Med. 2018;131(2):129-133. [DOI] [PubMed] [Google Scholar]
  • 52. He J, Baxter SL, Xu J, Xu J, Zhou X, Zhang K.. The practical implementation of artificial intelligence technologies in medicine. Nat Med. 2019;25(1):30-36. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 53. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25(1):44-56. [DOI] [PubMed] [Google Scholar]
  • 54. Murdoch TB, Detsky AS.. The inevitable application of big data to health care. JAMA. 2013;309(13):1351-1352. [DOI] [PubMed] [Google Scholar]
  • 55. Rozenblum R, Rodriguez-Monguio R, Volk LA, et al. Using a machine learning system to identify and prevent medication prescribing errors: a clinical and cost analysis evaluation. Jt Comm J Qual Patient Saf. 2020;46(1):3-10. [DOI] [PubMed] [Google Scholar]
  • 56. Peterson ED. Machine learning, predictive analytics, and clinical practice: can the past inform the present? JAMA. 2019;322(23):2283-2284. [DOI] [PubMed] [Google Scholar]
  • 57. Fatima M, Pasha M.. Survey of machine learning algorithms for disease diagnostic. JILSA. 2017;09(01):1. [Google Scholar]
  • 58. Wilson PM, Ramar P, Philpot LM, et al. Effect of an artificial intelligence decision support tool on palliative care referral in hospitalized patients: a randomized clinical trial. J Pain Symptom Manage. 2023;66(1):24-32. [DOI] [PubMed] [Google Scholar]
  • 59. Emanuel EJ, Wachter RM.. Artificial intelligence in health care: will the value match the hype? JAMA. 2019;321(23):2281-2282. [DOI] [PubMed] [Google Scholar]
  • 60. Panch T, Mattie H, Celi LA.. The “inconvenient truth” about AI in healthcare. NPJ Digit Med. 2019;2(1):77. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 61. Sauerbrei A, Kerasidou A, Lucivero F, Hallowell N.. The impact of artificial intelligence on the person-centred, doctor–patient relationship: some problems and solutions. BMC Med Inform Decis Mak. 2023;23(1):73-14. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 62. Richardson JP, Smith C, Curtis S, et al. Patient apprehensions about the use of artificial intelligence in healthcare. NPJ Digit Med. 2021;4(1):140. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 63. Juciute R. ICT implementation in the health-care sector: effective stakeholders’ engagement as the main precondition of change sustainability. AI & Soc. 2009;23(1):131-137. [Google Scholar]
  • 64. Alami H, Lehoux P, Denis J-L, et al. Organizational readiness for artificial intelligence in health care: insights for decision-making and practice. J Health Organ Manag. 2021;35(1):106-114. [DOI] [PubMed] [Google Scholar]
  • 65. Palinkas LA, Horwitz SM, Green CA, Wisdom JP, Duan N, Hoagwood K.. Purposeful sampling for qualitative data collection and analysis in mixed method implementation research. Adm Policy Ment Health. 2015;42(5):533-544. 10.1007/s10488-013-0528-y [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 66. Sharma G. Pros and cons of different sampling techniques. Int J Appl Res. 2017;3(7):749-752. [Google Scholar]
  • 67. Francis JJ, Johnston M, Robertson C, et al. What is an adequate sample size? Operationalising data saturation for theory-based interview studies. Psychol Health. 2010;25(10):1229-1245. [DOI] [PubMed] [Google Scholar]
  • 68. Saunders B, Sim J, Kingstone T, et al. Saturation in qualitative research: exploring its conceptualization and operationalization. Qual Quant. 2018;52(4):1893-1907. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 69. Barwise AK, Nyquist CA, Suarez NRE, et al. End-of-life decision-making for ICU patients with limited English proficiency: a qualitative study of healthcare team insights. Crit Care Med. 2019;47(10):1380-1387. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 70. Wilson PM, Philpot LM, Ramar P, et al. Improving time to palliative care review with predictive modeling in an inpatient adult population: study protocol for a stepped-wedge, pragmatic randomized controlled trial. Trials. 2021;22(1):635-639. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 71. Murphree DH, Wilson PM, Asai SW, et al. Improving the delivery of palliative care through predictive modeling and healthcare informatics. J Am Med Inform Assoc. 2021;28(6):1065-1073. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 72. Rubin HJ, Rubin IS.. Qualitative Interviewing: The Art of Hearing Data. Sage; 2011. [Google Scholar]
  • 73. Lumivero. NVivo software, QSR Intl Inc; 2023. https://www.qsrinternational.com/nvivo-qualitative-data-analysis-software/home
  • 74. O’Connor C, Joffe H.. Intercoder reliability in qualitative research: debates and practical guidelines. Int J Qual Methods. 2020;19:1609406919899220. [Google Scholar]
  • 75. MacQueen KM, McLellan E, Kay K, Milstein B.. Codebook development for team-based qualitative analysis. Cam J. 1998;10(2):31-36. [Google Scholar]
  • 76. Lambert SI, Madi M, Sopka S, et al. An integrative review on the acceptance of artificial intelligence among healthcare professionals in hospitals. NPJ Digit Med. 2023;6(1):111. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 77. Cornelissen L, Egher C, van Beek V, Williamson L, Hommes D.. The drivers of acceptance of artificial intelligence–powered care pathways among medical professionals: web-based survey study. JMIR Form Res. 2022;6(6):e33368. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 78. Schwartz JM, George M, Rossetti SC, et al. Factors influencing clinician trust in predictive clinical decision support systems for in-hospital deterioration: qualitative descriptive study. JMIR Hum Factors. 2022;9(2):e33960. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 79. Joseph AL, Borycki EM, Kushniruk AW.. Alert fatigue and errors caused by technology: a scoping review and introduction to the flow of cognitive processing model. Knowl Manage e-Learn. 2021;13(4):500. [Google Scholar]
  • 80. Kesselheim AS, Cresswell K, Phansalkar S, Bates DW, Sheikh A.. Clinical decision support systems could be modified to reduce ‘alert fatigue’ while still minimizing the risk of litigation. Health Affairs. 2011;30(12):2310-2317. [DOI] [PubMed] [Google Scholar]
  • 81. Co Z, Holmgren AJ, Classen DC, et al. The tradeoffs between safety and alert fatigue: data from a national evaluation of hospital medication-related clinical decision support. J Am Med Inform Assoc. 2020;27(8):1252-1258. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 82. Kwee TC, Kwee RM.. Workload of diagnostic radiologists in the foreseeable future based on recent scientific advances: growth expectations and role of artificial intelligence. Insights Imaging. 2021;12(1):88-12. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 83. Wilson PM, Philpot LM, Ramar P, et al. Improving time to palliative care review with predictive modeling in an inpatient adult population: study protocol for a stepped-wedge, pragmatic randomized controlled trial. Trials. 2021;22(1):1-9.
  • 84. Sand M, Durán JM, Jongsma KR. Responsibility beyond design: physicians’ requirements for ethical medical AI. Bioethics. 2022;36(2):162-169.
  • 85. Asan O, Bayrak AE, Choudhury A. Artificial intelligence and human trust in healthcare: focus on clinicians. J Med Internet Res. 2020;22(6):e15154.
  • 86. Beets B, Newman TP, Howell EL, Bao L, Yang S. Surveying public perceptions of artificial intelligence in health care in the United States: systematic review. J Med Internet Res. 2023;25:e40337.
  • 87. IMIA Code of Ethics. Accessed October 16, 2019. https://www.imiaweb.org/code/
  • 88. National Standards of Practice for Interpreters in Health Care. NCIHC; 2023. Accessed June 19, 2023. https://www.ncihc.org/assets/z2021Images/NCIHC%20National%20Standards%20of%20Practice.pdf
  • 89. Esmaeilzadeh P. Use of AI-based tools for healthcare purposes: a survey study from consumers’ perspectives. BMC Med Inform Decis Mak. 2020;20(1):1-19.
  • 90. Price WN, Gerke S, Cohen IG. Potential liability for physicians using artificial intelligence. JAMA. 2019;322(18):1765-1766.
  • 91. Reddy S, Allan S, Coghlan S, Cooper P. A governance model for the application of AI in health care. J Am Med Inform Assoc. 2020;27(3):491-497. 10.1093/jamia/ocz192
  • 92. Scheetz J, Rothschild P, McGuinness M, et al. A survey of clinicians on the use of artificial intelligence in ophthalmology, dermatology, radiology and radiation oncology. Sci Rep. 2021;11(1):5193. 10.1038/s41598-021-84698-5
  • 93. Yang C, Prokop L, Barwise A. Strategies used by healthcare systems to communicate with hospitalized patients and families with limited English proficiency during the COVID-19 pandemic: a narrative review. J Immigr Minor Health. 2023;25(6):1393-1401.
  • 94. Barwise A, Tschida-Reuter D, Sutor B. Adaptations to interpreter services for hospitalized patients during the COVID-19 pandemic. Mayo Clin Proc. 2021;96(12):3184-3185.
  • 95. McCradden MD, Joshi S, Mazwi M, Anderson JA. Ethical limitations of algorithmic fairness solutions in health care machine learning. Lancet Digit Health. 2020;2(5):e221-e223.
  • 96. Parikh RB, Teeple S, Navathe AS. Addressing bias in artificial intelligence in health care. JAMA. 2019;322(24):2377-2378.
  • 97. Topff L, Ranschaert ER, Bartels-Rutten A, et al. Artificial intelligence tool for detection and worklist prioritization reduces time to diagnosis of incidental pulmonary embolism at CT. Radiol Cardiothorac Imaging. 2023;5(2):e220163.
  • 98. Winkel DJ, Heye T, Weikert TJ, Boll DT, Stieltjes B. Evaluation of an AI-based detection software for acute findings in abdominal computed tomography scans: toward an automated work list prioritization of routine CT examinations. Invest Radiol. 2019;54(1):55-59.
  • 99. Weisberg EM, Chu LC, Fishman EK. The first use of artificial intelligence (AI) in the ER: triage not diagnosis. Emerg Radiol. 2020;27(4):361-366.
  • 100. Papachristou N, Kotronoulas G, Dikaios N, et al. Digital transformation of cancer care in the era of big data, artificial intelligence and data-driven interventions: navigating the field. Semin Oncol Nurs. 2023;39(3):151433.
  • 101. Li RC, Smith M, Lu J, et al. AI for empowering collaborative team workflows: two implementations for advance care planning and care escalation. NEJM Catalyst Innov Care Deliv. 2022;3(4):CAT.21.0457.
  • 102. Improving Patient Safety Systems for Patients With Limited English Proficiency: A Guide for Hospitals. AHRQ Publication No. 12-0041; 2012.
  • 103. Epstein R, Street R. Patient-Centered Communication in Cancer Care: Promoting Healing and Reducing Suffering. National Cancer Institute; 2007.
  • 104. Wang W, Chen L, Xiong M, Wang Y. Accelerating AI adoption with responsible AI signals and employee engagement mechanisms in health care. Inf Syst Front. 2021:1-18.
  • 105. Ehsan U, Liao QV, Muller M, Riedl MO, Weisz JD. Expanding explainability: towards social transparency in AI systems. In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 2021:1-19.
  • 106. Felzmann H, Fosch-Villaronga E, Lutz C, Tamò-Larrieux A. Towards transparency by design for artificial intelligence. Sci Eng Ethics. 2020;26(6):3333-3361.
  • 107. Larsson S, Heintz F. Transparency in artificial intelligence. Internet Policy Rev. 2020;9(2).
  • 108. Varpio L, Ajjawi R, Monrouxe LV, O'Brien BC, Rees CE. Shedding the cobra effect: problematising thematic emergence, triangulation, saturation and member checking. Med Educ. 2017;51(1):40-50.
  • 109. Murphree DH, Quest DJ, Allen RM, Ngufor C, Storlie CB. Deploying predictive models in a healthcare environment – an open source approach. IEEE; 2018:6112-6116.
  • 110. Verghese A, Shah NH, Harrington RA. What this computer needs is a physician: humanism and artificial intelligence. JAMA. 2018;319(1):19-20.
  • 111. Sandhu S, Lin AL, Brajer N, et al. Integrating a machine learning system into clinical workflows: qualitative study. J Med Internet Res. 2020;22(11):e22421.
  • 112. Anderson M, Anderson SL. How should AI be developed, validated, and implemented in patient care? AMA J Ethics. 2019;21(2):125-130.
  • 113. Nagendran M, Chen Y, Lovejoy CA, et al. Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies. BMJ. 2020;368:m689.

Associated Data


Supplementary Materials

ocad224_Supplementary_Data

Data Availability Statement

All data relevant to the analysis are incorporated into the article.


Articles from Journal of the American Medical Informatics Association : JAMIA are provided here courtesy of Oxford University Press
