Abstract
Purpose
Artificial intelligence (AI) technology is being rapidly adopted across many branches of medicine. Although research has begun to highlight the impact of AI on health care, research on patient perspectives of AI remains scarce. This scoping review aimed to explore the literature on adult patients’ perspectives on the use of an array of AI technologies in the health care setting to inform their design and deployment.
Methods
This scoping review followed Arksey and O’Malley’s framework and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR). To evaluate patient perspectives, we conducted a comprehensive literature search using eight interdisciplinary electronic databases, supplemented by grey literature. Articles published from 2015 to 2022 that focused on patient views regarding AI technology in health care were included. Thematic analysis was performed on the extracted articles.
Results
Of the 10,571 imported studies, 37 articles were included and extracted. From the 33 peer-reviewed and 4 grey literature articles, the following themes on AI emerged: (i) Patient attitudes, (ii) Influences on patient attitudes, (iii) Considerations for design, and (iv) Considerations for use.
Conclusions
Patients are key stakeholders essential to the uptake of AI in health care. The findings indicate that patients’ needs and expectations are not fully considered in the application of AI in health care. Therefore, there is a need for patient voices in the development of AI in health care.
Keywords: artificial intelligence, patient perspectives, scoping review
While artificial intelligence (AI) is defined and described in a variety of ways, it is generally understood as a branch of computer science whereby machines are designed to simulate and mimic human actions.1 Machine learning, a type of AI that uses collected data to recognize patterns and to predict outcomes and best courses of action, is the cornerstone of AI.1,2 These human-like problem-solving skills have earned AI a practical place in various health care fields.3
Perceptions of AI use in health care vary;4,5 for example, Jotter and Bosco addressed potential ethical considerations behind AI implementation.4 They examined a framework of understanding AI reception in health care and concluded that human-centric components were vital to the implementation and reception of AI.4 In another study, Petersson, Larsson, Nygren, et al identified three primary concerns of health care leaders regarding the implementation of AI: External factors (liability and quality compliance), change management (resourcing and staffing), and transformation (roles and relationship changes).5 These concerns highlighted the need for clear regulatory oversight of AI in health care. When addressing user apprehension of AI-assisted health services, Longoni and Morewedge’s research indicated that, even when presented with evidence of cost savings, accuracy, and efficiency, patients preferred human interaction and discretion over AI to communicate important and potentially life-saving information and interventions.6
Despite all these challenges, the use of AI in medicine is on the rise.7 As such, the promise of the ubiquity and ability of AI needs to be tempered by a realistic approach that considers AI assistance and enhancement in current care. In Artificial Intelligence-Enabled Health Care Delivery, Reddy and colleagues posited the following 4 domains where AI could impact health care: 1) Offloading health care administration duties, 2) Supporting clinical decision-making activities, 3) Facilitating patient monitoring, and 4) Supporting treatments for patients.8 The same perspective of AI as benefiting clinical approaches was echoed in A Population Health Perspective on Artificial Intelligence, which then expanded on the need for a user-friendly interface with positive exposure for AI to be effective.9
Given the essential role patients play in the uptake of AI and the fact that these changes will eventually impact their health, it is necessary to include their perspectives in the implementation of AI in health care.10 Our objective was to map and synthesize the existing literature involving patients’ understanding of AI utilization in health care and to identify the key concepts and knowledge gaps in the literature.
METHODS
Despite the growing use of AI in health care,11 there is a lack of precise information on how patients perceive the advantages and disadvantages of AI. We chose scoping review as our methodology as it best aligned to our objective, allowing our team to cover a broad range of literature and generate a broad, high-level overview of our chosen topic.12 This scoping review of literature on patient perspectives regarding the use of AI in health care was conducted according to the five stages of the Arksey and O’Malley framework,13 described in detail below. The results reported follow the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews (PRISMA-ScR) checklist.14
Stage 1: Identifying the Key Concepts, Issues, and Objectives
In our scoping review, we focused on identifying available evidence on the attitudes and perceptions of patients regarding the use of any AI technology in any health care setting as reported in the literature. We used the Population, Intervention, Comparison, and Outcomes (PICO) format to narrow our research objectives.15 Our main research question was: What does the literature report about the perspectives of adult patients (aged 18 years and over) on the use of AI technologies in health care?
Stage 2: Identifying Relevant Literature
International literature in English regarding the perspectives of adult patients on AI use in health care settings, including grey literature, published from 2015 to spring 2022 was identified; the search was limited to the last seven years to focus on recent AI technology. We restricted the review to adult populations (age 18 and over), excluding pediatric studies given the heterogeneity of the pediatric literature.
In consideration of the interdisciplinary nature of AI in medicine, we used an inclusive approach for our literature search, which included eight electronic databases that were the most relevant to our research topic: AgeLine, Cochrane, Cumulative Index to Nursing and Allied Health Literature (CINAHL), Excerpta Medica Database (EMBASE), Institute of Electrical and Electronics Engineers (IEEE), OVID-Medical Literature Analysis and Retrieval System Online (OVID-Medline), Psychological Information Database (PsycInfo), and SCOPUS. In addition to databases with a medical focus, we also included those with a focus on engineering and the social sciences.
The search was initially conducted in March 2021 and then updated to include articles published up to May 16, 2022. Our search strategy followed an iterative course by refining the inclusion/exclusion criteria and updating our search results during each stage and before submission. For grey literature, we applied the honeycomb grey literature search strategy,16 which includes theses and dissertations, association reports, government reports, stakeholder reports, and conference proceedings. Furthermore, we explored numerous global publications available on websites, such as Open Grey, Organization for Economic Co-operation and Development (OECD), DuckDuckGo, and LexisNexis. Due to limitations such as access, language, and diversity of health care systems, we focused on Canadian grey literature resources. Handsearching was also completed to further supplement the literature search. Detailed eligibility criteria are outlined in Table 1.
Table 1.
Inclusion and Exclusion Criteria for Screening
| Inclusion | Exclusion |
|---|---|
| Adult population (age ≥18 years) | Articles published 7 years ago or longer, or published after May 2022 |
| Computer science technology classified as AI | Clinician perspectives |
| English | Pediatric population |
| Global | Non-AI technology |
| Grey literature | Non-English languages |
| Health care setting | Non-health care setting |
| Patient perspectives | Full-text version unavailable |
| Peer-reviewed literature | |
| Within the last 7 years (2015–2022) | |
Our search strategy was developed in consultation with a University of Toronto Reference Librarian. Table 2 illustrates the initial keywords and Medical Subject Headings (MeSH) included. The respective search strings are included (Appendix A) to show how the search strategy developed was applied in practice.
Table 2.
Keywords and MeSH Headings
| Concept 1 | AND | Concept 2 |
|---|---|---|
| Patient satisfaction OR Patient/User/Client experience/perspective/preference/trust/perception/report | AND | Artificial intelligence OR AI OR Machine learning OR Algorithm OR Mathematical computing OR Computational intelligence OR Machine intelligence OR Neural networks |
Stage 3: Study Selection
We used Covidence v2978 (Melbourne, Australia) to upload our search results, screen titles and abstracts, and extract data from the selected and deduplicated studies. The results of the article import and screening process are reported in the PRISMA flow chart in Figure 1. After removal of 1,827 duplicates from a total of 10,571 imported studies, we screened the remaining 9,756 titles and abstracts. Additional studies were added through handsearching. Our inclusion criteria comprised any peer-reviewed, full-text articles or grey literature published in English from 2015 to 2022 that addressed adult patient perspectives on the use of computer science tools classified as AI in any health care setting (Table 1). Studies were excluded if they described non-AI technology, examined non-patient perspectives, or involved pediatric populations. Two independent, parallel reviewers from the team screened each article for inclusion in full-text review. Any discrepancies were resolved by a third reviewer or through team discussion. Our team then assessed available full-text articles for eligibility, with consensus by two reviewers per article. As a result, 37 studies were included for data extraction.17–53 Of these, 33 were peer-reviewed,17–19,21,43,45,51 and 4 were grey literature.20,44,52,53
Figure 1.
A PRISMA flow chart of articles through screening to selection for analysis.
Stage 4: Charting the Data
A charting table was designed in Microsoft Excel 365, piloted, and refined on 37 studies.17–53 Key information extracted included author(s), year of publication, study design and aim, sample size, description and purpose of AI tool, health care setting, and key findings of each study to align with the requirements outlined by Arksey and O’Malley.13
Stage 5: Summarizing, Synthesizing, and Reporting the Results
To facilitate thematic analysis, we used affinity diagrams and tabular formats54–56 to organize ideas or data into related groups and to identify patterns, themes, and insights that may not have been immediately apparent.56 For more precise comparisons, a tabular format was used to present information in a structured table or spreadsheet.57 Key themes were drawn out from the studies in multiple iterations and discussed by team members until consensus was reached. A summary of qualitative inputs, directly derived from articles, can be seen as a word cloud in Figure 2.
Figure 2.
This word cloud depicts the most frequently used words within articles included in the scoping review; the largest fonts indicate the most commonly used terms.
RESULTS
Article Characteristics
A summary of the key elements of the included and extracted articles is available in Supplemental Table S1. The majority of the articles were published between 2018 and 2022 (95%, n=35)17–35,37,48,50,53 and used a cross-sectional study design (49%, n=18).18,19,21,23,25,28,30,33,36,40,42,50,53 The included articles covered a wide range of hypothetical and real-life clinical health care settings such as radiology (16%, n=6),17,20,24,30,50,51 dermatology (14%, n=5),25,26,32,39,45 and community care (11%, n=4),38,41,47,48 which are further detailed in Figure 3. AI applications featured in the extracted articles explored a broad range of use cases, including diagnostics, personalized support, and risk prediction, aligning with previous research that suggested an increased interest in applying AI to advance these uses.58 Articles that featured hypothetical AI tools included a broad range of AI roles, with the majority of these articles focusing on factors impacting the acceptance of AI tools, such as age,18 education,18 provider endorsement,17 and trust and accountability of AI.22,24,26,40,42,43,53 Clinical applications of AI were investigated in articles that focused on the acceptance of AI for a specific role (diagnostics, care management, etc),28–30,36,37,49 factors impacting acceptance (level of oversight, AI performance, etc),23,25,28,29,32–34,39,41,45,46,50 and how perspectives differ among different patient populations.44,47 Additional details regarding the included articles are provided in Supplemental Table S1.
Figure 3.
This figure depicts the distribution of health care settings across extracted articles.
Themes
We identified 4 themes: (i) Patient attitudes on the experiences of AI, (ii) Factors influencing patient attitudes towards AI (demographics, previous experiences, and general interaction), (iii) Considerations for the design of AI (performance, efficacy, accuracy, security, and ownership), and (iv) Considerations for the use of AI (informed consent and accountability).59 A summary of themes and a description of subthemes are illustrated in Table 3 and Table 4, respectively, and the distribution of key themes across included articles is shown in Figure 4. Of note, key ideas highlighted in the word cloud in Figure 2 show the valence of patient perceptions and use cases of AI in health care.
Table 3.
Summary of Themes
| Themes | Description | Subthemes | Distribution |
|---|---|---|---|
| General attitudes | Patients’ initial attitude towards AI before any exposure or after minimal exposure to AI | Positive general attitudes | 9 |
| | | Negative general attitudes | 10 |
| Factors influencing attitudes | The variables originating from participants or perceptions of participants during their interactions with AI tools that influence their attitude towards AI | Positive interactions | 8 |
| | | Neutral interactions | 12 |
| | | Negative interactions | 11 |
| | | Previous experience | 7 |
| | | Demographic information | 9 |
| Considerations for the design of AI | The design of AI, which describes the variables of AI tools (eg, color of the AI tool), that play a crucial role in patients’ acceptance of AI use in their care experience | Ease of use | 8 |
| | | Efficacy and accuracy | 6 |
| | | Data-related concerns | 13 |
| Considerations for the use of AI | The other aspects of AI use that patients may consider or want to know, such as informed consent, regulation, trustworthiness, and user-based development | Informed consent | 14 |
| | | Clinical setting | 8 |
| | | Understanding of AI tool | 8 |
Table 4.
Descriptions of Subthemes
| Themes | Subthemes | Description |
|---|---|---|
| General attitudes | | Patients’ initial attitude towards AI tool |
| Factors influencing attitudes | Interactions | Patients’ interactions with AI tools, contributing to their attitudes towards AI tools |
| | Previous experience | Patients’ experience with AI tools prior to study dictating future engagement |
| | Demographic information | Demographic characteristics of patients that may influence experience with AI |
| Considerations for the design of AI | Ease of use | Considerations based on the interface, look, and function preferences of patients |
| | Efficacy and accuracy | Considerations based on the efficacy and accuracy of AI |
| | Data-related concerns | Considerations for security and ownership of health-related data |
| Considerations for the use of AI | Informed consent | Considerations for patients to make informed decisions |
| | Clinical setting | Considerations for different populations |
| | Understanding of AI tool | Considerations for regulatory oversight and protection |
Figure 4.
This figure shows the distribution of key themes among the articles included in the scoping review.
General Attitudes
The findings indicate that patients’ initial attitude towards AI before any exposure or after minimal exposure to AI might impact their experiences when interacting with AI tools in a health care setting. In general, patients tended to view AI positively when considering the increased accessibility to care,26,27,41 higher rates of companionship and comfort,23,35 improved efficiency of care,23,44,48,52,53 and decreased cost of diagnosis and treatment.19,27,44,45,53 Negative attitudes of patients stemmed from the lack of human supervision in care provided by AI,17,32–34,41,43,45,47,53 and the potential risk of job loss.28,53
Factors Influencing Attitudes
Alongside patients’ initial attitude towards AI, many variables could influence patients’ attitudes towards this technology in the health care field. Interactions with AI tools could also contribute to patients’ acceptance of, and willingness for, future engagement with these tools in health care settings.25,27,28,30–32,35,37,41,43,46,50,53
The demographic characteristics of patients, including age,30,32,35 gender,30 race,27 geographical location,28,32 diagnosed disease,28,43,46 disease severity,43,50 and level of education,30 were found to influence patients’ attitude towards AI; however, studies showed conflicting results. For example, some studies indicated a higher acceptance and preference for AI in younger populations,28,30,32 while other studies showed that older populations had higher acceptance of or comfort with AI than younger populations.35 Other conflicting factors were the performance of AI,27,37,41 level of supervision during AI use,25 and level of trust towards AI and its function.25,27,28,31,53 Reasons underlying these inconsistent results might include differences in patient populations and their stratification; for example, some articles recruited patients from multiple specialties in hospital settings while others selected patients from one specific specialty. Moreover, wide-ranging demographic characteristics (eg, employment status) and diverse data collection methods (eg, individual interviews versus surveys) were not accounted for when interpreting the results.
Factors that tended to have a positive influence towards patient experience with AI included familiarity with function,27,31,32 previous exposure to similar tools,31,32,36,46 supervision during use,33,38 and the simplicity of tools.34 Conversely, factors such as lack of evidence and rigor of system,21,28,31,37,39,41,45 inappropriate decisions made by AI and the associated negative consequences,33,37,41 prohibitive costs of purchasing or using an AI tool,32,49,51 and inability to effectively communicate problems and emotions to AI technology31,41 reduced acceptance of AI tools by patients.
Neutral factors came from two domains: the use of AI implemented and the setting preferences wherein AI would be used. Although this is not a comprehensive list, patients considered AI’s uses as second opinions,24,42 personalization of care,42 providing virtual or long-distance care,20 or supplementation of simple tasks.17,19 Factors for timing and setting included the stage of care at which AI was used, such as during initial consultation or follow-up,27,42,44 and the field or discipline for which the AI tool was designed.18,20,25,28,40,53
Considerations for the Design of AI
The design and performance of AI tools played a crucial role in patients’ acceptance of AI. Tools were generally more accepted the easier they were to use31,32,36,51 and when they included design features such as comfortable user interfaces,38 familiar operations,34,49 and personalized functions and services.49
The performance of AI tools was evaluated by their intended objectives and the efficiency of the tools to deliver on them.18,39 AI tools were more likely to be deemed beneficial when demonstrating increased reliability and increased efficiency53 while not hindering the safety of any involved stakeholders and end-users.49 These considerations were noted to play a further role in a patient’s quality of life and mental health,39 indicating the need for proof of concepts,18 risk assessments,40 and careful testing before implementing AI tools in health care.37
Another consideration for design was the security and ownership of health-related data. To protect health data, multiple factors were involved, such as respect for patient autonomy and transparency,38,39 encryption for data storage,32 and appropriate platforms and institutions for data.33,46 The final main component was the use of the data and their confidentiality.34,41 Studies showed that patients preferred sharing health data with public health organizations and universities36 but were worried about health data being shared with or sold to commercial organizations that might profit from their use.18,19,35,38,44,45 Despite privacy concerns, some expressed that anonymous use of participants’ data was acceptable for research and that, in extreme health care crises, exceptions to data privacy could be made.33
Considerations for the Use of AI
In clinical settings where AI tools were used, many studies showed that a majority of patients did not fear the use of AI46 and that patients and participants positively responded to the use of AI in various fields of medicine such as cancer,25,53 neurosurgery,28 preventive medicine,40 radiology,18 and virtual care.20 However, other studies showed a lack of trust and understanding of the current state of AI development, leading to negative responses and pushback for AI implementation in medicine.47,53
Informed consent, the principle that one should have sufficient information and understanding before making decisions about one’s own medical care, was another concern when implementing AI tools.60 Although opinions conflicted,21,53 studies suggested that patients preferred transparency and disclosure about the exact application of the tool, even when undergoing AI-assisted brain surgery.28 Furthermore, Robbins and colleagues found that patients preferred transparent oversight not only of the use of AI tools but also at all stages of AI tool development.44
Accountability, defined as considerations for regulatory oversight and protections, was also a concern for patients when considering the governance of AI tools,53 the trustworthiness of the tool,51,53 the lack of patient input in device development,41 and concerns about disclosure and informed consent in the context of lawsuits.33 Given that AI is not “human,” it was unclear whom patients could approach when mistakes were made.18,21,42 Understanding that AI tools are only beginning to be regulated by federal regulatory agencies,21 there is a greater need for standardization of AI tool management and regulation.37
DISCUSSION
This scoping review revealed common themes regarding patient perceptions and considerations on the development and use of AI in health care. Previous literature focused on clinician experiences and factors impacting provider adoption of AI tools in health care, such as perceived trust.61–63 Our results call attention to key insights from the patient perspective, highlighting patients’ general attitudes, the role of demographic factors, the acceptability of AI, and the perceived impact of AI on care. In addition, our findings suggest important considerations for the design and implementation of AI including the user experience,27,31,32,36,38,41 AI performance,18,27,37,39,41,53 accountability framework,18,21,25,37,40,42,53 informed consent,28,32,38,39 data privacy,32–34,41,46 and equitable access.26,27,41
Implications for Practice
These results, which focused solely on the patient perspective, align with previous literature regarding factors that impact the adoption of AI by clinicians in health care settings. For instance, AI user interface and integration with current clinical workflow were suggested as key factors for clinicians.64–66 However, education about AI and its perceived value, cited as important by clinicians, were of lesser priority for patients, illustrating the differing design needs and adoption requirements between the two groups,67 particularly given the stronger focus on AI roles and use cases among clinicians.68 These disparities suggest that when designing AI tools, it is essential to consider all stakeholders, including those beyond patients and clinicians such as caregivers and health administrators.
One study by Scott, Carter and Coiera considered perspectives from multiple stakeholders, including patients, clinicians, health care leaders, and industry representatives.69 Their results presented a spectrum of attitudes regarding AI and health care with common concerns centered around the requirement for safeguards to prevent patient harm, the need for regulatory oversight, and restricting the role of AI as an assistant rather than a decision-maker.69 Of note, patient participants reported more positive attitudes towards AI compared to their non-patient counterparts on the condition that trust and oversight were established.69 These findings,69 along with results from articles included in this review,47,53 indicate that user trust in the technology is an important component in the adoption of AI in a clinical setting.
Implications for Policy
Although equity was found to be an important element from the patient perspective among articles included in this review,18,20,22,26,27,31,43,44,49 mentions of managing inequity and bias in AI research and regulatory guidelines were few and vague, suggesting the need to better regulate and mitigate inequity in AI.70–72 One helpful tool to address this need is the DEEP-MAX Scorecard, with which policy makers can compare AI tools on factors such as data privacy safeguards, equity, and transparency of outputs.73 Furthermore, one study exploring perspectives in government, health care organizations, and information technology found concerns to be centered around barriers within the legal or regulatory landscape.74 These results stressed a need for public policy and the establishment of standardized guidelines, best practices, and boundaries on the use of AI in health care to reduce the likelihood of data breaches and patient harm, particularly when accessing personal health information.70,75
Implications for Future Research
Although the theme of informed consent was commonly reported by patients, it was not always evident whether research ethics approval was obtained in the studies we selected. Given the accelerating pace at which AI is being created and adopted within medicine, there is a need for clear research ethics guidelines regarding the involvement of patients, and the secondary use of patient data, in the deployment of novel AI technologies.76 For example, the National Health Service (NHS) in the United Kingdom came under criticism regarding its intention to extract anonymized patient data, with advocates urging that patient consent be obtained and transparency increased.77 The majority of the current literature comprised proof-of-concept studies, with only a minority demonstrating AI in a real-life clinical setting; this suggests that AI in health care remains in the early stages of development and will require an established framework to demonstrate its validity and efficacy compared with standard care.78 Although it was noted that unavailability or inaccessibility of real data could hinder the accuracy of AI, there is a palpable tension between safeguarding patient rights and improving the performance of AI.
Limitations
Limitations of our scoping review include the restriction to works published in English and reliance on interrater agreement. Consequently, our findings may not be representative of the broader global patient population, limiting their generalizability.79 Although our team could have leveraged translation tools to include non-English papers, reliably extracting results from academic and scientific works without specialized translation assistance would be difficult, given the nuanced nature of patient perspectives and attitudes.79
A quality assessment was not performed because scoping reviews generally aim to provide an overview of the literature on a topic and are not designed to assess the quality of included studies.80 Notably, 34 articles could not be retrieved as full text and were thus excluded from analysis. To mitigate this, we searched other sources, including Google, Google Scholar, PubMed, and the University of Toronto Library System, to retrieve these articles. Additionally, our research team fluctuated from 4 to 8 members over the course of the scoping review, with team transitions occurring during screening. As such, training and alignment during the screening process were important to ensure consistency in approach and decision-making.80
Patient perspectives identified in the included articles represent summarized viewpoints from a variety of data collection methods, and we were unable to conduct a thematic analysis of the patients’ own words. Furthermore, these articles spanned different health care disciplines, AI roles, and technological maturity (eg, hypothetical technology to proof-of-concept to real-life clinical setting), as well as a broad range of patient demographics, such as age, geographical location, and level of education. As such, our scoping review presents a general overview of the literature regarding patient viewpoints on the topic of AI in health care. Future studies should explore how patient perspectives differ among specific AI roles or level of technological development.
Despite considerable investment across the world in COVID-19 AI tools,81 we were unable to find papers related to patient perspectives on AI tools used during the pandemic, suggesting that this might be a current gap in the literature.82 Future studies should investigate AI designed for COVID-19 and any differences between patient perspectives compared to the pre-pandemic era. The majority of the current literature centered around adoption by clinicians, health care organizations, or health system planners.83,84 In consideration of the disparities of the health care systems elucidated by the pandemic and the focus on equity mentioned by patients in our included articles, it would be important to consider the risk of bias and discrimination from AI algorithms used to inform decisions as part of the COVID-19 pandemic response.85 For example, Schwabe and Wahl86 called for person-centric guidelines embracing ethics and equality in design, development, and implementation. Although equity was a key factor identified by patients, such guidelines remain in early development, require further refinement, and were not commonly applied in the papers we screened, suggesting poor uptake.87,88
Overall, this scoping review is among the first to compile patient perspectives and considerations on the use of AI in health care using an interdisciplinary approach that included literature from both engineering and medicine, as well as grey literature, to capture patient perspectives reported outside of academia. This provided a fuller snapshot of the current literature across the entire breadth of health care, rather than one localized to a specific medical specialty. Our findings indicate that patients would like to know how AI is incorporated into the care that clinicians deliver,17,20,23,25,27–32,34,36–38,40–44,47–52 how decisions related to their care are made,18,21,22,24,26,28,32,33,37,41,42,44,51,53 and that their personal health information is safeguarded.18,19,32–35,38,39,41,44–46 Despite the limitations, the themes identified in this review offer universal considerations for AI in health care.
CONCLUSIONS
This scoping review identified common themes in the literature regarding patient perceptions of the use of AI in health care. These themes may offer greater insight into considerations during the development, implementation, evaluation, and improvement of AI tools in health care. However, a greater understanding of the guidelines, standards, and reasoning behind current patient perspectives, especially among differing patient populations, is needed to facilitate the translation of knowledge into clinical evaluation and practice. By incorporating these factors, AI tools in health care can better address users' needs. This understanding of patient concerns and needs can also serve as a baseline for developing new AI tools and evaluating existing ones. With this purpose in mind, future studies examining AI use in health care should aim to evaluate and incorporate patient perspectives.
Patient-Friendly Recap
The use of artificial intelligence (AI) in medicine is on the rise, but patients’ perspectives on the design, development, and implementation of AI in health care are scarcely addressed.
This scoping review synthesized current literature regarding patients’ understandings of AI in health care and identified key concepts and knowledge gaps in this literature.
The findings indicate that patients’ needs and expectations are not fully considered in the application of AI in health care, particularly regarding user experience, AI performance, accountability, informed consent, data privacy, and equitable access.
Acknowledgments
We would like to thank Brady Bouchard, PhD, Azadeh Bojmehrani, PhD, Kaitlin Fuller, MLIS, Yannie Lai, MRT, Abirami Kirubarajan, MD, and Frank Rudzicz, PhD, for their assistance on this project.
Appendix A. Medline Search Strategy
Below is the search strategy designed for Medline (Ovid), with manual filters applied to limit results to English-language publications from January 2015 to May 2022:
1. exp patient satisfaction/ or exp patient preference/
2. ((patient* or user*) adj2 (perspective* or percept* or report* or experienc* or satisfact* or prefer* or trust* or attitud*)).tw,kf.
3. 1 or 2
4. exp algorithms/ or exp artificial intelligence/ or exp machine learning/
5. ((artificial adj2 intelligenc*) or AI or algorith* or ((machine or deep) and learning*) or intelligent agent* or machine intelligen*).tw,kf.
6. 4 or 5
7. 3 and 6
Results: 6,327
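The logical shape of the strategy above — a patient-perspective concept block combined with an AI concept block by AND — can be sketched in a few lines of Python. This is an illustrative toy only, not the review's screening method: the record list is hypothetical, Ovid truncation (*) is approximated with regex prefix matching, and MeSH explosion and adj2 proximity are omitted.

```python
import re

# Hypothetical miniature corpus of titles (illustrative only).
records = [
    "Patient perspectives on artificial intelligence in radiology",
    "Machine learning for protein folding",
    "User attitudes toward a deep learning triage algorithm",
    "Patient satisfaction with nursing care",
]

# Simplified stand-ins for the two Ovid concept blocks. Truncation (*) is
# modeled as "\w*" after the stem; MeSH headings and adj2 proximity are not modeled.
PATIENT_TERMS = [r"patient\w*", r"user\w*", r"perspective\w*", r"percept\w*",
                 r"satisfact\w*", r"prefer\w*", r"trust\w*", r"attitud\w*"]
AI_TERMS = [r"artificial intelligen\w*", r"\bAI\b", r"algorith\w*",
            r"machine learning", r"deep learning", r"machine intelligen\w*"]

def matches_any(text: str, patterns: list[str]) -> bool:
    """True if any pattern in the concept block hits the text (case-insensitive)."""
    return any(re.search(p, text, re.IGNORECASE) for p in patterns)

def hits(corpus: list[str]) -> list[str]:
    """The strategy is an AND of the two blocks: (patient terms) AND (AI terms)."""
    return [r for r in corpus if matches_any(r, PATIENT_TERMS)
            and matches_any(r, AI_TERMS)]

print(hits(records))
# → ['Patient perspectives on artificial intelligence in radiology',
#    'User attitudes toward a deep learning triage algorithm']
```

The AND between the blocks is what keeps the retrieval focused: a record about machine learning alone, or patient satisfaction alone, is excluded, mirroring how the Ovid strategy narrowed 10,571 imported studies to topically relevant ones.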
Footnotes
Author Contributions: Study design: Moy, Irannejad, Jeanneret Manning, Ahmed, Lorenz, Mirza, Klinger. Data acquisition or analysis: all authors. Manuscript drafting: all authors. Critical revision: all authors.
Conflicts of Interest: None.
References
- 1. Frankenfield J. Artificial intelligence: what it is and how it is used. Investopedia. April 24, 2023. Accessed August 9, 2023. https://www.investopedia.com/terms/a/artificial-intelligence-ai.asp
- 2. Shailaja K, Seetharamulu B, Jabbar MA. Machine learning in health care: a review. 2018 Second International Conference on Electronics, Communication and Aerospace Technology (ICECA); March 2018; Coimbatore, India. pp. 910–914.
- 3. Alsuliman T, Humaidan D, Sliman L. Machine learning and artificial intelligence in the service of medicine: necessity or potentiality? Curr Res Transl Med. 2020;68:245–51. doi: 10.1016/j.retram.2020.01.002.
- 4. Jotterand F, Bosco C. Keeping the “human in the loop” in the age of artificial intelligence. Sci Eng Ethics. 2020;26(5):2455–60. doi: 10.1007/s11948-020-00241-1.
- 5. Petersson L, Larsson I, Nygren JM, et al. Challenges to implementing artificial intelligence in health care: a qualitative interview study with healthcare leaders in Sweden. BMC Health Serv Res. 2022;22:850. doi: 10.1186/s12913-022-08215-8.
- 6. Longoni C, Bonezzi A, Morewedge CK. Resistance to medical artificial intelligence. J Consum Res. 2019;46:629–50. doi: 10.1093/jcr/ucz013.
- 7. Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J. 2019;6:94–8. doi: 10.7861/futurehosp.6-2-94.
- 8. Reddy S, Fox J, Purohit MP. Artificial intelligence-enabled healthcare delivery. J R Soc Med. 2019;112:22–8. doi: 10.1177/0141076818815510.
- 9. Lavigne M, Mussa F, Creatore MI, et al. A population health perspective on artificial intelligence. Healthc Manage Forum. 2019;32:173–7. doi: 10.1177/0840470419848428.
- 10. Kirubarajan A, Taher A, Khan S, et al. Artificial intelligence in emergency medicine: a scoping review. J Am Coll Emerg Physicians Open. 2020;1:1691–702. doi: 10.1002/emp2.12277.
- 11. Bohr A, Memarzadeh K. Chapter 2 – the rise of artificial intelligence in healthcare applications. In: Bohr A, Memarzadeh K, editors. Artificial Intelligence in Healthcare. Academic Press; 2020. pp. 25–60.
- 12. Munn Z, Peters MDJ, Stern C, et al. Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Med Res Methodol. 2018;18:143. doi: 10.1186/s12874-018-0611-x.
- 13. Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8:19–32. doi: 10.1080/1364557032000119616.
- 14. Tricco AC, Lillie E, Zarin W, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169:467–73. doi: 10.7326/M18-0850.
- 15. Straus SE, Glasziou P, Richardson WS, et al. Evidence-Based Medicine: How to Practice and Teach EBM. 5th Edition. Elsevier; 2018.
- 16. Gerstein Science Information Centre. Searching the literature: a guide to comprehensive searching in the health sciences. Vol. 6. University of Toronto Libraries; March 2022. Accessed August 9, 2023. https://guides.library.utoronto.ca/c.php?g=577919&p=4123572
- 17. Adams SJ, Tang R, Babyn P. Patient perspectives and priorities regarding artificial intelligence in radiology: opportunities for patient-centered radiology. J Am Coll Radiol. 2020;17:1034–6. doi: 10.1016/j.jacr.2020.01.007.
- 18. Aggarwal R, Farag S, Martin G, et al. Patient perceptions on data sharing and applying artificial intelligence to health care data: cross-sectional survey. J Med Internet Res. 2021;23:e26162. doi: 10.2196/26162.
- 19. Aktan ME, Turhan Z, Dolu İ. Attitudes and perspectives towards the preferences for artificial intelligence in psychotherapy. Comput Hum Behav. 2022;133:107273. doi: 10.1016/j.chb.2022.107273.
- 20. Andrews D. A patient’s perspective on AI. American College of Radiology Bulletin. June 29, 2021. https://www.acr.org/practice-management-quality-informatics/acr-bulletin/articles/july-2021/a-patients-perspective-on-ai
- 21. Cinalioglu K, Elbaz S, Sekhon K, et al. Exploring younger versus older Canadians’ perceptions of the use of AI in healthcare. Am J Geriatr Psychiatry. 2022;30:S124–5. doi: 10.1016/j.jagp.2022.01.031.
- 22. Esmaeilzadeh P, Mirzaei T, Dharanikota S. Patients’ perceptions toward human-artificial intelligence interaction in healthcare: experimental study. J Med Internet Res. 2021;23:e25856. doi: 10.2196/25856.
- 23. Glancova A, Do QT, Sanghavi DK, et al. Are we ready for video recognition and computer vision in the intensive care unit? A survey. Appl Clin Inform. 2021;12:120–32. doi: 10.1055/s-0040-1722614.
- 24. Haan M, Ongena YP, Hommes S, et al. A qualitative study to understand patient perspective on the use of artificial intelligence in radiology. J Am Coll Radiol. 2019;16:1416–9. doi: 10.1016/j.jacr.2018.12.043.
- 25. Haggenmüller S, Krieghoff-Henning E, Jutzi T, et al. Digital natives’ preferences on mobile artificial intelligence apps for skin cancer diagnostics: survey study. JMIR Mhealth Uhealth. 2021;9:e22909. doi: 10.2196/22909.
- 26. Jutzi TB, Krieghoff-Henning EI, Holland-Letz T, et al. Artificial intelligence in skin cancer diagnostics: the patients’ perspective. Front Med. 2020;7:233. doi: 10.3389/fmed.2020.00233.
- 27. Kim DKD, Kim S. What if you have a humanoid AI robot doctor? An investigation of public trust in South Korea. J Commun Healthc. 2022;15:276–85. doi: 10.1080/17538068.2021.1994825.
- 28. Kosan E, Krois J, Wingenfeld K, et al. Patients’ perspectives on artificial intelligence in dentistry: a controlled study. J Clin Med. 2022;11:2143. doi: 10.3390/jcm11082143.
- 29. Lee H, Piao M, Lee J, et al. The purpose of bedside robots: exploring the needs of inpatients and healthcare professionals. Comput Inform Nurs. 2020;38:8–17. doi: 10.1097/CIN.0000000000000558.
- 30. Lennartz S, Dratsch T, Zopfs D, et al. Use and control of artificial intelligence in patients across the medical workflow: single-center questionnaire study of patient perspectives. J Med Internet Res. 2021;23:e24221. doi: 10.2196/24221.
- 31. Lennox-Chhugani N, Chen Y, Pearson V, et al. Women’s attitudes to the use of AI image readers: a case study from a national breast screening programme. BMJ Health Care Inform. 2021;28:e100293. doi: 10.1136/bmjhci-2020-100293.
- 32. Lim K, Neal-Smith G, Mitchell C, et al. Perceptions of the use of artificial intelligence in the diagnosis of skin cancer: an outpatient survey. Clin Exp Dermatol. 2022;47:542–6. doi: 10.1111/ced.14969.
- 33. Liu T, Tsang W, Huang F, et al. Patients’ preferences for artificial intelligence applications versus clinicians in disease diagnosis during the SARS-CoV-2 pandemic in China: discrete choice experiment. J Med Internet Res. 2021;23:e22841. doi: 10.2196/22841.
- 34. Martin-Hammond A, Vemireddy S, Rao K. Exploring older adults’ beliefs about the use of intelligent assistants for consumer health information management: a participatory design study. JMIR Aging. 2019;2:e15381. doi: 10.2196/15381.
- 35. McCradden MD, Baba A, Saha A, et al. Ethical concerns around use of artificial intelligence in health care research from the perspective of patients with meningioma, caregivers and health care providers: a qualitative study. CMAJ Open. 2020;8:E90–5. doi: 10.9778/cmajo.20190151.
- 36. Nadarzynski T, Bayley J, Llewellyn C, et al. Acceptability of artificial intelligence (AI)-enabled chatbots, video consultations and live webchats as online platforms for sexual health advice. BMJ Sex Reprod Health. 2020;46:210–7. doi: 10.1136/bmjsrh-2018-200271.
- 37. Nadarzynski T, Puentes V, Pawlak I, et al. Barriers and facilitators to engagement with artificial intelligence (AI)-based chatbots for sexual and reproductive health advice: a qualitative analysis. Sex Health. 2021;18:385–93. doi: 10.1071/SH21123.
- 38. Nallam P, Bhandari S, Sanders J, et al. A question of access: exploring the perceived benefits and barriers of intelligent voice assistants for improving access to consumer health resources among low-income older adults. Gerontol Geriatr Med. 2020;6:2333721420985975. doi: 10.1177/2333721420985975.
- 39. Nelson CA, Perez-Chada LM, Creadore A, et al. Patient perspectives on the use of artificial intelligence for skin cancer screening: a qualitative study. JAMA Dermatol. 2020;156:501–12. doi: 10.1001/jamadermatol.2019.5014.
- 40. Ongena YP, Yakar D, Haan M, et al. Artificial intelligence in screening mammography: a population survey of women’s preferences. J Am Coll Radiol. 2021;18:79–86. doi: 10.1016/j.jacr.2020.09.042.
- 41. Pal D, Funilkul S, Charoenkitkarn N, et al. Internet-of-Things and smart homes for elderly healthcare: an end user perspective. IEEE Access. 2018;6:10483–96. doi: 10.1109/ACCESS.2018.2808472.
- 42. Palmisciano P, Jamjoom AAB, Taylor D, et al. Attitudes of patients and their relatives toward artificial intelligence in neurosurgery. World Neurosurg. 2020;138:e627–33. doi: 10.1016/j.wneu.2020.03.029.
- 43. Richardson JP, Smith C, Curtis S, et al. Patient apprehensions about the use of artificial intelligence in healthcare. NPJ Digit Med. 2021;4:140. doi: 10.1038/s41746-021-00509-1.
- 44. Robbins R, Brodwin E. An invisible hand: patients aren’t being told about the AI systems advising their care. STAT. July 15, 2020. Accessed August 9, 2023. https://www.statnews.com/2020/07/15/artificial-intelligence-patient-consent-hospitals/
- 45. Sangers TE, Wakkee M, Kramer-Noels EC, et al. Views on mobile health apps for skin cancer screening in the general population: an in-depth qualitative exploration of perceived barriers and facilitators. Br J Dermatol. 2021;185:961–9. doi: 10.1111/bjd.20441.
- 46. Tran VT, Riveros C, Ravaud P. Patients’ views of wearable devices and AI in healthcare: findings from the ComPaRe e-cohort. NPJ Digit Med. 2019;2:53. doi: 10.1038/s41746-019-0132-y.
- 47. Wolters MK, Kelly F, Kilgour J. Designing a spoken dialogue interface to an intelligent cognitive assistant for people with dementia. Health Informatics J. 2016;22:854–66. doi: 10.1177/1460458215593329.
- 48. Yang K, Zeng Z, Peng H, et al. Attitudes of Chinese cancer patients toward the clinical use of artificial intelligence. Patient Prefer Adherence. 2019;13:1867–75. doi: 10.2147/PPA.S225952.
- 49. Yarborough BJH, Stumbo SP. Patient perspectives on acceptability of, and implementation preferences for, use of electronic health records and machine learning to identify suicide risk. Gen Hosp Psychiatry. 2021;70:31–7. doi: 10.1016/j.genhosppsych.2021.02.008.
- 50. York T, Jenney H, Jones G. Clinician and computer: a study on patient perceptions of artificial intelligence in skeletal radiography. BMJ Health Care Inform. 2020;27:e100233. doi: 10.1136/bmjhci-2020-100233.
- 51. Zhang Z, Citardi D, Wang D, et al. Patients’ perceptions of using artificial intelligence (AI)-based technology to comprehend radiology imaging data. Health Informatics J. 2021;27:14604582211011215. doi: 10.1177/14604582211011215.
- 52. May ME. The healthcare innovation Amazon & Apple should focus on: a patient’s perspective. Medium. July 27, 2017. Accessed August 9, 2023. https://becominghuman.ai/the-healthcare-innovation-amazon-apple-should-focus-on-a-patients-perspective-a8c276901c66
- 53. Arbour D. Artificial intelligence for authentic engagement. Syneos Health. January 29, 2018. Accessed August 9, 2023. https://www.syneoshealth.com/insights-hub/artificial-intelligence-authentic-engagement
- 54. Thomas J, Harden A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Med Res Methodol. 2008;8:45. doi: 10.1186/1471-2288-8-45.
- 55. Vaismoradi M, Turunen H, Bondas T. Content analysis and thematic analysis: implications for conducting a qualitative descriptive study. Nurs Health Sci. 2013;15:398–405. doi: 10.1111/nhs.12048.
- 56. Simonsen J, Friberg K. Collective analysis of qualitative data. In: Simonsen J, Svabo C, Strandvad SM, Samson K, Hertzum M, Hansen OE, editors. Situated Design Methods (Design Thinking, Design Theory). MIT Press; 2014. pp. 99–117.
- 57. Bonfitto S, Casiraghi E, Mesiti M. Table understanding approaches for extracting knowledge from heterogeneous tables. Data Min Knowl Disc. 2021;11:e1407. doi: 10.1002/widm.1407.
- 58. Scheetz J, Rothschild P, McGuinness M, et al. A survey of clinicians on the use of artificial intelligence in ophthalmology, dermatology, radiology and radiation oncology. Sci Rep. 2021;11:5193. doi: 10.1038/s41598-021-84698-5.
- 59. Moen K, Middelthon AL. Chapter 10 – qualitative research methods. In: Laake P, Benestad HB, Olsen BR, editors. Research in Medical and Biological Sciences. Second Edition. Academic Press; 2015. pp. 321–378.
- 60. US Department of Health, Education, and Welfare. Ethical principles and guidelines for the protection of human subjects of research. The Belmont Report: The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. April 18, 1979. Accessed August 9, 2023. https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/read-the-belmont-report/index.html
- 61. Asan O, Bayrak AE, Choudhury A. Artificial intelligence and human trust in healthcare: focus on clinicians. J Med Internet Res. 2020;22:e15154. doi: 10.2196/15154.
- 62. Ting DSW, Pasquale LR, Peng L, et al. Artificial intelligence and deep learning in ophthalmology. Br J Ophthalmol. 2019;103:167–75. doi: 10.1136/bjophthalmol-2018-313173.
- 63. Young AT, Amara D, Bhattacharya A, et al. Patient and general public attitudes towards clinical artificial intelligence: a mixed methods systematic review. Lancet Digit Health. 2021;3:e599–611. doi: 10.1016/S2589-7500(21)00132-1.
- 64. Hah H, Goldin DS. How clinicians perceive artificial intelligence-assisted technologies in diagnostic decision making: mixed methods approach. J Med Internet Res. 2021;23:e33540. doi: 10.2196/33540.
- 65. Henry KE, Kornfield R, Sridharan A, et al. Human-machine teaming is key to AI adoption: clinicians’ experiences with a deployed machine learning system. NPJ Digit Med. 2022;5:97. doi: 10.1038/s41746-022-00597-7.
- 66. Kyrimi E, Dube K, Fenton N, et al. Bayesian networks in healthcare: what is preventing their adoption? Artif Intell Med. 2021;116:102079. doi: 10.1016/j.artmed.2021.102079.
- 67. Sarwar S, Dent A, Faust K, et al. Physician perspectives on integration of artificial intelligence into diagnostic pathology. NPJ Digit Med. 2019;2:28. doi: 10.1038/s41746-019-0106-0.
- 68. Finlayson SG, Subbaswamy A, Singh K, et al. The clinician and dataset shift in artificial intelligence. N Engl J Med. 2021;385:283–6. doi: 10.1056/NEJMc2104626.
- 69. Scott IA, Carter SM, Coiera E. Exploring stakeholder attitudes towards AI in clinical practice. BMJ Health Care Inform. 2021;28:e100450. doi: 10.1136/bmjhci-2021-100450.
- 70. Sun TQ, Medaglia R. Mapping the challenges of artificial intelligence in the public sector: evidence from public healthcare. Gov Inform Q. 2019;36:368–83. doi: 10.1016/j.giq.2018.09.008.
- 71. Gichoya JW, McCoy LG, Celi LA, et al. Equity in essence: a call for operationalising fairness in machine learning for healthcare. BMJ Health Care Inform. 2021;28:e100289. doi: 10.1136/bmjhci-2020-100289.
- 72. Thomasian NM, Eickhoff C, Adashi EY. Advancing health equity with artificial intelligence. J Public Health Policy. 2021;42:602–11. doi: 10.1057/s41271-021-00319-5.
- 73. Iqbal U, Celi LA, Hsu YHE, et al. Healthcare artificial intelligence: the road to hell is paved with good intentions. BMJ Health Care Inform. 2022;29:e100650. doi: 10.1136/bmjhci-2022-100650.
- 74. Dwivedi YK, Rana NP, Jeyaraj A, et al. Re-examining the unified theory of acceptance and use of technology (UTAUT): towards a revised theoretical model. Inf Syst Front. 2019;21:719–34. doi: 10.1007/s10796-017-9774-y.
- 75. Macrae C. Governing the safety of artificial intelligence in healthcare. BMJ Qual Saf. 2019;28:495–8. doi: 10.1136/bmjqs-2019-009484.
- 76. McCradden MD, Anderson JA, Stephenson EA, et al. A research ethics framework for the clinical translation of healthcare machine learning. Am J Bioeth. 2022;22:8–22. doi: 10.1080/15265161.2021.2013977.
- 77. Shire R, Welch E. The delay to the NHS data grab provides more time to find out what it really means for patients. The BMJ Opinion. June 16, 2021. Accessed August 9, 2023. https://blogs.bmj.com/bmj/2021/06/16/the-delay-to-the-nhs-data-grab-provides-more-time-to-find-out-what-it-really-means-for-patients/
- 78. Yin J, Ngiam KY, Teo HH. Role of artificial intelligence applications in real-life clinical practice: systematic review. J Med Internet Res. 2021;23:e25759. doi: 10.2196/25759.
- 79. Jackson JL, Kuriyama A. How often do systematic reviews exclude articles not published in English? J Gen Intern Med. 2019;34:1388–9. doi: 10.1007/s11606-019-04976-x.
- 80. Lockwood C, Dos Santos KB, Pap R. Practical guidance for knowledge synthesis: scoping review methods. Asian Nurs Res. 2019;13:287–94. doi: 10.1016/j.anr.2019.11.002.
- 81. Pappot N, Taarnhøj GA, Pappot H. Telemedicine and e-health solutions for COVID-19: patients’ perspective. Telemed J E Health. 2020;26:847–9. doi: 10.1089/tmj.2020.0099.
- 82. Farrugia G, Plutowski RW. Innovation lessons from the COVID-19 pandemic. Mayo Clin Proc. 2020;95:1574–7. doi: 10.1016/j.mayocp.2020.05.024.
- 83. Chakravorti B. Why AI failed to live up to its potential during the pandemic. Harvard Business Review. March 17, 2022. Accessed August 9, 2023. https://hbr.org/2022/03/why-ai-failed-to-live-up-to-its-potential-during-the-pandemic
- 84. Yu M, Tang A, Brown K, et al. Integrating artificial intelligence in bedside care for covid-19 and future pandemics. BMJ. 2021;375:e068197. doi: 10.1136/bmj-2021-068197.
- 85. Röösli E, Rice B, Hernandez-Boussard T. Bias at warp speed: how AI may contribute to the disparities gap in the time of COVID-19. J Am Med Inform Assoc. 2021;28:190–2. doi: 10.1093/jamia/ocaa210.
- 86. Schwalbe N, Wahl B. Artificial intelligence and the future of global health. Lancet. 2020;395:1579–86. doi: 10.1016/S0140-6736(20)30226-9.
- 87. Gama F, Tyskbo D, Nygren J, et al. Implementation frameworks for artificial intelligence translation into health care practice: scoping review. J Med Internet Res. 2022;24:e32215. doi: 10.2196/32215.
- 88. Caffery LJ. The role of standards in accelerating the uptake of artificial intelligence in dermatology. Iproceedings. 2022;8:e36890. doi: 10.2196/36890.