Abstract
Background
The use of artificial intelligence (AI)–based tools in the care of individual patients and patient populations is rapidly expanding.
Objective
The aim of this paper is to systematically identify research on provider competencies needed for the use of AI in clinical settings.
Methods
A scoping review was conducted to identify articles published between January 1, 2009, and May 1, 2020, in the MEDLINE, CINAHL, and Cochrane Library databases, using search queries for terms related to health care professionals (eg, medical, nursing, and pharmacy) and their professional development in all phases of clinical education, AI-based tools in all settings of clinical practice, and the professional education domains of competencies and performance. Limits included English-language publications, studies in humans with an abstract, and US settings.
Results
The searches identified 3476 records, of which 4 met the inclusion criteria. These 4 studies described the use of AI in clinical practice and measured at least one aspect of clinician competence. While many of the screened studies measured the performance of an AI-based tool, only these 4 measured clinician performance in terms of the knowledge, skills, or attitudes needed to understand and effectively use the new tools being tested. The 4 articles primarily focused on the ability of AI to enhance patient care and clinical decision-making by improving information flow and display, specifically for physicians.
Conclusions
While many research studies were identified that investigate the potential effectiveness of using AI technologies in health care, very few address specific competencies that are needed by clinicians to use them effectively. This highlights a critical gap.
Keywords: artificial intelligence, competency, clinical education, patient, digital health, digital tool, clinical tool, health technology, health care, educational framework, decision-making, clinical decision, health information, physician
Introduction
Artificial intelligence (AI), defined as the “branch of computer science that attempts to understand and build intelligent entities, often instantiated as software programs,” [1] has been applied in the health care setting for decades. Starting in the 1960s, a cadre of computer scientists and physicians developed an interest group around AI in Medicine (AIM) [2]. By the time funding sources became aligned with opportunities in the 1980s, AI was in its “expert system” era, using rules and knowledge derived from human experts to solve problems, primarily related to medical diagnosis [3]. Projects that developed these knowledge-based systems resulted in the creation of valuable information infrastructures, including standards, vocabularies, and taxonomies that continue to anchor electronic health records (EHR) [4]. Rule-based clinical decision support (eg, case-specific clinical alerts) is an important component of today’s EHR, but it is no longer considered to be true AI [5].
Since these early forays into AI, great progress has been made in the structure and scope of information and computing technologies, as well as in data and computational resources, enabling the development of a much more powerful generation of AI tools. Human-machine collaborations exploiting these tools are already evident across professional health care practice. The ubiquitous use of personal computers and smartphones linked to external databases and highly connected AI-driven networks supports individual, team, and health system performance. This powerful new generation of AI-based tools will have wide-ranging impacts on the entire health care ecosystem, but concerns about potentially serious technical and ethical liabilities have also emerged [6].
Despite inevitable challenges, all those engaged in the practice and administration of health care should prepare for a future shaped by the presence of increasingly intelligent technologies, including robotic devices, clinical decision support systems based on machine-learning algorithms, and the flow of data and information from multiple sources, ranging from health information technology systems to individual patient sensors. While the health care and health professions education communities are perched on the forefront of these complex developments, like many organizations, they may not be prepared to recognize and adequately respond to the deep-change indicators of next-generation technologies [7]. Eaneff and others recently called for new administrative infrastructures to help manage and audit the deluge of AI-induced change [8]. It is imperative for educators to be a part of that infrastructure—to actively engage in deliberations about intended changes in the working-learning environment—so that implications for learning and the needs of learners will be considered as a part of any change management process.
This impending onslaught also creates an urgent mandate for health care organizations, educators, and professional groups to consider the range of professional competencies needed for the effective, ethical, and compassionate use of AI in health care work. While numerous authors have called for structured and intentional learning programs, to date, there has been no published framework to guide teaching, learning, and assessing health care students and practitioners in this emerging and transformative domain [7,9-12]. Additionally, the many accredited programs (including board certification) in clinical informatics focus on developing, implementing, and managing AI-based tools; they do not provide competencies for noninformatics users of AI-based tools, which represents a large gap in knowledge.
To inform these critical needs, this study aimed to systematically identify research studies that reported on provider competencies and performance measures related to the use of AI in clinical settings.
Methods
Study Design
A scoping review was conducted in accordance with PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) [13,14] with an a priori protocol. The objective was to systematically identify studies that specify competencies and measure performance related to the use of AI by health care professionals. Studies had to include students or postgraduate trainees in clinical education settings across medicine, nursing, pharmacy, and social work, or practicing clinicians participating in professional development activities.
Search Strategy
A systematic search query of MEDLINE via PubMed, CINAHL, and the Cochrane Library was conducted to identify references published or available online between January 1, 2009, and July 22, 2020 (Tables S1 to S3 in Multimedia Appendix 1). Queries including medical subject headings (MeSH) and keywords were designed around the following PICOST (population, intervention, control, outcomes, study design, and time frame) framework: (1) populations under consideration included all participants in any phase of clinical education, including faculty and health care worker professional development (eg, clinical education participants in medicine, nursing, or pharmacy; medical faculty and professional development; health care, clinical, or medical social workers); (2) interventions focused on AI-based tools (eg, AI terms, precision medicine, decision-making, speech recognition, documentation, computer simulation, software, patient participation or engagement, patient monitoring, health information exchange, EHR, and cloud computing) used in all settings; (3) no comparisons were required; (4) outcomes included the identification of clinical competencies and their respective measurements or domains; (5) study settings and limits restricted inclusion to studies with an abstract, conducted in humans, designed as primary studies or systematic reviews (with the same inclusion criteria), set in the United States, and published in English; and (6) the time frame began with the introduction of the Health Information Technology for Economic and Clinical Health Act of 2009, a distinguishing time point for this protocol [15,16]. AI-related tool use increased dramatically because of the organizational changes needed to accommodate meaningful use of health information technology in clinical care, justifying 2009 as a logical start point for this review.
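As an illustration only (the review's actual query strings appear in Tables S1 to S3 in Multimedia Appendix 1), a PICOST-structured Boolean query of this kind is typically assembled by OR-ing synonyms within each element and AND-ing across elements. The term lists below are invented placeholders, not the strings used in this review:

```python
# Hypothetical sketch of PICOST-style query assembly for a PubMed search.
# The MeSH terms and keywords here are illustrative placeholders only.

def or_group(terms):
    """Join synonyms for one PICOST element with OR, parenthesized."""
    return "(" + " OR ".join(terms) + ")"

population = ['"education, medical"[MeSH]', 'nursing education', 'pharmacy education']
intervention = ['"artificial intelligence"[MeSH]', 'machine learning',
                'natural language processing']
outcomes = ['"clinical competence"[MeSH]', 'competency']

# AND across elements, OR within each element.
query = " AND ".join(or_group(g) for g in (population, intervention, outcomes))
# Date limit corresponding to the review's time frame.
query += ' AND ("2009/01/01"[Date - Publication] : "2020/07/22"[Date - Publication])'
print(query)
```

The same pattern extends to the settings and language limits described above; each limit simply appends another AND-ed clause.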
Notably, during protocol generation and scoping of the literature, it was determined that the MeSH term “informatics” lowered the precision of our search strategy (ie, returned irrelevant records) and greatly expanded the scope of literature to be reviewed. As such, exploded terms (ie, retrieving results under the selected subject heading and all of the more specific terms listed below it in the tree) under the MeSH term “medical informatics,” including “health information exchange,” and fully exploded terms under “medical informatics applications” were applied. Relevant MeSH terms used included “decision-making,” “computer-assisted,” “decision support techniques,” “computer simulation,” “clinical information systems,” and “information systems.” Similarly, due to imprecision, the “information technology” MeSH term and the “digital health” keyword were replaced with specific relevant examples for this study. The full search strategies created to support this scoping review protocol are provided in Tables S1 to S3 in Multimedia Appendix 1.
Screening Process
Screening of each title and abstract and each full text was performed by a single reviewer for relevance against the inclusion/exclusion criteria (Table S4 in Multimedia Appendix 1).
Studies with a population exclusively limited to other types of clinicians, including allied health professionals (eg, dental hygienists, diagnostic medical sonographers, dietitians, medical assistants, medical technologists, occupational therapists, physical therapists, radiographers, respiratory therapists, and speech-language pathologists), dentists, and counselors were excluded.
Relevant AI-based tools could be used in any setting of clinical practice (eg, outpatient, inpatient, ambulatory care, critical care, and long-term care), with a focus on tools incorporating machine learning, natural language processing, deep learning, or neural networks. Studies were excluded if the technology did not incorporate a relevant AI-based tool, if the methods did not explicitly define the type of AI methodology incorporated, or if the AI was not machine learning, natural language processing, deep learning, or a neural network. Studies on robotics (eg, robotic surgery) were excluded unless AI was a noted part of the technology.
To identify studies that specified competencies and measured performance related to the use of AI by health care professionals, the inclusion criteria (Table S4 in Multimedia Appendix 1) were limited to the 6 professional education domains of competence (ie, patient care, medical knowledge or knowledge for practice, professionalism, interpersonal and communication skills, practice-based learning and improvement, and systems-based practice) or Entrustable Professional Activities and performance. Studies were excluded if they did not report on competency-based clinical education by providing either an evaluation of a program and its outcomes related to learner achievement; a framework for assessing competency, including a performance level (ie, appraisal) for each competency; or information related to instructional design, skills validation, or attitudes related to competency mastery.
The results were tracked in DistillerSR [17]. Additionally, a validated AI-based prioritization tool embedded in DistillerSR was used to support the single screening of titles and abstracts, allowing the screening approach to be modified or stopped once a true recall of 95% was achieved [18].
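A recall-targeted stopping rule of this kind can be sketched generically. DistillerSR's actual algorithm is not public, so the function below is a hypothetical illustration only: records are screened in descending model-score order, and screening stops once the relevant records found reach 95% of an externally estimated total:

```python
# Hypothetical sketch of a recall-targeted stopping rule for prioritized
# title/abstract screening. Not DistillerSR's actual (proprietary) method.

def screen_until_recall(ranked_labels, est_total_relevant, target=0.95):
    """ranked_labels: 1 = relevant, 0 = not relevant, in priority order.
    Returns the number of records screened when target recall is reached."""
    found = 0
    for i, label in enumerate(ranked_labels, start=1):
        found += label
        if found >= target * est_total_relevant:
            return i
    return len(ranked_labels)  # target never reached: screen everything

# Toy example: 10 relevant records, most concentrated near the top of
# the ranking, followed by a long irrelevant tail.
labels = [1] * 8 + [0] * 50 + [1] * 2 + [0] * 40
print(screen_until_recall(labels, est_total_relevant=10))  # -> 60
```

In this toy ranking, screening stops after record 60, sparing the reviewer the final 40 irrelevant records; the efficiency gain depends entirely on how well the prioritization model front-loads the relevant citations.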
Data Extraction
Data were abstracted into standardized forms (Table S5 in Multimedia Appendix 1) for synthesis and thematic analysis by 1 reviewer, and the content was examined for quality and completeness by a second reviewer, assuring that each included manuscript was dually reviewed. Abstraction for clinical education outcomes focused on how the necessary clinician competencies were described and measured. Conflict resolution was provided by consensus agreement.
Study Quality
Study quality was assessed by dual review using the Oxford levels of evidence [19].
Results
Search Outcomes
Literature searches yielded 3476 unique citations (Figure 1), of which 109 (3.14%) articles were eligible for full-text screening. Upon full-text screening, 4 articles met our inclusion criteria [20-23]. Abstractions of the included studies can be found in Tables 1 and 2 and Table S5 in Multimedia Appendix 1.
Table 1.
Ref. No. | Ref., Year | Design; level of evidencea | Clinical setting | Users of AIb | Stage of clinical education | Stage of clinical use | Total, n (% male) | Age (years), race or ethnicity (%) | Study duration or follow-up |
1 | Bien, 2018 [23] | Modeling and evaluation; 2bc | Large academic hospital; imaging department | Orthopedic surgeons; general radiologists | Practicing physicians | Implementation | N/Rd (N/R) | N/R (N/R) | N/R |
2 | Hirsch, 2015 [22] | Evaluation; 4e | Large private hospital; large academic medical center; nephrology and internal medicine departments | Internal medicine physicians; nephrologists | Graduate medical education (internal medicine residents and interns; nephrology fellows) | Implementation | 12 (N/R) | N/R (N/R) | ~9 months |
3 | Jordan, 2010 [21] | Evaluation; 4 | Large academic hospital; cardiothoracic intensive care department | Intensive care unit nurses | Practicing nurses | Implementation | N/R (N/R) | N/R (N/R) | N/R |
4 | Sayres, 2019 [20] | Experimental 3-arm observational study; 2b | Large academic hospitals, large health systems, and specialist office; ophthalmology department | Ophthalmologists | Practicing physicians | Implementation | 10 (N/R) | N/R (N/R) | N/R |
aAdapted from Oxford Levels of Evidence [19].
bAI: artificial intelligence.
cLevel 2b: individual cohort, modeling, or observational studies.
dN/R: not reported.
eLevel 4: case series or poor-quality cohort studies.
Table 2.
Ref. No. | Ref., Year | Professional education domains of competence | Description (implied or explicit) of competency | User-AIa interface training and description | Performance assessment |
1 | Bien, 2018 [23] | | Implied in methods; improve image interpretation | Training N/Rb; interface not described | Metric N/Pc; evaluate if AI assistance improves expert performance in reading MRId images |
2 | Hirsch, 2015 [22] | | Implied in methods; improve summarization of longitudinal patient record and information processing in preparation for new patients | Training N/R; authenticated user queries the database for a patient and is provided with a visual summary of content containing all visit, note, and problem information | Questionnaire; evaluate time and efficiency in information processing for patient care |
3 | Jordan, 2010 [21] | | Implied in methods; improve handovers in perioperative patient care by reducing communication and informational errors | Training N/R; patient summarization and visualization tool is used as an overlay to the existing electronic patient record | Questionnaire; evaluate if AI-based tool performs better than physicians to provide clinical information and patient status in ICUe handovers |
4 | Sayres, 2019 [20] | | Implied in methods; improve reader sensitivity and increase specificity of fundal images | Readers were provided training and similar instructions for use; interface not described | Metric N/P; evaluate if AI assistance increases severity grades in model predictions by assessing sensitivity and specificity of reader |
aAI: artificial intelligence.
bN/R: not reported.
cN/P: not provided.
dMRI: magnetic resonance imaging.
eICU: intensive care unit.
Study Characteristics
Of the 4 included studies, 3 (75%) were published in the past 5 years, and all 4 were conducted in large academic hospitals [20,22,23]. All AI-based tools in these studies were in a mature implementation phase and were being evaluated with practicing physicians, residents, fellows, or nurses [20-23]. All 4 studies were undertaken to characterize the performance of internally developed niche AI software systems when used by health care professionals in specific practice settings (Table 1) [20-23].
All AI-based tools examined in these studies aimed to enhance an existing process, create new efficiencies, improve an outcome, and ultimately reduce the cost of care [20-23]. Two of the AI-based tools were built on natural language processing frameworks [21,22], and 2 were based on deep learning processes [20,23]. One study provided decision support in interpreting magnetic resonance imaging exams of the knee [23], 1 focused on enhancing clinician performance in detecting diabetic retinopathy [20], 1 on expediting EHR review prior to patient encounters [22], and 1 on enhancing the quality of patient handovers in the intensive care unit [21]. These systems were evaluated with measures of user satisfaction, usability, and performance outcomes. Studies used either observational or minimally controlled cohort designs, in which performance of the human-AI dyad was compared to expert performance or generalist performance alone. Three studies indicated moderate success with the AI interventions [20,21,23], and 1 had a neutral result (Table S2 in Multimedia Appendix 1) [22].
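The reader-performance outcomes used in studies such as the diabetic retinopathy evaluation [20] ultimately reduce to sensitivity and specificity computed against a reference standard. The sketch below uses invented toy data (not the studies' results) to show how assisted and unassisted reads of the same cases would be compared:

```python
# Generic sketch of the sensitivity/specificity comparison underlying
# reader studies; the data below are invented, not from the included studies.

def sensitivity_specificity(decisions, reference):
    """decisions/reference: parallel lists of 1 (disease) / 0 (no disease)."""
    tp = sum(d and r for d, r in zip(decisions, reference))          # true positives
    tn = sum(not d and not r for d, r in zip(decisions, reference))  # true negatives
    fn = sum(not d and r for d, r in zip(decisions, reference))      # missed cases
    fp = sum(d and not r for d, r in zip(decisions, reference))      # false alarms
    return tp / (tp + fn), tn / (tn + fp)

# Toy data: unassisted vs AI-assisted reads of the same 10 cases.
reference  = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
unassisted = [1, 1, 0, 0, 0, 0, 0, 1, 0, 0]
assisted   = [1, 1, 1, 0, 0, 0, 0, 0, 0, 0]

print(sensitivity_specificity(unassisted, reference))  # sensitivity 0.5, specificity ~0.83
print(sensitivity_specificity(assisted, reference))    # sensitivity 0.75, specificity 1.0
```

A reader study then asks whether the assisted pair dominates the unassisted one, and with what confidence given the small panels of readers (10 ophthalmologists in the Sayres study).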
The impacts of advanced data visualization, computerized image interpretation, and personalized just-in-time patient transitions are described in all 4 studies [20-23]. Competencies observed for the use of these AI systems fell within the Accreditation Council for Graduate Medical Education patient care and communication competency domains [24]. However, the specific competencies clinicians required to use these innovations most effectively were not clearly described. Only 1 of the studies mentioned any form of training [20]; the other 3 did not describe any skill development processes for learners. None of the studies specified any need for an understanding of basic AI forms, and none described the background information clinicians received about the development, training, and validation of the tools (Table 2).
Study Quality
Study quality was examined using the Oxford Levels of Evidence [19] to measure the extent to which methodological safeguards against bias (ie, internal study validity) were implemented: 2 studies provided Level 2b evidence as modeling summarizations [20,23], and 2 studies provided Level 4 evidence [21,22]. The overall quality identified is moderate to low, as half of the curated evidence was classified as Level 4.
Discussion
Principal Findings
The volume of studies initially identified for our review confirms predictions about the growth of AI in health care. However, of these nearly 3500 articles, only 4 met the inclusion criteria. This result raises a few questions. Were our requirements overly rigorous, or are the research gaps truly that numerous? Moreover, does this result reinforce concerns about a lack of organizational preparedness?
Failure to address user competencies was the most common reason for study exclusion. Many of the excluded studies compared AI tool performance with that of practicing clinicians (human versus machine), while others used simulations to demonstrate the potential of AI innovations to improve clinical outcomes. Only 4 research studies identified in our search [20-23] addressed the professional competencies demanded by this new AI landscape, and none of them described new AI-related clinical competencies that had to be developed. The limited evidence derived from this review points to a large gap in adequately designed studies that identify competencies for the use of AI-based tools.
While many skills will be specific for the AI intervention being employed, these “questions of competence” are broader than the technical skills needed for use of any one AI tool or type of intelligent support [25]. All health professionals will interact with these types of technologies during their daily practice and should “know what they need to know” before using a new system. System characteristics will profoundly impact patient and clinician satisfaction as well as clinical recommendations, treatment courses, and outcomes, so health system leaders must also know what to know before adopting new technologies across entire health care delivery enterprises. Health care professionals at all levels have the educational imperative to articulate, measure, and iterate competencies for thriving in this evolving interface of smart technology and clinical care.
The implementation of AI into clinical workflows without sufficient education and training processes to apply the technology safely, ethically, and effectively in practice could potentially negatively impact clinical and societal outcomes. Real-world deployment of AI has caused harms due to data bias (eg, algorithms trained using biased or poor-quality data) and societal bias (eg, algorithmic output reflects societal biases of human developer) [6,26]. These biases can inflate prediction performance, confuse data interpretation, and exacerbate existing social inequities (eg, racial, gender, and socioeconomic status). These ethical considerations bring additional responsibilities and oversight of both AI-based tool implementation and its associated data to the clinical care team. The scalability of AI-based tools can also increase the scale of associated risks [8,10]. These difficulties and potential risks should be identified and understood proactively, and skills for clinicians to approach them must be included in any comprehensive training program.
The scarcity of competencies identified by this scoping review reiterates the need to develop and recommend professional competencies for the use of AI-based tools [27,28]. Ideally, these competencies should promote the effective deployment of AI in shared decision-making models that sustain or even enhance compassion, humanity, and trust in clinicians and clinical care [29]. Additionally, user-centered design (more specifically, human-centered design to develop human-centric AI algorithms) should be considered in the development of educational frameworks to support the AI-related competencies required for all clinicians to use these tools effectively in clinical settings. In follow-up to this report, the authors carried out structured interviews with thought leaders to develop such a competency framework, which can subsequently be tested and iteratively refined within both simulated and authentic workplace experiences [30].
Strengths and Limitations
This scoping review has several strengths. First, it is a novel and rigorous synthesis that adhered to PRISMA-ScR standards. Second, its search strategy was comprehensive and inclusive, using keywords and MeSH terms for trainee populations, settings, interventions, and outcomes designed to uncover all currently available evidence. Moreover, the availability of these comprehensive searches will support other studies examining AI and clinical education. Third, this study included the multiple types of health care professionals who might receive training and education for the use of AI in the clinical environment.
Our results should be interpreted in the context of a few limitations. The inclusion of US-only sites limits generalizability to other global settings and health system structures. It also may have eliminated additional salient investigations, although we expect that the dearth of US studies predicts a similar deficit in other countries. Further, due to the heterogeneity of the identified interventions, it would not have been possible to compare one training approach with another. A quality assessment tool was intentionally employed, as we only planned to measure the extent to which methodological safeguards against bias (ie, internal validity) were implemented. A risk-of-bias assessment, by contrast, would have offered a judgment about bias in the estimation of intervention effects, and the appraisal of the evidence may have shifted with this approach [31]. The search cutoff date is another limitation, as other evidence may have been published since May 2020. Other limitations include single screening of titles and abstracts, the English language restriction, and the exclusion of studies reported in gray literature, including conference abstracts. In addition, we excluded articles that investigated the development of robotics-assisted competencies and those that measured the impact of computer vision tools in supporting technical learning in real and simulated settings. Finally, we restricted studies to those that evaluated the use of clinical AI and excluded those supporting other learning processes, although we recognize that tools such as AI-augmented learning management systems will also become a growing part of the health professions education landscape.
Conclusions
While many research studies were identified that investigate the potential effectiveness of using AI technologies in health care, very few address specific competencies that are needed by clinicians to use them effectively. This highlights a critical gap.
Acknowledgments
The authors wish to acknowledge the conceptual contributions of Gretchen P Jackson and Kyu Rhee. This study was supported by a grant from IBM Watson Health.
Abbreviations
- AI
artificial intelligence
- EHR
electronic health records
- MeSH
medical subject headings
- PICOST
population, intervention, control, outcomes, study design, and time frame
- PRISMA
Preferred Reporting Items for Systematic Reviews and Meta-Analyses
- PRISMA-ScR
Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews
Footnotes
Authors' Contributions: KJTC was responsible for methodology, project administration, and supervision. KJTC, RR, and KVG contributed to the validation of the study. KJTC and KVG were responsible for writing—original draft. All authors contributed to the paper’s conceptualization, formal analysis, and writing—review and editing.
Conflicts of Interest: KJTC was employed by IBM Corporation. KVG, LLN, DM, and BMM are employed by Vanderbilt University Medical Center. RR is employed by Vanderbilt University School of Medicine.
References
- 1. Yu K, Beam AL, Kohane IS. Artificial intelligence in healthcare. Nat Biomed Eng. 2018 Oct;2(10):719-731. doi: 10.1038/s41551-018-0305-z
- 2. Patel VL, Shortliffe EH, Stefanelli M, Szolovits P, Berthold MR, Bellazzi R, Abu-Hanna A. The coming of age of artificial intelligence in medicine. Artif Intell Med. 2009 May;46(1):5-17. doi: 10.1016/j.artmed.2008.07.017
- 3. Miller RA. Medical diagnostic decision support systems--past, present, and future: a threaded bibliography and brief commentary. J Am Med Inform Assoc. 1994 Jan;1(1):8-27. doi: 10.1136/jamia.1994.95236141
- 4. Hammond W, Cimino J. Standards in biomedical informatics. In: Biomedical Informatics. Health Informatics. New York, NY: Springer; 2006:265-311.
- 5. Kulikowski CA. Beginnings of artificial intelligence in medicine (AIM): computational artifice assisting scientific inquiry and clinical art - with reflections on present AIM challenges. Yearb Med Inform. 2019 Aug;28(1):249-256. doi: 10.1055/s-0039-1677895
- 6. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019 Jan;25(1):44-56. doi: 10.1038/s41591-018-0300-7
- 7. Wiljer D, Hakim Z. Developing an artificial intelligence-enabled health care practice: rewiring health care professions for better care. J Med Imaging Radiat Sci. 2019 Dec;50(4 Suppl 2):S8-S14. doi: 10.1016/j.jmir.2019.09.010
- 8. Eaneff S, Obermeyer Z, Butte AJ. The case for algorithmic stewardship for artificial intelligence and machine learning technologies. JAMA. 2020 Oct 13;324(14):1397-1398. doi: 10.1001/jama.2020.9371
- 9. Hodges BD. Ones and zeros: medical education and theory in the age of intelligent machines. Med Educ. 2020 Aug;54(8):691-693. doi: 10.1111/medu.14149
- 10. Masters K. Artificial intelligence in medical education. Med Teach. 2019 Sep;41(9):976-980. doi: 10.1080/0142159X.2019.1595557
- 11. Sapci AH, Sapci HA. Artificial intelligence education and tools for medical and health informatics students: systematic review. JMIR Med Educ. 2020 Jun 30;6(1):e19285. doi: 10.2196/19285
- 12. Wartman SA, Combs CD. Medical education must move from the information age to the age of artificial intelligence. Acad Med. 2018;93(8):1107-1109. doi: 10.1097/acm.0000000000002044
- 13. Grant MJ, Booth A. A typology of reviews: an analysis of 14 review types and associated methodologies. Health Info Libr J. 2009 Jun;26(2):91-108. doi: 10.1111/j.1471-1842.2009.00848.x
- 14. Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009 Jul 21;6(7):e1000097. doi: 10.1371/journal.pmed.1000097
- 15. Blumenthal D. Wiring the health system - origins and provisions of a new federal program. N Engl J Med. 2011 Dec 15;365(24):2323-2329. doi: 10.1056/nejmsr1110507
- 16. Health Information Technology for Economic and Clinical Health (HITECH) Act. Health Information Privacy. [accessed 2022-11-02]. https://tinyurl.com/76uvzx6a
- 17. DistillerSR. Evidence Partners. [accessed 2022-11-02]. https://www.evidencepartners.com/
- 18. Hamel C, Kelly SE, Thavorn K, Rice DB, Wells GA, Hutton B. An evaluation of DistillerSR's machine learning-based prioritization tool for title/abstract screening - impact on reviewer-relevant outcomes. BMC Med Res Methodol. 2020 Oct 15;20(1):256. doi: 10.1186/s12874-020-01129-1
- 19. Levels of evidence. Centre for Evidence-Based Medicine. 2009. [accessed 2022-11-02]. http://www.cebm.net/blog/2009/06/11/oxford-centre-evidence-based-medicine-levels-evidence-march-2009/
- 20. Sayres R, Taly A, Rahimy E, Blumer K, Coz D, Hammel N, Krause J, Narayanaswamy A, Rastegar Z, Wu D, Xu S, Barb S, Joseph A, Shumski M, Smith J, Sood AB, Corrado GS, Peng L, Webster DR. Using a deep learning algorithm and integrated gradients explanation to assist grading for diabetic retinopathy. Ophthalmology. 2019 Apr;126(4):552-564. doi: 10.1016/j.ophtha.2018.11.016
- 21. Jordan D, Rose SE. Multimedia abstract generation of intensive care data: the automation of clinical processes through AI methodologies. World J Surg. 2010 Apr;34(4):637-645. doi: 10.1007/s00268-009-0319-5
- 22. Hirsch JS, Tanenbaum JS, Lipsky Gorman S, Liu C, Schmitz E, Hashorva D, Ervits A, Vawdrey D, Sturm M, Elhadad N. HARVEST, a longitudinal patient record summarizer. J Am Med Inform Assoc. 2015 Mar;22(2):263-274. doi: 10.1136/amiajnl-2014-002945
- 23. Bien N, Rajpurkar P, Ball RL, Irvin J, Park A, Jones E, Bereket M, Patel BN, Yeom KW, Shpanskaya K, Halabi S, Zucker E, Fanton G, Amanatullah DF, Beaulieu CF, Riley GM, Stewart RJ, Blankenberg FG, Larson DB, Jones RH, Langlotz CP, Ng AY, Lungren MP. Deep-learning-assisted diagnosis for knee magnetic resonance imaging: development and retrospective validation of MRNet. PLoS Med. 2018 Nov;15(11):e1002699. doi: 10.1371/journal.pmed.1002699
- 24. Edgar L, McLean S, Hogan SO, Hamstra S, Holmboe ES. The Milestones Guidebook. ACGME. 2020. [accessed 2022-11-02]. https://www.acgme.org/globalassets/milestonesguidebook.pdf
- 25. Hodges B, Lingard L. The Question of Competence: Reconsidering Medical Education in the Twenty-First Century. Ithaca, NY: Cornell University Press; 2012.
- 26. Gerke S, Minssen T, Cohen G. Ethical and legal challenges of artificial intelligence-driven healthcare. In: Artificial Intelligence in Healthcare. New York, NY: Academic Press; 2020:295-336.
- 27. He J, Baxter SL, Xu J, Xu J, Zhou X, Zhang K. The practical implementation of artificial intelligence technologies in medicine. Nat Med. 2019 Jan;25(1):30-36. doi: 10.1038/s41591-018-0307-0
- 28. Matheny ME, Whicher D, Thadaney Israni S. Artificial intelligence in health care: a report from the National Academy of Medicine. JAMA. 2020 Feb 11;323(6):509-510. doi: 10.1001/jama.2019.21579
- 29. Kerasidou A. Artificial intelligence and the ongoing need for empathy, compassion and trust in healthcare. Bull World Health Organ. 2020 Apr 1;98(4):245-250. doi: 10.2471/BLT.19.237198
- 30. Russell LL, Patel M, Garvey KM, Craig KJT, Jackson GP, Moore D, Miller B. Probably want to know a bit more about the magic: competencies for the use of artificial intelligence tools by healthcare workers in clinical settings. Health Professions Education Research Day. Nashville, TN: Vanderbilt University School of Medicine; December 3, 2021.
- 31. Furuya-Kanamori L, Xu C, Hasan SS, Doi SA. Quality versus risk-of-bias assessment in clinical research. J Clin Epidemiol. 2021 Jan 13;129(2):172-175. doi: 10.1016/j.jclinepi.2020.09.044
Associated Data
Supplementary Materials
Supplementary tables.