Abstract
Purpose
To develop, implement, and evaluate feedback from an artificial intelligence (AI) workshop for radiology residents, designed as a condensed introduction to AI fundamentals suitable for integration into an existing residency curriculum.
Materials and Methods
A 3-week AI workshop was designed by radiology faculty, residents, and AI engineers. The workshop was integrated into curricular academic half-days of a competency-based medical education radiology training program. The workshop consisted of live didactic lectures, literature case studies, and programming examples for consolidation. Learning objectives and content were developed for foundational literacy rather than technical proficiency. Identical prospective surveys were conducted before and after the workshop to gauge the participants’ confidence in understanding AI concepts on a five-point Likert scale. Results were analyzed with descriptive statistics and Wilcoxon rank sum tests to evaluate differences.
Results
Twelve residents participated in the workshop, with 11 completing the survey. When asked whether the workshop improved their AI knowledge, residents reported a mean score of 4.0 ± 0.7 (SD), indicating agreement. Confidence in understanding AI concepts increased following the workshop for 16 of 18 (89%) comprehension questions (P value range: .001 to .04 for questions with increased confidence).
Conclusion
An introductory AI workshop was developed and delivered to radiology residents. The workshop provided a condensed introduction to foundational AI concepts, developed positive perception, and improved confidence in AI topics.
Keywords: Medical Education, Machine Learning, Postgraduate Training, Competency-based Medical Education, Medical Informatics
Supplemental material is available for this article.
© RSNA, 2023
Summary
A condensed artificial intelligence (AI) workshop implemented within curricular time increased confidence of residents in understanding AI concepts and may support future initiatives in integrating AI training within residency programs.
Key Points
■ A 3-week artificial intelligence (AI) workshop for residents, containing didactic lectures, case studies, and programming examples to introduce foundational concepts, has been developed and implemented within curricular time of a radiology residency program.
■ Eleven resident participants responded to a pre- and postworkshop survey assessing their confidence in understanding 18 AI topics and showed improved confidence in 16 of 18 (89%) topics in the postworkshop survey.
■ The pilot workshop can be implemented within existing radiology residency curricula, as it addresses the challenge of selecting sufficiently thorough AI content without placing an undue burden on residents.
Introduction
The field of artificial intelligence (AI) in medicine has experienced rapid growth in the past decade, with an increasing number of applications being introduced into radiology practice (1). Radiologists are entrusted with integrating medical imaging AI tools into decision-making and therefore require sufficient AI literacy to evaluate their strengths and limitations. Hence, there has been frequent advocacy in the literature for radiology trainees to receive AI training (2–4). Collado-Mesa et al (5) reported that 97.1% of surveyed radiology trainees planned to pursue further AI training, but only 2.9% had received any formal instruction. While workshops have been established to provide AI training for practicing radiologists within the framework of continuing medical education, few structured programs exist that are integrated into medical school or residency training.
There is currently no standardized AI curriculum for radiology residents or medical trainees, causing heterogeneity in the scope of existing AI training initiatives. For instance, the University of Toronto offers a 14-month Computing for Medicine course focused on a wide breadth of foundational computer science knowledge (6), and Harvard Medical School offers extracurricular clinical informatics courses for medical trainees (7). Specific to radiology, the Radiological Society of North America (RSNA) offers multiple extracurricular opportunities, such as a 5-day National Imaging Informatics course that covers fundamental informatics not necessarily related to AI (8). RSNA also provides less didactic demonstrations, such as the self-paced AI Deep Learning Lab, showcasing programming examples online, and shorter workshops such as the Imaging AI in Practice Demonstration. An institution-specific initiative was proposed by Wiggins et al (9), who implemented an 8-month data science pathway elective for three senior radiology residents paired with radiology faculty to learn from fundamental curricula and consolidate knowledge through AI research projects. Lindqwister et al (10) provided a series of lectures and journal clubs over 7 months, focusing on a different machine learning model type each month. Although trainees may participate in such extracurricular initiatives and continuing medical education workshops, AI training as an extracurricular option may be cumbersome, as residents are occupied with clinical duties and mandatory academic learning obligations (11). This approach may be best suited to select radiology trainees interested in pursuing technical AI development, who may favor programs such as the data science pathway of Wiggins et al (9).
A general AI training program to provide all residents with a working knowledge of AI may require a reduction in scope. To reduce the need to invest extracurricular hours, it may be advantageous to integrate AI into protected academic time within the core curriculum and to select only concepts needed for foundational literacy. Richardson and Ojeda (12) proposed a condensed 6-hour “no-code” curriculum consisting of tutorials that use deep learning image classification examples in a preprogrammed online interface. The curriculum was well received, with 91% of residents reporting that it was helpful in understanding AI. While the examples provided are highly relevant and the curriculum length is less burdensome than that of full AI courses, it may be beneficial to also introduce fundamental data science concepts and non–deep learning models to develop a general aptitude in AI.
Designing an AI curriculum for medical trainees has specific challenges related to heterogeneity of background knowledge, selection of content, and instruction method. Traditional AI training builds upon foundational knowledge in linear algebra, multivariate calculus, algorithms, and software construction (13), which are subjects not commonly taught in medical curricula. However, most medical trainees do receive basic biostatistics training targeted toward the critical appraisal of medical literature as part of their undergraduate medical education (14). Therefore, the integration of AI training at the postgraduate level ensures that participants have at least foundational knowledge in statistics.
Last, AI instruction for radiology trainees requires a curricular team with expertise in both computer science and radiology. This may require a multidisciplinary team to select topics that are clinically relevant, to convey the mathematical foundations of AI, and to highlight the strengths and limitations of AI in a particular application. Engaging radiologists and computer scientists separately may not be necessary, particularly if the instructing team is from a clinical research facility with expertise in AI in radiology, which may have the additional strength of increasing relevance for trainees.
To address the need for AI training and the challenges of its implementation, we created and delivered a pilot AI training workshop for diagnostic radiology resident physicians. The workshop was specifically designed by radiology faculty and AI engineers to condense content and develop foundational knowledge for evaluating AI applications in health care. The workshop was integrated into dedicated academic training hours, which is, to our knowledge, the first such initiative in Canada. A survey was conducted before and after the workshop to assess the participants’ baseline demographics, perception of AI, and confidence in understanding course concepts. Ultimately, this work aims to inform and motivate future initiatives to integrate AI teaching for medical trainees into existing teaching curricula.
Materials and Methods
Workshop Structure and Delivery Method
The workshop delivery and prospective survey data collection were approved by the Queen’s University Health Sciences & Affiliated Teaching Hospitals Research Ethics Board (code: RADO-107-21), with written informed consent obtained from all participants for the anonymized survey. All participants were briefed on the data collected in the study, the associated risks, and the process for data disposal.
The workshop was delivered to radiology residents over 3 weeks as part of weekly academic half-days, in which residents are provided with a weekly time slot free of clinical duties for academic learning (15). Although participation was encouraged, attendance was not mandatory, nor was it recorded for an individual’s record. Delivery was virtual because the academic half-days had transitioned to a virtual format during the COVID-19 pandemic, although the workshop format is compatible with in-person instruction as well. The workshop was executed in a residency program with a competency-based medical education (CBME) curriculum (the first CBME diagnostic radiology residency program in Canada), which aims to measure educational progression primarily through the achievement of specific entrustable professional activities, or EPAs, rather than time (16). This was done to integrate AI training into the existing curriculum rather than create an entirely new extracurricular workshop, with the goal of promoting attendance and reducing disruption to individual schedules. Each week, the live workshop session lasted 1 hour and consisted of equal parts didactic lecture, case studies from the literature, case studies from the media, and programming demonstrations. After the first 2 weeks, optional self-study materials, each with an approximate time commitment of 1 hour, were provided, including journal articles and annotated software code. The course structure and an example of a programming demonstration are visualized in Figure 1. The curricular content and programming examples are available in a public online repository at https://github.com/Queens-Radiology-Intro-To-AI/Intro-to-AI.
Figure 1:
(A) Overview of the workshop structure. Each week consists of didactic lectures, consolidated by case studies and programming examples relevant to the topics from the lecture. Supplemental material was provided for participants who wished to further pursue topics beyond the scope of the workshop. (B) Visualization of a programming example. In this specific example, a decision tree classifier is trained to predict malignancy from tumor images. The example highlights data science best practices in organizing input features, feature selection, model optimization, and validation. AI = artificial intelligence.
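For readers without access to the workshop notebooks, the following is a minimal sketch of the workflow described in Figure 1B, assuming scikit-learn; its built-in breast cancer dataset stands in for the tumor-derived features used in the actual example, and the pipeline shown is illustrative rather than the workshop’s own code, which is available in the public repository referenced above.

```python
# Minimal sketch of the Figure 1B workflow: organize input features, select a
# subset, tune a decision tree classifier, and validate it on held-out data.
# Assumes scikit-learn; the built-in breast cancer dataset is a stand-in for
# the tumor-derived features used in the actual workshop notebooks.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

X, y = load_breast_cancer(return_X_y=True)  # features per tumor, benign/malignant labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

# A pipeline keeps feature selection inside the training loop to avoid leakage.
pipeline = Pipeline([
    ("select", SelectKBest(score_func=f_classif)),
    ("tree", DecisionTreeClassifier(random_state=0)),
])

# Model optimization: grid search over the number of selected features and the
# tree depth, scored by cross-validation on the training set only.
search = GridSearchCV(
    pipeline,
    param_grid={"select__k": [5, 10, 20], "tree__max_depth": [2, 3, 4, 5]},
    cv=5,
)
search.fit(X_train, y_train)

# Validation on data never seen during feature selection or tuning.
print("Best parameters:", search.best_params_)
print(classification_report(y_test, search.predict(X_test)))
```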
The goal of reducing didactic content in favor of problem-solving through examples was to promote consolidation of knowledge, similar to the flipped classroom model (17). In total, 5 hours of curricular content were provided. The content was designed by a team of staff radiologists (A.D.C., B.Y.M.K.), radiology residents (A.R., Z.H.), and an AI engineer (R.H.). The live content was delivered by an instructor (R.H.) with a graduate degree in engineering and 5 years of experience in AI.
Learning Objectives
The learning objectives, summarized in Table 1, followed the Bloom taxonomy (18), and content delivery was designed to target the highest taxonomic level possible to promote mastery. However, owing to time constraints and varying levels of prerequisite knowledge, the learning objectives target AI literacy rather than AI proficiency. As a result, foundational principles were emphasized, such as problem definition, validation, and assessment of clinical use. Advanced concepts were delivered with less rigorous objectives. For instance, convolutional neural networks (CNNs) were a topic of interest because of their popularity in the literature. However, as interpreting CNN processing involves matrix multiplication and optimization with gradient descent algorithms (19), participants were only expected to recognize when a CNN was presented and to understand how to evaluate the performance of the model output.
Table 1:
A Summary of Learning Objectives for the Artificial Intelligence Training Workshop with Bloom Taxonomic Levels in Parentheses
Content
Detailed lecture, case study, and programming topics in the curriculum are summarized in Table 2, with corresponding learning objectives from Table 1 in parentheses. Briefly, fundamental AI concepts were introduced in lectures, particularly topics required to analyze the clinical use of an AI application, such as understanding model validation. During case studies, a particular journal article on AI technology was introduced, and the strengths and limitations of the application were evaluated. For instance, one case study computed radiomic features and used the features as input to a logistic regression classifier (20). The goal of this case study was to identify best practices in data science, such as the documentation of preprocessing, feature selection to prevent overfitting, and resampling methods for validation. Finally, programming demonstrations were provided as template code in the Google Colaboratory environment (Google), where participants could execute code within their own browsers. This allowed participants to visualize the data processing of an AI model without needing to install software packages, manage dependencies, or troubleshoot runtime errors. The programming examples reflected the topics discussed during each session. For instance, one topic taught was resampling methods to generate confidence intervals (CIs) when validating an AI model; the corresponding programming example demonstrated cross-validation and how to plot the range of cross-validated accuracies when retraining an AI model with different subsets of data.
Table 2:
A Summary of the Curriculum Delivered to the Resident Physicians with Associated Learning Objectives from Table 1 in Parentheses
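As a concrete illustration of the resampling demonstration described above, the following is a minimal sketch assuming scikit-learn: it retrains a logistic regression classifier on repeated cross-validation folds and summarizes the spread of validation accuracies. The built-in breast cancer dataset stands in for the radiomic features of the case study, and the specific resampling scheme is an illustrative choice rather than the workshop’s own code, which is available in the public repository.

```python
# Minimal sketch of the cross-validation demonstration: retrain a logistic
# regression classifier on different subsets of the data and summarize the
# spread of validation accuracies (the workshop example plotted this range;
# here it is reported numerically). Assumes scikit-learn; the built-in breast
# cancer dataset is a stand-in for the radiomic features of the case study.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Standardize features before logistic regression, inside the pipeline so the
# scaler is refit on each training fold (no information leaks from the test fold).
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Repeated stratified k-fold: each repetition reshuffles the folds, so the
# resulting accuracies reflect variability across different data subsets.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=20, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")

low, high = np.percentile(scores, [2.5, 97.5])
print(f"Mean cross-validated accuracy: {scores.mean():.3f}")
print(f"Empirical 95% interval: [{low:.3f}, {high:.3f}]")
```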
Data Collection
To assess the effectiveness of the workshops, participants were asked to complete two surveys: a preworkshop survey and a postworkshop survey. The surveys were anonymous. The preworkshop survey consisted of two demographic questions on level of training, 18 comprehension questions assessing the participant’s perceived confidence in a certain concept, and three perception questions surveying general opinions on AI in their careers. The postworkshop survey was nearly identical, containing two extra questions on perception of the workshop. The comprehension questions were rated on a five-point Likert scale, where five represents strong agreement and one represents strong disagreement. A list of the survey questions and data format is presented in Table S1.
Statistical Analysis
The preworkshop and postworkshop surveys were analyzed to generate first-order summary statistics. Wilcoxon rank sum tests, a nonparametric significance test suitable for small sample sizes, were conducted to evaluate differences between preworkshop and postworkshop survey responses. A P value less than .05 was used to indicate a statistically significant difference. All analyses were performed in MATLAB R2020b (MathWorks).
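The analysis itself was performed in MATLAB; the following is an equivalent sketch in Python using SciPy, with hypothetical Likert responses as placeholders rather than the study data.

```python
# Equivalent sketch of the pre/post comparison in Python (the study used MATLAB).
# The Likert responses below are hypothetical placeholders, not the study data.
import numpy as np
from scipy.stats import ranksums

# Hypothetical five-point Likert ratings for one comprehension question.
pre_scores = np.array([2, 3, 2, 3, 3, 2, 4, 3, 2, 3, 3])
post_scores = np.array([4, 4, 3, 5, 4, 4, 5, 4, 3, 4, 4])

# Wilcoxon rank sum test comparing the two sets of responses;
# P < .05 taken as statistically significant, as in the study.
statistic, p_value = ranksums(pre_scores, post_scores)
print(f"Wilcoxon rank sum statistic = {statistic:.2f}, P = {p_value:.3f}")
```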
Results
Overview of Participant Responses
A total of 12 residents participated in the workshop, with 11 providing survey responses. Figure 2 displays the distribution of demographic survey questions and responses regarding the perception of general knowledge improvement during the workshop and of interest in AI training. Table 3 summarizes the mean scores of the survey responses, and Figure 3 shows the distribution of Likert scores comparing preworkshop and postworkshop answers.
Figure 2:
Baseline characteristics of all the participants and postworkshop survey responses indicate resident confidence and interest. The participants included residents from postgraduate year (PGY) 1 to 4. None of the participants in the workshop had previous undergraduate training or professional experience in artificial intelligence (AI). Most participants reported that knowledge somewhat or strongly improved and were somewhat or very interested in continuing AI training in the future.
Table 3:
A Summary of the Mean Scores of Preworkshop and Postworkshop Survey Responses
Figure 3:
Survey responses before and after the workshop. The questions gauge the confidence of participants in a particular topic, with three questions gauging the perception of the impact, importance, and interest of artificial intelligence. Responses were rated on a five-point Likert scale: 1 = strongly disagree, 2 = somewhat disagree, 3 = neutral, 4 = somewhat agree, and 5 = strongly agree. AI = artificial intelligence, DL = deep learning, ML = machine learning.
Comparison of Pre- and Postworkshop Scores
No participants had previous undergraduate or professional training in AI. Six of 11 (55%) respondents felt their knowledge somewhat improved, and four of 11 (36%) felt their knowledge strongly improved after the course. Participant confidence increased for 16 of the 18 fundamental concepts after completion of the workshop. The two concepts without a confidence increase were “I understand what steps are required for data cleaning” and the general statement “I understand what artificial intelligence is.” Most participants expressed interest in continuing AI education, with five of 11 (45%) somewhat interested and four of 11 (36%) strongly interested. We found no evidence of a difference between pre- and postworkshop surveys in the perception of concern over the impact of AI on careers, the importance of AI education, or interest in continuing education.
Discussion
In this study, we describe the development and implementation of an AI workshop for radiology residents. Overall, confidence ratings increased, with statistical significance, for all but two of the AI concepts surveyed. The workshop was delivered by a curricular team of staff radiologists and engineers as part of dedicated academic time for radiology residents, providing a condensed introduction to AI.
Although there was no statistically significant change in general opinions of AI, this was likely due to the high baseline enthusiasm for AI and its perceived importance reported in the preworkshop survey. Generally, feedback from residents was positive, with most agreeing that the workshop improved their AI knowledge. Particular praise was given to the use of relevant medical literature, which illustrated how one may analyze the appropriateness of a new AI model for a specific clinical problem. This supports the notion that training in this format can add value to AI literacy for future residents.
To the best of our knowledge, this workshop is the first implementation of AI teaching within a CBME residency curriculum. This is different from existing AI courses, which are external to institutional curricula and require extracurricular time commitment to attend. With future iterations and success, this work can provide support and lay the foundation for AI training to be part of residency curriculum. For instance, such training may be implemented as an accredited competency recognized by postgraduate regulatory bodies.
A particular challenge was designing content with sufficient coverage of topics without a cumbersome time commitment for the diagnostic radiology residents. A thorough course on AI, similar to programs for engineering students with consolidation tasks such as building custom software, would likely improve understanding but is not feasible given time constraints. In addition, involvement with AI varies widely across residents: some may be actively pursuing AI research, while others anticipate a user role. As a compromise, the goal of our workshop was to develop foundational literacy in select topics. Concepts used in assessing the performance of an AI application, such as problem definition, data types, and validation, were emphasized. Selection of specific applications, however, was a greater challenge. For instance, CNNs are increasing in popularity, particularly in the radiology literature, as they comprise the top-performing models in public image classification challenges (21). While it was not feasible to train participants to the level of understanding how a CNN learns through convolutional kernels, we sought to improve familiarity with this popular topic. The compromise was to present a general intuition of CNN input and output data types and to review journal articles implementing CNNs so that participants could recognize when a CNN is being used in a health care tool. The goal was that participants could then apply foundational principles, such as assessing whether proper validation practices were used, given the limitations of CNNs relating to sensitivity and interpretability.
Our study had several limitations. First, participants were limited to a single postgraduate program at a single university. This allowed for easier scheduling of protected academic time but resulted in a smaller sample size. To address this limitation, repeating the workshop at different institutions with additional residents can provide more statistical power in assessing its feasibility; this will require multi-institutional coordination to select content of appropriate breadth, depth, and delivery.
Second, knowledge retention is unknown, given the lack of immediate applications of AI concepts for most of the participants. Medical education employs strategies of reintroducing concepts in clinical rotations to consolidate knowledge (22), and existing training programs for radiology residents have collected qualitative feedback to assess their reception. Wiggins et al (9) reported “very positive” feedback, and Richardson and Ojeda (12) reported that most participants felt “the course taught them useful things about deep learning.” The feedback on our workshop was similarly positive, and we additionally measured confidence in specific concepts more quantitatively. Lindqwister et al (10) administered a five-item Likert scale assessing confidence before and after each lecture and observed a statistically significant increase in confidence for every lecture. Our study differed in that we assessed specific concepts tied to a detailed list of learning objectives, but it produced similar results.
Last, as attendance was not mandatory, there is susceptibility to selection bias in that more enthusiastic residents may have attended. This limits the applicability of the results to the general resident population. However, it should be noted that there are a total of 15 residents in the program and most attended the workshop. There also remains a lack of assessment data, in our study and in the literature, for evaluating changes in understanding. Although assessment data can be obtained through traditional methods of examinations or supervised demonstrations, it may be burdensome to add these assessments onto an already rigorous residency curriculum. For a topic such as AI, assessment of confidence was chosen as a preliminary method. Confidence assessment has been employed in the past as an analog to assessing key skills (23), providing relative measurements of a learner’s change in skill over time, but it has limited use as an absolute measure of skill. One solution may be to assess resident knowledge through consolidating activities, such as resident-led journal clubs or having a resident analyze an AI tool deployed by their hospital, examining details of the model such as data dimensionality, training, testing, validation, and limitations. Practical demonstrations could also be used, similar to Richardson and Ojeda (12), where a graphical user interface allows residents to select and visualize the training of different models and stimulate discussion of limitations when applying the model in clinical practice.
In summary, we have implemented an introductory AI training workshop for radiology residents integrated into the teaching curriculum at a CBME institution. The workshop aimed to provide foundational knowledge for assessing AI in medicine in the future and improved self-reported confidence on fundamental topics in AI. Similar workshops may be integrated into standardized curricula in the future to provide institutional support and accreditation to motivate AI literacy.
Acknowledgments
The authors acknowledge the Department of Diagnostic Radiology at Queen’s University for institutional support.
Supported by the Department of Radiology Research Grant at Queen’s University.
Disclosures of conflicts of interest: R.H. No relevant relationships. A.R. Radiology: Artificial Intelligence trainee editorial board member. Z.H. No relevant relationships. T.L. Unpaid volunteer director for LEADs Employment Services Board of Directors. A.D.C. No relevant relationships. B.Y.M.K. Queen’s University Department of Radiology Research Grant.
Abbreviations:
- AI
- artificial intelligence
- CBME
- competency-based medical education
- CNN
- convolutional neural network
References
- 1. Kulkarni S, Seneviratne N, Baig MS, Khan AHA. Artificial Intelligence in Medicine: Where Are We Now? Acad Radiol 2020;27(1):62–70.
- 2. Wood MJ, Tenenholtz NA, Geis JR, Michalski MH, Andriole KP. The Need for a Machine Learning Curriculum for Radiologists. J Am Coll Radiol 2019;16(5):740–742.
- 3. Grunhut J, Wyatt AT, Marques O. Educating Future Physicians in Artificial Intelligence (AI): An Integrative Review and Proposed Changes. J Med Educ Curric Dev 2021;8:23821205211036836.
- 4. Huisman M, Ranschaert E, Parker W, et al. An international survey on AI in radiology in 1,041 radiologists and radiology residents part 1: fear of replacement, knowledge, and attitude. Eur Radiol 2021;31(9):7058–7066.
- 5. Collado-Mesa F, Alvarez E, Arheart K. The Role of Artificial Intelligence in Diagnostic Radiology: A Survey at a Single Radiology Residency Training Program. J Am Coll Radiol 2018;15(12):1753–1757.
- 6. Windish DM, Huot SJ, Green ML. Medicine residents’ understanding of the biostatistics and results in the medical literature. JAMA 2007;298(9):1010–1022.
- 7. McCoy LG, Nagaraj S, Morgado F, Harish V, Das S, Celi LA. What do medical students actually need to know about artificial intelligence? NPJ Digit Med 2020;3(1):86.
- 8. National Imaging Informatics Curriculum and Course. Radiological Society of North America. https://www.rsna.org/education/trainee-resources/national-imaging-informatics-curriculum-and-course. Published 2022. Accessed October 15, 2022.
- 9. Wiggins WF, Caton MT, Magudia K, et al. Preparing Radiologists to Lead in the Era of Artificial Intelligence: Designing and Implementing a Focused Data Science Pathway for Senior Radiology Residents. Radiol Artif Intell 2020;2(6):e200057.
- 10. Lindqwister AL, Hassanpour S, Lewis PJ, Sin JM. AI-RADS: An Artificial Intelligence Curriculum for Residents. Acad Radiol 2021;28(12):1810–1816.
- 11. Kijima S, Tomihara K, Tagawa M. Effect of stress coping ability and working hours on burnout among residents. BMC Med Educ 2020;20(1):219.
- 12. Richardson ML, Ojeda PIA. A “Bumper-Car” Curriculum for Teaching Deep Learning to Radiology Residents. Acad Radiol 2022;29(5):763–770.
- 13. Kolachalama VB, Garg PS. Machine learning and medical education. NPJ Digit Med 2018;1(1):54.
- 14. Wiens J, Saria S, Sendak M, et al. Do no harm: a roadmap for responsible machine learning for health care. Nat Med 2019;25(9):1337–1340. [Published correction appears in Nat Med 2019;25(10):1627.]
- 15. Hames K, Patlas M, Duszak R. Barriers to Resident Research in Radiology: A Canadian Perspective. Can Assoc Radiol J 2018;69(3):260–265.
- 16. Kwan BYM, Mbanwi A, Cofie N, et al. Creating a Competency-Based Medical Education Curriculum for Canadian Diagnostic Radiology Residency (Queen’s Fundamental Innovations in Residency Education)-Part 1: Transition to Discipline and Foundation of Discipline Stages. Can Assoc Radiol J 2021;72(3):372–380.
- 17. Prober CG, Khan S. Medical education reimagined: a call to action. Acad Med 2013;88(10):1407–1410.
- 18. Adams NE. Bloom’s taxonomy of cognitive learning objectives. J Med Libr Assoc 2015;103(3):152–153.
- 19. Mutasa S, Sun S, Ha R. Understanding artificial intelligence based radiology studies: CNN architecture. Clin Imaging 2021;80:72–76.
- 20. Bogowicz M, Jochems A, Deist TM, et al. Privacy-preserving distributed learning of radiomics to predict overall survival and HPV status in head and neck cancer. Sci Rep 2020;10(1):4542.
- 21. Morid MA, Borjali A, Del Fiol G. A scoping review of transfer learning research on medical image analysis using ImageNet. Comput Biol Med 2021;128:104115.
- 22. Kumaravel B, Stewart C, Ilic D. Development and evaluation of a spiral model of assessing EBM competency using OSCEs in undergraduate medical education. BMC Med Educ 2021;21(1):204.
- 23. Bray A, Byrne P, O’Kelly M. A Short Instrument for Measuring Students’ Confidence with ‘Key Skills’ (SICKS): Development, Validation and Initial Results. Think Skills Creat 2020;37:100700.
- 24. Kuppermann N, Holmes JF, Dayan PS, et al. Identification of children at very low risk of clinically-important brain injuries after head trauma: a prospective cohort study. Lancet 2009;374(9696):1160–1170.
- 25. Rodriguez-Ruiz A, Lång K, Gubern-Merida A, et al. Stand-alone artificial intelligence for breast cancer detection in mammography: comparison with 101 radiologists. J Natl Cancer Inst 2019;111(9):916–922.
- 26. van Leeuwen KG, Schalekamp S, Rutten MJCM, van Ginneken B, de Rooij M. Artificial intelligence in radiology: 100 commercially available products and their scientific evidence. Eur Radiol 2021;31(6):3797–3804.
- 27. Laguarta J, Hueto F, Subirana B. COVID-19 Artificial Intelligence Diagnosis Using Only Cough Recordings. IEEE Open J Eng Med Biol 2020;1:275–281.
- 28. Li L, Qin L, Xu Z, et al. Using Artificial Intelligence to Detect COVID-19 and Community-acquired Pneumonia Based on Pulmonary CT: Evaluation of the Diagnostic Accuracy. Radiology 2020;296(2):E65–E71.
- 29. Rajpurkar P, Irvin J, Ball RL, et al. Deep learning for chest radiograph diagnosis: a retrospective comparison of the CheXNeXt algorithm to practicing radiologists. PLoS Med 2018;15(2):e1002686.
- 30. Collins GS, Reitsma JB, Altman DG, et al. Transparent reporting of a multivariable prediction model for individual prognosis or diagnosis (TRIPOD): the TRIPOD Statement. Ann Intern Med 2015;162(1):55–63.
- 31. Wolff RF, Moons KGM, Riley RD, et al. PROBAST: A Tool to Assess the Risk of Bias and Applicability of Prediction Model Studies. Ann Intern Med 2019;170(1):51–58.
- 32. Yu AC, Mohajer B, Eng J. External Validation of Deep Learning Algorithms for Radiologic Diagnosis: A Systematic Review. Radiol Artif Intell 2022;4(3):e210064.