Editorial. 2023 Oct 24;309(1):e231114. doi: 10.1148/radiol.231114

The Future of AI and Informatics in Radiology: 10 Predictions

Curtis P. Langlotz
PMCID: PMC10623186  PMID: 37874234

See also the editorial by Chang in this issue.

The computer as an intellectual tool can reshape the present system of health care, fundamentally alter the role of the physician, and profoundly change the nature of medical manpower recruitment and medical education.

–William B. Schwartz, 1970 (1)

Introduction

Artificial intelligence (AI) and informatics are transforming radiology. Ten years ago, no expert would have predicted today’s vibrant radiology AI industry with over 100 AI companies and nearly 400 radiology AI algorithms cleared by the U.S. Food and Drug Administration (FDA). And less than a year ago, not even the savviest prognosticators would have believed that these algorithms could produce poetry, win fine art competitions, and pass the medical boards. Now, as we celebrate the centennial of our specialty’s flagship journal, Radiology, these accomplishments are part of our reality.

Moments of daunting transformation can liberate us to dream big. With that in mind, here are 10 predictions for the future of AI and informatics in radiology (Table).

Ten Predictions for the Future of AI and Informatics in Radiology

1. Radiology will continue to lead the way for AI in medicine.
2. Virtual assistants will draft radiology reports and address radiologist burnout.
3. An intelligent image interpretation cockpit will become as pervasive as email.
4. Highly sensitive AI will reduce the need for human image interpretation.
5. LLMs will transform patients' understanding of radiology.
6. Multimodal AI will discover new uses for diagnostic images.
7. Online image exchange will reduce health care costs by over $200 million annually.
8. Reformed regulations will accelerate AI-based improvements in care delivery.
9. A widely available petabyte-scale imaging database will unleash unbiased AI.
10. Flexible and collaborative academic organizations will lead AI innovation.

1. Radiology Will Continue to Lead the Way for AI in Medicine

In the current health care market, AI tools for radiologists dominate. Seventy-five percent of the over 500 FDA-cleared AI algorithms target radiology practice, increasing from 70% in 2021 (2). Radiology is a ripe target for AI because imaging data have been digital for decades. Our exabytes of digital imaging data are more objective than other clinical data because they are governed by the laws of physics rather than by subjective signs and symptoms expressed in a narrative note. And pairing images with descriptive text reports makes them ideal for the creation of accurate machine learning algorithms (3).

About 4% of diagnostic interpretations contain clinically significant errors (4). Errors arise because many image interpretation tasks are not well suited for human capabilities. For example, finding a small nodule nestled among the pulmonary vessels is akin to a “needle in a haystack.” Accurately quantifying abnormalities, such as the size of an irregularly shaped tumor or the amount of calcium in the coronary arteries, can also challenge human capabilities. Correlating complex multimodal clinical data sources, such as radiology, genomics, and pathology, may be beyond human capabilities. But AI algorithms can readily perform these tasks. Thus, AI researchers will continue to develop these new capabilities as a complement to human perception.

2. Virtual Assistants Will Draft Radiology Reports and Address Radiologist Burnout

Teaching trainees can be one of the most rewarding aspects of academic radiology, but teaching at the workstation can slow the productivity of attending radiologists (5). Yet trainees also provide powerful productivity advantages: they preview the imaging study, draft a report, edit it based on feedback, and route it to the attending radiologist for final signature, enabling attending radiologists to focus on the big picture—putting the findings in appropriate context (6).

Virtual assistants will bring these same efficiencies to radiologists who do not have the privilege of working with trainees. The combination of computer vision algorithms, which analyze images to identify findings, and large language models (LLMs), which are trained on massive data sets to generate text, will make this possible (Figure). Some computer vision algorithms can detect more than 70 findings in a single imaging study (7). Prompted by this list of findings, an LLM will draft a radiology report (8). Finally, the radiologist will edit and sign the report. The AI models could be periodically retrained from feedback obtained by comparing draft and final reports.
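The two-stage pipeline described above can be sketched in a few lines of Python. Everything here is illustrative: build_prompt and generate_draft are hypothetical stand-ins for a computer vision model's finding list and an LLM call, not any vendor's API.

```python
# Hypothetical sketch of the virtual-assistant pipeline: a computer vision
# model lists findings, a language model turns that list into a draft
# report, and the radiologist edits and signs. Both model calls are stubbed.

def build_prompt(findings: list[str]) -> str:
    """Turn detected findings into a report-drafting prompt for an LLM."""
    bullet_list = "\n".join(f"- {f}" for f in findings)
    return (
        "Draft the FINDINGS section of a chest radiograph report "
        "covering only these detected findings:\n" + bullet_list
    )

def generate_draft(prompt: str) -> str:
    """Stub standing in for an LLM; a real system would call a model here."""
    # For illustration, echo each finding back as a plain sentence.
    lines = [l[2:] for l in prompt.splitlines() if l.startswith("- ")]
    return " ".join(f"{l.capitalize()}." for l in lines)

findings = ["right lower lobe opacity", "small left pleural effusion"]
draft = generate_draft(build_prompt(findings))
# The radiologist edits and signs `draft`; logged edits between draft and
# final report supply the feedback for periodic retraining described above.
```

The key design point is the loop: the delta between the drafted and the signed report is itself training signal for both models.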

Figure: Diagram shows the architecture of a radiology virtual assistant, incorporating two artificial intelligence capabilities: computer vision, which detects findings in images, and natural language generation, which produces text from a prompt. The letters represent the matrix mathematics that are performed within a neural network. (The latest neural network architectures, such as the transformers used by large language models, differ significantly from the abstract schematics shown here.) The system would present the radiologist with a draft report for editing and signature.

These AI-based virtual assistants will not only relieve the drudgery of dictating long radiology reports but will also upskill advanced practice providers to address the chronic radiologist shortage. These changes will occur first for repetitive studies, such as bedside chest radiographs, and then for a wide variety of common imaging studies.

3. An Intelligent Image Interpretation Cockpit Will Become as Pervasive as Email

When picture archiving and communication systems first became available, they required custom monitors, dedicated high-performance networks, and expensive bespoke storage devices. Early speech recognition systems required powerful desktop computers. And most medical record systems were still on paper. These disparate technologies, today the mainstay of the radiologist’s desktop, evolved separately and have never worked together well. Thus, it is not surprising that radiologists often work with disjointed system integrations and clashing user interfaces.

Recent technological progress makes a unified system possible. See, for example, the editorial on the maturing of imaging informatics by Chang in this issue of Radiology (9). Cloud computing and storage are just as secure as hospital data centers for storing health care data. Siri, Alexa, and Google already use cloud-based speech recognition. And AI algorithms can be deployed easily in the cloud. This progress sets the stage for a unified radiology workstation, with image display, reporting, and AI seamlessly integrated into a cloud-based cockpit (10) that dramatically improves radiologist efficiency. An automatically protocoled imaging study will arrive in the radiologist’s work queue preprocessed by AI algorithms, with patient history summarized, organs segmented and measured, abnormalities highlighted, and a report drafted. The radiologist will modify the report by using speech recognition or by clicking the image and choosing from a structured list of suggested imaging findings. These cloud-native capabilities will make virtual collaboration with clinical colleagues, including live video, immediate and seamless.

4. Highly Sensitive AI Will Reduce the Need for Human Image Interpretation

Until the advent of modern machine learning methods in the past few years, it was unthinkable that some radiology studies would never be viewed by human eyes. But many electrocardiogram (11) and Papanicolaou test (12) interpretations have been human-free for years. Recent research predicts that workflows combining human and AI expertise can forgo human review of 63% of screening mammograms while increasing overall accuracy (13). Because screening is only a small part of radiologist work, these systems may slow the growth of the radiologist workforce but will not displace radiologists.
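The kind of human–AI workflow the cited mammography study evaluates reduces to a simple routing rule: only exams the model calls normal with very high confidence skip human review. The threshold and probabilities below are hypothetical, not values from that study.

```python
# Minimal sketch of a confidence-based triage rule (all numbers hypothetical).

AUTO_NORMAL_THRESHOLD = 0.02  # max allowed model-estimated probability of cancer

def triage(cancer_probability: float) -> str:
    """Route an exam: auto-finalize as normal only when the model is very
    confident; otherwise send it to a human reader."""
    if cancer_probability < AUTO_NORMAL_THRESHOLD:
        return "auto-normal"        # no human review, as in the cited workflow
    return "radiologist-review"     # human interpretation required

queue = [0.001, 0.30, 0.015, 0.90]
routes = [triage(p) for p in queue]
```

In practice the threshold is tuned so that overall sensitivity and specificity of the combined human–AI system exceed those of either alone.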

5. LLMs Will Transform Patients’ Understanding of Radiology

Over the past year, LLMs have captivated the medical world, explaining medical guidelines (14), drafting radiology reports (8), and even passing board examinations (15). But LLMs have trouble with simple math (16), make potentially harmful errors in differential diagnosis (17), and sometimes authoritatively spout falsehoods (8). As we learn more about the capabilities of LLMs, we will likely find more problems that inhibit the regulatory clearance of these incredibly powerful tools for medical diagnosis (18).

But LLMs will soon be deployed for other radiology applications. Regulations against information blocking give patients ready access to their medical information (19). But the terminology radiologists use to communicate with requesting clinicians can mystify patients. The ability of LLMs to summarize information at an arbitrary reading level in the patient’s preferred language will help patients understand their reports (20). For example, here is the response of ChatGPT, an LLM developed by OpenAI, when asked to explain the circle of Willis at a fifth-grade reading level (21):

“The circle of Willis is a group of blood vessels in your brain that helps to provide blood to your brain and keep it healthy. It is shaped like a circle, and it is made up of several large blood vessels that are connected to each other. These blood vessels are called arteries, and they carry blood from your heart to your brain. The circle of Willis helps to make sure that your brain gets enough blood, even if one of the arteries becomes blocked or narrowed.”

These patient-friendly explanations of radiology reports will transform patients’ relationship with their imaging information and will bring radiologists and patients closer together.
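A hedged sketch of how such a request might be framed programmatically: simplify_report is a hypothetical helper that builds a prompt in the chat-message format commonly used by LLM APIs; the model call itself is omitted.

```python
# Illustrative only: builds a chat-style prompt asking an LLM to restate a
# radiology report at a chosen reading level in the patient's preferred
# language. No specific vendor API is assumed.

def simplify_report(report_text: str, grade: int = 5,
                    language: str = "English") -> list[dict]:
    """Return chat messages requesting a patient-friendly explanation."""
    return [
        {"role": "system",
         "content": "You explain radiology reports to patients simply and accurately."},
        {"role": "user",
         "content": (f"Rewrite this report at a grade-{grade} reading level "
                     f"in {language}, avoiding medical jargon:\n\n{report_text}")},
    ]

messages = simplify_report("The circle of Willis is patent.", grade=5)
# `messages` would be sent to a chat-completion endpoint; the model's
# response is the patient-friendly explanation, like the example quoted above.
```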

6. Multimodal AI Will Discover New Uses for Diagnostic Images

The human visual perception system has evolved over millions of years to discern patterns; thus, AI models only rarely find patterns that humans have not yet recognized. As a result, the best AI models exhibit performance comparable to that of expert humans on most visual tasks.

As the mountain of digital health data grows, massive multimodal data sets will link imaging studies to troves of data from genomics, clinical notes, laboratory values, and wearable devices (22). Self-supervised learning methods (23), which do not require expensive data labeling, will produce “generalist” models that encode relationships among many different data types. These models will uncover new associations beyond the capabilities of human information processing, showing how imaging appearances relate to specific genomic signatures and laboratory values.

Insights from multimodal data sets will change how we stage cancer and other complex diseases. For example, the TNM staging of cancer is scaled to fit the human memory system, relying on a small number of categories representing cancer size, location, and spread. Multimodal AI models will precisely quantify disease burden and produce more accurate predictions of disease course.

Likewise, the Response Evaluation Criteria in Solid Tumors, or RECIST, use antiquated methods to measure cancer progression, developed in part for their feasibility when radiologists measured with calipers on film-based images. Instead, AI methods that rapidly quantify total-body tumor burden will replace these primitive measures. Other quantification methods, sometimes called “opportunistic screening,” will process existing imaging studies at little additional cost, identifying markers of undiscovered chronic disease, such as coronary calcium, body composition, and spinal compression fractures (24,25).
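Once a tumor segmentation mask exists, total tumor burden reduces to summed voxel volume. A minimal sketch with synthetic data; tumor_burden_ml is a hypothetical helper, and real pipelines add segmentation models, per-lesion accounting, and quality assurance.

```python
import numpy as np

def tumor_burden_ml(mask: np.ndarray,
                    spacing_mm: tuple[float, float, float]) -> float:
    """Total segmented tumor volume in milliliters: voxel count times the
    physical volume of one voxel, converted from mm^3 to mL."""
    voxel_volume_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return float(mask.sum()) * voxel_volume_mm3 / 1000.0

# Synthetic example: a 4 x 4 x 4-voxel "lesion" in a 10 x 10 x 10 volume
# with 1 x 1 x 2.5 mm voxels.
mask = np.zeros((10, 10, 10), dtype=np.uint8)
mask[2:6, 2:6, 2:6] = 1
burden = tumor_burden_ml(mask, (1.0, 1.0, 2.5))
# 64 voxels x 2.5 mm^3 per voxel = 160 mm^3 = 0.16 mL
```

Summing over every lesion in a whole-body study yields the total-body burden measure that could replace diameter-based RECIST measurements.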

New diagnostic associations, staging systems, and quantification methods will bring the advent of precision health. Using each patient’s data, AI-driven precision health will provide optimal patient-specific recommendations for disease prevention, diagnosis, and treatment.

7. Online Image Exchange Will Reduce Health Care Costs by over $200 Million Annually

Electronic image exchange avoids delays in care, improves patient satisfaction, and reduces costs, especially with urgently needed care (26). A trauma patient with available outside imaging uses 29% fewer imaging resources (27). Internet-based image exchange reduces imaging costs by $84.65 per transferred trauma patient (28). Portable media cannot solve this problem because patients with emergent conditions rarely bring images on CDs or DVDs unless they are transferred directly from another hospital. The United States spends $136.6 billion annually on emergency department care (29). About 8% of those health expenditures are for imaging (30), representing $10.9 billion. About 8% of this expense is for repeat imaging (31), or $872 million. A seamless national network of internet-based image exchange (32) could reduce this repeat imaging by 25% (31), or $218 million annually, in the emergency department alone.
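The arithmetic behind this estimate, with intermediate figures rounded as in the text:

```python
# Reproduce the cost chain above, rounding at each step as published.
ed_spending = 136.6e9     # annual US emergency department spending, USD (29)
imaging_share = 0.08      # ~8% of ED spending goes to imaging (30)
repeat_share = 0.08       # ~8% of imaging spending is repeat imaging (31)
avoidable_share = 0.25    # image exchange could avoid ~25% of repeats (31)

ed_imaging = round(ed_spending * imaging_share / 1e9, 1)  # $10.9 billion
repeat_imaging = round(ed_imaging * repeat_share, 3)      # $0.872 billion
savings = round(repeat_imaging * avoidable_share, 3)      # $0.218 billion/yr
```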

8. Reformed Regulations Will Accelerate AI-based Improvements in Care Delivery

Over the past several years, the most accurate and generalizable AI systems have been trained on large diverse labeled data sets. Recent research suggests that pretraining on massive unlabeled data produces the most accurate systems (33). These systems, often called foundation models (34), can be fine-tuned on data from the deployment site to produce systems that are accurate for a wide range of tasks. These methods to optimize AI accuracy are on a collision course with medical software regulation. The U.S. FDA makes its decisions based on evidence from data about static products. The need to fine-tune foundational AI models to optimize their accuracy at a local site requires that products change after regulatory clearance. In the next decade, AI researchers, clinicians, ethicists, and regulators will devise flexible regulatory frameworks that allow monitoring and fine-tuning of algorithms on local data. The FDA’s proposed predetermined change control plans are a step in the right direction (35).
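One way to picture what a predetermined change control plan permits is a pre-declared monitoring rule: when local performance drifts below a bound stated in the plan, a pre-authorized fine-tuning step is triggered rather than a new clearance submission. The metric, bound, and function below are hypothetical.

```python
# Hypothetical monitoring rule in the spirit of a predetermined change
# control plan: performance bounds are declared up front, and breaching
# them triggers a pre-approved model update.

DECLARED_AUC_FLOOR = 0.85  # bound declared in the (hypothetical) plan

def monitor(recent_aucs: list[float],
            floor: float = DECLARED_AUC_FLOOR) -> str:
    """Return the action the plan prescribes for recent local AUC values."""
    mean_auc = sum(recent_aucs) / len(recent_aucs)
    if mean_auc < floor:
        return "fine-tune-on-local-data"  # pre-authorized model update
    return "continue-monitoring"

# Mean AUC of 0.8475 falls below the declared floor of 0.85.
action = monitor([0.91, 0.88, 0.79, 0.81])
```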

Difficulty in assembling large clinical data sets needed to train generalizable models is another impediment to progress. National regulatory reform, new local governance structures, and simple methods for patients to express their privacy preferences will foster a new era of open data for medical AI research.

9. A Widely Available Petabyte-scale Imaging Database Will Unleash Unbiased AI

The current wave of AI innovation was driven by a massive imaging database called ImageNet (36). This large set of labeled digital photographs created a benchmark for computer vision algorithms. Privacy risks and regulatory barriers limit the assembly of large medical image data sets, which impairs the diversity, reliability, fairness, and generalizability of medical AI algorithms (37).

New initiatives are addressing this problem. The Medical Imaging and Data Resource Center, or MIDRC, funded by the National Institute of Biomedical Imaging and Bioengineering, has published over 100 000 imaging studies from sites around the United States (38), serving as a model for large, diverse public medical data sets. The National COVID Cohort Collaborative, or N3C, is a similar national repository, aggregating the medical records of patients diagnosed with COVID-19 (39). The RadImageNet database has aggregated imaging studies from over 130 000 patients who underwent CT, MRI, and US examinations (40). The All of Us Research Program has invited 1 million people across the United States to help build one of the most diverse health databases in history (41). And organizations like the Medical Information Mart for Intensive Care, the RSNA, and academic research centers disseminate large data sets for the study of other important diseases (42–44).

These examples will spur health care organizations and patients to aggregate and share vast health data sets for the public good (32), enabling the creation of a medical ImageNet that reduces health disparities and catalyzes the next wave of AI innovation in health care.

10. Flexible and Collaborative Academic Organizations Will Lead AI Innovation

AI algorithms are more likely to be fair and trustworthy when developed by diverse interdisciplinary teams (45). Development of clinically useful algorithms requires not only clinicians who can identify important problems for AI to solve but also computer scientists who can interpret and apply the latest machine learning research. Practicing physicians with formal training in AI and machine learning often lead these highly effective teams. Rounding out these interdisciplinary groups will be ethicists, economists, and philosophers, who can assess the risks and benefits of new technologies and ensure fair algorithms.

Academic institutions will continue to lead AI research and development because of their immediate access to all the necessary raw materials: massive stores of accessible clinical data, a workforce of students with deep technical knowledge, abundant high-performance computing, research teams with interdisciplinary expertise, close partnerships with industry, and relationships with health care delivery systems that serve as showcases and testbeds for their innovations (46).

Conclusion

The breakneck progress of AI makes predictions for even the next 2 years extremely challenging. The next 10 years will bring even more surprises. But radiology, more than any other medical specialty, is poised to capitalize on the strengths of AI, saving us time by performing difficult, menial, or repetitive tasks. These new technologies will allow radiologists to focus on the rewarding work of placing findings in context for our clinical colleagues. Like previous cycles of innovation, highly capable AI tools will refocus radiologists on the intellectual activities that brought us to the profession in the first place.

Acknowledgments

The author thanks Christian Bluethgen, MD, MSc; Judy Gichoya, MD, MS; and Adam Flanders, MD, for their comments on an earlier version of the manuscript. The explanation of the circle of Willis at a fifth-grade reading level was generated by GPT-3.5.

Footnotes

Supported in part by the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health under contract 75N92020C00021 and by the Gordon and Betty Moore Foundation.

Disclosures of conflicts of interest: C.P.L. Grants from Bunkerhill Health, Carestream, CARPL.ai, Clairity, GE HealthCare, Google Cloud, IBM, IDEXX, Hospital Israelita Albert Einstein, Kheiron, Lambda, Lunit, Nightingale Open Science, Nines, Philips, Siemens Healthineers, Subtle Medical, VinBrain, Whiterabbit.ai, Lowenstein Foundation, Gordon and Betty Moore Foundation, and Paustenbach Fund; business consulting fees from Sixth Street and Gilmartin Capital; speaking honorarium from Mayo Clinic; joint patent with GE HealthCare; chair of the board for the RSNA; board member for Bunkerhill Health; stockholder in Bunkerhill Health; option holder in Whiterabbit.ai; advisor and option holder in GalileoCDS, Sirona Medical, Adra, and Kheiron; computing credits and services from Microsoft, Stability.ai, and Google.

References

  • 1. Schwartz WB. Medicine and the computer. The promise and problems of change. N Engl J Med 1970;283(23):1257–1264.
  • 2. Center for Devices and Radiological Health. Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices. U.S. Food and Drug Administration. https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices. Accessed April 15, 2023.
  • 3. Tiu E, Talius E, Patel P, Langlotz CP, Ng AY, Rajpurkar P. Expert-level detection of pathologies from unannotated chest x-ray images via self-supervised learning. Nat Biomed Eng 2022;6(12):1399–1406.
  • 4. Berlin L. Accuracy of diagnostic procedures: has it improved over the past five decades? AJR Am J Roentgenol 2007;188(5):1173–1178.
  • 5. Jamadar DA, Carlos R, Caoili EM, et al. Estimating the effects of informal radiology resident teaching on radiologist productivity: what is the cost of teaching? Acad Radiol 2005;12(1):123–128.
  • 6. Naringrekar HV, Dave J, Akyol Y, Deshmukh SP, Roth CG. Comparing the productivity of teaching and non-teaching workflow models in an academic abdominal imaging division. Abdom Radiol (NY) 2021;46(6):2908–2912.
  • 7. Wu JT, Wong KCL, Gur Y, et al. Comparison of chest radiograph interpretations by artificial intelligence algorithm vs radiology residents. JAMA Netw Open 2020;3(10):e2022779.
  • 8. Buvat I, Weber W. Nuclear medicine from a novel perspective: Buvat and Weber talk with OpenAI's ChatGPT. J Nucl Med 2023;64(4):505–507.
  • 9. Chang PJ. Imaging informatics: maturing beyond adolescence to enable the return of the doctor's doctor. Radiology 2023;309(1):e230936.
  • 10. Krupinski E, Bronkalla M, Folio L, et al. Advancing the diagnostic cockpit of the future: an opportunity to improve diagnostic accuracy and efficiency. Acad Radiol 2019;26(4):579–581.
  • 11. Schläpfer J, Wellens HJ. Computer-interpreted electrocardiograms: benefits and limitations. J Am Coll Cardiol 2017;70(9):1183–1192.
  • 12. Landau MS, Pantanowitz L. Artificial intelligence in cytopathology: a review of the literature and overview of commercial landscape. J Am Soc Cytopathol 2019;8(4):230–241.
  • 13. Leibig C, Brehmer M, Bunk S, Byng D, Pinker K, Umutlu L. Combining the strengths of radiologists and AI for breast cancer screening: a retrospective analysis. Lancet Digit Health 2022;4(7):e507–e519.
  • 14. Sarraju A, Bruemmer D, Van Iterson E, Cho L, Rodriguez F, Laffin L. Appropriateness of cardiovascular disease prevention recommendations obtained from a popular online chat-based artificial intelligence model. JAMA 2023;329(10):842–844.
  • 15. Nori H, King N, McKinney SM, Carignan D, Horvitz E. Capabilities of GPT-4 on medical challenge problems. arXiv 2303.13375 [preprint]. https://arxiv.org/abs/2303.13375. Posted March 20, 2023. Accessed May 2023.
  • 16. Zumbrun J. AI Bot ChatGPT Needs Some Help With Math Assignments. WSJ Online. https://www.wsj.com/articles/ai-bot-chatgpt-needs-some-help-with-math-assignments-11675390552. Published February 3, 2023. Accessed May 13, 2023.
  • 17. Hutto E. Dr. OpenAI Lied to Me. https://www.medpagetoday.com/opinion/faustfiles/102723. Published January 20, 2023. Accessed May 13, 2023.
  • 18. Harvey H, Pogose M. How to get ChatGPT regulatory approved as a medical device. Hardian Health. https://www.hardianhealth.com/blog/how-to-get-regulatory-approval-for-medical-large-language-models. Published April 5, 2023. Accessed May 13, 2023.
  • 19. 21st Century Cures Act, HR 34, 114th Cong (2015).
  • 20. Elkassem AA, Smith AD. Potential use cases for ChatGPT in radiology reporting. AJR Am J Roentgenol 2023;AJR.23.29198.
  • 21. OpenAI. GPT-3.5. https://chat.openai.com. Accessed April 28, 2023.
  • 22. Acosta JN, Falcone GJ, Rajpurkar P, Topol EJ. Multimodal biomedical AI. Nat Med 2022;28(9):1773–1784.
  • 23. Krishnan R, Rajpurkar P, Topol EJ. Self-supervised learning in medicine and healthcare. Nat Biomed Eng 2022;6(12):1346–1352.
  • 24. Eng D, Chute C, Khandwala N, et al. Automated coronary calcium scoring using deep learning with multicenter external validation. NPJ Digit Med 2021;4(1):88.
  • 25. Pickhardt PJ. Value-added opportunistic CT screening: state of the art. Radiology 2022;303(2):241–254.
  • 26. Vreeland A, Persons KR, Primo HR, et al. Considerations for exchanging and sharing medical images for improved collaboration and patient care: HIMSS-SIIM collaborative white paper. J Digit Imaging 2016;29(5):547–558.
  • 27. Sodickson A, Opraseuth J, Ledbetter S. Outside imaging in emergency department transfer patients: CD import reduces rates of subsequent imaging utilization. Radiology 2011;260(2):408–413.
  • 28. Flanagan PT, Relyea-Chew A, Gross JA, Gunn ML. Using the Internet for image transfer in a regional trauma network: effect on CT repeat rate, cost, and radiation exposure. J Am Coll Radiol 2012;9(9):648–656.
  • 29. Scott KW, Liu A, Chen C, et al. Healthcare spending in U.S. emergency departments by health condition, 2006–2016. PLoS One 2021;16(10):e0258182.
  • 30. Kassavin MH, Parikh KD, Tirumani SH, Ramaiya NH. Trends in Medicare Part B payments and utilization for imaging services between 2009 and 2019. Curr Probl Diagn Radiol 2022;51(4):478–485.
  • 31. Vest JR, Kaushal R, Silver MD, Hentel K, Kern LM. Health information exchange and the frequency of repeat medical imaging. Am J Manag Care 2014;20(11 Spec No. 17):eSP16–eSP24.
  • 32. Larson DB, Magnus DC, Lungren MP, Shah NH, Langlotz CP. Ethics of using and sharing clinical imaging data for artificial intelligence: a proposed framework. Radiology 2020;295(3):675–682.
  • 33. Huang SC, Pareek A, Jensen M, Lungren MP, Yeung S, Chaudhari AS. Self-supervised learning for medical image classification: a systematic review and implementation guidelines. NPJ Digit Med 2023;6(1):74.
  • 34. Bommasani R, Hudson DA, Adeli E, et al. On the opportunities and risks of foundation models. arXiv 2108.07258 [preprint]. https://arxiv.org/abs/2108.07258. Posted August 16, 2021. Accessed May 2023.
  • 35. Center for Devices and Radiological Health. Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence/Machine Learning (AI/ML)-Enabled Device Software Functions. U.S. Food and Drug Administration. https://www.fda.gov/regulatory-information/search-fda-guidance-documents/marketing-submission-recommendations-predetermined-change-control-plan-artificial. Published April 3, 2023. Accessed April 15, 2023.
  • 36. Russakovsky O, Deng J, Su H, et al. ImageNet large scale visual recognition challenge. Int J Comput Vis 2015;115(3):211–252.
  • 37. Kaushal A, Altman R, Langlotz C. Geographic distribution of US cohorts used to train deep learning algorithms. JAMA 2020;324(12):1212–1213.
  • 38. MIDRC Web site. https://www.midrc.org. Accessed April 15, 2023.
  • 39. Haendel MA, Chute CG, Bennett TD, et al. The National COVID Cohort Collaborative (N3C): rationale, design, infrastructure, and deployment. J Am Med Inform Assoc 2021;28(3):427–443.
  • 40. Mei X, Liu Z, Robson PM, et al. RadImageNet: an open radiologic deep learning research dataset for effective transfer learning. Radiol Artif Intell 2022;4(5):e210315.
  • 41. Sankar PL, Parker LS. The Precision Medicine Initiative's All of Us Research Program: an agenda for research on its ethical, legal, and social issues. Genet Med 2017;19(7):743–750.
  • 42. Bennett AM, Ulrich H, van Damme P, Wiedekopf J, Johnson AEW. MIMIC-IV on FHIR: converting a decade of in-patient data into an exchangeable, interoperable format. J Am Med Inform Assoc 2023;30(4):718–725.
  • 43. Flanders AE, Prevedello LM, Shih G, et al. Construction of a machine learning dataset through collaboration: the RSNA 2019 Brain CT Hemorrhage Challenge. Radiol Artif Intell 2020;2(3):e190211.
  • 44. Shared Datasets. Center for Artificial Intelligence in Medicine & Imaging. https://aimi.stanford.edu/shared-datasets. Accessed April 15, 2023.
  • 45. Chen IY, Pierson E, Rose S, Joshi S, Ferryman K, Ghassemi M. Ethical machine learning in healthcare. Annu Rev Biomed Data Sci 2021;4(1):123–144.
  • 46. Recht MP, Dewey M, Dreyer K, et al. Integrating artificial intelligence into the clinical practice of radiology: challenges and recommendations. Eur Radiol 2020;30(6):3576–3584.
