Editorial
Radiology: Artificial Intelligence. 2022 Mar 23;4(2):e220039. doi: 10.1148/ryai.220039

Imaging AI in Practice: Introducing the Special Issue

John Mongan, Achala Vagal, Carol C. Wu
PMCID: PMC8980860  PMID: 35391763

Introduction

Over the last several years, artificial intelligence (AI) has become one of the highest-profile topics in radiology, recognized in part by the creation of this journal (1). This focus and interest have been driven largely by the potential of AI to broadly change the way we practice radiology across every subspecialty. That potential has been demonstrated by a flood of manuscripts describing technical advances, algorithms, and proofs of concept aimed at a wide variety of radiologic tasks.

However, no amount of demonstrated potential has a direct impact on patient care or clinical practice; achieving such an impact requires moving beyond the creation of AI to its deployment into clinical environments for routine use. It is probably not surprising to those who practice radiology or work in radiology information technology that achieving this translational goal is challenging and has occurred at a much slower pace than suggested by some who feverishly predicted that AI would bring an end to radiology as a profession within a few short years. Today, several years into the radiology AI boom, we see that these translational goals are being achieved. This special issue of Radiology: Artificial Intelligence on "Imaging AI in Practice" highlights work directed at bringing AI into routine clinical practice, presented as a series of concise AI in Brief manuscripts.

Precise and objective quantitative analysis of images is a potential benefit of AI. Retson et al aptly applied AI to the quantification of air trapping on CT scans, an important finding with diagnostic and patient management implications (2). While radiologists can easily identify air trapping as areas of persistent hypoattenuation on expiratory images compared with inspiratory images, the evaluation is subjective, and changes on serial examinations are difficult to detect. The authors showed that use of an AI algorithm improves interreader consistency and correlation with pulmonary function testing; however, surveyed radiologists did not perceive these benefits. This highlights the need to address user perception of an algorithm's benefits for successful clinical deployment, even when objective benefits are evident.

AI may also help improve the consistency of subjective qualitative assessments where inter- and intraobserver variability is high, such as evaluation of breast density. The research approach to this problem typically involves consensus among multiple readers, which is rarely feasible in a clinical setting. To this end, Magni, Interlenghi, and colleagues created an AI system that demonstrates a high level of agreement with radiologist consensus classification of dense versus nondense breasts (3).

A number of commercially developed AI tools have become available for clinical use, though these algorithms still require validation in local practice environments, each with a unique patient population, scanner types, scanning parameters, and other factors that can potentially reduce effectiveness. Monti et al evaluated the performance of an AI tool for measurement of thoracic aortic diameters in patients with various aortic abnormalities and identified conditions that impact the performance of the algorithm (4). Knowledge of the strengths and weaknesses of these tools is key to safe and effective implementation of AI applications across clinical practices.

Another powerful clinical application of AI is optimization of image quality. AI tools have the potential not only to improve diagnostic image quality but also to help with operational efficiency and patient experience. This is also a great opportunity for collaboration among radiologists, technologists, and device manufacturers. Rudie et al prospectively evaluated super-resolution AI-based image enhancement software for three-dimensional volumetric brain MRI (5). The authors assessed a deep learning–based denoising and resolution enhancement algorithm and demonstrated a 45% scan time reduction. Four experienced academic neuroradiologists rated image quality as noninferior to that of existing standard sequences, and spatial resolution of small structures was maintained.

AI solutions are being implemented in acute emergency settings requiring time-sensitive treatment, including large-vessel occlusion, intracranial hemorrhage, and pulmonary embolism. Seyam et al integrated an AI-based tool for detection of intracranial hemorrhage at CT into clinical workflow and assessed its diagnostic performance and effect on that workflow (6). Although AI deployment improved communication of critical findings, future efforts are necessary to streamline all components of the workflow. The authors also rightly point out pitfalls of using AI in acute care, including a false sense of security, particularly for certain subtypes of intracranial hemorrhage such as subdural and subarachnoid hemorrhage.

The application of AI algorithms to improve diagnosis has gained widespread attention; however, the role of these algorithms in quality improvement is just as important. Hahn et al developed a tool to segment the main pulmonary trunk and determine the level of contrast enhancement on CT pulmonary angiograms (7). The authors demonstrated how their algorithm allowed timely, systematic analysis of a large number of CT pulmonary angiographic examinations performed on different scanners at different times, helping to identify issues and promptly implement quality improvement measures. While such an analysis can be carried out by radiologists or technologists, at busy practices with a multitude of scanners, automation of the process ensures more efficient and timely quality checks.
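For readers less familiar with how a segmentation output becomes a quality metric, the following is a minimal sketch of the general idea only, not the authors' implementation: assuming a CT pulmonary angiogram in Hounsfield units and a binary mask of the main pulmonary trunk produced by any segmentation model, the mean attenuation within the mask can be compared against an illustrative enhancement cutoff to flag examinations for review. The threshold value and function names here are hypothetical.

```python
import numpy as np

# Illustrative cutoff only; not a value taken from the cited study.
ENHANCEMENT_THRESHOLD_HU = 250


def mean_trunk_enhancement(ct_hu: np.ndarray, trunk_mask: np.ndarray) -> float:
    """Mean attenuation (HU) within the segmented main pulmonary trunk."""
    return float(ct_hu[trunk_mask.astype(bool)].mean())


def needs_quality_review(ct_hu: np.ndarray, trunk_mask: np.ndarray) -> bool:
    """Flag the examination if mean trunk enhancement falls below the cutoff."""
    return mean_trunk_enhancement(ct_hu, trunk_mask) < ENHANCEMENT_THRESHOLD_HU
```

Run across an archive of examinations, a check of this kind can surface scanners or protocols that consistently produce suboptimal enhancement.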

One of the key benefits that AI can provide is automation of tedious tasks such as segmentation. This makes analyses requiring segmentation more efficient and potentially more consistent. Automating the labor-intensive portions of these analyses may also make them more widely available, as practices that lack the staffing to perform them manually may be able to support an automated process. Goel et al deployed a semiautomated deep neural network–based algorithm for quantifying total kidney volume in patients with autosomal dominant polycystic kidney disease (8). On average, this model halved the time to determine total kidney volume (a savings of approximately 12 minutes per patient), with clinically negligible differences in quantified volume. It seems reasonable to expect that further improvement of the model would progressively reduce the amount of manual refinement required, increasing efficiency and perhaps eventually yielding a segmentation algorithm that performs well enough to be fully automated.
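To illustrate how a segmentation translates into the reported volumes, here is a minimal sketch under simple assumptions: a binary kidney mask and known voxel spacing. The deployed algorithm described by Goel et al is semiautomated and includes manual refinement, which this sketch omits; the function name is hypothetical.

```python
import numpy as np


def total_kidney_volume_ml(kidney_mask: np.ndarray, voxel_spacing_mm: tuple) -> float:
    """Total kidney volume in milliliters: voxel count times per-voxel volume (mm^3 to mL)."""
    voxel_volume_mm3 = float(np.prod(voxel_spacing_mm))
    return int(kidney_mask.astype(bool).sum()) * voxel_volume_mm3 / 1000.0
```

The volume estimate is only as good as the underlying mask, which is why the manual refinement step in the deployed workflow remains important.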

Semi- and fully automated models for segmentation of prostate and surrounding organs at risk in patients with prostate cancer requiring radiation therapy were evaluated by Sanders et al (9). A key innovation of their work was to use dosimetry parameters rather than just traditional segmentation metrics to evaluate the performance of their models in comparison with traditional manual contouring; they found no significant differences in dosimetry parameters across the three approaches. Their work provides an important reminder that the best metrics of performance for clinical AI are those that are closest to patient outcomes, especially when such metrics don’t correlate perfectly with general, nonmedical computer vision metrics.

We believe that the work described in this special issue represents important early steps toward realizing the potential of AI in radiology. As our field gains experience with clinical deployment and use of algorithms such as those detailed in this issue, we anticipate that attention will turn toward broadly addressing the challenges that are common to putting AI into practice. We look forward to future publications that may discuss topics such as standards-based approaches for efficient and effective integration of AI into clinical workflows and user interfaces, postdeployment quality assurance and performance monitoring, detection and resolution of bias and unintended consequences, and other as-yet-unknown issues, paving the way for widespread use of imaging AI in practice.

Acknowledgments

We thank our Assistant Guest Editors: Marina Codari, PhD, Oleksandra V. Ivashchenko, PhD, Merel Huisman, MD, PhD, and Matthew D. Li, MD.

Footnotes

The authors declared no funding for this work.

Disclosures of conflicts of interest: J.M. Siemens and GE research contracts paid to institution; royalties/licenses from GE; RSNA support for attending meetings/travel; RSNA MLSS Chair; stockholder in Annexon Biosciences; Nuance Advisory Board (volunteer); wife is employee of Annexon Biosciences; associate editor of Radiology: Artificial Intelligence. A.V. No relevant relationships. C.C.W. Grants/contracts from University of Chicago and NIBIB for MIDRC grant/collaborator; royalties from Elsevier; honorarium for lectures from World Class CME.

References

1. Kahn CE Jr. Artificial intelligence, real radiology. Radiol Artif Intell 2019;1(1):e184001.
2. Retson TA, Hasenstab KA, Kligerman SJ, et al. Reader perceptions and impact of AI on CT assessment of air trapping. Radiol Artif Intell 2022;4(2):e210160.
3. Magni V, Interlenghi M, Cozzi A, et al. Development and validation of an AI-driven mammographic breast density classification tool based on radiologist consensus. Radiol Artif Intell 2022;4(2):e210199.
4. Monti CB, van Assen M, Stillman AE, et al. Evaluating the performance of a convolutional neural network algorithm for measuring thoracic aorta diameters in a heterogeneous population. Radiol Artif Intell 2022;4(2):e210196.
5. Rudie JD, Gleason T, Barkovich MJ, et al. Clinical assessment of deep learning–based super-resolution for 3D volumetric brain MRI. Radiol Artif Intell 2022;4(2):e210059.
6. Seyam M, Weikert T, Sauter A, Brehm A, Psychogios MN, Blackham KA. Utilization of artificial intelligence–based intracranial hemorrhage detection on emergent noncontrast CT images in clinical workflow. Radiol Artif Intell 2022;4(2):e210168.
7. Hahn LD, Hall K, Alebdi T, Kligerman SJ, Hsiao A. Automated deep learning analysis for quality improvement of CT pulmonary angiography. Radiol Artif Intell 2022;4(2):e210162.
8. Goel A, Shih G, Riyahi S, et al. Deployed deep learning kidney segmentation for polycystic kidney disease MRI. Radiol Artif Intell 2022;4(2):e210205.
9. Sanders JW, Kudchadker RJ, Tang C, et al. Prospective evaluation of prostate and organs at risk segmentation software for MRI-based prostate radiation therapy. Radiol Artif Intell 2022;4(2):e210151.
