Radiology: Artificial Intelligence
2022 Feb 2;4(2):e210114. doi: 10.1148/ryai.210114

Overview of Noninterpretive Artificial Intelligence Models for Safety, Quality, Workflow, and Education Applications in Radiology Practice

Yasasvi Tadavarthi 1, Valeria Makeeva 1, William Wagstaff 1, Henry Zhan 1, Anna Podlasek 1, Neil Bhatia 1, Marta Heilbrun 1, Elizabeth Krupinski 1, Nabile Safdar 1, Imon Banerjee 1, Judy Gichoya 1, Hari Trivedi 1
PMCID: PMC8980942  PMID: 35391770

Abstract

Artificial intelligence has become a ubiquitous term in radiology over the past several years, and much attention has been given to applications that aid radiologists in the detection of abnormalities and diagnosis of diseases. However, there are many potential applications related to radiologic image quality, safety, and workflow improvements that present equal, if not greater, value propositions to radiology practices, insurance companies, and hospital systems. This review focuses on six major categories for artificial intelligence applications: study selection and protocoling, image acquisition, worklist prioritization, study reporting, business applications, and resident education. All of these categories can substantially affect different aspects of radiology practices and workflows. Each of these categories has different value propositions in terms of whether they could be used to increase efficiency, improve patient safety, increase revenue, or save costs. Each application is covered in depth in the context of both current and future areas of work.

Keywords: Use of AI in Education, Application Domain, Supervised Learning, Safety

© RSNA, 2022



Summary

Many noninterpretive artificial intelligence applications with the potential to improve multiple aspects of radiology practice, including workflow, efficiency, image acquisition, reporting, billing, and education, are either currently available or in development.

Key Points

  ■ Artificial intelligence (AI) models to improve workflow efficiency and safety include automated clinical decision support, study protocoling, examination scheduling, and worklist prioritization.

  ■ Models to improve image acquisition focus on patient positioning, multimodal image registration, dose reduction, noise reduction, and artifact reduction.

  ■ Models to improve reporting include automatic finding categorization using classification systems (eg, Breast Imaging Reporting and Data System, Liver Imaging Reporting and Data System), provider notification of incidental findings, and closing the loop on patient follow-up.

  ■ Business applications include automated billing and coding, obtaining preauthorization, and optimization of performance on quality measures to increase reimbursement.

  ■ Use of AI in resident education is somewhat controversial, but AI can be used to help flag high-risk cases for faster review by an attending physician, customize teaching files based on residents’ needs, and help improve resident reporting.

Introduction

The radiology community has had a leading role in exploring medical applications of artificial intelligence (AI), driven in large part by the desire for increased accuracy and efficiency in clinical care. Radiologist responsibilities extend beyond image interpretation. AI tools have the potential to improve essential tasks across the imaging value chain, from image acquisition to generating and disseminating radiology reports (1). These applications are crucial in current medical environments with increasing workloads, increasing scan complexity, and the need to decrease costs and reduce errors (2–4). AI applications related to radiologic quality, safety, and workflow improvements can be grouped by their influence on the steps of the typical radiology workflow, listed here in their approximate order of occurrence: study selection and protocoling; image acquisition; worklist prioritization; study reporting; business applications; and resident education. This qualitative review discusses current research and commercial models for these applications across the entire imaging chain.

Methods

Studies published from 1980 through 2019 were retrieved nonsystematically from academic search engines including PubMed, ScienceDirect, and Google Scholar by using search terms related to each application of interest. Public legal documents were also accessed including the Medicare Physician Fee Schedule and Other Revisions to Part B, Quality Payment Program requirements, and Shared Savings Program requirements. Public news sources, such as Becker's Hospital Review, Healthcare Finance, Optum, and Healthcare IT News, and vendor lists from meetings of the Radiological Society of North America and the Society for Imaging Informatics in Medicine were used to find any commercial efforts in each space. All searches were performed by the authors, all of whom are attending radiologists or trainees with a research interest in radiology AI.

Study Selection and Protocoling

Automated Study Vetting and Clinical Decision Support

Inappropriate imaging studies are inefficient because they expend health care resources, increase payer costs, increase patient risk, and delay care (5,6). Inappropriate imaging orders may represent up to 10% of ordered examinations, and not all are caught before the examination is performed (6–10). Imaging ordering errors have multifactorial causes, including a lack of knowledge of appropriate imaging types, over-ordering by providers because of constrained resources, erroneous clicks in the computerized physician order entry system, and unnecessary duplicate examinations if a similar study was already performed (eg, chest radiography performed immediately after chest CT).
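As a minimal illustration of the duplicate-examination problem described above, the rule below flags a chest radiograph ordered shortly after a chest CT. The study names, pairing table, and 24-hour window are illustrative assumptions for this sketch, not clinical policy or any vendor's implementation.

```python
from datetime import datetime, timedelta

# Hypothetical redundancy rule: an XR CHEST within 24 hours of a CT CHEST
# is flagged as a potential duplicate. Both entries are assumptions.
REDUNDANT_PAIRS = {("XR CHEST", "CT CHEST")}
WINDOW = timedelta(hours=24)

def flag_duplicates(orders):
    """orders: list of (study_name, order_datetime) sorted by time.
    Returns indices of orders flagged as potential duplicates."""
    flagged = []
    for i, (study, when) in enumerate(orders):
        for j in range(i):
            prior, prior_when = orders[j]
            if (study, prior) in REDUNDANT_PAIRS and when - prior_when <= WINDOW:
                flagged.append(i)
                break
    return flagged

orders = [
    ("CT CHEST", datetime(2022, 1, 1, 8, 0)),
    ("XR CHEST", datetime(2022, 1, 1, 10, 0)),  # 2 h after chest CT: flagged
    ("XR CHEST", datetime(2022, 1, 3, 10, 0)),  # outside window: kept
]
print(flag_duplicates(orders))  # [1]
```

A production system would draw the redundancy pairs and time windows from institutional policy rather than a hard-coded table.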

To address concerns regarding inappropriate imaging, the Protecting Access to Medicare Act of 2014 requires the use of an appropriate use criteria system for any advanced diagnostic imaging service. Many automated clinical decision support systems have been developed to meet these requirements, including by vendors that license the American College of Radiology ACR Select database (11). Implementation of clinical decision support systems in the hospital setting has resulted in decreased inappropriate imaging and advanced imaging overall (12,13). For example, Yan et al (14) reported that the yield of CT angiography in detecting pulmonary embolism doubled after implementation of a clinical decision support system. Doyle et al (15) reported an overall 6% decrease in imaging with the use of a clinical decision support system in a randomized clinical trial of 3500 health care providers. Existing systems, however, are not without substantial limitations: They are largely based on a branching decision tree structure that can be exploited to arrive at the desired examination type. A more advanced system that relies on natural language processing (NLP) of free-text input and integration of electronic medical record (EMR) data could decrease the so-called click fatigue associated with current systems by allowing more flexible input. However, our research did not reveal any advanced NLP-based system currently in existence or development.

Study Protocoling

Protocoling is the process of selecting the appropriate sequences for an MRI or CT examination to ensure that the desired anatomy and abnormalities are adequately captured; it is typically performed by the radiologist because of their domain expertise. This is a time-consuming process, however. At our institution, approximately 1–2 hours per day in each division is spent protocoling studies, totaling 50 hours per week across the department, which is the equivalent of the workload for one full-time equivalent radiologist. Protocoling is time-consuming for many reasons, including the frequent presence of dozens of protocol options, the need to look up information from the EMR, and the lack of intelligent aids within the protocol workflow.

In recent years, NLP has shown good results for automating study protocols. For example, Lee (16) automated the selection of routine versus tumor or infection protocols for musculoskeletal MRI, and Trivedi et al (17) distinguished between musculoskeletal studies with and without gadolinium contrast enhancement. Both models achieved overall accuracies of greater than 90%. Brown and Marotta (18) automated three tasks for brain MRI (protocol selection, need for intravenous contrast agent, and examination prioritization) and achieved overall accuracies between 83% and 88%. More recent work focused on a model that functioned beyond a single anatomic region or imaging modality, achieving a precision of 76%–82% when tested on 18 000 diverse CT and MRI examinations (19). Overall, we found that models with more advanced deep learning approaches had higher performance than those with traditional machine learning techniques.
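As a sketch of how NLP-based protocol selection works in principle, the toy classifier below trains a small multinomial naive Bayes model (far simpler than the models cited above) to map free-text clinical indications to musculoskeletal MRI protocol categories. All indications and protocol labels are invented for illustration; real systems train on thousands of institutional examples.

```python
from collections import Counter
import math

# Toy (indication, protocol) pairs; entirely illustrative, not a real dataset.
TRAIN = [
    ("knee pain after fall evaluate meniscus", "routine"),
    ("chronic shoulder pain rule out labral tear", "routine"),
    ("known osteosarcoma assess treatment response", "tumor"),
    ("soft tissue mass concerning for malignancy", "tumor"),
    ("fever elevated wbc concern for osteomyelitis", "infection"),
    ("diabetic foot ulcer rule out osteomyelitis", "infection"),
]

def train(pairs):
    word_counts, class_counts, vocab = {}, Counter(), set()
    for text, label in pairs:
        class_counts[label] += 1
        wc = word_counts.setdefault(label, Counter())
        for w in text.split():
            wc[w] += 1
            vocab.add(w)
    return word_counts, class_counts, vocab

def predict(text, word_counts, class_counts, vocab):
    total = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for label, n in class_counts.items():
        lp = math.log(n / total)  # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.split():
            if w in vocab:  # skip words never seen in training
                lp += math.log((word_counts[label][w] + 1) / denom)  # Laplace smoothing
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = train(TRAIN)
print(predict("suspected osteomyelitis of the ankle", *model))  # infection
```

The cited deep learning models replace this bag-of-words representation with learned embeddings, but the underlying task — text in, protocol label out — is the same.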

A limitation of current protocoling model performance is the input data to which the model has access. Just as a radiologist may access EMR data to correctly protocol an examination, AI models also need access to these additional data to maximize their performance. This is challenging, however, because these data are stored in various locations within the EMR and often within free-text clinical notes, the interpretation of which is a difficult machine learning challenge. Approaches such as long short-term memory networks and bidirectional encoder representations of transformers have been used to automatically extract information from the EMR and could be leveraged to provide more data to a protocoling model (20–23). In the meantime, human-in-the-loop verification of automatically selected protocols is likely necessary to ensure patient safety and optimal imaging.

Image Acquisition

Successful interpretation of medical imaging requires proper image acquisition. Radiation dose, imaging dimensions, patient positioning and motion, implanted hardware, and sensor variability affect image quality for interpretation. Machine learning techniques in this domain have been shown to reduce radiation exposure, decrease scan times, reduce rates of false-positive findings, and reduce unnecessary repeat imaging while maintaining image quality (24).

Dose Reduction

As the use of CT and PET increases worldwide, radiation exposure to patients undergoing frequent examinations is a concern. Radiology departments must balance radiation dose against image quality under the principle of "as low as reasonably achievable" to avoid unnecessary radiation exposure (25). The conventional method to reduce CT radiation dose is to decrease tube current, but this increases noise and reduces diagnostic confidence (26). However, machine learning techniques for image reconstruction have recently demonstrated impressive results that provide higher-quality images than traditional techniques while maintaining lower radiation doses (27,28). These denoising algorithms are discussed in further detail in the Image Reconstruction section below.

In PET imaging, radiotracer dose reduction has been targeted with models that reconstruct low-dose examinations to appear similar to full-dose examinations by using noise-reduction algorithms. One commercial company has been able to use only one two-hundredth of the standard tracer dose and a reduced scan time of up to 75% while achieving image quality comparable to the industry standard by using encoder-decoder residual deep learning networks (25,29,30). Generative adversarial networks have been used to reconstruct PET images acquired with 1%–25% of the standard radiotracer dose with quality similar to that of normal-dose PET images (31,32) (Fig 1).

Figure 1:

(A, B) Two examples of low-dose PET (left), ground truth standard-dose PET (middle), and low-dose PET with generative adversarial network-synthesized images (right). (Adapted, with permission, from reference 32.)


MRI does not produce ionizing radiation, but researchers have explored machine learning techniques to reduce gadolinium-based intravenous contrast agent dosage (33). Gong et al (33) used machine learning to achieve a 10-fold reduction in gadolinium-based contrast agent administration with no significant reduction in image quality or contrast information.

Image Reconstruction

Image reconstruction is fundamental to medical imaging to create high-quality diagnostic images while managing cost, reconstruction time, and risk to the patient (34,35). The details of image reconstruction are beyond the scope of this review, but there have been extensive research efforts to use machine learning techniques to improve image reconstruction in CT, MRI, and PET. Examples of targets for improvement include noise reduction, artifact suppression, motion compensation, faster image acquisition, and multimodal image registration. These goals are often codependent and closely related, and it is therefore possible to reduce both radiation dose and contrast agent dose with the use of successful image reconstruction techniques.

Image quality is often a trade-off between radiation dose in CT and scan times for MRI. Filtered back projection (36,37), iterative reconstruction (38,39), and newer model-based iterative reconstruction techniques function by filtering raw sensor data or by considering noise statistics, optics, physics, and scanner parameters (38). However, all of these techniques are specific to the vendor and can have substantial overhead costs because of their long computational time (27).

Early machine learning–based CT reconstruction techniques caused over-smoothing, resulting in so-called waxy images (26). Since then, several subtypes of convolutional neural networks have been developed to denoise CT and MR images without loss of technical detail (25,40). One method combines deep learning techniques with standard filtered back projection principles to produce high-quality images with low noise, even with a 20-fold reduction in CT input data (41). Another vendor-agnostic CT solution achieved higher spatial resolution than filtered back projection and model-based iterative reconstruction for processing low-dose CT and has been granted U.S. Food and Drug Administration clearance (ClariCT.AI; ClariPi). A different company has commercialized a deep learning–based CT reconstruction product that provides quality similar to that of model-based iterative reconstruction but with a three- to fourfold reduction in reconstruction time (42,43).

In MRI, longer acquisition times can produce higher image quality, but they also increase the risk of motion artifacts (44). As a result, several machine learning approaches have targeted MRI noise reduction and artifact suppression (44) (Fig 2). Most of these applications are in the research phase, although a few vendor-agnostic denoising products have been approved by the U.S. Food and Drug Administration. These products reduce MRI acquisition times by 30%–40% (45,46).

Figure 2:

MRI with image aliasing, specifically respiratory artifact and blurring suppression (A) before and (B) after artifact reduction. DL = deep learning. (Adapted, with permission, from reference 44.)

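The goal these learned reconstruction methods pursue can be illustrated with a classical baseline: a simple 3 × 3 mean filter applied to a noisy synthetic image reduces error relative to the clean ground truth, at the cost of the smoothing (the "waxy" appearance noted above) that deep learning denoisers aim to avoid. The image, noise level, and filter are arbitrary choices for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Smooth synthetic "anatomy": a 2D Gaussian blob standing in for a scan.
x = np.linspace(-1, 1, 64)
xx, yy = np.meshgrid(x, x)
clean = np.exp(-(xx**2 + yy**2) / 0.2)

# Simulated low-dose acquisition: additive Gaussian noise.
noisy = clean + rng.normal(0.0, 0.1, clean.shape)

def mean_filter3(img):
    """3x3 mean filter via nine shifted sums; edges handled by padding."""
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for di in range(3):
        for dj in range(3):
            out += padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / 9.0

denoised = mean_filter3(noisy)
mse_noisy = float(np.mean((noisy - clean) ** 2))
mse_denoised = float(np.mean((denoised - clean) ** 2))
print(mse_noisy, mse_denoised)  # filtered error is substantially lower
```

Learned reconstruction replaces the fixed averaging kernel with filters trained on paired low- and full-quality data, which is how the cited methods suppress noise without uniformly blurring anatomic detail.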

Image Quality Control

Poor image quality can be particularly challenging in MRI because of suboptimal scan parameters, artifacts, or inappropriate coverage (47). Repeat MRI sequences are required in up to 20% of examinations, at a cost to hospitals of up to $115 000 per scanner annually (24). Various methods have been proposed to automatically assess image quality prospectively or retrospectively.

Prospective image quality control can benefit scan protocols with high acquisition times, such as brain MRI (24) or real-time T2-weighted liver MRI (48). In these cases, models have shown value in assessing for nondiagnostic scan quality during acquisition so technologists can adjust scan parameters during the examination rather than after its completion (24,48). Retrospective image quality control explores techniques to mitigate metal artifact, respiratory motion, and banding artifact at MRI. Multiple groups have developed models that target noise and artifact suppression (44,49,50) (Fig 3).

Figure 3:

Noise suppression of (top) T1- and (bottom) T2-weighted images. Original images (left) and processed images (right). (Adapted, with permission, from reference 50.)


One company has developed algorithms for image quality issues in radiography, US, and conventional angiography (ContextVision). It offers products to reduce over- or underexposure and metal artifact in radiography, suppress noise to improve contrast and tissue differentiation at US, and reduce noise and motion artifact for improved visibility of stents and catheter tips in coronary artery angiography.

Image Registration

Image registration refers to linking the same anatomic region together within an examination or across examinations, and it is a frequent and repetitive task for radiologists during study interpretation. Several permutations of this mathematical problem exist because several variables can be considered, including modality, region of interest, temporality, dimensionality, and elasticity of tissues (51).

Several techniques for automatic image registration have been explored. Section-to-volume registration is a common implementation in which a two-dimensional image section is registered to an existing three-dimensional volume. The primary example of this type of application is registration of two-dimensional transrectal US with an existing three-dimensional MRI for targeted prostate biopsy (52). Cross-modality registration is also performed between three-dimensional volumes (eg, registration of a preoperative CT or MRI to an intraoperative CT for targeted thermal ablation of liver lesions [53] or registration of prostate lesions across CT and MRI [54]; Fig 4). Haskins et al (52) published a comprehensive list of image registration applications.

Figure 4:

Sample image registration between CT and MRI scans shows original CT image with the manual contour in yellow (left), MRI scan with manual contour in blue (middle), and colocalized section and contour carried from the CT image to the MRI scan with a good overlap between contours (right). (Adapted, with permission, from reference 54.)

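A toy version of intensity-based rigid registration, assuming only in-plane translation: exhaustively search candidate shifts and keep the one that minimizes the sum of squared differences between the moving and reference images. Real registration methods additionally handle rotation, scaling, tissue deformation, and cross-modality intensity differences, which this sketch deliberately omits.

```python
import numpy as np

rng = np.random.default_rng(1)

# Reference "image" and a copy displaced by a known offset; the task is
# to recover that offset automatically.
ref = rng.random((40, 40))
true_shift = (3, -2)
moving = np.roll(ref, true_shift, axis=(0, 1))

def register_translation(ref, moving, max_shift=5):
    """Return the (dy, dx) that best re-aligns `moving` onto `ref`
    by minimizing the sum of squared differences (SSD)."""
    best, best_ssd = None, np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            candidate = np.roll(moving, (dy, dx), axis=(0, 1))
            ssd = np.sum((candidate - ref) ** 2)
            if ssd < best_ssd:
                best, best_ssd = (dy, dx), ssd
    return best

print(register_translation(ref, moving))  # (-3, 2): undoes the applied shift
```

Deep learning registration networks cited above learn to predict the transform in one forward pass instead of searching, which is what makes deformable and cross-modality registration tractable.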

Patient Positioning

Radiation dose exposure to different organs depends on patient positioning within the CT gantry, and an inexperienced technologist may inadvertently over- or underexpose the region of interest because of miscalculations of patient size on the basis of the localizer radiograph (55,56). An offset of as little as 20 mm can result in significant changes in effective organ dose (55,56). Advances in patient positioning include three-dimensional depth-sensing cameras that recognize anatomic landmarks and models that automatically calculate the patient's center, which is used to optimize the patient bed position for dose and image quality. This implementation is commercially available from one vendor and has been shown to be more accurate and less variable than manual positioning by technologists (55,57,58).

In mammography, poor positioning can result in missed breast cancers or technical recalls (59). Strict adherence to positioning and technique optimizes breast coverage and diagnostic quality while minimizing radiation (59,60). Models to automatically evaluate image quality at the time of acquisition to ensure compliance with the Mammography Quality Standards Act and Program (61) could reduce technical recalls, and one such solution is registered with the U.S. Food and Drug Administration (Mia IQ; Kheiron Medical Technologies).

Worklist Prioritization

Radiologist worklists are typically populated by examinations on the basis of preset criteria, such as body part, modality, patient location, and priority. However, nonemergency examinations are often mistakenly ordered as emergency examinations in an effort to expedite imaging, thereby preventing the radiologist from differentiating between routine and emergency studies and potentially delaying the interpretation of truly emergency cases.

Many AI algorithms have been developed across multiple body regions to prioritize examinations with emergent findings (62) (Fig 5). These models must be adequately sensitive and specific to identify emergency findings while avoiding excessive false-positive results. Annarumma et al (63) simulated such a triage system on retrospective adult chest radiographs, reporting a theoretical reduction in reporting delay for critical studies from 11.2 to 2.7 days. Arbabshirani et al (64) prospectively implemented a prioritization system for detection of intracranial hemorrhage at head CT, which flagged 94 of 347 routine cases (60 true-positive findings, 34 false-positive findings) and detected five new intracranial hemorrhages with a reduction in reporting time for these cases from 8.5 hours to 19 minutes. Multiple similar models exist for detection of intracranial hemorrhage (65,66) and emergency findings at abdominal CT (67) and chest CT angiography (68,69).

Figure 5:

Analytic algorithm of noncontrast head CT examinations for urgent findings. AI = artificial intelligence. (Adapted, with permission, from reference 62.)


Typically, AI is used to detect positive findings that require emergency intervention (eg, pulmonary embolism, hemorrhage, and pneumoperitoneum), but this narrowed focus addresses only part of the problem in a resource-limited setting such as the emergency department. Prolonged turnaround times for examinations with negative findings also equate to prolonged turnaround times for the emergency department, in which staff may be awaiting a negative result to discharge a patient (70,71). Negative results may also be necessary for taking appropriate steps in patient care, for example, clearing a noncontrast head CT for hemorrhage before a patient can undergo thrombolysis for acute stroke. In this scenario, rapid confirmation of the absence of a finding is crucial for patient care (72). As of the writing of this review, there is no U.S. Food and Drug Administration–approved model for detection of examinations with definitively negative results; however, such models have the potential to substantially affect patient care and throughput.
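The reordering step of a triage system like those described above can be sketched with a priority queue keyed on a model's critical-finding score. The accession numbers and probabilities below are invented for illustration; a deployed system would combine model scores with order priority, wait time, and clinical context.

```python
import heapq

# Hypothetical worklist entries: (accession number, model-estimated
# probability of a critical finding). All values are illustrative.
worklist = [
    ("ACC001", 0.02),
    ("ACC002", 0.91),  # eg, suspected intracranial hemorrhage
    ("ACC003", 0.40),
    ("ACC004", 0.88),  # eg, suspected pulmonary embolism
]

def prioritized(worklist):
    """Return accession numbers ordered by descending critical-finding score.
    heapq is a min-heap, so scores are negated to pop highest first."""
    heap = [(-score, acc) for acc, score in worklist]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

print(prioritized(worklist))  # ['ACC002', 'ACC004', 'ACC003', 'ACC001']
```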

Reporting

Structured Reporting

Integration of AI applications into radiology reporting has the potential to increase the clarity, accuracy, and quality of reporting and decrease report variability in some situations (73). For example, models have been created to improve patient care by automatically populating recommendations for follow-up of incidental findings (74–77). NLP models have also been developed as smart assistants. For example, Do et al (78) developed a tool that detected when the radiologist was reporting a fracture and displayed additional information regarding pertinent classifications, associated injuries, and further clinical recommendations. Whereas multiple frameworks have been developed to convert unstructured findings in reports into structured templates to improve legibility (79–81), we were unable to find any recent system that has been systematically tested for performance or implemented clinically.

Classification Systems

Several classification systems have been developed for frequently encountered lesions, including thyroid (Thyroid Imaging Reporting and Data System [TI-RADS]) (82), breast (Breast Imaging Reporting and Data System [BI-RADS]) (83), liver (Liver Imaging Reporting and Data System [LI-RADS]) (84), and primary brain malignancies (Brain Tumor Reporting and Data System [BT-RADS]) (85). Each of these scoring systems relies on imaging characteristics and change over time to guide diagnosis or follow-up management. Many AI algorithms have been developed to automate the tasks associated with these scoring systems, including lesion measurement, image segmentation, and comparison with prior images. Some systems measure lesions that must first be identified by the radiologist (86–88), whereas others detect candidate lesions and their characteristics and predict the likelihood of future cancer (89). For example, algorithms have been developed to derive BI-RADS scores and breast densities or to highlight lesions suspicious for cancer directly from breast MRI, US, or mammography. These algorithms have achieved areas under the curve of greater than 0.9 (90–92). For liver lesions, models have been created to identify lesions at multisequence imaging and perform sequence coregistration to help measurement and interpretation (93,94) or to derive the LI-RADS score directly from the images, with accuracies ranging from 57% to 85% (95). For BT-RADS, NLP algorithms have been able to derive classification scores directly from the MRI report, achieving F1 scores of up to 0.98 (96).
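As an example of the rule-based scoring that such models automate inputs for, a simplified TI-RADS-style calculator sums per-category feature points and maps the total to a risk level. The point-to-level mapping below follows commonly cited ACR TI-RADS cut points, but this sketch omits the full lexicon, validation rules, and size-based biopsy thresholds of the actual standard and should not be used clinically.

```python
def tirads_level(composition, echogenicity, shape, margin, echogenic_foci):
    """Simplified TI-RADS-style scoring: sum per-category points and map
    the total to a TR level. Callers supply points per category; this
    sketch does not validate them against the full ACR lexicon."""
    total = composition + echogenicity + shape + margin + echogenic_foci
    if total == 0:
        return "TR1"  # benign
    if total <= 2:
        return "TR2"  # not suspicious
    if total == 3:
        return "TR3"  # mildly suspicious
    if total <= 6:
        return "TR4"  # moderately suspicious
    return "TR5"      # highly suspicious

# Solid (2) + hypoechoic (2) + wider-than-tall (0) + smooth margin (0)
# + punctate echogenic foci (3) = 7 points.
print(tirads_level(2, 2, 0, 0, 3))  # TR5
```

An imaging model that predicts the per-category features can feed this deterministic mapping, which is how detection models and scoring systems compose in practice.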

Machine learning algorithms have been incorporated into the data curation process used to update recommendations within the classification systems, as in the case of TI-RADS (97). A model trained with thyroid US lesions and their respective TI-RADS scores was able to improve the specificity of thyroid biopsy from 47% to 65% (ie, decreased biopsy of nonmalignant nodules) while maintaining sensitivity (98).

Automatic Notification to Provider of Incidental and Emergent Findings

Communication of critical diagnoses is mandated by the Joint Commission as a part of National Patient Safety Goal 2, "Improving the Effectiveness of Communication Among Caregivers" (99). In practice, implementing this chain of communication is inefficient and can disrupt workflow, contributing to burnout among radiologists (100). Communication failure is also one of the leading causes of malpractice lawsuits (101). Hiring reading room coordinators or medical students to help with communication increases work satisfaction among radiologists; however, hiring personnel is costly. Therefore, AI has been a topic of interest in automating provider notification (62,102–104). A notable implementation of this technology was described by Do et al (105), who used AI in outpatient oncologic CT images to detect actionable incidental findings such as pulmonary embolism, gastrointestinal obstruction, hydronephrosis, and pneumothorax, resulting in a median 1-hour decrease in notification time to referring physicians and a 37% improvement in radiologist interpretation time.

Patient Follow-up

Radiologist reporting and recommendations for incidental findings are variable (106), and managing the patient care chain can be challenging in large, complex health systems, sometimes resulting in lack of follow-up care. Many groups have used NLP to identify incidental findings requiring follow-up in the radiology report to reduce the variability of recommendations or the number of patients for whom follow-up recommendations are not suggested or are not followed (107–111). Implementation of such systems into the live clinical environment remains rare; however, Hammer et al (112) implemented a closed-loop system for follow-up of incidental pulmonary nodules, resulting in a significantly higher rate of appropriate follow-up by primary care physicians (P < .001). A sample report from such a system is shown in Figure 6.

Figure 6:

Sample of potential automation for detection of an incidental pulmonary nodule in the report and appropriate follow-up recommendation generation. Exam = examination. Red boxes = portions of the report the model would use to generate the follow-up recommendation.

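A minimal sketch of the detection step in such a follow-up system: a regular expression finds a measured pulmonary nodule in report text and maps its size to a follow-up interval. The size thresholds are loosely modeled on Fleischner-style cut points but are simplified for illustration and are not clinical guidance; real systems must also handle negation, multiplicity, patient risk factors, and varied phrasing.

```python
import re

# Matches sizes such as "7 mm pulmonary nodule" or "4.5-mm pulmonary nodule".
NODULE_RE = re.compile(r"(\d+(?:\.\d+)?)\s*[-\s]?mm\s+pulmonary\s+nodule", re.I)

def follow_up(report):
    """Return a follow-up recommendation string, or None if no measured
    pulmonary nodule is found. Thresholds are illustrative only."""
    m = NODULE_RE.search(report)
    if not m:
        return None
    size = float(m.group(1))
    if size < 6:
        return "no routine follow-up"
    if size <= 8:
        return "CT in 6-12 months"
    return "CT in 3 months"

report = "IMPRESSION: Incidental 7 mm pulmonary nodule in the right upper lobe."
print(follow_up(report))  # CT in 6-12 months
```

Closed-loop systems like the one by Hammer et al go further: the extracted recommendation is tracked until the follow-up examination is actually performed.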

Business Applications

Billing and Coding

AI applications in business analytics present an opportunity to create value and shape radiology practice. A major area of focus has been billing and coding because of the combined potential effect of increased revenue and decreased errors.

It has been estimated that health care organizations lose between 3% and 5% of net revenue annually because of insurance claim denials (113,114). In 2010, the National Academy of Medicine synthesized one of the most extensive datasets of U.S. administrative costs related to billing and insurance, estimating that billing-related costs account for 13% of physician care spending and 8.5% of hospital care spending (115,116). More than 100 variables contribute to claim denial by insurance companies; although this number is too large to assess manually for each report, NLP can automatically ensure that reports are billed and coded appropriately (117,118).

Research that uses NLP has shown that incomplete documentation is common for many examinations. For example, documentation deficiencies have been identified in 9.3%–20.2% of abdominal US reports, representing a 2.5%–5.5% loss in professional reimbursement (119). AI can assist by creating predictive classification models for automated procedure coding. A study investigating the coding of MRI examinations demonstrated that the AI system achieved the same performance as manual coding by a technologist and did not require any human intervention (120). Therefore, automated coding techniques may optimize reimbursement, improve workflow efficiency, and assess rejected claims to help reduce future denials (121).
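A simple version of the documentation-completeness check described above can be sketched as a required-element scan of the report text. The element list below approximates the organs a complete abdominal US report must document for full reimbursement, but it is illustrative; actual requirements come from CPT coding rules, and production systems use NLP rather than substring matching to handle synonyms and negation.

```python
# Required elements for a "complete" abdominal US report (illustrative
# approximation of CPT documentation requirements, not coding guidance).
REQUIRED = ["liver", "gallbladder", "common bile duct", "pancreas",
            "spleen", "kidney", "aorta", "inferior vena cava"]

def missing_elements(report_text):
    """Return required elements not mentioned anywhere in the report."""
    text = report_text.lower()
    return [e for e in REQUIRED if e not in text]

report = """Liver: normal. Gallbladder: no stones. Common bile duct: 4 mm.
Pancreas: obscured by bowel gas. Spleen: normal. Kidneys: no hydronephrosis."""
print(missing_elements(report))  # ['aorta', 'inferior vena cava']
```

Flagging the missing elements before report signature lets the radiologist either document them or knowingly bill the limited-examination code, addressing the reimbursement losses quantified above.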

Preauthorization

Lack of clinical documentation from referrals often leads to delays in authorization of procedures and imaging. Whereas computerized physician order entry was created as a tool to decrease errors in ordering and to help with preauthorization, it has had variable success depending on the use case and method of implementation (122). Even with computerized physician order entry, many referrals must be manually reviewed and are subject to time-consuming telephone calls to insurance companies. Examples of missing information include incomplete patient demographics, outdated or inactive insurance information, and incomplete clinical documentation. According to a survey of 500 health industry leaders in the United States, automation of preauthorization was seen as the AI application with the most potential (123).

Much of the relevant data resides in the radiology information system and EMR, including patient orders, insurance information, and clinical history pertinent to preauthorization, which may be amenable to query by using NLP techniques. Prior authorization software enables health care organizations to identify authorization requirements at the time of scheduling by mining the radiology information system and EMR, therefore reducing manual administrative burden and patient scheduling delays (124).

Value-based Payment Models

Data-driven quality improvement lies at the intersection of new value-based payment models and AI. The Quality Payment Program arose as part of the Medicare Access and CHIP Reauthorization Act of 2015 and represented the shift to value-based care by enumerating a series of value-based paradigms for physician reimbursement (125). To understand AI applications within the Quality Payment Program, it is important to understand how reimbursement processes differ between the two major Quality Payment Program pathways—the Merit-based Incentive Payment System and the alternative payment model (Fig 7).

Figure 7:

A comparison of the Merit-based Incentive Payment System (MIPS) and the alternative payment model (APM) pathways and possible artificial intelligence (AI) applications under each model. EMR = electronic medical record, RIS = radiology information system.



The Merit-based Incentive Payment System involves a 100-point score related to quality, cost, interoperability, and improvement and results in positive, negative, or neutral adjustments to reimbursements based on physician performance. For radiologists, the quality category is the most important, and approximately 85% of radiologist Merit-based Incentive Payment System scores were directly affected by the quality category in 2019 (126,127). Many quality metrics center on reducing unnecessary imaging and ensuring appropriate documentation and follow-up. AI-based tools may be used to optimize performance on quality measures such as carotid artery stenosis measurements or appropriate follow-up for incidentally discovered lesions (126). Similarly, AI could be used to develop tools to automatically measure and track lesion progression, place information into reports, or even search the radiology information system and EMR to evaluate inclusion or exclusion criteria for certain patients (126).
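As a concrete illustration of tracking such a quality measure, a simple check could verify whether a carotid ultrasound report documents a percent stenosis. The pattern and report text below are illustrative assumptions, not a validated measure specification:

```python
# A minimal sketch of a documentation-compliance check for a quality
# measure: does a carotid ultrasound report state a percent stenosis?
import re

STENOSIS_PATTERN = re.compile(
    r"\d{1,3}\s*%\s*stenosis|stenosis\s+of\s+\d{1,3}\s*%",
    re.IGNORECASE,
)

def documents_stenosis_measurement(report_text: str) -> bool:
    """True if the report contains an explicit percent-stenosis statement."""
    return bool(STENOSIS_PATTERN.search(report_text))

compliant = "Right ICA demonstrates 50-69% stenosis by velocity criteria."
noncompliant = "Moderate plaque in the right ICA without characterization."

print(documents_stenosis_measurement(compliant))     # True
print(documents_stenosis_measurement(noncompliant))  # False
```

Run across a practice's reports, such a check could surface noncompliant studies for addenda before quality scores are calculated; a production system would likely replace the regular expression with a trained NLP model.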

The alternative payment model pathway has a greater focus on population health compared with the Merit-based Incentive Payment System, such that tools that improve the health of the entire population are specifically incentivized. AI applications that reduce cost while maintaining or improving quality are especially relevant to alternative payment model pathways and encourage team-based accountability within a health care organization. In 2019, up to 15% of the final alternative payment model scores were related to cost (127). Within this context, AI that is focused on reducing unnecessary procedures and imaging is especially valuable (eg, models that predict the malignancy potential of a lesion to decrease unnecessary follow-up scans or a tool that mines the EMR for prior studies to reduce redundant imaging) (126). In the future, primary drivers of AI applications in radiology business analytics, such as applications in quality improvement, will likely continue to correlate with the regulatory landscape and payer reimbursement patterns.

Resident Education

There are many potential use cases for AI in radiology education. As AI tools become ubiquitous in the daily workflow for radiologists, care must be taken to ensure that radiology trainees learn adequate interpretation skills and do not rely on AI software to locate abnormal findings or assign diagnoses. Beyond these potential risks, however, there are many opportunities to improve resident education by using AI tools.

Tajmir and Alkasab (128) list various potential applications of AI in radiology education, including selection of trainee cases, improved supervision of residents by attending physicians, analysis of report differences between trainees of various levels, and facilitation of lifelong learning. For example, AI algorithms could identify cases that have educational value based on parameters such as common diseases; rare, interesting, or unique findings; complexity; and acuity. These cases could be automatically incorporated into a trainee's worklist or into a teaching file for dedicated teaching sessions. Conceivably, such a process could be tailored to specific residents, thereby creating individualized learning opportunities.

Receiving feedback from supervising attending physicians is an integral part of clinical education; however, a balance must be struck between complete trainee autonomy and overbearing supervision. AI could help by silently alerting a supervising radiologist when a junior resident opens a complex or high-acuity case (128). This workflow would allow the resident an opportunity to independently review a case while ensuring that an attending physician is also aware of the case, thereby maintaining patient safety and simultaneously allowing for the effective educational growth of residents.

There is also an opportunity for NLP-based applications to affect resident education. NLP and AI algorithms may be used to compare reporting differences between trainees and nontrainees of various levels (128). Although this is a potentially sensitive area, a theoretical use case would demonstrate to junior residents how their reporting differs from that of more senior trainees. The AI system could then provide suggestions for changes that could be made by the junior resident. Care must be taken in implementation, however, so trainees do not feel unnecessarily “watched over” during interpretation.
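The report-comparison idea above can be sketched with the standard library alone: score the similarity between a resident's preliminary report and the attending's final version, and surface the terms the attending added. The report texts are invented for illustration:

```python
# Sketch of trainee-vs-attending report comparison using difflib.
import difflib

def tokens(text: str) -> list:
    """Lowercase word tokens with trailing punctuation stripped."""
    return [w.strip(".,").lower() for w in text.split()]

preliminary = "No acute intracranial hemorrhage. Mild mucosal sinus disease."
final = ("No acute intracranial hemorrhage or territorial infarct. "
         "Mild paranasal sinus mucosal disease.")

# Overall similarity of the two reports (0 = disjoint, 1 = identical).
ratio = difflib.SequenceMatcher(None, tokens(preliminary), tokens(final)).ratio()

# Terms the attending added -- candidate feedback for the trainee.
added = sorted(set(tokens(final)) - set(tokens(preliminary)))
print(added)  # ['infarct', 'or', 'paranasal', 'territorial']
```

Aggregated over many dictations, low similarity scores or recurrent added terms (eg, pertinent negatives the trainee omits) could drive the individualized feedback described above.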

AI applications could also facilitate lifelong education by incorporating new data and recent updates in imaging guidelines into a radiologist's reporting (128), for example, the newest guidelines for incidental pulmonary nodule follow-up. Such an application could benefit both trainees and attending physicians alike.

Despite these potential benefits, AI must be used judiciously in resident training to avoid interfering with development of the resident's skills. Residents must be educated in the appropriate use and interpretation of AI results because understanding how AI models are developed will better equip them to identify and appropriately manage model errors.

Areas of Future Work

A limitation of most machine learning applications for noninterpretive use cases is the relative lack of exploration of their clinical effect and generalizability. Most research models described herein were developed and validated at a single institution. There is a vast gap in technical effort, resources, time, and cost between developing a well-performing model on retrospective data and implementing that model in a live clinical setting at multiple disparate sites. Unlike imaging-based AI models that operate on standardized Digital Imaging and Communications in Medicine images, noninterpretive models rely on heterogeneous data from multiple sources that are complex and vary across institutions. At our own institution, more than 80 interconnected software products are used in the radiology department; accessing data from these products and integrating models into them is complex and requires the agreement of multiple stakeholders. Those who are interested in trying publicly available research models at their own institution must be prepared to devote time and personnel to implementation, even if the software is free of charge. Companies developing products in this space should understand the potential complexity of implementation, which may be unique to every customer.

Ordering, imaging, and billing patterns are also diverse across institutions and patient populations. To ensure models are generalizable, they must be developed and tested by using data from multiple sites. For example, brain MRI protocols likely differ across institutions. A protocoling model must have access to these varied data for training and testing; however, these data must be harmonized to a common schema to be combined. This increases the complexity, time, and cost of model development. The ongoing adoption of standardized lexicons and communications standards such as common data elements (129) and Fast Healthcare Interoperability Resources (130) could help mitigate these issues by reducing variations in the input data structure, thereby allowing easier collection of multisite data.
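The harmonization step described above often reduces to mapping site-specific naming onto shared elements before multisite data can be pooled. The mapping table and element names below are assumptions for illustration, not a published standard:

```python
# A minimal sketch of harmonizing local protocol names to a common schema.
SITE_TO_COMMON = {
    # site A naming                  -> common data element
    "MRI BRAIN WO CONTRAST": "mri_brain_without_contrast",
    "MR HEAD W/O": "mri_brain_without_contrast",
    # site B naming
    "BRAIN MRI PLAIN": "mri_brain_without_contrast",
    "MRI BRAIN W AND WO": "mri_brain_with_without_contrast",
}

def harmonize(protocol_name: str) -> str:
    """Map a local protocol string to the common element, or flag it."""
    return SITE_TO_COMMON.get(protocol_name.strip().upper(), "unmapped")

print(harmonize("mr head w/o"))  # mri_brain_without_contrast
print(harmonize("CT CHEST PE"))  # unmapped
```

Standards such as common data elements and Fast Healthcare Interoperability Resources aim to make the right-hand column of such tables shared across institutions, so that each site maintains only its local mapping.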

There are also some underexplored areas in the radiology value chain that could benefit from machine learning applications. Missed appointments, particularly for MRI examinations, represent substantial lost revenue for radiology departments. Several studies have described the use of machine learning to predict no-shows for hospital and outpatient visits (131–133) and outpatient appointment and surgery scheduling (134,135). However, this work has only begun to be extended to the radiology domain. The largest study in this area used a multivariable model to show the effect of median income and commute distance on missed or canceled appointments, but it did not use more advanced modeling or any EMR data (136). Another study used an XGBoost model only on structured data from the hospital radiology information system and appointment system and achieved an area under the receiver operating characteristic curve of 0.746; however, the model did not incorporate richer diagnostic information from the EMR (137). NLP and machine learning–based techniques could be used to process structured and unstructured data from the EMR to potentially improve performance. Intelligent hanging protocols could be trained to automatically extract series information and display examinations according to a radiologist's preferences, saving time during interpretation. Intelligent worklist optimization to ensure that radiologists read the examinations for which they have the most experience or efficiency could improve diagnostic quality and turnaround times. Additionally, chatbots that interface with patients to answer questions or explain report findings could improve health literacy and patient confidence. These are just a few of the many potential areas of exploration in the development of radiology AI models.
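The no-show prediction approach described above can be sketched with a gradient boosting classifier on structured scheduling features. Here scikit-learn's GradientBoostingClassifier stands in for XGBoost, and the features and data are synthetic assumptions, not taken from the cited studies:

```python
# Hedged sketch of no-show prediction from structured scheduling data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 400
# Illustrative features: lead time (days), prior no-shows, commute (miles).
X = np.column_stack([
    rng.integers(0, 60, n),
    rng.poisson(0.5, n),
    rng.uniform(1, 50, n),
])
# Synthetic label: no-show risk rises with lead time and prior no-shows.
p = 1 / (1 + np.exp(-(0.04 * X[:, 0] + 0.8 * X[:, 1] - 2.5)))
y = rng.binomial(1, p)

# Train on the first 300 appointments, evaluate on the held-out 100.
clf = GradientBoostingClassifier(random_state=0).fit(X[:300], y[:300])
auc = roc_auc_score(y[300:], clf.predict_proba(X[300:])[:, 1])
print(round(auc, 2))
```

A real deployment would draw features from the radiology information system and EMR (eg, diagnoses, prior utilization, referral notes processed with NLP) and would need calibration before driving interventions such as reminder calls or overbooking.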

Conclusion

Radiology AI software has become increasingly popular over the past several years. Whereas the majority of research and commercial software focuses on diagnostic or interpretive applications, there are large areas of potential improvement in upstream workflow, including protocoling, acquisition, reconstruction, and worklist management, and in downstream applications such as reporting, follow-up, and billing and coding. In aggregate, these solutions could have an effect similar to or even larger than that of most diagnostic AI software because of their applicability to a large number of cases at multiple points in the radiology workflow.

Authors declared no funding for this work.

Disclosures of Conflicts of Interest: Y.T. No relevant relationships. V.M. No relevant relationships. W.W. No relevant relationships. H.Z. No relevant relationships. A.P. Consulting fee or honorarium from QMC. N.B. No relevant relationships. M.H. No relevant relationships. E.K. Publications Ethics Committee for RSNA journals. N.S. Advisor to a resident who received an RSNA Research & Education Foundation grant. The project is entitled HL7-Shield: A Versatile HL7 Listener Software for Automated Follow-up Tracking; pending patent for HL-7 Shield (Emory University is applying for a software patent for an AI approach to findings notification, this author listed as contributor). I.B. No relevant relationships. J.G. Future of Work Grant NIH NIBIB MIDRC grant (not related to this work); associate editor and trainee editorial board lead for Radiology: Artificial Intelligence. H.T. Consultant for BioData Consortium and Sirona Medical; grant from Kheiron Medical for institution; owner of Lightbox AI.

Abbreviations:

AI
artificial intelligence
BI-RADS
Breast Imaging Reporting and Data System
BT-RADS
Brain Tumor Reporting and Data System
EMR
electronic medical record
LI-RADS
Liver Imaging Reporting and Data System
NLP
natural language processing
TI-RADS
Thyroid Imaging Reporting and Data System

  • 47. Lakhani P , Prater AB , Hutson RK , et al . Machine learning in radiology: applications beyond image interpretation . J Am Coll Radiol 2018. ; 15 ( 2 ): 350 – 359 . [DOI] [PubMed] [Google Scholar]
  • 48. Esses SJ , Lu X , Zhao T , et al . Automated image quality evaluation of T2 -weighted liver MRI utilizing deep learning architecture . J Magn Reson Imaging 2018. ; 47 ( 3 ): 723 – 728 . [DOI] [PubMed] [Google Scholar]
  • 49. Higaki T , Nakamura Y , Tatsugami F , Nakaura T , Awai K . Improvement of image quality at CT and MRI using deep learning . Jpn J Radiol 2019. ; 37 ( 1 ): 73 – 80 . [DOI] [PubMed] [Google Scholar]
  • 50. Manjon JV , Coupe P . MRI denoising using Deep Learning and Non-local averaging . arXiv 1911.04798 [preprint] https://arxiv.org/abs/1911.04798. Posted November 12, 2019. Accessed August 30, 2021 . [Google Scholar]
  • 51. Lundervold AS , Lundervold A . An overview of deep learning in medical imaging focusing on MRI . Z Med Phys 2019. ; 29 ( 2 ): 102 – 127 . [DOI] [PubMed] [Google Scholar]
  • 52. Haskins G , Kruger U , Yan P . Deep learning in medical image registration: a survey . Mach Vis Appl 2020. ; 31 ( 1-2 ): 8 . [Google Scholar]
  • 53. Wei D , Ahmad S , Huo J , et al . SLIR: Synthesis, localization, inpainting, and registration for image-guided thermal ablation of liver tumors . Med Image Anal 2020. ; 65 : 101763 . [DOI] [PubMed] [Google Scholar]
  • 54. Cao X , Yang J , Wang L , Xue Z , Wang Q , Shen D . Deep Learning based Inter-Modality Image Registration Supervised by Intra-Modality Similarity . Mach Learn Med Imaging 2018. ; 11046 ( 55 ): 63 . [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 55. Eberhard M , Alkadhi H . Machine learning and deep neural networks: applications in patient and scan preparation, contrast medium, and radiation dose optimization . J Thorac Imaging 2020. ; 35 ( Suppl 1 ): S17 – S20 . [DOI] [PubMed] [Google Scholar]
  • 56. Saltybaeva N , Alkadhi H . Vertical off-centering affects organ dose in chest CT: Evidence from Monte Carlo simulations in anthropomorphic phantoms . Med Phys 2017. ; 44 ( 11 ): 5697 – 5704 . [DOI] [PubMed] [Google Scholar]
  • 57. Saltybaeva N , Schmidt B , Wimmer A , Flohr T , Alkadhi H . Precise and Automatic Patient Positioning in Computed Tomography: Avatar Modeling of the Patient Surface Using a 3-Dimensional Camera . Invest Radiol 2018. ; 53 ( 11 ): 641 – 646 . [DOI] [PubMed] [Google Scholar]
  • 58. Booij R , Budde RPJ , Dijkshoorn ML , van Straten M . Accuracy of automated patient positioning in CT using a 3D camera for body contour detection . Eur Radiol 2019. ; 29 ( 4 ): 2079 – 2088 . [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 59. Taplin SH , Rutter CM , Finder C , Mandelson MT , Houn F , White E . Screening mammography: clinical image quality and the risk of interval breast cancer . AJR Am J Roentgenol 2002. ; 178 ( 4 ): 797 – 803 . [DOI] [PubMed] [Google Scholar]
  • 60. Majid AS , de Paredes ES , Doherty RD , Sharma NR , Salvador X . Missed breast carcinoma: pitfalls and pearls . RadioGraphics 2003. ; 23 ( 4 ): 881 – 895 . [DOI] [PubMed] [Google Scholar]
  • 61. U.S. Food and Drug Administration . Mammography Quality Standards Act and Program . https://www.fda.gov/radiation-emitting-products/mammography-quality-standards-act-and-program. Accessed August 13, 2021 .
  • 62. Prevedello LM , Erdal BS , Ryu JL , et al . Automated critical test findings identification and online notification system using artificial intelligence in imaging . Radiology 2017. ; 285 ( 3 ): 923 – 931 . [DOI] [PubMed] [Google Scholar]
  • 63. Annarumma M , Withey SJ , Bakewell RJ , Pesce E , Goh V , Montana G . Automated Triaging of Adult Chest Radiographs with Deep Artificial Neural Networks . Radiology 2019. ; 291 ( 1 ): 196 – 202 . [Published correction appears in Radiology. 2019 Apr;291(1):272.] [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 64. Arbabshirani MR , Fornwalt BK , Mongelluzzo GJ , et al . Advanced machine learning in action: identification of intracranial hemorrhage on computed tomography scans of the head with clinical workflow integration . NPJ Digit Med 2018. ; 1 ( 1 ): 9 . [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 65.Wismüller A, Stockmaster L. A prospective randomized clinical trial for measuring radiology study reporting time on Artificial Intelligence-based detection of intracranial hemorrhage in emergent care head CT. In: Krol A, Gimi BS, eds.Proceedings of SPIE: medical imaging 2020—biomedical applications in molecular, structural, and functional imaging.Vol 11317.Bellingham, Wash:International Society for Optics and Photonics,2020;113170M. [Google Scholar]
  • 66. Ginat DT . Analysis of head CT scans flagged by deep learning software for acute intracranial hemorrhage . Neuroradiology 2020. ; 62 ( 3 ): 335 – 340 . [DOI] [PubMed] [Google Scholar]
  • 67. Winkel DJ , Heye T , Weikert TJ , Boll DT , Stieltjes B . Evaluation of an AI-Based Detection Software for Acute Findings in Abdominal Computed Tomography Scans: Toward an Automated Work List Prioritization of Routine CT Examinations . Invest Radiol 2019. ; 54 ( 1 ): 55 – 59 . [DOI] [PubMed] [Google Scholar]
  • 68. Huang SC , Kothari T , Banerjee I , et al . PENet-a scalable deep-learning model for automated diagnosis of pulmonary embolism using volumetric CT imaging . NPJ Digit Med 2020. ; 3 ( 1 ): 61 . [Published correction appears in NPJ Digit Med 2020;3:102.] [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 69. Banerjee I , Sofela M , Yang J , et al . Development and performance of the pulmonary embolism result forecast model (PERFORM) for computed tomography clinical decision support . JAMA Netw Open 2019. ; 2 ( 8 ): e198719 . [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 70. Pines JM , Griffey RT . What we have learned from a decade of ED crowding research . Acad Emerg Med 2015. ; 22 ( 8 ): 985 – 987 . [DOI] [PubMed] [Google Scholar]
  • 71. Jha S . Value of triage by artificial intelligence . Acad Radiol 2020. ; 27 ( 1 ): 153 – 155 . [DOI] [PubMed] [Google Scholar]
  • 72. Potter CA , Vagal AS , Goyal M , Nunez DB , Leslie-Mazwi TM , Lev MH . CT for treatment selection in acute ischemic stroke: A code stroke primer . RadioGraphics 2019. ; 39 ( 6 ): 1717 – 1738 . [DOI] [PubMed] [Google Scholar]
  • 73. Goldberg-Stein S , Chernyak V . Adding value in radiology reporting . J Am Coll Radiol 2019. ; 16 ( 9 Pt B ): 1292 – 1298 . [DOI] [PubMed] [Google Scholar]
  • 74. Cai T , Giannopoulos AA , Yu S , et al . Natural language processing technologies in radiology research and clinical applications . RadioGraphics 2016. ; 36 ( 1 ): 176 – 191 . [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 75. Lou R, Lalevic D, Chambers C, Zafar HM, Cook TS. Automated Detection of Radiology Reports that Require Follow-up Imaging Using Natural Language Processing Feature Engineering and Machine Learning Classification. J Digit Imaging 2020;33(1):131–136.
  • 76. Dutta S, Long WJ, Brown DFM, Reisner AT. Automated detection using natural language processing of radiologists' recommendations for additional imaging of incidental findings. Ann Emerg Med 2013;62(2):162–169.
  • 77. Pons E, Braun LMM, Hunink MGM, Kors JA. Natural language processing in radiology: a systematic review. Radiology 2016;279(2):329–343.
  • 78. Do BH, Wu AS, Maley J, Biswal S. Automatic retrieval of bone fracture knowledge using natural language processing. J Digit Imaging 2013;26(4):709–713.
  • 79. Taira RK, Soderland SG, Jakobovits RM. Automatic structuring of radiology free-text reports. RadioGraphics 2001;21(1):237–245.
  • 80. Zimmerman SL, Kim W, Boonn WW. Informatics in radiology: automated structured reporting of imaging findings using the AIM standard and XML. RadioGraphics 2011;31(3):881–887.
  • 81. Jungmann F, Arnhold G, Kämpgen B, et al. A hybrid reporting platform for extended RadLex coding combining structured reporting templates and natural language processing. J Digit Imaging 2020;33(4):1026–1033.
  • 82. Delfim RLC, Veiga LCGD, Vidal APA, Lopes FPPL, Vaisman M, Teixeira PFDS. Likelihood of malignancy in thyroid nodules according to a proposed Thyroid Imaging Reporting and Data System (TI-RADS) classification merging suspicious and benign ultrasound features. Arch Endocrinol Metab 2017;61(3):211–221.
  • 83. Sickles EA, D'Orsi CJ, Bassett LW, et al. American College of Radiology. Breast imaging reporting and data system (BI-RADS) atlas. 5th ed. Reston, Va: American College of Radiology, 2013.
  • 84. Mitchell DG, Bruix J, Sherman M, Sirlin CB. LI-RADS (Liver Imaging Reporting and Data System): summary, discussion, and consensus of the LI-RADS Management Working Group and future directions. Hepatology 2015;61(3):1056–1065.
  • 85. Zhang JY, Weinberg BD, Hu R, et al. Quantitative Improvement in Brain Tumor MRI Through Structured Reporting (BT-RADS). Acad Radiol 2020;27(6):780–784.
  • 86. Sedghi Gamechi Z, Bons LR, Giordano M, et al. Automated 3D segmentation and diameter measurement of the thoracic aorta on non-contrast enhanced CT. Eur Radiol 2019;29(9):4613–4623.
  • 87. Barnard R, Tan J, Roller B, et al. Machine Learning for Automatic Paraspinous Muscle Area and Attenuation Measures on Low-Dose Chest CT Scans. Acad Radiol 2019;26(12):1686–1694.
  • 88. Ironside N, Chen CJ, Mutasa S, et al. Fully automated segmentation algorithm for hematoma volumetric analysis in spontaneous intracerebral hemorrhage. Stroke 2019;50(12):3416–3423.
  • 89. Saha A, Grimm LJ, Ghate SV, et al. Machine learning-based prediction of future breast cancer using algorithmically measured background parenchymal enhancement on high-risk screening MRI. J Magn Reson Imaging 2019;50(2):456–464.
  • 90. Reig B, Heacock L, Geras KJ, Moy L. Machine learning in breast MRI. J Magn Reson Imaging 2020;52(4):998–1018.
  • 91. Mohamed AA, Berg WA, Peng H, Luo Y, Jankowitz RC, Wu S. A deep learning method for classifying mammographic breast density categories. Med Phys 2018;45(1):314–321.
  • 92. Zhang E, Seiler S, Chen M, Lu W, Gu X. BIRADS features-oriented semi-supervised deep learning for breast ultrasound computer-aided diagnosis. Phys Med Biol 2020;65(12):125005.
  • 93. Balakrishnan G, Zhao A, Sabuncu MR, Guttag J, Dalca AV. VoxelMorph: a learning framework for deformable medical image registration. IEEE Trans Med Imaging 2019;38(8):1788–1800.
  • 94. Das A, Rajendra Acharya U, Panda SS, Sabut S. Deep learning based liver cancer detection using watershed transform and Gaussian mixture model techniques. Cogn Syst Res 2019;54:165–175.
  • 95. Banerjee I, Choi HH, Desser T, Rubin DL. A Scalable Machine Learning Approach for Inferring Probabilistic US-LI-RADS Categorization. AMIA Annu Symp Proc 2018;2018:215–224.
  • 96. Lee SJ, Weinberg BD, Gore A, Banerjee I. A Scalable Natural Language Processing for Inferring BT-RADS Categorization from Unstructured Brain Magnetic Resonance Reports. J Digit Imaging 2020;33(6):1393–1400.
  • 97. Hassanpour S, Langlotz CP. Information extraction from multi-institutional radiology reports. Artif Intell Med 2016;66:29–39.
  • 98. Wildman-Tobriner B, Buda M, Hoang JK, et al. Using Artificial Intelligence to Revise ACR TI-RADS Risk Stratification of Thyroid Nodules: Diagnostic Accuracy and Utility. Radiology 2019;292(1):112–119.
  • 99. The Joint Commission. National Patient Safety Goals Effective July 2020 for the Hospital Program. https://www.jointcommission.org/-/media/tjc/documents/standards/national-patient-safety-goals/2022/npsg_chapter_hap_jan2022.pdf. Accessed August 31, 2021.
  • 100. Chetlen AL, Chan TL, Ballard DH, et al. Addressing burnout in radiologists. Acad Radiol 2019;26(4):526–533.
  • 101. Cannavale A, Santoni M, Mancarella P, Passariello R, Arbarello P. Malpractice in radiology: what should you worry about? Radiol Res Pract 2013;2013:219259.
  • 102. Lakhani P, Langlotz CP. Automated detection of radiology reports that document non-routine communication of critical or significant results. J Digit Imaging 2010;23(6):647–657.
  • 103. Lakhani P, Kim W, Langlotz CP. Automated detection of critical results in radiology reports. J Digit Imaging 2012;25(1):30–36.
  • 104. Meng X, Ganoe CH, Sieberg RT, Cheung YY, Hassanpour S. Assisting radiologists with reporting urgent findings to referring physicians: a machine learning approach to identify cases for prompt communication. J Biomed Inform 2019;93:103169.
  • 105. Do HM, Spear LG, Nikpanah M, et al. Augmented radiologist workflow improves report value and saves time: a potential model for implementation of artificial intelligence. Acad Radiol 2020;27(1):96–105.
  • 106. Cochon LR, Kapoor N, Carrodeguas E, et al. Variation in Follow-up Imaging Recommendations in Radiology Reports: Patient, Modality, and Radiologist Predictors. Radiology 2019;291(3):700–707.
  • 107. Trivedi G, Dadashzadeh ER, Handzel RM, Chapman WW, Visweswaran S, Hochheiser H. Interactive NLP in clinical care: identifying incidental findings in radiology reports. Appl Clin Inform 2019;10(4):655–669.
  • 108. Kang SK, Garry K, Chung R, et al. Natural language processing for identification of incidental pulmonary nodules in radiology reports. J Am Coll Radiol 2019;16(11):1587–1594.
  • 109. Carrodeguas E, Lacson R, Swanson W, Khorasani R. Use of Machine Learning to Identify Follow-Up Recommendations in Radiology Reports. J Am Coll Radiol 2019;16(3):336–343.
  • 110. Pham AD, Névéol A, Lavergne T, et al. Natural language processing of radiology reports for the detection of thromboembolic diseases and clinically relevant incidental findings. BMC Bioinformatics 2014;15(1):266.
  • 111. Oliveira L, Tellis R, Qian Y, Trovato K, Mankovich G. Follow-up Recommendation Detection on Radiology Reports with Incidental Pulmonary Nodules. Stud Health Technol Inform 2015;216:1028.
  • 112. Hammer MM, Kapoor N, Desai SP, et al. Adoption of a Closed-Loop Communication Tool to Establish and Execute a Collaborative Follow-Up Plan for Incidental Pulmonary Nodules. AJR Am J Roentgenol 2019;1–5.
  • 113. Sanborn BJ. Change Healthcare analysis shows $262 billion in medical claims initially denied, meaning billions in administrative costs. Healthcare Finance News. https://www.healthcarefinancenews.com/news/change-healthcare-analysis-shows-262-million-medical-claims-initially-denied-meaning-billions. Accessed February 24, 2022.
  • 114. Combatting denials using machine intelligence: how it works and why now is the time for it. Becker's Hospital Review. https://www.beckershospitalreview.com/finance/combatting-denials-using-machine-intelligence-how-it-works-and-why-now-is-the-time-for-it.html. Accessed February 24, 2022.
  • 115. Jiwani A, Himmelstein D, Woolhandler S, Kahn JG. Billing and insurance-related administrative costs in United States' health care: synthesis of micro-costing evidence. BMC Health Serv Res 2014;14(1):556.
  • 116. Institute of Medicine Roundtable on Evidence-Based Medicine. Excess administrative costs. In: Yong PL, Saunders RS, Olsen L, eds. The Healthcare Imperative: Lowering Costs and Improving Outcomes: Workshop Series Summary. Washington, DC: The National Academies Press, 2010. https://www.ncbi.nlm.nih.gov/books/NBK53942/.
  • 117. Chan L, Beers K, Yau AA, et al. Natural language processing of electronic health records is superior to billing codes to identify symptom burden in hemodialysis patients. Kidney Int 2020;97(2):383–392.
  • 118. Jagannathan V, Mullett CJ, Arbogast JG, et al. Assessment of commercial NLP engines for medication information extraction from dictated clinical notes. Int J Med Inform 2009;78(4):284–291.
  • 119. Duszak R, Nossal M, Schofield L, Picus D. Physician documentation deficiencies in abdominal ultrasound reports: frequency, characteristics, and financial impact. J Am Coll Radiol 2012;9(6):403–408.
  • 120. Denck J, Landschütz W, Nairz K, Heverhagen JT, Maier A, Rothgang E. Automated Billing Code Retrieval from MRI Scanner Log Data. J Digit Imaging 2019;32(6):1103–1111.
  • 121. Syed AB, Zoga AC. Artificial intelligence in radiology: current technology and future directions. Semin Musculoskelet Radiol 2018;22(5):540–545.
  • 122. Khanna R, Yen T. Computerized physician order entry: promise, perils, and experience. Neurohospitalist 2014;4(1):26–33.
  • 123. Artificial Intelligence Adoption and Investments Growing Rapidly Among Health Industry Leaders. Business Wire. https://www.businesswire.com/news/home/20191008005194/en/Artificial-Intelligence-Adoption-and-Investments-Growing-Rapidly-Among-Health-Industry-Leaders. Accessed February 24, 2022.
  • 124. At RadNet, AI-fueled prior authorization tech shows promise. Healthcare IT News. https://www.healthcareitnews.com/news/radnet-ai-fueled-prior-authorization-tech-99-accurate. Accessed February 24, 2022.
  • 125. Allen B. Valuing the professional work of diagnostic radiologic services. J Am Coll Radiol 2007;4(2):106–114.
  • 126. Golding LP, Nicola GN. A business case for artificial intelligence tools: the currency of improved quality and reduced cost. J Am Coll Radiol 2019;16(9 Pt B):1357–1361.
  • 127. Centers for Medicare & Medicaid Services. Medicare Program; Revisions to Payment Policies Under the Physician Fee Schedule and Other Revisions to Part B for CY 2019; Medicare Shared Savings Program Requirements; Quality Payment Program; Medicaid Promoting Interoperability Program; Quality Payment Program-Extreme and Uncontrollable Circumstance Policy for the 2019 MIPS Payment Year; Provisions From the Medicare Shared Savings Program-Accountable Care Organizations-Pathways to Success; and Expanding the Use of Telehealth Services for the Treatment of Opioid Use Disorder Under the Substance Use-Disorder Prevention That Promotes Opioid Recovery and Treatment (SUPPORT) for Patients and Communities Act. https://www.federalregister.gov/documents/2018/11/23/2018-24170/medicare-program-revisions-to-payment-policies-under-the-physician-fee-schedule-and-other-revisions. Published November 23, 2018. Accessed February 24, 2022.
  • 128. Tajmir SH, Alkasab TK. Toward augmented radiologists: changes in radiology education in the era of machine learning and artificial intelligence. Acad Radiol 2018;25(6):747–750.
  • 129. Rubin DL, Kahn CE Jr. Common data elements in radiology. Radiology 2017;283(3):837–844.
  • 130. Kamel PI, Nagy PG. Patient-Centered Radiology with FHIR: an Introduction to the Use of FHIR to Offer Radiology a Clinically Integrated Platform. J Digit Imaging 2018;31(3):327–333.
  • 131. AlMuhaideb S, Alswailem O, Alsubaie N, Ferwana I, Alnajem A. Prediction of hospital no-show appointments through artificial intelligence algorithms. Ann Saudi Med 2019;39(6):373–381.
  • 132. Kurasawa H, Hayashi K, Fujino A, et al. Machine-Learning-Based Prediction of a Missed Scheduled Clinical Appointment by Patients With Diabetes. J Diabetes Sci Technol 2016;10(3):730–736.
  • 133. Nelson A, Herron D, Rees G, Nachev P. Predicting scheduled hospital attendance with artificial intelligence. NPJ Digit Med 2019;2(1):26.
  • 134. Levine WC, Dunn PF. Optimizing operating room scheduling. Anesthesiol Clin 2015;33(4):697–711.
  • 135. Luo L, Zhou Y, Han BT, Li J. An optimization model to determine appointment scheduling window for an outpatient clinic with patient no-shows. Health Care Manage Sci 2019;22(1):68–84.
  • 136. Mieloszyk RJ, Rosenbaum JI, Hall CS, Hippe DS, Gunn ML, Bhargava P. Environmental Factors Predictive of No-Show Visits in Radiology: Observations of Three Million Outpatient Imaging Visits Over 16 Years. J Am Coll Radiol 2019;16(4 Pt B):554–559.
  • 137. Chong LR, Tsai KT, Lee LL, Foo SG, Chang PC. Artificial Intelligence Predictive Analytics in the Management of Outpatient MRI Appointment No-Shows. AJR Am J Roentgenol 2020;215(5):1155–1162.

Articles from Radiology: Artificial Intelligence are provided here courtesy of Radiological Society of North America