Abstract
Substantial efforts have recently been dedicated to developing artificial intelligence (AI) solutions, especially deep learning-based ones, tailored to enhance radiological procedures, in particular algorithms designed to minimize radiation exposure and enhance image clarity. The aim is not only better diagnostic accuracy but also reduced potential harm to patients, exemplifying the intersection of technological innovation and the highest standards of patient care. We provide herein an overview of recent AI developments in computed tomography (CT) and magnetic resonance imaging (MRI). Major AI results in CT concern: optimization of patient positioning, scan range selection (avoiding “overscanning”), and choice of technical parameters; reduction of the amount of injected contrast agent and of the injection flow rate (also avoiding extravasation); and faster and better image reconstruction, reducing noise levels and artifacts. Major AI results in MRI concern: reconstruction of undersampled images; artifact removal, including artifacts caused by unintentional patient (or fetal) movement or by heart motion; and reduction of the dose of gadolinium-based contrast agents (GBCAs) by up to 80–90%. Challenges include limited generalizability, lack of external validation, insufficient explainability of models, and opacity of decision-making. Developing explainable AI algorithms that provide transparent and interpretable outputs is essential to enable seamless AI integration into CT and MRI practice.
Relevance statement
This review highlights how AI-driven advancements in CT and MRI improve image quality and enhance patient safety by leveraging AI solutions for dose reduction, contrast optimization, noise reduction, and efficient image reconstruction, paving the way for safer, faster, and more accurate diagnostic imaging practices.
Key Points
Advancements in AI are revolutionizing the way radiological images are acquired, reconstructed, and interpreted.
AI algorithms can assist in optimizing radiation doses, reducing scan times, and enhancing image quality.
AI techniques are paving the way for a future of more efficient, accurate, and safe medical imaging examinations.
Keywords: Artificial intelligence, Image processing (computer-assisted), Patient care, Patient safety, Radiation dosage
Introduction
Advancements in artificial intelligence (AI) have significantly transformed clinical radiology by enhancing image quality, reducing scan times, and improving diagnostic accuracy [1]. By optimizing workflows—increasing acquisition efficiency and decreasing radiation exposure—AI not only ensures superior imaging but also enhances patient safety, making it an indispensable tool in modern radiology [2]. As policies like the European Union’s AI Act guide the development of these tools, AI applications in radiology are being designed with a strengthened focus on regulatory compliance and data privacy, aligning innovation with standards for safe, ethical clinical use [3]. This regulatory focus not only promotes trust and safety but also supports the efficient implementation of AI technologies. By improving workflows and reducing inefficiencies, AI adoption has the potential to lower operational costs while delivering high-quality care, making it a cost-effective solution for healthcare systems [4].
In computed tomography (CT) and magnetic resonance imaging (MRI), AI is revolutionizing imaging practices. In CT, AI refines patient positioning [5], optimizes scan parameters [6, 7], and reduces unnecessary use of iodinated contrast media (ICM) [8, 9], while advanced algorithms surpass traditional reconstruction methods in denoising and artifact reduction [10–14]. In MRI, AI maintains image quality even with faster scans, addressing limitations of conventional methods that often compromise image integrity for shorter scan times [15–22], and contributes to lowering the dose of gadolinium-based contrast agents (GBCAs) [23–26]. Expanding this potential, generative AI introduces a creative dimension to radiology, enabling the generation of high-quality images that complement existing CT and MRI techniques and contribute to improving imaging workflows [27]. Table 1 summarizes the main AI applications in CT and MRI for image reconstruction and patient safety discussed in the main text.
Table 1.
Main applications of AI for image quality and patient safety in CT and MRI
| Application | Modality | AI model |
|---|---|---|
| Patient positioning: automated table centering | CT | U-Net [5] |
| Scan range selection from scout images | CT | Deep residual neural network [6]; GAN [7] |
| Parameter selection | CT | Deep learning^a [40–42] |
| Contrast media optimization | CT | GAN [8, 9] |
| Image reconstruction | CT | CNN [10–14] |
| Reconstruction of under-sampled images | MRI | CNN [15–17]; Deep Density Priors [18]; GAN [21] |
| Artifact removal | MRI | CNN [19, 20]; GAN [22] |
| Gadolinium dose reduction | MRI | U-Net [23–25]; GAN [26] |
CNN Convolutional neural network, GAN Generative adversarial network
a Model architecture of commercial solutions not specified by the vendors
This narrative review aims to describe applications of AI in CT and MRI that help optimize imaging protocols and improve image quality while minimizing risks associated with radiation and contrast agents. Translating these advancements into clinical practice may foster greater healthcare efficiency while also reducing costs associated with inefficiencies and patient safety risks [2, 28].
Computed tomography
Computed tomography has long been a cornerstone of diagnostic imaging and remains one of its primary modalities [29]. AI offers promising advancements in enhancing diagnostic image quality while adhering to the “as low as reasonably/diagnostically achievable” (ALARA) principle for radiation dose [30]. AI technologies have been developed to optimize each stage of the CT process, from patient preparation to image reconstruction [31]. Recent AI innovations aimed at optimizing ICM use, coupled with advances in deep learning (DL)-based image reconstruction, make it possible to maintain high-quality CT imaging while simultaneously reducing the radiation dose and optimizing ICM administration [30–32].
Exam preparation
Patient positioning
Within the CT system, the x-ray tube-detector pair rotates around a fixed center, the “isocenter”. Correct patient positioning is achieved at the table height at which the center of the patient coincides with the isocenter of the scanner. From the 2000s onwards, “automatic exposure control” has been incorporated into CT systems, based on the principle that x-ray attenuation and quantum image noise are determined by the size of the object and its tissue density [33]. Automatic exposure control varies the number of x-ray photons according to the thickness of the various regions of the body, using the information provided by the localizer radiograph. If the patient is positioned incorrectly, too high or too low relative to the isocenter, the system perceives the patient as thinner or thicker than they really are and applies an incorrect radiation dose [31].
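The geometric effect of mis-centering can be sketched in a few lines of Python; the tube-to-isocenter and tube-to-detector distances below are assumed, illustrative values, not those of any specific scanner.

```python
# Illustrative sketch (not vendor code): why off-isocenter positioning
# skews automatic exposure control. In an anterior-posterior localizer
# the tube is above the patient, so raising the table moves the patient
# closer to the tube and the projected silhouette is magnified; the
# system then overestimates patient size.

SOURCE_TO_ISOCENTER_MM = 600.0   # assumed tube-to-isocenter distance
SOURCE_TO_DETECTOR_MM = 1100.0   # assumed tube-to-detector distance

def projected_width(true_width_mm: float, offset_mm: float) -> float:
    """Width of the patient silhouette on the detector.

    offset_mm > 0 means the table is too high (patient closer to the tube).
    """
    source_to_patient = SOURCE_TO_ISOCENTER_MM - offset_mm
    magnification = SOURCE_TO_DETECTOR_MM / source_to_patient
    return true_width_mm * magnification

width = 350.0  # true patient width in mm
centered = projected_width(width, 0.0)
too_high = projected_width(width, 50.0)   # table 5 cm too high
too_low = projected_width(width, -50.0)   # table 5 cm too low

# A mis-centered patient looks thicker or thinner than they really are,
# so automatic exposure control applies too much or too little current.
print(f"centered: {centered:.0f} mm, too high: {too_high:.0f} mm, "
      f"too low: {too_low:.0f} mm")
```

With these assumed distances, a 5-cm centering error changes the apparent patient width by roughly 8–9%, which directly biases the tube-current modulation.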
To address this, AI-based strategies for optimizing patient positioning have been implemented [34]. Some CT manufacturers have integrated a three-dimensional infrared camera placed on the ceiling above the CT table, which detects reference points on the patient’s surface, reproducing their three-dimensional image. A 2019 study evaluated the accuracy of this three-dimensional camera for patient positioning in CT. The camera worked on a statistical shape model to assume the pose and the body proportions of the patient. Automatic positioning by using the camera outperformed manual positioning done by radiographers, with significantly less deviation from ideal table height [35].
In 2023, a method for automatic patient positioning that relied on the deep neural network using only the CT image localizer was proposed. The patient’s body centerline distance from the gantry isocenter was automatically determined to obtain table height and positioning, achieving performance comparable to alternative techniques such as the external three-dimensional camera. Notably, the method had the advantage of being free from errors related to objects blocking the camera visibility [5].
Scan range selection
Once the patient is correctly centered at the isocenter, the operator, relying on the localizer radiograph, must select the anatomical portion from which the data will be acquired. In clinical practice, this selection is usually performed manually, with the scan length chosen on landmarks extracted from two-dimensional anterior-posterior and/or lateral scout scans [7]. The process is prone to human error and highly operator-dependent, often resulting in too much or too little anatomical coverage. For instance, it is common to extend the scan length excessively to avoid excluding any anatomical structure from the scan. This “overscanning” causes excessive radiation exposure to patients undergoing CT examinations and must be avoided [36]. AI algorithms have been trained to accurately identify human anatomy and choose a scan range optimally centered on the anatomical structures required by the exam indication, thus optimizing patients’ radiation dose [37].
Driven by the high number of chest CT scans performed during the COVID-19 pandemic, a completely automated system for selecting the acquisition range was proposed in 2021 [6]. It was based on a deep residual neural network algorithm that used exact lung coverage as the ground truth and anterior-posterior and lateral projections as input, providing scan range delimitation with errors of 0.08 ± 1.46 mm and -1.5 ± 4.1 mm (mean ± standard deviation) in the superior and inferior directions, respectively. With the proposed method, the effective dose reduction achieved by automatic scan selection was as high as 21% [6].
In the same year, another method, relying on generative adversarial networks (GANs), was adopted to automatically delineate the scan range based on chest CT topograms, using the scan range annotated by expert radiologists as the ground truth. The algorithm-based scan ranges achieved absolute differences of 1.8 ± 1.9 mm and 3.3 ± 5.6 mm at the upper and lower boundary, respectively, yielding a lower simulated total radiation exposure [7].
Exam acquisition
Parameter selection
During the image acquisition phase in CT, parameters such as tube current and voltage are selected based on patient-specific biological factors, including age, height, and weight. Manual adjustment of these parameters introduces significant risks of interoperator variability and human error, which can lead to suboptimal scan settings [38]. To improve this process and reduce dose exposure, automated systems that account for patient characteristics can be employed alongside technologies such as automatic exposure control [39]. Manufacturers have developed parameter selection systems integrated into their commercial products, where decision algorithms tailor scan parameters based on individual patient characteristics. Once the patient is properly aligned and personal data is retrieved, these systems automatically provide the optimal settings for generating low-dose, high-quality images. These vendor-specific systems are incorporated into the acquisition workflow and are allegedly based on DL technologies [40–42].
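As a hedged illustration of size-adapted parameter selection (the actual vendor algorithms are proprietary and unspecified [40–42]), the sketch below applies a common rule of thumb whereby the dose needed to keep image noise constant roughly doubles for every ~4 cm of additional water-equivalent diameter; every constant here is an assumption, not a clinical recommendation.

```python
# Hypothetical size-based tube-current selection, for illustration only.
# Assumptions: a reference adult of 30 cm water-equivalent diameter
# receives 100 mAs, and the required dose doubles per +4 cm of diameter.

REFERENCE_DIAMETER_CM = 30.0   # assumed reference diameter
REFERENCE_MAS = 100.0          # assumed mAs at the reference diameter
DOUBLING_DISTANCE_CM = 4.0     # assumed diameter increase doubling dose

def suggested_mas(water_equivalent_diameter_cm: float) -> float:
    """Tube current-time product scaled exponentially with patient size."""
    delta = water_equivalent_diameter_cm - REFERENCE_DIAMETER_CM
    return REFERENCE_MAS * 2.0 ** (delta / DOUBLING_DISTANCE_CM)

print(round(suggested_mas(30.0)))  # 100 (reference patient)
print(round(suggested_mas(34.0)))  # 200 (larger patient, dose doubled)
print(round(suggested_mas(22.0)))  # 25  (small patient, dose quartered)
```

The exponential form reflects the physics motivating automatic exposure control: x-ray attenuation grows exponentially with tissue thickness, so holding noise constant requires exponentially more photons for larger patients.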
Contrast media optimization
In recent years, progress has been made in optimizing the dose of ICM through several approaches, which can be broadly categorized into those acting directly in the image acquisition phase and those acting in the post-processing phase [43]. The first approach aims at optimizing scan timing, obtaining images at the moment of maximum contrast enhancement in the region under study. The second approach exploits AI techniques to improve the quality of CT scans acquired with a reduced amount of ICM [44].
Two pioneering studies published in 2019 introduced novel algorithms designed to optimize contrast-enhanced CT imaging by tailoring acquisition timing to patient-specific cardiovascular data [45, 46]. An approach was developed to predict the aortic contrast enhancement curve by continuously monitoring the patient’s cardiovascular system after contrast injection, enabling a personalized trigger delay that improved image quality and provided a superior contrast-to-noise ratio compared to fixed-delay methods [45]. A bolus-tracking software was also implemented to determine the optimal transition delay between reaching the density threshold in the aorta and initiating the scan, using real-time data and empirical enhancement curves from previous patients [46]. By optimizing scan timing, these individualized strategies not only enhanced image quality but also have the potential to reduce both the total amount of ICM and the injection flow rate, thus minimizing the risk of contrast extravasation while maintaining diagnostic accuracy [46].
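The core bolus-tracking logic can be sketched as follows; this is a simplified illustration, not the cited software, and the threshold and delay values are assumptions.

```python
# Simplified bolus-tracking logic (illustrative, not the cited method):
# monitor aortic density after contrast injection and start the scan
# once an enhancement threshold is crossed, after a transition delay.

THRESHOLD_HU = 150.0       # assumed trigger threshold in the aorta
TRANSITION_DELAY_S = 2.0   # assumed delay between trigger and scan start

def scan_start_time(times_s, aorta_hu):
    """Return the scan start time, or None if the threshold is never reached."""
    for t, hu in zip(times_s, aorta_hu):
        if hu >= THRESHOLD_HU:
            return t + TRANSITION_DELAY_S
    return None

# Synthetic monitoring samples (1-s interval) of aortic enhancement:
times = [0, 1, 2, 3, 4, 5, 6, 7]
enhancement = [5, 8, 30, 80, 140, 170, 210, 230]

print(scan_start_time(times, enhancement))  # 7.0 (threshold crossed at t = 5 s)
```

The AI-based methods described above refine exactly these two quantities, predicting the patient-specific enhancement curve so that the threshold and delay need not be fixed population averages.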
Further opportunities for optimizing contrast media usage have emerged due to advancements in DL-based image post-processing techniques. A GAN was trained and validated to selectively enhance image quality and diagnostic accuracy in CT images acquired with lower doses of ICM. Notably, dual-energy CT was utilized to obtain images with virtually reduced ICM dose levels. The GAN algorithm successfully enhanced ICM-induced image contrast in images with a virtually 50% reduced ICM dose, achieving consistency deemed appropriate for clinical application [8].
Similarly, GANs can enhance diagnostic accuracy with conventional imaging techniques by improving image quality and the visibility of key features [47]. A GAN was employed to generate virtual monoenergetic images at 40 keV from conventional contrast-enhanced single-energy CT scans of the upper abdomen. The model achieved a peak signal-to-noise ratio of 45.2, indicating very high fidelity, and significantly improved image quality and lesion visibility compared to single-energy CT. The algorithm was also tested on patients with hepatocellular carcinoma lesions in the liver, demonstrating that GAN-generated virtual monoenergetic images could enhance diagnostic precision for liver pathologies while utilizing conventional contrast-enhanced CT data [9].
Image reconstruction
Image quality is closely related to the mathematical transformation of raw data into two- and three-dimensional images. Research has focused heavily on the development of postprocessing algorithms capable of improving the quality of the image obtained while delivering the lowest possible radiation dose [30]. The two most used reconstruction techniques since the introduction of CT are filtered back projection (FBP) and iterative reconstruction (IR), with the latter gradually replacing FBP [48].
FBP is computationally simple but amplifies noise and lacks advanced artifact correction, leading to degraded image quality and diagnostic accuracy in low-dose CT scans. IR addresses these limitations by iteratively refining images using mathematical models, which enhances spatial resolution and suppresses noise; however, IR can increase reconstruction times and may produce images with an unnatural “waxy” appearance when high reconstruction strength levels are used with low-dose acquisitions [48]. Hybrid iterative reconstruction (HIR) is a conventional IR technique that combines features of both FBP and IR to enhance image quality and reduce noise; this technique is less computationally demanding but also less capable of noise and artifact reduction than IR [49].
DL-based CT image reconstruction has emerged as a transformative approach in medical imaging, aiming to enhance image quality while minimizing radiation exposure. By utilizing advanced neural networks, particularly convolutional neural networks (CNNs), these methods learn complex mappings from raw projection data or preliminary reconstructions to high-fidelity images. This approach effectively reduces noise and artifacts commonly associated with low-dose or sparse-view CT scans. Compared to traditional IR techniques, DL models can achieve superior image quality with faster computational times [49].
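The supervised training principle behind these methods (pairs of low-dose inputs and routine-dose targets, with a model fit to map one to the other) can be shown with a deliberately minimal stand-in for a CNN: a single blending weight between a noisy signal and a smoothed copy, fit in closed form. All signals here are synthetic.

```python
import random

# Toy illustration of supervised DL reconstruction training: paired
# (low-dose, routine-dose) data, and a "model" fit to map one to the
# other. The model is a single learned blending weight between the
# noisy signal and a smoothed copy - a minimal stand-in for a CNN.

random.seed(0)

def smooth(signal):
    """3-point moving average with edge replication."""
    padded = [signal[0]] + list(signal) + [signal[-1]]
    return [(padded[i - 1] + padded[i] + padded[i + 1]) / 3.0
            for i in range(1, len(signal) + 1)]

clean = [float((i % 16) < 8) * 100.0 for i in range(256)]   # "routine dose"
noisy = [v + random.gauss(0.0, 20.0) for v in clean]        # "low dose"
smoothed = smooth(noisy)

# Closed-form least-squares weight w for: output = noisy + w*(smoothed - noisy)
num = sum((c - n) * (s - n) for c, n, s in zip(clean, noisy, smoothed))
den = sum((s - n) ** 2 for n, s in zip(noisy, smoothed))
w = num / den

denoised = [n + w * (s - n) for n, s in zip(noisy, smoothed)]

def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

print(f"w = {w:.2f}, MSE noisy = {mse(noisy, clean):.1f}, "
      f"denoised = {mse(denoised, clean):.1f}")
```

Real DL reconstruction replaces the single weight with millions of CNN parameters, but the training objective is the same: minimize the error between the network output and the routine-dose ground truth.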
In 2019, TrueFidelity (GE Healthcare) became the first DL reconstruction technique approved by the FDA. During its training phase, the algorithm used lower-dose sinograms—the raw projection data from the CT scan—obtained from both phantom studies and patient scans as input data. A direct DL reconstruction model employing a CNN generated estimations of the reconstructed images from these sinograms. These estimations were then compared to high-dose images reconstructed using FBP, which served as the ground truth. Because TrueFidelity was trained using FBP images, the output retains FBP-like characteristics, including noise texture, image sharpness, and artifact properties [10].
In the same year, the Advanced Intelligent Clear-IQ Engine (AiCE) by Canon Medical Systems received FDA approval as the second DL reconstruction algorithm. AiCE is based on a CNN trained with patient data, where lower-dose HIR images are used as input and routine-dose full IR images serve as the ground truth. AiCE employs an indirect, image-based DL reconstruction approach that begins with sinogram filtering, incorporating scanner-specific information such as gantry size and detector materials. An initial image is reconstructed using HIR and then fed into the CNN, which outputs an enhanced image. While the reconstruction time per scan for AiCE is slightly longer than that of HIR (44 s versus 27 s for brain scans), it remains substantially shorter, by a factor of three to five, than the reconstruction time required for full IR [11].
Furthermore, other vendors have developed DL reconstruction algorithms. In 2022, Philips Healthcare released Precise Image, a direct DL reconstruction algorithm approved by the FDA, which employs a CNN trained on lower-dose simulated sinograms with added noise and matches them to routine-dose FBP images serving as ground truth [12]. Additionally, to address the limitations of vendor-specific solutions, vendor-neutral DL–based denoising algorithms such as ClariCT.AI (ClariPi) [13] and PixelShine (AlgoMedica) [14] have been introduced. Specifically, ClariCT.AI showed superior noise reduction, enhanced spatial resolution, and improved overall image quality compared to HIR in lower-dose chest CT scans [50]. Similarly, PixelShine was found to improve noise levels, reduce streak artifacts, and enhance overall image quality when applied to lower-dose chest CT images reconstructed with FBP, HIR, and model-based IR techniques [51].
Magnetic resonance imaging
Most AI techniques in radiology have focused on the detection of anatomical structures or lesions and subsequent image segmentation, with neuroradiology being the most common subspecialty for their application [52]. These tools are well integrated into medical practice and currently support radiological reporting and clinical decision-making [52, 53]. The same cannot be said for AI-based algorithms for image acquisition, MRI reconstruction, and image quality. In this setting, CNNs play a major role among the various DL techniques [53], and applications are currently being proposed with the purpose of reducing the length of MRI protocols, lowering GBCA dose, performing image harmonization, improving image quality, and removing artifacts [54]. In this framework, DL has the opportunity to make an impact on such challenges, with fewer ethical issues and medico-legal risks compared to post-processing tools that guide treatment decision-making [55].
Reconstruction of undersampled images
In MRI, raw k-space data must be transformed into the image domain for clinical interpretation, and fully sampling k-space is often time-consuming. Traditional methods to reduce acquisition time, such as varying the k-space trajectory, parallel imaging with multiple coils, or compressed sensing with sparse representations, can be effective but typically sacrifice image quality [56]. Emerging AI applications now complement these methods, potentially preserving or even enhancing image quality in undersampled datasets [15].
Most AI-based reconstruction and denoising algorithms fall into two groups: those that work directly on raw (often multicoil) data and postprocessing methods that use reconstructed images as input. The former leverages a richer dataset, including phase and coil sensitivity data, while the latter, using the Digital Imaging and Communications in Medicine−DICOM standard protocol, benefits from simplicity and a larger pool of training data [15]. Figure 1 illustrates that reconstruction methods based on raw data achieve higher structural similarity index measure values—ranging from 0 (no similarity) to 1 (perfect similarity)—compared to postprocessing models, showing superior detail retention and less blurring in four-times accelerated T2-weighted images [15].
Fig. 1.

Results of CNN-based algorithms for image enhancement that work on either raw data or on already-reconstructed images. a Brain T2-weighted reference image, axial scan at the basal ganglia level, fully sampled. b Four-times accelerated image, zero-filled. c Output from the postprocessing algorithm. d Output from the raw-data algorithm. e Absolute errors for panel (c). f Absolute errors for panel (d). The structural similarity index (range 0–1) is displayed in white at the top right of every image. Reproduced with permission [15]
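The zero-filled reconstruction shown above can be reproduced in one dimension with a naive discrete Fourier transform; the profile and the sampling pattern below are illustrative only.

```python
import cmath

# One-dimensional toy version of zero-filled reconstruction of
# undersampled k-space. A naive DFT stands in for the scanner's
# Fourier encoding; profile and sampling pattern are illustrative.

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

profile = [0.0] * 8 + [1.0] * 16 + [0.0] * 8   # sharp-edged "anatomy" profile
kspace = dft(profile)
n = len(kspace)

# Roughly four-fold acceleration: keep only the 9 lowest-frequency
# lines (indices 0, +/-1, ..., +/-4 with wrap-around), zero-fill the rest.
kept = set(range(5)) | {n - k for k in range(1, 5)}
undersampled = [kspace[k] if k in kept else 0.0 for k in range(n)]
zero_filled = [v.real for v in idft(undersampled)]

# The zero-filled profile is a blurred, ringing version of the original;
# DL reconstruction aims to restore the discarded high-frequency detail.
rms_error = (sum((a - b) ** 2 for a, b in zip(zero_filled, profile)) / n) ** 0.5
print(f"RMS error of zero-filled reconstruction: {rms_error:.3f}")
```

Discarding the outer k-space lines removes exactly the high-frequency content that encodes sharp edges, which is why zero-filled images blur fine detail and why learned priors are needed to recover it.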
DL-based reconstruction of undersampled images working on raw data generally requires paired datasets in the training phase, where prior information on the structure of fully-sampled images is needed to compensate for missing k-space data in the corresponding under-sampled image. A new approach was introduced [18] that uses unsupervised DL to learn the probability distribution of fully-sampled MRI images and exploits this prior information during reconstruction, without requiring paired datasets for training. The method was evaluated on a publicly available dataset of brain T1-weighted images, comprising scans from healthy controls and brains with a high burden of white matter lesions. The unsupervised DL algorithm produced high-quality reconstructions and outperformed most of the available methods on the same dataset. According to the authors, their method has the advantage of being less dependent on acquisition specifications and coil settings, and can theoretically be applied to reconstruct any sampling scheme without the need for algorithm retraining [18].
Dynamic imaging poses a major challenge for the application of DL-based reconstruction methods to sparsely sampled data, with cardiac dynamic sequences being the most prominent example. In cine MRI, the pattern of pixels outside the heart remains consistent across time. Moreover, the signal of each pixel within the heart is strongly correlated with the signal at the same location in the preceding and succeeding frames. A network that efficiently leverages such spatiotemporal redundancy can boost its performance and achieve more aggressive undersampling compared to DL architectures that work on static images: missing k-space data for each frame can be filled using samples from adjacent frames. This principle (the “data-sharing” approach), combined with convolution, has been exploited to develop recurrent CNNs able to reconstruct cardiac MR images that are up to eleven times undersampled [16, 17].
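A minimal sketch of the data-sharing idea (not the cited recurrent architecture): each k-space line missing in a frame is copied from the temporally nearest frame in which it was acquired.

```python
# Illustrative data-sharing for cine MRI: fill each missing k-space
# line from the temporally nearest frame where that line was sampled.
# Strings stand in for complex-valued k-space lines.

def data_share(frames, masks):
    """frames: per-frame lists of k-space lines (None where unsampled);
    masks: parallel lists of booleans marking acquired lines."""
    n_frames, n_lines = len(frames), len(frames[0])
    filled = [list(f) for f in frames]
    for t in range(n_frames):
        for line in range(n_lines):
            if masks[t][line]:
                continue
            # find the temporally nearest frame that acquired this line
            acquired = [(abs(u - t), u) for u in range(n_frames) if masks[u][line]]
            if acquired:
                _, nearest = min(acquired)
                filled[t][line] = frames[nearest][line]
    return filled

# Interleaved sampling: even lines in frames 0 and 2, odd lines in frame 1
frames = [["a0", None, "c0", None],
          [None, "b1", None, "d1"],
          ["a2", None, "c2", None]]
masks = [[True, False, True, False],
         [False, True, False, True],
         [True, False, True, False]]

filled = data_share(frames, masks)
print(filled[0])  # ['a0', 'b1', 'c0', 'd1']
```

The recurrent CNNs cited above go further, learning how to weight and correct the shared samples rather than copying them verbatim, but the exploited redundancy is the same.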
GANs offer significant advancements in MRI reconstruction, particularly for compressed sensing and de-aliasing from incomplete k-space data [57]. RefineGAN, an advancement over earlier GAN models, was proposed to incorporate cyclic consistency, ensuring that transformed images could revert to their original state, and residual learning, allowing the network to focus on targeted corrections to specific image features rather than recreating the entire image. This model achieved high image quality even at extremely low sampling rates (10%) [21]. By efficiently reconstructing high-quality images from limited data, these GAN-based methods can significantly reduce acquisition times and enhance diagnostic accuracy, making GANs a highly promising tool for rapid, clinically applicable MRI reconstructions [57].
Artifact removal
Motion artifacts compromise image interpretability and can lead to radiological misdiagnosis [15, 58, 59]. Motion correction techniques are applied either prospectively, during acquisition, or retrospectively [58, 59]. Prospective correction maintains constant spatial alignment between scanning coordinates and the target region by real-time tracking of position and orientation, enabling the imaging volume to adaptively “follow” the region of interest [59]. However, this approach may reduce scanning efficiency due to repeated acquisitions of motion-corrupted data. Retrospective correction, by contrast, relies on computational processing during reconstruction, thus preserving scan efficiency [58].
Notably, data-driven retrospective methods do not require sequence modifications or external tracking devices, and consequently have little impact on the clinical workflow; AI has shown promising results for implementation in retrospective motion correction thanks to its ability to increase computational efficiency [60]. An artifact-removal CNN that integrated model-based motion estimation was introduced [19]. Its motion-removal performance was tested on both simulated motion-corrupted images and brain scans affected by intentional head shaking. The network was able to minimize motion artifacts across every image from previously unseen datasets, showing strong potential for efficient post-acquisition motion correction in real-world datasets [19].
Meanwhile, among applications of DL-reconstruction techniques for dynamic cardiac imaging, a DL algorithm was presented that automatically identified and corrected motion-related artifacts in cardiac MRI acquisitions during reconstruction from k-space data [20]. The method specifically spotted k-space lines corrupted by mistakes in electrocardiogram triggering, dysrhythmia, or patient motion. Once the corrupted lines were identified and removed, a recurrent CNN, such as the one developed in [17], was applied to accomplish the image reconstruction task. The method was evaluated on 300 good-quality cine steady-state free precession acquisitions and on synthetic motion-corrupted images from the UK Biobank database. The proposed algorithm markedly reduced the impact of k-space corruption and outperformed other state-of-the-art motion removal methods [20].
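Detection of corrupted k-space lines can be illustrated with a simple energy-outlier rule; this is a hypothetical stand-in for the cited method, and the threshold factor is an assumption.

```python
# Hypothetical illustration (not the cited algorithm): flag k-space
# lines whose total energy is an outlier relative to the median line
# energy, as motion-corrupted lines often carry anomalous signal.

def flag_corrupted_lines(line_energies, factor=3.0):
    """Return indices of lines exceeding `factor` times the median energy."""
    ordered = sorted(line_energies)
    median = ordered[len(ordered) // 2]
    return [i for i, e in enumerate(line_energies) if e > factor * median]

# Synthetic per-line energies with two motion-corrupted outliers:
energies = [1.0, 1.2, 0.9, 1.1, 9.5, 1.0, 1.3, 8.7]
print(flag_corrupted_lines(energies))  # [4, 7]
```

Once flagged, the corrupted lines are discarded and the reconstruction problem reduces to the undersampled case handled by the recurrent CNN described above.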
A recent study proposed a GAN-based model for motion artifact correction specifically in fetal MRI, where motion artifacts are particularly problematic due to fetal and maternal movement [22]. The model’s generator network follows an autoencoder structure that compresses and then reconstructs images, effectively isolating and removing artifacts. Additionally, it incorporates squeeze-excitation blocks, which enhance important features in the image by dynamically adjusting the emphasis on relevant details, helping to reduce noise without losing key anatomical information. Trained on both synthetic and real motion-corrupted images, the model outperformed standard methods, achieving a high structural similarity index measure (93.7%) and a peak signal-to-noise ratio of 33.5 dB, further supporting GANs’ effectiveness for clinical-quality MRI reconstruction [22].
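The squeeze-excitation mechanism mentioned above can be sketched in plain Python; real SE blocks learn their two fully connected layers during training, whereas the weights below are made-up placeholders.

```python
import math

# Schematic squeeze-excitation (SE) block, mechanism only: real SE
# blocks learn the two fully connected layers; weights here are
# placeholders chosen for illustration.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_block(feature_maps, w1, w2):
    """feature_maps: one flat list of activations per channel."""
    # Squeeze: one descriptor per channel (global average pooling)
    squeezed = [sum(ch) / len(ch) for ch in feature_maps]
    # Excitation: bottleneck FC -> ReLU -> FC -> sigmoid per-channel gates
    hidden = [max(0.0, sum(w * s for w, s in zip(row, squeezed))) for row in w1]
    gates = [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in w2]
    # Rescale: emphasize informative channels, damp the rest
    return [[g * v for v in ch] for g, ch in zip(gates, feature_maps)]

w1 = [[1.0, -1.0]]        # 2 channels -> 1 hidden unit (placeholder weights)
w2 = [[2.0], [-2.0]]      # 1 hidden unit -> 2 channel gates (placeholder)
features = [[4.0, 4.0], [1.0, 1.0]]

recalibrated = se_block(features, w1, w2)
print([round(v, 2) for v in recalibrated[0]])  # first channel kept nearly intact
print([round(v, 2) for v in recalibrated[1]])  # second channel suppressed
```

This per-channel gating is what lets the generator emphasize anatomically informative features while damping channels dominated by motion artifact, as described for the fetal MRI model.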
GBCA dose reduction
The use of GBCAs has been generally recognized as safe, except for rare mild adverse reactions, such as hypersensitivity, nausea, and chest pain [61]. However, studies have demonstrated deposition of gadolinium in the dentate nucleus and globus pallidus in the brain, especially in patients who undergo repeated contrast-enhanced MRI acquisitions [62]. Also, the use of GBCAs in patients with deteriorated renal function moderately increases the risk of developing nephrogenic systemic fibrosis, a potentially life-threatening condition [63]. Safety concerns in the use of GBCAs in the medical community have placed emphasis on the development of AI-based methods that could potentially reduce contrast dose while preserving image quality and the information provided by a full-dose contrast scan [64].
In 2018, a DL model was developed to approximate full-dose brain images from pre-contrast and low-dose images [23]. The workflow included acquiring three scans (precontrast T1-weighted, low-dose (10%) postcontrast T1-weighted, and full-dose (100%) postcontrast T1-weighted), preprocessing (coregistration and normalization), and training the DL model. In the U-net-based network, the full-dose scan served as ground truth, while the pre-contrast and low-dose scans were inputs, yielding an output approximating the full-dose MRI. The technique assumed the low-dose MRI to be a scaled, noisier version of the full-dose scan. Expert neuroradiologists rated the synthesized images as comparable to full-dose images in quality and diagnostic utility, with non-inferiority testing confirming image quality, artifact suppression, and contrast enhancement at a GBCA dose ten times lower than the one typically used (Fig. 2) [23]. An upgraded version of the model was released in 2021, incorporating data from three different sites and scanners [24]. This updated model achieved high similarity to the full-dose scans, with a structural similarity index measure of 92% and a peak signal-to-noise ratio of 35.1 dB, compared to 85% and 28.1 dB in the initial study [23, 24].
Fig. 2.

Illustrations of a precontrast, full-dose (100% contrast), low-dose (10% contrast), and synthesized full-dose (10% contrast) T1-weighted brain scan. The figures show the case of a patient with intracranial metastatic disease. Reproduced with permission [23]
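The scaled-and-noisier assumption underlying these models can be illustrated numerically; the amplify-and-smooth "model" below is a toy stand-in for the U-net, and all signal values are invented.

```python
import random

# Toy numeric illustration of the low-dose GBCA principle (not the
# cited U-net): the 10%-dose subtraction image holds a scaled, noisier
# copy of the full-dose enhancement, so a model can amplify and
# denoise it. All values are invented.

random.seed(1)

enhancement = [0.0] * 10 + [50.0] * 5 + [0.0] * 10   # true full-dose uptake
pre = [100.0] * len(enhancement)                      # precontrast baseline
low = [p + 0.1 * e + random.gauss(0.0, 1.0)           # 10% dose + noise
       for p, e in zip(pre, enhancement)]

# "Reconstruction": amplify the low-dose subtraction, then smooth noise
amplified = [(l - p) * 10.0 for l, p in zip(low, pre)]
synthetic = [(amplified[max(i - 1, 0)] + amplified[i] +
              amplified[min(i + 1, len(amplified) - 1)]) / 3.0
             for i in range(len(amplified))]

rms = (sum((s - e) ** 2 for s, e in zip(synthetic, enhancement))
       / len(enhancement)) ** 0.5
print(f"RMS error of synthetic enhancement: {rms:.1f} (signal amplitude 50)")
```

Naive amplification multiplies the noise tenfold along with the signal, which is exactly why a learned denoising network, rather than simple scaling and smoothing, is needed to reach full-dose image quality.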
A similar approach was implemented for MRI of the heart and great vessels in congenital heart disease imaging, proving particularly beneficial for young patients who require repeated acquisitions throughout their lifetime. In a 2021 study [25], low-dose images were acquired using 20% of the full contrast dose, followed immediately by the remaining 80% to obtain a full-dose scan. Virtually-enhanced low-dose images, generated using a variant of the U-net architecture, were comparable to full-dose images in terms of diagnostic confidence, edge sharpness, perceptual contrast, agreement of manual measurements of vessel diameters, signal-to-noise ratio, and contrast-to-noise ratio [25].
Furthermore, a recent study compared the efficacy of GANs and diffusion probabilistic models in enhancing breast MRI images acquired with reduced GBCA doses [26]. Using contrast subtraction images at 5%, 10%, and 25% of the typical contrast dose, the study found that GANs provided superior reconstruction quality at the lowest dose (5%), while diffusion probabilistic models performed better at higher doses (25%). Both models showed clinical promise in reducing GBCA dose requirements while maintaining diagnostic image quality, highlighting the potential of GANs in optimizing contrast enhancement protocols across varying dose levels [26].
Challenges, future directions, and conclusions
Current challenges in AI for CT and MRI include limited generalizability, as many algorithms are tailored to specific scanner models and acquisition parameters [65–67], as well as the lack of sufficient and externally validated datasets, which raises significant concerns about their reliability in clinical practice [68–70]. Expanding training datasets and conducting real-world validation on independent data are critical steps to ensuring robust and reproducible results [71]. Another major barrier is the insufficient explainability of AI models and the opacity of their decision-making processes, which undermine trust and hinder their adoption in clinical workflows [68]. Developing explainable AI algorithms that provide transparent and interpretable outputs is thereby essential to overcoming these challenges and enabling seamless integration into routine practice [72].
To summarize, AI has emerged as a transformative force in CT and MRI, enhancing image quality, optimizing workflows, and improving patient safety [1, 73, 74]. With current state-of-the-art equipment and AI techniques, it will become possible to acquire examinations precisely tailored to the clinical question, interpret them efficiently, and store them free of unnecessary data, holding immense potential for improving healthcare outcomes [75]. However, realizing this potential requires not only technical progress but also dedicated funding to integrate AI-based technologies into clinical practice [74]. At the same time, recent policies such as the European Union's AI Act aim to ensure that these tools meet rigorous safety and ethical standards, creating a solid foundation for their adoption. These advancements are paving the way toward a future where precision, efficiency, and patient-centered care define the next generation of medical imaging.
Acknowledgements
Generative AI software (ChatGPT-4 and 4o, OpenAI, San Francisco, USA) was used to perform a grammar check on the final version of the manuscript.
Abbreviations
- AI: Artificial intelligence
- AiCE: Advanced intelligent Clear-IQ Engine
- CNN: Convolutional neural network
- DL: Deep learning
- FBP: Filtered back projection
- GAN: Generative adversarial network
- GBCA: Gadolinium-based contrast agent
- HIR: Hybrid iterative reconstruction
- ICM: Iodinated contrast media
- IR: Iterative reconstruction
Author contributions
LM drafted the manuscript and performed the literature research. CB and LP designed the study and supervised the work. LM, MA, NB, ADDM, and AG contributed to literature research and data extraction. RB, CB, and LP performed the final review of the manuscript. All authors read and approved the final manuscript.
Funding
This study was supported by the Ricerca Corrente funding program of the Italian Ministry of Health.
Data availability
The two figures included in this manuscript are extracted from previously published studies. Permission for reproduction was obtained through the CCC Copyright Clearance Center.
Declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare no competing interests.
Footnotes
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1. Khalifa M, Albadawy M (2024) AI in diagnostic imaging: revolutionising accuracy and efficiency. Comput Methods Programs Biomed Updat 5:100146. 10.1016/j.cmpbup.2024.100146
- 2. van Leeuwen KG, de Rooij M, Schalekamp S et al (2022) How does artificial intelligence in radiology improve efficiency and health outcomes? Pediatr Radiol 52:2087–2093. 10.1007/s00247-021-05114-8
- 3. European Parliament (2024) EU AI Act: first regulation on artificial intelligence. In: Topics, European Parliament. www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence. Accessed 30 Oct 2024
- 4. Bharadwaj P, Nicola L, Breau-Brunel M et al (2024) Unlocking the value: quantifying the return on investment of hospital artificial intelligence. J Am Coll Radiol. 10.1016/j.jacr.2024.02.034
- 5. Salimi Y, Shiri I, Akhavanallaf A et al (2023) Fully automated accurate patient positioning in computed tomography using anterior–posterior localizer images and a deep neural network: a dual-center study. Eur Radiol 33:3243–3252. 10.1007/s00330-023-09424-3
- 6. Salimi Y, Shiri I, Akhavanallaf A et al (2021) Deep learning-based fully automated Z-axis coverage range definition from scout scans to eliminate overscanning in chest CT imaging. Insights Imaging 12:1–16. 10.1186/s13244-021-01105-3
- 7. Demircioğlu A, Kim MS, Stein MC et al (2021) Automatic scan range delimitation in chest CT using deep learning. Radiol Artif Intell 3:e200211. 10.1148/ryai.2021200211
- 8. Haubold J, Hosch R, Umutlu L et al (2021) Contrast agent dose reduction in computed tomography with deep learning using a conditional generative adversarial network. Eur Radiol 31:6087–6095. 10.1007/s00330-021-07714-2
- 9. Zhong H, Huang Q, Zheng X et al (2024) Generation of virtual monoenergetic images at 40 keV of the upper abdomen and image quality evaluation based on generative adversarial networks. BMC Med Imaging 24:151. 10.1186/s12880-024-01331-3
- 10. Hsieh J, Liu E, Nett B et al (2019) A new era of image reconstruction: TrueFidelity™. Technical white paper on deep learning image reconstruction. GE Healthcare. https://www.gehealthcare.com/-/jssmedia/040dd213fa89463287155151fdb01922.pdf?srsltid=AfmBOoo2bNN9XsR-3sphDJinKXhx4C4FOh0Kvp1XBfNcLMyRvAbvk5aL. Accessed 23 Jan 2025
- 11. Canon Medical Systems (2024) AiCE deep learning reconstruction bringing. https://global.medical.canon/publication/ct/2019WP_AiCE_Deep_Learning. Accessed 18 Oct 2024
- 12. Koninklijke Philips (2021) White paper—Precise Image (AI for significantly lower dose and improved image quality). https://www.philips.com/c-dam/b2bhc/master/resource-catalog/landing/precise-suite/incisive_precise_image.pdf. Accessed 18 Oct 2024
- 13. Kim JH (2017) Apparatus and method for denoising CT images. In: US Pat. 9,852,527. https://patents.google.com/patent/US9852527B2/en. Accessed 18 Oct 2024
- 14. AlgoMedica. PixelShine. https://www.algomedica.com/medical-imaging-resources#white-papers. Accessed 18 Oct 2024
- 15. Lin DJ, Johnson PM, Knoll F, Lui YW (2021) Artificial intelligence for MR image reconstruction: an overview for clinicians. J Magn Reson Imaging 53:1015–1028. 10.1002/jmri.27078
- 16. Schlemper J, Oktay O, Bai W et al (2018) Cardiac MR segmentation from undersampled k-space using deep latent representation learning. In: Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics). Springer, Granada, pp 259–267
- 17. Qin C, Schlemper J, Caballero J et al (2019) Convolutional recurrent neural networks for dynamic MR image reconstruction. IEEE Trans Med Imaging 38:280–290. 10.1109/TMI.2018.2863670
- 18. Tezcan KC, Baumgartner CF, Luechinger R et al (2019) MR image reconstruction using deep density priors. IEEE Trans Med Imaging 38:1633–1642. 10.1109/TMI.2018.2887072
- 19. Haskell MW, Cauley SF, Bilgic B et al (2019) Network accelerated motion estimation and reduction (NAMER): convolutional neural network guided retrospective motion correction using a separable motion model. Magn Reson Med 82:1452–1461. 10.1002/mrm.27771
- 20. Oksuz I, Clough J, Ruijsink B et al (2019) Detection and correction of cardiac MRI motion artefacts during reconstruction from k-space. In: Lecture notes in computer science (including subseries lecture notes in artificial intelligence and lecture notes in bioinformatics). Springer, London, pp 695–703
- 21. Quan TM, Nguyen-Duc T, Jeong WK (2018) Compressed sensing MRI reconstruction using a generative adversarial network with a cyclic loss. IEEE Trans Med Imaging 37:1488–1497. 10.1109/TMI.2018.2820120
- 22. Lim A, Lo J, Wagner MW et al (2023) Motion artifact correction in fetal MRI based on a generative adversarial network method. Biomed Signal Process Control 81:104484. 10.1016/j.bspc.2022.104484
- 23. Gong E, Pauly JM, Wintermark M, Zaharchuk G (2018) Deep learning enables reduced gadolinium dose for contrast-enhanced brain MRI. J Magn Reson Imaging 48:330–340. 10.1002/jmri.25970
- 24. Pasumarthi S, Tamir JI, Christensen S et al (2021) A generic deep learning model for reduced gadolinium dose in contrast-enhanced brain MRI. Magn Reson Med 86:1687–1700. 10.1002/mrm.28808
- 25. Montalt-Tordera J, Quail M, Steeden JA, Muthurangu V (2021) Reducing contrast agent dose in cardiovascular MR angiography with deep learning. J Magn Reson Imaging 54:795–805. 10.1002/jmri.27573
- 26. Müller-Franzes G, Huck L, Bode M et al (2024) Diffusion probabilistic versus generative adversarial models to reduce contrast agent dose in breast MRI. Eur Radiol Exp 8:53. 10.1186/s41747-024-00451-3
- 27. Pesapane F, Cuocolo R, Sardanelli F (2024) The Picasso's skepticism on computer science and the dawn of generative AI: questions after the answers to keep "machines-in-the-loop". Eur Radiol Exp 8:81. 10.1186/s41747-024-00485-7
- 28. Hassan AE, Ringheanu VM, Rabah RR et al (2020) Early experience utilizing artificial intelligence shows significant reduction in transfer times and length of stay in a hub and spoke model. Interv Neuroradiol 26:615–622. 10.1177/1591019920953055
- 29. Hess A, Klein I, Iung B et al (2013) Brain MRI findings in neurologically asymptomatic patients with infective endocarditis. AJNR Am J Neuroradiol 34:1579–1584. 10.3174/ajnr.A3582
- 30. Immonen E, Wong J, Nieminen M et al (2022) The use of deep learning towards dose optimization in low-dose computed tomography: a scoping review. Radiography 28:208–214. 10.1016/j.radi.2021.07.010
- 31. McCollough CH, Leng S (2020) Use of artificial intelligence in computed tomography dose optimisation. Ann ICRP 49:113–125. 10.1177/0146645320940827
- 32. Gupta RV, Kalra MK, Ebrahimian S et al (2022) Complex relationship between artificial intelligence and CT radiation dose. Acad Radiol 29:1709–1719. 10.1016/j.acra.2021.10.024
- 33. Kalra MK, Maher MM, Toth TL et al (2004) Techniques and applications of automatic tube current modulation for CT. Radiology 233:649–657. 10.1148/radiol.2333031150
- 34. Gang Y, Chen X, Li H et al (2021) A comparison between manual and artificial intelligence-based automatic positioning in CT imaging for COVID-19 patients. Eur Radiol 31:6049–6058. 10.1007/s00330-020-07629-4
- 35. Booij R, van Straten M, Wimmer A, Budde RPJ (2021) Automated patient positioning in CT using a 3D camera for body contour detection: accuracy in pediatric patients. Eur Radiol 31:131–138. 10.1007/s00330-020-07097-w
- 36. Cohen SL, Ward TJ, Makhnevich A et al (2020) Retrospective analysis of 1118 outpatient chest CT scans to determine factors associated with excess scan length. Clin Imaging 62:76–80. 10.1016/j.clinimag.2019.11.020
- 37. Colevray M, Tatard-Leitman VM, Gouttard S et al (2019) Convolutional neural network evaluation of over-scanning in lung computed tomography. Diagn Interv Imaging 100:177–183. 10.1016/j.diii.2018.11.001
- 38. Smith-Bindman R, Wang Y, Chu P et al (2019) International variation in radiation dose for computed tomography examinations: prospective cohort study. BMJ. 10.1136/bmj.k4931
- 39. Wang Y, Chu P, Szczykutowicz TP et al (2024) CT acquisition parameter selection in the real world: impacts on radiation dose and variation amongst 155 institutions. Eur Radiol 34:1605–1613. 10.1007/s00330-023-10161-w
- 40. Canon Medical Systems USA (2024) SUREWorkflow. https://us.medical.canon/products/computed-tomography/sureworkflow/. Accessed 17 Oct 2024
- 41. Siemens Belgium (2023) myExam Companion. https://www.siemens-healthineers.com. Accessed 17 Oct 2024
- 42. GE HealthCare (United States) (2021) Revolution Ascend. https://www.gehealthcare.com/products/computed-tomography/revolution-family/revolution-ascend-platform. Accessed 17 Oct 2024
- 43. Valeri F, Bartolucci M, Cantoni E et al (2023) UNet and MobileNet CNN-based model observers for CT protocol optimization: comparative performance evaluation by means of phantom CT images. J Med Imaging 10:S11904. 10.1117/1.jmi.10.s1.s11904
- 44. Haubold J, Hosch R, Jost G et al (2024) AI as a new frontier in contrast media research bridging the gap between contrast media reduction, the contrast-free question and new application discoveries. Invest Radiol 59:206–213. 10.1097/RLI.0000000000001028
- 45. Hinzpeter R, Eberhard M, Gutjahr R et al (2019) CT angiography of the aorta: contrast timing by using a fixed versus a patient-specific trigger delay. Radiology 291:531–538. 10.1148/radiol.2019182223
- 46. Gutjahr R, Fletcher JG, Lee YS et al (2019) Individualized delay for abdominal computed tomography angiography bolus-tracking based on sequential monitoring: increased aortic contrast permits decreased injection rate and lower iodine dose. J Comput Assist Tomogr 43:612–618. 10.1097/RCT.0000000000000874
- 47. Zhong Z, Xie X (2024) Clinical applications of generative artificial intelligence in radiology: image translation, synthesis, and text generation. BJR Artif Intell 1:ubae012. 10.1093/bjrai/ubae012
- 48. Cozzi A, Cè M, De Padova G et al (2023) Deep learning-based versus iterative image reconstruction for unenhanced brain CT: a quantitative comparison of image quality. Tomography 9:1629–1637. 10.3390/tomography9050130
- 49. Koetzier LR, Mastrodicasa D, Szczykutowicz TP et al (2023) Deep learning image reconstruction for CT: technical principles and clinical prospects. Radiology 306:e221257. 10.1148/radiol.221257
- 50. Nam JG, Ahn C, Choi H et al (2021) Correction to: image quality of ultralow-dose chest CT using deep learning techniques: potential superiority of vendor-agnostic post-processing over vendor-specific techniques. Eur Radiol 31:6410. 10.1007/s00330-021-07733-z
- 51. Hata A, Yanagawa M, Yoshida Y et al (2020) Combination of deep learning-based denoising and iterative reconstruction for ultra-low-dose CT of the chest: image quality and Lung-RADS evaluation. AJR Am J Roentgenol 215:1321–1328. 10.2214/AJR.19.22680
- 52. Kelly BS, Judge C, Bollard SM et al (2022) Correction to: Radiology artificial intelligence: a systematic review and evaluation of methods (RAISE). Eur Radiol 32:8054. 10.1007/s00330-022-08832-1
- 53. Najjar R (2023) Redefining radiology: a review of artificial intelligence integration in medical imaging. Diagnostics (Basel) 13:2760. 10.3390/diagnostics13172760
- 54. Zhao Y, Xia X, Togneri R (2019) Applications of deep learning to audio generation. IEEE Circuits Syst Mag 19:19–38. 10.1109/MCAS.2019.2945210
- 55. Pesapane F, Codari M, Sardanelli F (2018) Artificial intelligence in medical imaging: threat or opportunity? Radiologists again at the forefront of innovation in medicine. Eur Radiol Exp 2:1–10. 10.1186/s41747-018-0061-6
- 56. Wang G, Ye JC, Mueller K, Fessler JA (2018) Image reconstruction is a new frontier of machine learning. IEEE Trans Med Imaging 37:1289–1296. 10.1109/TMI.2018.2833635
- 57. Wang S, Xiao T, Liu Q, Zheng H (2021) Deep learning for fast MR imaging: a review for learning reconstruction from incomplete k-space data. Biomed Signal Process Control 68:102579. 10.1016/j.bspc.2021.102579
- 58. Zaitsev M, Maclaren J, Herbst M (2015) Motion artifacts in MRI: a complex problem with many partial solutions. J Magn Reson Imaging 42:887–901. 10.1002/jmri.24850
- 59. Maclaren J, Armstrong BSR, Barrows RT et al (2013) Correction: measurement and correction of microscopic head motion during magnetic resonance imaging of the brain. PLoS One 8:e48088. 10.1371/annotation/a29733ae-6317-42ee-92c0-a49542e1b7c8
- 60. Duffy BA, Zhao L, Sepehrband F et al (2021) Retrospective motion artifact correction of structural MRI images using deep learning improves the quality of cortical surface reconstructions. Neuroimage 230:117756. 10.1016/j.neuroimage.2021.117756
- 61. McDonald JS, Larson NB, Schmitz JJ et al (2023) Acute adverse events after iodinated contrast agent administration of 359,977 injections: a single-center retrospective study. Mayo Clin Proc 98:1820–1830. 10.1016/j.mayocp.2023.02.032
- 62. Kanda T, Matsuda M, Oba H et al (2015) Gadolinium deposition after contrast-enhanced MR imaging. Radiology 277:924–925. 10.1148/radiol.2015150697
- 63. Khawaja AZ, Cassidy DB, Al Shakarchi J et al (2015) Revisiting the risks of MRI with gadolinium-based contrast agents—review of literature and guidelines. Insights Imaging 6:553–558. 10.1007/s13244-015-0420-2
- 64. Luo H, Zhang T, Gong NJ et al (2021) Deep learning-based methods may minimize GBCA dosage in brain MRI. Eur Radiol 31:6419–6428. 10.1007/s00330-021-07848-3
- 65. Topff L, Groot Lipman KBW, Guffens F et al (2023) Is the generalizability of a developed artificial intelligence algorithm for COVID-19 on chest CT sufficient for clinical use? Results from the International Consortium for COVID-19 Imaging AI (ICOVAI). Eur Radiol 33:4249–4258. 10.1007/s00330-022-09303-3
- 66. Abbasi S, Lan H, Choupan J et al (2024) Deep learning for the harmonization of structural MRI scans: a survey. Biomed Eng Online 23:90. 10.1186/s12938-024-01280-6
- 67. Galbusera F, Cina A (2024) Image annotation and curation in radiology: an overview for machine learning practitioners. Eur Radiol Exp 8:11. 10.1186/s41747-023-00408-y
- 68. Rundo L, Militello C (2024) Image biomarkers and explainable AI: handcrafted features versus deep learned features. Eur Radiol Exp 8:130. 10.1186/s41747-024-00529-y
- 69. Marti-Bonmati L, Koh DM, Riklund K et al (2022) Considerations for artificial intelligence clinical impact in oncologic imaging: an AI4HI position paper. Insights Imaging 13:89. 10.1186/s13244-022-01220-9
- 70. Choi Y, Yu W, Nagarajan MB et al (2023) Translating AI to clinical practice: overcoming data shift with explainability. Radiographics 43:e220105. 10.1148/rg.220105
- 71. Gitto S, Serpi F, Albano D et al (2024) AI applications in musculoskeletal imaging: a narrative review. Eur Radiol Exp 8:22. 10.1186/s41747-024-00422-8
- 72. Kim S, Park HW, Park SH (2024) A review of deep learning-based reconstruction methods for accelerated MRI using spatiotemporal and multi-contrast redundancies. Biomed Eng Lett. 10.1007/s13534-024-00425-9
- 73. Paudyal R, Shah AD, Akin O et al (2023) Artificial intelligence in CT and MR imaging for oncological applications. Cancers (Basel) 15:2573. 10.3390/cancers15092573
- 74. Codari M, Melazzini L, Morozov SP et al (2019) Impact of artificial intelligence on radiology: a EuroAIM survey among members of the European Society of Radiology. Insights Imaging 10:240. 10.1186/s13244-019-0798-3
- 75. Ghebrehiwet I, Zaki N, Damseh R, Mohamad MS (2024) Revolutionizing personalized medicine with generative AI: a systematic review. Artif Intell Rev 57:1–41. 10.1007/s10462-024-10768-5