Abstract
Background
Cardiovascular magnetic resonance (CMR) is an important imaging modality for the assessment of heart disease; however, limitations of CMR include long exam times and high complexity compared to other cardiac imaging modalities. Recent advancements in artificial intelligence (AI) technology have shown great potential to address many CMR limitations. While the developments are remarkable, translation of AI-based methods into real-world CMR clinical practice remains at a nascent stage and much work lies ahead to realize the full potential of AI for CMR.
Methods
Herein we review recent cutting-edge and representative examples demonstrating how AI can advance CMR in areas such as exam planning, accelerated image reconstruction, post-processing, quality control, classification and diagnosis.
Results
These advances can be applied to speed up and simplify essentially every CMR application, including cine, strain, late gadolinium enhancement, parametric mapping, 3D whole-heart, flow, perfusion, and others. AI is a unique technology based on training models using data. Beyond reviewing the literature, this paper discusses important AI-specific issues in the context of CMR, including (1) properties and characteristics of datasets for training and validation, (2) previously published guidelines for reporting CMR AI research, (3) considerations around clinical deployment, (4) responsibilities of clinicians and the need for multi-disciplinary teams in the development and deployment of AI in CMR, (5) industry considerations, and (6) regulatory perspectives.
Conclusions
Understanding and consideration of all these factors will contribute to the effective and ethical deployment of AI to improve clinical CMR.
Keywords: Cardiovascular magnetic resonance, Artificial intelligence, Deep learning, Clinical translation, Review, Roadmap
1. Background
Cardiovascular magnetic resonance (CMR) is the most comprehensive non-invasive technique for assessing cardiac structure, function, perfusion, tissue characterization, and cardiovascular hemodynamics, providing high-quality data for diagnosing heart disease and predicting outcomes. CMR is widely used in clinical practice, but its efficiency and accessibility are hindered by the complexity of performing CMR studies, long exam times, high cost, and the requirement for manual image analysis by experts. Artificial intelligence (AI), particularly deep learning (DL), has demonstrated remarkable progress recently and holds great potential to overcome many of the limitations of CMR. However, despite the large volume of research studies related to this topic, translation of AI methods into the real-world clinical CMR workflow remains challenging. In this article, we use the terms AI, DL, and machine learning (ML), which have distinct meanings that have been previously defined [1]. Briefly, and in practical terms for this paper, AI is a broad umbrella term that refers to the ability of computers to mimic human intelligence, DL refers to deep neural networks that learn from large datasets, and we use ML to refer to shallow learning algorithms, such as support vector machines and decision trees.
A number of papers have reviewed research in this field. Some focused on AI methods for specific CMR sequences, such as parametric mapping [2], perfusion [3], fingerprinting [4], and late gadolinium enhancement (LGE) [5]. Other review papers summarized the state-of-the-art for specific tasks, such as reconstruction [6], segmentation [7], [8], motion and deformation analysis [9], and outcome prediction [10]. Review papers summarizing AI applications for specific diseases, such as myocardial infarction (MI) [11] and dilated cardiomyopathy [12], have also been written. Additionally, there are publications that provide an overview of AI basics with exemplar CMR applications [13], [14] or in the context of multi-modality imaging [15]. Most of these prior articles focus on reviewing evidence for a specific aspect at the research and development stage, whereas a roadmap toward clinical adoption is needed but has been lacking.
The intention of this article is not to be overly technical but to provide an overarching introduction to cutting-edge and illustrative examples for the reader hoping to understand the general concepts and clinical applications in this rapidly growing area. Specifically, we introduce image reconstruction, post-processing, quality control (QC), classification, and prognostication tasks that can be accelerated, improved, and/or automated with AI. We then review the roles and applications of AI in common CMR sequences. Beyond reviewing the literature, we brought together CMR clinicians, AI scientists, CMR physicists, industry partners, and experts in regulatory sciences to envision a roadmap to clinical translation of AI CMR methods. The authors convened a meeting of the Society for Cardiovascular Magnetic Resonance (SCMR) AI Special Interest Group during the 2023 SCMR annual scientific sessions (San Diego, California, 2023), where more than 50 people gathered and provided input. The authors hope that this article can summarize recent developments in AI applied to CMR and suggest approaches to accelerate the adoption of AI in clinical CMR to gain the advantages offered by AI and to do so in a manner that is fair, responsible, and equitable. A graphical abstract is provided in Fig. 1.
2. Review of literature—current AI CMR methods and applications
To conduct this narrative review, we performed comprehensive PubMed searches using combinations of the keywords [“artificial intelligence”, “machine learning”, “deep learning”] and the keywords [“CMR”, “cardiac MRI”, “cardiovascular MRI”] within the 5-year period from 2019 to 2023. Articles primarily studying other modalities or organs were manually excluded, and the search was supplemented by manually identified papers, leading to an aggregate of 751 original research articles that were examined and contributed to our synopsis. The number of AI CMR publications per year is increasing rapidly, reflecting the growing interest and activity in this field (Fig. 2A). All research articles were categorized to show the spectrum of these studies in terms of AI tasks (Fig. 2B) and CMR sequences (Fig. 2C). The AI CMR literature (N = 751 articles) was further visualized in a bipartite graph, which revealed the broad translation of AI approaches and tasks to various CMR sequences (Fig. 3). Based on this framework, our review of the literature is presented in the following sub-sections.
2.1. AI tasks for CMR
AI can significantly impact the entire CMR workflow, including image reconstruction, image analysis, QC, and diagnosis/prognosis. Recent research advancements promise to speed up scan protocols, automate CMR planning and image processing, improve image quality, and support medical diagnosis and prognostication.
2.1.1. Acceleration and reconstruction
Due to the sequential acquisition of data samples in k-space and electrocardiogram (ECG) synchronization, CMR image acquisition is inherently slow. One approach to accelerate the acquisition is to undersample k-space. However, undersampling introduces aliasing artifacts when the image is reconstructed. A variety of data acquisition and image reconstruction techniques have been developed to produce images of acceptable quality from undersampled data. These techniques exploit coil sensitivity profiles (parallel imaging) [16], sparsity of data in a transform domain (compressed sensing) [17], [18], [19], [20], and low-rank properties in spatial and/or temporal dimensions [21]. These approaches, however, come at the cost of a high computational burden and long reconstruction times, and they depend on the choice of reconstruction parameters, which might not perfectly model the spatiotemporal complexity of CMR imaging [22], [23], [24]. DL approaches have recently been proposed to learn up-front the non-linear optimization processes employed in CMR reconstruction, making use of large datasets to learn the key reconstruction parameters and priors. These methods differ in terms of their intended tasks and include image denoising using image-to-image regression [25]; direct mapping from acquired k-space to the reconstructed image [26], [27]; physics-based k-space learning or unrolled optimizations [28], [29], [30]; and combinations of these. An alternative approach for accelerating image acquisition is DL-based super-resolution, where images are acquired at a low resolution, with or without undersampling, and retrospectively reconstructed to the high-resolution target [31]. Examples for various CMR sequences are illustrated in Fig. 4.
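To make the unrolled, physics-based formulation concrete, the following minimal, single-coil sketch (illustrative only and not drawn from any cited method; class and function names are our own) alternates a small learned denoiser with a k-space data-consistency step that re-imposes the acquired samples.

```python
# Minimal, single-coil sketch of an unrolled reconstruction: a small learned denoiser
# alternates with a k-space data-consistency step that re-imposes the acquired samples.
# Class/function names and the architecture are illustrative, not from any cited method.
import torch
import torch.nn as nn

class DenoiserBlock(nn.Module):
    """Small CNN acting as the learned prior on a 2-channel (real/imag) image."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 2, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)  # residual refinement

def data_consistency(image, kspace_acquired, mask):
    """Overwrite estimated k-space samples with the acquired ones where sampled."""
    x = torch.complex(image[:, 0], image[:, 1])
    k = torch.fft.fft2(x, norm="ortho")
    k = torch.where(mask.bool(), kspace_acquired, k)
    x = torch.fft.ifft2(k, norm="ortho")
    return torch.stack([x.real, x.imag], dim=1)

class UnrolledRecon(nn.Module):
    def __init__(self, n_iters=5):
        super().__init__()
        self.blocks = nn.ModuleList([DenoiserBlock() for _ in range(n_iters)])

    def forward(self, kspace_acquired, mask):
        x = torch.fft.ifft2(kspace_acquired, norm="ortho")   # zero-filled starting point
        image = torch.stack([x.real, x.imag], dim=1)
        for block in self.blocks:
            image = block(image)
            image = data_consistency(image, kspace_acquired, mask)
        return image
```

Published methods typically extend this basic pattern with multi-coil sensitivity operators, temporal modeling, and learned data-consistency weights.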
2.1.2. Segmentation
Image segmentation, which partitions images into anatomically meaningful regions, represents the earliest and most mature application of DL in CMR and is a crucial step in numerous post-processing applications, including visualization and quantification. Most commonly, segmentation is used to define the epicardial and endocardial borders of the left ventricular (LV) myocardium to calculate myocardial mass and function on cine images [36], [37], [38], [39], to quantify myocardial tissue properties on parametric T1- and T2-mapping [40], [41], and to calculate scar volume on LGE imaging [42], [43], [44] (Fig. 5A-C). DL has also been employed to segment the right ventricle (RV) to assess RV function [37], [45], the left and right atria to calculate volumes and surface areas [37], [46], and the great arteries to measure flow velocity [47] and aortic distensibility [48]. A popular DL architecture for segmentation tasks is the U-Net [49], an encoder-decoder convolutional neural network (CNN) with skip connections; variants such as the 3D U-Net [50] and nnU-Net [51] have also been employed. More recently, Vision Transformers [52], [53], [54], which use an alternative architecture to the CNN, have demonstrated potentially superior performance in CMR segmentation.
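As a simple illustration of how such segmentations feed downstream quantification, the sketch below (hypothetical function names; a myocardial density of 1.05 g/mL is a commonly assumed value) derives LV volumes, ejection fraction, and mass from short-axis masks by voxel counting.

```python
# Hypothetical post-segmentation quantification: LV volumes, ejection fraction, and mass
# derived from short-axis endocardial/epicardial masks by voxel counting.
import numpy as np

MYO_DENSITY_G_PER_ML = 1.05  # commonly assumed myocardial density

def volume_ml(mask, voxel_size_mm):
    """mask: boolean array (slices, H, W); voxel_size_mm: (dz, dy, dx)."""
    return mask.sum() * np.prod(voxel_size_mm) / 1000.0   # mm^3 -> mL

def lv_metrics(endo_ed, endo_es, epi_ed, voxel_size_mm):
    edv = volume_ml(endo_ed, voxel_size_mm)                # end-diastolic volume
    esv = volume_ml(endo_es, voxel_size_mm)                # end-systolic volume
    ef = 100.0 * (edv - esv) / edv                         # ejection fraction (%)
    myo_ml = volume_ml(epi_ed & ~endo_ed, voxel_size_mm)   # myocardial shell
    return {"EDV_ml": edv, "ESV_ml": esv, "EF_pct": ef,
            "mass_g": myo_ml * MYO_DENSITY_G_PER_ML}
```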
2.1.3. Image registration
Image registration is a process that aligns two or more images of the same object. Cardiac image registration is a complex problem due to non-rigid motion, mixed motion patterns caused by both intrinsic heart motion and breathing, limited anatomical landmarks, and variations in spatial and temporal resolutions and contrast between images. DL-based registration methods have emerged as a promising alternative to conventional methods due to their ability to handle complex image features and adapt to different image contrasts and registration scenarios [57]. DL image registration for CMR can utilize supervised learning [58], [59] (Fig. 6A). It can also utilize unsupervised learning with generative models [60], [61], guided by the similarity in intensities between images [62], [63] (Fig. 6B). Image registration is a fundamental step in cardiac image processing, and the result can facilitate or be combined with further analysis, such as image segmentation [58], motion correction [64], and motion estimation [65]. Compared to traditional methods, DL registration methods are typically much more efficient at inference, offering the possibility of real-time guidance and inline processing.
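A minimal sketch of the unsupervised formulation is shown below, assuming a separate network predicts a dense 2D displacement field; the loss pairs an intensity-similarity term with a smoothness penalty, in the spirit of (but not identical to) the cited methods.

```python
# Sketch of an unsupervised registration objective, assuming a separate network predicts
# a dense 2D displacement field (in pixels). Names and weighting are illustrative.
import torch
import torch.nn.functional as F

def warp(moving, flow):
    """Warp `moving` (B,1,H,W) with displacement `flow` (B,2,H,W): channel 0 = x, 1 = y."""
    b, _, h, w = moving.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack([xs, ys], dim=0).float().to(moving.device)   # (2,H,W)
    new = grid.unsqueeze(0) + flow
    new_x = 2.0 * new[:, 0] / (w - 1) - 1.0                         # normalize to [-1,1]
    new_y = 2.0 * new[:, 1] / (h - 1) - 1.0
    return F.grid_sample(moving, torch.stack([new_x, new_y], dim=-1),
                         align_corners=True)

def registration_loss(fixed, moving, flow, smooth_weight=0.01):
    warped = warp(moving, flow)
    similarity = F.mse_loss(warped, fixed)                  # intensity-based guidance
    dx = flow[:, :, :, 1:] - flow[:, :, :, :-1]
    dy = flow[:, :, 1:, :] - flow[:, :, :-1, :]
    smoothness = dx.abs().mean() + dy.abs().mean()          # regularize the deformation
    return similarity + smooth_weight * smoothness
```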
2.1.4. Landmark detection
Localization of landmarks and anatomical structures is a common pre-processing step for CMR view planning and image analysis. DL has been applied to detect key landmarks and use them to prescribe imaging planes [66]. This can automate CMR pilot imaging and view planning [67], reduce human intervention in imaging, and reduce the overall exam time. Landmark localization in CMR post-processing commonly includes the detection of RV insertion points on short-axis images to assign American Heart Association (AHA) segments [36], and the tracking of the mitral valve plane and apical points on long-axis images [56] (Fig. 5C and D). These networks usually take one of two forms: either an image-to-image translation model that outputs a probability map of landmark locations [56], [68], or an image-to-vector regression model that outputs the predicted coordinates of the landmarks on the images [69].
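The two forms can be linked by a differentiable soft-argmax, sketched below for illustration (tensor shapes and names are assumptions, not taken from the cited works), which converts a predicted probability map into explicit landmark coordinates.

```python
# Illustrative link between the two forms: a differentiable soft-argmax converts a
# predicted probability (heat) map into explicit landmark coordinates.
import torch

def soft_argmax_2d(heatmap):
    """heatmap: (B, K, H, W) unnormalized scores -> (B, K, 2) (x, y) coordinates."""
    b, k, h, w = heatmap.shape
    probs = torch.softmax(heatmap.view(b, k, -1), dim=-1).view(b, k, h, w)
    ys = torch.arange(h, dtype=probs.dtype, device=probs.device)
    xs = torch.arange(w, dtype=probs.dtype, device=probs.device)
    y = (probs.sum(dim=3) * ys).sum(dim=2)   # expected row index
    x = (probs.sum(dim=2) * xs).sum(dim=2)   # expected column index
    return torch.stack([x, y], dim=-1)
```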
2.1.5. Quality control
QC of images should occur before or be integrated into the image analysis pipeline, as insufficient image quality can result in image segmentation and other errors and may compromise diagnostic and prognostic accuracy. Also, QC applied during a CMR exam could facilitate image reacquisition to replace low-quality images [70]. Currently, in clinical practice, image quality assessment is performed visually; however, this practice will likely change in the context of fully-automated DL-based image analysis pipelines for CMR.
Pre-analysis QC can address multiple quality issues. Several studies have developed QC methods to detect motion-related artifacts, such as those from mis-triggering, arrhythmias, and inconsistent breath-holding, using a variety of AI methods [71], [72], [73], [74], [75], [76], [77], [78]. Suboptimal image contrast can also be identified [70], [71], [72]. In addition, DL has been used to detect improper slice orientation, such as the presence of the LV outflow tract in a four-chamber view, foreshortening of the apex, and the absence of valves in the three-chamber view [73], [76]. AI methods can also detect incomplete coverage of the LV in short-axis stacks using methods such as Fisher-discriminative 3D CNNs [79] and hybrid decision forests [71], [72]. It has also been shown that a CNN can mimic the image quality assessment of an expert using a numerical quality scale [70].
Post-analysis QC has mainly focused on the evaluation of the output of segmentation models. QC-driven segmentation frameworks attempt to infer well-known validation metrics, such as the Dice score [38], [80] or the Hausdorff distance, or uncertainty estimates, by using ensemble DL models [38], [78], multi-task learning [77], a multi-view network [81], or multi-level two-dimensional (2D) and three-dimensional (3D) DL-based methods [82]. Other studies use a QC framework to detect CMR segmentation failures using descriptors in a random forest classifier [83], using the approach of Reverse Classification Accuracy [84], or combining uncertainty maps with DL models [85], [86]. Additional post-analysis QC can be performed by detecting abnormalities in the computed LV/RV volumes and strain curves using a support vector machine [73].
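For reference, the two segmentation-quality metrics most often targeted by these QC frameworks are sketched below; in deployment they are estimated without ground truth (e.g., from ensembles or uncertainty maps), whereas this illustrative code computes them directly against a reference mask.

```python
# The two segmentation-QC metrics named above, computed directly against a reference mask.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_score(a, b):
    """a, b: boolean masks of the same shape."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-8)

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between the foreground point sets of two masks."""
    pa, pb = np.argwhere(a), np.argwhere(b)
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])
```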
2.1.6. Classification (diagnosis)
Classification and regression AI models allow for automated diagnosis, prognosis, therapy response prediction, and risk stratification. These algorithms may take parameters derived from image pre-processing and quantification steps, or directly process images or chamber volumes, automatically extracting pertinent features to make predictions. An exemplar DL model using quantitative displacement CMR to predict survival is given in Fig. 7A. Multi-sequence CMR contains complementary information regarding myocardial tissue properties and heart function. Many AI models have been developed utilizing multiple sequences for diagnosis and outcome prediction for a variety of heart conditions; a non-exhaustive list of studies is summarized in Table 1. Additionally, AI has been used to identify ECG phenotypes, such as in hypertrophic cardiomyopathy (HCM) [87], [88] and MI [89], and these combined with CMR and clinical variables, such as patient characteristics and laboratory data, can improve diagnostic accuracy and confidence [90].
Table 1. Non-exhaustive list of AI studies utilizing multiple CMR sequences for diagnosis and outcome prediction.
Study | Cine | T1 map | T2 map | Perfusion | ECV | LGE | Disease or conditions
---|---|---|---|---|---|---|---
Khozeimeh et al. [91] | ✓ | ✓ | ✓ | ✓ | CAD | ||
Pezel et al. [90] | ✓ | ✓ | CAD | ||||
Shu et al. [92] | ✓ | ✓ | DCM | ||||
Shi et al. [93] | ✓ | ✓ | HCM | ||||
Agibetov et al. [94] | ✓ | ✓ | ✓ | Cardiac amyloidosis | |||
Martini et al. [95] | ✓ | ✓ | Cardiac amyloidosis | ||||
Sharifrazi et al. [96] | ✓ | ✓ | ✓ | ✓ | Myocarditis | ||
Moravvej et al. [97] | ✓ | ✓ | ✓ | ✓ | ✓ | Myocarditis | |
Ghareeb et al. [98] | ✓ | ✓ | ✓ | ✓ | Myocarditis | ||
Eichhorn et al. [99] | ✓ | ✓ | ✓ | ✓ | ✓ | Myocarditis | |
Cau et al. [100] | ✓ | ✓ | ✓ | ✓ | Takotsubo | ||
Mannil et al. [101] | ✓ | ✓ | Takotsubo | ||||
Dykstra et al. [102] | ✓ | ✓ | Atrial fibrillation | ||||
Cornhill et al. [103] | ✓ | ✓ | HF hospitalization | ||||
Bivona et al. [104] | ✓ | ✓ | Resynchronization | ||||
Kwak et al. [105] | ✓ | ✓ | ✓ | Aortic stenosis | |||
Lu et al. [106] | ✓ | ✓ | Sarcoidosis | ||||
Okada et al. [107] | ✓ | ✓ | Sarcoidosis |
ECV extracellular volume, LGE late gadolinium enhancement, CAD coronary artery disease, DCM dilated cardiomyopathy, HCM hypertrophic cardiomyopathy, HF heart failure.
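The studies in Table 1 broadly follow the "derived parameters in, diagnosis out" pattern described above. A hedged, toy illustration of that pattern is given below; the synthetic features, labels, and model choice are placeholders and do not correspond to any cited study.

```python
# Toy illustration: imaging-derived measurements plus clinical variables feed a
# conventional classifier. Features and labels here are synthetic placeholders only.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))      # stand-in for [LVEF, LV mass, native T1, ECV, age]
y = rng.integers(0, 2, size=200)   # stand-in disease vs. control labels

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC: {auc.mean():.2f} +/- {auc.std():.2f}")
```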
Further, the ability of AI techniques to handle high-dimensional data has led to the development of radiomics, a novel field in which digital medical images are converted into mineable high-dimensional data by extracting a large number of quantitative features [109], [110], [111], [112] (Fig. 7B). Within the field of CMR radiomics, texture analysis enables the characterization and classification of medical images based on underlying tissue inhomogeneities [11], [93], [113], [114].
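As a simplified example of a texture feature of the kind used in CMR radiomics, the sketch below computes gray-level co-occurrence matrix (GLCM) statistics over a myocardial mask using scikit-image; the preprocessing and feature choice are illustrative rather than a validated radiomics pipeline.

```python
# Simplified CMR radiomics texture feature: GLCM statistics over a myocardial mask.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def myocardial_texture_features(image, myo_mask, levels=32):
    """image: 2D grayscale array; myo_mask: boolean myocardium mask."""
    img = image.astype(float)
    img = np.clip((img - img.min()) / (np.ptp(img) + 1e-8) * (levels - 1), 0, levels - 1)
    img = img.astype(np.uint8)
    img[~myo_mask] = 0                                     # crude background suppression
    glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
```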
2.2. AI applications in CMR sequences
The previously described AI methods have been applied to a wide range of CMR sequences (Fig. 3). In this section, we summarize several of these studies.
2.2.1. Cardiac cine imaging
CMR cine imaging provides accurate and reproducible measurements of cardiac anatomy and function. Cine images are routinely collected using ECG-gated segmented acquisitions during multiple breath-holds. For patients who exhibit difficulty with breath-holding or with an irregular cardiac rhythm, real-time cine imaging can be acquired, albeit with reduced spatial and temporal resolution.
Accelerated acquisition and advanced reconstruction methods can increase temporal and spatial resolution or reduce the scan time for both segmented and real-time cine imaging. Diverse DL methods have been applied in this context, including methods using image-domain-based artifact reduction [25], [29], [33], [115], [116], hybrid techniques with direct mapping from k-space to the image domain [117], [118], [119], [120], and super-resolution reconstruction from low-resolution inputs [121]. Some proposed methods have achieved 12-fold and 13-fold undersampling for accelerated acquisition. Many of the studies to date were limited to retrospective undersampling of the data (usually using a single coil) [29], [33], [115], [117], [120], healthy subjects [29], [115], [116], and image quality evaluation using non-clinical quantitative metrics only [29], [115]. Two of the aforementioned studies involved testing in prospectively undersampled data from patients [33], [118].
Detection of landmarks and anatomical structures is an important pre-processing step for the automated analysis of cine imaging. For example, localizing the structure of interest (e.g., the LV) can improve the confidence and accuracy of anatomical segmentation [50]. Detecting the mitral valve plane and the LV apex on the long-axis cine image determines the orientation and length of the ventricle [56]. Tracking these landmarks through time on long-axis cine images provides key metrics of systolic and diastolic function [69]. Segmentation of the LV allows automated extraction of anatomical parameters, such as LV myocardial mass and wall thickness, and functional parameters, such as LV ejection fraction (LVEF). Segmentation of the RV [45], left atrium (LA) [122], and great arteries [123] on cine images has also been studied, with similar tasks of quantifying anatomical and functional parameters for those chambers. Since cine is the most intensively studied sequence in the area of AI for CMR, automated reporting on cine imaging may soon become available as an inline method on scanner platforms [56].
A number of studies have used DL-based automated segmentation of cine images for outcome prediction. For example, one study showed that DL-based segmentation to compute LVEF is as effective as conventional analysis of cine images for predicting major adverse cardiac events in MI patients [124]. Another study evaluated the performance of a DL-based multi-source model (trained using clinical and extracted motion features) for survival prediction and risk stratification in patients with heart failure with reduced ejection fraction (HFrEF). The proposed model could independently predict the outcome of patients with HFrEF better than conventional methods [125]. Another example used DL-based segmentation of cine images along with motion tracking of the RV to improve survival prognostication in patients with pulmonary hypertension [126]. Additionally, DL-based automated segmentation of cine images has been shown to be successful for prognostication in patients with tetralogy of Fallot [125], [127], [128].
In the area of diagnosis, DL was used to segment the LV and extract motion features from cine CMR to detect chronic MI [129]. Also, recently multilinear subspace learning was employed to identify and learn diagnostic features in patients with suspected pulmonary arterial hypertension (PAH) without the need for manual segmentation [130]. In this study, learned features were visualized in feature maps, which confirmed some known diagnostic features and identified other, potentially new, diagnostic features for PAH.
2.2.2. Strain
Strain and strain-rate imaging quantify deformation of the myocardium and provide a quantitative assessment of myocardial function with greater sensitivity to LV dysfunction than LVEF. DL-based automatic or semi-automatic segmentation of the LV, RV, and LA facilitates the efficient use of feature tracking or other methods to compute strain for each of these chambers [100], [131], [132], [133]. Indeed, DL-facilitated fully-automated feature-tracking strain from LV cine images has achieved prognostic accuracy equivalent to that of manual segmentation for acute MI, as shown in a study of over 1000 patients [131]. Similarly, in light-chain amyloidosis, DL-facilitated LA strain was shown to provide independent and additive prognostic value for all-cause mortality [133]. In addition to segmentation, DL can further improve diagnosis based on strain. A fully-connected neural network taking strain features as the input has outperformed conventional methods in discriminating HCM from its mimics, namely cardiac amyloidosis, Anderson-Fabry disease, and hypertensive cardiomyopathy [134]. In addition to feature-tracking analysis of cine CMR, unsupervised DL has been explored to compute displacement and strain from cine CMR, leading to the DeepStrain method [63] (Fig. 8A). Using supervised learning, displacement encoding with stimulated echoes (DENSE) data have been employed to develop StrainNet [135], and velocity-encoded data have been used to develop synthetic strain [136], both of which can be applied for strain analysis of cine CMR (Fig. 8B).
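For orientation, once a dense displacement field is available (whether from DL-based feature tracking, DENSE, or tagging), strain follows from its spatial gradients; a minimal 2D Green-Lagrange sketch is shown below (axis conventions and function names are assumptions).

```python
# Minimal 2D Green-Lagrange strain from a dense displacement field (rows = y, cols = x).
import numpy as np

def green_lagrange_strain(ux, uy, spacing=(1.0, 1.0)):
    """ux, uy: displacement components (H, W); spacing: pixel size along (rows, cols)."""
    dux_dy, dux_dx = np.gradient(ux, *spacing)
    duy_dy, duy_dx = np.gradient(uy, *spacing)
    # Deformation gradient F = I + grad(u); strain E = 0.5 * (F^T F - I)
    F = np.array([[1 + dux_dx, dux_dy],
                  [duy_dx, 1 + duy_dy]])                   # shape (2, 2, H, W)
    E = 0.5 * (np.einsum("ji...,jk...->ik...", F, F) - np.eye(2)[:, :, None, None])
    return E   # E[0, 0] = Exx, E[1, 1] = Eyy, E[0, 1] = Exy at each pixel
```

Circumferential and radial strain are then obtained by projecting this tensor onto directions defined relative to the LV center.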
DL for strain also extends beyond the analysis of cine imaging to strain-dedicated CMR sequences. For example, DL-based methods to analyze tagged CMR images have been shown to be superior to harmonic phase analysis with regard to tag tracking accuracy and inference efficiency [62]. For the analysis of DENSE CMR, DL for LV segmentation and phase unwrapping provides fully-automated, highly accurate and reproducible results for both global and segmental circumferential strain [36].
2.2.3. Late gadolinium enhancement
LGE is an established and validated CMR technique to distinguish myocardial fibrosis and injury from normal myocardium [137], [138]. Routine clinical use involves 2D acquisitions and is limited by spatial resolution, long scan times, and the requirement for breath-holds [139]. Novel frameworks enabling 3D acquisitions have been proposed and DL has been applied to accelerate LGE reconstruction [140]. DL-based noise reduction has also been applied to improve the image quality of fast, low-resolution LGE images [141].
In current clinical practice, LGE scar reporting usually relies on visual assessment by experienced clinicians. In research, scar quantification is currently based on manual delineation of myocardial borders and regions of enhancement, followed by thresholding techniques. For automated quantification, landmark localization [56] and LV segmentation [142], [143] applied to LGE images can be performed with DL. Subsequently, scar/fibrosis segmentation is essential for quantifying scar size and volume fraction [7], [143], [144], [145]. Automated segmentation has been applied to LGE for LV myocardium and ischemic scar segmentation [145], [146], while LGE scar segmentation and quantification for non-ischemic heart disease remain challenging due to the complex patterns of myocardial fibrosis and variations in gadolinium kinetics.
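The conventional thresholding step that such pipelines automate can be summarized as below; the "n-SD above remote myocardium" rule is shown for illustration, with full-width-at-half-maximum being a common alternative.

```python
# Threshold-based scar quantification: enhancement above a cutoff defined from remote
# (normal) myocardium, expressed as a fraction of the myocardial area or volume.
import numpy as np

def scar_fraction(lge_image, myo_mask, remote_mask, n_sd=5.0):
    """Returns the scar mask and the scar volume fraction (%) within the myocardium."""
    remote = lge_image[remote_mask]
    cutoff = remote.mean() + n_sd * remote.std()
    scar_mask = myo_mask & (lge_image > cutoff)
    return scar_mask, 100.0 * scar_mask.sum() / myo_mask.sum()
```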
Quantification and segmentation of LGE images can be assisted by routine cine CMR, as cine provides complementary features and better-defined myocardial borders compared to LGE. For example, registration of LGE and cine CMR is beneficial for improving localization and quantification of infarcted regions [147]. Further, joint approaches for image registration and segmentation of LGE and cine offer better performance than segmentation of LGE images alone [148].
2.2.4. Parametric T1 and T2 mapping
Parametric quantitative mapping measures the relaxation times of tissue protons and reflects physical tissue composition [149]. These methods typically require the acquisition of a series of images sampled at various inversion times or echo times and/or utilize preparation modules to develop contrast. Fitting the series of images to a corresponding signal model, in a pixel-wise manner, enables the generation of a quantitative map of T1 or T2 tissue relaxation expressed in units of time (e.g., milliseconds). Current clinical protocols entail relatively lengthy 2D acquisitions with moderate spatial resolution that require breath-holding. In the case of accelerated acquisitions, conventional model-fitting reconstruction techniques are susceptible to aliasing artifacts and noise [150].
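As an illustration of the pixel-wise fitting described above, the sketch below fits a three-parameter inversion-recovery model S(TI) = A - B*exp(-TI/T1*) and applies the commonly used Look-Locker correction T1 = T1*(B/A - 1); practical details such as polarity restoration of magnitude data and motion correction are deliberately omitted.

```python
# Illustrative pixel-wise T1 fitting of an inversion-recovery signal model.
import numpy as np
from scipy.optimize import curve_fit

def _model(ti, a, b, t1_star):
    return a - b * np.exp(-ti / t1_star)

def fit_t1_map(images, inversion_times):
    """images: (N_TI, H, W) signal intensities; returns an (H, W) T1 map in ms."""
    _, h, w = images.shape
    t1_map = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            s = images[:, i, j]
            try:
                (a, b, t1_star), _ = curve_fit(_model, inversion_times, s,
                                               p0=(s.max(), 2 * s.max(), 1000.0),
                                               maxfev=2000)
                t1_map[i, j] = t1_star * (b / a - 1.0)     # Look-Locker correction
            except RuntimeError:
                t1_map[i, j] = 0.0                          # fit did not converge
    return t1_map
```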
A DL-based network that allows sharing of information across pixels has recently been applied to T1 mapping, reducing image noise by performing spatial and temporal regularization [150]. Additionally, a neural network termed Robust Artificial-neural-networks for K-space Interpolation (RAKI), which applies non-linear physics-based k-space estimation from undersampled k-space data, has been utilized to recover accelerated SAturation Pulse Prepared Heart-rate-independent Inversion-REcovery (SAPPHIRE) T1 maps [28]. The technique can be regarded as a DL extension of Generalized Autocalibrating Partially Parallel Acquisition (GRAPPA). A unique aspect of this method is that it is scan-specific; i.e., the network is trained from the center k-space lines of the same scan, thereby obviating the requirement for large training sets. This method was evaluated in retrospectively undersampled data from healthy subjects with quantitative metrics and outperformed traditional GRAPPA reconstruction, particularly at five-fold acceleration [28].
Segmentation of the LV myocardium is a necessary step for the measurement of T1 and T2 values, which can be automated with DL [38]. Patient movement during a CMR scan can cause changes in heart position between the raw images, leading to motion artifacts in the resulting maps. Motion correction can be performed using DL techniques [59], [151] to register the raw images and restore precise T1 and T2 values and parametric maps. Deep generative models have also been developed to enhance T1-mapping signals, combining them with cine images for more robust and informative scar imaging in the form of “virtual native enhancement” (VNE) [152], [153], [154], which resembles a “virtual LGE” and holds promise for fast, gadolinium-free myocardial tissue characterization with further technical development.
2.2.5. Multiparametric quantitative MRI
Simultaneous multiparametric quantitative MRI, where several parameters of interest are obtained from a single scan, has recently gained attention to preclude confounding of the different parameters and achieve a shorter scan time. Several models have been investigated including magnetic resonance fingerprinting (MRF) [155], multitasking [156], and others [22], [157], [158], [159], [160]. Particular hurdles for cardiac MRF include long acquisition and reconstruction times and the requirement for scan-specific dictionary generation based on the patient- and scan-specific heart rhythm.
To overcome some of these limitations, a combination of DL-based denoising and low-rank modeling has been applied to accelerate the MRF acquisition and shorten the breath-hold duration [161]. Furthermore, a fully-connected neural network that directly quantifies T1 and T2 from MRF images, bypassing dictionary generation and pattern matching and reducing computation time and memory requirements, has been proposed [35]. Cardiac multitasking has been applied using a low-rank tensor approach with two spatial dimensions and three temporal dimensions (cardiac phase, respiratory phase, and inversion time) to enable non-ECG-gated, free-breathing dynamic imaging, and was demonstrated for T1-mapping. Validation in healthy subjects demonstrated images and T1 maps of similar quality to those from conventional iterative methods, while reducing the reconstruction time by more than 3000-fold [162].
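For context, the conventional MRF dictionary-matching step that the network above bypasses amounts to a maximum-correlation search over precomputed signal evolutions; a sketch with illustrative shapes and names is given below.

```python
# Conventional MRF dictionary matching as a maximum-correlation search.
import numpy as np

def dictionary_match(fingerprints, dictionary, dict_params):
    """fingerprints: (P, T); dictionary: (D, T); dict_params: (D, 2) of (T1, T2)."""
    f = fingerprints / (np.linalg.norm(fingerprints, axis=1, keepdims=True) + 1e-12)
    d = dictionary / (np.linalg.norm(dictionary, axis=1, keepdims=True) + 1e-12)
    best = np.abs(f @ d.conj().T).argmax(axis=1)   # best-correlated dictionary entry
    return dict_params[best]                        # (P, 2): matched (T1, T2) per pixel
```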
2.2.6. 3D whole-heart imaging
3D whole-heart imaging is an integral part of anatomical imaging in cardiac disease and recent advances are promising for the assessment of coronary arteries using CMR [31]. Nevertheless, long scan times associated with higher spatial resolution and concurrent motion artifacts hinder wider clinical usage. Advances in DL-based reconstruction methods have been investigated to overcome those limitations. The respective algorithms can be divided into three main categories [27]: (1) algorithms that apply non-linear physics-based k-space estimation from acquired k-space data [163], (2) end-to-end data-to-image techniques, where the network parameters are trained to recover the images directly from undersampled k-space data [27], [30], and (3) an end-to-end network that recovers motion fields between highly undersampled respiratory-resolved images that are utilized for motion-corrected reconstruction [164]. Further advances include approaches that achieve super-resolution reconstruction from rapidly acquired low-resolution data [31], [165]. The aforementioned techniques have been tested on healthy subjects [163] and patient cohorts against clinical coronary or anatomical 3D whole-heart imaging with satisfactory quantitative and qualitative image quality metrics, providing significantly shorter acquisition time. It is worth mentioning that DL may be able to leverage the high-resolution data of CT angiography and transfer the knowledge to CMR to optimize the contrast and resolution of CMR angiography [31].
Current 3D whole-heart frameworks use diaphragm-based navigation, which limits respiratory scan efficiency [166]. Image-navigator and self-navigated [167] techniques have been proposed to account for the complex non-rigid respiratory-induced cardiac motion to achieve high-resolution 3D isotropic scans. However, non-rigid motion estimation/correction is frequently dependent on image registration [168], [169] and laborious image reconstruction techniques. To address these limitations, DL-based estimation of non-rigid cardiac motion has been proposed and validated. An initial network enabled a 20-fold speed-up of the non-rigid motion estimation step, reducing the computation time of the image registration step [170]. The pipeline was further extended to the final motion-corrected reconstruction, reducing the total computational time by 50-fold [171]. Automated image quality assessment for 3D whole-heart imaging has also been implemented; it estimates image quality with good agreement with human expert reading and may help identify the optimal reconstruction framework or define termination criteria for an iterative reconstruction process [70].
2.2.7. 2D phase-contrast MRI
Phase-contrast MRI is an integral component of CMR protocols enabling quantification of blood flow in the great vessels, estimation of valvular regurgitation, and internal validation of the ventricular stroke volumes [172]. Conventional phase-contrast MRI methods use ECG synchronization and, frequently, strategies of k-space segmentation to reduce acquisition time to a breath-hold duration. Free-breathing accelerated acquisitions are also clinically relevant for patients with difficulty in breath-holding (for example, for children and patients with dyspnea) and for real-time applications, such as exercise stress CMR [34].
DL-based reconstruction methods have been applied to recover images acquired using undersampled radial [34] and spiral trajectories [173]. Both methods have been trained using synthetic breath-held and ECG-gated datasets and evaluated in prospectively acquired free-breathing undersampled phase-contrast images [34], [173]. Qualitative and quantitative metrics and quantifiable hemodynamic parameters demonstrated satisfactory agreement with conventional acquisition and reconstruction techniques, with acquisition times that were 28-fold and 18-fold shorter, respectively.
2.2.8. 4D flow
Four-dimensional (4D) flow MRI is an emerging technique where 3D blood velocity over time can be captured with full volumetric coverage in a single scan. Challenges for further improvement of 4D flow include velocity aliasing due to suboptimal velocity encoding, low spatiotemporal resolution, and long reconstruction times [174].
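To illustrate the velocity-aliasing problem that DL antialiasing methods target: velocities outside the encoded range [-venc, venc] wrap around by 2*venc. A naive single-wrap correction against a reference estimate (for example, a temporally adjacent frame) is sketched below for illustration only.

```python
# Naive single-wrap velocity de-aliasing: pick, per voxel, the candidate closest to a
# reference estimate. Real methods must handle multiple wraps, noise, and 3D+t data.
import numpy as np

def unwrap_single_wrap(velocity, reference, venc):
    """Candidates are (v, v + 2*venc, v - 2*venc); choose the one closest to `reference`."""
    candidates = np.stack([velocity, velocity + 2 * venc, velocity - 2 * venc])
    best = np.argmin(np.abs(candidates - reference), axis=0)
    return np.take_along_axis(candidates, best[None], axis=0)[0]
```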
Approaches to optimize 4D flow acquisition and reconstruction methods have been investigated. DL-based velocity antialiasing has been tested in healthy-volunteer datasets, demonstrating moderate to excellent agreement with ground truth [175]. Furthermore, a physics-based model has been applied to ECG-gated and breath-hold datasets, achieving reconstruction in under 1 min, which was 30 times faster than state-of-the-art compressed-sensing methods [176].
DL-based segmentation techniques have been applied to 4D flow to delineate the vessel lumen to facilitate calculation of mean velocities and aortic flow quantification [177]. Segmentation can be performed on 2D images [47], balanced steady-state free precession (bSSFP) cine images (with interpolation onto flow CMR) [178], or 3D phase-contrast MR angiograms [179]. Fully-automated 4D flow segmentation remains challenging due to the low blood-tissue contrast in magnitude images, insufficient phase-contrast signal at low velocities, and the requirement for 3D analysis [177].
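Once the lumen is segmented, flow quantification itself follows directly, as sketched below (hypothetical function names and units): per-frame flow is the through-plane velocity integrated over the lumen area, and integration over the cardiac cycle yields the net forward volume.

```python
# Flow quantification from a lumen segmentation and through-plane velocity images.
import numpy as np

def flow_curve_ml_per_s(velocity_cm_s, lumen_masks, pixel_area_mm2):
    """velocity_cm_s: (T, H, W) through-plane velocities; lumen_masks: (T, H, W) bool."""
    pixel_area_cm2 = pixel_area_mm2 / 100.0
    return np.array([(v[m] * pixel_area_cm2).sum()          # cm^3/s == mL/s
                     for v, m in zip(velocity_cm_s, lumen_masks)])

def net_forward_volume_ml(flow_ml_per_s, frame_interval_ms):
    return float(flow_ml_per_s.sum() * frame_interval_ms / 1000.0)
```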
2.2.9. Perfusion MRI
CMR perfusion imaging is a non-invasive test to assess myocardial blood flow and ischemia. Myocardial perfusion is assessed by imaging the LV myocardium during the first pass of a contrast agent bolus. To detect ischemia, perfusion imaging is performed both at rest and at stress, where stress imaging utilizes vasodilation by a pharmacological agent such as adenosine. High temporal resolution is required, often compromising spatial resolution and coverage. Accelerated imaging techniques are therefore needed to optimize the balance among these parameters.
Several DL-based image enhancement networks have been proposed. These have been trained in a supervised manner using conventional compressed-sensing reconstruction outputs as reference images [32], [180]. Evaluation included both quantitative and qualitative image quality scores in healthy subjects [32], [181] and in patients [180], demonstrating comparable or superior image quality compared to compressed sensing with a much shorter reconstruction time. A physics-guided neural network, based on a signal-intensity-informed multi-coil encoding operator, has recently been proposed to capture the signal intensity variations across time-frames, allowing highly accelerated simultaneous multislice myocardial perfusion CMR [181]. This physics-guided DL framework enabled self-supervised training from undersampled k-space data only, obviating the requirement for reference images and large training datasets. It outperformed multiple regularized reconstructions, demonstrating improved image quality and reduced noise amplification and aliasing.
Myocardial perfusion analysis involves the delineation of a time series of images to compute the myocardial perfusion reserve and present it in the format of the AHA segment model, which is time-consuming when using conventional methods. DL has been proposed to automate the process by localizing the RV insertion points and the LV in a time series of perfusion images [182], [183], [184]. Furthermore, DL algorithms have been successfully applied to the segmentation of the LV cavity and myocardium, where the quantification of myocardial blood flow and perfusion reserve parameters produced outputs comparable to manual analysis [182]. In another study, automatic segmentation and quantification of perfusion mapping provided a strong, independent predictor of adverse cardiovascular outcomes [185].
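As a simplified illustration of the sector assignment these pipelines enable, the sketch below divides the myocardium of a mid-ventricular short-axis slice into six sectors by angle around the LV center, referenced to the anterior RV insertion point; segment numbering and rotation direction follow the AHA convention in practice but are simplified here.

```python
# Simplified angular AHA sector assignment for a mid-ventricular short-axis slice.
import numpy as np

def aha_sectors_mid_slice(myo_mask, lv_centre, rv_insertion, n_segments=6):
    """myo_mask: (H, W) bool; lv_centre, rv_insertion: (row, col) coordinates."""
    h, w = myo_mask.shape
    yy, xx = np.mgrid[0:h, 0:w]
    angle = np.arctan2(yy - lv_centre[0], xx - lv_centre[1])
    ref = np.arctan2(rv_insertion[0] - lv_centre[0], rv_insertion[1] - lv_centre[1])
    rel = np.mod(angle - ref, 2 * np.pi)                    # 0 at the RV insertion point
    sectors = (rel / (2 * np.pi / n_segments)).astype(int) + 1
    return np.where(myo_mask, sectors, 0)                   # 0 = background
```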
2.2.10. Protocol planning and efficiency
In addition to its role for individual sequences, AI can improve the efficiency of the workflow by automating CMR protocol planning. For example, by performing landmark detection or regression algorithms on short-axis views, long-axis views, or additional imaging, DL can determine the orientation of the LV for automated prescription of image planes. An EasyScan technique was reported to offer clinically acceptable planes on par with those of expert CMR technologists [186]. Another study demonstrated that by predicting landmarks on multiple images and views, DL can prescribe common CMR view planes similar to those marked by a radiologist or those prescribed by a technologist at the time of image acquisition [187]. Further, in CMR imaging, careful shimming is required to establish a homogeneous B0 field and an on-resonance center frequency around the heart, especially for bSSFP sequences. AI-based shimming techniques can automatically adjust the field, leading to increased signal-to-noise ratio and contrast [186]. These, together with post-processing DL techniques for common CMR sequences, can be integrated into typical CMR workflows, leading to “one-click” CMR scanning with reduced input demands on MR technologists and reduced scanning time.
3. Roadmap to translate advances in AI CMR research to routine clinical use
3.1. Need for high-quality and representative datasets
Training and evaluating an AI prediction model requires reliable data at sufficient scale and diversity. An appropriate sample size is determined by the expected effect size and the classification accuracy of the model [188]. Models with a large number of tunable parameters may be overfit on small samples such that predictions do not generalize to new data. However, it may not be feasible or affordable to curate very large annotated clinical datasets for every use case. A technical proof-of-concept study may utilize a smaller dataset to show feasibility of a new method, whereas an AI model intended for widespread clinical use would require a larger and more comprehensive dataset for both development and validation. Increasing the dimensionality of the data, from 2D to 2D with a temporal dimension (2D+t), 3D or 3D+t, may also make the data relatively sparse and at risk of overfitting. Fig. 9 summarizes the dataset distribution of 203 CMR-related AI papers, of which 47 utilized open access datasets, including from the UK Biobank [189], [190], ACDC [191], M&Ms [192], HVSMR [193], MS-CMRSeg challenge [194], [195], SunnyBrook Cardiac Data [196], Harvard radial raw data [120], and XCAT phantom data [197]. In the studies utilizing UK Biobank data (28 studies), open access data other than UK Biobank (19 studies), and newly acquired data (156 studies), the median (25th, 75th percentiles) of subject numbers are 4573 (2022, 5745), 130 (92, 264), and 139 (43, 406), respectively. The average training to testing dataset ratio reported from 124 papers is 5.49:1.
Diverse and representative data are especially important in the translational stage of AI models. Many AI models have shown better performance than human operators on specific test sets [198], [199] but may generalize poorly to other settings that have distribution shifts [200], which hinders their widespread clinical use. Data quality is similarly important. Data annotation requires intensive manual analysis by experienced image analysts and is prone to error. Novel unsupervised learning may obviate the need for laborious data annotation, but large, good quality CMR datasets are essential.
Obtaining representative datasets is challenging due to the scarcity of properly annotated data, the shortage of data covering all relevant cardiovascular diseases, and the presence of artifacts that might result in low-quality training datasets. A useful AI solution to overcome this challenge is data augmentation, where deep generative models have been proposed for the synthesis of large numbers of high-quality medical images with variability in anatomical representation and appearance comparable to their real counterparts. The primary approaches to image synthesis are: i) mask-to-image synthesis, where segmentation masks are mapped to corresponding images (the inverse of image segmentation) [201], [202], [203], [204], ii) image-to-image inference [205], and iii) regression models [201], [206], [207]. For example, augmenting real data with synthetic data during training has been shown to improve the performance of cine image segmentation [201], [204]. Similarly, augmenting LGE with the VNE modality during AI development improved the accuracy and reliability of LGE segmentation [44].
Nevertheless, for clinical application, DL models should be thoroughly validated on real data that are diverse in terms of gender, race, environment, body habitus, types of disease, sites, MRI vendors, and platforms to ensure model robustness and generalizability. While cross-validation techniques are useful for assessing internal validity, independent external validation on the intended population will assess the transportability of the model. Even external validation is not a one-off process before clinical implementation and may require further cycles of recalibration if the model is sensitive to small population differences [208]. Regulatory agencies have also identified additional types of evidence for AI tools in medical imaging, including impact on clinical decision-making, diagnostic accuracy alongside physician review, and patient perceptions [209]. Guiding principles for the safe and equitable deployment of AI algorithms may include effective governance oversight, multi-disciplinary evaluation, continuous surveillance, and incorporation of consensus guidelines on the use of AI tools [210].
CMR datasets that are discoverable and open access can accelerate advancements across the entire field. Organizations, such as the SCMR and the National Institute for Health, should encourage data sharing for training and testing AI algorithms, ensuring that data are findable, accessible, interoperable, and reusable [211]. In addition, they should promote the establishment of multi-site datasets accounting for variabilities between sites, with ethics and data governance policies in place. It is also important to acknowledge that non-ideal but unique or distinctive datasets, with their inherent limitations, can make substantial contributions to the field, potentially offering opportunities to train unique AI models and/or demonstrate proof of concept in new and innovative directions.
3.2. Need for guidelines for reporting AI CMR research
To promote quality, improve reproducibility, and increase adoption, there is a need for guidelines for reporting AI-based research. Such guidelines will help the community better understand and assess study findings and their potential clinical impact. Several guidelines for AI in medicine and medical imaging have been proposed over the last few years or are currently under development. For example, CLAIM (Checklist for Artificial Intelligence in Medical Imaging) [212], a guideline for authors and reviewers, suggested a list of information that manuscripts should provide related to models, training procedure, datasets, etc. Similarly, MINIMAR (MINimum Information for Medical AI Reporting) [213] specified the minimum information that authors of manuscripts should provide in terms of study population, data demographics, model architecture, and model evaluation. CONSORT-AI (Consolidated Standards of Reporting Trials) and SPIRIT-AI (Standard Protocol Items: Recommendations for Interventional Trials) [214], [215] were extensions of CONSORT and SPIRIT and provided guidelines for reporting randomized trials involving AI-based methods. Also, FUTURE-AI [215] provided a list of best practices based on six principles, Fairness, Universality, Traceability, Usability, Robustness, and Explainability (FUTURE), that should guide AI-based research to provide trustworthy solutions. Last, TRIPOD-AI (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis - Artificial Intelligence) and PROBAST-AI (Prediction model Risk Of Bias Assessment Tool - Artificial Intelligence) [216], extensions of the TRIPOD and PROBAST guidelines, are currently under development as guidelines for reporting and risk-of-bias assessment of clinical prediction models using AI. The CMR community may adopt these guidelines (individually or in combination) [217], [218] or extend and modify them to promote high-quality AI-based CMR research. A summary of the most relevant recommendations from these publications is provided in Table 2.
Table 2.
Section | Best practices |
---|---
Manuscript title and abstract | Clearly indicate the AI methodology used and provide a structured summary of the study’s design, methods, results, and conclusions. |
Introduction | Provide the scientific and clinical background of the AI approach employed and state how the proposed approach will help address a significant clinical or scientific issue.
Methods and results |
Discussion | Summarize results, discuss limitations including potential bias and generalizability issues, implications for practice, and future directions. |
This summary follows but also complements the CLAIM recommendations.
AI artificial intelligence, CLAIM Checklist for Artificial Intelligence in Medical Imaging.
3.3. Considerations around clinical deployment
AI-based methods are to a large extent data-driven and hence may unintentionally replicate biases that are hidden in those data [219]. Lack of diversity in training datasets with respect to gender, race, age, ethnicity, weight, and height, as well as social disparities in access to health care (particularly at academic research centers), might lead to suboptimal performance. Several approaches spanning the pre-training, training, and post-training stages have been proposed to address this issue, including comparing standard performance metrics across different sub-groups and employing fairness-specific criteria to audit a given model for the presence of bias [220]. From a regulatory standpoint, several strategies have been developed to address the issue of biased data in AI systems. The STANDING Together initiative (standards for data diversity, inclusivity, and generalizability), launched in September 2022, aims to develop recommendations for the composition (who is represented) and reporting (how they are represented) of datasets utilized in medical AI systems. The panel comprises patients and the public, clinicians and academic researchers across the biomedical, computational, and social sciences, industry experts, regulators, and policy-makers, and the final recommendations are rooted in an 18-month program of systematic reviews, surveys, in-depth interviews, and a modified Delphi study [221].
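As a concrete illustration of the first of these approaches (comparing standard performance metrics across sub-groups), a minimal sketch is given below; the column names are hypothetical.

```python
# Minimal bias check: stratify a standard performance metric by a protected attribute.
import pandas as pd

def metric_by_subgroup(results: pd.DataFrame, metric="dice", group="sex"):
    """results: one row per case, containing the metric value and the subgroup label."""
    summary = results.groupby(group)[metric].agg(["count", "mean", "std"])
    summary["gap_vs_overall"] = summary["mean"] - results[metric].mean()
    return summary
```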
AI systems raise concerns about transparency and accountability, as they are programmed to “learn” a model from a large set of data, without providing a rationale for the outcome [222].
Before deploying a DL system in the critical infrastructure of medical imaging, validation of the decision pathway and of the ground-truth knowledge should be provided. From a technical perspective, various metrics and visualizations are available for evaluating the technical performance of AI algorithms [216]. Saliency methods, although widely used in medical studies for model interpretation and localization, have been applied only sparingly in AI techniques for CMR, and their utility in non-CMR applications is debated [223], [224]. To address trustworthiness in AI comprehensively, explainable AI (XAI) models have been introduced in cardiac imaging [225]; these focus on exposing AI models to humans in an interpretable manner [226]. Three levels of evaluation of the outcomes of XAI have been proposed: the first applies proxies and statistical methods (functionally grounded evaluation), followed by evaluation by non-clinical evaluators (human-grounded) and, lastly, by medical experts (application-grounded) [225]. A list of currently available open-source XAI tools can be found in [227]. Furthermore, to promote accountability, several procedures have been established, and are anticipated to evolve, to validate system performance in a tiered manner. A recent systematic review proposed a user-centered research design approach, whereby model designers actively consider and work closely with the stakeholders (clinicians, patients, technologists, etc.), particularly during the design and construction of AI models [228]. Explicit guidelines have been published for the evaluation of the clinical performance of AI applications [215], [216], [229], [230], [231]. To augment the generalizability of current data-driven AI techniques [232], [233], testing of the algorithms, preferably at multiple sites and under real-world conditions, is crucial [199], [234], [229]. Paired and parallel study designs have been proposed to evaluate the benefits of AI in clinical practice [229], and randomized clinical trials remain the gold standard [229]. Approaches that incorporate human intervention in the network pipeline, the so-called human in the loop, have also been proposed [10].
Successful AI deployment in clinical practice requires the active involvement of all stakeholders, including patients, academics, clinicians, imaging technicians, hospital administrations, regulatory bodies, and industry. Academic and clinical associations can help health care professionals acquire basic knowledge of AI, to facilitate critical evaluation of datasets, integration within clinical workflows, and bias control. A key theme across the different stakeholders is the requirement for standardized, high-quality datasets and for transparency in how they are acquired, to maximize the potential innovations derived from them and to allow better understanding of their context and limitations [235]. This can be achieved by setting standards and guidelines for the entire process of medical image preparation, from de-identification to data annotation and, especially, data curation [236]. Multi-disciplinary collaboration is also crucial to promote effective data-sharing models that optimize model performance while respecting the legal and ethical aspects of AI adoption [237]. Communication among the stakeholders is important for the continuous appraisal of the applied AI models to foster quality assurance and product improvement.
The application of AI in CMR that involves personal health information also raises concerns about data protection, autonomy, and privacy. In particular, concerns around meaningful consent and effective anonymization and de-identification of data are valid [216] and need to be addressed at a central regulatory and institutional level. All relevant stakeholders should be familiar with the proposed standards. Last, the social and cultural blueprint of AI is currently largely under-studied [238]. An interdisciplinary approach to the application of AI is advisable to expand its clinical potential in a safe, useful, and fair context [239].
3.4. Responsibility of the clinician and the need for interdisciplinary teams
Clinicians in the current era are challenged to explore CMR through the lens of AI. Deployment of AI demands that clinicians recalibrate their approach to information. However, information and data alone do not constitute knowledge. Hence, a robust framework to integrate and interpret the data in a way that is meaningful for patients is required [240].
The processes of data curation and anonymization and of validation of the clinical performance of AI lie with both clinicians and technical experts and have been covered in previous sections, highlighting the significance of a multi-disciplinary approach [216]. In addition, obtaining patient consent, accurate interpretation of the results, and communication with patients for optimal decision-making lie primarily with the clinician. In this context, two significant issues arise, namely respect for patients’ autonomy and the clinician’s accountability. Patient autonomy is honored through the process of informed consent, whereby individuals must be given the opportunity to agree to, and make choices among, the risks to which they are exposed [219]. Furthermore, the clinician is accountable for decisions regarding patient management [241]. This accountability stretches beyond regulations and should encompass accountability to the ecosystem in which the AI-derived information will be shared (patients, caregivers, community, industry, health care professionals). Thus, if radiologists and cardiologists are to incorporate AI into daily practice, basic proficiency in AI methodology and an understanding of both its potential and its limitations are required to address these issues [198], [242], [243]. Several approaches have been proposed [242], [244]. Creating the educational resources necessary for an AI curriculum requires the collaboration of multiple national and international societies, such as SCMR, as well as academic radiology and cardiology departments. These educational efforts will need the involvement of and collaboration with technical experts, such as computer scientists, statisticians, and biomedical engineers. Last, the framework of clinical applications based on AI should be laid out in rigorous legislation, with which clinicians, technical experts, and industry employees should familiarize themselves. In Europe, the relevant laws (General Data Protection Regulation [GDPR] Article 22) prohibit any decision-making based solely on automatic processing of personal data, precluding in practice the possibility of relying only on the outcome of an algorithm for sensitive decision-making. Furthermore, Article 22 requires the controller to implement suitable measures to safeguard the data subject’s rights, freedoms, and legitimate interests, which must include the right to obtain human intervention. This human intervention has to be qualified and capable of identifying and correcting unfair outcomes or discrimination (European Data Protection Board) [219]. To our knowledge, there is no analogous legislation elsewhere to date.
3.5. Industry considerations on AI in CMR
AI presents great opportunities in MRI for improved automation [64], [65], [56], data analysis [7], [8], [9], [36], [37], [38], [39], scan time reduction [24], [25], [26], [27], and image quality improvement [30], and many new products from MRI vendors have an AI component. AI is especially relevant for CMR because of the higher complexity of the exam, which requires complex and time-consuming planning, multiple breath-holds, and the use of external devices for cardiac and respiratory gating. In addition, practically every CMR exam is subject to post-processing and analysis resulting in a quantitative characterization of cardiac anatomy, function, and tissue properties, which requires fast and reliable segmentation methods. Both the high complexity of CMR scans and the need for data post-processing dictate the demand and trend for automation in CMR, and AI will play a crucial role in realizing these advancements. Such advancements will facilitate the wider dissemination of CMR to low- and middle-income countries [245], benefiting wider populations and promoting equity in access to health care resources globally.
However, there are also several concerns related to the use of AI in MRI scanners. The topics related to data availability, data diversity, and data quality as well as privacy aspects were discussed in detail in previous sections. These are highly relevant in the development of AI-based products, where it is important to ensure generalization and stable performance on a large scale.
A growing number of public databases containing CMR data aim to support research on AI-based approaches [246]. These allow more objective comparisons between different network architectures, since the performance of an AI model depends on both the architecture and the data. They also open the field to research teams with AI expertise but no access to an MR scanner. However, the use of public databases for product development is typically very limited, owing to several factors including terms of use, privacy, and data quality. Because of the data-dependent performance mentioned above, the gap between research and product development can be much larger for AI models than for conventional approaches.
The need to acquire large training datasets may substantially increase product development time and cost. If only limited data can be acquired for certain patient groups, the product scope may have to be limited to exclude those groups. Self-supervised and untrained methods are attracting increasing interest, but they require much longer processing times, which makes their adoption challenging [247], [248], [249]. On the positive side, the risk of reidentification based on image data alone is relatively low for CMR. This differs from 3D brain imaging, where face reconstruction from the images is an additional concern, requiring face-removal techniques to protect participants' privacy [250]. Nevertheless, caution should be taken to keep the risk of reidentification as low as possible, especially for small local datasets and rare diseases.
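The long processing times of untrained, scan-specific methods stem from the fact that a network is optimized from scratch for every new scan. The sketch below is a minimal deep-image-prior-style illustration of this idea under toy assumptions (a synthetic phantom, random undersampling, and a deliberately small network); it is not the approach of any of the cited works.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
N = 64
# Toy "ground-truth" image: nested rectangles standing in for anatomy.
x_true = torch.zeros(N, N); x_true[16:48, 16:48] = 1.0; x_true[24:40, 24:40] = 0.5

# Undersampled k-space measurements y = M * F(x) with a random sampling mask M.
mask = (torch.rand(N, N) < 0.33).float()
y = mask * torch.fft.fft2(x_true)

# Small untrained CNN mapping a fixed noise code z to an image (deep-image-prior style).
net = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
z = torch.randn(1, 1, N, N)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Each new scan requires its own optimization loop, which is what makes
# inference times long compared to a pre-trained feed-forward model.
for it in range(500):
    opt.zero_grad()
    x_hat = net(z).squeeze()
    loss = (torch.abs(mask * torch.fft.fft2(x_hat) - y) ** 2).mean()  # data consistency only
    loss.backward()
    opt.step()
print(f"final data-consistency loss: {loss.item():.4f}")
```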
Another risk arising from the data-dependent performance of AI-based approaches is the use of multiple independently developed AI-based methods at different stages of the data processing pipeline. For example, an AI-based image reconstruction may change the output of an AI-based image enhancement that was trained on data processed with a different reconstruction. Even seemingly simple modifications in the image reconstruction, such as computing a magnitude instead of a complex image, will affect the performance of subsequent denoising [251]. Similarly, changes in image reconstruction and enhancement may lead to changes in image analysis and quantification [252], [253]. The issue of variability in the performance and results of post-processing tools depending on the input data is not new; however, it becomes more acute with the introduction of AI approaches that are more sensitive to modifications in the data processing pipeline. It is especially difficult to predict the outcome of a combination of multiple AI methods that have been developed independently by different vendors. In-depth analysis of potential changes in end results may be required to address this issue.
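As a small, hedged illustration of why the magnitude operation matters, the numpy sketch below (with arbitrary signal and noise levels chosen purely for illustration) shows how taking the magnitude of a complex-valued pixel changes the noise statistics from Gaussian to Rician and introduces a bias, so a denoiser trained on one representation would see a shifted input distribution when fed the other.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = 2.0        # assumed "true" pixel intensity (arbitrary units, illustrative)
sigma = 1.0         # assumed per-channel Gaussian noise level
n = 200_000

# Complex image: Gaussian noise on the real and imaginary channels.
noisy_complex = (signal + rng.normal(0, sigma, n)) + 1j * rng.normal(0, sigma, n)

# Magnitude image: the same data after a magnitude operation -> Rician-distributed.
noisy_magnitude = np.abs(noisy_complex)

# The magnitude image is biased upward relative to the true signal, so a denoiser
# trained on complex (Gaussian-noise) inputs sees a different distribution when it
# is fed magnitude images produced by a different reconstruction.
print(f"mean of real channel:    {noisy_complex.real.mean():.3f} (true signal {signal})")
print(f"mean of magnitude image: {noisy_magnitude.mean():.3f} (noise-floor bias)")
```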
It is interesting to speculate how AI may affect the energy needs and carbon footprint of CMR. On one hand, with the growth of data volume, model size, and training infrastructure, developing AI models for CMR will consume energy, with a negative effect on the environmental footprint. On the other hand, applying AI methods could shorten CMR exams, decreasing energy usage and benefiting the environmental footprint. An analysis of the carbon-footprint impact of AI for CMR should therefore take a holistic approach [254], considering both the savings and benefits that AI brings to CMR exams and the energy costs of AI development.
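To show what such a holistic accounting might look like, the back-of-envelope sketch below compares one-off training energy against annual scanner-time savings. Every number in it is an assumed placeholder for illustration, not a measured value, and a real analysis would need site-specific figures.

```python
# Back-of-envelope comparison; all values are placeholder assumptions, not measurements.
gpu_power_kw = 0.3                 # assumed average draw of one training GPU
training_gpu_hours = 2_000         # assumed total GPU-hours to develop a model
training_energy_kwh = gpu_power_kw * training_gpu_hours

scanner_power_kw = 25.0            # assumed average draw of an MRI system during scanning
minutes_saved_per_exam = 15        # assumed exam-time reduction from AI acceleration
exams_per_year = 1_000             # assumed annual exam volume at one site
saved_energy_kwh = scanner_power_kw * (minutes_saved_per_exam / 60) * exams_per_year

print(f"training energy:   {training_energy_kwh:,.0f} kWh (one-off)")
print(f"energy saved/year: {saved_energy_kwh:,.0f} kWh at a single site")
```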
From a regulatory perspective, it is also unclear how compatibility between independently developed AI-based devices and techniques should be handled. An even bigger challenge is ensuring the compatibility of multiple AI-based techniques that are allowed to be modified through continued training on real-world data.
Another challenge is the deployment of DL models on medical devices. Increasingly complex models (e.g., unrolled reconstruction networks) paired with high-dimensional input data (e.g., high-resolution 3D data, 4D flow) necessitate high-end hardware accelerators, such as graphics processing units, to ensure acceptable inference times. These must be available either directly within the scanner platform, or the infrastructure must be in place to off-load computation to an edge or cloud computing facility, which is not yet a common scenario. This may limit the widespread availability of advanced applications by excluding, on cost grounds, systems already in the field as well as new lower-end systems.
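A simple way to appreciate this hardware constraint is to time a forward pass of a representative network on whatever hardware is available. The sketch below does this for a purely illustrative 3D convolutional model and input volume (the architecture and input size are assumptions, not a vendor implementation), falling back to CPU when no GPU is present.

```python
import time
import torch
import torch.nn as nn

# Stand-in for a reconstruction/analysis network; the architecture is purely illustrative.
model = nn.Sequential(
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, 3, padding=1),
)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device).eval()

# Hypothetical high-dimensional input, e.g. a 3D volume (depth x height x width).
x = torch.randn(1, 1, 32, 192, 192, device=device)

with torch.no_grad():
    model(x)                               # warm-up pass
    if device == "cuda":
        torch.cuda.synchronize()
    t0 = time.perf_counter()
    model(x)                               # timed pass
    if device == "cuda":
        torch.cuda.synchronize()
    print(f"inference on {device}: {time.perf_counter() - t0:.2f} s")
```

Repeating this measurement on the CPU of an older scanner host versus a modern GPU makes the gap, and hence the case for accelerators or off-loading, immediately visible.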
3.6. Regulatory perspectives and considerations related to AI in CMR
For medical devices deployed clinically in the United States (US), the US Food and Drug Administration (FDA) Center for Devices and Radiological Health assures that patients and providers have timely and continued access to safe, effective, and high-quality medical devices. It is important to acknowledge that other regulatory agencies, such as Health Canada, the European Medicines Agency, and the Therapeutic Goods Administration (Australia), have jurisdiction in their respective regions. While it would be ideal to provide and compare perspectives across regulatory agencies with different jurisdictions, this article is limited to a detailed review and perspective from the FDA as an example.
As of October 2022, over 500 medical devices incorporating AI/ML technology had been granted marketing authorization by the US FDA through a combination of the premarket approval, 510(k), and De Novo regulatory pathways [255]. While the majority of these devices are intended for analyzing radiological data, the general approach to evaluating AI/ML-enabled medical devices is the same regardless of medical specialty. An overview of the regulatory considerations for medical imaging AI/ML devices in the US was recently published [256]; its main points are briefly summarized here.
Data hygiene is perhaps the most fundamental concern in the evaluation of AI/ML-enabled medical devices. A central principle is that the testing dataset should be independent of the training dataset; in general, this means that the testing and training datasets should be collected from different patients and at different clinical sites. At the same time, both development and evaluation datasets should be representative of the target population, and the evaluation dataset should be of sufficient size to ensure statistical validity. As pointed out in Section 3.1, scarcity of complete and properly annotated data is a significant hurdle in the development of algorithms, and it can be an equally significant hurdle to approval of a commercial product.
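A minimal sketch of how patient- and site-independent splits could be enforced in practice is shown below, using a hypothetical study manifest and scikit-learn's GroupShuffleSplit; the identifiers, site labels, and split fractions are illustrative assumptions only.

```python
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical study manifest: one row per exam, with patient and site identifiers.
manifest = pd.DataFrame({
    "exam_id":    range(12),
    "patient_id": ["P1", "P1", "P2", "P3", "P3", "P4", "P5", "P6", "P6", "P7", "P8", "P8"],
    "site":       ["A",  "A",  "A",  "A",  "B",  "B",  "B",  "C",  "C",  "C",  "C",  "C"],
})

# Site-level hold-out: exams from site "C" are reserved for testing only.
test = manifest[manifest.site == "C"]
devel = manifest[manifest.site != "C"]

# Within the development data, split train/validation by patient so that no patient
# contributes exams to both partitions.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, val_idx = next(splitter.split(devel, groups=devel.patient_id))
train, val = devel.iloc[train_idx], devel.iloc[val_idx]

# Sanity checks: no patient leakage across train/validation, no site leakage into test.
assert set(train.patient_id).isdisjoint(val.patient_id)
assert set(devel.site).isdisjoint({"C"})
print(len(train), "train exams,", len(val), "validation exams,", len(test), "test exams")
```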
While the specifics of the performance evaluation for a particular device are informed by both the technology of the device and its intended use, AI/ML devices in general are evaluated via standalone performance testing and/or clinical studies. Standalone performance testing measures device performance alone, with little or no interaction or interpretation from a clinical end user. When a clinical user needs to interact with the device or interpret its outputs, an assessment of the device in the hands of end users may also be needed, depending on the risks the device poses. To ensure generalizability of device outputs and to better understand performance limitations, results of any performance assessment are generally reported both in aggregate and by sub-groups based on patient characteristics (e.g., age, race, gender, disease type and stage) and data acquisition characteristics (e.g., acquisition site, acquisition device, protocol).
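The sketch below illustrates one way such stratified reporting could be produced from per-case standalone results; the Dice scores and sub-group labels are hypothetical placeholder values for illustration only and do not correspond to any real evaluation.

```python
import pandas as pd

# Hypothetical per-case standalone results: one Dice score per exam, with metadata.
results = pd.DataFrame({
    "dice":    [0.92, 0.88, 0.95, 0.81, 0.90, 0.86, 0.93, 0.78],
    "sex":     ["F", "M", "F", "M", "F", "M", "F", "M"],
    "scanner": ["vendor1", "vendor1", "vendor2", "vendor2",
                "vendor1", "vendor2", "vendor1", "vendor2"],
})

# Aggregate performance ...
print(f"overall Dice: {results.dice.mean():.3f}")

# ... and the same metric stratified by patient and acquisition characteristics,
# as is generally expected when reporting AI/ML device performance.
for column in ("sex", "scanner"):
    print(results.groupby(column)["dice"].agg(["mean", "std", "count"]))
```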
Guidance documents provide the medical device community with insight into current FDA thinking. While no guidance document specific to CMR currently exists, a series of guidance documents developed for radiological imaging-based AI/ML devices discusses premarket submission details [257] and clinical performance assessment [258] of computer-assisted detection devices and may provide useful information for developers of AI/ML-enabled CMR devices. The guidance document on the technical performance assessment of quantitative imaging in radiological device submissions may also be relevant [259]. A guidance document broadly addressing lifecycle management considerations and premarket submission recommendations for AI/ML-enabled device software functions is expected to be published in draft form by the end of fiscal year 2024 [260].
In June 2022, the National Heart, Lung, and Blood Institute held a workshop entitled “Artificial Intelligence in Cardiovascular Imaging: Translating Science to Patient Care” [261]. While not specific to CMR, the workshop aimed to address many of the same challenges discussed in this manuscript: to identify challenges and opportunities for AI in cardiovascular imaging, focusing on how various stakeholders can support research and development to move AI from promising proofs of concept to robust, generalizable, equitable, scalable, and implementable tools. The workshop identified both policy and technical needs to further advance clinical translation of AI in cardiovascular imaging [261]. Within this context of rapid innovation and regulatory challenges, the mission of the FDA’s Office of Science and Engineering Laboratories (OSEL) is to accelerate patient access to innovative, safe, and effective medical devices through best-in-the-world regulatory science. Through its AI/ML program, OSEL seeks to address gaps related to limited training and testing data, bias, equity, and generalizability; to develop least-burdensome metrics for the performance assessment of AI/ML devices in situations of high uncertainty; to develop evaluation metrics for evolving algorithms; and to develop approaches for effective post-market monitoring of AI/ML-enabled medical devices. OSEL also develops and shares regulatory science tools (physical phantoms, methods, datasets, computational models, and simulation pipelines) to help advance medical device development and assessment [262].
4. Conclusion
In summary, this article reviews the current landscape, challenges, and potential future directions of integrating AI approaches into CMR imaging. It emphasizes the potential for AI to address several challenges faced by CMR in terms of efficiency, accessibility, and manual image analysis, and it highlights the remarkable and rapid research progress in AI across diverse CMR tasks, modalities, and applications. However, despite these notable technical advances, there is still limited evidence of their practical value and impact in real-world clinical settings. To help address this gap, we discuss a roadmap to translate AI CMR research into routine clinical practice, aiming to accelerate the adoption of these techniques and to ensure their promise is realized in a fair, responsible, and equitable manner.
These considerations and recommendations emphasize the importance of large, high-quality, and representative datasets for training and testing AI models, recognizing the challenges in dataset size, diversity, and quality. The role of guidelines for reporting AI CMR research is also underscored, to enhance study reproducibility and facilitate a better understanding of findings. Furthermore, the importance of generalizability, transparency, accountability, and explainability in the deployment of AI in CMR is highlighted. The article also touches on industry considerations, pointing out several opportunities associated with the higher complexity and analysis needs of CMR, and acknowledging challenges related to data privacy, compatibility between independently developed AI-based methods for different tasks, and the need to ensure generalization and stable performance of products at scale. Last, it provides insights into the US regulatory landscape, focusing on the US FDA's perspectives.
In conclusion, this article emphasizes the necessity of interdisciplinary collaboration between MR physicists, clinicians, AI scientists, and industry professionals to further advance AI in CMR. This collaborative effort, coupled with rigorous regulatory and ethical consideration, is essential to ensure the responsible and effective deployment of AI in routine clinical CMR workflows. The ongoing commitment of the SCMR community to prioritize these crucial aspects is therefore essential for the advancement of CMR through AI and for providing clinical evidence of its benefits.
Author contributions
Mariya Doneva: Writing – review and editing, Methodology. Jens Wetzl: Writing – review and editing, Methodology. Jana G. Delfino: Writing – review and editing, Methodology. Declan P. O’Regan: Writing – review and editing, Methodology. Claudia Prieto: Writing – review and editing, Writing – original draft, Methodology, Conceptualization. Frederick H. Epstein: Writing – review and editing, Writing – original draft, Methodology, Conceptualization. Qiang Zhang: Writing – review and editing, Writing – original draft, Methodology. Anastasia Fotaki: Writing – review and editing, Writing – original draft, Methodology. Sona Ghadimi: Writing – review and editing, Writing – original draft, Methodology. Yu Wang: Writing – review and editing, Writing – original draft, Methodology.
Declaration of competing interests
Qiang Zhang reports a relationship with the British Heart Foundation that includes funding grants. Qiang Zhang has a patent, Enhancement of Medical Images (WO/2021/044153), pending to Oxford University Innovation. Qiang Zhang has a patent, Validation of quantitative magnetic resonance imaging protocols (WO2020234570A1), pending to Oxford University Innovation. Mariya Doneva is an employee of Philips GmbH Innovative Technologies, Hamburg, Germany. Mariya Doneva is an editorial board member of Magnetic Resonance in Medicine and IEEE Transactions on Computational Imaging. Jens Wetzl is an employee and shareholder of Siemens Healthineers AG. Declan P. O’Regan has consulted for Bayer and BMS on AI, holds a research grant from Bayer, and holds relevant patents. Claudia Prieto is an Associate Editor for Magnetic Resonance in Medicine and was not involved in the editorial review or the decision to publish this article. Frederick H. Epstein has research support from Siemens. Frederick H. Epstein is an editorial board member of JCMR and was not involved in the editorial review or the decision to publish this article. Frederick H. Epstein holds relevant patents. Sona Ghadimi holds relevant patents. Yu Wang holds relevant patents. The other authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgements
Q.Z. acknowledges funding support from British Heart Foundation (BHF) and Oxford BHF Centre of Research Excellence (RE/18/3/34214). D.O’R. is supported by the Medical Research Council (MC_UP_1605/13); National Institute for Health Research (NIHR) Imperial College Biomedical Research Centre; and the British Heart Foundation (RG/19/6/34387, RE/18/4/34215). C.P. acknowledges funding from Millennium Institute iHEALTH ICN2021_004. F.H.E. acknowledges funding support from the National Heart, Lung, and Blood Institute NIH R01 HL147104. For the purpose of open access, the authors have applied a creative commons attribution (CC BY) license to any author accepted manuscript version arising.
Contributor Information
Qiang Zhang, Email: qiang.zhang@cardiov.ox.ac.uk.
Anastasia Fotaki, Email: anastasia.fotaki@kcl.ac.uk.
Sona Ghadimi, Email: sq9qd@virginia.edu.
Yu Wang, Email: yw8za@virginia.edu.
Mariya Doneva, Email: mariya.doneva@philips.com.
Jens Wetzl, Email: jens.wetzl@siemens-healthineers.com.
Jana G. Delfino, Email: Jana.Delfino@fda.hhs.gov.
Declan P. O’Regan, Email: declan.oregan@imperial.ac.uk.
Claudia Prieto, Email: claudia.prieto@kcl.ac.uk.
Frederick H. Epstein, Email: fhe6b@virginia.edu.
References
- 1.Janiesch C., Zschech P., Heinrich K. Machine learning and deep learning. Electron Mark. 2021;31:685–695. [Google Scholar]
- 2.Ogier A.C., Bustin A., Cochet H., Schwitter J., van Heeswijk R.B. The road toward reproducibility of parametric mapping of the heart: a technical review. Front Cardiovasc Med. 2022;9 doi: 10.3389/fcvm.2022.876475. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 3.Alskaf E., Dutta U., Scannell C.M., Chiribiri A. Deep learning applications in myocardial perfusion imaging, a systematic review and meta-analysis. Inf Med Unlocked. 2022;32 doi: 10.1016/j.imu.2022.101055. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.Velasco C., Fletcher T.J., Botnar R.M., Prieto C. Artificial intelligence in cardiac magnetic resonance fingerprinting. Front Cardiovasc Med. 2022;9 doi: 10.3389/fcvm.2022.1009131. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 5.Zabihollahy F., Rajan S., Ukwatta E. Machine learning-based segmentation of left ventricular myocardial fibrosis from magnetic resonance imaging. Curr Cardiol Rep. 2020;22:65. doi: 10.1007/s11886-020-01321-1. [DOI] [PubMed] [Google Scholar]
- 6.Qi H., Cruz G., Botnar R., Prieto C. Synergistic multi-contrast cardiac magnetic resonance image reconstruction. Philos Trans A Math Phys Eng Sci. 2021;379:20200197. doi: 10.1098/rsta.2020.0197. [DOI] [PubMed] [Google Scholar]
- 7.Wu Y., Tang Z., Li B., Firmin D., Yang G. Recent advances in fibrosis and scar segmentation from cardiac MRI: a state-of-the-art review and future perspectives. Front Physiol. 2021;12 doi: 10.3389/fphys.2021.709230. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8.Jathanna N., Podlasek A., Sokol A., Auer D., Chen X., Jamil-Copley S. Diagnostic utility of artificial intelligence for left ventricular scar identification using cardiac magnetic resonance imaging-a systematic review. Cardiovasc Digit Health J. 2021;2:S21–S29. doi: 10.1016/j.cvdhj.2021.11.005. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 9.Duchateau N., King A.P., De Craene M. Machine learning approaches for myocardial motion and deformation analysis. Front Cardiovasc Med. 2019;6:190. doi: 10.3389/fcvm.2019.00190. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.Assadi H., Alabed S., Maiter A., Salehi M., Li R., Ripley D.P., et al. The role of artificial intelligence in predicting outcomes by cardiovascular magnetic resonance: a comprehensive systematic review. Medicina (Kaunas) 2022;58(8) doi: 10.3390/medicina58081087. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 11.Chong J.H., Abdulkareem M., Petersen S.E., Khanji M.Y. Artificial intelligence and cardiovascular magnetic resonance imaging in myocardial infarction patients. Curr Probl Cardiol. 2022;47 doi: 10.1016/j.cpcardiol.2022.101330. [DOI] [PubMed] [Google Scholar]
- 12.Asher C., Puyol-Anton E., Rizvi M., Ruijsink B, Chiribiri A, Razavi R, et al. The role of AI in characterizing the DCM phenotype. Front Cardiovasc Med. 2021;8 doi: 10.3389/fcvm.2021.787614. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 13.Argentiero A., Muscogiuri G., Rabbat M.G., Martini C., Soldato N., Basile P., et al. The applications of artificial intelligence in cardiovascular magnetic resonance-a comprehensive review. J Clin Med. 2022;11(10) doi: 10.3390/jcm11102866. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Fotaki A., Puyol-Anton E., Chiribiri A., Botnar R., Pushparajah K., Prieto C. Artificial intelligence in cardiac MRI: is clinical adoption forthcoming? Front Cardiovasc Med. 2021;8 doi: 10.3389/fcvm.2021.818765. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15.Li L., Ding W., Huang L., Zhuang X., Grau V. Multi-modality cardiac image computing: a survey. Med Image Anal. 2023;88 doi: 10.1016/j.media.2023.102869. Epub 2023 Jun 16. PMID: 37384950. [DOI] [PubMed] [Google Scholar]
- 16.Deshmane A., Gulani V., Griswold M.A., Seiberlich N. Parallel MR imaging. J Magn Reson Imaging. 2012;36:55–72. doi: 10.1002/jmri.23639. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 17.Lustig M., Donoho D., Pauly J.M. Sparse MRI: the application of compressed sensing for rapid MR imaging. Magn Reson Med. 2007;58:1182–1195. doi: 10.1002/mrm.21391. [DOI] [PubMed] [Google Scholar]
- 18.Kido T., Kido T., Nakamura M., Watanabe K, Schmidt M, Forman C, et al. Compressed sensing real-time cine cardiovascular magnetic resonance: accurate assessment of left ventricular function in a single-breath-hold. J Cardiovasc Magn Reson. 2016;18:50. doi: 10.1186/s12968-016-0271-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 19.Vermersch M., Longere B., Coisne A., Schmidt M, Forman C, Monnet A, et al. Compressed sensing real-time cine imaging for assessment of ventricular function, volumes and mass in clinical practice. Eur Radiol. 2020;30:609–619. doi: 10.1007/s00330-019-06341-2. [DOI] [PubMed] [Google Scholar]
- 20.Basha T.A., Akcakaya M., Liew C., Tsao CW, Delling FN, Addae G, et al. Clinical performance of high-resolution late gadolinium enhancement imaging with compressed sensing. J Magn Reson Imaging. 2017;46:1829–1838. doi: 10.1002/jmri.25695. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21.Munoz C., Fotaki A., Botnar R.M., Prieto C. Latest advances in image acceleration: all dimensions are fair game. J Magn Reson Imaging. 2023;57:387–402. doi: 10.1002/jmri.28462. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 22.Qi H., Bustin A., Cruz G., Jaubert O, Chen H, Botnar RM, et al. Free-running simultaneous myocardial T1/T2 mapping and cine imaging with 3D whole-heart coverage and isotropic spatial resolution. Magn Reson Imaging. 2019;63:159–169. doi: 10.1016/j.mri.2019.08.008. [DOI] [PubMed] [Google Scholar]
- 23.Bustin A., Hua A., Milotta G., Jaubert O, Hajhosseiny R, Ismail TF, et al. High-spatial-resolution 3D whole-heart MRI T2 mapping for assessment of myocarditis. Radiology. 2021;298:578–586. doi: 10.1148/radiol.2021201630. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24.Bustin A., Ginami G., Cruz G., Correia T, Ismail TF, Rashid I, et al. Five-minute whole-heart coronary MRA with sub-millimeter isotropic resolution, 100% respiratory scan efficiency, and 3D-PROST reconstruction. Magn Reson Med. 2019;81:102–115. doi: 10.1002/mrm.27354. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 25.Kofler A., Dewey M., Schaeffter T., Wald C., Kolbitsch C. Spatio-temporal deep learning-based undersampling artefact reduction for 2D radial cine MRI with limited training data. IEEE Trans Med Imaging. 2020;39:703–717. doi: 10.1109/TMI.2019.2930318. [DOI] [PubMed] [Google Scholar]
- 26.Zhu B., Liu J.Z., Cauley S.F., Rosen B.R., Rosen M.S. Image reconstruction by domain-transform manifold learning. Nature. 2018;555:487–492. doi: 10.1038/nature25988. [DOI] [PubMed] [Google Scholar]
- 27.Fuin N., Bustin A., Kustner T., Oksuz I., Clough J., King AP., et al. A multi-scale variational neural network for accelerating motion-compensated whole-heart 3D coronary MR angiography. Magn Reson Imaging. 2020;70:155–167. doi: 10.1016/j.mri.2020.04.007. [DOI] [PubMed] [Google Scholar]
- 28.Akcakaya M., Moeller S., Weingartner S., Ugurbil K. Scan-specific robust artificial-neural-networks for k-space interpolation (RAKI) reconstruction: database-free deep learning for fast imaging. Magn Reson Med. 2019;81:439–453. doi: 10.1002/mrm.27420. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 29.Schlemper J., Caballero J., Hajnal J.V., Price A.N., Rueckert D. A deep cascade of convolutional neural networks for dynamic MR image reconstruction. IEEE Trans Med Imaging. 2018;37:491–503. doi: 10.1109/TMI.2017.2760978. [DOI] [PubMed] [Google Scholar]
- 30.Fotaki A., Fuin N., Nordio G., Velasco Jimeno C., Qi H., Emmanuel Y., et al. Accelerating 3D MTC-BOOST in patients with congenital heart disease using a joint multi-scale variational neural network reconstruction. Magn Reson Imaging. 2022;92:120–132. doi: 10.1016/j.mri.2022.06.012. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 31.Kustner T., Munoz C., Psenicny A., Bustin A., Fuin N., Qi H., et al. Deep-learning based super-resolution for 3D isotropic coronary MR angiography in less than a minute. Magn Reson Med. 2021;86:2837–2852. doi: 10.1002/mrm.28911. [DOI] [PubMed] [Google Scholar]
- 32.Wang J., Weller D.S., Kramer C.M., Salerno M. DEep learning-based rapid Spiral Image REconstruction (DESIRE) for high-resolution spiral first-pass myocardial perfusion imaging. NMR Biomed. 2022;35 doi: 10.1002/nbm.4661. [DOI] [PubMed] [Google Scholar]
- 33.Hauptmann A., Arridge S., Lucka F., Muthurangu V., Steeden J.A. Real-time cardiovascular MR with spatio-temporal artifact suppression using deep learning-proof of concept in congenital heart disease. Magn Reson Med. 2019;81:1143–1156. doi: 10.1002/mrm.27480. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 34.Haji-Valizadeh H., Guo R., Kucukseymen S., Paskavitz A., Cai X., Rodriguez J., et al. Highly accelerated free-breathing real-time phase contrast cardiovascular MRI via complex-difference deep learning. Magn Reson Med. 2021;86:804–819. doi: 10.1002/mrm.28750. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 35.Hamilton J.I., Currey D., Rajagopalan S., Seiberlich N. Deep learning reconstruction for cardiac magnetic resonance fingerprinting T(1) and T(2) mapping. Magn Reson Med. 2021;85:2127–2135. doi: 10.1002/mrm.28568. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 36.Ghadimi S., Auger D.A., Feng X., Sun C., Meyer CH., Bilchick KC., et al. Fully-automated global and segmental strain analysis of DENSE cardiovascular magnetic resonance using deep learning for segmentation and phase unwrapping. J Cardiovasc Magn Reson. 2021;23:20. doi: 10.1186/s12968-021-00712-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 37.Bai W., Sinclair M., Tarroni G., Oktay O., Rajchl M., Vaillant G., et al. Automated cardiovascular magnetic resonance image analysis with fully convolutional networks. J Cardiovasc Magn Reson. 2018;20 doi: 10.1186/s12968-018-0471-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 38.Hann E., Popescu I.A., Zhang Q., Gonzales RA., Barutcu A., Neubauer S., et al. Deep neural network ensemble for on-the-fly quality control-driven segmentation of cardiac MRI T1 mapping. Med Image Anal. 2021;71 doi: 10.1016/j.media.2021.102029. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 39.Machado I., Puyol-Antón E., Hammernik K., Cruz G., Ugurlu D., Olakorede I., et al. A deep learning-based integrated framework for quality-aware undersampled cine cardiac MRI reconstruction and analysis. IEEE Trans Biomed Eng. 2024;71:855–865. doi: 10.1109/TBME.2023.3321431. [DOI] [PubMed] [Google Scholar]
- 40.Hann E., Popescu I.A., Zhang Q., Gonzales RA., Barutcu A., Neubauer S., et al. Deep neural network ensemble for on-the-fly quality control-driven segmentation of cardiac MRI T1 mapping. Med Image Anal. 2021;71 doi: 10.1016/j.media.2021.102029. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 41.Bhatt N., Ramanan V., Orbach A., Biswas L, Ng M, Guo F, et al. A deep learning segmentation pipeline for cardiac T1 mapping using MRI relaxation-based synthetic contrast augmentation. Radiol Artif Intell. 2022;4 doi: 10.1148/ryai.210294. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 42.Popescu D.M., Abramson H.G., Yu R., Lai C, Shade JK, Wu KC, et al. Anatomically informed deep learning on contrast-enhanced cardiac magnetic resonance imaging for scar segmentation and clinical feature extraction. Cardiovasc Digit Health J. 2022;3:2–13. doi: 10.1016/j.cvdhj.2021.11.007. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 43.Navidi Z., Sun J., Chan R.H., Hanneman K, Al-Arnawoot A, Munim A, et al. Interpretable machine learning for automated left ventricular scar quantification in hypertrophic cardiomyopathy patients. PLOS Digit Health. 2023;2 doi: 10.1371/journal.pdig.0000159. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 44.Gonzales R.A., Ibanez D.H., Hann E., Popescu IA, Burrage MK, Lee YP, et al. Quality control-driven deep ensemble for accountable automated segmentation of cardiac magnetic resonance LGE and VNE images. Front Cardiovasc Med. 2023;10 doi: 10.3389/fcvm.2023.1213290. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 45.Wang S., Chauhan D., Patel H., Amir-Khalili A, da Silva IF, Sojoudi A, et al. Assessment of right ventricular size and function from cardiovascular magnetic resonance images using artificial intelligence. J Cardiovasc Magn Reson. 2022;24:32. doi: 10.1186/s12968-022-00861-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 46.Xu H., Williams S.E., Williams M.C., Newby DE, Taylor J, Neji R, et al. Deep learning estimation of three-dimensional left atrial shape from two-chamber and four-chamber cardiac long axis views. Eur Heart J Cardiovasc Imaging. 2023;24:607–615. doi: 10.1093/ehjci/jead010. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 47.Bratt A., Kim J., Pollie M., Beecy AN, Tehrani NH, Codella N, et al. Machine learning derived segmentation of phase velocity encoded cardiovascular magnetic resonance for fully automated aortic flow quantification. J Cardiovasc Magn Reson. 2019;21(1) doi: 10.1186/s12968-018-0509-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 48.Jani V.P., Kachenoura N., Redheuil A., Teixido-Tura G, Bouaou K, Bollache E, et al. Deep learning-based automated aortic area and distensibility assessment: the multi-ethnic study of atherosclerosis (MESA) J Digit Imaging. 2022;35:594–604. doi: 10.1007/s10278-021-00529-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 49.Ronneberger O., Fischer P., Brox T. Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5–9, 2015, Proceedings, Part III 18. Springer; Cham: 2015. U-net: convolutional networks for biomedical image segmentation; pp. 234–241. [Google Scholar]
- 50.Vesal S., Maier A., Ravikumar N. Fully automated 3D cardiac MRI localisation and segmentation using deep neural networks. J Imaging. 2020;6:65. doi: 10.3390/jimaging6070065. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 51.Isensee F., Jaeger P.F., Kohl S.A.A., Petersen J., Maier-Hein K.H. nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. Nat Methods. 2021;18:203–211. doi: 10.1038/s41592-020-01008-z. [DOI] [PubMed] [Google Scholar]
- 52.Strudel R, Garcia R, Laptev I, Schmid C.Segmenter: transformer for semantic segmentation. Proceedings of the IEEE/CVF International Conference on Computer Vision; 2021:7262–7272.
- 53.Li B., Yang T., Zhao X. NVTrans-UNet: neighborhood vision transformer based U-Net for multi-modal cardiac MR image segmentation. J Appl Clin Med Phys. 2023;24 doi: 10.1002/acm2.13908. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 54.Chen Z., Chen X., Liu Y., Chen E.Z., Chen T., Sun S. Springer Nature Switzerland; Cham: 2024. Enhancing Cardiac MRI Segmentation via Classifier-Guided Two-Stage Network and All-Slice Information Fusion Transformer; pp. 145–154. [Google Scholar]
- 55.Machado I., Puyol-Antón E., Hammernik K., Cruz G., Ugurlu D., Olakorede I., et al. A deep learning-based integrated framework for quality-aware undersampled cine cardiac MRI reconstruction and analysis. IEEE Trans Biomed Eng. 2024;71(3):855–865. doi: 10.1109/TBME.2023.3321431. [DOI] [PubMed] [Google Scholar]
- 56.Xue H., Artico J., Fontana M., Moon J.C., Davies R.H., Kellman P. Landmark detection in cardiac MRI by using a convolutional neural network. Radiol Artif Intell. 2021;3 doi: 10.1148/ryai.2021200197. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 57.Balakrishnan G., Zhao A., Sabuncu M.R., Guttag J., Dalca A.V. VoxelMorph: a learning framework for deformable medical image registration. IEEE Trans Med Imaging. 2019;38:1788–1800. doi: 10.1109/TMI.2019.2897538. [DOI] [PubMed] [Google Scholar]
- 58.Upendra RR, Simon R, Linte CA. Joint Deep Learning Framework for Image Registration and Segmentation of Late Gadolinium Enhanced MRI and Cine Cardiac MRI. Proc SPIE Int Soc Opt Eng. 2021 Feb;11598:115980F. doi: 10.1117/12.2581386. Epub 2021 Feb 15. PMID: 34079155; PMCID: PMC8168979. [DOI] [PMC free article] [PubMed]
- 59.Gonzales R.A., Zhang Q., Papiez B.W., Werys K, Lukaschuk E, Popescu IA, et al. MOCOnet: robust motion correction of cardiovascular magnetic resonance T1 mapping using convolutional neural networks. Front Cardiovasc Med. 2021;8 doi: 10.3389/fcvm.2021.768245. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 60.Sheikhjafari A., Noga M., Punithakumar K., Ray N. Unsupervised deformable image registration with fully connected generative neural network. Med Imaging Deep Learn. 2022 [Google Scholar]
- 61.Zakeri A., Hokmabadi A., Bi N., Wijesinghe I, Nix MG, Petersen SE, et al. DragNet: learning-based deformable registration for realistic cardiac MR sequence generation from a single frame. Med Image Anal. 2023;83 doi: 10.1016/j.media.2022.102678. [DOI] [PubMed] [Google Scholar]
- 62.Ye M, Kanski M, Yang D, Chang Q, Yan Z, Huang Q, et al. Deeptag: an unsupervised deep learning method for motion tracking on cardiac tagging magnetic resonance images. Proceedings of IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2021:7261–7271.
- 63.Morales M.A., van den Boomen M., Nguyen C., Kalpathy-Cramer J, Rosen BR, Stultz CM, et al. DeepStrain: a deep learning workflow for the automated characterization of cardiac mechanics. Front Cardiovasc Med. 2021;8 doi: 10.3389/fcvm.2021.730316. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 64.Arava D., Masarwy M., Khawaled S., Freiman M. 2021 IEEE International Conference on Microwaves, Antennas, Communications and Electronic Systems (COMCAS) IEEE; Israel: 2021. Deep-learning based motion correction for myocardial T1 mapping; pp. 55–59. [Google Scholar]
- 65.Pan J., Rueckert D., Küstner T., Hammernik K. Machine Learning for Medical Image Reconstruction: 4th International Workshop, MLMIR 2021, Held in Conjunction with MICCAI 2021, Strasbourg, France, October 1, 2021, Proceedings 4. Springer; Cham: 2021. Efficient image registration network for non-rigid cardiac motion estimation; pp. 14–24. [Google Scholar]
- 66.Blansit K., Retson T., Masutani E., Bahrami N., Hsiao A. Deep learning-based prescription of cardiac MRI planes. Radiol Artif Intell. 2019;1 doi: 10.1148/ryai.2019180069. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 67.Edalati M., Zheng Y., Watkins M.P., Chen J, Liu L, Zhang S, et al. Implementation and prospective clinical validation of AI-based planning and shimming techniques in cardiac MRI. Med Phys. 2022;49:129–143. doi: 10.1002/mp.15327. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 68.Duan J., Bello G., Schlemper J., Bai W, Dawes TJW, Biffi C, et al. Automatic 3D bi-ventricular segmentation of cardiac images by a shape-refined multi- task deep learning approach. IEEE Trans Med Imaging. 2019;38:2151–2164. doi: 10.1109/TMI.2019.2894322. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 69.Gonzales R.A., Seemann F., Lamy J., Mojibian H, Atar D, Erlinge D, et al. MVnet: automated time-resolved tracking of the mitral valve plane in CMR long-axis cine images with residual neural networks: a multi-center, multi-vendor study. J Cardiovasc Magn Reson. 2021;23 doi: 10.1186/s12968-021-00824-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 70.Piccini D., Demesmaeker R., Heerfordt J., Yerly J, Di Sopra L, Masci PG, et al. Deep learning to automate reference-free image quality assessment of whole-heart MR images. Radiol Artif Intell. 2020;2 doi: 10.1148/ryai.2020190123. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 71.Tarroni G., Bai W., Oktay O., Schuh A, Suzuki H, Glocker B, et al. Large-scale quality control of cardiac imaging in population studies: application to UK Biobank. Sci Rep. 2020;10:2408. doi: 10.1038/s41598-020-58212-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 72.Tarroni G., Oktay O., Bai W., Schuh A, Suzuki H, Passerat-Palmbach J, et al. Learning-based quality control for cardiac MR images. IEEE Trans Med Imaging. 2019;38:1127–1138. doi: 10.1109/TMI.2018.2878509. [DOI] [PubMed] [Google Scholar]
- 73.Ruijsink B., Puyol-Anton E., Oksuz I., Sinclair M, Bai W, Schnabel JA, et al. Fully automated, quality-controlled cardiac analysis from CMR: validation and large-scale application to characterize cardiac function. JACC Cardiovasc Imaging. 2020;13:684–695. doi: 10.1016/j.jcmg.2019.05.030. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 74.Oksuz I., Ruijsink B., Puyol-Anton E., Clough JR, Cruz G, Bustin A, et al. Automatic CNN-based detection of cardiac MR motion artefacts using k-space data augmentation and curriculum learning. Med Image Anal. 2019;55:136–147. doi: 10.1016/j.media.2019.04.009. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 75.Zhang Q., Hann E., Werys K., Wu C, Popescu I, Lukaschuk E, et al. Deep learning with attention supervision for automated motion artefact detection in quality control of cardiac T1-mapping. Artif Intell Med. 2020;110 doi: 10.1016/j.artmed.2020.101955. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 76.Vergani V., Razavi R., Puyol-Anton E., Ruijsink B. Deep learning for classification and selection of cine CMR images to achieve fully automated quality-controlled CMR analysis from scanner to report. Front Cardiovasc Med. 2021;8 doi: 10.3389/fcvm.2021.742640. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 77.Arega T.W., Bricq S., Meriaudeau F. Statistical Atlases and Computational Models of the Heart Regular and CMRxMotion Challenge Papers. Springer Nature Switzerland; Cham: 2022. Automatic quality assessment of cardiac MR images with motion artefacts using multi-task learning and k-space motion artefact augmentation; pp. 418–428. [Google Scholar]
- 78.Li H., Jiang S., Tian S., Yue X., Chen W., Fan Y. Statistical Atlases and Computational Models of the Heart Regular and CMRxMotion Challenge Papers. Springer Nature Switzerland; Cham: 2022. Automatic image quality assessment and cardiac segmentation based on CMR images; pp. 439–446. [Google Scholar]
- 79.Zhang L., Gooya A., Pereanez M., Dong B, Piechnik S, Neubauer S, et al. Automatic assessment of full left ventricular coverage in cardiac cine magnetic resonance imaging with fisher discriminative 3D CNN. IEEE Trans Biomed Eng. 2018;66:1975–1986. doi: 10.1109/TBME.2018.2881952. [DOI] [PubMed] [Google Scholar]
- 80.Bard A., Raisi-Estabragh Z., Ardissino M., Lee AM, Pugliese F, Dey D, et al. Automated Quality-controlled cardiovascular magnetic resonance pericardial fat quantification using a convolutional neural network in the UK Biobank. Front Cardiovasc Med. 2021;8 doi: 10.3389/fcvm.2021.677574. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 81.Uslu F., Bharath A.A. TMS-Net: a segmentation network coupled with a run-time quality control method for robust cardiac image segmentation. Comput Biol Med. 2023;152 doi: 10.1016/j.compbiomed.2022.106422. [DOI] [PubMed] [Google Scholar]
- 82.Fournel J., Bartoli A., Bendahan D., Guye M, Bernard M, Rauseo E, et al. Medical image segmentation automatic quality control: a multi-dimensional approach. Med Image Anal. 2021;74 doi: 10.1016/j.media.2021.102213. [DOI] [PubMed] [Google Scholar]
- 83.Alba X., Lekadir K., Pereanez M., Medrano-Gracia P., Young A.A., Frangi A.F. Automatic initialization and quality control of large-scale cardiac MRI segmentations. Med Image Anal. 2018;43:129–141. doi: 10.1016/j.media.2017.10.001. [DOI] [PubMed] [Google Scholar]
- 84.Robinson R., Valindria V.V., Bai W., Oktay O, Kainz B, Suzuki H, et al. Automated quality control in image segmentation: application to the UK Biobank cardiovascular magnetic resonance imaging study. J Cardiovasc Magn Reson. 2019;21:18. doi: 10.1186/s12968-019-0523-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 85.Sander J., de Vos B.D., Isgum I. Automatic segmentation with detection of local segmentation failures in cardiac MRI. Sci Rep. 2020;10 doi: 10.1038/s41598-020-77733-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 86.Puyol-Anton E., Ruijsink B., Baumgartner C.F., Masci PG, Sinclair M, Konukoglu E, et al. Automated quantification of myocardial tissue characteristics from native T(1) mapping using neural networks with uncertainty-based quality-control. J Cardiovasc Magn Reson. 2020;22 doi: 10.1186/s12968-020-00650-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 87.Khurshid S., Friedman S., Pirruccello J.P., Di Achille P, Diamant N, Anderson CD, et al. Deep learning to predict cardiac magnetic resonance-derived left ventricular mass and hypertrophy from 12-lead ECGs. Circ Cardiovasc Imaging. 2021;14 doi: 10.1161/CIRCIMAGING.120.012281. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 88.Lyon A., Ariga R., Minchole A., Mahmod M, Ormondroyd E. Distinct ECG phenotypes identified in hypertrophic cardiomyopathy using machine learning associate with arrhythmic risk markers. Front Physiol. 2018;9:213. doi: 10.3389/fphys.2018.00213. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 89.Hernandez-Casillas A, Del-Canto I, Ruiz-Espana S, Lopez-Lereu MP, Monmeneu JV, Moratal D. Detection and Classification of Myocardial Infarction Transmurality Using Cardiac MR Image Analysis and Machine Learning Algorithms. Annu Int Conf IEEE Eng Med Biol Soc. 2022 Jul;2022:1686-1689. doi: 10.1109/EMBC48229.2022.9871924. PMID: 36085769. [DOI] [PubMed]
- 90.Pezel T., Sanguineti F., Garot P., Unterseeh T, Champagne S, Toupin S, et al. Machine-learning score using stress CMR for death prediction in patients with suspected or known CAD. JACC Cardiovasc Imaging. 2022;15:1900–1913. doi: 10.1016/j.jcmg.2022.05.007. [DOI] [PubMed] [Google Scholar]
- 91.Khozeimeh F., Sharifrazi D., Izadi N.H., Joloudari JH, Shoeibi A, Alizadehsani R, et al. RF-CNN-F: random forest with convolutional neural network features for coronary artery disease diagnosis based on cardiac magnetic resonance. Sci Rep. 2022;12 doi: 10.1038/s41598-022-15374-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 92.Shu S., Hong Z., Peng Q., Zhou X, Zhang T, Wang, et al. A machine-learning-based method to predict adverse events in patients with dilated cardiomyopathy and severely reduced ejection fractions. Br J Radiol. 2021;94 doi: 10.1259/bjr.20210259. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 93.Shi R.Y., Wu R., An D.L., Chen BH, Wu CW, Du L, et al. Texture analysis applied in T1 maps and extracellular volume obtained using cardiac MRI in the diagnosis of hypertrophic cardiomyopathy and hypertensive heart disease compared with normal controls. Clin Radiol. 2021;76:236.e9–236.e19. doi: 10.1016/j.crad.2020.11.001. [DOI] [PubMed] [Google Scholar]
- 94.Agibetov A., Kammerlander A., Duca F., Nitsche C., Koschutnik M., Donà C., et al. Convolutional neural networks for fully automated diagnosis of cardiac amyloidosis by cardiac magnetic resonance imaging. J Pers Med. 2021;11(12) doi: 10.3390/jpm11121268. PMID: 34945740; PMCID: PMC8705947. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 95.Martini N., Aimo A., Barison A., Della Latta D, Vergaro G, Aquaro GD, et al. Deep learning to diagnose cardiac amyloidosis from cardiovascular magnetic resonance. J Cardiovasc Magn Reson. 2020;22:84. doi: 10.1186/s12968-020-00690-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 96.Sharifrazi D., Alizadehsani R., Joloudari J.H., Sobhaninia Z, Shoeibi A, Khozeimeh F, et al. CNN-KCL: automatic myocarditis diagnosis using convolutional neural network combined with k-means clustering. Math Biosci Eng. 2022;19:2381–2402. doi: 10.3934/mbe.2022110. [DOI] [PubMed] [Google Scholar]
- 97.Moravvej S.V., Alizadehsani R., Khanam S., Sobhaninia Z, Shoeibi A, Khozeimeh F, et al. RLMD-PA: a reinforcement learning-based myocarditis diagnosis combined with a population-based algorithm for pretraining weights. Contrast Media Mol Imaging. 2022;2022 doi: 10.1155/2022/8733632. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 98.Ghareeb A.N., Karim S.A., Jani V.P., Francis W, Van den Eynde J, Alkuwari M, et al. Patterns of cardiovascular magnetic resonance inflammation in acute myocarditis from South Asia and Middle East. Int J Cardiol Heart Vasc. 2022;40 doi: 10.1016/j.ijcha.2022.101029. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 99.Eichhorn C., Greulich S., Bucciarelli-Ducci C., Sznitman R., Kwong R.Y., Grani C. Multiparametric cardiovascular magnetic resonance approach in diagnosing, monitoring, and prognostication of myocarditis. JACC Cardiovasc Imaging. 2022;15:1325–1338. doi: 10.1016/j.jcmg.2021.11.017. [DOI] [PubMed] [Google Scholar]
- 100.Cau R., Pisu F., Porcu M., Montisci R, Bassareo P. Machine learning approach in diagnosing Takotsubo cardiomyopathy: the role of the combined evaluation of atrial and ventricular strain, and parametric mapping. Int J Cardiol. 2023;373:124–133. doi: 10.1016/j.ijcard.2022.11.021. [DOI] [PubMed] [Google Scholar]
- 101.Mannil M., Kato K., Manka R., von Spiczak J., Peters B., Cammann V.L., et al. Prognostic value of texture analysis from cardiac magnetic resonance imaging in patients with Takotsubo syndrome: a machine learning based proof-of-principle approach. Sci Rep-UK. 2020;10(1) doi: 10.1038/s41598-020-76432-4. PMID: 33239695; PMCID: PMC7689426. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 102.Dykstra S., Satriano A., Cornhill A.K., Lei LY, Labib D, Mikami Y, et al. Machine learning prediction of atrial fibrillation in cardiovascular patients using cardiac magnetic resonance and electronic health information. Front Cardiovasc Med. 2022;9 doi: 10.3389/fcvm.2022.998558. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 103.Cornhill A.K., Dykstra S., Satriano A., Labib D, Mikami Y, Flewitt J, et al. Machine learning patient-specific prediction of heart failure hospitalization using cardiac MRI-based phenotype and electronic health information. Front Cardiovasc Med. 2022;9 doi: 10.3389/fcvm.2022.890904. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 104.Bivona D.J., Tallavajhala S., Abdi M., Oomen PJA, Gao X, Malhotra R, et al. Machine learning for multidimensional response and survival after cardiac resynchronization therapy using features from cardiac magnetic resonance. Heart Rhythm O2. 2022;3:542–552. doi: 10.1016/j.hroo.2022.06.005. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 105.Kwak S., Everett R.J., Treibel T.A., Yang S, Hwang D, Ko T, et al. Markers of myocardial damage predict mortality in patients with aortic stenosis. J Am Coll Cardiol. 2021;78:545–558. doi: 10.1016/j.jacc.2021.05.047. [DOI] [PubMed] [Google Scholar]
- 106.Lu C., Wang Y.G., Zaman F., Wu X, Adhaduk M, Chang A, et al. Predicting adverse cardiac events in sarcoidosis: deep learning from automated characterization of regional myocardial remodeling. Int J Cardiovasc Imaging. 2022;38:1825–1836. doi: 10.1007/s10554-022-02564-5. [DOI] [PubMed] [Google Scholar]
- 107.Okada D.R., Xie E., Assis F., Smith J, Derakhshan A, Gowani Z, et al. Regional abnormalities on cardiac magnetic resonance imaging and arrhythmic events in patients with cardiac sarcoidosis. J Cardiovasc Electrophysiol. 2019;30:1967–1976. doi: 10.1111/jce.14082. [DOI] [PubMed] [Google Scholar]
- 108.Ghadimi S, Bivona DJ, Bilchick KC, Epstein FH. Deep learning‑based prognostic model using cine DENSE MRI for outcome prediction after cardiac resynchronization therapy. CMR Global Conference. London, UK; 2024.
- 109.Fahmy A.S., Rowin E.J., Arafati A., Al-Otaibi T., Maron M.S., Nezafat R. Radiomics and deep learning for myocardial scar screening in hypertrophic cardiomyopathy. J Cardiovasc Magn Reson. 2022;24 doi: 10.1186/s12968-022-00869-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 110.Infante T., Cavaliere C., Punzo B., Grimaldi V., Salvatore M., Napoli C. Radiogenomics and artificial intelligence approaches applied to cardiac computed tomography angiography and cardiac magnetic resonance for precision medicine in coronary heart disease: a systematic review. Circ Cardiovasc Imaging. 2021;14:1133–1146. doi: 10.1161/CIRCIMAGING.121.013025. [DOI] [PubMed] [Google Scholar]
- 111.Antonopoulos A.S., Boutsikou M., Simantiris S., Angelopoulos A, Lazaros G, Panagiotopoulos I, et al. Machine learning of native T1 mapping radiomics for classification of hypertrophic cardiomyopathy phenotypes. Sci Rep. 2021;11 doi: 10.1038/s41598-021-02971-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 112.Cetin I., Raisi-Estabragh Z., Petersen S.E., Napel S, Piechnik SK, Neubauer S, et al. Radiomics signatures of cardiovascular risk factors in cardiac MRI: results from the UK Biobank. Front Cardiovasc Med. 2020;7 doi: 10.3389/fcvm.2020.591368. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 113.Mannil M., Kato K., Manka R., von Spiczak J, Peters B, Cammann VL, et al. Prognostic value of texture analysis from cardiac magnetic resonance imaging in patients with Takotsubo syndrome: a machine learning based proof-of-principle approach. Sci Rep. 2020;10 doi: 10.1038/s41598-020-76432-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 114.Huang S., Shi K., Zhang Y., Yan WF, Guo YK, Li Y, et al. Texture analysis of T2-weighted cardiovascular magnetic resonance imaging to discriminate between cardiac amyloidosis and hypertrophic cardiomyopathy. BMC Cardiovasc Disord. 2022;22:235. doi: 10.1186/s12872-022-02671-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 115.Qin C., Schlemper J., Caballero J., Price A.N., Hajnal J.V., Rueckert D. Convolutional recurrent neural networks for dynamic MR image reconstruction. IEEE Trans Med Imaging. 2019;38:280–290. doi: 10.1109/TMI.2018.2863670. [DOI] [PubMed] [Google Scholar]
- 116.Biswas S., Aggarwal H.K., Jacob M. Dynamic MRI using model-based deep learning and SToRM priors: MoDL-SToRM. Magn Reson Med. 2019;82:485–494. doi: 10.1002/mrm.27706. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 117.Sandino C.M., Lai P., Vasanawala S.S., Cheng J.Y. Accelerating cardiac cine MRI using a deep learning-based ESPIRiT reconstruction. Magn Reson Med. 2021;85:152–167. doi: 10.1002/mrm.28420. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 118.Kustner T., Fuin N., Hammernik K., Bustin A, Qi H, Hajhosseiny R, et al. CINENet: deep learning-based 3D cardiac CINE MRI reconstruction with multi-coil complex-valued 4D spatio-temporal convolutions. Sci Rep. 2020;10 doi: 10.1038/s41598-020-70551-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 119.Wang S., Ke Z., Cheng H., Jia S, Ying L, Zheng H, et al. DIMENSION: dynamic MR imaging with both k-space and spatial prior knowledge obtained via multi-supervised network training. NMR Biomed. 2022;35 doi: 10.1002/nbm.4131. [DOI] [PubMed] [Google Scholar]
- 120.El-Rewaidy H., Fahmy A.S., Pashakhanloo F., Cai X, Kucukseymen S, Csecs I, et al. Multi-domain convolutional neural network (MD-CNN) for radial reconstruction of dynamic cardiac MRI. Magn Reson Med. 2021;85:1195–1208. doi: 10.1002/mrm.28485. [DOI] [PubMed] [Google Scholar]
- 121.Masutani E.M., Bahrami N., Hsiao A. Deep learning single-frame and multiframe super-resolution for cardiac MRI. Radiology. 2020;295:552–561. doi: 10.1148/radiol.2020192173. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 122.Chen J., Zhang H., Mohiaddin R., Wong T, Firmin D, Keegan J, et al. Adaptive hierarchical dual consistency for semi-supervised left atrium segmentation on cross-domain data. IEEE Trans Med Imaging. 2022;41:420–433. doi: 10.1109/TMI.2021.3113678. [DOI] [PubMed] [Google Scholar]
- 123.Montalt-Tordera J., Pajaziti E., Jones R., Sauvage E, Puranik R, Singh AAV, et al. Automatic segmentation of the great arteries for computational hemodynamic assessment. J Cardiovasc Magn Reson. 2022;24:57. doi: 10.1186/s12968-022-00891-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 124.Schuster A., Lange T., Backhaus S.J., Strohmeyer C, Boom PC, Matz J, et al. Fully automated cardiac assessment for diagnostic and prognostic stratification following myocardial infarction. J Am Heart Assoc. 2020;9 doi: 10.1161/JAHA.120.016612. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 125.Gao Y., Zhou Z., Zhang B., Guo S, Bo K, Li S, et al. Deep learning-based prognostic model using non-enhanced cardiac cine MRI for outcome prediction in patients with heart failure. Eur Radiol. 2023;33:8203–8213. doi: 10.1007/s00330-023-09785-9. [DOI] [PubMed] [Google Scholar]
- 126.Bello G.A., Dawes T.J.W., Duan J., Biffi C, de Marvao A, Howard L, et al. Deep learning cardiac motion analysis for human survival prediction. Nat Mach Intell. 2019;1:95–104. doi: 10.1038/s42256-019-0019-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 127.Diller G.P., Orwat S., Vahle J., Bauer UMM, Urban A, Sarikouch S, et al. Prediction of prognosis in patients with tetralogy of Fallot based on deep learning imaging analysis. Heart. 2020;106:1007–1014. doi: 10.1136/heartjnl-2019-315962. [DOI] [PubMed] [Google Scholar]
- 128.Samad M.D., Wehner G.J., Arbabshirani M.R., Jing L, Powell AJ, Geva T, et al. Predicting deterioration of ventricular function in patients with repaired tetralogy of Fallot using machine learning. Eur Heart J Cardiovasc Imaging. 2018;19:730–738. doi: 10.1093/ehjci/jey003. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 129.Zhang N., Yang G., Gao Z., Xu C, Zhang Y, Shi R, et al. Deep learning for diagnosis of chronic myocardial infarction on nonenhanced cardiac cine MRI. Radiology. 2019;291:606–617. doi: 10.1148/radiol.2019182304. [DOI] [PubMed] [Google Scholar]
- 130.Swift A.J., Lu H., Uthoff J., Garg P, Cogliano M, Taylor J, et al. A machine learning cardiac magnetic resonance approach to extract disease features and automate pulmonary arterial hypertension diagnosis. Eur Heart J Cardiovasc Imaging. 2021;22:236–245. doi: 10.1093/ehjci/jeaa001. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 131.Backhaus S.J., Aldehayat H., Kowallick J.T., Evertz R, Lange T, Kutty S, et al. Artificial intelligence fully automated myocardial strain quantification for risk stratification following acute myocardial infarction. Sci Rep. 2022;12 doi: 10.1038/s41598-022-16228-w. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 132.Kawakubo M., Moriyama D., Yamasaki Y., Abe K, Hosokawa K, Moriyama T, et al. Right ventricular strain and volume analyses through deep learning-based fully automatic segmentation based on radial long-axis reconstruction of short-axis cine magnetic resonance images. MAGMA. 2022;35:911–921. doi: 10.1007/s10334-022-01017-3. [DOI] [PubMed] [Google Scholar]
- 133.Tan Z., Yang Y., Wu X., Li S, Li L, Zhong L, et al. Left atrial remodeling and the prognostic value of feature tracking derived left atrial strain in patients with light-chain amyloidosis: a cardiovascular magnetic resonance study. Int J Cardiovasc Imaging. 2022;38:1519–1532. doi: 10.1007/s10554-022-02534-x. [DOI] [PubMed] [Google Scholar]
- 134.Satriano A., Afzal Y., Sarim Afzal M., Fatehi Hassanabad A, Wu C, Dykstra S, et al. Neural-network-based diagnosis using 3-dimensional myocardial architecture and deformation: demonstration for the differentiation of hypertrophic cardiomyopathy. Front Cardiovasc Med. 2020;7 doi: 10.3389/fcvm.2020.584727. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 135.Wang Y., Sun C., Ghadimi S., Auger DC, Croisille P, Viallon M, et al. StrainNet: improved myocardial strain analysis of cine MRI by deep learning from DENSE. Radiol Cardiothorac Imaging. 2023;5 doi: 10.1148/ryct.220196. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 136.Masutani E.M., Chandrupatla R.S., Wang S., Zocchi C, Hahn LD, Horowitz M, et al. Deep learning synthetic strain: quantitative assessment of regional myocardial wall motion at MRI. Radiol Cardiothorac Imaging. 2023;5 doi: 10.1148/ryct.220202. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 137.Kuruvilla S., Adenaw N., Katwal A.B., Lipinski M.J., Kramer C.M., Salerno M. Late gadolinium enhancement on cardiac magnetic resonance predicts adverse cardiovascular outcomes in nonischemic cardiomyopathy: a systematic review and meta-analysis. Circ Cardiovasc Imaging. 2014;7:250–258. doi: 10.1161/CIRCIMAGING.113.001144. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 138.Dall'Armellina E., Karia N., Lindsay A.C., Karamitsos TD, Ferreira V, Robson MD, et al. Dynamic changes of edema and late gadolinium enhancement after acute myocardial infarction and their relationship to functional recovery and salvage index. Circ Cardiovasc Imaging. 2011;4:228–236. doi: 10.1161/CIRCIMAGING.111.963421. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 139.Kramer C.M., Barkhausen J., Bucciarelli-Ducci C., Flamm S.D., Kim R.J., Nagel E. Standardized cardiovascular magnetic resonance imaging (CMR) protocols: 2020 update. J Cardiovasc Magn Reson. 2020;22:17. doi: 10.1186/s12968-020-00607-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 140.El-Rewaidy H., Neisius U., Mancio J., Kucukseymen S, Rodriguez J, Paskavitz A, et al. Deep complex convolutional network for fast reconstruction of 3D late gadolinium enhancement cardiac MRI. NMR Biomed. 2020;33 doi: 10.1002/nbm.4312. [DOI] [PubMed] [Google Scholar]
- 141.Muscogiuri G., Martini C., Gatti M., Dell'Aversana S, Ricci F, Guglielmo M, et al. Feasibility of late gadolinium enhancement (LGE) in ischemic cardiomyopathy using 2D-multisegment LGE combined with artificial intelligence reconstruction deep learning noise reduction algorithm. Int J Cardiol. 2021;343:164–170. doi: 10.1016/j.ijcard.2021.09.012. [DOI] [PubMed] [Google Scholar]
- 142.Zhuang X., Xu J., Luo X., Chen C, Ouyang C, Rueckert D, et al. Cardiac segmentation on late gadolinium enhancement MRI: a benchmark study from multi-sequence cardiac MR segmentation challenge. Med Image Anal. 2022;81 doi: 10.1016/j.media.2022.102528. [DOI] [PubMed] [Google Scholar]
- 143.Fahmy A.S., Rausch J., Neisius U., Chan RH, Maron MS, Appelbaum E, et al. Automated cardiac MR scar quantification in hypertrophic cardiomyopathy using deep convolutional neural networks. JACC Cardiovasc Imaging. 2018;11:1917–1918. doi: 10.1016/j.jcmg.2018.04.030. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 144.Moccia S., Banali R., Martini C., Muscogiuri G, Pontone G, Pepi M, et al. Development and testing of a deep learning-based strategy for scar segmentation on CMR-LGE images. MAGMA. 2019;32:187–195. doi: 10.1007/s10334-018-0718-4. [DOI] [PubMed] [Google Scholar]
- 145.Zabihollahy F., Rajchl M., White J.A., Ukwatta E. Fully automated segmentation of left ventricular scar from 3D late gadolinium enhancement magnetic resonance imaging using a cascaded multi-planar U-Net (CMPU-Net). Med Phys. 2020;47:1645–1655. doi: 10.1002/mp.14022. [DOI] [PubMed] [Google Scholar]
- 146.Romero R.W., Viallon M., Spaltenstein J., Petrusca L, Bernard O, Belle L, et al. CMRSegTools: an open-source software enabling reproducible research in segmentation of acute myocardial infarct in CMR images. PLoS One. 2022;17 doi: 10.1371/journal.pone.0274491. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 147.Guo F., Krahn P.R.P., Escartin T., Roifman I., Wright G. Cine and late gadolinium enhancement MRI registration and automated myocardial infarct heterogeneity quantification. Magn Reson Med. 2021;85:2842–2855. doi: 10.1002/mrm.28596. [DOI] [PubMed] [Google Scholar]
- 148.Leong C.O., Lim E., Tan L.K., Abdul Aziz YF, Sridhar GS, Socrates D, et al. Segmentation of left ventricle in late gadolinium enhanced MRI through 2D–4D registration for infarct localization in 3D patient-specific left ventricular model. Magn Reson Med. 2019;81:1385–1398. doi: 10.1002/mrm.27486. [DOI] [PubMed] [Google Scholar]
- 149.Fotaki A., Velasco C., Prieto C., Botnar R.M. Quantitative MRI in cardiometabolic disease: from conventional cardiac and liver tissue mapping techniques to multi-parametric approaches. Front Cardiovasc Med. 2022;9 doi: 10.3389/fcvm.2022.991383. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 150.Jeelani H, Yang Y, Zhou R, Kramer CM, Salerno M, Weller DS. A myocardial T1-mapping framework with recurrent and U-Net convolutional neural networks. 2020 IEEE 17th International Symposium on Biomedical Imaging (ISBI); 2020:1941–1944.
- 151.Xue H., Shah S., Greiser A., Guetter C, Littmann A, Jolly MP, et al. Motion correction for myocardial T1 mapping using image registration with synthetic image estimation. Magn Reson Med. 2012;67:1644–1655. doi: 10.1002/mrm.23153. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 152.Zhang Q., Burrage M.K., Lukaschuk E., Shanmuganathan M, Popescu IA, Nikolaidou C, et al. Toward replacing late gadolinium enhancement with artificial intelligence virtual native enhancement for gadolinium-free cardiovascular magnetic resonance tissue characterization in hypertrophic cardiomyopathy. Circulation. 2021;144:589–599. doi: 10.1161/CIRCULATIONAHA.121.054432. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 153.Zhang Q., Burrage M.K., Shanmuganathan M., Gonzales RA, Lukaschuk E, Thomas KE, et al. Artificial intelligence for contrast-free MRI: scar assessment in myocardial infarction using deep learning-based virtual native enhancement. Circulation. 2022;146:1492–1503. doi: 10.1161/CIRCULATIONAHA.122.060137. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 154.Thompson P., Zhang Q., Neubauer S., Piechnik SK, Ferreira VM, Plein S, et al. Gadolinium-free virtual native enhancement for chronic myocardial infarction assessment: independent blinded validation and reproducibility between two centres. Proceedings of the 2024 SCMR Annual Scientific Sessions; London, UK: SCMR; 2023. [Google Scholar]
- 155.Ma D., Gulani V., Seiberlich N., Liu K, Sunshine JL, Duerk JL, et al. Magnetic resonance fingerprinting. Nature. 2013;495:187–192. doi: 10.1038/nature11971. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 156.Christodoulou A.G., Shaw J.L., Nguyen C., Yang Q, Xie Y, Wang N, et al. Magnetic resonance multitasking for motion-resolved quantitative cardiovascular imaging. Nat Biomed Eng. 2018;2:215–226. doi: 10.1038/s41551-018-0217-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 157.Hermann I., Kellman P., Demirel O.B., Akcakaya M., Schad L.R., Weingartner S. Free-breathing simultaneous T1, T2, and T2* quantification in the myocardium. Magn Reson Med. 2021;86:1226–1240. doi: 10.1002/mrm.28753. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 158.Chow K., Hayes G., Flewitt J.A., Feuchter P, Lydell C, Howarth A, et al. Improved accuracy and precision with three-parameter simultaneous myocardial T1 and T2 mapping using multiparametric SASHA. Magn Reson Med. 2022;87:2775–2791. doi: 10.1002/mrm.29170. [DOI] [PubMed] [Google Scholar]
- 159.Milotta G., Bustin A., Jaubert O., Neji R., Prieto C., Botnar R.M. 3D whole-heart isotropic-resolution motion-compensated joint T1/T2 mapping and water/fat imaging. Magn Reson Med. 2020;84:3009–3026. doi: 10.1002/mrm.28330. [DOI] [PubMed] [Google Scholar]
- 160.Phair A., Cruz G., Qi H., Botnar R.M., Prieto C. Free-running 3D whole-heart T1 and T2 mapping and cine MRI using low-rank reconstruction with non-rigid cardiac motion correction. Magn Reson Med. 2023;89:217–232. doi: 10.1002/mrm.29449. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 161.Hamilton J.I. A self-supervised deep learning reconstruction for shortening the breathhold and acquisition window in cardiac magnetic resonance fingerprinting. Front Cardiovasc Med. 2022;9 doi: 10.3389/fcvm.2022.928546. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 162.Chen Y., Shaw J.L., Xie Y., Li D., Christodoulou A.G. Deep learning within a priori temporal feature spaces for large-scale dynamic MR image reconstruction: application to 5-D cardiac MR multitasking. Med Image Comput Comput Assist Interv. 2019;11765:495–504. doi: 10.1007/978-3-030-32245-8_55. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 163.Hosseini S.A.H., Zhang C., Weingartner S., Moeller S, Stuber M, Ugurbil K, et al. Accelerated coronary MRI with sRAKI: a database-free self-consistent neural network k-space reconstruction for arbitrary undersampling. PLoS One. 2020;15 doi: 10.1371/journal.pone.0229418. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 164.Qi H., Hajhosseiny R., Cruz G., Kuestner T, Kunze K, Neji R, et al. End-to-end deep learning nonrigid motion-corrected reconstruction for highly accelerated free-breathing coronary MRA. Magn Reson Med. 2021;86:1983–1996. doi: 10.1002/mrm.28851. [DOI] [PubMed] [Google Scholar]
- 165.Steeden J.A., Quail M., Gotschy A., Mortensen KH, Hauptmann A, Arridge S, et al. Rapid whole-heart CMR with single volume super-resolution. J Cardiovasc Magn Reson. 2020;22:56. doi: 10.1186/s12968-020-00651-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 166.Henningsson M., Smink J., Razavi R., Botnar R.M. Prospective respiratory motion correction for coronary MR angiography using a 2D image navigator. Magn Reson Med. 2013;69:486–494. doi: 10.1002/mrm.24280. [DOI] [PubMed] [Google Scholar]
- 167.Stehning C., Bornert P., Nehrke K., Eggers H., Stuber M. Free-breathing whole-heart coronary MRA with 3D radial SSFP and self-navigated image reconstruction. Magn Reson Med. 2005;54:476–480. doi: 10.1002/mrm.20557. [DOI] [PubMed] [Google Scholar]
- 168.Klein S., Staring M., Murphy K., Viergever M.A., Pluim J.P. elastix: a toolbox for intensity-based medical image registration. IEEE Trans Med Imaging. 2010;29:196–205. doi: 10.1109/TMI.2009.2035616. [DOI] [PubMed] [Google Scholar]
- 169.Rueckert D., Sonoda L.I., Hayes C., Hill D.L., Leach M.O., Hawkes D.J. Nonrigid registration using free-form deformations: application to breast MR images. IEEE Trans Med Imaging. 1999;18:712–721. doi: 10.1109/42.796284. [DOI] [PubMed] [Google Scholar]
- 170.Qi H., Fuin N., Cruz G., Pan J, Kuestner T, Bustin A, et al. Non-rigid respiratory motion estimation of whole-heart coronary MR images using unsupervised deep learning. IEEE Trans Med Imaging. 2021;40:444–454. doi: 10.1109/TMI.2020.3029205. [DOI] [PubMed] [Google Scholar]
- 171.Munoz C., Qi H., Cruz G., Kustner T., Botnar R.M., Prieto C. Self-supervised learning-based diffeomorphic non-rigid motion estimation for fast motion-compensated coronary MR angiography. Magn Reson Imaging. 2022;85:10–18. doi: 10.1016/j.mri.2021.10.004. [DOI] [PubMed] [Google Scholar]
- 172.Nayak K.S., Nielsen J.F., Bernstein M.A., Markl M, Gatehouse PD, Botnar RM, et al. Cardiovascular magnetic resonance phase contrast imaging. J Cardiovasc Magn Reson. 2015;17:71. doi: 10.1186/s12968-015-0172-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 173.Jaubert O., Steeden J., Montalt-Tordera J., Arridge S., Kowalik G.T., Muthurangu V. Deep artifact suppression for spiral real-time phase contrast cardiac magnetic resonance imaging in congenital heart disease. Magn Reson Imaging. 2021;83:125–132. doi: 10.1016/j.mri.2021.08.005. [DOI] [PubMed] [Google Scholar]
- 174.Ferdian E., Suinesiaputra A., Dubowitz D.J., Zhao D., Wang A., Cowan B., et al. 4DFlowNet: super-resolution 4D flow MRI using deep learning and computational fluid dynamics. Front Phys. 2020;8:138. doi: 10.3389/fphy.2020.00138. [DOI] [Google Scholar]
- 175.Berhane H., Scott M.B., Barker A.J., McCarthy P, Avery R, Allen B, et al. Deep learning-based velocity antialiasing of 4D-flow MRI. Magn Reson Med. 2022;88:449–463. doi: 10.1002/mrm.29205. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 176.Vishnevskiy V., Walheim J., Kozerke S. Deep variational network for rapid 4D flow MRI reconstruction. Nat Mach Intell. 2020;2:228–235. [Google Scholar]
- 177.Peper E.S., van Ooij P., Jung B., Huber A., Grani C., Bastiaansen J.A.M. Advances in machine learning applications for cardiovascular 4D flow MRI. Front Cardiovasc Med. 2022;9 doi: 10.3389/fcvm.2022.1052068. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 178.Garcia J., Beckie K., Hassanabad A.F., Sojoudi A., White J.A. Aortic and mitral flow quantification using dynamic valve tracking and machine learning: prospective study assessing static and dynamic plane repeatability, variability and agreement. JRSM Cardiovasc Dis. 2021;10 doi: 10.1177/2048004021999900. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 179.Berhane H., Scott M., Elbaz M., Jarvis K, McCarthy P, Carr J, et al. Fully automated 3D aortic segmentation of 4D flow MRI for hemodynamic analysis using deep learning. Magn Reson Med. 2020;84:2204–2218. doi: 10.1002/mrm.28257. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 180.Fan L., Shen D., Haji-Valizadeh H., Naresh NK, Carr JC, Freed BH, et al. Rapid dealiasing of undersampled, non-Cartesian cardiac perfusion images using U-net. NMR Biomed. 2020;33 doi: 10.1002/nbm.4239. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 181.Demirel O.B., Yaman B., Shenoy C., Moeller S., Weingartner S., Akcakaya M. Signal intensity informed multi-coil encoding operator for physics-guided deep learning reconstruction of highly accelerated myocardial perfusion CMR. Magn Reson Med. 2023;89:308–321. doi: 10.1002/mrm.29453. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 182.Xue H., Davies R.H., Brown L.A.E., Knott KD, Kotecha T, Fontana M, et al. Automated inline analysis of myocardial perfusion MRI with deep learning. Radiol Artif Intell. 2020;2 doi: 10.1148/ryai.2020200009. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 183.Scannell C.M., Veta M., Villa A.D.M., Sammut EC, Lee J, Breeuwer M, et al. Deep-learning-based preprocessing for quantitative myocardial perfusion MRI. J Magn Reson Imaging. 2020;51:1689–1696. doi: 10.1002/jmri.26983. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 184.Sandfort V., Jacobs M., Arai A.E., Hsu L.Y. Reliable segmentation of 2D cardiac magnetic resonance perfusion image sequences using time as the 3rd dimension. Eur Radiol. 2021;31:3941–3950. doi: 10.1007/s00330-020-07474-5. [DOI] [PubMed] [Google Scholar]
- 185.Knott K.D., Seraphim A., Augusto J.B., Xue H, Chacko L, Aung N, et al. The prognostic significance of quantitative myocardial perfusion: an artificial intelligence-based approach using perfusion mapping. Circulation. 2020;141:1282–1291. doi: 10.1161/CIRCULATIONAHA.119.044666. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 186.Edalati M., Zheng Y., Watkins M.P., Chen J, Liu L, Zhang S, et al. Implementation and prospective clinical validation of AI‐based planning and shimming techniques in cardiac MRI. Med Phys. 2022;49:129–143. doi: 10.1002/mp.15327. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 187.Blansit K., Retson T., Masutani E., Bahrami N., Hsiao A. Deep learning–based prescription of cardiac MRI planes. Radiol Artif Intell. 2019;1 doi: 10.1148/ryai.2019180069. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 188.Rajput D., Wang W.J., Chen C.C. Evaluation of a decided sample size in machine learning applications. BMC Bioinform. 2023;24:48. doi: 10.1186/s12859-023-05156-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 189.Sudlow C., Gallacher J., Allen N., Beral V, Burton P, Danesh J, et al. UK biobank: an open access resource for identifying the causes of a wide range of complex diseases of middle and old age. PLoS Med. 2015;12 doi: 10.1371/journal.pmed.1001779. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 190.Littlejohns T.J., Sudlow C., Allen N.E., Collins R. UK Biobank: opportunities for cardiovascular research. Eur Heart J. 2019;40:1158–1166. doi: 10.1093/eurheartj/ehx254. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 191.Bernard O., Lalande A., Zotti C., Cervenansky F, Yang X, Heng P-A, et al. Deep learning techniques for automatic MRI cardiac multi-structures segmentation and diagnosis: is the problem solved? IEEE Trans Med Imaging. 2018;37:2514–2525. doi: 10.1109/TMI.2018.2837502. [DOI] [PubMed] [Google Scholar]
- 192.Campello V.M., Gkontra P., Izquierdo C., Martin-Isla C, Sojoudi A, Full PM, et al. Multi-centre, multi-vendor and multi-disease cardiac segmentation: the M&Ms challenge. IEEE Trans Med Imaging. 2021;40:3543–3554. doi: 10.1109/TMI.2021.3090082. [DOI] [PubMed] [Google Scholar]
- 193.Pace D.F., Dalca A.V., Geva T., Powell A.J., Moghari M.H., Golland P. Interactive whole-heart segmentation in congenital heart disease. Med Image Comput Comput Assist Interv. 2015;9351:80–88. doi: 10.1007/978-3-319-24574-4_10. [DOI] [PMC free article] [PubMed]
- 194.Zhuang X. International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer; Cham: 2016. Multivariate mixture model for cardiac segmentation from multi-sequence MRI; pp. 581–588. [Google Scholar]
- 195.Zhuang X. Multivariate mixture model for myocardial segmentation combining multi-source images. IEEE Trans Pattern Anal Mach Intell. 2019;41:2933–2946. doi: 10.1109/TPAMI.2018.2869576. [DOI] [PubMed] [Google Scholar]
- 196.Radau P., Lu Y., Connelly K., Paul G., Dick A.J., Wright G.A. Evaluation framework for algorithms segmenting short axis cardiac MRI. MIDAS J. 2009 [Google Scholar]
- 197.Segars W.P., Sturgeon G., Mendonca S., Grimes J., Tsui B.M. 4D XCAT phantom for multimodality imaging research. Med Phys. 2010;37:4902–4915. doi: 10.1118/1.3480985. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 198.Bhuva A.N., Bai W., Lau C., Davies RH, Ye Y, Bulluck H, et al. A multicenter, scan-rescan, human and machine learning CMR study to test generalizability and precision in imaging biomarker analysis. Circ Cardiovasc Imaging. 2019;12 doi: 10.1161/CIRCIMAGING.119.009214. [DOI] [PubMed] [Google Scholar]
- 199.Augusto J.B., Davies R.H., Bhuva A.N., Knott KD, Seraphim A, Alfarih M, et al. Diagnosis and risk stratification in hypertrophic cardiomyopathy using machine learning wall thickness measurement: a comparison with human test-retest performance. Lancet Digit Health. 2021;3:e20–e28. doi: 10.1016/S2589-7500(20)30267-3. [DOI] [PubMed] [Google Scholar]
- 200.Yan W., Huang L., Xia L., Gu S, Yan F, Wang Y, et al. MRI manufacturer shift and adaptation: increasing the generalizability of deep learning segmentation for MR images acquired with different scanners. Radiol Artif Intell. 2020;2 doi: 10.1148/ryai.2020190195. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 201.Gheorghita B.A., Itu L.M., Sharma P., Suciu C, Wetzl J, Geppert C, et al. Improving robustness of automatic cardiac function quantification from cine magnetic resonance imaging using synthetic image data. Sci Rep. 2022;12:2391. doi: 10.1038/s41598-022-06315-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 202.Park T, Liu MY, Wang TC, Zhu JY. Semantic image synthesis with spatially-adaptive normalization. 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2019:2332–2341.
- 203.Amirrajab S., Al Khalil Y., Lorenz C., Weese J., Pluim J., Breeuwer M. Label-informed cardiac magnetic resonance image synthesis through conditional generative adversarial networks. Comput Med Imaging Graph. 2022;101 doi: 10.1016/j.compmedimag.2022.102123. [DOI] [PubMed] [Google Scholar]
- 204.Al Khalil Y., Amirrajab S., Lorenz C., Weese J., Pluim J., Breeuwer M. On the usability of synthetic data for improving the robustness of deep learning-based segmentation of cardiac magnetic resonance images. Med Image Anal. 2023;84 doi: 10.1016/j.media.2022.102688. [DOI] [PubMed] [Google Scholar]
- 205.Zakeri A., Hokmabadi A., Bi N., Wijesinghe I, Nix MG, Petersen SE, et al. DragNet: learning-based deformable registration for realistic cardiac MR sequence generation from a single frame. Med Image Anal. 2023;83 doi: 10.1016/j.media.2022.102678. [DOI] [PubMed] [Google Scholar]
- 206.Luo G., Sun G., Wang K., Dong S., Zhang H. A novel left ventricular volumes prediction method based on deep learning network in cardiac MRI. Comput Cardiol Conf (CinC) 2016;2016:89–92. [Google Scholar]
- 207.Xia Y., Zhang L., Ravikumar N., Attar R, Piechnik SK, Neubauer S, et al. Recovering from missing data in population imaging - cardiac MR image imputation via conditional generative adversarial nets. Med Image Anal. 2021;67 doi: 10.1016/j.media.2020.101812. [DOI] [PubMed] [Google Scholar]
- 208.la Roi-Teeuw H.M., van Royen F.S., de Hond A., Zahra A, de Vries S, Bartels R, et al. Don't be misled: three misconceptions about external validation of clinical prediction models. J Clin Epidemiol. 2024;172 doi: 10.1016/j.jclinepi.2024.111387. [DOI] [PubMed] [Google Scholar]
- 209.National Institute for Health and Care Excellence (NICE). Artificial intelligence-derived software to analyse chest X-rays for suspected lung cancer in primary care referrals: early value assessment. Health technology evaluation; 2023.
- 210.Liao F., Adelaine S., Afshar M., Patterson B.W. Governance of clinical AI applications to facilitate safe and equitable deployment in a large health system: Key elements and early successes. Front Digit Health. 2022;4 doi: 10.3389/fdgth.2022.931439. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 211.Wilkinson M.D., Dumontier M., Aalbersberg I.J., Appleton G, Axton M, Baak A, et al. The FAIR guiding principles for scientific data management and stewardship. Sci Data. 2016;3 doi: 10.1038/sdata.2016.18. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 212.Mongan J., Moy L., Kahn C.E., Jr. Checklist for artificial intelligence in medical imaging (CLAIM): a guide for authors and reviewers. Radiol Artif Intell. 2020;2 doi: 10.1148/ryai.2020200029. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 213.Hernandez-Boussard T., Bozkurt S., Ioannidis J.P.A., Shah N.H. MINIMAR (MINimum Information for Medical AI Reporting): developing reporting standards for artificial intelligence in health care. J Am Med Inf Assoc. 2020;27:2011–2015. doi: 10.1093/jamia/ocaa088. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 214.Ibrahim H., Liu X., Rivera S.C., Moher D, Chan AW, Sydes MR, et al. Reporting guidelines for clinical trials of artificial intelligence interventions: the SPIRIT-AI and CONSORT-AI guidelines. Trials. 2021;22:11. doi: 10.1186/s13063-020-04951-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 215.Liu X., Cruz Rivera S., Moher D., Calvert MJ, Denniston AK, Spirit AI, et al. Reporting guidelines for clinical trial reports for interventions involving artificial intelligence: the CONSORT-AI extension. Nat Med. 2020;26:1364–1374. doi: 10.1038/s41591-020-1034-x. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 216.Collins G.S., Dhiman P., Andaur Navarro C.L., Ma J, Hooft L, Reitsma JB, et al. Protocol for development of a reporting guideline (TRIPOD-AI) and risk of bias tool (PROBAST-AI) for diagnostic and prognostic prediction model studies based on artificial intelligence. BMJ Open. 2021;11 doi: 10.1136/bmjopen-2020-048008. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 217.Klontzas M.E., Gatti A.A., Tejani A.S., Kahn C.E., Jr. AI reporting guidelines: how to select the best one for your research. Radiol Artif Intell. 2023;5 doi: 10.1148/ryai.230055. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 218.Vasey B., Nagendran M., Campbell B., Clifton DA, Collins GS, Denaxas S, et al. Reporting guideline for the early-stage clinical evaluation of decision support systems driven by artificial intelligence: DECIDE-AI. Nat Med. 2022;28:924–933. doi: 10.1038/s41591-022-01772-9. [DOI] [PubMed] [Google Scholar]
- 219.Schönberger D. Artificial intelligence in healthcare: a critical analysis of the legal and ethical implications. Int J Law Inf Technol. 2019;27:171–203. [Google Scholar]
- 220.Mehrabi N., Morstatter F., Saxena N., Lerman K., Galstyan A. A survey on bias and fairness in machine learning. ACM Comput Surv. 2021;54:1–35. [Google Scholar]
- 221.Ganapathi S., Palmer J., Alderman J.E., Calvert M, Espinoza C, Gath J, et al. Tackling bias in AI health datasets through the STANDING Together initiative. Nat Med. 2022;28:2232–2233. doi: 10.1038/s41591-022-01987-w. [DOI] [PubMed] [Google Scholar]
- 222.Bello G.A., Dawes T.J.W., Duan J., Biffi C, de Marvao A, Howard L, et al. Deep learning cardiac motion analysis for human survival prediction. Nat Mach Intell. 2019;1:95–104. doi: 10.1038/s42256-019-0019-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 223.Arun N., Gaw N., Singh P., Chang K, Aggarwal M, Chen B, et al. Assessing the trustworthiness of saliency maps for localizing abnormalities in medical imaging. Radiol Artif Intell. 2021;3 doi: 10.1148/ryai.2021200267. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 224.Saporta A., Gui X., Agrawal A., Pareek A, Truong SQH, Nguyen CDT, et al. Benchmarking saliency methods for chest X-ray interpretation. Nat Mach Intell. 2022;4:867–878. [Google Scholar]
- 225.Salih A., Boscolo Galazzo I., Gkontra P., Lee AM, Lekadir K, Raisi-Estabragh Z, et al. Explainable artificial intelligence and cardiac imaging: toward more interpretable models. Circ Cardiovasc Imaging. 2023;16 doi: 10.1161/CIRCIMAGING.122.014519. [DOI] [PubMed] [Google Scholar]
- 226.Amann J., Blasimme A., Vayena E., Frey D., Madai V.I., the Precise4Q consortium. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Med Inform Decis Mak. 2020;20:310. doi: 10.1186/s12911-020-01332-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 227.Salih A., Galazzo I.B., Gkontra P., Lee AM, Lekadir K, Raisi-Estabragh Z, et al. Explainable artificial intelligence and cardiac imaging: toward more interpretable models. Circ Cardiovasc Imaging. 2023;16 doi: 10.1161/CIRCIMAGING.122.014519. [DOI] [PubMed] [Google Scholar]
- 228.Chen H., Gomez C., Huang C.M., Unberath M. Explainable medical imaging AI needs human-centered design: guidelines and evidence from a systematic review. NPJ Digit Med. 2022;5:156. doi: 10.1038/s41746-022-00699-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 229.Park S.H., Han K., Jang H.Y., Park JE, Lee JG, Kim DW, et al. Methods for clinical evaluation of artificial intelligence algorithms for medical diagnosis. Radiology. 2023;306:20–31. doi: 10.1148/radiol.220182. [DOI] [PubMed] [Google Scholar]
- 230.Aouad P., Jarvis K.B., Botelho M.F., Serhal A, Blaisdell J, Collins L, et al. Aortic annular dimensions by non-contrast MRI using k-t accelerated 3D cine b-SSFP in pre-procedural assessment for transcatheter aortic valve implantation: a technical feasibility study. Int J Cardiovasc Imaging. 2021;37:651–661. doi: 10.1007/s10554-020-02038-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 231.Sounderajah V., Ashrafian H., Rose S., Shah NH, Ghassemi M, Golub R, et al. A quality assessment tool for artificial intelligence-centered diagnostic test accuracy studies: QUADAS-AI. Nat Med. 2021;27:1663–1665. doi: 10.1038/s41591-021-01517-0. [DOI] [PubMed] [Google Scholar]
- 232.Ridley E. Deep-learning algorithms need real-world testing. AuntMinnie.com; 2018. https://www.auntminnie.com/imaging-informatics/artificial-intelligence/article/15622054/deep-learning-algorithms-need-real-world-testing.
- 233.Zech J.R., Badgeley M.A., Liu M., Costa A.B., Titano J.J., Oermann E.K. Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: a cross-sectional study. PLoS Med. 2018;15 doi: 10.1371/journal.pmed.1002683. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 234.Bluemke D.A., Moy L., Bredella M.A., Ertl-Wagner BB, Fowler KJ, Goh VJ, et al. Assessing radiology research on artificial intelligence: a brief guide for authors, reviewers, and readers—from the radiology editorial board. Radiology. 2020;294:487–489. doi: 10.1148/radiol.2019192515. [DOI] [PubMed] [Google Scholar]
- 235.Arora A., Alderman J.E., Palmer J., Ganapathi S, Laws E, McCradden MD, et al. The value of standards for health datasets in artificial intelligence-based applications. Nat Med. 2023;29:2929–2938. doi: 10.1038/s41591-023-02608-w. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 236.Galbusera F., Cina A. Image annotation and curation in radiology: an overview for machine learning practitioners. Eur Radiol Exp. 2024;8:11. doi: 10.1186/s41747-023-00408-y. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 237.Guinney J., Saez-Rodriguez J. Alternative models for sharing confidential biomedical data. Nat Biotechnol. 2018;36:391–392. doi: 10.1038/nbt.4128. [DOI] [PubMed] [Google Scholar]
- 238.Lindgren S., Holmström J. Social science perspective on artificial intelligence. J Digit Soc Res. 2020;2:1–15. [Google Scholar]
- 239.Sloane M., Moss E. AI’s social sciences deficit. Nat Mach Intell. 2019;1:330–331. [Google Scholar]
- 240.Emanuel E.J., Wachter R.M. Artificial intelligence in health care: will the value match the hype? JAMA. 2019;321:2281–2282. doi: 10.1001/jama.2019.4914. [DOI] [PubMed] [Google Scholar]
- 241.Price W.N., 2nd, Gerke S., Cohen I.G. Potential liability for physicians using artificial intelligence. JAMA. 2019;322:1765–1766. doi: 10.1001/jama.2019.15064. [DOI] [PubMed] [Google Scholar]
- 242.Tajmir S.H., Alkasab T.K. Toward augmented radiologists: changes in radiology education in the era of machine learning and artificial intelligence. Acad Radiol. 2018;25:747–750. doi: 10.1016/j.acra.2018.03.007. [DOI] [PubMed] [Google Scholar]
- 243.Russak A.J., Chaudhry F., De Freitas J.K., Baron G, Chaudhry FF, Bienstock S, et al. Machine learning in cardiology-ensuring clinical impact lives up to the hype. J Cardiovasc Pharm Ther. 2020;25:379–390. doi: 10.1177/1074248420928651. [DOI] [PubMed] [Google Scholar]
- 244.Wilson H.J., Daugherty P.R. Collaborative intelligence: humans and AI are joining forces. Harv Bus Rev. 2018;96:114–123. [Google Scholar]
- 245.Hilabi B.S., Alghamdi S.A., Almanaa M. Impact of magnetic resonance imaging on healthcare in low- and middle-income countries. Cureus. 2023;15 doi: 10.7759/cureus.37698. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 246.Dishner K.A., McRae-Posani B., Bhowmik A., Jochelson MS, Holodny A, Pinker K, et al. A survey of publicly available MRI datasets for potential use in artificial intelligence research. J Magn Reson Imaging. 2024;59:450–480. doi: 10.1002/jmri.29101. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 247.Hamilton J.I., da Cruz G.L., Rashid I., Walker J., Rajagopalan S., Seiberlich N. Deep image prior cine MR fingerprinting with B1+ spin history correction. Magn Reson Med. 2024;91:2010–2027. doi: 10.1002/mrm.29979. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 248.Darestani M., Heckel R. Accelerated MRI with un-trained neural networks. IEEE Trans Comput Imaging. 2021;7:724–733. [Google Scholar]
- 249.Yoo J., Jin K.H., Gupta H., Yerly J., Stuber M., Unser M. Time-dependent deep image prior for dynamic MRI. IEEE Trans Med Imaging. 2021;40:3337–3348. doi: 10.1109/TMI.2021.3084288. [DOI] [PubMed] [Google Scholar]
- 250.de Sitter A., Visser M., Brouwer I., Cover KS, van Schijndel RA, Eijgelaar RS, et al. Facing privacy in neuroimaging: removing facial features degrades performance of image analysis methods. Eur Radiol. 2020;30:1062–1074. doi: 10.1007/s00330-019-06459-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 251.Pfaff L., Hossbach J., Preuhs E., Wagner F, Arroyo Camejo S, Kannengiesser S, et al. Self-supervised MRI denoising: leveraging Stein’s unbiased risk estimator and spatially resolved noise maps. Sci Rep. 2023;13 doi: 10.1038/s41598-023-49023-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 252.Keenan K.E., Delfino J.G., Jordanova K.V., Poorman ME, Chirra P, Chaudhari AS, et al. Challenges in ensuring the generalizability of image quantitation methods for MRI. Med Phys. 2022;49:2820–2835. doi: 10.1002/mp.15195. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 253.Khodabakhshi Z., Gabrys H., Wallimann P., Guckenberger M., Andratschke N., Tanadini-Lang S. Magnetic resonance imaging radiomic features stability in brain metastases: Impact of image preprocessing, image-, and feature-level harmonization. Phys Imaging Radiat Oncol. 2024;30 doi: 10.1016/j.phro.2024.100585. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 254.Wu C.-J., Raghavendra R., Gupta U., Acun B, Ardalani N, Maeng K, et al. Sustainable AI: environmental implications, challenges and opportunities. Proc Mach Learn Syst. 2022;4:795–813. [Google Scholar]
- 255.US Food and Drug Administration. Artificial intelligence and machine learning (AI/ML)-enabled medical devices; 2022.
- 256.Petrick N., Chen W., Delfino J.G., Gallas BD, Kang Y, Krainak D, et al. Regulatory considerations for medical imaging AI/ML devices in the United States: concepts and challenges. J Med Imaging (Bellingham) 2023;10 doi: 10.1117/1.JMI.10.5.051804. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 257.FDA guidance document: Computer-assisted detection devices applied to radiology images and radiology device data – premarket notification submission; 2022.
- 258.FDA guidance document: Clinical performance assessment: considerations for computer-assisted detection devices applied to radiology images and radiology device data in premarket notification submissions; 2022.
- 259.FDA guidance document: Technical performance assessment of quantitative imaging in radiological device premarket submissions; 2022.
- 260.US Food and Drug Administration. CDRH proposed guidances for fiscal year 2024 (FY2024); 2024.
- 261.Dey D., Arnaout R., Antani S., Badano A, Jacques L, Li H, et al. Proceedings of the NHLBI workshop on artificial intelligence in cardiovascular imaging. JACC Cardiovasc Imaging. 2023;16:1209–1223. doi: 10.1016/j.jcmg.2023.05.012. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 262.Catalog of regulatory science tools to help assess new medical devices. US Food and Drug Administration; 2023.