Biomedical Engineering Letters
. 2021 Nov 27;12(1):75–84. doi: 10.1007/s13534-021-00212-w

A digital cardiac disease biomarker from a generative progressive cardiac cine-MRI representation

Santiago Gómez 1, David Romo-Bucheli 1, Fabio Martínez 1
PMCID: PMC8825913  PMID: 35186361

Abstract

Cardiac cine-MRI is one of the most important diagnostic tools used to assess the morphology and physiology of the heart during the cardiac cycle. Nonetheless, the analysis of cardiac cine-MRI remains underexploited and highly dependent on the observer's expertise. This work introduces an imaging cardiac disease representation, coded as an embedding vector, that fully exploits the hidden mapping between the latent space and a generated cine-MRI data distribution. The resultant representation is progressively learned and conditioned on a set of cardiac conditions. A generative cardiac descriptor is obtained from a progressive generative adversarial network trained to produce synthetic MRI images conditioned on several heart conditions. The generator model is then used to recover a digital biomarker, coded as an embedding vector, following a backpropagation scheme. A UMAP strategy is then applied to build a topological low-dimensional embedding space that discriminates among cardiac pathologies. The approach is evaluated by using the embedded representation as a potential disease descriptor on 2296 pathological cine-MRI slices. The proposed strategy yields an average accuracy of 0.8 in discriminating among heart conditions. Furthermore, the low-dimensional space shows a remarkable grouping of cardiac classes that suggests its potential use as a tool to support diagnosis. The progressive, generative representation learned from cine-MRI slices allows complex descriptors to be retrieved and coded that prove useful for discriminating among heart conditions. The cardiac disease representation, expressed as a hidden embedding vector, could potentially be used to support cardiac analysis of cine-MRI sequences.

Keywords: Progressive GANs, Latent space, Cardiac patterns emulation, Cine-MRI

Introduction

Cardiovascular diseases (CVD) are the leading cause of death worldwide, accounting for approximately 29% of total global deaths [1]. Nowadays, cardiac MRI sequences are widely used to analyze structural and dynamic information of the myocardial walls during the cardiac cycle. Computer-aided diagnosis systems (CADs) support physicians in the detection, diagnosis, and follow-up of cardiac diseases, facilitating the decision process. Nonetheless, cardiac performance quantification is based on a few coarse hemodynamic metrics, such as heart rate variability (HRV) or ejection fraction (EF), to identify potential cardiac diseases [2, 3]. Unfortunately, these measurements may be insufficient to fully characterize pathologies, losing sight of the heart functional information conveyed by cine-MRI sequences. Moreover, these cardiac measurements can be sensitive to cardiac disease variability, depend on a proper ventricle segmentation, and limit the analysis to the end-diastole (ED) and end-systole (ES) phases. Besides, the alternative, observational analysis depends on the training and expertise of cardiologists and shows an intrinsic inter-reader variability.

Multiple efforts to propose alternative descriptors and strategies that allow discrimination among cardiac pathologies from MRI observations have been reported in the literature. For instance, Zhang et al. [4] computed several shape and textural features to improve the classification process. Cetin et al. [5] proposed a statistical shape model to learn ventricle shapes and propagate them to new samples; from the resulting segmentation, some hemodynamic measures were computed to predict heart conditions such as myocardial infarction, cardiomyopathy, and abnormal right ventricle. Also, Clough et al. [6] implemented a variational autoencoder to obtain a low-dimensional representation and classify diverse cardiac indexes. Other approaches have tried to build descriptors from cardiac motion displacements. For instance, LV segmentations have been propagated to quantify ventricle displacements [7]. Also, Motion-segnet was implemented to obtain a motion and shape descriptor under an unsupervised scheme [8]. Moreover, some strategies have integrated optical flow and geometrical features, extracted from a U-Net, to describe cardiac pathologies. Nonetheless, these supervised strategies do not exploit the spatio-temporal information and remain subject to a proper ventricle segmentation to characterize cardiac pathologies in the cine-MRI sequences.

On the other hand, generative and self-supervised representations have explored hidden descriptors that may be associated with structural and dynamic changes in medical applications. Despite this potential, in cine-MRI such generative architectures have been mainly adopted for the artificial generation of cardiac sequences [9, 10]. For instance, in [11] a progressive sequential causal GAN (PSCGAN) is proposed to simultaneously synthesize an LGE-equivalent image and segment diagnosis-related tissues. Other works use GANs to avoid problems of anonymity and privacy of the data [12]. Realistic GAN-generated images are used instead of real images to avoid sharing clinical imaging data [13, 14].

This work introduces a progressive generative adversarial strategy to code a new cardiac disease biomarker that takes into account a particular cardiac-related condition. The proposed approach generates cine-MRI sequences and obtains embedding descriptors conditioned on heart diseases. To do so, a progressive generative adversarial network (PGAN) was trained iteratively to generate coarse-to-fine images, conditioned on five different heart conditions. The evaluation of the generated images was carried out on a heart condition classification task using a different dataset. Each MRI image is mapped to the embedded space by applying a backpropagation scheme to the PGAN generator. The resulting embedding vectors are projected into a low-dimensional latent space. Notably, images with the same condition were closer in the low-dimensional space. Also, a supervised classification model trained on the embedding vectors achieved competitive performance on the heart condition classification task. The proposed strategy explores the possibility of analyzing new hidden patterns from PGAN-generated low-dimensional embeddings. This could lead to the discovery of new biomarkers associated with particular cardiac disorders.

Materials and methods

Heart structure is essential for a proper assessment of cardiac function. However, the complex ventricular geometry and its variability complicate modeling, even in populations with the same heart condition. The strategy proposed herein computes a generative heart descriptor from the two most relevant cardiac cycle instants, end-systole and end-diastole, as described in detail in the following subsections. The pipeline of the whole strategy is illustrated in Fig. 1.

Fig. 1.

Fig. 1

Pipeline of the proposed strategy and subsequent evaluation. The progressive generative adversarial network (PGAN) is trained in an iterative process, from low to high resolution, to generate synthetic images conditioned by several heart conditions. The evaluation is carried out in a supervised classification task on a different dataset. The PGAN generator is used to obtain embedding vectors associated with actual cardiac MRI images. A UMAP strategy is then used to project them in a low dimensional space. A classification model was trained on such low dimensional space

Conditioning generation images by heart conditions

GANs are state-of-the-art generative models for log-likelihood estimation. Nevertheless, a main problem in GAN training is collapsing onto a specific mode of the distribution, hence limiting the diversity of the generated samples [15]. In general, adversarial models consist of two opposing networks (G, D) that represent non-linear mapping functions. During learning, the generator G(z) outputs an image from a random noise vector z, while the discriminator D estimates the probability P(x_new | X_tr) that an image x_new belongs to the training data X_tr, or the probability P(x_new | X_g) that it belongs to the generated data X_g = G(z). This adversarial strategy optimizes the minimax objective $\min_G \max_D \; \mathbb{E}_{x_{tr} \sim p_{data}}[\log D(x_{tr})] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]$.
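The minimax objective above can be illustrated with a minimal sketch that estimates the value function from discriminator outputs on a batch of real and generated samples; the function name and the example probabilities are illustrative, not part of the paper's implementation.

```python
import math

def gan_value(d_real, d_fake):
    """Monte-Carlo estimate of the GAN value function
    E[log D(x)] + E[log(1 - D(G(z)))], given discriminator
    outputs on a batch of real and a batch of generated images."""
    term_real = sum(math.log(p) for p in d_real) / len(d_real)
    term_fake = sum(math.log(1.0 - p) for p in d_fake) / len(d_fake)
    return term_real + term_fake

# D maximizes this value; G minimizes it by pushing d_fake toward 1.
value = gan_value(d_real=[0.9, 0.8], d_fake=[0.1, 0.2])
```

A confident discriminator yields a high value, while the generator's updates raise d_fake and lower it, which is the tension the adversarial training exploits.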

Unfortunately, optimizing the aforementioned function does not guarantee the generation of realistic images. Moreover, we are also interested in preserving the morphology associated with specific cardiac diseases. Conditional GANs have emerged to deal with these limitations by integrating prior information y in a joint hidden representation with the vector z, which is also mapped to the discriminator. In this case, each training sample X_tr_i has an associated diagnosed pathology label Y_tr_i, forming the tuple (X_tr, Y_tr). An auxiliary-classifier GAN (AC-GAN) was implemented herein to recover cine-MRI volumes with morphological sense [16]. The AC-GAN generator G(X_tr, Y_tr) produces images together with associated labels, while the discriminator estimates both sources as independent probabilities: P(x_new | X_tr, X_g) and P(y_tr_i | X_tr, X_g), with X_g the partially generated images. In this sense, the log-likelihood function is rewritten to depend on the tuple (X_tr, Y_tr), as $\min_{G(z, y)} \max_{D(x, y)} V(D(x, y), G(z, y))$. Under this formulation, the AC-GAN achieves a latent space independent of the class label, helping to stabilize training.
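The two AC-GAN probability terms can be sketched as separate log-likelihoods, following Odena et al. [16]: a source term over real-vs-generated predictions and a class term over the label predictions. The function below is a toy illustration with made-up names; it is not the paper's training code.

```python
import math

def acgan_losses(d_src_real, d_src_fake, class_probs, labels):
    """Sketch of the two AC-GAN log-likelihood terms:
    L_S scores the real-vs-generated source prediction, and
    L_C scores the class prediction. The discriminator is trained
    to maximize L_S + L_C; the generator to maximize L_C - L_S."""
    l_s = (sum(math.log(p) for p in d_src_real)
           + sum(math.log(1.0 - p) for p in d_src_fake))
    l_c = sum(math.log(probs[y]) for probs, y in zip(class_probs, labels))
    return l_s, l_c
```

Because L_C is shared by both players, the class information is never adversarial, which is why the class-conditional structure stabilizes training rather than collapsing it.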

Progressive adversarial strategy: PGAN

A progressive training strategy was adopted in this work to learn the conditional GAN [17]. This strategy starts by solving the generation of low-resolution cine-MRI images and progressively adds layers to the network to increase the spatial resolution of the output images (see Fig. 1). Under such a scheme, the large variation of heart structural representation is treated within a multi-scale framework, where the large scale of the image is synthesized first and attention then focuses on finer-scale details. At each progressive step, the pair (G, D) faces an independent adversarial training round, where G learns the mapping z → x_g. The generator's output x_g is expected to be closer to the real distribution p_r(x). The implemented PGAN starts with low-resolution images of 4 × 4 and increases the size until reaching images of 256 × 256. This strategy is more stable than classical ones, recovers heart variability, and is computationally efficient. In this case, learning is approximated by a recursive strategy, where a basic GAN (D_L0, G_L0) starts the coarse, primary representation. Then, at each iteration, new layer representations are attached to the discriminator D_Li and the generator G_Li, allowing a more robust representation to be obtained from a flexible training scheme. The final representation has L_T layers that properly represent cardiac conditions. Algorithm 1 summarizes the progressive training process.
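The coarse-to-fine schedule described above can be sketched as a simple doubling of the resolution from 4 × 4 up to 256 × 256; each step corresponds to attaching a new layer pair (G_Li, D_Li). The helper name is illustrative.

```python
def progressive_schedule(start=4, final=256):
    """Resolutions visited by the progressive training loop: each
    step doubles the side length of the generated images, matching
    the 4x4 -> 256x256 growth used for the PGAN in this work."""
    resolutions = []
    res = start
    while res <= final:
        resolutions.append(res)
        res *= 2
    return resolutions
```

For the settings in this work the schedule is [4, 8, 16, 32, 64, 128, 256], i.e. seven adversarial training rounds of increasing detail.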

Generating the latent space

After the PGAN training, the generator network learns a suitable mapping G: z → (x_n, y_n) that transforms a random multi-dimensional vector into a cine-MRI image conditioned on a specific heart disease. In summary, the adversarial scheme learns the manifold χ underlying the variability structure of images from different heart pathologies. However, a major drawback is that the inverse mapping (from a particular image to the latent space, (x_n, y_n) → z) is unknown.

In this work, we choose to approximate the computation of a cardiac disease descriptor as the embedding vector associated with a specific image by minimizing a residual loss function [18]. Hence, the method computes the loss between a target image x_n and an initial synthetic image G(z_0) for a random vector z_0. The loss is minimized via backpropagation, with a total of κ iteration steps, by providing the gradients to update the latent vector coefficients z_0 → z_1 → … → z_κ. The cardiac descriptor, embedded as the vector z_κ, generates a synthetic image G(z_κ) close enough to represent the image x_n in the manifold χ. The loss function implemented herein is $L_R(z_\gamma) = \| x_n - G(z_\gamma) \|$ [19]. The embedding generation process is illustrated in Fig. 2. It should be noted that the computation of the embedding vector does not modify the weights of the generator network, and it should not be regarded as a training process.
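A toy version of this latent recovery can be written with a frozen linear "generator" G(z) = W z, where only z is updated by gradient descent on the residual loss. In the paper, G is the trained PGAN generator and the gradients come from backpropagation through the network; W, the learning rate, and the step count below are illustrative assumptions.

```python
import random

def recover_latent(target, W, steps=500, lr=0.05):
    """Recover a latent vector z for a target image by minimizing
    the residual loss L_R(z) = ||x_n - G(z)||^2, with a frozen
    toy linear generator G(z) = W @ z. Only z is updated; the
    'generator' weights W never change, as in the paper."""
    n, m = len(W), len(W[0])
    z = [random.gauss(0.0, 1.0) for _ in range(m)]  # random z_0
    for _ in range(steps):
        gz = [sum(W[i][j] * z[j] for j in range(m)) for i in range(n)]
        resid = [gz[i] - target[i] for i in range(n)]
        # gradient of the squared residual w.r.t. z: 2 W^T (G(z) - x_n)
        grad = [2.0 * sum(W[i][j] * resid[i] for i in range(n)) for j in range(m)]
        z = [z[j] - lr * grad[j] for j in range(m)]
    return z
```

For this convex toy problem the iterates z_0 → z_1 → … converge to the z whose synthetic image matches the target, mirroring the κ-step scheme of Fig. 2.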

Fig. 2.

Fig. 2

Finding the latent vector z via back-propagation using the residual loss. In this process, z represents a cardiac descriptor that can potentially represent different heart conditions. The latent vector z is associated with a specific query cine-MRI sequence

Experimental setup

Dataset

The proposed approach was trained and validated using data from a public challenge dataset (the MICCAI 2017 Automated Cardiac Diagnosis Challenge, ACDC) with a total of 100 short-axis (SA) cine-MRI volumes [20]. Each volume was associated with one of the following five cardiac conditions: dilated cardiomyopathy (DCM), hypertrophic cardiomyopathy (HCM), normal (NOR), heart failure with infarction (MINF), or abnormal right ventricle (RV).

We split the data into three different sets: 70% for PGAN training, 10% for PGAN validation, and 20% for testing. Each set was carefully selected to ensure the same distribution of heart conditions, and there was no overlap at the patient level among the sets. The main goal of the presented approach is to generate cine-MRI images conditioned by a reliable pathology characterization; hence, only basal and middle slices were considered. In total, 70 patients (~ 14,515 slices) were used for training, 10 patients (~ 3691 slices) for validation, and 20 patients (~ 2296 slices) for testing.
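The key property of this split, that all slices of a patient land in exactly one set, can be sketched as a patient-level partition. The stratification by heart condition used in the paper is omitted here for brevity; the function name and seed are illustrative.

```python
import random

def patient_level_split(patient_ids, fractions=(0.7, 0.1, 0.2), seed=0):
    """Partition patients (not slices) into train/val/test groups so
    that no patient contributes slices to more than one set, matching
    the 70/10/20 patient-level split used in this work."""
    ids = sorted(set(patient_ids))
    random.Random(seed).shuffle(ids)
    n = len(ids)
    n_train = round(fractions[0] * n)
    n_val = round(fractions[1] * n)
    train = ids[:n_train]
    val = ids[n_train:n_train + n_val]
    test = ids[n_train + n_val:]
    return train, val, test
```

Splitting at the patient level (rather than the slice level) is what prevents slices of the same heart from leaking between training and evaluation.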

PGAN training settings

The iterative PGAN training was carried out using resolution levels of 4×4, 8×8, 16×16, …, 256×256 pixels. A latent space of 512 dimensions was used as input for the generator (see Fig. 3). The batch size was also progressively decreased: a size of 128 was used at the first three levels, 64 for the second group of three levels, and 8 for the final high-resolution layer. The loss function for training the progressive GAN is a Wasserstein loss with a gradient penalty to enforce the Lipschitz constraint [21].

Fig. 3.

Fig. 3

Training dataset at different levels of resolution. The multiresolution dataset allows cine-MRI patterns to be learned progressively

Evaluation via classification tasks

In this work, we are interested in evaluating not only the overall quality of the generated images, but also the capability of the latent space to retain information associated with the different heart conditions. To do so, we define classification tasks that act as a surrogate evaluation. This evaluation was carried out on two different supervised tasks on the test set: (a) a multi-class classification task (5 classes) and (b) four different binary heart condition classification tasks. The test set was further split into two groups, for supervised training (70%) and evaluation (30%), with no overlap at the patient level between them.

The evaluation starts by generating the dataset embeddings as described in Sect. 2.3. For each image, the associated latent vector was recovered using 5000 iterations. Afterward, the UMAP dimensionality reduction strategy was applied [22]. UMAP is an unsupervised dimension reduction technique that generates a mapping function T from the high-dimensional space R^n into a low-dimensional space R^2, constrained to preserve the local distances of the feature vectors according to a user-defined metric on the high-dimensional space. A set of random forest classifiers was trained on the low-dimensional space for the four binary classification tasks and for the multi-class classification task: the 2D datapoints for supervised training were fed to each classifier, and after training, the remaining 2D datapoints were used for evaluation. Finally, the confusion matrix and the accuracy score were computed for the multi-class problem and for each of the four binary classification tasks.
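The final scoring step of this pipeline, building the confusion matrix and the accuracy over the held-out 2D datapoints, can be sketched in a few lines; the label values below are placeholders for the five heart conditions.

```python
from collections import Counter

def confusion_and_accuracy(y_true, y_pred, labels):
    """Confusion matrix (rows: true label, columns: predicted label)
    and accuracy, the two scores reported for the multi-class and
    binary classification tasks in this work."""
    counts = Counter(zip(y_true, y_pred))
    matrix = [[counts[(t, p)] for p in labels] for t in labels]
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return matrix, accuracy
```

Reading the matrix row by row shows, per condition, where the random forest's errors concentrate, which is how the per-class behavior in Fig. 4 is interpreted.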

Evaluation via similarity and distance metrics

Additionally, this work validated the coherence of the achieved deep representation by considering the cine-MRI images generated for several cardiac conditions. The quality of the cardiac magnetic resonance images produced by the GAN is decisive for the subsequent application of segmentation, detection, registration, and reconstruction methods. This quality could be assessed by an expert cardiologist to determine the coherence of the global and local structures of the produced images, but such observations may be subjective and inefficient for large volumes of data. Here, we decided to use different metrics to quantify the structural similarity between real and produced images and also to measure the distance between the training distribution P_r and the generated distribution P_g. These metrics are described below:

The multiscale structural similarity metric (MS-SSIM) [23] is a variant of the structural similarity metric (SSIM) that calculates an index of perceived change in structural information, following a perception-based model in which pixels have strong dependencies when they are spatially close to each other. These dependencies carry important information about the structure of objects, and they are often better captured by a multiscale analysis. In cardiac magnetic resonance imaging, this metric is intended to assess the structural relationship between generated and real images. MS-SSIM values lie between 0.0 and 1.0, where values close to one correspond to perceptually more similar images.
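The single-scale SSIM term underlying MS-SSIM can be sketched over two equal-length intensity lists using global statistics; MS-SSIM applies this comparison at several scales of a downsampling pyramid. The constants follow the usual (k·L)^2 stabilizers for intensities in [0, 1]; a full implementation would use local windows rather than global means.

```python
def ssim(x, y, c1=1e-4, c2=9e-4):
    """Single-scale SSIM between two equal-length intensity lists,
    combining luminance, contrast, and structure terms. Identical
    inputs score 1.0; dissimilar structures score closer to 0."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Because the numerator and denominator coincide when x equals y, the score of a perfect reconstruction is exactly 1.0, which is the sense in which values near one indicate perceptual similarity.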

The sliced Wasserstein distance (SWD) is an alternative to the Wasserstein distance (WD) [24] (whose computation is reported to be impractical in relatively high-dimensional spaces) that calculates a statistical similarity between local patches of the images, extracted from Laplacian pyramid representations of those images. The values of this metric lie in the range [0, ∞) and indicate how close the distribution of synthetic images is to the distribution of real images. SWD values close to zero indicate that the distributions are close, while large values indicate that they are far apart, i.e. the generated images share few characteristics with the real ones.
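The core idea of SWD can be sketched directly: project both point sets (e.g. image patch descriptors) onto random unit directions, where the 1D Wasserstein distance reduces to matching sorted samples, and average over projections. This is a simplified illustration; the evaluation in this work additionally extracts the patches from a Laplacian pyramid.

```python
import math
import random

def sliced_wasserstein(real, fake, n_proj=64, seed=0):
    """Sliced approximation of the Wasserstein distance between two
    equally-sized point sets in R^d: project onto random unit
    directions, sort the 1D projections, and average the absolute
    differences of the matched samples over all projections."""
    rng = random.Random(seed)
    d = len(real[0])
    total = 0.0
    for _ in range(n_proj):
        v = [rng.gauss(0.0, 1.0) for _ in range(d)]
        norm = math.sqrt(sum(c * c for c in v))
        v = [c / norm for c in v]  # random unit direction
        pr = sorted(sum(c * x for c, x in zip(v, p)) for p in real)
        pf = sorted(sum(c * x for c, x in zip(v, p)) for p in fake)
        total += sum(abs(a - b) for a, b in zip(pr, pf)) / len(pr)
    return total / n_proj
```

Identical sets yield a distance of zero, and the distance grows as the generated distribution drifts away from the real one, which is the behavior tracked in Fig. 8.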

Evaluation and results

Evaluation via classification tasks

This work proposes a new digital cardiac biomarker that learns a hidden representation from a generative cine-MRI scheme and allows classification and discrimination among a set of cardiac conditions. A main purpose of this work was to evaluate the proposed approach in terms of its capability to discriminate among cardiac pathologies. For this reason, a classification task was carried out as validation, using the embedding vectors obtained from the previously trained generative model. Figure 4 illustrates the results in two different setups: (1) a multi-class classification task (a total of five conditions) and (2) binary classification tasks. In both schemes, a random forest (RF) classifier was trained on the 2D embedding vectors, obtained from the 512-dimensional feature vectors in the latent space.

Fig. 4.

Fig. 4

Classification task using a random forest for both: multi-class (left) and binary classification (right)

For the multi-class setting, this work considered five heart conditions, achieving an average accuracy of 0.8. It should be noted that the best discrimination scores were obtained for dilated cardiomyopathy (DCM), heart failure with infarction (MINF), and abnormal right ventricle (RV). This discrimination capability could be associated with the descriptor's ability to capture morphological features observed in cine-MRI images. In contrast, the embedding descriptors corresponding to the normal (NOR) condition are harder to differentiate from the other classes, which may be attributed to the larger variability of the training samples in this class compared with the more specific pathological conditions. For binary classification, the resulting embedding descriptors also preserve enough information to distinguish between the different heart conditions. In these experiments, the RF model yields accuracies larger than 0.9 for the different binary classification tasks.

Secondly, a topological low-dimensional embedding space was built from the computed cardiac descriptors, and a UMAP strategy was used to separate cardiac conditions. Under the assumption that the latent space has smooth transitions, we expected closer points in the latent space to generate closer cine-MRI images according to cardiac pathology constraints; moreover, the conditional component is likely to emphasize this behavior. Figure 5 shows that the different classes are properly mapped in this clustering space, albeit with some outliers. We hypothesize that such outliers may correspond to particular heart morphologies that the model is forced to learn in some cardiac populations. As expected, for multiple classes, the topological space has greater difficulty discriminating among classes, which could be associated with patterns shared among multiple classes in the embedding configuration. In Fig. 5, some outlier points correspond to ventricular regions near the apical zone.

Fig. 5.

Fig. 5

UMAP unsupervised clustering of conditioned embedding vectors. Multi-class and binary groupings were carried out in different experiments. Legend: NOR, normal; DCM, dilated cardiomyopathy; HCM, hypertrophic cardiomyopathy; MINF, heart failure with infarction; RV, abnormal right ventricle

Evaluation via structural and similarity of synthetic images

In a second evaluation of the proposed approach, we validated the structural and morphological synthesis of new cine-MRI images generated from the progressive representation. Some synthetic examples generated by the proposed GAN architecture are shown in Fig. 6, presenting the heart conditions NOR, DCM, HCM, MINF, and RV at both ED and ES. The proposed progressive, conditioned GAN achieves reliable results that visually preserve cardiac ventricle structures. It should also be noted that remarkable differences are noticeable across pathologies; these differences could be associated with the nature of each heart condition. Some noisy and intensity-altered images were also produced, as observed in the third column. However, such images are also found in the training dataset, due to differences in the acquisition protocols. For comparison, we also generated synthetic cardiac images using a progressive but unconditioned strategy. The resulting images contain artifacts, and in some cases anatomical information is missing; in other cases, totally incoherent images are generated (e.g. additional cardiac structures, such as extra ventricles, are observed).

Fig. 6.

Fig. 6

Left: Ten synthetic images (two for each class in our study) are synthesized by the conditioned generative adversarial network (GAN). Right: Last two columns show some samples generated by a GAN model trained without any conditional information

A quantitative evaluation of the synthesis of new cine-MRI images was conducted using the structural similarity between images produced by the proposed PGAN generator and the training set of reference images. Figure 7 shows the performance of the proposed approach through the incremental generation of images under the multiscale structural similarity metric (MS-SSIM) [23]. In our case, the reference is the average MS-SSIM measured over the training set of images, represented by the vertical dotted line. As expected, in the first generated batch of 2000 images, the computed MS-SSIM shows low structural coherence. However, during the progressive generation, with each new batch the proposed approach converges toward sets of new images with close similarity to the training group of images. These results suggest that the last batch of synthetic images has a local structure similar to real cine-MRI slices and also that the variation of images increases, producing a wider range of cardiac samples for each condition.

Fig. 7.

Fig. 7

Evaluation of MS-SSIM metric at different stages of the training process

Finally, the sliced Wasserstein distance (SWD) metric was used to validate the resultant data distribution of synthetic images against the reference distribution taken from the real dataset used during training. The set of synthetic images obtained from the generator was validated at different resolution levels ({128, 64, 32, 16}), according to the progressive perspective of the generative network. Figure 8 shows the SWD achieved for different sets of new images, generated at different resolution levels. It should be noted that the vertical dotted lines represent discrete checkpoints that indicate a change of resolution level in the progressive architecture. As expected, the distance between the training distribution P_r and the distribution of generated images P_g tends to decrease, showing the capability of the proposed approach to reproduce realistic images with a distribution closer to the original dataset, and suggesting a proper reproduction of visual cardiac patterns. The progressive learning also proves useful for obtaining images with greater structural coherence, starting with basic textural patterns and ending, in the final layers, with complex semantic cardiac concepts in the cine-MRI slices.

Fig. 8.

Fig. 8

Evaluation of SWD metrics at different stages of training: distance between training distribution Pr and distribution of generated images Pg at different stages of training. Vertical dotted lines indicate where the GAN changed the focus to a new level of detail

Discussion and concluding remarks

This work introduced a cine-MRI cardiac biomarker learned from a deep generative representation. The deep representation is conditioned by labels of cardiac conditions and adjusted by a progressive strategy that restores image patterns at different resolution levels. From such a representation, it was possible to learn a latent feature space whose topological points correspond to embeddings that properly cluster cardiac conditions, capturing the most relevant morphological features of cardiac cine-MRI. This topological space was evaluated via a classification task on the recovered embeddings of a test dataset with associated conditions. The results show that the computed cardiac descriptors are able to discriminate between different cardiac conditions, reporting accuracies larger than 90% in all binary classification experiments.

Nowadays, many strategies exist to support the analysis of cardiac conditions observed in MRI sequences. Nonetheless, such strategies are mainly dedicated to supporting manual ventricle delineation for a posterior analysis of hemodynamic measures, such as the ejection fraction, that correlate with some cardiac diseases [2, 3]. Additional approaches have quantified such cardiac indexes from the achieved segmentations to support a global analysis of heart dynamics. Nonetheless, the computation of such metrics depends on proper segmentations, and the information is limited to the end-diastole and end-systole phases. This dynamic reduction to only two phases of the cardiac cycle may lose important nonlinear patterns that could be recovered by a richer morphological descriptor [5–8]. In our proposal, we leverage deep learning methods to learn a vector representation that adequately describes heart conditions on MRI images. These vectors were projected to a low-dimensional space to summarize hidden information related to particular cardiac diseases. Additionally, the progressive generation framework was able to produce synthetic images (Fig. 6) with features very similar to those observed on real cine-MRI sequences. These generated sequences also capture a structure similar to real cardiac cine-MRI, as shown by the SWD and MS-SSIM metrics. These results, while not direct indicators of how good the mapping between distributions is for further use in classification, indicate that the GAN was able to associate more points in the embedded space with synthetic MRI images.

This work opens the possibility of uncovering potential digital biomarkers associated with cardiac conditions by coding hidden relationships among complex visual features captured in cardiac magnetic resonance imaging. Future work includes the exploration of alternative generative representations under an anomaly detection scheme, which may be promising for describing morphological and dynamic pathological heart features. Furthermore, a detailed study on larger datasets of cine-MRI sequences will be conducted to identify key features of the embedded representations potentially correlated with heart conditions.

Declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Footnotes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  • 1.Roth GA, Abate D, Abate KH, et al. Global, regional, and national age-sex-specific mortality for 282 causes of death in 195 countries and territories, 1980–2017: a systematic analysis for the Global Burden of Disease Study 2017. Lancet. 2018;392:1736–1788. doi: 10.1016/S0140-6736(18)32203-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Reinertsen E, Nemati S, Vest AN, Vaccarino V, Lampert R, Shah AJ, Clifford GD. Heart rate-based window segmentation improves accuracy of classifying posttraumatic stress disorder using heart rate variability measures. Physiol Meas. 2017;38:1061. doi: 10.1088/1361-6579/aa6e9c. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Liang L, Mao W, Sun W. A feasibility study of deep learning for predicting hemodynamics of human thoracic aorta. J Biomech. 2020;99:109544. doi: 10.1016/j.jbiomech.2019.109544. [DOI] [PubMed] [Google Scholar]
  • 4.Zhang N, Yang G, Gao Z, Xu C, Zhang Y, Shi R, Keegan J, Xu L, Zhang H, Fan Z, Firmin D. Deep learning for diagnosis of chronic myocardial infarction on nonenhanced cardiac cine MRI. Radiology. 2019 doi: 10.1148/radiol.2019182304. [DOI] [PubMed] [Google Scholar]
  • 5.Cetin I, Sanroma G, Petersen SE, Napel S, Camara O, Ballester M-AG, Lekadir K. A radiomics approach to computer-aided diagnosis with cardiac cine-MRI. In: International workshop on statistical atlases and computational models of the heart. Springer; 2017. p. 82–90.
  • 6.Clough JR, Oksuz I, Puyol-Antón E, Ruijsink B, King AP, Schnabel JA. Global and local interpretability for cardiac MRI classification. In: International conference on medical image computing and computer-assisted intervention. Springer; 2019. p. 656–64.
  • 7.Yang D, Wu P, Tan C, Pohl KM, Axel L, Metaxas D. 3D motion modeling and reconstruction of left ventricle wall in cardiac MRI. In: International conference on functional imaging and modeling of the heart. Springer; 2017. p. 481–92. [DOI] [PMC free article] [PubMed]
  • 8.Qin C, Bai W, Schlemper J, Petersen SE, Piechnik SK, Neubauer S, Rueckert D. Joint motion estimation and segmentation from undersampled cardiac MR image. In: International workshop on machine learning for medical image reconstruction. Springer; 2018. p. 55–63.
  • 9.Wong SC, Gatt A, Stamatescu V, McDonnell MD. Understanding data augmentation for classification: when to warp? In: 2016 International conference on digital image computing: techniques and applications (DICTA). IEEE; 2016. p. 1–6.
  • 10.Perez L, Wang J. The effectiveness of data augmentation in image classification using deep learning. 2017. arXiv preprint https://arxiv.org/abs/1712.04621.
  • 11.Xu C, Xu L, Ohorodnyk P, Roth M, Chen B, Li S. Contrast agent-free synthesis and segmentation of ischemic heart disease images using progressive sequential causal GANs. Med Image Anal. 2020;62:101668. doi: 10.1016/j.media.2020.101668. [DOI] [PubMed] [Google Scholar]
  • 12.Diller G-P, Vahle J, Radke R, Vidal MLB, Fischer AJ, Bauer UMM, Sarikouch S, Berger F, Beerbaum P, Baumgartner H. Utility of deep learning networks for the generation of artificial cardiac magnetic resonance images in congenital heart disease. BMC Med Imaging. 2020;20:1–8. doi: 10.1186/s12880-020-00511-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Litjens G, Ciompi F, Wolterink JM, de Vos BD, Leiner T, Teuwen J, Išgum I. State-of-the-art deep learning in cardiovascular image analysis. JACC Cardiovasc Imaging. 2019;12:1549–1565. doi: 10.1016/j.jcmg.2019.06.009. [DOI] [PubMed] [Google Scholar]
  • 14.Carneiro G, Zheng Y, Xing F, Yang L. Review of deep learning methods in mammography, cardiovascular, and microscopy image analysis. In: Lu L, Zheng Y, Carneiro G, Yang L, editors. Deep learning and convolutional neural networks for medical image computing. Cham: Springer; 2017. pp. 11–32. [Google Scholar]
  • 15.Goodfellow I. NIPS 2016 tutorial: generative adversarial networks. 2016. arXiv preprint https://arxiv.org/abs/1701.00160.
  • 16.Odena A, Olah C, Shlens J. Conditional image synthesis with auxiliary classifier gans. In: Proceedings of the 34th ICML. 2017. p. 2642–2651.
  • 17.Karras T, Aila T, Laine S, Lehtinen J. Progressive growing of GANs for improved quality, stability, and variation. 2017. arXiv preprint https://arxiv.org/abs/1710.10196.
  • 18.Radford A, Metz L, Chintala S. Unsupervised representation learning with deep convolutional generative adversarial networks. 2015. arXiv preprint https://arxiv.org/abs/1511.06434.
  • 19.Schlegl T, Seeböck P, Waldstein SM, et al. Unsupervised anomaly detection with generative adversarial networks to guide marker discovery. In: Proceeding of IPMI. 2017. p. 146–157.
  • 20.Bernard O, Lalande A, Zotti C, et al. Deep learning techniques for automatic MRI cardiac multi-structures segmentation and diagnosis: Is the problem solved? IEEE Trans Med Imaging. 2018;37:2514–2525. doi: 10.1109/TMI.2018.2837502. [DOI] [PubMed] [Google Scholar]
  • 21.Gulrajani I, Ahmed F, Arjovsky M, et al. Advances in neural information processing systems. Berlin: Springer; 2017. Improved training of wasserstein gans; pp. 5767–5777. [Google Scholar]
  • 22.McInnes L, Healy J, Melville J. Umap: Uniform manifold approximation and projection for dimension reduction. 2018. arXiv preprint https://arxiv.org/abs/1802.03426.
  • 23.Wang Z, Bovik AC, Sheikh HR, Simoncelli EP, et al. Image quality assessment: from error visibility to structural similarity. IEEE Trans Image Process. 2004;13:600–612. doi: 10.1109/TIP.2003.819861. [DOI] [PubMed] [Google Scholar]
  • 24.Villani C. Optimal transport: old and new. Berlin: Springer; 2008. [Google Scholar]

