Abstract
Magnetic resonance imaging (MRI) images acquired as multislice two-dimensional (2D) images present challenges when reformatted in orthogonal planes due to sparser sampling in the through-plane direction. Restoring the "missing" through-plane slices, or regions of an MRI image damaged by acquisition artifacts, can be modeled as an image imputation task. In this work, we consider the damaged image data or missing through-plane slices as image masks and propose an edge-guided generative adversarial network to restore brain MRI images. Inspired by the procedure of image inpainting, our proposed method decouples image repair into two stages, edge connection and contrast completion, both of which use generative adversarial networks (GANs). We trained and evaluated our method for thick-slice imputation on a dataset from the Human Connectome Project, and tested artifact correction on clinical data and simulated datasets. Our edge-guided GAN achieved superior PSNR, SSIM, conspicuity, and signal texture compared with traditional imputation tools, the Context Encoder, and the Densely Connected Super Resolution Network with GAN (DCSRN-GAN). The proposed network may improve utilization of clinical 2D scans for 3D atlas generation and big-data comparative studies of brain morphometry.
Keywords: artifact correction, edge, generative adversarial network, image restoration, imputation, magnetic resonance imaging
I. INTRODUCTION
Magnetic resonance imaging (MRI), as an indispensable tool for medical diagnosis and imaging research, offers detailed visualization of the human torso, extremities and brain. However, artifacts often occur, reducing image quality, diagnostic utility, and scientific relevance [1]. While a plethora of two-dimensional (2D) MRI scans are acquired in hospitals, retrieving missing information due to image artifacts, or due to large slice thickness is of great importance, especially for downstream meta-analyses.
Types and manifestations of artifacts were reviewed by Stadler et al. and Zhuo et al. [2], [3]; some are obvious and some are subtle, leading to misinterpretation or misdiagnosis. The most common is motion artifact due to respiration or other movement of the imaging subject [4], [5]. It appears as blurring or coherent ghosting and, in more severe cases, smears the image. Other frequently encountered artifacts are equipment-related, such as spike (herringbone) artifacts, appearing as dark stripes overlaid on the image, or zipper artifacts, exhibiting increased noise that extends throughout the image slices [6], [7]. Finally, many artifacts manifest as voids in the images. All of these often lead to discarding of the affected slices, and hence a loss of potentially crucial information, especially in cases of pathologic conditions. Therefore, correcting artifact-affected slices is of great importance for both clinical and research work.
Another important image restoration application lies in retrieving anatomic information encoded in the through-plane direction of 2D images, which can be formulated as missing slices. This is particularly imperative for T2-weighted imaging, due to its long signal recovery time and vulnerability to motion artifacts [4], [8], [9]. Specifically, the inter-slice spacing is three to six times larger than the in-plane resolution of individual slices. This results in a resolution that is much higher in-plane than in the through-plane direction, as shown in Fig. 1(a).
Fig. 1.
A stack of 2D MRI images. (a) The resolution in the through-plane direction (coronal and sagittal) is usually much lower than the in-plane (axial) resolution. (b) The slices to be restored are modeled as 1-valued masks in the through-plane direction and appear as masked rows in the other two planes. Note that the masked region shown in the coronal and sagittal views in (b) is one slice out of every three slices, different from the mask pattern implemented in this paper (see Section III.A); it is a schematic illustration of how missing slices are represented in the other two orthogonal planes.
Both multi-slice 2D acquisitions, with their large inter-slice spacings, and artifact-corrupted images, are poorly suited for downstream segmentation and shape analyses, such as skull stripping, deformable registration, and surface or shape construction [10]–[12]. Therefore, retrieving missing slices to achieve isotropic-resolution, and correcting artifact affected regions are crucial steps in obtaining as much relevant information as possible out of the images.
Prior MRI restoration methods proposed to go from anisotropic 2D MRI images to isotropic ones can be grouped into two main categories: model-based and data-driven. The earliest methods include piecewise interpolation such as nearest neighbor, linear, polynomial, and spline interpolation [13]. Mahmoudzadeh et al. [14] registered three orthogonal 2D scanning planes to a high-resolution grid and combined the three interpolated volumes to achieve a high-resolution image. Among data-driven methods, Greenspan et al. [15] extended the iterative back projection (IBP) method proposed by Irani and Peleg [17], while Yang et al. [16] modeled sparse representation and over-complete dictionary learning to restore missing slices. Dalca et al. [18] employed an expectation-maximization algorithm to train a Gaussian mixture model, imputing the missing structures by learning solely from the available collection of sparsely sampled images; that work also investigated the effect of various slice thicknesses on performance [18].
Mathematically, image restoration can be modeled as an ill-posed problem: finding $f^{-1}(\cdot)$, the inverse of the image degrading mapping, and minimizing the difference between estimated results and the desired but unknown images $X$ in the forward model $Y = f(X)$, where $Y$ represents the observed images and $f$ is the image degrading mapping. Efforts have been made using IBP [17], non-local means, and matrix completion algorithms [19]–[21], in which reconstruction and correction were iterated in a multi-scale manner. More recently, deep learning approaches have enjoyed explosive popularity and a powerful capability to improve reconstruction results. Convolutional neural networks (CNNs) [22] have achieved satisfactory results compared with previously applied methods [23]–[25]. However, the widely used optimization methods of CNNs minimize voxel-wise error between estimated and ground-truth images without regard for the underlying structure. This leads to overall blurring and lower perceptual image quality [26], suggesting that CNN-based methods struggle to retain high-frequency information. Evidence from other imaging applications suggests that generative adversarial networks (GANs) better preserve the edges and image texture essential to perceptual quality [27]–[29], but suffer from poorer voxel-wise performance because of their emphasis on learned patterns [24].
In this work, we propose a new method to improve the voxel-wise performance of GANs by cascading two networks, each focusing on a specific task. We concentrate explicitly on restoring slices that are missing due to image artifacts or 2D scanning schemes, generating anatomically plausible and consistent 3D volumes by imputing the missing slices. Our framework was inspired by image inpainting by artists, with the goal of "reconstituting the missing or damaged portions of the work, in order to make it more legible and to restore its unity" [30]. In image inpainting, an artist composes a drawing by initially delineating the spaces and shapes, following a "lines first, color second" principle [31]. Indeed, both in de novo creation of an artistic painting and during image restoration, a completed sketch or edge recovery plays a vital role and comes before paint is applied to the canvas [32]. Attempts have been made to develop image inpainting for natural images [33]–[35]. Our proposed edge-guided image restoration network decouples the recovery of high- and low-frequency components of the missing information to generate coherent anatomical details from adjacent slices. We first apply a GAN to recover edge information based upon the existing image context. A contrast completion GAN subsequently uses the "sketched" edges from the first network to fill in appropriate image contrasts.
II. METHODS
A. DATA REPRESENTATION AND PROPOSED FRAMEWORK
In this work, we model the damaged or missing through-plane slices (axial slices, illustrated in Fig. 1(a)) as binarized masks, where the masked regions are set to the value 1. These regions in turn appear as masked rows in the other two planes, as presented in Fig. 1(b); retrieving missing slices is therefore achieved by estimating the masked rows in the two orthogonal planes. To mimic image inpainting, the first step is to connect the broken edges across the masked rows. A contrast completion network then uses the connected edges from the first step to estimate the voxel intensities in the missing rows. This approach is implemented as an edge-guided GAN (EG-GAN) to produce perceptually consistent results.
Our proposed method consists of two steps, edge connection and contrast completion; both steps follow an adversarial model consisting of a generator and a discriminator (Fig. 2). EG-GAN first connects the missing edges of artifact-affected or low-resolution images with an edge generator, taking the 2D scans and the masks generated from the missing through-plane slices as input, supervised by the edges extracted from the original images. The edges of the 2D images and of the original, isotropic-resolution images, serving as input and ground truth respectively, are extracted with a Canny edge detector [36]. In the second step of our method, a contrast generator fills in the intensities based on the original contrast of the 2D images, guided by the edges generated in the first step and supervised by the original images.
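As an illustration of this preprocessing step, the following is a minimal sketch (not the authors' released code) of how a binary edge map and a masked input could be prepared for one 2D slice, assuming intensities normalized to [0, 1]; the function names and the use of scikit-image's Canny implementation are our own choices, with the sigma and threshold values taken from Section III.B.

```python
import numpy as np
from skimage.feature import canny  # scikit-image Canny detector

def extract_edges(slice_2d, sigma=1.0, threshold=0.5):
    """Binary edge map of a single 2D slice (Canny, sigma = 1, threshold = 0.5)."""
    img = slice_2d.astype(np.float32)
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)  # normalize to [0, 1]
    return canny(img, sigma=sigma, low_threshold=threshold, high_threshold=threshold)

def apply_mask(plane_2d, mask_2d):
    """Degraded input: masked rows are set to 1, matching the 1-valued mask in Fig. 1(b)."""
    degraded = plane_2d.copy()
    degraded[mask_2d.astype(bool)] = 1.0
    return degraded
```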
Fig. 2.
Framework of the proposed method. The disconnected edges and their corresponding mask patterns are used to train the edge generator. The edges extracted from the original image are used as edge references. The contrast generator, trained with paired masked images and ground truth, uses the completed edges generated by the edge generator as constraints. Note that the ground truth image is fed to the edge discriminator as prior information, while it is fed to the contrast discriminator to be differentiated from the recovered image.
Our framework was inspired by Nazeri et al. [37], which achieved impressive results in image restoration for natural images. We started building our network from the context encoder, an image semantic inpainting network implemented by Pathak et al. [38]. The design of the two networks is presented in detail in the following sections.
B. DESIGN OF LOSS FUNCTIONS FOR EG-GAN
1). LOSS FUNCTION IN EDGE CONNECTION:
The loss function of edge connection is extended from [39] and designed as:
$$loss_{G_e} = \lambda_{GAN1}\, loss_{GAN1} + \lambda_{f}\, loss_{f} + \lambda_{DSC}\, loss_{DSC} \tag{1}$$
where $loss_{GAN1}$ is the adversarial loss, and $loss_f$ is the feature-matching loss [27], used to stabilize the network. To enhance the voxel-wise precision of the generated edge, the Dice similarity coefficient (DSC) loss [40], $loss_{DSC}$, is included. Finally, $\lambda_{GAN1}$, $\lambda_f$ and $\lambda_{DSC}$ are regularization parameters.
We denote $I_{GT}$ as the original image (ground truth) and $\tilde{I}$ as the masked image, where $M$ is the mask designed to mimic the thick slices. By the same token, $C_{GT}$ is the image contour and $\tilde{C}$ is its masked or degraded edge map. The edge mapping performed by the generator $G_e$ can be represented as:
$$C_{pred} = G_e(\tilde{I}, \tilde{C}, M) \tag{2}$$
where $C_{pred}$ is the predicted image contour. $C_{pred}$ and $C_{GT}$ are designed as paired inputs of the discriminator, which distinguishes whether $C_{pred}$ is real.
The adversarial loss for the edge connection network can be written as:
$$loss_{GAN1} = \mathbb{E}_{(C_{GT},\, I_{GT})}\!\left[\log D_e(C_{GT}, I_{GT})\right] + \mathbb{E}_{I_{GT}}\!\left[\log\!\left(1 - D_e(C_{pred}, I_{GT})\right)\right] \tag{3}$$
where $I_{GT}$ and $C_{GT}$ are the ground truth images and their contours, and $D_e$ is the edge discriminator.
The feature-matching loss, $loss_f$, which measures the difference between the activation maps generated by the hidden layers of the discriminator, is defined as
$$loss_{f} = \mathbb{E}\left[\sum_{i=1}^{L} \frac{1}{N_i}\, \left\| D_e^{(i)}(C_{GT}) - D_e^{(i)}(C_{pred}) \right\|_1 \right] \tag{4}$$
where $L$ is the number of hidden layers in the discriminator, $N_i$ is the number of elements in the activation map of the $i$th layer, and $D_e^{(i)}$ represents the $i$th-layer activation of the discriminator.
The Dice similarity coefficient is a spatial overlap index that ranges between 0 and 1 [41] and has been widely used to evaluate pairs of binary segmentation results. In the edge generator, the DSC loss, $loss_{DSC}$, is designed to restrict the bias towards the background during learning [40], [42]:
$$loss_{DSC} = 1 - \frac{2\sum_{i=1}^{N} p_i\, g_i}{\sum_{i=1}^{N} p_i^2 + \sum_{i=1}^{N} g_i^2} \tag{5}$$
where $p_i$ and $g_i$ denote the predicted and ground-truth edge values at voxel $i$, and the summation runs over the $N$ voxels.
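For concreteness, below is a minimal PyTorch sketch of a soft Dice loss in the spirit of (5) and the V-Net formulation [40]; the smoothing constant and tensor reshaping are our own assumptions, not details given in the paper.

```python
import torch

def dice_loss(pred_edges, gt_edges, eps=1e-6):
    """Soft Dice (DSC) loss between predicted and ground-truth edge maps.

    pred_edges: sigmoid output of the edge generator, values in [0, 1]
    gt_edges:   binary Canny edge map of the ground-truth image
    """
    p = pred_edges.reshape(pred_edges.shape[0], -1)
    g = gt_edges.float().reshape(gt_edges.shape[0], -1)
    intersection = (p * g).sum(dim=1)
    dsc = (2.0 * intersection + eps) / (p.pow(2).sum(dim=1) + g.pow(2).sum(dim=1) + eps)
    return 1.0 - dsc.mean()
```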
2). LOSS FUNCTION IN CONTRAST COMPLETION:
The contrast completion network takes the incomplete image as input, enhanced by the connected contour from the previous step. The contrast mapping $G_c$ can be represented as:
$$I_{pred} = G_c(\tilde{I}, C_{pred}) \tag{6}$$
where $I_{pred}$ is the prediction of the restored image. Similar to (3), the adversarial loss of the contrast completion network is defined as:
$$loss_{GAN2} = \mathbb{E}_{(I_{GT},\, C_{pred})}\!\left[\log D_c(I_{GT}, C_{pred})\right] + \mathbb{E}_{C_{pred}}\!\left[\log\!\left(1 - D_c(I_{pred}, C_{pred})\right)\right] \tag{7}$$
where $D_c$ is the corresponding discriminator. To ensure that the reconstructed image has both high voxel-wise accuracy and good perceptual quality, the network is trained with a combined loss function including the $l_2$ loss, the perceptual loss [43] $loss_p$, and the style loss [44] $loss_s$. The $l_2$ loss is the most common reconstruction loss; the perceptual loss is included to penalize results that are not perceptually similar to the ground truth; and the style loss [45] is chosen to ameliorate "checkerboard" artifacts due to transposed convolutional layers [26]. Taken together, the overall loss function of the contrast completion network is:
$$loss_{G_c} = \lambda_{GAN2}\, loss_{GAN2} + \lambda_{l2}\, loss_{l2} + \lambda_{p}\, loss_{p} + \lambda_{s}\, loss_{s} \tag{8}$$
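To make the composition of (8) concrete, the sketch below shows one plausible way to combine the four generator-side terms in PyTorch; the weight values and the perceptual_loss / style_loss callables (e.g., built on pretrained VGG features) are placeholders of ours, not the values or implementations used in the paper.

```python
import torch
import torch.nn.functional as F

def contrast_loss(i_pred, i_gt, d_fake_logits, perceptual_loss, style_loss,
                  w_gan=0.1, w_l2=1.0, w_p=0.1, w_s=250.0):
    """Generator-side loss of Eq. (8); weights are illustrative placeholders.

    d_fake_logits: discriminator output D_c(I_pred, C_pred)
    perceptual_loss / style_loss: callables built on a pretrained feature
    extractor, supplied by the caller.
    """
    # Non-saturating adversarial term: push D_c towards "real" on I_pred
    loss_gan2 = F.binary_cross_entropy_with_logits(
        d_fake_logits, torch.ones_like(d_fake_logits))
    loss_l2 = F.mse_loss(i_pred, i_gt)       # voxel-wise reconstruction
    loss_p = perceptual_loss(i_pred, i_gt)   # feature-space similarity
    loss_s = style_loss(i_pred, i_gt)        # Gram-matrix style term
    return w_gan * loss_gan2 + w_l2 * loss_l2 + w_p * loss_p + w_s * loss_s
```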
C. ARCHITECTURE OF PROPOSED NETWORK
Fig. 3 illustrates the structural details of each component. Our generators are adapted from the method of Johnson et al. [43], which was demonstrated to perform well in image-to-image translation and super-resolution [44], [46], [47]. The generators of both networks include encoders, eight residual blocks, and decoders. Both discriminators follow a PatchGAN architecture [46], [48].
Fig. 3.
Individual architecture of edge connection and contrast completion networks. The hyperparameters of each layer are labelled as K (kernel size), N (number of channels), and S (stride). Both of the generators are built from ResNet with different normalization strategies. While spectral normalization is applied after each layer in the edge generator to enhance the stability, several layers in the contrast generator are designed without spectral normalization to speed up the training procedure. Note that the discriminator for both networks follows the same hyperparameters.
To further stabilize the networks by scaling weight matrices by their respective largest singular values, spectral normalization is applied in both the generator and discriminator [49]. Note that the edge generator uses spectral normalization and instance normalization across all layers [49], [50], whereas the contrast generator uses only instance normalization, as learning high-frequency information such as edges requires more constraints to maintain the stability of the network [50]. For low-frequency contrast information, however, spectral normalization is not necessary and might slow the training procedure; it is therefore removed from the contrast generator.
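This normalization choice can be illustrated with a small PyTorch building block; the sketch below reflects the design decision rather than the exact layer configuration of Fig. 3, and uses torch.nn.utils.spectral_norm together with instance normalization.

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

def conv_block(in_ch, out_ch, kernel=3, stride=1, use_spectral_norm=True):
    """Conv + instance norm + ReLU block.

    use_spectral_norm=True rescales the convolution weights by their largest
    singular value (edge generator and discriminators); use_spectral_norm=False
    keeps only instance normalization (contrast generator).
    """
    conv = nn.Conv2d(in_ch, out_ch, kernel, stride, padding=kernel // 2)
    if use_spectral_norm:
        conv = spectral_norm(conv)
    return nn.Sequential(conv, nn.InstanceNorm2d(out_ch), nn.ReLU(inplace=True))
```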
III. EXPERIMENTS
In this section, we introduce our experimental datasets, training and testing schemes, two state-of-the-art methods used for comparison in similar applications, and our evaluation methods.
A. DATA DESCRIPTION
To demonstrate the generalization of EG-GAN, we used a large publicly available T1-weighted brain image dataset, the Human Connectome Project (HCP) S1200 collection [51]. We downloaded images after preprocessing pipelines, including distortion correction and brain extraction, and randomly chose 600 subjects for our experiments. The images have 0.7 mm isotropic resolution; we removed all-zero boundary slices and fitted the volumes to 256 × 256 × 256. The whole dataset was split into 50% training, 25% validation, and 25% testing, without overlapping subjects. Only the predicted results from the testing set were used in our final performance evaluation and comparison.
The original images were used as ground truth images $I_{GT}$, and their thick-slice counterparts were artificially generated to mimic damaged slices or 2D MRI scans. To simulate typical clinical 2D scans as closely as possible, the slice thickness in the through-plane direction was made four times larger, i.e., three out of every four slices in the through-plane direction were masked (filled with the value 1, as illustrated in Fig. 1(b)). Therefore, the thick-slice images have the same size as the reference images (256 × 256 × 256).
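A minimal sketch of how such a thick-slice mask could be generated is given below, assuming NumPy volumes of size 256 × 256 × 256 with the axial direction as the last axis; the function and variable names are our own.

```python
import numpy as np

def thick_slice_mask(shape=(256, 256, 256), keep_every=4, axis=2):
    """Binary mask for simulated thick slices: three out of every four slices
    along `axis` are masked (value 1) and every fourth slice is kept (value 0)."""
    mask = np.ones(shape, dtype=np.float32)
    idx = [slice(None)] * len(shape)
    idx[axis] = slice(0, shape[axis], keep_every)  # slices that remain observed
    mask[tuple(idx)] = 0.0
    return mask

# Example: fill masked slices with 1 to form the degraded input volume
# degraded = volume * (1 - mask) + mask
```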
In addition, to investigate the image quality difference between imputation from k-space and image-space resampling, we generated low-resolution images in a similar way to Chen et al. [52]: we applied the FFT to convert the original image into k-space, truncated the outer part of the 3D k-space data, filled the truncated region with zeros, and finally applied the inverse FFT to convert back to image space. High-frequency truncation and zero padding in k-space effect low-pass filtering with interpolation back to the original image size. The interpolated slices (representing three out of every four slices) were masked in the through-plane direction to mimic 2D MRI scenarios.
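The pseudo-k-space degradation can be sketched as follows, assuming a NumPy volume and truncation along the through-plane axis only; the exact truncation window used in [52] may differ.

```python
import numpy as np

def kspace_lowpass(volume, factor=4, axis=2):
    """Low-pass a 3D volume along one axis via k-space truncation and zero-filling,
    keeping the original matrix size (i.e., sinc-like interpolation)."""
    k = np.fft.fftshift(np.fft.fftn(volume))
    n = volume.shape[axis]
    keep = n // factor
    start = (n - keep) // 2
    window = np.zeros(n, dtype=bool)
    window[start:start + keep] = True          # central 1/factor of k-space
    shape = [1] * volume.ndim
    shape[axis] = n
    k_filtered = k * window.reshape(shape)     # zero-fill the outer k-space
    return np.abs(np.fft.ifftn(np.fft.ifftshift(k_filtered)))
```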
To evaluate the intra-subject generalization of our network and slice consistency, we chose the axial plane as the thick slice direction to conduct the following experiments.
B. TRAINING PROCEDURE
The models were implemented in PyTorch on high-performance computing clusters with NVIDIA Tesla P100 GPUs. The learning rate was set to 10⁻⁴ for the generators and 10⁻⁴ for the discriminators, and the models were optimized with the ADAM optimizer. Our training image size was 256 × 256 with a batch size of eight; the Canny edge detection threshold was set to 0.5, and the standard deviation of the Gaussian filter used in the Canny detector was one. We set the maximum number of training iterations to 20,000 for the edge connection stage and 40,000 for the image completion stage, as no significant improvement was observed afterward.
Training takes about 230 ms per iteration, summing to less than two hours for the edge connection stage and less than four hours for the image completion stage. Prediction takes about two minutes per volume.
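For reference, the reported training settings can be collected into a single configuration; values not stated in the paper (e.g., Adam betas) are intentionally omitted here rather than guessed.

```python
# Training configuration reported in Section III.B (PyTorch / Tesla P100).
TRAIN_CONFIG = {
    "optimizer": "Adam",
    "lr_generator": 1e-4,
    "lr_discriminator": 1e-4,
    "image_size": (256, 256),
    "batch_size": 8,
    "canny_threshold": 0.5,
    "canny_sigma": 1.0,
    "max_iters_edge_stage": 20_000,
    "max_iters_contrast_stage": 40_000,
}
```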
C. COMPARISON METHODS
To evaluate the competence of our proposed method, we first established a performance baseline using the simplest and fastest interpolation methods: nearest neighbor (NN) and cubic interpolation. To evaluate the improvement contributed by the edge guidance, we then compared a variant of the context encoder [38] (the contrast completion part of our proposed network) with EG-GAN.
Because there is very limited work on image imputation, we also compared against a method from a similar application, the GAN-based 3D MRI reconstruction in [53], coined DCSRN-GAN, which outperformed previously proposed methods in this task [23], [52]. We used the same data preparation steps as [52] and followed the same hyperparameters to build mDCSRN-GAN, as proposed by the authors [53]. Note that we did not implement our proposed method in 3D as in [52] and [53], due to memory constraints and because patch-based implementations degrade the performance of the edge connection stage, which is essential for our method. To provide a baseline for the super-resolution task, we used the zero-padded k-space down-sampling described in Section III.A, whose zero-filled inverse Fourier transform acts as a linear up-sampling method. Similarly, we used the pseudo-k-space down-sampled data as "low-resolution" input images when comparing DCSRN-GAN, the context encoder, and EG-GAN.
We also tested our model on two kinds of artifact-affected images, spike artifacts and zipper artifacts, which are the most relevant extensions of our proposed mask model. To simulate the spike artifact, we randomly added spike gradients at different frequencies and angles in the pseudo-k-space of the HCP data. For the zipper artifact removal experiment, we visually screened a large multi-center pediatric clinical dataset [54] including T1-weighted axial readouts (256 × 256 × 22, 0.86 mm × 0.86 mm × 6 mm) and T1-weighted sagittal readouts (24 × 256 × 256, 6 mm × 0.86 mm × 0.86 mm) under the supervision of a certified on-site radiologist. Two subjects (ages 9.5 and 13 years) whose images contained zipper artifacts only were identified and tested using our model trained on the HCP dataset.
D. EVALUATION
To evaluate the similarity between the original images and our results, we compared voxel-wise intensity accuracy using the peak signal-to-noise ratio (PSNR). Furthermore, we measured the structural similarity index (SSIM) [55], which is considered to reflect perceptual image quality. Both metrics were computed independently for each comparison.
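A minimal sketch of how the two metrics could be computed with scikit-image is shown below; taking the data range from the reference volume is our assumption.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_volume(restored, reference):
    """PSNR and SSIM between a restored volume and its reference,
    assuming co-registered, equally sized arrays."""
    data_range = float(reference.max() - reference.min())
    psnr = peak_signal_noise_ratio(reference, restored, data_range=data_range)
    ssim = structural_similarity(reference, restored, data_range=data_range)
    return psnr, ssim
```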
IV. EXPERIMENTAL RESULTS
A. IMPUTATION
Fig. 4 shows the restored images in sagittal and coronal views from a randomly selected subject for the four methods. The top panel shows the results of the basic interpolation methods, NN and cubic. In both the whole-brain view and the regional zoomed-in views, we observe the expected stairway effects in the NN results in Fig. 4(a). Cubic interpolation yields more aesthetically pleasing results, but at the cost of some edge blurring (Fig. 4(b)); white matter disconnection can be seen in the second zoomed image because the interpolated values are closer to the neighboring grey matter voxels, compared with the reference image in Fig. 4(e). The bottom panel compares the results of the context encoder and our proposed method, EG-GAN, in Fig. 4(c) and Fig. 4(d), respectively. Neither the context encoder nor EG-GAN exhibits the stairway effects or broken white matter tracts of the interpolation methods, and both recover many of the fine details of the brain anatomy. However, the result of EG-GAN appears much more visually plausible, and the method is capable of restoring fine details of small vessels (the second magnified region in Fig. 4(d)) as well as distinguishable and smooth cortical boundaries. Note that the result of the context encoder shows faint traces of the masked rows (missing slices in the axial direction), which can be observed in all three magnified regions in Fig. 4(c). To demonstrate inter-slice consistency, the axial slices filled in the through-plane direction by EG-GAN are presented in Supplemental Fig. 1.
Fig. 4.
MRI through-plane imputation results: representative coronal and sagittal slices of one subject from the HCP dataset. The original T1-weighted image (reference) is down-sampled in the through-plane direction, and the missing axial slices are restored by different methods: nearest neighbor, cubic, context encoder, and EG-GAN. Our method provides more visually plausible results that recover more brain anatomy without stairway effects or broken white matter tracts. Compared with the context encoder, EG-GAN mitigates the stripe-shaped artifact caused by the mask model.
Quantitative results are summarized in Table I. Consistent with Fig. 4, the context encoder, which is widely used for image inpainting, achieved higher PSNR and SSIM than NN or cubic interpolation. EG-GAN, however, outperformed the traditional methods by roughly 10% in PSNR and 5% in SSIM. Note that the standard deviation of the PSNR for EG-GAN is only about a third of that of the other methods.
TABLE I.
EVALUATION RESULTS OF FOUR DIFFERENT METHODS FOR IMAGE IMPUTATION PRESENTED AS MEAN ± STANDARD DEVIATION. PSNR: PEAK SIGNAL-TO-NOISE RATIO; SSIM: STRUCTURAL SIMILARITY INDEX.
| Method | PSNR | SSIM |
|---|---|---|
| Nearest neighbour | 36.5402 ± 1.5988 | 0.9177 ± 0.0055 |
| Bi-cubic | 37.8756 ± 1.7133 | 0.9410 ± 0.0050 |
| Context Encoder | 38.8756 ± 1.3537 | 0.9528 ± 0.0325 |
| EG-GAN | 40.4849 ± 0.5101 | 0.9745 ± 0.0022 |
B. SUPER-RESOLUTION FROM PSEUDO-K-SPACE
Fig. 5 shows the reconstructed images in sagittal and coronal views of another randomly selected subject using the pseudo-k-space down-sampled data. We compared the basic linear interpolation method, a state-of-the-art super-resolution method, DCSRN-GAN, and two image-inpainting-based methods, the context encoder and our EG-GAN, in Fig. 5(a)–(d), respectively. Both linear interpolation and DCSRN-GAN avoid stairway effects but show overall blurring when the three magnified regions are compared with the reference image in Fig. 5(e). We also observed areas that DCSRN-GAN failed to restore, as seen at the top of the second magnified image in Fig. 5(b). Comparing the results of the context encoder and EG-GAN, the context encoder is not able to restore regions where the folds of gyri and sulci are more complex, nor the blood vessel (the second magnified image in Fig. 5(c)), whereas EG-GAN can (bright voxels between sulcal folds in the coronal view of Fig. 5(e), and the second magnified image). The reconstructed axial slices are shown in Supplemental Fig. 2.
Fig. 5.
MRI super-resolution results: representative coronal and sagittal slices of one subject from the HCP dataset. Low resolution coronal and sagittal planes are generated by down-sampling the original T1-weighted scan (reference) in pseudo 3D k-space, and 3D high resolution scans are reconstructed from multiple 2D axial slices by different methods: linear, DCSRN-GAN, context encoder, and EG-GAN. Our approach reconstructs images with more anatomically plausible details and more distinct edges.
Table II shows the quantitative evaluation results of the linear, DCSRN-GAN, context encoder, and EG-GAN methods. Note that DCSRN-GAN and the context encoder achieved performance comparable to EG-GAN in both PSNR and SSIM. However, when comparing super-resolution with image imputation (Table I), both PSNR and SSIM are higher in the image imputation task.
TABLE II.
EVALUATION RESULTS OF FOUR METHODS FOR SUPER-RESOLUTION USING PSEUDO-K-SPACE DOWN-SAMPLED DATA PRESENTED AS MEAN ± STANDARD DEVIATION. PSNR: PEAK SIGNAL-TO-NOISE RATIO; SSIM: STRUCTURAL SIMILARITY INDEX.
| Method | PSNR | SSIM |
|---|---|---|
| Linear | 37.8820 ± 2.4217 | 0.9332 ± 0.0052 |
| DCSRN-GAN | 39.8407 ± 2.3747 | 0.9604 ± 0.0118 |
| Context Encoder | 39.4249 ± 1.4223 | 0.9447 ± 0.0089 |
| EG-GAN | 40.1975 ± 1.0164 | 0.9559 ± 0.0052 |
C. ARTIFACT CORRECTION
Fig. 6(a) and (d) present axial and coronal views of images with mild zipper artifacts from two subjects. The mask that we used in the training model completely covers the artifact-corrupted rows (Fig. 6(b) and (e)). Fig. 6(c) and (f) demonstrate that our EG-GAN model can recover the zipper-artifact-corrupted rows. The magnified images show the detail of the recovered regions, confirming that the recovered rows remove the artifact while preserving the anatomical contrast both in the ventricular area (the magnified image of Fig. 6(c)) and in the cortex (the magnified image of Fig. 6(f)).
Fig. 6.
Clinical axial (a) and sagittal (d) scans with zipper artifact and their corresponding zoomed-in regions, masks overlapped on the artifact corrupted rows (b, e), and artifact corrected images by EG-GAN (c, f) with the magnified areas, respectively.
Our simulated spike artifacts on a random subject in the HCP dataset are displayed in Fig. 7(a) and (d). Owing to the different frequencies and angles in pseudo-k-space, they manifest with different directions and brightnesses. The artifact-corrected images shown in Fig. 7(b) and (e) restore the affected lines in (a) and (d).
Fig. 7.

Spike artifacts removal: representative coronal slices of two subjects from the HCP dataset. The artifacts are simulated by adding two different random spike gradients (a, d) in pseudo k-space. The corrected images and their corresponding references are shown in (b, e) and (c, f), respectively.
V. DISCUSSION AND CONCLUSION
While true 3D images have many advantages, such as higher SNR and ease of image reformatting and registration, the lengthy acquisitions required for 3D imaging are vulnerable to motion. Hence, multiplanar 2D acquisitions are the norm in pediatric imaging, particularly for T2-weighted images. In this work, the process of restoring slices missing due to image artifacts or 2D scanning schemes was first modeled as an image imputation problem. With this data representation, we employed the context encoder [38], which was initially designed to solve the image inpainting problem, for our specific MRI restoration tasks (imputation, super-resolution, and artifact removal). However, the original context encoder is not capable of retrieving the missing slices because it lacks constraints. Therefore, we introduced edges as a constraint to boost the network performance. We hypothesized that an intact edge map could enhance the overall system performance because of its intrinsic high-frequency information and its ability to constrain contrast matching. The performance of our proposed edge-guided image restoration network supports this hypothesis by demonstrating higher PSNR and SSIM than the widely used context encoder, which was initially designed for generating the contents of an arbitrary image region conditioned on its surroundings [27], [38], [56].
One of the differences between our work and Nazeri's method [37] is that our inpainting model is rooted in the context encoder and we only predict the masked region, whereas the network in [37] predicts the whole image, modifying physically acquired data. This difference is also embedded in the loss functions: the reconstruction loss in [37] is only normalized by the mask size, while the context encoder, as well as our work, directly calculates the reconstruction loss within the masked region. Despite excluding the originally acquired data from image optimization, we did not observe any boundary artifacts.
In image imputation, the image restored with NN showed stairway effects due to duplication from neighboring slices (Fig. 4(a)). Although cubic interpolation outperformed NN (in PSNR, SSIM, and aesthetics), the smoothness of the interpolated regions led to both underestimation and overestimation of the target voxels. The context encoder fitted the regional voxel distributions in an image, producing better PSNR and SSIM than either NN or cubic interpolation. However, due to the lack of structural constraints, the masked region could not be fully recovered, and images were left with abnormal texture and boundary effects for the mask sizes used in our study. By conditioning on edges, EG-GAN improved contrast generalization and alleviated mode collapse near the boundaries [57], providing quantitatively and aesthetically superior results.
While EG-GAN outperformed all methods in both the imputation (Fig. 4 and Table I) and super-resolution (Fig. 5 and Table II) tasks, EG-GAN exhibited half of the standard deviation during imputation compared with super-resolution.
In imputation, uncorrupted adjacent slices were used to train the network, whereas in super-resolution the adjacent slices suffer from through-plane blurring that may compromise edge reconstruction. In addition, both the context encoder and DCSRN-GAN [38], [53] enforce voxel-wise intensity similarity between the synthesized and real images (DCSRN-GAN yields SSIM comparable to EG-GAN in Table II); however, the structure of the image content, such as complex anatomical details, is not emphasized. This is partially because the context encoder aims at reducing the reconstruction loss (l2 loss) and lacks constraints to prevent mode collapse in prediction. In contrast, mDCSRN-GAN combines a voxel-wise reconstruction loss with an adversarial loss to stabilize the network and further improve structural similarity. However, an adversarial loss that matches the generated and real distributions may produce artificial structures [58], which explains the higher standard deviation in SSIM compared with EG-GAN. Close inspection of Fig. 5 demonstrates superior image sharpness and structural texture in EG-GAN compared with DCSRN-GAN. However, comparing Fig. 5(d) and (e), our method does not correctly retrieve the shape of the small vessel in the second zoomed-in figure (its location is highlighted in the coronal view in (a)). This is because most of the vessel falls within the masked region, resulting in a failure to connect the correct contour of the vessel. Note that none of the other comparison methods is able to restore the exact shape of the vessel due to their intrinsic properties; this difficulty in restoring small anatomical structures could be alleviated by adjusting the mask size or randomly shuffling the starting masked slice.
Beyond the task type, mask size also plays a critical role in network performance. Both SSIM and PSNR decrease by up to 15% when the mask size increases from 20% to 50% [59]. Our EG-GAN method collapsed when the mask size increased from 75% to 80%, primarily because edge continuity could not be unambiguously restored. We postulate that exploiting the information provided by two orthogonal 2D scans (which are commonly acquired in clinical practice) could further improve the performance of EG-GAN. There are ongoing efforts to combine orthogonal 2D images using Gaussian mixture models or various priors [60], [61]; such methods could potentially be embedded in EG-GAN when combining two orthogonal 2D scans in future work. For example, slice-to-volume image registration usually suffers from a lack of geometric information and may require an image fusion process when handling multiple slices [62]. Additionally, the performance of directly registering 2D images to a 3D volume might be affected by the viewpoint angle: although registration can be performed, differing 2D feature points might cause inaccurate results [63]. Therefore, building a superior isotropic 3D volume could benefit downstream registration and multi-site group analyses.
Image artifacts such as zipper and gradient spike artifacts are disruptive to the clinical workflow because they are sporadic and do not always resolve when a sequence is repeated. In this paper, we demonstrate that EG-GAN offers a simple method for restoring images affected by these artifacts that could be easily implemented on clinical workstations. As zipper artifacts usually affect only a few lines or columns of the image, our method can correct them with high fidelity. Spike artifacts, whose mask pattern can be irregular, are more challenging because they affect the entire image (Fig. 7) and the mask percentage is very close to the upper limit of restoration (75%). While the spike artifact shown in Fig. 7 could be fully restored, satisfactory results cannot be guaranteed when a spike artifact affects a larger fraction of the image. The same applies to zipper artifacts if the "zipper" affects more than 75% of the slices, the maximum masked area that our model can restore. Note that this mask size should be counted locally rather than globally: even if only five or six rows/columns of the entire image are corrupted, the masked ratio could be 5/6 or 6/7 locally and hence prevent full image recovery. This inference stems from the model that we presented: at least a quarter of the adjacent slices are needed to recover the missing edges, providing a complete and effective constraint, which in turn allows the image to be completed with the enhanced consistency of the contrast-matching network.
In addition, for artifact correction, mask generation is required, which may be challenging for tasks such as motion artifact and cardiac tagging, whose masks could be highly irregular. Therefore, an automatic and accurate method might be required to extract the mask of the areas to be restored. Another limitation of artifact correction using our method is that lesions lying exclusively in the masked area could be painted over and not recovered.
As discussed above, edge completion plays a critical role in improving the image restoration results. This prevents our network from being trained in a purely end-to-end fashion: we had to visually ensure that the edge connection stage was well trained before starting contrast completion. In practice, using task arrays on GPUs could make training of our proposed network end-to-end, but an intermediate check is still recommended. Secondly, we used a fairly old and primitive technique for constructing edge maps (Canny edge detection) [36]. Learning-based methods, such as holistically nested edge detection, could potentially be combined with the Canny detector in future work [64], [65] to promote more efficient learning. In addition, more robust edge detection and completion could potentially overcome the limitations encountered with 3D patch implementations.
In summary, our proposed edge-guided image restoration network decouples the recovery of high and low-frequency components of the missing information to generate coherent anatomical details from adjacent slices. The network proved to effectively restore the missing image detail either due to 2D scanning schemes, or due to image artifacts. We propose that EG-GAN could improve utilization of clinical 2D scans for 3D atlas generation and big-data comparative studies of brain morphometry.
Supplementary Material
Acknowledgments
This work was supported by the National Heart Lung and Blood Institute (1U01HL117718-01, 1RO1HL136484-A1), by the National Center for Research Resources (UL1 TR001855-02). Computation for the work described in this paper was supported by the University of Southern California’s Center for High-Performance Computing (https://hpcc.usc.edu).
Biographies

YAQIONG CHAI was born in China. She received the B.S. degree in automation science and electrical engineering from Beijing University of Aeronautics and Astronautics (Beihang University), Beijing, in 2009, the M.S. degree in signal and image informatics from the Chinese Academy of Sciences, Beijing, China, and the Ph.D. degree in biomedical engineering from the University of Southern California, Los Angeles, CA, USA, in 2019.
She currently works as a postdoctoral scholar-research associate at USC's Mark and Mary Stevens Neuroimaging and Informatics Institute, Los Angeles, CA, USA. Her research interests lie in the areas of multimodal neuroimaging, machine learning, image synthesis, and neuroimaging applications in neurodegenerative diseases.

BOTIAN XU received the B.S. degree in electrical engineering from Xidian University, Xi’an, China, in 2012 and the M.S. degree in electrical engineering from University of Southern California, Los Angeles, CA, USA, in 2017. He is currently pursuing the Ph.D. degree in biomedical engineering at University of Southern California, Los Angeles, CA, USA.
Currently, he is a Research Assistant with the Saban Research Institute at Children’s Hospital Los Angeles, Los Angeles, CA, USA. His research interests include machine learning, signal and image processing, and MRI physics.

KANGNING ZHANG received the B.S. degree in electrical engineering from Beijing Jiaotong University, Beijing, China, in 2015, and the M.S. degree in electrical engineering from the University of Southern California, Los Angeles, CA, USA, in 2018. He is currently a Ph.D. student in electrical and computer engineering at UC Davis, where he does research on image signal processing and compressive sensing.

NATASHA LEPORE graduated with a B.Sc. in physics and mathematics from the University of Montreal and then obtained a master's degree in applied mathematics, in general relativity, from Cambridge University. She received her Ph.D. in theoretical physics from Harvard University. She started working in medical imaging as a postdoctoral fellow with Prof. Paul Thompson at the Laboratory of Neuroimaging at UCLA. Since 2009, she has been a faculty member in Radiology and in Biomedical Engineering at Children's Hospital Los Angeles and the University of Southern California.
Currently, she is the director of the Computational Imaging of Brain Organization Research Group (CIBORG), which specializes in mathematical and numerical methods for studying brain anatomy and function through magnetic resonance imaging. These methods are applied to furthering the understanding of different neurological disorders, as well as normal and abnormal brain development.

JOHN C. WOOD received the B.S. degree in electrical engineering from the University of California, Davis, CA, USA, in 1984 and the M.D./Ph.D. degree in bioengineering from the University of Michigan, Ann Arbor, MI, USA, in 1994, with a focus in time-frequency transform analysis. He performed his residency and fellowship in Pediatric Cardiology at Yale and joined Children's Hospital Los Angeles/USC Keck School of Medicine in 1999, studying wavelet-packet denoising applications in MRI. Dr. Wood is the director of cardiovascular MRI and specializes in the MRI assessment of congenital heart disease as well as noninvasive assessment of iron burden by MRI. He has been studying the cardiovascular consequences of hemoglobinopathies for almost a decade. He is one of the pioneers of MRI-based cardiac and liver iron measurements but is also studying oral chelation strategies in animals and humans. He was the principal investigator for the NIH-sponsored Early Detection of Iron Cardiomyopathy Trial, whose goal was to identify earlier markers of cardiac dysfunction. He also has funded projects examining pancreatic and pituitary iron burden by MRI and their functional correlates. He received an ARRA Challenge grant to study the role of iron overload and other factors in sickle cell vasculopathy and is exploring the links between abnormal red cell mechanics and vascular dynamics in the hemoglobinopathies.
Recently, his research has focused on the relationship between cerebrovascular reserve, anemia, and white matter loss in chronic anemia patients at particularly high risk for silent stroke. He also explores white matter quantification and super-resolution imaging using machine learning methods.
Dr. Wood’s memberships include American Medical Association, American Academy of Pediatrics, Society for Cardiovascular Magnetic Resonance, and International Society for Magnetic Resonance in Medicine. His awards and honors include Tau Beta Pi (National Engineering Honor Society) in 1982, Alpha Omega Alpha in 1993, Alfred F. Towsley Award for Pediatrics in 1994, RSNA scholar in 2001, Russell Smith Award for Innovation in Pediatric Research in 2009.
REFERENCES
- [1]. Somasundaram K and Kalavathi P, "Analysis of Imaging Artifacts in MR Brain Images," Orient. J. Comput. Sci. Technol., vol. 5, no. 1, pp. 135–141, 2012.
- [2]. Stadler A and Ba-Ssalamah A, "Artifacts in body MR imaging: their appearance and how to eliminate them," pp. 1242–1255, 2007.
- [3]. Zhuo J and Gullapalli RP, "MR Artifacts, Safety, and Quality Control," RadioGraphics, vol. 26, no. 1, pp. 275–297, 2006.
- [4]. Welch EB, Felmlee JP, Ehman RL, and Manduca A, "Motion correction using the k-space phase difference of orthogonal acquisitions," Magn. Reson. Med., vol. 48, no. 1, pp. 147–156, 2002.
- [5]. Maclaren J, Herbst M, Speck O, and Zaitsev M, "Prospective motion correction in brain imaging: A review," Magn. Reson. Med., vol. 69, no. 3, pp. 621–636, Mar. 2013.
- [6]. Dietrich O, Reiser MF, and Schoenberg SO, "Artifacts in 3-T MRI: Physical background and reduction strategies," Eur. J. Radiol., vol. 65, no. 1, pp. 29–35, Jan. 2008.
- [7]. Heiland S, "From A as in aliasing to Z as in zipper: Artifacts in MRI," Clinical Neuroradiology, vol. 18, no. 1, pp. 25–36, Mar. 2008.
- [8]. Fezoulidis IV, "T2 FLAIR artifacts at 3-T brain magnetic resonance imaging," J. Clin. Imaging, vol. 38, no. 2, pp. 85–90, 2013.
- [9]. Zaitsev M, Maclaren J, and Herbst M, "Motion artifacts in MRI: A complex problem with many partial solutions," J. Magn. Reson. Imaging, vol. 42, no. 4, pp. 887–901, Oct. 2015.
- [10]. Despotović I, Goossens B, and Philips W, "MRI segmentation of the human brain: Challenges, methods, and applications," Computational and Mathematical Methods in Medicine, vol. 2015, 2015.
- [11]. Kostelec PJ and Periaswamy S, "Image Registration for MRI," Mod. Signal Process., vol. 46, pp. 161–184, 2003.
- [12]. Oliveira FPM and Tavares JMRS, "Medical image registration: a review," Comput. Methods Biomech. Biomed. Engin., vol. 17, no. 2, pp. 73–93, Jan. 2014.
- [13]. Lehmann TM, Gönner C, and Spitzer K, "Survey: Interpolation Methods in Medical Image Processing," vol. 18, no. 11, pp. 1049–1075, 1999.
- [14]. Mahmoudzadeh AP and Kashou NH, "Interpolation-based super-resolution reconstruction: effects of slice thickness," J. Med. Imaging, vol. 1, no. 3, p. 034007, 2014.
- [15]. Greenspan H, Oz G, Kiryati N, and Peled S, "MRI inter-slice reconstruction using super-resolution," vol. 20, pp. 437–446, 2002.
- [16]. Yang J, Wright J, Huang TS, and Ma Y, "Image super-resolution via sparse representation," IEEE Trans. Image Process., vol. 19, no. 11, pp. 2861–2873, 2010.
- [17]. Irani M and Peleg S, "Motion analysis for image enhancement: Resolution, occlusion, and transparency," J. Vis. Commun. Image Represent., vol. 4, no. 4, pp. 324–335, 1993.
- [18]. Dalca AV, Bouman KL, Freeman WT, Rost NS, Sabuncu MR, and Golland P, "Medical Image Imputation From Image Collections," IEEE Trans. Med. Imaging, vol. 38, no. 2, pp. 504–514, 2019.
- [19]. Buades A, Coll B, and Morel JM, "A non-local algorithm for image denoising," in Proc. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2005, vol. II, pp. 60–65.
- [20]. Coupé P, Yger P, Prima S, Hellier P, Kervrann C, and Barillot C, "An optimized blockwise nonlocal means denoising filter for 3-D magnetic resonance images," IEEE Trans. Med. Imaging, vol. 27, no. 4, pp. 425–441, Apr. 2008.
- [21]. Jin KH, Um JY, Lee D, Lee J, Park SH, and Ye JC, "MRI artifact correction using sparse + low-rank decomposition of annihilating filter-based Hankel matrix," Magn. Reson. Med., vol. 78, no. 1, pp. 327–340, 2017.
- [22]. LeCun Y, Bengio Y, and Hinton G, "Deep learning," Nature, vol. 521, no. 7553, pp. 436–444, May 2015.
- [23]. Dong C, Loy CC, and Tang X, "Accelerating the super-resolution convolutional neural network," in European Conference on Computer Vision, 2016, pp. 391–407.
- [24]. Kim J, Lee JK, and Lee KM, "Accurate Image Super-Resolution Using Very Deep Convolutional Networks," in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 1646–1654.
- [25]. Pham CH et al., "Multiscale brain MRI super-resolution using deep 3D convolutional networks," Computerized Medical Imaging and Graphics, vol. 77, pp. 197–200, 2019.
- [26]. Odena A, Dumoulin V, and Olah C, "Deconvolution and Checkerboard Artifacts," Distill, vol. 1, no. 10, p. e3, Oct. 2017.
- [27]. Wang TC, Liu MY, Zhu JY, Tao A, Kautz J, and Catanzaro B, "High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs," in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 8798–8807.
- [28]. Ravì D, Szczotka AB, Shakir DI, Pereira SP, and Vercauteren T, "Adversarial training with cycle consistency for unsupervised super-resolution in endomicroscopy," in Medical Imaging with Deep Learning (MIDL), 2018, pp. 1–10.
- [29]. Sánchez I and Vilaplana V, "Brain MRI super-resolution using 3D generative adversarial networks," arXiv preprint arXiv:1812.11440, 2018.
- [30]. Émile-Mâle G, The Restorer's Handbook of Easel Painting. Van Nostrand Reinhold, 1976.
- [31]. Eitz M, Hays J, and Alexa M, "How do humans sketch objects?," ACM Trans. Graph., vol. 31, no. 4, Jul. 2012.
- [32]. Garland P, "Inpainting," in Painting Conservation Catalog, 2011, pp. 34–52.
- [33]. Muthukumar S, Krishnan D, Pasupathi P, and Deepa S, "Analysis of Image Inpainting Techniques with Exemplar, Poisson, Successive Elimination and 8 Pixel Neighborhood Methods," Int. J. Comput. Appl., vol. 9, no. 11, pp. 15–18, 2010.
- [34]. Bugeau A, Bertalmío M, Caselles V, and Sapiro G, "A comprehensive framework for image inpainting," IEEE Trans. Image Process., vol. 19, no. 10, pp. 2634–2645, Oct. 2010.
- [35]. Bertalmio M, Vese L, Sapiro G, and Osher S, "Simultaneous Structure and Texture Image Inpainting," IEEE Trans. Image Process., vol. 12, no. 8, 2003.
- [36]. Canny J, "A Computational Approach to Edge Detection," IEEE Trans. Pattern Anal. Mach. Intell., vol. PAMI-8, no. 6, pp. 679–698, 1986.
- [37]. Nazeri K, Ng E, Joseph T, Qureshi FZ, and Ebrahimi M, "EdgeConnect: Structure Guided Image Inpainting using Edge Prediction," in Proc. IEEE International Conference on Computer Vision Workshops, 2019.
- [38]. Pathak D, Krahenbuhl P, Donahue J, Darrell T, and Efros AA, "Context Encoders: Feature Learning by Inpainting," in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2536–2544.
- [39]. Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, and Bengio Y, "Generative adversarial nets," in Advances in Neural Information Processing Systems, 2014, pp. 2672–2680.
- [40]. Milletari F, Navab N, and Ahmadi SA, "V-Net: Fully convolutional neural networks for volumetric medical image segmentation," in Proc. 2016 Fourth International Conference on 3D Vision (3DV), 2016, pp. 565–571.
- [41]. Taha AA and Hanbury A, "Metrics for evaluating 3D medical image segmentation: Analysis, selection, and tool," BMC Med. Imaging, vol. 15, no. 1, 2015.
- [42]. Shen C et al., "On the influence of Dice loss function in multi-class organ segmentation of abdominal CT using 3D fully convolutional networks," pp. 1–8, 2018.
- [43]. Johnson J, Alahi A, and Fei-Fei L, "Perceptual losses for real-time style transfer and super-resolution," in Lecture Notes in Computer Science, vol. 9906, 2016, pp. 694–711.
- [44]. Zhang R, Isola P, Efros AA, Shechtman E, and Wang O, "The Unreasonable Effectiveness of Deep Features as a Perceptual Metric," in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 586–595.
- [45]. Gatys LA, Ecker AS, and Bethge M, "Image Style Transfer Using Convolutional Neural Networks," in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2414–2423.
- [46]. Zhu JY, Park T, Isola P, and Efros AA, "Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks," in Proc. IEEE International Conference on Computer Vision, 2017, pp. 2242–2251.
- [47]. Sajjadi MSM, Schölkopf B, and Hirsch M, "EnhanceNet: Single Image Super-Resolution Through Automated Texture Synthesis," in Proc. IEEE International Conference on Computer Vision, 2017, pp. 4501–4510.
- [48]. Isola P, Zhu JY, Zhou T, and Efros AA, "Image-to-image translation with conditional adversarial networks," in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 1125–1134.
- [49]. Miyato T, Kataoka T, Koyama M, and Yoshida Y, "Spectral normalization for generative adversarial networks," arXiv preprint, 2018.
- [50]. Ulyanov D, Vedaldi A, and Lempitsky V, "Improved texture networks: Maximizing quality and diversity in feed-forward stylization and texture synthesis," in Proc. IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 6924–6932.
- [51]. Van Essen DC, Smith SM, Barch DM, Behrens TEJ, Yacoub E, and Ugurbil K, "The WU-Minn Human Connectome Project: An overview," NeuroImage, vol. 80, pp. 62–79, Oct. 2013.
- [52]. Chen Y, Xie Y, Zhou Z, Shi F, Christodoulou AG, and Li D, "Brain MRI super resolution using 3D deep densely connected neural networks," in Proc. International Symposium on Biomedical Imaging, 2018, pp. 739–742.
- [53]. Chen Y, Shi F, Christodoulou AG, Xie Y, Zhou Z, and Li D, "Efficient and accurate MRI super-resolution using a generative adversarial network and 3D multi-level densely connected network," in Lecture Notes in Computer Science, vol. 11070, 2018, pp. 91–99.
- [54]. Casella JF et al., "Design of the Silent Cerebral Infarct Transfusion (SIT) Trial," Pediatr. Hematol. Oncol., vol. 27, no. 2, pp. 69–89, 2010.
- [55]. Wang Z, Bovik AC, Sheikh HR, and Simoncelli EP, "Image Quality Assessment: From Error Visibility to Structural Similarity," 2004.
- [56]. Yeh RA, Lim TY, Chen C, Schwing AG, Hasegawa-Johnson M, and Do M, "Image Restoration with Deep Generative Models," in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018, pp. 6772–6776.
- [57]. Creswell A, White T, Dumoulin V, Arulkumaran K, Sengupta B, and Bharath AA, "Generative Adversarial Networks: An Overview," IEEE Signal Process. Mag., vol. 35, no. 1, pp. 53–65, 2018.
- [58]. Yi X, Walia E, and Babyn P, "Generative adversarial network in medical imaging: A review," Med. Image Anal., vol. 58, 2019.
- [59]. Nazeri K, Ng E, Joseph T, Qureshi FZ, and Ebrahimi M, "EdgeConnect: Generative Image Inpainting with Adversarial Edge Learning," arXiv preprint, 2019.
- [60]. Liu D, Wang Z, Wen B, et al., "Robust Single Image Super-Resolution via Deep Networks With Sparse Prior," IEEE Trans. Image Process., vol. 25, no. 7, pp. 3194–3207, 2016.
- [61]. Zhang R, Bouman CA, Thibault JB, and Sauer KD, "Gaussian mixture Markov random field for image denoising and reconstruction," in Proc. IEEE Global Conference on Signal and Information Processing (GlobalSIP), 2013, pp. 1089–1092.
- [62]. Ferrante E and Paragios N, "Slice-to-volume medical image registration: A survey," Med. Image Anal., vol. 39, pp. 101–123, 2017.
- [63]. Byun S, Jung K, Im S, and Chang M, "Registration of 3D scan data using image reprojection," Int. J. Precis. Eng. Manuf., vol. 18, no. 9, pp. 1221–1229, 2017.
- [64]. Yu B, Zhou L, Wang L, Shi Y, Fripp J, and Bourgeat P, "Ea-GANs: Edge-Aware Generative Adversarial Networks for Cross-Modality MR Image Synthesis," IEEE Trans. Med. Imaging, vol. 38, no. 7, pp. 1750–1762, 2019.
- [65]. Xie S and Tu Z, "Holistically-Nested Edge Detection," in Proc. IEEE International Conference on Computer Vision, 2015, pp. 1395–1403.