IEEE Journal of Biomedical and Health Informatics. 2020 Dec 4;25(2):441–452. doi: 10.1109/JBHI.2020.3042523

COVID-19 CT Image Synthesis With a Conditional Generative Adversarial Network

Yifan Jiang 1, Han Chen 1, Murray Loew 2, Hanseok Ko 1
PMCID: PMC8545178  PMID: 33275588

Abstract

Coronavirus disease 2019 (COVID-19) is an ongoing global pandemic that has spread rapidly since December 2019. Real-time reverse transcription polymerase chain reaction (rRT-PCR) and chest computed tomography (CT) imaging both play an important role in COVID-19 diagnosis. Chest CT imaging offers the benefits of quick reporting, low cost, and high sensitivity for the detection of pulmonary infection. Recently, deep-learning-based computer vision methods have demonstrated great promise for use in medical imaging applications, including X-rays, magnetic resonance imaging, and CT imaging. However, training a deep-learning model requires large volumes of data, and medical staff face a high risk when collecting COVID-19 CT data due to the high infectivity of the disease. Another issue is the lack of experts available for data labeling. In order to meet the data requirements for COVID-19 CT imaging, we propose a CT image synthesis approach based on a conditional generative adversarial network that can effectively generate high-quality and realistic COVID-19 CT images for use in deep-learning-based medical imaging tasks. Experimental results show that the proposed method outperforms other state-of-the-art image synthesis methods on the generated COVID-19 CT images and shows promise for various machine learning applications, including semantic segmentation and classification.

Keywords: COVID-19, computed tomography, image synthesis, conditional generative adversarial network

I. Introduction

Coronavirus disease 2019 (COVID-19) [1], which was first identified in Wuhan, China, in December 2019, was declared a pandemic in March 2020 by the World Health Organization (WHO). As of 21 July 2020, there had been more than 14 million confirmed cases and 609,198 deaths across 188 countries and territories [2]. COVID-19 is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), and its most common symptoms include fever, dry cough, a loss of appetite, and fatigue, with common complications including pneumonia, liver injury, and septic shock [3], [4].

There are two main diagnostic approaches for COVID-19: rRT-PCR and chest computed tomography (CT) imaging [4]. In rRT-PCR, an RNA template is first converted by reverse transcriptase into complementary DNA (cDNA), which is then used as a template for exponential amplification using polymerase chain reaction (PCR). However, the sensitivity of rRT-PCR is relatively low for COVID-19 testing [5], [6]. As an alternative, chest CT scans capture tomographic images of the chest area at different angles, which are then reconstructed from the X-ray measurements. This approach has a higher sensitivity to COVID-19 and is less resource-intensive than traditional rRT-PCR [5], [6].

Over time, artificial intelligence (AI) has come to play an important role in medical imaging tasks, including CT imaging [7], [8], magnetic resonance imaging (MRI) [9], and X-ray imaging [10]. Deep learning is a particularly powerful AI approach that has been successfully employed in a wide range of medical imaging tasks due to the massive volumes of data that are now available. These large datasets allow deep-learning networks to be well trained, extending their generalizability to various applications. However, the collection of COVID-19 data for use in deep-learning models is far more difficult than normal data collection. Because COVID-19 is highly contagious [4], medical staff require full protective equipment for CT scans, and the CT scanner and other equipment need to be carefully disinfected after each scan. In addition, certain tasks, such as CT image segmentation, require well-labeled data, and labeling is labor-intensive. These problems mean that the COVID-19 CT data collection process can be difficult and time-consuming.

In order to speed up the COVID-19 CT data collection process for deep-learning-based CT imaging and to protect medical personnel from possible infection when coming into contact with COVID-19 patients, we propose a novel cGAN structure that contains a global-local generator and a multi-resolution discriminator. Both the generator and the discriminator have a dual-network design, so that each can learn global and local information from CT images individually. The dual structure also includes a communication mechanism for information exchange, which helps generate realistic CT images with a stable global structure and diverse local details. The main contributions of the proposed method are as follows:

  • 1)

We present a dual generator structure (the global-local generator). This global-local generator contains two individual generators that capture and reflect different levels of information in the CT data.

  • 2)

We propose a dual discriminator (the multi-resolution discriminator) that contains two sub-discriminators. Both learn to distinguish real inputs from fake ones, and they are specifically designed to learn from full-resolution and half-resolution CT data, respectively.

  • 3)

A dynamic communication mechanism is proposed for both the generator and the discriminator. In the generator, a dynamic element-wise sum process (DESUM) helps balance the information from the lung area and small lesion areas by dynamically weighting the two terms during the element-wise sum; it also prevents the generator from over-weighting details, as a traditional cGAN model tends to do on wild-scene datasets. In the discriminator, a dynamic feature matching process (DFM) is proposed to dynamically weight the loss terms from the two inputs of different resolutions. In particular, it allows the half-resolution discriminator to receive more information about the lung structure and large lesion areas, and it offers more features of small lesion areas to the full-resolution discriminator. This dual multi-resolution discriminator helps stabilize the training process and improves the image quality of the synthetic data.

  • 4)

The proposed method outperforms other state-of-the-art image synthesizers on several image quality metrics and demonstrates its potential for use in computer vision tasks such as semantic segmentation for COVID-19 chest CT imaging.

  • 5)

A safe COVID-19 chest CT data collection method based on image synthesis is presented. The potential applications of the proposed method are summarized as follows: (a) the COVID-19 CT synthesis method can be applied to data augmentation for deep-learning-based COVID-19 diagnosis approaches; (b) it can also be utilized to train intern radiologists who may need abundant examples of COVID-19 CT scans for training purposes; (c) it can be easily transferred from the CT imaging domain to other medical imaging areas (e.g., X-ray, MRI).

II. Related Works

Generative Adversarial Networks: Generative adversarial networks (GANs) were first reported in 2014 [11], and they have since been widely applied to many practical applications, including image synthesis [12]–[15], image enhancement [16], [17], human pose estimation [18], [19], and video generation [20], [21]. A GAN generally consists of a generator and a discriminator, where the goal of the generator is to fool the discriminator by generating a synthetic sample that cannot be distinguished from real samples. A common GAN extension is the conditional generative adversarial network (cGAN) [22], which generates images conditioned on class labels. A cGAN typically produces more realistic results than a traditional GAN due to the extra information provided by the conditional labels.

Conditional Image-to-Image Translation: Conditional image-to-image translation methods can be divided into three categories based on the input conditions. Class-conditional methods take class-wise labels as input to synthesize images [22]–[25], while, more recently, text-conditional methods have been introduced [26], [27]. cGAN-based methods [12]–[15], [26]–[32] have been widely used for various image-to-image translation tasks, including unsupervised [30], high-quality [13], multi-modal [14], [15], [28], and semantic-layout-conditional image-to-image translation [12]–[15]. In semantic-layout-conditional methods, realistic images are synthesized under the guidance of the semantic layout, which makes it easier to control a particular region of the image.

AI-based Diagnosis using COVID-19 CT Imaging: Since the outbreak of COVID-19, many researchers have turned to CT imaging technology in order to diagnose and investigate this disease. COVID-19 diagnosis methods based on chest CT imaging have been introduced in order to improve test efficiency [33]–[36]. Rather than using CT imaging for rapid COVID-19 diagnosis, semantic segmentation approaches have been employed to clearly label the infection foci in order to make it easier for medical personnel to identify infected regions in a CT image [37]–[41]. As an alternative to working at the pixel level, high-level classification or detection approaches have been proposed [42]–[44], which allow medical imaging experts to rapidly locate areas of infection, thus speeding up the diagnosis process. Although two CT image synthesis methods have been previously reported [45], [46], they did not focus on COVID-19 or lung CT imaging. A cGAN was first introduced to the COVID-19 CT image synthesis task in [47], which transforms a normal 3D CT slice into a synthetic abnormal slice conditioned on 3D noise.

III. COVID-19 CT Image Synthesis with a Conditional Generative Adversarial Network

In this paper, we propose a cGAN-based COVID-19 CT image synthesis method. Here, COVID-19 CT image synthesis is formulated as a semantic-layout-conditional image-to-image translation task. The structure consists of two main components: a global-local generator and a multi-resolution discriminator. During the training stage, the semantic segmentation map of a corresponding CT image is passed to the global-local generator, where the label information from the segmentation map is extracted via down-sampling and re-rendered into a synthesized image via up-sampling. The segmentation map is then concatenated with the corresponding CT image or synthesized CT image to form the input for the multi-resolution discriminator, which is used to distinguish the input as either real or synthesized. The decisions from the discriminator are used to calculate the loss and update the parameters of both the generator and the discriminator. During the testing stage, only the generator is involved. A data-augmented segmentation map is used as the input to the generator, from which a realistic synthesized image is obtained after extraction and re-rendering. This synthesized lung CT image is then combined with the non-lung area to form a complete synthesized CT image as the final result. Fig. 2 presents an overview of the proposed method.
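To make the training procedure more concrete, the following PyTorch-style sketch outlines one training iteration of the alternating scheme described above. The names GlobalLocalGenerator, MultiResolutionDiscriminator, adversarial_loss, dynamic_feature_matching_loss, and loader are placeholders for the components detailed in Sections III-A to III-D rather than the authors' released code; only the conditioning scheme (concatenating the segmentation map with the real or synthesized CT image) and the stated learning rate and feature matching weight are taken from the paper.

```python
import torch

# Hypothetical stand-ins for the components described in Sections III-A and III-B.
G = GlobalLocalGenerator()            # global-local generator (Section III-A)
D = MultiResolutionDiscriminator()    # multi-resolution discriminator (Section III-B)

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
lambda_fm = 10.0                      # feature matching loss weight (Section IV-A)

for seg_map, real_ct in loader:       # paired (segmentation map, lung CT) batches
    fake_ct = G(seg_map)              # synthesize a lung CT from the label map

    # Discriminator step: patch-wise real/fake decisions conditioned on the segmentation map.
    d_real, _ = D(torch.cat([seg_map, real_ct], dim=1))
    d_fake, _ = D(torch.cat([seg_map, fake_ct.detach()], dim=1))
    loss_D = adversarial_loss(d_real, real=True) + adversarial_loss(d_fake, real=False)
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator step: fool D and match its intermediate features (DFM loss, Section III-D).
    d_fake, feats_fake = D(torch.cat([seg_map, fake_ct], dim=1))
    _, feats_real = D(torch.cat([seg_map, real_ct], dim=1))
    loss_G = (adversarial_loss(d_fake, real=True)
              + lambda_fm * dynamic_feature_matching_loss(feats_real, feats_fake))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
```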

Fig. 2. Overview of the proposed method. The upper section contains the training processes of the global-local generator and the multi-resolution discriminator, while the lower right section shows the testing process. Within the global-local generator block, two types of generator are present: a global information generator and a local detail generator. The two individual training processes and the single joint training process are depicted with arrows of three different colors. The DESUM block, shown in purple, represents the dynamic element-wise sum process. The multi-resolution discriminator is depicted in blue, and the dynamic feature matching process (DFM) is also shown as a blue block. The synthesized images are transferred from the generator to the discriminator, and this process is shown as a dashed arrow. The yellow arrow shows the completion step, in which the non-lung region is added to the synthesized lung image.

Fig. 1. Example CT images from three COVID-19 patients. The first column shows CT images of the entire chest, the second column contains CT images of the lungs only, and the third column shows the corresponding segmentation map, with the lung region colored red, ground-glass opacity colored blue, and areas of consolidation colored green.

A. Global-Local Generator

The global-local generator G is a dual network with two sub-components: a global information generator G1 and a local detail generator G2. These generators work together in a coarse-to-fine manner. G1 takes charge of learning and re-rendering global information, which mainly consists of high-level knowledge (e.g., semantic segmentation labels and image structure information), while G2 is used for detail enhancement (e.g., image texture and fine structures).

We train the global-local generator using a three-step process:

1). Individual Training for the Global Information Generator

The training process for G starts with the training of the global information generator G1. As shown in Fig. 3, G1 takes a half-resolution segmentation map as input, which is down-sampled to reduce the spatial dimensions of the features. Nine residual blocks that maintain these reduced dimensions are used to limit the computational complexity and provide a large receptive field. Finally, the features are up-sampled and reconstructed into a half-resolution synthesized image.
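As a point of reference, the following PyTorch sketch shows a generator with the layout described above: a 7×7, 64-channel convolutional front end, a down-sampling stage, nine residual blocks at the reduced size, and an up-sampling stage that restores the input resolution. The number of down-sampling steps, the channel growth, and the use of instance normalization are assumptions made for illustration; only the first-layer configuration (from Fig. 3) and the count of nine residual blocks come from the paper.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch))

    def forward(self, x):
        return x + self.block(x)  # identity shortcut keeps the spatial dimensions unchanged

class GlobalGenerator(nn.Module):
    """Sketch of G1: down-sample, nine residual blocks, up-sample back to the input size."""
    def __init__(self, in_ch, out_ch=1, base=64, n_down=3, n_res=9):
        super().__init__()
        layers = [nn.Conv2d(in_ch, base, 7, padding=3),
                  nn.InstanceNorm2d(base), nn.ReLU(inplace=True)]
        ch = base
        for _ in range(n_down):                      # reduce the spatial dimensions
            layers += [nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1),
                       nn.InstanceNorm2d(ch * 2), nn.ReLU(inplace=True)]
            ch *= 2
        layers += [ResidualBlock(ch) for _ in range(n_res)]   # nine residual blocks
        for _ in range(n_down):                      # up-sample back to the input resolution
            layers += [nn.ConvTranspose2d(ch, ch // 2, 3, stride=2, padding=1, output_padding=1),
                       nn.InstanceNorm2d(ch // 2), nn.ReLU(inplace=True)]
            ch //= 2
        layers += [nn.Conv2d(ch, out_ch, 7, padding=3), nn.Tanh()]
        self.model = nn.Sequential(*layers)

    def forward(self, seg_map):
        return self.model(seg_map)
```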

Fig. 3. The network structure of the global information generator G1. The parameters of each layer are separated by the notation '-'; e.g., for the first layer, the first entry indicates a kernel size of 7, the next entry gives the size of the feature map, Conv denotes the category of the layer, 64 is the number of channels, and ReLU is the activation function.

2). Individual Training for the Local Details Generator

The structure of the local detail generator G2, which is similar to that of G1, is shown in Fig. 4. Rather than taking a low-resolution segmentation map as input, the local detail generator begins the synthesis process with a full-resolution segmentation map and maintains this resolution throughout. This allows the local detail generator to fully learn the fine texture and structure and to focus on low-level information within the input image. G2 follows a similar encoding-decoding procedure to G1, though its output synthesized image is at full resolution.

Fig. 4. Network structure of the local details generator G2.

3). Joint Training for the Global-Local Generator

After training G1 and G2 separately, a joint training process is conducted, as shown in the global-local generator region of Fig. 2. In the joint training stage, G1 and G2 take the same input but at different resolutions (half- and full-resolution, respectively). The forward process differs from the individual training stage in that the residual blocks in G2 take the dynamic element-wise sum of the output feature maps from the up-sampling path of G1 and the output feature maps from the down-sampling path of G2, meaning that G2 receives both global and local information to reconstruct the output.

This training strategy enables the global-local generator G to effectively learn both global information and local details while also stabilizing the training process by simplifying it into three relatively simple procedures.
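A compact sketch of this joint forward pass is given below. The helpers g1_features, g2_frontend, g2_resblocks, g2_decoder, and weight_net are illustrative placeholders for the corresponding sub-networks, and the bilinear interpolation used to align the two feature maps is an assumption rather than a detail given in the paper.

```python
import torch.nn.functional as F

def joint_forward(g1_features, g2_frontend, g2_resblocks, g2_decoder, weight_net, seg_full):
    """Joint forward pass of the global-local generator (sketch of Section III-A, step 3)."""
    seg_half = F.interpolate(seg_full, scale_factor=0.5, mode="nearest")

    feat_global = g1_features(seg_half)     # G1 feature maps after its up-sampling path
    feat_local = g2_frontend(seg_full)      # G2 feature maps after its down-sampling path

    # Align the two maps if their sizes differ, then fuse them with DESUM (Section III-C).
    feat_global = F.interpolate(feat_global, size=feat_local.shape[2:],
                                mode="bilinear", align_corners=False)
    w = weight_net(feat_local)              # dynamic weight, broadcast over the feature map
    fused = w * feat_global + (1.0 - w) * feat_local

    # G2's residual blocks consume the fused map, so G2 sees both global and local information.
    return g2_decoder(g2_resblocks(fused))
```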

B. Multi-Resolution Discriminator

A multi-resolution discriminator D is proposed in this paper. This dual network consists of two sub-components: a full-resolution discriminator D1 and a half-resolution discriminator D2. Both are designed following the PatchGAN discriminator [12], so the proposed discriminator contains two PatchGAN discriminators that make patch-wise decisions rather than a single decision for the whole image. D1 takes the full-resolution input and learns local information from the CT image, while D2 takes the half-resolution image as input and focuses on global information. In addition, we propose a dynamic feature matching process (DFM) to improve the quality of communication between D1 and D2 during training. As shown in Fig. 5, we first down-sample the segmentation map and the real image into half-resolution form; the synthesized and real images are then randomly chosen to be concatenated with the segmentation map to form the two inputs (full- and half-resolution) for D. The corresponding discriminator takes its input and makes decisions through a fixed-size receptive field, producing a decision matrix that represents the patch-wise decisions for that input. These patch-wise decisions are then used to update D with the DFM, in order to adaptively share intermediate features of multiple resolutions between the two discriminators.
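The sketch below illustrates the dual PatchGAN arrangement: the same conditional input (segmentation map concatenated with a real or synthesized CT image) is judged at full resolution by D1 and at half resolution by D2. The layer widths and depths are assumptions for illustration, and the intermediate features are returned because the feature matching loss of Section III-D needs them.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDiscriminator(nn.Module):
    """A small PatchGAN-style discriminator that outputs a map of patch-wise decisions."""
    def __init__(self, in_ch, base=64, n_layers=3):
        super().__init__()
        blocks, ch = [], base
        blocks.append(nn.Sequential(nn.Conv2d(in_ch, base, 4, stride=2, padding=1),
                                    nn.LeakyReLU(0.2, inplace=True)))
        for _ in range(n_layers - 1):
            blocks.append(nn.Sequential(nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1),
                                        nn.InstanceNorm2d(ch * 2),
                                        nn.LeakyReLU(0.2, inplace=True)))
            ch *= 2
        blocks.append(nn.Conv2d(ch, 1, 4, padding=1))   # 1-channel patch decision matrix
        self.blocks = nn.ModuleList(blocks)

    def forward(self, x):
        feats = []
        for block in self.blocks:
            x = block(x)
            feats.append(x)          # keep intermediate features for feature matching
        return x, feats

class MultiResolutionDiscriminator(nn.Module):
    """D1 judges the full-resolution input; D2 judges a 2x down-sampled copy."""
    def __init__(self, in_ch):
        super().__init__()
        self.d1 = PatchDiscriminator(in_ch)   # full resolution: local details
        self.d2 = PatchDiscriminator(in_ch)   # half resolution: global structure

    def forward(self, seg_and_image):
        out1, feats1 = self.d1(seg_and_image)
        out2, feats2 = self.d2(F.avg_pool2d(seg_and_image, kernel_size=2))
        return (out1, out2), (feats1, feats2)
```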

Fig. 5. Network structure of the multi-resolution discriminator D. The down-sampling blocks denote down-sampling by a factor of 2.

The dual-discriminator design and the dynamic feature matching process enable the multi-resolution discriminator D to effectively learn local details, which significantly improves the quality of the synthesized images. By assigning local and global discrimination to the individual discriminators D1 and D2, and dynamically weighting the feature matching losses from D1 and D2, the global structure can be maintained while the details of the synthesized images are also enhanced.

C. Dynamic Communication Mechanism

1). Dynamic Element-Wise Sum Process

The dynamic element-wise sum (DESUM) process is used in the joint training step of the global-local generator. As shown in Fig. 2, the DESUM process takes two feature maps, F1 and F2, from G1 and G2, respectively. A weighting network containing three convolutional layers and two fully-connected layers is trained to dynamically compute the weight of the two input terms from F2. The DESUM process can be formulated as follows:

F_sum = w · F1 + (1 − w) · F2,   (1)

where the weight w is learned by the weighting network, which is updated during the joint training step. The DESUM process helps the generator dynamically adapt to a specific input and balance its attention between global information and local details. Specifically, DESUM can assign more weight to the local feature map when the input contains a complex lesion area, while also preventing the generator from over-weighting tiny lesions and ignoring the global lung structure.
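The weighting network itself can be sketched as follows. The three-convolution, two-fully-connected layout follows the description above, while the channel widths, the global average pooling step, and the sigmoid used to keep w in (0, 1) are assumptions made to keep the example concrete.

```python
import torch
import torch.nn as nn

class DESUMWeightNet(nn.Module):
    """Predicts the scalar weight w in Eq. (1) from one of the input feature maps."""
    def __init__(self, in_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 16, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1))                 # collapse the spatial dimensions
        self.fc = nn.Sequential(nn.Linear(16, 8), nn.ReLU(inplace=True),
                                nn.Linear(8, 1), nn.Sigmoid())

    def forward(self, feat):
        w = self.fc(self.conv(feat).flatten(1))      # shape (batch, 1)
        return w.view(-1, 1, 1, 1)                   # broadcastable over feature maps

def desum(f1, f2, weight_net):
    """Dynamic element-wise sum: w * F1 + (1 - w) * F2, with w predicted from F2."""
    w = weight_net(f2)
    return w * f1 + (1.0 - w) * f2
```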

2). Dynamic Feature Matching Process

As shown in Fig. 2, the dynamic feature matching process (DFM) computes a weight parameter α for the dynamic feature matching loss (L_DFM), which is discussed in sub-section D. Similar to DESUM, DFM uses a CNN to calculate the weight parameter by observing an intermediate feature from D1. However, DFM works at the loss level rather than the feature level. By applying the DFM process to the DFM loss, the two sub-discriminators of the multi-resolution discriminator D are able to balance the two input resolutions and communicate with each other. Since the weight parameter α is determined by an intermediate feature of the full-resolution discriminator D1, the DFM network obtains enough information about the full-resolution input to weight α correctly.

D. Learning Objective

The overall learning objective of the proposed approach can be represented by Eq. (2):

min_G max_{D1, D2} [ Σ_{k=1,2} L_cGAN(G, Dk) + λ L_DFM(G, D1, D2) ]   (2)

There are two main loss terms in the overall learning objective (2): the cGAN loss L_cGAN and the dynamic feature matching loss L_DFM. The variable x is the real input image and s is the corresponding segmentation map. G represents the global-local generator, while Dk represents either the full-resolution discriminator D1 or the half-resolution discriminator D2. G(s) denotes the synthesized image produced by the generator G from the input segmentation map s, and Dk(s, x) and Dk(s, G(s)) are the patch-wise decisions made by the multi-resolution discriminator for the real and synthesized images, respectively. λ is the weight factor of the feature matching loss term.

We design the cGAN loss function based on pix2pix [12], as shown in Eq. (3):

L_cGAN(G, Dk) = E_{(s,x)}[log Dk(s, x)] + E_{s}[log(1 − Dk(s, G(s)))]   (3)

This loss term allows the cGAN to generate a realistic synthesized image that can fool the discriminator, conditioned on the input segmentation map.

To improve the communication efficiency between the two sub-discriminators D1 and D2 of the multi-resolution discriminator, we propose a dynamic feature matching loss (Eq. (4)), inspired by the feature matching loss from [48]:

L_DFM(G, D1, D2) = E_{(s,x)}[ α Σ_{i=1..T} (1/N_i) ||D1^(i)(s, x) − D1^(i)(s, G(s))||_1 + (1 − α) Σ_{i=1..T} (1/N_i) ||D2^(i)(s, x) − D2^(i)(s, G(s))||_1 ]   (4)

where Dk^(i) denotes the i-th layer of discriminator Dk, T is the number of layers used, and N_i is the total number of elements in the i-th layer. α is the weight parameter computed by the dynamic feature matching process (described in sub-section C). The original feature matching loss only considers the feature-map differences between layers within a single discriminator. To overcome the communication problem between the two discriminators, the dynamic feature matching (DFM) loss dynamically weights the feature matching losses from the full- and half-resolution discriminators by observing an intermediate feature of D1. Applying the DFM loss allows us to train D1 and D2 synchronously and to learn details from the inputs of different resolutions effectively.
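A sketch of this loss is given below, operating on the lists of intermediate features returned by the two discriminators for real and synthesized inputs. The α/(1 − α) combination of the two per-discriminator feature matching terms is one interpretation of the description above, and alpha_net, which predicts α from an intermediate D1 feature, is a hypothetical placeholder.

```python
import torch.nn.functional as F

def feature_matching(feats_real, feats_fake):
    """Mean L1 distance between real and fake features, averaged over layers (cf. Eq. (4))."""
    loss = 0.0
    for fr, ff in zip(feats_real, feats_fake):
        loss = loss + F.l1_loss(ff, fr.detach())   # the mean plays the role of the 1/N_i factor
    return loss / len(feats_real)

def dfm_loss(feats_real, feats_fake, alpha_net):
    """Dynamic feature matching loss: weight the D1 and D2 terms with a predicted alpha."""
    (real_d1, real_d2), (fake_d1, fake_d2) = feats_real, feats_fake
    alpha = alpha_net(real_d1[-2]).mean()          # alpha in (0, 1), from an intermediate D1 feature
    return (alpha * feature_matching(real_d1, fake_d1)
            + (1.0 - alpha) * feature_matching(real_d2, fake_d2))
```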

E. Testing Process

Rather than using both the global-local generator G and the multi-resolution discriminator D as in the training stage, only the pre-trained G is used in the testing process. The input for G in this stage is a data-augmented segmentation map derived from the real data. In practical deployment, the segmentation maps can be obtained by augmenting, with standard image editing software, segmentation maps drawn by experienced radiologists. After the map is passed through G, a synthesized CT image of the lung area is generated. The final step combines the synthesized lung image with the corresponding non-lung area from the real image to produce a complete synthesized image.
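At test time the compositing step is a simple mask-based merge, sketched below. The arguments generator, aug_seg_map, real_ct, and lung_mask are assumed to be available, with lung_mask a binary map of the lung region.

```python
import torch

@torch.no_grad()
def synthesize_complete_ct(generator, aug_seg_map, real_ct, lung_mask):
    """Test-time synthesis (Section III-E): generate the lung area, then paste it into
    the corresponding real non-lung background."""
    generator.eval()
    fake_lung = generator(aug_seg_map)                     # synthesized lung-area CT image
    # Keep the synthesized pixels inside the lung mask and the real pixels elsewhere.
    return lung_mask * fake_lung + (1.0 - lung_mask) * real_ct
```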

IV. Experiments

A. Experimental Settings

Dataset: In order to evaluate the proposed method and compare its performance with other state-of-the-art methods, we use 829 lung CT slices from nine COVID-19 patients, which were made public on 13 April 2020 by Radiopaedia [49]. This dataset includes the original CT images, lung masks, and COVID-19 infection masks. The infection masks contain ground-glass opacity and consolidation labels, which are the two most common characteristics used for COVID-19 diagnosis in lung CT imaging [50]. In this experiment, we select 446 slices that contain infected areas. We divide the selected dataset into three parts: a training set for the image synthesis task (300 slices), a test set for the image synthesis task (73 slices), and a test set for the semantic segmentation task (73 slices). To fully train the deep-learning-based models, data augmentation pre-processing is applied (Table I). The training set for the semantic segmentation task consists of real and synthetic data: the real data comes from the test set of the image synthesis task, and the synthetic data is generated from the segmentation maps of that same test set. The data augmentation methods include random resizing and cropping, random rotation, Gaussian noise, and elastic transforms.

TABLE I. Organization of the COVID-19 CT Image Dataset.

Dataset | Original count | After data augmentation
Training set (image synthesis) | 300 | 12,000
Test set (image synthesis) | 73 | 10,220
Training set (semantic segmentation) | - | -
Test set (semantic segmentation) | 73 | 10,220

Evaluation Metrics: To accurately assess model performance, we utilize both image quality metrics and medical imaging semantic segmentation metrics:

Four image quality metrics are considered in this study: Fréchet inception distance (FID) [51], peak-signal-to-noise ratio (PSNR) [52], structural similarity index measure (SSIM) [52], and root mean square error (RMSE) [15]. FID measures the similarity of the distributions of real and synthesized images using a deep-learning model. PSNR and SSIM are the most widely used metrics when evaluating the performance of image restoration and reconstruction methods. The former represents the ratio between the maximum possible intensity of a signal and the intensity of corrupting noise, while the latter reflects the structural similarity between two images.

Three semantic segmentation metrics for medical imaging are used in this experiment: the dice score (Dice), sensitivity (Sen), and specificity (Spec) [53], [54]. The dice score evaluates the area of overlap between a prediction and the ground truth, while sensitivity and specificity are two statistical metrics for the performance of binary medical image segmentation tasks. The former measures the percentage of actual positive pixels that are correctly predicted to be positive, while the latter measures the proportion of actual negative pixels that are correctly predicted to be negative. These three metrics are employed for semantic segmentation based on the assumption that, if the quality of the synthesized images is high enough, excellent segmentation performance can be achieved when using the synthesized images as input.
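For reference, all three segmentation metrics reduce to simple counts over the binary prediction and ground-truth masks, as in the NumPy sketch below; the small eps term added to each denominator to avoid division by zero is a common implementation choice, not something specified in the paper.

```python
import numpy as np

def segmentation_metrics(pred, gt, eps=1e-7):
    """Dice, sensitivity, and specificity for binary masks (values in {0, 1})."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    dice = 2 * tp / (2 * tp + fp + fn + eps)         # overlap between prediction and ground truth
    sensitivity = tp / (tp + fn + eps)               # true positive rate
    specificity = tn / (tn + fp + eps)               # true negative rate
    return dice, sensitivity, specificity
```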

Implementation Details: We transform all of the CT slices into gray-scale images using a Hounsfield unit (HU) window of [-600, 1500]. The images and segmentation maps are then rescaled to a common fixed resolution. All of the image synthesis methods are trained for 20 epochs, with a learning rate that is held at 0.0002 for the first 10 epochs and then linearly decayed to zero over the following ten epochs. The global-local generator G and the multi-resolution discriminator D are trained using the Adam optimizer with momentum parameters β1 and β2. The feature matching loss weight λ is set to 10. The batch size used to train the proposed method is 16. All of the experiments are run in an Ubuntu 18.04 environment with an Intel i7 9700K CPU and two GeForce RTX Titan graphics cards (48 GB VRAM).
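The HU windowing step can be written as follows. The window limits [-600, 1500] come from the text above, while mapping to the [0, 1] range (rather than 8-bit gray levels) and omitting the subsequent resizing are simplifications made for this illustration.

```python
import numpy as np

def hu_window_to_grayscale(ct_slice_hu, hu_min=-600.0, hu_max=1500.0):
    """Clip a CT slice given in Hounsfield units to the window and rescale to [0, 1]."""
    clipped = np.clip(ct_slice_hu.astype(np.float32), hu_min, hu_max)
    return (clipped - hu_min) / (hu_max - hu_min)
```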

B. Quantitative Results

The performance of the proposed method is assessed according to both image quality and medical imaging semantic segmentation.

1). Image Quality Evaluation

In this study, common image quality metrics are employed to assess the synthesis performance of the proposed method and four other state-of-the-art image synthesis methods: SEAN [15], SPADE [14], Pix2pixHD [13], and Pix2pix [12]. We evaluate image quality for two synthetic image categories: complete and lung-only images. The complete images are those CT images generated by merging a synthesized lung CT image with its corresponding non-lung CT image. The evaluation results are presented in Table II.

TABLE II. Image Quality Evaluation Results for Synthetic CT Images (the best evaluation score is marked in bold; ↑ means a higher number is better, ↓ indicates a lower number is better). The first four metric columns refer to the complete images and the last four to the lung-only images.
Method | FID (↓) | PSNR (↑) | SSIM (↑) | RMSE (↓) | FID (↓) | PSNR (↑) | SSIM (↑) | RMSE (↓)
OURS | 0.0327 | 26.89 | 0.8936 | 0.0813 | 0.3641 | 28.17 | 0.8959 | 0.2747
SEAN [15] | 0.0341 | 26.69 | 0.8922 | 0.0837 | 0.3575 | 28.02 | 0.8952 | 0.2795
SPADE [14] | 0.0389 | 26.50 | 0.8903 | 0.0854 | 0.4812 | 27.79 | 0.8928 | 0.2864
Pix2pixHD [13] | 0.0430 | 26.63 | 0.8893 | 0.0840 | 0.4283 | 27.82 | 0.8910 | 0.2856
Pix2pix [12] | 0.0611 | 26.56 | 0.8870 | 0.0913 | 8.4077 | 26.56 | 0.8855 | 0.3301

The proposed method outperforms the other state-of-the-art methods on all four image quality metrics for the complete images and on nearly all metrics for the lung-only images. Due to the design of the global-local generator and the multi-resolution discriminator, the proposed model can generate realistic lung CT images for COVID-19 with a complete global structure and fine local details while maintaining a relatively high signal-to-noise ratio. Thus, the proposed method achieves state-of-the-art image synthesis results in terms of image quality.

2). Medical Imaging Semantic Segmentation Evaluation

To evaluate the reconstruction capability of the proposed method, we utilize Unet, a common medical imaging semantic segmentation approach [55]. We first train the Unet model on a mix of synthetic and real CT images. The training set for this task consists of real and synthetic data derived from the test set of the image synthesis task, while the test set for this task is the training set of the image synthesis task.

This evaluation consists of two independent experiments: (1) keeping the total number of images the same while replacing the real data with synthesized data in proportions from 0% to 50% in steps of 10%, and (2) keeping the number of real images the same and adding a certain proportion of synthetic images, from 0% to 50% in steps of 10%. The first experiment evaluates how similar the synthetic and real data are, and the second evaluates the data augmentation potential of the synthetic data. We consider three categories in the assessment: ground-glass opacity, consolidation, and infection (which covers both ground-glass opacity and consolidation). The evaluation results for the two experiments are presented in Table III and Table IV, respectively. The pre-trained Unet model is then tested on a fixed real CT image dataset: the 10,220 images from the test set are divided equally into 10 folds, and the evaluation results are reported as the mean ± the 95% confidence interval across these folds.
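The two training-set protocols can be reproduced with a few lines of list manipulation, as sketched below; real_images and synthetic_images are assumed to be lists of samples (e.g., file paths), and seeding of the random generator is omitted.

```python
import random

def replace_with_synthetic(real_images, synthetic_images, ratio):
    """Experiment (1): keep the total count fixed, replace `ratio` of the real data."""
    n_replace = int(len(real_images) * ratio)
    keep_real = real_images[n_replace:]                      # drop the replaced real samples
    add_fake = random.sample(synthetic_images, n_replace)    # substitute synthetic samples
    return keep_real + add_fake

def add_synthetic(real_images, synthetic_images, ratio):
    """Experiment (2): keep all real data, add an extra `ratio` of synthetic data."""
    n_add = int(len(real_images) * ratio)
    return real_images + random.sample(synthetic_images, n_add)

# Example: a 30% replacement set and a 40% addition set, as evaluated in Tables III and IV.
# train_set_replace = replace_with_synthetic(real_images, synthetic_images, 0.30)
# train_set_add = add_synthetic(real_images, synthetic_images, 0.40)
```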

TABLE III. Experimental Results for CT Images Using Semantic Segmentation Methods (replacing real data with different proportions of synthetic data). The best evaluation score is marked in bold; ↑ means a higher number is better, ↓ indicates a lower number is better. Ratio denotes the proportion of real data replaced with synthetic data. ε represents a small positive quantity. The rows 50% (SEAN), 50% (SPADE), 50% (Pix2pixHD), and 50% (Pix2pix) use synthetic data from SEAN [15], SPADE [14], Pix2pixHD [13], and Pix2pix [12], respectively.
Focus | Ground-glass opacity | | | Consolidation | | | Infection | |
Ratio | Dice (%, ↑) | Sen (%, ↑) | Spec (%, ↑) | Dice (%, ↑) | Sen (%, ↑) | Spec (%, ↑) | Dice (%, ↑) | Sen (%, ↑) | Spec (%, ↑)
0% | 87.55±0.20 | 86.84±0.31 | 99.82±0.01 | 84.88±0.33 | 82.80±0.51 | 99.96±ε | 89.57±0.18 | 88.58±0.23 | 99.82±0.01
10% | 87.34±0.27 | 85.08±0.45 | 99.85±0.01 | 85.91±0.44 | 84.23±0.55 | 99.96±ε | 89.35±0.32 | 87.14±0.39 | 99.85±0.01
20% | 84.22±0.36 | 83.38±0.41 | 99.77±0.01 | 84.30±0.19 | 83.60±0.33 | 99.95±ε | 86.67±0.36 | 85.83±0.27 | 99.76±0.01
30% | 87.43±0.35 | 85.73±0.41 | 99.84±0.01 | 86.01±0.21 | 87.14±0.21 | 99.95±ε | 89.32±0.22 | 88.11±0.30 | 99.82±0.01
40% | 87.07±0.27 | 86.70±0.29 | 99.80±0.01 | 85.81±0.24 | 81.94±0.36 | 99.97±ε | 88.92±0.20 | 87.90±0.30 | 99.81±0.01
50% | 86.98±0.35 | 86.46±0.40 | 99.80±0.01 | 85.58±0.23 | 82.18±0.36 | 99.97±ε | 89.19±0.24 | 88.12±0.37 | 99.82±0.01
50% (SEAN) | 85.23±0.40 | 84.99±0.31 | 99.80±0.01 | 83.54±0.22 | 83.66±0.19 | 99.97±ε | 86.04±0.29 | 86.00±0.39 | 99.82±0.01
50% (SPADE) | 83.04±0.28 | 81.56±0.24 | 99.80±0.01 | 81.89±0.20 | 81.50±0.30 | 99.96±ε | 85.99±0.21 | 84.05±0.15 | 99.81±0.01
50% (Pix2pixHD) | 81.24±0.37 | 79.53±0.29 | 99.79±0.01 | 80.20±0.44 | 78.14±0.46 | 99.96±ε | 83.22±0.45 | 83.01±0.48 | 99.80±0.01
50% (Pix2pix) | 75.33±0.24 | 71.02±0.38 | 99.75±0.01 | 72.01±0.25 | 70.55±0.21 | 99.95±ε | 79.10±0.25 | 78.89±0.39 | 99.77±0.01
TABLE IV. Experimental Results for CT Images Using Semantic Segmentation Methods (adding different proportions of synthetic data). The best evaluation score is marked in bold; ↑ means a higher number is better, ↓ indicates a lower number is better. Ratio denotes the proportion of synthetic data added to the real data. ε represents a small positive quantity.
Focus | Ground-glass opacity | | | Consolidation | | | Infection | |
Ratio | Dice (%, ↑) | Sen (%, ↑) | Spec (%, ↑) | Dice (%, ↑) | Sen (%, ↑) | Spec (%, ↑) | Dice (%, ↑) | Sen (%, ↑) | Spec (%, ↑)
0% | 87.55±0.20 | 86.84±0.31 | 99.82±0.01 | 84.88±0.33 | 82.80±0.51 | 99.96±ε | 89.57±0.18 | 88.58±0.23 | 99.82±0.01
10% | 87.65±0.40 | 85.82±0.32 | 99.84±0.01 | 86.12±0.33 | 86.02±0.63 | 99.95±ε | 89.67±0.31 | 88.11±0.32 | 99.84±0.01
20% | 87.87±0.34 | 87.67±0.28 | 99.81±0.01 | 85.52±0.34 | 84.16±0.53 | 99.96±ε | 89.87±0.12 | 89.44±0.20 | 99.81±0.01
30% | 87.99±0.36 | 87.25±0.36 | 99.82±0.01 | 86.33±0.30 | 86.38±0.51 | 99.95±ε | 89.78±0.22 | 89.17±0.27 | 99.82±0.01
40% | 88.33±0.22 | 88.71±0.32 | 99.81±0.01 | 87.25±0.28 | 86.30±0.38 | 99.96±ε | 90.19±0.17 | 90.34±0.31 | 99.81±ε
50% | 88.16±0.30 | 86.88±0.01 | 99.84±0.01 | 87.09±0.34 | 86.54±0.47 | 99.96±ε | 90.06±0.30 | 88.88±0.18 | 99.83±0.01

Table III presents the experimental results for different replacement ratios of synthetic data. We obtain the best performance when using pure real data as the training set. When a proportion of the real data is replaced with synthetic data, the semantic segmentation performance of Unet does not decrease, but rather remains stable. When 30% of the real data is replaced with synthetic data, Unet obtains the best performance on the Spec metric for ground-glass opacity and on the Dice and Sen metrics for consolidation. The experimental results thus indicate that the synthetic CT images are similar to real CT images: they are sufficiently realistic that the segmentation performance of Unet remains promising even when a large proportion of the real data is replaced. Table III also compares the proposed method with other state-of-the-art image synthesizers; when 50% of the real data is replaced with synthetic data generated by the four competitors, the proposed method yields the most competitive semantic segmentation performance.

Table IV presents the semantic segmentation results when a certain proportion of extra synthetic data is added to the real data. The best performance is obtained when adding 40% synthetic data. Overall, the results indicate that the synthetic CT images are sufficiently diverse and realistic, meaning that they have the potential to be used as augmented training data to improve dataset quality for deep-learning-based COVID-19 diagnosis.

C. Qualitative Results

To intuitively demonstrate the synthetic results and compare them with those of other state-of-the-art image synthesis methods, we show synthetic examples in Fig. 6 and Fig. 7 in this subsection.

Fig. 6. Synthetic lung CT images generated by the proposed method and two other competitive state-of-the-art image synthesis approaches. The first column shows the segmentation map, including the lung (red), ground-glass opacity (blue), and consolidation (green) areas. The second column shows the original CT image. The third, fourth, and fifth columns show the synthetic samples generated by the proposed method, SEAN [15], and SPADE [14], in that order. Each case is presented with a zoomed-in view to show more detail, and the yellow arrows point out the specific areas described in the main text.

Fig. 7. Synthetic lung CT images generated by the proposed method. Eight samples are selected, each from an individual patient. The first column shows the segmentation map, including the lung (red), ground-glass opacity (blue), and consolidation (green) areas. The second and third columns show the original and complete synthetic CT images, respectively; the complete synthetic CT images merge the synthesized lung region with the corresponding real non-lung area. The fourth and fifth columns depict the original lung CT images and the synthesized lung CT images, respectively.

The synthetic images from three individual cases are compared in Fig. 6. In the first case, a consolidation infection area is located in the lower left of the CT image. Comparing the synthetic results of the proposed method, SEAN [15], and SPADE [14], the infection area retains its original structure and texture in the result generated by the proposed method, whereas the results of SEAN and SPADE contain unnatural artifacts (holes) at the position indicated by the yellow arrow. In the second case, a large area of ground-glass infection is present; the results of SPADE and SEAN miss some small lung areas in the middle of the infection area, whereas the proposed method still reflects these small lung areas correctly. The final case contains both categories of infection, consolidation and ground-glass opacity, with the ground-glass opacity surrounded by the consolidation area. Focusing on the surrounded area, the boundary between the two infection areas is not clear in the synthetic image from SEAN, and the ground-glass area is mistakenly generated as lung area in the synthesized image from SPADE. The result of the proposed method in case 3 shows that it can handle this complex situation and produce realistic synthetic CT images with high image quality.

We present synthetic examples generated by the proposed method in Fig. 7. We select one example for each patient (8 samples from 9 patients; patient #3 is skipped because its segmentation maps were mislabeled). For Patient #0, the consolidation area is located at the bottom of the lung area; the synthetic image shows a sharp and high-contrast consolidation area that can be easily distinguished from the surrounding non-lung region. The slices for Patients #1 and #4 are similar in that the lung area contains widespread ground-glass opacity, with consolidation located sporadically within it. The small consolidation areas can be easily identified due to the clear boundary between the two infection types. Patient #6 shows ground-glass opacity and consolidation that are distant from each other. The results thus illustrate that the proposed method can handle the two types of infection areas together in a single lung CT image. The CT slices of Patients #5, #7, and #8 show the simplest cases, with only a single category of infection (ground-glass opacity). The experimental results thus indicate that realistic ground-glass opacity can be obtained using the proposed method.

D. Discussion

To further examine the efficiency of the DESUM and DFM processes, and to justify an optimal cGAN structure, we follow the experimental settings of the image quality evaluation and the medical imaging semantic segmentation evaluation in sub-section B. The experimental results are shown in Table V.

TABLE V. Ablation Study of Various Proposed Model Structures.

Method | FID (↓) | PSNR (↑) | Dice (%, ↑)
Ours | 0.0327 | 26.89 | 89.19±0.24
w/o DESUM | 0.0395 | 26.77 | 84.60±0.39
DESUM using F1 | 0.0355 | 26.70 | 87.88±0.29
DESUM with fixed w | 0.0380 | 26.51 | 85.87±0.31
w/o DFM | 0.0381 | 26.62 | 86.10±0.33
DFM using the D2 feature | 0.0340 | 26.75 | 88.84±0.21
DFM with fixed α | 0.0404 | 26.71 | 83.50±0.40
D = 1 | 0.0579 | 26.55 | 79.44±0.80
D = 3 | 0.0330 | 26.80 | 88.93±0.26
G = 1 | 0.0604 | 26.51 | 76.03±0.52
G = 3 | 0.0325 | 26.85 | 89.01±0.20

1). Dynamic Element-Wise Sum Process (DESUM)

In the second part of Table V, we evaluate the performance of three variations related to DESUM: removing DESUM, computing the weight from the feature map F1 instead of F2, and using a fixed value of w. The results show that DESUM can effectively improve both image quality and segmentation performance. The feature map F1 offers less information than F2 does, which lowers the efficiency of DESUM. Using a fixed value of w does not help boost the performance and may even reduce it, since a fixed weight is not suitable for the diverse COVID-19 CT data.

2). Dynamic Feature Matching Process (DFM)

In this sub-section, we discuss the evaluation of DFM. As shown in the third part of Table V, we compare the performance of three variations: (a) without DFM, (b) using the intermediate feature from D2 instead of D1, and (c) using a fixed value of α. The evaluation results show that DFM helps train the discriminator stably and improves performance on multiple metrics. Moreover, the intermediate feature from D1 contains many more details, which helps the DFM process weight the loss terms correctly. The results also show that a dynamic weight α is critical for training our multi-resolution discriminator.

3). Fine-Tuning Level Optimization

We also investigate two structural hyper-parameters: the number of generators and the number of discriminators. The last two parts of Table V show that both are important hyper-parameters for the COVID-19 CT image synthesis task. A proper number of generators and discriminators not only keeps the model from overfitting to details from multiple resolution sources but also improves training efficiency and stability. The experimental results show that the dual structure of both the generator and the discriminator benefits performance the most, because this dual structure trades off well between performance and efficiency.

V. Conclusion and Future Study

In this paper, we proposed a cGAN-based COVID-19 CT image synthesis method that can generate realistic CT images containing the two main infection types: ground-glass opacity and consolidation. The proposed method takes the semantic segmentation map of a corresponding lung CT image as input, and the cGAN structure learns the characteristics and information of the CT image. A global-local generator and a multi-resolution discriminator are employed to effectively balance global information with local details in the CT image. The experimental results show that the proposed method is able to generate realistic synthetic CT images and achieves state-of-the-art performance in terms of image quality when compared with common image synthesis approaches. In addition, the evaluation results for semantic segmentation demonstrate that the high image quality and fidelity of the synthetic CT images enable their use as additional training data for AI-based COVID-19 diagnosis. In future research, the authors plan to fully utilize high-quality synthetic COVID-19 CT images to improve specific computer vision approaches that can help in the fight against COVID-19, such as lung CT image semantic segmentation and rapid lung-CT-based COVID-19 diagnosis.

Funding Statement

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea Government (MSIT) under Grant 2019R1A2C2009480.

Contributor Information

Yifan Jiang, Email: yfjiang@ispl.korea.ac.kr.

Han Chen, Email: hanchen@ispl.korea.ac.kr.

Murray Loew, Email: loew@gwu.edu.

Hanseok Ko, Email: hsko@korea.ac.kr.

References

  • [1].“Coronavirus disease (Covid-19) pandemic,” [Online]. Available: https://www.who.int/emergencies/diseases/novel-coronavirus-2019
  • [2].“Johns hopkins coronavirus resource center,” [Online]. Available: https://coronavirus.jhu.edu/map.html
  • [3].Guan W.-j. et al. , “Clinical characteristics of Coronavirus disease 2019 in China,” New Engl. J. Med., vol. 382, no. 18, pp. 1708–1720, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [4].“Interim clinical guidance for management of patients with confirmed Coronavirus disease (Covid-19),” [Online]. Available: https://www.cdc.gov/coronavirus/2019-ncov/hcp/clinical-guidance-management-patients.html
  • [5].Ai T. et al. , “Correlation of chest CT and RT-PCR testing in Coronavirus disease 2019 (Covid-19) in China: A report of 1014 cases,” Radiology, 2020, Art. no. 200642. [DOI] [PMC free article] [PubMed]
  • [6].Fang Y. et al. , “Sensitivity of chest CT for Covid-19: Comparison to RT-PCR,” Radiology, 2020, Art. no. 200432. [DOI] [PMC free article] [PubMed]
  • [7].Yan K., Peng Y., Sandfort V., Bagheri M., Lu Z., and Summers R. M., “Holistic and comprehensive annotation of clinically significant findings on diverse CT images: Learning from radiology reports and label ontology,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2019, pp. 8523–8532. [Google Scholar]
  • [8].Cui Z., Li C., and Wang W., “Toothnet: Automatic tooth instance segmentation and identification from cone beam CT images,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2019, pp. 6368–6377. [Google Scholar]
  • [9].Zhang Z., Romero A., Muckley M. J., Vincent P., Yang L., and Drozdzal M., “Reducing uncertainty in undersampled MRI reconstruction with active acquisition,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2019, pp. 2049–2058. [Google Scholar]
  • [10].Ying X., Guo H., Ma K., Wu J., Weng Z., and Zheng Y., “X2CT-GAN: Reconstructing CT from biplanar X-rays with generative adversarial networks,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2019, pp. 10 619–10 628. [Google Scholar]
  • [11].Goodfellow I. et al. , “Generative adversarial nets,” in Adv. Neural Inf. Process. Syst., 2014, pp. 2672–2680. [Google Scholar]
  • [12].Isola P., Zhu J.-Y., Zhou T., and Efros A. A., “Image-to-image translation with conditional adversarial networks,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2017, pp. 1125–1134. [Google Scholar]
  • [13].Wang T.-C., Liu M.-Y., Zhu J.-Y., Tao A., Kautz J., and Catanzaro B., “High-resolution image synthesis and semantic manipulation with conditional GANs,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2018, pp. 8798–8807. [Google Scholar]
  • [14].Park T., Liu M.-Y., Wang T.-C., and Zhu J.-Y., “Semantic image synthesis with spatially-adaptive normalization,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2019, pp. 2337–2346. [Google Scholar]
  • [15].Zhu P., Abdal R., Qin Y., and Wonka P., “Sean: Image synthesis with semantic region-adaptive normalization,” in Proc. IEEE/CVF Conf. Comput. Vision Pattern Recognition, 2020, pp. 5104–5113. [Google Scholar]
  • [16].Chen Y.-S., Wang Y.-C., Kao M.-H., and Chuang Y.-Y., “Deep photo enhancer: Unpaired learning for image enhancement from photographs with GANs,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2018, pp. 6306–6314. [Google Scholar]
  • [17].Wan Z. et al. , “Bringing old photos back to life,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognition, 2020, pp. 2747–2757. [Google Scholar]
  • [18].Yang W., Ouyang W., Wang X., Ren J., Li H., and Wang X., “3D human pose estimation in the wild by adversarial learning,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2018, pp. 5255–5264. [Google Scholar]
  • [19].Ma L., Jia X., Sun Q., Schiele B., Tuytelaars T., and Van Gool L., “Pose guided person image generation,” in Adv. Neural Inform. Process. Syst., 2017, pp. 406–416. [Google Scholar]
  • [20].Wang T.-C., “Video-to-video synthesis,” 2018, arXiv:1808.06601. [Google Scholar]
  • [21].Wang T.-C., Liu M.-Y., Tao A., Liu G., Kautz J., and Catanzaro B., “Few-shot video-to-video synthesis,” 2019, arXiv:1910.12713. [Google Scholar]
  • [22].Mirza M. and Osindero S., “Conditional generative adversarial nets,” 2014, arXiv:1411.1784. [Google Scholar]
  • [23].Odena A., Olah C., and Shlens J., “Conditional image synthesis with auxiliary classifier GANs,” in Proc. 34th Int. Conf. Mach. Learn.-Vol. 70. JMLR. org, 2017, pp. 2642–2651. [Google Scholar]
  • [24].Caesar H., Uijlings J., and Ferrari V., “Coco-stuff: Thing and stuff classes in context,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2018, pp. 1209–1218. [Google Scholar]
  • [25].Mescheder L., Geiger A., and Nowozin S., “Which training methods for GANs do actually converge?,” 2018, arXiv:1801.04406. [Google Scholar]
  • [26].Xu T. et al. , “Attngan: Fine-grained text to image generation with attentional generative adversarial networks,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2018, pp. 1316–1324. [Google Scholar]
  • [27].Hong S., Yang D., Choi J., and Lee H., “Inferring semantic layout for hierarchical text-to-image synthesis,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2018, pp. 7986–7994. [Google Scholar]
  • [28].Zhu J.-Y., Park T., Isola P., and Efros A. A., “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in Proc. IEEE Int. Conf. Comput. Vis., 2017, pp. 2223–2232. [Google Scholar]
  • [29].Zhu J.-Y. et al. , “Toward multimodal image-to-image translation,” in Adv. Neural Inf. Process. Syst., 2017, pp. 465–476. [Google Scholar]
  • [30].Liu M.-Y., Breuel T., and Kautz J., “Unsupervised image-to-image translation networks,” in Adv. Neural Inf. Process. Syst., 2017, pp. 700–708. [Google Scholar]
  • [31].Huang X., Liu M.-Y., Belongie S., and Kautz J., “Multimodal unsupervised image-to-image translation,” in Proc. Eur. Conf. Comput. Vis, 2018, pp. 172–189. [Google Scholar]
  • [32].Zhao B., Meng L., Yin W., and Sigal L., “Image generation from layout,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2019, pp. 8584–8593. [Google Scholar]
  • [33].Kang H. et al. , “Diagnosis of Coronavirus disease 2019 (Covid-19) with structured latent multi-view representation learning,” IEEE Trans. Med. Imaging, vol. 39, no. 8, pp. 2606–2614, Aug. 2020. [DOI] [PubMed] [Google Scholar]
  • [34].Zhang K. et al. , “Clinically applicable AI system for accurate diagnosis, quantitative measurements, and prognosis of Covid-19 pneumonia using computed tomography,” Cell, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [35].Ardakani A. A., Kanafi A. R., Acharya U. R., Khadem N., and Mohammadi A., “Application of deep learning technique to manage Covid-19 in routine clinical practice using CT images: Results of 10 convolutional neural networks,” Comput. Biol. Med., 2020, Art. no. 103795. [DOI] [PMC free article] [PubMed]
  • [36].Li L. et al. , “Artificial intelligence distinguishes Covid-19 from community acquired pneumonia on chest ct,” Radiology, 2020, Art. no. 200905.
  • [37].Zhou T., Canu S., and Ruan S., “An automatic Covid-19 CT segmentation based on u-net with attention mechanism,” 2020, arXiv:2004.06673. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [38].Xie W., Jacobs C., Charbonnier J.-P., and van Ginneken B., “Relational modeling for robust and efficient pulmonary lobe segmentation in CT scans,” IEEE Trans. Med. Imag., vol. 39, no. 8, pp. 2664–2675, Aug. 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [39].Voulodimos A., Protopapadakis E., Katsamenis I., Doulamis A., and Doulamis N., “Deep learning models for Covid-19 infected area segmentation in CT images,” medRxiv, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [40].Chen X., Yao L., and Zhang Y., “Residual attention u-net for automated multi-class segmentation of Covid-19 chest CT images,” 2020, arXiv:2004.05645. [Google Scholar]
  • [41].Fan D.-P. et al. , “INF-net: Automatic Covid-19 lung infection segmentation from CT images,” IEEE Trans. Med. Imag., 2020. [DOI] [PubMed] [Google Scholar]
  • [42].Zheng C. et al. , “Deep learning-based detection for Covid-19 from chest CT using weak label,” medRxiv, 2020. [Google Scholar]
  • [43].Gozes O., Frid-Adar M., Sagie N., Zhang H., Ji W., and Greenspan H., “Coronavirus detection and analysis on chest CT with deep learning,” 2020, arXiv:2004.02640. [Google Scholar]
  • [44].Hu S. et al. , “Weakly supervised deep learning for Covid-19 infection detection and classification from CT images,” IEEE Access, vol. 8, pp. 118869–118883, 2020. [Google Scholar]
  • [45].Lauritzen A. D., Papademetris X., Turovets S., and Onofrey J. A., “Evaluation of CT image synthesis methods: From atlas-based registration to deep learning,” 2019, arXiv:1906.04467. [Google Scholar]
  • [46].Chen Y.-W., Fang H.-Y., Wang Y.-C., Peng S.-L., and Shih C.-T., “A novel computed tomography image synthesis method for correcting the spectrum dependence of CT numbers,” Phys. Med. Biol., vol. 65, no. 2, 2020, Art. no. 025013. [DOI] [PubMed] [Google Scholar]
  • [47].Liu S. et al. , “3D tomographic pattern synthesis for enhancing the quantification of Covid-19,” 2020, arXiv:2005.01903. [Google Scholar]
  • [48].Johnson J., Alahi A., and Fei-Fei L., “Perceptual losses for real-time style transfer and super-resolution,” in Proc. Eur. Conf. Comput. Vis. Berlin, Germany: Springer, 2016, pp. 694–711. [Google Scholar]
  • [49].“Covid-19 CT segmentation dataset,” [Online]. Available: https://medicalsegmentation.com/covid19/
  • [50].Chung M. et al. , “CT imaging features of 2019 novel Coronavirus (2019-ncov),” Radiology, vol. 295, no. 1, pp. 202–207, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [51].Heusel M., Ramsauer H., Unterthiner T., Nessler B., and Hochreiter S., “GANs trained by a two time-scale update rule converge to a local nash equilibrium,” in Adv Neural Inform. Process. Syst., 2017, pp. 6626–6637. [Google Scholar]
  • [52].Hore A. and Ziou D., “Image quality metrics: PSNR VS SSIM,” in Proc. 20th Int. Conf. Pattern Recognit. IEEE, 2010, pp. 2366–2369. [Google Scholar]
  • [53].Fenster A. and Chiu B., “Evaluation of segmentation algorithms for medical imaging,” in Proc. IEEE Eng. Med. Biol. 27th Annu. Conf., 2006, pp. 7186–7189. [DOI] [PubMed] [Google Scholar]
  • [54].Milletari F., Navab N., and Ahmadi S.-A., “V-net: Fully convolutional neural networks for volumetric medical image segmentation,” in Proc. IEEE 4th Int. Conf. 3D Vis., 2016, pp. 565–571. [Google Scholar]
  • [55].Ronneberger O., Fischer P., and Brox T., “U-net: Convolutional networks for biomedical image segmentation,” in Proc. Int. Conf. Med. Image Comput. Comput.-Assisted Intervention. Berlin, Germany: Springer, 2015, pp. 234–241. [Google Scholar]
