Author manuscript; available in PMC: 2021 Apr 27.
Published in final edited form as: Proc Mach Learn Res. 2019 Dec;116:10–24.

Generative Image Translation for Data Augmentation in Colorectal Histopathology Images

Jerry Wei 1, Arief Suriawinata 2, Louis Vaickus 2, Bing Ren 2, Xiaoying Liu 2, Jason Wei 1, Saeed Hassanpour 1,*
PMCID: PMC8076951  NIHMSID: NIHMS1061076  PMID: 33912842

Abstract

We present an image translation approach to generate augmented data for mitigating data imbalances in a dataset of histopathology images of colorectal polyps, adenomatous tumors that can lead to colorectal cancer if left untreated. By applying cycle-consistent generative adversarial networks (CycleGANs) to a source domain of normal colonic mucosa images, we generate synthetic colorectal polyp images that belong to diagnostically less common polyp classes. Generated images maintain the general structure of their source image but exhibit adenomatous features that can be enhanced with our proposed filtration module, called Path-Rank-Filter. We evaluate the quality of generated images through Turing tests with four gastrointestinal pathologists, finding that at least two of the four pathologists could not identify generated images at a statistically significant level. Finally, we demonstrate that using CycleGAN-generated images to augment training data improves the AUC of a convolutional neural network for detecting sessile serrated adenomas by over 10%, suggesting that our approach might warrant further research for other histopathology image classification tasks.

1. Introduction

Accurately analyzing medical images with deep learning classifiers often requires large, balanced datasets. For many diseases, however, the distribution of disease sub-classes in collected datasets is heavily skewed by each class’s prevalence among patients, and so detecting rare diseases in medical images with deep learning can be challenging. In these situations, a reliable method of data augmentation can mitigate the effects of data imbalance by preventing overfitting and thus improving overall performance.

Previous work in data augmentation includes both traditional augmentation methods (rotations, flips, color jittering, etc.) and, more recently, generative models that synthesize completely new images. Since their development, generative adversarial networks (GANs) (Goodfellow et al., 2014), which use noise as an input variable, have been a popular method of generating augmented data for improving image classification (Perez and Wang, 2017; Salehinejad et al., 2017). We hypothesized that, in the field of medical image analysis, data from one class might contain useful information to synthesize new data for another. As such, generative image translation models might suit this task better than models that do not account for information in other classes (e.g., models that use random noise as a basis for image generation).

In this paper, we present an image translation model for generating synthetic colorectal histopathology images. Since adenomatous preneoplastic polyps always originate from normal colonic mucosa, we use normal colonic mucosa as a source domain to generate synthetic images that are similar in structure but present adenomatous features. Our work makes the following contributions:

  1. We demonstrate an image translation model that generates synthetic images of adenomatous colorectal polyps and propose a filtration module called Path-Rank-Filter that enhances the presence of adenomatous features in generated images.

  2. We evaluate the quality of generated images through Turing tests with four gastrointestinal pathologists, finding that for the two adenomatous polyp classes tested, at least two of four pathologists could not distinguish between synthetic and real polyp images at a statistically significant level.

  3. We show that using generated images as augmented data for training improves the AUC of a convolutional neural network in detecting sessile serrated adenomas by over 10%, indicating that our approach might be useful for other histopathology image classification tasks.

Our code for this project is publicly available.1

2. Related Work

Generative adversarial networks (GANs) have commonly been used in the field of medical image analysis. For magnetic resonance imaging (MRI) scans, Nie et al. (2016) used context-aware GANs to generate computed tomography (CT) images from MRIs, and Yang et al. (2018) used conditional GANs (cGANs) to generate target-modality MRIs given a particular source-modality MRI. Dar et al. (2018) also used cGANs to synthesize T1 and T2 MRIs, improving on prior methods with end-to-end training of GANs that synthesize target images given source images. Hiasa et al. (2018) likewise translated MRIs to CT images with CycleGANs, adding a gradient consistency loss to encourage edge alignment between images. Salehinejad et al. (2018) used DCGANs to generate fake chest x-ray images from real ones, though the resulting fake images were at a lower resolution than the real images, and Wang et al. (2018) used cGANs to reduce artifacts in CT images by learning to map an artifact-affected CT image to an artifact-free CT image.

In the field of histopathology in particular, many studies have used GANs for both image generation and image translation. Both Bayramoglu et al. (2018) and Rana et al. (2019) used cGANs for virtual Haematoxylin and Eosin (H&E) staining of tissue histopathology images. Similarly, Hou et al. (2017) and Quiros et al. (2019) generated fake histopathology samples with GANs, and Burlingame et al. (2018) used cGANs to translate pancreatic tumor images from H&E staining to immunofluorescence. In terms of stain normalization, Bentaieb and Hamarneh (2017) used a GAN to normalize tissue samples in order to remove natural discolorations from tissue staining, and Cho et al. (2017) performed stain style transfer by replacing stain normalization models with cGANs. Moreover, Zanjani et al. (2018) integrated a convolutional neural network (CNN) and a Gaussian mixture model to jointly optimize the modeling and normalization of color and intensity in H&E-stained images.

In terms of data augmentation, both conventional methods and GANs have been used in previous research. Hussain et al. (2017) found that effective data augmentation methods for images primarily include strategies such as flips, Gaussian noise, jittering, Gaussian blurring, and rotations, and Li et al. (2010) addressed class imbalances by oversampling abnormal classes and undersampling normal classes. For generative methods, Bass et al. (2019) synthesized augmented biomedical images with convolutional capsule GANs. Additionally, Gupta et al. (2019) used CycleGANs on x-ray images to generate augmented images of bone lesions, which were then added to a training set to improve a bone lesion classifier’s AUC by 5%. Neither paper, however, manually evaluated the quality of its generated images, and Gupta et al. (2019) did not include extensive ablation studies to provide insight into how their method could be applied to other datasets.

In our study, we apply CycleGAN to a colorectal histopathology image dataset to generate augmented data. We propose a filtration module called Path-Rank-Filter that improves the quality of generated images for some classes and perform extensive ablation studies. Furthermore, we evaluate our generated images manually with four pathologists and compare our CycleGAN model’s ability to improve classifier performance with that of two other generative models: DCGAN (Radford et al., 2015) and DiscoGAN (Kim et al., 2017).

3. Image Translation in Colorectal Histopathology Images

Here, we discuss our approach for applying generative image translation to a dataset of colorectal histopathology images. We focus on cycle-consistent generative adversarial networks (Zhu et al., 2017) and propose a simple filtration module called Path-Rank-Filter that enhances the adenomatous features in generated images. Additionally, we describe the process of collecting our dataset as well as our experimental setup.

3.1. Cycle-Consistent Generative Adversarial Networks

We use a cycle-consistent generative adversarial network (CycleGAN) (Zhu et al., 2017) model to translate images of normal colonic mucosa to images of adenomatous colorectal polyps. Given two domains, $X$ and $Y$, with training samples $\{x_i\}_{i=1}^{N}$, where $x_i \in X$, and $\{y_i\}_{i=1}^{N}$, where $y_i \in Y$, CycleGAN learns the mapping $G: X \rightarrow Y$ for unpaired image translation. For colorectal polyp images, we set $X$ as normal colonic mucosa, which has many images, and $Y$ as a less common polyp type with few images (e.g., tubular adenoma or sessile serrated adenoma), so that we can mitigate the imbalance of class $Y$ by generating a set of augmented data $\{G(x_i)\}_{i=1}^{N}$ that presents features of domain $Y$.
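To make the objective concrete, the following is a minimal PyTorch sketch of the unpaired translation setup, using tiny placeholder generator and discriminator networks rather than the full architecture of Zhu et al. (2017); the λ = 10 cycle-consistency weight follows the CycleGAN default, and all names here are illustrative rather than taken from our released code.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Tiny placeholder generator; the real CycleGAN generator is a deeper ResNet-based network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1))
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Tiny placeholder discriminator; the real model is a PatchGAN."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 4, stride=2, padding=1))
    def forward(self, x):
        return self.net(x)

G, F = Generator(), Generator()        # G: X -> Y,  F: Y -> X
D_X, D_Y = Discriminator(), Discriminator()
mse, l1 = nn.MSELoss(), nn.L1Loss()

x = torch.randn(4, 3, 224, 224)        # batch of normal mucosa crops (domain X)
y = torch.randn(4, 3, 224, 224)        # batch of rare-class crops (domain Y)

fake_y, fake_x = G(x), F(y)

# Least-squares adversarial losses for the two generators
pred_fake_y, pred_fake_x = D_Y(fake_y), D_X(fake_x)
loss_gan = mse(pred_fake_y, torch.ones_like(pred_fake_y)) + \
           mse(pred_fake_x, torch.ones_like(pred_fake_x))

# Cycle consistency: F(G(x)) should reconstruct x, and G(F(y)) should reconstruct y
loss_cyc = l1(F(fake_y), x) + l1(G(fake_x), y)

loss_generators = loss_gan + 10.0 * loss_cyc   # lambda = 10 is the CycleGAN default weight
```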

3.2. Path-Rank-Filter

Because histopathology images differ in nature from images in standard computer vision datasets (e.g., MNIST or ImageNet), we propose a module called Path-Rank-Filter that improves CycleGAN’s performance specifically for histopathology images. Whereas distinguishing between common classes in computer vision (e.g., cats and dogs) is relatively straightforward, histopathology images can contain a range of histologic features that determine whether an image can be classified as adenomatous. For instance, both an image with small amounts of tubular architectures and an image covered by tubular architectures would be classified by a pathologist as a tubular adenoma. We hypothesize that images with more prominent features will be more useful for training, and so instead of training a CycleGAN on the original $\{x_i\}_{i=1}^{N}$ and $\{y_i\}_{i=1}^{N}$, we introduce the following filtration process (Figure 1):

Figure 1: Process for generating synthetic histopathology images of rare colorectal polyp classes. Path-Rank-Filter (i-ii) enhances the adenomatous features in generated images by filtering the training data for CycleGAN to only images with strong adenomatous features.

  1. We train a ResNet (He et al., 2015) $f$ to classify $X$ and $Y$. We define $f_Y(y_i)$ as the output probability of the ResNet for class $Y$ when given image $y_i$ as the input.

  2. Then, we run the ResNet on all $\{y_i\}_{i=1}^{N}$. For some $\alpha \in (0, 1]$, we find $\{y\}_\alpha \subset \{y_i\}_{i=1}^{N}$ such that for all $y_i \in \{y\}_\alpha$, $f_Y(y_i)$ is in the highest $\alpha$ fraction of all output probabilities $\{f_Y(y_i)\}_{i=1}^{N}$.

  3. We train CycleGAN on $\{x_i\}_{i=1}^{N}$ and $\{y\}_\alpha$ instead of $\{x_i\}_{i=1}^{N}$ and $\{y_i\}_{i=1}^{N}$.

Path-Rank-Filter uses the knowledge that an adenomatous class Y includes images with a range of histologic features. It thus finds the images with the strongest features that are most representative of class Y and uses those images to train CycleGAN.
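A minimal sketch of this selection step, assuming a trained binary ResNet and a list of class-Y image tensors, is shown below; the function and variable names are illustrative rather than taken from our released implementation.

```python
import torch

def path_rank_filter(model, images_y, alpha, target_index, device="cpu"):
    """Return the top-alpha fraction of class-Y images, ranked by the trained
    ResNet's output probability for class Y (illustrative sketch)."""
    model.eval().to(device)
    probs = []
    with torch.no_grad():
        for img in images_y:                              # img: 3 x H x W tensor
            logits = model(img.unsqueeze(0).to(device))
            probs.append(torch.softmax(logits, dim=1)[0, target_index].item())
    order = sorted(range(len(images_y)), key=lambda i: probs[i], reverse=True)
    keep = max(1, int(alpha * len(images_y)))
    return [images_y[i] for i in order[:keep]]            # {y}_alpha used to train CycleGAN
```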

3.3. Dataset Collection

Our dataset of colorectal polyp images was collected from the Dartmouth-Hitchcock Medical Center in New Hampshire, USA, our tertiary medical institution. We collected 427 high-resolution whole-slide images, which we split into a training set of 326 whole-slide images and a testing set of 101 whole-slide images. For the training set, pathologists annotated all whole-slide images with bounding boxes representing regions of interest, for a total of 3517 variable-size image crops. Each image crop was labeled with a single polyp-type class, either benign (normal or hyperplastic) or adenomatous (tubular adenoma, tubulovillous/villous adenoma, or sessile serrated adenoma). The distribution of classes in our training set is shown in Figure 2.

Figure 2: Distribution for collected dataset of colorectal polyp histopathology images. HP: hyperplastic polyp, NO: normal colonic mucosa, TVA: tubulovillous/villous adenoma, TA: tubular adenoma, SSA: sessile serrated adenoma. Two diagnostically relevant classes of adenomatous polyps, tubular adenoma (TA) and sessile serrated adenoma (SSA), comprise only 14.8% and 3.3% of the dataset, respectively.

For the testing set, pathologists annotated the whole-slide images with fixed-size tiles (224 × 224 pixels) containing classic examples of polyp types, and polyp type labels were verified by two pathologists so that our evaluation was as close to ground truth as possible. Our final testing set, which is used in §4.3, had 261 hyperplastic polyp images and 39 sessile serrated adenoma images.

3.4. Experimental Setup and Motivation

In this study, we set tubular adenoma (TA) and sessile serrated adenoma (SSA), two adenomatous polyp types that respectively account for only 14.8% and 3.3% of our dataset by size, as the target domains for data generation. As a source domain, we use normal colonic mucosa images, since both tubular and sessile serrated adenomas emerge as a result of cytological transformations on normal colonic mucosa.

For all classifiers, we use the ResNet architecture (He et al., 2015) and train each classifier for 20 epochs. We conducted an ablation study for our particular classification task and found that increasing the depth of the neural network did not substantially improve performance (Supplementary Figure 1). Thus, for all experiments, we used the model with the lowest number of parameters, ResNet-18, so that experiments can be replicated more quickly.
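For illustration, such a classifier could be set up roughly as follows; ResNet-18 and the 20 training epochs follow the text above, while the optimizer, learning rate, and data loader are assumptions made for this sketch.

```python
import torch
import torch.nn as nn
from torchvision import models

# Binary example (e.g., HP vs. SSA in Section 4.3); other experiments use more classes
model = models.resnet18(num_classes=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # assumed optimizer/learning rate
criterion = nn.CrossEntropyLoss()

def train(model, loader, epochs=20):
    """Standard supervised training loop over (image batch, label batch) pairs."""
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
```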

4. Experiments

We perform extensive experiments to evaluate the quality and usefulness of generative image translation for colorectal polyp histopathology images. We measure the strength of our filtering method using a pre-trained classifier, finding that CycleGAN with Path-Rank-Filter generates images that are substantially closer to the target domain (i.e., exhibit more adenomatous features) than when Path-Rank-Filter is not used. Next, we perform a clinical evaluation of our images by conducting a Turing test with four gastrointestinal pathologists, finding that three of the four pathologists could not differentiate at least half of the synthetic images from real images. Finally, we evaluate how adding the generated images as augmented data for training a ResNet classifier can improve performance for detecting sessile serrated adenomas, a clinically important distinction in colorectal cancer screening.

While we limit the scope of this paper to a single source domain, normal colonic mucosa, we show qualitative results of experiments on other source domains in Supplementary Figure 6.

4.1. Enhancing Adenomatous Features with Path-Rank-Filter

In this experiment, we evaluate how Path-Rank-Filter can select a subset of the adenomatous training images with the strongest adenomatous features for CycleGAN so that the generated images will also have a strong presence of features representing the desired class. For the three adenomatous classes of polyps (tubular, tubulovillous/villous, and sessile serrated), we apply CycleGAN using Path-Rank-Filter with filtration parameter values of $\alpha \in \{1/2, 1/4, 1/8, 1/16, 1/32\}$ on the 9054 normal colonic mucosa images in our training set to generate 9054 images of the target adenomatous class. We then measure the prominence of adenomatous features in our generated images by using a pre-trained classifier to evaluate the percent of generated images that were actually classified as the intended target class (Table 1).
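This metric could be computed along the following lines, assuming a pre-trained classifier and a list of generated image tensors; the names here are illustrative, not from our released code.

```python
import torch

def fraction_classified_as_target(classifier, generated_images, target_index, device="cpu"):
    """Percent of CycleGAN outputs that a pre-trained classifier labels as the target class."""
    classifier.eval().to(device)
    hits = 0
    with torch.no_grad():
        for img in generated_images:                       # img: 3 x H x W tensor
            pred = classifier(img.unsqueeze(0).to(device)).argmax(dim=1).item()
            hits += int(pred == target_index)
    return 100.0 * hits / len(generated_images)
```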

Table 1:

Percent of synthetic images generated by a CycleGAN with various α parameters for Path-Rank-Filter that were classified by a pre-trained classifier as the intended class. 9054 synthetic images were evaluated for each class and α value. TA: tubular adenoma, TVA: tubulovillous/villous adenoma, SSA: sessile serrated adenoma.

Polyp Class | α = 1 | α = 1/2 | α = 1/4 | α = 1/8 | α = 1/16 | α = 1/32
TA          | 35.4  | 64.4    | 79.6    | 87.6    | 89.2     | 93.8
TVA         | 32.7  | 67.3    | 49.4    | 63.1    | 85.9     | 86.1
SSA         | 37.0  | 20.9    | 21.5    | 38.5    | 23.4     | 43.7

Based on this evaluation metric, Path-Rank-Filter substantially enhanced adenomatous features in generated images for TA and TVA. For these two classes, the highest classification performance was at $\alpha = 1/32$, with the pre-trained classifier correctly detecting 93.8% of generated images for TA and 86.1% for TVA. These high accuracies seem to reflect the nature of these two adenomatous classes, for which training images exhibit a range of features. TA images are defined by hyperchromatic, pencillate nuclei; pathologists label images with only small hints of pencillate nuclei and images with obviously strong tubular features alike as tubular adenomas. Similarly, TVA images are characterized by finger-like extensions with hyperchromatic, pencillate nuclei, and therefore some images will have more villous features than others.

For SSA, on the other hand, Path-Rank-Filter did not substantially improve performance. We hypothesize that this result reflects the differing nature of SSAs, which are defined by the presence of a single broad-based crypt. Unlike TAs and TVAs, SSAs do not present a spectrum of histological features, so it makes sense that Path-Rank-Filter does not select a more representative subset of SSAs for training CycleGAN, and the generated images therefore did not exhibit stronger SSA features.

Furthermore, we select example images to examine the histologic features under different filtration parameters (Figure 3). For TA, we see that CycleGAN transforms normal crypts by introducing pencillate nuclei into the crypt borders, altering cell color, and merging small crypts into more complex structures. For TVA, crypts become more elongated and finger-like at smaller α parameters. For SSA, however, the quality of adenomatous features did not substantially improve with smaller α parameters; perhaps the SSA example generated when using all images for training (α = 1) has the strongest features, although interpretations might differ among pathologists. More examples of generated images for varying α are shown in Supplementary Figures 2 (TA), 3 (TVA), and 4 (SSA). Generated images of tubular adenomas after various training epochs are shown in Supplementary Figure 5.

Figure 3: CycleGAN’s generated images for different values of α. For instance, α = 1/4 means that the top 25% of images with the highest output probabilities from a ResNet were used to train CycleGAN. TA: tubular adenoma, TVA: tubulovillous/villous adenoma, SSA: sessile serrated adenoma. For TA and TVA, adenomatous features were enhanced at smaller α values.

4.2. Evaluation by Pathologists

We further measure the quality of generated adenomatous images through clinical evaluation by four gastrointestinal pathologists. For the tubular and sessile serrated classes,2 the two least common classes in our dataset, we presented the four pathologists with a set of 200 unlabeled images: 100 real images and 100 generated (fake) images. Each pathologist independently classified each image as either real or fake. As shown in Figure 4, at least half of the pathologists could not distinguish real and fake images at a statistically significant level, correctly distinguishing some fake images while also incorrectly labeling real images as fake.

Figure 4: Results of Turing test for whether gastrointestinal pathologists could distinguish real and fake images of tubular adenomas (A) and sessile serrated adenomas (B). Left: example real and generated images that were classified correctly and incorrectly by pathologists, with the number of pathologists who labeled the image as such denoted in the lower left corner. Right: evaluation of real and fake images by four pathologists.

We also perform statistical analysis on the pathologists’ overall accuracies, using $x_0 = 0.5$ as the expected accuracy for random guessing and each pathologist’s accuracy on the $n = 200$ images as $\hat{x}$ to calculate the z-score for each pathologist (Equation 1).

$$z = \frac{\hat{x} - x_0}{\sqrt{\frac{x_0(1 - x_0)}{n}}} \qquad (1)$$

We then calculate $p$ for each pathologist given the null hypothesis $H_0: \hat{x} = x_0$. With this configuration, a p-value of $p < 0.05$ is statistically significant (i.e., the pathologist is able to distinguish between real and fake images).
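As a worked example, a pathologist who correctly labeled 120 of the 200 images (accuracy $\hat{x} = 0.60$) would have $z \approx 2.83$ and a two-sided $p \approx 0.005$, which would count as significant; the snippet below reproduces this calculation. The 120/200 figure is illustrative rather than an actual pathologist's score, and a two-sided test is assumed here.

```python
from math import sqrt
from statistics import NormalDist

def z_test(accuracy, n=200, x0=0.5):
    """z-score (Equation 1) and two-sided p-value against chance accuracy x0."""
    z = (accuracy - x0) / sqrt(x0 * (1 - x0) / n)
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p

z, p = z_test(0.60)                     # hypothetical pathologist: 120/200 correct
print(f"z = {z:.2f}, p = {p:.4f}")      # z ≈ 2.83, p ≈ 0.0047 -> significant at 0.05
```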

For tubular adenoma images, only one pathologist was able to differentiate real images from synthetic images at a statistically significant level. For sessile serrated adenoma images, two pathologists were able to distinguish between real and synthetic images at a statistically significant level. Based on feedback from pathologists, fake sessile serrated adenoma images were easier to identify because our CycleGAN model created a subtle mosaic-like pattern in the whitespace of images. Sessile serrated adenomas tended to have more whitespace because they are defined by a single large crypt (of mostly whitespace), which might explain why it was easier to detect fake sessile serrated adenomas than tubular adenomas.

4.3. Improving Classifier Performance

Image translation can mitigate class imbalances in training sets by generating synthetic images of rare classes. We generated synthetic images of sessile serrated adenomas (only represented by 3% of the training set) and used them as augmented data for training a ResNet classifier to distinguish between hyperplastic polyps (benign) and sessile serrated adenomas (adenomatous), a clinically important task in colorectal cancer screening (Korbar et al., 2017a,b). We applied CycleGAN to all 9054 normal colonic mucosa images in our training set to generate 9054 images of the sessile serrated class, and added these images into the training set. Then, we used this dataset for training a ResNet and evaluated it on a test set of 261 hyperplastic polyp images and 39 sessile serrated adenoma images, comparing our ResNet’s performance with that of ResNets trained on generated data from DiscoGAN and DCGAN, as well as ResNets trained without augmented data (Figure 5A). Including CycleGAN-generated images for training boosted classification AUC by over 10%, outperforming DCGAN-generated images and DiscoGAN-generated images.
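The AUC comparison could be computed roughly as follows, assuming a trained classifier and a loader over the 261 hyperplastic and 39 sessile serrated adenoma test tiles described in §3.3; this is a sketch with illustrative names, not the evaluation code used in the paper.

```python
import torch
from sklearn.metrics import roc_auc_score

def evaluate_auc(model, test_loader, ssa_index=1, device="cpu"):
    """AUC for detecting sessile serrated adenomas (positive) vs. hyperplastic polyps (negative)."""
    model.eval().to(device)
    scores, labels = [], []
    with torch.no_grad():
        for images, targets in test_loader:
            probs = torch.softmax(model(images.to(device)), dim=1)[:, ssa_index]
            scores.extend(probs.cpu().tolist())
            labels.extend((targets == ssa_index).int().tolist())
    return roc_auc_score(labels, scores)
```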

Figure 5: A: AUCs of ResNets trained on real images with synthetic images from different generative models given as additional training data. B: AUCs of ResNets trained without real images and with synthetic images from different generative models as the only available training data. In both experiments, the ResNet that was trained with CycleGAN’s synthetic images had the highest AUC.

We also trained a ResNet on a training set consisting of the same real hyperplastic images but with synthetic images as the only available sessile serrated adenoma images (Figure 5B). Once again, the model trained on CycleGAN-generated images outperformed the models trained on DCGAN-generated and DiscoGAN-generated images, by 8% and 23%, respectively. In both experiments, the ResNet trained using CycleGAN-generated images achieved the highest AUC.

5. Limitations and Discussion

Although we show some promising results in terms of image quality and the ability to improve the performance of a ResNet classifier, our study has notable limitations. First, fair manual evaluation of images is non-trivial. Even though the pathologists in our study have years of experience examining colorectal polyp slides, these Turing tests do not perfectly reflect image quality, since pathologists do not distinguish real and fake data as a task in clinical practice. Furthermore, variation in results suggests that distinguishing fake images might depend highly on the individual pathologist, and some pathologists reported that they could better distinguish real and fake images as they saw more images. Finally, we only showed pathologists fixed-size tiles of images; generating an entire high-resolution slide with high-quality features is a substantially more challenging task.

In terms of improving classifier training, we had hoped that training with synthetic data would achieve the same performance as training with real data, but a ResNet trained on only synthetic SSA images achieved an AUC of only 0.68 (Figure 5), much lower than the AUC of a classifier trained on both real and synthetic data (0.89). This result suggests that although the quality of a single generated image might be comparable to that of a single real image, the quality of the set of generated images likely does not match that of a set of real images.

Our paper has explored image translation for data augmentation in colorectal histopathology images. Whereas most work in generative data augmentation focuses on generating images from random noise, we note that images from other classes might be helpful in the field of histopathology and therefore take an image translation approach. Future work might include applying our method to other datasets to evaluate the generalizability of our approach.

Supplementary Material


Acknowledgments

This research was supported in part by National Institute of Health grants R01LM012837 and P20GM104416. We thank Naofumi Tomita for helpful feedback on the study design.

Footnotes

2. Manual evaluation is costly, and so we do not evaluate tubulovillous/villous adenoma in this paper.

References

  1. Bass Cher, Dai Tianhong, Billot Benjamin, Arulkumaran Kai, Creswell Antonia, Clopath Claudia, De Paola Vincenzo, and Bharath Anil Anthony. 2019. Image synthesis with a convolutional capsule generative adversarial network. In Proceedings of the International Conference on Medical Imaging with Deep Learning. https://openreview.net/pdf?id=rJen0zC1lE.
  2. Bayramoglu Neslihan, Kaakinen Mika, Eklund Lauri, and Heikkila Janne. 2018. Towards virtual H&E staining of hyperspectral lung histology images using conditional generative adversarial networks. In Proceedings of the IEEE International Conference on Computer Vision, pages 64–71. https://ieeexplore.ieee.org/document/8265226.
  3. Bentaieb Aicha and Hamarneh Ghassan. 2017. Adversarial stain transfer for histopathology image analysis. IEEE Transactions on Medical Imaging, volume 37. https://ieeexplore.ieee.org/document/8170242.
  4. Burlingame Erik A., Margolin Adam, Gray Joe, and Chang Young Hwan. 2018. SHIFT: Speedy histopathological-to-immunofluorescent translation of whole slide images using conditional generative adversarial networks. Volume 10581. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6166432/.
  5. Cho Hyungjoo, Lim Sungbin, Choi Gunho, and Min Hyunseok. 2017. Neural stain-style transfer learning using GAN for histopathological images. In Journal of Machine Learning Research: Workshop and Conference Proceedings. https://arxiv.org/pdf/1710.08543.pdf.
  6. Dar Salman UH, Yurt Mahmut, Karacan Levent, Erdem Aykut, Erdem Erkut, and Çukur Tolga. 2018. Image synthesis in multi-contrast MRI with conditional generative adversarial networks. IEEE Transactions on Medical Imaging. https://arxiv.org/pdf/1802.01221.pdf.
  7. Goodfellow Ian J., Pouget-Abadie Jean, Mirza Mehdi, Xu Bing, Warde-Farley David, Ozair Sherjil, Courville Aaron, and Bengio Yoshua. 2014. Generative adversarial nets. In Proceedings of the 27th International Conference on Neural Information Processing Systems, volume 2, pages 2672–2680. https://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf.
  8. Gupta Anant, Venkatesh Srivas, Chopra Sumit, and Ledig Christian. 2019. Generative image translation for data augmentation of bone lesion pathology. arXiv. https://arxiv.org/pdf/1902.02248.pdf.
  9. He Kaiming, Zhang Xiangyu, Ren Shaoqing, and Sun Jian. 2015. Deep residual learning for image recognition. CoRR, abs/1512.03385. https://dblp.org/rec/bib/journals/corr/HeZRS15.
  10. Hiasa Yuta, Otake Yoshito, Takao Masaki, Matsuoka Takumi, Takashima Kazuma, Carass Aaron, Prince Jerry L., Sugano Nobuhiko, and Sato Yoshinobu. 2018. Cross-modality image synthesis from unpaired data using CycleGAN. arXiv. https://arxiv.org/pdf/1803.06629.pdf.
  11. Hou Le, Agarwal Ayush, Samaras Dimitris, Kurc Tahsin M., Gupta Rajarsi R., and Saltz Joel H. 2017. Unsupervised histopathology image synthesis. arXiv. https://arxiv.org/pdf/1712.05021.pdf.
  12. Hussain Zeshan, Gimenez Francisco, Yi Darvin, and Rubin Daniel. 2017. Differential data augmentation techniques for medical imaging classification tasks. In AMIA Annual Symposium Proceedings. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5977656/.
  13. Kim Taeksoo, Cha Moonsu, Kim Hyunsoo, Lee Jung Kwon, and Kim Jiwon. 2017. Learning to discover cross-domain relations with generative adversarial networks. arXiv. https://arxiv.org/pdf/1703.05192.pdf.
  14. Korbar Bruno, Olofson Andrea M, Miraflor Allen P, Nicka Catherine M, Suriawinata Matthew A, Torresani Lorenzo, Suriawinata Arief A, and Hassanpour Saeed. 2017a. Looking under the hood: Deep neural network visualization to interpret whole-slide image analysis outcomes for colorectal polyps. In 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops. https://ieeexplore.ieee.org/document/8014848.
  15. Korbar Bruno, Olofson Andrea M, Miraflor Allen P, Nicka Catherine M, Suriawinata Matthew A, Torresani Lorenzo, Suriawinata Arief A, and Hassanpour Saeed. 2017b. Deep learning for classification of colorectal polyps on whole-slide images. http://www.jpathinformatics.org/article.asp?issn=2153-3539;year=2017;volume=8;issue=1;spage=30;epage=30;aulast=Korbar.
  16. Li Der-Chiang, Liu Chiao-Wen, and Hu Susan C. 2010. A learning method for the class imbalance problem with medical data sets. Computers in Biology and Medicine, 40:509–518. https://www.ncbi.nlm.nih.gov/pubmed/20347072.
  17. Nie Dong, Trullo Roger, Petitjean Caroline, Ruan Su, and Shen Dinggang. 2016. Medical image synthesis with context-aware generative adversarial networks. arXiv. https://arxiv.org/pdf/1612.05362.pdf.
  18. Perez Luis and Wang Jason. 2017. The effectiveness of data augmentation in image classification using deep learning. CoRR, abs/1712.04621. https://dblp.org/rec/bib/journals/corr/abs-1712-04621.
  19. Quiros Adalberto Claudio, Murray-Smith Roderick, and Yuan Ke. 2019. PathologyGAN: Learning deep representations of cancer tissue. arXiv. https://arxiv.org/pdf/1907.02644.pdf.
  20. Radford Alec, Metz Luke, and Chintala Soumith. 2015. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv. https://arxiv.org/pdf/1511.06434.
  21. Rana Aman, Yauney Gregory, Lowe Alarice, and Shah Pratik. 2019. Computational histological staining and destaining of prostate core biopsy RGB images with generative adversarial neural networks. arXiv. https://arxiv.org/pdf/1811.02642.pdf.
  22. Salehinejad Hojjat, Valaee Shahrokh, Dowdell Tim, Colak Errol, and Barfett Joseph. 2017. Generalization of deep neural networks for chest pathology classification in x-rays using generative adversarial networks. CoRR, abs/1712.01636. https://dblp.org/rec/bib/journals/corr/abs-1712-01636.
  23. Salehinejad Hojjat, Valaee Shahrokh, Dowdell Tim, Colak Errol, and Barfett Joseph. 2018. Generalization of deep neural networks for chest pathology classification in x-rays using generative adversarial networks. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing. https://arxiv.org/pdf/1712.01636.pdf.
  24. Wang Jianing, Zhao Yiyuan, Noble Jack H., and Dawant Benoit M. 2018. Conditional generative adversarial networks for metal artifact reduction in CT images of the ear. In Proceedings of Medical Image Computing and Computer Assisted Intervention, pages 3–11. https://link.springer.com/chapter/10.1007/978-3-030-00928-1_1.
  25. Yang Qianye, Li Nannan, Zhao Zixu, Fan Xingyu, Chang Eric I-Chao, and Xu Yan. 2018. MRI image-to-image translation for cross-modality image registration and segmentation. arXiv. https://www.semanticscholar.org/paper/MRI-Image-to-Image-Translation-for-Cross-Modality-Yang-Li/572043ee8a930ab9ef76bca236b5593f83877985.
  26. Zanjani Farhad Ghazvinian, Zinger Svitlana, and de With Peter H. N. 2018. Deep convolutional Gaussian mixture model for stain-color normalization of histopathological images. In Proceedings of Medical Image Computing and Computer Assisted Intervention, pages 274–282. https://link.springer.com/chapter/10.1007/978-3-030-00934-2_31.
  27. Zhu Jun-Yan, Park Taesung, Isola Phillip, and Efros Alexei A. 2017. Unpaired image-to-image translation using cycle-consistent adversarial networks. CoRR, abs/1703.10593. http://arxiv.org/abs/1703.10593.
