Author manuscript; available in PMC: 2020 Oct 6.
Published in final edited form as: Proc COMPSAC. 2020 Sep 22;2020:723–728. doi: 10.1109/compsac48688.2020.0-174

Generating Region of Interests for Invasive Breast Cancer in Histopathological Whole-Slide-Image

Shreyas Malakarjun Patil 1, Li Tong 2, May D Wang 2,*
PMCID: PMC7537355  NIHMSID: NIHMS1602234  PMID: 33029594

Abstract

The detection of regions of interest (ROIs) on Whole Slide Images (WSIs) is one of the primary steps in computer-aided cancer diagnosis and grading. Early and accurate identification of invasive cancer regions in WSIs is critical for improving breast cancer diagnosis and, in turn, patient survival rates. However, invasive cancer ROI segmentation on WSIs is challenging because invasive cancer cells have low contrast and appear highly similar to non-invasive regions. In this paper, we propose a CNN-based architecture for generating ROIs through segmentation. The network tackles the constraints of data-driven learning and of working with very low-resolution WSI data for the detection of invasive breast cancer. Our approach is based on transfer learning and dilated convolutions. We propose a heavily modified U-Net-based autoencoder that takes as input an entire WSI at a resolution of 320×320. The network was trained on low-resolution WSIs from four different data cohorts and tested for both inter- and intra-dataset variance. The proposed architecture shows significant accuracy improvements in detecting invasive breast cancer regions.

Keywords: Invasive Breast Cancer, Whole-Slide-Image, Deep Learning, ROI Segmentation, CNN

1. INTRODUCTION

A whole new branch of big data studies has emerged with the development of digital pathology and the interpretation of Whole Slide Images (WSIs). Rapid advances in whole-slide scanners, which digitize histopathology slides, have enabled easy storage and sharing of pathological samples and accelerated research in digital pathology. WSI analysis is one of the most crucial steps in the diagnosis and grading of cancers, and the capability of computers to process and evaluate digitally stored histopathology data has been thoroughly exploited in the past few years.

Recently, deep learning with convolutional neural networks (CNNs) has produced some of the state-of-the-art methods in visual recognition. The ease of procuring the required hardware, such as GPUs, has made it one of the fastest-growing research areas, and deep learning is now applied extensively to biomedical imaging data. In digital pathology, WSIs are enormous: a typical WSI can span 80,000×80,000 pixels at full spatial resolution, occupying almost 20 GB of storage per image. This size is a major challenge for deep learning architectures applied to an entire WSI, since directly feeding a whole-slide image at its native resolution into a deep neural network is infeasible. Researchers therefore usually work on downsampled thumbnails for slide-level tasks and, for localized features, tile the WSIs into smaller patches that standard deep models can process directly, as sketched below.
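To make the two strategies concrete, here is a minimal sketch using the OpenSlide Python bindings; the slide path and patch size are hypothetical, not from the paper:

```python
import openslide

# Hypothetical slide file; OpenSlide reads common WSI formats (.svs, .tiff, ...).
slide = openslide.OpenSlide("slide.svs")

# Strategy 1: a down-sampled thumbnail for slide-level tasks.
thumbnail = slide.get_thumbnail((320, 320))   # PIL image, aspect-preserving

# Strategy 2: tile the full-resolution slide into patches for localized features.
patch_size = 256                              # assumed tile size
width, height = slide.dimensions              # level-0 (full) resolution
for x in range(0, width, patch_size):
    for y in range(0, height, patch_size):
        patch = slide.read_region((x, y), 0, (patch_size, patch_size)).convert("RGB")
        # ... feed `patch` into a patch-level model
```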

Motivation:

Breast cancer is the second leading cause of cancer deaths in developed countries and occurs predominantly in women [1]. Invasive breast cancer refers to cancer that has spread from its point of origin to other tissue, through the bloodstream or the lymphatic system, and it typically leads to an inferior prognosis [2]. Detecting and precisely delineating invasive cancer cells in digital pathology is one of the first steps in characterizing breast cancer and determining its severity [3]. Most screening and grading processes in pathology are performed within a predefined region of interest. However, manual identification of the ROI can be time-consuming and error-prone because of the WSI's enormous size. Furthermore, that size entails a very large number of learnable parameters and hence of computations, which forces us to limit deep learning architectures to a specific region or sample. The proposed approach is motivated by the need for automated, accurate delineation of invasive cancer ROIs and for performing the ROI segmentation efficiently.

Previous Work:

Before the era of deep learning, several research groups applied conventional digital image processing techniques to WSIs. These methods computed hand-crafted features, which were then fed to classifiers based on conventional machine learning algorithms [4, 5, 6]. The extracted features aim to capture various morphological structures and patterns in the WSI, including color density variations, cell density, cell structure and shape, and background-to-tumor tissue texture variations [7, 8]. Hand-crafted features are highly sensitive to the data source and do not cope well with high variance among scanners. Due to these limitations, such methods failed to generalize across the full data distribution.

With representation learning, many groups have successfully learned feature extractors that map WSIs into a low-dimensional hidden space. Unsupervised, supervised, and semi-supervised machine learning approaches have all been suggested for WSI analysis. Ma et al. [9] give an unsupervised method for ROI proposal using over-segmentation and hierarchical clustering. A tremendous increase in deep learning applications followed the introduction of challenges such as Breast Cancer Histology Images [10] and Cancer Metastasis Detection in Lymph Nodes [11]. Almost all CNN-based segmentation procedures sample the WSI into smaller patches and then apply learning algorithms to these sampled images [12, 13, 14, 15, 16, 17, 18]. Slide-level segmentation has also been attempted with semantic segmentation networks [19]; that study focuses on variants of FCN-8s, SegNet, and U-Net [20]. It has been established that sampling strategies increase efficiency: Cruz-Roa et al. developed a high-throughput adaptive sampling process for invasive cancer region proposals [1]. Since pathologists generally assess a slide at a coarse level and then zoom in to the ROI, Dong et al. propose a method that uses CNNs to learn whether or not to zoom in [21].

Challenges:

The major challenges are: 1) The large size of a WSI requires sampling the slide into smaller patches for analysis, which multiplies the processing required. 2) The lack of well-annotated data makes it difficult for deep neural networks to generalize. 3) Hand-crafted features are limited in capability and fail to generalize across varying conditions such as scanners and color distributions. 4) Working only with low-resolution WSI data loses much of the fine information required for ROI segmentation. 5) Most methods in the literature have been validated on only a single data cohort, limiting their demonstrated generalization and robustness.

Contributions:

In light of these drawbacks and challenges, this paper contributes the following: 1) The proposed network takes down-sampled, low-resolution WSIs and accurately detects invasive breast cancer regions. 2) It requires fewer computations for training while still generalizing well. 3) Our model improves segmentation results on down-sampled images. 4) We use transfer learning to tackle the challenge of limited data. 5) Whereas most methods perform binary classification of individual samples to decide whether they belong to the ROI, we treat an entire WSI at a fixed resolution as one sample.

2. PROPOSED METHODOLOGY

This section presents the technical details of, and justifications for, the training and pre-processing strategy, followed by a comprehensive description of the network architecture and its modifications.

Data:

The dataset is from the DRYAD Digital Repository [1]. It contains down-sampled WSIs from four different data cohorts: The Cancer Genome Atlas (TCGA, referred to as 1 in this paper) [22], the Hospital of the University of Pennsylvania (HUP, referred to as 2), University Hospitals Case Medical Center/Case Western Reserve University (UHCMC/CWRU, referred to as 3), and the Cancer Institute of New Jersey (CINJ, referred to as 4). The DRYAD dataset also contains binary masks for the annotated invasive breast cancer regions. Table I provides the number of WSIs available from each data cohort, along with the mean and variance of the ROI ratio covered by invasive cancer cells; the ratio ranges from 0.05 to 0.35 over the whole dataset.

Table II.

Performance analysis of the proposed network architecture

    Architecture       Accuracy   DC      JI
    --- Combined training and testing ---
    U-Net              67.59      62.71   58.51
    U-Net + ResNet     82.38      74.98   72.43
    RUBIC-Net 152      97.45      89.45   87.60
    RUBIC-Net 34       91.87      84.11   82.54
    --- Inter: trained on 1, 2, 3; tested on 4 ---
    RUBIC-Net 34       90.61      83.87   79.93
    --- Inter: trained on 1, 2, 4; tested on 3 ---
    RUBIC-Net 34       87.51      82.56   79.51
    --- Inter: trained on 1, 3, 4; tested on 2 ---
    RUBIC-Net 34       85.43      78.91   75.49
    --- Inter: trained on 2, 3, 4; tested on 1 ---
    RUBIC-Net 34       86.15      79.73   76.55
    HASHI [1]          76         –       –

A. PROPOSED ARCHITECTURE

We used a deep convolutional neural network that takes low-resolution WSI data as input and produces binary masks of the invasive cancer ROI. The model architecture is shown in Figure 1. The preprocessing is standard: we down-sampled the images to a resolution of 320×320 and normalized the pixel intensities to the range [0, 1]. The binary masks were scaled and normalized in the same way, as sketched below.
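A minimal sketch of this preprocessing, assuming PIL and NumPy; the helper name is ours, not the paper's:

```python
import numpy as np
from PIL import Image

def preprocess(image: Image.Image, mask: Image.Image):
    """Down-sample a WSI thumbnail and its binary mask to 320x320 and
    normalize intensities to [0, 1], mirroring the stated preprocessing."""
    image = image.resize((320, 320), Image.BILINEAR)
    mask = mask.resize((320, 320), Image.NEAREST)     # nearest keeps the mask binary
    x = np.asarray(image, dtype=np.float32) / 255.0   # pixel intensities in [0, 1]
    y = (np.asarray(mask, dtype=np.float32) > 0).astype(np.float32)
    return x, y
```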

Figure 1. Proposed network architecture.

Network Architecture and Justification:

The proposed network architecture is inspired by the skip-connection-based auto-encoder model of U-Net [23] for image segmentation, which serves as our baseline. Because only a limited number of WSIs were available for training, we improved segmentation performance with transfer learning from pre-trained weights specific to the WSI domain.

Transfer Learning:

The encoder and decoder parts of the network were replaced with a pretrained ResNet152 [24] architecture, whose weights had been pre-trained on a WSI classification task similar to that of this paper, for domain adaptation. The parameters of the entire encoder and decoder were adopted, except for the final resolution block closest to the bottleneck. Restricting learning to this deep representation space lowers the number of trainable parameters and the amount of data required, while the layers that reconstruct the mask were trained to maximize performance. Deepening the network at each resolution level leads to better performance.
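A hedged sketch of this setup on the encoder side, using PyTorch/torchvision. The paper's WSI-pretrained weights are not public, so ImageNet weights stand in here, and taking `layer4` as the trainable block is our reading of "the final resolution block closest to the bottleneck":

```python
import torchvision.models as models

# ImageNet weights as a stand-in for the paper's WSI-classification pretraining.
resnet = models.resnet152(weights=models.ResNet152_Weights.DEFAULT)

for name, param in resnet.named_parameters():
    # Freeze everything except the deepest resolution block, so learning is
    # limited to the representation space near the bottleneck.
    param.requires_grad = name.startswith("layer4")
```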

Convolutional Skip-Connections:

The direct skip connections were replaced with convolutional layers [25]. The convolutions add learnable parameters while transferring the feature maps, and processing the encoder outputs reduces the semantic gap between the feature maps of the encoder and decoder sub-networks. This reduction makes the optimization problem easier and decreases the amount of data required [25].
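A minimal PyTorch sketch of such a convolutional skip connection; the module name and layer sizes are illustrative:

```python
import torch
import torch.nn as nn

class ConvSkip(nn.Module):
    """Process encoder features with a small conv block before fusing them
    with the decoder path, instead of concatenating them directly."""
    def __init__(self, channels: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, encoder_feat: torch.Tensor, decoder_feat: torch.Tensor):
        return torch.cat([self.block(encoder_feat), decoder_feat], dim=1)
```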

Dense Blocks:

Residual connections were added to each convolutional layer in every encoder and decoder block, in addition to the residual blocks already in the ResNet architecture [26]. The resulting dense blocks give each layer the combined information of all previous representation levels, enabling more accurate learning of the ROI's shape and size. The model was validated at every modification level, and we observed a steady improvement in performance at the cost of increased computation.
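A minimal dense-block sketch in the same vein; the growth rate and depth are assumptions, not the paper's values:

```python
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each convolution receives the concatenation of all earlier feature
    maps in the block, i.e. the combined information of all previous
    representation levels."""
    def __init__(self, in_channels: int, growth: int = 32, n_layers: int = 3):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(in_channels + i * growth, growth, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            )
            for i in range(n_layers)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)
```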

Dilated Convolutions:

The convolutions in the shallow part of the network were replaced with dilated convolutions for distributed learning of the ROI features. The dilation rate was steadily increased from one to four, with the initial layers learning the intricate shape and the later layers the overall size, and then steadily decreased back to one, compressing the learned features into a limited window size. Dilated convolutions thus steadily widen the receptive field and enable distributed learning of shape and size while maintaining computational efficiency.
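A sketch of this dilation schedule in PyTorch; the exact per-layer ramp 1→4→1 is our assumption, and padding equals the dilation rate so the spatial size is preserved:

```python
import torch.nn as nn

def dilated_stack(channels: int) -> nn.Sequential:
    """3x3 convolutions whose dilation rate ramps from 1 up to 4 and back,
    widening and then re-compressing the receptive field."""
    layers = []
    for rate in (1, 2, 3, 4, 3, 2, 1):
        layers += [
            nn.Conv2d(channels, channels, kernel_size=3,
                      padding=rate, dilation=rate),
            nn.ReLU(inplace=True),
        ]
    return nn.Sequential(*layers)
```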

Pruning:

A systematic pruning of the ResNet152 architecture was performed to understand the effect of the number of residual blocks: after blocks were eliminated, the model was re-validated with the remaining layers. We observed only a very small change in performance from residual block removal. The systematic pruning left only 34 layers in the network while still performing well, and it increased computational efficiency almost fivefold over the previously learned model.
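A hedged sketch of such a pruning experiment on a torchvision-style ResNet; the `validate` callback and the per-stage granularity are assumptions:

```python
import copy

def prune_and_validate(model, validate, keep_per_stage):
    """Keep only the first keep_per_stage[i] residual blocks of each ResNet
    stage, then re-validate. Keeping block 0 of a stage preserves the
    stage's channel/stride transition, so the pruned model stays valid."""
    pruned = copy.deepcopy(model)
    for stage_name, keep in zip(("layer1", "layer2", "layer3", "layer4"),
                                keep_per_stage):
        stage = getattr(pruned, stage_name)
        setattr(pruned, stage_name, stage[:keep])  # nn.Sequential slicing
    return pruned, validate(pruned)
```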

B. EXPERIMENT IMPLEMENTATION

The proposed network was trained and tested multiple times to determine its robustness, using all four datasets. 10% of the images were held out for testing, and 9-fold cross-validation was performed on the remaining 90% of the data. The model's inter-dataset robustness was then assessed by training on three of the data cohorts and testing on the fourth. Training used a pixel-wise binary cross-entropy loss, a batch size of one, 25 epochs, and the Adam optimizer with a learning rate of 0.001, as sketched below. The ResNet was fine-tuned from the weights obtained on the classification task with low-resolution WSI data. Because of the scarcity of data, we applied dropout of 0.4 to the trainable layers to improve generalization.
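A minimal training sketch matching the stated setup (PyTorch); `model` and `train_loader` are assumed to be defined elsewhere, with the model ending in a sigmoid so that BCELoss applies:

```python
import torch
import torch.nn as nn

criterion = nn.BCELoss()   # pixel-wise binary cross-entropy
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

for epoch in range(25):
    model.train()
    for image, mask in train_loader:   # batch size of one
        optimizer.zero_grad()
        pred = model(image)            # sigmoid output in [0, 1]
        loss = criterion(pred, mask)
        loss.backward()
        optimizer.step()
```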

3. EXPERIMENTAL ANALYSIS

This section describes the experiments performed to validate the proposed model and the metrics used. We designed our experiments to rigorously validate against the variance caused by different data cohorts, scanners, and breast cancer cases.

Testing Strategy:

We used the following strategies: 1) We combined the four datasets for training, validation, and testing. Of the 584 available images, 525 were used for training and validation and 59 for testing, with 9-fold cross-validation performed and results averaged (a splitting sketch follows this list). 2) We compared four architectures: U-Net, U-Net with ResNet152, the proposed network with ResNet152, and the proposed network with a pruned ResNet. 3) We kept the four WSI datasets separate, training on three combined data cohorts and testing on the fourth.
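A sketch of the splitting scheme in strategy 1, assuming scikit-learn; random seeds and file handling are ours:

```python
from sklearn.model_selection import KFold, train_test_split

indices = list(range(584))   # 584 images across the four cohorts

# Hold out ~10% for testing (ceil(584 * 0.1) = 59 images).
trainval_idx, test_idx = train_test_split(indices, test_size=0.1, random_state=0)

# 9-fold cross-validation on the remaining 525 images.
kfold = KFold(n_splits=9, shuffle=True, random_state=0)
for fold, (tr, va) in enumerate(kfold.split(trainval_idx)):
    # tr/va are positions within trainval_idx; train one model per fold
    # and average the test metrics over the folds.
    pass
```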

Evaluation Metrics:

In our experiments, we evaluated our model with three metrics: pixel-wise classification accuracy, the Dice Coefficient (DC), and the Jaccard Index (JI), computed as sketched below. Table II compares the performance of the proposed network with the current state-of-the-art and with all the interim architectures produced while evolving the network and pruning residual blocks.
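A minimal sketch of the three metrics on binarized masks, assuming NumPy arrays of matching shape:

```python
import numpy as np

def evaluate(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7):
    """Pixel-wise accuracy, Dice Coefficient, and Jaccard Index."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    accuracy = (pred == truth).mean()
    dice = 2 * tp / (pred.sum() + truth.sum() + eps)
    jaccard = tp / (union + eps)
    return accuracy, dice, jaccard
```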

A. COMPARATIVE ANALYSIS AND DISCUSSION

In Table II, we observe the increasing performance of the proposed model through its various stages of modification; the results support each modification's contribution to detecting the invasive cancer region. U-Net's performance was compromised by the limited training data and by the shallow network at each resolution level. Using the pre-trained ResNet152 architecture as the encoder and decoder improves performance, demonstrating the benefit of transfer learning. As we demonstrated in previous work [27], unsupervised pre-training of the encoder with unlabeled breast cancer WSIs might further compensate for the lack of labeled training data. The convolutional skip connections also benefited performance and made optimization easier, and incorporating local and global information processing through dense blocks and dilated convolutions in the shallow region led to better detection. Throughout these modifications, we maintained the same number of training parameters.

Segmentation performance drops under testing strategy 3 because of high inter-data-cohort variation, and it depends on the size of each data cohort; hence performance decreased as the training set changed. The proposed model is nevertheless still able to generalize across datasets. Although the mean ROI area ratio is very similar for data cohorts 2 and 3, performance decreases more when inter-dataset testing is done on dataset 2 than on dataset 3. This reinforces our explanation about training-data availability: when testing on dataset 2, almost 20% less training data is available than when testing on dataset 3, producing the drop in performance. Besides transfer learning, the lack of training data could also be compensated for by fusing the deep learning features with conventional hand-crafted features [28].

HASHI [1] detects invasive cancer through adaptive sampling, using CNNs to generate a probability map in a computationally efficient and accurate manner. As Table II shows, our proposed method gives better results with far fewer computations: instead of passing every sample through a CNN, the proposed architecture operates directly on a scaled WSI, drastically decreasing the number of computations. Our network thus exploits advances in both domain adaptation and the segmentation learning process.

Network Limitations:

The proposed network architecture falls short in a few examples, shown in Figures 3 and 4. In these cases the network fails to delineate the shape accurately but can still determine the size and identify most of the invasive cancer region. Most failures occur on slides containing multiple disjoint ROIs whose individual regions are very small: because the input is a highly down-sampled image, the network is prone to misclassifying small regions, a limitation compounded by the extremely similar texture of normal tissue and cancerous cells. When there are only one or two small ROIs, or a single large ROI (Figure 2), the network performs comparatively well. Slides with many disjoint ROIs are very rare, so the network saw few such cases during training.

Figure 3. Performance of proposed network in challenging cases (green - ground-truth, red - prediction).

Figure 4. Performance of proposed network on WSI with small ROI (green - ground-truth, red - prediction).

4. CONCLUSION

The proposed method, based on residual blocks and U-Net, for invasive cancer detection shows significant improvement over previous approaches in the literature. The architecture is computationally efficient and delivers higher ROI segmentation performance. The residual blocks and pre-trained weights reduce the effects of over-fitting, while the dilated convolutions and the convolutional layers within the skip connections lead to better feature representations and, consequently, better segmentation performance. In summary, we propose an approach that tackles the constraints of limited data availability and works with low-resolution WSI data to detect invasive breast cancer with a heavily modified U-Net architecture. The proposed network therefore shows promise for identifying ROIs on lower-resolution WSIs in clinical decision support.

Figure 2. Performance of proposed network on WSI with large ROI (green - ground-truth, red - prediction).

Table I.

Data analysis and specifications

    Serial No.   Dataset   Number of Samples   Mean ROI Area Ratio   Variance of ROI Area Ratio
    1.           TCGA      195                 0.23                  0.018
    2.           HUP       239                 0.11                  0.0055
    3.           CWRU      110                 0.092                 0.0058
    4.           CINJ      40                  0.16                  0.0168

    Range of ROI area ratio for the entire dataset: 0.05–0.35

5. ACKNOWLEDGMENTS

The work was supported in part by grants from the National Science Foundation EAGER Award NSF1651360, the Children's Healthcare of Atlanta and Georgia Tech Partnership Grant, the Giglio Breast Cancer Research Fund, the Georgia Tech Petit Institute Faculty Fellow, and the Carol Ann and David D. Flanagan Faculty Fellow Research Fund. This work was also supported in part by a scholarship from the China Scholarship Council (CSC) under Grant No. 201406010343. The content of this article is solely the responsibility of the authors and does not necessarily represent the official views of the NIH.

6. REFERENCES

  • [1]. Cruz-Roa A, Gilmore H, Basavanhally A, Feldman M, Ganesan S, Shih N, Tomaszewski J, Madabhushi A, and González F, "High-throughput adaptive sampling for whole-slide histopathology image analysis (HASHI) via convolutional neural networks: Application to invasive breast cancer detection," PLoS ONE, vol. 13, no. 5, p. e0196828, 2018.
  • [2]. Fisher Edwin R, Gregorio Remigio M, Fisher Bernard, Redmond Carol, Vellios Frank, Sommers Sheldon C, and Cooperating Investigators, "The pathology of invasive breast cancer: a syllabus derived from findings of the National Surgical Adjuvant Breast Project (protocol no. 4)," Cancer, vol. 36, no. 1, pp. 1–85, 1975.
  • [3]. Elston Christopher W and Ellis Ian O, "Pathological prognostic factors in breast cancer. I. The value of histological grade in breast cancer: experience from a large study with long-term follow-up. CW Elston & IO Ellis. Histopathology 1991; 19; 403–410: Author commentary," Histopathology, vol. 41, no. 3a, pp. 151–151, 2002.
  • [4]. Kothari Sonal, Phan John H, Osunkoya Adeboye O, and Wang May D, "Biological interpretation of morphological patterns in histopathological whole-slide images," in Proceedings of the ACM Conference on Bioinformatics, Computational Biology and Biomedicine, 2012, pp. 218–225.
  • [5]. Kothari Sonal, Phan John H, Stokes Todd H, and Wang May D, "Pathology imaging informatics for quantitative analysis of whole-slide images," Journal of the American Medical Informatics Association, vol. 20, no. 6, pp. 1099–1108, 2013.
  • [6]. Madabhushi Anant and Lee George, "Image analysis and machine learning in digital pathology: Challenges and opportunities," Medical Image Analysis, 2016.
  • [7]. Wang Haibo, Cruz-Roa Angel, Basavanhally Ajay N, Gilmore Hannah L, Shih Natalie, Feldman Mike, Tomaszewski John, Gonzalez Fabio, and Madabhushi Anant, "Mitosis detection in breast cancer pathology images by combining handcrafted and convolutional neural network features," Journal of Medical Imaging, vol. 1, no. 3, p. 034003, 2014.
  • [8]. Basavanhally Ajay, Ganesan Shridar, Feldman Michael, Shih Natalie, Mies Carolyn, Tomaszewski John, and Madabhushi Anant, "Multi-field-of-view framework for distinguishing tumor grade in ER+ breast cancer from entire histopathology slides," IEEE Transactions on Biomedical Engineering, vol. 60, no. 8, pp. 2089–2099, 2013.
  • [9]. Ma Yibing, Jiang Zhiguo, Zhang Haopeng, Xie Fengying, Zheng Yushan, Shi Huaqiang, Zhao Yu, and Shi Jun, "Generating region proposals for histopathological whole slide image retrieval," Computer Methods and Programs in Biomedicine, vol. 159, pp. 1–10, 2018.
  • [10]. Aresta Guilherme, Araújo Teresa, Kwok Scotty, Chennamsetty Sai Saketh, Safwan Mohammed, Alex Varghese, Marami Bahram, Prastawa Marcel, Chan Monica, Donovan Michael, et al., "BACH: Grand challenge on breast cancer histology images," Medical Image Analysis, 2019.
  • [11]. Litjens Geert, Bandi Peter, Bejnordi Babak Ehteshami, Geessink Oscar, Balkenhol Maschenka, Bult Peter, Halilovic Altuna, Hermsen Meyke, van de Loo Rob, Vogels Rob, et al., "1399 H&E-stained sentinel lymph node sections of breast cancer patients: the CAMELYON dataset," GigaScience, vol. 7, no. 6, p. giy065, 2018.
  • [12]. Xu Jun, Luo Xiaofei, Wang Guanhao, Gilmore Hannah, and Madabhushi Anant, "A deep convolutional neural network for segmenting and classifying epithelial and stromal regions in histopathological images," Neurocomputing, vol. 191, pp. 214–223, 2016.
  • [13]. Wang Dayong, Khosla Aditya, Gargeya Rishab, Irshad Humayun, and Beck Andrew H, "Deep learning for identifying metastatic breast cancer," arXiv preprint arXiv:1606.05718, 2016.
  • [14]. Chen Richard, Jing Yating, and Jackson Hunter, "Identifying metastases in sentinel lymph nodes with deep convolutional neural networks," arXiv preprint arXiv:1608.01658, 2016.
  • [15]. Liu Yun, Gadepalli Krishna, Norouzi Mohammad, Dahl George E, Kohlberger Timo, Boyko Aleksey, Venugopalan Subhashini, Timofeev Aleksei, Nelson Philip Q, Corrado Greg S, et al., "Detecting cancer metastases on gigapixel pathology images," arXiv preprint arXiv:1703.02442, 2017.
  • [16]. Xiao Kaiwen, Wang Zichen, Xu Tong, and Wan Tao, "A deep learning method for detecting and classifying breast cancer metastasis in lymph nodes on histopathological images," Beijing, 2017.
  • [17]. Fan Kun, Wen Shibo, and Deng Zhuofu, "Deep learning for detecting breast cancer metastases on WSI," in Innovation in Medicine and Healthcare Systems, and Multimedia, pp. 137–145. Springer, 2019.
  • [18]. Tong Li, Sha Ying, and Wang May D, "Improving classification of breast cancer by utilizing the image pyramids of whole-slide imaging and multi-scale convolutional neural networks," in 2019 IEEE 43rd Annual Computer Software and Applications Conference (COMPSAC), IEEE, 2019, vol. 1, pp. 696–703.
  • [19]. Ing Nathan, Ma Zhaoxuan, Li Jiayun, Salemi Hootan, Arnold Corey, Knudsen Beatrice S, and Gertych Arkadiusz, "Semantic segmentation for prostate cancer grading by convolutional neural networks," in Medical Imaging 2018: Digital Pathology, International Society for Optics and Photonics, 2018, vol. 10581, p. 105811B.
  • [20]. Swiderska-Chadaj Z, Markiewicz T, Gallego J, Bueno G, Grala B, and Lorent M, "Deep learning for damaged tissue detection and segmentation in Ki-67 brain tumor specimens based on the U-Net model," Bulletin of the Polish Academy of Sciences: Technical Sciences, vol. 66, no. 6, 2018.
  • [21]. Dong Nanqing, Kampffmeyer Michael, Liang Xiaodan, Wang Zeya, Dai Wei, and Xing Eric, "Reinforced auto-zoom net: Towards accurate and fast breast cancer segmentation in whole-slide images," in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, pp. 317–325. Springer, 2018.
  • [22]. Weinstein John N, Collisson Eric A, Mills Gordon B, Shaw Kenna R Mills, Ozenberger Brad A, Ellrott Kyle, Shmulevich Ilya, Sander Chris, Stuart Joshua M, Cancer Genome Atlas Research Network, et al., "The Cancer Genome Atlas pan-cancer analysis project," Nature Genetics, vol. 45, no. 10, p. 1113, 2013.
  • [23]. Ronneberger Olaf, Fischer Philipp, and Brox Thomas, "U-Net: Convolutional networks for biomedical image segmentation," in International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2015, pp. 234–241.
  • [24]. He Kaiming, Zhang Xiangyu, Ren Shaoqing, and Sun Jian, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770–778.
  • [25]. Zhou Zongwei, Siddiquee Md Mahfuzur Rahman, Tajbakhsh Nima, and Liang Jianming, "UNet++: A nested U-Net architecture for medical image segmentation," in Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, pp. 3–11. Springer, 2018.
  • [26]. Vesal Sulaiman, Patil Shreyas Malakarjun, Ravikumar Nishant, and Maier Andreas K, "A multi-task framework for skin lesion detection and segmentation," in OR 2.0 Context-Aware Operating Theaters, Computer Assisted Robotic Endoscopy, Clinical Image-Based Procedures, and Skin Image Analysis, pp. 285–293. Springer, 2018.
  • [27]. Tong Li, Wu Hang, and Wang May D, "CAESNet: Convolutional autoencoder based semi-supervised network for improving multiclass classification of endomicroscopic images," Journal of the American Medical Informatics Association, vol. 26, no. 11, pp. 1286–1296, 2019.
  • [28]. Vizcarra Juan, Place Ryan, Tong Li, Gutman David, and Wang May Dongmei, "Fusion in breast cancer histology classification," in Proceedings of the 10th ACM International Conference on Bioinformatics, Computational Biology and Health Informatics, 2019, pp. 485–493.
