Abstract
The detection, counting, and precise segmentation of white blood cells in cytological images are vital steps in the effective diagnosis of several cancers. This paper introduces an efficient deep-learning-based method for the automatic recognition of white blood cells in peripheral blood and bone marrow images, designed to alleviate tedious tasks for hematologists in clinical practice. First, input image pre-processing was applied before a deep neural network model adapted to cell localization and segmentation. Then, the model outputs were improved using combined predictions and corrections. Finally, a new algorithm that exploits the cooperation between model results and spatial information was implemented to improve the segmentation quality. The model was implemented with the Python language and the TensorFlow and Keras libraries, and the calculations were executed on an NVIDIA GTX 1080 GPU, while the datasets used in our experiments came from patients of the Hemobiology service of Tlemcen Hospital (Algeria). The results were promising and showed the efficiency, power, and speed of the proposed method compared to state-of-the-art methods. In addition to its accuracy of 95.73%, the proposed approach provided fast predictions (less than 1 s).
Keywords: Deep learning, White blood cells, Image segmentation, Classification, Mask RCNN, Object detection
Introduction
In cytopathology analysis, the morphology of white blood cells and the counting of instances in cell slides can have medical significance for the diagnosis of several cancer diseases [1, 2]. However, the manual detection and segmentation of these structures are costly, tedious, and time-consuming tasks, especially given the variability of structures and the overlap between objects in images. Therefore, automatic detection and segmentation techniques can help hematologists in clinical practice expedite treatment and improve efficiency and reliability.
Historically, automatic detection and segmentation in cytological images have been difficult, mainly because of the high variation between cell features. Some methods, such as [3], used an optimized pixel-based classification approach divided into two steps. The first step uses a classifier (Decision Tree or Random Forest) to classify each pixel of the image according to color-space features. The second step is the segmentation, which starts with the detection of points of interest and then groups the pixels of the same class by a region-growing method. Following this work, Saidi et al. [4] improved this algorithm by adding a feature selection step to reduce the computation time while keeping the performance of the previous method; their selection method, based on ensemble methods, is called Ensemble Margin Instance Selection (EMIS). Another work used classical methods for the segmentation of four types of blood cell regions in cytological images (nuclei, cytoplasm, white blood cells, and background): in [5], the authors proposed a supervised classification of pixels to segment the cells, using feature selection methods such as ReliefF, LDA, RFE, and mRMR, followed by classification with the Support Vector Machine (SVM). Baghli et al. [6] proposed a blood cell segmentation method based on the Evidential Segmentation Algorithm (ESA), whose main idea is to extract the components of a cell image using evidence theory. These works and others, such as [7], have conducted intensive research and obtained remarkable improvements in the automatic segmentation of cytological images. However, cell features are not standardized and differ from one cell to another, so most of the proposed approaches are limited by their representational capacity and are sensitive to large variations in shape and color. Moreover, these methods separate feature extraction from classification, which is detrimental in terms of performance and cost, and their pixel-based classification makes these semi-automatic methods very slow.
For these problems, deep learning methods, which scale to large datasets, offer a potential solution. Indeed, deep neural networks, whose strength lies in their ability to learn feature representations, have shown considerable efficiency in image detection and segmentation [8–14]. These methods automate the feature extraction so that it is invariant and insensitive to changes such as rotation, translation, and deformation. They use convolutional neural networks (CNNs) to automatically extract the different features of an image at several levels of abstraction, and their architecture is inspired by the organization of the human visual cortex. During training, the convolution layers adjust their initial weights automatically using the back-propagation algorithm.
Recent studies have shown the effectiveness of CNNs over conventional methods in diagnostic aid. In [15], the authors showed that a CNN performed better (71.80%, 84.23%) than random forests (67.53%, 78.74%). In the field of microscopic image segmentation, a novel network targeting simultaneous segmentation and classification was proposed in [16]; it uses horizontal and vertical distance maps to separate clustered objects. Another method, proposed in [17], integrates object detection into instance segmentation: a fully convolutional network (FCN) performs localization, and another CNN is then trained to predict the mask of each detected box. Following this box-based idea, which has proven effective for segmentation according to [18], the authors of [19] proposed a method that combines a keypoint-based detector with individual cell segmentation and showed superior cell segmentation compared to other methods. Similarly, He et al. [20] proposed a method called Mask RCNN, which extends the classical object detection method Faster RCNN [21] by adding a binary mask to the network output for each instance, with the goal of detecting, classifying, and segmenting all objects in the image. These methods give effective results thanks to their global understanding of the objects and solve some problems, such as overlapping cells. However, existing methods need a large amount of annotated images and a good choice of initial weights for proper generalization, and insufficient data leads to lower instance segmentation performance; post-processing is therefore needed for better segmentation quality. In this paper, we propose a deep-learning-based approach to address the automatic recognition problem in cytological images. The differentiation between white blood cells (leukocytes) and the extraction of a set of quantitative measures in microscopic images of bone marrow and peripheral blood make it possible to establish accurate diagnoses of several cancers, including leukemia and myeloma. For these purposes, we are interested in the segmentation of the cytoplasm and nuclei, which are clinically very important.
Methods
This paper proposes an efficient approach for the segmentation of objects in cytological images. The objective was to train a deep neural network on the study's dataset so that it recognizes nuclei and cytoplasm. The proposed approach (Fig. 1) has four fundamental steps: data preparation, pre-processing, model training, and post-processing. In the first two steps, methods and techniques were applied to the dataset to enable its use with the training algorithm; these two steps have a direct impact on the final results. For example, a small error in the preparation or pre-processing steps may produce false results and lead to a poorly trained model. In the third step, we propose improvements to the Mask RCNN algorithm, developed by Kaiming He and his research group at Facebook in March 2017 [20]. The improvement process consisted of adjusting and adding hyper-parameters based on the training needs, and of testing different architectures to choose the best one for our datasets. Then, the model outputs were improved by combining predictions and applying processing methods to the predicted image and mask. Finally, a new algorithm that uses a cooperation between the outputs was proposed to improve the segmentation quality. Experiments were conducted using the Python language and the TensorFlow library, while the calculations were executed on an Nvidia GTX 1080 graphics processing unit (Nvidia GPU GTX 1080). The deep neural networks were trained using the Adam optimization algorithm.
Fig. 1.
Block diagram of the proposed approach for the automatic recognition of white blood cells in cytological images
We used the classification rate to evaluate the classification performance and the intersection over union (IoU) as the evaluation criterion for the detection and the segmentation. The evaluation used in the comparison was based on the precision and the F-score, which measure the overlap between the segmented area and the ground truth:

$$\text{Precision} = \frac{TP}{TP + FP}, \qquad \text{Recall} = \frac{TP}{TP + FN}, \qquad \text{F-score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$$

with:
TP (True Positive): positive regions classified as positive.
FP (False Positive): negative regions classified as positive.
FN (False Negative): positive regions classified as negative.
The F-score is the harmonic mean of the precision and recall. A high F-score value demonstrates the pertinence of the technique.
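For concreteness, here is a minimal NumPy sketch (our own illustration, not the authors' evaluation code) that computes these pixel-wise quantities from a predicted binary mask and its ground truth:

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Pixel-wise precision, recall, F-score, and IoU for binary masks.
    Assumes both masks contain at least one positive pixel."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # positive pixels predicted positive
    fp = np.logical_and(pred, ~gt).sum()   # negative pixels predicted positive
    fn = np.logical_and(~pred, gt).sum()   # positive pixels predicted negative
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    iou = tp / (tp + fp + fn)              # intersection over union
    return precision, recall, f_score, iou
```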
Database description
The cytological images used in these experiments were collected from patients of the Hemobiology service of the Hospital of Tlemcen "CHU Tlemcen" [5]. The acquisition system consisted of a camera and a high-resolution microscope that enabled the acquisition of 24-bit RGB color images, which were saved to a hard disk in bitmap format. The dataset contains 145 labeled cells in 87 images. Hospital specialists identified the different cell types present in each image, both for model training and for evaluating the segmentation and detection results. The dataset contains five types of leukocytes, with 49 normal cells, 24 dystrophic cells, and 72 other cells according to the annotation of the cytologists.
In our experiments, we are mainly interested in the detection and segmentation of all types of WBCs, including pathological cases. We randomly chose 70% of the images for training and kept the rest for validation and testing.
Data preparation
First, all the domain-expert annotations of each image were semi-automatically converted into binary masks using image processing tools. Then, the annotations were standardized into an XML format and stored in separate files. Each XML file contains the characteristics of an image, such as its title, size, path, and the number of objects in the image, including their positions and corresponding classes. This information was used to load the images and their masks as input for network training.
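The exact XML schema is not given in the paper; the sketch below assumes a Pascal-VOC-like layout, where tag names such as `object`, `bndbox`, and `mask_path` are our own illustrative choices, to show how such annotation files could be loaded for training:

```python
import xml.etree.ElementTree as ET

def load_annotation(xml_path):
    """Parse one annotation file into image metadata and object records.
    Tag names below are illustrative assumptions, not the authors' schema."""
    root = ET.parse(xml_path).getroot()
    image_info = {
        "path": root.findtext("path"),
        "width": int(root.findtext("size/width")),
        "height": int(root.findtext("size/height")),
    }
    objects = []
    for obj in root.findall("object"):
        box = obj.find("bndbox")
        objects.append({
            "class": obj.findtext("name"),           # e.g. "nucleus" or "cytoplasm"
            "mask_path": obj.findtext("mask_path"),  # binary mask from data preparation
            "bbox": [int(box.findtext(t)) for t in ("xmin", "ymin", "xmax", "ymax")],
        })
    return image_info, objects
```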
Pre-processing
Normalization and filled holes
The color variation in the images was a significant complication for segmentation. This variation results from the staining process and the quality of the sensor used during image acquisition. To improve cell segmentation and account for these variations, color normalization, performed by subtracting the mean and dividing by the standard deviation, was necessary. This process gives the normalized image a zero mean and a unit standard deviation and eliminates variations in luminosity and contrast.
The masks produced in the first step can have holes inside them, which can complicate the training of the CNN model weights, so it is necessary to fill them. For that, we used two morphological operations (dilation followed by erosion).
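A minimal sketch of these two pre-processing operations, assuming NumPy and OpenCV (the kernel size is an illustrative choice):

```python
import cv2
import numpy as np

def normalize_image(image):
    """Zero-mean, unit-variance color normalization per channel."""
    image = image.astype(np.float32)
    return (image - image.mean(axis=(0, 1))) / (image.std(axis=(0, 1)) + 1e-8)

def fill_mask_holes(mask, kernel_size=5):
    """Dilation followed by erosion (a morphological closing) to fill small holes."""
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    dilated = cv2.dilate(mask.astype(np.uint8), kernel)
    return cv2.erode(dilated, kernel)
```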
Data augmentation
Random transformations of the input images were used to augment the database so that the network becomes robust to new data and small variations, which enables more effective generalization from a small database. Several shape and color transformations were performed during the training phase. Each transformation had a probability between 0 and 1, so an image may undergo more than one transformation at the same time.
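The exact transformations are not listed in the paper; a plausible sketch with the imgaug library, where the chosen operations and probabilities are our own assumptions, would be:

```python
import imgaug.augmenters as iaa

# Each augmenter fires independently with the given probability,
# so one image can receive several transformations at once.
augmentation = iaa.Sequential([
    iaa.Fliplr(0.5),                                   # horizontal flip, p = 0.5
    iaa.Flipud(0.5),                                   # vertical flip, p = 0.5
    iaa.Sometimes(0.3, iaa.Affine(rotate=(-45, 45))),  # random rotation
    iaa.Sometimes(0.3, iaa.Multiply((0.8, 1.2))),      # brightness variation
])
```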
Training model
The Mask RCNN training consisted of two steps. First, a region proposal network (RPN) [22] proposes a bounding box for each candidate object. Second, features are extracted from each candidate bounding box using RoIAlign [20]. Note that the RoIAlign layer is a feature extraction operation for each region of interest (RoI) that provides a correct alignment between the RoI and the extracted features, which leads to a significant improvement in predicted mask accuracy [20]. ResNet50 was used as the backbone architecture with pre-trained initial weights. Several works, such as [12, 23–25], showed that transfer learning may be better than random weight initialization and may accelerate training, even across different problems. Many models pre-trained on well-known datasets are available and can be used to initialize training.
In the current study, the adaptation of the model hyper-parameters was crucial for improving detection and segmentation accuracy and enabled the generalization of the system without overfitting. We used small region-proposal anchor sizes in the FPN because most nuclei and cytoplasm regions are small and can be found anywhere in the image. For the same reason, we increased the number of anchors during training and adjusted other hyper-parameters that boost the ability to detect the limits of small objects, as sketched below.
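The exact configuration is not published in the paper; assuming the widely used Matterport Keras implementation of Mask RCNN, a configuration in this spirit (all specific values below are our assumptions, not the authors') might look like:

```python
from mrcnn.config import Config

class WBCConfig(Config):
    """Illustrative Mask RCNN configuration for small nucleus/cytoplasm objects."""
    NAME = "wbc"
    BACKBONE = "resnet50"                     # backbone reported in the paper
    NUM_CLASSES = 1 + 2                       # background + nucleus + cytoplasm
    RPN_ANCHOR_SCALES = (8, 16, 32, 64, 128)  # smaller anchors for small cells
    RPN_TRAIN_ANCHORS_PER_IMAGE = 512         # more anchors during training
    TRAIN_ROIS_PER_IMAGE = 256
    DETECTION_MIN_CONFIDENCE = 0.9            # matches the 90% score cut-off
    IMAGES_PER_GPU = 1
```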
During the training of a convolutional neural network, there comes a point when the model starts to over-adapt to the training data, causing an increase in the validation error. This overfitting of the training dataset makes the model unable to make good predictions on new data. As a solution, we propose a regularization technique that stops the training at the moment when the performance on the validation set starts to degrade. This simple and effective solution, called early stopping [26], also allows the use of a smaller network.
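In Keras, early stopping is available as a built-in callback; a minimal sketch (the patience value is an illustrative choice):

```python
from tensorflow.keras.callbacks import EarlyStopping

# Stop when the validation loss has not improved for `patience` epochs,
# and roll back to the weights of the best epoch.
early_stopping = EarlyStopping(
    monitor="val_loss",
    patience=10,               # illustrative value
    restore_best_weights=True,
)
# Passed to the training loop, e.g. model.fit(..., callbacks=[early_stopping])
```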
As illustrated in Fig. 1, the training model output consists of three parallel predictions: the classification (Label Class), the detection (Label Bounding Box), and the segmentation output (Label Mask), which contains a binary mask for each detected object. These three outputs were used in the final post-processing phase.
Post-processing
After training the network, the majority of objects in the image were detected and classified correctly; the main remaining problem was the poor quality of the segmentation produced by the model. For this reason, we applied post-processing to the model output. The first idea is to combine the prediction of the original image with the predictions of its transformed versions, keeping only the points of the mask with high overlap. Then, we use a dilation followed by an erosion to fill the small holes in the predicted masks. Finally, we delete detected objects that have a low score (less than 90%).
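A minimal sketch of this combination step, assuming a Matterport-style `model.detect` API and a single horizontal-flip transformation (the agreement rule and the placement of the 90% threshold are our reading of the text):

```python
import numpy as np

def combined_prediction(model, image, min_score=0.9):
    """Combine predictions on the original and horizontally flipped image,
    keeping only mask pixels on which both predictions agree."""
    r1 = model.detect([image])[0]
    r2 = model.detect([np.fliplr(image)])[0]
    # Union of instance masks per prediction; un-flip the second one.
    m1 = r1["masks"].any(axis=-1)
    m2 = np.fliplr(r2["masks"].any(axis=-1))
    agreed = np.logical_and(m1, m2)       # high-overlap pixels only
    keep = r1["scores"] >= min_score      # drop low-score detections
    return agreed, r1["rois"][keep], r1["class_ids"][keep]
```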
Even with all these improvements to the model output, some objects may still not achieve a good segmentation quality. This led us to propose a new method to improve the segmentation quality, which we call LBLM (Label Bounding box and Label Mask). The main idea is to combine the model outputs (predicted mask and bounding box) with the spatial information of the input image. Starting from the center of a candidate bounding box or of the existing mask, we compute the color variation between the center and its neighboring pixels inside the bounding box. If the variation is less than a fixed threshold, we add the pixel to the new mask. Our LBLM method is described in detail in Algorithm 1.
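Since Algorithm 1 is not reproduced here, the following is a minimal sketch of the described region-growing idea under our own assumptions (Euclidean RGB distance as the color-variation measure and 4-connected breadth-first growth):

```python
from collections import deque
import numpy as np

def lblm_mask(image, bbox, threshold):
    """Grow a mask from the bounding-box center, adding 4-connected pixels
    whose color stays within `threshold` of the seed color."""
    x1, y1, x2, y2 = bbox
    cy, cx = (y1 + y2) // 2, (x1 + x2) // 2        # seed: center of the box
    seed = image[cy, cx].astype(np.float32)
    mask = np.zeros(image.shape[:2], dtype=bool)
    queue = deque([(cy, cx)])
    mask[cy, cx] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            # First stopping criterion: color variation w.r.t. the seed.
            # Second stopping criterion: the predicted bounding-box limits.
            if (y1 <= ny < y2 and x1 <= nx < x2 and not mask[ny, nx]
                    and np.linalg.norm(image[ny, nx] - seed) < threshold):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```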
Results
We performed several experiments using different architectures (ResNet50, ResNet101, VGG16, VGG19, and Inception V3); the best results were obtained with the ResNet50 architecture and the Adam optimizer. Table 1 compares the performance of the original Mask RCNN and our approach. It summarizes the classification, detection, and segmentation performance on all the training/test images compared with the ground-truth images for the two instance types: cytoplasm and nuclei.
Table 1.
Comparison of learning and test performances between the original Mask RCNN and our approach
| | Mask RCNN | Our approach |
|---|---|---|
| **Training** | | |
| Classification | 0.9544 | **0.9915** |
| Detection | 0.9237 | **0.9826** |
| Segmentation | 0.8945 | **0.9630** |
| **Test** | | |
| Classification | 0.9219 | **0.9804** |
| Detection | 0.8603 | **0.9781** |
| Segmentation | 0.7966 | **0.9573** |
Bold numbers represent the best results
The execution time in the training and testing phases varied according to the number of regions of interest processed per image; it was between 500 and 900 ms per image.
For a better evaluation of our proposed approach, we compared it with the methods reported in [3, 5, 6, 16, 19, 27]. The comparison was made on the same database for nucleus and cytoplasm segmentation. For the configuration of the deep-learning-based methods, we used the default parameters implemented by the authors of the papers. A detailed performance comparison is given in Table 2.
Table 2.
Comparison of the performance of the segmentation between the proposed approach and the previous methods on the same dataset
| Methods | Precision (%) | F-score | Time/image |
|---|---|---|---|
| Benazzouz et al. [5] | 59.000 | 0.7400 | 138 s |
| Baghli et al. [6] | 87.360 | 0.9325 | 5 s |
| Settouti et al. [3] | **98.135** | 0.9257 | 30–60 s |
| Chen et al. [27] | 86.920 | 0.8316 | 2–3 s |
| Graham et al. [16] | 87.141 | 0.8488 | 2 s |
| Yi et al. [19] | 85.210 | 0.8925 | < 1 s |
| Our approach | 95.730 | **0.9706** | **500–900 ms** |
Bold numbers represent the best results
Discussion
The results in Table 1 demonstrate the influence of the chosen hyper-parameters and of the post-processing on the model's performance, mainly in the test phase. The improvements in the post-processing phase enhanced the prediction quality, and the regularization methods dealt well with the overfitting problem and facilitated generalization. Figure 2 shows the effectiveness of these techniques: a trigger stops the training at the point where the validation performance starts to degrade.
Fig. 2.
Training and validation loss
For better visibility, we randomly selected images from our test dataset to perform a visual comparison and discuss the performance and segmentation quality of the expert (ground truth), the original Mask RCNN algorithm, and our proposed approach (see Fig. 3). Our model is able to detect smaller objects thanks to the small RPN anchor sizes used. Another advantage of the proposed approach is its ability to correctly separate regions that have the same properties, as shown in Fig. 3. We can also see that the overlapping cells were well recognized in Fig. 3 (image 1), whereas the original model did not perform well on the identification and segmentation of nuclei and cytoplasm.
Fig. 3.
Results of automatic segmentation, detection, and classification by the original model and our proposed approach
The methods proposed in the post-processing give clear edge detection on the initial images along with good segmentation results.
These results enable us to affirm the effectiveness of our proposed approach and confirm the rigor and robustness of deep learning methods for the detection of nuclei and cytoplasm.
While applying our approach to cytological data, we noticed some disadvantages of this algorithm. The efficiency of the post-processing algorithm depends on the initial position of the seed predicted by the model, as well as on the choice of the stopping criteria, which influences the segmentation quality. If the first criterion (the threshold) is very small, the object will be under-segmented; if the threshold value is very large, the segmentation will depend on the second criterion, which is the limits of the bounding box predicted by the model.
We solved some of these problems by removing predictions with a low score (less than 90%) and by using a larger threshold to ensure that the segmentation reaches the boundaries of the object. The most serious remaining problem is that the second stopping criterion depends on the prediction of the model, and the model depends on the type, nature, and quantity of available data. One solution proposed for future work is to enrich the dataset by clarifying the type and nature of each collected cell. In practice, it would be very difficult to collect the same number of cells of each existing type, so the database would remain imbalanced, leading to poor generalization. We therefore suggest introducing balancing methods in future work to address these problems.
We compared our results with conventional methods on the same database (Table 2). Our method gave faster results and a better F-score than the classical studies (methods using feature engineering), which demonstrates the efficiency and speed of deep learning for the detection of nuclei and cytoplasm, although the studies in [3, 4] provided higher precision. This means that our model has a high capacity to detect all pertinent objects, so it is more efficient than the traditional models, which can be very useful for blood diagnosis problems. The other comparison was made with methods based on deep learning [16, 19, 27]. The first observation from the results table is that the work of Yi et al. [19] gives better results than [16, 27]. This can be explained by the fact that this method integrates object detection into the segmentation: it first locates the cells and then segments each object delimited by the bounding boxes, which makes it possible to separate the cells according to global characteristics rather than the individual information of each pixel; this is also the case for our approach. The second observation is that our model performs better than the models used in [16, 19, 27]. This can be explained by the post-processing (LBLM method), which improves the segmentation quality, and by the fact that our approach gives the best results when using a small database.
This paper reports important results achieved through deep learning methods. The effectiveness of the proposed approach on this type of dataset is a good indication of its applicability and of the support it can offer physicians in the analysis of cytology images. Hematology analyzers (or hematology automatons), which perform the qualitative analysis of blood cells, are very expensive on the market. Based on our discussions with physicians at the Hospital (CHU) of Tlemcen, equipping the standard microscopes available at the Hospital's Hematology service with a camera for data acquisition and analyzing the obtained images with our model would give them accurate results in record time and thus alleviate tedious tasks for hematologists in clinical practice. The major advantage of our approach is its high speed of response compared to other methods: less than a second to get a reliable and accurate answer, which is ideal for an expert. This algorithm can be integrated into a web application deployed on a server and used by several doctors at the same time, on a local network or in the cloud.
Conclusion
In this paper, we proposed an intelligent method based on deep learning for the automatic recognition of nucleus and cytoplasm regions in cytological images to help experts in medical diagnosis. The objective is to automatically detect each object in the image and classify it as a nucleus or a cytoplasm while producing a binary mask to perform the segmentation. Different techniques of regularization, transfer learning, and data augmentation were used to avoid the overfitting problem while minimizing the training time and helping the model generalize better. The main contribution of this paper is the use of a combination of deep learning model outputs to increase the segmentation quality. The model architecture was properly adapted to successfully detect all cells in the input image. To improve the results further, we proposed a new algorithm based on the cooperation between the outputs and the spatial information of the input image.
Our results were very promising and encouraging, showing a large improvement in precision and computation time for both segmentation and classification compared to previous models. This work opens a research avenue in the field of the automatic recognition of microscopic data using deep learning methods. We are currently working on a project with two directions: first, we will collect new cytological images and label them (benefiting from the expertise of cytologist doctors) to clearly define pathological cases of leukemia. Second, we will implement the proposed approach, introducing balancing methods, in a diagnosis aid system to help doctors count the different types of cells and accurately detect pathological cases.
Compliance with ethical standards
Conflict of interest
The authors declare that they have no conflict of interest.
Ethical approval
This article does not contain any studies with human participants or animals performed by any of the authors.
Footnotes
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Contributor Information
Amin Khouani, Email: amin.khouani@univ-tlemcen.dz.
Mostafa El Habib Daho, Email: mostafa.elhabibdaho@univ-tlemcen.dz.
Sidi Ahmed Mahmoudi, Email: Sidi.MAHMOUDI@umons.ac.be.
Mohammed Amine Chikh, Email: mohammedamine.chikh@univ-tlemcen.dz.
Brahim Benzineb, Email: benzineb.brahim@yahoo.fr.
References
1. Naik S, Doyle S, Agner S, Madabhushi A, Feldman M, Tomaszewski J. Automated gland and nuclei segmentation for grading of prostate and breast cancer histopathology. In: 5th IEEE international symposium on biomedical imaging: from nano to macro (ISBI 2008). IEEE; 2008. p. 284–87.
2. Xu J, Xiang L, Liu Q, Gilmore H, Wu J, Tang J, Madabhushi A. Stacked sparse autoencoder (SSAE) for nuclei detection on breast cancer histopathology images. IEEE Trans Med Imaging. 2016;35(1):119–130. doi: 10.1109/TMI.2015.2458702.
3. Settouti N, Bechar MEA, Daho MEH, Chikh MA. An optimised pixel-based classification approach for automatic white blood cells segmentation. Int J Biomed Eng Technol. 2020;32(2):144–160.
4. Saidi M, Bechar MEA, Settouti N, Chikh MA. Instances selection algorithm by ensemble margin. J Exp Theor Artif Intell. 2018;30(3):457–478.
5. Benazzouz M, Baghli I, Benomar A, Ammar M, Benmouna Y, Chikh M. Evidential segmentation scheme of bone marrow images. Adv Image Video Process. 2016;4(1):37.
6. Baghli I, Nakib A, Sellam E, Benazzouz M, Chikh A, Petit E. Hybrid framework based on evidence theory for blood cell image segmentation. In: Medical imaging 2014: biomedical applications in molecular, structural, and functional imaging, vol. 9038. International Society for Optics and Photonics; 2014. p. 903815.
7. Benomar ML, Chikh MA, Descombes X, Benazzouz M. Multi features based approach for white blood cells segmentation and classification in peripheral blood and bone marrow images. Int J Biomed Eng Technol. 2019.
8. Zhang Z, Luo P, Loy CC, Tang X. Facial landmark detection by deep multi-task learning. In: European conference on computer vision. Springer; 2014. p. 94–108.
9. Dhungel N, Carneiro G, Bradley AP. Deep learning and structured prediction for the segmentation of mass in mammograms. In: International conference on medical image computing and computer-assisted intervention. Springer; 2015. p. 605–12.
10. Ronneberger O, Fischer P, Brox T. U-net: convolutional networks for biomedical image segmentation. In: International conference on medical image computing and computer-assisted intervention. Springer; 2015. p. 234–41.
11. Roth HR, Lu L, Farag A, Shin H-C, Liu J, Turkbey EB, Summers RM. Deeporgan: multi-level deep convolutional networks for automated pancreas segmentation. In: International conference on medical image computing and computer-assisted intervention. Springer; 2015. p. 556–64.
12. Chen H, Dou Q, Ni D, Cheng J-Z, Qin J, Li S, Heng P-A. Automatic fetal ultrasound standard plane detection using knowledge transferred recurrent neural networks. In: International conference on medical image computing and computer-assisted intervention. Springer; 2015. p. 507–14.
13. Dou Q, Chen H, Jin Y, Yu L, Qin J, Heng P-A. 3d deeply supervised network for automatic liver segmentation from CT volumes. In: International conference on medical image computing and computer-assisted intervention. Springer; 2016. p. 149–57.
14. Dou Q, Chen H, Yu L, Zhao L, Qin J, Wang D, Mok VC, Shi L, Heng P-A. Automatic detection of cerebral microbleeds from MR images via 3D convolutional neural networks. IEEE Trans Med Imaging. 2016;35(5):1182–1195. doi: 10.1109/TMI.2016.2528129.
15. Żejmo M, Kowal M, Korbicz J, Monczak R. Classification of breast cancer cytological specimen using convolutional neural network. J Phys Conf Ser. 2017;783:012060.
16. Graham S, Vu QD, Raza SEA, Azam A, Tsang YW, Kwak JT, Rajpoot N. Hover-net: simultaneous segmentation and classification of nuclei in multi-tissue histology images. Med Image Anal. 2019;58:101563. doi: 10.1016/j.media.2019.101563.
17. Akram SU, Kannala J, Eklund L, Heikkilä J. Cell segmentation proposal network for microscopy image analysis. In: Deep learning and data labeling for medical applications. Springer; 2016. p. 21–9.
18. Dai J, He K, Sun J. Instance-aware semantic segmentation via multi-task network cascades. In: Proceedings of the IEEE conference on computer vision and pattern recognition, 2016. p. 3150–8.
19. Yi J, Wu P, Huang Q, Qu H, Liu B, Hoeppner DJ, Metaxas DN. Multi-scale cell instance segmentation with keypoint graph based bounding boxes. In: International conference on medical image computing and computer-assisted intervention. Springer; 2019. p. 369–77.
20. He K, Gkioxari G, Dollár P, Girshick R. Mask R-CNN. In: 2017 IEEE international conference on computer vision (ICCV). IEEE; 2017. p. 2980–8.
21. Ren S, He K, Girshick R, Sun J. Faster R-CNN: towards real-time object detection with region proposal networks. In: Advances in neural information processing systems, 2015. p. 91–9.
22. Lin T-Y, Dollár P, Girshick R, He K, Hariharan B, Belongie S. Feature pyramid networks for object detection. In: CVPR, vol. 1; 2017. p. 4.
23. Yosinski J, Clune J, Bengio Y, Lipson H. How transferable are features in deep neural networks? In: Advances in neural information processing systems, 2014. p. 3320–8.
24. Tajbakhsh N, Shin JY, Gurudu SR, Hurst RT, Kendall CB, Gotway MB, Liang J. Convolutional neural networks for medical image analysis: full training or fine tuning? IEEE Trans Med Imaging. 2016;35(5):1299–1312. doi: 10.1109/TMI.2016.2535302.
25. Shin H-C, Roth HR, Gao M, Lu L, Xu Z, Nogues I, Yao J, Mollura D, Summers RM. Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE Trans Med Imaging. 2016;35(5):1285–1298. doi: 10.1109/TMI.2016.2528162.
26. Prechelt L. Early stopping — but when? In: Neural networks: tricks of the trade. Lecture notes in computer science, vol. 7700. Berlin: Springer; 2012. p. 53–67. doi: 10.1007/978-3-642-35289-8_5.
27. Chen H, Qi X, Yu L, Dou Q, Qin J, Heng P-A. DCAN: deep contour-aware networks for object instance segmentation from histology images. Med Image Anal. 2017;36:135–146. doi: 10.1016/j.media.2016.11.004.