Abstract
The purpose of this research is to exploit a weak and semi-supervised deep learning framework to segment prostate cancer in TRUS images, alleviating the time-consuming work of radiologists in drawing lesion boundaries and enabling training on data that do not have complete annotations. A histologically proven benchmarking dataset of 102 case images was built, and 22 images were randomly selected for evaluation. Some portion of the training images was strongly supervised, i.e., annotated pixel by pixel, and a deep neural network was trained on these images. The rest of the training images, carrying only weak supervision (the location of the lesion), were fed to the trained network to produce intermediate pixelwise labels. We then retrained the network on all training images with the original and intermediate labels, and fed the training images to the retrained network to produce refined labels. Comparing the distances from the centers of mass of the refined and intermediate labels to the weak supervision location, the closer one replaced the previous label; we call this the label update. After the label updates, the test set images were fed to the retrained network for evaluation. The proposed method shows better results with weak and semi-supervised data than a method using only the small portion of strongly supervised data, although the improvement may not be as large as when a fully strongly supervised dataset is used. In terms of mean intersection over union (mIoU), the proposed method reached about 0.6 when the ratio of strongly supervised data was 40%, about 2% lower than the performance of the 100% strongly supervised case. The proposed method thus seems able to alleviate the time-consuming work of radiologists in drawing lesion boundaries and to allow training a neural network on data that do not have complete annotations.
Keywords: Deep learning, Weak and semi-supervision, TRUS, Prostate cancer, Segmentation
Introduction
Prostate cancer is the second most common male cancer in the United States [1, 2]. Early detection and improved treatment of prostate cancer are postulated to be important to reduce the associated morbidity and mortality [3]. Different types of diagnostics, such as DRE (digital rectal examination) and PSA (prostate-specific antigen) testing, are employed to detect prostate cancer at an early stage [4]. However, DRE is known to miss small tumors, and PSA values depend on several factors that are not caused only by prostate cancer [5]. Imaging methods such as MRI (magnetic resonance imaging) and TRUS (transrectal ultrasonography) have therefore been used to make a more reliable diagnosis. TRUS is known to be the most important tool for diagnosing prostate cancer by guiding prostate biopsies; targeted biopsy of TRUS-visible lesions roughly doubled the detection rate compared with non-targeted biopsies [6, 7]. Although MRI-targeted biopsy is known to have a higher detection rate of clinically significant cancer, overall prostate cancer detection with TRUS did not differ from that with MRI [8]. While TRUS is a widely used imaging method, it has a relatively low predictive value in detecting cancer [9] because of considerable overlap between benign and malignant lesion characteristics, and its predictive ability is highly dependent on the radiologist's interpretation. To improve TRUS and reduce this dependence, an automatic process for segmenting prostate cancer in a TRUS image is helpful: TRUS works in real time, so the system can give the radiologist useful information in real time, supporting better decisions. To train a model that segments prostate cancer automatically, however, pixel-by-pixel annotation of every training image is required, which is very time-consuming for radiologists.
To alleviate the time-consuming work of radiologists in drawing lesion boundaries, and to train a neural network on data that do not have complete annotations, we suggest a weak and semi-supervised deep learning framework for prostate cancer segmentation in this research. With this framework, the radiologist only needs to roughly mark the suspicious area on the ultrasound image. Such automation can save radiologists the time routinely spent visually inspecting and annotating TRUS images and can improve learning efficiency.
For the early detection of prostate cancer, studies using conventional methods have been published on computer-aided diagnosis of prostate cancer [5, 10–14]. As deep learning has recently become popular in the medical imaging field, deep learning–based methods employing various modalities have also been published [15–19], including several for prostate cancer detection [20–22]. Regarding approaches that combine image-level annotated data with pixelwise annotated data, several studies have been introduced [23–29]. In Refs. [23–25], different losses were employed to adapt to the varying characteristics of each image. In Ref. [27], a neural network was first trained on pixelwise annotated breast cancer images; images with only image-level annotation were then fed into the trained network to produce region-level classification results, which are similar to pixelwise classification. FickleNet by Lee et al. [28] implicitly learns the coherence of each location in the feature maps, yielding an ensemble classification of each part to obtain precise region boundaries from image-level annotated images. Souly et al. [29] used image-level annotation to generate adversarial images and applied the generated images to enhance a classification network for object segmentation. Previous studies considered image-level annotation as the weak supervision. In this research, we instead regard the estimated position of the cancer as the weak supervision, which is not too tedious for the radiologist to mark. The location, rather than an image-level label, is very important in the inspection of cancer, so we utilized the estimated location of the cancer, from which we could obtain pixelwise annotated images.
Methods and Materials
We illustrated the overview of the proposed method in Fig. 1.
Fig. 1.
The overview of the proposed method
In this research, the strong supervision denotes the pixelwise annotation, and the weak supervision indicates the position of the cancer. Some of the training set images had the strong supervision, i.e., pixelwise annotation; for the rest of the training set images, we had only the cancer position. Based on the training set with the strong supervision, we trained a CNN (convolutional neural network) for segmentation, after which the CNN could predict and segment the lesions. We fed all the weakly supervised training images (carrying only the position of the center of mass) to the trained CNN to obtain what can be considered an intermediate prediction. This is called the 1st step. If the center of mass of the intermediate prediction was close to the weak supervision location, the intermediate prediction was adopted as the intermediate label. With the intermediate labels, we trained the CNN again. This is called the 2nd step.
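The two-step scheme above can be sketched in a few lines of Python. This is our own illustration, not the paper's code: `train_fn` and `predict_fn` are hypothetical stand-ins for the actual CNN training and inference routines, and `max_dist` is an assumed acceptance threshold in pixels (the paper only states that the predicted center of mass must be "close" to the weak supervision location).

```python
import numpy as np

def center_of_mass(mask):
    """Mean (x, y) coordinate of the positive pixels in a binary mask."""
    ys, xs = np.nonzero(mask)
    return np.array([xs.mean(), ys.mean()])

def two_step_training(strong_imgs, strong_labels, weak_imgs, weak_locs,
                      train_fn, predict_fn, max_dist=32.0):
    """Sketch of the two-step scheme. train_fn(images, labels) returns a
    trained model; predict_fn(model, images) returns binary masks. Both are
    hypothetical stand-ins for the real CNN code."""
    # 1st step: train on the strongly supervised subset, then predict
    # intermediate labels for the weakly supervised images.
    model = train_fn(strong_imgs, strong_labels)
    intermediate = predict_fn(model, weak_imgs)
    # Keep only predictions whose center of mass lies near the weak location.
    accepted = [i for i, m in enumerate(intermediate)
                if m.any() and
                np.linalg.norm(center_of_mass(m) - weak_locs[i]) <= max_dist]
    # 2nd step: retrain on all images, pairing the accepted intermediate
    # labels with the original strong labels.
    all_imgs = list(strong_imgs) + [weak_imgs[i] for i in accepted]
    all_lbls = list(strong_labels) + [intermediate[i] for i in accepted]
    model = train_fn(all_imgs, all_lbls)
    return model, accepted
```

In this sketch, rejected intermediate predictions are simply left out of the 2nd-step training set; the label update described later refines the accepted labels further.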
Datasets
Data images were collected from Seoul National University Bundang Hospital, Gyeonggi-do, Republic of Korea. As all of the images were collected retrospectively, our institutional review board waived the requirement for informed consent. The tumors were pathologically proven. Radiologists examined all of the cancer images and, referring to pathological maps, manually drew the boundaries of the lesions and provided the locations of the lesion centers. In this research, we consider the manually drawn lesion boundary as the strong supervision and the location of the lesion center as the weak supervision, as illustrated in Fig. 2. Radiologists manually drew the lesion boundaries when the prostate lesions were sufficiently TRUS-visible to be segmented.
Fig. 2.
The concept of the strong supervision and the weak supervision
The 1st Step
At the beginning, we did not have pixelwise annotations for all training images. If we assume that only 40% of the training images have the strong supervision (pixelwise annotation), the remaining 60% have only the weak supervision, which is the estimated center of mass of the suspicious lesion. The center of mass is defined as the mean location of all estimated label pixels:
x_c = (Σ_i m_i x_i) / (Σ_i m_i),   y_c = (Σ_i m_i y_i) / (Σ_i m_i)    (1)
where m_i indicates the cancer label of pixel i, and x_i and y_i are the x and y coordinates of pixel i, respectively; m_i is 1 when pixel i belongs to the cancer region and 0 otherwise. To train a CNN for segmentation, pixelwise annotations of the weakly supervised images, as well as those of the strongly supervised images, are required. To obtain pixelwise annotations for the weak supervision data, we first trained a CNN on the 40% strong supervision data. We then fed the remaining weak supervision data to the trained CNN, which produced an estimated lesion boundary for each input image. This estimated boundary could then be refined for better performance.
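For a binary label (m_i ∈ {0, 1}), Eq. (1) reduces to the mean coordinate of the labeled pixels, which can be computed directly in NumPy. This is a minimal sketch of our own; the function name is illustrative, not from the paper:

```python
import numpy as np

def center_of_mass(label):
    """Center of mass (x_c, y_c) of a binary cancer label per Eq. (1):
    the mean x and y coordinates over all pixels i with m_i = 1."""
    ys, xs = np.nonzero(label)     # coordinates of the labeled pixels
    if xs.size == 0:
        return None                # no cancer pixels in this label
    return float(xs.mean()), float(ys.mean())

# A 5x5 mask with a 2x2 lesion: pixels (1,1), (1,2), (2,1), (2,2)
mask = np.zeros((5, 5), dtype=int)
mask[1:3, 1:3] = 1
print(center_of_mass(mask))        # -> (1.5, 1.5)
```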
The 2nd Step
After the 1st step, we obtained the estimated lesion boundary of each weakly supervised input image, which can be considered an intermediate label for segmentation. Combining the intermediate labels of the 60% weak supervision data with the labels of the 40% strong supervision data, we could prepare labels for all training images. The CNN was thus trained on all training images with labels (40% original labels and 60% intermediate labels). Before feeding the test set to the trained CNN, however, we chose the better labels for the 60% weak supervision data and refined them.
Label Update
In addition to the ground truth labels of the strongly supervised training images, labels are also required for the weakly supervised images. For these, we considered the intermediate prediction results as intermediate labels. The ground truth labels of the strong supervision data should not be altered, but the intermediate labels can be redrawn for better performance. Based on the intermediate labels and the strong supervision, the CNN was trained on all training images, after which it could predict and segment the suspicious region in the input images. To refine the intermediate labels, the training images were fed to the trained CNN again to obtain new segmentation results; this process can be repeated several times, and we call these results recursive results. Each recursive result is compared with the previous label (the intermediate prediction): if the center of mass of the recursive result is closer to the weak supervision location than that of the previous label, the recursive result replaces the previous label; otherwise, the label is not replaced. This concept is illustrated in Fig. 3. The label update can happen several times; however, this process can lead to overfitting. For example, if the label update is applied to just one image and another epoch of learning is done, the average distance between the weak supervision locations and the resulting centers of mass over all images can increase. To avoid this overfitting, we updated the labels only at the end of each training epoch, and only when the average distance between the resulting centers of mass of all images and the weak supervision locations was reduced.
Fig. 3.
The concept of the label update. If the center of mass of the prediction is closer to the weak supervision location than that of the previous label, the label is replaced with the prediction result
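The label update rule and its epoch-level overfitting guard can be sketched as follows. This is our own illustration under stated assumptions: all current labels are assumed non-empty, and the helper names are ours, not the paper's.

```python
import numpy as np

def center_of_mass(mask):
    """Mean (x, y) coordinate of the positive pixels, or None if empty."""
    ys, xs = np.nonzero(mask)
    return np.array([xs.mean(), ys.mean()]) if xs.size else None

def update_labels(labels, recursive_preds, weak_locs):
    """Epoch-end label update sketch. Per image, keep whichever mask
    (current label or new recursive prediction) has its center of mass
    closer to the weak supervision location; then commit the batch of
    updates only if the mean distance over all images decreases, which is
    the overfitting guard described in the text."""
    candidates, dist_old, dist_new = [], [], []
    for lbl, pred, loc in zip(labels, recursive_preds, weak_locs):
        d_lbl = np.linalg.norm(center_of_mass(lbl) - loc)
        com = center_of_mass(pred)
        d_pred = np.linalg.norm(com - loc) if com is not None else np.inf
        if d_pred < d_lbl:             # recursive result is closer: replace
            candidates.append(pred)
            dist_new.append(d_pred)
        else:                          # previous label stays
            candidates.append(lbl)
            dist_new.append(d_lbl)
        dist_old.append(d_lbl)
    # Overfitting guard: accept the update only if the average distance
    # over all images shrinks; otherwise keep the previous labels.
    if np.mean(dist_new) < np.mean(dist_old):
        return candidates
    return labels
```

Note that the per-image rule alone can never increase the average distance; the guard matters because, in practice, updates are considered only once per epoch rather than after every image.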
Experimental Results
We applied the ImageNet-pretrained [30] DeepLab V3-Res50 [31] for segmentation. All images were resized to 256 × 256, and among the 102 images, 22 were randomly selected as the test set. The Adam optimizer [32] was used with its default settings except for a learning rate of 0.0001. All experiments were performed on a 3.6 GHz Intel i7-7700 CPU and a GTX 1080 GPU with the TensorFlow framework [33] on Ubuntu 14.04, with a 2 TB HDD for data storage.
For each image, radiologists provided the pixel-level annotation as well as the location of the lesion. The pixel-level annotation was considered strong supervision, and the lesion location was regarded as weak supervision. Among the training images, some portion was used as strongly supervised images and the rest as weakly supervised images. We refer to training with 100% strongly supervised data as the 100% case, to training with 80% strongly supervised and 20% weakly supervised data as the 80% case, and so on. In this sense, the 100%, 80%, 60%, 40%, and 20% cases were examined and evaluated. For evaluation, mIoU was used, along with recall and precision.
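For reference, the three evaluation measures can be computed per image from the binary masks as follows. This is our own minimal sketch; the paper's mIoU is presumably this IoU averaged over the test images (and possibly over the background class as well):

```python
import numpy as np

def seg_metrics(pred, gt):
    """IoU, recall, and precision for one binary prediction/ground-truth
    mask pair, from the pixel-level confusion counts."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()     # correctly labeled cancer pixels
    fp = np.logical_and(pred, ~gt).sum()    # predicted cancer, actually not
    fn = np.logical_and(~pred, gt).sum()    # missed cancer pixels
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return iou, recall, precision
```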
A comparison between the weak and semi-supervised results and the corresponding results using only strongly supervised data is presented in Fig. 4. The red line shows the mIoU values when some portion of the data is strongly supervised and the rest is weakly supervised; in the 80% case, 80% of the training images were strongly supervised and the remaining 20% weakly supervised. The black line shows the mIoU values when only the corresponding strongly supervised data was used for training; for example, at a strong supervision ratio of 80%, the black line (strong only) indicates the result when only the 80% strong supervision data was used.
Fig. 4.

The effect of weakly supervised data. mIoU values of the semi- and weakly supervised cases are higher than those of the strongly-supervised-only cases: the weakly supervised data clearly improves the performance
In Figs. 5 and 6, the evaluation results are presented in terms of recall value and precision value, respectively.
Fig. 5.

The effect of weakly supervised data. Recall values of the semi- and weakly supervised cases are higher than those of the strongly-supervised-only cases
Fig. 6.

The effect of weakly supervised data. Precision values of the semi- and weakly supervised cases are higher than those of the strongly-supervised-only cases
As shown in Figs. 4, 5, and 6, the proposed method gives better results with weak and semi-supervised data than when only the corresponding strongly supervised data is used, although the improvement may not be as large as when 100% strongly supervised data is used. From these results, we infer that the optimal ratio of strongly supervised data may lie between 40% and 60%. Some segmentation result images are presented in Fig. 7.
Fig. 7.
The left images are the original images, the middle images are the desired label images, and the right images are the predicted segmentation images
Discussion
In this research, weak supervision indicates that the annotation provided for learning is not pixelwise, but just the locations of the lesions. Semi-supervision means that only a small portion of the data has annotation. Combined, only a small portion of the data has pixelwise annotation, and the rest has just the lesion locations. The purpose is to alleviate the time-consuming work of radiologists in drawing lesion boundaries and to train a neural network on data that do not have complete annotations. Such a method should perform better than a semi-supervised method in which the network is trained only on the small amount of pixelwise annotated data, but it may not reach the performance of a network trained on fully pixelwise annotated data; the weak and semi-supervised method is expected to fall between the two. The proposed method does provide intermediate pixelwise labels for the weakly supervised data, but its performance could not reach that of a model trained on the fully pixelwise annotated dataset.
In this research, we employed DeepLab V3-Res50 [31] for segmentation, but any segmentation network could be applied. The contribution of this research is a method that incorporates weak and semi-supervision into tumor segmentation, rather than a new CNN model.
The proposed method seems promising in that it works in a weak and semi-supervised way: even without complete pixelwise annotated data, we can train the neural network and obtain useful results. However, a learning process always carries the possibility of overfitting. We tried to avoid it by updating the labels only at the end of each training epoch, and only when the average distance between the centers of mass of all images and the weak supervision locations was reduced. The fundamental solution to this problem is to gather more data. In the near future, we plan to gather more TRUS data and apply the proposed method to real clinical situations.
As expected, the performance increased as the portion of strongly annotated data increased. Interestingly, however, the 40% case (40% of the training data strongly supervised and the remaining 60% weakly supervised) showed performance comparable to the 60% case in terms of mIoU, recall, and precision, with only a small loss of performance compared with the 100% case.
In the 100% case, the mIoU reached about 65%. Considering that the image data were acquired between 2000 and 2007, the performance could well be better if this method were applied to newly acquired data from machines with much-improved image quality.
Regarding the mIoU value, when the weak and semi-supervised method was applied, the performance of the 60% and 40% cases decreased to around 62%. Considering that the performance reached about 65% in the 100% case (all training images strongly supervised), this decrease is small, whereas when only the strongly supervised images were used, the decrease was about 10%. This indicates that the intermediate labels and their updates are actually helpful. The intermediate labels include false positive regions (considered suspicious while not cancerous) as well as true positive regions; however, they clearly capture much of the true positive region, and this seems to help train the neural network well.
Regarding recall and precision, the results of the 80% and 60% cases of the weak and semi-supervised method are as good as those of the 100% strongly supervised case. However, they decrease rapidly in the 40% case, unlike the mIoU value. In the 40% case, the precision is close to that of the 40% strong-supervision-only case, while the recall remains relatively high. This may indicate that the weak and semi-supervised method learns to reduce false negatives rather than false positives. In terms of recall and precision, the 60% case seems a good choice for weak and semi-supervised learning.
Regarding the application of the proposed method, most medical centers are using a combination of MRI with ultrasound to provide guided biopsies for prostate [34]. The proposed method can be applied to fuse the lesion segmented on MRI with the ultrasound images in real time.
Conclusion
In this research, we exploited a weak and semi-supervised deep learning framework to segment prostate cancer in TRUS images. A histologically proven benchmarking dataset of 102 case images was built, and 22 images were randomly selected for evaluation. Some portion of the training images was strongly supervised, i.e., annotated pixel by pixel, and the deep neural network was trained on these images. The rest of the training images, carrying only weak supervision (the location of the lesion), were fed to the trained network to produce intermediate pixelwise labels. We then retrained the network on all training images with the original and intermediate labels, and fed the training images to the retrained network to produce refined labels. Comparing the distances from the centers of mass of the refined and intermediate labels to the weak supervision location, the closer one replaced the previous label; we call this the label update. After the label updates, the test set images were fed to the retrained network for evaluation. The proposed method shows better results with weak and semi-supervised data than a method using only the small portion of strongly supervised data, although the improvement may not be as large as when a fully strongly supervised dataset is used. The proposed method thus seems able to alleviate the time-consuming work of radiologists in drawing lesion boundaries and to allow training a neural network on data that do not have complete annotations.
Acknowledgements
This research was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2017R1C1B5077068 and NRF-2013R1A1A2011398) and by Korea National University of Transportation in 2019, and also supported by the Technology Innovation Program funded by the Ministry of Trade, Industry and Energy (MOTIE) of Korea (10049785, Development of 'medical equipment using (ionizing or non-ionizing) radiation'-dedicated R&D platform and medical device technology).
Compliance with Ethical Standards
Conflict of Interest
The authors declare that they have no conflict of interest.
Footnotes
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1.Stangelberger A, Waldert M, Djavan B. Prostate cancer in elderly men. Rev Urol. 2008;10(2):111–119. [PMC free article] [PubMed] [Google Scholar]
- 2.Rawla P. Epidemiology of Prostate Cancer. World J Oncol. 2019;10(2):63–89. doi: 10.14740/wjon1191. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 3.Jemal A, Center MM, DeSantis C, Ward EM. Global patterns of cancer incidence and mortality rates and trends. Cancer Epidemiol Biomarkers Prev. 2010;19(8):1893–1907. doi: 10.1158/1055-9965.EPI-10-0437. [DOI] [PubMed] [Google Scholar]
- 4.Eastham J. Prostate cancer screening. Investig Clin Urol. 2017;58(4):217–219. doi: 10.4111/icu.2017.58.4.217. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 5.Llobet R, Perez-Cortes JC, Toselli AH, Juan A: Computer-aided detection of prostate cancer. Int J Med Informatics, 2007 [DOI] [PubMed]
- 6.Aus G, Abbou CC, Bolla M, Heidenreich A. EAU guidelines on prostate cancer. Eur Urol. 2005;48:546–551. doi: 10.1016/j.eururo.2005.06.001. [DOI] [PubMed] [Google Scholar]
- 7.Djavan B, Margreiter M. Biopsy standards for detection of prostate cancer. World J Urol. 2007;25:11–17. doi: 10.1007/s00345-007-0151-1. [DOI] [PubMed] [Google Scholar]
- 8.Schoots IG, Roobol MJ, Nieboer D, Bangma CH, Steyerberg EW, Hunink MG. Magnetic resonance imaging-targeted biopsy may enhance the diagnostic accuracy of significant prostate cancer detection compared to standard transrectal ultrasound-guided biopsy: a systematic review and meta-analysis. Eur Urol. 2015;68(3):438–450. doi: 10.1016/j.eururo.2014.11.037. [DOI] [PubMed] [Google Scholar]
- 9.Martinez C, DallOglio M, Nesrallah L et al.: Predictive value of psa velocity over early clinical and pathological parameters in patients with localized prostate cancer who undergo radical retropubic prostatectomy. Int Braz J Urol 30(1), 2004 [DOI] [PubMed]
- 10.Ellis JH, Tempany C, Sarin MS, Gatsonis C, Rifkin MD, Mcneil BJ: MR imaging and sonography of early prostatic cancer : pathologic and imaging features that influence identification and diagnosis, AJR, 1994. [DOI] [PubMed]
- 11.Huynen A, Giesen R et al.: Analysis of ultrasonographic prostate images for the detection of prostatic carcinoma: the automated urologic diagnostic expert system. Ultrasound Med Biol 20(1), 1994 [DOI] [PubMed]
- 12.Rosette J, Giesen R, et al. Automated analysis and interpretation of transrectal ultrasonography images in patients with prostatitis. Eur Urol. 1995;27(1):47–53. doi: 10.1159/000475123. [DOI] [PubMed] [Google Scholar]
- 13.Yfantis EA, Lazarakis T, Bebis G: On cancer recognition of ultrasound image, Computer Vision Beyond the Visible Spectrum: Methods and Applications, Proceedings. IEEE Workshop on, 2000.
- 14.Han SM, Lee JH, Choi JY. Computer-aided Prostate Cancer Detection using Texture Features and Clinical Features in Ultrasound Image. J Digit Imaging. 2008;21:121–133. doi: 10.1007/s10278-008-9106-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15.Suk HI, Lee SW, Shen D, Alzheimer's Disease Neuroimaging Initiative. Hierarchical feature representation and multimodal fusion with deep learning for AD/MCI diagnosis. Neuroimage. 2014;101(0):569–582. doi: 10.1016/j.neuroimage.2014.06.077. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 16.Dalmış MU, Vreemann S, Kooi T, Mann RM, Karssemeijer N, Gubern-Mérida A: Fully automated detection of breast cancer in screening MRI using convolutional neural networks. J Med Imaging 5, 2018 [DOI] [PMC free article] [PubMed]
- 17.Wang J et al.: Discrimination of Breast Cancer with Microcalcifications on Mammography by Deep Learning. Sci Rep 6, 2016 [DOI] [PMC free article] [PubMed]
- 18.Cheng JZ et al.: Computer-Aided Diagnosis with Deep Learning Architecture: Applications to Breast Lesions in US Images and Pulmonary Nodules in CT Scans. Sci Rep 6, 2016 [DOI] [PMC free article] [PubMed]
- 19.Han S, Kang HK, Jeong JY, Park MH, Kim W, Bang WC, Seong YK. Automated diagnosis of prostate cancer in multi-parametric MRI based on multimodal convolutional neural networks. Phys Med Biol. 2017;62:7714–7728. doi: 10.1088/1361-6560/aa7731. [DOI] [PubMed] [Google Scholar]
- 20.Le MH, Chen J, Wang L, Wang Z, Liu W, Cheng KT, Yang X. Automated diagnosis of prostate cancer in multi-parametric MRI based on multimodal convolutional neural networks. Phys Med Biol. 2017;62:6497–6514. doi: 10.1088/1361-6560/aa7731. [DOI] [PubMed] [Google Scholar]
- 21.Tsehay Y, Lay N, Wang X, Kwak JT, Turkbey et al: Biopsy-guided learning with deep convolutional neural networks for Prostate Cancer detection on multiparametric MRI, Proceedings of 2017 IEEE 14th International Symposium on Biomedical Imaging (ISBI 2017), 2017.
- 22.Anas EMA, Mousavi P, Abolmaesumi P. A deep learning approach for real time prostate segmentation in freehand ultrasound guided biopsy. Med Image Anal. 2018;48:107–116. doi: 10.1016/j.media.2018.05.010. [DOI] [PubMed] [Google Scholar]
- 23.Wu F, Wang Z, Zhang Z, Yang Y, Luo J, Zhu W, Zhuang Y. Weakly Semi-Supervised Deep Learning for Multi-Label Image Annotation. IEEE Trans Big Data. 2015;1:109–122. [Google Scholar]
- 24.Papandreou G, Chen LC, Murphy KP, Yuille AL: Weakly-and Semi-Supervised Learning of a Deep Convolutional Network for Semantic Image Segmentation, 2015 IEEE International Conference on Computer Vision (ICCV), 2015.
- 25.Wang Y, Liu J, Li Y, Lu H: Semi- and Weakly- Supervised Semantic Segmentation with Deep Convolutional Neural Networks, The 23rd ACM international conference, 2015.
- 26.Neverova N, Wolf C, Nebout F, Taylor GW. Hand pose estimation through semi-supervised and weakly-supervised learning. Comp Vision Image Underst. 2017;164:56–67. [Google Scholar]
- 27.Shin SY, Lee S, Yun ID, Kim SM, Lee KM. Joint Weakly and Semi-Supervised Deep Learning for Localization and Classification of Masses in Breast Ultrasound Images. IEEE Trans Med Imag. 2019;38:762–774. doi: 10.1109/TMI.2018.2872031. [DOI] [PubMed] [Google Scholar]
- 28.Lee J, Kim E, Lee S, Lee J, Yoon S: FickleNet: Weakly and semi-supervised semantic image segmentation using stochastic inference, arXiv:1902.10421 [cs.CV], 2019.
- 29.Souly N, Spampinato C, Shah M: Semi Supervised Semantic Segmentation Using Generative Adversarial Network, 2017 IEEE International Conference on Computer Vision (ICCV), 2017.
- 30.Deng J et al: Imagenet: A large-scale hierarchical image database. Computer Vision and Pattern Recognition, IEEE Conference on CVPR 2009, 2009.
- 31.Chen LC, Papandreou G, Schroff F, Adam H: Rethinking Atrous Convolution for Semantic Image Segmentation, arXiv: 1706.05587, 2017.
- 32.Kingma DP, Ba J: Adam: A method for stochastic optimization, arXiv:1412.6980, 2014.
- 33.Abadi M, Barham P, Chen J, Chen Z, Davis A et al: Tensorflow: A system for large-scale machine learning, Proceedings of the 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI). Savannah, Georgia, USA, 2016.
- 34.Osuchowski M, Aebisher D, Gustalik J, Aebisher DB, Kaznowska E. The advancement of imaging in diagnosis of prostate cancer. Eur J Clin Exp Med. 2019;17(1):67–70. [Google Scholar]