Advances in computational pathology have continued at an impressive pace in recent years [1]. New insights into digital pathology, combined with advances in artificial intelligence, computing power, faster networks, and cheaper storage, promise more robust forms of cancer prevention, detection, and individualized therapy [2]. Moreover, deep neural networks have transformed medical imaging by overcoming multiple imaging challenges [3,4,5,6]. However, the analysis of whole-slide imaging (WSI) poses extraordinary challenges of its own. WSI is a demanding imaging modality because of its ultra-high resolution (on average 100,000 × 100,000 pixels), the coloration patterns introduced by staining (e.g., hematoxylin and eosin), and the availability of data at multiple magnification levels (e.g., 1×, 10×, 40×). A human reader cannot possibly take in all of this visual information [2].
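To make the scale concrete, computational pipelines typically break a gigapixel slide into fixed-size tiles before any analysis. A minimal sketch of the tile-count arithmetic (the 512-pixel tile size is illustrative, not taken from any cited work):

```python
# Sketch: how many non-overlapping tiles a gigapixel WSI yields at one
# magnification level. Real pipelines also filter out background tiles.

def tile_grid(width: int, height: int, tile: int = 512):
    """Yield top-left (x, y) coordinates of non-overlapping tiles."""
    for y in range(0, height - tile + 1, tile):
        for x in range(0, width - tile + 1, tile):
            yield (x, y)

coords = list(tile_grid(100_000, 100_000, tile=512))
print(len(coords))  # 195 * 195 = 38025 tiles at this level alone
```

Even before considering multiple magnification levels, tens of thousands of patches per slide make exhaustive human review impractical, which is precisely the gap automated analysis targets.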
Furthermore, with the integration of digital pathology slides into the pathology workflow, advanced algorithms and computer-aided diagnostics extend the pathology expert’s vision beyond a slide of mounted specimens under the microscope and enable a convergence of knowledge far beyond human constraints [7]. To improve early detection, determine prognosis, and choose the most effective treatments, pathologists can already use AI to identify specific imaging signals linked to disease processes. As a result, pathologists can serve more patients while still providing accurate diagnoses and prognoses [8]. While the advantages of AI and digital pathology are growing across many sectors of cancer care and targeted therapy, the focus of this Special Issue is on computational pathology for breast cancer and gynecologic cancer.
Globally, breast cancer is regarded as the leading cause of cancer mortality in women [9], and according to the American Cancer Society, 43,780 people in the USA were predicted to die from breast cancer in 2022 [10]. Several studies have advanced computational pathology for breast cancer by analyzing histology images together with the patient data they are correlated with. Wang et al. [4] proposed a DL-based method, the soft-label fully convolutional network (SL-FCN), to aid breast cancer targeted therapy by segmenting human epidermal growth factor receptor 2 (HER2) amplification in fluorescence in situ hybridization (FISH) and dual in situ hybridization (DISH) datasets of metastatic breast cancer. In another segmentation study, Khalil et al. [11] developed a modified fully convolutional network for breast cancer segmentation in histopathological hematoxylin and eosin (H&E) whole-slide images and assessed the proposed method using H&E- and IHC cytokeratin (AE1/AE3)-stained whole-slide images, the latter serving as an objective and unbiased reference standard. Beyond segmentation, Wang et al. [12] presented a fully automatic hierarchical registration model that aligns ultra-high-resolution histopathological and immunohistochemical whole-slide images across stains and resolves the major speed bottleneck, helping doctors annotate tumor tissue when defining the grade and stage of a tumor. Naik et al. [5] demonstrated that biomarker status, as identified by hormone receptors, can be determined effectively by machine learning from cell morphology; their ReceptorNet is a multiple-instance learning (MIL)-based method for assessing estrogen receptor status (ERS) from H&E-stained slides. Similarly, on biomarker status prediction, Shamai et al. [6] showed that a convolutional neural network-based framework can predict the expression of specific biomarkers (e.g., PD-L1) from H&E-stained whole-slide images. The authors made use of a large tissue microarray repository containing the respective stains for different biomarkers (e.g., IHC for PD-L1) alongside a collection of H&E-stained whole-slide images; in addition, a highly experienced pathologist annotated PD-L1 expression for each sample in the datasets.
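The MIL formulation mentioned above treats a slide as a "bag" of tile-level feature vectors carrying only a single slide-level label. A minimal attention-pooling sketch in NumPy illustrates the idea (toy random features and weights; this does not reproduce the actual ReceptorNet architecture, in which the attention scores and classifier are learned networks):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def mil_attention_pool(instances, w_attn, w_clf):
    """Aggregate tile features into one slide-level probability.

    instances: (n_tiles, d) tile feature matrix
    w_attn:    (d,) attention scoring vector (toy stand-in for a learned MLP)
    w_clf:     (d,) linear classifier on the pooled bag representation
    """
    scores = instances @ w_attn          # one relevance score per tile
    alpha = softmax(scores)              # attention weights sum to 1
    bag = alpha @ instances              # weighted average of tile features
    logit = bag @ w_clf
    return 1.0 / (1.0 + np.exp(-logit))  # slide-level probability

tiles = rng.normal(size=(50, 8))         # 50 tiles, 8-dim features (toy)
p = mil_attention_pool(tiles, rng.normal(size=8), rng.normal(size=8))
print(0.0 < p < 1.0)  # True
```

The appeal for pathology is that only the slide-level label (e.g., ER-positive vs. ER-negative) is needed for training; the attention weights indicate which tiles drove the prediction.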
Meanwhile, the World Health Organization (WHO) has estimated that cervical cancer kills approximately 300,000 individuals annually, with less-developed nations accounting for more than 85% of these deaths in recent years. Cervical cancer, one of the most prevalent cancers in women’s health today, is diagnosed in one woman per minute [13]. Cervical cancer datasets have been used in various studies to analyze tissue samples for the study of the disease. Wang et al. [14] introduced a modified FCN for segmenting cervical high-grade squamous intraepithelial lesions (HSILs) or higher (squamous cell carcinoma, SQCC) in WSIs of Pap smear specimens for use in real-world situations, and found that the proposed cervical Pap smear diagnosis method could aid the automatic recognition and quantification of cervical HSILs or higher (SQCC). Zhu et al. [15] proposed an assistive diagnostic system, AIATBS, to optimize the assessment of cervical cytological smears against clinical Bethesda system (TBS) standards. Cheng et al. [16] developed a three-stage DL-based model, consisting of a low-resolution model to locate lesion areas, a high-resolution model to recognize the top 10 lesion cells to be fed to the slide-level classifier, and a recurrent neural network (RNN)-based technique to integrate the features of the 10 most suspicious cells and yield the slide-level prediction.
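The slide-level stage of a pipeline like the one described by Cheng et al. can be caricatured as "rank cells by suspicion, keep the top 10, aggregate their features sequentially". A toy NumPy sketch with a single-gate recurrent update (the real model uses learned CNN features and a trained RNN; all weights here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def slide_level_score(cell_feats, lesion_scores, w_h, w_x, w_out, k=10):
    """Feed the k most suspicious cell features through a simple recurrence.

    cell_feats:    (n_cells, d) per-cell feature vectors
    lesion_scores: (n_cells,) per-cell suspicion scores from earlier stages
    """
    top = np.argsort(lesion_scores)[::-1][:k]      # indices of top-k cells
    h = np.zeros(w_h.shape[0])                     # hidden state
    for i in top:                                  # sequential aggregation
        h = np.tanh(w_h @ h + w_x @ cell_feats[i])
    logit = w_out @ h
    return 1.0 / (1.0 + np.exp(-logit))            # slide-level probability

d, hdim = 6, 4
feats = rng.normal(size=(200, d))                  # 200 candidate cells (toy)
scores = rng.random(200)                           # per-cell lesion scores
p = slide_level_score(feats, scores,
                      rng.normal(size=(hdim, hdim)),
                      rng.normal(size=(hdim, d)),
                      rng.normal(size=hdim))
print(0.0 < p < 1.0)  # True
```

The design choice worth noting is the staged reduction: cheap low-resolution screening narrows the search, expensive high-resolution analysis runs only on candidates, and a small sequence model makes the final call.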
In addition, ovarian cancer is the second most frequent gynecologic cancer among women globally [17]. Over 140,000 women die from ovarian cancer worldwide every year, and over 225,000 are diagnosed with it [18]. To learn more about computational pathology in ovarian cancer, several experiments have been conducted on ovarian cancer datasets. Wang et al. [19] proposed a modified FCN technique for predicting the therapeutic efficacy of bevacizumab in patients with ovarian cancer from histopathological (H&E-stained) whole-slide images, without using any pathologist-provided regional annotations. In a further study, Wang et al. [20] built a modified FCN-based precision oncology system using immunostained tissue microarray (TMA) whole-slide images to determine the therapeutic impact of bevacizumab in patients with epithelial ovarian cancer (EOC) and peritoneal serous papillary carcinoma (PSPC). In another computational pathology application, Boehm et al. [21] compiled a multimodal dataset of 444 ovarian cancer patients, predominantly of the high-grade serous carcinoma type, and identified quantitative characteristics in multimodal imaging associated with prognosis (e.g., tumor nuclear dimension on H&E staining and omental texture on contrast-enhanced computed tomography).
An analysis of histopathology images involves more than visual inspection alone; it also involves integrating data from patients’ clinical information and demographic details [22,23]. Most clinical data appear in unstructured free-text reports, including demographic information, test results, medical history, and clinical outcomes. AI-based systems will play a crucial role in sorting through these many sources of information and supporting pathologists in making the best treatment decisions for patients. AI-based methods must incorporate clinical data as well as histopathology images, allowing for assessments that exceed the capabilities of the human brain alone. Rich data resources will aid pathology’s transition from a clinical science to an informatics science, with tissue images supplied as one of the information sources [2].
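One simple way such integration is often realized is "late fusion": features derived from the slide are concatenated with clinical covariates before a final classifier. A hedged sketch (the feature names, dimensions, and zero weights below are invented purely for illustration and are not drawn from any cited system):

```python
import numpy as np

def late_fusion(image_feats, clinical_feats, weights, bias=0.0):
    """Concatenate image and clinical feature vectors, apply a linear model."""
    x = np.concatenate([image_feats, clinical_feats])
    logit = weights @ x + bias
    return 1.0 / (1.0 + np.exp(-logit))   # predicted outcome probability

img = np.array([0.8, 0.1, 0.3])      # e.g., pooled WSI embedding (toy)
clin = np.array([62.0 / 100, 1.0])   # e.g., scaled age, receptor status (toy)
w = np.zeros(5)                      # untrained weights -> neutral prediction
print(late_fusion(img, clin, w))     # 0.5 with zero weights
```

In practice each modality would first pass through its own learned encoder, but the fusion step itself is often this straightforward, which is what makes combining free-text-derived clinical variables with image features tractable.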
The approaches to AI-based image recognition, and to its combination with other information sources, have matured within computational pathology to the point where they may soon be translated from the research environment into real clinical laboratories. Although computational pathology offers many promising avenues, some challenges remain to be overcome [24]. Therefore, we issued a call for research papers that use innovative approaches to address current challenges in computational pathology for breast cancer and gynecologic cancer. Readers can find more information about publishing research in Cancers in the Special Issue “Computational Pathology for Breast Cancer and Gynecologic Cancer”.
Conflicts of Interest
The authors declare no conflict of interest.
Funding Statement
This research study is supported by the National Science and Technology Council, Taiwan (MOST 109-2221-E-011-018-MY3).
Footnotes
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
References
- 1. Lu M., Williamson D., Chen T., Chen R., Barbieri M., Mahmood F. Data-efficient and weakly supervised computational pathology on whole-slide images. Nat. Biomed. Eng. 2021;5:555–570. doi: 10.1038/s41551-020-00682-w.
- 2. Niazi M., Parwani A., Gurcan M. Digital pathology and artificial intelligence. Lancet Oncol. 2019;20:e253–e261. doi: 10.1016/S1470-2045(19)30154-8.
- 3. Esteva A., Kuprel B., Novoa R., Ko J., Swetter S., Blau H., Thrun S. Dermatologist-level classification of skin cancer with deep neural networks. Nature. 2017;542:115–118. doi: 10.1038/nature21056.
- 4. Wang C., Lin K., Lin Y., Khalil M., Chu K., Chao T. A Soft Label Deep Learning to Assist Breast Cancer Target Therapy and Thyroid Cancer Diagnosis. Cancers. 2022;14:5312. doi: 10.3390/cancers14215312.
- 5. Naik N., Madani A., Esteva A., Keskar N., Press M., Ruderman D., Agus D., Socher R. Deep learning-enabled breast cancer hormonal receptor status determination from base-level H&E stains. Nat. Commun. 2020;11:5727. doi: 10.1038/s41467-020-19334-3.
- 6. Shamai G., Livne A., Polónia A., Sabo E., Cretu A., Bar-Sela G., Kimmel R. Deep learning-based image analysis predicts PD-L1 status from H&E-stained histopathology images in breast cancer. Nat. Commun. 2022;13:6753. doi: 10.1038/s41467-022-34275-9.
- 7. Farahani N., Parwani A., Pantanowitz L. Whole slide imaging in pathology: Advantages, limitations, and emerging perspectives. Pathol. Lab. Med. Int. 2015;7:4321.
- 8. Zarella M., Bowman D., Aeffner F., Farahani N., Xthona A., Absar S., Parwani A., Bui M., Hartman D. A practical guide to whole slide imaging: A white paper from the digital pathology association. Arch. Pathol. Lab. Med. 2019;143:222–234. doi: 10.5858/arpa.2018-0343-RA.
- 9. Anderson B., Cazap E., El Saghir N., Yip C., Khaled H., Otero I., Adebamowo C., Badwe R., Harford J. Optimisation of breast cancer management in low-resource and middle-resource countries: Executive summary of the Breast Health Global Initiative consensus, 2010. Lancet Oncol. 2011;12:387–398. doi: 10.1016/S1470-2045(11)70031-6.
- 10. Siegel R., Miller K., Fuchs H., Jemal A. Cancer statistics, 2022. CA A Cancer J. Clin. 2022;72:7–33. doi: 10.3322/caac.21708.
- 11. Khalil M., Lee Y., Lien H., Jeng Y., Wang C. Fast Segmentation of Metastatic Foci in H&E Whole-Slide Images for Breast Cancer Diagnosis. Diagnostics. 2022;12:990. doi: 10.3390/diagnostics12040990.
- 12. Wang C., Lee Y., Khalil M., Lin K., Yu C., Lien H. Fast cross-staining alignment of gigapixel whole slide images with application to prostate cancer and breast cancer analysis. Sci. Rep. 2022;12:11623. doi: 10.1038/s41598-022-15962-5.
- 13. WHO. Elimination of Cervical Cancer as a Global Health Problem is within Reach. WHO; Geneva, Switzerland: 2018.
- 14. Wang C., Liou Y., Lin Y., Chang C., Chu P., Lee Y., Wang C., Chao T. Artificial intelligence-assisted fast screening cervical high grade squamous intraepithelial lesion and squamous cell carcinoma diagnosis and treatment planning. Sci. Rep. 2021;11:16244. doi: 10.1038/s41598-021-95545-y.
- 15. Zhu X., Li X., Ong K., Zhang W., Li W., Li L., Young D., Su Y., Shang B., Peng L., et al. Hybrid AI-assistive diagnostic model permits rapid TBS classification of cervical liquid-based thin-layer cell smears. Nat. Commun. 2021;12:3541. doi: 10.1038/s41467-021-23913-3.
- 16. Cheng S., Liu S., Yu J., Rao G., Xiao Y., Han W., Zhu W., Lv X., Li N., Cai J., et al. Robust whole slide image analysis for cervical cancer screening using deep learning. Nat. Commun. 2021;12:5639. doi: 10.1038/s41467-021-25296-x.
- 17. Lheureux S., Braunstein M., Oza A. Epithelial ovarian cancer: Evolution of management in the era of precision medicine. CA A Cancer J. Clin. 2019;69:280–304. doi: 10.3322/caac.21559.
- 18. Jemal A., Siegel R., Ward E., Hao Y., Xu J., Murray T., Thun M. Cancer statistics, 2008. CA A Cancer J. Clin. 2008;58:71–96. doi: 10.3322/CA.2007.0010.
- 19. Wang C., Chang C., Lee Y., Lin Y., Lo S., Hsu P., Liou Y., Wang C., Chao T. Weakly supervised deep learning for prediction of treatment effectiveness on ovarian cancer from histopathology images. Comput. Med. Imaging Graph. 2022;99:102093. doi: 10.1016/j.compmedimag.2022.102093.
- 20. Wang C., Lee Y., Chang C., Lin Y., Liou Y., Hsu P., Chang C., Sai A., Wang C., Chao T. A Weakly Supervised Deep Learning Method for Guiding Ovarian Cancer Treatment and Identifying an Effective Biomarker. Cancers. 2022;14:1651. doi: 10.3390/cancers14071651.
- 21. Boehm K., Aherne E., Ellenson L., Nikolovski I., Alghamdi M., Vázquez-García I., Zamarin D., Roche K., Liu Y., Patel D., et al. Multimodal data integration using machine learning improves risk stratification of high-grade serous ovarian cancer. Nat. Cancer. 2022;3:723–733. doi: 10.1038/s43018-022-00388-9.
- 22. Natrajan R., Sailem H., Mardakheh F., Arias Garcia M., Tape C., Dowsett M., Bakal C., Yuan Y. Microenvironmental heterogeneity parallels breast cancer progression: A histology–genomic integration analysis. PLoS Med. 2016;13:e1001961. doi: 10.1371/journal.pmed.1001961.
- 23. Heindl A., Nawaz S., Yuan Y. Mapping spatial heterogeneity in the tumor microenvironment: A new era for digital pathology. Lab. Investig. 2015;95:377–384. doi: 10.1038/labinvest.2014.155.
- 24. Ching T., Himmelstein D., Beaulieu-Jones B., Kalinin A., Do B., Way G., Ferrero E., Agapow P., Zietz M., Hoffman M., et al. Opportunities and obstacles for deep learning in biology and medicine. J. R. Soc. Interface. 2018;15:20170387. doi: 10.1098/rsif.2017.0387.
