Journal of Translational Medicine
. 2025 Jan 23;23:110. doi: 10.1186/s12967-024-06017-6

Deep learning model targeting cancer surrounding tissues for accurate cancer diagnosis based on histopathological images

Lanlan Li 1,#, Yi Geng 1,#, Tao Chen 1,#, Kaixin Lin 2,3,4,5,6,#, Chengjie Xie 1, Jing Qi 1, Hongan Wei 1, Jianping Wang 7, Dabiao Wang 8, Ze Yuan 3,4,5, Zixiao Wan 3,4,5, Tuoyang Li 2,3,4,5, Yanxin Luo 2,3,4,5,6, Decao Niu 9,, Juan Li 2,4,5,10,, Huichuan Yu 2,3,4,5,6,
PMCID: PMC11755804  PMID: 39849586

Abstract

Accurate and fast histological diagnosis of cancers is crucial for successful treatment. Deep learning-based approaches have assisted pathologists in efficient cancer diagnosis. The remodeled microenvironment and field cancerization may give rise to cancer-specific features in images of the non-cancer regions surrounding a tumor, which may provide additional information not available in the cancer region to improve cancer diagnosis. Here, we proposed a deep learning framework that fine-tunes the target proportion towards cancer surrounding tissues in histological images for gastric cancer diagnosis. By employing six deep learning-based models targeting regions of interest (ROI) with different proportions of non-cancer and cancer regions, we uncovered the diagnostic value of non-cancer ROIs and found that model performance for cancer diagnosis depended on the proportion. We then constructed a model based on MobileNetV2 with optimized weights targeting non-cancer and cancer ROIs to diagnose gastric cancer (DeepNCCNet). In the external validation, the optimized DeepNCCNet demonstrated excellent generalization ability with an accuracy of 93.96%. In conclusion, we discovered a non-cancer ROI weight-dependent model performance, indicating the diagnostic value of non-cancer regions with a potentially remodeled microenvironment and field cancerization, which provides a promising image resource for cancer diagnosis. DeepNCCNet could be readily applied to the clinical diagnosis of gastric cancer and is useful in clinical settings such as the absence or minimal amount of tumor tissue in an insufficient biopsy.

Supplementary Information

The online version contains supplementary material available at 10.1186/s12967-024-06017-6.

Keywords: Histological image, Cancer-adjacent tissues, Deep learning, Field cancerization, Cancer diagnosis

Highlights

  • A deep learning framework with optimized non-cancer and cancer ROI for accurate cancer diagnosis

  • The non-cancer region is a promising resource for the current deep-learning framework to improve cancer diagnosis.

  • The deep learning model reveals the histological image changes from potential remodeled microenvironment and field cancerization in the normal tissues surrounding tumors.

  • The DeepNCCNet could be applied to clinical samples with insufficient biopsy.


Introduction

Accurate histopathological cancer diagnosis is critical for making clinical decisions on treatment strategies. Currently, the histopathological diagnosis of cancer mainly relies on experienced pathologists who observe suspicious cancer lesions under a microscope, which is time-consuming and varies with different criteria and stages of the learning curve. Accordingly, pathologists are interested in automatic and accurate methods that minimize the workload and time consumed in the histological diagnosis of cancers [1, 2]. Among these cancers, gastric cancer (GC) is one of the most common cancers and causes of cancer death worldwide [3], and about 65% of new global cases are diagnosed in Eastern Asia each year [4, 5]. In recent years, artificial intelligence (AI) algorithms, such as machine learning- and deep learning-based approaches, have assisted pathologists in diagnosing cancer effectively [6–8].

Obtaining histologic specimens with sufficient tumor cells under endoscopic biopsy is crucial to confirm GC histopathologically [9]. Missed or inadequate tumor cells from endoscopic biopsy might result in a false-negative diagnosis of GC, which might lead to tumor progression and poor prognosis [10–13]. The reported rates of GC missed during endoscopy vary across studies, ranging from 0.41% to 9.4% [12–18]. These studies suggest that missed GC accounts for a substantial proportion of GC patients, which might become a concerning disease burden. Therefore, it is of significance to develop an approach to avoid missed diagnosis of GC when biopsy material is insufficient.

Tumorigenesis and cancer development in the gastric mucosa remodel the microenvironment landscape and enable field cancerization in normal-appearing tissues [19–22], which may shape a distinct interface between healthy and malignant tissue. Aran et al. conducted a transcriptomic analysis comparing healthy, cancer-adjacent and cancer tissues, and they identified abnormally activated inflammatory signals in cancer-adjacent tissues across multiple cancer types [23]. Cancer-specific changes are also observed in the cell composition of the non-cancer and cancer tissues from cancer patients, such as tumor-infiltrating lymphocytes (TIL) being more enriched in cancer surrounding tissues than in the tumor center [24–26]. In addition, the rapid growth of the cancer mechanically compresses the peritumor tissue and shapes a unique texture, which may help cancer diagnosis [27]. Therefore, we speculated that the non-cancer tissues surrounding cancer may exhibit cancer-specific features that convolutional neural networks (CNN) could recognize, which may provide additional information and features unavailable in the cancer region and help accurately diagnose cancer in an insufficient biopsy.

In this study, we used six deep learning-based models to explore the diagnostic value of cancer surrounding tissues in digital slide images of GC and developed an efficient deep learning framework based on MobileNetV2 [28] to extract pathological image features from GC and surrounding normal tissues. By adjusting the weights toward the cancer surrounding tissues, we optimized the region-of-interest (ROI) that maximized feature extraction and diagnostic performance. This novel approach provided the first evidence of the potential value of normal-appearing cancer-surrounding tissues in cancer diagnosis. The non-tumor ROI enhanced the diagnostic and generalization performance of deep learning-based models, providing a promising resource for histological diagnosis. This non-tumor resource is especially useful in clinical settings such as the absence or minimal amount of tumor tissue in endoscopic biopsy specimens, where quick and accurate diagnosis followed by radiographic examination and cancer treatment may lead to significant improvements in clinical outcomes.

Methods

Histological images and ROI labeling

In this study, we leveraged histological images with hematoxylin–eosin staining from the compiled dataset we generated with in-house labeling from the histological images of BOT and SEED datasets (Supplementary Table 1). This compiled dataset collectively provided 2602 tissue slides, including 2462 GC tissues and 140 normal tissues, which was the internal dataset in this study. Among them, a random allocation of 60%, 20% and 20% images were used for training, testing, and internal validation, respectively. Furthermore, an external validation set comprising 60 GC and ten normal whole slide images (WSI) from the TCGA dataset was utilized to evaluate the model generalization (Supplementary Table 3) [29, 30]. The tumoral ROIs in the histological images of each dataset were reviewed and labeled by experienced pathologists.
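The 60/20/20 random allocation described above can be sketched as follows. This is a minimal illustration only; the function name, random seed, and exact shuffling scheme are assumptions, since the paper does not publish its allocation code:

```python
import random

def split_dataset(slide_ids, train_frac=0.6, test_frac=0.2, seed=42):
    """Randomly allocate slides into training, testing, and internal
    validation sets (60/20/20, as described in the paper)."""
    ids = list(slide_ids)
    random.Random(seed).shuffle(ids)
    n = len(ids)
    n_train = int(n * train_frac)
    n_test = int(n * test_frac)
    train = ids[:n_train]
    test = ids[n_train:n_train + n_test]
    val = ids[n_train + n_test:]  # remaining ~20% for internal validation
    return train, test, val

# The 2602-slide compiled dataset would split into 1561/520/521 slides
train, test, val = split_dataset(range(2602))
```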

Data preprocessing and image normalization

Histopathological images captured with different devices and under different conditions may vary in color and brightness, impacting feature extraction and downstream model performance, so color correction was applied to normalize the raw images. This image normalization enabled the identification of cancer-specific features, which ultimately enhanced the performance of the classification model. We used the overlapping sliding window segmentation approach to crop the preprocessed images and increase the receptive field to prevent model overfitting, as previously described [31, 32]. The sliding window size was set to 224 × 224 pixels, and the overlapping scale of adjacent windows was set to 50% (Fig. 1).
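The overlapping sliding-window cropping can be illustrated by enumerating window coordinates: with a 224-pixel window and 50% overlap, the stride between adjacent windows is 112 pixels. The function and its interface below are hypothetical, not the authors' implementation:

```python
def sliding_windows(width, height, win=224, overlap=0.5):
    """Return top-left (x, y) coordinates of overlapping crop windows.
    With win=224 and overlap=0.5 the stride is 112 pixels, matching
    the segmentation settings described in the paper."""
    stride = int(win * (1 - overlap))
    coords = []
    for y in range(0, height - win + 1, stride):
        for x in range(0, width - win + 1, stride):
            coords.append((x, y))
    return coords

# A 448 x 448 image yields a 3 x 3 grid of 224-pixel windows at stride 112
coords = sliding_windows(448, 448)
```

Each coordinate pair would then be used to crop one 224 × 224 patch from the normalized image.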

Fig. 1.

Fig. 1

The overall framework of this study. The labeled raw images of cancer or non-cancer tissues after color correction were cropped using the overlapping sliding window segmentation approach, and the receptive field was increased to prevent model overfitting. Then, the processed images were used to train the models for GC diagnosis based on the six deep learning framework

The overall framework of model development

CNN architectures have demonstrated outstanding performance in medical image classification, and transfer learning further improves training efficiency and generalization for pathological image classification by leveraging networks trained on other image sets [33, 34]. In addition, it alleviates the limitations posed by imbalanced pathological image sample data [33]. Therefore, we employed transfer learning with six pre-trained fine-tuning models, including AlexNet, GoogLeNet, VGG16, ResNet50, DenseNet121 and MobileNetV2 [28, 35–39], to accelerate and improve the training process in image classification for GC (Fig. 1). The details of these six pre-trained models with optimized parameters are illustrated in Supplementary Fig. 1. Among them, MobileNetV2, which demonstrated the best diagnostic performance, incorporates bottleneck and inverted-residual designs that confer the advantages of a lightweight architecture, excellent feature extraction, and strong generalization in GC classification (Supplementary Fig. 2). In the model development, the preprocessed images of the training and testing sets were input into the six models for training and testing, respectively. Then, to test the generalization ability of each trained model, we conducted external validations (Fig. 2).

Fig. 2.

Fig. 2

Model performance for cancer diagnosis in external validation. The diagnostic performance of the six models, including AlexNet, GoogLeNet, VGG16, ResNet50, DenseNet121 and MobileNetV2, was tested in the external validation set. The accuracy, along with sensitivity and specificity, was shown

Fine-tuning target weights towards the non-cancer and cancer ROI

The sliding window segmentation approach produced multiple cropped images, and the proportion of the cancer region in the histological image within each window was calculated. To evaluate the performance of models trained with images containing different proportions of cancer and non-cancer regions, we generated image sets in which each set was assigned a different proportion of cancer regions per image, ranging from 0 to 1. A higher proportion implies greater emphasis on the cancer regions. The models were trained on image sets of the same size but with different proportions of cancer regions.
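A minimal sketch of how crops might be bucketed by their cancer-region proportion, assuming each window carries a binary pathologist-labeled ROI mask; the helper names and the tolerance band are illustrative assumptions, not the paper's code:

```python
def cancer_fraction(mask_window):
    """Fraction of pixels labeled as cancer (1) in one cropped window,
    given a nested list of 0/1 labels from the pathologist ROI mask."""
    flat = [p for row in mask_window for p in row]
    return sum(flat) / len(flat)

def select_by_proportion(windows, masks, target, tol=0.05):
    """Keep crops whose cancer-region proportion lies within `tol` of
    the target (e.g. 0, 0.1, 0.3, 0.5, 0.7, 0.9 or 1), so that each
    training set emphasizes a fixed mix of cancer and non-cancer tissue."""
    return [w for w, m in zip(windows, masks)
            if abs(cancer_fraction(m) - target) <= tol]
```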

Construction of DeepNCCNet model

Through models trained with image sets of different proportions of cancer (assigned as positive samples) and non-cancer region histological images, we obtained different fine-tuned MobileNetV2-based models, which showed different diagnostic performances on GC. Then, we applied the weighted average approach to merge the two models with the best diagnostic performances (the models with weights of 0.1 and 0.5) to construct the optimized model targeting the harmonized non-cancer and cancer ROIs for GC diagnosis (DeepNCCNet), as we previously described [40, 41]. In addition, we compared the diagnostic performance of DeepNCCNet with other merged models in the external validation datasets.
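The weighted-average merging of two fine-tuned models can be sketched at the prediction level as follows. This is a simplified illustration; the actual merging scheme in refs [40, 41] may differ, and the function names and weights shown are assumptions:

```python
def merge_predictions(p_a, p_b, w_a=0.5, w_b=0.5):
    """Weighted average of per-image cancer probabilities from two
    fine-tuned models (e.g. those trained at cancer-ROI proportions
    0.1 and 0.5), a common model-ensembling scheme."""
    total = w_a + w_b
    return [(w_a * a + w_b * b) / total for a, b in zip(p_a, p_b)]

def diagnose(probs, threshold=0.5):
    """Binary GC call from the merged probabilities."""
    return [int(p >= threshold) for p in probs]
```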

Evaluation of model performance

We employed the accuracy, sensitivity, and specificity to evaluate the model performance in GC diagnosis. These metrics are defined as follows:

Accuracy = (N_TP + N_TN) / (N_TP + N_TN + N_FP + N_FN)    (1)

Sensitivity = N_TP / (N_TP + N_FN)    (2)

Specificity = N_TN / (N_TN + N_FP)    (3)

N_TP, N_TN, N_FP and N_FN represent the counts of true positive, true negative, false positive, and false negative cases, respectively.
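Equations (1)–(3) can be computed directly from binary labels, as in this minimal sketch (the function name is illustrative):

```python
def evaluate(y_true, y_pred):
    """Accuracy, sensitivity and specificity from binary labels
    (1 = cancer, 0 = non-cancer), following Eqs. (1)-(3)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity
```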

Results

Accurate diagnosis of GC with deep learning-based models

We trained and tested the six pre-trained CNN models (AlexNet, GoogLeNet, VGG16, ResNet50, DenseNet121 and MobileNetV2) by inputting the non-cancer and cancer ROIs with a proportion of 0.5, and fivefold cross-validation was adopted in the analysis. Then, we compared the performance of the well-trained models in the external validation dataset. Overall, all the models demonstrated an accuracy higher than 87% in GC diagnosis, and MobileNetV2 achieved the highest accuracy of 92.95% (Supplementary Table 2). Of note, the MobileNetV2-based model also showed the best sensitivity of 91.41% in GC diagnosis. Considering these metrics, the MobileNetV2-based model showed advantages over the other histological image-based GC diagnosis models. Therefore, we employed MobileNetV2 for developing DeepNCCNet to diagnose GC.

Fine-tuning target proportion towards the non-cancer region improves the diagnostic performance

To investigate the diagnostic value of cancer surrounding tissues with cancer-specific image features in GC diagnosis, we tested the performance of the models trained with ROIs with different proportions of non-cancer and cancer regions.

First, the preprocessed histological images were segmented and selected for model training based on different proportions of non-cancer and cancer regions: 0, 0.1, 0.3, 0.5, 0.7, 0.9 and 1 (Fig. 3a). Among these, 0 and 1 represented the fully non-cancer and fully cancer ROIs, respectively. Images with a cancer ROI proportion exceeding 0.1 were labeled as 0.1+.

Fig. 3.

Fig. 3

Model performance with fine-tuning target weights towards the cancer surrounding tissues for accurate histological diagnosis. a Histological images preprocessed and resized with different proportions of non-cancer and cancer regions for model development; 0 and 1 represented the fully non-cancer and fully cancer ROIs. b The weight-dependent model performance in cancer diagnosis. The accuracies of the six models at each weight of the non-cancer region are shown. Overall, the diagnostic performance of the models was inferior at an ROI weight of 0 and improved as the weight increased. However, in most models, the diagnostic performance decreased once the weight exceeded 0.5.

The models trained with different proportional ROIs were then tested and validated for GC diagnosis. Overall, we discovered an ROI proportion-dependent model performance, in which the accuracies increased as the proportion of the cancer region increased but declined when the cancer region took up the major portion of the training images (Fig. 3b). Interestingly, all the models trained with non-cancer regions alone (weight of 0) still demonstrated an accuracy of more than 80%, although this performance was inferior to those at other weights. Among the models, GoogLeNet exhibited the most varied accuracy across different weights, and MobileNetV2 achieved the highest accuracy of 92.95% at a weight of 0.5. In addition, the optimal weight at which each CNN model performed best in GC diagnosis varied among the six models. The optimal weight for ResNet50, DenseNet121, and VGG16 was 0.1, indicating the advantages of exploiting the non-cancer region for cancer diagnosis in these CNN models. Furthermore, the weight-dependent performances of the six models were also observed in the matched validation set with histological images preprocessed with the same weights (Supplementary Fig. 3). Together, these results emphasize the contribution of non-cancer regions to cancer diagnosis.

Construction and validation of the DeepNCCNet model

To construct the DeepNCCNet model with improved diagnostic accuracy, the MobileNetV2-based models trained with images of cancer ROI proportions of 0.1 and 0.5, which showed the best diagnostic performance, were selected and merged with the weighted average approach. The DeepNCCNet model was then compared with the other models trained with images of different ROI weights in GC diagnosis. All the merged models achieved improved diagnostic performance, including accuracy, sensitivity and specificity (Fig. 4). DeepNCCNet achieved an improved accuracy of 93.96% with higher specificity and comparable sensitivity, outperforming models that incorporated non-cancer and cancer ROI-based approaches (with weights 0 and 0.1) and those based solely on cancer ROIs (with weights 0.9 and 1). Taken together, the DeepNCCNet model with optimized ROI weights demonstrates accurate performance in GC diagnosis.

Fig. 4.

Fig. 4

The diagnostic performance of DeepNCCNet

Discussion

In this study, we applied deep learning-based approaches to classify the histological images for accurate cancer diagnosis. We especially focused on the diagnostic value of cancer surrounding tissues. Firstly, the six pre-trained CNN models were trained with preprocessed histological images, and all the models demonstrated an accurate performance in GC diagnosis. Among them, the MobileNetV2-based model achieved the highest accuracy and was selected for DeepNCCNet construction for its advantages over other models. Then, to investigate the diagnostic value of cancer surrounding tissues with cancer-specific image features in GC diagnosis, we tested the performance of the six CNN models trained with ROIs with different proportions of non-cancer and cancer regions and discovered a proportion-dependent diagnostic performance. Based on these findings, we constructed the DeepNCCNet model with optimized cancer ROI weights of 0.1 and 0.5, demonstrating improved accuracy in GC diagnosis.

A key point in cancer diagnosis is avoiding missed diagnoses, which requires high sensitivity from emerging diagnostic tools. In our study, most models reached a high sensitivity in testing on the internal datasets. However, the sensitivity of DeepNCCNet and the other models decreased in external validation when trained at single thresholds. After merging models with different ROI weights, the sensitivity of the merged models improved and was higher than in previous studies [6, 42, 43]. In addition, the accuracy of the final DeepNCCNet was 93.96%, which was also better than that of previous studies. These results highlight the generalizability of DeepNCCNet, which allows it to be applied in other clinical cohorts.

The high sensitivity of DeepNCCNet for GC diagnosis might decrease the proportion of GCs missed under routine endoscopic biopsy. Gastric adenoma with high-grade neoplasia is a risk factor for GC and has been reported to coexist with GC [44, 45]; this coexistence might result in a missed diagnosis of GC due to insufficient specimens from forceps biopsies. Therefore, the DeepNCCNet model that utilizes non-cancer regions could be feasibly applied to assist GC diagnosis in clinical settings such as forceps biopsy specimens from high-risk patients with gastric adenoma or a minimal amount of tumor tissue in biopsy specimens.

Endoscopy, radiological imaging, tumor biomarkers and serum pepsinogen have been widely used for the early detection of GC. A meta-analysis reported a missed rate of 9.4% in GC detection under endoscopy, which may bring a nonnegligible disease burden [18]. Repeat biopsy under endoscopy may compensate for an insufficient biopsy and increase the diagnostic sensitivity, but it may bring additional complications. DeepNCCNet, which utilizes the non-cancer tissue, provides a novel pathway to reduce the missed rate of GC detection. In addition, several non-invasive methods, such as liquid biopsy, exhaled breath analysis and medical image-based AI models, have been developed for GC screening in recent years. Liquid biopsy detects circulating tumor DNA (ctDNA) and other biomarkers in blood, ascites and other body fluids for GC screening [46, 47]. A study of 124 patients reported a sensitivity of 78.96% and specificity of 91.81% in GC diagnosis using ctDNA screening [48]. Exhaled breath analysis is an emerging technique for GC screening, and two large studies have utilized it: one study including 573 participants reported 100% sensitivity, 79% specificity, and 79% accuracy in GC detection with exhaled breath analysis, and the other demonstrated 73% sensitivity, 98% specificity, and 92% accuracy in 484 participants [49, 50]. Although these non-invasive methods demonstrate good performance compared with tissue-based methods in GC detection, how these emerging approaches could improve the endoscopic biopsy used in routine clinical practice remains unknown. Overall, DeepNCCNet may become a promising method that could be readily applied to assist GC diagnosis under endoscopy.

The cancer surrounding tissues can be reshaped by cancer cells, producing a wide set of differences from healthy tissues that make them identifiable and exploitable by DeepNCCNet. The tumor microenvironment (TME) is another non-tumor component in cancer and surrounding tissues. Components of the TME, such as TIL, cancer-associated fibroblasts and angiogenesis, are induced by cancer cells and differ from normal tissues. They produce different morphologic characteristics, texture and spatial arrangement of the TME, forming a unique landscape around the cancer cells that may underlie tumor progression or predict patients' prognosis [51, 52]. Besides the TME, field cancerization has also been proposed as an abnormal change in cancer surrounding tissues. Previous studies found pathologic atypia and gene mutations in peritumor tissues, and these tissues carry a risk of tumor recurrence or regeneration [53–55]. However, they are still not detected by current diagnostic techniques. The TME and field cancerization remind us that cancer surrounding tissues differ from normal tissues. Although no proven technique can identify the consistent changes in cancer surrounding tissues, our study provides new insight into cancer surrounding tissues for assisting cancer diagnosis by integrating digital pathologic images and artificial intelligence.

The diagnostic performance fluctuated as the ROI weights of non-cancer and cancer regions increased from 0 to 1 in the tested models. We observed that five of the six models exhibited an evident threshold-dependent effect. Overall, the diagnostic performance of the models was inferior at an ROI weight of 0 and improved as the weight increased; however, in most models, it decreased once the weight exceeded 0.5. These results indicate that the histological information that can be utilized for diagnosis is enriched in the limited non-tumor region immediately adjacent to the tumor region. An optimal segmentation of non-cancer regions is therefore necessary to train the models for accurate cancer diagnosis. Taken together, we revealed the value of non-tumor regions containing useful information for cancer diagnosis. However, further studies with a sufficiently large image set are needed to better integrate the pathological features of adjacent normal tissues into developing and applying deep learning-based diagnostic models. In addition, further validation of DeepNCCNet in other cohorts is necessary before clinical application. Finally, although we revealed that the distinct image features in cancer surrounding tissue can be exploited to improve cancer diagnosis, it remains difficult to interpret how the deep learning model analyzes the images, as it works as a "black box" [56].

Conclusion

In conclusion, the DeepNCCNet model, trained with optimized weights for non-cancer and cancer regions, could be feasibly applied to assist GC diagnosis, which is useful in clinical settings such as the absence or minimal amount of tumor tissue in biopsy specimens. We uncovered a non-cancer weight-dependent model performance, indicating the critical value of non-cancer regions with a potentially remodeled microenvironment and field cancerization in deep learning-based cancer diagnosis, which provides a promising image resource and novel insights for the future development of deep learning frameworks.

Supplementary Information

12967_2024_6017_MOESM1_ESM.docx (17.6MB, docx)

Supplementary Material 1. Figure 1. Overview of six deep learning-based AI models. Figure 2. The overall framework of MobileNetV2. Figure 3. Model performance in the matched validation set with the same fine-tuning target weights. Table 1. Summary of the pathological image dataset. Table 2. Comparison of accuracy of models with different ROI weights. Table 3. The clinicopathological characteristics of patients with gastric cancer in TCGA.

Author contributions

HY and LL conceptualized the experiments and study design. HY, LL, YG, TC, HW, DW and KL participated in the data curation, methodology, and formal analysis. LL, HW, YG, TC, KL, ZY, ZW, JQ, TL and CX performed the investigation and validation. Supervision and project administration were carried out by HY, LL, HW, YL, DN and JL. Resources were provided by HY, DN, JL, and YL. Visualization was executed by YG, TC, KL, JQ, JW, DW, ZY and ZW. The original manuscript draft was written by YG, KL, TC, DN and DW. All authors reviewed and edited the final manuscript. All authors read and approved the final version of the paper.

Funding

Support for these studies was provided by the National Natural Science Foundation of China (No. 82272965; No. 82473456; No. 82173067; No. 82372715; No. 31900505), the Natural Science Foundation of Fujian Province (No. 2020J01453), the Natural Science Foundation of Guangdong Province (No. 2022A1515012656; No. 2021A1515010639; No. 2021A1515010134), the Science and Technology Program of Guangzhou (No. 2025A04J5297), the "Five Five" Talent Team Construction Project of the Sixth Affiliated Hospital of Sun Yat-sen University (No. P20150227202010244; No. P20150227202010251), the Excellent Talent Training Project of the Sixth Affiliated Hospital of Sun Yat-sen University (No. R2021217202512965), the Scientific Research Project of the Sixth Affiliated Hospital of Sun Yat-sen University (No. 2022JBGS07), the Fundamental Research Funds for the Central Universities, Sun Yat-sen University (No. 23ykbj007), the Program of Introducing Talents of Discipline to Universities, and National Key Clinical Discipline (2012).

Availability of data and materials

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request. The trained DeepNCCNet model has been deposited on https://github.com/genggeng-123/DeepNCCNet.git.

Declarations

Ethics approval and consent to participate

The Institutional Review Board of the Sixth Affiliated Hospital of Sun Yat-sen University reviewed and approved the study protocol.

Consent for publication

Written informed consent was obtained from all subjects or their representatives for the study participation.

Competing interests

The authors declare that they have no competing interests.

Footnotes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Lanlan Li, Yi Geng, Tao Chen, and Kaixin Lin contributed equally to this work.

Contributor Information

Decao Niu, Email: ndcdoct@sr.gxmu.edu.cn.

Juan Li, Email: lijuan67@mail.sysu.edu.cn.

Huichuan Yu, Email: yuhch5@mail.sysu.edu.cn.

References

  • 1.Märkl B, et al. Number of pathologists in Germany: comparison with European countries, USA, and Canada. Virchows Arch. 2021;478(2):335–41. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Bonert M, et al. Pathologist workload, work distribution and significant absences or departures at a regional hospital laboratory. PLoS ONE. 2022;17(3): e0265905. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3.Siegel RL, et al. Cancer statistics, 2023. CA Cancer J Clin. 2023;73(1):17–48. [DOI] [PubMed] [Google Scholar]
  • 4.Xie Y, et al. Gastrointestinal cancers in China, the USA, and Europe. Gastroenterol Rep. 2021;9(2):104. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Smyth EC, et al. Gastric cancer. Lancet (London, England). 2020;396(10251):635–48. [DOI] [PubMed] [Google Scholar]
  • 6.Huang B, et al. Accurate diagnosis and prognosis prediction of gastric cancer using deep learning on digital pathological images: a retrospective multicentre study. EBioMedicine. 2021;73: 103631. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 7.Cao R, et al. Artificial intelligence in gastric cancer: applications and challenges. Gastroenterol Rep (Oxf). 2022;10:goac064. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Ba W, et al. Assessment of deep learning assistance for the pathological diagnosis of gastric cancer. Mod Pathol. 2022;35(9):1262–8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 9.Beg S, et al. Quality standards in upper gastrointestinal endoscopy: a position statement of the British Society of Gastroenterology (BSG) and Association of Upper Gastrointestinal Surgeons of Great Britain and Ireland (AUGIS). Gut. 2017;66(11):1886–99. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 10.Gavric A, et al. Survival outcomes and rate of missed upper gastrointestinal cancers at routine endoscopy: a single centre retrospective cohort study. Eur J Gastroenterol Hepatol. 2020;32(10):1312–21. [DOI] [PubMed] [Google Scholar]
  • 11.Veitch AM, et al. Optimizing early upper gastrointestinal cancer detection at endoscopy. Nat Rev Gastroenterol Hepatol. 2015;12(11):660–7. [DOI] [PubMed] [Google Scholar]
  • 12.Januszewicz W, et al. Prevalence and risk factors of upper gastrointestinal cancers missed during endoscopy: a nationwide registry-based study. Endoscopy. 2022;54(7):653–60. [DOI] [PubMed] [Google Scholar]
  • 13.Hernanz N, et al. Characteristics and consequences of missed gastric cancer: a multicentric cohort study. Digest Liver Dis. 2019;51(6):894–900. [DOI] [PubMed] [Google Scholar]
  • 14.Chadwick G, et al. Gastric cancers missed during endoscopy in England. Clin Gastroenterol Hepatol. 2015;13(7):1264. [DOI] [PubMed] [Google Scholar]
  • 15.Raftopoulos SC, et al. A cohort study of missed and new cancers after esophagogastroduodenoscopy. Am J Gastroenterol. 2010;105(6):1292–7. [DOI] [PubMed] [Google Scholar]
  • 16.Yalamarthi S, et al. Missed diagnoses in patients with upper gastrointestinal cancers. Endoscopy. 2004;36(10):874–9. [DOI] [PubMed] [Google Scholar]
  • 17.Beck M, et al. Gastric cancers missed at upper endoscopy in central norway 2007 to 2016-a population-based study. Cancers. 2021;13(22):5628. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18.Pimenta-Melo AR, et al. Missing rate for gastric cancer during upper gastrointestinal endoscopy: a systematic review and meta-analysis. Eur J Gastroenterol Hepatol. 2016;28(9):1041–9. [DOI] [PubMed] [Google Scholar]
  • 19.Zhang S, et al. The peritumor microenvironment: physics and immunity. Trends In Cancer. 2023;9(8):609–23. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Kumagai K, et al. Expansion of gastric intestinal metaplasia with copy number aberrations contributes to field cancerization. Cancer Res. 2022;82(9):1712–23. [DOI] [PubMed] [Google Scholar]
  • 21.Yoon JH, et al. Gastric cancer exosomes contribute to the field cancerization of gastric epithelial cells surrounding gastric cancer. Gastric Cancer. 2022;25(3):490–502. [DOI] [PubMed] [Google Scholar]
  • 22.Luo Y, Yu M, Grady WM. Field cancerization in the colon: a role for aberrant DNA methylation? Gastroenterol Rep (Oxf). 2014;2(1):16–20. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 23.Aran D, et al. Comprehensive analysis of normal adjacent to tumor transcriptomes. Nat Commun. 2017;8(1):1077. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 24.Salmon H, et al. Matrix architecture defines the preferential localization and migration of T cells into the stroma of human lung tumors. J Clin Investig. 2012;122(3):899–910. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Ohtani H. Focus on TILs: prognostic significance of tumor infiltrating lymphocytes in human colorectal cancer. Cancer Immun. 2007;7:4. [PMC free article] [PubMed] [Google Scholar]
  • 26.Mrass P, et al. Cell-autonomous and environmental contributions to the interstitial migration of T cells. Semin Immunopathol. 2010;32(3):257–74. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27.Tian J, et al. Application of 3D and 2D quantitative shear wave elastography (SWE) to differentiate between benign and malignant breast masses. Sci Rep. 2017;7:41216. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 28.Sandler M, et al. MobileNetV2: inverted residuals and linear bottlenecks. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2018.
  • 29.Gutman DA, et al. Cancer Digital Slide Archive: an informatics resource to support integrated in silico analysis of TCGA pathology data. J Am Med Inform Assoc. 2013;20(6):1091–8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Cancer Genome Atlas Research N. Comprehensive molecular characterization of gastric adenocarcinoma. Nature 2014;513(7517):202–9. [DOI] [PMC free article] [PubMed]
  • 31.Bardou DK, Zhang, Ahmad SMJIA. Classification of breast cancer based on histology images using convolutional neural networks. 2018;6:24680–24693.
  • 32.Ben Hamida A, et al. Deep learning for colon cancer histopathological images analysis. Comput Biol Med. 2021;136: 104730. [DOI] [PubMed] [Google Scholar]
  • 33.Kim HE, et al. Transfer learning for medical image classification: a literature review. BMC Med Imaging. 2022;22(1):69. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Wang X, et al. ChestX-Ray8: hospital-scale chest X-Ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2017.
  • 35.Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Commun ACM. 2017;60(6):84–90. [Google Scholar]
  • 36.Simonyan K, Zisserman AJC, Very deep convolutional networks for large-scale image recognition. 2014. abs/1409.1556.
  • 37.Hegde RB, et al. Feature extraction using traditional image processing and convolutional neural network methods to classify white blood cells: a study. Australas Phys Eng Sci Med. 2019;42(2):627–38. [DOI] [PubMed] [Google Scholar]
  • 38.He K, et al. Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2016.
  • 39.Huang G, et al. Densely connected convolutional networks. in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2017.
  • 40.Hu Y, et al. Automatic treatment outcome prediction with DeepInteg based on multimodal radiological images in rectal cancer. Heliyon. 2023;9(2): e13094. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 41.Li L, et al. Accurate tumor segmentation and treatment outcome prediction with DeepTOP. Radiother Oncol. 2023;183: 109550. [DOI] [PubMed] [Google Scholar]
  • 42.Tung C-L, et al. Identifying pathological slices of gastric cancer via deep learning. J Formosan Med Assoc. 2022;121(12):2457–64. [DOI] [PubMed] [Google Scholar]
  • 43.Ba W, et al. Assessment of deep learning assistance for the pathological diagnosis of gastric cancer. Modern Pathol. 2022;35(9):1262–8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 44.Kim JH, et al. Endoscopic features suggesting gastric cancer in biopsy-proven gastric adenoma with high-grade neoplasia. World J Gastroenterol. 2014;20(34):12233–40. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 45.Jung MK, et al. Endoscopic characteristics of gastric adenomas suggesting carcinomatous transformation. Surg Endosc. 2008;22(12):2705–11. [DOI] [PubMed] [Google Scholar]
  • 46.Pantel K, Alix-Panabières C. Liquid biopsy in 2016: circulating tumour cells and cell-free DNA in gastrointestinal cancer. Nat Rev Gastroenterol Hepatol. 2017;14(2):73–4. [DOI] [PubMed] [Google Scholar]
  • 47.Zhang Z, et al. Liquid biopsy in gastric cancer: predictive and prognostic biomarkers. Cell Death Dis. 2022;13(10):903. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48.Qian C, et al. Alu-based cell-free DNA: a novel biomarker for screening of gastric cancer. Oncotarget. 2017;8(33):54037–45. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 49.Broza YY, et al. Screening for gastric cancer using exhaled breath samples. Br J Surg. 2019;106(9):1122–5. [DOI] [PubMed] [Google Scholar]
  • 50.Amal H, et al. Detection of precancerous gastric lesions and gastric cancer through exhaled breath. Gut. 2016;65(3):400–7. [DOI] [PubMed] [Google Scholar]
  • 51.Zeng D, et al. Tumor microenvironment characterization in gastric cancer identifies prognostic and immunotherapeutically relevant gene signatures. Cancer Immunol Res. 2019;7(5):737–50. [DOI] [PubMed] [Google Scholar]
  • 52.Zou Q, et al. DNA methylation-based signature of CD8+ tumor-infiltrating lymphocytes enables evaluation of immune response and prognosis in colorectal cancer. J Immunother Cancer. 2021;9(9):e002671. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 53.Braakhuis BJM, et al. A genetic explanation of Slaughter’s concept of field cancerization: evidence and clinical implications. Can Res. 2003;63(8):1727–30. [PubMed] [Google Scholar]
  • 54.Slaughter DP, Southwick HW, Smejkal W. Field cancerization in oral stratified squamous epithelium; clinical implications of multicentric origin. Cancer. 1953;6(5):963–8. [DOI] [PubMed] [Google Scholar]
  • 55.Willenbrink TJ, et al. Field cancerization: definition, epidemiology, risk factors, and outcomes. J Am Acad Dermatol. 2020;83(3):709–17. [DOI] [PubMed] [Google Scholar]
  • 56.Castelvecchi D. Can we open the black box of AI? Nature. 2016;538(7623):20–3. [DOI] [PubMed] [Google Scholar]

Associated Data

This section collects any data citations, data availability statements, or supplementary materials included in this article.

Supplementary Materials

12967_2024_6017_MOESM1_ESM.docx (17.6MB, docx)

Supplementary Material 1. Figure 1. Overview of six deep learning-based AI models. Figure 2. The overall framework of MobileNetV2. Figure 3. Model performance in the matched validation set with the same fine-tuning target weights. Table 1. Summary of the pathological image dataset. Table 2. Comparison of accuracy of models with different ROI weights. Table 3. The clinicopathological characteristics of patients with gastric cancer in TCGA.

Data Availability Statement

The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request. The trained DeepNCCNet model has been deposited at https://github.com/genggeng-123/DeepNCCNet.git.


Articles from Journal of Translational Medicine are provided here courtesy of BMC
