World Journal of Gastroenterology. 2020 Sep 28;26(36):5408–5419. doi: 10.3748/wjg.v26.i36.5408

Artificial intelligence in gastric cancer: Application and future perspectives

Peng-Hui Niu 1, Lu-Lu Zhao 2, Hong-Liang Wu 3, Dong-Bing Zhao 4, Ying-Tai Chen 5
PMCID: PMC7520602  PMID: 33024393

Abstract

Gastric cancer is the fourth leading cause of cancer-related mortality across the globe, with a 5-year survival rate of less than 40%. In recent years, several applications of artificial intelligence (AI) have emerged in the gastric cancer field owing to its efficient computational power and learning capacity, notably image-based diagnosis and prognosis prediction. AI-assisted diagnosis covers pathology, endoscopy, and computed tomography, while prognosis research focuses on recurrence, metastasis, and survival prediction. In this review, a comprehensive literature search was performed for articles published up to April 2020 in the PubMed, Embase, Web of Science, and Cochrane Library databases, and the current status of AI applications in gastric cancer was systematically summarized. Moreover, challenges and future directions for the field are analyzed with a view to overcoming the risk of overfitting AI models and enhancing their accuracy and applicability in clinical practice.

Keywords: Gastric cancer, Image-based diagnosis, Prognosis prediction, Artificial intelligence, Machine learning, Deep learning


Core Tip: Recently, several applications of artificial intelligence have emerged in the gastric cancer field owing to its efficient computational power and learning capacity, such as image-based diagnosis and prognosis prediction. In this review, we searched the relevant works published up to April 2020 in the PubMed, Embase, Web of Science, and Cochrane Library databases and comprehensively summarize the current status of artificial intelligence applications in gastric cancer. In addition, challenges and future directions in the field are discussed with the aim of improving the accuracy and applicability of artificial intelligence models in clinical practice.

INTRODUCTION

Gastric cancer has long been recognized as an aggressive malignancy with a 5-year survival rate of less than 40%[1]. Despite decreases in incidence and mortality over the past few decades in some countries, gastric cancer is still the sixth most common malignancy and remains the fourth leading cause of cancer-related deaths across the globe[2-4]. Because early gastric cancer often presents with atypical symptoms and advanced disease behaves aggressively, reducing recurrence and prolonging survival depend increasingly on advances in screening, diagnosis, treatment, prognosis prediction, and other new technologies. Artificial intelligence (AI), with its efficient computational power and learning capacity, has attracted considerable attention in the field of gastric cancer.

In contrast to human intelligence, AI is intelligence displayed by machines; the concept first emerged in 1956. The term “artificial intelligence” is commonly used to describe machines (or computers) that imitate human “cognitive” functions, such as learning and problem solving[5]. As a subset of AI, machine learning (ML) can be defined as computer algorithms that improve automatically through experience[6]. From training data, ML algorithms build models that can make predictions or decisions without being explicitly programmed. In the last few years, ML algorithms, including random forests and support vector machines (SVM), have been applied in various domains, especially in medicine. As of 2020, deep learning (DL) had become the primary approach in much ongoing ML work. DL is a type of ML algorithm that uses multiple layers to progressively extract higher-level features from the raw input. In brief, ML is a major branch of AI, and DL is one way of implementing ML. Recent advances in hardware and computational power have led to several AI models emerging in the field of gastric cancer[7-43]. AI-assisted diagnosis mainly involves pathology, endoscopy, and computed tomography (CT)[7-35], while prognosis research focuses on recurrence, metastasis, and survival prediction[36-43].
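
To make the distinction concrete, the following is a minimal, hypothetical sketch (in Python with scikit-learn, on synthetic data rather than real gastric cancer cases) of the ML idea described above: instead of hand-coded diagnostic rules, models such as a random forest and an SVM are fitted to labelled examples and then make predictions on unseen cases.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic placeholder data: 500 "cases" with 20 imaging/clinical features each.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # toy "malignant vs benign" label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (RandomForestClassifier(random_state=0), SVC(probability=True)):
    model.fit(X_train, y_train)                 # the model is learned from examples
    scores = model.predict_proba(X_test)[:, 1]  # predicted probability of class 1
    print(type(model).__name__, "AUC:", round(roc_auc_score(y_test, scores), 3))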

In this review, we searched the relevant works published up to April 2020 in the PubMed, Embase, Web of Science, and Cochrane Library databases and comprehensively summarize the current status of AI applications in gastric cancer. In addition, challenges and future directions in the field are discussed with the aim of improving the accuracy and applicability of AI models in clinical practice.

AI IN THE DIAGNOSIS OF GASTRIC CANCER

Gastric cancer is mostly diagnosed at advanced stages because of its latent and nonspecific symptoms, which leads to poor prognosis. It has been reported that accurate early detection of gastric cancer can raise the 5-year survival rate to approximately 90%[44,45]. However, early diagnosis is limited mainly by the number of experienced imaging experts. Furthermore, diagnostic accuracy depends largely on the clinical experience of experts and is vulnerable to multiple factors; even qualified experts cannot avoid all misdiagnoses and missed diagnoses. AI methods, which imitate human cognitive function via a computer, are adept at processing and analyzing large amounts of data and can thus assist gastroenterologists in clinical diagnosis and decision making. To date, AI has been applied in many medical imaging fields, such as endoscopy, pathology, and CT imaging. AI-assisted endoscopic diagnosis includes the extraction of image features[7,8], the detection of early gastric cancer[9-14], the detection of precancerous conditions[15], the optimization of magnifying endoscopy with narrow-band imaging (M-NBI)[16-19], and the application of Raman endoscopy[20,21]. AI-assisted pathologic diagnosis involves the automatic identification of gastric cancer[22], the detection of gastric cancer on whole slide imaging (WSI)[23-26], the automatic detection of tumor-infiltrating lymphocytes (TILs)[27], and the segmentation of lesion regions[28-31], while AI-assisted CT diagnosis focuses on the identification of preoperative peritoneal metastasis[32], the detection of perigastric metastatic lymph nodes[33], and two other new imaging techniques[34,35]. Under certain conditions, the diagnostic performance of these AI models is not inferior to that of human experts.

AI-assisted diagnosis in endoscopy

Endoscopy plays an essential role in the detection of gastric cancer because it enables endoscopists to observe cancerous sites directly. Accurate diagnosis of early gastric cancer from endoscopic images is urgently needed to improve patients’ poor prognosis. However, recent studies showed that the detection accuracy of conventional endoscopy ranges only from 69% to 79%[46]. Given the heavy workload of medical image analysis, even experienced endoscopists inevitably make misdiagnoses and missed diagnoses. Recent efforts in endoscopy have therefore focused on adopting AI techniques to enhance the inspection and diagnosis of gastric cancer (summarized in Table 1).

Table 1.

Applications of artificial intelligence in endoscopy based on different study populations

| Ref. | Year | Country/region | Number of cases | Study population | Methods | Results |
|------|------|----------------|-----------------|------------------|---------|---------|
| Liu et al[7] | 2016 | China | 400 images | Hospital | JDPCA | AUCs (0.9532), accuracy (90.75%) |
| Ali et al[8] | 2018 | Pakistan | 176 images | Public images dataset | G2LCM | AUC (0.91), accuracy (87%) |
| Luo et al[9] | 2019 | China | 1036496 images | Hospital | GRAIDS | Accuracy (up to 97.7%) |
| Sakai et al[10] | 2018 | Japan | 29037 images | Hospital | CNN | Accuracy (87.6%) |
| Yoon et al[11] | 2019 | South Korea | 11539 images | Hospital | VGG model | AUCs (0.981 for detection), AUCs (0.851 for depth prediction) |
| Nakahira et al[12] | 2019 | Japan | 107284 images | Cancer Institute | Deep neural network | Kappa value (0.27) |
| Zhu et al[13] | 2019 | China | 993 images | Hospital | CNN-CAD system | AUCs (0.94), accuracy (89.16%) |
| Wang et al[14] | 2019 | China | 104864 images | Hospital | MCNN | Sensitivity (79.622%), specificity (78.48%) |
| Guimarães et al[15] | 2020 | Germany | 270 images | Medical center | DL | AUCs (0.98), accuracy (93%) |
| Miyaki et al[16] | 2015 | Japan | 100 cases | Hospital | SVM | Average output value (0.846 ± 0.220) |
| Liu et al[17] | 2018 | China | 1120 M-NBI images/3068 images | Hospital | Deep CNN | Top accuracy (98.5%) |
| Horiuchi et al[18] | 2019 | Japan | 2828 images | Hospital | CNN | Accuracy (85.3%) |
| Li et al[19] | 2019 | China | 2088 images | Hospital | CNN | Accuracy (90.91%) |
| Bergholt et al[20] | 2011 | Singapore | 1063 in vivo Raman spectra | Hospital | ACO-LDA algorithms | Sensitivity (94.6%), specificity (94.6%) |
| Duraipandian et al[21] | 2012 | Singapore | 2748 in vivo Raman spectra | Hospital | PLS-DA algorithms | Accuracy (85.6%), specificity (86.2%) |

JDPCA: Joint diagonalization principal component analysis; AUC: Area under the curve; G2LCM: Gabor-based gray-level co-occurrence matrix; GRAIDS: Gastrointestinal Artificial Intelligence Diagnostic System; CNN: Convolutional neural network; VGG model: Visual geometry group model; CNN-CAD system: Convolutional neural network computer-aided detection system; MCNN: Multicolumn convolutional neural network; DL: Deep learning; SVM: Support vector machine; M-NBI: Magnifying endoscopy with narrow-band imaging; ACO-LDA: Ant colony optimization integrated with linear discriminant analysis; PLS-DA: Partial least squares-discriminant analysis.

The key to high detection accuracy lies in extracting discriminative features that can reliably distinguish lesion images from normal images. Liu et al[7] designed an ML algorithm called joint diagonalization principal component analysis (JDPCA) for the dimensionality reduction of endoscopic images. They then presented a novel AI-assisted method for detecting early gastric cancer images by combining JDPCA with conventional, non-learning algorithms, which performed better than traditional related methods. Ali et al[8] presented a novel texture extraction method named Gabor-based gray-level co-occurrence matrix (G2LCM) to detect abnormal frames in whole chromoendoscopy sequences. The authors then combined an SVM classifier with G2LCM texture features to screen for early gastric cancer. The detection accuracy, specificity, sensitivity, and area under the curve (AUC) were 87%, 82%, 91%, and 0.91, respectively, higher than the results obtained by the SVM classifier combined with other texture extraction methods.
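
As a rough illustration of this kind of texture-plus-classifier pipeline, the sketch below computes plain gray-level co-occurrence matrix (GLCM) features with scikit-image and hands them to an SVM. It is a hedged simplification: the Gabor-filtered variant (G2LCM) used by Ali et al is not reproduced, and load_frames() is a hypothetical helper returning labelled grayscale endoscopy frames.

import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(img_u8):
    # Small GLCM feature vector (contrast, homogeneity, energy, correlation)
    # from an 8-bit grayscale image, at two orientations.
    glcm = graycomatrix(img_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "homogeneity", "energy", "correlation")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

# Demo on a random synthetic patch, just to show the feature vector shape.
rng = np.random.default_rng(0)
demo_patch = (rng.random((64, 64)) * 255).astype(np.uint8)
print(glcm_features(demo_patch).shape)   # (8,)

# frames, labels = load_frames()                      # hypothetical data loader
# X = np.vstack([glcm_features(f) for f in frames])
# clf = SVC(kernel="rbf").fit(X, labels)              # normal vs abnormal frames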

Luo et al[9] constructed the Gastrointestinal Artificial Intelligence Diagnostic System (GRAIDS) to automatically detect upper gastrointestinal cancer in real time. They used 1036496 standard white-light endoscopy images from 84424 cases across China for training and testing. In the various large-scale validation and prospective sets, the diagnostic accuracy was satisfactory, ranging from 0.915 to 0.977. Moreover, the experimental results demonstrated that GRAIDS attained sensitivity comparable to that of human experts (0.942 vs 0.945). Sakai et al[10] introduced a convolutional neural network (CNN)-based automatic detection model with high accuracy and argued that the model could enhance the diagnostic capabilities of endoscopists. Yoon et al[11] adopted a lesion-based model for the accurate detection and depth prediction of early gastric cancer and evaluated the significant factors associated with AI-assisted diagnosis.

Nakahira et al[12] reported that an AI-assisted endoscopic image analysis system could effectively stratify the risk of gastric cancer and further evaluated the consistency of the AI model with the consensus diagnoses of three endoscopists. Zhu et al[13] constructed a CNN computer-aided detection (CNN-CAD) system to determine the invasion depth of early gastric cancer; their model achieved substantially higher specificity and accuracy than endoscopists, with an AUC of 0.94. Another study proposed a multicolumn CNN to improve gastric cancer screening[14]. The novelty of their method lies in combining electronic gastroscopy with cloud-based endoscopic image analysis tools. Experimental results revealed that the proposed multicolumn CNN dramatically outperformed other CNN models (including AlexNet[47], GoogLeNet[48], and VGGNet[49]) and non-DL methods (including kNN[50] and SVM-based[51] classifiers).
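
The CNN-based detectors summarized above share the same basic building blocks: stacked convolution, nonlinearity, and pooling layers followed by a classifier head. The PyTorch sketch below is only a toy illustration of that structure on dummy input; the published models (GRAIDS, the multicolumn CNN, and others) are far deeper and were trained on tens to hundreds of thousands of annotated endoscopy images.

import torch
import torch.nn as nn

class TinyEndoscopyCNN(nn.Module):
    # Deliberately small: two conv blocks, global pooling, and a linear head
    # producing two logits (e.g., cancerous vs noncancerous frame).
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

model = TinyEndoscopyCNN()
logits = model(torch.randn(4, 3, 224, 224))   # a batch of 4 dummy RGB frames
print(logits.shape)                           # torch.Size([4, 2])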

Chronic atrophic gastritis is a common precancerous gastric condition that may lead to the appearance and development of gastric cancer[52]. Conventional endoscopy shows high interobserver variability in distinguishing precancerous conditions. Guimarães et al[15] therefore developed and trained a DL approach using 200 real-world endoscopic images to diagnose atrophic gastritis. The DL model achieved a diagnostic accuracy of 93% and an AUC of 0.98, outperforming the combined results of expert endoscopists.

Given rapid advances in narrow-band imaging, M-NBI has attracted great attention for the diagnosis of early gastric cancer, showing markedly higher accuracy than conventional white-light imaging[53]. However, interobserver variability remains a limitation of M-NBI-based lesion diagnosis, and the technique is difficult for endoscopists to master in a short time[54]. Because advances in AI may offer a solution, several studies of AI-assisted M-NBI image diagnosis have emerged in recent years. Miyaki et al[16] developed an SVM-based system to quantitatively identify gastric cancer on magnifying endoscopy with blue-laser imaging. Liu et al[17] first applied transfer learning with fine-tuned deep CNN features to classify gastric mucosal lesions in M-NBI images. Horiuchi et al[18] and Li et al[19] adopted CNN systems to efficiently distinguish early gastric cancer from noncancerous lesions and obtained excellent diagnostic performance; their results suggested that the diagnostic sensitivity of the CNN model with M-NBI was superior to that of endoscopists.
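
Transfer learning of the kind mentioned above typically reuses a network pretrained on a large natural-image dataset and retrains only its final layers on the smaller endoscopic dataset. The sketch below, assuming a recent torchvision (>= 0.13) and a hypothetical two-class M-NBI task, illustrates the general idea rather than the exact architectures or training schedules of the cited studies.

import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained backbone and freeze its feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a new two-class head
# (e.g., cancerous vs noncancerous M-NBI image); only this layer is trained.
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

# During fine-tuning, only backbone.fc.parameters() would be passed to the
# optimizer and updated on the labelled M-NBI images.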

Furthermore, a previous report suggested that M-NBI is still inadequate to effectively and accurately diagnose grossly invisible lesions because it lacks biochemical information[55]. Raman spectroscopy, a point-wise spectroscopic technique, can comprehensively characterize surface and subsurface cellular structures of the interrogated tissue, and Raman endoscopy has therefore been suggested to hold promising potential for the diagnosis of early gastric cancer. Bergholt et al[20] first combined real-time Raman endoscopy with AI-based algorithms to distinguish neoplastic from normal gastric tissue. Later, another study devised an automated Raman spectroscopy diagnostic framework based on partial least squares-discriminant analysis (PLS-DA), which detected gastric cancer with a diagnostic accuracy of 85.6%[21].
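
PLS-DA treats classification as a partial least squares regression onto class labels followed by a decision threshold. The sketch below mimics that idea with scikit-learn on synthetic spectra; it is a hedged illustration, not the in vivo pipeline of the cited study.

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
spectra = rng.normal(size=(200, 800))      # 200 synthetic spectra, 800 wavenumbers
labels = np.arange(200) % 2                # placeholder: 0 = normal, 1 = neoplastic

# Fit PLS regression onto the binary label, then threshold the continuous
# prediction at 0.5 to obtain a class decision (the PLS-DA recipe).
pls = PLSRegression(n_components=5).fit(spectra, labels)
pred = (pls.predict(spectra).ravel() > 0.5).astype(int)
print("training accuracy:", (pred == labels).mean())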

AI-assisted diagnosis in pathology

Traditional diagnosis of gastric cancer relies on identifying the morphological features of malignant cells in histopathological biopsy specimens, and manual pathological inspection of gastric slides is time-consuming and laborious. The need for automatic image analysis and histological classification of gastric cancer is therefore increasing. Li et al[22] proposed a novel DL-based framework, called GastricNet, to automatically identify gastric cancer. The classification accuracy of the proposed framework was 100% on gastric pathological slices, substantially higher than that of other existing networks, including DenseNet[56] and ResNet[57].

In addition, WSI, the virtual counterpart of glass slides[58], is considered comparable to optical microscopy for the diagnosis of gastric cancer, and its advances have led to several AI applications in pathological diagnosis. Sharma et al[23] reported that a CNN architecture could efficiently analyze pathological images, with an accuracy of 0.6990 for cancer classification and 0.8144 for necrosis detection. Leon et al[24] assessed the application of deep CNNs to the automatic detection of gastric cancer in pathological images. Two deep CNN-based approaches were presented: one analyzed morphological features from whole images, while the other independently investigated local characteristics. Experimental results showed an average accuracy of up to 89.72%, demonstrating the excellent performance of the proposed model in the detection of gastric cancer. Iizuka et al[25] trained CNNs and recurrent neural networks to distinguish gastric adenocarcinoma, adenoma, and non-neoplastic tissue. On three independent test sets of biopsy histopathology WSIs, the DL applications achieved AUCs of up to 0.97 for the classification of gastric adenocarcinoma. Yoshida et al[26] compared the classification results of experienced pathologists with those of the e-Pathologist developed by NEC Corporation. Although the overall concordance rate between the two methods was only 55.6% (1702/3062), the concordance rate for negative biopsy specimens was as high as 90.6% (1033/1140). Furthermore, current evidence indicates that TILs are associated with the prognosis of gastric cancer[59], and a related study presented a CNN model that automatically detects TILs on histopathological WSIs with an acceptable accuracy of 96.88%[27].
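
Whole slide images are far too large to feed into a CNN directly, so WSI pipelines such as those above typically tile the slide into patches, classify each patch, and aggregate the patch predictions into a slide-level result. The sketch below illustrates that first tiling step with OpenSlide; the file name is a placeholder, the background-filtering threshold is an arbitrary assumption, and no trained classifier is included.

import numpy as np
import openslide

slide = openslide.OpenSlide("biopsy_example.svs")   # hypothetical WSI file
patch_size, level = 256, 0
width, height = slide.level_dimensions[level]

patches = []
for y in range(0, height - patch_size, patch_size):
    for x in range(0, width - patch_size, patch_size):
        region = slide.read_region((x, y), level, (patch_size, patch_size))
        patch = np.asarray(region.convert("RGB"))
        if patch.mean() < 230:          # crude filter to skip mostly-white background
            patches.append(patch)

# Each retained patch would then be classified (e.g., adenocarcinoma, adenoma,
# non-neoplastic) and the patch-level predictions aggregated per slide.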

The automatic segmentation of lesion regions remains a challenge in the AI-assisted pathological diagnosis of gastric cancer. To alleviate the shortage of well-annotated pathology image data, Liang et al[28] applied a DL method to segment pathological images, presenting a new neural network architecture and an algorithm named overlapped region forecast for the detection of gastric cancer. An intersection over union (IoU) coefficient of 88.3% and an accuracy of 91.1% indicated that the model had reached the standard of supervised learning. Qu et al[29] presented a novel type of intermediate dataset and developed a stepwise fine-tuning scheme to improve the classification performance of deep neural networks. Sun et al[30] also demonstrated that their proposed DL model was a powerful image segmentation tool, with a mean accuracy of 91.60% and a mean IoU of 82.65%. Another study demonstrated that the Mask R-CNN model is an effective method for medical image segmentation[31].
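
The IoU metric quoted by these segmentation studies is simply the overlap between the predicted and ground-truth masks divided by their union; a minimal sketch:

import numpy as np

def iou(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    # IoU between two binary masks of the same shape.
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    union = np.logical_or(pred, true).sum()
    if union == 0:
        return 1.0                      # both masks empty: count as perfect overlap
    return np.logical_and(pred, true).sum() / union

a = np.zeros((4, 4), dtype=int); a[:2, :2] = 1   # toy predicted lesion mask
b = np.zeros((4, 4), dtype=int); b[:2, :] = 1    # toy ground-truth mask
print(iou(a, b))                                 # 0.5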

Hence, these satisfactory results highlight the enormous potential of AI-assisted applications to support pathologists in the detection of gastric cancer, especially by improving the efficiency of image segmentation and reducing diagnostic time (summarized in Table 2).

Table 2.

Applications of artificial intelligence in pathology based on different study populations

| Ref. | Year | Country/region | Number of cases | Study population | Methods | Results |
|------|------|----------------|-----------------|------------------|---------|---------|
| Li et al[22] | 2018 | China | 700 slices | Public gastric slice dataset | GastricNet | Accuracy (100%) |
| Sharma et al[23] | 2017 | Germany | 454 cases | Hospital | CNN | Accuracy (0.6990 for cancer classification), accuracy (0.8144 for necrosis detection) |
| Leon et al[24] | 2019 | Colombia | 40 images | Department of pathology | Deep CNN | Accuracy (up to 89.72%) |
| Iizuka et al[25] | 2020 | Japan | 1746 biopsy histopathology WSIs | Hospital, TCGA | CNN, RNN | AUCs (up to 0.98), accuracy (95.6%) |
| Yoshida et al[26] | 2018 | Japan | 3062 gastric biopsy specimens | Cancer center | ML | Overall concordance rate (55.6%) |
| Garcia et al[27] | 2017 | Peru | 3257 images | - | Deep CNN | Accuracy (96.88%) |
| Liang et al[28] | 2019 | China | 1900 images | - | DL | IoU (0.883), accuracy (91.09%) |
| Qu et al[29] | 2018 | Japan | 9720 images/19440 images | Hospital | DL | AUCs (up to 0.965) |
| Sun et al[30] | 2019 | China | 500 pathological images | Hospital | DL | IoU (0.8265), accuracy (91.60%) |
| Cao et al[31] | 2019 | China | 1399 pathological sections | - | Mask R-CNN | AP value (61.2) |

GastricNet: A deep learning framework for gastric cancer identification; CNN: Convolutional neural network; WSIs: Whole slide images; TCGA: The Cancer Genome Atlas; RNN: Recurrent neural network; AUC: Area under the curve; ML: Machine learning; DL: Deep learning; IoU: Intersection over union coefficient; AP: Average precision.

AI-assisted diagnosis in CT imaging

Owing to its noninvasiveness and convenience, CT is widely used for the clinical diagnosis of gastric cancer[60,61]. However, diagnostic accuracy depends mainly on the clinical experience of radiologists; when interpreting large numbers of CT images, a radiologist’s accuracy inevitably decreases and errors become more likely. Several ML- and DL-based approaches have been reported to effectively extract valuable information from CT images (summarized in Table 3). For example, Huang et al[32] applied DL to diagnostic analysis and created a deep CNN model to identify preoperative peritoneal metastasis in advanced gastric cancer.

Table 3.

Applications of artificial intelligence in computerized tomography based on different study populations

| Ref. | Year | Country/region | Number of cases | Study population | Methods | Results |
|------|------|----------------|-----------------|------------------|---------|---------|
| Huang et al[32] | 2020 | China | - | Hospital | Deep CNN | - |
| Gao et al[33] | 2019 | China | 32495 images | Hospital | FR-CNN | AUCs (0.9541) |
| Li et al[34] | 2015 | China | 26 cases | Hospital | KNN algorithm | Accuracy (76.92%) |
| Li et al[35] | 2012 | China | 38 lymph node datasets | Hospital | ML | Accuracy (96.33%) |

CNN: Convolutional neural networks; FR-CNN: Faster region-based convolutional neural networks; AUCs: Area under the curve; KNN algorithm: K-nearest neighbor algorithm; ML: Machine learning.

Given the poor depiction of lymph node metastasis on conventional CT and its low detection sensitivity, novel DL-based models were expected to improve the performance of CT imaging. Gao et al[33] developed and validated a faster region-based CNN (Faster R-CNN) based on CT images. The experimental results showed that the Faster R-CNN achieved high accuracy for the diagnosis of perigastric metastatic lymph nodes, with a mean average precision of 0.7801 and an AUC of 0.9541.
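
For orientation, the sketch below builds a Faster R-CNN style detector from torchvision’s reference implementation and replaces its prediction head with a two-class head (background vs metastatic lymph node). It is only a hedged illustration of the detector family named above, run here on a dummy tensor rather than annotated CT slices, and is not the published model.

import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
# New head: background + one object class (metastatic lymph node).
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

model.eval()
with torch.no_grad():
    detections = model([torch.rand(3, 512, 512)])   # one dummy CT slice as RGB
print(detections[0].keys())                          # boxes, labels, scores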

Moreover, dual-energy spectral CT (DEsCT), a new imaging technique, can easily switch between high-energy and low-energy datasets, enabling the precise creation of virtual monochromatic spectral images, and recent improvements have made DEsCT available for routine clinical practice. However, it is difficult for radiologists to take full advantage of the additional quantitative data obtained by the DEsCT system. A recent study introduced an AI-assisted use of DEsCT imaging for staging and characterizing gastric cancer[34]. The authors used a new multiple instance learning method to determine the invasion depth of gastric cancer and achieved an overall accuracy of 0.7692 after optimization. Gemstone spectral imaging (GSI) has also been reported to provide radiologists with more valuable image information than conventional CT[62]. Li et al[35] proposed ML-based GSI analysis of lymph node metastasis in gastric cancer, which achieved higher detection accuracy; GSI-CT outperformed traditional detection methods, such as endoscopic ultrasound and multidetector-row CT, in diagnosing lymph node metastasis.
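
In multiple instance learning, each patient is treated as a “bag” of instance-level feature vectors (e.g., per-slice or per-region CT features) that carries only one bag-level label. The sketch below shows a deliberately simplified baseline, pooling instance features into a single bag vector before classification, on synthetic placeholder data; the cited study used a more elaborate MIL formulation.

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(2)
# 30 hypothetical patients, each a "bag" with a variable number of 16-dim instances.
bags = [rng.normal(size=(rng.integers(5, 15), 16)) for _ in range(30)]
bag_labels = np.arange(30) % 2           # placeholder bag-level labels

X = np.vstack([bag.mean(axis=0) for bag in bags])   # mean-pool instances per bag
clf = SVC().fit(X, bag_labels)
print("bag-level training accuracy:", clf.score(X, bag_labels))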

AI IN PROGNOSIS PREDICTION OF GASTRIC CANCER

Accurate prognosis prediction in gastric cancer is significant for both clinicians and patients: such information can assist clinicians in decision making and improve patient management. It is increasingly appreciated that demographics, pathological indicators, physiological states, and even social contacts have an impact on the prognosis of gastric cancer patients. However, conventional statistical methods, such as the tumor–node–metastasis staging system and nomograms, can hardly analyze the complicated interconnections among these characteristics. Owing to their excellent computational power and integration capability, AI models have been applied to improve the survival of gastric cancer patients.

In the last few years, applications of AI in prognosis have involved the prediction of survival[36-38], recurrence risk[39,40], and metastasis[41-43] (summarized in Table 4). Jiang et al[36] applied SVM to survival analysis and developed a prognostic classifier. The results showed higher predictive accuracy for overall survival and disease-free survival than the tumor–node–metastasis staging system defined by the American Joint Committee on Cancer. In addition, the proposed gastric cancer SVM classifier was also used to predict adjuvant chemotherapeutic benefit, which can facilitate individualized treatment for gastric cancer. Combining the demographics, pathological indicators, and physiological characteristics of 939 cases, Lu et al[37] created a novel multimodal hypergraph learning framework to improve the accuracy of survival prediction; the proposed approach outperformed random forest and SVM in overall survival prediction. Another study compared artificial neural networks (ANN) and Bayesian neural networks (BNN) for survival prediction in gastric cancer patients, and the findings indicated that BNN was superior to ANN[38].

Table 4.

Applications of artificial intelligence in gastric cancer prognosis based on different study populations

| Ref. | Year | Country/region | Number of cases | Study population | Methods | Results |
|------|------|----------------|-----------------|------------------|---------|---------|
| Jiang et al[36] | 2018 | China | 786 cases | Hospital | SVM classifier | AUCs (up to 0.834) |
| Lu et al[37] | 2017 | China | 939 patients | Hospital | MMHG | Accuracy (69.28%) |
| Korhani Kangi et al[38] | 2018 | Iran | 339 patients | Hospital | ANN, BNN | Sensitivity (88.2% for ANN, 90.3% for BNN), specificity (95.4% for ANN, 90.9% for BNN) |
| Zhang et al[39] | 2019 | China | 669 cases | Hospital | ML | AUCs (up to 0.831) |
| Liu et al[40] | 2018 | China | 432 GC tissue samples | Hospital | SVM classifier | Accuracy (up to 94.19%) |
| Bollschweiler et al[41] | 2004 | Germany, Japan | 135 cases | Cancer center | ANN | Accuracy (93%) |
| Hensler et al[42] | 2005 | Germany, Japan | 4302 cases | Cancer center | QUEEN technique | Accuracy (72.73%) |
| Jagric et al[43] | 2010 | Slovenia | 213 cases | Clinical center | Learning vector quantization neural networks | Sensitivity (71%), specificity (96.1%) |

SVM: Support vector machine; AUC: Area under the curve; MMHG: Multimodal hypergraph learning framework; ANN: Artificial neural networks; BNN: Bayesian neural network models; ML: Machine learning; GC: Gastric cancer; QUEEN technique: Quality assured efficient engineering of feedforward neural networks with supervised learning.

Recurrence is one of the leading causes of death in gastric cancer patients[63]; therefore, accurate evaluation of recurrence risk is relevant to routine clinical work. Recent reports indicate that AI-assisted recurrence prediction systems achieve better performance than traditional statistical methods. Zhang et al[39] used ML methods to extract radiomic signatures from the CT images of 669 consecutive patients diagnosed with advanced gastric cancer and then constructed a CT-based radiomic model to predict recurrence risk in advanced gastric cancer. Liu et al[40] trained an SVM classifier to predict recurrence in patients with gastric cancer. Using the gene expression profiling dataset GSE26253[64], they discovered a set of feature genes (including PLCG1, PRKACA, and TGFBR1) potentially correlated with gastric cancer recurrence.

Lymph node metastasis is a significant prognostic indicator for gastric cancer[65]. The lack of accurate methods to predict gastric cancer metastasis has led to the application of AI-assisted prediction techniques to better evaluate metastasis risk. Bollschweiler et al[41] demonstrated that artificial neural networks could broadly enhance the predictive accuracy of lymph node metastasis. Hensler et al[42] proposed a novel artificial neural network approach for the preoperative prediction of lymph node metastasis; compared with the Maruyama Diagnostic System developed at the National Cancer Center in Tokyo, the proposed model showed higher accuracy and better reliability. Liver metastases have also been shown to severely diminish the long-term survival of gastric cancer patients. Jagric et al[43] presented learning vector quantization neural networks to predict postoperative liver metastasis in patients with gastric cancer and obtained a reasonably high predictive value.

CHALLENGES AND FUTURE PERSPECTIVES

Despite the reported success of AI in medical image-based diagnosis and prognosis prediction, several barriers must be removed before widespread clinical adoption can occur.

A flexible AI model requires a large amount of well-annotated data for training, validation, and testing, while related research with small sample sizes is prone to measurement error[66]. With advances in medical imaging, such as endoscopy and pathology, large amounts of data are generated continuously to help physicians in clinical diagnosis and decision making. However, such data are rarely labeled or annotated and are therefore unsuitable for algorithm training. Hence, the availability of high-quality data is a significant challenge for the development and optimization of AI. A meaningful way to access qualified datasets is to establish large-scale open-access databases. Moreover, existing data resources should also be utilized effectively: individual hospitals and institutions are encouraged to share validated data to improve the applicability of AI in gastric cancer, as has been done in research on Alzheimer’s disease[67].

An additional hurdle to developing robust algorithms for gastric cancer is the interpretability of AI. In some studies, ML and DL applications achieved higher sensitivity and fewer false positives than radiologists[68,69]. However, they also inevitably run the risk of overfitting, leading to a tradeoff between accuracy and interpretability. In addition, the “black box” nature of these algorithms may make clinicians suspicious of ML applications; Cabitza et al[70] argued that the “black box” of ML may bring unintended negative consequences in clinical practice. Fortunately, recent advances in visualization tools have deepened the visual understanding of algorithmic decision making[71], thus contributing to algorithm optimization and widespread clinical acceptance.
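
One widely used visualization technique of this kind is Grad-CAM (cited above), which highlights the image regions that most influenced a CNN’s prediction. The sketch below is a hedged, generic implementation using forward and backward hooks on a torchvision ResNet (assuming torchvision >= 0.13 and a reasonably recent PyTorch) with a random placeholder input; it illustrates the mechanism, not any of the gastric cancer models discussed above.

import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()          # feature maps of the target layer

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()    # gradients flowing into that layer

layer = model.layer4                             # last convolutional block
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

img = torch.randn(1, 3, 224, 224)                # placeholder input image
score = model(img)[0].max()                      # score of the top predicted class
score.backward()

weights = gradients["value"].mean(dim=(2, 3), keepdim=True)   # channel importance
cam = F.relu((weights * activations["value"]).sum(dim=1))     # weighted activation map
cam = F.interpolate(cam.unsqueeze(1), size=img.shape[-2:], mode="bilinear")
print(cam.shape)   # (1, 1, 224, 224) heat map to overlay on the input image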

Given its advantages in computational power and learning capacity, AI will appear in ever more areas of gastric cancer care. It is increasingly appreciated that the characteristics of the disease, patients’ physiological and psychological states, and even social communication have an impact on the prognosis of gastric cancer patients, and it is difficult for physicians to integrate such complex data manually. AI models are adept at integrating information from vast amounts of data and thus have the potential to reduce the workload of clinicians substantially. However, because of ethical and safety issues, predictions generated by AI require further evaluation and interpretation by professional physicians. AI techniques will therefore not wholly replace physicians in future clinical practice; rather, combining human expertise with AI can achieve the ideal state of higher efficiency.

CONCLUSION

AI techniques, especially ML and DL, are making remarkable progress in the field of gastric cancer. This review has comprehensively introduced the current status and future perspectives of AI-assisted diagnosis and prognosis prediction. Numerous researchers have reported impressive performance of AI models, superior to that of standard statistical methods. Despite several limitations and hurdles, such as the lack of well-annotated data and the limited interpretability of models, AI, with its efficient computational power and learning capacity, will revolutionize the diagnosis and prognosis of gastric cancer in the foreseeable future.

Footnotes

Conflict-of-interest statement: All the authors have no conflict of interest related to the manuscript.

Manuscript source: Invited manuscript

Peer-review started: May 24, 2020

First decision: July 29, 2020

Article in press: August 29, 2020

Specialty type: Gastroenterology and hepatology

Country/Territory of origin: China

Peer-review report’s scientific quality classification

Grade A (Excellent): 0

Grade B (Very good): B, B, B

Grade C (Good): 0

Grade D (Fair): 0

Grade E (Poor): 0

P-Reviewer: Ishizawa K, Kinami S, Srivastava M S-Editor: Gao CC L-Editor: Filipodia P-Editor: Wang LL

Contributor Information

Peng-Hui Niu, Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China.

Lu-Lu Zhao, Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China.

Hong-Liang Wu, Department of Anesthesiology, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China.

Dong-Bing Zhao, Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China.

Ying-Tai Chen, Department of Pancreatic and Gastric Surgery, National Cancer Center/National Clinical Research Center for Cancer/Cancer Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100021, China. yingtaichen@126.com.

References

  • 1.Jemal A, Ward EM, Johnson CJ, Cronin KA, Ma J, Ryerson B, Mariotto A, Lake AJ, Wilson R, Sherman RL, Anderson RN, Henley SJ, Kohler BA, Penberthy L, Feuer EJ, Weir HK. Annual Report to the Nation on the Status of Cancer, 1975-2014, Featuring Survival. J Natl Cancer Inst. 2017;109:djx030. doi: 10.1093/jnci/djx030. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 2.Islami F, DeSantis CE, Jemal A. Incidence Trends of Esophageal and Gastric Cancer Subtypes by Race, Ethnicity, and Age in the United States, 1997-2014. Clin Gastroenterol Hepatol. 2019;17:429–439. doi: 10.1016/j.cgh.2018.05.044. [DOI] [PubMed] [Google Scholar]
  • 3.Steevens J, Botterweck AA, Dirx MJ, van den Brandt PA, Schouten LJ. Trends in incidence of oesophageal and stomach cancer subtypes in Europe. Eur J Gastroenterol Hepatol. 2010;22:669–678. doi: 10.1097/MEG.0b013e32832ca091. [DOI] [PubMed] [Google Scholar]
  • 4.Ferlay J, Colombet M, Soerjomataram I, Mathers C, Parkin DM, Piñeros M, Znaor A, Bray F. Estimating the global cancer incidence and mortality in 2018: GLOBOCAN sources and methods. Int J Cancer. 2019;144:1941–1953. doi: 10.1002/ijc.31937. [DOI] [PubMed] [Google Scholar]
  • 5.Russel S, Norvig P. Artificial Intelligence: A Modern Approach. 2th ed. Pearson Education, 2003. [Google Scholar]
  • 6.Christian R. Machine Learning, a Probabilistic Perspective. Chance. 2014;27:62–63. [Google Scholar]
  • 7.Liu DY, Gan T, Rao NN, Xing YW, Zheng J, Li S, Luo CS, Zhou ZJ, Wan YL. Identification of lesion images from gastrointestinal endoscope based on feature extraction of combinational methods with and without learning process. Med Image Anal. 2016;32:281–294. doi: 10.1016/j.media.2016.04.007. [DOI] [PubMed] [Google Scholar]
  • 8.Ali H, Yasmin M, Sharif M, Rehmani MH. Computer assisted gastric abnormalities detection using hybrid texture descriptors for chromoendoscopy images. Comput Methods Programs Biomed. 2018;157:39–47. doi: 10.1016/j.cmpb.2018.01.013. [DOI] [PubMed] [Google Scholar]
  • 9.Luo H, Xu G, Li C, He L, Luo L, Wang Z, Jing B, Deng Y, Jin Y, Li Y, Li B, Tan W, He C, Seeruttun SR, Wu Q, Huang J, Huang DW, Chen B, Lin SB, Chen QM, Yuan CM, Chen HX, Pu HY, Zhou F, He Y, Xu RH. Real-time artificial intelligence for detection of upper gastrointestinal cancer by endoscopy: a multicentre, case-control, diagnostic study. Lancet Oncol. 2019;20:1645–1654. doi: 10.1016/S1470-2045(19)30637-0. [DOI] [PubMed] [Google Scholar]
  • 10.Sakai Y, Takemoto S, Hori K, Nishimura M, Ikematsu H, Yano T, Yokota H. Automatic detection of early gastric cancer in endoscopic images using a transferring convolutional neural network. Conf Proc IEEE Eng Med Biol Soc. 2018;2018:4138–4141. doi: 10.1109/EMBC.2018.8513274. [DOI] [PubMed] [Google Scholar]
  • 11.Yoon HJ, Kim S, Kim JH, Keum JS, Oh SI, Jo J, Chun J, Youn YH, Park H, Kwon IG, Choi SH, Noh SH. A Lesion-Based Convolutional Neural Network Improves Endoscopic Detection and Depth Prediction of Early Gastric Cancer. J Clin Med. 2019;8:1310. doi: 10.3390/jcm8091310. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12.Nakahira H, Ishihara R, Aoyama K, Kono M, Fukuda H, Shimamoto Y, Nakagawa K, Ohmori M, Iwatsubo T, Iwagami H, Matsuno K, Inoue S, Matsuura N, Shichijo S, Maekawa A, Kanesaka T, Yamamoto S, Takeuchi Y, Higashino K, Uedo N, Matsunaga T, Tada T. Stratification of gastric cancer risk using a deep neural network. JGH Open. 2020;4:466–471. doi: 10.1002/jgh3.12281. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Zhu Y, Wang QC, Xu MD, Zhang Z, Cheng J, Zhong YS, Zhang YQ, Chen WF, Yao LQ, Zhou PH, Li QL. Application of convolutional neural network in the diagnosis of the invasion depth of gastric cancer based on conventional endoscopy. Gastrointest Endosc. 2019;89:806–815.e1. doi: 10.1016/j.gie.2018.11.011. [DOI] [PubMed] [Google Scholar]
  • 14.Wang H, Ding S, Wu DS, Zhang YT, Yang SL. Smart connected electronic gastroscope system for gastric cancer screening using multi-column convolutional neural networks. Int J Prod Res. 2019;57:6795–6806. [Google Scholar]
  • 15.Guimarães P, Keller A, Fehlmann T, Lammert F, Casper M. Deep-learning based detection of gastric precancerous conditions. Gut. 2020;69:4–6. doi: 10.1136/gutjnl-2019-319347. [DOI] [PubMed] [Google Scholar]
  • 16.Miyaki R, Yoshida S, Tanaka S, Kominami Y, Sanomura Y, Matsuo T, Oka S, Raytchev B, Tamaki T, Koide T, Kaneda K, Yoshihara M, Chayama K. A computer system to be used with laser-based endoscopy for quantitative diagnosis of early gastric cancer. J Clin Gastroenterol. 2015;49:108–115. doi: 10.1097/MCG.0000000000000104. [DOI] [PubMed] [Google Scholar]
  • 17.Liu XQ, Wang CL, Hu Y, Zeng Z, Bai JY, Liao GB. Transfer learning with convolutional neural network for early gastric cancer classification on magnifiying narrow-band imaging images. ICIP 2018: Proceedings of the 25th IEEE International Conference on Image Processing; 2018 Oct 07-10; Athens, Greece. New York: IEEE, 2018: 1388-1392. [Google Scholar]
  • 18.Horiuchi Y, Aoyama K, Tokai Y, Hirasawa T, Yoshimizu S, Ishiyama A, Yoshio T, Tsuchida T, Fujisaki J, Tada T. Convolutional Neural Network for Differentiating Gastric Cancer from Gastritis Using Magnified Endoscopy with Narrow Band Imaging. Dig Dis Sci. 2020;65:1355–1363. doi: 10.1007/s10620-019-05862-6. [DOI] [PubMed] [Google Scholar]
  • 19.Li L, Chen Y, Shen Z, Zhang X, Sang J, Ding Y, Yang X, Li J, Chen M, Jin C, Chen C, Yu C. Convolutional neural network for the diagnosis of early gastric cancer based on magnifying narrow band imaging. Gastric Cancer. 2020;23:126–132. doi: 10.1007/s10120-019-00992-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 20.Bergholt MS, Zheng W, Lin K, Ho KY, Teh M, Yeoh KG, Yan So JB, Huang Z. In vivo diagnosis of gastric cancer using Raman endoscopy and ant colony optimization techniques. Int J Cancer. 2011;128:2673–2680. doi: 10.1002/ijc.25618. [DOI] [PubMed] [Google Scholar]
  • 21.Duraipandian S, Sylvest Bergholt M, Zheng W, Yu Ho K, Teh M, Guan Yeoh K, Bok Yan So J, Shabbir A, Huang Z. Real-time Raman spectroscopy for in vivo, online gastric cancer diagnosis during clinical endoscopic examination. J Biomed Opt. 2012;17:081418. doi: 10.1117/1.JBO.17.8.081418. [DOI] [PubMed] [Google Scholar]
  • 22.Li YX, Li XC, Xie XP, Shen LL. Deep learning based gastric cancer identification. ISBI 2018: Proceedings of the 15th IEEE International Symposium on Biomedical Imaging; 2018 Apr 04-07; Washington, DC. New York: IEEE, 2018: 182-185. [Google Scholar]
  • 23.Sharma H, Zerbe N, Klempert I, Hellwich O, Hufnagl P. Deep convolutional neural networks for automatic classification of gastric carcinoma using whole slide images in digital histopathology. Comput Med Imaging Graph. 2017;61:2–13. doi: 10.1016/j.compmedimag.2017.06.001. [DOI] [PubMed] [Google Scholar]
  • 24.Leon F, Gelvez M, Jaimes Z, Gelvez T, Arguello H. Supervised Classification of Histopathological Images Using Convolutional Neuronal Networks for Gastric Cancer Detection. STSIVA 2019: Proceedings of the 22nd Symposium on Image, Signal Processing and Artificial Vision; 2019 Apr 24-26; Bucaramanga, Colombia. New York: IEEE, 2019. [Google Scholar]
  • 25.Iizuka O, Kanavati F, Kato K, Rambeau M, Arihiro K, Tsuneki M. Deep Learning Models for Histopathological Classification of Gastric and Colonic Epithelial Tumours. Sci Rep. 2020;10:1504. doi: 10.1038/s41598-020-58467-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 26.Yoshida H, Shimazu T, Kiyuna T, Marugame A, Yamashita Y, Cosatto E, Taniguchi H, Sekine S, Ochiai A. Automated histological classification of whole-slide images of gastric biopsy specimens. Gastric Cancer. 2018;21:249–257. doi: 10.1007/s10120-017-0731-8. [DOI] [PubMed] [Google Scholar]
  • 27.Garcia E, Hermoza R, Castanon CB, Cano L, Castillo M, Castaneda C. Automatic Lymphocyte Detection on Gastric Cancer IHC Images using Deep Learning. CBMS 2017: Proceedings of the 30th IEEE International Symposium on Computer-Based Medical Systems; 2017 Jun 22-24; Thessaloniki, Greece. New York: IEEE, 2017: 200-204. [Google Scholar]
  • 28.Liang Q, Nan Y, Coppola G, Zou K, Sun W, Zhang D, Wang Y, Yu G. Weakly Supervised Biomedical Image Segmentation by Reiterative Learning. IEEE J Biomed Health Inform. 2019;23:1205–1214. doi: 10.1109/JBHI.2018.2850040. [DOI] [PubMed] [Google Scholar]
  • 29.Qu J, Hiruta N, Terai K, Nosato H, Murakawa M, Sakanashi H. Gastric Pathology Image Classification Using Stepwise Fine-Tuning for Deep Neural Networks. J Healthc Eng. 2018;2018:8961781. doi: 10.1155/2018/8961781. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Sun MY, Zhang GH, Dang H, Qi XQ, Zhou XG, Chang Q. Accurate Gastric Cancer Segmentation in Digital Pathology Images Using Deformable Convolution and Multi-Scale Embedding Networks. IEEE Access. 2019;7:75530–75541. [Google Scholar]
  • 31.Cao GT, Song WL, Zhao ZW. Gastric Cancer Diagnosis with Mask R-CNN. IHMSC 2019: Proceedings of the 11th International Conference on Intelligent Human-Machine Systems and Cybernetics; 2019 Aug 24-25; Hangzhou, China. New York: IEEE, 2019: 60-63. [Google Scholar]
  • 32.Huang Z, Liu D, Chen X, Yu P, Wu J, Song B, Hu J, Wu B. Retrospective imaging studies of gastric cancer: Study protocol clinical trial (SPIRIT Compliant) Medicine (Baltimore) 2020;99:e19157. doi: 10.1097/MD.0000000000019157. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33.Gao Y, Zhang ZD, Li S, Guo YT, Wu QY, Liu SH, Yang SJ, Ding L, Zhao BC, Li S, Lu Y. Deep neural network-assisted computed tomography diagnosis of metastatic lymph nodes from gastric cancer. Chin Med J (Engl) 2019;132:2804–2811. doi: 10.1097/CM9.0000000000000532. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Li C, Shi C, Zhang H, Chen Y, Zhang S. Multiple instance learning for computer aided detection and diagnosis of gastric cancer with dual-energy CT imaging. J Biomed Inform. 2015;57:358–368. doi: 10.1016/j.jbi.2015.08.017. [DOI] [PubMed] [Google Scholar]
  • 35.Li C, Zhang S, Zhang H, Pang L, Lam K, Hui C, Zhang S. Using the K-nearest neighbor algorithm for the classification of lymph node metastasis in gastric cancer. Comput Math Methods Med. 2012;2012:876545. doi: 10.1155/2012/876545. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Jiang Y, Xie J, Han Z, Liu W, Xi S, Huang L, Huang W, Lin T, Zhao L, Hu Y, Yu J, Zhang Q, Li T, Cai S, Li G. Immunomarker Support Vector Machine Classifier for Prediction of Gastric Cancer Survival and Adjuvant Chemotherapeutic Benefit. Clin Cancer Res. 2018;24:5574–5584. doi: 10.1158/1078-0432.CCR-18-0848. [DOI] [PubMed] [Google Scholar]
  • 37.Lu F, Chen ZK, Yuan X, Li Q, Du ZD, Luo L, Zhang FY. MMHG: Multi-modal Hypergraph Learning for Overall Survival After D2 Gastrectomy for Gastric Cancer. DASC/PiCom/DataCom/CyberSciTech 2017: Proceedings of the 15th Intl Conf on Dependable, Autonomic and Secure Computing, 15th Intl Conf on Pervasive Intelligence and Computing, 3rd Intl Conf on Big Data Intelligence and Computing and Cyber Science and Technology Congress; 2017 Nov 6-10; Orlando, FL, USA. California: IEEE Computer Society, 2017: 164-9. [Google Scholar]
  • 38.Korhani Kangi A, Bahrampour A. Predicting the Survival of Gastric Cancer Patients Using Artificial and Bayesian Neural Networks. Asian Pac J Cancer Prev. 2018;19:487–490. doi: 10.22034/APJCP.2018.19.2.487. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 39.Zhang W, Fang M, Dong D, Wang X, Ke X, Zhang L, Hu C, Guo L, Guan X, Zhou J, Shan X, Tian J. Development and validation of a CT-based radiomic nomogram for preoperative prediction of early recurrence in advanced gastric cancer. Radiother Oncol. 2020;145:13–20. doi: 10.1016/j.radonc.2019.11.023. [DOI] [PubMed] [Google Scholar]
  • 40.Liu B, Tan J, Wang X, Liu X. Identification of recurrent risk-related genes and establishment of support vector machine prediction model for gastric cancer. Neoplasma. 2018;65:360–366. doi: 10.4149/neo_2018_170507N326. [DOI] [PubMed] [Google Scholar]
  • 41.Bollschweiler EH, Mönig SP, Hensler K, Baldus SE, Maruyama K, Hölscher AH. Artificial neural network for prediction of lymph node metastases in gastric cancer: a phase II diagnostic study. Ann Surg Oncol. 2004;11:506–511. doi: 10.1245/ASO.2004.04.018. [DOI] [PubMed] [Google Scholar]
  • 42.Hensler K, Waschulzik T, Mönig SP, Maruyama K, Hölscher AH, Bollschweiler E. Quality-assured Efficient Engineering of Feedforward Neural Networks (QUEEN) -- pretherapeutic estimation of lymph node status in patients with gastric carcinoma. Methods Inf Med. 2005;44:647–654. [PubMed] [Google Scholar]
  • 43.Jagric T, Potrc S, Jagric T. Prediction of liver metastases after gastric cancer resection with the use of learning vector quantization neural networks. Dig Dis Sci. 2010;55:3252–3261. doi: 10.1007/s10620-010-1155-z. [DOI] [PubMed] [Google Scholar]
  • 44.Amin MB, Greene FL, Edge SB, Compton CC, Gershenwald JE, Brookland RK, Meyer L, Gress DM, Byrd DR, Winchester DP. The Eighth Edition AJCC Cancer Staging Manual: Continuing to build a bridge from a population-based to a more “personalized” approach to cancer staging. CA Cancer J Clin. 2017;67:93–99. doi: 10.3322/caac.21388. [DOI] [PubMed] [Google Scholar]
  • 45.Sano T, Coit DG, Kim HH, Roviello F, Kassab P, Wittekind C, Yamamoto Y, Ohashi Y. Proposal of a new stage grouping of gastric cancer for TNM classification: International Gastric Cancer Association staging project. Gastric Cancer. 2017;20:217–225. doi: 10.1007/s10120-016-0601-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 46.Choi J, Kim SG, Im JP, Kim JS, Jung HC, Song IS. Comparison of endoscopic ultrasonography and conventional endoscopy for prediction of depth of tumor invasion in early gastric cancer. Endoscopy. 2010;42:705–713. doi: 10.1055/s-0030-1255617. [DOI] [PubMed] [Google Scholar]
  • 47.Krizhevsky A, Sutskever I, Hinton GE. ImageNet classification with deep convolutional neural networks. Commun ACM. 2017;60:84–90. [Google Scholar]
  • 48.Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z. Rethinking the Inception Architecture for Computer Vision. CVPR 2016: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2016 JUN 27-30; Seattle, WA, USA. New York: IEEE, 2016: 2818-2826. [Google Scholar]
  • 49.Simonyan K, Zisserman A. Very Deep Convolutional Networks for Large-Scale Image Recognition; 2014. Preprint. [cited 10 April 2015] Available from: 1409.3301.
  • 50.Ding JR, Cheng HD, Xian M, Zhang YT, Xu F. Local-weighted Citation-kNN algorithm for breast ultrasound image classification. Optik. 2015;126:5188–5193. [Google Scholar]
  • 51.Shen D, Wu G, Suk HI. Deep Learning in Medical Image Analysis. Annu Rev Biomed Eng. 2017;19:221–248. doi: 10.1146/annurev-bioeng-071516-044442. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 52.Correa P, Piazuelo MB. The gastric precancerous cascade. J Dig Dis. 2012;13:2–9. doi: 10.1111/j.1751-2980.2011.00550.x. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 53.Pimentel-Nunes P, Dinis-Ribeiro M, Soares JB, Marcos-Pinto R, Santos C, Rolanda C, Bastos RP, Areia M, Afonso L, Bergman J, Sharma P, Gotoda T, Henrique R, Moreira-Dias L. A multicenter validation of an endoscopic classification with narrow band imaging for gastric precancerous and cancerous lesions. Endoscopy. 2012;44:236–246. doi: 10.1055/s-0031-1291537. [DOI] [PubMed] [Google Scholar]
  • 54.Nakanishi H, Doyama H, Ishikawa H, Uedo N, Gotoda T, Kato M, Nagao S, Nagami Y, Aoyagi H, Imagawa A, Kodaira J, Mitsui S, Kobayashi N, Muto M, Takatori H, Abe T, Tsujii M, Watari J, Ishiyama S, Oda I, Ono H, Kaneko K, Yokoi C, Ueo T, Uchita K, Matsumoto K, Kanesaka T, Morita Y, Katsuki S, Nishikawa J, Inamura K, Kinjo T, Yamamoto K, Yoshimura D, Araki H, Kashida H, Hosokawa A, Mori H, Yamashita H, Motohashi O, Kobayashi K, Hirayama M, Kobayashi H, Endo M, Yamano H, Murakami K, Koike T, Hirasawa K, Miyaoka Y, Hamamoto H, Hikichi T, Hanabata N, Shimoda R, Hori S, Sato T, Kodashima S, Okada H, Mannami T, Yamamoto S, Niwa Y, Yashima K, Tanabe S, Satoh H, Sasaki F, Yamazato T, Ikeda Y, Nishisaki H, Nakagawa M, Matsuda A, Tamura F, Nishiyama H, Arita K, Kawasaki K, Hoppo K, Oka M, Ishihara S, Mukasa M, Minamino H, Yao K. Evaluation of an e-learning system for diagnosis of gastric lesions using magnifying narrow-band imaging: a multicenter randomized controlled study. Endoscopy. 2017;49:957–967. doi: 10.1055/s-0043-111888. [DOI] [PubMed] [Google Scholar]
  • 55.Mirabal YN, Chang SK, Atkinson EN, Malpica A, Follen M, Richards-Kortum R. Reflectance spectroscopy for in vivo detection of cervical precancer. J Biomed Opt. 2002;7:587–594. doi: 10.1117/1.1502675. [DOI] [PubMed] [Google Scholar]
  • 56.Huang G, Liu Z, van der Maaten L. Weinberger KQ. Densely Connected Convolutional Networks. CVPR 2017: Proceedings of the 30th IEEE/CVF Conference on Computer Vision and Pattern Recognition; 2017 Jul 21-26; Honolulu, HI, USA. New York: IEEE, 2017: 2261-2269. [Google Scholar]
  • 57.He K, Zhang X, Ren S, Sun J. Deep Residual Learning for Image Recognition. CVPR 2016: IEEE Conference on Computer Vision & Pattern Recognition. 2016 Jun 27-30; Seattle, WA, United States. New York: IEEE, 2016: 770-778. [Google Scholar]
  • 58.Mukhopadhyay S, Feldman MD, Abels E, Ashfaq R, Beltaifa S, Cacciabeve NG, Cathro HP, Cheng L, Cooper K, Dickey GE, Gill RM, Heaton RP, Jr, Kerstens R, Lindberg GM, Malhotra RK, Mandell JW, Manlucu ED, Mills AM, Mills SE, Moskaluk CA, Nelis M, Patil DT, Przybycin CG, Reynolds JP, Rubin BP, Saboorian MH, Salicru M, Samols MA, Sturgis CD, Turner KO, Wick MR, Yoon JY, Zhao P, Taylor CR. Whole Slide Imaging Versus Microscopy for Primary Diagnosis in Surgical Pathology: A Multicenter Blinded Randomized Noninferiority Study of 1992 Cases (Pivotal Study) Am J Surg Pathol. 2018;42:39–52. doi: 10.1097/PAS.0000000000000948. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 59.Amedei A, Della Bella C, Silvestri E, Prisco D, D'Elios MM. T cells in gastric cancer: friends or foes. Clin Dev Immunol. 2012;2012:690571. doi: 10.1155/2012/690571. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 60.Wang FH, Shen L, Li J, Zhou ZW, Liang H, Zhang XT, Tang L, Xin Y, Jin J, Zhang YJ, Yuan XL, Liu TS, Li GX, Wu Q, Xu HM, Ji JF, Li YF, Wang X, Yu S, Liu H, Guan WL, Xu RH. The Chinese Society of Clinical Oncology (CSCO): clinical guidelines for the diagnosis and treatment of gastric cancer. Cancer Commun (Lond) 2019;39:10. doi: 10.1186/s40880-019-0349-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 61.Muro K, Van Cutsem E, Narita Y, Pentheroudakis G, Baba E, Li J, Ryu MH, Zamaniah WIW, Yong WP, Yeh KH, Kato K, Lu Z, Cho BC, Nor IM, Ng M, Chen LT, Nakajima TE, Shitara K, Kawakami H, Tsushima T, Yoshino T, Lordick F, Martinelli E, Smyth EC, Arnold D, Minami H, Tabernero J, Douillard JY. Pan-Asian adapted ESMO Clinical Practice Guidelines for the management of patients with metastatic gastric cancer: a JSMO-ESMO initiative endorsed by CSCO, KSMO, MOS, SSO and TOS. Ann Oncol. 2019;30:19–33. doi: 10.1093/annonc/mdy502. [DOI] [PubMed] [Google Scholar]
  • 62.Chandra N, Langan D A. Gemstone Detector: Dual Energy Imaging via Fast kVp Switching. In: Johnson T, Fink C, Schönberg S, Reiser M. (eds) Dual Energy CT in Clinical Practice. Medical Radiology. Springer, Berlin, Heidelberg; 2011. pp. 35–41. [Google Scholar]
  • 63.Li F, Zhang R, Liang H, Liu H, Quan J. The pattern and risk factors of recurrence of proximal gastric cancer after curative resection. J Surg Oncol. 2013;107:130–135. doi: 10.1002/jso.23252. [DOI] [PubMed] [Google Scholar]
  • 64.Lee J, Sohn I, Do IG, Kim KM, Park SH, Park JO, Park YS, Lim HY, Sohn TS, Bae JM, Choi MG, Lim DH, Min BH, Lee JH, Rhee PL, Kim JJ, Choi DI, Tan IB, Das K, Tan P, Jung SH, Kang WK, Kim S. Nanostring-based multigene assay to predict recurrence for gastric cancer patients after surgery. PLoS One. 2014;9:e90133. doi: 10.1371/journal.pone.0090133. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 65.Siewert JR, Böttcher K, Stein HJ, Roder JD. Relevant prognostic factors in gastric cancer: ten-year results of the German Gastric Cancer Study. Ann Surg. 1998;228:449–461. doi: 10.1097/00000658-199810000-00002. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 66.Loken E, Gelman A. Measurement error and the replication crisis. Science. 2017;355:584–585. doi: 10.1126/science.aal3618. [DOI] [PubMed] [Google Scholar]
  • 67.Jack CR, Jr, Bernstein MA, Fox NC, Thompson P, Alexander G, Harvey D, Borowski B, Britson PJ, L Whitwell J, Ward C, Dale AM, Felmlee JP, Gunter JL, Hill DL, Killiany R, Schuff N, Fox-Bosetti S, Lin C, Studholme C, DeCarli CS, Krueger G, Ward HA, Metzger GJ, Scott KT, Mallozzi R, Blezek D, Levy J, Debbins JP, Fleisher AS, Albert M, Green R, Bartzokis G, Glover G, Mugler J, Weiner MW. The Alzheimer's Disease Neuroimaging Initiative (ADNI): MRI methods. J Magn Reson Imaging. 2008;27:685–691. doi: 10.1002/jmri.21049. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 68.Chartrand G, Cheng PM, Vorontsov E, Drozdzal M, Turcotte S, Pal CJ, Kadoury S, Tang A. Deep Learning: A Primer for Radiologists. Radiographics. 2017;37:2113–2131. doi: 10.1148/rg.2017170077. [DOI] [PubMed] [Google Scholar]
  • 69.Lan K, Wang DT, Fong S, Liu LS, Wong KKL, Dey N. A Survey of Data Mining and Deep Learning in Bioinformatics. J Med Syst. 2018;42:139. doi: 10.1007/s10916-018-1003-9. [DOI] [PubMed] [Google Scholar]
  • 70.Cabitza F, Rasoini R, Gensini GF. Unintended Consequences of Machine Learning in Medicine. JAMA. 2017;318:517–518. doi: 10.1001/jama.2017.7797. [DOI] [PubMed] [Google Scholar]
  • 71.Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. ICCV 2017: Proceedings of the 16th IEEE International Conference on Computer Vision; 2017 Oct 22-29; Venice, Italy. New York: IEEE, 2017: 618-626. [Google Scholar]
