Health Data Science. 2023 Feb 6;3:0005. doi: 10.34133/hds.0005

The Applications of Artificial Intelligence in Digestive System Neoplasms: A Review

Shuaitong Zhang 1,2, Wei Mu 1,2, Di Dong 3, Jingwei Wei 3, Mengjie Fang 1,2, Lizhi Shao 3, Yu Zhou 3, Bingxi He 1,2, Song Zhang 3, Zhenyu Liu 3,*, Jianhua Liu 4,*, Jie Tian 1,2,3,*
PMCID: PMC10877701  PMID: 38487199

Abstract

Importance

Digestive system neoplasms (DSNs) are the leading cause of cancer-related mortality, with a 5-year survival rate of less than 20%. Subjective evaluation of medical images, including endoscopic images, whole-slide images, computed tomography images, and magnetic resonance images, plays a vital role in the clinical practice of DSNs, but it has limited performance and increases the workload of radiologists and pathologists. The application of artificial intelligence (AI) in medical image analysis holds promise to augment the visual interpretation of medical images; it could not only automate the complicated evaluation process but also convert medical images into quantitative imaging features that are associated with tumor heterogeneity.

Highlights

We briefly introduce the methodology of AI for medical image analysis and then review its clinical applications, including clinical auxiliary diagnosis, assessment of treatment response, and prognosis prediction, in 4 typical DSNs: esophageal cancer, gastric cancer, colorectal cancer, and hepatocellular carcinoma.

Conclusion

AI technology has great potential in supporting the clinical diagnosis and treatment decision-making of DSNs. Several technical issues must still be overcome before it can be applied in the clinical practice of DSNs.

Introduction

Digestive system neoplasms (DSNs) are the leading cause of cancer-related mortality worldwide [1–3]. In 2020, 5 of the top 7 cancer types by estimated deaths belonged to DSNs: esophageal cancer, gastric cancer, colorectal cancer, hepatocellular carcinoma (HCC), and pancreatic cancer [2]. Although clinical treatments have improved, the prognosis of DSN patients remains dismal, with a 5-year survival rate of less than 20% [4, 5]. Apart from the aggressiveness of DSNs, the unsatisfactory prognosis can be attributed to the difficulties of reliable early diagnosis and accurate prediction of treatment response and prognosis [6–8].

Tumor tissue-based genomic and proteomic technologies have shown potential for precision medicine [9, 10]. However, these technologies suffer from the intrinsic limitation that molecular characterization of a small portion of tumor tissue cannot represent the whole tumor because of its spatial and temporal heterogeneity [11, 12]. In contrast, medical imaging such as computed tomography (CT), magnetic resonance imaging (MRI), and positron emission tomography (PET) can characterize the tumor more comprehensively and has been used in clinical routine for preoperative diagnosis and evaluation of treatment response.

Conventional radiological characteristics that originate from radiologists' experience, termed "semantic features", are usually qualitative and subjective [13–15]. Although useful in preoperative diagnosis and treatment response evaluation, these features usually have large interobserver variability and limited predictive performance [13, 15–17]. For example, the Response Evaluation Criteria in Solid Tumors and its revisions rely on simple 1- or 2-dimensional size-based measurements of the tumor, whose efficacy has been questioned in recent years [18, 19]. In comparison, artificial intelligence (AI) algorithms can automatically mine task-specific, high-dimensional, and quantitative features from medical images [6, 20, 21], which could automate the complex processes of diagnosis and treatment response evaluation to assist clinicians and reduce their workload. Notably, AI can mine features that carry powerful predictive value but cannot be detected visually by humans, thereby improving the efficacy of clinical management [22–25].

Here, we first introduce AI algorithms for analyzing medical images and then review representative applications of AI in DSNs, focusing on 4 common DSNs: esophageal cancer, gastric cancer, colorectal cancer, and HCC. Finally, we summarize the current challenges of AI and its potential directions for supporting clinical decision-making in DSNs.

AI Methodology for Medical Image Analysis

AI is a technological science that studies and develops theories, methods, technologies, and application systems for simulating, extending, and expanding human intelligence; it was first introduced at a Dartmouth College conference in 1956 [26, 27]. In the last decade, AI has emerged as a promising approach for supporting clinical management [6, 20]. To achieve this, AI relies mainly on 2 methods: radiomics and deep learning. Both can convert medical images into quantitative features that characterize the tumor phenotype. The analysis workflow of radiomics and deep learning in DSNs includes data collection, region of interest (ROI) segmentation, feature extraction and selection, and model construction (Fig. 1).

Fig. 1.


The AI analysis workflow in digestive system neoplasms. SVM, support vector machine; LASSO, the least absolute shrinkage and selection operator; LN, lymph node; ROC, receiver operating characteristic; AI, artificial intelligence.

Data collection

Large-scale and multicenter datasets with appropriate annotations are usually needed to build a robust and generalizable AI model. However, the acquisition and reconstruction parameters of medical images, such as scanner manufacturer, voxel spacing, and reconstruction method, usually vary among hospitals. These variations induce changes in quantitative features and thereby affect the robustness and generalization of the AI model [28–32]. Several methods can mitigate this influence, including image resampling [33, 34], image intensity normalization with or without references [28, 35], image augmentation such as translation, rotation, flipping, and Gaussian blur [36, 37], and the selection of features that are robust to different parameters [38, 39]. In addition, details of imaging acquisition parameters and protocols should be reported in AI-related studies for reproducibility and comparability.
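As a concrete illustration, the resampling and intensity-normalization steps above can be sketched in a few lines of NumPy. The spacing values, toy volume, and nearest-neighbor scheme are illustrative assumptions, not from any cited study; production pipelines typically use dedicated libraries such as SimpleITK with higher-order interpolation.

```python
import numpy as np

def resample_to_spacing(volume, spacing, target_spacing):
    """Nearest-neighbor resampling of a 3-D volume to a common voxel spacing."""
    old_shape = np.array(volume.shape)
    new_shape = np.round(old_shape * np.array(spacing) / np.array(target_spacing)).astype(int)
    # For each axis, find which original index each new grid position maps to
    idx = [np.minimum(np.round(np.arange(n) * target_spacing[d] / spacing[d]).astype(int),
                      old_shape[d] - 1)
           for d, n in enumerate(new_shape)]
    return volume[np.ix_(*idx)]

def znorm(volume):
    """Z-score intensity normalization (zero mean, unit variance)."""
    return (volume - volume.mean()) / (volume.std() + 1e-8)

ct = np.random.rand(10, 64, 64).astype(np.float32)  # toy scan with 5-mm slices
harmonised = znorm(resample_to_spacing(ct, (5.0, 1.0, 1.0), (1.0, 1.0, 1.0)))
print(harmonised.shape)  # (50, 64, 64): slice axis upsampled to 1-mm spacing
```

Harmonizing voxel spacing and intensity scale in this way is what makes features computed at different hospitals comparable.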

ROI segmentation

After data collection, the subsequent analyses, including ROI segmentation, feature extraction, and model construction, differ between radiomics and deep learning. Radiomics analysis requires precise tumor segmentation by experienced radiologists. However, manual segmentation is labor-intensive and time-consuming and suffers from interobserver variability. Automatic or semiautomatic segmentation algorithms based on convolutional neural networks (CNNs) are promising solutions to these shortcomings. Common automatic segmentation algorithms include the fully convolutional network, U-Net, SegNet, DeepLab, and their variations [40–44]. Previous studies have shown that such algorithms can achieve satisfactory performance, with Dice coefficients larger than 0.9 for organ segmentation [45, 46], whereas their performance for segmenting tumor regions, especially in DSNs, still needs improvement [47, 48]. In contrast, deep learning analysis of medical images needs only a coarse segmentation of the tumor region, such as a bounding box enclosing the whole tumor, which is easy to obtain and feasible to use in clinical practice. Apart from the tumor region itself, several studies have also segmented and analyzed peritumoral regions and found that they carry additional predictive value [13, 17, 39, 49].
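The Dice coefficient mentioned above measures the overlap between a predicted mask and a reference mask; a minimal NumPy sketch follows (the toy masks are invented for illustration):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks (1.0 = perfect overlap)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

truth = np.zeros((8, 8), dtype=bool); truth[2:6, 2:6] = True  # 16-voxel "tumor"
pred = np.zeros((8, 8), dtype=bool); pred[3:7, 3:7] = True    # shifted prediction
print(dice_coefficient(pred, truth))  # 9 overlapping voxels -> 0.5625
```

A Dice value above 0.9, as reported for organ segmentation, means the predicted and reference masks agree on the vast majority of voxels.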

Feature extraction

Both radiomics and deep learning can convert the segmented ROI into high-dimensional quantitative features, but in different manners. Radiomic features are designed manually and can be divided mainly into semantic and agnostic features [14]. Semantic features quantify characteristics that radiologists assess visually, such as tumor shape, necrosis, and enhancement degree [13–15]. Agnostic features are quantitative and can be extracted automatically from the segmented ROI according to designed mathematical expressions; they mainly include histogram-based features, shape- and size-based features, textural features, and filtered features [20]. To facilitate and standardize the extraction of agnostic features, van Griethuysen et al. [50] developed a flexible open-source platform termed Pyradiomics, which many subsequent radiomics studies have used for feature extraction. In contrast to radiomic features, deep learning features are learned automatically from the segmented ROI by CNNs to characterize the tumor phenotype. Generally, the outputs of the fully connected layer in a CNN can be regarded as deep learning features [51].
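For intuition, a few histogram-based (first-order) agnostic features can be computed directly from ROI intensities with NumPy; the feature set and bin count below are illustrative simplifications of what Pyradiomics computes:

```python
import numpy as np

def histogram_features(roi):
    """First-order (histogram-based) features from the intensities inside an ROI."""
    x = roi.ravel().astype(float)
    mu, sd = x.mean(), x.std()
    counts, _ = np.histogram(x, bins=32)       # discretize intensities into 32 bins
    p = counts[counts > 0] / x.size            # probability of each occupied bin
    return {
        "mean": float(mu),
        "std": float(sd),
        "skewness": float(((x - mu) ** 3).mean() / sd ** 3),
        "kurtosis": float(((x - mu) ** 4).mean() / sd ** 4),
        "entropy": float(-(p * np.log2(p)).sum()),
        "p10_p90_range": float(np.percentile(x, 90) - np.percentile(x, 10)),
    }

roi = np.random.default_rng(0).normal(100, 15, size=(30, 30))  # toy ROI intensities
feats = histogram_features(roi)
print(sorted(feats))
```

Textural features (e.g., gray-level co-occurrence matrices) extend this idea by quantifying the spatial arrangement of intensities rather than their distribution alone.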

Feature selection and model construction

High-dimensional quantitative features usually contain many redundant and irrelevant components, which can cause overfitting during model construction [52, 53]. Therefore, feature selection should be performed to build a generalizable model. In radiomics, the most commonly used feature selection algorithms include (a) filter-based methods, such as univariate analysis, variance thresholding, and mutual information-based methods [23, 54, 55]; (b) wrapper-based methods, such as forward stepwise selection and recursive feature elimination [13, 56]; and (c) embedded methods, such as the least absolute shrinkage and selection operator (LASSO) and ridge regression [57, 58]. Combining multiple feature selection methods sequentially is also common in radiomics studies [16, 23, 39]. Considering the small sample sizes in some clinical situations, ensemble feature selection might be an effective approach for selecting more robust features [16, 23]. In deep learning, overfitting is instead mitigated by methods such as L1 and L2 norm regularization and dropout [59]. Afterwards, a predictive or prognostic model is constructed from the selected key features to predict clinical outcomes. Radiomics usually utilizes machine learning algorithms to learn the linear or nonlinear mapping from key features to clinical outcomes; commonly used algorithms include the support vector machine [60], LASSO [57], and random forest [61], with their optimal parameters determined by cross-validation. CNNs are the most commonly used deep learning models for analyzing medical images, such as ResNet [62], Xception [63], and DenseNet [64]. To train the model, a supervised learning strategy that requires clinical labels for all training data is often utilized in both radiomics and deep learning.
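The selection-then-modeling pipeline described above can be sketched with scikit-learn on synthetic data. The LASSO-then-SVM design, data sizes, and signal structure here are illustrative assumptions; note also that in a rigorous study, feature selection should be nested inside cross-validation to avoid optimistic bias.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Toy data standing in for high-dimensional radiomic features: 120 patients,
# 200 features, of which only the first 5 carry signal for the binary label.
rng = np.random.default_rng(42)
X = rng.normal(size=(120, 200))
y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=120) > 0).astype(int)

# Embedded selection (LASSO) picks a sparse subset of features...
lasso = LassoCV(cv=5).fit(StandardScaler().fit_transform(X), y)
keep = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)

# ...and a linear SVM is then trained on the retained features,
# evaluated by cross-validated AUC.
model = make_pipeline(StandardScaler(), SVC(kernel="linear"))
auc = cross_val_score(model, X[:, keep], y, cv=5, scoring="roc_auc").mean()
print(len(keep), round(auc, 2))
```

With only 120 samples and 200 candidate features, skipping the selection step would leave the SVM prone to exactly the overfitting the paragraph above warns about.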

AI Applications in DSNs

In recent years, AI applications in DSNs have increased dramatically and have shown their potential for clinical application (Fig. 2). Here, we will describe the applications regarding diagnosis, evaluation of treatment response, and prognosis in the 4 most common DSNs: esophageal cancer, gastric cancer, colorectal cancer, and HCC (Fig. 3 and Table).

Fig. 2.


Statistics of AI-related studies, including radiomics and deep learning, in the 4 most common digestive system neoplasms. The total number of related publications has risen sharply.

Fig. 3.


The representative applications of AI in digestive system neoplasms. AI, artificial intelligence; TACE, transarterial chemoembolization; KRAS, Kirsten rat sarcoma viral oncogene homolog; NRAS, neuroblastoma rat sarcoma viral oncogene homolog; BRAF, v-Raf murine sarcoma viral oncogene homolog B.

Table.

Specifications of AI studies in the 4 most common digestive system neoplasms

Studies Study design Patient number (training + validation) AI methodology Model architecture Image modality Clinical task
Esophageal cancer
  Guo et al. [65] Retrospective, multicenter study 549 + 2,123 Deep learning SegNet Endoscopy Diagnosis
  de Groof et al. [67] Retrospective and prospective, multicenter study 414 + 255 Deep learning U-Net Endoscopy Diagnosis
  Takeuchi et al. [74] Retrospective, multicohort study 411 + 46 Deep learning VGG-16 CT Diagnosis
  Kawahara et al. [73] Retrospective, single-center study 73 + 31 Radiomics LASSO CT Diagnosis
  Chen et al. [70] Retrospective, single-center study 623 + 110 Radiomics ANN CT Diagnosis
  Qu et al. [72] Retrospective, single-center study 90 + 91 Radiomics Logistic regression MRI Diagnosis
  Li et al. [75] Prospective, multicenter study 203 + 103 Deep learning ResNet CT Treatment response
  Xu et al. [77] Retrospective, single-center study 390 + 168 Deep learning Multi-view multiscale CNN CT Treatment response
  Beukinga et al. [81] Retrospective, single-center study 139 + 60 Radiomics Ensemble model PET Treatment response
  Wang et al. [83] Retrospective, single-center study 116 + 38 Radiomics + deep learning Logistic regression + DenseNet-169 CT Prognosis
  Lin et al. [84] Retrospective, single-center study 171 + 114 Deep learning CNN with attention CT Prognosis
Gastric cancer
  Luo et al. [102] Retrospective and prospective, multicenter study 15,040 + 69,390 Deep learning DeepLab V3+ Endoscopy Diagnosis
  Hu et al. [106] Retrospective, multicenter study 170 + 125 Deep learning VGG-19 Endoscopy Diagnosis
  Dong et al. [17] Retrospective, multicenter study 100 + 454 Radiomics LASSO CT Diagnosis
  Dong et al. [110] Retrospective, multicenter study 225 + 505 Radiomics + deep learning SVM + DenseNet-201 CT Diagnosis
  Cui et al. [124] Retrospective, multicenter study 243 + 476 Radiomics + deep learning LASSO + DenseNet-121 CT Treatment response
  Jiang et al. [121] Retrospective, multicenter study 457 + 1,158 Deep learning CNN CT Treatment response
  Zhang et al. [125] Retrospective, multicenter study 302 + 367 Radiomics + deep learning Logistic regression, CNN CT Prognosis
  Zhang et al. [126] Retrospective, multicenter study 518 + 122 Deep learning ResNet CT Prognosis
Colorectal cancer
  Huang et al. [130] Retrospective, single-center study 326 + 200 Radiomics LASSO CT Diagnosis
  Zhou et al. [131] Retrospective, single-center study 261 + 130 Radiomics LASSO MRI Diagnosis
  Liu et al. [132] Retrospective, multicenter study 193 + 366 Radiomics LASSO CT + MRI Diagnosis
  Li et al. [133] Retrospective, multicenter study 226 + 142 Radiomics Logistic regression CT Diagnosis
  Yamashita et al. [135] Retrospective, multicenter study 115 + 479 Deep learning MobileNetV2 WSI Diagnosis
  Liu et al. [55] Retrospective, single-center study 152 + 70 Radiomics SVM MRI Treatment response
  Jin et al. [142] Retrospective, multicenter study 481 + 141 Deep learning Siamese network MRI Treatment response
  Lu et al. [144] Retrospective, multicenter study 502 + 526 Deep learning Inception-v3 with RNN CT Treatment response
  Shao et al. [146] Retrospective, multicenter study 303 + 678 Radiomics XGBoost MRI + WSI Treatment response
  Feng et al. [147] Retrospective and prospective, multicenter study 303 + 730 Radiomics + deep learning SVM + VGG-19 MRI + WSI Treatment response
  Liu et al. [149] Retrospective, multicenter study 176 + 453 Radiomics LASSO-Cox MRI Prognosis
  Liu et al. [150] Retrospective, multicenter study 84 + 151 Deep learning ResNet-18 MRI Prognosis
  Kather et al. [151] Retrospective, multicenter study 86 + 934 Deep learning VGG-19 WSI Prognosis
Hepatocellular carcinoma
  Gao et al. [156] Retrospective, multicenter study 612 + 111 Deep learning CNN with RNN CT Diagnosis
  Zhen et al. [157] Retrospective, single-center study 1,210 + 201 Deep learning Inception-ResNet V2 MRI Diagnosis
  Gu et al. [158] Retrospective, multicenter study 364 + 91 Deep learning DenseNet-121 CT Diagnosis
  Wei et al. [164] Retrospective and prospective, multicenter study 635 + 115 Deep learning ResNet-18 CT + MRI Diagnosis
  Xu et al. [161] Retrospective, single-center study 350 + 145 Radiomics SVM CT Diagnosis
  Wang et al. [166] Retrospective, single-center study 159 + 68 Radiomics Decision tree MRI Diagnosis
  Chen et al. [167] Retrospective, multicenter study 111 + 33 Radiomics Logistic regression MRI Treatment response
  Liu et al. [169] Retrospective, single-center study 491 + 246 Radiomics SVM CT Treatment response
  Peng et al. [171] Retrospective, multicenter study 139 + 171 Deep learning LeNet-5 MRI Treatment response
  Ji et al. [172] Retrospective, multicenter study 210 + 260 Radiomics Cox regression MRI Prognosis
  Shan et al. [173] Retrospective, single-center study 109 + 47 Radiomics LASSO CT Prognosis

AI, artificial intelligence; LASSO, the least absolute shrinkage and selection operator; ANN, artificial neural network; CNN, convolutional neural network; SVM, support vector machine; RNN, recurrent neural network; CT, computed tomography; MRI, magnetic resonance imaging; PET, positron emission tomography; WSI, whole-slide image.

Esophageal cancer

Esophageal cancer is a common malignancy that includes 2 predominant subtypes: esophageal squamous cell carcinoma (ESCC) and esophageal adenocarcinoma [2]. Despite advances in treatment options, such as neoadjuvant chemoradiotherapy (NCRT) and immunotherapy, the prognosis of patients with esophageal cancer is dismal. Previous studies have shown that AI can help in diagnosis and treatment response evaluation, thereby improving the prognosis of esophageal cancer patients [7, 65–86].

Diagnosis

Endoscopic examination can help diagnose esophageal cancer at an early stage and is used in clinical routine. By quantitatively and automatically analyzing endoscopic images, AI technologies can support the early diagnosis of esophageal cancer [7, 65–69]. Liu et al. [7] developed a CNN-based predictive model for distinguishing esophageal cancer from premalignant lesions, which achieved an accuracy of 0.858 on 1,272 white light endoscopic images from 748 esophageal cancer patients. Guo et al. [65] enrolled a multicenter retrospective cohort consisting of 6,473 narrow-band imaging images from 2,063 patients with precancerous and noncancerous lesions or ESCC. Based on these data, they developed a real-time computer-assisted diagnosis system for ESCC using a deep learning approach, which achieved an area under the curve (AUC) of 0.989 and a sensitivity of 0.980. Furthermore, the system can generate a probability heat map of the cancerous lesion for each endoscopic image, which can assist in the early diagnosis of ESCC. In addition, deep learning has also shown potential for the early diagnosis of esophageal neoplasia in patients with Barrett's esophagus [66–69].

Lymph node metastasis (LNM) status is closely associated with the prognosis of patients with esophageal cancer, and preoperative diagnosis of LNM can aid treatment decision-making, such as extended lymphadenectomy and NCRT. Chen et al. [70] performed CT-based radiomic analysis of 733 patients with esophageal cancer for LNM prediction. Their radiomics model, built with an artificial neural network, achieved an accuracy of 0.907 and outperformed a counterpart built with logistic regression. The potential of the radiomic approach for LNM prediction in esophageal cancer has also been demonstrated in other studies [71, 72]. In addition, radiomics-based approaches can be used to predict differentiation degree [73] and to distinguish esophageal cancer from precancerous lesions [74]. AI models for LNM prediction in previous studies were validated only retrospectively; prospective validation should be performed in the future.

Treatment response and prognosis

Pretreatment evaluation of treatment response is also essential in esophageal cancer to aid individualized treatment decision-making. Many studies have shown that quantitative imaging features from CT images can predict the response to treatments including chemoradiation, immunotherapy plus chemotherapy, and concurrent chemoradiation therapy (CCRT) [75–78]. However, the findings were validated only retrospectively in almost all studies except that of Li et al. [75], who developed a deep learning model for evaluating the response to CCRT in locally advanced ESCC patients. Their predictive model was developed and validated on a prospective, multicenter cohort from 9 Chinese hospitals and achieved satisfactory predictive performance, with an AUC of 0.833 in the validation cohort. NCRT is recommended for locally advanced resectable ESCC patients, and a portion of these patients can achieve pathological complete response (pCR) or local response. Recent studies confirmed that quantitative radiomic features from PET images have potential for predicting NCRT treatment response, including pCR, non-pCR, and local response [79–81]. However, the predictive models developed in these studies should be validated further in larger-scale, prospective cohorts.

Besides treatment response, several AI-related studies have focused on prognosis prediction. Larue et al. [82] found that radiomic features from CT images were associated with the 3-year overall survival of patients with esophageal cancer. Wang et al. [83] found that combining radiomic and deep learning features from CT images achieved better predictive performance for 3-year overall survival. Lin et al. [84] proposed a novel deep learning algorithm for overall survival prediction based on CT images, which achieved better prognostic performance than a radiomic approach and several common deep learning algorithms. The association between PET-derived radiomic features and prognosis has also been confirmed in a few studies [85, 86].

Gastric cancer

Gastric cancer is one of the most common cancers worldwide, with a high mortality rate [2]. The overall treatment effect of gastric cancer is poor. Timely and accurate diagnosis and individualized treatment are key to improving the survival rate and quality of life of patients with gastric cancer [87]. The National Comprehensive Cancer Network clinical practice guideline recommends noninvasive medical imaging technologies such as endoscopy, CT, MRI, and PET/CT as the main examination methods for the pretreatment diagnosis of gastric cancer [88]. As early as 2013, Ba-Ssalamah et al. [89] found that hand-crafted CT features could distinguish gastric adenocarcinoma, gastric lymphoma, and gastrointestinal stromal tumor, suggesting that mining CT texture patterns can quantify tumor heterogeneity. In the following years, many studies analyzed the correlation between image features of gastric cancer and specific clinical problems, such as screening [90], staging [91–94], Lauren classification [95, 96], Borrmann classification [97], treatment response [98, 99], and prognosis [100, 101]. These studies laid the foundation for AI analysis of gastric cancer images. At present, the applications of AI in image analysis of gastric cancer can be divided into diagnosis of categories and subtypes and prediction of treatment response and prognosis.

Diagnosis

Endoscopy is widely used to identify precancerous lesions and early gastric cancer lesions. Luo et al. [102] developed a real-time AI diagnosis system for upper gastrointestinal cancer based on more than 1 million white light endoscopy images from more than 80,000 patients. The system, modified from the DeepLab-V3 network architecture, can determine whether there are lesions in the input image and segment the suspected area in real time. Moreover, based on white light endoscopy images, Wu et al. [103, 104] used CNN and deep reinforcement learning methods to develop an AI endoscopic diagnosis system called ENDOANGEL and carried out a randomized controlled trial in 5 hospitals. They found that patients examined with ENDOANGEL had fewer blind spots than the control group, and the detection accuracy of the system was 84.7%, significantly better than manual interpretation; however, the inspection time with ENDOANGEL also increased slightly (5.40 vs. 4.38 min). In addition, they developed an AI system for magnifying image-enhanced endoscopy [105]. Hu et al. [106] constructed and validated a deep learning model for the early diagnosis of gastric cancer based on magnifying endoscopy with narrow-band imaging (ME-NBI). Starting from a VGG-19 network pretrained on the public ILSVRC-2012 dataset, they used transfer learning to fine-tune the model parameters. Gradient-weighted class activation mapping (Grad-CAM) [107] was used to visualize the ME-NBI regions on which the AI model focused when making decisions. This technique can alleviate the poor interpretability of deep learning models and thus enhance clinicians' trust in them. In addition, AI has also been applied to evaluate the invasion depth of lesions, which is difficult for gastroscopy-based manual observation [108].
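The core Grad-CAM computation is simple enough to sketch with NumPy, assuming one has already extracted the last convolutional layer's activations and the gradients of the target class score with respect to them (the array shapes here are illustrative):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heat map from the last conv layer's activations (C, H, W) and
    the gradients of the target class score w.r.t. those activations."""
    weights = gradients.mean(axis=(1, 2))  # global-average-pool the gradients
    cam = np.maximum((weights[:, None, None] * activations).sum(axis=0), 0)  # weighted sum + ReLU
    return cam / cam.max() if cam.max() > 0 else cam  # normalize to [0, 1]

rng = np.random.default_rng(1)
acts = rng.random((64, 7, 7))        # toy feature maps
grads = rng.normal(size=(64, 7, 7))  # toy gradients
heatmap = grad_cam(acts, grads)      # upsample to image size for overlay in practice
print(heatmap.shape)
```

In an endoscopic setting, the upsampled heat map is overlaid on the input frame so that clinicians can see which mucosal regions drove the model's decision.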

CT is the most commonly used noninvasive diagnostic technique for gastric cancer, especially for assessing locoregional staging and distant metastasis. The development of CT-based AI prediction systems for gastric cancer has become a research hotspot. Ma et al. [109] analyzed venous-phase CT images using radiomics and constructed a predictive model for distinguishing Borrmann type IV gastric cancer from primary gastric lymphoma, which achieved an AUC of 0.827. Dong et al. [17] proposed a predictive model for identifying occult peritoneal metastasis of gastric cancer based on CT image features of the primary tumor and the peritoneal microenvironment. On the multicenter validation cohorts, this model yielded an accuracy of more than 85% for patients with peritoneal metastasis who had previously been missed by CT-based clinical diagnosis, indicating that it could reduce the risk of unnecessary surgical treatment for patients with occult peritoneal metastasis. This work has been cited in the guidelines for the diagnosis and treatment of gastric cancer published by the Chinese Society of Clinical Oncology (CSCO) for 3 consecutive years (2019 to 2021). In addition, their team used deep learning algorithms to predict other clinical indicators of gastric cancer, such as LNM [110], serosa invasion [111], and pathological type [112]. Aiming to predict LNM of gastric cancer, Gao et al. [113], Liu et al. [114], and Meng et al. [115] used machine learning algorithms to build prediction models based on CT-derived radiomic features. Furthermore, Jin et al. [116] and Zhang et al. [117] constructed deep-learning-based models for predicting LNM; the former can analyze the metastasis of regional nodal stations one by one, and the latter outputs lesion segmentations while predicting metastasis.

Treatment response and prognosis

Direct prediction of the treatment effect is expected to help clinicians make treatment decisions for gastric cancer patients [118, 119]. Jiang et al. [120, 121] successively used radiomics and deep learning approaches to mine the imaging information of gastric cancer on PET/CT or CT images to predict which patients would benefit from chemotherapy. The National Comprehensive Cancer Network guideline for gastric cancer recommends neoadjuvant chemotherapy (NAC) combined with surgery for locally advanced gastric cancer [88]. However, marked individual differences in NAC response have been observed in clinical practice, and at least 20% of patients do not benefit. Researchers have therefore constructed several NAC response prediction models using AI methods [122–124] and achieved improved prediction performance. In addition, there is still a lack of high-precision clinical indicators for evaluating the prognosis of patients with gastric cancer. Although TNM staging is generally used as a reference for stratifying patients' risks, biomarkers more directly related to prognosis still need to be studied. Many studies have shown that AI prediction models can learn deeper survival-related imaging phenotypes by optimizing the design of deep learning networks [125–127]. Zhang et al. [127] proposed a knowledge-guided multitask network that enhances the acquisition and use of key image features through an attention module and exploits the information shared across tasks in multitask learning to improve the prediction of survival risk. Jiang et al. [128] combined 2 clinical events, peritoneal recurrence and disease-free survival, and predicted them simultaneously through multitask learning to improve the network's capacity for feature extraction and relationship mapping. Their model could effectively identify high-risk patients who need intensive treatment.

Colorectal cancer

Colorectal cancer is common and the second leading cause of cancer-related mortality worldwide [2]. AI has also been widely applied in colorectal cancer in recent years.

Diagnosis

Preoperative prediction of LNM can aid pretreatment decision-making, such as adjuvant therapy and lymph node dissection. Currently, MRI and CT are widely used in the diagnosis and staging of colorectal cancer in clinical practice [129]; they objectively reflect the tumor macroenvironment and depict tumor heterogeneity noninvasively. Huang et al. [130] analyzed portal venous-phase CT images from 526 patients with colorectal cancer by radiomics and constructed a radiomics nomogram for clinical use, with an AUC of 0.778. The predictive value of MRI-based radiomic features for LNM has also been confirmed [131]. Addressing the same clinical problem, Liu et al. [132] developed a predictive model based on both CT and MRI and found that the model incorporating radiomic features from both modalities performed better than models based on either modality alone. In addition, CT-based radiomic features were also associated with the microsatellite instability of colorectal cancer [133].
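The benefit of combining modalities, as in the CT-plus-MRI model above, can be illustrated with a late-fusion (feature concatenation) sketch on synthetic data using scikit-learn; the data sizes and signal structure are invented assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 200
ct = rng.normal(size=(n, 10))   # stand-in CT radiomic features
mri = rng.normal(size=(n, 10))  # stand-in MRI radiomic features
# The label depends on one feature from each modality, so neither alone suffices
y = (ct[:, 0] + mri[:, 0] + rng.normal(size=n) > 0).astype(int)

def cv_auc(X):
    """Cross-validated AUC of a logistic model on a given feature matrix."""
    return cross_val_score(LogisticRegression(max_iter=1000), X, y,
                           cv=5, scoring="roc_auc").mean()

# Each modality sees only part of the signal; concatenating them recovers both
print(round(cv_auc(ct), 2), round(cv_auc(mri), 2),
      round(cv_auc(np.hstack([ct, mri])), 2))
```

Concatenation is the simplest fusion strategy; the reviewed studies also explore learned fusion, where a network combines modality-specific representations.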

Pathological examination is the gold standard for colorectal cancer diagnosis. Unlike CT and MRI, hematoxylin-and-eosin-stained whole-slide images (WSIs) not only give a picture of the tumor microenvironment but also provide abundant microscopic information that cannot be distinguished visually yet can reflect molecular characteristics or heredity. Pathomics, like radiomics, has achieved promising results in cancer diagnosis. Pathomic features have been used to improve initial nodal staging in T3 rectal cancer [134], and combining them with radiomic features in a radiopathomics model produced even better predictions [134]. Besides, several studies have revealed that deep-learning-based models in histopathology have potential for predicting microsatellite instability [135, 136], the status of key molecular pathways and key mutations [137], and molecular subtypes [138], which could serve as an alternative and result in time and cost savings in the clinical workflow.

Treatment response and prognosis

The development of AI prediction models for evaluating the response to NCRT in locally advanced colorectal cancer has attracted much attention. MRI-based radiomic features were shown to be associated with post-NCRT treatment response, including pathological complete/good response, local response, and no response [55, 139–141]. For instance, Liu et al. [55] analyzed T2-weighted and diffusion-weighted MR images from 222 colorectal cancer patients and proposed a radiomics-based AI model for predicting pCR, which achieved an AUC of 0.976 in the validation cohort. Deep-learning-based AI models for predicting the treatment response of colorectal cancer patients have also been developed [142–145]. Lu et al. [144] analyzed serial CT images from 1,028 metastatic colorectal cancer patients and developed a deep learning model for predicting early on-treatment response; it performed better than the tumor size change-based criteria used in current clinical practice. Recently, researchers have attempted to build more accurate prediction models by incorporating multisource tumor images that jointly improve the description of tumor heterogeneity. Some researchers have combined macroscopic and microscopic characterizations of tumor heterogeneity to construct an "image-pathology" description of the tumor [146, 147], finding that combined analysis of MRI and WSI achieved better performance for predicting pCR in locally advanced rectal cancer. Predicting treatment efficacy in colorectal cancer based on AI analysis of medical images has also been recognized by experts and societies, and the study by Shao et al. [146] has been cited in the 2021 edition of the CSCO Colorectal Cancer Diagnosis and Treatment Guidelines.

In addition, AI has been applied successfully to predict local recurrence [148], disease-free survival [149–152], and overall survival [148, 152, 153] in retrospective cohorts of colorectal cancer patients. Such early risk stratification, in turn, supports precise treatment decisions and may improve patient survival.

Hepatocellular carcinoma

HCC accounts for almost 90% of primary liver cancers [2]. Along with the advances in AI technology in recent years, radiomics and deep learning methods have proven their potential for the diagnosis and prognosis prediction of HCC.

Diagnosis

Previous studies have shown that AI-based diagnostic models for HCC achieve higher accuracy at lower time cost than manual diagnosis and can also help less experienced clinicians improve diagnostic efficiency [154, 155]. Hamm et al. [155] analyzed 494 hepatic lesions with multiphase MRI and developed a deep learning system for liver tumor diagnosis that outperformed radiologists in classifying HCC (sensitivity: 90% versus 60% to 70%). Several studies showed that AI models incorporating both quantitative MRI features and clinical data provide better diagnostic performance for HCC than models based on either source alone [156, 157]. AI has also been used to predict the histological grade of HCC [158, 159].

Predicting microvascular invasion (MVI) of HCC with AI is another research hotspot. MVI is closely associated with posthepatectomy recurrence in HCC patients [160], so preoperative prediction of MVI can help tailor the surgical strategy. Radiomic features from contrast-enhanced CT and MRI have been shown to be closely related to MVI [161, 162]. Deep learning has also shown potential for predicting MVI of HCC [163, 164]. Wei et al. [164] collected both contrast-enhanced CT images and gadoxetic-acid-enhanced MRI from HCC patients and developed a deep-learning-based prediction model for MVI, finding that the model based on gadoxetic-acid-enhanced MRI outperformed the one based on contrast-enhanced CT.

In addition, MRI-based radiomics analysis can be used to predict programmed cell death protein 1/programmed death-ligand 1 expression [165] and cytokeratin 19 status [166], demonstrating the potential of MRI combined with AI techniques to yield noninvasive biomarkers.

Treatment response and prognosis

AI-based prediction of treatment response and prognosis can assist in selecting individualized treatments for HCC patients. Previous studies have demonstrated the potential of AI in predicting posthepatectomy liver failure in HCC patients [167, 168] and the response to transarterial chemoembolization [169–171]. Ji et al. [172] developed and validated an AI model for predicting the recurrence risk of HCC patients after surgical resection based on 470 contrast-enhanced CT images from 3 independent institutions; it achieved better prediction performance than current staging systems. AI can also help predict recurrence in HCC patients treated with ablation [173]. Furthermore, AI can be used to predict the risk of HCC in chronic hepatitis B patients [174, 175] and to detect local tumor progression [176].

Challenges and Future Opportunities

Published studies have shown the great potential of AI in supporting clinical diagnosis and treatment decision-making for DSNs. However, several challenges must be overcome before AI can be applied in the clinical practice of DSNs.

Training a robust and clinically applicable AI model, especially a deep learning model, for a specific clinical problem usually requires large-scale, well-annotated image data. Although a large number of medical images of DSNs exist, well-annotated image data are limited. To alleviate this issue, transfer learning is widely used: an AI model is first trained on the publicly available ImageNet dataset, which contains over 14 million natural images in 1,000 classes [177], and its weights are then fine-tuned on the in-house medical imaging dataset. Nevertheless, because of the large differences between natural and medical images, weights learned on natural images may not suit medical images, often yielding a suboptimal model for the given clinical task. Mining valuable information from more easily collected unlabeled data is therefore another option. Semisupervised strategies, such as the mean teacher network [178], and self-supervised strategies, such as contrastive learning [179], are commonly used to exploit both a large unlabeled dataset and a limited well-annotated dataset; they have shown improved predictive performance over fully supervised models trained on the well-annotated dataset alone. Several published studies have developed and validated AI models for auxiliary diagnosis and treatment response prediction in DSNs based on large-scale, well-annotated image datasets [65, 102, 121]. These high-quality datasets, however, are usually not publicly available, which hinders the validation and comparison of different AI models. Data sharing is therefore vital for building robust and clinically applicable AI models.
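The fine-tuning recipe can be sketched in a framework-agnostic way: keep the pretrained feature extractor frozen and retrain only a small task head on the limited labeled data. Below is a minimal NumPy sketch in which a fixed random nonlinear projection stands in for a pretrained backbone and synthetic labels stand in for annotations; this is an illustrative assumption, since a real pipeline would load, e.g., ImageNet weights into a CNN:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "pretrained" backbone: a fixed (frozen) nonlinear projection.
W_backbone = rng.normal(size=(16, 32)) * 0.25

def extract_features(x):
    # Frozen backbone: no gradient updates are ever applied here.
    return np.tanh(x @ W_backbone)

# Small labeled dataset standing in for the well-annotated medical images.
X = rng.normal(size=(200, 16))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(float)

# Task head: logistic regression, the only trainable part.
w, b = np.zeros(32), 0.0
feats = extract_features(X)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(feats @ w + b)))  # sigmoid
    w -= 0.5 * (feats.T @ (p - y)) / len(y)     # cross-entropy gradient step
    b -= 0.5 * float(np.mean(p - y))

acc = float(np.mean(((feats @ w + b) > 0) == (y > 0.5)))
print(f"training accuracy of the fine-tuned head: {acc:.2f}")
```

Because only the head's parameters are updated, the scarce annotated data are spent on a few dozen weights rather than on the whole network, which is the core rationale for transfer learning with limited medical annotations.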

At present, the majority of published studies on AI in DSNs rely on accurate but labor-intensive and time-consuming region-of-interest (ROI) segmentation by radiologists, which might hinder clinical application. Fully automatic segmentation algorithms based on deep learning promise to alleviate this issue and have achieved satisfactory performance for multiple organs [46], such as the esophagus, liver, and stomach. However, automatic segmentation of the DSNs themselves remains limited by tumor complexity [47]. Alternatively, modeling on whole-organ images not only avoids accurate segmentation of DSNs but may also outperform tumor-region-based modeling because it additionally analyzes the peritumoral microenvironment. Wang et al. [180] found that deep learning analysis of the whole lung achieved better performance for predicting EGFR mutation status than analysis of the lung cancer region alone.
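Segmentation quality in such studies is commonly quantified with the Dice similarity coefficient between the predicted mask and the radiologist's reference annotation. A minimal sketch with toy 2D masks (real evaluations use 3D masks, but the formula is identical):

```python
import numpy as np

def dice(pred, ref, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2*|A & B| / (|A| + |B|); 1.0 means perfect overlap."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    return 2.0 * inter / (pred.sum() + ref.sum() + eps)

# Toy example: a 4x4 reference mask and a shifted prediction.
ref = np.zeros((4, 4), dtype=int)
ref[1:3, 1:3] = 1        # 4 reference voxels
pred = np.zeros((4, 4), dtype=int)
pred[1:3, 2:4] = 1       # 4 predicted voxels, 2 of them overlapping
print(round(dice(pred, ref), 3))  # 0.5
```

The small `eps` term keeps the score defined when both masks are empty, a common convention in segmentation benchmarks.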

Another limitation of such studies in DSNs is the limited interpretability of AI models, especially deep learning models. Although AI has demonstrated success in diagnosing DSNs and evaluating their treatment response, it is often questioned because of its black-box nature. Recently, several methods have been proposed to visualize deep learning features and prediction models, such as class activation mapping (CAM) [181], Grad-CAM [107], and Ablation-CAM [182]. By visualizing where the trained model attends, these methods can explain the model to some extent. In addition, almost all AI models in published studies have been validated on retrospective cohorts; before clinical application, they should be validated in large-scale prospective cohorts from multiple centers.
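The core computation behind CAM-style visualization is simple: the heatmap for a class is a classifier-weighted sum over the channels of the last convolutional feature maps, which is then upsampled to the input size and overlaid on the image. A minimal NumPy sketch with made-up feature maps and weights (upsampling and overlay omitted):

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """CAM for one class: weighted sum over the K channels of the
    K x H x W feature maps, then min-max normalized to [0, 1]."""
    cam = np.tensordot(class_weights, feature_maps, axes=([0], [0]))
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam  # H x W heatmap

# Toy example: 3 channels of 4x4 feature maps from a hypothetical model.
rng = np.random.default_rng(1)
fmaps = rng.random((3, 4, 4))
w_c = np.array([0.8, 0.1, 0.1])  # hypothetical classifier weights for one class
cam = class_activation_map(fmaps, w_c)
print(cam.shape)  # (4, 4)
```

Grad-CAM generalizes this by replacing the classifier weights with gradients of the class score with respect to the feature maps, so it also applies to architectures without a global-pooling classifier head.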

Previous studies have shown that AI models incorporating both MRI and WSIs can outperform models based on either modality alone [146, 147]. This indicates that AI can aggregate multiple sources of information, and incorporating more of them, such as radiographic images, pathologic images, genomics, proteomics, and diagnosis reports, might yield a more powerful predictive system.

In addition, AI mainly extracts quantitative features from reconstructed medical images, such as CT and MRI, and has achieved impressive results in a variety of clinical auxiliary diagnosis and treatment tasks [149, 183, 184]. However, the image reconstruction procedure inevitably causes information loss, distortion, and variation among medical images, introducing irretrievable bias into the subsequent analysis. The use of AI to construct a direct mapping from raw signals to knowledge has therefore attracted the attention of some researchers [185–188] and is expected to bring new breakthroughs for precision medicine. In this setting, decoupling the key features from large raw data is an essential technical problem to be solved.

In conclusion, a large number of studies have shown the potential of AI for aiding diagnosis and predicting treatment response and prognosis in DSNs. However, several issues must be resolved before it can enter the clinical practice of DSNs.

Acknowledgments

We thank all participants for their efforts and contributions to this study. We thank the School of Engineering Medicine, Beihang University and the Chinese Academy of Sciences Key Laboratory of Molecular Imaging for their support of this research project.

Funding: This research was supported by grants from the National Natural Science Foundation of China (82102140, 62027901, 81930053, 82022036, 81971776, and 91959205) and the Beijing Natural Science Foundation (Z20J00105).

Author contributions: All authors contributed and approved the final manuscript. Shuaitong Zhang, W.M., D.D., Z.L., and J.T. designed the study. Shuaitong Zhang, J.W., M.F., L.S., Y.Z., and Song Zhang performed a literature search and drafted the manuscript. Shuaitong Zhang, W.M., D.D., and J.L. revised the manuscript.

Competing interests: The authors declare that they have no competing interests.

Data Availability

No new data were created for this manuscript.

References

1. Bray F, Jemal A, Grey N, Ferlay J, Forman D. Global cancer transitions according to the Human Development Index (2008–2030): A population-based study. Lancet Oncol. 2012; 13(8): 790–801.
2. Sung H, Ferlay J, Siegel RL, Laversanne M, Soerjomataram I, Jemal A, Bray F. Global Cancer Statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin. 2021; 71(3): 209–249.
3. Siegel RL, Miller KD, Jemal A. Cancer statistics, 2019. CA Cancer J Clin. 2019; 69(1): 7–34.
4. Wang S, Lin Y, Xiong X, Wang L, Guo Y, Chen Y, Chen S, Wang G, Lin P, Chen H, et al. Low-dose metformin reprograms the tumor immune microenvironment in human esophageal cancer: Results of a phase II clinical trial. Clin Cancer Res. 2020; 26(18): 4921–4932.
5. Zheng Y, Chen Z, Han Y, Han L, Zou X, Zhou B, Hu R, Hao J, Bai S, Xiao H, et al. Immune suppressive landscape in the human esophageal squamous cell carcinoma microenvironment. Nat Commun. 2020; 11(1): 6268.
6. Bi WL, Hosny A, Schabath MB, Giger ML, Birkbak NJ, Mehrtash A, Allison T, Arnaout O, Abbosh C, Dunn IF, et al. Artificial intelligence in cancer imaging: Clinical challenges and applications. CA Cancer J Clin. 2019; 69(2): 127–157.
7. Liu G, Hua J, Wu Z, Meng T, Sun M, Huang P, He X, Sun W, Li X, Chen Y. Automatic classification of esophageal lesions in endoscopic images using a convolutional neural network. Ann Transl Med. 2020; 8(7): 486.
8. Minami Y, Kudo M. Imaging modalities for assessment of treatment response to nonsurgical hepatocellular carcinoma therapy: Contrast-enhanced US, CT, and MRI. Liver Cancer. 2015; 4(2): 106–114.
9. Guo X, Lv X, Ru Y, Zhou F, Wang N, Xi H, Zhang K, Li J, Chang R, Xie T, et al. Circulating exosomal gastric cancer-associated long noncoding RNA1 as a biomarker for early detection and monitoring progression of gastric cancer: A multiphase study. JAMA Surg. 2020; 155(7): 572–579.
10. Huang G, Wei B, Chen Z, Wang J, Zhao L, Peng X, Liu K, Lai Y, Ni L. Identification of a four-microRNA panel in serum as promising biomarker for colorectal carcinoma detection. Biomark Med. 2020; 14(9): 749–760.
11. Forner A, Reig M, Bruix J. Hepatocellular carcinoma. Lancet. 2018; 391(10127): 1301–1314.
12. Gerlinger M, Rowan AJ, Horswell S, Math M, Larkin J, Endesfelder D, Gronroos E, Martinez P, Matthews N, Stewart A, et al. Intratumor heterogeneity and branched evolution revealed by multiregion sequencing. N Engl J Med. 2012; 366(10): 883–892.
13. Tan Y, Zhang ST, Wei JW, Dong D, Wang XC, Yang GQ, Tian J, Zhang H. A radiomics nomogram may improve the prediction of IDH genotype for astrocytoma before surgery. Eur Radiol. 2019; 29(7): 3325–3337.
14. Gillies RJ, Kinahan PE, Hricak H. Radiomics: Images are more than pictures, they are data. Radiology. 2016; 278(2): 563–577.
15. Liu Y, Kim J, Qu F, Liu S, Wang H, Balagurunathan Y, Ye Z, Gillies RJ. CT features associated with epidermal growth factor receptor mutation status in patients with lung adenocarcinoma. Radiology. 2016; 280(1): 271–280.
16. Zhang S, Huang S, He W, Wei J, Huo L, Jia N, Lin J, Tang Z, Yuan Y, Tian J, et al. Radiomics-based preoperative prediction of lymph node metastasis in intrahepatic cholangiocarcinoma using contrast-enhanced computed tomography. Ann Surg Oncol. 2022; 29(11): 6786–6799.
17. Dong D, Tang L, Li ZY, Fang MJ, Gao JB, Shan XH, Ying XJ, Sun YS, Fu J, Wang XX, et al. Development and validation of an individualized nomogram to identify occult peritoneal metastasis in patients with advanced gastric cancer. Ann Oncol. 2019; 30(3): 431–438.
18. Therasse P, Arbuck SG, Eisenhauer EA, Wanders J, Kaplan RS, Rubinstein L, Verweij J, Van Glabbeke M, Oosterom AT, Christian MC, et al. New guidelines to evaluate the response to treatment in solid tumors. European Organization for Research and Treatment of Cancer, National Cancer Institute of the United States, National Cancer Institute of Canada. J Natl Cancer Inst. 2000; 92(3): 205–216.
19. Eisenhauer EA, Therasse P, Bogaerts J, Schwartz LH, Sargent D, Ford R, Dancey J, Arbuck S, Gwyther S, Mooney M, et al. New response evaluation criteria in solid tumours: Revised RECIST guideline (version 1.1). Eur J Cancer. 2009; 45(2): 228–247.
20. Aerts HJ, Velazquez ER, Leijenaar RT, Parmar C, Grossmann P, Carvalho S, Bussink J, Monshouwer R, Haibe-Kains B, Rietveld D, et al. Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach. Nat Commun. 2014; 5: 4006.
21. Zhang S, Chen Z, Wei J, Chi X, Zhou D, Ouyang S, Peng J, Xiao H, Tian J, Liu Y. A model based on clinico-biochemical characteristics and deep learning features from MR images for assessing necroinflammatory activity in chronic hepatitis B. J Viral Hepat. 2021; 28(11): 1656–1659.
22. Gulshan V, Peng L, Coram M, Stumpe MC, Wu D, Narayanaswamy A, Venugopalan S, Widner K, Madams T, Cuadros J, et al. Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA. 2016; 316(22): 2402–2410.
23. Zhang S, Song G, Zang Y, Jia J, Wang C, Li C, Tian J, Dong D, Zhang Y. Non-invasive radiomics approach potentially predicts non-functioning pituitary adenomas subtypes before surgery. Eur Radiol. 2018; 28(9): 3692–3701.
24. Lu MY, Chen TY, Williamson DFK, Zhao M, Shady M, Lipkova J, Mahmood F. AI-based pathology predicts origins for cancers of unknown primary. Nature. 2021; 594(7861): 106–110.
25. Wang G, Liu X, Shen J, Wang C, Li Z, Ye L, Wu X, Chen T, Wang K, Zhang X, et al. A deep-learning pipeline for the diagnosis and discrimination of viral, non-viral and COVID-19 pneumonia from chest X-ray images. Nat Biomed Eng. 2021; 5(6): 509–521.
26. Howard J. Artificial intelligence: Implications for the future of work. Am J Ind Med. 2019; 62(11): 917–926.
27. Su Z. Artificial intelligence in the auxiliary guidance function of athletes' movement standard training in physical education. J Circuits Syst Comput. 2022; 31: 2240001.
28. Escudero Sanchez L, Rundo L, Gill AB, Hoare M, Mendes Serrao E, Sala E. Robustness of radiomic features in CT images with different slice thickness, comparing liver tumour and muscle. Sci Rep. 2021; 11(1): 8262.
29. Mackin D, Fave X, Zhang L, Fried D, Yang J, Taylor B, Rodriguez-Rivera E, Dodge C, Jones AK, Court L. Measuring computed tomography scanner variability of radiomics features. Invest Radiol. 2015; 50(11): 757–765.
30. Saeedi E, Dezhkam A, Beigi J, Rastegar S, Yousefi Z, Mehdipour LA, Abdollahi H, Tanha K. Radiomic feature robustness and reproducibility in quantitative bone radiography: A study on radiologic parameter changes. J Clin Densitom. 2019; 22(2): 203–213.
31. Lubner MG, Stabo N, Lubner SJ, Rio AM, Song C, Halberg RB, Pickhardt PJ. CT textural analysis of hepatic metastatic colorectal cancer: Pre-treatment tumor heterogeneity correlates with pathology and clinical outcomes. Abdom Imaging. 2015; 40(7): 2331–2337.
32. Varghese BA, Hwang D, Cen SY, Lei X, Levy J, Desai B, Goodenough DJ, Duddalwar VA. Identification of robust and reproducible CT-texture metrics using a customized 3D-printed texture phantom. J Appl Clin Med Phys. 2021; 22(2): 98–107.
33. Shafiq-Ul-Hassan M, Zhang GG, Latifi K, Ullah G, Hunt DC, Balagurunathan Y, Abdalah MA, Schabath MB, Goldgof DG, Mackin D, et al. Intrinsic dependencies of CT radiomic features on voxel size and number of gray levels. Med Phys. 2017; 44(3): 1050–1062.
34. Ger RB, Zhou S, Chi PM, Lee HJ, Layman RR, Jones AK, Goff DL, Fuller CD, Howell RM, Li H, et al. Comprehensive investigation on controlling for CT imaging variabilities in radiomics studies. Sci Rep. 2018; 8(1): 13047.
35. Park HJ, Lee SS, Park B, Yun J, Sung YS, Shim WH, Shin YM, Kim SY, Lee SJ, Lee MG. Radiomics analysis of gadoxetic acid-enhanced MRI for staging liver fibrosis. Radiology. 2019; 290(2): 380–387.
36. Klang E, Barash Y, Levartovsky A, Barkin Lederer N, Lahat A. Differentiation between malignant and benign endoscopic images of gastric ulcers using deep learning. Clin Exp Gastroenterol. 2021; 14: 155–162.
37. Sali R, Moradinasab N, Guleria S, Ehsan L, Fernandes P, Shah TU, Syed S, Brown DE. Deep learning for whole-slide tissue histopathology classification: A comparative study in the identification of dysplastic and non-dysplastic Barrett's esophagus. J Pers Med. 2020; 10(4): 141.
38. Gallivanone F, Interlenghi M, D'Ambrosio D, Trifirò G, Castiglioni I. Parameters influencing PET imaging features: A phantom study with irregular and heterogeneous synthetic lesions. Contrast Media Mol Imaging. 2018; 2018: 5324517.
39. Yan J, Zhang B, Zhang S, Cheng J, Liu X, Wang W, Dong Y, Zhang L, Mo X, Chen Q, et al. Quantitative MRI-based radiomics for noninvasively predicting molecular subtypes and survival in glioma patients. NPJ Precis Oncol. 2021; 5(1): 72.
40. Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation. Paper presented at: Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2015 June 7–12; Boston, MA.
41. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional networks for biomedical image segmentation. In: Navab N, Hornegger J, Wells W, Frangi A, editors. Medical image computing and computer-assisted intervention—MICCAI 2015. Lecture notes in computer science, Vol. 9351. Cham: Springer; 2015. https://doi.org/10.1007/978-3-319-24574-4_28.
42. Badrinarayanan V, Kendall A, Cipolla R. SegNet: A deep convolutional encoder-decoder architecture for image segmentation. IEEE T Pattern Anal. 2017; 39(12): 2481–2495.
43. Chen LC, Papandreou G, Kokkinos I, Murphy K, Yuille AL. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE T Pattern Anal. 2017; 40(4): 834–848.
44. Chen LC, Zhu Y, Papandreou G, Schroff F, Adam H. Encoder-decoder with atrous separable convolution for semantic image segmentation. In: Ferrari V, Hebert M, Sminchisescu C, Weiss Y, editors. Computer vision—ECCV 2018. Lecture notes in computer science, Vol. 11211. Cham: Springer; 2018. https://doi.org/10.1007/978-3-030-01234-2_49.
45. Shi G, Xiao L, Chen Y, Zhou SK. Marginal loss and exclusion loss for partially supervised multi-organ segmentation. Med Image Anal. 2021; 70: 101979.
46. Isensee F, Jaeger PF, Kohl SAA, Petersen J, Maier-Hein KH. nnU-Net: A self-configuring method for deep learning-based biomedical image segmentation. Nat Methods. 2021; 18(2): 203–211.
47. Seo H, Huang C, Bassenne M, Xiao R, Xing L. Modified U-Net (mU-Net) with incorporation of object-dependent high level features for improved liver and liver-tumor segmentation in CT images. IEEE Trans Med Imaging. 2020; 39(5): 1316–1325.
48. Park S, Kim JH, Kim J, Joseph W, Lee D, Park SJ. Development of a deep learning-based auto-segmentation algorithm for hepatocellular carcinoma (HCC) and application to predict microvascular invasion of HCC using CT texture analysis: Preliminary results. Acta Radiol. 2022; 16: 2841851221100318.
49. Hu Y, Xie C, Yang H, Ho JWK, Wen J, Han L, Chiu KWH, Fu J, Vardhanabhuti V. Assessment of intratumoral and peritumoral computed tomography radiomics for predicting pathological complete response to neoadjuvant chemoradiation in patients with esophageal squamous cell carcinoma. JAMA Netw Open. 2020; 3(9): e2015927.
50. Griethuysen JJM, Fedorov A, Parmar C, Hosny A, Aucoin N, Narayan V, Beets-Tan RGH, Fillion-Robin JC, Pieper S, Aerts HJWL. Computational radiomics system to decode the radiographic phenotype. Cancer Res. 2017; 77(21): e104–e107.
51. Liu Z, Wang S, Dong D, Wei J, Fang C, Zhou X, Sun K, Li L, Li B, Wang M, et al. The applications of radiomics in precision diagnosis and treatment of oncology: Opportunities and challenges. Theranostics. 2019; 9(5): 1303–1322.
52. Niu J, Zhang S, Ma S, Diao J, Zhou W, Tian J, Zang Y, Jia W. Preoperative prediction of cavernous sinus invasion by pituitary adenomas using a radiomics method based on magnetic resonance images. Eur Radiol. 2019; 29(3): 1625–1634.
53. Remeseiro B, Bolon-Canedo V. A review of feature selection methods in medical applications. Comput Biol Med. 2019; 112: 103375.
54. Bagherzadeh-Khiabani F, Ramezankhani A, Azizi F, Hadaegh F, Steyerberg EW, Khalili D. A tutorial on variable selection for clinical prediction models: Feature selection methods in data mining could improve the results. J Clin Epidemiol. 2016; 71: 76–85.
55. Liu Z, Zhang XY, Shi YJ, Wang L, Zhu HT, Tang Z, Wang S, Li XT, Tian J, Sun YS. Radiomics analysis for evaluation of pathological complete response to neoadjuvant chemoradiotherapy in locally advanced rectal cancer. Clin Cancer Res. 2017; 23(23): 7253–7262.
56. Chen S, Feng S, Wei J, Liu F, Li B, Li X, Hou Y, Gu D, Tang M, Xiao H, et al. Pretreatment prediction of immunoscore in hepatocellular cancer: A radiomics-based clinical model based on Gd-EOB-DTPA-enhanced MRI imaging. Eur Radiol. 2019; 29(8): 4177–4187.
57. Tibshirani R. Regression shrinkage and selection via the lasso: A retrospective. J R Stat Soc Series B Stat Methodol. 2011; 73(3): 273–282.
58. Jin W, Luo Q. When artificial intelligence meets PD-1/PD-L1 inhibitors: Population screening, response prediction and efficacy evaluation. Comput Biol Med. 2022; 7: 105499.
59. Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: A simple way to prevent neural networks from overfitting. J Mach Learn Res. 2014; 15(1): 1929–1958.
60. Noble WS. What is a support vector machine? Nat Biotechnol. 2006; 24(12): 1565–1567.
61. Svetnik V, Liaw A, Tong C, Culberson JC, Sheridan RP, Feuston BP. Random forest: A classification and regression tool for compound classification and QSAR modeling. J Chem Inf Comput Sci. 2003; 43(6): 1947–1958.
62. He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. Paper presented at: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2016 June 27–30; Las Vegas, NV.
63. Chollet F. Xception: Deep learning with depthwise separable convolutions. Paper presented at: Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2017 July 21–26; Honolulu, HI.
64. Huang G, Liu Z, Van Der Maaten L, Weinberger KQ. Densely connected convolutional networks. Paper presented at: Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2017 July 21–26; Honolulu, HI.
65. Guo L, Xiao X, Wu C, Zeng X, Zhang Y, Du J, Bai S, Xie J, Zhang Z, Li Y, et al. Real-time automated diagnosis of precancerous lesions and early esophageal squamous cell carcinoma using a deep learning model (with videos). Gastrointest Endosc. 2020; 91(1): 41–51.
66. Pulido JV, Guleria S, Ehsan L, Shah T, Syed S, Brown DE. Screening for Barrett's esophagus with probe-based confocal laser endomicroscopy videos. Proc IEEE Int Symp Biomed Imaging. 2020; 2020: 1659–1663.
67. Groof AJ, Struyvenberg MR, Putten J, Sommen F, Fockens KN, Curvers WL, Zinger S, Pouw RE, Coron E, Baldaque-Silva F, et al. Deep-learning system detects neoplasia in patients with Barrett's esophagus with higher accuracy than endoscopists in a multistep training and validation study with benchmarking. Gastroenterology. 2020; 158(4): 915–929.e4.
68. Hou W, Wang L, Cai S, Lin Z, Yu R, Qin J. Early neoplasia identification in Barrett's esophagus via attentive hierarchical aggregation and self-distillation. Med Image Anal. 2021; 72: 102092.
69. Horie Y, Yoshio T, Aoyama K, Yoshimizu S, Horiuchi Y, Ishiyama A, Hirasawa T, Tsuchida T, Ozawa T, Ishihara S, et al. Diagnostic outcomes of esophageal cancer by artificial intelligence using convolutional neural networks. Gastrointest Endosc. 2019; 89(1): 25–32.
70. Chen H, Zhou X, Tang X, Li S, Zhang G. Prediction of lymph node metastasis in superficial esophageal cancer using a pattern recognition neural network. Cancer Manag Res. 2020; 12: 12249–12258.
71. Wu L, Yang X, Cao W, Zhao K, Li W, Ye W, Chen X, Zhou Z, Liu Z, Liang C. Multiple level CT radiomics features preoperatively predict lymph node metastasis in esophageal cancer: A multicentre retrospective study. Front Oncol. 2020; 9: 1548.
72. Qu J, Shen C, Qin J, Wang Z, Liu Z, Guo J, Zhang H, Gao P, Bei T, Wang Y, et al. The MR radiomic signature can predict preoperative lymph node metastasis in patients with esophageal cancer. Eur Radiol. 2019; 29(2): 906–914.
73. Kawahara D, Murakami Y, Tani S, Nagata Y. A prediction model for degree of differentiation for resectable locally advanced esophageal squamous cell carcinoma based on CT images using radiomics and machine-learning. Br J Radiol. 2021; 94(1124): 20210525.
74. Takeuchi M, Seto T, Hashimoto M, Ichihara N, Morimoto Y, Kawakubo H, Suzuki T, Jinzaki M, Kitagawa Y, Miyata H, et al. Performance of a deep learning-based identification system for esophageal cancer from CT images. Esophagus. 2021; 18(3): 612–620.
75. Li X, Gao H, Zhu J, Huang Y, Zhu Y, Huang W, Li Z, Sun K, Liu Z, Tian J, et al. 3D deep learning model for the pretreatment evaluation of treatment response in esophageal carcinoma: A prospective study (ChiCTR2000039279). Int J Radiat Oncol Biol Phys. 2021; 111(4): 926–935.
76. Jin X, Zheng X, Chen D, Jin J, Zhu G, Deng X, Han C, Gong C, Zhou Y, Liu C, et al. Prediction of response after chemoradiation for esophageal cancer using a combination of dosimetry and CT radiomics. Eur Radiol. 2019; 29(11): 6080–6088.
77. Xu Y, Cui H, Dong T, Zou B, Fan B, Li W, Wang S, Sun X, Yu J, Wang L. Integrating clinical data and attentional CT imaging features for esophageal fistula prediction in esophageal cancer. Front Oncol. 2021; 11: 688706.
78. Zhu Y, Yao W, Xu BC, Lei YY, Guo QK, Liu LZ, Li HJ, Xu M, Yan J, Chang DD, et al. Predicting response to immunotherapy plus chemotherapy in patients with esophageal squamous cell carcinoma using non-invasive radiomic biomarkers. BMC Cancer. 2021; 21(1): 1167.
79. Beukinga RJ, Hulshoff JB, Mul VEM, Noordzij W, Kats-Ugurlu G, Slart RHJA, Plukker JTM. Prediction of response to neoadjuvant chemotherapy and radiation therapy with baseline and restaging 18F-FDG PET imaging biomarkers in patients with esophageal cancer. Radiology. 2018; 287(3): 983–992.
80. Murakami Y, Kawahara D, Tani S, Kubo K, Katsuta T, Imano N, Takeuchi Y, Nishibuchi I, Saito A, Nagata Y. Predicting the local response of esophageal squamous cell carcinoma to neoadjuvant chemoradiotherapy by radiomics with a machine learning method using 18F-FDG PET images. Diagnostics. 2021; 11(6): 1049.
81. Beukinga RJ, Poelmann FB, Kats-Ugurlu G, Viddeleer AR, Boellaard R, De Haas RJ, Plukker JTM, Hulshoff JB. Prediction of non-response to neoadjuvant chemoradiotherapy in esophageal cancer patients with 18F-FDG PET radiomics based machine learning classification. Diagnostics. 2022; 12(5): 1070.
82. Larue RTHM, Klaassen R, Jochems A, Leijenaar RTH, Hulshof MCCM, Berge Henegouwen MI, Schreurs WMJ, Sosef MN, Elmpt W, Laarhoven HWM, et al. Pre-treatment CT radiomics to predict 3-year overall survival following chemoradiotherapy of esophageal cancer. Acta Oncol. 2018; 57(11): 1475–1481.
83. Wang J, Zeng J, Li H, Yu X. A deep learning radiomics analysis for survival prediction in esophageal cancer. J Healthc Eng. 2022; 2022: 4034404.
84. Lin Z, Cai W, Hou W, Chen Y, Gao B, Mao R, Wang L, Li Z. CT-guided survival prediction of esophageal cancer. IEEE J Biomed Health Inform. 2022; 26(6): 2660–2669.
85. Chen YH, Lue KH, Chu SC, Chang BS, Wang LY, Liu DW, Liu SH, Chao YK, Chan SC. Combining the radiomic features and traditional parameters of 18F-FDG PET with clinical profiles to improve prognostic stratification in patients with esophageal squamous cell carcinoma treated with neoadjuvant chemoradiotherapy and surgery. Ann Nucl Med. 2019; 33(9): 657–670.
86. Karahan Şen NP, Aksu A, Çapa Kaya G. A different overview of staging PET/CT images in patients with esophageal cancer: The role of textural analysis with machine learning methods. Ann Nucl Med. 2021; 35(9): 1030–1037.
87. Smyth EC, Nilsson M, Grabsch HI, Grieken NC, Lordick F. Gastric cancer. Lancet. 2020; 396(10251): 635–648.
88. Ajani JA, D'Amico TA, Bentrem DJ, Chao J, Cooke D, Corvera C, Das P, Enzinger PC, Enzler T, Fanta P, et al. Gastric cancer, version 2.2022, NCCN clinical practice guidelines in oncology. J Natl Compr Canc Netw. 2022; 20(2): 167–192.
89. Ba-Ssalamah A, Muin D, Schernthaner R, Kulinna-Cosentini C, Bastati N, Stift J, Gore R, Mayerhoefer ME. Texture-based classification of different gastric tumors at contrast-enhanced CT. Eur J Radiol. 2013; 82(10): e537–e543.
90. Gong L, Wang M, Shu L, He J, Qin B, Xu J, Su W, Dong D, Hu H, Tian J, et al. Automatic captioning of early gastric cancer using magnification endoscopy with narrow-band imaging. Gastrointest Endosc. 2022; 96(6): 929–942.e6.
  • 91.Liu S, Shi H, Ji C, Zheng H, Pan X, Guan W, Chen L, Sun Y, Tang L, Guan Y,et al. Preoperative CT texture analysis of gastric cancer: Correlations with postoperative TNM staging. Clin Radiol. 2018; 73(8): 756.e1– 756.e9. [DOI] [PubMed] [Google Scholar]
  • 92.Li J, Fang M, Wang R, Dong D, Tian J, Liang P, Liu J, Gao J. Diagnostic accuracy of dual-energy CT-based nomograms to predict lymph node metastasis in gastric cancer. Eur Radiol. 2018; 28(12): 5241– 5249. [DOI] [PubMed] [Google Scholar]
  • 93.Wang X, Li C, Fang M, Zhang L, Zhong L, Dong D, Tian J, Shan X. Integrating No.3 lymph nodes and primary tumor radiomics to predict lymph node metastasis in T1-2 gastric cancer. BMC Med Imaging. 2021; 21(1): 58. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 94.Li J, Dong D, Fang M, Wang R, Tian J, Li H, Gao J. Dual-energy CT-based deep learning radiomics can improve lymph node metastasis risk prediction for gastric cancer. Eur Radiol. 2020; 30(4): 2324– 2333. [DOI] [PubMed] [Google Scholar]
  • 95.Liu S, Liu S, Ji C, Zheng H, Pan X, Zhang Y, Guan W, Chen L, Guan Y, Li W,et al. Application of CT texture analysis in predicting histopathological characteristics of gastric cancers. Eur Radiol. 2017; 27(12): 4951– 4959. [DOI] [PubMed] [Google Scholar]
  • 96.Wang XX, Ding Y, Wang SW, Dong D, Li HL, Chen J, Hu H, Lu C, Tian J, Shan XH. Intratumoral and peritumoral radiomics analysis for preoperative Lauren classification in gastric cancer. Cancer Imaging. 2020; 20(1): 83. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 97.Wang S, Dong D, Zhang W, Hu H, Li H, Zhu Y, Zhou J, Shan X, Tian J. Specific Borrmann classification in advanced gastric cancer by an ensemble multilayer perceptron network: A multicenter research. Med Phys. 2021; 48(9): 5017– 5028. [DOI] [PubMed] [Google Scholar]
  • 98.Giganti F, Marra P, Ambrosi A, Salerno A, Antunes S, Chiari D, Orsenigo E, Esposito A, Mazza E, Albarello L,et al. Pre-treatment MDCT-based texture analysis for therapy response prediction in gastric cancer: Comparison with tumour regression grade at final histology. Eur J Radiol. 2017; 90: 129– 137. [DOI] [PubMed] [Google Scholar]
  • 99.Fang M, Tian J, Dong D. Non-invasively predicting response to neoadjuvant chemotherapy in gastric cancer via deep learning radiomics. EClinicalMedicine. 2022; 46: 101380. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 100.Giganti F, Antunes S, Salerno A, Ambrosi A, Marra P, Nicoletti R, Orsenigo E, Chiari D, Albarello L, Staudacher C,et al. Gastric cancer: Texture analysis from multidetector computed tomography as a potential preoperative prognostic biomarker. Eur Radiol. 2017; 27(5): 1831– 1839. [DOI] [PubMed] [Google Scholar]
  • 101.Li W, Zhang L, Tian C, Song H, Fang M, Hu C, Zang Y, Cao Y, Dai S, Wang F,et al. Prognostic value of computed tomography radiomics features in patients with gastric cancer following curative resection. Eur Radiol. 2019; 29(6): 3079– 3089. [DOI] [PubMed] [Google Scholar]
  • 102.Luo H, Xu G, Li C, He L, Luo L, Wang Z, Jing B, Deng Y, Jin Y, Li Y,et al. Real-time artificial intelligence for detection of upper gastrointestinal cancer by endoscopy: A multicentre, case-control, diagnostic study. Lancet Oncol. 2019; 20(12): 1645– 1654. [DOI] [PubMed] [Google Scholar]
  • 103.Wu L, Zhou W, Wan X, Zhang J, Shen L, Hu S, Ding Q, Mu G, Yin A, Huang X,et al. A deep neural network improves endoscopic detection of early gastric cancer without blind spots. Endoscopy. 2019; 51(6): 522– 531. [DOI] [PubMed] [Google Scholar]
  • 104.Wu L, He X, Liu M, Xie H, An P, Zhang J, Zhang H, Ai Y, Tong Q, Guo M,et al. Evaluation of the effects of an artificial intelligence system on endoscopy quality and preliminary testing of its performance in detecting early gastric cancer: A randomized controlled trial. Endoscopy. 2021; 53(12): 1199– 1207. [DOI] [PubMed] [Google Scholar]
  • 105.He X, Wu L, Dong Z, Gong D, Jiang X, Zhang H, Ai Y, Tong Q, Lv P, Lu B,et al. Real-time use of artificial intelligence for diagnosing early gastric cancer by magnifying image-enhanced endoscopy: A multicenter diagnostic study (with videos). Gastrointest Endosc. 2022; 95(4): 671– 678.e4. [DOI] [PubMed] [Google Scholar]
  • 106.Hu H, Gong L, Dong D, Zhu L, Wang M, He J, Shu L, Cai Y, Cai S, Su W,et al. Identifying early gastric cancer under magnifying narrow-band images with deep learning: A multicenter study. Gastrointest Endosc. 2021; 93(6): 1333– 1341.e3. [DOI] [PubMed] [Google Scholar]
  • 107.Selvaraju RR, Cogswell M, Das A, Vedantam R, Parikh D, Batra D. Grad-CAM: Visual explanations from deep networks via gradient-based localization. Paper presented at: Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV); 2017 October 22–29; Venice, Italy.
  • 108.Zhu Y, Wang QC, Xu MD, Zhang Z, Cheng J, Zhong YS, Zhang YQ, Chen WF, Yao LQ, Zhou PH,et al. Application of convolutional neural network in the diagnosis of the invasion depth of gastric cancer based on conventional endoscopy. Gastrointest Endosc. 2019; 89(4): 806– 815.e1. [DOI] [PubMed] [Google Scholar]
  • 109.Ma Z, Fang M, Huang Y, He L, Chen X, Liang C, Huang X, Cheng Z, Dong D, Liang C,et al. CT-based radiomics signature for differentiating Borrmann type IV gastric cancer from primary gastric lymphoma. Eur J Radiol. 2017; 91: 142– 147. [DOI] [PubMed] [Google Scholar]
  • 110.Dong D, Fang MJ, Tang L, Shan XH, Gao JB, Giganti F, Wang RP, Chen X, Wang XX, Palumbo D,et al. Deep learning radiomic nomogram can predict the number of lymph node metastasis in locally advanced gastric cancer: An international multicenter study. Ann Oncol. 2020; 31(7): 912– 920. [DOI] [PubMed] [Google Scholar]
  • 111.Sun RJ, Fang MJ, Tang L, Li XT, Lu QY, Dong D, Tian J, Sun YS. CT-based deep learning radiomics analysis for evaluation of serosa invasion in advanced gastric cancer. Eur J Radiol. 2020; 132: 109277. [DOI] [PubMed] [Google Scholar]
  • 112.Li C, Qin Y, Zhang WH, Jiang H, Song B, Bashir MR, Xu H, Duan T, Fang M, Zhong L,et al. Deep learning-based AI model for signet-ring cell carcinoma diagnosis and chemotherapy response prediction in gastric cancer. Med Phys. 2022; 49(3): 1535– 1546. [DOI] [PubMed] [Google Scholar]
  • 113.Gao X, Ma T, Cui J, Zhang Y, Wang L, Li H, Ye Z. A radiomics-based model for prediction of lymph node metastasis in gastric cancer. Eur J Radiol. 2020; 129: 109069. [DOI] [PubMed] [Google Scholar]
  • 114.Liu S, Qiao X, Xu M, Ji C, Li L, Zhou Z. Development and validation of multivariate models integrating preoperative clinicopathological parameters and radiographic findings based on late arterial phase CT images for predicting lymph node metastasis in gastric cancer. Acad Radiol. 2021; 28(Suppl. 1): S167– S178. [DOI] [PubMed] [Google Scholar]
  • 115.Meng L, Dong D, Chen X, Fang M, Wang R, Li J, Liu Z, Tian J. 2D and 3D CT radiomic features performance comparison in characterization of gastric cancer: A multi-center study. IEEE J Biomed Health Inform. 2021; 25(3): 755– 763. [DOI] [PubMed] [Google Scholar]
  • 116.Jin C, Jiang Y, Yu H, Wang W, Li B, Chen C, Yuan Q, Hu Y, Xu Y, Zhou Z,et al. Deep learning analysis of the primary tumour and the prediction of lymph node metastases in gastric cancer. Br J Surg. 2021; 108(5): 542– 549. [DOI] [PubMed] [Google Scholar]
  • 117.Zhang Y, Li H, Du J, Qin J, Wang T, Chen Y, Liu B, Gao W, Ma G, Lei B. 3D multi-attention guided multi-task learning network for automatic gastric tumor segmentation and lymph node classification. IEEE Trans Med Imaging. 2021; 40(6): 1618– 1631. [DOI] [PubMed] [Google Scholar]
  • 118.Cao R, Gong L, Dong D. Pathological diagnosis and prognosis of gastric cancer through a multi-instance learning method. EBioMedicine. 2021; 73: 103671. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 119.Wang S, Feng C, Dong D, Li H, Zhou J, Ye Y, Liu Z, Tian J, Wang Y. Preoperative computed tomography-guided disease-free survival prediction in gastric cancer: A multicenter radiomics study. Med Phys. 2020; 47(10): 4862– 4871. [DOI] [PubMed] [Google Scholar]
  • 120.Jiang Y, Yuan Q, Lv W, Xi S, Huang W, Sun Z, Chen H, Zhao L, Liu W, Hu Y,et al. Radiomic signature of 18F fluorodeoxyglucose PET/CT for prediction of gastric cancer survival and chemotherapeutic benefits . Theranostics. 2018; 8(21): 5915– 5928. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 121.Jiang Y, Jin C, Yu H, Wu J, Chen C, Yuan Q, Huang W, Hu Y, Xu Y, Zhou Z,et al. Development and validation of a deep learning CT signature to predict survival and chemotherapy benefit in gastric cancer: A multicenter. Retrospective Study Ann Surg. 2021; 274(6): e1153– e1161. [DOI] [PubMed] [Google Scholar]
  • 122.Li Z, Zhang D, Dai Y, Dong J, Wu L, Li Y, Cheng Z, Ding Y, Liu Z. Computed tomography-based radiomics for prediction of neoadjuvant chemotherapy outcomes in locally advanced gastric cancer: A pilot study. Chin J Cancer Res. 2018; 30(4): 406– 414. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 123.Mazzei MA, Di Giacomo L, Bagnacci G, Nardone V, Gentili F, Lucii G, Tini P, Marrelli D, Morgagni P, Mura G,et al. Delta-radiomics and response to neoadjuvant treatment in locally advanced gastric cancer-a multicenter study of GIRCG (Italian Research Group for Gastric Cancer). Quant Imaging Med Surg. 2021; 11(6): 2376– 2387. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 124.Cui Y, Zhang J, Li Z, Wei K, Lei Y, Ren J, Wu L, Shi Z, Meng X, Yang X,et al. A CT-based deep learning radiomics nomogram for predicting the response to neoadjuvant chemotherapy in patients with locally advanced gastric cancer: A multicenter cohort study. EClinicalMedicine. 2022; 46: 101348. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 125.Zhang W, Fang M, Dong D, Wang X, Ke X, Zhang L, Hu C, Guo L, Guan X, Zhou J,et al. Development and validation of a CT-based radiomic nomogram for preoperative prediction of early recurrence in advanced gastric cancer. Radiother Oncol. 2020; 145: 13– 20. [DOI] [PubMed] [Google Scholar]
  • 126.Zhang L, Dong D, Zhang W, Hao X, Fang M, Wang S, Li W, Liu Z, Wang R, Zhou J,et al. A deep learning risk prediction model for overall survival in patients with gastric cancer: A multicenter study. Radiother Oncol. 2020; 150: 73– 80. [DOI] [PubMed] [Google Scholar]
  • 127.Zhang L, Zhong L, Li C, Zhang W, Hu C, Dong D, Liu Z, Zhou J, Tian J. Knowledge-guided multi-task attention network for survival risk prediction using multi-center computed tomography images. Neural Netw. 2022; 152: 394– 406. [DOI] [PubMed] [Google Scholar]
  • 128.Jiang Y, Zhang Z, Yuan Q, Wang W, Wang H, Li T, Huang W, Xie J, Chen C, Sun Z,et al. Predicting peritoneal recurrence and disease-free survival from CT images in gastric cancer with multitask deep learning: A retrospective study. Lancet Digit Health. 2022; 4(5): e340– e350. [DOI] [PubMed] [Google Scholar]
  • 129.Patel UB, Brown G, Machado I, Santos-Cores J, Pericay C, Ballesteros E, Salud A, Isabel-Gil M, Montagut C, Maurel J,et al. MRI assessment and outcomes in patients receiving neoadjuvant chemotherapy only for primary rectal cancer: Long-term results from the GEMCAD 0801 trial. Ann Oncol. 2017; 28(2): 344– 353. [DOI] [PubMed] [Google Scholar]
  • 130.Huang YQ, Liang CH, He L, Tian J, Liang CS, Chen X, Ma ZL, Liu ZY. Development and validation of a radiomics nomogram for preoperative prediction of lymph node metastasis in colorectal cancer. J Clin Oncol. 2016; 34(18): 2157– 2164. [DOI] [PubMed] [Google Scholar]
  • 131.Zhou X, Yi Y, Liu Z, Zhou Z, Lai B, Sun K, Li L, Huang L, Feng Y, Cao W,et al. Radiomics-based preoperative prediction of lymph node status following neoadjuvant therapy in locally advanced rectal cancer. Front Oncol. 2020; 10: 604. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 132.Liu ZY, Zhang S, Li ZH, Zhou XZ, Liang WJ, Ding YY, Zhong BS, Goel A, Li XX, Tian J, et al. A novel radiomic signature for prediction of lymph node metastasis in T1 colorectal cancer: A multicenter study. SSRN. 18 May 2021. https://ssrn.com/abstract=3848529.
  • 133.Li Z, Zhong Q, Zhang L, Wang M, Xiao W, Cui F, Yu F, Huang C, Feng Z. Computed tomography-based radiomics model to preoperatively predict microsatellite instability status in colorectal cancer: A multicenter study. Front Oncol. 2021; 11: 666786. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 134.Zhou X, Liu Z, Zhang D, Wu L, Sun K, Shao L, Huang L, Li Z, Tian J. Improving initial nodal staging of T3 rectal cancer using quantitative image features. Br J Surg. 2020; 107(11): e541– e542. [DOI] [PubMed] [Google Scholar]
  • 135.Yamashita R, Long J, Longacre T, Peng L, Berry G, Martin B, Higgins J, Rubin DL, Shen J. Deep learning model for the prediction of microsatellite instability in colorectal cancer: A diagnostic study. Lancet Oncol. 2021; 22(1): 132– 141. [DOI] [PubMed] [Google Scholar]
  • 136.Cao R, Yang F, Ma SC, Liu L, Zhao Y, Li Y, Wu DH, Wang T, Lu WJ, Cai WJ,et al. Development and interpretation of a pathomics-based model for the prediction of microsatellite instability in colorectal cancer. Theranostics. 2020; 10(24): 11080– 11091. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 137.Bilal M, Raza SEA, Azam A, Graham S, Ilyas M, Cree IA, Snead D, Minhas F, Rajpoot NM. Development and validation of a weakly supervised deep learning framework to predict the status of molecular pathways and key mutations in colorectal cancer from routine histology images: A retrospective study. Lancet Digit Health. 2021; 3(12): e763– e772. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 138.Sirinukunwattana K, Domingo E, Richman SD, Redmond KL, Blake A, Verrill C, Leedham SJ, Chatzipli A, Hardy C, Whalley CM,et al. Image-based consensus molecular subtype (imCMS) classification of colorectal cancer using deep learning. Gut. 2021; 70(3): 544– 554. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 139.Tang Z, Zhang XY, Liu Z, Li XT, Shi YJ, Wang S, Fang M, Shen C, Dong E, Sun YS,et al. Quantitative analysis of diffusion weighted imaging to predict pathological good response to neoadjuvant chemoradiation for locally advanced rectal cancer. Radiother Oncol. 2019; 132: 100– 108. [DOI] [PubMed] [Google Scholar]
  • 140.Zhou X, Yi Y, Liu Z, Cao W, Lai B, Sun K, Li L, Zhou Z, Feng Y, Tian J. Radiomics-based pretherapeutic prediction of non-response to neoadjuvant therapy in locally advanced rectal cancer. Ann Surg Oncol. 2019; 26(6): 1676– 1684. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 141.Giannini V, Mazzetti S, Bertotto I, Chiarenza C, Cauda S, Delmastro E, Bracco C, Di Dia A, Leone F, Medico E,et al. Predicting locally advanced rectal cancer response to neoadjuvant therapy with 18F-FDG PET and MRI radiomics features . Eur J Nucl Med Mol Imaging. 2019; 46(4): 878– 888. [DOI] [PubMed] [Google Scholar]
  • 142.Jin C, Yu H, Ke J, Ding P, Yi Y, Jiang X, Duan X, Tang J, Chang DT, Wu X,et al. Predicting treatment response from longitudinal images using multi-task deep learning. Nat Commun. 2021; 12(1): 1851. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 143.Zhang XY, Wang L, Zhu HT, Li ZW, Ye M, Li XT, Shi YJ, Zhu HC, Sun YS. Predicting rectal cancer response to neoadjuvant chemoradiotherapy using deep learning of diffusion kurtosis MRI. Radiology. 2020; 296(1): 56– 64. [DOI] [PubMed] [Google Scholar]
  • 144.Lu L, Dercle L, Zhao B, Schwartz LH. Deep learning for the prediction of early on-treatment response in metastatic colorectal cancer from serial medical imaging. Nat Commun. 2021; 12(1): 6654. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 145.Nie K, Shi L, Chen Q, Hu X, Jabbour SK, Yue N, Niu T, Sun X. Rectal Cancer: Assessment of neoadjuvant chemoradiation outcome based on radiomics of multiparametric MRI. Clin Cancer Res. 2016; 22(21): 5256– 5264. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 146.Shao L, Liu Z, Feng L, Lou X, Li Z, Zhang XY, Wan X, Zhou X, Sun K, Zhang DF,et al. Multiparametric MRI and whole slide image-based pretreatment prediction of pathological response to neoadjuvant chemoradiotherapy in rectal cancer: A multicenter radiopathomic study. Ann Surg Oncol. 2020; 27(11): 4296– 4306. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 147.Feng L, Liu Z, Li C, Li Z, Lou X, Shao L, Wang Y, Huang Y, Chen H, Pang X,et al. Development and validation of a radiopathomics model to predict pathological complete response to neoadjuvant chemoradiotherapy in locally advanced rectal cancer: A multicentre observational study. Lancet Digit Health. 2022; 4(1): e8– e17. [DOI] [PubMed] [Google Scholar]
  • 148.Lovinfosse P, Polus M, Van Daele D, Martinive P, Daenen F, Hatt M, Visvikis D, Koopmansch B, Lambert F, Coimbra C,et al. FDG PET/CT radiomics for predicting the outcome of locally advanced rectal cancer. Eur J Nucl Med Mol Imaging. 2018; 45(3): 365– 375. [DOI] [PubMed] [Google Scholar]
  • 149.Liu Z, Meng X, Zhang H, Li Z, Liu J, Sun K, Meng Y, Dai W, Xie P, Ding Y,et al. Predicting distant metastasis and chemotherapy benefit in locally advanced rectal cancer. Nat Commun. 2020; 11(1): 4308. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 150.Liu X, Zhang D, Liu Z, Li Z, Xie P, Sun K, Wei W, Dai W, Tang Z, Ding Y,et al. Deep learning radiomics-based prediction of distant metastasis in patients with locally advanced rectal cancer after neoadjuvant chemoradiotherapy: A multicentre study. EBioMedicine. 2021; 69: 103442. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 151.Kather JN, Krisam J, Charoentong P, Luedde T, Herpel E, Weis CA, Gaiser T, Marx A, Valous NA, Ferber D,et al. Predicting survival from colorectal cancer histology slides using deep learning: A retrospective multicenter study. PLOS Med. 2019; 16(1): e1002730. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 152.Sun C, Li B, Wei G, Qiu W, Li D, Li X, Liu X, Wei W, Wang S, Liu Z,et al. Deep learning with whole slide images can improve the prognostic risk stratification with stage III colorectal cancer. Comput Methods Programs Biomed. 2022; 221: 106914. [DOI] [PubMed] [Google Scholar]
  • 153.Wang R, Dai W, Gong J, Huang M, Hu T, Li H, Lin K, Tan C, Hu H, Tong T,et al. Development of a novel combined nomogram model integrating deep learning-pathomics, radiomics and immunoscore to predict postoperative outcome of colorectal cancer lung metastasis patients. J Hematol Oncol. 2022; 15(1): 11. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 154.Yasaka K, Akai H, Abe O, Kiryu S. Deep learning with convolutional neural network for differentiation of liver masses at dynamic contrast-enhanced CT: A preliminary study. Radiology. 2018; 286(3): 887– 896. [DOI] [PubMed] [Google Scholar]
  • 155.Hamm CA, Wang CJ, Savic LJ, Ferrante M, Schobert I, Schlachter T, Lin M, Duncan JS, Weinreb JC, Chapiro J,et al. Deep learning for liver tumor diagnosis part I: Development of a convolutional neural network classifier for multi-phasic MRI. Eur Radiol. 2019; 29(7): 3338– 3347. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 156.Gao R, Zhao S, Aishanjiang K, Cai H, Wei T, Zhang Y, Liu Z, Zhou J, Han B, Wang J,et al. Deep learning for differential diagnosis of malignant hepatic tumors based on multi-phase contrast-enhanced CT and clinical data. J Hematol Oncol. 2021; 14(1): 154. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 157.Zhen SH, Cheng M, Tao YB, Wang YF, Juengpanich S, Jiang ZY, Jiang YK, Yan YY, Lu W, Lue JM,et al. Deep learning for accurate diagnosis of liver tumor based on magnetic resonance imaging and clinical data. Front Oncol. 2020; 10: 680. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 158.Gu D, Guo D, Yuan C, Wei J, Wang Z, Zheng H, Tian J. Multi-scale patches convolutional neural network predicting the histological grade of hepatocellular carcinoma. Annu Int Conf IEEE Eng Med Biol Soc. 2021; 2021: 2584– 2587. [DOI] [PubMed] [Google Scholar]
  • 159.Mao Y, Wang J, Zhu Y, Chen J, Mao L, Kong W, Qiu Y, Wu X, Guan Y, He J. Gd-EOB-DTPA-enhanced MRI radiomic features for predicting histological grade of hepatocellular carcinoma. Hepatobiliary Surg Nutr. 2022; 11(1): 13– 24. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 160.Chen W, Zheng R, Baade PD, Zhang S, Zeng H, Bray F, Jemal A, Yu XQ, He J. Cancer statistics in China, 2015. CA Cancer J Clin. 2016; 66(2): 115– 132. [DOI] [PubMed] [Google Scholar]
  • 161.Xu X, Zhang HL, Liu QP, Sun SW, Zhang J, Zhu FP, Yang G, Yan X, Zhang YD, Liu XS. Radiomic analysis of contrast-enhanced CT predicts microvascular invasion and outcome in hepatocellular carcinoma. J Hepatol. 2019; 70(6): 1133– 1144. [DOI] [PubMed] [Google Scholar]
  • 162.Feng ST, Jia Y, Liao B, Huang B, Zhou Q, Li X, Wei K, Chen L, Li B, Wang W,et al. Preoperative prediction of microvascular invasion in hepatocellular cancer: A radiomics model using Gd-EOB-DTPA-enhanced MRI. Eur Radiol. 2019; 29(9): 4648– 4659. [DOI] [PubMed] [Google Scholar]
  • 163.Zhou W, Jian W, Cen X, Zhang L, Guo H, Liu Z, Liang C, Wang G. Prediction of microvascular invasion of hepatocellular carcinoma based on contrast-enhanced MR and 3D convolutional neural networks. Front Oncol. 2021; 11: 588010. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 164.Wei J, Jiang H, Zeng M, Wang M, Niu M, Gu D, Chong H, Zhang Y, Fu F, Zhou M,et al. Prediction of microvascular invasion in hepatocellular carcinoma via deep learning: A multi-center and prospective validation study. Cancers. 2021; 13(10): 2368. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 165.Zhang J, Wu Z, Zhang X, Liu S, Zhao J, Yuan F, Shi Y, Song B. Machine learning: An approach to preoperatively predict PD-1/PD-L1 expression and outcome in intrahepatic cholangiocarcinoma using MRI biomarkers. ESMO Open. 2020; 5(6): e000910. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 166.Wang W, Gu D, Wei J, Ding Y, Yang L, Zhu K, Luo R, Rao SX, Tian J, Zeng M. A radiomics-based biomarker for cytokeratin 19 status of hepatocellular carcinoma with gadoxetic acid-enhanced MRI. Eur Radiol. 2020; 30(5): 3004– 3014. [DOI] [PubMed] [Google Scholar]
  • 167.Chen Y, Liu Z, Mo Y, Li B, Zhou Q, Peng S, Li S, Kuang M. Prediction of post-hepatectomy liver failure in patients with hepatocellular carcinoma based on radiomics using Gd-EOB-DTPA-enhanced MRI: The liver failure model. Front Oncol. 2021; 11: 605296. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 168.Zhu WS, Shi SY, Yang ZH, Song C, Shen J. Radiomics model based on preoperative gadoxetic acid-enhanced MRI for predicting liver failure. World J Gastroenterol. 2020; 26(11): 1208– 1220. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 169.Liu QP, Xu X, Zhu FP, Zhang YD, Liu XS. Prediction of prognostic risk factors in hepatocellular carcinoma with transarterial chemoembolization using multi-modal multi-task deep learning. EClinicalMedicine. 2020; 23: 100379. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 170.Abajian A, Murali N, Savic LJ, Laage-Gaupp FM, Nezami N, Duncan JS, Schlachter T, Lin M, Geschwind JF, Chapiro J. Predicting treatment response to intra-arterial therapies for hepatocellular carcinoma with the use of supervised machine learning-an artificial intelligence concept. J Vasc Interv Radiol. 2018; 29(6): 850– 857.e1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 171.Peng J, Huang J, Huang G, Zhang J. Predicting the initial treatment response to transarterial chemoembolization in intermediate-stage hepatocellular carcinoma by the integration of radiomics and deep learning. Front Oncol. 2021; 11: 730282. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 172.Ji GW, Zhu FP, Xu Q, Wang K, Wu MY, Tang WW, Li XC, Wang XH. Machine-learning analysis of contrast-enhanced CT radiomics predicts recurrence of hepatocellular carcinoma after resection: A multi-institutional study. EBioMedicine. 2019; 50: 156– 165. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 173.Shan QY, Hu HT, Feng ST, Peng ZP, Chen SL, Zhou Q, Li X, Xie XY, Lu MD, Wang W,et al. CT-based peritumoral radiomics signatures to predict early recurrence in hepatocellular carcinoma after curative tumor resection or ablation. Cancer Imaging. 2019; 19(1): 11. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 174.Nam JY, Sinn DH, Bae J, Jang ES, Kim J-W, Jeong S-H. Deep learning model for prediction of hepatocellular carcinoma in patients with HBV-related cirrhosis on antiviral therapy. JHEP Rep. 2020; 2(6): 100175. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 175.Kim HY, Lampertico P, Nam JY, Lee HC, Kim SU, Sinn DH, Seo YS, Lee HA, Park SY, Lim YS,et al. An artificial intelligence model to predict hepatocellular carcinoma risk in Korean and Caucasian patients with chronic hepatitis B. J Hepatol. 2022; 76(2): 311– 318. [DOI] [PubMed] [Google Scholar]
  • 176.Lim S, Shin Y, Lee YH. Arterial enhancing local tumor progression detection on CT images using convolutional neural network after hepatocellular carcinoma ablation: A preliminary study. Sci Rep. 2022; 12(1): 1754. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 177.Deng J, Dong W, Socher R, Li LJ, Li K, Fei-Fei L. ImageNet: A large-scale hierarchical image database. Paper presented at: Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition; 2009 June 20–25; Miami, FL.
  • 178.Deng J, Li W, Chen Y, Duan L. Unbiased mean teacher for cross-domain object detection. Paper presented at: Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2021 June 20-25; Nashville (TN).
  • 179.He K, Fan H, Wu Y, Xie S, Girshick R. Momentum contrast for unsupervised visual representation learning. Paper presented at: Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR); 2020 June 13–19; Seattle, WA.
  • 180.Wang S, Yu H, Gan Y, Wu Z, Li E, Li X, Cao J, Zhu Y, Wang L, Deng H,et al. Mining whole-lung information by artificial intelligence for predicting EGFR genotype and targeted therapy response in lung cancer: A multicohort study. Lancet Digit Health. 2022; 4(5): e309– e319. [DOI] [PubMed] [Google Scholar]
  • 181.Zhou B, Khosla A, Lapedriza A, Oliva A, Torralba A. Learning deep features for discriminative localization. Paper presented at: Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR); 2016 June 27–30; Las Vegas, NV.
  • 182.Ramaswamy HG. Ablation-cam: Visual explanations for deep convolutional network via gradient-free localization. Paper presented at: Proceedings of the 2020 IEEE Winter Conference on Applications of Computer Vision (WACV); 2020 March 1–5; Snowmass, CO.
  • 183.He B, Dong D, She Y, Zhou C, Fang M, Zhu Y, Zhang H, Huang Z, Jiang T, Tian J,et al. Predicting response to immunotherapy in advanced non-small-cell lung cancer using tumor mutational burden radiomic biomarker. J Immunother Cancer. 2020; 8(2): e000550. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 184.Mu W, Jiang L, Shi Y, Tunali I, Gray JE, Katsoulakis E, Tian J, Gillies RJ, Schabath MB. Non-invasive measurement of PD-L1 status and prediction of immunotherapy response using deep learning of PET/CT images. J Immunother Cancer. 2021; 9(6): e002118. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 185.He B, Guo Y, Zhu Y, Tong L, Kong B, Wang K, Sun C, Li H, Huang F, Wu L,et al. From signal to knowledge: The diagnostic value of raw data in artificial intelligence prediction of human data for the first time. medRxiv. 2022; 2022.08.01.22278299. [Google Scholar]
  • 186.Wang G, Ye JC, De Man B. Deep learning for tomographic image reconstruction. Nat Mach Intell. 2020; 2(12): 737– 748. [Google Scholar]
  • 187.Wang G, Ye JC, Mueller K, Fessler JA. Image reconstruction is a new frontier of machine learning. IEEE Trans Med Imaging. 2018; 37(6): 1289– 1296. [DOI] [PubMed] [Google Scholar]
  • 188.De Man Q, Haneda E, Claus B, Fitzgerald P, De Man B, Qian G, Shan H, Min J, Sabuncu M, Wang G. A two-dimensional feasibility study of deep learning-based feature detection and characterization directly from CT sinograms. Med Phys. 2019; 46(12): e790– e800. [DOI] [PubMed] [Google Scholar]


Data Availability Statement

No new data were created for this manuscript.

