Technology in Cancer Research & Treatment. 2021 Jun 18;20:15330338211016386. doi:10.1177/15330338211016386

The Application and Development of Deep Learning in Radiotherapy: A Systematic Review

Danju Huang 1, Han Bai 1, Li Wang 1, Yu Hou 1, Lan Li 1, Yaoxiong Xia 1, Zhirui Yan 1, Wenrui Chen 1, Li Chang 1, Wenhui Li 1
PMCID: PMC8216350  PMID: 34142614

Abstract

With the massive use of computers, the growth and explosion of data have greatly promoted the development of artificial intelligence (AI). The rise of deep learning (DL) algorithms, such as convolutional neural networks (CNN), has provided radiation oncologists with many promising tools that can simplify the complex radiotherapy process, improve the accuracy and objectivity of diagnosis, and reduce the workload, thus enabling clinicians to spend more time on advanced decision-making tasks. As DL moves closer to clinical practice, radiation oncologists will need to be more familiar with its principles to properly evaluate and use this powerful tool. In this paper, we explain the development and basic concepts of AI and discuss its application in radiation oncology based on the different task categories of DL algorithms. This work clarifies the possibility of further development of DL in radiation oncology.

Keywords: artificial intelligence, machine learning, deep neural networks, radiation therapy, convolutional neural network

Introduction

With the massive use of computers, the growth and explosion of data have greatly promoted the development of artificial intelligence (AI). In recent years, AI has been used widely in the medical field to analyze data in pathology, radiology, cardiology, oncology, genomics, and pharmacology to better inform disease prediction, 1-4 screening, 5 diagnosis, 6,7 treatment, 8 prognosis, health management, and drug development. 9 The applied research currently underway may lead to increased use of AI by clinicians, in particular radiation oncologists. 10 Current clinical practice is both time-consuming and highly subjective, and the rise of deep learning (DL) algorithms, such as convolutional neural networks (CNN), can simplify the complex radiotherapy workflow in the clinical work of radiation oncology, including image fusion, delineation of the clinical target volume (CTV) and organs at risk (OAR), automatic planning (AP), dose distribution prediction, and outcome prediction. 11 The application of DL not only improves the accuracy and objectivity of diagnosis but also reduces the workload, enabling clinicians to spend more time on advanced decision-making tasks. As DL moves closer to clinical practice, radiation oncologists will need to be familiar with its principles to properly evaluate and use this powerful tool. We explain the development and basic concepts of AI and discuss its application in radiation oncology based on the different task categories of DL algorithms. This work clarifies the possibility of further development of DL in radiation oncology.

Basic Concepts

Artificial intelligence (AI): AI is a broad term encompassing the various subdomains used to create algorithms that perform tasks mimicking human intelligence. 12

Machine learning (ML): ML is a subfield of AI and the primary method used to realize it. ML algorithms build mathematical models from collected data; these models map the input data to the desired output, and the inputs can be any sequence of images, numbers, or categorical data. 13 ML algorithms are divided into supervised learning, unsupervised learning, reinforcement learning, ensemble learning, and DL. In supervised learning, the input data contain teacher signals (labels); the model, which may be a probability function, an algebraic function, or an artificial neural network, is fit iteratively, and the learned result is a function from inputs to labels. In unsupervised learning, the input data carry no teacher signal; clustering methods are typically adopted, and the learned result is a set of categories. Typical unsupervised approaches include discovery learning, clustering, and competitive learning. Reinforcement learning takes environmental feedback (reward/punishment signals) as input and is guided by statistical and dynamic programming techniques. Ensemble learning combines multiple weak models to obtain a better and more comprehensive strong model; the underlying idea is that even if one weak classifier makes a wrong prediction, the other weak classifiers can correct its mistake. In practice, different functional representations can be used to map inputs to outputs, such as decision trees, 14 support vector machines (SVM), 15 naive Bayes classifiers, 16 and deep neural networks (DNN). 17 In most cases, ML methods achieve results at least as good as traditional statistical methods; when the underlying input-output relationship is nonlinear and the data set is large enough to contain predictors that capture that nonlinearity, ML methods outperform linear statistical models. 18 SVMs, decision trees, and naive Bayes classifiers are built with supervised learning algorithms, whereas DNNs are built with DL algorithms. Ideally, ML can use computers to predict clinical outcomes, identify disease patterns, detect disease characteristics, and optimize treatment strategies, thereby transforming the acquired knowledge into clinical evidence. 13
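
As a concrete illustration of supervised learning with these representations, the following minimal sketch (not from any reviewed study; the synthetic data merely stand in for tabular clinical features) fits a decision tree, an SVM, and a naive Bayes classifier with scikit-learn:

```python
# Minimal sketch: three supervised learners mentioned above, trained on a
# synthetic binary-classification data set with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB

# Synthetic stand-in for tabular clinical features (e.g., dosimetric predictors).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (DecisionTreeClassifier(max_depth=5),
              SVC(kernel="rbf"),
              GaussianNB()):
    model.fit(X_train, y_train)          # supervised: labels guide the fitting
    print(type(model).__name__, model.score(X_test, y_test))
```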

Deep learning (DL): DL (also known as DNN) rose to prominence at the end of 2012, when a CNN-based DL method won ImageNet classification, the world's most famous computer vision competition. 19 DL is a computational model that stacks multiple processing layers (input layer, hidden layers, and output layer) to discover complex structures in large data sets, using the backpropagation algorithm to instruct the machine how to change its internal parameters. 20 In the past 10 years, as a result of the massive use of computers and the growth and explosion of data, DNNs have surpassed other approaches in computer vision and related applications, such as processing and understanding text, 21 voice, 22 and images. 23 DNNs include the CNN, the recurrent neural network (RNN), and the fully convolutional network (FCN). The CNN is a type of feedforward neural network that contains convolutional calculations, has a deep structure that can classify input information according to its hierarchical structure, and is usually applied to images and other data with grid-like structures. To overcome the linear growth in network width caused by convolution, some scholars have introduced downsampling to reduce the width; they proved that a downsampled CNN can approximate ridge functions well, which illustrates the advantage of these structured networks in approximation and modeling. 24 In recent years, the CNN has produced breakthroughs in image, video, voice, and audio processing.
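
A minimal sketch of the ideas in this paragraph, assuming PyTorch (illustrative only, not any specific reviewed network): a small CNN with convolutional layers and downsampling, updated by one backpropagation step:

```python
# Minimal sketch: convolution + downsampling + backpropagation in PyTorch.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # convolutional feature extraction
    nn.ReLU(),
    nn.MaxPool2d(2),                             # downsampling reduces spatial width
    nn.Conv2d(8, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 2),                            # output layer: 2 classes
)

x = torch.randn(4, 1, 64, 64)                    # batch of 4 single-channel images
y = torch.randint(0, 2, (4,))                    # dummy labels
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()                                  # backpropagation computes gradients
torch.optim.SGD(model.parameters(), lr=0.01).step()  # update internal parameters
```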

DL task categories in radiation oncology can be divided according to the main purpose of the algorithm, as follows: image fusion, image segmentation, AP, plan evaluation, and prognosis and outcome prediction. The evaluation criteria include the receiver operating characteristic (ROC) curve, area under the ROC curve (AUC), dose-volume histogram, dose difference map, F1 score, accuracy, specificity, sensitivity, precision, dice similarity coefficient, average accuracy, and Jaccard index. 25
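
For readers unfamiliar with these criteria, the following minimal sketch computes several of them for a binary prediction task; the labels and scores are made-up toy values, not data from any reviewed study:

```python
# Minimal sketch: common classification metrics with scikit-learn.
from sklearn.metrics import (roc_auc_score, f1_score, accuracy_score,
                             precision_score, recall_score, confusion_matrix)

y_true  = [0, 0, 1, 1, 1, 0, 1, 0]                   # ground-truth labels
y_score = [0.1, 0.4, 0.8, 0.7, 0.9, 0.3, 0.6, 0.2]   # model probabilities
y_pred  = [1 if s >= 0.5 else 0 for s in y_score]    # thresholded predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("AUC        ", roc_auc_score(y_true, y_score))
print("F1         ", f1_score(y_true, y_pred))
print("accuracy   ", accuracy_score(y_true, y_pred))
print("precision  ", precision_score(y_true, y_pred))
print("sensitivity", recall_score(y_true, y_pred))   # recall = sensitivity
print("specificity", tn / (tn + fp))
```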

Material and Methods

Here we provide a systematic review of publications using CNN technology for medical image analysis available in the National Library of Medicine database (PubMed). The search equation was the following: (convolutional OR deep learning) AND (radiotherapy) AND (image fusion OR image segmentation OR auto-planning and dose distribution prediction OR prediction of efficacy and side effects), filtered for “Human studies” and with “Title/Abstract” as the search field.

The selected articles were screened according to a standard grid containing the following items: aim of the study; methods (network architecture, dataset, training, validation, test, comparison method); results (accuracy, sensitivity, and specificity); and conclusion.

Implementation Area

Image fusion: Medical image fusion technology can fuse medical images of multiple modalities, thereby making the medical diagnosis and treatment process more reliable and accurate. 26 Image registration is an important part of image fusion; its process is to find the spatial mapping between the pixels of one image and those of another. This mapping is not unique; the core purpose is to identify the transformation relationship between different images, which may be taken at different times (multitemporal registration) or by different sensors in different places (multimodal registration). The relationship between these images can be rigid (translation and rotation), affine, homographic, or a complex large-deformation model. For image-guided radiotherapy, radiosurgery, and interventional radiotherapy, image registration is one of the key technologies of auxiliary medical care. In recent years, DL, especially CNN, has achieved good results in medical image processing, and medical registration research has developed rapidly. 27 The 2 main types of existing medical image registration methods are gray-scale-based (intensity-based) methods and feature-based methods. The primary steps of image registration include geometric transformation, image combination, image similarity measurement, iterative optimization, and interpolation. 28 In the traditional registration method, the cost function is iteratively optimized from scratch for each image pair, which severely limits registration speed. Compared with traditional medical image registration methods, the greatest contribution of DL to medical image registration is to resolve the problem of slow medical image processing. 27 Eppenhof and Pluim 29 studied a CNN-based deformable registration algorithm and compared it with traditional algorithms; their results showed that the registration speed of DL networks is hundreds of times that of traditional registration methods, with an average of 0.58 ± 0.07 s. Among the existing research results, the main DL methods used are CNN and FCN frameworks. A study by Cao et al 30 used a CNN for brain magnetic resonance imaging (MRI) registration; the results showed that the Dice similarity coefficient (DSC) for registration of gray matter, white matter, and cerebrospinal fluid improved by 2.6%. Fan et al 31 compared 7 different deformable registration algorithms for brain MRI; the DL network, which required no iterative optimization, also required the least registration time, and its registration accuracy was likewise improved. Research by Jiang et al 32 showed that CNN-based 4-dimensional computed tomography (CT) deformable image registration of the lung had the smallest error compared with various traditional methods, with a registration time of 1.4 s. Hasenstab et al 33 evaluated the performance of a CNN algorithm for liver registration in 314 patients and compared it with manual image registration; liver overlap and image correlation were higher for automatic registration than for manual registration. In conclusion, DL has made medical image registration more rapid and accurate, which is in keeping with the needs of clinical practice.
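
To make the speed argument concrete, the sketch below outlines unsupervised CNN registration in the spirit of the methods above (illustrative only; the tiny 2-layer network, loss weights, and 2D toy images are assumptions, not any reviewed architecture). The network predicts a deformation field in a single forward pass, so no per-image iterative optimization is needed at test time:

```python
# Minimal sketch: unsupervised deformable registration with a CNN, trained by
# an image-similarity term plus a smoothness penalty on the deformation field.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 2, 3, padding=1),      # 2-channel displacement field (dx, dy)
        )
    def forward(self, moving, fixed):
        return self.net(torch.cat([moving, fixed], dim=1))

def warp(img, flow):
    # Sample the moving image at the identity grid plus the predicted flow.
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    grid = torch.stack([xs, ys], dim=-1).unsqueeze(0).expand(b, -1, -1, -1)
    return F.grid_sample(img, grid + flow.permute(0, 2, 3, 1), align_corners=True)

model = RegNet()
moving, fixed = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
flow = model(moving, fixed)
sim = F.mse_loss(warp(moving, flow), fixed)                              # similarity
smooth = flow.diff(dim=2).abs().mean() + flow.diff(dim=3).abs().mean()   # regularizer
(sim + 0.1 * smooth).backward()                                          # one training step
```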

Image segmentation: The delineation of the CTV and OARs of tumor patients is a critical and time-consuming part of the radiotherapy process. Usually, it is manually delineated by the radiation oncologist, and the delineation results are often inconsistent because the experience of the delineator varies. The emergence of DL has made automatic segmentation of the tumor and OARs possible. The DSC is usually used as the indicator to evaluate the reliability of the tested software's output in this field: the closer the DSC is to 1, the higher the degree of overlap between the 2 delineations, and a DSC equal to 1 means the 2 delineations overlap completely.
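
A short worked example of the DSC (and the related Jaccard index used elsewhere in this review) on two toy binary masks; the mask geometry is an arbitrary illustration:

```python
# Minimal sketch: DSC and Jaccard index for two binary segmentation masks.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a: np.ndarray, b: np.ndarray) -> float:
    inter = np.logical_and(a, b).sum()
    return inter / np.logical_or(a, b).sum()

auto   = np.zeros((64, 64), bool); auto[10:40, 10:40] = True    # automatic contour
manual = np.zeros((64, 64), bool); manual[12:42, 12:42] = True  # manual gold standard
print(f"DSC = {dice(auto, manual):.3f}, Jaccard = {jaccard(auto, manual):.3f}")
```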

At present, several scholars have applied AI to OAR and CTV delineation for head and neck tumors, lung cancer, breast cancer, prostate cancer, rectal cancer, and cervical cancer. 34-51 Zeineldin et al evaluated the performance of different CNN models in 125 cases of glioma; compared with manual delineation, the DSC of several CNN models was 81% to 84%. Zeineldin et al believed that different CNN models could be applied to magnetic resonance images and that segmentation of brain tumors was feasible. 34 Deng et al developed a novel brain tumor segmentation method, which integrated a fully convolutional neural network (FCNN) and the dense micro-block difference feature (DMDF) into a unified framework to segment brain tumors in the MRIs of 100 patients. The average DSC was as high as 90.98%, and the segmentation time was less than 1 s; compared with traditional MRI brain tumor segmentation methods, the experimental results showed greatly improved segmentation accuracy and stability. 35 Ye et al used an automated CNN-based method to segment nasopharyngeal carcinoma on dual-sequence MRI; after contour training on 44 patients with nasopharyngeal carcinoma, the test results in 7 patients had an average DSC of 0.87. 36 Another prospective study used a deep convolutional neural network (DCNN) to train and automatically segment the gross tumor volume (GTV) of 22 patients with head and neck cancer on co-registered positron emission tomography-computed tomography (PET-CT) images. Oncologists and radiologists had manually determined the gold-standard GTV by consensus; the automatic segmentation time was less than 1 min, and the average DSC was 0.785. 37 Tong et al used a DNN to segment 9 OARs (brain stem, optic chiasm, mandible, optic nerves, parotid glands, and submandibular glands) in 22 head and neck cases; the average DSC ranged from 0.58 to 0.93, and the median segmentation time for all OARs was 9.5 s. 38

Zhu et al tested automatic segmentation of 9 OARs (brain stem, optic chiasm, mandible, left and right optic nerves, left and right parotid glands, and left and right submandibular glands) in the CT images of 261 patients with nasopharyngeal carcinoma based on a DL framework. Their model, called AnatomyNet, used a single network to segment the OARs with end-to-end training. Zhu et al found that, compared with the traditional U-Net model, AnatomyNet improved the DSC by 2% to 3%, and 6 of the 9 anatomical structures were segmented better than with U-Net. 39 Subsequently, Dai et al proposed a DCNN that used a 3-dimensional (3D) U-Net combined with 2 loss functions, dice loss and generalized dice loss, to automatically segment 19 OARs (left and right eyeballs, left and right optic nerves, left and right lenses, left and right inner ears, left and right temporomandibular joints, left and right parotid glands, left and right submandibular glands, brainstem, spinal cord, thyroid, laryngo-esophagus-tracheal (LET), and oral cavity) in patients with nasopharyngeal carcinoma. A total of 496 patients were enrolled; 376 cases were randomly assigned to the training set, 60 to the validation set, and 60 to the test set. Overall, the average DSC of the 19 OARs was 0.91, and the Jaccard distance was 0.15. Compared with Zhu's method, the 3D U-Net DCNN combined with the dice loss function could be better applied to the automatic segmentation of head and neck OARs, and with a segmentation time within 20 s it also achieved ideal automatic segmentation results for small-volume OARs. 40 Shapey et al studied the performance of a 2.5-dimensional CNN in automatically segmenting vestibular schwannomas after training on the MRIs of 243 patients; compared with manual segmentation, the DSC was 93.43% for T1-weighted segmentation and 93.68% for T2-weighted segmentation. 41 A prospective study included 126 patients with intracranial meningioma: the target volume contours manually drawn on T1/T2-weighted MRI by 2 experienced doctors were compared with the results of a trained DNN. In these patients, the comparison between the DL model and manual segmentation showed an average DSC of 0.91 ± 0.08 for the contrast-enhanced tumor volume and 0.82 ± 0.12 for the total lesion volume. 42 Another study trained a 2-dimensional CNN on the head CT images of 300 patients to automatically segment the ventricles; compared with manual delineation, the DSCs of the left, right, and third ventricles were 0.92, 0.92, and 0.79, respectively. 43 Many reports are thus available on DL-based automatic segmentation for head and neck cancer.

Peng et al applied CNNs to automatic segmentation of OARs in the chest and abdomen, developing and training a U-Net-based CNN on 60 chest CT scans and 43 abdominal CT scans, with 5 organs segmented on chest CT and 8 on abdominal CT. Compared with manual delineation, the median DSCs were 0.97 (right lung), 0.96 (left lung), 0.92 (heart), 0.86 (spinal cord), and 0.76 (esophagus) for the chest, and 0.96 (spleen), 0.96 (liver), 0.95 (left kidney), 0.90 (stomach), 0.87 (gallbladder), 0.80 (pancreas), 0.75 (esophagus), and 0.61 (duodenum) for the abdomen. The automatic segmentation time for each patient did not exceed 5 s. The researchers believed this work showed that multiorgan CT image segmentation could be performed with clinically acceptable accuracy and efficiency. 44 Wang et al developed a patient-specific adaptive convolutional neural network (A-NET) to segment lung tumors in the chest MRIs of 9 patients who underwent a chest MRI every week during radiotherapy; taking the previously scanned images as the training set and using the latest images for verification, the DSC obtained compared with manual segmentation was 0.81 ± 0.10. 45 Another prospective study proposed a new multimodal segmentation method based on a 3D FCN that considers PET and CT information simultaneously for lung tumor segmentation. Validated on a dataset of 84 lung cancer patients and compared with contours drawn by experienced radiation oncologists, the average DSC was 0.85, a significant performance gain over CNN-based methods and traditional methods that used only PET or CT. 46 Zabihollahy et al studied an ensemble learning model based on 3D U-Net to detect and delineate kidney tumors, using the contrast-enhanced CT images of 315 patients as training and test sets; compared with the gold standard, the average DSC for kidney tumor delineation with 3D U-Net was 85.95% ± 1.46%. 47 Chen et al developed a new cervical cancer segmentation method (called PIC-S-CNN), compared it with 6 different segmentation methods, and obtained the best segmentation result, with an average DSC of 0.84. Chen et al believed that combining DL with anatomical prior information could improve the accuracy of cervical tumor segmentation. Scholars who have studied CNN-based models to automatically segment pancreatic tumors, 48 liver tumors, 49 colorectal tumors, 50 and prostate tumors 51 have also achieved good segmentation results. These findings show that DL can save clinicians a significant amount of time in delineating the CTV and OARs. Most of these delineation results have met the requirements of clinical treatment and achieved better results than manual delineation by physicians. Not only does this method have high repeatability, but it can also reduce the interobserver variability (IOV) among physicians.
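
Several of the studies above train their segmentation networks with a dice loss. A minimal sketch of a soft multi-organ dice loss follows (illustrative, not the exact loss of any cited paper; the tensor shapes are assumptions):

```python
# Minimal sketch: soft Dice loss for multi-organ segmentation training.
# `pred` holds per-voxel probabilities, `target` the one-hot ground truth.
import torch

def dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    dims = tuple(range(2, pred.ndim))            # sum over spatial axes
    inter = (pred * target).sum(dims)
    denom = pred.sum(dims) + target.sum(dims)
    return 1.0 - ((2.0 * inter + eps) / (denom + eps)).mean()  # average over organs

pred   = torch.rand(2, 9, 16, 32, 32)            # batch=2, 9 OAR channels, 3D volume
target = (torch.rand_like(pred) > 0.5).float()   # dummy ground-truth masks
print(dice_loss(pred, target))
```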

AP and dose distribution prediction: The ability to automatically generate plans and to predict an acceptable dose distribution a priori is one of the most important aspects of AI-related radiotherapy plan implementation. Liu et al designed a DNN (called deep MTP) to generate pseudo-CT for AP based on the MRIs of brain tumor patients; comparing the dose parameters of the automatically generated plans with those of the clinical treatment plans, the dose distributions provided by the automatic plans were not significantly different. 52 Fan et al trained a DNN framework on 195 patients with head and neck cancer to predict the dose distribution of head and neck patients receiving radiotherapy, using 25 cases for validation and 50 cases for testing. The results showed that, except for the brainstem and lens, no statistically significant differences from the actual clinical plans were detected in any clinically relevant dosimetric parameter. 53 The combination of CNN and the Monte Carlo (MC) method can be used to predict the dose in brachytherapy: in another study, 47 prostate cancer patients were used as a training set and 14 prostate cancer patients and 10 cervical cancer patients as a test set, and the results met clinical needs. The accuracy of the algorithm was close to that of the MC method alone, and the calculation time was significantly reduced. 54 Kajikawa et al compared a CNN with traditional ML methods for predicting the dose distribution of intensity-modulated radiotherapy plans for prostate cancer patients; they used an adaptive moment estimation algorithm to optimize a 3D U-Net, and the results showed that the CNN model predicted a dose distribution better than or comparable to that produced by the traditional method. 55 A similar approach to predicting dose distribution for prostate cancer was validated in plans for 80 prostate cancer patients. 56 Although the clinically available ML-based automatic planning tools effectively save time, the generated plans still must be corrected manually. In the future, DL-based AP commercial software is expected to generate plans that directly meet the needs of clinical treatment.
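
The studies above frame dose prediction as image-to-image regression: CT and structure contours go in, and a voxel-wise dose map comes out. Here is a minimal sketch under that assumption (the 3-layer stand-in network, the mask geometry, and the 60 Gy prescription are invented for illustration, not any cited model):

```python
# Minimal sketch: dose distribution prediction as 3D voxel-wise regression.
import torch
import torch.nn as nn

model = nn.Sequential(                       # stand-in for a full 3D U-Net
    nn.Conv3d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, 1),                     # one output channel: dose per voxel
)

ct  = torch.randn(1, 1, 32, 64, 64)                                  # CT volume
ptv = torch.zeros(1, 1, 32, 64, 64); ptv[..., 20:40, 20:40] = 1      # target mask
oar = torch.zeros(1, 1, 32, 64, 64); oar[..., 45:60, 45:60] = 1      # OAR mask
planned = 60.0 * ptv + torch.randn_like(ct)  # dummy "clinical plan" dose in Gy

pred = model(torch.cat([ct, ptv, oar], dim=1))
loss = nn.functional.mse_loss(pred, planned) # voxel-wise dose regression loss
loss.backward()
```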

Other applications (prediction of efficacy and side effects): The vast majority of DL predictions of radiotherapy outcome use DL to predict toxicity after radiotherapy. Xerostomia usually occurs in patients receiving radiation therapy to the head and neck. Men et al developed a model to predict xerostomia after radiotherapy based on a 3D residual convolutional neural network, including 784 patients with head and neck squamous cell carcinoma from the RTOG 0522 trial; using planning CT images, 3D dose distributions, and contours of the parotid and submandibular glands as inputs, good prediction results were obtained. 57 In patients with non-small-cell lung cancer (NSCLC), CNN has been used to predict tumor recurrence after stereotactic radiotherapy, with a model established from CT images reviewed after radiotherapy. In the analysis of 1605 features from 46 patients, 5 predicted local recurrence, 3 of which predicted lobar recurrence, and 7 predicted overall survival. 58 Liang et al constructed a 3D CNN model to predict the occurrence of radiation pneumonitis after thoracic radiotherapy and compared it with 3 prediction models based on multiple logistic regression. The 4 prediction models were all validated in 70 patients with NSCLC who received volumetric-modulated arc therapy; the results showed that the CNN performed better than the traditional models, with an AUC of 0.842. 59 Lee et al applied ML and bioinformatics tools to genome-wide data to predict and explain late genitourinary toxicity in prostate cancer patients after radiotherapy. 60 DL prediction of the efficacy and side effects of radiotherapy can help identify, in clinical work, the patients likely to benefit from radiotherapy and allow clinicians to prepare for possible side effects.
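
Structurally, the toxicity models above are 3D CNN classifiers over dose distributions, evaluated by AUC. A minimal sketch of that setup (the architecture and dummy data are assumptions, not Liang et al's model):

```python
# Minimal sketch: a 3D CNN that maps a dose distribution volume to a
# toxicity probability, evaluated by AUC.
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score

model = nn.Sequential(
    nn.Conv3d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
    nn.Conv3d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool3d(1),
    nn.Flatten(), nn.Linear(16, 1),
)

dose  = torch.randn(20, 1, 32, 32, 32)                 # 20 dummy 3D dose volumes
label = (torch.arange(20) % 2).float().unsqueeze(1)    # 1 = toxicity occurred

logits = model(dose)
loss = nn.functional.binary_cross_entropy_with_logits(logits, label)
loss.backward()                                        # one training step

probs = torch.sigmoid(logits).detach().numpy().ravel()
print("AUC =", roc_auc_score(label.numpy().ravel(), probs))
```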

Conclusion

Radiation oncology is a medical specialty that closely integrates technology and computers, drawing together computer science, statistics, and clinical knowledge. In the process of clinical radiotherapy, AI algorithms can work continuously and efficiently. In particular, the emergence of DL algorithms makes it possible to automatically perform tedious tasks, reduce deviations in dose distribution, and predict adverse effects after radiotherapy. CNN training is a key step that requires specific technical skills to avoid overfitting limited data, which can lead to problems when the network is used to analyze wider data sets; training therefore needs to be evaluated and monitored. This method is expected to become the third hand of radiation oncologists. The open-source nature and public availability of AI libraries enable clinical researchers from various fields to study and use AI algorithms, which can improve objectivity, reduce the need for manual intervention, and reduce staff workload, while greatly improving the repeatability of the process. Because the internal operations of DL algorithms are an opaque “black box,” applying them to clinical practice remains challenging. 11 Some systems provide partial visualization techniques (heat maps, probability maps) that offer certain views of a CNN’s internal functions. Understanding how these networks “work” is a relevant and significant challenge in medical AI.
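
One simple form the monitoring mentioned above can take is early stopping on a held-out validation set, which guards against overfitting limited data. In this sketch, train_one_epoch and validate are hypothetical placeholders for a real training pipeline:

```python
# Minimal sketch: validation monitoring with early stopping.
best_val, patience, bad_epochs = float("inf"), 5, 0

def train_one_epoch(epoch: int) -> None:        # placeholder training step
    pass

def validate(epoch: int) -> float:              # placeholder validation loss
    return 1.0 / (epoch + 1) + 0.01 * max(0, epoch - 20)

for epoch in range(100):
    train_one_epoch(epoch)
    val_loss = validate(epoch)
    if val_loss < best_val:                     # validation improving: keep training
        best_val, bad_epochs = val_loss, 0
    else:                                       # no improvement: count toward patience
        bad_epochs += 1
        if bad_epochs >= patience:              # stop before overfitting worsens
            print(f"early stop at epoch {epoch}, best val loss {best_val:.4f}")
            break
```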

Currently available clinical automatic registration and automatic segmentation software based on ML algorithms requires manual correction before clinical use, and the segmentation results for small organs are not ideal. 61,62 The current technology and frameworks have limitations, including model interpretability, data heterogeneity, and the lack of common benchmarks. 63 Even if these AI systems show high accuracy in a laboratory environment, it is difficult to verify medical AI systems practically in clinical work; this difficulty is called the last mile of implementation. 64 Therefore, before clinical implementation, in-depth research is needed to evaluate the performance of DL algorithms. 65 One way to make DL results more acceptable in clinical practice is to enable doctors to understand the internal workings of the equipment they use, and the software must provide data protection, algorithm transparency, and accountability to earn clinician and patient trust. 66 Artificial intelligence has clearly demonstrated its efficiency in radiotherapy tasks, but for most applications, comparative clinical studies showing that the technology can be integrated into the clinical workflow are still lacking. Nevertheless, the robustness of the current results and the simple interfaces that can be designed around trained CNNs lay the foundation for direct, time-saving, reliable, and practical applications. A trained CNN could then be treated as a colleague that provides expert second opinions on difficult clinical issues. In addition, CNNs are inherently unaffected by confounding factors such as fatigue, personal beliefs, or hierarchical issues, so inter- and intra-individual variability will be minimized when completing specific tasks.

CNNs will completely change all processes in the field of radiotherapy, and the role of practitioners is crucial to the development and implementation of such equipment. By understanding deep learning, participating in the conception and evaluation of new equipment, and contributing to the regulatory framework for this new type of medical activity, physicians now have the opportunity to participate in this scientific revolution.

Supplemental Material

Supplemental Material, sj-pdf-1-tct-10.1177_15330338211016386 for The Application and Development of Deep Learning in Radiotherapy: A Systematic Review by Danju Huang, Han Bai, Li Wang, Yu Hou, Lan Li, Yaoxiong Xia, Zhirui Yan, Wenrui Chen, Li Chang and Wenhui Li in Technology in Cancer Research & Treatment

Supplemental Material, sj-pdf-2-tct-10.1177_15330338211016386 for The Application and Development of Deep Learning in Radiotherapy: A Systematic Review by Danju Huang, Han Bai, Li Wang, Yu Hou, Lan Li, Yaoxiong Xia, Zhirui Yan, Wenrui Chen, Li Chang and Wenhui Li in Technology in Cancer Research & Treatment

Acknowledgments

We thank LetPub (www.letpub.com) for its linguistic assistance during the preparation of this manuscript.

Authors’ Note: Our study did not require an ethical board approval because it did not contain human or animal trials. The authors have completed the STROBE guideline checklist. The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. All data generated or analyzed during this study are included in this published article. Wenhui Li, Li Chang, and Danju Huang carried out the concepts, design, definition of intellectual content, literature search, data acquisition, and manuscript review. Han Bai and Li Wang provided assistance for data acquisition and manuscript editing. Yu Hou, Lan Li, and Yaoxiong Xia carried out the literature search and data acquisition. Zhirui Yan and Wenrui Chen performed data acquisition and manuscript preparation. All authors have read and approved the content of the manuscript. Danju Huang, Han Bai, and Li Wang contributed equally to this work.

Declaration of Conflicting Interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This study was supported by grants from the Ten-Thousand Talents Program of Yunnan Province (Yunling Scholar, Youth Talent), the Yunnan Provincial Training Funds for Middle-Young Academic and Technical Leader Candidates (202005AC160025), and the Yunnan Provincial Training Funds for High-Level Health Technical Personnel (No. L-2018001).

Supplemental Material: Supplemental material for this article is available online.

References

1. Fedewa R, Puri R, Fleischman E, Lee J, Prabhu D, Wilson DL, et al. Artificial intelligence in intracoronary imaging. Curr Cardiol Rep. 2020;22(7):46. doi:10.1007/s11886-020-01299-w
2. Battista P, Salvatore C, Berlingeri M, Cerasa A, Castiglioni I. Artificial intelligence and neuropsychological measures: the case of Alzheimer’s disease. Neurosci Biobehav Rev. 2020;114:211–228. doi:10.1016/j.neubiorev.2020.04.026
3. Fitzpatrick F, Doherty A, Lacey G. Using artificial intelligence in infection prevention. Curr Treat Options Infect Dis. 2020:1–10. doi:10.1007/s40506-020-00216-7
4. Exarchos KP, Beltsiou M, Votti CA, Kostikas K. Artificial intelligence techniques in asthma: a systematic review and critical appraisal of the existing literature. Eur Respir J. 2020;56(3):2000521. doi:10.1183/13993003.00521-2020
5. Shung DL, Byrne MF. How artificial intelligence will impact colonoscopy and colorectal screening. Gastrointest Endosc Clin N Am. 2020;30(3):585–595. doi:10.1016/j.giec.2020.02.010
6. Albahri AS, Hamid RA, Alwan JK, et al. Role of biological data mining and machine learning techniques in detecting and diagnosing the novel coronavirus (COVID-19): a systematic review. J Med Syst. 2020;44(7):122. doi:10.1007/s10916-020-01582-x
7. Yoon HJ, Kim JH. Lesion-based convolutional neural network in diagnosis of early gastric cancer. Clin Endosc. 2020;53(2):127–131. doi:10.5946/ce.2020.046
8. Kinross JM, Mason SE, Mylonas G, Darzi A. Next-generation robotics in gastrointestinal surgery. Nat Rev Gastroenterol Hepatol. 2020;17(7):430–440. doi:10.1038/s41575-020-0290-z
9. Maia E, Assis LC, de Oliveira TA, da Silva AM, Taranto AG. Structure-based virtual screening: from classical to artificial intelligence. Front Chem. 2020;8:343. doi:10.3389/fchem.2020.00343
10. Meyer P, Noblet V, Mazzara C, Lallement A. Survey on deep learning for radiotherapy. Comput Biol Med. 2018;98:126–146. doi:10.1016/j.compbiomed.2018.05.018
11. Francolini G, Desideri I, Stocchi G, et al. Artificial intelligence in radiotherapy: state of the art and future directions. Med Oncol. 2020;37(6):50. doi:10.1007/s12032-020-01374-w
12. Mutasa S, Sun S, Ha R. Understanding artificial intelligence based radiology studies: what is overfitting? Clin Imaging. 2020;65:96–99. doi:10.1016/j.clinimag.2020.04.025
13. Hügle M, Omoumi P, van Laar JM, Boedecker J, Hügle T. Applied machine learning and artificial intelligence in rheumatology. Rheumatol Adv Pract. 2020;4(1):rkaa005. doi:10.1093/rap/rkaa005
14. DeLisle RK, Dixon SL. Induction of decision trees via evolutionary programming. J Chem Inf Comput Sci. 2004;44(3):862–870. doi:10.1021/ci034188s
15. Li Y, Zhang T. Deep neural mapping support vector machines. Neural Netw. 2017;93:185–194. doi:10.1016/j.neunet.2017.05.010
16. Santafé G, Lozano JA, Larrañaga P. Bayesian model averaging of naive Bayes for clustering. IEEE Trans Syst Man Cybern B Cybern. 2006;36(5):1149–1161. doi:10.1109/tsmcb.2006.874132
17. Li X, Zhang Y, Li M, Marsic I, Yang J, Burd RS. Deep neural network for RFID-based activity recognition. Proc Eighth Wirel Stud Stud Stud Workshop. 2016;2016:24–26. doi:10.1145/2987354.2987355
18. Akbilgic O, Davis RL. The promise of machine learning: when will it be delivered? J Card Fail. 2019;25(6):484–485. doi:10.1016/j.cardfail.2019.04.006
19. Suzuki K. Overview of deep learning in medical imaging. Radiol Phys Technol. 2017;10(3):257–273. doi:10.1007/s12194-017-0406-5
20. LeCun Y, Bengio Y, Hinton G. Deep learning. Nature. 2015;521(7553):436–444. doi:10.1038/nature14539
21. Hirschberg J, Manning CD. Advances in natural language processing. Science. 2015;349(6245):261–266. doi:10.1126/science.aaa8685
22. Bochner JH, Garrison WM, Doherty KA. The NTID speech recognition test: NSRT(®). Int J Audiol. 2015;54(7):490–498. doi:10.3109/14992027.2014.991976
23. He K, Zhang X, Ren S, Sun J. Spatial pyramid pooling in deep convolutional networks for visual recognition. IEEE Trans Pattern Anal Mach Intell. 2015;37(9):1904–1916. doi:10.1109/TPAMI.2015.2389824
24. Zhou DX. Theory of deep convolutional neural networks: downsampling. Neural Netw. 2020;124:319–327. doi:10.1016/j.neunet.2020.01.018
25. Munir K, Elahi H, Ayub A, Frezza F, Rizzi A. Cancer diagnosis using deep learning: a bibliographic review. Cancers (Basel). 2019;11(9):1235. doi:10.3390/cancers11091235
26. Wang K, Zheng M, Wei H, Qi G, Li Y. Multi-modality medical image fusion using convolutional neural network and contrast pyramid. Sensors (Basel). 2020;20(8):2169. doi:10.3390/s20082169
27. Zou M, Yang H, Pan G, Zhong Y. Research progress and challenges of deep learning in medical image registration [in Chinese]. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi. 2019;36(4):677–683. doi:10.7507/1001-5515.201810004
28. Song G, Han J, Zhao Y, Wang Z, Du H. A review on medical image registration as an optimization problem. Curr Med Imaging Rev. 2017;13(3):274–283. doi:10.2174/1573405612666160920123955
29. Eppenhof K, Pluim J. Pulmonary CT registration through supervised learning with convolutional neural networks. IEEE Trans Med Imaging. 2019;38(5):1097–1105. doi:10.1109/TMI.2018.2878316
30. Cao X, Yang J, Zhang J, et al. Deformable image registration based on similarity-steered CNN regression. Med Image Comput Comput Assist Interv. 2017;10433:300–308. doi:10.1007/978-3-319-66182-7_35
31. Fan J, Cao X, Yap PT, Shen D. BIRNet: brain image registration using dual-supervised fully convolutional networks. Med Image Anal. 2019;54:193–206. doi:10.1016/j.media.2019.03.006
32. Jiang Z, Yin FF, Ge Y, Ren L. A multi-scale framework with unsupervised joint training of convolutional neural networks for pulmonary deformable image registration. Phys Med Biol. 2020;65(1):015011. doi:10.1088/1361-6560/ab5da0
33. Hasenstab KA, Cunha GM, Higaki A, et al. Fully automated convolutional neural network-based affine algorithm improves liver registration and lesion co-localization on hepatobiliary phase T1-weighted MR images. Eur Radiol Exp. 2019;3(1):43. doi:10.1186/s41747-019-0120-7
34. Zeineldin RA, Karar ME, Coburger J, Wirtz CR, Burgert O. DeepSeg: deep neural network framework for automatic brain tumor segmentation using magnetic resonance FLAIR images. Int J Comput Assist Radiol Surg. 2020;15(6):909–920. doi:10.1007/s11548-020-02186-z
35. Deng W, Shi Q, Luo K, Yang Y, Ning N. Brain tumor segmentation based on improved convolutional neural network in combination with non-quantifiable local texture feature. J Med Syst. 2019;43(6):152. doi:10.1007/s10916-019-1289-2
36. Ye Y, Cai Z, Huang B, He Y, Zeng P, Zou G, et al. Fully-automated segmentation of nasopharyngeal carcinoma on dual-sequence MRI using convolutional neural networks. Front Oncol. 2020;10:166. doi:10.3389/fonc.2020.00166
37. Huang B, Chen Z, Wu PM, et al. Fully automated delineation of gross tumor volume for head and neck cancer on PET-CT using deep learning: a dual-center study. Contrast Media Mol Imaging. 2018;2018:8923028. doi:10.1155/2018/8923028
38. Tong N, Gou S, Yang S, Ruan D, Sheng K. Fully automatic multi-organ segmentation for head and neck cancer radiotherapy using shape representation model constrained fully convolutional neural networks. Med Phys. 2018;45(10):4558–4567. doi:10.1002/mp.13147
39. Zhu W, Huang Y, Zeng L, et al. AnatomyNet: deep learning for fast and fully automated whole-volume segmentation of head and neck anatomy. Med Phys. 2019;46(2):576–589. doi:10.1002/mp.13300
40. Dai X, Wang X, Du L, et al. Automatic segmentation of head and neck organs at risk based on three-dimensional U-NET deep convolutional neural network [in Chinese]. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi. 2020;37(1):136–141. doi:10.7507/1001-5515.201903052
41. Shapey J, Wang G, Dorent R, et al. An artificial intelligence framework for automatic segmentation and volumetry of vestibular schwannomas from contrast-enhanced T1-weighted and high-resolution T2-weighted MRI. J Neurosurg. 2019:1–9. doi:10.3171/2019.9.JNS191949
42. Laukamp KR, Pennig L, Thiele F, et al. Automated meningioma segmentation in multiparametric MRI: comparable effectiveness of a deep learning model and manual segmentation. Clin Neuroradiol. 2020. doi:10.1007/s00062-020-00884-4
43. Huff TJ, Ludwig PE, Salazar D, Cramer JA. Fully automated intracranial ventricle segmentation on CT with 2D regional convolutional neural network to estimate ventricular volume. Int J Comput Assist Radiol Surg. 2019;14(11):1923–1932. doi:10.1007/s11548-019-02038-5
44. Peng Z, Fang X, Yan P, et al. A method of rapid quantification of patient-specific organ doses for CT using deep-learning-based multi-organ segmentation and GPU-accelerated Monte Carlo dose computing. Med Phys. 2020;47(6):2526–2536. doi:10.1002/mp.14131
45. Wang C, Tyagi N, Rimner A, et al. Segmenting lung tumors on longitudinal imaging studies via a patient-specific adaptive convolutional neural network. Radiother Oncol. 2019;131:101–107. doi:10.1016/j.radonc.2018.10.037
46. Zhao X, Li L, Lu W, Tan S. Tumor co-segmentation in PET/CT using multi-modality fully convolutional neural network. Phys Med Biol. 2018;64(1):015011. doi:10.1088/1361-6560/aaf44b
47. Fatemeh Z, Nicola S, Satheesh K, Eranga U. Ensemble U-net-based method for fully automated detection and segmentation of renal masses on computed tomography images. Med Phys. 2020;47(9):4032–4044. doi:10.1002/mp.14193
48. Liang Y, Schott D, Zhang Y, et al. Auto-segmentation of pancreatic tumor in multi-parametric MRI using deep convolutional neural networks. Radiother Oncol. 2020;145:193–200. doi:10.1016/j.radonc.2020.01.021
49. Chen Y, Wang K, Liao X, et al. Channel-Unet: a spatial channel-wise convolutional neural network for liver and tumors segmentation. Front Genet. 2019;10:1110. doi:10.3389/fgene.2019.01110
50. Soomro MH, Coppotelli M, Conforto S, et al. Automated segmentation of colorectal tumor in 3D MRI using 3D multiscale densely connected convolutional neural network. J Healthc Eng. 2019;2019:1075434. doi:10.1155/2019/1075434
51. Zabihollahy F, Schieda N, Krishna Jeyaraj S, Ukwatta E. Automated segmentation of prostate zonal anatomy on T2-weighted (T2W) and apparent diffusion coefficient (ADC) map MR images using U-Nets. Med Phys. 2019;46(7):3078–3090. doi:10.1002/mp.13550
52. Liu F, Yadav P, Baschnagel AM, McMillan AB. MR-based treatment planning in radiation therapy using a deep learning approach. J Appl Clin Med Phys. 2019;20(3):105–114. doi:10.1002/acm2.12554
53. Fan J, Wang J, Chen Z, Hu C, Zhang Z, Hu W. Automatic treatment planning based on three-dimensional dose distribution predicted from deep learning technique. Med Phys. 2019;46(1):370–381. doi:10.1002/mp.13271
54. Mao X, Pineau J, Keyes R, Enger SA. RapidBrachyDL: rapid radiation dose calculations in brachytherapy via deep learning. Int J Radiat Oncol Biol Phys. 2020;108(3):802–812. doi:10.1016/j.ijrobp.2020.04.045
55. Kajikawa T, Kadoya N, Ito K, et al. A convolutional neural network approach for IMRT dose distribution prediction in prostate cancer patients. J Radiat Res. 2019;60(5):685–693. doi:10.1093/jrr/rrz051
56. Ma M, Buyyounouski MK, Vasudevan V, Xing L, Yang Y. Dose distribution prediction in isodose feature-preserving voxelization domain using deep convolutional neural network. Med Phys. 2019;46(7):2978–2987. doi:10.1002/mp.13618
57. Men K, Geng H, Zhong H, Fan Y, Lin A, Xiao Y. A deep learning model for predicting xerostomia due to radiation therapy for head and neck squamous cell carcinoma in the RTOG 0522 clinical trial. Int J Radiat Oncol Biol Phys. 2019;105(2):440–447. doi:10.1016/j.ijrobp.2019.06.009
58. Mattonen SA, Palma DA, Haasbeek CJ, Senan S, Ward AD. Early prediction of tumor recurrence based on CT texture changes after stereotactic ablative radiotherapy (SABR) for lung cancer. Med Phys. 2014;41(3):033502. doi:10.1118/1.4866219
59. Liang B, Tian Y, Chen X, et al. Prediction of radiation pneumonitis with dose distribution: a convolutional neural network (CNN) based model. Front Oncol. 2019;9:1500. doi:10.3389/fonc.2019.01500
60. Lee S, Kerns S, Ostrer H, Rosenstein B, Deasy JO, Oh JH. Machine learning on a genome-wide association study to predict late genitourinary toxicity after prostate radiation therapy. Int J Radiat Oncol Biol Phys. 2018;101(1):128–135. doi:10.1016/j.ijrobp.2018.01.054
61. Eldesoky AR, Yates ES, Nyeng TB, et al. Internal and external validation of an ESTRO delineation guideline-dependent automated segmentation tool for loco-regional radiation therapy of early breast cancer. Radiother Oncol. 2016;121(3):424–430. doi:10.1016/j.radonc.2016.09.005
62. Zhao Y, Li H, Wan S, et al. Knowledge-aided convolutional neural network for small organ segmentation. IEEE J Biomed Health Inform. 2019;23(4):1363–1373. doi:10.1109/JBHI.2019.2891526
63. Shickel B, Tighe PJ, Bihorac A, Rashidi P. Deep EHR: a survey of recent advances in deep learning techniques for electronic health record (EHR) analysis. IEEE J Biomed Health Inform. 2018;22(5):1589–1604. doi:10.1109/JBHI.2017.2767063
64. Cabitza F, Campagner A, Balsano C. Bridging the “last mile” gap between AI implementation and operation: “data awareness” that matters. Ann Transl Med. 2020;8(7):501. doi:10.21037/atm.2020.03.63
65. Wong J, Fong A, McVicar N, et al. Comparing deep learning-based auto-segmentation of organs at risk and clinical target volumes to expert inter-observer variability in radiotherapy planning. Radiother Oncol. 2020;144:152–158. doi:10.1016/j.radonc.2019.10.019
66. Vayena E, Blasimme A, Cohen IG. Machine learning in medicine: addressing ethical challenges. PLoS Med. 2018;15(11):e1002689. doi:10.1371/journal.pmed.1002689


