Author manuscript; available in PMC: 2023 Feb 1.
Published in final edited form as: IEEE Trans Radiat Plasma Med Sci. 2021 Aug 24;6(2):158–181. doi: 10.1109/TRPMS.2021.3107454

Artificial Intelligence in Radiation Therapy

Yabo Fu 1, Hao Zhang 2, Eric D Morris 3, Carri K Glide-Hurst 4, Suraj Pai 5, Alberto Traverso 6, Leonard Wee 7, Ibrahim Hadzic 8, Per-Ivar Lønne 9, Chenyang Shen 10, Tian Liu 11, Xiaofeng Yang 12,*
PMCID: PMC9385128  NIHMSID: NIHMS1776970  PMID: 35992632

Abstract

Artificial intelligence (AI) has great potential to transform the clinical workflow of radiotherapy. Since the introduction of deep neural networks, many AI-based methods have been proposed to address challenges in different aspects of radiotherapy. Commercial vendors have started to release AI-based tools that can be readily integrated into the established clinical workflow. To show the recent progress in AI-aided radiotherapy, we have reviewed AI-based studies in five major aspects of radiotherapy: image reconstruction, image registration, image segmentation, image synthesis, and automatic treatment planning. In each section, we summarize and categorize the recently published methods, followed by a discussion of the challenges, concerns, and future development. Given the rapid development of AI-aided radiotherapy, the efficiency and effectiveness of radiotherapy could be substantially improved in the future through intelligent automation of its various aspects.

Keywords: Artificial Intelligence, Image Reconstruction, Image Registration, Image Segmentation, Image Synthesis, Radiotherapy, Treatment Planning

I. Introduction

ARTIFICIAL intelligence (AI) refers to data-driven agents designed to imitate human intelligence. The concept of AI is believed to have originated from the idea of robots that help humans perform laborious and time-consuming tasks. In recent years, advancements in both computer hardware and software have enabled the development of increasingly sophisticated AI agents that excel at certain complex tasks without human input. Meanwhile, the growth and sharing of data have powered the continuous evolution of AI through machine learning (ML) and deep learning (DL).

ML, a subset of AI, enables machines to achieve artificial intelligence through algorithms and statistical techniques trained with data; the training process informs the decisions made by the ML framework, improving the end result as experience is gained[14]. Supervised ML methods for automatic image segmentation involve training and tuning a predictive model, often integrating prior knowledge about an image via training samples (i.e., other similarly annotated images that inform the current segmentation task). ML employs statistical tools to explore and analyze previously labeled data, with image representations built from pre-specified filters tuned to a specific segmentation task. Although ML techniques require fewer image samples and have less complicated structures, they are often not as accurate as DL techniques [23]. DL is a subset of ML that was originally designed to mimic the learning style of the human brain using neurons. Unlike ML, where the "useful" features for the segmentation process must be chosen by the user, in DL the "useful" features are learned by the network without human intervention.

Radiation oncology is a type of cancer treatment that requires multidisciplinary expertise spanning medicine, biology, physics, and engineering. The workflow of typical radiotherapy consists of medical imaging, diagnosis, prescription, CT simulation, target registration/contouring, treatment planning, treatment quality assurance, and treatment delivery. Owing to the technological advances of the past few decades, the radiotherapy workflow has become increasingly complex, resulting in heavy reliance on human-machine interactions. Each step in the clinical workflow is highly specialized and standardized, with its own technical challenges. Meanwhile, the requirement of manual input from a diverse team of healthcare professionals, including radiation oncologists, medical physicists, medical dosimetrists, and radiation therapists, has resulted in a sub-optimal treatment process that limits wider patient access to scarce treatment infrastructure. The wide adoption of image-guided radiotherapy has created a massive amount of imaging data that must be analyzed in a short period of time. However, humans are limited in reviewing and analyzing large amounts of data due to time constraints. Machines, on the other hand, can be trained to share many repetitive workloads with humans, thereby boosting the capacity for quality healthcare. Since the introduction of deep neural networks, many AI-based methods have been proposed to address challenges in different aspects of radiotherapy. Given its rapid development, AI-aided radiotherapy could substantially improve the efficiency and effectiveness of radiotherapy through intelligent automation of its various aspects.

Several articles have been published regarding artificial intelligence in radiation oncology[30-32]. Huynh et al. provided a high-level general description of AI methods, reviewed their impact on each step of the radiation therapy workflow, and discussed how AI might change the roles of radiotherapy medical professionals[32]. Siddique et al. provided a review of AI in radiotherapy, including diagnostic processes, medical imaging, treatment planning, patient simulation, and quality assurance[31]. Vandewinckele et al. published an overview of AI-based applications in radiotherapy, focusing on the implementation and quality assurance of AI models[30]. In this study, to show the recent progress in AI-aided radiotherapy, we have reviewed AI-based studies in five major aspects of radiotherapy: image reconstruction, image registration, image segmentation, image synthesis, and automatic treatment planning. In each section, we summarize and categorize the recently published methods, followed by a discussion of the challenges, concerns, and future development. Specifically, H. Zhang contributed to the image reconstruction section; Y. Fu, T. Liu and X. Yang contributed to the image registration section; E. D. Morris and C. K. Glide-Hurst contributed to the image segmentation section; S. Pai, A. Traverso, L. Wee and I. Hadzic contributed to the image synthesis section; P. Lønne and C. Shen contributed to the automatic treatment planning section.

II. Image Reconstruction

Tomographic imaging plays an important role in external-beam radiation therapy for simulation and treatment planning, pre-treatment and intrafractional image guidance, as well as follow-up care. Before treatment, the patient usually undergoes a computed tomography (CT) simulation to acquire images of the area of the body to be treated with radiation. The acquired CT images are used to delineate the tumors and surrounding critical structures, and then to design an optimal treatment plan for the patient. For tumors around the diaphragm, such as those in the liver and lower lung lobe, 4D CT scans may also be performed to capture the respiratory motion of tumors. Owing to its superior soft-tissue contrast, magnetic resonance imaging (MRI) is also prescribed for some patients with brain tumors, paraspinal tumors, head and neck cancer, prostate cancer, and extremity sarcoma. The MRI scans are fused with the simulation CT images to facilitate tumor delineation and organs-at-risk (OAR) contouring, or used in MRI-only simulation to synthesize CT images for treatment planning and dose calculation[43]. Unlike anatomical imaging such as CT and MRI, positron emission tomography (PET) provides information on tumor metabolism and is used for visualization of tumor extent and delineation of volumes in need of dose escalation, e.g., in head and neck cancers[45]. In addition, cone-beam CT (CBCT) is equipped on most C-arm linear accelerators (LINACs) and widely used in daily procedures for verifying the position of the patient and treatment target. Gated CBCT or 4D-CBCT is sometimes utilized for positioning patients with moving tumors, such as in lung stereotactic ablative radiotherapy (SABR). Furthermore, CT, megavoltage CT (MV-CT), and MV-CBCT are also integrated into some radiotherapy machines for image guidance.
MRI-LINAC systems have also been developed in the past decade[48], in which MRI helps improve patient setup and target localization and enables interfraction and intrafraction radiotherapy adaptation[50]. Very recently, PET-guided radiation therapy has also become ready for clinical adoption to treat advanced-stage and metastatic cancers[53]. Finally, after completing the radiation treatment course, patients may have another scan (CT, MRI, or PET) before a follow-up appointment with a radiation oncologist.

In the clinic, tomographic images are displayed on the console soon after a patient scan, making it easy to be unaware of the crucial reconstruction step performed in the background by dedicated reconstruction computers. In fact, reconstruction is at the heart of tomographic imaging modalities, because many clinical tasks in the radiation therapy workflow depend heavily on reconstruction quality, including target delineation, OAR segmentation, image registration or fusion, image synthesis, treatment planning, dose calculation, patient positioning, image guidance, and radiation therapy response assessment. Poor reconstruction quality would inevitably jeopardize the accuracy of these clinical tasks and, eventually, the outcome for cancer patients. Thus, tomographic image reconstruction has always been an active area of research, aiming to reduce radiation exposure and/or scan time, suppress noise and artifacts, and improve image quality.

After data acquisition, the detector measurements are usually preprocessed/calibrated by vendors to account for various degrading factors. Then, the sensor-domain measurements y ∈ ℝ^(I×1) and the desired image x ∈ ℝ^(J×1) can be related as[55]:

y = Ax ⊕ ε (1)

where A ∈ ℝ^(I×J) is the system matrix for CT and PET (the encoding matrix for MRI), I is the number of sensor measurements, J is the number of image voxels, ε is the noise intrinsic to the data acquisition, and the operator ⊕ denotes the interaction between signal and noise. The operator ⊕ becomes + when additive Gaussian noise is assumed for CT and MRI measurements, and becomes a nonlinear operator when Poisson noise is assumed for PET measurements. Essentially, image reconstruction is an inverse problem in which we reconstruct the unknown image x from the measurements y.
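As a concrete illustration of Eq. (1), the following NumPy sketch builds a toy 1D "scanner" whose system matrix A simply averages adjacent voxels and whose noise is additive Gaussian (the ⊕ = + case for CT/MRI). The matrix shape, voxel count, and noise level are all illustrative assumptions, not a model of any particular scanner.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: J image voxels, I sensor measurements.
J, I = 16, 15
x_true = rng.uniform(0.0, 1.0, size=J)   # the unknown image x

# Hypothetical system matrix A: each measurement integrates two adjacent voxels.
A = np.zeros((I, J))
for i in range(I):
    A[i, i] = A[i, i + 1] = 0.5

sigma = 0.01
eps = rng.normal(0.0, sigma, size=I)     # additive Gaussian noise epsilon
y = A @ x_true + eps                     # Eq. (1) with "+" for CT/MRI

print(y.shape)
```

Reconstruction then amounts to inverting this forward model: recovering x from y given A, which is exactly the (here underdetermined) inverse problem the text describes.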

Various image reconstruction methods have been proposed in the past few decades. Analytical reconstruction methods, such as filtered back-projection (FBP) for CT, Feldkamp-Davis-Kress (FDK) for CBCT, and the inverse fast Fourier transform for MRI, are based on the mathematical inverse of the forward model. Because of their high efficiency and stability, they are still employed by most commercial scanners. However, the reconstructed images may suffer from excessive noise and streak artifacts when the sensor measurements are noisy and undersampled. Iterative reconstruction methods[56-61], including the algebraic reconstruction technique, statistical image reconstruction, compressed sensing, and prior-image-based reconstruction, are based on sophisticated system modeling of the data acquisition and prior knowledge. They have shown advantages over analytical methods in reducing radiation dose or data acquisition time and improving image quality. One example of these iterative methods is penalized weighted least-squares (PWLS) reconstruction, which is widely used for CT/CBCT, MRI, and PET:

x̂ = argmin_x ‖y − Ax‖²_W + βR(x) (2)

where W is a weighting matrix accounting for the reliability of each sensor measurement, R(x) is a regularization term incorporating prior knowledge or expectations of image characteristics, and β > 0 is a scalar control parameter balancing data fidelity and regularization. Commonly used regularizations are based on the Markov random field (MRF) or total variation (TV); a comprehensive review of regularization strategies can be found in[58].
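A minimal sketch of PWLS per Eq. (2), under stated simplifications: a quadratic roughness penalty ‖Dx‖² stands in for the MRF/TV regularizers, W is the identity (uniform reliability), and plain gradient descent serves as the optimizer. The toy system matrix, step size, and β are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward model: each measurement averages two adjacent voxels.
J, I = 16, 15
A = np.zeros((I, J))
for i in range(I):
    A[i, i] = A[i, i + 1] = 0.5
x_true = np.concatenate([np.zeros(8), np.ones(8)])  # piecewise-constant image
y = A @ x_true + rng.normal(0, 0.01, size=I)

W = np.eye(I)                                  # uniform measurement reliability
D = np.eye(J, k=1)[:J - 1] - np.eye(J)[:J - 1]  # finite-difference operator
beta = 0.05

# Minimize ||y - Ax||^2_W + beta * ||Dx||^2 by gradient descent.
x = np.zeros(J)
step = 0.5
for _ in range(2000):
    grad = 2 * A.T @ W @ (A @ x - y) + 2 * beta * D.T @ (D @ x)
    x -= step * grad

print(np.round(x[:3], 2), np.round(x[-3:], 2))
```

Even though A alone is underdetermined (15 equations, 16 unknowns), the regularizer makes the objective strictly convex, so the iterate converges to a unique reconstruction close to the true step edge, with mild smoothing at the boundary between the two plateaus.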

Inspired by the successes of AI in many other fields, researchers have investigated leveraging AI, especially DL, for tomographic image reconstruction[62]. Numerous papers have been published on this topic, and image reconstruction has become a new frontier of DL[63]. Since many DL-based reconstruction methods are shared by CT, MRI, and PET, we focus on reviewing them for CT and CBCT, which are most widely used in radiation therapy. Interested readers can refer to these review articles [64-68] on DL for PET and MRI reconstruction.

Patients receiving radiation therapy undergo multiple CT and CBCT scans, and the accumulated imaging dose can be significant. Considering the harmful effects of X-ray radiation, including secondary malignancies, low-dose imaging with satisfactory image quality for clinical tasks is desirable. Aside from hardware improvements, two other strategies have been investigated to achieve low-dose CT and CBCT imaging: reducing the X-ray tube current and exposure time (low-flux acquisition) or reducing the angular sampling per rotation (sparse-view acquisition)[58]. However, these strategies increase noise and streak artifacts in the FBP- or FDK-reconstructed images. 4D-CBCT has the potential to reduce motion artifacts and improve patient setup and treatment accuracy, but the scan takes 2-4 minutes to acquire enough projections at each respiratory phase to achieve acceptable image quality. The long scan time leads to increased patient discomfort, intra-imaging patient motion, and additional imaging radiation dose. An accelerated scan is desirable in the clinic, but the FDK-reconstructed images from sparse-view acquisition are also degraded by severe streak artifacts[69]. While iterative reconstruction can tackle these challenges to some extent, the reconstruction time might be too long. Therefore, many DL-based reconstruction methods have been developed to further improve image quality and/or substantially reduce reconstruction time; they can be grouped into the following five categories.

A. Image Domain Methods

One simple approach to improving low-dose CT image quality is post-reconstruction denoising, and researchers have applied many different filters to FBP-reconstructed low-dose CT images to suppress noise and streak artifacts. Similarly, the FBP-reconstructed low-dose images can be fed into a DL neural network to learn a mapping between the low-dose image and its high-quality counterpart. For example, Kang et al. [70] applied a deep convolutional neural network (CNN) to the wavelet transform coefficients of low-dose CT images, which can effectively suppress noise in the wavelet domain. Chen et al. [71] proposed using overlapped patches from low-dose and corresponding high-quality CT images to boost the number of samples, and then employed a residual encoder-decoder CNN (RED-CNN) to improve low-dose image quality. Yang et al. [72] explored a generative adversarial network (GAN) based denoising method with Wasserstein distance and perceptual similarity to improve the GAN performance. Wang et al. [73] argue that these image-to-image mapping approaches have limitations for ultralow-dose CT images. They proposed an iterative residual-artifact learning CNN (IRLnet) that estimates the high-frequency details within the noise and removes them iteratively, while the residual low-frequency details are processed through a conventional network.

The streak artifacts in FBP-reconstructed images from sparse-view acquisition are difficult to remove with conventional CNNs. Han and Ye [74] found that the existing U-Net architecture resulted in image blurring and false features for sparse-view CT reconstruction, and proposed a dual frame U-Net and a tight frame U-Net to overcome these limitations. Zhang et al. [75] investigated a method based on a combination of DenseNet and deconvolution for sparse-view CT, which employs the advantages of both and greatly increases the depth of the network to improve image quality. Alternatively, Jiang et al. [76] used TV-based iterative reconstruction to obtain sparse-view CBCT images (which are superior to FDK-reconstructed images), and then fed them into a symmetric residual CNN to learn the mapping between TV-reconstructed images and the ground truth. For new sparse-view CBCT acquisitions, the network can then be used to boost the TV-reconstructed images.

The DL methods noted above are based on a supervised learning framework, which requires both low-dose CT images and corresponding high-quality counterparts. However, such image pairs may not be available in many clinical scenarios. Wolterink et al. [77] explored a GAN consisting of a generator CNN and a discriminator CNN to reduce image noise, showing that a generator CNN trained with only adversarial feedback can learn the appearance of high-quality images. Li et al. [78] investigated a cycle-consistent GAN (CycleGAN) based method which does not require low-dose and reference full-dose images from the same patient. In the near future, more unsupervised learning approaches requiring no reference ground truth, or semi-supervised learning requiring limited reference data, may be explored for low-dose CT and CBCT.

B. Sensor Domain Methods

It is advantageous to remove noise in the CT sinogram or projection data to prevent its propagation into the reconstruction process, but edges in the sensor domain are usually not as well defined as those in the image domain, resulting in edge blurring in the final reconstructed images[79]. Thus, DL-based denoising methods are rarely applied to the sensor domain directly. Instead, efforts [80-82] were dedicated to sparse-view CT reconstruction, utilizing DL to interpolate or synthesize unmeasured projection views; the FBP method is then used to reconstruct images with substantially reduced streak artifacts. Additionally, Beaudry et al. [83] proposed a DL method to reconstruct high-quality 4D-CBCT images from sparse-view acquisitions. They estimated projection data for each respiratory bin by linear interpolation of projections from adjacent bins, then trained a CNN model to predict full projection data, which are reconstructed with the FDK method.
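The view-interpolation idea can be sketched as follows, with simple linear interpolation along the angular axis standing in for the trained DL synthesizer that would replace it; the "sinogram" here is a smooth synthetic array, not real projection data.

```python
import numpy as np

# Hypothetical "full" sinogram: n_angles x n_detectors, smooth in angle.
n_angles, n_det = 180, 64
angles = np.linspace(0, np.pi, n_angles, endpoint=False)
full_sino = np.sin(angles)[:, None] * np.linspace(0, 1, n_det)[None, :]

# Sparse-view acquisition: keep only every 4th view.
measured_idx = np.arange(0, n_angles, 4)
sparse_sino = full_sino[measured_idx]

# View interpolation (linear in angle) as a classical stand-in for the
# DL view synthesizer; a trained network would replace this np.interp step.
est_sino = np.empty_like(full_sino)
for d in range(n_det):
    est_sino[:, d] = np.interp(np.arange(n_angles), measured_idx,
                               sparse_sino[:, d])

err = np.abs(est_sino - full_sino).max()
print(round(float(err), 4))
```

After the missing views are filled in, ordinary FBP/FDK can be applied to the estimated full sinogram, which is exactly the pipeline the cited works follow with a learned interpolator instead of this linear one.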

C. DL for FBP

DL methods can also be combined with FBP reconstruction. In 2016, Wurfl et al. [84] demonstrated that FBP reconstruction can be mapped onto a deep neural network architecture, in which the projection filtering is reformulated as a convolution layer and the back-projection as a fully connected layer. They showed the advantage of learning projection-domain weights for the limited-angle CT reconstruction problem. He et al. [85] further proposed an inverse Radon transform approximation framework that resembles the FBP reconstruction steps. They constructed a neural network with three dedicated components (a fully connected filtering layer, a sinusoidal back-projection layer, and a residual CNN) corresponding to projection filtering, back-projection, and postprocessing. They demonstrated that the approach outperforms TV-based iterative reconstruction for low-flux and sparse-view CT. Li et al. [86] also proposed an iCT-Net consisting of four major cascaded components analogous to FBP reconstruction, which can achieve accurate reconstructions under various data acquisition conditions such as sparse-view and truncated data.

D. DL for Iterative Reconstruction

DL is also applied to iterative reconstruction methods for different purposes, including regularization design, parameter tuning, optimization algorithms, and improvement of reconstruction results. Wu et al. [87] proposed regularizations trained by an artificial neural network for PWLS reconstruction of low-dose CT, which can learn more complex image features and thus outperform TV and dictionary learning regularizations. Chen et al. [88] learned a CNN-based regularization for PWLS reconstruction and found that it can preserve both edges and regions with smooth intensity transition, without staircase artifacts. Gao et al. [89] constructed a CNN texture prior from a previous full-dose scan for PWLS reconstruction of current ultralow-dose CT images. One drawback of conventional model-based iterative reconstruction is the manual tuning of the hyperparameter that controls the tradeoff between data fidelity and regularization. Shen et al. [90] used deep reinforcement learning to train a system that automatically adjusts this parameter, and demonstrated that the parameter-tuning policy network is equivalent or superior to manual tuning. Chen et al. [91] proposed a Learned Experts' Assessment-based Reconstruction Network (LEARN) for sparse-data CT that learns both the regularization and the parameters in the model. He et al. [92] also proposed a DL-based strategy for PWLS to simultaneously address regularization design and parameter selection in one optimization framework. DL can also be used to modify optimization algorithms for iterative reconstruction. Kelly et al. [93] incorporated DL within an iterative reconstruction framework, utilizing a CNN as a quasi-projection operator within a least-squares minimization procedure for limited-view CT reconstruction. Gupta et al. [94] presented an iterative reconstruction method that replaces the projector in a projected gradient descent algorithm with a CNN, which is guaranteed to converge and, under certain conditions, converges to a local minimum of the non-convex inverse problem. They also showed improved reconstruction over TV- or dictionary-learning-based reconstruction for sparse-view CT. Adler and Oktem [95] proposed a learned primal-dual algorithm for CT iterative reconstruction that replaces the proximal operators with CNNs. They demonstrated that this DL-based iterative reconstruction is superior to TV-regularized reconstruction and DL-based denoising methods.

E. Domain Transformation Methods

Researchers have also leveraged DL to map sensor-domain measurements to image-domain reconstructions directly. For example, Zhu et al. [96] proposed an automated transform by manifold approximation (AUTOMAP) approach to achieve end-to-end image reconstruction. However, the approach requires high memory for storing the fully connected layer, limiting it to small image sizes. Fu and De Man [97] recursively decomposed the reconstruction problem into hierarchical subproblems, each solvable by a neural network. Shen et al. [98] proposed a deep-learning model trained to map a single 2D projection to 3D CBCT images using patient-specific prior information. They introduced a feature-space transformation between a single projection and 3D volumetric images within a representation-generation framework. Inspired by this work, Lei et al. [99] investigated a GAN-based approach with perceptual supervision to generate instantaneous volumetric images from a single 2D projection for real-time imaging in lung SABR.

It should be noted that many of the reconstruction methods mentioned above are generic and may be used for both diagnostic imaging and image-guided radiotherapy. We expect to see more evaluations and validations of these techniques on specific applications in radiotherapy. At present, there are still many concerns about the stability and reliability of DL-based reconstruction methods because of their black-box nature. More research efforts are needed to interpret these DL models and improve the robustness and accuracy of DL-based reconstruction.

Despite their great potential, we have not yet seen reports of DL-based reconstruction algorithms being used for radiotherapy applications in the clinic. The DL-based reconstruction algorithms from GE Healthcare and Canon Medical Systems have received FDA 510(k) clearance. Solomon et al. [100] studied the noise and spatial resolution properties of the DL-based reconstruction from GE and found that it can substantially reduce noise compared to FBP while maintaining similar noise texture and high-contrast spatial resolution, but with degraded low-contrast spatial resolution. Since the quality of the reconstructed image is so crucial in radiotherapy, more clinical assessments and comparisons are needed before these algorithms can be confidently adopted in the clinic.

III. Image Registration

Image registration is an important component of many medical applications, such as motion tracking [101], segmentation [101-103], and image-guided radiotherapy [104, 105]. Image registration seeks an optimal spatial transformation that aligns the anatomical structures of two or more images based on their appearance. Traditional image registration methods include optical flow [106], demons [107], ANTs [108], HAMMER [109], ELASTIX [110], and others. Recently, many DL-based methods have been published and have achieved state-of-the-art performance in many applications[111]. A convolutional neural network (CNN) uses multiple learnable convolutional operations to extract features from images. Many CNN architectures exist, including AlexNet, U-Net, ResNet, DenseNet, and others. Due to their excellent feature extraction ability, CNNs have become among the most successful models in DL-based image processing, such as image segmentation and registration. Early works that utilized CNNs for image registration trained a network to predict a multi-modal deep similarity metric to replace traditional image similarity metrics, such as mutual information, in the iterative registration framework [112, 113]. It is important to ensure the smoothness of the first-order derivative of the learnt deep similarity metric in order to fit it into traditional iterative registration frameworks. The gradient of the deep similarity metric with respect to the transformation can be calculated using the chain rule. The major drawback of this approach is that it inherits the iterative nature of traditional registration frameworks. To enable fast registration, many CNN-based image registration methods have been proposed to directly infer the final deformation vector field (DVF) in a single or a few forward predictions. In this section, we focus on this type of registration method, since there is a clear trend towards direct DVF inference for DL-based fast image registration.
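The gradient-driven iterative scheme described above can be sketched in 1D. Here a hand-crafted negative-SSD function stands in for the learned similarity network, and a finite-difference gradient stands in for the analytic chain-rule gradient; the images, the true shift, and the step size are illustrative assumptions.

```python
import numpy as np

# Toy 1D registration: recover a translation t that aligns moving to fixed.
x = np.linspace(-3, 3, 200)
fixed = np.exp(-x**2)                      # Gaussian "image"
true_shift = 0.7
moving = np.exp(-(x - true_shift)**2)      # shifted copy of fixed

def warp(img, t):
    # Translate the image by t via linear interpolation (a simple resampler).
    return np.interp(x, x - t, img)

def similarity(t):
    # Negative SSD stands in for a learned similarity metric; with a real
    # network, the chain rule would supply this gradient analytically.
    return -np.sum((fixed - warp(moving, t))**2)

t, lr = 0.0, 0.01
for _ in range(200):
    h = 1e-4
    grad = (similarity(t + h) - similarity(t - h)) / (2 * h)  # finite difference
    t += lr * grad                                            # ascend similarity

print(round(t, 2))
```

The loop recovers the 0.7-voxel shift, but it also illustrates the drawback noted in the text: each registration still requires many iterative similarity evaluations, which is what direct DVF inference avoids.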

According to the network training strategy, CNN-based image registration methods can be grouped into two broad categories, supervised DVF prediction and unsupervised DVF prediction, as shown in Fig. 3. Supervised DVF prediction refers to DL models trained with a known ground-truth transformation between the moving and fixed images. On the contrary, unsupervised DVF prediction does not need the ground-truth transformation for network training. For supervised DVF prediction, the ground-truth DVF can be generated artificially using mathematical models or by traditional registration algorithms; the error between the predicted and ground-truth DVFs is minimized to train the network. For unsupervised DVF prediction, a ground-truth DVF is not needed; however, robust image similarity metrics are necessary to train the network to maximize the image similarity between the deformed and fixed images. Over the last several years, there has been an increasing number of publications on CNN-based direct DVF inference methods.

Fig. 3. Supervised and unsupervised deformation vector field (DVF) prediction methods.

To investigate the trend in the number of publications that used supervised and unsupervised learning methods for image registration, we collected more than 100 publications from various databases, including Google Scholar, PubMed, Web of Science, and Semantic Scholar. Keywords including, but not limited to, machine learning, deep learning, learning-based, convolutional neural network, image registration, image fusion, and image alignment were used. Fig. 4 shows the number of publications from 2017 through November 2020. In 2017 and 2018, supervised methods were clearly more prevalent. From 2019 onward, unsupervised methods became slightly more popular than supervised methods.

Fig. 4. Overview of the number of publications in DL-based medical image registration.

A. Supervised Transformation Prediction

Supervised transformation prediction aims to train the network with ground-truth transformations. However, the ground-truth transformation is usually unavailable in practice. Various methods have been proposed to generate or estimate it, including manual/automatic mask contouring, landmark detection/selection, artificial transformation generation, transformations calculated by traditional registration, and model-based transformation generation. Table I shows a list of selected references that used supervised transformation prediction. Salehi et al. trained a CNN-based rigid image registration for fetal brain MR scans [114]. The network was trained to predict both rotation and translation parameters using datasets generated by randomly rotating and translating the original 3D images. Eppenhof et al. trained a CNN to perform 3D lung deformable image registration using synthetic transformations [20]. The network was trained by minimizing the mean square error (MSE) between the predicted DVF and the ground-truth DVF. A target registration error (TRE) of 4.02±3.08 mm was achieved on DIRLAB [115], worse than the 1.36±0.99 mm achieved by a traditional DIR method [116]. The TRE was later reduced from 4.02±3.08 mm to 2.17±1.89 mm on the DIRLAB datasets using a U-Net architecture [21].

TABLE I.

Selected supervised transformation prediction methods

References | ROI | Patch-based | Modality | Transformation
[8] | Cardiac | No | MR | Deformable
[13] | Brain | Yes | MR | Deformable
[15] | Brain | Yes | MR | Deformable
[16] | Pelvic | Yes | MR-CT | Deformable
[17, 18] | Prostate | No | MR-US | Deformable
[19] | Lung | Yes | CT | Deformable
[20, 21] | Lung | No | CT | Deformable
[12] | Brain | Yes | MR | Deformable
[24] | Brain | No | T1, T2, Flair | Affine
[25-27] | Lung | Yes | CT | Deformable
[29] | Prostate | No | MR-TRUS | Affine + Deformable
[33] | Pelvis | No | CT-CBCT | Deformable
[35] | Cardiac | No | MRI | Deformable
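A minimal sketch of the supervised setup with artificial transformations, in the spirit of the methods above: a Gaussian-smoothed random field serves as a hypothetical 1D ground-truth DVF, warping creates the paired training image, and the supervised MSE loss is evaluated against a deliberately untrained (all-zero) prediction. All shapes and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def smooth_random_dvf(n, sigma=8.0, amp=3.0):
    # Gaussian-smoothed random field as a hypothetical synthetic 1D DVF.
    raw = rng.normal(size=n)
    k = np.arange(-3 * int(sigma), 3 * int(sigma) + 1)
    kernel = np.exp(-k**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    return amp * np.convolve(raw, kernel, mode="same")

n = 128
grid = np.arange(n, dtype=float)
fixed = np.sin(2 * np.pi * grid / 32)           # smooth toy image
dvf_gt = smooth_random_dvf(n)                   # known ground-truth transformation
moving = np.interp(grid + dvf_gt, grid, fixed)  # warp fixed to build the pair

# Supervised loss: MSE between a predicted DVF (here an untrained, all-zero
# placeholder) and the synthetic ground truth, as minimized during training.
dvf_pred = np.zeros(n)
mse = np.mean((dvf_pred - dvf_gt)**2)
print(mse > 0.0)
```

Training would repeat this over many (moving, fixed, dvf_gt) triplets while updating the network so that dvf_pred approaches dvf_gt; the sketch also makes visible the limitation raised below, namely that the network only ever sees the synthetic deformation statistics.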

Instead of using artificially generated transformations as ground truth, Sentker et al. proposed generating ground-truth DVFs using PlastiMatch [117], NiftyReg [118], and VarReg [119] [19]. The authors showed that the network trained using VarReg performed better than those trained using PlastiMatch and NiftyReg on the DIRLAB [115] datasets. The best TRE they achieved on DIRLAB was 2.50±1.16 mm, which was not better than networks trained using artificially generated transformations. Statistical appearance models (SAM) have also been used by Uzunova et al. to generate a large and diverse set of training image pairs with known transformations from a few sample images [120]. They showed that CNNs learnt from SAM-generated transformations outperformed CNNs learnt from artificially generated and affine registration-generated transformations. Sokooti et al. used a model of respiratory motion to simulate ground-truth DVFs for 3D-CT lung image registration [26]. Their networks outperformed models trained using artificially generated transformations, achieving a TRE of 1.86 mm on the DIRLAB datasets. Instead of artificially generated dense DVFs, higher-level correspondence information such as masks of anatomical organs has also been used for network training [18, 121]. Networks trained using higher-level correspondences such as organ masks or landmarks are often called weakly supervised methods, since the exact dense voxel-level transformation is unknown during training; moreover, the higher-level correspondences are not required at the inference stage, which facilitates fast registration.

One major limitation of supervised transformation prediction is that the generated transformation may not reflect true physiological motion, resulting in a model biased towards the artificially generated transformations. This problem can be mitigated by using better transformation models to generate diverse training image pairs that simulate realistic transformations.
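As a toy illustration of how such artificial training transformations are commonly produced, the sketch below smooths random noise into a dense, spatially coherent DVF. It assumes NumPy; the function name, kernel size, and displacement scale are illustrative choices, not taken from any specific paper.

```python
import numpy as np

def random_smooth_dvf(shape, max_disp=5.0, kernel=15, seed=0):
    """Create a synthetic 2D deformation vector field (DVF) by smoothing
    white noise, a common way to generate artificial ground truth for
    supervised registration training (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    dvf = rng.standard_normal((2, *shape))          # raw row/column displacements
    box = np.ones(kernel) / kernel                  # separable box filter
    for axis in (1, 2):                             # smooth along each image axis
        dvf = np.apply_along_axis(
            lambda v: np.convolve(v, box, mode="same"), axis, dvf)
    # rescale so the largest displacement magnitude equals max_disp voxels
    mag = np.sqrt((dvf ** 2).sum(axis=0)).max()
    return dvf * (max_disp / mag)

dvf = random_smooth_dvf((64, 64))
print(dvf.shape)  # (2, 64, 64)
```

Warping a real image with many such random fields yields image pairs with exactly known transformations; the caveat discussed above is that these fields need not resemble physiological motion.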

B. Unsupervised Transformation Prediction

The loss function definition in supervised transformation prediction methods is straightforward. For unsupervised transformation prediction, however, it is less obvious how to define a proper loss function without knowing the ground-truth transformations. Fortunately, the spatial transformer network (STN), which allows spatial manipulation of data during training, was proposed [122]. The STN can be readily plugged into existing CNN architectures. It is used to deform the moving image according to the currently predicted DVF; the deformed image is then compared to the fixed image to calculate an image similarity loss. Table II shows a list of selected references that used unsupervised transformation prediction with their respective similarity metrics. SSD stands for sum of squared differences, MI for mutual information, MSE for mean squared error, and CC for cross correlation.
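The core of this unsupervised setup, warping the moving image with the predicted DVF and scoring it against the fixed image, can be sketched in a few lines. The 2D NumPy version below mimics what an STN layer computes (bilinear resampling); all names are illustrative and real frameworks implement this differentiably on the GPU.

```python
import numpy as np

def warp_bilinear(img, dvf):
    """Warp a 2D image with a dense DVF using bilinear interpolation,
    mimicking an STN resampling layer. dvf[0]/dvf[1] hold row/column
    displacements in voxels (illustrative sketch)."""
    h, w = img.shape
    rows, cols = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    r = np.clip(rows + dvf[0], 0, h - 1)
    c = np.clip(cols + dvf[1], 0, w - 1)
    r0, c0 = np.floor(r).astype(int), np.floor(c).astype(int)
    r1, c1 = np.minimum(r0 + 1, h - 1), np.minimum(c0 + 1, w - 1)
    fr, fc = r - r0, c - c0
    top = img[r0, c0] * (1 - fc) + img[r0, c1] * fc
    bot = img[r1, c0] * (1 - fc) + img[r1, c1] * fc
    return top * (1 - fr) + bot * fr

def mse_loss(fixed, moving, dvf):
    """Unsupervised similarity loss: MSE between fixed and warped moving."""
    return float(((warp_bilinear(moving, dvf) - fixed) ** 2).mean())

fixed = np.zeros((8, 8)); fixed[2:6, 2:6] = 1.0
zero_dvf = np.zeros((2, 8, 8))
print(mse_loss(fixed, fixed, zero_dvf))  # 0.0 for the identity transform
```

During training, the network's DVF output feeds this warp, and the similarity loss (here MSE; CC or MI in the methods of Table II) is backpropagated through the interpolation.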

TABLE II.

Selected unsupervised transformation prediction methods

References ROI Patch-based Modality Transform Similarity
[9] Brain No MR Deformable SSD
[10] Brain No MR Affine MI
[11, 12] Brain Yes MR Deformable Intensity & gradient difference
[22] HN Yes CT Deformable MSE
[28] Cardiac No MR Deformable CC
[34] Lung No MR Deformable MSE
[36] Brain No MR-US Deformable MSE after intensity mapping
[37] Cardiac, Lung Yes MR, CT Affine and Deformable MSE
[38, 39] Brain No MR Deformable CC
[40] Liver No CT Deformable CC
[41, 42] Brain No MR Deformable CC
[44] Liver No CT Deformable CC
[46] Abdomen Yes CT Deformable CC
[47] Abdomen Yes CT Deformable CC
[49] Lung Yes CT Deformable CC
[51, 52] Prostate No MR-TRUS/CBCT Deformable None
[54] Lung, Cardiac Yes CT, MRI Deformable CC

An unsupervised CNN-based registration method called VoxelMorph was proposed for MR brain atlas-based registration [38, 39]. VoxelMorph has a U-Net-like architecture. With an STN, the image similarity between the deformed images and the fixed images was maximized during training. The predicted transformation was regularized to have low local spatial variation. The method achieved performance comparable to the ANTs [108] registration method in terms of the Dice similarity coefficient (DSC) of multiple anatomical structures. Zhang et al. proposed a network to predict diffeomorphic transformations using trans-convolutional layers for end-to-end MRI brain DVF prediction [123]. An inverse-consistent regularization term was used to penalize the difference between the two transformations obtained from the respective inverse mappings. The network was trained using a combination of an image similarity loss, a transformation smoothness loss, an inverse-consistency loss, and an anti-folding loss.
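The low-local-spatial-variation constraint used by methods such as VoxelMorph is typically a finite-difference (diffusion-style) penalty on the displacement field. A minimal NumPy sketch, with illustrative names:

```python
import numpy as np

def smoothness_loss(dvf):
    """Diffusion-style DVF regularizer: mean squared spatial finite
    difference of the displacement field. dvf has shape (2, H, W),
    one channel per displacement axis (illustrative sketch)."""
    dr = np.diff(dvf, axis=1)  # differences between neighboring rows
    dc = np.diff(dvf, axis=2)  # differences between neighboring columns
    return float((dr ** 2).mean() + (dc ** 2).mean())

flat = np.zeros((2, 16, 16))          # constant (pure translation) field
print(smoothness_loss(flat))          # 0.0 -> perfectly smooth
rough = np.random.default_rng(0).standard_normal((2, 16, 16))
print(smoothness_loss(rough) > smoothness_loss(flat))  # True
```

In training, this term is added to the similarity loss with a weight that trades off alignment accuracy against deformation plausibility.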

Lei et al. used an unsupervised CNN to perform 3D CT abdominal image registration [46]. A dilated inception module was used to extract multi-scale motion features for robust DVF prediction. Besides the image similarity loss and DVF regularization loss, an adversarial loss term was added by training a discriminator. Vos et al. proposed a fast unsupervised registration framework by stacking multiple CNNs into a larger network for cardiac cine MRI and 3D CT lung image registration [37]. They showed their method was comparable to conventional DIR methods while being several orders of magnitude faster. Jiang et al. proposed a multi-scale framework with an unsupervised CNN for 3D CT lung DIR [124]. They cascaded three CNN models, with each model focusing on its own scale level. The network was trained using image patches to optimize an image similarity loss and a DVF smoothness loss. They demonstrated that the network trained on the SPARE datasets performed well on the DIRLAB datasets, achieving an average TRE of 1.66±1.44 mm; the same trained network could also be generalized to CT-CBCT and CBCT-CBCT registration without re-training or fine-tuning. Fu et al. proposed an unsupervised whole-image registration method for 3D-CT lung DIR [49]. The network adopted a multi-scale approach in which a CoarseNet was first trained using down-sampled images for global registration; local image patches were then registered to the corresponding patches of the fixed image using a patch-based FineNet. A discriminator was trained to provide an adversarial loss penalizing unrealistic warped images. The method outperformed several traditional registration methods with an average TRE of 1.59±1.58 mm on the DIRLAB datasets.

Compared to supervised transformation prediction, unsupervised methods can alleviate the lack of training datasets since ground-truth transformations are not needed. However, without direct transformation supervision, DVF regularization terms become more important to ensure plausible transformation predictions. So far, most unsupervised methods have focused on unimodal registration, since it is easier to define image similarity metrics for unimodal registration than for multi-modal registration.

Supervised transformation prediction methods are limited by the lack of known transformations in the training datasets. Artificial transformations can introduce errors due to the inherent differences between artificial and realistic transformations. Model-based transformation generation, which can simulate highly realistic transformations, has been shown to alleviate the lack of realistic ground-truth transformations. On the other hand, unsupervised methods need extensive transformation regularization terms to constrain the predicted transformation, since no ground-truth transformation is available for supervision. One challenge is to efficiently determine the relative importance of each regularization term. Repeated trial and evaluation is often needed to find an optimal set of regularization terms that yields a deformation field that is not only physically plausible but also physiologically realistic for a given registration task. Another limitation of unsupervised transformation prediction is its reliance on effective and accurate image similarity metrics to calculate the similarity loss and train the network. However, multi-modal image similarity metrics are usually more difficult to define and calculate than unimodal metrics. As a result, multi-modal unsupervised image registration methods remain scarce compared to unimodal methods. Deep similarity metrics could be trained for multi-modal registration tasks and used in unsupervised transformation prediction; however, training deep similarity metrics often requires pre-aligned training image pairs, which are difficult to obtain.

IV. Image segmentation

In radiation therapy, image segmentation can be described as the process whereby each pixel in an image is assigned a label, and pixels with similar labels are linked such that a visual or logical property is realized [125]. Resultant groupings of pixels with the same label are called delineations (i.e., segmentations). Once RT images are acquired, tumors and OARs are delineated, often by a physician or dosimetrist, to be incorporated into the treatment planning process. Manual segmentation has been reported to be the most time-consuming process in radiation therapy; it introduces substantial inter- and intra-observer variability[126] and depends on image acquisition and display settings[127]. To address these limitations, auto-segmentation is often employed. Auto-segmentation may be unsupervised, where only the image itself is considered and image intensity/gradient analyses are implemented, performing best with distinct boundaries[23]. Supervised segmentation, on the other hand, integrates prior knowledge about the image, often in the form of other similarly annotated images (i.e., training samples), to inform the current segmentation task.

Recent DL techniques[128, 129] are well poised for the task of accurate automatic segmentation with less reliance on organ contrast[130-132], as the algorithm is designed to acquire higher-order features from raw data[128]. Deep neural networks (DNNs) learn a mapping function between an image and a corresponding feature map (i.e., segmented ground truth) by incorporating multiple hidden layers between the input and output layers. The U-Net[5] is a DNN architecture that has shown great promise for generating accurate and rapid delineations for applications in RT[133]. The U-Net “U-shaped” architecture shown in Fig. 5 was inspired by the original fully convolutional network from Long et al. [134] and was initially implemented by Ronneberger et al. [5] in 2015 to segment biomedical image data using 30 annotated image sets. The U-Net has an additional expansion pathway that replaces the maximum pooling operations with up-convolutions to increase the resolution of the feature maps, a desirable feature for medical image segmentation. The original 2D U-Net was quickly extended to 3D volumetric inputs, trained on entire datasets and annotations simultaneously to improve segmentation continuity, including multi-channel inputs of different image types[135] (i.e., MRI, CT, etc.). Overall, the U-Net is an end-to-end solution that has shown remarkable potential to segment medical images, even when the amount of training data is scarce[136]. Various other deep neural networks have also been applied to medical image segmentation[133], including deep CNNs with adaptive fusion[137] or multi-stage[138] strategies, as well as generative adversarial networks (GANs)[139].

Fig. 5.

Fig. 5.

Architecture for original U-Net by Ronneberger et al. [5] with the contraction path shown on the left and the expansion path shown on the right. The original input image has a size of 512 x 512. Feature maps are represented by purple rectangles with the number of feature maps on top of the rectangle.

Data scarcity may be a challenge in radiation therapy. Publicly available annotated “ground truth” data for training and validation are available through The Cancer Imaging Archive[140]. Several other strategies, collectively referred to as data augmentation, are employed to improve the variability and diversity of available data without new unique samples. Data augmentation has been shown to improve auto-segmentation accuracy and prevent model overfitting[135, 137, 141]. Examples of augmentation include image flipping, rotation, scaling, and translation (pixels/axis). Other emerging areas of data augmentation include integrating inter-fractional data, such as incorporating daily cone-beam CTs to increase segmentation accuracy in radiation therapy[142], and using transfer learning to generate new training image sets from other modalities. Typical endpoints of AI segmentation include qualitative review or comparison with ground-truth labels using overlap metrics such as the DSC or distance metrics such as the Hausdorff distance or mean distance to agreement.
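The DSC mentioned above is simple to compute from a pair of binary masks; the sketch below shows the standard definition (2·|A∩B| / (|A|+|B|)) with illustrative toy masks, assuming NumPy.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

gt = np.zeros((32, 32), int); gt[8:24, 8:24] = 1        # 16x16 "organ"
pred = np.zeros((32, 32), int); pred[10:24, 8:24] = 1   # slightly clipped prediction
print(round(dice(gt, pred), 3))  # 0.933
```

A DSC of 1 means perfect overlap and 0 means none; distance metrics such as the Hausdorff distance complement it by penalizing boundary outliers that barely affect overlap.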

Applications of AI for segmentation in RT planning typically fall into two main classes: organs at risk and lesions. Table III outlines a few state-of-the-art examples of each with key findings, with a comprehensive list available in other references[143].

TABLE III.

Example applications of DL into medical image segmentation for radiation therapy planning purposes

Author Disease Site Image Type, Number of cases Algorithm Outcome
Zhu et al.[1] Head and Neck Planning CT, 261 3D U-Net, whole image -9 OAR contours generated in ~0.12 seconds
-Increased DSC ~3%
Lei et al.[2] Male Pelvis Cone-beam with synthetic MRI, 100 CycleGAN -DSC >0.9 and MDA <1.0 mm for bladder, prostate, rectum from ground truth delineations
Dong et al.[3] Thorax CT, 35 U-Net GAN -DSC>0.87 and MDA <1.5 mm for lungs, cord, heart
-DSC esophagus ~0.75
-Negligible dosimetric differences
van der Heyden et al.[4] Brain Dual-energy CT, 14 2-step 3D U-Net -Quantitatively and qualitatively outperformed atlas method for all organs at risk but optic nerves
Chen et al.[6] Abdomen 3T MRI, 102 2D U-Net -9/10 OARs had DSC 0.87-0.96
-Duodenum DSC ~0.8
Fu et al.[7] Abdomen 0.35 T, MR-linac, 120 CNN with correction method -4/5 OARs had DSC >0.85 (liver, kidneys, stomach, bowel)
-Duodenum DSC ~0.65

Emerging areas of interest include segmenting substructures of organs at risk, including the cardiac substructures[135], applications for adaptive radiation therapy[7], and longitudinal response assessment. Other areas under development include optimizing loss functions, such as integrating DSC loss, unweighted DSC loss, or focal DSC loss with tunable hyperparameters[144], to better address segmentation accuracy for small structures[145].

V. Image synthesis

In this section, we focus on generative modelling of information content across imaging modalities relevant to the radiotherapy workflow. Generative modelling, in this context, refers to capturing information about the data distribution associated with each modality thus enabling translation across different modalities. Within generative modelling, we limit our review to techniques that are most suitable to represent unpaired data obtained in real clinical settings where obtaining one-to-one correspondences between modalities may not be feasible. Among these techniques, GANs have proven to be very successful. GANs are a framework that optimize an objective by running a min-max two player game between two networks. One of the networks, called the generator, tries to learn the data distribution by trying to fool the other network, called the discriminator, which simultaneously tries to differentiate between real images and fake images created by the generator.
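The two-player game described above can be written compactly in the original min-max formulation (generic notation):

```latex
\min_G \max_D \; V(D, G) =
\mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big]
+ \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```

where G is the generator, D the discriminator, and z a latent input; in the conditional image-translation settings discussed below, z is replaced by (or concatenated with) the source-modality image.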

GAN-based approaches have also been shown to be superior when paired data are available, where they eliminate the need to design hand-crafted losses in the image space[146]. We present methods proposed to aid different stages of the workflow, ranging from image acquisition to treatment delivery. Specifically, we discuss MRI to CT, CBCT to CT, and CT to PET translation. We focus our review on techniques aimed at inter-modality translation due to the large variation in content representation across these modalities. For example, CT images capture electron densities through Hounsfield units, whereas MRI images are generated based on the excitation and relaxation of hydrogen protons. These differences in representation allow for effective demonstration of the capacity of GAN-based image translation approaches to learn complex mappings.

A. MRI to CT Translation

MRI-only radiotherapy can provide multiple benefits to the patient and clinic due to its immense flexibility in imaging physiological and functional characteristics of tissue, combined with far superior soft-tissue differentiation. It also avoids the additional risk of subjecting the patient to ionizing radiation via CT. However, MRI cannot provide the electron densities that are explicitly needed in radiation dose transport calculations. Generative modelling of information content between the MRI and CT modalities can allow electron densities to be obtained from MRI, by translating it into a synthetic CT while retaining the structural information present in the original MRI scan. In order to translate effectively between the modalities, suitable input-output representations need to be determined, either by 1) constructing a mapping between paired MRI-CT data from the same patient (registered to ensure correspondence) or 2) using unpaired MRI-CT data from the same patient or across patients. For MRI-to-CT translation in nasopharyngeal cancer treatment planning, Peng et al. [147] used conditional GANs for paired data and CycleGANs for unpaired data. CycleGANs ensure reliable translation by enforcing cycle consistency in the MRI-to-CT translation[148]. They used 2D U-Net based generators that operate on a slice-by-slice basis and 6-layer convolutional discriminators. Wolterink et al. [149] and Lei et al. [150, 151] used paired data from the same patients for MRI-to-CT translation for treatment planning of brain tumors and brain/pelvic tumors, respectively. Although they used registered MRI-CT data from the same patient, they employed CycleGANs to account for local differences in spatial representation across these modalities. Wolterink et al. used data from sagittal slices on a slice-by-slice basis and used the default CycleGAN setup, including a patch-based discriminator to preserve high-frequency features.
Lei et al. operated on smaller 3D patches (32³) and implemented dense blocks[152] in the generator to capture multiscale information. They also proposed the use of a novel mean P-distance (lp norm) and spatial gradient differences as cycle-consistency losses to avoid blurriness and promote sharpness, respectively.
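The cycle-consistency constraint underlying these CycleGAN-based approaches can be written, in generic notation, as:

```latex
\mathcal{L}_{\mathrm{cyc}}(G, F) =
\mathbb{E}_{x \sim p_{\mathrm{MRI}}}\big[\lVert F(G(x)) - x \rVert_1\big]
+ \mathbb{E}_{y \sim p_{\mathrm{CT}}}\big[\lVert G(F(y)) - y \rVert_1\big]
```

where G maps MRI to CT and F maps CT back to MRI; the original CycleGAN uses the L1 norm shown here, while the variants above replace it with a mean P-distance or spatial gradient difference.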

In terms of results, Peng et al. reported mean absolute errors (MAE) in Hounsfield units (HU) within the body of 69.6±9.27 for the paired approach and 100.6±27.39 for the unpaired approach. Wolterink et al. reported an MAE of 73.7±2.3 HU, and Lei et al. reported an MAE of 57.5±4.7 HU for the brain.

These methods are not directly comparable since they use different data, but the numbers do provide an estimate of their quantitative performance. However, it is difficult to ascertain clinical applicability based solely on these metrics. Consider a case where the average HU values are biased by strong deviations in areas belonging to the tumor while other areas are quite accurate. Compared to another case where smaller but uniform deviations are present across the entire scan, which scenario would be more clinically relevant? These metrics fail to answer such questions in their entirety. Peng et al. provided additional metrics, comparing dose distributions of the translated CT with the reference CT, with 2%/2-mm gamma passing rates of 98.68%±0.94% and 98.52%±1.13% for the paired and unpaired approaches, respectively. This gives a better idea of the dosimetric accuracy of implementing their methods in a treatment planning system.
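For reference, the body-masked MAE quoted in these studies amounts to averaging absolute HU differences only over voxels inside a body contour. A minimal sketch, assuming NumPy; the array and mask names are illustrative:

```python
import numpy as np

def masked_mae(synthetic_ct, reference_ct, body_mask):
    """Mean absolute HU error restricted to voxels inside the body mask,
    the metric usually reported for synthetic-CT evaluation (sketch)."""
    diff = np.abs(synthetic_ct - reference_ct)
    return float(diff[body_mask.astype(bool)].mean())

ref = np.full((4, 4), 40.0)                          # toy reference HU values
syn = ref + np.array([[0, 10, 0, 10]] * 4, float)    # synthetic CT with errors
mask = np.ones((4, 4))
mask[:, 0] = 0                                       # first column outside body
print(masked_mae(syn, ref, mask))
```

Note how the mask choice changes the score, which is one reason MAE values from different studies (and different body/air definitions) are hard to compare directly.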

B. CT to PET translation

PET/CT scans can play a crucial role in combining anatomical and functional information to pinpoint metabolic activity and may provide better information for the contouring process in treatment planning. Synthesizing virtual PET in CT-only workflows can eliminate the need for the more costly PET/CT scan. Additionally, this reduces costs and complexities such as the storage of radiotracers associated with PET imaging. Ben-Cohen et al. proposed a conditional GAN (cGAN) [146] based method to generate PET from contrast-enhanced CT scans for false-positive reduction in liver lesion detection. They used paired PET/CT data and performed a transformation to align and interpolate the PET to the CT. A first synthesized PET estimate is produced using a fully convolutional variant of VGG [153], followed by a cGAN applied to a channel-concatenated input comprising the CT and the previous PET estimate. In this two-stage network, while optimizing over the image losses, SUV-based weighting is applied to provide better results in PET regions with high SUVs. The proposed method obtained a mean absolute SUV difference of 0.73±0.35 across all regions and 1.33±0.65 in high-SUV regions. Finally, false positives were shown to be reduced from 2.9% to 2.1% in liver lesion detection when using the generated PET information for detection. Bi et al. [154] explored synthesis of PET images from paired PET/CT data for lung cancer patients and proposed three different methods exploiting varied input representations. All three methods apply a U-Net based cGAN but vary in the input provided to the model: (1) a binary label map of the tumor annotations, (2) the CT image, and (3) the CT channel-combined with the binary label map. The last method provided results closest to the real PET image, with an SUV MAE of 4.60. The authors suggested that this synthesized PET can be used to form training data for PET/CT based prediction models.
Further, they formulated a potential extension of their work combining both real and synthetic PET to increase the number of training samples in an attempt to boost generalization.

C. CBCT to CT Translation

Radiotherapy treatment planning starts with acquiring a CT scan of the patient, usually denoted the planning CT (pCT). Dose calculation as well as beam energy and arrangement are derived from the pCT, comprising a treatment plan. Commonly, in order to correctly position the patient at each fraction of the dose delivery, a CBCT scan is obtained. CBCT can also provide insight into the anatomical changes that occur over the course of the treatment and could enable adaptive radiotherapy by leveraging that information to adjust the treatment plan. Unfortunately, treatment re-planning is not possible with CBCT scans, as they are noisier, contain more artifacts, and have inferior soft-tissue contrast compared to fan-beam CT. One way to make use of the information in a CBCT scan is to use deformable image registration (DIR) to map the pCT to the anatomy of the CBCT [155], producing a scan with the HU values of the pCT and the latest anatomy, referred to in the literature as a deformed planning CT (dpCT) or virtual CT (vCT). Alternatively, generative DL methods may provide a faster and potentially superior basis for treatment re-planning than dpCT or other techniques (look-up tables [156], Monte Carlo [157], scatter correction with a pCT prior [158]) by synthesizing a CT scan from an input CBCT scan. Such a synthetic CT (sCT) should have all the characteristics of a CT scan while preserving the anatomy of the CBCT. Maspero et al. [159] trained four standard CycleGAN [148] models on lung, breast, and head-and-neck scans: one for each anatomical site separately and one on all sites jointly. They showed that a single model for all three anatomical sites performs comparably to models trained per anatomical site, which would simplify its possible clinical adoption. The results were evaluated using a rescanned CT (rCT) as ground truth, where the rCT is a CT scan acquired at the same fraction as the CBCT in question.
The reported HU MAE for sCT were 53±12, 66±18, and 83±10 for head-and-neck, lung, and breast, respectively. Liang et al. [160] used a standard CycleGAN model but evaluated the HU accuracy and dose calculations against dpCT instead of rCT. Furthermore, they performed an evaluation of the anatomical accuracy of sCT using deformable head-and-neck phantoms. The phantom allows for a simulation of the patient at the beginning of the treatment and after a few fractions of the treatment, where tumor shrinkage is observed. This provides CBCT and rCT scans with identical anatomy, which can be used to assess if and how translation of CBCT to sCT affects the representation of the anatomy. They concluded that the CycleGAN model has higher anatomical accuracy than DIR methods.

D. Clinical Perspective and Future Applications of Synthetic Imaging

In the radiotherapy workflow, medical images are one of the most important sources of data used in decision-making. Imaging is used in all the steps of patient care in oncology: diagnosis, staging, treatment planning, treatment delivery, and disease follow-up. Based on the anatomical site of a tumor and the specific properties we want to investigate, some imaging modalities might be more appropriate than others. For example, an MRI scan is suggested for malignancies located in the pelvic region, due to the large presence of soft tissues compared to bone structures. This allows an improved ability to contour lesions and surrounding organs. CT is the predominant modality for staging lung tumors, but recently MRI scans, and more specifically Diffusion Weighted Imaging (DWI) sequences, are used to evaluate the involvement of mediastinal lymph nodes with higher sensitivity than CT, with implications on better staging and treatment decisions. Finally, it is well established that treatment planning always requires the acquisition of a CT scan, while PET imaging is often used prior to treatment planning to evaluate the presence of metastasis and the degree of suspicion of the identified lesions, and within/after treatment fractions to quantify treatment response[161]. Such examples highlight the need for multiple modalities of imaging to fully capture the complexity of human anatomy and tumor tissues in tandem with the specific tasks that we want to accomplish. This would eventually improve the decision-making process. In an ideal scenario, all modalities of imaging for the patient would be available, but that is far from being an achievable or practical solution.

Several reasons lie behind this:

1). Cost-effectiveness:

Scanning patients costs both hospitals and the patients themselves, and these costs may be reimbursed by healthcare providers or insurance companies. The increasing number of new and diverse imaging technologies has induced a growing demand for cost-effectiveness analysis (CEA) in imaging technology assessment. As pointed out by Sailer et al. [162], when assessing the cost-effectiveness of diagnostic imaging, the initial question is whether adding an imaging test to a medical pathway does indeed lead to improved medical decision-making. One of the most significant examples was the lung cancer screening trial NLST[163], which showed that participants who received low-dose helical CT scans had a 15 to 20 percent lower risk of dying from lung cancer than participants who received standard chest X-rays. In radiotherapy, a study [164] highlighted the costs related to various radiological imaging procedures in image-guided radiotherapy of cancers, based on standard billing procedures. The median imaging cost per patient was $6197, $6183, $6358, $6428, $6535, and $6092 for the years 2009 to 2014, respectively. This seems to suggest a trend of reducing imaging-related costs, although it is not clear whether this reduction was associated with an optimization of the diagnostic imaging.

2). Patient safety and comfort:

Recent studies indicated that repetitive imaging scans can deposit considerable radiation doses to some radiosensitive organs (e.g., the heart) and could cause higher radiogenic cancer risks to patients, with children being more affected by this issue[165]. In a very utilitarian way, we might want an image with the highest possible contrast, to better identify suspicious structures, and possibly to obtain images of our patients at short intervals of time. This would allow these images to be used, for example, for a better evaluation of treatment response. Unfortunately, due to the physics of imaging, ionizing imaging methods give a signal-to-noise ratio proportional to the dose delivered to the body[166]. More dose leads to better contrast, but also increases the probability of radiation-induced effects in the patient. Another point concerns images acquired with the injection of an intravenous (IV) contrast medium. IV contrast media are usually toxic substances that need to be expelled by the body. While it is true that images acquired with IV contrast media (e.g., contrast-enhanced CT) provide better resolution for specific anatomic regions than images without them (e.g., conventional CT), it also stands that many patients might not be eligible for the injection of a contrast medium because of poor performance status, presence of comorbidities, or poor renal function [167]. Finally, some imaging modalities such as DWI require longer acquisition times compared to T1 and T2 sequences [168], with an impact on costs and also on patients’ comfort.

3). Differences in imaging acquisition protocols and interoperability:

Despite the presence of specific guidelines and recommendations for diagnostic imaging, each institution might adopt not only different image acquisition protocols, but might also be missing specific imaging modalities (or sequences) that are the standard of care in another institution. This matters when performing or designing multicenter studies, especially retrospective studies aimed at developing image-derived biomarkers. For example, if a center has developed a prognostic model based on an image-derived biomarker obtained by processing a specific imaging modality, a large external validation of this biomarker might not be possible because this imaging modality may not be available in many institutions. Additionally, when considering quantitative imaging analysis via machine learning, and more specifically radiomics[169], a recent review pointed out that certain acquisition settings (e.g., slice thickness, tube current, reconstruction kernels) are preferable to others, since they increase the reproducibility of the biomarkers[170]. As mentioned before, it is not obvious that these acquisition settings are the same across all clinics. One possible solution, which is close to utopia, is to force each institution to acquire images with the same acquisition protocols. However, even if this were accomplished, the variability of scanner manufacturers, a well-known factor that impacts the stability of image-derived biomarkers, would still be difficult to tackle.

All the points presented above show that there is an open space for the application of DL-based synthetic imaging. Without going into detail, applications include the generation of multiple modalities from a starting image (as in the MRI to CT section of this paper), augmentation of image quality without exposing the patient to additional dose (as in the CBCT to CT translation section), and recent work introducing fast DL reconstruction of DWI images [171-173]. Synthetic imaging is taking a prominent role in oncology. We refer the reader to the following publications as proof of some interesting clinical applications that DL for image generation can offer[174, 175], and to these more general reviews[176, 177].

VI. Automatic treatment planning

In radiotherapy planning, the main objective is to deliver the prescribed dose accurately to the target while keeping the dose to OARs below acceptable limits and minimizing dose to surrounding healthy tissue. The treatment planning process begins with delineation of target volumes and OARs on a planning CT. A set of dose constraints is then defined for targets, OARs, and other regions of interest, typically dose-volume relations stating the minimum or maximum dose that can be allowed to a given region. Intensity Modulated Radiation Therapy (IMRT) and Volumetric Modulated Arc Therapy (VMAT) are modern techniques that allow the treatment planner to create complex treatment plans by providing a set of such objectives that an optimizer algorithm will attempt to fulfill by inverse optimization, as illustrated by the workflow in Fig. 8. The optimizer often fails to achieve all the desired objectives, due to complicated patient geometry, limitations of the treatment modality or machine, etc. To improve the plan, the planner and physician discuss available options and clinical preferences before adjusting the objectives for a new iteration of optimization. In recent years, automated treatment planning techniques have been developed that aim to provide vastly improved starting points for the treatment planner, and even to produce clinically acceptable treatment plans without human interaction.
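The inverse optimization step described above is often formulated, in generic form (not specific to any particular treatment planning system), as a weighted least-squares problem over beamlet weights:

```latex
\min_{w \ge 0} \; \sum_{s} \lambda_s \sum_{i \in s}
\big( D_i(w) - d_s \big)^2 , \qquad
D_i(w) = \sum_{j} A_{ij} \, w_j
```

where \(w_j\) are beamlet weights, \(A\) is the dose-influence matrix, \(D_i\) is the dose to voxel \(i\), \(d_s\) is the prescribed dose or limit for structure \(s\), and \(\lambda_s\) are the structure weights that planners adjust in the trial-and-error loop discussed below.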

Fig. 8.


The inverse treatment planning process begins with delineation of target volumes and OARs on medical images. Optimization objectives are defined and passed to the optimizer algorithm of the TPS. If the resulting plan is not acceptable, trade-off evaluation is performed to define new objectives iteratively until a clinical plan is accepted.

The current planning workflow clearly relies heavily on humans (planner and physician). The reason is twofold. First, there are no clear metrics to mathematically quantify plan quality. Although some plan quality scoring systems have been defined over the years [178], they do not necessarily reflect the most stringent clinical requirements for each patient. Second, for a specific patient, the best achievable plan is unknown. The planning process has to explore a very high-dimensional solution space in a trial-and-error fashion to find the optimal solution [179, 180]. This complex and cumbersome process poses substantial hurdles to plan quality and planning efficiency.

Due to this extensive human involvement, plan quality depends heavily on a number of human factors, such as the planner’s experience, planner-physician communication, the amount of effort and time available for treatment planning, and the rate of human error [181-183]. Suboptimal plans are often unwittingly accepted [183-185], deteriorating treatment outcomes [186].

Moreover, while modern computers can solve the optimization problem rapidly, the trial-and-error iterative planning process still requires hours for a typical planner to generate a plan for the physician to review [187-189]. Multiple iterations between physician and planner are often needed, which can extend the overall planning time to as much as one week for challenging tumor sites. This lengthy process contributes strongly to the delay between diagnosis and the start of RT, which has been shown to adversely affect treatment outcome. Moreover, the patient’s anatomy may change while waiting for treatment planning [190, 191], rendering a plan carefully designed for the initial anatomy sub-optimal for the changed anatomy [192, 193]. Additionally, the delayed treatment increases the anxiety of patients who are already overwhelmed by a cancer diagnosis and eager to start treatment. Such delays can be particularly severe in low- and middle-income countries with limited resources, capacity, staff, and expertise [194].

The problems of sub-optimal plan quality and low planning efficiency are intertwined. Because of the low planning efficiency, the optimality of a plan is hard to guarantee for every patient under the strict time constraints of current practice. Heavy time pressure also increases the human error rate and may limit the availability of advanced treatment techniques. Auto-planning (AP) techniques are urgently needed to tackle these problems in the current planning process.

AP has been around for some time and is rapidly improving. Several research projects and in-house solutions are being developed worldwide, and most commercial vendors of treatment planning systems (TPS) have implemented some variety of AP tools. Studies show promising results for AP, but large-scale clinical implementation has yet to be seen.

A. Classical Auto-planning Strategies

Classical AP tools can be divided into three categories: treatment planner mimicking (TPM), multi-criteria optimization (MCO) and knowledge-based planning (KBP).

1). Treatment Planner Mimicking:

In treatment planner mimicking (TPM), the behavior and choices of the treatment planner are analyzed over time and converted into computer logic, for example a series of IF/THEN decision-making statements. Provided with a prioritized list of objectives, the TPM follows its logic to create optimizer objectives that it tunes iteratively while steering the optimizer, in the same way a human treatment planner would, pushing each objective as far as possible without degrading objectives of higher priority. The TPM is not as limited by time as a human planner, allowing it to perform more iterations and potentially reach higher plan quality. Tol et al. designed a system that automatically scans DVH lines in the Eclipse TPS (Varian Medical Systems, Palo Alto, USA) optimization window and moves the mouse cursor to adjust on-screen optimization objectives. In a blinded test, automated head-and-neck cancer (HNC) plans were preferred over manual plans (MPs) by an HNC radiation oncologist in 19/20 cases, and the method is now in clinical use [195]. Several modern TPSs include scripting capabilities, which have been used to develop in-house TPM tools that extract DVH parameters directly from the TPS and automatically adjust optimization objectives iteratively until an optimal solution is found [196-202]. A commercial TPM solution is Auto-Planning in the Philips Pinnacle TPS (Philips Radiation Oncology Systems, Fitchburg, WI, US). Pinnacle Auto-Planning works by defining a template, called a technique, consisting of parameters such as beam setup, dose prescriptions, and objectives for each disease site [203]. When the technique is applied to a new patient, the AP iteratively optimizes a treatment plan and adds helper volumes with new objectives to control the dose in the same fashion as a human planner might, lowering dose to OARs and reducing hot and cold spots in the dose distribution.
The use of this system has been reported in several studies [198, 203-214], producing APs of comparable or higher quality, with reduced planning times and improved OAR sparing compared to MPs. In a study by Cilla et al., Pinnacle Auto-Planning produced high-quality plans for HNC and high-risk prostate and endometrial cancer while reducing planning time by 60-80 minutes, corresponding to 1/3 of the MP time [203]. However, Zhang et al. found that for nasopharyngeal carcinoma some automated plans did not fully meet dose objectives for the planning target volumes (PTVs) and cautioned that AP cannot be fully trusted; a manual selection between MP and AP should be performed for each patient [213].
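The IF/THEN tightening logic of a TPM can be caricatured in a few lines. In the sketch below, `feasible` is a hypothetical stand-in for a full TPS optimization run, and all organ names, dose limits, and the "effort budget" are invented; a real TPM would re-optimize and read back DVH metrics at each step.

```python
# Hypothetical TPM loop: tighten prioritized OAR objectives step by step,
# keeping the plan feasible, exactly as a human planner would push each
# objective as far as possible without breaking higher-priority goals.

def feasible(limits, budget=10.0):
    # Toy surrogate for a TPS run: tighter limits cost more "optimizer effort".
    return sum(100.0 / limit for limit in limits.values()) <= budget

def tpm_tighten(limits, priority, step=1.0, floor=5.0):
    limits = dict(limits)
    for organ in priority:                      # highest priority first
        while limits[organ] - step >= floor:
            trial = dict(limits, **{organ: limits[organ] - step})
            if not feasible(trial):
                break                           # keep the last feasible value
            limits = trial
    return limits

start = {"cord": 45.0, "parotid": 26.0, "larynx": 40.0}
result = tpm_tighten(start, priority=["cord", "parotid", "larynx"])
```

With these made-up numbers the highest-priority organ (cord) is tightened the most, and lower-priority organs only take whatever slack remains, mirroring the prioritized behavior described above.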

2). Multi Criteria Optimization:

MCO is an AP method that explores so-called Pareto-optimal (PO) plans. In a space spanned by assigning a dimension to each optimization parameter, the PO surface consists of plans that cannot be improved on one objective without degrading another. A schematic view of the MCO planning process is presented in Fig. 9. The PO surface is populated by several plans with various objective prioritizations; while the clinically ideal plan always lies on the PO surface, not all plans on the PO surface will be clinically acceptable or desirable. Hussein et al. describe two approaches for AP with MCO: a priori and a posteriori. In a priori MCO, only a single Pareto-optimal plan is fully generated and presented to the treatment planner, while in a posteriori MCO, multiple plans are automatically generated and the treatment planner can perform trade-off navigation using, e.g., a navigation star [215] or sliders representing the objectives (RayStation MCO (RaySearch Laboratories, Stockholm, Sweden), Varian Eclipse MCO) [216]. A posteriori MCO offers a convenient method to efficiently find an optimal treatment plan while the treatment planner takes active choices in the process, and has been the subject of several studies [217-229]. Kierkels et al. found that MCO plans had similar performance to MPs, but allowed inexperienced planners to make high-quality plans [224]. Creating a large number of plans is computationally intensive, but can be done in the background while the planner attends to other tasks. Furthermore, estimations by Bortfeld and Craft suggest that as few as N+1 plans are needed to populate the PO surface, where N is the number of objectives [230]. The Erasmus-iCycle software [217] presents a solution for a priori MCO, where a disease-site-specific wish-list is defined with absolute and desired constraints, used for iterative optimization by the software.
The software has been used in several studies, yielding clinically acceptable plans of similar or higher quality compared to manually created plans [231-236].
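The a posteriori idea can be mimicked in miniature: sweep the trade-off weight between two toy objective surrogates and keep the non-dominated plans. The closed-form surrogates below are invented stand-ins for optimizer output; a real MCO system would run a full optimization for each weight setting.

```python
import numpy as np

# Toy two-objective trade-off: target underdose vs. OAR dose share a single
# knob w (hypothetical closed-form surrogates for what an optimizer returns).
def solve(w):
    target_underdose = (1 - w) ** 2      # improves as w -> 1
    oar_dose = w ** 2                    # worsens as w -> 1
    return (target_underdose, oar_dose)

# Populate the plan library by sweeping the trade-off weight.
plans = [solve(w) for w in np.linspace(0.05, 0.95, 19)]

def dominated(p, others):
    # p is dominated if some other plan is at least as good on both objectives.
    return any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in others)

pareto = [p for p in plans if not dominated(p, plans)]
```

With these convex surrogates every generated plan is Pareto-optimal, so `pareto` keeps all 19 plans; the planner would then navigate among them with sliders or a navigation star.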

Fig. 9.


Schematic view of some classical auto-planning workflows. Left: multi-criteria optimization generates a library of Pareto-optimal plans from a set of objectives. In the schematic, the user navigates the plans a posteriori using sliders for each parameter. Right: knowledge-based planning relies on a set of high-quality clinical plans. Atlas-based methods find the best-matching patient in the library when a new patient is introduced, extract that patient's plan, and adjust it to the new patient. Model-based approaches train a predictive model on the library plans; the model predicts parameters for new patients, which are used to generate objectives for the optimizer.

3). Knowledge Based Planning:

KBP exploits the knowledge and expertise of treatment planners to aid the planning process for new patients through a library of high-quality clinical treatment plans. When a new patient is presented to the system, the patient is characterized by anatomical and geometrical features. There are two branches of KBP. In atlas-based systems, the closest-matching patient in the library is chosen, and the plan setup belonging to that patient is duplicated to the new patient and recalculated on the new patient's planning CT. In model-based systems, a predictive model is trained from the plan library to predict parameters used for creating automated plans for new patients. The workflow for these approaches is illustrated in Fig. 9.

4). Atlas-based KBP:

Atlas-based KBP [237-246] searches a library (i.e., atlas) of clinical plans to find the plan most similar in geometry to the new patient. The library plan setup is copied to the new patient, and the dose is recalculated on the new patient's CT. A common approach is to copy the plan setup and center the isocenter in the target volume of the new patient. The original plan can be adapted further, as demonstrated for HNC patients by Schmidt et al., who adapted the atlas plan by deforming its beam fluences to suit the target volume of the new patient and warping the atlas primary/boost dose distribution to the new anatomy. The warped dose distribution was then used to generate dose-volume constraints as optimization constraints. In that study, AP had similar or better quality compared to MP for all objectives, and the extra steps to deform and warp the plan improved performance compared to using the atlas plan directly [244]. Atlas-based KBP can either provide a starting point for further optimization or fully automate planning. Schreibmann et al. demonstrated atlas-based KBP for whole-brain radiotherapy, generating high-quality treatment plans in 3-4 minutes with reduced doses to OARs [245], thus reducing the clinical workload.
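At its core, the matching step of atlas-based KBP is a nearest-neighbor search over geometric features. A minimal sketch, with invented patients and features (target volume in cc, target-to-OAR distance in mm):

```python
import numpy as np

# Hypothetical atlas: each library patient described by two geometric
# features, [target volume (cc), target-to-OAR distance (mm)].
library = {
    "patient_A": np.array([120.0, 8.0]),
    "patient_B": np.array([300.0, 3.0]),
    "patient_C": np.array([150.0, 12.0]),
}

def closest_match(new_features, library):
    # Normalize each feature so volume does not dominate the distance metric.
    feats = np.array(list(library.values()))
    scale = feats.std(axis=0)
    return min(library,
               key=lambda k: np.linalg.norm((library[k] - new_features) / scale))

new_patient = np.array([140.0, 10.0])
match = closest_match(new_patient, library)
# The matched library plan would then be copied and recalculated on the new CT.
```

Real systems use far richer anatomical descriptors, but the retrieve-then-adapt structure is the same.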

5). Model-based KBP:

Model-based KBP [194, 202, 210, 214, 226, 229, 237, 238, 247-282] trains a model on the plan library to predict parameters for a new patient introduced to the system. Such parameters can be, e.g., beam settings, DVHs for target volumes and OARs, or full 3D dose distributions. DVH prediction has been widely studied in recent years for most disease sites, e.g., head and neck [214, 226, 258, 272, 274, 281], prostate [210, 238, 251, 255, 264, 266-268, 270, 276, 281], upper GI [259, 265, 273, 280], lower GI [229, 254, 269, 277, 278] and breast [259, 262, 275]. A commercial software for DVH prediction is Varian RapidPlan. RapidPlan examines geometric and dosimetric properties of structures in each library plan and uses them to calculate a set of features. The calculated data of each plan are included in model training, where principal component analysis (PCA) is used to identify the 2-3 most important features, which serve as input to a regression model [283]. The final model is then used for DVH prediction. Predicted DVHs can be converted into objectives for the inverse planning optimizer by sampling the predicted DVH curve and creating corresponding dose-volume objectives. The majority of studies demonstrate that the dose distribution to target volumes of the APs is equally good or better than that of MPs, with no human planner interaction, while the time needed to generate a plan is drastically reduced. Several studies also demonstrate reduced dose to OARs for APs compared to MPs, possibly because manual planners lack sufficient time for further improvements once a clinically acceptable plan is found. One major limitation of DVH prediction models is that they only consider dose-volume relations for delineated structures, not spatial distributions; the planner must therefore remain aware of issues such as where excess dose is placed in healthy tissue.
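The PCA-plus-regression pipeline can be sketched on synthetic data as follows. The latent-factor feature model and the scalar dose metric are invented for illustration; a real RapidPlan model predicts full DVH curves from considerably richer features.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic plan library: 40 plans whose 5 geometric features are driven by
# 2 latent anatomy factors (all values hypothetical).
latent = rng.normal(size=(40, 2))
B = rng.normal(size=(2, 5))
X = latent @ B + 0.05 * rng.normal(size=(40, 5))                # features
y = latent @ np.array([3.0, -1.5]) + 0.1 * rng.normal(size=40)  # OAR mean dose (Gy)

# PCA via SVD: keep the 2 leading components, mirroring models that retain
# only the most informative feature combinations.
mu = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
Z = (X - mu) @ Vt[:2].T                                 # principal-component scores

# Least-squares regression from PC scores (plus intercept) to the dose metric.
A = np.c_[Z, np.ones(len(Z))]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_dose(x_new):
    z = (x_new - mu) @ Vt[:2].T
    return float(np.r_[z, 1.0] @ coef)

rmse = float(np.sqrt(np.mean((A @ coef - y) ** 2)))
```

A predicted DVH metric like this would then be converted into a dose-volume objective for the inverse optimizer, as described above.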

KBP models can be trained with relatively few patients, as demonstrated by Boutilier et al., who successfully trained a model for rectum DVH estimation using 20 library plans [284]. A recent review of KBP methods by Ge and Wu suggests that more complex plans will require larger plan libraries, and recommends the development of larger training databases, e.g., through multi-institution collaborations [261]. Another option to increase plan library size is to include plans from other techniques, e.g., 3D-CRT and IMRT plans for training a VMAT model [285], or plans from a different TPS [273]. However, as demonstrated by Ueda et al., models may perform differently when used under conditions other than those of the library plans [273]; proper QA during commissioning is therefore important.

B. Modern AI in Radiation Therapy Treatment Planning

Modern AI, in particular recent advancements in DL techniques, has achieved great success in a wide range of disciplines, including medicine and healthcare. In the regime of RT treatment planning, a number of DL-driven AP methods have recently been developed to address the remaining challenges of classical AP approaches. These novel methods can be roughly categorized into three groups: DL-based dose prediction, DL-based fluence map/aperture prediction, and DL-based intelligent treatment planners.

1). DL based Dose Prediction:

The DL dose prediction AP workflow is depicted in Fig. 10. A key step in this type of approach is to build a DL-based dose prediction agent. Constructed from a carefully designed deep neural network (DNN), the dose prediction agent directly derives the dose distribution of a clinically acceptable plan from the patient's anatomical information (e.g., organ contours and/or CT image) and planning configuration (e.g., prescription dose, treatment beam/arc setup). A pioneering study [286] first investigated the feasibility of DL-based dose prediction using prostate cancer IMRT treatment planning as a testbed. As a proof-of-principle study, a simplified 2D dose prediction was considered, and the prediction accuracy was validated on a set of dosimetric quantities of clinical interest. With feasibility established, DL-based dose prediction was successfully extended to other modalities (e.g., VMAT and helical tomotherapy) [287-291], more complicated treatment sites (e.g., HNC and lung cancer) [287-290, 292], and direct 3D dose distribution prediction [291-293], all of which are of greater clinical and practical value. Later, Pareto-optimal dose prediction models [228, 293, 294], which can generate multiple plans reflecting different trade-offs among critical organs, were developed to better address diverse clinical needs. The clinical application of DL-based dose prediction is analogous to classical KBP; the major difference is that DL-based dose prediction derives dose distributions directly, in contrast to the DVHs generated in KBP. Despite its great success, DL-based dose prediction suffers from a hurdle similar to KBP: the predicted dose may be dosimetrically attractive but not physically achievable.
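To make the input/output structure of such an agent concrete, the sketch below pushes stacked anatomy channels (CT plus contour masks) through an untrained two-layer convolutional network with random weights on a toy 32×32 grid. It illustrates only the tensor shapes a dose-prediction DNN operates on, not a trained predictor; real models are deep U-Net-style networks.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy anatomy channels on a 32x32 grid: CT, PTV mask, OAR mask (all invented).
H = W = 32
ct = rng.normal(size=(H, W))
ptv = np.zeros((H, W)); ptv[12:20, 12:20] = 1.0
oar = np.zeros((H, W)); oar[4:10, 4:10] = 1.0
x = np.stack([ct, ptv, oar])                      # (3, H, W) input channels

def conv2d(x, kernels):
    # Naive 'same' 3x3 convolution over all input channels.
    cin, H, W = x.shape
    cout = kernels.shape[0]
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((cout, H, W))
    for o in range(cout):
        for c in range(cin):
            for di in range(3):
                for dj in range(3):
                    out[o] += kernels[o, c, di, dj] * xp[c, di:di + H, dj:dj + W]
    return out

k1 = rng.normal(size=(8, 3, 3, 3)) * 0.1          # 3 -> 8 feature channels
k2 = rng.normal(size=(1, 8, 3, 3)) * 0.1          # 8 -> 1 channel (dose map)
dose = conv2d(np.maximum(conv2d(x, k1), 0), k2)[0]  # ReLU between layers
```

The key point is that the output is a dose map with the same spatial size as the anatomy, so the network can be trained voxel-wise against clinical dose distributions.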

Fig. 10.


General workflow of DL dose prediction based auto-planning.

2). DL based Fluence Map Prediction:

As shown in Fig. 11, the second type of approach predicts the fluence map of an optimal plan from patient anatomy using a DNN [196, 295-297], bypassing the plan optimization process of inverse treatment planning. More specifically, Lee et al. [295] developed a DNN model to derive the optimal fluence map of IMRT in a beam-by-beam fashion, using the beam's eye view of the PTV/organ contours and the predicted optimal dose distribution for each beam as input. Li et al. [296] proposed a Dense-Res Hybrid Network (DRHN) that takes a series of projections characterizing patient anatomy and treatment geometry as input and outputs the fluence intensity maps for nine-field prostate cancer IMRT. Wang et al. [297] proposed a two-stage strategy with each stage accomplished by a dedicated DNN: in the first stage, the dose of a beam is predicted from the contours and CT image of a patient, and in the second stage, the fluence map is obtained from the predicted dose of that beam. Note that these algorithms were all developed specifically for IMRT, which typically involves only a very limited number of treatment beams (≤9). Lin et al. [196] proposed a DL-based fluence map prediction approach that can handle the more general VMAT planning, in which the fluence maps of 64 treatment beams at different angles need to be determined. Assuming the dose of an optimal plan is already known, the projected dose was employed as input to the established DNN model to predict the fluence map. All these studies have shown DL to be a promising tool for fluence map prediction. However, the physical achievability and deliverability of the predicted fluence map is again not guaranteed.
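Underlying all of these methods is the linear relation d = Df between fluence f and dose d, where D is the dose-influence matrix. As a classical point of reference for what the DNN replaces, the sketch below recovers a non-negative fluence from a known (e.g., predicted) dose by projected gradient descent on ||Df − d||², with a toy random influence matrix standing in for real dose calculation.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy dose-influence matrix: 50 voxels, 10 fluence elements (all invented).
D = rng.uniform(0, 1, size=(50, 10))
f_true = rng.uniform(0, 2, size=10)       # ground-truth non-negative fluence
d = D @ f_true                            # the known/"predicted" optimal dose

# Projected gradient descent on ||D f - d||^2 with non-negativity projection.
f = np.zeros(10)
lr = 1.0 / np.linalg.norm(D, 2) ** 2      # step size from the spectral norm
for _ in range(5000):
    f = np.clip(f - lr * D.T @ (D @ f - d), 0, None)
```

Because the toy dose is exactly achievable, the iteration recovers the true fluence; a DNN predictor skips this solve but, as noted above, offers no such achievability guarantee.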

Fig. 11.


General workflow of DL fluence map prediction based auto-planning.

3). DL based Virtual Treatment Planner:

Motivated by the tremendous success of reinforcement learning (RL) in achieving human-level intelligence in decision-making, the third group of methods focuses on developing RL-based virtual treatment planners (VPNs) that automatically adjust treatment planning setups or machine parameters [298-302] to obtain high-quality treatment plans (Fig. 12). The idea of training a VPN is similar to the trial-and-error learning process of human beings. The VPN explores different ways of adjusting treatment planning setups/machine parameters and observes the impact of these adjustments on plan quality. It gradually learns the consequences of applying different adjustments to a given plan across the wide spectrum of scenarios encountered during training, such that it can pick the adjustment leading to optimal plan quality. The feasibility of RL-based VPNs has been demonstrated both in a proof-of-principle context of high-dose-rate brachytherapy [300] and in more complicated cases of external beam radiotherapy [298, 299, 301, 302], showing that intelligent behaviors of operating a TPS can be spontaneously generated via end-to-end RL. Unlike the dose prediction and fluence map prediction methods, RL-based approaches naturally guarantee achievability and deliverability, since the final plans are generated by a TPS. However, the low training efficiency and poor scalability of VPNs have limited their application to simple treatment planning problems. Although recent studies have shown that training efficiency and model scalability can be substantially improved by incorporating human knowledge [299] and hierarchical DNN architectures [298], respectively, the feasibility of VPNs on a commercial TPS for complex clinical treatment planning tasks still needs further investigation.
Moreover, the VPNs in existing studies are trained under the guidance of simple plan evaluation quantities, such as the ProKnow score (ProKnow Systems, Sanford, FL, USA), which do not necessarily reflect the real planning objectives in the clinic. Metrics of greater clinical relevance, e.g., physicians' preferences regarding plan quality, need to be quantified and incorporated to guide the training of a VPN.
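The trial-and-error idea behind a VPN can be illustrated with tabular Q-learning on a one-knob toy problem: the "TPS state" is a discrete OAR-weight level, the actions nudge the weight down or up, and the reward is an invented plan-quality score peaked at a level unknown to the agent. Actual VPNs replace the table with a DNN and the score with a TPS-evaluated plan.

```python
import random

random.seed(0)

# Toy virtual planner: states are OAR-weight levels 0..9; the hypothetical
# plan-quality score peaks at level 6, which the agent must discover.
LEVELS, ACTIONS = 10, (-1, +1)
score = lambda s: -(s - 6) ** 2          # invented plan-quality surrogate

Q = [[0.0, 0.0] for _ in range(LEVELS)]  # Q-table: state x action values
alpha, gamma, eps = 0.2, 0.9, 0.2

for episode in range(500):
    s = random.randrange(LEVELS)
    for _ in range(20):                  # planning steps per episode
        # Epsilon-greedy action selection.
        a = random.randrange(2) if random.random() < eps else Q[s].index(max(Q[s]))
        s2 = min(max(s + ACTIONS[a], 0), LEVELS - 1)
        r = score(s2) - score(s)         # reward: improvement in plan quality
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The greedy policy should walk any starting weight toward the optimum.
s = 0
for _ in range(LEVELS):
    s = min(max(s + ACTIONS[Q[s].index(max(Q[s]))], 0), LEVELS - 1)
```

After training, the greedy rollout from any starting level settles around the optimal weight, mirroring how a VPN learns which parameter adjustments improve a plan.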

Fig. 12.


General workflow of virtual treatment planner based auto-planning, analogous to the conventional human planning process (dashed route).

C. Challenges of Auto-planning

Although classical and DL-based AP methods have achieved great success in many respects, a number of technical and practical challenges remain unsolved. The first major challenge is data size. Training effective AP models often requires a large cohort of data of sufficient diversity to cover the variations among patients. However, such datasets are non-trivial to collect, and their accessibility is limited by many concerns, such as privacy. This is even more of a concern for DL-based algorithms, which typically require huge amounts of data to optimize DNNs with large numbers of learnable parameters. Lack of data often leads to severe overfitting, which may substantially deteriorate model performance on new patient cases never observed during training. Second, despite their encouraging performance, most AP models, especially DL-based ones, are difficult to interpret. To confidently deploy a model that automates the treatment planning process, it is essential to understand the reasoning behind the plan generation process. Such interpretability is necessary to ensure a model's generality and robustness across patients; lack of interpretability may lead to unexpected model failure at the clinical deployment stage, posing serious risk to patients. In addition, there are rising worries about de-staffing/deskilling of human planners. However, in a study by Speer et al., experienced planners were found to outperform AP systems in complex cases [23]. In this regard, AP can reduce the workload of simple cases and enable human planners to spend more time on complex cases, thereby further improving plan quality. To implement AP in most clinics, commonly accepted guidelines on the implementation and quality assurance of AI models will play an important role, as discussed thoroughly in a recent review by Vandewinckele et al. [303].

D. Outlooks/Future Directions

AP has great potential to automate and accelerate the tedious and time-consuming treatment planning process of RT. It is expected to consistently improve the quality of treatment plans, allowing the best care to be delivered to each patient. AP would also free human planners from simple cases so that their efforts can be focused on more challenging cases. The urgent need for AP is further amplified in the regime of adaptive RT, where planning efficiency critically affects the success of adaptation. Furthermore, moving towards the era of personalized care, AP permits quick generation of treatment plans for different treatment techniques, e.g., IMRT vs. VMAT, standard fractionation vs. SBRT, and photon vs. proton treatment, from which the best plan can be chosen for optimal treatment outcome.

The remaining challenges of AP approaches, such as data size and interpretability, need to be addressed to develop effective and reliable models. Collecting a large inter-institutional dataset is attractive but often impractical due to privacy and regulatory concerns. One potential solution is federated learning [304-306], which allows each institution to keep its own data while the model is trained on all of it. In addition, interpretable machine learning/DL [307, 308] has recently become a central topic due to the increasing need for model explainability and reliability in many applications, including RT. More efforts are needed along these directions to fulfill the urgent, but unmet, need for AP.
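Federated averaging, the simplest federated learning scheme, can be sketched as follows: each institution fits a model on its private data, and only the model weights (never patient data) are pooled, weighted by local sample size. The linear dose model and all data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

# Shared ground-truth relation between features and a dose metric (invented).
true_w = np.array([2.0, -1.0, 0.5])

def local_fit(n):
    # Each institution fits on its own private data; only (weights, n) leave.
    X = rng.normal(size=(n, 3))                # private patient features
    y = X @ true_w + 0.1 * rng.normal(size=n)  # private dose metric
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w, n

# Three institutions of different sizes contribute local model updates.
updates = [local_fit(n) for n in (30, 50, 80)]
total = sum(n for _, n in updates)
w_global = sum(n * w for w, n in updates) / total   # FedAvg aggregation
```

In practice this averaging is repeated over many communication rounds for deep models, but the privacy-preserving structure is the same: data stays local, weights travel.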

Fig. 1.


Illustration of image reconstruction from sensor domain.

Fig. 2.


Comparison of ultralow-dose CT images (120 kVp, 10 mAs) reconstructed by six different approaches. All the images are displayed in the same window of [−160, 240] HU. Figure reprinted from Wang et al. with permission.

Fig. 6.


GAN Framework showing MRI to CT image translation. The generator tries to learn the data distribution of MRI and CT images and uses this learnt representation to convert an MRI image to a fake CT image. The generator learns this representation by trying to fool the discriminator while it is comparing real and fake CTs and learning to differentiate between them.

Fig. 7.


Workflow for adaptive radiotherapy enabled by dose calculation from CBCT images. Prior to dose recalculation, a CBCT image is converted to a synthetic CT (sCT), providing correct HU values and removing artifacts, while preserving the anatomy.

Acknowledgment

All authors declare that they have no known competing financial interests or personal relationships that could have influenced the work reported in this paper.

Contributor Information

Yabo Fu, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA.

Hao Zhang, Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA.

Eric D. Morris, Department of Radiation Oncology, University of California-Los Angeles, Los Angeles, CA 90095, USA.

Carri K. Glide-Hurst, Department of Human Oncology, School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI 53792, USA.

Suraj Pai, Maastricht University Medical Centre, Netherlands.

Alberto Traverso, Maastricht University Medical Centre, Netherlands.

Leonard Wee, Maastricht University Medical Centre, Netherlands.

Ibrahim Hadzic, Maastricht University Medical Centre, Netherlands.

Per-Ivar Lønne, Department of Medical Physics, Oslo University Hospital, PO Box 4953 Nydalen, 0424 Oslo, Norway.

Chenyang Shen, Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX 75002, USA.

Tian Liu, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA.

Xiaofeng Yang, Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA.

References

  • [1].Zhu W, Huang Y, Zeng L, Chen X, Liu Y, Qian Z, Du N, Fan W, and Xie X, “AnatomyNet: Deep learning for fast and fully automated whole-volume segmentation of head and neck anatomy,” Med Phys, vol. 46, no. 2, pp. 576–589, Feb, 2019. [DOI] [PubMed] [Google Scholar]
  • [2].Lei Y, Wang T, Tian S, Dong X, Jani AB, Schuster D, Curran WJ, Patel P, Liu T, and Yang X, “Male pelvic multi-organ segmentation aided by CBCT-based synthetic MRI,” Phys Med Biol, vol. 65, no. 3, pp. 035013, Feb 4, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [3].Dong X, Lei Y, Wang T, Thomas M, Tang L, Curran WJ, Liu T, and Yang X, “Automatic multiorgan segmentation in thorax CT images using U-net-GAN,” Med Phys, vol. 46, no. 5, pp. 2157–2168, May, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [4].van der Heyden B, Wohlfahrt P, Eekers DBP, Richter C, Terhaag K, Troost EGC, and Verhaegen F, “Dual-energy CT for automatic organs-at-risk segmentation in brain-tumor patients using a multi-atlas and deep-learning approach,” Sci Rep, vol. 9, no. 1, pp. 4126, Mar 11, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [5].Ronneberger O, Fischer P, and Brox T, "U-net: Convolutional networks for biomedical image segmentation." pp. 234–241. [Google Scholar]
  • [6].Chen Y, Ruan D, Xiao J, Wang L, Sun B, Saouaf R, Yang W, Li D, and Fan Z, “Fully automated multiorgan segmentation in abdominal magnetic resonance imaging with deep neural networks,” Med Phys, vol. 47, no. 10, pp. 4971–4982, Oct, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [7].Fu Y, Mazur TR, Wu X, Liu S, Chang X, Lu Y, Li HH, Kim H, Roach MC, Henke L, and Yang D, “A novel MRI segmentation method using CNN-based correction network for MRI-guided adaptive radiotherapy,” Med Phys, vol. 45, no. 11, pp. 5129–5137, Nov, 2018. [DOI] [PubMed] [Google Scholar]
  • [8].Rohé M-M, Datar M, Heimann T, Sermesant M, and Pennec X, "SVF-Net: Learning Deformable Image Registration Using Shape Matching," Medical Image Computing and Computer Assisted Intervention — MICCAI 2017. pp. 266–274. [Google Scholar]
  • [9].Ghosal S, and Rayl N, “Deep deformable registration: Enhancing accuracy by fully convolutional neural net,” Pattern Recognition Letters, vol. 94, pp. 81–86, Jul 15, 2017. [Google Scholar]
  • [10].Chee E, and Wu Z, “AIRNet: Self-Supervised Affine Registration for 3D Medical Images using Neural Networks,” ArXiv, vol. abs/1810.02583, 2018. [Google Scholar]
  • [11].Fan J, Cao X, Xue Z, Yap PT, and Shen D, “Adversarial Similarity Network for Evaluating Image Alignment in Deep Learning based Registration,” Med Image Comput Comput Assist Interv, vol. 11070, pp. 739–746, Sep, 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [12].Fan JF, Cao XH, Yap EA, and Shen DG, “BIRNet: Brain image registration using dual-supervised fully convolutional networks,” Medical Image Analysis, vol. 54, pp. 193–206, May, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [13].Yang X, Kwitt R, Styner M, and Niethammer M, “Quicksilver: Fast predictive image registration – A deep learning approach,” NeuroImage, vol. 158, pp. 378–396, 2017/September/01/, 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [14].Nguyen G, Dlugolinsky S, Bobák M, Tran V, García ÁL, Heredia I, Malík P, and Hluchý L, “Machine Learning and Deep Learning frameworks and libraries for large-scale data mining: a survey,” Artificial Intelligence Review, vol. 52, no. 1, pp. 77–124, 2019. [Google Scholar]
  • [15].Cao X, Yang J, Wang L, Xue Z, Wang Q, and Shen D, “Deep Learning based Inter-Modality Image Registration Supervised by Intra-Modality Similarity,” Machine learning in medical imaging. MLMI, vol. 11046, pp. 55–63, 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [16].Cao XH, Yang JH, Zhang J, Wang Q, Yap PT, and Shen DG, “Deformable Image Registration Using a Cue-Aware Deep Regression Network,” Ieee Transactions on Biomedical Engineering, vol. 65, no. 9, pp. 1900–1911, Sep, 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [17].Hu Y, Gibson E, Ghavami N, Bonmati E, Moore CM, Emberton M, Vercauteren T, Noble JA, and Barratt DC, "Adversarial Deformation Regularization for Training Image Registration Neural Networks." [Google Scholar]
  • [18].Hu YP, Modat M, Gibson E, Li WQ, Ghavamia N, Bonmati E, Wang GT, Bandula S, Moore CM, Emberton M, Ourselin S, Noble JA, Barratt DC, and Vercauteren T, “Weakly-supervised convolutional neural networks for multimodal image registration,” Medical Image Analysis, vol. 49, pp. 1–13, Oct, 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [19].Sentker T, Madesta F, and Werner R, "GDL-FIRE ^\text 4D : Deep Learning-Based Fast 4D CT Image Registration." [Google Scholar]
  • [20].Eppenhof KAJ, Lafarge MW, Moeskops P, Veta M, and Pluim JPW, "Deformable image registration using convolutional neural networks." [DOI] [PubMed] [Google Scholar]
  • [21].Eppenhof KAJ, and Pluim JPW, “Pulmonary CT Registration Through Supervised Learning With Convolutional Neural Networks,” Ieee Transactions on Medical Imaging, vol. 38, no. 5, pp. 1097–1105, May, 2019. [DOI] [PubMed] [Google Scholar]
  • [22].Kearney V, Haaf S, Sudhyadhom A, Valdes G, and Solberg TD, “An unsupervised convolutional neural network-based algorithm for deformable image registration,” Physics in Medicine and Biology, vol. 63, no. 18, Sep, 2018. [DOI] [PubMed] [Google Scholar]
  • [23].Seo H, Khuzani MB, Vasudevan V, Huang C, Ren H, Xiao R, Jia X, and Xing L, “Machine Learning Techniques for Biomedical Image Segmentation: An Overview of Technical Aspects and Introduction to State-of-Art Applications,” arXiv preprint arXiv:1911.02521, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [24].Kori A, and Krishnamurthi G, “Zero Shot Learning for Multi-Modal Real Time Image Registration,” ArXiv, vol. abs/1908.06213, 2019. [Google Scholar]
  • [25].Sokooti H, de Vos B, Berendsen F, Lelieveldt BPF, Išgum I, and Staring M, "Nonrigid Image Registration Using Multi-scale 3D Convolutional Neural Networks," Medical Image Computing and Computer Assisted Intervention — MICCAI 2017. pp. 232–239. [Google Scholar]
  • [26].Sokooti H, de Vos BD, Berendsen FF, Ghafoorian M, Yousefi S, Lelieveldt BPF, Išgum I, and Staring M, “3D Convolutional Neural Networks Image Registration Based on Efficient Supervised Learning from Artificial Deformations,” ArXiv, vol. abs/1908.10235, 2019. [Google Scholar]
  • [27].Onieva J, Marti-Fuster B, Pedrero de la Puente M, and San José Estépar R, "Diffeomorphic Lung Registration Using Deep CNNs and Reinforced Learning," Image Analysis for Moving Organ, Breast, and Thoracic Images. pp. 284–294. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [28].Krebs J, Mansi T, Mailhe B, Ayache N, and Delingette H, “Unsupervised Probabilistic Deformation Modeling for Robust Diffeomorphic Registration,” Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, Dlmia 2018, vol. 11045, pp. 101–109, 2018. [Google Scholar]
  • [29].Zeng Q, Fu Y, Tian Z, Lei Y, Zhang Y, Wang T, Mao H, Liu T, Curran W, Jani A, Patel P, and Yang X, “Label-driven magnetic resonance imaging (MRI)-transrectal ultrasound (TRUS) registration using weakly supervised learning for MRI-guided prostate radiotherapy,” Physics in Medicine and Biology, vol. 65, pp. 135002, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [30].Vandewinckele L, Claessens M, Dinkla AM, Brouwer C, Crijns W, Verellen D, and Elmpt WV, “Overview of artificial intelligence-based applications in radiotherapy: Recommendations for implementation and quality assurance,” Radiotherapy and Oncology, vol. 153, pp. 55–66, 2020. [DOI] [PubMed] [Google Scholar]
  • [31].Siddique S, and Chow J, “Artificial intelligence in radiotherapy,” Reports of Practical Oncology and Radiotherapy, vol. 25, no. 4, pp. 656–666, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [32].Huynh E, Hosny A, Guthier C, Bitterman DS, Petit SF, Haas-Kogan DA, Kann B, Aerts HJWL, and Mak RH, “Artificial intelligence in radiation oncology,” Nature Reviews Clinical Oncology, vol. 17, no. 12, pp. 771–781, 2020. [DOI] [PubMed] [Google Scholar]
  • [33].Kuckertz S, Papenberg N, Honegger J, Morgas T, Haas B, and Heldmann S, "Learning Deformable Image Registration with Structure Guidance Constraints for Adaptive Radiotherapy," Biomedical Image Registration. pp. 44–53. [Google Scholar]
  • [34].Stergios C, Mihir S, Maria V, Guillaume C, Marie-Pierre R, Stavroula M, and Nikos P, "Linear and Deformable Image Registration with 3D Convolutional Neural Networks," Image Analysis for Moving Organ, Breast, and Thoracic Images. pp. 13–22. [Google Scholar]
  • [35].Upendra RR, Simon R, and Linte CA, "A Supervised Image Registration Approach for Late Gadolinium Enhanced MRI and Cine Cardiac MRI Using Convolutional Neural Networks," Medical Image Understanding and Analysis. pp. 208–220. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [36].Sun L, and Zhang S, "Deformable MRI-Ultrasound Registration Using 3D Convolutional Neural Network," Simulation, Image Processing, and Ultrasound Systems for Assisted Diagnosis and Navigation. pp. 152–158. [Google Scholar]
  • [37].de Vos BD, Berendsen FF, Viergever MA, Sokooti H, Staring M, and Išgum I, “A deep learning framework for unsupervised affine and deformable image registration,” Medical Image Analysis, vol. 52, pp. 128–143, Feb, 2019. [DOI] [PubMed] [Google Scholar]
  • [38].Balakrishnan G, Zhao A, Sabuncu MR, Guttag J, and Dalca AV, “An Unsupervised Learning Model for Deformable Medical Image Registration,” 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 9252–9260, 2018. [Google Scholar]
  • [39].Balakrishnan G, Zhao A, Sabuncu MR, Guttag J, and Dalca AV, “VoxelMorph: A Learning Framework for Deformable Medical Image Registration,” IEEE Transactions on Medical Imaging, vol. 38, no. 8, pp. 1788–1800, Aug, 2019. [DOI] [PubMed] [Google Scholar]
  • [40].Kim B, Kim J, Lee J-G, Kim DH, Park SH, and Ye JC, "Unsupervised Deformable Image Registration Using Cycle-Consistent CNN." [DOI] [PubMed] [Google Scholar]
  • [41].Kuang D, “On Reducing Negative Jacobian Determinant of the Deformation Predicted by Deep Registration Networks,” ArXiv, vol. abs/1907.00068, 2019. [Google Scholar]
  • [42].Kuang D, and Schmah T, "FAIM – A ConvNet Method for Unsupervised 3D Medical Image Registration," Machine Learning in Medical Imaging. pp. 646–654. [Google Scholar]
  • [43].Tyagi N, Fontenla S, Zelefsky M, Chong-Ton M, Ostergren K, Shah N, Warner L, Kadbi M, Mechalakos J, and Hunt M, “Clinical workflow for MR-only simulation and planning in prostate,” Radiation Oncology, vol. 12, 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [44].Lau TF, Luo J, Zhao S, Chang EI-C, and Xu Y, “Unsupervised 3D End-to-End Medical Image Registration with Volume Tweening Network,” IEEE journal of biomedical and health informatics, 2019. [DOI] [PubMed] [Google Scholar]
  • [45].“Recent advances of PET imaging in clinical radiation oncology,” Radiation Oncology, vol. 15, pp. 1–15, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [46].Lei Y, Fu Y, Harms J, Wang T, Curran WJ, Liu T, Higgins K, and Yang X, "4D-CT Deformable Image Registration Using an Unsupervised Deep Convolutional Neural Network," Artificial Intelligence in Radiation Therapy. pp. 26–33. [Google Scholar]
  • [47].Lei Y, Fu Y, Wang T, Liu Y, Patel P, Curran WJ, Liu T, and Yang X, “4D-CT deformable image registration using multiscale unsupervised deep learning,” Physics in Medicine & Biology, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [48].Liney GP, Whelan B, Oborn B, Barton M, and Keall P, “MRI-Linear Accelerator Radiotherapy Systems,” Clinical Oncology, vol. 30, pp. 686–691, 2018. [DOI] [PubMed] [Google Scholar]
  • [49].Fu Y, Lei Y, Wang T, Higgins K, Bradley J, Curran W, Liu T, and Yang X, “LungRegNet: an unsupervised deformable image registration method for 4D-CT lung,” Medical physics, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [50].“Role and future of MRI in radiation oncology,” British Journal of Radiology, vol. 92, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [51].Fu Y, Lei Y, Wang T, Patel P, Jani A, Mao H, Curran W, Liu T, and Yang X, “Biomechanically constrained non-rigid MR-TRUS prostate registration using deep learning based 3D point cloud matching,” Medical image analysis, vol. 67, pp. 101845, 2021. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [52].Fu Y, Wang T, Lei Y, Patel P, Jani A, Curran W, Liu T, and Yang X, “Deformable MR-CBCT Prostate Registration using Biomechanically Constrained Deep Learning Networks,” Medical physics, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [53].Stewart RD, and Li XA, “BGRT: Biologically guided radiation therapy - The future is fast approaching!,” Medical Physics, vol. 34, pp. 3739–3751, 2007. [DOI] [PubMed] [Google Scholar]
  • [54].Fechter T, and Baltas D, “One-Shot Learning for Deformable Medical Image Registration and Periodic Motion Tracking,” IEEE Transactions on Medical Imaging, vol. 39, pp. 2506–2517, 2020. [DOI] [PubMed] [Google Scholar]
  • [55].“A Review on Deep Learning in Medical Image Reconstruction,” Journal of the Operations Research Society of China, vol. 8, pp. 311–340, 2020. [Google Scholar]
  • [56].“Iterative reconstruction methods in X-ray CT,” Physica Medica, vol. 28, pp. 94–108, 2012. [DOI] [PubMed] [Google Scholar]
  • [57].“State of the Art: Iterative CT reconstruction techniques,” Radiology, vol. 276, pp. 339–357, 2015. [DOI] [PubMed] [Google Scholar]
  • [58].“Regularization strategies in statistical image reconstruction of low-dose x-ray CT: A review,” Medical Physics, vol. 45, pp. e886–e907, 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [59].Fessler J, "Model-based image reconstruction for MRI," IEEE Signal Processing Magazine, Institute of Electrical and Electronics Engineers Inc., 2010, pp. 81–89. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [60].“Compressed sensing MRI: A review,” Critical Reviews in Biomedical Engineering, vol. 41, pp. 183–204, 2013. [DOI] [PubMed] [Google Scholar]
  • [61].“Iterative reconstruction techniques in emission computed tomography,” Physics in Medicine and Biology, vol. 51, 2006. [DOI] [PubMed] [Google Scholar]
  • [62].“A perspective on deep imaging,” IEEE Access, vol. 4, pp. 8914–8924, 2016. [Google Scholar]
  • [63].“Image Reconstruction is a New Frontier of Machine Learning,” IEEE Transactions on Medical Imaging, vol. 37, pp. 1289–1296, 2018. [DOI] [PubMed] [Google Scholar]
  • [64].“An overview of deep learning in medical imaging focusing on MRI,” Zeitschrift für Medizinische Physik, vol. 29, pp. 102–127, 2019. [DOI] [PubMed] [Google Scholar]
  • [65].Reader AJ, Corda G, Mehranian A, da Costa-Luis C, Ellis S, and Schnabel JA, “Deep Learning for PET Image Reconstruction,” IEEE Transactions on Radiation and Plasma Medical Sciences, pp. 1–1, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [66].Ben Yedder H, Cardoen B, and Hamarneh G, “Deep learning for biomedical image reconstruction: a survey,” Artificial Intelligence Review, pp. 1–37, 2020. [Google Scholar]
  • [67].Wang G, Ye JC, and De Man B, “Deep learning for tomographic image reconstruction,” Nature machine intelligence, vol. 2, pp. 737–748, 2020. [Google Scholar]
  • [68].Ravishankar S, Ye JC, and Fessler JA, “Image Reconstruction: From Sparsity to Data-Adaptive Methods and Machine Learning,” Proceedings of the IEEE, vol. 108, pp. 86–109, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [69].Shieh CC, Gonzalez Y, Li B, Jia X, Rit S, Mory C, Riblett M, Hugo G, Zhang Y, Jiang Z, Liu X, Ren L, and Keall P, “SPARE: Sparse-view reconstruction challenge for 4D cone-beam CT from a 1-min scan,” Medical Physics, vol. 46, pp. 3799–3811, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [70].Kang E, Min J, and Ye JC, “A deep convolutional neural network using directional wavelets for low-dose X-ray CT reconstruction,” Medical Physics, vol. 44, pp. e360–e375, 2017. [DOI] [PubMed] [Google Scholar]
  • [71].Chen H, Zhang Y, Kalra MK, Lin F, Chen Y, Liao P, Zhou J, and Wang G, “Low-Dose CT with a residual encoder-decoder convolutional neural network,” IEEE Transactions on Medical Imaging, vol. 36, pp. 2524–2535, 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [72].Yang Q, Yan P, Zhang Y, Yu H, Shi Y, Mou X, Kalra MK, Zhang Y, Sun L, and Wang G, “Low-Dose CT Image Denoising Using a Generative Adversarial Network With Wasserstein Distance and Perceptual Loss,” IEEE Transactions on Medical Imaging, vol. 37, pp. 1348–1357, 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [73].Wang Y, Liao Y, Zhang Y, He J, Li S, Bian Z, Zhang H, Gao Y, Meng D, Zuo W, Zeng D, and Ma J, “Iterative quality enhancement via residual-artifact learning networks for low-dose CT,” Physics in Medicine and Biology, vol. 63, 2018. [DOI] [PubMed] [Google Scholar]
  • [74].Han Y, and Ye JC, “Framing U-Net via Deep Convolutional Framelets: Application to Sparse-View CT,” IEEE Transactions on Medical Imaging, vol. 37, pp. 1418–1429, 2018. [DOI] [PubMed] [Google Scholar]
  • [75].Zhang Z, Liang X, Dong X, Xie Y, and Cao G, “A Sparse-View CT Reconstruction Method Based on Combination of DenseNet and Deconvolution,” IEEE Transactions on Medical Imaging, vol. 37, pp. 1407–1417, 2018. [DOI] [PubMed] [Google Scholar]
  • [76].Jiang Z, Chen Y, Zhang Y, Ge Y, Yin FF, and Ren L, “Augmentation of CBCT Reconstructed from Under-Sampled Projections Using Deep Learning,” IEEE Transactions on Medical Imaging, vol. 38, pp. 2705–2715, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [77].Wolterink JM, Leiner T, Viergever MA, and Išgum I, “Generative adversarial networks for noise reduction in low-dose CT,” IEEE Transactions on Medical Imaging, vol. 36, pp. 2536–2545, 2017. [DOI] [PubMed] [Google Scholar]
  • [78].Li Z, Zhou S, Huang J, Yu L, and Jin M, “Investigation of Low-Dose CT Image Denoising Using Unpaired Deep Learning Methods,” IEEE Transactions on Radiation and Plasma Medical Sciences, pp. 1–1, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [79].“Applications of nonlocal means algorithm in low-dose X-ray CT image processing and reconstruction: A review,” Medical Physics, vol. 44, pp. 1168–1185, 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [80].“Deep-neural-network based sinogram synthesis for sparse-view CT image reconstruction,” arXiv, 2018. [DOI] [PubMed] [Google Scholar]
  • [81].Ghani MU, and Karl WC, "Deep Learning-Based Sinogram Completion for Low-Dose CT," 2018 IEEE 13th Image, Video, and Multi-dimensional Signal Processing Workshop, IVMSP 2018 - Proceedings, Institute of Electrical and Electronics Engineers Inc., 2018. [Google Scholar]
  • [82].Liang K, Xing Y, Yang H, and Kang K, "Improve angular resolution for sparse-view CT with residual convolutional neural network," Medical Imaging 2018: Physics of Medical Imaging, Chen G-H, Lo JY, and Gilat Schmidt T, eds., SPIE, 2018, p. 55. [Google Scholar]
  • [83].Beaudry J, Esquinas P, and Shieh C-C, "Learning from our neighbours: a novel approach on sinogram completion using bin-sharing and deep learning to reconstruct high quality 4DCBCT," Medical Imaging 2019: Physics of Medical Imaging, Bosmans H, Chen G-H, and Gilat Schmidt T, eds., SPIE, 2019, p. 153. [Google Scholar]
  • [84].Würfl T, Ghesu FC, Christlein V, and Maier A, “Deep learning computed tomography,” Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016, Springer, 2016. [Google Scholar]
  • [85].He J, Wang Y, and Ma J, “Radon Inversion via Deep Learning,” IEEE Transactions on Medical Imaging, vol. 39, pp. 2076–2087, 2020. [DOI] [PubMed] [Google Scholar]
  • [86].Li Y, Li K, Zhang C, Montoya J, and Chen GH, “Learning to Reconstruct Computed Tomography Images Directly from Sinogram Data under A Variety of Data Acquisition Conditions,” IEEE Transactions on Medical Imaging, vol. 38, pp. 2469–2481, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [87].Wu D, Kim K, El Fakhri G, and Li Q, “Iterative Low-dose CT Reconstruction with Priors Trained by Artificial Neural Network,” IEEE Transactions on Medical Imaging, vol. 36, pp. 2479–2486, 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [88].Chen B, Xiang K, Gong Z, Wang J, and Tan S, “Statistical Iterative CBCT Reconstruction Based on Neural Network,” IEEE Transactions on Medical Imaging, vol. 37, pp. 1511–1521, 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [89].Gao Y, Tan J, Shi Y, Lu S, Gupta A, Li H, and Liang Z, “Constructing a tissue-specific texture prior by machine learning from previous full-dose scan for Bayesian reconstruction of current ultralow-dose CT images,” Journal of Medical Imaging, vol. 7, pp. 1, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [90].Shen C, Gonzalez Y, Chen L, Jiang SB, and Jia X, “Intelligent Parameter Tuning in Optimization-Based Iterative CT Reconstruction via Deep Reinforcement Learning,” IEEE Transactions on Medical Imaging, vol. 37, pp. 1430–1439, 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [91].Chen H, Zhang Y, Chen Y, Zhang J, Zhang W, Sun H, Lv Y, Liao P, Zhou J, and Wang G, “LEARN: Learned Experts’ Assessment-Based Reconstruction Network for Sparse-Data CT,” IEEE Transactions on Medical Imaging, vol. 37, pp. 1333–1347, 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [92].He J, Yang Y, Wang Y, Zeng D, Bian Z, Zhang H, Sun J, Xu Z, and Ma J, “Optimizing a Parameterized Plug-and-Play ADMM for Iterative Low-Dose CT Reconstruction,” IEEE Transactions on Medical Imaging, vol. 38, pp. 371–382, 2019. [DOI] [PubMed] [Google Scholar]
  • [93].Kelly B, Matthews TP, and Anastasio MA, “Deep Learning-Guided Image Reconstruction from Incomplete Data,” arXiv, 2017. [Google Scholar]
  • [94].Gupta H, Jin KH, Nguyen HQ, McCann MT, and Unser M, “CNN-Based Projected Gradient Descent for Consistent CT Image Reconstruction,” IEEE Transactions on Medical Imaging, vol. 37, pp. 1440–1453, 2018. [DOI] [PubMed] [Google Scholar]
  • [95].Adler J, and Öktem O, “Learned Primal-Dual Reconstruction,” IEEE Transactions on Medical Imaging, vol. 37, pp. 1322–1332, 2018. [DOI] [PubMed] [Google Scholar]
  • [96].Zhu B, Liu J, Cauley S, Rosen B, and Rosen M, “Image reconstruction by domain-transform manifold learning,” Nature, vol. 555, pp. 487–492, 2018. [DOI] [PubMed] [Google Scholar]
  • [97].Fu L, and De Man B, "A hierarchical approach to deep learning and its application to tomographic reconstruction," 15th International Meeting on Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine, Matej S and Metzler SD, eds., SPIE, 2019, p. 41. [Google Scholar]
  • [98].Shen L, Zhao W, and Xing L, “Patient-specific reconstruction of volumetric computed tomography images from a single projection view via deep learning,” Nature Biomedical Engineering, vol. 3, pp. 880–888, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [99].Lei Y, Tian Z, Wang T, Higgins K, Bradley JD, Curran WJ, Liu T, and Yang X, “Deep learning-based real-time volumetric imaging for lung stereotactic body radiation therapy: a proof of concept study,” Phys Med Biol, vol. 65, no. 23, pp. 235003, Dec 18, 2020. [DOI] [PubMed] [Google Scholar]
  • [100].Solomon J, Lyu P, Marin D, and Samei E, “Noise and spatial resolution properties of a commercially available deep learning-based CT reconstruction algorithm,” Medical Physics, vol. 47, pp. 3961–3971, 2020. [DOI] [PubMed] [Google Scholar]
  • [101].Yang X, and Fei B, “3D Prostate Segmentation of Ultrasound Images Combining Longitudinal Image Registration and Machine Learning,” Proceedings of SPIE-the International Society for Optical Engineering, vol. 8316, pp. 83162O–83162O, 2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [102].Fu Y, Liu S, Li H, and Yang D, “Automatic and hierarchical segmentation of the human skeleton in CT images,” Phys Med Biol, vol. 62, no. 7, pp. 2812–2833, Apr 7, 2017. [DOI] [PubMed] [Google Scholar]
  • [103].Yang X, Rossi PJ, Jani AB, Mao H, Curran WJ, and Liu T, “3D Transrectal Ultrasound (TRUS) Prostate Segmentation Based on Optimal Feature Learning Framework,” Proc SPIE Int Soc Opt Eng, vol. 9784, Feb-Mar, 2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [104].Samavati N, Velec M, and Brock KK, “Effect of deformable registration uncertainty on lung SBRT dose accumulation,” Med Phys, vol. 43, no. 1, pp. 233, Jan, 2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [105].Chetty IJ, and Rosu-Bubulac M, “Deformable Registration for Dose Accumulation,” Semin Radiat Oncol, vol. 29, no. 3, pp. 198–208, Jul, 2019. [DOI] [PubMed] [Google Scholar]
  • [106].Yang D, Brame S, El Naqa I, Aditya A, Wu Y, Goddu SM, Mutic S, Deasy JO, and Low DA, “Technical note: DIRART–A software suite for deformable image registration and adaptive radiotherapy research,” Med Phys, vol. 38, no. 1, pp. 67–77, Jan, 2011. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [107].Vercauteren T, Pennec X, Perchant A, and Ayache N, “Diffeomorphic demons: Efficient non-parametric image registration,” NeuroImage, vol. 45, no. 1, Supplement 1, pp. S61–S72, 2009. [DOI] [PubMed] [Google Scholar]
  • [108].Avants BB, Tustison NJ, Song G, Cook PA, Klein A, and Gee JC, “A reproducible evaluation of ANTs similarity metric performance in brain image registration,” Neuroimage, vol. 54, no. 3, pp. 2033–44, Feb 1, 2011. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [109].Shen D, “Image registration by local histogram matching,” Pattern Recognition, vol. 40, no. 4, pp. 1161–1172, 2007. [Google Scholar]
  • [110].Klein S, Staring M, Murphy K, Viergever MA, and Pluim JP, “elastix: a toolbox for intensity-based medical image registration,” IEEE Trans Med Imaging, vol. 29, no. 1, pp. 196–205, Jan, 2010. [DOI] [PubMed] [Google Scholar]
  • [111].Fu Y, Lei Y, Wang T, Walter JC, Liu T, and Yang X, “Deep learning in medical image registration: a review,” Physics in Medicine & Biology, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [112].Cheng X, Zhang L, and Zheng Y, “Deep similarity learning for multimodal medical images,” Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, vol. 6, no. 3, pp. 248–252, 2018. [Google Scholar]
  • [113].Simonovsky M, Gutiérrez-Becker B, Mateus D, Navab N, and Komodakis N, "A Deep Metric for Multimodal Registration," Medical Image Computing and Computer-Assisted Intervention - MICCAI 2016. pp. 10–18. [Google Scholar]
  • [114].Salehi SSM, Khan S, Erdoğmuş D, and Gholipour A, “Real-time Deep Registration With Geodesic Loss,” ArXiv, vol. abs/1803.05982, 2018. [Google Scholar]
  • [115].Castillo R, Castillo E, Guerra R, Johnson VE, McPhail T, Garg AK, and Guerrero T, “A framework for evaluation of deformable image registration spatial accuracy using large landmark point sets,” Physics in Medicine and Biology, vol. 54, no. 7, pp. 1849–1870, 2009. [DOI] [PubMed] [Google Scholar]
  • [116].Berendsen F, Kotte ANT, Viergever M, and Pluim JP, “Registration of organs with sliding interfaces and changing topologies,” Proc. SPIE Medical Imaging, 2014. [Google Scholar]
  • [117].Modat M, Ridgway GR, Taylor ZA, Lehmann M, Barnes J, Hawkes DJ, Fox NC, and Ourselin S, “Fast free-form deformation using graphics processing units,” Comput Methods Programs Biomed, vol. 98, no. 3, pp. 278–84, Jun, 2010. [DOI] [PubMed] [Google Scholar]
  • [118].Shackleford JA, Kandasamy N, and Sharp GC, “On developing B-spline registration algorithms for multi-core processors,” Phys Med Biol, vol. 55, no. 21, pp. 6329–51, Nov 7, 2010. [DOI] [PubMed] [Google Scholar]
  • [119].Werner R, Schmidt-Richberg A, Handels H, and Ehrhardt J, “Estimation of lung motion fields in 4D CT data by variational non-linear intensity-based registration: A comparison and evaluation study,” Phys Med Biol, vol. 59, no. 15, pp. 4247–60, Aug 7, 2014. [DOI] [PubMed] [Google Scholar]
  • [120].Uzunova H, Wilms M, Handels H, and Ehrhardt J, "Training CNNs for Image Registration from Few Samples with Model-based Data Augmentation," Medical Image Computing and Computer Assisted Intervention — MICCAI 2017. pp. 223–231. [Google Scholar]
  • [121].Hering A, Kuckertz S, Heldmann S, and Heinrich MP, "Enhancing Label-Driven Deep Deformable Image Registration with Local Distance Metrics for State-of-the-Art Cardiac Motion Tracking." [Google Scholar]
  • [122].Jaderberg M, Simonyan K, Zisserman A, and Kavukcuoglu K, “Spatial Transformer Networks,” ArXiv, vol. abs/1506.02025, 2015. [Google Scholar]
  • [123].Zhang J, “Inverse-Consistent Deep Networks for Unsupervised Deformable Image Registration,” ArXiv, vol. abs/1809.03443, 2018. [Google Scholar]
  • [124].Jiang Z, Yin FF, Ge Y, and Ren L, “A multi-scale framework with unsupervised joint training of convolutional neural networks for pulmonary deformable image registration,” Phys Med Biol, Nov 29, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [125].Ghosh S, Das N, Das I, and Maulik U, “Understanding deep learning techniques for image segmentation,” ACM Computing Surveys (CSUR), vol. 52, no. 4, pp. 1–35, 2019. [Google Scholar]
  • [126].Zaidi H, and El Naqa I, “PET-guided delineation of radiation therapy treatment volumes: a survey of image segmentation techniques,” European journal of nuclear medicine and molecular imaging, vol. 37, no. 11, pp. 2165–2187, 2010. [DOI] [PubMed] [Google Scholar]
  • [127].Zhuang X, and Shen J, “Multi-scale patch and multi-modality atlases for whole heart segmentation of MRI,” Medical Image Analysis, vol. 31, pp. 77–87, 2016. [DOI] [PubMed] [Google Scholar]
  • [128].LeCun Y, Bengio Y, and Hinton G, “Deep learning,” nature, vol. 521, no. 7553, pp. 436–444, 2015. [DOI] [PubMed] [Google Scholar]
  • [129].Men K, Dai J, and Li Y, “Automatic segmentation of the clinical target volume and organs at risk in the planning CT for rectal cancer using deep dilated convolutional neural networks,” Medical physics, vol. 44, no. 12, pp. 6377–6389, 2017. [DOI] [PubMed] [Google Scholar]
  • [130].Yang X, Wu N, Cheng G, Zhou Z, David SY, Beitler JJ, Curran WJ, and Liu T, “Automated segmentation of the parotid gland based on atlas registration and machine learning: a longitudinal MRI study in head-and-neck radiation therapy,” International Journal of Radiation Oncology* Biology* Physics, vol. 90, no. 5, pp. 1225–1233, 2014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [131].Ibragimov B, and Xing L, “Segmentation of organs-at-risks in head and neck CT images using convolutional neural networks,” Medical physics, vol. 44, no. 2, pp. 547–557, 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [132].Fu Y, Lei Y, Wang T, Curran WJ, Liu T, and Yang X, “A review of deep learning based methods for medical image multi-organ segmentation,” Physica Medica, vol. 85, pp. 107–122, 2021. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [133].Litjens G, Kooi T, Bejnordi BE, Setio AAA, Ciompi F, Ghafoorian M, van der Laak JAWM, van Ginneken B, and Sánchez CI, “A survey on deep learning in medical image analysis,” Medical Image Analysis, vol. 42, pp. 60–88, 2017. [DOI] [PubMed] [Google Scholar]
  • [134].Long J, Shelhamer E, and Darrell T, "Fully convolutional networks for semantic segmentation." pp. 3431–3440. [DOI] [PubMed] [Google Scholar]
  • [135].Morris ED, Ghanem AI, Dong M, Pantelic MV, Walker EM, and Glide-Hurst CK, “Cardiac Substructure Segmentation with Deep Learning for Improved Cardiac Sparing,” Medical Physics, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [136].Ibtehaz N, and Rahman MS, “MultiResUNet: Rethinking the U-Net architecture for multimodal biomedical image segmentation,” Neural Networks, vol. 121, pp. 74–87, 2020. [DOI] [PubMed] [Google Scholar]
  • [137].Mortazi A, Burt J, and Bagci U, "Multi-Planar Deep Segmentation Networks for Cardiac Substructures from MRI and CT." pp. 199–206. [Google Scholar]
  • [138].Payer C, Štern D, Bischof H, and Urschler M, "Multi-label Whole Heart Segmentation Using CNNs and Anatomical Label Configurations." pp. 190–198. [Google Scholar]
  • [139].Zhang Z, Yang L, and Zheng Y, "Translating and segmenting multimodal medical volumes with cycle- and shape-consistency generative adversarial network." pp. 9242–9251. [Google Scholar]
  • [140].Kirby J, Prior F, Petrick N, Hadjiski L, Farahani K, Drukker K, Kalpathy-Cramer J, Glide-Hurst C, and El Naqa I, “Introduction to special issue on datasets hosted in The Cancer Imaging Archive (TCIA),” Med Phys, vol. 47, no. 12, pp. 6026–6028, Dec, 2020. [DOI] [PubMed] [Google Scholar]
  • [141].Avendi M, Kheradvar A, and Jafarkhani H, “A combined deep-learning and deformable-model approach to fully automatic segmentation of the left ventricle in cardiac MRI,” Medical image analysis, vol. 30, pp. 108–119, 2016. [DOI] [PubMed] [Google Scholar]
  • [142].Brion E, Léger J, Javaid U, Lee J, De Vleeschouwer C, and Macq B, "Using planning CTs to enhance CNN-based bladder segmentation on cone beam CT." p. 109511M. [Google Scholar]
  • [143].Sahiner B, Pezeshk A, Hadjiiski LM, Wang X, Drukker K, Cha KH, Summers RM, and Giger ML, “Deep learning in medical imaging and radiation therapy,” Med Phys, vol. 46, no. 1, pp. e1–e36, Jan, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [144].Lin T-Y, Goyal P, Girshick R, He K, and Dollár P, "Focal loss for dense object detection." pp. 2980–2988. [DOI] [PubMed] [Google Scholar]
  • [145].Chen M, Fang L, and Liu H, "FR-NET: Focal loss constrained deep residual networks for segmentation of cardiac MRI." pp. 764–767. [DOI] [PubMed] [Google Scholar]
  • [146].Isola P, Zhu J-Y, Zhou T, and Efros AA, “Image-to-Image Translation with Conditional Adversarial Networks,” CoRR, vol. abs/1611.07004, 2016. [Google Scholar]
  • [147].Peng Y, Chen S, Qin A, Chen M, Gao X, Liu Y, Miao J, Gu H, Zhao C, Deng X, and Qi Z, “Magnetic resonance-based synthetic computed tomography images generated using generative adversarial networks for nasopharyngeal carcinoma radiotherapy treatment planning,” Radiotherapy and Oncology, vol. 150, pp. 217–224, 2020. [DOI] [PubMed] [Google Scholar]
  • [148].Zhu J-Y, Park T, Isola P, and Efros AA, “Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks,” IEEE International Conference on Computer Vision (ICCV), pp. 2242–2251, 2017. [Google Scholar]
  • [149].Wolterink JM, Dinkla AM, Savenije MHF, Seevinck PR, van den Berg CAT, and Išgum I, “Deep MR to CT synthesis using unpaired data,” Lecture Notes in Computer Science, vol. 10557, pp. 14–23, 2017. [Google Scholar]
  • [150].Lei Y, Harms J, Wang T, Liu Y, Shu HK, Jani AB, Curran WJ, Mao H, Liu T, and Yang X, “MRI-only based synthetic CT generation using dense cycle consistent generative adversarial networks,” Medical Physics, vol. 46, no. 8, pp. 3565–3581, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [151].Liu R, Lei Y, Wang T, Zhou J, Roper J, Lin L, McDonald MW, Bradley JD, Curran WJ, Liu T, and Yang X, “Synthetic dual-energy CT for MRI-only based proton therapy treatment planning using label-GAN,” Phys Med Biol, vol. 66, no. 6, pp. 065014, Mar 9, 2021. [DOI] [PubMed] [Google Scholar]
  • [152].Chen L, Wu Y, DSouza AM, Abidin AZ, Wismüller A, and Xu C, "MRI Tumor Segmentation with Densely Connected 3D CNN," 2018. [Google Scholar]
  • [153].Simonyan K, and Zisserman A, "Very Deep Convolutional Networks for Large-Scale Image Recognition," 2015. [Google Scholar]
  • [154].Bi L, Kim J, Feng DD, and Fulham MJ, “Synthesis of positron emission tomography (PET) images via multi-channel generative adversarial networks (GANs),” Lecture Notes in Computer Science, vol. 10555, pp. 105–115, 2017. [Google Scholar]
  • [155].Veiga C, Janssens G, Teng C-L, Baudier T, Hotoiu L, McClelland JR, Royle G, Lin L, Yin L, Metz J, Solberg TD, Tochner Z, Simone CB, McDonough J, and Teo B-KK, “First Clinical Investigation of Cone Beam Computed Tomography and Deformable Registration for Adaptive Proton Therapy for Lung Cancer,” International Journal of Radiation Oncology*Biology*Physics, vol. 95, no. 1, pp. 549–559, 2016. [DOI] [PubMed] [Google Scholar]
  • [156].Kurz C, Dedes G, Resch A, Reiner M, Ganswindt U, Nijhuis R, Thieke C, Belka C, Parodi K, and Landry G, “Comparing cone-beam CT intensity correction methods for dose recalculation in adaptive intensity-modulated photon and proton therapy for head and neck cancer,” Acta Oncologica, vol. 54, no. 9, pp. 1651–1657, 2015. [DOI] [PubMed] [Google Scholar]
  • [157].Bootsma GJ, Verhaegen F, and Jaffray DA, “Efficient scatter distribution estimation and correction in CBCT using concurrent Monte Carlo fitting,” Medical Physics, vol. 42, no. 1, pp. 54–68, 2014. [DOI] [PubMed] [Google Scholar]
  • [158].Niu T, Sun M, Star-Lack J, Gao H, Fan Q, and Zhu L, “Shading correction for on-board cone-beam CT in radiation therapy using planning MDCT images,” Medical Physics, vol. 37, no. 10, pp. 5395–5406, 2010. [DOI] [PubMed] [Google Scholar]
  • [159].Maspero M, Houweling AC, Savenije MHF, van Heijst TCF, Verhoeff JJC, Kotte ANTJ, and van den Berg CAT, “A single neural network for cone-beam computed tomography-based radiotherapy of head-and-neck, lung and breast cancer,” Physics and Imaging in Radiation Oncology, vol. 14, pp. 24–31, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [160].Liang X, Chen L, Nguyen D, Zhou Z, Gu X, Yang M, Wang J, and Jiang S, “Generating synthesized computed tomography (CT) from cone-beam computed tomography (CBCT) using CycleGAN for adaptive radiation therapy,” Physics in Medicine & Biology, vol. 64, no. 12, pp. 125002, 2019. [DOI] [PubMed] [Google Scholar]
  • [161].Ng EY-K, “Imaging as a diagnostic and therapeutic tool in clinical oncology,” World Journal of Clinical Oncology, vol. 2, no. 4, pp. 169 %! Imaging as a diagnostic and therapeutic tool in clinical oncology %@ 6565679118 2218-4333, 2011. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [162].Sailer AM, van Zwam WH, Wildberger JE, and Grutters JP, “Cost-effectiveness modelling in diagnostic imaging: a stepwise approach,” Eur Radiol, vol. 25, no. 12, pp. 3629–37 %7 2015/05/24 %8 Dec %! Cost-effectiveness modelling in diagnostic imaging: a stepwise approach %@ 1432-1084, 2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [163].Aberle DR, Adams AM, Berg CD, Black WC, Clapp JD, Fagerstrom RM, Gareen IF, Gatsonis C, Marcus PM, and Sicks JD, “Reduced lung-cancer mortality with low-dose computed tomographic screening,” N Engl J Med, vol. 365, no. 5, pp. 395–409, Aug 4, 2011. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [164].Zhou L, Bai S, Zhang Y, Ming X, and Deng J, “Imaging Dose, Cancer Risk and Cost Analysis in Image-guided Radiotherapy of Cancers,” Sci Rep, vol. 8, no. 1, pp. 10076 %7 2018/07/04 %8 07 %! Imaging Dose, Cancer Risk and Cost Analysis in Image-guided Radiotherapy of Cancers %@ 2045-2322, 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [165].Deng J, Chen Z, Roberts KB, and Nath R, “Kilovoltage imaging doses in the radiotherapy of pediatric cancer patients,” Int J Radiat Oncol Biol Phys, vol. 82, no. 5, pp. 1680–8 %7 2011/04/07 %8 Apr %! Kilovoltage imaging doses in the radiotherapy of pediatric cancer patients %@ 1879-355X, 2012. [DOI] [PubMed] [Google Scholar]
  • [166].Quaia E, “Comparison between 80 kV, 100 kV and 120 kV CT protocols in the assessment of the therapeutic outcome in HCC,” Liver and Pancreatic Sciences, vol. 1, no. 1, pp. 1–4 %! Comparison between 80 kV, 100 kV and 120 kV CT protocols in the assessment of the therapeutic outcome in HCC, 2016. [Google Scholar]
  • [167].Heiken JP, “Contrast safety in the cancer patient: preventing contrast-induced nephropathy,” Cancer Imaging, vol. 8 Spec No A, pp. S124–7 %7 2008/10/04 %8 Oct %! Contrast safety in the cancer patient: preventing contrast-induced nephropathy %@ 1470-7330, 2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [168].Chilla GS, Tan CH, Xu C, and Poh CL, “Diffusion weighted magnetic resonance imaging and its recent trend-a survey,” Quant Imaging Med Surg, vol. 5, no. 3, pp. 407–22 %8 Jun %! Diffusion weighted magnetic resonance imaging and its recent trend-a survey %@ 2223-4292, 2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [169].Kumar V, Gu Y, Basu S, Berglund A, Eschrich SA, Schabath MB, Forster K, Aerts HJ, Dekker A, Fenstermacher D, Goldgof DB, Hall LO, Lambin P, Balagurunathan Y, Gatenby RA, and Gillies RJ, “Radiomics: the process and the challenges,” Magn Reson Imaging, vol. 30, no. 9, pp. 1234–48 %7 2012/08/13 %8 Nov %! Radiomics: the process and the challenges %@ 1873-5894, 2012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [170].Traverso A, Wee L, Dekker A, and Gillies R, “Repeatability and Reproducibility of Radiomic Features: A Systematic Review,” Int J Radiat Oncol Biol Phys, vol. 102, no. 4, pp. 1143–1158 %7 2018/06/05 %8 11 %! Repeatability and Reproducibility of Radiomic Features: A Systematic Review %@ 1879-355X, 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [171].Daimiel Naranjo I, Lo Gullo R, Saccarelli C, Thakur SB, Bitencourt A, Morris EA, Jochelson MS, Sevilimedu V, Martinez DF, and Pinker-Domenig K, “Diagnostic value of diffusion-weighted imaging with synthetic b-values in breast tumors: comparison with dynamic contrast-enhanced and multiparametric MRI,” Eur Radiol, vol. 31, no. 1, pp. 356–367 %7 2020/08/11 %8 Jan %! Diagnostic value of diffusion-weighted imaging with synthetic b-values in breast tumors: comparison with dynamic contrast-enhanced and multiparametric MRI %@ 1432-1084, 2021. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [172].Hu Y, Xu Y, Tian Q, Chen F, Shi X, Moran CJ, Daniel BL, and Hargreaves BA, “RUN-UP: Accelerated multishot diffusion-weighted MRI reconstruction using an unrolled network with U-Net as priors,” Magn Reson Med, vol. 85, no. 2, pp. 709–720 %7 2020/08/11 %8 02 %! RUN-UP: Accelerated multishot diffusion-weighted MRI reconstruction using an unrolled network with U-Net as priors %@ 1522-2594, 2021. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [173].Kawamura M, Tamada D, Funayama S, Kromrey ML, Ichikawa S, Onishi H, and Motosugi U, “Accelerated Acquisition of High-resolution Diffusion-weighted Imaging of the Brain with a Multi-shot Echo-planar Sequence: Deep-learning-based Denoising,” Magn Reson Med Sci, vol. 20, no. 1, pp. 99–105 %7 2020/03/06 %8 Mar %! Accelerated Acquisition of High-resolution Diffusion-weighted Imaging of the Brain with a Multi-shot Echo-planar Sequence: Deep-learning-based Denoising %@ 1880-2206, 2021. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [174].Price RG, Kim JP, Zheng W, Chetty IJ, and Glide-Hurst C, “Image Guided Radiation Therapy Using Synthetic Computed Tomography Images in Brain Cancer,” Int J Radiat Oncol Biol Phys, vol. 95, no. 4, pp. 1281–9 %7 2016/03/10 %8 07 %! Image Guided Radiation Therapy Using Synthetic Computed Tomography Images in Brain Cancer %@ 1879-355X, 2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [175].Morris ED, Price RG, Kim J, Schultz L, Siddiqui MS, Chetty I, and Glide-Hurst C, “Using synthetic CT for partial brain radiation therapy: Impact on image guidance,” Pract Radiat Oncol, vol. 8, no. 5, pp. 342–350 %7 2018/04/06 %8 2018 Sep - Oct %! Using synthetic CT for partial brain radiation therapy: Impact on image guidance %@ 1879-8519, 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [176].Sokolova MV, Khristova ML, Egorenkova EM, Leonov SV, and Babkina GM, “Adsorption study of the virions of the influenza virus and its individual proteins on polystyrene,” Voprosy virusologii, vol. 33, no. 3, pp. 369–36972 %! Adsorption study of the virions of the influenza virus and its individual proteins on polystyrene %@ 0507-4088 %M Sokolova1988, 1988. [PubMed] [Google Scholar]
  • [177].Johnstone E, Wyatt JJ, Henry AM, Short SC, Sebag-Montefiore D, Murray L, Kelly CG, McCallum HM, and Speight R, “Systematic Review of Synthetic Computed Tomography Generation Methodologies for Use in Magnetic Resonance Imaging-Only Radiation Therapy,” Int J Radiat Oncol Biol Phys, vol. 100, no. 1, pp. 199–217 %7 2017/09/08 %8 01 %! Systematic Review of Synthetic Computed Tomography Generation Methodologies for Use in Magnetic Resonance Imaging-Only Radiation Therapy %@ 1879-355X, 2018. [DOI] [PubMed] [Google Scholar]
  • [178].Varian, "Varian Medical Affairs Case Studies and Scoring System," http://medicalaffairs.varian.com/halcyon-case-studies.
  • [179].Winz I, “A decision support system for radiation therapy treatment planning,” Department of Engineering Science, University of Auckland, MSc thesis, 2004.
  • [180].Cheung K, “Intensity modulated radiotherapy: advantages, limitations and future developments,” Biomedical Imaging and Intervention Journal, vol. 2, no. 1, 2006.
  • [181].Batumalai V, Jameson MG, Forstner DF, Vial P, and Holloway LC, “How important is dosimetrist experience for intensity modulated radiation therapy? A comparative analysis of a head and neck case,” Practical Radiation Oncology, vol. 3, no. 3, pp. e99–e106, 2013.
  • [182].Berry SL, Boczkowski A, Ma R, Mechalakos J, and Hunt M, “Interobserver variability in radiation therapy plan output: Results of a single-institution study,” Practical Radiation Oncology, vol. 6, no. 6, pp. 442–449, 2016.
  • [183].Nelms BE, Robinson G, Markham J, Velasco K, Boyd S, Narayan S, Wheeler J, and Sobczak ML, “Variation in external beam treatment plan quality: an inter-institutional study of planners and planning systems,” Practical Radiation Oncology, vol. 2, no. 4, pp. 296–305, 2012.
  • [184].Das IJ, Cheng C-W, Chopra KL, Mitra RK, Srivastava SP, and Glatstein E, “Intensity-Modulated Radiation Therapy Dose Prescription, Recording, and Delivery: Patterns of Variability Among Institutions and Treatment Planning Systems,” JNCI: Journal of the National Cancer Institute, vol. 100, no. 5, pp. 300–307, 2008.
  • [185].Moore KL, Brame RS, Low DA, and Mutic S, “Experience-Based Quality Control of Clinical Intensity-Modulated Radiotherapy Planning,” International Journal of Radiation Oncology Biology Physics, vol. 81, no. 2, pp. 545–551, Oct 1, 2011.
  • [186].Ohri N, Shen X, Dicker AP, Doyle LA, Harrison AS, and Showalter TN, “Radiotherapy Protocol Deviations and Clinical Outcomes: A Meta-analysis of Cooperative Group Clinical Trials,” JNCI: Journal of the National Cancer Institute, vol. 105, no. 6, pp. 387–393, 2013.
  • [187].Craft DL, Hong TS, Shih HA, and Bortfeld TR, “Improved Planning Time and Plan Quality Through Multicriteria Optimization for Intensity-Modulated Radiotherapy,” International Journal of Radiation Oncology*Biology*Physics, vol. 82, no. 1, pp. e83–e90, 2012.
  • [188].Schreiner LJ, “On the quality assurance and verification of modern radiation therapy treatment,” Journal of Medical Physics, vol. 36, no. 4, pp. 189, 2011.
  • [189].Van Dyk J, Battista J, and Bauman GS, “Accuracy and uncertainty considerations in modern radiation oncology,” The Modern Technology of Radiation Oncology, vol. 3, pp. 361–412, 2013.
  • [190].Yorke E, Rosenzweig KE, Wagman R, and Mageras GS, “Interfractional anatomic variation in patients treated with respiration-gated radiotherapy,” Journal of Applied Clinical Medical Physics, vol. 6, no. 2, pp. 19–32, 2005.
  • [191].Liu F, Erickson B, Peng C, and Li XA, “Characterization and Management of Interfractional Anatomic Changes for Pancreatic Cancer Radiotherapy,” International Journal of Radiation Oncology Biology Physics, vol. 83, no. 3, pp. E423–E429, Jul 1, 2012.
  • [192].Height R, Khoo V, Lawford C, Cox J, Joon DL, Rolfo A, and Wada M, “The dosimetric consequences of anatomic changes in head and neck radiotherapy patients,” Journal of Medical Imaging and Radiation Oncology, vol. 54, no. 5, pp. 497–504, Oct, 2010.
  • [193].Britton KR, Starkschall G, Liu H, Chang JY, Bilton S, Ezhil M, John-Baptiste S, Kantor M, Cox JD, Komaki R, and Mohan R, “Consequences of Anatomic Changes and Respiratory Motion on Radiation Dose Distributions in Conformal Radiotherapy for Locally Advanced Non-Small-Cell Lung Cancer,” International Journal of Radiation Oncology Biology Physics, vol. 73, no. 1, pp. 94–102, Jan 1, 2009.
  • [194].Tinoco M, Waga E, Tran K, Vo H, Baker J, Hunter R, Peterson C, Taku N, and Court L, “RapidPlan development of VMAT plans for cervical cancer patients in low- and middle-income countries,” Medical Dosimetry, vol. 45, no. 2, pp. 172–178, Summer, 2020.
  • [195].Tol JP, Dahele M, Delaney AR, Doornaert P, Slotman BJ, and Verbakel W, “Detailed evaluation of an automated approach to interactive optimization for volumetric modulated arc therapy plans,” Medical Physics, vol. 43, no. 4, pp. 1818–1828, Apr, 2016.
  • [196].Ma L, Chen M, Gu X, and Lu W, “Deep learning-based inverse mapping for fluence map prediction,” Physics in Medicine & Biology, 2020.
  • [197].Mihaylov IB, Mellon EA, Yechieli R, and Portelance L, “Automated inverse optimization facilitates lower doses to normal tissue in pancreatic stereotactic body radiotherapy,” Plos One, vol. 13, no. 1, pp. 12, Jan, 2018.
  • [198].Speer S, Klein A, Kober L, Weiss A, Yohannes I, and Bert C, “Automation of radiation treatment planning: Evaluation of head and neck cancer patient plans created by the Pinnacle(3) scripting and Auto-Planning functions,” Strahlentherapie Und Onkologie, vol. 193, no. 8, pp. 656–665, Aug, 2017.
  • [199].Teruel JR, Malin M, Liu EK, McCarthy A, Hu K, Cooper BT, Sulman EP, Silverman JS, and Barbee D, “Full automation of spinal stereotactic radiosurgery and stereotactic body radiation therapy treatment planning using Varian Eclipse scripting,” Journal of Applied Clinical Medical Physics, pp. 10.
  • [200].Xhaferllari I, Wong E, Bzdusek K, Lock M, and Chen JZ, “Automated IMRT planning with regional optimization using planning scripts,” Journal of Applied Clinical Medical Physics, vol. 14, no. 1, pp. 176–191, 2013.
  • [201].Yang YW, Shao KN, Zhang J, Chen M, Chen YY, and Shan GP, “Automatic Planning for Nasopharyngeal Carcinoma Based on Progressive Optimization in RayStation Treatment Planning System,” Technology in Cancer Research & Treatment, vol. 19, pp. 8, Jun, 2020.
  • [202].Zarepisheh M, Hong LD, Zhou Y, Oh JH, Mechalakos JG, Hunt MA, Mageras GS, and Deasy JO, “Automated intensity modulated treatment planning: The expedited constrained hierarchical optimization (ECHO) system,” Medical Physics, vol. 46, no. 7, pp. 2944–2954, Jul, 2019.
  • [203].Cilla S, Ianiro A, Romano C, Deodato F, Macchia G, Buwenge M, Dinapoli N, Boldrini L, Morganti AG, and Valentini V, “Template-based automation of treatment planning in advanced radiotherapy: a comprehensive dosimetric and clinical evaluation,” Scientific Reports, vol. 10, no. 1, pp. 13, Jan, 2020.
  • [204].Gintz D, Latifi K, Caudell J, Nelms B, Zhang G, Moros E, and Feygelman V, “Initial evaluation of automated treatment planning software,” Journal of Applied Clinical Medical Physics, vol. 17, no. 3, pp. 331–346, 2016.
  • [205].Hazell I, Bzdusek K, Kumar P, Hansen CR, Bertelsen A, Eriksen JG, Johansen J, and Brink C, “Automatic planning of head and neck treatment plans,” Journal of Applied Clinical Medical Physics, vol. 17, no. 1, pp. 272–282, 2016.
  • [206].Krayenbuehl J, Di Martino M, Guckenberger M, and Andratschke N, “Improved plan quality with automated radiotherapy planning for whole brain with hippocampus sparing: a comparison to the RTOG 0933 trial,” Radiation Oncology, vol. 12, pp. 7, Oct, 2017.
  • [207].Kusters J, Bzdusek K, Kumar P, van Kollenburg PGM, Kunze-Busch MC, Wendling M, Dijkema T, and Kaanders J, “Automated IMRT planning in Pinnacle,” Strahlentherapie Und Onkologie, vol. 193, no. 12, pp. 1031–1038, Dec, 2017.
  • [208].McConnell KA, Marston T, Zehren BE, Lirani A, Stanley DN, Bishop A, Crownover R, Eng T, Shi Z, Li Y, Baacke D, Kirby N, Rasmussen K, Papanikolaou N, and Gutierrez AN, “Dosimetric Evaluation of Pinnacle’s Automated Treatment Planning Software to Manually Planned Treatments,” Technology in Cancer Research & Treatment, vol. 17, pp. 7, Jun, 2018.
  • [209].Ouyang Z, Shen ZLL, Murray E, Kolar M, LaHurd D, Yu NC, Joshi N, Koyfman S, Bzdusek K, and Xia P, “Evaluation of auto-planning in IMRT and VMAT for head and neck cancer,” Journal of Applied Clinical Medical Physics, vol. 20, no. 7, pp. 39–47, Jul, 2019.
  • [210].Smith A, Granatowicz A, Stoltenberg C, Wang S, Liang XY, Enke CA, Wahl AO, Zhou SM, and Zheng DD, “Can the Student Outperform the Master? A Plan Comparison Between Pinnacle Auto-Planning and Eclipse Knowledge-Based RapidPlan Following a Prostate-Bed Plan Competition,” Technology in Cancer Research & Treatment, vol. 18, pp. 8, Jun, 2019.
  • [211].Wang JQ, Chen Z, Li WW, Qian W, Wang XS, and Hu WG, “A new strategy for volumetric-modulated arc therapy planning using AutoPlanning based multicriteria optimization for nasopharyngeal carcinoma,” Radiation Oncology, vol. 13, pp. 10, May, 2018.
  • [212].Wang S, Zheng DD, Lin C, Lei Y, Verma V, Smith A, Ma RT, Enke CA, and Zhou SM, “Technical Assessment of an Automated Treatment Planning on Dose Escalation of Pancreas Stereotactic Body Radiotherapy,” Technology in Cancer Research & Treatment, vol. 18, pp. 10, Jun, 2019.
  • [213].Zhang QB, Ou LY, Peng YY, Yu H, Wang LJ, and Zhang SX, “Evaluation of automatic VMAT plans in locally advanced nasopharyngeal carcinoma,” Strahlentherapie Und Onkologie, pp. 11.
  • [214].Delaney AR, Dong L, Mascia A, Zou W, Zhang YB, Yin LS, Rosas S, Hrbacek J, Lomax AJ, Slotman BJ, Dahele M, and Verbakel W, “Automated Knowledge-Based Intensity-Modulated Proton Planning: An International Multicenter Benchmarking Study,” Cancers, vol. 10, no. 11, pp. 15, Nov, 2018.
  • [215].Monz M, Kufer KH, Bortfeld TR, and Thieke C, “Pareto navigation-algorithmic foundation of interactive multi-criteria IMRT planning,” Physics in Medicine and Biology, vol. 53, no. 4, pp. 985–998, Feb, 2008.
  • [216].Hussein M, Heijmen BJM, Verellen D, and Nisbet A, “Automation in intensity modulated radiotherapy treatment planning-a review of recent innovations,” Br J Radiol, vol. 91, no. 1092, pp. 20180270, Dec, 2018.
  • [217].Breedveld S, Storchi PRM, Voet PWJ, and Heijmen BJM, “iCycle: Integrated, multicriterial beam angle, and profile optimization for generation of coplanar and noncoplanar IMRT plans,” Medical Physics, vol. 39, no. 2, pp. 951–963, Feb, 2012.
  • [218].Buschmann M, Sharfo AWM, Penninkhof J, Seppenwoolde Y, Goldner G, Georg D, Breedveld S, and Heijmen BJM, “Automated volumetric modulated arc therapy planning for whole pelvic prostate radiotherapy,” Strahlentherapie Und Onkologie, vol. 194, no. 4, pp. 333–342, Apr, 2018.
  • [219].Chen HX, Craft DL, and Gierga DP, “Multicriteria optimization informed VMAT planning,” Medical Dosimetry, vol. 39, no. 1, pp. 64–73, Spring, 2014.
  • [220].Dias J, Rocha H, Ventura T, Ferreira B, and Lopes MD, "A Heuristic Based on Fuzzy Inference Systems for Multiobjective IMRT Treatment Planning," Machine Learning, Optimization, and Big Data, MOD 2017, Lecture Notes in Computer Science, Nicosia G, Pardalos P, Giuffrida G, and Umeton R, eds., pp. 255–267, 2018.
  • [221].Engberg L, Forsgren A, Eriksson K, and Hardemark B, “Explicit optimization of plan quality measures in intensity-modulated radiation therapy treatment planning,” Medical Physics, vol. 44, no. 6, pp. 2045–2053, Jun, 2017.
  • [222].Ghandour S, Matzinger O, and Pachoud M, “Volumetric-modulated arc therapy planning using multicriteria optimization for localized prostate cancer,” Journal of Applied Clinical Medical Physics, vol. 16, no. 3, pp. 258–269, 2015.
  • [223].Khan F, and Craft D, “Three-dimensional conformal planning with low-segment multicriteria intensity modulated radiation therapy optimization,” Practical Radiation Oncology, vol. 5, no. 2, pp. E103–E111, Mar-Apr, 2015.
  • [224].Kierkels RGJ, Visser R, Bijl HP, Langendijk JA, van ’t Veld AA, Steenbakkers R, and Korevaar EW, “Multicriteria optimization enables less experienced planners to efficiently produce high quality treatment plans in head and neck cancer radiotherapy,” Radiation Oncology, vol. 10, pp. 8, Apr, 2015.
  • [225].Mai YH, Kong FT, Yang YW, Zhou LH, Li YB, and Song T, “Voxel-based automatic multi-criteria optimization for intensity modulated radiation therapy,” Radiation Oncology, vol. 13, pp. 13, Dec, 2018.
  • [226].Miguel-Chumacero E, Currie G, Johnston A, and Currie S, “Effectiveness of Multi-Criteria Optimization-based Trade-Off exploration in combination with RapidPlan for head & neck radiotherapy planning,” Radiation Oncology, vol. 13, pp. 13, Nov, 2018.
  • [227].Muller BS, Shih HA, Efstathiou JA, Bortfeld T, and Craft D, “Multicriteria plan optimization in the hands of physicians: a pilot study in prostate cancer and brain tumors,” Radiation Oncology, vol. 12, pp. 11, Nov, 2017.
  • [228].Nguyen D, McBeth R, Barkousaraie AS, Bohara G, Shen CY, Jia X, and Jiang S, “Incorporating human and learned domain knowledge into training deep neural networks: A differentiable dose-volume histogram and adversarial inspired framework for generating Pareto optimal dose distributions in radiation therapy,” Medical Physics, vol. 47, no. 3, pp. 837–849, Mar, 2020.
  • [229].Wang MJ, Li S, Huang YL, Yue HZ, Li T, Wu H, Gao S, and Zhang YB, “An interactive plan and model evolution method for knowledge-based pelvic VMAT planning,” Journal of Applied Clinical Medical Physics, vol. 19, no. 5, pp. 491–498, Sep, 2018.
  • [230].Craft D, and Bortfeld T, “How many plans are needed in an IMRT multi-objective plan database?,” Physics in Medicine and Biology, vol. 53, no. 11, pp. 2785–2796, 2008.
  • [231].Buergy D, Sharfo AWM, Heijmen BJM, Voet PWJ, Breedveld S, Wenz F, Lohr F, and Stieler F, “Fully automated treatment planning of spinal metastases - A comparison to manual planning of Volumetric Modulated Arc Therapy for conventionally fractionated irradiation,” Radiation Oncology, vol. 12, pp. 7, Jan, 2017.
  • [232].Habraken SJM, Sharfo AWM, Buijsen J, Verbakel W, Haasbeek CJA, Ollers MC, Westerveld H, van Wieringen N, Reerink O, Seravalli E, Braam PM, Wendling M, Lacornerie T, Mirabel X, Weytjens R, Depuydt L, Tanadini-Lang S, Riesterer O, Haustermans K, Depuydt T, Dwarkasing RS, Willemssen F, Heijmen BJM, and Romero AM, “The TRENDY multi-center randomized trial on hepatocellular carcinoma - Trial QA including automated treatment planning and benchmark-case results,” Radiotherapy and Oncology, vol. 125, no. 3, pp. 507–513, Dec, 2017. [DOI] [PubMed] [Google Scholar]
  • [233].Sharfo AWM, Breedveld S, Voet PWJ, Heijkoop ST, Mens JWM, Hoogeman MS, and Heijmen BJM, “Validation of Fully Automated VMAT Plan Generation for Library-Based Plan-of-the-Day Cervical Cancer Radiotherapy,” Plos One, vol. 11, no. 12, pp. 13, Dec, 2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [234].Sharfo AWM, Stieler F, Kupfer O, Heijmen BJM, Dirkx MLP, Breedveld S, Wenz F, Lohr F, Boda-Heggemann J, and Buergy P, “Automated VMAT planning for postoperative adjuvant treatment of advanced gastric cancer,” Radiation Oncology, vol. 13, pp. 8, Apr, 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [235].van Haveren R, Heijmen BJM, and Breedveld S, “Automatic configuration of the reference point method for fully automated multi-objective treatment planning applied to oropharyngeal cancer,” Medical Physics, vol. 47, no. 4, pp. 1499–1508, Apr, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [236].Voet PWJ, Dirkx MLP, Breedveld S, and Heijmen BJM, “Automated generation of IMRT treatment plans for prostate cancer patients with metal hip prostheses: Comparison of different planning strategies,” Medical Physics, vol. 40, no. 7, pp. 7, Jul, 2013. [DOI] [PubMed] [Google Scholar]
  • [237].Bai PG, Weng X, Quan KR, Chen JH, Dai YT, Xu YJ, Lin FS, Zhong J, Wu TM, and Chen CB, “A knowledge-based intensity-modulated radiation therapy treatment planning technique for locally advanced nasopharyngeal carcinoma radiotherapy,” Radiation Oncology, vol. 15, no. 1, pp. 10, Aug, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [238].Chanyavanich V, Das SK, Lee WR, and Lo JY, “Knowledge-based IMRT treatment planning for prostate cancer,” Medical Physics, vol. 38, no. 5, pp. 2515–2522, 2011/May/01, 2011. [DOI] [PubMed] [Google Scholar]
  • [239].Haseai S, Arimura H, Asai K, Yoshitake T, and Shioyama Y, “Similar-cases-based planning approaches with beam angle optimizations using water equivalent path length for lung stereotactic body radiation therapy,” Radiological Physics and Technology, vol. 13, no. 2, pp. 119–127, Jun, 2020. [DOI] [PubMed] [Google Scholar]
  • [240].Liu ESF, Wu VWC, Harris B, Lehman M, Pryor D, and Chan LWC, “Vector-model-supported approach in prostate plan optimization,” Medical Dosimetry, vol. 42, no. 2, pp. 79–84, 2017. [DOI] [PubMed] [Google Scholar]
  • [241].McIntosh C, and Purdie TG, “Contextual Atlas Regression Forests: Multiple-Atlas-Based Automated Dose Prediction in Radiation Therapy,” IEEE Transactions on Medical Imaging, vol. 35, no. 4, pp. 1000–1012, 2016. [DOI] [PubMed] [Google Scholar]
  • [242].Petrovic S, Khussainova G, and Jagannathan R, “Knowledge-light adaptation approaches in case-based reasoning for radiotherapy treatment planning,” Artificial Intelligence in Medicine, vol. 68, pp. 17–28, 2016/March/01/, 2016. [DOI] [PubMed] [Google Scholar]
  • [243].Sarkar B, Munshi A, Ganesh T, Manikandan A, Anbazhagan SK, and Mohanti BK, “Standardization of volumetric modulated arc therapy-based frameless stereotactic technique using a multidimensional ensemble-aided knowledge-based planning,” Medical Physics, vol. 46, no. 5, pp. 1953–1962, May, 2019. [DOI] [PubMed] [Google Scholar]
  • [244].Schmidt M, Lo JY, Grzetic S, Lutzky C, Brizel DM, and Das SK, “Semiautomated head-and-neck IMRT planning using dose warping and scaling to robustly adapt plans in a knowledge database containing potentially suboptimal plans,” Medical Physics, vol. 42, no. 8, pp. 4428–4434, Aug, 2015. [DOI] [PubMed] [Google Scholar]
  • [245].Schreibmann E, Fox T, Curran W, Shu HK, and Crocker I, “Automated population-based planning for whole brain radiation therapy,” Journal of Applied Clinical Medical Physics, vol. 16, no. 5, pp. 76–86, 2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [246].Sheng Y, Li T, Zhang Y, Lee WR, Yin F-F, Ge Y, and Wu QJ, “Atlas-guided prostate intensity modulated radiation therapy (IMRT) planning,” Physics in Medicine and Biology, vol. 60, no. 18, pp. 7277–7291, 2015/September/08, 2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [247].Amaloo C, Hayes L, Manning M, Liu H, and Wiant D, “Can automated treatment plans gain traction in the clinic?,” Journal of Applied Clinical Medical Physics, vol. 20, no. 8, pp. 29–35, Aug, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [248].Babier A, Boutilier JJ, McNiven AL, and Chan TCY, “Knowledge-based automated planning for oropharyngeal cancer,” Medical Physics, vol. 45, no. 7, pp. 2875–2883, Jul, 2018. [DOI] [PubMed] [Google Scholar]
  • [249].Bai X, Shan GP, Chen M, and Wang BB, “Approach and assessment of automated stereotactic radiotherapy planning for early stage non-small-cell lung cancer,” Biomedical Engineering Online, vol. 18, no. 1, pp. 15, Oct, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [250].Berry SL, Ma RT, Boczkowski A, Jackson A, Zhang PP, and Hunt M, “Evaluating inter-campus plan consistency using a knowledge based planning model,” Radiotherapy and Oncology, vol. 120, no. 2, pp. 349–355, Aug, 2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [251].Bossart E, Duffy M, Simpson G, Abramowitz M, Pollack A, and Dogan N, “Assessment of specific versus combined purpose knowledge based models in prostate radiotherapy,” Journal of Applied Clinical Medical Physics, vol. 19, no. 6, pp. 209–216, Nov, 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [252].Boutilier JJ, Lee T, Craig T, Sharpe MB, and Chan TCY, “Models for predicting objective function weights in prostate cancer IMRT,” Medical Physics, vol. 42, no. 4, pp. 1586–1595, Apr, 2015. [DOI] [PubMed] [Google Scholar]
  • [253].Cagni E, Botti A, Micera R, Galeandro M, Sghedoni R, Orlandi M, Iotti C, Cozzi L, and Iori M, “Knowledge-based treatment planning: An inter-technique and inter-system feasibility study for prostate cancer,” Physica Medica-European Journal of Medical Physics, vol. 36, pp. 38–45, Apr, 2017. [DOI] [PubMed] [Google Scholar]
  • [254].Castriconi R, Fiorino C, Passoni P, Broggi S, Di Muzio NG, Cattaneo GM, and Calandrino R, “Knowledge-based automatic optimization of adaptive early-regression-guided VMAT for rectal cancer,” Physica Medica-European Journal of Medical Physics, vol. 70, pp. 58–64, Feb, 2020. [DOI] [PubMed] [Google Scholar]
  • [255].Chatterjee A, Serban M, Faria S, Souhami L, Cury F, and Seuntjens J, “Novel knowledge-based treatment planning model for hypofractionated radiotherapy of prostate cancer patients,” Physica Medica-European Journal of Medical Physics, vol. 69, pp. 36–43, Jan, 2020. [DOI] [PubMed] [Google Scholar]
  • [256].Delaney AR, Dahele M, Tol JP, Slotman BJ, and Verbakel W, “Knowledge-based planning for stereotactic radiotherapy of peripheral early-stage lung cancer,” Acta Oncologica, vol. 56, no. 3, pp. 490–498, 2017. [DOI] [PubMed] [Google Scholar]
  • [257].Faught AM, Olsen L, Schubert L, Rusthoven C, Castillo E, Castillo R, Zhang JJ, Guerrero T, Miften M, and Vinogradskiy Y, “Functional-guided radiotherapy using knowledge-based planning,” Radiotherapy and Oncology, vol. 129, no. 3, pp. 494–498, Dec, 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [258].Fogliata A, Cozzi L, Reggiori G, Stravato A, Lobefalo F, Franzese C, Franceschini D, Tomatis S, and Scorsetti M, “RapidPlan knowledge based planning: iterative learning process and model ability to steer planning strategies,” Radiation Oncology, vol. 14, no. 1, pp. 12, Oct, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [259].Fogliata A, Nicolini G, Bourgier C, Clivio A, De Rose F, Fenoglietto P, Lobefalo F, Mancosu P, Tomatis S, Vanetti E, Scorsetti M, and Cozzi L, “Performance of a Knowledge-Based Model for Optimization of Volumetric Modulated Arc Therapy Plans for Single and Bilateral Breast Irradiation,” PLoS One, vol. 10, no. 12, pp. 12, Dec, 2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [260].Fogliata A, Nicolini G, Clivio A, Vanetti E, Laksar S, Tozzi A, Scorsetti M, and Cozzi L, “A broad scope knowledge based model for optimization of VMAT in esophageal cancer: validation and assessment of plan quality among different treatment centers,” Radiation Oncology, vol. 10, pp. 11, Oct, 2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [261].Ge YR, and Wu QJ, “Knowledge-based planning for intensity-modulated radiation therapy: A review of data-driven approaches,” Medical Physics, vol. 46, no. 6, pp. 2760–2775, Jun, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [262].Inoue E, Doi H, Monzen H, Tamura M, Inada M, Ishikawa K, Nakamatsu K, and Nishimura Y, “Dose-volume Histogram Analysis of Knowledge-based Volumetric-modulated Arc Therapy Planning in Postoperative Breast Cancer Irradiation,” In Vivo, vol. 34, no. 3, pp. 1095–1101, May-Jun, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [263].Kishi N, Nakamura M, Hirashima H, Mukumoto N, Takehana K, Uto M, Matsuo Y, and Mizowaki T, “Validation of the clinical applicability of knowledge-based planning models in single-isocenter volumetric-modulated arc therapy for multiple brain metastases,” Journal of Applied Clinical Medical Physics, pp. 10. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [264].Lin YH, Hong LX, Hunt MA, and Berry SL, “Use of a constrained hierarchical optimization dataset enhances knowledge-based planning as a quality assurance tool for prostate bed irradiation,” Medical Physics, vol. 45, no. 10, pp. 4364–4369, Oct, 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [265].Ling CF, Han X, Zhai P, Xu H, Chen JY, Wang JZ, and Hu WG, “A hybrid automated treatment planning solution for esophageal cancer,” Radiation Oncology, vol. 14, no. 1, pp. 7, Dec, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [266].Nwankwo O, Mekdash H, Sihono DSK, Wenz F, and Glatting G, “Knowledge-based radiation therapy (KBRT) treatment planning versus planning by experts: validation of a KBRT algorithm for prostate cancer treatment planning,” Radiation Oncology, vol. 10, pp. 5, May, 2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [267].Schubert C, Waletzko O, Weiss C, Voelzke D, Toperim S, Roeser A, Puccini S, Piroth M, Mehrens C, Kueter JD, Hierholz K, Gerull K, Fogliata A, Block A, and Cozzi L, “Intercenter validation of a knowledge based model for automated planning of volumetric modulated arc therapy for prostate cancer. The experience of the German RapidPlan Consortium,” PLoS One, vol. 12, no. 5, pp. 13, May, 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [268].Sheng Y, Ge YR, Yuan LL, Li TR, Yin FF, and Wu QJ, “Outlier identification in radiation therapy knowledge-based planning: A study of pelvic cases,” Medical Physics, vol. 44, no. 11, pp. 5617–5626, Nov, 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [269].Shepherd M, Bromley R, Stevens M, Morgia M, Kneebone A, Hruby G, Atyeo J, and Eade T, “Developing knowledge-based planning for gynaecological and rectal cancers: a clinical validation of Rapid-Plan((TM)),” Journal of Medical Radiation Sciences, vol. 67, no. 3, pp. 217–224, Sep, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [270].Shiraishi S, and Moore KL, “Knowledge-based prediction of three-dimensional dose distributions for external beam radiotherapy,” Medical Physics, vol. 43, no. 1, pp. 378–387, Jan, 2016. [DOI] [PubMed] [Google Scholar]
  • [271].Shiraishi S, Tan J, Olsen LA, and Moore KL, “Knowledge-based prediction of plan quality metrics in intracranial stereotactic radiosurgery,” Medical Physics, vol. 42, no. 2, pp. 908–917, Feb, 2015. [DOI] [PubMed] [Google Scholar]
  • [272].Tol JP, Dahele M, Delaney AR, Slotman BJ, and Verbakel W, “Can knowledge-based DVH predictions be used for automated, individualized quality assurance of radiotherapy treatment plans?,” Radiation Oncology, vol. 10, pp. 14, Nov, 2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [273].Ueda Y, Miyazaki M, Sumida I, Ohira S, Tamura M, Monzen H, Tsuru H, Inui S, Isono M, Ogawa K, and Teshima T, “Knowledge-based planning for oesophageal cancers using a model trained with plans from a different treatment planning system,” Acta Oncologica, vol. 59, no. 3, pp. 274–283, Mar, 2020. [DOI] [PubMed] [Google Scholar]
  • [274].Uehara T, Monzen H, Tamura M, Ishikawa K, Doi H, and Nishimura Y, “Dose-volume histogram analysis and clinical evaluation of knowledge-based plans with manual objective constraints for pharyngeal cancer,” Journal of Radiation Research, vol. 61, no. 3, pp. 499–505, May, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [275].Wang JQ, Hu WG, Yang ZZ, Chen XH, Wu ZQ, Yu XL, Guo XM, Lu SQ, Li KX, and Yu GY, “Is it possible for knowledge-based planning to improve intensity modulated radiation therapy plan quality for planners with different planning experiences in left-sided breast cancer patients?,” Radiation Oncology, vol. 12, pp. 8, May, 2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [276].Wu AQ, Li YB, Qi MK, Jia QY, Guo FT, Lu XY, Zhou LH, and Song T, “Robustness comparative study of dose-volume-histogram prediction models for knowledge-based radiotherapy treatment planning,” Journal of Radiation Research and Applied Sciences, vol. 13, no. 1, pp. 390–397, Jan, 2020. [Google Scholar]
  • [277].Wu H, Jiang F, Yue HZ, Li S, and Zhang YB, “A dosimetric evaluation of knowledge-based VMAT planning with simultaneous integrated boosting for rectal cancer patients,” Journal of Applied Clinical Medical Physics, vol. 17, no. 6, pp. 78–85, 2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [278].Wu H, Jiang F, Yue HZ, Zhang H, Wang K, and Zhang YB, “Applying a RapidPlan model trained on a technique and orientation to another: a feasibility and dosimetric evaluation,” Radiation Oncology, vol. 11, pp. 7, Aug, 2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [279].Younge KC, Marsh RB, Owen D, Geng HZ, Xiao Y, Spratt DE, Foy J, Suresh K, Wu QJ, Yin FF, Ryu S, and Matuszak MM, “Improving Quality and Consistency in NRG Oncology Radiation Therapy Oncology Group 0631 for Spine Radiosurgery via Knowledge-Based Planning,” International Journal of Radiation Oncology Biology Physics, vol. 100, no. 4, pp. 1067–1074, Mar, 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [280].Yu G, Li Y, Feng ZW, Tao C, Yu ZY, Li BS, and Li DW, “Knowledge-based IMRT planning for individual liver cancer patients using a novel specific model,” Radiation Oncology, vol. 13, pp. 8, Mar, 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [281].Zhang YJ, Li TT, Xiao H, Ji WX, Guo M, Zeng ZC, and Zhang JY, “A knowledge-based approach to automated planning for hepatocellular carcinoma,” Journal of Applied Clinical Medical Physics, vol. 19, no. 1, pp. 50–59, Jan, 2018. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [282].Ziemer BP, Sanghvi P, Hattangadi-Gluth J, and Moore KL, “Heuristic knowledge-based planning for single-isocenter stereotactic radiosurgery to multiple brain metastases,” Medical Physics, vol. 44, no. 10, pp. 5001–5009, Oct, 2017. [DOI] [PubMed] [Google Scholar]
  • [283].Fogliata A, Belosi F, Clivio A, Navarria P, Nicolini G, Scorsetti M, Vanetti E, and Cozzi L, “On the pre-clinical validation of a commercial model-based optimisation engine: Application to volumetric modulated arc therapy for patients with lung or prostate cancer,” Radiotherapy and Oncology, vol. 113, no. 3, pp. 385–391, Dec, 2014. [DOI] [PubMed] [Google Scholar]
  • [284].Boutilier JJ, Craig T, Sharpe MB, and Chan TCY, “Sample size requirements for knowledge-based treatment planning,” Medical Physics, vol. 43, no. 3, pp. 1212–1221, Mar, 2016. [DOI] [PubMed] [Google Scholar]
  • [285].Snyder KC, Kim JK, Reding A, Fraser C, Gordon J, Ajlouni M, Movsas B, and Chetty IJ, “Development and evaluation of a clinical model for lung cancer patients using stereotactic body radiotherapy (SBRT) within a knowledge-based algorithm for treatment planning,” Journal of Applied Clinical Medical Physics, vol. 17, no. 6, pp. 263–275, 2016. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [286].Nguyen D, Long T, Jia X, Lu W, Gu X, Iqbal Z, and Jiang S, “A feasibility study for predicting optimal radiation therapy dose distributions of prostate cancer patients from patient anatomy using deep learning,” Scientific Reports, vol. 9, no. 1, pp. 1076, Jan, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [287].Chen X, Men K, Li Y, Yi J, and Dai J, “A feasibility study on an automated method to generate patient-specific dose distributions for radiotherapy using deep learning,” Medical physics, vol. 46, no. 1, pp. 56–64, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [288].Fan J, Wang J, Chen Z, Hu C, Zhang Z, and Hu W, “Automatic treatment planning based on three-dimensional dose distribution predicted from deep learning technique,” Medical Physics, vol. 46, no. 1, pp. 370–381, 2019. [DOI] [PubMed] [Google Scholar]
  • [289].Nguyen D, Jia X, Sher D, Lin M-H, Iqbal Z, Liu H, and Jiang S, “3D radiotherapy dose prediction on head and neck cancer patients with a hierarchically densely connected U-net deep learning architecture,” Physics in Medicine & Biology, vol. 64, no. 6, pp. 065020, Mar, 2019. [DOI] [PubMed] [Google Scholar]
  • [290].Liu Z, Fan J, Li M, Yan H, Hu Z, Huang P, Tian Y, Miao J, and Dai J, “A deep learning method for prediction of three-dimensional dose distribution of helical tomotherapy,” Medical Physics, vol. 46, no. 5, pp. 1972–1983, 2019. [DOI] [PubMed] [Google Scholar]
  • [291].Kearney V, Chan JW, Haaf S, Descovich M, and Solberg TD, “DoseNet: a volumetric dose prediction algorithm using 3D fully-convolutional neural networks,” Physics in Medicine & Biology, vol. 63, no. 23, pp. 235022, Dec, 2018. [DOI] [PubMed] [Google Scholar]
  • [292].Barragán-Montero AM, Nguyen D, Lu W, Lin M-H, Norouzi-Kandalan R, Geets X, Sterpin E, and Jiang S, “Three-dimensional dose prediction for lung IMRT patients with deep neural networks: robust learning from heterogeneous beam configurations,” Medical Physics, vol. 46, no. 8, pp. 3679–3691, 2019. [DOI] [PubMed] [Google Scholar]
  • [293].Nguyen D, Barkousaraie AS, Shen C, Jia X, and Jiang S, “Generating Pareto optimal dose distributions for radiation therapy treatment planning,” arXiv preprint arXiv:1906.04778, 2019. [Google Scholar]
  • [294].Bohara G, Sadeghnejad Barkousaraie A, Jiang S, and Nguyen D, “Using deep learning to predict beam-tunable Pareto optimal dose distribution for intensity-modulated radiation therapy,” Medical Physics, vol. 47, no. 9, pp. 3898–3912, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [295].Lee H, Kim H, Kwak J, Kim YS, Lee SW, Cho S, and Cho B, “Fluence-map generation for prostate intensity-modulated radiotherapy planning using a deep-neural-network,” Scientific Reports, vol. 9, no. 1, pp. 15671, Oct, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [296].Li X, Zhang J, Sheng Y, Chang Y, Yin F-F, Ge Y, Wu QJ, and Wang C, “Automatic IMRT planning via static field fluence prediction (AIP-SFFP): a deep learning algorithm for real-time prostate treatment planning,” Physics in Medicine & Biology, vol. 65, no. 17, pp. 175014, Sep, 2020. [DOI] [PubMed] [Google Scholar]
  • [297].Wang W, Sheng Y, Wang C, Zhang J, Li X, Palta M, Czito B, Willett CG, Wu Q, and Ge Y, “Fluence Map Prediction Using Deep Learning Models–Direct Plan Generation for Pancreas Stereotactic Body Radiation Therapy,” Frontiers in Artificial Intelligence, vol. 3, pp. 68, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [298].Shen C, Chen L, Gonzalez Y, Nguyen D, Jiang S, and Jia X, “Hierarchical Deep Reinforcement Learning for Intelligent Automatic Radiotherapy Treatment Planning,” Annual Meeting of American Association of Physicists in Medicine (AAPM) 2020, 2020. [Google Scholar]
  • [299].Shen C, Liyuan C, Gonzalez Y, and Jia X, “Improving Efficiency of Training a Virtual Treatment Planner Network via Knowledge-guided Deep Reinforcement Learning for Intelligent Automatic Treatment Planning of Radiotherapy,” arXiv:2007.12591, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [300].Shen C, Gonzalez Y, Klages P, Qin N, Jung H, Chen L, Nguyen D, Jiang SB, and Jia X, “Intelligent inverse treatment planning via deep reinforcement learning, a proof-of-principle study in high dose-rate brachytherapy for cervical cancer,” Physics in Medicine & Biology, vol. 64, no. 11, pp. 115013, May, 2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [301].Hrinivich WT, and Lee J, “Artificial intelligence-based radiotherapy machine parameter optimization using reinforcement learning,” Medical Physics, vol. n/a, no. n/a. [DOI] [PubMed] [Google Scholar]
  • [302].Zhang J, Wang C, Sheng Y, Palta M, Czito B, Willett C, Zhang J, Jensen PJ, Yin F-F, and Wu Q, “An interpretable planning bot for pancreas stereotactic body radiation therapy,” arXiv preprint arXiv:2009.07997, 2020. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • [303].Vandewinckele L, Claessens M, Dinkla A, Brouwer C, Crijns W, Verellen D, and van Elmpt W, “Overview of artificial intelligence-based applications in radiotherapy: Recommendations for implementation and quality assurance,” Radiotherapy and Oncology. [DOI] [PubMed] [Google Scholar]
  • [304].Yang Q, Liu Y, Chen T, and Tong Y, “Federated machine learning: Concept and applications,” ACM Transactions on Intelligent Systems and Technology (TIST), vol. 10, no. 2, pp. 1–19, 2019. [Google Scholar]
  • [305].Bonawitz K, Eichner H, Grieskamp W, Huba D, Ingerman A, Ivanov V, Kiddon C, Konečný J, Mazzocchi S, and McMahan HB, “Towards federated learning at scale: System design,” arXiv preprint arXiv:1902.01046, 2019. [Google Scholar]
  • [306].Deist TM, Dankers FJWM, Ojha P, Scott Marshall M, Janssen T, Faivre-Finn C, Masciocchi C, Valentini V, Wang J, Chen J, Zhang Z, Spezi E, Button M, Jan Nuyttens J, Vernhout R, van Soest J, Jochems A, Monshouwer R, Bussink J, Price G, Lambin P, and Dekker A, “Distributed learning on 20 000+ lung cancer patients - The Personal Health Train,” Radiotherapy and Oncology, vol. 144, pp. 189–200, 2020. [DOI] [PubMed] [Google Scholar]
  • [307].Zhang Q, Nian Wu Y, and Zhu S-C, “Interpretable convolutional neural networks,” pp. 8827–8836. [Google Scholar]
  • [308].Doshi-Velez F, and Kim B, “Towards a rigorous science of interpretable machine learning,” arXiv preprint arXiv:1702.08608, 2017. [Google Scholar]