Radiology: Artificial Intelligence. 2022 Aug 3;4(5):e210214. doi: 10.1148/ryai.210214

A Proof-of-Concept Study of Artificial Intelligence–assisted Contour Editing

Ti Bai 1, Anjali Balagopal 1, Michael Dohopolski 1, Howard E Morgan 1, Rafe McBeth 1, Jun Tan 1, Mu-Han Lin 1, David J Sher 1, Dan Nguyen 1, Steve Jiang 1
PMCID: PMC9530760  PMID: 36204538

Abstract

Purpose

To present a concept called artificial intelligence–assisted contour editing (AIACE) and demonstrate its feasibility.

Materials and Methods

The conceptual workflow of AIACE is as follows: Given an initial contour that requires clinician editing, the clinician indicates where major edits are needed, and a trained deep learning model uses this input to update the contour. This process repeats until a clinically acceptable contour is achieved. In this retrospective, proof-of-concept study, the authors demonstrated the concept on two-dimensional (2D) axial CT images from three head-and-neck cancer datasets by simulating the interaction with the AIACE model to mimic the clinical environment. The input at each iteration was one mouse click on the desired location of the contour segment. Model performance was quantified with the Dice similarity coefficient (DSC) and 95th percentile of the Hausdorff distance (HD95) on three datasets with sample sizes of 10, 28, and 20 patients.

Results

The average DSCs and HD95 values of the automatically generated initial contours were 0.82 and 4.3 mm, 0.73 and 5.6 mm, and 0.67 and 11.4 mm for the three datasets, which were improved to 0.91 and 2.1 mm, 0.86 and 2.5 mm, and 0.86 and 3.3 mm, respectively, with three mouse clicks. Each deep learning–based contour update required about 20 msec.

Conclusion

The authors proposed the AIACE concept, which uses deep learning models to assist clinicians in editing contours efficiently and effectively, and demonstrated its feasibility by using 2D axial CT images from three head-and-neck cancer datasets.

Keywords: Segmentation, Convolutional Neural Network (CNN), CT, Deep Learning Algorithms

Supplemental material is available for this article.

© RSNA, 2022



Summary

The proposed artificial intelligence (AI)–assisted contour editing concept is feasible and allows clinicians and AI models to collaboratively achieve clinically acceptable results in a more efficient and controllable manner.

Key Points

  • Compared with the false click-based contour editing model, our proposed contour click-based artificial intelligence–assisted contour editing model was more efficient, more controllable, and more analogous to clinicians’ contouring habits on two-dimensional axial CT images from three head-and-neck cancer datasets.

  • Within three mouse clicks, the average Dice similarity coefficient and 95th percentile of Hausdorff distance of the automatically generated initial contours improved from 0.82 and 4.3 mm, respectively, to 0.91 and 2.1 mm (validation dataset), from 0.73 and 5.6 mm to 0.86 and 2.5 mm (DeepMind dataset), and from 0.67 and 11.4 mm to 0.86 and 3.3 mm (University of Texas Southwestern dataset).

  • The model could interact with the clinician in real time, as each contour update required about 20 msec.

Introduction

The success of radiation therapy relies on treatment planning to deliver precise radiation to target volumes while sparing organs at risk (OARs) from unnecessary radiation. Automated contouring can reduce time and labor, especially for online adaptive radiation therapy (ART). Efforts to meet this need include graph cut algorithms (1), atlas-based segmentation algorithms (2), registration-driven automatic segmentation algorithms (3), and deep learning–based algorithms (4–7). Given the imperfections of current algorithms, clinicians must inspect and manually edit automatically generated contours, which can be time-intensive (8). Modern deep learning–based interactive segmentation methods have been proposed to facilitate this process by providing editing hints, such as a click on the false-positive or false-negative regions (9). However, this false click-based method is less controllable and less analogous to the contouring habits of clinicians who prefer to directly draw organ boundaries. We propose using artificial intelligence (AI) to assist clinicians in editing contours based on contour clicks, which we have termed AI-assisted contour editing (AIACE). We present a proof-of-concept study to demonstrate its feasibility with use of CT images from three head-and-neck cancer datasets.

Materials and Methods

Methods

Given an input image, an initial contour is generated by an automatic segmentation model and then reviewed. If the contour is clinically acceptable, the contouring process is finished. Otherwise, the clinician edits the contour by providing a hint that indicates where the desired contour should lie, guiding the model to update the contour. The process repeats until an acceptable contour is achieved. The goal of AIACE is to minimize both the clinician input required at each iteration and the number of iterations. At each iteration, the clinician’s hint is a single click on the desired location of the contour segment. We assume that the clinician will preferentially edit the contour segment with the largest errors, since this requires fewer clicks.
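A minimal sketch of this iterative workflow is shown below, assuming a generic Python interface in which the trained editing model, the clinician’s click, the click-image rendering, and the acceptance decision are all supplied as callables; the function and parameter names are illustrative and not taken from the study’s code.

```python
import numpy as np

def aiace_edit_loop(image, initial_mask, edit_model, get_click,
                    build_click_image, is_acceptable, max_clicks=20):
    """Iteratively refine a contour with clinician clicks (conceptual sketch).

    edit_model(x) -> updated binary mask, where x is a 3-channel array.
    get_click(mask) -> (row, col) placed on the desired contour segment.
    build_click_image(shape, clicks) -> 2D image encoding all clicks so far.
    is_acceptable(mask) -> True once the clinician accepts the contour.
    """
    mask, clicks = initial_mask, []
    while len(clicks) < max_clicks and not is_acceptable(mask):
        clicks.append(get_click(mask))                      # clinician indicates the edit
        click_img = build_click_image(image.shape, clicks)  # joint click image
        x = np.stack([image, mask, click_img], axis=0)      # three-channel model input
        mask = edit_model(x)                                # model updates the contour
    return mask
```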

As shown in Figure 1, the click is converted into a two-dimensional (2D) image, termed the “click image,” by placing a 2D Gaussian point with a radius of 10 pixels around the clicked boundary point. The input of AIACE consists of three channels: the original image, the current segmentation mask, and the click image; the output is the updated contour. The clinician then reviews the updated contour, and if it is acceptable, the editing process is finished. Otherwise, a second click is made at the position with the current largest error. The first and second clicks are then converted into a single joint 2D click image by placing two 2D Gaussian points, each with a radius of 10 pixels, around the two clicked boundary points. The updated click image is fed into the AIACE model to further revise the contour. This process is repeated until the result is satisfactory.
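As a concrete illustration, the click image can be rendered as a Gaussian point of roughly 10-pixel radius around each clicked boundary point; the exact kernel width used in the study is an assumption in the sketch below.

```python
import numpy as np

def build_click_image(shape, clicks, radius=10):
    """Render clicked boundary points as 2D Gaussian points in one joint click image.

    clicks: list of (row, col) positions.
    radius: approximates the 10-pixel Gaussian radius described above
            (the exact kernel width is an assumption).
    """
    rows, cols = np.mgrid[0:shape[0], 0:shape[1]]
    click_img = np.zeros(shape, dtype=np.float32)
    for r, c in clicks:
        d2 = (rows - r) ** 2 + (cols - c) ** 2
        click_img = np.maximum(click_img, np.exp(-d2 / (2.0 * radius ** 2)))
    return click_img
```

The original image, the current segmentation mask, and this click image are then stacked to form the three-channel model input described above.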

Figure 1:

The U-Net–based architecture of our artificial intelligence–assisted contour editing model. The input includes three channels: the original image, the current segmentation mask, and the click image. The output is the updated mask. A zoomed-in contour updating example with two clicks is shown in the top (green, ground truth; red, initial contour; yellow, updated contours after the first and second clicks).


Datasets

Training and validation datasets.— Open-access head-and-neck CT scans of patients with nasopharyngeal cancer from the Automatic Structure Segmentation for Radiotherapy Planning Challenge were contoured by one oncologist and verified by another. We randomly split the dataset into 40 and 10 patients for training and validation, respectively. Twenty-one annotated OARs were used for performance evaluation. All original CT scans contained 100–144 images of 512 × 512 pixels, with voxel sizes of 0.98–1.18 × 0.98–1.18 × 3.00 mm, totaling 4941 training images and 1210 validation images.

Testing datasets.— Demonstrating a model’s generalizability requires data with previously unseen demographics and distributions, so we used two distinct test datasets. The DeepMind dataset consisted of 4427 CT images from 28 head-and-neck scans, with contours collaboratively determined by three clinicians, two of whom had at least 4 years of experience and one of whom had at least 5 years of postcertification experience. Each scan had 21 annotated OARs and consisted of approximately 119–184 sections of 512 × 512 pixels, with voxel sizes of 0.94–1.25 × 0.94–1.25 × 2.50 mm.

The second test dataset was retrospectively collected at University of Texas Southwestern (UTSW), approved by the institutional review board, and de-identified with code created at our institution. Informed consent was waived because the data were collected retrospectively with minimal risk. This dataset contained 20 patient scans, with OAR contours exported from the clinical system and cleaned with software tools developed at our institution, followed by manual confirmation. Each scan had at most 48 annotated OARs, which varied depending on the clinical task. All scans in this dataset contained approximately 124–203 sections of 512 × 512 pixels, with voxel sizes of 1.17–1.37 × 1.17–1.37 × 3.00 mm, for a total of 2980 2D CT sections.

For computational efficiency, we cropped a subvolume with in-plane dimensions of 256 × 256 pixels from each volumetric CT scan in the training, validation, and test datasets so that the clinically meaningful head-and-neck region was covered.
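For illustration, such a crop could be taken around an assumed region-of-interest center; the study’s exact cropping rule is not described here, so the center coordinates below are placeholders.

```python
def crop_axial(ct_slice, center_row=256, center_col=256, size=256):
    """Crop a size x size axial subregion around a placeholder center position."""
    half = size // 2
    r0, c0 = center_row - half, center_col - half
    return ct_slice[r0:r0 + size, c0:c0 + size]
```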

Model Training and Testing

We used the basic U-Net (5) architecture to prove our AIACE concept. Because interactive segmentation datasets with clinician feedback signals are difficult to collect and none were available, we constructed an interactive segmentation dataset on the fly during the training phase by simulating clinician clicks. The clicking positions were randomly sampled from a predefined probability distribution, based on the observation that clinicians are more likely to first click on contour segments with large errors. Given the randomly sampled point, the current segmentation map, and the input CT image, we constructed three-channel input data to feed into AIACE for training. The supervision signal was the ground truth segmentation mask. To examine reproducibility, we independently trained five models with the same parameter settings but different random seeds.
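One way to simulate such clicks, sketched below, is to weight every ground truth boundary point by its distance to the current predicted contour and then sample a click from the resulting distribution; the softmax weighting and temperature are illustrative assumptions, and the study’s actual sampling distribution is defined in its supplement.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def sample_training_click(gt_mask, pred_mask, temperature=1.0, rng=None):
    """Sample a simulated clinician click on the ground truth contour,
    favoring contour segments with larger errors (illustrative sketch)."""
    rng = rng or np.random.default_rng()
    gt, pred = gt_mask.astype(bool), pred_mask.astype(bool)
    gt_boundary = gt & ~binary_erosion(gt)        # ground truth contour pixels
    pred_boundary = pred & ~binary_erosion(pred)  # current contour pixels
    # Distance from every pixel to the nearest current-contour pixel.
    dist_to_pred = distance_transform_edt(~pred_boundary)
    rows, cols = np.nonzero(gt_boundary)
    errors = dist_to_pred[rows, cols]             # per-point error magnitude
    weights = np.exp((errors - errors.max()) / temperature)
    idx = rng.choice(len(rows), p=weights / weights.sum())
    return rows[idx], cols[idx]                   # (row, col) click position
```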

The automatic segmentation model was trained on the same dataset and shared the same network architecture and training settings as AIACE, except that the numbers of input and output channels were set to 1 and 21, corresponding to the single-channel gray-scale CT image and the number of investigated OARs, respectively.

In the testing phase, we simulated the clinician’s click at each iteration by choosing the boundary point corresponding to the largest error and quantified performance with the Dice similarity coefficient (DSC) and the 95th percentile of the Hausdorff distance (HD95).
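A sketch of how these two metrics can be computed for a pair of 2D masks is given below; the pixel spacing value is a placeholder. At test time, the simulated click is simply the ground truth boundary point farthest from the current contour, that is, the argmax of the same per-point distances used for weighted sampling during training.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice_coefficient(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def hd95(pred, gt, spacing=(1.0, 1.0)):
    """95th percentile Hausdorff distance (mm) between two mask boundaries."""
    def boundary(mask):
        mask = mask.astype(bool)
        return mask & ~binary_erosion(mask)

    pb, gb = boundary(pred), boundary(gt)
    # Symmetric boundary-to-boundary distances, scaled by the pixel spacing (mm).
    d_pred_to_gt = distance_transform_edt(~gb, sampling=spacing)[pb]
    d_gt_to_pred = distance_transform_edt(~pb, sampling=spacing)[gb]
    return np.percentile(np.concatenate([d_pred_to_gt, d_gt_to_pred]), 95)
```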

We computed average values of the two aforementioned metrics among all investigated OARs separately in the validation, DeepMind, and UTSW test datasets and reported the model inference efficiency.

More details regarding the network configurations, training parameter settings, and sampling distribution definition can be found in Appendix E1 (supplement). More details about the OAR selection in all three investigated datasets can be found in Table E1 (supplement).

Comparison with the False Click-based Method

For comparison, we adapted the false click-based method reported by Sakinis et al (9) by training an interactive contour editing model that used the same network architecture as AIACE. We likewise trained five independent false click-based models. These models were used for Student t test–based statistical significance analysis with an unequal variance assumption (Python 3.7.6, SciPy 1.7.0) in the comparison study. P < .05 indicated a statistically significant difference.
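This comparison can be run with SciPy’s two-sample t test using the unequal variance (Welch) option, as sketched below; the numbers are placeholders, not results from the study.

```python
from scipy import stats

# Per-training mean DSCs from the five AIACE models and the five false
# click-based models (placeholder values for illustration only).
aiace_dsc = [0.91, 0.90, 0.92, 0.91, 0.90]
false_click_dsc = [0.88, 0.87, 0.89, 0.88, 0.87]

t_stat, p_value = stats.ttest_ind(aiace_dsc, false_click_dsc, equal_var=False)
print(f"t = {t_stat:.2f}, P = {p_value:.4f}")  # P < .05 treated as significant
```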

Failure Analysis

We defined failure as the model being unable to achieve a contour of acceptable accuracy within 20 clicks, using a quality threshold of HD95 less than 2.5 mm. We reported the fraction of acceptable contours for each of the three datasets and the mean number of clicks required to reach the quality threshold.
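A sketch of this failure criterion is shown below, assuming hypothetical helpers that apply one simulated click plus a model update (edit_once) and compute HD95 (hd95).

```python
def failure_analysis(cases, edit_once, hd95, threshold_mm=2.5, max_clicks=20):
    """Count contours that never reach HD95 < threshold_mm within max_clicks clicks.

    cases yields (image, initial_mask, gt_mask) triples; edit_once and hd95
    are placeholders for the editing step and the metric.
    """
    n_failures, clicks_needed = 0, []
    for image, mask, gt in cases:
        for n_clicks in range(1, max_clicks + 1):
            mask = edit_once(image, mask, gt)     # one simulated click + model update
            if hd95(mask, gt) < threshold_mm:     # quality threshold reached
                clicks_needed.append(n_clicks)
                break
        else:
            n_failures += 1                       # still unacceptable after max_clicks
    return n_failures, clicks_needed
```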

Data Access

The training and validation datasets are accessible at https://structseg2019.grand-challenge.org/Dataset/. The DeepMind dataset is accessible at https://github.com/deepmind/tcia-ct-scan-dataset. The UTSW dataset is currently not publicly available but is available from the corresponding author upon reasonable request and with institutional review board approval.

Results

Visualization Comparisons

Figure 2 shows the comparison based on an image of the left parotid gland in the validation dataset. Figure 2A clearly reveals a large gap between the ground truth contour and the initial contour. After one click (Figure 2B), the section of the contour around the clicked point improved substantially, with the DSC and HD95 improving from 0.87 and 5.9 mm, respectively, to 0.93 and 4.7 mm. After the second click (Figure 2C), the updated contour closely approximated the ground truth contour, with a DSC of 0.97 and an HD95 of 1.2 mm. While the false click-based method demonstrated progressive improvements with increasing clicks (Figure 2D, 2E), the updated contour lay between the ground truth contour and the false clicks. In contrast, AIACE fully realized the clinician’s intent by generating a contour that closely matched the clicked segment.

Figure 2:

(A–E) Example two-click segmentation of an image of the left parotid gland from the validation dataset. A is the initial contour, and B, C and D, E show the contour editing results after one and two clicks with respect to our contour click-based artificial intelligence–assisted contour editing model and the false click-based model, respectively. For outlines, green represents the ground truth contour; red, the initial contour; and yellow, the updated contour at each iteration. For clicks, red indicates the current click, and cyan, past clicks. In the false click-based model, we used dots to represent false-negative clicks. Display window: [−160, 240] HU. DSC = Dice similarity coefficient, HD95 = 95th percentile of Hausdorff distance.


Figure 3 shows an example image of the left optic nerve from the DeepMind test dataset. Figure 3C shows that the revised contour closely approximated the ground truth contour after two clicks with AIACE, with the DSC and HD95 improving from 0.43 and 11.0 mm, respectively, to 0.88 and 1.0 mm. With the false click-based method, two false-positive clicks only partially removed the false-positive regions, while the large false-negative region remained. Figures 3D and 3E demonstrate the difficulty of predicting where the updated contour will lie with the false click-based method, whereas Figures 3B and 3C show that the updated contours resulting from contour clicks are highly predictable.

Figure 3:

(A–E) Example two-click segmentation of an image of the left optic nerve from the DeepMind test dataset. A is the initial contour, and B, C and D, E show the contour editing results after one and two clicks with respect to our contour click-based artificial intelligence–assisted contour editing model and the false click-based model, respectively. For outlines, green represents the ground truth contour; red, the initial contour; and yellow, the updated contour at each iteration. For clicks, red indicates the current click, and cyan, past clicks. In the false click-based model, we used dots to represent false-negative clicks and crosses to represent false-positive clicks. Display window: [−160, 240] HU. DSC = Dice similarity coefficient, HD95 = 95th percentile of Hausdorff distance.


Figure 4 shows an example image of the brainstem from the UTSW test dataset. The initial segmentation result in Figure 4A shows segmentation errors, especially in the anterior direction. With AIACE, each contour click precisely corrected one segment around the clicked point, as shown in Figure 4B–4D. The false click-based method also corrected the under- and oversegmented regions by using both false-negative and false-positive clicks, but its performance was still worse than that of AIACE.

Figure 4:

(A–G) Example three-click segmentation of an image of the brainstem from the University of Texas Southwestern test dataset. A is the initial contour, and B–D and E–G show the contour editing results after one, two, and three clicks with our contour click-based artificial intelligence–assisted contour editing model and the false click-based model, respectively. For outlines, green represents the ground truth contour; red, the initial contour; and yellow, the updated contour at each iteration. For clicks, red indicates the current click, and cyan, past clicks. In the false click-based model, we used dots to represent false-negative clicks and crosses to represent false-positive clicks. Display window: [0, 80] HU. DSC = Dice similarity coefficient, HD95 = 95th percentile of Hausdorff distance.


Quantitative Comparisons and Statistical Analysis

Tables 1 and 2 further demonstrate the average performance improvement in DSC and HD95, respectively, along with the associated SDs and P values, based on all organ cases in the three datasets. After three clicks, AIACE increased DSC by more than 10% on all three datasets, while HD95 was reduced by nearly half. Moreover, the lower the initial performance, the greater the improvement achieved by AIACE, which significantly outperformed the false click-based method in most scenarios (P < .001 to P = .02). The only exception was the HD95-based comparison after two clicks on the validation dataset (Table 2), where P = .08. Dataset-level gradual performance improvement with increasing numbers of clicks is shown in Figure E1 (supplement), and per-organ comparisons with three clicks are given in Tables E2–E7 (supplement).

Table 1:

Performance (Dice Similarity Coefficient) Improvement with Three Clicks on Five Independent Trainings among All Organs for Each Dataset


Table 2:

Performance (95th Percentile of Hausdorff Distance) Improvement with Three Clicks on Five Independent Trainings among All Organs for Each Dataset


Efficiency Analysis

The time required to update the contour at each iteration was about 20 msec when using a single NVIDIA GeForce Titan X graphics card, allowing real-time interaction between the clinician and AI.

Failure Analysis

During failure analysis, we found that the fractions of acceptable contours were 98.6% (3024 of 3067 contours) in the validation set, 98.7% (5808 of 5884 contours) in the DeepMind dataset, and 98.9% (1868 of 1889 contours) in the UTSW dataset. The average number of clicks required to reach the target (HD95 < 2.5 mm) was 1.59 (validation), 2.2 (DeepMind), and 3.8 (UTSW). Thus, the model failed in only a small fraction of cases (<1.5%; 140 of 10 840 contours).

We show a failure case in Figure 5, in which most clicks were spent editing a single contour segment. The sixth click image differs only slightly from the fifth because the sixth click lies very close to the fifth, as shown by the zoomed-in regions of interest. Consequently, the extra sixth click failed to correct this specific contour segment, since it did not provide sufficient new information to guide the model in editing the segment. We observed the same phenomenon when comparing the seventh click image with the sixth and among click images 10–12.

Figure 5:

Failure case analysis. Top row: axial noncontrast CT image (left) and the zoomed-in region of interest (right). Bottom row: zoomed-in regions of interest (left) and the associated click images (right) with respect to different clicks (from left to right and top to bottom, corresponding to clicks 1–20). For clicks (dots), red indicates the current click, and cyan, past clicks. The subfigures indicated by the red and the blue arrows represent two examples where the change of the click image is very subtle by adding extra clicks. Display window: [−160, 240] HU.


Discussion

This study is part of our ongoing effort toward developing artificial intelligence and clinician integrated systems, or AICIS. The current dogma for AI implementation involves AI and clinicians working independently and sequentially. AICIS attempts to integrate AI into clinicians’ existing workflows, allowing for collaboration to achieve acceptable results. Specifically, with AIACE, DSCs and HD95 values improved from 0.82 and 4.3 mm, respectively, to 0.91 and 2.1 mm in the validation set, 0.73 and 5.6 mm to 0.86 and 2.5 mm in the DeepMind dataset, and 0.67 and 11.4 mm to 0.86 and 3.3 mm in the UTSW dataset with just three clicks. Furthermore, given the swiftness of the model response time, AIACE can work in real time.

For the clinical task of organ segmentation, after the automatic segmentation model generates the initial contour, the clinician faces three options: accept as is (AAI), accept with editing, or reject. We always strive to improve the accuracy of automatic segmentation models to maximize the AAI ratio and minimize the reject ratio. An automatic segmentation model that produces a 100% AAI ratio could independently finish the clinical task without clinician input, potentially allowing AI to replace clinicians. In the real world, no model can achieve a 100% AAI ratio, and for especially challenging cases, manual editing of initial contours is necessary. Accordingly, we propose AIACE to assist clinicians in revising initial contours, not to replace automatic segmentation models. AIACE will typically have a higher impact on more complex and lower quality contours because these cases often require extensive manual editing. We believe that the automatic segmentation model and AIACE should work together to maximize overall contouring efficiency: the former maximizes the AAI ratio, and the latter maximizes contour editing efficiency.

One advantage of AIACE is that it can alleviate the generalizability issue from which many deep learning–based automatic segmentation models suffer (1012). Even if an automatic segmentation model generalizes poorly to outside datasets, using AIACE downstream can greatly improve these results, as shown by the major improvement in average performance from the initial to revised contours for the UTSW and DeepMind test datasets (Table 1).

Our failure analysis indicated that almost all failed cases showed the same pattern: the model did not receive sufficient information to further improve the contour. This is one of the drawbacks of our simulation-based click sampling method. In real clinical practice, we believe these failures will not be a major problem because (a) a statistical analysis based on three different datasets showed that AIACE fails only in rare cases, at a rate of less than 1.5% (140 of 10 840 contours), and (b) a clinician is unlikely to click on the same point repeatedly after finding that the model cannot further improve the contour.

For simplicity, in this proof-of-concept study, we used the well-known U-Net architecture for demonstration. We expect that the model’s performance can be further improved by using advanced architectures that more effectively extract and fuse both low- and high-level features, such as HRNet (13,14). In this study, we used 2D axial images to demonstrate the feasibility of AIACE. In the future, we will extend this concept to three-dimensional images and conduct a comprehensive, clinically realistic evaluation. To enable automatic, objective, and quantitative evaluation of model performance, all results were based on known ground truth contours, with simulated clicks placed on the contour segment exhibiting the largest error. Therefore, the results tabulated in Tables 1 and 2 represent the upper limit of model performance.

AIACE will likely play an important role in online ART. The current pipeline for ART includes acquiring a same-day image (eg, cone-beam CT scan [15]) and deforming or creating new contours of OARs and target structures to optimize the radiation treatment plan given the patient’s current anatomy. This approach can account for gas passing through the intestines or tumor shrinkage from current therapy. However, a substantial barrier to implementing online ART is the time required to manually correct contours. Therefore, AIACE-based contour editing can improve the efficiency and thereby the acceptance of online ART by radiation therapy departments.

In summary, AIACE significantly outperformed the false click-based method: it is more efficient, more controllable because it realizes the clinician’s editing intent, and more analogous to clinicians’ contouring habits.

Acknowledgments

We thank Jonathan Feinberg and Sepeadeh Radpour for editing the manuscript.

Disclosures of conflicts of interest: T.B. No relevant relationships. A.B. No relevant relationships. M.D. Trainee editorial board member for Radiology: Artificial Intelligence. H.E.M. No relevant relationships. R.M. No relevant relationships. J.T. No relevant relationships. M.H.L. No relevant relationships. D.J.S. No relevant relationships. D.N. No relevant relationships. S.J. No relevant relationships.

Abbreviations:

AAI = accept as is
AI = artificial intelligence
AIACE = AI-assisted contour editing
ART = adaptive radiation therapy
DSC = Dice similarity coefficient
HD95 = 95th percentile of Hausdorff distance
OAR = organ at risk
2D = two-dimensional
UTSW = University of Texas Southwestern

References

1. Chen X, Pan L. A survey of graph cuts/graph search based medical image segmentation. IEEE Rev Biomed Eng 2018;11:112–124.
2. Bach Cuadra M, Duay V, Thiran JP. Atlas-based segmentation. In: Paragios N, Duncan J, Ayache N, eds. Handbook of Biomedical Imaging: Methodologies and Clinical Research. Boston, Mass: Springer, 2015; 221–244.
3. Hao L. Registration-based segmentation of medical images. Singapore: School of Computing, National University of Singapore, 2006.
4. Oktay O, Schlemper J, Le Folgoc L, et al. Attention U-Net: learning where to look for the pancreas. arXiv 1804.03999 [preprint]. https://arxiv.org/abs/1804.03999. Posted April 11, 2018. Accessed July 12, 2021.
5. Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. In: Navab N, Hornegger J, Wells W, Frangi A, eds. Medical Image Computing and Computer-Assisted Intervention – MICCAI 2015. Lecture Notes in Computer Science, vol 9351. Cham, Switzerland: Springer, 2015; 234–241.
6. Zhou Z, Rahman Siddiquee MM, Tajbakhsh N, Liang J. UNet++: a nested U-Net architecture for medical image segmentation. In: Stoyanov D, Taylor Z, Carneiro G, Syeda-Mahmood T, eds. Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support. Lecture Notes in Computer Science, vol 11045. Cham, Switzerland: Springer, 2018; 3–11.
7. Nikolov S, Blackwell S, Zverovitch A, et al. Clinically applicable segmentation of head and neck anatomy for radiotherapy: deep learning algorithm development and validation study. J Med Internet Res 2021;23(7):e26151.
8. Yoon SW, Lin H, Alonso-Basanta M, et al. Initial evaluation of a novel cone-beam CT-based semi-automated online adaptive radiotherapy system for head and neck cancer treatment – a timing and automation quality study. Cureus 2020;12(8):e9660.
9. Sakinis T, Milletari F, Roth H, et al. Interactive segmentation of medical images through fully convolutional neural networks. arXiv 1903.08205 [preprint]. https://arxiv.org/abs/1903.08205. Posted March 19, 2019. Accessed April 20, 2020.
10. Zhang Y, Liang Y, Hall WA, et al. A generalizable guided deep learning auto-segmentation method of pancreatic GTV on multi-protocol daily MRIs for MR-guided adaptive radiotherapy. Int J Radiat Oncol Biol Phys 2021;111(3):e113.
11. Alam SR, Li T, Zhang P, Zhang SY, Nadeem S. Generalizable cone beam CT esophagus segmentation using physics-based data augmentation. Phys Med Biol 2021;66(6):065008.
12. Jia X, Wang S, Liang X, et al. Cone-beam computed tomography (CBCT) segmentation by adversarial learning domain adaptation. In: Shen D, Liu T, Peters TM, et al, eds. Medical Image Computing and Computer Assisted Intervention – MICCAI 2019. Lecture Notes in Computer Science, vol 11769. Cham, Switzerland: Springer, 2019; 567–575.
13. Bai T, Nguyen D, Wang B, Jiang S. Deep high-resolution network for low-dose x-ray CT denoising. J Artif Intell Med Sci 2021;2(1-2):33–43.
14. Wang J, Sun K, Cheng T, et al. Deep high-resolution representation learning for visual recognition. IEEE Trans Pattern Anal Mach Intell 2021;43(10):3349–3364.
15. Posiewnik M, Piotrowski T. A review of cone-beam CT applications for adaptive radiotherapy of prostate cancer. Phys Med 2019;59:13–21.
