Author manuscript; available in PMC 2026 Jan 1. Published in final edited form as: Brachytherapy. 2024 Nov 20;24(1):171–176. doi: 10.1016/j.brachy.2024.10.009

Instant Plan Quality Prediction on Transrectal Ultrasound for High-dose-rate Prostate Brachytherapy

Tonghe Wang 1, Yining Feng 1, Joel Beaudry 1, David Aramburu 1, Daniel Gorovets 1, Marisa Kollmeier 1, Antonio L Damato 1
PMCID: PMC11738656  NIHMSID: NIHMS2034043  PMID: 39572330

Abstract

Purpose:

In high-dose-rate (HDR) prostate brachytherapy procedures, needle placement relies solely on physician experience. We investigated the feasibility of using AI to provide instant feedback on the potential plan quality based on live needle placement, before planning is initiated. This approach would reduce procedure time and ensure consistent plan quality.

Materials and Methods:

We utilized YOLOv8, a family of open-source deep learning models based on deep convolutional neural networks, to perform automatic organ segmentation and needle detection on 2D transrectal ultrasound (TRUS) images. The segmentation and detection results for each patient were then fed into a plan quality prediction model based on ResNet101, a convolutional neural network that incorporates residual learning and skip connections. Its outputs are the values of selected dose-volume metrics. Imaging and plan data from 504 prostate HDR boost patients (456 for training, 24 for validation, and 24 for testing) treated in our clinic were included in this study. The segmentation, needle detection, and prediction results were compared to the clinical results (ground truth).

Results:

For the segmentation model, the Dice Similarity Coefficient on the prostate was 0.90±0.08. The 95th-percentile Hausdorff Distance for the rectum and the center-of-mass distance for the urethra were 1.22±1.26 mm and 0.76±0.56 mm, respectively. For the detection model, the detection sensitivity and false discovery rate were 93.7% and 8.5%, respectively, with an average distance between correctly detected needles and ground truth of less than 1 mm. For the prediction model, the p-values of t-tests between the predicted values and ground truth for both rectum D2cc and urethra D20% were larger than 0.8. The sensitivity of the prediction model in finding implant geometries resulting in below-median rectum D2cc and urethra D20% was 83% and 87%, respectively.

Conclusion:

Quantitative results demonstrated the feasibility of using AI to predict plan quality based on needle placement on TRUS images. The proposed method has great potential to facilitate current prostate HDR brachytherapy workflows by providing valuable feedback during needle insertion and by supporting decisions on where and whether additional needles are required.

1. Introduction

High-dose-rate (HDR) brachytherapy is widely practiced for prostate cancer treatment. During this procedure, typically eight to eighteen needles are interstitially implanted in the prostate under the guidance of transrectal ultrasound (TRUS) in the operating/procedure room (OR) after the patient is anesthetized. After implantation, planning images, whose modality depends on the institution's choice, are acquired for target and organs-at-risk (OARs) delineation as well as catheter digitization in the treatment planning system, followed by plan optimization.

Despite the ability to optimize dwell locations and dwell times via inverse optimization, needle placement is an essential component that heavily impacts the achievable plan quality.(1) Currently, needle placement relies on physician experience and lacks instant feedback on expected plan quality. A sub-optimal needle implantation is difficult to identify until the plan optimization stage is initiated. If needle implantation is found unsatisfactory, adding extra needles is an option for patients who are still under sedation, but it requires additional OR time to re-acquire the planning image and repeat the planning process. If anesthesia has already been reversed, changes in needle placement under TRUS guidance may be practically impossible. One way sometimes used to minimize this risk is to implant more needles than the minimum needed to achieve a good plan. This also has a cost in terms of OR time, a longer and harder digitization process, and possible additional risk to the patient from the extra punctures. Thus, it is desirable to have instant feedback on the potential plan quality based on the current needle placement, so that physicians can adjust it before planning image acquisition, thereby minimizing unnecessary restarts and enhancing patient safety and treatment quality.

In this study, we propose a workflow to provide instant plan quality prediction in the OR for HDR prostate brachytherapy by utilizing artificial intelligence (AI). The proposed method is expected to predict dose-volume histogram metrics of organs-at-risk on the axial TRUS images across the prostate right after the implants are done, allowing physicians to amend needle insertion in time without going through the entire planning process of contouring, digitization, and plan optimization. Providing useful feedback requires accurately and rapidly localizing the positions of all needles and the shapes of the prostate and OARs as a first step. Recently, deep learning has been introduced to the medical imaging field as a powerful tool for detection and segmentation tasks. A few studies have demonstrated the feasibility of using deep learning to detect needles on TRUS for prostate HDR brachytherapy.(2–9) Moreover, auto-segmentation on TRUS for the prostate and relevant organs-at-risk (OARs) has also been shown to be practicable.(10–20) However, none of these studies investigated the possibility of using these AI-generated results to further predict plan quality with AI. On the other hand, predicting dose or dosimetric metrics has been actively studied in external beam radiation therapy,(21, 22) while similar studies for brachytherapy are sparse.

In this study, we used AI models to track needles and to segment the prostate and OARs, and we investigated the feasibility of further using AI to predict plan quality based on these AI-generated results. By providing immediate plan quality feedback with the proposed method, we aim to streamline the entire prostate HDR brachytherapy workflow and improve clinical outcomes.

2. Methods and materials

2.A. Data

In this study, we retrospectively collected a cohort of 504 patients treated at our institution with HDR prostate boost brachytherapy to a total dose of 15 Gy. Each patient had planning TRUS images acquired during the procedure in the OR. Contours of the prostate, rectum, and urethra were delineated by physicians; catheters were digitized by physicists; and plans were optimized by physicists and approved by physicians. The planning process was performed on the Vitesse treatment planning system (Varian). The TRUS images were acquired as axial slices and saved as DICOM with a size of 1024×768, pixel spacing of 0.1×0.1 mm², and slice thickness of 1 mm. The related RTPLAN and RTSTRUCT files were also read to extract the locations of catheters and organ contours on the TRUS images. Among the 504 patients, 456 were used for training, 24 for validation, and 24 for testing.
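
The paper does not include code; a minimal sketch of this extraction step with pydicom, assuming standard RTSTRUCT/RTPLAN attributes (the file names are hypothetical), might look like:

```python
import pydicom

struct = pydicom.dcmread("RS.example.dcm")  # hypothetical RTSTRUCT file
plan = pydicom.dcmread("RP.example.dcm")    # hypothetical RTPLAN file

# Organ contours: map ROI number -> name, then collect each planar contour
# as a flat [x1, y1, z1, x2, y2, z2, ...] coordinate list.
roi_names = {r.ROINumber: r.ROIName for r in struct.StructureSetROISequence}
contours = {}
for rc in struct.ROIContourSequence:
    name = roi_names[rc.ReferencedROINumber]
    contours[name] = [c.ContourData for c in getattr(rc, "ContourSequence", [])]

# Catheter geometry: HDR brachytherapy RTPLANs store 3D control points
# for each channel (catheter).
catheters = []
for channel in plan.ApplicationSetupSequence[0].ChannelSequence:
    path = [cp.ControlPoint3DPosition
            for cp in channel.BrachyControlPointSequence
            if "ControlPoint3DPosition" in cp]
    catheters.append(path)
```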

2.B. Needle detection and organ segmentation

We implemented YOLOv8 models for needle detection and organ segmentation. YOLOv8 is an open-source model for real-time object detection and image segmentation.(23) It builds upon previous versions of YOLO by incorporating advanced features such as an improved architecture, more efficient use of computational resources, and higher detection accuracy. In the context of ultrasound imaging, we trained two YOLOv8 models separately to track needles and segment organs, leveraging the model's real-time processing capabilities and robust detection algorithms to handle the complex and dynamic nature of medical images. This model is particularly suitable for applications requiring both high precision and speed, making it an ideal choice for medical imaging tasks.

For training the needle tracking model, the detection version of YOLOv8 was used with the 2D slices of ultrasound images as inputs, and the training target was the corresponding coordinates of the manually digitized needles, if any were present on that slice. For organ segmentation, the segmentation version of YOLOv8 was used with the same 2D slices as inputs, and the training target was the coordinates of the manually generated contours of the prostate, rectum, and urethra, if any were present on that slice. Note that slices without any catheters or organs were also included in the training dataset.
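
With the ultralytics package, this two-model training setup can be sketched as follows (the dataset YAML paths are hypothetical placeholders; the 100-epoch setting follows Section 2.D):

```python
from ultralytics import YOLO

# Needle detection: bounding boxes centered on digitized needle positions.
det_model = YOLO("yolov8n.pt")              # nano detection variant
det_model.train(data="needles.yaml", epochs=100)

# Organ segmentation: polygon labels for prostate, rectum, and urethra.
seg_model = YOLO("yolov8n-seg.pt")          # nano segmentation variant
seg_model.train(data="organs.yaml", epochs=100)
```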

Both the detection and segmentation versions of YOLOv8 are provided in different sizes. We evaluated all the available model sizes: YOLOv8-n(ano), YOLOv8-s(mall), YOLOv8-m(edium), YOLOv8-l(arge), and YOLOv8-x (extra-large). These models have an increasing number of parameters and are expected to have a greater capacity to learn complex patterns and representations from the data, as in the sweep sketched below.
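
A sketch of this size sweep for the detection models (same hypothetical dataset YAML as above):

```python
from ultralytics import YOLO

# Train and validate each detection variant on the same data split.
for size in ("n", "s", "m", "l", "x"):
    model = YOLO(f"yolov8{size}.pt")
    model.train(data="needles.yaml", epochs=100)
    metrics = model.val()  # precision, recall, mAP on the validation split
```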

2.C. Plan quality prediction

To predict the plan quality based on the organ contours and needle positions, we implemented a ResNet101, i.e., a Residual Network with 101 layers.(24) It is a deep convolutional neural network renowned for its exceptional performance in image recognition tasks. Since the input of ResNet101 is expected to be images, we converted the contours and needle coordinates into label maps by assigning a distinct value to each object class: prostate 60, rectum 120, urethra 180, catheter 240, and background 0. We stacked the 2D slices of these label maps for each patient into a 3D label volume, which served as the input to the model.
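
A minimal sketch of this conversion, assuming each slice's contours and needle positions have already been rasterized into boolean masks:

```python
import numpy as np

# Label values from the text. The overwrite order where masks overlap
# (e.g., a catheter inside the prostate) is an assumption here.
LABELS = (("prostate", 60), ("rectum", 120), ("urethra", 180), ("catheter", 240))

def build_label_volume(slice_masks, shape=(768, 1024)):
    """slice_masks: per-slice dicts mapping class name -> boolean 2D mask."""
    planes = []
    for masks in slice_masks:
        plane = np.zeros(shape, dtype=np.uint8)  # background = 0
        for name, value in LABELS:
            if name in masks:
                plane[masks[name]] = value
        planes.append(plane)
    return np.stack(planes)  # (num_slices, H, W)
```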

We quantified the plan quality with dosimetric metrics commonly used in prostate HDR brachytherapy: rectum D2cc and urethra D20%. Since these two metrics largely depend on how the plan dose is normalized, we normalized all the plans in our dataset such that 100% of the prescription dose covers 95% of the prostate. The training targets of this model were the rectum D2cc and urethra D20% of each patient plan after normalization. In this way, we expect the prediction model to tell the physician how well the current needle pattern can spare the rectum and urethra when the plan is normalized to 95% of the CTV covered by the prescription dose.
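
The paper does not spell out how the 3D volume enters the 2D ResNet101; one plausible adaptation, sketched below, treats the 128 slices as input channels and regresses the two normalized metrics:

```python
import torch
import torch.nn as nn
import torchvision

# Assumption: slices-as-channels; the authors may have used a different scheme.
model = torchvision.models.resnet101(weights=None)
model.conv1 = nn.Conv2d(128, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.fc = nn.Linear(model.fc.in_features, 2)  # [rectum D2cc, urethra D20%]

volume = torch.rand(1, 128, 128, 128)  # one padded/downsampled label volume
rectum_d2cc, urethra_d20 = model(volume)[0]
```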

2.D. Implementation and evaluation

The above deep learning networks were implemented using Python 3.6 and PyTorch. Training was performed on an NVIDIA A40 graphics card with 48 GB of memory, while testing was performed on an NVIDIA 2080 Ti with 11 GB of memory. For catheter detection and organ segmentation, the default YOLOv8 training parameters were used, except that the number of epochs was set to 100. For the plan quality prediction model, to reduce the computational cost, the input 3D label volume was first zero-padded to 1024×1024×128 and then downsampled to 128×128×128. Random horizontal flipping and random 3D translation were applied as data augmentation to reduce overfitting. A total of 200 epochs were used. Optimization was performed using the Adam optimizer with a learning rate of 2×10⁻³. The needle detection and organ segmentation models were smaller than 10 MB, and the plan quality prediction model was smaller than 350 MB. The inference time was around 15 ms per slice for the needle detection and organ segmentation models, and 17 ms per patient for the plan quality prediction model.
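
A sketch of the prediction model's training loop under these settings, reusing the `model` from the sketch above; the loss function is not stated in the paper, so mean squared error is an assumption, and `train_loader` is a hypothetical DataLoader yielding (volume, [D2cc, D20%]) pairs:

```python
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=2e-3)
loss_fn = torch.nn.MSELoss()  # assumption: regression loss not specified

for epoch in range(200):
    for volumes, targets in train_loader:
        # Augmentation (random flip / 3D translation) is assumed to be
        # applied inside the dataset, as described above.
        optimizer.zero_grad()
        loss = loss_fn(model(volumes), targets)
        loss.backward()
        optimizer.step()
```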

To quantitatively evaluate the performance of the proposed workflow, we compared the results generated by the AI models with the corresponding metrics in the ground truth clinical plans. For needle tracking, we evaluated the distance between the detected needles and the ground truth. For prostate segmentation, we used the Dice Similarity Coefficient (DSC) to evaluate the overlap between the segmented results and the physician's contours. For the rectum, we measured the 95th-percentile Hausdorff Distance (HD95) between the segmented contours and the ground truth contours on the side close to the prostate, where the rectum D2cc is sensitive. For the urethra, we used the center-of-mass distance (CMD) between the segmentation results and the ground truth, considering the small size of the urethra. For plan quality prediction, we first ranked all the patient plans in our dataset by their rectum D2cc and urethra D20% metrics, respectively. For each testing patient, we used the predicted dosimetric value to find its percentile in our dataset and compared it with the true percentile based on the ground truth dosimetric value.
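
A sketch of these metrics, assuming binary masks on a common voxel grid and surface point clouds in millimeters:

```python
import numpy as np
from scipy import ndimage
from scipy.spatial.distance import cdist

def dice(a, b):
    """Dice Similarity Coefficient for boolean arrays a, b."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(pts_a, pts_b):
    """95th-percentile symmetric Hausdorff distance; pts_*: (N, 3) in mm."""
    d = cdist(pts_a, pts_b)
    return max(np.percentile(d.min(axis=1), 95), np.percentile(d.min(axis=0), 95))

def com_distance(a, b, spacing_mm):
    """Center-of-mass distance between boolean masks a, b."""
    ca = np.array(ndimage.center_of_mass(a))
    cb = np.array(ndimage.center_of_mass(b))
    return np.linalg.norm((ca - cb) * np.asarray(spacing_mm))
```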

3. Results

3.A. Needle detection

Fig. 1 shows the catheter detection results at different slices for two exemplary patients using YOLOv8-n. Qualitatively, almost all the needles were successfully detected, and the localized positions were close to the manual digitization. Table 1 summarizes the quantitative performance of catheter detection by different sizes of YOLO models. For example, for YOLOv8-n, among the total of 25,327 needles on the 2D slices of all testing patients, the needle detection model successfully detected 23,739, resulting in a sensitivity of 93.7%. The model also returned 2,212 needles that did not correspond to any ground truth needle, resulting in a false discovery rate of 8.5%. Among the 23,739 successfully detected needles, the distance between the detected location and the ground truth location was 0.65 mm on average. Overall, all five models showed very similar performance: they detected more than 90% of the needles with submillimeter accuracy and less than a 10% false discovery rate on each axial TRUS slice.

Figure 1. Results of needle detection on two exemplary patients (upper and bottom rows) from apex to base of the prostate (left to right). Blue: ground truth; red: detection results.

Table 1.

Needle detection and organ segmentation performance, and the combined inference time, for different model sizes. The ground truth total number of needles is 25,327.

Metric                         | YOLOv8-n   | YOLOv8-s   | YOLOv8-m   | YOLOv8-l   | YOLOv8-x
# of predicted needles         | 25951      | 25651      | 25630      | 26168      | 25732
# of missed needles            | 1588       | 1659       | 1695       | 1276       | 1633
# of wrongly predicted needles | 2212       | 1983       | 1998       | 2117       | 2038
Sensitivity                    | 93.7%      | 93.4%      | 93.3%      | 94.9%      | 93.6%
False discovery rate           | 8.5%       | 7.7%       | 7.8%       | 8.1%       | 7.9%
Detection distance error (mm)  | 0.65±0.59  | 0.66±0.60  | 0.63±0.60  | 0.62±0.60  | 0.66±0.58
Prostate DSC                   | 0.90±0.08  | 0.90±0.08  | 0.90±0.08  | 0.90±0.08  | 0.90±0.08
Rectum HD95 (mm)               | 1.22±1.26  | 1.27±1.27  | 1.11±1.11  | 1.15±1.32  | 1.14±1.15
Urethra CMD (mm)               | 0.76±0.56  | 0.83±0.77  | 0.92±0.80  | 0.77±0.67  | 0.90±0.87
Inference time per slice (ms)  | 10.13±2.18 | 10.63±4.49 | 15.41±1.20 | 23.05±1.05 | 35.34±1.56

3.B. Organ segmentation

Fig. 2 demonstrates the organ segmentation results from prostate apex to base on two exemplary patients using YOLOv8-n. The segmentation model provided prostate contours close to the physician's contours around the mid-gland, with less accuracy at the base and apex. The urethra and the upper surface of the rectum were also segmented, with accuracy visibly dependent on the contrast shown in the images. When calculating the evaluation metrics for the rectum, only the region within the fan-shaped ultrasound scan plane was considered. The quantitative results of organ segmentation are summarized in Table 1. All five models demonstrated similar performance. The average inference time for performing both needle detection and organ segmentation on each slice is listed in Table 1 for the different model sizes. For a TRUS volume with 100 slices, the total processing time would be around 1 to 3 seconds.

Figure 2. Results of organ segmentation on two exemplary patients (upper and bottom rows) from apex to base of the prostate (left to right). Blue: ground truth; red: segmentation results.

3.C. Plan quality prediction

Considering the very similar performance of all five models, we selected the smallest, YOLOv8-n, to perform needle detection and organ segmentation for the investigation of plan quality prediction. As seen in Table 2, for both rectum D2cc and urethra D20%, the predicted values were very close to the ground truth on average (p > 0.05). We further investigated the sensitivity and specificity of using the predicted values to find needle placements with below-median quality for each metric, as sketched below. As an example, for rectum D2cc, among the 24 testing patients, 10 had predicted values below the median of the entire patient dataset, compared with 12 patients actually below the median per the ground truth, resulting in a sensitivity of 83%. The remaining results are summarized in Table 3. The average inference time of the prediction model was 17.5 ms.
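
A sketch of this below-median sensitivity/specificity calculation (the function name is illustrative):

```python
import numpy as np

def below_median_stats(predicted, ground_truth, cohort_median):
    """Treat 'truly below-median quality' as the positive class."""
    pred_low = np.asarray(predicted) < cohort_median
    true_low = np.asarray(ground_truth) < cohort_median
    sensitivity = np.sum(pred_low & true_low) / true_low.sum()
    specificity = np.sum(~pred_low & ~true_low) / (~true_low).sum()
    return sensitivity, specificity
```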

Table 2.

Mean ± SD of dose-volume histogram (DVH) metrics of ground truth and prediction, and their difference, on the testing patients. P-values are listed for the t-test between the ground truth and predicted values.

Metric       | Rectum D2cc (%) | Urethra D20% (%)
Ground truth | 60.0±7.2        | 113.6±8.1
Prediction   | 59.6±9.2        | 113.8±8.0
Difference   | −0.4±7.0        | 0.2±5.7
P-value      | 0.86            | 0.94

Table 3.

Sensitivity and specificity of the prediction model in finding below-median-quality catheter placement for each metric.

Metric      | Rectum D2cc | Urethra D20%
Sensitivity | 83%         | 87%
Specificity | 75%         | 89%

4. Discussion

This study is the first demonstration that AI can be utilized to provide valuable immediate feedback on projected brachytherapy implant quality. While further prospective analysis of the impact of this approach is needed to confirm its applicability to a variety of clinical situations, the potential to reduce the experience gap in brachytherapy can be of great importance for many clinics. We presented a novel workflow to provide instant plan quality prediction for high-dose-rate prostate brachytherapy by utilizing AI. The proposed workflow aims to streamline the entire HDR prostate brachytherapy workflow by providing immediate feedback on catheter placement during needle implantation. This feedback is essential for physicians to optimize needle placement in a timely manner, without needing to start treatment planning to obtain it. The use of AI has the potential to reduce the variability in implant quality that stems from differences in physician procedural experience. This approach can be of particular utility for clinics with low brachytherapy volume, allowing them to overcome a skill and procedural experience gap that could otherwise discourage the use of brachytherapy or result in suboptimal implant quality. Busy brachytherapy centers can also benefit from this technology, as it offers a quality assurance step for their clinical operation.

Our method uses three open-source deep learning models to perform needle detection, organ segmentation, and plan quality prediction. In-house AI models for needle detection and organ segmentation have been investigated in previous publications, and our models present comparable performance. The DSC for prostate segmentation was comparable to the results of other studies on prostate segmentation on TRUS.(10, 11) For example, Lei et al. reported average DSCs of 0.92 and 0.93 on the prostate in their studies. Moreover, the segmentation performance on the rectum and urethra was superior to studies that reported a rectum HD95 of 1.90 mm and a urethra CMD of 1.82 mm. For needle detection, since previous studies performed detection in 3D space while our study operated on 2D slices, a direct comparison is not feasible. However, in general, these methods detect around 95% of catheters in 3D space,(2–4) which is close to the 94% we achieved on 2D slices in this study. Although the performance of open-source models is comparable to that of previous in-house AI studies, using open-source models is advantageous because they are more accessible and easier to implement. A physicist with basic knowledge of Python and deep learning should be able to reproduce the method in this study without advanced knowledge of modifying neural network architectures. These advantages would facilitate widespread integration of AI into the brachytherapy workflow.

On the other hand, to our knowledge, using AI to predict plan quality for prostate HDR brachytherapy has not been investigated before. The most relevant study, by Li et al., used deep learning to predict the dose map for cervical cancer HDR brachytherapy.(25) In their study, the output of the model is the predicted 3D dose volume. Compared with the few scalar outputs in our study, generating a 3D volume inevitably requires much more computational resources and thus may not be suitable for instant use. As mentioned in their study, it took about 12 seconds to predict the dose map, compared with the 17.5 ms our method needs to provide selected dose-volume metrics on a similar level of GPU. While 12 seconds is not in itself a long wait time, it would not be conducive to providing continuous implant feedback, as our method does.

YOLOv8 models of different sizes showed similar performance in this study. A potential reason is the limited size of the training dataset, which may lead to overfitting in larger models. In this study, a total of 34,258 TRUS slices were used for training, which is far fewer than the datasets commonly used for natural images, such as the COCO dataset with 118K images. For brachytherapy, the dataset available to a single institution is inevitably small due to limited patient throughput. In the future, a multi-institution dataset could be collected to enrich not only the size but also the diversity of the dataset.

This study demonstrated the methodological feasibility of using AI for plan quality prediction in prostate HDR brachytherapy. Although this study used a dataset from our institution, where the brachytherapy plan is generated on TRUS images, the proposed method may also be applicable to CT-based or MRI-based planning as long as the catheter placement step is performed under TRUS guidance. Moreover, although this research focused on HDR prostate brachytherapy, we believe the workflow generalizes to low-dose-rate (LDR) prostate brachytherapy procedures as well. The potential challenges in its clinical implementation are to be investigated in a future study.

The main limitation of this study is that it is a single-institution, retrospective study. This successful retrospective analysis will permit us to engage with outside institutions to quantify how generalizable this approach is, and to conduct our own prospective study as we integrate this tool into our clinical operation.

This study proposed an approach to predict plan quality based on the current needle placement. It provides useful information for physicians in deciding whether or not to accept the current placement. A further direction of this work is to investigate the optimal catheter placement pattern given the contours of the prostate and OARs. Considering physicians' operating uncertainty, the actual placement may deviate from the optimal position; thus, a live update of the optimal pattern after each needle is implanted would be even more desirable.

5. Conclusion

We developed a novel workflow to provide instant plan quality prediction based on TRUS in the OR for high-dose-rate prostate brachytherapy by utilizing AI. The method automatically segmented the prostate and OARs and detected the placed needles. The segmentation and detection results were then used to predict relevant dose-volume metrics. This provides an effective solution for giving physicians feedback on catheter placement, which is essential to minimize unnecessary restarts and compromises in plan quality, thus enhancing patient safety and treatment outcomes.

Acknowledgments

This research is supported in part by the National Cancer Institute of the National Institutes of Health under Award Number P30CA008748.


References

[1] Lei Y, Wang T, Fu Y, et al. Catheter position prediction using deep-learning-based multi-atlas registration for high-dose rate prostate brachytherapy. Med Phys 2021; 48(11):7261–7270.
[2] Zhang Y, He X, Tian Z, et al. Multi-needle detection in 3D ultrasound images using unsupervised order-graph regularized sparse dictionary learning. IEEE Transactions on Medical Imaging 2020; 39(7):2302–2315.
[3] Zhang Y, Lei Y, Qiu RLJ, et al. Multi-needle localization with attention U-Net in US-guided HDR prostate brachytherapy. Medical Physics 2020; 47(7):2735–2745.
[4] Zhang Y, Tian Z, Lei Y, et al. Automatic multi-needle localization in ultrasound images using large margin mask RCNN for ultrasound-guided prostate brachytherapy. Physics in Medicine & Biology 2020; 65(20):205003.
[5] Gillies DJ, Rodgers JR, Gyacskov I, et al. Deep learning segmentation of general interventional tools in two-dimensional ultrasound images. Medical Physics 2020; 47(10):4956–4970.
[6] Andersén C, Rydén T, Thunberg P, Lagerlöf JH. Deep learning-based digitization of prostate brachytherapy needles in ultrasound images. Medical Physics 2020; 47(12):6414–6420.
[7] Wang F, Xing L, Bagshaw H, Buyyounouski M, Han B. Deep learning applications in automatic needle segmentation in ultrasound-guided prostate brachytherapy. Medical Physics 2020; 47(9):3797–3805.
[8] Liu D, Tupor S, Singh J, et al. The challenges facing deep learning-based catheter localization for ultrasound guided high-dose-rate prostate brachytherapy. Medical Physics 2022; 49(4):2442–2451.
[9] Hu Z, Brastianos H, Ungi T, et al. Automated catheter segmentation using 3D ultrasound images in high-dose-rate prostate brachytherapy. In: Medical Imaging 2021: Image-Guided Procedures, Robotic Interventions, and Modeling, vol. 11598. SPIE; 2021:2512–259.
[10] Lei Y, Tian S, He X, et al. Ultrasound prostate segmentation based on multidirectional deeply supervised V-Net. Medical Physics 2019; 46(7):3194–3206.
[11] Lei Y, Wang T, Roper J, et al. Male pelvic multi-organ segmentation on transrectal ultrasound using anchor-free mask CNN. Medical Physics 2021; 48(6):3055–3064.
[12] Chen Y, Xing L, Yu L, et al. MR to ultrasound image registration with segmentation-based learning for HDR prostate brachytherapy. Med Phys 2021; 48(6):3074–3083.
[13] Peng T, Dong Y, Di G, et al. Boundary delineation in transrectal ultrasound images for region of interest of prostate. Physics in Medicine & Biology 2023; 68(19):195008.
[14] Orlando N, Gillies DJ, Gyacskov I, Romagnoli C, D'Souza D, Fenster A. Automatic prostate segmentation using deep learning on clinically diverse 3D transrectal ultrasound images. Medical Physics 2020; 47(6):2413–2426.
[15] Girum KB, Lalande A, Hussain R, Créhange G. A deep learning method for real-time intraoperative US image segmentation in prostate brachytherapy. International Journal of Computer Assisted Radiology and Surgery 2020; 15(9):1467–1476.
[16] Karimi D, Zeng Q, Mathur P, et al. Accurate and robust deep learning-based segmentation of the prostate clinical target volume in ultrasound images. Medical Image Analysis 2019; 57:186–196.
[17] Anas EMA, Nouranian S, Mahdavi SS, et al. Clinical target-volume delineation in prostate brachytherapy using residual neural networks. In: Medical Image Computing and Computer Assisted Intervention – MICCAI 2017: 20th International Conference, Quebec City, QC, Canada, September 11–13, 2017, Proceedings, Part III. Springer; 2017:365–373.
[18] Nouranian S, Ramezani M, Spadinger I, Morris WJ, Salcudean SE, Abolmaesumi P. Learning-based multi-label segmentation of transrectal ultrasound images for prostate brachytherapy. IEEE Transactions on Medical Imaging 2015; 35(3):921–932.
[19] Xu X, Sanford T, Turkbey B, Xu S, Wood BJ, Yan P. Polar transform network for prostate ultrasound segmentation with uncertainty estimation. Medical Image Analysis 2022; 78:102418.
[20] Hampole P, Harding T, Gillies D, et al. Deep learning-based ultrasound auto-segmentation of the prostate with brachytherapy implanted needles. Medical Physics 2024; 51(4):2665–2677.
[21] Kandalan RN, Nguyen D, Rezaeian NH, et al. Dose prediction with deep learning for prostate cancer radiation therapy: Model adaptation to different treatment planning practices. Radiother Oncol 2020; 153:228–235.
[22] Chen X, Men K, Zhu J, et al. DVHnet: A deep learning-based prediction of patient-specific dose volume histograms for radiotherapy planning. Med Phys 2021; 48(6):2705–2713.
[23] Reis D, Kupec J, Hong J, Daoudi A. Real-time flying object detection with YOLOv8. 2023; arXiv:2305.09972.
[24] He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. 2015; arXiv:1512.03385.
[25] Li Z, Yang Z, Lu J, et al. Deep learning-based dose map prediction for high-dose-rate brachytherapy. Phys Med Biol 2023; 68(17).
