Abstract
Background
Among MRI sequences, three-dimensional T1 magnetization prepared rapid acquisition gradient echo (3D-T1-MPRAGE) is preferred for detecting brain metastases (BM). We developed an automatic deep learning–based detection and segmentation method for BM (named BMDS net) on 3D-T1-MPRAGE images and evaluated its performance.
Methods
The BMDS net is a cascaded 3D fully convolutional network (FCN) that automatically detects and segments BM. In total, 1652 patients with 3D-T1-MPRAGE images from 3 hospitals (n = 1201, 231, and 220, respectively) were retrospectively included. Manual segmentations were obtained by a neuroradiologist and a radiation oncologist in a consensus reading on 3D-T1-MPRAGE images. Sensitivity, specificity, and dice ratio of the segmentation were evaluated. Specificity and sensitivity measure the fractions of relevant segmented voxels, and the dice ratio quantitatively measures the overlap between the automatic and manual segmentation results. Paired samples t-tests and analysis of variance were employed for statistical analysis.
Results
The BMDS net can detect all BM, providing a detection result with an accuracy of 100%. Automatic segmentations correlated strongly with manual segmentations through 4-fold cross-validation of the dataset with 1201 patients: the sensitivity was 0.96 ± 0.03 (range, 0.84–0.99), the specificity was 0.99 ± 0.0002 (range, 0.99–1.00), and the dice ratio was 0.85 ± 0.08 (range, 0.62–0.95) for total tumor volume. Similar performances on the other 2 datasets also demonstrate the robustness of BMDS net in correctly detecting and segmenting BM in various settings.
Conclusions
The BMDS net yields accurate detection and segmentation of BM automatically and could assist stereotactic radiotherapy management for diagnosis, therapy planning, and follow-up.
Keywords: brain metastases, deep learning, fully convolutional network, MRI, stereotactic radiotherapy
Key Points.
A 3D deep learning model was developed for detection and segmentation of BM.
Experimental results showed good consistency with manual delineations by human experts.
BMDS net can facilitate the stereotactic radiotherapy management of BM.
Importance of the Study.
Detection and segmentation are essential for the clinical management of BM, including diagnosis, therapy planning, and follow-up. However, manual delineation is time-consuming and subject to interobserver variability. With recent advances, deep learning methods have achieved strong performance in medicine. We developed a deep learning–based method for the automatic detection and segmentation of BM. Its results agree with those of human experts, enabling fast differential diagnosis, tumor delineation, and tumor burden assessment with less subjective bias, and its performance was validated on an external dataset, facilitating the clinical management of BM by assisting clinicians.
Brain metastases (BM) are the most common malignant tumors of the central nervous system. Patients with BM have a poor outcome without effective treatment.1 Both an accurate diagnosis and an appropriate therapeutic strategy strongly influence the clinical outcomes of BM.2 Benefiting from efficient local control, stereotactic radiotherapy (SRT) is becoming popular in the treatment of BM, even in cases with more than 10 lesions,3–5 and is often used instead of whole-brain radiotherapy, yielding durable local control rates above 80–90%.
Detection and segmentation are necessary steps in the clinical management of SRT for BM. However, manual delineation is time-consuming and often irreproducible.6 Although the time required to segment a BM depends on its size, manual segmentation still took a median of 2.8 minutes per lesion in our clinical practice, whereas a deep learning method could perform the task within 25 seconds. Moreover, small and even large BM are occasionally omitted during the planning of Gamma Knife radiosurgery in clinical practice because of carelessness, exhaustion, and other human factors.7 Additionally, interobserver variability in target volume delineation is another important contributor to uncertainty in SRT.8,9 Therefore, an efficient computer-aided detection and segmentation method is needed to help radiation oncologists identify and cover BM, especially in patients with multiple BM.10
Various methods have been used to detect brain tumors. Pereira et al11 proposed an automatic glioma segmentation method based on convolutional neural networks (CNNs). A symmetry-driven fully convolutional network (FCN)12 segmented gliomas using multimodal data. Havaei et al proposed a new segmentation framework13 for heterogeneous multimodal MRI, which allows segmentation even when certain modalities are missing. Hu et al14 extracted multiscale features through multilevel CNNs and used conditional random fields (CRF)15 to incorporate spatial context information for glioma segmentation. A prior-driven FCN-CRF16 incorporating prior knowledge of the tumor core was used to segment the tumor core. Feature recombination and spatially adaptive recalibration blocks were proposed to recalibrate feature maps and improve glioma segmentation.17 Chen et al18 proposed a dual-force training strategy that attaches an auxiliary loss function to the network to explicitly encourage the learning of multilevel features. Interactive methods19,20 have also been used for glioma segmentation. However, few studies have addressed BM. Farjam et al21 used a set of unevenly spaced 3D spherical shell templates to localize BM on T1-weighted MRI. Ambrosini et al22 proposed spherical tumor appearance models to match the expected geometry of BM, and Krafft et al23 used soft independent modeling of class analogies to classify BM.
To the best of our knowledge, no study has investigated the fully automatic segmentation of BM, for the following reasons: (i) many very small BM exist in the brain, (ii) niduses show various distributions both within individual participants and across patient groups, and (iii) the intensities of voxels in niduses can be very similar to those of neighboring tissues when patients have been administered certain targeted drugs.24 With recent advances, deep learning methods have been regarded as a very promising approach for medical image processing, including brain tumor segmentation.25,26 Davy et al divided 3D MR images into 2D images and trained a CNN to predict their center pixel class.25 Urban et al27 and Zikic et al28 implemented a fairly common CNN, comprising a series of convolutional layers, a nonlinear activation function between each layer, and a softmax output layer. However, BM are unevenly distributed and sometimes very small. For such tumors, deep learning–based segmentation methods can be disrupted by the nontarget region, which often occupies a large fraction of postcontrast T1-weighted MR images. Furthermore, the methods above divided the 3D MR images into 2D25,28 or 3D patches27 to train a CNN, unavoidably missing some spatial information across slices. These deficiencies often resulted in segmentations that were inaccurate compared with experts' segmentations, impeding their application in clinical practice.
Aiming to overcome the abovementioned challenges and to compare against the state-of-the-art method, we proposed and evaluated a cascaded 3D FCN for BM detection and segmentation (BMDS net) using whole 3D MR images. Our hypothesis is that the proposed BMDS net will provide accurate detection and segmentation of BM and outperform the classical FCN. In this approach, the first FCN focuses on fast localization of the BM region, and the second FCN performs segmentation on the detected region, thereby reducing the influence of similar neighboring tissues and assisting the precise management of BM with SRT.
Materials and Methods
Study Populations
This study was approved by the institutional review board of the 3 hospitals, and all experiments were performed in compliance with the Declaration of Helsinki. Written informed consent was acquired from each patient or next of kin.
From October 2016 to May 2019, Dataset 1, consisting of 1201 patients with BM, was retrospectively collected from Shandong Provincial Hospital. Two hundred thirty-one cases were retrospectively collected from the Affiliated Hospital of Qingdao University Medical College as Dataset 2, and 220 cases were collected from the Second Hospital of Shandong University as Dataset 3. The detailed distribution of all datasets and the inclusion and exclusion criteria are provided in Supplementary Content 1. The summary results of the 3 datasets are provided in Supplementary Table 1.
MRI Acquisitions
All patients from the 3 institutes were examined on 3.0 T MRI scanners, but from different manufacturers and with different imaging parameters. Detailed information about the MR scanners and imaging parameters of the 3 hospitals is provided in Supplementary Content 2.
Manual Segmentation
Manual segmentation of BM on postcontrast 3D T1 magnetization prepared rapid acquisition gradient echo (MPRAGE) images was performed jointly by one neuroradiologist and one radiation oncologist (with 10 and 15 years of experience, respectively) using the noncommercial software ITK-SNAP (version 3.6.0, http://www.itksnap.org); when their readings diverged, a consensus was reached after careful discussion. Only the whole BM (including both the enhancing and necrotic areas) was covered by the segmentation, avoiding adjacent vessels. Lesions visible in only 1 axial slice were not segmented, whereas any lesion occupying more than 2 axial slices was segmented regardless of its size. The time taken for manual segmentation was recorded, and the mean of the 2 readers' times for a lesion was taken as the time for that lesion.
BMDS Net Architecture
Deep learning models can learn a hierarchy of features by building high-level features from low-level features. The FCN29 is a special case of a convolutional network in which training is an end-to-end process with fewer network parameters. The FCN comprises multiple convolution and pooling layers, which greatly reduce the number of network parameters, and contains no fully connected layers. The FCN can take inputs of arbitrary size and generate same-size outputs through efficient inference and learning; it produces dense pixel outputs by interpolating the coarse output via deconvolution layers. Recently, the 3D FCN was developed and applied to 3D imaging data.30 The proposed BMDS net employs and further extends 3D FCNs and involves 2 stages: detection and segmentation. In the detection stage, we designed a 3D FCN to rapidly localize the region of BM in raw 3D T1-weighted MR images and propose bounding boxes for segmentation. This can be considered a coarse segmentation of BM. The FCN is designed to rapidly segment BM from the downsampled MR image. Specifically, for accurate detection, the original MR image is downsampled to ¼ of its original resolution. After inference with the proposed FCN, we upsample the coarse segmentation result to the original resolution. Next, the bounding box of each BM is obtained by a morphological operation on this coarse segmentation. The center of the bounding box is selected as the centroid for cropping the region from the raw MR image, and the region size is chosen to be large enough to cover the entire BM. In the segmentation stage, a second 3D FCN uses the proposed bounding boxes to predict segmentation masks for the BM. An illustration of the framework is shown in Fig. 1. During training, the BMDS net iteratively compares the output masks with the reference masks, so the network learns the relationship between MR image context and the reference masks. Given new MRI data, the trained BMDS net outputs the locations of BM and the corresponding segmentation masks.
Fig. 1.
Network architecture of BMDS net.
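To make the cascade concrete, the following is a minimal sketch of the two-stage inference flow, assuming hypothetical detect_fcn and segment_fcn callables that return voxel-wise probability maps, and a fixed crop size; it is illustrative only and not the authors' released implementation.

```python
import numpy as np
from scipy import ndimage

def bmds_infer(volume, detect_fcn, segment_fcn, crop_size=(96, 96, 64)):
    """Sketch of cascaded detection -> segmentation inference on one 3D volume.

    volume:      preprocessed 3D T1-MPRAGE array (eg, 256 x 256 x 120)
    detect_fcn:  hypothetical coarse 3D FCN applied to the 1/4-resolution volume
    segment_fcn: hypothetical fine 3D FCN applied to cropped full-resolution regions
    crop_size:   fixed region size assumed large enough to cover a whole lesion
    """
    # Stage 1: coarse localization on the downsampled volume.
    low_res = ndimage.zoom(volume, 0.25, order=1)
    coarse = detect_fcn(low_res[None, ..., None])[0, ..., 0] > 0.5

    # Upsample the coarse mask back to the original resolution.
    factors = [o / c for o, c in zip(volume.shape, coarse.shape)]
    coarse = ndimage.zoom(coarse.astype(np.float32), factors, order=0) > 0.5

    # Morphological cleanup, then one bounding box per connected component.
    coarse = ndimage.binary_opening(coarse)
    labels, _ = ndimage.label(coarse)
    mask = np.zeros(volume.shape, dtype=np.uint8)
    for box in ndimage.find_objects(labels):
        # Stage 2: crop a fixed-size region centered on the proposal and
        # run the fine segmentation FCN at full resolution.
        center = [(s.start + s.stop) // 2 for s in box]
        region = tuple(
            slice(max(c - k // 2, 0), min(c + k // 2, d))
            for c, k, d in zip(center, crop_size, volume.shape)
        )
        patch = volume[region]
        pred = segment_fcn(patch[None, ..., None])[0, ..., 0] > 0.5
        mask[region] |= pred.astype(np.uint8)
    return mask
```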
BMDS Net Implementation Details
The training data for BMDS net consist of 3D T1-weighted MR images as input and manual segmentations as the reference masks. The size of each image was 256 × 256 × 120. For preprocessing, all intensity values were first clipped to the range [0, 1000]. Next, we normalized intensities into the range [0, 1] by dividing by the maximum intensity value. Notably, BMDS net does not require bias correction, skull stripping, or registration of heterogeneous images31 for the training data, which makes preprocessing less time-consuming.
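As an illustration of this preprocessing step (clipping to [0, 1000], then scaling by the maximum), a minimal NumPy sketch, assuming the volume has already been loaded as an array:

```python
import numpy as np

def preprocess(volume: np.ndarray) -> np.ndarray:
    """Clip intensities to [0, 1000] and rescale them into [0, 1]."""
    clipped = np.clip(volume.astype(np.float32), 0.0, 1000.0)
    return clipped / max(clipped.max(), 1.0)  # guard against an all-zero volume
```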
BMDS net was trained with the standard stochastic gradient descent algorithm using a stepped learning rate initialized at 0.002 and decayed over the training iterations at a rate of 0.1 until training reached 500 epochs. A momentum of 0.9 was used to balance the contribution of previously accumulated gradients against that of the newly observed image. All network parameters were initialized by Xavier's method.32
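Expressed in TensorFlow/Keras terms, these settings might look like the sketch below; the decay interval (decay_steps) is an assumed placeholder, since the paper specifies only the initial rate, the decay factor, and the number of epochs.

```python
import tensorflow as tf

# Stepped learning rate: start at 0.002 and multiply by 0.1 at each decay step.
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.002,
    decay_steps=10_000,  # assumed interval; not specified in the paper
    decay_rate=0.1,
    staircase=True,
)
optimizer = tf.keras.optimizers.SGD(learning_rate=lr_schedule, momentum=0.9)

# Xavier (Glorot) initialization for a 3D convolution layer.
conv = tf.keras.layers.Conv3D(
    filters=32, kernel_size=3, padding="same",
    kernel_initializer=tf.keras.initializers.GlorotUniform(),
)
```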
BMDS net was implemented based on the widely used TensorFlow33 open-source framework on an NVIDIA Tesla V100 GPU with 32 GB of memory. Our experiments were conducted by splitting the dataset into 4 folds. In each round of the 4-fold cross-validation procedure, 3 data folds were employed as training cases and the remaining fold was used for testing. Ten percent of the training cases were randomly selected for validation.
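A minimal sketch of this splitting scheme (4-fold cross-validation with 10% of each training fold held out for validation), using scikit-learn for illustration; the shuffling and seeds are assumptions, as the exact split procedure is not specified beyond what is stated above.

```python
import numpy as np
from sklearn.model_selection import KFold

patients = np.arange(1201)  # patient indices of Dataset 1
kfold = KFold(n_splits=4, shuffle=True, random_state=0)

for fold, (train_idx, test_idx) in enumerate(kfold.split(patients)):
    # Randomly hold out 10% of the training cases for validation.
    rng = np.random.default_rng(fold)
    train_idx = rng.permutation(train_idx)
    n_val = int(0.1 * len(train_idx))
    val_idx, train_idx = train_idx[:n_val], train_idx[n_val:]
    print(f"fold {fold}: train={len(train_idx)}, val={len(val_idx)}, test={len(test_idx)}")
```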
Statistical Analysis
Statistical analyses were conducted using Python v3.7. Quantitative results are displayed as mean (±SD). Paired samples t-tests were used for pairwise comparisons between BMDS net and its baseline model (ie, a single FCN). To evaluate the performance of automatic segmentation, results were compared at the level of BM volume. The dice ratio34 computes the overlap between the ground truth segmentation Vg and the automatic segmentation Vs:

Dice = 2 |Vg ∩ Vs| / (|Vg| + |Vs|)
Specificity (or the true negative [TN] rate) and sensitivity (or the true positive [TP] rate) measure the fractions of relevant segmented voxels. The definitions of these metrics are given below:

Sensitivity = TP / (TP + FN)

Specificity = TN / (TN + FP)
The TP score reflects the number of tumor voxels correctly identified as tumor voxels. The false positive (FP) score reflects the number of nontumor voxels incorrectly identified as tumor voxels. The TN score reflects the number of background voxels correctly identified as background voxels. Finally, the false negative (FN) score reflects the number of nonbackground voxels incorrectly identified as background voxels.
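For reference, these voxel-wise metrics can be computed directly from binary masks; a minimal NumPy sketch (not the authors' evaluation code) is shown below.

```python
import numpy as np

def segmentation_metrics(vg: np.ndarray, vs: np.ndarray):
    """Dice ratio, sensitivity, and specificity for binary masks Vg (truth) and Vs (prediction)."""
    vg, vs = vg.astype(bool), vs.astype(bool)
    tp = np.sum(vg & vs)    # tumor voxels correctly identified as tumor
    fp = np.sum(~vg & vs)   # nontumor voxels incorrectly identified as tumor
    tn = np.sum(~vg & ~vs)  # background voxels correctly identified as background
    fn = np.sum(vg & ~vs)   # tumor voxels incorrectly identified as background
    dice = 2 * tp / (np.sum(vg) + np.sum(vs))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return dice, sensitivity, specificity
```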
Results
BMDS net required approximately 23 hours to train, whereas labeling a single input image using the trained model required approximately 24 s (4 s for detection and 20 s for segmentation). The median time taken for manual segmentation of one lesion was 2.8 minutes (interquartile range: 2.1, 4.2). For all 1201 cases, the first, second, and third quartiles of the tumor number (range, 1–17) were 1, 2, and 4, respectively. The BMDS net could detect the presence of all BM (ie, the detection accuracy was 100%), ensuring the identification of all target lesions without omission. 3D visualizations of the ground-truth segmentations and our detections are shown in Fig. 2.
Fig. 2.
3D visualization of the ground-truth segmentations and our detections. The first (upper) row shows the ground-truth segmentations, and the second (lower) row shows the detections by BMDS net.
Detection Performance of BMDS Net
In clinical practice, a high positive predictive rate (ideally, 100%) is preferable to a high negative predictive rate because false segmentations can be excluded during planning. The BMDS net detected all BM, providing a detection accuracy of 100%. The minimal diameter of the smallest BM was 2 mm. The BMDS net yielded both a high positive predictive rate and a high negative predictive rate: the TP rate (ie, sensitivity) and TN rate (ie, specificity) were 0.96 ± 0.03 (range, 0.84–0.99) and 0.99 ± 0.0002 (range, 0.99–1.00), respectively. This diagnostic performance demonstrates the ability of BMDS net to assist the stereotactic radiotherapy management of BM.
Segmentation Performance of BMDS Net on Various Volumes
A lethal dose of Gamma Knife radiotherapy can be delivered to small lesions with a diameter ≤5 mm, resulting in excellent local control. Under this condition, interobserver variability in target volume delineation has very limited influence on the results for these very small lesions. In addition, Response Assessment in Neuro-Oncology Brain Metastases (RANO-BM) suggests that target lesions for evaluation be at least 10 mm in diameter, but also notes that smaller target lesions may be selected if the imaging slice thickness is reduced. Recently, Taunk et al35 chose to lower the minimum size limit of measurable disease to 5 mm using an imaging slice thickness of less than 1.5 mm, an approach that is gradually being accepted by more institutions. Therefore, we calculated the dice ratio for lesions with a diameter over 5 mm.
For Dataset 1 (1201 cases), the first, second, and third quartiles of the mean brain metastasis volume (MBMV; the mean volume per metastasis, range 0.07–23.81 cm3) were 0.79 cm3, 2.22 cm3, and 5.42 cm3, respectively, and the mean value was 4.01 cm3. The corresponding quartiles of the total brain metastasis volume (TBMV; the union of all BM volumes in the entire brain from the manual segmentations on the 3D T1 MPRAGE images, range 0.08–71.44 cm3) were 1.47 cm3, 3.86 cm3, and 10.45 cm3, respectively. The dice ratios were 0.81 ± 0.08 (range, 0.62–0.93) and 0.88 ± 0.07 (range, 0.71–0.95) for tumors with MBMV <4.01 cm3 and MBMV ≥4.01 cm3, respectively. Therefore, the proposed method works well on both small (MBMV <4.01 cm3) and large tumors (MBMV ≥4.01 cm3), as shown in Fig. 3A.
Fig. 3.
Comparison between various volumes and size of the enhancing area for each slice (A); comparison among different datasets (B, C); and comparison between BMDS net and the classic FCN (D). BMDS net achieved a better dice ratio among BM with MBMV greater than 4.01 cm3 and a diameter greater than 18 mm (A); a high sensitivity, specificity, and dice ratio were reached using BMDS net among the 3 different datasets (B); the receiver operating characteristic curve of BMDS for 3 different datasets and the classic FCN (C); BMDS net showed higher sensitivity, specificity, and dice ratio than the classic FCN (D).
Segmentation Performance of BMDS Net on Various Area Sizes
The first, second, and third quartiles of the diameter of the enhancing BM area (range, 6–45 mm) were 10.50 mm, 15.75 mm, and 22.50 mm, respectively. The mean diameter of the enhancing BM area was 17.58 mm. The dice ratios were 0.83 ± 0.09 (range, 0.62–0.95) and 0.89 ± 0.07 (range, 0.71–0.95) for diameters ranging from 6 to 18 mm and from 18 to 45 mm, respectively. These results verify the effectiveness of BMDS net for BM with different diameters, as shown in red in Fig. 3A.
Robustness of BMDS Net for Different Datasets
We trained the networks for Dataset 2 (231 cases) and Dataset 3 (220 cases) using the same training settings as those for the 1201 cases. The mean sensitivity, specificity, and dice ratio (with SD) were 0.95 ± 0.02, 0.99 ± 0.0001, and 0.84 ± 0.07 for Dataset 2 and 0.96 ± 0.04, 0.99 ± 0.0001, and 0.83 ± 0.06 for Dataset 3. As shown in Fig. 3B and C, the segmentations generated by our proposed method are highly consistent with the ground-truth segmentations for all 3 datasets, indicating potential feasibility in real clinical applications.
Comparison with the Classical FCN
The FCN is a special case of convolutional networks that is widely used in medical image processing. We compared the segmentation performance of the proposed method with that of the classical FCN (ie, U-net), using the mean sensitivity, specificity, and dice ratio (with SD). For the classical FCN, the 3 indices over the 1201 samples were 0.83 ± 0.08, 0.99 ± 0.0004, and 0.69 ± 0.15, respectively, all lower than those of the proposed method. We also compared our results with those of the classical FCN using a paired samples t-test: our proposed method showed a statistically significant improvement (P < 0.001) over the classical FCN. A summary of the results of the classical FCN and BMDS net is shown in Fig. 3C and D.
Discussion
We developed a cascaded 3D FCN for BM detection and segmentation (BMDS net) using whole 3D MR images that not only improves the accuracy of localizing and segmenting BM but also reduces the labor intensity of SRT planning and follow-up assessment. BMDS net detected all BM, ensuring the identification of all target lesions without omission. Automated segmentations correlated well with manual segmentations, and reliable segmentation results were shown at different stages of BM management.
BMDS Net Could Help Separate BM from High-Grade Gliomas
BM and high-grade gliomas (HGGs) require different treatment strategies, yet the differential diagnosis between the 2 entities is not always effortless, especially between solitary BM and HGGs.36 Because the radiological appearances of solitary intracranial metastases and HGGs are often similar, misdiagnosis or delayed treatment can result, further affecting the prognosis. In clinical practice, the presence of multiple lesions is considered one of the most important imaging characteristics for separating BM from HGGs,37 and identifying very small BM in the brain would improve the confidence of neuroradiologists when diagnosing BM. Therefore, the detection and segmentation of small BM are also important for identifying solitary lesions. Our results reveal that BMDS net can accurately detect very small niduses (Fig. 4), helping to avoid misdiagnosis.
Fig. 4.
Example of segmentation of small enhancing lesions to avoid the misdiagnosis of BM. A 58-year-old female patient without a history of primary cancer complained of dizziness and memory deterioration. An inhomogeneous enhancing lesion was clearly seen in the right thalamus (A). However, a very small nidus was also identified in the left cerebellum (B, C), suggesting a high likelihood of a diagnosis of BM. (D) Manual segmentation. The predicted segmentations (E) showed good concordance with the manual labels. The specificity, sensitivity, and dice ratio were 0.99, 0.97, and 0.89, respectively.
BMDS Net Simplifies Treatment Planning
On the one hand, Gamma Knife radiotherapy, which is the main technology of SRT, is usually a one-day outpatient procedure that requires fast segmentation for a rapid clinical workflow.37 On the other hand, omitting small and even large BM is incidentally seen during the planning of Gamma Knife radiosurgery in clinical practice because of carelessness, exhaustion, and other human factors.7 To solve this clinical problem, an automatic method of detection and segmentation is urgently needed. Our BMDS net has a strong potential to help radiation oncologists accelerate the workflow without omission of lesions. An example of quick segmentation of all the multiple small BM by BMDS net is shown in Fig. 5.
Fig. 5.
Example of segmentation of multiple BM by BMDS net. A 67-year-old male patient with a history of lung cancer had a chief complaint of headache. Contrast-enhanced 3D T1 MPRAGE images showed multiple BM in the brain (A). The BM were separately segmented manually using various colored labels. (C) Manual segmentation in a spatial view. The predicted segmentations (D) showed strong concordance with the manual labels. The specificity, sensitivity, and dice ratio were 0.99, 0.93, and 0.86, respectively.
Although the proposed method still omitted some tumor slices, this should not discourage radiation oncologists, because almost all of the omitted slices are the first and last slices of a BM. Radiation oncologists can compensate for this during planning; in particular, the first and last slices should not always be included because of the partial volume effect.
BMDS Net Facilitates the Follow-up of BM
Because assessment of the treatment response is mostly based on the change in lesion volume, a quick, visualized spatial distribution of all lesions would free neuro-oncologists from repeatedly measuring volumes. Moreover, new lesions may appear while old lesions disappear during follow-up. Therefore, identifying the spatial distribution and volumetric changes is crucial to assess the treatment response of each lesion and to support treatment decision making. BMDS net can display the spatial distribution and volumetric changes of BM accurately and simplify treatment response evaluation. An example of the segmentations at pretreatment and follow-up of one patient is shown in Fig. 6. According to RANO-BM,38 lesions with a diameter greater than 1 cm should be considered target lesions for treatment response assessment. Our results showed that BMDS net produced a very high dice ratio for lesions over 1 cm, supporting its value in treatment response assessment.
Fig. 6.
Example of tumor burden changes between pretreatment (A) and the follow-up (B) of one patient. A 49-year-old male patient with multiple BM was treated by stereotactic radiotherapy. Pretreatment images (A and E) clearly showed solid lesions, which were then manually segmented (B and F). Posttreatment images (C and G) showed that the lesions had decreased; additionally, a new metastatic lesion was discovered (shown in white in G). These lesions were also manually segmented (D and H). Manual pretreatment (I) and posttreatment (J) segmentations showed strong concordance with the predicted pretreatment (K) and posttreatment (L) segmentations by BMDS net. The specificity, sensitivity, and dice ratio of the pretreatment segmentations were 0.99, 0.97, and 0.85, respectively, while those of the posttreatment segmentations were 0.99, 0.97, and 0.80, respectively.
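For illustration, per-lesion volumes and their change between time points can be derived directly from binary segmentation masks and the voxel spacing; the following is a minimal sketch under those assumptions.

```python
import numpy as np

def lesion_volume_cm3(mask: np.ndarray, spacing_mm=(1.0, 1.0, 1.0)) -> float:
    """Volume of a binary lesion mask in cm^3, given the voxel spacing in mm."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0

def volume_change_percent(pre_mask, post_mask, spacing_mm=(1.0, 1.0, 1.0)) -> float:
    """Percent change in lesion volume between pretreatment and follow-up masks."""
    pre = lesion_volume_cm3(pre_mask, spacing_mm)
    post = lesion_volume_cm3(post_mask, spacing_mm)
    return 100.0 * (post - pre) / pre
```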
Vascular endothelial growth factor (VEGF) receptor inhibitors, as a first-line treatment modality for BM,39 have always posed a dilemma for evaluation during the follow-up assessment of BM. These agents can tighten the vascular endothelial space and further decrease the permeability of the disrupted blood–brain barrier, preventing contrast agent leakage into the extravascular extracellular space and resulting in faint or no enhancement on the postcontrast T1-weighted images of patients with BM. Therefore, identifying all the lesions becomes more challenging. Remarkably, BMDS net performed well in detecting and segmenting nonenhancing BM, which is important progress for the clinical management of SRT for BM. An example of BMDS net segmenting nonenhancing lesions after VEGF receptor inhibitor treatment is shown in Supplementary Figure 1.
There are several limitations to this study. First, only 3D T1 MPRAGE images (similar to 3D BRAVO T1-weighted images) were included in the study. Other 3D imaging sequences are also used in clinical practice, for example, 3D fat-saturated T1 sampling perfection with application-optimized contrasts using different flip angle evolutions (3D T1 SPACE); the diagnostic performance of BMDS net on these imaging sequences should be investigated in a future study. Second, in the first stage of BMDS net, we could not afford to miss any detected region of a case; thus, we chose a fixed size for the located nidus region, although an adaptive method would be more flexible. In future work, we will explore more stable region proposal methods based on the predicted region size to reduce the reliance on prior knowledge.
Conclusion
In this paper, we have proposed a cascaded 3D FCN to automatically detect and segment BM, in which the first FCN focuses on fast localization and the second on segmentation. Experimental results across different institutions indicated that our BMDS net could not only detect all the BM, ensuring the identification of all target lesions without omission, but also segment lesions with high accuracy and robustness. The BMDS net shows great potential in helping to separate BM from HGGs, simplifying treatment planning, and facilitating the follow-up of BM. The deep learning–based method could be a useful tool to reduce the labor intensity of clinicians in the clinical management of BM.
Supplementary Material
Acknowledgment
We would like to thank American Journal Experts for English language editing.
Conflict of interest statement. All the authors declare no relevant relationships with any funding agencies or commercial institutions.
Authorship statement. Jie Xue, Bao Wang, Ying Mao, Dengwang Li, and Yingchao Liu conceived and designed the study. Jie Xue performed the experiments. Bao Wang and Yingchao Liu provided data analysis. Jie Xue and Bao Wang wrote the paper. Yang Ming, Xuejun Liu, Zekun Jiang, Chengwei Wang, Xiyu Liu, Ligang Chen, Jianhua Qu, Shangchen Xu, Ying Mao, Yingchao Liu, and Dengwang Li reviewed and edited the manuscript. All authors read and approved the manuscript.
Funding
This work was supported in part by the National Natural Science Foundation of China (nos. 61802234, 61876101, 61971271), Natural Science Foundation of Shandong Province (ZR2019QF007), China Postdoctoral Project (2017M612339), Key Technologies R & D Program of Shandong Province (2017CXCG1209), and the Taishan Scholars Program (tsqn20161070).
References
1. Takei H, Rouah E, Ishida Y. Brain metastasis: clinical characteristics, pathological findings and molecular subtyping for therapeutic implications. Brain Tumor Pathol. 2016;33(1):1–12.
2. Achrol AS, Rennert RC, Anders C, et al. Brain metastases. Nat Rev Dis Primers. 2019;5(1):5.
3. Aoyama H, Shirato H, Tago M, et al. Stereotactic radiosurgery plus whole-brain radiation therapy vs stereotactic radiosurgery alone for treatment of brain metastases: a randomized controlled trial. JAMA. 2006;295(21):2483–2491.
4. Patel TR, Knisely JP, Chiang VL. Management of brain metastases: surgery, radiation, or both? Hematol Oncol Clin North Am. 2012;26(4):933–947.
5. Pinkham MB, Whitfield GA, Brada M. New developments in intracranial stereotactic radiotherapy for metastases. Clin Oncol (R Coll Radiol). 2015;27(5):316–323.
6. Pérez-Ramírez Ú, Arana E, Moratal D. Brain metastases detection on MR by means of three-dimensional tumor-appearance template matching. J Magn Reson Imaging. 2016;44(3):642–652.
7. Higuchi Y, Yamamoto M, Serizawa T, Aiyama H, Sato Y, Barfod BE. Modern management for brain metastasis patients using stereotactic radiosurgery: literature review and the authors' Gamma Knife treatment experiences. Cancer Manag Res. 2018;10:1889–1899.
8. Growcott S, Dembrey T, Patel R, Eaton D, Cameron A. Inter-observer variability in target volume delineations of benign and metastatic brain tumours for stereotactic radiosurgery: results of a national quality assurance programme. Clin Oncol. 2019;32:13–25.
9. Chang ATY, Tan LT, Duke S, Ng WT. Challenges for quality assurance of target volume delineation in clinical trials. Front Oncol. 2017;7:221.
10. Rudie JD, Rauschecker AM, Bryan RN, Davatzikos C, Mohan S. Emerging applications of artificial intelligence in neuro-oncology. Radiology. 2019;290(3):607–618.
11. Pereira S, Pinto A, Alves V, Silva CA. Brain tumor segmentation using convolutional neural networks in MRI images. IEEE Trans Med Imaging. 2016;35(5):1240–1251.
12. Shen H, Zhang J, Zheng W. Efficient symmetry-driven fully convolutional network for multimodal brain tumor segmentation. Paper presented at: 2017 IEEE International Conference on Image Processing (ICIP); 2017.
13. Havaei M, Guizard N, Chapados N, Bengio Y. HeMIS: hetero-modal image segmentation. Paper presented at: International Conference on Medical Image Computing and Computer-Assisted Intervention; 2016.
14. Hu K, Gan Q, Zhang Y, et al. Brain tumor segmentation using multi-cascaded convolutional neural networks and conditional random field. IEEE Access. 2019;7:92615–92629.
15. Xue J, He K, Nie D, Adeli E, Shi Z, Lee S, Zheng Y, Liu X, Li D, Shen D. Cascaded multi-task 3D fully convolutional networks for pancreas segmentation [published online ahead of print December 18, 2019]. IEEE Trans Cybern. 2019. doi:10.1109/TCYB.2019.2955178.
16. Shen H, Zhang J. Fully connected CRF with data-driven prior for multi-class brain tumor segmentation. Paper presented at: 2017 IEEE International Conference on Image Processing (ICIP); 2017.
17. Pereira S, Pinto A, Amorim J, Ribeiro A, Alves V, Silva CA. Adaptive feature recombination and recalibration for semantic segmentation with fully convolutional networks. IEEE Trans Med Imaging. 2019;38:2914–2925.
18. Chen S, Ding C, Liu M. Dual-force convolutional neural networks for accurate brain tumor segmentation. Pattern Recognition. 2019;88:90–100.
19. Wang G, Li W, Zuluaga MA, et al. Interactive medical image segmentation using deep learning with image-specific fine tuning. IEEE Trans Med Imaging. 2018;37(7):1562–1573.
20. Wang G, Zuluaga MA, Li W, et al. DeepIGeoS: a deep interactive geodesic framework for medical image segmentation. IEEE Trans Pattern Anal Mach Intell. 2019;41(7):1559–1572.
21. Farjam R, Parmar HA, Noll DC, Tsien CI, Cao Y. An approach for computer-aided detection of brain metastases in post-Gd T1-W MRI. Magn Reson Imaging. 2012;30(6):824–836.
22. Ambrosini RD, Wang P, O'Dell WG. Computer-aided detection of metastatic brain tumors using automated three-dimensional template matching. J Magn Reson Imaging. 2010;31(1):85–93.
23. Krafft C, Shapoval L, Sobottka SB, Geiger KD, Schackert G, Salzer R. Identification of primary tumors of brain metastases by SIMCA classification of IR spectroscopic images. Biochim Biophys Acta. 2006;1758(7):883–891.
24. Minniti G, Romano A, Scaringi C, Bozzao A. Imaging in neuro-oncology. In: Neurorehabilitation in Neuro-Oncology. Cham, Switzerland: Springer; 2019:53–68.
25. Havaei M, Davy A, Warde-Farley D, et al. Brain tumor segmentation with deep neural networks. Med Image Anal. 2017;35:18–31.
26. Shen D, Wu G, Suk HI. Deep learning in medical image analysis. Annu Rev Biomed Eng. 2017;19:221–248.
27. Urban G, Bendszus M, Hamprecht F, Kleesiek J. Multi-modal brain tumor segmentation using deep convolutional neural networks. MICCAI BraTS (Brain Tumor Segmentation) Challenge Proceedings, Winning Contribution. 2014:31–35.
28. Zikic D, Ioannou Y, Brown M, Criminisi A. Segmentation of brain tumor tissues with convolutional neural networks. Proceedings MICCAI-BRATS. 2014:36–39.
29. Sun W, Wang R. Fully convolutional networks for semantic segmentation of very high resolution remotely sensed images combined with DSM. IEEE Geosci Remote Sens Lett. 2018;15(3):474–478.
30. Nie D, Wang L, Adeli E, Lao C, Lin W, Shen D. 3-D fully convolutional networks for multimodal isointense infant brain image segmentation. IEEE Trans Cybern. 2019;49(3):1123–1136.
31. Wu G, Qi F, Shen D. Learning-based deformable registration of MR brain images. IEEE Trans Med Imaging. 2006;25(9):1145–1157.
32. Glorot X, Bengio Y. Understanding the difficulty of training deep feedforward neural networks. Paper presented at: International Conference on Artificial Intelligence and Statistics; 2010.
33. TensorFlow. https://www.tensorflow.org/.
34. Dice LR. Measures of the amount of ecologic association between species. Ecology. 1945;26(3):297–302.
35. Taunk NK, Oh JH, Shukla-Dave A, et al. Early posttreatment assessment of MRI perfusion biomarkers can predict long-term response of lung cancer brain metastases to stereotactic radiosurgery. Neuro Oncol. 2018;20(4):567–575.
36. Jung BC, Arevalo-Perez J, Lyo JK, et al. Comparison of glioblastomas and brain metastases using dynamic contrast-enhanced perfusion MRI. J Neuroimaging. 2016;26(2):240–246.
37. Barajas RF Jr, Cha S. Metastasis in adult brain tumors. Neuroimaging Clin N Am. 2016;26(4):601–620.
38. Lin NU, Lee EQ, Aoyama H, et al.; Response Assessment in Neuro-Oncology (RANO) group. Response assessment criteria for brain metastases: proposal from the RANO group. Lancet Oncol. 2015;16(6):e270–e278.
39. Roca E, Pozzari M, Vermi W, et al. Outcome of EGFR-mutated adenocarcinoma NSCLC patients with changed phenotype to squamous cell carcinoma after tyrosine kinase inhibitors: a pooled analysis with an additional case. Lung Cancer. 2019;127:12–18.