Abstract
Purpose
To verify the effectiveness of artificial intelligence-assisted volume isotropic simultaneous interleaved bright-/black-blood examination (AI-VISIBLE) for detecting brain metastases.
Methods
This retrospective study was approved by our institutional review board, and the requirement for written informed consent was waived. Forty patients were included: 20 with brain metastases and 20 without. Seven independent observers (three radiology residents and four neuroradiologists) participated in two reading sessions: in the first, brain metastases were detected using VISIBLE alone; in the second, the first-session results were comprehensively re-evaluated with the addition of AI-VISIBLE information. Sensitivity, diagnostic performance, and false positives/case were evaluated. Diagnostic performance was assessed using a figure-of-merit (FOM). Sensitivity and false positives/case were compared using the McNemar test and paired t-tests, respectively.
Results
The McNemar test revealed a significant difference between VISIBLE with and without AI information (P < 0.0001). Significantly higher sensitivity (94.9 ± 1.7% vs. 88.3 ± 5.1%, P = 0.0028) and FOM (0.983 ± 0.009 vs. 0.972 ± 0.013, P = 0.0063) were achieved using VISIBLE with AI information than without. No significant difference was observed in false positives/case with and without AI information (0.23 ± 0.19 vs. 0.18 ± 0.15, P = 0.250). With AI assistance, the results of the radiology residents became comparable to those of the neuroradiologists (sensitivity, FOM: 85.9 ± 3.4% vs. 90.0 ± 5.9%, 0.969 ± 0.016 vs. 0.974 ± 0.012 without AI information; 94.8 ± 1.3% vs. 95.0 ± 2.1%, 0.977 ± 0.010 vs. 0.988 ± 0.005 with AI information, respectively).
Conclusion
AI-VISIBLE improved the sensitivity and performance for diagnosing brain metastases.
Keywords: Artificial intelligence, Brain metastases, Convolutional neural network, Volume isotropic simultaneous interleaved bright-/black-blood examination
Introduction
Imaging techniques are paramount in the diagnosis and management of brain metastases [1, 2]. Whole-brain radiotherapy is commonly used to treat multiple metastases; however, stereotactic radiosurgery may be more appropriate for patients with fewer and smaller lesions [3–5]. Therefore, accurate determination of the number, size, and location of these lesions is crucial for effective treatment [4–8]. MRI with gadolinium-enhanced 3D T1-weighted imaging is the standard imaging method used because of its high sensitivity [9–12]; however, it can be challenging to distinguish enhancing blood vessels from metastatic tumors [13]. Consequently, black-blood imaging was introduced to address this issue [13, 14]. However, this led to the problem of residual slow-flow blood vessel signals resembling brain metastases [13]. Therefore, additional non-vessel suppression imaging is necessary to remove mimicking lesions from residual vessel signals, which increases the imaging time [13].
Volume isotropic simultaneous interleaved bright- and black-blood examination (VISIBLE), proposed as a novel sequence, can increase diagnostic sensitivity and specificity without increasing scan time by allowing simultaneous acquisition with and without vessel suppression [15, 16]. Most studies on brain metastasis detection by artificial intelligence (AI) have used regular post-contrast 3D T1-weighted images as training data [17–24]. In contrast, studies on AI models that, like VISIBLE, learn simultaneously from images with and without blood-vessel suppression are rare. Moreover, AI-VISIBLE, which uses a convolutional neural network (CNN), previously achieved a diagnostic performance similar to that of radiologists in simulation [25]. Thus, AI-VISIBLE can automatically label potential lesions as possible brain metastases and shows promise in simulation; however, its effectiveness in near-real-world clinical practice remains unclear. Therefore, this study aimed to verify the usefulness of AI-VISIBLE for diagnosing brain metastases.
Materials and methods
This retrospective study was approved by our institutional review board, and the requirement for written informed consent was waived.
MRI
The technical details of VISIBLE have been reported previously [15, 16]. The VISIBLE sequence is a 3D T1-weighted turbo field echo sequence that employs a motion-sensitized driven equilibrium (MSDE) pre-pulse to produce black-blood images. Bright-blood images are acquired after the T1 recovery of the blood, when the MSDE pre-pulse no longer affects the blood signal. All MR examinations were performed using a 3.0 T clinical MR scanner (Ingenia 3.0 T or Ingenia Elition X 3.0 T; Philips Healthcare, Best, the Netherlands) with a 15-channel head coil. For each patient, VISIBLE images were obtained sequentially 5 min after intravenous administration of gadoteridol (ProHance; Eisai; 0.2 mmol/kg) for patients with estimated glomerular filtration rate (eGFR) ≥ 60 mL/min/1.73 m2 or gadobutrol (Gadovist; Bayer; 0.1 mmol/kg) for patients with eGFR < 60 mL/min/1.73 m2. The VISIBLE scan duration was 3 min using the following imaging parameters: repetition time/echo time, 6.9/3.2 ms; flip angle, 15°; turbo field echo factor, 20; Compressed SENSE (CS) factor, 7.2; velocity-encoding for MSDE, 6.0 mm/s; field-of-view, 240 × 240 mm2; and voxel size, 1.0 × 1.0 × 1.0 mm3. These images were subsequently reformatted into 2-mm-thick contiguous transverse images.
Patient selection
Since 2011, our institution has consistently employed VISIBLE as a diagnostic tool for patients with clinical signs of brain metastases. Therefore, the data of 264 patients who consulted our institution between December 2020 and March 2021 were retrospectively obtained from the database. Our institutional review board approved the use of these data, which included the patients’ age, sex, primary cancers, and the date of their studies. A lesion was considered metastatic if it met either of the following criteria: (1) increased size in subsequent examinations or (2) decreased size after radiotherapy or chemotherapy, as in previous studies [16, 25]. A neuroradiologist certified by the Japan Radiology Society (K.K., with 18 years of experience in neuroradiology) identified intraparenchymal-enhancing lesions by comparing the VISIBLE images (black- and bright-blood images). Patients with any of the following conditions were excluded: no brain metastasis, no change in lesion size at follow-up, no follow-up examination with VISIBLE, image degradation due to artifacts, and extra-axial tumors or infarctions (Fig. 1). Additionally, patients with ≥ 11 lesions were excluded because those with > 10 lesions typically undergo whole-brain radiation therapy at our institution [3]. In total, 45 metastases were identified in 20 patients. For the observer test, we included 20 patients without metastasis (verified by at least two follow-up scans with the same imaging protocol) who were matched for age and sex to the patients with metastasis. Consequently, 40 patients were selected (Fig. 1).
Fig. 1.
Patient selection flowchart
Artificial intelligence inference
Details of the AI-VISIBLE model have been reported previously [25]. Briefly, we used a CNN model pre-trained on data from that previous study [25]: 50 patients with 165 lesions scanned with VISIBLE without CS (scan time, 5 min). We used the DeepMedic network proposed by Kamnitsas et al. [26], a multi-scale 3D CNN for brain lesion segmentation, to infer data from the 40 patients in our observer study. DeepMedic consists of 11 layers and two parallel convolutional pathways that aim to effectively capture both detailed local and broad contextual information.
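The two-pathway idea can be illustrated with a small sketch: around each voxel, a full-resolution local patch and a wider, subsampled context patch are extracted and would feed the two convolutional pathways. This is an illustrative sketch only, not the authors' implementation; the function name, patch sizes, and subsampling factor are assumptions.

```python
def extract_multiscale_patches(vol, center, local_size=3, context_size=9, factor=3):
    """Illustrative two-pathway input extraction (DeepMedic-style):
    a full-resolution local patch plus a wider patch subsampled by
    `factor`, covering broader context at the same grid size."""
    cx, cy, cz = center
    h = local_size // 2
    # detailed local pathway: contiguous voxels around the center
    local = [[[vol[cx + dx][cy + dy][cz + dz]
               for dz in range(-h, h + 1)]
              for dy in range(-h, h + 1)]
             for dx in range(-h, h + 1)]
    # context pathway: same grid size, but strided over a wider region
    n = context_size // factor
    offs = [factor * d for d in range(-(n // 2), n // 2 + 1)]
    context = [[[vol[cx + dx][cy + dy][cz + dz]
                 for dz in offs]
                for dy in offs]
               for dx in offs]
    return local, context
```

Both patches have the same grid size (here 3 × 3 × 3), so the two pathways can use identical convolutional architectures while seeing different spatial extents.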
Observer test
Seven observers who were blinded to the patients’ clinical and follow-up information conducted the observer tests for all 40 patients. The observers included three radiology residents with 3 years of experience and four board-certified neuroradiologists with 10, 12, 23, and 25 years of experience. Each observer participated in two reading sessions: the first used only VISIBLE images, and the second used VISIBLE images with AI information. Each session was conducted sequentially, as previously described [27]. During the reading session, the observers used a PACS to evaluate the images of the 40 patients. They were instructed to use the synchronized section increment function of the PACS to compare findings between the black- and bright-blood VISIBLE images and the AI suggestions when diagnosing lesions. The observers recorded the results of their readings by electronically placing an arrow at each location where they found a metastatic lesion. Additionally, they reported their level of confidence in the presence of a lesion at each location by assigning a number from 0 (lowest confidence) to 100 (highest confidence).
Evaluations of the AI-VISIBLE and observer test
We evaluated the following four endpoints for both AI-VISIBLE and the readers based on previous studies [16, 25]: (1) sensitivity, (2) diagnostic performance, (3) false positives per case, and (4) reasons for false positives. Sensitivity between the sessions was compared according to lesion size, classified as small (≤ 5 mm in longest diameter) or large (> 5 mm). Diagnostic performance was assessed using a figure-of-merit (FOM) calculated via jackknife free-response receiver operating characteristic (ROC) analysis using method 1 by Chakraborty and Berbaum [28, 29]. The FOM is equivalent to the area under the ROC curve (AUC).
Comparisons of AI-VISIBLE detection and ground truth were performed visually and not via co-registration. AI-VISIBLE detections were considered true positives if they coincided with a part of the ground truth image; otherwise, they were considered false negatives. Detections were considered false positives if they did not overlap with any of the lesions marked in the ground truth images [25].
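The overlap rule above can be expressed as a short sketch, representing each detection and each ground-truth lesion as a set of voxel coordinates. This is purely illustrative (the study's matching was performed visually, not programmatically), and the function name is an assumption.

```python
def match_detections(detections, lesions):
    """Count true positives, false negatives, and false positives by
    voxel overlap: a detection overlapping any part of a ground-truth
    lesion marks that lesion as found; detections that touch no lesion
    are false positives; lesions never touched are false negatives."""
    lesion_hit = [False] * len(lesions)
    fp = 0
    for det in detections:            # each det: set of (x, y, z) voxels
        matched = False
        for i, lesion in enumerate(lesions):
            if det & lesion:          # non-empty intersection = overlap
                lesion_hit[i] = True
                matched = True
        if not matched:
            fp += 1
    tp = sum(lesion_hit)
    fn = len(lesions) - tp
    return tp, fn, fp
```

Per-reader sensitivity then follows as `tp / (tp + fn)`, and false positives per case as the `fp` total divided by the number of patients.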
Statistical analysis was performed using McNemar’s test to compare AI and human sensitivities. The sensitivity between VISIBLE with and without AI information was evaluated using a paired t-test after ensuring that the data were normally distributed using a D’Agostino-Pearson normality test. For all analyses, statistical significance was set at P < 0.05.
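For reference, the exact (binomial) form of McNemar's test depends only on the discordant detections, i.e., lesions found in one condition but not the other. The paper does not state whether the exact or the chi-square form was used, so the following stdlib sketch is an assumption for illustration.

```python
from math import comb

def mcnemar_exact(b, c):
    """Two-sided exact McNemar test on discordant counts:
    b = detections made only in condition A, c = only in condition B.
    Under H0, discordant outcomes are Binomial(n, 0.5)."""
    n = b + c
    if n == 0:
        return 1.0
    k = min(b, c)
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)  # cap at 1 after doubling the one-sided tail
```

For example, with 1 vs. 8 discordant detections the test gives P ≈ 0.039, whereas a balanced split (5 vs. 5) gives P = 1.0.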
Results
In the standalone inference of the AI-VISIBLE model on the observer test cohort (database simulation), the sensitivity and false positives per case were 95.6% and 1.35, respectively. For the observer test, the McNemar test showed a significant difference between VISIBLE with and without AI information (P < 0.0001). Significantly higher sensitivity (residents: 94.8 ± 1.3% vs. 85.9 ± 3.4%, P = 0.020; neuroradiologists: 95.0 ± 2.1% vs. 90.0 ± 5.9%, P = 0.078; all readers: 94.9 ± 1.7% vs. 88.3 ± 5.1%, P = 0.002) and FOM (residents: 0.977 ± 0.010 vs. 0.969 ± 0.016, P = 0.178; neuroradiologists: 0.988 ± 0.005 vs. 0.974 ± 0.012, P = 0.034; all readers: 0.983 ± 0.009 vs. 0.972 ± 0.013, P = 0.006) were achieved with VISIBLE with AI information than without (Fig. 2). No significant difference was observed in false positives per case with and without AI information (0.23 ± 0.19 vs. 0.18 ± 0.15, P = 0.250). With AI, the residents' results became comparable to those of the neuroradiologists (sensitivity, FOM: 85.9 ± 3.4% vs. 90.0 ± 5.9%, 0.969 ± 0.016 vs. 0.974 ± 0.012 without AI information; 94.8 ± 1.3% vs. 95.0 ± 2.1%, 0.977 ± 0.010 vs. 0.988 ± 0.005 with AI information). Table 1 compares sensitivity with and without AI information according to lesion size. AI improved sensitivity especially for small lesions (≤ 5 mm) among both residents and neuroradiologists (without vs. with AI; residents: 60.7% vs. 74.4%, P = 0.0007; neuroradiologists: 68.8% vs. 83.1%, P = 0.0002). Table 2 shows the imaging findings related to false positives for humans and AI. Both humans and AI showed a high number of false positives owing to the presence of blood vessels. However, AI produced approximately six times more false positives per case than the readers (1.35 vs. 0.23), with blood vessels the most frequent cause (24 vs. a mean of 5.7 per reader).
Fig. 2.
Results of sensitivity and figure-of-merit in residents, neuroradiologists, and all readers. Both the sensitivity and figure-of-merit significantly increased with reference to artificial intelligence information
Table 1.
Comparison of the sensitivity with and without AI information based on the lesion size
| | Without AI | With AI | P-value* |
|---|---|---|---|
| Residents | | | |
| Diameter ≤ 5 mm | 60.7% | 74.4% | 0.0007 |
| Diameter > 5 mm | 91.2% | 95.7% | 0.0312 |
| Neuroradiologists | | | |
| Diameter ≤ 5 mm | 68.8% | 83.1% | 0.0002 |
| Diameter > 5 mm | 96.6% | 100% | 0.0001 |
*P-values for the comparison between with and without AI information. AI, artificial intelligence
Table 2.
Imaging findings related to false positives between human and artificial intelligence
| Findings | AI-VISIBLE | Seven readers, without AI | Seven readers, with AI |
|---|---|---|---|
| Blood vessels | 24 | 40 (5.7 ± 4.7) | 49 (7.0 ± 5.7) |
| Noises/artifacts | 17 | 10 (1.4 ± 1.8) | 15 (2.1 ± 2.3) |
| Choroid plexus | 12 | 1 (0.1 ± 0.4) | 1 (0.1 ± 0.4) |
| Pituitary stalk | 1 | 0 (0) | 0 (0) |
| False positives/case | 1.35 | 0.18 ± 0.15 | 0.23 ± 0.19 |
Reader columns show total counts across the seven readers, with per-reader mean ± standard deviation in parentheses. AI, artificial intelligence; VISIBLE, volume isotropic simultaneous interleaved bright- and black-blood examination
Figures 3 and 4 show representative images of patients with brain metastases. Figure 5 shows a representative artifact of a patient without metastases.
Fig. 3.
Images from a 61-year-old female patient with lung cancer. Blood vessels, including the dural sinus, are (a) suppressed in the black-blood image and (b) almost completely restored in the bright-blood image. A small metastatic lesion (yellow box) in the left cerebellar hemisphere is labeled accurately by AI-VISIBLE. Without AI information, only one of the seven readers identified this lesion; with AI information, all readers detected it. AI-VISIBLE: artificial intelligence-assisted volume isotropic simultaneous interleaved bright- and black-blood examination
Fig. 4.
Images from an 80-year-old male patient with lung cancer. A small metastatic lesion (yellow box) in the right hippocampus is labeled accurately by AI-VISIBLE: (a) black-blood image, (b) bright-blood image. Only one radiology resident detected this metastasis without assistance; all radiology residents detected it with the assistance of AI-VISIBLE. AI-VISIBLE: artificial intelligence-assisted volume isotropic simultaneous interleaved bright- and black-blood examination
Fig. 5.
Images from a 60-year-old male patient with lung cancer (no brain metastasis). A small artifact (yellow box) in the left cerebellar hemisphere is mislabeled by AI-VISIBLE: (a) black-blood image, (b) bright-blood image. By referring to the bright-blood image, it is easy to determine that this artifact is a false positive; none of the readers diagnosed it as a true positive in the observer test. AI-VISIBLE: artificial intelligence-assisted volume isotropic simultaneous interleaved bright- and black-blood examination
Discussion
This study evaluated the effectiveness of AI-VISIBLE for detecting brain metastases compared with VISIBLE alone. We found that, with automatic detection of candidate lesions by AI-VISIBLE, brain metastases could be detected with high sensitivity and a low false-positive rate. As a result, radiology residents were able to achieve a diagnostic performance comparable to that of neuroradiologists.
Our AI-VISIBLE model achieved a high sensitivity of 95.6% and a low false-positive rate of 1.35 per case in the observer cohort. These results align with those of a previous study, which reported a sensitivity of 91.7% and a false-positive rate of 1.5 [25]. One likely reason is that AI-VISIBLE was trained on small brain metastases (4 mm) and learned from both black- and bright-blood images simultaneously. The CNN can thus learn from two different image types, including the locations of blood vessels in the brain, in each case, potentially making it more efficient than a CNN trained on conventional contrast-enhanced 3D T1WI. In addition, the absence of motion artifacts, owing to the simultaneous acquisition of postcontrast T1WI with and without blood-vessel suppression using VISIBLE, may have contributed to the promising results of our study. We used the DeepMedic network proposed by Kamnitsas et al. [26], which has been validated on brain lesion segmentation tasks.
Moreover, sensitivity improved more for small lesions than for large ones, and the improvement was significant for both radiology residents and neuroradiologists. With AI information, the residents' sensitivity and diagnostic performance became comparable to those of the neuroradiologists. Detecting small lesions is directly relevant to treatment and management: small lesions may require only a single stereotactic radiotherapy session, and when a lesion is followed up, shortening the interval to the next examination should be considered. The black-blood image of VISIBLE has high sensitivity for lesion detection owing to its high CNR; in particular, it detects small lesions with higher sensitivity than a conventional magnetization-prepared rapid gradient echo sequence [16]. Additionally, the incorporation of AI information helps prevent small lesions from being overlooked. However, each observer-test session took approximately 3 h when performed consecutively; thus, the number of overlooked lesions may have increased, even with the black-blood image, owing to decreased reader concentration caused by fatigue. In contrast, AI does not experience fatigue and is therefore effective in preventing fatigue-related oversights; our results are consistent with those of previous studies [30, 31]. Preventing such oversights through AI is beneficial because radiologist fatigue is inevitable in clinical practice.
In the medical setting, radiologists may be required to verify whether brain metastases have been incorrectly identified by automated detection systems. Indeed, our AI-VISIBLE system generated more false positives per case than humans, mainly owing to blood vessels, noise, or unknown causes. We hypothesize that subtle signal variations invisible to the human eye were mistakenly identified as brain metastases by AI-VISIBLE. However, radiologists can easily exclude these false positives, as reader false positives did not significantly increase with AI information. Bousabarah et al. found that a CNN model trained on lesions smaller than 0.4 mm3 produced more false positives per case than a model trained on larger lesions [21]. Zhang et al., using conventional postcontrast 3D T1WI, found that 88% of false positives occurred at small vessels and 12% at artifacts [23]; hence, they proposed using blood-vessel-suppressed images to decrease false positives. Our CNN model demonstrated a relatively low number of false positives per case (1.35) compared with previous CNN studies (range, 0.008–19.9) [17–24], likely because it used both bright- and black-blood images. Park et al. acquired postcontrast 3D gradient echo imaging twice, as routine and black-blood imaging, and reported high sensitivity and low false positives per case; however, their method increased imaging time [24]. In contrast, our technique, which acquires VISIBLE black- and bright-blood images simultaneously, eliminates the need for motion correction during preprocessing and reduces scan time, simplifying the overall procedure and making it suitable for clinical use. Additionally, the two image sets are precisely matched geometrically.
While we acknowledge that the CNN model may not outperform its trainer in rejecting false positives such as the pituitary stalk, which radiologists would not typically mistake, we believe the CNN model will serve as an aid to radiologists. Although reader false positives did not significantly increase with AI in the observer study, false positives remain a relevant problem in the therapeutic management of patients with cancer: the AI model itself produced substantially more false positives than the readers, which reduces clinical efficiency by increasing the reading time required to eliminate them. Nevertheless, in terms of clinical effectiveness, the increased sensitivity, together with the specificity maintained by human removal of false positives, keeps the system useful for clinical diagnosis.
Nonetheless, our study had certain limitations. First, the sample sizes used for training and testing were relatively small. However, we expect this to have minimal impact because previous studies have shown that DeepMedic can learn efficiently even with a small number of cases [32–34]. Moreover, owing to our stricter diagnostic criteria requiring follow-up examinations, compared with previous studies [17–24], it was challenging to include more patients within the study period. We are continuously collecting patient data and aim to improve our imaging techniques to overcome this limitation in the future. Additionally, we excluded patients with ≥ 11 lesions because it was difficult to manually define true positives in such cases. Nevertheless, we believe this maximum number of lesions per case is clinically acceptable, since stereotactic radiosurgery for brain metastases is generally performed under this condition [3]. Second, we were unable to determine the reasons for the false positives in the unknown category. We speculate that signal inhomogeneities not detectable by humans may have led our model to misdiagnose brain metastases; further investigation is required to understand these results. Third, AI-VISIBLE failed to detect two lesions (2/45), indicating that it is still not perfect. The AI used in this study was trained on 50 patients with 165 lesions, and an improvement in sensitivity can be expected with a larger training set.
Conclusions
AI-VISIBLE improved the sensitivity and diagnostic performance for brain metastases. Radiologists can identify brain metastases more accurately using AI-VISIBLE. This technology may improve overall patient survival by enabling early detection and prompt treatment.
Acknowledgements
This work was supported by the GE Healthcare Pharma Educational Grant, Philips Japan, Ltd., JSPS KAKENHI (Grant Number 20K16791, 21K07645), and the Shin-Nihon Foundation of Advanced Medical Research.
Abbreviations
- AI
Artificial Intelligence
- VISIBLE
Volume Isotropic Simultaneous Interleaved Bright- And Black-Blood Examination
- MSDE
Motion-Sensitized Driven Equilibrium
- CNN
Convolutional Neural Network
Author contributions
KK: Conceptualization; Methodology; Software; Validation; Formal analysis; Investigation; Resources; Data Curation; Writing—Original Draft; Visualization; OT: Validation; Investigation; Resources; Data Curation; Writing—Review & Editing; YK: Validation; Investigation; Resources; Data Curation; Writing—Review & Editing; KY: Validation; Investigation; Writing—Review & Editing; DM: Validation; Investigation; Resources; Data Curation; Writing—Review & Editing; KF: Validation; Investigation; Resources; Data Curation; Writing—Review & Editing; SN: Validation; Investigation; Resources; Data Curation; Writing—Review & Editing; HT: Validation; Investigation; Resources; Data Curation; Writing—Review & Editing; MO: Conceptualization; Methodology; Software; Writing—Review & Editing; KI: Validation; Investigation; Writing—Review & Editing; OT: Validation; Investigation; Writing—Review & Editing; Supervision; Project administration.
Data availability
The datasets and material used during the current study are available from the corresponding author on reasonable request.
Declarations
Ethics approval and consent to participate
All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. This retrospective study was approved by our Institutional Review Board (Number 24–45).
Consent for publication
Not applicable.
Competing interests
The authors declare no conflicts of interest related to the content of this article.
Footnotes
The original online version of this article was revised due to a retrospective Open Access order.
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Change history
12/4/2024
The original online version of this article was revised due to a retrospective Open Access order.
1/21/2025
A Correction to this paper has been published: 10.1007/s00234-024-03522-9
References
- 1. Pope WB (2018) Brain metastases: neuroimaging. Handb Clin Neurol 149:89–112. 10.1016/B978-0-12-811161-1.00007-4
- 2. Posner JB, Chernik NL (1978) Intracranial metastases from systemic cancer. Adv Neurol 19:579–592
- 3. Yamamoto M, Serizawa T, Higuchi Y et al (2017) A multi-institutional prospective observational study of stereotactic radiosurgery for patients with multiple brain metastases (JLGK0901 study update): irradiation-related complications and long-term maintenance of Mini-Mental State Examination scores. Int J Radiat Oncol Biol Phys 99:31–40. 10.1016/j.ijrobp.2017.04.037
- 4. Lippitz B, Lindquist C, Paddick I et al (2014) Stereotactic radiosurgery in the treatment of brain metastases: the current evidence. Cancer Treat Rev 40:48–59. 10.1016/j.ctrv.2013.05.002
- 5. Monaco EA 3rd, Faraji AH, Berkowitz O et al (2013) Leukoencephalopathy after whole-brain radiation therapy plus radiosurgery versus radiosurgery alone for metastatic lung cancer. Cancer 119:226–232. 10.1002/cncr.27504
- 6. Linskey ME, Andrews DW, Asher AL et al (2010) The role of stereotactic radiosurgery in the management of patients with newly diagnosed brain metastases: a systematic review and evidence-based clinical practice guideline. J Neurooncol 96:45–68. 10.1007/s11060-009-0073-4
- 7. Serizawa T, Hirai T, Nagano O et al (2010) Gamma knife surgery for 1–10 brain metastases without prophylactic whole-brain radiation therapy: analysis of cases meeting the Japanese prospective multi-institute study (JLGK0901) inclusion criteria. J Neurooncol 98:163–167. 10.1007/s11060-010-0169-x
- 8. Hunter GK, Suh JH, Reuther AM et al (2012) Treatment of five or more brain metastases with stereotactic radiosurgery. Int J Radiat Oncol Biol Phys 83:1394–1398. 10.1016/j.ijrobp.2011.10.026
- 9. Russell EJ, Geremia GK, Johnson CE et al (1987) Multiple cerebral metastases: detectability with Gd-DTPA-enhanced MR imaging. Radiology 165:609–617. 10.1148/radiology.165.3.3317495
- 10. Sze G, Milano E, Johnson C, Heier L (1990) Detection of brain metastases: comparison of contrast-enhanced MR with unenhanced MR and enhanced CT. AJNR Am J Neuroradiol 11:785–791
- 11. Davis PC, Hudgins PA, Peterman SB et al (1991) Diagnosis of cerebral metastases: double-dose delayed CT vs contrast-enhanced MR imaging. AJNR Am J Neuroradiol 12:293–300
- 12. Akeson P, Larsson EM, Kristoffersen DT (1995) Brain metastases: comparison of gadodiamide injection-enhanced MR imaging at standard and high dose, contrast-enhanced CT and non-contrast-enhanced MR imaging. Acta Radiol 36:300–306
- 13. Nagao E, Yoshiura T, Hiwatashi A et al (2011) 3D turbo spin-echo sequence with motion-sensitized driven-equilibrium preparation for detection of brain metastases on 3T MR imaging. AJNR Am J Neuroradiol 32:664–670. 10.3174/ajnr.A2343
- 14. Obara M, Honda M, Imai Y et al (2009) Assessment of motion sensitized driven equilibrium (MSDE) improvement for whole brain application. Proc ISMRM, Honolulu, p 4547
- 15. Yoneyama M, Obara M, Takahara T et al (2014) Volume isotropic simultaneous interleaved black- and bright-blood imaging: a novel sequence for contrast-enhanced screening of brain metastasis. Magn Reson Med Sci 13:277–284. 10.2463/mrms.2013-0065
- 16. Kikuchi K, Hiwatashi A, Togao O et al (2015) 3D MR sequence capable of simultaneous image acquisitions with and without blood vessel suppression: utility in diagnosing brain metastases. Eur Radiol 25:901–910. 10.1007/s00330-014-3496-z
- 17. Losch M (2015) Detection and segmentation of brain metastases with deep convolutional networks. Degree project, KTH Royal Institute of Technology
- 18. Charron O, Lallement A, Jarnet D et al (2018) Automatic detection and segmentation of brain metastases on multimodal MR images with a deep convolutional neural network. Comput Biol Med 95:43–54. 10.1016/j.compbiomed.2018.02.004
- 19. Grøvik E, Yi D, Iv M et al (2020) Deep learning enables automatic detection and segmentation of brain metastases on multisequence MRI. J Magn Reson Imaging 51:175–182. 10.1002/jmri.26766
- 20. Dikici E, Ryu JL, Demirer M et al (2020) Automated brain metastases detection framework for T1-weighted contrast-enhanced 3D MRI. IEEE J Biomed Health Inf 24:2883–2893. 10.1109/JBHI.2020.2982103
- 21. Bousabarah K, Ruge M, Brand J-S et al (2020) Deep convolutional neural networks for automated segmentation of brain metastases trained on clinical data. Radiat Oncol 15:87. 10.1186/s13014-020-01514-6
- 22. Zhou Z, Sanders JW, Johnson JM et al (2020) Computer-aided detection of brain metastases in T1-weighted MRI for stereotactic radiosurgery using deep learning single-shot detectors. Radiology 295:407–415. 10.1148/radiol.2020191479
- 23. Zhang M, Young GS, Chen H (2020) Deep-learning detection of cancer metastases to the brain on MRI. J Magn Reson Imaging 52:1227–1236. 10.1002/jmri.27129
- 24. Park YW, Jun Y, Lee Y et al (2021) Robust performance of deep learning for automatic detection and segmentation of brain metastases using three-dimensional black-blood and three-dimensional gradient echo imaging. Eur Radiol 31:6686–6695. 10.1007/s00330-021-07783-3
- 25. Kikuchi Y, Togao O, Kikuchi K et al (2022) A deep convolutional neural network-based automatic detection of brain metastases with and without blood vessel suppression. Eur Radiol 32:2998–3005. 10.1007/s00330-021-08427-2
- 26. Kamnitsas K, Ledig C, Newcombe VFJ et al (2017) Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Med Image Anal 36:61–78. 10.1016/j.media.2016.10.004
- 27. Kikuchi K, Hiwatashi A, Togao O et al (2018) Arterial spin-labeling is useful for the diagnosis of residual or recurrent meningiomas. Eur Radiol 28:4334–4342. 10.1007/s00330-018-5404-4
- 28. Chakraborty DP, Berbaum KS (2004) Observer studies involving detection and localization: modeling, analysis, and validation. Med Phys 31:2313–2330. 10.1118/1.1769352
- 29. Chakraborty DP (2006) Analysis of location specific observer performance data: validated extensions of the jackknife free-response (JAFROC) method. Acad Radiol 13:1187–1193. 10.1016/j.acra.2006.06.016
- 30. Lakhani P, Sundaram B (2017) Deep learning at chest radiography: automated classification of pulmonary tuberculosis by using convolutional neural networks. Radiology 284:574–582. 10.1148/radiol.2017162326
- 31. Kawauchi K, Furuya S, Hirata K et al (2020) A convolutional neural network-based system to classify patients using FDG PET/CT examinations. BMC Cancer 20:227. 10.1186/s12885-020-6694-x
- 32. Savenije MHF, Maspero M, Sikkes GG et al (2020) Clinical implementation of MRI-based organs-at-risk auto-segmentation with convolutional networks for prostate radiotherapy. Radiat Oncol 15:104. 10.1186/s13014-020-01528-0
- 33. Battalapalli D, Rao BVVSNP, Yogeeswari P et al (2022) An optimal brain tumor segmentation algorithm for clinical MRI dataset with low resolution and non-contiguous slices. BMC Med Imaging 22:89. 10.1186/s12880-022-00812-7
- 34. Patel TR, Pinter N, Sarayi SMMJ et al (2021) Automated cerebral vessel segmentation of magnetic resonance imaging in patients with intracranial atherosclerotic diseases. Conf Proc IEEE Eng Med Biol Soc 3920–3923. 10.1109/EMBC46164.2021.9630626