Simple Summary
Machine learning in radiology of the central nervous system has seen many interesting publications in the past few years. Since the focus has largely been on malignant tumors such as brain metastases and high-grade gliomas, we conducted a systematic review on benign tumors to summarize what has been published and where there might be gaps in the research. We found several studies that report good results, but the descriptions of methodologies could be improved to enable better comparisons and assessment of biases.
Abstract
Objectives: To summarize the available literature on using machine learning (ML) for the detection and segmentation of benign tumors of the central nervous system (CNS) and to assess the adherence of published ML/diagnostic accuracy studies to best practice. Methods: The MEDLINE database was searched for the use of ML in patients with any benign tumor of the CNS, and the records were screened according to PRISMA guidelines. Results: Eleven retrospective studies focusing on meningioma (n = 4), vestibular schwannoma (n = 4), pituitary adenoma (n = 2) and spinal schwannoma (n = 1) were included. The majority of studies attempted segmentation. Links to repositories containing code were provided in two manuscripts, and no manuscripts shared imaging data. Only one study used an external test set, which raises the question as to whether some of the good performances that have been reported were caused by overfitting and may not generalize to data from other institutions. Conclusions: Using ML for detecting and segmenting benign brain tumors is still in its infancy. Stronger adherence to ML best practices could facilitate easier comparisons between studies and contribute to the development of models that are more likely to one day be used in clinical practice.
Keywords: machine learning, deep learning, benign brain tumor, vestibular schwannoma, meningioma, pituitary adenoma
1. Introduction
Whilst an increase in computational power and the development of more user-friendly software libraries have accelerated the adoption of machine learning (ML) techniques in both neuro-radiology and neuro-oncology, much of the research that is being published focuses on malignant tumor entities, such as high-grade gliomas or brain metastases [1].
A possible explanation for this phenomenon lies in the availability of data that are required to train, validate and test ML models. While almost any hospital will have a sufficient number of cases for epidemiologically significant entities such as brain metastases, this is not the case for central nervous system (CNS) tumors with lower incidences, such as many benign brain tumors [2].
In addition, most publicly available imaging datasets comprise malignant entities. The popular Brain Tumor Segmentation (BraTS) Challenge dataset consists exclusively of gliomas, which also applies to most of the brain datasets that are available as part of The Cancer Imaging Archive (TCIA) [3,4].
Despite these obstacles, there have been publications investigating the use of ML for benign CNS tumors [5,6]. This review will therefore summarize the research that has been conducted on this topic in a systematic fashion and assess the quality of the studies that have used ML for tumor detection and segmentation, as has been done previously for malignant CNS tumors [7,8]. The goal is to create a point of reference that other researchers can use to identify gaps in the research that are worthy of further investigation and to identify possible shared issues regarding methodologies or their descriptions so that they can be addressed by future publications.
While there are numerous potential benefits to having machine learning techniques take over parts of the radiology workflow or serve as automated second opinions, this requires common reporting standards to identify approaches that are worth pursuing further, with the goal of possibly translating them into routine clinical practice someday.
2. Methods
2.1. Literature Search
The review was conducted according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [9]. Studies published in English after 2000 that used any kind of machine learning technique for the detection or segmentation of benign tumors of the CNS were included. Studies using semi-automatic segmentation requiring manual user input prior to creating the segmentation were not included. Studies using segmentation only as a means to predict clinical or pathologic features frequently provide little detail on the segmentation methodology and were therefore excluded as well. No limits regarding the size of the patient collective or the length of follow-up were applied.
The Medical Literature Analysis and Retrieval System Online (MEDLINE) database was searched on 14 June 2021 via the PubMed interface. The query was designed to include all studies that contained one or more words from each of two groups, one comprising words that indicate the use of an ML technique (automated, computer aided, computer-aided, CAD, radiomic, texture analysis, deep learning, machine learning, neural network, artificial intelligence) and the other comprising words associated with benign brain tumors (meningioma, meningiomas, schwannoma, schwannomas, craniopharyngioma, craniopharyngiomas, ganglioglioma, gangliogliomas, glomus, pineocytoma, pineocytomas, pilocytic, pituitary, benign brain tumor, benign brain tumors).
The complete search query that was used was therefore:
“((automated[title]) OR (computer aided[title]) OR (computer-aided[title]) OR (CAD[title]) OR (radiomic[title]) OR (radiomics[title]) OR (texture analysis[title]) OR (texture analyses[title]) OR (textural analysis[title]) OR (textural analyses[title]) OR (deep learning[title]) OR (machine learning[title]) OR (ML[title]) OR (neural network[title]) OR (NN[title]) OR (artificial intelligence[title]) OR (AI[title])) AND ((meningioma[title]) OR (meningiomas[title]) OR (schwannoma[title]) OR (schwannomas[title]) OR (craniopharyngioma[title]) OR (craniopharyngiomas[title]) OR (ganglioglioma[title]) OR (gangliogliomas[title]) OR (glomus[title]) OR (glomera[title]) OR (pineocytoma[title]) OR (pineocytomas[title]) OR (pilocytic[title]) OR (pituitary[title]) OR (benign brain tumor[title]) OR (benign brain tumors[title]) OR (benign brain tumour[title]) OR (benign brain tumours[title])) AND (“2000/01/01”[Date-Create]: “2021/06/14”[Date-Create])”
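For illustration only, a query of this form could also be executed programmatically against MEDLINE via the NCBI E-utilities, for example with the Biopython Entrez module. The abbreviated query string and e-mail address below are placeholders and do not reflect the exact query or tooling used for this review, which relied on the PubMed web interface.

```python
# Illustrative sketch: running a PubMed title query of the same form via
# Biopython's Entrez wrapper around the NCBI E-utilities. The query string is
# abbreviated and the e-mail address is a placeholder.
from Bio import Entrez

Entrez.email = "researcher@example.org"  # NCBI requires a contact address

query = (
    "((machine learning[title]) OR (deep learning[title]) OR (radiomics[title])) "
    "AND ((meningioma[title]) OR (schwannoma[title]) OR (pituitary[title])) "
    'AND ("2000/01/01"[Date-Create] : "2021/06/14"[Date-Create])'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=200)
record = Entrez.read(handle)
handle.close()

print(record["Count"])        # number of matching records
print(record["IdList"][:5])   # first few PubMed identifiers
```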
The review was not registered, nor was a protocol published beforehand.
After exclusion of duplicates, the titles and abstracts were screened, and only relevant publications proceeded to full-text screening. The decision as to whether a study met the inclusion criteria of the review was performed by two authors (P.W. and C.K.) without the use of automated tools. A third author (C.S.) acted as a referee in case of a potential disagreement between the two authors responsible for screening. All articles that did not focus on the use of ML for detection or segmentation in patients with benign brain tumors were excluded.
2.2. Data Extraction
Two authors (P.W. and C.S.) independently extracted data and discussed any discrepancies. Data were extracted with regard to:
Study parameters: authors, title, year, design, number of patients in training/test set, ground truth, inter-/intrarater variability, task, conflict of interest, sources of funding.
Clinical parameters: tumor entity, tumor volume, treatment of tumors prior to imaging.
Imaging parameters: MRI machine, field strength, slice thickness, sequences.
ML parameters: algorithm, dimensionality, training duration and hardware, libraries/frameworks/packages, data augmentation, performance measures, explainability/interpretability features, code/data availability.
3. Results
The inclusion workflow is depicted in Figure 1. The query returned 110 publications and no duplicates. When screening the records, 99 articles were excluded. A complete list of the excluded articles and the respective reasons for exclusion is provided in Supplementary Table S1. Thirty-three articles were excluded because they only predicted pathological features, e.g., grade (n = 16), or differentiated between tumor entities (n = 8) [10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42]. Thirty-four articles were excluded because they only predicted clinical parameters, e.g., tumor consistency (n = 7), response/treatment outcome (n = 12) or brain/bone invasion (n = 4) [43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75]. Twelve articles did not focus on ML techniques [76,77,78,79,80,81,82,83,84,85,86,87]. Eight articles were not original reports but reviews or editorials [88,89,90,91,92,93,94,95]. Three articles used semi-automatic segmentation techniques [96,97,98]. Three articles dealt with the application of ML techniques to brain tumors in dogs [99,100,101]. Six articles were excluded for other reasons, such as using ML for image reconstruction (n = 1), analyzing tissue (n = 3), non-CNS tumor entities (n = 1) or predicting drivers of costs (n = 1) [102,103,104,105,106].
All eleven articles that underwent full-text screening were included, and the extracted characteristics of all articles are provided in Supplementary Table S2. All studies were conducted retrospectively and published between 2018 and 2021. The tumor entities investigated were meningioma (n = 4), vestibular schwannoma (n = 4), pituitary adenoma (n = 2) and spinal schwannoma (n = 1). Between 50 and 1876 patients were used for developing or testing the models in the respective studies. Notably, only the study by Qian et al. investigated tumor detection using healthy controls or controls with other cerebral neoplasms [107]. Other studies claiming to perform tumor detection did so via tumor segmentation, so these models never had to consider the possibility that no tumor was present.
3.1. Disclosures and Declarations
The authors of two publications uploaded code to a public repository that was referenced in the manuscript [108,109]. The remaining publications did not mention code availability. No data were shared, but two articles mentioned the option to obtain data from the corresponding author upon request [109,110]. Employment by Philips was the most frequent conflict of interest at the study level and was declared in two publications [111,112]. A patent application related to the published work was present in one publication. Six publications explicitly stated that the authors had no conflicts of interest. The Ministry of Science and Technology of Taiwan was the most frequent source of funding (n = 2), and three publications stated no additional funding [108,113].
3.2. Imaging
All studies used magnetic resonance imaging (MRI) with field strengths between one and three tesla. Whilst six studies used homogeneous datasets from a single device, the remaining studies used datasets consisting of images from multiple devices. Where reported, slice thickness was between one and six millimeters. All studies used a T1-weighted sequence, presumably with contrast enhancement, though this was not explicitly stated in all publications. T2-weighted sequences were used in seven studies, and two specified the use of a T2 FLAIR sequence.
3.3. Ground Truth
All manuscripts stated that at least two people worked on tumor segmentation. Additional information was frequently lacking—for example, whether every image was independently annotated by two people and whether the annotators had access to clinical information. Similarly, only four manuscripts reported interrater variability. The most frequently used metric for interrater variability, the Dice coefficient, was between 0.89 and 0.94 [5,112,114,115].
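For reference, the Dice coefficient between two binary segmentation masks is twice the overlap divided by the sum of the mask sizes. The following minimal NumPy sketch, using hypothetical annotator masks, illustrates how interrater agreement in this range can be computed.

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice overlap between two binary masks: 2 * |A intersect B| / (|A| + |B|)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Hypothetical masks from two annotators segmenting the same volume
rater_1 = np.zeros((64, 64, 32), dtype=bool); rater_1[20:40, 20:40, 10:20] = True
rater_2 = np.zeros((64, 64, 32), dtype=bool); rater_2[22:42, 20:40, 10:20] = True
print(f"Inter-rater Dice: {dice_coefficient(rater_1, rater_2):.3f}")  # 0.900
```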
3.4. Modeling
All publications used convolutional neural networks for modeling. Eight publications used a designated test set that was unseen by the model during training, one of which can be considered an external test set, as the images were provided by an institution other than the one that supplied the training set. The publications that did not use a separate test set used cross-validation or used the contours of one annotator for training and those of the other for testing.
Regarding libraries, five publications mentioned the use of TensorFlow and one the use of PyTorch. The remaining publications did not reference libraries in the manuscript. The use of data augmentation was mentioned in three publications. The implementation of explainability features was not discussed, but one publication analyzed the performance of the classifier depending on tumor volume [5].
3.5. Meningioma
All publications on meningiomas (n = 4) used meningiomas from different intracranial locations.
Laukamp et al. published two articles on meningioma segmentation [111,112]. For the first publication, they trained a network based on the DeepMedic architecture with contrast-enhanced T1 (T1c) and T2FLAIR images from glioblastoma cases and used it to segment meningiomas, which resulted in a Dice coefficient of 0.78. In their second publication, they used grade I/II meningiomas for training as well and tested on the same cohort, this time achieving a Dice coefficient of 0.91 for the contrast-enhancing tumor.
Zhang et al. used T1c slices from 1876 patients to train a model to segment meningiomas and predict the tumor grade by using the segmentation [109]. To describe the performance of their segmentation, they used a less established metric called “tumor accuracy”, defined as the percentage of correctly predicted pixels in the tumor, which was 0.814.
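Under this definition, “tumor accuracy” corresponds to the fraction of ground-truth tumor pixels that the model labels correctly, i.e., a voxel-wise sensitivity rather than an overlap measure such as the Dice coefficient. The short sketch below reflects this interpretation and is not the authors' implementation.

```python
import numpy as np

def tumor_accuracy(pred: np.ndarray, truth: np.ndarray) -> float:
    """Fraction of ground-truth tumor pixels predicted as tumor (voxel-wise
    sensitivity). This mirrors the 'tumor accuracy' definition as interpreted
    here; unlike the Dice coefficient, it ignores false-positive pixels."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    return float(np.logical_and(pred, truth).sum()) / float(truth.sum())
```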
Bouget et al. used T1c MRIs of 698 patients with various architectures, the best of which achieved a Dice coefficient of 0.73 for meningioma segmentation. Notably, the authors used one fold of the cross-validation for testing rather than a separate set with previously unseen data [5].
3.6. Schwannoma
Shapey, Wang et al. used T1c and T2 images from 243 patients to train a model to segment vestibular schwannomas. The median tumor volume in the test set was 1.89 mL, and the best Dice coefficient was 0.937 [115].
George-Jones et al. analyzed a cohort of 65 patients with a median tumor volume of only 0.28 mL [114]. Unlike other publications, the authors did not report Dice coefficients but instead analyzed how well the model was able to detect growth compared to the manual segmentations that were used as the ground truth. The model achieved an area under the receiver operating characteristic curve (ROC-AUC) of 0.822.
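As an illustration of this type of evaluation (not the authors' code), growth detection can be framed as scoring model-derived volume changes against ground-truth growth labels, for example with scikit-learn; the values below are hypothetical.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical data: growth labels derived from manual segmentations
# (1 = tumor grew between scans) and relative volume changes derived from
# automatic segmentations of the same scan pairs.
grew = np.array([1, 0, 1, 0, 0, 1, 0, 1])
predicted_volume_change = np.array([0.22, 0.03, 0.15, -0.05, 0.08, 0.30, 0.01, 0.12])

print(f"ROC-AUC for growth detection: {roc_auc_score(grew, predicted_volume_change):.3f}")
```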
Lee et al. published two manuscripts on vestibular schwannoma segmentation and used the segmentations to analyze changes in volume [108,113]. The authors achieved a Dice coefficient of 0.90 when taking advantage of both T1c- and T2-weighted imaging data.
Ito et al. used a dataset of 50 patients for bounding-box segmentations of spinal schwannomas and used one cross-validation fold for testing instead of a fully independent test set [6]. The authors reported an accuracy of 0.935, though the actual ground truth was not explicitly stated.
3.7. Pituitary Adenoma
Qian et al. published a study on pituitary adenoma detection, and it is the only study included in this manuscript that used a control group of patients without tumors [107]. The reported overall accuracy was 0.91.
Wang, Zhang et al. used a collective of 163 patients to train and test automated segmentation for pituitary adenomas [110]. The dice coefficient for all slices of the tumors was 0.898.
Highlighted study, imaging and machine learning parameters can be found in Table 1, Table 2 and Table 3, respectively.
Table 1.
Author | Year | Tumor Entity | Average Tumor Volume | No. of Patients |
---|---|---|---|---|
Wang, Zhang et al. [110] | 2021 | Pituitary adenoma | 7.9 mL | 163 |
Bouget et al. [5] | 2021 | Meningioma | 29.8 mL (surgically resected); 8.47 mL (untreated) | 698 |
Lee et al. [108] | 2021 | Vestibular schwannoma | 2.05 mL | 381 |
Ito et al. [6] | 2020 | Spinal schwannoma | Not mentioned | 50 |
Zhang et al. [109] | 2020 | Meningioma | Not mentioned | 1876 |
Lee et al. [113] | 2020 | Vestibular schwannoma | Not mentioned | 516 |
George-Jones et al. [114] | 2020 | Vestibular schwannoma | 0.28 mL | 65 |
Qian et al. [107] | 2020 | Pituitary adenoma | Not mentioned | 149 |
Laukamp et al. [112] | 2020 | Meningioma | ∼31 mL | 126 |
Shapey, Wang et al. [115] | 2019 | Vestibular schwannoma | 1.89 mL (test set) | 243 |
Laukamp et al. [111] | 2018 | Meningioma | 30.9 mL | 56 (test set) |
Table 2.
Author | Field Strength [T] | Slice Thickness [mm] | MRI Sequence Used for Task |
---|---|---|---|
Wang, Zhang et al. [110] | 3 | 3 | T1c |
Bouget et al. [5] | 1.5/3 | heterogeneous | T1c |
Lee et al. [108] | 1.5 | 3 | T1c; T2 |
Ito et al. [6] | 1.5/3 | heterogeneous | T1; T2 |
Zhang et al. [109] | 3 | 5 | T1c |
Lee et al. [113] | 1.5 | 3 | T1; T1c; T2 |
George-Jones et al. [114] | 1.5/3 | heterogeneous (median 3.3) | T1c |
Qian et al. [107] | 1.5 | 3 | T1; T2 |
Laukamp et al. [112] | 1–3 | heterogeneous | T1c; T2FLAIR |
Shapey, Wang et al. [115] | 1.5 | 1.5 | T1c; T2 |
Laukamp et al. [111] | 1–3 | 1–6 | T1c; T2FLAIR |
Table 3.
Author | Detection/Segmentation Algorithm | Data Augmentation | Performance Measures | Explainability/Interpretability | Code Availability | Data Availability |
---|---|---|---|---|---|---|
Wang, Zhang et al. [110] | Convolutional Neural Network (Gated-Shaped U-Net) | Not mentioned | Dice coefficient: 0.898 | Not mentioned | Not mentioned | From authors upon request |
Bouget et al. [5] | Convolutional Neural Network (3D U-Net, PLS-Net) | Horizontal and vertical flipping, random rotation in the range [−20°, 20°], translation up to 10% of the axis dimension, zoom between [80, 120]%, and perspective transform with a scale within [0.0, 0.1] | Best dice coefficients: 0.714 (U-Net), 0.732 (PLS-Net) | Authors analyzed the influence of tumor volume on the performance of the classifiers | Not mentioned | Not mentioned |
Lee et al. [108] | Convolutional Neural Network (Dual Pathway U-Net Model) | Not mentioned | Dice coefficient: 0.9 | Not mentioned | https://github.com/KenLee1996/Dual-pathway-CNN-for-VS-segmentation (accessed on 25 April 2022) | Claims that all data is in the supplement but that appears not to be the case |
Ito et al. [6] | Convolutional Neural Network (YOLO v3) | Random transformations such as flipping and scaling | Accuracy: 0.935 | Not mentioned | Not mentioned | Not mentioned |
Zhang et al. [109] | Convolutional Neural Network (Pyramid Scene Parsing Network) | Not mentioned | Tumor accuracy: 0.814 | Not mentioned | https://github.com/zhangkai62035/Meningioma_demo (accessed on 25 April 2022) | From authors upon request |
Lee et al. [113] | Convolutional Neural Network (Dual Pathway U-Net Model) | Not mentioned | Dice coefficient: 0.9 | Not mentioned | Not mentioned | Not mentioned |
George-Jones et al. [114] | Convolutional Neural Network (U-Net) | Not mentioned | ROC-AUC: 0.822 (for agreement on whether a tumor had grown between scans) | Not mentioned | Not mentioned | Not mentioned |
Qian et al. [107] | Convolutional Neural Networks (one per combination of perspective/sequence) | Zooming (0–40%), rotating (−15° to +15°), and shear mapping (0–40%) | Accuracy: 0.91 | Not mentioned | Not mentioned | Not mentioned |
Laukamp et al. [112] | Convolutional Neural Network (DeepMedic) | Not mentioned | Dice coefficient: 0.91 | Not mentioned | Not mentioned; DeepMedic is a public repository | Not mentioned |
Shapey, Wang et al. [115] | Convolutional Neural Network (U-Net) | Not mentioned | Dice coefficient: 0.937 | Not mentioned | Not mentioned | Not mentioned |
Laukamp et al. [111] | Convolutional Neural Network (DeepMedic) | Not mentioned | Dice coefficient: 0.78 | Not mentioned | Not mentioned; DeepMedic is a public repository | Not mentioned |
4. Discussion
The results of our review indicate that machine learning for the segmentation, and even more so for the detection, of benign brain tumors is still in its infancy but is gaining traction.
All included studies were published after 2018 and used deep learning, which is in line with the finding by Cho and colleagues, who found a shift from classical ML to deep learning for brain metastasis detection after 2018 [7].
Guidelines and checklists for diagnostic accuracy and artificial intelligence studies have been available for some time [8,116]. The fact that all studies mentioned two physicians being involved with creating the ground truth can be considered evidence that the authors were aware of at least some of these requirements and best practices. However, many studies were vague about other items of these guidelines, or did not mention them at all, even though they would apply, which indicates that the guidelines are only enforced to a limited degree when a manuscript is reviewed prior to publication.
Many questions regarding the exact methodology could be answered by providing the code that was used for the project, but a public repository was only referenced in two of the included studies. Data sharing is even rarer, though this is somewhat understandable given the sensitive nature of complete cranial MRI datasets that could be used for face recognition if the resolution is sufficiently high [117].
As an example, the study by Qian et al. mentions that the data were augmented and then divided “into training or testing set in a ratio of 8:2 for further analysis”. This suggests that slices from the same patient, and perhaps even different augmentations of the exact same slice, could have been present in both the training and test sets. If this were the case, the performance of the model would likely be attributable to overfitting and unlikely to be sustained on previously unseen data [107]. If code had been provided, this could have been easily clarified by any technical reader or reviewer.
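A simple safeguard against this type of leakage is to split the data at the patient level before slicing or augmenting, for example with scikit-learn's GroupShuffleSplit; the identifiers below are hypothetical.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical example: one entry per 2D slice, with the patient it came from.
slice_ids = np.arange(1000)                  # indices of individual slices
patient_ids = np.repeat(np.arange(100), 10)  # 100 patients, 10 slices each

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, test_idx = next(splitter.split(slice_ids, groups=patient_ids))

# No patient contributes slices to both sets; augmentation should then be
# applied to the training slices only, after the split.
assert set(patient_ids[train_idx]).isdisjoint(patient_ids[test_idx])
```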
In general, the presence of overfitting cannot really be assessed for the majority of publications, as external test sets were almost never used. The logistics involved in obtaining data for fairly rare tumor entities from other institutions are challenging, but strategies to mitigate this could be employed. If a hospital has more than one MRI machine, one might, for example, use data from one machine for testing and data from the other machines for training.
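As a sketch of such a scanner-based hold-out (assuming a scanner identifier is available for each exam), scikit-learn's LeaveOneGroupOut can be used with the MRI machine as the grouping variable; the identifiers below are hypothetical.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

# Hypothetical metadata: which of three MRI machines each exam came from.
exam_ids = np.arange(300)
scanner_ids = np.random.default_rng(0).integers(0, 3, size=300)

logo = LeaveOneGroupOut()
for train_idx, test_idx in logo.split(exam_ids, groups=scanner_ids):
    held_out = np.unique(scanner_ids[test_idx])[0]
    # Train on exams from the remaining scanners, test on the held-out scanner.
    print(f"Held-out scanner {held_out}: {len(train_idx)} training exams, "
          f"{len(test_idx)} test exams")
```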
Considering the progress in services that allow researchers to deploy their models, we might get to a point where researchers host models so that reviewers/readers can test them with their own data in the future to make additional conclusions regarding generalizability.
The fact that only one publication tried to train a network for detection is surprising, as overlooking a small, benign brain tumor on an MRI is a real-world problem that could be addressed by having an artificial intelligence function as a safety net. This would, however, require the AI to have a low false-positive rate so as not to increase the radiology workload, and the logistics of creating and processing datasets with different tumors, including healthy controls, remain a hurdle.
Creating a dataset for segmentation is easier, as it only requires images of patients with the tumor one is trying to segment. Automatic segmentation has a clear application, as fractionated and stereotactic radiotherapy are treatment options for many benign tumors of the CNS and require segmentation of the tumor prior to treatment. Furthermore, automatic segmentations could facilitate volumetric measurements to determine whether a tumor is growing [118,119].
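For instance, once a tumor has been segmented, its volume follows directly from the number of labeled voxels and the voxel spacing, so volumes from successive scans can be compared; the following is a minimal sketch with hypothetical values.

```python
import numpy as np

def mask_volume_ml(mask: np.ndarray, voxel_spacing_mm: tuple) -> float:
    """Tumor volume in millilitres from a binary mask and voxel spacing in mm."""
    voxel_volume_mm3 = float(np.prod(voxel_spacing_mm))
    return mask.astype(bool).sum() * voxel_volume_mm3 / 1000.0  # 1 mL = 1000 mm^3

# Hypothetical mask and spacing (0.5 x 0.5 x 1.0 mm voxels)
mask = np.zeros((256, 256, 60), dtype=bool); mask[100:120, 100:120, 20:30] = True
print(f"Tumor volume: {mask_volume_ml(mask, (0.5, 0.5, 1.0)):.2f} mL")
# Comparing volumes from successive scans indicates growth over time.
```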
Considering that benign brain tumors are relatively rare, and that related datasets are consequently often small, it was surprising to see that only three publications reported the use of data augmentation techniques, as they are an effective way to add heterogeneity to the data [120]. One underlying reason might be that such techniques are less established for 3D convolutional neural networks (CNNs), which were used in several publications [121].
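The following sketch shows the kind of simple spatial augmentation meant here, applied to a 3D volume with NumPy and SciPy (a random left-right flip and a small in-plane rotation); the parameter ranges are illustrative and not taken from any of the included studies.

```python
import numpy as np
from scipy.ndimage import rotate

def augment_volume(volume: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply a random left-right flip and a small in-plane rotation to a 3D volume."""
    out = volume
    if rng.random() < 0.5:
        out = np.flip(out, axis=0)        # left-right flip
    angle = rng.uniform(-15.0, 15.0)      # degrees, illustrative range
    # Rotate in the axial plane (axes 0 and 1), keeping the original shape.
    out = rotate(out, angle, axes=(0, 1), reshape=False, order=1, mode="nearest")
    return out

rng = np.random.default_rng(42)
volume = rng.normal(size=(64, 64, 32))    # hypothetical MRI volume
augmented = augment_volume(volume, rng)
print(volume.shape, augmented.shape)      # shapes are preserved
```

When image and label volumes are augmented together, the same transform must be applied to both, with nearest-neighbor interpolation used for the label mask to keep it binary.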
Lastly, the use of explainability/interpretability features could be expanded. Implementing model explainability has the potential to not only enable trust in and adoption of models by physicians, but also to support the training process by discovering pitfalls regarding data selection and overfitting [122].
As a general consideration, it will be interesting to see whether automated detection of benign brain tumors, if feasible, actually improves outcomes. Other studies on screening, even in malignant entities, show that finding more tumors is not necessarily a guarantee of making patients’ lives better and longer, which should always be the ultimate goals. This question, however, can only be answered by enrolling patients in a randomized controlled trial testing ML-augmented radiology vs. non-ML-augmented radiology once the technique has matured.
Limitations of this review include the fact that studies using segmentation only as a means for predicting clinical or pathologic features were not considered. In addition, the small sample size and heterogeneous descriptions of methodologies prevent us from drawing quantitative conclusions. Only one database was queried, but this limitation was mitigated by the fact that the majority of publications in the field of ML for radiology appear in PubMed-indexed journals.
Strengths of this review include the adherence to PRISMA guidelines and the fact that systematically searching the literature showed several ways to easily improve the quality of publications in the future—e.g., ensuring code availability. We hope that our review will also serve as a starting point for interested ML researchers to identify interesting topics in the field of benign brain tumors more efficiently by getting up to speed with the literature more quickly.
5. Conclusions
In conclusion, machine learning for detecting and segmenting benign tumors of the CNS is gaining traction but is still at an early stage. The possible presence of overfitting and other biases in several publications makes it difficult to assess whether the high Dice coefficients that were reported would be achievable when deploying the models on data from other institutions. Enforcing guidelines at the review and publication level could enhance the quality of published studies. This is likely to happen as ML in medicine becomes more established and those involved in the publication process become increasingly aware of the possible pitfalls.
Supplementary Materials
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/cancers14112676/s1, Table S1: Excluded studies, Table S2: Study parameters.
Author Contributions
Conceptualization, P.W. and R.F.; methodology, P.W.; formal analysis, P.W., C.K. and C.S.; data curation, P.W., C.K. and C.S.; writing—original draft preparation, P.W.; writing—review and editing, C.K., S.R., C.S., R.F., D.R.Z. and S.B.; supervision, D.R.Z. and S.B.; project administration, P.W. All authors have read and agreed to the published version of the manuscript.
Conflicts of Interest
P.W. has a patent application titled “Method for detection of neurological abnormalities”. The other authors declare no conflict of interest.
Funding Statement
The research has been supported with a BRIDGE-Proof of Concept research grant from the Swiss National Science Foundation (SNSF, project number 195054) and Innosuisse.
Footnotes
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1.Valliani A.A.-A., Ranti D., Oermann E.K. Deep Learning and Neurology: A Systematic Review. Neurol. Ther. 2019;8:351–365. doi: 10.1007/s40120-019-00153-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 2.Ostrom Q.T., Gittleman H., Truitt G., Boscia A., Kruchko C., Barnholtz-Sloan J.S. CBTRUS Statistical Report: Primary Brain and Other Central Nervous System Tumors Diagnosed in the United States in 2011–2015. Neuro. Oncol. 2018;20:iv1–iv86. doi: 10.1093/neuonc/noy131. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 3.Menze B.H., Jakab A., Bauer S., Kalpathy-Cramer J., Farahani K., Kirby J., Burren Y., Porz N., Slotboom J., Wiest R., et al. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) IEEE Trans. Med. Imaging. 2015;34:1993–2024. doi: 10.1109/TMI.2014.2377694. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.TCGA-GBM The Cancer Imaging Archive (TCIA) Public Access—Cancer Imaging Archive Wiki. [(accessed on 25 April 2022)]. Available online: https://wiki.cancerimagingarchive.net/display/Public/TCGA-GBM.
- 5.Bouget D., Pedersen A., Hosainey S.A.M., Vanel J., Solheim O., Reinertsen I. Fast Meningioma Segmentation in T1-Weighted Magnetic Resonance Imaging Volumes Using a Lightweight 3D Deep Learning Architecture. J. Med. Imaging. 2021;8:24002. doi: 10.1117/1.JMI.8.2.024002. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 6.Ito S., Ando K., Kobayashi K., Nakashima H., Oda M., Machino M., Kanbara S., Inoue T., Yamaguchi H., Koshimizu H., et al. Automated Detection of Spinal Schwannomas Utilizing Deep Learning Based on Object Detection from Magnetic Resonance Imaging. Spine. 2021;46:95–100. doi: 10.1097/BRS.0000000000003749. [DOI] [PubMed] [Google Scholar]
- 7.Cho S.J., Sunwoo L., Baik S.H., Bae Y.J., Choi B.S., Kim J.H. Brain Metastasis Detection Using Machine Learning: A Systematic Review and Meta-Analysis. Neuro. Oncol. 2020;23:214–225. doi: 10.1093/neuonc/noaa232. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 8.Mongan J., Moy L., Kahn C.E. Checklist for Artificial Intelligence in Medical Imaging (CLAIM): A Guide for Authors and Reviewers. Radiol. Artif. Intell. 2020;2:e200029. doi: 10.1148/ryai.2020200029. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 9.Page M.J., McKenzie J.E., Bossuyt P.M., Boutron I., Hoffmann T.C., Mulrow C.D., Shamseer L., Tetzlaff J.M., Akl E.A., Brennan S.E., et al. The PRISMA 2020 Statement: An Updated Guideline for Reporting Systematic Reviews. BMJ. 2021;372 doi: 10.1136/bmj.n71. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 10.Huang Z.-S., Xiao X., Li X.-D., Mo H.-Z., He W.-L., Deng Y.-H., Lu L.-J., Wu Y.-K., Liu H. Machine Learning-Based Multiparametric Magnetic Resonance Imaging Radiomic Model for Discrimination of Pathological Subtypes of Craniopharyngioma. J. Magn. Reson. Imaging. 2021;54:1541–1550. doi: 10.1002/jmri.27761. [DOI] [PubMed] [Google Scholar]
- 11.Kalasauskas D., Kronfeld A., Renovanz M., Kurz E., Leukel P., Krenzlin H., Brockmann M.A., Sommer C.J., Ringel F., Keric N. Identification of High-Risk Atypical Meningiomas According to Semantic and Radiomic Features. Cancers. 2020;12:2942. doi: 10.3390/cancers12102942. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 12.Zhao Y., Lu Y., Li X., Zheng Y., Yin B. The Evaluation of Radiomic Models in Distinguishing Pilocytic Astrocytoma from Cystic Oligodendroglioma With Multiparametric MRI. J. Comput. Assist. Tomogr. 2020;44:969–976. doi: 10.1097/RCT.0000000000001088. [DOI] [PubMed] [Google Scholar]
- 13.Prince E.W., Whelan R., Mirsky D.M., Stence N., Staulcup S., Klimo P., Anderson R.C.E., Niazi T.N., Grant G., Souweidane M., et al. Robust Deep Learning Classification of Adamantinomatous Craniopharyngioma from Limited Preoperative Radiographic Images. Sci. Rep. 2020;10:16885. doi: 10.1038/s41598-020-73278-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Hu J., Zhao Y., Li M., Liu J., Wang F., Weng Q., Wang X., Cao D. Machine Learning-Based Radiomics Analysis in Predicting the Meningioma Grade Using Multiparametric MRI. Eur. J. Radiol. 2020;131:109251. doi: 10.1016/j.ejrad.2020.109251. [DOI] [PubMed] [Google Scholar]
- 15.Bi S., Chen R., Zhang K., Xiang Y., Wang R., Lin H., Yang H. Differentiate Cavernous Hemangioma from Schwannoma with Artificial Intelligence (AI) Ann. Transl. Med. 2020;8:710. doi: 10.21037/atm.2020.03.150. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 16.Khayat Kashani H.R., Azhari S., Nayebaghayee H., Salimi S., Mohammadi H.R. Prediction Value of Preoperative Findings on Meningioma Grading Using Artificial Neural Network. Clin. Neurol. Neurosurg. 2020;196:105947. doi: 10.1016/j.clineuro.2020.105947. [DOI] [PubMed] [Google Scholar]
- 17.Li M., Wang H., Shang Z., Yang Z., Zhang Y., Wan H. Ependymoma and Pilocytic Astrocytoma: Differentiation Using Radiomics Approach Based on Machine Learning. J. Clin. Neurosci. 2020;78:175–180. doi: 10.1016/j.jocn.2020.04.080. [DOI] [PubMed] [Google Scholar]
- 18.Peng A., Dai H., Duan H., Chen Y., Huang J., Zhou L., Chen L. A Machine Learning Model to Precisely Immunohistochemically Classify Pituitary Adenoma Subtypes with Radiomics Based on Preoperative Magnetic Resonance Imaging. Eur. J. Radiol. 2020;125:108892. doi: 10.1016/j.ejrad.2020.108892. [DOI] [PubMed] [Google Scholar]
- 19.Chen C., Guo X., Wang J., Guo W., Ma X., Xu J. The Diagnostic Value of Radiomics-Based Machine Learning in Predicting the Grade of Meningiomas Using Conventional Magnetic Resonance Imaging: A Preliminary Study. Front. Oncol. 2019;9:1338. doi: 10.3389/fonc.2019.01338. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 20.Maki S., Furuya T., Horikoshi T., Yokota H., Mori Y., Ota J., Kawasaki Y., Miyamoto T., Norimoto M., Okimatsu S., et al. A Deep Convolutional Neural Network with Performance Comparable to Radiologists for Differentiating Between Spinal Schwannoma and Meningioma. Spine. 2020;45:694–700. doi: 10.1097/BRS.0000000000003353. [DOI] [PubMed] [Google Scholar]
- 21.Ke C., Chen H., Lv X., Li H., Zhang Y., Chen M., Hu D., Ruan G., Zhang Y., Zhang Y., et al. Differentiation Between Benign and Nonbenign Meningiomas by Using Texture Analysis from Multiparametric MRI. J. Magn. Reson. Imaging. 2020;51:1810–1820. doi: 10.1002/jmri.26976. [DOI] [PubMed] [Google Scholar]
- 22.Zhu H., Fang Q., He H., Hu J., Jiang D., Xu K. Automatic Prediction of Meningioma Grade Image Based on Data Amplification and Improved Convolutional Neural Network. Comput. Math. Methods Med. 2019;2019:7289273. doi: 10.1155/2019/7289273. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 23.Morin O., Chen W.C., Nassiri F., Susko M., Magill S.T., Vasudevan H.N., Wu A., Vallières M., Gennatas E.D., Valdes G., et al. Integrated Models Incorporating Radiologic and Radiomic Features Predict Meningioma Grade, Local Failure, and Overall Survival. Neurooncol. Adv. 2019;1:vdz011. doi: 10.1093/noajnl/vdz011. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 24.Hamerla G., Meyer H.-J., Schob S., Ginat D.T., Altman A., Lim T., Gihr G.A., Horvath-Rizea D., Hoffmann K.-T., Surov A. Comparison of Machine Learning Classifiers for Differentiation of Grade 1 from Higher Gradings in Meningioma: A Multicenter Radiomics Study. Magn. Reson. Imaging. 2019;63:244–249. doi: 10.1016/j.mri.2019.08.011. [DOI] [PubMed] [Google Scholar]
- 25.Ugga L., Cuocolo R., Solari D., Guadagno E., D’Amico A., Somma T., Cappabianca P., Del Basso de Caro M.L., Cavallo L.M., Brunetti A. Prediction of High Proliferative Index in Pituitary Macroadenomas Using MRI-Based Radiomics and Machine Learning. Neuroradiology. 2019;61:1365–1373. doi: 10.1007/s00234-019-02266-1. [DOI] [PubMed] [Google Scholar]
- 26.Li X., Lu Y., Xiong J., Wang D., She D., Kuai X., Geng D., Yin B. Presurgical Differentiation between Malignant Haemangiopericytoma and Angiomatous Meningioma by a Radiomics Approach Based on Texture Analysis. J. Neuroradiol. 2019;46:281–287. doi: 10.1016/j.neurad.2019.05.013. [DOI] [PubMed] [Google Scholar]
- 27.Zhu Y., Man C., Gong L., Dong D., Yu X., Wang S., Fang M., Wang S., Fang X., Chen X., et al. A Deep Learning Radiomics Model for Preoperative Grading in Meningioma. Eur. J. Radiol. 2019;116:128–134. doi: 10.1016/j.ejrad.2019.04.022. [DOI] [PubMed] [Google Scholar]
- 28.Banzato T., Causin F., Della Puppa A., Cester G., Mazzai L., Zotti A. Accuracy of Deep Learning to Differentiate the Histopathological Grading of Meningiomas on MR Images: A Preliminary Study. J. Magn. Reson. Imaging. 2019;50:1152–1159. doi: 10.1002/jmri.26723. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 29.Hale A.T., Stonko D.P., Wang L., Strother M.K., Chambless L.B. Machine Learning Analyses Can Differentiate Meningioma Grade by Features on Magnetic Resonance Imaging. Neurosurg. Focus. 2018;45:E4. doi: 10.3171/2018.8.FOCUS18191. [DOI] [PubMed] [Google Scholar]
- 30.Park Y.W., Oh J., You S.C., Han K., Ahn S.S., Choi Y.S., Chang J.H., Kim S.H., Lee S.-K. Radiomics and Machine Learning May Accurately Predict the Grade and Histological Subtype in Meningiomas Using Conventional and Diffusion Tensor Imaging. Eur. Radiol. 2019;29:4068–4076. doi: 10.1007/s00330-018-5830-3. [DOI] [PubMed] [Google Scholar]
- 31.Lu Y., Liu L., Luan S., Xiong J., Geng D., Yin B. The Diagnostic Value of Texture Analysis in Predicting WHO Grades of Meningiomas Based on ADC Maps: An Attempt Using Decision Tree and Decision Forest. Eur. Radiol. 2019;29:1318–1328. doi: 10.1007/s00330-018-5632-7. [DOI] [PubMed] [Google Scholar]
- 32.Kanazawa T., Minami Y., Jinzaki M., Toda M., Yoshida K., Sasaki H. Preoperative Prediction of Solitary Fibrous Tumor/Hemangiopericytoma and Angiomatous Meningioma Using Magnetic Resonance Imaging Texture Analysis. World Neurosurg. 2018;120:e1208–e1216. doi: 10.1016/j.wneu.2018.09.044. [DOI] [PubMed] [Google Scholar]
- 33.Dong F., Li Q., Xu D., Xiu W., Zeng Q., Zhu X., Xu F., Jiang B., Zhang M. Differentiation between Pilocytic Astrocytoma and Glioblastoma: A Decision Tree Model Using Contrast-Enhanced Magnetic Resonance Imaging-Derived Quantitative Radiomic Features. Eur. Radiol. 2019;29:3968–3975. doi: 10.1007/s00330-018-5706-6. [DOI] [PubMed] [Google Scholar]
- 34.Coroller T.P., Bi W.L., Huynh E., Abedalthagafi M., Aizer A.A., Greenwald N.F., Parmar C., Narayan V., Wu W.W., Miranda de Moura S., et al. Radiographic Prediction of Meningioma Grade by Semantic and Radiomic Features. PLoS ONE. 2017;12:e0187908. doi: 10.1371/journal.pone.0187908. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 35.Tian Z., Chen C., Zhang Y., Fan Y., Feng R., Xu J. Radiomic Analysis of Craniopharyngioma and Meningioma in the Sellar/Parasellar Area with MR Images Features and Texture Features: A Feasible Study. Contrast Media Mol. Imaging. 2020;2020:4837156. doi: 10.1155/2020/4837156. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 36.Han Y., Wang T., Wu P., Zhang H., Chen H., Yang C. Meningiomas: Preoperative Predictive Histopathological Grading Based on Radiomics of MRI. Magn. Reson. Imaging. 2021;77:36–43. doi: 10.1016/j.mri.2020.11.009. [DOI] [PubMed] [Google Scholar]
- 37.Park Y.W., Kang Y., Ahn S.S., Ku C.R., Kim E.H., Kim S.H., Lee E.J., Kim S.H., Lee S.-K. Radiomics Model Predicts Granulation Pattern in Growth Hormone-Secreting Pituitary Adenomas. Pituitary. 2020;23:691–700. doi: 10.1007/s11102-020-01077-5. [DOI] [PubMed] [Google Scholar]
- 38.Chu H., Lin X., He J., Pang P., Fan B., Lei P., Guo D., Ye C. Value of MRI Radiomics Based on Enhanced T1WI Images in Prediction of Meningiomas Grade. Acad. Radiol. 2021;28:687–693. doi: 10.1016/j.acra.2020.03.034. [DOI] [PubMed] [Google Scholar]
- 39.Laukamp K.R., Shakirin G., Baeßler B., Thiele F., Zopfs D., Große Hokamp N., Timmer M., Kabbasch C., Perkuhn M., Borggrefe J. Accuracy of Radiomics-Based Feature Analysis on Multiparametric Magnetic Resonance Images for Noninvasive Meningioma Grading. World Neurosurg. 2019;132:e366–e390. doi: 10.1016/j.wneu.2019.08.148. [DOI] [PubMed] [Google Scholar]
- 40.Niu L., Zhou X., Duan C., Zhao J., Sui Q., Liu X., Zhang X. Differentiation Researches on the Meningioma Subtypes by Radiomics from Contrast-Enhanced Magnetic Resonance Imaging: A Preliminary Study. World Neurosurg. 2019;126:e646–e652. doi: 10.1016/j.wneu.2019.02.109. [DOI] [PubMed] [Google Scholar]
- 41.Chen X., Tong Y., Shi Z., Chen H., Yang Z., Wang Y., Chen L., Yu J. Noninvasive Molecular Diagnosis of Craniopharyngioma with MRI-Based Radiomics Approach. BMC Neurol. 2019;19:6. doi: 10.1186/s12883-018-1216-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 42.Zhang S., Song G., Zang Y., Jia J., Wang C., Li C., Tian J., Dong D., Zhang Y. Non-Invasive Radiomics Approach Potentially Predicts Non-Functioning Pituitary Adenomas Subtypes before Surgery. Eur. Radiol. 2018;28:3692–3701. doi: 10.1007/s00330-017-5180-6. [DOI] [PubMed] [Google Scholar]
- 43.Zhai Y., Song D., Yang F., Wang Y., Jia X., Wei S., Mao W., Xue Y., Wei X. Preoperative Prediction of Meningioma Consistency via Machine Learning-Based Radiomics. Front. Oncol. 2021;11:657288. doi: 10.3389/fonc.2021.657288. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 44.Shahrestani S., Cardinal T., Micko A., Strickland B.A., Pangal D.J., Kugener G., Weiss M.H., Carmichael J., Zada G. Neural Network Modeling for Prediction of Recurrence, Progression, and Hormonal Non-Remission in Patients Following Resection of Functional Pituitary Adenomas. Pituitary. 2021;24:523–529. doi: 10.1007/s11102-021-01128-5. [DOI] [PubMed] [Google Scholar]
- 45.Dang S., Manzoor N.F., Chowdhury N., Tittman S.M., Yancey K.L., Monsour M.A., O’Malley M.R., Rivas A., Haynes D.S., Bennett M.L. Investigating Predictors of Increased Length of Stay After Resection of Vestibular Schwannoma Using Machine Learning. Otol. Neurotol. 2021;42:e584–e592. doi: 10.1097/MAO.0000000000003042. [DOI] [PubMed] [Google Scholar]
- 46.Chen J.M., Wan Q., Zhu H.Y., Ge Y.Q., Wu L.L., Zhai J., Ding Z.M. The value of conventional magnetic resonance imaging based radiomic model in predicting the texture of pituitary macroadenoma. Zhonghua Yi Xue Za Zhi. 2020;100:3626–3631. doi: 10.3760/cma.j.cn112137-20200511-01511. [DOI] [PubMed] [Google Scholar]
- 47.Cepeda S., Arrese I., García-García S., Velasco-Casares M., Escudero-Caro T., Zamora T., Sarabia R. Meningioma Consistency Can Be Defined by Combining the Radiomic Features of Magnetic Resonance Imaging and Ultrasound Elastography. A Pilot Study Using Machine Learning Classifiers. World Neurosurg. 2021;146:e1147–e1159. doi: 10.1016/j.wneu.2020.11.113. [DOI] [PubMed] [Google Scholar]
- 48.Zhang J., Sun J., Han T., Zhao Z., Cao Y., Zhang G., Zhou J. Radiomic Features of Magnetic Resonance Images as Novel Preoperative Predictive Factors of Bone Invasion in Meningiomas. Eur. J. Radiol. 2020;132:109287. doi: 10.1016/j.ejrad.2020.109287. [DOI] [PubMed] [Google Scholar]
- 49.Kandemirli S.G., Chopra S., Priya S., Ward C., Locke T., Soni N., Srivastava S., Jones K., Bathla G. Presurgical Detection of Brain Invasion Status in Meningiomas Based on First-Order Histogram Based Texture Analysis of Contrast Enhanced Imaging. Clin. Neurol. Neurosurg. 2020;198:106205. doi: 10.1016/j.clineuro.2020.106205. [DOI] [PubMed] [Google Scholar]
- 50.Cuocolo R., Ugga L., Solari D., Corvino S., D’Amico A., Russo D., Cappabianca P., Cavallo L.M., Elefante A. Prediction of Pituitary Adenoma Surgical Consistency: Radiomic Data Mining and Machine Learning on T2-Weighted MRI. Neuroradiology. 2020;62:1649–1656. doi: 10.1007/s00234-020-02502-z. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 51.Liu Y.Q., Gao B.B., Dong B., Padikkalakandy Cheriyath S.S., Song Q.W., Xu B., Wei Q., Xie L.Z., Guo Y., Miao Y.W. Preoperative Vascular Heterogeneity and Aggressiveness Assessment of Pituitary Macroadenoma Based on Dynamic Contrast-Enhanced MRI Texture Analysis. Eur. J. Radiol. 2020;129:109125. doi: 10.1016/j.ejrad.2020.109125. [DOI] [PubMed] [Google Scholar]
- 52.Voglis S., van Niftrik C.H.B., Staartjes V.E., Brandi G., Tschopp O., Regli L., Serra C. Feasibility of Machine Learning Based Predictive Modelling of Postoperative Hyponatremia after Pituitary Surgery. Pituitary. 2020;23:543–551. doi: 10.1007/s11102-020-01056-w. [DOI] [PubMed] [Google Scholar]
- 53.Cha D., Shin S.H., Kim S.H., Choi J.Y., Moon I.S. Machine Learning Approach for Prediction of Hearing Preservation in Vestibular Schwannoma Surgery. Sci. Rep. 2020;10:7136. doi: 10.1038/s41598-020-64175-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 54.Abouzari M., Goshtasbi K., Sarna B., Khosravi P., Reutershan T., Mostaghni N., Lin H.W., Djalilian H.R. Prediction of Vestibular Schwannoma Recurrence Using Artificial Neural Network. Laryngoscope Investig. Otolaryngol. 2020;5:278–285. doi: 10.1002/lio2.362. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 55.Su C.-Q., Zhang X., Pan T., Chen X.-T., Chen W., Duan S.-F., Ji J., Hu W.-X., Lu S.-S., Hong X.-N. Texture Analysis of High B-Value Diffusion-Weighted Imaging for Evaluating Consistency of Pituitary Macroadenomas. J. Magn. Reson. Imaging. 2020;51:1507–1513. doi: 10.1002/jmri.26941. [DOI] [PubMed] [Google Scholar]
- 56.Fan Y., Liu Z., Hou B., Li L., Liu X., Liu Z., Wang R., Lin Y., Feng F., Tian J., et al. Development and Validation of an MRI-Based Radiomic Signature for the Preoperative Prediction of Treatment Response in Patients with Invasive Functional Pituitary Adenoma. Eur. J. Radiol. 2019;121:108647. doi: 10.1016/j.ejrad.2019.108647. [DOI] [PubMed] [Google Scholar]
- 57.Speckter H., Bido J., Hernandez G., Rivera D., Suazo L., Valenzuela S., Miches I., Oviedo J., Gonzalez C., Stoeter P. Pretreatment Texture Analysis of Routine MR Images and Shape Analysis of the Diffusion Tensor for Prediction of Volumetric Response after Radiosurgery for Meningioma. J. Neurosurg. 2018;129:31–37. doi: 10.3171/2018.7.GKS181327. [DOI] [PubMed] [Google Scholar]
- 58.Mekki A., Dercle L., Lichtenstein P., Nasser G., Marabelle A., Champiat S., Chouzenoux E., Balleyguier C., Ammari S. Machine Learning Defined Diagnostic Criteria for Differentiating Pituitary Metastasis from Autoimmune Hypophysitis in Patients Undergoing Immune Checkpoint Blockade Therapy. Eur. J. Cancer. 2019;119:44–56. doi: 10.1016/j.ejca.2019.06.020. [DOI] [PubMed] [Google Scholar]
- 59.Staartjes V.E., Zattra C.M., Akeret K., Maldaner N., Muscas G., Bas van Niftrik C.H., Fierstra J., Regli L., Serra C. Neural Network-Based Identification of Patients at High Risk for Intraoperative Cerebrospinal Fluid Leaks in Endoscopic Pituitary Surgery. J. Neurosurg. 2019;133:329–335. doi: 10.3171/2019.4.JNS19477. [DOI] [PubMed] [Google Scholar]
- 60.Zeynalova A., Kocak B., Durmaz E.S., Comunoglu N., Ozcan K., Ozcan G., Turk O., Tanriover N., Kocer N., Kizilkilic O., et al. Preoperative Evaluation of Tumour Consistency in Pituitary Macroadenomas: A Machine Learning-Based Histogram Analysis on Conventional T2-Weighted MRI. Neuroradiology. 2019;61:767–774. doi: 10.1007/s00234-019-02211-2. [DOI] [PubMed] [Google Scholar]
- 61.Speckter H., Santana J., Bido J., Hernandez G., Rivera D., Suazo L., Valenzuela S., Oviedo J., Gonzalez C.F., Stoeter P. Texture Analysis of Standard Magnetic Resonance Images to Predict Response to Gamma Knife Radiosurgery in Vestibular Schwannomas. World Neurosurg. 2019;132:e228–e234. doi: 10.1016/j.wneu.2019.08.193. [DOI] [PubMed] [Google Scholar]
- 62.Hollon T.C., Parikh A., Pandian B., Tarpeh J., Orringer D.A., Barkan A.L., McKean E.L., Sullivan S.E. A Machine Learning Approach to Predict Early Outcomes after Pituitary Adenoma Surgery. Neurosurg. Focus. 2018;45:E8. doi: 10.3171/2018.8.FOCUS18268. [DOI] [PubMed] [Google Scholar]
- 63.Galm B.P., Martinez-Salazar E.L., Swearingen B., Torriani M., Klibanski A., Bredella M.A., Tritos N.A. MRI Texture Analysis as a Predictor of Tumor Recurrence or Progression in Patients with Clinically Non-Functioning Pituitary Adenomas. Eur. J. Endocrinol. 2018;179:191–198. doi: 10.1530/EJE-18-0291. [DOI] [PubMed] [Google Scholar]
- 64.Muhlestein W.E., Akagi D.S., Kallos J.A., Morone P.J., Weaver K.D., Thompson R.C., Chambless L.B. Using a Guided Machine Learning Ensemble Model to Predict Discharge Disposition Following Meningioma Resection. J. Neurol. Surg. B Skull Base. 2018;79:123–130. doi: 10.1055/s-0037-1604393. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 65.Ko C.-C., Zhang Y., Chen J.-H., Chang K.-T., Chen T.-Y., Lim S.-W., Wu T.-C., Su M.-Y. Pre-Operative MRI Radiomics for the Prediction of Progression and Recurrence in Meningiomas. Front. Neurol. 2021;12:636235. doi: 10.3389/fneur.2021.636235. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 66.Xiao B., Fan Y., Zhang Z., Tan Z., Yang H., Tu W., Wu L., Shen X., Guo H., Wu Z., et al. Three-Dimensional Radiomics Features from Multi-Parameter MRI Combined With Clinical Characteristics Predict Postoperative Cerebral Edema Exacerbation in Patients With Meningioma. Front. Oncol. 2021;11:625220. doi: 10.3389/fonc.2021.625220. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 67.Ma G., Kang J., Qiao N., Zhang B., Chen X., Li G., Gao Z., Gui S. Non-Invasive Radiomics Approach Predict Invasiveness of Adamantinomatous Craniopharyngioma Before Surgery. Front. Oncol. 2020;10:599888. doi: 10.3389/fonc.2020.599888. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 68.Langenhuizen P.P.J.H., Zinger S., Leenstra S., Kunst H.P.M., Mulder J.J.S., Hanssens P.E.J., de With P.H.N., Verheul J.B. Radiomics-Based Prediction of Long-Term Treatment Response of Vestibular Schwannomas Following Stereotactic Radiosurgery. Otol. Neurotol. 2020;41:e1321–e1327. doi: 10.1097/MAO.0000000000002886. [DOI] [PubMed] [Google Scholar]
- 69.Zhang Y., Ko C.-C., Chen J.-H., Chang K.-T., Chen T.-Y., Lim S.-W., Tsui Y.-K., Su M.-Y. Radiomics Approach for Prediction of Recurrence in Non-Functioning Pituitary Macroadenomas. Front. Oncol. 2020;10:590083. doi: 10.3389/fonc.2020.590083. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 70.Yang H.-C., Wu C.-C., Lee C.-C., Huang H.-E., Lee W.-K., Chung W.-Y., Wu H.-M., Guo W.-Y., Wu Y.-T., Lu C.-F. Prediction of Pseudoprogression and Long-Term Outcome of Vestibular Schwannoma after Gamma Knife Radiosurgery Based on Preradiosurgical MR Radiomics. Radiother. Oncol. 2021;155:123–130. doi: 10.1016/j.radonc.2020.10.041. [DOI] [PubMed] [Google Scholar]
- 71.Machado L.F., Elias P.C.L., Moreira A.C., Dos Santos A.C., Murta Junior L.O. MRI Radiomics for the Prediction of Recurrence in Patients with Clinically Non-Functioning Pituitary Macroadenomas. Comput. Biol. Med. 2020;124:103966. doi: 10.1016/j.compbiomed.2020.103966. [DOI] [PubMed] [Google Scholar]
- 72.Zhang J., Yao K., Liu P., Liu Z., Han T., Zhao Z., Cao Y., Zhang G., Zhang J., Tian J., et al. A Radiomics Model for Preoperative Prediction of Brain Invasion in Meningioma Non-Invasively Based on MRI: A Multicentre Study. EBioMedicine. 2020;58:102933. doi: 10.1016/j.ebiom.2020.102933. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 73.Zhang Y., Chen J.-H., Chen T.-Y., Lim S.-W., Wu T.-C., Kuo Y.-T., Ko C.-C., Su M.-Y. Radiomics Approach for Prediction of Recurrence in Skull Base Meningiomas. Neuroradiology. 2019;61:1355–1364. doi: 10.1007/s00234-019-02259-0. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 74.Rui W., Wu Y., Ma Z., Wang Y., Wang Y., Xu X., Zhang J., Yao Z. MR Textural Analysis on Contrast Enhanced 3D-SPACE Images in Assessment of Consistency of Pituitary Macroadenoma. Eur. J. Radiol. 2019;110:219–224. doi: 10.1016/j.ejrad.2018.12.002. [DOI] [PubMed] [Google Scholar]
- 75.Niu J., Zhang S., Ma S., Diao J., Zhou W., Tian J., Zang Y., Jia W. Preoperative Prediction of Cavernous Sinus Invasion by Pituitary Adenomas Using a Radiomics Method Based on Magnetic Resonance Images. Eur. Radiol. 2019;29:1625–1634. doi: 10.1007/s00330-018-5725-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 76.Goertz L., Stavrinou P., Stranjalis G., Timmer M., Goldbrunner R., Krischek B. Single-Step Resection of Sphenoorbital Meningiomas and Orbital Reconstruction Using Customized CAD/CAM Implants. J. Neurol. Surg. B Skull Base. 2020;81:142–148. doi: 10.1055/s-0039-1681044. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 77.McCradden M.D., Baba A., Saha A., Ahmad S., Boparai K., Fadaiefard P., Cusimano M.D. Ethical Concerns around Use of Artificial Intelligence in Health Care Research from the Perspective of Patients with Meningioma, Caregivers and Health Care Providers: A Qualitative Study. CMAJ Open. 2020;8:E90–E95. doi: 10.9778/cmajo.20190151. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 78.Lovo E.E., Campos F.J., Caceros V.E., Minervini M., Cruz C.B., Arias J.C., Reyes W.A. Automated Stereotactic Gamma Ray Radiosurgery to the Pituitary Gland in Terminally Ill Cancer Patients with Opioid Refractory Pain. Cureus. 2019;11:e4811. doi: 10.7759/cureus.4811. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 79.Carolus A., Weihe S., Schmieder K., Brenke C. One-Step CAD/CAM Titanium Cranioplasty after Drilling Template-Assisted Resection of Intraosseous Skull Base Meningioma: Technical Note. Acta Neurochir. 2017;159:447–452. doi: 10.1007/s00701-016-3053-4. [DOI] [PubMed] [Google Scholar]
- 80.Qiao N., Zhang Y., Ye Z., Shen M., Shou X., Wang Y., Li S., Wang M., Zhao Y. Comparison of Multifocal Visual Evoked Potential, Static Automated Perimetry, and Optical Coherence Tomography Findings for Assessing Visual Pathways in Patients with Pituitary Adenomas. Pituitary. 2015;18:598–603. doi: 10.1007/s11102-014-0613-6. [DOI] [PubMed] [Google Scholar]