Abstract
Objective
To investigate the publication trends and research landscape of deep learning-based image segmentation techniques for brain tumor detection, focusing on the period from 2013 to 2023, and to identify key stakeholders, influential research, and prevalent research themes.
Methods
This systematic review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and employed bibliometric analysis methods. Databases including Scopus, PubMed, and Web of Science were searched for publications from 2013 to 2023 using a defined search query. The analysis covered publication trends, stakeholder identification (authors, institutions, funding bodies, and countries), top-cited literature, co-authorship, keyword co-occurrence, and citation analysis using VOSviewer software; the PubMed results were analyzed in RStudio.
Results
A total of 931 documents were analyzed after PRISMA filtering. The number of annual publications increased substantially, from 1 to 310, over the review period. Journal articles were the predominant document type. Tongxue Zhou was the most prolific researcher, and the Ministry of Education of the People's Republic of China, Imperial College London, and Harvard Medical School were the most active affiliations. The National Natural Science Foundation of China was the leading funding organization. Keyword co-occurrence analysis highlighted the prevalence of deep learning-based brain tumor image segmentation as a research theme. Co-authorship and citation analyses revealed key collaborations and influential publications.
Conclusion
This study provides valuable insights into the research landscape of deep learning for brain tumor image segmentation. The identified trends, stakeholders, and research themes can inform future research directions and policy decisions in this rapidly evolving field. The findings highlight the growth and multidisciplinary nature of this research area, while also suggesting potential opportunities and challenges for future development in brain tumor image segmentation techniques.
Keywords: Brain tumor, image segmentation, bibliographic review, deep learning, magnetic resonance imaging (MRI)
Introduction
The brain, a vital organ comprising roughly a hundred billion nerve cells, or neurons, plays a crucial role in the body. Recent reports indicate that brain tumors rank as the 10th leading cause of mortality among both adults and children of all genders in developed countries. 1 Projections indicated that, in the United States alone, primary brain tumors would contribute to 18,280 deaths in adults in 2022. 2 Also termed intracranial tumors, brain tumors encompass a diverse range of cancerous cells originating in the intracranial tissues of the brain, varying in malignancy from benign to advanced stages. 3 A brain tumor arises when cell division rates escalate, leading to uncontrolled multiplication. Tumors can emerge in any part of the brain or skull, including the protective lining, skull base, brainstem, sinuses, nasal cavity, and other locations. 4 With over 150 types, brain tumors are classified into cancerous and noncancerous categories. 5
The brain comprises diverse cell types, each with unique characteristics, making it challenging to generalize findings from malignancies in other organs. Brain cancers exhibit distinct biology, microenvironment, therapy approaches, prognosis, and risk factors, rendering their classification complex. 6 Common signs of brain cancer include pressure in the head, fatigue, nausea, and discomfort, while additional symptoms may include fever, rash, and increased pulse. Experts can correlate signs to diagnose problems with more certainty, although not all brain tumors cause symptoms. 7
Diagnosing brain tumors involves a comprehensive approach utilizing three main procedures: imaging tests, neurological exams, and biopsies. Magnetic resonance imaging (MRI) is the primary and widely accepted method, incorporating elements like perfusion MRI, functional MRI, and magnetic resonance spectroscopy for treatment planning. 8 Additional imaging tests such as positron emission tomography and computed tomography (CT) may be used in conjunction with MRI. Neurological exams assess various functions to pinpoint affected brain areas, and a biopsy provides a more precise diagnosis by examining abnormal tissue samples under a microscope. 9
Before the advent of deep learning, brain tumor segmentation primarily relied on traditional image processing techniques and conventional machine learning methods. Early approaches often involved thresholding, region growing, and active contours or level sets, which segmented tumors based on intensity variations, connectivity, or deformable models. Concurrently, machine learning algorithms like support vector machines and random forests were employed, but these typically require laborious hand-crafted feature extraction from images. While these methods provided foundational insights, they frequently suffered from significant limitations: they often lacked robustness to variations in image quality and tumor appearance, struggled with fuzzy or irregular tumor boundaries, and exhibited limited generalizability across diverse patient datasets. These shortcomings underscored the need for more automated, robust, and accurate segmentation solutions, paving the way for the transformative impact of deep learning.
Various authors have provided comprehensive reviews on segmentation techniques for brain tumor detection. Magadza and Viriri 10 conducted a review encompassing building blocks, state-of-the-art techniques, and tools for implementing automatic brain tumor segmentation algorithms. Notable architectures, such as ensemble methods and UNet-based models, demonstrate significant potential for enhancing the state-of-the-art through meticulous processing, weight initialization, advanced training schemes, and strategies to address inherent class imbalance issues. Bhalodiya et al. 11 systematically reviewed 572 brain tumor segmentation studies from 2015 to 2020, evaluating conventional methods, deep learning methods, and software-based or semi-automatic methods applied to MRI techniques. Md Kamrul Hasan Khan explored various machine learning and deep learning approaches used for brain tumor MRI image segmentation, concluding that hybrid methods, integrating machine learning and deep learning techniques, hold promise but face challenges in algorithm selection, result interpretation, and clinical validation. Ranjbarzadeh et al. 12 reviewed the performance of modern image segmentation methods published from 2015 to 2022, along with recent research efforts. Alizadeh Savareh et al. 13 surveyed the practical aspects of convolutional neural networks (CNNs) in brain tumor segmentation, emphasizing the importance of preprocessing phases, such as N4 bias field correction using the Insight Segmentation and Registration Toolkit (ITK), and of conditional random fields in the final processing stage to enhance CNN performance. Mohammed et al. 14 presented a concise overview of MRI modalities, discussing common methods of brain tumor segmentation from MRI images, including deep learning techniques. They highlighted significant advancements in this field, demonstrating substantial improvements in recent years.
Notably, however, the existing literature lacks the bibliometric analysis needed to contextualize this body of work and lay a robust foundation for further study. The absence of such an analysis impedes a comprehensive understanding of the research landscape and limits scholarly impact. Integrating a thorough bibliographic examination offers a more nuanced and well-informed perspective on the research topic.
A comprehensive examination of the scientific literature on segmentation techniques for brain tumor detection, retrieved from Elsevier Scopus (one of the world's largest abstract and citation databases), is summarized in Figure 1. Using the search query (brain AND tumor AND deep AND learning AND image AND segmentation) with TITLE-ABS-KEY criteria, a total of 2531 documents published from 1979 to the present were identified, distributed as follows: articles (1280), conference papers (869), conference reviews (174), book chapters (74), reviews (110), retracted items (8), errata (1), books (1), editorials (6), notes (1), letters (3), data papers (1), short surveys (4), and undefined (2). The data underscores the substantial interest in segmentation techniques for brain tumors over the years. Despite the abundance of review papers and other document types in the field, there is a notable absence of studies critically analyzing recent scientific developments and research impact in this domain.
Figure 1.
Schematic representation of the identification, screening, and analysis process of the published documents related to deep learning-based brain tumor segmentation research from the Scopus database.
This review stands apart from others by exclusively focusing on the period from 2013 to 2023, specifically examining the advancements in deep learning-based segmentation techniques for brain tumor detection. Unlike broader reviews, this study provides a concentrated analysis of the most recent decade's breakthroughs, trends, and challenges in the field, offering a more current and targeted perspective that is highly relevant to researchers, industry professionals, and policymakers. This narrowed focus ensures a deeper understanding of the rapid evolution of deep learning technologies in brain tumor segmentation during this transformative period.
Methodology
The initial Scopus search, using the query (brain AND tumor AND deep AND learning AND image AND segmentation) against titles, abstracts, and keywords, returned 2531 records. The first filtering step limited the publication year to the range 2013 to 2023, excluding 496 records published outside this timeframe and leaving 2035 documents. Next, the subject area was refined: publications outside the predefined subject areas of computer science, engineering, medicine, biochemistry, health, neuroscience, multidisciplinary, immunology, psychology, and nursing were excluded, removing 53 records and reducing the pool to 1982 documents. Document type was then considered; only journal articles, review papers, and conference papers were included, excluding 213 records and leaving 1769 documents. The language of publication was restricted to English; after excluding 31 non-English publications, 1738 documents remained. Keyword filtering was then performed: publications that did not contain at least one of the keywords “Deep Learning,” “Image Segmentation,” or “Brain Tumors” were excluded, removing 106 records and resulting in 1632 documents. The publication stage was then filtered to include only “final” publications, excluding 10 records and leaving 1622 documents. Finally, filtering by source type excluded a further 691 records, leaving the 931 documents that formed the final dataset for the bibliometric and qualitative analysis conducted in this review. This filtering process, summarized in Table 1, ensured that only relevant, peer-reviewed publications meeting the specified criteria were included in the study.
Table 1.
PRISMA criteria for document filtering.
| Attribute | Criteria | No. of documents |
|---|---|---|
| TITLE-ABS-KEY | Deep AND learning AND brain AND tumor AND image AND segmentation | 2531 |
| Publication per Year | PUBYEAR > 2012 AND PUBYEAR < 2024 | 2035 |
| Subject Area | LIMIT-TO (SUBJAREA, “COMP”) OR LIMIT-TO (SUBJAREA, “ENGI”) OR LIMIT-TO (SUBJAREA, “MEDI”) OR LIMIT-TO (SUBJAREA, “BIOC”) OR LIMIT-TO (SUBJAREA, “HEAL”) OR LIMIT-TO (SUBJAREA, “NEUR”) OR LIMIT-TO (SUBJAREA, “MULT”) OR LIMIT-TO (SUBJAREA, “IMMU”) OR LIMIT-TO (SUBJAREA, “PSYC”) OR LIMIT-TO (SUBJAREA, “NURS”) | 1982 |
| Document Type | LIMIT-TO (DOCTYPE, “ar”) OR LIMIT-TO (DOCTYPE, “re”) OR LIMIT-TO (DOCTYPE, “cp”) | 1769 |
| Language | LIMIT-TO (LANGUAGE, “English”) | 1738 |
| Keyword | LIMIT-TO (EXACTKEYWORD, “Deep Learning”) OR LIMIT-TO (EXACTKEYWORD, “Image Segmentation”) OR LIMIT-TO (EXACTKEYWORD, “Brain Tumors”) | 1632 |
| Publication Stage | LIMIT-TO (PUBSTAGE, “final”) | 1622 |
| Source Type | SRCTYPE | 931 |
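The cascade in Table 1 can be sketched as a simple filter chain. This is an illustrative sketch only: the record fields (`year`, `doctype`, `language`, `keywords`) are hypothetical stand-ins for a Scopus export, not the database's actual schema.

```python
# Illustrative sketch of the Table 1 filtering cascade; the record fields
# ("year", "doctype", "language", "keywords") are hypothetical stand-ins
# for a Scopus export, not the database's actual column names.
ALLOWED_DOCTYPES = {"ar", "re", "cp"}  # article, review, conference paper
REQUIRED_KEYWORDS = {"Deep Learning", "Image Segmentation", "Brain Tumors"}

def prisma_filter(records):
    """Apply the year, document-type, language, and keyword filters in order."""
    kept = []
    for rec in records:
        if not 2013 <= rec["year"] <= 2023:
            continue  # PUBYEAR > 2012 AND PUBYEAR < 2024
        if rec["doctype"] not in ALLOWED_DOCTYPES:
            continue  # LIMIT-TO (DOCTYPE, ...)
        if rec["language"] != "English":
            continue  # LIMIT-TO (LANGUAGE, "English")
        if not REQUIRED_KEYWORDS & set(rec["keywords"]):
            continue  # LIMIT-TO (EXACTKEYWORD, ...)
        kept.append(rec)
    return kept

sample = [
    {"year": 2015, "doctype": "ar", "language": "English",
     "keywords": ["Deep Learning", "MRI"]},
    {"year": 2012, "doctype": "ar", "language": "English",
     "keywords": ["Deep Learning"]},
    {"year": 2020, "doctype": "cp", "language": "English",
     "keywords": ["Radiomics"]},
]
print(len(prisma_filter(sample)))  # 1
```

Applying the steps in this fixed order reproduces the monotonically shrinking counts of Table 1, since each filter only removes records from the previous stage.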
The search identified a total of 931 published documents that met the specified criteria. Following this, the analysis of the recovered documents was conducted in Microsoft Excel (version 2016), exploring publication trends, major stakeholders (authors, institutions, funding bodies, and countries), and identifying the top-cited literature in the field. This comprehensive approach provides a detailed insight into the landscape of brain tumor segmentation research over the specified timeframe.
Results and discussion
Major research trends
Figure 2 reveals a clear growth trend in document production over time. Starting from a single document in 2013, output increased year by year, with occasional fluctuations. A linear regression analysis confirmed a significant upward trend in publications (β = +28.5/year, p < 0.001, R² = 0.92) over 2013–2023. By 2023, the number of documents had reached 310, indicating a substantial expansion in scholarly output. Figure 2 is significant because it reflects the growing interest of researchers in applying image segmentation techniques to brain tumor detection. Notably, several analyses have reported significant disagreement among human raters when segmenting different tumor subregions, with inter-rater Dice scores ranging from 74% to 85%, which helps explain the demand for automated methods.
Figure 2.
General publication trends in research (2013–2023).
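The regression behind Figure 2 can be reproduced in outline with an ordinary least-squares fit. The yearly counts below are illustrative placeholders (only the 2013 and 2023 endpoints are reported in the text), so the fitted slope differs somewhat from the reported β = +28.5/year.

```python
import numpy as np

# Least-squares trend fit of annual publication counts, as in Figure 2.
# The counts are illustrative placeholders: only the 2013 (1 document)
# and 2023 (310 documents) endpoints are reported in the text.
years = np.arange(2013, 2024)
counts = np.array([1, 5, 12, 25, 45, 70, 105, 150, 200, 255, 310])

slope, intercept = np.polyfit(years, counts, 1)   # fitted line: slope*year + intercept
pred = slope * years + intercept
ss_res = np.sum((counts - pred) ** 2)             # residual sum of squares
ss_tot = np.sum((counts - counts.mean()) ** 2)    # total sum of squares
r_squared = 1 - ss_res / ss_tot

print(f"slope = {slope:+.1f} papers/year, R^2 = {r_squared:.2f}")
```

Even with these placeholder counts the fit yields a strongly positive slope and an R² above 0.9, consistent with the near-linear growth the section describes.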
The pie chart in Figure 3 illustrates the distribution of publications across document types. The majority are journal articles, constituting 841 documents, followed by review papers with 85 documents. Conference papers make up only a small fraction of the final dataset, with five documents. This breakdown highlights the prevalence of traditional journal articles as the primary format through which research findings in this field are disseminated.
Figure 3.
Distribution of document types (2013–2023).
As seen in Figure 4, the top sources of publications on deep learning-based segmentation of brain tumors are Biomedical Signal Processing and Control, Computers in Biology and Medicine, IEEE Access, Multimedia Tools and Applications, and Medical Image Analysis, which together account for 162 published documents, or 17.40% of the total publications (TP) on brain tumor research over the last 10 years: Biomedical Signal Processing and Control (44, or 4.72% of TP), Computers in Biology and Medicine (32, or 3.43%), IEEE Access (32, or 3.43%), Multimedia Tools and Applications (29, or 3.20%), and Medical Image Analysis (24, or 2.57%). According to 2023 data from the Scopus database, Biomedical Signal Processing and Control has a CiteScore of 8.2 and an Impact Factor (IF) of 5.1; Computers in Biology and Medicine has a CiteScore of 9.2 and an IF of 7.7; IEEE Access has a CiteScore of 9.0 and an IF of 3.9; Multimedia Tools and Applications has a CiteScore of 6.1 and an IF of 3.6; and Medical Image Analysis has a CiteScore of 19.9 and an IF of 10.9. These figures indicate that articles published in Medical Image Analysis, in particular, tend to achieve high impact, which likely influences researchers’ publication decisions.
Figure 4.
Top journal sources on deep learning-based segmentation research (2013–2023).
Figure 5 depicts the distribution of publications across various subject areas. The data reveals that computer science dominates the field with the largest share, covering 517 documents, corresponding to 28.1% of the pie chart. Medicine follows closely with 448 documents, representing a substantial 24.2% of the overall research output. Engineering also holds a significant share with 352 documents (19.1%). Mathematics, physics and astronomy, and materials science also contribute to this field, though their percentages are comparatively smaller. The remaining subject areas, such as decision sciences; biochemistry, genetics, and molecular biology; energy; and health professions, show varied percentages, each reflecting their respective contributions. Additionally, a category labeled “Others” encompasses a diverse range of subjects, including neuroscience, social sciences, chemical engineering, chemistry, business, management and accounting, immunology and microbiology, environmental science, agricultural and biological sciences, nursing, and pharmacology, toxicology, and pharmaceutics, collectively representing 50 documents. The pie chart visually communicates the distribution of publications across these subject areas, providing a comprehensive overview of the research landscape.
Figure 5.
Subject-wise distribution of articles (2013–2023).
The field of brain tumor segmentation has seen rapid advancements in deep learning architectures. Table 2 summarizes the key methodologies and their adoption trends.
Table 2.
Evolution of deep learning architectures over 2013–2023.
| Architecture | Timeframe | Key contributions |
|---|---|---|
| CNNs | 2013–2016 | Early adoption of convolutional neural networks for tumor localization. |
| U-Net | 2016–2019 | Dominated with skip-connections for precise segmentation of tumor subregions. |
| Transformers | 2020–2023 | Leveraged self-attention for global context; hybrid models emerged. |
CNN: convolutional neural network.
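The U-Net skip connections named in Table 2 concatenate an encoder feature map with the upsampled decoder map so that fine spatial detail survives into the decoder. A minimal numpy sketch of that merge step, with purely illustrative shapes:

```python
import numpy as np

# Minimal sketch of a U-Net-style skip connection (cf. Table 2): the
# upsampled decoder map is concatenated channel-wise with the encoder
# map from the same resolution level. All shapes here are illustrative.
def upsample2x(x):
    """Nearest-neighbour 2x upsampling of a (C, H, W) feature map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

encoder_feat = np.random.rand(16, 64, 64)  # skip path from the encoder
decoder_feat = np.random.rand(32, 32, 32)  # coarser map from the decoder

merged = np.concatenate([encoder_feat, upsample2x(decoder_feat)], axis=0)
print(merged.shape)  # (48, 64, 64)
```

In a real network the merged tensor would then pass through further convolutions; the point of the sketch is only the channel-wise fusion that gives U-Net its precise tumor-subregion boundaries.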
Major research stakeholders
An exploration of the research landscape and scientific advancements within any discipline can be conducted through an analysis of key research stakeholders. These entities encompass the researchers, affiliations, organizations, and countries actively engaged in the field. Figure 6 illustrates the top five researchers currently active in deep learning-based brain tumor segmentation research. Leading the list with eight documents is Tongxue Zhou, recognized in Stanford's list of the World's Top 2% Scientists, who has contributed significantly to brain tumor research. Her work focuses on developing advanced deep learning and multimodal fusion techniques for enhanced brain tumor segmentation, even with missing imaging data; she has published numerous articles on topics such as boundary-aware and cross-modal fusion networks, disentangled representation learning, and uncertainty quantification for precise brain tumor segmentation. She is closely followed by Jan Borggrefe, Yang Lei, Tianming Liu, and Xiaofeng Yang, with seven documents each. These authors exhibit a notable presence in the domain, and the graph effectively conveys the distribution of publications among them, providing a visual representation of their impact on the scholarly landscape.
Figure 6.
Top researchers (2013–2023).
Further analysis, as depicted in Figure 7, identifies the top 10 countries that have carried out rigorous research in this field, collectively producing more than 842 research papers within the 2013–2023 period. China stands out as the leading contributor with 257 publications, showcasing a substantial presence in the research landscape. India follows as the second-largest contributor with 238 documents, indicating a significant research output. The United States, United Kingdom, and Pakistan also feature prominently with 175, 66, and 55 publications, respectively, reflecting diverse global participation. The data further highlights active research contributions from Saudi Arabia, Germany, South Korea, France, and Iran, contributing 49, 46, 40, 34, and 29 documents, respectively. This distribution underscores the global collaboration and engagement of various nations in advancing research on the topic.
Figure 7.
Top countries in the research on deep learning-based segmentation techniques (2013–2023).
Access to advanced research facilities contributes significantly to global scientific output. Institutions, with their pivotal role in fostering academic research and scientific development, are critical to enhancing both the quantity and quality of publications, which ultimately affects their international rankings. Figure 8 provides an analysis of the affiliations associated with significant contributions to the field, revealing the distribution of documents among various institutions. The Ministry of Education of the People's Republic of China, Imperial College London, and Harvard Medical School emerged as prominent contributors with 17 documents each, followed closely by COMSATS University Islamabad with 14 documents. Uniklinik Köln, Fudan University, Technische Universität München, Massachusetts General Hospital, and the Chinese Academy of Sciences demonstrate noteworthy involvement with 13 documents each. Another notable contributor is the University of Pennsylvania with 12 documents.
Figure 8.
Research output of top institutions (2013–2023).
Top funding organizations
The researchers’ high productivity in the field can be attributed to several factors. Financial support, in the form of research grants, publication incentives, and monetary awards, plays a crucial role in fostering growth. This study examines the influence of financial support on global research publications, presenting a bar chart depicting the distribution of publications by funding body. As shown in Figure 9, the National Natural Science Foundation of China stands out with the highest number of documents at 128. Its funded research has led to the development of sophisticated deep learning models for accurate tumor identification, including glioblastoma, often leveraging multimodal imaging data. Key achievements include novel radiomic evaluations for gliomas, methods for barely supervised brain tumor segmentation, and the application of advanced neural networks, such as three-dimensional (3D) U-Net with transformer and feature-augmentation components, for automated segmentation. These efforts underscore a strong focus on improving diagnostic precision and treatment planning through innovative computational methods. It is followed by the National Institutes of Health and the National Cancer Institute, with 64 and 40 documents, respectively. Sponsors such as the Ministry of Science and Technology of the People's Republic of China, the U.S. Department of Health and Human Services, and the National Key Research and Development Program of China contributed 34, 26, and 22 documents, respectively, while the European Commission funded 19 documents, and the National Institute of Biomedical Imaging and Bioengineering and Nvidia funded 15 documents each. This diverse array of funding sources highlights the global reach of, and collaboration in, this research area.
Figure 9.
Top 10 funding organizations in the research.
Top cited publications
This section examines the most highly cited publications in the field of deep learning-based brain tumor segmentation. It assesses the research impact of these publications by presenting an overview of the top 10 most cited documents during the review period, as displayed in Table 3, while a deeper analysis of their segmentation performance, particularly Dice scores, is provided in Table 4. The analysis reveals that these publications have garnered significant attention, with citations ranging from 381 to 2562. Notably, the document titled “Efficient multi-scale 3D CNN with fully connected conditional random field for accurate brain lesion segmentation” by Kamnitsas et al. 17 stands out as the most cited, with 2562 citations, followed by Havaei et al. 15 and Zhou et al. 16 with 2298 and 2242 citations, respectively. The data underscores the high research impact of, and scholarly interest in, the field, suggesting a potential increase in the number of publications, citations, and financial support in the coming years. The entry of new researchers, affiliations, countries, and collaborations is expected to stimulate further growth. A bibliometric analysis is also included to provide insights into the current status and prospects of brain tumor segmentation techniques (Figure 10).
Table 3.
Top 10 highly cited publications in the segmentation of brain tumors research from 2013 to 2023.
| References | Title | Journal/source title | Cited by |
|---|---|---|---|
| 17 | Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation | Medical Image Analysis | 2562 |
| 15 | Brain tumor segmentation with deep neural networks | Medical Image Analysis | 2298 |
| 16 | UNet++: redesigning skip connections to exploit multiscale features in image segmentation | IEEE Transactions on Medical Imaging | 2242 |
| 18 | Brain tumor segmentation using convolutional neural networks in MRI images | IEEE Transactions on Medical Imaging | 2066 |
| 19 | A deep learning model integrating FCNNs and CRFs for brain tumor segmentation | Medical Image Analysis | 636 |
| 20 | Interactive medical image segmentation using deep learning with image-specific fine tuning | IEEE Transactions on Medical Imaging | 631 |
| 21 | Multi-grade brain tumor classification using deep CNN with extensive data augmentation | Journal of Computational Science | 602 |
| 22 | The applications of radiomics in precision diagnosis and treatment of oncology: opportunities and challenges | Theranostics | 585 |
| 23 | Multi-classification of brain tumor images using deep neural networks | IEEE Access | 470 |
| 24 | AnatomyNet: deep learning for fast and fully automated whole-volume segmentation of head and neck anatomy | Medical Physics | 381 |
Table 4.
Summary of publications on deep learning-based segmentation techniques for brain tumor detection.
| References | Dataset | Objectives | Performance metrics |
|---|---|---|---|
| Liu et al. (2023) 25 | BraTS2018, BraTS2020 | To improve accuracy by fully exploiting the complementary information from different modalities | Dice score: WT: 82.91%, TC: 72.62%, ET: 54.80% |
| Liu et al. (2021) 26 | BraTS2017 (Training set: 285, Validation set: 44), BraTS2018 (Training set: 285, Validation set: 66), BraTS2019 (Training set: 335, Validation set: 125) | To resolve the inter-class ambiguity issue by introducing feature interaction graph reasoning as a parallel auxiliary branch to model | Dice score: WT: 0.903 TC: 0.873 ET: 0.685 |
| Wang et al. (2018) 20 | BraTS2015 (Training set: 274, Validation set: 19, Testing set: 25) | Segmentation of previously unseen object classes and fine-tuning the CNN model on the fly for image-wise adaptation guided by user interactions | Dice score: CT: 86.13%, WT: 86.29% |
| Zhou et al. (2020) 27 | BraTS2018 (Training set: 285, Validation set: 191, Testing set: 66) BraTS2017 (Training set: 285, Validation set: 46) BraTS2015 (Training set: 274, Testing set: 110) | To overcome the shortcomings of Monte Carlo approach that trains one individual network for each task. | Dice score: CT: 0.8422 WT: 0.9071 ET: 0.7852 |
| Zhou et al. (2021) 28 | BraTS2018 (Training set: 285, Validation set: 66), BraTS2019 (Training set: 335, Validation set: 125) | To effectively learn the latent representation by introducing a correlation model and to segment brain tumors based on a fusion strategy | Different modalities, BraTS2019 dataset: WT: 3 out of 15, TC: 0 out of 15, ET: 1 out of 15 |
| Zhao et al. (2018) 29 | BraTS2013, BraTS2015, BraTS2016 | To segment brain images slice-by-slice, which is much faster than image patch-based segmentation methods | Dice similarity coefficient: WT: 90.1, CT: 75.4, ET: 72.8 |
| Yu et al. (2021) 30 | BraTS2018 (Training set: 285, Validation set: 66), BraTS2019 (Training set: 335, Validation set: 125) | To explicitly handle MR intensity variation for segmentation through jointly learning sample-adaptive intensity LUTs and segmentation | SA-LuT-Nets (3D UNet): WT: 0.8621, TC: 0.6450, ET: 0.3595; SA-LuT-Nets (DMNet): WT: 0.8746, TC: 0.6459, ET: 0.3776 |
| Díaz-Pernas et al. (2021) 31 | CE-MRI dataset (Training set: 2452, Testing set: 612) | Extracting discriminant texture features of different kinds of tumors by proposing a multiscale processing strategy | Dice score: Meningioma 0.894, Glioma 0.779, Pituitary tumor 0.813; Tumor classification accuracy: 0.973 |
| Sharif et al. (2020) 32 | BraTS2013 (30), BraTS2015 (274), BraTS2017 (431), BraTS2018 (476) | To address the loss of core information from an image in patch-based models, which also increases training time | Dice score (BraTS2017): CT: 83.73%, WT: 93.7%, ET: 79.94%; (BraTS2018): CT: 88.34%, WT: 91.2%, ET: 81.84% |
| Raza et al. (2023) 33 | BraTS2020 (Training set: 295, Validation set: 37, Testing set: 37), BraTS2021 for validation | To utilize both low- and high-level features by exploiting skip connections for predicting segmentation masks; parallel paths decrease training time and improve model generalization by combining low- and high-level features | Average Dice score (BraTS2020): CT: 0.8357, WT: 0.8660, ET: 0.8004; (BraTS2021): CT: 0.8400, WT: 0.8601, ET: 0.8221 |
| Rehman et al. (2023) 34 | BraTS2017 (431), BraTS2018 (476), BraTS2019 (Training set: 335, Validation set: 125) | To address the loss of location information in the reconstruction of the segmented image by obtaining deep features without loss of location information | Dice coefficient (BraTS2019): CT: 0.814, ET: 0.763, WT: 0.884; (BraTS2018): CT: 0.814, ET: 0.767, WT: 0.869; (BraTS2017): CT: 0.821, ET: 0.776, WT: 0.896 |
| Chang et al. (2023) 35 | BraTS2018 (Training set: 285, Validation set: 66) BraTS2019 (Training set: 335, Validation set: 125) BraTS2020 (Training set: 369, Validation set: 125) | To address the class imbalance issue and the loss of information due to multiple pooling layers in the encoder architecture | Dice score: ET: 78.1, WT: 89.4, CT: 83.2 |
| Sarala et al. (2023) 36 | BraTS-IXI dataset (Training set: 217, Testing set: 515) | To improve the classification rate for lower pixel-resolution brain images | Quantitative analysis parameters (%): Glioma grades, HGG: Se 98.9, Sp 99.04, Acc 98.85, FPI 1.13, FNI 98.82; LGG: Se 98.67, Sp 98.82, Acc 98.98, FPI 1.4, FNI 97.99 |
| Das et al. (2022) 37 | BraTS2017 | To address limited training data and overfitting | Dice score: Complete: 0.85, CT: 0.91, ET: 0.88, Edema: 0.83 |
| Zheng et al. (2022) 38 | “Brain Tumor MRI Image Classification” dataset from Kaggle (Training set: 2475, Test set: 289) | To address the shortage of training data by adopting transposed convolution, up-sampling, and fusing context and detail features | MIoU (%): 86.8, MPA (%): 90.74, MPrecision (%): 94.63, MDice (%): 92.62, Hausdorff95 (mm): 17.71, ASD (mm): 0.37 |
| Aggarwal et al. (2023) 39 | BraTS2020 (Training set: 125, Testing set: 169) | To address the vanishing gradient problem by enabling an additional route for the gradient to flow across | Dice score: CT: 0.924, WT: 0.864, ET: 0.945 |
| Zhou et al. (2020) 16 | BraTS2013 (Training set: 30, Testing set: 30) | To overcome the limitations of U-Net and fully convolutional networks: (1) their optimal depth is a priori unknown, requiring extensive architecture search or inefficient ensembles of models of varying depths; and (2) their skip connections impose an unnecessarily restrictive fusion scheme | Intersection over union (IoU): 95.10, Dice: 91.36, score: 0.414 |
| Sunsuhi et al. (2022) 40 | TCIA-Brain tumor dataset | To effectively probe and reduce the shapes in the test sample image | Meningioma 0.7894, Glioma 0.8176, Pituitary 0.7747 |
| Farajzadeh et al. (2023) 41 | BraTS’20 (Training set: 369, Validation set: 125) | To improve segmentation and classification accuracy by combining hyper kernels for convolution layers with attention layers for segmentation | Class tumor, brain, micro, macro, weighted: Accuracy (%), Precision (%), Recall (%), F-score: 98.81, 98.99, 98.63, 98.81 |
| Vankdothu et al. (2022) 42 | Kaggle dataset Brain Tumor Classification (MRI) (Training set: 2870, Testing set: 394) | To automate the process, image segmentation is performed using the improved K-means clustering (IKMC) algorithm, and the gray level co-occurrence matrix (GLCM) is used for feature extraction | Specificity (%): 89.28, Sensitivity (%): 98.42, Accuracy (%): 95.17 |
| Gab Allah et al. (2023) 43 | (Cheng et al., 2015) MRI brain tumor dataset | To reduce background noise effect during the imaging process | Dice scores: Meningioma 88.8%, Glioma 91.76%, Pituitary tumors 87.28% |
| Fang et al. (2022) 44 | BraTS2015 dataset | To address the problem of losing some boundary features in the convolution process which results in incomplete feature maps obtained by the up-sampling operation | Dice coefficient: Whole 0.99 Tumor 0.92 Core 0.90 Specificity: Whole 0.95 Tumor 0.93 Core 0.84 Sensitivity: Whole 0.99 Tumor 0.99 Core 0.99 |
| Bai K et al. (2020) 45 | Integrating improved U-Net and continuous maximum flow algorithm for 3D brain tumor image segmentation | To address the issues of the relatively small size of brain tumor image datasets, severe class imbalance, and low precision by integrating CNNs with conventional methods | Mean Dice similarity coefficients: Whole 0.9072, Tumor 0.8578, Enhancing 0.7837 |
| Bennai et al. (2020) 46 | BraTS dataset | To solve the requirement issue of large volumes of manually segmented data, time and energy for learning, and risk of over-fitting | Average Dice coefficient: 86% |
| Alagarsamy et al. (2019) 47 | Clinical datasets (17), Harvard Brainweb Repository (21), Brainweb Simulated Database (27), BraTS-2013 Challenge + BraTS-SICAS Repository (180) | To reduce computational complexity by identifying optimal cluster locations, hence easing the clustering operation | Sensitivity: 98.56 ± 1.2, Specificity: 97.67 ± 1.3 |
| Amirmoezzi et al. (2019) 48 | BraTS2012 | To reduce the effect of these artifacts and identify vital information related to the tumor | Simulated data: Dice index: >0.85 Sensitivity: >0.90 Specificity: >0.98 Real data: Dice index: >0.80 Sensitivity: >0.84 Specificity: >0.98 |
| Lim et al. (2018) 49 | BraTS2013 | To deal with the problem of unclear boundaries between an object and its surrounding area, as well as uneven brightness or color within the object itself | Average Dice accuracy: High-grade: 0.70, Low-grade: 0.63 |
| Li et al. (2016) 50 | BraTS2013 | To solve the spatial and structural variability problem by presenting a probabilistic model which combines sparse representation and the Markov random field (MRF) | Dice coefficient: High-grade challenge dataset Complete 0.85 Core 0.75 Enhancing 0.69 Low-grade challenge dataset Complete 0.84 Core 0.54 Enhancing 0.57 High-grade leader board dataset: Complete 0.73 Core 0.56 Enhancing 0.54 |
| Walsh et al. (2022) 51 | BITE dataset 26 | To solve the large amount of data requirement for deep learning models | Coronal: PixelAcc (%) 99 MeanAcc (%) 88 MeanIoU (%) 84 FWIoU (%) 99 Sagittal: PixelAcc (%) 99 MeanAcc (%) 76 MeanIoU (%) 75 FWIoU (%) 99 Transversal: PixelAcc (%) 99 MeanAcc (%) 85 MeanIoU (%) 84 FWIoU (%) 99 Full: PixelAcc (%) 99 MeanAcc (%) 91 MeanIoU (%) 89 FWIoU (%) 99 |
| Ai et al. (2015) 52 | BraTS2012 | To propose an energy cost function-based segmentation method to overcome the overlap between the intensity values of grayscales of tumor, edema and healthy brain | Dice coefficient: 78% |
| Naser et al. (2020) 53 | Cancer Imaging Archive (TCIA) 49 | To automate tumor segmentation process by CNN based on the U-net | Dice similarity coefficient: 0.84 Accuracy: 0.92 |
TCIA: The Cancer Imaging Archive; CNN: convolutional neural network; MR: magnetic resonance; MRI: magnetic resonance imaging; BraTS: brain tumor segmentation (challenge/dataset); BITE: brain imaging tumor evaluation; SICAS: Swiss Institute for Computer Assisted Surgery; FCNN: fully convolutional neural network; CRF: conditional random field; LuT: lookup table; HGG: high-grade glioma; LGG: low-grade glioma; Se: sensitivity; Sp: specificity; Acc: accuracy; FPI: false positive index; FNI: false negative index; MIoU: mean intersection over union; MPA: mean pixel accuracy; ASD: average symmetric distance; WT: whole tumor; TC: tumor core; ET: enhancing tumor; CT: core tumor.
Figure 10.
Network visualization map for the author-based co-citations on image segmentation of brain tumors.
Bibliometric analysis
Bibliometric analysis, an innovative approach, is employed to scrutinize the research landscape and scientific advancements in any research field or topic.54,55 Utilizing mathematical and statistical tools, it identifies, screens, and analyzes research publications or documents to reveal co-authorships, keyword occurrences, and co-citations on a given topic.56,57 In this study, the bibliometric analysis of image segmentation techniques for brain tumors was conducted using VOSviewer software (version 1.6.17) to elucidate the research landscape and scientific developments.
Co-author analysis
Figure 11 displays the network visualization map illustrating co-authorship in brain tumor segmentation research. The analysis was conducted with a criterion of a minimum of two published documents and one citation per author, leading to the selection of 73 authors for the co-author analysis. As depicted in Figure 12, the largest interconnected group consists of 17 authors, forming three clusters of varying sizes. The predominant red cluster, comprising six authors, includes notable figures such as Govindaraj, Vishnuvarthanan. The second-largest green cluster features prominent researchers such as Rajasekaran, M. Pallikonda, and Vishnuvarthanan, among others. Conversely, the smallest, cobalt blue cluster comprises only two authors, including Zhang Yudong, who, as mentioned earlier, stands out as one of the most prolific authors in brain tumor segmentation research. These findings not only establish Zhang as one of the most prolific authors but also highlight him as a highly influential author with significant collaboration rates. Subsequently, Figure 13 illustrates the rate and extent of collaborations among countries.
Figure 11.
Network visualization map illustrating co-authorship in brain tumor segmentation research.
Figure 12.
Number of citations by published documents.
Figure 13.
Country-based citation index.
Keyword occurrence
Studying the frequency of keywords is a crucial method for exploring the research landscape in any field. In this study, we specifically conducted a keyword co-occurrence analysis within the realm of image-based segmentation research. Figure 14 presents a visual map showcasing the most prevalent keywords in this research area. The keyword co-occurrence analysis revealed a strong association between “deep learning,” “brain tumors,” and “image segmentation,” highlighting the increasing dominance of deep learning techniques in this field. This trend signifies a paradigm shift from traditional image processing methods, as deep learning models have demonstrated superior performance in handling the inherent variability and complexity of brain tumor MRI data. The prevalence of these keywords reflects a growing focus on improving the accuracy and efficiency of brain tumor detection, which has crucial implications for timely diagnosis and treatment planning. Furthermore, the co-occurrence of “3D U-Net” and “transformer” suggests a move toward more advanced architectures designed to capture the 3D nature of brain tumors. This is significant because precise 3D tumor segmentation is essential for accurate tumor volume estimation and surgical navigation. While this keyword analysis provides a valuable overview of research trends, it is important to acknowledge that it offers a simplified view of the complex research landscape and does not capture the full depth of individual studies.
Figure 14.
Network visualization map for keyword co-occurrence on image segmentation of brain tumor research.
The examination relied on a minimum keyword occurrence of four, wherein only 316 keywords fulfilled this criterion out of a total of 2494 keywords. The search outcomes revealed five clusters, each encompassing 2–123 items. Notably, the keywords “image segmentation,” “brain tumors,” and “deep learning” were observed to be the most frequent, occurring 357, 153, and 93 times, respectively, with total link strengths of 5377, 2474, and 1663. The substantial prevalence of these keywords in the search results can be attributed to the design and implementation of the search query and study title. Additional noteworthy keywords identified in the analysis include image processing, magnetic resonance, brain mapping, brain neoplasm, and convolution. The primary keywords in the realm of image segmentation research on brain tumors suggest a connection and overlap across diverse disciplines, including computer science, computer engineering, artificial intelligence (AI), mathematics, and statistical analysis. Consequently, it is reasonable to infer that the topic is multidisciplinary, encompassing broad research and scientific themes.
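The occurrence thresholding and pairwise co-occurrence counting that VOSviewer performs can be illustrated with a minimal, self-contained sketch. The keyword records below are hypothetical stand-ins for the analyzed documents, not data from this review:

```python
from collections import Counter
from itertools import combinations

# Hypothetical per-document keyword lists (illustrative only).
records = [
    ["deep learning", "brain tumors", "image segmentation"],
    ["deep learning", "image segmentation", "u-net"],
    ["brain tumors", "image segmentation", "mri"],
    ["deep learning", "brain tumors", "image segmentation", "mri"],
]

# Count how often each keyword appears across documents.
occurrence = Counter(kw for rec in records for kw in rec)

# Count how often each unordered pair of keywords appears together.
co_occurrence = Counter(
    tuple(sorted(pair)) for rec in records for pair in combinations(set(rec), 2)
)

# Keep only keywords meeting a minimum-occurrence threshold,
# analogous to the threshold of four used in the analysis above.
min_occurrence = 3
kept = {kw: n for kw, n in occurrence.items() if n >= min_occurrence}
print(kept)
print(co_occurrence[("deep learning", "image segmentation")])
```

On these toy records, only three keywords survive the threshold, mirroring how 316 of 2494 keywords met the criterion in the actual analysis.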
Citation analysis
The number of citations garnered by published documents serves as a metric indicating the extent of their research and scientific impact (Figure 12). Consequently, a citation analysis was conducted on brain tumor segmentation research to assess the citation rates of authors, journals/sources, and countries actively engaged in the topic. The analysis, focused on documents with a minimum of 20 citations, revealed that 95 documents met the specified criteria. The search results unveiled 12 clusters, each comprising three to eight items, while three clusters consisted of a single item. Noteworthy findings include the identification of “Pereira (2016),” “Vishnuvarthanan (2016),” and “Ranjbarzadeh (2023)” as the most prominent documents, with 23, 7, and 4 links, respectively, and total citations of 1844, 130, and 37.
Figure 15 displays the author-based citation analysis. Limited to a maximum of 10 authors per document, a minimum of two documents per author, and at least one citation per author, the analysis revealed that 73 authors fulfilled the specified criteria. The search outcomes revealed four clusters, each containing three to seven items. Noteworthy discoveries include the recognition of “Govindaraj, Vishnuvarthanan,” “Alagarsamy, Saravanan,” and “Murugan, Pallikonda Rajasekaran” as the most notable authors, with 17, 9, and 12 links, respectively, amounting to a combined total link strength of 217; individually, they secured link strengths of 76, 24, and 56.
Figure 15.
Author-based citation analysis.
Figure 13 displays the country-wise citation index. The assessment relies on a minimum of five published documents and two citations per country. The analysis indicated that 18 countries met the specified threshold and were consequently chosen for further examination. The results revealed four clusters within the most extensive set of interconnected countries, each comprising three to six countries. The most notable, green cluster encompasses co-authorships/collaborations involving countries such as India, China, South Korea, Turkey, and Taiwan. Notably, the highest total link strength, 40, is observed between China and India, indicating that authors from these countries have the highest number of collaborative publications on the topic. Overall, the country-wise citation analysis highlights strong collaborative ties, suggesting that research on segmentation techniques for brain tumors is characterized by high rates of publication and scholarly collaboration. This underscores a highly dynamic research landscape, shaped by significant scientific advances in the field each year.
General landscape of included documents in the PubMed database
A total of 45 unique documents, comprising articles and reviews, were retrieved from the PubMed database. These records, spanning publications from 2013 to 2023, were selected for analysis due to PubMed's comprehensive coverage of the literature, including citation data. The bibliometric analysis was conducted using Biblioshiny in R Studio to evaluate the data extracted from PubMed.
Most relevant country by corresponding author
Figure 16 illustrates the distribution of the top 20 countries based on the corresponding author's affiliation, as derived from publications indexed in the PubMed database. China emerged as the leading contributor with 32 articles (single country publications (SCP) = 23, multiple country publications (MCP) = 9), followed by the United States with 13 publications (SCP = 11, MCP = 2). India ranked third with eight publications (SCP = 7, MCP = 1), while Pakistan ranked fourth, contributing seven publications (SCP = 3, MCP = 4). These findings are further summarized in Table 5.
Figure 16.
Corresponding author's most relevant countries.
Most relevant affiliation
Figure 17 presents the top 10 most productive affiliations actively contributing to deep learning-based brain tumor image segmentation research in recent years. Each of these institutions has published more than 10 studies on the topic, and most are located in Europe and the United States. The most research-active institutions include University College London (UK) and the University of California, San Francisco (United States). Other notable contributors are the University Medical Center Utrecht (Netherlands), the University of Manchester and the Christie NHS Foundation Trust (UK), and the Università degli Studi di Milano (Italy). Additionally, institutions such as the Mayo Clinic (United States), the University of Arizona (United States), and the Medical University of Vienna (Austria) have made substantial contributions to the field. The Netherlands Cancer Institute and Vrije Universiteit Amsterdam (Netherlands) further highlight the international character of these collaborations. (Table 5)
Figure 17.
Most relevant affiliation.
Table 5.
Most relevant countries by corresponding author.
| Country | Articles | SCP | MCP | Freq. | MCP_Ratio |
|---|---|---|---|---|---|
| China | 32 | 23 | 9 | 0.386 | 0.281 |
| United States | 13 | 11 | 2 | 0.157 | 0.154 |
| India | 8 | 7 | 1 | 0.096 | 0.125 |
| Pakistan | 7 | 3 | 4 | 0.084 | 0.571 |
| Canada | 4 | 2 | 2 | 0.048 | 0.5 |
| Netherlands | 2 | 1 | 1 | 0.024 | 0.5 |
| Poland | 2 | 1 | 1 | 0.024 | 0.5 |
| Switzerland | 2 | 2 | 0 | 0.024 | 0 |
| Australia | 2 | 1 | 1 | 0.024 | 0.5 |
| Ethiopia | 1 | 1 | 0 | 0.012 | 0 |
Most relevant sources
Figure 18 presents the top 10 source titles for deep learning-based brain tumor image segmentation research publications from 2013 to 2023. Analysis of PubMed data reveals that these source titles, which include book series, conference proceedings, and journals, have each published one or more documents on this topic. The top source title is Diagnostics (Basel, Switzerland) with six publications, followed by Computers in Biology and Medicine with five. Other notable sources include Medical Image Analysis, Medicine, and Frontiers in Neuroscience, with four publications each. Computer Methods and Programs in Biomedicine, Frontiers in Oncology, Journal of Digital Imaging, Medical Physics, and Sensors (Basel, Switzerland) published three articles each, and Frontiers in Computational Neuroscience published two. This distribution underscores the diversity of journals contributing to advancements in the field.
Figure 18.
Most relevant sources.
Most relevant words
Figures 19 and 20 illustrate the most relevant keywords through a tree plot and a graph, respectively. A co-occurrence map of high-frequency keywords in segmentation techniques for detecting brain tumors was generated to identify emerging research directions and hotspots, providing researchers with a clearer understanding of key scientific trends. The term “humans” emerged as the most prominent keyword, appearing 47 times and accounting for 15% of occurrences, followed by “brain neoplasm/diagnostic imaging” with 27 appearances (8%) and “computer neural networks” with 25 (8%). Other significant keywords included “deep learning” and “computer assisted image processing,” with 19 appearances (6%) each. These findings highlight the central themes and focus areas within deep learning-based brain tumor image segmentation research.
Figure 19.
Tree map of most relevant words.
Figure 20.
Most relevant word occurrences.
General landscape of included documents in the Web of Science database
The initial Web of Science dataset contained 2181 documents on deep learning-based brain tumor image segmentation. After applying filters, the collection was reduced to 1962 documents spanning 2013 to 2023. The refined collection consists mainly of technical papers (1300), followed by 524 conference proceedings, 135 review articles, 28 early access papers, and 12 retracted publications. This distribution highlights the varied nature of the research output within the specified timeframe. The bibliographic analysis of Web of Science data, conducted via VOSviewer, indicates that China, India, and the United States are the main contributing countries, followed by Brunei and Bangladesh, revealing a geographically diverse collaboration network, as shown in Figure 21. Moreover, authors such as Feng Qicheng et al.; Li, Zhou et al.; Yang, H. et al.; and Chen, K.M. are the most active and highly participating authors, as displayed in Figure 22. The keyword analysis in Figure 23 shows that techniques such as image classification, data augmentation, and CNNs are the center of focus, with applications dedicated to biomedical imaging, particularly brain tumor detection and MRI. The interdisciplinary nature of the research is evident from terms such as “neural networks,” “machine learning,” and “central-nervous-system,” which integrate computational methods with clinical diagnostics. Thus, this VOSviewer analysis reveals a collaborative and diverse research domain aimed at advancing biomedical imaging through machine learning and deep learning methods.
Figure 21.
Corresponding author's most relevant countries for Web of Science.
Figure 22.
Author-based co-citations on image segmentation of brain tumors for Web of Science.
Figure 23.
Keyword co-occurrence for Web of Science database.
Existing works in image segmentation of brain tumor
Table 4 provides a summary of various research studies focusing on segmentation techniques for detecting brain tumors. Multiple researchers have employed CNNs and object detection models to enhance algorithm performance, utilizing diverse evaluation metrics. The predominant performance metric featured in the table is the Dice score. While many studies showcased commendable results, they often share common weaknesses. The complexity of brain tumor structures, blurred borders, and external variables pose challenges in accurately inferring tumor regions in brain MRI images. Despite these challenges, deep learning methods, particularly CNNs, stand out as the latest and widely adopted approaches in brain MRI segmentation.
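Since the Dice score is the predominant metric in Table 4, a minimal sketch of its computation on binary masks may help the reader interpret the reported values. The toy masks below are illustrative and not drawn from any reviewed study:

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks.

    Dice = 2|A ∩ B| / (|A| + |B|); ranges from 0 (no overlap) to 1
    (perfect overlap). eps guards against division by zero.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))

# Toy 4x4 "tumor" masks: the prediction recovers 3 of 4 ground-truth voxels.
gt = np.zeros((4, 4)); gt[1:3, 1:3] = 1                 # 4 ground-truth voxels
pr = np.zeros((4, 4)); pr[1:3, 1:2] = 1; pr[1, 2] = 1   # 3 predicted, all correct
print(round(dice_score(pr, gt), 4))  # 2*3 / (3 + 4) ≈ 0.8571
```

In the BraTS studies above, this coefficient is computed separately for the whole tumor, tumor core, and enhancing tumor regions.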
Zheng et al. 38 proposed a solution for brain tumor segmentation models, addressing issues with edge detail segmentation, feature information reuse, and location information extraction, through a serial encoding–decoding structure. This structure enhances segmentation performance by incorporating hybrid dilated convolution modules and concatenation between serial network modules. Additionally, a new loss function is introduced to prioritize difficult-to-segment and difficult-to-classify samples, aiming to improve model focus and overall performance. A privileged semi-paired learning framework was presented by Liu et al. 25 This framework leverages a limited number of paired images to enhance the model's ability to capture and exploit complementary information between different modalities. Furthermore, the authors introduce a two-step curriculum disentanglement learning strategy, focusing on intra-modality and inter-modality disentanglement, which effectively separates the style and content of the input images. Lim and Mandava 49 introduced a three-phase approach for multisequence MR image segmentation. The first phase modifies a random walks algorithm to model image information, addressing ambiguous boundaries and intensity inhomogeneity. The second phase fuses information from image sequences using a weighted averaging method that integrates data information and user knowledge to determine sequence weights. The final phase utilizes information theoretic rough sets to handle ambiguous boundaries between visual objects and backgrounds. This approach integrates data and domain knowledge, offering a robust solution for multisequence MR image segmentation. The dual-path, multi-feature fusion deep network suggested by Fang and Wang 44 used kernel multiplexing methods to combine large-scale perceptual domain and nonlinear mapping features, enhancing information flow coherence. 
It reduced the overlapping frequency and the vanishing gradient phenomenon through residual and dense connections, mitigating multimodal channel interference. A dual-path model fused low-, middle-, and high-level features, diversifying glioma nonlinear structural features and improving segmentation precision. Gab Allah et al. 43 presented the edge U-Net model, a deep convolutional neural network with an encoder–decoder structure. The model improved tumor localization by merging boundary-related MRI data with main brain MRI data and integrated boundary-related information from various scales in the decoder phase. A novel loss function, enhanced with boundary information, further improved segmentation performance and precision.
A probabilistic model was suggested by Li et al. 58 that combines sparse representation and Markov random field (MRF) to handle spatial and structural variability in tumor segmentation. The model reframes tumor segmentation as a multiclassification task, labeling each voxel based on its highest posterior probability. This is calculated using maximum a posteriori (MAP) probability, which combines sparse representation with likelihood and MRF with prior probability. To solve the complex MAP problem, it is transformed into a minimum energy optimization problem, and graph cuts are employed to find the solution. The context-aware network proposed by Liu et al. 26 effectively captured high-dimensional and discriminative features by leveraging contexts from both the convolutional space and feature interaction graphs. Additionally, they introduced context-guided attentive conditional random fields, which selectively aggregate features to enhance segmentation performance. Rehman et al. 34 tackled the challenges of parameter limitations and computational complexity in brain tumor segmentation by proposing a novel encoder–decoder architecture. Their approach involved preprocessing steps including N4 bias field correction, z-score normalization, and rescaling to the [0, 1] range to enhance model training. To retain location information across modules, they introduced a residual spatial pyramid pooling module with parallel layers utilizing dilated convolutions. Additionally, an attention gate module effectively refined the segmented output from feature maps. A dual-path attention fusion network proposed by Chang et al. 35 enhanced the network scale using dual-path convolution and prevented degradation with residual connections. 
To effectively aggregate channel-level global and local information, they introduced an attention fusion module that merges multiscale feature maps, enriching them with semantic information and ensuring full attention to small tumor objects. Additionally, their 3D iterative dilated convolution merging module expanded the receptive field, improving context awareness. The dense residual U-Net (dResU-Net) model by Raza et al. 33 integrated elements from deep residual networks and the U-Net architecture. In this hybrid model, the residual network functions as the encoder, and the U-Net decoder addresses the vanishing gradient issue. By utilizing both low-level and high-level features, dResU-Net successfully improved prediction accuracy. Shortcut connections within the residual network retained low-level features, while skip connections between residual and convolutional blocks accelerated the training process. Wang et al. 20 highlighted the challenges for CNNs in terms of inadequate image-specific adaptation and poor generalizability to novel object classes (zero-shot learning). To address these issues, they integrated CNNs into a framework for bounding box and scribble-based segmentation. It allowed for image-specific fine-tuning of the CNN model for each test image, which can be performed either unsupervised (without user input) or supervised (with additional scribbles). They also introduced a weighted loss function that considers both network and interaction-based uncertainties during the fine-tuning process. To tackle the vanishing gradient problem in deep neural networks, Aggarwal et al. 37 introduced an effective approach for brain tumor segmentation utilizing an optimized enhanced residual network (ResNet). By meticulously preserving the details of all connection links and refining projection shortcuts, the improved ResNet demonstrated not only superior precision but also a faster learning process.
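Several of the studies above (Raza et al., Aggarwal et al.) rely on residual shortcut connections to ease vanishing gradients. A minimal NumPy sketch of a residual block's forward pass, purely illustrative and not taken from any of the reviewed architectures, shows the idea:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = ReLU(x + F(x)): the identity shortcut gives gradients an
    additional route around the transformation F, which is what eases
    the vanishing gradient problem in deep networks."""
    out = relu(x @ w1)      # first transformation with activation
    out = out @ w2          # second transformation (no activation yet)
    return relu(x + out)    # add the identity shortcut, then activate

# Illustrative shapes only; real segmentation networks use convolutions.
rng = np.random.default_rng(0)
x = rng.normal(size=(2, 8))
w1 = rng.normal(scale=0.1, size=(8, 8))
w2 = rng.normal(scale=0.1, size=(8, 8))
y = residual_block(x, w1, w2)
print(y.shape)  # (2, 8)
```

Note that if the learned transformation F collapses to zero, the block reduces to the identity (after ReLU), which is why residual networks can be trained at depths where plain networks degrade.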
Quantitative synthesis of limitations
A systematic analysis of the 31 studies in Table 4 reveals recurring limitations tied to dataset scale and model architecture; these are discussed in the sections that follow.
Limitations and future trends
In the past decade, deep learning methods have been widely utilized for brain tumor segmentation, employing multiple layers and complex steps in computer vision algorithms to analyze intensity and symmetry information. These properties are used to classify various tumor regions, including necrosis, edema, gliomas, and enhancing or nonenhancing tumors. Despite the frequent application of AI in this field, extensive clinical and biological validations are still required to confirm its efficacy. 60 First, obtaining validation data poses a significant limitation, particularly in computer vision scenarios where the effectiveness of models is often enhanced using unrelated datasets. Techniques such as data augmentation and transfer learning have been developed to address this issue. However, data augmentation should be avoided in sensitive and critical scenarios. Tumors vary widely in shape, location, and size, with irregular, discontinuous, and unclear boundaries, making it essential to develop comprehensive datasets specifically for individual diseases to support computer vision approaches effectively. The development of BraTS has significantly mitigated this limitation by providing multiple imaging modalities for brain tumor segmentation. 61 Second, biological validation indicates that imaging techniques and AI struggle to accurately interpret biological structures. This is due to the unpredictable growth patterns of tumor lesions post-removal and the inability of these methods to detect tumor invasions. Several limitations in the field of brain tumor segmentation can be visualized in Figure 24:
Lack of training datasets: there is a scarcity of adequate training datasets for deep learning methods in brain tumor segmentation.
Need for sophisticated segmentation techniques: advanced segmentation techniques are required when applying annotations or changing structured labels.
3D segmentation models using two-dimensional (2D) segmentation: many 3D segmentation models are implemented using 2D segmentation techniques, which may not fully capture the complexity of 3D structures. 62
Handling uncertainty and noise: training deep learning systems on the BraTS dataset requires careful consideration to handle uncertainty and noise effectively.
Small receptive field: the small receptive field of deep models limits their effectiveness on large datasets.
Resource constraints: training a network faces various constraints, including limited memory, GPU, and bandwidth.
Fixed kernel size issues: using a fixed-size kernel for image slicing can damage some valuable information, leading to loss of critical details.
Data augmentation and normalization challenges: data augmentation techniques such as scaling and rotation, along with normalization approaches, can lead to class imbalances when generating new lesions of brain tumors. 63
Figure 24.
Challenges in the brain tumor deep learning-based segmentation.
In the domain of medical image segmentation, particularly with end-to-end segmentation algorithms, a significant challenge arises from data imbalance. Data imbalance can be approached through several practical strategies beyond advanced loss functions like focal, Dice, or Tversky loss. Effective solutions include robust data augmentation techniques, such as elastic deformations, rotations, and intensity variations, to artificially expand the minority class. Furthermore, exploring synthetic data generation using generative adversarial networks (GANs) or diffusion models can provide realistic, diverse samples, while resampling techniques like oversampling minority classes or under-sampling majority classes can balance the training distribution.
Despite various strategies proposed by researchers to mitigate this issue—such as data augmentation, resizing images, and modifying the loss functions during network training—these approaches have not entirely resolved the problem. Data imbalance persists as a major hurdle, affecting the network's ability to accurately learn and segment features from underrepresented classes or small sample sizes in brain medical images. This imbalance often leads to skewed training outcomes, where the model performs well on majority classes but poorly on minority ones, thereby reducing overall segmentation accuracy.
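The loss-function strategies mentioned above can be made concrete with a minimal NumPy sketch of the Tversky loss (the Dice loss is the special case α = β = 0.5). The masks below are hypothetical toy data, not from any reviewed study:

```python
import numpy as np

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-7):
    """Tversky loss for binary segmentation (pred holds probabilities).

    alpha weighs false positives and beta false negatives; with class
    imbalance, setting beta > alpha penalizes missed tumor voxels more
    heavily. Dice loss is recovered at alpha = beta = 0.5.
    """
    pred, target = np.ravel(pred), np.ravel(target)
    tp = np.sum(pred * target)          # soft true positives
    fp = np.sum(pred * (1 - target))    # soft false positives
    fn = np.sum((1 - pred) * target)    # soft false negatives
    return 1.0 - tp / (tp + alpha * fp + beta * fn + eps)

# Rare foreground (2 of 8 voxels), mimicking tumor/background imbalance.
target = np.array([1, 1, 0, 0, 0, 0, 0, 0], dtype=float)
good = np.array([0.9, 0.8, 0.1, 0.0, 0.0, 0.1, 0.0, 0.0])  # finds the tumor
bad  = np.array([0.1, 0.2, 0.1, 0.0, 0.0, 0.1, 0.0, 0.0])  # misses the tumor
print(tversky_loss(good, target) < tversky_loss(bad, target))  # True
```

Unlike plain pixel-wise cross-entropy, this overlap-based loss is not dominated by the abundant background class, which is why such losses are a standard mitigation for the imbalance described above.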
Bringing research findings into routine clinical practice presents substantial challenges that extend beyond technical accuracy. A critical hurdle is the need for rigorous clinical validation studies conducted with large, diverse patient cohorts to confirm the generalizability and real-world efficacy of deep learning models across various demographics and disease presentations. Furthermore, the path to clinical integration is heavily influenced by regulatory bodies such as the Food and Drug Administration in the United States and the European Medicines Agency in Europe, which necessitate lengthy and complex approval processes for medical devices and software, including AI-driven tools. Beyond regulatory approval, integrating these AI tools into existing clinical workflows and IT infrastructure poses significant practical difficulties, requiring seamless compatibility with hospital information systems and electronic health records. Finally, clinician trust and acceptance are paramount; this often hinges on the ability of AI models to provide understandable and justifiable outputs, underscoring the growing need for explainable AI to foster confidence and facilitate informed decision-making by medical professionals.
Limitations of specific deep learning models
U-Net and 3D U-Net, while widely used, often struggle with small or irregularly shaped tumors because of their fixed receptive fields, and 3D U-Net models demand substantial computational resources, hindering their practicality for real-time application.64 Transformers, despite their promise in capturing long-range dependencies, require large amounts of labeled data, limiting their use when annotated datasets are scarce. GANs, although effective for data augmentation, are prone to training instability and may generate unrealistic images that do not generalize well to clinical data.65 Finally, ensemble methods, while improving segmentation accuracy through model combination, are computationally expensive and may be unsuitable for resource-constrained environments.66
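The fixed receptive field noted above can be quantified with a small helper (an illustrative sketch; the layer configurations shown are hypothetical, not taken from any cited architecture), which shows how slowly the field grows with 3x3 kernels unless strided downsampling is used:

```python
def receptive_field(layers):
    """Effective receptive field of a stack of convolution layers,
    each given as (kernel_size, stride). Illustrates why fixed-kernel
    encoders see only a limited neighborhood unless depth or
    downsampling increases the field."""
    rf, jump = 1, 1  # field size and input-pixel spacing between outputs
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf
```

For example, two plain 3x3 convolutions cover only a 5x5 region, whereas inserting a stride-2 layer doubles the growth rate of every layer after it.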
The inherent complexity of deep learning models can pose challenges for deployment and real-time applications. To address this, model compression techniques such as pruning and quantization can significantly reduce model size and computational demands. Knowledge distillation, where a smaller “student” model learns from a larger, more complex “teacher” model, offers another avenue for creating efficient yet performant models. Additionally, designing more efficient network architectures, like lightweight CNNs or MobileNet variants, directly contributes to models that can operate effectively on less powerful hardware or in time-sensitive clinical settings.
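The knowledge-distillation idea can be sketched as a Hinton-style softened KL objective (an illustrative NumPy implementation under our own assumptions, not a specific cited method): the student is trained to match the teacher's temperature-softened output distribution.

```python
import numpy as np

def softmax(z, T=1.0):
    """Numerically stable softmax at temperature T."""
    z = np.asarray(z, dtype=float) / T
    z -= z.max()
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between temperature-softened teacher and student
    distributions, scaled by T**2 so the gradient magnitude does not
    shrink as the temperature rises."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(T * T * np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))
```

A compact student segmentation network minimizing this term alongside its ordinary supervised loss can approach the teacher's accuracy at a fraction of the inference cost.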
Future directions
Future research should focus on developing more effective methods to address data imbalance in medical image segmentation. This could involve advanced techniques such as synthetic data generation, where artificial samples are created to balance the dataset, or more sophisticated loss functions that dynamically adjust weights to emphasize minority classes during training. Additionally, exploring semi-supervised or unsupervised learning approaches might help networks generalize better from limited labeled data. Another promising direction is the integration of domain adaptation techniques, which adapt models trained on larger, balanced datasets to work effectively on imbalanced medical datasets. Furthermore, leveraging multimodal imaging data can provide complementary information that may help in learning robust features from smaller classes, and advanced augmentation techniques, such as GANs, could be explored for creating realistic synthetic images. By improving the network's ability to learn characteristics from a small number of samples, these methods aim to enhance the segmentation accuracy and generalization capability of the model, ultimately leading to better diagnostic and treatment outcomes in clinical settings. Addressing data imbalance comprehensively will be crucial for the future development of reliable and accurate medical image segmentation algorithms.
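One simple instance of the dynamic loss re-weighting described above, sketched in NumPy (an illustrative example, not a prescription from the reviewed literature), computes inverse-frequency class weights from a segmentation label map for use in a weighted cross-entropy:

```python
import numpy as np

def inverse_frequency_weights(label_map, n_classes):
    """Per-class weights inversely proportional to voxel frequency,
    normalized to average 1, so rare tumor sub-regions receive
    proportionally larger weight in a weighted cross-entropy loss."""
    counts = np.bincount(label_map.ravel(), minlength=n_classes).astype(float)
    freqs = counts / counts.sum()
    weights = 1.0 / np.maximum(freqs, 1e-7)  # guard against empty classes
    return weights / weights.mean()
```

In practice such weights are often recomputed per batch or smoothed (e.g., by taking a square root) to avoid over-amplifying extremely rare labels.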
Currently, supervised learning remains the predominant approach for medical image segmentation. However, this method requires high-quality input data and extensive labeled datasets, which are labor-intensive and expensive to obtain. These high costs and resource demands render supervised algorithms impractical for many medical applications. Therefore, the future of medical image analysis lies in the development of weakly supervised or unsupervised algorithms, which promise to alleviate these constraints.
Beyond widely used benchmarks like BraTS, future work should also leverage emerging large-scale datasets such as The Cancer Imaging Archive, which provides diverse, multi-institutional medical imaging data across various pathologies. Expanding research to these datasets can improve model generalizability and help address domain shift challenges in real-world clinical settings.
Most brain medical image segmentation algorithms using neural networks are adapted from those designed for natural image processing. Nonetheless, brain medical images are fundamentally different from natural images, as they contain a significant amount of anatomical prior knowledge. This prior knowledge is crucial for enhancing segmentation performance. To address these limitations, several innovative strategies warrant further investigation.
It is noted that emerging techniques such as federated learning and diffusion models did not feature prominently in the keyword analysis within the scope of this systematic review. While these areas represent significant and rapidly evolving advances in deep learning for medical image analysis, their limited appearance in our results suggests they are still nascent in the context of brain tumor segmentation research covered by our search period (2013–2023) and chosen databases. Nevertheless, federated learning offers a promising avenue for training robust segmentation models on larger, more diverse datasets without compromising patient privacy: by enabling collaborative training across multiple institutions, it can mitigate data scarcity while respecting data governance regulations. Similarly, diffusion models show immense promise for high-fidelity medical image synthesis and segmentation, potentially mitigating issues of data scarcity and imbalance. Future systematic reviews should specifically examine the burgeoning literature on these techniques to fully capture their impact on brain tumor segmentation. Furthermore, advanced GAN-based data augmentation, for example using StyleGAN2-ADA, could generate synthetic brain tumor images that capture the variability observed in clinical data, improving the generalizability of segmentation models. To better capture the complex spatial relationships in 3D brain tumor data, future research should explore transformer-based attention mechanisms.
These models have shown remarkable success in natural language processing and could be adapted to effectively model long-range dependencies in volumetric medical images. These advancements hold the potential to significantly improve the accuracy and reliability of brain tumor segmentation, leading to more personalized and effective treatment strategies.
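As a concrete illustration of the attention mechanism discussed above, the following simplified single-head NumPy sketch (our own illustrative code, not from any cited work) applies scaled dot-product self-attention to a flattened 3D patch, where every voxel token attends to every other token:

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a sequence of
    voxel tokens: x has shape [n_tokens, dim], with a 3D patch flattened
    into the token axis. The attention matrix lets each token aggregate
    information from every other, modeling long-range dependencies."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)      # rows sum to 1
    return attn @ v
```

Because each output is a convex combination of the value vectors, a voxel on one side of a volume can directly incorporate context from the opposite side, something a fixed-kernel convolution cannot do in a single layer.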
Conclusion
The research landscape and scientific developments in image segmentation methods for brain tumors were comprehensively examined in this work. The goals of the study were accomplished by employing VOSviewer to conduct a bibliometric analysis of every document published and indexed in the Scopus database over the survey period (2013–2023). Over this 10-year period, the inquiry examined publication trends, important stakeholders, leading funding organizations, and cited articles. Starting from a minimal level in 2013 with just one document, there was a significant year-on-year increase, with occasional fluctuations; by 2023, the number of documents had reached 275. This expansion was ascribed to the growing use of these methods in policy, business, and academic settings to solve optimization issues. Remarkably, “journal articles” became the most common document type in this sector, in contrast to other study areas where “conference papers” are often more common. Upon closer examination, Tongxue Zhou was found to be the most productive researcher, while the Ministry of Education of the People's Republic of China and Imperial College London emerged as the most active research organizations. In the field of image segmentation-based brain tumor detection research, India emerged as the most active nation. The topic's multidisciplinary character was evidenced by the large number of publications, partnerships, and citations across affiliations/organizations and countries globally. The study concludes by noting that the topic encompasses a wide range of scientific themes and research with a multidisciplinary focus. Challenges such as limited training datasets, data imbalance, and the need for advanced techniques to handle uncertainty and complex 3D structures must be addressed in future work.
To accelerate clinical adoption, future work should prioritize integrating these techniques into real-world workflows, such as (1) real-time MRI processing pipelines for intraoperative decision support, (2) lightweight models compatible with edge devices for point-of-care diagnostics, and (3) standardized APIs for seamless integration with hospital picture archiving and communication systems. While our study highlights the rapid advancements in deep learning-based segmentation techniques for brain tumor detection, it also underscores the need to address the challenges of clinical translation and validation. Future research should focus on bridging the gap between technical innovation and clinical application to ensure these tools deliver meaningful impact in healthcare settings.
Footnotes
ORCID iDs: Samuel-Soma M Ajibade https://orcid.org/0000-0002-3452-1889
Jaehyuk Cho https://orcid.org/0000-0002-9113-6805
Sathishkumar Veerappampalayam Easwaramoorthy https://orcid.org/0000-0002-8271-2022
Ethical approval: Not applicable.
Author contributions: Farrukh Hassan: conceptualization, methodology, formal analysis, and writing—original draft. Saad Aslam: data curation, visualization, software development, and validation. Samuel-Soma M Ajibade: investigation, resources, supervision, and project administration. Jaehyuk Cho: funding acquisition, review and editing, supervision, and conceptualization. Sathishkumar Veerappampalayam Easwaramoorthy: conceptualization, methodology, supervision, writing—review and editing, and project administration.
Funding: The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This research was supported by a grant of the project for ‘Research and Development for Enhancing Infectious Disease Response Capacity in Medical & Healthcare settings’, funded by the Korea Disease Control and Prevention Agency, the Ministry of Health & Welfare, Republic of Korea (grant number: RS2025-0231047).
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Data availability statement: No data have been used to support the findings of the study.
References
- 1.Havaei M, Davy A, Warde-Farley D, et al. Brain tumor segmentation with deep neural networks. Med Image Anal 2017; 35: 18–31.
- 2.Menze BH, Jakab A, Bauer S, et al. The multimodal brain tumor image segmentation benchmark (BraTS). IEEE Trans Med Imaging 2014; 34: 1993–2024.
- 3.Miller KD, Ostrom QT, Kruchko C, et al. Brain and other central nervous system tumor statistics, 2021. CA Cancer J Clin 2021; 71: 381–406.
- 4.McKinney PA. Brain tumours: incidence, survival, and aetiology. J Neurol Neurosurg Psychiatry 2004; 75: ii12–ii17.
- 5.Tataei Sarshar N, Ranjbarzadeh R, Jafarzadeh Ghoushchi S, et al. Glioma brain tumor segmentation in four MRI modalities using a convolutional neural network and based on a transfer learning method. In: Brazilian technology symposium, 2021, pp.386–402. Springer.
- 6.Cacho-Díaz B, García-Botello DR, Wegman-Ostrosky T, et al. Tumor microenvironment differences between primary tumor and brain metastases. J Transl Med 2020; 18: 1–12.
- 7.Kotia J, Kotwal A, Bharti R. Risk susceptibility of brain tumor classification to adversarial attacks. In: Man–machine interactions 6: 6th International conference on man–machine interactions, ICMMI 2019, Cracow, Poland, October 2–3, 2019, 2020, pp.181–187. Springer.
- 8.Wadhwa A, Bhardwaj A, Verma VS. A review on brain tumor segmentation of MRI images. Magn Reson Imaging 2019; 61: 247–259.
- 9.Zhang W, Wu Y, Yang B, et al. Overview of multi-modal brain tumor MR image segmentation. Healthcare 2021; 9: 1051.
- 10.Magadza T, Viriri S. Deep learning for brain tumor segmentation: a survey of state-of-the-art. J Imaging 2021; 7. Preprint at 10.3390/jimaging7020019.
- 11.Bhalodiya JM, Lim Choi Keung SN, Arvanitis TN. Magnetic resonance image-based brain tumour segmentation methods: a systematic review. Dig Health 2022; 8. Preprint at 10.1177/20552076221074122.
- 12.Ranjbarzadeh R, Caputo A, Tirkolaee EB, et al. Brain tumor segmentation of MRI images: a comprehensive review on the application of artificial intelligence tools. Comput Biol Med 2023; 152. Preprint at 10.1016/j.compbiomed.2022.106405.
- 13.Alizadeh Savareh B, Emami H, Hajiabadi M, et al. Emergence of convolutional neural network in future medicine: why and how. A review on brain tumor segmentation. Polish J Med Phys Eng 2018; 24: 43–53.
- 14.Mohammed YMA, El Garouani S, Jellouli I. A survey of methods for brain tumor segmentation-based MRI images. J Comput Design Eng 2023; 10: 266–293.
- 15.Havaei M, Davy A, Warde-Farley D, et al. Brain tumor segmentation with deep neural networks. Med Image Anal 2017; 35: 18–31.
- 16.Zhou Z, Siddiquee MMR, Tajbakhsh N, et al. UNet++: redesigning skip connections to exploit multiscale features in image segmentation. IEEE Trans Med Imaging 2019; 39: 1856–1867.
- 17.Kamnitsas K, Ledig C, Newcombe VFJ, et al. Efficient multi-scale 3D CNN with fully connected CRF for accurate brain lesion segmentation. Med Image Anal 2017; 36: 61–78.
- 18.Pereira S, Pinto A, Alves V, et al. Brain tumor segmentation using convolutional neural networks in MRI images. IEEE Trans Med Imaging 2016; 35: 1240–1251.
- 19.Zhao X, Wu Y, Song G, et al. A deep learning model integrating FCNNs and CRFs for brain tumor segmentation. Med Image Anal 2018; 43: 98–111.
- 20.Wang G, Li W, Zuluaga MA, et al. Interactive medical image segmentation using deep learning with image-specific fine tuning. IEEE Trans Med Imaging 2018; 37: 1562–1573.
- 21.Sajjad M, Khan S, Muhammad K, et al. Multi-grade brain tumor classification using deep CNN with extensive data augmentation. J Comput Sci 2019; 30: 174–182.
- 22.Liu Z, Wang S, Dong D, et al. The applications of radiomics in precision diagnosis and treatment of oncology: opportunities and challenges. Theranostics 2019; 9: 1303.
- 23.Sultan HH, Salem NM, Al-Atabany W. Multi-classification of brain tumor images using deep neural network. IEEE Access 2019; 7: 69215–69225.
- 24.Zhu W, Huang Y, Zeng L, et al. AnatomyNet: deep learning for fast and fully automated whole-volume segmentation of head and neck anatomy. Med Phys 2019; 46: 576–589.
- 25.Liu Z, Wei J, Li R, et al. Learning multi-modal brain tumor segmentation from privileged semi-paired MRI images with curriculum disentanglement learning. Comput Biol Med 2023; 159: 106927.
- 26.Liu Z, Tong L, Chen L, et al. CANet: context aware network for brain glioma segmentation. IEEE Trans Med Imaging 2021; 40: 1763–1777.
- 27.Zhou C, Ding C, Wang X, et al. One-pass multi-task networks with cross-task guided attention for brain tumor segmentation. IEEE Trans Image Process 2020; 29: 4516–4529.
- 28.Zhou T, Canu S, Vera P, et al. Latent correlation representation learning for brain tumor segmentation with missing MRI modalities. IEEE Trans Image Process 2021; 30: 4263–4274.
- 29.Zhao X, Wu Y, Song G, et al. A deep learning model integrating FCNNs and CRFs for brain tumor segmentation. Med Image Anal 2018; 43: 98–111.
- 30.Yu B, Zhou L, Wang L, et al. SA-LuT-Nets: learning sample-adaptive intensity lookup tables for brain tumor segmentation. IEEE Trans Med Imaging 2021; 40: 1417–1427.
- 31.Díaz-Pernas FJ, Martínez-Zarzuela M, González-Ortega D, et al. A deep learning approach for brain tumor classification and segmentation using a multiscale convolutional neural network. Healthcare (Switzerland) 2021; 9.
- 32.Sharif MI, Li JP, Khan MA, et al. Active deep neural network features selection for segmentation and recognition of brain tumors using MRI images. Pattern Recognit Lett 2020; 129: 181–189.
- 33.Raza R, Ijaz Bajwa U, Mehmood Y, et al. dResU-Net: 3D deep residual U-Net based brain tumor segmentation from multimodal MRI. Biomed Signal Process Control 2023; 79.
- 34.Rehman MU, Ryu J, Nizami IF, et al. RAAGR2-Net: a brain tumor segmentation network using parallel processing of multiple spatial frames. Comput Biol Med 2023; 152.
- 35.Chang Y, Zheng Z, Sun Y, et al. DPAFNet: a residual dual-path attention-fusion convolutional neural network for multimodal brain tumor segmentation. Biomed Signal Process Control 2023; 79.
- 36.Sarala B, Sumathy G, Kalpana AV, et al. Glioma brain tumor detection using dual convolutional neural networks and histogram density segmentation algorithm. Biomed Signal Process Control 2023; 85.
- 37.Das S, Bose S, Nayak GK, et al. Deep learning-based ensemble model for brain tumor segmentation using multi-parametric MR scans. Open Comput Sci 2022; 12: 211–226.
- 38.Zheng P, Zhu X, Guo W. Brain tumour segmentation based on an improved U-Net. BMC Med Imaging 2022; 22.
- 39.Aggarwal M, Tiwari AK, Sarathi MP, et al. An early detection and segmentation of brain tumor using deep neural network. BMC Med Inform Decis Mak 2023; 23.
- 40.Sunsuhi GS, Albin Jose S. An adaptive eroded deep convolutional neural network for brain image segmentation and classification using Inception ResnetV2. Biomed Signal Process Control 2022; 78.
- 41.Farajzadeh N, Sadeghzadeh N, Hashemzadeh M. Brain tumor segmentation and classification on MRI via deep hybrid representation learning. Expert Syst Appl 2023; 224.
- 42.Vankdothu R, Hameed MA. Brain tumor MRI images identification and classification based on the recurrent convolutional neural network. Meas Sens 2022; 24.
- 43.Gab Allah AM, Sarhan AM, Elshennawy NM. Edge U-Net: brain tumor segmentation using MRI based on deep U-Net model with boundary information. Expert Syst Appl 2023; 213.
- 44.Fang L, Wang X. Brain tumor segmentation based on the dual-path network of multi-modal MRI images. Pattern Recognit 2022; 124: 108434.
- 45.Bai K, Li Q, Wang C-H. Integrating improved U-Net and continuous maximum flow algorithm for 3D brain tumor image segmentation. J Imaging Sci Technol 2020.
- 46.Bennai MT, Guessoum Z, Mazouzi S, et al. A stochastic multi-agent approach for medical-image segmentation: application to tumor segmentation in brain MR images, 2020.
- 47.Alagarsamy S, Kamatchi K, Govindaraj V, et al. Multi-channeled MR brain image segmentation: a new automated approach combining BAT and clustering technique for better identification of heterogeneous tumors. Biocybern Biomed Eng 2019; 39: 1005–1035.
- 48.Amirmoezzi Y, Salehi S, Parsaei H, et al. A knowledge-based system for brain tumor segmentation using only 3D FLAIR images. Australas Phys Eng Sci Med 2019; 42: 529–540.
- 49.Lim KY, Mandava R. A multi-phase semi-automatic approach for multisequence brain tumor image segmentation. Expert Syst Appl 2018; 112: 288–300.
- 50.Li Y, Jia F, Qin J. Brain tumor segmentation from multimodal magnetic resonance images via sparse representation. Artif Intell Med 2016; 73: 1–13.
- 51.Walsh J, Othmani A, Jain M, et al. Using U-Net network for efficient brain tumor segmentation in MRI images. Healthc Anal 2022; 2.
- 52.Ai Y, Miao F, Hu Q, et al. Multi-feature guided brain tumor segmentation based on magnetic resonance images. IEICE Trans Inf Syst 2015; E98D: 2250–2256.
- 53.Naser MA, Deen MJ. Brain tumor segmentation and grading of lower-grade glioma using deep learning in MRI images. Comput Biol Med 2020; 121: 103758.
- 54.Donthu N, Kumar S, Mukherjee D, et al. How to conduct a bibliometric analysis: an overview and guidelines. J Bus Res 2021; 133: 285–296.
- 55.Donthu N, Kumar S, Pattnaik D. Forty-five years of journal of business research: a bibliometric analysis. J Bus Res 2020; 109: 1–14.
- 56.Wong S, Mah AXY, Nordin AH, et al. Emerging trends in municipal solid waste incineration ashes research: a bibliometric analysis from 1994 to 2018. Environ Sci Pollut Res 2020; 27: 7757–7784.
- 57.Nyakuma BB, Wong S, Mong GR, et al. Bibliometric analysis of the research landscape on rice husks gasification (1995–2019). Environ Sci Pollut Res 2021; 28: 49467–49490.
- 58.Li Y, Jia F, Qin J. Brain tumor segmentation from multimodal magnetic resonance images via sparse representation. Artif Intell Med 2016; 73: 1–13.
- 59.Zheng P, Zhu X, Guo W. Brain tumour segmentation based on an improved U-Net. BMC Med Imaging 2022; 22.
- 60.Sauwen N, Acou M, Van Cauter S, et al. Comparison of unsupervised classification methods for brain tumor segmentation using multi-parametric MRI. Neuroimage Clin 2016; 12: 753–764.
- 61.Menze BH, Jakab A, Bauer S, et al. The multimodal brain tumor image segmentation benchmark (BraTS). IEEE Trans Med Imaging 2014; 34: 1993–2024.
- 62.Çiçek Ö, Abdulkadir A, Lienkamp SS, et al. 3D U-Net: learning dense volumetric segmentation from sparse annotation. In: Medical Image Computing and Computer-Assisted Intervention–MICCAI 2016: 19th International Conference, Athens, Greece, October 17–21, 2016, Proceedings, Part II 19, pp.424–432. Springer.
- 63.Pereira S, Pinto A, Alves V, et al. Brain tumor segmentation using convolutional neural networks in MRI images. IEEE Trans Med Imaging 2016; 35: 1240–1251.
- 64.Ojha S, Sharma M. U-Net based image segmentation drawbacks in medical images: a review. In: Proceedings of international conference on recent advancements in artificial intelligence, 2023, pp.361–372. Springer.
- 65.Dubey SR, Singh SK. Transformer-based generative adversarial networks in computer vision: a comprehensive survey. IEEE Trans Artif Intell 2024.
- 66.Patil S, Kirange D. Ensemble of deep learning models for brain tumor detection. Procedia Comput Sci 2023; 218: 2468–2479.