PLoS One. 2024 Mar 11;19(3):e0295536. doi: 10.1371/journal.pone.0295536

Adaptive mask-based brain extraction method for head CT images

Dingyuan Hu 1,#, Shiya Qu 1,#, Yuhang Jiang 1, Chunyu Han 1, Hongbin Liang 1,*, Qingyan Zhang 2
Editor: Khan Bahadar Khan
PMCID: PMC10927156  PMID: 38466697

Abstract

Brain extraction is an important prerequisite for the automated diagnosis of intracranial lesions and determines, to a certain extent, the accuracy of subsequent lesion identification, localization, and segmentation. To address the problem that traditional image segmentation methods are fast but lack robustness, while fully convolutional network (FCN) models are robust and accurate but relatively slow, this paper proposes an adaptive mask-based brain extraction method, AMBBEM, to achieve better brain extraction. The method first uses threshold segmentation, median filtering, and closing operations to generate an initial mask, then further segments the mask by combining the ResNet50 model, the region growing algorithm, and image property analysis, and finally completes brain extraction by multiplying the original image with the mask. The algorithm was tested on 22 test sets containing different lesions, and the results showed MPA = 0.9963, MIoU = 0.9924, and MBF = 0.9914, equivalent to the extraction performance of the Deeplabv3+ model. However, the method can complete brain extraction of approximately 6.16 head CT images per second, much faster than the Deeplabv3+, U-net, and SegNet models. In summary, this method achieves accurate brain extraction from head CT images more quickly, creating good conditions for subsequent brain volume measurement and feature extraction of intracranial lesions.

1. Introduction

As brain morbidity continues to increase, computed tomography (CT) and magnetic resonance imaging (MRI) play a vital role as brain imaging modalities in diagnosing intracranial lesions [1]. Compared to MRI, CT is widely used because of its lower cost and faster diagnosis. In recent years, to assist radiologists in making accurate and rapid diagnoses, researchers have been dedicated to applying image processing techniques to medical image analysis [2] to achieve segmentation or detection of brain tumors [3], intracranial hematoma [4, 5], and other lesions. However, for diagnosing intracranial lesions, not all of the information provided by CT scans is useful. For example, diagnostic equipment, pillows, the skull, and other non-brain tissues are not useful and can largely affect an algorithm's ability to identify, localize, and segment intracranial lesions. Extracting the brain from CT images can provide a better environment for subsequent feature extraction of intracranial lesions [6] and thereby improve the accuracy of subsequent lesion detection, so brain extraction from CT images is of significant research value. However, head CT images are complex. From a tomographic anatomical point of view, the human head can be divided from bottom to top into the basis cranii layer, the sella turcica layer, the suprasellar cistern layer, the third ventricle layer, the third ventricle top layer, the lateral ventricle layer, the lateral ventricle top layer, the centrum semiovale layer, and the cerebral cortex layer, each of which differs from the others. At the same time, situations such as an unclosed skull, the distribution of the brain across multiple regions, and various lesions make high-quality brain extraction difficult [7].

In recent decades, researchers worldwide have studied brain extraction and proposed representative algorithms, which can be roughly divided into traditional image segmentation methods, secondary development of medical image post-processing software, and deep learning models. Traditional image segmentation methods segment the target region by manually defined rules. MM Kyaw et al. [8] used a tracking algorithm to extract the brain parenchyma, but extracranial soft tissue could not be eliminated. B. Shahangian et al. [9] used threshold segmentation, median filtering, and image-mask multiplication for brain extraction and later built on this to further segment cerebral hematomas with high accuracy. N Farzaneh et al. [10] used a custom distance regularized level set evolution (DRLSE) for brain extraction before further implementing subdural hematoma segmentation.

Gautam et al. [11] and G Cao’s team [12] clustered images using White Matter Fuzzy C-means (WMFCM) and Fuzzy C-means (FCM), respectively, and then used morphological image processing to extract the brain parenchyma. However, none of the above algorithms was tested on whole sets of head CT images containing different lesions, and whether they can cope with large areas of soft tissue edema, an unclosed skull, and a multi-regional distribution of the brain remains to be examined.

The secondary development of medical image post-processing software is widely used for MR and CT. Muschelli et al. [13] modified the fractional intensity (FI) parameter and adjusted the brain parenchyma threshold range based on the Brain Extraction Tool (BET) to achieve high-accuracy brain extraction from MR and CT images. Bauer et al. [14] developed a brain extraction method based on the Insight Toolkit (ITK), which first forms a rough mask from the original image and then uses a level set algorithm to further increase accuracy. These methods are publicly available, but incorporating them into other systems is challenging.

Recently, deep learning has also been widely used in brain neuroimaging. DHM Nguyen et al. [15] combined the active shape model and a convolutional neural network (CNN), exploiting the advantages of both, to extract the brain from head images with good results. Zeynettin Akkus et al. [16] proposed a fully convolutional network (FCN) based approach and tested five models, including 2D U-Net, two modified versions of 2D U-Net, 3D U-Net, and SegNet. The experimental results show that the best model has strong robustness and high accuracy, which demonstrates the feasibility of FCNs for brain extraction from CT images. However, the effects of different lesions on FCN segmentation still need to be thoroughly tested. Overall, for brain extraction from head CT images, FCN models offer better robustness and higher accuracy than traditional image segmentation algorithms, but their parameter counts are vast and segmentation is slow [17, 18]; improving segmentation speed while maintaining accuracy and robustness has yet to be thoroughly explored.

Here, to solve the above problems and thus better achieve brain extraction, this paper combines traditional image processing methods with a CNN and proposes an integrated method, AMBBEM. AMBBEM and three FCN models were tested on 22 sets of head CT images containing different lesions. The contributions of this paper are as follows. First, an integrated algorithm for brain extraction from head CT images is proposed; it has a simple structure, good accuracy, and a fast extraction speed, and it is easy to integrate into other algorithms to provide a better environment for subsequent feature extraction of intracranial lesions at minimal cost in efficiency. Second, the feasibility of combining traditional image processing methods and CNNs to handle complex segmentation problems is explored. Third, the performance of AMBBEM and three FCN models for brain extraction is evaluated, together with the effect of multiple lesions on the various algorithms, which can serve as a reference for related research.

The structure of this paper is as follows. Section 2 presents the data acquisition and processing, as well as the specific ideas of the algorithm. Section 3 presents the arrangement of the experiments and their results. In Section 4, the paper analyzes and discusses the various methods with the experimental results. Finally, Section 5 summarizes the research of this paper and provides an outlook for future work.

2. Materials and methods

2.1. Data selection and processing

The datasets were derived from the RSNA Intracranial Hemorrhage Original Size PNGs (RIHOSP) dataset, publicly available on the Kaggle website, and the CQ500 dataset on the Academic Torrents website. No patient privacy was involved.

With the assistance of radiologists, we extracted three sets of images from the two datasets. First, 1400 slices were selected from the RIHOSP dataset as the first set of images, including 250 at the basis cranii layer, 500 at the sella turcica layer, and 650 at the remaining layers, some of which contained lesions such as intracranial hematomas and soft tissue edema. Another 140 head CT images were randomly selected from the RIHOSP dataset as the second set of images, with the same proportion of each layer as in the first set. For the third group, 22 sets of head CT images containing different lesions were screened from the CQ500 dataset, with nine slices in each set. Among the first 18 groups containing lesions, there were 10 cases of intracranial hematoma, 2 of each subtype (parenchymal hematoma, ventricular hematoma, subarachnoid hematoma, subdural hematoma, and epidural hematoma); 3 cases of cerebral infarction; 2 cases of soft tissue edema; 2 cases of physiologic calcification; and 1 case of intracranial cyst. The remaining four groups had no lesions: 2 from adults and 2 from minors. The CT slices were 512 × 512 × 3 in size, without any pre-processing.

The experimental operating system is Windows 11, the processor is an AMD Ryzen 7 5700X 8-core processor, and the graphics card is an NVIDIA GeForce 3060 Ti with 8 GB of memory for processing data; the experimental platform is MATLAB R2022b with CUDA version 12.0 (the complete code is given in S1 File).

2.2. Algorithm design

The algorithm extracts the brain by multiplying a high-precision mask [19, 20] with the original image. It can be divided into three parts: 1) initial segmentation by threshold segmentation, median filtering, and image filling to generate the mask; 2) the improved ResNet50 model [21] is used to classify the image and is combined with the closing operation to close skull gaps and ensure that the mask is completely filled; 3) according to the classification results, the mask is further trimmed by combining the connected component labeling method [22], the region growing method [23], and image property analysis to improve accuracy, and the extraction is finally completed by multiplying the final mask with the original image.

2.2.1. Preliminary segmentation of brain tissue

In CT, images are formed based on the differing absorption of radiation energy by human tissues [24, 25]. As shown in Fig 1, Fig 1(a) shows the original image, in which the skull, pillow, scalp, and accessory tissues are the parts to be removed. Fig 1(b) and 1(c) show the gray value mesh surface plot and the gray value (1–254) percentage bar plot of the original image, respectively.

Fig 1. Analysis of head CT images.


a) Original image. b) Mesh surface plot of the gray values. c) Percentage bar plot of gray values from 1 to 254.

The first peak, d1 in Fig 1(c), corresponds to the gray value distribution of normal brain parenchyma, and the second peak, d2, corresponds to the gray value distribution of cerebral hematoma. Combining Fig 1(b) and 1(c), it can be seen that the gray values of each tissue follow a Gaussian distribution, with the skull having the largest gray values at around 255, the intracranial hematoma the second largest, and the gray values of the brain parenchyma and the extracranial soft tissue close to each other, both much lower than those of the skull.

In summary, the CT image is first converted into a gray image, and then the skull and brain parenchyma are segmented using threshold segmentation. Considering the influence of CT window width and window level, the threshold range is enlarged to a certain extent. The specific formula is as follows:

e_1(i,j) = \begin{cases} 1, & E(i,j) \ge \max(E) - 15 \\ 0, & E(i,j) < \max(E) - 15 \end{cases} \quad (1)

e_2(i,j) = \begin{cases} E(i,j), & 1 \le E(i,j) \le \max(E) - 20 \\ 0, & E(i,j) > \max(E) - 20 \ \text{or} \ E(i,j) < 1 \end{cases} \quad (2)

Where E represents the gray image;

max (E) represents the maximum gray value in the gray image;

e1 represents the skull image;

e2 represents the image with the skull removed.

The extracted skull is filled and used as a template to obtain mask 1, and the skull-removed image is then multiplied by mask 1 to remove the skull and extracranial soft tissue. The process is shown in Fig 2.

Fig 2. Diagram of results generated during the initial segmentation process.


a) Original image. b) Skull. c) Image with the skull removed. d) Mask 1. e) Image after noise reduction. f) Initial image with the skull and extra-cranial soft tissue removed.
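To make Eqs (1) and (2) and the mask-multiplication step concrete, the following Python sketch reproduces the preliminary segmentation on a single 8-bit grayscale slice using NumPy and SciPy. It is an illustrative re-implementation under that assumption, not the authors' released MATLAB code (see S1 File), and the function name is ours.

```python
import numpy as np
from scipy import ndimage as ndi

def preliminary_segmentation(E: np.ndarray):
    """Preliminary segmentation of one grayscale head CT slice E (uint8).

    Returns (e1, mask1, initial): the skull image of Eq. (1), the filled skull
    template (mask 1), and the slice with skull and extracranial soft tissue removed.
    """
    E = E.astype(np.float64)
    m = E.max()

    e1 = (E >= m - 15).astype(np.uint8)            # Eq. (1): skull = brightest voxels
    e2 = np.where((E >= 1) & (E <= m - 20), E, 0)  # Eq. (2): image with the skull removed

    mask1 = ndi.binary_fill_holes(e1)              # fill the skull outline -> mask 1
    initial = e2 * mask1                           # keep only tissue inside the skull
    return e1, mask1, initial
```

If the skull outline is not closed, the hole filling leaks and mask 1 is incomplete; this is exactly the situation addressed in Section 2.2.2.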

2.2.2. Fill detection and skull closure

As seen above, mask 1 is obtained by filling the segmented skull. As shown in Fig 3(a) and 3(b), the skull is not closed in the head CT images, and obvious gaps remain in the skull after threshold segmentation, so complete filling of the subsequent mask 1 cannot be guaranteed. In this paper, the skull gap is closed by the morphological closing operation. Because the size, location, and shape of skull gaps vary, this paper designs cycle 1, shown in Fig 4, where

q = \frac{S_i - S_{e1}}{S_i}, \quad 0 \le i \le 5 \quad (3)
Fig 3. Diagram of the process of skull closure.


a) Original image. b) Skull image. c) Image after skull closure. d) Mask 1.

Fig 4. Cycle 1.


i represents the number of cycles;

Si represents the mask area after the i-th cycle;

Se1 denotes the area of the skull;

Si − Se1 denotes the area filled in the i-th cycle;

q denotes the proportion of the filled area in the mask area after the i-th cycle.

The extracted skull is first filled once, and whether the mask is completely filled is judged by whether q exceeds a threshold value (TV), where TV is obtained by regression fitting between the soft tissue area and the skull area. Observation and regression experiments were performed on 142 groups of head CT images. At the basis cranii layer, the proportion of brain tissue within the soft tissue area is small and the fit was poor; there, the average ratio of brain area to the complete mask was 0.2485. At the other layers, the proportion of brain tissue is larger and the fit was good, with a similarity coefficient of 0.9403 and a p-value of approximately 0 for the test statistic, supporting the regression model. Therefore, when q is less than TV, a convolutional neural network (CNN) classifies the image: if it belongs to the basis cranii layer, TV is reassigned to 0.2485 and the relationship between q and TV is judged again; if it belongs to the other layers, the closing operation and refilling are performed directly. The structuring element of the closing operation grows by one with each cycle. If q is greater than TV, the loop exits and the following steps continue. We also observed that a small number of images have a small brain area, so that even with complete filling q remains below TV; to avoid an endless loop, the maximum number of cycles is limited. The closing operation was tested on 142 sets (3132 slices in total) of head CT images: when the number of cycles reaches 3, all images containing gaps are closed, but additional closing operations introduce errors into the subsequent segmentation, so after careful consideration we set the maximum number of cycles to 5. As shown in Fig 3(c), the skull gap is closed. After filling, mask 1 is obtained, as shown in Fig 3(d).
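The detection-and-closure loop (cycle 1) can be summarized in the following Python sketch. It is a hedged illustration rather than the authors' MATLAB implementation: the disk-shaped structuring element, the precomputed basis cranii flag (supplied by the improved ResNet50 classifier in AMBBEM), and the function name are our assumptions, while the growth of the structuring element, the threshold logic, and the five-cycle limit follow the description above.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import disk

def close_and_fill(skull: np.ndarray, tv: float, basis_cranii: bool = False,
                   tv_basis_cranii: float = 0.2485, max_cycles: int = 5) -> np.ndarray:
    """Cycle 1: close skull gaps and refill until the mask looks completely filled.

    skull        -- binary skull image e1
    tv           -- threshold value TV obtained from the regression fit
    basis_cranii -- True if the CNN classified the slice as a basis cranii layer
    """
    s_e1 = skull.sum()                               # skull area Se1
    closed = skull.copy()
    mask = ndi.binary_fill_holes(closed)

    for i in range(1, max_cycles + 1):
        s_i = max(mask.sum(), 1)                     # mask area Si
        q = (s_i - s_e1) / s_i                       # Eq. (3): filled fraction
        if q > tv:
            break                                    # mask judged completely filled
        if basis_cranii:
            tv = tv_basis_cranii                     # small-brain layer: relaxed TV
            if q > tv:
                break
        # close with a structuring element that grows by one each cycle, then refill
        closed = ndi.binary_closing(closed, structure=disk(i))
        mask = ndi.binary_fill_holes(closed)
    return mask
```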

We used five CNN models to classify the original images into four classes, with the basis cranii layer as a separate class and the other layers divided into three classes according to the number of brain distribution regions. The first three networks are VGG19 [26], EfficientNet [27], and ResNet50. The fourth is an improved version of ResNet50 that modifies the network in two ways: a Convolutional Block Attention Module (CBAM) [28] is added to the Stem Block section of ResNet50, and a Squeeze-and-Excitation (SE) module [29] is added to each of the four Bottleneck sections. The first four networks have an input layer size of 224 × 224 × 3, so each image is resized to 224 × 224 × 3 with bilinear interpolation [30] before being input; the softmax function generates the four-class output. The fifth network is SqueezeNet [31], whose input layer size is 227 × 227 × 3; the image is likewise resized with bilinear interpolation before input, and the number of channels in the final convolutional layer is adjusted to four.
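The squeeze-and-excitation recalibration added to each bottleneck stage can be sketched as below. This is a generic PyTorch-style illustration of the SE idea [29] with an assumed reduction ratio of 16, not the authors' MATLAB network definition; the CBAM module added to the stem combines a similar channel attention with an additional spatial attention step.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: reweight feature-map channels by learned importance."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)              # squeeze: global spatial average
        self.fc = nn.Sequential(                         # excitation: per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                     # recalibrate the input features
```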

2.2.3. Re-segmentation of the mask

In a complete set of head CT sections, some slices have a relatively complex structure, and preliminary segmentation alone cannot guarantee high-quality brain extraction, as shown in Fig 5(a): some non-brain tissues remain after preliminary segmentation and need further detection and segmentation. We first apply median filtering to the initially segmented image to remove small areas of non-brain tissue, producing mask 2, as in Fig 5(b). The connected component labeling method then counts the connected regions of the mask. If the number of connected regions equals 1, mask 2 is used as the final mask; if it is greater than 1, mask 2 is re-segmented with different methods according to the CNN's classification result, as sketched below.
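A minimal sketch of this decision step, assuming mask 2 is derived from a 2-D slice; the median-filter kernel size is an illustrative choice, and the CNN classification is handled outside this function.

```python
import numpy as np
from scipy import ndimage as ndi

def needs_resegmentation(initial: np.ndarray, kernel: int = 5):
    """Median-filter the initially segmented slice into mask 2 and report whether
    the mask must be re-segmented (i.e., it has more than one connected region)."""
    mask2 = ndi.median_filter((initial > 0).astype(np.uint8), size=kernel).astype(bool)
    _, n_regions = ndi.label(mask2)
    return mask2, n_regions > 1
```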

Fig 5. Process diagram for mask re-segmentation.


a) Initial segmented image b) Image of Mask 2 c) Final mask image d) Final brain extraction image.

In a set of head CT images, because of the skull, the brain tissue in some slices is distributed as a single block, while in others it is distributed across multiple regions. We therefore classify head CT images into single-region and multi-region brain distributions. As shown in Fig 6, for images that require mask re-segmentation, we first use the CNN to discriminate between the two cases. For images with a single-region brain distribution, we use the region growing algorithm [32] to trim mask 2. Extensive observation shows that, in mask 2, the parts to be eliminated originate from human tissue above the brain parenchyma rather than below it, since the previous steps have already removed the skull below the brain parenchyma and the extracranial soft tissues; moreover, the brain lies near the middle of the image. Therefore, the first non-zero point found by searching from bottom to top within a narrow band around the image midline can serve as the seed point for the region growing algorithm. To ensure robust segmentation, we move this point up a further five rows so that the seed falls reliably inside the target region; see Fig 5(b), where the seed point lands in the region corresponding to the brain parenchyma. After the seed point is obtained, the region growing algorithm re-segments mask 2 to produce the final mask, as in Fig 5(c). The original image is then multiplied by the final mask to complete the brain parenchyma extraction, as in Fig 5(d).
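For the single-region case, the seed search and growing step can be sketched as follows; on a binary mask, growing from the seed is equivalent to keeping the connected component that contains it. The five-row upward offset follows the description above, while the width of the midline search band is an assumption.

```python
import numpy as np
from scipy import ndimage as ndi

def grow_from_midline_seed(mask2: np.ndarray, band: int = 10, offset: int = 5) -> np.ndarray:
    """Single-region branch: pick a seed near the image midline, then keep only the
    connected component of mask 2 containing it (region growing on a binary mask)."""
    h, w = mask2.shape
    cols = slice(w // 2 - band, w // 2 + band + 1)      # narrow band around the midline
    for row in range(h - 1, -1, -1):                    # search from bottom to top
        hits = mask2[row, cols]
        if hits.any():
            seed_row = max(row - offset, 0)             # move the seed up five rows
            seed_col = w // 2 - band + int(np.argmax(hits))
            break
    else:
        return mask2                                    # empty mask: nothing to trim

    labels, _ = ndi.label(mask2)
    target = labels[seed_row, seed_col]
    if target == 0:                                     # seed fell on background; keep mask 2
        return mask2
    return labels == target
```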

Fig 6. Mask re-segmentation.


For images with a multi-regional brain distribution, the above method would leave small areas of brain tissue missing, so we instead segment based on image properties. The image is first converted to a binary image, and the mask is then re-segmented using region area (the number of pixels in each region) as the specified property and the CNN classification result as the number of regions to extract, as sketched below.
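A minimal sketch of the multi-region branch, keeping the n largest connected regions of the binary mask, where n is the number of brain regions predicted by the CNN; the function name is ours.

```python
import numpy as np
from scipy import ndimage as ndi

def keep_largest_regions(mask2: np.ndarray, n_regions: int) -> np.ndarray:
    """Multi-region branch: keep the n_regions largest connected components of mask 2,
    using region area (pixel count) as the selection property."""
    labels, n = ndi.label(mask2)
    if n <= n_regions:
        return mask2                                    # nothing to discard
    areas = ndi.sum(mask2, labels, index=range(1, n + 1))
    keep = np.argsort(areas)[::-1][:n_regions] + 1      # labels of the largest regions
    return np.isin(labels, keep)
```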

In summary, the overall flow chart of the AMBBEM algorithm is shown in Fig 7. The soft tissue and skull are first separated by threshold segmentation, and the image is then filled with the skull as the template to obtain mask 1. The fill detection and closing operations are designed to overcome skull gaps, further ensure the filling integrity of mask 1, and increase the algorithm's robustness. Throughout the algorithm, median filtering eliminates small areas of non-brain tissue in the image, and the connected component labeling method determines whether further segmentation of the mask is needed. Mask re-segmentation further improves the accuracy of the final segmentation; details are shown in Fig 6 above. In addition, the CNN is used throughout the algorithm to classify the images that require closing operations and re-segmentation, providing a basis for selecting the appropriate processing method for different images and greatly increasing the robustness and accuracy of the algorithm.

Fig 7. Flowchart of AMBBEM.


3. Experiment

To verify the AMBBEM algorithm's robustness, accuracy, and segmentation speed, this paper compares the algorithm with three FCN models: Deeplabv3+ [33], U-net [34], and SegNet [35]. The first set of images is used as the training set for the five CNN models and the three FCN models (the training set for the CNNs is given in S1 Data; the training set for the FCNs is given in S2–S4 Data), the second set of images is used as test set 1 for evaluating the classification performance of the five CNN models (test set 1 is given in S5 Data), and the third set of images is used as test set 2 for evaluating the brain extraction performance of AMBBEM and the three FCN models (test set 2 is given in S6 Data).

3.1. Training settings

During the training of the five CNN models, the initial learning rate is 0.001 and the batch size is 16. To determine the best number of epochs, we first trained for 15 epochs and then increased the number of epochs by 15 each time while observing the loss. As shown in Fig 8, the training loss of each CNN model stabilizes at around 30 epochs, so we finally set the number of epochs to 45. To determine the best optimizer, each CNN model was tested with three optimizers, Adam, SGD, and RMSProp, and the one with the best test results was selected: SGD for VGG19; Adam for ResNet50, EfficientNet, and the improved ResNet50; and RMSProp for SqueezeNet.

Fig 8. The training loss curve of each CNN.


Similarly, during the training of the three FCN models, Adam was chosen as the optimizer after comparison, with an initial learning rate of 0.001 and a batch size of 8. As shown in Fig 9, the training loss of the three FCN models stabilizes at around 45 epochs, so we finally set the number of epochs to 60.

Fig 9. The training loss curve of each FCN.


3.2. Evaluation indicators

For the five CNN models tested, this paper uses two standard metrics, accuracy and average precision (AP), as evaluation criteria. For the final test of AMBBEM and the three FCN models, Mean Pixel Accuracy (MPA), Mean Intersection over Union (MIoU), Mean Boundary F1-Measure Score (MBF) [36], and average speed of segmentation (Ass) are selected as evaluation metrics.

As shown in Table 1, TP (true positive) denotes the number of pixels of brain tissue (BT) correctly predicted as BT in the cranial CT images, FN (false negative) denotes the number of BT pixels predicted as non-brain tissue (NBT), TN (true negative) denotes the number of NBT pixels correctly predicted as NBT, and FP (false positive) denotes the number of NBT pixels predicted as BT. The MPA represents the average of the BT and NBT pixel accuracies and is calculated as follows:

MPA = \frac{1}{2}\left(\frac{TP}{TP+FN} + \frac{TN}{TN+FP}\right) \quad (4)

Table 1. Dichotomous confusion matrix.

True Results Predicted Results
Positive Sample Negative Sample
Positive Sample TP FN
Negative Sample FP TN

In semantic segmentation, IoU (Intersection over Union) denotes the overlap ratio between the predicted region and the ground-truth region, and MIoU denotes the average of the IoU values for BT and NBT, so MIoU is calculated as:

MIoU = \frac{1}{2}\left(\frac{TP}{TP+FN+FP} + \frac{TN}{TN+FP+FN}\right) \quad (5)

Also, considering the importance of boundary accuracy in medical image extraction, this paper uses the boundary F1-measure as an evaluation index, with 0.75% of the image diagonal as the tolerance distance; MBF denotes the average of the boundary F1-measures of BT and NBT. Ass denotes the number of images segmented per second and is computed as follows:

Ass=N/T (6)

where N denotes the total number of images and T denotes the total time required to complete the segmentation. Test set 2 is used as the test object; the test is repeated five times and the results are averaged.
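For reference, MPA (Eq 4) and MIoU (Eq 5) can be computed from a predicted mask and a ground-truth mask as in the short sketch below; the boundary score MBF, which requires a distance tolerance along the contours, is omitted here, and the function name is ours.

```python
import numpy as np

def mpa_miou(pred: np.ndarray, gt: np.ndarray):
    """Compute MPA (Eq. 4) and MIoU (Eq. 5) for one binary segmentation.
    pred, gt -- boolean arrays in which True marks brain tissue (BT);
    assumes both classes are present in the ground truth and prediction."""
    tp = np.sum(pred & gt)           # BT predicted as BT
    fn = np.sum(~pred & gt)          # BT predicted as NBT
    tn = np.sum(~pred & ~gt)         # NBT predicted as NBT
    fp = np.sum(pred & ~gt)          # NBT predicted as BT
    mpa = 0.5 * (tp / (tp + fn) + tn / (tn + fp))
    miou = 0.5 * (tp / (tp + fn + fp) + tn / (tn + fp + fn))
    return mpa, miou
```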

3.3. Results

Table 2 shows the classification results of the five CNN models on test set 1. The improved ResNet50 has the highest accuracy and AP, at 99.31% and 99.00%, which are 2.07% and 2.44% higher than the original ResNet50, 3.45% and 3.88% higher than VGG19, and 6.90% and 8.22% higher than EfficientNet, respectively, while SqueezeNet performs relatively poorly, much lower than the other four network models. Taken together, we chose the improved ResNet50 for the task of head CT image recognition.

Table 2. Performance of each CNN model in test set 1.

VGG19 SqueezeNet EfficientNet ResNet50 Improved ResNet50
Accuracy 0.9586 0.8483 0.9241 0.9724 0.9931
AP 0.9512 0.8229 0.9078 0.9656 0.9900

Table 3 shows the performance of AMBBEM and the three FCN models on test set 2. Among the four evaluation metrics, the MPA value of the AMBBEM algorithm is comparable to that of the SegNet model, 0.10% higher than the U-net model but 0.03% lower than Deeplabv3+. The MIoU value is highest for the AMBBEM algorithm, 0.10%, 0.14%, and 0.17% higher than the Deeplabv3+, U-net, and SegNet models, respectively. The MBF value of the AMBBEM algorithm is 0.10% lower than that of the Deeplabv3+ model but 0.62% and 0.07% higher than those of the U-net and SegNet models, respectively. The Ass of the AMBBEM algorithm is 6.16 images/second, i.e., it completes the brain extraction of one head CT image in about 0.16 seconds, 10.53 times faster than Deeplabv3+, 10.60 times faster than U-net, and 10.49 times faster than SegNet.

Table 3. Performance of AMBBEM and the three FCN models in test set 2.

MPA MIoU MBF Ass (images/second)
AMBBEM 0.9963 0.9924 0.9941 6.160
Deeplabv3+ 0.9966 0.9914 0.9951 0.585
U-net 0.9953 0.9910 0.9879 0.581
SegNet 0.9963 0.9907 0.9934 0.587

In a group of head CT images, the images at the basis cranii and sella turcica layers are more complex, and an unclosed skull is common. In addition, the brain area is smaller in the basis cranii layer images, while the other soft tissue areas are larger; in the sella turcica layer images, the brain is distributed across multiple regions. To further test the characteristics of each algorithm, the images at the basis cranii layers, sella turcica layers, and other layers in test set 2 were examined separately. Tables 4–6 show the test results of AMBBEM and the three FCN models at the basis cranii layers, the sella turcica layers, and the other layers, respectively. The segmentation performance of the AMBBEM algorithm at the basis cranii layer is significantly better than that of the three FCN models, but in the segmentation test at the sella turcica level, the performance of the AMBBEM algorithm is significantly lower than that of the three FCN models. In the tests on the other layers, AMBBEM produces segmentation results similar to those of Deeplabv3+ and slightly better than those of the U-net and SegNet models.

Table 4. Test results of each algorithm at the basis cranii layers.

MPA MIoU MBF
AMBBEM 0.9925 0.9842 0.9909
Deeplabv3+ 0.9915 0.9753 0.9776
U-net 0.9874 0.9754 0.9696
SegNet 0.9909 0.9713 0.9741

Table 5. Test results of each algorithm at the sella turcica layers.

MPA MIoU MBF
AMBBEM 0.9848 0.9759 0.9808
Deeplabv3+ 0.9929 0.9827 0.9930
U-net 0.9902 0.9824 0.9887
SegNet 0.9924 0.9819 0.9929

Table 6. Test results of each algorithm at the other layers.

MPA MIoU MBF
AMBBEM 0.9971 0.9938 0.9964
Deeplabv3+ 0.9970 0.9924 0.9973
U-net 0.9959 0.9920 0.9901
SegNet 0.9967 0.9918 0.9958

Figs 10 and 11 show the extraction results of AMBBEM and the three FCN models at the basis cranii and sella turcica layers (all segmentation result images for AMBBEM and the FCNs on test set 2 are given in S1 Fig). As shown in Fig 10, in the randomly selected eight images, the Deeplabv3+, U-net, and SegNet models all left a small area of extracranial soft tissue unremoved, whereas no similar situation was found in the test of the AMBBEM algorithm. However, in the extraction results at the sella turcica layer, the AMBBEM algorithm showed a small area of missing brain tissue, while no loss was found in the extraction results of the three FCN models.

Fig 10. Extraction performance of AMBBEM and three FCN models at the basis cranii layer.


Fig 11. Extraction performance of AMBBEM and three FCN models at the sella turcica layer.


Fig 12 shows the extraction results of the various methods on images containing different lesions. The U-net model shows small missing areas in the regions of intracranial hematoma and cerebral infarction, whereas AMBBEM, Deeplabv3+, and SegNet were unaffected by the various lesions and showed good robustness.

Fig 12. Extraction effect of each algorithm on images containing different lesions.


4. Discussion

We propose an adaptive mask-based brain extraction method for head CT images. Threshold segmentation, region growing, and the improved ResNet50 are combined through a purpose-built detection mechanism, a loop structure, and an automatic seed-point selection method. The method can complete brain extraction of approximately 6 CT images per second on an NVIDIA GeForce 3060 Ti.

In terms of segmentation speed, traditional image processing methods such as tracking algorithms, WMFCM, FCM, threshold segmentation, and DRLSE are comparable to AMBBEM. However, when facing extracranial soft tissue edema, an unclosed skull, and a multi-regional distribution of the brain, these traditional methods have certain limitations, such as non-brain tissue not being excluded and brain tissue being lost. AMBBEM can effectively overcome these problems with better robustness and accuracy and can be applied to extract the brain from whole sets of head CT images.

The secondary development of medical image post-processing software achieves decent segmentation accuracy but relies on a specific software platform, which makes it difficult to integrate the brain extraction function into other systems on its own. AMBBEM, with its simple structure, is easier to combine with related algorithms.

The FCN models show both good robustness and good accuracy in the task of brain extraction from CT images. The segmentation accuracy of AMBBEM is close to that of the FCN models, but its segmentation speed is several times faster. At the same time, FCN models require large amounts of data and suffer from the black-box problem, whereas AMBBEM is easy to implement and, owing to its relatively simple structure and principle, has better interpretability.

For the classification task on head CT images, the improved ResNet50 significantly outperforms the original ResNet50 as well as the other networks, which demonstrates the effectiveness of the improvement. This is because the SE and CBAM modules multiply learned weights with the input feature maps and adaptively re-weight them, which improves the quality and diversity of the features and helps the network capture the key information in the images.

Frankly speaking, our algorithm also has some shortcomings. There is a certain probability of missing brain tissue in the segmentation of sella turcica layer images. This is caused by the simultaneous presence of multiple gaps in the skull: when multiple gaps exist, AMBBEM performs the closing operation because q is less than TV, but once one of the gaps is closed, the subsequent image filling increases q beyond TV while the other gaps remain open, which results in the loss of a small area of brain tissue.

5. Conclusion

The AMBBEM proposed in this paper combines traditional image processing methods with a CNN and can process 6.16 images per second, about ten times the speed of the FCN models, while maintaining comparable accuracy. This demonstrates the feasibility and advantage of combining traditional image processing methods with a CNN to solve complex segmentation tasks quickly. The corresponding contribution is a fast and accurate brain extraction method that can replace manual segmentation for head CT images, which in turn provides the basis for automated measurement of brain volume and automated detection of intracranial lesions. In future work, we will try to combine this method with an FCN, delegating the segmentation of the sella turcica layer to the FCN, to further improve the accuracy and robustness of the algorithm, and we will expand the scope of testing to identify problems and further refine the algorithm.

Supporting information

S1 File. This is the program code.

(ZIP)

pone.0295536.s001.zip (86.3MB, zip)
S1 Data. This is the training set for the CNN.

(ZIP)

pone.0295536.s002.zip (66MB, zip)
S2 Data. This is the label of the FCN training set.

(ZIP)

pone.0295536.s003.zip (2.8MB, zip)
S3 Data. This is the image of the FCN training set.

(ZIP)

pone.0295536.s004.zip (53.8MB, zip)
S4 Data. This is the image of the FCN training set.

(ZIP)

pone.0295536.s005.zip (47.6MB, zip)
S5 Data. Test set 1.

(ZIP)

pone.0295536.s006.zip (10.3MB, zip)
S6 Data. Test set 2.

(ZIP)

pone.0295536.s007.zip (19.7MB, zip)
S1 Fig. This is the segmentation effect of AMBBEM and FCN in test set 2.

(ZIP)

pone.0295536.s008.zip (42MB, zip)

Data Availability

The datasets were derived from the RSNA Intracranial Hemorrhage Original Size PNGs (RIHOSP) dataset, publicly available on the Kaggle website, and the CQ500 dataset on the Academic Torrents website. No patient privacy was involved. The training and test sets mentioned in the paper are obtained from the RIHOSP dataset and CQ500 dataset after screening and processing. We have put the training and test sets mentioned in the experiments in the Supporting information, which can be used directly. The RIHOSP dataset and CQ500 dataset are also available at the following links. The link to the RIHOSP dataset is https://www.kaggle.com/datasets/vaillant/rsna-ich-png/. Download Methods: Click the link to enter the webpage, then click "Download" to download it directly. It should be noted that the download requires an account to log in. The account we provide is a18737263685; password: hu123456. The link to the CQ500 data collection is https://academictorrents.com/details/47e9d8aab761e75fd0a81982fa62bddf3a173831. Download Methods: First, click the link to enter the webpage; then click "Download" to download a TORRENT file (named "qure.headct.study-47e9d8aab761e75fd0a81982fa62bddf3a173831.torrent"). Finally, upload the TORRENT file to the download utility for direct download.

Funding Statement

This research was supported by the Department of Science and Technology of Liaoning Province, Natural Foundation Project (No: 2015020128). The funders had no role in study design, data collection and analysis, the decision to publish, or the preparation of the manuscript.

References

  • 1. Soomro TA, Zheng L, Afifi AJ, Ali A, Soomro S, Yin M, et al. Image Segmentation for MR Brain Tumor Detection Using Machine Learning: A Review. IEEE Rev Biomed Eng. 2023;16: 70–90. doi: 10.1109/RBME.2022.3185292
  • 2. Soomro TA, Gao J. Neural network based denoised methods for retinal fundus images and MRI brain images. 2016 International Joint Conference on Neural Networks (IJCNN). Vancouver, BC, Canada: IEEE; 2016. pp. 1151–1157.
  • 3. Almalki YE, Jandan NA, Soomro TA, Ali A, Kumar P, Irfan M, et al. Enhancement of Medical Images through an Iterative McCann Retinex Algorithm: A Case of Detecting Brain Tumor and Retinal Vessel Segmentation. Applied Sciences. 2022;12: 8243. doi: 10.3390/app12168243
  • 4. Chilamkurthy S, Ghosh R, Tanamala S, Biviji M, Campeau NG, Venugopal VK, et al. Development and Validation of Deep Learning Algorithms for Detection of Critical Findings in Head CT Scans. arXiv; 2018. http://arxiv.org/abs/1803.05854
  • 5. Phaphuangwittayakul A, Guo Y, Ying F, Dawod AY, Angkurawaranon S, Angkurawaranon C. An optimal deep learning framework for multi-type hemorrhagic lesions detection and quantification in head CT images for traumatic brain injury. Appl Intell. 2022;52: 7320–7338. doi: 10.1007/s10489-021-02782-9
  • 6. Shen L. Implementation of CT Image Segmentation Based on an Image Segmentation Algorithm. Liu Y, editor. Applied Bionics and Biomechanics. 2022;2022: 1–11. doi: 10.1155/2022/2047537 [Retracted]
  • 7. Monteiro M, Kamnitsas K, Ferrante E, Mathieu F, McDonagh S, Cook S, et al. TBI Lesion Segmentation in Head CT: Impact of Preprocessing and Data Augmentation. In: Crimi A, Bakas S, editors. Brainlesion: Glioma, Multiple Sclerosis, Stroke, and Traumatic Brain Injuries. Cham: Springer International Publishing; 2020. pp. 13–22. doi: 10.1007/978-3-030-46640-4_2
  • 8. Kyaw MM. Computer-Aided Detection system for Hemorrhage contained region. International Journal of Computational Science and Information Technology. 2013.
  • 9. Shahangian B, Pourghassem H. Automatic brain hemorrhage segmentation and classification in CT scan images. 2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP). Zanjan: IEEE; 2013. pp. 467–471.
  • 10. Farzaneh N, Soroushmehr SMR, Williamson CA, Jiang C, Srinivasan A, Bapuraj JR, et al. Automated subdural hematoma segmentation for traumatic brain injured (TBI) patients. 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC). Seogwipo: IEEE; 2017. pp. 3069–3072.
  • 11. Gautam A, Raman B. Automatic Segmentation of Intracerebral Hemorrhage from Brain CT Images. In: Tanveer M, Pachori RB, editors. Machine Intelligence and Signal Analysis. Singapore: Springer Singapore; 2019. pp. 753–764. doi: 10.1007/978-981-13-0923-6_64
  • 12. Guogang C, Yijie W, Xinyu Z, Mengxue L, Xiaoyan W, Ying C. Segmentation of Intracerebral Hemorrhage based on Improved U-Net. J Imaging Sci Technol. 2021;65: 30405-1–30405-7. doi: 10.2352/J.ImagingSci.Technol.2021.65.3.030405
  • 13. Muschelli J, Ullman NL, Mould WA, Vespa P, Hanley DF, Crainiceanu CM. Validated automatic brain extraction of head CT images. NeuroImage. 2015;114: 379–385. doi: 10.1016/j.neuroimage.2015.03.074
  • 14. Bauer S, Fejes T, Reyes M. A Skull-Stripping Filter for ITK. The Insight Journal. 2013. doi: 10.54294/dp4mfp
  • 15. Nguyen DHM, Nguyen DM, Truong MTN, Nguyen T, Tran KT, Triet NA, et al. ASMCNN: An Efficient Brain Extraction Using Active Shape Model and Convolutional Neural Networks. Information Sciences. 2022;591: 25–48. doi: 10.1016/j.ins.2022.01.011
  • 16. Akkus Z, Kostandy P, Philbrick KA, Erickson BJ. Robust brain extraction tool for CT head images. Neurocomputing. 2020;392: 189–195. doi: 10.1016/j.neucom.2018.12.085
  • 17. Zhao F, Xie X. An Overview on Interactive Medical Image Segmentation. 2013;2013.
  • 18. Song Y, Yan H. Image Segmentation Techniques Overview. 2017 Asia Modelling Symposium (AMS). Kota Kinabalu: IEEE; 2017. pp. 103–107. doi: 10.1109/AMS.2017.24
  • 19. Cheng B, Misra I, Schwing AG, Kirillov A, Girdhar R. Masked-attention Mask Transformer for Universal Image Segmentation. arXiv; 2022. http://arxiv.org/abs/2112.01527
  • 20. Jha D, Riegler MA, Johansen D, Halvorsen P, Johansen HD. DoubleU-Net: A Deep Convolutional Neural Network for Medical Image Segmentation. 2020 IEEE 33rd International Symposium on Computer-Based Medical Systems (CBMS). Rochester, MN, USA: IEEE; 2020. pp. 558–564.
  • 21. He K, Zhang X, Ren S, Sun J. Deep Residual Learning for Image Recognition. arXiv; 2015. http://arxiv.org/abs/1512.03385
  • 22. Asad P, Marroquim R, Souza ALEL. On GPU Connected Components and Properties: A Systematic Evaluation of Connected Component Labeling Algorithms and Their Extension for Property Extraction. IEEE Trans Image Process. 2019;28: 17–31. doi: 10.1109/TIP.2018.2851445
  • 23. Biratu ES, Schwenker F, Debelee TG, Kebede SR, Negera WG, Molla HT. Enhanced Region Growing for Brain Tumor MR Image Segmentation. J Imaging. 2021;7: 22. doi: 10.3390/jimaging7020022
  • 24. Cho J, Park K-S, Karki M, Lee E, Ko S, Kim JK, et al. Improving Sensitivity on Identification and Delineation of Intracranial Hemorrhage Lesion Using Cascaded Deep Learning Models. J Digit Imaging. 2019;32: 450–461. doi: 10.1007/s10278-018-00172-1
  • 25. Siegel MJ, Ramirez-Giraldo JC. Dual-Energy CT in Children: Imaging Algorithms and Clinical Applications. Radiology. 2019;291: 286–297. doi: 10.1148/radiol.2019182289
  • 26. Simonyan K, Zisserman A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv; 2015. http://arxiv.org/abs/1409.1556
  • 27. Tan M, Le QV. EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks. arXiv; 2020. http://arxiv.org/abs/1905.11946
  • 28. Woo S, Park J, Lee J-Y, Kweon IS. CBAM: Convolutional Block Attention Module. arXiv; 2018. http://arxiv.org/abs/1807.06521
  • 29. Hu J, Shen L, Albanie S, Sun G, Wu E. Squeeze-and-Excitation Networks. arXiv; 2019. http://arxiv.org/abs/1709.01507
  • 30. Kim K-H, Shim P-S, Shin S. An Alternative Bilinear Interpolation Method Between Spherical Grids. Atmosphere. 2019;10: 123. doi: 10.3390/atmos10030123
  • 31. Iandola FN, Han S, Moskewicz MW, Ashraf K, Dally WJ, Keutzer K. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. arXiv; 2016. http://arxiv.org/abs/1602.07360
  • 32. Wadhwa A, Bhardwaj A, Singh Verma V. A review on brain tumor segmentation of MRI images. Magnetic Resonance Imaging. 2019;61: 247–259. doi: 10.1016/j.mri.2019.05.043
  • 33. Chen L-C, Zhu Y, Papandreou G, Schroff F, Adam H. Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation. In: Ferrari V, Hebert M, Sminchisescu C, Weiss Y, editors. Computer Vision–ECCV 2018. Cham: Springer International Publishing; 2018. pp. 833–851. doi: 10.1007/978-3-030-01234-2_49
  • 34. Ronneberger O, Fischer P, Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In: Navab N, Hornegger J, Wells WM, Frangi AF, editors. Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015. Cham: Springer International Publishing; 2015. pp. 234–241. doi: 10.1007/978-3-319-24574-4_28
  • 35. Badrinarayanan V, Kendall A, Cipolla R. SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation. IEEE Trans Pattern Anal Mach Intell. 2017;39: 2481–2495. doi: 10.1109/TPAMI.2016.2644615
  • 36. Csurka G, Larlus D, Perronnin F. What is a good evaluation measure for semantic segmentation? Proceedings of the British Machine Vision Conference 2013. Bristol: British Machine Vision Association; 2013. pp. 32.1–32.11.

Decision Letter 0

Khan Bahadar Khan

4 Jun 2023

PONE-D-23-11044
Adaptive Mask-Based Brain Extraction Method for Head CT Images
PLOS ONE

Dear Dr. Liang,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

Thank you to the reviewers for their valuable comments and evaluations of the manuscript titled "Adaptive Mask-Based Brain Extraction Method for Head CT Images." After carefully considering the reviews and conducting my own evaluation, I have decided to request major revisions before accepting the manuscript for publication. The reviewers have identified several areas that require significant improvements to enhance the clarity, methodology, and comparison with other methods.

==============================

Please submit your revised manuscript by Jul 19 2023 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Khan Bahadar Khan, Ph.D

Academic Editor

PLOS ONE

Journal requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please note that PLOS ONE has specific guidelines on code sharing for submissions in which author-generated code underpins the findings in the manuscript. In these cases, all author-generated code must be made available without restrictions upon publication of the work. Please review our guidelines at https://journals.plos.org/plosone/s/materials-and-software-sharing#loc-sharing-code and ensure that your code is shared in a way that follows best practice and facilitates reproducibility and reuse.

3. Thank you for stating the following financial disclosure:

“This research was supported by the Department of Science and Technology of Liaoning Province, Natural Foundation Project (No: 2015020128).”

Please state what role the funders took in the study.  If the funders had no role, please state: "The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript."

If this statement is not correct you must amend it as needed.

Please include this amended Role of Funder statement in your cover letter; we will change the online submission form on your behalf.

4. Thank you for stating the following in the Acknowledgments Section of your manuscript:

“This research was supported by the Department of Science and Technology of Liaoning Province, Natural Foundation Project (No: 2015020128).”

We note that you have provided funding information that is not currently declared in your Funding Statement. However, funding information should not appear in the Acknowledgments section or other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form.

Please remove any funding-related text from the manuscript and let us know how you would like to update your Funding Statement. Currently, your Funding Statement reads as follows:

“This research was supported by the Department of Science and Technology of Liaoning Province, Natural Foundation Project (No: 2015020128).”

Please include your amended statements within your cover letter; we will change the online submission form on your behalf.

5. Please upload a copy of Supporting Information S1 File which you refer to in your text on page 10.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: No

Reviewer #2: No

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: Dear editor

The paper “ Adaptive Mask-Based Brain Extraction Method for Head CT Images” contains the extract brain parenchyma from head CT images and they used the different computerized techniques. This paper could be published in this journal after some major comments for improving the paper.

1.The introduction section should be re-write based on the introduction of the field as well motivation and problem statement as well as contribution of the work. The authors can get idea from the following papers.

“ Enhancement of Medical Images through an Iterative McCann Retinex Algorithm: A Case of Detecting Brain Tumor and Retinal Vessel Segmentation YE Almalki, NA Jandan, TA Soomro, A Ali, P Kumar… - Applied Sciences, 2022. “

“Image segmentation for MR brain tumor detection using machine learning: A Review”, TA Soomro, L Zheng, AJ Afifi, A Ali, S Soomro, M Yin… - IEEE Reviews in Biomedical Engineering, 2022.

“Neural network based denoised methods for retinal fundus images and MRI brain images”, TA Soomro, J Gao - 2016 International Joint Conference on Neural …, 2016.

2.A related work section should be introduced.

3.The proposed method is the combination of multiple existing methods , author should explain the purpose of use these methods in this research along with their contribution.

4.The results section should be improved and compared with other recently proposed techniques and why only CT images , why do they not use MRI images also. ?

Reviewer #2: An algorithm based on adaptive masking is proposed to extract brain parenchyma in CT images, combined with traditional image processing and the AlexNet network. It outperforms U-net and Deeplabv3+ models, achieving better MPA, MIoU, and MBF and faster speed. It is significant to practical applications. However, there are a few more comments about the presentation and writing.

Major comments:

1. The AlexNet is a classic and sound network, but not perfect today. Have you ever tried some relatively SOTA classification networks?

2. The Alexnet is improved by batch normalization. How about the Unet using batch normalization?

3. It is unfair that the AlexNet network used 5000 images for training while U-net and Deeplabv3 used 1400 images.

4. In experiments, the training settings, such as optimizer, learning rate, BatchSize, epochs et al., are identical for Unet and Deeplabv3+. Do they both converge and get the best results?

5. Although the Unet network is widely used in medical image segmentation, it is not designed for segmenting CT brain images. Have you ever compared the proposed method with other methods?

Minor comments:

1. Figure 6 is upside down, and "The algorithm designs a second loop as shown in Figure 7" actually means in Figure 6.

2. All Figure numbers are mismatched in the text from Figure 6.

3. Lack of explanation for some settings, such as why cycle 1 is 5 times instead of 3 or 10 times, and Q is 0.22, and so on. Moreover, the results of Alexnet with batch normalization or without are not compared.

4. Some figures and formulas are difficult to read. Highlight the best results of each evaluation metric in tables.

5. Some sentences are hard to follow, so language and grammar editing is highly recommended.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2024 Mar 11;19(3):e0295536. doi: 10.1371/journal.pone.0295536.r002

Author response to Decision Letter 0


1 Jul 2023

Dear Editors,

Many thanks to the editor and reviewers for their valuable comments. We have adjusted our work based on your comments and those of the two reviewers, and the adjustments are shown in 'Revised Manuscript with Track Changes,' 'Manuscript,' and 'Cover Letter,' respectively, according to your request. Below we detail the changes we have made for each of the recommendations.

First, for the five editorial requirements, we made the following adjustments:

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming.

Our response: We apologize for the formatting issues with the manuscript. We have revised the paper according to the PLOS ONE style template; for details we were unsure about, we referred to the most recently published PLOS ONE articles. If formatting issues remain, please give us your specific suggestions, and we will make the changes as soon as possible.

2. Please note that PLOS ONE has specific guidelines on code sharing for submissions in which author-generated code underpins the findings in the manuscript. In these cases, all author-generated code must be made available without restrictions upon publication of the work.

Our response: We understand and support the journal’s requirements and are willing to cooperate actively with the related work. The source code, dataset, and mathematical model have been placed in the Supporting Information. If specific actions are required in the future, we will cooperate actively.

3. Please state what role the funders took in the study.

Our response: Based on the actual situation, we provide a more detailed description of the funders’ situation in the cover letter.

4. We note that you have provided funding information that is not currently declared in your Funding Statement. However, funding information should not appear in the Acknowledgments section or other areas of your manuscript. We will only publish funding information present in the Funding Statement section of the online submission form.

Our response: We removed the funding information from the manuscript and added a statement to the cover letter upon request.

5. Please upload a copy of Supporting Information S1 File which you refer to in your text on page 10.

Our response: We apologize for failing to upload the supporting information in time! We have reorganized the supporting information and uploaded the files one by one. If additional information is still needed, we will be sure to upload it upon request.

In response to reviewer #1's four comments, we made the following changes:

1. The introduction section should be rewritten based on the introduction of the field as well as the motivation, problem statement, and contribution of the work. The authors can get ideas from the following papers: “Enhancement of Medical Images through an Iterative McCann Retinex Algorithm: A Case of Detecting Brain Tumor and Retinal Vessel Segmentation,” YE Almalki, NA Jandan, TA Soomro, A Ali, P Kumar… - Applied Sciences, 2022.

“Image segmentation for MR brain tumor detection using machine learning: A Review,” TA Soomro, L Zheng, AJ Afifi, A Ali, S Soomro, M Yin… - IEEE Reviews in Biomedical Engineering, 2022.

“Neural network based denoised methods for retinal fundus images and MRI brain images,” TA Soomro, J Gao - 2016 International Joint Conference on Neural …, 2016.

Our response: First of all, we acknowledge and appreciate the reviewer's suggestion; we recognize the shortcomings of the introduction section, and drawing on this suggestion does make the introduction more organized. Therefore, we read the recommended articles in detail, following the reviewer's suggestion, and used some of them for reference. Finally, we have rewritten the introduction as proposed, to introduce the current state and problems of the field and to describe specifically the motivation for our work and the corresponding contributions.

2. A related work section should be introduced.

Our response: Following this recommendation, we provide an overview of the relevant literature and studies in the introduction, outlining in detail the implications, gaps, and problems of the existing work. In the discussion section of the article, we position our study relative to this work.

3. The proposed method is a combination of multiple existing methods; the authors should explain the purpose of using these methods in this research along with their contributions.

Our response: After reading this comment, we realized the shortcomings of the manuscript and the need to revise it. Since AMBBEM is an integration algorithm, an introduction to the relevant methods it uses is necessary. Therefore, we present the purpose and contribution of each of the methods used in the last part of Section 2.

4. The results section should be improved and compared with other recently proposed techniques. Why only CT images; why do the authors not also use MRI images?

Our response: We sincerely thank you for this suggestion, which we acknowledge and understand, and we have revised the experimental results accordingly. We again investigated the current state of research in this field and can confirm that U-net, SegNet, and Deeplabv3+ are widely used in medical image segmentation with good segmentation results. Further, according to the paper "Robust brain extraction tool for CT head images" (Neurocomputing. 2020;392:189-195. doi:10.1016/j.neucom.2018.12.085) and other papers, for brain extraction from head CT images, the U-net and SegNet models show good robustness, accuracy, and segmentation speed and are superior to other existing methods. In our work, because models such as U-net and SegNet have many parameters and therefore relatively slow segmentation speed, we aim to improve the segmentation speed while ensuring good robustness and accuracy. Therefore, combining the above situation with the revision suggestions, we added the SegNet model as an additional comparison in the experimental results section. In the discussion section, AMBBEM is compared theoretically with existing traditional image segmentation methods and with secondary development of medical image post-processing software, to analyze the various methods objectively. Why not use MRI images? It is undeniable that researchers at home and abroad have done more research on brain extraction from MRI images and proposed more methods. Testing on MRI images and comparison with the latest techniques are planned for our next work. Finally, if this revision still needs improvement, we are willing to make further changes based on the reviewers' comments.

In response to reviewer #2's five major comments and six minor comments, we made the following changes:

First of all, for the five major comments proposed:

1. The AlexNet is a classic and sound network, but not perfect today. Have you ever tried some relatively SOTA classification networks?

Our response: We are very grateful to the reviewers for this suggestion, which has broadened our thinking and improved our algorithm to some extent. Based on this suggestion, we trained and tested the AlexNet, VGG19, ResNet50, and squeezeNet networks, compared and analyzed them in the experimental and discussion sections, and then selected the optimal network model for the classification task of CT images based on the comparison results.

2. AlexNet is improved with batch normalization. How about using batch normalization in U-net?

Our response: This suggestion is complementary to the first suggestion and helps a lot to improve the manuscript. Following this suggestion, we introduce the method used to improve AlexNet in the algorithm design section, compare the modified AlexNet with the original AlexNet in the experimental section, and analyze the experimental results in the discussion section.

3. It is unfair that the AlexNet network used 5000 images for training while U-net and Deeplabv3 used 1400 images.

Our response: We thank the reviewers for this suggestion, which makes the article more rigorous and convincing. Based on this suggestion, we have redone the relevant experiments, using the same training set for six CNN models and three FCN models.

4. In experiments, the training settings, such as optimizer, learning rate, BatchSize, epochs, etc., are identical for U-net and Deeplabv3+. Do they both converge and get the best results?

Our response: We sincerely thank the reviewers for this suggestion, according to which we tested different parameter settings in the training of CNN and FCN models and selected the optimal ones. In the experimental section of the manuscript, we present the training settings in detail.

5. Although the U-net network is widely used in medical image segmentation, it is not designed for segmenting CT brain images. Have you ever compared the proposed method with other methods?

Our response: We sincerely thank the reviewer for this suggestion, which we acknowledge and understand, and we have revised the experimental results accordingly. We again investigated the current state of research in this field and can confirm that U-net, SegNet, and Deeplabv3+ are widely used in medical image segmentation with good segmentation results. Further, according to the paper "Robust brain extraction tool for CT head images" (Neurocomputing. 2020;392:189-195. doi:10.1016/j.neucom.2018.12.085) and other papers, the U-net and SegNet models show good robustness, accuracy, and segmentation speed for brain extraction from head CT images and are superior to other existing methods. In our work, because models such as U-net and SegNet have a large number of parameters and therefore relatively slow segmentation speed, we aim to further improve the segmentation speed while ensuring good robustness and accuracy. Therefore, combining the above situation with the revision suggestions, we added the SegNet model as an additional comparison in the experimental results section. In the discussion section, AMBBEM is compared theoretically with existing traditional image segmentation methods and with secondary development of medical image post-processing software, to analyze the various methods objectively. The latest techniques are mostly applied to MRI images; extending AMBBEM to MRI images and comparing it with these techniques is planned for our next work. Finally, if this revision still does not meet the relevant requirements, we are willing to make further revisions based on the reviewers' comments.

Finally, for the six minor comments:

1. Figure 6 is upside down, and "The algorithm designs a second loop as shown in Figure 7" actually refers to Figure 6.

Our response: We are very sorry for this error. We have replaced most of the diagrams in the manuscript, and the replacement is clearer and more concise.

2. All Figure numbers are mismatched in the text from Figure 6.

Our response: We are very sorry for this error. We have re-edited the relevant content and checked it several times to ensure that the figure numbers in the text correspond to the figures.

3. Lack of explanation for some settings, such as why cycle 1 runs five times instead of 3 or 10 times, why Q is 0.22, and so on. Moreover, the results of AlexNet with and without batch normalization are not compared.

Our response: Thank you very much for this suggestion! In the algorithm design section, Q and the number of cycles are now introduced in detail. The experimental section compares the modified AlexNet with the original AlexNet, and the experimental results are analyzed in the discussion section.

4. Some figures and formulas are difficult to read. Highlight the best results of each evaluation metric in tables.

Our response: We apologize that the numbers and formulas were difficult to read. We have re-edited the relevant content to make it more specific and concise. For the experimental results, we highlight the best results for each evaluation metric in the tables, as requested.

5. Some sentences are hard to follow, so language and grammar editing is highly recommended.

Our response: We apologize for the sentence description issue. We have re-edited the entire article and inspected it several times to ensure grammatical correctness and clarity of language presentation.

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

Our response: For this suggestion, we thank the reviewers for their kind words! However, even after reading the relevant guidelines, we do not yet fully understand the published peer review history option, but we are willing to cooperate actively if it would be helpful to the reviewers and editors.

Finally, we would like to express again our deep appreciation to the editors and reviewers for their valuable suggestions. We have significantly benefited from your suggestions. If the manuscript still needs to be modified, we are willing to revise it according to your requests. Finally, we wish you all the best in your review!

Yours sincerely,

Ding-Yuan Hu

Attachment

Submitted filename: Response to Reviewers.docx

pone.0295536.s009.docx (25.6KB, docx)

Decision Letter 1

Khan Bahadar Khan

19 Sep 2023

PONE-D-23-11044R1

Adaptive Mask-Based Brain Extraction Method for Head CT Images

PLOS ONE

Dear Dr. Liang,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process. Please submit your revised manuscript by Nov 03 2023 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Khan Bahadar Khan, Ph.D

Academic Editor

PLOS ONE

Journal Requirements:

Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice.


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #1: All comments have been addressed

Reviewer #2: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Yes

Reviewer #2: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: No comments; all the comments have been addressed. However, I was unable to locate the highlighted manuscript for tracking the changes. I recommended this paper after reviewing the editable PDF format, as a highlighted PDF is more suitable for the reviewer.

Reviewer #2: Most comments have been revised, and some additional comments are as follows.

1. AlexNet, VGG19, ResNet50, and squeezeNet mentioned in response are all classic classification networks proposed before 2016. How about the results for methods in recent years, such as EfficientNet, ResNeXt?

2. The caption of subfigure "d" in Fig 3 is missing.

3. In Figure 6, is the "Extract objects from image using area" module parallel to the main path? And what is "single-region distribution"?

4. Describe the input sizes of the first four models among the six CNN models and briefly analyze the results. Additionally, if the regular AlexNet and VGG19 also use the image with the size of 512×512×3, how would the results be?

5. The format of Mask 1 should be consistent. Some instances are "mask1", and others are "Mask 1". The same applies to "Mask 2".

6. There are no captions for many figures with sub-figures.

7. In the section "Re-segmentation of the mask", what is the "region growing algorithm" for the re-segmentation of mask 2?

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: No

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2024 Mar 11;19(3):e0295536. doi: 10.1371/journal.pone.0295536.r004

Author response to Decision Letter 1


30 Sep 2023

Dear Editor:

We are very grateful to the editor for replying despite a busy schedule, and we would like to thank the reviewers again for their valuable comments. In publishing a paper, receiving comments and guidance from the reviewers is crucial for solving practical problems better. We are fortunate that the reviewers' comments are professional and rigorous, which opened our eyes and helped standardize the manuscript. We have answered each reviewer comment and adjusted the manuscript and algorithms accordingly. Our specific answers and revisions are listed below:

First, in light of the journal's requirements for the reference list and the actual changes to the manuscript, we carefully reviewed the references and made the following adjustments:

1. To further highlight the characteristics of CT imaging and to facilitate the reader's understanding, we replaced the original reference 'Parameter Calibration and Image Reconstruction of CT System' with 'Improving Sensitivity on Identification and Delineation of Intracranial Hemorrhage Lesion Using Cascaded Deep Learning Models'.

2. Because the manuscript no longer uses AlexNet, we also removed the reference "ImageNet Classification with Deep Convolutional Neural Networks."

3. The manuscript also tested the EfficientNet network and used the SE and CBAM modules. We have added three articles: 'EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks,' 'CBAM: Convolutional Block Attention Module,' and 'Squeeze-and-Excitation Networks'.

4. To make it easier for readers to understand the characteristics of the region growing algorithm, we also cite the literature "A review on brain tumor segmentation of MRI images."

Second, we responded to reviewer #1's comment as follows:

1. No comments; all the comments have been addressed. However, I was unable to locate the highlighted manuscript for tracking the changes. I recommended this paper after reviewing the editable PDF format, as a highlighted PDF is more suitable for the reviewer.

Our response: We are very grateful to the reviewer for this comment and for recognizing the last revision. We apologize that the reviewer was unable to locate the highlighted manuscript to track the changes; this time, we have made the revision marks on the manuscript much clearer.

Third, we have made the following responses and revisions to the seven comments made by reviewer #2:

1. AlexNet, VGG19, ResNet50, and squeezeNet mentioned in response are all classic classification networks proposed before 2016. How about the results for methods in recent years, such as EfficientNet, ResNeXt?

Our response: Thank you very much to the reviewer for this comment! It once again broadens our horizons, and introducing better networks further improves our algorithm. As the reviewer said, AlexNet, VGG19, ResNet50, and squeezeNet are indeed relatively old, and the more recent EfficientNet and ResNeXt are worth trying. Therefore, we also tested the EfficientNet network and delved again into the ResNet family of networks. We found that ResNeXt was obtained by improving ResNet, and we also noted SE_ResNet50 and other such networks. Finally, we improved ResNet50 by adding a CBAM module to its Stem Block section and an SE module to each of the four Bottleneck sections (a minimal illustrative sketch of the SE idea is given after this response). We then fine-tuned the training set and left test set 1 unchanged. Finally, VGG19, ResNet50, squeezeNet, EfficientNet, and the improved ResNet50 were trained on the training set; during training, multiple optimizers and hyperparameter settings were tested for each network model, and the best were selected. The experimental results show that the improved ResNet50 achieves a significant improvement in both Accuracy and AP values.

In addition, we again inspected and corrected test set 2 with the assistance of a radiologist, although the adjustments were minimal. Finally, each FCN and AMBBEM were tested again. The results show that the segmentation speed of AMBBEM improved further after the CNN was replaced, which we attribute to the improved ResNet50 having higher accuracy and fewer parameters than the previous CNN, resulting in faster classification.

Of course, because of the adjustments to the CNN network, the article has been revised accordingly in the Abstract, Algorithm Design, Experimental Results, and Discussion sections.
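As a rough illustration of the SE (squeeze-and-excitation) module mentioned in the response above, a minimal sketch in Python/PyTorch follows. The reduction ratio, layer choices, and the exact placement inside ResNet50's Stem Block and Bottleneck sections are our illustrative assumptions, not the authors' released code.

    import torch
    import torch.nn as nn

    class SEBlock(nn.Module):
        # Squeeze-and-excitation: reweight channels using globally pooled statistics.
        def __init__(self, channels, reduction=16):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool2d(1)      # squeeze: global average pooling
            self.fc = nn.Sequential(                 # excitation: bottleneck MLP + sigmoid
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, x):
            b, c, _, _ = x.shape
            w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
            return x * w                             # scale each channel by its learned weight

In a ResNet50-style network, such a block is typically applied to the output of a Bottleneck's convolutions before the residual addition.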

2. The caption of subfigure "d" in Fig 3 is missing.

Our response: We apologize for this oversight! We have corrected this error.

3. In Figure 6, is the "Extract objects from an image using area" module parallel to the main path? And what is "single-region distribution"?

Our response: We thank the reviewer for this comment! This evaluation made us aware of errors and deficiencies in Figure 6 and its presentation. We have revised Figure 6. In addition, the term "single-region distribution" indicates that the brain tissue is a single block in a head CT image, as opposed to the brain being distributed in multiple blocks. Finally, we have made adjustments in the presentation of the manuscript to address these two issues to facilitate better understanding by the readers.

4. Describe the input sizes of the first four models among the six CNN models and briefly analyze the results. Additionally, if the regular AlexNet and VGG19 also use the image with the size of 512×512×3, how would the results be?

Our response: Much respect to the reviewers for their dedication and thoroughness! We agree that we should describe the CNN networks' input sizes in further detail. In addition, we also tested the ordinary AlexNet and VGG19 networks with an input size of 512×512×3. The results show that, because of the three fully connected layers in VGG19, its input layer size is fixed (224×224×3) and cannot be changed. For the ordinary AlexNet, adjusting the input size has no significant effect. To present the experimental conclusions more concisely, we only report the test results of VGG19, ResNet50, squeezeNet, EfficientNet, and the improved ResNet50, and exclude the results of the ordinary AlexNet and the improved AlexNet from the manuscript.

5. The format of Mask 1 should be consistent. Some are "mask1", and others are "Mask 1". So as the "Mask2".

Our response: We thank the reviewer for this comment and apologize for this oversight; we have standardized the formatting involved.

6. There is no caption for many figures with sub-figures.

Our response: We have discussed this comment carefully and agree that the reviewer's suggestion is reasonable. We have added descriptions for the figures with subplots, following the format of the most recently published papers in the journal.

7. In the section "Re-segmentation of the mask," what is the "region growing algorithm" for the re-segmentation of mask 2?

Our response: Thanks to the reviewers for this comment! We apologize for the lack of clarity here. The region growing algorithm is an image segmentation technique. Its basic idea is to merge pixels that satisfy a similarity criterion into regions. The main steps are to select a seed pixel in each region to be segmented as the starting point for growth, then evaluate the neighboring pixels against a discriminative criterion and merge those with sufficiently high similarity, so that the region sprouts and grows outward from the seed. We have adapted the presentation of the region growing algorithm in the article and added references to facilitate the reader's understanding.
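To make the idea concrete, a minimal region-growing sketch in Python follows. The 4-connectivity, the fixed intensity-difference threshold, and the function name are illustrative assumptions for a grayscale image, not the exact criterion or implementation used for re-segmenting mask 2 in the paper.

    import numpy as np
    from collections import deque

    def region_grow(image, seed, tol=10):
        # Grow a region from `seed`, adding 4-connected neighbors whose
        # intensity differs from the seed intensity by at most `tol`.
        h, w = image.shape
        mask = np.zeros((h, w), dtype=bool)
        seed_val = float(image[seed])
        queue = deque([seed])
        mask[seed] = True
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                    if abs(float(image[nr, nc]) - seed_val) <= tol:
                        mask[nr, nc] = True
                        queue.append((nr, nc))
        return mask

For example, region_grow(ct_slice, seed=(256, 256), tol=15) would return a boolean mask of the connected region around the image center whose intensities stay within 15 gray levels of the seed value.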

In addition, after discussion among the authors, we hope to make the following adjustments to the paper:

1. Shiya Qu made equally significant contributions to this paper, so we would like to list Shiya Qu among the first group of equal contributors. In addition, we would like to add Qingyan Zhang (radiologist) as the sixth author, considering the substantial help he provided during the publication and revision of the paper. Of course, this change requires the editor's permission; if the editor feels anything is inappropriate, we are willing to follow the editor's arrangement, and the editor's arrangement will prevail.

Finally, we checked and proofread the article several times and adjusted some wording and grammar. For the supporting material, we were keen to follow the request to share the experimental materials. However, some of the datasets were too large and multiple uploads failed, so we could only split the training set (S1) into four parts for uploading. The file "S1_CNN_Training set" contains the dataset required for CNN training, which has already been categorized and can be used directly. The file "S1_FCN_Training set_Label" contains the labels required for FCN model training, and the files "S1_FCN_Training set_Set_1" and "S1_FCN_Training set_Set_2" together contain the 1400 images required for FCN model training, which need to be combined after downloading. Files S2 through S5 are as described in the manuscript, where S5 contains usage instructions and the improved ResNet50 network model in addition to the code. We also mention in the manuscript that readers can direct questions about the experimental material to the corresponding author.

Attachment

Submitted filename: Response to Reviewers.docx

pone.0295536.s010.docx (22.9KB, docx)

Decision Letter 2

Khan Bahadar Khan

23 Nov 2023

Adaptive Mask-Based Brain Extraction Method for Head CT Images

PONE-D-23-11044R2

Dear Dr. Liang,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date. If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Khan Bahadar Khan, Ph.D

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #2: All comments have been addressed

Reviewer #3: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #2: Yes

Reviewer #3: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #2: Yes

Reviewer #3: Yes

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #2: Yes

Reviewer #3: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #2: Yes

Reviewer #3: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #2: The authors have addressed all comments raised in a previous round and this manuscript is now acceptable for publication. No further comment.

Reviewer #3: 1-Consider restructuring the content to present contributions in a concise and organized manner, utilizing bullet points for clarity. This would help both the authors and readers to quickly grasp the significant contributions made in the study.

2- The manuscript exhibits numerous formatting and grammatical errors, including inconsistencies in headings.

3- The conclusion section needs revision to ensure a clear distinction between conclusions and methodologies. Please rewrite the conclusion section, focusing solely on summarizing the key findings and drawing conclusions without delving into the details of the methodologies employed.

4- The manuscript would benefit from a more comprehensive discussion of the technical aspects of previously published work. It is essential to delve into the details of the relevant literature, providing a thorough analysis of the technical methodologies and approaches employed in previous studies. Please consider expanding the discussion on the technical aspects of previously published work in the revision.

5- The overall quality of the paper is commendable. The authors have done a good job in presenting their work. However, it would be beneficial to address specific points mentioned in the earlier comments for further improvement. Keep up the good work!

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #2: No

Reviewer #3: Yes: Tehreem Awan

**********

Acceptance letter

Khan Bahadar Khan

12 Dec 2023

PONE-D-23-11044R2

Adaptive Mask-Based Brain Extraction Method for Head CT Images

Dear Dr. Liang:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department.

If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Khan Bahadar Khan

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 File. This is the program code.

    (ZIP)

    pone.0295536.s001.zip (86.3MB, zip)
    S1 Data. This is the training set for the CNN.

    (ZIP)

    pone.0295536.s002.zip (66MB, zip)
    S2 Data. This is the label of the FCN training set.

    (ZIP)

    pone.0295536.s003.zip (2.8MB, zip)
    S3 Data. This is the image of the FCN training set.

    (ZIP)

    pone.0295536.s004.zip (53.8MB, zip)
    S4 Data. This is the image of the FCN training set.

    (ZIP)

    pone.0295536.s005.zip (47.6MB, zip)
    S5 Data. Test set 1.

    (ZIP)

    pone.0295536.s006.zip (10.3MB, zip)
    S6 Data. Test set 2.

    (ZIP)

    pone.0295536.s007.zip (19.7MB, zip)
    S1 Fig. This is the segmentation effect of AMBBEM and FCN in test set 2.

    (ZIP)

    pone.0295536.s008.zip (42MB, zip)
    Attachment

    Submitted filename: Response to Reviewers.docx

    pone.0295536.s009.docx (25.6KB, docx)
    Attachment

    Submitted filename: Response to Reviewers.docx

    pone.0295536.s010.docx (22.9KB, docx)

    Data Availability Statement

    The datasets were derived from the RSNA Intracranial Hemorrhage Original Size PNGs (RIHOSP) dataset, publicly available on the Kaggle website, and the CQ500 dataset on the Academic Torrents website. No patient privacy was involved. The training and test sets mentioned in the paper are obtained from the RIHOSP and CQ500 datasets after screening and processing. We have put the training and test sets mentioned in the experiments in the Supporting information, where they can be used directly. The RIHOSP and CQ500 datasets are also available at the following links. The link to the RIHOSP dataset is https://www.kaggle.com/datasets/vaillant/rsna-ich-png/. Download method: click the link to enter the webpage, then click "Download" to download it directly. Note that the download requires an account to log in. The account we provide is a18737263685; password: hu123456. The link to the CQ500 dataset is https://academictorrents.com/details/47e9d8aab761e75fd0a81982fa62bddf3a173831. Download method: first, click the link to enter the webpage; then click "Download" to download a TORRENT file (named "qure.headct.study-47e9d8aab761e75fd0a81982fa62bddf3a173831.torrent"). Finally, load the TORRENT file into a download utility to download the data directly.

