PLOS One. 2024 Apr 19;19(4):e0302358. doi: 10.1371/journal.pone.0302358

A bilateral filtering-based image enhancement for Alzheimer disease classification using CNN

Nicodemus Songose Awarayi 1,2,*, Frimpong Twum 1, James Ben Hayfron-Acquah 1, Kwabena Owusu-Agyemang 1
Editor: Sunder Ali Khowaja3
PMCID: PMC11029622  PMID: 38640105

Abstract

This study aims to develop an optimally performing convolutional neural network to classify Alzheimer’s disease into mild cognitive impairment, normal controls, or Alzheimer’s disease classes using a magnetic resonance imaging dataset. To achieve this, we focused the study on addressing the challenge of image noise, which impacts the performance of deep learning models. The study introduced a scheme for enhancing images to improve the quality of the datasets. Specifically, an image enhancement algorithm based on histogram equalization and bilateral filtering techniques was deployed to reduce noise and enhance the quality of the images. Subsequently, a convolutional neural network model comprising four convolutional layers and two hidden layers was devised for classifying Alzheimer’s disease into three distinct categories, namely mild cognitive impairment, Alzheimer’s disease, and normal controls. The model was trained and evaluated using a 10-fold cross-validation sampling approach with a learning rate of 0.001 and 200 training epochs at each instance. The proposed model yielded notable results, such as an accuracy of 93.45% and an area under the curve value of 0.99 when trained on the three classes. The model further showed superior results on binary classification compared with existing methods. The model recorded 94.39%, 94.92%, and 95.62% accuracies for Alzheimer’s disease versus normal controls, Alzheimer’s disease versus mild cognitive impairment, and mild cognitive impairment versus normal controls classes, respectively.

Introduction

Alzheimer’s disease (AD) is a neurological condition contributing to the progressive degeneration of cognitive abilities, including memory, visual-spatial perception, and alterations in personality and behavior. This ailment is commonly linked to the geriatric population, although it can impact individuals of all ages. It is postulated that a global population exceeding 1.5 million individuals exhibits indications of this particular ailment, with a probable escalation anticipated within the forthcoming two to three decades [1]. The absence of a viable remedy for AD underscores the importance of timely identification to manage the condition and forestall further deterioration effectively. Individuals in the initial stages of the disease, referred to as mild cognitive impairment (MCI), primarily manifest indications such as memory impairment or diminished decision-making abilities, which may subsequently deteriorate into AD. Detecting MCI early enough would be beneficial for clinicians in preventing further progression [2, 3].

Brain imaging is a well-known non-invasive technique for detecting diseases through brain scans in the form of magnetic resonance imaging (MRI), computed tomography (CT), or positron emission tomography (PET) [4, 5]. Brain imaging has substantially contributed to the timely diagnosis of AD and to detecting the transformation of MCI into AD using deep learning methodologies. Various deep-learning methods have been successfully implemented to accurately categorize AD, including those of Zhang et al. [6] and Zhang et al. [7].

Despite these notable achievements, using MRI imaging to diagnose AD or predict its conversion with deep learning remains challenging. One key obstacle is the scarcity of suitable, uncontaminated data, which adversely impacts the training of deep learning models and consequently undermines their accuracy and generalizability [8–12]. Because the data are collected over long periods, the resulting datasets are often too small for training models; moreover, owing to varying image scanning protocols, imaging artefacts, and motion artefacts during acquisition, MRI images are highly susceptible to noise. Image quality can also vary across imaging equipment. Addressing these challenges requires either developing more robust deep learning models that are tolerant of noisy images [13, 14] or applying a data preprocessing approach to improve image quality [15–18].

The motivation of our study is to propose a more reliable approach to improving image quality with ease for classifying AD with improved accuracy. As such, the research devised a proficient strategy for improving image quality by utilizing bilateral filtering and histogram equalization techniques. The proposed method aimed to reduce noise and enhance brightness in images, thereby improving the quality of images used in categorizing AD. The primary contribution of this study comprises two key components:

  • First, an image enhancement technique that effectively reduces noise in MRI images while preserving image quality was designed and implemented. The algorithm for color images combines two image processing techniques: histogram equalization and bilateral filtering. It improves the quality of the images while maintaining their edges and details: bilateral filtering preserves the edges and details of the image, while histogram equalization modifies the image’s intensities to improve its contrast.

  • Second, a CNN model consisting of four convolutional layers and two hidden layers successfully classifies AD more accurately than existing deep learning models. The model was trained using the k-fold cross-validation technique, ensuring that all the images were effectively used in training and testing the model.

The remaining sections of the paper are the literature review, materials and methods, results, and discussion. The literature review section presents a review of some existing state-of-the-art studies. A description of the dataset, the proposed method, and a detailed description of the experiment are presented under the materials and methods section. The results section reveals the study’s experimental results, which are further discussed in the discussion section.

Literature review

This section analyzes prior studies that employed MRI images to construct machine learning and deep learning models to classify AD. A particular emphasis was placed on methods that address data inadequacy and image quality challenges.

Ebrahimi et al. [19] incorporated a pre-trained ResNet into their deep sequence model to classify AD, recording an accuracy of approximately 91.78%, roughly a 10% improvement over other CNNs. Kang et al. [20] developed a deep ensemble 2D CNN architecture incorporating multiple models and slices. The model’s accuracy was 90.36% on normal controls (NC)/AD, 77.19% on AD/MCI, and 72.36% on NC/MCI class labels.

Alinsaif and Lang [21] designed a methodology that integrated 3D shearlet-based descriptors to decrease the dimensionality of MRI datasets. The method showed a noteworthy degree of efficacy in identifying AD. Mahendran and P. M. [22] developed a framework to detect AD that achieved an accuracy of 88.7% using a limited number of data records.

Zhang et al. [23] proposed a multi-modal model to facilitate timely AD detection. The proposed method showed superior performance in diagnosing AD, as evidenced by its training on two image modalities, namely PET and MRI, obtained from ADNI.

Spasov et al. [24] deployed a parameter-efficient model to check the conversion of MCI to AD. The approach combines various data types, including MRI images, demographic information, neuropsychological data, and genetic data. The method showed adaptability and has the potential to incorporate additional imaging techniques, including PET images and various collections of medical information.

Sharma et al. [25] presented FDN-ADNet, a model capable of extracting features at all levels, designed for early AD diagnosis using MRI images; it yielded encouraging outcomes. Abrol et al. [26] applied ResNet to medical image analysis to investigate the conversion of MCI to AD. Initially, the deep models were trained solely on MCI individuals for prediction purposes; subsequently, a domain transfer learning variant was employed, involving additional training on AD and NC subjects, and this strategy proved notably effective. The authors of [27] tackled the issues of insufficient and missing data in diagnosing AD. Accuracy across all four subject classes improved by up to 3% when both complete and incomplete samples were considered during model generation and testing.

In their study, Liu et al. [28] presented a CNN-based approach for detecting AD, specifically designed to train on a limited quantity of MRIs while accurately identifying cases of AD. The model’s accuracy was approximately 78.02%, with enhanced portability observed as an added advantage. Janghel and Rathore [29] applied a distinctive approach to improve the efficiency of CNN models, implementing a preprocessing technique on the image dataset before submitting it to the CNN architecture for feature extraction. Their experimental findings indicate a precision of 99.95% on an fMRI dataset, versus 73.46% on PET data.

In their study, Bae et al. [30] employed transfer learning with a 3D CNN to forecast the progression of MCI to AD dementia; the 82.4% accuracy recorded on the target task surpassed existing models in the domain. Sathiyamoorthi et al. [31] proposed a method that exploits an adaptive histogram adjustment algorithm to improve image brightness and contrast, segments the AD regions of interest in an adaptive and modified manner, and computes diverse features using a second-order gray-level co-occurrence matrix. The method classified diseased images and their respective stages based on these distinctive features, and the empirical findings demonstrate superior precision and efficacy compared to the existing system.

Ferri et al. [32] proposed a classification system combining LORETA source estimation derived from rsEEG with sMRI variables. The study implemented two artificial neural networks (ANNs) to classify participants as having AD or belonging to the control group, with classification accuracies of 83% and 86% for the two ANNs, respectively.

In their study, Tong et al. [33] developed a CNN-Sparse Coding Network (CNN-SCN) architecture to detect MCI before it converts. The empirical findings indicate that the model exhibits good stability, accuracy, and generalization. The experiment recorded an accuracy rate of 92.6% in the AD/CN class, while the AD/MCI and MCI/CN classes recorded an accuracy of 74.9% and 76.3%, respectively.

Hridhee et al. [34] investigated and proposed an early detection 2D CNN model for classifying AD using MRI data samples. The researchers applied various image processing and augmentation strategies to the dataset. Subsequently, they trained the data on a VGG16, Xception, and a custom model. The customized model achieved the best performance of the selected models with an accuracy of 94.77%.

Allada et al. [35] aimed to enhance the accuracy of classifying AD and developed a swarm multi-verse algorithm to optimize a deep neuro-fuzzy network for AD classification at various stages. Their study incorporated techniques such as median filtering for preprocessing to improve image quality, a channel-wise feature pyramid network for image segmentation, and CNN for feature extraction. Their model achieved an accuracy of 89.9%, assessed based on k-fold values.

Gowhar et al. [36] introduced a deep learning framework centered on CNN for early AD detection. They implemented various data preprocessing and augmentation techniques on the dataset. The classifier was constructed through feature extraction, reduction, and classification. Their proposed model surpassed existing models, achieving an accuracy of 96.22%.

Tufail et al. [37] developed an early-stage AD classifier using 2D and 3D CNN with PET images. Their study differentiated between binary and multiclass categories of AD. Experimental results highlighted superior performance with an accuracy of 89.21% for the AD/NC binary classification using the 3D-CNN model. They employed data augmentation and 5-fold cross-validation to boost the model’s performance.

This section examined the performance of selected deep learning models in classifying AD; their results are summarized in Table 1.

Table 1. Summary of existing CNN models.

S/No.  Author  Cite  Method  Dataset  Classification  Accuracy (%)
1 (Hridhee et al., 2023) [34] CNN ADNI AD/NC/MCI/ EMCI/LMCI 94.77
2 (Allada et al., 2023) [35] Neuro-fuzzy Kaggle AD/NC/MCI/ EMCI/LMCI 89.9
3 (Gowhar et al., 2023) [36] CNN ADNI AD/NC/MCI/ EMCI/LMCI 96.22
4 (Tufail et al., 2022) [37] CNN ADNI AD/NC 89.21
AD/MCI 71.70
NC/MCI 62.25
AD/NC/MCI 59.73
5 (Kang et al., 2021) [20] CNN ADNI AD/NC 90.36
AD/MCI 77.19
NC/MCI 72.36
6 (Mahendran & P, 2022) [22] EDRNN GEO Omnibus AD/MCI 88.7
7 (Tong et al., 2022) [33] CNN ADNI AD/NC 92.6
AD/MCI 74.9
NC/MCI 76.3
8 (Zhang et al., 2022) [6] ResNet ADNI AD/NC 90
AD/MCI 74.9
NC/MCI 62.6
9 (J. Liu et al., 2021) [28] CNN OASIS AD/NC/ MCI 78.02
10 (Ferri et al., 2021) [32] autoencoders ADNI AD/NC 89.0

Materials and methods

Subjects/datasets

The datasets were acquired from the AD Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu) [38]. In 2003, the ADNI was established as a public-private collaboration with the primary objective of evaluating the feasibility of integrating MRI, PET, and other biological markers to investigate and curb the degeneration of MCI to AD.

This study extracted MRI images from a sample of 396 subjects diagnosed with AD, 255 subjects classified as NC, and 104 subjects with MCI from the ADNI database captured from 2005 to 2021 and within an age range of 55 to 97 years old. The MRI datasets were initially in DICOM format and were subsequently transformed into JPG format. The images collected for the experiment included 1,581 AD images, 1,310 MCI images, and 1,591 NC images. Each image was initially dimensioned as 256 x 256 pixels and later resized to 64 x 64 pixels.

Method

In this study, we sought to build a deep learning model with optimal performance to classify AD into three categories: MCI, AD, and NC. To achieve this, we proposed an approach to improve the quality of the image datasets during the data preprocessing stage, after which the data volume was increased using data augmentation. The data was then sampled using the k-fold cross-validation technique to ensure that all data samples were used in training and testing. The technique divides the dataset into k folds, where k refers to the number of groups into which the data samples are divided; one fold is used for testing and the remaining folds for training, which assesses the model’s ability to handle unseen examples. As presented in the subsequent section, a convolutional neural network was proposed and trained and evaluated iteratively using 10-fold cross-validation. In each iteration, the model is tested on the held-out fold, after which the recorded test results are averaged. A detailed representation of the flow of activities in the proposed method is shown in Fig 1.
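The sampling scheme described above can be sketched with scikit-learn's `StratifiedKFold` (one of the libraries the study lists); the array shapes and label encoding below are illustrative stand-ins for the ADNI data, not the study's actual variables:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Dummy stand-ins for the ADNI images and labels (names are illustrative).
X = np.random.rand(30, 64, 64, 3)      # thirty 64x64 RGB images
y = np.array([0, 1, 2] * 10)           # 0 = AD, 1 = MCI, 2 = NC

kf = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
fold_sizes = []
for train_idx, test_idx in kf.split(X, y):
    X_train, X_test = X[train_idx], X[test_idx]
    y_train, y_test = y[train_idx], y[test_idx]
    # In the real experiment a fresh CNN would be trained on the nine
    # training folds and evaluated on the held-out fold here.
    fold_sizes.append(len(test_idx))

# Every sample is used for testing exactly once across the ten folds.
print(len(fold_sizes), sum(fold_sizes))  # 10 30
```

Averaging the per-fold test metrics, as the study does, then gives the figures reported in the results tables.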

Fig 1. The proposed method defining the flow of activities.

Fig 1

The experiment was implemented on an NVIDIA GeForce GTX 1060 Graphics Processing Unit (GPU) machine running CUDA 10.1 with 8 GB of memory. The model was implemented in Python using the TensorFlow framework together with libraries such as scikit-learn, NumPy, Keras, Pillow, and opencv-python.

Data preprocessing

Data preprocessing is an essential step in conducting a deep learning experiment, as the efficacy of the dataset can significantly influence the learning outcomes of the models. The MRI image datasets utilized in this study were initially transformed into JPG format to facilitate manual preprocessing of the datasets. MRI images that were corrupt or of insufficient quality were manually removed. Following manual preprocessing, it was observed that the dataset still contained noisy images with varying brightness levels, highlighting the necessity for additional preprocessing measures.

An algorithm was proposed to address the issue of image noise and varying brightness. The algorithm was designed with histogram equalization and bilateral filtering, as presented under Algorithm 1.

Algorithm 1 Image enhancement algorithm

I ← RGB image

[Ir, Ig, Ib] ← GetImageChannels(I)

Ir ← histogramEqualize(Ir)

Ig ← histogramEqualize(Ig)

Ib ← histogramEqualize(Ib)

Ir ← bilateralFilter(Ir)

Ig ← bilateralFilter(Ig)

Ib ← bilateralFilter(Ib)

I ← mergeImageChannels(Ir, Ig, Ib)

I ← normalizeImage(I)

Algorithm 1 was designed to improve the contrast of an input image by performing histogram equalization on each color channel, followed by bilateral filtering on the equalized channels. The bilateral filtering aims to maintain the edges and details of the image. The filtered channels are recombined into a unified RGB image and subjected to normalization procedures to enhance overall contrast. Eq (1) defines bilateral filtering.

\[ BF[I]_p = \frac{1}{W_p} \sum_{q \in S} G_{\sigma_s}(\lVert p - q \rVert)\, G_{\sigma_r}(\lvert I_p - I_q \rvert)\, I_q \tag{1} \]
\[ W_p = \sum_{q \in S} G_{\sigma_s}(\lVert p - q \rVert)\, G_{\sigma_r}(\lvert I_p - I_q \rvert) \tag{2} \]

Here, I_p is the image value at pixel position p, and G_σ denotes a 2D Gaussian kernel. The normalization factor W_p ensures that the pixel weights sum to 1.0. The parameters σ_s and σ_r determine the spatial and range extent of the filtering applied to the image. Eq (3) defines the histogram equalization process, which modifies an image’s intensities to improve its contrast.

\[ I'_{i,j} = \operatorname{floor}\!\left( (L-1) \sum_{n=0}^{f_{i,j}} p_n \right) \tag{3} \]
\[ p_n = \frac{I_n}{T_p}, \qquad n = 0, 1, \ldots, L-1 \tag{4} \]

where f_{i,j} is the original intensity of the pixel at position (i, j), L is the number of possible intensity values, I_n is the number of pixels with intensity n, and T_p is the total number of pixels.
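Eqs (3) and (4) amount to remapping each pixel through the scaled cumulative histogram; a minimal NumPy sketch (function and variable names are ours):

```python
import numpy as np

def hist_equalize(img, L=256):
    """Histogram equalization per Eqs (3)-(4): p_n is the fraction of
    pixels with intensity n, and each pixel is remapped through the
    cumulative distribution scaled to the L-1 available levels."""
    counts = np.bincount(img.ravel(), minlength=L)   # I_n for each n
    cdf = np.cumsum(counts) / img.size               # running sum of p_n
    lut = np.floor((L - 1) * cdf).astype(img.dtype)  # Eq (3) lookup table
    return lut[img]

# A low-contrast image confined to intensities 100-120 ...
img = np.random.randint(100, 121, size=(64, 64), dtype=np.uint8)
eq = hist_equalize(img)
# ... is stretched across nearly the full 0-255 range.
print(img.min(), img.max(), eq.min(), eq.max())
```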

Convolutional neural network model

The CNN model in Fig 2 comprised four convolutional layers to extract features from 64 x 64-sized color image inputs. In this model, every convolutional layer is paired with a corresponding max-pooling layer, batch normalization layer, and a dropout layer to reduce the learnable parameters. Each layer also incorporated a rectified linear activation function, a 3 by 3 kernel filter and L2-regularization. All convolutional layers had a stride of one (1), and same padding was used to keep the dimensions of the output feature map the same as the input dimension. Two (2) hidden layers were incorporated into the model, wherein a batch normalization layer and a dropout layer succeeded every layer. The model generated 426,979 parameters, consisting of 425,187 trainable and 1,792 nontrainable. Additional hyperparameters include a learning rate of 0.001, a batch size of 64, and 200 training epochs.
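A Keras sketch consistent with that description follows. The filter counts, hidden-layer widths, L2 strength, and dropout rates are our assumptions, since the paper specifies only the layer counts, kernel size, padding, stride, activation, and learning rate:

```python
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

def build_model(num_classes=3, l2=1e-4, drop=0.25):
    model = models.Sequential([layers.Input(shape=(64, 64, 3))])
    # Four conv blocks: 3x3 kernels, stride 1, same padding, ReLU, L2,
    # each followed by max pooling, batch normalization, and dropout.
    for filters in (32, 64, 128, 128):
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu",
                                kernel_regularizer=regularizers.l2(l2)))
        model.add(layers.MaxPooling2D())
        model.add(layers.BatchNormalization())
        model.add(layers.Dropout(drop))
    model.add(layers.Flatten())
    # Two hidden (fully connected) layers, again with batch norm + dropout.
    for units in (128, 64):
        model.add(layers.Dense(units, activation="relu"))
        model.add(layers.BatchNormalization())
        model.add(layers.Dropout(drop))
    model.add(layers.Dense(num_classes, activation="softmax"))
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()
print(model.output_shape)  # (None, 3)
```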

Fig 2. The proposed CNN model for classifying Alzheimer’s disease.

Fig 2

Performance evaluation

Various metrics were used to measure the model’s performance during training and testing. These include recall, precision, accuracy and area under the curve (AUC).

The accuracy metric represents the aggregate count of accurately predicted labels by the model and is defined as Eq (5).

\[ \text{accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \tag{5} \]

TP is the total number of true positives, TN the true negatives, FP the false positives, and FN the false negatives. As shown in Eq (6), precision is the ratio of true positives to the total number of positive predictions made.

\[ \text{precision} = \frac{TP}{TP + FP} \tag{6} \]

Recall calculates the true positive rates as presented in Eq (7).

\[ \text{recall} = \frac{TP}{TP + FN} \tag{7} \]

The AUC, the integral of the ROC curve, corresponds to the probability that a randomly chosen positive sample is ranked higher than a randomly chosen negative sample. Given that the ROC curve is constructed by successively joining the coordinates (x1, y1), (x2, y2), …, (xm, ym), the AUC is approximated using Eq (8).

\[ AUC = \frac{1}{2} \sum_{i=1}^{m-1} (x_{i+1} - x_i)(y_i + y_{i+1}) \tag{8} \]
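All four metrics reduce to simple arithmetic over the confusion counts and ROC coordinates; a short sketch (the confusion counts and ROC points below are made-up examples, not the study's results):

```python
import numpy as np

def classification_metrics(tp, tn, fp, fn):
    """Eqs (5)-(7) computed from the four confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return accuracy, precision, recall

def auc_trapezoid(x, y):
    """Eq (8): trapezoidal area under a ROC polyline whose vertices
    (x_i, y_i) are sorted by increasing false positive rate x."""
    x, y = np.asarray(x), np.asarray(y)
    return 0.5 * np.sum((x[1:] - x[:-1]) * (y[1:] + y[:-1]))

acc, prec, rec = classification_metrics(tp=90, tn=85, fp=10, fn=15)
print(round(acc, 3), round(prec, 2), round(rec, 3))  # 0.875 0.9 0.857
print(round(auc_trapezoid([0.0, 0.2, 1.0], [0.0, 0.8, 1.0]), 6))  # 0.8
```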

Results

In this study, an algorithm for enhancing images was developed by integrating two image processing techniques: image equalization and bilateral filtering. The algorithm primarily enhances the image quality while preserving its edges and details. Bilateral filtering is employed to maintain the edges and details of the image, whereas histogram equalization adjusts the image’s intensity levels to improve contrast. Fig 3a shows the original images, while Fig 3b shows the images processed by the proposed algorithm.

Fig 3. Applying the proposed image enhancement algorithm.

Fig 3

(a) Original images (b) Transformed images.

In order to ascertain the impact of the image transformation by the algorithm, image quality parameters such as mean pixel intensity, contrast, and entropy were used. The mean pixel intensity calculates the average of the pixel intensities across the image, and contrast evaluates the variability (standard deviation) of pixel intensities, contributing to the sharpness or clarity of the image. An increase in mean pixel intensity indicates that the image has become brighter, while a higher contrast suggests enhanced sharpness. Entropy measures the richness of information in the image’s intensity distribution, with an increase in entropy indicating an improvement in image quality. The mean pixel intensity, contrast, and entropy values were calculated for both the original and enhanced images and visualized in Fig 4. The results indicate that the enhanced images showed more brightness, sharpness and richness of information since the mean pixel intensity, contrast, and entropy are higher.
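These three quality measures can be computed in a few lines of NumPy. The entropy here is the Shannon entropy of the 8-bit intensity histogram, which is our reading of the metric, as the paper does not give a formula:

```python
import numpy as np

def quality_metrics(img):
    """Mean pixel intensity, contrast (standard deviation of
    intensities), and Shannon entropy of the intensity histogram."""
    mean_intensity = float(img.mean())
    contrast = float(img.std())
    counts = np.bincount(img.ravel(), minlength=256)
    p = counts[counts > 0] / img.size
    entropy = float(-(p * np.log2(p)).sum())
    return mean_intensity, contrast, entropy

flat = np.full((64, 64), 128, dtype=np.uint8)                # constant image
noisy = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # uniform noise
print(quality_metrics(flat))   # mean 128, zero contrast, zero entropy
print(quality_metrics(noisy))  # high contrast, near 8-bit entropy
```

Higher values of all three measures on the enhanced images are what Fig 4 visualizes.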

Fig 4. Image quality assessment.

Fig 4

(a) Mean pixel intensity (b) Contrast (c) Entropy.

This study developed a CNN model to classify AD based on three distinct class labels: AD, MCI, and NC. The study further investigated the binary classification of AD versus MCI, AD versus NC, and MCI versus NC. The model was trained using a 10-fold cross-validation technique, with the learning rate hyperparameter set to 0.001, a batch size of 64, and 200 epochs. The training and testing were done on both the original and enhanced datasets to aid in assessing the impact of the image transformation. The test outcomes are displayed in Table 2.

Table 2. Testing results for classifying AD, MCI and NC.

Fold | No Image Enhancement | With Image Enhancement
     | Accuracy (%) | Accuracy (%) | Precision (%) | Recall (%) | AUC
1 92.37 93.09 93.49 92.87 0.98
2 87.66 93.42 93.51 93.20 0.99
3 91.98 90.30 90.56 89.86 0.97
4 89.57 93.76 94.27 93.53 0.99
5 89.44 94.20 94.51 94.08 0.99
6 89.95 93.64 93.84 93.53 0.99
7 90.59 94.64 94.62 94.20 0.99
8 92.37 94.53 94.95 94.42 0.99
9 91.46 92.75 92.85 92.75 0.98
10 88.03 94.20 94.39 93.97 0.99
Average 90.34 93.45 93.70 93.24 0.99
Standard Deviation 1.62 1.20 1.21 1.25 0.005

The model attained an accuracy of 90.34% on the original dataset, while on the transformed dataset it recorded approximately 93.45% with a standard deviation of 1.20%, indicating a consistent capacity to predict the labels accurately. The model’s precision in identifying true positives was 93.70% (standard deviation 1.21%), and its recall was 93.24% (standard deviation 1.25%). It also achieved a high AUC of 0.99 with a low standard deviation of 0.005. The performance of the model is further visualized using the receiver operating characteristic (ROC) curve and confusion matrix in Fig 5.

Fig 5. ROC curve and confusion matrix for classifying AD, MCI and NC.

Fig 5

(a) ROC curve (b) Confusion matrix.

The model underwent further training with binary class examples, namely AD and MCI, AD and NC, and MCI and NC, while maintaining the same hyperparameters and model architecture. The model demonstrated significant efficacy in forecasting the diverse labels, as outlined in Table 3. The test results show a consistent accuracy, precision, and recall score. The model’s accuracy rates for classifying AD versus MCI, AD versus NC, and MCI versus NC stood at 94.92%, 94.39%, and 95.62%, respectively. Our proposed model demonstrated good performance, as evidenced by AUC scores of 0.98, 0.98, and 0.99 for comparing AD versus MCI, AD versus NC, and MCI versus NC, respectively. The confusion matrix and the ROC curve for the binary classification are presented in Figs 6 and 7, respectively.

Table 3. Testing results for classifying AD/MCI, AD/NC and MCI/NC.

Fold AD/MCI AD/NC MCI/NC
Accuracy Precision Recall AUC Accuracy Precision Recall AUC Accuracy Precision Recall AUC
1 94.30 94.30 94.30 0.97 94.02 94.02 94.02 0.99 96.04 96.04 96.04 0.99
2 88.77 88.77 88.77 0.94 95.28 95.28 95.28 0.98 95.01 95.01 95.01 0.99
3 96.19 96.19 96.19 0.99 96.54 96.54 96.54 0.99 95.86 95.86 95.86 0.99
4 94.64 94.64 94.64 0.98 92.91 92.91 92.91 0.98 96.38 96.38 96.38 0.99
5 95.16 95.16 95.16 0.99 94.79 94.79 94.79 0.98 95.52 95.52 95.52 0.99
6 95.85 95.85 95.85 0.99 95.90 95.90 95.90 0.98 96.03 96.03 96.03 0.99
7 95.67 95.67 95.67 0.99 94.01 94.01 94.01 0.98 96.38 96.38 96.38 0.98
8 95.85 95.85 95.85 0.98 92.27 92.27 92.27 0.98 96.21 96.21 96.21 0.98
9 97.75 97.75 97.75 0.99 92.43 92.43 92.43 0.98 94.31 94.31 94.31 0.98
10 94.98 94.98 94.98 0.98 95.74 95.74 95.74 0.99 94.48 94.48 94.48 0.99
AVG 94.92 94.92 94.92 0.98 94.39 94.39 94.39 0.98 95.62 95.62 95.62 0.99
S. D. 2.24 2.24 2.24 0.01 1.43 1.43 1.43 0.01 0.73 0.73 0.73 0.004

Fig 6. Confusion matrix for binary classification.

Fig 6

(a) AD/MCI (b) AD/NC (c) MCI/NC.

Fig 7. ROC curve for binary classification.

Fig 7

(a) AD/MCI (b) AD/NC (c) MCI/NC.

To adequately examine the model’s performance, we conducted a comparative study with prevailing models employed for AD classification. The accuracy metric was the baseline metric for comparing this study to the existing models. The findings suggest the superior efficacy of our research in AD classification, as demonstrated in Table 4.

Table 4. Comparative analysis of existing and proposed model.

S/No. Various Models Cite Class labels
AD/MCI AD/NC MCI/NC Multiclass (AD/MCI/NC or more)
1 (Hridhee et al., 2023) [34] - - - 94.77
2 (Allada et al., 2023) [35] - - - 89.9
3 (Gowhar et al., 2023) [36] - - - 96.22
4 (Tufail et al., 2022) [37] 71.70 89.21 62.25 59.73
5 (Kang et al., 2021) [20] 77.19 90.36 72.36 -
6 (Mahendran and P M, 2022) [22] 88.7 - - -
7 (Tong et al., 2022) [33] 74.9 92.6 76.3 -
8 (Zhang et al., 2022) [6] 82.5 90.00 62.6 -
9 (J. Liu et al., 2021) [28] - - - 78.02
10 (Ferri et al., 2021) [32] 89.0 - - -
11 Proposed Model 94.92 94.39 95.62 93.45

Discussion

The experimental results highlight the superior performance of the proposed method in relation to other methods discussed in the literature, especially in the binary and multiclass classification of AD. As evident from Table 4, the accuracy of the proposed model surpassed that of [6, 20, 22, 32, 33, 37] in binary classification. In multiclass scenarios, our model outperformed the methods from [28, 37] that considered the same number of classes as in this study. Conversely, methods [34, 36] which encompassed more than the three classes in our study exhibited higher accuracies than our proposed model. Thus, the findings suggest that our model stands out as the most effective for binary classification, as well as for multiclass classification, when specifically focusing on AD, MCI, and NC labels.

Conclusion

AD is a prevalent neurological disorder, predominantly affecting the elderly, and remains without a definitive cure. Early detection of AD is paramount to manage and potentially slow its progression from a mild stage to more severe stages. While deep learning offers promising avenues for detecting AD, it also presents specific challenges, with image quality being a prominent one. The study proposed and implemented an image quality scheme using histogram equalization and bilateral filtering to improve the dataset’s quality, which was then used to train the proposed convolutional neural network. The proposed model consisted of four convolutional layers and two fully connected layers. It was trained using 10-fold cross-validation on three class labels: AD, MCI, and NC. The resulting accuracy, precision, recall score, and AUC score stood at 93.45%, 93.70%, 93.24%, and 0.99, respectively. These scores were comparatively higher than those of some selected state-of-the-art existing models.

Supporting information

S1 Checklist. PLOS ONE clinical studies checklist.

(PDF)

pone.0302358.s001.pdf (193.1KB, pdf)

Acknowledgments

We acknowledge the ADNI for granting us access to use their datasets for this research.

Data Availability

Data cannot be shared publicly because of institutional data access policy. Data are available from the ADNI Institutional Data Access / Ethics Committee (contact via adni.loni.usc.edu) for researchers who meet the criteria for access to confidential data.

Funding Statement

The author(s) received no specific funding for this work.

References

  • 1. Liu CF, Padhy S, Ramachandran S, Wang VX, Efimov A, Bernal A, et al. Using deep Siamese neural networks for detection of brain asymmetries associated with Alzheimer’s Disease and Mild Cognitive Impairment. Magn Reson Imaging. 2019;64:190–9. doi: 10.1016/j.mri.2019.07.003
  • 2. Reiss AB, de Levante Raphael D, Chin NA, Sinha V. The physician’s Alzheimer’s disease management guide: Early detection and diagnosis of cognitive impairment, Alzheimer’s disease and related dementia. AIMS Public Health. 2022;9(4):661–89. doi: 10.3934/publichealth.2022047
  • 3. Sabbagh MN, Boada M, Borson S, Doraiswamy PM, Dubois B, Ingram J, et al. Early Detection of Mild Cognitive Impairment (MCI) in an At-Home Setting. J Prev Alzheimer’s Dis. 2020;7(3):171–8.
  • 4. Hussain S, Mubeen I, Ullah N, Shah SSUD, Khan BA, Zahoor M, et al. Modern Diagnostic Imaging Technique Applications and Risk Factors in the Medical Field: A Review. Biomed Res Int. 2022;2022. doi: 10.1155/2022/5164970
  • 5. Kim B, Kim H, Kim S, Hwang YR. A brief review of non-invasive brain imaging technologies and the near-infrared optical bioimaging. Appl Microsc. 2021;51(1). doi: 10.1186/s42649-021-00058-7
  • 6. Zhang Y, Teng Q, Liu Y, Liu Y, He X. Diagnosis of Alzheimer’s disease based on regional attention with sMRI gray matter slices. J Neurosci Methods. 2022;365:109376. doi: 10.1016/j.jneumeth.2021.109376
  • 7. Zhang J, Zheng B, Gao A, Feng X, Liang D, Long X. A 3D densely connected convolution neural network with connection-wise attention mechanism for Alzheimer’s disease classification. Magn Reson Imaging. 2021;78:119–26. doi: 10.1016/j.mri.2021.02.001
  • 8. Althnian A, Alsaeed D, Al-baity H, Samha A, Bin Dris A, Alzakari N, et al. Impact of Dataset Size on Classification Performance: An Empirical Evaluation in the Medical Domain. Appl Sci. 2021. doi: 10.3390/app11020796
  • 9. Gupta S, Gupta A. Dealing with Noise Problem in Machine Learning Data-sets: A Systematic Review. Procedia Comput Sci. 2019;161:466–74. doi: 10.1016/j.procs.2019.11.146
  • 10. Nazari Z, Sayed M, Danish S. Evaluation of Class Noise Impact on Performance of Machine Learning Algorithms. IJCSNS. 2018.
  • 11. Roxana A, Florin T, Gheorghe L, Grigorovici R, Iancu S, Hlusneac M, et al. Impact of quality, type and volume of data used by deep learning models in the analysis of medical images. Informatics Med Unlocked. 2022;29:100911.
  • 12. Varoquaux G, Cheplygina V. Machine learning for medical imaging: methodological failures and recommendations for the future. npj Digital Medicine. 2022. doi: 10.1038/s41746-022-00592-y
  • 13. Ghafari S, Ghobadi Tarnik M, Sadoghi Yazdi H. Robustness of convolutional neural network models in hyperspectral noisy datasets with loss functions. Comput Electr Eng. 2021;90.
  • 14. Lin CY, Lin RS. A Noise Robust Convolutional Neural Network by using Noise Removal Techniques. J Inf Hiding Multimed Signal Process. 2022;13(3):178–87.
  • 15. Zhao M, Shi P, Xu X, Xu X, Liu W, Yang H. Identification System Using Different. 2022.
  • 16. Chen X. Image enhancement effect on the performance of convolutional neural networks. 2019:1–40. Available from: www.bth.se
  • 17. Heidari M, Mirniaharikandehei S, Khuzani AZ, Danala G, Qiu Y, Zheng B. Improving the performance of CNN to predict the likelihood of COVID-19 using chest X-ray images with preprocessing algorithms. Int J Med Inform. 2020;144. doi: 10.1016/j.ijmedinf.2020.104284
  • 18. Caseneuve G, Valova I, LeBlanc N, Thibodeau M. Chest X-Ray image preprocessing for disease classification. Procedia Comput Sci. 2021;192:658–65. doi: 10.1016/j.procs.2021.08.068
  • 19. Ebrahimi A, Luo S, Chiong R. Deep sequence modelling for Alzheimer’s disease detection using MRI. Comput Biol Med. 2021;134.
  • 20. Kang W, Lin L, Zhang B, Shen X, Wu S. Multi-model and multi-slice ensemble learning architecture based on 2D convolutional neural networks for Alzheimer’s disease diagnosis. Comput Biol Med. 2021;136.
  • 21. Alinsaif S, Lang J. 3D shearlet-based descriptors combined with deep features for the classification of Alzheimer’s disease based on MRI data. Comput Biol Med. 2021;138(September). [DOI] [PubMed] [Google Scholar]
  • 22. Mahendran N, P M DRV. A deep learning framework with an embedded-based feature selection approach for the early detection of the Alzheimer’s disease. Comput Biol Med [Internet]. 2022;141(September 2021):105056. doi: 10.1016/j.compbiomed.2021.105056 [DOI] [PubMed] [Google Scholar]
  • 23. Zhang F, Li Z, Zhang B, Du H, Wang B, Zhang X. Multi-modal deep learning model for auxiliary diagnosis of Alzheimer’s disease. Neurocomputing [Internet]. 2019;361:185–95. doi: 10.1016/j.neucom.2019.04.093 [DOI] [Google Scholar]
  • 24. Spasov S, Passamonti L, Duggento A, Liò P, Toschi N. A parameter-efficient deep learning approach to predict conversion from mild cognitive impairment to Alzheimer’s disease. Neuroimage. 2019;189(January):276–87. doi: 10.1016/j.neuroimage.2019.01.031 [DOI] [PubMed] [Google Scholar]
  • 25. Sharma R, Goel T, Tanveer M, Murugan R. FDN-ADNet: Fuzzy LS-TWSVM based deep learning network for prognosis of the Alzheimer’s disease using the sagittal plane of MRI scans. Appl Soft Comput [Internet]. 2022;115:108099. doi: 10.1016/j.asoc.2021.108099 [DOI] [Google Scholar]
  • 26. Abrol A, Bhattarai M, Fedorov A, Du Y, Plis S, Calhoun V. Deep residual learning for neuroimaging: An application to predict progression to Alzheimer’s disease. J Neurosci Methods [Internet]. 2020;339(September 2019):108701. doi: 10.1016/j.jneumeth.2020.108701 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 27. Aghili M, Tabarestani S, Adjouadi M. Addressing the missing data challenge in multi-modal datasets for the diagnosis of Alzheimer’s disease. J Neurosci Methods [Internet]. 2022;375(March):109582 doi: 10.1016/j.jneumeth.2022.109582 [DOI] [PubMed] [Google Scholar]
  • 28. Liu J, Li M, Luo Y, Yang S, Li W, Bi Y. Alzheimer’s disease detection using depthwise separable convolutional neural networks. Comput Methods Programs Biomed [Internet]. 2021;203:106032. doi: 10.1016/j.cmpb.2021.106032 [DOI] [PubMed] [Google Scholar]
  • 29. Janghel RR, Rathore YK. Deep Convolution Neural Network Based System for Early Diagnosis of Alzheimer’s Disease. Irbm [Internet]. 2021;42(4):258–67. doi: 10.1016/j.irbm.2020.06.006 [DOI] [Google Scholar]
  • 30. Bae J, Stocks J, Heywood A, Jung Y, Jenkins L, Hill V, et al. Transfer learning for predicting conversion from mild cognitive impairment to dementia of Alzheimer’s type based on a three-dimensional convolutional neural network. Neurobiol Aging. 2021;99:53–64. doi: 10.1016/j.neurobiolaging.2020.12.005 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31. Sathiyamoorthi V, Ilavarasi AK, Murugeswari K, Thouheed Ahmed S, Aruna Devi B, Kalipindi M. A deep convolutional neural network based computer aided diagnosis system for the prediction of Alzheimer’s disease in MRI images. Meas J Int Meas Confed [Internet]. 2021;171(June 2020):108838. doi: 10.1016/j.measurement.2020.108838 [DOI] [Google Scholar]
  • 32. Ferri R, Babiloni C, Karami V, Triggiani AI, Carducci F, Noce G, et al. Stacked autoencoders as new models for an accurate Alzheimer’s disease classification support using resting-state EEG and MRI measurements. Clin Neurophysiol [Internet]. 2021;132(1):232–45. doi: 10.1016/j.clinph.2020.09.015 [DOI] [PubMed] [Google Scholar]
  • 33. Tong Y, Tong Y, Li Z, Huang H, Gao L, Xu M, et al. Application of CNN-SCN in early diagnosis of Alzheimer’ s disease Application of CNN-SCN in early diagnosis of Alzheimer’s disease. 2022; 10.21203/rs.3.rs-2187429/v1 [DOI] [Google Scholar]
  • 34.Hridhee RA, Bhowmik B, Hossain QD. Alzheimer’s Disease Classification From 2D MRI Brain Scans Using Convolutional Neural Networks. 3rd Int Conf Electr Comput Commun Eng ECCE 2023. 2023;(August):1–6.
  • 35. Allada A, Bhavani R, Chaduvula K, Priya R. Alzheimer’s disease classification using competitive swarm multi-verse optimizer-based deep neuro-fuzzy network. Concurr Comput Pract Exp. 2023;35(21):1–19. doi: 10.1002/cpe.7696 [DOI] [Google Scholar]
  • 36. Gowhar M, Bhagat A, Ansarullah SI, Ben Othman MT, Hamid Y, Alkahtani HK, et al. A Novel Framework for Classification of Different Alzheimer’s Disease Stages Using CNN Model. Electron. 2023;12(2):1–14. [Google Scholar]
  • 37. Bin Tufail A, Anwar N, Ben Othman MT, Ullah I, Khan RA, Ma Y-K, et al. Early-Stage Alzheimer’s Disease Categorization Using PET Neuroimaging Modality and Convolutional Neural Networks in the 2D and 3D Domains. Sensors [Internet]. 2022. Jun 18;22(12):4609. Available from: https://www.mdpi.com/1424-8220/22/12/4609 [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 38.adni.loni.usc.edu. (n.d.). ADNI | Alzheimer’s Disease Neuroimaging Initiative. Retrieved June 21, 2023; https://adni.loni.usc.edu/

Decision Letter 0

Muhammad Fazal Ijaz

22 May 2023

PONE-D-23-12558: A Bilateral Filtering-based Image Enhancement for Alzheimer Disease Classification using CNN (PLOS ONE)

Dear Dr. Awarayi,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

Please submit your revised manuscript by Jul 06 2023 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Muhammad Fazal Ijaz

Academic Editor

PLOS ONE

Journal Requirements:

When submitting your revision, we need you to address these additional requirements.

1. Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at 

https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and 

https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf

2. Please note that PLOS ONE has specific guidelines on code sharing for submissions in which author-generated code underpins the findings in the manuscript. In these cases, all author-generated code must be made available without restrictions upon publication of the work. Please review our guidelines at https://journals.plos.org/plosone/s/materials-and-software-sharing#loc-sharing-code and ensure that your code is shared in a way that follows best practice and facilitates reproducibility and reuse.

3. In your Data Availability statement, you have not specified where the minimal data set underlying the results described in your manuscript can be found. PLOS defines a study's minimal data set as the underlying data used to reach the conclusions drawn in the manuscript and any additional data required to replicate the reported study findings in their entirety. All PLOS journals require that the minimal data set be made fully available. For more information about our data policy, please see http://journals.plos.org/plosone/s/data-availability.

"Upon re-submitting your revised manuscript, please upload your study’s minimal underlying data set as either Supporting Information files or to a stable, public repository and include the relevant URLs, DOIs, or accession numbers within your revised cover letter. For a list of acceptable repositories, please see http://journals.plos.org/plosone/s/data-availability#loc-recommended-repositories. Any potentially identifying patient information must be fully anonymized.

Important: If there are ethical or legal restrictions to sharing your data publicly, please explain these restrictions in detail. Please see our guidelines for more information on what we consider unacceptable restrictions to publicly sharing data: http://journals.plos.org/plosone/s/data-availability#loc-unacceptable-data-access-restrictions. Note that it is not acceptable for the authors to be the sole named individuals responsible for ensuring data access.

We will update your Data Availability statement to reflect the information you provide in your cover letter.

4. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide.

[Note: HTML markup is below. Please do not edit.]

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #1: Partly

Reviewer #2: Yes

**********

2. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #1: Yes

Reviewer #2: No

**********

3. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #1: No

Reviewer #2: Yes

**********

4. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #1: Yes

Reviewer #2: Yes

**********

5. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #1: The overall impression of the technical contribution of the current study is reasonable. However, the Authors may consider making necessary amendments to the manuscript for better comprehensibility of the study.

1. The abstract must be re-written, focusing on the technical aspects of the proposed model, the main experimental results, and the metrics used in the evaluation. Briefly discuss how the proposed model is superior.

2. Additionally, method names should not be capitalized. Moreover, it is not best practice to use abbreviations in the abstract; they should be defined when the term is introduced for the first time.

3. The contribution of the current study must be briefly discussed as bullet points in the introduction. The motivation must also be discussed in the manuscript.

4. The introduction section is too lengthy; some parts of it can be moved to the literature review.

5. Introduction section must discuss the technical gaps associated with the current problem. Authors may refer and include https://doi.org/10.32604/cmc.2021.018472

6. The literature section is missing. Authors are recommended to incorporate the same for better comprehensibility of the study.

7. The authors may provide an architecture/block diagram of the proposed model so that its various aspects can be better comprehended.

8. What is the size of the input image that is considered for processing and the size of the kernels? Authors may refer and include the content shown in https://doi.org/10.1038/s41598-022-25089-2 for better comprehensibility.

9. Whether the authors have used stride 1 or stride 2 must be stated.

10. For how many epochs does the proposed model execute? What is the initial learning rate, and after how many epochs does the model’s learning rate saturate?

11. The majority of the figures lack clarity; their quality is fair, but they must be explained in the text and the figures must be cited.

12. In its current form, the conclusion section is hard for PLOS ONE readers to understand. It should be extended with new sentences about the necessity and contributions of the study, considering the authors’ opinions about the experimental results and, if possible, other well-known objective evaluation values.

13. English proofreading is strongly recommended for a better understanding of the study, and the quality of the figures must be tremendously improved.

14. The captions of the figures are not self-explanatory. Figure captions should be self-explanatory and clearly explain the figure. Extend the descriptions of the mentioned figures to make them self-explanatory. For example: Fig 1. Proposed CNN architecture.

Reviewer #2: 1. The concept addressed in this study is quite good. The contribution of the current study must be briefly discussed as bullet points in the introduction. The motivation must also be discussed in the introduction of the manuscript.

2. The introduction and the literature review must be substantially improved, with a focus on the limitations of existing models.

3. Where is the graph for testing loss and accuracy presented in the study?

4. Please discuss the implementation platform and the dataset details in more depth, as two sub-sections of the manuscript.

5. Include the block diagram for the proposed approach and its process.

6. In its current form, the conclusion section is hard for the Journal’s readers to understand. It should be extended with new sentences about the necessity and contributions of the study, considering the authors’ opinions about the experimental results and, if possible, other well-known objective evaluation values.

**********

6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #1: No

Reviewer #2: Yes: Dr. Jana Shafi

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2024 Apr 19;19(4):e0302358. doi: 10.1371/journal.pone.0302358.r002

Author response to Decision Letter 0


12 Jul 2023

Editor Comments #3 and #4: Data availability

Response: The datasets used in this study were obtained from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) database, a public data repository with access control; researchers can apply to be granted access to the datasets. Due to the ADNI’s data use policy, the authors do not have control over the data; however, we have duly cited the data source and will include the link to the database, through which researchers can submit requests to the ADNI for access.

Reviewer 1, Comment # 1: The abstract must be re-written, focusing on the technical aspects of the proposed model, the main experimental results, and the metrics used in the evaluation. Briefly discuss how the proposed model is superior.

Response: Thank you for your comments and suggestions. The abstract has been rewritten, taking into account all the suggestions made by the reviewer.

Reviewer 1, Comment #2: Additionally, method names should not be capitalized. Moreover, it is not best practice to use abbreviations in the abstract; they should be defined when the term is introduced for the first time.

Response: Your comments are duly appreciated: The abbreviations in the abstract have been written in full. Please refer to the abstract section (page 1).

Reviewer 1, Comment #3, 4, and 5: The contribution of the current study must be briefly discussed as bullet points in the introduction, and the motivation must also be discussed in the manuscript. The introduction section is too lengthy; some parts of it can be moved to the literature review. The introduction section must also discuss the technical gaps associated with the current problem.

Response: Thank you for your contribution. The introduction has been modified in accordance with the reviewer’s comments. The literature review was embedded in the introduction, making it too lengthy; it has therefore been moved to a new section (pages 2–4). The motivation and technical gaps have been elaborated (page 1).

Reviewer 1, Comment #6: The literature section is missing. Authors are recommended to incorporate the same for better comprehensibility of the study.

Response: The literature review was added to the introduction and has been moved into a new section for more clarity. See page 2-4.

Reviewer 1, Comment #7: Authors may provide the architecture/block diagram of the proposed model for better comprehensibility of the proposed model concerning various aspects of the proposed model.

Response: Your suggestions are duly acknowledged. The block diagram has been incorporated for better clarity.

Reviewer 1, Comment #8: What is the size of the input image that is considered for processing and the size of the kernels? Authors may refer and include the content shown in https://doi.org/10.1038/s41598-022-25089-2 for better comprehensibility.

Response: Thank you for the comments. The size of the input images is defined under the datasets section (page 5) and the convolutional neural network model section (page 7). The kernel filter size is also indicated under the convolutional neural network model section (page 7) and the hyperparameters section (page 8).

Reviewer 1, Comment #9: Whether authors have used Stride 1 or Stride, 2 must be presented.

Response: A stride of 1 was used in the convolutional layers; this has now been included in the manuscript.

Reviewer 1, Comment #10: For how many epochs does the proposed model execute? What is the initial learning rate, and after how many epochs does the model’s learning rate saturate?

Response: Thank you for the comments. The model was trained for 200 epochs with a learning rate of 0.001, which is stated under the hyperparameters section and indicated in various sections of the manuscript (pages 1, 8, and 9).

Reviewer 1, Comment #11: The majority of the figures lack clarity; their quality is fair, but they must be explained in the text and the figures must be cited.

Response: The figure labeling has been revised, and all figures have been cited appropriately for better clarity.

Reviewer 1, Comment #12: By considering the current form of the conclusion section, it is hard to understand by PlosOne Journal readers. It should be extended with new sentences about the necessity and contributions of the study by considering the authors' opinions about the experimental results derived from some other well-known objective evaluation values if it is possible.

Response: Thank you for the comments. The conclusion has been rewritten for better comprehension. (see page 11).

Reviewer 1, Comment #13: English proofreading is strongly recommended for a better understanding of the study, and the quality of the figures must be tremendously improved.

Response: Thank you for the recommendation. Proofreading has been done.

Reviewer 1, Comment #14: The captions of the figures are not self-explanatory. Figure captions should be self-explanatory and clearly explain the figure. Extend the descriptions of the mentioned figures to make them self-explanatory. For example: Fig 1. Proposed CNN architecture.

Response: The figure labeling has been revised, and all figures have been cited appropriately for better clarity.

Reviewer 2, Comment #1, 2: The concept addressed in this study is quite good. The contribution of the current study must be briefly discussed as bullet points in the introduction, and the motivation must also be discussed there. The introduction and the literature review must be substantially improved, with a focus on the limitations of existing models.

Response: Thank you for your contribution. The introduction and the literature review have been modified in accordance with the reviewer’s comments. The literature review in the introduction has been moved to a new section (pages 2–4) for better clarity. The motivation and technical gaps have been explained (page 1).

Reviewer 2, Comment #3: Where is the graph for testing loss and accuracy presented in the study?

Response: The authors excluded the graphs because the training method employed (10-fold cross-validation) generated 10 loss graphs and 10 accuracy graphs, one pair per iteration, which we thought would be too many to present in the manuscript.
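The bookkeeping described in this response can be illustrated with a minimal sketch. This is not the authors' code: `kfold_indices` is a hypothetical helper, and `train_fold` is a placeholder standing in for fitting the CNN (200 epochs at a learning rate of 0.001) on one fold; the point is simply that 10-fold cross-validation yields one loss/accuracy history per fold, hence 10 pairs of graphs.

```python
def kfold_indices(n_samples, k=10):
    """Yield (train_idx, val_idx) index lists for k roughly equal folds."""
    indices = list(range(n_samples))
    fold_size = n_samples // k
    for i in range(k):
        start = i * fold_size
        stop = (i + 1) * fold_size if i < k - 1 else n_samples
        val_idx = indices[start:stop]                    # held-out fold
        train_idx = indices[:start] + indices[stop:]     # remaining folds
        yield train_idx, val_idx

def train_fold(train_idx, val_idx):
    # Placeholder for fitting the CNN on this fold; a real run would
    # return per-epoch loss and accuracy curves for plotting.
    return {"loss": [], "accuracy": []}

# One history per fold: 10 loss graphs and 10 accuracy graphs in total.
histories = [train_fold(tr, va) for tr, va in kfold_indices(1000, k=10)]
print(len(histories))
```

With 1000 samples and k=10, each iteration holds out 100 samples for validation and trains on the other 900, so the loop produces exactly 10 histories.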

Reviewer 2, Comment #4: Please discuss more on the implementation platform and the dataset details as two sub-sections in the manuscript.

Response: Your comments are duly appreciated. The datasets have been further described under the dataset section, while the implementation platform is presented under the method section.

Reviewer 2, Comment #5: Include the block diagram for the proposed approach and its process.

Response: Thank you for the suggestion. The block diagram has been incorporated for better clarity.

Reviewer 2, Comment #6: By considering the current form of the conclusion section, it is hard to understand by the Journal readers. It should be extended with new sentences about the necessity and contributions of the study by considering the authors' opinions about the experimental results derived from some other well-known objective evaluation values if it is possible.

Response: Thank you for the comments. The conclusion has been rewritten for better comprehension. (see page 11).

Attachment

Submitted filename: Response to reviewers.pdf

pone.0302358.s002.pdf (164.8KB, pdf)

Decision Letter 1

Sunder Ali Khowaja

17 Sep 2023

PONE-D-23-12558R1: A Bilateral Filtering-based Image Enhancement for Alzheimer Disease Classification using CNN (PLOS ONE)

Dear Dr. Awarayi,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

ACADEMIC EDITOR: The reviewers have reviewed the revised manuscript. Based on their assessment, the paper needs to undergo major revisions as suggested. The authors are requested to make the necessary changes to the manuscript and prepare a point-by-point response to the reviewer comments for further consideration.

==============================

Please submit your revised manuscript by Nov 01 2023 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols.

We look forward to receiving your revised manuscript.

Kind regards,

Sunder Ali Khowaja, Ph.D.

Academic Editor

PLOS ONE


Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation.

Reviewer #3: (No Response)

Reviewer #4: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented.

Reviewer #3: No

Reviewer #4: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #3: No

Reviewer #4: N/A

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified.

Reviewer #3: Yes

Reviewer #4: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #3: No

Reviewer #4: Yes

**********

6. Review Comments to the Author

Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters)

Reviewer #3: PONE-D-23-12558R1

I think this article is a revised version but still there many issues that must be considered to properly reach the publication. Some of the suggestions and comments are listed as follows:

1. There are several grammatical mistakes and typos that must be corrected with careful revision, such as from where this parenthesis “….. [1]. ).” Is started, and so on.

2. Most of the abbreviations are incorrect such as Alzheimer’s 8 disease, and so on. Every term should be defined completely for the first time and then use the abbreviation throughout the article, do not repeat.

3. The first sentence “First, an image enhancement technique that effectively reduces noise in MRI 49 images while preserving image quality” of the contribution seems incomplete. Please revise. Additionally, the contribution is very limited.

4. There is no paper organization, please add at the end of the introduction section.

5. Generally, the English writing of this article is very low standard.

6. Moreover, the technical depth of the paper is not adequate too. I strictly recommend the authors to have a look at some standard article and learn about the article writing and organization, etc.

7. Table 1 summarize the existing CNN model used for AD, although there several more latest models such as “3D Convolutional Neural Networks Based Multiclass Classification of Alzheimer’s and Parkinson’s Diseases using PET and SPECT Neuroimaging Modalities, Brain Informatics, 2021”, “On Improved 3D-CNN Based Binary and Multiclass Classification of Alzheimer’s Disease Using Neuroimaging Modalities and Data Augmentation Methods”, Journal of Healthcare Engineering, 2021”, “Early-Stage Alzheimer's Disease Categorization using PET Neuroimaging Modality and Convolutional Neural Networks in the 2D and 3D Domains”, Sensors, 2022”, “On Disharmony in Batch Normalization and Dropout Methods for Early Categorization of Alzheimer's Disease” Sustainability, 2022”. Beside this, the authors can also refer to more latest models for better comparison and understanding.

8. Figure 1 should be redesigned. Most of the arrows are not correctly connected. Also, keep the figures text always consistent with the paper body text.

9. After any equation, in term “Where” w should be small always.

10. Table 4 presents a Comparative analysis of existing and proposed model but the authors should add more latest models in comparison.

11. Most of the reference are old enough and also limited. Further latest references can be added.

Reviewer #4: Authors have addressed the comments made by previous reviewer; however, I would like to suggest that authors should update the literature review with recent studies from 2023, for improved readability and relevancy.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

If you choose “no”, your identity will remain anonymous but your review may still be made public.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #3: No

Reviewer #4: Yes: Parus Khuwaja

**********

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]

While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.

PLoS One. 2024 Apr 19;19(4):e0302358. doi: 10.1371/journal.pone.0302358.r004

Author response to Decision Letter 1


23 Oct 2023

Reviewer 3, Comment # 1: There are several grammatical mistakes and typos that must be corrected with careful revision, such as from where this parenthesis “….. [1]. ).” Is started, and so on.

Response: Thank you for your comments and suggestions. The grammatical mistakes and typos have been rectified. The manuscript has been thoroughly reviewed, and the errors duly corrected. We also used the Grammarly software to check and correct the mistakes.

Reviewer 3, Comment # 2: Most of the abbreviations are incorrect such as Alzheimer’s 8 disease, and so on. Every term should be defined completely for the first time and then use the abbreviation throughout the article, do not repeat.

Response: Your comments are duly appreciated. The corrections have been made to that effect.

Reviewer 3, Comment # 3: The first sentence “First, an image enhancement technique that effectively reduces noise in MRI 49 images while preserving image quality” of the contribution seems incomplete. Please revise. Additionally, the contribution is very limited.

Response: Thanks for the comments. The manuscript has been modified accordingly.

Action: The primary contribution of this study comprises two key components:

• First, an image enhancement technique that effectively reduces noise in MRI images while preserving image quality was designed and implemented. The enhancement algorithm for color images combines two image processing techniques: histogram equalization and bilateral filtering. The algorithm improves the quality of the images while maintaining their edges and details: bilateral filtering preserves the edges and fine details of the image, while histogram equalization redistributes the image intensities to improve contrast.

• Second, a CNN model consisting of four convolutional layers and two hidden layers successfully classifies AD more accurately than existing deep learning models. The model was trained using the k-fold cross-validation technique, ensuring that all the images were effectively used in training and testing the model.
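For illustration, the enhancement pipeline described in the first contribution can be sketched as below. This is a minimal sketch rather than the authors' implementation: the function names, the equalize-then-filter order, the neighbourhood radius, and the sigma parameters are all assumptions made for the example.

```python
import numpy as np

def equalize_histogram(img):
    # Global histogram equalization for a 2D uint8 image: remap each
    # intensity through the normalized cumulative histogram.
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    span = cdf.max() - cdf.min()
    if span == 0:  # constant all-zero image: nothing to equalize
        return img.copy()
    lut = np.round((cdf - cdf.min()) / span * 255).astype(np.uint8)
    return lut[img]

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=30.0):
    # Edge-preserving smoothing: each output pixel is a weighted mean of
    # its neighbours, weighted by spatial closeness (sigma_s) and by
    # intensity similarity (sigma_r), so strong edges are not blurred away.
    img = img.astype(np.float64)
    pad = np.pad(img, radius, mode="edge")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            similarity = np.exp(-(patch - img[i, j]) ** 2 / (2 * sigma_r**2))
            weights = spatial * similarity
            out[i, j] = (weights * patch).sum() / weights.sum()
    return np.clip(np.rint(out), 0, 255).astype(np.uint8)

def enhance(img):
    # Equalize contrast first, then suppress noise while keeping edges.
    return bilateral_filter(equalize_histogram(img))
```

In practice, optimized library routines such as OpenCV's `cv2.equalizeHist` and `cv2.bilateralFilter` would replace these hand-rolled loops.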

Reviewer 3, Comment # 4: There is no paper organization, please add at the end of the introduction section.

Response: The paper organization has been added at the end of the introduction.

Action: The remaining sections of the paper are the literature review, materials and methods, results, and discussion. The literature review section presents a review of existing state-of-the-art studies. The materials and methods section describes the dataset, the proposed method, and the experiment in detail. The results section presents the study's experimental results, which are further discussed in the discussion section.

Reviewer 3, Comment # 5 & 6: Generally, the English writing of this article is very low standard. Moreover, the technical depth of the paper is not adequate too. I strictly recommend the authors to have a look at some standard article and learn about the article writing and organization, etc.

Response: Your comment is duly acknowledged. The manuscript has been revised to improve the writing standard. We referred to Tufail et al. (2023), Allada et al. (2023), and the PLOS ONE manuscript writing guidelines for insight into improving the manuscript.

Reviewer 3, Comment # 7: Table 1 summarize the existing CNN model used for AD, although there several more latest models such as “3D Convolutional Neural Networks Based Multiclass Classification of Alzheimer’s and Parkinson’s Diseases using PET and SPECT Neuroimaging Modalities, Brain Informatics, 2021”, “On Improved 3D-CNN Based Binary and Multiclass Classification of Alzheimer’s Disease Using Neuroimaging Modalities and Data Augmentation Methods”, Journal of Healthcare Engineering, 2021”, “Early-Stage Alzheimer's Disease Categorization using PET Neuroimaging Modality and Convolutional Neural Networks in the 2D and 3D Domains”, Sensors, 2022”, “On Disharmony in Batch Normalization and Dropout Methods for Early Categorization of Alzheimer's Disease” Sustainability, 2022”. Beside this, the authors can also refer to more latest models for better comparison and understanding.

Response: Thanks for your recommendations. We have updated Table 1 with more recent models, such as Hridhee et al. (2023), Allada et al. (2023), Gowhar et al. (2023), and Tufail et al. (2022).

Reviewer 3, Comment # 8: Figure 1 should be redesigned. Most of the arrows are not correctly connected. Also, keep the figures text always consistent with the paper body text.

Response: Thanks for the observation and suggestion. Figure 1 has been redesigned to ensure the arrows are well connected, and the text is consistent with the paper’s body text.

Action:

Reviewer 3, Comment # 9: After any equation, in term “Where” w should be small always.

Response: Thanks for the correction. The correction has been done.

Reviewer 3, Comment # 10: Table 4 presents a Comparative analysis of existing and proposed model but the authors should add more latest models in comparison.

Response: Thank you for the recommendation. More recent models, such as Hridhee et al. (2023), Allada et al. (2023), Gowhar et al. (2023), and Tufail et al. (2022), have been added.

Reviewer 3, Comment # 11: Most of the reference are old enough and also limited. Further latest references can be added.

Response: Thank you for the comment. The suggestion has been implemented.

Action:

1. Hridhee RA, Bhowmik B, Hossain QD. Alzheimer’s Disease Classification From 2D MRI Brain Scans Using Convolutional Neural Networks. 3rd Int Conf Electr Comput Commun Eng ECCE 2023. 2023;(August):1–6.

2. Allada A, Bhavani R, Chaduvula K, Priya R. Alzheimer’s disease classification using competitive swarm multi-verse optimizer-based deep neuro-fuzzy network. Concurr Comput Pract Exp. 2023;35(21):1–19.

3. Mohi ud din dar G, Bhagat A, Ansarullah SI, Othman MT Ben, Hamid Y, Alkahtani HK, et al. A Novel Framework for Classification of Different Alzheimer’s Disease Stages Using CNN Model. Electron. 2023;12(2):1–14.

4. Tufail A Bin, Anwar N, Othman MT Ben, Ullah I, Khan RA, Ma Y-K, et al. Early-Stage Alzheimer’s Disease Categorization Using PET Neuroimaging Modality and Convolutional Neural Networks in the 2D and 3D Domains. Sensors [Internet]. 2022 Jun 18;22(12):4609. Available from: https://www.mdpi.com/1424-8220/22/12/4609

Reviewer 4, Comment # 1: Authors have addressed the comments made by previous reviewer; however, I would like to suggest that authors should update the literature review with recent studies from 2023, for improved readability and relevancy.

Response: Thank you for your comment and suggestion. More recent models, such as Hridhee et al. (2023), Allada et al. (2023), Gowhar et al. (2023), and Tufail et al. (2022), have been added to the literature review.

Attachment

Submitted filename: Response to reviewers.pdf

pone.0302358.s003.pdf (148.5KB, pdf)

Decision Letter 2

Sunder Ali Khowaja

18 Dec 2023

PONE-D-23-12558R2

A Bilateral Filtering-based Image Enhancement for Alzheimer Disease Classification using CNN

PLOS ONE

Dear Dr. Awarayi,

Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.

==============================

ACADEMIC EDITOR: The reviewer reports have been submitted. The reviewers are suggesting revisions before the manuscript can be considered for publication. The authors are advised to revise the manuscript according to the reviewers' comments and prepare a point-by-point response to clarify the revisions that have been made.

==============================

Please submit your revised manuscript by Feb 01 2024 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file.

Please include the following items when submitting your revised manuscript:

  • A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'.

  • A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'.

  • An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'.

If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter.

We look forward to receiving your revised manuscript.

Kind regards,

Sunder Ali Khowaja, Ph.D.

Academic Editor

PLOS ONE

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

Reviewer #4: All comments have been addressed

Reviewer #5: All comments have been addressed

Reviewer #6: All comments have been addressed

Reviewer #7: (No Response)

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

Reviewer #4: Yes

Reviewer #5: Yes

Reviewer #6: Partly

Reviewer #7: No

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #4: N/A

Reviewer #5: Yes

Reviewer #6: N/A

Reviewer #7: No

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

Reviewer #4: No

Reviewer #5: No

Reviewer #6: Yes

Reviewer #7: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

Reviewer #4: Yes

Reviewer #5: No

Reviewer #6: Yes

Reviewer #7: Yes

**********

6. Review Comments to the Author

Reviewer #4: The authors have addressed all the comments in an adequate manner. Therefore, I would like to recommend the acceptance of this article.

Reviewer #5: The author has basically solved my problem, and I have no other problems. Now I recommend receiving the revised manuscript.

Reviewer #6: The article entitled "A Bilateral Filtering-based Image Enhancement for Alzheimer Disease Classification using CNN" can be accepted in its current form.

Reviewer #7: The authors propose Alzheimer Disease (AD) Classification using convolutional neural network. The proposed method employs preprocessing techniques like histogram equalization and bilateral filtering techniques to reduce noise and improve image quality, with the ultimate goal of facilitating classification of AD. Overall contribution of the work is not impressive.

1. The overall representation/structure of paper is not appropriate. It needs figurative approach that should be representation of the overall proposed work.

2. With necessary images show the impact of proposed enhancement techniques (histogram equalization, bilateral filtering).

3. Describe with necessary data how the preprocessing techniques improves the overall accuracy of the proposed work.

4. Compare the result with the standard CNN models like VGG19, ResNet110, Dense net also compare the number of parameters.

5. Data samples are imbalanced how this is handled please describe.

6. Show the confusion matrix for each class and also describe the ROC curves.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #4: Yes: Parus Khuwaja

Reviewer #5: No

Reviewer #6: No

Reviewer #7: No

**********

PLoS One. 2024 Apr 19;19(4):e0302358. doi: 10.1371/journal.pone.0302358.r006

Author response to Decision Letter 2


8 Feb 2024

Reviewer 4, Comment #1: The authors have addressed all the comments in an adequate manner. Therefore, I would like to recommend the acceptance of this article.

Response: Thank you for reviewing and accepting our manuscript.

Reviewer 5, Comment #1: The author has basically solved my problem, and I have no other problems. Now I recommend receiving the revised manuscript.

Response: Thank you for reviewing and accepting our manuscript.

Reviewer 6, Comment #1: The article entitled "A Bilateral Filtering-based Image Enhancement for Alzheimer Disease Classification using CNN" can be accepted in its current form.

Response: Thank you for reviewing and accepting our manuscript.

Reviewer 7, Comment #1: The overall representation/structure of paper is not appropriate. It needs figurative approach that should be representation of the overall proposed work.

Response: Thank you for the comment. We have reviewed the manuscript to ensure it follows the general guidelines of the PLOS ONE journal. We have also introduced more figures to improve the visual elements in the work for better clarity.

Reviewer 7, Comment #2: With necessary images show the impact of proposed enhancement techniques (histogram equalization, bilateral filtering).

Response: Thank you for the suggestion. Images illustrating the proposed enhancement techniques are presented in Fig 3 of the manuscript. A further comparative assessment of image quality, based on mean pixel intensity, contrast, and entropy metrics, is presented in Fig 4 of the manuscript.

Reviewer 7, Comment #3: Describe with necessary data how the preprocessing techniques improves the overall accuracy of the proposed work.

Response: Table 2 of the manuscript has been modified to include the test results of the model before the image enhancement.

Reviewer 7, Comment #4: Compare the result with the standard CNN models like VGG19, ResNet110, Dense net also compare the number of parameters.

Response: Thank you for your recommendation. This comparison has already been made in Table 4 of the manuscript. The study by Hridhee et al. (2023) used the VGG16 model, while Zhang et al. (2022) used the ResNet110 model.

Reviewer 7, Comment #5: Data samples are imbalanced how this is handled please describe.

Response: Your comments are duly appreciated. Data augmentation was used to increase the data volume. The residual imbalance in the MCI class was observed to be insignificant, as it did not affect the training process; this can be seen in the confusion matrices in Figs 5 and 6 of the manuscript.
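As background, a minimal sketch of this kind of augmentation step is shown below; the particular transforms (a horizontal flip and right-angle rotations) and the helper name `augment` are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def augment(img):
    # Generate label-preserving geometric variants of a 2D image slice.
    # Which transforms are safe depends on the imaging modality; flips and
    # right-angle rotations are common, conservative choices for MRI slices.
    return [
        img,                # original
        np.fliplr(img),     # horizontal flip
        np.rot90(img, 1),   # 90-degree rotation
        np.rot90(img, 3),   # 270-degree rotation
    ]
```

Applying such transforms to the minority class yields extra training samples per original image, which is one common way to offset a mild class imbalance.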

Reviewer 7, Comment #6: Show the confusion matrix for each class and also describe the ROC curves.

Response: Thank you for the comment. The confusion matrices and the ROC curves have been included; refer to Figs 5, 6 and 7 of the manuscript.

Attachment

Submitted filename: Response to reviewers.pdf

pone.0302358.s004.pdf (17.2KB, pdf)

Decision Letter 3

Sunder Ali Khowaja

3 Apr 2024

A Bilateral Filtering-based Image Enhancement for Alzheimer Disease Classification using CNN

PONE-D-23-12558R3

Dear Dr. Awarayi,

We’re pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements.

Within one week, you’ll receive an e-mail detailing the required amendments. When these have been addressed, you’ll receive a formal acceptance letter and your manuscript will be scheduled for publication.

An invoice will be generated when your article is formally accepted. Please note, if your institution has a publishing partnership with PLOS and your article meets the relevant criteria, all or part of your publication costs will be covered. Please make sure your user information is up-to-date by logging into Editorial Manager® and clicking the ‘Update My Information' link at the top of the page. If you have any questions relating to publication charges, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they’ll be preparing press materials, please inform our press team as soon as possible -- no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,

Sunder Ali Khowaja, Ph.D.

Academic Editor

PLOS ONE

Additional Editor Comments (optional):

It seems that the authors have revised their manuscript in a satisfactory manner, as indicated by the reviewer. Therefore, I would like to recommend the acceptance of this manuscript.

Reviewers' comments:

Reviewer's Responses to Questions

Comments to the Author

Reviewer #6: All comments have been addressed

**********

2. Is the manuscript technically sound, and do the data support the conclusions?

Reviewer #6: Yes

**********

3. Has the statistical analysis been performed appropriately and rigorously?

Reviewer #6: N/A

**********

4. Have the authors made all data underlying the findings in their manuscript fully available?

Reviewer #6: Yes

**********

5. Is the manuscript presented in an intelligible fashion and written in standard English?

Reviewer #6: Yes

**********

6. Review Comments to the Author

Reviewer #6: The authors have made substantial corrections in the article. Hence, the article can be accepted in its current form.

**********

7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files.

Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #6: No

**********

Acceptance letter

Sunder Ali Khowaja

8 Apr 2024

PONE-D-23-12558R3

PLOS ONE

Dear Dr. Awarayi,

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now being handed over to our production team.

At this stage, our production department will prepare your paper for publication. This includes ensuring the following:

* All references, tables, and figures are properly cited

* All relevant supporting information is included in the manuscript submission,

* There are no issues that prevent the paper from being properly typeset

If revisions are needed, the production department will contact you directly to resolve them. If no revisions are needed, you will receive an email when the publication date has been set. At this time, we do not offer pre-publication proofs to authors during production of the accepted work. Please keep in mind that we are working through a large volume of accepted articles, so please give us a few weeks to review your paper and let you know the next and final steps.

Lastly, if your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

If we can help with anything else, please email us at customercare@plos.org.

Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,

PLOS ONE Editorial Office Staff

on behalf of

Dr. Sunder Ali Khowaja

Academic Editor

PLOS ONE

Associated Data

    This section collects any data citations, data availability statements, or supplementary materials included in this article.

    Supplementary Materials

    S1 Checklist. PLOS ONE clinical studies checklist.

    (PDF)

    pone.0302358.s001.pdf (193.1KB, pdf)
    Attachment

    Submitted filename: Response to reviewers.pdf

    pone.0302358.s002.pdf (164.8KB, pdf)
    Attachment

    Submitted filename: Response to reviewers.pdf

    pone.0302358.s003.pdf (148.5KB, pdf)
    Attachment

    Submitted filename: Response to reviewers.pdf

    pone.0302358.s004.pdf (17.2KB, pdf)

    Data Availability Statement

    Data cannot be shared publicly because of institutional data access policy. Data are available from the ADNI Institutional Data Access / Ethics Committee (contact via adni.loni.usc.edu) for researchers who meet the criteria for access to confidential data.

