Heliyon. 2024 Oct 9;10(20):e39037. doi: 10.1016/j.heliyon.2024.e39037

Fractional gradient optimized explainable convolutional neural network for Alzheimer's disease diagnosis

Zeshan Aslam Khan a, Muhammad Waqar a, Naveed Ishtiaq Chaudhary b, Muhammad Junaid Ali Asif Raja c, Saadia Khan d, Farrukh Aslam Khan e, Iqra Ishtiaq Chaudhary f, Muhammad Asif Zahoor Raja b
PMCID: PMC11532259  PMID: 39498007

Abstract

Alzheimer's disease is a brain disorder that steadily impairs memory. Its early stage is referred to as mild cognitive impairment (MCI), and progression to Alzheimer's disease (AD) is not certain in patients with MCI. Early detection of Alzheimer's is crucial for maintaining healthy brain function and avoiding memory loss. Researchers have proposed different multi-neural network architectures for efficient and accurate AD detection. However, the absence of improved feature extraction mechanisms and of efficient optimizers in complex benchmark architectures leads to inefficient and inaccurate AD classification. Moreover, standard convolutional neural network (CNN)-based architectures for Alzheimer's diagnosis lack interpretability in their predictions. An interpretable, simplified, yet effective deep learning model is required for accurate classification of AD. In this study, a generalized fractional order-based CNN classifier with explainable artificial intelligence (XAI) capabilities is proposed for accurate, efficient, and interpretable AD diagnosis. The proposed study (a) classifies AD accurately by incorporating an unexplored pooling technique with an enhanced feature extraction mechanism, (b) provides a fractional order-based optimization approach for adaptive learning and fast convergence, and (c) suggests an interpretable method for demonstrating the transparency of the model. The proposed model outperforms complex benchmark architectures in accuracy on the standard ADNI dataset, with the proposed fractional order-based CNN classifier achieving an improved accuracy of 99 % compared to state-of-the-art models.

Keywords: Alzheimer's disease, Convolutional neural network, Fractional optimization, Neuroimaging, Explainable artificial intelligence, Customized pooling


Nomenclature

Abbreviation Full Form
DL Deep Learning
AD Alzheimer's Disease
MCI Mild Cognitive Impairment
Fr-CNN Fractional Order Convolutional Neural Network
XAI Explainable Artificial Intelligence
ADNI Alzheimer's Disease Neuroimaging Initiative
LIME Local Interpretable Model-Agnostic Explanations
FSGD Fractional Stochastic Gradient Descent
GFSGD Generalized Fractional Stochastic Gradient Descent
MRI Magnetic Resonance Imaging
SVM Support Vector Machine
KNN K-Nearest Neighbors
VGG Visual Geometry Group (a deep learning model)
ADASYN Adaptive Synthetic Sampling (an oversampling technique)
FLOPs Floating Point Operations
PCA Principal Component Analysis
ICA Independent Component Analysis
3D-CNN 3-Dimensional Convolutional Neural Network
EEG Electroencephalogram
DWT Discrete Wavelet Transform
EMD Empirical Mode Decomposition
NCSA Non-Local Context Spatial Attention
SOTA State of the Art
Alpha (α) Fractional Order

1. Introduction

Nowadays, image classification problems are mostly tackled through deep learning (DL) approaches [1,2]. One of the important roles of DL-based models is learning prominent features, especially for solving classification problems. Convolutional neural network (CNN) based variants have been proposed for object detection [3] and image classification [4] tasks. The promising healthcare applications of DL-based methods include automated classification [5,6], disease diagnosis [7], edge devices [8], segmentation [9], and anomaly detection [10]. Fig. 1 graphically demonstrates the significance of image classification for handling different healthcare issues.

Fig. 1.


Applications of image classification in medical field.

Alzheimer's disease (AD) is a well-known brain disorder that steadily affects patients' memory. Mild cognitive impairment (MCI) is referred to as an early stage of AD, though progression to AD is not certain in patients with MCI. AD typically develops in adults aged 65 and above. The proportion of patients affected by AD during the year 2023 is shown graphically in Fig. 2. The hierarchical structure of the brain makes it difficult to diagnose AD using neuroimaging features. The facts [11] portrayed in Fig. 3 confirm the critical nature and intricacy of AD diagnosis. Effective treatment through early AD diagnosis remains one of the challenges in the healthcare domain. The 62,000 deaths reported in Fig. 4 over five years (2018–2022) were attributed to improper AD treatment. Accurate AD detection is required to treat patients in a timely and proper manner.

Fig. 2.


Adult age groups with Alzheimer's disease diagnosis, 2023.

Fig. 3.


Truths [11] about Alzheimer's disease.

Fig. 4.


Alzheimer's disease death reports (2018–2022).

Researchers have suggested numerous approaches for accurate classification of AD diagnosis over the past few years. Different deep learning methods [12] have been developed for AD diagnosis classification. However, the existing models are computationally expensive due to the high complexity of their architectures. Despite these complex architectures, the suggested models are incapable of producing interpretable, high-quality results for accurate AD classification. The challenges addressed by this study include complex CNN-based variants, unexplainable outcomes, accuracy, efficiency, computational cost, adaptability, and robustness.

The goal of this study is to propose a simplified yet effective fractional order-based convolutional neural network (Fr-CNN) model that comprises an enhanced attention mechanism, an unexploited pooling technique, and a novel optimization approach for accurate and efficient AD diagnosis classification. The Explainable Artificial Intelligence (XAI)-based Local Interpretable Model-Agnostic Explanations (LIME) approach is incorporated so that the proposed Fr-CNN model produces interpretable predictions. LIME explains the decisions made by the proposed Fr-CNN model by highlighting the prominent features and portraying their contributions.

Recently, fractional order-based stochastic gradient descent (FSGD) variants [[13], [14], [15]] have been developed for tackling different problems. Furthermore, a generalized version of fractional SGD (GFSGD) is suggested in Ref. [16] to further boost the convergence speed and accuracy of recommendations. In this study, we aim to exploit a TensorFlow framework-based GFSGD variant for image classification to provide accurate and efficient classification through a CNN.

The prime objectives of the study are as follows:

  • To correctly classify AD at early stages when preventive measures can be effective to overcome further development.

  • To propose a simplified yet effective and computationally inexpensive CNN classifier for accurately and efficiently classifying AD diagnosis.

  • To increase the classification accuracy by manipulating an effective and reliable pooling technique with enhanced attention mechanism.

  • To boost the classification speed by proposing a novel GFSGD optimization technique for AD classification.

  • To exploit XAI-based LIME approach for producing interpretable predictions by the proposed Fr-CNN model.

  • To increase the robustness of the proposed model as compared to standard methods by discovering the adaptive nature of the optimizer.

1.1. Literature review

Janghel et al. [17] suggested a VGG-based CNN architecture with multiple classifiers for detecting AD at premature stages through MRI analysis. The ADNI database is utilized for the AD diagnosis classification task with two class labels, AD and CN (normal control). The recommended model combines the VGG-16 architecture with k-means clustering and an SVM classifier for precise classification among AD diagnosis MRIs. Sarraf et al. [18] utilized multiple pre-trained models for AD diagnosis classification by examining neuroimaging. A scale-and-shift variant approach is exploited as the feature extraction mechanism. Comparing the performance of the GoogleNet and LeNet-5 models, GoogleNet achieves a better accuracy of 98.8 % for AD diagnosis classification. Wu et al. [19] proposed a 3D transfer network, in comparison with existing 2D transfer networks, for distinguishing AD patients from normal individuals. In the suggested technique, 2D networks are incorporated for feature extraction. The extracted output is concatenated and forwarded for categorization. The proposed 3D transfer networks outperform conventional 2D networks by around 10 percentage points in accuracy.

In [20], a 3D Residual RepVGG-based attention network is developed for AD classification. With the incorporation of a non-local context spatial attention (NCSA) mechanism within the VGG architecture, a satisfactory accuracy is achieved. However, the computational complexity of the ResRepANet model was very high, with more than 7 million architectural parameters. In Ref. [21], the authors utilized an adaptive synthetic (ADASYN)-based sampling strategy in the preprocessing steps for augmentation purposes. Afterwards, four transfer learning (TL)-based pre-trained models, VGG16, InceptionV3, ResNet50, and Xception, are exploited to classify MRIs of AD patients. Moreover, in Ref. [22], a customized architecture is developed by recombining blocks of CNN and LSTM. Through this ensemble network, the authors achieved a remarkable classification accuracy of 98.6 % for AD detection. However, due to the excessive architectural layers in the network, the size of the model was around 165 MB, so computational inefficiency was the core limitation of their solution. Furthermore, in Ref. [23], another customized network named ‘Siamese 4D-AlzNet’ is developed by incorporating additional convolutional blocks in TL-based models. The Siamese 4D-AlzNet network was extremely complex, with more than 15 billion floating point operations (FLOPs) and a 470 MB model size; nevertheless, the suggested approach reaches a remarkable accuracy of 95.07 %. Similarly, the authors of [24] introduced a triplet loss operator-driven Siamese CNN model for AD classification. It utilizes a triplet loss function to represent the MRIs in embedding form. This approach displays satisfactory performance by achieving 91.83 % accuracy on the ADNI dataset.

Al-khuzaie et al. [25] suggested a novel ‘AlzNet’ CNN-based model for classification between AD and CN. The architecture comprises 5 conv layers and max-pooling layers with ReLU as the activation function. The OASIS dataset is exploited and the model is trained on 15,200 images. The best accuracy of 99 % is attained by the proposed AlzNet model for the binary classification task. Safi et al. [26] proposed a unique approach for AD diagnosis through EEG signal analysis. Various signal processing strategies such as empirical mode decomposition (EMD) and discrete wavelet transform (DWT) are combined with multiple classifiers such as SVM and KNN for accurate and efficient AD detection. The findings of the study include improved accuracy in terms of AD detection. Lu et al. [27] present a comparison between the MobileNet and VGG16 architectures for AD classification. VGG16 is considered a state-of-the-art (SOTA) architecture and the baseline model for the study. The MobileNet architecture achieves a better accuracy of 94 % compared to the baseline model for accurate AD classification through neuroimaging analysis.

Antony et al. [28] suggested VGG family architectures for AD diagnosis classification. In the preprocessing steps, a data augmentation technique and a skull stripping mechanism are applied to 780 MRIs of the ADNI database. Due to the complexity and computational expense of the VGG models, better results are not generated with the suggested approach. The best accuracies of 81 % and 84 % are achieved by the VGG-16 and VGG-19 models, respectively. Mujahid et al. [29] introduced an ensemble learning strategy for the classification of AD diagnosis. Two heavy CNN-based architectures, VGG16 and EfficientNet-B2, are ensembled for premature-stage AD detection. The OASIS dataset is utilized in the study. To overcome the class imbalance challenge within the dataset, an adaptive oversampling approach is exploited. An accuracy of 97.35 % is attained by the proposed technique.

Duc et al. [30] present a novel approach for automatic detection of AD using MRIs through a 3D-based deep learning technique. Multiple feature extraction & optimization approaches like SVM-RFE and selection operator are exploited for enhanced feature extraction through improved information focus. The accuracy of 86.12 % is achieved by the suggested 3D-based deep learning method. Basheera et al. [31] proposed a novel custom CNN model for AD classification. The residual and inception blocks are incorporated within CNN architectures for enhanced feature extraction and better dimensionality reduction. The enhanced independent component analysis (EICA) technique is exploited for segmentation mask on MRI images for better visualization. The suggested approach is executed for both multi-class and binary classification of AD. The accuracy of 86.7 % is achieved by the proposed technique. Ben Ahmed et al. [32] introduced a hippocampal visual features (HVF) mechanism within the classifier model for enhanced prominent features extraction from an input MRI image. By exploiting HVF, an improved accuracy is reached by the model while distinguishing between Alzheimer's disease and mild cognitive impairment patients. The accuracy of 87 % is attained by the proposed study. Similarly, Venugopalan et al. [33] exploited denoising auto-encoders for prominent feature extraction within neuroimaging data. Afterwards, the study utilizes multiple classifiers such as KNN, SVM, and random forest during critical analysis of the study. The best accuracy of 88 % is achieved with the SVM classifier.

Feng et al. [34] recommended a 3D-CNN model for AD detection. The CNN architecture's hidden layers are used for feature extraction and dimensionality reduction, and a separate SVM classifier is utilized for the classification of AD. With the suggested method, an accuracy of 92.1 % is achieved. Maqsood et al. [35] utilize a transfer learning-based AlexNet pretrained model for AD diagnosis classification. The model is trained on segmented images and evaluated on unsegmented MRI scans, and is executed for both binary and multi-class classification. The accuracies achieved by the model are 92.8 % and 89.6 % for multi-class and binary classification, respectively. Similarly, Savas et al. [36] explored pre-trained CNN architectures such as ResNet-50, DenseNet, XceptionNet, and EfficientNet for AD classification. After comprehensive training, the EfficientNetB3 model outperforms the other models, achieving the best accuracy of 92.98 % for AD classification.

Choi et al. [37] proposed an ensemble learning concept for AD classification. Multiple deep CNN architectures, ensembled with a novel ensemble generalization loss (EGL), are exploited for weight optimization. The classification accuracy achieved by the suggested ensemble model is 93.84 %. Jain et al. [38] presented a study on the VGG-16 architecture for AD detection and classification. Feature extraction is performed through VGG-16 hidden layers, and dense layers are used for classifying AD and MCI among different MRIs. An accuracy of 95.7 % is achieved in the proposed study. Peng et al. [39] introduced a regularized kernel learning-based approach for AD classification. Kernel learning is incorporated as the feature extraction mechanism in the architecture, extracting the most prominent features and hidden patterns within MRIs. The best accuracy of 96.1 % is achieved by the proposed kernel-based approach. Hon et al. [40] exploited pre-trained models such as VGG-16 and Inception V4 for AD diagnosis classification. The results with the VGG-16 model were not satisfying, but with InceptionV4 a better accuracy of 96.25 % was achieved for AD classification. Similarly, Shanmugam et al. [41] and Ebrahimi et al. [42] presented studies on the GoogleNet and ResNet-18 pretrained models for AD classification. Accuracies of 96.3 % and 96.8 % are attained for the multi-class classification task by exploiting the GoogleNet and ResNet-18 architectures, respectively.

In [43], a transformer-driven pre-trained DeiT model is exploited for AD classification. The robustness of the DeiT model was verified through the addition of noise by varying the number of slices. The suggested approach reaches an accuracy of 76 % for classifying MRIs of AD patients. Moreover, in Ref. [44], an ensemble learning approach is explored for the classification of AD and the prediction of MCI conversion. The study presents a mixed regression-classification strategy, in which the suggested ensemble architecture classifies MRIs with an AD diagnosis and separately predicts MCI conversion. However, due to the complexity of the architecture, the size of the suggested network was around 60 MB with 15.2 million architectural parameters, which shows that the suggested approach was not cost-effective for practical deployment on resource-constrained devices.

Beheshti et al. [45] proposed a novel genetic algorithm for AD diagnosis classification. By exploiting the genetic approach for AD classification, an improved accuracy of 97.4 % is reached by the suggested model compared to baseline models. Raju et al. [46] introduced an innovative 3D-CNN model for AD classification. The hidden layers of the CNN are utilized for feature extraction and an SVM-based classifier is incorporated for classification. An accuracy of 97.7 % is achieved by the proposed custom 3D-CNN model. Raza et al. [47] proposed a transfer learning approach for AD classification through neuroimaging analysis. A gray matter (GM)-based segmentation technique is utilized for performing image segmentation on MRIs of the ADNI database. An accuracy of 97.84 % is achieved by the recommended study after 50 iterations. Illakiya et al. [48] suggested an adaptive hybrid attention-based ‘AHANet’ model for AD classification. The attention mechanism is incorporated for enhanced information focus. The classification accuracy achieved by the proposed model is around 98.5 %. Similarly, Bhuvaneswari et al. [49] exploited principal component analysis (PCA) for better extraction of the most prominent features within MRIs. By utilizing PCA, the proposed model shows enhanced performance, accurately classifying AD with a classification accuracy of 98.7 %.

Recently, various segmentation-driven approaches have been introduced to identify certain regions of MRI such as the cortical, hippocampal, and ventricle regions. These regions are mainly identified through segmentation techniques to gain valuable insights about the structural variations that occur in the brain due to the progression of AD. In Ref. [50], the authors segmented the hippocampal regions of the human brain to understand the progression of AD. Similarly, in Ref. [51], a hybrid deep architecture is developed to initially segment the hippocampal region from input MRIs and then predict the chances of AD on the basis of the segmented MRIs. Furthermore, in Ref. [52], the authors designed a fuzzy C-means clustering strategy to segment the brain tissues, which provides valuable information about the structural atrophy linked with Alzheimer's disease. Machine learning models have also been constructed to categorize other diseases, such as skin cancer [53], lung cancer [54], colorectal cancer [55], and cataract and glaucoma [56].

Fractional calculus-based [57] solutions have gained significant attention due to their strong mathematical foundations. Fractional calculus concepts have been exploited for solving problems in various domains including disease modeling [[58], [59], [60]], corruption dynamics [61], power systems [62], the medical field [63], epidemic systems [64], control systems [65], heat transfer analysis [66], boundary problems [67,68], nonlinear systems [69], and biological systems [70]. To achieve better convergence speed and estimation accuracy, researchers have introduced fractional gradient-based optimizers for solving different problems, such as communication [71], parameter estimation [72,73], output error model identification [74], neural networks [[75], [76], [77]], fuzzy functions [78], Laplace transforms [79], and recommender systems [[13], [14], [15]]. Recently, fractional order-based implementations of standard optimizers such as FSGD and FADAM for solving deep neural network problems were provided in Ref. [80] using the PyTorch framework. Moreover, a generalized fractional variant of standard stochastic gradient descent (GFSGD) is proposed in Ref. [16] to provide a computationally inexpensive solution with improved convergence speed and accuracy. The proposed GFSGD is further explored for efficient matrix factorization in recommender systems [81]. The improved performance of GFSGD in terms of speed and accuracy motivates us to investigate the potential of a fractional gradient-based optimizer for AD diagnosis classification.

1.2. Our contributions

The key contributions of the proposed study are as follows:

  • A simplified CNN architecture for accurate and efficient classification of AD is proposed. The proposed approach is novel in terms of architecture, with an effective pooling technique and an improved attention mechanism for diverse feature extraction.

  • A TensorFlow framework-based generalized version of fractional SGD (GFSGD) is developed to enhance the classification speed.

  • Explainable AI powered CNN classifier is designed for interpretable AD diagnosis classification.

  • The proposed Fr-CNN model outclasses complex benchmark models in terms of evaluation metrics on standard ADNI dataset.

1.3. Paper organization

The contents of the paper are organized in the following manner: the architecture of the proposed Fr-CNN model is described in Section 2. Afterwards, the critical analysis of the results and comparison with benchmark models is performed in Section 3. The findings of the study are elaborated in Section 4. Finally, the conclusions are presented in Section 5 along with potential future directions.

2. Architecture of the proposed model

The capabilities of a CNN are utilized to accurately classify AD diagnosis by analyzing given MRIs. An attention mechanism is incorporated within the proposed Fr-CNN architecture to enhance the performance of the model. The attention module comprises a single convolutional filter of size 1∗1 applied to an input tensor, with a sigmoid activation function. The outcome of this operation is multiplied with the original input tensor in the feature mapping process. The multiplied result is the final output of the attention mechanism, which is forwarded to the next layer in the architecture. The general architectural diagram of the proposed Fr-CNN model is presented in Fig. 5. The architecture comprises 2 convolutional, attention, and mixed pooling layers. The first conv layer with a 200∗3∗3 filter size and the second with a 100∗3∗3 filter size are used with regular CNN computations. An effective mixed pooling technique with a 2∗2 pool size is incorporated in the architecture for prominent feature extraction and dimensionality reduction. The model utilizes ReLU as the activation function. The attention module is placed between the convolutional and pooling layers for better information focus, yielding a sandwich-like structure among these hidden layers. Furthermore, the proposed architecture incorporates 3 dense layers, which are influential in learning the complex hidden patterns within human MRIs from the training dataset, resulting in significant performance by the proposed Fr-CNN model on unseen images. A softmax activation function is incorporated in the last fully-connected layer for effective probability-based classification. The overall architectural parameters of the proposed Fr-CNN model are listed in Table 1.

Fig. 5.


General architecture diagram of proposed Fr-CNN.

Table 1.

Architecture parameters of proposed Fr-CNN model.

Layer (type) Output Shape Parameters
Conv2D (None,126,126,200) 5600
Attention (None,126,126,200) 201
MixedPooling (None,63,63,400) 0
Conv2D-1 (None,61,61,100) 360100
Attention-1 (None,61,61,100) 101
MixedPooling-1 (None,30,30,200) 0
Flatten (None,180000) 0
Dense (None,100) 18000100
Dense-1 (None,50) 5050
Dense-2 (None,3) 153
Total params: 18371305 (70.08 MB)
Trainable params: 18371305 (70.08 MB)
Non-trainable params: 0 (0.00 Byte)
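As a sanity check on Table 1, the listed parameter counts can be reproduced from the standard Conv2D and Dense parameter formulas. This is a sketch: the 128∗128 3-channel input shape is inferred from the table's output shapes, not stated explicitly in this section.

```python
def conv2d_params(kh, kw, c_in, c_out):
    # kernel weights plus one bias per output channel
    return kh * kw * c_in * c_out + c_out

def dense_params(n_in, n_out):
    # weight matrix plus one bias per output unit
    return n_in * n_out + n_out

# Shapes taken from Table 1; the 3-channel input is inferred.
params = {
    "Conv2D":      conv2d_params(3, 3, 3, 200),       # -> (126,126,200)
    "Attention":   conv2d_params(1, 1, 200, 1),       # 1x1 conv + bias
    "Conv2D-1":    conv2d_params(3, 3, 400, 100),     # pooled map has 400 channels
    "Attention-1": conv2d_params(1, 1, 100, 1),
    "Dense":       dense_params(30 * 30 * 200, 100),  # flatten of (30,30,200)
    "Dense-1":     dense_params(100, 50),
    "Dense-2":     dense_params(50, 3),
}
total = sum(params.values())
print(total)
```

Each value matches the corresponding row of Table 1, and the sum reproduces the reported total of 18,371,305 trainable parameters.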

2.1. Unexploited pooling method

The utilized mixed pooling combines average and max pooling techniques. The diverse nature of the customized mixed pooling method offers essential benefits in image classification tasks:

  • Diverse information extraction: Combining multiple pooling methods results in extracting the most prominent information with a smooth representation of the desired region.

  • Overcoming overfitting: The mixture of operations serves as a regularization technique within the neural architecture, which prevents the proposed model from overfitting.
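A minimal NumPy sketch of mixed pooling over 2∗2 windows follows. The mixing coefficient `lam` is an assumption for illustration: a common formulation blends max pooling (`lam = 1`) and average pooling (`lam = 0`), and this section does not fix its value.

```python
import numpy as np

def mixed_pool2x2(x, lam=0.5):
    """Mixed pooling over non-overlapping 2x2 windows.

    lam blends max pooling (lam=1) and average pooling (lam=0);
    the 0.5 default is illustrative only.
    x has shape (H, W, C) with even H and W.
    """
    h, w, c = x.shape
    win = x.reshape(h // 2, 2, w // 2, 2, c)
    mx = win.max(axis=(1, 3))    # max over each 2x2 window
    avg = win.mean(axis=(1, 3))  # average over each 2x2 window
    return lam * mx + (1 - lam) * avg

x = np.arange(16, dtype=float).reshape(4, 4, 1)
out = mixed_pool2x2(x)
print(out.shape)  # (2, 2, 1)
```

For the top-left window (values 0, 1, 4, 5) the result is 0.5·5 + 0.5·2.5 = 3.75, halfway between the max and the average, which illustrates the smoothing effect described above.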

2.2. Attention module

Transformer architectures are popular for their self-attention mechanisms. In this study, a spatial attention mechanism is implemented using a convolution followed by a sigmoid activation function. Given an input feature map X of shape (H, W, C), where H and W are the height and width and C is the number of channels, the attention map A is formulated as A = σ(W ∗ X + b), where W represents a convolutional kernel of shape (1, 1, C, 1), b is the bias term, σ denotes the sigmoid activation function, and ∗ is the convolution operator. The resulting attention map A has shape (H, W, 1), where each element a_ij ∈ [0, 1]. The attention map A is multiplied elementwise with the original input feature map X, resulting in the attended feature map X′ = X ⊙ A. This way, the important regions are emphasized (a_ij closer to 1) and the less important regions are suppressed (a_ij closer to 0). This attention mechanism is incorporated within the simplified CNN to boost the performance of the proposed model.

  • Enhanced information focus: Attention layers allow the model to focus on the prominent features region within the input image.

  • Increased robustness and adaptability: It permits the model to disregard the unrelated share of the input image, which may enhance model's robustness. Besides, attention weights aid a degree of adaptability, making it easier to distinguish which region of the input the model deems significant for a given prediction.
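The spatial attention described above can be sketched in a few lines of NumPy. Since a 1∗1 convolution over channels is just a per-pixel dot product, it reduces to a tensordot along the channel axis; shapes and names below are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def spatial_attention(x, w, b):
    """A = sigmoid(w * x + b) with a 1x1xCx1 kernel, then X' = X (.) A.

    x: (H, W, C) feature map, w: (C,) kernel weights, b: scalar bias.
    Returns the attended map and the attention map itself.
    """
    a = sigmoid(np.tensordot(x, w, axes=([2], [0])) + b)  # (H, W), values in (0,1)
    return x * a[..., None], a

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 4, 8))
w = rng.normal(size=8)
xa, a = spatial_attention(x, w, 0.0)
print(xa.shape)  # same shape as x: (4, 4, 8)
```

Pixels where the attention map is near 1 pass through almost unchanged, while pixels near 0 are suppressed, matching the emphasize/suppress behavior described above.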

2.3. Explainable AI

An Explainable AI (XAI)-driven Local Interpretable Model-Agnostic Explanations (LIME) based method is exploited in the study. LIME provides transparency to the model, which improves the model's analytical capabilities, and explains the model's decision-making procedure. It focuses on the local behavior of the model to produce human-understandable explanations for any specific prediction by highlighting the region with the most prominent features and portraying their contributions to the prediction. LIME first chooses the instance for which an explanation is needed; in our case, this is the desired image from the test dataset. It then generates a perturbed dataset by changing the feature values of the given instance, with each perturbed sample initially carrying the same label as the original instance. Next, the perturbed dataset and the original instance image are fed to the proposed model for prediction, and the labels of the perturbed dataset are updated with the labels predicted by the proposed model. Furthermore, LIME fits a simple interpretable surrogate, such as a linear model or a decision tree. The updated perturbed dataset, excluding the original instance image, is used to train this local model. After training, the original instance is fed to the local model for prediction. LIME investigates the local behavior of the model and the coefficient parameters to figure out the prominent feature regions within the input image, and graphically represents the positive feature contributions inside the image. By exploiting LIME-based XAI, the proposed model generates interpretable and explainable predictions on any test image. The graphical working process of LIME is shown in Fig. 6.

Fig. 6.


Graphical abstract of LIME.
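The loop described above (perturb, relabel with the black-box model, fit a weighted interpretable surrogate, read off feature contributions) can be sketched end-to-end with NumPy. This is not the `lime` library or the authors' pipeline; the `black_box` classifier below is a hypothetical stand-in for Fr-CNN, with tabular features standing in for image superpixels.

```python
import numpy as np

rng = np.random.default_rng(1)

def black_box(x):
    """Stand-in classifier: only feature 0 drives the prediction."""
    return (x[:, 0] > 0.5).astype(float)

instance = np.array([0.9, 0.1, 0.4])  # the instance to be explained

# 1) Perturb the instance by randomly switching features on/off.
masks = rng.integers(0, 2, size=(500, instance.size))
perturbed = masks * instance

# 2) Relabel the perturbations with the black-box model.
y = black_box(perturbed)

# 3) Weight samples by proximity to the original instance (all-ones mask).
dist = np.linalg.norm(masks - 1, axis=1)
weights = np.exp(-dist ** 2)

# 4) Fit a weighted linear surrogate (ridge via normal equations).
X = masks.astype(float)
W = np.diag(weights)
coef = np.linalg.solve(X.T @ W @ X + 1e-3 * np.eye(X.shape[1]),
                       X.T @ W @ y)
print(coef)  # the largest coefficient marks the most influential feature
```

Because the stand-in classifier depends only on feature 0, the surrogate's first coefficient dominates, which is exactly the "prominent feature" signal LIME visualizes for an image.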

2.4. Generalized Fractional Stochastic Gradient Descent (GFSGD)

This section delves into the Generalized Fractional Stochastic Gradient Descent (GFSGD) optimization algorithm. A rigorous mathematical derivation of the update rule for GFSGD is presented, along with TensorFlow-like pseudocode tailored for practical implementation.

2.4.1. Mathematical foundations behind the GFSGD method

We know that the update rule to find the minimum of a function g(·) using the well-known gradient descent method is as follows:

x_{e+1} = x_e − ρ ∇g(x_e) (1)

where:

  • x_e and x_{e+1} represent the parameters of the function g(·) at the current epoch e and the next epoch e+1, respectively.

  • ρ is the learning rate, where ρ > 0 determines the step size at each iteration.

  • ∇g(x_e) is the gradient of the function g(·) with respect to x at epoch e, evaluated at the current parameter value x_e.

The main intuition behind GFSGD is to replace the conventional derivative with a fractional derivative, with its characteristic memory. Fractional calculus extends the gradient to a non-integer domain, allowing for the integration of past system states into current evaluations. Substituting the fractional gradient into (1), the update rule becomes

x_{e+1} = x_e − ρ ∇^α g(x_e) (2)

where ∇^α g(x_e) is the gradient of the function with fractional order α. We use the Caputo definition of the fractional derivative for the derivation of the update rule for the GFSGD method, which is given as

{}^C_a D^α_x g(x) = (1/Γ(n−α)) ∫_a^x (x−τ)^{n−α−1} g^{(n)}(τ) dτ (3)

where n−1 < α < n, n ∈ ℕ⁺, a represents the lower bound of the integral, and x is the current position for the continuous function g(·). The expression Γ(n−α) in (3) denotes the Gamma function, whose integral representation is Γ(n−α) = ∫_0^∞ s^{(n−α)−1} e^{−s} ds, for Re(n−α) > 0. For the discrete case, (3) becomes:

{}^C_a D^α_{x_e} g(x_e) = (1/Γ(n−α)) ∫_a^{x_e} (x_e−τ)^{n−α−1} g^{(n)}(τ) dτ (4)

Taylor series expansion of the function g(·) at x_e can be used to rewrite the equation, as in equation (3) of Ref. [82]:

{}^C_a D^α_{x_e} g(x_e)
  = (1/Γ(n−α)) ∫_a^{x_e} (x_e − τ)^{n−α−1} g^{(n)}(τ) dτ
  = (1/Γ(n−α)) ∫_a^{x_e} Σ_{j=0}^{∞} [(−1)^j g^{(j+n)}(x_e) / Γ(j+1)] (x_e − τ)^{n+j−α−1} dτ   (Taylor expansion at x_e)
  = (1/Γ(n−α)) Σ_{j=0}^{∞} [(−1)^j g^{(j+n)}(x_e) / Γ(j+1)] ∫_a^{x_e} (x_e − τ)^{n+j−α−1} dτ   (interchanging ∫ and Σ)
  = (1/Γ(n−α)) Σ_{j=0}^{∞} [(−1)^j g^{(j+n)}(x_e) / Γ(j+1)] · (x_e − a)^{n+j−α} / (n+j−α)   (solving the integral)
  = Σ_{j=n}^{∞} [(−1)^{j−n} g^{(j)}(x_e) / (Γ(n−α) Γ(j−n+1) (j−α))] (x_e − a)^{j−α}   (re-indexing the sum from j = 0 to j = n)
  = Σ_{j=n}^{∞} (α−n choose j−n) [g^{(j)}(x_e) / Γ(j−α+1)] (x_e − a)^{j−α}   (simplifying using the binomial coefficient) (5)

where n−1 < α < n, n ∈ ℕ⁺, and the binomial coefficient is (a choose b) = Γ(a+1) / (Γ(b+1) Γ(a−b+1)). It is intuitive to use the short memory principle (SMP) by replacing the lower bound a with x_{e−E}, where E is the fixed memory length [83,84]. SMP is employed to reduce the computational burden and emphasize recent information. The adoption of fixed memory E = 1, as supported by findings in Ref. [85], involves using only the immediately previous change in the function. This approach further aids in preventing overshoot and curtails computational demands. The calculations in equation (5) can be repeated with the different lower bound x_{e−1}; the resultant takes the form:

${}_{x_{e-1}}^{C}D_{x_e}^{\alpha}\, g(x_e) = \sum_{j=n}^{+\infty} \binom{\alpha-n}{j-n}\, \frac{g^{(j)}(x_e)}{\Gamma(j-\alpha+1)}\, (x_e - x_{e-1})^{j-\alpha}.$ (6)

For an optimization problem, the solution is effectively found once the first derivative $g^{(1)}(x_e)$ becomes zero, so the higher-order terms in the summation can be truncated. Keeping only the first-order term ($j = 1$), we get

${}_{x_{e-1}}^{C}D_{x_e}^{\alpha}\, g(x_e) = \frac{g^{(1)}(x_e)}{\Gamma(2-\alpha)}\, (x_e - x_{e-1})^{1-\alpha}$ (7)

where $0 < \alpha < 1$. To prevent a complex-valued result, we take the absolute value of the difference between the current and previous parameters, so that $|x_e - x_{e-1}| \geq 0$ always holds. This modification effectively extends the range of $\alpha$ to $0 < \alpha < 2$. We also introduce a small term $\epsilon = 10^{-8}$ inside the power to prevent a zero denominator in the case $x_e = x_{e-1}$:

${}_{x_{e-1}}^{C}D_{x_e}^{\alpha}\, g(x_e) = \frac{g^{(1)}(x_e)}{\Gamma(2-\alpha)}\, \left(|x_e - x_{e-1}| + \epsilon\right)^{1-\alpha}.$ (8)

Substituting (8) into (2), we obtain the update rule of GFSGD:

$x_{e+1} = x_e - \rho\, \frac{g^{(1)}(x_e)}{\Gamma(2-\alpha)}\, \left(|x_e - x_{e-1}| + \epsilon\right)^{1-\alpha}$ (9)

The GFSGD update rule is a merge of Theorem 1 and Theorem 2 from Ref. [67]: it utilizes the fixed memory step along with higher-order truncation. Observe also that for $\alpha = 1$, equation (9) reduces to the standard gradient descent method:

$x_{e+1} = x_e - \rho\, g^{(1)}(x_e)$ (10)
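As a concrete illustration of the update rule (9), the following minimal sketch minimizes a toy one-dimensional quadratic (the function, iteration count, and initial displacement are illustrative and not from the study):

```python
import math

def gfsgd_step(x, x_prev, grad, rho=0.001, alpha=1.5, eps=1e-8):
    """One GFSGD update per Eq. (9): the raw gradient is scaled by
    (|x_e - x_{e-1}| + eps)^(1 - alpha) / Gamma(2 - alpha)."""
    scale = (abs(x - x_prev) + eps) ** (1.0 - alpha) / math.gamma(2.0 - alpha)
    return x - rho * grad * scale

# Minimize g(x) = x^2 (gradient 2x) starting from x = 1; a tiny initial
# displacement seeds the one-step memory term of the short memory principle.
x_prev, x = 1.0, 0.999
for _ in range(2000):
    x_prev, x = x, gfsgd_step(x, x_prev, 2.0 * x)
# x is now close to the minimizer 0.
```

Note that for `alpha=1.0` the scale factor is exactly 1 (the power term is raised to 0 and $\Gamma(1)=1$), so the step reduces to the plain SGD update of equation (10).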

2.4.2. Pseudocode for GFSGD

For the TensorFlow implementation of GFSGD, we mainly modify the update_step method of the standard SGD optimizer, expressed as TensorFlow-like pseudocode as follows:

Image 1
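The pseudocode image above may not reproduce in every rendering. As a framework-free sketch of the same idea (class and attribute names are illustrative, not the study's exact code), each parameter keeps a slot holding its previous value, which supplies the one-step memory term:

```python
import math
import numpy as np

class GFSGD:
    """Minimal GFSGD optimizer sketch: a standard SGD step whose gradient is
    rescaled element-wise by (|w - w_prev| + eps)^(1 - alpha) / Gamma(2 - alpha)."""

    def __init__(self, lr=0.001, alpha=1.5, eps=1e-8):
        self.lr, self.alpha, self.eps = lr, alpha, eps
        self.inv_gamma = 1.0 / math.gamma(2.0 - alpha)  # constant, precomputed once
        self.prev = {}  # slot variables: previous value of each parameter

    def update_step(self, grad, var, var_id):
        # First call: initialise the slot with the current value, so the memory
        # term starts from zero displacement (eps guards the power's base).
        prev = self.prev.setdefault(var_id, var.copy())
        scale = (np.abs(var - prev) + self.eps) ** (1.0 - self.alpha) * self.inv_gamma
        self.prev[var_id] = var.copy()
        return var - self.lr * grad * scale
```

In an actual tf.keras optimizer subclass, the previous-value slot would typically be created via the optimizer's variable-creation hooks and updated with assign, but the arithmetic of the step is the same.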

3. Results and discussion

This section comprises the database description, a critical analysis of the proposed Fr-CNN model in terms of results, and a performance comparison with benchmark models.

3.1. Dataset description

The study utilizes the ADNI [86] database, a standard benchmark for Alzheimer's disease diagnosis tasks. The database comprises three classes: AD (patients with Alzheimer's disease), CI (patients with mild cognitive impairment), and CN (cognitively normal individuals), with 5154 MRIs in total. Fig. 7 shows the distribution of MRIs across class labels. The database initially has an imbalanced class distribution, which is not preferred for classification tasks. To obtain a uniform number of MRIs for all class labels, data augmentation techniques (random horizontal and vertical flips) are exploited to oversample the under-represented classes; afterwards, 3002 samples per class are retained for the remainder of the study. Subsequently, pixel normalization is applied to scale pixel values into the range [0,1]. The balanced distribution of each class is shown in Fig. 8. Sample MRI scans of each class in the ADNI dataset are given in Fig. 9 to offer insight into the distinct visual attributes of each class and the quality and resolution of the scans. The MRIs in Fig. 9 illustrate the subtle characteristics the model must learn in order to classify the different stages of AD accurately; the overlapping appearance of the scans across class labels underscores the capability required of the proposed approach to classify the progression of Alzheimer's disease accurately and efficiently. Overall, ADNI is among the best available datasets for research on AD.
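The oversampling and normalization steps described above can be sketched as follows (a framework-free illustration; array shapes and function names are illustrative, not the study's exact pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_flip(image):
    """Randomly apply a horizontal and/or vertical flip, the augmentations
    used to oversample the minority classes."""
    if rng.random() < 0.5:
        image = image[:, ::-1]          # horizontal flip
    if rng.random() < 0.5:
        image = image[::-1, :]          # vertical flip
    return image

def oversample_to(images, target):
    """Append randomly flipped copies until the class holds `target` samples."""
    images = list(images)
    while len(images) < target:
        images.append(augment_flip(images[rng.integers(len(images))]))
    return np.stack(images)

def normalize(images):
    """Scale 8-bit pixel intensities into [0, 1]."""
    return images.astype(np.float32) / 255.0
```

In the study, each class is brought up to 3002 samples in this fashion before normalization.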

Fig. 7.


Imbalanced class distribution in ADNI dataset.

Fig. 8.


Balanced distribution of classes for the AD classification task.

Fig. 9.


Sample neuroimages from the standard ADNI database.

3.2. Evaluation measures

The performance of the proposed Fr-CNN model is validated through the diverse evaluation measures given in Table 2, computed from true positives (Tr_Ps), true negatives (Tr_Ng), false positives (F_Ps), and false negatives (F_Ns).

Table 2.

Evaluation metrics.

Accuracy (A) = (Tr_Ps + Tr_Ng) / (Tr_Ps + Tr_Ng + F_Ng + F_Ps)
Precision (P) = Tr_Ps / (Tr_Ps + F_Ps)
Recall (R) = Tr_Ps / (Tr_Ps + F_Ng)
F1-Score (F_S) = (2 × P × R) / (P + R)
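The measures of Table 2 can be computed directly from the four confusion counts; the counts in the usage line below are hypothetical, purely for illustration:

```python
def metrics(tr_ps, tr_ng, f_ps, f_ng):
    """Evaluation measures of Table 2 from true positives, true negatives,
    false positives, and false negatives (one-vs-rest for a single class)."""
    accuracy = (tr_ps + tr_ng) / (tr_ps + tr_ng + f_ng + f_ps)
    precision = tr_ps / (tr_ps + f_ps)
    recall = tr_ps / (tr_ps + f_ng)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts for one class of a one-vs-rest evaluation:
a, p, r, f = metrics(tr_ps=90, tr_ng=95, f_ps=10, f_ng=5)
```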

3.3. Critical analysis of the proposed model

The proposed Fr-CNN model is trained on the ADNI dataset with an 85 %/15 % split for the training and testing sets, respectively. Additionally, 15 % of the training set is reserved as a validation set. Both splits are stratified to maintain a consistent class distribution across the three sets. By exploiting an improved attention mechanism and a more effective mixed pooling technique in the architecture, enhanced feature extraction is accomplished, which leads to the substantial performance of the proposed model. To keep the model computationally inexpensive, the fast GFSGD optimizer is incorporated into the study with different alpha values, in order to analyze the convergence speed and trends of the proposed model. Initially, the Fr-CNN model is executed with standard stochastic gradient descent (SGD) at a learning rate of 0.001 for 35 epochs. The convergence speed is very slow: a maximum validation accuracy of 0.68 and a minimum validation loss of 0.78 are recorded during the execution. Afterwards, the study incorporates the Generalized Fractional Stochastic Gradient Descent (GFSGD) optimizer for better convergence speed and improved generalization. The alpha parameter of GFSGD ranges from 0 to 2; with low alpha values the convergence speed is not impressive, and GFSGD performs best for alpha between 0.8 and 1.5. The Fr-CNN model is first implemented with GFSGD at alpha = 0.8 and a learning rate of 0.001, achieving a best accuracy of 0.52 and a loss of 1.05 after 35 epochs; with this alpha value, the results are even worse than standard SGD at the same learning rate and number of epochs. The proposed model is then executed with GFSGD at alpha values of 0.9 and 1.0. It achieves a maximum accuracy of 0.58 and a loss of 0.91 after 35 epochs at alpha = 0.9, and an accuracy of 0.80 with a loss of 0.53 at alpha = 1.0, the best results of the study up to this point. It is observed that as alpha gradually increases, the proposed model with GFSGD improves its convergence speed and attains good results by reaching a minimum quickly, resulting in a computationally effective solution. Seeing the improved results, the proposed model is further executed with GFSGD at alpha values of 1.1 and 1.2. After the same 35 epochs at a learning rate of 0.001, the model achieves an accuracy of 0.95 for both alpha values, with minimum losses of 0.18 and 0.15, respectively. The convergence speed is boosted markedly once alpha exceeds 1.0. To make the proposed Fr-CNN model computationally inexpensive while further improving results, the study then tested higher alpha values for GFSGD.
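The stratified 85/15 split with a further validation hold-out can be sketched as follows (a minimal NumPy implementation; in practice a library routine such as scikit-learn's stratified splitting offers the same behaviour):

```python
import numpy as np

def stratified_split(labels, test_frac, seed=0):
    """Split indices so that every class contributes the same fraction to the
    held-out set, keeping class distributions consistent across the sets."""
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        n_test = int(round(test_frac * len(idx)))
        test_idx.extend(idx[:n_test])
        train_idx.extend(idx[n_test:])
    return np.array(train_idx), np.array(test_idx)

# Toy stand-in for the three class labels: 85/15 train/test,
# then 15 % of the training part held out as validation.
y = np.repeat([0, 1, 2], 100)
train_idx, test_idx = stratified_split(y, 0.15)
rel_train, rel_val = stratified_split(y[train_idx], 0.15)
val_idx, train_idx = train_idx[rel_val], train_idx[rel_train]
```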

With GFSGD at alpha = 1.3, the model achieves a maximum validation accuracy of 0.98 and a minimum validation loss of 0.09 after only 32 epochs. At alpha = 1.4, it attains a maximum validation accuracy of 0.98 and a minimum validation loss of 0.07 after 35 epochs. The improved convergence at alpha values of 1.3 and 1.4 motivated a deeper exploration of GFSGD with higher alpha values in search of optimal results. The proposed Fr-CNN model achieves its optimal accuracy of 0.99 and a loss of 0.06 after 35 epochs when executed with GFSGD at alpha = 1.5 and a learning rate of 0.001. At alpha = 1.6, however, the convergence speed gradually decreases, yielding worse results than alpha values of 1.5 and 1.4: the model attains an accuracy of 0.92 and a loss of 0.25. The analysis indicates that for alpha above 1.5, the convergence speed of GFSGD drops, degrading the performance of the model. After comprehensive testing of the Fr-CNN model with different alpha values and learning-rate variations, the best-fit model for the given task uses alpha = 1.5 with a learning rate of 0.001. The accuracy and loss trends, and with them the model's predictive capabilities, change with each alpha variation, as demonstrated in Fig. 10, Fig. 11, Fig. 12, Fig. 13, Fig. 14, Fig. 15, Fig. 16, Fig. 17, Fig. 18, Fig. 19, which present the accuracy curves, loss curves, and confusion matrix of the proposed Fr-CNN model for each alpha variation. The evaluation metrics for AD classification with standard SGD and with each alpha variation of GFSGD are listed in Table 3.
After analyzing the graph trends, evaluation metrics, and confusion matrices, it is concluded that with alpha = 1.5 in GFSGD, the performance and convergence speed of the proposed Fr-CNN model outclass standard SGD and produce exceptional results in the context of AD classification.

Fig. 10.


Accuracy and Loss graphs, and Confusion matrix of the proposed Fr-CNN model with SGD.

Fig. 11.


Accuracy and Loss graphs, and Confusion matrix of the proposed Fr-CNN model with GFSGD (alpha = 0.8).

Fig. 12.


Accuracy and Loss graphs, and Confusion matrix of the proposed Fr-CNN model with GFSGD (alpha = 0.9).

Fig. 13.


Accuracy and Loss graphs, and Confusion matrix of the proposed Fr-CNN model with GFSGD (alpha = 1.0).

Fig. 14.


Accuracy and Loss graphs, and Confusion matrix of the proposed Fr-CNN model with GFSGD (alpha = 1.1).

Fig. 15.


Accuracy and Loss graphs, and Confusion matrix of the proposed Fr-CNN model with GFSGD (alpha = 1.2).

Fig. 16.


Accuracy and Loss graphs, and Confusion matrix of the proposed Fr-CNN model with GFSGD (alpha = 1.3).

Fig. 17.


Accuracy and Loss graphs, and Confusion matrix of the proposed Fr-CNN model with GFSGD (alpha = 1.4).

Fig. 18.


Accuracy and Loss graphs, and Confusion matrix of the proposed Fr-CNN model with GFSGD (alpha = 1.5).

Fig. 19.


Accuracy and Loss graphs, and Confusion matrix of the proposed Fr-CNN model with GFSGD (alpha = 1.6).

Table 3.

Comparison of the Proposed Model with standard SGD and variations in alpha rates for GFSGD in terms of Evaluation Metrics.

Optimizer Variations Class Labels Precision Recall F1_score Accuracy
Standard SGD AD 0.98 0.18 0.30 0.55
CI 0.46 0.95 0.62
CN 0.68 0.51 0.59
GFSGD (alpha = 0.8) AD 0.45 0.58 0.51 0.51
CI 0.66 0.43 0.52
CN 0.50 0.53 0.52
GFSGD (alpha = 0.9) AD 0.53 0.72 0.61 0.58
CI 0.74 0.41 0.53
CN 0.55 0.59 0.57
GFSGD (alpha = 1.0) AD 0.78 0.84 0.81 0.81
CI 0.91 0.76 0.83
CN 0.77 0.84 0.80
GFSGD (alpha = 1.1) AD 0.92 0.98 0.95 0.95
CI 1.00 0.91 0.95
CN 0.94 0.96 0.95
GFSGD (alpha = 1.2) AD 0.94 0.95 0.95 0.94
CI 0.91 0.95 0.93
CN 0.96 0.91 0.93
GFSGD (alpha = 1.3) AD 0.93 0.99 0.96 0.97
CI 0.98 0.97 0.97
CN 0.99 0.94 0.96
GFSGD (alpha = 1.4) AD 0.97 1.00 0.98 0.98
CI 0.99 0.97 0.98
CN 0.99 0.98 0.99
GFSGD (alpha = 1.5) AD 0.99 1.00 0.99 0.99
CI 1.00 0.98 0.99
CN 0.99 0.99 0.99
GFSGD (alpha = 1.6) AD 0.89 0.95 0.92 0.91
CI 0.92 0.87 0.89
CN 0.92 0.91 0.91

3.4. Computational efficiency of the proposed Fr-CNN model

The proposed Fr-CNN model provides a good balance between accuracy and computational cost. The computational efficiency of the proposed model relative to other benchmark models is compared through the following metrics:

  • Architectural Parameter(AP):

The AP for the convolutional layers is generally computed as:

$AP = (k_s \times C_i + 1) \times C_o$ (11)

where $k_s$ is the size of the kernel, and $C_i$ and $C_o$ refer to the input and output channels. The AP for dense layers can be expressed as:

$AP = (N_i + 1) \times N_o$ (12)

where $N_i$ and $N_o$ are the numbers of input and output features.

  • Model Size (MS):

The general formula for computing the size of the DL-based models is:

$MS = \text{number of AP} \times \text{size of each AP}$ (13)
  • Floating Point Operations (FLOPs):

Similar to the architectural parameters, the FLOPs for convolutional and dense layers are computed separately and then summed for the final value. The FLOPs for convolutional layers are computed as follows:

$FLOPs = 2 \times H_o \times W_o \times C_o \times (k_s \times C_i)$ (14)

where $H_o$ and $W_o$ are the height and width of the output. The FLOPs for dense layers are computed separately through the following formula:

$FLOPs = 2 \times N_i \times N_o$ (15)
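Equations (11)-(15) can be checked with small helpers (a sketch; float32 storage of 4 bytes per parameter is assumed, and the Fr-CNN parameter count from Table 4 is used only as a consistency check):

```python
def conv_stats(ks, c_in, c_out, h_out, w_out):
    """AP (Eq. 11) and FLOPs (Eq. 14) of one convolutional layer;
    ks is the kernel area, e.g. 3 * 3 = 9."""
    ap = (ks * c_in + 1) * c_out
    flops = 2 * h_out * w_out * c_out * (ks * c_in)
    return ap, flops

def dense_stats(n_in, n_out):
    """AP (Eq. 12) and FLOPs (Eq. 15) of one dense layer."""
    return (n_in + 1) * n_out, 2 * n_in * n_out

def model_size_mb(total_ap, bytes_per_param=4):
    """Eq. (13): model size = parameter count x per-parameter storage,
    reported here in MB (float32 = 4 bytes per parameter)."""
    return total_ap * bytes_per_param / (1024 ** 2)

# Consistency check against Table 4: 18,371,305 float32 parameters
# correspond to roughly 70.08 MB.
```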

The computational efficiency of the proposed Fr-CNN model and other benchmark models is verified through the above evaluation metrics. The performance comparison of the Fr-CNN model over state-of-the-art models in terms of computational efficiency is given in Table 4.

Table 4.

Computational efficiency of the proposed Fr-CNN model over benchmark models.

Source Method Epochs AP MS (MB) FLOPs
Basheera et al. [31] Custom 2D-CNN 250 134,268,738 512.2 39,400,000,000
Ben Ahmed et al. [32] Hippocampal Visual Feature Mechanism 200 31,586,547 126 10,100,000,000
Savas et al. [36] EfficientNetB3 10 61,100,840 244.4 19,000,000,000
Jain et al. [38] VGG16 145,000,000 200 1,600,000,000
Peng et al. [39] Kernel Learning 50 138,000,000 528 15,500,000,000
Shanmugam et al. [41] Transfer Learning via GoogleNet 42,000,000 168 24,000,000,000
Illakiya et al. [48] AHANet 182,071,364 668 17,000,000,000
Maganti et al. [21] ADASYN + TL 100 39,134,696 530.76 11,520,000,000
Sorour et al. [22] CNN + LSTM 41,580,858 165 22,000,000,000
Mehmood et al. [23] Siamese 4D-AlzNet 100 248,780,802 936 15,760,000,000
Hajamohideen et al. [24] Siamese CNN with triplet loss 250 138,000,000 528 15,500,000,000
Carcagnì et al. [43] DeiT 100 278,380,000 1126.4 37,970,000,000
Li et al. [44] Ensemble Learning 150 25,200,000 100 1,900,000,000
Our Proposed Fr-CNN 35 18,371,305 70.08 184,000,000

3.5. Best-fit model and comparison with benchmark models

After tuning the essential hyperparameters, the proposed model delivers substantial performance in terms of evaluation metrics with the novel GFSGD optimizer at alpha = 1.5 and a learning rate of 0.001. By exploiting the previously unexplored GFSGD for the Alzheimer's disease classification task, the study provides better results with a computationally cost-effective solution. The improved attention mechanism and effective mixed pooling strategy in the proposed CNN architecture efficiently extract prominent features from the MRI scans, which helps the model attain the best accuracy for the AD classification task. Table 5 shows a brief comparison of the proposed model with benchmark models using MRI images. The comparison clearly shows that the proposed model outperforms its counterparts in terms of accuracy for the classification of AD.

Table 5.

Performance comparison of Proposed Model with benchmark models using ADNI database.

Source Method Year Accuracy (%)
Duc et al. [30] 3D-Deep Learning 2020 86.2
Basheera et al. [31] Custom 2D-CNN 2020 86.7
Ahmed et al. [32] Hippocampal Visual Feature Mechanism 2015 87.0
Venugopalan et al. [33] Multimodal Custom 3D-CNN 2021 88.0
Feng et al. [34] 3D-CNN & Support Vector Machine 2020 92.1
Maqsood et al. [35] AlexNet 2019 92.8
Savas et al. [36] EfficientNetB3 2021 92.9
Choi et al. [37] Ensemble Learning 2020 93.8
Jain et al. [38] VGG16 2018 95.7
Peng et al. [39] Kernel Learning 2019 96.1
Hon et al. [40] Transfer Learning via Inception V4 2017 96.2
Shanmugam et al. [41] Transfer Learning via GoogleNet 2022 96.3
Ebrahimi et al. [42] Pre-trained ResNet-18 2021 96.8
Beheshti et al. [45] Genetic Algorithm 2017 97.4
Raju et al. [46] 3D-CNN 2020 97.7
Noman et al. [47] Transfer Learning 2023 97.8
Illakiya et al. [48] AHANet 2023 98.5
Buvaneswari et al. [49] Kernel PCA 2023 98.7
Chen et al. [20] ResRepANet 2022 89.2
Maganti et al. [21] ADASYN + TL 2023 87.0
Sorour et al. [22] CNN + LSTM 2024 98.6
Mehmood et al. [23] Siamese 4D-AlzNet 2024 95.4
Hajamohideen et al. [24] Siamese CNN with triplet loss 2023 91.8
Carcagnì et al. [43] DeiT 2023 76.0
Li et al. [44] Ensemble Learning 2023 98.6
Proposed Fr-CNN Model 99.0

4. Discussion

The proposed model shows remarkable predictive capabilities by accurately classifying unseen MRI scans of different class labels. After rigorous training and evaluation, it is observed that the proposed model is reliable in classifying MRIs among the AD, CI, and CN cases, and it serves as a noteworthy contribution to the accurate detection and classification of AD at premature stages. Because transparency is essential in the medical field, a LIME-based explainable AI (XAI) technique is incorporated into the study to make the classification of AD interpretable. LIME encircles the regions of the most prominent features on which the model bases its predictions; from the highlighted regions, one can see which areas of the MRI are important for accurately classifying patients with AD. Some of the interpretable predictions made by the proposed model on unseen MRI scans are included in the study and can be visualized in Fig. 20.
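The mechanism behind such an explanation can be illustrated with a minimal LIME-style surrogate (a conceptual sketch, not the lime package's API: superpixels are randomly masked, the model is queried, and a linear model fitted to the perturbations ranks each region's importance; the toy predict function below is purely illustrative):

```python
import numpy as np

def lime_explain(image, predict, segments, n_samples=200, seed=0):
    """Rank superpixels by fitting a linear surrogate to the model's responses
    on randomly masked copies of `image`. `segments` assigns each pixel an
    integer superpixel id; `predict` maps a batch of images to the probability
    of the class being explained."""
    rng = np.random.default_rng(seed)
    seg_ids = np.unique(segments)
    masks = rng.integers(0, 2, size=(n_samples, len(seg_ids)))  # on/off per segment
    batch = []
    for m in masks:
        img = image.copy()
        for s, keep in zip(seg_ids, m):
            if not keep:
                img[segments == s] = 0.0   # "hide" the segment
        batch.append(img)
    y = predict(np.stack(batch))
    X = np.hstack([masks, np.ones((n_samples, 1))])  # design matrix + intercept
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return dict(zip(seg_ids, w[:-1]))      # importance weight per superpixel

# Toy check: the "model" only looks at the left half of a 4x4 image,
# so the left superpixel should receive essentially all the weight.
segments = np.zeros((4, 4), dtype=int); segments[:, 2:] = 1
weights = lime_explain(np.ones((4, 4)),
                       lambda b: b[:, :, :2].mean(axis=(1, 2)), segments)
```

The highlighted regions in Fig. 20 correspond, in this picture, to the superpixels with the largest surrogate weights.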

Fig. 20.


Predictive Capabilities of Proposed Model on Unseen MRI images with Interpretable Explanation.

5. Conclusion

Raising the alpha rate above 1 in the fractional generalization of standard SGD offers a noteworthy opportunity to further tune deep learning models, and it proves to have a positive impact in this study: the proposed Fr-CNN model attains fast convergence when executed with alpha rates above unity. The proposed Fr-CNN model is a notable contribution to AD diagnosis. Compared with its counterparts on the benchmark ADNI dataset, the improved convergence and accuracy achieved by the proposed model validate the robustness of Fr-CNN over state-of-the-art models; it shows substantial performance by achieving an accuracy of 99 % for the AD classification task through neuroimaging analysis. By utilizing an explainable artificial intelligence approach, the Fr-CNN model becomes more explicable, providing insightful information about the salient features behind its predictions. The superior convergence and accuracy of the proposed model over benchmark models on a standard dataset verify the robustness and adaptability of the proposed work.

5.1. Future work

Performance can be further enhanced in the future by employing more effective optimizers and pooling methods. Other XAI techniques can be employed to further improve the interpretability of the model. This approach is a step towards applying fractional-order optimizers to computer vision-based healthcare tasks.

CRediT authorship contribution statement

Zeshan Aslam Khan: Writing – original draft, Conceptualization. Muhammad Waqar: Writing – original draft, Methodology. Naveed Ishtiaq Chaudhary: Writing – review & editing. Muhammad Junaid Ali Asif Raja: Writing – original draft, Methodology. Saadia Khan: Visualization, Validation. Farrukh Aslam Khan: Visualization, Project administration. Iqra Ishtiaq Chaudhary: Visualization, Validation. Muhammad Asif Zahoor Raja: Writing – review & editing.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgment

This work was supported by Researchers Supporting Project Number (RSPD2024R1062), King Saud University, Riyadh, Saudi Arabia.

Contributor Information

Zeshan Aslam Khan, Email: zeshank@yuntech.edu.tw.

Muhammad Waqar, Email: m11363014@yuntech.edu.tw.

Naveed Ishtiaq Chaudhary, Email: chaudni@yuntech.edu.tw.

Iqra Ishtiaq Chaudhary, Email: D21126271@mytudublin.ie.

Muhammad Asif Zahoor Raja, Email: rajamaz@yuntech.edu.tw.

References

  • 1.Uddin M.Z., Shahriar M.A., Mahamood M.N., Alnajjar F., Pramanik M.I., Ahad M.A.R. Deep learning with image-based autism spectrum disorder analysis: a systematic review. Eng. Appl. Artif. Intell. Jan. 2024;127 doi: 10.1016/J.ENGAPPAI.2023.107185. [DOI] [Google Scholar]
  • 2.Xiang Y., Li T., Ren W., Zhu T., Choo K.K.R. A lightweight privacy-preserving scheme using pixel block mixing for facial image classification in deep learning. Eng. Appl. Artif. Intell. Nov. 2023;126 doi: 10.1016/J.ENGAPPAI.2023.107180. [DOI] [Google Scholar]
  • 3.Tong K., Wu Y. Small object detection using deep feature learning and feature fusion network. Eng. Appl. Artif. Intell. Jun. 2024;132 doi: 10.1016/J.ENGAPPAI.2024.107931. [DOI] [Google Scholar]
  • 4.Yu Y., Zhang Y., Cheng Z., Song Z., Tang C. MCA: multidimensional collaborative attention in deep convolutional neural networks for image recognition. Eng. Appl. Artif. Intell. Nov. 2023;126 doi: 10.1016/J.ENGAPPAI.2023.107079. [DOI] [Google Scholar]
  • 5.Sedeh S.S., Fatemi A., Nematbakhsh M.A. Development and application of an optimal COVID-19 screening scale utilizing an interpretable machine learning algorithm. Eng. Appl. Artif. Intell. Nov. 2023;126 doi: 10.1016/J.ENGAPPAI.2023.106786. [DOI] [Google Scholar]
  • 6.Kansal S., Garg D., Upadhyay A., Mittal S., Talwar G.S. DL-AMPUT-EEG: design and development of the low-cost prosthesis for rehabilitation of upper limb amputees using deep-learning-based techniques. Eng. Appl. Artif. Intell. Nov. 2023;126 doi: 10.1016/J.ENGAPPAI.2023.106990. [DOI] [Google Scholar]
  • 7.Liyanarachchi R., Wijekoon J., Premathilaka M., Vidhanaarachchi S. COVID-19 symptom identification using Deep Learning and hardware emulated systems. Eng. Appl. Artif. Intell. Oct. 2023;125 doi: 10.1016/J.ENGAPPAI.2023.106709. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 8.Chen Q., Ye Q., Zhang W., Li H., Zheng X. TGM-Nets: a deep learning framework for enhanced forecasting of tumor growth by integrating imaging and modeling. Eng. Appl. Artif. Intell. Nov. 2023;126 doi: 10.1016/J.ENGAPPAI.2023.106867. [DOI] [Google Scholar]
  • 9.Fu Z., Li J., Hua Z., Fan L. Deep supervision feature refinement attention network for medical image segmentation. Eng. Appl. Artif. Intell. Oct. 2023;125 doi: 10.1016/J.ENGAPPAI.2023.106666. [DOI] [Google Scholar]
  • 10.Raza A., Tran K.P., Koehl L., Li S. AnoFed: adaptive anomaly detection for digital health using transformer-based federated learning and support vector data description. Eng. Appl. Artif. Intell. May 2023;121 doi: 10.1016/J.ENGAPPAI.2023.106051. [DOI] [Google Scholar]
  • 11.“2020 Alzheimer's disease facts and figures,”. Alzheimer's Dementia. Mar. 2020;16(3):391–460. doi: 10.1002/ALZ.12068. [DOI] [PubMed] [Google Scholar]
  • 12.Lim B.Y., et al. Deep learning model for prediction of progressive mild cognitive impairment to Alzheimer's disease using structural MRI. Front. Aging Neurosci. Jun. 2022;14 doi: 10.3389/FNAGI.2022.876202/BIBTEX. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Khan Z.A., Chaudhary N.I., Zubair S. Fractional stochastic gradient descent for recommender systems. Electron. Mark. Jun. 2019;29(2):275–285. doi: 10.1007/S12525-018-0297-2/FIGURES/4. [DOI] [Google Scholar]
  • 14.Khan Z.A., Zubair S., Chaudhary N.I., Raja M.A.Z., Khan F.A., Dedovic N. Design of normalized fractional SGD computing paradigm for recommender systems. Neural Comput. Appl. Jul. 2020;32(14):10245–10262. doi: 10.1007/S00521-019-04562-6/TABLES/5. [DOI] [Google Scholar]
  • 15.Khan Z.A., Zubair S., Alquhayz H., Azeem M., Ditta A. Design of momentum fractional stochastic gradient descent for recommender systems. IEEE Access. 2019;7:179575–179590. doi: 10.1109/ACCESS.2019.2954859. [DOI] [Google Scholar]
  • 16.Wei Y., Kang Y., Yin W., Wang Y. Generalization of the gradient method with fractional order gradient direction. J. Franklin Inst. Mar. 2020;357(4):2514–2532. doi: 10.1016/J.JFRANKLIN.2020.01.008. [DOI] [Google Scholar]
  • 17.Janghel R.R., Rathore Y.K. Deep convolution neural network based system for early diagnosis of Alzheimer's disease. IRBM. Aug. 2021;42(4):258–267. doi: 10.1016/J.IRBM.2020.06.006. [DOI] [Google Scholar]
  • 18.Sarraf S., Tofighi G., Org S. Classification of Alzheimer's disease structural MRI data by deep learning convolutional neural networks. Jul. 2016. https://arxiv.org/abs/1607.06583v2 [Online]. Available:
  • 19.Wu H., Luo J., Lu X., Zeng Y. 3D transfer learning network for classification of Alzheimer's disease with MRI. International Journal of Machine Learning and Cybernetics. Jul. 2022;13(7):1997–2011. doi: 10.1007/S13042-021-01501-7/FIGURES/13. [DOI] [Google Scholar]
  • 20.Chen Z., et al. A new classification network for diagnosing Alzheimer's disease in class-imbalance MRI datasets. Front. Neurosci. Aug. 2022;16 doi: 10.3389/FNINS.2022.807085/BIBTEX. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 21.S. R. Maganti S., S. H. A Deep transfer learning models for Alzheimer's disease classification using MRI images. International Journal of Intelligent Systems and Applications in Engineering. Dec. 2023;12(1):95–101. https://ijisae.org/index.php/IJISAE/article/view/3768 [Online]. Available: [Google Scholar]
  • 22.Sorour S.E., El-Mageed A.A.A., Albarrak K.M., Alnaim A.K., Wafa A.A., El-Shafeiy E. Classification of Alzheimer's disease using MRI data based on Deep Learning Techniques. Journal of King Saud University - Computer and Information Sciences. Feb. 2024;36(2) doi: 10.1016/J.JKSUCI.2024.101940. [DOI] [Google Scholar]
  • 23.Mehmood A., Shahid F., Khan R., Ibrahim M.M., Zheng Z. Utilizing siamese 4D-AlzNet and transfer learning to identify stages of Alzheimer's disease. Neuroscience. May 2024;545:69–85. doi: 10.1016/J.NEUROSCIENCE.2024.03.007. [DOI] [PubMed] [Google Scholar]
  • 24.Hajamohideen F., et al. Four-way classification of Alzheimer's disease using deep Siamese convolutional neural network with triplet-loss function. Brain Inform. Dec. 2023;10(1):1–13. doi: 10.1186/S40708-023-00184-W/FIGURES/9. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 25.Al-Khuzaie F.E.K., Bayat O., Duru A.D. Diagnosis of alzheimer disease using 2D MRI slices by convolutional neural network. Appl. Bionics Biomech. 2021;2021 doi: 10.1155/2021/6690539. [DOI] [PMC free article] [PubMed] [Google Scholar] [Retracted]
  • 26.Safi M.S., Safi S.M.M. Early detection of Alzheimer's disease from EEG signals using Hjorth parameters. Biomed. Signal Process Control. Mar. 2021;65 doi: 10.1016/J.BSPC.2020.102338. [DOI] [Google Scholar]
  • 27.Lu X., Wu H., Zeng Y. Classification of Alzheimer's disease in MobileNet. J Phys Conf Ser. Nov. 2019;1345(4) doi: 10.1088/1742-6596/1345/4/042012. [DOI] [Google Scholar]
  • 28.Antony F., Anita H.B., George J.A. Classification on Alzheimer's disease MRI images with VGG-16 and VGG-19. Smart Innovation, Systems and Technologies. 2023;312:199–207. doi: 10.1007/978-981-19-3575-6_22/COVER. [DOI] [Google Scholar]
  • 29.Mujahid M., Rehman A., Alam T., Alamri F.S., Fati S.M., Saba T. An efficient ensemble approach for Alzheimer's disease detection using an adaptive synthetic technique and deep learning. Diagnostics 2023. Jul. 2023;13(15):2489. doi: 10.3390/DIAGNOSTICS13152489. 2489, vol. 13. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 30.Duc N.T., Ryu S., Qureshi M.N.I., Choi M., Lee K.H., Lee B. 3D-Deep learning based automatic diagnosis of Alzheimer's disease with joint MMSE prediction using resting-state fMRI. Neuroinformatics. Jan. 2020;18(1):71–86. doi: 10.1007/S12021-019-09419-W/FIGURES/7. [DOI] [PubMed] [Google Scholar]
  • 31.Basheera S., Satya Sai Ram M. A novel CNN based Alzheimer's disease classification using hybrid enhanced ICA segmented gray matter of MRI. Comput. Med. Imag. Graph. Apr. 2020;81 doi: 10.1016/J.COMPMEDIMAG.2020.101713. [DOI] [PubMed] [Google Scholar]
  • 32.Ben Ahmed O., Benois-Pineau J., Allard M., Ben Amar C., Catheline G. Classification of Alzheimer's disease subjects from MRI using hippocampal visual features. Multimed. Tool. Appl. Feb. 2015;74(4):1249–1266. doi: 10.1007/S11042-014-2123-Y/TABLES/2. [DOI] [Google Scholar]
  • 33.Venugopalan J., Tong L., Hassanzadeh H.R., Wang M.D. Multimodal deep learning models for early detection of Alzheimer's disease stage. Scientific Reports 2021. Feb. 2021;11(1):1–13. doi: 10.1038/s41598-020-74399-w. 11, no. 1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34.Feng W., et al. vol. 30. May 2020. (“Automated MRI-Based Deep Learning Model for Detection of Alzheimer's Disease Process,”). 10.1142/S012906572050032X. [DOI] [PubMed] [Google Scholar]
  • 35.Maqsood M., et al. Transfer learning assisted classification and detection of Alzheimer's disease stages using 3D MRI scans. Sensors 2019. Jun. 2019;19(11):2645. doi: 10.3390/S19112645. 2645, vol. 19. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36.Savaş S. Detecting the stages of Alzheimer's disease with pre-trained deep learning architectures. Arabian J. Sci. Eng. Feb. 2022;47(2):2201–2218. doi: 10.1007/S13369-021-06131-3/TABLES/5. [DOI] [Google Scholar]
  • 37.Choi J.Y., Lee B. Combining of multiple deep networks via ensemble generalization loss, based on MRI images, for Alzheimer's disease classification. IEEE Signal Process. Lett. 2020;27:206–210. doi: 10.1109/LSP.2020.2964161. [DOI] [Google Scholar]
  • 38.Jain R., Jain N., Aggarwal A., Hemanth D.J. Convolutional neural network based Alzheimer's disease classification from magnetic resonance brain images. Cogn Syst Res. Oct. 2019;57:147–159. doi: 10.1016/J.COGSYS.2018.12.015. [DOI] [Google Scholar]
  • 39.Peng J., Zhu X., Wang Y., An L., Shen D. Structured sparsity regularized multiple kernel learning for Alzheimer's disease diagnosis. Pattern Recognit. Apr. 2019;88:370–382. doi: 10.1016/J.PATCOG.2018.11.027. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 40.Hon M., Khan N.M. Proceedings - 2017 IEEE International Conference on Bioinformatics and Biomedicine, BIBM 2017. 2017-January. Towards Alzheimer's disease classification through transfer learning; pp. 1166–1169. Dec. 2017. [DOI] [Google Scholar]
  • 41.Shanmugam J.V., Duraisamy B., Simon B.C., Bhaskaran P. Alzheimer's disease classification using pre-trained deep networks. Biomed. Signal Process Control. Jan. 2022;71 doi: 10.1016/J.BSPC.2021.103217. [DOI] [Google Scholar]
  • 42.Convolutional neural networks for Alzheimer's disease detection on MRI images. https://www.spiedigitallibrary.org/journals/journal-of-medical-imaging/volume-8/issue-2/024503/Convolutional-neural-networks-for-Alzheimers-disease-detection-on-MRI-images/10.1117/1.JMI.8.2.024503.full [DOI] [PMC free article] [PubMed]
  • 43.Carcagnì P., Leo M., Del Coco M., Distante C., De Salve A. Convolution neural networks and self-attention learners for alzheimer dementia diagnosis from brain MRI. Sensors 2023. Feb. 2023;23(3):1694. doi: 10.3390/S23031694. 1694, vol. 23. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 44.Li M., Jiang Y., Li X., Yin S., Luo H. Ensemble of convolutional neural networks and multilayer perceptron for the diagnosis of mild cognitive impairment and Alzheimer's disease. Med. Phys. Jan. 2023;50(1):209–225. doi: 10.1002/MP.15985. [DOI] [PubMed] [Google Scholar]
  • 45.Beheshti I., Demirel H., Matsuda H. Classification of Alzheimer's disease and prediction of mild cognitive impairment-to-Alzheimer’s conversion from structural magnetic resource imaging using feature ranking and a genetic algorithm. Comput. Biol. Med. Apr. 2017;83:109–119. doi: 10.1016/J.COMPBIOMED.2017.02.011. [DOI] [PubMed] [Google Scholar]
  • 46.Raju M., Gopi V.P., Anitha V.S., Wahid K.A. Multi-class diagnosis of Alzheimer's disease using cascaded three dimensional-convolutional neural network. Phys Eng Sci Med. Dec. 2020;43(4):1219–1228. doi: 10.1007/S13246-020-00924-W/FIGURES/6. [DOI] [PubMed] [Google Scholar]
  • 47.Raza N., Naseer A., Tamoor M., Zafar K. Alzheimer disease classification through transfer learning approach. Diagnostics 2023. Feb. 2023;13(4):801. doi: 10.3390/DIAGNOSTICS13040801. 801, vol. 13. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 48.Illakiya T., Ramamurthy K., Siddharth M.V., Mishra R., Udainiya A. AHANet: adaptive hybrid attention network for Alzheimer's disease classification using brain magnetic resonance imaging. Bioengineering. Jun. 2023;10(6):714. doi: 10.3390/BIOENGINEERING10060714. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 49.Buvaneswari P., Gayathri R. Detection and Classification of Alzheimer's disease from cognitive impairment with resting-state fMRI. Neural Comput. Appl. Nov. 2023;35(31):22797–22812. doi: 10.1007/S00521-021-06436-2. [DOI] [Google Scholar]
  • 50.Kwak K., Niethammer M., Giovanello K.S., Styner M., Dayan E. Differential role for hippocampal subfields in Alzheimer's disease progression revealed with deep learning. Cerebr. Cortex. Jan. 2022;32(3):467–478. doi: 10.1093/CERCOR/BHAB223. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 51.Li F., Liu M. A hybrid convolutional and recurrent neural network for Hippocampus analysis in Alzheimer's disease. J. Neurosci. Methods. Jul. 2019;323:108–118. doi: 10.1016/J.JNEUMETH.2019.05.006. [DOI] [PubMed] [Google Scholar]
  • 52.Ali E.H., Sadek S., Makki Z.F. Novel improved fuzzy C-means clustering for MR image brain tissue segmentation to detect Alzheimer's disease. In: Proceedings of the 5th International Conference on Computer and Applications (ICCA 2023); 2023. [DOI] [Google Scholar]
  • 53.Mazhar T., Haq I., Ditta A., Mohsan S.A.H., Rehman F., Zafar I., Gansau J.A., Goh L.P.W. The role of machine learning and deep learning approaches for the detection of skin cancer. Healthcare. Feb. 2023;11(3):415. doi: 10.3390/healthcare11030415. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 54.Haq I., Mazhar T., Malik M.A., Kamal M.M., Ullah I., Kim T., Hamdi M., Hamam H. Lung nodules localization and report analysis from computerized tomography (CT) scan using a novel machine learning approach. Appl. Sci. 2022;12(24) [Google Scholar]
  • 55.Haq I., Mazhar T., Asif R.N., Ghadi Y.Y., Ullah N., Khan M.A., Al-Rasheed A. YOLO and residual network for colorectal cancer cell detection and counting. Heliyon. Feb. 2024;10(2) doi: 10.1016/j.heliyon.2024.e24403. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 56.Saqib S.M., Iqbal M., Asghar M.Z., Mazhar T., Almogren A., Rehman A.U., Hamam H. Cataract and glaucoma detection based on Transfer Learning using MobileNet. Heliyon. 2024;10(17) doi: 10.1016/j.heliyon.2024.e36759. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 57.Guo L., Wang Y., Liu H., Li C., Zhao J., Chu H. On iterative positive solutions for a class of singular infinite-point p-Laplacian fractional differential equation with singular source terms. Journal of Applied Analysis and Computation. 2023;13(5):2827–2842. doi: 10.11948/20230008. [DOI] [Google Scholar]
  • 58.Zhang X., Alahmadi D. Study on the maximum value of flight distance based on the fractional differential equation for calculating the best path of shot put. Applied Mathematics and Nonlinear Sciences. Jul. 2022;7(2):151–162. doi: 10.2478/AMNS.2021.2.00136. [DOI] [Google Scholar]
  • 59.Qin Y., Basheri M., Omer R.E.E. Energy-saving technology of BIM green buildings using fractional differential equation. Applied Mathematics and Nonlinear Sciences. Jan. 2022;7(1):481–490. doi: 10.2478/AMNS.2021.2.00085. [DOI] [Google Scholar]
  • 60.Atangana A., Aguilar J.F.G., Kolade M.O., Hristov J.Y. Editorial: fractional differential and integral operators with non-singular and non-local kernel with application to nonlinear dynamical systems. Chaos, Solit. Fractals. 2020;132(Mar) doi: 10.1016/J.CHAOS.2019.109493. [DOI] [Google Scholar]
  • 61.Atangana A., Gómez-Aguilar J.F. Fractional derivatives with no-index law property: application to chaos and statistics. Chaos, Solit. Fractals. Sep. 2018;114:516–535. doi: 10.1016/J.CHAOS.2018.07.033. [DOI] [Google Scholar]
  • 62.Atangana A. Fractal-fractional differentiation and integration: connecting fractal calculus and fractional calculus to predict complex system. Chaos, Solit. Fractals. Sep. 2017;102:396–406. doi: 10.1016/J.CHAOS.2017.04.027. [DOI] [Google Scholar]
  • 63.Mukhtar R., Chang C.Y., Raja M.A.Z., Chaudhary N.I., Shu C.M. Novel nonlinear fractional order Parkinson's disease model for brain electrical activity rhythms: intelligent adaptive Bayesian networks. Chaos, Solit. Fractals. Mar. 2024;180 doi: 10.1016/J.CHAOS.2024.114557. [DOI] [Google Scholar]
  • 64.Wen C., Yang J. Complexity evolution of chaotic financial systems based on fractional calculus. Chaos, Solit. Fractals. Nov. 2019;128:242–251. doi: 10.1016/J.CHAOS.2019.08.005. [DOI] [Google Scholar]
  • 65.Yang Y., Qi Q., Hu J., Dai J., Yang C. Adaptive fault-tolerant control for consensus of nonlinear fractional-order multi-agent systems with diffusion. Fractal and Fractional. Oct. 2023;7(10):760. doi: 10.3390/FRACTALFRACT7100760. [DOI] [Google Scholar]
  • 66.Zhang Y., Sun H.G., Stowell H.H., Zayernouri M., Hansen S.E. A review of applications of fractional calculus in Earth system dynamics. Chaos, Solit. Fractals. Sep. 2017;102:29–46. doi: 10.1016/J.CHAOS.2017.03.051. [DOI] [Google Scholar]
  • 67.Liu Z., Ding Y., Liu C., Zhao C. Existence and uniqueness of solutions for singular fractional differential equation boundary value problem with p-Laplacian. Advances in Difference Equations. Feb. 2020;2020(1):1–12. doi: 10.1186/S13662-019-2482-9. [DOI] [Google Scholar]
  • 68.Zhao Y., Sun Y., Liu Z., Wang Y. Solvability for boundary value problems of nonlinear fractional differential equations with mixed perturbations of the second type. AIMS Mathematics. 2019;5(1):557–567. doi: 10.3934/math.2020037. [DOI] [Google Scholar]
  • 69.Araz S.İ. Numerical analysis of a new volterra integro-differential equation involving fractal-fractional operators. Chaos, Solit. Fractals. Jan. 2020;130 doi: 10.1016/J.CHAOS.2019.109396. [DOI] [Google Scholar]
  • 70.Wen L., Liu H., Chen J., Fakieh B., Shorman S.M. Fractional linear regression equation in agricultural disaster assessment model based on geographic information system analysis technology. Applied Mathematics and Nonlinear Sciences. Jan. 2022;7(1):275–284. doi: 10.2478/AMNS.2021.2.00096. [DOI] [Google Scholar]
  • 71.Shah S.M., Samar R., Khan N.M., Raja M.A.Z. Design of fractional-order variants of complex LMS and NLMS algorithms for adaptive channel equalization. Nonlinear Dynam. Apr. 2017;88(2):839–858. doi: 10.1007/S11071-016-3279-Y. [DOI] [Google Scholar]
  • 72.Pu Y.F., Yi Z., Zhou J.L. Fractional hopfield neural networks: fractional dynamic associative recurrent neural networks. IEEE Trans Neural Netw Learn Syst. Oct. 2017;28(10):2319–2333. doi: 10.1109/TNNLS.2016.2582512. [DOI] [PubMed] [Google Scholar]
  • 73.Cheng S., Wei Y., Sheng D., Chen Y., Wang Y. Identification for Hammerstein nonlinear ARMAX systems based on multi-innovation fractional order stochastic gradient. Signal Process. Jan. 2018;142:1–10. doi: 10.1016/J.SIGPRO.2017.06.025. [DOI] [Google Scholar]
  • 74.Chaudhary N.I., Aslam Khan Z., Zubair S., Raja M.A.Z., Dedovic N. Normalized fractional adaptive methods for nonlinear control autoregressive systems. Appl. Math. Model. Feb. 2019;66:457–471. doi: 10.1016/J.APM.2018.09.028. [DOI] [Google Scholar]
  • 75.Jia T., Chen X., He L., Zhao F., Qiu J. Finite-time synchronization of uncertain fractional-order delayed memristive neural networks via adaptive sliding mode control and its application. Fractal and Fractional. Sep. 2022;6(9):502. doi: 10.3390/FRACTALFRACT6090502. [DOI] [Google Scholar]
  • 76.Xing R., Xiao M., Zhang Y., Qiu J. Stability and Hopf bifurcation analysis of an (n + m)-neuron double-ring neural network model with multiple time delays. J. Syst. Sci. Complex. Feb. 2022;35(1):159–178. doi: 10.1007/S11424-021-0108-2. [DOI] [Google Scholar]
  • 77.Xu C., et al. New results on bifurcation for fractional-order octonion-valued neural networks involving delays. Netw. Comput. Neural Syst. Apr. 2024 doi: 10.1080/0954898X.2024.2332662. [DOI] [PubMed] [Google Scholar]
  • 78.Khan M.B., Santos-García G., Noor M.A., Soliman M.S. Some new concepts related to fuzzy fractional calculus for up and down convex fuzzy-number valued functions and inequalities. Chaos, Solit. Fractals. Nov. 2022;164 doi: 10.1016/J.CHAOS.2022.112692. [DOI] [Google Scholar]
  • 79.Li X., Ma W., Bao X. Generalized fractional calculus on time scales based on the generalized Laplace transform. Chaos, Solit. Fractals. Mar. 2024;180 doi: 10.1016/J.CHAOS.2024.114599. [DOI] [Google Scholar]
  • 80.Herrera-Alcántara O., Castelán-Aguilar J.R. Fractional gradient optimizers for PyTorch: enhancing GAN and BERT. Fractal and Fractional. Jun. 2023;7(7):500. doi: 10.3390/FRACTALFRACT7070500. [DOI] [Google Scholar]
  • 81.Khan Z.A., Chaudhary N.I., Raja M.A.Z. Generalized fractional strategy for recommender systems with chaotic ratings behavior. Chaos, Solit. Fractals. Jul. 2022;160 doi: 10.1016/J.CHAOS.2022.112204. [DOI] [Google Scholar]
  • 82.Wei Y., Chen Y.Q., Gao Q., Wang Y. Infinite series representation of functions in fractional calculus. In: Proceedings of the 2019 Chinese Automation Congress (CAC 2019); Nov. 2019. pp. 1697–1702. [DOI] [Google Scholar]
  • 83.Chen Y., Gao Q., Wei Y., Wang Y. Study on fractional order gradient methods. Appl. Math. Comput. Dec. 2017;314:310–321. doi: 10.1016/J.AMC.2017.07.023. [DOI] [Google Scholar]
  • 84.Wei Y., Chen Y., Cheng S., Wang Y. A note on short memory principle of fractional calculus. Fract. Calc. Appl. Anal. Dec. 2017;20(6):1382–1404. doi: 10.1515/FCA-2017-0073. [DOI] [Google Scholar]
  • 85.Wei Y., Kang Y., Yin W., Wang Y. Generalization of the gradient method with fractional order gradient direction. J. Franklin Inst. Mar. 2020;357(4):2514–2532. doi: 10.1016/J.JFRANKLIN.2020.01.008. [DOI] [Google Scholar]
  • 86.Jack C.R., et al. The Alzheimer's disease neuroimaging initiative (ADNI): MRI methods. J. Magn. Reson. Imag. Apr. 2008;27(4):685–691. doi: 10.1002/JMRI.21049. [DOI] [PMC free article] [PubMed] [Google Scholar]
