Abstract
Hyperspectral images (HSI) consist of many contiguous spectral bands and are widely used for Earth observation activities such as surveillance, detection, and identification. Incorporating both spectral and spatial characteristics is necessary for improved classification accuracy, and deep learning has gained significant traction in HSI classification. This research analyzes how to accurately classify new HSI from a limited number of labelled samples. For this purpose, a novel deep-learning-based categorization built on feature extraction and classification is designed: spectral and spatial information is first extracted and then integrated to generate fused features. The classification task is completed by a compressed synergic deep convolution neural network with Aquila optimization (CSDCNN-AO), a model constructed around the novel Aquila Optimizer (AO). The experimental evaluation uses four HSI datasets: the Kennedy Space Center (KSC), Indian Pines (IP), Houston U (HU), and Salinas Scene (SS) datasets. Sequential testing on these four HSI classification datasets demonstrates that our innovative framework outperforms conventional techniques on common evaluation measures such as average accuracy (AA), overall accuracy (OA), and the Kappa coefficient (k). In addition, it significantly reduces training time and computational cost, resulting in enhanced training stability, maximum performance, and remarkable training accuracy.
1. Introduction
Owing to the fast growth of photonics and optics, hyperspectral (HS) sensors are being installed on several satellites. HSI classification is an essential and challenging task aimed at labelling each pixel contained in a hyperspectral image. HSI contain spatial-spectral information that is useful for detecting scene objects [1] and have been used in many fields, such as environmental surveillance, astronomy, and precision agriculture [2].
In the early days, HSI classification was performed by machine learning methods such as support vector machines (SVM) [3, 4], k-nearest neighbors (KNN) [5, 6], multinomial logistic regression (MLR) [7, 8], and decision trees [9, 10]. Within the available data, however, the spectrum of one material can vary while different materials in different places can share similar features, so the obtained details remained corrupted by inadequate spatial structure feature extraction. These issues make it hard to perfect HSI classification, and numerous spectral and spatial feature extraction methods have therefore been proposed.
These techniques have demonstrated significant classification performance, yet they are not effective for classifying HSI in difficult situations. In recent times, deep learning techniques have achieved great success at this kind of task [11–13], reaching admirable performance on different analysis-oriented tasks, e.g., object recognition and image classification. To classify HSI, the full spatial and spectral perspectives must be considered during processing. Intuitively, an HSI consists of a large number of images, each representing a portion of the electromagnetic spectrum, while the spatial perspective captures the 2D spatial arrangement of the objects contained in the HSI. Thus, HSI is typically treated as 3D spectral-spatial data, and many methods have been proposed in the literature [14, 15].
Several attempts have been made to model spectral-spatial data concurrently. One such method performs convolutions over the spectral and spatial feature spaces in a stacked manner, i.e., a CNN model [16]. The apparent benefit of this CNN model is that it can create rich feature maps, but it has two major drawbacks. First, it is hard to build a deeper CNN structure: the receptive field grows only through the cumulative effect of the convolution operations, which confines the interpretation ability and depth of the model. Second, the memory cost becomes too expensive when many convolution operations are performed [17–20]. To reduce these challenges, we introduce a new CNN model, the compressed synergic deep convolution neural network with Aquila optimization (CSDCNN-AO).
The significant objectives of this work are listed below:
To determine a suitable deep learning method that provides strong support for HSI classification.
To reduce the complexity and the loss function in classification.
To develop the future outcome based on both the present and traditional outputs.
The major contributions of this technique are given below.
The proposed combination reduces the learning complexity through the wavelet concept and reduces the loss function with Aquila optimization. The Aquila optimization method can shrink an enormous number of data features while maintaining their unique properties, using less computation time and less memory space. Furthermore, a synergic deep convolutional neural network (CNN) is used to obtain an initial result, and the CNN weights are then optimized by Aquila optimization to reduce the error rate. The key role is the compression of data combined with Aquila-optimized CNN weights, which increases accuracy while maintaining maximum steadiness between the exploitation and exploration phases of the optimization.
The organization of the work is given below:
The literature survey is given in Section 2. In Section 3, the proposed methodology is given. In Section 4, the experimental results and discussions are explained. Finally, Section 5 concludes the work.
2. Literature Review
Yang et al. [21] present a novel synergistic CNN (SyCNN) for accurate HSI classification. The SyCNN is a hybrid of 2D and 3D CNNs with a data-interaction feature-learning module that fuses spatial and spectral HSI data, and it adds a three-dimensional operation before the fully connected layer that helps extract features effectively. However, it cannot handle high-dimensional data.
Li et al. [22] suggested an HSI model called the local and hybrid dilated convolution fusion network (LDFN), which fuses local and rich spatial features by expanding the receptive field. Initially, several operations are applied, such as dropout, standard convolution, batch normalization, and average pooling; then, local and dilated convolution operations carry out efficient spatial-spectral feature extraction. On the other hand, the parameters were selected manually.
Patel et al. [23] suggested HSI categorization by an autoencoder with a CNN (AECNN). Pre-processing with the autoencoder enhances the HSI features, which helps obtain optimized weights in the initial CNN layers, so a shallow CNN suffices to extract features from the HSI data. However, the method still needs to cover more contextual information and apply advanced strategies to make the spatial information robust.
Wang et al. [24] suggested a semi-supervised HSI classification model that improves deep learning. The suggested model uses an arbitrary multiple-graphs method, replacing skilled labelling with an anchor-graph method that can label significant amounts of unlabelled data automatically and precisely. However, the number of training samples is limited.
Shi et al. [25] presented the 3D coordination attention mechanism (3DCAM). This attention process captures the HSI's spatial position along both the vertical and horizontal directions, and a CNN extracts the HSI's spatial and spectral data. The drawback is that the implementation complexity is not considered.
Zhao et al. [26] suggested combining a stacked autoencoder (SAE) with a 3D deep residual network (3DDRN) to classify HSI. The SAE neural network reduces the HSI dimensionality, while the 3DDRN, developed from a 3D CNN and residual network modules, extracts spectral-spatial features from the dimension-reduced 3D HSI cubes. The 3DDRN progressively identifies deep features, which are passed to SoftMax to complete classification; batch normalization (BN) and dropout are used to avoid overfitting the training data.
Yin et al. [27] developed a spatial-spectral mixed network for HSI categorization. The network collects spatial-spectral information from the HSI using three layers of 3-D convolution and one layer of 2-D convolution, and it employs Bi-LSTM to strengthen spectral band interactions and extract spectral features as a sequence of images. Combining two FC layers and using SoftMax for classification creates a unified neural network. However, the model misclassifies some samples in the dataset.
Paul et al. [28] developed SSNET, which blends 3D and 2D convolutions of the HSI spectral-spatial information with spatial pyramid pooling (SPP) to create spatial features at various scales. SPP is employed with two-dimensional local convolutional filters for HSI classification because it resists object distortions, and the SPP layer's fixed-length feature vector output reduces the trainable parameters and improves classification performance. However, the structure is complicated.
Zhang et al. [29] introduced SSAF-DCR for hyperspectral image classification. The recommended network links three parts to extract features: in the first, a dense spectral block reuses spectral characteristics as much as possible, and a spectral attention block then refines and optimizes them; in the second, a dense spatial block and an attention block select spatial features. However, the selection of the number of features is not considered.
Yan et al. [30] offer a 3D cascaded spectral-spatial element attention network (3D-CSSEAN) for image classification. Using a spectral element attention module and a spatial element attention module, the network can concentrate on key spectral and spatial aspects. The two element attention modules are built from activation functions and element-wise multiplication, so the model extracts classification-relevant properties while remaining computationally efficient; since the attention modules have few training parameters, the network structure also suits small-sample learning. On the other hand, obtaining labelled samples is expensive and difficult.
To overcome these existing challenges, our proposed work introduces novel techniques, which are discussed in the following section.
3. Proposed Synergic Deep Learning Model
Let the hyperspectral image be x = [x1, x2, x3, …, xS]^T ∈ R^{S×(C×D)}, where S is the number of bands, each with C × D samples, and a labelled sample is (xi, yi) ∈ (R^{S×(C×D)}, R^Y) with label yi. HSI classification is commonly affected by inter-class similarity and high intra-class variability. To compensate for these issues, we introduce the proposed technique, a synergic deep learning model with a feature reduction principle. This method lowers computational complexity by reducing the spectral and spatial feature dimensions, and we evaluate the efficiency of the subsequent feature suppression using a hybrid synergic deep CNN model. The proposed synergic deep learning model consists of synergic deep learning (SDL)-based feature extraction, feature reduction, classification, and loss function optimization. The schematic representation of the proposed method is shown in Figure 1 and detailed in the following sections.
Figure 1. Schematic representation of the proposed methodology.
3.1. Synergic Deep Convolutional Neural Network Feature Extraction
In the proposed model, shown in Figure 2, we extract useful HSI features using an input layer, n DCNN components, and C(n, 2) synergic networks. DCNN has recently attracted much attention for classification; it reduces the number of input variables and develops the neural network architecture. A DCNN is a stack of layers in which each layer performs a different function: pre-processing, convolution, pooling, and the final classification operations are performed sequentially in the synergic DCNN [31]. The forward pass is a convolution operation on the inputs, in which the products of weights and inputs are accumulated across layers. Each filter has the same number of channels as the input volume, and the output volume has the same depth as the number of filters. In the convolution process, several computations are carried out: every layer is composed of neurons that take input values, perform calculations, and produce output values, which are forwarded to the next layer. Within the CNN, four important operations are performed during feature learning: convolution, activation, pooling, and normalization. Pre-processing is carried out before the convolution operation.
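For concreteness, the following is a minimal PyTorch sketch of one such feature-learning stage, with convolution, normalization, activation, and pooling applied in sequence; the channel counts and kernel sizes here are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ConvStage(nn.Module):
    """One feature-learning stage: convolution, normalization, activation,
    and pooling, chained in the forward pass."""
    def __init__(self, in_channels=3, out_channels=64):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1)
        self.norm = nn.BatchNorm2d(out_channels)   # normalization
        self.act = nn.ReLU(inplace=True)           # activation
        self.pool = nn.MaxPool2d(kernel_size=2)    # pooling halves H and W

    def forward(self, x):
        return self.pool(self.act(self.norm(self.conv(x))))

x = torch.randn(8, 3, 224, 224)    # a batch of pre-processed 224 x 224 x 3 inputs
features = ConvStage()(x)          # -> shape (8, 64, 112, 112)
```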
Figure 2. Synergic deep learning model.
3.1.1. Pair Input Layer
Synergic pair input layers are trained on randomly drawn pairs: each group of 200 samples, together with its class labels, is given to the DCNN units. Each input image has size 224 × 224 × 3. Before applying the data to the next layer, we apply the feature reduction principle.
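A minimal sketch of how such paired inputs might be drawn is given below; the group size of 200 follows the text, while the function name and uniform sampling scheme are our assumptions.

```python
import numpy as np

def sample_synergic_pairs(images, labels, group_size=200, seed=0):
    """Draw two random groups of samples to form input pairs; the synergic
    label is 1 when the paired samples share a class label, else 0."""
    rng = np.random.default_rng(seed)
    idx_a = rng.integers(0, len(images), size=group_size)
    idx_b = rng.integers(0, len(images), size=group_size)
    synergic = (labels[idx_a] == labels[idx_b]).astype(np.float32)
    return (images[idx_a], images[idx_b]), synergic
```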
3.1.2. Feature Reduction by Wavelet Transform
In this feature reduction step, we use a wavelet transform with the Haar basis, which handles high-dimensional data efficiently. Two filters, a low-pass filter h and a high-pass filter g, are applied for effective feature reduction; incorporated into the transform, they yield the reduced input coefficients, as given in equation (1):

x′a[k] = Σm h[m − 2k] x[m],   x′d[k] = Σm g[m − 2k] x[m],   (1)

where the approximation coefficients x′a are retained as the reduced features. As a result of this transformation, the learning complexity and learning time of the DCNN are reduced, since fewer features enter the CNN architecture.
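As a sketch, this reduction can be realized with a one-dimensional Haar DWT applied along the spectral axis of the HSI cube, keeping only the approximation (low-pass) output; the use of PyWavelets and the decomposition depth are our assumptions, since the paper does not state them.

```python
import numpy as np
import pywt  # PyWavelets

def haar_reduce(cube, levels=1):
    """Halve the spectral dimension of an HSI cube (H, W, bands) per level by
    keeping the Haar approximation coefficients (filter h); the detail
    coefficients produced by the filter g are discarded."""
    reduced = cube
    for _ in range(levels):
        approx, _detail = pywt.dwt(reduced, 'haar', axis=-1)
        reduced = approx
    return reduced

cube = np.random.rand(145, 145, 200)        # e.g., an Indian Pines-sized cube
print(haar_reduce(cube, levels=2).shape)    # (145, 145, 50)
```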
3.1.3. DCNN Component
Each DCNN component starts from the ResNet-101 architecture and is denoted DCNN-n (n = 1, 2, …, N); this type of architecture suits the synergic deep learning (SDL) method. Consider the compressed-feature data sequence x′ = {x′(1), x′(2), …, x′(n)} with the output class label sequence y′ = {y′(1), y′(2), …, y′(n)}. Each component is trained on the parameters θ under the cross-entropy loss expressed in equation (2):

l(θ) = −(1/n) Σ_{i=1}^{n} log z_{y(i)}(i),   (2)

where z(i) = f(x′(i), θ) denotes the forward computing process and z_{y(i)}(i) is the predicted probability of the true class. Likewise, the parameters of DCNN-n are denoted θn, and the components do not share parameters.
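A hedged sketch of one such component is shown below, using torchvision's ResNet-101 as the backbone named in the text; returning the penultimate features alongside the class logits is our reading of what the synergic network later consumes.

```python
import torch.nn as nn
from torchvision.models import resnet101

class DCNNComponent(nn.Module):
    """One DCNN-n component: a ResNet-101 backbone that emits both the deep
    features (input of the FC layer) and the class logits."""
    def __init__(self, num_classes=16):   # e.g., 16 classes for Indian Pines
        super().__init__()
        backbone = resnet101(weights=None)
        self.features = nn.Sequential(*list(backbone.children())[:-1])
        self.fc = nn.Linear(backbone.fc.in_features, num_classes)

    def forward(self, x):
        f = self.features(x).flatten(1)    # deep features f
        return f, self.fc(f)               # (features, logits)

criterion = nn.CrossEntropyLoss()          # per-component loss of equation (2)
```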
In the SDL model, synergic labels are applied through the input, embedding, and learning layers. The input data pair is denoted (zI, zJ) and is given to (DCNN-i, DCNN-j). The outputs of their FC layers are given in equations (3) and (4):

fI = F(zI; θi),   (3)

fJ = F(zJ; θj).   (4)

In the next stage, the deep features are embedded by concatenation, fI∘J, and the resulting outcome is expressed in equation (5):

fI∘J = fI ∘ fJ.   (5)

The binary cross-entropy loss is given below:

l^S(θSDL) = −[yS log YSDL + (1 − yS) log(1 − YSDL)],   (6)

where yS = 1 if the paired samples share a class label and 0 otherwise. In the above expression, θSDL represents the synergic attributes and YSDL represents the synergic forward computation. This process verifies whether the data pair belongs to the same class and yields a corrective feedback signal from the synergic (SN) errors.
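The synergic network itself can be sketched as a small binary classifier over the concatenated pair features, trained with binary cross-entropy on the synergic label; the hidden width of 256 is an assumption.

```python
import torch
import torch.nn as nn

class SynergicHead(nn.Module):
    """Embeds a feature pair by concatenation (f_{I∘J} = f_I ∘ f_J) and
    predicts whether the two inputs share a class label."""
    def __init__(self, feature_dim=2048):   # 2048 matches ResNet-101 features
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(2 * feature_dim, 256), nn.ReLU(),
            nn.Linear(256, 1))

    def forward(self, f_i, f_j):
        return self.classifier(torch.cat([f_i, f_j], dim=1)).squeeze(1)

synergic_loss = nn.BCEWithLogitsLoss()       # the loss of equation (6)
```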
3.1.4. Training and Testing
In this stage, we perform the SN parameter update

θ(T + 1) = θ(T) − γ(T) · Δ(T),   (7)

where γ(T) represents the learning rate and Δ(T) combines the classification gradient with the synergic error signals SDL(i, j):

Δ(T) = ∂l(θi)/∂θi + ϑ Σ_{j≠i} ∂l^S(θSDL)/∂θi,   (8)

where ϑ refers to the trade-off between the synergic error and the classification sub-model. Additionally, test data are classified by the SN DCNN components through their prediction vectors, denoted p(i) = (P1(i), P2(i), …, Pk(i)). The test data class label is computed as

ŷ = arg max_Q Σ_{i=1}^{N} P_Q(i).   (9)
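A minimal numpy sketch of this fusion rule follows, assuming the component prediction vectors are summed before the argmax:

```python
import numpy as np

def predict_label(prediction_vectors):
    """Fuse the k-class prediction vectors p(i) of the DCNN components by
    summing them and taking the argmax, as in equation (9)."""
    return int(np.argmax(np.sum(prediction_vectors, axis=0)))

# e.g., three components voting over 16 classes
p = np.random.dirichlet(np.ones(16), size=3)
print(predict_label(p))
```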
3.2. Image Classification
This is the final stage, in which the HSI is classified into the different class labels. Classification is performed by the SoftMax layer, which is widely used for multi-class classification. It defines a mapping of the input vector C from the space of dimension n to the k class labels, as given in equation (10):

P(y = Q | C) = exp(θQ^T C) / Σ_{j=1}^{k} exp(θj^T C),   (10)

where Q = 1, 2, …, k and θQ = [θQ1, θQ2, …, θQn]^T are the weights, which are tuned by the optimization process. As a result, the loss function of this architecture can be reduced.
3.3. Loss Reduction by Aquila Optimization Algorithm
Losses in the SDL are reduced by the Aquila optimization algorithm through a weight tuning process. The Aquila optimization algorithm yields the best solution within the stated constraints.
The mathematical model of Aquila optimization (AO) [32] consists of the following stages: expanded exploration, narrowed exploration, expanded exploitation, and narrowed exploitation.
3.3.1. Expanded Exploration
In this work, the Aquila identifies the best weights θk based on the best hunting area, where the best hunting area corresponds to the minimum loss. In this process, the AO (weight optimization) explores extensively from a high soar to determine the region of the search space:

θ1(T + 1) = θBEST(T) × (1 − T/t) + (θm(T) − θBEST(T)) · RAND,   (11)

where θ1(T + 1) refers to the next-iteration solution estimated by the first search method, θBEST(T) is the best solution obtained up to iteration T, and the term (1 − T/t) controls the expanded search (exploration). t and T are the maximum and current iterations, respectively, and θm(T) is the mean value of the current locations, calculated as

θm(T) = (1/n) Σ_{i=1}^{n} θi(T),   (12)

where n is the population size.
3.3.2. Narrowed Exploration
In this stage, the AO narrowly explores the space around the targeted prey for the solution:

θ2(T + 1) = θBEST(T) × LEVY(d) + θr(T) + (y − x) · RAND,   (13)

where θ2(T + 1) is the next-iteration solution, LEVY(d) is the Levy flight distribution function, d is the dimension of the space, and θr(T) is a random solution taken from the population (1, …, n). The Levy flight is computed as

LEVY(d) = S × (U × σ) / |V|^{1/β},   (14)

where S is a constant with the value 0.01, and U and V are random numbers. The term σ is given by

σ = (Γ(1 + β) × sin(πβ/2)) / (Γ((1 + β)/2) × β × 2^{(β−1)/2}).   (15)

In equation (15), β is a constant value. Moreover, the values of y and x, which implement the spiral search of this optimization, are calculated as

y = r × cos(φ),   x = r × sin(φ),   with r = R1 + ε × d1 and φ = −ϖ × d1 + 3π/2.   (16)

R1 takes values from 1 to 20 over the fixed search cycles, and ε has the value 0.00565. d1 runs over the dimension indices, and ϖ is a small constant equal to 0.005.
3.3.3. Expanded Exploitation
In this stage, the weight optimization exploits the approximate location of the solution to get nearer to the prey and attack:

θ3(T + 1) = (θBEST(T) − θm(T)) × α − RAND + ((Bupper − Blower) × RAND + Blower) × δ,   (17)

where θ3(T + 1) refers to the next-iteration solution and θBEST(T) represents the estimated prey location. In addition, θm(T) represents the current mean value at the T-th iteration, and RAND is a random value between 0 and 1. α and δ are small adjustment parameters in (0, 1) for the exploitation process, and Bupper and Blower represent the upper and lower bounds of the problem, respectively.
3.3.4. Narrowed Exploitation
In this phase, the attack is carried out at the final location:

θ4(T + 1) = qf × θBEST(T) − (g1 × θ(T) × RAND) − g2 × LEVY(d) + RAND × g1,   (18)

where θ4(T + 1) denotes the next-iteration solution, qf is the quality function applied to balance the search strategies, g1 denotes the various motions applied to track the prey, g2 takes values decreasing from two to zero, and θ(T) is the current solution at iteration T. These quantities are calculated as

qf(T) = T^{(2 × RAND − 1)/(1 − t)^2},   g1 = 2 × RAND − 1,   g2 = 2 × (1 − T/t),   (19)

where qf(T) is the quality function at iteration T, RAND is a random value between 0 and 1, and t and T are the maximum and current iterations, respectively. LEVY(d) is the Levy flight distribution function calculated using equation (14). As a result, we obtain the optimum weights, which reduce the losses in the architecture.
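To make the four phases concrete, the following compact numpy sketch follows the update rules of equations (11)–(19) after Abualigah et al. [32]; the phase-switching rule, greedy replacement, the spiral constant R1 = 10, α = δ = 0.1, and the toy objective are all our assumptions, and in the proposed method the objective would be the network's classification loss evaluated at candidate weight vectors θ.

```python
import numpy as np
from math import gamma, pi, sin

def levy(dim, rng, beta=1.5, s=0.01):
    """Levy flight step of equations (14)-(15)."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u, v = rng.normal(0, sigma, dim), rng.normal(0, 1, dim)
    return s * u / np.abs(v) ** (1 / beta)

def aquila_optimize(loss_fn, dim, n=20, t_max=100, lb=-1.0, ub=1.0, seed=0):
    """Minimize loss_fn over weight vectors using the four AO phases."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, (n, dim))
    fit = np.array([loss_fn(p) for p in pop])
    for t in range(t_max):
        best, mean = pop[fit.argmin()].copy(), pop.mean(axis=0)
        for i in range(n):
            if t <= (2 / 3) * t_max:                      # exploration phases
                if rng.random() < 0.5:                    # expanded, eq. (11)
                    cand = best * (1 - t / t_max) + (mean - best) * rng.random()
                else:                                     # narrowed, eq. (13)
                    d1 = np.arange(1, dim + 1)
                    r = 10 + 0.00565 * d1                 # spiral radius (R1 assumed 10)
                    phi = -0.005 * d1 + 1.5 * pi
                    y, x = r * np.cos(phi), r * np.sin(phi)
                    cand = (best * levy(dim, rng) + pop[rng.integers(n)]
                            + (y - x) * rng.random())
            else:                                         # exploitation phases
                if rng.random() < 0.5:                    # expanded, eq. (17)
                    alpha = delta = 0.1
                    cand = ((best - mean) * alpha - rng.random()
                            + ((ub - lb) * rng.random() + lb) * delta)
                else:                                     # narrowed, eqs. (18)-(19)
                    qf = t ** ((2 * rng.random() - 1) / (1 - t_max) ** 2)
                    g1, g2 = 2 * rng.random() - 1, 2 * (1 - t / t_max)
                    cand = (qf * best - g1 * pop[i] * rng.random()
                            - g2 * levy(dim, rng) + rng.random() * g1)
            cand = np.clip(cand, lb, ub)
            f = loss_fn(cand)
            if f < fit[i]:                                # greedy replacement (assumed)
                pop[i], fit[i] = cand, f
    return pop[fit.argmin()]

# Toy usage: recover a weight vector minimizing a quadratic surrogate loss.
w_best = aquila_optimize(lambda w: float(np.sum((w - 0.3) ** 2)), dim=8)
```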
4. Experimental Results and Discussion
In our work, four HSI datasets are used to analyze the proposed CSDCNN-AO technique: the Houston U (HU) dataset [33], Indian Pines (IP) [34], Kennedy Space Center (KSC) [35], and the Salinas Scene (SS) dataset [17]. The IP dataset has a size of 145 × 145 pixels, and the KSC dataset is 512 × 614 pixels with 13 ground-truth classes.
4.1. Dataset and Its Description
4.1.1. Houston U (HU) Dataset
The first dataset is GRSS DFC 2013, which measures 349 × 1905 pixels and has 144 bands spanning the wavelength range 380–1050 nm. It was acquired by the National Center for Airborne Laser Mapping (NCALM) over the University of Houston at a spatial resolution of 2.5 m. The image is separated into two halves, a bright section and a dark section; the bright section has 4143 samples, whereas the dark section contains 824 samples.
4.1.2. Indian Pines (IP)
This agricultural dataset was collected in 1992 over Northwest Indiana using the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor. It has 145 × 145 pixels and 16 vegetation classes at a spatial resolution of 20 m per pixel. After removing 4 zero bands and 20 bands affected by water absorption, 200 spectral bands ranging from 400 to 2500 nm at 10 nm intervals were used for analysis.
4.1.3. Kennedy Space Center (KSC)
The Kennedy Space Center dataset was collected by the AVIRIS instrument over Florida in 1996. It measures 512 × 614 pixels, with 176 bands and 13 categories.
4.1.4. Salinas Scene (SS) Dataset
The Salinas Scene, a second set of AVIRIS data, was collected by the AVIRIS sensor over Salinas Valley, California, USA, with a spatial resolution of 3.7 m per pixel, covering the wavelength range 0.4–2.5 μm at a spectral resolution of 10 nm. It measures 512 × 217 pixels with 224 bands (water absorption bands included).
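For reference, a minimal loading sketch for one of these scenes is shown below, assuming the commonly distributed MATLAB files (e.g., from the EHU hyperspectral repository); the file and variable names vary by mirror and are assumptions here.

```python
from scipy.io import loadmat

# Hypothetical file/key names from the common distribution of Indian Pines.
cube = loadmat('Indian_pines_corrected.mat')['indian_pines_corrected']  # (145, 145, 200)
gt = loadmat('Indian_pines_gt.mat')['indian_pines_gt']                  # (145, 145)
print(cube.shape, gt.shape, int(gt.max()))   # 16 labelled classes; 0 is background
```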
The models are compared on the IP dataset across its different classes, as evaluated below.
Table 1 reports the classification performance for the Indian Pines dataset in terms of overall accuracy, average accuracy, and the Kappa coefficient. The results show that the proposed CSDCNN-AO yields better performance than the other techniques. In Table 1, CSDCNN-AO achieves its best result on the 13th class; CSDCNN performs best on the 8th class, SDCNN on the 16th class, DCNN also on the 16th class, and RNN on the 6th class.
Table 1. HSI categorization for the Indian Pines (IP) dataset.

| Class/Metric | RNN | DCNN | SDCNN | CSDCNN | CSDCNN-AO |
| --- | --- | --- | --- | --- | --- |
| OA | 46.33 ± 0.45 | 48.73 ± 0.89 | 89.36 ± 1.13 | 89.57 ± 0.86 | 93.44 ± 1.08 |
| AA | 36.20 ± 1.06 | 49.60 ± 3.29 | 88.46 ± 1.17 | 83.14 ± 1.11 | 94.44 ± 1.82 |
| K | 53.97 ± 0.58 | 51.04 ± 1.03 | 89.62 ± 2.54 | 89.12 ± 0.26 | 98.33 ± 1.25 |
| 1 | 22.89 ± 1.09 | 1.33 ± 7.33 | 90.00 ± 1.03 | 30.21 ± 30.0 | 93.77 ± 11.6 |
| 2 | 45.46 ± 5.00 | 41.53 ± 3.04 | 87.35 ± 3.80 | 81.79 ± 0.26 | 90.38 ± 4.87 |
| 3 | 26.69 ± 2.61 | 30.91 ± 8.28 | 87.18 ± 7.24 | 75.93 ± 1.26 | 90.06 ± 4.53 |
| 4 | 22.79 ± 9.7 | 21.17 ± 3.25 | 83.17 ± 5.52 | 89.11 ± 1.12 | 96.84 ± 4.89 |
| 5 | 37.71 ± 6.67 | 69.79 ± 2.13 | 86.75 ± 2.55 | 79.28 ± 1.34 | 95.65 ± 1.95 |
| 6 | 89.57 ± 1.71 | 91.78 ± 0.78 | 89.08 ± 3.06 | 92.82 ± 0.32 | 96.95 ± 0.96 |
| 7 | 39.54 ± 11.4 | 19.85 ± 7.59 | 69.89 ± 29.7 | 39.69 ± 2.13 | 91.48 ± 24.0 |
| 8 | 87.46 ± 2.15 | 87.84 ± 3.15 | 85.25 ± 2.40 | 99.22 ± 0.31 | 89.11 ± 3.09 |
| 9 | 47.78 ± 19.04 | 0.00 ± 0.00 | 49.0 ± 49.0 | 19.00 ± 2.05 | 92.72 ± 8.38 |
| 10 | 49.46 ± 1.91 | 52.53 ± 1.24 | 86.47 ± 7.69 | 74.28 ± 0.89 | 92.40 ± 2.86 |
| 11 | 70.89 ± 2.49 | 61.88 ± 4.33 | 91.88 ± 5.03 | 91.12 ± 0.25 | 93.97 ± 3.29 |
| 12 | 37.14 ± 5.56 | 37.46 ± 3.85 | 77.82 ± 5.16 | 85.88 ± 2.37 | 87.56 ± 3.44 |
| 13 | 32.68 ± 7.17 | 85.02 ± 1.22 | 96.26 ± 5.29 | 50.86 ± 3.56 | 98.89 ± 0.87 |
| 14 | 81.32 ± 8.95 | 89.94 ± 2.59 | 89.16 ± 2.22 | 93.89 ± 1.37 | 96.89 ± 2.57 |
| 15 | 45.75 ± 5.12 | 44.64 ± 4.56 | 94.00 ± 7.49 | 93.76 ± 1.98 | 89.74 ± 2.65 |
| 16 | 29.60 ± 34.12 | 95.38 ± 1.94 | 99.89 ± 3.86 | 98.11 ± 2.67 | 96.89 ± 4.98 |
In Figure 3, (a) shows the original ground truth image, and the proposed algorithm is compared with CSDCNN-ALO [36], CSDCNN-PSO [37], CSDCNN-WOA [38], and CSDCNN-GWO [39]; such optimization methods have been applied across many fields [40–45]. Among these methods, the proposed work yields the best performance, since its classification map is the closest to the original ground truth image.
Figure 3. HSI classified image for the IP dataset: (a) original ground truth image, (b) CSDCNN-ALO, (c) CSDCNN-PSO, (d) CSDCNN-WOA, (e) CSDCNN-GWO, and (f) CSDCNN-AO.
Table 2 reports the classification performance for the KSC dataset using the abovementioned measures. The results show that the proposed CSDCNN-AO yields better performance than the other techniques. In Table 2, CSDCNN-AO achieves its best result on the 10th class; CSDCNN performs best on the 11th class, SDCNN on the 13th class, DCNN on the 8th class, and RNN on the 6th class.
Table 2. HSI categorization for the KSC dataset.

| Class/Metric | RNN | DCNN | SDCNN | CSDCNN | CSDCNN-AO |
| --- | --- | --- | --- | --- | --- |
| OA | 45.22 ± 0.45 | 47.89 ± 0.89 | 88.36 ± 1.13 | 87.48 ± 0.86 | 92.33 ± 1.08 |
| AA | 37.31 ± 1.06 | 56.60 ± 3.29 | 85.39 ± 1.17 | 82.43 ± 1.11 | 93.44 ± 1.82 |
| K | 52.97 ± 0.58 | 53.12 ± 1.03 | 91.62 ± 2.54 | 93.12 ± 0.26 | 97.22 ± 1.25 |
| 1 | 22.89 ± 1.09 | 30.33 ± 7.33 | 89.00 ± 1.03 | 29.21 ± 30.0 | 84.77 ± 11.6 |
| 2 | 44.46 ± 5.00 | 42.53 ± 3.04 | 86.35 ± 3.80 | 84.79 ± 0.26 | 92.38 ± 4.87 |
| 3 | 46.69 ± 2.61 | 53.91 ± 8.28 | 86.18 ± 7.24 | 85.93 ± 1.26 | 93.06 ± 4.53 |
| 4 | 29.79 ± 9.7 | 21.17 ± 3.25 | 84.17 ± 5.52 | 88.11 ± 1.12 | 95.84 ± 4.89 |
| 5 | 36.71 ± 6.67 | 68.79 ± 2.13 | 85.75 ± 2.55 | 87.28 ± 1.34 | 94.65 ± 1.95 |
| 6 | 88.57 ± 1.71 | 92.78 ± 0.78 | 88.08 ± 3.06 | 93.82 ± 0.32 | 95.95 ± 0.96 |
| 7 | 38.34 ± 11.4 | 20.85 ± 7.79 | 71.89 ± 29.45 | 42.69 ± 2.13 | 92.48 ± 24.0 |
| 8 | 91.46 ± 2.15 | 94.84 ± 3.15 | 91.25 ± 2.40 | 95.22 ± 0.31 | 97.11 ± 3.09 |
| 9 | 64.78 ± 19.04 | 0.00 ± 0.00 | 54.0 ± 49.0 | 32.00 ± 2.05 | 89.72 ± 8.38 |
| 10 | 61.46 ± 1.91 | 61.53 ± 1.24 | 89.47 ± 7.69 | 81.28 ± 0.89 | 98.40 ± 2.86 |
| 11 | 79.89 ± 2.49 | 66.88 ± 4.33 | 93.77 ± 5.03 | 95.85 ± 0.67 | 93.97 ± 3.29 |
| 12 | 47.14 ± 5.56 | 47.57 ± 3.85 | 77.97 ± 5.16 | 83.77 ± 4.37 | 95.89 ± 4.55 |
| 13 | 49.68 ± 7.17 | 87.02 ± 1.22 | 97.45 ± 5.29 | 79.86 ± 4.56 | 97.89 ± 0.87 |
In Figure 4, (a) shows the original ground truth image, and the proposed algorithm is compared with CSDCNN-ALO, CSDCNN-PSO, CSDCNN-WOA, and CSDCNN-GWO. Among these methods, the proposed work yields the best performance, since its classification map is the closest to the original ground truth image.
Figure 4. HSI classified image for the KSC dataset: (a) original ground truth image, (b) CSDCNN-ALO, (c) CSDCNN-PSO, (d) CSDCNN-WOA, (e) CSDCNN-GWO, and (f) CSDCNN-AO.
Table 3 reports the classification performance for the Salinas Scene (SS) dataset. The results show that the proposed CSDCNN-AO yields better performance than the other techniques. In Table 3, CSDCNN-AO achieves its best result on the 13th class; CSDCNN performs best on the 16th class, SDCNN on the 14th class, DCNN also on the 16th class, and RNN on the 11th class.
Table 3. HSI categorization for the Salinas Scene (SS) dataset.

| Class/Metric | RNN | DCNN | SDCNN | CSDCNN | CSDCNN-AO |
| --- | --- | --- | --- | --- | --- |
| OA | 46.78 ± 1.45 | 57.49 ± 1.39 | 89.65 ± 1.13 | 91.89 ± 0.86 | 95.77 ± 1.08 |
| AA | 78.20 ± 1.06 | 49.60 ± 3.29 | 88.46 ± 1.17 | 83.14 ± 1.11 | 94.44 ± 1.82 |
| K | 53.97 ± 0.58 | 51.04 ± 1.03 | 89.62 ± 2.54 | 89.12 ± 0.26 | 98.33 ± 1.25 |
| 1 | 22.89 ± 1.09 | 1.33 ± 7.33 | 90.00 ± 1.03 | 30.21 ± 30.0 | 93.77 ± 11.6 |
| 2 | 45.46 ± 5.00 | 41.87 ± 3.04 | 88.79 ± 2.80 | 82.69 ± 0.26 | 92.56 ± 4.87 |
| 3 | 26.69 ± 2.61 | 30.91 ± 8.28 | 87.18 ± 7.24 | 75.93 ± 1.26 | 90.06 ± 4.53 |
| 4 | 22.79 ± 9.7 | 21.17 ± 3.25 | 83.17 ± 5.52 | 89.11 ± 1.12 | 96.84 ± 4.89 |
| 5 | 37.71 ± 6.67 | 69.79 ± 2.13 | 86.75 ± 2.55 | 79.28 ± 1.34 | 95.65 ± 1.95 |
| 6 | 89.57 ± 1.71 | 91.78 ± 0.78 | 89.08 ± 3.06 | 92.82 ± 0.32 | 96.95 ± 0.96 |
| 7 | 39.54 ± 11.4 | 19.85 ± 7.59 | 69.89 ± 29.7 | 39.69 ± 2.13 | 91.48 ± 24.0 |
| 8 | 87.46 ± 2.15 | 87.84 ± 3.15 | 85.25 ± 2.40 | 96.22 ± 0.31 | 88.11 ± 3.09 |
| 9 | 51.81 ± 19.04 | 0.00 ± 0.00 | 48.0 ± 49.0 | 22.00 ± 2.05 | 95.72 ± 8.38 |
| 10 | 67.46 ± 1.91 | 63.53 ± 1.24 | 90.47 ± 6.87 | 87.28 ± 0.89 | 95.30 ± 2.86 |
| 11 | 90.56 ± 3.49 | 73.88 ± 4.33 | 94.57 ± 5.03 | 95.21 ± 0.23 | 97.94 ± 3.29 |
| 12 | 54.14 ± 5.56 | 38.46 ± 3.85 | 76.47 ± 5.16 | 92.77 ± 3.37 | 92.56 ± 3.44 |
| 13 | 34.47 ± 7.17 | 83.02 ± 1.22 | 97.26 ± 5.29 | 49.93 ± 3.56 | 99.89 ± 0.87 |
| 14 | 84.32 ± 8.95 | 87.94 ± 2.59 | 99.16 ± 2.22 | 92.89 ± 1.37 | 95.89 ± 2.57 |
| 15 | 48.75 ± 5.12 | 47.64 ± 4.56 | 91.00 ± 7.49 | 96.76 ± 2.98 | 93.74 ± 2.65 |
| 16 | 34.60 ± 34.12 | 97.38 ± 1.94 | 97.89 ± 3.86 | 99.11 ± 2.67 | 95.89 ± 4.98 |
In Figure 5, (a) shows the original ground truth image, and the proposed algorithm is compared with CSDCNN-ALO, CSDCNN-PSO, CSDCNN-WOA, and CSDCNN-GWO. Among these methods, the proposed work yields the best performance, since its classification map is the closest to the original ground truth image.
Figure 5. HSI classified image for the SS dataset: (a) original ground truth image, (b) CSDCNN-ALO, (c) CSDCNN-PSO, (d) CSDCNN-WOA, (e) CSDCNN-GWO, and (f) CSDCNN-AO.
Table 4 reports the classification performance for the Houston U dataset. The results show that the proposed CSDCNN-AO yields better performance than the other techniques. In Table 4, CSDCNN-AO achieves its best result on the 11th class; CSDCNN performs best on the 8th class, SDCNN on the 15th class, DCNN on the 14th class, and RNN on the 8th class.
Table 4. HSI categorization for the Houston U dataset.

| Class/Metric | RNN | DCNN | SDCNN | CSDCNN | CSDCNN-AO |
| --- | --- | --- | --- | --- | --- |
| OA | 49.23 ± 1.45 | 57.49 ± 1.39 | 88.73 ± 1.13 | 93.78 ± 0.86 | 94.67 ± 1.08 |
| AA | 78.20 ± 1.06 | 49.60 ± 3.29 | 88.46 ± 1.17 | 83.14 ± 1.11 | 94.44 ± 1.82 |
| K | 53.97 ± 0.58 | 51.04 ± 1.03 | 89.62 ± 2.54 | 89.13 ± 0.26 | 97.33 ± 1.25 |
| 1 | 27.89 ± 1.09 | 1.66 ± 7.33 | 92.00 ± 1.03 | 37.21 ± 30.0 | 94.77 ± 11.6 |
| 2 | 49.46 ± 5.00 | 42.87 ± 3.04 | 89.79 ± 2.80 | 82.69 ± 0.26 | 92.56 ± 4.87 |
| 3 | 26.69 ± 2.61 | 30.91 ± 8.28 | 87.18 ± 7.24 | 75.93 ± 1.26 | 90.06 ± 4.53 |
| 4 | 22.79 ± 9.7 | 21.17 ± 3.25 | 83.17 ± 5.52 | 89.11 ± 1.12 | 96.84 ± 4.89 |
| 5 | 37.71 ± 6.67 | 69.79 ± 2.13 | 86.75 ± 2.55 | 79.28 ± 1.34 | 95.65 ± 1.95 |
| 6 | 88.79 ± 1.71 | 92.78 ± 0.78 | 88.08 ± 3.06 | 93.82 ± 0.32 | 98.95 ± 0.96 |
| 7 | 38.54 ± 11.4 | 20.85 ± 7.59 | 72.89 ± 29.7 | 40.69 ± 2.13 | 92.48 ± 24.0 |
| 8 | 89.96 ± 2.15 | 88.84 ± 3.15 | 84.25 ± 2.40 | 99.33 ± 0.31 | 86.11 ± 3.09 |
| 9 | 57.81 ± 19.04 | 0.00 ± 0.00 | 76.0 ± 49.0 | 28.00 ± 2.05 | 99.72 ± 8.38 |
| 10 | 67.46 ± 1.91 | 67.53 ± 1.24 | 90.47 ± 6.87 | 87.28 ± 0.89 | 95.30 ± 2.86 |
| 11 | 82.56 ± 3.49 | 73.88 ± 4.33 | 94.57 ± 5.03 | 95.21 ± 0.23 | 99.84 ± 3.29 |
| 12 | 54.14 ± 5.56 | 48.46 ± 3.85 | 77.47 ± 5.16 | 92.77 ± 3.37 | 92.56 ± 3.44 |
| 13 | 34.47 ± 7.17 | 83.02 ± 1.22 | 97.26 ± 5.29 | 49.93 ± 3.56 | 97.89 ± 0.87 |
| 14 | 84.32 ± 8.95 | 93.94 ± 2.59 | 89.16 ± 2.22 | 95.89 ± 1.37 | 95.89 ± 2.57 |
| 15 | 48.75 ± 5.12 | 47.64 ± 4.56 | 98.00 ± 7.49 | 96.77 ± 2.98 | 93.74 ± 2.65 |
In Figure 6, (a) shows the original ground truth image, and the proposed algorithm is compared with CSDCNN-ALO, CSDCNN-PSO, CSDCNN-WOA, and CSDCNN-GWO. Among these methods, the proposed work yields the best performance, since its classification map is the closest to the original ground truth image.
Figure 6. HSI classified image for the Houston U dataset: (a) original ground truth image, (b) CSDCNN-ALO, (c) CSDCNN-PSO, (d) CSDCNN-WOA, (e) CSDCNN-GWO, and (f) CSDCNN-AO.
The input images were obtained from the four datasets, and results are reported after feature extraction, feature reduction, classification, and loss function optimization. The four datasets taken for testing are the HU, IP, KSC, and SS datasets, and all four show promising results in this classification. The results obtained on these datasets (computational complexity, overall accuracy, and loss functions) are given in the following figures.
The computational complexity attained over various iteration counts is shown in Figure 7. Using these optimizations along with the synergic deep CNN improves the performance of the proposed algorithm. The computational complexity attained by Aquila optimization is much better because it identifies the optimal solution in fewer iterations, even though complexity necessarily grows as the iteration count increases. Unlike other meta-heuristic algorithms, this optimizer provides satisfactory weight parameter selection compared with ALO, WOA, PSO, and GWO; therefore, Aquila optimization is adopted in the proposed process.
Figure 7. Comparative analysis of computational complexity.
The overall accuracy comparison for the abovementioned datasets is shown in Figure 8. Among all the datasets, KSC shows the highest accuracy value. Overall accuracy is evaluated against the coefficient loss, and the comparison shows that overall accuracy degrades as the coefficient loss increases. The proposed work yields a maximum accuracy of 99.02%, which falls off as the coefficient losses increase.
Figure 8. Comparative analysis of overall accuracy.
The loss comparison between the proposed and existing algorithms on the four datasets is shown in Figure 9. Among all the techniques, the proposed CSDCNN-AO shows a lower loss value than the other algorithms; the four existing algorithms taken for comparison are CSDCNN, SDCNN, DCNN, and RNN. The losses on all these datasets are much smaller than those of the existing algorithms, and for the KSC dataset in particular the obtained losses are very low. This is because the proposed technique enhances the effectiveness of the classification process.
Figure 9. Loss comparison for (a) HU dataset, (b) SS dataset, (c) IP dataset, and (d) KSC dataset.
5. Conclusion
In this study, compressed spatial and spectral characteristics are employed as the key idea to develop a compressed synergic deep convolution neural network with Aquila optimization (CSDCNN-AO) for efficient HSI classification. This combination reduces the learning difficulty through the wavelet concept and the loss function through Aquila optimization. The Aquila optimization approach can discard a large number of data features without losing their characteristic properties, while using less computing time and memory. Our proposed approach is superior to existing deep learning models owing to the higher learning ability of the synergic deep learning model operating on compressed features, and compared with the other techniques it reaches the highest level of classification. In addition, the experimental results showed that the loss function does not significantly impact classification accuracy, and the CSDCNN-AO approach has the highest accuracy on all four datasets; the average accuracy, overall accuracy, and Kappa coefficients are optimal on all datasets. However, the proposed technique lacks optimal performance on certain samples; in future research, this issue will be resolved using a new model.
Acknowledgments
The authors extend their appreciation to the Deanship of Scientific Research at King Khalid University for funding this work through the Large Groups Project under grant number RGP. 2/252/43.
Data Availability
The dataset can be obtained from the corresponding author upon request.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
References
- 1. Fauvel M., Benediktsson J. A., Boardman J., et al. Recent advances in techniques for hyperspectral image processing. Remote Sensing of Environment. 2007;113:S110–S120.
- 2. Bioucas-Dias J. M., Plaza A., Camps-Valls G., Scheunders P., Nasrabadi N., Chanussot J. Hyperspectral remote sensing data analysis and future challenges. IEEE Geoscience and Remote Sensing Magazine. 2013;1(2):6–36. doi: 10.1109/mgrs.2013.2244672.
- 3. Zhong S., Chang C.-I., Zhang Y. Iterative support vector machine for hyperspectral image classification. Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP); October 2018; Athens, Greece. pp. 3309–3312.
- 4. Sun S., Zhong P., Xiao H., Liu F., Wang R. An active learning method based on SVM classifier for hyperspectral images classification. Proceedings of the 2015 7th Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing (WHISPERS); June 2015; Tokyo, Japan. pp. 1–4.
- 5. Tu B., Wang J., Kang X., Zhang G., Ou X., Guo L. KNN-based representation of superpixels for hyperspectral image classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 2018;11(11):4032–4047. doi: 10.1109/jstars.2018.2872969.
- 6. Huang K., Li S., Kang X., Fang L. Spectral-spatial hyperspectral image classification based on KNN. Sensing and Imaging. 2016;17(1):1. doi: 10.1007/s11220-015-0126-z.
- 7. Li J., Bioucas-Dias J. M., Plaza A. Spectral-spatial hyperspectral image segmentation using subspace multinomial logistic regression and Markov random fields. IEEE Transactions on Geoscience and Remote Sensing. 2012;50(3):809–823. doi: 10.1109/tgrs.2011.2162649.
- 8. Prabhakar T. N., Xavier G., Geetha P., Soman K. P. Spatial preprocessing based multinomial logistic regression for hyperspectral image classification. Procedia Computer Science. 2015;46:1817–1826. doi: 10.1016/j.procs.2015.02.140.
- 9. Velásquez L., Cruz-Tirado J. P., Siche R., Quevedo R. An application based on the decision tree to classify the marbling of beef by hyperspectral imaging. Meat Science. 2017;133:43–50. doi: 10.1016/j.meatsci.2017.06.002.
- 10. Amirruddin A. D., Muharam F. M., Ismail M. H., Ismail M. F., Tan N. P., Karam D. S. Hyperspectral remote sensing for assessment of chlorophyll sufficiency levels in mature oil palm (Elaeis guineensis) based on frond numbers: analysis of decision tree and random forest. Computers and Electronics in Agriculture. 2020;169:105221. doi: 10.1016/j.compag.2020.105221.
- 11. Smirnov E. A., Timoshenko D. M., Andrianov S. N. Comparison of regularization methods for ImageNet classification with deep convolutional neural networks. AASRI Procedia. 2014;6:89–94. doi: 10.1016/j.aasri.2014.05.013.
- 12. Simonyan K., Zisserman A. Very deep convolutional networks for large-scale image recognition. 2014. http://arXiv.org/abs/1409.1556.
- 13. Hong D., Gao L., Yao J., Zhang B., Plaza A., Chanussot J. Graph convolutional networks for hyperspectral image classification. IEEE Transactions on Geoscience and Remote Sensing. 2021;59(7):5966–5978. doi: 10.1109/tgrs.2020.3015157.
- 14. Pan Z., Healey G., Tromberg B. Comparison of spectral-only and spectral/spatial face recognition for personal identity verification. EURASIP Journal on Applied Signal Processing. 2009;2009:943602. doi: 10.1155/2009/943602.
- 15. Fang B., Li Y., Zhang H., Chan J. C. Hyperspectral images classification based on dense convolutional networks with spectral-wise attention mechanism. Remote Sensing. 2019;11(2):159. doi: 10.3390/rs11020159.
- 16. Yang X., Ye Y., Li X., Lau R. Y. K., Zhang X., Huang X. Hyperspectral image classification with deep learning models. IEEE Transactions on Geoscience and Remote Sensing. 2018;56(9):5408–5423. doi: 10.1109/tgrs.2018.2815613.
- 17. Roy S. K., Krishna G., Dubey S. R., Chaudhuri B. B. HybridSN: exploring 3-D-2-D CNN feature hierarchy for hyperspectral image classification. IEEE Geoscience and Remote Sensing Letters. 2020;17(2):277–281. doi: 10.1109/lgrs.2019.2918719.
- 18. Zhong Z., Li J., Luo Z., Chapman M. Spectral-spatial residual network for hyperspectral image classification: a 3-D deep learning framework. IEEE Transactions on Geoscience and Remote Sensing. 2018;56(2):847–858. doi: 10.1109/tgrs.2017.2755542.
- 19. Paoletti M. E., Haut J. M., Fernandez-Beltran R., Plaza J., Plaza A. J., Pla F. Deep pyramidal residual networks for spectral-spatial hyperspectral image classification. IEEE Transactions on Geoscience and Remote Sensing. 2019;57(2):740–754. doi: 10.1109/tgrs.2018.2860125.
- 20. Makantasis K., Karantzalos K., Doulamis A., Doulamis N. Deep supervised learning for hyperspectral data classification through convolutional neural networks. Proceedings of the 2015 IEEE International Geoscience and Remote Sensing Symposium (IGARSS); July 2015; Milan, Italy. IEEE; pp. 4959–4962.
- 21. Yang X., Zhang X., Ye Y., et al. Synergistic 2D/3D convolutional neural network for hyperspectral image classification. Remote Sensing. 2020;12(12):2033. doi: 10.3390/rs12122033.
- 22. Li C., Qiu Z., Cao X., Chen Z., Gao H., Hua Z. Hybrid dilated convolution with multi-scale residual fusion network for hyperspectral image classification. Micromachines. 2021;12(5):545. doi: 10.3390/mi12050545.
- 23. Patel H., Upla K. P. A shallow network for hyperspectral image classification using an autoencoder with convolutional neural network. Multimedia Tools and Applications. 2022;81(1):695–714. doi: 10.1007/s11042-021-11422-w.
- 24. Wang Q., Chen M., Zhang J., Kang S., Wang Y. Improved active deep learning for semi-supervised classification of hyperspectral image. Remote Sensing. 2021;14(1):171. doi: 10.3390/rs14010171.
- 25. Shi C., Liao D., Zhang T., Wang L. Hyperspectral image classification based on 3D coordination attention mechanism network. Remote Sensing. 2022;14(3):608. doi: 10.3390/rs14030608.
- 26. Zhao J., Hu L., Dong Y., Huang L., Weng S., Zhang D. A combination method of stacked autoencoder and 3D deep residual network for hyperspectral image classification. International Journal of Applied Earth Observation and Geoinformation. 2021;102:102459. doi: 10.1016/j.jag.2021.102459.
- 27. Yin J., Qi C., Chen Q., Qu J. Spatial-spectral network for hyperspectral image classification: a 3-D CNN and Bi-LSTM framework. Remote Sensing. 2021;13(12):2353. doi: 10.3390/rs13122353.
- 28. Paul A., Bhoumik S., Chaki N. SSNET: an improved deep hybrid network for hyperspectral image classification. Neural Computing & Applications. 2021;33(5):1575–1585. doi: 10.1007/s00521-020-05069-1.
- 29. Zhang T., Shi C., Liao D., Wang L. A spectral spatial attention fusion with deformable convolutional residual network for hyperspectral image classification. Remote Sensing. 2021;13(18):3590. doi: 10.3390/rs13183590.
- 30. Yan H., Wang J., Tang L., et al. A 3D cascaded spectral-spatial element attention network for hyperspectral image classification. Remote Sensing. 2021;13(13):2451. doi: 10.3390/rs13132451.
- 31. Anupama C. S. S., Sivaram M., Lydia E. L., Gupta D., Shankar K. Synergic deep learning model-based automated detection and classification of brain intracranial hemorrhage images in wearable networks. Personal and Ubiquitous Computing. 2020;26(1):1–10. doi: 10.1007/s00779-020-01492-2.
- 32. Abualigah L., Yousri D., Abd Elaziz M., Ewees A. A., Al-qaness M. A., Gandomi A. H. Aquila optimizer: a novel meta-heuristic optimization algorithm. Computers & Industrial Engineering. 2021;157:107250. doi: 10.1016/j.cie.2021.107250.
- 33. Alipour-Fard T., Arefi H. Structure aware generative adversarial networks for hyperspectral image classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 2020;13:5424–5438. doi: 10.1109/jstars.2020.3022781.
- 34. Laban N., Abdellatif B., Ebeid H. M., Shedeed H. A., Tolba M. F. Reduced 3-d deep learning framework for hyperspectral image classification. Proceedings of the International Conference on Advanced Machine Learning Technologies and Applications; March 2019; Cairo, Egypt. Springer, Cham; pp. 13–22.
- 35. Roy S. K., Manna S., Song T., Bruzzone L. Attention-based adaptive spectral-spatial kernel ResNet for hyperspectral image classification. IEEE Transactions on Geoscience and Remote Sensing. 2021;59(9):7831–7843. doi: 10.1109/tgrs.2020.3043267.
- 36. Abualigah L., Shehab M., Alshinwan M., Mirjalili S., Elaziz M. A. Ant lion optimizer: a comprehensive survey of its variants and applications. Archives of Computational Methods in Engineering. 2021;28(3):1397–1416. doi: 10.1007/s11831-020-09420-6.
- 37. Parsopoulos K. E., Vrahatis M. N. Particle swarm optimization method in multiobjective problems. Proceedings of the 2002 ACM Symposium on Applied Computing; 2002. pp. 603–607. doi: 10.1145/508791.508907.
- 38. Mirjalili S., Lewis A. The whale optimization algorithm. Advances in Engineering Software. 2016;95:51–67. doi: 10.1016/j.advengsoft.2016.01.008.
- 39. Mirjalili S., Mirjalili S. M., Lewis A. Grey wolf optimizer. Advances in Engineering Software. 2014;69:46–61. doi: 10.1016/j.advengsoft.2013.12.007.
- 40. Alazab M., Lakshmanna K., Thippa Reddy G., Pham Q.-V., Reddy Maddikunta P. K. Multi-objective cluster head selection using fitness averaged rider optimization algorithm for IoT networks in smart cities. Sustainable Energy Technologies and Assessments. 2021;43:100973. doi: 10.1016/j.seta.2020.100973.
- 41. Gadekallu T. R., Alazab M., Kaluri R., Maddikunta P. K. R., Bhattacharya S., Lakshmanna K. Hand gesture classification using a novel CNN-crow search algorithm. Complex & Intelligent Systems. 2021;7:1855–1868. doi: 10.1007/s40747-021-00324-x.
- 42. Aiswarya S., Sasikumar S., Ramesh S. IoT based big data analytics in healthcare: a survey. Proceedings of the First International Conference on Advanced Scientific Innovation in Science, Engineering and Technology, ICASISET 2020; May 2020; Chennai, India.
- 43. Priyanka A., Parimala M., Sudheer K., et al. Big data based on healthcare analysis using IoT devices. IOP Conference Series: Materials Science and Engineering. 2017;263(4):042059. doi: 10.1088/1757-899x/263/4/042059.
- 44. Lakshmanna K., Khare N. Constraint-based measures for DNA sequence mining using group search optimization algorithm. International Journal of Intelligent Engineering and Systems. 2016;9(3):91–100. doi: 10.22266/ijies2016.0930.09.
- 45. R.M. S. P., Bhattacharya S., Maddikunta P. K. R., et al. Load balancing of energy cloud using wind driven and firefly algorithms in internet of everything. Journal of Parallel and Distributed Computing. 2020;142:16–26. doi: 10.1016/j.jpdc.2020.02.010.