| Year | Method | Accuracy | Highlights | Future scope |
|---|---|---|---|---|
| 2015 | Convolutional neural network and multilayer perceptron (CNN-MLP) [120] | Pavia City 99.91%, UP 99.62%, SV 99.53%, IP 98.88% | Far outperforms SVM and RBF-based mixed classifiers; its effective convergence rate is useful for large datasets | Detection of human behavior from hyperspectral video sequences |
| 2016 | 3D-CNN [121] | IP 98.53%, UP 99.66%, KSC 97.07% | A landmark in terms of quality and overall performance | Mapping performance to be accelerated by post-classification processing |
| 2016 | Spectral-spatial feature-based classification (SSFC) [122] | Pavia Center 99.87%, UP 96.98% | More accurate than other methods | Inclusion of an optimal observation scale for improved outcomes |
| 2016 | CNN-based simple linear iterative clustering (SLIC-CNN) [123] | KSC 100%, UP 99.64%, IP 97.24% | Deals with limited datasets by using spectral and local-spatial probabilities as enhanced estimates in Bayesian inference | |
| 2017 | Pixel-pair feature enhanced deep CNN (CNN-PPF) [124] | IP 94.34%, SV 94.8%, UP 96.48% | Overcomes the large-parameter and bulk-data demands of deep learning; pixel-pair features make the system distinctive and reliable, and the voting strategy further refines the classification | |
| 2017 | Multiscale 3D deep convolutional neural network (M3D-DCNN) [125] | IP 97.61%, UP 98.49%, SV 97.24% | Outperforms popular methods such as RBF-SVM and combinations of CNNs | Removing data limitations and improving the network architecture |
| 2018 | 2D-CNN, 3D-CNN, recurrent 2D-CNN (R-2D-CNN), and recurrent 3D-CNN (R-3D-CNN) [126] | IP 99.5%, UP 99.97%, Botswana 99.38%, Pavia Center 96.79%, SV 99.8%, KSC 99.85% | R-3D-CNN outperforms the other CNNs considered, with fast convergence and strong feature extraction, but suffers from the limited-sample problem | Applying prior knowledge and transfer learning |
| 2019 | 3D lightweight convolutional neural network (3D-LWNet) [127] | UP 99.4%, IP 98.87%, KSC 98.22% | Performance is largely independent of the data source | Improving the architecture with intelligent algorithms |
| 2020 | Hybrid spectral CNN (HybridSN) [128] | IP 99.75%, UP 99.98%, SV 100% | Avoids both the loss of essential spectral bands in a purely 2D-CNN and the complex, tedious structure of a purely 3D-CNN, and outperforms contemporary CNN methods such as SSRN and M3D-DCNN (a minimal 3D+2D sketch follows this table) | |
| 2020 | Heterogeneous transfer learning based CNN with attention mechanism (HT-CNN-attention) [129] | SV 99%, UP 97.78%, KSC 99.56%, IP 96.99% | Efficient approach regardless of the sample-selection strategy chosen | |
| 2020 | Quantum genetic-optimized SR-based CNN (QGASR-CNN) [27] | UP 91.6%, IP 94.1% | Improves accuracy while resolving overfitting and "salt-and-pepper" noise | Improving operational performance through the relationship between feature mapping and parameter selection |
| 2020 | Rotation-equivariant CNN2D (reCNN2D) [130] | IP 97.78%, UP 98.89%, SV 98.18% | Provides robustness, strong generalization, and high accuracy without any data augmentation | |
| 2020 | Spectral-spatial dense connectivity-attention 3D-CNN (SSDANet) [131] | UP 99.97%, IP 99.29% | Higher accuracy but a heavy computational burden | Optimization using other, more efficient algorithms |
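
Several rows above contrast purely 2D, purely 3D, and hybrid spectral-spatial CNNs (notably the 3D-CNN [121], R-3D-CNN [126], and HybridSN [128]). As a point of reference, the following is a minimal PyTorch sketch of the hybrid 3D+2D pattern applied to a PCA-reduced hyperspectral patch; the patch size, band count, layer widths, and kernel sizes are illustrative assumptions rather than the exact published HybridSN configuration.

```python
# Minimal, illustrative sketch of a hybrid 3D+2D CNN for hyperspectral patch
# classification in the spirit of HybridSN [128]. All hyperparameters below
# (band count, patch size, layer widths) are assumptions for illustration.
import torch
import torch.nn as nn

class HybridSpectralCNNSketch(nn.Module):
    def __init__(self, bands=30, patch=25, n_classes=16):
        super().__init__()
        # 3D convolutions learn joint spectral-spatial features from the cube.
        self.conv3d = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3)), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=(5, 3, 3)), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=(3, 3, 3)), nn.ReLU(),
        )
        d = bands - 6 - 4 - 2        # spectral depth left after the 3D stage
        s = patch - 3 * 2            # spatial size left after the 3D stage
        # A cheaper 2D convolution then refines spatial features once the
        # spectral depth has been folded into the channel dimension.
        self.conv2d = nn.Sequential(
            nn.Conv2d(32 * d, 64, kernel_size=3), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * (s - 2) * (s - 2), 256), nn.ReLU(),
            nn.Linear(256, n_classes),
        )

    def forward(self, x):            # x: (N, 1, bands, patch, patch)
        x = self.conv3d(x)
        n, c, d, h, w = x.shape
        x = x.reshape(n, c * d, h, w)  # fold spectral depth into channels
        x = self.conv2d(x)
        return self.head(x)

# Usage on a random batch of PCA-reduced patches (shapes are assumptions).
model = HybridSpectralCNNSketch(bands=30, patch=25, n_classes=16)
logits = model(torch.randn(4, 1, 30, 25, 25))  # -> (4, 16)
```

The reshape between the two stages is the key design choice: the spectral depth remaining after the 3D convolutions is folded into the channel axis, so a single 2D convolution can refine spatial features without the cost of a fully 3D network.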