
Table 6.

Summary of the review of HSI classification using deep learning—AE.

| Year | Method used | Dataset and classification overall accuracy (COA) | Research remarks and future scope |
|------|-------------|----------------------------------------------------|-----------------------------------|
| 2013 | Autoencoders (AE) [110] | Error rate: KSC 4%, Pavia City 14.36% | This article opened a considerable avenue for further research, including other deep models for better accuracy |
| 2014 | Stacked autoencoder with logistic regression (SAE-LR) [113] | KSC 98.76%, Pavia City 98.52% | More accurate than RBF-SVM and faster at testing than SVM or KNN, but inefficient in training time |
| 2016 | Spatial-updated deep AE with collaborative representation-based classifier (SDAE-CR) [114] | IP 99.22%, Pavia Center 99.9%, Botswana 99.88% | Accurate, and extracts specialized deep features rather than hand-crafted ones. Future scope: improving the deep network architecture and parameter selection |
| 2019 | Compact and discriminative stacked autoencoder (CDSAE) [115] | UP 97.59%, IP 95.81%, SV 96.07% | Efficient for low-dimensional feature spaces, but computational cost increases with architecture size |
| 2021 | Stacked autoencoder with distance-based spatial-spectral vector [116] | SV 97.93%, UP 99.34%, Surrey 94.31% | Augmenting EMAP features with geometrically allocated spatial-spectral feature vectors achieves excellent results; better hyperparameter tuning and more powerful computational tools are required. Future scope: unifying the training model for more generalized and accurate classification |
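
To make the SAE-LR idea from the 2014 row concrete, the following is a minimal sketch, not the authors' code: a stacked autoencoder is pretrained layer by layer on per-pixel spectral vectors, then a logistic-regression (softmax) output layer is attached and the whole network is fine-tuned. The layer widths, learning rates, training lengths, and the synthetic stand-in data are illustrative assumptions only.

```python
# Minimal sketch of SAE-LR-style pretraining + fine-tuning (assumptions noted above).
import torch
import torch.nn as nn

torch.manual_seed(0)

n_bands, n_classes = 103, 9             # Pavia-like dimensions (assumption)
X = torch.rand(2048, n_bands)           # stand-in for per-pixel spectral vectors
y = torch.randint(0, n_classes, (2048,))

sizes = [n_bands, 64, 32]               # encoder widths (assumption)
encoders = []

# 1) Greedy layer-wise pretraining: each layer is trained as a plain autoencoder
#    on the encoded output of the previous layer.
inp = X
for d_in, d_out in zip(sizes[:-1], sizes[1:]):
    enc = nn.Sequential(nn.Linear(d_in, d_out), nn.Sigmoid())
    dec = nn.Linear(d_out, d_in)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    for _ in range(50):
        opt.zero_grad()
        loss = nn.functional.mse_loss(dec(enc(inp)), inp)
        loss.backward()
        opt.step()
    encoders.append(enc)
    inp = enc(inp).detach()             # feed encoded features to the next layer

# 2) Fine-tuning: stack the pretrained encoders, add a softmax (logistic regression)
#    output layer, and train the whole network with cross-entropy.
model = nn.Sequential(*encoders, nn.Linear(sizes[-1], n_classes))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(X), y)
    loss.backward()
    opt.step()

pred = model(X).argmax(dim=1)
print("training accuracy:", (pred == y).float().mean().item())
```

The layer-wise pretraining step is what distinguishes this family of methods in the table: the unsupervised reconstruction objective learns deep spectral features without labels, and the supervised fine-tuning stage is comparatively cheap, which is consistent with the remark that testing is fast while training the full stack remains the main cost.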