
Table 2.

Summary of the reviewed HSI classification methods based on sparse representation (a minimal sketch of the core sparse representation classification step follows the table).

Year | Method used | Dataset and COA | Research remarks and future scope
2013 | Kernel sparse representation classification (KSRC) [45] | IP—96.8%, UP—98.34%, KSC—98.95% | Lacks an automatic way of selecting the window size, the spatial image quality, and the filtering degree of class spatial relations

2014 | Multiscale adaptive sparse representation (MASR) [46] | UP—98.47%, IP—98.43%, SV—97.33% | MASR outperformed the single-scale JSRM approach and several other classifiers in both classification maps and accuracy
The structural dictionary needs to be made more inclusive and trained with discriminative learning algorithms

2015 | Sparse multinomial logistic regression (SMLR) [47] | IP—97.71%, UP—98.69% | As a pixelwise supervised method, it performs better than other contemporary methods
The model can be improved through further technical validation, exploitation of MRFs, and structured sparsity-inducing norms that enhance the interpretability, stability, and identity of the learned model

2015 | Super-pixel-based discriminative sparse model (SBDSM) [377] | IP—97.12%, SV—99.37%, UP—97.33%, Washington DC mall—96.84% | The advantage of this model lies in harnessing spatial context effectively through the super-pixel concept, which improves both speed and classification accuracy
Future work includes a systematic way of adapting the number of super-pixels to various conditions and applying SR to other remote sensing tasks

2015 | Shape-adaptive joint sparse representation classification (SAJSRC) [48] | IP—98.45%, UP—98.16%, SV—98.53% | A shape-adaptive local region is used for each test pixel, rather than a fixed square window, to adaptively explore spatial PCs, which makes the method outperform other corresponding methods
Shape-adaptive region searching could be applied to the original HSI, instead of the dimensionally reduced map, to explore its complete spatial information

2017 | Multiple-feature-based adaptive sparse representation (MFASR) [49] | IP—97.99%, UP—98.39%, Washington DC mall—97.26% | Full utilization of all the joint features embedded in shape-adaptive (SA) regions makes the method superior to several cutting-edge approaches
The method could be enhanced in the future by selecting features automatically and improving the dictionary learning to reduce the computational cost

2018 | Weighted joint nearest neighbor and joint sparse representation (WJNN-JSR) [50] | UP—97.42%, IP—93.95%, SV—95.61%, Pavia center—99.27% | The model improves on conventional approaches by applying Gaussian weighting and incorporating the neighborhood of the test pixel, yielding a new classification measure: the Euclidean-weighted joint distance
Developing more effective ways of applying the method and further increasing the classification accuracy are left as future work

2019 | Log-Euclidean kernel-based joint sparse representation (LogEKJSR) [51] | IP—97.25%, UP—99.06%, SV—99.36% | Specializes in extracting covariance features from a square spatial neighborhood and computing the similarity between covariance matrices using the conventional Gaussian kernel form
Future work includes creating adaptive local regions with super-pixel segmentation methods and learning the required kernel through multiple kernel learning

2019 | Multiscale super-pixels and guided filter (MSS-GF) [52] | IP—97.58%, UP—99.17% | Exploits spatial and edge details in HSI effectively; multiscale super-pixels (MSSs) are built at various regional scales to acquire accurate spatial information, and the guided filter (GF) improves the classification maps by correcting misclassifications near edges
Applying more efficient methods for local feature extraction and super-pixel segmentation is left as future work

2019 | Joint sparse representation—self-paced learning (JSR-SPL) [53] | IP—96.60%, SV—98.98% | The results are more accurate and reliable than those of other JSR methods

2019 | Maximum-likelihood estimation-based JSR (MLEJSR) [54] | IP—96.69%, SV—98.91%, KSC—97.13% | The model is robust to outliers

2020 | Global spatial and local spectral similarity-based manifold learning-group sparse representation-based classifier (GSLS-ML-GSRC) [55] | UP—93.42%, Washington DC mall—91.64%, SV—93.79% | The fusion of global spatial and local spectral similarities makes the method outperform other contemporary methods that rely only on nonlocal or local similarities

2020 | Sparse-adaptive hypergraph discriminant analysis (SAHDA) [56] | Washington DC mall—95.28% | Effectively depicts the multiple complicated characteristics of the HSI; exploiting spatial knowledge is considered as future work
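
Most of the methods summarized above build on the same basic sparse representation classification (SRC) step: the spectrum of a test pixel is sparsely coded over a dictionary of labeled training spectra, and the pixel is assigned to the class whose atoms give the smallest reconstruction residual; the joint, kernel, and multiscale variants in the table mainly change how the neighborhood or feature space feeding this step is built. The sketch below only illustrates that core step under assumed details (scikit-learn's orthogonal_mp for the sparse coding, a fixed sparsity level, synthetic data); it is not the implementation of any specific method in the table.

```python
import numpy as np
from sklearn.linear_model import orthogonal_mp


def src_classify(test_pixel, dictionary, labels, n_nonzero=5):
    """Assign an HSI pixel to the class whose training spectra reconstruct it best.

    dictionary : (n_bands, n_train) array, columns are L2-normalized training spectra
    labels     : (n_train,) array, class label of each dictionary column
    """
    # Sparse-code the test spectrum over the whole training dictionary (greedy OMP).
    code = orthogonal_mp(dictionary, test_pixel, n_nonzero_coefs=n_nonzero)
    # Class-wise reconstruction residuals: keep only each class's coefficients.
    residuals = {}
    for c in np.unique(labels):
        mask = labels == c
        residuals[c] = np.linalg.norm(test_pixel - dictionary[:, mask] @ code[mask])
    # The predicted class is the one with the smallest residual.
    return min(residuals, key=residuals.get)


# Tiny synthetic usage example: 30 bands, 40 training spectra from 4 classes.
rng = np.random.default_rng(0)
D = rng.normal(size=(30, 40))
D /= np.linalg.norm(D, axis=0)                # normalize dictionary atoms
y = np.repeat(np.arange(4), 10)
pixel = D[:, 3] + 0.01 * rng.normal(size=30)  # a noisy copy of a class-0 atom
print(src_classify(pixel, D, y))              # expected: 0
```

Joint sparse representation variants such as SAJSRC, WJNN-JSR, and LogEKJSR would typically replace the single test spectrum with a matrix of neighboring spectra that share a common sparse support, while kernel methods such as KSRC perform the same residual comparison in a kernel-induced feature space.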