2022 Apr 28;2022:3854635. doi: 10.1155/2022/3854635

Table 5.

Summary of review of HSI classification using active learning.

Year Method used Dataset and classification overall accuracy (COA) Research remarks and future scope
2008 AL with expectation-maximization-binary hierarchical classifier (BHC-EM-AL) and maximum-likelihood (ML-EM-AL) [90] Range: KSC—90-96%, Botswana—94-98% Achieves better learning rates than random selection of data points and an entropy-based AL approach
Measurement of the efficacy of the active learning-based knowledge transfer approach while systematically increasing the spatial/temporal segregation of the data sources

2010 Semi-supervised segmentation with AL and multinomial logistic regression (MLR-AL) [91] IP—79.90%, SV—97.47% Innovative mechanisms for automatically selecting unlabeled training samples; AL used to enhance segmentation results
Testing the segmentation in various scenarios constrained by limited a priori availability of training images

2013 Maximizer of the posterior marginal by loopy belief propagation with AL (MPM-LBP-AL) [92] IP—94.76%, UP—85.78% Improved accuracy over previous AL applications
Use parallel computer architectures such as commodity clusters or GPUs to build a computationally efficient implementation

2015 Hybrid AL-MRF, that is, uncertainty sampling with breaking ties (MRF-AL-BT), the passive random-sampling selection approach (MRF-AL-RS), and their combination (MRF-AL-BT + RS) [93] IP—94.76%, UP—85.78% (MRF-AL-RS provides the highest accuracies) Outperforms conventional AL and SVM-AL methods due to MRF regularization and pixelwise output
Merge the model with other effective AL methods and test it with a limited number of training samples

2015 Integration of AL and a Gaussian process classifier (GP-AL) [94] IP—89.49%, Pavia center—98.22% Empirical automation of AL achieves reasonable accuracy
Adding a diversity criterion to the heuristics, incorporating contextual information into the model, and reducing computation time

2016 AL with hierarchical segmentation (HSeg) tree: adding features and adding samples (Adseg_AddFeat + AddSamp) [95] IP—82.77%, UP—92.23% Outperforms several baseline methods by selecting appropriate training data from already existing labeled datasets, potentially reducing manual labeling effort
Reduce the computational time that limits its applicability to large-scale datasets

2016 Multiview 3D redundant discrete wavelet transform-based AL (3D-RDWT-MV-AL) [96] HU—99%, KSC—99.8%, UP—95%, IP—90% Combining the initialization process with AL improves classification

2017 Discovering representativeness and discriminativeness by semi-supervised active learning (DRDbSSAL) [97] Botswana—97.03%, KSC—93.47%, UP—93.03%, IP—88.03% Novel approach achieving competitive accuracy

2017 Multicriteria AL [98] KSC—99.71%, UP—99.66%, IP—99.44% Surpasses other existing AL methods in stability, accuracy, robustness, and computational cost
A multi-objective optimization strategy and the use of advanced attribute-profile features

2018 Feature-driven AL combined with morphological profiles and Gabor filters [99] IP—99.5%, UP—99.84%, KSC—99.53% (Gabor-BT) A discriminative feature space is designed to concentrate helpful information into a restricted number of samples

2018 Multiview intensity-based AL (MVAL) with a multiview intensity-based query-representative strategy (MVIQ-R) [100] UP—98%, Botswana—99.5%, KSC—99.9%, IP—95% Focusing on pixel intensity yields distinctive features and hence better performance
Selection of an optimal combination of attribute features

2019 Super-pixel with density peak augmentation (DPA)-based semi-supervised AL (SDP-SSAL) [101] IP—90.08%, UP—85.61% Novel approach based on a super-pixel density metric
Development of a pixelwise solution to produce super-pixel-based neighborhoods

2020 Adaptive multiview ensemble spectral classifier and hierarchical segmentation (Ad-MVEnC_Spec + Hseg) [102] KSC—97.63%, IP—87.1%, HU—93.3% Enhances view sufficiency and promotes the disagreement level via dynamic views; lower computational complexity due to parallel computing

2020 Spectral-spatial feature fusion using spatial coordinates-based AL (SSFFSC-AL) [103] IP—100%, UP—98.43% High running speed; successfully addresses the "salt and pepper" phenomenon but loses some accuracy when samples of the same class are distributed differently across regions
Convert the sampling weight parameter into an adaptive parameter that is adjusted as the training samples are modified
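Several of the surveyed methods (e.g., MRF-AL-BT [93] and Gabor-BT [99]) build on the breaking-ties (BT) uncertainty criterion, which queries the pool samples whose top two class probabilities are closest. The following is a minimal, illustrative sketch of a pool-based AL loop with BT sampling on synthetic data; the classifier, dataset, and all parameter choices here are placeholder assumptions, not the implementations of the cited papers.

```python
# Sketch of pool-based active learning with the breaking-ties criterion.
# Synthetic data and a plain logistic-regression classifier stand in for
# a real hyperspectral cube and the classifiers used in the survey.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "pixels": 300 samples, 10 spectral bands, 3 classes.
X = rng.normal(size=(300, 10))
y = (X[:, :3].sum(axis=1) > 0).astype(int) + (X[:, 3] > 1).astype(int)

# Seed the labeled set with a few samples per class.
labeled = []
for c in np.unique(y):
    labeled += list(np.where(y == c)[0][:3])
pool = [i for i in range(300) if i not in set(labeled)]
n_init = len(labeled)

for _ in range(5):  # 5 AL iterations, querying 5 samples each
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    proba = clf.predict_proba(X[pool])
    top2 = np.sort(proba, axis=1)[:, -2:]
    bt = top2[:, 1] - top2[:, 0]      # small margin = ambiguous sample
    picks = np.argsort(bt)[:5]        # query the 5 most ambiguous
    labeled += [pool[i] for i in picks]
    pool = [p for p in pool if p not in set(labeled)]

# 5 iterations x 5 queries = 25 samples moved from pool to labeled set.
```

The surveyed papers differ mainly in what replaces the classifier (MRF-regularized outputs, Gabor features, multiview ensembles) and in how the query step trades off uncertainty against diversity; the loop skeleton above stays the same.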