Table 4.
Summary of the reviewed ELM-based HSI classification methods.
Year | Method used | Dataset and classification overall accuracy (COA) | Research remarks and future scope |
---|---|---|---|
2014 | Ensemble extreme learning machines (E2LM): bagging-based ELMs (BagELMs) and AdaBoost-based ELMs (BoostELMs) [72] | UP—94.3%, KSC—97.71%, SV—97.19% | BoostELM outperforms kernel-based and other ensemble learning methods. Future scope: evaluate other differentiable or nondifferentiable activation functions |
2015 | Kernel-based ELM—composite kernel (KELM-CK) [75] | IP—95.9%, UP—93.5%, SV—96.4% | Outperforms other SVM-CK-based models |
2015 | Two-level fusion with ELM: feature-level fusion (FF-ELM) and decision-level fusion (DF-ELM) [76] | FF-ELM: UP—98.11%, IP—92.93%, SV—99.12%; DF-ELM: UP—99.25%, IP—93.58%, SV—99.63% | Outperforms basic ELM models |
2016 | Hierarchical local-receptive-field-based ELM (HL-ELM) [77] | IP—98.36%, UP—98.59% | Surpasses other ELM methods in terms of accuracy and training speed |
2017 | Genetic-firefly algorithm with ELM (3FA-ELM) [78] | HyDice DC Mall—97.36%, HyMap—95.58% | Low complexity (ELM) with better adaptability and search capability (FA). Future scope: reduce execution time |
2017 | Local receptive fields-based kernel ELM (LRF-KELM) [79] | IP—98.29% | Outperforms other ELM models |
2017 | Distributed KELM based on MapReduce framework with Gabor filtering (DK-Gabor-ELMM) [80] | IP—92.8%, UP—98.8% | Outperforms other ELM models |
2017 | Loopy belief propagation with ELM (ELM-LBP) [81] | IP—97.29% | Efficient time complexity |
2018 | Mean filtering with RBF-based KELM (MF-KELM) [82] | IP—98.52% | The model incurs minimal computational cost |
2018 | Augmented sparse multinomial logistic ELM (ASMLELM) [83] | IP—98.85%, UP—99.71%, SV—98.92% | Improved classification accuracy via extended multi-attribute profiles and enhanced sparse representation (SR) |
2018 | ELM with enhanced composite feature (ELM-ECF) [84] | IP—98.8%, UP—99.7%, SV—99.5% | Low complexity; multiscale spatial features improve accuracy. Future scope: incorporate feature-fusion technology |
2019 | Local block multilayer sparse ELM (LBMSELM) [85] | IP—89.31%, UP—89.47%, SV—90.03% | Also performs anomaly and target detection. Inverse-free computation, saliency detection, and gravitational search reduce computational overhead and increase classification accuracy |
2019 | ELM-based heterogeneous domain adaptation (EHDA) [25] | HU-DC—97.51%, UP-DC—96.63%, UP-HU—97.53% | Outperforms other HDA methods; uses invariant feature selection |
2019 | Spectral-spatial domain-specific convolutional deep ELM (S2CDELM) [86] | IP—97.42%, UP—99.72% | Easy construction with high training and testing speed. Future scope: merge DL with ELM |
2020 | Cumulative variation weights and comprehensively evaluated ELM (CVW-CEELM) [87] | IP—98.5%, UP—99.4% | Accuracy attributed to the weight determination of multiple weak classifiers; uses multiscale neighborhood choice and optimized feature selection |
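All of the variants summarized above extend the same closed-form ELM training step: the input weights are drawn at random and only the output weights are solved for analytically. The sketch below is a minimal, illustrative NumPy implementation of that baseline; the function names, the sigmoid activation, and the hidden-layer and regularization defaults are our own assumptions, not values taken from any of the cited papers.

```python
import numpy as np

def train_elm(X, Y, n_hidden=500, reg=1e-3, seed=0):
    """Basic single-hidden-layer ELM (illustrative sketch).

    X : (n_samples, n_bands) pixel spectra.
    Y : (n_samples, n_classes) one-hot label matrix.
    n_hidden and reg are hypothetical defaults, not from the surveyed papers.
    """
    rng = np.random.default_rng(seed)
    # Random input weights and biases: fixed at initialization, never trained.
    W = rng.uniform(-1.0, 1.0, size=(X.shape[1], n_hidden))
    b = rng.uniform(-1.0, 1.0, size=n_hidden)
    # Hidden-layer output matrix H (sigmoid activation).
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    # Output weights via ridge-regularized least squares:
    # beta = (H^T H + reg*I)^-1 H^T Y, a regularized Moore-Penrose pseudoinverse.
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def predict_elm(X, W, b, beta):
    # Recompute hidden activations and pick the highest-scoring class.
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return np.argmax(H @ beta, axis=1)

# Hypothetical usage on random stand-in data (not a real HSI benchmark):
X = np.random.rand(200, 103)                  # e.g., 103 spectral bands
Y = np.eye(9)[np.random.randint(0, 9, 200)]   # 9 classes, one-hot encoded
W, b, beta = train_elm(X, Y, n_hidden=300)
labels = predict_elm(X, W, b, beta)
```

The kernel-based rows (KELM-CK, LRF-KELM, MF-KELM) replace the explicit random hidden layer with a kernel matrix and solve an analogous regularized linear system over the training samples, which avoids choosing n_hidden at the cost of an n_samples-by-n_samples solve.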