2022 Apr 28;2022:3854635. doi: 10.1155/2022/3854635

Table 3.

Summary of review of HSI classification using MRF.

| Year | Method used | Dataset and COA | Research remarks and future scope |
|---|---|---|---|
| 2011 | Adaptive MRF (a-MRF) [59] | IP: 92.55% | Handles the "salt and pepper" problem in homogeneous areas and the possibility of an overcorrection effect on class boundaries |
| 2014 | Hidden MRF with SVM (HMRF-SVM) [60] | IP: 90.50%; SV: 97.24% | Outperforms SVM, improving overall accuracy by nearly 8% and 3.2%, respectively |
| 2014 | Probabilistic SR with MRF-based multiple linear logistic (PSR-MLL) [61] | IP: 97.8%; UP: 99.1%; Pavia Center: 99.4% | Exceeds other contemporary methods in terms of accuracy |
| 2014 | MRF with Gaussian mixture model (GMM-MRF) [62] | UP (LFDA-GMM-MRF): 90.88%; UP (LPNMF-GMM-MRF): 94.96% | Advantageous over a wide range of operating conditions; exploits spatial-spectral information while preserving multimodal statistics. Future work: consideration of GMM classifier distributions |
| 2011 | MRF with sparse multinomial logistic regression classifier and spatially adaptive total variation regularization (MRF-SMLR-SpATV) [63] | UP: 90.01%; IP: 97.85%; Pavia Center: 99.23% | Efficient time complexity of the model. Future work: improvement of the model via GPU implementation and learning dictionaries |
| 2016 | Multitask joint sparse representation (MJSR) with a stepwise Markov random field framework (MSMRF) [64] | IP: 92.11%; UP: 92.52% | The stepwise optimization exploits spatial correlation, significantly improving the effectiveness and accuracy of the classification |
| 2016 | MRF with hierarchical statistical region merging (HSRM) [65] | SVMMRF-HSRM: IP 93.10%, SV 99.15%, UP 86.52%; MLRsubMRF-HSRM: IP 82.60%, SV 88.16%, UP 95.52% | Better alternative to majority voting, which suffers from the problem of scale choice. Future work: considering the spatial features of objects of different groups in the spatial prior model |
| 2018 | Optimum dictionary learning integrated with extended hidden Markov random field (ODL-EMHRF) [66] | ODL-EMHRF-ML: IP 98.56%, UP 99.63%; ODL-EMHRF-EM: IP 98.47%, UP 99.58% | The method has been shown to outperform SVM-based EMRF |
| 2018 | Label-dependent spectral mixture model fused with MRF (LSMM-MRF) [67] | Konka image: 94.19%; shipping scene: 66.45% | Efficient unsupervised classification strategy that accounts for spectral information in mixed pixels and the impact of spatial correlation. Future work: enhanced theoretical derivation of the EM steps |
| 2019 | Adaptive interclass-pair penalty and spectral similarity information (aICP2-SSI) with MRF and SVM [68] | UP: 98.10%; SV: 96.40%; IP: 96.14% | Outperforms other MRF-based methods. Future work: more efficient edge-preserving strategies and improved spectral-similarity and class-separability calculation methods |
| 2019 | Cascaded MRF (CMRF) [69] | IP: 98.56%; Botswana: 99.32%; KSC: 99.24% | Backpropagation tunes the model parameters at low computational expense |
| 2020 | Fusion of transfer learning and MRF (TL-MRF) [70] | IP: 93.89%; UP: 91.79% | TL proves highly effective for HSI classification. Future work: reducing the number of calculations involved in the existing model |
| 2020 | MRF with capsule network (Caps-MRF) [71] | IP: 98.52%; SV: 99.74%; Pavia Center: 99.84% | Ensures that relevant information is preserved; the spatial constraint of the MRF helps achieve more precise model convergence. Future work: combining CapsNet with several post-classification techniques |
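Most entries in the table share one pattern: a pixelwise classifier (SVM, SMLR, CapsNet, etc.) produces per-pixel class scores, and an MRF spatial prior then regularizes the label map. The sketch below illustrates that post-classification step with iterated conditional modes (ICM) and a Potts-style smoothness term; the function name, parameters, and toy data are illustrative assumptions, not taken from any of the cited papers.

```python
import numpy as np

def icm_mrf_smooth(unary, beta=1.0, n_iters=5):
    """Smooth a per-pixel classification with a Potts-model MRF prior
    using iterated conditional modes (ICM).

    unary : (H, W, C) array of per-pixel class scores (higher = better),
            e.g. log-probabilities from a pixelwise classifier.
    beta  : strength of the spatial smoothness prior.
    """
    H, W, C = unary.shape
    labels = unary.argmax(axis=2)  # initial pixelwise classification
    for _ in range(n_iters):
        for i in range(H):
            for j in range(W):
                # score each candidate class: unary term plus a reward
                # for agreeing with each 4-connected neighbour
                score = unary[i, j].copy()
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        score[labels[ni, nj]] += beta
                labels[i, j] = score.argmax()
    return labels

# Toy example: a noisy 2-class score map is smoothed into coherent regions.
rng = np.random.default_rng(0)
scores = np.zeros((8, 8, 2))
scores[:, :4, 0] = 1.0          # left half favours class 0
scores[:, 4:, 1] = 1.0          # right half favours class 1
scores += rng.normal(0, 0.8, scores.shape)  # simulated spectral noise
smoothed = icm_mrf_smooth(scores, beta=1.0)
```

ICM is the simplest MRF inference scheme; the papers above typically use stronger machinery (graph cuts, EM, or cascaded formulations), but the role of the spatial prior is the same.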