Permutation Feature Importance (PFI)
PFI is a technique for global interpretability that measures the change in the model's score after the values of a single feature are shuffled [31].
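A minimal sketch of the idea, assuming a toy scikit-learn classifier: the drop in test-set score after permuting one feature's column estimates that feature's importance.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline = model.score(X_test, y_test)

rng = np.random.default_rng(0)
importances = []
for j in range(X_test.shape[1]):
    X_perm = X_test.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])   # break the feature-target association
    importances.append(baseline - model.score(X_perm, y_test))

print(np.argsort(importances)[::-1][:5])           # indices of the five most important features
```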
Local Interpretable Model-agnostic Explanation (LIME) |
LIME is a perturbation-based strategy that fits an interpretable surrogate model to approximate the complex model in the neighbourhood of a single instance, providing local interpretability [31].
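A hedged LIME-style sketch (a hand-rolled local surrogate rather than the original lime package): samples are perturbed around one instance, weighted by proximity, and a linear surrogate is fitted whose coefficients serve as the local explanation. The Gaussian perturbation scale and kernel width below are assumed choices.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

x0 = X[0]                                                  # instance to explain
rng = np.random.default_rng(0)
Z = x0 + rng.normal(scale=X.std(axis=0), size=(500, X.shape[1]))  # local perturbations
pz = black_box.predict_proba(Z)[:, 1]                      # black-box outputs on the neighbourhood

dist = np.linalg.norm((Z - x0) / X.std(axis=0), axis=1)
weights = np.exp(-(dist ** 2) / 25.0)                      # proximity kernel (width assumed)

surrogate = Ridge(alpha=1.0).fit(Z, pz, sample_weight=weights)
top = np.argsort(np.abs(surrogate.coef_))[::-1][:5]
print(list(zip(top, surrogate.coef_[top])))                # locally most influential features
```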
SHapley Additive exPlanations (SHAP)
SHAP is a method for determining how much each feature contributes to a specific prediction, based on Shapley values from cooperative game theory [31].
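An illustrative brute-force Shapley value sketch (exponential in the number of features, so only viable for tiny models); replacing "absent" features with their training mean is one common approximation, assumed here rather than taken from the source.

```python
from itertools import combinations
from math import comb
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)
background = X.mean(axis=0)
x0, n = X[0], X.shape[1]

def value(subset):
    """Model output for class 0 when only the features in `subset` come from x0."""
    z = background.copy()
    z[list(subset)] = x0[list(subset)]
    return model.predict_proba(z.reshape(1, -1))[0, 0]

shap_values = np.zeros(n)
for j in range(n):
    others = [k for k in range(n) if k != j]
    for size in range(n):
        for S in combinations(others, size):
            w = 1.0 / (n * comb(n - 1, size))     # Shapley weight |S|!(n-|S|-1)!/n!
            shap_values[j] += w * (value(S + (j,)) - value(S))

print(shap_values)   # per-feature contributions to the class-0 probability of x0
```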
Faster Region-based Convolutional Neural Network (Faster R-CNN)
Faster R-CNN introduced the Region Proposal Network (RPN), which replaces the slow selective-search step. The RPN is attached to the last convolutional layer of the CNN; its proposals are passed to a region-of-interest (RoI) pooling layer, followed by classification and bounding-box regression [56].
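A usage sketch with torchvision's Faster R-CNN implementation (assuming torchvision is available); the RPN and RoI heads described above are built into the model, and the input image here is a random placeholder.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")   # COCO-pretrained detector
model.eval()

image = torch.rand(3, 480, 640)                      # placeholder RGB image in [0, 1]
with torch.no_grad():
    prediction = model([image])[0]                   # dict with boxes, labels, scores

keep = prediction["scores"] > 0.5
print(prediction["boxes"][keep], prediction["labels"][keep])
```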
Pseudo-coloring methodology |
The pseudo-coloring methodology employs a range of colors to represent continuously changing values [29]. |
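A minimal pseudo-coloring sketch: a continuous scalar field is mapped to a color scale with matplotlib (the choice of the 'jet' colormap and the synthetic Gaussian field are assumptions for illustration).

```python
import numpy as np
import matplotlib.pyplot as plt

xx, yy = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
values = np.exp(-(xx ** 2 + yy ** 2))        # continuously changing values

plt.imshow(values, cmap="jet")               # each scalar value is rendered as a color
plt.colorbar(label="value")
plt.savefig("pseudocolor.png")
```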
Class Activation Map (CAM) |
The CAM uses global average pooling to generate class-specific heatmaps that indicate discriminative regions [30]. |
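A hedged CAM sketch for a network ending in global average pooling followed by a single fully connected layer (ResNet-18 is an assumed example); the heatmap is the weighted sum of the last convolutional feature maps, using the target class's FC weights.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="DEFAULT").eval()
image = torch.rand(1, 3, 224, 224)                        # placeholder input

# Feature maps just before global average pooling.
backbone = torch.nn.Sequential(*list(model.children())[:-2])
with torch.no_grad():
    fmaps = backbone(image)                               # shape (1, 512, 7, 7)
    logits = model(image)

cls = logits.argmax(dim=1).item()
weights = model.fc.weight[cls]                            # (512,) class-specific weights
cam = torch.einsum("c,chw->hw", weights, fmaps[0])        # weighted sum of feature maps
cam = F.relu(cam)
cam = F.interpolate(cam[None, None], size=(224, 224), mode="bilinear")[0, 0]
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalized class activation map
print(cam.shape)
```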
Value permutation and Feature-object semantics |
The impact of permuting feature values on the predictions is analyzed, and the most significant variables are then translated into natural-language statements using feature-object semantics [34].
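A hedged sketch of this idea (the details of [34] are assumed): features are ranked by the accuracy drop caused by permuting them, and the top-ranked ones are turned into plain-language statements; the sentence template below is purely illustrative.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y, names = data.data, data.target, data.feature_names
model = RandomForestClassifier(random_state=0).fit(X, y)
baseline = model.score(X, y)

rng = np.random.default_rng(0)
impact = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    impact.append(baseline - model.score(Xp, y))

# Translate the most significant variables into simple statements.
for j in np.argsort(impact)[::-1][:3]:
    print(f"The prediction relies strongly on '{names[j]}' "
          f"(accuracy drops by {impact[j]:.3f} when it is permuted).")
```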
Cumulative Fuzzy Class Membership Criterion (CFCMC) |
The CFCMC provides a confidence measure for a test image's classification, accompanied by a presentation of the most similar training images as supporting examples [32].
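A heavily simplified stand-in for this idea (the actual membership criterion of [32] is not reproduced here): class memberships are derived from distances to training samples, a confidence score is taken from the ratio of the two strongest memberships, and the closest training images are returned as supporting evidence.

```python
import numpy as np
from sklearn.datasets import load_digits

X, y = load_digits(return_X_y=True)
X_train, y_train, x_test = X[:-1], y[:-1], X[-1]

dists = np.linalg.norm(X_train - x_test, axis=1)
scaled = dists / dists.mean()
memberships = np.array([np.exp(-scaled[y_train == c]).sum() for c in np.unique(y_train)])
memberships /= memberships.sum()

order = np.argsort(memberships)[::-1]
predicted = order[0]
confidence = memberships[order[0]] / (memberships[order[1]] + 1e-12)
most_similar = np.argsort(dists)[:3]             # indices of the closest training images

print(predicted, confidence, most_similar)
```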