
Table 5.

List of studies that used local interpretable model-agnostic explanations (LIME).

| Authors | Year | Objective | Data set(s) | Data type | Important features | Machine learning (ML)/Deep learning (DL) | Explained model |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Kaplun et al. [90] | 2021 | Extract complex features from cancer cell images and classify malignant versus benign cancer cell images | BreakHis [91] | Microscopic images | Yellow-highlighted segments in the image | DL | ANN (two-layer feed-forward neural network) |
| Saarela et al. [92] | 2021 | Compare different feature-importance measures using linear (LR) and nonlinear (RF) classification ML models | Breast Cancer Wisconsin (Diagnostic) [87] | Text | L1-LR: all features except one (compactness 3); RF: nine significant features | ML | L1-regularized LR, RF |
| Adnan et al. [93] | 2022 | Propose a BC metastasis prediction model that provides personalized interpretations using a very small number of biologically interpretable features | Amsterdam Classification Evaluation Suite (ACES) [94] (1616 patients, of whom 455 are metastatic) | Genomic data | N/A | ML/DL | RF, LR, lSVM, rSVM, ANN |
| Maouche et al. [95] | 2023 | Propose an explainable approach for predicting BC distant metastasis that quantifies the impact of patient and treatment characteristics | Public data set of 716 Moroccan women diagnosed with breast cancer [96] | Clinicopathological data | The characteristics have impacts ranging from high to moderate to low | ML | Cost-sensitive CatBoost |
| Deshmukh et al. [97] | 2023 | Improve the qk-means clustering algorithm, using LIME to explain its predictions | Breast cancer data set of 600 patient records with 7 features | Text | A tabular explainer identifies positively and negatively correlated features | ML | qk-means (hybrid classical-quantum clustering approach) |
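
The studies tabulated above share the same basic LIME workflow: train a classifier, then fit a local linear surrogate around a single prediction to obtain signed feature weights. The following is a minimal sketch of that workflow, assuming the `lime` Python package and scikit-learn; it pairs the Breast Cancer Wisconsin (Diagnostic) data set [87] with an RF classifier in the spirit of Saarela et al. [92], though the specific estimator settings here are illustrative rather than those used in any of the cited studies.

```python
# Illustrative LIME tabular workflow; package and parameter choices are
# assumptions, not taken from the studies in Table 5.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0)

# Black-box model to be explained (an RF, as in Saarela et al. [92]).
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_train, y_train)

# LIME perturbs one instance and fits a local linear surrogate, so the
# explainer needs the training data to estimate feature statistics.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Explain a single test instance: top features with signed local weights,
# i.e., the positively and negatively correlated features reported by a
# tabular explainer (cf. Deshmukh et al. [97]).
explanation = explainer.explain_instance(
    X_test[0], rf.predict_proba, num_features=5)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```

Because LIME is model-agnostic, swapping the RF for any of the other explained models in the table (LR, SVM variants, ANN, or CatBoost) only requires passing that model's probability function as `predict_fn`.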