
Table 1.

Evaluation of the accuracy (A), interpretability (I), and explainability (E) of ML approaches in radiation outcome prediction

| Basic ML | Type | A | I | E | Improved ML | Type | A | I | E |
|---|---|---|---|---|---|---|---|---|---|
| Logistic regression^20,21 | IP | * | **** | *** | GA2M^68 | IP | ** | *** | ** |
| | | | | | Ridge regression^22 | IP | ** | ** | * |
| | | | | | LASSO^23 | IP | ** | *** | ** |
| | | | | | Elastic Net^9,24 | IP | *** | ** | * |
| Decision tree^24,30,31 | IP | ** | ***** | ***** | CART^32 | IP | *** | **** | ***** |
| | | | | | Random Forests^7 | NIP | **** | * | NA |
| | | | | | GBM^9,33 | NIP | **** | * | NA |
| | | | | | MediBoost^9,34 | IP | **** | ** | * |
| Naïve BN^35,37 | IP | * | **** | **** | HBN^38,40 | IP | ** | *** | ** |
| | | | | | HBN-EK^41 | IP | ** | **** | *** |
| Linear SVM^24 | NIP | ** | ** | * | SVM-RBF^43 | NIP | *** | * | NA |
| | | | | | SVM-LRBF^44 | NIP | *** | ** | * |
| Deep learning^49,50 | NIP | **** | * | NA | DL-HLV^48,55,56 | NIP | ***** | ** | NA |
| | | | | | DL-SA^52,57 / AM^59,60 | NIP | ***** | ** | NA |
| | | | | | DL-DHLR^61–63 | NIP | ***** | *** | NA |
| | | | | | DL-LIME^69 | NIP | ***** | *** | NA |

BN, Bayesian network; CART, classification and regression tree; DHLR, disentangled hidden layer representation; DL-AM, deep learning with attention mechanisms; DL-HLV, deep learning with a combination of handcrafted features and latent variables; GBM, gradient boosting machine; HBN, hierarchical Bayesian network; HBN-EK, hierarchical Bayesian network with expert knowledge; HLV, handcrafted features and latent variables; IP, interpretable; LASSO, least absolute shrinkage and selection operator; LIME, local interpretable model-agnostic explanation; ML, machine learning; NIP, non-interpretable; SVM, support vector machine.
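
As a rough illustration of the accuracy–interpretability trade-off summarised in Table 1, the sketch below fits an interpretable LASSO-penalised logistic regression and a non-interpretable gradient boosting machine (GBM) on a synthetic binary endpoint. It assumes scikit-learn; the dataset, feature names, and hyperparameters are invented for illustration and are not taken from any of the cited studies.

```python
# Hedged sketch of the A/I trade-off in Table 1: an interpretable (IP)
# sparse logistic model vs. a non-interpretable (NIP) boosted-tree model.
# All data here are synthetic and purely illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

# Toy stand-in for a binary radiation outcome endpoint (e.g. toxicity yes/no).
X, y = make_classification(n_samples=500, n_features=10, n_informative=4,
                           random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

# Interpretable (IP): L1-penalised logistic regression (LASSO).
# Sparse coefficients can be read off directly, which is why logistic
# regression / LASSO receive high interpretability ratings in Table 1.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
lasso.fit(X_train, y_train)

# Non-interpretable (NIP): gradient boosting machine (GBM).
# Often more accurate, but the tree ensemble has no compact human-readable
# form, hence its low I and NA E ratings in Table 1.
gbm = GradientBoostingClassifier(random_state=0)
gbm.fit(X_train, y_train)

for name, model in [("LASSO logistic regression", lasso), ("GBM", gbm)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: test AUC = {auc:.3f}")

# Interpretation artefacts: explicit coefficients vs. opaque importances.
print("LASSO coefficients:",
      dict(zip(feature_names, np.round(lasso.coef_[0], 2))))
print("GBM feature importances:",
      dict(zip(feature_names, np.round(gbm.feature_importances_, 2))))
```

The "Improved ML" entries in the right-hand columns (e.g. GA2M, MediBoost, DL-LIME) are aimed at recovering interpretability or explainability from exactly this kind of black-box model while retaining most of its accuracy.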