Author manuscript; available in PMC: 2018 Jan 1.
Published in final edited form as: Int J Med Inform. 2016 Oct 1;97:120–127. doi: 10.1016/j.ijmedinf.2016.09.014

Table 2.

Comparison of different classifiers and the expert algorithm (baseline), measured by their average performance (and standard deviation) in cross-validation.

| Classifier | Feature set | Accuracy | Sensitivity | Specificity | Precision | AUC |
|---|---|---|---|---|---|---|
| Expert algorithm | – | 0.84 | 0.78 | 1.00 | 1.00 | 0.71 |
| LR | #107 | 0.86 (0.06) | 0.90 (0.09) | 0.84 (0.10) | 0.70 (0.11) | 0.88 (0.07) |
| LR | #33 | 0.91 (0.04) | 0.98 (0.03) | 0.88 (0.06) | 0.77 (0.07) | 0.92 (0.03) |
| LR | #5 | 0.99 (0.01) | 1.00 (0) | 0.98 (0.01) | 0.95 (0.03) | 0.99 (0.01) |
| NB | #107 | 0.94 (0.05) | 0.98 (0.03) | 0.93 (0.07) | 0.85 (0.11) | 0.98 (0.02) |
| NB | #33 | 0.91 (0.07) | 1.00 (0) | 0.88 (0.10) | 0.79 (0.15) | 1.00 (0) |
| NB | #5 | 0.96 (0.03) | 1.00 (0) | 0.94 (0.05) | 0.87 (0.09) | 1.00 (0) |
| RF | #107 | 0.98 (0.01) | 1.00 (0) | 0.97 (0.02) | 0.94 (0.05) | 1.00 (0) |
| RF | #33 | 0.98 (0.01) | 1.00 (0) | 0.97 (0.02) | 0.94 (0.05) | 1.00 (0) |
| RF | #5 | 0.98 (0) | 0.98 (0.03) | 0.98 (0.01) | 0.95 (0.03) | 1.00 (0) |
| kNN | #107 | 0.83 (0.06) | 0.87 (0.05) | 0.81 (0.08) | 0.65 (0.09) | 0.91 (0.01) |
| kNN | #33 | 0.94 (0.05) | 0.98 (0.03) | 0.92 (0.08) | 0.84 (0.12) | 0.98 (0.02) |
| kNN | #5 | 0.97 (0.03) | 1.00 (0) | 0.96 (0.04) | 0.90 (0.08) | 0.99 (0.01) |
| SVM | #107 | 0.96 (0.04) | 0.95 (0.03) | 0.96 (0.04) | 0.91 (0.10) | 0.96 (0.03) |
| SVM | #33 | 0.97 (0.02) | 0.97 (0.04) | 0.97 (0.02) | 0.93 (0.06) | 0.97 (0.02) |
| SVM | #5 | 0.98 (0.01) | 0.95 (0.03) | 0.99 (0.01) | 0.98 (0.03) | 0.97 (0.02) |
| J48 | #107 | 0.98 (0.02) | 1.00 (0) | 0.97 (0.02) | 0.93 (0.05) | 0.98 (0.01) |
| J48 | #33 | 0.97 (0.02) | 0.97 (0.04) | 0.97 (0.02) | 0.94 (0.05) | 0.99 (0.01) |
| J48 | #5 | 0.97 (0.03) | 0.95 (0.03) | 0.97 (0.03) | 0.94 (0.07) | 0.98 (0.03) |

Bold values indicate the best models in terms of accuracy, sensitivity, specificity, precision, and AUC; significance values are not applicable to them.
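As a rough illustration of how metrics like those in Table 2 are obtained, the sketch below cross-validates the same classifier families (LR, NB, RF, kNN, SVM, and a decision tree standing in for J48) and reports per-fold means and standard deviations of accuracy, sensitivity, specificity, precision, and AUC. The paper's dataset and its #107/#33/#5 feature sets are not available here, so synthetic data is used; the helper names are illustrative, not the authors' code.

```python
# Sketch: cross-validated classifier comparison with the five metrics from
# Table 2, on synthetic data (NOT the study's dataset or feature sets).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier  # C4.5-like stand-in for J48
from sklearn.metrics import confusion_matrix, roc_auc_score

def evaluate(clf, X, y, n_splits=10):
    """Return (mean, std) over folds for accuracy, sensitivity,
    specificity, precision, and AUC."""
    rows = []
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    for train, test in cv.split(X, y):
        clf.fit(X[train], y[train])
        pred = clf.predict(X[test])
        tn, fp, fn, tp = confusion_matrix(y[test], pred).ravel()
        # AUC needs a continuous score, not hard labels.
        if hasattr(clf, "predict_proba"):
            score = clf.predict_proba(X[test])[:, 1]
        else:
            score = clf.decision_function(X[test])
        rows.append([
            (tp + tn) / (tp + tn + fp + fn),      # accuracy
            tp / (tp + fn),                        # sensitivity (recall)
            tn / (tn + fp),                        # specificity
            tp / (tp + fp) if tp + fp else 0.0,    # precision
            roc_auc_score(y[test], score),         # AUC
        ])
    rows = np.array(rows)
    return rows.mean(axis=0), rows.std(axis=0)

# Synthetic binary-classification data; 33 features echoes one feature-set
# size from the table, but the data itself is unrelated to the study.
X, y = make_classification(n_samples=300, n_features=33, random_state=0)
classifiers = {
    "LR": LogisticRegression(max_iter=1000),
    "NB": GaussianNB(),
    "RF": RandomForestClassifier(random_state=0),
    "kNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "J48-like": DecisionTreeClassifier(random_state=0),
}
for name, clf in classifiers.items():
    mean, std = evaluate(clf, X, y)
    print(name, np.round(mean, 2), np.round(std, 2))
```

Each printed row mirrors one line of the table: a mean and a standard deviation per metric across cross-validation folds.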