BMC Bioinformatics. 2015 Apr 19;16:123. doi: 10.1186/s12859-015-0554-8

Table 8.

Performance of different prediction tools on the three filtered test sets

| Data Set | Tool | AUC | Balanced Accuracy | Sens | Spec | PPV | NPV | F-m | MCC |
|---|---|---|---|---|---|---|---|---|---|
| # 1 | PaPI | **.9218** | **.8575** | **.8518** | .8631 | .8989 | **.803** | **.8747** | **.7084** |
| # 1 | Carol | .9120 | .8492 | .821 | **.8774** | **.9054** | .7742 | .8611 | .689 |
| # 1 | Provean | .8938 | .8264 | .7894 | .8634 | .892 | .7415 | .8375 | .643 |
| # 1 | SIFT | .883 | .8142 | .7633 | .8651 | .8899 | .7189 | .8218 | .6185 |
| # 1 | PolyPhen2 | .9144 | .8425 | .8503 | .8348 | .8803 | .796 | .865 | .6806 |
| # 1 | FATHMM | .8301 | .7517 | .6267 | .8766 | .8789 | .6217 | .7317 | .502 |
| # 1 | LRT | .8455 | .8249 | .8009 | .8488 | .8833 | .749 | .8401 | .6409 |
| # 1 | MutAssessor | .8899 | .812 | .7578 | .8662 | .89 | .7144 | .8186 | .6141 |
| # 2 | PaPI | **.9246** | **.863** | .8623 | .8637 | .8989 | **.8169** | **.8802** | **.7209** |
| # 2 | Carol | .9121 | .8442 | .811 | **.8774** | **.9029** | .7675 | .8545 | .6794 |
| # 2 | Provean | .8984 | .8354 | .8074 | .8634 | .8926 | .7613 | .8479 | .6623 |
| # 2 | SIFT | .8836 | .8091 | .7532 | .8651 | .887 | .7137 | .8146 | .6094 |
| # 2 | PolyPhen2 | .9183 | .8491 | **.8635** | .8348 | .8802 | .813 | .8717 | .6957 |
| # 2 | FATHMM | .8355 | .7603 | .6441 | .8766 | .8801 | .6366 | .7438 | .5187 |
| # 2 | LRT | .8506 | .8317 | .8147 | .8488 | .8834 | .7651 | .8477 | .656 |
| # 2 | MutAssessor | .8923 | .8134 | .7606 | .8662 | .8888 | .7202 | .8197 | .6178 |
| # 3 | PaPI | **.9332** | **.8721** | **.8751** | .8692 | .9046 | **.8308** | **.8896** | **.7398** |
| # 3 | Carol | .9239 | .8551 | .8187 | **.8915** | **.9145** | .7763 | .8639 | .7004 |
| # 3 | Provean | .9159 | .8444 | .8156 | .8731 | .9011 | .7697 | .8562 | .6797 |
| # 3 | SIFT | .8911 | .8166 | .759 | .8743 | .8953 | .7191 | .8215 | .6238 |
| # 3 | PolyPhen2 | .9303 | .8542 | .8729 | .8355 | .8826 | .8226 | .8777 | .7068 |
| # 3 | FATHMM | .8436 | .7643 | .6410 | .8876 | .8899 | .6356 | .7452 | .527 |
| # 3 | LRT | .8682 | .8408 | .8289 | .8527 | .8886 | .7786 | .8577 | .6744 |
| # 3 | MutAssessor | .8988 | .8273 | .7772 | .8774 | .8998 | .7354 | .8341 | .6449 |

Comparison of PaPI, PolyPhen2, SIFT, Carol, PROVEAN, FATHMM, LRT and MutationAssessor on the three test sets after filtering out variants that the other prediction tools could not score. Area under the curve (AUC), balanced accuracy ((sensitivity + specificity)/2), sensitivity (Sens), specificity (Spec), positive predictive value (PPV), negative predictive value (NPV), F-measure (F-m) and Matthews correlation coefficient (MCC) are reported for each method. The highest value in each column, for each test set, is marked in bold.
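
For readers who want to see how the derived metrics relate to one another, the sketch below (not part of the original paper, and using hypothetical confusion-matrix counts) shows how balanced accuracy, sensitivity, specificity, PPV, NPV, F-measure and MCC follow from TP/FP/TN/FN; AUC additionally requires each tool's continuous scores and is therefore not computable from counts alone.

```python
# Minimal sketch of the Table 8 metrics, assuming a binary benign/deleterious
# classification with confusion-matrix counts tp, fp, tn, fn.
# The counts used in the example call are hypothetical, not the paper's data.
from math import sqrt


def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute the count-based metrics reported in Table 8."""
    sens = tp / (tp + fn)                      # sensitivity (recall)
    spec = tn / (tn + fp)                      # specificity
    ppv = tp / (tp + fp)                       # positive predictive value (precision)
    npv = tn / (tn + fn)                       # negative predictive value
    balanced_accuracy = (sens + spec) / 2      # as defined in the table caption
    f_measure = 2 * ppv * sens / (ppv + sens)  # harmonic mean of PPV and sensitivity
    mcc = (tp * tn - fp * fn) / sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    )                                          # Matthews correlation coefficient
    # Note: AUC is not included because it needs ranked prediction scores,
    # not just the four confusion-matrix counts.
    return {
        "Balanced Accuracy": balanced_accuracy,
        "Sens": sens,
        "Spec": spec,
        "PPV": ppv,
        "NPV": npv,
        "F-m": f_measure,
        "MCC": mcc,
    }


# Example with arbitrary, illustrative counts:
print(classification_metrics(tp=850, fp=95, tn=780, fn=150))
```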