Hum Mutat. 2012 Oct 3;34(1):57–65. doi: 10.1002/humu.22225

Table 2.

Performance of Computational Prediction Methods using the VariBench Benchmarking Dataset

Method  tp  fp  tn  fn  Accuracy (a)  Precision (a)  Specificity (a)  Sensitivity (a)  NPV (a)  MCC (a)
Theoretical/unweighted computational prediction methods
SIFT 10,464 4,856 12,188 7,433 0.65 0.64 0.62 0.68 0.66 0.30
PolyPhen 1 (b) 10,093 9,185 17,669 3,199 0.69 0.77 0.85 0.52 0.64 0.39
PolyPhen 1 (c) 14,285 4,993 13,671 7,197 0.70 0.68 0.66 0.74 0.72 0.40
PANTHER 9,689 2,859 8,676 2,797 0.76 0.76 0.76 0.77 0.77 0.53
FATHMM (unweighted) 11,561 4,839 16,257 7,707 0.69 0.72 0.77 0.60 0.66 0.38
Trained/weighted computational prediction methods
PolyPhen 2 (b) 13,807 5,102 13,863 6,010 0.71 0.71 0.70 0.73 0.72 0.43
PolyPhen 2 (c) 16,206 2,703 10,199 9,674 0.69 0.64 0.51 0.86 0.78 0.39
PhD-SNP 11,900 6,896 16,788 4,377 0.71 0.75 0.79 0.63 0.68 0.43
SNPs&GO 13,736 5,487 17,028 1,382 0.82 0.90 0.92 0.71 0.76 0.65
nsSNPAnalyzer 4,360 2,778 1,319 943 0.60 0.59 0.58 0.61 0.60 0.19
SNAP 16,000 2,146 8,190 6,387 0.72 0.67 0.56 0.88 0.83 0.47
MutPred 13,829 2,507 15,891 4,557 0.81 0.79 0.78 0.85 0.84 0.63
FATHMM (weighted) 14,231 1,633 10,146 2,336 0.86 0.86 0.86 0.86 0.86 0.72

tp, fp, tn, fn refer to the number of true positives, false positives, true negatives, and false negatives, respectively.

(a) Accuracy, Precision, Specificity, Sensitivity, NPV, and MCC are calculated from normalized numbers.

(b) “Probably Pathogenic” predictions classed as disease causing.

(c) “Probably Pathogenic” predictions classed as functionally neutral.

The performance figures for the alternative computational prediction algorithms are reproduced with permission from Thusberg et al. (2011). Copyright 2012, Wiley.
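
For readers who want to recompute the summary statistics, the sketch below shows the standard confusion-matrix definitions of accuracy, precision, specificity, sensitivity, NPV, and MCC. It is not the authors' code: note (a) states that the published values were calculated from normalized numbers, and because the normalization procedure is not given in this table, applying these formulas to the raw counts will not reproduce the tabulated values exactly. The confusion_metrics function and the FATHMM (weighted) example call are illustrative only.

```python
# Minimal sketch (not from the paper): standard confusion-matrix definitions
# of the metrics reported in Table 2. Note (a) says the published values are
# calculated from normalized numbers; that normalization is not specified in
# this table, so raw-count results will differ slightly from the table.
import math

def confusion_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute standard binary-classification metrics from raw counts."""
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    precision   = tp / (tp + fp)   # positive predictive value
    specificity = tn / (tn + fp)   # true negative rate
    sensitivity = tp / (tp + fn)   # true positive rate (recall)
    npv         = tn / (tn + fn)   # negative predictive value
    # Matthews correlation coefficient
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    )
    return {
        "accuracy": round(accuracy, 2),
        "precision": round(precision, 2),
        "specificity": round(specificity, 2),
        "sensitivity": round(sensitivity, 2),
        "NPV": round(npv, 2),
        "MCC": round(mcc, 2),
    }

# Illustration only: raw counts for the FATHMM (weighted) row above.
print(confusion_metrics(tp=14_231, fp=1_633, tn=10_146, fn=2_336))
```

For the FATHMM (weighted) row, these raw-count formulas give values roughly comparable to, though not identical with, the published figures, which is expected given the normalization applied in the original analysis.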