Table 3: Screening performance of similarity indices
| Classifier Feature | TP | FP | TN | FN | Acc. | Sen. | Spe. | AUROC (95% CI) |
|---|---|---|---|---|---|---|---|---|
| SI_m3 | 20 | 17 | 43 | 21 | 0.624 | 0.488 | 0.717 | 0.73 (0.62, 0.82) |
| SI_m4 | 28 | 15 | 45 | 13 | 0.723 | 0.683 | 0.750 | 0.81 (0.71, 0.88) |
| SI_m5 | 28 | 15 | 45 | 13 | 0.723 | 0.683 | 0.750 | 0.78 (0.69, 0.86) |
| SI_m6 | 29 | 13 | 47 | 12 | 0.752 | 0.707 | 0.783 | 0.78 (0.68, 0.86) |
| SI_m7 | 31 | 10 | 50 | 10 | 0.802 | 0.756 | 0.833 | 0.79 (0.69, 0.87) |
| SI_m8 | 27 | 13 | 47 | 14 | 0.733 | 0.659 | 0.783 | 0.78 (0.66, 0.87) |
| SI_m9 | 25 | 13 | 47 | 16 | 0.713 | 0.610 | 0.783 | 0.77 (0.64, 0.85) |
| SI_m10 | 23 | 14 | 46 | 18 | 0.683 | 0.561 | 0.767 | 0.76 (0.66, 0.85) |
SI_mi, i = 3, 4, …, 10: similarity index of the i-bit word; TP: true positive; FP: false positive; TN: true negative; FN: false negative; Acc.: accuracy; Sen.: sensitivity; Spe.: specificity; AUROC: area under the ROC curve; CI: confidence interval.
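For reference, the accuracy, sensitivity, and specificity columns follow directly from the confusion-matrix counts. The sketch below (illustrative only, not part of the original analysis; the function name is arbitrary) reproduces the SI_m7 row from its TP, FP, TN, and FN values.

```python
def screening_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute accuracy, sensitivity, and specificity from confusion-matrix counts."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,   # fraction of all cases classified correctly
        "sensitivity": tp / (tp + fn),   # true-positive rate
        "specificity": tn / (tn + fp),   # true-negative rate
    }

# Example: the SI_m7 row (TP=31, FP=10, TN=50, FN=10)
print(screening_metrics(31, 10, 50, 10))
# -> {'accuracy': 0.8019..., 'sensitivity': 0.7560..., 'specificity': 0.8333...}
# Rounded, these match the tabulated 0.802, 0.756, and 0.833.
```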