Table 3. Performance of existing methods on the independent dataset.
| Method | Sensitivity (%) | Specificity (%) | Accuracy (%) | AUROC | MCC |
|---|---|---|---|---|---|
| Sigma70Pred | **91.45** | **88.56** | **90.41** | **0.953** | **0.794** |
| iPro70-FMWin | 84.12 | 86.67 | 85.04 | 0.921 | 0.693 |
| iProEP | 84.50 | 53.83 | 69.30 | 0.541 | 0.404 |
| MULTiPly* | 90.43 | 76.93 | 84.91 | – | 0.685 |
| iPromoter-2L | 86.21 | 72.81 | 79.56 | – | 0.601 |
| iPromoter-2L2.0 | 88.72 | 77.91 | 83.36 | – | 0.674 |
| iPromoter-FSEn | 68.76 | 68.16 | 68.46 | 0.751 | 0.369 |
| iPromoter-BnCNN | 80.64 | 72.70 | 76.71 | – | 0.543 |
| pcPromoter-CNN | 81.44 | 61.07 | 71.35 | – | 0.445 |
| Promoter-LCNN | 88.77 | 70.15 | 79.54 | – | 0.604 |
*Reported by the authors in the original manuscript. Values in bold indicate the best-performing classifier or method.
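
For reference, the threshold-dependent metrics reported above follow their standard confusion-matrix definitions (a conventional formulation, not taken from the original manuscript), where TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively:

$$
\mathrm{Sensitivity} = \frac{TP}{TP + FN}, \qquad
\mathrm{Specificity} = \frac{TN}{TN + FP}, \qquad
\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}
$$

$$
\mathrm{MCC} = \frac{TP \times TN - FP \times FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}}
$$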