
Table 2.

Performance of various classifiers on the benchmark dataset.

Dataset     Methods     MCC    Accuracy  Sensitivity  Specificity  AUC    P-value
AntiTb_MD   AtbPpred    0.700  0.849     0.819        0.879        0.909  –
AntiTb_MD   Antitbpred  0.550  0.775     0.768        0.773        0.820  0.000656
AntiTb_RD   AtbPpred    0.834  0.917     0.905        0.930        0.942  –
AntiTb_RD   Antitbpred  0.640  0.817     0.787        0.846        0.870  0.001013

The first and second columns give the dataset and the classifier evaluated in this study; the third through seventh columns give the MCC, accuracy, sensitivity, specificity, and AUC, respectively. For comparison, the Antitbpred metrics reported in the literature [7] are included. The last column gives the P-value from a pairwise, two-tailed t-test comparing the areas under the ROC curve (AUC) of AtbPpred and Antitbpred; P < 0.01 indicates a statistically significant difference between the two methods.
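The sketch below is a minimal Python example, not the authors' code, illustrating how the tabulated metrics (MCC, accuracy, sensitivity, specificity) follow from confusion-matrix counts and how a two-tailed paired t-test on fold-wise AUCs could yield the final column; the confusion-matrix counts and the per-fold AUC lists are hypothetical placeholders, since only the aggregate values are reported in the table.

```python
from math import sqrt

from scipy.stats import ttest_rel


def binary_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                 # true-positive rate
    specificity = tn / (tn + fp)                 # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    mcc = (tp * tn - fp * fn) / sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    )
    return {"MCC": mcc, "Accuracy": accuracy,
            "Sensitivity": sensitivity, "Specificity": specificity}


# Hypothetical per-fold AUCs for the two predictors on the same CV splits;
# the real fold-level values are not given in Table 2.
aucs_atbppred   = [0.91, 0.90, 0.92, 0.89, 0.93]
aucs_antitbpred = [0.82, 0.81, 0.83, 0.80, 0.84]

# ttest_rel performs a two-tailed paired t-test by default;
# P < 0.01 would be read as a statistically significant AUC difference.
t_stat, p_value = ttest_rel(aucs_atbppred, aucs_antitbpred)

print(binary_metrics(tp=82, fp=12, tn=88, fn=18))   # illustrative counts only
print(f"t = {t_stat:.3f}, P = {p_value:.6f}")
```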