
Table 3. Comparison of GTB with other typical classifiers on primary ontology features

Method         Precision  Recall  F-Measure  MCC     AUC
GTB            0.526      0.530   0.523      0.052   0.528
kNN            0.514      0.514   0.513      0.028   0.516
SVM            0.509      0.491   0.478      -0.019  0.491
Logistic       0.506      0.506   0.506      0.012   0.504
Naive Bayes    0.479      0.479   0.478      -0.043  0.460
Random forest  0.499      0.499   0.478      -0.002  0.499
AdaBoost       0.501      0.501   0.425      0.002   0.497
LogitBoost     0.499      0.499   0.479      -0.002  0.495

Boldface figures indicate the best performance: GTB outperforms the other seven typical classifiers trained on primary ontology features
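The following is a minimal sketch, not the authors' code, of how the metrics reported in Table 3 (precision, recall, F-measure, MCC, and AUC) could be computed for GTB and the other classifiers using scikit-learn. The synthetic data from make_classification is a placeholder for the primary ontology features, which are not included here; the specific hyperparameters and the weighted averaging are assumptions, and LogitBoost is omitted because scikit-learn has no built-in implementation of it.

```python
# Hypothetical evaluation sketch; X, y stand in for the primary ontology features.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_predict
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              AdaBoostClassifier)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import (precision_score, recall_score, f1_score,
                             matthews_corrcoef, roc_auc_score)

# Placeholder data; replace with the real feature matrix and class labels.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

classifiers = {
    "GTB": GradientBoostingClassifier(random_state=0),
    "kNN": KNeighborsClassifier(),
    "SVM": SVC(probability=True, random_state=0),
    "Logistic": LogisticRegression(max_iter=1000),
    "Naive Bayes": GaussianNB(),
    "Random forest": RandomForestClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
}

print(f"{'Method':<14}{'Precision':>10}{'Recall':>8}{'F-Measure':>11}{'MCC':>8}{'AUC':>7}")
for name, clf in classifiers.items():
    # Cross-validated predictions so every sample is scored out of fold.
    y_pred = cross_val_predict(clf, X, y, cv=10)
    y_prob = cross_val_predict(clf, X, y, cv=10, method="predict_proba")[:, 1]
    print(f"{name:<14}"
          f"{precision_score(y, y_pred, average='weighted'):>10.3f}"
          f"{recall_score(y, y_pred, average='weighted'):>8.3f}"
          f"{f1_score(y, y_pred, average='weighted'):>11.3f}"
          f"{matthews_corrcoef(y, y_pred):>8.3f}"
          f"{roc_auc_score(y, y_prob):>7.3f}")
```

Cross-validated out-of-fold predictions are used so that each metric is computed over every sample exactly once; the actual fold count, random seeds, and classifier settings used for Table 3 are not shown here and would need to match the paper's experimental setup.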