BMC Bioinformatics. 2019 Dec 24;20(Suppl 15):547. doi: 10.1186/s12859-019-3117-6

Table 2.

Performance of DR-IBRW and the compared methods on Gottlieb’s dataset [11] and Luo’s dataset [12]

Gottlieb’s dataset

| Method  | AUROC             | AUPR              | Micro-F1          | Macro-F1          | Precision         | Recall            |
|---------|-------------------|-------------------|-------------------|-------------------|-------------------|-------------------|
| DR-IBRW | **0.955 ± 0.000** | **0.499 ± 0.174** | **0.613 ± 0.006** | **0.513 ± 0.005** | **0.332 ± 0.002** | 0.880 ± 0.000     |
| MBiRW   | 0.933 ± 0.000     | 0.213 ± 0.028     | 0.294 ± 0.004     | 0.244 ± 0.003     | 0.256 ± 0.001     | **0.906 ± 0.000** |
| BLM     | 0.865 ± 0.000     | 0.298 ± 0.003     | 0.583 ± 0.001     | 0.479 ± 0.001     | 0.315 ± 0.000     | 0.891 ± 0.000     |
| JI      | 0.845 ± 0.001     | 0.247 ± 0.043     | 0.385 ± 0.003     | 0.462 ± 0.004     | 0.250 ± 0.001     | 0.894 ± 0.181     |
| HGBI    | 0.811 ± 0.000     | 0.016 ± 0.000     | 0.187 ± 0.001     | 0.157 ± 0.001     | 0.101 ± 0.000     | 0.367 ± 0.007     |
| NBI     | 0.503 ± 0.000     | 0.000 ± 0.000     | 0.022 ± 0.000     | 0.018 ± 0.000     | 0.012 ± 0.000     | 0.039 ± 0.001     |

Luo’s dataset

| Method  | AUROC             | AUPR              | Micro-F1          | Macro-F1          | Precision         | Recall            |
|---------|-------------------|-------------------|-------------------|-------------------|-------------------|-------------------|
| DR-IBRW | **0.964 ± 0.000** | **0.529 ± 0.167** | **0.537 ± 0.006** | 0.452 ± 0.004     | **0.294 ± 0.002** | **0.895 ± 0.002** |
| MBiRW   | 0.945 ± 0.000     | 0.285 ± 0.042     | 0.431 ± 0.004     | 0.363 ± 0.003     | 0.236 ± 0.001     | 0.835 ± 0.013     |
| BLM     | 0.892 ± 0.000     | 0.424 ± 0.017     | 0.527 ± 0.003     | **0.463 ± 0.004** | 0.278 ± 0.001     | 0.843 ± 0.000     |
| JI      | 0.865 ± 0.000     | 0.287 ± 0.041     | 0.537 ± 0.004     | 0.447 ± 0.003     | 0.294 ± 0.001     | 0.783 ± 0.000     |
| HGBI    | 0.848 ± 0.000     | 0.037 ± 0.001     | 0.170 ± 0.001     | 0.141 ± 0.001     | 0.093 ± 0.000     | 0.318 ± 0.005     |
| NBI     | 0.479 ± 0.000     | 0.000 ± 0.000     | 0.020 ± 0.000     | 0.016 ± 0.000     | 0.011 ± 0.000     | 0.032 ± 0.000     |

Entries in boldface indicate the best-performing method for each evaluation metric
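For reference, the sketch below is not part of the paper; it only illustrates how metrics of this kind are commonly computed with scikit-learn from per-pair prediction scores and binary ground-truth association labels. The synthetic data, the 0.5 decision threshold, and the library choice are assumptions for illustration, not the authors' evaluation protocol.

```python
# Illustrative sketch (not from the paper): computing AUROC, AUPR, Micro-/Macro-F1,
# precision, and recall with scikit-learn, given a score per drug-disease pair
# and a binary label marking known associations.
import numpy as np
from sklearn.metrics import (
    roc_auc_score, average_precision_score,
    f1_score, precision_score, recall_score,
)

rng = np.random.default_rng(0)

# Hypothetical stand-ins: y_true marks known associations, y_score is the
# ranking score a method would assign to each drug-disease pair.
y_true = rng.integers(0, 2, size=1000)
y_score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.25, size=1000), 0.0, 1.0)
y_pred = (y_score >= 0.5).astype(int)  # assumed decision threshold

print("AUROC    :", roc_auc_score(y_true, y_score))
print("AUPR     :", average_precision_score(y_true, y_score))
print("Micro-F1 :", f1_score(y_true, y_pred, average="micro"))
print("Macro-F1 :", f1_score(y_true, y_pred, average="macro"))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
```

Note that the threshold-free metrics (AUROC, AUPR) take the continuous scores, while the F1, precision, and recall values depend on how scores are binarized into predicted associations.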