Table 3.
DREAM 5 dataset benchmark.
| GRN inference method | Net1 AUROC | Net1 AUPR | Net2ᵇ AUROC | Net2ᵇ AUPR | Net3ᵇ AUROC | Net3ᵇ AUPR | Score |
|---|---|---|---|---|---|---|---|
| KBoost (2020) | **0.88** | 0.43 | 0.57 | 0.04 | 0.51 | **0.02** | **> 300** |
| GRNBoost2 (2019) | 0.82 | 0.33 | **0.63** | **0.10** | **0.52** | **0.02** | 54.30 |
| PLSNET (2016) | 0.85 | 0.24 | 0.57 | 0.06 | 0.51 | **0.02** | 37.03 |
| ENNET (2013) | 0.85 | **0.44** | 0.61 | 0.05 | 0.51 | **0.02** | **> 300** |
| TIGRESS (2012) | 0.75 | 0.29 | 0.58 | 0.06 | 0.51 | **0.02** | 22.63 |
| GENIE3 (2010)ᵃ | 0.82 | 0.29 | 0.62 | 0.09 | **0.52** | **0.02** | 40.74 |
ᵃ Winner of the DREAM 5 challenge, which included 351 algorithms.
ᵇ In the original DREAM 5 dataset, Nets 2 and 3 are labeled Nets 3 and 4, respectively.
The best performance in each column is shown in bold; ties are all marked.
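The AUROC and AUPR columns above score each method's ranked list of predicted regulatory edges against the gold-standard network: AUROC is the probability that a true edge is ranked above a non-edge, and AUPR rewards placing true edges near the top of the list. A minimal sketch of both metrics (function and variable names are illustrative, not from the benchmark code):

```python
def auroc(y_true, y_score):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen true edge is scored above a randomly chosen non-edge,
    counting ties as half a win."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))


def aupr(y_true, y_score):
    """Area under the precision-recall curve, computed as average
    precision over the ranked prediction list: at each true edge
    encountered while walking down the ranking, record the precision
    so far, then average over all true edges."""
    ranked = sorted(zip(y_score, y_true), reverse=True)
    tp, ap = 0, 0.0
    for rank, (_, y) in enumerate(ranked, start=1):
        if y == 1:
            tp += 1
            ap += tp / rank
    return ap / sum(y_true)
```

For example, with true edge labels `[1, 0, 1, 0]` and prediction scores `[0.9, 0.8, 0.7, 0.1]`, three of the four positive/negative pairs are ordered correctly, giving an AUROC of 0.75; the near-chance Net3 AUROC values (~0.51) in the table correspond to rankings barely better than random.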