Table 3.
GO prediction performance on a time-based data split as in (Kulmanov and Hoehndorf, 2019; You et al., 2018b), compared to literature results collected by DeepGOPlus (Kulmanov and Hoehndorf, 2019)
| | Methods | Fmax MFO | Fmax BPO | Fmax CCO | Smin MFO | Smin BPO | Smin CCO | AUPR MFO | AUPR BPO | AUPR CCO |
|---|---|---|---|---|---|---|---|---|---|---|
| Single | Naive | 0.306 | 0.318 | 0.605 | 12.105 | 38.890 | 9.646 | 0.150 | 0.219 | 0.512 |
| | DiamondScore | 0.548 | 0.439 | 0.621 | 8.736 | 34.060 | 7.997 | 0.362 | 0.240 | 0.363 |
| | DeepGO | 0.449 | 0.398 | 0.667 | 10.722 | 35.085 | 7.861 | 0.409 | 0.328 | 0.696 |
| | DeepGOCNN | 0.409 | 0.383 | 0.663 | 11.296 | 36.451 | 8.642 | 0.350 | 0.316 | 0.688 |
| Ensemble | DeepText2GO | 0.627 | 0.441 | 0.694 | 5.240 | 17.713 | 4.531 | 0.605 | 0.336 | 0.729 |
| | GOLabeler | 0.580 | 0.370 | 0.687 | 5.077 | 15.177 | 5.518 | 0.546 | 0.225 | 0.700 |
| | DeepGOPlus | 0.585 | 0.474 | 0.699 | 8.824 | 33.576 | 7.693 | 0.536 | 0.407 | 0.726 |
| UDSMProt (a) | Fwd; from scratch | 0.418 | 0.303 | 0.655 | 14.906 | 47.208 | 12.929 | 0.304 | 0.284 | 0.612 |
| | Fwd; pretr. | 0.465 | 0.404 | 0.683 | 10.578 | 36.667 | 8.210 | 0.406 | 0.345 | 0.695 |
| | Bwd; pretr. | 0.465 | 0.403 | 0.664 | 10.802 | 36.361 | 8.210 | 0.414 | 0.348 | 0.685 |
| | Fwd+bwd; pretr. | 0.481 | 0.411 | 0.682 | 10.505 | 36.147 | 8.244 | 0.472 | 0.356 | 0.704 |
| | Fwd+bwd; pretr. + DiamondScore | 0.582 | 0.475 | 0.697 | 8.787 | 33.615 | 7.618 | 0.548 | 0.422 | 0.728 |
Note: Best overall results (highest Fmax and AUPR; lowest Smin) are marked in bold face and best single-model results are underlined.
Fwd/bwd, training in forward/backward direction; pretr., using language model pre-training.
(a) Results established in this work.
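For reference, the Fmax values reported above are the protein-centric maximum F-measure used in CAFA-style GO evaluation: precision and recall are averaged over proteins at each decision threshold, and the best F1 across thresholds is reported. The sketch below is an illustrative implementation under that assumption; the function name, array layout and threshold grid are hypothetical and it is not the evaluation code used in the paper.

```python
import numpy as np

def fmax_score(y_true, y_scores, thresholds=np.linspace(0.01, 1.0, 100)):
    """Protein-centric Fmax, CAFA-style (illustrative sketch, not the paper's code).

    y_true:   (n_proteins, n_terms) binary matrix of annotated GO terms
    y_scores: (n_proteins, n_terms) predicted scores in [0, 1]
    """
    best_f = 0.0
    for t in thresholds:
        y_pred = y_scores >= t
        # Precision is averaged only over proteins with at least one predicted term.
        has_pred = y_pred.sum(axis=1) > 0
        if not has_pred.any():
            continue
        tp = (y_pred & (y_true > 0)).sum(axis=1)
        precision = (tp[has_pred] / y_pred[has_pred].sum(axis=1)).mean()
        # Recall is averaged over all proteins in the benchmark.
        recall = (tp / np.maximum(y_true.sum(axis=1), 1)).mean()
        if precision + recall > 0:
            best_f = max(best_f, 2 * precision * recall / (precision + recall))
    return best_f
```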