Table 2. Evaluation of MedusaGraph against Other Approaches in Terms of Classification Accuracy and AUCᵃ
| Method | Accuracy (avg) | Accuracy (min) | Accuracy (max) | AUC (avg) | AUC (min) | AUC (max) |
|---|---|---|---|---|---|---|
| MedusaDock | N/A | N/A | N/A | 0.474 | 0.462 | 0.489 |
| AtomNet | 0.741 | 0.628 | 0.872 | 0.863 | 0.849 | 0.885 |
| MedusaNet | 0.855 | 0.705 | 0.930 | 0.893 | 0.868 | 0.915 |
| AutoDock Vina | N/A | N/A | N/A | 0.615 | 0.592 | 0.636 |
| Graph-DTI | 0.895 | 0.836 | 0.953 | 0.906 | 0.876 | 0.933 |
| Pose selection | 0.914 | 0.855 | 0.954 | 0.892 | 0.866 | 0.923 |
| Pose prediction+selection | 0.958 | 0.940 | 0.981 | 0.960 | 0.943 | 0.985 |
ᵃWe evaluate these approaches on the PDBbind test set. The accuracies for MedusaDock and AutoDock Vina are marked as N/A because the scoring function of MedusaDock and the affinity score of AutoDock Vina cannot distinguish a good pose from a bad one; they can only compare the relative quality of two poses.
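To make this distinction concrete, the minimal sketch below illustrates why AUC can still be reported for a purely relative scoring function while accuracy cannot: AUC needs only a ranking of poses, whereas accuracy requires a hard good/bad cutoff that such a score does not define. The labels, energies, and threshold are hypothetical illustration values, not data from the table.

```python
# Illustrative only: hypothetical pose labels and docking scores.
import numpy as np
from sklearn.metrics import roc_auc_score, accuracy_score

# 1 = "good" pose (e.g., RMSD below a cutoff), 0 = "bad" pose
labels = np.array([1, 0, 1, 1, 0, 0, 1, 0])

# Raw energy-like scores (lower = better). AUC only needs a ranking,
# so negating the energies is enough to compute it directly.
energies = np.array([-9.1, -5.2, -8.7, -7.9, -6.0, -4.8, -8.2, -5.5])
print(f"AUC from raw scores: {roc_auc_score(labels, -energies):.3f}")

# Accuracy, by contrast, needs a hard good/bad decision, i.e. a threshold
# that a relative scoring function does not provide by itself.
threshold = -7.0  # arbitrary, hypothetical cutoff
predictions = (energies < threshold).astype(int)
print(f"Accuracy at an arbitrary cutoff: {accuracy_score(labels, predictions):.3f}")
```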