Table 1. Comparison of results for models trained on BF and FL images and on CP features.
(a) Macro-F1 scores on the test sets for the five data splits.

| Split | BFdmso | FLdmso | BFsite | FLsite | CP |
|---|---|---|---|---|---|
| Split 1 | 0.738 | 0.777 | 0.662 | 0.661 | 0.771 |
| Split 2 | 0.821 | 0.799 | 0.770 | 0.762 | 0.801 |
| Split 3 | 0.724 | 0.793 | 0.654 | 0.677 | 0.718 |
| Split 4 | 0.710 | 0.738 | 0.676 | 0.645 | 0.739 |
| Split 5 | 0.728 | 0.716 | 0.708 | 0.688 | 0.736 |
(b) F1 scores per MoA across all five test sets.

| MoA | BFdmso | FLdmso | BFsite | FLsite | CP |
|---|---|---|---|---|---|
| ATPase-i | 0.605 | 0.701 | 0.650 | 0.683 | 0.779 |
| AuroraK-i | 0.683 | 0.675 | 0.713 | 0.671 | 0.746 |
| HDAC-i | 0.756 | 0.773 | 0.766 | 0.785 | 0.740 |
| HSP-i | 0.738 | 0.730 | 0.756 | 0.682 | 0.676 |
| JAK-i | 0.675 | 0.653 | 0.405 | 0.429 | 0.607 |
| PARP-i | 0.895 | 0.886 | 0.789 | 0.748 | 0.912 |
| Prot.Synth.-i | 0.711 | 0.793 | 0.520 | 0.646 | 0.711 |
| Ret.Rec.Ag | 0.740 | 0.769 | 0.767 | 0.796 | 0.786 |
| Topo.-i | 0.780 | 0.728 | 0.702 | 0.651 | 0.742 |
| Tub.Pol.-i | 0.887 | 0.854 | 0.850 | 0.865 | 0.845 |
| DMSO | 0.790 | 0.866 | 0.836 | 0.691 | 0.809 |
| Macro average | 0.751 | 0.766 | 0.705 | 0.695 | 0.759 |
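
For reference, macro-F1 is the unweighted mean of the per-class F1 scores, so the "Macro average" row in (b) averages the eleven MoA-level scores with equal weight, and each entry in (a) is the macro-F1 over the same classes on that split's test set. The sketch below illustrates how such scores could be computed with scikit-learn; the labels and predictions are hypothetical placeholders, not the data behind Table 1.

```python
# Minimal sketch, assuming scikit-learn and placeholder predictions;
# the class names mirror the MoA rows of Table 1b.
import numpy as np
from sklearn.metrics import f1_score

moa_classes = [
    "ATPase-i", "AuroraK-i", "HDAC-i", "HSP-i", "JAK-i", "PARP-i",
    "Prot.Synth.-i", "Ret.Rec.Ag", "Topo.-i", "Tub.Pol.-i", "DMSO",
]

rng = np.random.default_rng(0)
y_true = rng.integers(0, len(moa_classes), size=500)  # placeholder ground-truth labels
y_pred = rng.integers(0, len(moa_classes), size=500)  # placeholder model predictions

# Per-MoA F1 scores, as reported in Table 1b.
per_class_f1 = f1_score(y_true, y_pred, average=None, labels=range(len(moa_classes)))
for name, score in zip(moa_classes, per_class_f1):
    print(f"{name}: {score:.3f}")

# Macro-F1: unweighted mean of the per-class scores, as in Table 1a
# and the "Macro average" row of Table 1b.
print("Macro-F1:", f1_score(y_true, y_pred, average="macro"))
```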