J Cheminform. 2022 Nov 15;14:81. doi: 10.1186/s13321-022-00659-8

Table 4.

Performance comparison with the state-of-the-art methods on three tasks of Dataset2

Method                   ACC     AUPR    AUC     F1      Precision  Recall
Task1
 MDDI-SCL                0.9516  0.9862  0.9995  0.9321  0.9162     0.9500
 MDF-SA-DDI              0.9291  0.9773  0.9996  0.9117  0.9381     0.8910
 DDIMDL                  0.9229  0.9637  0.9993  0.9105  0.9212     0.9039
 Lee et al.'s methods    0.9370  0.9791  0.9991  0.9181  0.9226     0.9153
 DeepDDI                 0.7211  0.7724  0.9914  0.6854  0.6654     0.7183
 DNN                     0.7908  0.8539  0.9949  0.7649  0.7560     0.8046
 RF                      0.6956  0.7567  0.9892  0.5760  0.6694     0.5426
 KNN                     0.5797  0.5964  0.8998  0.3805  0.4758     0.3347
 LR                      0.5229  0.5288  0.9805  0.2373  0.3128     0.2185
Task2
 MDDI-SCL                0.6595  0.6794  0.9757  0.5578  0.5605     0.5712
 MDF-SA-DDI              0.6664  0.6820  0.9862  0.5919  0.6526     0.5518
 DDIMDL                  0.6720  0.7086  0.9885  0.5817  0.6680     0.5295
 Lee et al.'s methods    0.6917  0.7119  0.9687  0.5934  0.6144     0.5848
 DeepDDI                 0.5883  0.5851  0.9746  0.4709  0.5250     0.4361
 DNN                     0.6687  0.6838  0.9818  0.6164  0.7279     0.5479
Task3
 MDDI-SCL                0.4696  0.4261  0.9315  0.2838  0.3160     0.2773
 MDF-SA-DDI              0.4794  0.4450  0.9686  0.2937  0.3667     0.2659
 DDIMDL                  0.4699  0.4386  0.9685  0.3032  0.3773     0.2729
 Lee et al.'s methods    0.4867  0.4349  0.9093  0.3082  0.3355     0.3066
 DeepDDI                 0.3611  0.2820  0.9264  0.1868  0.2301     0.1711
 DNN                     0.4570  0.4129  0.9565  0.2997  0.4345     0.2508
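For reference, the six column metrics are standard multi-class classification scores over the DDI event classes. The sketch below shows one way to compute them with scikit-learn; it is not the authors' evaluation code, and the function name ddi_metrics, the inputs y_true/y_prob, and the use of macro-averaging are assumptions for illustration (the compared papers may use a different averaging scheme).

```python
# Minimal sketch (assumed, not the authors' code) of the six metrics in Table 4
# for a multi-class DDI event predictor, computed with scikit-learn.
# y_true: shape [n_samples], integer event labels.
# y_prob: shape [n_samples, n_classes], predicted class probabilities.
import numpy as np
from sklearn.preprocessing import label_binarize
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score, average_precision_score)

def ddi_metrics(y_true, y_prob, n_classes):
    y_pred = np.argmax(y_prob, axis=1)                       # hard class predictions
    y_true_bin = label_binarize(y_true, classes=list(range(n_classes)))
    return {
        # macro-averaging is an assumption here; some papers report micro-averaged AUPR/AUC
        "ACC":       accuracy_score(y_true, y_pred),
        "AUPR":      average_precision_score(y_true_bin, y_prob, average="macro"),
        "AUC":       roc_auc_score(y_true_bin, y_prob, average="macro"),
        "F1":        f1_score(y_true, y_pred, average="macro"),
        "Precision": precision_score(y_true, y_pred, average="macro", zero_division=0),
        "Recall":    recall_score(y_true, y_pred, average="macro", zero_division=0),
    }
```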