2023 Apr 4:1–16. Online ahead of print. doi: 10.1007/s10489-023-04458-y

Table 1. Results on classical MIL datasets

| Methods | Musk1 | Musk2 | Fox | Tiger | Elephant |
|---|---|---|---|---|---|
| mi-Net [51] | 0.889 ± 0.039 | 0.858 ± 0.049 | 0.613 ± 0.035 | 0.824 ± 0.034 | 0.858 ± 0.037 |
| MI-Net [51] | 0.887 ± 0.041 | 0.859 ± 0.046 | 0.622 ± 0.038 | 0.830 ± 0.032 | 0.862 ± 0.034 |
| MI-Net with DS [51] | 0.894 ± 0.042 | 0.874 ± 0.043 | 0.630 ± 0.037 | 0.845 ± 0.039 | 0.872 ± 0.032 |
| MI-Net with RC [51] | 0.898 ± 0.043 | 0.873 ± 0.044 | 0.619 ± 0.047 | 0.836 ± 0.037 | 0.873 ± 0.044 |
| Attention [8] | 0.892 ± 0.040 | 0.858 ± 0.048 | 0.615 ± 0.043 | 0.839 ± 0.022 | 0.868 ± 0.022 |
| Gated Attention [8] | 0.900 ± 0.050 | 0.863 ± 0.042 | 0.603 ± 0.029 | 0.845 ± 0.018 | 0.857 ± 0.027 |
| mi-Net Attention [52] | 0.900 ± 0.063 | 0.870 ± 0.048 | 0.630 ± 0.026 | 0.845 ± 0.028 | 0.865 ± 0.024 |
| ELDB [53] | 0.902 ± 0.016 | 0.857 ± 0.039 | **0.648 ± 0.014** | 0.767 ± 0.013 | 0.843 ± 0.012 |
| TGA-MIL (ours) | **0.910 ± 0.033** | **0.881 ± 0.040** | 0.628 ± 0.020 | **0.846 ± 0.015** | **0.875 ± 0.020** |

Experiments were repeated five times; we report the mean classification accuracy (± standard error). The best result for each dataset is highlighted in bold
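The reported figures follow the protocol above: five repetitions, then mean accuracy with the standard error of the mean (sample standard deviation divided by the square root of the number of runs). A minimal sketch of that computation, using hypothetical per-run accuracies (the values below are illustrative, not taken from the table):

```python
import numpy as np

# Hypothetical accuracies from five repetitions of one method on one dataset.
runs = np.array([0.905, 0.915, 0.900, 0.920, 0.910])

mean = runs.mean()
# Standard error of the mean: sample std (ddof=1) / sqrt(n).
sem = runs.std(ddof=1) / np.sqrt(len(runs))

print(f"{mean:.3f} ± {sem:.3f}")
```

Note that a table entry such as 0.910 ± 0.033 is read the same way: the first number is the mean over the five runs, the second the standard error.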
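The Attention and Gated Attention baselines [8] pool a bag's instance embeddings with a learned attention weight per instance; the gated variant combines a tanh branch with a sigmoid gate before projecting to a scalar score. A minimal numpy sketch of that pooling step, with random weights and illustrative dimensions (the sizes and variable names here are assumptions for the example, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

def gated_attention_pool(H, V, U, w):
    """Gated attention MIL pooling (after [8]), sketched in numpy.

    H: (K, d) instance embeddings for one bag.
    V, U: (L, d) projection matrices; w: (L,) scoring vector.
    Returns the bag embedding z (d,) and attention weights a (K,).
    """
    # Gated score per instance: tanh branch times sigmoid gate, then
    # projected to a scalar with w.
    gate = 1.0 / (1.0 + np.exp(-(H @ U.T)))          # sigmoid, (K, L)
    scores = (np.tanh(H @ V.T) * gate) @ w           # (K,)
    a = np.exp(scores - scores.max())
    a /= a.sum()                                     # softmax over instances
    z = a @ H                                        # attention-weighted sum
    return z, a

K, d, L = 6, 8, 4          # instances per bag, embedding dim, attention dim
H = rng.standard_normal((K, d))
V = rng.standard_normal((L, d))
U = rng.standard_normal((L, d))
w = rng.standard_normal(L)
z, a = gated_attention_pool(H, V, U, w)
print(z.shape, round(a.sum(), 6))
```

The bag embedding z would then be fed to a classifier; the attention weights a indicate which instances the model treats as most relevant within the bag.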