Table II.
Average performance (mean ± standard deviation) of eight competing algorithms on three data sets in terms of average AUC (top section), Macro F1 (middle section), and Micro F1 (bottom section). All parameters of the eight methods are tuned via cross-validation, and the reported performance is averaged over five random repetitions.
| Metric | Method | Scene (n=2407, d=294, m=6) | Yeast (n=2417, d=103, m=14) | References (n=7929, d=26397, m=15) |
|---|---|---|---|---|
| Average AUC | MixedNorm | 91.602 ± 0.374 | 79.871 ± 0.438 | 77.526 ± 0.285 |
| | MTRL | 90.821 ± 0.512 | 78.437 ± 0.610 | 75.133 ± 0.410 |
| | MTL(Ω&Σ) | 90.217 ± 1.139 | 78.515 ± 0.393 | 76.249 ± 0.277 |
| | TraceNorm | 90.205 ± 0.374 | 76.877 ± 0.127 | 71.259 ± 0.129 |
| | ASO | 86.258 ± 0.981 | 64.519 ± 0.633 | 75.960 ± 0.104 |
| | OneNorm | 87.846 ± 0.193 | 65.602 ± 0.842 | 75.444 ± 0.074 |
| | IndSVM | 84.056 ± 0.010 | 64.601 ± 0.056 | 73.882 ± 0.244 |
| | RidgeReg | 85.209 ± 0.246 | 65.491 ± 1.160 | 74.781 ± 0.556 |
| Macro F1 | MixedNorm | 60.602 ± 1.383 | 55.624 ± 0.621 | 37.135 ± 0.229 |
| | MTRL | 58.873 ± 0.814 | 53.913 ± 0.785 | 36.492 ± 0.575 |
| | MTL(Ω&Σ) | 59.212 ± 0.671 | 54.854 ± 0.803 | 36.218 ± 0.157 |
| | TraceNorm | 57.692 ± 0.480 | 52.400 ± 0.623 | 35.562 ± 0.278 |
| | ASO | 56.819 ± 0.214 | 45.599 ± 0.081 | 34.462 ± 0.315 |
| | OneNorm | 55.061 ± 0.801 | 42.023 ± 0.120 | 36.579 ± 0.157 |
| | IndSVM | 54.253 ± 0.078 | 38.507 ± 0.576 | 31.207 ± 0.416 |
| | RidgeReg | 53.281 ± 0.949 | 42.315 ± 0.625 | 32.724 ± 0.190 |
| Micro F1 | MixedNorm | 64.392 ± 0.876 | 56.495 ± 0.190 | 59.408 ± 0.344 |
| | MTRL | 63.958 ± 0.324 | 55.127 ± 0.922 | 58.112 ± 0.322 |
| | MTL(Ω&Σ) | 63.219 ± 0.769 | 54.235 ± 0.318 | 58.118 ± 1.246 |
| | TraceNorm | 61.172 ± 0.838 | 54.172 ± 0.879 | 57.497 ± 0.130 |
| | ASO | 59.015 ± 0.124 | 45.952 ± 0.011 | 55.406 ± 0.198 |
| | OneNorm | 59.951 ± 0.072 | 47.558 ± 1.695 | 58.798 ± 0.166 |
| | IndSVM | 57.450 ± 0.322 | 52.094 ± 0.297 | 54.875 ± 0.185 |
| | RidgeReg | 56.012 ± 0.144 | 46.743 ± 0.625 | 53.713 ± 0.213 |
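The Macro F1 and Micro F1 columns differ only in where the averaging happens: Macro F1 averages the per-label F1 scores (treating all labels equally), while Micro F1 pools the true-positive/false-positive/false-negative counts across labels before computing a single F1 (weighting frequent labels more). A minimal sketch of both, on hypothetical toy multi-label data rather than the paper's datasets:

```python
# Toy illustration of the Macro F1 vs. Micro F1 metrics reported in the
# table above. The data below is invented for illustration only.

def f1(tp, fp, fn):
    """F1 = 2*TP / (2*TP + FP + FN); defined as 0 when the denominator is 0."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def macro_micro_f1(y_true, y_pred):
    """Compute (Macro F1, Micro F1) for binary multi-label indicator rows."""
    n_labels = len(y_true[0])
    counts = []
    for j in range(n_labels):
        tp = sum(t[j] and p[j] for t, p in zip(y_true, y_pred))
        fp = sum((not t[j]) and p[j] for t, p in zip(y_true, y_pred))
        fn = sum(t[j] and (not p[j]) for t, p in zip(y_true, y_pred))
        counts.append((tp, fp, fn))
    # Macro: mean of per-label F1 scores.
    macro = sum(f1(*c) for c in counts) / n_labels
    # Micro: F1 over counts pooled across all labels.
    micro = f1(*(sum(col) for col in zip(*counts)))
    return macro, micro

# Rows = samples, columns = labels (toy example with 3 samples, 3 labels).
y_true = [(1, 0, 1), (0, 1, 0), (1, 1, 0)]
y_pred = [(1, 0, 0), (0, 1, 1), (1, 0, 0)]
macro, micro = macro_micro_f1(y_true, y_pred)  # macro = 5/9, micro = 2/3
```

Because Micro F1 pools counts, a method that does well on the common labels can post a high Micro F1 while its Macro F1 lags, which is why the two sections of the table can rank the methods differently.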