Table 2.
System | Precision | Recall | F1 score | AUC iP/R |
---|---|---|---|---|
m-LR | 41.36 | 53.81 | 46.37 | 22.85 |
m-SVM | 72.12 | 51.31 | 59.96 | 39.05 |
b-SVM (*) | 68.35 | 61.05 | 64.49 | 42.02 |
union(m-SVM,b-SVM) (*) | 65.62 | 63.11 | 64.33 | 44.98 |
union(m-LR,b-SVM) | 42.33 | 64.36 | 50.73 | 27.76 |
union(m-LR,m-SVM) | 41.39 | 54.01 | 46.46 | 22.94 |
intersect(m-LR,b-SVM) (*) | 75.24 | 54.96 | 63.52 | 44.02 |
intersect(m-LR,m-SVM,b-SVM) (*) | 78.22 | 50.17 | 61.13 | 40.92 |
Macro-averaged results (%) on the IMT development dataset, with the 10 best models selected by cross-validation on the training data. m-LR – multi-label Logistic Regression; m-SVM – multi-label Support Vector Machines; b-SVM – binary Support Vector Machines; AUC iP/R – area under the interpolated precision/recall curve. Asterisks (*) denote systems that were submitted to the challenge.
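The union(...) and intersect(...) rows combine per-instance label sets from the underlying systems: a union keeps a label if any system predicts it (trading precision for recall), while an intersection keeps it only if all systems agree (trading recall for precision), which matches the pattern in the table. A minimal sketch, assuming set-valued predictions per instance (all function and label names here are hypothetical, not from the paper):

```python
def combine(predictions, mode="union"):
    """Combine per-instance label sets from several systems.

    union: keep a label if ANY system predicts it (raises recall);
    intersect: keep a label only if ALL systems predict it (raises precision).
    """
    op = set.union if mode == "union" else set.intersection
    return [op(*sets) for sets in zip(*predictions)]

def precision_recall_f1(pred, gold):
    """Set-based precision/recall/F1 for a single instance."""
    tp = len(pred & gold)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Toy example: two systems, two instances (labels are illustrative).
sys_a = [{"L1", "L2"}, {"L3"}]
sys_b = [{"L2"}, {"L3", "L4"}]
union_preds = combine([sys_a, sys_b], "union")        # per-instance unions
intersect_preds = combine([sys_a, sys_b], "intersect")  # per-instance intersections
```

Macro-averaging the per-instance scores from `precision_recall_f1` over the dataset would then yield figures of the kind reported in the table.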