Table 3. Cross-validation results of Multi-Label classifiers.
| Metric | IBLR_ML | MLkNN | BRkNN | RAkEL | HOMER |
|---|---|---|---|---|---|
| Micro-averaged Precision | 0.9239 | 0.9202 | 0.9251 | 0.9117 | 0.9070 |
| Micro-averaged Recall | 0.9128 | 0.9190 | 0.9159 | 0.9117 | 0.8869 |
| Micro-averaged F-Measure | 0.9183 | 0.9196 | 0.9205 | 0.8628 | 0.8968 |
| Macro-averaged Precision | 0.9176 | 0.9134 | 0.9189 | 0.9181 | 0.9006 |
| Macro-averaged Recall | 0.9021 | 0.9103 | 0.9070 | 0.8039 | 0.8759 |
| Macro-averaged F-Measure | 0.9097 | 0.9118 | 0.9128 | 0.8559 | 0.8879 |
| Average Precision | 0.9554 | 0.9542 | 0.9442 | 0.9267 | 0.9305 |
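The table reports both micro- and macro-averaged variants of each metric. The distinction can be sketched as follows: micro-averaging pools the per-label confusion counts before computing the metric (so frequent labels dominate), while macro-averaging computes the metric per label and then averages (so every label weighs equally). The counts below are made-up for illustration, not drawn from the paper's data, and the macro F-measure is computed here from macro precision and recall (some toolkits instead average per-label F-measures).

```python
def micro_macro(counts):
    """counts: list of (tp, fp, fn) tuples, one per label.

    Returns ((micro_p, micro_r, micro_f), (macro_p, macro_r, macro_f)).
    """
    # Micro-averaging: sum the confusion counts over all labels,
    # then compute each metric once from the pooled counts.
    tp = sum(c[0] for c in counts)
    fp = sum(c[1] for c in counts)
    fn = sum(c[2] for c in counts)
    micro_p = tp / (tp + fp)
    micro_r = tp / (tp + fn)
    micro_f = 2 * micro_p * micro_r / (micro_p + micro_r)

    # Macro-averaging: compute precision/recall per label,
    # then take the unweighted mean across labels.
    ps = [t / (t + f) for t, f, _ in counts]
    rs = [t / (t + f) for t, _, f in counts]
    macro_p = sum(ps) / len(ps)
    macro_r = sum(rs) / len(rs)
    macro_f = 2 * macro_p * macro_r / (macro_p + macro_r)

    return (micro_p, micro_r, micro_f), (macro_p, macro_r, macro_f)


# Hypothetical per-label (tp, fp, fn) counts for three labels,
# deliberately imbalanced so micro and macro values diverge.
counts = [(90, 10, 5), (40, 5, 20), (8, 2, 6)]
micro, macro = micro_macro(counts)
```

With imbalanced label frequencies, as here, the two averages differ; when they are close (as for most classifiers in Table 3), performance is fairly uniform across labels.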