Table 4. Comparative performance of multilabel models on the augmented BU-3DFE dataset.
A metric marked ↑ means higher values indicate better model performance; a metric marked ↓ means lower values indicate better performance.
ML-Model | Hamming loss ↓ | Ranking loss ↓ | Average precision ↑ | Coverage ↓ |
---|---|---|---|---|
RAKELD | 0.3858 | 0.7223 | 0.2241 | 4.0453 |
CC | 0.1825 | 0.8948 | 0.2812 | 4.7270 |
MLkNN | 0.1929 | 0.9025 | 0.2573 | 4.9623 |
MLARAM | 0.3169 | 0.6963 | 0.3280 | 2.9315 |
ML-CNN | 0.1124 | 0.2278 | 0.7216 | 2.2397 |
VGGML-CNN | 0.0628 | 0.1561 | 0.8637 | 1.3140 |
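The four metrics in Table 4 are standard multilabel evaluation measures and can be computed with scikit-learn. The sketch below is illustrative only: the label matrices and scores are hypothetical toy data (not from the BU-3DFE experiments), assuming six expression labels per sample.

```python
import numpy as np
from sklearn.metrics import (
    hamming_loss,
    label_ranking_loss,
    label_ranking_average_precision_score,
    coverage_error,
)

# Hypothetical ground truth for 4 samples over 6 expression labels.
Y_true = np.array([[1, 0, 0, 1, 0, 0],
                   [0, 1, 0, 0, 0, 1],
                   [0, 0, 1, 0, 1, 0],
                   [1, 0, 0, 0, 0, 1]])

# Hypothetical binarized predictions (for Hamming loss).
Y_pred = np.array([[1, 0, 0, 1, 0, 0],
                   [0, 1, 0, 0, 1, 0],
                   [0, 0, 1, 0, 1, 0],
                   [0, 0, 0, 0, 0, 1]])

# Hypothetical continuous confidence scores (for the ranking-based metrics).
Y_score = np.array([[0.9, 0.1, 0.2, 0.8, 0.1, 0.1],
                    [0.2, 0.7, 0.1, 0.1, 0.6, 0.3],
                    [0.1, 0.2, 0.9, 0.1, 0.8, 0.2],
                    [0.4, 0.1, 0.1, 0.2, 0.1, 0.9]])

# Hamming loss (lower is better): fraction of misclassified label entries.
print("Hamming loss:", hamming_loss(Y_true, Y_pred))

# Ranking loss (lower is better): fraction of label pairs ordered incorrectly.
print("Ranking loss:", label_ranking_loss(Y_true, Y_score))

# Label-ranking average precision (higher is better).
print("Average precision:", label_ranking_average_precision_score(Y_true, Y_score))

# Coverage (lower is better): average depth one must go down the ranked
# label list to cover all true labels of a sample.
print("Coverage:", coverage_error(Y_true, Y_score))
```

Note that scikit-learn's `coverage_error` counts ranks starting at 1, so conventions in the literature that start at 0 will differ from it by exactly one; the relative ordering of models is unaffected.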