Table 3. Comparative performance of multilabel models on the BU-3DFE dataset.
A metric marked ↑ means higher values indicate better model performance; a metric marked ↓ means lower values indicate better performance.
| ML models | Hamming loss ↓ | Ranking loss ↓ | Average precision ↑ | Coverage ↓ |
|---|---|---|---|---|
| RAKELD | 0.4126 | 0.6859 | 0.2274 | 4.8137 |
| CC | 0.1807 | 0.8393 | 0.3107 | 4.8094 |
| MLkNN | 0.1931 | 0.8917 | 0.2634 | 4.9486 |
| MLARAM | 0.3045 | 0.6552 | 0.3180 | 3.1970 |
| ML-CNN | 0.1273 | 0.2867 | 0.5803 | 2.5620 |
| VGGML-CNN | 0.0890 | 0.1647 | 0.7093 | 1.9091 |
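The four metrics in Table 3 are standard multilabel evaluation measures, all available in scikit-learn. The sketch below shows how they could be computed from a model's outputs; the label matrix `y_true` and score matrix `y_score` are hypothetical toy data, not values from the BU-3DFE experiments, and the 0.5 threshold used to binarize scores for Hamming loss is an assumption.

```python
import numpy as np
from sklearn.metrics import (hamming_loss, label_ranking_loss,
                             label_ranking_average_precision_score,
                             coverage_error)

# Hypothetical ground-truth binary label matrix (n_samples x n_labels).
y_true = np.array([[1, 0, 1, 0],
                   [0, 1, 0, 0],
                   [1, 1, 0, 1]])

# Hypothetical per-label scores from a multilabel classifier.
y_score = np.array([[0.9, 0.2, 0.8, 0.1],
                    [0.3, 0.7, 0.4, 0.2],
                    [0.8, 0.6, 0.1, 0.9]])

# Hamming loss needs hard predictions; threshold scores at 0.5 (assumption).
y_pred = (y_score >= 0.5).astype(int)

print(f"Hamming loss       (lower is better): {hamming_loss(y_true, y_pred):.4f}")
print(f"Ranking loss       (lower is better): {label_ranking_loss(y_true, y_score):.4f}")
print(f"Average precision  (higher is better): "
      f"{label_ranking_average_precision_score(y_true, y_score):.4f}")
print(f"Coverage           (lower is better): {coverage_error(y_true, y_score):.4f}")
```

Note that scikit-learn's `coverage_error` counts the average number of top-ranked labels needed to cover all true labels (so its minimum equals the average number of true labels per sample); reported coverage values elsewhere may use a zero-based variant.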