PeerJ Comput Sci. 2021 Nov 29;7:e736. doi: 10.7717/peerj-cs.736

Table 5. Comparative performance of the multilabel models on the CK+ dataset.

For metrics marked ↑, higher values indicate better model performance; for metrics marked ↓, lower values indicate better performance.

ML-Model     Hamming loss ↓   Ranking loss ↓   Average precision ↑   Coverage ↓
RAKELD       0.3904           0.6637           0.2370                4.4435
CC           0.1489           0.6842           0.4234                4.7339
MLkNN        0.1839           0.8345           0.2965                4.7930
MLARAM       0.1951           0.4636           0.4144                3.0748
ML-CNN       0.1487           0.4161           0.5926                2.8120
VGGML-CNN    0.1393           0.3897           0.6002                1.4359
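The four metrics in the table are standard multilabel evaluation measures, all available in scikit-learn. The sketch below shows how they could be computed for any model that outputs per-label scores; the label matrices here are small hypothetical examples, not CK+ data, and the 0.5 threshold for binarizing scores is an assumption for illustration.

```python
import numpy as np
from sklearn.metrics import (
    hamming_loss,
    label_ranking_loss,
    label_ranking_average_precision_score,
    coverage_error,
)

# Hypothetical ground truth and model scores: 4 samples x 5 labels.
Y_true = np.array([
    [1, 0, 1, 0, 0],
    [0, 1, 0, 0, 1],
    [1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1],
])
Y_score = np.array([  # per-label scores from a hypothetical model
    [0.9, 0.1, 0.8, 0.3, 0.2],
    [0.2, 0.7, 0.1, 0.4, 0.6],
    [0.8, 0.6, 0.3, 0.2, 0.1],
    [0.1, 0.2, 0.3, 0.9, 0.7],
])
# Hamming loss needs hard predictions; threshold at 0.5 (an assumption).
Y_pred = (Y_score >= 0.5).astype(int)

print("Hamming loss ↓      :", hamming_loss(Y_true, Y_pred))
print("Ranking loss ↓      :", label_ranking_loss(Y_true, Y_score))
print("Average precision ↑ :",
      label_ranking_average_precision_score(Y_true, Y_score))
print("Coverage ↓          :", coverage_error(Y_true, Y_score))
```

Hamming loss is computed on thresholded predictions, while the three ranking-based metrics (ranking loss, average precision, coverage) operate directly on the continuous scores, which is why both `Y_pred` and `Y_score` are needed.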