PeerJ Comput Sci. 2024 Feb 29;10:e1915. doi: 10.7717/peerj-cs.1915

Table 14. Test set results of M1–M4.

           ------------ M1 ------------    ------------ M3 ------------
Class      Precision   Recall   F1 score   Precision   Recall   F1 score   Support
1             0.8895   0.8635     0.8763      0.8914   0.8652     0.8781     3,085
2             0.9253   0.9239     0.9246      0.9259   0.9242     0.9250     9,956
3             0.8681   0.9170     0.8919      0.8674   0.9189     0.8924     9,956
4             0.8062   0.9170     0.8580      0.8236   0.8973     0.8589     9,956
5             0.8993   0.6210     0.7347      0.8517   0.6658     0.7474     5,108
6             0.4778   0.2170     0.2985      0.5079   0.2148     0.3019       447
Accuracy                          0.8671                          0.8687    38,508
Macro avg     0.8110   0.7432     0.7640      0.8113   0.7477     0.7673    38,508

           ------------ M2 ------------    ------------ M4 ------------
Class      Precision   Recall   F1 score   Precision   Recall   F1 score   Support
1             0.8820   0.8476                 0.8818   0.8804     0.8811     3,085
2             0.9158   0.9226     0.9191      0.9293   0.9217     0.9255     9,956
3             0.8601   0.9133     0.8859      0.8694   0.9166     0.8924     9,956
4             0.8153   0.8896     0.8509      0.8216   0.9013     0.8596     9,956
5             0.8492   0.6496     0.7361      0.8625   0.6656     0.7514     5,108
6             0.5439   0.2081     0.3010      0.5026   0.2170     0.3031       447
Accuracy                          0.8611                          0.8689    38,508
Macro avg     0.8110   0.7385     0.7596      0.8112   0.7504     0.7688    38,508

Note: Values in bold represent the optimum values for each group.
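As a sanity check on the table, the "Macro avg" rows are the unweighted means of the six per-class scores, so a class with small support (class 6, 447 examples) pulls the macro averages well below the reported accuracy. A minimal sketch, using the per-class values for M1 copied from the table (the dictionary and helper names here are illustrative, not from the paper):

```python
# Per-class (precision, recall, F1) for model M1, copied from Table 14.
m1 = {
    1: (0.8895, 0.8635, 0.8763),
    2: (0.9253, 0.9239, 0.9246),
    3: (0.8681, 0.9170, 0.8919),
    4: (0.8062, 0.9170, 0.8580),
    5: (0.8993, 0.6210, 0.7347),
    6: (0.4778, 0.2170, 0.2985),
}

def macro(scores, idx):
    """Unweighted mean of one metric (0=precision, 1=recall, 2=F1) over all classes."""
    return round(sum(s[idx] for s in scores.values()) / len(scores), 4)

macro_p = macro(m1, 0)   # 0.8110, matching the M1 "Macro avg" precision
macro_r = macro(m1, 1)   # 0.7432, matching the M1 "Macro avg" recall
macro_f1 = macro(m1, 2)  # 0.7640, matching the M1 "Macro avg" F1 score
```

The poorly classified minority class (class 6) contributes the same weight as each majority class here, which is why macro F1 (0.7640) sits roughly ten points below accuracy (0.8671).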