Table 3.
Performance of the four classifier models across the five folds.
| Metric | Fold 1 | Fold 2 | Fold 3 | Fold 4 | Fold 5 | Mean value |
|---|---|---|---|---|---|---|
| (A) GBDT-LZC | | | | | | |
| Accuracy (%) | 67 | 87 | 80 | 93 | 79 | 81 |
| Precision (%) | 69 | 80 | 81 | 100 | 90 | 84 |
| Recall (%) | 90 | 100 | 90 | 92 | 82 | 91 |
| F1-score (%) | 78 | 89 | 86 | 96 | 86 | 87 |
| AUC (%) | 64 | 89 | 82 | 100 | 70 | 81 |
| Sensitivity (%) | 90 | 100 | 90 | 92 | 82 | 91 |
| Specificity (%) | 20 | 71 | 60 | 100 | 67 | 64 |
| (B) SVM-LZC | | | | | | |
| Accuracy (%) | 60 | 53 | 67 | 67 | 64 | 62 |
| Precision (%) | 70 | 56 | 78 | 100 | 80 | 77 |
| Recall (%) | 70 | 63 | 70 | 62 | 73 | 68 |
| F1-score (%) | 70 | 59 | 74 | 76 | 76 | 71 |
| AUC (%) | 54 | 64 | 64 | 100 | 33 | 63 |
| Sensitivity (%) | 70 | 63 | 70 | 62 | 73 | 68 |
| Specificity (%) | 40 | 43 | 60 | 100 | 33 | 55 |
| (C) GBDT-KC | | | | | | |
| Accuracy (%) | 67 | 87 | 80 | 100 | 79 | 83 |
| Precision (%) | 69 | 80 | 82 | 100 | 90 | 84 |
| Recall (%) | 90 | 100 | 90 | 100 | 82 | 92 |
| F1-score (%) | 78 | 89 | 86 | 100 | 86 | 88 |
| AUC (%) | 66 | 89 | 88 | 100 | 73 | 83 |
| Sensitivity (%) | 90 | 100 | 90 | 100 | 82 | 92 |
| Specificity (%) | 66 | 89 | 88 | 100 | 73 | 83 |
| (D) SVM-KC | | | | | | |
| Accuracy (%) | 60 | 53 | 67 | 67 | 64 | 62 |
| Precision (%) | 70 | 56 | 78 | 100 | 80 | 77 |
| Recall (%) | 70 | 63 | 70 | 62 | 73 | 68 |
| F1-score (%) | 70 | 59 | 74 | 76 | 76 | 71 |
| AUC (%) | 54 | 63 | 64 | 100 | 33 | 63 |
| Sensitivity (%) | 70 | 63 | 70 | 62 | 73 | 68 |
| Specificity (%) | 40 | 43 | 60 | 100 | 33 | 55 |
LZC, Lempel-Ziv complexity; KC, Kolmogorov complexity; GBDT, gradient boosting decision tree; SVM, support vector machine; AUC, area under the curve; RFE, recursive feature elimination.
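For readers reproducing Table 3, the per-fold metrics follow the standard binary confusion-matrix definitions, with sensitivity identical to recall, and the Mean value column is the unweighted average of the five fold percentages. The sketch below is illustrative only (it is not the study's code); the confusion-matrix counts are hypothetical, chosen so that the first fold happens to reproduce the Fold 1 values of panel (A).

```python
# Illustrative sketch: standard confusion-matrix metrics as reported in
# Table 3, plus the unweighted across-fold average ("Mean value" column).

def fold_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Compute the Table 3 metrics (as percentages) from one fold's counts."""
    accuracy = 100 * (tp + tn) / (tp + fp + fn + tn)
    precision = 100 * tp / (tp + fp)
    recall = 100 * tp / (tp + fn)  # sensitivity is the same quantity
    f1 = 2 * precision * recall / (precision + recall)
    specificity = 100 * tn / (tn + fp)
    return {"accuracy": accuracy, "precision": precision, "recall": recall,
            "f1": f1, "sensitivity": recall, "specificity": specificity}

def mean_over_folds(folds: list) -> dict:
    """Unweighted mean of per-fold percentages, as in the Mean value column."""
    per_fold = [fold_metrics(*f) for f in folds]
    return {k: sum(m[k] for m in per_fold) / len(per_fold) for k in per_fold[0]}

# Hypothetical (tp, fp, fn, tn) counts; the first tuple reproduces
# Fold 1 of panel (A): accuracy 67, precision 69, recall 90, specificity 20.
example_folds = [(9, 4, 1, 1), (10, 2, 0, 3), (9, 2, 1, 3)]
means = mean_over_folds(example_folds)
```

Note that AUC cannot be recovered from a single confusion matrix; it requires the classifier's continuous scores, which is why it can diverge from the threshold-based metrics in the same fold.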