
Table 3.

Results of 5-fold cross-validation with the 1–3-gram language model.

Model            CV 1     CV 2     CV 3     CV 4     CV 5     Average
LR               0.9545   0.9535   0.9525   0.9540   0.9540   0.9537
RF               0.9403   0.9457   0.9452   0.9437   0.9374   0.9424
Multinomial NB   0.9183   0.9202   0.9207   0.9154   0.9168   0.9183
MLP              0.9310   0.9300   0.9344   0.9315   0.9281   0.9310
KNN              0.8127   0.8112   0.8136   0.8078   0.8083   0.8107
SVM              0.9310   0.9315   0.9339   0.9334   0.9325   0.9325
XGBoost          0.9618   0.9569   0.9574   0.9584   0.9603   0.9590

CV, cross-validation; LR, logistic regression; RF, random forest; NB, naive Bayes; MLP, multi-layer perceptron; KNN, k-nearest neighbor; SVM, support vector machine; XGBoost, extreme gradient boosting.
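For readers who want to see how a comparison of this kind is typically set up, the sketch below runs 5-fold cross-validation over the same seven classifier families using 1–3-gram text features. It is a minimal illustration, not the authors' pipeline: the TF-IDF weighting, accuracy as the scoring metric, the specific hyperparameters, and the stand-in 20 newsgroups corpus are all assumptions, since this excerpt does not specify the feature extraction, metric, or data.

import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import MultinomialNB
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from xgboost import XGBClassifier  # requires the separate xgboost package

# Stand-in corpus; the original study used its own labeled clinical text.
data = fetch_20newsgroups(subset="train",
                          categories=["sci.med", "sci.space"],
                          remove=("headers", "footers", "quotes"))
texts, labels = data.data, data.target

# The seven model families listed in Table 3 (hyperparameters are illustrative).
models = {
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "Multinomial NB": MultinomialNB(),
    "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "SVM": SVC(kernel="linear"),
    "XGBoost": XGBClassifier(n_estimators=200, eval_metric="logloss", random_state=0),
}

for name, model in models.items():
    # 1-3-gram TF-IDF features feeding each classifier, scored with 5-fold CV.
    pipe = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 3), max_features=20000),
        model,
    )
    scores = cross_val_score(pipe, texts, labels, cv=5, scoring="accuracy")
    print(f"{name}: folds={np.round(scores, 4)} average={scores.mean():.4f}")

Each model is wrapped in a pipeline with the vectorizer so that the n-gram vocabulary is refit inside every training fold, avoiding leakage from the held-out fold into the feature space.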