
Table 7. Model performance comparison—rebalanced data.

Method and model      Average   Average    Average  Average   Average
                      accuracy  precision  recall   F1-score  AUROC^a
---------------------------------------------------------------------
Random undersampling
  LR^b                0.72      0.18       0.61     0.28      0.67
  NB^c                0.83      0.22       0.39     0.28      0.63
  RF^d                0.65      0.15       0.62     0.24      0.63
  XGBoost^e           0.71      0.18       0.61     0.28      0.67
  AdaBoost^f          0.72      0.18       0.61     0.28      0.67
  MLP^g               0.72      0.18       0.60     0.28      0.72
  Sequential ANN^h    0.69      0.17       0.61     0.26      0.65
Random oversampling
  LR                  0.72      0.18       0.61     0.28      0.67
  NB                  0.83      0.23       0.38     0.28      0.63
  RF                  0.83      0.19       0.26     0.22      0.57
  XGBoost             0.72      0.18       0.61     0.28      0.67
  AdaBoost            0.72      0.18       0.61     0.28      0.67
  MLP                 0.72      0.18       0.60     0.28      0.67
  Sequential ANN      0.69      0.17       0.61     0.26      0.65
SMOTE^i
  LR                  0.73      0.18       0.58     0.28      0.66
  NB                  0.82      0.22       0.39     0.28      0.63
  RF                  0.89      0.31       0.22     0.26      0.59
  XGBoost             0.89      0.31       0.22     0.26      0.59
  AdaBoost            0.85      0.27       0.36     0.31      0.63
  MLP                 0.84      0.25       0.38     0.30      0.63
  Sequential ANN      0.69      0.16       0.55     0.24      0.63
Weight rebalancing
  LR                  0.72      0.18       0.61     0.28      0.67
  NB                  0.82      0.22       0.39     0.28      0.63
  RF                  0.87      0.21       0.18     0.20      0.56
  XGBoost^j           0.64      0.16       0.70     0.26      0.67
  AdaBoost            0.81      0.15       0.24     0.18      0.55

^a AUROC: area under the receiver operating characteristic curve.
^b LR: logistic regression.
^c NB: naive Bayes.
^d RF: random forest.
^e XGBoost: extreme gradient boosting.
^f AdaBoost: adaptive boosting.
^g MLP: multilayer perceptron.
^h ANN: artificial neural network.
^i SMOTE: synthetic minority oversampling technique.
^j Model with the best performance.
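The four rebalancing strategies compared in Table 7 (random undersampling, random oversampling, SMOTE, and class-weight rebalancing) are all available off the shelf. The sketch below shows how fold-averaged metrics of the kind reported above could be computed for one row (LR with random undersampling) using scikit-learn and imbalanced-learn; the synthetic dataset, fold count, and model settings are illustrative assumptions, not the study's configuration.

```python
# Minimal sketch, assuming a binary target and metrics averaged over
# cross-validation folds. Dataset and hyperparameters are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)
from sklearn.model_selection import StratifiedKFold
from imblearn.under_sampling import RandomUnderSampler

# Synthetic imbalanced data standing in for the study's cohort (~10% positives).
X, y = make_classification(n_samples=5000, weights=[0.9, 0.1], random_state=0)

scores = {m: [] for m in ("accuracy", "precision", "recall", "F1-score", "AUROC")}
for train_idx, test_idx in StratifiedKFold(
        n_splits=5, shuffle=True, random_state=0).split(X, y):
    # Rebalance the training fold only, so the held-out fold keeps the
    # true class ratio that the reported metrics are computed on.
    X_res, y_res = RandomUnderSampler(random_state=0).fit_resample(
        X[train_idx], y[train_idx])
    model = LogisticRegression(max_iter=1000).fit(X_res, y_res)
    pred = model.predict(X[test_idx])
    prob = model.predict_proba(X[test_idx])[:, 1]
    scores["accuracy"].append(accuracy_score(y[test_idx], pred))
    scores["precision"].append(precision_score(y[test_idx], pred))
    scores["recall"].append(recall_score(y[test_idx], pred))
    scores["F1-score"].append(f1_score(y[test_idx], pred))
    scores["AUROC"].append(roc_auc_score(y[test_idx], prob))

for metric, vals in scores.items():
    print(f"average {metric}: {np.mean(vals):.2f}")
```

Swapping RandomUnderSampler for imblearn's RandomOverSampler or SMOTE, or dropping the resampling step and fitting the estimator with class_weight="balanced", would cover the other three strategies in the table. Resampling inside each training fold, rather than before the split, avoids leaking duplicated or synthetic minority samples into the evaluation folds.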