J Transl Med. 2022 Jun 11;20:265. doi: 10.1186/s12967-022-03469-6

Table 1. Comparison of the performance of multiple prediction models

| Method | Dataset | Accuracy | Precision | Recall | F1 | AUC |
| --- | --- | --- | --- | --- | --- | --- |
| Random Forest | Training | 0.851 | 1.000 | 0.238 | 0.384 | 0.619 |
| | Test | 0.808 | 0.909 | 0.068 | 0.127 | 0.533 |
| Logistic Regression | Training | 0.825 | 0.629 | 0.256 | 0.364 | 0.610 |
| | Test | 0.808 | 0.567 | 0.260 | 0.357 | 0.605 |
| Lasso Regression | Training | 0.825 | 0.762 | 0.148 | 0.248 | 0.568 |
| | Test | 0.813 | 0.710 | 0.151 | 0.249 | 0.567 |
| Radial SVM [40] | Training | 0.515 | 0.970 | 0.491 | 0.652 | 0.701 |
| | Test | 0.337 | 0.896 | 0.204 | 0.333 | 0.586 |
| | Validation | 0.806 | 0.849 | 0.920 | 0.883 | 0.642 |
| Gradient Boosting [40] | Training | 0.851 | 0.934 | 0.899 | 0.916 | 0.690 |
| | Test | 0.718 | 0.822 | 0.816 | 0.819 | 0.574 |
| | Validation | 0.828 | 0.885 | 0.905 | 0.895 | 0.682 |
| Bayes [40] | Training | 0.567 | 0.965 | 0.553 | 0.703 | 0.649 |
| | Test | 0.465 | 0.861 | 0.405 | 0.551 | 0.562 |
| | Validation | 0.828 | 0.891 | 0.895 | 0.893 | 0.713 |
| Linear Regression [40] | Training | 0.801 | 0.943 | 0.835 | 0.886 | 0.599 |
| | Test | 0.679 | 0.828 | 0.763 | 0.794 | 0.541 |
| | Validation | 0.788 | 0.885 | 0.842 | 0.863 | 0.689 |
| Linear SVM [40] | Training | 0.337 | 0.896 | 0.205 | 0.333 | 0.586 |
| | Test | 0.467 | 0.861 | 0.407 | 0.553 | 0.586 |
| | Validation | 0.818 | 0.873 | 0.906 | 0.889 | 0.676 |
| SOFA score [13] | All data | 0.752 | 0.371 | 0.327 | 0.348 | 0.807 |
| DCQMFF (proposed) | Training | 0.822 | 0.822 | 0.821 | 0.822 | 0.896 |
| | Test | 0.821 | 0.812 | 0.812 | 0.812 | 0.885 |
| | Validation | 0.775 | 0.764 | 0.754 | 0.759 | 0.849 |
| CNN (proposed) | Training | 0.928 | 0.924 | 0.856 | 0.888 | 0.953 |
| | Test | 0.924 | 0.887 | 0.845 | 0.865 | 0.947 |
| | Validation | 0.834 | 0.825 | 0.818 | 0.821 | 0.909 |