
Table 3:

Model performance comparison. Average AUC, Precision (Pr), Recall (Rc), and F1-score (F1) were computed over five repetitions of 5-fold cross-validation for the two models. Results are reported per class; overall performance across classes (ALL) is computed with micro-averaging.

Model            Setting       Class  AUC    Pr     Rc     F1
Three-Class MLP  Full set      GBM    .881   .830   .806   .818
                               LYM    .859   .681   .705   .693
                               MET    .920   .795   .803   .799
                               ALL    .911   .790   .789   .789
                 AE 5 latent   GBM    .894   .737   .778   .757
                               LYM    .846   .667   .500   .571
                               MET    .925   .800   .833   .816
                               ALL    .887   .756   .760   .756
                 AE 15 latent  GBM    .910   .801   .813   .807
                               LYM    .897   .749   .640   .690
                               MET    .923   .781   .810   .795
                               ALL    .909   .783   .784   .783
                 AE 50 latent  GBM    .910   .839   .806   .822
                               LYM    .868   .639   .575   .605
                               MET    .889   .776   .825   .800
                               ALL    .899   .777   .779   .777
Two-Stage        Stage 1       GBM    .852   .828   .871   .849
                               REST          .750   .682   .714
                 Stage 2       LYM    .864   .667   .581   .621
                               MET           .880   .913   .896
                 Combined      ALL    -      .717   .712   .713
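For reference, the following is a minimal sketch of the evaluation protocol described in the table caption, assuming scikit-learn: five repetitions of stratified 5-fold cross-validation, with micro-averaged precision, recall, and F1 (corresponding to the ALL rows) and a one-vs-rest AUC. The MLP classifier, synthetic three-class data, and hyperparameters are illustrative placeholders, not the authors' actual pipeline or datasets.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import precision_recall_fscore_support, roc_auc_score

# Synthetic stand-in for the three-class problem (GBM / LYM / MET).
X, y = make_classification(n_samples=300, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)

# Five repetitions of stratified 5-fold cross-validation (25 folds total).
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=5, random_state=0)
per_fold = []
for train_idx, test_idx in cv.split(X, y):
    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    clf.fit(X[train_idx], y[train_idx])
    y_pred = clf.predict(X[test_idx])
    y_prob = clf.predict_proba(X[test_idx])

    # Micro-averaged precision/recall/F1 corresponds to the "ALL" rows.
    pr, rc, f1, _ = precision_recall_fscore_support(
        y[test_idx], y_pred, average="micro")
    # One-vs-rest AUC for the multi-class setting.
    auc = roc_auc_score(y[test_idx], y_prob, multi_class="ovr")
    per_fold.append((auc, pr, rc, f1))

print("Mean over 25 folds (AUC, Pr, Rc, F1):",
      np.round(np.mean(per_fold, axis=0), 3))

Per-class rows (GBM, LYM, MET) can be obtained from the same loop by calling precision_recall_fscore_support with average=None instead of "micro".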