npj Digital Medicine. 2023 Aug 23;6:156. doi: 10.1038/s41746-023-00905-9

Table 3.

Performance of different regressor models we tested.

| Model | MAE | MSE | Accuracy (%) | Kendall's τ | MAPE (%) | PCC | Spearman's ρ |
|---|---|---|---|---|---|---|---|
| SVR | 0.5861 | 0.5586 | **51.94** | 0.5044 | 32.14 | 0.6388 | 0.6329 |
| Random forest regressor | 0.5920 | 0.5482 | 49.28 | 0.5116 | 33.50 | 0.6518 | 0.6389 |
| AdaBoost regressor | 0.5926 | 0.5499 | 45.60 | 0.5038 | 33.75 | 0.6219 | 0.6317 |
| XGBoost regressor | 0.5904 | 0.5553 | 51.33 | 0.4989 | 32.57 | 0.6417 | 0.6282 |
| LightGBM regressor | **0.5802** | **0.5364** | 50.92 | **0.5147** | **32.01** | **0.6563** | **0.6429** |
| Shallow neural network-I (one trainable layer) | 0.6154 | 0.6007 | 46.83 | 0.4810 | 33.90 | 0.6097 | 0.6100 |
| Shallow neural network-II (two trainable layers) | 0.6069 | 0.6162 | 51.33 | 0.4813 | 32.42 | 0.6004 | 0.6044 |

Performance was evaluated using leave-one-patient-out cross-validation. For each metric, the best value is highlighted in bold. Some metric names are abbreviated for brevity.

MAE, mean absolute error (points); MSE, mean squared error (points); MAPE, mean absolute percentage error (%); PCC, Pearson’s correlation coefficient.
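For context on how such figures are typically produced, below is a minimal sketch (not the authors' code) of leave-one-patient-out evaluation with a placeholder SVR regressor. The inputs `X`, `y`, and the per-sample patient identifiers `groups` are hypothetical, and the table's accuracy metric is omitted because its exact definition is not given in this excerpt.

```python
# Minimal sketch of leave-one-patient-out (LOPO) evaluation of a regressor.
# Assumptions: X is a (n_samples, n_features) array, y holds the target scores
# (nonzero, in points), and groups holds one patient ID per sample.
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error, mean_squared_error

def lopo_evaluate(X, y, groups, model_factory=lambda: SVR()):
    """Pool out-of-fold predictions across LOPO splits, then compute metrics."""
    logo = LeaveOneGroupOut()
    y_true, y_pred = [], []
    for train_idx, test_idx in logo.split(X, y, groups):
        model = model_factory()                      # fresh model per held-out patient
        model.fit(X[train_idx], y[train_idx])
        y_true.append(y[test_idx])
        y_pred.append(model.predict(X[test_idx]))
    y_true = np.concatenate(y_true)
    y_pred = np.concatenate(y_pred)
    return {
        "MAE": mean_absolute_error(y_true, y_pred),
        "MSE": mean_squared_error(y_true, y_pred),
        # MAPE assumes no zero targets
        "MAPE (%)": 100 * np.mean(np.abs((y_true - y_pred) / y_true)),
        "PCC": pearsonr(y_true, y_pred)[0],
        "Spearman's rho": spearmanr(y_true, y_pred)[0],
        "Kendall's tau": kendalltau(y_true, y_pred)[0],
    }
```

Swapping `model_factory` for any of the other regressors in the table (random forest, AdaBoost, XGBoost, LightGBM, or a shallow neural network) would yield the corresponding rows under the same protocol.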