Molecules. 2019 Aug 6;24(15):2851. doi: 10.3390/molecules24152851

Table 4.

Performance comparison of different machine learning models.

Models     RMSE (%)                       R²                             MAE
           Calibration Set   Test Set     Calibration Set   Test Set     Calibration Set   Test Set
PLS        14.36             13.34        0.80              0.81         11.18             10.69
Ridge      17.09             14.84        0.74              0.78         13.39             11.81
Enet       15.23             14.38        0.77              0.78         12.11             11.60
Rqlasso    15.72             14.92        0.76              0.77         12.46             11.94
Earth      16.30             16.84        0.74              0.71         12.93             13.14
Kknn       16.44             16.02        0.75              0.74         12.79             12.38
ParRF      15.91             14.87        0.77              0.79         12.92             11.91
Qrf        15.66             14.81        0.76              0.77         11.99             10.99
Rf         15.92             14.99        0.77              0.78         12.95             11.98
Ctree      21.74             22.71        0.55              0.48         17.09             16.95
Cubist     12.67             10.93        0.84              0.87         9.78              8.37
Glmboost   15.20             14.38        0.77              0.78         12.17             11.57
XgbTree    29.67             29.22        0.33              0.30         22.70             22.86
Msaene     15.33             14.39        0.77              0.78         12.37             11.69

MAE, mean absolute error; RMSE, root mean square error; R², coefficient of determination.
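The three columns in Table 4 are standard regression metrics. The short sketch below shows how RMSE, R², and MAE would typically be computed for calibration-set and test-set predictions; scikit-learn is an assumption here, and the arrays are purely illustrative placeholders, not the paper's data.

```python
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score

def report_metrics(y_true, y_pred, label):
    """Print RMSE, R2, and MAE for one set of reference values vs. predictions."""
    rmse = np.sqrt(mean_squared_error(y_true, y_pred))  # root mean square error
    r2 = r2_score(y_true, y_pred)                       # coefficient of determination
    mae = mean_absolute_error(y_true, y_pred)           # mean absolute error
    print(f"{label}: RMSE={rmse:.2f}  R2={r2:.2f}  MAE={mae:.2f}")

# Hypothetical arrays standing in for measured and predicted values (in %);
# the sizes and noise level are arbitrary and only demonstrate the calculation.
rng = np.random.default_rng(0)
y_cal = rng.uniform(0, 100, 80)
y_cal_pred = y_cal + rng.normal(0, 12, 80)
y_test = rng.uniform(0, 100, 40)
y_test_pred = y_test + rng.normal(0, 12, 40)

report_metrics(y_cal, y_cal_pred, "Calibration set")
report_metrics(y_test, y_test_pred, "Test set")
```

Lower RMSE and MAE and higher R² indicate better agreement between predicted and reference values, which is how the models in Table 4 are ranked (e.g., Cubist performs best and XgbTree worst on both sets).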