Table 2.
| Year | Measurement | GBM | SVM | RF | XGBoost | SL^a | DEML^b |
|---|---|---|---|---|---|---|---|
| 2015 | R² | 0.69 | 0.79 | 0.85 | 0.81 | 0.85 | 0.89 |
| | RMSE (μg/m³) | 9.25 | 6.42 | 6.49 | 7.23 | 6.47 | 5.54 |
| 2016 | R² | 0.72 | 0.80 | 0.84 | 0.81 | 0.84 | 0.87 |
| | RMSE (μg/m³) | 7.74 | 6.51 | 5.84 | 6.33 | 5.82 | 5.18 |
| 2017 | R² | 0.74 | 0.81 | 0.85 | 0.81 | 0.85 | 0.89 |
| | RMSE (μg/m³) | 8.20 | 7.19 | 6.41 | 7.09 | 6.38 | 5.37 |
| 2018 | R² | 0.70 | 0.78 | 0.86 | 0.82 | 0.86 | 0.89 |
| | RMSE (μg/m³) | 7.44 | 6.22 | 5.18 | 5.69 | 5.13 | 4.43 |
| 2019 | R² | 0.68 | 0.76 | 0.84 | 0.79 | 0.84 | 0.87 |
| | RMSE (μg/m³) | 7.34 | 6.42 | 5.13 | 5.78 | 5.12 | 4.55 |
| Total | R² | 0.51 | 0.76 | 0.83 | 0.70 | 0.83 | 0.87 |
| | RMSE (μg/m³) | 10.4 | 7.42 | 6.23 | 8.20 | 6.23 | 5.38 |
Note: DEML, the three-stage stacked deep ensemble machine learning method; GBM, gradient boosting machine; PM2.5, particulate matter with aerodynamic diameter ≤2.5 μm; R², coefficient of determination for unseen independent data; RF, random forest; RMSE, root mean square error; SL, super learner algorithm; SVM, support vector machine; XGBoost, extreme gradient boosting.
^a SL was constructed from four machine learning models (GBM, SVM, RF, and XGBoost), using a nonnegative least squares (NNLS) approach to determine the optimal weights.
^b DEML was a three-stage stacked ensemble model constructed from four base models (GBM, SVM, RF, and XGBoost), three second-level models (RF, XGBoost, and GLM), and an NNLS algorithm.
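The footnotes above outline how the stacked ensembles are assembled. The sketch below is not the authors' code; it is a minimal Python illustration of the general idea, assuming synthetic data, default hyperparameters, 5-fold out-of-fold predictions, and `LinearRegression` as a stand-in for the GLM. Out-of-fold predictions from the four base learners feed three second-level learners, whose predictions are then combined with NNLS weights (the same NNLS step that SL applies directly to the base models).

```python
# Minimal sketch of a three-stage stacked ensemble with NNLS weighting.
# Hyperparameters, fold counts, and the LinearRegression stand-in for the GLM
# are illustrative assumptions, not the authors' settings.
import numpy as np
from scipy.optimize import nnls
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVR
from xgboost import XGBRegressor  # assumes the xgboost package is installed

X, y = make_regression(n_samples=500, n_features=10, noise=10.0, random_state=0)

# Stage 1: four base learners; out-of-fold predictions avoid data leakage.
base_models = {
    "GBM": GradientBoostingRegressor(random_state=0),
    "SVM": SVR(),
    "RF": RandomForestRegressor(random_state=0),
    "XGBoost": XGBRegressor(random_state=0),
}
Z1 = np.column_stack(
    [cross_val_predict(m, X, y, cv=5) for m in base_models.values()]
)

# Stage 2: three second-level learners fit on the base-model predictions.
meta_models = {
    "RF": RandomForestRegressor(random_state=0),
    "XGBoost": XGBRegressor(random_state=0),
    "GLM": LinearRegression(),  # stand-in for a Gaussian GLM
}
Z2 = np.column_stack(
    [cross_val_predict(m, Z1, y, cv=5) for m in meta_models.values()]
)

# Stage 3: nonnegative least squares assigns weights to the second-level
# predictions; normalizing makes the weights sum to 1.
weights, _ = nnls(Z2, y)
weights /= weights.sum()
print("NNLS weights (RF, XGBoost, GLM):", np.round(weights, 3))
```

The final DEML prediction for new data would follow the same path: base-model predictions, then second-level predictions, then the NNLS-weighted combination.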