Table 1. Implemented algorithms and evaluation metrics extracted from publications focused on the prediction of NO2 (*).
Work | ML Algorithm | Evaluation Metric |
---|---|---|
[15] | BRT, SVM, XGBoost, RF, GAM, Cubist | RMSE, ME, NRMSE, NME, POD, POF, R2 |
[16] | LSTM | RMSE, NSE, PBIAS, R |
[17] | LSTM | MSE |
[18] | MLR, MLPNN, ELM, OSMLR, OSELM | |
[19] | LSTM | RMSE, MAE |
[20] | ELM | RMSE, MAE, IA, R2 |
[21] | ANN | RMSE, R, NMB, NMSD, Rs, SD, SD′ |
[22] | SVM, M5P model trees, ANN | RMSE, NRMSE, PTA |
[23] | Cluster-based bagging | RMSE, R2, RMSEIQR |
[24] | MLP with hierarchical clustering, SOM and k-means clustering | RMSE, MAE, NRMSE, MBE, IA, R |
[25] | GAM, Bagging, RF, GBM, ANN, KRLS, SVR, Linear stepwise regression algorithms, Regularization or shrinkage algorithms | RMSE, R2, MSE-R2 |
[26] | Ensemble model with DRR | RMSE |
[27] | AIS-RNN (RNN, LSTM, GRU) | RMSE, MAE, MAPE |
[28] | SVM | RMSE, MAE, CWIA, RE |
[29] | RF partition model | MAPE, MADE, BIC, R2 |
[30] | SVM | RMSE, MAE, WIA |
[31] | LSTM | RMSE |
* ML Algorithms: BRT–Boosted Regression Trees, SVM–Support Vector Machine, XGBoost–EXtreme Gradient Boosting, RF–Random Forest, GAM–Generalized Additive Model, LSTM–Long Short-Term Memory, ANN–Artificial Neural Network, GBM–Gradient Boosting Machines, KRLS–Kernel-based Regularized Least Squares, AIS–Adaptive Input Selection, RNN–Recurrent Neural Network, GRU–Gated Recurrent Unit, MLR–Multiple Linear Regression, MLPNN–Multi-layer Perceptron Neural Network, ELM–Extreme Learning Machine, OSMLR–Online Sequential Multiple Linear Regression, OSELM–Online Sequential Extreme Learning Machine, SOM–Self-organizing Map, DRR–Discounted Ridge Regression.
Evaluation Metrics: RMSE–Root Mean Squared Error, ME–Mean Error, NRMSE–Normalized Root Mean Squared Error, NME–Normalized Mean Error, POD–Probability of Detection, POF–Probability of False Alarm, R2–Coefficient of Determination, NSE–Nash–Sutcliffe Efficiency Index, PBIAS–Percentage Bias, R–Pearson Correlation Coefficient, MSE–Mean Squared Error, MAE–Mean Absolute Error, IA–Index of Agreement, NMB–Normalized Mean Bias, Rs–Rank Correlation by Spearman, SD–Standard Deviation, PTA–Prediction Trend Accuracy, MBE–Mean Bias Error, MAPE–Mean Absolute Percentage Error, CWIA–Complementary Willmott's Index of Agreement, RE–Relative Error, MADE–Mean Absolute Deviation Error, BIC–Bayesian Information Criterion, WIA–Willmott's Index of Agreement.
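For reference, the metrics reported most often in Table 1 (RMSE, MAE, R2, and Willmott's IA) can be computed from paired observed and predicted values as in the sketch below. This is an illustrative implementation of the standard definitions, not code taken from any of the cited works.

```python
import math

def rmse(obs, pred):
    """Root Mean Squared Error: sqrt of the mean squared residual."""
    return math.sqrt(sum((p - o) ** 2 for o, p in zip(obs, pred)) / len(obs))

def mae(obs, pred):
    """Mean Absolute Error: mean of the absolute residuals."""
    return sum(abs(p - o) for o, p in zip(obs, pred)) / len(obs)

def r2(obs, pred):
    """Coefficient of Determination: 1 - SS_res / SS_tot."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((p - o) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

def ia(obs, pred):
    """Willmott's Index of Agreement (bounded in [0, 1], 1 = perfect)."""
    mean_obs = sum(obs) / len(obs)
    num = sum((p - o) ** 2 for o, p in zip(obs, pred))
    den = sum((abs(p - mean_obs) + abs(o - mean_obs)) ** 2
              for o, p in zip(obs, pred))
    return 1 - num / den
```

Lower RMSE and MAE indicate smaller errors, while R2 and IA approach 1 for a perfect fit; a prediction equal to the observations yields RMSE = MAE = 0 and R2 = IA = 1.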