Table 2. Different options for improving the developed ANN and RF models.
| Parameter | Available options | Optimum option (Q_i model) | Optimum option (EUR model) |
|---|---|---|---|
| **ANN model** | | | |
| Number of hidden layers | 1–3 | Single hidden layer | Single hidden layer |
| Number of neurons in each layer | 5–40 | 8 | 8 |
| Training/testing split ratio | 70%–90% | 70%/30% | 70%/30% |
| Training algorithms | Trainlm, trainbfg, trainrp, trainscg, traincgb, traincgf, traincgp, trainoss, traingdx | Trainbr | Trainbr |
| Transfer function | Tansig, logsig, elliotsig, radbas, hardlim, satlin | Logsig | Logsig |
| Learning rate | 0.01–0.9 | 0.05 | 0.05 |
| **RF model** | | | |
| Maximum features | Auto, sqrt, log2 | sqrt | Auto |
| Maximum depth | 3, 4, 5, ..., 30 | 20 | 30 |
| Number of estimators | 3, 4, 5, ..., 150 | 150 | 100 |
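The RF tuning ranges in Table 2 can be reproduced with a standard grid search. The sketch below is illustrative only, assuming scikit-learn's `RandomForestRegressor` and `GridSearchCV`; the feature matrix `X`, the target `y` (Q_i or EUR), and the reduced search grid are placeholders, not the authors' exact workflow.

```python
# Minimal sketch of an RF hyperparameter search over the Table 2 ranges.
# Assumptions: scikit-learn backend; X and y are user-supplied arrays;
# the grid shown is a coarse subset of the full ranges (3-30 depth,
# 3-150 estimators) to keep the example fast.
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, train_test_split


def tune_random_forest(X, y, random_state=42):
    # 70%/30% training/testing split, matching the ratio used for the ANN models
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.30, random_state=random_state
    )

    # Search space adapted from the "Available options" column.
    # Note: the legacy "auto" setting (all features for regression) was removed
    # in recent scikit-learn releases; None plays the same role here.
    param_grid = {
        "max_features": ["sqrt", "log2", None],
        "max_depth": [3, 10, 20, 30],
        "n_estimators": [50, 100, 150],
    }

    search = GridSearchCV(
        RandomForestRegressor(random_state=random_state),
        param_grid,
        scoring="r2",
        cv=5,
        n_jobs=-1,
    )
    search.fit(X_train, y_train)

    # Return the best hyperparameters and the held-out R^2 score
    return search.best_params_, search.best_estimator_.score(X_test, y_test)
```

Running the same search separately for the Q_i and EUR targets is what allows the two models to settle on different optima (e.g., a deeper forest with fewer trees for EUR), as reflected in the last two columns of Table 2.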