Table 3.
RMSE, input configuration, and number of hidden units for the best fit of each neural network architecture, by time series. M stands for multiplicative and A for additive; tX denotes the number of lagged input variables.
| Model | PETR4 Test | Validation | Input | Hidden | ITUB4 Test | Validation | Input | Hidden |
|---|---|---|---|---|---|---|---|---|
| ELMAN | 0.67 | 0.83 | M.t5 | 2 | 0.60 | 0.69 | A.t4 | 20 |
| JORDAN | 0.67 | 0.83 | M.t5 | 2 | 0.60 | 0.71 | A.t4 | 20 |
| MLP-TANG. | 0.67 | 0.83 | A.t10 | 2 | 0.60 | 0.69 | A.t7 | 20 |
| MLR | 0.67 | 0.79 | A.t8 | 1 | 0.61 | 0.69 | A.t4 | 1 |
| RBF | 2.06 | 4.75 | M.t10 | 2 | 1.98 | 3.46 | M.t10 | 2 |

| Model | BBDC4 Test | Validation | Input | Hidden | BOVA11 Test | Validation | Input | Hidden |
|---|---|---|---|---|---|---|---|---|
| ELMAN | 0.52 | 0.70 | M.t3 | 15 | 1.14 | 2.09 | M.t3 | 15 |
| JORDAN | 0.53 | 0.71 | M.t5 | 2 | 1.14 | 2.09 | M.t3 | 15 |
| MLP-TANG. | 0.52 | 0.73 | M.t6 | 2 | 1.14 | 2.11 | A.t1 | 5 |
| MLR | 0.52 | 0.71 | M.t1 | 1 | 1.14 | 2.10 | A.t2 | 1 |
| RBF | 2.10 | 2.40 | M.t10 | 2 | 3.61 | 6.28 | M.t10 | 2 |

| Model | B3SA3 Test | Validation | Input | Hidden | VALE3 Test | Validation | Input | Hidden |
|---|---|---|---|---|---|---|---|---|
| ELMAN | 0.54 | 1.33 | A.t1 | 2 | 1.38 | 1.38 | A.t9 | 15 |
| JORDAN | 0.54 | 1.33 | M.t4 | 20 | 1.38 | 1.40 | A.t10 | 20 |
| MLP-TANG. | 0.54 | 1.36 | M.t5 | 15 | 1.38 | 1.41 | A.t9 | 20 |
| MLR | 0.54 | 1.33 | A.t4 | 1 | 1.38 | 1.40 | A.t8 | 1 |
| RBF | 1.96 | 3.43 | A.t9 | 15 | 2.48 | 1.83 | M.t7 | 2 |

*Bold represents the best solution for each stock.*
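The two quantities compared in the table, the lagged-input configuration (tX) and the RMSE metric, can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the function names `lagged_matrix` and `rmse` are mine, the series is synthetic, and the multiplicative/additive preprocessing distinguished by M/A is not reproduced here.

```python
import numpy as np

def lagged_matrix(y, n_lags):
    """Build a lagged-input design matrix (cf. the tX configurations):
    the row for time t holds [y[t-1], ..., y[t-n_lags]], aligned with
    the target y[t]."""
    X = np.column_stack(
        [y[n_lags - j : len(y) - j] for j in range(1, n_lags + 1)]
    )
    return X, y[n_lags:]

def rmse(y_true, y_pred):
    """Root-mean-square error, the comparison metric in Table 3."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

# Illustrative use: a linear (MLR-style) fit on 4 lagged inputs,
# analogous to the t4 configurations in the table.
rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=200))        # synthetic price-like series
X, target = lagged_matrix(y, 4)
A = np.column_stack([X, np.ones(len(X))])  # add an intercept column
coef, *_ = np.linalg.lstsq(A, target, rcond=None)
print(rmse(target, A @ coef))              # in-sample RMSE of the fit
```

In the paper's setup the same RMSE would be reported separately on the test and validation splits rather than in-sample, and the linear model would be replaced by the Elman, Jordan, MLP, or RBF networks being compared.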