Front Plant Sci. 2024 Mar 4;15:1289783. doi: 10.3389/fpls.2024.1289783

Table 2.

Test results for different activation functions and numbers of convolution kernels (the shaded group is the optimal structure; bold font indicates the optimal solution).

Group  Activation  Kernels  Training R²  Training RMSE  Validation R²  Validation RMSE  Test R²  Test RMSE
1      ReLU        8, 16    0.9931       17.3           0.9860         31.4             0.9734   24.7
2      ReLU        8, 32    0.9910       19.8           0.9888         22.1             0.9823   25.6
3      ReLU        8, 64    0.9911       19.7           0.9892         21.7             0.9829   25.1
4      ReLU        16, 32   0.9957       13.8           0.9876         23.2             0.9913   17.9
5      ReLU        16, 64   0.9948       15.1           0.9908         20.0             0.9800   27.2
6      ReLU        32, 64   0.9926       18.0           0.9867         24.1             0.9769   29.2
7      Tanh        8, 16    0.9314       54.7           0.9096         57.8             0.8710   75.0
8      Tanh        8, 32    0.8904       69.2           0.8770         73.3             0.8795   66.8
9      Tanh        8, 64    0.8892       53.4           0.8836         58.2             0.8854   58.0
10     Tanh        16, 32   0.9439       49.5           0.8566         79.1             0.8625   71.3
11     Tanh        16, 64   0.9377       40.9           0.9299         41.5             0.8947   56.8
12     Tanh        32, 64   0.9248       57.3           0.8945         62.5             0.8891   69.5
13     Sigmoid     8, 16    0.9391       51.5           0.9229         53.4             0.9014   65.6
14     Sigmoid     8, 32    0.9806       29.1           0.9749         33.1             0.9718   32.3
15     Sigmoid     8, 64    0.9791       30.2           0.9644         39.4             0.9512   42.5
16     Sigmoid     16, 32   0.9388       51.7           0.9214         53.9             0.8984   66.6
17     Sigmoid     16, 64   0.9676       37.6           0.9630        40.2              0.9575   39.6
18     Sigmoid     32, 64   0.9858       24.9           0.9714         35.3             0.9688   34.0
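
To make the grid in Table 2 concrete, the sketch below (not the authors' published code) builds a two-convolution-layer regression network whose filter counts and activation are swept over the same 18 combinations and scores each fit with R² and RMSE. Only the activation functions and filter-count pairs come from the table; the 1D convolutions, kernel size of 3, pooling, input length of 256, single regression target, Adam/MSE training, and the synthetic data are illustrative assumptions.

```python
import numpy as np
import tensorflow as tf
from sklearn.metrics import r2_score, mean_squared_error


def build_model(filters, activation, input_len=256):
    """Two Conv1D blocks with the given filter counts and shared activation,
    followed by a dense head for a single continuous target (assumed setup)."""
    n1, n2 = filters
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(input_len, 1)),
        tf.keras.layers.Conv1D(n1, kernel_size=3, padding="same", activation=activation),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Conv1D(n2, kernel_size=3, padding="same", activation=activation),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model


def evaluate(model, x, y):
    """Return (R², RMSE), the two scores reported per data split in Table 2."""
    pred = model.predict(x, verbose=0).ravel()
    return r2_score(y, pred), float(np.sqrt(mean_squared_error(y, pred)))


# The grid swept in Table 2: 3 activations × 6 filter pairs = 18 groups.
activations = ["relu", "tanh", "sigmoid"]
filter_pairs = [(8, 16), (8, 32), (8, 64), (16, 32), (16, 64), (32, 64)]

if __name__ == "__main__":
    # Synthetic stand-in data; the paper's real inputs and targets are not in the table.
    rng = np.random.default_rng(0)
    x = rng.normal(size=(64, 256, 1)).astype("float32")
    y = rng.normal(size=(64,)).astype("float32")
    for act in activations:
        for pair in filter_pairs:
            model = build_model(pair, act)
            model.fit(x, y, epochs=2, verbose=0)
            r2, rmse = evaluate(model, x, y)
            print(f"{act:8s} {pair}: R²={r2:.4f}  RMSE={rmse:.1f}")
```

In the paper's sweep, each of the 18 groups would be evaluated separately on the training, validation, and test splits, which yields the three R²/RMSE pairs per row shown above.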