2020 Nov 9;21:771. doi: 10.1186/s12864-020-07181-x

Table 2.

The best deep neural network architecture for each sub-sample of the training set, selected by prediction correlation on the tuning set

| Size (%) | Number of layers | Number of units per layer^a | L2^b | Dropout rate^c | Accuracy | MSEP^d |
|----------|------------------|-----------------------------|------|----------------|----------|--------|
| 1        | 4 | 5000(1)-1(2)-600(3)-800(4)   | 0.0600 | 1.0 | 0.090 | 30,589.3 |
| 3        | 4 | 5000(1)-300(2)-200(3)-4000(4)| 0.0675 | 1.0 | 0.137 | 29,649.9 |
| 5        | 3 | 400(1)-200(2)-900(3)         | 0.0100 | 0.5 | 0.145 | 30,408.7 |
| 7        | 2 | 500(1)-2000(2)               | 0.0450 | 0.8 | 0.166 | 29,062.4 |
| 10       | 2 | 800(1)-100(2)                | 0.0025 | 0.6 | 0.200 | 28,440.9 |
| 15       | 2 | 800(1)-900(2)                | 0.0050 | 0.5 | 0.236 | 27,755.0 |
| 20       | 4 | 600(1)-100(2)-500(3)-700(4)  | 0.0325 | 0.5 | 0.226 | 28,849.5 |
| 30       | 1 | 1000(1)                      | 0.0100 | 0.7 | 0.274 | 27,025.5 |
| 40       | 1 | 2000(1)                      | 0.0800 | 0.6 | 0.285 | 26,877.4 |
| 50       | 3 | 600(1)-4000(2)-100(3)        | 0.0975 | 0.5 | 0.285 | 27,250.3 |
| 60       | 1 | 300(1)                       | 0.0800 | 0.8 | 0.304 | 26,622.3 |
| 70       | 1 | 400(1)                       | 0.0800 | 0.5 | 0.309 | 26,506.4 |
| 80       | 1 | 800(1)                       | 0.0925 | 0.7 | 0.308 | 26,484.5 |
| 90       | 1 | 400(1)                       | 0.0800 | 0.5 | 0.307 | 26,710.1 |
| 100      | 1 | 500(1)                       | 0.0600 | 1.0 | 0.322 | 26,264.8 |

^a The number in parentheses indicates the corresponding hidden layer

^b L2 = ridge regularization

^c The dropout rate was applied to all layers except the output layer

^d MSEP = mean square error of prediction
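The layer-width notation in the table (footnote a) encodes each hidden layer's unit count followed by its layer index in parentheses, e.g. `5000(1)-1(2)-600(3)-800(4)` is a four-hidden-layer network with 5000, 1, 600, and 800 units. As a minimal sketch of how this notation can be decoded programmatically (the parser and its name are illustrative, not part of the paper):

```python
import re

def parse_architecture(spec):
    """Parse a spec like '5000(1)-1(2)-600(3)-800(4)' into an ordered
    list of hidden-layer widths. Per the table's footnote a, the number
    in parentheses is the hidden-layer index and the leading number is
    the unit count for that layer. (Illustrative helper, not from the
    source article.)"""
    layers = {}
    for units, idx in re.findall(r"(\d+)\s*\((\d+)\)", spec):
        layers[int(idx)] = int(units)
    # Return unit counts ordered by hidden-layer index.
    return [layers[i] for i in sorted(layers)]

print(parse_architecture("5000(1)-1(2)-600(3)-800(4)"))  # [5000, 1, 600, 800]
print(parse_architecture("400(1)-200(2)-900(3)"))        # [400, 200, 900]
```

Sorting by the parenthesized index rather than relying on text order makes the parser robust to the stray whitespace that appears in some table cells.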