Natl Sci Rev. 2018 Oct 8;6(1):74–86. doi: 10.1093/nsr/nwy108

Table 1.

Summary of hyper-parameters and default settings of gcForest. Bold font highlights hyper-parameters with relatively large influence; '?' indicates that the default value is unknown, or that different settings are generally required for different tasks.

Deep neural networks (e.g. convolutional neural networks):
  Type of activation functions:
    Sigmoid, ReLU, tanh, linear, etc.
  Architecture configurations:
    No. hidden layers: ?
    No. nodes in hidden layer: ?
    No. feature maps: ?
    Kernel size: ?
  Optimization configurations:
    Learning rate: ?
    Dropout: {0.25/0.50}
    Momentum: ?
    L1/L2 weight regularization penalty: ?
    Weight initialization: uniform, glorot_normal, glorot_uni, etc.
    Batch size: {32/64/128}

gcForest:
  Type of forests:
    Completely random forest, random forest, etc.
  Forest in multi-grained scanning:
    No. forests: {2}
    No. trees in each forest: {500}
    Tree growth: till pure leaf, or reach depth 100
    Sliding window size: {⌊d/16⌋, ⌊d/8⌋, ⌊d/4⌋}
  Forest in cascade:
    No. forests: {8}
    No. trees in each forest: {500}
    Tree growth: till pure leaf
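To make the gcForest defaults concrete, the sketch below builds the table's settings in Python with scikit-learn. It is a minimal illustration, not the authors' implementation: the helper names (`default_window_sizes`, `make_cascade_level`) are invented for this example, and a 'completely random forest' is approximated here by `ExtraTreesClassifier` with `max_features=1`, which picks a single random feature at each split.

```python
from math import floor

from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier


def default_window_sizes(d):
    """Default sliding-window sizes for multi-grained scanning over
    d raw features: floor(d/16), floor(d/8), floor(d/4)."""
    return [floor(d / 16), floor(d / 8), floor(d / 4)]


def make_cascade_level(n_forests=8, n_trees=500, random_state=0):
    """One cascade level with the table's defaults: 8 forests of 500 trees
    each, trees grown until leaves are pure (max_depth=None). Half of the
    forests are (approximately) completely random, half are standard
    random forests."""
    level = []
    for i in range(n_forests):
        if i % 2 == 0:
            # Approximation of a completely random forest: every split
            # tests exactly one randomly chosen feature.
            forest = ExtraTreesClassifier(
                n_estimators=n_trees, max_features=1,
                max_depth=None, random_state=random_state + i)
        else:
            forest = RandomForestClassifier(
                n_estimators=n_trees, max_features="sqrt",
                max_depth=None, random_state=random_state + i)
        level.append(forest)
    return level


print(default_window_sizes(64))   # [4, 8, 16]
print(len(make_cascade_level()))  # 8
```

For a 64-dimensional input this yields windows of size 4, 8 and 16; each window position is transformed by the two scanning forests into class-probability features before entering the cascade.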