BMC Bioinformatics. 2022 Nov 29;23:511. doi: 10.1186/s12859-022-05036-8

Table 5. Architecture of models (4, 5, 6) and (7, 8, 9)

Layers                     Model 4/5/6    Model 7/8/9
                           (neurons)      (neurons)
Layer 1—FullyConnected     Input layer    Input layer
Layer 2—GRU/LSTM/BiLSTM    50             50
Layer 3—GRU/LSTM/BiLSTM    20             20
Layer 4—FullyConnected     2              10
Layer 5—FullyConnected     -              5
Layer 6—FullyConnected     -              2

The number of layers and the number of neurons in each layer can vary, and the hyper-parameters can be tuned to improve the final performance. The split between trainable and non-trainable layers can also vary; however, transfer learning does not perform well when all layers are left trainable, and performance improves when some of the pretrained layers are frozen.
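
For concreteness, the stacks in Table 5 can be expressed in Keras roughly as follows. This is a minimal sketch, not the authors' code: the input shape, the activation functions, and the mapping of model numbers to recurrent cells (e.g. GRU for model 4, BiLSTM for model 9) are assumptions, and Layer 1, which the table lists as the input layer, is modeled here as a plain input.

from tensorflow.keras import layers, models

def build_model(rnn, deep_head, timesteps=30, n_features=8):
    """deep_head=False -> models 4/5/6; deep_head=True -> models 7/8/9.
    timesteps and n_features are placeholder values, not from the paper."""
    inputs = layers.Input(shape=(timesteps, n_features))  # Layer 1: input
    x = rnn(50, return_sequences=True)(inputs)            # Layer 2: 50 units
    x = rnn(20)(x)                                        # Layer 3: 20 units
    if deep_head:                                         # extra dense head
        x = layers.Dense(10, activation="relu")(x)        # Layer 4: 10 units
        x = layers.Dense(5, activation="relu")(x)         # Layer 5: 5 units
    outputs = layers.Dense(2, activation="softmax")(x)    # final 2-unit layer
    return models.Model(inputs, outputs)

def bilstm(units, **kwargs):
    """Bidirectional LSTM factory, so it can be swapped in like GRU/LSTM."""
    return layers.Bidirectional(layers.LSTM(units, **kwargs))

model_4 = build_model(layers.GRU, deep_head=False)  # GRU with the short head
model_9 = build_model(bilstm, deep_head=True)       # BiLSTM with the 10-5-2 head

Passing the cell constructor as an argument keeps a single builder for all six variants, since the two architectures differ only in the recurrent cell and the depth of the fully connected head.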
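The caption's point about trainable versus non-trainable layers can be sketched in the same way: a hypothetical fine-tuning setup that freezes the pretrained recurrent layers and retrains only the fully connected head. The checkpoint name, input shape, and freeze boundary below are illustrative assumptions; which layers to freeze is exactly the tunable choice the caption describes.

from tensorflow.keras import layers, models

# Rebuild the deeper (model 7/8/9-style) stack; shapes are placeholders.
model = models.Sequential([
    layers.GRU(50, return_sequences=True, input_shape=(30, 8)),  # pretrained
    layers.GRU(20),                                              # pretrained
    layers.Dense(10, activation="relu"),
    layers.Dense(5, activation="relu"),
    layers.Dense(2, activation="softmax"),
])
model.load_weights("source_task.weights.h5")  # hypothetical checkpoint path

# Freeze the recurrent feature extractor; leave the dense head trainable.
for layer in model.layers[:2]:
    layer.trainable = False

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_target, y_target, ...)  # fine-tune the head on the target data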