Front Pharmacol. 2020 Apr 17;11:269. doi: 10.3389/fphar.2020.00269

Table 2.

Hyperparameters for training the neural networks on gene expression data. All neural networks are fully connected, and the decoders have an architecture symmetric to the encoders.

Hyperparameter         Value
Molecular Encoder      GRU; hidden size 128; 2 layers
Expression Encoder     IN(978) → 256 → OUT(128)
Difference Encoder     IN(129) → 128 → OUT(10 + 10)
Discriminator          IN → 1024 → 512 → OUT(1)
Batch Normalization    After each linear layer in encoders
Activation Function    LeakyReLU
Learning Rate          0.0003
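
To make the table concrete, below is a minimal sketch of the expression encoder (IN(978) → 256 → OUT(128), batch normalization after each linear layer, LeakyReLU) and the discriminator (IN → 1024 → 512 → OUT(1)) with the listed learning rate of 0.0003. The use of PyTorch, the Adam optimizer, the discriminator input size, and the placement of the final activation are assumptions not stated in the table.

```python
import torch
import torch.nn as nn


class ExpressionEncoder(nn.Module):
    """Expression encoder from Table 2: IN(978) -> 256 -> OUT(128),
    batch normalization after each linear layer, LeakyReLU activations."""

    def __init__(self, in_dim=978, hidden_dim=256, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.BatchNorm1d(hidden_dim),
            nn.LeakyReLU(),
            nn.Linear(hidden_dim, out_dim),
            nn.BatchNorm1d(out_dim),
            nn.LeakyReLU(),  # assumption: activation also follows the output layer
        )

    def forward(self, x):
        return self.net(x)


class Discriminator(nn.Module):
    """Discriminator from Table 2: IN -> 1024 -> 512 -> OUT(1).
    The table does not give the input size, so it is a constructor argument."""

    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 1024),
            nn.LeakyReLU(),
            nn.Linear(1024, 512),
            nn.LeakyReLU(),
            nn.Linear(512, 1),
        )

    def forward(self, x):
        return self.net(x)


# Learning rate 0.0003 from the table; the choice of Adam is an assumption.
encoder = ExpressionEncoder()
optimizer = torch.optim.Adam(encoder.parameters(), lr=0.0003)
```

Per the table caption, a corresponding decoder would mirror the encoder architecture (128 → 256 → 978).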