BMC Bioinformatics. 2022 Feb 2;23:58. doi: 10.1186/s12859-021-04540-7

Fig. 2

The architecture of the prediction model. “conv” denotes a convolutional layer followed by batch normalization and a LeakyReLU nonlinearity. The readout layer is another conv block followed by a convolutional layer, both with kernel size 1. The loss is computed after each residual block with skip connections. The left dashed block, a residual block, is repeated N times with different weights, whereas the right one, a shared residual block, is applied M times with the same weights
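
The block structure in the caption can be made concrete with a short sketch. The following is a minimal PyTorch interpretation, not the authors' code: the convolution dimensionality (1-D here), channel width, kernel size, the default values of N and M, and the reuse of a single readout layer for every intermediate loss are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    """'conv' block from the caption: convolution -> batch norm -> LeakyReLU."""
    def __init__(self, channels, kernel_size):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              padding=kernel_size // 2)
        self.bn = nn.BatchNorm1d(channels)
        self.act = nn.LeakyReLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))

class ResidualBlock(nn.Module):
    """Conv blocks wrapped in a skip connection (the dashed blocks in Fig. 2)."""
    def __init__(self, channels, kernel_size):
        super().__init__()
        self.body = nn.Sequential(ConvBlock(channels, kernel_size),
                                  ConvBlock(channels, kernel_size))

    def forward(self, x):
        return x + self.body(x)

class Readout(nn.Module):
    """Readout: a conv block followed by a plain convolution, both kernel size 1."""
    def __init__(self, channels, out_channels):
        super().__init__()
        self.block = ConvBlock(channels, kernel_size=1)
        self.final = nn.Conv1d(channels, out_channels, kernel_size=1)

    def forward(self, x):
        return self.final(self.block(x))

class PredictionModel(nn.Module):
    # channels, kernel_size, N, and M are hypothetical defaults.
    def __init__(self, channels=64, kernel_size=3, out_channels=1, N=4, M=4):
        super().__init__()
        # N residual blocks, each with its own weights (left dashed block).
        self.blocks = nn.ModuleList(
            [ResidualBlock(channels, kernel_size) for _ in range(N)])
        # One shared residual block, applied M times with the same weights
        # (right dashed block).
        self.shared = ResidualBlock(channels, kernel_size)
        self.M = M
        self.readout = Readout(channels, out_channels)

    def forward(self, x):
        # One prediction per residual block, so a loss can be
        # computed after each block, as the caption describes.
        outputs = []
        for block in self.blocks:
            x = block(x)
            outputs.append(self.readout(x))
        for _ in range(self.M):
            x = self.shared(x)
            outputs.append(self.readout(x))
        return outputs

# Usage sketch (input stem mapping raw features to `channels` is omitted):
# model = PredictionModel()
# preds = model(torch.randn(2, 64, 100))  # list of N + M prediction tensors
```

The deep supervision (a readout and loss after every block) follows the caption; whether the paper shares one readout across all blocks, as done here, is not stated and is an assumption.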