Biomolecules. 2022 Jul 17;12(7):995. doi: 10.3390/biom12070995

Figure 1.

The architecture of Enhancer-LSTMAtt. Conv1D, Batch Norm, Attention, Activation, Dense, and Max Pool 1D denote the 1D CNN layer, the batch normalization layer, the feed-forward attention layer, the activation function, the fully connected layer, and the max pooling layer, respectively.
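
To relate the blocks in the figure to code, the following is a minimal sketch of how a Conv1D → Batch Norm → Activation → Max Pool 1D → LSTM → feed-forward attention → Dense stack could be wired with TensorFlow/Keras. The filter counts, kernel size, LSTM width, input sequence length, and the simple additive attention used here are illustrative assumptions, not the published hyperparameters of Enhancer-LSTMAtt.

```python
# Illustrative sketch of the layer stack shown in Figure 1 (hypothetical
# hyperparameters; not the authors' exact configuration).
import tensorflow as tf
from tensorflow.keras import layers, models


def feed_forward_attention(x):
    """Simple feed-forward (additive) attention over the time axis:
    score each time step with a small Dense layer, softmax the scores,
    and return the attention-weighted sum of the step representations."""
    scores = layers.Dense(1, activation="tanh")(x)   # (batch, steps, 1)
    weights = layers.Softmax(axis=1)(scores)         # normalize over time steps
    # Weighted sum over time -> (batch, features)
    return layers.Lambda(
        lambda t: tf.reduce_sum(t[0] * t[1], axis=1)
    )([x, weights])


def build_model(seq_len=200, vocab=4):
    inputs = layers.Input(shape=(seq_len, vocab))      # one-hot encoded DNA sequence
    x = layers.Conv1D(64, 8, padding="same")(inputs)   # Conv1D
    x = layers.BatchNormalization()(x)                 # Batch Norm
    x = layers.Activation("relu")(x)                   # Activation
    x = layers.MaxPooling1D(pool_size=2)(x)            # Max Pool 1D
    x = layers.LSTM(64, return_sequences=True)(x)      # LSTM over the pooled features
    x = feed_forward_attention(x)                      # Attention
    x = layers.Dense(64, activation="relu")(x)         # Dense
    outputs = layers.Dense(1, activation="sigmoid")(x) # enhancer / non-enhancer score
    return models.Model(inputs, outputs)


model = build_model()
model.summary()
```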