. 2018 Dec 6;35(14):2386–2394. doi: 10.1093/bioinformatics/bty977

Fig. 1.

Architecture of the proposed CapsNet. The input is a 33-residue peptide in the 6D quantitative coding. The first two layers are 1D convolutional layers, each with 200 channels and with kernel sizes 1 and 9, respectively. The PrimaryCaps layer is a convolutional capsule layer with kernels of size 20 and 60 channels of 8D capsules, as described by Sabour et al. (2017). The PTMCaps layer has two 10D capsules representing the two states of the input peptide: whether or not it contains the PTM site of interest. The L2-norm of each capsule vector is calculated to indicate the probability of the corresponding state.
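The layer dimensions in the caption can be traced with simple "valid" 1D-convolution arithmetic. The sketch below is illustrative only, assuming stride 1 for every convolution (strides are not stated in the caption) and toy capsule vectors in place of real model outputs:

```python
import numpy as np

def conv1d_out_len(n_in, kernel, stride=1):
    """Output length of a 'valid' (no-padding) 1D convolution."""
    return (n_in - kernel) // stride + 1

# Shape trace through the architecture described in the caption
# (stride 1 assumed throughout; the caption does not state strides).
L = 33                              # 33-residue peptide window, 6D coding
L = conv1d_out_len(L, kernel=1)     # Conv1D, 200 channels -> 33 x 200
L = conv1d_out_len(L, kernel=9)     # Conv1D, 200 channels -> 25 x 200
L = conv1d_out_len(L, kernel=20)    # PrimaryCaps conv     -> 6 positions
n_primary_caps = L * 60             # 60 channels of 8D capsules per position

# PTMCaps: two 10D capsules; the L2-norm of each capsule vector is read
# as the probability of that state (PTM site present / absent).
ptm_caps = np.array([[0.1] * 10,    # toy vectors, not real model output
                     [0.3] * 10])
probs = np.linalg.norm(ptm_caps, axis=1)
```

Under the stride-1 assumption this yields 360 primary capsules feeding the routing into the two PTMCaps vectors.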