
Table 5.

Prediction results of hERG blockers/nonblockers classification models developed by capsule networks with different architectures.

| Capsule network architecture | SE (%) | SP (%) | MCC | SD | Q (%) |
|---|---|---|---|---|---|
| Original CapsNet | 80.4 | 86.7 | 0.673 | 0.0141 | 84.1 |
| FC+FC | 82.6 | 86.7 | 0.694 | 0.0195 | 85.0 |
| Conv+FC | 82.2 | 86.4 | 0.687 | 0.0166 | 84.6 |
| Conv+FC+FC (Conv-CapsNet) | 88.6 | 89.1 | 0.774 | 0.0109 | 88.9 |
| Conv+Conv+FC+FC | 84.5 | 85.3 | 0.693 | 0.0142 | 84.9 |
| Conv+Conv+Conv+FC+FC | 81.9 | 86.9 | 0.685 | 0.0173 | 84.9 |
| One RBM | 83.1 | 86.5 | 0.694 | 0.0182 | 84.9 |
| Two RBMs (RBM-CapsNet) | 84.3 | 89.0 | 0.734 | 0.0160 | 87.0 |
| Three RBMs | 84.5 | 85.5 | 0.696 | 0.0160 | 85.0 |
| Four RBMs | 81.2 | 86.0 | 0.673 | 0.0108 | 83.9 |
| Five RBMs | 84.1 | 86.4 | 0.701 | 0.0156 | 85.4 |

*Conv, convolutional operation; FC, fully connected operation; RBM, restricted Boltzmann machine; Conv-CapsNet, convolution-capsule network; RBM-CapsNet, restricted Boltzmann machine-capsule network. The models were trained on Doddareddy's training set, and five-fold cross-validation was used to monitor training performance. SE (%), sensitivity; SP (%), specificity; MCC, Matthews correlation coefficient; SD, standard deviation; Q (%), overall accuracy. Conv-CapsNet and RBM-CapsNet showed the best performance.
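
For readers checking the reported figures, below is a minimal sketch (not from the paper) of how SE, SP, MCC, and Q are conventionally computed from confusion-matrix counts of a binary classifier, treating hERG blockers as the positive class; the function name and the example counts are purely illustrative.

```python
import math

def classification_metrics(tp: int, tn: int, fp: int, fn: int):
    """Return sensitivity (SE), specificity (SP), Matthews correlation
    coefficient (MCC), and overall accuracy (Q) from confusion-matrix counts."""
    se = tp / (tp + fn)                      # sensitivity: recall on blockers
    sp = tn / (tn + fp)                      # specificity: recall on nonblockers
    q = (tp + tn) / (tp + tn + fp + fn)      # overall accuracy
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return se, sp, mcc, q

# Example with hypothetical counts (not taken from the study):
se, sp, mcc, q = classification_metrics(tp=443, tn=891, fp=109, fn=57)
print(f"SE={se:.1%}  SP={sp:.1%}  MCC={mcc:.3f}  Q={q:.1%}")
```

In a five-fold cross-validation setting such as the one described above, these metrics would be computed per fold and then averaged, with SD reported as the standard deviation across folds.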