PeerJ. 2020 Dec 18;8:e10555. doi: 10.7717/peerj.10555

Table 1. The comparison of the Macrel AMP classifier's performance with state-of-the-art methods shows that Macrel is among the best methods across a range of metrics.

The same test set (Xiao et al., 2013) was used to calculate the general performance statistics of the different classifiers, and the best value per column is in bold. Macrel refers to the Macrel classifier, while MacrelX is the same system trained with the Xiao et al. (2013) training set.

Method        Acc.   Sp.    Sn.    Pr.    MCC    Reference
AmPEP*        0.98   –      –      –      0.92   Bhadra et al. (2018)
MacrelX       0.95   0.97   0.94   0.97   0.91   This study
iAMP-2L       0.95   0.92   0.97   0.92   0.90   Xiao et al. (2013)
Macrel        0.95   0.998  0.90   0.998  0.90   This study
AMAP          0.92   0.86   0.98   0.88   0.85   Gull, Shamim & Minhas (2019)
CAMPR3-NN     0.80   0.71   0.89   0.75   0.61   Waghu et al. (2016)
APSv2         0.78   0.57   0.99   0.70   0.61   Veltri, Kamath & Shehu (2018)
CAMPR3-DA     0.72   0.49   0.94   0.65   0.48   Waghu et al. (2016)
CAMPR3-SVM    0.68   0.40   0.95   0.61   0.42   Waghu et al. (2016)
CAMPR3-RF     0.65   0.34   0.96   0.59   0.39   Waghu et al. (2016)
iAMPpred      0.64   0.32   0.96   0.59   0.37   Meher et al. (2017)

Notes:

* These data were retrieved from the original article; metrics not reported there are marked with a dash.

Acc, Accuracy; Sp, Specificity; Sn, Sensitivity; Pr, Precision; MCC, Matthews Correlation Coefficient.
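
As a reading aid, the sketch below shows how the five reported metrics are conventionally computed from the counts of a binary confusion matrix (true/false positives and negatives). This is a generic illustration, not code from the Macrel paper, and the example counts are hypothetical.

```python
import math

def classification_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Standard binary-classification metrics from confusion-matrix counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)   # Accuracy
    sp = tn / (tn + fp)                     # Specificity (true negative rate)
    sn = tp / (tp + fn)                     # Sensitivity (recall)
    pr = tp / (tp + fp)                     # Precision (positive predictive value)
    # Matthews correlation coefficient: balanced measure in [-1, 1]
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    )
    return {"Acc": acc, "Sp": sp, "Sn": sn, "Pr": pr, "MCC": mcc}

# Hypothetical counts, chosen only to exercise the function:
print(classification_metrics(tp=850, tn=910, fp=10, fn=70))
```

Note that a classifier can trade sensitivity against specificity (as the table shows for Macrel versus MacrelX), which is why MCC, a single balanced summary of all four counts, is often the most informative column for comparison.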