
Table 1.

Statistics for the best models for various learners

Positive data  Learner  Recall  Precision  Sensitivity  Specificity  F-measure  Accuracy
Human          SVM      1.000   0.501      1.000        1.000        0.668      0.501
               DT       0.840   0.839      0.840        0.840        0.835      0.835
               MLP      0.835   0.835      0.835        0.835        0.835      0.835
               NB       0.845   0.843      0.845        0.845        0.838      0.837
               BLR      0.840   0.800      0.840        0.840        0.765      0.741
               RF       0.872   0.872      0.872        0.872        0.872      0.872
Rodent         SVM      1.000   0.501      1.000        1.000        0.668      0.501
               DT       0.859   0.851      0.859        0.859        0.840      0.837
               MLP      0.907   0.904      0.907        0.907      0.890      0.888
               NB       0.897   0.890      0.897        0.897      0.871      0.866
               BLR      0.864   0.812      0.864        0.864      0.761      0.728
               RF       0.918   0.918      0.918        0.918      0.916      0.916

Note: Best results for each learner using either human or rodent positive data, with pseudo hairpins as negative data. RF performs best on both datasets (highest F-measure in each). Overall performance does not differ greatly among the classifiers, even without parameter optimization. SVM, support vector machine; DT, decision tree; MLP, multi-layer perceptron; NB, naïve Bayes; BLR, Bayesian logistic regression; RF, random forest.
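As a quick consistency check, the short sketch below (Python; illustrative only, not code from the paper) shows how the F-measure column follows from the precision and recall columns via the harmonic mean. Small deviations in some rows would be expected if the table reports per-class weighted averages rather than the pooled harmonic mean.

```python
# Illustrative check (not from the paper): F-measure as the harmonic mean
# of precision and recall, applied to rows of Table 1.

def f_measure(precision: float, recall: float) -> float:
    """F1 score: harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# Human SVM row: precision 0.501, recall 1.000 -> 0.668 (matches the table)
print(round(f_measure(0.501, 1.000), 3))

# Human MLP row: precision 0.835, recall 0.835 -> 0.835 (matches the table)
print(round(f_measure(0.835, 0.835), 3))
```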