PeerJ. 2017 Mar 29;5:e3095. doi: 10.7717/peerj.3095

Table 6. Testing performance on dataset II.

Type              SI            AA            Decision fusion             Feature fusion
Classifier        SVM    NN     SVM    NN     SVM    NN     SVM    NN     SVM    NN
Alpha                                         0.73   0.69   0.80   0.76
Hamming loss      0.103  0.119  0.064  0.063  0.045  0.054  0.054  0.063  0.083  0.098
Accuracy          0.790  0.800  0.883  0.885  0.906  0.898  0.879  0.889  0.823  0.831
Precision         0.857  0.829  0.901  0.906  0.942  0.918  0.947  0.907  0.889  0.856
Recall            0.825  0.831  0.908  0.908  0.924  0.920  0.885  0.911  0.847  0.856
F1 score          0.835  0.829  0.904  0.906  0.928  0.919  0.893  0.908  0.859  0.855
Subset accuracy   0.688  0.739  0.834  0.841  0.847  0.854  0.841  0.847  0.726  0.783
Macro-precision   0.921  0.744  0.940  0.941  0.962  0.945  0.967  0.903  0.927  0.806
Macro-recall      0.741  0.777  0.881  0.871  0.887  0.879  0.854  0.881  0.791  0.787
Macro-F1          0.801  0.758  0.902  0.897  0.921  0.905  0.905  0.889  0.844  0.794
Micro-precision   0.864  0.822  0.904  0.907  0.943  0.919  0.953  0.904  0.901  0.857
Micro-recall      0.829  0.832  0.910  0.910  0.925  0.922  0.885  0.913  0.850  0.857
Micro-F1          0.846  0.827  0.907  0.908  0.934  0.921  0.918  0.909  0.875  0.857

Note:

The best classification performance (based on different criteria) is indicated in bold for each technique.
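For readers who want to reproduce this style of evaluation, the quantities in Table 6 are standard multi-label measures: Hamming loss, subset accuracy, and sample-, macro-, and micro-averaged precision, recall, and F1. The sketch below shows one common way to compute them with scikit-learn; it is not the authors' code, and the label matrices Y_true and Y_pred are hypothetical placeholders for the true annotations and a classifier's predictions.

```python
# Minimal sketch (not from the paper): standard multi-label metrics via scikit-learn.
import numpy as np
from sklearn.metrics import (accuracy_score, f1_score, hamming_loss,
                             jaccard_score, precision_score, recall_score)

# Hypothetical binary label-indicator matrices (samples x labels).
Y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0], [0, 0, 1]])
Y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 1, 1], [0, 0, 1]])

metrics = {
    "Hamming loss":    hamming_loss(Y_true, Y_pred),
    # Example-based accuracy, commonly defined as the mean Jaccard similarity
    # between the true and predicted label sets of each sample.
    "Accuracy":        jaccard_score(Y_true, Y_pred, average="samples"),
    "Precision":       precision_score(Y_true, Y_pred, average="samples", zero_division=0),
    "Recall":          recall_score(Y_true, Y_pred, average="samples", zero_division=0),
    "F1 score":        f1_score(Y_true, Y_pred, average="samples", zero_division=0),
    # Subset accuracy counts a sample as correct only if all labels match exactly.
    "Subset accuracy": accuracy_score(Y_true, Y_pred),
    "Macro-precision": precision_score(Y_true, Y_pred, average="macro", zero_division=0),
    "Macro-recall":    recall_score(Y_true, Y_pred, average="macro", zero_division=0),
    "Macro-F1":        f1_score(Y_true, Y_pred, average="macro", zero_division=0),
    "Micro-precision": precision_score(Y_true, Y_pred, average="micro", zero_division=0),
    "Micro-recall":    recall_score(Y_true, Y_pred, average="micro", zero_division=0),
    "Micro-F1":        f1_score(Y_true, Y_pred, average="micro", zero_division=0),
}

for name, value in metrics.items():
    print(f"{name}: {value:.3f}")
```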