PLoS ONE. 2014 Jan 8;9(1):e82450. doi: 10.1371/journal.pone.0082450

Table 4. Performance comparison of different datasets.

Performance       Method      Dataset
                              E (52)         M (305)        META-22+M (327)  E+M (357)
Accuracy (%)      wHLFS       68.99 (5.53)   65.17 (9.38)   64.85 (5.97)     69.68 (8.77)
                  wHLFS+SVM   65.22 (6.40)   60.41 (10.35)  61.17 (10.27)    72.10 (9.17)
                  wHLFS+RF    71.00 (5.10)   65.57 (10.03)  66.30 (7.96)     74.76 (7.68)
Specificity (%)   wHLFS       78.97 (11.54)  75.88 (13.26)  73.38 (12.85)    77.13 (15.80)
                  wHLFS+SVM   71.51 (13.91)  65.22 (13.21)  64.12 (11.21)    76.51 (14.60)
                  wHLFS+RF    77.61 (12.26)  75.88 (9.88)   75.96 (11.74)    81.43 (12.99)
Sensitivity (%)   wHLFS       56.76 (9.10)   52.20 (14.65)  54.51 (12.03)    60.55 (10.82)
                  wHLFS+SVM   57.53 (8.40)   54.51 (14.77)  57.75 (16.25)    66.76 (7.63)
                  wHLFS+RF    62.86 (9.23)   53.08 (15.23)  54.73 (11.94)    66.65 (11.57)

Comparison of MCI converter/non-converter classification across the different datasets in terms of accuracy, specificity, and sensitivity. The methods applied here are combinations of wHLFS with different classifiers (SVM and random forest). The feature datasets are META (E), MRI (M), META without baseline cognitive scores combined with MRI (META-22+M), and META combined with MRI (E+M); the number in parentheses after each dataset name indicates the number of features in that dataset. Parameters are selected by five-fold cross-validation on the training dataset. Standard deviations are shown in parentheses alongside each mean. The bolded and underlined entry denotes the best performance for each method.
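
The evaluation protocol described in the caption (classifier hyperparameters tuned by five-fold cross-validation on the training split, then accuracy, sensitivity, and specificity measured on held-out data) can be sketched as follows. This is a minimal illustration assuming a scikit-learn setup; the wHLFS feature-selection step and the actual study data are not reproduced here, so the array shapes, labels, and parameter grids below are illustrative assumptions only, not the authors' exact configuration.

```python
# Sketch of the caption's protocol: hyperparameters for the SVM and
# random-forest classifiers are chosen by five-fold cross-validation on
# the training split only, then performance is measured on a held-out
# test split. The wHLFS feature-selection step is assumed to have been
# applied already; X holds the selected features, y the converter (1) /
# non-converter (0) labels. Data below are synthetic placeholders.
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold, train_test_split
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 357))          # e.g. 357 features for the E+M dataset
y = rng.integers(0, 2, size=200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
models = {
    "SVM": GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]}, cv=cv),
    "RF": GridSearchCV(RandomForestClassifier(random_state=0),
                       {"n_estimators": [100, 500], "max_depth": [None, 5]}, cv=cv),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)                              # tuning uses training data only
    pred = model.predict(X_te)
    acc = accuracy_score(y_te, pred)
    sens = recall_score(y_te, pred, pos_label=1)       # sensitivity = true positive rate
    spec = recall_score(y_te, pred, pos_label=0)       # specificity = true negative rate
    print(f"{name}: acc={acc:.3f} sens={sens:.3f} spec={spec:.3f}")
```

In this sketch, sensitivity and specificity are computed as the recall of the positive (converter) and negative (non-converter) classes, respectively, matching the three metrics reported in Table 4.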