
Table 5.

Experimental comparison of the number of false negatives found on the test sets by the different machine learning methods.

                GP           SVM-K1        SVM-K2        SVM-K3        MP            RF
best            2            6             6             6             5             6
average (SEM)   9.82 (0.44)  13.26 (0.51)  12.60 (0.35)  14.08 (0.39)  12.88 (0.51)  13.38 (0.49)

Each method was independently run 50 times, each time using a different training/test partition of the validation dataset (see text for details). The first row lists the methods: Genetic Programming (GP), Support Vector Machines (SVM) with kernels K1-K3, Multilayer Perceptron (MP), and Random Forest (RF). The second row shows the best (lowest) number of incorrectly classified instances obtained on the test set over the 50 runs, and the third row reports the average performance of each group of 50 runs on their test sets (standard error of the mean in parentheses).
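The evaluation protocol summarized above can be illustrated with a short sketch: repeat a random training/test split 50 times, count the false negatives each classifier produces on the test set, and report the best run and the mean with its standard error. This is not the authors' code; the dataset is a synthetic placeholder, the paper's GP method has no scikit-learn counterpart, and the kernels labelled K1-K3 are not specified here, so three common kernels are used as stand-ins.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Placeholder binary-classification data standing in for the validation dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

models = {
    "SVM-K1": SVC(kernel="linear"),          # assumed kernel choices; the paper's
    "SVM-K2": SVC(kernel="poly", degree=2),  # K1-K3 kernels are not defined in
    "SVM-K3": SVC(kernel="rbf"),             # this excerpt
    "MP": MLPClassifier(max_iter=1000),
    "RF": RandomForestClassifier(n_estimators=100),
}

n_runs = 50
false_negatives = {name: [] for name in models}

for run in range(n_runs):
    # A different random training/test partition for every run.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=run)
    for name, model in models.items():
        model.fit(X_tr, y_tr)
        y_pred = model.predict(X_te)
        # Confusion matrix layout for labels [0, 1]: [[TN, FP], [FN, TP]].
        fn = confusion_matrix(y_te, y_pred, labels=[0, 1])[1, 0]
        false_negatives[name].append(fn)

for name, counts in false_negatives.items():
    counts = np.asarray(counts, dtype=float)
    sem = counts.std(ddof=1) / np.sqrt(len(counts))  # standard error of the mean
    print(f"{name}: best={counts.min():.0f}  average={counts.mean():.2f} (SEM {sem:.2f})")

The printed summary mirrors the structure of the table: one "best" value and one "average (SEM)" value per method, computed over the 50 independent runs.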