Table 7. Experiment with attributes removed from the training set: performance evaluation of the ML algorithms (Random forest, Adaboost, and Decision tree) on the validation set, reported as correctly classified instances (accuracy, %). Columns 1–13 correspond to the combinations of attributes in the training set.
Algorithm | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Random forest | 61.43 | 62.86 | 64.29 | 61.43 | 62.86 | 62.86 | 64.23 | 60.00 | 58.57 | 60.00 | 62.86 | 58.57 | 54.23 |
Adaboost | 71.43 | 71.43 | 71.43 | 71.43 | 71.43 | 71.43 | 71.43 | 71.43 | 71.43 | 71.43 | 71.43 | 71.43 | 70.00 |
Decision tree | 60.00 | 58.57 | 58.57 | 58.57 | 58.57 | 58.57 | 58.57 | 58.57 | 57.14 | 74.29 | 62.86 | 54.29 | 58.57 |
We observe that Decision tree obtained the best accuracy (74.29%) in experiment 10.
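As a rough illustration of this experimental setup, the sketch below removes a hypothetical combination of attributes, trains the three classifiers, and reports the percentage of correctly classified instances on a held-out validation set. The dataset, the attribute combinations, and all hyperparameters are placeholders, not the ones used in this work.

```python
# Minimal sketch (assumed setup): accuracy of Random forest, Adaboost and Decision tree
# on a validation set when combinations of attributes are removed from the feature set.
from sklearn.datasets import load_breast_cancer          # stand-in dataset, not the paper's data
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

# Hypothetical attribute combinations: each entry lists column indices removed in one experiment.
combinations = [[0], [0, 1], [2, 3], [4]]                 # illustrative only, not the 13 real ones

models = {
    "Random forest": RandomForestClassifier(random_state=42),
    "Adaboost": AdaBoostClassifier(random_state=42),
    "Decision tree": DecisionTreeClassifier(random_state=42),
}

for i, removed in enumerate(combinations, start=1):
    keep = [c for c in range(X.shape[1]) if c not in removed]
    for name, model in models.items():
        model.fit(X_train[:, keep], y_train)
        acc = accuracy_score(y_val, model.predict(X_val[:, keep]))
        print(f"Experiment {i} | {name}: {100 * acc:.2f}% correctly classified")
```

The same removed attributes are dropped from both the training and the validation partitions so that each model is evaluated on the exact feature subset it was trained on.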