Table 3. Parameter settings evaluated for RIPPER, RIDOR, and PART.
Nr | RIPPER | RIDOR | PART |
---|---|---|---|
1 | -F 3 -N 2.0 -O 10 | -F 3 -S 1 -N 2.0 -A | -R -B -M 2 -N 3 |
2 | -F 3 -N 5.0 -O 10 | -F 3 -S 1 -N 5.0 -A | -R -B -M 5 -N 3 |
3 | -F 3 -N 10.0 -O 10 | -F 3 -S 1 -N 10.0 -A | -R -B -M 10 -N 3 |
4 | -F 10 -N 2.0 -O 10 | -F 10 -S 1 -N 2.0 -A | -R -B -M 2 -N 10 |
5 | -F 10 -N 5.0 -O 10 | -F 10 -S 1 -N 5.0 -A | -R -B -M 5 -N 10 |
6 | -F 10 -N 10.0 -O 10 | -F 10 -S 1 -N 10.0 -A | -R -B -M 10 -N 10 |
7 | -F 100 -N 2.0 -O 10 | -F 20 -S 1 -N 2.0 -A | -R -B -M 2 -N 100 |
8 | -F 100 -N 5.0 -O 10 | -F 20 -S 1 -N 5.0 -A | -R -B -M 5 -N 100 |
9 | -F 100 -N 10.0 -O 10 | -F 20 -S 1 -N 10.0 -A | -R -B -M 10 -N 100 |
10 | -F 3 -N 2.0 -O 100 | | -R -M 2 -N 3 |
11 | -F 3 -N 5.0 -O 100 | | -R -M 5 -N 3 |
12 | -F 3 -N 10.0 -O 100 | | -R -M 10 -N 3 |
13 | -F 10 -N 2.0 -O 100 | | -R -M 2 -N 10 |
14 | -F 10 -N 5.0 -O 100 | | -R -M 5 -N 10 |
15 | -F 10 -N 10.0 -O 100 | | -R -M 10 -N 10 |
16 | -F 100 -N 2.0 -O 100 | | -R -M 2 -N 100 |
17 | -F 100 -N 5.0 -O 100 | | -R -M 5 -N 100 |
18 | -F 100 -N 10.0 -O 100 | | -R -M 10 -N 100 |
19 | | | -B -M 2 -C 0.25 |
20 | | | -B -M 2 -C 0.1 |
21 | | | -B -M 5 -C 0.25 |
22 | | | -B -M 5 -C 0.1 |
23 | | | -B -M 10 -C 0.25 |
24 | | | -B -M 10 -C 0.1 |
25 | | | -M 2 -C 0.25 |
26 | | | -M 2 -C 0.1 |
27 | | | -M 5 -C 0.25 |
28 | | | -M 5 -C 0.1 |
29 | | | -M 10 -C 0.25 |
30 | | | -M 10 -C 0.1 |
RIPPER: F: number of folds for reduced error pruning; N: minimal weight of instances within a split; O: number of optimization runs.
RIDOR: F: number of folds for reduced error pruning; S: number of shuffles for randomization; N: minimal weight of instances within a split; A: use the error rate of all the data to select the default class in each step.
PART: C: confidence threshold for pruning; M: minimum number of instances per leaf; R: use reduced error pruning; N: number of folds for reduced error pruning; B: use binary splits for nominal attributes.
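The option strings in Table 3 follow the command-line syntax of the Weka implementations of the three rule learners (JRip for RIPPER, Ridor, and PART). The following Java fragment is a minimal sketch of how such a configuration grid can be instantiated programmatically by passing the option strings of selected rows to the corresponding classifiers via `setOptions`; the class name `Table3Configurations` is purely illustrative, and in recent Weka releases Ridor is distributed as a separate package rather than in the core `weka.classifiers.rules` package.

```java
import java.util.ArrayList;
import java.util.List;

import weka.classifiers.Classifier;
import weka.classifiers.rules.JRip;
import weka.classifiers.rules.PART;
import weka.classifiers.rules.Ridor;
import weka.core.Utils;

/** Illustrative mapping of the option strings in Table 3 onto Weka rule learners. */
public class Table3Configurations {

    public static void main(String[] args) throws Exception {
        // RIPPER (JRip), configuration Nr 1: 3 REP folds, minimal weight 2.0, 10 optimization runs.
        JRip ripper = new JRip();
        ripper.setOptions(Utils.splitOptions("-F 3 -N 2.0 -O 10"));

        // RIDOR, configuration Nr 1: 3 REP folds, 1 shuffle, minimal weight 2.0,
        // default class selected on the error rate of all the data (-A).
        Ridor ridor = new Ridor();
        ridor.setOptions(Utils.splitOptions("-F 3 -S 1 -N 2.0 -A"));

        // PART, configuration Nr 19: binary splits, min. 2 instances per leaf, pruning confidence 0.25.
        PART part = new PART();
        part.setOptions(Utils.splitOptions("-B -M 2 -C 0.25"));

        // The 18 RIPPER settings (rows 1-18) form a grid over F, N, and O.
        List<Classifier> ripperGrid = new ArrayList<>();
        for (String o : new String[] {"10", "100"}) {
            for (String f : new String[] {"3", "10", "100"}) {
                for (String n : new String[] {"2.0", "5.0", "10.0"}) {
                    JRip r = new JRip();
                    r.setOptions(Utils.splitOptions("-F " + f + " -N " + n + " -O " + o));
                    ripperGrid.add(r);
                }
            }
        }
    }
}
```

Each configured classifier can then be trained and evaluated in the usual way, e.g. with `buildClassifier` and `weka.classifiers.Evaluation`.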