Given:
- (x1, y1), …, (xm, ym), where xi ∈ χ and yi ∈ {−1, +1}
- ε : false negative rate upper bound
- λ : indecision weight
- τ : stopping condition threshold
- T : maximum allowed number of rounds
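For concreteness, these inputs could be collected into a small configuration object before training. The sketch below is illustrative only; the name TrainConfig and the field names eps, lam, tau, and T are assumptions, not taken from this section.

```python
from dataclasses import dataclass

@dataclass
class TrainConfig:
    """Hyperparameters from the 'Given' list above (names are illustrative)."""
    eps: float  # ε: false negative rate upper bound
    lam: float  # λ: indecision weight
    tau: float  # τ: stopping condition threshold
    T: int      # maximum allowed number of rounds
```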
1: procedure Train(data)
2:   Initialize the weight of example i at iteration 1: D1(i) = 1/m for i = 1, …, m
3:   for t = 1, …, T do
4:     Train a weak classifier using distribution Dt
5:     Find the weak learner at iteration t: ht
6:     Choose the weak learner weight
7:     Update the example weights to Dt+1, where Zt is the normalization factor
8:     if |δFPR| ≤ 0 and δINDR ≥ τ then break
9:   Return the final classifier: H(x)
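The procedure follows the familiar AdaBoost skeleton, so a minimal Python sketch is given below under that assumption: decision stumps as weak learners, the standard weight αt = ½ ln((1 − errt)/errt) in step 6, and the exponential reweighting normalized by Zt in step 7. This section does not specify how δFPR and δINDR are computed, so compute_deltas is a hypothetical placeholder, and the ε and λ inputs are not used in this sketch.

```python
import numpy as np

def train_stump(X, y, D):
    """Weighted decision stump: pick (feature, threshold, polarity) with lowest weighted error."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(sign * (X[:, j] - thr) >= 0, 1, -1)
                err = D[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, j, thr, sign)
    return best

def stump_predict(X, stump):
    _, j, thr, sign = stump
    return np.where(sign * (X[:, j] - thr) >= 0, 1, -1)

def compute_deltas(scores, y):
    # Hypothetical placeholder: delta_FPR / delta_INDR are not defined in this
    # section, so these values are chosen so the early stop never fires.
    return 1.0, 0.0

def train(X, y, T=50, tau=0.1):
    m = len(y)
    D = np.full(m, 1.0 / m)                      # step 2: D1(i) = 1/m
    ensemble = []
    for t in range(T):                           # step 3: for t = 1, ..., T
        stump = train_stump(X, y, D)             # steps 4-5: weak learner h_t on D_t
        err = np.clip(stump[0], 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)    # step 6: assumed AdaBoost weight
        pred = stump_predict(X, stump)
        D = D * np.exp(-alpha * y * pred)        # step 7: exponential update ...
        D = D / D.sum()                          # ... normalized by Z_t
        ensemble.append((alpha, stump))
        scores = sum(a * stump_predict(X, s) for a, s in ensemble)
        d_fpr, d_indr = compute_deltas(scores, y)
        if abs(d_fpr) <= 0 and d_indr >= tau:    # step 8: stopping condition
            break
    def H(X_new):                                # step 9: final classifier H(x)
        return np.sign(sum(a * stump_predict(X_new, s) for a, s in ensemble))
    return H

# Example usage on a toy 1-D problem:
# X = np.array([[0.], [1.], [2.], [3.]]); y = np.array([-1, -1, 1, 1])
# H = train(X, y, T=10); print(H(X))
```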