Algorithm 1.
1. Initialize the weight distribution of the training samples: D1 = (ω11, ω12, ..., ω1i, ..., ω1N), ω1i = 1/N, i = 1, 2, ..., N
2. For m = 1, 2, ⋯, M iterations:
   (1) Train the base learner Gm(x) on the sample set weighted by Dm.
   (2) Calculate the maximum error on the training set: Em = max_i ∣yi − Gm(xi)∣
   (3) Calculate the relative (linear) error of each sample: emi = ∣yi − Gm(xi)∣ / Em
   (4) Calculate the regression error rate: em = Σ_{i=1}^{N} ωmi · emi
   (5) Calculate the weight coefficient of the weak learner: αm = em / (1 − em)
   (6) Update the weight distribution of the sample set: Dm+1 = (ωm+1,1, ωm+1,2, ..., ωm+1,i, ..., ωm+1,N), with ωm+1,i = (ωmi / Zm) · αm^(1 − emi), where Zm = Σ_{i=1}^{N} ωmi · αm^(1 − emi) is a normalization factor.
3. Output the final strong learner f(x), taken as the weighted median of the base predictions Gm(x), m = 1, 2, ..., M, with each learner weighted by ln(1/αm).
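The loop above matches the AdaBoost.R2 scheme for regression, and can be sketched as follows. This is a minimal illustration, not the paper's implementation: the one-dimensional regression stump `fit_stump` and the helper names are assumptions chosen to keep the example self-contained, and practical code would swap in a stronger base learner (e.g. a depth-limited regression tree).

```python
import numpy as np

def fit_stump(x, y, w):
    """Hypothetical weighted base learner: a 1-D regression stump that
    predicts the weighted mean of y on each side of the best threshold."""
    best_err, best = np.inf, None
    for t in np.unique(x)[:-1]:          # candidate split points
        left = x <= t
        cl = np.average(y[left], weights=w[left])
        cr = np.average(y[~left], weights=w[~left])
        err = np.sum(w[left] * (y[left] - cl) ** 2) \
            + np.sum(w[~left] * (y[~left] - cr) ** 2)
        if err < best_err:
            best_err, best = err, (t, cl, cr)
    t, cl, cr = best
    return lambda z, t=t, cl=cl, cr=cr: np.where(z <= t, cl, cr)

def adaboost_r2(x, y, M=10):
    N = len(x)
    w = np.full(N, 1.0 / N)              # step 1: uniform initial weights
    learners, alphas = [], []
    for _ in range(M):
        G = fit_stump(x, y, w)           # (1) train weighted base learner
        resid = np.abs(y - G(x))
        E = resid.max()                  # (2) maximum training error E_m
        if E == 0:                       # perfect fit: stop early
            learners.append(G); alphas.append(1e-10)
            break
        e_i = resid / E                  # (3) relative (linear) error per sample
        e = np.sum(w * e_i)              # (4) regression error rate e_m
        alpha = e / (1.0 - e)            # (5) weak-learner coefficient alpha_m
        w = w * alpha ** (1.0 - e_i)     # (6) reweight: accurate samples shrink
        w /= w.sum()                     # normalize by Z_m
        learners.append(G); alphas.append(alpha)
    return learners, np.log(1.0 / np.array(alphas))   # ln(1/alpha_m) weights

def predict(learners, lw, z):
    """Step 3: weighted median of the base predictions."""
    preds = np.array([G(z) for G in learners])        # shape (M, n)
    order = np.argsort(preds, axis=0)                 # sort learners per point
    csum = np.cumsum(lw[order], axis=0)               # accumulated weight
    idx = np.argmax(csum >= 0.5 * lw.sum(), axis=0)   # first to pass half-weight
    cols = np.arange(preds.shape[1])
    return preds[order[idx, cols], cols]
```

The weight update in (6) is the characteristic AdaBoost.R2 move: since 0 ≤ emi ≤ 1 and αm < 1 when em < 0.5, samples the current learner fits well are down-weighted more strongly, so the next learner concentrates on the hard samples.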