Algorithm 2. Gradient boosting decision tree (GBDT) algorithm.
Inputs: Training data set $T = \{(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)\}$, $x_i \in \chi \subseteq \mathbb{R}^n$, $y_i \in \Upsilon \subseteq \mathbb{R}$;
     Loss function $L(y, f(x)) = \frac{1}{2}\left(y - f(x)\right)^2$
(1) Initialize:
     $f_0(x) = \arg\min_{\gamma} \sum_{i=1}^{N} L(y_i, \gamma)$
(2) For m = 1, 2, …, M:
(a) For i = 1, 2, …, N, compute
     $r_{im} = -\left[ \dfrac{\partial L(y_i, f(x_i))}{\partial f(x_i)} \right]_{f = f_{m-1}}$
(b) Fit a regression tree to the targets $r_{im}$, giving terminal regions $R_{jm}$, $j = 1, 2, \ldots, J_m$.
(c) For j = 1, 2, …, $J_m$, compute
     $\gamma_{jm} = \arg\min_{\gamma} \sum_{x_i \in R_{jm}} L(y_i, f_{m-1}(x_i) + \gamma)$
(d) Update $f_m(x) = f_{m-1}(x) + \sum_{j=1}^{J_m} \gamma_{jm} \, I(x \in R_{jm})$
Outputs: $\hat{f}(x) = f_M(x)$
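As a concrete illustration, the following is a minimal Python sketch of Algorithm 2 for the squared-error loss given above. The choice of scikit-learn's DecisionTreeRegressor as the base learner, and the names gbdt_fit and gbdt_predict, are assumptions made for this sketch; the algorithm box does not prescribe a particular tree implementation.

```python
# Minimal sketch of Algorithm 2 with L(y, f(x)) = (1/2)(y - f(x))^2.
# Assumes scikit-learn's DecisionTreeRegressor as the base learner.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def gbdt_fit(X, y, M=100, max_depth=3):
    """Fit M regression trees by gradient boosting (steps 1-2)."""
    X = np.asarray(X)
    y = np.asarray(y, dtype=float)
    # (1) f_0(x) = argmin_gamma sum_i L(y_i, gamma);
    #     for squared error this is the mean of y.
    f0 = y.mean()
    f = np.full(len(y), f0)
    trees = []
    for m in range(M):                       # (2) for m = 1, ..., M
        # (a) negative gradient; for squared error, the residuals
        r = y - f
        # (b) fit a regression tree to the targets r_im;
        #     its leaves are the terminal regions R_jm
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, r)
        # (c) for squared error, gamma_jm is the mean residual in each
        #     leaf, which is exactly what a least-squares tree predicts
        # (d) f_m(x) = f_{m-1}(x) + sum_j gamma_jm * I(x in R_jm)
        f = f + tree.predict(X)
        trees.append(tree)
    return f0, trees

def gbdt_predict(X, f0, trees):
    """Output f_hat(x) = f_M(x)."""
    X = np.asarray(X)
    return f0 + sum(tree.predict(X) for tree in trees)
```

For the squared-error loss, the negative gradient in step (a) reduces to the ordinary residual $y_i - f_{m-1}(x_i)$, and the optimal leaf value $\gamma_{jm}$ in step (c) is the mean residual within each terminal region, so steps (b) and (c) are handled jointly by fitting a least-squares tree. Practical implementations typically also scale each tree's contribution in step (d) by a shrinkage factor $\nu \in (0, 1]$, which Algorithm 2 omits.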