Author manuscript; available in PMC: 2013 Dec 11.
Published in final edited form as: Stat Biosci. 2012 Feb 9;4(1). doi: 10.1007/s12561-012-9056-7

Algorithm 3.

NetBoosting

  1. Initialize F0(X) = 0.

  2. For m = 1 to M (boosting steps),

    1. Calculate the negative gradients w.r.t. Fm−1(X) over the observed samples:
      ỹi = −∂L{yi, F(xi)}/∂F(xi), evaluated at F(xi) = Fm−1(xi), for i = 1, …, n.
    2. Fit regression trees to the negative gradient vector {ỹi}.

      1. For each j = 1, ···, p,

        Begin with Xj and grow a tree hj(X) along the network topology. Specifically, the splitting variable for a child node is selected from the network neighborhood of its parent node's splitting variable (when this neighborhood is nonempty); otherwise it is set to the parent node's splitting variable.

      2. Pick the best tree among {hj(X)}, j = 1, …, p, to be Tm(X).

    3. For a given learning rate ν, update Fm(X) = Fm−1(X) + νTm(X).
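The steps above can be sketched in Python for the special case of squared-error loss L{y, F} = (y − F)²/2, where the negative gradient in step 2.1 reduces to the residual y − F. This is a minimal illustration, not the paper's implementation: the neighbor dictionary, the fixed tree depth, and the function names (`net_boost`, `grow_tree`, `best_threshold`) are all assumptions introduced here.

```python
import numpy as np

def best_threshold(x, r):
    # Return (SSE, threshold) for the best split of vector x on residuals r;
    # candidate thresholds are the unique values of x (excluding the maximum).
    best_sse, best_t = np.inf, None
    for t in np.unique(x)[:-1]:
        left = x <= t
        sse = ((r[left] - r[left].mean()) ** 2).sum() + \
              ((r[~left] - r[~left].mean()) ** 2).sum()
        if sse < best_sse:
            best_sse, best_t = sse, t
    return best_sse, best_t

def grow_tree(X, r, j, neighbors, depth):
    # Grow a regression tree on residuals r with X_j as the root splitting
    # variable. Per step 2.2.1, each child's splitting variable is drawn from
    # the network neighborhood of its parent's variable, or reuses the
    # parent's variable when that neighborhood is empty.
    if depth == 0 or len(r) < 4:
        return {"leaf": r.mean()}
    _, t = best_threshold(X[:, j], r)
    if t is None:                       # X_j is constant here: stop splitting
        return {"leaf": r.mean()}
    left = X[:, j] <= t
    cand = neighbors.get(j) or [j]      # neighborhood of j, or j itself
    def child(mask):
        if mask.sum() < 2:
            return {"leaf": r[mask].mean() if mask.any() else r.mean()}
        jc = min(cand, key=lambda k: best_threshold(X[mask][:, k], r[mask])[0])
        return grow_tree(X[mask], r[mask], jc, neighbors, depth - 1)
    return {"var": j, "thr": t, "L": child(left), "R": child(~left)}

def predict_tree(node, x):
    # Route a single sample x down the tree to its leaf value.
    while "leaf" not in node:
        node = node["L"] if x[node["var"]] <= node["thr"] else node["R"]
    return node["leaf"]

def net_boost(X, y, neighbors, M=50, nu=0.1, depth=2):
    n, p = X.shape
    F = np.zeros(n)                                  # step 1: F0(X) = 0
    trees = []
    for _ in range(M):                               # step 2: boosting rounds
        r = y - F                                    # 2.1: negative gradient (squared-error loss)
        best = None
        for j in range(p):                           # 2.2: one candidate tree rooted at each X_j
            tree = grow_tree(X, r, j, neighbors, depth)
            pred = np.array([predict_tree(tree, x) for x in X])
            sse = ((r - pred) ** 2).sum()
            if best is None or sse < best[0]:
                best = (sse, tree, pred)             # 2.2.2: keep the best tree Tm
        trees.append(best[1])
        F = F + nu * best[2]                         # 2.3: Fm = Fm-1 + nu * Tm
    return trees, F
```

Because every candidate tree is rooted at a different variable and grown only along network edges, the search stays local to the graph, which is what distinguishes this scheme from an unconstrained gradient-boosted tree.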