Algorithm 1 BRF learning
1. T: the maximum number of decision trees to grow for the BRF
 D: the maximum depth to which each tree is extended
 M: the number of classes
 S_n: the training set, consisting of positive (river and lake) and negative (land, mountain and building) samples with their labels and weights, {x_1, y_1, w_1}, …, {x_N, y_N, w_N}; x_i ∈ X, y_i ∈ {1, …, M}
 Initialize the sample weights w_i(1) = 1/N
2. For t = 1 to T do
 Select a subset s from the training set S_n
 Grow an unpruned tree on the subset s, using the selected samples with their corresponding weights.
  For d = 1 to D do
  Each internal node randomly selects p variables and determines the best split function using only these variables.
  Loop: Using each of the p selected variables in turn, the split function f(v_p) splits the training data I_n at the node into left (I_l) and right (I_r) subsets according to Equation (6).
I_l = {p ∈ I_n | f(v_p) < t},  I_r = I_n \ I_l  (6)
  The threshold t is chosen randomly for the split function f(v_p) in the range t ∈ (min_p f(v_p), max_p f(v_p)).
  Compute the information gain ΔG of the split function f(v_p).
  If (ΔG = ΔG_max) then determine f(v_p) as the best split function for node d
    Else goto Loop.
  End For
 Store the class probability distribution P(c | l_t) at the leaf nodes
 Output: A weak decision tree
Estimate the class label ŷ_i of the training data with the trained decision trees:
ŷ_i = argmax_c P(c | l_t)  (7)
 Calculate the error ε_t of the decision tree:
ε_t = Σ_{i: y_i ≠ ŷ_i} w_i(t) / Σ_{i=1}^{N} w_i(t)  (8)
 Compute the weight α_t of the t-th decision tree:
α_t = (1/2) log[(M - 1)(1 - ε_t) / ε_t]  (9)
 If α_t > 0, then
 Update the weights of the training samples, w_i(t+1):
w_i(t+1) = w_i(t)·exp(α_t) if y_i ≠ ŷ_i;  w_i(t)·exp(-α_t) otherwise  (10)
  else
    Reject the decision tree
End For
3. Final output: a BRF consisting of N decision trees (N ≤ T)
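The node-splitting step of Algorithm 1 (Equation (6) with a randomly drawn threshold and an information-gain criterion) can be illustrated with a short sketch. The Python snippet below is a minimal illustration, not the authors' implementation: the function names best_random_split and entropy are hypothetical, and using the raw feature value as the split function f(v_p) is an assumption made for the example. It draws a few candidate variables and random thresholds t in (min_p f(v_p), max_p f(v_p)) and keeps the split with the largest gain ΔG.

import numpy as np

def entropy(labels, weights):
    """Weighted Shannon entropy of a label set."""
    total = weights.sum()
    if total == 0:
        return 0.0
    h = 0.0
    for c in np.unique(labels):
        p = weights[labels == c].sum() / total
        if p > 0:
            h -= p * np.log2(p)
    return h

def best_random_split(X, y, w, n_features_to_try, n_thresholds, rng):
    """Pick the (feature, threshold) pair with the largest information gain
    among randomly drawn candidates, in the spirit of Equation (6)."""
    n_features = X.shape[1]
    parent_h = entropy(y, w)
    best, best_gain = None, -np.inf
    # Randomly select p candidate variables for this node.
    for j in rng.choice(n_features, size=min(n_features_to_try, n_features), replace=False):
        f = X[:, j]                      # split function f(v_p): here the feature value itself
        lo, hi = f.min(), f.max()
        if lo == hi:
            continue
        # Thresholds drawn uniformly in (min_p f(v_p), max_p f(v_p)).
        for t in rng.uniform(lo, hi, size=n_thresholds):
            left = f < t                 # I_l = {p in I_n | f(v_p) < t}
            right = ~left                # I_r = I_n \ I_l
            if left.sum() == 0 or right.sum() == 0:
                continue
            # Information gain: parent entropy minus weighted child entropies.
            gain = parent_h - (
                w[left].sum() / w.sum() * entropy(y[left], w[left])
                + w[right].sum() / w.sum() * entropy(y[right], w[right])
            )
            if gain > best_gain:
                best_gain, best = gain, (j, t)
    return best, best_gain

Drawing thresholds at random, rather than scanning every possible cut point, follows the random-threshold rule stated in the algorithm; the number of candidate variables and thresholds per node is left as a parameter here.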
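To make the boosting part of Algorithm 1 concrete, the following is a minimal Python sketch of the outer loop, Equations (7)-(10), under stated assumptions: scikit-learn's DecisionTreeClassifier with max_features="sqrt" stands in for the randomized tree grown in step 2, the subsample ratio and the weight normalization are choices made for the example, and the function name brf_fit is hypothetical. It is not the authors' implementation.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def brf_fit(X, y, T=50, D=10, n_classes=None, subsample=0.7, seed=0):
    """Sketch of the boosting loop of Algorithm 1 (Equations (7)-(10))."""
    rng = np.random.default_rng(seed)
    N = len(y)
    M = n_classes or len(np.unique(y))
    w = np.full(N, 1.0 / N)                 # w_i(1) = 1/N
    trees, alphas = [], []
    for t in range(T):
        # Select a subset s from the training set S_n.
        idx = rng.choice(N, size=int(subsample * N), replace=False)
        # Grow a depth-limited tree with random feature sampling at each node.
        tree = DecisionTreeClassifier(max_depth=D, max_features="sqrt",
                                      random_state=int(rng.integers(1 << 31)))
        tree.fit(X[idx], y[idx], sample_weight=w[idx])
        y_hat = tree.predict(X)             # Equation (7): argmax_c P(c | l_t)
        miss = (y_hat != y)
        eps = w[miss].sum() / w.sum()       # Equation (8)
        if eps <= 0 or eps >= 1:
            continue                        # degenerate tree: skip (guard for the example)
        alpha = 0.5 * np.log((M - 1) * (1 - eps) / eps)   # Equation (9)
        if alpha <= 0:
            continue                        # reject the decision tree
        # Equation (10): raise weights of misclassified samples, lower the rest.
        w = np.where(miss, w * np.exp(alpha), w * np.exp(-alpha))
        w /= w.sum()                        # normalization (an assumption, not in the algorithm)
        trees.append(tree)
        alphas.append(alpha)
    return trees, alphas

Prediction would then combine the kept trees, for example by an α_t-weighted vote over the per-tree labels from Equation (7); the exact combination rule used by the paper is given by the BRF classification stage rather than by this sketch.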