PLoS ONE. 2015 Feb 23;10(2):e0117844. doi: 10.1371/journal.pone.0117844

Table 1. Algorithm 1. Bagging.

1: Let T be the set of n training examples (x_i, y_i), i ∈ {1, 2, ⋯, n}.
2: B is the number of base learners and L the base learning algorithm.
3: for(i = 0; i < B; i++){
4:  Create a bootstrapped training set T_i of size n by sampling with replacement.
5:  Learn a specific base learner L_i(x, y) on T_i by using L.
6: }
7: The final learning algorithm C is the ensemble of all base learners {L_i}, and a test example x* is classified by using a simple majority voting method:

$$y^{*} = \arg\max_{y} \sum_{L_i \in C} L_i(x^{*}, y)$$
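The procedure in Algorithm 1 can be sketched in a few lines of Python. The sketch below is not from the paper: the bagging loop (bootstrap, train, majority vote) follows the algorithm, while the base learning algorithm L is a hypothetical stand-in, a one-dimensional threshold "stump" used only so the example runs end to end.

```python
import random
from collections import Counter

def bagging_train(T, B, L, seed=0):
    """Steps 3-6: train B base learners, each on a bootstrap sample of T."""
    rng = random.Random(seed)
    ensemble = []
    for _ in range(B):
        # Step 4: bootstrapped training set T_i of size n, sampled with replacement
        T_i = [rng.choice(T) for _ in range(len(T))]
        # Step 5: learn base learner L_i on T_i using the base algorithm L
        ensemble.append(L(T_i))
    return ensemble

def bagging_predict(ensemble, x_star):
    """Step 7: classify x* by simple majority voting over the base learners."""
    votes = Counter(learner(x_star) for learner in ensemble)
    return votes.most_common(1)[0][0]

def train_stump(T_i):
    """Hypothetical base learner L: threshold at the sample mean of x,
    predicting class 1 above the threshold and class 0 below it."""
    thr = sum(x for x, _ in T_i) / len(T_i)
    return lambda x: 1 if x > thr else 0

# Toy 1-D data: class 0 clustered near 1.0, class 1 near 3.0
T = [(x, 0) for x in (0.8, 1.0, 1.1, 1.2)] + [(x, 1) for x in (2.8, 3.0, 3.1, 3.2)]
ensemble = bagging_train(T, B=25, L=train_stump)
print(bagging_predict(ensemble, 0.9), bagging_predict(ensemble, 3.05))
```

Because each bootstrap sample varies, the individual stumps disagree slightly on where the threshold falls; the majority vote smooths out this variance, which is the point of bagging.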