2020 Apr 30;20(9):2559. doi: 10.3390/s20092559
Algorithm 2: Bagging classifier algorithm.
Input: KDD99 and NSL-KDD datasets
Training:
  1. Select the number of bootstrap samples n for Bagging and the base classifier C (J48, Random Forest, and REPTree in our case).

  2. Divide the dataset into two subsets (training and testing). From the training subset, draw n samples with replacement, producing the training datasets D1, D2, D3, ..., Dn.

  3. Train a base classifier on each dataset Di, building n classifiers C1, C2, C3, ..., Cn.
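Steps 2–3 above can be sketched in Python. The one-feature threshold stump below is a hypothetical stand-in for the J48/REPTree base learners used in the paper, and the toy (feature, label) records stand in for the KDD99/NSL-KDD training split:

```python
import random
from collections import Counter

# Toy training subset: (feature, label) records, a stand-in for the
# KDD99/NSL-KDD training split used in the paper.
train = [(0.1, "normal"), (0.3, "normal"), (0.7, "attack"),
         (0.9, "attack"), (0.4, "normal"), (0.8, "attack")]

def bootstrap(data, rng):
    """Step 2: draw |data| records with replacement (one dataset Di)."""
    return [rng.choice(data) for _ in range(len(data))]

def train_stump(sample):
    """Step 3: fit one base classifier Ci on Di.
    A one-feature threshold stump stands in for J48/REPTree here."""
    threshold = sum(x for x, _ in sample) / len(sample)
    above = Counter(y for x, y in sample if x >= threshold)
    below = Counter(y for x, y in sample if x < threshold)
    hi = above.most_common(1)[0][0]            # majority label above threshold
    lo = below.most_common(1)[0][0] if below else hi
    return lambda x: hi if x >= threshold else lo

rng = random.Random(0)
n = 5                                              # number of bootstrap samples
datasets = [bootstrap(train, rng) for _ in range(n)]   # D1 ... Dn
classifiers = [train_stump(d) for d in datasets]       # C1 ... Cn
```

Because sampling is with replacement, each Di is the same size as the training subset but typically contains duplicates, which is what decorrelates the n classifiers.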

Testing:
  1. Each data object X in the testing dataset is passed to the trained classifiers C1, C2, C3, ..., Cn.

  2. Each new data object is labeled by majority vote: for classification, X is assigned the class predicted by the majority of the classifiers; for regression, Xi is assigned the average of the n predicted values.

  3. Repeat these steps until every object in the testing dataset has been classified.
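The aggregation in the testing phase can be sketched as follows; the predictions shown are hypothetical, standing in for the outputs of the bagged J48, Random Forest, or REPTree models:

```python
from collections import Counter
from statistics import mean

# Hypothetical predictions from n = 5 trained classifiers C1..Cn
# for a single test object X.
class_votes = ["attack", "attack", "normal", "attack", "normal"]

# Classification: the majority vote decides the label of X.
label = Counter(class_votes).most_common(1)[0][0]
print(label)  # -> attack

# Regression: the n predicted values are averaged for Xi instead.
reg_preds = [1.0, 2.0, 3.0]
avg = mean(reg_preds)
print(avg)  # -> 2.0
```

Majority voting makes the ensemble robust to individual misclassifications: a test object is mislabeled only when more than half of the n classifiers err on it simultaneously.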