Comput Methods Programs Biomed. 2021 Sep 29;211:106444. doi: 10.1016/j.cmpb.2021.106444

Algorithm 1.

HMCBCG model

Input: Training data Dtr; maximum number of clusters kmax; m base learners L1, L2, …, Lm; the number of samples n drawn for each bagging subset.
Output: Candidate classifier pool Ψ
1: Ψ←∅
2: for k = 2 to kmax do:
3: Use k-means to divide Dtr into k clusters.
4: for each cluster do:
5: Use a genetic algorithm (GA) to find the optimal SVM parameters and train the SVM on the cluster.
6: Add the trained SVM to Ψ.
7: end for
8: Merge all the clusters back together to restore Dtr.
9: end for
10: for i = 1 to m do:
11: for j = 1 to n do:
12: Randomly draw a sample from Dtr.
13: end for
14: Train base learner Li on the n drawn samples.
15: Add the trained base learner to Ψ.
16: Put the n samples back into Dtr to restore it.
17: end for
18: return Ψ
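
To make the procedure concrete, below is a minimal Python sketch of the pool construction using scikit-learn. This excerpt does not specify the GA configuration or the m base learners, so the GA is approximated by a small generational search over (log2 C, log2 gamma), the base learners default to three common classifiers, and the bagging subsets are drawn with replacement as in standard bootstrap sampling; treat all of these as illustrative assumptions rather than the authors' settings.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def ga_optimize_svm(X, y, pop_size=10, generations=5):
    """Evolve (log2 C, log2 gamma) pairs; fitness is 3-fold CV accuracy."""
    pop = rng.uniform(low=[-5.0, -15.0], high=[15.0, 3.0], size=(pop_size, 2))

    def fitness(ind):
        svm = SVC(C=2.0 ** ind[0], gamma=2.0 ** ind[1])
        return cross_val_score(svm, X, y, cv=3).mean()

    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        elite = pop[np.argsort(scores)[-(pop_size // 2):]]          # selection
        pairs = elite[rng.integers(len(elite), size=(pop_size, 2))]
        children = pairs.mean(axis=1)                               # crossover
        children += rng.normal(scale=0.5, size=children.shape)     # mutation
        pop = np.vstack([elite, children])[:pop_size]
    best = max(pop, key=fitness)
    return SVC(C=2.0 ** best[0], gamma=2.0 ** best[1]).fit(X, y)

def build_candidate_pool(X_tr, y_tr, k_max, base_learners=None, n=None):
    """Return the candidate classifier pool Ψ of Algorithm 1."""
    pool = []
    n = n or len(X_tr)
    if base_learners is None:  # assumed stand-ins for L1..Lm
        base_learners = [DecisionTreeClassifier(), GaussianNB(),
                         KNeighborsClassifier()]
    # Clustering stage: for k = 2..k_max, train one GA-tuned SVM per cluster.
    for k in range(2, k_max + 1):
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(X_tr)
        for c in range(k):
            mask = labels == c
            counts = np.unique(y_tr[mask], return_counts=True)[1]
            # Skip clusters too small or too single-class to cross-validate.
            if len(counts) > 1 and counts.min() >= 3:
                pool.append(ga_optimize_svm(X_tr[mask], y_tr[mask]))
        # Clusters are only index masks, so Dtr itself is never consumed.
    # Bagging stage: train each base learner on n samples drawn with
    # replacement (standard bootstrap sampling).
    for learner in base_learners:
        idx = rng.integers(len(X_tr), size=n)
        pool.append(learner.fit(X_tr[idx], y_tr[idx]))
    return pool

Calling build_candidate_pool(X, y, k_max=4) returns the pool Ψ from which the final ensemble is then assembled in the later stages of the model.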