Algorithm 1. The MIVs-WBDA Algorithm
Input: Data set X, dimension L after reduction
Output: Feature set FS, classification result O, and recognition rate R
Step 1: Calculate the feature set of the data; randomly partition the samples into training, cross-validation, and test sets
Step 2: Calculate the MIV of each feature on the cross-validation samples and select the N most prominent features to form the feature set FS1
Step 3: Calculate the between-class WBDA of the remaining features
Step 4: Sort the WBDA values from small to large, select the first L-N features to form the feature set FS2, and combine FS2 with FS1 to form the new feature set FS. Note that the L-N features should be distributed over as many classes as possible.
FS = {FS1, FS2}
Step 5: Use the cross-validation set, together with an SVM parameter-optimization algorithm, to select the support vector machine parameters (C, γ)
Step 6: Train the SVM on the training set, then test it to obtain the classification result O and the recognition rate R
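A minimal Python sketch of Steps 2-4 is given below. The ±10% perturbation used to compute the MIVs and the class-prior-weighted between-class distance used as the WBDA are conventional stand-ins, not the paper's exact definitions, and the constraint that the L-N residual features be spread over as many classes as possible is omitted for brevity; the function names (mean_impact_values, weighted_between_class_distance, mivs_wbda_select) are hypothetical.

```python
# Sketch of Steps 2-4 (illustrative MIV and WBDA definitions, not the paper's exact formulas).
import numpy as np
from sklearn.svm import SVC

def mean_impact_values(model, X_cv, delta=0.10):
    """MIV of each feature: mean change in the model's decision values when
    that feature is perturbed by +/-delta (conventional MIV recipe, assumed here)."""
    mivs = np.zeros(X_cv.shape[1])
    for j in range(X_cv.shape[1]):
        X_up, X_down = X_cv.copy(), X_cv.copy()
        X_up[:, j] *= (1.0 + delta)
        X_down[:, j] *= (1.0 - delta)
        diff = model.decision_function(X_up) - model.decision_function(X_down)
        mivs[j] = np.mean(np.abs(diff))
    return mivs

def weighted_between_class_distance(X, y):
    """Stand-in WBDA: for each feature, the class-prior-weighted distance between
    each class mean and the overall mean, scaled by the feature's standard deviation."""
    classes, counts = np.unique(y, return_counts=True)
    priors = counts / len(y)
    overall_mean = X.mean(axis=0)
    std = X.std(axis=0) + 1e-12
    wbda = np.zeros(X.shape[1])
    for c, p in zip(classes, priors):
        wbda += p * np.abs(X[y == c].mean(axis=0) - overall_mean) / std
    return wbda

def mivs_wbda_select(X_train, y_train, X_cv, L, N):
    """Select L feature indices: the N with the largest MIVs (FS1), plus L-N of the
    remaining features taken after sorting their WBDA from small to large (FS2)."""
    base = SVC(kernel="rbf").fit(X_train, y_train)
    mivs = mean_impact_values(base, X_cv)
    fs1 = np.argsort(mivs)[::-1][:N]                    # most prominent N features
    residual = np.setdiff1d(np.arange(X_train.shape[1]), fs1)
    wbda = weighted_between_class_distance(X_train[:, residual], y_train)
    fs2 = residual[np.argsort(wbda)[:L - N]]            # first L-N after ascending sort
    return np.concatenate([fs1, fs2])                   # FS = {FS1, FS2}
```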
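Steps 5-6 can be sketched in the same way. The paper refers to an SVM optimization algorithm for selecting (C, γ); the plain grid search below is a simple stand-in evaluated on the cross-validation samples, and the grid values, function name, and data splits are illustrative assumptions.

```python
# Sketch of Steps 5-6: choose (C, gamma) on the cross-validation set, then
# train on the training set and report the recognition rate on the test set.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def optimize_and_evaluate(X_train, y_train, X_cv, y_cv, X_test, y_test, fs):
    """Grid-search (C, gamma) using only the selected feature indices fs,
    then evaluate the final SVM on the test set."""
    best = (None, None, -np.inf)
    for C in np.logspace(-2, 4, 7):
        for gamma in np.logspace(-4, 2, 7):
            clf = SVC(C=C, gamma=gamma, kernel="rbf").fit(X_train[:, fs], y_train)
            acc = clf.score(X_cv[:, fs], y_cv)          # accuracy on cross-validation set
            if acc > best[2]:
                best = (C, gamma, acc)
    C_opt, gamma_opt, _ = best
    final = SVC(C=C_opt, gamma=gamma_opt, kernel="rbf").fit(X_train[:, fs], y_train)
    O = final.predict(X_test[:, fs])                    # classification result O
    R = accuracy_score(y_test, O)                       # recognition rate R
    return O, R, (C_opt, gamma_opt)
```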