Algorithm 1. Feature Selection
Input: Original feature matrix with M-dimensional features.
Output: Selected feature subset S with D-dimensional features.
Procedure:
1: D = 1. Compute the class separability index (CSI) of each dimension of the original feature matrix and record it as score1. Choose the feature with the largest score1 as the first element of the optimal feature subset S. The remaining M - 1 feature elements form the candidate set R.
2: repeat
Step 1: D = D + 1. Take each feature element from R in turn and combine it with S into a new candidate subset; all such subsets make up a new feature matrix. Compute the CSI of each candidate subset in the new feature matrix.
Step 2: For the new feature matrix formed in Step 1, obtain the candidate subsets. Then, compute the average pairwise dissimilarity of all the subsets.
Step 3: For each candidate subset, compute score2 from its CSI and the average pairwise dissimilarity; score2 reflects whether the feature subset is appropriate.
Step 4: Move the feature element with the largest score2 into S and update the candidate set R.
Step 5: Input the selected feature subset with D-dimensional features into the classifier. Then, the classification accuracy AccD of the D-dimensional features will be obtained.
until the number of selected elements D reaches M.
3: Choose the best classification accuracy among Acc1, ..., AccM as the final accuracy for this kind of feature after feature selection. If several dimensions reach this best accuracy, the smallest such D can be considered the optimal feature dimension.
Return: S = {s1, s2, ..., sD}.
Note: A larger score2 means the feature is more beneficial to increasing the classification performance.
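The selection loop above can be sketched in code. This is a minimal illustration, not the paper's implementation: the exact CSI and score2 formulas are not given in this excerpt, so a Fisher-style scatter ratio stands in for the CSI, the subset CSI alone stands in for score2, and a nearest-centroid classifier (`centroid_accuracy`, a hypothetical helper) stands in for the unspecified classifier that produces AccD.

```python
import numpy as np

def class_separability(X, y):
    """Fisher-style between/within class scatter ratio, used as a
    stand-in for the paper's CSI (its exact formula is not given here)."""
    between, within = 0.0, 0.0
    overall_mean = X.mean(axis=0)
    for c in np.unique(y):
        Xc = X[y == c]
        between += len(Xc) * np.sum((Xc.mean(axis=0) - overall_mean) ** 2)
        within += np.sum((Xc - Xc.mean(axis=0)) ** 2)
    return between / (within + 1e-12)

def centroid_accuracy(X, y):
    """Toy nearest-class-centroid classifier standing in for the
    unspecified classifier that yields Acc_D."""
    classes = np.unique(y)
    centroids = np.stack([X[y == c].mean(axis=0) for c in classes])
    dists = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    pred = classes[np.argmin(dists, axis=1)]
    return float(np.mean(pred == y))

def forward_select(X, y, evaluate=centroid_accuracy):
    """Greedy forward selection following the algorithm's outline:
    seed S with the best single feature (score1), then repeatedly add
    the candidate maximising the subset score (score2 is approximated
    here by the subset CSI alone)."""
    M = X.shape[1]
    remaining = list(range(M))
    # Step 1 of the procedure (D = 1): highest score1 seeds S.
    scores1 = [class_separability(X[:, [j]], y) for j in remaining]
    S = [remaining.pop(int(np.argmax(scores1)))]
    history = [(list(S), evaluate(X[:, S], y))]
    while remaining:  # D = 2 .. M
        # Steps 1-4: score each candidate subset, keep the best feature.
        scores2 = [class_separability(X[:, S + [j]], y) for j in remaining]
        S.append(remaining.pop(int(np.argmax(scores2))))
        history.append((list(S), evaluate(X[:, S], y)))  # Step 5: Acc_D
    # Step 3 of the outer procedure: best accuracy, smallest such D.
    best = max(acc for _, acc in history)
    return next((subset, acc) for subset, acc in history if acc == best)
```

On a toy matrix where only one column is informative (e.g. a mean shift between the two classes on that column), the sketch selects that column first and reports the accuracy of the smallest subset that attains the best AccD.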