Input: Training data set

T = \{(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)\}

where x_i is the feature vector of the i-th instance, y_i \in \{c_1, c_2, \ldots, c_K\} is its class, i = 1, 2, \ldots, N; and an instance feature vector x.

Output: Class y to which instance x belongs.
1. According to the given distance metric, the k points nearest to x are found in the training set T, and the neighborhood of x covering these k points is denoted N_k(x);
2. Determine the class y of x in N_k(x) according to the classification decision rule (such as majority vote):

y = \arg\max_{c_j} \sum_{x_i \in N_k(x)} I(y_i = c_j), \quad i = 1, 2, \ldots, N; \; j = 1, 2, \ldots, K

where I is the indicator function: I(y_i = c_j) = 1 when y_i = c_j, and 0 otherwise.
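A minimal sketch of this decision rule, assuming Euclidean distance as the distance metric and plain majority voting; the toy data and the helper name knn_predict are illustrative only, not part of the study.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Return the majority class among the k training points nearest to x."""
    # Euclidean distance from x to every training instance
    dists = np.linalg.norm(X_train - x, axis=1)
    # Indices of the k nearest neighbours: the neighborhood N_k(x)
    nearest = np.argsort(dists)[:k]
    # Majority vote: arg max over classes c_j of the sum of indicators I(y_i = c_j)
    votes = Counter(y_train[nearest])
    return votes.most_common(1)[0][0]

# Toy usage with a two-class, two-feature training set
X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.1], [0.9, 1.0]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([0.95, 1.05]), k=3))  # -> 1
```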
The standard SVM was proposed by Cortes and Vapnik [25]. It is a supervised learning algorithm that selects optimal model parameters by minimizing the structural risk. Through a nonlinear kernel function, the support vector machine maps the input vectors into a high-dimensional feature space in which the samples become linearly separable, enabling effective classification. The following kernel functions are used in this study:
Linear: K(x_i, x_j) = x_i^T x_j

Polynomial kernel: K(x_i, x_j) = (\gamma x_i^T x_j + r)^d, \quad \gamma > 0

Gaussian radial basis function (RBF): K(x_i, x_j) = \exp(-\gamma \|x_i - x_j\|^2), \quad \gamma > 0
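As a rough illustration of how these three kernels plug into an SVM classifier, the scikit-learn sketch below compares them on synthetic data. The dataset, the default regularization C, and the kernel parameters (gamma, degree d, coefficient r) are placeholders, not the settings used in the study.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic binary classification data, for illustration only
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Fit one SVM per kernel: linear, polynomial, and Gaussian RBF
for kernel in ("linear", "poly", "rbf"):
    clf = make_pipeline(StandardScaler(), SVC(kernel=kernel, gamma="scale"))
    clf.fit(X_tr, y_tr)
    print(f"{kernel:>6} kernel accuracy: {clf.score(X_te, y_te):.3f}")
```

In practice, the kernel and its parameters are usually chosen by cross-validated grid search rather than left at library defaults.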