. 2018 Aug 9;9(9):4175–4183. doi: 10.1364/BOE.9.004175
Input: Training data set
T = {(x1,y1),(x2,y2),…,(xN,yN)}
where x_i ∈ χ ⊆ R^n is the feature vector of the instance and y_i ∈ Υ = {c_1, c_2, …, c_K} is its class label, i = 1, 2, …, N; x is the feature vector of the instance to be classified.
Output: Class y to which instance x belongs
1. According to the given distance metric, the k points in the training set T nearest to x are found, and the neighborhood of x covering these k points is denoted N_k(x);
2. Determine the class y of x according to the classification decision rule (such as majority vote) in Nk(x):
y = arg max_{c_j} Σ_{x_i ∈ N_k(x)} I(y_i = c_j), i = 1, 2, …, N; j = 1, 2, …, K
where I is the indicator function: I = 1 when y_i = c_j, and I = 0 otherwise.
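The majority-vote rule above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation; the function name `knn_predict` and the toy data are hypothetical, and Euclidean distance is assumed as the distance metric.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    # Euclidean distances from the query point x to every training point
    dists = np.linalg.norm(X_train - x, axis=1)
    # Indices of the k nearest training points: the neighborhood N_k(x)
    nearest = np.argsort(dists)[:k]
    # Majority vote: the class c_j maximizing sum of I(y_i = c_j) over N_k(x)
    return Counter(y_train[nearest]).most_common(1)[0][0]

# Hypothetical toy data: two small clusters with labels 0 and 1
X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [1.1, 0.9]])
y = np.array([0, 0, 1, 1])
print(knn_predict(X, y, np.array([0.05, 0.1]), k=3))  # query near cluster 0
```

With k = 3 the neighborhood contains two points of class 0 and one of class 1, so the vote assigns class 0.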
The standard SVM was proposed by Cortes and Vapnik [25]. It is a supervised learning algorithm that selects optimal network parameters by minimizing the structural risk. Through a non-linear kernel function, the SVM maps input vectors into a high-dimensional feature space in which the samples become linearly separable, enabling effective classification. The following kernel functions are used in this study:
Linear kernel: K(x_i, x_j) = x_i^T x_j
Polynomial kernel: K(x_i, x_j) = (x_i^T x_j)^d
Gaussian radial basis function (RBF) kernel: K(x_i, x_j) = exp(−‖x_i − x_j‖² / (2σ²))
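The three kernels can be sketched directly from their definitions. This is an illustrative sketch, not the paper's implementation; the function names and the parameter defaults (d, σ) are assumptions for demonstration.

```python
import numpy as np

def linear_kernel(xi, xj):
    # K(x_i, x_j) = x_i^T x_j
    return xi @ xj

def poly_kernel(xi, xj, d=3):
    # K(x_i, x_j) = (x_i^T x_j)^d, with degree d chosen here for illustration
    return (xi @ xj) ** d

def rbf_kernel(xi, xj, sigma=1.0):
    # K(x_i, x_j) = exp(-||x_i - x_j||^2 / (2 * sigma^2))
    return np.exp(-np.linalg.norm(xi - xj) ** 2 / (2.0 * sigma ** 2))

xi = np.array([1.0, 2.0])
xj = np.array([2.0, 0.5])
print(linear_kernel(xi, xj))       # 3.0
print(poly_kernel(xi, xj, d=2))    # 9.0
print(rbf_kernel(xi, xi))          # 1.0 (identical points give maximal similarity)
```

Note that the RBF kernel always returns 1 for identical inputs and decays toward 0 as the points move apart, which is what makes it a similarity measure in the implicit feature space.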