Sensors. 2019 Jun 2;19(11):2528. doi: 10.3390/s19112528
Algorithm 1 Improved conditional variational AutoEncoder (ICVAE)-deep neural network (DNN).
Input: Training dataset S, latent variable Z, learning rate lr, L2 regularization coefficient β, and the maximum reconstruction loss scaling factor k.
Output: The final classification results.
1: Data preprocessing: perform feature mapping and data normalization; all features are scaled to [0,1].
2: The network structures of the ICVAE on the NSL-KDD and UNSW-NB15 datasets are 122-80-40-20-10-20-40-80-122 and 196-140-80-40-20-40-80-140-196, respectively. Weights are randomly initialized with variance scaling, and biases are initialized to 0.
3: Train the ICVAE on the training dataset and compute the maximum reconstruction loss maxL for each category in the training dataset according to Equation (11).
4: Sample z from the multivariate standard normal distribution N(0, I), specify the attack class y^, and feed them into the trained ICVAE decoder to generate a new attack sample x^. According to Equation (12), the newly generated sample (x^, y^) is merged into the training dataset S.
5: The weights of the trained ICVAE encoder are used to initialize the weights of the DNN hidden layers. First, all hidden layers are frozen and the parameters of the output layer are adjusted by back-propagation; then all hidden layers are unfrozen and the merged training dataset is used to fine-tune the DNN classifier.
6: Test samples are fed into the trained DNN classifier to detect attacks.
7: Return the classification results.
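The min-max normalization in step 1 can be sketched as follows. This is a minimal NumPy illustration; the function name and toy data are ours, not from the paper:

```python
import numpy as np

def min_max_scale(X):
    """Scale each feature column of X into [0, 1] (step 1 of the algorithm)."""
    x_min = X.min(axis=0)
    x_max = X.max(axis=0)
    # Guard against constant columns to avoid division by zero.
    span = np.where(x_max > x_min, x_max - x_min, 1.0)
    return (X - x_min) / span

# Toy data: 3 samples, 2 features.
X = np.array([[1.0, 10.0], [3.0, 20.0], [5.0, 30.0]])
X_scaled = min_max_scale(X)
```

Each column is scaled independently, so features with very different ranges (as in NSL-KDD) end up comparable before training.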
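Step 3 computes a per-class maximum reconstruction loss maxL. The paper's Equation (11) is not reproduced here, so the sketch below uses per-sample mean squared error as an assumed stand-in for the reconstruction loss:

```python
import numpy as np

def per_class_max_loss(X, X_recon, y):
    """For each class label in y, return the maximum per-sample
    reconstruction loss (MSE here, standing in for Eq. (11))."""
    losses = np.mean((X - X_recon) ** 2, axis=1)  # one loss per sample
    return {c: losses[y == c].max() for c in np.unique(y)}

# Toy example: 3 samples, 2 features, 2 classes.
X = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
X_recon = np.array([[0.0, 0.0], [0.0, 1.0], [2.0, 0.0]])
y = np.array([0, 0, 1])
max_loss = per_class_max_loss(X, X_recon, y)
```

The resulting per-class maxima (scaled by the factor k from the input list) bound how far a generated sample may deviate from its class.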
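Step 5 is a two-phase transfer scheme: first train only the output layer with the encoder-initialized hidden layers frozen, then unfreeze everything and fine-tune. The toy NumPy sketch below shows the idea with one hidden layer and random data; the shapes, learning rate, and "encoder" weights are illustrative assumptions (in the paper the hidden weights come from the trained ICVAE encoder):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(X, W_h, W_o):
    """Hidden sigmoid layer followed by a softmax output layer."""
    H = sigmoid(X @ W_h)
    logits = H @ W_o
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    return H, P / P.sum(axis=1, keepdims=True)

# Toy shapes: 4 input features, 3 hidden units, 2 classes.
W_enc = rng.standard_normal((4, 3)) * 0.1  # stand-in for trained encoder weights
W_out = rng.standard_normal((3, 2)) * 0.1
X = rng.standard_normal((32, 4))
y = (X[:, 0] > 0).astype(int)              # synthetic labels
Y = np.eye(2)[y]
lr = 0.5

# Phase 1: hidden layer frozen, only the output layer is updated.
for _ in range(200):
    H, P = forward(X, W_enc, W_out)
    W_out -= lr * H.T @ (P - Y) / len(X)

# Phase 2: unfreeze the hidden layer and fine-tune both layers.
for _ in range(200):
    H, P = forward(X, W_enc, W_out)
    dH = (P - Y) @ W_out.T * H * (1 - H)   # back-propagate through the sigmoid
    W_out -= lr * H.T @ (P - Y) / len(X)
    W_enc -= lr * X.T @ dH / len(X)

acc = (forward(X, W_enc, W_out)[1].argmax(axis=1) == y).mean()
```

Freezing first lets the randomly initialized output layer adapt to the pretrained features without disturbing them; the subsequent full fine-tune then adjusts the whole classifier on the merged training set.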