Algorithm 1 Improved Conditional Variational AutoEncoder (ICVAE)-Deep Neural Network (DNN).
Input: Training dataset S, latent variable Z, learning rate, L2 regularization coefficient, and the maximum reconstruction loss scaling factor k.
Output: the final classification results.
1: Data preprocessing: feature mapping and data normalization; all data are scaled to the [0, 1] range.
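The normalization in step 1 can be sketched as column-wise min-max scaling to [0, 1]. The exact scheme and the function name below are assumptions, since the excerpt does not spell them out:

```python
import numpy as np

def min_max_normalize(X, eps=1e-12):
    """Scale each feature column of X into [0, 1] (min-max normalization).

    This is a common choice for the normalization in step 1; the paper's
    exact preprocessing may differ.
    """
    X = np.asarray(X, dtype=float)
    mins = X.min(axis=0)
    spans = X.max(axis=0) - mins
    return (X - mins) / (spans + eps)  # eps guards against constant columns

X = np.array([[1.0, 10.0], [3.0, 30.0], [2.0, 20.0]])
print(min_max_normalize(X))
```

Categorical features (e.g., protocol type in NSL-KDD) would first be mapped to numeric form, which is what the "feature mapping" part of step 1 refers to.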
2: The ICVAE network structures on the NSL-KDD and UNSW-NB15 datasets are 122-80-40-20-10-20-40-80-122 and 196-140-80-40-20-40-80-140-196, respectively. Weights are randomly initialized with variance scaling and biases are initialized to 0.
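Step 2's initialization can be sketched as follows; the Glorot-style scaling rule is an assumption (the excerpt says only "variance scaling"), and `init_layer` is an illustrative name:

```python
import numpy as np

def init_layer(fan_in, fan_out, rng):
    """Variance-scaling init: weights ~ N(0, 2/(fan_in + fan_out)), zero biases.

    Glorot-style scaling is assumed here; the paper may use a different
    variance-scaling variant.
    """
    std = np.sqrt(2.0 / (fan_in + fan_out))
    W = rng.normal(0.0, std, size=(fan_in, fan_out))
    b = np.zeros(fan_out)
    return W, b

# ICVAE layer sizes on NSL-KDD: 122 input features, 10-dimensional latent code
sizes = [122, 80, 40, 20, 10, 20, 40, 80, 122]
rng = np.random.default_rng(0)
params = [init_layer(m, n, rng) for m, n in zip(sizes[:-1], sizes[1:])]
```

The symmetric 122-...-10-...-122 shape reflects the encoder compressing 122 features down to a 10-dimensional latent code and the decoder mirroring it back.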
3: Train the ICVAE on the training dataset S and compute the maximum reconstruction loss of each category in the training dataset according to Equation (11).
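The per-category maximum in step 3 can be sketched as below. Equation (11) is not reproduced in this excerpt, so a squared-error reconstruction loss is used as a stand-in:

```python
import numpy as np

def max_loss_per_class(X, X_rec, y):
    """Maximum reconstruction loss per class label.

    Squared error is a stand-in for the paper's Equation (11); the grouping
    by category is the point being illustrated.
    """
    losses = ((X - X_rec) ** 2).sum(axis=1)  # per-sample reconstruction loss
    return {c: losses[y == c].max() for c in np.unique(y)}
```

These per-class maxima are later scaled by the factor k to decide how many new samples of each attack class to generate and keep.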
4: Sample z from the multivariate standard normal distribution N(0, I), specify the attack class label, and feed both into the trained ICVAE decoder to generate new attack samples. According to Equation (12), the newly generated samples are merged into the training dataset S.
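The sampling in step 4 can be sketched as follows. The decoder here is a toy linear stand-in (in the paper it is the trained ICVAE decoder), and the one-hot label encoding is an assumption:

```python
import numpy as np

def generate_attack_samples(decode, latent_dim, attack_class, n_classes,
                            n_samples, rng):
    """Draw z ~ N(0, I), pair it with a one-hot attack label, and decode."""
    z = rng.standard_normal((n_samples, latent_dim))   # multivariate standard normal
    y = np.zeros((n_samples, n_classes))
    y[:, attack_class] = 1.0                            # specified attack class
    return decode(z, y)

# Toy stand-in decoder: a linear map of [z, y] (illustrative only).
rng = np.random.default_rng(0)
W = rng.standard_normal((10 + 5, 122))
decode = lambda z, y: np.hstack([z, y]) @ W
new_samples = generate_attack_samples(decode, 10, 2, 5, 64, rng)
print(new_samples.shape)  # (64, 122)
```

Because the class label conditions the decoder, the same latent prior can be steered to oversample exactly the minority attack classes before the merge into S.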
5: Use the weights of the trained ICVAE encoder to initialize the weights of the DNN hidden layers. First, freeze all hidden layers and adjust the output-layer parameters by backpropagation; then unfreeze all hidden layers and fine-tune the DNN classifier on the merged training dataset.
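The freeze-then-unfreeze schedule in step 5 can be sketched on a toy classifier with one linear hidden layer and a softmax output. The architecture, learning rate, and epoch counts below are illustrative, not the paper's:

```python
import numpy as np

def softmax(S):
    E = np.exp(S - S.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)

def fine_tune(Wh, Wo, X, Y, lr=0.5, frozen_epochs=20, full_epochs=20):
    """Two-phase fine-tuning: hidden weights Wh frozen first, then unfrozen.

    Wh would be initialized from the trained ICVAE encoder; the linear hidden
    layer and cross-entropy gradients are a minimal illustrative setup.
    """
    n = len(X)
    for epoch in range(frozen_epochs + full_epochs):
        H = X @ Wh                       # hidden layer (encoder-initialized)
        P = softmax(H @ Wo)              # class probabilities
        G = (P - Y) / n                  # cross-entropy gradient at the output
        if epoch >= frozen_epochs:       # phase 2: hidden layer unfrozen
            Wh = Wh - lr * X.T @ (G @ Wo.T)
        Wo = Wo - lr * H.T @ G           # output layer trained in both phases
    return Wh, Wo
```

Training the output layer first lets the randomly initialized classifier head settle without corrupting the transferred encoder features; only then is the whole network fine-tuned jointly.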
6: Feed test samples into the trained DNN classifier to detect attacks.
7: return the classification results.