Algorithm 1: AL-DNN
Input:
Labeled sensor dataset: L
Unlabeled sensor dataset: U
Parameters of the DNN: number of hidden layers n; learning rate
Iterations: ITER
Number of chosen samples at each iteration: k
Output: the trained DNN classifier
Main steps:
Data preprocessing: normalize the data by Equation (6)
Pre-training (unsupervised): use all samples in U to train an SDAE layer by layer; compute the weights W and biases b of the SDAE by minimizing the cost function (Equation (10))
Use W and b to initialize the DNN; use the labeled sensor dataset L to fine-tune the DNN
Obtain the overall parameters of the trained DNN
Classify all unlabeled samples in U using the constructed DNN
Initialization: active training set T = ∅
Active learning stage:
For iteration = 1 : ITER
Compute the posterior probabilities post of all unlabeled samples in U
Select the samples with the smallest difference between the two highest values in post by Equation (15)
Select the samples most likely to produce false positives by Equation (16)
Obtain the k chosen samples S from U by combining the two criteria
Ask an expert to label the chosen samples S
Update the unlabeled sensor dataset: U ← U \ S
Augment the active training set: T ← T ∪ S
Update the weights of the DNN by fine-tuning with T
End
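The sample-selection step of the loop above can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: since Equations (6), (15), and (16) are not reproduced here, min-max normalization is assumed as a stand-in for the preprocessing of Equation (6), and the standard margin (difference between the two highest posterior probabilities) is assumed for the uncertainty criterion of Equation (15); the false-positive criterion of Equation (16) is omitted. The names `normalize` and `margin_select` are illustrative.

```python
import numpy as np

def normalize(X):
    # Min-max normalization to [0, 1] per feature; an assumed
    # stand-in for the preprocessing of Equation (6).
    mn, mx = X.min(axis=0), X.max(axis=0)
    return (X - mn) / np.where(mx - mn == 0, 1, mx - mn)

def margin_select(post, k):
    # Pick the k samples whose two highest posterior probabilities are
    # closest together -- the margin-style criterion assumed for Eq. (15).
    top2 = np.sort(post, axis=1)[:, -2:]   # two largest class probabilities per row
    margin = top2[:, 1] - top2[:, 0]       # small margin = uncertain sample
    return np.argsort(margin)[:k]          # indices of the k most uncertain samples

# Toy posterior matrix: 5 unlabeled samples, 3 classes.
post = np.array([
    [0.98, 0.01, 0.01],   # confident (margin 0.97)
    [0.40, 0.35, 0.25],   # uncertain (margin 0.05)
    [0.70, 0.20, 0.10],
    [0.34, 0.33, 0.33],   # most uncertain (margin 0.01)
    [0.85, 0.10, 0.05],
])
print(margin_select(post, 2))   # → [3 1]
```

In the full loop, the selected indices would be labeled by the expert, removed from U, appended to the active set T, and the DNN fine-tuned on T before the next iteration.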