Sensors. 2016 Oct 13;16(10):1695. doi: 10.3390/s16101695
Algorithm 1: AL-DNN
Input:
  • Labeled sensor dataset: L = {x_1, x_2, ..., x_n}

  • Unlabeled sensor dataset: U = {x_{n+1}, x_{n+2}, ..., x_{n+m}}

  • Parameters of the DNN: number of hidden layers n; learning rate λ

  • Iterations: ITER

  • Number of samples chosen at each iteration: k


Output:
  • The result of fault diagnosis


Main steps:
  1. Data preprocessing: normalize the data using Equation (6) (sketched after the algorithm)

  2. Pre-training (unsupervised): use all samples in U to train an SDAE layer by layer, computing the weights of the SDAE by minimizing the cost function (Equation (10)): W_all, b_all (sketched after the algorithm)

  3. Use W_all and b_all to initialize the DNN; use the labeled sensor dataset L to fine-tune the DNN

  4. Obtain the overall parameters of the trained DNN: θ*

  5. Classify all unlabeled samples in U using the trained DNN classifier C: f(U) = Test(C, U)

  6. Initialization of the active training set: D_test = ∅

  7. Active learning stage (sketched after the algorithm):

  • For iteration = 1: ITER
    • Compute the posterior probabilities of all unlabeled samples in U: post
    • Select the samples with the smallest difference between the two highest values in post using Equation (15):
      x_BvSB = select_BvSB(U)
    • Select the samples most likely to produce false positives using Equation (16):
      x_LFP = select_LFP(U)
    • Obtain k chosen samples X_k = {x_s1, x_s2, ..., x_sk} from U by combining the two criteria:
      X_k = x_BvSB ∪ x_LFP
    • Ask an expert to label the chosen samples X_k: label(X_k)
    • Update the unlabeled sensor dataset: U ← U \ X_k
    • Augment the active training set: D_test ← D_test ∪ X_k
    • Update the weights of the DNN by fine-tuning on D_test

End
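
Implementation sketches (illustrative):

The Python sketches below reconstruct the main steps under stated assumptions; they are not the authors' code. First, the preprocessing of step 1, assuming Equation (6) (not reproduced in this listing) is column-wise min-max scaling, a common choice for sensor data:

  import numpy as np

  def normalize(X):
      """Column-wise min-max scaling to [0, 1]; assumed form of Equation (6)."""
      x_min = X.min(axis=0)
      x_max = X.max(axis=0)
      return (X - x_min) / (x_max - x_min + 1e-12)  # epsilon guards constant columns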
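
Next, a compact NumPy sketch of the greedy layer-wise pre-training of step 2. It trains generic denoising-autoencoder layers with tied weights under a squared-error cost; the exact form of the paper's cost function (Equation (10)) and the hyperparameter defaults are assumptions here, with λ reused as the learning rate:

  rng = np.random.default_rng(0)

  def sigmoid(z):
      return 1.0 / (1.0 + np.exp(-z))

  def train_dae_layer(X, n_hidden, lam=0.01, noise=0.3, epochs=50):
      """Train one denoising-autoencoder layer by SGD on the squared
      reconstruction error (a generic stand-in for Equation (10))."""
      n_in = X.shape[1]
      W = rng.normal(0.0, 0.1, (n_in, n_hidden))
      b = np.zeros(n_hidden)                      # encoder bias
      c = np.zeros(n_in)                          # decoder bias
      for _ in range(epochs):
          for x in X:
              x_noisy = x * (rng.random(n_in) > noise)   # masking noise
              h = sigmoid(x_noisy @ W + b)               # encode
              x_hat = sigmoid(h @ W.T + c)               # decode (tied weights)
              # Backpropagate 0.5 * ||x_hat - x||^2 through both paths
              d_out = (x_hat - x) * x_hat * (1.0 - x_hat)
              d_hid = (d_out @ W) * h * (1.0 - h)
              W -= lam * (np.outer(x_noisy, d_hid) + np.outer(d_out, h))
              b -= lam * d_hid
              c -= lam * d_out
      return W, b

  def pretrain_sdae(X, layer_sizes):
      """Step 2: greedy layer-by-layer pre-training; each layer is trained
      on the hidden representation produced by the previous one."""
      weights, biases, H = [], [], X
      for n_hidden in layer_sizes:
          W, b = train_dae_layer(H, n_hidden)
          weights.append(W)
          biases.append(b)
          H = sigmoid(H @ W + b)
      return weights, biases   # W_all, b_all used to initialize the DNN (step 3)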
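
The two query criteria of the active learning stage. select_bvsb implements the standard Best-versus-Second-Best margin behind Equation (15); select_lfp is only a hypothetical stand-in, since Equation (16) is not reproduced in this listing:

  def select_bvsb(post, k):
      """BvSB (Equation (15)): indices of the k samples whose two highest
      class posteriors are closest, i.e. the most ambiguous predictions."""
      p_sorted = np.sort(post, axis=1)
      margin = p_sorted[:, -1] - p_sorted[:, -2]   # best minus second best
      return np.argsort(margin)[:k]

  def select_lfp(post, k):
      """Hypothetical stand-in for the LFP criterion (Equation (16)): the k
      samples with the lowest top posterior, as a proxy for the predictions
      most likely to turn out to be false positives."""
      return np.argsort(post.max(axis=1))[:k]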
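
Finally, a skeleton of the query loop (step 7). The model interface (predict_proba, fine_tune), the oracle callback standing in for the expert, and the even split of the k queries between the two criteria are all assumptions of this sketch:

  def active_learning(model, U_X, ITER, k, oracle):
      """AL-DNN query loop; `model` is assumed to expose predict_proba()
      and fine_tune(), and `oracle` labels the queried samples."""
      D_X = np.empty((0, U_X.shape[1]))
      D_y = np.empty((0,), dtype=int)                  # D_test <- empty set
      for _ in range(ITER):
          post = model.predict_proba(U_X)              # posteriors for all of U
          idx = np.union1d(select_bvsb(post, k // 2),  # X_k = x_BvSB ∪ x_LFP
                           select_lfp(post, k - k // 2))
          X_k = U_X[idx]
          y_k = oracle(X_k)                            # expert labels X_k
          U_X = np.delete(U_X, idx, axis=0)            # U <- U \ X_k
          D_X = np.vstack([D_X, X_k])                  # D_test <- D_test ∪ X_k
          D_y = np.concatenate([D_y, y_k])
          model.fine_tune(D_X, D_y)                    # refresh the DNN weights
      return model

Because the two index sets may overlap, the union can return slightly fewer than k samples per round; the paper's exact combination rule may differ.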