Algorithm 1 Training algorithm for L-Tree.
Input:
  • $\bar{\bar{X}}$: the training image set, which consists of $\{X_1, X_2, \dots, X_N\}$ (an $N \times W \times H$ matrix)
  • $Y$: the labels of the training image set (an $N \times 1$ matrix)
  • $s$: the ratio between the local area and the original image ($W_s = s \cdot W$, $H_s = s \cdot H$)
  • $K$: the number of neurons in one SOM ($K = K_r \times K_c$)
  • $\{L_1, L_2, \dots, L_C\}$: the set of class labels

Output: a single L-Tree
Build an L-Tree ($\bar{\bar{X}}$)
  1: Extract a random local-area position $(x_{st}, y_{st})$
  2: Normalize by subtracting the mean, yielding $\bar{\bar{x}}$
  3: Node learning ($\bar{\bar{x}}$)
  4: Split $\bar{\bar{X}}$ into subsets $\bar{\bar{X}}_1, \bar{\bar{X}}_2, \dots, \bar{\bar{X}}_K$ according to their similarity with the $K$ trained neurons
  5: Calculate the class probability from the relative frequency of each class among the samples and save it to the neurons
  6: For $j = 1, 2, \dots, K$, inspect the terminal criteria
  7: If the criteria in step 6 are satisfied, stop the node learning; otherwise, Build an L-Tree ($\bar{\bar{X}}_j$) (a sketch of this recursion follows this listing)
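
For concreteness, here is a minimal Python sketch of the Build-an-L-Tree recursion. It is a sketch under assumptions, not the paper's implementation: the terminal criteria (`MIN_SAMPLES`, `MAX_DEPTH`, and a purity check) are illustrative placeholders, since the listing leaves them unspecified, and the names `LTreeNode`, `build_l_tree`, and `node_learning` are hypothetical; `node_learning` itself is sketched after the Node-learning listing below.

```python
import numpy as np

MAX_DEPTH = 10     # assumed terminal criterion: maximum tree depth
MIN_SAMPLES = 20   # assumed terminal criterion: minimum subset size

class LTreeNode:
    """One tree node: its local-area position, its trained SOM weights,
    and the per-neuron class probabilities (hypothetical structure)."""
    def __init__(self, position, weights, probs):
        self.position = position   # (x_st, y_st) local-area position
        self.weights = weights     # K x D neuron weights of this node's SOM
        self.probs = probs         # K x C class probabilities per neuron
        self.children = {}         # child subtree per non-terminal neuron

def build_l_tree(X, y, n_classes, K, s, depth=0):
    """X: N x W x H training images; y: N integer labels in [0, C)."""
    N, W, H = X.shape
    Ws, Hs = int(s * W), int(s * H)
    # Step 1: extract a random local-area position (x_st, y_st).
    x_st = np.random.randint(0, W - Ws + 1)
    y_st = np.random.randint(0, H - Hs + 1)
    patches = X[:, x_st:x_st + Ws, y_st:y_st + Hs].reshape(N, -1)
    # Step 2: normalize by subtracting the mean.
    xbar = patches - patches.mean(axis=0)
    # Step 3: node learning (train this node's SOM; sketched below).
    weights = node_learning(xbar, K)
    # Step 4: split the set by similarity, i.e. by nearest trained neuron.
    dists = np.linalg.norm(xbar[:, None, :] - weights[None, :, :], axis=2)
    assign = dists.argmin(axis=1)                    # N values in [0, K)
    # Step 5: class probability = relative class frequency of the samples
    # routed to each neuron.
    probs = np.zeros((K, n_classes))
    for j in range(K):
        labels = y[assign == j]
        if labels.size:
            probs[j] = np.bincount(labels, minlength=n_classes) / labels.size
    node = LTreeNode((x_st, y_st), weights, probs)
    # Steps 6-7: inspect the (assumed) terminal criteria per subset and
    # recurse on subsets that are not yet terminal.
    for j in range(K):
        mask = assign == j
        is_pure = probs[j].max() == 1.0              # all samples one class
        if mask.sum() >= MIN_SAMPLES and depth < MAX_DEPTH and not is_pure:
            node.children[j] = build_l_tree(X[mask], y[mask],
                                            n_classes, K, s, depth + 1)
    return node
```

Note that each recursive call draws a fresh random local area (step 1 sits inside Build), so different nodes of the tree learn from different regions of the image.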

Node learning ($\bar{\bar{x}}$)
  1: Generate $K$ new neuron weights $W(t=0)$ and initialize them with random values before learning
  2: do
  3:   Randomly select $n\,(=N)$ samples from $\bar{\bar{x}}$
  4:   $w_i(t+1) = w_i(t) + h_{c,i}\,(x_j - w_i(t))$          ▹ Update a neuron's weights
  5:   $\alpha(t+1) = \alpha(t)\exp\left(-\frac{1}{\tau}\right)$          ▹ Update the learning rate
  6:   $t = t + 1$
  7: while $W(t) \neq W(t-1)$          ▹ Repeat until the weights converge (sketched below)
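
The Node-learning loop might look like the following minimal sketch. Two assumptions are worth flagging: the neighborhood function $h_{c,i}$ is taken to be a Gaussian over the $K_r \times K_c$ neuron grid scaled by the learning rate $\alpha$ (the listing does not define it), and the convergence test $W(t) \neq W(t-1)$ is implemented with a small numerical tolerance rather than exact equality. All parameter defaults are illustrative, not from the paper.

```python
import numpy as np

def node_learning(xbar, K, Kr=None, alpha=0.5, sigma=1.0, tau=200.0,
                  max_iters=1000, tol=1e-6):
    """xbar: n x D mean-subtracted samples; returns the K x D neuron weights."""
    n, D = xbar.shape
    Kr = Kr or int(np.sqrt(K))       # grid rows; K = Kr x Kc as in the input
    Kc = max(K // Kr, 1)             # grid columns
    # 2-D grid coordinates of the K neurons, used by the neighborhood function.
    grid = np.array([(i // Kc, i % Kc) for i in range(K)], dtype=float)
    # Step 1: generate K new neuron weights W(t=0) with random values.
    W = 0.01 * np.random.randn(K, D)
    for t in range(max_iters):       # step 6 (t = t + 1) is implicit here
        W_prev = W.copy()
        # Step 3: select n (= N) samples from xbar in random order.
        for x_j in xbar[np.random.permutation(n)]:
            # Winner c: the neuron most similar to x_j.
            c = np.linalg.norm(W - x_j, axis=1).argmin()
            # Assumed neighborhood h_{c,i}: Gaussian in grid distance from
            # the winner, scaled by the current learning rate alpha.
            h = alpha * np.exp(-((grid - grid[c]) ** 2).sum(axis=1)
                               / (2.0 * sigma ** 2))
            # Step 4: w_i(t+1) = w_i(t) + h_{c,i} (x_j - w_i(t)).
            W += h[:, None] * (x_j - W)
        # Step 5: alpha(t+1) = alpha(t) * exp(-1 / tau).
        alpha *= np.exp(-1.0 / tau)
        # Step 7: stop once W(t) and W(t-1) agree to within a tolerance.
        if np.linalg.norm(W - W_prev) < tol:
            break
    return W
```

One pass of the inner loop visits all $n\,(=N)$ selected samples (step 3), the learning rate then decays by $\exp(-1/\tau)$ once per outer iteration (step 5), and the do-while terminates when the weights stop changing (step 7).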