. 2018 Jan 20;18(1):306. doi: 10.3390/s18010306
Algorithm 2 Training algorithm for an Optimized L-Tree.
Prepare
  • Wbest: the best SOM weights for classifying the objects

  • IGbest: the information gain when the SOM weights have the best classification ability

Build an optimized L-Tree (X̄)
  1:  for t = 1:T
  2:      Extract a random local-area position (xst, yst)
  3:      Normalize by subtracting the mean, x̄
  4:      Learn the SOM weights Wt through the node learning (x̄)
  5:      Split X̄ into subsets X̄1, X̄2, …, X̄K according to the similarity with the K trained neurons
  6:      Calculate the parent and child entropies H(X̄) when the dataset is classified with Wt
  7:      Calculate the information gain IGt(X̄, Wt) from the entropies
  8:      if IGt > IGbest
  9:          Wbest = Wt
  10:         IGbest = IGt
  11:     end if
  12: end for
  13: for j = 1, 2, …, K, check the terminal criterion
  14: If it is satisfied, stop the node learning; otherwise, Build an optimized L-Tree (X̄j)
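The loop above can be sketched in Python. This is a minimal, simplified illustration, not the paper's implementation: it uses a winner-take-all SOM update in place of the full node-learning procedure, omits the random local-area extraction (step 2), and all function names (`train_som`, `optimized_node`, etc.) are hypothetical. It shows the key idea of repeating node learning T times and keeping the SOM weights that maximize information gain.

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a label array, in bits."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(labels, assignments, K):
    """Parent entropy minus size-weighted child entropies (steps 6-7)."""
    n = len(labels)
    ig = entropy(labels)
    for k in range(K):
        mask = assignments == k
        if mask.any():
            ig -= mask.sum() / n * entropy(labels[mask])
    return ig

def train_som(X, K, epochs=20, lr=0.5, rng=None):
    """Toy stand-in for SOM node learning: winner-take-all updates
    with a linearly decaying learning rate."""
    if rng is None:
        rng = np.random.default_rng(0)
    W = X[rng.choice(len(X), K, replace=False)].astype(float)
    for e in range(epochs):
        for x in X:
            bmu = np.argmin(np.linalg.norm(W - x, axis=1))  # best-matching unit
            W[bmu] += lr * (1 - e / epochs) * (x - W[bmu])
    return W

def optimized_node(X, y, K=2, T=5, seed=0):
    """Repeat node learning T times; keep the weights with the best IG
    (steps 1 and 8-12 of the algorithm)."""
    rng = np.random.default_rng(seed)
    W_best, IG_best = None, -np.inf
    for t in range(T):
        Xc = X - X.mean(axis=0)            # step 3: mean normalization
        W = train_som(Xc, K, rng=rng)      # step 4: node learning
        # step 5: assign each sample to its most similar neuron
        assign = np.argmin(np.linalg.norm(Xc[:, None] - W[None], axis=2), axis=1)
        ig = information_gain(y, assign, K)  # steps 6-7
        if ig > IG_best:                     # steps 8-11
            W_best, IG_best = W, ig
    return W_best, IG_best
```

In the full algorithm this routine would then recurse on each subset X̄j whose terminal criterion is not yet satisfied (steps 13-14), growing the tree node by node.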