Sensors. 2020 Nov 12;20(22):6477. doi: 10.3390/s20226477
Algorithm 1 EFNN-SOF training and update algorithm.
Initial Batch Learning Phase (Input: data matrix X with K samples):
   1: Extract L clouds in the first layer using the SOF approach (L is estimated automatically therein).
   2: Estimate the center values c and the widths σ of the L clouds derived from SOF.
   3: Calculate the combination (feature) weights w for neuron construction using Equation (20).
   4: Construct L logic neurons in the second layer of the network by welding the L fuzzy neurons of the first layer, using the logic-neuron concept and the weights w.
   5:
   6: for i = 1, …, K do
   7:     Calculate the regression vector z(x_i) from the activation levels of all neurons at x_i.
   8:     Store it as one row of the activation level matrix Z.
   9: end for
  10: Extract the reduced activation level matrix Z_rul according to the L_s < L selected neurons.
  11: Estimate the weights v_k of the output layer for all classes k = 1, …, c by Equation (23), using Z_rul and the indicator vectors y_k.
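The batch phase above can be sketched in a few lines of NumPy. This is a hedged illustration only: the SOF cloud extraction, the feature weights of Equation (20), the logic-neuron welding, and the neuron-selection step are not reproduced; simple stand-ins (randomly chosen sample centers, a shared Gaussian width, a plain pseudo-inverse in place of Equation (23)) show the overall data flow clouds → activation matrix Z → least-squares output weights.

```python
import numpy as np

def extract_clouds(X, L):
    """Stand-in for SOF cloud extraction: pick L samples as centers,
    use one crude shared width (the paper estimates these per cloud)."""
    rng = np.random.default_rng(0)
    centers = X[rng.choice(len(X), size=L, replace=False)]
    sigma = X.std(axis=0).mean() + 1e-12
    return centers, sigma

def activation_matrix(X, centers, sigma):
    """Steps 6-9 analogue: row i holds the Gaussian activation levels
    z(x_i) of all neurons at sample x_i."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def batch_train(X, Y_onehot, L):
    """Step 11 analogue: least-squares output weights per class
    via a pseudo-inverse, standing in for Equation (23)."""
    centers, sigma = extract_clouds(X, L)
    Z = activation_matrix(X, centers, sigma)
    V = np.linalg.pinv(Z) @ Y_onehot        # shape (L, c)
    return centers, sigma, V

# usage: toy 2-class problem, class determined by the sign of feature 0
X = np.random.default_rng(1).normal(size=(120, 4))
labels = (X[:, 0] > 0).astype(int)
Y = np.eye(2)[labels]
centers, sigma, V = batch_train(X, Y, L=6)
pred = (activation_matrix(X, centers, sigma) @ V).argmax(axis=1)
```

The one-hot matrix Y plays the role of the indicator vectors y_k: each column k is fit by its own weight vector v_k, which is why the single pseudo-inverse solve covers all c classes at once.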

Update Phase (Input: single data sample x_t):
   1: Update the L_s clouds and evolve new ones on demand in the first layer using the evolving SOF approach (→ L_{s,upd} clouds).
   2: Update the feature weights w by updating the within-class and between-class scatter matrices and recalculating Equation (20).
   3: Perform Steps 2 and 4 of the batch phase with the L_{s,upd} clouds.
   4: Calculate the degree of change of all neurons (rules) by Equation (29).
   5: Calculate the regression vector z(x_t) from the activation levels of all neurons at x_t.
   6: Update the weights v_k of the output layer by Equation (26).
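For the final step of the update phase, a standard recursive least squares (RLS) step is a plausible sketch of how output weights are refreshed per sample; whether Equation (26) is exactly this form (e.g., with a forgetting factor or rule-wise covariances) is not shown here, and the evolving SOF update, feature-weight update, and the rule-change degree of Equation (29) are omitted.

```python
import numpy as np

class OutputLayerRLS:
    """Hedged stand-in for the per-sample output-weight update:
    one global RLS step over the regression vector z(x_t)."""

    def __init__(self, n_neurons, n_classes, delta=1000.0):
        self.V = np.zeros((n_neurons, n_classes))  # output weights v_k (columns)
        self.P = np.eye(n_neurons) * delta         # inverse-covariance estimate

    def update(self, z, y):
        """One RLS step with regression vector z (neuron activations at x_t)
        and target indicator vector y (length n_classes)."""
        z = z.reshape(-1, 1)
        Pz = self.P @ z
        g = Pz / (1.0 + (z.T @ Pz))                # Kalman-style gain vector
        err = y - (z.T @ self.V).ravel()           # a-priori prediction error
        self.V += g @ err.reshape(1, -1)           # correct all v_k at once
        self.P -= g @ Pz.T                         # rank-1 covariance update
        return err

# usage: noise-free stream; the estimate should approach the true weights
rng = np.random.default_rng(0)
rls = OutputLayerRLS(n_neurons=2, n_classes=1)
V_true = np.array([[1.0], [-2.0]])
for _ in range(200):
    z = rng.normal(size=2)
    rls.update(z, z @ V_true)
```

Because P is shared across classes, the cost per sample is one rank-1 update plus a matrix-vector product per class, which keeps the update phase cheap enough for sample-by-sample (evolving) operation.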