Algorithm 2 ELM-Autoencoder Overview.
1: Input: the training dataset matrix X = {x_i | x_i ∈ R^n, i = 1, …, N}; the number of hidden layers for the sparse autoencoder: m;
2: Output: the output weight vector of each layer: β_i;
3: Randomly generate the hidden node matrices for the original ELM: V_i, i = 1, …, (m+1);
4: O_0 ← X;
5: for (i = 1; i ≤ m; i++) do
6:     Calculate the hidden weight vector β_i by Algorithm 1, using O_{i−1} and V_i as its parameters;
7:     O_i ← O_{i−1} × β_i;
8: end for
9: Compute the output weight β_{m+1} by Equation (8), using O_m and V_{m+1} as its parameters;
10: Return β_i, i = 1, …, (m+1).
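For concreteness, a minimal NumPy sketch of the listing follows. Algorithm 1 (the sparse ELM-autoencoder step) and Equation (8) are not reproduced in this excerpt, so the sketch substitutes a plain ridge-regularized least-squares ELM-AE step and the standard regularized ELM output-weight solution as placeholders; the function names elm_ae_beta and elm_autoencoder, the tanh activation, the regularization constant C, the hidden-node counts L_1, …, L_{m+1}, and the target matrix T are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_ae_beta(O_prev, V, C=1e3):
    """Placeholder for Algorithm 1 (the paper's sparse ELM-autoencoder step).
    O_prev: (N, d) layer input; V: (d, L) random hidden weights.
    Returns beta of shape (d, L) so that the next representation is O_prev @ beta."""
    H = np.tanh(O_prev @ V)                     # random hidden mapping of the layer input
    A = H.T @ H + np.eye(H.shape[1]) / C        # ridge-regularized normal equations
    beta_ae = np.linalg.solve(A, H.T @ O_prev)  # (L, d): H @ beta_ae reconstructs O_prev
    return beta_ae.T                            # transposed so step 7's O_{i-1} x beta_i applies directly

def elm_autoencoder(X, T, hidden_sizes, C=1e3):
    """Sketch of Algorithm 2.
    X: (N, n) training data; T: (N, c) assumed targets for the final ELM;
    hidden_sizes = [L_1, ..., L_{m+1}]: node counts of the m autoencoder
    layers plus the final ('original') ELM."""
    m = len(hidden_sizes) - 1
    # Step 3: randomly generate hidden node matrices V_1, ..., V_{m+1}
    V, d = [], X.shape[1]
    for L in hidden_sizes:
        V.append(rng.standard_normal((d, L)))
        d = L
    O = X                                       # step 4: O_0 <- X
    betas = []
    for i in range(m):                          # steps 5-8: one pass per autoencoder layer
        beta_i = elm_ae_beta(O, V[i], C)        # step 6: Algorithm 1 placeholder
        O = O @ beta_i                          # step 7: O_i <- O_{i-1} x beta_i
        betas.append(beta_i)
    H = np.tanh(O @ V[m])                       # step 9: original ELM on O_m with V_{m+1}
    A = H.T @ H + np.eye(H.shape[1]) / C
    betas.append(np.linalg.solve(A, H.T @ T))   # Equation (8) placeholder (ridge solution)
    return betas                                # step 10: beta_1, ..., beta_{m+1}

# Toy usage (shapes and sizes are made up for illustration only)
X = rng.standard_normal((200, 30))              # N = 200 samples, n = 30 features
T = np.eye(4)[rng.integers(0, 4, 200)]          # one-hot targets for 4 classes
betas = elm_autoencoder(X, T, hidden_sizes=[64, 32, 16])
print([b.shape for b in betas])                 # [(30, 64), (64, 32), (16, 4)]
```

Note that each β_i is returned already transposed, so the stacked representation O_i is obtained by the plain matrix product of step 7 rather than by a separate projection step.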