Sensors. 2020 Apr 8;20(7):2108. doi: 10.3390/s20072108
Algorithm 1
1:  procedure EBRM(V, T)
2:    for $k = 1, \ldots, K$ do, with initial $w_m(1)$ and $\hat{T}_m(1)$
3:      for $i = 1, \ldots, I$ do
4:        for $b = 1, \ldots, B$ do  $V^*_{i,b} = (v^*_1, v^*_2, \ldots, v^*_N)$ and $T^*_{j,b} = (t^*_1, t^*_2, \ldots, t^*_N)$,
5:          $\bar{V}^*_{i,b} = \frac{1}{N}\sum_{n=1}^{N} v^*_n$ and $\bar{T}^*_{j,b} = \frac{1}{N}\sum_{n=1}^{N} t^*_n$,
6:          $V^*_i = (\bar{V}^*_{i,1}, \bar{V}^*_{i,2}, \ldots, \bar{V}^*_{i,B})$ and $T^*_j = (\bar{T}^*_{j,1}, \bar{T}^*_{j,2}, \ldots, \bar{T}^*_{j,B})$
7:          $\tilde{V}^* = (V^* \mid w_m(k))$
8:          $U^* = [\tilde{V}^* \;\; \hat{T}_m(k)]$
9:        end for
10:     end for
11:     call learning: back-propagation $\{\hat{f}^*_k(U^*_m, T^*_m)\}$,
12:     output: $\hat{T}^*_m$, $m = 1, \ldots, M$
13:     $\varepsilon_{\max} = \max_{m=1,\ldots,M} [\hat{T}^*_m - T^*_m]^2$
14:     $\varepsilon_m = [\hat{T}^*_m - T^*_m]^2 / \varepsilon_{\max}$
15:     $\bar{\varepsilon} = \sum_{m=1}^{M} \varepsilon_m\, w_m(k)$
16:     $\beta_k = \bar{\varepsilon} / (1 - \bar{\varepsilon})$
17:     $w_m(k+1) = w_m(k)\, \beta_k^{(1-\varepsilon_m)}$
18:     $w_m(k+1) = w_m(k+1) \big/ \sum_m w_m(k+1)$
19:   end for
20: end procedure
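The two numerical building blocks of Algorithm 1 can be sketched in NumPy. This is a minimal illustration under stated assumptions, not the authors' implementation: `block_averages` corresponds to the per-block averaging on lines 4–6, and `update_weights` to the error normalization and AdaBoost-style weight update on lines 13–18. The function names, array shapes, and the 1-D signal layout are assumptions introduced here for illustration.

```python
import numpy as np

def block_averages(V, N):
    """Lines 4-6 (sketch): split the signal V into B blocks of N samples
    and average each block, giving (V-bar*_1, ..., V-bar*_B).
    Assumes V is a 1-D array; trailing samples beyond B*N are dropped."""
    B = len(V) // N
    return V[:B * N].reshape(B, N).mean(axis=1)

def update_weights(T_hat, T_true, w):
    """Lines 13-18 (sketch): one weight update of Algorithm 1.

    T_hat  : network outputs T-hat*_m for the M members, shape (M,)
    T_true : reference targets T*_m, shape (M,)
    w      : current weights w_m(k), shape (M,), summing to 1
    Returns the normalized weights w_m(k+1).
    """
    sq_err = (T_hat - T_true) ** 2        # [T-hat*_m - T*_m]^2
    eps = sq_err / sq_err.max()           # eps_m in [0, 1] (line 14)
    eps_bar = np.sum(eps * w)             # weighted mean error (line 15)
    beta = eps_bar / (1.0 - eps_bar)      # beta_k (line 16)
    w_new = w * beta ** (1.0 - eps)       # line 17
    return w_new / w_new.sum()            # normalization (line 18)
```

Note that with $\beta_k < 1$ (i.e. $\bar{\varepsilon} < 0.5$), the exponent $1-\varepsilon_m$ is largest for the best-predicted members, so their weights shrink fastest and the ensemble's attention shifts toward the members with the largest residual error, as in AdaBoost-style boosting.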