Algorithm 1 Mini-batch gradient ascent.
  • Input: 
    The number of employed features $F$, the training feature vector $f_i \in \mathbb{R}^{N \times 1}$, the training ground-truth depth vector $d_r \in \mathbb{R}^{N \times 1}$, the number of epochs $EP$, the batch size $BS$, and the learning rate $\rho$
  • Output: 
    The estimates of $\theta_0$, $\theta_i$, and $\sigma^2$

1:  Initialization: $e_i = 0$, $b_n = \lceil N/BS \rceil$; $\theta_0$ and $\theta_i$ are initialized with random values drawn from $\mathcal{N}(0, 10^2)$
2:  while $e_i < EP$ do
3:    $b_i = 0$
4:    while $b_i < b_n$ do
5:      if $b_i < b_n - 1$ then
6:        $\sigma^2 = BS^{-1} \sum_{j=b_i \times BS}^{(b_i+1) \times BS - 1} \left( d_r^{(j)} - \theta_0 - \sum_{i=1}^{F} \theta_i f_i^{(j)} \right)^2$
7:        $\theta_0 = \theta_0 + \frac{\rho}{\sigma^2} \sum_{j=b_i \times BS}^{(b_i+1) \times BS - 1} \left( d_r^{(j)} - \theta_0 - \sum_{i=1}^{F} \theta_i f_i^{(j)} \right)$
8:        $\theta_i = \theta_i + \frac{\rho}{\sigma^2} \sum_{j=b_i \times BS}^{(b_i+1) \times BS - 1} f_i^{(j)} \left( d_r^{(j)} - \theta_0 - \sum_{i=1}^{F} \theta_i f_i^{(j)} \right)$
9:        check for termination
10:     else
11:       $\sigma^2 = (N - b_i \times BS)^{-1} \sum_{j=b_i \times BS}^{N-1} \left( d_r^{(j)} - \theta_0 - \sum_{i=1}^{F} \theta_i f_i^{(j)} \right)^2$
12:       $\theta_0 = \theta_0 + \frac{\rho}{\sigma^2} \sum_{j=b_i \times BS}^{N-1} \left( d_r^{(j)} - \theta_0 - \sum_{i=1}^{F} \theta_i f_i^{(j)} \right)$
13:       $\theta_i = \theta_i + \frac{\rho}{\sigma^2} \sum_{j=b_i \times BS}^{N-1} f_i^{(j)} \left( d_r^{(j)} - \theta_0 - \sum_{i=1}^{F} \theta_i f_i^{(j)} \right)$
14:       check for termination
15:     end if
16:     $b_i = b_i + 1$
17:   end while
18:   $e_i = e_i + 1$
19: end while
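
For concreteness, a minimal NumPy sketch of Algorithm 1 follows. It stacks the $F$ feature vectors into an $N \times F$ matrix, treats steps 7–8 as simultaneous updates within each batch, and divides by the per-batch estimate of $\sigma^2$ as in the update rules above. The function name, the convergence tolerance `tol`, and the epoch-level reading of "check for termination" are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def minibatch_gradient_ascent(feats, d_r, EP=100, BS=32, rho=1e-4, tol=1e-8):
    """Sketch of Algorithm 1: fit d_r ~ N(theta0 + feats @ theta, sigma^2)
    by mini-batch gradient ascent on the Gaussian log-likelihood.

    feats : (N, F) array whose columns are the feature vectors f_i
    d_r   : (N,)  ground-truth depth vector
    """
    rng = np.random.default_rng()
    N, F = feats.shape
    theta0 = rng.normal(0.0, 10.0)          # theta_0 drawn from N(0, 10^2)
    theta = rng.normal(0.0, 10.0, size=F)   # theta_i drawn from N(0, 10^2)
    bn = -(-N // BS)                        # b_n = ceil(N / BS); last batch may be short
    sigma2 = 1.0

    for _ in range(EP):                     # epoch loop (steps 2, 18)
        prev0, prev = theta0, theta.copy()
        for bi in range(bn):                # batch loop (steps 4, 16)
            lo, hi = bi * BS, min((bi + 1) * BS, N)  # full batch vs. shorter final batch
            Fb, db = feats[lo:hi], d_r[lo:hi]
            resid = db - (theta0 + Fb @ theta)       # d_r^(j) minus the model prediction
            sigma2 = max(np.mean(resid**2), 1e-12)   # batch ML estimate of sigma^2 (steps 6/11)
            # Gradient-ascent step on the Gaussian log-likelihood (steps 7-8 / 12-13)
            theta0 = theta0 + (rho / sigma2) * resid.sum()
            theta = theta + (rho / sigma2) * (Fb.T @ resid)
        # "check for termination", read here as parameter convergence per epoch (assumption)
        if abs(theta0 - prev0) + np.abs(theta - prev).sum() < tol:
            break
    return theta0, theta, sigma2
```

Note that re-estimating $\sigma^2$ before each parameter step normalizes the gradient by the current residual variance, which is why the pseudocode places the $\sigma^2$ update ahead of the $\theta_0$ and $\theta_i$ updates in every batch.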