Algorithm 1. Adam Hyper-parameter Tuning
  • Step 1. Start Algorithm
  • Step 2. Initialise the iteration counter, t = 0
  • Step 3. Initialise and assign values to the hyper-parameters β_1, β_2, ε, L_r, w_t, w_{t+1}, v_t, and s_t
  • Step 4. Initialise the parameters (weights) of the chosen classifier
  • Step 5. Define the loss function to be minimised
  • Step 6. For each iteration t:
  • Step 7. Compute the gradient of the loss function with respect to the hyper-parameters, ∂L/∂w_t
  • Step 8. Update the exponential moving averages of the gradient and its square, v_t and s_t, using Equations (30) and (31)
  • Step 9. Compute the bias-corrected estimates of these averages, v̂_t and ŝ_t, using Equations (28) and (29)
  • Step 10. Update the parameters (weights) of the chosen classifier
  • Step 11. Calculate the ER for the current iteration
  • Step 12. If tr = 1, compute the gradient of the loss function with respect to the hyper-parameter w_in
  • Step 13. Else if tr > 1, compute the gradient of the loss function with respect to the hyper-parameter w_tr
  • Step 14. Update the hyper-parameter w_{t+1}
  • Step 15. If t = ConvCrit
  • Step 16. Go to Step 19
  • Step 17. Else
  • Step 18. Go to Step 7
  • Step 19. End Algorithm
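
The sketch below illustrates how the update loop in Steps 6–10 could be realised in Python. Because Equations (28)–(31) are not reproduced in this excerpt, the moment updates and bias corrections use the standard Adam formulas as stand-ins; the function name adam_tune, the toy quadratic loss, and the fixed iteration budget used for ConvCrit are illustrative assumptions rather than the authors' implementation.

```python
# Minimal sketch of the Adam-style update loop in Algorithm 1 (Steps 6-10).
# The moment updates and bias corrections below are the standard Adam formulas,
# standing in for Equations (28)-(31), which are not shown in this excerpt.
# adam_tune, the toy quadratic loss, and conv_crit are illustrative choices.
import numpy as np

def adam_tune(w, loss_grad, lr=0.001, beta1=0.9, beta2=0.999,
              eps=1e-8, conv_crit=1000):
    """Run the Adam update loop until the iteration counter reaches conv_crit."""
    v = np.zeros_like(w)   # moving average of the gradient, v_t (Step 3)
    s = np.zeros_like(w)   # moving average of the squared gradient, s_t (Step 3)
    for t in range(1, conv_crit + 1):            # Step 6: for each iteration t
        g = loss_grad(w)                          # Step 7: gradient of the loss w.r.t. w_t
        v = beta1 * v + (1.0 - beta1) * g         # Step 8: update v_t
        s = beta2 * s + (1.0 - beta2) * g**2      # Step 8: update s_t
        v_hat = v / (1.0 - beta1**t)              # Step 9: bias-corrected v̂_t
        s_hat = s / (1.0 - beta2**t)              # Step 9: bias-corrected ŝ_t
        w = w - lr * v_hat / (np.sqrt(s_hat) + eps)   # Step 10: update the weights
    return w                                      # Steps 15-19: stop once t = ConvCrit

# Toy usage: minimise the quadratic loss L(w) = ||w - 3||^2.
w0 = np.zeros(4)
w_star = adam_tune(w0, loss_grad=lambda w: 2.0 * (w - 3.0), lr=0.05, conv_crit=2000)
print(w_star)   # approaches [3, 3, 3, 3]
```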