Algorithm 2: Proposed Model (SA-BiLSTM)
  • 1. Input
    1. Sequence of vectors A_t (a_1, a_2, …, a_T).
  • 2. Bidirectional LSTM Processing (sketched in code after this listing)
    1. For_LSTM: process the input sequence from left to right.
    2. Back_LSTM: process the input sequence from right to left.
    3. Concatenate the two outputs to form a single sequence of hidden states.
  • 3. LSTM Cell Computation (the standard forms of these equations are reproduced after this listing)
    1. The input gate is computed via Equation (26).
    2. The forget gate is computed via Equation (27).
    3. The output gate and input modulation gate are computed via Equations (28) and (29).
    4. The memory cell is updated via Equation (30).
    5. The hidden state is updated via Equation (31).
  • 4. Multi-Head Attention Mechanism (sketched in code after this listing)
    1. The hidden states are divided into multiple “heads”.
    2. Attention weights are computed individually for each head, and a weighted sum is calculated.
    3. Weighted sum: y_t = Σ(α_t ⊙ h_t), where the attention weights α_t are multiplied elementwise with the hidden states h_t and the products are summed.
  • 5. Fully Connected Layers
    1. Process the attention-weighted hidden states through one or more fully connected layers.
  • 6. Output (Steps 5–6 are sketched in code after this listing)
    1. Classification based on the attention-weighted hidden states.
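Step 2 can be made concrete with a minimal PyTorch sketch; the framework, feature size, hidden size, sequence length, and variable names below are illustrative assumptions rather than the paper's configuration. A bidirectional LSTM runs For_LSTM and Back_LSTM over the same input and concatenates their per-step outputs along the feature dimension.

```python
# Minimal sketch of Step 2 (bidirectional LSTM processing).
# Assumptions (not from the paper): PyTorch, input feature size 64,
# hidden size 128, a batch of 8 sequences of length T = 50.
import torch
import torch.nn as nn

T, batch, feat, hidden = 50, 8, 64, 128

# a_1 ... a_T as a (batch, T, feat) tensor
A = torch.randn(batch, T, feat)

# bidirectional=True runs For_LSTM (left-to-right) and Back_LSTM
# (right-to-left) and concatenates their outputs at each time step.
bilstm = nn.LSTM(input_size=feat, hidden_size=hidden,
                 batch_first=True, bidirectional=True)

H, _ = bilstm(A)   # H: (batch, T, 2 * hidden)
print(H.shape)     # torch.Size([8, 50, 256])
```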
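Equations (26)–(31) are referenced but not reproduced in the listing. For readability, the block below gives the standard LSTM cell formulation that such equations conventionally take; the exact notation in the paper's Equations (26)–(31) may differ.

```latex
% Standard LSTM cell equations (assumed forms of Equations (26)-(31)).
\begin{aligned}
i_t &= \sigma(W_i a_t + U_i h_{t-1} + b_i) && \text{input gate, Eq. (26)}\\
f_t &= \sigma(W_f a_t + U_f h_{t-1} + b_f) && \text{forget gate, Eq. (27)}\\
o_t &= \sigma(W_o a_t + U_o h_{t-1} + b_o) && \text{output gate, Eq. (28)}\\
g_t &= \tanh(W_g a_t + U_g h_{t-1} + b_g)  && \text{input modulation gate, Eq. (29)}\\
c_t &= f_t \odot c_{t-1} + i_t \odot g_t   && \text{memory cell update, Eq. (30)}\\
h_t &= o_t \odot \tanh(c_t)                && \text{hidden state update, Eq. (31)}
\end{aligned}
```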
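Step 4 splits the concatenated hidden states across several heads, computes attention weights per head, and forms the weighted sum y_t = Σ(α_t ⊙ h_t). The sketch below uses PyTorch's nn.MultiheadAttention in a self-attention configuration with 4 heads; the head count and the self-attention setup are assumptions for illustration, not necessarily the paper's exact mechanism.

```python
# Minimal sketch of Step 4 (multi-head attention over BiLSTM states).
# Assumptions (not from the paper): PyTorch, 4 heads, self-attention
# over the concatenated hidden states H of shape (batch, T, 256).
import torch
import torch.nn as nn

batch, T, d_model = 8, 50, 256
H = torch.randn(batch, T, d_model)   # BiLSTM outputs from Step 2

# embed_dim is split internally across the heads (256 / 4 = 64 per head),
# so attention weights are computed independently for each head.
mha = nn.MultiheadAttention(embed_dim=d_model, num_heads=4, batch_first=True)

# y_t = sum over s of alpha_{t,s} * h_s: a weighted sum of hidden states
# per head, with the heads concatenated back to d_model afterwards.
y, alpha = mha(H, H, H, need_weights=True)

print(y.shape)       # torch.Size([8, 50, 256])  attention-weighted states
print(alpha.shape)   # torch.Size([8, 50, 50])   weights averaged over heads
```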
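Steps 5–6 pass the attention-weighted hidden states through fully connected layers and emit class probabilities. In the sketch below, mean pooling over time, a single 64-unit hidden layer, and 5 output classes are illustrative choices, not values taken from the paper.

```python
# Minimal sketch of Steps 5-6 (fully connected layers and classification).
# Assumptions (not from the paper): mean-pool the attention output over
# time, one hidden FC layer of 64 units, 5 output classes.
import torch
import torch.nn as nn

batch, T, d_model, n_classes = 8, 50, 256, 5
y = torch.randn(batch, T, d_model)      # attention-weighted states from Step 4

fc = nn.Sequential(
    nn.Linear(d_model, 64),
    nn.ReLU(),
    nn.Linear(64, n_classes),
)

pooled = y.mean(dim=1)                  # summarize the sequence: (batch, d_model)
logits = fc(pooled)                     # (batch, n_classes)
probs = torch.softmax(logits, dim=-1)   # class probabilities for the output step
print(probs.shape)                      # torch.Size([8, 5])
```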