Sci Rep. 2025 Aug 4;15:28405. doi: 10.1038/s41598-025-13949-6

Table 3.

Notations with description.

Notation  Description
$X \in \mathbb{R}^{T \times D}$  Input time-series matrix with $T$ time steps and $D$ sensor features
$T$  Number of time steps in each input sequence
$D$  Number of input features (e.g., sensor channels)
$k$  Kernel size used in temporal convolutions
$d$  Dilation factor in dilated convolutions
$F_i$  Number of convolutional filters in the $i$-th TCN branch
$F$  Total number of TCN output feature maps (summed across all branches)
$Z_{\mathrm{TCN}}$  Output feature representation from the TCN module
$H_{\mathrm{BiLSTM}}$  BiLSTM output containing forward and backward contextual features
$H$  Hidden dimension of the LSTM in each direction
$\overrightarrow{h}_t, \overleftarrow{h}_t$  Forward and backward hidden states at time step $t$
$h_t$  Concatenated BiLSTM hidden state at time step $t$
$e_t$  Attention alignment score at time step $t$
$\alpha_t$  Normalized attention weight at time step $t$
$c$  Context vector computed as the attention-weighted sum of hidden states (see the equations after this table)
$u_t$  Concatenation of the context vector and the hidden feature at time step $t$
$\tilde{h}_t$  Final attention-enhanced hidden representation at time step $t$
$\tilde{H}$  Attention-refined sequence of hidden representations
$z$  Output vector after Global Average Pooling (GAP)
$\mathcal{L}_{\mathrm{MSE}}$  Mean squared error loss for prediction accuracy
$\mathcal{L}_{\mathrm{temp}}$  Loss encouraging temporal consistency across outputs
$\mathcal{L}_{\mathrm{attn}}$  Attention supervision loss for pattern alignment
$\mathcal{L}_{\mathrm{total}}$  Total composite loss function (see the equations after this table)
$\lambda_1, \lambda_2, \lambda_3$  Weighting coefficients for the loss terms (satisfying $\lambda_1 + \lambda_2 + \lambda_3 = 1$)
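
Read together, the attention and loss rows admit a standard reconstruction. The scoring function below is a common additive (Bahdanau-style) form assumed here for illustration; $W_a$, $b_a$, $v_a$, and $W_u$ are assumed learnable parameters, not symbols taken from the paper:

$$
e_t = v_a^{\top} \tanh\!\left(W_a h_t + b_a\right), \qquad
\alpha_t = \frac{\exp(e_t)}{\sum_{s=1}^{T} \exp(e_s)}, \qquad
c = \sum_{t=1}^{T} \alpha_t h_t
$$

$$
u_t = [\,c;\, h_t\,], \qquad
\tilde{h}_t = \tanh\!\left(W_u u_t\right), \qquad
\mathcal{L}_{\mathrm{total}} = \lambda_1 \mathcal{L}_{\mathrm{MSE}} + \lambda_2 \mathcal{L}_{\mathrm{temp}} + \lambda_3 \mathcal{L}_{\mathrm{attn}}
$$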
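For readers who prefer code, the sketch below wires the table's components together as a single PyTorch module: parallel dilated convolutions (the TCN branches with $F_i$ filters each), a BiLSTM producing the concatenated states $h_t$, additive attention yielding $\alpha_t$, $c$, and $\tilde{h}_t$, and GAP producing $z$. All layer widths, the kernel size, and the dilation set are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TCNBiLSTMAttention(nn.Module):
    """Illustrative sketch of the notation table: parallel dilated-conv
    branches (TCN), a BiLSTM, additive attention, and GAP. Branch count,
    kernel size k, and dilations are assumptions, not the paper's values."""

    def __init__(self, d_in, hidden=64, k=3, dilations=(1, 2, 4), filters=32):
        super().__init__()
        # One Conv1d per TCN branch; 'same' padding preserves all T steps.
        self.branches = nn.ModuleList([
            nn.Conv1d(d_in, filters, kernel_size=k, dilation=d, padding="same")
            for d in dilations
        ])
        f_total = filters * len(dilations)   # F = sum of branch widths
        self.bilstm = nn.LSTM(f_total, hidden, batch_first=True,
                              bidirectional=True)
        # Additive attention: e_t = v_a^T tanh(W_a h_t + b_a)
        self.att_w = nn.Linear(2 * hidden, 2 * hidden)
        self.att_v = nn.Linear(2 * hidden, 1, bias=False)
        self.fuse = nn.Linear(4 * hidden, 2 * hidden)  # maps u_t = [c; h_t] to h~_t
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, x):                  # x: (B, T, D)
        z = x.transpose(1, 2)              # (B, D, T) layout for Conv1d
        z = torch.cat([F.relu(b(z)) for b in self.branches], dim=1)
        z = z.transpose(1, 2)              # (B, T, F): TCN output Z_TCN
        h, _ = self.bilstm(z)              # (B, T, 2H): concatenated h_t
        e = self.att_v(torch.tanh(self.att_w(h)))    # (B, T, 1): scores e_t
        alpha = torch.softmax(e, dim=1)               # attention weights alpha_t
        c = (alpha * h).sum(dim=1, keepdim=True)      # context vector c
        u = torch.cat([c.expand_as(h), h], dim=-1)    # u_t = [c; h_t] per step
        h_tilde = torch.tanh(self.fuse(u))            # refined sequence H~
        pooled = h_tilde.mean(dim=1)                  # GAP over time -> z
        return self.head(pooled)

# Usage on dummy data: batch of 8 sequences, T=128 steps, D=6 channels.
model = TCNBiLSTMAttention(d_in=6)
y = model(torch.randn(8, 128, 6))
print(y.shape)  # torch.Size([8, 1])

The mean over the time axis implements GAP directly; swapping in nn.AdaptiveAvgPool1d(1) would be equivalent here.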