Sensors. 2023 May 5;23(9):4512. doi: 10.3390/s23094512

Table 2. Summary of parameters and variables in the proposed architecture.

| Symbol | Description |
|---|---|
| X | Input sequence (the time series of interest) |
| x_t | Input at time step t |
| h | Sequence of hidden states from the LSTM encoder |
| h_t | Hidden state at time step t |
| W | Weight matrices |
| b | Bias vectors |
| i_t | Input gate vector at time step t |
| f_t | Forget gate vector at time step t |
| o_t | Output gate vector at time step t |
| g_t | Candidate gate vector at time step t |
| c_t | Memory cell state at time step t |
| c | Feature map of the input sequence from the CNN |
| α_{t,i} | Attention weight for encoder hidden state h_i and CNN output c_i |
| u_t | Context vector at time step t |
| Q_t | Query matrix at time step t |
| K_t | Key matrix at time step t |
| V_t | Value matrix at time step t |
| β_{t,i} | Multi-head attention weights for key–value pairs |
| v_t | Multi-head context vector at time step t |
| η_1 | First intermediate output of the GRN |
| η_2 | Second intermediate output of the GRN |
| ŷ_t | Quantile regression output at time step t |
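
To make the notation concrete, the block below is a minimal sketch of the standard formulations these symbols typically denote: the common LSTM cell, softmax attention, scaled dot-product (multi-head) attention, the TFT-style gated residual network of Lim et al., and the pinball loss usually paired with quantile outputs. The score e_{t,i}, the key dimension d_k, the GRN wiring, and the choice of values aggregated into u_t are assumptions, not taken from the paper, and the authors' exact equations may differ.

```latex
% Hedged sketch (requires amsmath): common textbook forms that the
% symbols in Table 2 typically denote. The paper's exact equations may
% differ; e_{t,i}, d_k, and the GRN wiring below are assumptions.
\begin{align*}
% LSTM encoder cell; [h_{t-1}; x_t] denotes concatenation, \odot the
% elementwise product, \sigma the logistic sigmoid:
i_t &= \sigma\!\left(W_i\,[h_{t-1}; x_t] + b_i\right)
    && \text{input gate} \\
f_t &= \sigma\!\left(W_f\,[h_{t-1}; x_t] + b_f\right)
    && \text{forget gate} \\
o_t &= \sigma\!\left(W_o\,[h_{t-1}; x_t] + b_o\right)
    && \text{output gate} \\
g_t &= \tanh\!\left(W_g\,[h_{t-1}; x_t] + b_g\right)
    && \text{candidate gate} \\
c_t &= f_t \odot c_{t-1} + i_t \odot g_t
    && \text{memory cell} \\
h_t &= o_t \odot \tanh(c_t)
    && \text{hidden state} \\[4pt]
% Softmax attention over encoder states h_i and CNN features c_i;
% e_{t,i} is an unspecified compatibility score (assumption), and the
% aggregated values may involve c_i as well as h_i:
\alpha_{t,i} &= \frac{\exp(e_{t,i})}{\sum_j \exp(e_{t,j})},
    && u_t = \sum_i \alpha_{t,i}\, h_i \\[4pt]
% Scaled dot-product (multi-head) attention; d_k is the key dimension:
\beta_{t,i} &= \left[\operatorname{softmax}\!\left(
        \frac{Q_t K_t^{\top}}{\sqrt{d_k}}\right)\right]_i,
    && v_t = \sum_i \beta_{t,i}\, V_{t,i} \\[4pt]
% Gated residual network, assuming the TFT-style form (Lim et al.):
\eta_2 &= \mathrm{ELU}\!\left(W_2\, a + b_2\right),
    && \eta_1 = W_1\, \eta_2 + b_1 \\
\mathrm{GRN}(a) &= \mathrm{LayerNorm}\!\bigl(a + \mathrm{GLU}(\eta_1)\bigr)
    && \\[4pt]
% Standard pinball loss for a quantile output \hat{y}_t at level q:
L_q(y_t, \hat{y}_t) &= \max\!\bigl(q\,(y_t - \hat{y}_t),\;
        (q - 1)\,(y_t - \hat{y}_t)\bigr)
    && \text{quantile loss}
\end{align*}
```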