PLoS One. 2021 Jul 9;16(7):e0254319. doi: 10.1371/journal.pone.0254319

Table 5. Model summary table.

Recurrent neural networks
RNN-LSTM: Stacked recurrent neural network with LSTM cells; it operates on an input sequence and makes the N-step-ahead prediction at each RNN step (a minimal sketch of this family follows the table).
RNN-GRU: Stacked recurrent neural network with GRU units; it operates on an input sequence and makes the N-step-ahead prediction at each RNN step.
Sequence-to-sequence-type neural networks
Seq2Seq (GRU): Encoder-decoder architecture in which both the encoder and the decoder are modeled as RNN-GRUs; the encoder produces a context vector, which is fed into the decoder RNN, and the decoder reconstructs the input sequence shifted by N steps.
Seq2Seq (NODE): Encoder-decoder architecture in which the encoder is modeled as an RNN-GRU and the decoder as neural ordinary differential equations (NODE); the encoder produces a context vector, which is used as the initial condition for the NODE decoder (a sketch of this variant follows the table).
Temporal-convolution-layers-based neural networks
TCN: Temporal convolutional network with residual blocks; each residual block consists of a sequence of temporal convolutional layers with an increasing dilation rate, and the final output is the N-step-ahead prediction (see the sketch after the table).
SNAIL: Temporal convolutional network with a temporal-convolution (TC) block and attention layers; 1) the TC block consists of a series of dense blocks, each using two parallel dilated temporal convolutions, and 2) the attention layers indicate which points of the input sequence should be emphasized (the attention block is sketched after the table). The final output is the N-step-ahead prediction.
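To make the recurrent-network rows concrete, the following is a minimal PyTorch sketch, not the authors' implementation: the `StackedRNNForecaster` name, layer sizes, and the horizon used in the toy example are illustrative assumptions. It shows a stacked GRU/LSTM that reads the input sequence and emits an N-step-ahead prediction from the hidden state at every step.

```python
import torch
import torch.nn as nn

class StackedRNNForecaster(nn.Module):
    """Stacked RNN (GRU or LSTM cells) that, at every input step t,
    predicts the value of the series N steps ahead, x[t + N]."""
    def __init__(self, n_features, hidden_size=64, num_layers=2, cell="gru"):
        super().__init__()
        rnn_cls = nn.GRU if cell == "gru" else nn.LSTM
        self.rnn = rnn_cls(n_features, hidden_size, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, n_features)

    def forward(self, x):
        # x: (batch, seq_len, n_features)
        h, _ = self.rnn(x)        # hidden state at every step
        return self.head(h)       # (batch, seq_len, n_features), read as x[t + N]


# Toy usage: predict N = 4 steps ahead for a univariate series.
model = StackedRNNForecaster(n_features=1, cell="lstm")
x = torch.randn(16, 50, 1)        # batch of 16 sequences, 50 steps each
y_hat = model(x)                  # y_hat[:, t] targets x[:, t + 4]
```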
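The two Seq2Seq rows share the same encoder-decoder pattern; the sketch below illustrates the NODE variant under stated assumptions: the last hidden state of the GRU encoder serves as the context vector, a fixed-step Euler loop stands in for the adaptive ODE solver a full neural-ODE implementation would use (e.g. via torchdiffeq), and the `Seq2SeqNODE` class and its hyperparameters are illustrative rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class Seq2SeqNODE(nn.Module):
    """Encoder-decoder forecaster: a GRU encoder compresses the input window
    into a context vector, which a neural-ODE decoder integrates forward in
    time to produce the next `horizon` steps (Euler integration for brevity)."""
    def __init__(self, n_features, hidden_size=64, horizon=8, euler_steps=4):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden_size, batch_first=True)
        # dh/dt = f(h), parameterised by a small MLP
        self.ode_func = nn.Sequential(
            nn.Linear(hidden_size, hidden_size), nn.Tanh(),
            nn.Linear(hidden_size, hidden_size),
        )
        self.readout = nn.Linear(hidden_size, n_features)
        self.horizon, self.euler_steps = horizon, euler_steps

    def forward(self, x):
        # x: (batch, seq_len, n_features); last hidden state = context vector
        _, h = self.encoder(x)
        h = h[-1]                                  # (batch, hidden_size)
        dt = 1.0 / self.euler_steps
        outputs = []
        for _ in range(self.horizon):              # one unit of "time" per future step
            for _ in range(self.euler_steps):
                h = h + dt * self.ode_func(h)      # Euler update of dh/dt = f(h)
            outputs.append(self.readout(h))
        return torch.stack(outputs, dim=1)         # (batch, horizon, n_features)


model = Seq2SeqNODE(n_features=1)
y_hat = model(torch.randn(16, 50, 1))              # (16, 8, 1) future window
```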
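For the TCN row, one plausible reading is the residual structure sketched below: causal, dilated 1-D convolutions whose dilation rate doubles with each layer, grouped into residual blocks. Channel counts, kernel size, and class names are assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Module):
    """1-D convolution with left-only padding, so the output at step t
    never depends on inputs later than t."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation):
        super().__init__()
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(in_ch, out_ch, kernel_size, dilation=dilation)

    def forward(self, x):                          # x: (batch, channels, time)
        return self.conv(F.pad(x, (self.pad, 0)))


class ResidualBlock(nn.Module):
    """Stack of dilated causal convolutions (dilation doubling per layer)
    plus a skip connection from the block input."""
    def __init__(self, channels, kernel_size=3, n_layers=3):
        super().__init__()
        layers = []
        for i in range(n_layers):
            layers += [CausalConv1d(channels, channels, kernel_size, dilation=2 ** i),
                       nn.ReLU()]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return x + self.net(x)                     # residual connection


class TCNForecaster(nn.Module):
    def __init__(self, n_features, channels=32, n_blocks=2):
        super().__init__()
        self.inp = nn.Conv1d(n_features, channels, 1)
        self.blocks = nn.Sequential(*[ResidualBlock(channels) for _ in range(n_blocks)])
        self.head = nn.Conv1d(channels, n_features, 1)

    def forward(self, x):                          # x: (batch, seq_len, n_features)
        z = self.blocks(self.inp(x.transpose(1, 2)))
        return self.head(z).transpose(1, 2)        # N-step-ahead target at every step


y_hat = TCNForecaster(n_features=1)(torch.randn(16, 50, 1))   # (16, 50, 1)
```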
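The component that distinguishes SNAIL from a plain TCN is its attention layer. A single-head causal attention block of the kind SNAIL interleaves with its TC blocks can be sketched as below, with the attended values concatenated to the input features; all dimensions and the toy input are illustrative.

```python
import math
import torch
import torch.nn as nn

class CausalSelfAttention(nn.Module):
    """Single-head causal attention: each step attends only to earlier steps
    of the sequence, and the attended values are concatenated to the input."""
    def __init__(self, dim, key_dim=16, value_dim=16):
        super().__init__()
        self.q = nn.Linear(dim, key_dim)
        self.k = nn.Linear(dim, key_dim)
        self.v = nn.Linear(dim, value_dim)
        self.scale = math.sqrt(key_dim)

    def forward(self, x):                          # x: (batch, seq_len, dim)
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = q @ k.transpose(1, 2) / self.scale            # (batch, T, T)
        # Mask out future positions so step t only attends to steps <= t.
        mask = torch.triu(torch.ones_like(scores, dtype=torch.bool), diagonal=1)
        scores = scores.masked_fill(mask, float("-inf"))
        attended = torch.softmax(scores, dim=-1) @ v            # (batch, T, value_dim)
        return torch.cat([x, attended], dim=-1)                 # feature concatenation


feats = torch.randn(16, 50, 32)                                 # e.g. TC-block output
out = CausalSelfAttention(dim=32)(feats)                        # (16, 50, 48)
```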