2023 Feb 20;15:24. doi: 10.1186/s13321-023-00694-z

Fig. 2.


Architectures of four end-to-end deep learning models: A the Graph Transformer; B the LSTM-based encoder-decoder model (LSTM-BASE); C the LSTM-based encoder-decoder model with an attention mechanism (LSTM + ATTN); D the sequential Transformer model. The Graph Transformer takes a graph representation as input, while the other three models take SMILES sequences as input.
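As the caption notes, three of the four models consume SMILES sequences. Before such strings can be fed to an LSTM or Transformer they are typically split into chemically meaningful tokens (multi-character atoms such as `Cl` and `Br`, bracketed atoms, ring-closure digits). The sketch below is a minimal, hedged illustration of this common preprocessing step, not the paper's actual pipeline; the regex follows a widely used SMILES tokenization pattern, and the function name is our own.

```python
import re

# Common regex for SMILES tokenization: bracketed atoms, two-letter
# halogens (Br, Cl), organic-subset atoms, bonds, branches, and ring digits.
SMILES_TOKEN_PATTERN = re.compile(
    r"(\[[^\]]+\]|Br|Cl|N|O|S|P|F|I|B|C|b|c|n|o|s|p"
    r"|\(|\)|\.|=|#|-|\+|\\|/|:|~|@|\?|>|\*|\$|%\d{2}|\d)"
)

def tokenize_smiles(smiles: str) -> list[str]:
    """Split a SMILES string into a list of tokens for a sequence model."""
    return SMILES_TOKEN_PATTERN.findall(smiles)

# Example: aspirin's SMILES becomes a 21-token sequence,
# ready for embedding lookup in an LSTM or Transformer encoder.
print(tokenize_smiles("CC(=O)Oc1ccccc1C(=O)O"))
```

A tokenized sequence like this would then be mapped to integer indices and embedded before entering the encoder; the Graph Transformer (panel A) instead operates directly on the molecular graph, so it skips this step.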