Front Immunol. 2023 Apr 18;14:1128326. doi: 10.3389/fimmu.2023.1128326

Table 1. Overview of the tested models.

| Model | Reference | Architecture | Embedding | Year | Trainable parameters |
|---|---|---|---|---|---|
| TITAN | Weber et al. (11) | Bimodal attention networks, pretrained on BindingDB | Peptides encoded as SMILES, TCRs with BLOSUM62; both padded to the same length | 2021 | 15,506,099 |
| DLpTCR | Xu et al. (14) | Ensemble of FCN, CNN, and ResNet sub-networks | Depending on the sub-network: PCA on 500 amino acid indices, one-hot encoding, or 20 physicochemical properties (PCP) | 2021 | 10,454,869 |
| ERGO | Springer et al. (13) | Autoencoder or LSTM, followed by a multilayer perceptron (MLP) | One-hot encoded, then embedded with either the LSTM or the autoencoder | 2020 | 580,299 (autoencoder) or 6,557,421 (LSTM) |
| NetTCR2.0 | Montemurro et al. (12) | CNN | Both sequences encoded with the BLOSUM50 matrix | 2021 | 21,345 |
| ImRex | Moris et al. (15) | Dual-input CNN with an L2 regularization penalty of 0.01 | Physicochemical property (PCP) interaction map between CDR3 and peptide sequence, 20×11×4 | 2020 | 248,257 |
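
To make the embedding column concrete: substitution-matrix encoding, used by NetTCR2.0 (BLOSUM50) and for the TCR side of TITAN (BLOSUM62), replaces each residue with its row of substitution scores against the 20 canonical amino acids. The sketch below is a minimal illustration, not the models' exact preprocessing pipeline; the padding length, zero-padding scheme, and the use of Biopython's `substitution_matrices` loader are assumptions made here.

```python
import numpy as np
from Bio.Align import substitution_matrices  # Biopython

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # 20 canonical residues
BLOSUM50 = substitution_matrices.load("BLOSUM50")

def blosum_encode(sequence: str, max_len: int = 20) -> np.ndarray:
    """Encode a peptide or CDR3 sequence as a (max_len, 20) matrix whose
    i-th row holds the BLOSUM50 scores of residue i against all 20
    canonical amino acids, zero-padded to max_len (assumed scheme)."""
    encoding = np.zeros((max_len, len(AMINO_ACIDS)), dtype=np.float32)
    for i, residue in enumerate(sequence[:max_len]):
        encoding[i] = [BLOSUM50[residue, aa] for aa in AMINO_ACIDS]
    return encoding

# Example: a CDR3beta sequence and a 9-mer epitope
cdr3 = blosum_encode("CASSIRSSYEQYF", max_len=20)
peptide = blosum_encode("GILGFVFTL", max_len=12)
print(cdr3.shape, peptide.shape)  # (20, 20) (12, 20)
```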
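One-hot encoding, the input representation of ERGO (before its LSTM or autoencoder embedding) and one of DLpTCR's three input channels, maps each residue to a 20-dimensional indicator vector. A minimal sketch; the maximum length and zero-padding are assumptions for illustration.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot_encode(sequence: str, max_len: int = 20) -> np.ndarray:
    """One-hot encode a sequence into a (max_len, 20) matrix: row i has
    a single 1 at the column of residue i; padding rows stay all-zero
    (assumed padding; ERGO's and DLpTCR's exact details may differ)."""
    encoding = np.zeros((max_len, len(AMINO_ACIDS)), dtype=np.float32)
    for i, residue in enumerate(sequence[:max_len]):
        encoding[i, AA_INDEX[residue]] = 1.0
    return encoding

print(one_hot_encode("GILGFVFTL", max_len=12).shape)  # (12, 20)
```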
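ImRex differs from the others in that its input is a single image-like tensor rather than two separate sequence encodings: for each CDR3-peptide residue pair, four physicochemical properties are each combined into one channel, yielding the 20×11×4 map in the table. The sketch below illustrates the idea with a single property (Kyte-Doolittle hydrophobicity) and a pairwise absolute-difference rule; the property scales and the exact combination operation ImRex uses are assumptions here, not its published preprocessing.

```python
import numpy as np

# Kyte-Doolittle hydrophobicity; ImRex stacks four such property tables
# (hydrophobicity plus three others), only one is shown for brevity.
HYDROPHOBICITY = {
    "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
    "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
    "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
    "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2,
}
PROPERTIES = [HYDROPHOBICITY]

def interaction_map(cdr3: str, peptide: str,
                    cdr3_len: int = 20, pep_len: int = 11) -> np.ndarray:
    """Build a (cdr3_len, pep_len, n_properties) map; channel k holds
    the absolute difference of property k between every CDR3 residue
    and every peptide residue (assumed rule), zero-padded."""
    out = np.zeros((cdr3_len, pep_len, len(PROPERTIES)), dtype=np.float32)
    for k, prop in enumerate(PROPERTIES):
        for i, a in enumerate(cdr3[:cdr3_len]):
            for j, b in enumerate(peptide[:pep_len]):
                out[i, j, k] = abs(prop[a] - prop[b])
    return out

m = interaction_map("CASSIRSSYEQYF", "GILGFVFTL")
print(m.shape)  # (20, 11, 1); with all four properties: (20, 11, 4)
```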