Table 1. Attention-based models for DDI prediction and the models or strategies jointly used with them.
Model name | Attention-based model | Jointly used model or strategy | Reference |
---|---|---|---|
Att-BLSTM | Attention | LSTM | [92] |
DDKG | Attention | Bi-LSTM | [95] |
MuFRF | Multi-head Attention | CNN/Auto-encoder | [97] |
SumGNN | Self-attention | GNN | [93] |
MSEDDI | Self-attention | GNN | [96] |
SSIM | Substructure Attention | MPNN | [48] |
DGNN-DDI | Substructure Attention | DMPNN | [47] |
DGAT-DDI | Directed GAT | | [98] |
LaGAT | Link-aware GAT | | [99] |
GNN-DDI | GAT | | [100] |
AttentionDDI | Transformer | Siamese network | [69] |
MDF-SA-DDI | Transformer | Siamese network, CNN/AE | [68] |
AMDE | Transformer/MPAN | | [94] |