Table 3. Attention-based models, the models or strategies used jointly with them, and their references.
Tool | Attention-based model | Jointly used model or strategy | Reference |
---|---|---|---|
SAMPN | Self-Attention | MPNN | [136]
/ | Attention | MPNN | [137] |
MV-GNN | Attention | GCN/Multi-view Learning | [151] |
SME | Attention | GCN | [56] |
EAGCN | Edge Attention | GCN | [152] |
CasANGCL | Cascaded Attention | Pre-training/Graph Contrastive Learning | [138] |
HiGNN | Feature-Wise Attention | GNN | [49] |
MG-BERT | BERT | GNN | [81] |
K-BERT | BERT | Pre-training/Contrastive Learning/Fine-tuning | [80] |
MolRoPE-BERT | BERT | Pre-training/Fine-tuning | [78] |
FP-BERT | BERT | CNN | [143] |
SMG-BERT | BERT | / | [144]
ChemBERTa | BERT | Pre-training | [82] |
ChemBERTa-2 | BERT | Pre-training | [83] |
SMILES-BERT | BERT | Pre-training | [145] |
FraGAT | GAT | / | [139]
ATMOL | GAT | Graph Contrastive Learning | [57] |
FP-GNN | GAT | / | [140]
MoGAT | GAT | / | [141]
PredPS | GAT | / | [142]
ExGCN | GAT | GCN | [153] |
ABT-MPNN | Transformer | MPNN | [147] |
TranGRU | Transformer | BiGRU | [148] |
DHTNN | Transformer | / | [149]
MolHGT | Heterogeneous Graph Transformer | / | [146]
PharmHGT | Heterogeneous Graph Transformer | / | [59]
GROVER | GNN Transformer | Pre-training | [150] |
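Whatever the backbone (MPNN, GCN, GAT, BERT, or Transformer), the entries in Table 3 share the same scaled dot-product attention primitive. A minimal NumPy sketch of self-attention over toy atom features follows; the function name and the 4-atom, 8-feature example are illustrative assumptions, not code from any of the listed tools.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d)) V; returns output and weights."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                 # pairwise similarities
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1
    return weights @ V, weights

# Toy "molecule": 4 atoms with 8-dim features.
# Self-attention uses the same matrix X for queries, keys, and values.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))
out, w = scaled_dot_product_attention(X, X, X)
print(out.shape)  # (4, 8): one updated feature vector per atom
```

Real models add learned projection matrices for Q, K, and V and restrict or bias the attention pattern (e.g., edge attention in EAGCN, neighborhood masking in GAT), but the weighted-sum core is the same.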