Table 7.
Different components used in graph neural networks optimized for heterophilic node classification.
| Method | Higher-order neighbors | Weights for self-loops | Concat across layers | Dynamic gating |
|---|---|---|---|---|
| GCN | ✗ | ✗ | ✗ | ✗ |
| SAGE | ✗ | ✓ | ✗ | ✗ |
| MixHop | ✓ | ✗ | ✓ | ✗ |
| GGCN | ✗ | ✗ | ✗ | ✓ |
| H2GCN | ✓ | ✓ | ✓ | ✗ |
| HH-GCN | ✗ | ✗ | ✗ | ✗ |
| HH-SAGE | ✗ | ✓ | ✗ | ✗ |
From left to right, the columns indicate whether each method incorporates additional information from higher-order neighbors, uses separate weights for self-loops, concatenates representations across layers, or employs dynamic gating. We observe that Half-Hop is lightweight: it requires no extra components in the loss and does not explicitly compute separate weights for self-loops.
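To illustrate why no architectural changes are needed, below is a minimal sketch of a half-hop-style graph augmentation that can be applied to the input graph before any standard backbone (e.g., GCN or SAGE). The hyperparameter names `p` (fraction of edges slowed down) and `alpha` (feature interpolation weight) are assumptions for this sketch; the exact sampling and rewiring scheme in the paper may differ.

```python
# A minimal sketch (not the reference implementation) of a half-hop-style
# augmentation: for a random subset of edges, insert a new "slow" node whose
# features interpolate between the two endpoints, and reroute the edge
# through it. The model and loss are left untouched.
import torch


def half_hop(x: torch.Tensor, edge_index: torch.Tensor,
             p: float = 0.5, alpha: float = 0.5):
    """Augment (x, edge_index) by inserting interpolated nodes on ~p of the edges.

    x          : [N, F] node features
    edge_index : [2, E] directed edges (row 0 = source, row 1 = target)
    Returns (x_aug, edge_index_aug); the first N rows of x_aug are the
    original nodes, so their labels and predictions are unchanged.
    """
    num_nodes, num_edges = x.size(0), edge_index.size(1)
    mask = torch.rand(num_edges) < p                 # edges selected for slowing
    src, dst = edge_index[0, mask], edge_index[1, mask]

    # New node features: a convex combination of the two endpoints.
    x_new = alpha * x[dst] + (1.0 - alpha) * x[src]
    new_ids = torch.arange(num_nodes, num_nodes + x_new.size(0))

    # Keep untouched edges, and route selected edges through the new nodes.
    kept = edge_index[:, ~mask]
    via_new = torch.cat([
        torch.stack([src, new_ids]),                 # source -> slow node
        torch.stack([new_ids, dst]),                 # slow node -> target
    ], dim=1)

    x_aug = torch.cat([x, x_new], dim=0)
    edge_index_aug = torch.cat([kept, via_new], dim=1)
    return x_aug, edge_index_aug
```

In this form, the augmentation only rewrites the graph data, which is consistent with the table above: Half-Hop adds no loss terms, no explicit self-loop weights, and no gating or cross-layer concatenation to the backbone it is paired with.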