Table 2.
Quantitative results under ID shift, where one of {jitter, mirroring, rotation} is applied to the evaluation set. For each dataset, the first row, with no transformation applied (the no-shift scenario), serves as the baseline. Results are reported for 1-way 20-shot tasks. The best value per metric is shown in bold and the second-best is underlined; for predictive uncertainty, lower is better, so only the lowest value is highlighted in bold.
| Evaluation Set | Shift | mIoU (%) | OA (%) | MCC | Predictive Uncertainty | Notes |
|---|---|---|---|---|---|---|
| Infrabel-5 | – | 89.10 | 92.48 | 0.8250 | 0.0938 | ⇒ baseline |
| Infrabel-5 | jitter | 89.07 | 92.47 | 0.8247 | 0.0951 | |
| Infrabel-5 | mirroring | 89.02 | 92.43 | 0.8237 | 0.0952 | |
| Infrabel-5 | rotation | 89.01 | 92.42 | 0.8236 | 0.0938 | |
| WHU | – | 83.04 | 89.02 | 0.7714 | 0.3254 | ⇒ baseline |
| WHU | jitter | 82.89 | 88.86 | 0.7682 | 0.3261 | |
| WHU | mirroring | 82.95 | 88.97 | 0.7701 | 0.3255 | |
| WHU | rotation | 82.94 | 88.96 | 0.7700 | 0.3256 | |
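The exact implementation behind the reported metrics is not given in this excerpt; as a reference point, the following is a minimal NumPy sketch of how mIoU, overall accuracy (OA), and the Matthews correlation coefficient (MCC) are conventionally computed for a binary segmentation mask pair. The function name and the two-class (foreground/background) averaging for mIoU are assumptions, not taken from the paper.

```python
import numpy as np

def segmentation_metrics(pred, target):
    """Conventional binary-segmentation metrics (assumed, not the paper's code).

    pred, target: arrays of the same shape with 0/1 (or boolean) entries.
    Returns (mIoU, OA, MCC) as floats; mIoU averages foreground and
    background IoU over the two classes.
    """
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)

    # Confusion-matrix counts
    tp = np.sum(pred & target)
    tn = np.sum(~pred & ~target)
    fp = np.sum(pred & ~target)
    fn = np.sum(~pred & target)

    # mIoU: mean of per-class intersection-over-union
    iou_fg = tp / (tp + fp + fn)
    iou_bg = tn / (tn + fp + fn)
    miou = (iou_fg + iou_bg) / 2

    # Overall accuracy: fraction of correctly labeled pixels
    oa = (tp + tn) / (tp + tn + fp + fn)

    # MCC: correlation between predicted and true labels, in [-1, 1]
    denom = np.sqrt(float(tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom > 0 else 0.0
    return float(miou), float(oa), float(mcc)
```

Note that the table reports mIoU and OA as percentages, so the fractions returned here would be multiplied by 100, while MCC is reported on its native [-1, 1] scale.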