Table 3.
Mean metrics comparing our method to expert annotators, reported separately for T1 and T2 mapping images.
| Metric | Automated vs expert annotation, T1 | Automated vs expert annotation, T2 | Interobserver agreement of three experts, T1 | Interobserver agreement of three experts, T2 |
|---|---|---|---|---|
| IoU / Jaccard index (%, ↑) | 81.7 | 78.39 | 84.14 | 86.25 |
| Dice score (%, ↑) | 87.8 | 85.02 | 89.84 | 91.73 |
| Hausdorff distance (mm, ↓) | 3.97 | 5.39 | 2.89 | 2.41 |
| Mean surface distance (mm, ↓) | 1.77 | 2.44 | 1.34 | 1.00 |
| Number of training samples | 378 | 454 | — | — |
| Number of test samples | 116 | 119 | 116 | 119 |
For each metric and mapping type (T1 or T2), differences are statistically significant (also denoted with different lettering in superscripts). ↑ indicates higher-is-better metrics; ↓ indicates lower-is-better metrics.
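The paper does not specify how these metrics were implemented; the following is a minimal sketch of one common way to compute all four from a pair of binary segmentation masks. The function names and the scalar `spacing` argument are illustrative, not taken from the source.

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import cdist

def surface_points(mask):
    """Coordinates of boundary voxels: the mask minus its erosion."""
    return np.argwhere(mask & ~binary_erosion(mask))

def segmentation_metrics(pred, ref, spacing=1.0):
    """IoU and Dice in percent; Hausdorff and mean surface distance in mm.

    pred, ref : binary arrays of the same shape (predicted and reference masks)
    spacing   : voxel size in mm (assumed isotropic here for simplicity)
    """
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    iou = 100.0 * inter / np.logical_or(pred, ref).sum()
    dice = 100.0 * 2.0 * inter / (pred.sum() + ref.sum())
    # All pairwise distances between the two boundary point sets,
    # scaled by voxel spacing so the results are in millimetres.
    d = cdist(surface_points(pred) * spacing, surface_points(ref) * spacing)
    hausdorff = max(d.min(axis=1).max(), d.min(axis=0).max())
    msd = 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
    return iou, dice, hausdorff, msd
```

Note that `cdist` builds the full pairwise distance matrix between boundary points, which is simple but memory-hungry for large 3D masks; distance-transform-based implementations scale better and are what segmentation toolkits typically use.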