Heliyon. 2024 May 4;10(10):e30763. doi: 10.1016/j.heliyon.2024.e30763

Table 2.

Comparison of evaluation metrics of various typical models.

| Models | DSC ↑ | HD95 (mm) ↓ | Precision ↑ | Recall ↑ |
|---|---|---|---|---|
| 3D UNet | 0.7172 ± 0.0955 | 5.7052 ± 26.80 | 0.7354 ± 0.1185 | 0.7223 ± 0.1317 |
| Isensee et al. [24] | 0.7693 ± 0.0893 | 2.4136 ± 3.5872 | 0.7633 ± 0.1143 | 0.7924 ± 0.1163 |
| 2D Isensee et al. [24] | 0.6520 ± 0.1463 | 7.9084 ± 10.2189 | 0.8081 ± 0.1134 | 0.5763 ± 0.1846 |
| Lin et al. [23] | 0.7754 ± 0.0954 | 2.6832 ± 4.7794 | 0.7695 ± 0.1177 | 0.7983 ± 0.1197 |
| TransBTS | 0.7744 ± 0.0929 | 8.6436 ± 87.3025 | 0.7583 ± 0.1107 | 0.8080 ± 0.1221 |
| UNETR | 0.7683 ± 0.0944 | 3.0822 ± 6.2588 | 0.7687 ± 0.1184 | 0.7850 ± 0.1199 |
| VT-UNet | 0.7758 ± 0.0906 | 2.5433 ± 5.1414 | 0.7690 ± 0.1073 | 0.7989 ± 0.1203 |
| MPU-Net | 0.7693 ± 0.0963 | 3.1479 ± 7.7695 | 0.7817 ± 0.1143 | 0.7747 ± 0.1279 |
| LVPA-UNet | 0.7907 ± 0.0937 | 1.8702 ± 2.8491 | 0.7929 ± 0.1088 | 0.8025 ± 0.1201 |
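For readers unfamiliar with the overlap metrics in the table, the following is a minimal sketch of how DSC, precision, and recall are typically computed from binary segmentation masks. This is an illustrative implementation over NumPy arrays, not the evaluation code used in the paper; HD95 (the 95th-percentile Hausdorff surface distance, in mm) additionally requires surface extraction and distance transforms and is omitted here.

```python
import numpy as np

def dice_precision_recall(pred, gt):
    """Compute DSC, precision, and recall for two binary masks.

    pred, gt: NumPy arrays of the same shape (nonzero = foreground).
    DSC = 2*TP / (2*TP + FP + FN); higher is better (table column DSC ↑).
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()       # true positives
    fp = np.logical_and(pred, ~gt).sum()      # false positives
    fn = np.logical_and(~pred, gt).sum()      # false negatives
    dsc = 2.0 * tp / (2.0 * tp + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return dsc, precision, recall

# Toy 2D example (real use would pass 3D volumes):
pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
gt = np.array([[1, 0, 0],
               [0, 1, 1]])
dsc, p, r = dice_precision_recall(pred, gt)  # TP=2, FP=1, FN=1
```

In the toy example above, TP=2, FP=1, FN=1, so all three metrics equal 2/3; in the table, per-case values of these metrics are averaged across the test set and reported as mean ± standard deviation.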