PeerJ Comput. Sci. 2025 Aug 7;11:e3059. doi: 10.7717/peerj-cs.3059

Table 3. Comprehensive comparison of metrics for different methods on the MSDPT dataset.

| Model | Dice ↑ | IoU ↑ | Sens ↑ | Spec ↑ | HD95 ↓ |
|---|---|---|---|---|---|
| 3DUNet | 0.4359 ± 0.0527 | 0.3223 ± 0.0393 | 0.4371 ± 0.0766 | **0.9997 ± 0.0001** | 38.5963 ± 18.2940 |
| SegResNet | 0.4732 ± 0.0572 | 0.3536 ± 0.0492 | 0.4736 ± 0.0812 | **0.9997 ± 0.0001** | 37.6231 ± 15.6751 |
| 3DUX-Net | 0.4781 ± 0.0683 | 0.3554 ± 0.0566 | 0.4922 ± 0.0916 | **0.9997 ± 0.0001** | 36.3027 ± 22.6361 |
| AttentionUnet | 0.4839 ± 0.0853 | 0.3608 ± 0.0703 | 0.5124 ± 0.1399 | **0.9997 ± 0.0001** | 36.8667 ± 18.2556 |
| UNETR | 0.4017 ± 0.0547 | 0.2883 ± 0.0449 | 0.4025 ± 0.0854 | 0.9996 ± 0.0001 | 43.8937 ± 17.7325 |
| nnFormer | 0.4666 ± 0.1123 | 0.3513 ± 0.0942 | 0.4628 ± 0.1328 | **0.9997 ± 0.0001** | 45.0921 ± 22.3324 |
| SwinUNETR | 0.4781 ± 0.0663 | 0.3556 ± 0.0546 | 0.5055 ± 0.0902 | 0.9996 ± 0.0002 | 45.6697 ± 23.5726 |
| Ours | **0.5210 ± 0.0651** | **0.4174 ± 0.0184** | **0.5521 ± 0.0836** | **0.9997 ± 0.0001** | **28.3127 ± 5.6131** |

Note: Bold values indicate the optimal value (including ties) for each evaluation metric.
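For reference, the overlap metrics reported in the table (Dice, IoU, sensitivity, specificity) follow their standard confusion-matrix definitions. The sketch below, assuming binary NumPy masks and a hypothetical `overlap_metrics` helper not taken from the paper, illustrates how each is computed per case (HD95, a surface-distance metric, is omitted here as it requires a distance-transform implementation):

```python
import numpy as np

def overlap_metrics(pred, gt):
    """Standard overlap metrics for binary segmentation masks.

    pred, gt: arrays of the same shape, nonzero = foreground.
    Returns (dice, iou, sensitivity, specificity).
    """
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)
    tp = np.logical_and(pred, gt).sum()    # true positives
    fp = np.logical_and(pred, ~gt).sum()   # false positives
    fn = np.logical_and(~pred, gt).sum()   # false negatives
    tn = np.logical_and(~pred, ~gt).sum()  # true negatives
    dice = 2 * tp / (2 * tp + fp + fn)     # overlap, weights TP twice
    iou = tp / (tp + fp + fn)              # intersection over union
    sens = tp / (tp + fn)                  # recall on foreground
    spec = tn / (tn + fp)                  # recall on background
    return dice, iou, sens, spec

# Toy example: 2 voxels predicted, 2 in ground truth, 1 overlapping.
print(overlap_metrics(np.array([1, 1, 0, 0]), np.array([1, 0, 1, 0])))
```

Note that Dice and IoU are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why their rankings across methods in the table largely agree.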