Author manuscript; available in PMC: 2025 Aug 26.
Published in final edited form as: Med Image Comput Comput Assist Interv. 2024 Oct 23;15012:542–552. doi: 10.1007/978-3-031-72390-2_51

Table 2.

Performance comparison of 2D and 3D SAM approaches in terms of Dice score.

Dim  Method           | AMOS [13]                   | TotalSegmentator [28]       | BraTS [20]
                      | 1pt    3pt    5pt    10pt   | 1pt    3pt    5pt    10pt   | 1pt    3pt    5pt    10pt
2D   SAM [16]         | 0.049  0.093  0.114  0.145  | 0.202  0.279  0.311  0.348  | 0.108  0.192  0.217  0.237
     MobileSAM [32]   | 0.041  0.056  0.063  0.070  | 0.149  0.170  0.182  0.212  | 0.079  0.132  0.156  0.186
     TinySAM [24]     | 0.049  0.077  0.089  0.101  | 0.171  0.225  0.243  0.262  | 0.103  0.165  0.187  0.211
     MedSAM [18]      | 0.004  0.051  0.060  0.074  | 0.006  0.069  0.090  0.111  | 0.008  0.059  0.064  0.071
     SAM-Med2D [4]    | 0.097  0.127  0.129  0.132  | 0.008  0.081  0.100  0.128  | 0.013  0.076  0.082  0.084
3D   SAM-Med3D [27]   | 0.289* 0.386* 0.418* 0.448* | 0.252* 0.400* 0.463* 0.522* | 0.328  0.395  0.418  0.446*
     FastSAM3D        | 0.273  0.368  0.402  0.437  | 0.250  0.378  0.445  0.519  | 0.333* 0.401* 0.421* 0.445

We measure performance at 1, 3, 5, and 10 point prompts (pt). SAM-Med3D and our FastSAM3D are evaluated in a 3D context, whereas SAM, MobileSAM, TinySAM, MedSAM, and SAM-Med2D are applied independently to every 2D slice of the 3D volume. Notably, FastSAM3D performs competitively with SAM-Med3D and achieves higher Dice scores than all 2D counterparts, highlighting the effectiveness of our approach. The best score in each column is marked with an asterisk.
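The Dice scores reported above measure volumetric overlap between a predicted mask and the ground truth: twice the intersection divided by the sum of the two mask sizes. Below is a minimal NumPy sketch of this metric; the function name `dice_score` and the smoothing term `eps` are illustrative choices, not taken from the paper.

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient between two binary masks (2D or 3D).

    Dice = 2 * |pred AND gt| / (|pred| + |gt|); eps avoids division by
    zero when both masks are empty.
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return float(2.0 * intersection / (pred.sum() + gt.sum() + eps))

# Toy example: two 4x4 masks of 8 voxels each, overlapping in 4 voxels,
# so Dice = 2*4 / (8+8) = 0.5.
a = np.zeros((4, 4), dtype=bool); a[:2] = True   # rows 0-1
b = np.zeros((4, 4), dtype=bool); b[1:3] = True  # rows 1-2
print(round(dice_score(a, b), 3))  # → 0.5
```

For whole-organ evaluation as in the table, the same computation is applied to the full 3D volume, so slice-wise 2D predictions must first be stacked into a volume before scoring.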