Table 3. Performance of models on the DDTI and Stanford CINE datasets, reported as mean ± standard deviation averaged across positive images. Bolded values highlight the model with the best performance on each metric.
| Model Name | DDTI DSC ↑ | DDTI IoU ↑ | Stanford CINE DSC ↑ | Stanford CINE IoU ↑ |
|---|---|---|---|---|
| UNet-AD | 0.494±0.242 | 0.364±0.223 | 0.456±0.264 | 0.335±0.236 |
| NestedUNet-AD | 0.502±0.247 | 0.372±0.227 | 0.476±0.275 | 0.356±0.244 |
| SResUNet-AD | 0.576±0.224 | 0.438±0.212 | 0.535±0.251 | 0.404±0.230 |
| SGUNet-AD | 0.513±0.257 | 0.384±0.232 | 0.530±0.271 | 0.405±0.248 |
| AttUNet-AD | 0.504±0.234 | 0.370±0.215 | 0.462±0.264 | 0.339±0.232 |
| MSU-Net-AD | 0.527±0.325 | 0.421±0.290 | 0.569±0.322 | 0.463±0.294 |
| UNet | 0.639±0.203 | 0.501±0.211 | 0.616±0.245 | 0.488±0.242 |
| NestedUNet | 0.614±0.195 | 0.470±0.195 | 0.609±0.258 | 0.483±0.249 |
| SResUNet | 0.604±0.214 | 0.466±0.215 | 0.601±0.251 | 0.472±0.242 |
| Wang et al. | **0.728±0.241** | **0.617±0.243** | **0.701±0.303** | **0.609±0.299** |
| SGUNet | 0.587±0.202 | 0.443±0.197 | 0.563±0.253 | 0.433±0.234 |
| AttUNet | 0.649±0.191 | 0.509±0.204 | 0.612±0.244 | 0.483±0.240 |
| MSU-Net | 0.705±0.191 | 0.575±0.210 | 0.658±0.269 | 0.542±0.267 |
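For reference, the two metrics reported above can be computed from binary segmentation masks as follows. This is a minimal sketch (the function name `dice_and_iou` and the NumPy-based implementation are illustrative, not taken from the paper); DSC is the Dice similarity coefficient 2|A∩B| / (|A|+|B|), and IoU is |A∩B| / |A∪B|:

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    """Compute Dice similarity coefficient (DSC) and IoU for binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # DSC = 2|A ∩ B| / (|A| + |B|); IoU = |A ∩ B| / |A ∪ B|
    dsc = 2.0 * intersection / (pred.sum() + gt.sum())
    iou = intersection / union
    return float(dsc), float(iou)

# Example: prediction covers two pixels, ground truth one of them
pred = np.array([[1, 1], [0, 0]])
gt = np.array([[1, 0], [0, 0]])
dsc, iou = dice_and_iou(pred, gt)  # DSC ≈ 0.667, IoU = 0.5
```

Note that DSC weights the overlap more heavily than IoU (DSC ≥ IoU for any pair of masks), which is consistent with the DSC columns exceeding the IoU columns for every model in the table.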