Table 2.
Average four-fold cross-validation performance on two public datasets (the performance of our method is reported as mean ± std).

| Method | DSC (%) | Jaccard (%) | Recall (%) | Precision (%) |
|---|---|---|---|---|
| NIH dataset | | | | |
| Bottom-up (32) | 70.7 | 57.9 | 71.6 | 74.4 |
| Fixed-point (53) | 82.4 | – | – | – |
| 3D Coarse-to-Fine (54) | 84.6 | – | – | – |
| Holistically nested (55) | 81.3 | 68.9 | – | – |
| RSTN (31) | 84.5 | – | – | – |
| Recurrent Contextual Learning (39) | 83.3 | 71.8 | 84.5 | 82.8 |
| Vnet (56) | 80.1 | – | – | – |
| Attention Unet (57) | 83.1 | – | – | – |
| DenseASPP (40) | 85.4 | – | – | – |
| Ref. (46) | 84.1 | 72.9 | 85.3 | 83.6 |
| Cascaded FCN (23) | 85.9 | 75.7 | 85.2 | 87.6 |
| AX-Unet (Ours) | 87.7 ± 3.8 | 78.2 ± 5.3 | 90.9 ± 2.2 | 92.9 ± 6.1 |
| MSD dataset | | | | |
| Unet-64 | 70.7 | – | – | – |
| Unet-16 | 67.1 | – | – | – |
| Attention Unet (57) | 66.0 | – | – | – |
| MoNet (58) | 74.0 | 68.9 | – | – |
| nn-Unet (27) | 80.0 | – | – | – |
| AX-Unet (Ours) | 85.9 ± 5.1 | 77.9 ± 3.4 | 86.3 ± 5.1 | 93.1 ± 6.9 |
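For reference, the four metrics reported above can all be derived from the true-positive, false-positive, and false-negative voxel counts of a binary segmentation against its ground truth. The sketch below shows one way to compute them with NumPy; `segmentation_metrics` is an illustrative helper, not part of the authors' code.

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Compute DSC, Jaccard, recall, and precision for binary masks."""
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)
    tp = np.logical_and(pred, gt).sum()    # true-positive voxels
    fp = np.logical_and(pred, ~gt).sum()   # false-positive voxels
    fn = np.logical_and(~pred, gt).sum()   # false-negative voxels
    dsc = 2 * tp / (2 * tp + fp + fn)      # Dice similarity coefficient
    jaccard = tp / (tp + fp + fn)          # intersection over union
    recall = tp / (tp + fn)                # sensitivity
    precision = tp / (tp + fp)             # positive predictive value
    return dsc, jaccard, recall, precision
```

Note that DSC and Jaccard are monotonically related (DSC = 2J / (1 + J)), which is why methods ranked by one metric usually keep the same order under the other.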