Table 5.
Dice similarity coefficient (DSC) comparison of segmentation models across tumor regions.
| Ref | Model | Whole Tumor | Tumor Core | Enhancing Tumor |
|---|---|---|---|---|
| Our Model | DVIT-DSUNET with feature fusion | 0.908 | 0.923 | 0.914 |
| [14] | Supervised Multi-Scale Attention Network | 0.902 | 0.869 | 0.806 |
| [17] | 3D-Znet | 0.906 | 0.845 | 0.859 |
| [21] | SwinBTS | 0.897 | 0.792 | 0.744 |
| [44] | Deep nuanced reasoning and Swin-T | 0.920 | 0.872 | 0.913 |
| [45] | Dual Encoding-Decoding Method | 0.909 | 0.883 | 0.904 |
| [46] | Transformer and CNN combined | 0.908 | 0.853 | 0.857 |
| [47] | C-CNN with Distance-Wise Attention | 0.920 | 0.873 | 0.911 |
| [48] | CNN and Chimp Optimization Algorithm | 0.970 | – | – |
In summary, while several models achieve impressive results, our DVIT-DSUNET with feature fusion distinguishes itself by consistently attaining high DSC values across all three tumor regions. This balanced performance underscores the model's ability to handle the intricacies of tumor segmentation more effectively than the other methods considered.
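For clarity on the metric being compared in Table 5, the Dice similarity coefficient measures the overlap between a predicted segmentation and the ground truth: DSC = 2|A ∩ B| / (|A| + |B|). The following is a minimal illustrative sketch (not the evaluation code used in any of the cited works), assuming binary NumPy masks:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|). `eps` guards against empty masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float(2.0 * intersection / (pred.sum() + truth.sum() + eps))

# Toy example: 3 of 4 predicted foreground voxels overlap the ground truth.
pred = np.array([1, 1, 1, 1, 0, 0])
truth = np.array([1, 1, 1, 0, 1, 0])
print(round(dice_coefficient(pred, truth), 3))  # → 0.75
```

A DSC of 1.0 indicates perfect overlap and 0.0 indicates none, so the values in Table 5 (mostly 0.74–0.97) reflect high but imperfect agreement with expert annotations.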