Table 6.
Model | Configuration | Dataset | Curvature correction | Average λ | Precision (%) | AUC | Average testing time (s/VOI)
---|---|---|---|---|---|---|---
MCME[16] | l3−l2−l1−l0 | Set 1 | × | 0.08 | 94.68±1.47 | 0.961 | 10.9
 | | | ✓ | 0.21 | 99.01±0.41 | 0.998 |
 | | Set 2 | × | 0.17 | 94.63±2.45 | 0.974 |
 | | | ✓ | 0.26 | 97.11±0.96 | 0.994 |
WCME[17] | LL−LH−HL−HH | Set 1 | × | 0.12 | 95.91±1.56 | 0.976 | 9.34
 | | | ✓ | 0.18 | 97.40±0.78 | 0.995 |
 | | Set 2 | × | 0.21 | 96.03±2.14 | 0.986 |
 | | | ✓ | 0.24 | 96.33±1.29 | 0.990 |
MCME – Multiscale convolutional mixture of experts; AUC – Area under the ROC curve; ROC – Receiver operating characteristic; WCME – Wavelet-based convolutional mixture of experts; VOI – Volume of interest