
Table 6.

Comparison of average classification performance of the multiscale convolutional mixture of experts (MCME) and the wavelet-based convolutional mixture of experts (WCME) models on the Heidelberg research datasets, based on 10 repetitions of 5-fold cross-validation with τ = 15% for decision-making

| Model | Configuration | Dataset | Curvature correction | Average λ | Precision (%) | AUC | Average testing time (s/VOI) |
|---|---|---|---|---|---|---|---|
| MCME [16] | l3, l2, l1, l0 | Set 1 | × | 0.08 | 94.68±1.47 | 0.961 | 10.9 |
| | | Set 1 | ✓ | 0.21 | 99.01±0.41 | 0.998 | |
| | | Set 2 | × | 0.17 | 94.63±2.45 | 0.974 | |
| | | Set 2 | ✓ | 0.26 | 97.11±0.96 | 0.994 | |
| WCME [17] | LL, LH, HL, HH | Set 1 | × | 0.12 | 95.91±1.56 | 0.976 | 9.34 |
| | | Set 1 | ✓ | 0.18 | 97.40±0.78 | 0.995 | |
| | | Set 2 | × | 0.21 | 96.03±2.14 | 0.986 | |
| | | Set 2 | ✓ | 0.24 | 96.33±1.29 | 0.990 | |

MCME – Multiscale convolutional mixture of experts; AUC – Area under the ROC curve; ROC – Receiver operating characteristic; WCME – Wavelet-based convolutional mixture of experts; VOI – Volume of interest
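The evaluation protocol in the caption (10 repetitions of 5-fold cross-validation, mean ± SD precision and AUC, with a threshold τ applied for decision-making) can be sketched as below. This is only an illustrative outline under stated assumptions: the classifier, the synthetic data, and the use of τ as a probability cutoff are placeholders, not the paper's MCME/WCME networks or its VOI-level decision rule.

```python
# Minimal sketch: repeated 5-fold cross-validation with a decision threshold tau.
# The classifier and data are illustrative stand-ins, not the paper's models/datasets.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, roc_auc_score
from sklearn.model_selection import StratifiedKFold

X, y = make_classification(n_samples=400, n_features=20, random_state=0)

tau = 0.15            # assumed probability cutoff standing in for the paper's tau = 15%
n_repeats, n_folds = 10, 5
precisions, aucs = [], []

for rep in range(n_repeats):
    skf = StratifiedKFold(n_splits=n_folds, shuffle=True, random_state=rep)
    for train_idx, test_idx in skf.split(X, y):
        clf = RandomForestClassifier(random_state=rep)
        clf.fit(X[train_idx], y[train_idx])
        proba = clf.predict_proba(X[test_idx])[:, 1]
        y_pred = (proba >= tau).astype(int)          # threshold-based decision-making
        precisions.append(precision_score(y[test_idx], y_pred))
        aucs.append(roc_auc_score(y[test_idx], proba))

# Report mean +/- standard deviation over all 10 x 5 = 50 folds, as in the table.
print(f"Precision: {100 * np.mean(precisions):.2f} +/- {100 * np.std(precisions):.2f} %")
print(f"AUC:       {np.mean(aucs):.3f}")
```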