2018 Jul 4;39(11):4241–4257. doi: 10.1002/hbm.24243

Table 5.

Validation of the proposed method on a secondary dataset (Souza et al., 2017)

Method             Dice           Sensitivity    Specificity
ANTs               95.93 (0.87)   94.51 (1.58)   99.71 (0.11)
BEaST              95.77 (1.23)   93.84 (2.57)   99.76 (0.13)
BET                95.22 (0.94)   98.26 (1.61)   99.13 (0.23)
BSE                90.49 (7.03)   91.44 (5.32)   98.65 (2.27)
HWA                91.66 (1.11)   99.93 (0.12)   97.83 (0.82)
MBWSS              95.57 (1.46)   92.78 (2.67)   99.85 (0.04)
optiBET            95.43 (0.71)   96.13 (0.95)   99.36 (0.31)
Proposed           97.35 (0.44)   97.72 (0.81)   99.64 (0.16)
Proposed + EC      97.58 (0.38)   98.11 (0.80)   99.65 (0.14)
ROBEX              95.61 (0.72)   98.42 (0.70)   99.13 (0.28)
STAPLE             96.80 (0.74)   98.98 (0.60)   99.38 (0.22)
Silver standard    97.14 (0.51)   96.83 (0.68)   99.71 (0.11)

Values are given as mean (standard deviation) in percent. Performance figures for the other methods are those reported in the original study (Souza et al., 2017). The two top‐performing methods for each performance measure are emboldened in the original table. Note that both “Proposed” and “Proposed + EC” refer to the accelerated version of the method described in Section 3.1.
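For reference, the three measures in the table can be computed from the confusion-matrix counts of a predicted brain mask against a reference mask: Dice = 2TP/(2TP + FP + FN), sensitivity = TP/(TP + FN), and specificity = TN/(TN + FP). The sketch below illustrates these definitions on flattened binary masks; it is an illustrative example only, not the evaluation code used in the study.

```python
def overlap_metrics(pred, ref):
    """Dice, sensitivity, and specificity (in %) for two flat binary
    masks of equal length. Illustrative sketch of the table's measures,
    not the authors' implementation."""
    tp = sum(1 for p, r in zip(pred, ref) if p and r)          # true positives
    fp = sum(1 for p, r in zip(pred, ref) if p and not r)      # false positives
    fn = sum(1 for p, r in zip(pred, ref) if not p and r)      # false negatives
    tn = sum(1 for p, r in zip(pred, ref) if not p and not r)  # true negatives
    dice = 100.0 * 2 * tp / (2 * tp + fp + fn)
    sensitivity = 100.0 * tp / (tp + fn)   # true positive rate
    specificity = 100.0 * tn / (tn + fp)   # true negative rate
    return dice, sensitivity, specificity

# Toy 16-voxel example: the reference mask has 4 brain voxels and the
# prediction adds one false positive.
ref = [0] * 16
for i in (5, 6, 9, 10):
    ref[i] = 1
pred = list(ref)
pred[1] = 1
dice, sens, spec = overlap_metrics(pred, ref)
# dice ≈ 88.9, sens = 100.0, spec ≈ 91.7
```

Note that with strongly imbalanced classes (background voxels vastly outnumber brain voxels in a full head volume), specificity is high for almost any reasonable mask, which is why all methods in the table exceed 97% on that measure while Dice separates them more clearly.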