J Cardiovasc Magn Reson. 2018 Sep 14;20:65. doi: 10.1186/s12968-018-0471-x

Table 4.

Qualitative visual assessment of automated segmentation

                               Agreement (%)              Disagreement (%)
                                              Auto. better   Man. better   Not sure
Analyst 1   Basal                   40.0          26.2           20.6         13.2
            Mid-ventricular         84.8          12.2            2.4          0.6
            Apical                  44.0          29.0           22.0          5.0
Analyst 2   Basal                   33.0          27.4           17.4         22.2
            Mid-ventricular         91.6           6.6            1.8          0.0
            Apical                  80.8           8.8            9.6          0.8

Two experienced image analysts visually compared the automated segmentation to the manual segmentation for 250 test subjects and assessed whether the two segmentations were in good agreement (visually close to each other). If the two disagreed, the analysts scored the case in one of three categories: automated segmentation performs better; manual segmentation performs better; not sure which one is better. The visual assessment was performed for basal, mid-ventricular and apical slices. The percentage of each score category is reported.
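
As an illustration of how such percentages can be tabulated from per-subject scores, a minimal Python sketch follows. The score labels, example counts and function name are assumptions chosen for illustration; they are not taken from the paper.

    from collections import Counter

    # Hypothetical per-subject scores for one analyst and one slice level.
    # Each entry is one of: "agree", "auto_better", "manual_better", "not_sure".
    scores = (["agree"] * 100 + ["auto_better"] * 66
              + ["manual_better"] * 52 + ["not_sure"] * 32)  # 250 subjects in total

    def score_percentages(scores):
        """Return the percentage of subjects falling into each score category."""
        counts = Counter(scores)
        n = len(scores)
        return {label: 100.0 * counts.get(label, 0) / n
                for label in ("agree", "auto_better", "manual_better", "not_sure")}

    print(score_percentages(scores))
    # {'agree': 40.0, 'auto_better': 26.4, 'manual_better': 20.8, 'not_sure': 12.8}

Each table row would be produced by applying such a tally to one analyst's scores for one slice level (basal, mid-ventricular or apical).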