eLife. 2021 Apr 8;10:e65894. doi: 10.7554/eLife.65894

Table 2. Comparison of segmentation IoU scores for different weight initialization methods versus the best results on each benchmark as reported in the publication presenting the segmentation task.

All IoU scores are the average of five independent runs. References listed after the benchmark names indicate the sources for the Reported IoU scores; benchmarks without a listed reference have no previously reported IoU score (marked with a dash). Random Init.: randomly initialized weights; IN-super: supervised pre-training on ImageNet; IN-moco: MoCoV2 self-supervised pre-training on ImageNet; CEM500K-moco: MoCoV2 self-supervised pre-training on CEM500K.

Benchmark                          Training Iterations  Random Init.  IN-super  IN-moco  CEM500K-moco  Reported
All Mitochondria                   10000                0.587         0.653     0.653    0.770         –
CREMI Synaptic Clefts              5000                 0.000         0.196     0.226    0.254         –
Guay (Guay et al., 2020)           1000                 0.308         0.275     0.300    0.429         0.417
Kasthuri++ (Casser et al., 2018)   10000                0.905         0.908     0.911    0.915         0.845
Lucchi++ (Casser et al., 2018)     10000                0.894         0.865     0.892    0.895         0.888
Perez (Perez et al., 2014)         2500                 0.672         0.886     0.883    0.901         0.821
    Lysosomes                                           0.842         0.838     0.816    0.849         0.726
    Mitochondria                                        0.130         0.860     0.866    0.884         0.780
    Nuclei                                              0.984         0.987     0.986    0.988         0.942
    Nucleoli                                            0.731         0.859     0.865    0.885         0.835
UroCell                            2500                 0.424         0.584     0.618    0.734         –
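
For context, the IoU scores in the table above compare a model's predicted segmentation mask against the ground-truth mask for each class. The snippet below is a minimal sketch of the metric using binary NumPy masks; the function name, toy masks, and per-run values are illustrative assumptions, not the evaluation code or data used for these benchmarks.

```python
import numpy as np

def iou_score(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over union (Jaccard index) of two binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    # Define IoU as 1.0 when both masks are empty (nothing to segment).
    return float(intersection / union) if union > 0 else 1.0

# Toy example: two overlapping square masks on a 64 x 64 grid.
pred = np.zeros((64, 64), dtype=bool)
target = np.zeros((64, 64), dtype=bool)
pred[10:30, 10:30] = True
target[15:35, 15:35] = True
print(f"IoU = {iou_score(pred, target):.3f}")  # IoU = 0.391

# Each benchmark score in the table is the mean IoU over five
# independently trained models (hypothetical per-run values below).
run_ious = [0.52, 0.55, 0.54, 0.53, 0.56]
print(f"mean IoU = {np.mean(run_ious):.3f}")  # mean IoU = 0.540
```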