BMC Bioinformatics. 2013 Nov 7;14:319. doi: 10.1186/1471-2105-14-319

Table 1. Evaluation of the three designated tools on the eight available datasets

| Algorithm / Dataset | Init (N = 28) | SN15 (N = 54) | Melanoma (N = 20) | TScratch (N = 24) | Scatter (N = 6) | Microfluidics (N = 13) | HEK293 (N = 12) | MDCK (N = 14) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| TScratch (Gebäck et al. 2009) | 0.96 (0.96) | 0.96 (0.97) | 0.88 (0.90) | **0.94** (0.93) | 0.47 (0.47) | 0.42 (0.41) | 0.90 (0.91) | 0.92 (0.93) |
| MultiCellSeg (Zaritsky et al. 2011) | **0.98** (0.98) | **0.97** (0.98) | 0.85 (0.91) | 0.93 (0.95) | 0.55 (0.56) | 0.35 (0.45) | **0.95** (0.95) | **0.96** (0.98) |
| Topman et al. 2011 | **0.98** (0.98) [0.97] | 0.95 (0.97) [0.96] | **0.93** (0.93) [0.93] | 0.78 (0.76) [0.84] | **0.58** (0.60) [0.52] | **0.63** (0.63) [0.61] | 0.85 (0.87) [0.84] | 0.89 (0.93) [0.93] |

The F-measure was used for evaluation in three forms: the mean F-measure over the images in each dataset, the median (shown in parentheses), and the mean after threshold adjustment on the training set, reported only for [10] (shown in square brackets).

The best mean F-measure for each dataset is marked in bold.
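For readers reproducing the comparison, the sketch below illustrates how such scores could be computed, assuming a pixel-wise F-measure (F1 score) between a predicted binary mask and its manually annotated ground truth, summarized per dataset by the mean and median over images. The helper names `f_measure` and `summarize_dataset` are hypothetical; this is not the evaluation code used in the benchmark.

```python
import numpy as np

def f_measure(pred, truth):
    """Pixel-wise F-measure (F1) between a predicted and a ground-truth binary mask.

    Assumes both masks have the same shape and encode foreground as nonzero/True.
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.count_nonzero(pred & truth)    # foreground predicted and annotated
    fp = np.count_nonzero(pred & ~truth)   # foreground predicted, background annotated
    fn = np.count_nonzero(~pred & truth)   # background predicted, foreground annotated
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2.0 * precision * recall / (precision + recall)

def summarize_dataset(pred_masks, truth_masks):
    """Mean and median F-measure over all images of one dataset."""
    scores = [f_measure(p, t) for p, t in zip(pred_masks, truth_masks)]
    return float(np.mean(scores)), float(np.median(scores))
```

The bracketed values for [10] would then correspond to repeating the same computation after tuning that method's binarization threshold on a held-out training set.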