2017 Dec 26;40(6):317–328. doi: 10.1016/j.bj.2017.09.001

Table 4.

Percentage agreement of semi-quantification by 10 technicians and by the algorithms compared to the final truth determined by manual reading of 150 plate images.

nBoot Reader Accuracy (%) CI Accuracy.pm1 (%) CI.pm1
10,000 Algorithms 78.0 (71.3, 84.6) 98.7 (96.7, 100.0)
10,000 Reader 1 84.7 (78.7, 90.0) 98.7 (96.7, 100.0)
10,000 Reader 2 80.0 (73.3, 86.0) 98.7 (96.7, 100.0)
10,000 Reader 3 81.3 (74.7, 87.3) 98.7 (96.7, 100.0)
10,000 Reader 4 83.3 (77.3, 88.7) 98.7 (96.7, 100.0)
10,000 Reader 5 79.3 (72.7, 86.0) 98.0 (95.3, 100.0)
10,000 Reader 6 82.7 (76.0, 88.7) 98.7 (96.7, 100.0)
10,000 Reader 7 78.0 (71.3, 84.7) 99.3 (98.0, 100.0)
10,000 Reader 8 82.0 (76.0, 88.0) 98.0 (95.3, 100.0)
10,000 Reader 9 77.3 (70.7, 84.0) 98.7 (96.7, 100.0)
10,000 Reader 10 82.0 (76.0, 88.0) 98.7 (96.7, 100.0)

Abbreviations: nBoot: number of bootstrap samples used to compute the 95% CI; CI: 95% confidence interval; Accuracy.pm1: accuracy with a ±1 log difference tolerance; CI.pm1: 95% confidence interval for Accuracy.pm1.
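
For illustration, the sketch below shows one way the table's quantities could be computed: exact percentage agreement against the final truth, agreement within a ±1 log tolerance, and percentile bootstrap 95% CIs from 10,000 resamples of the 150 plates. This is a minimal example, not the authors' code; the integer grade coding, variable names, and simulated reader data are assumptions made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def agreement(reader, truth, tolerance=0):
    """Percentage of plates whose reading is within `tolerance` log steps of the truth."""
    reader = np.asarray(reader)
    truth = np.asarray(truth)
    return 100.0 * np.mean(np.abs(reader - truth) <= tolerance)

def bootstrap_ci(reader, truth, tolerance=0, n_boot=10_000, alpha=0.05):
    """Percentile bootstrap CI for the agreement statistic (resampling plates with replacement)."""
    reader = np.asarray(reader)
    truth = np.asarray(truth)
    n = len(truth)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)   # resample plate indices with replacement
        stats[b] = agreement(reader[idx], truth[idx], tolerance)
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# Toy data: semi-quantitative grades coded as integer log categories (assumed 0-4 here).
truth = rng.integers(0, 5, size=150)                             # "final truth" for 150 plates
reader = np.clip(truth + rng.integers(-1, 2, size=150), 0, 4)    # a simulated reader

acc = agreement(reader, truth)                     # exact agreement (%)
acc_pm1 = agreement(reader, truth, tolerance=1)    # agreement within ±1 log (%)
ci = bootstrap_ci(reader, truth)                   # 95% CI for exact agreement
ci_pm1 = bootstrap_ci(reader, truth, tolerance=1)  # 95% CI for ±1 log agreement
print(f"Accuracy {acc:.1f} {ci}, Accuracy.pm1 {acc_pm1:.1f} {ci_pm1}")
```

With nBoot = 10,000 resamples, the percentile bootstrap reproduces the kind of interval shown in the table, e.g. an accuracy of 78.0% with a 95% CI of roughly (71.3, 84.6) for a comparable agreement rate on 150 plates.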