Br J Radiol. 2018 Sep 28;92(1093):20180691. doi: 10.1259/bjr.20180691

Table 7.

Inter-rater agreement for the MLO projections for both readers and the dCNN, compared with each other and with the consensus decision.

Each cell shows κ (95% CI) and the corresponding level of agreement.

| RMLO projections   | ACR A                            | ACR B                            | ACR C                            | ACR D                            | Overall                          |
|--------------------|----------------------------------|----------------------------------|----------------------------------|----------------------------------|----------------------------------|
| Reader 1/Reader 2  | 0.57 (0.37–0.77), Moderate       | 0.69 (0.55–0.85), Strong         | 0.40 (0.22–0.60), Fair           | 0.66 (0.48–0.85), Strong         | 0.67 (0.58–0.76), Strong         |
| Reader 1/Consensus | 0.81 (0.66–0.97), Almost perfect | 0.79 (0.66–0.92), Almost perfect | 0.75 (0.61–0.91), Strong         | 0.91 (0.82–1.00), Almost perfect | 0.87 (0.81–0.93), Almost perfect |
| Reader 2/Consensus | 0.76 (0.60–0.91), Strong         | 0.90 (0.80–0.99), Almost perfect | 0.61 (0.47–0.80), Strong         | 0.75 (0.60–0.92), Strong         | 0.80 (0.73–0.88), Strong         |
| dCNN/Consensus     | 0.84 (0.70–0.97), Almost perfect | 0.81 (0.69–0.93), Strong         | 0.87 (0.77–0.98), Almost perfect | 0.94 (0.86–1.00), Almost perfect | 0.91 (0.86–0.96), Almost perfect |

ACR, American College of Radiology; dCNN, deep convolutional neural network; MLO, medio-lateral oblique; RMLO, right medio-lateral oblique.
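
For orientation, a minimal sketch of how agreement values of this kind can be computed: unweighted Cohen's κ between two raters over the four ACR categories, with a percentile bootstrap 95% CI and verbal labels matching the bins apparently used in the table (Landis & Koch style). This is not the authors' code; the exact κ estimator and CI method used in the paper are not stated here, and the example ratings below are hypothetical.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def kappa_with_ci(ratings_a, ratings_b, n_boot=2000, alpha=0.05, seed=0):
    """Cohen's kappa between two raters plus a percentile bootstrap 95% CI."""
    ratings_a = np.asarray(ratings_a)
    ratings_b = np.asarray(ratings_b)
    kappa = cohen_kappa_score(ratings_a, ratings_b)

    rng = np.random.default_rng(seed)
    n = len(ratings_a)
    boot = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample cases with replacement
        boot.append(cohen_kappa_score(ratings_a[idx], ratings_b[idx]))
    lo, hi = np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return kappa, (lo, hi)

def agreement_label(kappa):
    """Verbal labels consistent with the bins used in Table 7 (assumed thresholds)."""
    if kappa <= 0.20:
        return "None to slight"
    if kappa <= 0.40:
        return "Fair"
    if kappa <= 0.60:
        return "Moderate"
    if kappa <= 0.80:
        return "Strong"
    return "Almost perfect"

# Hypothetical example: ACR density classes (A-D) assigned by two raters to 20 MLO views.
reader_1 = list("AABBCCDDABCDABCDABCD")
reader_2 = list("AABBCCDDABCDABCDABCC")
k, (lo, hi) = kappa_with_ci(reader_1, reader_2)
print(f"kappa = {k:.2f} (95% CI {lo:.2f}-{hi:.2f}), agreement: {agreement_label(k)}")
```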