BMC Med Res Methodol. 2008 May 9;8:29. doi: 10.1186/1471-2288-8-29

Table 5.

Simulated chart-specific inter-rater reliability coefficients

Simulated chart  Percentage agreement (%)  κ* (95% CI)       Sensitivity§ (95% CI)  Specificity§ (95% CI)
1                88                        0.75 (0.72–0.78)  0.92 (0.86–0.96)       0.90 (0.85–0.94)
2                91                        0.78 (0.75–0.82)  0.99 (0.91–1.00)       0.90 (0.85–0.93)
3                90                        0.70 (0.66–0.74)  0.84 (0.73–0.91)       0.96 (0.92–0.98)
4                88                        0.76 (0.73–0.79)  0.96 (0.90–0.99)       0.87 (0.82–0.91)
5                88                        0.74 (0.71–0.78)  0.91 (0.82–0.96)       0.88 (0.83–0.92)
6                85                        0.69 (0.65–0.73)  0.93 (0.85–0.97)       0.85 (0.79–0.90)
7                88                        0.76 (0.73–0.79)  0.91 (0.85–0.95)       0.91 (0.85–0.95)
8                86                        0.72 (0.69–0.76)  0.85 (0.77–0.90)       0.85 (0.79–0.90)

* Chi-square test statistic for homogeneity = 22, df = 7, p = 0.003

§ Sensitivity and specificity compare the assessments of all raters against the gold standard.

Abbreviations: κ, kappa statistic; CI, confidence interval
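As a minimal sketch of how the table's quantities relate to a 2×2 rater-vs-gold-standard cross-tabulation: percentage agreement is the proportion of concordant classifications, Cohen's κ corrects that agreement for chance using the marginal totals, and sensitivity and specificity are the row-conditional proportions. The counts below are invented for illustration only and do not reproduce any row of Table 5; the function name `agreement_stats` is likewise an assumption, not from the paper.

```python
# Illustrative only: statistics from a hypothetical 2x2 table
# (rater-positive/negative vs gold-standard-positive/negative).
# tp, fp, fn, tn are invented counts, not data from Table 5.

def agreement_stats(tp, fp, fn, tn):
    """Return (percent agreement, Cohen's kappa, sensitivity, specificity)."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n                      # observed agreement
    # chance-expected agreement from the marginal totals
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2
    kappa = (po - pe) / (1 - pe)            # chance-corrected agreement
    sens = tp / (tp + fn)                   # true positives found
    spec = tn / (tn + fp)                   # true negatives found
    return po * 100, kappa, sens, spec

pa, k, se, sp = agreement_stats(tp=45, fp=6, fn=4, tn=45)
print(f"agreement {pa:.0f}%  kappa {k:.2f}  sens {se:.2f}  spec {sp:.2f}")
# -> agreement 90%  kappa 0.80  sens 0.92  spec 0.88
```

Note that the paper's κ values are inter-rater coefficients, whereas this sketch computes κ between one rater and the gold standard for simplicity; the confidence intervals and the chi-square homogeneity test in the footnotes require additional variance formulas not shown here.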