2011 Jul 21;19:16. doi: 10.1186/2045-709X-19-16

Table 2.

Inter-rater reproducibility between trainee clinicians and highly trained raters within a single coding round, expressed as paired-comparison percentage agreement and kappa coefficients with 95% confidence intervals (95% CI)

Paired comparison                               Percentage agreement (95% CI)   Kappa (95% CI, p value)
Highly trained rater 1 × Trainee clinician 1    98.2% (97.5%-98.8%)             0.87 (0.82-0.91)
Highly trained rater 1 × Trainee clinician 2    97.8% (97.2%-98.5%)             0.85 (0.79-0.89)
Highly trained rater 1 × Trainee clinician 3    97.5% (96.8%-98.2%)             0.83 (0.77-0.87)
Mean                                            97.8% (97.0%-98.7%)

Highly trained rater 2 × Trainee clinician 1    98.1% (97.4%-98.7%)             0.87 (0.82-0.90)
Highly trained rater 2 × Trainee clinician 2    97.5% (96.8%-98.2%)             0.83 (0.77-0.87)
Highly trained rater 2 × Trainee clinician 3    98.6% (98.1%-99.2%)             0.91 (0.86-0.94)
Mean                                            98.1% (96.7%-99.5%)
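
The statistics reported in Table 2 can be computed from the raw paired codes. The sketch below is a minimal illustration, not the authors' exact procedure: it computes percentage agreement and Cohen's kappa for two raters and attaches approximate 95% confidence intervals using a normal approximation for agreement and the simple large-sample standard error for kappa. The function name `agreement_and_kappa` and the simulated codes are hypothetical.

```python
import numpy as np

def agreement_and_kappa(codes_a, codes_b):
    """Percentage agreement and Cohen's kappa for two raters' category codes.

    CIs are rough: a normal approximation for agreement and the simple
    large-sample standard error for kappa (not necessarily the method
    used for Table 2).
    """
    a, b = np.asarray(codes_a), np.asarray(codes_b)
    n = a.size

    # Observed agreement: proportion of items both raters coded identically
    po = np.mean(a == b)

    # Chance agreement from each rater's marginal category frequencies
    categories = np.union1d(a, b)
    pe = sum(np.mean(a == c) * np.mean(b == c) for c in categories)

    kappa = (po - pe) / (1 - pe)

    # Approximate 95% confidence intervals
    se_po = np.sqrt(po * (1 - po) / n)
    se_k = np.sqrt(po * (1 - po) / (n * (1 - pe) ** 2))
    return {
        "percent_agreement": 100 * po,
        "agreement_95ci": (100 * (po - 1.96 * se_po), 100 * (po + 1.96 * se_po)),
        "kappa": kappa,
        "kappa_95ci": (kappa - 1.96 * se_k, kappa + 1.96 * se_k),
    }

# Illustrative data only: one dominant category plus ~3% random disagreements
rng = np.random.default_rng(0)
truth = rng.choice(4, size=500, p=[0.85, 0.05, 0.05, 0.05])
rater_1 = truth.copy()
rater_2 = truth.copy()
flips = rng.random(500) < 0.03
rater_2[flips] = rng.integers(0, 4, size=flips.sum())
print(agreement_and_kappa(rater_1, rater_2))
```

Because kappa corrects raw agreement for the chance agreement implied by the marginal category frequencies, an observed agreement near 98% can coexist with a kappa around 0.85-0.90 when one category dominates the coding, which is the pattern seen in Table 2.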