2018 Dec 4;18:160. doi: 10.1186/s12874-018-0617-4

Table 3.

The inter-rater agreement between reviewers determining definitely and probably preventable events versus non-preventable events, by assessment method

Preventability assessment    Inter-rater agreement, kappa (95% CI), n = 1356
Best Practice-Based [4]      0.53 (0.48–0.59)
Error-Based [23]             0.55 (0.50–0.60)
Algorithm-Based [24]         0.55 (0.49–0.60)
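For readers unfamiliar with the statistic, the kappa values above measure agreement between two reviewers beyond what chance alone would produce. A minimal sketch of Cohen's kappa with a large-sample approximate 95% confidence interval is shown below; the counts are hypothetical and not taken from the study.

```python
import math

def cohens_kappa(table):
    """Cohen's kappa for a 2x2 rater-agreement table.

    table[i][j] = number of events rater 1 classed as category i
    and rater 2 classed as category j (e.g. 0 = preventable,
    1 = non-preventable).
    Returns (kappa, (ci_lower, ci_upper)).
    """
    n = sum(sum(row) for row in table)
    # observed proportion of agreement (diagonal of the table)
    po = sum(table[i][i] for i in range(2)) / n
    # marginal proportions for each rater
    row = [sum(table[i][j] for j in range(2)) / n for i in range(2)]
    col = [sum(table[i][j] for i in range(2)) / n for j in range(2)]
    # agreement expected by chance from the marginals
    pe = sum(row[k] * col[k] for k in range(2))
    kappa = (po - pe) / (1 - pe)
    # simple large-sample standard error approximation
    se = math.sqrt(po * (1 - po) / (n * (1 - pe) ** 2))
    return kappa, (kappa - 1.96 * se, kappa + 1.96 * se)

# hypothetical counts summing to n = 1356, chosen for illustration only
k, (lo, hi) = cohens_kappa([[400, 150], [160, 646]])
print(f"kappa = {k:.2f}, 95% CI ({lo:.2f}-{hi:.2f})")
```

A kappa in the 0.5 range, as reported for all three assessment methods, is conventionally interpreted as moderate agreement. Note that the standard error used here is a common textbook approximation; published analyses may use the exact Fleiss-Cohen variance instead.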