2022 Mar 29;48(5):4181–4188. doi: 10.1007/s00068-022-01959-2

Table 3.

Intraobserver reliability of the different classification systems

| Classification | Rater 1           | Rater 2           | Rater 3           |
|----------------|-------------------|-------------------|-------------------|
| AO             | 0.81 [0.74; 0.88] | 0.81 [0.74; 0.88] | 0.85 [0.78; 0.92] |
| Weber          | 0.86 [0.78; 0.95] | 0.93 [0.87; 0.99] | 0.91 [0.85; 0.98] |
| Herscovici     | 0.61 [0.52; 0.71] | 0.64 [0.55; 0.73] | 0.59 [0.50; 0.69] |
| Haraguchi      | 0.72 [0.63; 0.80] | 0.74 [0.66; 0.83] | 0.77 [0.69; 0.85] |
| Bartoníček     | 0.79 [0.72; 0.87] | 0.76 [0.68; 0.84] | 0.81 [0.74; 0.88] |
| Mason          | 0.63 [0.54; 0.72] | 0.64 [0.55; 0.74] | 0.65 [0.56; 0.75] |

Shown are the kappa values with 95% confidence intervals (CI) for each of the three raters
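As a minimal sketch of how intraobserver kappa values like those above can be computed, the snippet below calculates an unweighted Cohen's kappa between a rater's first and second reading and derives a percentile-bootstrap 95% CI. The rating data, the choice of unweighted kappa, and the bootstrap CI method are illustrative assumptions, not the statistical procedure used in the study.

```python
import random

def cohen_kappa(r1, r2):
    # Unweighted Cohen's kappa between two rating sessions of the same cases.
    n = len(r1)
    cats = sorted(set(r1) | set(r2))
    # Observed agreement: fraction of cases classified identically both times.
    po = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance agreement: product of marginal category frequencies.
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)
    if pe == 1.0:  # degenerate case: only one category used
        return 1.0
    return (po - pe) / (1 - pe)

def bootstrap_ci(r1, r2, n_boot=2000, alpha=0.05, seed=0):
    # Percentile bootstrap CI for kappa (an illustrative assumption,
    # not necessarily the CI method used in the paper).
    rng = random.Random(seed)
    n = len(r1)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]  # resample cases with replacement
        stats.append(cohen_kappa([r1[i] for i in idx], [r2[i] for i in idx]))
    stats.sort()
    return stats[int(alpha / 2 * n_boot)], stats[int((1 - alpha / 2) * n_boot) - 1]

# Hypothetical first and second readings by one rater (labels are made up).
first  = ["A", "B", "A", "C", "B", "A", "C", "B", "A", "B"]
second = ["A", "B", "A", "C", "A", "A", "C", "B", "B", "B"]
k = cohen_kappa(first, second)
lo, hi = bootstrap_ci(first, second)
print(f"kappa = {k:.2f} [{lo:.2f}; {hi:.2f}]")
```

The `[lo; hi]` output mirrors the bracketed CI notation used in the table above.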