
Table 3.

Inter-Rater Reliability: Agreement Among 38 Clinicians

                        TEAMSI                    NESA
                        Training   No training    Training   No training
TEAMSI    Training      0.21
          No training   0.26       0.26
NESA      Training      0.28       0.29           0.31
          No training   0.23       0.27           0.29       0.26

Values are mean kappas across all clinicians within each group. SEs for all four groups were equivalent (after rounding up) at 0.03.

Shaded values indicate whether our inter-rater agreement was better than, the same as, or worse than chance. Kappa is a measure of this difference, standardized to lie on a −1 to 1 scale: a value of one indicates perfect agreement, zero indicates exactly the agreement expected by chance, and negative values indicate agreement worse than chance. Our observed values of 0.21 to 0.31 indicate that agreement was better than chance, but far from strong.
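For reference, the excerpt does not spell out the statistic's definition; assuming the standard Cohen's kappa consistent with the scale described above, it is

\kappa = \frac{p_o - p_e}{1 - p_e}

where p_o is the observed proportion of agreement and p_e is the proportion of agreement expected by chance. Thus p_o = 1 gives kappa = 1, p_o = p_e gives kappa = 0, and p_o < p_e gives a negative kappa. As a purely illustrative example (the chance rate here is hypothetical, not taken from the study): if p_e were 0.5, a kappa of 0.26 would correspond to raters agreeing on p_o = 0.5 + 0.26 × (1 − 0.5) = 63% of cases.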

NESA, New England School of Acupuncture; SE, standard error; TEAMSI, Traditional East Asian Medicine Structure Interview.