
Table 2.

Inter- and intra-rater reliability calculated with Cohen’s Kappa.

| Coder comparison | κ (by words) | N words | κ (whole utterances) | N utterances | κ (split utterances) | N utterances |
|---|---|---|---|---|---|---|
| A vs. B | .699 | 222,539 | .770 | 16,249 | .714 | 16,837 |
| A vs. C | .715 | 222,035 | .783 | 16,770 | .722 | 17,181 |
| B vs. C | .687 | 221,395 | .760 | 16,404 | .705 | 16,842 |
| Multi-κ (inter-rater) | .700 | | .771 | | .714 | |
| A intra-rater | .829 | 67,866 | .865 | 5,402 | .820 | 5,304 |
| B intra-rater | .731 | 33,084 | .801 | 2,651 | .762 | 2,557 |
| C intra-rater | .761 | 69,850 | .826 | 5,475 | .777 | 5,389 |
| Multi-κ (intra-rater) | .774 | | .831 | | .786 | |
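The kappas above can in principle be reproduced from aligned label sequences for each coder. Below is a minimal sketch, assuming scikit-learn's `cohen_kappa_score` and treating the multi-κ rows as the mean of the pairwise (or per-coder) kappas, which is consistent with the reported values (e.g., (.699 + .715 + .687) / 3 ≈ .700). The coder names and behavior codes in the example are hypothetical, not taken from the study's coding scheme.

```python
# Sketch of pairwise Cohen's kappa and a mean "multi-kappa" summary,
# assuming each coder produced one categorical code per aligned unit
# (word or utterance). Codes below are hypothetical placeholders.
from itertools import combinations
from sklearn.metrics import cohen_kappa_score


def pairwise_kappas(codings):
    """Cohen's kappa for every pair of coders over aligned label sequences."""
    return {
        (a, b): cohen_kappa_score(codings[a], codings[b])
        for a, b in combinations(sorted(codings), 2)
    }


def mean_kappa(kappas):
    """Average of several kappas; one common way to summarize multiple raters."""
    return sum(kappas.values()) / len(kappas)


if __name__ == "__main__":
    # Toy word-level codes for three coders; real data would span ~220k words.
    codings = {
        "A": ["X1", "X1", "X2", "X3", "X4"],
        "B": ["X1", "X2", "X2", "X3", "X4"],
        "C": ["X1", "X1", "X2", "X3", "X2"],
    }
    pk = pairwise_kappas(codings)
    for pair, k in pk.items():
        print(pair, round(k, 3))
    print("multi-kappa (mean of pairwise):", round(mean_kappa(pk), 3))
```

Intra-rater values follow the same pattern, except that the two label sequences compared come from the same coder re-coding the same material at two time points.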