Implement Sci Commun. 2022 Dec 9;3:128. doi: 10.1186/s43058-022-00378-z

Table 5. Categorical concordance

| Cohen’s kappa (κ) | Categorical interpretation | Dissemination concordance, n (%) [M] | Implementation concordance, n (%) [M] |
|---|---|---|---|
| −1 | Perfect disagreement | | |
| −0.99 to −0.81 | Near perfect disagreement | | |
| −0.80 to −0.61 | Substantial disagreement | | |
| −0.60 to −0.41 | Moderate disagreement | 2 (2.0) [−0.47] | |
| −0.40 to −0.21 | Fair disagreement | 3 (3.0) [−0.29] | 6 (5.9) [−0.30] |
| −0.20 to −0.10 | Slight disagreement | 2 (2.0) [−0.15] | 8 (7.9) [−0.15] |
| −0.09 to 0.09 | No different from chance | 5 (4.9) [0.01] | 24 (23.8) [0.03] |
| 0.10 to 0.20 | Slight agreement | 10 (9.9) [0.13] | 23 (22.8) [0.15] |
| 0.21 to 0.40 | Fair agreement | 20 (19.8) [0.31] | 25 (24.7) [0.29] |
| 0.41 to 0.60 | Moderate agreement | 24 (23.8) [0.51] | 9 (8.9) [0.50] |
| 0.61 to 0.80 | Substantial agreement | 19 (18.8) [0.70] | 3 (3.0) [0.65] |
| 0.81 to 0.99 | Near perfect agreement | 14 (13.9) [0.88] | 1 (1.0) [0.85] |
| 1 | Perfect agreement | 2 (2.0) [1.0] | 2 (2.0) [1.0] |

Table 5 shows the seven-level categorization of Cohen’s kappa (κ) at the individual level, applied symmetrically to agreement and disagreement. Dissemination and implementation concordance were calculated for each participant and categorized. For each category, we report the number of participants (n), the percentage of the total (%), and the mean kappa [M].
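To make the binning concrete, here is a minimal Python sketch, not taken from the paper, that assigns an individual-level kappa to a Table 5 category and summarizes a list of kappas in the table’s n (%) [M] layout. The bin edges mirror the table; the function names and the assumption that kappas are reported to two decimal places are illustrative choices, not the authors’ method.

```python
# Minimal sketch: bin individual-level Cohen's kappa values into the
# Table 5 categories and summarize each bin as n (%) [M].
from statistics import mean

# (lower bound, upper bound, label), inclusive on both ends, in ascending
# order. Edges copied from Table 5.
KAPPA_BINS = [
    (-1.00, -1.00, "Perfect disagreement"),
    (-0.99, -0.81, "Near perfect disagreement"),
    (-0.80, -0.61, "Substantial disagreement"),
    (-0.60, -0.41, "Moderate disagreement"),
    (-0.40, -0.21, "Fair disagreement"),
    (-0.20, -0.10, "Slight disagreement"),
    (-0.09,  0.09, "No different from chance"),
    ( 0.10,  0.20, "Slight agreement"),
    ( 0.21,  0.40, "Fair agreement"),
    ( 0.41,  0.60, "Moderate agreement"),
    ( 0.61,  0.80, "Substantial agreement"),
    ( 0.81,  0.99, "Near perfect agreement"),
    ( 1.00,  1.00, "Perfect agreement"),
]

def categorize(kappa: float) -> str:
    """Return the Table 5 label for a single kappa value.

    Rounding to two decimals is an assumption made here so that values
    falling between adjacent bin edges (e.g., -0.095) land in a bin.
    """
    k = round(kappa, 2)
    for lo, hi, label in KAPPA_BINS:
        if lo <= k <= hi:
            return label
    raise ValueError(f"kappa outside [-1, 1]: {kappa}")

def summarize(kappas: list[float]) -> dict[str, tuple[int, float, float]]:
    """Map each category to (n, percent of total, mean kappa)."""
    groups: dict[str, list[float]] = {}
    for k in kappas:
        groups.setdefault(categorize(k), []).append(k)
    total = len(kappas)
    return {label: (len(v), 100 * len(v) / total, mean(v))
            for label, v in groups.items()}
```

For example, `summarize([0.51, 0.88, -0.15, 1.0])` returns each category with its count, share of the total, and mean kappa, matching the n (%) [M] columns of the table.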