Front Psychol. 2016 Nov 2;7:1629. doi: 10.3389/fpsyg.2016.01629

Table 6.

Range of interrater agreement, using Cohen's kappa (κ), for listener (n = 239) and randomly generated (n = 239) ratings of the 24 specific statements, compared with the performers' and the commenting listener's ratings.

| | Minimum listener κ | Minimum random κ | Maximum listener κ | Maximum random κ | Number of listeners with κ > random |
|---|---|---|---|---|---|
| Sax player | −0.339 | −0.390 | 0.541 | 0.405 | 11 |
| Pianist | −0.296 | −0.226 | 0.318 | 0.400 | 0 |
| Commenting listener | −0.316 | −0.213 | 0.600 | 0.214 | 68 |

Kappas are calculated by collapsing the 5-point scale into three categories (Agree, Neutral, Disagree): ratings of 4 and 5 (Agree and Strongly Agree) are collapsed into "Agree," and ratings of 1 and 2 (Strongly Disagree and Disagree) into "Disagree." Ratings of "don't understand" are treated as missing data.
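
The note describes a reproducible computation. The sketch below (Python, not the authors' code) shows one way to implement it with scikit-learn's cohen_kappa_score: collapse each 5-point rating into the three categories, drop "don't understand" responses as missing, and compute κ over the remaining rating pairs. The function names, the example ratings, and the "DU" placeholder for "don't understand" are illustrative assumptions.

```python
from sklearn.metrics import cohen_kappa_score

def collapse(rating):
    """Map a 5-point rating to three categories; None marks missing data."""
    if rating in (4, 5):   # Agree, Strongly Agree
        return "Agree"
    if rating in (1, 2):   # Strongly Disagree, Disagree
        return "Disagree"
    if rating == 3:        # midpoint of the 5-point scale
        return "Neutral"
    return None            # "don't understand" is treated as missing

def kappa(rater_a, rater_b):
    """Cohen's kappa over statement pairs where both responses are usable."""
    pairs = [(collapse(a), collapse(b)) for a, b in zip(rater_a, rater_b)]
    pairs = [(a, b) for a, b in pairs if a is not None and b is not None]
    xs, ys = zip(*pairs)
    return cohen_kappa_score(xs, ys)

# Hypothetical ratings for the 24 specific statements
# ("DU" stands in for a "don't understand" response).
listener_ratings  = [5, 4, 3, 1, 2, 4, "DU", 5, 3, 2, 4, 1] * 2
performer_ratings = [4, 4, 3, 2, 1, 5, 3, 4, "DU", 2, 5, 1] * 2
print(round(kappa(listener_ratings, performer_ratings), 3))
```

Under these assumptions, each listener's κ against the performer (or commenting listener) would be computed this way, and the same routine applied to the randomly generated ratings yields the "random κ" comparison columns in the table.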