Table 3.
| McGraw and Wong (1996) Convention^a | Shrout and Fleiss (1979) Convention^b | Formulas for Calculating ICC^c |
|---|---|---|
| One-way random effects, absolute agreement, single rater/measurement | ICC (1,1) | (MSR - MSW) / [MSR + (k - 1)MSW] |
| Two-way random effects, consistency, single rater/measurement | – | (MSR - MSE) / [MSR + (k - 1)MSE] |
| Two-way random effects, absolute agreement, single rater/measurement | ICC (2,1) | (MSR - MSE) / [MSR + (k - 1)MSE + (k/n)(MSC - MSE)] |
| Two-way mixed effects, consistency, single rater/measurement | ICC (3,1) | (MSR - MSE) / [MSR + (k - 1)MSE] |
| Two-way mixed effects, absolute agreement, single rater/measurement | – | (MSR - MSE) / [MSR + (k - 1)MSE + (k/n)(MSC - MSE)] |
| One-way random effects, absolute agreement, multiple raters/measurements | ICC (1,k) | (MSR - MSW) / MSR |
| Two-way random effects, consistency, multiple raters/measurements | – | (MSR - MSE) / MSR |
| Two-way random effects, absolute agreement, multiple raters/measurements | ICC (2,k) | (MSR - MSE) / [MSR + (MSC - MSE)/n] |
| Two-way mixed effects, consistency, multiple raters/measurements | ICC (3,k) | (MSR - MSE) / MSR |
| Two-way mixed effects, absolute agreement, multiple raters/measurements | – | (MSR - MSE) / [MSR + (MSC - MSE)/n] |
ICC, intraclass correlation coefficient.
^a McGraw and Wong^18 defined 10 forms of ICC based on the model (1-way random effects, 2-way random effects, or 2-way mixed effects), the type (single rater/measurement or the mean of k raters/measurements), and the definition of relationship considered to be important (consistency or absolute agreement). In SPSS, ICC calculation is based on the terminology of McGraw and Wong.
^b Shrout and Fleiss^19 defined 6 forms of ICC, denoted by 2 numbers in parentheses [eg, ICC (2,1)]. The first number refers to the model (1, 2, or 3), and the second number refers to the type: either a single rater/measurement (1) or the mean of k raters/measurements (k).
^c This column is intended for researchers only. MSR = mean square for rows; MSW = mean square for within-subject (residual) sources of variance; MSE = mean square for error; MSC = mean square for columns; n = number of subjects; k = number of raters/measurements.
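For readers who want to reproduce the formulas above outside SPSS, the sketch below computes the ANOVA mean squares defined in footnote c from a complete n × k score matrix and applies the 6 distinct formulas in the table (the consistency and absolute-agreement formulas are shared by the 2-way random- and mixed-effects models, which is why only 6 expressions appear). This is a minimal NumPy illustration under the assumption of no missing ratings; the function name `icc_from_matrix` and the example scores are hypothetical, and a dedicated routine such as `pingouin.intraclass_corr` is preferable when confidence intervals are also needed.

```python
import numpy as np

def icc_from_matrix(X):
    """Apply the Table 3 formulas to a complete n-subjects x k-raters matrix.

    Returns the 6 distinct ICC forms; each 2-way formula applies to both the
    random-effects and mixed-effects models, as the table shows.
    """
    X = np.asarray(X, dtype=float)
    n, k = X.shape
    grand = X.mean()
    row_means = X.mean(axis=1)            # subject (row) means
    col_means = X.mean(axis=0)            # rater (column) means

    # Sums of squares for the one-way and two-way ANOVA decompositions
    SST = np.sum((X - grand) ** 2)                 # total
    SSR = k * np.sum((row_means - grand) ** 2)     # rows (subjects)
    SSC = n * np.sum((col_means - grand) ** 2)     # columns (raters)

    # Mean squares named in footnote c
    MSR = SSR / (n - 1)                            # mean square for rows
    MSC = SSC / (k - 1)                            # mean square for columns
    MSE = (SST - SSR - SSC) / ((n - 1) * (k - 1))  # mean square for error
    MSW = (SST - SSR) / (n * (k - 1))              # within-subject (residual) mean square

    return {
        "1-way random, absolute agreement, single":
            (MSR - MSW) / (MSR + (k - 1) * MSW),
        "2-way (random or mixed), consistency, single":
            (MSR - MSE) / (MSR + (k - 1) * MSE),
        "2-way (random or mixed), absolute agreement, single":
            (MSR - MSE) / (MSR + (k - 1) * MSE + (k / n) * (MSC - MSE)),
        "1-way random, absolute agreement, mean of k":
            (MSR - MSW) / MSR,
        "2-way (random or mixed), consistency, mean of k":
            (MSR - MSE) / MSR,
        "2-way (random or mixed), absolute agreement, mean of k":
            (MSR - MSE) / (MSR + (MSC - MSE) / n),
    }

# Hypothetical example: 5 subjects scored by 3 raters
scores = [[9, 10, 8],
          [6, 7, 5],
          [8, 8, 9],
          [7, 6, 6],
          [10, 9, 9]]
for label, value in icc_from_matrix(scores).items():
    print(f"{label}: {value:.3f}")
```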