J Chiropr Med. 2016 Mar 31;15(2):155–163. doi: 10.1016/j.jcm.2016.02.012

Table 3.

Equivalent ICC Forms Between Shrout and Fleiss (1979) and McGraw and Wong (1996)

McGraw and Wong (1996) Convention (a) | Shrout and Fleiss (1979) Convention (b) | Formulas for Calculating ICC (c)
One-way random effects, absolute agreement, single rater/measurement | ICC (1,1) | (MSR - MSW) / [MSR + (k - 1)MSW]
Two-way random effects, consistency, single rater/measurement | (none) | (MSR - MSE) / [MSR + (k - 1)MSE]
Two-way random effects, absolute agreement, single rater/measurement | ICC (2,1) | (MSR - MSE) / [MSR + (k - 1)MSE + (k/n)(MSC - MSE)]
Two-way mixed effects, consistency, single rater/measurement | ICC (3,1) | (MSR - MSE) / [MSR + (k - 1)MSE]
Two-way mixed effects, absolute agreement, single rater/measurement | (none) | (MSR - MSE) / [MSR + (k - 1)MSE + (k/n)(MSC - MSE)]
One-way random effects, absolute agreement, multiple raters/measurements | ICC (1,k) | (MSR - MSW) / MSR
Two-way random effects, consistency, multiple raters/measurements | (none) | (MSR - MSE) / MSR
Two-way random effects, absolute agreement, multiple raters/measurements | ICC (2,k) | (MSR - MSE) / [MSR + (MSC - MSE)/n]
Two-way mixed effects, consistency, multiple raters/measurements | ICC (3,k) | (MSR - MSE) / MSR
Two-way mixed effects, absolute agreement, multiple raters/measurements | (none) | (MSR - MSE) / [MSR + (MSC - MSE)/n]

ICC, intraclass correlation coefficient.

(a) McGraw and Wong18 defined 10 forms of ICC based on the model (1-way random effects, 2-way random effects, or 2-way mixed effects), the type (single rater/measurement or the mean of k raters/measurements), and the definition of the relationship considered to be important (consistency or absolute agreement). In SPSS, ICC calculation is based on the terminology of McGraw and Wong.

(b) Shrout and Fleiss19 defined 6 forms of ICC, and they are presented as 2 numbers in parentheses [eg, ICC (2,1)]. The first number refers to the model (1, 2, or 3), and the second number refers to the type, which is either a single rater/measurement (1) or the mean of k raters/measurements (k).

(c) This column is intended for researchers only. MSR = mean square for rows; MSW = mean square for residual sources of variance; MSE = mean square for error; MSC = mean square for columns; n = number of subjects; k = number of raters/measurements.
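To make the table concrete, the sketch below (not part of the original article; the function names icc_mean_squares and icc_forms and the example ratings matrix are illustrative assumptions) computes the ANOVA mean squares defined in footnote (c) from an n x k matrix of ratings and then evaluates the Table 3 formulas that carry Shrout and Fleiss labels.

```python
import numpy as np

def icc_mean_squares(data):
    """Compute the ANOVA mean squares used in Table 3.

    `data` is an n x k matrix: n subjects (rows) by k raters/measurements (columns).
    Returns MSR (rows), MSC (columns), MSE (error), MSW (within-subject), n, and k.
    """
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand_mean = data.mean()
    row_means = data.mean(axis=1)
    col_means = data.mean(axis=0)

    ss_rows = k * ((row_means - grand_mean) ** 2).sum()
    ss_cols = n * ((col_means - grand_mean) ** 2).sum()
    ss_total = ((data - grand_mean) ** 2).sum()
    ss_error = ss_total - ss_rows - ss_cols               # residual for the two-way models
    ss_within = ((data - row_means[:, None]) ** 2).sum()  # within-subject, one-way model

    MSR = ss_rows / (n - 1)
    MSC = ss_cols / (k - 1)
    MSE = ss_error / ((n - 1) * (k - 1))
    MSW = ss_within / (n * (k - 1))
    return MSR, MSC, MSE, MSW, n, k

def icc_forms(data):
    """Evaluate the Table 3 formulas that have Shrout and Fleiss labels."""
    MSR, MSC, MSE, MSW, n, k = icc_mean_squares(data)
    return {
        "ICC(1,1)": (MSR - MSW) / (MSR + (k - 1) * MSW),
        "ICC(2,1)": (MSR - MSE) / (MSR + (k - 1) * MSE + (k / n) * (MSC - MSE)),
        "ICC(3,1)": (MSR - MSE) / (MSR + (k - 1) * MSE),
        "ICC(1,k)": (MSR - MSW) / MSR,
        "ICC(2,k)": (MSR - MSE) / (MSR + (MSC - MSE) / n),
        "ICC(3,k)": (MSR - MSE) / MSR,
    }

# Illustrative data only: 6 subjects each rated by 3 raters.
ratings = [[9, 2, 5], [6, 1, 3], [8, 4, 6], [7, 1, 2], [10, 5, 6], [6, 2, 4]]
for label, value in icc_forms(ratings).items():
    print(f"{label}: {value:.3f}")
```

Note how the consistency forms, ICC (3,1) and ICC (3,k), omit the MSC term from the denominator, whereas the absolute-agreement forms retain it, reflecting whether systematic differences between raters (columns) are counted as error.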