. Author manuscript; available in PMC: 2021 Jul 1.
Published in final edited form as: J Pain. 2019 Dec 11;21(7-8):858–868. doi: 10.1016/j.jpain.2019.11.013

Table 4:

Test-retest and inter-examiner reliability of ordinal measures

| Modality | Kappa, Exam 1A vs. 1B¹ (n=32) | % Inter-Examiner Agreement | Kappa, Exam 1A vs. 2A¹ (n=31) | % Test-Retest Agreement |
|---|---|---|---|---|
| Light Brush | .74* (.50, .98) | 87% | .93* (.79, 1) | 87% |
| Vibration | .58* (.33, .83) | 74% | .40* (.15, .65) | 60% |
| Cool | .42* (.17, .67) | 66% | .74* (.52, .96) | 84% |
| Warm | .40* (.13, .67) | 63% | .59* (.34, .84) | 74% |
| Pinprick | .63* (.39, .87) | 81% | .55* (.31, .79) | 80% |
| Cold | .53* (.29, .77) | 69% | .52* (.27, .77) | 71% |
| Heat | .55* (.30, .80) | 72% | .47* (.22, .72) | 68% |
| Mean | .55* (.30, .80) | 73% | .60* (.35, .85) | 75% |
1. Kappa statistic for extent of agreement for ordinal measures, with the upper and lower bounds of the 95% confidence interval in parentheses.

* p values < .01.
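For readers unfamiliar with the agreement statistics in this table, the following is a minimal sketch of how Cohen's kappa and percent agreement can be computed from two examiners' ratings. The rating data here are invented for illustration; this is not the authors' analysis code, which would also include confidence intervals and handling of ordinal weighting.

```python
from collections import Counter

def cohen_kappa(ratings1, ratings2):
    """Unweighted Cohen's kappa for two raters' categorical ratings.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is agreement expected by chance from the marginal totals.
    """
    n = len(ratings1)
    # Observed proportion of exact agreement
    p_o = sum(a == b for a, b in zip(ratings1, ratings2)) / n
    # Chance agreement from each rater's marginal category frequencies
    c1, c2 = Counter(ratings1), Counter(ratings2)
    p_e = sum(c1[k] * c2.get(k, 0) for k in c1) / (n * n)
    return (p_o - p_e) / (1 - p_e)

def percent_agreement(ratings1, ratings2):
    """Raw percent agreement, as in the '% Agreement' columns."""
    return 100 * sum(a == b for a, b in zip(ratings1, ratings2)) / len(ratings1)

# Hypothetical ordinal ratings (e.g., 0 = normal, 1 = abnormal) for 4 sites
exam_a = [0, 0, 1, 1]
exam_b = [0, 1, 1, 1]
print(cohen_kappa(exam_a, exam_b))        # 0.5
print(percent_agreement(exam_a, exam_b))  # 75.0
```

Note that kappa corrects raw agreement for chance: the two examiners above agree on 75% of sites, but the kappa of .50 reflects that half of the expected agreement would occur by chance alone. This is why a row can show high percent agreement alongside a more modest kappa.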