Published in final edited form as: Arthritis Care Res (Hoboken). 2016 Nov;68(11):1579–1590. doi: 10.1002/acr.22984

Table 4.

Performance measure score concurrent validity for numerators of RA electronic clinical quality measures.

| Measure | Site | N | κ | Sensitivity, % (95% CI) | Specificity, % (95% CI) |
| --- | --- | --- | --- | --- | --- |
| Assessment of Disease Activity | Site 1 | 70 | 1.00 | 100 | 100 |
| | Site 2 | 34 | 0.17 | 44 (25, 66) | 86 (42, 100) |
| | Site 3 | 117 | 0.98 | 99 (92, 100) | 100 (92, 100) |
| | Overall | 221 | 0.84 | 88 (82, 93) | 99 (94, 100) |
| Functional Status Assessment | Site 1 | 81 | 1.00 | 100 | 100 |
| | Site 2 | 34 | NA* | 3 (0.1, 15) | Undefined* |
| | Site 3 | 117 | 0.73 | 96.4 (91, 99) | 100.0 (54, 100) |
| | Overall | 232 | 0.54 | 80.4 (74, 86) | 100 (89, 100) |
| DMARD Therapy | Site 1 | 81 | 1.00 | 100 | 100 |
| | Site 2 | 34 | NA* | 94 (80, 99) | Undefined* |
| | Site 3 | 58 | NA* | 98 (91, 100) | Undefined* |
| | Overall | 173 | 0.85 | 98 (95, 100) | 100 (66, 100) |
| Tuberculosis Screening | Site 1 | 62 | 1.00 | 100 | 100 |
| | Site 2 | 35 | 0.85 | 93 (76, 99) | 100 (63, 100) |
| | Site 3 | 36 | 0.33 | 69 (50, 84) | 100 (40, 100) |
| | Overall | 133 | 0.73 | 89 (82, 94) | 100 (84, 100) |

κ = kappa, a measure of agreement (in these analyses, κ = 1.0 when the automated EHR data query agrees exactly with data obtained through manual chart extraction, and κ = 0 when observed agreement is entirely attributable to chance); RA = rheumatoid arthritis; DMARD = disease-modifying anti-rheumatic drug; EHR = electronic health record; CI = confidence interval; NA = not available (see footnote).
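For orientation, Cohen's kappa compares the observed proportion of agreement $p_o$ with the agreement $p_e$ expected by chance from the marginal totals; this standard formulation is given here for reference and is not reproduced from the paper:

$$\kappa = \frac{p_o - p_e}{1 - p_e}$$

When $p_e = 1$ (both methods classify every record identically), the denominator vanishes and κ is undefined, which is the situation flagged in the asterisked footnote below.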

Site 1 had perfect agreement on all measures. The record sample at Site 2 differed for tuberculosis screening, and the record sample at Site 3 differed for DMARD therapy and tuberculosis screening.

* Some kappas were undefined because there were no true negatives; this is an inherent limitation of the kappa statistic, which requires variation in the observed classifications to produce a result. Similarly, specificity was undefined in instances where the denominator of the specificity calculation (false positives + true negatives) was zero.
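To illustrate these undefined cases, here is a minimal Python sketch (not from the paper; the function name and the counts are hypothetical) that computes sensitivity, specificity, and Cohen's kappa from a 2×2 confusion matrix and returns None where a denominator is zero:

```python
def diagnostic_stats(tp, fp, fn, tn):
    """Return (sensitivity, specificity, kappa); None where undefined."""
    n = tp + fp + fn + tn
    # Sensitivity is undefined when there are no condition-positive records.
    sensitivity = tp / (tp + fn) if (tp + fn) else None
    # Specificity is undefined when FP + TN = 0, i.e., neither method
    # classified any record as negative -- the situation in the footnote.
    specificity = tn / (fp + tn) if (fp + tn) else None
    # Observed agreement and chance-expected agreement (Cohen's kappa).
    p_o = (tp + tn) / n
    p_e = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    # Kappa is undefined when p_e = 1: both methods classify every record
    # identically, so there is no variation against which to measure agreement.
    kappa = (p_o - p_e) / (1 - p_e) if p_e != 1 else None
    return sensitivity, specificity, kappa

# Hypothetical counts: the query and the chart review both flag all 34
# records, so there are no true negatives; specificity and kappa are undefined.
print(diagnostic_stats(tp=34, fp=0, fn=0, tn=0))  # (1.0, None, None)
```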