TABLE I.
Reliability of Classification Systems for Osteonecrosis of the Femoral Head
Study | Classification System* | Measure of Reliability | Result† |
Schmitt-Sody et al.38 (2008) | Ficat | Interobserver reliability | 0.37 (0.23-0.70) |
 | | Intraobserver reliability | 0.50 (0.29-0.71) |
 | ARCO | Interobserver reliability | 0.35 (0.06-0.56) |
 | | Intraobserver reliability | 0.44 (0.26-0.56) |
Smith et al.39 (1996) | Ficat | Interobserver reliability | 0.46 (0.30-0.67) |
 | | Intraobserver reliability | 0.59 (0.44-0.73) |
Kay et al.40 (1994) | Ficat | Interobserver variability | 0.56 ± 0.01 |
 | | Intraobserver variability | 0.82 ± 0.16 |
*ARCO = Association Research Circulation Osseous.
†Data are presented as the mean kappa value (range) or as the mean kappa value ± standard deviation. According to the guidelines of Svanholm et al.187, kappa values of <0.5 indicate poor agreement, values between 0.5 and 0.75 indicate fair agreement, and values of >0.75 indicate excellent agreement.