2021 Apr 13;156(4):607–619. doi: 10.1093/ajcp/aqaa275

Table 4.

Comparisons Between Different Methods of Ki-67 Measurement^a

| Comparison Pair | Pearson Correlation Coefficient | Closeness of Match: No. (%) of Cases Within ±0.2 × Ki-67 Index | No. of Grade-Discordant Cases | κ |
| --- | --- | --- | --- | --- |
| **Manual counting vs manual counting** | | | | |
| Observer 1 (DP) vs observer 2 (Ob 2) (case level) | 0.976 | 10/20 (50) | 4 | 1 |
| Observer 1 (DP) vs observer 1 (Ob 1) (case level) | 0.977 | 12/20 (60) | 4 | 0.63 |
| Observer 1 (Ob 1) vs observer 2 (Ob 2) (case level) | 0.952 | 10/20 (50) | 0 | 0.63 |
| **Manual counting vs DIA** | | | | |
| Observer 2 (Ob 2) vs HALO (Ob 2) (hotspot level) | 0.971 | 11/20 (55) | 2 | 0.81 |
| Observer 2 (Ob 2) vs HALO (DP) (case level) | 0.949 | 7/20 (35) | 2 | 0.81 |
| Observer 2 (Ob 2) vs QuantCenter (DP) (case level) | 0.978 | 10/20 (50) | 2 | 0.81 |
| Observer 1 (Ob 1) vs HALO (Ob 2) (case level) | 0.902 | 7/20 (35) | 2 | 0.81 |
| Observer 1 (Ob 1) vs HALO (DP) (case level) | 0.881 | 7/20 (35) | 4 | 0.81 |
| Observer 1 (Ob 1) vs QuantCenter (DP) (case level) | 0.946 | 8/20 (40) | 4 | 0.81 |
| Observer 1 (DP) vs HALO (Ob 2) (case level) | 0.942 | 9/20 (45) | 2 | 0.63 |
| Observer 1 (DP) vs HALO (DP) (hotspot level) | 0.922 | 9/20 (45) | 2 | 0.63 |
| Observer 1 (DP) vs QuantCenter (DP) (hotspot level) | 0.976 | 13/20 (65) | 2 | 0.63 |
| **DIA vs DIA** | | | | |
| QuantCenter (DP) vs HALO (DP) (hotspot level) | 0.953 | 10/20 (50) | 2 | 0.81 |
| QuantCenter (DP) vs HALO (Ob 2) (case level) | 0.969 | 9/20 (45) | 0 | 1 |
| HALO (DP) vs HALO (Ob 2) (case level) | 0.980 | 6/20 (30) | 2 | 0.81 |

DIA, digital image analysis; DP, digital image analysis pathologist; Ob, observer.

^a The comparison pairs are grouped under three headings. For each method, the label in the leftmost column names the observer or software performing the count, followed in parentheses by the person who chose the hotspot. The grade-discordant cases column counts the cases assigned to different World Health Organization grades (G1/G2/G3) by the two methods in each comparison, and κ is the Cohen κ statistic for the pair.
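For readers who want to reproduce the table's agreement statistics from raw Ki-67 readings, the sketch below computes all three metrics: the Pearson correlation coefficient, the ±0.2 × Ki-67 closeness criterion (interpreted here as the two readings agreeing within 20% of the pair mean; the paper's exact reference value may differ), and Cohen's κ on WHO grades. The grade cutoffs (G1 < 3%, G2 3–20%, G3 > 20%) are assumed from the WHO classification of gastroenteropancreatic neuroendocrine tumors, and all function names are illustrative, not from the paper.

```python
from math import sqrt

def who_grade(ki67):
    """Map a Ki-67 index (%) to a WHO grade.
    Cutoffs (G1 < 3, G2 3-20, G3 > 20) assumed from the WHO
    GEP-NET classification; verify against the paper's criteria."""
    if ki67 < 3:
        return "G1"
    if ki67 <= 20:
        return "G2"
    return "G3"

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def n_close(x, y, tol=0.2):
    """Count pairs whose readings differ by no more than tol x the
    pair mean (one reading of the +/-0.2 x Ki-67 criterion)."""
    return sum(abs(a - b) <= tol * (a + b) / 2 for a, b in zip(x, y))

def cohen_kappa(g1, g2):
    """Unweighted Cohen kappa for two equal-length label sequences."""
    cats = sorted(set(g1) | set(g2))
    n = len(g1)
    p_obs = sum(a == b for a, b in zip(g1, g2)) / n
    p_exp = sum((g1.count(c) / n) * (g2.count(c) / n) for c in cats)
    return 1.0 if p_exp == 1 else (p_obs - p_exp) / (1 - p_exp)
```

With two observers' case-level Ki-67 indices in `obs1` and `obs2`, a row of the table would be reproduced as `pearson(obs1, obs2)`, `n_close(obs1, obs2)` out of 20, and `cohen_kappa([who_grade(k) for k in obs1], [who_grade(k) for k in obs2])`.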