TABLE 2.
All subjects
---|---|---|---|---|---|---|---|
Version 2 (n = 4318) | | | | Version 3 (n = 4318) | | | |
Test | CDR 0 vs 0.5 | CDR 0 vs 1 | CDR 0.5 vs 1 | Test | CDR 0 vs 0.5 | CDR 0 vs 1 | CDR 0.5 vs 1 |
NACCMMSE | 0.780 | 0.956 | 0.803 | MOCATOTS | 0.794 | 0.966 | 0.814 |
Total Mini-Mental State Exam (MMSE) score | (0.765, 0.794) | (0.944, 0.969) | (0.777, 0.829) | MoCA total raw score | (0.779, 0.808) | (0.956, 0.976) | (0.789, 0.838) |
LOGIMEM | 0.777 | 0.927 | 0.741 | CRAFTVRS | 0.750 | 0.918 | 0.749 |
Immediate story recall | (0.763, 0.792) | (0.911, 0.944) | (0.711, 0.77) | Immediate craft story recall (verbatim scoring) | (0.734, 0.765) | (0.902, 0.934) | (0.72, 0.777) |
CRAFTURS | 0.764 | 0.921 | 0.751 | ||||
Immediate craft story recall (paraphrase scoring) | (0.749, 0.779) | (0.905, 0.938) | (0.722, 0.779) | ||||
MEMUNITS | 0.796 | 0.940 | 0.735 | CRAFTDVR | 0.772 | 0.947 | 0.784 |
Delayed story recall | (0.781, 0.81) | (0.924, 0.957) | (0.706, 0.764) | Delayed craft story recall (verbatim scoring) | (0.757, 0.787) | (0.932, 0.962) | (0.757, 0.811) |
CRAFTDRE | 0.782 | 0.949 | 0.788** | ||||
Delayed craft story recall (paraphrase scoring) | (0.767, 0.797) | (0.935, 0.964) | (0.761, 0.815) | ||||
DIGIF | 0.655 | 0.738 | 0.609 | DIGFORCT | 0.631 | 0.737 | 0.634 |
Digit span forward (correct trials) | (0.638, 0.672) | (0.71, 0.766) | (0.576, 0.641) | Forward number span test (correct trials) | (0.613, 0.649) | (0.708, 0.766) | (0.600, 0.668) |
DIGFORSL | 0.633 | 0.734 | 0.629 | ||||
Forward number span test (longest span) | (0.615, 0.651) | (0.704, 0.763) | (0.595, 0.664) | ||||
DIGIB | 0.669 | 0.786 | 0.644 | DIGBACCT | 0.666 | 0.799 | 0.668 |
Digit span backward (correct trials) | (0.653, 0.686) | (0.76, 0.812) | (0.612, 0.677) | Backward number span test (correct trials) | (0.648, 0.683) | (0.773, 0.825) | (0.635, 0.701) |
DIGBACLS | 0.653 | 0.790 | 0.667 | ||||
Backward number span test (longest span) | (0.636, 0.671) | (0.763, 0.816) | (0.634, 0.700) | ||||
BOSTON | 0.715 | 0.836 | 0.671 | MINTTOTS | 0.703 | 0.854 | 0.695 |
Boston Naming Test (total score) | (0.699, 0.731) | (0.812, 0.86) | (0.638, 0.704) | Multilingual naming test (MINT) (total score) | (0.686, 0.719) | (0.831, 0.877) | (0.662, 0.729) |
WAIS | 0.721 | 0.877 | 0.722 | UDSVERFC | 0.643 | 0.776 | 0.658 |
WAIS‐R Digit Symbol | (0.705, 0.737) | (0.856, 0.898) | (0.691, 0.753) | Number of correct F‐words generated | (0.625, 0.66) | (0.749, 0.802) | (0.626, 0.691) |
UDSVERLC | 0.638 | 0.778 | 0.661 | ||||
Number of correct L‐words generated | (0.62, 0.655) | (0.751, 0.805) | (0.628, 0.693) | ||||
UDSVERTN | 0.644 | 0.783 | 0.661 | ||||
Number of correct F‐words and L‐words | (0.627, 0.662) | (0.756, 0.809) | (0.629, 0.694) | ||||
UDSBENTC | 0.633 | 0.755 | 0.647 | ||||
Total score for copy of Benson figure | (0.615, 0.651) | (0.726, 0.784) | (0.613, 0.682) | ||||
UDSBENTD | 0.770 | 0.933 | 0.759 | ||||
Total score for delayed drawing of Benson figure | (0.754, 0.785) | (0.916, 0.95) | (0.729, 0.789) | ||||
ANIMALS | 0.741 | 0.892 | 0.724 | ANIMALS | 0.725 | 0.892 | 0.731 |
Category fluency (animals) | (0.725, 0.756) | (0.872, 0.911) | (0.693, 0.754) | Category fluency (animals) | (0.709, 0.74) | (0.874, 0.911) | (0.702, 0.761) |
VEG | 0.745 | 0.911 | 0.740 | VEG | 0.734 | 0.910 | 0.746 |
Category fluency (vegetables) | (0.730, 0.760) | (0.894, 0.929) | (0.711, 0.769) | Category fluency (vegetables) | (0.719, 0.75) | (0.893, 0.928) | (0.717, 0.774) |
TRAILA | 0.688 | 0.839 | 0.693 | TRAILA | 0.674 | 0.842 | 0.704 |
Trail Making Test Part A | (0.671, 0.704) | (0.815, 0.863) | (0.661, 0.725) | Trail Making Test Part A | (0.657, 0.691) | (0.819, 0.864) | (0.674, 0.734) |
TRAILA_CONNECT_SEC | 0.688 | 0.839 | 0.700 | TRAILA_CONNECT_SEC | 0.672 | 0.843 | 0.707 |
Trail A Lines per second | (0.671, 0.704) | (0.816, 0.863) | (0.669, 0.731) | Trail A Lines per second | (0.655, 0.688) | (0.82, 0.865) | (0.678, 0.737) |
TRAILB | 0.730 | 0.896 | 0.740 | TRAILB | 0.719 | 0.891 | 0.742 |
Trail Making Test Part B | (0.714, 0.745) | (0.876, 0.916) | (0.710, 0.770) | Trail Making Test Part B | (0.703, 0.735) | (0.871, 0.912) | (0.714, 0.771) |
TRAILB_CONNECT_SEC | 0.731 | 0.893 | 0.746 | TRAILB_CONNECT_SEC | 0.720 | 0.890 | 0.742 |
Trail B Lines per second | (0.715, 0.746) | (0.871, 0.915) | (0.716, 0.777) | Trail B Lines per second | (0.704, 0.736) | (0.868, 0.911) | (0.713, 0.772) |
African American subjects | |||||||
---|---|---|---|---|---|---|---|
Version 2 (n = 604) | | | | Version 3 (n = 604) | | | |
Test | CDR 0 vs 0.5 | CDR 0 vs 1 | CDR 0.5 vs 1 | Test | CDR 0 vs 0.5 | CDR 0 vs 1 | CDR 0.5 vs 1 |
NACCMMSE | 0.750 | 0.962 | 0.837 | MOCATOTS | 0.702 | 0.972 | 0.858 |
Total Mini-Mental State Exam (MMSE) score | (0.707, 0.792) | (0.925, 0.998) | (0.717, 0.958) | MoCA total raw score | (0.656, 0.748) | (0.939, 1) | (0.775, 0.941) |
LOGIMEM | 0.714 | 0.937 | 0.811 | CRAFTVRS | 0.686 | 0.937 | 0.813 |
Immediate story recall | (0.67, 0.758) | (0.90, 0.974) | (0.722, 0.901) | Immediate craft story recall (verbatim scoring) | (0.64, 0.732) | (0.874, 1.00) | (0.716, 0.911) |
CRAFTURS | 0.693 | 0.927 | 0.790 | ||||
Immediate craft story recall (paraphrase scoring) | (0.648, 0.739) | (0.864, 0.991) | (0.681, 0.899) | ||||
MEMUNITS | 0.729 | 0.983 | 0.847 | CRAFTDVR | 0.697 | 0.945 | 0.834 |
Delayed story recall | (0.685, 0.774) | (0.97, 0.995) | (0.784, 0.91) | Delayed craft story recall (verbatim scoring) | (0.652, 0.742) | (0.885, 1.00) | (0.739, 0.929) |
CRAFTDRE | 0.698 | 0.925 | 0.806 | ||||
Delayed craft story recall (paraphrase scoring) | (0.653, 0.743) | (0.852, 0.998) | (0.696, 0.916) | ||||
DIGIF | 0.617 | 0.697 | 0.588 | DIGFORCT | 0.636 | 0.798 | 0.681 |
Digit span forward (correct trials) | (0.568, 0.666) | (0.582, 0.812) | (0.467, 0.709) | Forward number span test (correct trials) | (0.588, 0.683) | (0.678, 0.918) | (0.544, 0.818) |
DIGFORSL | 0.639 | 0.801 | 0.679 | ||||
Forward number span test (longest span) | (0.591, 0.686) | (0.679, 0.923) | (0.539, 0.818) | ||||
DIGIB | 0.620 | 0.690 | 0.563 | DIGBACCT | 0.642 | 0.879* | 0.772* |
Digit span backward (correct trials) | (0.571, 0.669) | (0.561, 0.818) | (0.42, 0.705) | Backward number span test (correct trials) | (0.594, 0.689) | (0.809, 0.949) | (0.662, 0.882) |
DIGBACLS | 0.636 | 0.865 | 0.737 | ||||
Backward number span test (longest span) | (0.588, 0.683) | (0.795, 0.935) | (0.62, 0.855) | ||||
BOSTON | 0.694 | 0.839 | 0.689 | MINTTOTS | 0.669 | 0.888 | 0.755 |
Boston Naming Test (total score) | (0.648, 0.739) | (0.739, 0.938) | (0.562, 0.815) | Multilingual naming test (MINT) (total score) | (0.624, 0.715) | (0.812, 0.964) | (0.646, 0.864) |
WAIS | 0.700 | 0.897 | 0.757 | UDSVERFC | 0.643 | 0.860 | 0.749 |
WAIS‐R Digit Symbol | (0.655, 0.745) | (0.836, 0.958) | (0.662, 0.853) | Number of correct F‐words generated | (0.595, 0.692) | (0.775, 0.945) | (0.627, 0.87) |
UDSVERLC | 0.634 | 0.866 | 0.749 | ||||
Number of correct L‐words generated | (0.585, 0.682) | (0.785, 0.946) | (0.647, 0.852) | ||||
UDSVERTN | 0.644 | 0.865 | 0.758 | ||||
Number of correct F‐words and L‐words | (0.595, 0.693) | (0.782, 0.948) | (0.646, 0.869) | ||||
UDSBENTC | 0.618 | 0.864 | 0.745 | ||||
Total score for copy of Benson figure | (0.569, 0.667) | (0.792, 0.936) | (0.649, 0.841) | ||||
UDSBENTD | 0.681 | 0.950 | 0.814 | ||||
Total score for delayed drawing of Benson figure | (0.634, 0.729) | (0.909, 0.991) | (0.735, 0.894) | ||||
ANIMALS | 0.675 | 0.829 | 0.682 | ANIMALS | 0.652 | 0.883 | 0.756 |
Category fluency (animals) | (0.63, 0.721) | (0.728, 0.93) | (0.549, 0.815) | Category fluency (animals) | (0.605, 0.699) | (0.812, 0.954) | (0.645, 0.866) |
VEG | 0.677 | 0.893 | 0.773 | VEG | 0.656 | 0.881 | 0.774 |
Category fluency (vegetables) | (0.631, 0.724) | (0.824, 0.962) | (0.657, 0.889) | Category fluency (vegetables) | (0.609, 0.704) | (0.792, 0.971) | (0.66, 0.888) |
TRAILA | 0.651 | 0.805 | 0.710 | TRAILA | 0.655 | 0.888 | 0.769 |
Trail Making Test Part A | (0.604, 0.699) | (0.691, 0.918) | (0.595, 0.825) | Trail Making Test Part A | (0.608, 0.702) | (0.808, 0.968) | (0.671, 0.867) |
TRAILA_CONNECT_SEC | 0.658 | 0.804 | 0.690 | TRAILA_CONNECT_SEC | 0.659 | 0.891 | 0.755 |
Trail A Lines per second | (0.611, 0.705) | (0.69, 0.919) | (0.565, 0.816) | Trail A Lines per second | (0.613, 0.706) | (0.822, 0.959) | (0.654, 0.856) |
TRAILB | 0.679 | 0.919 | 0.809 | TRAILB | 0.672 | 0.939 | 0.809 |
Trail Making Test Part B | (0.632, 0.726) | (0.848, 0.989) | (0.709, 0.909) | Trail Making Test Part B | (0.625, 0.719) | (0.893, 0.985) | (0.723, 0.895) |
TRAILB_CONNECT_SEC | 0.675 | 0.925 | 0.839 | TRAILB_CONNECT_SEC | 0.677 | 0.943 | 0.810 |
Trail B Lines per second | (0.628, 0.722) | (0.845, 1.000) | (0.735, 0.943) | Trail B Lines per second | (0.63, 0.724) | (0.900, 0.986) | (0.724, 0.896) |
Caucasian subjects | |||||||
---|---|---|---|---|---|---|---|
Version 2 (n = 3439) | | | | Version 3 (n = 3439) | | | |
Test | CDR 0 vs 0.5 | CDR 0 vs 1 | CDR 0.5 vs 1 | Test | CDR 0 vs 0.5 | CDR 0 vs 1 | CDR 0.5 vs 1 |
NACCMMSE | 0.784 | 0.955 | 0.794 | MOCATOTS | 0.817** | 0.966 | 0.804 |
Total Mini-Mental State Exam (MMSE) score | (0.768, 0.8) | (0.941, 0.969) | (0.766, 0.822) | MoCA total raw score | (0.802, 0.832) | (0.955, 0.977) | (0.776, 0.832) |
LOGIMEM | 0.783 | 0.924 | 0.732 | CRAFTVRS | 0.760 | 0.914 | 0.739 |
Immediate story recall | (0.766, 0.799) | (0.906, 0.942) | (0.7, 0.764) | Immediate craft story recall (verbatim scoring) | (0.743, 0.778) | (0.896, 0.932) | (0.707, 0.771) |
CRAFTURS | 0.776 | 0.920 | 0.743 | ||||
Immediate craft story recall (paraphrase scoring) | (0.759, 0.792) | (0.902, 0.938) | (0.711, 0.774) | ||||
MEMUNITS | 0.802 | 0.938 | 0.725 | CRAFTDVR | 0.785 | 0.945 | 0.776 |
Delayed story recall | (0.786, 0.817) | (0.92, 0.955) | (0.694, 0.757) | Delayed craft story recall (verbatim scoring) | (0.768, 0.801) | (0.929, 0.962) | (0.746, 0.806) |
CRAFTDRE | 0.798 | 0.952 | 0.782** | ||||
Delayed craft story recall (paraphrase scoring) | (0.782, 0.814) | (0.936, 0.967) | (0.753, 0.812) | ||||
DIGIF | 0.658* | 0.724 | 0.594 | DIGFORCT | 0.627 | 0.719 | 0.626 |
Digit span forward (correct trials) | (0.639, 0.677) | (0.693, 0.754) | (0.559, 0.629) | Forward number span test (correct trials) | (0.607, 0.647) | (0.686, 0.752) | (0.588, 0.663) |
DIGFORSL | 0.628 | 0.712 | 0.618 | ||||
Forward number span test (longest span) | (0.608, 0.648) | (0.679, 0.745) | (0.58, 0.656) | ||||
DIGIB | 0.674 | 0.777 | 0.635 | DIGBACCT | 0.669 | 0.785 | 0.657 |
Digit span backward (correct trials) | (0.655, 0.692) | (0.749, 0.805) | (0.599, 0.67) | Backward number span test (correct trials) | (0.65, 0.689) | (0.756, 0.814) | (0.621, 0.694) |
DIGBACLS | 0.657 | 0.777 | 0.655 | ||||
Backward number span test (longest span) | (0.637, 0.676) | (0.747, 0.806) | (0.619, 0.692) | ||||
BOSTON | 0.730 | 0.838 | 0.658 | MINTTOTS | 0.719 | 0.855 | 0.679 |
Boston Naming Test (total score) | (0.712, 0.747) | (0.812, 0.864) | (0.622, 0.694) | Multilingual naming test (MINT) (total score) | (0.701, 0.737) | (0.83, 0.88) | (0.641, 0.716) |
WAIS | 0.725 | 0.868 | 0.711 | UDSVERFC | 0.637 | 0.753 | 0.645 |
WAIS‐R Digit Symbol | (0.707, 0.742) | (0.844, 0.892) | (0.678, 0.745) | Number of correct F‐words generated | (0.617, 0.656) | (0.723, 0.784) | (0.609, 0.681) |
UDSVERLC | 0.632 | 0.760 | 0.651 | ||||
Number of correct L‐words generated | (0.613, 0.652) | (0.73, 0.79) | (0.615, 0.686) | ||||
UDSVERTN | 0.638 | 0.762 | 0.649 | ||||
Number of correct F‐words and L‐words | (0.618, 0.658) | (0.732, 0.793) | (0.613, 0.684) | ||||
UDSBENTC | 0.627 | 0.733 | 0.640 | ||||
Total score for copy of Benson figure | (0.607, 0.647) | (0.7, 0.767) | (0.602, 0.678) | ||||
UDSBENTD | 0.785 | 0.929 | 0.750 | ||||
Total score for delayed drawing of Benson figure | (0.769, 0.802) | (0.91, 0.949) | (0.717, 0.783) | ||||
ANIMALS | 0.755 | 0.894 | 0.714 | ANIMALS | 0.740 | 0.889 | 0.719 |
Category fluency (animals) | (0.739, 0.772) | (0.874, 0.914) | (0.681, 0.747) | Category fluency (animals) | (0.723, 0.757) | (0.87, 0.909) | (0.687, 0.751) |
VEG | 0.756 | 0.909 | 0.725 | VEG | 0.746 | 0.906 | 0.729 |
Category fluency (vegetables) | (0.739, 0.773) | (0.891, 0.928) | (0.693, 0.757) | Category fluency (vegetables) | (0.729, 0.764) | (0.887, 0.925) | (0.698, 0.76) |
TRAILA | 0.693 | 0.836 | 0.680 | TRAILA | 0.676 | 0.835 | 0.696 |
Trail Making Test Part A | (0.675, 0.712) | (0.811, 0.861) | (0.645, 0.714) | Trail Making Test Part A | (0.658, 0.695) | (0.81, 0.859) | (0.663, 0.729) |
TRAILA_CONNECT_SEC | 0.692 | 0.834 | 0.689 | TRAILA_CONNECT_SEC | 0.674 | 0.834 | 0.698 |
Trail A Lines per second | (0.673, 0.71) | (0.809, 0.859) | (0.655, 0.722) | Trail A Lines per second | (0.655, 0.692) | (0.809, 0.858) | (0.666, 0.73) |
TRAILB | 0.739 | 0.887 | 0.727 | TRAILB | 0.727 | 0.884 | 0.734 |
Trail Making Test Part B | (0.721, 0.756) | (0.864, 0.91) | (0.694, 0.761) | Trail Making Test Part B | (0.71, 0.745) | (0.86, 0.907) | (0.702, 0.766) |
TRAILB_CONNECT_SEC | 0.739 | 0.886 | 0.735 | TRAILB_CONNECT_SEC | 0.727 | 0.883 | 0.734 |
Trail B Lines per second | (0.721, 0.756) | (0.862, 0.911) | (0.702, 0.769) | Trail B Lines per second | (0.709, 0.745) | (0.858, 0.907) | (0.702, 0.767) |
Adjusted for age, gender, and years of education; for the all-subjects models, race was also adjusted for. Subjects missing years of education are excluded.
Shaded tests were used to match the V2 and V3 cohorts; therefore, by design, the ROC-AUCs of these tests are similar between the V2 and V3 cohorts.
*: significant evidence that the Version 2 ROC-AUC differs from the Version 3 ROC-AUC (0.05 significance level, two-tailed).
**: significant evidence that the Version 2 ROC-AUC differs from the Version 3 ROC-AUC (0.01 significance level, two-tailed).
The test that showed better performance is indicated by * or **.
Comparisons were made between the following 10 pairs: MMSE and MOCATOTS, LOGIMEM and CRAFTURS, MEMUNITS and CRAFTDRE, DIGIB and DIGBACCT, DIGIF and DIGFORSL, BOSTON and MINTTOTS, and the four tests administered under both versions (TRAILA, TRAILB, ANIMALS, and VEG), which were used to match the V2 and V3 cohorts. When more than one V3 test corresponded to a V2 test, the one with the higher ROC-AUC was chosen for the comparison.
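For readers who want to reproduce this kind of result, the sketch below shows one way an ROC-AUC point estimate with a percentile-bootstrap 95% confidence interval, like the entries in this table, can be computed. This is an illustrative stdlib-Python example on synthetic data, not the authors' pipeline: it uses the rank-based (Mann-Whitney) AUC estimator and omits the covariate adjustment (age, gender, education, race) applied in the paper.

```python
# Illustrative sketch, NOT the authors' code: ROC-AUC with a percentile
# bootstrap 95% CI on synthetic data. The published AUCs are additionally
# adjusted for age, gender, education (and race), which this omits.
import random

def roc_auc(labels, scores):
    """ROC-AUC via its Mann-Whitney interpretation: the probability that a
    randomly chosen case (label 1) outscores a randomly chosen control
    (label 0), counting ties as one half."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_ci(labels, scores, n_boot=1000, seed=0):
    """Percentile-bootstrap 95% confidence interval for the AUC."""
    rng = random.Random(seed)
    n = len(labels)
    aucs = []
    while len(aucs) < n_boot:
        idx = [rng.randrange(n) for _ in range(n)]
        ys = [labels[i] for i in idx]
        if 0 < sum(ys) < n:  # a resample must contain both classes
            aucs.append(roc_auc(ys, [scores[i] for i in idx]))
    aucs.sort()
    return aucs[int(0.025 * n_boot)], aucs[int(0.975 * n_boot)]

# Synthetic cohort: 100 CDR 0 controls and 50 CDR 0.5 cases, with the
# score oriented so that higher values indicate more impairment.
rng = random.Random(1)
labels = [0] * 100 + [1] * 50
scores = ([rng.gauss(10, 3) for _ in range(100)]
          + [rng.gauss(14, 3) for _ in range(50)])
auc = roc_auc(labels, scores)
lo, hi = bootstrap_ci(labels, scores)
print(f"AUC = {auc:.3f} (95% CI {lo:.3f}, {hi:.3f})")
```

The starred comparisons in the table test whether a V2 AUC differs from the corresponding V3 AUC across independent cohorts; running a bootstrap within each cohort separately would yield the standard errors needed for such a two-sample test.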