Author manuscript; available in PMC: 2020 Sep 11.
Published in final edited form as: Neuropsychology. 2020 Apr 27;34(6):629–640. doi: 10.1037/neu0000640

Using Multivariate Base Rates of Low Scores to Understand Early Cognitive Declines on the Uniform Data Set 3.0 Neuropsychological Battery

Andrew M Kiselica 1, Troy A Webber 2, Jared F Benge 3
PMCID: PMC7484046  NIHMSID: NIHMS1610880  PMID: 32338945

Abstract

Objective:

Low neuropsychological test scores are commonly observed even in cognitively healthy older adults. For batteries designed to assess for and track cognitive decline in older adults, documenting the multivariate base rates (MBRs) of low scores is important to differentiate expected from abnormal low score patterns. Additionally, to advance our understanding of mild cognitive impairment and preclinical declines, it is important to determine how such score patterns predict future clinical states.

Method:

The current study utilized Uniform Data Set Neuropsychological Battery 3.0 (UDS3NB) data for 5,870 English-speaking, older adult participants from the National Alzheimer’s Coordinating Center, drawn from 39 Alzheimer’s disease Research Centers from March 2015 to December 2018. MBRs of low scores were identified for 2,608 cognitively healthy participants who had completed all cognitive measures. The association of abnormal MBR patterns with subsequent conversion to mild cognitive impairment and dementia was explored.

Results:

Depending on the operationalization of a “low” score, the MBR of demographically adjusted scores ranged from 1.4% to 79.2%. Posttest probabilities using MBR methods to predict dementia status at 2-year follow-up ranged from .06 to .33, while posttest probabilities for conversion to mild cognitive impairment (MCI) ranged from .12 to .32.

Conclusions:

The data confirm that abnormal cognitive test scores are common among cognitively normal older adults. Using MBR criteria may improve our understanding of MCI. They may also be used to enrich clinical trial selection processes through recruitment of at-risk individuals.

Keywords: aging, assessment, neuropsychology, multivariate base rates, uniform data set


Neuropsychologists have yet to agree on what constitutes a “low” or “abnormal” score, with a number of different standard deviation or percentile rank cut-points being proffered. For many cut-points, prior research suggests that low scores are very common when neuropsychological test batteries are administered, even when participants have been screened carefully to avoid inclusion of conditions that influence cognitive performance. For instance, Brooks, Iverson, Holdnack, and Feldman (2008) reported rates of at least one low score that ranged from 12 to 64% on the Wechsler Memory Scale-III (Wechsler, 1997), depending on the criterion used to define a low score (cut-offs ranging from ≤2nd percentile to ≤16th percentile). The base rate of participants with at least one low score (cut-offs ranging from ≤2nd percentile to ≤25th percentile were used) was even higher when the Wechsler Memory Scale-IV (Wechsler, Holdnack, & Drozdick, 2009) and Wechsler Adult Intelligence Scale-IV (Wechsler, 2008) were given in tandem, ranging from 26 to 89% (Brooks, Holdnack, & Iverson, 2011). A similar pattern has been found when using other test batteries. For example, Holdnack and colleagues (2017) reported that 16–62% of healthy adults obtained at least one low score (cut-offs ranging from ≤5th percentile to ≤25th percentile were used) when using the National Institutes of Health Toolbox—Cognition Battery normative data (Beaumont et al., 2013). 
Similarly, the base rate of one or more low index scores on the Neuropsychological Assessment Battery (Stern & White, 2003) ranged from 5 to 52% for cut-offs ranging from ≤2nd percentile to ≤25th percentile (Brooks, Iverson, & White, 2009), and the rate of two or more low scores ranged from 2 to 99% when using tests from the Calibrated Neuropsychological Normative System, depending on the cutpoint for defining normality (ranged from T <30 to T <45) and the number of tests administered (Schretlen, Testa, & Pearlson, 2010; Schretlen, Testa, Winicki, Pearlson, & Gordon, 2008).

The base rate of low scores is significantly influenced by demographic factors, in particular age. For example, Schretlen et al. (2008) noted moderate-to-strong correlations (range = .31–.54) between age and number of low test scores in their sample of 327 neurologically normal adults. Thus, abnormality is quite normal when a battery of neuropsychological measures is given, particularly among older individuals.

Lack of Consensus in Defining Mild Cognitive Impairment

This variability is far from a statistical oddity. It presents a challenge to researchers and clinicians attempting to study conditions common in the aging process, such as mild cognitive impairment (MCI). The identification of individuals with MCI—those with objective cognitive weaknesses on neuropsychological tests without accompanying impairments in day-to-day activities (Petersen, 2004; Petersen et al., 2001)—represented a considerable step forward in understanding neurodegenerative diseases and led to developments in differential diagnosis, prognosis, and management (Petersen et al., 2018; Yanhong, Chandra, & Venkatesh, 2013). Despite the proliferation of the MCI moniker in clinical research, the term lacks a clear empirical definition.

At least six major conceptualizations of MCI have been proffered across several disease states (Albert et al., 2011; APA, 2014; Bondi et al., 2014; Litvan et al., 2012; Morris et al., 2014; Sachdev et al., 2014). Each publication clearly states that MCI consists of cognitive impairment in the absence of dementia and functional deficits. Cognitive impairment is frequently defined by performance on objective neuropsychological tests, given that continuous scores on these instruments have been shown to predict cognitive and functional outcomes in myriad longitudinal studies of older adults (see, e.g., Atchison, Massman, & Doody, 2007; Baerresen et al., 2015; Tabert et al., 2006).

Nonetheless, more specific operationalization of “cognitive impairment” varies considerably, leading to lack of clarity about what constitutes MCI, as well as conflicting research findings. The International Work Group criteria are general and do not state the level of cognitive impairment that should be observed on cognitive tests (Morris et al., 2014). More specific are the American Psychiatric Association and National Institute on Aging and Alzheimer’s Association criteria, which specify a standard deviation (SD) threshold varying from −1 to −2 SDs below a normative value, but do not specify how many tests must fall in this range to make a diagnosis (Albert et al., 2011; APA, 2014; Sachdev et al., 2014). Most specific are the Parkinson’s disease MCI criteria, which require cognitive impairment to be observed on at least two tests when a comprehensive assessment (at least two tests per each of five cognitive domains) is completed (Litvan et al., 2012). Similarly, the Jak/Bondi criteria require performance <1 SD below the mean on at least two tests, but additionally specify that these tests must fall within the same cognitive domain (Jak et al., 2016).

The presence of a high base rate of low scores even in clinically healthy older adults may contribute to lack of clarity in defining MCI. For instance, Brooks, Iverson, and White (2007) raised concern for the potential of “accidental MCI” diagnoses. They reported that the overall base rate of obtaining at least one low memory score (defined at cut-points ranging from z < −1 to z < −2) on the Neuropsychological Assessment Battery Memory Module (White & Stern, 2003) ranged from 1.50 to 14.40% in a healthy sample of older adults. Thus, a relatively high proportion of cognitively normal older adults might be labeled as having MCI if the base rates of low scores are not considered, and it is highly important to examine the low score cut-points and MBR patterns that best define MCI.

MBR patterns of low scores in individuals with purportedly normal cognition may also be useful for detecting so-called “transitional cognitive declines” (Jack et al., 2018). Indeed, subtle abnormalities in cognition that do not rise to the level of impairment may be harbingers of future decline in currently healthy individuals (Papp et al., 2019). Thus, MBR methods could be used to distinguish healthy individuals undergoing preclinical cognitive changes from healthy individuals at lower risk for cognitive decline.

These findings argue for two related lines of research. The first concerns the ongoing need for critical analysis of older adult cognitive performances to help understand the frequency of low scores. As new batteries meant to assess cognitive impairment in older adults are developed, understanding the MBR of low scores should be part of the normative research process. The second, related aim concerns the importance of understanding how the frequency of low scores on neuropsychological batteries, even in putatively healthy individuals, relates to clinical outcomes. This goal is particularly important in light of new research criteria, which emphasize detection of individuals with transitional cognitive declines that do not rise to the level of impairment (Jack et al., 2018). Relatedly, there has been an increased focus in recent clinical trials on specifically recruiting individuals in prodromal or preclinical stages of disease progression (Cummings, 2019; Cummings, Lee, Ritter, Sabbagh, & Zhong, 2019). Given this shift in clinical research, creating a uniformly accepted method of identifying at-risk individuals is necessary. An MBR pattern of low scores among “normal” individuals could be one such technique to reach this important goal and empirically guide the clinical trial selection process. Of note, limited available research suggests that MBR techniques hold potential to better identify older adults at risk for clinical progression (Oltra-Cucarella et al., 2018).

Current Study

The National Alzheimer’s Coordinating Center Uniform Data Set 3.0 Neuropsychological Battery (UDS3NB; Weintraub et al., 2009, 2018) consists of a set of cognitive tests administered at Alzheimer’s disease Research Centers across the country. It serves as a common language of cognitive measures meant to assist in identification, differential diagnosis, and staging of older adults in one of the largest data sets available to researchers. Publication of normative data and user-friendly interpretive tools is ongoing (Devora, Beevers, Kiselica, & Benge, 2019; Kiselica, Webber, & Benge, 2020; Shirk et al., 2011; Weintraub et al., 2018). Adding to this armamentarium, publication of MBR data could lead to improvements in the UDS research database, enhanced clinical interpretation of UDS3NB data, advancements in understanding the MCI construct, and improvement in identification of preclinical individuals at-risk for conversion to MCI.

In summary, there were three main goals of the paper:

  1. To present base rates of low scores using both unadjusted (i.e., based on the raw means and standard deviations) and demographically adjusted (i.e., corrected for the influence of sex, age, and education) methods (Weintraub et al., 2018) in a cognitively normal sample. We hypothesized that the base rate of low scores would be negatively related to cutpoint stringency and to the number of low scores required.

  2. Given lack of clarity surrounding what pattern of low scores should best define MCI, we examined UDS3NB data from initially cognitively normal and MCI participants, who subsequently completed a follow-up neuropsychological assessment approximately 2 years postbaseline. These data allowed us to evaluate the predictive validity and diagnostic accuracy of the number of low scores at baseline in predicting a diagnosis of dementia at 2-year follow-up. These analyses provided evidence for MBR patterns of low scores that could best predict conversion to dementia, thereby shedding light on the most appropriate empirical definition for MCI. We predicted that the probability of dementia diagnosis would increase with increasing cutpoint stringency and number of low scores required.

  3. Because prior studies suggest that low scores are relatively common even in normal older adults (Brooks et al., 2007), we also sought to assess whether MBR patterns of low scores could identify a group of putatively “normal” individuals at risk of further decline. To this end, we also examined 2-year follow-up data for individuals classified as clinically normal at baseline to assess for patterns of low scores that would predict conversion to MCI. We again expected that the probability of a subsequent diagnosis would increase with increasing cutpoint stringency and number of low scores required.

Method

Cognitively Normal Sample

On January 29, 2018, we requested all UDS data through the National Alzheimer’s Coordinating Center online portal. These data are collected under the supervision of the Institutional Review Boards (IRBs) at individual Alzheimer’s disease Research Centers. All participants provided written informed consent. Data are available to researchers in a deidentified format, and analysis of these data was determined to be exempt from review by our own IRB.

Cases included 39,412 individuals from 39 Alzheimer’s disease Research Centers. Since many measures in the UDS3NB are language-based, we restricted the sample to individuals with a primary language of English who were tested in English (n = 36,134). Next, because this study focused on the UDS 3.0, the dataset was further limited to cases administered the UDS3NB at their initial site visit (n = 6,042), which included participants assessed from March 2015 through December 2018. The sample was then further restricted to individuals 50 years of age and older, as older adults represent the population of interest (n = 5,870).

This group was further limited to those individuals categorized as cognitively normal. Because consensus UDS diagnoses rely on interpretation of cognitive data, determination of normality was based solely on scores on the Clinical Dementia Rating (CDR) Dementia Staging Instrument (Morris, 1993) to avoid criterion contamination. This structured clinical interview is a reliable and valid measure for dementia staging (Fillenbaum, Peterson, & Morris, 1996; Morris, 1997) and includes a global rating of cognitive impairment (0 = cognitively normal, 0.5 = MCI, 1 = mild dementia, 2 = moderate dementia, 3 = severe dementia). Restricting the sample to individuals with a CDR global score of 0 yielded 2,701 remaining cases.

Finally, to ensure a uniform sample, the dataset was further restricted to participants with data available for all cognitive variables in the UDS3NB (n = 2,608). While this decision may have introduced bias in that certain groups may be more likely to have missing data, it is in keeping with prior work using the UDS (Devora et al., 2019; Weintraub et al., 2018). Additionally, the proportion of the sample with missing data was small (3.4%), limiting the likelihood of introducing bias. Demographic characteristics of the sample and descriptive statistics for study variables are provided in Table 1.

Table 1.

Demographic Characteristics and Descriptive Statistics for Raw Cognitive Variables at Baseline

Demographic variables Cognitively normal sample (n = 2,608) Dementia prediction sample (n = 642) MCI prediction sample (n = 437)
Age: M (SD) 69.69 (7.88) 71.75 (7.61) 71.54 (7.56)
Education: M (SD) 16.36 (2.54) 16.47 (2.67) 16.48 (2.59)
% Female 64.70% 51.10% 56.50%
Racial breakdown
 White 77.50% 80.20% 76.70%
 Black/African American 18.10% 15.30% 17.40%
 American Indian/Alaska Native 1.30% 1.60% 2.10%
 Asian 2.00% 2.30% 3.00%
 Unknown/other 0.70% 0.60% 1.00%
Ethnic breakdown
 Hispanic 3.90% 1.50% 3.90%
 Non-Hispanic 95.90% 98.00% 95.90%
Cognitive variables M (SD) M (SD) M (SD)
 Benson Figure Copy 15.50 (1.37) 17.44 (12.38) 17.51 (12.22)
 Benson Figure Recall 11.24 (2.98) 12.14 (13.64) 13.22 (13.14)
 Trailmaking Test part A 32.33 (13.21) 56.42 (141.63) 54.85 (144.99)
 Trailmaking Test part B 86.06 (45.12) 122.69 (165.57) 108.12 (149.21)
 Letter fluency 27.50 (8.37) 28.85 (14.42) 30.08 (13.59)
 Number Span Forward 8.22 (2.32) 10.29 (14.07) 10.41 (14.13)
 Number Span Backward 6.94 (2.22) 8.95 (14.28) 9.24 (14.31)
 Vegetable Naming 14.63 (4.07) 15.41 (12.93) 16.54 (12.95)
 Animal Naming 21.13 (5.66) 21.40 (11.59) 22.86 (11.57)
 Multilingual Naming Test 29.94 (2.31) 31.41 (11.13) 31.75 (10.65)
 Craft Story Immediate Recall 21.40 (6.45) 21.10 (13.32) 22.78 (13.00)
 Craft Story Delayed Recall 18.52 (6.72) 18.03 (13.87) 19.93 (13.40)

Note. MCI = mild cognitive impairment.

Dementia Prediction Sample

To test the utility of using the number of low neuropsychological test scores at an initial visit to predict subsequent conversion to dementia, we next selected a sample of individuals who were diagnosed as clinically normal (CDR = 0) or with MCI (CDR = .5) at baseline and who had a 2-year follow-up visit. Of note, both normal and MCI individuals were included because a number of low scores could identify either clinically normal or MCI patients as at risk for conversion. This sample included 642 participants total, with 437 individuals diagnosed as cognitively intact and 205 diagnosed as MCI. At 2-year follow-up, 34 individuals with MCI had converted to dementia (29 were classified as mild severity, whereas five were classified as moderate severity). Descriptive statistics for this sample are provided in Table 1.

MCI Prediction Sample

To test the utility of using the number of low neuropsychological test scores at an initial visit to predict subsequent conversion to MCI, we next selected a sample of individuals who were diagnosed as clinically normal (CDR = 0) at baseline and who had a 2-year follow-up visit. This sample included 437 normal participants. At follow-up, 47 patients had converted to MCI, while 390 remained cognitively normal. Descriptive statistics for this sample are provided in Table 1.

Measures

Cognitive tests utilized in the UDS3NB are described in detail in other publications (Besser et al., 2018; Weintraub et al., 2018). We used a selection of measures from the battery, including (a) a test of immediate and delayed recall of orally presented story information (though both thematic unit and verbatim recall scores can be used, verbatim recall points were chosen for the current study), the Craft Story (Craft et al., 1996); (b) a figure copy and recall task, the Benson Figure (Possin, Laluz, Alcantar, Miller, & Kramer, 2011); (c) Number Span Forward and Backward (both total score and longest span were available; total score was used in the current analyses); (d) a confrontation naming task, the Multilingual Naming Test (Gollan, Weissberger, Runnqvist, Montoya, & Cera, 2012; Ivanova, Salmon, & Gollan, 2013); (e) semantic fluency (animals and vegetables) and letter fluency (F- and L-words) tasks; and (f) simple number sequencing and letter-number sequencing, indexed by the Trailmaking Test parts A and B (Partington & Leiter, 1949).

Analyses

Unadjusted base rate analysis.

We constructed a table of the base rates of obtaining at least one low unadjusted score on the UDS3NB battery at cut-points of ≤16th, ≤9th, ≤5th, and ≤2nd percentiles, following the procedures of Brooks and colleagues (2008, 2007). To obtain percentile cut-points, raw scores were transformed to z-scores, based on the mean/standard deviations for the cognitively normal sample. The z-score corresponding to each percentile was then used to set a cutpoint for determining a low score. Finally, the base rate of having at least one score at or below each cutpoint was presented for the total sample and by age and education groupings. Age groupings were based on decades of life and included groups for individuals aged 50–59, 60–69, 70–79, and 80+ , respectively. Education groupings were consistent with natural educational divisions in the United States, including high school or below (≤12 years of education), some college (13–15 years of education), baccalaureate (16 years of education), some postbaccalaureate or Master’s (17–18 years of education), and doctoral/professional level (19 or more years of education).
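The unadjusted procedure above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions: the score matrix is simulated rather than drawn from the NACC data, all variable names are illustrative, and every measure is assumed to be oriented so that lower means worse (timed measures such as the Trailmaking Test would first need to be reverse-scored).

```python
import random
from statistics import NormalDist, mean, stdev

random.seed(0)
N_CASES, N_TESTS = 2608, 12  # sample size and number of UDS3NB scores
# Placeholder raw scores; each column stands in for one cognitive measure
scores = [[random.gauss(0, 1) for _ in range(N_TESTS)] for _ in range(N_CASES)]

# z-scores based on the cognitively normal sample's own means and SDs
cols = list(zip(*scores))
mus = [mean(c) for c in cols]
sds = [stdev(c) for c in cols]
z = [[(row[j] - mus[j]) / sds[j] for j in range(N_TESTS)] for row in scores]

# Base rate of at least one score at or below each percentile cutpoint
rates = {}
for pct in (16, 9, 5, 2):
    cut = NormalDist().inv_cdf(pct / 100)    # e.g., 16th percentile -> z of about -0.99
    n_low = [sum(1 for v in row if v <= cut) for row in z]
    rates[pct] = sum(1 for n in n_low if n >= 1) / N_CASES
    print(f"<= {pct}th percentile: {100 * rates[pct]:.1f}% with at least one low score")
```

Even with independent, perfectly normal simulated scores, the base rate of at least one low score is large at lax cutpoints and shrinks as the cutpoint tightens, which is the pattern the tables below document in real data.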

Demographically corrected base rate table.

We created a second table for the base rate of low scores after making demographic adjustments to each cognitive variable using a regression-based approach. This method is statistically rigorous and increases the likelihood of differentiating pathological from normal patterns of low scores (Schretlen et al., 2008; Testa & Schretlen, 2006). The raw score for each variable was transformed to a demographically (age, education, and gender) corrected z-score using weights published by Weintraub et al. (2018). That is, regression coefficients from the normative calculator provided in that paper were applied to each raw score to generate demographically adjusted z-scores. For example, for Number Span Forward, the following equation was applied: Number Span Demographically Corrected z-score = (Number Span Raw Score − [7.811 − 0.295 * sex − 0.026 * age + 0.154 * education])/2.251. Finally, the base rates of low scores for these demographically adjusted variables were calculated at the ≤16th, ≤9th, ≤5th, and ≤2nd percentiles, respectively.
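The worked Number Span Forward example can be written as a small function. This is a sketch using only the coefficients quoted above from the Weintraub et al. (2018) normative calculator; the coding of sex, age, and education follows that calculator's conventions, which are not restated here, and the function name is ours.

```python
def number_span_forward_z(raw_score, sex, age, education):
    """Demographically corrected z = (raw score - regression-predicted score) / residual SD.

    Coefficients are those quoted in the text from the Weintraub et al. (2018)
    normative calculator for Number Span Forward; sex/age/education must be
    coded as that calculator expects.
    """
    predicted = 7.811 - 0.295 * sex - 0.026 * age + 0.154 * education
    return (raw_score - predicted) / 2.251
```

By construction, a raw score equal to the demographically predicted score yields z = 0, and each 2.251 raw-score points above the prediction adds one z-unit.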

MBR and prediction of dementia and MCI.

We assessed sensitivity and specificity for predicting 2-year follow-up dementia status from baseline number of low scores at each cutpoint in the Dementia Prediction Sample. That is, diagnostic accuracy statistics were calculated at each percentile cut-off (≤16th, ≤9th, ≤5th, and ≤2nd) for 1+, 2+, 3+ , and 4+ low scores. We also assessed predictive utility of the various cut-off/number of low score options by calculating positive likelihood ratios and posttest probabilities of dementia. The same process was repeated to calculate diagnostic accuracy and predictive utility of the MBR approach for identifying individuals at risk for MCI in the MCI Prediction Sample. Analytic procedures followed the equations and methods outlined by Smith, Ivnik, and Lucas (2008):

  • Sensitivity = true positives/(true positives + false negatives)

  • Specificity = true negatives/(true negatives + false positives)

  • Positive predictive value = true positives/(true positives + false positives)

  • Negative predictive value = true negatives/(true negatives + false negatives).

Sensitivity and specificity values were then entered into the online tool created by Hozo and Djulbegovic (1998) to calculate likelihood ratios, posttest probabilities, and 95% confidence intervals (CIs) for posttest probabilities (available at http://www.iun.edu/~mathiho/medmath/old/ci-java.htm).
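The full chain of calculations, from a 2 × 2 classification table to a posttest probability, can be sketched as follows. The counts and the function name are illustrative, not the study's; the posttest probability is computed from the positive likelihood ratio in odds form, which is what tools like the Hozo and Djulbegovic (1998) calculator implement.

```python
def diagnostic_stats(tp, fp, fn, tn, pretest=None):
    """Diagnostic accuracy and posttest probability from a 2x2 table.

    tp/fp/fn/tn are true/false positives and negatives; pretest defaults to
    the sample prevalence but may be set to any clinically chosen value.
    """
    sens = tp / (tp + fn)                      # sensitivity
    spec = tn / (tn + fp)                      # specificity
    ppv = tp / (tp + fp)                       # positive predictive value
    npv = tn / (tn + fn)                       # negative predictive value
    lr_pos = sens / (1 - spec)                 # positive likelihood ratio
    if pretest is None:
        pretest = (tp + fn) / (tp + fp + fn + tn)   # prevalence in the table
    post_odds = (pretest / (1 - pretest)) * lr_pos  # posttest odds = pretest odds x LR+
    return {"sens": sens, "spec": spec, "ppv": ppv, "npv": npv,
            "lr_pos": lr_pos, "posttest_p": post_odds / (1 + post_odds)}

# Illustrative 2x2 table: 10 true converters among 100 cases
stats = diagnostic_stats(tp=9, fp=18, fn=1, tn=72)
print(stats)
```

One design point worth noting: when the pretest probability is left at the sample prevalence, the odds-form posttest probability reduces algebraically to the PPV of the same table; supplying a different pretest value is what lets a clinician adapt the likelihood ratio to a setting with a different base rate of conversion.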

Results

Unadjusted Base Rate Table

The percentage of individuals obtaining at least one low score by various percentile cutpoints across age and education groupings is presented in Table 2. The base rate of having at least one low score ranged from 1.90% (for individuals ages 60–69 at the ≤2nd percentile cutpoint) to 39.90% (for individuals aged 80+ at the ≤16th percentile cutpoint). The frequency of low scores tended to decrease with decreasing age, increasing education, and an increasingly strict cutpoint used to define a low score. In this sample, low scores were quite common, particularly when lax cut-points were used and among older participants.

Table 2.

Base Rate of at Least One Unadjusted Low Score (and Total Cell Sample Size) in the UDS3NB by Cutpoint for Defining Abnormality, Age, and Years of Education in the Cognitively Normal Sample

Percentile cutpoint Age Years of education: ≤12 13–15 16 17–18 19+ All levels of education
≤16 50–59 24.00% (25) 30.00% (50) 19.00% (84) 17.60% (68) 7.10% (28) 19.90% (261)
60–69 38.20% (89) 31.20% (189) 17.00% (288) 16.80% (315) 12.10% (157) 20.70% (1,042)
70–79 46.20% (106) 33.10% (169) 23.30% (257) 19.60% (322) 18.70% (150) 25.60% (1,009)
80+ 58.10% (43) 46.39% (54) 39.70% (68) 34.10% (82) 26.50% (49) 39.90% (296)
All ages 43.30% (263) 33.50% (462) 21.80% (697) 19.80% (787) 16.10% (384) 24.70% (2,608)
≤9 50–59 12.00% (25) 12.00% (50) 8.30% (84) 5.90% (68) 7.10% (28) 8.80% (261)
60–69 19.10% (89) 15.90% (189) 6.90% (288) 6.00% (315) 6.40% (157) 9.40% (1,042)
70–79 26.40% (106) 16.60% (169) 11.30% (257) 7.10% (322) 6.70% (150) 11.80% (1,009)
80+ 46.50% (43) 24.10% (54) 26.50% (68) 8.50% (82) 12.20% (49) 21.60% (296)
All ages 25.90% (263) 16.70% (462) 10.60% (697) 6.70% (787) 7.30% (384) 11.70% (2,608)
≤5 50–59 8.00% (25) 6.00% (50) 1.20% (84) 2.90% (68) 0.00% (28) 3.10% (261)
60–69 7.90% (89) 5.30% (189) 2.40% (288) 2.20% (315) 1.30% (157) 3.20% (1,042)
70–79 13.20% (106) 6.50% (169) 4.30% (257) 1.90% (322) 1.30% (150) 4.50% (1,009)
80+ 27.90% (43) 9.30% (54) 7.40% (68) 7.30% (82) 4.10% (49) 10.10% (296)
All ages 13.30% (263) 6.30% (462) 3.40% (697) 2.70% (787) 1.60% (384) 4.40% (2,608)
≤2 50–59 8.00% (25) 2.00% (50) 1.20% (84) 2.90% (68) 0.00% (28) 2.30% (261)
60–69 4.50% (89) 3.70% (189) 1.40% (288) 1.00% (315) 1.30% (157) 1.90% (1,042)
70–79 7.50% (106) 4.10% (169) 0.40% (257) 0.60% (322) 0.70% (150) 2.00% (1,009)
80+ 18.60% (43) 3.70% (54) 5.90% (68) 3.70% (82) 2.00% (49) 6.10% (296)
All ages 8.40% (263) 3.70% (462) 1.40% (697) 1.30% (787) 1.00% (384) 2.50% (2,608)

Note. UDS3NB = Uniform Data Set Neuropsychological Battery 3.0.

Demographically Corrected Base Rate Table

The rates of low scores using demographic corrections (age, education, and gender) on the UDS3NB are presented in Table 3. The base rate of having low scores ranged from 1.40% (for four or more scores at or below the 2nd percentile cutpoint) to 76.20% (for one or more scores at or below the 16th percentile cutpoint). Again, the frequency of low scores declined with increasing strictness of the cut-off and with a greater number of low scores required.

Table 3.

Base Rates of Low Scores for Demographically Adjusted Variables Among Individuals Diagnosed as Cognitively Normal

Percentile cutpoint Number of low scores: 1 or more 2 or more 3 or more 4 or more
≤16 76.2% (1,977) 53.8% (1,395) 35.9% (932) 23.6% (612)
≤9 57.8% (1,499) 33.4% (866) 18.8% (487) 10.6% (274)
≤5 42.1% (1,091) 20.2% (525) 9.8% (253) 5.2% (135)
≤2 23.9% (619) 9.4% (243) 3.6% (93) 1.4% (37)

MBR and Prediction of Dementia and MCI

Diagnostic accuracy and predictive utility statistics for assessing conversion to dementia and MCI from MBR patterns are presented in Table 4. Sensitivity ranged from .35 to 1.00 for predicting dementia and .09 to .94 for predicting MCI. Negative predictive value ranged from .96 to 1.00 for predicting dementia and .90 to .97 for predicting MCI.

Table 4.

Diagnostic Classification Accuracy and Predicted Probabilities

Percentile cutpoint Number of tests Sens Spec PPV NPV LR+ Posttest probability [95%CI]
Dementia classification at follow up among initially cognitively normal or MCI participants
≤16 1+ 1.00 .22 .07 1.00 1.28 .07 [.04, .09]
2+ 1.00 .39 .08 1.00 1.64 .08 [.06, .11]
3+ .94 .55 .10 .99 2.04 .10 [.07, .14]
4+ .88 .67 .13 .99 2.66 .13 [.09, .17]
≤9 1+ .97 .38 .08 1.00 1.56 .08 [.05, .10]
2+ .97 .57 .11 1.00 2.25 .11 [.07, .15]
3+ .82 .73 .15 .99 3.04 .14 [.10, .19]
4+ .71 .83 .19 .98 4.18 .19 [.13, .25]
≤5 1+ .97 .52 .10 1.00 2.02 .10 [.07, .13]
2+ .88 .72 .15 .99 3.14 .15 [.10, .19]
3+ .74 .84 .20 .98 4.63 .20 [.14, .27]
4+ .62 .90 .27 .98 6.20 .25 [.17, .33]
≤2 1+ .91 .70 .14 .99 3.03 .14 [.10, .19]
2+ .74 .83 .19 .98 4.35 .19 [.13, .25]
3+ .53 .93 .29 .97 7.57 .29 [.19, .39]
4+ .35 .96 .34 .96 8.75 .32 [.21, .44]
MCI classification at follow up among initially cognitively normal participants
≤16 1+ .94 .29 .14 .97 1.32 .06 [.05, .09]
2+ .85 .51 .17 .97 1.73 .09 [.06, .12]
3+ .66 .69 .20 .94 2.13 .11 [.07, .14]
4+ .53 .80 .24 .93 2.65 .23 [.09, .17]
≤9 1+ .78 .47 .15 .95 1.47 .08 [.05, .10]
2+ .70 .71 .23 .95 2.41 .12 [.08, .16]
3+ .43 .86 .27 .93 3.07 .15 [.10, .20]
4+ .30 .94 .36 .92 5.00 .22 [.14, .30]
≤5 1+ .77 .64 .20 .96 2.14 .11 [.07, .14]
2+ .55 .85 .31 .94 3.67 .17 [.11, .23]
3+ .34 .94 .39 .92 5.31 .24 [.15, .32]
4+ .19 .97 .45 .91 6.33 .26 [.15, .37]
≤2 1+ .57 .82 .28 .94 3.17 .15 [.10, .20]
2+ .40 .94 .44 .93 6.67 .27 [.18, .36]
3+ .17 .98 .57 .91 8.50 .32 [.18, .47]
4+ .09 .99 .67 .90 9.00 .33 [.13, .53]

Note. MCI = mild cognitive impairment; CI = confidence interval; Sens = sensitivity; Spec = specificity; PPV = positive predictive value; NPV = negative predictive value; LR+ = positive likelihood ratio.

The opposite pattern emerged for specificity and positive predictive value, with these values increasing as the strictness of the criterion and the number of low scores required increased. Specificity ranged from .22 to .96 for predicting dementia and from .29 to .99 for predicting MCI; positive predictive value ranged from .07 to .34 for predicting dementia and from .14 to .67 for predicting MCI (Table 4).

Similarly, positive likelihood ratios and the posttest probability of conversion to a worse cognitive state increased as individuals’ performances fell below lower cut-offs or included higher numbers of low scores. Posttest probabilities ranged from .06 to .33 for MCI and from .07 to .32 for dementia.

In summary, greater predictive accuracy is obtained with an increasingly stringent MBR requirement, while lax MBR requirements appear not to yield much improvement over simple examination of base rates.

Discussion

MBR of Low Scores Among Cognitively Normal Individuals

Consistent with previous research, base rates of low demographically adjusted scores were high among cognitively normal individuals (ranging from roughly 1% to 75%, depending on the cut-score chosen to define abnormality), replicating the wide ranges reported for other common neuropsychological test batteries, such as the Wechsler family of tests, the Neuropsychological Assessment Battery, and the Calibrated Neuropsychological Normative System (Brooks et al., 2007, 2008, 2009; Holdnack et al., 2017; Schretlen et al., 2008). Taken together, extant research findings support the notion that “abnormality is normal” in terms of performance on a cognitive test battery.

These findings also highlight the vast heterogeneity in the prevalence of low scores, depending on the chosen cutpoint. They have clear implications for efforts to develop consensus standards for describing scores, such as those recently published by the American Academy of Clinical Neuropsychology (Guilmette et al., 2020). The authors of the consensus statement argued that labels to describe scores must be separated from interpretation of those scores. Thus, an “Exceptionally Low Score” (defined as scores <2nd percentile) may not be indicative of a cognitive impairment. Indeed, nearly a quarter of putatively normal patients in our sample demonstrated at least one score at or below the 2nd percentile, such that it could be a mistake to interpret one Exceptionally Low Score as indicative of dysfunction.

Our findings also suggest possible ways to revise the consensus criteria in the future, as the committee “recognized that our recommendations are not fixed in stone and … may require future modifications” (p. 15). For instance, the current results may indicate that certain score ranges are too wide. The criteria define the “Below Average” range of scores as the 2nd to 8th percentiles. Our data indicate that the base rate of low scores will differ markedly across this Below Average range. As can be seen in Table 3, the rate of one low score at or below the 9th percentile was more than double the rate of one low score at or below the 2nd percentile. Moreover, as shown in Table 4, having one low score at or below the 9th percentile suggests a nearly twofold increase in the probability for conversion to MCI when compared with having one low score at or below the 2nd percentile. Thus, a score at the 2nd percentile is likely qualitatively different from a score at the 8th percentile, such that it may not be advisable to label such scores the same.

Another important aspect of interpreting low cognitive test scores concerns the impact of demographic factors on performance. Notably, we replicated previous findings regarding the influence of demographics on the base rate of low scores; that is, the frequency of low scores in the UDS3NB decreased with decreasing age, increasing education, and increasing stringency of the chosen percentile cutpoint (Brooks et al., 2007, 2008, 2009; Holdnack et al., 2017). This latter finding held when demographically adjusted scores were used, which is also consistent with prior research (Schretlen et al., 2008).

What Do MBR Findings Tell Us About MCI?

These findings have critical implications for defining and diagnosing MCI. Many groups recommend a cutoff of 1 to 2 SDs below the normative mean for identifying MCI (Albert et al., 2011; APA, 2014; Litvan et al., 2012; Sachdev et al., 2014). Using this approach would identify 23.9 to 79.2% of cognitively normal older adults (as indexed by an independent measure) as having MCI. Even if the stricter cutoff of ≥2 scores ≤16th percentile proposed by Litvan et al. (2012) were used to diagnose MCI, 53.8% of the cognitively normal participants in the UDS sample would be diagnosed with MCI. These data support the contention that abnormal test scores are common even among healthy older adults and raise concern that overinterpreting low test scores may lead to a high rate of false positive (“accidental”) MCI diagnoses (Brooks, Iverson, & Holdnack, 2013; Brooks et al., 2007; Holdnack et al., 2017). Thus, regardless of the cutpoint used in clinical criteria to identify individuals with MCI, clinicians will need to be aware of the MBR of low scores to make accurate diagnostic decisions. These findings also illustrate the importance of differentiating a pattern of low scores that constitutes normal variability from patterns that indicate emerging pathology.

In that vein, our study also evaluated the extent to which patterns of low scores in individuals initially judged as cognitively normal or MCI predicted dementia classification at 2-year follow-up. Conversion to dementia was rare in our sample (5.30%), but occurred at a rate similar to that reported in prior work, with annual rates of conversion ranging from 1.6 to 4.9% in community samples (Mitchell & Shiri-Feshki, 2009). Given the low prevalence of conversion in the short term, a clinician’s best guess for an individual patient will typically be that the patient is unlikely to be at risk for decline. However, our findings add to the growing body of literature showing that clinical prediction can be improved by examining cognitive test data (see, e.g., Ewers et al., 2012; Summers & Saunders, 2012). In particular, we demonstrated that as individual performance falls below increasingly stringent cut-offs and the number of low scores expands, patients become increasingly likely to convert to dementia over time. Of note, we calculated posttest probabilities in a combined sample of clinically normal and MCI individuals, because some individuals judged clinically normal may nonetheless demonstrate low score patterns more consistent with MCI. Thus, posttest probabilities based on test data might be higher in more exclusive clinical samples, suggesting a need for further research.

The results have implications for our understanding of the MCI construct as well. MCI is typically diagnosed when a dementia prodrome is suspected (Dubois & Albert, 2004); thus, the argument can be made that diagnosis should be reserved for cases wherein suspicion for conversion to dementia is high. Our findings suggest that suspicion for impending (2-year) conversion to dementia tends to be low (at least in this sample), even in cases where objective cognitive performance is obviously poor. For instance, the posttest probability of conversion to dementia is only .33 even when an individual has four or more tests below the 2nd percentile cutpoint. The results may indicate a need to revise MCI criteria to be more restrictive if high confidence of a dementia prodrome is desired, though analyses of longer-term follow-up data are of course needed to draw firmer conclusions. Alternatively, scholars may prefer laxer standards and accept a high rate of false positive MCI diagnoses (i.e., diagnosing individuals with MCI who do not have a substantial risk of dementia) if their purpose is to identify a broader pool of participants for closer study.

MBR and Preclinical Cognitive Decline

This research also has implications for identifying individuals in pre-MCI stages, a key goal in diagnostic research and clinical trial design (Cummings, 2019; Cummings et al., 2019; Jack et al., 2018). Prior longitudinal research with initially cognitively normal participants has suggested a conversion rate to MCI of around 7% over a year (DeCarlo et al., 2016) and 16–28% over 4 years (Fernández-Blázquez, Ávila-Villanueva, Maestú, & Medina, 2016; Rizk-Jackson et al., 2013), which is roughly consistent with our finding of 10.70% conversion over an approximate 2-year follow-up period. Rates likely differ based on country of recruitment, age differences, and criteria used to define normality.

Findings suggest that cognitive variables can help identify those at risk for such a conversion, consistent with past longitudinal research (Lin et al., 2018). Indeed, similar to our findings regarding prediction of dementia, having more low scores at increasingly stringent cut-offs increased the probability of moving from normal to MCI over the 2-year follow-up period. These low score patterns may represent the “transitional cognitive declines” that are thought to be characteristic of the preclinical stage of the neurodegenerative process (Jack et al., 2018).

A very stringent MBR criterion was needed in our sample to identify putatively cognitively normal individuals at high risk for developing MCI. More specifically, examination of posttest probabilities suggests that having four or more low scores at or below the 2nd percentile yielded the greatest probability (.33) of developing MCI. Notably, this value is a substantial improvement over examining simple base rate data from our sample, which would predict a .107 probability of conversion. While that difference may not be large enough to influence clinical decision-making, MBR data could be extremely useful in situations in which even modest gains in predictive accuracy could lead to great benefit, such as in clinical trial selection. That is, a 22.3-percentage-point increase in the probability of conversion is extremely meaningful in pilot study scenarios in which even a few poorly selected participants could meaningfully alter the outcome. Moreover, prediction using MBR data could be improved by selecting from a sample of individuals with a higher prevalence of conversion (Smith et al., 2008), such as those presenting to a memory disorders clinic, individuals with identified biomarkers, or persons with subjective cognitive complaints. Thus, cognitive screening methods using MBR methodologies could be fruitfully combined with other clinical and diagnostic approaches to maximize efficiency in clinical trial selection.

It is worth noting here that the calibrated probabilities in this sample may not be applicable outside of the current data set and should be validated using external samples in future research (Bleeker et al., 2003). To support this effort, individuals can easily apply our MBR data in new samples using an existing online posttest probability calculator (Hozo & Djulbegovic, 1998).
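The posttest probability updating described above follows the standard odds form of Bayes' rule, which is what calculators like the one cited apply. The sketch below is illustrative: the sensitivity and specificity values are invented for demonstration and are not estimates from the study, although the pretest probability matches the sample's observed conversion rate.

```python
def posttest_probability(pretest_p, sensitivity, specificity):
    """Posttest probability after a positive test, via the likelihood ratio."""
    lr_pos = sensitivity / (1.0 - specificity)  # LR+ = sens / (1 - spec)
    pre_odds = pretest_p / (1.0 - pretest_p)    # probability -> odds
    post_odds = pre_odds * lr_pos               # odds form of Bayes' rule
    return post_odds / (1.0 + post_odds)        # odds -> probability

# Illustrative numbers only: with the sample's 10.7% pretest probability
# of conversion and an assumed marker with sens = .40 and spec = .90,
# the posttest probability rises to about .32.
print(round(posttest_probability(0.107, 0.40, 0.90), 2))
```

Because the pretest odds are a multiplier, the same low-score pattern yields a much higher posttest probability in enriched samples (e.g., memory clinic referrals), which is the point made above about selecting from higher-prevalence populations.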

Limitations and Future Directions

The current findings should be considered within the context of six important limitations. First, the UDS3NB dataset is a largely racially and ethnically homogenous sample, and the relatively high education levels in the UDS3NB do not generalize to estimates from the most recent U.S. census data (U.S. Census Bureau, 2017). Given well-established concerns about measuring the cognitive status of racial or ethnic minorities using predominantly White normative data (Pedraza & Mungas, 2008), caution is recommended in attempts to generalize these findings, particularly to non-White samples. Special care should be taken when applying cut-scores to populations that do not reflect the characteristics of the current sample, as there is substantial variability in the most appropriate cut-off points for cognitive tests across variables such as culture (O’Driscoll & Shaikh, 2017). Relatedly, the NACC database constitutes a nonrandom sample of patients presenting for studies at Alzheimer’s disease Research Centers and may not be representative of community-dwelling adults, though other normative systems are similarly limited (Heaton, Miller, Taylor, & Grant-Isibor, 2004). Thus, the current findings may be more applicable in research settings, and the field would certainly benefit from collection of randomly recruited, nationally representative samples.

Second, the MBRs identified in the current sample were based on adults ≥50 years old. The current study was primarily interested in identifying how MBRs predict conversion to MCI or dementia in older adults, and estimates of neurocognitive decline earlier than age 50 are exceedingly rare (Harvey, Skelton-Robinson, & Rossor, 2003; Kokmen, Beard, Offord, & Kurland, 1989; Newens et al., 1993). Thus, results may not overlap with the MBRs of low scores in younger populations with earlier neurocognitive decline and should be used cautiously in those populations.

Third, there may be alternative ways of examining the MBR of low scores. For instance, the current study did not evaluate the MBR of low intradomain cognitive test scores. While some recent conceptualizations of MCI have recommended that multiple low scores within the same domain can be used to accurately operationalize MCI (e.g., Jak et al., 2016), testing the MBR of low intradomain scores and how these values best predict conversion to MCI and dementia was outside of the scope of the current study. Future research would benefit from evaluating how MBRs of low intradomain scores in the UDS3NB can be leveraged to identify those at greatest risk for progression to MCI and dementia. Similarly, in keeping with prior research (Brooks et al., 2011; Brooks, Holdnack, & Iverson, 2016; Brooks & Iverson, 2010; Brooks, Iverson, Lanting, Horton, & Reynolds, 2012; Holdnack et al., 2017; Karr, Garcia-Barrera, Holdnack, & Iverson, 2017; Schretlen et al., 2008), we expressed the MBR of low scores in terms of the number of tests that were impaired. However, we acknowledge, as have a number of the above cited authors, that the number of low test scores observed will differ as a function of the number of tests administered. Thus, the MBR of low scores may be better expressed as the percentage of administered tests that fall in the impaired range. Subsequent studies may empirically evaluate whether this approach has advantages over the simple numeric MBR approach. Researchers may also fruitfully investigate the most appropriate number of tests to administer in a neuropsychological battery (and within a particular cognitive domain) to detect true effects (Litvan et al., 2011, 2012).
It is also important to note here that the base rate of a low score on one measure is likely to be positively correlated with the base rate of a low score on a similar measure (e.g., those who perform poorly on Trails A will also likely perform poorly on Trails B). Prior publications present methods for accounting for the influence of test interrelatedness on the base rates of low scores (Decker, Schneider, & Hale, 2012; Ingraham & Aiken, 1996), which could be used in subsequent research.
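The effect of test interrelatedness can be illustrated by comparing a binomial model, which assumes independent tests (the approach associated with Ingraham & Aiken, 1996), against simulated correlated scores. The correlation value and number of tests below are assumptions chosen for demonstration, not parameters estimated from the UDS3NB.

```python
import numpy as np
from scipy.stats import binom, norm

# Under independence, the count of low scores among m tests is binomial,
# so P(>= 1 low score) = 1 - P(0 low scores).
m, p_low = 12, 0.16                      # 12 tests, 16th-percentile cutoff (assumed)
p_ge1_binomial = 1 - binom.pmf(0, m, p_low)

# Simulate equally intercorrelated (r = .5, assumed) normal test scores.
rng = np.random.default_rng(1)
r = 0.5
cov = np.full((m, m), r) + (1 - r) * np.eye(m)
scores = rng.multivariate_normal(np.zeros(m), cov, size=100_000)
z_cut = norm.ppf(p_low)
p_ge1_correlated = float(((scores <= z_cut).sum(axis=1) >= 1).mean())

# Positive correlation concentrates low scores in the same people, so
# fewer people show at least one low score than the binomial model predicts.
print(p_ge1_binomial, p_ge1_correlated)
```

This is why base rates estimated directly from a normative sample, as in the present study, are preferable to independence-based approximations when tests are substantially correlated.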

Fourth, the current study utilized static cut-scores to evaluate how MBRs in cognitively healthy adults predict conversion to MCI and dementia. As mentioned in Litvan et al. (2012), a concern with using a rigid cut-score is that people with higher premorbid abilities would have to experience a larger decline to meet criteria. Notably, the current study utilized demographically adjusted MBRs when assessing risk for progression. Given extensive evidence that demographic variables are proxies for premorbid levels of intellectual functioning (e.g., Barona, Reynolds, & Chastain, 1984), presenting demographically adjusted MBRs partially accounts for variance in premorbid intellectual functioning. However, findings may be less accurate among patients whose level of education does not match their true baseline cognitive abilities (e.g., individuals with limited educational opportunities who have average to above average intellectual abilities).

Fifth, issues related to the shape of the distributions for the cognitive measures can influence interpretation of low scores. For instance, for nonnormally distributed measures, a mild reduction in score may be associated with a substantial decrease in the base rate of that score. Notably, prior research on the UDS3NB indicated that scores from the neuropsychological measures tended to be normally distributed, with the exception of the MINT and the Benson Figure Copy (Weintraub et al., 2018). Future research might examine alternative means of assessing low base rate scores for nonnormally distributed variables.

Finally, it should be noted that there are limitations to using the CDR to classify individuals as normal, MCI, or dementia. Indeed, there may have been some individuals with cognitive impairment on testing, who were classified as normal on the CDR. Of course, avoiding diagnosis based on cognitive test data was necessary in this case to avoid criterion contamination, given that we were interested in assessing the validity of test data for diagnosis. Nonetheless, it will be important to replicate these findings with other diagnostic methods.

Conclusions

While the current study should be interpreted within the context of these limitations, the findings build on existing research by examining how MBRs in cognitively healthy older adults predict subsequent conversion to MCI or dementia. These results impact clinical and research efforts that utilize the UDS3NB to identify individuals at risk for conversion from MCI to dementia and also have downstream effects on understanding the MCI construct more generally. Specifically, it appears that confidence in risk for conversion from MCI to dementia is highest when very stringent criteria are applied, such that revision of MCI definitions may need to account for this phenomenon to more accurately identify at-risk individuals. Findings also have implications for identifying individuals in preclinical stages of change before the development of MCI. Indeed, they imply that there are MBR patterns of low scores that may identify individuals in a transitional stage of cognitive decline that portends the development of MCI. These results will be useful when screening for clinical trials, and we encourage other researchers to apply our data in new samples using available online calculators (Hozo & Djulbegovic, 1998).

Key Points.

Question: What is the base rate of low scores within the Uniform Data Set Neuropsychological Battery 3.0 after accounting for demographic factors (i.e., the multivariate base rate [MBR]), and how can patterns of low scores predict subsequent cognitive decline? Findings: MBRs of low scores on the Uniform Data Set Neuropsychological Battery 3.0 range from 1.40 to 79.2% in a sample of putatively cognitively normal participants. Posttest probabilities of conversion to dementia and mild cognitive impairment based on MBR findings range from .06 to .33 and .12 to .64, respectively. Importance: Findings may lead to revision of mild cognitive impairment criteria and improvement of clinical trial selection procedures. Next Steps: Identifying how these findings generalize to ethnically diverse samples and how combinations of low scores within neuropsychological domains on the Uniform Data Set Neuropsychological Battery 3.0 can predict development of mild cognitive impairment and dementia.

Acknowledgments

The NACC database is funded by NIA/NIH Grant U01 AG016976. NACC (National Alzheimer’s Coordinating Center) data are contributed by the NIA-funded ADCs: P30 AG019610 (PI Eric Reiman), P30 AG013846 (PI Neil Kowall), P30 AG062428-01 (PI James Leverenz) P50 AG008702 (PI Scott Small), P50 AG025688 (PI Allan Levey), P50 AG047266 (PI Todd Golde), P30 AG010133 (PI Andrew Saykin), P50 AG005146 (PI Marilyn Albert), P30 AG062421-01 (PI Bradley Hyman), P30 AG062422-01 (PI Ronald Petersen), P50 AG005138 (PI Mary Sano), P30 AG008051 (PI Thomas Wisniewski), P30 AG013854 (PI Robert Vassar), P30 AG008017 (PI Jeffrey Kaye, MD), P30 AG010161 (PI David Bennett, MD), P50 AG047366 (PI Victor Henderson, MD, MS), P30 AG010129 (PI Charles DeCarli), P50 AG016573 (PI Frank LaFerla), P30 AG062429-01(PI James Brewer), P50 AG023501 (PI Bruce Miller), P30 AG035982 (PI Russell Swerdlow), P30 AG028383 (PI Linda Van Eldik), P30 AG053760 (PI Henry Paulson), P30 AG010124 (PI John Trojanowski), P50 AG005133 (PI Oscar Lopez), P50 AG005142 (PI Helena Chui), P30 AG012300 (PI Roger Rosenberg), P30 AG049638 (PI Suzanne Craft), P50 AG005136 (PI Thomas Grabowski), P30 AG062715-01 (PI Sanjay Asthana), P50 AG005681 (PI John Morris), P50 AG047270 (PI Stephen Strittmatter). This work was supported by an Alzheimer’s Association Research Fellowship (2019-AARF-641693, PI Andrew M. Kiselica) and the 2019-2020 National Academy of Neuropsychology Clinical Research Grant (PI Andrew M. Kiselica).

Contributor Information

Andrew M. Kiselica, Baylor Scott and White Health, Temple, Texas.

Troy A. Webber, Michael E. DeBakey VA Medical Center, Houston, Texas

Jared F. Benge, Baylor Scott and White Health, and Plummer Movement Disorders Center, Temple, Texas.

References

  1. Albert MS, DeKosky ST, Dickson D, Dubois B, Feldman HH, Fox NC, … Phelps CH (2011). The diagnosis of mild cognitive impairment due to Alzheimer’s disease: Recommendations from the National Institute on Aging-Alzheimer’s Association workgroups on diagnostic guidelines for Alzheimer’s disease. Alzheimer’s & Dementia, 7, 270–279. 10.1016/j.jalz.2011.03.008 [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. APA. (2014). Diagnostic and statistical manual of mental disorders. Washington, DC: Author. [Google Scholar]
  3. Atchison TB, Massman PJ, & Doody RS (2007). Baseline cognitive function predicts rate of decline in basic-care abilities of individuals with dementia of the Alzheimer’s type. Archives of Clinical Neuropsychology, 22, 99–107. 10.1016/j.acn.2006.11.006 [DOI] [PubMed] [Google Scholar]
  4. Baerresen KM, Miller KJ, Hanson ER, Miller JS, Dye RV, Hartman RE, … Small GW (2015). Neuropsychological tests for predicting cognitive decline in older adults. Neurodegenerative Disease Management, 5, 191–201. 10.2217/nmt.15.7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Barona A, Reynolds CR, & Chastain R (1984). A demographically based index of premorbid intelligence for the WAIS—R. Journal of Consulting and Clinical Psychology, 52, 885–887. 10.1037/0022-006X.52.5.885 [DOI] [Google Scholar]
  6. Beaumont JL, Havlik R, Cook KF, Hays RD, Wallner-Allen K, Korper SP, … Gershon R (2013). Norming plans for the NIH Toolbox. Neurology, 80(Suppl. 3), S87–S92. 10.1212/WNL.0b013e3182872e70 [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Besser L, Kukull W, Knopman DS, Chui H, Galasko D, Weintraub S, … the Neuropsychology Work Group, Directors, and Clinical Core leaders of the National Institute on Aging-funded U.S. Alzheimer’s Disease Centers. (2018). Version 3 of the National Alzheimer’s Coordinating Center’s Uniform Data Set. Alzheimer Disease and Associated Disorders, 32, 351–358. 10.1097/WAD.0000000000000279 [DOI] [PMC free article] [PubMed] [Google Scholar]
  8. Bleeker SE, Moll HA, Steyerberg EW, Donders AR, Derksen-Lubsen G, Grobbee DE, & Moons KG (2003). External validation is necessary in prediction research: A clinical example. Journal of Clinical Epidemiology, 56, 826–832. 10.1016/S0895-4356(03)00207-5 [DOI] [PubMed] [Google Scholar]
  9. Bondi MW, Edmonds EC, Jak AJ, Clark LR, Delano-Wood L, McDonald CR, … Salmon DP (2014). Neuropsychological criteria for mild cognitive impairment improves diagnostic precision, biomarker associations, and progression rates. Journal of Alzheimer’s Disease, 42, 275–289. 10.3233/JAD-140276 [DOI] [PMC free article] [PubMed] [Google Scholar]
  10. Brooks BL, Holdnack JA, & Iverson GL (2011). Advanced clinical interpretation of the WAIS-IV and WMS-IV: Prevalence of low scores varies by level of intelligence and years of education. Assessment, 18, 156–167. 10.1177/1073191110385316 [DOI] [PubMed] [Google Scholar]
  11. Brooks BL, Holdnack JA, & Iverson GL (2016). To change is human: “Abnormal” reliable change memory scores are common in healthy adults and older adults. Archives of Clinical Neuropsychology, 31, 1026–1036. 10.1093/arclin/acw079 [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Brooks BL, & Iverson GL (2010). Comparing actual to estimated base rates of “abnormal” scores on neuropsychological test batteries: Implications for interpretation. Archives of Clinical Neuropsychology, 25, 14–21. 10.1093/arclin/acp100 [DOI] [PubMed] [Google Scholar]
  13. Brooks BL, Iverson GL, & Holdnack JA (2013). Understanding and using multivariate base rates with the WAIS-IV/WMS-IV. San Diego, CA: Elsevier Academic Press Inc; 10.1016/B978-0-12-386934-0.00002-X [DOI] [Google Scholar]
  14. Brooks BL, Iverson GL, Holdnack JA, & Feldman HH (2008). Potential for misclassification of mild cognitive impairment: A study of memory scores on the Wechsler Memory Scale-III in healthy older adults. Journal of the International Neuropsychological Society, 14, 463–478. 10.1017/S1355617708080521 [DOI] [PubMed] [Google Scholar]
  15. Brooks BL, Iverson GL, Lanting SC, Horton AM, & Reynolds CR (2012). Improving test interpretation for detecting executive dys-function in adults and older adults: Prevalence of low scores on the test of verbal conceptualization and fluency. Applied Neuropsychology Adult, 19, 61–70. 10.1080/09084282.2012.651951 [DOI] [PubMed] [Google Scholar]
  16. Brooks BL, Iverson GL, & White T (2007). Substantial risk of “Accidental MCI” in healthy older adults: Base rates of low memory scores in neuropsychological assessment. Journal of the International Neuropsychological Society, 13, 490–500. 10.1017/S1355617707070531 [DOI] [PubMed] [Google Scholar]
  17. Brooks BL, Iverson GL, & White T (2009). Advanced interpretation of the Neuropsychological Assessment Battery with older adults: Base rate analyses, discrepancy scores, and interpreting change. Archives of Clinical Neuropsychology, 24, 647–657. 10.1093/arclin/acp061 [DOI] [PubMed] [Google Scholar]
  18. Craft S, Newcomer J, Kanne S, Dagogo-Jack S, Cryer P, Sheline Y, … Alderson A (1996). Memory improvement following induced hyperinsulinemia in Alzheimer’s disease. Neurobiology of Aging, 17, 123–130. 10.1016/0197-4580(95)02002-0 [DOI] [PubMed] [Google Scholar]
  19. Cummings J (2019). The National Institute on Aging-Alzheimer’s Association framework on Alzheimer’s disease: Application to clinical trials. Alzheimer’s & Dementia, 15, 172–178. 10.1016/j.jalz.2018.05.006 [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Cummings J, Lee G, Ritter A, Sabbagh M, & Zhong K (2019). Alzheimer’s disease drug development pipeline: 2019. Alzheimer’s & Dementia: Translational Research & Clinical Interventions, 5, 272–293. 10.1016/j.trci.2019.05.008 [DOI] [PMC free article] [PubMed] [Google Scholar]
  21. DeCarlo CA, MacDonald SWS, Vergote D, Jhamandas J, Westaway D, & Dixon RA (2016). Vascular Health and Genetic Risk Affect Mild Cognitive Impairment Status and 4-Year Stability: Evidence From the Victoria Longitudinal Study. The Journals of Gerontology, Series B: Psychological Sciences and Social Sciences, 71, 1004–1014. 10.1093/geronb/gbv043 [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Decker SL, Schneider WJ, & Hale JB (2012). Estimating base rates of impairment in neuropsychological test batteries: A comparison of quantitative models. Archives of Clinical Neuropsychology, 27, 69–84. 10.1093/arclin/acr088 [DOI] [PubMed] [Google Scholar]
  23. Devora PV, Beevers S, Kiselica AM, & Benge JF (2019). Normative data for derived measures and discrepancy scores for the Uniform Data Set 3.0 Neuropsychological Battery. Archives of Clinical Neuropsychology, 35, 75–89. 10.1093/arclin/acz025 [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. Dubois B, & Albert ML (2004). Amnestic MCI or prodromal Alzheimer’s disease? The Lancet Neurology, 3, 246–248. 10.1016/S1474-4422(04)00710-0 [DOI] [PubMed] [Google Scholar]
  25. Ewers M, Walsh C, Trojanowski JQ, Shaw LM, Petersen RC, Jack CR Jr., … Scheltens P (2012). Prediction of conversion from mild cognitive impairment to Alzheimer’s disease dementia based upon biomarkers and neuropsychological test performance. Neurobiology of Aging, 33, 1203–1214. [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. Fernández-Blázquez MA, Ávila-Villanueva M, Maestú F, & Medina M (2016). Specific features of subjective cognitive decline predict faster conversion to mild cognitive impairment. Journal of Alzheimer’s Disease, 52, 271–281. 10.3233/JAD-150956 [DOI] [PubMed] [Google Scholar]
  27. Fillenbaum GG, Peterson B, & Morris JC (1996). Estimating the validity of the Clinical Dementia Rating scale: The CERAD experience. Aging Clinical and Experimental Research, 8, 379–385. 10.1007/BF03339599 [DOI] [PubMed] [Google Scholar]
  28. Gollan TH, Weissberger GH, Runnqvist E, Montoya RI, & Cera CM (2012). Self-ratings of spoken language dominance: A Multilingual Naming Test (MINT) and preliminary norms for young and aging Spanish–English bilinguals. Bilingualism: Language and Cognition, 15, 594–615. 10.1017/S1366728911000332 [DOI] [PMC free article] [PubMed] [Google Scholar]
  29. Guilmette TJ, Sweet JJ, Hebben N, Koltai D, Mahone EM, Spiegler BJ, … Participants C (2020). American Academy of Clinical Neuropsychology consensus conference statement on uniform labeling of performance test scores. The Clinical Neuropsychologist, 34, 437–453. 10.1080/13854046.2020.1722244 [DOI] [PubMed] [Google Scholar]
  30. Harvey RJ, Skelton-Robinson M, & Rossor MN (2003). The prevalence and causes of dementia in people under the age of 65 years. Journal of Neurology, Neurosurgery and Psychiatry, 74, 1206–1209. 10.1136/jnnp.74.9.1206 [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. Heaton R, Miller SW, Taylor MJ, & Grant-Isibor I (2004). Revised comprehensive norms for an expanded Halstead-Reitan Battery: Demographically adjusted neuropsychological norms for African American and Caucasian adults. Lutz, FL: Psychological Assessment Resources, Inc. [Google Scholar]
  32. Holdnack JA, Tulsky DS, Brooks BL, Slotkin J, Gershon R, Heinemann AW, & Iverson GL (2017). Interpreting patterns of low scores on the NIH Toolbox cognition battery. Archives of Clinical Neuropsychology, 32, 574–584. 10.1093/arclin/acx032 [DOI] [PMC free article] [PubMed] [Google Scholar]
  33. Hozo I, & Djulbegovic B (1998). Calculating confidence intervals for threshold and post-test probabilities. M. D. Computing, 15, 110–115. [PubMed] [Google Scholar]
  34. Ingraham LJ, & Aiken CB (1996). An empirical approach to determining criteria for abnormality in test batteries with multiple measures. Neuropsychology, 10, 120–124. 10.1037/0894-4105.10.1.120 [DOI] [Google Scholar]
  35. Ivanova I, Salmon DP, & Gollan TH (2013). The multilingual naming test in Alzheimer’s disease: Clues to the origin of naming impairments. Journal of the International Neuropsychological Society, 19, 272–283. 10.1017/S1355617712001282 [DOI] [PMC free article] [PubMed] [Google Scholar]
  36. Jack CR Jr., Bennett DA, Blennow K, Carrillo MC, Dunn B, Haeberlein SB, … the Contributors. (2018). NIA-AA Research Framework: Toward a biological definition of Alzheimer’s disease. Alzheimer’s & Dementia, 14, 535–562. 10.1016/j.jalz.2018.02.018 [DOI] [PMC free article] [PubMed] [Google Scholar]
  37. Jak AJ, Preis SR, Beiser AS, Seshadri S, Wolf PA, Bondi MW, & Au R (2016). Neuropsychological criteria for mild cognitive impairment and dementia risk in the Framingham Heart Study. Journal of the International Neuropsychological Society, 22, 937–943. 10.1017/S1355617716000199 [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. Karr JE, Garcia-Barrera MA, Holdnack JA, & Iverson GL (2017). Using multivariate base rates to interpret low scores on an abbreviated battery of the Delis-Kaplan Executive Function System. Archives of Clinical Neuropsychology, 32, 297–305. 10.1093/arclin/acw105 [DOI] [PubMed] [Google Scholar]
  39. Kiselica AM, Webber TA, & Benge JF (2020). The Uniform Data Set 3.0 Neuropsychological Battery: Factor structure, invariance testing, and demographically-adjusted factor score calculation. Journal of the International Neuropsychological Society. Advance online publication. 10.1017/S135561772000003X [DOI] [PMC free article] [PubMed] [Google Scholar]
  40. Kokmen E, Beard CM, Offord KP, & Kurland LT (1989). Prevalence of medically diagnosed dementia in a defined United States population: Rochester, MN, January 1, 1975. Neurology, 39, 773–776. 10.1212/WNL.39.6.773 [DOI] [PubMed] [Google Scholar]
  41. Lin M, Gong P, Yang T, Ye J, Albin RL, & Dodge HH (2018). Big data analytical approaches to the NACC Dataset: Aiding preclinical trial enrichment. Alzheimer Disease and Associated Disorders, 32, 18–27. 10.1097/WAD.0000000000000228 [DOI] [PMC free article] [PubMed] [Google Scholar]
  42. Litvan I, Aarsland D, Adler CH, Goldman JG, Kulisevsky J, Mollenhauer B, … Weintraub D (2011). MDS Task Force on mild cognitive impairment in Parkinson’s disease: Critical review of PD-MCI. Movement Disorders, 26, 1814–1824. 10.1002/mds.23823 [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Litvan I, Goldman JG, Tröster AI, Schmand BA, Weintraub D, Petersen RC, … Emre M (2012). Diagnostic criteria for mild cognitive impairment in Parkinson’s disease: Movement Disorder Society Task Force guidelines. Movement Disorders, 27, 349–356. 10.1002/mds.24893
  44. Mitchell AJ, & Shiri-Feshki M (2009). Rate of progression of mild cognitive impairment to dementia—Meta-analysis of 41 robust inception cohort studies. Acta Psychiatrica Scandinavica, 119, 252–265. 10.1111/j.1600-0447.2008.01326.x
  45. Morris JC (1993). The Clinical Dementia Rating (CDR): Current version and scoring rules. Neurology, 43, 2412–2414. 10.1212/WNL.43.11.2412-a
  46. Morris JC (1997). Clinical dementia rating: A reliable and valid diagnostic and staging measure for dementia of the Alzheimer type. International Psychogeriatrics, 9(Suppl. 1), 173–176. 10.1017/S1041610297004870
  47. Morris JC, Blennow K, Froelich L, Nordberg A, Soininen H, Waldemar G, … Dubois B (2014). Harmonized diagnostic criteria for Alzheimer’s disease: Recommendations. Journal of Internal Medicine, 275, 204–213. 10.1111/joim.12199
  48. Newens AJ, Forster DP, Kay DW, Kirkup W, Bates D, & Edwardson J (1993). Clinically diagnosed presenile dementia of the Alzheimer type in the Northern Health Region: Ascertainment, prevalence, incidence and survival. Psychological Medicine, 23, 631–644. 10.1017/S0033291700025411
  49. O’Driscoll C, & Shaikh M (2017). Cross-cultural applicability of the Montreal Cognitive Assessment (MoCA): A systematic review. Journal of Alzheimer’s Disease, 58, 789–801. 10.3233/JAD-161042
  50. Oltra-Cucarella J, Sánchez-SanSegundo M, Lipnicki DM, Sachdev PS, Crawford JD, Pérez-Vicente JA, … the Alzheimer’s Disease Neuroimaging Initiative. (2018). Using base rate of low scores to identify progression from amnestic mild cognitive impairment to Alzheimer’s disease. Journal of the American Geriatrics Society, 66, 1360–1366. 10.1111/jgs.15412
  51. Papp KV, Buckley R, Mormino E, Maruff P, Villemagne VL, Masters CL, … Amariglio RE (2019). Clinical meaningfulness of subtle cognitive decline on longitudinal testing in preclinical AD. Alzheimer’s & Dementia, 16, 552–560.
  52. Partington JE, & Leiter RG (1949). Partington Pathways Test. Psychological Service Center Journal, 1, 11–20.
  53. Pedraza O, & Mungas D (2008). Measurement in cross-cultural neuropsychology. Neuropsychology Review, 18, 184–193. 10.1007/s11065-008-9067-9
  54. Petersen RC (2004). Mild cognitive impairment as a diagnostic entity. Journal of Internal Medicine, 256, 183–194. 10.1111/j.1365-2796.2004.01388.x
  55. Petersen RC, Doody R, Kurz A, Mohs RC, Morris JC, Rabins PV, … Winblad B (2001). Current concepts in mild cognitive impairment. Archives of Neurology, 58, 1985–1992. 10.1001/archneur.58.12.1985
  56. Petersen RC, Lopez O, Armstrong MJ, Getchius TSD, Ganguli M, Gloss D, … Rae-Grant A (2018). Practice guideline update summary: Mild cognitive impairment: Report of the Guideline Development, Dissemination, and Implementation Subcommittee of the American Academy of Neurology. Neurology, 90, 126–135. 10.1212/WNL.0000000000004826
  57. Possin KL, Laluz VR, Alcantar OZ, Miller BL, & Kramer JH (2011). Distinct neuroanatomical substrates and cognitive mechanisms of figure copy performance in Alzheimer’s disease and behavioral variant frontotemporal dementia. Neuropsychologia, 49, 43–48. 10.1016/j.neuropsychologia.2010.10.026
  58. Rizk-Jackson A, Insel P, Petersen R, Aisen P, Jack C, & Weiner M (2013). Early indications of future cognitive decline: Stable versus declining controls. PLoS ONE, 8, e74062. 10.1371/journal.pone.0074062
  59. Sachdev P, Kalaria R, O’Brien J, Skoog I, Alladi S, Black SE, … the International Society for Vascular Behavioral and Cognitive Disorders. (2014). Diagnostic criteria for vascular cognitive disorders: A VASCOG statement. Alzheimer Disease and Associated Disorders, 28, 206–218. 10.1097/WAD.0000000000000034
  60. Schretlen D, Testa S, & Pearlson G (2010). Calibrated neuropsychological normative system professional manual. Lutz, FL: Psychological Assessment Resources.
  61. Schretlen DJ, Testa SM, Winicki JM, Pearlson GD, & Gordon B (2008). Frequency and bases of abnormal performance by healthy adults on neuropsychological testing. Journal of the International Neuropsychological Society, 14, 436–445. 10.1017/S1355617708080387
  62. Shirk SD, Mitchell MB, Shaughnessy LW, Sherman JC, Locascio JJ, Weintraub S, & Atri A (2011). A web-based normative calculator for the uniform data set (UDS) neuropsychological test battery. Alzheimer’s Research & Therapy, 3, 32. 10.1186/alzrt94
  63. Smith GE, Ivnik RJ, & Lucas J (2008). Assessment techniques: Tests, test batteries, norms, and methodological approaches. In Morgan JE & Ricker JH (Eds.), Studies on neuropsychology, neurology and cognition. Textbook of clinical neuropsychology (pp. 38–57). New York, NY: Taylor & Francis.
  64. Stern RA, & White T (2003). Neuropsychological Assessment Battery (NAB): Administration, scoring, and interpretation manual. Lutz, FL: Psychological Assessment Resources.
  65. Summers MJ, & Saunders NL (2012). Neuropsychological measures predict decline to Alzheimer’s dementia from mild cognitive impairment. Neuropsychology, 26, 498–508. 10.1037/a0028576
  66. Tabert MH, Manly JJ, Liu X, Pelton GH, Rosenblum S, Jacobs M, … Devanand DP (2006). Neuropsychological prediction of conversion to Alzheimer disease in patients with mild cognitive impairment. Archives of General Psychiatry, 63, 916–924. 10.1001/archpsyc.63.8.916
  67. Testa SM, & Schretlen DJ (2006). Diagnostic utility of regression-based norms in schizophrenia. The Clinical Neuropsychologist, 20, 206.
  68. U.S. Census Bureau. (2017). Educational attainment in the United States: 2017. Retrieved from https://www.census.gov/data/tables/2017/demo/education-attainment/cps-detailed-tables.html
  69. Wechsler D (1997). Wechsler Memory Scale (3rd ed.). San Antonio, TX: The Psychological Corporation.
  70. Wechsler D (2008). Wechsler Adult Intelligence Scale–Fourth Edition (WAIS–IV). San Antonio, TX: The Psychological Corporation.
  71. Wechsler D, Holdnack JA, & Drozdick LW (2009). Wechsler Memory Scale–Fourth Edition: Technical and interpretive manual. San Antonio, TX: Pearson.
  72. Weintraub S, Besser L, Dodge HH, Teylan M, Ferris S, Goldstein FC, … Morris JC (2018). Version 3 of the Alzheimer Disease Centers’ Neuropsychological Test Battery in the Uniform Data Set (UDS). Alzheimer Disease and Associated Disorders, 32, 10–17. 10.1097/WAD.0000000000000223
  73. Weintraub S, Salmon D, Mercaldo N, Ferris S, Graff-Radford NR, Chui H, … Morris JC (2009). The Alzheimer’s disease centers’ uniform data set (UDS): The neuropsychologic test battery. Alzheimer Disease and Associated Disorders, 23, 91–101. 10.1097/WAD.0b013e318191c7dd
  74. White T, & Stern RA (2003). Neuropsychological assessment battery psychometric and technical manual. Lutz, FL: Psychological Assessment Resources.
  75. Yanhong O, Chandra M, & Venkatesh D (2013). Mild cognitive impairment in adult: A neuropsychological review. Annals of Indian Academy of Neurology, 16, 310–318. 10.4103/0972-2327.116907
