Author manuscript; available in PMC: 2006 Sep 5.
Published in final edited form as: J Neuropsychiatry Clin Neurosci. 2005;17(1):98–105. doi: 10.1176/appi.neuropsych.17.1.98

Self-Administered Screening for Mild Cognitive Impairment: Initial Validation of a Computerized Test Battery

Jane B Tornatore, Emory Hill, Jo Anne Laboff, Mary E McGann
PMCID: PMC1559991  NIHMSID: NIHMS2312  PMID: 15746489

Abstract

The CANS-MCI, a computer-administered, scored, and interpreted touch screen battery, was evaluated for its ability to screen for mild cognitive impairment. A total of 310 community-dwelling elders enrolled in an NIA-funded study. One-month test-retest reliability correlations were all significant (p<.05 to p<.001). Concurrent validity correlations were all significant (p<.001). A high level of diagnostic validity was attained relative to the WMS-R LMS-II test (p<.001). Confirmatory factor analysis supported a three-factor model, indicating that the tests measure the intended cognitive dimensions of Memory, Language/Spatial Fluency, and Executive Function/Mental Control. Goodness of fit indicators were strong (Bentler Comparative Fit Index=.99; Root Mean Square Error of Approximation=.055). Initial validation analyses indicate that the CANS-MCI shows promise of being a reliable, valid screening tool for determining whether more intensive testing for early cognitive impairment is warranted.

Keywords: Screening, Mild Cognitive Impairment, Computer, Neuropsychology, Tests


Since most new research and treatments for dementia focus upon slowing the progression of Alzheimer’s Disease (AD),1,2 it is critical to identify the need for intensive diagnostic evaluation so that early treatment can delay AD progression.3,4 People with Mild Cognitive Impairment (MCI), characterized by marked memory impairment without disorientation, confusion, or abnormal general cognitive functioning, appear to develop AD at a rate of 10%–15% a year.5–7 Research concerning MCI, both as a distinct diagnostic entity and as a precursor to AD,6–11 suggests that instruments focused upon MCI measurement would provide useful screening information for decisions concerning full diagnostic evaluations for AD. Brief or automated neuropsychological tests may be the preliminary step best suited to determining the need for evaluations, which require costly neuropsychological, biochemical, or neuroimaging techniques.12

While memory deficits have been found to be the most reliable single predictor,5,10,13–15 studies indicate that tests sampling different cognitive domains, when combined, significantly enhance the predictive validity of a test battery because of variations in the initial cognitive deficits associated with AD.15–19 Current methods of detection are costly and often deferred until later in the disease process, when interventions to delay AD are likely to be less effective. Therefore, an effective screening device for MCI would be cost efficient and incorporate tests that assess multiple cognitive domains. In this article, we present a reliability and validity study of a touch screen test battery that accomplishes these goals and is administered, scored, and interpreted by computer: the Computer-Administered Neuropsychological Screen for Mild Cognitive Impairment (CANS-MCI).

METHODS

Subjects

A community sample of 310 elderly people was recruited through senior centers, American Legion halls, and retirement homes in four counties of Washington State. Exclusionary criteria were non-English language, significant hand tremor, inability to sustain a seated position for 45 minutes, recent surgery, cognitive side effects of drugs, indications of recent alcohol abuse, or inadequacies in visual acuity, reading ability, hearing, or dominant hand agility. The subjects were predominantly Caucasian (86%) and female (65%), and most had at least some college education (76%). Subject age ranged from 51 to 93 years, with the majority between 60 and 80 years of age (63%).

Neuropsychological Measures

The CANS-MCI tests were based upon the cognitive domains found to be predictive of AD in previous neuropsychological research: visuospatial ability and spatial fluency;4,15,17,20–22 executive mental control;1,23,24 immediate and delayed recall;4,5,13–15,17,18,20–27 and language fluency4,5,17,18,20–22,25,27 (Table 1). The usability of the CANS-MCI has been previously evaluated.28 It can be fully self-administered, regardless of computer experience, even by elderly people with MCI. It does not cause significant anxiety-based cognitive interference during testing. After a researcher enters a subject ID and adjusts the volume, the tests, including any assistance needed, are administered by a computer with external speakers and a touch screen.

TABLE 1.

Cognitive Domains of Mild Cognitive Impairments that Predict Alzheimer’s Disease

Predictive Domain CANS-MCI Tests
Executive Functions:
   Visuospatial Ability & Spatial Fluency General Reaction Time; Design Matching; Word-to-Picture Matching; Clock
   Mental Control Stroop
Immediate Recall Free & Guided Recognition I
Delayed Recall Free & Guided Recognition II
Language Fluency Picture Naming

The CANS-MCI was developed to include tests designed to measure executive function, language fluency, and memory (Table 1). The executive function tests sample two cognitive abilities: mental control and spatial abilities. Mental control is measured with a Stroop test (on which the user matches the ink color of printed color names rather than the color each word names). Spatial abilities are tested with a general reaction time test with minimal cognitive complexity (on which the user touches ascending numbers presented on a jumbled display); design-letter matching; word-to-picture matching; and a clock hand placement test (touching the hour and minute positions for a series of digital times). Memory for 20 object names is assessed with an immediate and a delayed free and guided recognition test. Language fluency is tested with a picture naming test (on which pictures are each presented with four 2-letter word beginnings).

Unless specifically described as latency measures, the CANS-MCI and conventional neuropsychological tests in this study measure response accuracy. All latency and accuracy measures on the CANS-MCI are scalable scores.

The following conventional neuropsychological tests were used to assess the validity of the CANS-MCI tests: Wechsler Memory Scale-Revised (WMS-R) Logical Memory Immediate and Delayed (LMS-I and LMS-II);29 Mattis Dementia Rating Scale (DRS);30 and Wechsler Adult Intelligence Scale (WAIS) Digit Symbol test.31 LMS-II scores were used to classify participants as having normal cognitive functioning or MCI. The LMS-II has previously been found to differentiate normal from impaired cognitive functioning.4,5,17

Procedures

Each testing session lasted approximately one hour. The progression of tests was designed to minimize inter-test interference; the order was Digit Symbol, DRS, LMS-I, CANS-MCI, LMS-II. The CANS-MCI tests lasted approximately one-half hour. During the first session, testing began after the procedure was explained and written informed consent was obtained. Subjects were tested at baseline (Time 1), one month later (Time 2), and six months later (Time 3). This study was approved by the University of Washington’s Human Subjects Committee and the Western Institutional Review Board.

Data Analyses

Coefficient alpha reliabilities were computed to analyze the internal consistency of the scale items. Pearson correlations and paired-sample t-tests were used to evaluate the test-retest reliability of the CANS-MCI tests. Concurrent validity was evaluated by computing Pearson correlations between the CANS-MCI tests and scores on conventional neuropsychological tests administered during the same test sessions. To assess diagnostic validity, independent-sample t-tests were used to analyze differences between subjects at or below the 10th percentile of cognitive functioning and subjects above the 10th percentile, based on their LMS-II scores. We conducted exploratory and confirmatory factor analyses on different subsets of data to establish whether our tests measure the expected cognitive domains.
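For readers who want to see these statistics concretely, the following minimal sketch (in Python with NumPy and SciPy, which were not the tools used in the study; all arrays are hypothetical placeholders) shows how a test-retest correlation, a paired-sample t-test, and a concurrent-validity correlation of the kind described above can be computed.

```python
# Illustrative sketch only: hypothetical data standing in for CANS-MCI scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical scores for one CANS-MCI test at two sessions one month apart.
time1 = rng.normal(30, 5, size=300)
time2 = time1 + rng.normal(1.0, 2.0, size=300)  # small practice effect

# Test-retest reliability: Pearson correlation between the two sessions.
r_retest, p_retest = stats.pearsonr(time1, time2)

# Practice effect: paired-sample t-test on the same two sessions.
t_paired, p_paired = stats.ttest_rel(time1, time2)

# Concurrent validity: correlation with a conventional test score obtained
# in the same session (here an invented LMS-like score).
conventional = 0.6 * time1 + rng.normal(0, 4, size=300)
r_valid, p_valid = stats.pearsonr(time1, conventional)

print(f"test-retest r={r_retest:.2f} (p={p_retest:.3g}); "
      f"practice-effect t={t_paired:.2f} (p={p_paired:.3g}); "
      f"concurrent r={r_valid:.2f} (p={p_valid:.3g})")
```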

RESULTS

Reliability

Internal consistency was examined (Table 2) using Cronbach’s coefficient alpha reliabilities to determine the degree to which individual test items correlate with one another. Only two test scores (Free Recognition II and Guided Recognition I Percent Correct) did not meet the predetermined standard for internal consistency (alpha >.70). When the Free Recognition II score was combined with the Free Recognition I score, the combination had very high reliability (alpha=.93).
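As a concrete illustration of the internal-consistency statistic reported in Table 2, the minimal Python function below computes Cronbach's coefficient alpha from a subjects-by-items score matrix; the example matrix is simulated for illustration and is not study data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (subjects x items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Simulated example with six correlated "items" (e.g., six recognition trials).
rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 1))
scores = latent + rng.normal(scale=0.5, size=(200, 6))
print(f"alpha = {cronbach_alpha(scores):.2f}")
```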

TABLE 2.

Internal Consistency (Alpha Coefficient Reliability) of CANS-MCI Tests

CANS-MCI Test Scores No. of Items Coefficient Alpha
General Reaction Time 10 .828
Word-to-Picture Matching (Latency) 14 .865
Design Matching 136 NA*
Stroop (Discordant Latency) 48 .963
Clock Hand Placement 30 .902
Free Recognition I (5 Trials of 20 Items) 5 .939
Guided Recognition I Errors 5 .927
Guided Recognition I (Percent Correct) 5 .588
Free Recognition II (1 Trial) 20 .637
Free Recognition I & II Combined (6 Trials) 6 .935
Picture Naming 42 .766
Picture Naming (Latency) 42 NA*
* Since participants did not all answer the same number of items, reliability analyses could not be performed.

† If an item was incorrect, the subject received a guided recall test on that item.

Correlations measuring the stability of the CANS-MCI over a one-month period ranged from .61 to .85 (Table 3). The Free Recognition I and II tests had coefficient alphas below .70. When these tests were combined to form a more global memory measure, the alpha was acceptable (alpha=.74); therefore, they were included in further analyses. Six-month test-retest reliabilities ranged from .62 to .89; 9 of the 12 scores demonstrated higher 6-month reliabilities than 1-month reliabilities. One of the two measures of guidance effectiveness (Guided Recognition I Percent Correct) had low internal consistency and low test-retest coefficients (1-month and 6-month) and therefore was not included in further analyses. Separate analyses indicated that education level had little relationship to either the internal consistency or the test-retest reliability of the measures. All Time 2 test scores were significantly better than Time 1 scores, but scores remained stable from Time 2 to Time 3. When subjects were already familiar with the testing procedures, test scores on the CANS-MCI demonstrated markedly higher stability over 6 months than they did over the initial one-month period. This was confirmed by t-test analyses showing significant improvements in all scores from Time 1 to Time 2 but no significant differences from Time 2 to Time 3.

Table 3.

Test-Retest Reliabilities and t-Tests of the CANS-MCI at Baseline, 1 Month, and 6 Months

CANS-MCI Test  Time 1 (Baseline) Mean (SD)  Time 2 (1-Month) Mean (SD)  Time 3 (6-Month) Mean (SD)  Coefficient Alpha 1–2  t 1–2  Coefficient Alpha 2–3  t 2–3
General Reaction Time .77 (.21) .73 (.17) .74 (.16) .702 3.6 .620 −.92
Word-to-Picture Matching (Latency) 2.0 (.55) 1.9 (.50) 1.9 (.54) .825 6.1 .704 −1.6
Design Matching 39.4 (9.7) 41.4 (8.9) 41.7 (8.6) .820 −6.5 .839 −.52
Stroop (Discordant Latency) 1.67 (.49) 1.61 (.52) 1.62 (.51) .795 4.0 .801 −.06
Clock Hand Placement 30.7 (9.7) 32.9 (8.9) 33.4 (9.3) .792 −5.5 .759 −1.0
Free Recognition I 17.7 (2.1) 18.1 (1.9) 18.1 (2.0) .681 −3.7 .760 −.79
Guided Recognition I (Errors) 2.9 (5.5) 2.0 (4.6) 2.3 (4.9) .766 3.1 .895 .16
Guided Recognition I (Percent Correct) .86 (.16) .90 (.14) .90 (.14) .383 −2.6* .574 −.96
Free Recognition II 17.6 (2.3) 17.8 (2.3) 17.8 (2.3) .609 −2.3* .677 .16
Free Recognition I & II 35.4 (4.0) 35.9 (3.9) 35.9 (4.0) .738 −3.7 .826 −.32
Picture Naming 31.7 (4.8) 32.0 (4.9) 32.3 (4.8) .788 −3.1 .806 −.83
Picture Naming (Latency) 4.4 (1.1) 4.1 (1.1) 4.1 (1.1) .822 7.3 .834 .16
* p<.05; † p<.01; ‡ p<.001

Validity

The concurrent validity of the CANS-MCI was evaluated by examining the correlations of the CANS-MCI component tests with the scores on the Digit Symbol, LMS-I and II, and the DRS. The correlations between each CANS-MCI test and its domain-specific conventional neuropsychological tests are listed in Table 4. The coefficients ranged from .44 to .64, all highly significant (p<.001). For 10 of the 11 CANS-MCI tests, the correlation was strongest with the conventional neuropsychological test designed to measure the comparable cognitive domain. Immediate memory on the CANS-MCI (Free Recognition I) was more highly correlated with the delayed memory score on the LMS-II (r=.560; data not shown) than with the hypothesized immediate memory scale on the DRS (r=.506).

TABLE 4.

Correlations of Conventional Tests with CANS-MCI Tests*

Cognitive Domain (CANS-MCI) CANS-MCI Score Standardized Test Score Correlation Coefficient P
Executive Functions General Reaction Time Digit Symbol −.530 <.001
Design Matching Digit Symbol .644 <.001
Word-To-Picture Matching (Latency) Digit Symbol −.610 <.001
Clock Hand Placement Digit Symbol .502 <.001
Stroop (Discordant Latency) Digit Symbol −.589 <.001
Immediate Recall Free Recognition I DRS Memory .506 <.001
WMS LMS-I .563 <.001
Guided Recognition I Errors DRS Memory −.567 <.001
WMS LMS-I −.493 <.001
Delayed Recall Free Recognition II DRS Memory .526 <.001
WMS LMS-II .458 <.001
Composite Memory Score Free Recognition I & II DRS Memory .553 <.001
WMS LMS-I .519 <.001
WMS LMS-II .538 <.001
Spatial Fluency Clock Hand Placement DRS Initiation .437 <.001
Language Fluency Picture Naming DRS Initiation .531 <.001
Picture Naming (Latency) DRS Initiation −.479 <.001
* All tests administered within the same testing sessions.

† Low latency scores mean faster response time.

Diagnostic Validation

T-tests were performed to analyze differences between memory-impaired and unimpaired groups. Though the Free Recognition I and II scales individually have moderate reliabilities, they were analyzed separately in the t-tests because we hypothesized that they might measure different aspects of a cognitive function. Highly significant differences were observed between the memory-intact group and the memory-impaired group on all CANS-MCI tests. Significant differences in age and education between groups were also observed (Table 5).
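As a rough illustration of this group comparison (not the authors' code; all values are simulated), the sketch below splits a hypothetical sample at the 10th percentile of LMS-II scores, as in Table 5, and runs an independent-samples t-test on one CANS-MCI score.

```python
# Illustrative sketch only: simulated scores, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Hypothetical LMS-II delayed-recall scores and one CANS-MCI score.
lms2 = rng.normal(20, 6, size=300).clip(0, 50)
cans_score = 0.5 * lms2 + rng.normal(0, 3, size=300)

# Split at the 10th percentile of the LMS-II, as in Table 5.
cutoff = np.percentile(lms2, 10)
impaired = cans_score[lms2 <= cutoff]
intact = cans_score[lms2 > cutoff]

# Independent-samples t-test comparing the memory-impaired and intact groups.
t_stat, p_val = stats.ttest_ind(impaired, intact)
print(f"n impaired={impaired.size}, n intact={intact.size}, "
      f"t={t_stat:.2f}, p={p_val:.3g}")
```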

TABLE 5.

Diagnostic Validation

Demographics  LMS-II* ≤ 10th Percentile (N = 49), Mean (SD)  LMS-II* > 10th Percentile (N = 233), Mean (SD)  P
   Age 80 (8.1) 76 (8.1) <.01
   Years Of Formal Education 14 (3.0) 15 (2.9) <.05
CANS-MCI Scores
   General Reaction Time .91 (.27) .74 (.18) <.001
   Design Matching 32 (10.3) 41 (8.9) <.001
   Word-to-Picture Matching (Latency) 2.5 (.68) 1.9 (.46) <.001
   Clock Hand Placement 23 (9.2) 32 (9.0) <.001
   Stroop (Discordant Latency) 1.97 (.53) 1.61 (.45) <.001
   Free Recognition I 15 (3.0) 18 (1.4) <.001
   Free Recognition II 15 (3.4) 18 (1.8) <.001
   Free Recognition I & II 30 (6.0) 36 (2.8) <.001
   Guided Recognition I Errors 9 (10.1) 1.5 (1.9) <.001
   Picture Naming 27 (5.2) 32 (4.4) <.001
   Picture Naming (Latency) 5.4 (1.4) 4.3 (.94) <.001
*

Wechsler Memory Scale-Revised Logical Memory Scale-II

Factor Analysis

Confirmatory factor analysis supported the highly correlated three-factor model (Memory, Language/Spatial Fluency, and Executive Function/Mental Control) found in the exploratory analysis (data not shown), indicating that the tests measure the intended cognitive dimensions (Table 6). The factor loadings in the confirmatory analysis were all significant and ranged from .45 to .82. The Free and Guided Recognition, Guided Recognition Errors, LMS, and DRS Memory tests comprised the Memory factor. Four test scores made up the Language/Spatial Fluency factor: Clock Hand Placement, Picture Naming, Picture Naming (Latency), and DRS Initiation. The Executive Function/Mental Control factor consisted of General Reaction Time, Design Matching, Word-to-Picture Matching (Latency), Stroop (Discordant Latency), and WAIS Digit Symbol. Goodness of fit indicators were strong (χ2(70)=155.65, p=.001; Bentler Comparative Fit Index=.99; Root Mean Square Error of Approximation=.055). The three factors were highly correlated (.91, .75, .97). Because the correlation between the Language/Spatial Fluency and Executive Function/Mental Control factors was so high (.97), we combined them into one factor and tested the two-factor model against the three-factor model. The two-factor solution was not as strong as the three-factor model (Δχ2(2)=15.96, p<.001).
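The final model comparison is a standard chi-square difference test for nested models. Assuming the reported change in chi-square of 15.96 with 2 degrees of freedom (our reading of the statistic: merging two correlated factors removes two factor-correlation parameters), the associated p-value can be recovered with SciPy as sketched below.

```python
# Chi-square difference test for the nested two- vs. three-factor models.
# Sketch only: values are taken from the text; df=2 is an assumption based
# on collapsing two correlated factors into one.
from scipy.stats import chi2

delta_chi2 = 15.96   # reported change in model chi-square
delta_df = 2         # assumed change in degrees of freedom

p_value = chi2.sf(delta_chi2, delta_df)
print(f"delta chi2({delta_df}) = {delta_chi2}, p = {p_value:.4f}")
# p is about .0003, i.e., p < .001: the three-factor model fits better.
```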

Table 6.

Factor Analysis

Score Memory Language/Spatial Fluency Executive Function /Mental Control
General Reaction Time (Latency) .45
Design Matching −.80
Word-to-Picture Matching (Latency) .65
Stroop (Discordant Latency) .61
WAIS Digit Symbol −.82
Clock Hand Placement .70
Picture Naming .73
Picture Naming (Latency) −.63
DRS Initiation .61
Free Recognition I .73
Guided Recognition I Errors −.61
Free Recognition II .64
WMS-R LMS-I .70
WMS-R LMS-II .78
DRS Memory .64

Discussion

The results of these initial analyses indicate that the CANS-MCI is a user-friendly, reliable, and valid instrument that measures several cognitive domains. CANS-MCI computer interactions were designed to provide a high degree of self-administration usability. Our results indicate high internal reliability and high test-retest reliability comparable to other validated instruments.30–33 One score, Guided Recognition I Percent Correct, was not highly reliable and therefore was not included in further analyses. The test scores are all more consistent from Time 2 to Time 3 than they were from Time 1 to Time 2, as demonstrated by the t-tests. Even memory tests are more stable once subjects are familiar with the tests, despite the much longer time interval. All CANS-MCI test scores improve after one previous exposure, indicating that familiarity with the testing procedures makes test results more dependable. It would be advisable to have a patient take the test once for practice and use the second testing as the baseline measure. Recommendations for more extensive evaluation should be based upon multiple testing sessions, at least several months apart.

The CANS-MCI subtests correlate with the LMS-I and II, Digit Symbol, and DRS highly enough to demonstrate concurrent validity. However, the moderate correlations indicate some differences, which might be attributed to the different ranges of ability the tests were designed to measure. For example, the DRS Initiation Scale is less discriminating at higher ability levels30 than are the CANS-MCI Clock Hand Placement and Picture Naming tests with which it is best correlated; 42% of the subjects obtained a perfect DRS Initiation score, while 4.8% and .5% received perfect scores on the Clock Hand Placement and Naming Accuracy tests, respectively. The LMS-II is less discriminating than the CANS-MCI Free Recognition II score at the lower levels of functioning; 20 (7%) subjects received an LMS-II score of 0, while 1 (0.4%) of the subjects received the lowest score on the Free Recognition II. On the other hand, the LMS-II appears to be more discriminating at the higher levels of functioning. No subjects received a perfect LMS-II score, while 19% of subjects got the highest score possible on Free Recognition II. The CANS-MCI Free Recognition II test is substantially different from the LMS-II free recall test because free recall without prompting (used in the LMS-II) is more difficult than guided recall. Like other tests that guide learning,34 the CANS-MCI Free Recognition test may enhance “deep semantic processing,”13 allowing weak memory traces to improve its scores without improving scores on unprompted recall.35

Diagnostic validity is an important criterion for cognitive screening effectiveness. T-tests based upon the LMS-II diagnostic criterion show that the CANS-MCI tests differentiate between the memory-impaired and unimpaired elderly. Although the reference measure (LMS-II) is a limited criterion of diagnostic validity, the corresponding differences on all the CANS-MCI tests suggest that the CANS-MCI has enough diagnostic validity to pursue more extensive analyses. The CANS-MCI also measures changes over time, a feature that seems likely to enhance its validity when assessing highly educated people who have greater cognitive reserve. Given that reserve capacity can mask the progression of AD,36 comparisons of highly educated persons against their own previous scores might detect the insidious progression of pre-dementia changes long before the diagnosis of MCI or AD would otherwise be reached. The extent to which sensitivity/specificity and longitudinal analyses confirm this dimension of validity is currently being studied, using full neuropsychological evaluations as the criterion standard.

The factor analyses indicated that CANS-MCI tests load onto three main factors: Memory, Language/Spatial Fluency, and Executive Function/Mental Control. These factors are similar to the three aspects of cognitive performance measured by the Consortium to Establish a Registry for Alzheimer’s Disease (CERAD) test batteries.37 The CANS-MCI, LMS, and DRS tests that measure immediate acquisition or delayed retention abilities load onto Memory. Language/Spatial Fluency is heavily loaded with tests that measure retrieval of words from semantic memory (or, in the case of the Clock Hand Placement test, spatial representation of numbers). The General Reaction Time, Design Matching, Word-to-Picture Matching, Stroop (Discordant Latency), and Digit Symbol tests make up the Executive Function/Mental Control factor, which appears to represent rapid attention switching and mental control. The Memory, Language/Spatial Fluency, and Executive Function/Mental Control factors were highly correlated, as would be expected of cognitive functions all associated with MCI and/or AD. Even though highly correlated, they appear to be distinct factors.

The Clock Hand Placement test was not part of the Executive Function/Mental Control factor as hypothesized, even though in pairwise analyses (Table 4) it had its highest correlation with the Digit Symbol test, a conventional measure of several executive functioning abilities that is highly predictive of AD.21 However, Clock Hand Placement also correlated moderately with the DRS Initiation scale (r=.437), a scale that involves naming items found in a grocery store. Both the Clock and DRS Initiation tests involve mental visualization translated into a response, and both load on the Language/Spatial Fluency factor. The test that most heavily loads on Language/Spatial Fluency (Picture Naming) also requires visualization of an answer before making a response.

CANS-MCI computerized tests have several advantages. Through consistent administration and scoring, computerized tests can reduce examiner administration and scoring error,12,38 as well as reduce the influence of the examiner on patient responses.33 Because they are self-administered and no training is required to administer them, they take little staff time. Other cognitive screening tests, both brief and computerized, still take significant amounts of staff time to administer and score. Medications intended to slow memory decline are widely used and are expected to be most effective when started early in the course of dementia. Therefore, it will become increasingly important to identify cognitive impairment early in the process of decline.3,4,39 The CANS-MCI is also amenable to translation, with automated replacement of text, pictures, and audio segments based upon the choice of a language and a specific location (e.g., English/England) at the beginning of the testing program. The CANS-MCI was created for U.S. English speakers, and versions have not yet been validated for other populations.

There are several weaknesses in our study. A community sample increases the chance of false positives because of the low prevalence of impairment in the population compared with a clinical sample.40 There may also be a selection bias in our sample because participants were volunteers with high levels of education.

The ability of the CANS-MCI to discriminate between individuals with and without MCI has not yet been adequately compared with the findings of a criterion standard such as an independent neuropsychological battery. We do not yet know which combination of tests and scores has the most sensitivity and specificity in predicting the presence of MCI. The next step in our research is to determine this using a full neuropsychological battery as the criterion standard. Further, we will examine what amount of change over time is significant enough to warrant a recommendation for comprehensive professional testing.

The CANS-MCI is a promising tool for low-cost, objective clinical screening. Further validation research is under way to determine if its longitudinal use may indicate the presence or absence of cognitive impairment well enough to guide clinician decisions about the need for a complete diagnostic evaluation.

Acknowledgments

The research reported here was supported by the National Institute on Aging, National Institutes of Health (1R43AG18658-01 and 2R44AG18658-02), as well as an earlier test development grant from the Department of Veterans Affairs, Veterans Health Administration, Health Services Research and Development Service (LIP 61-501). The authors wish to thank Bruce Center, PhD, for statistical assistance.

Footnotes

A partial version of these data was presented at the 8th International Conference on Alzheimer’s Disease and Related Disorders, Stockholm, Sweden 2002.

References

1. Bookheimer SA, Strojwas BS, Cohen MS, et al. Patterns of brain activation in people at risk for developing Alzheimer's disease. N Engl J Med. 2000;343(7):450–456. doi: 10.1056/NEJM200008173430701.
2. Petersen RC, Stevens JC, Ganguli M, et al. Practice parameter: early detection of dementia: mild cognitive impairment (an evidence-based review). Report of the Quality Standards Subcommittee of the American Academy of Neurology. Neurol. 2001;56:1133–1142. doi: 10.1212/wnl.56.9.1133.
3. Brookmeyer R, Gray S, Kawas C. Projections of Alzheimer's disease in the United States and the public health impact of delaying disease onset. Am J Public Health. 1998;88:1337–1342. doi: 10.2105/ajph.88.9.1337.
4. Howieson DB, Dame A, Camicioli R, et al. Cognitive markers preceding Alzheimer's dementia in the healthy oldest old. J Am Geriatr Soc. 1997;45:584–589. doi: 10.1111/j.1532-5415.1997.tb03091.x.
5. Petersen RC, Smith GE, Waring SC, et al. Mild cognitive impairment: clinical characterization and outcome. Arch Neurol. 1999;56:303–308. doi: 10.1001/archneur.56.3.303.
6. Shah S, Tangalos EG, Petersen RC. Mild cognitive impairment: when is it a precursor to Alzheimer's disease? Geriatrics. 2000;55:62–68.
7. Petersen RC, Doody R, Kurz A, et al. Current concepts in mild cognitive impairment. Arch Neurol. 2001;58:1985–1992. doi: 10.1001/archneur.58.12.1985.
8. Ritchie K, Touchon J. Mild cognitive impairment: conceptual basis and current nosological status. Lancet. 2000;355:225–228. doi: 10.1016/S0140-6736(99)06155-3.
9. Chertkow H. Mild cognitive impairment. Curr Opin Neurol. 2002;15:401–407. doi: 10.1097/00019052-200208000-00001.
10. Morris JC, Storandt M, Miller JP, et al. Mild cognitive impairment represents early-stage Alzheimer disease. Arch Neurol. 2001;58:397–405. doi: 10.1001/archneur.58.3.397.
11. Reischies FM, Hellweg R. Prediction of deterioration in mild cognitive disorder in old age: neuropsychological and neurochemical parameters of dementia diseases. Compr Psychiatry. 2000;41(Suppl 2):66–75. doi: 10.1016/s0010-440x(00)80011-5.
12. Green RC, Green J, Harrison JM, et al. Screening for cognitive impairment in older individuals. Arch Neurol. 1994;51:779–786. doi: 10.1001/archneur.1994.00540200055017.
13. Grober E, Kawas C. Learning and retention in preclinical and early Alzheimer's disease. Psychol Aging. 1997;12(1):183–188. doi: 10.1037//0882-7974.12.1.183.
14. Fuld PA, Masur DM, Blau AD, et al. Object-memory evaluation for prospective detection of dementia in normal functioning elderly: predictive and normative data. J Clin Exp Neuropsychol. 1990;12:520–528. doi: 10.1080/01688639008400998.
15. Kluger A, Ferris SH, Golomb J, et al. Neuropsychological prediction of decline to dementia in nondemented elderly. J Geriatr Psychiatry Neurol. 1999;12(4):168–179. doi: 10.1177/089198879901200402.
16. Jacobs DM, Sano M, Dooneief G, et al. Neuropsychological detection and characterization of preclinical Alzheimer's disease. Neurol. 1995;45:957–962. doi: 10.1212/wnl.45.5.957.
17. Chen P, Ratcliff G, Belle SH, et al. Cognitive tests that best discriminate between presymptomatic AD and those who remain nondemented. Neurol. 2000;55:1847–1853. doi: 10.1212/wnl.55.12.1847.
18. Hänninen T, Hallikainen M, Koivisto K, et al. A follow-up study of age-associated memory impairment: neuropsychological predictors of dementia. J Am Geriatr Soc. 1995;43:1007–1015. doi: 10.1111/j.1532-5415.1995.tb05565.x.
19. Bozoki A, Giordani B, Heidebrink JL, et al. Mild cognitive impairments predict dementia in nondemented elderly patients with memory loss. Arch Neurol. 2001;58:411–416. doi: 10.1001/archneur.58.3.411.
20. Masur DM, Sliwinski M, Lipton RB, et al. Neuropsychological prediction of dementia and the absence of dementia in healthy elderly persons. Neurol. 1994;44:1427–1432. doi: 10.1212/wnl.44.8.1427.
21. Flicker C, Ferris SH, Reisberg B. A two-year longitudinal study of cognitive function in normal aging and Alzheimer's disease. J Geriatr Psychiatry Neurol. 1993;6:84–96. doi: 10.1177/089198879300600205.
22. Welsh KA, Butters N, Hughes JP, et al. Detection and staging of dementia in Alzheimer's disease: use of the neuropsychological measures developed for the Consortium to Establish a Registry for Alzheimer's Disease (CERAD). Arch Neurol. 1992;49(5):448–452. doi: 10.1001/archneur.1992.00530290030008.
23. Tierney MC, Szalai JP, Snow WG, et al. Prediction of probable Alzheimer's disease in memory-impaired patients: a prospective longitudinal study. Neurol. 1996;46(3):661–665. doi: 10.1212/wnl.46.3.661.
24. Berg L, Danziger WL, Storandt M, et al. Predictive features in mild senile dementia of the Alzheimer type. Neurol. 1984;34:563–569. doi: 10.1212/wnl.34.5.563.
25. Bondi MW, Monsch AU, Galasko D, et al. Preclinical cognitive markers of dementia of the Alzheimer's type. Neuropsychol. 1994;8(3):374–384.
26. Tuokko H, Vernon-Wilkinson R, Weir J, et al. Cued recall and early identification of dementia. J Clin Exp Neuropsychol. 1991;13:871–879. doi: 10.1080/01688639108405104.
27. Jacobs DM, Sano M, Dooneief G, et al. Neuropsychological detection and characterization of preclinical Alzheimer's disease. Neurol. 1995;45:957–962. doi: 10.1212/wnl.45.5.957.
28. Hill E, Hammond K. The usability of multimedia automated psychological tests to screen for Alzheimer's disease. Proc AMIA Symposium. 2000:1030.
29. Wechsler D: Wechsler Memory Scale-Revised. New York, NY: Psychological Corporation; 1987.
30. Jurica PJ, Leitten CL, Mattis S: DRS-2 Dementia Rating Scale-2: Professional Manual. Lutz, FL: Psychological Assessment Resources, Inc.; 2001.
31. Wechsler D: Wechsler Adult Intelligence Scale-Revised. San Antonio, TX: The Psychological Corporation; 1981.
32. Ruchinskas RA, Singer HK, Repetz NK. Clock drawing, clock copying, and physical abilities in geriatric rehabilitation. Arch Phys Med Rehabil. 2001;82:920–924. doi: 10.1053/apmr.2001.23993.
33. Campbell KA, Rohlman DS, Storzbach D, et al. Test-retest reliability of psychological and neurobehavioral tests self-administered by computer. Assessment. 1999;6(1):21–32. doi: 10.1177/107319119900600103.
34. Buschke H. Cued recall in amnesia. J Clin Neuropsychol. 1984;6(4):433–440. doi: 10.1080/01688638408401233.
35. Tuokko H, Crockett D. Cued recall and memory disorders in dementia. J Clin Exp Neuropsychol. 1989;11(2):278–294. doi: 10.1080/01688638908400889.
36. Snowdon DA, Kemper SJ, Mortimer JA, et al. Linguistic ability in early life and cognitive function and Alzheimer's disease in late life. JAMA. 1996;275(7):528–532.
37. Morris JC, Heyman A, Mohs RC, et al. The Consortium to Establish a Registry for Alzheimer's Disease (CERAD). Part I: clinical and neuropsychological assessment of Alzheimer's disease. Neurol. 1989;39:1159–1165. doi: 10.1212/wnl.39.9.1159.
38. Letz R, Green RC, Woodard JL. Development of a computer-based battery designed to screen adults for neuropsychological impairment. Neurotoxicol Teratol. 1996;18(4):365–370. doi: 10.1016/0892-0362(96)00041-4.
39. Ganguli M, Dodge HH, Chen P, et al. Ten-year incidence of dementia in a rural elderly US community population: the MoVIES Project. Neurol. 2000;54:1109–1116. doi: 10.1212/wnl.54.5.1109.
40. Albert M, Smith LA, Scherr PA, et al. Use of brief cognitive tests to identify individuals in the community with clinically diagnosed Alzheimer's disease. Int J Neurosci. 1991;57:167–178. doi: 10.3109/00207459109150691.
