American Journal of Alzheimer's Disease and Other Dementias
. 2011 Nov 22;26(6):484–491. doi: 10.1177/1533317511426133

A Novel Technology to Screen for Cognitive Impairment in the Elderly

David W Wright 1,, Harriet Nevárez 1, Patrick Kilgo 2, Michelle LaPlaca 3, Amber Robinson 1, Shawn Fowler 1, John Brumfield 3, Felicia C Goldstein 4
PMCID: PMC10845310  PMID: 22110158

Abstract

Background: Traditional evaluation of mild cognitive impairment (MCI) can be costly, time consuming, and impractical for widespread screening. DETECT is a portable device developed to rapidly perform cognitive testing in diverse settings. This study compares DETECT with formal clinical assessment. Methods: A prospective cross-sectional comparison of the DETECT device versus an expert neuropsychologist’s assessment (NPA). A total of 405 participants ≥65 years old, recruited from geriatric clinics and retirement facilities, completed both DETECT and NPA. Multivariable logistic regression methods were used to evaluate the degree of correlation between DETECT testing and the NPA diagnosis. Results: Predictive modeling demonstrated very good ability to discriminate between normal, MCI, and dementia per the NPA reference standard using DETECT subtests (c = 0.85 for any impairment; c = 0.99 for dementia). Conclusion: DETECT scores closely correlate with NPA. DETECT can identify and discriminate between normal, MCI, and dementia and could be incorporated as a screener for MCI.

Keywords: MCI, cognitive impairment, dementia, Alzheimer’s disease, cognitive screen, immersive

Introduction

Mild cognitive impairment (MCI) is an intermediate stage between normal aging and dementia. 1-5 Dementia, whether due to Alzheimer’s disease (AD) or other causes, is a pathological condition that increases morbidity and mortality and negatively impacts patients’ families and society. Patients with amnestic MCI often progress to AD, with an annual rate of conversion approximately 10 times that of the normal elderly population. 1,5

In 2006, 37.3 million or 12.4% of Americans were 65 years of age or older. This number is expected to double by 2030 to about 71.5 million, accounting for 20% of the US population. 6 As this number grows, cases of MCI, AD, and other forms of dementia will rise. 7 In 2000, there were approximately 2.2 to 4.8 million cases of AD in the United States alone. Experts expect that this total will increase 4-fold by 2050. 8 The global burden of AD is even more striking; over 26.6 million people worldwide are currently affected by this condition. 9 The number of persons with MCI is thought to be double or triple this figure. 10

Mild cognitive impairment often goes undetected. As a result, patients are frequently not identified until significant loss of cognitive function has occurred or they have progressed to frank dementia. 11,12 In one study, up to 63% of patients in an assisted living facility had undiagnosed dementia. 12 In another study, primary care physicians in a geriatric clinic did not recognize dementia in 67% of their patients. 13

Few diseases produce as dramatic an impact on the patient, his or her family, and society as AD. The average family caregiver of a patient with AD spends 16.1 hours a week providing care. 8 The economic toll of AD exceeds $100 billion annually in the United States—more than heart disease, cancer, and stroke. 14 The cost of care increases dramatically when an individual is placed in assisted living or a skilled nursing facility. 8

Many professional organizations advocate early screening to detect the first signs of AD. The Alzheimer’s Foundation of America (AFA) 2008 report, Memory Matters, states, “the screening of at-risk populations for dementia should become a cornerstone for early treatment or prevention of cognitive decline.” Interventions that can delay symptom progression and defer the need for long-term care (LTC) services are a major focus of research. 11,15 Early diagnosis permits the best opportunity to understand disease progression, begin long-term planning, and ultimately implement a treatment regimen.

The adoption of new screening tools into clinical care environments is lagging. Studies have cited barriers to screening such as inadequate staff time to administer screening tests, the need for dedicated space, lack of training and other resources, and lingering concerns over the accuracy of these tests. 16,17

DETECT is a simple, self-contained device that administers a battery of cognitive tests in 7 to 10 minutes. In an earlier study, 18 DETECT accurately identified patients with cognitive impairment (MCI) and discriminated between MCI and dementia. The primary objectives of this current study are 2-fold—to determine the degree of agreement between DETECT subtests and formal neuropsychological testing and to use these results to formulate a predictive model capable of generating estimated probabilities of impairment in an at-risk elderly population.

Methods

Participants and Sample

Participants were recruited from patients of the Wesley Woods Primary Care Clinic in Atlanta, Georgia, and other local retirement facilities. To be eligible, a participant had to be ≥65 years of age, without a previous diagnosis of MCI or dementia, and physically capable of completing both the DETECT test and the neuropsychologist’s assessment (NPA). Participants were prescreened with the Mini-Mental State Examination (MMSE) and/or Mini-Cog assessment tool to reduce the pool of severely impaired patients. 26,27 Accordingly, participants with a score <18 on the MMSE or <4 on the Mini-Cog were excluded. Patients were also excluded if they had significant visual impairment (blindness or noncorrectable vision loss), severe hearing impairment (unable to hear even with a hearing aid), physical impairment (unable to push the buttons on the controller), claustrophobia, or an acute medical illness that warranted immediate attention, or if they were non-English speaking, illiterate, or unable to provide informed consent. The Emory University Institutional Review Board approved the study protocol.

Measurements

Each participant was first given the DETECT test in a familiar setting (ie, clinic, retirement facility, etc). DETECT took an average of 10 minutes to administer (including instructions). Following automated assessment with the DETECT device, each participant was scheduled for a second assessment, within 1 month of the DETECT testing, to complete the standard battery of neuropsychological (NP) tests. The NP battery was scored by a professional neuropsychologist who was blinded to each participant’s DETECT score.

The DETECT device was developed by modifying the elements of a battery of NP tests that, in their full form, have been previously used in MCI (ie, Simple & Complex Choice Reaction Time, 19,20 Selective Reminding Memory Test, 21 Go/No-Go Executive function, 22,23 and N-back Working Memory 24 ). The heads-up display and noise-canceling headphones create a portable immersive environment for the testing (see Figure 1). The other components include a dedicated computer with the DETECT software and a handheld input unit with 2 buttons. Depending on the test, the buttons signal “Yes” and “No” or “Right” and “Left.” The software automatically administers a short series of 4 NP tests that evaluate information processing speed, episodic memory, working memory, and executive function. Results from each participant are stored in a database for later analysis. A description of the DETECT subtests is contained in Table 1.

Figure 1.

Figure 1.

The DETECT system.

Table 1.

NP Testing Incorporated in DETECT Software

NP Test Test Description
Complex attention 19,20 Participants respond to a series of stimuli with 1 to 3 characteristics: shape, color, and internal line orientation. Patients must remember an object’s defining characteristic and rapidly find it in similar stimuli that vary by that characteristic.
Go-No-Go 22,23 Uses an arrows task to determine whether the participant can inhibit a prepotent response. Participants respond to a stimulus with 2 characteristics: color and orientation. If the arrow is blue, they select the side that the arrow is pointing. If the arrow is red, they select the side opposite from the direction the arrow is pointing.
Selective reminding memory test 21,24,39 Participants remember a list of 12 words in 3 trials. They must pick them out of a list of 24 words. The participant’s ability to recognize the same list of words after a period of delay is evaluated by readministering this test at the end of the neuropsychological battery of tests.
N-back working memory test 24,40 Participants see black and white photographs of faces.
In the 1-Back condition, the participant determines whether the face shown is identical to the immediately preceding face. In the 2-Back condition, the participant determines whether the face shown is identical to the face that was presented 2 images back.

The formal NPA included a 90-minute neuropsychological test battery (NP), 10-item Functional Assessment Questionnaire (FAQ), 25 and an expert neuropsychologist’s judgment. This NPA constituted the reference or “gold” standard. The NP battery chosen is well validated, routinely administered at our institution, and widely employed by other experts. The NP tests are manually administered using paper and pencil. The results are converted to z scores using published norms. A summary of the tests used in the NP battery is found in Table 2.

Table 2.

Neuropsychological Battery of Tests Administered to Each Patient

Cognitive Function Tested NP Test Description
Overall cognitive functioning Mini-Mental State Examination 26 A convenient “quick screen” for cognitive impairment
Attention Digit Span forward subtest, Wechsler Adult Intelligence Scale–Revised 47 and Trails A&B 42 Number of correct trials and number of seconds needed to sequence numbers
Language Consortium to establish a registry for Alzheimer’s disease (CERAD) 43,44 Number of words related to a particular letter and category in 60 seconds
30-item Boston Naming Test. 45 Number of correct responses given
Visuo-motor performance Brief Visual Memory Test–Revised (BVMT-R) 46 Number of points in copying designs
Judgment of Line Orientation (JOLO) 47 Number of correct responses on a judgment of the angular orientation of lines task
Verbal episodic memory CERAD 43,44 Number of words recalled after a brief delay, number of elements recalled for stories
Visual memory Brief Visuospatial Memory Test–Revised 46 Number of points earned in learning and recalling designs
Executive functioning Clock drawing test 48 Number of correct points
Trails B 42 Number of seconds to alternate between numbers and letters
Digit Span Backward subtest 41 Number of correct trials
Digit Symbol subtest Number of symbols transcribed
Similarities subtest, Wechsler Adult Intelligence Scale–Revised 41 Number of points when inferring relationships between stated items

Using the NPA, participants were diagnosed as normal, MCI, or demented. The neuropsychologist classified a participant as “normal” if no cognitive scores were less than 1.5 standard deviations (SDs) below the mean and the participant performed instrumental activities of daily living (IADLs) without assistance. Mild cognitive impairment was defined as cognitive scores more than 1.5 SDs below the mean in a participant still able to perform IADLs. Mild cognitive impairment was further subdivided into 2 categories: possible MCI (only 1 score within a cognitive domain, such as memory or language, was impaired) and probable MCI (2 or more scores, either within a domain or across domains, were impaired). Dementia was defined as 3 or more cognitive test scores more than 1.5 SDs below the mean and the participant requiring assistance with IADLs.
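The stated cutoffs can be sketched as a simple decision function. This is an illustrative reconstruction only; the actual diagnoses also incorporated the neuropsychologist’s expert judgment, and the function name and data layout are hypothetical.

```python
def classify_cognition(z_scores, needs_iadl_help):
    """Illustrative mapping of the stated cutoffs to a 4-level diagnosis.

    z_scores: per-test z scores relative to published norms.
    needs_iadl_help: True if the participant requires assistance with IADLs.
    """
    # A score more than 1.5 SDs below the mean counts as impaired.
    impaired = sum(1 for z in z_scores if z < -1.5)
    if impaired >= 3 and needs_iadl_help:
        return "dementia"
    if impaired >= 2:
        return "probable MCI"
    if impaired == 1:
        return "possible MCI"
    return "normal"
```

Note that borderline combinations (eg, 3 impaired scores with intact IADLs) are not explicitly specified in the paper; the sketch assigns them to probable MCI.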

Scoring of the DETECT tests falls into 3 categories: accuracy, response time, and nonresponse (hereafter called time-outs). When measuring accuracy, 4 outcomes are possible with every click of the device’s handheld unit: true positive (correct), false positive (incorrect), true negative (correct), and false negative (incorrect). In each round of each DETECT test, the accuracy percentage was tallied as a summary measure.

For response time, participants were instructed to answer each question as rapidly as possible. For each prompt inside each test, a response time was measured. Participants must respond to each question/prompt within 2.5 to 3 seconds (depending on the specific test). When a response was not registered, the system automatically moved to the next prompt/question. Nonresponses are recorded as time-outs. In each test/round, the percentage of responses resulting in a time-out was calculated. For each DETECT test/round, the median, first and third quartiles of response time were calculated, as well as the SD of response time.
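The per-round summary measures described above could be computed as follows. The data layout is hypothetical (the paper does not describe the DETECT database schema), and quartiles and SD of response time are omitted for brevity.

```python
from statistics import median

def summarize_round(responses):
    """Summarize one round of a DETECT test.

    responses: list of (correct, response_time) tuples; a time-out is
    recorded as (None, None) because no answer was registered in time.
    """
    answered = [(ok, rt) for ok, rt in responses if ok is not None]
    timeouts = len(responses) - len(answered)
    times = [rt for _, rt in answered]
    return {
        # Accuracy is tallied over prompts that received a response.
        "accuracy_pct": 100.0 * sum(ok for ok, _ in answered) / len(answered),
        "timeout_pct": 100.0 * timeouts / len(responses),
        "median_rt": median(times),
    }
```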

Analysis

Each of the DETECT summary measures described above was examined to determine how strongly it was associated with NPA. For each DETECT test, separate univariate logistic regression models were constructed that related the participant’s impairment to the summary measures (accuracy, response time, and time-outs). Tests of significance of the beta parameter associated with each DETECT test summary measure were performed and the associated P values were calculated. The c-index (equivalent to the area under the receiver-operating characteristic [ROC] curve) was calculated as a measure of discrimination, that is, how well the summary measure discriminated between those who were versus those who were not impaired.
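For a binary outcome, the c-index equals the probability that a randomly chosen impaired participant receives a higher risk score than a randomly chosen unimpaired one. A minimal pairwise computation, shown for illustration (the study derived its c-indices from fitted logistic regression models):

```python
def c_index(risk_scores, impaired):
    """Concordance index for a binary outcome.

    Over all (impaired, unimpaired) pairs, counts how often the impaired
    participant has the higher risk score; ties contribute 0.5.
    """
    cases = [s for s, y in zip(risk_scores, impaired) if y == 1]
    controls = [s for s, y in zip(risk_scores, impaired) if y == 0]
    concordant = sum(
        1.0 if c > n else 0.5 if c == n else 0.0
        for c in cases
        for n in controls
    )
    return concordant / (len(cases) * len(controls))
```

A c-index of 0.5 indicates no discrimination (chance ordering) and 1.0 indicates perfect separation of impaired from unimpaired participants.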

We sought to develop a multivariable predictive ordinal regression model that would use DETECT summary measures to generate the estimated probabilities of each level of impairment (normal/no impairment, possible MCI, probable MCI, and definite impairment). We employed 3 model selection procedures (forward, backward, and stepwise selection) to develop these predictive models. Each model was then evaluated on the basis of discrimination (c-index) and calibration (the degree of agreement between observed and expected probabilities), the latter gauged via the Hosmer-Lemeshow (H-L) statistic.
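The H-L calibration check can be sketched for a binary outcome as follows. This is an illustrative decile-based version; the exact grouping the study used for the ordinal model is not specified in the paper.

```python
def hosmer_lemeshow(probs, outcomes, n_groups=10):
    """Hosmer-Lemeshow chi-square statistic for a binary outcome.

    Sorts participants by predicted probability, splits them into groups,
    and compares observed with expected event counts within each group.
    """
    ranked = sorted(zip(probs, outcomes), key=lambda t: t[0])
    size = len(ranked) // n_groups
    stat = 0.0
    for g in range(n_groups):
        # The last group absorbs any remainder after integer division.
        group = ranked[g * size:(g + 1) * size] if g < n_groups - 1 else ranked[g * size:]
        observed = sum(y for _, y in group)
        expected = sum(p for p, _ in group)
        mean_p = expected / len(group)
        stat += (observed - expected) ** 2 / (len(group) * mean_p * (1 - mean_p))
    # The statistic is compared against a chi-square with n_groups - 2 df;
    # a small value (large P) indicates good calibration.
    return stat
```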

A comparison between the relative predictive abilities of DETECT and MMSE was also performed. Two different classifications of the outcome were considered. For the first classification, all patients were classified as either demented or not demented. Second, the 4-level ordinal outcome (normal/no impairment, possible MCI, probable MCI, and definite impairment) was compared between MMSE and DETECT. For each outcome, the respective c-indices of the scores were calculated and tested for differences.

A 10-fold cross-validation approach was used to measure the degree to which subsamples of the study could predict MCI in other subsamples. This is an internal validation algorithm designed to test the performance of our model in subsets of the data set that did not contribute to the formulation of the model itself. The validation consists of 10 steps. First, the participants were randomly divided into 10 approximately equal groups of 40 patients each. Second, groups 1 to 9 were used to calculate the parameters of the model we chose to validate. Third, we used these estimated model parameters to calculate the predicted probabilities of impairment for the participants in group 10, the held-out group. Fourth, we repeated this process 9 more times, each time holding out a different group and applying model estimates from the other groups to obtain predicted probabilities. In the end, all patients had predicted probabilities of impairment that were computed from other cases, thus avoiding the “contamination” that can come with estimating probabilities for a participant as functions of that participant’s own data. Once probabilities were obtained for each participant, the c-index was calculated and compared to the c-index of the same model calculated on all of the patients. This validation exercise was primarily designed to avoid model “overfitting,” that is, having a model that is too large or too specific to one group of patients to be widely applicable to other cases.
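The fold assignment in the first step can be sketched as follows; the function name and seed are illustrative, not taken from the study.

```python
import random

def make_folds(n_participants, n_folds=10, seed=42):
    """Randomly partition participant indices into n_folds near-equal groups."""
    indices = list(range(n_participants))
    random.Random(seed).shuffle(indices)
    # Striding through the shuffled list yields folds whose sizes differ by at most 1.
    return [indices[f::n_folds] for f in range(n_folds)]

# Each cross-validation step then fits on 9 folds and scores the held-out one:
# for f, held_out in enumerate(folds):
#     train = [i for g, fold in enumerate(folds) if g != f for i in fold]
#     ...fit the model on `train`, predict probabilities for `held_out`...
```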

Results

Of the 425 participants enrolled in the study, 2 could not complete DETECT due to anxiety during testing and 18 did not return for NP testing. The remaining 405 participants (95%) completed both forms of assessment and are included in our analysis. Participants determined to have normal cognition through formal NP testing were slightly younger (mean age 76.3 vs 79.4 years), better educated (16.2 vs 14.9 years of education), and disproportionately Caucasian (90.7% vs 75.0%) compared with their nonnormal peers (P < .001). These demographic variables were considered as possible confounders/predictors in our multivariable model. There was no association between gender and cognitive impairment (P = .33).

The accuracy of each DETECT test is shown in Table 3. Discrimination was highest for the selective reminding first round (c-index = 0.760) and delayed recall (c-index = 0.769) tests, which are essentially the same test given at different points during the DETECT administration. Response time and time-out statistics were examined to assess their association with the clinical diagnosis (Table 4). The c-indices for these measures were generally in the low-to-moderate range for both response time and time-outs. The selective reminding test had the highest c-index for both response time (0.728) and time-outs (0.723).

Table 3.

Estimates of Accuracy of DETECT Subtests, Mean (SD)

DETECT Test a Normal Cognition (N = 172) Possible MCI (N = 89) Probable MCI (N = 112) Dementia (N = 32) All Nonnormal (N = 233) C-Index
Complex attention 89.7 (10.0) 85.3 (11.5) 83.8 (11.8) 74.7 (16.3) 83.1 (12.8) 0.670
Go-No-Go 47.7 (13.9) 46.1 (15.9) 43.2 (18.2) 29.9 (18.2) 42.5 (18.1) 0.584
Selected reminding round 1 76.7 (21.0) 64.2 (27.3) 49.8 (24.9) 28.6 (21.6) 52.3 (27.9) 0.760
Selected reminding round 2 80.4 (21.2) 71.9 (23.5) 55.8 (26.1) 34.0 (28.0) 58.9 (28.2) 0.741
N-Back oneback 67.4 (24.1) 56.3 (26.3) 47.2 (23.1) 24.6 (20.4) 47.5 (26.0) 0.716
N-Back twoback 48.6 (24.4) 38.0 (22.3) 32.4 (20.2) 22.3 (21.4) 33.1 (21.7) 0.680
Delayed recall 83.1 (15.9) 73.0 (20.9) 61.2 (22.0) 28.2 (25.5) 61.3 (26.2) 0.769

Abbreviations: MCI, mild cognitive impairment; SD, standard deviation.

a Comparisons between normal and nonnormal patients are statistically significant at α < .001 for all DETECT variables.

Table 4.

Estimates of Median Response Time of DETECT Subtests, Mean (SD) a

DETECT Test Normal Cognition (N = 172) Possible MCI (N = 89) Probable MCI (N = 112) Dementia (N = 32) All Nonnormal (N = 233) C-index
Simple complex attention 0.79 (0.44) 0.96 (0.65) 1.13 (0.67) 1.78 (0.91) 1.15 (0.75) 0.699
Go-No-Go 1.29 (0.48) 1.36 (0.56) 1.38 (0.62) 2.07 (0.84) 1.47 (0.68) 0.554
Selected reminding round 1 1.12 (0.33) 1.28 (0.40) 1.50 (0.44) 1.75 (0.40) 1.45 (0.45) 0.728
Selected reminding round 2 1.04 (0.33) 1.16 (0.40) 1.41 (0.43) 1.65 (0.43) 1.34 (0.45) 0.706
N-Back one-back 1.32 (0.38) 1.44 (0.38) 1.56 (0.42) 1.79 (0.41) 1.55 (0.42) 0.657
N-Back two-back 1.66 (0.32) 1.73 (0.32) 1.77 (0.34) 1.87 (0.25) 1.76 (0.32) 0.608
Delayed recall 0.96 (0.24) 1.08 (0.37) 1.25 (0.39) 1.63 (0.49) 1.23 (0.43) 0.714

Abbreviations: MCI, mild cognitive impairment; SD, standard deviation.

a Comparisons between normal and nonnormal patients are statistically significant at α < .01 for all DETECT tests.

The model selection procedures (forward, backward, and stepwise) all converged on 1 common model containing 7 of the DETECT test summary measures. This model provided good discrimination (c = 0.85) of the 4-level diagnosis and excellent discrimination of nondemented versus demented (c = 0.99). In addition, the DETECT model was capable of discriminating between the 2 subgroups of patients with MCI of varying severity (possible vs probable), as evidenced in both univariate (Tables 3-4) and multivariable analyses of the predicted probability of any impairment (see Figure 2). The multivariable model also showed acceptable calibration (P = .8428; that is, we fail to reject the H-L null hypothesis of agreement between observed and predicted probabilities).

Figure 2.

Figure 2.

Box plots of DETECT-generated probabilities of impairment by neuropsychological diagnosis.

When compared with the MMSE (perhaps the most widely used NP screening test), DETECT showed superior discrimination for both outcome classifications. For the dichotomous dementia outcome (yes or no), the DETECT c-index (0.995) was significantly higher than the MMSE c-index (0.901). Similarly, for the 4-level ordinal outcome, the c-index for DETECT (0.85) was significantly higher than that of the MMSE (0.739). Both comparisons were statistically significant (P < .001).

To assess the internal validity of the DETECT 7-term multivariable model, we employed the 10-fold cross-validation procedure, in which subsamples of the study population were used to predict impairment in other subsamples. The model proved internally valid, with a c-index of 0.837, only slightly (and expectedly) smaller than the c-index calculated from the entire sample (0.850).

Discussion

The DETECT system demonstrated good correlation with our reference standard NPA and correctly stratified participants into 4 categories of cognition: normal, possible MCI, probable MCI, and dementia. The tool discriminated between normal and demented participants with near-perfect accuracy (c = 0.99) and also showed strong discrimination for any impairment, including MCI (c = 0.85).

Patients with known and probable dementia were excluded to avoid artificially increasing the utility of the test. To minimize training effects, a delay of 1 to 4 weeks separated testing with the device and the subsequent NPA. This allowed any learned behavior to “wash out” before the reference standard test was administered. The neuropsychologist who performed the NPA and scored the pencil-and-paper tests was blinded to the results of the DETECT assessments. High rates of follow-up were achieved, minimizing the potential for nonresponse bias.

The diagnosis of MCI, a frequent precursor to AD, is difficult to make in the best of circumstances. Even in our study, 32 patients (8%) who were classified as “demented” by NPA were not previously known to be impaired. In addition, these patients all passed prestudy screening (MMSE >17 or Mini-Cog test >3). This observation underscores the difficulty of identifying dementia, much less MCI. Unfortunately, clinicians rarely have the time, training, or suitable facilities to administer a standard battery of NP tests. The cost of performing a formal NP screening on every patient >65 years of age would be prohibitive. As a result, many patients are misdiagnosed or undiagnosed until their symptoms are quite advanced.

The most commonly used screening instrument for dementia, the MMSE, 26 has been available for decades, is inexpensive, and is relatively easy to administer. However, it has not been widely adopted for screening. The limitations of the MMSE include relatively poor discrimination, especially for milder forms of cognitive impairment, the time required for health care staff to perform the test, and the need for an appropriate setting in which to administer it. 27-29 Other brief screens, like the Mini-Cog, may be more sensitive than the MMSE 30 but provide limited information about the type of cognitive deficit and are also poor at detecting milder forms of the condition. Newer testing paradigms that incorporate computerized NP batteries have the advantage of providing greater detail about the cognitive domains affected and can measure reaction time, an important factor impacted by cognitive impairment. 27,28,31-34 However, these and other currently available tests still require a conducive environment (free from external distractions) in order to ensure reliability.

Limitations

The NPA composite (NP battery, FAQ, and neuropsychologist judgment) was used as the reference standard in this study. Although fairly comprehensive, our NPA did not include all the elements of a complete dementia workup. Participants did not undergo a full clinical evaluation to rule out other potential organic causes for dementia or cognitive impairment. The NP battery we employed was restricted to a selection of tests considered to be most important by our expert neuropsychologist but is consistent with what most experts and institutions use. The participants in this study were drawn from diverse settings (retirement homes, geriatrics clinics), but the demographics of the study population were fairly uniform. Therefore, the generalizability of our findings to minority populations and seniors living in other parts of the country is unknown. The adoption of new screening tools into clinical care environments must be done with caution. Misdiagnosing an individual as impaired could provoke needless anxiety for a patient and his or her family. 29 As a result, it is important to carefully study a test’s accuracy, reliability, and precision before disseminating it to clinical practice.

Conclusion

DETECT was superior to the MMSE and compared favorably with a formal neuropsychological evaluation for detecting MCI and early dementia in elderly patients. The DETECT device has many features that make it more practical than current screening tools. If additional testing produces equally encouraging results, it may represent a more feasible way for clinicians to screen for MCI.

Acknowledgments

We would like to thank the Wallace H. Coulter Foundation for funding this study and the development of the DETECT device. We would also like to thank the Wesley Woods Clinic at Emory University, Park Springs Retirement Community, St George Village, Waynesville First United Methodist Church, Auburn Neighborhood Senior Center, and Lenbrook for the use of their facilities.

DWW, ML, and FCG conceived and designed the study, participated in study operations, assisted with data interpretation, and contributed significantly to manuscript preparation. JRB, HN, AR, and SF assisted with study development and conduct, and participated in manuscript preparation. PK performed data analysis, assisted in data interpretation, and contributed significantly to manuscript preparation.

Drs. Wright and LaPlaca are co-inventors of the DETECT technology. After the completion of this study, a patent was filed and is pending. A start-up company, Zenda Technologies, has formed around the technology. Dr Wright, Dr LaPlaca, and Mr Brumfield are not employees of the company but do retain financial interest.

The Wallace H. Coulter Foundation provided funding for this study.

References

  • 1. Gauthier S, Reisberg B, Zaudig M, et al. International Psychogeriatric Association Expert Conference on mild cognitive impairment. Mild cognitive impairment. Lancet. 2006;367(9518):1262–1270. [DOI] [PubMed] [Google Scholar]
  • 2. Scott KR, Barrett AM. Dementia syndromes: evaluation and treatment. Expert Rev Neurother. 2007;7(4):407–422. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 3. Petersen RC, Stevens JC, Ganguli M, Tangalos EG, Cummings JL, DeKosky ST. Practice parameter: early detection of dementia: mild cognitive impairment (an evidence-based review). Report of the Quality Standards Subcommittee of the American Academy of Neurology. Neurology. 2001;56(9):1133–1142. [DOI] [PubMed] [Google Scholar]
  • 4. Petersen RC, Smith GE, Waring SC, Ivnik RJ, Tangalos EG, Kokmen E. Mild cognitive impairment: clinical characterization and outcome. Arch Neurol. 1999;56(3):303–308. [DOI] [PubMed] [Google Scholar]
  • 5. Petersen RC, Doody R, Kurz A, et al. Current concepts in mild cognitive impairment. Arch Neurol. 2001;58(12):1985–1992. [DOI] [PubMed] [Google Scholar]
  • 6. Administration on Aging. Aging Statistics. http://www.aoa.gov/AoARoot/Aging_Statistics/index.aspx, 2009.
  • 7. Hebert LE, Beckett LA, Scherr PA, Evans DA. Annual incidence of Alzheimer disease in the United States projected to the years 2000 through 2050. Alzheimer Dis Assoc Disord. 2001;15(4):169–173. [DOI] [PubMed] [Google Scholar]
  • 8. Sloane PD, Zimmerman S, Suchindran C, et al. The public health impact of Alzheimer’s disease, 2000-2050: potential implication of treatment advances. Annu Rev Public Health. 2002;23:213–231. [DOI] [PubMed] [Google Scholar]
  • 9. Brookmeyer R, Johnson E, Ziegler-Graham K, Arrighi HM. Forecasting the global burden of Alzheimer’s disease. Alzheimers Dement. 2007;3(3):186–191. [DOI] [PubMed] [Google Scholar]
  • 10. Ferri CP, Prince M, Brayne C, et al. Global prevalence of dementia: a Delphi consensus study. Lancet. 2005;366(9503):2112–2117. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 11. Weimer DL, Sager MA. Early identification and treatment of Alzheimer’s disease: social and fiscal outcomes. Alzheimers Dement. 2009;5(3):215–226. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 12. Magsi H, Malloy T. Underrecognition of cognitive impairment in assisted living facilities. J Am Geriatr Soc. 2005;53(2):295–298. [DOI] [PubMed] [Google Scholar]
  • 13. Valcour VG, Masaki KH, Curb JD, Blanchette PL. The detection of dementia in the primary care setting. Arch Intern Med. 2000;160(19):2964–2968. [DOI] [PubMed] [Google Scholar]
  • 14. Geldmacher DS. Cost-effective recognition and diagnosis of dementia. Semin Neurol. 2002;22(1):63–70. [DOI] [PubMed] [Google Scholar]
  • 15. Lopez OL, Becker JT, Saxton J, Sweet RA, Klunk W, DeKosky ST. Alteration of a clinically meaningful outcome in the natural history of Alzheimer’s disease by cholinesterase inhibition. J Am Geriatr Soc. 2005;53(1):83–87. [DOI] [PubMed] [Google Scholar]
  • 16. Borson S, Scanlan J, Hummel J, Gibbs K, Lessig M, Zuhr E. Implementing routine cognitive screening of older adults in primary care: process and impact on physician behavior. J Gen Intern Med. 2007;22(6):811–817. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 17. Pezzotti P, Scalmana S, Mastromattei A, Di Lallo D. The accuracy of the MMSE in detecting cognitive impairment when administered by general practitioners: a prospective observational study. BMC Fam Pract. 2008;9:29. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 18. Wright DW, Goldstein FC, Kilgo P, et al. Use of a novel technology for presenting screening measures to detect mild cognitive impairment in elderly patients. Int J Clin Pract. 2010;64(9):1190–1197. [DOI] [PubMed] [Google Scholar]
  • 19. Economou A, Papageorgiou SG, Karageorgiou C, Vassilopoulos D. Nonepisodic memory deficits in amnestic MCI. Cogn Behav Neurol. 2007;20(2):99–106. [DOI] [PubMed] [Google Scholar]
  • 20. Dannhauser TM, Walker Z, Stevens T, Lee L, Seal M, Shergill SS. The functional anatomy of divided attention in amnestic mild cognitive impairment. Brain. 2005;128(pt 6):1418–1427. [DOI] [PubMed] [Google Scholar]
  • 21. Masur DM, Fuld PA, Blau AD, Crystal H, Aronson MK. Predicting development of dementia in the elderly with the selective reminding test. J Clin Exp Neuropsychol. 1990;12(4):529–538. [DOI] [PubMed] [Google Scholar]
  • 22. Meguro K. Community based measures for managing mild cognitive impairment and dementia: the Osaki-Tajiri Project [in Japanese]. Rinsho Shinkeigaku. 2007;47(11):862–864. [PubMed] [Google Scholar]
  • 23. Ready RE, Ott BR, Grace J, Cahn-Weiner DA. Apathy and executive dysfunction in mild cognitive impairment and Alzheimer disease. Am J Geriatr Psychiatry. 2003;11(2):222–228. [PubMed] [Google Scholar]
  • 24. Borkowska A, Drozdz W, Jurkowski P, Rybakowski JK. The Wisconsin Card Sorting Test and the N-back test in mild cognitive impairment and elderly depression. World J Biol Psychiatry. 2009;10(4 pt 3):870–876. [DOI] [PubMed] [Google Scholar]
  • 25. Pfeffer RI, Kurosaki TT, Harrah CH, Jr, Chance JM, Filos S. Measurement of functional activities in older adults in the community. J Gerontol. 1982;37(3):323–329. [DOI] [PubMed] [Google Scholar]
  • 26. Folstein MF, Folstein SE, McHugh PR. “Mini-mental state”. A practical method for grading the cognitive state of patients for the clinician. J Psychiatr Res. 1975;12(3):189–198. [DOI] [PubMed] [Google Scholar]
  • 27. de Jager CA, Schrijnemaekers AC, Honey TE, Budge MM. Detection of MCI in the clinic: evaluation of the sensitivity and specificity of a computerised test battery, the Hopkins verbal learning test and the MMSE. Age Ageing. 2009;38(4):455–460. [DOI] [PubMed] [Google Scholar]
  • 28. Crooks VC, Parsons TD, Buckwalter JG. Validation of the cognitive assessment of later life status (CALLS) instrument: a computerized telephonic measure. BMC Neurol. 2007;7:10. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 29. Xu G, Meyer JS, Thornby J, Chowdhury M, Quach M. Screening for mild cognitive impairment (MCI) utilizing combined mini-mental-cognitive capacity examinations for identifying dementia prodromes. Int J Geriatr Psychiatry. 2002;17(11):1027–1033. [DOI] [PubMed] [Google Scholar]
  • 30. Steenland NK, Auman CM, Patel PM, et al. Development of a rapid screening instrument for mild cognitive impairment and undiagnosed dementia. J Alzheimers Dis. 2008;15(3):419–427. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 31. Rabin LA, Saykin AJ, Wishart HA, et al. The memory and aging telephone screen: development and preliminary validation. Alzheimers Dement. 2007;3(2):109–121. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 32. Tornatore JB, Hill E, Laboff JA, McGann ME. Self-administered screening for mild cognitive impairment: initial validation of a computerized test battery. J Neuropsychiatry Clin Neurosci. 2005;17(1):98–105. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 33. Dwolatzky T, Whitehead V, Doniger GM, et al. Validity of a novel computerized cognitive battery for mild cognitive impairment. BMC Geriatr. 2003;3:4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 34. Saxton J, Morrow L, Eschman A, Archer G, Luther J, Zuccolotto A. Computer assessment of mild cognitive impairment. Postgrad Med. 2009;121(2):177–185. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 35. Werner P, Korczyn AD. Mild cognitive impairment: conceptual, assessment, ethical, and social issues. Clin Interv Aging. 2008;3(3):413–420. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 36. Buschke H, Fuld PA. Evaluating storage, retention, and retrieval in disordered memory and learning. Neurology. 1974;24(11):1019–1025. [DOI] [PubMed] [Google Scholar]
  • 37. McAllister TW, Sparling MB, Flashman LA, Guerin SJ, Saykin AJ. Early plateau of brain activation on fMRI in response to increased working memory task difficulty one month after mild traumatic brain injury (MTBI). J Neuropsychiatry Clin Neurosci. 2000;12(1):159–160. [Google Scholar]
  • 38. Wechsler D. Wechsler Adult Intelligence Scale-Revised. New York, NY: Psychological Corporation; 1987. [Google Scholar]
  • 39. United States War Department. Army Individual Test Battery (AIT-1). Washington, DC: Government Printing Office; 1944:1–12. [Google Scholar]
  • 40. Morris JC, Heyman A, Mohs RC, et al. The Consortium to Establish a Registry for Alzheimer’s Disease (CERAD). Part I. Clinical and neuropsychological assessment of Alzheimer’s disease. Neurology. 1989;39(9):1159–1165. [DOI] [PubMed] [Google Scholar]
  • 41. Fillenbaum GG, van Belle G, Morris JC, et al. Consortium to Establish a Registry for Alzheimer’s Disease (CERAD): the first twenty years. Alzheimers Dement. 2008;4(2):96–109. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42. Goodglass H, Kaplan E. Boston Diagnostic Aphasia Examination (BDAE). Philadelphia, PA: Lea and Febiger; 1983. [Google Scholar]
  • 43. Benedict RHB. Brief Visuospatial Memory Test-Revised (BVMT-R). Odessa, FL: Psychological Assessment Resources; 1997. [Google Scholar]
  • 44. Benton AL, Hamsher K. Multilingual Aphasia Examination. Iowa City, IA: University of Iowa; 1978. [Google Scholar]
  • 45. Freedman M, Leach L, Kaplan E, Winocur G, Shulman K, Delis D. Clock Drawing: A Neuropsychological Analysis. New York, NY: Oxford University Press; 1994. [Google Scholar]

Articles from American Journal of Alzheimer's Disease and Other Dementias are provided here courtesy of SAGE Publications