Abstract
Cognitive impairment (CI) and dementia can have profound social and emotional effects on older adults. Early detection of CI is imperative both for identifying potentially treatable conditions and for providing services that minimize the effects of CI in cases of dementia. While primary care settings are ideal for identifying CI, it frequently goes undetected there. We tailored a brief, iPad-based cognitive assessment (MyCog) for primary care settings and piloted it in a sample of older adults. Eighty participants were recruited from an existing cohort study and completed a brief, in-person interview. CI was determined based on a diagnosis of dementia or CI in the medical record or on a comprehensive cognitive battery performed within the past 18 months. MyCog had a sensitivity of 79% and a specificity of 82%, offering a practical, scalable primary care assessment for routine case finding of cognitive impairment and dementia.
Keywords: cognitive impairment, dementia, health services research, technology
Cognitive impairment (CI) is a significant public health concern that can have profound social and emotional effects on older adults. Prevalence estimates of CI vary, depending on the definition, with 5% to 36% of older adults having some type of CI and 3% to 42% experiencing Mild CI (MCI) (Ward et al., 2012).
Earlier detection of CI has many benefits for patients and their families. In some cases, early detection of CI may reveal reversible or treatable causes (e.g., depression, vitamin B12 deficiency) (Office of Disease Prevention and Health Promotion, n.d.). In cases that may lead to Alzheimer’s disease and related dementias (ADRD), early detection can give patients time to emotionally adjust and plan for the future, provide opportunities for treatments that reduce symptoms and optimize functional independence and quality of life, and allow safety concerns (e.g., driving, home environment) to be addressed or prevented (Alzheimer's Association, 2020).
With the growing aging population and shortage of geriatricians, primary care practices are managing the health needs of many older patients. Given their familiarity with patients’ baseline health status and their ability to monitor patients across time, involving primary care providers in early detection efforts is essential (Langa & Levine, 2014). Yet primary care practices contend with diverse patient populations, limited resources, and typically brief clinic visits (Day et al., 2014). As a result, CI and dementia often go undetected in primary care (Bradford et al., 2009).
A fundamental problem with early detection efforts is the lack of standard cognitive assessments that are both sensitive to early indicators of impairment and appropriate for use in resource-constrained primary care settings (Bradford et al., 2009). Instead, providers use a variety of methods that are inconsistently applied and may be less effective. Clinicians may rely solely on patients proactively self-reporting concerns, yet only about a third of affected adults acknowledge any cognitive problems (Centers for Disease Control and Prevention, 2013).
Our goal was to develop a brief measure (<10 min) for use in primary care settings for the early detection of CI, including dementia (CID). The MyCog assessment uses two well-validated, iPad-based measures from the NIH Toolbox for the Assessment of Neurological and Behavioral Function Cognitive Battery (NIHTB-CB) that address two of the first domains to show CI: Picture Sequence Memory (PSM), which assesses episodic memory, and Dimensional Change Card Sort (DCCS), which measures cognitive flexibility (Dikmen et al., 2014). The purpose of this study was to pilot, refine, and preliminarily validate the MyCog assessment to detect CID in a sample of community-dwelling older adults in primary care.
Methods
Participants
Participants were recruited from an existing, well-characterized cohort study originally drawn from one academic general internal medicine practice and two federally qualified health centers (Wolf et al., 2012). Participants were eligible for this study if they (1) had completed at least two assessments within each of four cognitive domains (processing speed, working memory, inductive reasoning, long-term memory) and one test of language within the past 18 months or (2) had a documented diagnosis of dementia or cognitive impairment.
To classify impairment, Z-scores were created for each test using regression-based criteria (Shirk et al., 2011), described for this sample in more detail in a previous publication (Lovett et al., 2020). Participants were considered impaired in a domain if at least two tests indicated impairment (Z < –1) (Jak et al., 2009). For the language domain, we used a more conservative cutoff (Z < –1.5) because only one measure was used (Petersen & Morris, 2005). Participants were considered cognitively impaired (CI) if they had a documented diagnosis or were impaired in at least one domain within the past 18 months. All others were considered cognitively normal (CN).
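The classification rule above can be sketched in code. This is an illustrative reconstruction with hypothetical data, not the study's actual analysis scripts; the function names and example values are assumptions for demonstration.

```python
# Sketch of the impairment classification rule described above:
# a multi-test domain is impaired if >= 2 tests fall below Z = -1;
# the single-measure language domain uses the stricter Z < -1.5.

def domain_impaired(z_scores, single_measure=False):
    """Return True if the domain meets the impairment criterion."""
    if single_measure:
        # Language: one test, conservative cutoff of Z < -1.5
        return z_scores[0] < -1.5
    # Multi-test domains: at least two tests with Z < -1
    return sum(z < -1.0 for z in z_scores) >= 2

def classify_participant(domains, has_diagnosis=False):
    """CI if documented diagnosis OR any impaired domain; otherwise CN."""
    if has_diagnosis:
        return "CI"
    impaired_any = any(
        domain_impaired(z, single_measure=(name == "language"))
        for name, z in domains.items()
    )
    return "CI" if impaired_any else "CN"

# Hypothetical participant: long-term memory has two tests below -1,
# so the participant is classified CI even without a diagnosis.
example = {
    "processing_speed": [-0.4, 0.2],
    "working_memory": [-1.2, 0.1],     # only one test below -1
    "long_term_memory": [-1.3, -1.6],  # two tests below -1 -> impaired
    "language": [-1.2],                # above -1.5 cutoff, not impaired
}
print(classify_participant(example))  # -> CI
```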
Normative Sample
The Advancing Reliable Measurement in Alzheimer’s Disease and Cognitive Aging (ARMADA) study is an on-going longitudinal study validating the NIHTB in individuals ages 65 to 85 who are cognitively healthy or have diagnoses of either MCI or dementia of the Alzheimer type (AD) (Weintraub et al., 2022). For this analysis, we used baseline data from 405 participants (n = 252 normal controls) recruited from nine different clinical centers through May 2020.
Procedure
Between January and March 2018, 80 participants completed a 30 to 45 min, in-person research interview including the full NIHTB-CB described below. Prior to any research activities, written consent was obtained from participants by trained, NIHTB-certified research coordinators.
Measures
The NIHTB-CB consists of seven iPad-based measures in five sub-domains and can be feasibly administered to older individuals, including those diagnosed with CI, MCI, or AD (Weintraub et al., 2014). NIHTB measures were originally normed with a sample representative of the U.S. population based on gender, race/ethnicity, and socioeconomic status (Casaletto et al., 2015).
MyCog Measures
MyCog consists of two well-validated measures from the NIHTB-CB: the PSM and DCCS. PSM measures episodic memory and takes approximately 6 min to complete. After successfully completing a practice trial, adults memorize the order of a 15-item sequence (Trial 1) and then an 18-item sequence (Trial 2). Participants receive one point per correctly recalled pair of adjacent pictures, with scores ranging from 0 to 31. The DCCS is an executive functioning test that requires an individual to match images based on either color or shape and takes 3 to 4 min to complete. The DCCS computed score incorporates both accuracy and reaction time, with scores ranging from 0 to 10.
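The PSM scoring rule stated above (one point per correctly recalled pair of adjacent pictures; 14 possible pairs in Trial 1 plus 17 in Trial 2 gives the 0 to 31 range) can be sketched as follows. This is our interpretation of the stated rule, not the official NIHTB scoring implementation.

```python
# Sketch of the stated PSM rule: count adjacent pairs in the
# participant's response that are also adjacent, in the same order,
# in the target sequence.

def psm_trial_score(response, target):
    """One point per response pair that matches a target-adjacent pair."""
    target_pairs = set(zip(target, target[1:]))
    return sum(pair in target_pairs for pair in zip(response, response[1:]))

target = list(range(15))                       # a 15-item Trial 1 sequence
perfect = psm_trial_score(target, target)      # all 14 pairs correct
partial = psm_trial_score([0, 1, 2, 4, 3, 5, 6], target)
print(perfect, partial)  # -> 14 3
```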
Other NIHTB-CB Measures
Additional tests from the NIHTB-CB assess working memory (List Sorting Working Memory Test [LSWM]), executive function (Flanker Inhibitory Control and Attention Test [Flanker]), language (Oral Reading Recognition [ORR], Picture Vocabulary Test [PV]), and processing speed (Pattern Comparison Processing Speed Test [PCPS]). Detailed descriptions of these measures are reported elsewhere (Weintraub et al., 2014). Age corrected standard scores (ACSS) were calculated for each measure, as well as composite scores for fluid (DCCS, PSM, Flanker, LSWM, PCPS), crystallized (ORR, PV), and all tests combined.
Statistical Analysis
Descriptive statistics (mean [SD], percentages) were used to characterize the pilot sample overall and by impairment status (CI, CN). The derived normative sample was characterized for comparison. Administration time for each measure was summarized using means (SD) and medians and interquartile ranges (IQR) overall and by group (CI, CN). Mean comparisons of administration time and scores by group were performed using t-tests. Spearman correlations were used to compare the PSM and DCCS with other NIHTB-CB measures.
MyCog Assessment Scoring
Due to potential time constraints, we also investigated an abbreviated PSM using only the first of its two trials. Because age-adjusted standard scores were not calculated by trial in the NIHTB normative sample, we created our own. Specifically, we created regression-based normative models adjusting each of the measures (i.e., PSM Trial 1, the full PSM, and DCCS) for age using the normative sample described above. These models were applied to PSM raw scores and DCCS computed scores from the pilot sample, and Z-scores for each test were created using the predicted means and the RMSE as an approximation of the standard deviation. To create a MyCog composite score, we applied the formulas to the entire ARMADA sample (impaired and non-impaired) and calculated Z-scores for each test. Logistic regression analyses were performed with the Z-scores as predictors and impairment status as the outcome to obtain parameter estimates to use as weights. This formula was applied to the CN group in order to scale the scores of our test sample, resulting in a combined, weighted, age-adjusted Z-score. All Z-scores were transformed to age-corrected standard scores (ACSS) with mean = 100 and SD = 15.
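The regression-based normative scoring above can be illustrated with a short sketch. The data here are simulated and the composite weights are hypothetical stand-ins; the study's actual models were fit on the ARMADA sample, with weights from a logistic regression.

```python
import numpy as np

# Sketch of regression-based normative scoring: fit score ~ age in a
# normative sample, then Z = (observed - predicted) / RMSE, and
# ACSS = 100 + 15 * Z.

rng = np.random.default_rng(0)

# Simulated normative sample: raw scores decline modestly with age
age = rng.uniform(65, 85, 300)
raw = 25 - 0.15 * (age - 65) + rng.normal(0, 3, 300)

# Linear fit of score on age; RMSE approximates the normative SD
slope, intercept = np.polyfit(age, raw, 1)
pred = intercept + slope * age
rmse = np.sqrt(np.mean((raw - pred) ** 2))

def z_score(raw_score, participant_age):
    """Age-adjusted Z: (observed - predicted) / RMSE."""
    return (raw_score - (intercept + slope * participant_age)) / rmse

def to_acss(z):
    """Transform Z to an age-corrected standard score (mean 100, SD 15)."""
    return 100 + 15 * z

# Composite: weighted sum of per-test Z-scores; these weights are
# hypothetical, standing in for the logistic-regression estimates
w_psm, w_dccs = 0.6, 0.4
z_composite = w_psm * z_score(18, 72) + w_dccs * (-0.5)
print(round(to_acss(z_composite), 1))
```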
Detection of Cognitive Impairment
To evaluate the efficacy of the MyCog paradigm as a detection tool, we examined receiver operator characteristic (ROC) curves predicting CI using ACSSs for each test and the MyCog composite score. Cut-offs of 1 SD below the mean (ACSS < 85) were used to determine impairment per measure. Sensitivity, specificity, and accuracy were calculated.
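The cutoff-based metrics above can be sketched as follows, using hypothetical scores and labels rather than the study data: an ACSS below 85 flags impairment, sensitivity is the share of truly impaired participants flagged, and specificity is the share of cognitively normal participants correctly passed.

```python
# Minimal sketch of cutoff-based diagnostic accuracy at ACSS < 85.

def screen_metrics(scores, impaired, cutoff=85):
    """Sensitivity, specificity, and accuracy at a fixed score cutoff."""
    flags = [s < cutoff for s in scores]
    tp = sum(f and i for f, i in zip(flags, impaired))          # flagged, impaired
    tn = sum((not f) and (not i) for f, i in zip(flags, impaired))
    fp = sum(f and (not i) for f, i in zip(flags, impaired))    # false alarm
    fn = sum((not f) and i for f, i in zip(flags, impaired))    # missed case
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / len(scores),
    }

# Hypothetical example: 4 impaired (3 below cutoff), 4 normal (1 below)
scores   = [78, 82, 90, 80, 95, 100, 84, 110]
impaired = [True, True, False, True, False, False, False, True]
m = screen_metrics(scores, impaired)
print(m)  # sensitivity 0.75, specificity 0.75, accuracy 0.75
```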
Results
Sample Characteristics
Descriptive statistics for the pilot validation sample are shown in Table 1 overall and by impairment status. Participants were on average 72 years old; 39% self-reported Black race and 57% White race; just over half had earned at least a college degree, while 20% had a high school education or less.
Table 1.
Sample Characteristics.
| | Pilot sample: Total (N = 80) | Pilot sample: Cognitively normal (N = 51) | Pilot sample: Cognitively impaired (N = 29, 36%) | Normative sample: Non-impaired (N = 252) |
|---|---|---|---|---|
| Age, mean (SD); range | 72.1 (6.2); 63–83 | 71.9 (6.4); 63–83 | 72.4 (5.8); 63–83 | 73.1 (5.1); 65–84 |
| Age categories, % | ||||
| 65–69 | 41 | 47 | 31 | 26 |
| 70–74 | 19 | 12 | 31 | 38 |
| 75–79 | 26 | 27 | 24 | 21 |
| 80–84 | 14 | 14 | 14 | 15 |
| Female, % | 76 | 84 | 62 | 73 |
| Race, % | ||||
| Black | 39 | 27 | 59 | 46 |
| White | 57 | 69 | 38 | 53 |
| Other/not-reported | 4 | 4 | 3 | 1 |
| Education, % | ||||
| High school or less | 20 | 12 | 34 | 8 |
| Some College | 28 | 27 | 28 | 18 |
| College Degree | 20 | 24 | 14 | 26 |
| Graduate Degree | 32 | 37 | 24 | 48 |
Associations Between NIHTB-CB Measures
Correlations of PSM and DCCS with other NIHTB-CB measures and composite scores are shown in Table 2. PSM correlated most strongly with the working memory test (r = .56, p < .001), while DCCS correlated most strongly with Flanker, another test of executive function (r = .63, p < .001). Both PSM and DCCS had moderate to strong correlations with the fluid composite score (PSM r = .62, DCCS r = .78, both p < .001). The correlation between PSM and DCCS themselves was low (r = .26, p = .03).
Table 2.
Correlations of Selected Tests With Other NIHTB-CB Measures.
| | PSM | DCCS |
|---|---|---|
| Domain | Episodic memory | Executive function |
| *Working memory* | | |
| LSWM | 0.56** | 0.36* |
| *Executive function/attention* | | |
| Flanker | 0.21 | 0.63** |
| *Language* | | |
| ORR | 0.52** | 0.54** |
| PV | 0.55** | 0.52** |
| *Processing speed* | | |
| PCPS | 0.21 | 0.49** |
| *Composite scores* | | |
| Fluid | 0.62** | 0.78** |
| Crystallized | 0.53** | 0.54** |
| Total | 0.63** | 0.71** |
Note. NIHTB-CB = NIH Toolbox for the Assessment of Neurological and Behavioral Function Cognitive Battery; PSM = Picture Sequence Memory; DCCS = Dimensional Change Card Sort; LSWM = List Sorting Working Memory Test; Flanker = Flanker Inhibitory Control and Attention Test; ORR = Oral Reading Recognition; PV = Picture Vocabulary Test; PCPS = Pattern Comparison Processing Speed Test.
*p < .01. **p < .001.
Timing
Administration times for the MyCog measures are reported for the full sample and by CI status in Table 3. DCCS averaged 309 s while PSM averaged 490 s; together, these two measures required more than 13 min, and administration took significantly longer for CI than for CN participants. Without the second PSM trial, the total administration time dropped to less than 10 min, on average.
Table 3.
Timing of and Scores for MyCog Components, Overall and by Impairment Status.
| | Total sample (N = 80) | Cognitively normal (N = 51) | Cognitively impaired (N = 29, 36%) | P value |
|---|---|---|---|---|
| | Mean (SD); Median, IQR | Mean (SD); Median, IQR | Mean (SD); Median, IQR | |
| Timing (s) | ||||
| PSM + DCCS | 791.2 (82.9); 780, 727–841 | 774.9 (82.1); 750, 720–840 | 828.9 (73.2); 836, 780–876 | .01 |
| PSM Trial 1 + DCCS | 597.3 (60.6); 593, 554–642 | 584.2 (58.4); 580, 549–618 | 627.7 (55.7); 620, 599–660 | .004 |
| Scores* | ||||
| PSM | 92.5 (14.4); 89.1, 81.9–98.7 | 98.5 (14.5); 96.9, 87.2–107.8 | 81.9 (5.5); 80.9, 78.4–86.1 | <.001 |
| PSM Trial 1 | 94.8 (15.2); 89.3, 84.2–102.2 | 100.1 (16.5); 99.0, 88.1–105.9 | 85.5 (5.5); 84.8, 81.8–88.2 | <.001 |
| DCCS | 90.6 (24.0); 96, 87–106 | 99.5 (11.1); 99, 90–107 | 74.9 (31.7); 88.1, 48–97 | <.001 |
| MyCog Score | 92.8 (16.4); 90.0, 83.3–101.4 | 99.9 (15.3); 97.2, 89.3–106.1 | 80.3 (9.4); 81.9, 74.2–85.3 | <.001 |
*All scores are age corrected standard scores (mean = 100, SD = 15).
Note. PSM = Picture Sequence Memory; DCCS = Dimensional Change Card Sort.
Scoring
Scoring formulas were derived using the ARMADA normative sample, as described in the Methods.
Age-corrected standard scores created by applying these formulas to the MyCog pilot sample are shown in Table 3 overall and by CI status. CN participants performed similarly to the normative sample on all MyCog measures, with mean (SD) scores of 99.5 (11.1) on the DCCS, 100.1 (16.5) on PSM Trial 1, and 99.9 (15.3) on the MyCog composite score. For comparison, scores were slightly lower when both PSM trials were included (98.5 [14.5]). CI participants performed significantly worse than CN participants on all measures (p < .001; Table 3).
Efficacy of MyCog
Receiver operator characteristic (ROC) curves for PSM Trial 1, DCCS, and the MyCog weighted score are shown in Figure 1. Area under the ROC curves (AUC) were 0.82 for PSM Trial 1 and 0.72 for the DCCS. Combining the two scores into a single composite produced an AUC of 0.89. The AUC for the abbreviated version of the PSM was not substantially lower than it was when using both trials (Trial 1 AUC = 0.82 vs. Trials 1 and 2 = 0.87).
Figure 1.
ROC curves of age corrected standard scores.
Note. ROC = receiver operator characteristic; PSM = Picture Sequence Memory; DCCS = Dimensional Change Card Sort.
Diagnostic Accuracy
Based on an ACSS < 85 indicating CI, 24 (30%) participants were impaired on PSM Trial 1, 16 (20%) on DCCS, and 32 (40%) on the weighted composite. Sensitivity, specificity, and accuracy for all scores are shown in Table 4. Each measure alone had low sensitivity (PSM 55%, DCCS 45%) and high specificity (PSM 84%, DCCS 94%). The MyCog composite score had a sensitivity of 79% and specificity of 82%.
Table 4.
Diagnostic Accuracy of MyCog Assessment Based on Standard Scores <85.
| N (%) impaired | Sensitivity | Specificity | Accuracy | |
|---|---|---|---|---|
| Test | ||||
| PSM Trial 1 | 24 (30%) | 55 | 84 | 74 |
| DCCS | 16 (20%) | 45 | 94 | 76 |
| MyCog Score | 32 (40%) | 79 | 82 | 81 |
Note. PSM = Picture Sequence Memory, DCCS = Dimensional Change Card Sort.
Discussion
The MyCog assessment may offer a practical, scalable measure for routine case finding of CID in primary care. By leveraging common technologies such as an iPad tablet, MyCog removes the need for clinicians to administer and interpret cognitive assessments and allows results to auto-populate into the electronic health record (EHR), further reducing staff burden. More broadly, there are many factors to consider when introducing cognitive testing into everyday clinical settings.
First, our original paradigm took longer for older adults than we anticipated. However, we were able to shorten the task to less than 10 min by omitting one of the two PSM trials, with minimal or no loss of diagnostic accuracy. Even so, a test that takes more than 5 min may be a formidable disruption to clinical workflows and primary care protocols. Any additional time may need to be balanced by other accommodations that reduce staff workload, improve efficiency, or add clinical benefit. We have already begun to address modifications to the MyCog paradigm that may further improve its acceptability and feasibility for primary care practices. This includes the development of a self-administered version of the assessment that would require no staff proctoring. Other modifications in progress include tethering the iPad to the EHR to share results, with “turnkey” clinical recommendations on how to standardize responses to any detected CI.
Beyond the timing of the assessment, preliminary findings on MyCog’s diagnostic accuracy demonstrate that it is comparable to existing, manually administered cognitive screeners for detecting CID (Scott & Mayo, 2018). Further evaluations will be necessary to fully understand the test’s performance and to set the most appropriate thresholds for determining CI. In primary care specifically, it could be argued that specificity should be prioritized over sensitivity. A false positive, in the context of currently limited capacity and long wait times for referrals to behavioral neurology, neuropsychology, or other subspecialties, may pose the greater risk of causing psychological harm (Lin et al., 2013), especially given the current lack of viable treatments for CI or ADRD. If testing is used routinely at Medicare Annual Wellness Visits, a missed case has the benefit of a follow-up test in the near future.
As a preliminary validation study of the MyCog assessment, our investigation has limitations. Our analyses were conducted in a small, predominately female convenience sample recruited from an ongoing cohort study. When possible, cases of CID were determined by EHR review of appropriate ICD diagnoses, but some participants were classified by objective performance on a neuropsychological battery. While classification methods varied, under-detection of CI in primary care is all too common and justifies the need for new tools designed around primary care barriers.
Conclusion
This is the initial validation of the MyCog assessment; future studies should continue to examine its implementation and performance in diverse primary care practices. New clinical validation studies of a self-administered MyCog version are now underway. In parallel, we are working to link the MyCog app to the EHR in anticipation of testing our paradigm in a clinic-randomized trial, with a specific interest in determining the tool’s fidelity in practice.
Footnotes
The author(s) declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: Prof. Curtis reports grants from the NIH (NIA, NIDDK, NINDS, NCI) and the Gordon and Betty Moore Foundation. Ms. Batio reports grants from the NIH (NIA, NIDDK, NINDS) and the Gordon and Betty Moore Foundation. Ms. Yoshino Benavente reports grants from the NIH (NIA, NINDS, NIDDK) and the Gordon and Betty Moore Foundation. Dr. Shono reports grants from the NIH (NIA). Dr. Nowinski reports grants and contracts from the NIH (NIA, NINDS, NICHD) and the FDA. Dr. Lovett reports grants from the NIH (NIA, NINDS). Dr. Yao reports grants from the NIH (NIA, NINDS). Dr. Gershon reports grants from the NIH (NIA, NINDS, NICHD, Office of the Director). Dr. Wolf reports grants from the NIH (NIA, NIDDK, NINR, NHLBI, NINDS), Gordon and Betty Moore Foundation, and Eli Lilly, and personal fees from Pfizer, Sanofi, Luto UK, University of Westminster, and Lundbeck.
Funding: The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the National Institutes of Neurological Disorders and Stroke [grant number UG3NS105562/UH3NS105562].
Human Subjects Approval: The institutional review board of Northwestern University approved all study procedures (STU00206451).
ORCID iDs: Laura M. Curtis
https://orcid.org/0000-0003-2380-2201
Yusuke Shono
https://orcid.org/0000-0002-7006-1816
Rebecca M. Lovett
https://orcid.org/0000-0003-0169-9485
References
- Alzheimer's Association. (2020). 2020 Alzheimer's disease facts and figures. Alzheimer's & Dementia, 14(3), 367–429. 10.1002/alz.12068 [DOI] [Google Scholar]
- Bradford A., Kunik M. E., Schulz P., Williams S. P., Singh H. (2009). Missed and delayed diagnosis of dementia in primary care: Prevalence and contributing factors. Alzheimer Disease and Associated Disorders, 23(4), 306–314. 10.1097/WAD.0b013e3181a6bebc [DOI] [PMC free article] [PubMed] [Google Scholar]
- Casaletto K. B., Umlauf A., Beaumont J., Gershon R., Slotkin J., Akshoomoff N., Heaton R. K. (2015). Demographically corrected normative standards for the English version of the NIH Toolbox Cognition Battery. Journal of the International Neuropsychological Society, 21(5), 378–391. 10.1017/S1355617715000351 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Centers for Disease Control and Prevention. (2013). Self-reported increased confusion or memory loss and associated functional difficulties among adults aged >/= 60 years - 21 states, 2011. MMWR Morbidity and Mortality Weekly Report, 62(18), 347–350. https://www.ncbi.nlm.nih.gov/pubmed/23657108 [PubMed] [Google Scholar]
- Day H., Eckstrom E., Lee S., Wald H., Counsell S., Rich E. (2014). Optimizing health for complex adults in primary care: Current challenges and a way forward. Journal of General Internal Medicine, 29(6), 911–914. 10.1007/s11606-013-2749-x [DOI] [PMC free article] [PubMed] [Google Scholar]
- Dikmen S. S., Bauer P. J., Weintraub S., Mungas D., Slotkin J., Beaumont J. L., Gershon R., Temkin N. R., Heaton R. K. (2014). Measuring episodic memory across the lifespan: NIH Toolbox Picture Sequence Memory Test. Journal of the International Neuropsychological Society, 20(6), 611–619. 10.1017/S1355617714000460 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Jak A. J., Bondi M. W., Delano-Wood L., Wierenga C., Corey-Bloom J., Salmon D. P., Delis D. C. (2009). Quantification of five neuropsychological approaches to defining mild cognitive impairment. The American Journal of Geriatric Psychiatry, 17(5), 368–375. 10.1097/JGP.0b013e31819431d5 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Langa K. M., Levine D. A. (2014). The diagnosis and management of mild cognitive impairment: A clinical review. JAMA, 312(23), 2551–2561. 10.1001/jama.2014.13806 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lin J. S., O'Connor E., Rossom R. C., Perdue L. A., Eckstrom E. (2013). Screening for cognitive impairment in older adults: A systematic review for the U.S. Preventive Services Task Force. Annals of Internal Medicine, 159(9), 601–612. 10.7326/0003-4819-159-9-201311050-00730 [DOI] [PubMed] [Google Scholar]
- Lovett R. M., Curtis L. M., Persell S. D., Griffith J. W., Cobia D., Federman A., Wolf M. S. (2020). Cognitive impairment no dementia and associations with health literacy, self-management skills, and functional health status. Patient Education and Counseling, 103(9), 1805–1811. 10.1016/j.pec.2020.03.013 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Office of Disease Prevention and Health Promotion. (n.d.). Dementias, including Alzheimer's disease. Healthy People 2020. U.S. Department of Health and Human Services. https://www.healthypeople.gov/2020/topics-objectives/topic/dementias-including-alzheimers-disease [Google Scholar]
- Petersen R. C., Morris J. C. (2005). Mild cognitive impairment as a clinical entity and treatment target. Archives of Neurology, 62(7), 1160–1163; discussion 1167. 10.1001/archneur.62.7.1160 [DOI] [PubMed] [Google Scholar]
- Scott J., Mayo A. M. (2018). Instruments for detection and screening of cognitive impairment for older adults in primary care settings: A review. Geriatric Nursing, 39(3), 323–329. 10.1016/j.gerinurse.2017.11.001 [DOI] [PubMed] [Google Scholar]
- Shirk S. D., Mitchell M. B., Shaughnessy L. W., Sherman J. C., Locascio J. J., Weintraub S., Atri A. (2011). A web-based normative calculator for the uniform data set (UDS) neuropsychological test battery. Alzheimer's Research & Therapy, 3(6), 32. 10.1186/alzrt94 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Ward A., Arrighi H. M., Michels S., Cedarbaum J. M. (2012). Mild cognitive impairment: Disparity of incidence and prevalence estimates. Alzheimer's & Dementia, 8(1), 14–21. 10.1016/j.jalz.2011.01.002 [DOI] [PubMed] [Google Scholar]
- Weintraub S., Dikmen S. S., Heaton R. K., Tulsky D. S., Zelazo P. D., Slotkin J., Carlozzi N. E., Bauer P. J., Wallner-Allen K., Fox N., Havlik R., Beaumont J. L., Mungas D., Manly J. J., Moy C., Conway K., Edwards E., Nowinski C. J., Gershon R. (2014). The cognition battery of the NIH toolbox for assessment of neurological and behavioral function: Validation in an adult sample. Journal of the International Neuropsychological Society, 20(6), 567–578. 10.1017/s1355617714000320 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Weintraub S., Karpouzian-Rogers T., Peipert J. D., Nowinski C., Slotkin J., Wortman K., Ho E., Rogalski E., Carlsson C., Giordani B., Goldstein F., Lucas J., Manly J. J., Rentz D., Salmon D., Snitz B., Dodge H. H., Riley M., Eldes F., . . . Gershon R. (2022). ARMADA: Assessing reliable measurement in Alzheimer's disease and cognitive aging project methods. Alzheimer's & Dementia, 18(8), 1449–1460. 10.1002/alz.12497 [DOI] [PMC free article] [PubMed] [Google Scholar]
- Wolf M. S., Curtis L. M., Wilson E. A., Revelle W., Waite K. R., Smith S. G., Weintraub S., Borosh B., Rapp D. N., Park D. C., Deary I. C., Baker D. W. (2012). Literacy, cognitive function, and Health: Results of the LitCog Study. Journal of General Internal Medicine, 27(10), 1300–1307. 10.1007/s11606-012-2079-4 [DOI] [PMC free article] [PubMed] [Google Scholar]

