Abstract
Introduction
Annual cognitive screening in older adults is essential for early detection of cognitive impairment, yet primary care settings face time constraints that present barriers to routine screening. A remote cognitive screener completed on a patient’s personal smartphone before a visit has the potential to save primary care clinics time, encourage broader screening practices and increase early detection of cognitive decline. MyCog Mobile is a promising new remote smartphone-based cognitive screening app for primary care settings. We propose a combined construct and clinical validation study of MyCog Mobile.
Methods and analysis
We will recruit a total sample of 300 adult participants aged 65 years and older. A subsample of 200 healthy adult participants and a subsample of 100 adults with a cognitive impairment diagnosis (ie, dementia, mild cognitive impairment, cognitive deficits or other memory loss) will be recruited from the general population and specialty memory care centres, respectively. To evaluate the construct validity of MyCog Mobile, the healthy control sample will self-administer MyCog Mobile on study-provided smartphones and be administered a battery of gold-standard neuropsychological assessments. We will compare correlations between performance on MyCog Mobile and measures of similar and dissimilar constructs to evaluate convergent and discriminant validity. To assess clinical validity, participants in the clinical sample will self-administer MyCog Mobile on a smartphone and be administered a Mini-Cog screener; these data will then be combined with data from the healthy control sample. We will apply several supervised model types to determine the best predictors of cognitive impairment within the sample. Area under the receiver operating characteristic curve, accuracy, sensitivity and specificity will be the primary performance metrics for clinical validity.
Ethics and dissemination
The Institutional Review Board at Northwestern University (STU00214921) approved this study protocol. Results will be published in peer-reviewed journals and summaries provided to the study’s funders.
Keywords: aging, primary care, dementia, telemedicine, geriatric medicine
STRENGTHS AND LIMITATIONS OF THIS STUDY
MyCog Mobile is an innovative smartphone-based cognitive screener that older adults can self-administer remotely on their own smartphones prior to a primary care visit; the present study will determine how well MyCog Mobile assesses the target cognitive constructs and how accurately it can classify cognitive impairments versus healthy cognitive functioning.
Our study will recruit a diverse sample of older adults with (n=100) and without (n=200) cognitive impairments, which will be adequately powered to employ advanced predictive modelling techniques.
Given concerns about fatigue in older adults with existing impairments, the clinical sample will be administered an abbreviated battery only, limiting our potential analyses.
MyCog Mobile may not be appropriate for older adults who have little experience with smartphones or are already experiencing significant cognitive decline, and potential self-selection bias in our sample is a limitation.
Introduction
Pathological cognitive decline in older adulthood is a serious global health crisis and differs from normal cognitive ageing.1 Cognitive impairments may result from Alzheimer’s disease and related dementias,2–5 neurological disorders (eg, Parkinson’s disease,6 multiple sclerosis7) or reversible causes such as infections or medications.8 Early detection of cognitive impairment is important to identify potentially reversible causes, manage symptoms and comorbidities, determine appropriate clinical care and caregiver involvement, and help families plan for the future.9 10 Primary care visits offer an important opportunity for early detection of cognitive decline in adults over age 65.11 However, primary care clinics face significant barriers to routine cognitive screening, such as lack of time to conduct screenings and limited training and resources for dementia care.12 MyCog Mobile is a remote cognitive screener developed to address these barriers.
MyCog Mobile is a remote, smartphone-based cognitive screening system in which participants can self-administer cognitive measures on their own smartphones prior to a primary care visit. Scores are automatically sent to their primary care clinician’s electronic health record (EHR) and trigger appropriate clinical decision-making support recommendations.13 MyCog Mobile was developed as a companion to MyCog, a tablet-based app that is self-administered in person in a clinical setting.14 MyCog Mobile offers many of the same advantages as MyCog, including self-administration, automatic scoring, integration with the clinician’s EHR and clinical decision-making support. However, MyCog Mobile differs from MyCog in several important ways: (1) it is intended to be self-administered in a completely unsupervised setting, (2) it is completed on a personal smartphone device and (3) it is intended to be completed before the in-person visit begins.
MyCog Mobile underwent a human-centred design process to ensure an optimal user experience for older adults,13 and has demonstrated initial evidence of feasibility and reliability in a pilot study.15 Findings from these studies support the present proposal: a construct and clinical validation of MyCog Mobile. Our primary research questions include: (1) Is there evidence of construct validity of the MyCog Mobile measures when compared with ‘gold-standard’ neuropsychological assessments and cognitive screeners? and (2) Can performance on MyCog Mobile accurately differentiate between older adults with cognitive impairment and healthy controls?
Method
Sample
We will recruit a sample of 300 participants in total, with 200 healthy controls and 100 participants with cognitive impairments. We will aim to stratify by the demographic criteria in table 1. Participants will be recruited through two sources. The healthy control sample will be recruited by a market research agency through their large, nationally representative database of research volunteers. Participants in the clinical sample will be recruited through specialty care clinics located in the Chicagoland area. All participants will be compensated and provide informed consent prior to participation. Participants in the clinical sample will also be evaluated with the Decision-Making Audit Tool16 to determine whether they are able to consent to participation.
Table 1.
Target stratification goals and minimum subsample sizes (N=300 total)
| Stratification criteria | Clinical sample (n=100) | Healthy control (n=200) |
| Age | | |
| 65–69 years | 25 (25%) | 50 (25%) |
| 70–74 years | 25 (25%) | 50 (25%) |
| 75–79 years | 25 (25%) | 50 (25%) |
| 80 and older | 25 (25%) | 50 (25%) |
| Gender | | |
| Female | 40 (40%) | 80 (40%) |
| Male | 40 (40%) | 80 (40%) |
| Race/ethnicity* | | |
| Black/African American | 15 (15%) | 30 (15%) |
| Hispanic | 20 (20%) | 40 (20%) |
| Education* | | |
| Less than a high school diploma/GED | 10 (10%) | 20 (10%) |
| High school diploma/GED | 35 (35%) | 70 (35%) |
*In order to ensure a sufficiently diverse and generalisable sample, we will aim to recruit a maximum of 70% participants who identify as non-Hispanic white, a maximum of 50% with a bachelor’s degree and a maximum of 20% participants with an advanced degree.
All participants must be aged 65 or older, speak English and provide informed consent prior to participation. Participants in the healthy control sample must have a Mini-Cog score of 5 or higher, while participants in the clinical sample must have a Mini-Cog score <5 and a documented diagnosis related to mild cognitive impairment, dementia, cognitive deficits or other memory loss (table 2). Data collection will begin in March of 2024 and is expected to be completed by October of 2024.
Table 2.
Pertinent ICD codes for cognitive deficits and impairment
| | ICD-9 | ICD-10 |
| Dementia | 290.0–290.4, 291.2, 292.82, 294.1, 294.10, 294.11, 294.20, 294.21, 331.8 | F01.X, F10.97, F10.27, F19.17, F19.97, F19.27, F02.X, F03.X, G31.83 |
| MCI | 331.83 | G31.84 |
| Cognitive deficits | 438 | 69.91, R41.81, R41.84 |
| Other memory loss | 780.93 | R41.3 |
Measures
MyCog Mobile measures
MyFaces
MyFaces is an associative memory test originally developed by Rentz and colleagues to predict cerebral amyloid beta burden.17 The MyCog Mobile version of this task was adapted from the Mobile Toolbox Faces and Names Test, which was also based on the original.18 Participants are first shown 12 pictures of people paired with their names. After an approximately 5 to 10 min delay, participants’ memory is tested in three subtests. The first subtest (Recognition) asks the participant to select the person they saw in the learning trial from three options. The second subtest (First Letter) asks participants to indicate the first letter of the name of the person presented on the screen (figure 1). The third subtest (Name Matching) asks participants to select the name of the person presented from among three possible response options. A raw accuracy score is given for each of the three subtests.
Figure 1.
Example screens from each of the MyCog Mobile measures from left to right: MyFaces, MySorting, MyPictures, MySequences.
MySorting
MySorting is a measure of executive function and cognitive flexibility adapted from the MyCog Dimensional Change Card Sorting14 and Mobile Toolbox Shape-Color Sorting Test.18 Respondents are asked to sort images across two dimensions, shape and colour, as quickly as they can. The relevant dimension for sorting is indicated by a cue word (‘shape’ or ‘colour’) that appears on the screen (figure 1). Scores are given for accuracy and response speed.
MyPictures
MyPictures is a measure of episodic memory adapted from MyCog Picture Sequence Memory14 and the Arranging Pictures task in the Mobile Toolbox.18 A series of 16 images depicting independent, non-sequential activities is presented in a specific order and placed in specific, sequential locations on the screen. Following this presentation, the images are scrambled, and the participant is asked to recall the original position of the images accordingly (figure 1). There are two trials. Scores are given for exact match (the number of pictures in the correct positions) as well as adjacent pairs (the number of correctly ordered pairs of pictures next to each other) on each trial.
MySequences
MySequences is a measure of working memory adapted from the Mobile Toolbox Sequences Test.18 MySequences requires participants to remember strings of letters and numbers and arrange them in order, with the letters in alphabetical order first and then the numbers in ascending numerical order (figure 1). Trials begin with strings of three alphanumeric characters and increase in length, reaching a maximum difficulty of 10 characters. Scores reflect the number of correct trials.
External cognitive screeners
Mini-Cog
The Mini-Cog is a brief and widely used cognitive screening tool designed to assess memory and executive function.19 It consists of two components: a three-word recall task and a clock drawing task. In the recall task, individuals are presented with three unrelated words and asked to remember and later recall them, assessing short-term memory. The clock drawing task evaluates visuospatial and executive functioning by requiring the individual to draw a clock face showing a specified time. The clock drawing task is administered during the delay between the three-word learning and recall trials. The Mini-Cog is estimated to have a sensitivity of 91% and a specificity of 86% for detecting dementia.
MyCog
MyCog is a tablet-based cognitive screener intended to be self-administered during the rooming process of a primary care visit. MyCog contains two assessments: Dimensional Change Card Sorting (DCCS) and Picture Sequence Memory (PSM). DCCS is the MyCog counterpart to MyCog Mobile’s MySorting; the paradigm is the same except that it is administered on a tablet. PSM is the MyCog counterpart to MyCog Mobile’s MyPictures, except that the series contains 14 images presented in a single trial. A pilot study found that MyCog demonstrated 79% sensitivity and 82% specificity for detecting cognitive impairment.20
Gold-standard neuropsychological battery
Verbal Paired Associates Test
A subtest from the Wechsler Memory Scale, fourth edition (WMS-IV) administered via the Q-Interactive tablet app.21 The Verbal Paired Associates Test assesses memory for associated word pairs. A list of 14 word pairs is read to the examinee. The examinee is then asked to provide the associated word when given the first word of the pair. This task is repeated across four trials and feedback is given regarding performance on each item. After a 20-min delay, the examinee is asked to recall the paired word without performance feedback. A yes/no recognition test of word pairs and a free-recall test of words from the word pairs are administered after the delayed memory trial.
Color-Word Interference Test
The Color Word Interference Test (CWIT), subtest from the Delis Kaplan Executive Functioning System (D-KEFS), assesses inhibitory control and ability to switch between tasks.22 The CWIT involves presenting the participants with a series of colour words (eg, red, blue, green) displayed in different ink colours and asking them to name the ink colour as quickly and accurately as possible while ignoring the meaning of the word. This task is designed to measure the ability to inhibit the automatic response of reading the word and to switch between different rules (ie, naming the ink colour rather than reading the word). Primary scores include completion time and errors.
Trail Making Test
The Trail Making Test (TMT) is a neuropsychological test commonly used to assess cognitive function, particularly attention, mental flexibility and executive functions.22 The test consists of two parts, TMT A and TMT B, in which the participant is asked to connect a sequence of numbered or lettered circles in ascending order as quickly as possible. TMT A is designed to measure simple attention and visual scanning abilities, while TMT B requires the participant to alternate between numbers and letters, which adds a cognitive flexibility component. Primary scores include completion time and errors.
Letter Number Sequencing
The Letter Number Sequencing subtest from the Wechsler Adult Intelligence Scale, Fourth Edition (WAIS-IV) assesses working memory and cognitive flexibility.23 The task involves presenting the participant with a series of letters and numbers (eg, C, 7, K, 3) and asking them to repeat the sequence back with the numbers in ascending order followed by the letters in alphabetical order. The difficulty of the task progressively increases as longer sequences are presented.
Procedure
All participants will first be administered the Mini-Cog to ensure they continue to meet inclusion criteria. The healthy control sample will be invited to the lab space to complete assessments. They will then self-administer MyCog Mobile (~20 min) on study-provided iPhones and be administered a battery of gold standard neuropsychological assessments by trained examiners (~60 min). Participants will not be told which measures are considered gold standard versus experimental (ie, MyCog Mobile).
The clinical sample will be evaluated within the specialty memory care clinics from which they were recruited. Due to concerns for fatigue during extensive testing with patients with AD/ADRD,24 25 the clinical sample will be administered an abbreviated battery which will only include the Mini-Cog and MyCog Mobile measures (~25 min).
Patient involvement
We received feedback from older adults during the design process of the MyCog Mobile app via Shared Resource Panels (ShARPs) offered through Northwestern University Feinberg School of Medicine Center for Community Health.13 ShARPs are custom panels that bring together 8–10 community stakeholders with personal expertise related to a research project. Feedback from this panel informed the design and implementation of the MyCog Mobile app.
Analysis
All analyses will be conducted in R statistical software,26 and packages and code will be made publicly available. Exploratory data analysis (eg, descriptive statistics, correlations, scatter plots) will be conducted on the overall sample and stratified by the clinical and healthy control groups. The score distributions of each MyCog Mobile measure will be evaluated for floor and ceiling effects.
Power analysis
We used standard guidelines to evaluate the acceptability of the receiver operating characteristic area under the curve (ROC AUC).27 28 In the pilot MyCog validation study, we achieved an excellent ROC AUC value of 0.96 with a pilot sample of n=86 and an impairment rate of 22%. In the present study, we plan to use cross-validated machine learning methods splitting 70% of the sample into a training set and 30% of the sample into a testing set. As such, a total sample of 300 participants (200 controls and 100 cases) will ensure that the relatively smaller testing set (n=100) is 80% powered at a 0.05 significance level to detect a conservative ROC AUC value of 0.67, with at least 30 clinical cases.
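The power figure above can be approximated with the Hanley-McNeil (1982) standard error for an estimated AUC. The sketch below is an illustration under assumptions we chose for this example (normal approximation, one-sided alpha of 0.05, 30 cases and 70 controls in the holdout set); it is not necessarily the exact calculation used for the protocol.

```python
# Approximate power to detect ROC AUC = 0.67 vs chance (0.5) in the n=100
# holdout set (30 cases, 70 controls), via the Hanley-McNeil SE formula.
from math import sqrt
from statistics import NormalDist

def hanley_mcneil_se(auc, n_cases, n_controls):
    """Hanley-McNeil (1982) standard error of an estimated AUC."""
    q1 = auc / (2 - auc)
    q2 = 2 * auc**2 / (1 + auc)
    var = (auc * (1 - auc)
           + (n_cases - 1) * (q1 - auc**2)
           + (n_controls - 1) * (q2 - auc**2)) / (n_cases * n_controls)
    return sqrt(var)

def auc_power(auc_alt, n_cases, n_controls, alpha=0.05, auc_null=0.5):
    """Approximate power of a one-sided test of H0: AUC = auc_null."""
    se0 = hanley_mcneil_se(auc_null, n_cases, n_controls)
    se1 = hanley_mcneil_se(auc_alt, n_cases, n_controls)
    z_alpha = NormalDist().inv_cdf(1 - alpha)
    return NormalDist().cdf((auc_alt - auc_null - z_alpha * se0) / se1)

power = auc_power(0.67, n_cases=30, n_controls=70)
print(round(power, 3))  # roughly 0.86 under these one-sided assumptions
```

Under these illustrative assumptions the holdout set comfortably exceeds 80% power, consistent with the protocol's claim.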
Reliability
Internal consistency will be assessed within the clinical and control groups using methods aligned with each task’s paradigm. For MySorting and MySequences, we will calculate the median Spearman-Brown coefficient across bootstrapped random split-halves of the accuracy scores. For MyPictures, we will use the Pearson correlation between the trial 1 and trial 2 adjacent-pairs scores to calculate the Spearman-Brown split-half reliability (2r/(1+r)). For MyFaces, we will use a look-up table to find expected a posteriori scores and standard deviations based on the sum of the accuracy scores across the three subtests,29 30 and then calculate the empirical and mean marginal reliability.31 We consider internal consistency coefficients of 0.70 or greater to be acceptable.32
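The bootstrapped split-half procedure described for MySorting and MySequences can be sketched as follows. The item responses are simulated, and the numbers of items, examinees and bootstrap replications are illustrative assumptions, not protocol values.

```python
# Sketch of bootstrapped split-half reliability with Spearman-Brown
# correction, on simulated dichotomous item data driven by a common trait.
import random
from statistics import mean

random.seed(42)

def pearson(x, y):
    """Pearson correlation between two equal-length lists."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Simulate 200 examinees answering 20 items; accuracy depends on a latent trait.
n_items, n_people = 20, 200
theta = [random.gauss(0, 1) for _ in range(n_people)]
items = [[1 if random.gauss(t, 1) > 0 else 0 for _ in range(n_items)]
         for t in theta]

def split_half(items, n_boot=500):
    """Median Spearman-Brown-corrected correlation over random split-halves."""
    cols = list(range(n_items))
    coeffs = []
    for _ in range(n_boot):
        random.shuffle(cols)
        half_a, half_b = cols[:n_items // 2], cols[n_items // 2:]
        sum_a = [sum(row[i] for i in half_a) for row in items]
        sum_b = [sum(row[i] for i in half_b) for row in items]
        r = pearson(sum_a, sum_b)
        coeffs.append(2 * r / (1 + r))  # Spearman-Brown correction
    coeffs.sort()
    return coeffs[len(coeffs) // 2]

rel = split_half(items)
print(round(rel, 2))
```

Taking the median across many random splits avoids the arbitrariness of any single odd/even or first-half/second-half split.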
Construct validity
We will evaluate convergent validity by examining Spearman rho correlations between the MyCog Mobile subtests and measures of similar constructs from gold-standard neuropsychological assessments (table 3). Spearman rho correlations will be used instead of Pearson correlations because we are interested in monotonic relationships between variables rather than strictly linear relationships. We expect correlations with convergent measures to be large in magnitude (>0.5) based on Cohen’s guidelines.33 Once the optimal scoring model is identified (see Clinical validity section), we will also compare MyCog Mobile final scores to final scores on the Mini-Cog and MyCog screeners. To evaluate discriminant validity, we will examine correlations between the MyCog Mobile subtests and measures of dissimilar constructs from the gold-standard battery (table 3). We expect the MyCog Mobile subtests to demonstrate significantly larger correlations with similar constructs than with dissimilar constructs based on Steiger’s z tests.
Table 3.
Convergent and discriminant validity comparison measures
| MyCog Mobile subtest | Convergent measure | Discriminant measure |
| MySorting | D-KEFS Color-Word Interference Test; D-KEFS Trail Making Test; MyCog Dimensional Change Card Sorting | WMS-IV Verbal Paired Associates |
| MyPictures | WMS-IV Verbal Paired Associates; MyCog Picture Sequence Memory | D-KEFS Trail Making Test |
| MyFaces | WMS-IV Verbal Paired Associates | D-KEFS Trail Making Test |
| MySequences | WAIS-IV Letter Number Series | D-KEFS Trail Making Test |
D-KEFS, Delis Kaplan Executive Functioning System; WAIS-IV, Wechsler Adult Intelligence Scale, Fourth Edition; WMS-IV, Wechsler Memory Scale, fourth edition.
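The planned comparison of a subtest's convergent versus discriminant correlation is a test of two dependent correlations that share one variable. The sketch below uses the Meng, Rosenthal and Rubin (1992) z approximation, a close relative of the Steiger test; all input correlations and the sample size are hypothetical numbers chosen for illustration.

```python
# Compare r(X, A) vs r(X, B) where A and B are themselves correlated,
# using the Meng, Rosenthal & Rubin (1992) z approximation.
from math import atanh, sqrt, erf

def norm_sf(z):
    """One-sided p-value for a standard normal deviate."""
    return 0.5 * (1 - erf(z / sqrt(2)))

def dependent_corr_z(r_xa, r_xb, r_ab, n):
    """z statistic for H0: rho(X,A) == rho(X,B), with corr(A,B) = r_ab."""
    r2_bar = (r_xa**2 + r_xb**2) / 2
    f = min((1 - r_ab) / (2 * (1 - r2_bar)), 1.0)
    h = (1 - f * r2_bar) / (1 - r2_bar)
    return (atanh(r_xa) - atanh(r_xb)) * sqrt((n - 3) / (2 * (1 - r_ab) * h))

# Hypothetical values: convergent r = .60, discriminant r = .20,
# correlation between the two comparison measures = .30, n = 200 controls.
z = dependent_corr_z(0.60, 0.20, 0.30, 200)
print(round(z, 2), round(norm_sf(z), 6))
```

With a gap of this size in a sample of 200, the test easily rejects equality of the two correlations, which is the pattern the construct validity hypothesis predicts.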
Clinical validity
First, we will assess which subtests within MyCog Mobile best predict cognitive impairment. We limit our proposed analyses to supervised model types that allow for transparency in determining the best predictors within each model and allow the independent variables to be specified during the analysis to classify the clinical groups. These model classes include, but are not limited to, logistic regression, elastic net, random forest, gradient boosting, k-nearest neighbours, support vector machines and artificial neural networks. For each model class, 70% of the data from each clinical group will be randomly assigned (using a fixed seed) to a training set and the remaining 30% to a testing set, with the split stratified by the outcome (clinical group). Stratified sampling avoids class imbalance across the split datasets and increases the likelihood that the training and testing datasets are representative of one another. The testing dataset will act as a holdout sample and remain untouched until the model assessment stage, when it will be used to evaluate model performance. When selecting the best model type and models for classifying the clinical groups, we will use the ROC AUC as the primary performance metric, with higher values indicating better performance. Accuracy, sensitivity and specificity will also be examined and reported. Findings regarding the differential sensitivity of each measure within MyCog Mobile will inform a revised version of the battery, with the goal of reducing the number of subtests and the length of self-administration.
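As a minimal sketch of the splitting and evaluation logic (not the full modelling pipeline), the following simulates screener scores for 100 cases and 200 controls, performs a stratified 70/30 split and computes ROC AUC from ranks. The score distributions are invented for illustration.

```python
# Stratified 70/30 split plus rank-based ROC AUC on simulated scores;
# group sizes mirror the protocol (100 cases, 200 controls).
import random

random.seed(2024)

def stratified_split(labels, train_frac=0.7):
    """Return (train_idx, test_idx), splitting separately within each class."""
    train, test = [], []
    for cls in set(labels):
        idx = [i for i, y in enumerate(labels) if y == cls]
        random.shuffle(idx)
        cut = round(train_frac * len(idx))
        train += idx[:cut]
        test += idx[cut:]
    return train, test

def roc_auc(y_true, scores):
    """AUC via the Mann-Whitney rank statistic (ties get half credit)."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# 1 = cognitive impairment; cases score lower on the screener on average.
labels = [1] * 100 + [0] * 200
scores = [random.gauss(-1 if y else 0, 1) for y in labels]

train_idx, test_idx = stratified_split(labels)
test_labels = [labels[i] for i in test_idx]
# Higher risk corresponds to a lower screener score, so negate for AUC.
auc = roc_auc(test_labels, [-scores[i] for i in test_idx])
print(len(train_idx), len(test_idx), round(auc, 2))
```

Because the split is stratified, the 1:2 case-to-control ratio is preserved exactly in both partitions (70/140 in training, 30/60 in testing), which keeps the holdout evaluation comparable to the full sample.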
Ethics and dissemination
The Institutional Review Board at Northwestern University (STU00214921) approved this study protocol. All participants will provide informed consent prior to participation. Results will be published in peer-reviewed journals and summaries will be provided to the funders of the study.
Supplementary Material
Footnotes
Contributors: SRY, CN and MW conceptualised the study. MW and CN obtained funding. SRY, EMD, LMC and LY wrote the statistical analysis plan. SRY and CN contributed to the final selection of measurement tools. GJB, JYB and CMJ administer the project and CN and MW supervised the project administration. MVD managed the development of the MyCog Mobile application. SRY wrote the original draft and revised the paper. SRY, EMD, CN, GJB, CMJ, JYB, MVD, RG, MW and CN reviewed and edited the draft paper. All authors approved the final version of the manuscript.
Funding: This study was supported by the National Institute on Aging (grant 1R01AG074245-01). The funding agencies played no role in the study design, collection of data, analysis or interpretation of data.
Competing interests: MW reports grants from the NIH, Gordon and Betty Moore Foundation, and Eli Lilly and personal fees from Pfizer, Sanofi, Luto UK, University of Westminster, Lundbeck and GlaxoSmithKline outside the submitted work. All other authors report no conflicts of interest.
Patient and public involvement: Patients and/or the public were not involved in the design, or conduct, or reporting or dissemination plans of this research.
Provenance and peer review: Not commissioned; externally peer reviewed.
Ethics statements
Patient consent for publication
Not applicable.
References
- 1. The Lancet Neurology. Time to confront the global dementia crisis. Lancet Neurol 2008;7:761. doi:10.1016/S1474-4422(08)70175-3
- 2. Alzheimer’s Association. 2021 Alzheimer’s disease facts and figures. Alzheimers Dement 2021;17:327–406.
- 3. Corey-Bloom J. The ABC of Alzheimer’s disease: cognitive changes and their management in Alzheimer’s disease and related dementias. Int Psychogeriatr 2002;14 Suppl 1:51–75. doi:10.1017/s1041610203008664
- 4. Bature F, Guinn B-A, Pang D, et al. Signs and symptoms preceding the diagnosis of Alzheimer’s disease: a systematic scoping review of literature from 1937 to 2016. BMJ Open 2017;7:e015746. doi:10.1136/bmjopen-2016-015746
- 5. Lindeboom J, Weinstein H. Neuropsychology of cognitive ageing, minimal cognitive impairment, Alzheimer’s disease, and vascular cognitive impairment. Eur J Pharmacol 2004;490:83–6. doi:10.1016/j.ejphar.2004.02.046
- 6. Nagano-Saito A, Bellec P, Hanganu A, et al. Why is aging a risk factor for cognitive impairment in Parkinson’s disease? A resting state fMRI study. Front Neurol 2019;10:267. doi:10.3389/fneur.2019.00267
- 7. Jakimovski D, Weinstock-Guttman B, Roy S, et al. Cognitive profiles of aging in multiple sclerosis. Front Aging Neurosci 2019;11:105. doi:10.3389/fnagi.2019.00105
- 8. Gupta R, Chari D, Ali R. Reversible dementia in elderly: really uncommon? J Geriatr Ment Health 2015;2:30. doi:10.4103/2348-9995.161378
- 9. Chang CY, Silverman DHS. Accuracy of early diagnosis and its impact on the management and course of Alzheimer’s disease. Expert Rev Mol Diagn 2004;4:63–9. doi:10.1586/14737159.4.1.63
- 10. Rasmussen J, Langerman H. Alzheimer’s disease – why we need early diagnosis. Degener Neurol Neuromuscul Dis 2019;9:123–30. doi:10.2147/DNND.S228939
- 11. Brayne C, Fox C, Boustani M. Dementia screening in primary care: is it time? JAMA 2007;298:2409–11. doi:10.1001/jama.298.20.2409
- 12. Bradford A, Kunik ME, Schulz P, et al. Missed and delayed diagnosis of dementia in primary care: prevalence and contributing factors. Alzheimer Dis Assoc Disord 2009;23:306–14. doi:10.1097/WAD.0b013e3181a6bebc
- 13. Young SR, Lattie EG, Berry ABL, et al. Remote cognitive screening of healthy older adults for primary care with the MyCog Mobile app: iterative design and usability evaluation. JMIR Form Res 2023;7:e42416. doi:10.2196/42416
- 14. Curtis L, Opsasnick L, Benavente JY, et al. Preliminary results of MyCog, a brief assessment for the detection of cognitive impairment in primary care. Innov Aging 2020;4:259–60. doi:10.1093/geroni/igaa057.833
- 15. Young SR, Dworak EM, Byrne GJ, et al. Remote self-administration of cognitive screeners for older adults prior to a primary care visit: pilot cross-sectional study of the reliability and usability of the MyCog Mobile screening app. JMIR Form Res 2024;8:e54299. doi:10.2196/54299
- 16. Kieslich K, Littlejohns P. Does accountability for reasonableness work? A protocol for a mixed methods study using an audit tool to evaluate the decision-making of clinical commissioning groups in England. BMJ Open 2015;5:e007908. doi:10.1136/bmjopen-2015-007908
- 17. Rentz DM, Amariglio RE, Becker JA, et al. Face-name associative memory performance is related to amyloid burden in normal elderly. Neuropsychologia 2011;49:2776–83. doi:10.1016/j.neuropsychologia.2011.06.006
- 18. Gershon RC, Sliwinski MJ, Mangravite L, et al. The Mobile Toolbox for monitoring cognitive function. Lancet Neurol 2022;21:589–90. doi:10.1016/S1474-4422(22)00225-3
- 19. Borson S, Scanlan J, Brush M, et al. The Mini-Cog: a cognitive ‘vital signs’ measure for dementia screening in multi-lingual elderly. Int J Geriatr Psychiatry 2000;15:1021–7.
- 20. Curtis LM, Batio S, Benavente JY, et al. Pilot testing of the MyCog assessment: rapid detection of cognitive impairment in everyday clinical settings. Gerontol Geriatr Med 2023;9:23337214231179895. doi:10.1177/23337214231179895
- 21. Wechsler D. Wechsler Memory Scale: WMS-IV; technical and interpretive manual. Pearson, 2009.
- 22. Delis DC, Kaplan E, Kramer JH. Delis-Kaplan Executive Function System (D-KEFS). Psychological Corporation, 2001.
- 23. Wechsler D. Wechsler Adult Intelligence Scale, fourth edition (WAIS-IV). 2014.
- 24. Ackerman PL, Kanfer R. Test length and cognitive fatigue: an empirical examination of effects on performance and test-taker reactions. J Exp Psychol Appl 2009;15:163–81. doi:10.1037/a0015719
- 25. Angioni D, Raffin J, Ousset P-J, et al. Fatigue in Alzheimer’s disease: biological basis and clinical management – a narrative review. Aging Clin Exp Res 2023;35:1981–9. doi:10.1007/s40520-023-02482-z
- 26. R Core Team. R: a language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing, 2023. Available: https://www.R-project.org/
- 27. Carter JV, Pan J, Rai SN, et al. ROC-ing along: evaluation and interpretation of receiver operating characteristic curves. Surgery 2016;159:1638–45. doi:10.1016/j.surg.2015.12.029
- 28. Hanley JA, McNeil BJ. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology 1982;143:29–36. doi:10.1148/radiology.143.1.7063747
- 29. Chapman R. Expected a posteriori scoring in PROMIS®. J Patient Rep Outcomes 2022;6. doi:10.1186/s41687-022-00464-9
- 30. Cai L. Lord–Wingersky algorithm version 2.0 for hierarchical item factor models with applications in test scoring, scale alignment, and model fit testing. Psychometrika 2015;80:535–59. doi:10.1007/s11336-014-9411-3
- 31. Bechger TM, Maris G. Structural equation modelling of multiple facet data: extending models for multitrait-multimethod data. Psicológica 2004;25:253–74.
- 32. Nunnally JC, Bernstein IH. Psychometric theory. New York: McGraw-Hill, 1994.
- 33. Cohen J. The effect size index: d. Stat Power Anal Behav Sci 1988;2:284–8.
Associated Data
This section collects any data citations, data availability statements, or supplementary materials included in this article.