Author manuscript; available in PMC: 2023 Aug 1.
Published in final edited form as: Clin Neuropsychol. 2022 Aug 5;37(6):1257–1275. doi: 10.1080/13854046.2022.2103033

The survey for memory, attention, and reaction time (SMART): Preliminary normative online panel data and user attitudes for a brief web-based cognitive performance measure

Mira I Leese a, Nora Mattek b,c, John P K Bernstein d, Katherine E Dorociak e, Sarah Gothard b,c, Jeffrey Kaye b,c, Adriana M Hughes b,f,g
PMCID: PMC9899293  NIHMSID: NIHMS1854841  PMID: 35930438

Abstract

Objective:

The Survey for Memory, Attention, and Reaction Time (SMART) was recently introduced as a brief (<5 min), self-administered, web-based measure of cognitive performance in older adults. The purpose of this study was threefold: (1) to develop preliminary norms for the SMART; (2) to examine the relationships among demographic variables (i.e. age, sex, education), device type used, and SMART performance; and (3) to assess user attitudes toward the SMART.

Method:

A sample of 1,050 community-dwelling adults (M age = 59.5 years, SD = 15.2; M education = 16.5 years, SD = 2.1; 67.1% female; 96% white) was recruited from an ongoing web-based research cohort. Participants completed the SMART, which consists of four face-valid cognitive tasks assessing visual memory, attention/processing speed, and executive functioning. SMART performance outcome metrics were subtest completion time (CT), click count, and total CT. Participants provided demographic information and completed a survey of user attitudes toward the SMART (i.e. usability, acceptability).

Results:

Older age was the only demographic variable associated with slower SMART total CT (r = .60, p < .001). Education was not associated with SMART CT or click counts overall (p > .05). Male sex was associated with longer SMART CT on all subtests (p < .001; partial eta squared = .14 for total CT). Regarding acceptability, 97.3% of participants indicated willingness to take the SMART again, with more than half willing to complete it on a weekly basis.

Conclusion:

These preliminary normative data suggest that the SMART is a feasible and well-accepted web-based cognitive assessment tool that can be administered across multiple device platforms.

Keywords: Normative data, cognition, computerized testing, cognitive screening, technology

Introduction

Traditional clinic-based neuropsychological assessment is considered a gold standard in clarifying both the nature and extent of cognitive deficits in a variety of clinical populations (Freedman & Manly, 2015). However, these evaluations are costly, time-consuming, sporadic in administration, and have poor sensitivity to milder forms of cognitive impairment (Lussier et al., 2019; Morrison et al., 2015). With advances in technology, along with increases in computer use and internet access in the older adult population (G. Anderson, 2016), computerized cognitive assessments are being developed for research and clinical contexts (Boise et al., 2013; Tsoy et al., 2021). Computerized assessments have several advantages compared to traditional cognitive assessment, including ease of launch and use (Tsoy et al., 2020), cost-efficiency (Scott & Mayo, 2018), reduced examiner bias (Parsons et al., 2018), enhanced precision of measurement and scoring (Gates & Kochan, 2015; Parsons et al., 2018), and automated scoring and interpretation (Wild et al., 2008). Further, computerized cognitive assessments have the potential to capture cognitive change longitudinally (Dorociak et al., 2021), are more accessible to older adults in remote locations (Parsons et al., 2018), and can be administered in the home, which is helpful for older adults with physical or other logistical limitations (Kaye et al., 2011).

Despite the potential benefits of computerized assessment, several challenges remain with respect to development and implementation. A recent systematic review (Tsoy et al., 2021) found gaps in scientific rigor regarding the development, validation, and establishment of normative data for self-administered, brief, computerized assessments. According to the American Psychological Association (APA), computerized assessments should be held to the same standards as traditional paper and pencil neuropsychological assessments (Wahlstrom, 2017). However, few studies have produced normative data for computerized cognitive assessments, limiting their clinical utility (Capizzi et al., 2021; Moore et al., 2017). It is important that test developers provide appropriate normative information and examine psychometric properties of web-based assessments, so that clinicians and researchers can accurately interpret test data. Further, technological issues (e.g. variability in Internet speed, hardware, electronic devices and software characteristics) may affect the standardization, implementation and realization of computerized assessments’ full potential (Parsons et al., 2018; Wild et al., 2008). More research is needed to assess how device type (e.g. computer, tablet, smartphone) affects web-based test norms and administration (Brearly et al., 2019).

Additionally, digital readiness (i.e. confidence and preparedness to engage with existing technologies and digital tools) remains low among older adults, especially older adults with mild cognitive impairment (Leese et al., 2021), lower educational attainment, and fewer financial resources (Anderson & Perrin, 2017; Choi & DiNitto, 2013). The successful adoption of self-administered computerized assessments depends on the receptivity of potential users. Usability (i.e. ease of use of the technology; Davis, 1989) and acceptability (i.e. the degree of primary users’ predisposition to carry out daily activities using the intended device; Holthe et al., 2018) are both considered strong predictors of technology adoption. However, few studies have examined user attitudes towards self-administered, web-based cognitive test platforms (Capizzi et al., 2021; Dorociak et al., 2021; Lathan et al., 2016). There is a strong need for researchers to investigate user attitudes (i.e. usability, acceptability) of web-based cognitive test platforms in a wide range of clinical populations to determine their feasibility.

Despite these gaps in the literature, web-based cognitive assessments are considered the future of clinical neuropsychology (Germine et al., 2019) and are being used by researchers to detect the earliest signs of cognitive decline in individuals at risk for dementia (Bissig et al., 2020; Charalambous et al., 2020). To date, few cognitive assessments exist that are brief, web-based, have normative data, and are available for completion on multiple platforms (e.g. computers, smartphones, and tablets; de Roeck et al., 2019; Leese et al., 2021; Tsoi et al., 2015). One such measure is the Survey for Memory, Attention, and Reaction Time (SMART), a feasible and reliable assessment tool to monitor cognitive performance in older adults with and without mild cognitive impairment (MCI) in research settings (Dorociak et al., 2021; Seelye et al., 2017). Preliminary evidence suggests that the SMART may also be a valid measure of cognitive performance; however, future studies are warranted (Dorociak et al., 2021). This brief (<5 minutes to complete), web-based cognitive assessment is self-administered, compatible with multiple electronic platforms, sensitive to cognitive impairment, has been validated against in-person examinations of health and cognition, and has a range of potential utilities in clinical and research settings (Dorociak et al., 2021; Seelye et al., 2017). The SMART consists of four face-valid cognitive tasks (a visual memory task, Trails A, Trails B, and a Stroop Color-Word Interference Task) and assesses visual memory, attention/processing speed, and executive functioning. In order to use the SMART in a meaningful way in research and clinical contexts, it is important that appropriate normative data become available, that the SMART demonstrate high feasibility among end-users, and that performance differences among individuals of various demographic characteristics (i.e. age, sex, education) be explicated.

While Dorociak and colleagues (2021) provided initial psychometric data on a small sample (n = 69), essential information concerning normative data, demographic correlates of SMART performance, and user attitudes toward the SMART was not reported. This information is essential for test development (American Psychological Association, 2020). As such, the goals of this paper were to: (1) provide preliminary normative data on the SMART, using a sample of 1,050 community-dwelling adults; (2) explore the relationship between demographic variables (i.e. age, sex, education) and SMART performance outcomes; and (3) assess user attitudes toward the SMART (i.e. usability and acceptability), which we hypothesized would be favorable, given previous research demonstrating high usability and acceptability ratings for the SMART in a smaller sample of older adults (Dorociak et al., 2021; Leese et al., 2021).

Methods

Participants

All participants were enrolled in the Research via Internet Technology and Experience (RITE) program, which is associated with the Oregon Center for Aging and Technology (ORCATECH) and the NIA Oregon Aging & Alzheimer’s Disease Research Center at Oregon Health and Science University (OHSU). Consent was obtained from participants electronically through the Qualtrics online platform. The RITE program uses web-based surveys to collect data from participants about their opinions and experiences with the intersection of technology and healthcare (tinyurl.com/38p4ez56). RITE participants receive baseline surveys collecting self-reported medical histories and demographic metrics, annual surveys to update this information, bi-weekly surveys to assess health and important life events, and quarterly surveys (cross-sectional in nature) to assess specific research topics of interest. No formal in-person cognitive testing was available for the present study. However, all participants self-reported a negative history of neurodegenerative disease, neurological disorders, and psychopathology, and were considered relatively healthy for their age. Participants in the RITE program are not obligated to answer every quarterly survey; thus, only a subsample of the RITE program answered the surveys deployed in the current study. Of 3,131 RITE participants, 1,050 (34%) completed the SMART surveys. The group who completed the SMART did not differ in sex distribution from those who did not, and their mean ages were similar (59 vs. 57 years). Participants were not compensated for their time.

Participants were community-dwelling adults recruited from the Portland, Oregon metropolitan area. Recruitment strategies included invitations through the EPIC electronic medical record system at OHSU Healthcare, as well as targeted emails and social media advertising. The protocols were approved by OHSU’s IRB (IRB00010237; ORCATECH Online Cohort: RITE). Inclusion criteria were being 18 years of age or older, owning and using a computer or smart device at least once per week, having a wired broadband Internet connection, and demonstrating English language fluency. A total of 1,050 participants were included in analyses for the present study. Of the initial 1,485 participants in the RITE sample, 435 were excluded from analyses for the following reasons: (1) they did not have demographic data (N = 1); (2) they completed less than 100% of the SMART survey and the Online Cognition Testing Feedback Survey (N = 149); (3) they were considered statistical outliers (i.e. 3 standard deviations above or below the mean on any SMART metric; N = 67); or (4) they had a tremor (N = 41), visual impairment (N = 127), or colorblindness (N = 50). It was necessary to exclude participants who endorsed a tremor, visual impairment, or colorblindness because of the potentially negative impact of these conditions on SMART performance.

Measures

The survey for memory, attention and reaction time (SMART)

This brief (<5 min), web-based cognitive assessment research tool consists of four face-valid cognitive tasks available in the public domain assessing visual memory (Visual Memory Task [Dorociak et al., 2021; Seelye et al., 2017]), attention/processing speed (Trail Making Test A [Reitan, 1955]), and executive functioning (Trail Making Test B [Reitan, 1955] and Stroop Color-Word Interference Task [Stroop, 1935]). The SMART includes directions and practice items before each task, which are described in more detail below. SMART performance outcome measures include task completion time (CT) for each subtest, click count (i.e. number of mouse clicks or touchscreen taps), total test CT (which includes the respondents’ time spent processing directions and responding to practice items as well as the individual subtests) for the Stroop and Trails A/B tasks, and accuracy (i.e. yes/no) for the Visual Memory task. Total test completion time is considered a measure of accuracy on the Stroop and Trails A/B subtasks (Dorociak et al., 2021), likely because examinees must respond correctly to advance, so errors lengthen completion time. No alternate forms of the SMART are currently available. Other aspects of cognition (i.e. language) were omitted from the development of the SMART so that it could be accessible to the broadest population of older adults.
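The SMART subtests are implemented in HTML and JavaScript (see Procedures). Purely as an illustration of how the two timed metrics above could be captured in a browser, the TypeScript sketch below records elapsed completion time and a click/tap count for a subtest; the names and structure are hypothetical and are not taken from the SMART’s actual code.

```typescript
// Hypothetical sketch of per-subtest metric capture; not the SMART's implementation.
interface SubtestMetrics {
  completionTimeSec: number; // subtest completion time (CT), in seconds
  clickCount: number;        // number of mouse clicks or touchscreen taps
}

function trackSubtest(container: HTMLElement): { finish: () => SubtestMetrics } {
  const startMs = performance.now();
  let clickCount = 0;

  // "pointerdown" fires for both mouse clicks and touchscreen taps.
  const onPress = (): void => { clickCount += 1; };
  container.addEventListener("pointerdown", onPress);

  return {
    finish(): SubtestMetrics {
      container.removeEventListener("pointerdown", onPress);
      return {
        completionTimeSec: (performance.now() - startMs) / 1000,
        clickCount,
      };
    },
  };
}
```

Under this reading, total test CT would simply be the sum of the times spent on directions, practice items, and the subtests themselves.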

The SMART has demonstrated fair to excellent test-retest reliability (intraclass correlation coefficients = 0.50–0.76; Dorociak et al., 2021). Regarding the validity of the SMART, the evidence is preliminary and future studies are warranted (Dorociak et al., 2021). Dorociak et al. (2021) was a small exploratory study of a healthy, white, well-educated cohort that was receptive to using technology, which reduces generalizability to more diverse, medically complex populations and to individuals who are not confident using technology. Regarding convergent validity, total SMART completion time (test + practice + directions) was moderately correlated with global cognition (β = −0.47, p < 0.01) and more weakly, but significantly, correlated with attention/processing speed (β = −0.35, p < 0.01), language (β = −0.37, p < 0.01), memory (β = −0.30, p < 0.05), and executive functioning (β = −0.36, p < 0.01; Dorociak et al., 2021). Additionally, medium correlations (r’s > .30) were found between traditional neuropsychological test scores and the Trails B task completion time as well as the Stroop Color-Word Interference task completion time (Dorociak et al., 2021). While these correlations are modest, they are similar to correlations observed in other studies comparing more comprehensive computer-based cognitive measures with traditional neuropsychological tests (Aalbers et al., 2013; Feenstra et al., 2018; Hansen et al., 2015). In comparison, convergent validity was weaker for the visual memory task, start time of the survey, and click count metrics (Dorociak et al., 2021); therefore, these metrics may not be valid or useful for assessing cognition. SMART metrics were not correlated with Montreal Cognitive Assessment (MoCA) scores, a well-established cognitive screening tool (Dorociak et al., 2021; Nasreddine et al., 2005), which could be due to the ceiling effect and skewed distribution of the MoCA in this small cohort. Additional validation work is needed in more diverse samples, and future validation work should include a comparison of SMART subtasks with the standard versions of Trails A, Trails B, and the Stroop Color-Word Interference tests.

Visual memory task

This task is designed to capture visuospatial aspects of memory and consists of three phases: encoding, distraction, and retrieval. During the encoding phase, which lasts 20 seconds, the examinee is shown a 4 × 4 grid of shapes and is asked to remember the location of a specific colored triangle. During the distraction phase, the examinee completes the other cognitive tasks in the SMART (i.e. Trail Making Test, Stroop Color-Word Interference Task). During the retrieval phase, the examinee is shown an empty 4 × 4 grid and asked to recall the location of the colored triangle previously shown. Examinees are allowed unlimited attempts until they respond correctly; however, the accuracy metric used in the present study is whether the first attempt was correct. Practice items are not included in this task.
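For illustration only (the cell indexing and function name below are hypothetical, not part of the SMART), the first-attempt accuracy rule described above can be expressed as:

```typescript
// Hypothetical scoring of the retrieval phase of the Visual Memory task.
// Cells of the 4 x 4 grid are indexed 0-15; `responses` holds the cells tapped
// during retrieval, in order. Examinees may keep responding until correct,
// but only a correct FIRST response counts toward the accuracy metric.
function correctOnFirstAttempt(targetCell: number, responses: number[]): boolean {
  return responses.length > 0 && responses[0] === targetCell;
}
```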

Trail making test (TMT-A and TMT-B)

These two tasks assess attention and processing speed and are based on the Halstead-Reitan Battery measures (Reitan, 1955). Both tasks include practice items to ensure participants complete the tasks correctly. The TMT-A task requires the examinee to click or tap on a sequential series of encircled numbers from 1 to 12. The TMT-B task requires the examinee to click or tap on encircled numbers and letters, alternating sequentially between them (e.g. 1, A, 2, B, etc.).
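As a sketch of the response rule (not the SMART’s actual code), the expected TMT-B sequence alternates numbers and letters; assuming, consistent with the 12-click minimum reported in the Results, that the SMART version uses six numbers and six letters, a response advances the task only when it matches the next expected target, while every click or tap still adds to the click count.

```typescript
// Hypothetical TMT-B response logic: targets alternate 1, A, 2, B, ... 6, F.
const numbers = ["1", "2", "3", "4", "5", "6"];
const letters = ["A", "B", "C", "D", "E", "F"];
const expectedSequence: string[] = [];
for (let i = 0; i < numbers.length; i++) {
  expectedSequence.push(numbers[i], letters[i]);
}

let nextIndex = 0;   // position of the next expected target
let clickCount = 0;  // incorrect responses still add to the click count

// Called for each encircled label the examinee clicks or taps.
function handleTrailsResponse(label: string): boolean {
  clickCount += 1;
  if (label === expectedSequence[nextIndex]) {
    nextIndex += 1;  // advance only on a correct response
    return true;
  }
  return false;
}
```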

Stroop color-word interference task

This task assesses executive functions, including cognitive flexibility, selective attention, cognitive inhibition, and information processing speed, and is based on the Stroop task (Stroop, 1935). Examinees are shown a practice item with the word “green”, “red”, or “blue” and are instructed to click or tap the color of the word’s ink rather than the word itself. For example, if the examinee is shown the word “green” in red ink, they should click or tap “red” and inhibit the automatic impulse to click or tap “green”. After the practice items, examinees complete 15 items and may advance to the next item only once the current item is answered correctly.
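A minimal sketch of the Stroop item rule just described, with hypothetical names: the correct response is the ink color rather than the word, and the task advances only after a correct response.

```typescript
// Hypothetical Stroop Color-Word item logic; not the SMART's implementation.
type ColorName = "green" | "red" | "blue";

interface StroopItem {
  word: ColorName; // the printed color word, e.g. "green"
  ink: ColorName;  // the ink color it is shown in, e.g. red
}

// True (advance to the next item) only when the examinee selects the ink color.
function isCorrectStroopResponse(item: StroopItem, response: ColorName): boolean {
  return response === item.ink;
}

// Example from the text: the word "green" printed in red ink.
const example: StroopItem = { word: "green", ink: "red" };
isCorrectStroopResponse(example, "red");   // true  -> advance
isCorrectStroopResponse(example, "green"); // false -> stay on the item
```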

Online cognition testing feedback survey: SMART usability and acceptability

Following completion of the SMART, participants completed an experimental survey (i.e. the Online Cognition Testing Feedback Survey) developed by the RITE study team and research stakeholders to assess participants’ attitudes (i.e. usability, acceptability) toward the SMART, seek feedback on the test-taking experience, and better understand how often respondents would be willing to take similar web-based cognitive assessments. The survey was developed based on a review of relevant literature (Boise et al., 2013; Braun & Clarke, 2006; Creswell et al., 2011; Leese et al., 2021; Mitchell et al., 2020; see Table 8 for survey items). SMART usability (i.e. ease of use of the technology [Davis, 1989]) items included: (1) “Please rate how challenging this online cognition test was for you” (3-point rating scale); (2) “Was the text easy to read?” (3-point rating scale); (3) “Were the instructions for each test clear?” (yes/no); and (4) “Please rate the length of time it took to take the online cognition test” (3-point rating scale). SMART acceptability (i.e. the degree of primary users’ predisposition to carry out daily activities using the intended device [Holthe et al., 2018]) items included: (1) “Cognitive tests may be repeatedly administered to better understand brain health over time. After having the experience of taking this test, would you be willing to regularly complete this type of cognitive test online?” (yes/no); and (2) “How frequently would you be willing to complete this task?” (7-point rating scale). Endorsement rates above 80% on an item were considered to reflect adequate usability or acceptability (Dorociak et al., 2021; Hafiz et al., 2019).

Table 8.

Online cognition testing feedback survey scores on SMART usability and acceptability.

Rating-scale item and response: N (%); total N = 1050
SMART Usability
Please rate how challenging this online cognition test was for you? “Very challenging”: 6 (0.6%)
“A little bit challenging”: 531 (56.6%)
“Not challenging at all”: 401 (42.8%)
Was the text easy to read? “I could barely read the letters or numbers”: 17 (1.6%)
“It was easy to read”: 943 (90.5%)
“I could read it, but it was hard”: 82 (7.9%)
Were the instructions for each test clear? “Yes”: 765 (94.1%)
“No”: 48 (5.9%)
Please rate the length of time it took to take the online cognition test. “Took more time than I expected”: 74 (7.6%)
“Took as much time as I expected”: 503 (51.4%)
“Took less time than I expected”: 402 (41.1%)
SMART Acceptability
Cognitive tests may be repeatedly administered to better understand brain health over time. After having the experience of taking this test, would you be willing to regularly complete this type of cognitive test online? “Yes”: 996 (97.3%)
“No”: 28 (2.7%)
How frequently would you be willing to complete this task? “Multiple times a day for a single day”: 92 (9.3%)
“Once a day”: 180 (18.3%)
“Once a week”: 316 (32.1%)
 “2–3 times a month”: 143 (14.5%)
“Once a month”: 167 (17.0%)
“Every 3 months”: 67 (6.8%)
“Once a year”: 20 (2.0%)

Note. SMART = The Survey for Memory, Attention, and Reaction Time.

Procedures

Participants received an email with a link to the survey and were asked to complete the survey on their own time. The survey included the SMART and the Online Cognition Testing Feedback Survey; this was the first and only time participants completed these surveys. The survey was developed and administered via the Qualtrics Survey Platform (Qualtrics, Provo, UT), and the SMART subtests were built using HTML and JavaScript. Participants were able to complete the survey on their chosen computer, tablet, or smartphone, and data from the survey were automatically captured and recorded upon completion. Surveys were distributed to respondents using a functionality called “individual links,” which maps to specific participants and specific distributions. This type of link allows only one response from the email associated with that participant in the contact list for that distribution; once a survey response was recorded, any further attempt to use that distribution’s individual link was inaccessible. If technical difficulties arose, participants were able to contact study staff to help troubleshoot; however, no immediate or real-time intervention between study staff and respondents took place.

Statistical analyses

In providing preliminary normative data on SMART performance outcomes, outliers on any SMART test score (i.e. more than 3 SD above or below the mean) were removed from data analyses to limit the influence of extreme scores. The relationships between SMART performance outcome metrics and the demographic factors of age and education were examined with partial correlation analyses, adjusting for device used. The impact of sex on SMART performance outcomes was examined with ANCOVA, again adjusting for device used. Differences in SMART performance by racial/ethnic background were not examined given the discrepant sample sizes (i.e. a predominantly white sample, 96%). To examine SMART usability and acceptability, descriptive data are presented. All statistical analyses were performed using SPSS software, version 24.0.
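The analyses above were run in SPSS. Purely as an illustrative sketch of two key steps (the ±3 SD outlier rule and a partial correlation adjusting for device type), the TypeScript below re-implements them for a single metric; the data structure and function names are hypothetical, and exact results would depend on the software’s handling of covariates and missing data.

```typescript
// Illustrative re-implementation of the outlier rule and device-adjusted correlation;
// the published analyses were run in SPSS 24.0, not with this code.
interface ParticipantRow {
  age: number;
  totalCtSec: number; // full SMART survey completion time, in seconds
  device: "computer" | "tablet" | "smartphone";
}

const mean = (v: number[]): number => v.reduce((a, b) => a + b, 0) / v.length;
const sd = (v: number[]): number => {
  const m = mean(v);
  return Math.sqrt(v.reduce((a, b) => a + (b - m) ** 2, 0) / (v.length - 1));
};

// Drop rows whose metric falls more than 3 SD above or below the sample mean.
function trimOutliers(rows: ParticipantRow[], metric: (r: ParticipantRow) => number): ParticipantRow[] {
  const values = rows.map(metric);
  const m = mean(values);
  const s = sd(values);
  return rows.filter(r => Math.abs(metric(r) - m) <= 3 * s);
}

// Partial correlation of age and completion time, controlling for device type.
// Regressing each variable on device dummies is equivalent to centering it on its
// device-group mean, so we correlate the group-mean-centered residuals.
function partialCorrAdjustedForDevice(rows: ParticipantRow[]): number {
  const centerByDevice = (metric: (r: ParticipantRow) => number): number[] => {
    const groupMeans = new Map<string, number>();
    for (const device of Array.from(new Set(rows.map(r => r.device)))) {
      groupMeans.set(device, mean(rows.filter(r => r.device === device).map(metric)));
    }
    return rows.map(r => metric(r) - (groupMeans.get(r.device) as number));
  };
  const x = centerByDevice(r => r.age);
  const y = centerByDevice(r => r.totalCtSec);
  const numerator = x.reduce((acc, xi, i) => acc + xi * y[i], 0);
  const denominator = Math.sqrt(
    x.reduce((acc, xi) => acc + xi * xi, 0) * y.reduce((acc, yi) => acc + yi * yi, 0),
  );
  return numerator / denominator;
}
```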

Results

Participant characteristics and preliminary normative data

Across the sample of 1,050 individuals, the average age at survey response was 59.5 years (SD = 15.2; range: 19–91), with 16.5 years of education on average (SD = 2.1; range: 8–20). The majority of the sample was female (n = 703, 67.0%) and white (n = 860 of 898 reporting race/ethnicity, 96%). The majority of individuals were computer (laptop or desktop) users (n = 708, 67.4%), followed by smartphone (n = 251, 23.9%) and tablet users (n = 91, 8.7%). Participant characteristics are presented in Table 1. In line with other research providing normative data for cognitive tools (i.e. Feenstra et al., 2018; Keefe et al., 2008), age, sex, and education were considered in providing SMART performance outcome scores. Given the impact of device on SMART performance outcomes (Dorociak et al., 2021), preliminary norms also accounted for device type (i.e. computer, smartphone, or tablet). The mean raw score performances for all participants on all SMART metrics are presented by age, sex, education (binned as < 16 years or ≥ 16 years), and device type (computer, tablet, smartphone) in Tables 2-5. The data were summarized into five age groups: less than 50 (n = 271), 50–59 (n = 150), 60–69 (n = 315), 70–79 (n = 261), and 80 and older (n = 50). Of those less than 50 years, 45 were less than 30, 107 were 30–39, and 119 were 40–49. On average, the full SMART survey took less than one minute to complete (mean = 55 seconds; SD = 16; range: 23–142), while Trail Making Test A, Trail Making Test B, and the Stroop Color-Word Task took 10 seconds, 15 seconds, and 29 seconds on average, respectively. Completion time metrics were approximately normally distributed on visual inspection, with no ceiling effects. Regarding click count metrics, Trail Making Test A and B require a minimum click count of 12 and the Stroop Color-Word Task requires a minimum click count of 15, with no maximum limit; click count metrics are therefore highly skewed. Overall, 84% of test respondents answered the Visual Memory Delayed Recall task correctly on the first attempt.

Table 1.

Demographics of RITE cohort normative online panel sample (n = 1050).

Demographics   Total (N = 1050)   Less than 50 (N = 271)   50–59 (N = 150)   60–69 (N = 315)   70–79 (N = 261)   80 and older (N = 50)
   M (SD)   M (SD)   M (SD)   M (SD)   M (SD)   M (SD)
Age 59.5 (15.2) 37.6 (7.3) 55.1 (3.0) 65.0 (2.8) 73.7 (2.6) 83.0 (2.7)
Education 16.5 (2.1) 16.5 (2.1) 16.0 (2.0) 16.6 (2.1) 16.7 (2.0) 16.7 (2.9)
N (%) N (%) N (%) N (%) N (%) N (%)
Sex
 Male 338 (33) 57 (22) 51 (34) 103 (33) 107 (41) 20 (40)
 Female 700 (67) 207 (78) 99 (66) 210 (67) 154 (59) 30 (60)
Education (n = 892)
 < College 226 (25) 59 (27) 39 (31) 68 (25) 48 (21) 12 (27)
 ≥ College 666 (75) 162 (73) 86 (69) 205 (75) 180 (79) 33 (73)
Device
 Computer 706 (67) 169 (62) 93 (62) 212 (67) 189 (72) 43 (86)
 Tablet 91 (9) 3 (1) 10 (7) 36 (11) 36 (14) 6 (12)
 Smartphone 250 (24) 99 (37) 47 (31) 67 (21) 36 (14) 1 (2)

Note: RITE = Research via Internet Technology and Experience.

Table 2.

SMART metrics by age group.

   Total (N = 1050)   Less than 50 (N = 271)   50–59 (N = 150)   60–69 (N = 315)   70–79 (N = 261)   80 and older (N = 50)
Completion time (sec)   Mean (SD)   Mean (SD)   Mean (SD)   Mean (SD)   Mean (SD)   Mean (SD)
Full SMART Survey (subtests + directions + practice items) 54.8 (15.7) 42.6 (10.3) 51.1 (11.0) 57.2 (13.7) 63.5 (15.5) 71.6 (15.1)
Trail Making Test A 10.0 (3.6) 8.1 (2.4) 10.0 (3.6) 10.0 (3.0) 11.5 (4.4) 12.2 (3.5)
Trail Making Test B 15.3 (5.7) 12.0 (4.0) 14.1 (4.0) 15.6 (5.4) 17.9 (6.2) 20.5 (6.5)
Stroop Color-Word Task 29.0 (9.1) 22.3 (5.1) 26.5 (6.1) 30.8 (8.4) 33.4 (9.0) 37.6 (9.6)
Click Counts
Trail Making Test A 14.2 (4.6) 14.7 (5.4) 14.6 (5.5) 13.8 (3.7) 14.1 (4.1) 13.3 (2.6)
Trail Making Test B 14.0 (4.0) 14.5 (4.9) 14.2 (4.2) 13.8 (3.4) 13.7 (3.3) 13.5 (2.9)
Stroop Color-Word Task 29.0 (9.1) 22.3 (5.9) 26.5 (6.1) 30.8 (8.4) 33.4 (9.0) 37.6 (9.6)
Visual Memory Delayed Recall (% correct) 84% 82% 81% 83% 88% 88%

Note: SMART = The Survey for Memory, Attention, and Reaction Time.

Table 5.

SMART task online panel normative data by age group and device type.

   Less than 50 (N = 271)   50–59 (N = 150)   60–69 (N = 315)   70–79 (N = 261)   80 and older (N = 50)
Completion time (sec)   Mean (SD)   Mean (SD)   Mean (SD)   Mean (SD)   Mean (SD)
Full SMART Survey (subtests + directions + practice items)
 Computer 41.0 (8.8) 48.3 (9.1) 55.7 (12.1) 62.7 (15.4) 71.0 (14.5)
 Tablet 39.1 (10.4) 49.8 (10.3) 55.6 (10.8) 61.9 (16.2) 76.0 (20.9)
 Smartphone 45.4 (12.0) 56.9 (12.5) 62.8 (17.8) 69.1 (14.4) 71
Trail Making Test A
 Computer 7.8 (1.9) 9.4 (2.2) 9.8 (2.3) 11.4 (4.2) 12.0 (2.9)
 Tablet 6.3 (0.9) 8.8 (2.7) 8.7 (4.2) 10.4 (5.3) 13.7 (7.0)
 Smartphone 8.7 (3.0) 11.6 (5.1) 11.3 (4.0) 12.7 (4.4) 11.1
Trail Making Test B
 Computer 11.5 (3.4) 13.3 (3.5) 15.2 (4.6) 17.4 (5.7) 20.4 (6.4)
 Tablet 10.8 (4.5) 15.2 (4.6) 14.8 (5.3) 18.8 (8.0) 20.9 (8.4)
 Smartphone 12.8 (4.7) 15.6 (4.6) 18.0 (7.1) 19.9 (6.1) 20.4
Stroop Color-Word Task
 Computer 21.7 (5.9) 25.5 (5.6) 30.3 (7.9) 33.3 (9.6) 37.2 (10.0)
 Tablet 10.8 (4.5) 15.2 (4.6) 14.8 (5.3) 18.8 (8.0) 20.9 (8.4)
 Smartphone 23.4 (5.8) 28.7 (6.5) 31.9 (9.8) 35.2 (7.8) 39.5
Click Counts
Trail Making Test A
 Computer 12.6 (1.1) 12.5 (1.0) 12.5 (1.1) 13.1 (2.8) 12.7 (1.5)
 Tablet 12.7 (0.6) 13.6 (1.8) 13.9 (3.2) 14.7 (4.2) 17.2 (5.3)
 Smartphone 18.6 (7.5) 19.2 (8.0) 17.7 (6.0) 18.7 (6.3) 15
Trail Making Test B
 Computer 12.6 (1.1) 12.5 (1.1) 12.6 (1.2) 12.9 (1.8) 12.9 (1.8)
 Tablet 12.0 (0.0) 13.4 (2.2) 13.7 (3.0) 14.4 (4.2) 16.5 (5.6)
 Smartphone 17.8 (6.7) 17.9 (5.9) 17.5 (5.5) 17.3 (5.5) 19
Stroop Color-Word Task
 Computer 15.5 (1.6) 15.3 (0.6) 15.6 (1.4) 15.9 (2.0) 16.1 (2.0)
 Tablet 15.7 (1.2) 16.3 (1.6) 16.5 (1.9) 15.9 (1.3) 20.5 (5.3)
 Smartphone 21.4 (7.3) 22.2 (7.2) 21.4 (6.9) 20.7 (6.2) 19
Visual Memory DR (% correct)
 Computer 97% 99% 95% 92% 88%
 Tablet 100% 90% 89% 89% 83%
 Smartphone 56% 45% 42% 67% 100%

Note: SMART = The Survey for Memory, Attention, and Reaction Time.

Influence of age, sex, and education on SMART performance outcomes

Examining the relationship between demographic variables (i.e. age, sex, education) and SMART performance outcomes, full SMART survey CT (r = .60, p < .001), Trail Making Test A CT (r = .38, p < .001), Trail Making Test B CT (r = .45, p < .001), and Stroop Color-Word Task CT (r = .55, p < .001) were significantly and positively correlated with age after adjusting for device platform. Age was not significantly correlated with any click count metrics (p > .05). As a post-hoc sensitivity analysis, we excluded participants who were younger than 50 years of age, and the pattern of significant correlations was unchanged. Greater educational attainment was correlated with faster Trail Making Test B completion times (p < .05). Education was not significantly correlated with the full SMART survey completion time or any click count metrics (p > .05). After adjusting for device platform and age, there was a significant difference between males and females on all completion time metrics, such that females demonstrated faster completion times (i.e. better performance) than males (p < .001). There was also a significant difference in Trail Making Test A click counts, such that females had significantly fewer clicks than males (i.e. better performance, p = .01); there were no other differences in click count metrics. Correlation analyses are presented in Table 6 and ANCOVA analyses are presented in Table 7.

Table 6.

Partial correlation coefficients between age, education and SMART metrics.

SMART metrics Age N = 1047 Education N = 892
Completion time (seconds)
Full SMART Survey (subtests + directions + practice items) .60** −.06
Trail Making Test A .38** −.02
Trail Making Test B .45** −.07*
Stroop Color-Word Task .55** −.03
Click Counts
Trail Making Test A .01 −.04
Trail Making Test B −.00 −.03
Stroop Color-Word Task .00 −.05

Note. Adjusted for device type. *p < .05; **p < .001. SMART = The Survey for Memory, Attention, and Reaction Time.

Table 7.

Analysis of Covariance for SMART metrics and sex.

SMART metrics   Female (N = 700), M (SD)   Male (N = 338), M (SD)   Partial eta squared   p-value
Completion time (seconds)
Full SMART Survey (subtests + directions + practice items) 58.2 (15.9) 47.7 (11.6) .14 <.001**
Trail Making Test A 10.7 (4.0) 8.8 (2.7) .09 <.001**
Trail Making Test B 16.5 (6.0) 13.1 (4.2) .11 <.001**
Stroop Color-Word Task 30.6 (9.6) 25.4 (7.1) .09 <.001**
Click Counts
Trail Making Test A 14.0 (4.8) 14.4 (4.8) .01 .01*
Trail Making Test B 13.7 (3.4) 14.3 (4.5) .00 .25
Stroop Color-Word Task 16.7 (3.8) 17.6 (5.2) .00 .50

Note. Adjusted for device type. *p < .05; **p < .001. SMART = The Survey for Memory, Attention, and Reaction Time.

User attitudes toward the SMART

Scores on SMART usability and acceptability are provided in Table 8. Regarding usability, a little more than half of the sample (56.6%) found the SMART “a little bit challenging.” Over 90% of the sample found the SMART easy to read and its instructions clear. Regarding length, the majority of participants reported that the SMART took as much time as expected (51.4%) or less time than expected (41.1%). Regarding acceptability, the vast majority (97.3%) indicated willingness to take the SMART again, with more than half willing to complete it on a weekly basis (or more frequently).

Discussion

This study presents preliminary normative data for the SMART in a sample of 1,050 independently living, community-dwelling adults without any self-reported cognitive diagnoses, referenced by age, sex, education, and device type (i.e. computer, smartphone, tablet). Previously, Dorociak et al. (2021) validated the SMART against in-person examinations of health and cognition and provided preliminary evidence for the use of the SMART as a feasible, reliable, and valid assessment research tool to monitor cognitive performance. To date, normative data have not been available for the SMART, limiting its utility in clinical and research settings. Development of appropriate norms for the SMART is a critical next step. The present study’s preliminary norms allow for the calculation of standardized scores, so that researchers can better understand how their participants are performing from a normative standpoint and clinicians can potentially screen older adults for cognitive difficulties that would warrant further assessment. The SMART has not been formally published by a recognized test publisher, but it could be made available at no cost to interested parties for research or experimental clinical use upon email request and discussion with our research team. Although caution is warranted when discussing findings from the SMART, the results may be used if they are supported by similar findings from more formal, standardized tests (Mitrushina et al., 2005).
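As an illustrative calculation only (using the age-by-device means in Table 5, not a published cutoff), consider a hypothetical 72-year-old computer user who completes the full SMART survey in 70 seconds. Compared with the 70–79 computer reference values (M = 62.7, SD = 15.4), the corresponding standardized score would be

z = (70 − 62.7) / 15.4 ≈ 0.47,

that is, roughly half a standard deviation slower than the normative mean for that age group and device.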

Examining the effects of demographic factors on neuropsychological test outcomes is important when interpreting neurocognitive performance. Consistent with other web-based cognitive assessments (Feenstra et al., 2018; Keefe et al., 2008), age, education, and sex demonstrated relationships with SMART performance outcomes, which supports the need for demographic corrections (Casaletto et al., 2015). The relationship between age, education, and the timed SMART performance outcome metrics followed an expected pattern. Preliminary normative data and correlation analyses revealed that fewer click counts and longer completion times on most SMART subtests were associated with older age and less educational attainment. Previous research has also demonstrated that older age and less educational attainment are correlated with lower performance on computerized cognitive tests (Feenstra et al., 2018; Keefe et al., 2008). After adjusting for device platform, preliminary normative data and correlation analyses showed that females demonstrated significantly faster completion times on the SMART subtests than males. These findings are consistent with previous research finding that females have faster response times (van Exel et al., 2001) and demonstrate more sustained performance during cognitive test-taking than males (Balart & Oosterveen, 2019). Further, the lack of an age effect on the memory task may be due to the healthy sample or to the low difficulty of the memory task.

Few studies have used multiple device platforms (e.g. computer, tablet, smartphone) for brief, web-based, self-administered cognitive assessment (de Roeck et al., 2019; Dorociak et al., 2021). The present study’s findings suggest that device platform is critical to appraising cognitive performance. Regarding device platforms, smartphone users had higher click counts and performed worse on the visual memory delayed recall task than computer or tablet users. Several factors could explain this finding, including dexterity demands, smaller screen size, and possibly lower resolution. Previous research (Brouillette et al., 2013) found that the small screen of the smartphone was an obstacle for older adult end users, leading to errors on cognitive tasks as well as premature screen advancement. Notably, participants in the present study who took the SMART on a tablet were older than computer and smartphone users, which may suggest that older adults are more readily able to complete the SMART on a tablet than on other Internet devices. This is consistent with other research that found tablets to be preferable in older adult populations due to their portability, ease of use, and ability to accommodate those with motor or visual limitations (Canini et al., 2014; Tsoy et al., 2021; Vaportzis et al., 2017). Given these discrepancies, researchers and clinicians may want to encourage participants to take web-based cognitive tests on a computer or tablet rather than a smartphone, given their larger screens and higher computing capacity.

The Online Cognition Testing Feedback Survey revealed that participants found the SMART protocol to be brief and easy to use, with more than half the sample reporting the SMART subtests to be “a little bit challenging.” Such results are promising given efforts in the field to design tests of an optimum level of difficulty (neither too hard nor too easy) so that they can provide meaningful information and discriminate among test-takers with and without cognitive impairment (Harvey, 2012). Self-reported SMART acceptability data showed that the majority of participants were willing to take the SMART again and that more than half the sample were willing to complete the SMART on a weekly basis (or more frequently). Given that the SMART has typically only been administered monthly in prior work to reduce time and effort demands on participants, such results suggest that more frequent administrations may be well tolerated. Overall, these results suggest that the SMART is feasible for in-home, remote, self-administered assessment in adult populations and this is consistent with current research that found high rates of SMART usability and acceptability in smaller samples of older adults with and without MCI (Dorociak et al., 2021; Leese et al., 2021).

Limitations and future directions

The present study is not without limitations. The majority of participants were white, female, highly educated, fluent in English, from the Portland, Oregon metropolitan area, frequent users of Internet devices, and comfortable taking web-based cognitive assessments. This study also excluded adults with visual impairments, tremor, or color blindness. The SMART is also only available in English, which limits its accessibility for those who do not speak English (Tsoy et al., 2021). It is important to note that the lack of correlations between SMART tasks and education could be due to restricted range. These limitations reduce the generalizability of our sample and derived norms to more diverse populations and to adults who have less access to broadband Internet or are less receptive to technology. Studies with more diverse samples over longer periods of time are needed to replicate the present study’s findings and to report validity and reliability information for the SMART in these additional groups, given that the SMART was validated in a small sample of white, well-educated older adults.

The present study included an online-only cohort for which there was only self-reported health history and no in-person cognitive testing to confirm cognitive status or medical conditions. We acknowledge that our sample was not a “traditional” normative sample, in that no formal testing was done to classify participants. However, it is not necessary to have objective cognitive data on a sample for it to be considered normative. According to Mitrushina et al. (2005), assessment of health status in many normative studies is based on self-report, and studies have shown that self-report of a negative history of neurodegenerative disease, neurological disorders, and psychopathology is sufficient for inclusion into a normative sample. It is appropriate to report normative data if there are sufficient sample sizes, if the demographic and descriptive characteristics (e.g. age, education, gender) of the normative sample are sufficiently described to allow researchers or clinicians to determine how similar the norms are to their participant or patient, and if the administration and scoring procedures are sufficiently described to allow researchers or clinicians to determine how similar the norms are to their specific setting (Mitrushina et al., 2005). As a first step toward norming the SMART, it was important to norm this web-based test in a typical user population (i.e. a web-based cohort), and the present study aimed to fill this gap with preliminary normative data.

The Online Cognition Testing Feedback Survey is a new measure developed by our research team to assess key feasibility domains with low response burden; as such, it is not an exhaustive measure of all aspects of feasibility (e.g. barriers, digital readiness; Leese et al., 2021). Rigorous investigation of the survey’s psychometric properties is needed to demonstrate its reliability and validity. The present study also did not examine associations between usability/acceptability ratings of the SMART and demographic or device-type variables; future studies should examine these relationships.

Data were not collected on the living arrangements of participants, and there is no way for us to know whether participants received assistance on the surveys from friends or family. The present study also did not collect data on device subtypes (e.g. Android vs. iPhone); future studies should examine differences in SMART performance based on device subtype. Additionally, the SMART does not include a metric of performance validity, so we were unable to measure valid effort in our sample. Stimuli for the SMART were not randomized upon presentation, so priming effects may be present. While Dorociak and colleagues (2021) found that longer CT is associated with worse performance on cognitive tests, we acknowledge that factors other than cognition itself could contribute to longer CTs (e.g. a careful test-taking style, difficulties with technology use). Finally, while the overall sample size is large, certain cell sizes were small (N < 10) in three categories: tablet users under 50 years of age, tablet users over 80 years old, and smartphone users over 80 years old. The associated metrics should be considered preliminary and warrant future research.

Important considerations

Given the overall research goals and our second aim, it is important to note that we only partially realized our goals of examining the role of education and racial and ethnic diversity. The American Academy of Clinical Neuropsychology (Sim et al., 2022) has advocated for researchers to collect more representative norms for neuropsychological assessments to competently serve under-resourced communities who are at greater risk of dementia disparities (e.g. health inequities regarding dementia rates and access to treatment; Brewster et al., 2019). Appropriate and representative normative data for the SMART are necessary as an initial step to ensuring the utility of this novel web-based cognitive performance measure. Therefore, the RITE program has future plans to improve the diversity of its participants by adopting a racially diverse team science approach (Miller et al., 2019), by establishing reciprocity (e.g. compensation for study participation; Brewster et al., 2019), and by building partnerships with trusted community organizations to recruit and retain more diverse participants (Brewster et al., 2019; Buchanan et al., 2021).

Conclusions

This study presents preliminary normative data and evidence for the SMART as a feasible web-based cognitive assessment tool. Few web-based, self-administered cognitive tests have normative data available in the public domain and can be completed on multiple device platforms. The SMART adds to the growing literature on web-based neuropsychological assessment tools and has potential wide-reaching benefits for clinical and research contexts. Future directions include further development and longitudinal exploration of the SMART, as well as more representative norms to support implementing the SMART into clinical practice.

Table 3.

SMART task online panel normative data by age group and gender.

   Less than 50 (N = 271)   50–59 (N = 150)   60–69 (N = 315)   70–79 (N = 261)   80 and older (N = 50)
Completion time (sec)   Mean (SD)   Mean (SD)   Mean (SD)   Mean (SD)   Mean (SD)
Full SMART Survey (subtests + directions + practice items)
 Female 42.4 (9.7) 50.2 (10.6) 55.7 (13.4) 62.3 (15.3) 72.8 (16.9)
 Male 42.6 (10.8) 52.8 (11.7) 59.9 (13.9) 65.1 (15.7) 69.7 (12.2)
Trail Making Test A
 Female 7.9 (2.2) 9.6 (2.9) 9.8 (3.0) 11.2 (4.2) 12.4 (4.1)
 Male 8.3 (2.6) 11.0 (4.5) 10.4 (3.2) 11.8 (4.7) 11.8 (2.4)
Trail Making Test B
 Female 11.8 (3.6) 13.8 (3.8) 15.2 (5.2) 17.5 (6.1) 21.4 (6.7)
 Male 12.3 (4.5) 14.8 (4.4) 16.8 (5.7) 18.6 (6.3) 19.1 (6.2)
Stroop Color-Word Task
 Female 22.4 (5.6) 26.6 (6.4) 30.0 (8.0) 32.8 (9.1) 36.8 (9.5)
 Male 21.8 (6.7) 26.3 (5.3) 32.2 (9.0) 34.3 (8.9) 38.8(10.0)
Click Counts
Trail Making Test A
 Female 14.8 (5.4) 14.2 (4.4) 13.9 (3.9) 14.3 (4.1) 13.7 (3.1)
 Male 14.7 (5.6) 15.5 (7.1) 13.5 (3.4) 13.7 (4.2) 12.6 (1.6)
Trail Making Test B
 Female 14.6 (5.0) 14.1 (3.9) 13.8 (3.7) 14.0 (3.9) 14.1 (3.4)
 Male 14.0 (4.4) 14.6 (4.8) 13.7 (3.1) 13.2 (2.4) 12.6 (1.4)
Stroop Color-Word Task
 Female 17.9 (5.6) 17.5 (5.2) 17.0 (4.2) 16.9 (4.0) 16.5 (3.1)
 Male 16.8 (4.7) 17.5 (5.1) 16.9 (4.1) 16.2 (2.1) 16.9 (2.8)
Visual Memory Delayed Recall (% correct)
 Female 80% 82% 82% 86% 80%
 Male 88% 80% 85% 92% 100%

Note: SMART = The Survey for Memory, Attention, and Reaction Time.

Table 4.

SMART task online panel normative data by age group and education.

   Less than 50 (N = 271)   50–59 (N = 150)   60–69 (N = 315)   70–79 (N = 261)   80 and older (N = 50)
Completion time (sec)   Mean (SD)   Mean (SD)   Mean (SD)   Mean (SD)   Mean (SD)
Full SMART Survey (subtests + directions + practice items)
 Less than college 44.2 (11.6) 54.4 (12.9) 59.1(13.7) 65.6 (19.0) 74.0 (19.4)
 College or greater 41.9 (9.8) 49.5 (10.2) 56.4(13.5) 62.7 (14.0) 67.8 (11.0)
Trail Making Test A
 Less than college 8.3 (2.8) 11.3 (4.9) 9.7 (2.7) 12.4 (5.6) 13.2 (5.2)
 College or greater 8.0 (2.3) 9.3 (2.6) 10.1 (3.1) 11.1 (3.9) 11.5 (2.7)
Trail Making Test B
 Less than college 13.0 (4.7) 15.4 (4.8) 16.9 (6.6) 18.0 (6.3) 21.6 (8.0)
 College or greater 11.5 (3.5) 13.5 (3.5) 15.3 (4.8) 17.7 (6.1) 19.0 (4.4)
Stroop Color-Word Task
 Less than college 22.6 (5.9) 26.8 (5.7) 32.1 (8.4) 33.7 (10.5) 36.6 (7.6)
 College or greater 22.3 (6.1) 26.5 (6.6) 30.1 (8.0) 33.4 (8.4) 36.3 (9.0)
Click Counts
Trail Making Test A
 Less than college 15.8 (6.1) 16.0 (7.4) 13.5 (3.2) 14.7 (4.8) 14.3 (4.4)
 College or greater 14.2 (4.9) 13.7 (3.5) 13.7 (3.7) 13.8 (4.0) 13.1 (1.8)
Trail Making Test B
 Less than college 15.4 (5.4) 14.8 (4.3) 13.9 (3.7) 13.3 (2.8) 14.3 (4.5)
 College or greater 13.8 (4.0) 13.7 (3.6) 13.5 (3.1) 13.7 (3.4) 13.2 (2.2)
Stroop Color-Word Task
 Less than college 18.6 (6.5) 18.2 (5.6) 16.9 (3.6) 16.6 (3.9) 16.8 (3.2)
 College or greater 17.1 (4.7) 16.9 (4.3) 16.6 (3.9) 16.6 (3.2) 16.4 (2.9)
Visual Memory Delayed Recall (% correct)
 Less than college 78% 72% 84% 81% 83%
 College or greater 86% 88% 84% 89% 91%

Note: SMART = The Survey for Memory, Attention, and Reaction Time.

Acknowledgment

The authors thank the RITE volunteers, research study staff and the OHSU Layton Center for Aging & Alzheimer’s Research.

Funding

This work was funded by the Oregon Royal Center for Care Support Translational Research Advantaged by Integrating Technology (ORCASTRAIT; PI: Kaye, eIRB 20236; supported by the National Institutes of Health, National Institute on Aging [P30-AG024978]) and the OHSU Alzheimer’s Disease Research Center (PI: Kaye, eIRB 725; supported by NIH P30-AG066518 and P30-AG008017). This work was supported in part by NIH grant AG058687 (PI: Hughes).

Footnotes

Disclosure statement

No potential conflict of interest was reported by the authors.

References

1. Aalbers T, Baars MAE, Rikkert MGMO, & Kessels RPC (2013). Puzzling with online games (BAM-COG): Reliability, validity, and feasibility of an online self-monitor for cognitive performance in aging adults. Journal of Medical Internet Research, 15(12), e2860. 10.2196/jmir.2860
2. American Psychological Association. (2020). APA guidelines for psychological assessment and evaluation. https://www.apa.org/about/policy/guidelines-psychological-assessment-evaluation.pdf
3. Anderson G (2016). Technology trends among mid-life and older Americans. Washington, DC: AARP Research. 10.26419/res.00140.001
4. Anderson M, & Perrin A (2017). Technology use among seniors. Pew Research Center for Internet & Technology.
5. Balart P, & Oosterveen M (2019). Females show more sustained performance during test-taking than males. Nature Communications, 10(1), 1–11. 10.1038/s41467-019-11691-y
6. Bissig D, Kaye J, & Erten-Lyons D (2020). Validation of SATURN, a free, electronic, self-administered cognitive screening test. Alzheimer’s & Dementia, 6(1), e12116.
7. Boise L, Wild K, Mattek N, Ruhl M, Dodge HH, & Kaye J (2013). Willingness of older adults to share data and privacy concerns after exposure to unobtrusive home monitoring. Gerontechnology, 11(3), 428–435. 10.4017/gt.2013.113.001.00
8. Braun V, & Clarke V (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. 10.1191/1478088706qp063oa
9. Brearly TW, Rowland JA, Martindale SL, Shura RD, Curry D, & Taber KH (2019). Comparability of iPad and web-based NIH toolbox cognitive battery administration in Veterans. Archives of Clinical Neuropsychology, 34(4), 524–530. 10.1093/arclin/acy070
10. Brewster P, Barnes L, Haan M, Johnson JK, Manly JJ, Nápoles AM, Whitmer RA, Carvajal-Carmona L, Early D, Farias S, Mayeda ER, Melrose R, Meyer OL, Zeki Al Hazzouri A, Hinton L, & Mungas D (2019). Progress and future challenges in aging and diversity research in the United States. Alzheimer’s & Dementia, 15(7), 995–1003.
11. Brouillette RM, Foil H, Fontenot S, Correro A, Allen R, Martin CK, Bruce-Keller AJ, & Keller JN (2013). Feasibility, reliability, and validity of a smartphone based application for the assessment of cognitive function in the elderly. PLoS One, 8(6), e65925. 10.1371/journal.pone.0065925
12. Buchanan NT, Perez M, Prinstein MJ, & Thurston IB (2021). Upending racism in psychological science: Strategies to change how science is conducted, reported, reviewed, and disseminated. The American Psychologist, 76(7), 1097–1112.
13. Canini M, Battista P, della Rosa PA, Catricalà E, Salvatore C, Gilardi MC, & Castiglioni I (2014). Computerized neuropsychological assessment in aging: Testing efficacy and clinical ecology of different interfaces. Computational and Mathematical Methods in Medicine, 2014, 804723. 10.1155/2014/804723
14. Capizzi R, Fisher M, Biagianti B, Ghiasi N, Currie A, Fitzpatrick K, Albertini N, & Vinogradov S (2021). Testing a novel web-based neurocognitive battery in the general community: Validation and usability study. Journal of Medical Internet Research, 23(5), e25082.
15. Casaletto KB, Umlauf A, Beaumont J, Gershon R, Slotkin J, Akshoomoff N, & Heaton RK (2015). Demographically corrected normative standards for the English version of the NIH Toolbox Cognition Battery. Journal of the International Neuropsychological Society, 21(5), 378–391. 10.1017/S1355617715000351
16. Charalambous AP, Pye A, Yeung WK, Leroi I, Neil M, Thodi C, & Dawes P (2020). Tools for app- and web-based self-testing of cognitive impairment: Systematic search and evaluation. Journal of Medical Internet Research, 22(1), e14551. 10.2196/14551
17. Choi NG, & DiNitto DM (2013). The digital divide among low-income homebound older adults: Internet use patterns, eHealth literacy, and attitudes toward computer/Internet use. Journal of Medical Internet Research, 15(5), e93. 10.2196/jmir.2645
18. Creswell JW, Klassen AC, Plano Clark VL, & Smith KC (2011). Best practices for mixed methods research in the health sciences. Bethesda (Maryland): National Institutes of Health, 2013, 541–545.
19. Davis FD (1989). Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Quarterly, 13(3), 319–340. 10.2307/249008
20. de Roeck EE, de Deyn PP, Dierckx E, & Engelborghs S (2019). Brief cognitive screening instruments for early detection of Alzheimer’s disease: A systematic review. Alzheimer’s Research & Therapy, 11(1), 1–14.
21. Dorociak KE, Mattek N, Lee J, Leese MI, Bouranis N, Imtiaz D, Doane BM, Bernstein JPK, Kaye JA, & Hughes AM (2021). The Survey for Memory, Attention, and Reaction Time (SMART): Development and validation of a brief web-based measure of cognition for older adults. Gerontology, 67(6), 740–713. 10.1159/000514871
22. Feenstra HEM, Murre JMJ, Vermeulen IE, Kieffer JM, & Schagen SB (2018). Reliability and validity of a self-administered tool for online neuropsychological testing: The Amsterdam Cognition Scan. Journal of Clinical and Experimental Neuropsychology, 40(3), 253–273.
23. Feenstra HEM, Vermeulen IE, Murre JMJ, & Schagen SB (2018). Online self-administered cognitive testing using the Amsterdam Cognition Scan: Establishing psychometric properties and normative data. Journal of Medical Internet Research, 20(5), e9298. 10.2196/jmir.9298
24. Freedman D, & Manly J (2015). Use of normative data and measures of performance validity and symptom validity in assessment of cognitive function. Institute of Medicine. https://nap.nationalacademies.org/resource/21704/FreedmanManlyCommissioned-paper.pdf
25. Gates NJ, & Kochan NA (2015). Computerized and on-line neuropsychological testing for late-life cognition and neurocognitive disorders: Are we there yet? Current Opinion in Psychiatry, 28(2), 165–172.
26. Germine L, Reinecke K, & Chaytor NS (2019). Digital neuropsychology: Challenges and opportunities at the intersection of science and software. The Clinical Neuropsychologist, 33(2), 271–286.
27. Hafiz P, Miskowiak KW, Kessing LV, Elleby Jespersen A, Obenhausen K, Gulyas L, Zukowska K, & Bardram JE (2019). The internet-based cognitive assessment tool: System design and feasibility study. JMIR Formative Research, 3(3), e13898.
28. Hansen TI, Haferstrom ECD, Brunner JF, Lehn H, & Haberg AK (2015). Initial validation of a web-based self-administered neuropsychological test battery for older adults and seniors. Journal of Clinical and Experimental Neuropsychology, 37(6), 581–594.
29. Harvey PD (2012). Clinical applications of neuropsychological assessment. Dialogues in Clinical Neuroscience, 14(1), 91–99. 10.31887/DCNS.2012.14.1/pharvey
30. Holthe T, Halvorsrud L, Karterud D, Hoel K-A, & Lund A (2018). Usability and acceptability of technology for community-dwelling older adults with mild cognitive impairment and dementia: A systematic literature review. Clinical Interventions in Aging, 13, 863–886.
31. Kaye JA, Maxwell SA, Mattek N, Hayes TL, Dodge H, Pavel M, Jimison HB, Wild K, Boise L, & Zitzelberger TA (2011). Intelligent systems for assessing aging changes: Home-based, unobtrusive, and continuous assessment of aging. Journals of Gerontology Series B: Psychological Sciences and Social Sciences, 66(suppl_1), i180–i190.
32. Keefe RSE, Harvey PD, Goldberg TE, Gold JM, Walker TM, Kennel C, & Hawkins K (2008). Norms and standardization of the Brief Assessment of Cognition in Schizophrenia (BACS). Schizophrenia Research, 102(1–3), 108–115.
33. Lathan CE, Coffman I, Shewbridge R, Lee M, & Cirio R (2016). A pilot to investigate the feasibility of mobile cognitive assessment of elderly patients and caregivers in the home. Journal of Geriatrics and Palliative Care, 4(1), 6.
34. Leese MI, Dorociak KE, Noland M, Gaugler JE, Mattek N, & Hughes A (2021). Use of in-home activity monitoring technologies in older adult veterans with mild cognitive impairment: The impact of attitudes and cognition. Gerontechnology, 20(2), 1–12. 10.4017/gt.2021.20.2.10.06
35. Lussier M, Adam S, Chikhaoui B, Consel C, Gagnon M, Gilbert B, Giroux S, Guay M, Hudon C, Imbeault H, Langlois F, Macoir J, Pigot H, Talbot L, & Bier N (2019). Smart home technology: A new approach for performance measurements of activities of daily living and prediction of mild cognitive impairment in older adults. Journal of Alzheimer’s Disease, 68(1), 85–96.
36. Miller AL, Stern C, & Neville H (2019). Forging diversity-science-informed guidelines for research on race and racism in psychological science. Journal of Social Issues, 75(4), 1240–1261. 10.1111/josi.12356
37. Mitchell LL, Peterson CM, Rud SR, Jutkowitz E, Sarkinen A, Trost S, Porta CM, Finlay JM, & Gaugler JE (2020). “It’s like a cyber-security blanket”: The utility of remote activity monitoring in family dementia care. Journal of Applied Gerontology, 39(1), 86–98. 10.1177/0733464818760238
38. Mitrushina M, Boone KB, Razani J, & D’Elia LF (2005). Handbook of normative data for neuropsychological assessment. Oxford University Press.
39. Moore RC, Swendsen J, & Depp CA (2017). Applications for self-administered mobile cognitive assessments in clinical research: A systematic review. International Journal of Methods in Psychiatric Research, 26(4), e1562. 10.1002/mpr.1562
40. Morrison GE, Simone CM, Ng NF, & Hardy JL (2015). Reliability and validity of the NeuroCognitive Performance Test, a web-based neuropsychological assessment. Frontiers in Psychology, 6, 1652.
41. Nasreddine ZS, Phillips NA, Bédirian V, Charbonneau S, Whitehead V, Collin I, Cummings JL, & Chertkow H (2005). The Montreal Cognitive Assessment, MoCA: A brief screening tool for mild cognitive impairment. Journal of the American Geriatrics Society, 53(4), 695–699.
42. Parsons TD, McMahan T, & Kane R (2018). Practice parameters facilitating adoption of advanced technologies for enhancing neuropsychological assessment paradigms. The Clinical Neuropsychologist, 32(1), 16–41.
43. Reitan RM (1955). Certain differential effects of left and right cerebral lesions in human adults. Journal of Comparative and Physiological Psychology, 48(6), 474–477. 10.1037/h0048581
44. Scott J, & Mayo AM (2018). Instruments for detection and screening of cognitive impairment for older adults in primary care settings: A review. Geriatric Nursing, 39(3), 323–329. 10.1016/j.gerinurse.2017.11.001
45. Seelye A, Mattek N, Reynolds C, Sharma N, Wild K, & Kaye J (2017). [O2–16–01]: The Survey for Memory, Attention, and Reaction Time (SMART): A brief online personal computing-based cognitive assessment for healthy aging and mild cognitive impairment. Alzheimer’s & Dementia, 13(7S_Part_12), P596–P596.
46. Sim A, Byrd D, Burton V, Morrison C, Schmitt SN, & Paltzer J (2022). American Academy of Clinical Neuropsychology: Relevance 2050 initiative. https://theaacn.org/relevance-2050/relevance-2050-initiative/
47. Stroop JR (1935). Studies of interference in serial verbal reactions. Journal of Experimental Psychology, 18(6), 643–662. 10.1037/h0054651
48. Tsoi KKF, Chan JYC, Hirai HW, Wong SYS, & Kwok TCY (2015). Cognitive tests to detect dementia: A systematic review and meta-analysis. JAMA Internal Medicine, 175(9), 1450–1458.
49. Tsoy E, Possin KL, Thompson N, Patel K, Garrigues SK, Maravilla I, Erlhoff SJ, & Ritchie CS (2020). Self-administered cognitive testing by older adults at-risk for cognitive decline. The Journal of Prevention of Alzheimer’s Disease, 7(4), 283–287.
50. Tsoy E, Zygouris S, & Possin KL (2021). Current state of self-administered brief computerized cognitive assessments for detection of cognitive disorders in older adults: A systematic review. The Journal of Prevention of Alzheimer’s Disease, 8(3), 267–276. 10.14283/jpad.2021.11
51. van Exel E, Gussekloo J, de Craen AJM, Bootsma-Van Der Wiel A, Houx P, Knook DL, & Westendorp RGJ (2001). Cognitive function in the oldest old: Women perform better than men. Journal of Neurology, Neurosurgery & Psychiatry, 71(1), 29–32. 10.1136/jnnp.71.1.29
52. Vaportzis E, Giatsi Clausen M, & Gow AJ (2017). Older adults perceptions of technology and barriers to interacting with tablet computers: A focus group study. Frontiers in Psychology, 8, 1687.
53. Wahlstrom D (2017). Technology and computerized assessments: Current state and future directions.
54. Wild K, Boise L, Lundell J, & Foucek A (2008). Unobtrusive in-home monitoring of cognitive and physical health: Reactions and perceptions of older adults. Journal of Applied Gerontology, 27(2), 181–200. 10.1177/0733464807311435
