Abstract
The current demand for cognitive assessment cannot be met with traditional in-person methods, creating a need for remote, unsupervised options. However, the lack of visibility into testing conditions and effort levels limits the utility of existing remote options. This retrospective study analyzed the frequency of and factors associated with environmental distractions during a brief digital assessment taken at home by 1,442 adults aged 23–84. Automated scoring algorithms flagged low data capture. The frequency of environmental distractions was manually counted on a per-frame and per-trial basis. A total of 7.4% of test administrations included distractions. Distractions were more frequent in men (41 of 350) than in women (65 of 1,092), and the average age of distracted participants (51.7 years) was lower than that of undistracted participants (57.8 years). These results underscore the challenges associated with unsupervised cognitive assessment. Data collection methods that enable review of testing conditions are needed to confirm quality, usability, and actionability.
Key words: Cognitive testing, remote assessment, unsupervised, quality assurance, eye tracking
Introduction
Demand for cognitive testing continues to outpace the supply of providers available for in-person evaluation, and this disparity is expected to increase as the population ages (1). Furthermore, the onset of the COVID-19 pandemic has further constrained the availability of neurocognitive testing, in part because few reliable and valid remote digital testing options exist (2). High-quality digital cognitive assessments that can be administered remotely and asynchronously are urgently needed to meet the growing demand and backlog of patients requiring neuropsychological assessment. While the availability of computerized cognitive assessments has increased rapidly over the past decade (3, 4), the clinical validity of these assessments in a remote setting remains a significant issue for both researchers and clinicians. Moreover, the use of computerized cognitive assessments in such unsupervised settings raises an equally important issue regarding environmental validity (5).
The few studies that have compared outcomes of digital cognitive assessments taken in supervised and unsupervised environments have shown similar overall results across administration settings (6–8). However, the modality of data collection with most computerized cognitive assessments precludes assessment of the physical environment or level of effort and prevents verification of the participant’s identity during test administration. Lack of insight into these factors has prevented widespread adoption of remote unsupervised cognitive assessment in clinical and research settings. The difficulties surrounding the reliability and validity of remote cognitive assessments have been magnified during the COVID-19 pandemic. With in-person testing unavailable, clinicians and clinical researchers have sought out digital testing options; however, data demonstrating reliability and validity across clinical populations and settings remain scarce for remote administration of digital cognitive assessments. Without the ability to “have eyes on the patient,” there is significant clinical risk that environmental distractions will result in test performances that do not reflect the participant or patient’s true abilities.
The recording of eye movements with device-embedded cameras to assess cognition is a burgeoning area of research. As web cameras have become standard hardware in most smartphones, tablets, and laptop computers, opportunities exist to develop eye movement-based tasks that efficiently assess cognitive function through these devices (9). Visual paired comparison task paradigms assess recognition memory through eye movements and have been shown to reliably detect memory dysfunction, representing a paradigm readily deployable to devices with web cameras for the rapid assessment of declarative memory dysfunction (10–12). The collection of video data for eye tracking purposes also provides an opportunity to assess environmental conditions and quantify the occurrence of distractions during test administration. This study aimed to investigate the frequency of and factors associated with environmental distractions during a brief unsupervised digital cognitive assessment in a real-world setting.
Methods
This was a retrospective study of 1,442 adults aged 23–84 who completed a 5-minute eye tracking-based visual paired comparison (VPC) task in an unsupervised remote setting; the study was approved by the University of Arkansas Institutional Review Board. Participants completed the task in their homes using a web camera on their laptop or desktop computer. Briefly, participants were shown a series of identical image pairs during a familiarization phase. During the test phase, participants were then shown a series of non-identical image pairs, each consisting of one novel and one familiar image, and were tasked with focusing their gaze on the novel image. The main outcome measure was novelty preference, the proportion of time spent viewing the novel images compared to the familiar images, which is lower in individuals with impaired memory function than in individuals with normal memory function. The task is described in more detail elsewhere (13).
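To make the outcome measure concrete, the sketch below computes a novelty preference score as the percentage of total viewing time spent on the novel images. It is a minimal illustration assuming per-trial viewing durations have already been extracted from the eye-tracking data; the variable and field names are hypothetical rather than taken from the study’s software.

```python
# Minimal sketch of the novelty preference score: the percentage of total
# viewing time spent on the novel images. Field names are illustrative.

def novelty_preference(trials):
    """Return percent of viewing time spent on novel images across test trials."""
    novel = sum(t["novel_s"] for t in trials)        # seconds spent on novel images
    familiar = sum(t["familiar_s"] for t in trials)  # seconds spent on familiar images
    total = novel + familiar
    return 100.0 * novel / total if total > 0 else float("nan")

# Example: a participant who looks mostly at the novel images scores above 50%.
example_trials = [
    {"novel_s": 2.1, "familiar_s": 1.4},
    {"novel_s": 1.8, "familiar_s": 1.2},
]
print(f"Novelty preference: {novelty_preference(example_trials):.1f}%")  # ~60.0%
```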
Automated algorithms scored the exams and flagged low data capture across the 20 test trials. Distractions were operationalized by a third-party source. The frequency of environmental distractions that resulted in participants looking away from the camera (e.g., due to interruptions, fatigue, or lack of interest) was manually counted on a per-frame and per-trial basis, and the overall frequency within the sample was tallied to determine the percentage of tests affected by environmental distraction. A Fisher’s exact test was used to compare the frequency of distractions during the assessment by sex. Welch’s t-tests were used to compare the age and the novelty preference scores of participants whose administrations did and did not include environmental distractions.
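For illustration, the group comparisons described above could be run as follows. This is a hedged sketch using SciPy; the DataFrame `df`, its column names, and the input file are assumptions, not the study’s actual analysis code.

```python
# Illustrative sketch of the group comparisons described above (not the
# study's actual analysis code). Assumes one row per participant with
# columns: sex ("M"/"F"), distracted (bool), age, novelty_pref.
import pandas as pd
from scipy import stats

df = pd.read_csv("assessments.csv")  # hypothetical input file

# Fisher's exact test on the 2x2 table of sex by distraction status.
# Note: the orientation of the odds ratio depends on category ordering.
table = pd.crosstab(df["sex"], df["distracted"])
odds_ratio, p_sex = stats.fisher_exact(table)

# Welch's t-tests (unequal variances) for age and novelty preference,
# comparing distracted vs. non-distracted participants.
distracted = df[df["distracted"]]
undistracted = df[~df["distracted"]]
t_age, p_age = stats.ttest_ind(distracted["age"], undistracted["age"], equal_var=False)
t_np, p_np = stats.ttest_ind(
    distracted["novelty_pref"], undistracted["novelty_pref"], equal_var=False
)
```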
Results
Results are highlighted in Figure 1. A total of 1,442 participants (mean age = 57.4 years, SD = 12.2) completed the visual paired comparison task. Seventy-six percent of the participants (n = 1,092) were female. Of the 1,442 assessment administrations, 106 (7.4%) included an environmental distraction that resulted in participants looking away from the screen at least once during the test trials.
Assessment administrations with environmental distractions were more frequent among male participants (41 of 350) than female participants (65 of 1,092), with an odds ratio of 2.10 (p < .001). The mean age of participants with environmental distractions (M = 51.7, SD = 13.8) was significantly lower than that of participants without environmental distractions (M = 57.8, SD = 11.9) (t = −4.44, p < .001). Lastly, novelty preference scores were lower for participants who were distracted (M = 55.6%, SD = 8.0) than for those who were not distracted (M = 58.8%, SD = 9.1) (t = 3.7, p < .001).
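As a quick arithmetic check, the reported odds ratio follows directly from the counts above:

```python
# Reproducing the reported odds ratio from the counts above:
# 41 of 350 men and 65 of 1,092 women experienced a distraction.
odds_men = 41 / (350 - 41)       # ≈ 0.133
odds_women = 65 / (1092 - 65)    # ≈ 0.063
print(round(odds_men / odds_women, 2))  # 2.1
```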
Discussion
Digital cognitive assessments that can be taken remotely and asynchronously represent a compelling solution to meet the growing demand for cognitive testing. Adoption of available testing options remains low due to uncertainty surrounding the quality, usability, and actionability of the data collected. In this study, we set out to measure the occurrence of environmental distractions, defined as periods of time spent looking away from the camera, during an unsupervised at-home administration of a brief cognitive assessment.
The role environmental distractions play in assessment variability is not limited to remote, asynchronous test administration. For example, Schatz and colleagues (2010) reported that high school athletes who reported environmental distractions during group baseline ImPACT testing endorsed significantly more behavioral symptoms than those who did not report environmental distractions. The frequency of environmental distractions during the brief unsupervised cognitive assessment in this study (7.4%) was comparable to what Schatz and colleagues previously reported during cognitive testing batteries administered in group settings (9.7%) (14). While there is currently no established threshold for the amount of distraction that is acceptable to maintain test validity, these rates of distraction likely introduce enough uncertainty to preclude clinicians and researchers from using data from unsupervised remote cognitive tests without a reliable and validated form of quality assurance. We also found relationships between the frequency of distractions and both the age and sex of the participants. These data suggest it may be possible to predict, based on standard demographic information, which participants are more likely to become distracted during a remote cognitive testing session. Additionally, participants who were distracted during the testing session scored significantly lower than participants who were not distracted, highlighting how time spent looking off screen may yield artificially low estimates of cognitive performance.
These results also underscore the challenges of high-quality data collection associated with unsupervised comprehensive cognitive assessment. This study used a brief 5-minute VPC task and included instructions at the beginning to ensure the testing environment was quiet and free from distractions. Most standard comprehensive digital cognitive assessments or assessment batteries require 30 to 45 minutes (e.g., the 30-minute National Institutes of Health Toolbox Cognitive Battery) (15). Longer test durations will likely further increase the likelihood of distractions during remote, asynchronous at-home administrations. The inability to determine which participants’ results may have been affected by distractions presents a challenge for researchers and clinicians attempting to use the data in clinical or research decisions.
Despite these challenges, unsupervised and asynchronous neuropsychological assessment remains a promising method for the efficient remote measurement of cognition, but only when data quality metrics can be collected and verified. The use of eye tracking-based cognitive assessments presents the unique opportunity to collect such data by having “eyes on the patient.” Our use of an automated algorithm to flag periods of low data capture, combined with manual coding of environmental distractions when participants looked away from the screen (e.g., due to fatigue, interruptions, or lack of interest), provides a model for scalable analysis of environmental conditions during remote cognitive assessment administrations; a minimal sketch of how such per-trial flagging might work is given below.
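The sketch below assumes each trial is represented as a sequence of frame-level booleans indicating whether gaze was successfully captured; the 50% threshold and the definition of a valid frame are illustrative assumptions, not the study’s actual criteria.

```python
# Illustrative per-trial quality flagging (assumed criteria, not the
# study's actual algorithm): a trial is flagged when the fraction of
# frames with successfully captured gaze falls below a threshold.
from typing import List

def flag_low_capture(trials: List[List[bool]], min_valid_fraction: float = 0.5) -> List[int]:
    """Return indices of trials whose fraction of valid frames is below threshold."""
    flagged = []
    for i, frames in enumerate(trials):
        valid_fraction = sum(frames) / len(frames) if frames else 0.0
        if valid_fraction < min_valid_fraction:
            flagged.append(i)
    return flagged

# Example: the second trial has gaze captured in only 1 of 4 frames.
trials = [[True, True, True, False], [True, False, False, False]]
print(flag_low_capture(trials))  # [1]
```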
The ubiquity of webcams in mobile devices, tablets, and computers presents an intriguing opportunity to further develop methods to enable the collection and rigorous analysis of remotely collected cognitive assessment data. In the future, the incorporation of methods that allow for identity verification will assure researchers and clinicians that the correct person is in fact the one completing the assessment. These developments can provide a level of data quality assurance not previously possible and lay the groundwork for the wider adoption of remote cognitive assessment options by clinicians and researchers to help meet the growing demand.
Acknowledgments
The authors would like to thank the participants for their time.
Funding: This work was supported by funding from Neurotrack Technologies, Inc.
Conflict of Interest: Dr. Harrison reports personal fees from Astra Zeneca, personal fees from Axon Neuroscience, personal fees from Axovant, personal fees from Biogen Idec, personal fees from Boehringer Ingelheim, personal fees from Signant, personal fees from CRF Health, personal fees from Eisai, personal fees from Eli Lilly, personal fees from GfHEU, personal fees from Heptares, personal fees from Kaasa Health, personal fees from MyCognition, personal fees from Neurocog, personal fees and other from Neurotrack, personal fees from Novartis, personal fees from Nutricia, personal fees from Probiodrug, personal fees from Regeneron, personal fees from Sanofi, personal fees from Servier, personal fees from Takeda, personal fees from vTv Therapeutics, personal fees from Lundbeck, personal fees from Compass Pathways, personal fees from G4X Discovery, personal fees from Cognition Therapeutics, personal fees from AlzeCure, personal fees from FSV7, personal fees from BlackThornRx, personal fees from Winterlight Labs, personal fees from Rodin Therapeutics, personal fees from Lysosome Therapeutics, personal fees from Syndesi Therapeutics, personal fees from Vivoryon Therapeutics, personal fees from Neurodyn Inc, personal fees from Aptinyx, personal fees from Athira Therapeutics, personal fees from EIP Pharma, personal fees from Cerecin, personal fees from Neurocentria, personal fees from Curasen, personal fees from Samumed, personal fees from Cognition Therapeutics, personal fees from ReMynd, personal fees from Ki-Elements, personal fees from The NHS, outside the submitted work.
Ethical standards: Ethical approval was obtained from the University of Arkansas Institutional Review Board.
References
1. Rao A, Manteau-Rao M, Aggarwal N. [P1–561]: Dementia Neurology Deserts: What Are They And Where Are They Located In The U.S.? Alzheimers Dement. 2017;13(7S_Part_10):P509. doi: 10.1016/j.jalz.2017.06.577.
2. Hantke NC, Gould C. Examining Older Adult Cognitive Status in the Time of COVID-19. J Am Geriatr Soc. 2020;68(7):1387–1389. doi: 10.1111/jgs.16514.
3. Charalambous AP, Pye A, Yeung WK, Leroi I, Neil M, Thodi C, Dawes P. Tools for App- and Web-Based Self-Testing of Cognitive Impairment: Systematic Search and Evaluation. J Med Internet Res. 2020;22(1):e14551. doi: 10.2196/14551.
4. Sabbagh MN, Boada M, Borson S, Doraiswamy PM, Dubois B, Ingram J, et al. Early detection of Mild Cognitive Impairment (MCI) in an at-home setting. J Prev Alzheimers Dis. 2020:1–8. doi: 10.14283/jpad.2020.22.
5. Bilder RM, Postal KS, Barisa M, Aase DM, Cullum CM, Gillaspy SR, Morgan JM. InterOrganizational practice committee recommendations/guidance for teleneuropsychology (TeleNP) in response to the COVID-19 pandemic. Clin Neuropsychol. 2020;34(7–8):1314–1334. doi: 10.1080/13854046.2020.1767214.
6. Maljkovic V, Pugh MAM, Yaari R, Shen J, Juusola J. At Home Cognitive Testing (CANTAB battery) in Healthy Controls and Cognitively Impaired Patients: A Feasibility Study. Age. 2019;66(69.65):0–001.
7. Cromer JA, Harel BT, Yu K, et al. Comparison of Cognitive Performance on the Cogstate Brief Battery When Taken In-Clinic, In-Group, and Unsupervised. Clin Neuropsychol. 2015;29(4):542–558. doi: 10.1080/13854046.2015.1054437.
8. Backx R, Skirrow C, Dente P, Barnett JH, Cormack FK. Comparing Web-Based and Lab-Based Cognitive Assessment Using the Cambridge Neuropsychological Test Automated Battery: A Within-Subjects Counterbalanced Study. J Med Internet Res. 2020;22(8):e16792. doi: 10.2196/16792.
9. Bott NT, Madero EN, Glenn JM, et al. Device-Embedded Cameras for Eye Tracking-Based Cognitive Assessment: Implications for Teleneuropsychology. Telemed J E Health. 2020;26(4):477–481. doi: 10.1089/tmj.2019.0039.
10. Bott N, Madero EN, Glenn J, et al. Device-Embedded Cameras for Eye Tracking-Based Cognitive Assessment: Validation With Paper-Pencil and Computerized Cognitive Composites. J Med Internet Res. 2018;20(7):e11143. doi: 10.2196/11143.
11. Crutcher MD, Calhoun-Haney R, Manzanares CM, Lah JJ, Levey AI, Zola SM. Eye tracking during a visual paired comparison task as a predictor of early dementia. Am J Alzheimers Dis Other Demen. 2009;24(3):258–266. doi: 10.1177/1533317509332093.
12. Zola SM, Manzanares CM, Clopton P, Lah JJ, Levey AI. A behavioral task predicts conversion to mild cognitive impairment and Alzheimer’s disease. Am J Alzheimers Dis Other Demen. 2013;28(2):179–184. doi: 10.1177/1533317512470484.
13. Bott NT, Lange A, Rentz D, Buffalo E, Clopton P, Zola S. Web Camera Based Eye Tracking to Assess Visual Memory on a Visual Paired Comparison Task. Front Neurosci. 2017;11:370. doi: 10.3389/fnins.2017.00370.
14. Schatz P, Neidzwski K, Moser RS, Karpf R. Relationship between subjective test feedback provided by high-school athletes during computer-based assessment of baseline cognitive functioning and self-reported symptoms. Arch Clin Neuropsychol. 2010;25(4):285–292. doi: 10.1093/arclin/acq022.
15. Weintraub S, Dikmen SS, Heaton RK, et al. The cognition battery of the NIH toolbox for assessment of neurological and behavioral function: validation in an adult sample. J Int Neuropsychol Soc. 2014;20(6):567–578. doi: 10.1017/S1355617714000320.