Digital Health. 2023 Aug 13;9:20552076231194944. doi: 10.1177/20552076231194944

Assessing cancer-related cognitive function in the context of everyday life using ecological mobile cognitive testing: A protocol for a prospective quantitative study

Ashley M Henneghan 1, Kathleen M Van Dyk 2,3, Robert A Ackerman 4, Emily W Paolillo 5, Raeanne C Moore 6
PMCID: PMC10426293  PMID: 37588154

Abstract

Objective

Millions of cancer survivors are at risk for cancer-related cognitive impairment (CRCI), yet accurate and accessible assessments of cognitive functioning remain limited. Ecological mobile cognitive testing (EMCT) could offer a solution. This paper presents the protocol for a study that aims to (1) establish the reliability and validity of EMCT to assess CRCI in breast cancer survivors, and (2) prospectively evaluate within-person processes (and interactions) among context, mood, and behavior that explain cognitive variability, everyday functioning, and quality of life of cancer survivors.

Methods

Participants will include breast cancer survivors (>21 years old) who are within 5 years of completing chemotherapy treatment. Participants will complete two virtual visits (baseline, follow-up) 2 months apart to assess self-reported cognitive symptoms and cognitive performance, sociodemographic characteristics, clinical history, everyday functioning, and quality of life. Between virtual visits, EMCT will be used to sample cognitive functioning every other day (28 times total). We will use linear mixed-effect regressions and single-level multiple regression models to analyze the data.

Results

We anticipate a minimum of 124 breast cancer survivors enrolling and completing data collection. Study results will be published in peer-reviewed scientific journals.

Conclusions

Our findings will have broad implications for assessing CRCI in an ecologically valid and person-centered way using EMCT. We aim to provide this protocol to aid researchers who would like to apply this approach to their studies.

Keywords: Cancer-related cognitive impairment, ecological momentary assessment, mobile cognitive testing, protocol, intensive longitudinal design

Introduction

Up to 75% of cancer survivors have cancer-related cognitive impairments (CRCI), which typically manifest as post-treatment difficulties with attention, executive function, memory, and processing speed.1–6 CRCI can have devastating effects on the daily functioning of survivors7,8 and are associated with reduced quality of life (QOL), poor social and occupational function, and decreased survival.9,10 Evidence suggests that CRCI are subtle, diffuse, and vary considerably from person to person and across different settings. 11 Long-term trajectories of CRCI are variable—some survivors improve over time, others remain unchanged, and some progressively worsen or evolve into more severe cognitive disorders.5,12–14 Despite the millions of cancer survivors at risk for CRCI, 15 accurate and accessible assessments of cognitive functioning remain limited (e.g. high cost, low access, high demand).

In cancer patients and survivors, CRCI are typically measured using standardized neuropsychological tests 16 and patient-reported outcomes (PROs) of cognitive function. 17 Specific cognitive tests are recommended for the quantification and identification of CRCI. 16 However, standardized neuropsychological measures were originally developed for other, typically more severely neurologically compromised populations (e.g. post-stroke, dementia, traumatic brain injury) and thus have been criticized for having limited sensitivity, specificity, and reliability for CRCI.18–21 Standardized neuropsychological tests also likely lack ecological validity.22,23 They are typically administered in lab or research settings and rarely correlate with (or predict) functioning on everyday tasks, occupational outcomes, QOL, or well-being in cancer patients.24–27 Cognitive PROs used to assess CRCI, on the other hand, often do correlate with everyday cognitive functioning and QOL,2,9,19,24,26,28–33 and several were developed specifically for use in cancer populations.34,35 One limitation of cognitive PROs is a reliance on retrospective recall, which can bias data. They have also been criticized for not correlating with standardized cognitive tests.22,24,26

Real-life context is a critical component of cognitive functioning not directly addressed by traditional assessments (i.e. standardized cognitive tests, cognitive PROs). 36 In real-world environments, one's cognitive performance is highly influenced by many state-dependent factors (e.g. mood, energy) and contexts (e.g. a cognitively demanding activity, distractions) that individually, or in combination, interfere with cognitive functioning. Research findings, including our own, have emphasized the importance of these day-to-day variations in cancer survivors, documenting survivors’ critiques that retrospective cognitive outcome measures may not capture their cognitive fluctuations.9,37 To date, use of ecologically valid cognitive assessments is scarce in the field of CRCI, which limits our understanding of the nature, trajectory, and impact of this clinical problem.

Ecological momentary assessments (EMAs) allow researchers to directly capture within-person variation in, and interactions among, behavioral and cognitive processes, 38 and are most often delivered via mobile technologies such as smartphones. Mobile cognitive testing can be coupled with EMA to objectively measure cognitive performance in people's natural environments—what we term ecological momentary cognitive testing (EMCT). Use of EMCT in research studies is on the rise, and the feasibility, validity, and reliability of EMCT have been established in several cognitively vulnerable populations, including people with serious mental illness, mood disorders, and mild cognitive impairment.39–43 However, EMCT studies in the field of CRCI are only recently gaining traction.

Small et al. 44 examined associations of fatigue and depressed mood with cognitive variability in 47 breast cancer survivors (BCS) over 14 days using EMCT. They reported reliable and valid within- and between-person EMCT measures and found that cognitive performance was poorer on days when fatigue was worse. They also reported that over half of the variability in cognitive performance was attributable to within-person rather than between-person variation. 44 This group also reported that survivors were 3 times more likely than a control group to experience memory lapses associated with negative affect. 45 This work provides preliminary evidence that within-person variability outweighs between-person variability for cognitive measures in CRCI, and that negative affect and worse fatigue are predictive of cognitive performance in BCS. However, questions remain regarding how well these EMCTs map onto more traditional measures for CRCI (standardized cognitive testing, PROs), and what within-person processes (and interactions) among context, mood, and behavior explain cognitive variability in survivors with CRCI.

In this study, our primary aim is to establish the reliability and validity of EMCT measures in BCS, including their relations to performance-based measures of executive functioning, attention, processing speed, and memory, as well as to self-reported EMAs of psychological and cognitive symptoms. Our secondary aim is to quantify longitudinal relationships among contextual factors (i.e. daily activities and self-reported cognitive function) and EMCT performance within BCS. Finally, our exploratory aim is to determine how within-person variation in EMCT predicts everyday functioning and QOL of BCS across time.

Methods

Design

This will be a remote, prospective observational study of BCS within 5 years of completing chemotherapy treatment. We will collect baseline and follow-up data 2 months apart using traditional surveys delivered via REDCap (Vanderbilt University, Nashville, TN) and remote cognitive assessments delivered via BrainCheck (BrainCheck, Inc.). Baseline data collection will take approximately 60 minutes to complete (45 minutes of surveys, 15 minutes of cognitive tests) and follow-up data collection will take approximately 45 minutes to complete (30 minutes of surveys, 15 minutes of cognitive tests). Procedures will be explained to participants verbally upon enrollment, and written instructions for completing the REDCap surveys and BrainCheck batteries will be provided via email. BrainCheck batteries include video instructions prior to starting testing. Video conference appointments will be offered to assist participants with data collection as needed. As of June 14, 2023, we have enrolled 14 participants, and all have completed baseline data collection without difficulty; none have requested or required assistance.

We will administer an EMCT protocol for 56 days (8 weeks), every other day between baseline and follow-up data collection (28 assessments total), via participants’ smartphones using the NeuroUX platform (https://www.getneuroux.com/). Each assessment session takes approximately 10 minutes to complete. Study start-up activities began in March 2023, and recruitment and data collection began in May 2023. We anticipate that data collection will be completed by September 2024 and final data analyses by January 2025. See Figure 1 for the protocol data collection timeline for each participant.

Figure 1. Protocol data collection timeline. Abbreviations: D: day; Hx: history; PROs: patient-reported outcome measures. Baseline data collection will take approximately 1 hour; follow-up data collection will take approximately 45 minutes; EMCT every other day will take approximately 10 minutes per day. Total time for each participant in the protocol is 8 weeks.

Ethics and Institutional Review Board

All study-related procedures have been reviewed and are overseen by the University of Texas at Austin Institutional Review Board (STUDY00002393) and were determined to be exempt; thus, written consent was not deemed necessary. The potential risks associated with this study are psychological (fatigue related to cognitive testing and data collection burden) and related to privacy and/or confidentiality. We do not expect these risks to be greater than those experienced in meeting the cognitive, physical, and emotional demands of everyday life for cancer survivors. The primary benefit to participants is that they will help generate new knowledge for the clinical and practical care of BCS. While not a benefit, participants will receive a small incentive (up to $78) for participating in the study ($25 for baseline data, $25 for follow-up data, $1/day of the EMCT protocol) and reports of their scores on the EMCT gamified cognitive tests after completing their protocols.

Study population inclusion and exclusion criteria

We will recruit and enroll 124 female BCS, aged 21 years and older, within 5 years of finishing chemotherapy, who are willing and physically/cognitively able to participate in data collection (Karnofsky Performance Score > 70; Mini-MoCA score > 12).46,47 CRCI onset is common after chemotherapy completion, with trajectories of improvement/decline emerging during this period.24,48 We aim to capitalize on this within- and between-person variability in cognitive functioning to achieve the aims of this study. BCS currently on hormonal therapies or HER2-targeted therapies will be included. Participants will be excluded for any prior history of systemic cancer treatment (other than their breast cancer diagnosis and treatment), as cancer recurrence and multiple cancer diagnoses requiring systemic treatment could confound cognitive functioning. Those with a healthcare provider-diagnosed serious psychiatric or neurological condition (e.g. dementia, active substance abuse, unmanaged mental health diagnoses), current metastases to the brain, or who are pregnant will also be excluded, as these diagnoses are known to impact cognitive functioning. It is estimated that 85% of the U.S. population owns a smartphone, 49 and participants will be asked to use their personal smartphones for this study; however, if participants do not own a smartphone or do not wish to use their personal phone, we will provide them with one for the duration of the study. We will recruit participants both locally (in Central Texas and Southern California) and nationally (via social media platforms), since all data collection will be conducted remotely. We plan to enroll a demographically representative sample of BCS (see Table 1 for planned study enrollment by race, ethnicity, and sex).

Table 1.

Planned study enrollment table.

Racial category                      Not Hispanic/Latina        Hispanic/Latina        Total
                                     Female        Male         Female       Male
American Indian/Alaska Native        2             0            0            0          2
Asian                                12            0            0            0          12
Native Hawaiian/Pacific Islander     2             0            0            0          2
Black or African American            13            0            3            0          16
White                                63            0            26           0          89
More than One Race                   2             0            1            0          3
Total                                94            0            30           0          124

Instruments

The corresponding author can be contacted to request a complete packet and references of all study measures that are not proprietary.

Sociodemographic and clinical variables

A questionnaire will be used to collect sociodemographic characteristics (e.g. age, education, race, ethnicity, marital status, children/dependents, income, employment), health history (e.g. comorbidities, menstrual history, current medications) and cancer history (e.g. breast cancer type/stage, cancer treatment details, end date of chemotherapy) at baseline.

Cognitive function: At baseline and follow-up, we will administer the Functional Assessment of Cancer Therapy-Cognitive Function, version 3 (FACT-Cog) to assess cognitive symptoms in the previous 7 days in four domains—perceived impairments, perceived abilities, QOL, and comments from others. 50 The FACT-Cog Perceived Cognitive Impairment subscale will be used in data analyses and is a commonly used measure of cognitive symptoms in this population. We will administer a computerized battery of standardized neuropsychological tests (BrainCheck) to assess cognitive performance, including the Flanker Test for visual executive attention, the Trail Making Test for executive function, the Digit Symbol Substitution Test for processing speed, the Stroop Test for response inhibition, and a recall test (list learning) for verbal memory. 51 BrainCheck administration includes sending participants detailed instructions for accessing and using the platform, including an introductory video, with study staff available to assist via video conference if needed. BrainCheck has demonstrated high sensitivity and reliability 52 for mild cognitive impairment 53 and is a novel tool for efficient and remote cognitive testing. We will use standardized domain-specific scores in the analyses.

Mood, anxiety, fatigue: At baseline and follow-up, we will administer the Patient-Reported Outcomes Measurement Information System (PROMIS) scales—Emotional Distress—Anxiety Short Form 8a (“PROMIS Anxiety”), Emotional Distress—Depression Short Form 8a (“PROMIS Depression”), and PROMIS Fatigue Short Form 8a (“PROMIS Fatigue”). 54 These instruments are psychometrically sound and have demonstrated strong reliability in our previous studies with BCS (reliabilities 0.93–0.96).54,55 Raw total scores will be used in the analyses.

Everyday functioning and quality of life: At baseline and follow-up, we will administer the Social Difficulties Inventory (SDI) to comprehensively evaluate social/occupational functioning in the previous week. 56 We will thus operationalize “everyday functioning” as social/occupational functioning and social/occupational satisfaction. The SDI was recommended in a systematic review of instruments used to measure the social impact of cancer. 57 This 21-item instrument, developed for oncology populations, has demonstrated validity and reliability. 58 The PROMIS Satisfaction with Participation in Social Roles—Short Form 8a (“PROMIS Social”) will also be administered to capture satisfaction with one's ability to participate in social/occupational roles. This instrument has demonstrated measurement invariance, internal reliability, validity, and no ceiling effects in BCS. 59 The FACT-General (FACT-G) will be used to measure domains of QOL and well-being (physical; social/family; emotional; functional) in the previous week. 60 The FACT-G was chosen based on its widespread use in oncology research,61,62 including CRCI-specific research. 63 Total scores will be used in the analyses.

EMCT protocol

EMCT, including EMA surveys and gamified mobile cognitive tests, will be administered via NeuroUX and texted to participants via a web link every other day between baseline and follow-up data collection (a total of 28 times across the 56-day protocol). EMCT administration times will occur between each participant's typical waking and bedtimes and will vary across morning, midday, and evening throughout the study period. Session times are personalized for each participant based on their availability and preferred times. The research team enters the earliest and latest session times for each participant in the Investigator dashboard, and a random time within that range is used for each of the participant's 28 sessions. We send one reminder notification if a session is not completed within an hour of the initial notification, and sessions expire 2 hours after the notification is sent if not completed. Session times are thus varied pseudorandomly for each participant based on the time ranges entered by the research team. The time intervals between sessions for each participant also vary, since the sessions are administered on alternate days. NeuroUX EMCT has demonstrated reliability and validity in middle-aged and older adults with and without cognitive impairment.39,42,43
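
To make this scheduling logic concrete, below is a minimal sketch in R of how such a pseudorandom schedule could be generated. This is purely illustrative and is not the NeuroUX implementation; the default window times and the function name are hypothetical, while the alternate-day spacing, 1-hour reminder, and 2-hour expiry follow the description above.

```r
# Illustrative sketch of the pseudorandom EMCT scheduling described above
# (not the NeuroUX implementation). One session every other day, delivered at
# a random time within a participant-specific window, with one reminder after
# 1 hour and expiry 2 hours after the initial notification.

to_minutes <- function(hhmm) {                 # "09:30" -> 570 minutes after midnight
  p <- as.numeric(strsplit(hhmm, ":")[[1]])
  p[1] * 60 + p[2]
}

schedule_emct <- function(start_date, earliest = "09:00", latest = "19:00",
                          n_sessions = 28) {
  days   <- seq(as.Date(start_date), by = 2, length.out = n_sessions)    # alternate days
  minute <- runif(n_sessions, to_minutes(earliest), to_minutes(latest))  # random time in window
  notify <- as.POSIXct(days, tz = "UTC") + minute * 60
  data.frame(session  = seq_len(n_sessions),
             notify   = notify,
             reminder = notify + 3600,          # one reminder 1 hour later
             expires  = notify + 2 * 3600)      # session expires 2 hours after notification
}

head(schedule_emct("2023-06-01"))
```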

Each EMCT session will query current daily activity using a list of items that are categorized as cognitively demanding activities, passive leisure activities, instrumental activities of daily living, physical activity, social activities, and other activities, as previously described. 64 Each session will also collect single-item Likert-type ratings for “depressed/sadness,” “anxiety,” “fatigue,” and “cognitive functioning” that assess how bad/good each symptom is “right now.” Response options will range from “Not at all” (0) to “Extremely” (7), consistent with our previous EMCT studies. These questions were adapted from an EMA study with BCS. 44 After daily activity and symptoms are assessed, 4 mobile cognitive tests will be administered, each approximately 2 minutes long, in the domains of attention, executive functioning, processing speed, and verbal/spatial memory. These domains were chosen because they are the most commonly impacted by CRCI. 1

Two different tests will be administered for each cognitive domain and balanced throughout the protocol (14 administrations each). For executive attention, the N-Back (using a 2-back design, 12 trials per test) and CopyKat tasks will be used. The N-Back is a common task that requires the participant to remember whether a certain number was displayed 2 trials back. CopyKat is similar to the popular electronic game Simon: participants are presented with a 2 × 2 matrix of colored tiles in a fixed position, the tiles briefly light up in a random order, and participants are asked to replicate the pattern by pressing the colored tiles in the correct order. The time to complete is variable, but 3 trials take approximately 2 to 3 minutes. For executive functioning, the Color Trick and Hand Swype tasks will be used. Color Trick asks the participant to match the color of a word with its meaning, with 15 trials per test. Hand Swype asks the participant to swipe in the direction of a hand symbol or in the direction that matches the way the symbols are moving across the screen. Hand Swype is a time-based task: the test continues for 1 minute regardless of whether responses are correct or incorrect.

For processing speed, the Matching Pair and Quick Tap tasks will be used. For Matching Pair, the participant is asked to quickly identify the matching pair of tiles out of 6 or more tiles. Matching Pair is a time-based task that runs for 90 seconds; the grid size increases with correct responses, up to a maximum of 4 × 4 tiles. For Quick Tap, the participant is asked to wait and tap a symbol when it is displayed, with 12 trials per test. For memory, the Variable Difficulty List Memory Test (VLMT) and Memory Matrix tasks will be used. For the VLMT, participants are shown a list of 12–18 random words and given 30 seconds to memorize it; immediately after the memorization period, they answer yes/no questions indicating whether words were on the list. For Memory Matrix, patterns are briefly displayed, and participants are asked to reproduce each pattern by touching the tiles that were in it. This test gets progressively harder with correct responses: the first trial starts with a 2 × 2 grid, and the grid size can increase up to 7 × 7 tiles until the participant gives three incorrect responses.
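
To illustrate the adaptive rule just described, here is a small sketch in R of the Memory Matrix grid-size progression. It is illustrative only, not the NeuroUX implementation; the response model and the assumption that the grid grows by one after each correct trial are placeholders.

```r
# Illustrative sketch of the Memory Matrix difficulty progression described
# above (not the NeuroUX implementation): the grid starts at 2 x 2, gets
# harder after correct responses (assumed here to grow by one per correct
# trial) up to 7 x 7, and the test ends after three incorrect responses.

run_memory_matrix <- function(p_correct = 0.8) {  # placeholder response model
  grid_size <- 2
  errors <- 0
  trials <- 0
  while (errors < 3) {
    trials  <- trials + 1
    correct <- runif(1) < p_correct               # simulated participant response
    if (correct) {
      grid_size <- min(grid_size + 1, 7)          # progress toward the 7 x 7 maximum
    } else {
      errors <- errors + 1
    }
  }
  c(trials = trials, final_grid = grid_size)
}

run_memory_matrix()
```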

Data analysis plan

Data preprocessing

We will use valid and complete EMCT data to first calculate day-specific and aggregate mobile cognitive test indices for longitudinal and cross-sectional analyses, respectively. Day-specific mobile cognitive test indices capture a participant's mobile cognitive test performance on a given day and include their total score and/or median reaction times across trials for each task. Aggregate mobile cognitive test indices are calculated by collapsing day-specific indices over the entire length of the study period to measure participants’ average and day-to-day variability in task performances over time. Domain-specific cognitive composite scores will be calculated (i.e. executive attention, executive functioning, processing speed, memory) for both day-specific and aggregate mobile cognitive test indices.
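
As an illustration of this preprocessing step, the following is a minimal sketch in R using dplyr; the data frame trials and its columns (id, day, domain, task, correct, rt_ms) are hypothetical stand-ins for the trial-level EMCT export.

```r
library(dplyr)

# Hypothetical trial-level EMCT data: one row per trial with columns
# id, day, domain, task, correct (0/1), and rt_ms (reaction time in ms).

# Day-specific indices: total score and median reaction time per task per day
day_level <- trials %>%
  group_by(id, day, domain, task) %>%
  summarise(total_correct = sum(correct, na.rm = TRUE),
            median_rt     = median(rt_ms, na.rm = TRUE),
            .groups = "drop")

# Domain-specific composites per day (here, the mean of z-scored task scores)
day_composites <- day_level %>%
  group_by(domain, task) %>%
  mutate(z_score = as.numeric(scale(total_correct))) %>%
  group_by(id, day, domain) %>%
  summarise(composite = mean(z_score), .groups = "drop")

# Aggregate indices: person-level average performance and day-to-day variability
aggregate_level <- day_composites %>%
  group_by(id, domain) %>%
  summarise(mean_perf = mean(composite, na.rm = TRUE),
            sd_perf   = sd(composite, na.rm = TRUE),
            .groups = "drop")
```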

Aim 1: To establish the reliability and validity of EMCT administration in BCS

We expect that EMCTs will demonstrate good levels of reliability and convergent validity. We will use generalizability theory to evaluate the reliability of between-person individual differences and within-person change for each symptom assessment and mobile cognitive task. 65 We will then compute average within-person ratings for each EMA-reported symptom (depressive symptoms, anxiety, fatigue, and self-rated cognitive functioning) across the 28 assessments. Convergent validity for the EMCT symptom assessments will be evaluated by correlating the average within-person EMA-reported symptom severities with the baseline and follow-up PROMIS Depression, PROMIS Anxiety, PROMIS Fatigue, and FACT-Cog scores. Moreover, convergent validity for the aggregate mobile cognitive test indices will be evaluated by correlating them with the baseline and follow-up BrainCheck test scores for the Flanker Test (attention), Trail Making Test B and Stroop Test (executive functioning), Digit Symbol Substitution Test (processing speed), and immediate recall (memory).
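
For example, convergent validity for the momentary fatigue ratings could be examined with a sketch like the one below (in R; the emct and baseline data frames and their column names are hypothetical). The same pattern extends to the other symptom ratings and to correlating aggregate mobile cognitive test indices with the BrainCheck scores.

```r
library(dplyr)

# Average each participant's momentary fatigue ratings across the 28 sessions
fatigue_means <- emct %>%
  group_by(id) %>%
  summarise(mean_fatigue = mean(fatigue, na.rm = TRUE), .groups = "drop")

# Correlate person-level mean fatigue with the baseline PROMIS Fatigue score
validity <- inner_join(fatigue_means, baseline, by = "id")
cor.test(validity$mean_fatigue, validity$promis_fatigue)
```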

Aim 2: To determine longitudinal relationships among contextual factors and EMCTs across time

We expect that self-rated cognitive functioning will predict same day mobile cognitive test performance (Hypothesis 2a) and that type of daily activity (cognitively demanding versus not) will predict both the self-rating for cognitive functioning and mobile cognitive test performance on the same day (Hypothesis 2b).

To test Hypothesis 2a, we will use linear mixed-effect regressions to model the within-person concurrent relationship between EMCT self-rated cognitive functioning and same-day domain-specific cognitive test performance. We will covary for participants’ average self-rating for cognitive functioning throughout the study duration, as this appropriately differentiates within-person versus between-person effects. All within-person variables will be person-mean centered, and all between-person variables will be grand-mean centered. Participant-specific random intercepts will be modeled. Age, education, menopausal status, and time since chemotherapy will be considered as between-person covariates if they are associated with average mobile cognitive test performance at p < .10.

To test Hypothesis 2b, we will use linear mixed-effect regressions to model the within-person concurrent relationships between (1) type of daily activity and same-day self-rated cognitive functioning, and (2) type of daily activity and same-day mobile cognitive test performance. We will covary for the proportion of surveys on which participants endorsed doing a cognitively demanding activity to appropriately differentiate within-person versus between-person effects. Participant-specific random intercepts will be modeled. Age, education, menopausal status, phone type (iOS; Android), and time since chemotherapy will be considered as between-person covariates if they are associated with average self-rating for cognitive functioning or average mobile cognitive test performance at p < .10.
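
A minimal sketch in R of the Hypothesis 2a model is shown below using lme4 syntax (the Hypothesis 2b models take the same form, with activity type as the within-person predictor). The daily data frame and its variable names are hypothetical, and the optional between-person covariates are omitted for brevity.

```r
library(dplyr)
library(lme4)
library(lmerTest)  # adds p-values for fixed effects in summary()

# Person means of self-rated cognitive functioning (between-person component),
# grand-mean centered across participants
person <- daily %>%
  group_by(id) %>%
  summarise(selfrate_pm = mean(self_cog, na.rm = TRUE), .groups = "drop") %>%
  mutate(selfrate_pm_c = selfrate_pm - mean(selfrate_pm))

# Person-mean centered daily ratings (within-person component)
daily <- daily %>%
  left_join(person, by = "id") %>%
  mutate(selfrate_wp = self_cog - selfrate_pm)

# Hypothesis 2a: within-person self-rated cognition predicting same-day
# domain-specific performance, with a random intercept per participant
m_h2a <- lmer(exec_composite ~ selfrate_wp + selfrate_pm_c + (1 | id), data = daily)
summary(m_h2a)
```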

Aim 3: To determine how within-person variability in EMCT predicts everyday functioning and quality of life of BCS across time

We expect that within-person variability in EMCT variables will predict cognitive PROs (Hypothesis 3a), social functioning (Hypothesis 3b), and QOL (Hypothesis 3c) at follow-up. We will calculate the mean square of successive differences (MSSDs) as our measure of within-person variability for each EMCT variable (symptoms of depression, anxiety, fatigue, cognitive dysfunction, and domain composite scores for mobile cognitive test performance). Then three separate multiple linear regressions will examine the relationship between MSSDs for EMCT variables (entered simultaneously in the models as independent variables) and follow-up PROs (FACT-Cog perceived cognitive impairment subscale [Hypothesis 3a], SDI [Hypothesis 3b], FACT-G [Hypothesis 3c]) covarying for age, education, time since chemotherapy, and menopausal status.
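
For Hypothesis 3a, a minimal sketch in R is shown below. The MSSD for a series x_1, ..., x_n is the mean of the squared successive differences, mean((x_{t+1} - x_t)^2); the daily and followup data frames, their variable names, and the covariate names are hypothetical placeholders.

```r
library(dplyr)

# Mean square of successive differences for one participant's time series
mssd <- function(x) {
  x <- x[!is.na(x)]
  mean(diff(x)^2)
}

# One MSSD per participant for each EMCT variable
variability <- daily %>%
  arrange(id, day) %>%
  group_by(id) %>%
  summarise(across(c(depress, anxiety, fatigue, self_cog,
                     exec_attn, exec_fx, speed, memory),
                   mssd, .names = "mssd_{.col}"),
            .groups = "drop")

# Hypothesis 3a: MSSDs entered simultaneously predicting follow-up FACT-Cog
# perceived cognitive impairment, adjusting for the planned covariates
h3a_data <- left_join(followup, variability, by = "id")
h3a <- lm(factcog_pci ~ mssd_depress + mssd_anxiety + mssd_fatigue +
            mssd_self_cog + mssd_exec_attn + mssd_exec_fx +
            mssd_speed + mssd_memory +
            age + education + months_since_chemo + menopausal_status,
          data = h3a_data)
summary(h3a)
```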

Sample size determination

Monte Carlo simulations were used to conduct power analyses for the longitudinal hypotheses specified in Aim 2, which use linear mixed-effects regression. 66 The simr package in R (version 4.0.5) was used to run 1000 simulations with a two-sided alpha of .0125 (Bonferroni-adjusted for 4 cognitive domain scores), 28 data points per participant, and a medium effect of f² = 0.15, which yielded a sample size of 109 at 80% power. The medium effect size was based on our previous study that found longitudinal relationships between self-report and cognitive performance using EMCT 67 and another cross-sectional study reporting relationships between subjective and objective cognitive outcomes in BCS with CRCI. 68 We will oversample to account for a 13% attrition rate (N = 124). The attrition rate was informed by another prospective study we conducted with BCS (16-week protocol). 37 For Aim 3, a power analysis was conducted using an estimated medium effect (f² = 0.2), a Bonferroni-adjusted α (.05/3 = .017), 13 tested predictors in a linear regression model, and a sample size of 124, which yielded 80% power.
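
As a rough illustration of this simulation-based approach (not the exact parameterization used for the protocol, which targeted f² = 0.15), a sketch with the simr package might look like the following. The fixed-effect slope, random-intercept variance, and residual SD shown are hypothetical placeholders that would need to be chosen to correspond to the targeted effect size.

```r
library(simr)

# Hypothetical design grid: 109 participants x 28 EMCT sessions each
sim_data <- expand.grid(id = factor(1:109), day = 1:28)
sim_data$self_cog <- rnorm(nrow(sim_data))     # placeholder within-person predictor

# Artificial mixed model with assumed effect and variance components
model <- makeLmer(exec_composite ~ self_cog + (1 | id),
                  fixef   = c(0, 0.1),   # intercept and assumed slope (placeholders)
                  VarCorr = 0.5,         # random-intercept variance (assumed)
                  sigma   = 1,           # residual SD (assumed)
                  data    = sim_data)

# Monte Carlo power for the slope at the Bonferroni-adjusted alpha
powerSim(model, test = fixed("self_cog"), nsim = 1000, alpha = 0.0125)
```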

Discussion

Principal findings

This study will provide essential psychometric data for using EMCT to assess cognitive functioning of BCS in the context of everyday life, which can be used for future observational or interventional CRCI research. Neuropsychological testing for domain-specific cognitive functioning remains the most common assessment tool for CRCI measurement. 69 In recent years, efforts have been made to translate these traditional paper-and-pencil tests into digital formats for remote administration (e.g. Cogsuite, 70 Cogstate, 71 NIH Toolbox, 72 BrainCheck 73 ). These platforms can ease burdens related to in-person data collection for both participants and researchers and may be more sensitive to CRCI detection than paper-and-pencil testing.70,74 However, they are typically administered in place of traditional neuropsychological testing under similar circumstances (i.e. in quiet/non-distracting environments, with administrator oversight via phone or videoconference) and at similar frequencies (i.e. weeks to months apart), limiting ecological validity and the ability to capture frequent variations in cognitive functioning. Findings from this study will also provide new insights on assessing both within- and between-person day-to-day variations in cognitive functioning of BCS in an ecologically valid way (using EMCT). Traditional measures of CRCI (standardized cognitive tests; PROs) are likely not administered frequently enough, or in ecologically valid environments, to capture the dynamic changes and variability of cognitive function, which may contribute to missed or underdiagnosed CRCI in cancer survivors.

To be inclusive, we will sample BCS from a large age range, limiting internal validity but strengthening external validity. We will control for age in our analyses and collect other age-related confounders (e.g. menopausal status; medications; comorbidities) to control for these effects if needed. We acknowledge the possibility of recruitment difficulties. To facilitate study enrollment and retention, we will collect all data remotely on participants’ own schedules (except EMCTs, which must be completed within 2 hours of receiving the text). We have also decreased study demands by sending assessments only one time per day rather than multiple times per day. We will use a commercially available platform for EMCT administration and open-source statistical software for analyses (RStudio), and we will include STROBE checklists 36 (recommended for disseminating EMA data) when disseminating study findings.

This study will provide evidence for context-specific factors associated with worse or better cognitive functioning, and for using within-person cognitive variations to predict everyday functioning and QOL of survivors. This approach is aligned with precision health methodologies (i.e. person-centered) and could lead to better screening and diagnosis of CRCI. Findings could also be used to develop personalized interventions for survivors with CRCI, ultimately reducing the disabling impact of CRCI and improving survivors’ QOL. Finally, we hope that this protocol will provide a resource for other investigators who aim to apply a similar approach in their studies.

Footnotes

Contributorship: AMH and RCM conceived and planned the design, drafted and revised the manuscript, and coordinated review and feedback from all co-authors; AMH is the PI on the two funded studies that provided support for this study; AMH, KVD, RA, EWP, and RCM all reviewed and provided critical feedback on the drafts and final version of the manuscript.

The authors declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: R.C.M. is a co-founder of KeyWise AI, Inc. and NeuroUX, Inc. The terms of this arrangement have been reviewed and approved by UC San Diego in accordance with its conflict-of-interest policies. The remaining authors declare that they have no competing interests.

Ethical approval: All study procedures and protocols were in accordance with the Declaration of Helsinki and were approved by the University of Texas at Austin Institutional Review Board (STUDY000029).

Funding: The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: Research reported in this publication was supported by the National Institute of Nursing Research of the National Institutes of Health under Award Number R21NR020497 (AMH); and by the Alzheimer's Association Award number AARF-22-974065 (EWP). Salary support for RCM was provided by NIA R01 AG062387, NIA R01AG070956, and NIEHS 2R01ES025792. KVD is supported by NIH grants K08CA241337, R01CA129769, R01AG068193, and R35CA197289.

Guarantor: Ashley M. Henneghan, PhD, RN, FAAN.

ORCID iD: Ashley M Henneghan https://orcid.org/0000-0002-6733-1926

References

1. Mayo SJ, Lustberg M, Dhillon HM, et al. Cancer-related cognitive impairment in patients with non-central nervous system malignancies: an overview for oncology providers from the MASCC Neurological Complications Study Group. Support Care Cancer 2021; 29: 2821–2840.
2. Lange M, Joly F, Vardy J, et al. Cancer-related cognitive impairment: an update on state of the art, detection, and management strategies in cancer survivors. Ann Oncol 2019; 30: 1925–1940.
3. Kesler S, Rao A, Blayney DW, et al. Predicting long-term cognitive outcome following breast cancer with pre-treatment resting state fMRI and random forest machine learning. Front Human Neurosci 2017; 11: 55.
4. Boscher C, Joly F, Clarisse B, et al. Perceived cognitive impairment in breast cancer survivors and its relationships with psychological factors. Cancers (Basel) 2020; 12: 3000.
5. Janelsins MC, Kesler SR, Ahles TA, et al. Prevalence, mechanisms, and management of cancer-related cognitive impairment. Int Rev Psychiatry 2014; 26: 102–113.
6. Wefel JS, Kesler SR, Noll KR, et al. Clinical characteristics, pathophysiology, and management of noncentral nervous system cancer-related cognitive impairment in adults. CA Cancer J Clin 2015; 65: 123–138.
7. Selamat MH, Loh SY, Mackenzie L, et al. Chemobrain experienced by breast cancer survivors: a meta-ethnography study investigating research and care implications. PLoS One 2014; 9: e108002.
8. Porro B, Durand MJ, Petit A, et al. Return to work of breast cancer survivors: toward an integrative and transactional conceptual model. J Cancer Surviv 2022; 16: 590–603.
9. Boykoff N, Moieni M, Subramanian SK. Confronting chemobrain: an in-depth look at survivors’ reports of impact on work, social networks, and health care response. J Cancer Surviv 2009; 3: 223–232.
10. Robb C, Boulware D, Overcash J, et al. Patterns of care and survival in cancer patients with cognitive impairment. Crit Rev Oncol Hematol 2010; 74: 218–224.
11. Ahles TA, Hurria A. New challenges in psycho-oncology research IV: cognition and cancer: conceptual and methodological issues and future directions. Psychooncology 2018; 27: 3–9.
12. Janelsins MC, Heckler CE, Peppone LJ, et al. Longitudinal trajectory and characterization of cancer-related cognitive impairment in a nationwide cohort study. J Clin Oncol 2018; 36: JCO2018786624.
13. Araújo N, Severo M, Lopes-Conceição L, et al. Trajectories of cognitive performance over five years in a prospective cohort of patients with breast cancer (NEON-BC). Breast 2021; 58: 130–137.
14. Mayo SJ, Lustberg M, Dhillon HM, et al. Cancer-related cognitive impairment in patients with non-central nervous system malignancies: an overview for oncology providers from the MASCC Neurological Complications Study Group. Support Care Cancer 2021; 29: 2821–2840.
15. American Cancer Society. Cancer treatment & survivorship facts & figures 2019-2021. Atlanta, GA: American Cancer Society, 2019.
16. Wefel JS, Vardy J, Ahles T, et al. International cognition and cancer task force recommendations to harmonise studies of cognitive function in patients with cancer. Lancet Oncol 2011; 12: 703–708.
17. Henneghan AM, Van Dyk K, Kaufmann T, et al. Measuring self-reported cancer-related cognitive impairment: recommendations from the cancer neuroscience initiative working group. J Natl Cancer Inst 2021; 113: 1625–1633.
18. Nelson WL, Suls J. New approaches to understand cognitive changes associated with chemotherapy for non-central nervous system tumors. J Pain Symptom Manage 2013; 46: 707–721.
19. Savard J, Ganz PA. Subjective or objective measures of cognitive functioning-what’s more important? JAMA Oncol 2016; 2: 1263–1264.
20. Horowitz TS, Suls J, Trevino M. A call for a neuroscience approach to cancer-related cognitive impairment. Trends Neurosci 2018; 41: 493–496.
21. Andreotti C, Root JC, Schagen SB, et al. Reliable change in neuropsychological assessment of breast cancer survivors. Psychooncology 2016; 25: 43–50.
22. Hutchinson AD, Hosking JR, Kichenadasse G, et al. Objective and subjective cognitive impairment following chemotherapy for cancer: a systematic review. Cancer Treat Rev 2012; 38: 926–934.
23. Howieson D. Current limitations of neuropsychological tests and assessment procedures. Clin Neuropsychol 2019; 33: 200–208.
24. Bray VJ, Dhillon HM, Vardy JL. Systematic review of self-reported cognitive function in cancer patients following chemotherapy treatment. J Cancer Surviv 2018; 12: 537–559.
25. Dwek MR, Rixon L, Hurt C, et al. Is there a relationship between objectively measured cognitive changes in patients with solid tumours undergoing chemotherapy treatment and their health-related quality of life outcomes? A systematic review. Psychooncology 2017; 26: 1422–1432.
26. Costa DSJ, Fardell JE. Why are objective and perceived cognitive function weakly correlated in patients with cancer? J Clin Oncol 2019; 37: 1154–1158.
27. Nieuwenhuijsen K, de Boer A, Spelten E, et al. The role of neuropsychological functioning in cancer survivors’ return to work one year after diagnosis. Psychooncology 2009; 18: 589–597.
28. Duijts SF, van Egmond MP, Spelten E, et al. Physical and psychosocial problems in cancer survivors beyond return to work: a systematic review. Psychooncology 2014; 23: 481–492.
29. Klaver KM, Schagen SB, Kieffer JM, et al. Trajectories of cognitive symptoms in sick-listed cancer survivors. Cancers (Basel) 2021; 13: 2444.
30. Von Ah D, Habermann B, Carpenter JS, et al. Impact of perceived cognitive impairment in breast cancer survivors. Eur J Oncol Nurs 2013; 17: 236–241.
31. Hall PA, Marteau TM. Executive function in the context of chronic disease prevention: theory, research and practice. Prev Med 2014; 68: 44–50.
32. Hardy-Leger I, Charles C, Lange M, et al. Differentiation of groups of patients with cognitive complaints at breast cancer diagnosis: results from a sub-study of the French CANTO cohort. Psychooncology 2021; 30: 463–470.
33. Lycke M, Lefebvre T, Pottel L, et al. Subjective, but not objective, cognitive complaints impact long-term quality of life in cancer patients. J Psychosoc Oncol 2019; 37: 427–440.
34. Fieo R, Ocepek-Welikson K, Kleinman M, et al. Measurement equivalence of the Patient Reported Outcomes Measurement Information System (PROMIS®) applied cognition—general concerns, short forms in ethnically diverse groups. Psychol Test Assess Model 2016; 58: 255–307.
35. Teresi JA, Jones RN. Methodological issues in examining measurement equivalence in patient reported outcomes measures: methods overview to the two-part series, “measurement equivalence of the Patient Reported Outcomes Measurement Information System (PROMIS®) short forms”. Psychol Test Assess Model 2016; 58: 37–78.
36. Thong MSY, Chan RJ, van den Hurk C, et al. Going beyond (electronic) patient-reported outcomes: harnessing the benefits of smart technology and ecological momentary assessment in cancer survivorship research. Support Care Cancer 2021; 29: 7–10.
37. Henneghan AM, Becker H, Harrison ML, et al. A randomized control trial of meditation compared to music listening to improve cognitive function for breast cancer survivors: feasibility and acceptability. Complement Ther Clin Pract 2020; 41: 101228.
38. Bolger N, Laurenceau J-P. Intensive longitudinal methods: an introduction to diary and experience sampling research. New York: Guilford Publications, 2013.
39. Parrish EM, Kamarsu S, Harvey PD, et al. Remote ecological momentary testing of learning and memory in adults with serious mental illness. Schizophr Bull 2021; 47: 740–750.
40. Harvey PD, Miller ML, Moore RC, et al. Capturing clinical symptoms with ecological momentary assessment: convergence of momentary reports of psychotic and mood symptoms with diagnoses and standard clinical assessments. Innov Clin Neurosci 2021; 18: 24–30.
41. Bomyea JA, Parrish EM, Paolillo EW, et al. Relationships between daily mood states and real-time cognitive performance in individuals with bipolar disorder and healthy comparators: a remote ambulatory assessment study. J Clin Exp Neuropsychol 2021; 43: 813–824.
42. Moore RC, Ackerman RA, Russell MT, et al. Feasibility and validity of ecological momentary cognitive testing among older adults with mild cognitive impairment. Front Digit Health 2022; 4: 946685.
43. Moore RC, Parrish EM, Van Patten R, et al. Initial psychometric properties of 7 NeuroUX remote ecological momentary cognitive tests among people with bipolar disorder: validation study. J Med Internet Res 2022; 24: e36665.
44. Small BJ, Jim HSL, Eisel SL, et al. Cognitive performance of breast cancer survivors in daily life: role of fatigue and depressed mood. Psychooncology 2019; 28: 2174–2180.
45. Scott SB, Mogle JA, Sliwinski MJ, et al. Memory lapses in daily life among breast cancer survivors and women without cancer history. Psychooncology 2020; 29: 861–868.
46. Brandt J, Spencer M, Folstein M. The telephone interview for cognitive status. Cogn Behav Neurol 1988; 1: 111–117.
47. Castanho TC, Amorim L, Zihl J, et al. Telephone-based screening tools for mild cognitive impairment and dementia in aging studies: a review of validated instruments. Front Aging Neurosci 2014; 6: 16.
48. Janelsins MC, Heckler CE, Peppone LJ, et al. Cognitive complaints in survivors of breast cancer after chemotherapy compared with age-matched controls: an analysis from a nationwide, multicenter, prospective longitudinal study. J Clin Oncol 2017; 35: 506–514.
49. Pew Research Center. Mobile fact sheet, https://www.pewresearch.org/internet/fact-sheet/mobile/ (2021, 2022).
50. Wagner L, Sweet J, Butt Z, et al. Measuring patient self-reported cognitive function: development of the functional assessment of cancer therapy-cognitive function instrument. J Support Oncol 2009; 7: W32–W39.
51. Groppell S, Soto-Ruiz KM, Flores B, et al. A rapid, mobile neurocognitive screening test to aid in identifying cognitive impairment and dementia (BrainCheck): cohort study. JMIR Aging 2019; 2: e12615.
52. Yang S, Flores B, Magal R, et al. Diagnostic accuracy of tablet-based software for the detection of concussion. PLoS One 2017; 12: e0179352.
53. Ye S, Huang B, Sun K, et al. BrainCheck: validation of a computerized cognitive test battery for detection of mild cognitive impairment and dementia. medRxiv 2020.
54. Patient-Reported Outcomes Measurement Information System, http://www.healthmeasures.net/explore-measurement-systems/promis/obtain-administer-measures (2017).
55. Henneghan A, Stuifbergen A, Becker H, et al. Modifiable correlates of perceived cognitive function in breast cancer survivors up to 10 years after chemotherapy completion. J Cancer Surviv 2018; 12: 224–233.
56. Wright P, Smith AB, Keding A, et al. The social difficulties inventory (SDI): development of subscales and scoring guidance for staff. Psychooncology 2011; 20: 36–43.
57. Catt S, Starkings R, Shilling V, et al. Patient-reported outcome measures of the impact of cancer on patients’ everyday lives: a systematic review. J Cancer Surviv 2017; 11: 211–232.
58. Wright EP, Kiely M, Johnston C, et al. Development and evaluation of an instrument to assess social difficulties in routine oncology practice. Qual Life Res 2005; 14: 373–386.
59. Cai T, Huang Q, Wu F, et al. Psychometric evaluation of the PROMIS social function short forms in Chinese patients with breast cancer. Health Qual Life Outcomes 2021; 19: 149.
60. Cella DF, Tulsky DS, Gray G, et al. The functional assessment of cancer therapy scale: development and validation of the general measure. J Clin Oncol 1993; 11: 570–579.
61. Mokhatri-Hesari P, Montazeri A. Health-related quality of life in breast cancer patients: review of reviews from 2008 to 2018. Health Qual Life Outcomes 2020; 18: 338.
62. Luckett T, King MT, Butow PN, et al. Choosing between the EORTC QLQ-C30 and FACT-G for measuring health-related quality of life in cancer clinical research: issues, evidence and recommendations. Ann Oncol 2011; 22: 2179–2190.
63. Bell ML, Dhillon HM, Bray VJ, et al. Important differences and meaningful changes for the functional assessment of cancer therapy-cognitive function (FACT-Cog). J Patient Rep Outcomes 2018; 2: 48.
64. Campbell LM, Paolillo EW, Heaton A, et al. Daily activities related to mobile cognitive performance in middle-aged and older adults: an ecological momentary cognitive assessment study. JMIR Mhealth Uhealth 2020; 8: e19579.
65. Cranford JA, Shrout PE, Iida M, et al. A procedure for evaluating sensitivity to within-person change: can mood measures in diary studies detect change reliably? Pers Soc Psychol Bull 2006; 32: 917–929.
66. Arend MG, Schäfer T. Statistical power in two-level models: a tutorial based on Monte Carlo simulation. Psychol Methods 2019; 24: 1–19.
67. Bomyea JA, Parrish EM, Paolillo EW, et al. Relationships between daily mood states and real-time cognitive performance in individuals with bipolar disorder and healthy comparators: a remote ambulatory assessment study. J Clin Exp Neuropsychol 2021; 43: 813–824.
68. Von Ah D, Tallman EF. Perceived cognitive function in breast cancer survivors: evaluating relationships with objective cognitive performance and other symptoms using the functional assessment of cancer therapy-cognitive function instrument. J Pain Symptom Manage 2015; 49: 697–706.
69. Saita K, Amano S, Kaneko F, et al. A scoping review of cognitive assessment tools and domains for chemotherapy-induced cognitive impairments in cancer survivors. Front Hum Neurosci 2023; 17: 1063674.
70. Root JC, Gaynor AM, Ahsan A, et al. Remote, computerised cognitive assessment for breast cancer- and treatment-related cognitive dysfunction: psychometric characteristics of the Cogsuite neurocognitive battery. Arch Clin Neuropsychol 2023; 38: 699–713.
71. Patel SK, Meier AM, Fernandez N, et al. Convergent and criterion validity of the CogState computerized brief battery cognitive assessment in women with and without breast cancer. Clin Neuropsychol 2017; 31: 1375–1386.
72. Fox RS, Zhang M, Amagai S, et al. Uses of the NIH Toolbox® in clinical samples: a scoping review. Neurol Clin Pract 2022; 12: 307–319.
73. Franco-Rocha OY, Mahaffey ML, Matsui W, et al. Remote assessment of cognitive dysfunction in hematologic malignancies using web-based neuropsychological testing. Cancer Med 2023; 12: 6068–6076.
74. Gaynor AM, Ahsan A, Jung D, et al. Novel computerized neurocognitive test battery is sensitive to cancer-related cognitive deficits in survivors. J Cancer Surviv 2022. DOI: 10.1007/s11764-022-01232-w.
