Author manuscript; available in PMC: 2025 Apr 1.
Published in final edited form as: Am J Geriatr Psychiatry. 2023 Oct 28;32(4):446–459. doi: 10.1016/j.jagp.2023.10.014

Computerized Cognitive and Skills Training in Older People with Mild Cognitive Impairment: Using Ecological Momentary Assessment to Index Treatment Related Changes in Real-World Performance of Technology-Dependent Functional Tasks

Courtney Dowell-Esquivell 1, Sara J Czaja 2,3, Peter Kallestrup 3, Colin A Depp 4, John N Saber 5, Philip D Harvey 1,3
PMCID: PMC10950539  NIHMSID: NIHMS1940702  PMID: 37953132

Abstract

Objectives:

Cognitive and functional skills training improves skills and cognitive test performance, but the true test of efficacy is real-world transfer. We trained participants with mild cognitive impairment (MCI) or normal cognition (NC) for up to 12 weeks on six technology-related skills using remote computerized functional skills assessment and training (FUNSAT) software. Using ecological momentary assessment (EMA), we measured real-world performance of the technology-related skills over 6 months and related EMA-identified changes in performance to training gains.

Design:

Randomized clinical trial with post-training follow-up.

Setting:

14 community centers in New York City and Miami.

Participants:

Older adults with normal cognition (n=72) or well-defined MCI (n=92), ranging in age from 60 to 90, primarily female, and racially and ethnically diverse.

Intervention:

Computerized cognitive and skills training.

Measurements:

EMA surveys measuring trained and untrained functional skills 3 or more days per week for 6 months and training gains from baseline to end of training.

Results:

Training gains in completion times across all 6 tasks were significant (p<.001) in both samples, with effect sizes exceeding 1.0 SD for every task. EMA surveys detected increases in performance of both trained (p<.03) and untrained (p<.001) technology-related skills in both samples. Training gains in completion times predicted increases in performance of both trained and untrained technology-related skills (all p<.001).

Conclusions:

Computerized training produces increases in real-world performance of important technology-related skills. These gains continued after the end of training and were greater in MCI participants.

Keywords: Computerized Cognitive Training, Functional Skills Training, Ecological Momentary Assessment, Mild Cognitive Impairment

Introduction

The older population will grow significantly in the future. By 2040 [1], there will be 80 million people older than age 65 in the US, double the number in 2000. Given these projections, the prevalence of dementia and of precursor conditions such as mild cognitive impairment (MCI) is a global concern. MCI is defined by cognitive functioning between normal and dementia, reflecting a decline from previous functioning. Amnestic MCI (a-MCI) [2] has been found to predict increased risk for developing Alzheimer's Disease (AD) [3].

In this increasingly technology-driven world, older adults are disadvantaged by deficits in technological skills [4]. Many are challenged in navigating the internet to perform financial tasks such as online shopping and banking, or health management tasks such as managing insurance (e.g., Medicare) or appointments and refilling prescriptions [5-9]. The continuous updating of technology and decreasing access to alternative strategies (e.g., closing of bank branches, elimination of paper prescriptions) compound these challenges.

Even with cognitive decline and limited prior experience, evidence suggests that older individuals retain cognitive plasticity and benefit from computerized training. Meta-analyses have shown benefits of computerized cognitive training (CCT) in healthy older individuals and those with MCI [10,11]. Our studies have shown that older adults can improve significantly in simulations of everyday tasks, such as ATM use and internet banking, and develop mastery of trained tasks, with near transfer to improvements in cognitive performance [12,13].

A critical question regarding computerized training is the extent to which training generalizes to real-world activities [14]. CCT has been shown to improve previously acquired skills, such as driving [15], but not acquisition of novel skills [16], which may require targeted training. There are several levels of improvement associated with computerized skills training: 1) gains in the trained tasks [12], 2) near transfer to related tasks such as neurocognitive performance [13] and far transfer to untrained functional capacity tasks, and 3) real-world transfer, performing newly trained skills in everyday settings. The Functional Skills Assessment and Training Program (FUNSAT) has demonstrated the first two levels of gains in previous studies [12,13]. The current study evaluated transfer of training to the real world.

Specifically, an updated version of the FUNSAT program was developed and tested in a randomized clinical trial (RCT) that remotely delivered cognitive and functional skills training, targeting 6 technology-based activities of daily living, in older adults with NC and MCI. This trial had three pre-planned outcomes (NCT04677944), presented separately: improvements in performance on the training simulations in errors and time to completion (Czaja et al. [17]) was the designated primary paper; near transfer to cognitive performance and far transfer to untrained functional capacity measures (Chirino et al., in preparation) is the second; and this paper, presenting real-world transfer of the technology-related skills assessed with ecological momentary assessment (EMA) [18], is the third.

Modern EMA is conducted with real-time surveys on digital devices and can be augmented by passive measurement [19]. EMA offers benefits compared to other assessments, including capture of variability in experiences and activities associated with location, social context, stress, and mood. EMA strategies also improve reliability by densely sampling behaviors that pose validity challenges for dispersed assessments, such as retrospective recall [20] or informant reports [21], while increasing statistical power [22]. EMA can capture wide-ranging data across populations. Although EMA requires adherence, adherence rates for EMA data can reach 80% in both healthy and neuropsychiatric populations [23]. Several design features (e.g., momentary compensation and predictable sampling) increase adherence [24], although there are daily upper limits [25].

We present EMA data regarding changes in real-world performance of trained and untrained technology-related functional skills. We also examined the extent to which training gains on the primary outcome for the simulations (time to completion) predicted changes in real-world performance of technology-related skills.

We hypothesized that:

  1. Real world performance of trained technology-related skills would increase in frequency across both MCI and NC samples.

  2. Increases in performance of untrained technology-related skills such as using the internet would also be detected.

  3. Participants with greater training gains would manifest greater real-world transfer on both trained and untrained skills.

Methods

Overall Study Design

This was a randomized trial conducted at community centers in South Florida (n=4) and New York City (n=10). Following screening, orientation to the study, and an in-person baseline assessment with one of three fixed-difficulty assessments of six functional tasks, participants self-administered up to 12 weeks of computerized training at home, with in-person follow-up assessments. The WCG IRB approved the study and all participants provided signed informed consent.

Participants

The sample included English- or Spanish-speaking adults over the age of 60 who lived in the community, had at least 20/60 vision, were able to read a computer screen, and had adequate manual dexterity to use a touch screen. Males and females were recruited, without restrictions on racial or ethnic status. MCI status was ascertained with a neuropsychological assessment using the Jak-Bondi criteria [26]. Participants were designated as having normal cognition or one of three MCI subtypes: Amnestic (impairments on two or more memory tests but no other domains); Non-Amnestic (impairments on two non-memory cognitive domains, but no more than one memory domain); and Multi-Domain (two or more impairments on both memory and other domains). Normative standards were used to evaluate performance, with impairment defined as performance 1.0 SD below normative standards.

Exclusion criteria included a MOCA score below 18 and a reading score, in the participant's commonly spoken language, below a 6th-grade level. Other exclusions included the inability to undergo assessments in either English or Spanish, receipt of a similar intervention in the past 12 months, a diagnosis of a serious psychiatric condition apart from major depression, and a previous medical history of brain disease such as CVA, seizures, tumor, or significant traumatic brain injury with extended loss of consciousness.

Cognitive Assessments

Cognitive assessments were used to collect data for the performance-based MCI criteria. All assessments were performed in the participants’ preferred language (English or Spanish).

Montreal Cognitive Assessment (MOCA) [27].

This test examines cognitive performance with scores ranging from 0-30. Assessments were performed by certified bilingual raters.

Reading Performance.

The literacy level of English speakers was examined with the Wide Range Achievement Test (WRAT), 3rd edition [28]. Spanish speakers were assessed with the Woodcock-Munoz Language Survey, 3rd edition [29].

Wechsler Memory Scale-Revised, Logical Memory I and II (Anna Thompson Story).

Participants were read the story and asked for immediate recall, followed by 20-minute delayed recall, with the delay interval filled with non-verbal assessments.

Brief Assessment of Cognition (BAC): App version.

The BAC measures domains of cognition known to be related to everyday functioning [30]. The BAC App [31] delivers the same assessments via cloud-connected tablet for ease of administration and standardization.

The cognitive domains assessed include:

Verbal Memory:

5-trial word list learning test.

Digit Sequencing:

Verbal working memory task.

Token Motor Task:

Measures motor speed and manual dexterity.

Verbal Fluency:

Measures category fluency (animals) and letter fluency (F and S).

Symbol Coding:

Processing speed task, measuring coding performance.

Tower of London:

Executive functioning task, measures problem solving.

General Procedures

All participants completed the baseline fixed-difficulty (Form A) assessments in person. MCI participants were randomized into FUNSAT only or FUNSAT + CCT, stratified by site, with NC participants receiving FUNSAT only in order to develop a normative understanding of training gains. FUNSAT training targeted all 6 functional tasks, 2 hours per week, for up to 12 weeks or until mastery. In each session, participants trained for up to 60 minutes and were instructed to practice as many tasks as possible, completing each task twice consecutively. As participants progressed in training, they retrained only on tasks not previously mastered. Those in the combined FUNSAT + CCT training group received a 3-week, twice-weekly, one-hour burst of CCT, after which they trained for up to 9 weeks on the FUNSAT. At the end of the 12-week training period, or after mastery of all 6 training tasks, participants were reassessed with Form B of the fixed-difficulty assessments on the six tasks, with later follow-ups occurring out to approximately 6 months. See Figure 1 for a full depiction of the study flow.

FIGURE 1.

FIGURE 1.

FLOW CHART FOR STUDY ACTIVITIES AND ASSESSMENTS

Participants were paid $2.00 for each EMA survey answered, with a momentary signal providing a running total of their expected compensation. They were also compensated $30.00 for each in-person assessment. Participants received a $15.00 bonus payment for each task upon mastery.

Training Procedures

FUNSAT Program

The skills trained in the third generation of the FUNSAT program are the same as in previous versions of the software: operating an ATM and a ticket kiosk, internet banking, utilizing a pharmacy website for online shopping and prescription refills, navigating a telephone voice menu for prescription refills, and managing medication (comprehending medication labels and organizing medication) (Figure 2). The six tasks were presented in a multi-media format that included upgraded graphic representations, text, and voice, and had 3-6 subtasks with sequential demands. For example, for the telephone refill task, participants called the pharmacy (using a simulated mobile phone keypad), refilled different prescriptions (pill bottles appeared on the screen), chose a delivery preference, and requested a pick-up time and date. Data on completion time and errors were collected in real time. Consistent with other technology-based functional capacity assessments [32], completion time included only the time that the participant was actively engaging in the tasks. The FUNSAT software automatically progresses to the next question if more than four errors are made on any item in a subtask (e.g., choosing the wrong account in the ATM task). Error feedback is delivered by repetition of the original instructions in a pop-up window.

FIGURE 2.

FIGURE 2.

IMAGES OF SIX DIFFERENT FUNCTIONAL TASKS TRAINING FINANCES, TRANSPORTATION, MEDICATION MANAGEMENT, AND ON-LINE SHOPPING.

FUNSAT training was delivered on a touch-screen device (Chromebook) in a cloud-based format. All participants had the option of accessing the internet via a provided hotspot or through their own Wi-Fi connection. FUNSAT training uses an adaptive protocol with immediate feedback and graduated instruction, with increases in corrective information provided following errors. If the participant entered the wrong PIN in the ATM task on their first attempt, they received feedback repeating the original instruction: “Try Again! Your ATM PIN is 1234.” If they repeated the error a second time, the feedback was “Try Again! Remember, your PIN is 1234. Please enter 1234.” Feedback for a third error was “Try Again! Press 1, then press 2, then press 3, and then press 4. Then press ENTER.” If they made a fourth error, the four keys lit up in sequence, with the participant instructed to touch them. Successful mastery of a subtask (e.g., entering a PIN) was defined as performing that subtask either once with no errors or twice consecutively with a maximum of a single error; each of the 6 tasks was considered mastered when all of its subtasks were mastered. When a participant returned to train after not mastering a subtask, only the non-mastered items were re-trained. After all 6 tasks were mastered, training was complete and follow-up assessments were initiated.
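The graduated feedback protocol above can be sketched in code. The following is an illustrative mock-up only: the message strings come from the text, but the function and its structure are hypothetical and do not represent the actual FUNSAT implementation.

```python
# Illustrative sketch of graduated error feedback (not FUNSAT's actual code).
CORRECT_PIN = "1234"

FEEDBACK = [
    "Try Again! Your ATM PIN is 1234.",
    "Try Again! Remember, your PIN is 1234. Please enter 1234.",
    "Try Again! Press 1, then press 2, then press 3, and then press 4. "
    "Then press ENTER.",
    # Fourth error: in FUNSAT the keys light up in sequence; represented
    # here as a text placeholder for the most explicit instruction level.
    "Keys 1, 2, 3, and 4 light up in sequence; touch each highlighted key.",
]

def respond(entered_pin: str, error_count: int) -> str:
    """Return feedback for one attempt; instruction escalates with each error."""
    if entered_pin == CORRECT_PIN:
        return "Correct!"
    # Cap at the most explicit level after the fourth error.
    return FEEDBACK[min(error_count, len(FEEDBACK) - 1)]

print(respond("9999", 0))  # first error repeats the original instruction
```

The design point this illustrates is that each repeated error yields strictly more corrective detail, from restating the instruction to physically cueing the response.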

Computerized Cognitive Training

As described above, the MCI participants were randomized to FUNSAT training alone or combined FUNSAT + CCT. The CCT procedure was Brain HQ “Double Decision”. This task was chosen because of the significant benefits of processing speed training reported in the ACTIVE and related trials [33,34] and in our previous study. The program consisted of two concurrent tasks: identifying one of two centrally presented items (car vs. truck) and locating a simultaneous peripheral stimulus that differed from 7 others in a semi-circular array. In line with previous studies with Brain HQ [35], participants were also allowed to spend up to 20% of their sessions training on an additional task, “Hawkeye”, to provide variety.

EMA Surveys

Surveys were delivered daily to the participants from an application on the training device at 5 PM, with instructions to answer at least 3 times per week. Surveys began on the first day of at-home training and continued until participants fully completed the protocol (up to 6 months), with participants who completed training earlier answering fewer surveys. Participants were compensated for answering surveys. The surveys included check-box questions across 4 screens regarding participants' engagement in any of the trained skills since the previous survey; participants were also asked whether they had performed any technology-related skills not trained by FUNSAT, such as using the internet to look up information or taking digital photos. To examine shifts from in-person to technology-related activities, we also asked whether, when refilling a prescription, they used the phone or internet (trained skills) or went in person. Surveys remained open for one hour. Participants who did not answer a survey during a one-week period were contacted by study coordinators. Each survey was timestamped to facilitate organization of data in relation to the first training session. Supplementary Table 1 includes the items in the EMA survey.

Data Analyses

EMA survey responses were aggregated within surveys into total trained and untrained skills performed. Gains in completion time from the baseline assessment to the final training session on all 6 tasks were calculated. The primary results of the training study are submitted for publication, but we performed a non-overlapping examination of the changes from baseline across cognitive status and training condition. All training gain change scores from baseline were tested for significance with paired t-tests. Each individual's change score for each task was standardized against the standard deviation of the baseline scores in the entire sample (change score/overall SD). The six standard scores for the individual tasks were averaged to create an aggregate standard (z) change score for each participant.
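The composite described above (each task's change score divided by the baseline SD of the whole sample, then averaged across the six tasks) can be illustrated with synthetic numbers. All values below are invented for demonstration and are not study data.

```python
import numpy as np

# Synthetic illustration of the composite training-gain z score.
# baseline and final: (participants x 6 tasks) completion times in seconds.
rng = np.random.default_rng(1)
baseline = rng.normal(1200, 400, size=(100, 6))
final = baseline - rng.normal(500, 150, size=(100, 6))  # faster after training

change = final - baseline                   # negative values = improvement
task_sd = baseline.std(axis=0, ddof=1)      # SD of baseline scores, per task
z_change = change / task_sd                 # standardize each task's change
composite = z_change.mean(axis=1)           # aggregate z change per participant

print(composite.shape)  # one composite score per participant
```

Because improvements are faster completion times, the composite is negative for participants who improved, matching the sign of the composite gain reported in Table 2.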

We compared differences in training gains by condition in MCI participants with t-tests and also examined training gains across cognitive status, in order to determine whether these two factors should be used as covariates.

Analyses of the time course of the EMA responses were performed with Mixed Model Repeated Measures Analysis of variance (MMRM) using the SPSS v28 Generalized Linear Models (GLM) module. In these analyses, we created a random subject intercept and interpreted models whose fit improved on the null (intercept-only) model. We used month (1-6) as the repeated-measures factor, defining day 1 as the first day of training. We used full information maximum likelihood procedures, because participants answered different numbers of surveys within each one-month period. The test statistic was the Wald Chi-square, with an unstructured covariance matrix.
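The study ran these analyses in SPSS v28. As a rough analogue only, a linear mixed model with a random subject intercept and month as the time factor, fit by maximum likelihood so that participants with unequal survey counts per month are all retained, can be sketched in Python with statsmodels on synthetic data (all names and values here are illustrative, and this is not the SPSS procedure used in the study):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Build synthetic EMA-like data: 40 subjects, months 1-6, with a random
# subject intercept and an unequal number of surveys per subject per month.
rng = np.random.default_rng(0)
rows = []
for subj in range(40):
    intercept = rng.normal(2.0, 0.5)            # random subject intercept
    for month in range(1, 7):
        for _ in range(rng.integers(2, 6)):     # 2-5 surveys that month
            skills = intercept + 0.15 * month + rng.normal(0, 0.4)
            rows.append({"subject": subj, "month": month, "skills": skills})
df = pd.DataFrame(rows)

# Random-intercept mixed model; ML (reml=False) keeps all observations
# even when subjects contribute different numbers of surveys.
model = smf.mixedlm("skills ~ month", df, groups=df["subject"])
result = model.fit(reml=False)
print(result.params["month"])  # estimated monthly change in reported skills
```

A positive month coefficient here plays the role of the significant month effects in Table 3; the true simulated slope is 0.15 per month.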

Analyses were performed separately for trained and untrained tasks. Any significant models were repeated, first incorporating participants’ composite training gains as a between subjects factor and then adding baseline cognitive status as another between-subjects factor. After this model, we added total training sessions as a covariate. When we reached the final model, we repeated the analyses with race, ethnicity, and location as additional covariates.

Results

Demographic information is presented in Table 1 and the CONSORT diagram is presented in Figure 3. A total of 4074 surveys were answered over the 6-month period. Of the 164 participants scheduled to complete surveys, 155 answered one or more (95%). Participants with MCI provided 2038 survey responses and those with NC provided 2036, indicating a lower response rate among MCI participants (22 per participant) than NC participants (28 per participant). Month-by-month EMA survey responses are presented in Supplemental Figure 1.

Table 1.

Demographic and Descriptive Information on Participants

Mild Cognitive Impairment Normal Cognition
N=92 N=72
M SD M SD t p
Age 71.68 6.40 71.17 6.36 −.50 .65
MOCA score 22.45 3.21 27.15 1.37 11.50 <.001
Years of Education 13.29 3.94 15.56 2.52 4.18 <.001
     Mild Cognitive Impairment Normal Cognition
N=92 N=72
Site N(%) N(%) X2 p
  Miami 38 (41%) 32 (43%) 1.21 .55
  NY 54 (59%) 40 (57%)
Sex
  Male 18 (20%) 7 (8%) 6.00 .05
  Female 74 (80%) 65 (92%)
Race
  White 36 (36%) 34 (47%) 7.95 .44
  Black 28 (30%) 14 (18%)
  Other/More than 1/None 28 (30%) 24 (35%)
Ethnicity
  Latinx 52 (57%) 34 (46%) 8.20 .04
  Non-Latinx 40 (43%) 38 (54%)
Training Language
  English 50 (55%) 48 (67%) 2.64 .27
  Spanish 42 (46%) 24 (33%)
MCI Classification
Amnestic 14 (15%)
Multi-Domain 38 (41%)
Non-Amnestic 40 (43%)

FIGURE 3.

FIGURE 3.

CONSORT DIAGRAM FOR PATIENT FLOW IN THE STUDY

MCI participants had significantly less education and lower MOCA scores than NC participants but did not differ in age. There were no site, race, or training language differences across MCI status. There were slightly more Latinx participants and slightly more male participants in the MCI group than in the NC group. In terms of mastery, 128 (79%) of the cases mastered all six tasks in less than 12 weeks, averaging 6.52 (SD=5.94) training sessions per task, compared with 7.70 (SD=6.21) sessions for those who did not master all six tasks.

Table 2 presents the training gains in time to completion for the total sample, including baseline time and changes in time to completion from baseline to the final training session. As shown in the table, performance on all 6 tasks improved significantly with training, with the smallest effect size compared to no gains (Cohen's d) exceeding 1.0. We also examined differences in the 6 change scores across cognitive status, finding that the MCI participants had significantly larger training gains in seconds from baseline to the final training session (all t>3.18, all p<.001; df=154), although their performance at baseline was worse across all six tasks (all t>3.58, all p<.001; df=154). There were no training condition differences in changes in completion time from baseline to the final training session in the MCI participants (all t<1.45, all p>.15; df=91). Cognitive status, but not training condition, was therefore entered as a between-subjects factor in the subsequent MMRM analyses.

Table 2.

Baseline Completion Times and Training Gains Across Simulations: Presented in Seconds.

Task Baseline Improvement From Baseline to Final Training Session t-Test for Changes Compared to 0
M SD M SD t p d
Ticket Kiosk 1068.07 470.23 470.52 410.94 14.57 <.001 1.15
ATM banking 1541.46 863.82 700.59 626.91 14.09 <.001 1.14
Medication Management 1152.46 704.04 542.64 437.54 14.96 <.001 1.11
Telephone Voice Menu 825.29 380.35 342.37 325.77 13.34 <.001 1.06
Pharmacy Website 1592.31 868.22 917.11 728.98 15.96 <.001 1.26
On-Line Banking 1357.14 739.32 680.44 604.42 13.93 <.001 1.13
Composite 0.00 1.00 −0.94 1.05 14.21 <.001 1.14

Table 3 presents the results of the MMRM analyses of time effects for the entire sample on trained and untrained tasks, and Figure 4 presents the growth curves for the scores. As can be seen in the table, there were significant increases in performance of trained and untrained skills in the whole sample. When training gains were entered as a between-subjects covariate for trained tasks, the overall results remained significant and there was an association between training gains and changes in functional performance, although the effect of month was no longer significant. Cognitive status was associated with changes in the frequency of performance of both trained and untrained tasks. As with training gains, real-world functional performance increased more over time for participants with MCI.

Table 3.

Mixed Model Repeated Measures Analyses of the Effects of Time Post Training Start on Trained and UnTrained Technology Related Everyday Activities Measured with Ecological Momentary Assessment

Total Sample
Trained Activities Untrained Activities
X2 df p X2 df p
Omnibus 12.36 5 .03 22.70 5 <.001
Intercept 3969.23 1 <.001 1195.27 1 <.001
Month 12.38 5 .03 22.86 5 <.001
Total Sample, Training Gains as a Between-Subjects Fixed Covariate
Trained Activities Untrained Activities
X2 df p    X2 df p
Omnibus 77.39 6 <.001    22.79 1 <.001
Intercept 140.84 1 <.001    395.08 1 <.001
Month 8.77 5 0.11    9.60 5 .08
Composite Training Gains 25.15 1 <.001    242.74 1 <.001
Full Training Gains Model with Cognitive Status Added as a Between-Subjects Fixed Covariate
X2 df p X2 df p
Cognitive Status 38.11 1 <.001 108.21 1 <.001

FIGURE 4.

FIGURE 4.

GROWTH CURVES FOR CHANGES IN TRAINED AND UNTRAINED TECHNOLOGY-RELATED FUNCTIONAL SKILLS OVER 6 MONTHS FOR BOTH PARTICIPANT SAMPLES

We then added several potential covariates to the full model, with the results presented in Table 4. We found that fewer total training sessions and mastery of all six tasks added to the prediction of changes in both trained and untrained activities, and that race and ethnicity were unassociated with changes in performance. Location was associated only with untrained activities, and at a much lower level of significance.

Table 4.

Statistical Tests for Training Session, Mastery of All Tasks, Race, Ethnicity, and Location Added to the Previous Full Models Including Cognitive Status

Trained Activities Untrained Activities
X2 df p X2 df p
Training Sessions 87.98 1 <.001 92.22 1 <.001
Mastery of All Tasks 67.64 1 <.001 34.47 1 <.001
Race 0.43 1 0.51 0.42 1 0.51
Ethnicity 2.94 1 0.086 2.15 1 0.14
Location 0.36 1 0.45 5.84 1 0.016

In a final descriptive analysis, we examined changes in the proportion of surveys reporting refilling a prescription at a pharmacy. The overall model was significant, X2(5)=12.19, p=.032, with the effect of month also significant, X2(5)=12.21, p=.032. The proportion of surveys answered affirmatively dropped from a baseline of .15 to a low of .09 at month 6. When training gains were added as a covariate, the effect of training gains was also significant, X2(1)=13.29, p<.001, predicting a decrease in in-person refills.

Discussion

Consistent with studies of in-person training [12,13], we found substantial (d>1.0) improvements in time to completion of training tasks, with both NC and MCI participants receiving benefit. In terms of our first two hypotheses, we detected statistically significant increases in performance of both trained and untrained technology-related functional skills. In terms of our third hypothesis, training gains predicted increases in performance of technology-related functional skills, suggesting that real-world transfer is not explainable by increased exposure to technology alone.

There are several important features of these findings. This is the first study to our knowledge reporting transfer of computerized training to real-world task performance in participants with MCI. Remote training was associated with gains that were consistent with gains from in person training in previous studies. Adherence to EMA assessments was excellent, with 95% of participants answering surveys and modest differences in the average number of surveys answered across cognitive status. These findings suggest that fully remote training is feasible, efficacious, and associated with changes in real-world functioning across cognitive status. Our MCI participants had more training gains and increases in technology related activities (albeit starting out lower), suggesting that computerized training can lead to real-world transfer in cognitively impaired participants. Further, over a third of our participants were assessed and trained in Spanish, suggesting that training gains have cross-cultural relevance.

Both cognitive status subsamples contained substantial proportions of racial and ethnic minority participants, and their levels of education were comparatively low for a long and complex training protocol. By contrast, in the ACTIVE trial, participants receiving processing speed training were 75% white, had 13.7 years of education, and had a mean Mini-Mental State Examination score of 27 [36]. The ACTIVE intervention was considerably shorter than ours (a maximum of 14 hours), but recipients of speed training in ACTIVE manifested persistence of cognitive gains up to 10 years after training [37], with reduced risk for development of dementia [38]. Importantly, sustained gains in performance of previously acquired complex skills, such as driving [16], were greater in the training groups than in controls. In our study, participants with MCI increased the frequency of performance of tasks that they rarely performed in month 1, suggesting acquisition of novel skills.

There are some limitations to this study. Our EMA sampling density was lower than in other studies. We excluded participants who may have had dementia, so findings may not generalize to more impaired populations. We could not stratify our MCI participants on subtype, and the samples are too small for meaningful comparisons across subtypes and training conditions. That said, to select representative samples, we took all comers at each of the research sites. Fewer surveys were completed in months 5-6 than earlier, but this is largely because the fastest learners mastered all the tasks more rapidly.

A final critical finding was the shift toward using technology for medication refills rather than in-person visits. This shift occurred concurrently with the elimination of pandemic restrictions at the study locations, arguing against the idea that outside restrictions were the operative factor in the choice of refill strategy. Even in the context of reduced restrictions, our participants shifted toward technology. If another pandemic occurs, the ability to rapidly learn, deploy, and utilize technology-related skills could be a protective factor, not only against infection but also for sustaining independence and managing money and deliveries. More broadly, technological competence supports independence, and showing that real-world gains are possible with remote training in participants with MCI has the potential to reduce personal and caregiver burden in a variety of domains.

Supplementary Material

1
2

Highlights.

  • What is the primary question addressed by this study?

    The study addressed the question of whether a fully remote cognitive and functional skills training program improved real-world performance of technology-based tasks, measured with daily ecological momentary assessment (EMA), in older adults with normal cognition and mild cognitive impairment.

  • What is the main finding of this study?

    EMA assessments demonstrated statistically significant increases in the performance of both trained and untrained technology-related activities. Moreover, the degree of training gains was a significant predictor of real-world performance in domains of technology utilization.

  • What is the meaning of the finding?

    These findings provide compelling evidence that a fully remote functional skills training program is both practical and efficacious, as well as providing benefits for real-world functioning. This training program has the potential to empower individuals, regardless of their cognitive status or geographical location, by enabling them to maintain or optimize their autonomy as they age.

Conflicts of Interest and Source of Funding

This research was supported by NIA grants 1 R21 AG041740-01 (Czaja and Harvey), and 1 R43 AG057238-04 (Kallestrup), as well as by a grant from the Wallace Coulter Innovation Foundation.

The intellectual property in the FUNSAT training system is licensed by the University of Miami Miller School of Medicine to i-Function, Inc.

Courtney Dowell-Esquivell is a medical student at the University of Miami Miller School of Medicine and reports no biomedical conflicts of interest.

Dr. Czaja is co-Chief Scientific Director of i-Function and owns equity in i-Function.

Mr. Kallestrup is CEO of i-Function, Inc., owns equity in i-Function, and was PI on the NIA grant supporting this study.

Dr. Depp was compensated as a consultant to i-Function.

Mr. Saber is a full-time employee of EMA Wellness, which provided the EMA software.

Dr. Harvey is co-Chief Scientific Director of i-Function and owns equity in i-Function. He has also served as a consultant to Alkermes, BioXcel, Boehringer-Ingelheim, Merck Pharma, Minerva Neurosciences, Karuna Therapeutics, Novartis Pharma, and Sunovion Pharma.

He receives royalty payments for the Brief Assessment of Cognition from WCG Endpoint Solutions (formerly NeuroCog Trials, formerly VeraSci).

Footnotes

Publisher's Disclaimer: This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.

Data Statement: Previous Presentations.

These data have been presented with smaller datasets at:

Clinical Trials on Alzheimer's Disease (CTAD; poster, 2022)

American Association for Geriatric Psychiatry (oral presentation, 2023)

Society of Biological Psychiatry (SOBP; poster, 2023)

Columbia Conference on Cognitive Remediation (Oral presentation, 2023)

References

  • 1. https://acl.gov/sites/default/files/aging%20and%20Disability%20In%20America/2020Profileolderamericans.final_.pdf; Accessed July 12, 2023.
  • 2. Petersen RC, Smith GE, Waring SC, Ivnik RJ, Tangalos EG, Kokmen E. Mild cognitive impairment: clinical characterization and outcome. Arch Neurol. 1999;56(3):303–308. doi:10.1001/archneur.56.3.303
  • 3. Albert MS, DeKosky ST, Dickson D, et al. The diagnosis of mild cognitive impairment due to Alzheimer's disease: recommendations from the National Institute on Aging-Alzheimer's Association workgroups on diagnostic guidelines for Alzheimer's disease. Alzheimers Dement. 2011;7(3):270–279. doi:10.1016/j.jalz.2011.03.008
  • 4. Harvey PD, Nascimento V. Helping older individuals overcome the challenges of technology. Curr Psychiatry. 2020;19:13–23.
  • 5. Czaja SJ, Loewenstein DA, Sabbag SA, et al. A novel method for direct assessment of everyday competence among older adults. J Alzheimers Dis. 2017;57(4):1229–1238. doi:10.3233/JAD-161183
  • 6. Taha J, Czaja SJ, Sharit J, Morrow DG. Factors affecting usage of a personal health record (PHR) to manage health. Psychol Aging. 2018;28:1124–1139. doi:10.1037/a0033911
  • 7. Czaja SJ, Sharit J, Lee CC, et al. Factors influencing use of an e-health website in a community sample of older adults. J Am Med Inform Assoc. 2012. doi:10.1136/amiajnl-2012-00087
  • 8. Pak R, Czaja SJ, Sharit J, et al. The role of spatial abilities and age in performance in an auditory computer navigation task. Comput Human Behav. 2008;24:3045–3051. doi:10.1016/j.chb.2008.05.010
  • 9. Czaja SJ, Sharit J, Nair SN. Usability of the Medicare health website. JAMA. 2008;300(7):790–792. doi:10.1001/jama.300.7.790-b
  • 10. Zhang H, Huntley J, Bhome R, et al. Effect of computerised cognitive training on cognitive outcomes in mild cognitive impairment: a systematic review and meta-analysis. BMJ Open. 2019;9(8):e027062. doi:10.1136/bmjopen-2018-027062
  • 11. Sherman DS, Mauser J, Nuno M, Sherzai D. The efficacy of cognitive intervention in mild cognitive impairment (MCI): a meta-analysis of outcomes on neuropsychological measures. Neuropsychol Rev. 2017;27(4):440–484. doi:10.1007/s11065-017-9363-3
  • 12. Czaja SJ, Kallestrup P, Harvey PD. Evaluation of a novel technology-based program designed to assess and train everyday skills in older adults. Innov Aging. 2020;4(6):igaa052. doi:10.1093/geroni/igaa052
  • 13. Harvey PD, Zayas-Bazan M, Tibiriçá L, Kallestrup P, Czaja SJ. Improvements in cognitive performance with computerized training in older people with and without cognitive impairment: synergistic effects of skills-focused and cognitive-focused strategies. Am J Geriatr Psychiatry. 2022;30(6):717–726. doi:10.1016/j.jagp.2021.11.008
  • 14. Harvey PD, McGurk SR, Mahncke H, Wykes T. Controversies in computerized cognitive training. Biol Psychiatry Cogn Neurosci Neuroimaging. 2018;3(11):907–915. doi:10.1016/j.bpsc.2018.06.008
  • 15. Edwards JD, Delahunt PB, Mahncke HW. Cognitive speed of processing training delays driving cessation. J Gerontol A Biol Sci Med Sci. 2009;64:1262–1267.
  • 16. Czaja SJ, Kallestrup P, Harvey PD. The efficacy of a home-based functional skills training program for older adults with and without a cognitive impairment. Submitted for publication.
  • 17. Shiffman S, Stone AA, Hufford MR. Ecological momentary assessment. Annu Rev Clin Psychol. 2008;4:1–32. doi:10.1146/annurev.clinpsy.3.022806.091415
  • 18. Harvey PD, Depp CA, Rizzo AA, et al. Technology and mental health: state of the art for assessment and treatment. Am J Psychiatry. 2022;179(12):897–914. doi:10.1176/appi.ajp.21121254
  • 19. Shiffman S. How many cigarettes did you smoke? Assessing cigarette consumption by global report, time-line follow-back, and ecological momentary assessment. Health Psychol. 2009;28(5):519–526. doi:10.1037/a0015197
  • 20. Sabbag S, Twamley EM, Vella L, Heaton RK, Patterson TL, Harvey PD. Assessing everyday functioning in schizophrenia: not all informants seem equally informative. Schizophr Res. 2011;131(1-3):250–255. doi:10.1016/j.schres.2011.05.003
  • 21. Schuster R, Schreyer ML, Kaiser T, et al. Effects of intense assessment on statistical power in randomized controlled trials: simulation study on depression. Internet Interv. 2020;20:100313. doi:10.1016/j.invent.2020.100313
  • 22. Jones SE, Moore RC, Pinkham AE, et al. A cross-diagnostic study of adherence to ecological momentary assessment: comparisons across study length and daily survey frequency find that early adherence is a potent predictor of study-long adherence. Pers Med Psychiatry. 2021;29-30:100085. doi:10.1016/j.pmip.2021.100085
  • 23. Jones A, Remmerswaal D, Verveer I, et al. Compliance with ecological momentary assessment protocols in substance users: a meta-analysis. Addiction. 2019;114(4):609–619. doi:10.1111/add.14503
  • 24. Raugh IM, James SH, Gonzalez CM, et al. Digital phenotyping adherence, feasibility, and tolerability in outpatients with schizophrenia. J Psychiatr Res. 2021;138:436–443. doi:10.1016/j.jpsychires.2021.04.022
  • 25. Jak A, Bondi M, Delano-Wood L, et al. Quantification of five neuropsychological approaches to defining mild cognitive impairment. Am J Geriatr Psychiatry. 2009;17:368–375. doi:10.1097/JGP.0b013e31819431d5
  • 26. Nasreddine ZS, Phillips NA, Bédirian V, et al. The Montreal Cognitive Assessment, MoCA: a brief screening tool for mild cognitive impairment. J Am Geriatr Soc. 2005;53(4):695–699. doi:10.1111/j.1532-5415.2005.53221.x
  • 27. Jastak S. Wide-Range Achievement Test. 3rd ed. San Antonio, TX: Wide Range, Inc; 1993.
  • 28. Woodcock RW, Alvarado CG, Ruef M, et al. Woodcock-Muñoz Language Survey. 3rd ed. Rolling Meadows, IL: Riverside; 2017.
  • 29. Keefe RS, Goldberg TE, Harvey PD, et al. The Brief Assessment of Cognition in Schizophrenia: reliability, sensitivity, and comparison with a standard neurocognitive battery. Schizophr Res. 2004;68(2-3):283–297. doi:10.1016/j.schres.2003.09.011
  • 30. Atkins AS, Tseng T, Vaughan A, et al. Validation of the tablet-administered Brief Assessment of Cognition (BAC App). Schizophr Res. 2017;181:100–106. doi:10.1016/j.schres.2016.10.010
  • 31. Keefe RSE, Davis VG, Atkins AS, et al. Validation of a computerized test of functional capacity. Schizophr Res. 2016;175(1-3):90–96. doi:10.1016/j.schres.2016.03.038
  • 32. Edwards J, Wadley V, Myers RE, et al. Transfer of a speed of processing intervention to near and far cognitive functions. Gerontology. 2002;48(5):329–340. doi:10.1159/000065259
  • 33. Wolinsky FD, Vander Weg MW, Howren MB, et al. The effect of cognitive speed of processing training on the development of additional IADL difficulties and the reduction of depressive symptoms: results from the IHAMS randomized controlled trial. J Aging Health. 2015;27:334–354. doi:10.1177/0898264314550715
  • 34. Mahncke HW, DeGutis J, Levin H, et al. A randomized clinical trial of plasticity-based cognitive training in mild traumatic brain injury. Brain. 2021;144(7):1994–2008. doi:10.1093/brain/awab202
  • 35. Willis SL, Tennstedt SL, Marsiske M, et al. Long-term effects of cognitive training on everyday functional outcomes in older adults. JAMA. 2006;296:2805–2814. doi:10.1001/jama.296.23.2805
  • 36. Rebok GW, Ball K, Guey LT, et al. Ten-year effects of the Advanced Cognitive Training for Independent and Vital Elderly cognitive training trial on cognition and everyday functioning in older adults. J Am Geriatr Soc. 2014;62:16–24. doi:10.1111/jgs.12607
  • 37. Edwards J, Xu H, Clark D, et al. Speed of processing training results in lower risk of dementia. Alz Dement Transl Res Clin Interv. 2017;3(4):603–611. doi:10.1016/j.trci.2017.09.002
