The Journals of Gerontology Series B: Psychological Sciences and Social Sciences
2019 May 29;75(6):1144–1154. doi: 10.1093/geronb/gbz073

Home-Based, Adaptive Cognitive Training for Cognitively Normal Older Adults: Initial Efficacy Trial

Hyun Kyu Lee 1, James D Kent 2, Christopher Wendel 3, Fredric D Wolinsky 4, Eric D Foster 5, Michael M Merzenich 1, Michelle W Voss 2,3
Editor: Brent Small
PMCID: PMC7265807  PMID: 31140569

Abstract

Objectives

We examined whether a home-based, adaptive cognitive training (CT) program would lead to cognitive performance changes on a neuropsychological test battery in cognitively normal older adults.

Method

Sixty-eight older adults (mean age = 70.0 years, SD = 3.74) were randomly assigned to either CT or an active control group (AC, casual computer games). Participants were instructed to train on their assigned programs for 42 min per day, 5 days per week, over 10 weeks (35 hr of total program usage). Participants completed tests of processing speed, working memory, and executive control before and after the 10 weeks of training.

Results

Training groups did not differ in performance before training. After training, CT participants outperformed AC participants on the overall cognitive composite score, driven by the processing speed and working memory domains.

Discussion

Our results show that a limited dose of home-based CT can drive cognitive improvements as measured with a neuropsychological test battery, suggesting implications for cognitive health maintenance in cognitively normal older adults.

Keywords: Cognitive improvement, Cognitively normal, Neuropsychological test battery


Unprecedented increases in longevity are coming to the developed world, and as a consequence, the number of people suffering from age-related neurodegenerative diseases limiting functional independence is rapidly growing (Hebert, Weuve, Scherr, & Evans, 2013). On its current course, the human and financial cost of age-related cognitive decline and the progression to senile dementia and frailty will be societally unbearable (Miller, 2000). Cognitive training (CT; Merzenich, Van Vleet, & Nahum, 2014) has been increasingly deployed over the last decade to determine whether it is an effective strategy for changing the course of cognitive aging so that brain span can better match an extended life span.

However, research on CT has proven controversial. Critics argue that CT improves performance primarily on the trained tasks, with limited transfer to closely related measures indexing cognitive aging, and that studies have therefore commonly failed to show convincing evidence of influencing the course of cognitive aging per se (Boot, Blakely, & Simons, 2011; Owen et al., 2010; Simons et al., 2016; Souders et al., 2017). Also, some studies demonstrating CT benefits have methodological flaws such as inadequate controls for placebo effects, sample sizes that are too small, or incomplete reporting of outcomes.

On the other hand, proponents of CT have argued that computer-based training can improve underlying cognitive processes, as evidenced by transfer to improved performance on tasks different from training but which measure abilities in the same cognitive constructs (Edwards, Fausto, Tetlow, Corona, & Valdes, 2018; Lampit, Hallock, & Valenzuela, 2014; Mewborn, Lindbergh, & Stephen Miller, 2017). One example demonstrating transfer of training in a controlled intervention is the ACTIVE trial, in which more than 2,800 older adults (all aged 65 years or older; mean age = 74 years) were randomly assigned to memory, reasoning, and speed-of-processing training groups, or to a no-contact control group. Among those interventions, improvements were largest for those training in processing speed followed by reasoning and memory. Each intervention produced immediate improvements in the trained cognitive abilities, with benefits persisting through 5 years of follow-up. The training gains also lasted to the 10-year follow-up for processing speed and reasoning (Ball et al., 2002; Ball, Ross, Roth, & Edwards, 2013; Rebok et al., 2014; Tennstedt & Unverzagt, 2013).

Demonstrating training benefits to processing speed is theoretically significant because cognitive aging is often explained by a mediational model whereby aging effects on working memory and executive control are largely explained by a common decline in processing speed (Salthouse, 1994). As normal aging slows and degrades sensory input, the quality of information fed to higher-order cognitive functions is also degraded. Similarly, higher-order cognitive processes themselves (i.e., memory and executive function) slow down as rate-limiting factors degrade for both memory (e.g., encoding speed for perceptual information, rehearsal rate, motor response speed) and reasoning (generating and traversing states in a problem space) (Li, Naveh-Benjamin, & Lindenberger, 2005; Li & Rieckmann, 2014).

The neurological bases of the relationship between the speed and efficiency of information processing and higher-order cognitive functions are also becoming increasingly clear. Limits on processing speed are determined by the time constants of known excitatory and inhibitory molecular processes documented by electrophysiological analyses of synaptic (Sengupta, Laughlin, & Niven, 2013), cellular, and network processes; by the state of local and system myelination (Lu et al., 2011); and by the functional integrity and power of parvalbumin inhibitory neuron processes (Donato, Rompani, & Caroni, 2013), whose actions support local network coordination and reliability, among other known physiological processes.

In this study, we examined the effect of CT on older adults’ cognitive ability by extending and improving the speed-of-processing training used in the ACTIVE trial. First, we applied multiple CT exercises beyond Useful Field of View (UFOV, the speed-of-processing training exercise used in the ACTIVE trial). Of 17 total exercises, 9 were specifically designed to enhance the quality and quantity of information processing and decision making at speed. Speed-of-processing exercises were presented early in training, with the goal of first improving the quality of information fed to higher-order cognitive processes. Second, higher-order cognitive abilities such as episodic memory, working memory, and executive control were trained in later stages of the program (training sessions 28–50), extending the benefit from speeded processes to higher-order abilities. Third, training exercises were continuously adaptive within and between sessions to promote enduring improvements (UFOV in the ACTIVE trial was static in the first five sessions and adaptive only in the last five). Also, unlike the ACTIVE trial, the current training was delivered outside the lab via an online training website to test the scalability of CT; participants completed training using their own computers at home. Finally, we compared CT to an active control (AC) matched for computer usage and human interaction.

To examine whether training improved performance on cognitive abilities measured with distinct cognitive tests tapping into the same cognitive constructs, older adults were assigned to two groups for 10 weeks: CT with adaptive exercises or AC with nonadaptive computer games (5 days per week, 42 min per day, ~35 hr of total training, ~1–4 hr of training on each game). AC games were selected from casual games engaging attention, memory, and reasoning abilities. However, unlike CT exercises, the casual games were neither adaptive based on performance nor designed to enhance the speed and accuracy of information processing.

Our primary hypothesis was that: (a) CT, given its demands on fast and accurate information processing, would result in general improvements on cognitive tests as measured by a composite score combining all cognitive assessment tasks. Our secondary hypotheses were that: (b) CT would improve individual cognitive domains (i.e., processing speed, working memory, and executive control) by accelerating information processing and decision making, and (c) improvement expectations would not differ between groups, ruling out placebo effects. Along with cognitive abilities, changes in psychological well-being, odor identification, and standing balance were examined with the NIH Toolbox as exploratory measures, since decline or impairment in these functional measures has been related to normal aging and is often observed in early phases of neurodegenerative diseases such as Alzheimer’s disease.

Method

The current study was preregistered on ClinicalTrials.gov (NCT02331784). All modifications to the registration, and the reasons for them, are explained in a supplement on the study GitHub page, https://github.com/HBClab/ProjectPACR_Share.

Participants

Participants were recruited from the Iowa City community through flyers, newspapers, and online advertisements for a CT study targeting age-related cognitive decline. Applicants were first screened via e-mail with a questionnaire that surveyed demographics (e.g., age, English language proficiency). If not excluded based on the survey, a phone interview checked for medical and nonmedical conditions affecting neuropsychological testing. Although we focus primarily on behavioral cognitive effects here, magnetic resonance imaging (MRI) was also completed and thus considered in screening; results of MRI/fMRI will be described in a separate report. Eligible participants (a) were between ages 65 and 79 years, (b) were fluent English speakers, (c) had adequate sensorimotor capacity to perform training, including visual capacity to read a computer screen at normal viewing distance, auditory capacity to understand normal speech, and motor capacity to control a computer mouse, (d) had no diagnosis of Alzheimer’s disease or related dementias, (e) showed no evidence of dementia, as indicated by Montreal Cognitive Assessment (MoCA) scores >25, (f) had no medical conditions predisposing to imminent functional decline, and (g) had no nonremovable metal on or in their body presenting a safety hazard in the MRI or affecting image quality (please refer to ClinicalTrials.gov for detailed inclusion and exclusion criteria). A total of 68 participants were enrolled into the study (see Figure 1 and Table 1 for information on excluded participants and basic demographics). All participants signed an informed consent form approved by the University of Iowa Institutional Review Board.

Figure 1.


Overall study flow. AC = Active control; CT = Cognitive training; MRI = Magnetic resonance imaging.

Table 1.

Demographics

Cognitive training (CT) Active control (AC) p value
Enrolled N 29 39*
Age 70.41 (3.56) 69.69 (3.88) .43
Male 11 15 .83
Years of education 17.00 (3.27) 17.24 (3.06) .75
Baseline MoCA Score 27.65 (1.31) 27.61 (1.31) .90
Occupational status (% retired) 68.2% (seven nonresponders) 66.6% (nine nonresponders) .91
Racial/Ethnic characteristics (% white) 100% (zero nonresponders) 97.4% (one nonresponder) .39
Baseline Letter comparison 10.51 (1.62) 10.43 (1.66) .84
Baseline Pattern comparison 15.10 (2.09) 15.34 (1.99) .62
Used in Analysis Maximum N 29 30
Mean hours of training 34.01 (5.32) 34.72 (1.53) .48
Age 70.41 (3.56) 69.10 (3.49) .16
Male 11 12 .91
Years of education 17.00 (3.27) 17.45 (3.09) .58
Baseline MoCA Score 27.65 (1.31) 27.56 (1.38) .80
Occupational status (% retired) 68.2% (seven nonresponders) 60% (five nonresponders) .57
Racial/Ethnic characteristics (% white) 100% (zero nonresponders) 100% (zero nonresponders) N/A
Baseline Letter comparison 10.51 (1.62) 10.31 (1.75) .65
Baseline Pattern comparison 15.10 (2.09) 15.50 (2.08) .46

Note: MoCA = Montreal Cognitive Assessment.

*Nine participants were enrolled and randomized but did not perform any training activities (see Figure 1).

Randomization

After screening and baseline testing, participants were assigned to either the CT or AC group using the minimization method (Endo, Nagatani, Hamada, & Yoshimura, 2006), with probability 0.8 of assignment to the group minimizing the symmetric Kullback-Leibler discrepancy on four prognostic variables (age, education, gender, and a baseline composite of letter and pattern comparison). The largest allowable discrepancy in participant counts between the two groups was set to four. However, due to a procedural error in the processing speed post-assessments, we restarted the randomization algorithm after 16 participants, which raised the largest allowable discrepancy between groups to eight participants. In addition, two AC participants who were disqualified during the MRI session due to claustrophobia were mistakenly deleted from the randomization tracking sheet when they did not complete the session, so the maximum discrepancy allowed between groups grew to ten participants. Given our small sample size, allocating participants to groups by minimizing imbalance on prognostic variables is considered a better method than traditional stratified randomization.
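The minimization scheme can be sketched as follows. This is a simplified illustration, not the trial's actual algorithm: it substitutes a summed absolute difference of covariate means for the symmetric Kullback-Leibler discrepancy, and all function and parameter names are hypothetical.

```python
import random

def minimization_assign(new_covs, groups, p_best=0.8, max_gap=4, rng=random):
    """Assign an incoming participant to the group that minimizes
    covariate imbalance, with probability p_best (0.8 in the trial).

    new_covs: prognostic covariates for the new participant, e.g.
              (age, years of education, gender, baseline speed composite).
    groups:   dict mapping the two group labels to lists of covariate
              tuples for participants already assigned.
    """
    names = list(groups)

    # Hard cap: if group sizes already differ by max_gap, force the
    # smaller group (the trial capped the size discrepancy).
    sizes = {g: len(members) for g, members in groups.items()}
    if abs(sizes[names[0]] - sizes[names[1]]) >= max_gap:
        return min(names, key=lambda g: sizes[g])

    def imbalance(candidate):
        # Imbalance after hypothetically assigning to `candidate`:
        # summed absolute difference of covariate means (a stand-in
        # for the symmetric KL discrepancy used in the study).
        trial = {g: list(members) for g, members in groups.items()}
        trial[candidate].append(new_covs)
        means = [[sum(col) / len(col) for col in zip(*members)]
                 for members in trial.values()]
        return sum(abs(a - b) for a, b in zip(*means))

    best = min(names, key=imbalance)
    other = names[1] if best == names[0] else names[0]
    # Favor the minimizing group, but keep allocation unpredictable.
    return best if rng.random() < p_best else other
```

Relative to simple or stratified randomization, minimization keeps a small sample balanced on the chosen prognostic variables while the 0.8 probability preserves allocation unpredictability.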

Sample Size Justification

Based on the effect sizes from the four largest CT randomized controlled trials (Ball et al., 2002; Klusmann et al., 2010; Smith et al., 2009; Wolinsky, 2013), which indicated standardized effects of 0.57 to 0.87, we used the DSS Research statistical power calculator for a two-tailed test with an effect size of 0.6 and an alpha level of 0.05. That calculation indicated 63.5% power to test group differences after training with the 59 participants used in analysis (29 in CT and 30 in AC). This is considered adequate given that the current study was designed as a feasibility trial to check safety and estimate efficacy (effect size) in order to power a larger Phase II trial.
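The reported power figure is reproducible with the standard normal approximation to the power of a two-tailed, two-sample comparison; a minimal sketch (the critical value 1.96 for α = .05 is hardcoded, and the function names are hypothetical, not the DSS calculator's internals):

```python
import math

def normal_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def two_sample_power(d, n1, n2, z_crit=1.959964):
    # Power of a two-tailed two-sample test under the normal
    # approximation, for standardized effect size d.
    delta = d * math.sqrt(n1 * n2 / (n1 + n2))
    return normal_cdf(delta - z_crit) + normal_cdf(-delta - z_crit)

# d = 0.6 with n = 29 (CT) and 30 (AC) gives approximately .635,
# in line with the 63.5% reported above.
power = two_sample_power(0.6, 29, 30)
```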

Study Design

All participants completed cognitive testing and MRI sessions, with the task order fixed within the cognitive session and the MRI always occurring on a separate day following cognitive testing (see Table 2 for details). To maintain participant blinding, consent forms described the study as comparing two distinct types of CT. Site staff (i.e., training coach) directly interacting with participants during training were not blinded in order to effectively handle any training-related issues but were instructed to describe each program’s features as potentially beneficial. Post-training assessments were administered by staff who were separate from training coaches and blinded to the training condition.

Table 2.

Transfer Tasks

Transfer task Category Order Session Type Reference
Digit Symbol Substitution Processing Speed 3 Screen Paper and Pencil (Wechsler, 1987)
3 Post
Pattern Comparison Processing Speed 1 Screen Paper and Pencil (Salthouse, 2004)
1 Post
Letter Comparison Processing Speed 2 Screen Paper and Pencil (Salthouse, 2004)
2 Post
N-back Working Memory 2 Pre Computerized (E-prime) (Kane & Engle, 2002)
5 Post
Visual Short-term Memory Working Memory 3 Pre Computerized (E-prime) (Luck & Vogel, 1997)
6 Post
Spatial Working Memory Working Memory 5 Pre Computerized (E-prime) (Erickson et al., 2011)
8 Post
Trail Making Executive Control 4 Pre Paper and Pencil (Reitan, 1958)
7 Post
Attentional Blink Executive Control 1 Pre Computerized (E-prime) (Shapiro, Raymond, & Arnell, 1997)
4 Post
Color Stroop Executive Control 6 Pre Computerized (E-prime) (Stroop, 1935)
9 Post
General life satisfaction Psychological Well-Being 7 Pre Computerized (NIH toolbox)
10 Post
Self-efficacy Psychological Well-Being 8 Pre Computerized (NIH toolbox)
11 Post
Perceived stress Psychological Well-Being 9 Pre Computerized (NIH toolbox)
12 Post
Odor identification Odor Perception 11 Pre Computerized (NIH toolbox)
14 Post
Standing Balance Standing Balance 10 Pre Computerized (NIH toolbox)
13 Post
Flanker Executive Control 1 MRI Pre Computerized (E-prime) (Fan, McCandliss, Sommer, Raz, & Posner, 2002)
1 MRI Post

Note: MRI = Magnetic resonance imaging.

Participants completed the assigned home-based training program on their own computers. Training was restricted to one session per day, with at least five sessions in each 7-day period. The training coach interacted with participants regarding their training only when participants experienced technical difficulties (five incidents related to the training program, four incidents requesting additional help, and three personal computer-related errors) or when participants had an extended period of training absence. If participants had not trained in a week, the training coach sent a standard reminder e-mail; if they did not respond to the reminder, the coach called to determine the reason for the absence.

When the training programs were finished, participants completed an MRI and a cognitive testing session. Since the current study was designed as a Phase I feasibility trial, with limits on both project period and budget, a longer-term follow-up retention session was not included.

Cognitive Assessments

Assessments administered before and after training were grouped into three categories: Processing Speed (Digit Symbol Substitution, Letter Comparison, Pattern Comparison), Working Memory (N-back, Visual Short-Term Memory, Spatial Working Memory), and Executive Control (Trail Making, Flanker, Color Stroop, Attentional Blink). A brief description of each task and further details such as trial conditions and numbers, stimulus durations, response keys, response periods, intertrial intervals, and stimulus-response intervals can be found on the study GitHub page, https://github.com/HBClab/ProjectPACR_Share. Our primary outcome measure was an overall cognitive composite score (i.e., the mean of all z-scored assessment measures). As secondary outcome measures, we separately examined composite scores for processing speed, working memory, and executive control.
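The overall composite described above, the mean of z-scored measures, can be sketched as follows (function and key names are hypothetical); each task's scores are standardized across all observations, then averaged with equal weight per task:

```python
def zscore(values):
    # Standardize using the sample mean and SD across all observations.
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / (n - 1)) ** 0.5
    return [(v - mean) / sd for v in values]

def overall_composite(task_scores):
    """task_scores: dict mapping task name -> list of scores, one per
    observation (all participants and both timepoints pooled).
    Returns the mean z-score per observation across tasks."""
    z = {task: zscore(scores) for task, scores in task_scores.items()}
    n_obs = len(next(iter(z.values())))
    return [sum(z[t][i] for t in z) / len(z) for i in range(n_obs)]
```

Z-scoring each task before averaging puts all measures on a common scale, so no single task dominates the composite regardless of its raw units.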

Exploratory Measures

Psychological well-being was measured with the General Life Satisfaction, Perceived Stress, and Self-Efficacy surveys from the NIH Toolbox. Olfaction and standing balance were measured with the Odor Identification and Standing Balance tests from the NIH Toolbox. For all NIH Toolbox measures, the primary outcome was the age-adjusted scale score (see study GitHub page for details of each measure).

Online Daily Diary Questionnaire

At the beginning of each training session, participants reported on their physical activity, diet, social interaction, and general feedback on training, with a weekly report for each domain (see study GitHub page for details). For physical activity level, a composite score was calculated based on the Godin leisure activity score (7 × strenuous exercise + 5 × moderate exercise + 3 × mild exercise). For healthy diet and program feedback, composite scores were calculated by summing answers (10-point Likert scale) in each domain. For social interaction, a composite score was calculated as the sum of responses on community and human interaction time.

Training Programs

Cognitive Training (CT)

The CT program included 17 computerized exercises delivered from the study website (see study GitHub page for details). Core training targets included visual and auditory processing speed and accuracy, attention, memory, and executive control abilities. In each session, participants were given seven games, each lasting ~6 min. During a 6-min game, participants played a given level multiple times (usually two to three, varying with the participant’s response speed and number of trials). Baseline was set as performance on the first attempt; on repetition, initial difficulty was set by the previous trial’s best performance to promote improvement. Participants progressed from initial easy trials, supported by continual positive feedback, to individually adapted challenge trials designed to maintain ~75%–85% accuracy. As sessions progressed, processing speed, performance accuracy, and/or higher-order performance operations were challenged at progressively more demanding levels (see study GitHub page for each exercise’s adaptive features). The adaptive training ensured that virtually all participants, regardless of learning rate, were continuously challenged even as their abilities improved.
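The within-session adaptivity amounts to a staircase rule that keeps accuracy near the target band; a minimal single-step sketch (the program's actual adaptive rules are exercise-specific, and these names, step sizes, and bounds are assumptions):

```python
def adapt_level(level, accuracy, band=(0.75, 0.85), step=1,
                min_level=1, max_level=100):
    # Raise difficulty when accuracy exceeds the target band,
    # lower it when accuracy falls below; hold otherwise.
    lo, hi = band
    if accuracy > hi:
        return min(level + step, max_level)
    if accuracy < lo:
        return max(level - step, min_level)
    return level
```

A rule of this shape converges on the difficulty at which a participant performs near the 75%–85% target, which is how all participants stay challenged regardless of learning rate.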

Early in training, participants completed exercises designed to improve speed and accuracy of visual/auditory information processing and attention. Later in training, participants progressively trained more on working memory capacity and executive control. For instance, memory and executive control exercises were first introduced after 28 sessions of processing speed and attention training. A detailed training schedule is presented in study GitHub site.

Active Control (AC)

We used 13 commercially available computer games designed to: (a) provide a face-valid approach to CT, ensuring participants were blind to group affiliation; (b) match expectation-based influences on cognitive outcomes; (c) match the experimental program in overall program use intensity, staff interaction, reward, and overall engagement; and (d) not be continuously adaptive (see study GitHub page for details of the AC games). Three of the 13 games could increase in difficulty within a session by discrete levels. The user experience in the AC and CT groups was almost identical except for the games played (see study GitHub page for screenshots of each group).

Statistical Analysis Plan

We tested our predictions with mixed-effects linear regressions using normalized baseline and post-training performance calculated with all data points across time. Each model included the treatment group by time interaction term as a fixed factor and each participant’s intercept as a random factor; the interaction term estimated the effect of CT on change in the outcome measure. Data preparation and analysis were performed using Python v.3.6 and R v.3.4. Data were checked for normality, outliers, and errors.
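With exactly two timepoints and a random intercept per participant, the fixed group-by-time interaction in such a model is, for complete balanced data, equivalent to comparing pre-to-post change scores between groups. A minimal pure-Python sketch of that equivalent contrast, using Welch's t statistic in place of the mixed-model fit actually used (function names hypothetical):

```python
import math

def welch_t(a, b):
    # Welch's two-sample t statistic (unequal variances allowed).
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

def interaction_t(pre_ct, post_ct, pre_ac, post_ac):
    # Group-by-time interaction expressed as a between-group
    # contrast of within-participant change scores.
    change_ct = [post - pre for pre, post in zip(pre_ct, post_ct)]
    change_ac = [post - pre for pre, post in zip(pre_ac, post_ac)]
    return welch_t(change_ct, change_ac)
```

The mixed-model formulation generalizes this contrast by retaining participants with partial data, which is one reason it suits the mITT analyses here.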

The modified Intent to Treat (mITT, Abraha et al., 2015) population included all randomized participants who performed any training. Nine participants were enrolled and randomized but discontinued before initiating training (all nine in AC: six were unable to complete the MRI scan due to claustrophobia or schedule conflicts, one was unable to start training promptly, and two dropped out because of the time commitment). These nine participants were excluded from analysis since they had no data either on training (n = 4) or on training and baseline assessments (n = 5; i.e., they had only the screening data required for randomization).

Results

Demographics

At baseline, there were no group differences in demographics (age, gender, education) or cognitive performance (MoCA, Pattern Comparison, Letter Comparison scores, ps > .16).

Attrition

Adherence was strong. Fifty-seven of the 68 intent-to-treat (ITT) participants (83.8%) completed their training program, and 56 of the 68 (82.3%) completed post-training assessments. Among those who started training, two participants discontinued during training (one in AC and one in CT), and their available data were included in analysis. At baseline, non-completers did not differ significantly from completers on any demographic variable (ps > .16). The mean duration to complete training was 68.8 days (SD = 10.4). Due to a procedural error, the first 16 participants in the mITT population did not complete the post-training processing speed assessment (see Figure 1 for detailed participant numbers); the processing speed measures therefore have fewer post-training observations in the analysis than the other measures.

Game Performance Improvement in CT

Since the AC games were casual games without a data saving system, we could not quantify changes in AC group performance. In the CT group, each exercise had multiple levels, presented to participants in a fixed order (i.e., from easy to complex). The baseline score was set on the initial trial of each level; the post-training score was the best performance on repeated trials at that level. Baseline and post-training scores for each game were calculated by averaging the corresponding scores across all levels in the game, and the composite score for each cognitive domain was calculated by averaging z-scores of all games in that domain. Mixed-effects linear regression was performed for each domain separately, with time (pre and post) as a factor and each participant’s intercept as a random factor. In all domains, CT participants improved from baseline to post-training (processing speed: t(29) = 34.49, p < .01; attention: t(29) = 27.43, p < .01; memory: t(28) = 25.24, p < .01; executive control: t(28) = 22.31, p < .01).

Transfer of Training

First, we examined whether groups performed equivalently at pretest using a one-way ANOVA with group (CT vs AC) as a between-subjects factor for each measure. There were no group differences on any measure (ps > .23) except Attentional Blink (AB), which favored the AC group (F(1,56) = 5.75, p < .05).

Primary Outcome

Overall cognitive composite score

To examine whether CT had any overall effect on cognitive abilities, and to better address measurement error and multiple comparisons, we performed analyses at the construct level using a composite score. We first tested a composite score based on the mean of each participant’s z-scores across all assessment tasks. We conducted mixed-effects linear regression on this composite score with the group by time interaction term as a fixed factor and each participant’s intercept as a random factor (see Figure 2A). Results showed a significant interaction between group and time (t(57.72) = 2.59, p < .05; Cohen’s d for the between-group difference of pre scores (pre) = .06, of post scores (post) = .38, and of change scores (change) = .66), favoring CT over AC. We performed the same analysis including the four participants who had only pre-assessment data (n = 63) and found that the results did not change (t(59.17) = 2.64, p < .05, Cohen’s d for change = .68).

Figure 2.


Pre- and post-training performance scores for composite and individual measures. Error bars are standard error. (A) Overall cognitive composite score. (B) Processing speed composite and individual measures. (C) Working memory composite and individual measures. (D) Executive control composite and individual measures. Higher scores represent better performance for the overall cognitive composite score, processing speed, and working memory; for the executive control composite and individual measures, a lower score (smaller cost) represents better performance. Non-normalized summaries of the components are presented on the study GitHub site. AC = Active control; CT = Cognitive training.

Secondary Outcomes

Composite-level analysis for each cognitive domain

Cognitive domain groupings were confirmed by a Principal Component Analysis (PCA) with Varimax rotation on the pre-test data. Because the AB measure showed a significant group difference at baseline, we excluded AB from further secondary outcome analyses and from the PCA (the analysis and figure for executive function including AB showed equivalent results and can be found on the study GitHub page). Despite the smaller sample size (n = 59), PCA results were comparable with previous literature (Baniqued et al., 2014; Salthouse, 2003), showing three interpretable components that in combination explained 58.7% of the variance (see study GitHub page for detailed loadings): Working Memory (N-back, Visual Short-Term Memory, Spatial Working Memory), Processing Speed (Digit Symbol, Pattern Comparison, Letter Comparison), and Executive Control (Flanker Cost, Stroop Cost, and Trail B-A).

However, due to a relatively small sample size in the current study, the factor scores from the current PCA were not considered adequate for creating composite scores from factor loadings. Instead, composite measures of processing speed, working memory, and executive control were calculated by averaging the z-scores of each domain, allowing equal weight to each task in the composite.

We conducted mixed-effects linear regression analyses on these composite scores, with the group by time interaction term as a fixed factor and each participant’s intercept as a random factor (see Figure 2). After correcting for multiple comparisons with the Benjamini-Hochberg false discovery rate procedure, significant group by time interactions were found for processing speed (t(42.84) = 2.54, p < .016; Cohen’s d for pre = .03, post = .57, and change = .74) and for working memory (t(56.59) = 2.38, p < .033; Cohen’s d for pre = .22, post = .65, and change = .63). For executive control, the interaction between group and time was not statistically significant (t(57.02) = −1.24, p = .213). When weighted factor scores were used to form the composites, the results did not change from the equal-weighted method: there were significant interactions for processing speed (t(42.08) = 2.16, p < .05, Cohen’s d for change = .64) and working memory (t(56.58) = 2.38, p < .05, Cohen’s d for change = .65), and no significant interaction for executive control (t(57.10) = −1.41, p > .05).
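The Benjamini-Hochberg step-up procedure applied above can be sketched as follows; for m = 3 domain tests at α = .05, the per-rank critical values are 1/3 × .05 ≈ .0167, 2/3 × .05 ≈ .0333, and .05, matching the thresholds reported above (function name hypothetical):

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Step-up FDR control: find the largest rank k such that
    p_(k) <= (k / m) * alpha, then reject hypotheses ranked 1..k."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        reject[i] = rank <= k
    return reject
```

Unlike a Bonferroni correction, the step-up rule relaxes the threshold for the larger p values once a smaller one passes, which controls the false discovery rate rather than the family-wise error rate.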

Since the working memory composite score combined RT and accuracy, the group by time interactions in RT and accuracy were tested separately to examine the locus of benefit. The benefit in working memory was mainly driven by improved response time in CT relative to AC (t(55.61) = 3.40, p < .01); accuracy results did not pass the threshold for statistical significance (t(57.10) = 0.97, p > .05).

Task-level analyses

To evaluate transfer of training, mixed-effects linear regressions were performed for each task with the group by time interaction term as a fixed factor and each participant’s intercept as a random factor. Significant group by time interactions at p < .05 (uncorrected for multiple comparisons) were found for the Digit Symbol Substitution and N-back tasks, favoring the CT group. After correcting for multiple comparisons with the Benjamini-Hochberg false discovery rate procedure, the group by time interaction remained significant only for the N-back task (p < .005).

NIH toolbox measures

Mixed-effects linear regressions were performed for each measure as described above. There were no interactions between group and time in any measures (ps > .05, see study GitHub site for details).

Online weekly diary questionnaire

Mixed-effects linear regressions, as in the previous analyses, were performed for each domain. The interaction between group and training week was significant only for diet composite scores (t(522.5) = −3.57, p < .01; Cohen’s d for pre = −.08, post = .41, change = .65), favoring AC, suggesting that AC participants reported changing their diet more during the intervention (see study GitHub page for figure). Most importantly, there was no group difference or group by training week interaction for general feedback on training (i.e., subjective reports of memory improvement, functional independence, and happiness) (ps > .65).

Discussion

Older adults engaging in home-based adaptive CT showed improved cognitive performance compared with peers playing casual computer games. Improvements were strongest for processing speed and working memory. Interestingly, transfer benefits in working memory appeared to be driven by improved response time: although working memory accuracy improvements also favored CT over AC, they did not pass the threshold for statistical significance, suggesting the memory improvements mostly reflected enhanced speed. Given that exercises in the early stages of training were designed to improve processing speed, these results support the transfer of speeded stimulus processing to decision making in working memory tasks.

Contrary to our predictions, we found no transfer to executive control domains outside of working memory. One potential explanation is that improved processing speed obscured benefits on executive control as measured by the cost between compatible and incompatible conditions. For example, when we examined group by time interactions on the RTs of the compatible and incompatible conditions of the Stroop and Flanker tasks separately, there were significant group by time interactions favoring CT over AC (ps < .01). Because the cost measures for Stroop and Flanker depend on the compatible condition as a baseline measure of processing speed, changes in processing speed that are larger than effects on executive function would obscure, or potentially invalidate, their use for measuring change in cost. A better outcome measure would tap into fundamental components of executive function, such as task-switching with a global switch cost measure (Salthouse, 2005; Verhaeghen, 2011). Because global switch costs have more support for assessing aspects of executive function distinct from slowing, a task-switching task could better detect change in executive function beyond general speed, while also providing a within-task control of local switch costs.
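The confound can be illustrated with a toy calculation (hypothetical RTs, not study data): a uniform 10% speed-up shrinks the difference-score cost even when relative interference is unchanged.

```python
def interference_cost(rt_incompatible_ms, rt_compatible_ms):
    """Difference-score cost (e.g., Stroop or Flanker interference) in ms."""
    return rt_incompatible_ms - rt_compatible_ms

# hypothetical pre-training RTs: 880 ms incompatible, 800 ms compatible
pre_cost = interference_cost(880.0, 800.0)               # 80 ms
# after a uniform 10% general speed-up, both conditions scale by 0.9
post_cost = interference_cost(880.0 * 0.9, 800.0 * 0.9)  # 72 ms
# the difference score drops by 8 ms, yet the RT ratio (1.1) is unchanged,
# so the apparent "reduction in cost" reflects general speed, not control
```

A smaller cost after training is therefore ambiguous whenever overall RT also changed.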

Related to the lack of benefit on executive control, another possibility is that the dosage of training for executive control was not sufficient to produce cognitive improvement. In the current training schedule, speed-of-processing exercises were trained more (23 hr for processing speed and attention) than information-processing capacity or ability exercises (7 hr for memory and 5 hr for executive control). The potential benefit of extended working memory or executive control training should be examined in future studies.

Training benefits on cognitive abilities, however, did not transfer to exploratory functional measures such as psychological well-being, balance, and odor identification. The lack of benefit might be due either to the normal functional capacity of our sample or to limits on transfer from cognitive to functional domains. Given that functional benefits from the ACTIVE trial were not present immediately after training but emerged years later, it may also be that benefits on functional measures would not appear until years after completion of training.

When documenting transfer from training, it is important to employ appropriate control groups to confirm that CT—and not a differential placebo effect—enhanced performance on outcome measures (Boot, Simons, Stothart, & Stutts, 2013; Simons et al., 2016). The current study compared computerized CT to an AC group in which computer usage and human interaction were matched in every respect, the sole difference being game content. Also, placebo effects were indirectly tested by measuring participants’ self-perceived improvement (i.e., daily diary questions on happiness, memory, and healthy independence) across training. When testing only the memory question (“I think my memory is improving.”), which was directly related to perceived cognitive improvement, there was no interaction between group and training weeks (p = .36). Despite both groups reporting comparable scores on perceived improvement, only the CT group showed performance improvement on the untrained cognitive function measures.

In terms of drop-out, the AC group had a higher drop-out rate than the CT group (10 vs 1). However, most AC drop-outs occurred before participants were introduced to their training program (mostly due to MRI-related issues). Among those who were introduced to the training program, drop-out rates were equivalent between groups (one in each group), suggesting no influence of group allocation on motivation. To directly examine whether task-related motivation is equivalent in both groups, additional post-training surveys could be conducted to rule out possible confounds from effort or motivation.

When interpreting our results as a possible means to improve cognitive abilities in older adults, some caveats should be noted. First, due to a procedural error, 16 participants were missing post-training data on processing speed. However, even with the smaller sample size for the processing speed measure, the interaction between group and time was significant, documenting a robust benefit of training on processing speed. Second, participants were limited to cognitively normal older adults aged 65–79 years, and the sample was primarily Caucasian and highly educated, limiting generalization to the broader older population. Given that the purpose of CT is to provide an effective training tool with broad accessibility and generalizability, it is a priority to determine training efficacy in a more diverse population of older adults with respect to age, ethnicity, education, and cognitive status. Also, the current study could not distinguish whether the benefit of CT over AC arose from its design (adaptive training), its content (speed-based training), or a combination of both. Future studies should therefore directly compare the effects of adaptive training with and without speed-based components.

It may be argued that the benefits shown here reflect not improved cognitive ability but merely familiarity with the shared training and assessment modality (computer), or the application of a specific strategy learned in training to the assessment tasks. However, of the 17 CT exercises, only 4 (Card Shark, Auditory Ace, Mixed Signal, Scene Crasher) shared a task goal with three assessment tasks (N-back task, Color Stroop task, SPWM) (e.g., judging whether the current item matches the item N positions earlier, in both Card Shark and the N-back task). Besides the task goal, features such as trial procedures, stimulus features, stimulus modality, and task complexity differed. As a confirmatory analysis, we examined assessment outcomes excluding those three assessment measures from the overall cognitive composite and found a significant group by time interaction, as in the primary result (t = 1.97, p < .05).

In terms of statistical power, with approximately 30 participants in each group, the current study was powered only to detect a large effect. Therefore, a more subtle effect may have gone undetected. Even with this limited sample size, the current study demonstrated a significant group by time interaction on the primary measure and on two of the secondary measures.
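The power constraint can be made concrete with a back-of-the-envelope calculation. This is a normal-approximation sketch, not the authors' power analysis: the minimum detectable Cohen's d for a two-sided, two-sample comparison at alpha = .05 and 80% power.

```python
from statistics import NormalDist

def min_detectable_d(n_per_group, alpha=0.05, power=0.80):
    """Smallest standardized mean difference (Cohen's d) detectable by a
    two-sided two-sample test, using the normal approximation:
    d = (z_{1 - alpha/2} + z_{power}) * sqrt(2 / n)."""
    z = NormalDist().inv_cdf  # standard normal quantile function
    return (z(1 - alpha / 2) + z(power)) * (2.0 / n_per_group) ** 0.5

# with ~30 participants per group, only d of roughly 0.7 or larger is
# reliably detectable, i.e., a medium-to-large effect
print(round(min_detectable_d(30), 2))  # prints 0.72
```

By the same approximation, detecting a medium effect (d around 0.5) would require roughly 60 or more participants per group.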

Overall, this initial efficacy study showed that 35 hr of home-based, adaptive CT can be feasibly delivered at home and used effectively by cognitively normal older adults to improve cognitive performance. Based on the results of this feasibility trial, a Phase II study is underway with the goals of recruiting a larger and more diverse sample of older adults across two study sites, testing longer-term retention 6 months after program completion, including episodic memory in the assessment battery, and adding a performance-based task of real-world functional ability (TIADL) to examine evidence of transfer from CT to real-world situations.

Funding

This work was supported by National Institutes of Health/National Institute on Aging (R43AG047722).

Acknowledgments

We wish to thank all of the study participants for their time and effort. We also thank the research assistants in the Health, Brain, and Cognition lab, especially Conner Wharff, Elaine Schultz, Seima Al-Momani, and Kelsey Shrier, for their help with data collection.

Conflict of Interest

H. K. Lee and M. M. Merzenich disclose a conflict of interest: H. K. Lee is a salaried employee of Posit Science, and M. M. Merzenich is a founder of Posit Science, which could benefit if the cognitive training program is shown to be an effective treatment.

References

  1. Abraha I., Cherubini A., Cozzolino F., De Florio R., Luchetta M. L., Rimland J. M., … Montedori A. (2015). Deviation from intention to treat analysis in randomised trials and treatment effect estimates: Meta-epidemiological study. BMJ, 350, h2445. doi:10.1136/bmj.h2445
  2. Ball K., Berch D. B., Helmers K. F., Jobe J. B., Leveck M. D., Marsiske M., … Willis S. L.; Advanced Cognitive Training for Independent and Vital Elderly Study Group (2002). Effects of cognitive training interventions with older adults: A randomized controlled trial. JAMA, 288, 2271–2281. doi:10.1001/jama.288.18.2271
  3. Ball K. K., Ross L. A., Roth D. L., & Edwards J. D. (2013). Speed of processing training in the ACTIVE study: How much is needed and who benefits? Journal of Aging and Health, 25, 65S–84S. doi:10.1177/0898264312470167
  4. Baniqued P. L., Kranz M. B., Voss M. W., Lee H., Cosman J. D., Severson J., & Kramer A. F. (2014). Cognitive training with casual video games: Points to consider. Frontiers in Psychology, 4, 1010. doi:10.3389/fpsyg.2013.01010
  5. Boot W. R., Blakely D. P., & Simons D. J. (2011). Do action video games improve perception and cognition? Frontiers in Psychology, 2, 226. doi:10.3389/fpsyg.2011.00226
  6. Boot W. R., Simons D. J., Stothart C., & Stutts C. (2013). The pervasive problem with placebos in psychology: Why active control groups are not sufficient to rule out placebo effects. Perspectives on Psychological Science, 8, 445–454. doi:10.1177/1745691613491271
  7. Donato F., Rompani S. B., & Caroni P. (2013). Parvalbumin-expressing basket-cell network plasticity induced by experience regulates adult learning. Nature, 504, 272–276. doi:10.1038/nature12866
  8. Edwards J. D., Fausto B. A., Tetlow A. M., Corona R. T., & Valdés E. G. (2018). Systematic review and meta-analyses of useful field of view cognitive training. Neuroscience and Biobehavioral Reviews, 84, 72–91. doi:10.1016/j.neubiorev.2017.11.004
  9. Endo A., Nagatani F., Hamada C., & Yoshimura I. (2006). Minimization method for balancing continuous prognostic variables between treatment and control groups using Kullback-Leibler divergence. Contemporary Clinical Trials, 27, 420–431. doi:10.1016/j.cct.2006.05.002
  10. Erickson K. I., Voss M. W., Prakash R. S., Basak C., Szabo A., Chaddock L., … Kramer A. (2011). Exercise training increases size of hippocampus and improves memory. Proceedings of the National Academy of Sciences of the United States of America, 108, 3017–3022. doi:10.1073/pnas.1015950108
  11. Fan J., McCandliss B. D., Sommer T., Raz A., & Posner M. I. (2002). Testing the efficiency and independence of attentional networks. Journal of Cognitive Neuroscience, 14, 340–347. doi:10.1162/089892902317361886
  12. Hebert L. E., Weuve J., Scherr P. A., & Evans D. A. (2013). Alzheimer disease in the United States (2010–2050) estimated using the 2010 census. Neurology, 80, 1778–1783. doi:10.1212/WNL.0b013e31828726f5
  13. Kane M. J., & Engle R. W. (2002). The role of prefrontal cortex in working-memory capacity, executive attention, and general fluid intelligence: An individual-differences perspective. Psychonomic Bulletin & Review, 9, 637–671. doi:10.3758/BF03196323
  14. Klusmann V., Evers A., Schwarzer R., Schlattmann P., Reischies F. M., Heuser I., & Dimeo F. C. (2010). Complex mental and physical activity in older women and cognitive performance: A 6-month randomized controlled trial. The Journals of Gerontology, Series A: Biological Sciences and Medical Sciences, 65, 680–688. doi:10.1093/gerona/glq053
  15. Lampit A., Hallock H., & Valenzuela M. (2014). Computerized cognitive training in cognitively healthy older adults: A systematic review and meta-analysis of effect modifiers. PLoS Medicine, 11, e1001756. doi:10.1371/journal.pmed.1001756
  16. Li S. C., Naveh-Benjamin M., & Lindenberger U. (2005). Aging neuromodulation impairs associative binding: A neurocomputational account. Psychological Science, 16, 445–450. doi:10.1111/j.0956-7976.2005.01555.x
  17. Li S. C., & Rieckmann A. (2014). Neuromodulation and aging: Implications of aging neuronal gain control on cognition. Current Opinion in Neurobiology, 29, 148–158. doi:10.1016/j.conb.2014.07.009
  18. Lu P. H., Lee G. J., Raven E. P., Tingus K., Khoo T., Thompson P. M., & Bartzokis G. (2011). Age-related slowing in cognitive processing speed is associated with myelin integrity in a very healthy elderly sample. Journal of Clinical and Experimental Neuropsychology, 33, 1059–1068. doi:10.1080/13803395.2011.595397
  19. Luck S. J., & Vogel E. K. (1997). The capacity of visual working memory for features and conjunctions. Nature, 390, 279–281. doi:10.1038/36846
  20. Merzenich M. M., Van Vleet T. M., & Nahum M. (2014). Brain plasticity-based therapeutics. Frontiers in Human Neuroscience, 8, 385. doi:10.3389/fnhum.2014.00385
  21. Mewborn C. M., Lindbergh C. A., & Stephen Miller L. (2017). Cognitive interventions for cognitively healthy, mildly impaired, and mixed samples of older adults: A systematic review and meta-analysis of randomized-controlled trials. Neuropsychology Review, 27, 403–439. doi:10.1007/s11065-017-9350-8
  22. Miller C. A. (2000). Report from the World Alzheimer Congress 2000. Geriatric Nursing, 21, 274–275. doi:10.1067/mgn.2000.110833
  23. Owen A. M., Hampshire A., Grahn J. A., Stenton R., Dajani S., Burns A. S., … Ballard C. G. (2010). Putting brain training to the test. Nature, 465, 775–778. doi:10.1038/nature09042
  24. Rebok G. W., Ball K., Guey L. T., Jones R. N., Kim H. Y., King J. W., … Willis S. L. (2014). Ten-year effects of the advanced cognitive training for independent and vital elderly cognitive training trial on cognition and everyday functioning in older adults. Journal of the American Geriatrics Society, 62, 16–24. doi:10.1111/jgs.12607
  25. Reitan R. M. (1958). Validity of the Trail Making Test as an indicator of organic brain damage. Perceptual and Motor Skills, 8, 271–276. doi:10.2466/pms.8.7.271-276
  26. Salthouse T. A. (1994). Aging associations: Influence of speed on adult age differences in associative learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20, 1486–1503. doi:10.1037/0278-7393.20.6.1486
  27. Salthouse T. A. (2003). Memory aging from 18 to 80. Alzheimer Disease and Associated Disorders, 17, 162–167.
  28. Salthouse T. A. (2004). Localizing age-related individual differences in a hierarchical structure. Intelligence, 32. doi:10.1016/j.intell.2004.07.003
  29. Salthouse T. A. (2005). Relations between cognitive abilities and measures of executive functioning. Neuropsychology, 19, 532–545. doi:10.1037/0894-4105.19.4.532
  30. Sengupta B., Laughlin S. B., & Niven J. E. (2013). Balanced excitatory and inhibitory synaptic currents promote efficient coding and metabolic efficiency. PLoS Computational Biology, 9, e1003263. doi:10.1371/journal.pcbi.1003263
  31. Shapiro K. L., Raymond J. E., & Arnell K. M. (1997). The attentional blink. Trends in Cognitive Sciences, 1, 291–296. doi:10.1016/S1364-6613(97)01094-2
  32. Simons D. J., Boot W. R., Charness N., Gathercole S. E., Chabris C. F., Hambrick D. Z., & Stine-Morrow E. A. (2016). Do “brain-training” programs work? Psychological Science in the Public Interest, 17, 103–186. doi:10.1177/1529100616661983
  33. Smith G. E., Housen P., Yaffe K., Ruff R., Kennison R. F., Mahncke H. W., & Zelinski E. M. (2009). A cognitive training program based on principles of brain plasticity: Results from the Improvement in Memory with Plasticity-based Adaptive Cognitive Training (IMPACT) study. Journal of the American Geriatrics Society, 57, 594–603. doi:10.1111/j.1532-5415.2008.02167.x
  34. Souders D. J., Boot W. R., Blocker K., Vitale T., Roque N. A., & Charness N. (2017). Evidence for narrow transfer after short-term cognitive training in older adults. Frontiers in Aging Neuroscience, 9, 41. doi:10.3389/fnagi.2017.00041
  35. Stroop J. R. (1935). Studies of interference in serial verbal reactions. Journal of Experimental Psychology, 18, 643–662.
  36. Tennstedt S. L., & Unverzagt F. W. (2013). The ACTIVE study: Study overview and major findings. Journal of Aging and Health, 25, 3S–20S. doi:10.1177/0898264313518133
  37. Verhaeghen P. (2011). Aging and executive control: Reports of a demise greatly exaggerated. Current Directions in Psychological Science, 20, 174–180. doi:10.1177/0963721411408772
  38. Wechsler D. (1987). WMS-R: Wechsler Memory Scale--Revised: Manual. San Antonio: Psychological Corp.
  39. Wolinsky F. D. (2013). A randomized controlled trial of cognitive training using a visual speed of processing intervention in middle aged and older adults. PLoS One, 8, e61624. doi:10.1371/journal.pone.0061624
