Abstract
Background and Objectives
Numerous longitudinal studies suggest that technology use in late adulthood is associated with cognitive benefits. Using data from a randomized controlled trial, the current study examined whether computer use improves cognition in older adults with little to no previous computer experience.
Research Design and Methods
This study used data from the Personal Reminder Information and Social Management (PRISM) trial. Community-dwelling older adults with little previous computer experience (MAge = 76.15) were randomly assigned to learn and use a computer (the PRISM system, n = 150) or to interact with parallel content delivered in a nondigital format (paper binder, n = 150) for 12 months. Objective and subjective cognitive outcomes were measured before (pretest) and after the intervention (posttest). Latent change score models and Bayesian analyses of variance were used to examine cognitive change at the ability level and at the level of individual measures.
Results
Computer training and use for 12 months did not lead to cognitive improvements at the ability level. Strong evidence against cognitive benefits at the individual measure level was also observed.
Discussion and Implications
Casual computer use does not provide enough cognitive stimulation to improve cognition in late adulthood. Cognitive benefits observed in longitudinal studies may be mediated by other factors or influenced by confounding variables.
Keywords: Cognition, Cognitive aging, Technology
Population aging and age-related cognitive declines present unprecedented challenges for the United States and the world. Information and communication technologies have the potential to address age-associated declines in cognitive and functional abilities in several ways. For example, technologies can augment or substitute for lost functions (e.g., the use of a smart assistant to compensate for prospective memory loss; Charness, 2020). However, can the act of learning to use and using technology itself, which requires multiple cognitive abilities (attention, memory, reasoning, etc.), serve as an enrichment activity to prevent or reverse age-related cognitive declines? The current article investigates the value of computer use as a cognitive enrichment tool and examines potential cognitive benefits of computer training and usage in older adults with little to no prior computer experience using data from a randomized controlled trial.
Multiple large-scale longitudinal studies have observed positive associations between computer use and cognitive change in older adults (Almeida et al., 2012; Berner et al., 2019; Choi et al., 2021; Hartanto et al., 2020; Ihle et al., 2020; Kamin & Lang, 2020; Xavier et al., 2014). For instance, Kamin and Lang (2020) examined the reciprocal relationship between internet use and cognitive function over 2 years in an older adult sample and found a greater impact of internet use on cognition than vice versa. Specifically, internet use predicted positive change in cognition over 2 years after controlling for demographics and self-reported mental and physical health. Hartanto et al. (2020) further controlled for the frequency of other cognitively stimulating activities (e.g., reading, attending lectures, and courses) and found similar results for frequency of computer use and executive function over 9 years in a middle-aged and older adult sample. These results suggest that computer use plays a unique role in protecting against cognitive declines.
A potential explanation for computer and internet-related cognitive benefits is the cognitive enrichment hypothesis (Hertzog et al., 2008), which posits that new learning and engagement in cognitively challenging activities can alter the course of cognitive decline despite aging constraints on the level of theoretical maximal performance. Sustained engagement in cognitively challenging and stimulating activities is believed to benefit general cognition by promoting the formation of supportive neural circuitry that provides additional neural resources to compensate for age-related changes in the brain (Park & Reuter-Lorenz, 2009; Reuter-Lorenz & Park, 2014). Given that learning to use computers is cognitively challenging and using computers proficiently requires multiple cognitive abilities including memory, processing speed, reasoning, executive function, and vocabulary (Czaja et al., 2006; Zhang et al., 2017), computer training and computer use may be ideal enrichment activities to benefit general cognition and those specific cognitive abilities, especially for older adults with little or no previous experience.
Only a limited number of intervention studies have examined the effect of computer training and usage on cognition in older adults, and results from those studies are inconsistent. Chan et al. (2016) trained older adults to use an iPad for 5 h each week over 10 weeks and had them use it at their leisure for 10 h every week following each training session. They found improvements in episodic memory and processing speed but not in cognitive control or visuospatial processing. Vaportzis et al. (2017) trained older adults to use an iPad in ten 2-h sessions and found improvements in processing speed but not in reasoning, working memory, or verbal comprehension. Finally, Klusmann et al. (2010) trained older women to use a computer in seventy-five 90-min sessions over 6 months and found improvements in immediate and delayed recall. Older women in the training group also maintained performance on executive functions while those in the control group experienced declines. On the other hand, Slegers et al. (2009) failed to observe any cognitive benefit after three 4-h training sessions or following the 12-month period of computer use after training. Null findings were also observed in studies with older adults with mild cognitive impairment (Djabelkhir et al., 2017) and with older adults in care facilities (Cid et al., 2020).
Although several previous intervention studies did observe some cognitive benefits, most were based on small convenience samples, and the control groups involved were either passive (Ordonez et al., 2011; Vaportzis et al., 2017) or active but built around intervention content of a different nature (e.g., watching movies; Chan et al., 2016). The current analysis provides a stronger test of the cognitive benefits of computer and internet use with data from a 12-month multisite randomized controlled trial examining quality of life and well-being after using a specially designed computer system (the Personal Reminder Information and Social Management trial [PRISM]; Czaja et al., 2018). The PRISM trial features (a) a large sample of older adults with diverse ethnic backgrounds and socioeconomic status (SES), (b) objective and subjective measures of processing speed, reasoning, crystallized intelligence, working memory, executive function, and everyday memory, along with (c) a rigorous control group in which participants interacted with parallel content delivered in a nondigital form (a paper binder). Moreover, participants were prescreened to ensure that they did not own a computer and had minimal computer experience, making the group more likely to show cognitive benefits from computer training and use. The sample size and broad set of measures enable us to estimate cognitive change as a latent variable to attenuate the influence of measurement error and task-specific variance. The rigorous parallel-content control group ensures that any benefits observed are directly related to computer training and computer use. Based on the cognitive enrichment hypothesis and previous longitudinal and intervention studies, we hypothesized that older adults interacting with the PRISM system would demonstrate significant improvements in general cognitive ability and on one or more cognitive measures after 12 months while the control group would experience no significant change.
Method
Study Design
The PRISM system trial was a multisite randomized controlled trial conducted in three diverse locations: Atlanta, Georgia; Miami, Florida; and Tallahassee, Florida. The trial was 12 months in duration and collected measures of social isolation and loneliness at baseline, 6 months, and 12 months and measures of cognition at baseline (i.e., pretest) and 12 months (i.e., post-test). For full trial details, see Czaja et al., 2015 and Czaja et al., 2018.
Participants
Participants were 300 community-dwelling older adults without cognitive impairment, with little computer experience, and who were at risk for social isolation (lived alone, worked or volunteered minimally, made minimal use of senior centers or formal organizations). Ages ranged from 65 to 98 years (M = 76.15, SD = 7.4) and the sample was 78% female. The sample was diverse (46% non-White) and many were of lower SES (39% had attained a high school diploma or less, 89% had an annual household income of <$30,000). At 12 months, attrition was 19%. Attrition was higher in the control condition (25%) compared to the PRISM condition (13%; χ²(1) = 4.74, p = .029). However, attrition was not significantly related to baseline cognitive performance (t range = 0.02–1.25, p range = .212–.981).
Procedures
During an in-person home visit, participants were administered pretest (baseline) cognitive assessments by a trained assessor. Participants then received three additional home visits in which they were trained in the use of the PRISM system or the binder. Randomization occurred after baseline assessments, and "check-in" calls were conducted 1 week, 3 months, and 9 months after training. At 12 months, the cognitive assessment battery was repeated (see Czaja et al., 2015 and Czaja et al., 2018 for more details). Although primary outcome measures of social isolation and loneliness were administered in a manner that maintained assessor blinding (phone/mail), it is likely that assessors were not blind during the post-test at-home cognitive assessment sessions due to the presence of the PRISM system within the homes of some participants. All participants were compensated $25 per assessment (baseline, 6 months, 12 months). Because participants in the PRISM condition were also allowed to keep the computer, participants in the control condition were paid an additional $75 at the final 12-month assessment.
Intervention
Maintaining social connectivity and social relationships is important for health, well-being, and quality of life and can help support successful aging (Pruchno et al., 2010; Rowe & Kahn, 1998). Unfortunately, older adults often experience isolation due to loss of a partner, mobility and health problems, and changes in employment/economic status. The PRISM intervention was developed primarily to provide technology-based social support. The PRISM system is an easy-to-use computer system designed for older adults with no computer experience using an iterative, user-centered design process (see Boot et al., 2020 for details about the design process). As the system’s primary target was increasing social connectivity and reducing loneliness, components included e-mail and a buddy list (list of other participants with similar interests to contact). To facilitate information access and new learning, it also included resource guides, a classroom feature, and access to the internet. The classroom feature was dynamic, and content was updated every month to help maintain engagement. A calendar was included to support prospective memory, and games supported leisure. Participants in the control condition received a nonelectronic binder with paper resources, a calendar, physical games, and other nondigital analogs of the PRISM system. Similar to PRISM, new classroom features were mailed every month.
In terms of PRISM system use, the median number of days participants used the computer system over the year was 201. E-mail was the most used feature (median = 132 days), followed by internet (median = 98 days) and games (median = 28 days). Overall, participants used PRISM about 4 days per week (median = 4.10, M = 3.77, SD = 2.03). This pattern of use persisted over the course of the trial, with participants still using the system about 3 days per week at the end (last week) of the year-long trial (median = 3.00, M = 3.56, SD = 2.66). Eighty percent of participants were still using the system at least once a week (i.e., on at least 4 days) during the final month of the study. Although usage declined over the intervention period, in general, use data are consistent with meaningful use of the system over an extended period. One caveat in interpreting these data, though, is that analogous data are not available for the binder control condition. Use data are more explicitly explored in papers by Czaja et al. (2018), Boot et al. (2018), and Sharit et al. (2019). Participants in the PRISM condition showed psychosocial benefits compared to the control group after using the system (see Czaja et al., 2018 for details).
Cognitive Battery
The full battery is reported elsewhere (Czaja et al., 2015). Measures of processing speed (Digit Symbol Substitution), reasoning (Letter Set), vocabulary/crystallized intelligence (Shipley Institute of Living Scale), working memory (Stroop Color Name Test), and executive function (Trails A and B) assessed objective cognitive performance. A measure of everyday memory (Perception of Memory Functioning) was included to assess subjective cognition. Three subscales of the Perception of Memory Functioning questionnaire (frequency of forgetting, seriousness of forgetting, retrospective functioning) were included in the current analyses. A fourth subscale (use of mnemonics subscale) was excluded because it measures memory strategy use rather than self-perceptions of memory problems and failures. Data from the Perception of Memory Function measure have been analyzed and reported in a previous paper on a different topic (Yoon et al., 2017). More details about measure administration, scoring, and reliability are presented in Table 1.
Table 1.
List of Cognitive Measures
| Domain | Measure | Descriptions |
|---|---|---|
| Objective cognition: processing speed | Digit Symbol Substitution | Participants were instructed to record the symbols that correspond to a series of digits. The task is scored by the number of correct symbols recorded within 90 s. The scores can range from 0 to 100. A higher score reflects faster processing speed. |
| Objective cognition: reasoning | Letter Set | Participants were presented with items consisting of five four-letter sets and were instructed to identify the set that is dissimilar to the other four. The score can range from 0 to 30. A higher score reflects higher reasoning ability. |
| Objective cognition: vocabulary/crystallized intelligence | Shipley Institute of Living Scale | Participants circled the word that has the same or nearly the same meaning as a referent word for 40 words. The score can range from 0 to 40. A higher score indicates greater vocabulary/crystallized intelligence. |
| Objective cognition: working memory | Stroop Color Name Test | Participants were instructed to name the color of a series of two to six congruent and incongruent color words and recall the color order. The score can range from 0 to 1. A higher score indicates better working memory. |
| Objective cognition: executive function | Trails A and B | In Part A, participants were asked to connect consecutively numbered circles. In Part B, participants were asked to connect alternating numbered and lettered circles. Executive function is reflected by the difference in completion time between Parts A and B. The scores for Parts A and B were logarithmically transformed to reduce skewness. A lower score indicates better executive function. |
| Subjective cognition: everyday memory | Perception of Memory Functioning | 64 items that assess perception of memory functioning including frequency of forgetting (Cronbach’s α = 0.941), seriousness of forgetting (Cronbach’s α = 0.949), retrospective functioning (Cronbach’s α = 0.880) and use of mnemonics (Cronbach’s α = 0.827). The use of mnemonics subscale was excluded from current analysis because it measures mnemonic technique use irrelevant to our focus. The score for each subscale can range from 1 to 7. A higher score indicates fewer problems. |
Statistical Analysis
Separate latent change score models (Kievit et al., 2018) representing a change from baseline to 12 months were fit to objective and subjective cognitive measures separately. All models were fitted using AMOS. Latent cognitive factors were created to represent the shared variance among measures. Pretest and 12-month post-test latent factors were generated separately using measures administered at their corresponding time point. Latent change factors were estimated to represent changes in cognition between the two time points.
For each model, the factor loading and intercept of the first indicator variable of the pretest and post-test factors were constrained to unity and zero, respectively, for identification purposes. The variance of the post-test factor was constrained to zero, and the post-test factor was regressed perfectly on the pretest and change factors. The pretest and change factors were allowed to covary with each other. The unique variance of each measure at pretest also covaried with the corresponding unique variance at post-test. Figure 1 depicts the model used for the objective and subjective cognitive measures separately. Model fit was assessed with the Tucker–Lewis Index (TLI), Comparative Fit Index (CFI), and Root Mean Squared Error of Approximation (RMSEA). Model fit is considered adequate when TLI and CFI are above 0.95 and RMSEA is below 0.06 (Hu & Bentler, 1999).
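The RMSEA point estimate can be recovered directly from a model chi-square, its degrees of freedom, and the sample size. The sketch below uses the common single-group formula and assumes the trial's total N of 300; software such as AMOS may use a slightly different multigroup computation, so treat this as illustrative only.

```python
import math

def rmsea(chi_sq: float, df: int, n: int) -> float:
    """RMSEA point estimate from a model chi-square.

    Common single-group formula: sqrt(max(chi2 - df, 0) / (df * (n - 1))).
    Assumes n is the total sample size (an assumption here).
    """
    return math.sqrt(max(chi_sq - df, 0.0) / (df * (n - 1)))

# Illustration with the objective-cognition unconstrained model values:
print(round(rmsea(98.27, 82, 300), 3))  # close to the 0.026 reported in Table 2
```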
Figure 1.
Graphic illustration of a latent change score model. Notes: The depicted model highlights the overall approach used to assess cognitive change. The pretest latent construct (Pre) is measured by indicators X, Y, Z at Time 1 (X1, Y1, Z1), while the post-test latent construct (Post) is measured by the same indicators X, Y, Z at Time 2 (X2, Y2, Z2). The latent change factor captures the mean change and the remaining variance in the post-test latent construct after accounting for the pretest latent factor. Three indicators (frequency of forgetting, seriousness of forgetting, retrospective functioning) were used for the model assessing subjective cognitive change while five indicators (Digit Symbol Substitution, Letter Set, Shipley Institute of Living Scale, Stroop Color Name Test, and Trails A and B) were used for the model assessing objective cognitive change. Factor loadings and measurement intercepts were constrained to equality over time for indicator Y (λ1, μ1) and indicator Z (λ2, μ2).
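In equation form, the model in Figure 1 can be sketched as follows. This is a standard latent change score specification written in the figure's notation, not the authors' exact AMOS syntax:

```latex
\begin{aligned}
% Measurement model at each time point t \in \{1, 2\};
% the first indicator (X) is fixed to loading 1 and intercept 0:
x_{t} &= \eta_{t} + \varepsilon_{x_t}, \qquad
y_{t} = \mu_{1} + \lambda_{1}\,\eta_{t} + \varepsilon_{y_t}, \qquad
z_{t} = \mu_{2} + \lambda_{2}\,\eta_{t} + \varepsilon_{z_t}, \\
% Structural model: post-test is fully determined by pretest plus change
\eta_{2} &= \eta_{1} + \Delta\eta, \qquad
\operatorname{Var}\!\left(\eta_{2} \mid \eta_{1}, \Delta\eta\right) = 0,
\end{aligned}
```

where η1 and η2 are the pretest and post-test factors and Δη is the latent change factor whose mean and variance are the quantities of interest.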
Nested model comparisons were used to test whether computer use benefited overall cognition compared to the control group. The PRISM and control groups were modeled simultaneously in a multigroup framework. The mean of the change factor was first freely estimated in each group (unconstrained model) and subsequently constrained to be equal across groups (constrained model). Fits of the unconstrained and constrained models were compared with chi-square difference testing. Differential cognitive change would be indicated by a significantly worse fit for the constrained model than for the freely estimated model, that is, a significant chi-square difference test on one degree of freedom.
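A chi-square difference test on one degree of freedom needs no dedicated statistics package: for df = 1, the survival function of the chi-square distribution reduces to erfc(√(x/2)). A minimal sketch, using the rounded model chi-squares from Table 2 purely for illustration:

```python
import math

def chi2_sf_df1(delta_chi2: float) -> float:
    """p value for a chi-square difference test with df = 1.

    For df = 1, P(X >= x) = erfc(sqrt(x / 2)).
    """
    return math.erfc(math.sqrt(delta_chi2 / 2.0))

# Constrained minus unconstrained chi-square for the objective models;
# the small discrepancy from the tabled p = .774 reflects rounding of
# the chi-square values reported in Table 2.
p = chi2_sf_df1(98.35 - 98.27)
print(round(p, 3))  # clearly nonsignificant: the constrained model fits no worse
```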
Longitudinal measurement invariance across the PRISM and control groups was evaluated before fitting the latent change models. Weak (metric), strong (scalar), and strict (residual) invariance were tested by imposing constraints on factor loadings, measurement intercepts, and residual variances. All models displayed adequate fit at the strict (residual) invariance level (Supplementary Table 1). However, the strict (residual) invariance model fit significantly worse than the strong (scalar) invariance model for objective cognition (Supplementary Table 1). Therefore, the latent change score models were built at the strong (scalar) invariance level.
Differential changes at the measure level were estimated with Bayesian repeated-measures analyses of variance (ANOVAs), with group (PRISM/control) as a between-subject factor and time (pretest/post-test) as a within-subject factor. Compared with the frequentist approach, Bayesian statistics provide more direct and intuitive inferences about the relative evidence for competing hypotheses. A Bayes factor (BF) quantifies how much more likely the observed data are under the null hypothesis than under the alternative (or vice versa), whereas a p value only indicates the probability of observing data at least as extreme as those obtained, assuming the null hypothesis is true. The Bayesian approach thus enables the current study to quantify evidence in favor of either the null or the alternative hypothesis and thereby adds value to the existing pool of experimental studies with inconsistent results. A multivariate Cauchy distribution with r = 0.5 was used as the prior for all fixed effects (Rouder et al., 2012). Support for a hypothesis is considered anecdotal (not worth more than a bare mention) when the BF is between 1 and 3, moderate when the BF is between 3 and 10, strong when the BF is between 10 and 30, and very strong when the BF is above 30 (Lee & Wagenmakers, 2014).
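Under equal prior odds, a Bayes factor maps directly onto the posterior probability of each hypothesis, which is one convenient way to read the BF01 values reported in the Results. A minimal sketch, with verbal labels following the Lee and Wagenmakers (2014) thresholds cited above:

```python
def posterior_prob_null(bf01: float, prior_odds: float = 1.0) -> float:
    """Posterior probability of H0 given BF01 and prior odds (H0:H1)."""
    posterior_odds = bf01 * prior_odds
    return posterior_odds / (1.0 + posterior_odds)

def evidence_label(bf: float) -> str:
    """Verbal label for a Bayes factor > 1 (Lee & Wagenmakers, 2014)."""
    if bf < 3:
        return "anecdotal"
    if bf < 10:
        return "moderate"
    if bf < 30:
        return "strong"
    return "very strong"

# With equal prior odds, BF01 = 11.9 implies P(H0 | data) of about .92:
print(round(posterior_prob_null(11.9), 2), evidence_label(11.9))  # prints: 0.92 strong
```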
Results
All latent change score models demonstrated adequate fit (Table 2). Results from the latent change score models showed that constraining the mean latent change to equality across the computer and control groups did not lead to a significantly worse-fitting model for either the objective (Δχ²(1) = 0.08, p = .774) or the subjective (Δχ²(1) = 0.19, p = .662) cognitive domain (Table 2). These results suggest that computer use over 12 months by older adults with little to no previous computer experience did not lead to cognitive improvement at the ability level compared to the control group.
Table 2.
Model Fit and Model Comparison Statistics for the Latent Change Models
| Domain | Model | χ² | df | p | TLI | CFI | RMSEA (90% CI) | p RMSEA | Δχ²(1) | p |
|---|---|---|---|---|---|---|---|---|---|---|
| Objective cognition | Unconstrained | 98.27 | 82 | .106 | 0.984 | 0.988 | 0.026 (0.000, 0.043) | .992 | — | — |
| Objective cognition | Constrained | 98.35 | 83 | .120 | 0.985 | 0.988 | 0.025 (0.000, 0.042) | .994 | 0.08 | .774 |
| Subjective cognition | Unconstrained | 28.45 | 22 | .161 | 0.961 | 0.979 | 0.031 (0.000, 0.061) | .829 | — | — |
| Subjective cognition | Constrained | 28.64 | 23 | .193 | 0.967 | 0.982 | 0.029 (0.000, 0.058) | .864 | 0.19 | .662 |
Note: TLI = Tucker–Lewis Index; CFI = Comparative Fit Index; RMSEA = Root Mean Squared Error of Approximation.
Results from the Bayesian repeated-measures ANOVAs showed strong evidence against any differential change between the computer and control groups at the single measure level, with the observed data being 11.9–72.1 times more likely under the null hypothesis (i.e., no differential change) than under the alternative (i.e., differential change; Table 3).
Table 3.
Mean Difference in Pre- and Post-Test Scores for Individual Cognitive Measures
| Measure | Computer group pretest, M (SD) [95% CrI] | Computer group post-test, M (SD) [95% CrI] | Control group pretest, M (SD) [95% CrI] | Control group post-test, M (SD) [95% CrI] | BF01 |
|---|---|---|---|---|---|
| DSST | 35.57 (12.80) [33.34, 37.80] | 35.54 (13.08) [33.26, 37.82] | 35.04 (10.52) [33.07, 37.01] | 35.62 (11.53) [33.46, 37.78] | 53.60 |
| Letter Sets | 9.03 (5.16) [8.02, 10.04] | 8.39 (5.67) [7.28, 9.50] | 9.51 (5.30) [8.39, 10.62] | 9.34 (5.71) [8.14, 10.54] | 20.63 |
| Shipley | 29.25 (6.77) [28.08, 30.43] | 29.34 (7.19) [28.09, 30.59] | 30.75 (5.68) [29.69, 31.81] | 30.20 (6.16) [29.05, 31.34] | 11.90 |
| Stroop Color Name | 0.35 (0.11) [0.33, 0.37] | 0.34 (0.12) [0.32, 0.36] | 0.35 (0.12) [0.32, 0.37] | 0.33 (0.12) [0.30, 0.35] | 31.39 |
| Trails B − Trails A | 4.46 (0.79) [4.32, 4.60] | 4.48 (0.70) [4.36, 4.61] | 4.50 (0.70) [4.36, 4.63] | 4.46 (0.64) [4.34, 4.58] | 72.09 |
| Frequency of forgetting | 5.02 (0.87) [4.87, 5.17] | 5.01 (0.99) [4.84, 5.19] | 4.96 (0.77) [4.82, 5.11] | 4.91 (0.77) [4.77, 5.05] | 66.37 |
| Seriousness of forgetting | 4.70 (1.26) [4.48, 4.93] | 4.61 (1.34) [4.37, 4.85] | 4.57 (1.23) [4.34, 4.79] | 4.46 (1.12) [4.25, 4.66] | 59.28 |
| Retrospective functioning | 3.59 (1.12) [3.39, 3.79] | 3.45 (1.18) [3.24, 3.65] | 3.48 (1.21) [3.25, 3.70] | 3.28 (0.98) [3.10, 3.46] | 19.34 |
Note: DSST = Digit Symbol Substitution Task.
Given the null findings, we also conducted exploratory analyses to examine potential differential changes between high-intensity PRISM users and the control group. The PRISM group was median-split into high-intensity and low-intensity use groups based on days of system use. Bayesian repeated-measures ANOVAs with group (high-intensity use/control) as a between-subject factor and time (pretest/post-test) as a within-subject factor were conducted for each measure. Results demonstrated only anecdotal evidence that high-intensity users improved differentially on the Shipley Institute of Living Scale (BF10 = 1.04) and moderate to strong support for the null hypothesis for all other cognitive measures (BF01 = 5.05–33.64).
Discussion
We examined potential cognitive benefits of computer training and usage in a large sample of older adults with little to no prior computer experience. Data from the PRISM trial were assessed with latent change score models to explore the benefits of computer use on general cognition and with Bayesian repeated-measures ANOVAs to examine benefits on each cognitive measure. We did not observe evidence of cognitive benefits of computer use over 12 months compared to the active control group, which interacted with parallel content delivered in a nondigital form. We observed strong evidence supporting the null hypothesis at the individual measure level. Exploratory analyses comparing differential changes between high-intensity users and the control group likewise indicated moderate to strong evidence supporting the null hypothesis.
The current findings are consistent with a previous study with a similar design (Slegers et al., 2009). Specifically, Slegers et al. (2009) provided older adults with 12 h of computer training followed by 12 months of self-initiated use. They did not observe benefits on verbal memory, processing speed, or cognitive flexibility compared to a passive control group, and exploratory analysis comparing cognitive changes in heavy and light users also failed to find reliable differences. The current study expanded on those results by providing stronger evidence with a larger sample and two different analytical strategies. Together with Slegers et al. (2009), our results cast doubt on casual computer use for 12 months as sufficient stimulation to provide cognitive benefits in older adults.
It is interesting to note that our experimental findings are inconsistent with findings from a large number of longitudinal studies, especially given that computer use in our study and in those longitudinal studies was similarly casual and comparable in cognitive demand. However, all previous longitudinal studies examined relationships between computer use and cognition over at least 2 years, with some spanning as many as 9 years (Hartanto et al., 2020), whereas our experimental study, although one of the longest in duration so far, lasted only a year. Given the limited understanding of the strength and frequency of cognitive stimulation required to achieve cognitive benefits, one possible explanation is that it might take more than a year of casual computer use for older adults to show cognitive benefits. Similarly, experimental studies lasting a year but involving more cognitively challenging aspects of computer use, such as active learning of demanding computer features and software through a well-structured curriculum (Chan et al., 2016), might also observe cognitive benefits.
Previous research suggests that the beneficial effects of different cognitive enrichment activities tend to vary in their scope, with some operating at the skill level (e.g., practicing a gamified cognitive task in a cognitive intervention) and others operating at the ability level (e.g., physical activities; Hertzog et al., 2008). It is possible that cognitive enrichment effects of computer and internet use operate predominantly at the skill level, and learned skills often do not strongly affect general cognitive abilities (Hertzog et al., 2008; Simons et al., 2016). Therefore, cognitive benefits in longitudinal studies might result from other mechanisms. For instance, computer and internet use might provide opportunities for older adults to move out of their comfort zone and participate in a greater number of other activities, which, in turn, stimulates cognitive function and provides cognitive benefits. This was supported by a recent longitudinal study showing that current internet use was associated with increases in the variety of activities engaged in 2 years later, which subsequently predicted cognitive functioning another 2 years later (Kamin et al., 2020). Similarly, computer and internet use may provide cognitive benefits by intervening on known risk factors of cognitive decline or on self-perceptions of cognitive efficacy.
Finally, the current study should also be considered alongside some limitations. First, the sample is predominantly female, which affects generalizability. Two previous longitudinal studies found computer and internet use to be differentially protective against cognitive decline in male and female participants (Almeida et al., 2012; Ihle et al., 2020). Second, the PRISM system was a closed system with a limited number of features. The simplicity of the system, coupled with its carefully designed training package, is a benefit for older adults with limited computer experience but a potential limitation when examining cognitive benefits derived from the stimulation and challenge of computer use. Improved cognition might be observed with a more complex, dynamic system, or a system with more customizability and opportunities for continued learning of new computer applications. Also, as participants were free to use or not use different features, it is possible that certain patterns of use might be beneficial to cognition, but the current study is underpowered to detect those effects. Third, there was no record of how much the control group interacted with their binder materials. Given the higher attrition in the control group, we suspect that they might not have engaged with their materials as frequently. However, even with potentially higher engagement with the PRISM system, differential cognitive benefits were not observed. Finally, as assessors may not have been blind to condition during post-test, this may have influenced the reported results to some degree (though this would be of greater concern in the presence of differential cognitive change).
Limitations notwithstanding, the current study is the first to provide strong evidence against casual computer use for a year as a viable means of protecting against cognitive decline. The current findings question the practicality of casual computer use as a form of cognitive stimulation capable of providing meaningful cognitive benefits beyond increased computer proficiency. Nevertheless, future research that treats computer use as a form of cognitive stimulation can still make practical and theoretical contributions by testing whether a greater dosage or different features lead to cognitive benefits. Future research should also explore alternative mechanisms underlying computer-use-related cognitive benefits.
Supplementary Material
Acknowledgments
Sara J. Czaja, Walter R. Boot, Neil Charness, Wendy A. Rogers, and Joseph Sharit conceptualized and designed the PRISM system and study.
Contributor Information
Shenghao Zhang, Department of Psychology, Florida State University, Tallahassee, Florida, USA.
Walter R. Boot, Department of Psychology, Florida State University, Tallahassee, Florida, USA.
Neil Charness, Department of Psychology, Florida State University, Tallahassee, Florida, USA.
Funding
This work was supported by a grant from the National Institute on Aging, under the auspices of the Center for Research and Education on Aging and Technology Enhancement (CREATE), 4 P01 AG 17211.
Conflict of Interest
None declared.
References
- Almeida, O. P., Yeap, B. B., Alfonso, H., Hankey, G. J., Flicker, L., & Norman, P. E. (2012). Older men who use computers have lower risk of dementia. PLoS One, 7(8), e44239. doi: 10.1371/journal.pone.0044239
- Berner, J., Comijs, H., Elmståhl, S., Welmer, A., Sanmartin Berglund, J., Anderberg, P., & Deeg, D. (2019). Maintaining cognitive function with internet use: A two-country, six-year longitudinal study. International Psychogeriatrics, 31(7), 929–936. doi: 10.1017/S1041610219000668
- Boot, W. R., Charness, N., Czaja, S. J., & Rogers, W. A. (2020). Designing for older adults: Case studies, methods, and tools. CRC Press.
- Boot, W. R., Moxley, J. H., Roque, N. A., Andringa, R., Charness, N., Czaja, S. J., Sharit, J., Mitzner, T., Lee, C. C., & Rogers, W. A. (2018). Exploring older adults’ video game use in the PRISM computer system. Innovation in Aging, 2(1), 1–13. doi: 10.1093/geroni/igy009
- Chan, M. Y., Haber, S., Drew, L. M., & Park, D. C. (2016). Training older adults to use tablet computers: Does it enhance cognitive function? The Gerontologist, 56(3), 475–484. doi: 10.1093/geront/gnu057
- Charness, N. (2020). A framework for choosing technology interventions to promote successful longevity: Prevent, rehabilitate, augment, substitute (PRAS). Gerontology, 66(2), 169–175. doi: 10.1159/000502141
- Choi, E. Y., Wisniewski, K. M., & Zelinski, E. M. (2021). Information and communication technology use in older adults: A unidirectional or bi-directional association with cognitive function? Computers in Human Behavior, 121, 106813. doi: 10.1016/j.chb.2021.106813
- Cid, A., Sotelo, R., Leguisamo, M., & Ramírez-Michelena, M. (2020). Tablets for deeply disadvantaged older adults: Challenges in long-term care facilities. International Journal of Human-Computer Studies, 144, 102504. doi: 10.1016/j.ijhcs.2020.102504
- Czaja, S. J., Boot, W. R., Charness, N., Rogers, W. A., Sharit, J., Fisk, A. D., Lee, C. C., & Nair, S. N. (2015). The personalized reminder information and social management system (PRISM) trial: Rationale, methods and baseline characteristics. Contemporary Clinical Trials, 40, 35–46. doi: 10.1016/j.cct.2014.11.004
- Czaja, S. J., Boot, W. R., Charness, N., Rogers, W. A., & Sharit, J. (2018). Improving social support for older adults through technology: Findings from the PRISM randomized controlled trial. The Gerontologist, 58(3), 467–477. doi: 10.1093/geront/gnw249
- Czaja, S. J., Charness, N., Fisk, A. D., Hertzog, C., Nair, S. N., Rogers, W. A., & Sharit, J. (2006). Factors predicting the use of technology: Findings from the Center for Research and Education on Aging and Technology Enhancement (CREATE). Psychology and Aging, 21(2), 333–352. doi: 10.1037/0882-7974.21.2.333
- Djabelkhir, L., Wu, Y. H., Vidal, J. S., Cristancho-Lacroix, V., Marlats, F., Lenoir, H., Carno, A., & Rigaud, A. S. (2017). Computerized cognitive stimulation and engagement programs in older adults with mild cognitive impairment: Comparing feasibility, acceptability, and cognitive and psychosocial effects. Clinical Interventions in Aging, 12, 1967–1975. doi: 10.2147/CIA.S145769
- Hartanto, A., Yong, J. C., Toh, W. X., Lee, S. T., Tng, G. Y., & Tov, W. (2020). Cognitive, social, emotional, and subjective health benefits of computer use in adults: A 9-year longitudinal study from the Midlife in the United States (MIDUS). Computers in Human Behavior, 104, 106179. doi: 10.1016/j.chb.2019.106179
- Hertzog, C., Kramer, A. F., Wilson, R. S., & Lindenberger, U. (2008). Enrichment effects on adult cognitive development: Can the functional capacity of older adults be preserved and enhanced? Psychological Science in the Public Interest, 9(1), 1–65. doi: 10.1111/j.1539-6053.2009.01034.x
- Hu, L. T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6(1), 1–55. doi: 10.1080/10705519909540118
- Ihle, A., Bavelier, D., Maurer, J., Oris, M., & Kliegel, M. (2020). Internet use in old age predicts smaller cognitive decline only in men. Scientific Reports, 10, 8969. doi: 10.1038/s41598-020-65846-9
- Kamin, S. T., & Lang, F. R. (2020). Internet use and cognitive functioning in late adulthood: Longitudinal findings from the Survey of Health, Ageing and Retirement in Europe (SHARE). The Journals of Gerontology, Series B: Psychological Sciences and Social Sciences, 75(3), 534–539. doi: 10.1093/geronb/gby123
- Kamin, S. T., Seifert, A., & Lang, F. R. (2020). Participation in activities mediates the effect of internet use on cognitive functioning in old age. International Psychogeriatrics, 33, 83–88. doi: 10.1017/S1041610220003634
- Kievit, R. A., Brandmaier, A. M., Ziegler, G., van Harmelen, A. L., de Mooij, S. M., Moutoussis, M., & NSPN Consortium. (2018). Developmental cognitive neuroscience using latent change score models: A tutorial and applications. Developmental Cognitive Neuroscience, 33, 99–117. doi: 10.1016/j.dcn.2017.11.007
- Klusmann, V., Evers, A., Schwarzer, R., Schlattmann, P., Reischies, F. M., Heuser, I., & Dimeo, F. C. (2010). Complex mental and physical activity in older women and cognitive performance: A 6-month randomized controlled trial. The Journals of Gerontology, Series A: Biomedical Sciences and Medical Sciences, 65(6), 680–688. doi: 10.1093/gerona/glq053
- Lee, M. D., & Wagenmakers, E. J. (2014). Bayesian cognitive modeling: A practical course. Cambridge University Press.
- Ordonez, T. N., Yassuda, M. S., & Cachioni, M. (2011). Elderly online: Effects of a digital inclusion program in cognitive performance. Archives of Gerontology and Geriatrics, 53(2), 216–219. doi: 10.1016/j.archger.2010.11.007
- Park, D. C., & Reuter-Lorenz, P. (2009). The adaptive brain: Aging and neurocognitive scaffolding. Annual Review of Psychology, 60, 173–196. doi: 10.1146/annurev.psych.59.103006.093656
- Pruchno, R. A., Wilson-Genderson, M., Rose, M., & Cartwright, F. (2010). Successful aging: Early influences and contemporary characteristics. The Gerontologist, 50, 821–833. doi: 10.1093/geront/gnq041
- Reuter-Lorenz, P. A., & Park, D. C. (2014). How does it STAC up? Revisiting the scaffolding theory of aging and cognition. Neuropsychology Review, 24(3), 355–370. doi: 10.1007/s11065-014-9270-9
- Rouder, J. N., Morey, R. D., Speckman, P. L., & Province, J. M. (2012). Default Bayes factors for ANOVA designs. Journal of Mathematical Psychology, 56(5), 356–374. doi: 10.1016/j.jmp.2012.08.001
- Rowe, J. W., & Kahn, R. L. (1998). Successful aging. Pantheon Random House.
- Sharit, J., Moxley, J. H., Boot, W. R., Charness, N., Rogers, W. A., & Czaja, S. J. (2019). Effects of extended use of an age-friendly computer system on assessments of computer proficiency, attitudes, and usability by older non-computer users. ACM Transactions on Accessible Computing (TACCESS), 12(2), 1–28. doi: 10.1145/3325290
- Simons, D. J., Boot, W. R., Charness, N., Gathercole, S. E., Chabris, C. F., Hambrick, D. Z., & Stine-Morrow, E. A. (2016). Do “brain-training” programs work? Psychological Science in the Public Interest, 17(3), 103–186. doi: 10.1177/1529100616661983
- Slegers, K., van Boxtel, M., & Jolles, J. (2009). Effects of computer training and internet usage on cognitive abilities in older adults: A randomized controlled study. Aging Clinical and Experimental Research, 21(1), 43–54. doi: 10.1007/BF03324898
- Vaportzis, E., Martin, M., & Gow, A. J. (2017). A tablet for healthy ageing: The effect of a tablet computer training intervention on cognitive abilities in older adults. The American Journal of Geriatric Psychiatry, 25(8), 841–851. doi: 10.1016/j.jagp.2016.11.015
- Xavier, A. J., d’Orsi, E., de Oliveira, C. M., Orrell, M., Demakakos, P., Biddulph, J. P., & Marmot, M. G. (2014). English Longitudinal Study of Aging: Can internet/e-mail use reduce cognitive decline? The Journals of Gerontology, Series A: Biomedical Sciences and Medical Sciences, 69(9), 1117–1121. doi: 10.1093/gerona/glu105
- Yoon, J.-S., Charness, N., Boot, W. R., Czaja, S. J., & Rogers, W. A. (2017). Depressive symptoms as a predictor of memory complaints in the PRISM sample. The Journals of Gerontology, Series B: Psychological Sciences and Social Sciences, 74(2), 254–263. doi: 10.1093/geronb/gbx070
- Zhang, S., Grenhart, W. C., McLaughlin, A. C., & Allaire, J. C. (2017). Predicting computer proficiency in older adults. Computers in Human Behavior, 67, 106–112. doi: 10.1016/j.chb.2016.11.006