Abstract
Up to 40% of dementias may be preventable via risk factor modification. This inference has motivated the development of lifestyle interventions for reducing cognitive decline. Such interventions are typically delivered to older adults face-to-face, but the COVID-19 pandemic has necessitated their adaptation for remote delivery. We systematically reviewed randomized controlled trials of remotely delivered lifestyle interventions (≥4 weeks duration and delivered >50% remotely) for adults aged ≥ 60 without dementia, examining effects on objective cognitive measures. Comparators were active (face-to-face or remote) or passive. Ten studies (n = 2967) comprising multidomain (k = 4), physical activity (k = 3) or psychosocial (k = 3) remote interventions were included. Data were synthesized using robust variance estimation meta-analysis. The pooled estimate comparing the effect of remote interventions versus comparators on cognition was not significant (g = −0.02; 95% CI [−0.14, 0.09]; p = .66); subgroup analyses by type of intervention or comparator also yielded non-significant effects. Most studies had low risk of bias. Current evidence to support remote lifestyle interventions is limited. Included studies were conducted pre-pandemic, and evaluated individual, rather than group, interventions. Future studies may exploit the greater digital connectivity of older people since the pandemic. Group formats, more frequently efficacious than individual interventions in face-to-face dementia prevention trials, may be a rational approach for future remote trials.
Keywords: Mild cognitive impairment, Dementia, Prevention, Randomized controlled trial, Remote delivery, Cognitive function
1. Introduction
Worldwide, approximately 50 million people live with dementia, and prevalence is expected to increase threefold by 2050 (Nichols et al., 2019). While current medications can improve neuropsychiatric symptoms, as well as functional and cognitive outcomes in dementia, there is currently no cure (Yiannopoulou and Papageorgiou, 2020). There has thus been increasing interest and investment in the prevention of dementia through the identification and modification of risk factors. Livingston et al. (2017) proposed a life-course model of potentially modifiable dementia risk factors, focusing on those with the best evidence. The model was recently updated, and now includes 12 modifiable risk factors (Livingston et al., 2020); it is estimated that, collectively, these account for around 40% of dementias worldwide. The availability of high-quality epidemiological data and modeling has informed the development and evaluation of lifestyle interventions designed to modulate risk factors. Whilst the prevention of dementia is frequently the primary objective, the sample sizes and extended follow-up periods required to statistically power clinical endpoints make such trials expensive and impractical. The majority of trials thus feature surrogate endpoints, including neuropsychiatric, functional and/or cognitive measures.
The body of literature describing face-to-face non-pharmacological (including lifestyle-based) trials for reducing cognitive decline is substantial, and is the focus of a number of recent reviews. Some syntheses focused on specific groups of older adults, for example those with subjective cognitive decline (SCD; Bhome et al., 2018; Smart et al., 2017), while others evaluated evidence relating to multiple populations (Kane et al., 2017, Whitty et al., 2020). Given the different rationales, included studies and synthesis methods across these reviews, it is not surprising that they presented varying conclusions, although the best currently-available evidence may be for physical activity interventions (Kane et al., 2017, Whitty et al., 2020). Whilst these reviews identified the interventions most likely to confer benefit, the majority of the included interventions were delivered in-person. The face-to-face delivery of interventions, especially those that are group-based (a typical format for lifestyle interventions, which are the focus of this review), has been curtailed by the COVID-19 pandemic. We therefore conducted a systematic review of randomized controlled trials (RCTs) of remotely delivered lifestyle-based interventions for older adults without dementia to assess their impact on cognition.
2. Methods
In line with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) recommendations (Moher et al., 2009), this review was registered with PROSPERO in April 2020 [CRD42020182170]. Our research question was: ‘How successfully have remote psychosocial or lifestyle interventions positively impacted cognitive function or dementia risk in people without dementia aged ≥ 60 years, relative to comparators?’.
2.1. Study inclusion and exclusion criteria
We included randomized studies where all participants had a minimum age of 60, given that this age group is at increased risk of dementia. Both healthy and clinical samples were eligible; the latter could comprise individuals with physical or mental health diagnoses, or cognitive impairment (without dementia). As we wanted to identify interventions with the potential to prevent dementia, we excluded studies that did not screen for dementia at baseline or that did not exclude participants with dementia.
Our eligibility criteria required interventions to be lifestyle-based, that is, involving the application of environmental, behavioral and/or motivational principles, including self-care and self-management (Egger et al., 2017). Moreover, interventions had to be primarily delivered remotely (> 50% of the sessions involving facilitator-participant interaction had to be remote). Our primary rationale for setting this criterion at 50% was to maximize the number of eligible studies given the nascence of the field. This low threshold would also enable the comparison of remote-only versus ‘blended’ (i.e. incorporating a nontrivial face-to-face component) intervention approaches via moderator analyses. The proportion of each intervention that was remote versus in-person was assessed at the full-text stage; where this was not clear, we planned to contact the corresponding author for clarification, although in practice this was not required. To be eligible, remote interventions had to have a minimum duration of four weeks. We specified this on the basis that four weeks seemed a reasonable minimum period to permit a meaningful change in participants’ lifestyles, and in line with earlier work which judged lifestyle interventions of < 4 weeks’ duration to be inefficacious (Whitty et al., 2020). Remote interventions must have included some form of interaction or personalization (e.g. feedback from a facilitator or algorithm). This requirement was stipulated in order to maximize commensurability with previous reviews of face-to-face interventions.
We excluded studies of pharmacological interventions and brain stimulation therapies; these interventions were not considered to implicate environmental, behavioral, or motivational (i.e. lifestyle) mechanisms. We also excluded studies of computerized cognitive interventions, which target specific cognitive functions via repeated training (Huntley et al., 2015). Our rationale for excluding cognitive interventions was that they do not directly map on to a change in lifestyle or the mentally stimulating activities linked to reduced dementia risk (e.g. more education, occupational complexity and cognitively taxing leisure activities; see Fratiglioni et al., 2020). Trials of dietary supplements were also excluded, on the basis that these typically supply participants with supplements directly; these interventions thus do not require a substantial change in participants’ lifestyles. Moreover, nutritional patterns are more important in the etiology and amelioration of lifestyle-related diseases than supplements (Lentjes, 2019). Whilst exclusively dietary interventions (including intermittent fasting diets) were eligible for inclusion in this review, no eligible studies of this type were identified during screening.
We included randomized studies that compared a remote intervention to a comparator, including passive or active (whether face-to-face or remote) control groups. Eligible outcomes were objective cognitive measures and rates of progression to dementia. All types of standardized neuropsychological or laboratory-based cognitive tests were eligible. These could be administered in pen-and-paper or computerized format. To be included, outcomes had to measure cognitive performance objectively; self-reported measures were thus excluded. We used the framework of Lezak et al. (2012) to code outcomes into cognitive domains. The framework subsumes the following domains: attention, perception, episodic memory, construction, executive function, concept formation and reasoning, and language (all outcomes could be coded into one of these domains, although no outcomes from the last two domains were included). Cognitive tests used to screen for mild cognitive impairment (MCI) and dementia, for example the mini-mental state examination (MMSE; Folstein et al., 1975), were also included; these constituted the ‘cognitive screening’ domain. Notably, the majority of dementia prevention trials utilize cognitive function endpoints, as the measurement of incident dementia is often impracticable (Andrieu et al., 2015). Nevertheless, the link between changes in cognitive function and reduced or delayed progression to dementia remains unproven, and studies reporting salutary cognitive effects should thus be regarded as proof-of-concept trials requiring confirmation from studies using clinically-defined endpoints (Andrieu et al., 2015).
2.2. Search strategy
Systematic searches of the following databases were conducted: Embase (1980–2020), MEDLINE (1946–2020), and PsycINFO (1806–2020). These databases were searched together via the OVID interface, and searches were restricted to human studies published in English. Additional records were identified via forward and backward citation searches of eligible studies (e.g. screening the forward citations of trial protocols identified in the original searches). Our search strategy combined a number of search term strings with ‘AND’. Each string reflected an aspect of our eligibility criteria, with these seeking to capture (i) randomized studies (random* OR randomized control* OR randomised control* OR RCT OR cluster random*); (ii) studies of adults aged ≥ 60 years (old* OR adult OR elder* OR senior* OR geriatric*); (iii) remotely-delivered interventions (online* OR internet* OR digital* OR electronic* OR tele* OR mobile* OR computer* OR video* OR email* OR self-guide* OR computer-based* OR m-health OR mhealth OR distance* OR remote* OR e-health OR ehealth OR app*); (iv) lifestyle interventions (non-pharma* OR psycho* OR lifestyle* OR social*); and (v) studies where the rationale was the improvement of cognition or reduction of cognitive decline (cognition* OR cognitive* OR dementia*).
2.3. Procedures
The web platform Covidence (Veritas Health Innovation, Melbourne, 2020) was used for deduplication, and to coordinate multiuser title-abstract and full-text screening. Each record identified through electronic searches was independently screened (CC, NLM, HM, SZ) in duplicate at both the title-abstract and full-text stages. At both stages, discrepancies were resolved by a third author (EA, or NLM where she was not previously a reviewer).
All data were independently extracted by two authors (BM and PR) and discrepancies were resolved by discussion, with involvement of a third author (NLM) if necessary. Cognitive outcomes were coded into the relevant domain during data extraction. Outcome domain coding followed clinical-academic convention, and was informed by a number of relevant frameworks (Diamond, 2013, Lezak et al., 2012, Petersen and Posner, 2012).
2.4. Synthesis and analysis
The final number of studies, reporting of effects, and degree of bias (see ‘Results’) were amenable to quantitative synthesis. The measure of effect size was Hedges’ g, the standardized mean difference (SMD) corrected for small sample size (Borenstein, 2009, Morris, 2007). Please see the supplementary materials for the precise formula used for the calculation of g. Effect sizes were transformed where necessary so that they all operated in the same direction, with higher scores indicating better cognitive function. Two studies (Dodge et al., 2015, Lee et al., 2014) reported effects as regression coefficients; these were converted to SMDs using a published formula (Lipsey, 2001).
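For illustration only, the sketch below implements a standard small-sample-corrected SMD (Hedges’ g) from summary statistics in R; the exact formula used in this review is given in the supplementary materials, and all values and variable names below are hypothetical.

```r
# Illustrative sketch only: a standard Hedges' g from group summary statistics.
# The exact formula used in this review is given in the supplementary materials.
hedges_g <- function(m1, m2, sd1, sd2, n1, n2) {
  sd_pooled <- sqrt(((n1 - 1) * sd1^2 + (n2 - 1) * sd2^2) / (n1 + n2 - 2))
  d <- (m1 - m2) / sd_pooled               # standardized mean difference (Cohen's d)
  j <- 1 - 3 / (4 * (n1 + n2 - 2) - 1)     # small-sample correction factor
  g <- j * d
  v <- j^2 * ((n1 + n2) / (n1 * n2) + d^2 / (2 * (n1 + n2)))  # approximate variance of g
  c(g = g, v = v)
}

# Hypothetical example: intervention arm (n = 40) versus comparator (n = 38)
hedges_g(m1 = 27.1, m2 = 26.4, sd1 = 3.0, sd2 = 3.2, n1 = 40, n2 = 38)

# One common way to convert an unstandardized regression coefficient (B) for a binary
# treatment predictor is d = B / pooled SD of the outcome, followed by the same
# small-sample correction; whether this matches the conversion applied in this review
# is an assumption (see Lipsey, 2001).
```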
2.4.1. Accounting for dependencies
The majority of studies reported more than one cognitive outcome; these could include multiple measures of the same domain; multiple measures from different domains; and/or multiple score types derived from a single outcome measure. Conventional meta-analysis models all effect sizes independently (i.e. treating each as if it were derived from a unique study); the use of this method for clustered data is inappropriate, as it gives rise to estimates with spuriously narrow confidence intervals. We thus conducted a random-effects meta-analysis with robust variance estimation (RVE; Hedges et al., 2010). RVE accommodates effect sizes nested within studies (without underestimating confidence intervals), and also adjusts for the assumed correlation between related outcomes measured using the same participants. The RVE meta-analysis was conducted with the ‘robumeta’ 2.0 package in R 4.0.3. As per the ‘robumeta’ default, rho (the within-study correlation between outcomes) was set to 0.8, and sensitivity analyses varied rho from 0 to 1 to ensure consistency in results (Fisher and Tipton, 2015). The primary RVE meta-analysis combined all outcomes from all studies, and was interpreted as the effect of remote interventions on overall cognitive function. Heterogeneity for the model is reported using Tau2, which represents between-study variance, and I2, which estimates the proportion of observed dispersion in effect sizes due to ‘real’ variation rather than randomness. Planned subgroup analyses calculated pooled effect sizes for separate cognitive domains. The validity of p-values for RVE meta-analytic estimates is contingent on the associated degrees of freedom (d.f.); where d.f. < 4, p-values are unreliable, and are thus not reported (Fisher and Tipton, 2015). A full forest plot of all the effect sizes is included in the supplementary materials. A ‘compact’ forest plot, displaying the unweighted mean effect size for each study, is also included to display the data more intelligibly; this was based on univariate random-effects meta-analyses produced using the R package ‘metafor’ 2.4-0. Whilst averaging effect sizes within studies for univariate meta-analysis is not optimal for quantitative synthesis (Matt and Cook, 1994), we used this method for data visualization only. All other quantitative syntheses utilized full RVE models.
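A minimal sketch of the correlated-effects RVE model described above is shown below using the ‘robumeta’ package; the data frame and column names (study, yi, vi) are hypothetical stand-ins for the extracted effect sizes and their variances, not the analysis code used for this review.

```r
# Minimal sketch of a correlated-effects RVE meta-analysis with robumeta.
# Assumes a data frame 'dat' with one row per effect size: a study identifier
# ('study'), the effect size ('yi' = Hedges' g), and its variance ('vi').
library(robumeta)

rve_model <- robu(
  formula      = yi ~ 1,      # intercept-only model: pooled effect across all outcomes
  data         = dat,
  studynum     = study,       # clustering variable (effect sizes nested within studies)
  var.eff.size = vi,          # effect-size variances
  modelweights = "CORR",      # correlated-effects weights
  rho          = 0.8,         # assumed within-study correlation between outcomes
  small        = TRUE         # small-sample corrections (Satterthwaite-type d.f.)
)
print(rve_model)              # pooled g, 95% CI, d.f., tau-squared and I-squared

sensitivity(rve_model)        # re-fits the model varying rho from 0 to 1
```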
2.5. Risk of bias
For this evidence synthesis, we utilized the Cochrane risk of bias tool version 2 (Sterne et al., 2019). The revised tool is structured into five domains of bias: (1) the randomization process; (2) deviations from intended interventions; (3) missing outcome data; (4) measurement of the outcome; and (5) selection of the reported result. Each domain could be rated as being at ‘low’ risk of bias, to have ‘some concerns’, or to be at ‘high’ risk of bias. These risk of bias judgments were also made for each study overall. For the assessment of bias due to deviations from intended interventions, we specified the ‘effect of interest’ as the effect of assignment, rather than adherence, to intervention (Sterne et al., 2019). We thus prioritized effects derived from intention-to-treat (ITT) analyses for the quantitative syntheses; only studies utilizing ITT analyses could achieve a ‘low’ rating for this domain. Risk of bias judgments were made by two authors independently (TW and SZ), who discussed and resolved discrepancies jointly. Where agreement could not be reached, the senior author (NLM) made the final judgment.
2.6. Evaluating publication bias
The clustering of effect sizes within studies precluded the use of traditional methods for detecting publication bias (e.g. Egger’s test, funnel plot). We thus utilized methods appropriate for clustered data (Mathur and VanderWeele, 2020) operationalized in the R package ‘PublicationBias’. This approach establishes how robust a meta-analysis is to potential publication bias through the use of a sensitivity analysis. This departs from conventional assessments of publication bias, which attempt to identify the severity of publication bias from the sample of studies under review. Under the current approach, all the available effect sizes are meta-analyzed, constituting the unadjusted primary meta-analysis. A separate (sensitivity) meta-analysis combines only the non-significant (i.e. ps ≥ 0.05) effect sizes. The latter estimate is essentially corrected for ‘worst case scenario’ publication bias (whereby significant effect sizes are infinitely more likely to be published than non-significant ones). Comparing the two meta-analytic estimates reveals the degree to which non-significant effect sizes are systematically smaller than effects overall. In cases where there is a notable discrepancy, results are considered to be sensitive to the effects of potential publication bias (Mathur and VanderWeele, 2020).
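The ‘PublicationBias’ package implements this procedure directly. As a rough sketch of the underlying logic only (reusing the hypothetical data frame from the RVE example above, rather than the package itself), the ‘worst case’ estimate can be approximated by re-fitting the RVE model to the nonaffirmative effect sizes:

```r
# Rough sketch of the 'worst case' publication-bias bound (Mathur and VanderWeele, 2020):
# meta-analyze only the nonaffirmative effect sizes, i.e. those that are not both
# positive and significant at the two-tailed .05 level.
library(robumeta)

z <- dat$yi / sqrt(dat$vi)
dat$affirmative <- dat$yi > 0 & 2 * pnorm(-abs(z)) < 0.05

worst_case <- robu(yi ~ 1,
                   data         = subset(dat, !affirmative),
                   studynum     = study,
                   var.eff.size = vi,
                   modelweights = "CORR",
                   rho          = 0.8,
                   small        = TRUE)
print(worst_case)  # compare this estimate with the unadjusted pooled estimate
```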
3. Results
3.1. Study selection
The literature search across three databases yielded 4156 records. A further 10 records were identified via screening the forward citations of trial protocols captured by the original literature search. Following the removal of 60 duplicates, 4106 records were reviewed at the title-abstract stage. Of these, 129 were reviewed at the full-text stage, with 10 studies included in the final synthesis (see Fig. 1).
Fig. 1.
Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flowchart. Number of studies (k).
3.2. Study characteristics
The 10 eligible studies included 2967 participants (1464 in remote interventions and 1503 in comparators; see Table 1). Study sample sizes varied considerably from 16 to 2283 (median n = 78). Publication year ranged from 2012 to 2020. Four studies took place in North America, three in Asia, one in Europe, one in Australasia and one in Europe/Australasia. Eight studies randomized participants at the individual level, while two studies (Anderson-Hanley et al., 2012, Lee et al., 2014) utilized cluster randomization.
Table 1.
Study characteristics.
| Study | Setting and population | Total sample size*(% Female) | Ethnicity (%) | Education | Cognitive domains (measure names) assessed |
|---|---|---|---|---|---|
| Multidomain Interventions | |||||
| Lee et al. (2014) | Older adults recruited from geriatric mental health community centers# in Korea. Mean age of intervention group was 77 yrs, for comparator group 78 yrs. | 174 (78%) | NR | 21% of the intervention group and 36% of the comparator group were illiterate; the remainder had varying amounts of education. | Cognitive screening (MMSE) |
| Richard et al. (2019) | Community-dwelling older adults with/at increased risk of cardiovascular disease, recruited from the Netherlands, France and Finland. Mean age of intervention group was 69 yrs, for comparator group, also 69 yrs. | 2283 (48%) | White 98%; Other 2% | Intervention/Comparator education: Basic (30%/27%); Post-secondary non-tertiary (31%/30%); Tertiary (40%/43%). | Executive function (Stroop, Category fluency); Episodic memory (RAVLT); Cognitive screening (MMSE) |
| Roh et al. (2020) | Older adults with Major depressive disorder recruited from mental health centers in Korea. Mean age of intervention group was 74 yrs, for comparator group, also 74 yrs. | 77 (75%) | NR | Mean education for both intervention/comparator groups was 5 yrs. | Executive function (Digit span backwards, Stroop); Episodic memory (SVLT) |
| Vanoh et al. (2019) | Community-dwelling older adults recruited from the Klang Valley, Malaysia. Mean age of intervention group was 67 yrs, for comparator group 69 yrs. | 50 (58%) | Malaysian 88%; Indian 12% | Mean education for intervention group was 13 yrs, for comparator group 11 yrs. | Episodic memory (RAVLT, Visual reproduction); Attention (WAIS-III Coding, TMT-A); Cognitive screening (MMSE); Construction (Clock drawing); Perception (WAIS-III MR) |
| Physical Activity Interventions | |||||
| Anderson-Hanley et al. (2012) | Older adults recruited from independent living facilities in the USA. Mean age of intervention group was 76 yrs, for comparator group 82 yrs. | 79 (78%) | NR | Mean education for intervention group was 13 yrs, for comparator group 15 yrs. | Executive function (Category fluency, Color Trails, Letter fluency, Digit span backwards, Stroop); Episodic memory (Figure recall, Fuld, RAVLT); Attention (LDST); Construction (Clock drawing, Figure copy); |
| Gschwind et al. (2015) | Community-dwelling older adults recruited in Germany, Spain, or Australia. Mean age of intervention group was 75 yrs, for comparator group, also 75 yrs. | 153 (61%) | NR | Mean education for intervention group was 12 yrs, for comparator group 11 yrs. | Executive function (ANT, Digit span backwards, TMT-B, VST); Attention (ANT, WAIS-III Coding, TMT-A) |
| Sebastião et al. (2018) | Older adults with Multiple sclerosis recruited from a research register, word-of-mouth or advertisements in the USA. Mean age of intervention group was 64 yrs, for comparator group 65 yrs. | 25 (88%) | White 100% | No breakdown by trial arm, but 35% of whole sample had a Master's degree. | Episodic memory (BICAMS CVLT, BICAMS BVMT); Attention (BICAMS SDMT) |
| Psychosocial Interventions | |||||
| Dodge et al. (2015) | Older adults recruited from retirement communities and/or senior centers in the USA. Mean age of intervention group was 81 yrs, for comparator group 80 yrs. | 83 (76%) | NR | High school or greater: 98% of intervention group, 95% of comparator group. | Executive function (Category fluency, Cogstate 1-back, Cogstate 2-back, Letter fluency, Stroop, TMT-B); Episodic memory (Word list); Attention (Cogstate Detection test, TMT-A); Cognitive screening (CAMCI, MMSE) |
| Wahbeh et al. (2016) | Community-dwelling older adults recruited in Portland, USA, via an informational talk, advertisements or clinical referral. Sample grand mean age was 76 yrs (data for separate groups NR). | 16 (50%) | White 88% | No breakdown by trial arm, but sample grand mean was 18 yrs of education. | Executive function (Letter fluency, Flanker task, WMS-III LNS); Episodic memory (RAVLT); Attention (Simple RT) |
| Wuthrich et al. (2019) | Mental health outpatients with a Primary anxiety and/or unipolar mood disorder recruited in Sydney, Australia. Mean age of intervention group was 72 yrs, for comparator group 73 yrs. | 27 (74%) | NR | Intervention/Comparator education: Secondary (31%/23%); Diploma (31%/31%); University (38%/46%). | Cognitive screening (M-ACE) |
Attention network test (ANT); Brief international cognitive assessment for multiple sclerosis (BICAMS); Brief visuospatial memory test (BVMT); California verbal learning test (CVLT); Computer assessment of mild cognitive impairment (CAMCI); Letter digit substitution test (LDST); Matrix reasoning (MR); Mini-Addenbrooke’s Cognitive Examination (M-ACE); Mini-mental state examination (MMSE); Not reported (NR); Reaction time (RT); Rey auditory verbal learning test (RAVLT); Seoul verbal learning test (SVLT); Symbol digit modalities test (SDMT); Trail-making test part A (TMT-A); Trail-making test part B (TMT-B); Victoria Stroop test (VST); Wechsler adult intelligence scale-III (WAIS-III); Wechsler memory scale-III Letter number sequencing (WMS-III LNS); Years (yrs); * = Primary meta-analyzed sample only (i.e. only participants in the remote intervention and main comparator arms, with available outcome data); # = For centers to be included, at least 50% of service users had to fulfil the inclusion criteria of (1) ≥ weekly attendance; and (2) ≥ 60 yrs of age; and not meet the exclusion criteria of (1) significant hearing or visual impairment; (2) diagnosis of a neurological disorder; (3) serious mental illness; (4) taking psychotropics; or (5) history of substance abuse.
3.3. Participant characteristics
Across studies, the mean age of participants ranged from 64 to 81 years (median 74 years), and the proportion that were female ranged from 48% to 88% (median 75% female). Five studies reported mean participant education in years (range 5–18 years; median 12). Four studies included sample ethnicity data, with three reporting predominantly white participants, and one predominantly Malaysian participants. Seven studies recruited older adults from the general population, while three studies sampled from specific clinical populations (major depressive disorder; primary anxiety and/or mood disorder; or multiple sclerosis).
3.4. Intervention characteristics
The 10 studies described various remote interventions; these were categorized as multidomain (k = 4), physical activity (k = 3) or psychosocial approaches (k = 3; see Table 2). The multidomain interventions included a care management program promoting physical, social and cognitive activity (Lee et al., 2014); a coach-supported virtual platform to improve cardiovascular health (Richard et al., 2019); a nurse-led intervention providing cognitive restructuring and supporting lifestyle changes (Roh et al., 2020); and a web-based health management portal (Vanoh et al., 2019). The physical activity interventions included two based on exergaming (Anderson-Hanley et al., 2012, Gschwind et al., 2015) and one using square-stepping exercises (Sebastião et al., 2018). The three psychosocial interventions comprised mindfulness training (Wahbeh et al., 2016), cognitive behavioral therapy (CBT; Wuthrich et al., 2019) or social interaction between participants and trained conversationalists via webcam (Dodge et al., 2015). The remote intervention modalities (i.e. the primary means by which interventions were delivered) included telephone (k = 3), website (k = 3), video call (k = 2) and computer application (k = 2). The duration of interventions varied from 6 to 78 weeks (median 15 weeks). The proportion of interventions that was delivered remotely ranged from 67% to 100% (median 99%).
Table 2.
Intervention and comparator characteristics, by remote intervention type.
| Study | Trial arm | Intervention name | Intervention description | Intervention type | Intervention duration | Session characteristics n sessions (%) x duration |
|---|---|---|---|---|---|---|
| Multidomain Interventions | ||||||
| Lee et al. (2014) | Primary | Manualized bimonthly telephonic care management* | Manualized health education delivered individually by nurses via telephone. Recommendations included engaging in physical, cognitive, and social activities; reducing alcohol/tobacco consumption; and following a healthy diet. | Multidomain | 18 months | Remote: 9 (100%) x 10–15 mins Face-to-face: 0 (0%) |
| | Comparator | Manualized face-to-face care management# | Identical to the primary arm (see above), except nurses delivered the intervention face-to-face. | Face-to-face active comparator | 18 months | Remote: 0 (0%) Face-to-face: 9 (100%) x 15–20 mins |
| Richard et al. (2019) | Primary | Healthy ageing through internet counselling in the elderly (HATICE) | Virtual, individually-accessed platform to improve cardiovascular health, focusing on smoking, blood pressure, cholesterol, diabetes, weight, physical activity, and nutrition. Incorporated a personalized risk profile, goal setting, and support from a coach. | Multidomain | 18 months | Remote: Flexible (100%) x flexible mins Face-to-face: 0 (0%) |
| | Comparator | Non-interactive health website | Static, individually-accessed website with limited general health information; did not include personalization or coach input. | Minimal intervention comparator | 18 months | Remote: Flexible (100%) x flexible mins Face-to-face: 0 (0%) |
| Roh et al. (2020) | Primary | The gold medal program | Individually-delivered, nurse-led telephonic intervention encouraging physical activity, healthy diet and social activity; and also including brief cognitive restructuring for depression. | Multidomain | 12 weeks | Remote: 12 (75%) x 10 mins Face-to-face: 4 (25%) x 40–50 mins |
| | Comparator | Supportive therapy | Individual, face-to-face, monthly therapy sessions and a weekly telephone call. | Remote active comparator | 12 weeks | Remote: 12 (75%) x 10 mins Face-to-face: 4 (25%) x 40–50 mins |
| Vanoh et al. (2019) | Primary | WESIHAT ("Healthy senior citizens") 2.0 | Web-based, individually-accessed health education website comprising (1) estimation of risk of memory decline; (2) lifestyle modification guides; and (3) biochemical test results. | Multidomain | 6 months | Remote: 96 (97%) x 30 mins Face-to-face: 3 (3%) x 240 mins |
| | Comparator | Healthy eating pamphlet | Provided with individual dietary counselling utilizing a pamphlet of recommendations based on the Malaysian food pyramid. | Minimal intervention comparator | 6 months | Remote: 0 (0%) Face-to-face: NR (100%) x NR mins |
| Physical Activity Interventions | ||||||
| Anderson-Hanley et al. (2012) | Primary | Cybercycle exergame | Initial 1-month familiarization phase followed by individual virtual cycle tours competing against the participant's personal best time. | Physical activity | 3 months | Remote: 65 (NA) x 45 mins Face-to-face: NR (NA) x NR mins |
| | Comparator | Control bike | Initial 1-month familiarization phase followed by individual sessions on a static exercise bike reporting standard feedback (e.g. heart rate and mileage). | Remote active comparator | 3 months | Remote: 0 (0%) Face-to-face: NR (100%) x NR mins |
| Gschwind et al. (2015) | Primary | iStoppFalls exergame | Tailored and targeted exercise program to reduce falls in older people, completed individually. Consisted of balance sessions and muscle strength sessions, and provided participant feedback. | Physical activity | 16 weeks | Remote: 96 (NA) x 55–60 mins Face-to-face: ≥ 2 (NA) x NR mins |
| | Comparator | Educational booklet | Individuals were given a booklet consisting of healthy lifestyle and falls reduction advice. | Minimal intervention comparator | 16 weeks | Remote: 0 (NA) Face-to-face: 0 (NA) |
| Sebastião et al. (2018) | Primary | Square stepping exercise | Individuals were given a mat and pedometer for practicing step patterns at home. Included twice-monthly face-to-face instruction sessions, and weekly monitoring via Skype calls. | Physical activity | 12 weeks | Remote: 12 (67%) x 7 mins Face-to-face: 6 (33%) x 45 mins |
| Comparator | "Stretching for people with MS" illustrated manual | At-home, light intensity stretching and minimal muscle strengthening program. Included twice-monthly face-to-face instruction sessions, and weekly monitoring via Skype calls. | Remote active comparator | 12 weeks | Remote: 12 (67%) x 7 mins Face-to-face: 6 (33%) x 45 mins | |
| Psychosocial Interventions | ||||||
| Dodge et al. (2015) | Primary | Video-chat communication | Daily one-to-one conversation sessions via webcam, each lasting half an hour. | Social | 6 weeks | Remote: 30 (100%) x 30–35 mins Face-to-face: 0 (0%) |
| | Comparator | Weekly telephone calls | Weekly one-to-one telephone calls to assess control participants' social engagement activities. | Minimal intervention comparator | 6 weeks | Remote: 6 (100%) x NR mins Face-to-face: 0 (0%) |
| Wahbeh et al. (2016) | Primary | Internet mindfulness meditation intervention | Structured individual mindfulness-based intervention. Sessions included (1) discussion on stress, relaxation, and mind-body interaction; (2) meditation instruction/practice; and (3) addressing difficulties with mindfulness practice. | Psychological | 6 weeks | Remote: 6 (86%) x 60 mins Face-to-face: 1 (14%) x NR mins |
| | Comparator | Internet health education | Health videos/podcasts covering: 1) diet; 2) exercise; 3) sleep; 4) brain health; 5) mood; and 6) community involvement. Completed individually. | Remote active comparator | 6 weeks | Remote: 6 (100%) x 60 mins Face-to-face: 0 (0%) |
| Wuthrich et al. (2019) | Primary | Low-intensity CBT | Work-at-home CBT and motivational interviewing-informed intervention targeting emotional, health and lifestyle factors linked to cognitive decline. | Psychological | 16 weeks | Remote: 16 (100%) x 15 mins Face-to-face: 0 (0%) |
| | Comparator | Face-to-face CBT | Face-to-face, individual CBT and motivational interviewing targeting emotional, health and lifestyle factors linked to cognitive decline. | Face-to-face active comparator | 16 weeks | Remote: 0 (0%) Face-to-face: 16 (100%) x 60 mins |
Not reported (NR); Not applicable (NA); Multiple sclerosis (MS); Cognitive behavioral therapy (CBT); WargaEmasSihat [Malay for “Healthy senior citizens”] (WESIHAT); * = corresponding to Group B in Lee et al. (2014); # = corresponding to Group D in Lee et al. (2014).
3.5. Comparator characteristics
The included studies’ comparators were categorized as active interventions (with these subcategorized as remote (k = 4) or face-to-face (k = 2)), or minimal intervention comparators (k = 4). The latter category comprised the dissemination of health information (e.g. via pamphlet or website) without further input, or a weekly phone call to monitor social activity levels. Lee et al. (2014) was the only study to include more than one comparator. For this study, we included the face-to-face active comparator in the primary analysis, to ensure a rigorous evaluation of the remote multidomain intervention. However, for the subgroup meta-analysis of minimal intervention comparators only, we also included the treatment as usual group from that study. Amongst the remaining three multidomain studies, two used minimal intervention comparators (Richard et al., 2019, Vanoh et al., 2019), while one utilized a remote active comparator (Roh et al., 2020). All three studies of remote physical activity interventions utilized remote active comparators (Anderson-Hanley et al., 2012, Gschwind et al., 2015, Sebastião et al., 2018). Of the two remote psychological interventions, one featured a remote (Wahbeh et al., 2016), and the other a face-to-face active comparator (Wuthrich et al., 2019). The only remote social intervention utilized a minimal intervention comparator (Dodge et al., 2015).
3.6. Participant adherence
Participant adherence was not reported by all studies and, where reported, varied in format. Furthermore, some studies reported two types of adherence data, relating to participant-facilitator consultations, and participants’ engagement with intervention activities, respectively (this distinction was sometimes inapplicable). Of the four studies of remote multidomain interventions, only one (Richard et al., 2019) reported adherence data; participants assigned to an 18-month cardiovascular risk reduction intervention logged in to the online platform an average of 1.8 times per month, representing 42% of the recommended amount (comparator website: 0.7 times). All three physical activity interventions reported adherence data. Anderson-Hanley et al. (2012) reported that participants completed 79% (comparator bike: 82%) of prescribed cycling during an exergaming intervention. Gschwind et al. (2015) reported that 23% of participants achieved the recommended minimum amount of training in an exergaming intervention to prevent falls (comparator data not reported). Older adults with Multiple Sclerosis taking part in a square-stepping intervention (Sebastião et al., 2018) engaged with 100% of weekly phone/webcam calls to monitor compliance (stretching-based comparator: 100%). Face-to-face meeting attendance was lower, with only 53% of square-stepping participants attending all six meetings (comparator: 70%). Both psychological interventions reported adherence data. Wahbeh et al. (2016) reported that individuals taking part in a remote mindfulness intervention attended an average of 71% (health education comparator: 79%) of sessions and completed 56% of assigned home practice (comparator: 81%). An RCT comparing work-at-home to face-to-face CBT (Wuthrich et al., 2019) reported that adherence in the work-at-home arm was good, with 79% of older adults attending 15 of 16 sessions (face-to-face comparator: 85%). The only trial of a remote social intervention (Dodge et al., 2015) reported that the mean proportion of sessions attended was 89%, indicating high adherence (comparator data not reported). Thus, of the seven studies reporting remote intervention adherence data, five also reported data for comparators. In the majority of these cases, adherence between the remote intervention and comparator appeared approximately equal, although the cardiovascular risk reduction platform was accessed more regularly than the comparator website in Richard et al. (2019), and participants in the remote mindfulness group accrued less home practice than controls in Wahbeh et al. (2016).
3.7. Outcomes
None of the included studies assessed language function, or non-visual modalities of perception. Included outcomes thus represented the following cognitive domains: executive function, episodic memory, attention, cognitive screening, construction, or visual perception. Three studies included computerized cognitive tests alongside conventional pen-and-paper approaches; the remaining seven studies solely administered conventional tests. No included study administered outcome measures beyond the post-intervention visit, or evaluated intervention effects on dementia incidence. However, one trial of a remote multidomain intervention versus minimal intervention comparator (Richard et al., 2019) calculated a dementia risk composite primarily reflecting cardiovascular factors (see Kivipelto et al., 2006); the improvement on this measure was significantly greater in the remote intervention compared to the comparator.
3.8. Risk of bias
All studies were assessed for risk of bias according to the Cochrane risk of bias tool version 2 (Sterne et al., 2019). Across the ten studies, the number of each type of judgment for overall risk of bias was: ‘low’ risk of bias (k = 6), ‘some concerns’ (k = 3), and ‘high’ risk of bias (k = 1). Please see supplementary Fig. S2 for the summary figure. Considering the separate domains of bias, all studies bar one received a ‘low’ risk of bias judgment for the domain ‘Randomization process’. The rationale for judging Anderson-Hanley et al. (2012) as having ‘some concerns’ was that baseline age and education were not balanced between arms. All studies except one were considered to be at ‘low’ risk of bias for the domain ‘Deviations from the intended interventions’. The analysis reported by Vanoh et al. (2019) was ‘per-protocol’ (i.e. it only included the 83% of participants who completed the study); this trial was thus judged to raise ‘some concerns’. The remaining nine studies utilized ITT analyses, although the use of this term was inconsistent (see Abraha and Montedori, 2010). Of the studies utilizing ITT, five had retention in excess of 96%, and missing data were not imputed. Three studies did not impute missing data but attempted to contact discontinued participants at follow-up; two of these included 89% of the randomized sample in analyses (Gschwind et al., 2015, Richard et al., 2019) and one included 47% (Lee et al., 2014). One study had 80% retention and missing data were imputed (Anderson-Hanley et al., 2012). All studies bar one received a ‘low’ risk of bias judgment for the domain ‘Missing outcome data’. The reason for the ‘high’ risk of bias judgment for Lee et al. (2014) was low retention (see above). Eight studies were judged to be at ‘low’ risk of bias for the domain ‘Measurement of the outcome’. Both Lee et al. (2014) and Wuthrich et al. (2019) were judged as giving rise to ‘some concerns’ for this domain, because the MMSE was the only outcome measure in either study; this measure is insensitive to change in interventional studies (Posner et al., 2017). All studies were considered to be at ‘low’ risk of bias for the domain ‘Selection of the reported result’.
3.9. Quantitative synthesis of results
The primary RVE meta-analysis, estimating the effect of remote interventions versus comparators on overall cognitive function, included 64 effect sizes from the ten studies. The pooled estimate of g did not significantly differ from zero (g = −0.02; 95% confidence interval (CI) [− 0.14, 0.09]; p = .66; see Table 3). Two forest plots present this result graphically. The full forest plot (visualizing all 64 effect sizes) is included in the supplementary materials (see Fig. S1). We present a more compact forest plot in Fig. 2. Whilst all other analyses utilized RVE meta-analysis for clustered data, the compact forest plot presents the unweighted mean effect size within each study, with the summary effect derived from a univariate random effects meta-analysis. Whilst averaging effect sizes within studies is not optimal for quantitative synthesis (Matt and Cook, 1994), we include a forest plot of mean effects here as a visual aid.
Table 3.
Primary and subgroup meta-analyses for overall cognitive function.
| Type | K (N ES) | ES (g) | 95% CI | d.f. | p-value | Tau2 | I2 | |
|---|---|---|---|---|---|---|---|---|
| Primary analysis | ||||||||
| All studies | 10 (64) | -0.02 | [− 0.14, 0.09] | 6.0 | .663 | 0.03 | 47.38 | |
| Intervention type | ||||||||
| Multidomain | 4 (19) | -0.01 | [− 0.07, 0.05] | 1.8 | * | 0.01 | 29.93 | |
| Physical activity | 3 (26) | 0.07 | [− 0.34, 0.48] | 1.6 | * | 0.04 | 37.95 | |
| Psychosocial | 3 (19) | -0.28 | [− 2.14, 1.58] | 2.0 | * | 0.46 | 75.10 | |
| Comparator type | ||||||||
| Minimal intervention comparators# | 5 (31) | 0.06 | [− 0.18, 0.31] | 2.2 | * | 0.01 | 31.72 | |
| Active comparators (all) | 6 (34) | -0.10 | [− 0.41, 0.21] | 4.2 | .439 | 0.07 | 58.32 | |
| Remote active comparators | 4 (32) | 0.02 | [− 0.14, 0.18] | 2.4 | * | 0.04 | 38.91 | |
| Face-to-face active comparators | 2 (2) | -0.53 | [− 7.67, 6.61] | 1 | * | 0.54 | 83.88 | |
Effect sizes operate so that positive values indicate improvement. Number of studies (K); Effect size (ES); Hedges’ standardized mean difference (g); Confidence interval (CI); Degrees of freedom (d.f.); Between-study variance (Tau2); Proportion of observed dispersion due to real variation in effect sizes (I2); # = additionally includes the treatment as usual group from Lee et al. (2014); * = where d.f. < 4, p-values are unreliable, and are thus not reported here.
Fig. 2.
Compact forest plot of within-study mean effect sizes, grouped by remote intervention type. This figure plots within-study mean effect sizes and the univariate RE meta-analytic estimate for these effects across studies (produced using the ‘metafor’ R package). The meta-analytic estimate shown on the plot above is comparable to that derived from the ‘full’ RVE meta-analysis of the individual effect sizes (RVE model: g = −0.02; 95% CI [−0.14, 0.09]; p = .66. Univariate RE model: g = −0.01; 95% CI [−0.08, 0.06]; p = .82). Confidence interval (CI); Random effects (RE); Hedges’ standardized mean difference (g); Robust variance estimation (RVE).
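As a minimal sketch of how such a compact plot could be generated (not the authors’ actual code), within-study mean effects might be computed and passed to a univariate random-effects model in ‘metafor’; column names are hypothetical, and the simple averaging of effect-size variances is a simplification used for visualization only, as noted above.

```r
# Hedged sketch: average effect sizes within each study, then fit and plot a
# univariate random-effects model with metafor (for visualization only).
library(metafor)

study_means <- aggregate(cbind(yi, vi) ~ study, data = dat, FUN = mean)

re_model <- rma(yi = yi, vi = vi, data = study_means, slab = study, method = "REML")
forest(re_model, xlab = "Hedges' g")
```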
Across individual cognitive domains, the only analysis achieving the requisite 4 d.f. was for episodic memory (k = 8; N ES = 18; g = −0.02; 95% CI [−0.31, 0.27]; p = .84). All of the pooled effect size estimates for the remaining cognitive domains did, however, yield 95% confidence intervals including zero. Full details of these meta-analyses are reported in the supplementary materials (see Table S1). Lastly, we conducted separate subgroup meta-analyses of the different remote intervention types (i.e. multidomain, physical activity and psychological; the single remote social intervention was not included). In keeping with the other analyses, the estimated difference between remote interventions and comparators was not significant for any subgroup. For all meta-analyses described, rho (the within-study correlation between outcomes) was set to 0.8, and sensitivity analyses varied rho from 0 to 1; in all cases, varying rho did not substantively affect results.
Given that the included remote interventions could be categorized as multidomain (k = 4), physical activity (k = 3) or psychosocial (k = 3) interventions, we also conducted subgroup meta-analyses of these separately. Due to the small number of studies included in each subgroup, all of the meta-analytic estimates had < 4 d.f. and thus the associated p-values were not reliable (see Table 3). Even so, all of the estimates had 95% CIs comfortably spanning zero, suggesting that the results for separate remote intervention types were comparable to the main analyses.
Given the variability in the type of control group, subgroup meta-analyses were also conducted for separate types of comparator (see Table 3). As encountered above, the small number of studies for each comparator type resulted in unreliable p-values for all but one of these analyses (see Table 3). The meta-analysis of the subgroup of studies utilizing a minimal intervention comparator yielded a substantively unchanged estimate relative to the primary analysis, although a reliable p-value was not available. A meta-analysis of just the six studies featuring active comparators yielded a negative, small, non-significant effect size. Further subdividing active comparators as face-to-face (k = 2) or remote (k = 4) also yielded pooled effect sizes with 95% CIs approximately centered on zero (with unreliable p-values), although the estimate for the two studies utilizing face-to-face comparators was somewhat negative (g = −0.53; 95% CI [−7.67, 6.61]; d.f. < 4). Taken together, these results suggest that the type of comparator had limited bearing on the estimated efficacy of remote interventions.
3.10. Publication bias
The trial by Wuthrich et al. (2019) was excluded from the assessment of publication bias, as the remote intervention arm in that RCT appeared to be the comparator. This, in conjunction with the fact that the results of that study favored the (intended) primary face-to-face arm, suggests that any publication bias acting on that study may have operated in the ‘opposite’ direction from the remaining nine studies. Our assessment of publication bias thus focused solely on these nine trials. Following Mathur and VanderWeele (2020), we calculated a sensitivity meta-analysis of only the non-significant effect sizes (this representing ‘worst case scenario’ publication bias). The resulting estimate (k = 9; ES = 60; g = −0.00; 95% CI [−0.03, 0.02]; d.f. < 4) was substantively unchanged from the primary meta-analysis result, suggesting that the present results are robust to publication bias.
4. Discussion
This is the first systematic review and meta-analysis to evaluate the effect of remotely delivered lifestyle interventions on cognition in older adults without dementia. The ten eligible studies included multidomain, physical activity, psychological or social interventions. Combined, their effect on cognition did not significantly differ from comparators. Subgroup meta-analyses of separate comparator types, remote intervention types, and cognitive domains supported this result. Previous reviews of non-pharmacological interventions for reducing cognitive decline in older adults have predominantly included face-to-face studies. They concluded that evidence for efficacy was mixed, although more promising for some intervention types (Kane et al., 2017, Whitty et al., 2020). It remains to be established whether the current, contrasting results reflect the remote delivery modality and/or factors specific to the current pool of studies (e.g. trial methodology, intervention characteristics).
Across the ten studies, just over half used an active comparator (either face-to-face or remotely delivered). Two studies utilized a face-to-face active comparator (Lee et al., 2014, Wuthrich et al., 2019). Whilst one of these reported little difference between the remote intervention and comparator (Lee et al., 2014), results from the other clearly favored the face-to-face intervention (Wuthrich et al., 2019; see Fig. 2). However, in both studies, the amount of contact time with intervention facilitators was greater in the face-to-face than in the remote arm. Thus, whilst the results of Wuthrich et al. (2019) favored the face-to-face intervention, the confound with contact time precludes attributing this to the in-person delivery modality.
Four of the original studies specified cognition as the primary outcome, with the remainder being unclear or specifying a physical or affective endpoint. As a result, some studies may have been underpowered for the included cognitive measures. A broad screening tool for dementia (i.e. the MMSE) was the only cognitive outcome available in two studies (Lee et al., 2014, Wuthrich et al., 2019); this measure lacks adequate sensitivity to change in interventional designs (Posner et al., 2017). Other studies included cognitive tests with low test-retest reliability (e.g. Stroop; see Strauss et al., 2005). Whilst meta-analysis can overcome low statistical power in original studies, including reliable and sensitive cognitive outcomes in future remote intervention trials will increase the likelihood of identifying relevant effects.
Intervention duration, subtype of remote delivery (e.g. telephone, video call), and adherence of participants to the intervention protocol varied widely between studies; each of these factors has the potential to impact efficacy. Interestingly, all of the efficacious (face-to-face) interventions identified by a previous review (Whitty et al., 2020) had a duration of at least four months; only three of the current ten remote interventions met or exceeded this. Moreover, none of the interventions included in this review were group-based. This is in marked contrast to the majority of face-to-face lifestyle interventions included in previous reviews (Kane et al., 2017; Whitty et al., 2020; cf. the FINGER RCT, Ngandu et al., 2015); we speculate that group-based remote interventions may be more efficacious than individual approaches, although the evidence required to test this hypothesis is currently lacking.
4.1. Strengths
This review has a number of strengths. It is timely given the increasing adaptation of interventions and clinical services for remote delivery in the wake of the COVID-19 pandemic. We solely included objective cognitive function outcome measures, which, in contrast to subjective measures, are not susceptible to self-report biases. The type of meta-analysis conducted, RVE, was purposely selected for its appropriate handling of within-study effect size clustering, thus removing the need to simplify or average the data. The method used to assess the sensitivity of results to potential publication bias was also selected for its appropriate treatment of clustered data. Studies were assessed for risk of bias according to the latest version of the Cochrane tool, and were found to be at predominantly low risk of bias overall.
4.2. Limitations
The most salient limitations are the small number of studies, as well as the between-study variability in populations, interventions and comparators. The limited number of original studies resulted in some of the subgroup meta-analyses being reported without p-values, and precluded the planned comparison between remote-only and ‘blended’ intervention approaches. Two studies solely administered the MMSE, which lacks adequate sensitivity to change in RCTs. We combined outcomes across cognitive domains for some analyses. A previous meta-analysis corroborated the view that tests generally measure more than one cognitive domain (Agelink van Rentergem et al., 2020), providing empirical support for the present analytical approach. Moreover, syntheses of the effects of other non-pharmacological interventions on cognition also included pooled analyses (Mewborn et al., 2017, Sherman et al., 2017). Nevertheless, this approach does not yield a true measure of overall cognitive function, and the pooled estimates should thus be interpreted with a degree of caution. No included study administered outcome measures beyond the post-intervention visit, or compared dementia incidence between trial arms. Whilst the lack of a difference between arms on cognitive outcomes in the short term suggests that longer-term effects would not have manifested, this remains a limitation given the overarching research rationale of dementia prevention. None of the included studies recruited individuals with subjective or objective cognitive impairment (i.e. SCD or MCI), groups at increased risk of dementia (Mitchell et al., 2014, Mitchell and Shiri-Feshki, 2009). Given the assumed importance of secondary prevention strategies for reducing dementia incidence, the present lack of studies in these populations is a limitation. Lastly, the methodological decision to only include English language publications may have resulted in research written in other languages being overlooked; however, recent work suggests that the negative impact of this inclusion criterion is likely minor (Dobrescu et al., 2021).
4.3. Recommendations for future studies
The growing movement towards remote delivery of interventions promises to yield rapid growth in the evidence base over the coming years. Based on the early evidence reported here, we offer some recommendations for future trials. Firstly, all participants included in this review were cognitively intact older adults. The future inclusion of individuals with SCD and/or MCI, groups at increased risk of dementia, will be vital to improve the evidence base for preventative strategies in these populations. Moreover, including people with SCD and/or MCI would increase the sensitivity of studies to detect interventional effects on cognition. Regarding outcome measures, the inclusion of cognitive tests that are reliable and sensitive to change (e.g. the NIH Toolbox; Weintraub et al., 2013) would increase the likelihood of identifying effects, should these exist. Investigators are encouraged to include follow-up assessments of cognition and to record dementia incidence in trials; this will maximize the relevance of the evidence to the overarching initiative of prevention. Whilst one study in this review favored a face-to-face over a remote intervention (Wuthrich et al., 2019), no cost-effectiveness data were available in this (or any) study. Future studies and reviews comparing face-to-face and remote interventions are encouraged to consider the respective health economic profiles of these delivery modalities, in addition to efficacy.
Compared to face-to-face delivery, remotely delivered interventions are more scalable, more accessible to geographically isolated individuals, and might be easier for some people to integrate with their daily routine (Rincker et al., 2020). Nevertheless, remote delivery typically requires fast and reliable digital infrastructure, access to which varies by country. Moreover, technological access and fluency are lower in older individuals compared with the general population (UK Office for National Statistics, 2019). Providing participants with the option of remote or face-to-face delivery, and/or adopting a ‘blended’ approach, may maximize inclusivity. Practical help, which could include provision of devices (e.g. Wi-Fi enabled tablets) and technological assistance, would further mitigate the negative impact of digital inequality on healthcare access (Watts, 2020). It is noteworthy that all included studies were published prior to the COVID-19 pandemic. The recent increases in ‘social technology’ use (most notably, video calls) may result in a greater proportion of older adults being able and willing to participate in remote interventions in the future. Given the variability in participant adherence to the interventions reported here, researchers are also encouraged to consider ways to support and promote engagement, such as group-based sessions, personalized goals, and collaborative exercises.
4.4. Conclusions
This review of remotely delivered lifestyle interventions found that their effect on cognitively intact older adults’ cognitive function did not differ from comparators. Notably, these results were based on ten methodologically varied studies. Whilst the evidence is limited at present, large-scale trials are ongoing and will consolidate the knowledge base going forward (Cooper et al., 2020, Kivipelto et al., 2020). As further evidence accumulates, the early findings summarized here will need to be updated.
Funding
This work was funded by The Dunhill Medical Trust [RTF1806\45]. This work was also supported by a dementia programme grant from the ESRC/NIHR awarded to the APPLE-Tree programme [ES/S010408/1].
Footnotes
Supplementary data associated with this article can be found in the online version at doi:10.1016/j.arr.2021.101505.
Appendix A. Supplementary material
References
- Abraha I., Montedori A. Modified intention to treat reporting in randomised controlled trials: systematic review. BMJ. 2010;340:c2697. doi: 10.1136/bmj.c2697. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Agelink van Rentergem J.A., de Vent N.R., Schmand B.A., Murre J.M.J., Staaks J.P.C., Huizenga H.M., Consortium A. The factor structure of cognitive functioning in cognitively healthy participants: a meta-analysis and meta-analysis of individual participant data. Neuropsychol. Rev. 2020;30:51–96. doi: 10.1007/s11065-019-09423-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Anderson-Hanley C., Arciero P.J., Brickman A.M., Nimon J.P., Okuma N., Westen S.C., Merz M.E., Pence B.D., Woods J.A., Kramer A.F., Zimmerman E.A. Exergaming and older adult cognition: a cluster randomized clinical trial. Am. J. Prev. Med. 2012;42:109–119. doi: 10.1016/j.amepre.2011.10.016. [DOI] [PubMed] [Google Scholar]
- Andrieu S., Coley N., Lovestone S., Aisen P.S., Vellas B. Prevention of sporadic Alzheimer’s disease: lessons learned from clinical trials and future directions. Lancet Neurol. 2015;14:926–944. doi: 10.1016/S1474-4422(15)00153-2. [DOI] [PubMed] [Google Scholar]
- Bhome R., Berry A.J., Huntley J.D., Howard R.J. Interventions for subjective cognitive decline: systematic review and meta-analysis. BMJ Open. 2018;8 doi: 10.1136/bmjopen-2018-021610. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Borenstein M. Introduction to Meta-Analysis. Wiley; Oxford: 2009. [Google Scholar]
- Cooper C., Aguirre E., Barber J.A., Bass N., Brodaty H., Burton A., Higgs P., Hunter R., Huntley J., Lang I., Kales H.C., Marchant N.L., Minihane A.M., Ritchie K., Morgan-Trimmer S., Walker Z., Walters K., Wenborn J., Rapaport P. APPLE-tree (active prevention in people at risk of dementia: lifestyle, behaviour change and technology to reduce cognitive and functional decline) programme: protocol. Int. J. Geriatr. Psychiatry. 2020;35:811–819. doi: 10.1002/gps.5249. [DOI] [PubMed] [Google Scholar]
- Diamond A. Executive functions. Annu Rev. Psychol. 2013;64:135–168. doi: 10.1146/annurev-psych-113011-143750. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Dobrescu A.I., Nussbaumer-Streit B., Klerings I., Wagner G., Persad E., Sommer I., Herkner H., Gartlehner G. Restricting evidence syntheses of interventions to English-language publications is a viable methodological shortcut for most medical topics: a systematic review. J. Clin. Epidemiol. 2021;137:209–217. doi: 10.1016/j.jclinepi.2021.04.012. [DOI] [PubMed] [Google Scholar]
- Dodge H.H., Zhu J., Mattek N.C., Bowman M., Ybarra O., Wild K.V., Loewenstein D.A., Kaye J.A. Web-enabled conversational interactions as a method to improve cognitive functions: results of a 6-week randomized controlled trial. Alzheimer’s. Dement. Transl. Res. Clin. Interv. 2015;1:1–12. doi: 10.1016/j.trci.2015.01.001. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Egger G., Binns A., Rössner S., Sagner M. In: Lifestyle Medicine. third edition. Egger G., Binns A., Rössner S., Sagner M., editors. Academic Press; 2017. Chapter 1 - introduction to the role of lifestyle factors in medicine; pp. 3–13. [Google Scholar]
- Fisher Z., Tipton E. robumeta: an R-package for robust variance estimation in meta-analysis. arXiv. 2015.
- Folstein M.F., Folstein S.E., McHugh P.R. "Mini-mental state". A practical method for grading the cognitive state of patients for the clinician. J. Psychiatr. Res. 1975;12:189–198. doi: 10.1016/0022-3956(75)90026-6. [DOI] [PubMed] [Google Scholar]
- Fratiglioni L., Marseglia A., Dekhtyar S. Ageing without dementia: can stimulating psychosocial and lifestyle experiences make a difference? Lancet Neurol. 2020;19:533–543. doi: 10.1016/S1474-4422(20)30039-9. [DOI] [PubMed] [Google Scholar]
- Gschwind Y.J., Eichberg S., Ejupi A., de Rosario H., Kroll M., Marston H.R., Drobics M., Annegarn J., Wieching R., Lord S.R., Aal K., Vaziri D., Woodbury A., Fink D., Delbaere K. ICT-based system to predict and prevent falls (iStoppFalls): results from an international multicenter randomized controlled trial. Eur. Rev. Aging Phys. Act. 2015;12:10. doi: 10.1186/s11556-015-0155-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Hedges L.V., Tipton E., Johnson M.C. Robust variance estimation in meta-regression with dependent effect size estimates. Res. Syn. Methods. 2010;1:39–65. doi: 10.1002/jrsm.5. [DOI] [PubMed] [Google Scholar]
- Huntley J.D., Gould R.L., Liu K., Smith M., Howard R.J. Do cognitive interventions improve general cognition in dementia? A meta-analysis and meta-regression. BMJ Open. 2015;5 doi: 10.1136/bmjopen-2014-005247. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kane R.L., Butler M., Fink H.A., Brasure M., Davila H., Desai P., Jutkowitz E., McCreedy E., Nelson V.A., McCarten J.R., Calvert C., Ratner E., Hemmy L.S., Barclay T. Interventions to prevent age-related cognitive decline, mild cognitive impairment, and clinical Alzheimer’s-type dementia. AHRQ Comp. Eff. Rev. 2017;188 [PubMed] [Google Scholar]
- Kivipelto M., Mangialasche F., Snyder H.M., Allegri R., Andrieu S., Arai H., Baker L., Belleville S., Brodaty H., Brucki S.M., Calandri I., Caramelli P., Chen C., Chertkow H., Chew E., Choi S.H., Chowdhary N., Crivelli L., Torre R.D.L., Du Y., Dua T., Espeland M., Feldman H.H., Hartmanis M., Hartmann T., Heffernan M., Henry C.J., Hong C.H., Håkansson K., Iwatsubo T., Jeong J.H., Jimenez-Maggiora G., Koo E.H., Launer L.J., Lehtisalo J., Lopera F., Martínez-Lage P., Martins R., Middleton L., Molinuevo J.L., Montero-Odasso M., Moon S.Y., Morales-Pérez K., Nitrini R., Nygaard H.B., Park Y.K., Peltonen M., Qiu C., Quiroz Y.T., Raman R., Rao N., Ravindranath V., Rosenberg A., Sakurai T., Salinas R.M., Scheltens P., Sevlever G., Soininen H., Sosa A.L., Suemoto C.K., Tainta-Cuezva M., Velilla L., Wang Y., Whitmer R., Xu X., Bain L.J., Solomon A., Ngandu T., Carrillo M.C. World-Wide FINGERS network: a global approach to risk reduction and prevention of dementia. Alzheimer’s. Dement. 2020;16:1078–1094. doi: 10.1002/alz.12123. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Kivipelto M., Ngandu T., Laatikainen T., Winblad B., Soininen H., Tuomilehto J. Risk score for the prediction of dementia risk in 20 years among middle aged people: a longitudinal, population-based study. Lancet Neurol. 2006;5:735–741. doi: 10.1016/S1474-4422(06)70537-3. [DOI] [PubMed] [Google Scholar]
- Lee K.S., Lee Y., Back J.H., Son S.J., Choi S.H., Chung Y.K., Lim K.Y., Noh J.S., Koh S.H., Oh B.H., Hong C.H. Effects of a multidomain lifestyle modification on cognitive function in older adults: an eighteen-month community-based cluster randomized controlled trial. Psychother. Psychosom. 2014;83:270–278. doi: 10.1159/000360820. [DOI] [PubMed] [Google Scholar]
- Lentjes M.A.H. The balance between food and dietary supplements in the general population. Proc. Nutr. Soc. 2019;78:97–109. doi: 10.1017/S0029665118002525. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Lezak M.D., Howieson D.B., Bigler E.D., Tranel D. Neuropsychological Assessment. fifth ed. Oxford University Press; New York, NY, US: 2012. [Google Scholar]
- Lipsey M.W., Wilson D.B. Practical Meta-Analysis. Sage; Thousand Oaks, CA; London: 2001. [Google Scholar]
- Livingston G., Huntley J., Sommerlad A., Ames D., Ballard C., Banerjee S., Brayne C., Burns A., Cohen-Mansfield J., Cooper C., Costafreda S.G., Dias A., Fox N., Gitlin L.N., Howard R., Kales H.C., Kivimäki M., Larson E.B., Ogunniyi A., Orgeta V., Ritchie K., Rockwood K., Sampson E.L., Samus Q., Schneider L.S., Selbæk G., Teri L., Mukadam N. Dementia prevention, intervention, and care: 2020 report of the Lancet Commission. Lancet. 2020;396:413–446. doi: 10.1016/S0140-6736(20)30367-6. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Livingston G., Sommerlad A., Orgeta V., Costafreda S.G., Huntley J., Ames D., Ballard C., Banerjee S., Burns A., Cohen-Mansfield J., Cooper C., Fox N., Gitlin L.N., Howard R., Kales H.C., Larson E.B., Ritchie K., Rockwood K., Sampson E.L., Samus Q., Schneider L.S., Selbæk G., Teri L., Mukadam N. Dementia prevention, intervention, and care. Lancet. 2017;390:2673–2734. doi: 10.1016/S0140-6736(17)31363-6. [DOI] [PubMed] [Google Scholar]
- Mathur M.B., VanderWeele T.J. Sensitivity analysis for publication bias in meta-analyses. J. R. Stat. Soc. Ser. C (Appl. Stat. ) 2020;69:1091–1119. doi: 10.1111/rssc.12440. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Matt G.E., Cook T.D. In: Handbook of Research Synthesis. Cooper H., Hedges L.V., editors. Russell Sage Foundation; New York: 1994. Threats to the validity of research syntheses. [Google Scholar]
- Mewborn C.M., Lindbergh C.A., Stephen Miller L. Cognitive interventions for cognitively healthy, mildly impaired, and mixed samples of older adults: a systematic review and meta-analysis of randomized-controlled trials. Neuropsychol. Rev. 2017;27:403–439. doi: 10.1007/s11065-017-9350-8. [DOI] [PubMed] [Google Scholar]
- Mitchell A.J., Beaumont H., Ferguson D., Yadegarfar M., Stubbs B. Risk of dementia and mild cognitive impairment in older people with subjective memory complaints: meta-analysis. Acta Psychiatr. Scand. 2014;130:439–451. doi: 10.1111/acps.12336. [DOI] [PubMed] [Google Scholar]
- Mitchell A.J., Shiri-Feshki M. Rate of progression of mild cognitive impairment to dementia--meta-analysis of 41 robust inception cohort studies. Acta Psychiatr. Scand. 2009;119:252–265. doi: 10.1111/j.1600-0447.2008.01326.x. [DOI] [PubMed] [Google Scholar]
- Moher D., Liberati A., Tetzlaff J., Altman D.G., PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6 [PMC free article] [PubMed] [Google Scholar]
- Morris S.B. Estimating effect sizes from pretest-posttest-control group designs. Organ. Res. Methods. 2007;11:364–386. [Google Scholar]
- Ngandu T., Lehtisalo J., Solomon A., Levälahti E., Ahtiluoto S., Antikainen R., Bäckman L., Hänninen T., Jula A., Laatikainen T., Lindström J., Mangialasche F., Paajanen T., Pajala S., Peltonen M., Rauramaa R., Stigsdotter-Neely A., Strandberg T., Tuomilehto J., Soininen H., Kivipelto M. A 2 year multidomain intervention of diet, exercise, cognitive training, and vascular risk monitoring versus control to prevent cognitive decline in at-risk elderly people (FINGER): a randomised controlled trial. Lancet. 2015;385:2255–2263. doi: 10.1016/S0140-6736(15)60461-5. [DOI] [PubMed] [Google Scholar]
- Nichols E., Szoeke C.E.I., Vollset S.E., Abbasi N., Abd-Allah F., Abdela J., Aichour M.T.E., Akinyemi R.O., Alahdab F., Asgedom S.W., Awasthi A., Barker-Collo S.L., Baune B.T., Béjot Y., Belachew A.B., Bennett D.A., Biadgo B., Bijani A., Bin Sayeed M.S., Brayne C., Carpenter D.O., Carvalho F., Catalá-López F., Cerin E., Choi J.-Y.J., Dang A.K., Degefa M.G., Djalalinia S., Dubey M., Duken E.E., Edvardsson D., Endres M., Eskandarieh S., Faro A., Farzadfar F., Fereshtehnejad S.-M., Fernandes E., Filip I., Fischer F., Gebre A.K., Geremew D., Ghasemi-Kasman M., Gnedovskaya E.V., Gupta R., Hachinski V., Hagos T.B., Hamidi S., Hankey G.J., Haro J.M., Hay S.I., Irvani S.S.N., Jha R.P., Jonas J.B., Kalani R., Karch A., Kasaeian A., Khader Y.S., Khalil I.A., Khan E.A., Khanna T., Khoja T.A.M., Khubchandani J., Kisa A., Kissimova-Skarbek K., Kivimäki M., Koyanagi A., Krohn K.J., Logroscino G., Lorkowski S., Majdan M., Malekzadeh R., März W., Massano J., Mengistu G., Meretoja A., Mohammadi M., Mohammadi-Khanaposhtani M., Mokdad A.H., Mondello S., Moradi G., Nagel G., Naghavi M., Naik G., Nguyen L.H., Nguyen T.H., Nirayo Y.L., Nixon M.R., Ofori-Asenso R., Ogbo F.A., Olagunju A.T., Owolabi M.O., Panda-Jonas S., Passos V.Md.A., Pereira D.M., Pinilla-Monsalve G.D., Piradov M.A., Pond C.D., Poustchi H., Qorbani M., Radfar A., Reiner R.C., Jr., Robinson S.R., Roshandel G., Rostami A., Russ T.C., Sachdev P.S., Safari H., Safiri S., Sahathevan R., Salimi Y., Satpathy M., Sawhney M., Saylan M., Sepanlou S.G., Shafieesabet A., Shaikh M.A., Sahraian M.A., Shigematsu M., Shiri R., Shiue I., Silva J.P., Smith M., Sobhani S., Stein D.J., Tabarés-Seisdedos R., Tovani-Palone M.R., Tran B.X., Tran T.T., Tsegay A.T., Ullah I., Venketasubramanian N., Vlassov V., Wang Y.-P., Weiss J., Westerman R., Wijeratne T., Wyper G.M.A., Yano Y., Yimer E.M., Yonemoto N., Yousefifard M., Zaidi Z., Zare Z., Vos T., Feigin V.L., Murray C.J.L. Vol. 18. 2019. Global, regional, and national burden of Alzheimer's disease and other dementias, 1990-2016: a systematic analysis for the Global Burden of Disease Study 2016; pp. 88–106. (The Lancet Neurology). [DOI] [PMC free article] [PubMed] [Google Scholar]
- Petersen S.E., Posner M.I. The attention system of the human brain: 20 years after. Annu Rev. Neurosci. 2012;35:73–89. doi: 10.1146/annurev-neuro-062111-150525. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Posner H., Curiel R., Edgar C., Hendrix S., Liu E., Loewenstein D.A., Morrison G., Shinobu L., Wesnes K., Harvey P.D. Outcomes assessment in clinical trials of Alzheimer’s disease and its precursors: readying for short-term and long-term clinical trial needs. Innov. Clin. Neurosci. 2017;14:22–29. [PMC free article] [PubMed] [Google Scholar]
- Richard E., Moll van Charante E.P., Hoevenaar-Blom M.P., Coley N., Barbera M., van der Groep A., Meiller Y., Mangialasche F., Beishuizen C.B., Jongstra S., van Middelaar T., Van Wanrooij L.L., Ngandu T., Guillemont J., Andrieu S., Brayne C., Kivipelto M., Soininen H., Van Gool W.A. Healthy ageing through internet counselling in the elderly (HATICE): a multinational, randomised controlled trial. Lancet Digit. Health. 2019;1:e424–e434. doi: 10.1016/S2589-7500(19)30153-0. [DOI] [PubMed] [Google Scholar]
- Rincker J., Wallis J., Fruik A., King A., Young K., Tucker T., Bales C., Starr K.P. Implementation of remote interventions in older adults: lessons learned during COVID-19 isolation. Innov. Aging. 2020;4:950. [Google Scholar]
- Roh H.W., Hong C.H., Lim H.K., Chang K.J., Kim H., Kim N.-R., Choi J.W., Lee K.S., Cho S.-M., Park B., Son S.J. A 12-week multidomain intervention for late-life depression: a community-based randomized controlled trial. J. Affect. Disord. 2020;263:437–444. doi: 10.1016/j.jad.2019.12.013. [DOI] [PubMed] [Google Scholar]
- Sebastião E., McAuley E., Shigematsu R., Adamson B.C., Bollaert R.E., Motl R.W. Home-based, square-stepping exercise program among older adults with multiple sclerosis: results of a feasibility randomized controlled study. Contemp. Clin. Trials. 2018;73:136–144. doi: 10.1016/j.cct.2018.09.008. [DOI] [PubMed] [Google Scholar]
- Smart C.M., Karr J.E., Areshenkoff C.N., Rabin L.A., Hudon C., Gates N., Ali J.I., Arenaza-Urquijo E.M., Buckley R.F., Chetelat G., Hampel H., Jessen F., Marchant N.L., Sikkes S.A.M., Tales A., van der Flier W.M., Wesselman L., the Subjective Cognitive Decline Initiative Working, G Non-pharmacologic interventions for older adults with subjective cognitive decline: systematic review, meta-analysis, and preliminary recommendations. Neuropsychol. Rev. 2017;27:245–257. doi: 10.1007/s11065-017-9342-8. [DOI] [PubMed] [Google Scholar]
- Sterne J.A.C., Savović J., Page M.J., Elbers R.G., Blencowe N.S., Boutron I., Cates C.J., Cheng H.-Y., Corbett M.S., Eldridge S.M., Emberson J.R., Hernán M.A., Hopewell S., Hróbjartsson A., Junqueira D.R., Jüni P., Kirkham J.J., Lasserson T., Li T., McAleenan A., Reeves B.C., Shepperd S., Shrier I., Stewart L.A., Tilling K., White I.R., Whiting P.F., Higgins J.P.T. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ. 2019;366:l4898. doi: 10.1136/bmj.l4898. [DOI] [PubMed] [Google Scholar]
- Strauss G.P., Allen D.N., Jorgensen M.L., Cramer S.L. Test-retest reliability of standard and emotional stroop tasks: an investigation of color-word and picture-word versions. Assessment. 2005;12:330–337. doi: 10.1177/1073191105276375. [DOI] [PubMed] [Google Scholar]
- Sherman D.S., Mauser J., Nuno M., Sherzai D. The efficacy of cognitive intervention in mild cognitive impairment (MCI): a meta-analysis of outcomes on neuropsychological measures. Neuropsychol. Rev. 2017;27:440–484. doi: 10.1007/s11065-017-9363-3. [DOI] [PMC free article] [PubMed] [Google Scholar]
- UK Office for National Statistics. Internet Users, UK: 2019. Age UK; London: 2019. [Google Scholar]
- Vanoh D., Shahar S., Razali R., Ali N.M., Manaf Z.A., Mohd Noah S.A., Nur A.M. The effectiveness of a web-based health education tool, WESIHAT 2.0, among older adults: a randomized controlled trial. J. Alzheimers Dis. 2019;70:S255–S270. doi: 10.3233/JAD-180464. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Veritas Health Innovation (Melbourne), Covidence systematic review software. 〈www.covidence.org〉. Last accessed August 6, 2020.
- Wahbeh H., Goodrich E., Oken B.S. Internet-based mindfulness meditation for cognition and mood in older adults: a pilot study. Altern. Ther. Health Med. 2016;22:44–53. [PMC free article] [PubMed] [Google Scholar]
- Watts G. COVID-19 and the digital divide in the UK. Lancet Digit. Health. 2020;2:e395–e396. doi: 10.1016/S2589-7500(20)30169-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Weintraub S., Dikmen S.S., Heaton R.K., Tulsky D.S., Zelazo P.D., Bauer P.J., Carlozzi N.E., Slotkin J., Blitz D., Wallner-Allen K., Fox N.A., Beaumont J.L., Mungas D., Nowinski C.J., Richler J., Deocampo J.A., Anderson J.E., Manly J.J., Borosh B., Havlik R., Conway K., Edwards E., Freund L., King J.W., Moy C., Witt E., Gershon R.C. Cognition assessment using the NIH Toolbox. Neurology. 2013;80:S54–S64. doi: 10.1212/WNL.0b013e3182872ded. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Whitty E., Mansour H., Aguirre E., Palomo M., Charlesworth G., Ramjee S., Poppe M., Brodaty H., Kales H.C., Morgan-Trimmer S., Nyman S.R., Lang I., Walters K., Petersen I., Wenborn J., Minihane A.M., Ritchie K., Huntley J., Walker Z., Cooper C. Efficacy of lifestyle and psychosocial interventions in reducing cognitive decline in older people: systematic review. Ageing Res. Rev. 2020;62 doi: 10.1016/j.arr.2020.101113. [DOI] [PubMed] [Google Scholar]
- Wuthrich V.M., Rapee R.M., Draper B., Brodaty H., Low L.-F., Naismith S.L. Reducing risk factors for cognitive decline through psychological interventions: a pilot randomized controlled trial. Int. Psychogeriatr. 2019;31:1015–1025. doi: 10.1017/S1041610218001485. [DOI] [PubMed] [Google Scholar]
- Yiannopoulou K.G., Papageorgiou S.G. Current and future treatments in Alzheimer disease: an update. J. Cent. Nerv. Syst. Dis. 2020;12:1179573520907397. doi: 10.1177/1179573520907397. [DOI] [PMC free article] [PubMed] [Google Scholar]