Frontiers in Aging Neuroscience. 2021 Jun 9;13:682683. doi: 10.3389/fnagi.2021.682683

Neurofeedback and the Aging Brain: A Systematic Review of Training Protocols for Dementia and Mild Cognitive Impairment

Lucas R. Trambaiolli 1,*, Raymundo Cassani 2, David M. A. Mehler 3, Tiago H. Falk 2
PMCID: PMC8221422  PMID: 34177558

Abstract

Dementia describes a set of symptoms that occur in neurodegenerative disorders and that are characterized by the gradual loss of cognitive and behavioral functions. Recently, non-invasive neurofeedback training has been explored as a potential complementary treatment for patients suffering from dementia or mild cognitive impairment. Here we systematically reviewed studies that explored neurofeedback training protocols based on electroencephalography or functional magnetic resonance imaging for these groups of patients. From a total of 1,912 screened studies, 10 were included in our final sample (N = 208 independent participants in the experimental groups and N = 81 in the control groups completing the primary endpoint). We compared the clinical efficacy across studies, and evaluated their experimental designs and reporting quality. In most studies, patients showed improved scores in different cognitive tests. However, data from randomized controlled trials remain scarce, and clinical evidence based on standardized metrics is still inconclusive. In light of recent meta-research developments in the neurofeedback field and beyond, quality and reporting practices of individual studies are reviewed. We conclude with recommendations on best practices for future studies that investigate the effects of neurofeedback training in dementia and cognitive impairment.

Keywords: neurofeedback, dementia, Alzheimer's disease, mild cognitive impairment, electroencephalography, functional magnetic resonance imaging

1. Introduction

Dementia describes a set of symptoms that occur in neurodegenerative disorders caused by damage to and death of neurons. These symptoms include the gradual loss of cognitive, affective and behavioral functions, leading to increasing impairment in activities of daily living (Livingston et al., 2017). Dementia mostly affects people over the age of 65 years and its incidence grows exponentially with age (Prince et al., 2015). For instance, while the prevalence of dementia in the age range of 70–90 years is between 15 and 20% of the population, it is higher than 40% for those aged 90+ years (Plassman et al., 2007; Corrada et al., 2008, 2010). Further, given expected demographic developments, dementia presents a major concern for many societies. While elderly people currently account for about 12% of the world population, this proportion is expected to grow to 21% by 2050 (Wasay et al., 2016), with projections pointing to more than 130 million people with dementia by then (Prince et al., 2015).

Symptoms in dementia are complex and affect different domains, including cognitive (e.g., memory loss, mental confusion, language impairment), behavioral (e.g., irritability, personality changes), and psychological functions (e.g., anxiety, depression, hallucinations), leading to increasing levels of dependency as the disease progresses (World Health Organization, 2012; Prince et al., 2015). Among the biological causes of dementia, the most common in order of frequency are: Alzheimer's Disease (AD), which accounts for nearly 70% of dementia cases, vascular dementia (VD), Lewy body dementia (LBD), and frontotemporal dementia (FTD) (World Health Organization, 2012). Another neurodegenerative disorder that leads to dementia in patients is Parkinson's disease (PD) (Gratwicke et al., 2015). Neurofeedback studies have been successfully conducted with PD patients to treat motor symptoms (Linden and Turner, 2016; Subramanian et al., 2016). To our knowledge, however, and in contrast to neurofeedback-based rehabilitation in stroke (Wang et al., 2018), no neurofeedback study to date has targeted cognitive symptoms in PD patients. Lastly, mild cognitive impairment (MCI) constitutes another condition that features cognitive deficits. These deficits exceed those commonly observed in normal aging, but are less severe than in mild forms of dementia. Notably, MCI is associated with a higher risk of developing dementia (Petersen, 2004, 2011; Mariani et al., 2007; Plassman et al., 2007). For instance, longitudinal evaluations have shown that more than 25% of patients with MCI develop AD in subsequent years, a much higher conversion rate than in a healthy aging population (Boyle et al., 2006; Brodaty et al., 2014). On the other hand, a substantial proportion of patients diagnosed with MCI may recover to cognitive levels comparable to their age group (Ganguli et al., 2011; Han et al., 2012; Klekociuk et al., 2016). MCI patients may hence benefit in particular from complex cognitive interventions, such as neurofeedback training.

Current therapies for dementia focus on providing temporary symptom improvement and reducing the rate of cognitive decline, but are minimally effective at slowing down disease progression (Koyama et al., 2012). Also, existing drugs are not effective for all types of dementia, nor for different severity levels of the same disease. For example, although cholinesterase inhibitors are effective at reducing symptoms in patients with mild to moderate AD, they show negligible effects in patients presenting with very mild or severe symptoms (Cummings et al., 2002; Cummings, 2004). For other types of dementia, such as VD, there is currently no specific drug available, and patients are usually treated off-label with substances that are used to treat AD patients (Erkinjuntti et al., 2004). Moreover, there are no approved efficacious treatments for MCI (Karakaya et al., 2013). Taken together, there is an urgent need to develop effective interventions that can slow disease progression (Hogan et al., 2008). Non-invasive neurofeedback training has been suggested as a potential complementary treatment for dementia. During this intervention, patients actively engage in cognitive tasks and modulate the activity of brain areas that show task-correlated activity and that are selected based on a pathophysiological disease model (Kim and Birbaumer, 2014; Sitaram et al., 2017).

1.1. Neurofeedback Systems

Neurofeedback protocols aim to train users to achieve self-regulation of specific neural substrates through real-time feedback (Kim and Birbaumer, 2014; Sitaram et al., 2017). This learning process is grounded in operant conditioning (or reinforcement learning) such that desired brain activity is rewarded (Ros et al., 2014). Neurofeedback systems consist of three main components: an imaging modality (e.g., functional magnetic resonance imaging), a series of signal processing steps to extract and filter relevant (i.e., ideally neural) information, and a feedback presentation of this information to the user.
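The three components described above form a closed loop that repeats at the update rate of the imaging modality. The following sketch illustrates this loop in Python with simulated data; the function names and the variance-based "feature" are illustrative placeholders, not part of any specific neurofeedback system discussed in this review.

```python
import numpy as np

def acquire_window(n_channels=1, n_samples=250, rng=np.random.default_rng()):
    """Stand-in for the imaging modality: here, simulated EEG samples."""
    return rng.standard_normal((n_channels, n_samples))

def extract_feature(window):
    """Stand-in for the signal-processing step: variance as a crude activity measure."""
    return float(np.var(window))

def present_feedback(feature, baseline=1.0):
    """Stand-in for feedback presentation: a simple text 'thermometer'."""
    level = int(np.clip(10 * feature / baseline, 0, 20))
    print("|" + "#" * level)

def neurofeedback_loop(n_updates=5):
    """Closed loop: acquire -> process -> present, repeated in real time."""
    for _ in range(n_updates):
        window = acquire_window()            # 1) imaging modality
        feature = extract_feature(window)    # 2) signal processing
        present_feedback(feature)            # 3) feedback presentation

neurofeedback_loop()
```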

Two different groups of neuroimaging techniques are used for neurofeedback studies: Electrical or magnetic signals (that result from dipole sources of electrical, neural activity) (Min et al., 2010) are the basis for electroencephalography (EEG) (Enriquez-Geppert et al., 2017) and magnetoencephalography (MEG) (Parkkonen, 2015). These neuroimaging techniques possess high temporal resolution (with sampling rates of up to 100 kHz in modern equipment), allowing for frequent updates of the presented neurofeedback. Blood oxygenation level (i.e., hemodynamic responses) forms the basis for functional magnetic resonance imaging (fMRI) (Weiskopf, 2012; Paret et al., 2019) and functional near-infrared spectroscopy (fNIRS) (Kohl et al., 2020). These imaging techniques thus provide an indirect measure of neural activity, which results from the metabolism of brain cells (Min et al., 2010). fMRI and fNIRS have lower temporal but higher spatial resolution compared to EEG, allowing for more specific targeting of brain structures.

During real-time data processing, recorded signals are converted into an output of the closed-loop system (Sitaram et al., 2017). Ideally, noise-reduction and feature extraction approaches are used to remove artifacts and convert the original time series into standardized and informative measures of neural activity (Gruzelier, 2014). Processing algorithms used to achieve better data quality vary between imaging techniques and regions of interest and remain an active field of methodological research. For example, EEG-based protocols usually involve self-regulation of frequencies or electrical potentials of specific EEG channels (Enriquez-Geppert et al., 2017). On the other hand, fNIRS- and fMRI-based neurofeedback systems focus on the up- or down-regulation of the hemodynamic signal in specific brain areas. Notably, while fMRI allows recording from and thus targeting subcortical areas directly, the spatial resolution of fNIRS is limited to the cortical surface (Kohl et al., 2020).
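As a concrete illustration of the EEG case, the sketch below estimates band power (here alpha, 8–12 Hz) from a single channel with Welch's method and expresses it relative to a resting baseline, one common way of deriving a feedback value. The sampling rate, band limits, and baseline scheme are illustrative assumptions rather than the pipeline of any reviewed study.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg_window, fs=250.0, band=(8.0, 12.0)):
    """Estimate power in a frequency band for one EEG channel via Welch's method."""
    freqs, psd = welch(eeg_window, fs=fs, nperseg=min(len(eeg_window), int(fs)))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])  # integrate the PSD over the band

# Feedback value: current alpha power relative to a resting baseline (simulated data)
rng = np.random.default_rng(0)
fs = 250.0
baseline = band_power(rng.standard_normal(int(2 * fs)), fs=fs)
current = band_power(rng.standard_normal(int(2 * fs)), fs=fs)
feedback_value = current / baseline  # values > 1 would indicate up-regulation
```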

Feedback presentation constitutes another relevant component of neurofeedback protocols. The goal of most currently employed paradigms is to provide users with real-time information about the targeted neural activity, allowing them to adapt their control strategy to achieve a desired level of proficiency (Curran and Stokes, 2003; Birbaumer et al., 2013). Different perceptual modalities can be stimulated, e.g., via auditory, visual, vibrotactile, electrical or proprioceptive feedback systems (Sitaram et al., 2017). The choice and configuration of the feedback modality should be carefully planned because it can interfere negatively with the self-regulation performance and learning curves of participants (McFarland et al., 1998; Birbaumer et al., 2013).

1.2. Rationale for Using Neurofeedback in the Treatment of Dementia

Although the field is relatively new, preliminary studies with healthy participants and different clinical populations suggest that neurofeedback training may be effective in improving brain function, treating cognitive as well as affective symptoms, and inducing brain plasticity (Arns et al., 2017; Sitaram et al., 2017; Thibault et al., 2018). Therefore, neurofeedback training has been suggested as a potential complementary treatment for patients suffering from dementia.

Cognitive decline is a defining feature of dementia. Neurofeedback training combines cognitive training with operant conditioning of the associated neural substrate, e.g., of attention or memory recall (Sitaram et al., 2017). For instance, working memory and attention share common neural mechanisms, which can be trained through top-down cognitive strategies (Cicerone et al., 2011; Gazzaley and Nobre, 2012), leading to performance improvements (Karbach and Verhaeghen, 2014). Thus, neurofeedback-based cognitive training may provide an attractive complementary treatment for patients suffering from different forms of dementia (Jiang et al., 2017). For example, in experiments with healthy elderly participants, EEG-based neurofeedback training led to improved performance in different cognitive domains (Angelakis et al., 2007; Keizer et al., 2010; Lecomte and Juhel, 2011; Becerra et al., 2012; Wang and Hsieh, 2013; Reis et al., 2016; da Paz et al., 2018).

Besides cognitive decline, it is known that more than 70% of patients with dementia experience psychological symptoms, such as anxiety, depression, or apathy (Lyketsos and Lee, 2004; Craig et al., 2005; Steffens et al., 2005). Different neurofeedback protocols were able to target brain areas and networks responsible for emotion processing (Johnston et al., 2010; Linhartová et al., 2019), which opens the possibility of applying this method to treat psychological symptoms in patients suffering from dementia (Kim and Birbaumer, 2014; Arns et al., 2017). For example, randomized controlled trials (RCTs) of fMRI-based neurofeedback training reported improvement of depressive symptoms in patients with major depressive disorder (Young et al., 2017; Mehler et al., 2018) and anxiety symptoms in patients with obsessive-compulsive disorder (Scheinost et al., 2014). For instance, in experiments with older adults, fMRI- and EEG-based neurofeedback training was associated with improved recognition of emotionally valent faces (Rana et al., 2016) and reduced depressive symptoms (Ramirez et al., 2015), respectively.

As previously mentioned, patients with dementia present specific neural signatures that correlate with their symptoms. For example, patients with MCI and AD present altered EEG frequencies (Cassani et al., 2018) and abnormal fMRI functional connectivity at rest (Jacobs et al., 2013; Badhwar et al., 2017). These correlates may provide biomarkers, and their predictive potential as well as validity can be tested by using them as treatment targets (Mehler and Kording, 2018; Micoulaud-Franchi et al., 2019). Similar to brain stimulation protocols, neurofeedback protocols aim to modulate local activity (Linden, 2014). However, since neurofeedback acts as an "endogenous" stimulation, it reduces the safety risks and side effects associated with other approaches, such as non-invasive transcranial stimulation or invasive deep brain stimulation. Specifically, it is hypothesized that, by training participants to regulate such biomarkers, it may be possible to induce cognitive improvements by stimulating residual neural plasticity that is maintained to some degree despite dementia (Mirmiran et al., 1996; Prichep, 2007).

To the best of our knowledge, the available literature lacks a systematic review of neurofeedback training to treat dementia. We intend to fill this gap by pursuing three aims: First, we summarize and compare current findings reported for neurofeedback studies conducted in patients suffering from dementia or mild cognitive impairment, with special attention to clinical effects reflected in standardized cognitive assessment scales; second, we evaluate the design and reporting quality of these studies according to standardized neurofeedback checklists; and third, we provide guidelines for future research that may help the field progress.

2. Methods

This review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (Liberati et al., 2009).

2.1. Search Strategy and Study Selection

A survey of peer-reviewed journal articles published in English up to October 9th, 2020, was performed for this review. The bibliometric databases PubMed, Web of Science, IEEE Xplore, Scopus, and ScienceDirect were queried to collect an initial list of papers containing specific search terms in their title or abstract. The following keywords were used in the search:

  1. Neurofeedback

  2. Neurotherap*

  3. Dement*

  4. Alzheimer*

  5. Lewy*

  6. Frontotemporal

  7. Vascular

  8. Cognitive*impair*

These terms were further combined with logical operators in the following way:

(1 OR 2) AND (3 OR 4 OR 5 OR 6 OR 7 OR 8)
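For illustration, the query above can be assembled programmatically from the two keyword groups; note that the actual field tags and wildcard syntax differ between the databases listed, so this sketch only reproduces the Boolean structure of the search.

```python
# Boolean structure of the search: (1 OR 2) AND (3 OR 4 OR 5 OR 6 OR 7 OR 8)
intervention_terms = ["Neurofeedback", "Neurotherap*"]
condition_terms = ["Dement*", "Alzheimer*", "Lewy*", "Frontotemporal",
                   "Vascular", "Cognitive*impair*"]

query = "({}) AND ({})".format(" OR ".join(intervention_terms),
                               " OR ".join(condition_terms))
print(query)
# (Neurofeedback OR Neurotherap*) AND (Dement* OR Alzheimer* OR Lewy* OR
# Frontotemporal OR Vascular OR Cognitive*impair*)
```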

Resulting articles were selected or rejected based on the criteria described in Table 1. To assess the eligibility of the selected papers, we first evaluated the titles. If the inclusion or exclusion criteria were not clearly met, the abstract was read as well. Finally, the remaining papers were submitted to full text screening and those found to be misaligned with the eligibility criteria were rejected.

Table 1.

Eligibility criteria.

INCLUSION CRITERIA
1. Studies presenting original results (clinical trials, pilot studies, etc.)
2. Studies including patients with a formal diagnosis of dementia
EXCLUSION CRITERIA
1. Studies including samples with other neurological/psychiatric disorders,
or targeting dementia-like symptoms in other disorders
2. Studies exclusively evaluating healthy participants
3. Studies applying biofeedback based only on non-neural signals
4. Studies without voluntary control of brain activity
5. Studies with animal models
6. Review articles, Commentaries, Editorials, and Case Reports (N < 5)

2.2. Data Extraction and Analysis

To collect relevant information while reading the articles, a data extraction sheet was created including 29 data items which were extracted and grouped into five categories: population characteristics (population type, age, gender, education, cerebrospinal fluid (CSF) biomarkers, baseline scores in standardized symptom scales, comorbidity, and current treatment); study design (existence of control group, randomization, blinding, evaluation at baseline, at post-training, and at follow-up); neurofeedback protocol (imaging method, neurofeedback paradigm, control paradigm, number of sessions, session duration, session description, feedback modality, feedback description, and instruction); outcomes from standardized cognitive assessment scales (differences within groups, between groups, and at follow-up); and other cognitive, behavioral, and neural changes (differences within groups, between groups, and at follow-up).

We also evaluated each article based on the study design and reporting quality as previously conducted in systematic neurofeedback reviews (Kohl et al., 2020; Trambaiolli et al., 2021). For this, we scored each study according to the checklist for quasi-experimental studies of the Joanna Briggs Institute (JBI) critical appraisal tools (Tufanaru et al., 2017), and the “Consensus on the Reporting and Experimental Design of Neurofeedback studies” (CRED-nf) checklist (Ros et al., 2019). The JBI checklist includes items regarding clarity of cause and effect, similar participants, similar treatment in compared groups, existence of a control group/condition, multiple measurement points of the outcome, completion of follow-up, similar outcome measurements in compared groups, reliability of outcome measurements, and appropriate statistical methods. On the other hand, the CRED-nf checklist presents a series of 18 essential and eight encouraged items that should be included in neurofeedback design and reports, including pre-experiment registration, control groups and measures, feedback specifications, outcome description, and data storage/publishing. Deviations from the original criteria in both checklists are detailed in the Supplementary Material.
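As an illustration of how such checklist assessments can be turned into the percentage scores reported in section 3.3, the sketch below assumes a simple 0–2 rating per item (not met, partially met, fully met); the item names are abbreviated placeholders and the rating convention is an assumption, not a verbatim reproduction of the CRED-nf or JBI scoring rules.

```python
def checklist_percentage(ratings):
    """Convert per-item ratings (0 = not met, 1 = partially met, 2 = fully met)
    into a percentage of the maximum achievable score."""
    return 100.0 * sum(ratings) / (2 * len(ratings))

# Hypothetical ratings for one study on a small subset of items
example_ratings = {
    "control group": 0,
    "feedback specification": 2,
    "brain-outcome reporting": 1,
    "data storage/sharing": 0,
}
print(round(checklist_percentage(list(example_ratings.values())), 1))  # 37.5
```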

Data extraction was performed by two independent raters who read the full manuscript and reported independently information to separate extraction sheets. These extraction sheets were later compared to ensure data consistency. In case of disagreement on the extracted/scored items, the final decision was made based on discussions between the two raters.

3. Results and Discussion

The database queries identified 1,912 studies that matched the search terms (see Figure 1), with 1,024 unique records remaining after duplicates were removed. Through title and abstract screening, 957 studies were rejected because they did not meet the inclusion criteria. After full-text examination, only ten studies were included in the final sample. Of interest, the oldest study in the final sample dates from 2016, and more than half of the studies have been published in the last 2 years, emphasizing the novelty in the use of neurofeedback as a complementary intervention for patients with dementia. Following the first aim of this review, we provide an overview of the selected studies in the next manuscript section, focusing on provided information about enrolled study populations (Table 2), experimental group design (Table 3), neurofeedback protocol (Table 4), and main outcome measures (Tables 5, 6).

Figure 1.

Figure 1

PRISMA flowchart describing the literature screening.

Table 2.

Summary of population characteristics.

| Study | Population | Age (years) | Gender | Education (years) | CSF biomarkers | Symptom scales (baseline avg ± std) | Comorbidity | Treatment |
|---|---|---|---|---|---|---|---|---|
| DEMENTIA | | | | | | | | |
| Surmeli et al. (2016) | NF: 9 AD, 11 VD | NF: 68.9 ± 10.6 | NF: 9M/11F | NF: 6 (median) | NR | MMSE: NF: 18.8 ± 6.4 | Psychiatric disorders (anxiety, depression, etc.) | Medicines (N = 17) |
| Luijmes et al. (2016) | NF: 10 poss. AD | NF: 71.5 ± 6.7 | NF: 3F/7M | NR | NR | CAMCOG: NF: 0.8 ± 0.1 | No* | Medicines (N = 10) |
| Hohenfeld et al. (2017) | NF1: 10 prom. AD; CG1 (real): 16 HC; CG2 (sham): 4 HC | NF1: 66.2 ± 8.9; CG1: 63.5 ± 6.7; CG2: 64.8 ± 9.5 | NF1: 8M/2F; CG1: 9M/7F; CG2: 3M/1F | NF1: 12 (median); CG1: 13 (median); CG2: 13 (median) | NF1: Yes; CG1: No; CG2: No | MWT-B IQ: NF1: 107.8 ± 14.1, CG1: 124.0 ± 9.3, CG2: 121.5 ± 21.7; MoCA: NF1: 24.8 ± 3.2, CG1: 26.8 ± 2.0, CG2: 26.0 ± 4.2 | No* | NR |
| Hohenfeld et al. (2020) | NF: 9 prom. AD; CG (real): 12 HC | NF: 64.7 ± 8.3; CG: 65.3 ± 6.3 | NF: 7M/2F; CG: 6M/6F | NF: 10 (median); CG: 13 (median) | see Hohenfeld et al., 2017 | MWT-B IQ: NF: 107.3 ± 14.8, CG: 126.0 ± 9.4; MoCA: NF: 25.1 ± 3.3, CG: 26.8 ± 2.2 | No* | NR |
| MCI | | | | | | | | |
| Mendoza Laiz et al. (2018) | NF1: 22 MCI; NF2: 10 MCI | NF1: 66.0 ± 2.2; NF2: 73.1 ± 3.6 | NF1: 12M/10F; NF2: 2M/8F | NR | NR | MMSE: between 18 and 23 | NR | NR |
| Jang et al. (2019) | NF: 5 MCI | NF: 66.6 ± 3.5 | NF: NR | NF: 9.2 ± 3.6 | NR | MoCA-K: NF: 19.4 ± 2.1 | NR | No |
| Jirayucharoensak et al. (2019) | NF: 26 HC, 32 aMCI; CG1 (game): 17 HC, 19 aMCI; CG2 (CAU): 11 HC, 14 aMCI | NF: 71.7 ± 6.5; CG1: 73.9 ± 6.2; CG2: 70.9 ± 5.1 | NF: 58F; CG1: 36F; CG2: 25F | NF: 9.0 ± 5.7; CG1: 9.4 ± 6.0; CG2: 11.4 ± 4.5 | NR | MMSE: NF: 27.3 ± 2.1, CG1: 27.5 ± 2.1, CG2: 27.9 ± 1.8; MoCA: NF: 22.4 ± 4.3, CG1: 22.8 ± 4.4, CG2: 23.6 ± 0.4 | No* | CAU (type not specified) (N = 119) |
| Lavy et al. (2019) | NF: 11 MCI | NF: 70.0 ± 10.0 | NF: 6F/5M | NR | NR | NR | No* | NR |
| Li et al. (2020) | NF: 40 MCI | NF: 54.3 ± 4.9 | NF: 20M/20F | NR | NR | NR | No* | NR |
| Marlats et al. (2020) | NF: 22 MCI | NF: 76.1 ± 5.9 | NF: 5M/17F | NF: 14.9 ± 2.6 | NR | MMSE: NF: 25.4 ± 2.8; MoCA: NF: 23.1 ± 2.5 | No* | NR |

Symptom scales described as primary outcomes are highlighted in bold.

*

Studies describing exclusion criteria for other neurological or psychiatric disorders were considered without comorbidities. CAMCOG, Cambridge Cognitive Examination; CAU, Care as Usual; CG, Control Group; MMSE, Mini Mental Status Examination; MoCA, Montreal Cognitive Assessment; MoCA-K, Montreal Cognitive Assessment - Korean version; MWT-B IQ, Multiple Choice Word Test; NF, Neurofeedback; NR, Not Reported.

Table 3.

Summary of study design.

| Study | Control group | Randomization | Blinding | Baseline evaluation | Post-training evaluation | Follow-up (weeks) |
|---|---|---|---|---|---|---|
| DEMENTIA | | | | | | |
| Surmeli et al. (2016) | No | No | No | Yes | Yes | No |
| Luijmes et al. (2016) | No | No | No | Yes (up to 3 months before) | Yes | No |
| Hohenfeld et al. (2017) | Yes | NR | NR | Yes | Yes | No |
| Hohenfeld et al. (2020) | Yes | NR | NR | Yes | Yes | No |
| MCI | | | | | | |
| Mendoza Laiz et al. (2018) | No | No | No | Yes | Yes | No |
| Jang et al. (2019) | No | No | No | Yes | Yes | No |
| Jirayucharoensak et al. (2019) | Yes | Yes | NR | Yes | Yes | No |
| Lavy et al. (2019) | No | No | No | Yes | Yes | Yes (4) |
| Li et al. (2020) | No | No | No | Yes | Yes | No |
| Marlats et al. (2020) | No | No | No | Yes | Yes | Yes (4) |

NR, Not Reported.

Table 4.

Summary of each neurofeedback protocol employed.

| Study | Imaging method | NF paradigm (participants) | Control paradigm (participants) | Number of sessions | Session duration (min) | Session description | Feedback modality | Feedback description | Instructions |
|---|---|---|---|---|---|---|---|---|---|
| DEMENTIA | | | | | | | | | |
| Surmeli et al. (2016) | EEG | participant-specific protocols (N = 20) | No | 10–96 (avg. 45.0 ± 27.3) | 60 | NR | NR | NR | NR |
| Luijmes et al. (2016) | EEG | participant-specific protocols (N = 10) | No | 30 | 30 | 4 blocks, with 5 min breaks | Visual and auditory | Movie with varying contrast and beeping sound | No |
| Hohenfeld et al. (2017) | fMRI | ↑ left parahipp. gyrus (N = 10) | CG1: ↑ left parahipp. gyrus (N = 16); CG2: ↑ left primary somatosensory cortex (N = 4) | 3 | 60 | 4 blocks containing 12 trials (6 activation + 6 resting-state) of 40 s each | Visual | Thermometer bar | To remember footpath and/or count backwards |
| Hohenfeld et al. (2020)* | fMRI | ↑ left parahipp. gyrus (N = 9) | ↑ left parahipp. gyrus (N = 12) | 3 | 60 | 4 blocks containing 12 trials (6 activation + 6 resting-state) of 40 s each | Visual | Thermometer bar | To remember footpath and/or count backwards |
| MCI | | | | | | | | | |
| Mendoza Laiz et al. (2018)** | EEG | ↓ alpha (11–13 Hz) and ↑ beta (17–22 Hz) in C3, Cz, and C4 (N = 22 and 10) | No | 5 | 60 | 60 trials with five different difficulty levels | Visual | Open or close virtual doors, or moving cursor | To imagine hand movements |
| Jang et al. (2019) | EEG | ↑ beta (12–15 Hz) in F6 (N = 5) | No | 16 | 45 | 9 trials of 5 min each | Visual | Moving a boat, or changing blurred flowers | To develop personal strategies |
| Jirayucharoensak et al. (2019) | EEG | ↑ beta (12–32 Hz)/alpha (8–12 Hz) ratio in AF3 and AF4 (N = 58) | game (N = 38), CAU (N = 25) | 20 | 30 | 5 blocks of 4–5 min separated by breaks of 2 min | Visual | Real-time game (5 different games) | Existent, but not reported |
| Lavy et al. (2019) | EEG | ↑ alpha (8–10 Hz) in Pz (N = 11) | No | 10 | 32 | 10 trials of 3 min each, separated by breaks of 10 s | Visual and auditory | Balls moving in 3D and beeping sound | To develop personal strategies |
| Li et al. (2020) | EEG | self-regulation of alpha (8–13 Hz) band and beta (13–30 Hz)/alpha (8–13 Hz) ratio (N = 40) | No | 10 | No limit | NR | Visual | NR | NR |
| Marlats et al. (2020) | EEG | ↑ SMR (12–15 Hz) and ↓ theta (4–8 Hz) and beta (21–30 Hz) in Cz (N = 22) | No | 20 | 75 | NR | Visual and auditory | Animated graphics | Existent, but not reported |
*

Methodological information detailed in previous publication from Hohenfeld et al. (2017).

**

Methodological information detailed in previous publication from Gomez-Pilar et al. (2016).

CG, Control Group; NR, Not Reported; SMR, Sensory-Motor Rhythm; ↑, expected up-regulation; ↓, expected down-regulation.

Table 5.

Summary of outcomes from standardized cognitive assessment scales.

| Study | Within groups | Between groups | Follow-up |
|---|---|---|---|
| DEMENTIA | | | |
| Surmeli et al. (2016) | NF: ↑ MMSE (19.00%) | N/A | N/A |
| Luijmes et al. (2016) | NF: ↑ CAMCOG (2.00%) | N/A | N/A |
| Hohenfeld et al. (2017) | NF: ↓ MoCA (1.00%); CG1: ↑ MoCA (3.97%); CG2: ↑ MoCA (0.83%) | NR | N/A |
| Hohenfeld et al. (2020) | NR | CG>NF: ↑ MoCA | N/A |
| MCI | | | |
| Mendoza Laiz et al. (2018) | NR | NR | N/A |
| Jang et al. (2019) | NF: ↑ MoCA-K (20.67%) | N/A | N/A |
| Jirayucharoensak et al. (2019) | NR | NR | N/A |
| Lavy et al. (2019) | NR | N/A | NR |
| Li et al. (2020) | NR | N/A | N/A |
| Marlats et al. (2020) | NF: ↑ MMSE (1.67%), ↑ MoCA (6.33%) | N/A | Regression of MoCA improvement |

CAMCOG, Cambridge Cognitive Examination; CG, Control Group; MMSE, Mini Mental Status Examination; MoCA, Montreal Cognitive Assessment; MoCA-K, Montreal Cognitive Assessment-Korean version; N/A, Not Apply; NF, Neurofeedback; NR, Not Reported; ↑, increased; ↓, decreased.

Table 6.

Summary of cognitive, behavioral, and neural outcomes.

| Study | Significant cognitive and behavioral changes (within groups) | Significant cognitive and behavioral changes (between groups) | Significant neural changes (within groups) | Significant neural changes (between groups) | Follow-up |
|---|---|---|---|---|---|
| DEMENTIA | | | | | |
| Surmeli et al. (2016) | NF: ↑ orientation and recall MMSE subscales; ↑ commission errors and reaction time variability TOVA subscales; ↓ CGI | N/A | NF: ↓ in theta activity; ↓ interhemispheric coherence | N/A | N/A |
| Luijmes et al. (2016) | No significant changes | N/A | NR | N/A | N/A |
| Hohenfeld et al. (2017) | NF: ↑ delayed recall of the visuospatial memory task of the VVM; CG1: ↑ immediate recall condition of the visuospatial task of the VVM, ↑ backward digit-span task of the WMS; CG2: no significant differences | NR | No significant changes | NF>CG1: gray matter volume loss in the left parahipp. gyrus | N/A |
| Hohenfeld et al. (2020) | NR | Mixed model: time effect for MoCA, VVM visuo-spatial memory test, and delayed recall; group differences for MoCA | NR | CG>NF: activation in voxel clusters during task | N/A |
| MCI | | | | | |
| Mendoza Laiz et al. (2018) | NF1: ↑ visual perception, ↑ spatial orientation, ↑ receptive and expressive speech, ↑ logical and immediate memory, ↑ picture recognition and concepts; NF2: ↑ picture recognition and concepts | No significant changes | NR | NR | N/A |
| Jang et al. (2019) | NF: ↑ CNSVS for composite and visual memory, cognitive flexibility, complex attention, reaction time, and executive function; ↑ WM performance | N/A | NF: ↑ beta frequency | N/A | N/A |
| Jirayucharoensak et al. (2019) | NF: ↓ SWM_BER, ↓ SWM_STR, ↑ RVP_A'; CG1 (game): SSP_SPAN | time × treatment groups on SWM_BER, SWM_STR, RVP_A', and SSP_SPAN; aMCI>HC: SWM_BER, SWM_STR, DMS_PER; HC>aMCI: PRM_COR, DMS_COR | NR | NR | N/A |
| Lavy et al. (2019) | NR | N/A | NF: + corr. between peak alpha and session number; ↑ composite memory following training; ↑ non-verbal and verbal recall task | N/A | Sustained: composite memory improvement |
| Li et al. (2020) | NR | N/A | NF: ↑ connectivity in delta, theta, alpha and beta bands | N/A | N/A |
| Marlats et al. (2020) | NF: ↑ delayed recall of the RAVLT; ↑ forward digit span; ↑ Mac Nair score; ↓ GAS; ↑ WAIS-IV | N/A | NF: ↑ overall theta and alpha power | N/A | Sustained: changes in theta and alpha power, and cognitive items; Reduced: forward digit span |

CAMCOG, Cambridge Cognitive Examination; CG, Control Group; CGI, Clinical Global Impression; CNSVS, Central Nervous System Vital Signs; DMS_COR, Delayed Matching to Sample Total Correct; DMS_PER, Delayed Matching to Sample Percent Correct; GAS, Goldberg Anxiety Scale; MMSE, Mini Mental Status Examination; N/A, Not Apply; NF, Neurofeedback; NR, Not Reported; PRM_COR, Pattern Recognition Memory Number Correct; RAVLT, Rey auditory verbal learning test; RVP_A', Rapid Visual Information Processing A prime; SSP_SPAN, Spatial Span Length; SWM_BER, Spatial Working Memory Between Error; SWM_STR, Spatial Working Memory Strategy; TOVA, Test of Variables of Attention; VVM, Visual and Verbal Memory Test; WAIS-IV, Wechsler Adult Intelligence Score-IV; WM, Working Memory task; WMS, Wechsler Memory Scale; ↑, increased; ↓, decreased.

3.1. Overview of Studies

In 2016, two non-controlled experiments targeted the effect of EEG-based neurofeedback in patients with different types of dementia. Surmeli et al. (2016) conducted neurofeedback training with nine AD patients and eleven VD patients (mean age of 68.9 ± 10.6 years). All patients were receiving pharmacological treatment including cholinesterase inhibitors and antidepressant medication. The authors used individual, participant-specific training protocols that were based on the self-regulation of different EEG frequency bands across the scalp and with varying numbers of training sessions. After 10–96 training sessions, patients showed on average an improvement of 19.00% in their Mini Mental State Examination (MMSE) scores compared to baseline measures. Statistically significant changes were found in the orientation and recall sub-scales. This improvement is larger than what has been considered a meaningful clinical change (Howard et al., 2011; Andrews et al., 2019). However, the lack of control groups renders the interpretation difficult because the extent to which these changes are non-specific remains unclear (Ros et al., 2019; Sorger et al., 2019). Further, a reduction in the Clinical Global Impression (CGI) scale was also observed between the beginning and end of treatment. Moreover, the authors reported that after training 19 of 20 patients were withdrawn from their medication to treat dementia symptoms because their clinical improvement was considered sufficient.

Luijmes et al. (2016) evaluated patients with AD diagnosed following the NINCDS-ADRDA guidelines (McKhann et al., 1984) (N = 10, 71.5 ± 6.8 years), who were receiving pharmacological treatment, during 30 sessions of neurofeedback training. Using participant-specific designs, patients were instructed to self-regulate different EEG frequency bands on midline electrodes. A slight improvement (2.00%) in the Cambridge Cognitive Examination (CAMCOG) scale (primary outcome) was observed, with the highest improvement being reported in the Memory Learning sub-scale.

Hohenfeld et al. (2017) trained one group of prodromal AD patients (N = 10, 66.2 ± 8.9 years) and an age-matched healthy control group (CG1, N = 16, 63.5 ± 6.7 years) to up-regulate the fMRI signal of the left parahippocampal gyrus (PHG) in a non-randomized controlled study. A second age-matched healthy control group (CG2, N = 4, 64.8 ± 9.5 years) received sham feedback from the left primary somatosensory cortex. Participants were instructed to recall a footpath previously encoded in a visuospatial memory task, or to count backwards during rest periods. After three training sessions, the healthy participants receiving real feedback (CG1) showed significant improvements on the Montreal Cognitive Assessment test (MoCA, 3.97%) and increased flow of input functional connectivity in the left PHG. However, no significant changes in the MoCA scores were observed in the prodromal AD group (−1.00%) or the sham control group (0.83%). In a secondary analysis, the authors compared clinical improvements and brain activations in different ROIs between sub-samples of prodromal AD and healthy participants (Hohenfeld et al., 2020). The authors reported that the healthy control group showed significantly higher MoCA scores and activation of the left PHG following neurofeedback training.

Six other studies evaluated the effect of EEG-based neurofeedback training in patients with MCI. In a non-controlled study, Mendoza Laiz et al. (2018) trained two groups of MCI patients (divided according to age range) with an EEG-based neurofeedback protocol to down-regulate the alpha (11–13 Hz) and up-regulate the beta (17–22 Hz) frequency bands at the C3, Cz, and C4 electrodes using motor imagery of hand movements. The neurofeedback training comprised five sessions, which alternated with five working memory training sessions during which feedback was not provided. The working memory training task consisted of exercises related to different shapes, colors, and expressions. At the primary endpoint, the group including patients between 61 and 69 years of age (N = 22, 65.9 ± 2.2 years) showed significant improvement in several subscales of the Luria-DNA neuropsychological battery, including visual perception and orientation, receptive and expressive speech, and logical and immediate memory. Significant results were also observed for the group including patients between 70 and 81 years of age (N = 10, 73.1 ± 3.6), but only for the concept and picture recognition subscales. However, with the study design used it is not possible to disentangle the reason for this improvement, as it may result from neurofeedback-specific effects, from non-specific effects (e.g., increased motivation or attention) that occurred in the context of the motor imagery NF training, or from the working memory task.

Jang et al. (2019) published a non-controlled pilot EEG-neurofeedback study in which five MCI patients (66.6 ± 3.5 years) were instructed to develop their own strategies to up-regulate the beta (12–15 Hz) frequency band at the F6 location (putatively recording activity of the dorsolateral prefrontal cortex). After 16 training sessions, patients showed a significant improvement of 20.67% in the Korean version of the MoCA scale, an improvement that is larger than what is considered a minimum detectable change (Feeney et al., 2016). Moreover, patients showed improvements in several domains of the Central Nervous System Vital Signs (CNSVS) neurocognitive test battery, including visual and composite memory, cognitive flexibility, complex attention, reaction time, and executive function. The authors also report significantly better performance in an N-back working memory task at the primary endpoint, and a significant correlation between beta power and session number. However, this single-group study did not control for non-specific effects, rendering the interpretation of the reported symptomatic changes difficult.

That same year, Lavy et al. (2019) conducted another non-controlled study with 11 patients suffering from MCI (70.0 ± 10.0 years). They trained patients to increase EEG power in the alpha (8–10 Hz) frequency band over the central parietal region (Pz location) during 10 sessions. A positive correlation between peak alpha frequency and session number was observed. After the intervention, patients showed increased performance on composite memory and verbal and non-verbal memory recall tasks. At the 4-week follow-up, the composite memory improvement, but not the improvements in other domains, was maintained. Again, the lack of a control group made it impossible to control for unspecific effects, limiting the conclusiveness of these findings.

In 2020, Li et al. (2020) trained 40 MCI patients (54.3 ± 4.9 years) in the self-regulation of both the power in the alpha (8–13 Hz) frequency band and the beta (13–30 Hz)/alpha (8–13 Hz) power ratio across 10 EEG-neurofeedback sessions. In this non-controlled setup, patients showed a significant increase of the overall connectivity in the delta, theta, alpha and beta bands. No behavioral outcome measures were reported.

Also in 2020, another non-controlled experiment, reported by Marlats et al. (2020), aimed to entrain sensory-motor rhythm (SMR, 12–15 Hz) frequencies in twenty-two MCI patients (76.1 ± 5.9 years). Patients were trained to up-regulate SMR frequencies and down-regulate the theta (4–8 Hz) and beta (21–30 Hz) frequency bands at the Cz electrode. The authors report that, from the initial sample, only 20 participants completed the training. These patients presented improved scores on the MMSE (1.67%) and MoCA (6.33%) scales, as well as on the Goldberg Anxiety Scale (GAS) and the Wechsler Adult Intelligence Scale IV (WAIS-IV). A significant increase in overall spectral power was also observed for the theta and alpha bands. At the 4-week follow-up, cognitive and EEG changes were sustained, although the MoCA scores had returned to baseline levels.

Finally, the only RCT included in this review (Jirayucharoensak et al., 2019) compared participants trained to up-regulate the EEG beta (12–32 Hz)/alpha (8–12 Hz) ratio in AF3 and AF4 (N = 58, 71.7 ± 6.5 years) with two other groups: one control group (CG1, N = 36, 73.9 ± 6.2 years) engaged in a game-based physical exercise program (also referred to as "exergame"), while the other control group received only care as usual (no further details reported; CG2, N = 25, 70.9 ± 5.1 years). However, the samples in all three groups contained both healthy participants and MCI patients (please refer to Table 2). A significant treatment effect was reported for the experimental group in three subscales of the Cambridge Neuropsychological Test Automated Battery (CANTAB): spatial working memory between error, spatial working memory strategy, and rapid visual information processing. However, although the authors report larger effects in MCI patients vs. healthy participants, between-group comparisons may be influenced by the heterogeneity within the experimental and control groups. Because these groups contained healthy elderly participants as well as MCI patients, reported group differences may have been driven by CANTAB score improvements of (some) healthy elderly participants [note that, in contrast to measurements such as the MMSE or MoCA, the CANTAB is a more complex test battery that prevents ceiling effects (Coull et al., 1995)].

3.2. Comparison of Cognitive Efficacy, Feasibility, and Safety Across Studies

The different cognitive screening instruments used for the diagnosis of dementia are highly correlated (Stewart et al., 2012; Trzepacz et al., 2015). Thus, we compared the clinical efficacy across studies after converting the different assessment scales to percentage values based on the scale maximum scores (Table 5 and Figure 2). The scores used in this comparison were the ones listed as the primary outcome of each study (highlighted in bold in Table 2). For papers not identifying a primary outcome, we adopted a conservative approach and evaluated the scale with the lower difference between baseline and the primary endpoint.
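A minimal sketch of this normalization is shown below for the two 30-point scales (MMSE and MoCA); the example values are illustrative, and the approach simply expresses scores and changes as percentage points of the scale maximum.

```python
SCALE_MAX = {"MMSE": 30, "MoCA": 30}  # both screening scales have a 30-point maximum

def to_percent(score, scale):
    """Express a raw screening score as a percentage of the scale maximum."""
    return 100.0 * score / SCALE_MAX[scale]

# Illustrative example: baseline MoCA of 23.1 and post-training MoCA of 25.0
baseline_pct = to_percent(23.1, "MoCA")   # 77.0%
post_pct = to_percent(25.0, "MoCA")       # ~83.3%
change_points = post_pct - baseline_pct   # ~6.3 percentage points of improvement
```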

Figure 2.

Figure 2

Summary of cognitive improvement according to standardized cognitive screening scales. Baseline and post-neurofeedback measures are normalized as a percentage of the respective scale maxima. Studies including patients with a formal diagnosis of dementia are shown in orange, and those including patients with mild cognitive impairment (MCI) in blue. If a study did not report a primary outcome, we adopted a conservative approach and included in this chart the results from the scale showing the lower improvement. Solid bars represent the baseline scores, and dashed lines the post-intervention values. NF, Neurofeedback; NR, Not Reported.

Figure 2 shows the baseline scores (solid bars) and the scores at the primary endpoint (dashed bars). Samples including patients with dementia are shown in orange, and those studying patients with MCI are shown in blue. One interesting observation is the low score presented by the MCI population in one study (Jang et al., 2019), suggesting that these participants may already have been in a transitional stage toward dementia (Flicker et al., 1991). On the other hand, the study presenting the highest scores at baseline (Jirayucharoensak et al., 2019) included both MCI patients and healthy participants in the neurofeedback group, which may explain the higher bar level.

Although many studies report improved performance in different memory tasks, or subscales, after the intervention (Surmeli et al., 2016; Hohenfeld et al., 2017; Jang et al., 2019; Jirayucharoensak et al., 2019; Lavy et al., 2019), only five studies reported changes in standardized cognitive screening instruments (Figure 2). The average difference across these studies, weighted by each study's sample size, is 8.1%. However, although in two of these cases the differences were around 20.0% (Surmeli et al., 2016; Jang et al., 2019), in the three other studies these differences ranged from −1.0% (Hohenfeld et al., 2017) to 2.0% (Luijmes et al., 2016). Also, only one of the two studies that included follow-up sessions reported scores for a standardized cognitive screening scale (Marlats et al., 2020). In this study, the MoCA scores at follow-up returned to levels similar to those observed at baseline, suggesting that clinical improvements might not be sustained long-term.
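For transparency, the 8.1% figure above can be reproduced as a sample-size-weighted mean of the per-study changes listed in Table 5, assuming the conservative (lower) change for Marlats et al. (2020) and the 20 completers of that study; this is a plausible reconstruction under those assumptions, not the authors' published computation.

```python
import numpy as np

# Changes in % of scale maximum (Table 5): Surmeli, Luijmes, Hohenfeld (2017),
# Jang, Marlats (conservative MMSE value); weights are completers per NF group
changes = np.array([19.00, 2.00, -1.00, 20.67, 1.67])
n_completers = np.array([20, 10, 10, 5, 20])
print(round(np.average(changes, weights=n_completers), 1))  # 8.1
```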

One possible explanation for such differences in behavioral/cognitive changes across studies might be the heterogeneity of training protocols. For instance, two studies employed participant-specific designs, with one study reporting substantial cognitive improvements (Surmeli et al., 2016) and one study reporting marginal cognitive improvements in patients (Luijmes et al., 2016). Moreover, all other EEG-based protocols included different channels or frequency ranges in their protocols (Mendoza Laiz et al., 2018; Jang et al., 2019; Jirayucharoensak et al., 2019; Lavy et al., 2019; Li et al., 2020; Marlats et al., 2020), limiting comparisons across studies. Future studies may benefit from translating (standardized) protocols that have been successfully tested in elderly participants (Laborda-Sánchez and Cansino, 2021) to evaluate their potential efficacy in patients suffering from dementia or cognitive impairment. However, we note that a previous review of this field identified mostly non-RCT studies (Laborda-Sánchez and Cansino, 2021), limiting the conclusions that can be drawn about the effectiveness of these protocols. Alternatively, protocols that were previously tested in healthy participants could be translated to patients. For instance, a recent meta-analysis (Yeh et al., 2020) evaluated RCTs of EEG neurofeedback training of the alpha frequency band in healthy individuals. Findings suggested that this protocol may be an effective option to improve working memory and episodic memory. Specifically, the authors found moderate effect sizes for both memory categories, whilst further analysis showed little risk of publication bias among these RCTs (Yeh et al., 2020). Efficacy in improving cognitive function has thus far mainly been demonstrated in young and healthy individuals; as a next step, this protocol will require translation to patients suffering from dementia or MCI. Further, we identified only one protocol that used fMRI neurofeedback training (Hohenfeld et al., 2017). Hence, there remains substantial scope for future neurofeedback studies to explore the potential of fMRI targets (e.g., subcortical areas, such as the hippocampus) to treat core dementia and MCI symptoms (e.g., memory loss) (Ruan et al., 2016).

Another relevant aspect is the lack of control groups and conditions in many experimental designs. In this review, we identified only one RCT (Jirayucharoensak et al., 2019) and one controlled protocol (Hohenfeld et al., 2017, 2020), which stands in stark contrast to the 16 RCTs that were recently reported in a meta-analysis of EEG protocols for memory improvement in healthy participants (Yeh et al., 2020). Another difference worth noting is that most trials included in the meta-analysis by Yeh et al. featured larger sample sizes per group, and several trials included multiple arms, enabling them to control for different non-specific effects (Sorger et al., 2019). In the only RCT included in our review, the authors compared neurofeedback training with treatment as usual, including both MCI patients and healthy participants in all groups (Jirayucharoensak et al., 2019). As described above, such group composition may have influenced within- and between-group comparisons. Regarding the other controlled study included in this review, the control groups (one of which received sham feedback) were composed exclusively of healthy participants (Hohenfeld et al., 2017), which limits any clinical conclusions. Taken together, the lack of adequately controlled studies of neurofeedback training targeting cognitive symptoms in dementia and MCI patients severely limits interpretations.

On a positive note, neurofeedback protocols are rarely associated with side effects (Hawkinson et al., 2012). Potential side effects include physical discomfort before [e.g., during EEG cap preparation and calibration (Nijholt et al., 2011)] or during training sessions [e.g., claustrophobia due to the physical restriction in fMRI scanners (Sulzer et al., 2013)]. Notably, only one study reported withdrawals before the primary endpoint (Marlats et al., 2020), one other study reported drop-outs before the follow-up was completed (Lavy et al., 2019), and three studies reported data exclusion due to technical problems or excessive noise in the recordings (Hohenfeld et al., 2017, 2020; Jang et al., 2019). However, none of these studies reported serious side effects of the neurofeedback intervention. These findings are in line with those reported by systematic reviews of other clinical and non-clinical neurofeedback applications (Kohl et al., 2020; Tursic et al., 2020; Trambaiolli et al., 2021). The safety and feasibility of such experimental setups is especially important in dementia research (as discussed in section 4), since changes in environment, interaction with experimenters, and task demands can trigger emotional and psychological distress in elderly participants (Hellström et al., 2007; Novek and Wilkinson, 2019).

3.3. Experimental Design and Reporting Quality

Following our second aim, we present a systematic evaluation of studies' experimental design and reporting quality using the CRED-nf checklist (Ros et al., 2019) and the JBI critical appraisal tools (Tufanaru et al., 2017) (Figure 3, please see Supplementary Material for detailed scoring for each study).

Figure 3.

Figure 3

The Consensus on the Reporting and Experimental Design of Neurofeedback studies (CRED-nf) percentage scores (A) per study, and (B) averaged per category. The Joanna Briggs Institute (JBI) averaged percentage scores (C) per study, and (D) averaged per category. **Methodological information detailed in previous publication from Hohenfeld et al. (2017); *Methodological information detailed in previous publication from Gomez-Pilar et al. (2016).

The average CRED-nf score (as percentage) across studies was 50.8 ± 20.8% for essential, 6.3 ± 6.7% for encouraged, and 35.3 ± 14.8% for all items. As shown in Figure 3A, only one study had an overall score (gray bars) above 50.0%, while two studies showed total scores that were below 25.0%. In Figure 3B, it is notable that the items with the lowest scores are related to control groups and conditions, and data sharing (the latter was not fulfilled by any study included in this review). Comparing the essential, encouraged, and total CRED-nf ratings for studies in dementia with other fields, these scores are substantially lower than those reported in systematic reviews about EEG and fMRI neurofeedback training in depression (~65.0, 13.0, and 47.0%, respectively, Trambaiolli et al., 2021), and fNIRS neurofeedback in non-clinical/clinical populations (63.0, 10.0, and 45.0%, respectively, Kohl et al., 2020).

For the JBI checklist, the average score (as a percentage) across studies was 68.1 ± 9.3% (Figure 3C), with the lowest scores being related to the similarity between groups and to control design (Figure 3D). Comparing the JBI ratings for studies in dementia patients with other fields, the scores are similar to those reported in systematic reviews of EEG and fMRI neurofeedback training in depression (mean 68.89%, Trambaiolli et al., 2021), fMRI neurofeedback in stroke patients (mean 70.00%, Wang et al., 2018), and fNIRS neurofeedback in non-clinical/clinical populations (mean 61.67%, Kohl et al., 2020). However, an important difference is that the number of included studies in these previous reviews (24, 33, and 22, in the reviews on depression, stroke, and fNIRS, respectively) is higher than the number of studies included in the current review (N = 10).

The highest scores in the CRED-nf checklist were obtained in the “Feedback specification” category. Items from this group check for the instrumental component of the feedback protocol (Ros et al., 2019). These relatively good scores are in agreement with other applications of neurofeedback (Kohl et al., 2020; Trambaiolli et al., 2021), although specific details about applied real-time artifact correction procedures are often still lacking for most neurofeedback studies (Heunis et al., 2020).

On the other hand, some of the lowest scores are related to control groups and conditions. This pattern is mirrored by the low scores on the "Similar participants," "Similar treatment," and "Control group/conditions" items of the JBI checklist. This finding can be explained, for example, by the fact that 7 out of 10 studies in this review used single-group designs (Luijmes et al., 2016; Surmeli et al., 2016; Mendoza Laiz et al., 2018; Jang et al., 2019; Lavy et al., 2019; Li et al., 2020; Marlats et al., 2020). This approach is relevant in the early stages of a new intervention, for instance to assess safety as well as technical and clinical feasibility (similar to Phase I clinical trial designs) (Sorger et al., 2019). However, given that several unspecific effects contribute to the overall outcome of neurofeedback training (Micoulaud-Franchi and Fovet, 2018; Ros et al., 2019), appropriately controlled experiments are much needed to validate clinical applications (Thibault et al., 2018; Sorger et al., 2019).

Finally, low scores in “Pre-experiment” and “Data storage” items indicate that study registration and data sharing, i.e., transparent research practices, are still lacking. A similar conclusion was drawn in our recent systematic reviews of fNIRS neurofeedback training (Kohl et al., 2020) as well as for neurofeedback training in depression (Trambaiolli et al., 2021).

4. Future Directions

Studies evaluating potential clinical interventions for dementia require careful planning and preparation because they face several methodological, analytical and ethical challenges (Hellström et al., 2007; Richard et al., 2012; Ritchie et al., 2015; Novek and Wilkinson, 2019). For instance, patients may present with a mix of amyloid, vascular or other pathologies associated with dementia (Schneider et al., 2007). Thus, traditional single-intervention RCTs may not be the most adequate approach for dementia (Richard et al., 2012). Another challenge concerns when assessments should be performed (Ritchie et al., 2015), since follow-up studies are commonly affected by high mortality rates (Agüero-Torres et al., 1999; Andersen et al., 2010). Further, participation in complex interventions, such as neurofeedback training, may result in psychological side effects. For instance, tasks and interactions with researchers may cause distress (Hellström et al., 2007; Novek and Wilkinson, 2019), or even trigger aggressive behaviors in patients (Whall et al., 1997; Orengo et al., 2008). With this in mind, research using neurofeedback protocols in this population should be carefully designed to minimize risks and optimize potential benefits. Thus, in line with the final aim of this review, we provide recommendations for future research evaluating the effects of neurofeedback training in patients suffering from dementia or MCI.

4.1. Comprehensive Clinical Documentation

Although standardized cognitive screening scales were used as part of the diagnostic process, four out of ten studies did not report baseline scores according to these scales. Further aspects of comprehensive clinical documentation include a detailed description of previous pharmacological treatments and other possible comorbidities. To ensure reliable clinical results and to allow comparison between studies, future studies should report baseline comparisons for formal and standardized assessment scores (e.g., MMSE, MoCA, or CAMCOG). Regarding the choice of the primary and secondary outcome measures, we recommend using cognitive testing tools that capture the core cognitive processes being targeted (Lubianiker et al., 2019). Further, because dementia also affects other psychological domains, such as mood, we encourage researchers to use adequate scales that allow monitoring potential mood changes. For instance, lower mood and elevated anxiety are often observed in patients suffering from dementia and may impact disease trajectories (Paterniti et al., 2002). Previous work has demonstrated substantial therapeutic effects of neurofeedback training to treat mood or anxiety (Tolin et al., 2020; Trambaiolli et al., 2021), partly based on protocols that employed memory-based self-regulation strategies (Young et al., 2017) or entrained areas of the hippocampal formation (see the active control group in Mehler et al., 2018). Notably, mood and anxiety influence cognitive performance, and may hence mediate observed effects (McDermott and Ebmeier, 2009; de Vito et al., 2019). Standardized clinical mood scales may further be complemented by measurements that allow disentangling changes in symptoms that occur within and between training sessions (Mehler et al., 2021). In addition to clinical scales, which bear the risk of a reductionist view of treatment effects, using additional qualitative or semi-quantitative measures (e.g., testimonials of patients, relatives, and caregivers) may be worthwhile.

To ensure a comprehensive clinical documentation of patients, we recommend measuring and reporting well-established molecular biomarkers and risk factors for dementia. Notably, only one (Hohenfeld et al., 2017) of the included protocols reported such data. In this context, we want to highlight recent research from Skouras et al., who found an association between a clinical biomarker for AD and self-regulation success (please note that these studies were not included in our systematic review because the authors did not report cognitive or clinical outcome measurements). First, the authors showed that participants carrying APOE-ε4 alleles showed lower self-regulation performance during hippocampal down-regulation compared to non-carriers (Skouras et al., 2019). Second, they reported reduced eigenvector centrality (i.e., less influence based on iterative whole-brain connectomics) in the anterior cingulate cortex and primary motor cortex during hippocampus down-regulation when comparing cognitively unimpaired participants who had abnormal levels of CSF amyloid-β peptide 42 with cognitively unimpaired participants with lower CSF amyloid-β peptide 42 levels (Skouras et al., 2020). This example shows how biomarker data can provide valuable insights about neurofeedback literacy in patients with dementia. For instance, carriers of neurobiological biomarkers may need more training sessions to achieve a performance comparable to non-carriers, and the training protocol may need to be designed accordingly. Lastly, information about recruitment and attrition should be documented alongside the other phases of the study (from screening to follow-up) using a CONSORT diagram (Moher et al., 2012).

4.2. Appropriately Controlled Study Designs

One challenge when evaluating neurofeedback training is to adequately control for unspecific effects; e.g., the reported cognitive improvement of patients may be partly or even mostly due to motivational factors, positive personal beliefs, and engagement in the experiment (Thibault and Raz, 2017; Thibault et al., 2017). Several studies have reported cognitive improvements that were larger than minimally detectable changes (MDC), reliable change indices (RCI) or even minimum clinically important differences (MCID), suggesting that changes may be both reliable and clinically meaningful. However, properly controlled study designs are required to determine whether such effects are specific to the neurofeedback protocol used. In this context, an experimental framework for the proper design of control groups or conditions in neurofeedback experiments was recently described by Sorger et al. (2019). For studies evaluating patients with dementia or MCI, different control conditions should be considered. To control for unspecific effects, future studies should consider control conditions that emulate the same experimental environment and reward process (Sorger et al., 2019). Possible strategies include the presentation of sham feedback (Hohenfeld et al., 2017), targeting different areas or networks, or more recent approaches, such as the "randomized ROI" condition, in which participants of the control group are randomly assigned to different subsets of neural control targets (Lubianiker et al., 2019). Further, studies need to evaluate whether participants remained blind to their assigned group. These aspects are relevant because control beliefs can directly affect training performance and future engagement (Witte et al., 2013). Since one main goal of neurofeedback treatment studies in dementia is to evaluate possible clinical benefits, and current treatment options are limited, the evaluation of neurofeedback alongside standard-of-care interventions is particularly desirable (Cox et al., 2016). Lastly, in the two controlled studies identified in this review, the control groups were completely (Hohenfeld et al., 2017) or partially (Jirayucharoensak et al., 2019) composed of healthy elderly participants. Comparisons between different study populations may invalidate conclusions regarding possible therapeutic effects. Thus, studies should include groups with similar clinical characteristics, e.g., they should be matched for diagnosis, age, gender, and education level.

4.3. Appropriately Powered Study Designs

Notably, many studies reviewed here neither reported sampling plans nor were labeled accordingly as a "pilot" or "proof-of-concept" study. Hence, the robustness of clinical findings remains limited (Ros et al., 2019; Sorger et al., 2019). Future studies should be powered appropriately to allow detecting relevant effects and to provide more precise estimates. Sampling plans should be based on plausible effect sizes, i.e., the smallest effect size one is not willing to miss at given type-I and type-II error rates (Algermissen and Mehler, 2018), rather than on estimates from small pilot studies. The latter tend to overestimate true effects and, when used to inform power calculations, may bias follow-up studies, e.g., by leaving them underpowered (Albers and Lakens, 2018). Ideally, the choice of the smallest relevant effect size is informed by meta-research, e.g., clinically relevant effect sizes reported for the chosen primary outcome measure (Howard et al., 2011; Kopecek et al., 2017; Andrews et al., 2019). We acknowledge, however, that achieving sufficient recruitment may be challenging in the targeted populations (e.g., due to inclusion or exclusion criteria) and that researchers may face considerable attrition rates (in particular for longitudinal designs). Hence, we recommend that researchers explore recent statistical developments. For instance, repeated measurements with mixed-effects modeling may increase statistical power (Aarts et al., 2015). Further, potential recruitment difficulties could be mitigated by flexible statistical approaches, such as sequential Bayesian sampling (Schönbrodt and Wagenmakers, 2018), which allows sampling data until a pre-defined evidence threshold (often expressed as a Bayes factor) is reached. Sequential Bayesian sampling provides higher statistical sensitivity than fixed-N sampling plans, in particular for small effects (Schönbrodt and Wagenmakers, 2018). Sampling plans could be based either on effects from a neural outcome measure, such as self-regulation success (e.g., see Mehler et al., 2020), or on a behavioral/clinical outcome measure. A detailed description of the sampling plan should ideally be preregistered (see also section 4.9). Lastly, we recommend that studies that fail to reject the null hypothesis conduct follow-up tests to establish whether the reported outcomes are conclusive (Mehler et al., 2019).
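
To illustrate the two sampling strategies discussed above, the sketch below first computes the per-group sample size needed to detect an assumed smallest effect size of interest with a two-sample t-test (using statsmodels), and then simulates a sequential Bayesian stopping rule based on a rough BIC approximation to the Bayes factor. The effect size, batch size, and evidence thresholds are illustrative assumptions, not recommendations for any particular outcome measure, and dedicated packages with default priors would normally be preferred for the Bayes factor itself.

```python
import numpy as np
from statsmodels.stats.power import TTestIndPower

# (1) Fixed-N plan: per-group N to detect the assumed smallest effect of interest.
smallest_effect = 0.4  # assumed smallest relevant Cohen's d
n_per_group = TTestIndPower().solve_power(effect_size=smallest_effect,
                                          alpha=0.05, power=0.80,
                                          alternative='two-sided')
print(f"Fixed-N plan: ~{int(np.ceil(n_per_group))} participants per group")

# (2) Sequential Bayesian plan: add batches of participants and stop once a
#     Bayes factor threshold is crossed (BF10 > 6 or BF10 < 1/6). BF10 is
#     roughly approximated from the BICs of one-mean vs. two-mean models.
def bf10_bic_approx(x, y):
    n = len(x) + len(y)
    grand = np.concatenate([x, y])
    sse0 = np.sum((grand - grand.mean()) ** 2)                        # H0: common mean
    sse1 = np.sum((x - x.mean()) ** 2) + np.sum((y - y.mean()) ** 2)  # H1: group means
    bic0 = n * np.log(sse0 / n) + 1 * np.log(n)
    bic1 = n * np.log(sse1 / n) + 2 * np.log(n)
    return np.exp((bic0 - bic1) / 2.0)

rng = np.random.default_rng(0)
batch, n_min, n_max = 5, 10, 60
treat, ctrl = [], []
while len(treat) < n_max:
    treat.extend(rng.normal(0.4, 1.0, batch))  # simulated treatment outcomes
    ctrl.extend(rng.normal(0.0, 1.0, batch))   # simulated control outcomes
    if len(treat) < n_min:
        continue
    bf10 = bf10_bic_approx(np.asarray(treat), np.asarray(ctrl))
    if bf10 > 6 or bf10 < 1 / 6:
        break
print(f"Stopped at n = {len(treat)} per group with BF10 = {bf10:.2f}")
```

In an actual preregistration, the stopping thresholds, minimum and maximum sample sizes, and the exact Bayes factor computation would all be fixed in advance.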

4.4. Specific Demands for Studies With Elderly Cohorts

As mentioned, protocols focusing on patients with dementia and MCI require specific methodological considerations (Hellström et al., 2007; Richard et al., 2012; Ritchie et al., 2015; Novek and Wilkinson, 2019). For instance, large-scale RCTs may benefit from multiple acquisition sites to achieve their recruitment goals, which will demand harmonized data collection and preprocessing methods to reduce the effects of scanner/amplifier variability (Teipel et al., 2017). Also, extensive and challenging sessions may cause discomfort and distress, possibly triggering aggressive behaviors in patients (Whall et al., 1997; Orengo et al., 2008). Thus, protocols should include shorter experimental sessions to improve tolerability, but specific pipelines will be necessary to compensate for shorter data lengths and movement-related noise, among other effects (Harms et al., 2018). Finally, the inclusion and exclusion criteria should consider potential comorbidities and the possible consequences of pharmacological treatments on the neural signal of interest (Evangelisti et al., 2019).

4.5. Inclusion of Transfer Sessions and Follow-Up Evaluation

Similar to other cognitive training approaches, the transfer of learned cognitive strategies to real-life situations should be evaluated (Greenwood and Parasuraman, 2016). For example, healthy participants were able to transfer strategies learned during neurofeedback training to situations without feedback presentation [e.g., during self-regulation of somatomotor cortices (Auer et al., 2015)]. Further, participants could successfully use these cognitive strategies, for instance motor (Auer et al., 2015) or visual imagery (Robineau et al., 2017), even months after the end of the experiment. Additionally, effects on behavior and symptoms should also be monitored during transfer sessions and longitudinally after neurofeedback training. For instance, sustained therapeutic effects of neurofeedback training have been reported in psychiatric populations (Mehler et al., 2018; Rance et al., 2018). However, none of the studies reviewed here included transfer sessions, and only two studies reported follow-up evaluations. Although both studies that reported follow-up assessments found sustained memory benefits, effects on general cognitive assessment scales (Marlats et al., 2020) or neural signatures (Lavy et al., 2019) returned to baseline levels. Thus, in addition to transfer sessions, long-term monitoring should be included in future study designs.

4.6. Standardization of Protocols

The main goal of neurofeedback studies tailored toward patients suffering from dementia is to achieve cognitive and clinical improvements. Ideally, these are accompanied by (neuroplastic) changes in neural outcomes, for instance in the targeted brain region (e.g., a percent signal change in the fMRI signal) or frequency band (e.g., changes in power of the EEG signal). Moreover, functional or structural changes at the network level should be explored. Although participant-specific designs (Luijmes et al., 2016; Surmeli et al., 2016) are valuable given individual variability, comparability between studies would be enhanced if feature extraction and feedback presentation methods were standardized. Further, standardized definitions of measures such as self-regulation training success are needed to explore dose-response relationships. Recent consensus statements have suggested options for standardizing methodological approaches (Paret et al., 2019) and for reporting measures of self-regulation performance (Ros et al., 2019). Lastly, with regard to the clinical effectiveness of neurofeedback training protocols for AD/MCI, definitions of treatment responders and non-responders should be used (e.g., based on MDCs, RCIs, or MCIDs) (Howard et al., 2011; Kopecek et al., 2017; Andrews et al., 2019) to allow comparing their effectiveness with other interventions or with clinical neurofeedback applications in other conditions.
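
As one possible operationalization of self-regulation training success for EEG protocols, the sketch below computes relative power in a trained band (assumed here to be SMR, 12-15 Hz) per session via Welch's method and summarizes learning as the slope of that measure across sessions. The band limits, sampling rate, reference interval, and simulated signals are illustrative assumptions and do not correspond to the definition used by any study in our sample.

```python
import numpy as np
from scipy.signal import welch
from scipy.stats import linregress

FS = 250  # assumed sampling rate (Hz)

def relative_band_power(signal, fs=FS, band=(12.0, 15.0)):
    """Relative power in `band` vs. a 1-40 Hz reference, from Welch's PSD."""
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    broadband = (freqs >= 1.0) & (freqs <= 40.0)
    return np.trapz(psd[in_band], freqs[in_band]) / np.trapz(psd[broadband], freqs[broadband])

# Simulated example: one 60-s training channel per session, 10 sessions.
rng = np.random.default_rng(1)
sessions = [rng.standard_normal(60 * FS) for _ in range(10)]
scores = [relative_band_power(s) for s in sessions]

# Summarize "learning" as the across-session slope of the regulation metric.
slope, intercept, r, p, se = linregress(range(len(scores)), scores)
print(f"Across-session slope = {slope:.4f} (p = {p:.3f})")
```

Reporting such a session-wise metric together with its exact definition (band, reference interval, channels, artifact handling) would make dose-response analyses comparable across studies.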

4.7. Investigation of New Neurofeedback Targets

Dementia and MCI are complex diseases, with neural substrates extending beyond local activity (Ruan et al., 2016; Chandra et al., 2019). Moreover, disorders considered risk factors for dementia, such as depression, seem to show mechanistic overlap (Kim and Kim, 2021). In this sense, further exploration of new neurofeedback targets focusing on functional networks is much needed. For instance, fMRI protocols can build on recent neurofeedback methods targeting functional connectivity or network patterns (Rana et al., 2016). For EEG protocols, source-level functional connectivity might be an option. In this context, we would like to highlight the case reports from Koberda (2014) (not included in our sample given the exclusion criteria listed in Table 1). In this study, patients trained with LORETA-based neurofeedback from different brain regions showed reorganization of EEG-based functional connectivity. These results suggest that LORETA-based functional connectivity may also be an option for future protocols targeting functional networks.
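
To illustrate what a connectivity-based feedback signal could look like in an EEG protocol, the sketch below estimates magnitude-squared coherence between two (here simulated) channel or source time series within the alpha band and reduces it to a single scalar that could be mapped to feedback. The choice of signals, band, and windowing are illustrative assumptions; LORETA-based source estimation, as used by Koberda (2014), would additionally require a dedicated inverse-solution toolbox.

```python
import numpy as np
from scipy.signal import coherence

FS = 250  # assumed sampling rate (Hz)

def alpha_coherence(x, y, fs=FS, band=(8.0, 12.0)):
    """Mean magnitude-squared coherence between x and y within `band`."""
    freqs, cxy = coherence(x, y, fs=fs, nperseg=2 * fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(np.mean(cxy[mask]))

# Simulated 4-s feedback window from two regions sharing a common alpha rhythm.
rng = np.random.default_rng(2)
t = np.arange(0, 4.0, 1.0 / FS)
common = np.sin(2 * np.pi * 10 * t)                   # shared 10-Hz rhythm
region_a = common + 0.5 * rng.standard_normal(t.size)
region_b = common + 0.5 * rng.standard_normal(t.size)

feedback_value = alpha_coherence(region_a, region_b)  # scalar in [0, 1]
print(f"Alpha-band coherence feedback value: {feedback_value:.2f}")
```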

4.8. Comprehensive Reporting

As previously mentioned, some studies in our sample did not provide sufficient detail about the sample characteristics, the methodological setup, or the relationship between the outcomes and the learned control of the neurofeedback protocol. We encourage researchers to use the CRED-nf checklist (Ros et al., 2019), which was used in this review to evaluate the published studies. We note that this checklist was published only recently and was, consequently, unavailable to the authors of most studies included in this review. However, such checklists provide standardized guidance for design and reporting practices. They can be used during the planning phase of future experiments (see also the CRED-nf online application rtfin.org/CREDnf), including early-phase pilot and proof-of-concept studies, and thereby help the field progress toward higher-quality RCTs that will facilitate achieving more conclusive findings.

4.9. Open Science Practices

Neurofeedback training in dementia is a relatively new field. Thus, in order to accelerate the development of this novel clinical application, as well as to increase the transparency and reliability of proposed protocols, we strongly recommend that researchers pre-register their protocols comprehensively to provide transparency about a priori hypotheses and to delineate planned from exploratory hypotheses (see Mehler et al., 2020, for a detailed example). We further encourage authors to share the data and code that support their results. In particular, fMRI neurofeedback researchers can take advantage of substantial progress in the neuroimaging field when it comes to standardized pipelines and guidelines, e.g., the best-practice recommendations by Nichols et al. (2017), which promote open data in functional neuroimaging. Overall, we recommend exploring the potential benefits of open science research practices while considering possible challenges (Allen and Mehler, 2019), e.g., when translating a complex paradigm from healthy participants to patients.

4.10. Test of Mobile Approaches

Common complications for patients suffering from dementia are impaired mobility (Härlein et al., 2009) and functional dependency (Livingston et al., 2017). These symptoms may prevent patients from participating in studies at research sites. Mobile protocols may provide an attractive alternative to address these challenges and may hence be worthwhile to investigate in these populations. For instance, low-cost, portable EEG equipment using dry electrodes has been shown to achieve results similar to those of state-of-the-art wired laboratory EEG systems in event-related paradigms (De Vos et al., 2014; Ries et al., 2014; Cassani et al., 2017), provided that additional pre-processing and data-enhancement steps are applied. Other possible options include wearable fNIRS systems that allow mobile use outside of laboratory settings (Kohl et al., 2020), e.g., to conduct experiments or intervention studies in naturalistic environments (Balardin et al., 2017). In this context, recent advances in the development of mobile/modular EEG-fNIRS (von Lühmann et al., 2016) and mobile/unshielded MEG systems (Zhang et al., 2020) may be considered in future protocols. Moreover, mobile neurofeedback systems for home use could be integrated into mobile-health approaches, such as cognitive training applications, to promote more autonomous aging for elderly patients with early cognitive impairment (Cisotto et al., 2021). Altogether, mobile applications can facilitate study participation and allow for scalable deployment of neurofeedback interventions.
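
The additional pre-processing and data-enhancement steps mentioned above can be as simple as stricter filtering plus amplitude-based artifact rejection before feature extraction. The sketch below shows one such minimal pipeline under assumed sampling rate, cutoff, and threshold values; real mobile protocols typically add channel-quality checks, adaptive referencing, or ICA-based cleaning.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250             # assumed sampling rate (Hz)
EPOCH_S = 2.0        # epoch length in seconds
PTP_THRESHOLD = 150  # peak-to-peak rejection threshold (assumed, in µV)

def bandpass(signal, fs=FS, low=1.0, high=40.0, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    return filtfilt(b, a, signal)

def clean_epochs(signal, fs=FS, epoch_s=EPOCH_S, ptp=PTP_THRESHOLD):
    """Split a single-channel recording into epochs and drop noisy ones."""
    filtered = bandpass(signal, fs)
    samples = int(epoch_s * fs)
    n_epochs = len(filtered) // samples
    epochs = filtered[: n_epochs * samples].reshape(n_epochs, samples)
    keep = np.ptp(epochs, axis=1) < ptp
    return epochs[keep]

# Simulated noisy dry-electrode recording (60 s) with a movement artifact.
rng = np.random.default_rng(3)
raw = 10 * rng.standard_normal(60 * FS)
raw[5000:5050] += 500  # simulated motion spike
good = clean_epochs(raw)
print(f"Kept {good.shape[0]} of {60 * FS // int(EPOCH_S * FS)} epochs")
```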

4.11. Use of Hybrid Systems

Hybrid systems combine different types of signals in one brain-computer interface (BCI) or neurofeedback system (Pfurtscheller et al., 2010). For example, they may combine signals from different neuroimaging modalities (e.g., electric and hemodynamic responses), neural patterns from the same neuroimaging modality (e.g., SMRs and evoked potentials measured with EEG), or signals from a neuroimaging modality and a non-neural source (e.g., EEG signals and heart-rate variability) (for a complete overview, please refer to Banville and Falk, 2016). Using hybrid systems may help increase system robustness and facilitate the classification of the user's mental processes (Fazli et al., 2012). Specifically, it may allow monitoring levels of stress, attention, and mental workload, and then adapting feedback and task demands to the individual user, enabling more "neuroergonomic" and effective designs of BCI or neurofeedback interventions (Albuquerque et al., 2020; Parent et al., 2020). Hence, hybrid systems seem particularly attractive to test in populations such as patients suffering from dementia.
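
As a minimal illustration of such a hybrid design, the sketch below combines an EEG-derived engagement proxy (theta/beta power ratio) with a heart-rate-variability measure (RMSSD over inter-beat intervals) into a single z-scored index that could be used to adapt task difficulty. The specific features, weighting, and normalization constants are illustrative assumptions, not a validated workload model.

```python
import numpy as np
from scipy.signal import welch

FS = 250  # assumed EEG sampling rate (Hz)

def band_power(signal, fs, band):
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return np.trapz(psd[mask], freqs[mask])

def theta_beta_ratio(eeg, fs=FS):
    """EEG engagement proxy: theta (4-7 Hz) over beta (13-30 Hz) power."""
    return band_power(eeg, fs, (4, 7)) / band_power(eeg, fs, (13, 30))

def rmssd(rr_intervals_ms):
    """Heart-rate variability: root mean square of successive RR differences."""
    diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

def hybrid_index(eeg, rr_intervals_ms,
                 tbr_norm=(2.0, 0.5), rmssd_norm=(35.0, 10.0)):
    """Average of z-scored EEG and HRV features (normalization values assumed)."""
    z_tbr = (theta_beta_ratio(eeg) - tbr_norm[0]) / tbr_norm[1]
    z_hrv = (rmssd(rr_intervals_ms) - rmssd_norm[0]) / rmssd_norm[1]
    return 0.5 * (z_tbr + z_hrv)

# Simulated 30-s EEG window and a series of inter-beat intervals (ms).
rng = np.random.default_rng(4)
eeg = rng.standard_normal(30 * FS)
rr = 800 + 30 * rng.standard_normal(40)
print(f"Hybrid adaptation index: {hybrid_index(eeg, rr):.2f}")
```

Such an index could, for instance, trigger shorter training blocks or additional breaks when it indicates elevated workload, which is particularly relevant for elderly patients with limited tolerance for long sessions.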

5. Conclusion

Neurofeedback presents a potential non-invasive intervention to slow down or even reverse cognitive decline in patients suffering from dementia or mild cognitive impairment. Our review of the current literature suggests that, while patients have shown significant improvements in memory tasks or in some subscales of standardized cognitive tests, clinical efficacy remains undetermined. The studies published to date largely lag behind current best research practices with regard to their design and reporting quality. The main issues include the lack of (1) control conditions, (2) sampling plans, (3) randomized treatment allocation, (4) rater blinding, and (5) standardized cognitive screening instruments. These issues render the evaluation of clinical effects difficult and require improvements in future studies. We therefore close this review with a set of recommendations, including more comprehensive clinical documentation, adequate control conditions, follow-up investigations, improved reporting quality, and the use of transparent research practices. We further encourage exploring the potential of outside-the-lab neurofeedback applications with portable devices.

Author Contributions

LT conceptualized the review with input from all other authors, selected the studies, extracted the data, assessed the quality of the studies, drafted, and revised the manuscript. RC extracted the data, assessed the quality of the studies, and revised the manuscript. DM and TF supervised the drafting of the manuscript and revised it. All authors contributed to the article and approved the submitted version.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The authors would like to thank Dr. Simon Kohl and Dorothea Lückerath for thoughtful comments and suggestions.

Footnotes

Funding. TF received funding from the NSERC Discovery Grants Program (grant number RGPIN-2016-04175).

Supplementary Material

The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fnagi.2021.682683/full#supplementary-material

References

  1. Aarts E., Dolan C. V., Verhage M., van der Sluis S. (2015). Multilevel analysis quantifies variation in the experimental effect while optimizing power and preventing false positives. BMC Neurosci. 16:94. 10.1186/s12868-015-0228-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  2. Agüero-Torres H., Fratiglioni L., Guo Z., Viitanen M., Winblad B. (1999). Mortality from dementia in advanced age: a 5-year follow-up study of incident dementia cases. J. Clin. Epidemiol. 52, 737–743. 10.1016/S0895-4356(99)00067-0 [DOI] [PubMed] [Google Scholar]
  3. Albers C., Lakens D. (2018). When power analyses based on pilot data are biased: inaccurate effect size estimators and follow-up bias. J. Exp. Soc. Psychol. 74, 187–195. 10.1016/j.jesp.2017.09.004 [DOI] [Google Scholar]
  4. Albuquerque I., Tiwari A., Parent M., Cassani R., Gagnon J. F., Lafond D., et al. (2020). Wauc: a multi-modal database for mental workload assessment under physical activity. Front. Neurosci. 14:549524. 10.3389/fnins.2020.549524 [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Algermissen J., Mehler D. M. (2018). May the power be with you: are there highly powered studies in neuroscience, and how can we get more of them? J. Neurophysiol. 119, 2114–2117. 10.1152/jn.00765.2017 [DOI] [PubMed] [Google Scholar]
  6. Allen C., Mehler D. M. (2019). Open science challenges, benefits and tips in early career and beyond. PLoS Biol. 17:e3000246. 10.1371/journal.pbio.3000246 [DOI] [PMC free article] [PubMed] [Google Scholar]
  7. Andersen K., Lolk A., Martinussen T., Kragh-Sørensen P. (2010). Very mild to severe dementia and mortality: a 14-year follow-up-the odense study. Dement. Geriatr. Cogn. Disord. 29, 61–67. 10.1159/000265553 [DOI] [PubMed] [Google Scholar]
  8. Andrews J. S., Desai U., Kirson N. Y., Zichlin M. L., Ball D. E., Matthews B. R. (2019). Disease severity and minimal clinically important differences in clinical outcome assessments for alzheimer's disease clinical trials. Alzheimers Dement 5, 354–363. 10.1016/j.trci.2019.06.005 [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Angelakis E., Stathopoulou S., Frymiare J. L., Green D. L., Lubar J. F., Kounios J. (2007). EEG neurofeedback: a brief overview and an example of peak alpha frequency training for cognitive enhancement in the elderly. Clin. Neuropsychol. 21, 110–129. 10.1080/13854040600744839 [DOI] [PubMed] [Google Scholar]
  10. Arns M., Batail J. M., Bioulac S., Congedo M., Daudet C., Drapier D., et al. (2017). Neurofeedback: one of today's techniques in psychiatry? L'Encéphale 43, 135–145. 10.1016/j.encep.2016.11.003 [DOI] [PubMed] [Google Scholar]
  11. Auer T., Schweizer R., Frahm J. (2015). Training efficiency and transfer success in an extended real-time functional mri neurofeedback training of the somatomotor cortex of healthy subjects. Front. Hum. Neurosci. 9:547. 10.3389/fnhum.2015.00547 [DOI] [PMC free article] [PubMed] [Google Scholar]
  12. Badhwar A., Tam A., Dansereau C., Orban P., Hoffstaedter F., Bellec P. (2017). Resting-state network dysfunction in Alzheimer's disease: a systematic review and meta-analysis. Alzheimer Dement. 8, 73–85. 10.1016/j.dadm.2017.03.007 [DOI] [PMC free article] [PubMed] [Google Scholar]
  13. Balardin J. B., Zimeo Morais G. A., Furucho R. A., Trambaiolli L., Vanzella P., Biazoli Jr C., et al. (2017). Imaging brain function with functional near-infrared spectroscopy in unconstrained environments. Front. Hum. Neurosci. 11:258. 10.3389/fnhum.2017.00258 [DOI] [PMC free article] [PubMed] [Google Scholar]
  14. Banville H., Falk T. (2016). Recent advances and open challenges in hybrid brain-computer interfacing: a technological review of non-invasive human research. Brain Comput. Interfaces 3, 9–46. 10.1080/2326263X.2015.1134958 [DOI] [Google Scholar]
  15. Becerra J., Fernandez T., Roca-Stappung M., Diaz-Comas L., Galan L., Bosch J., et al. (2012). Neurofeedback in healthy elderly human subjects with electroencephalographic risk for cognitive disorder. J. Alzheimers Dis. 28, 357–367. 10.3233/JAD-2011-111055 [DOI] [PubMed] [Google Scholar]
  16. Birbaumer N., Ruiz S., Sitaram R. (2013). Learned regulation of brain metabolism. Trends Cogn. Sci. 17, 295–302. 10.1016/j.tics.2013.04.009 [DOI] [PubMed] [Google Scholar]
  17. Boyle P., Wilson R., Aggarwal N., Tang Y., Bennett D. (2006). Mild cognitive impairment: risk of Alzheimer disease and rate of cognitive decline. Neurology 67, 441–445. 10.1212/01.wnl.0000228244.10416.20 [DOI] [PubMed] [Google Scholar]
  18. Brodaty H., Connors M. H., Ames D., Woodward M., Group P. S. (2014). Progression from mild cognitive impairment to dementia: a 3-year longitudinal study. Aust. N. Zeal. J. Psychiatry 48, 1137–1142. 10.1177/0004867414536237 [DOI] [PubMed] [Google Scholar]
  19. Cassani R., Estarellas M., San-Martin R., Fraga F. J., Falk T. H. (2018). Systematic review on resting-state EEG for Alzheimer's disease diagnosis and progression assessment. Dis. Markers 2018:5174815. 10.1155/2018/5174815 [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Cassani R., Falk T. H., Fraga F. J., Cecchi M., Moore D. K., Anghinah R. (2017). Towards automated electroencephalography-based Alzheimer's disease diagnosis using portable low-density devices. Biomed. Signal Process. Control 33, 261–271. 10.1016/j.bspc.2016.12.009 [DOI] [Google Scholar]
  21. Chandra A., Dervenoulas G., Politis M. (2019). Magnetic resonance imaging in Alzheimer's disease and mild cognitive impairment. J. Neurol. 266, 1293–1302. 10.1007/s00415-018-9016-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Cicerone K. D., Langenbahn D. M., Braden C., Malec J. F., Kalmar K., Fraas M., et al. (2011). Evidence-based cognitive rehabilitation: updated review of the literature from 2003 through 2008. Arch. Phys. Med. Rehabil. 92, 519–530. 10.1016/j.apmr.2010.11.015 [DOI] [PubMed] [Google Scholar]
  23. Cisotto G., Trentini A., Zoppis I., Zanga A., Manzoni S., Pietrabissa G., et al. (2021). Acta: a mobile-health solution for integrated nudge-neurofeedback training for senior citizens. arXiv 2102.08692. [Google Scholar]
  24. Corrada M., Brookmeyer R., Berlau D., Paganini-Hill A., Kawas C. (2008). Prevalence of dementia after age 90: results from the 90+ study. Neurology 71, 337–343. 10.1212/01.wnl.0000310773.65918.cd [DOI] [PubMed] [Google Scholar]
  25. Corrada M. M., Brookmeyer R., Paganini-Hill A., Berlau D., Kawas C. H. (2010). Dementia incidence continues to increase with age in the oldest old: the 90+ study. Ann. Neurol. 67, 114–121. 10.1002/ana.21915 [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. Coull J., Middleton H., Robbins T., Sahakian B. (1995). Contrasting effects of clonidine and diazepam on tests of working memory and planning. Psychopharmacology 120, 311–321. 10.1007/BF02311179 [DOI] [PubMed] [Google Scholar]
  27. Cox W. M., Subramanian L., Linden D. E., Lührs M., McNamara R., Playle R., et al. (2016). Neurofeedback training for alcohol dependence versus treatment as usual: study protocol for a randomized controlled trial. Trials 17, 1–10. 10.1186/s13063-016-1607-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Craig D., Mirakhur A., Hart D. J., McIlroy S. P., Passmore A. P. (2005). A cross-sectional study of neuropsychiatric symptoms in 435 patients with Alzheimer's disease. Am. J. Geriatr. Psychiatry 13, 460–468. 10.1097/00019442-200506000-00004 [DOI] [PubMed] [Google Scholar]
  29. Cummings J. L. (2004). Use of cholinesterase inhibitors in clinical practice: evidence-based recommendations. Focus 11, 131–252. 10.1097/00019442-200303000-00004 [DOI] [PubMed] [Google Scholar]
  30. Cummings J. L., Frank J. C., Cherry D., Kohatsu N. D., Kemp B., Hewett L., et al. (2002). Guidelines for managing Alzheimer's disease: part II. treatment. Am. Fam. Physician 65:2525. [PubMed] [Google Scholar]
  31. Curran E. A., Stokes M. J. (2003). Learning to control brain activity: a review of the production and control of EEG components for driving brain-computer interface (BCI) systems. Brain Cogn. 51, 326–336. 10.1016/S0278-2626(03)00036-8 [DOI] [PubMed] [Google Scholar]
  32. da Paz C., Kouzak V., Garcia A., Campos da Paz Neto A., Tomaz C. (2018). Smr neurofeedback training facilitates working memory performance in healthy older adults: a behavioral and EEG study. Front. Behav. Neurosci. 12:321. 10.3389/fnbeh.2018.00321 [DOI] [PMC free article] [PubMed] [Google Scholar]
  33. de Vito A., Calamia M., Greening S., Roye S. (2019). The association of anxiety, depression, and worry symptoms on cognitive performance in older adults. Aging Neuropsychol. Cogn. 26, 161–173. 10.1080/13825585.2017.1416057 [DOI] [PubMed] [Google Scholar]
  34. De Vos M., Kroesen M., Emkes R., Debener S. (2014). P300 speller BCI with a mobile EEG system: comparison to a traditional amplifier. J. Neural Eng. 11:036008. 10.1088/1741-2560/11/3/036008 [DOI] [PubMed] [Google Scholar]
  35. Enriquez-Geppert S., Huster R. J., Herrmann C. S. (2017). EEG-neurofeedback as a tool to modulate cognition and behavior: a review tutorial. Front. Hum. Neurosci. 11:51. 10.3389/fnhum.2017.00051 [DOI] [PMC free article] [PubMed] [Google Scholar]
  36. Erkinjuntti T., Román G., Gauthier S. (2004). Treatment of vascular dementia—evidence from clinical trials with cholinesterase inhibitors. J. Neurol. Sci. 226, 63–66. 10.1016/j.jns.2004.09.018 [DOI] [PubMed] [Google Scholar]
  37. Evangelisti S., Pittau F., Testa C., Rizzo G., Gramegna L. L., Ferri L., et al. (2019). L-dopa modulation of brain connectivity in parkinson's disease patients: a pilot EEG-fMRI study. Front. Neurosci. 13:611. 10.3389/fnins.2019.00611 [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. Fazli S., Mehnert J., Steinbrink J., Curio G., Villringer A., Müller K. R., et al. (2012). Enhanced performance by a hybrid NIRS-EEG brain computer interface. Neuroimage 59, 519–529. 10.1016/j.neuroimage.2011.07.084 [DOI] [PubMed] [Google Scholar]
  39. Feeney J., Savva G. M., O'Regan C., King-Kallimanis B., Cronin H., Kenny R. A. (2016). Measurement error, reliability, and minimum detectable change in the mini-mental state examination, Montreal cognitive assessment, and color trails test among community living middle-aged and older adults. J. Alzheimers Dis. 53, 1107–1114. 10.3233/JAD-160248 [DOI] [PubMed] [Google Scholar]
  40. Flicker C., Ferris S. H., Reisberg B. (1991). Mild cognitive impairment in the elderly: predictors of dementia. Neurology 41, 1006–1006. 10.1212/WNL.41.7.1006 [DOI] [PubMed] [Google Scholar]
  41. Ganguli M., Snitz B. E., Saxton J. A., Chang C. C. H., Lee C. W., Vander Bilt J., et al. (2011). Outcomes of mild cognitive impairment by definition: a population study. Arch. Neurol. 68, 761–767. 10.1001/archneurol.2011.101 [DOI] [PMC free article] [PubMed] [Google Scholar]
  42. Gazzaley A., Nobre A. C. (2012). Top-down modulation: bridging selective attention and working memory. Trends Cogn. Sci. 16, 129–135. 10.1016/j.tics.2011.11.014 [DOI] [PMC free article] [PubMed] [Google Scholar]
  43. Gomez-Pilar J., Corralejo R., Nicolas-Alonso L. F., Álvarez D., Hornero R. (2016). Neurofeedback training with a motor imagery-based BCI: neurocognitive improvements and EEG changes in the elderly. Med. Biol. Eng. Comput. 54, 1655–1666. 10.1007/s11517-016-1454-4 [DOI] [PubMed] [Google Scholar]
  44. Gratwicke J., Jahanshahi M., Foltynie T. (2015). Parkinson's disease dementia: a neural networks perspective. Brain 138, 1454–1476. 10.1093/brain/awv104 [DOI] [PMC free article] [PubMed] [Google Scholar]
  45. Greenwood P. M., Parasuraman R. (2016). The mechanisms of far transfer from cognitive training: review and hypothesis. Neuropsychology 30:742. 10.1037/neu0000235 [DOI] [PubMed] [Google Scholar]
  46. Gruzelier J. H. (2014). EEG-neurofeedback for optimising performance. III: a review of methodological and theoretical considerations. Neurosci. Biobehav. Rev. 44, 159–182. 10.1016/j.neubiorev.2014.03.015 [DOI] [PubMed] [Google Scholar]
  47. Han J. W., Kim T. H., Lee S. B., Park J. H., Lee J. J., Huh Y., et al. (2012). Predictive validity and diagnostic stability of mild cognitive impairment subtypes. Alzheimers Dement. 8, 553–559. 10.1016/j.jalz.2011.08.007 [DOI] [PubMed] [Google Scholar]
  48. Härlein J., Dassen T., Halfens R. J., Heinze C. (2009). Fall risk factors in older people with dementia or cognitive impairment: a systematic review. J. Adv. Nurs. 65, 922–933. 10.1111/j.1365-2648.2008.04950.x [DOI] [PubMed] [Google Scholar]
  49. Harms M. P., Somerville L. H., Ances B. M., Andersson J., Barch D. M., Bastiani M., et al. (2018). Extending the human connectome project across ages: imaging protocols for the lifespan development and aging projects. Neuroimage 183, 972–984. 10.1016/j.neuroimage.2018.09.060 [DOI] [PMC free article] [PubMed] [Google Scholar]
  50. Hawkinson J. E., Ross A. J., Parthasarathy S., Scott D. J., Laramee E. A., Posecion L. J., et al. (2012). Quantification of adverse events associated with functional MRI scanning and with real-time fMRI-based training. Int. J. Behav. Med. 19, 372–381. 10.1007/s12529-011-9165-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
  51. Hellström I., Nolan M., Nordenfelt L., Lundh U. (2007). Ethical and methodological issues in interviewing persons with dementia. Nurs. Ethics 14, 608–619. 10.1177/0969733007080206 [DOI] [PubMed] [Google Scholar]
  52. Heunis S., Lamerichs R., Zinger S., Caballero-Gaudes C., Jansen J. F., Aldenkamp B., et al. (2020). Quality and denoising in real-time functional magnetic resonance imaging neurofeedback: a methods review. Hum. Brain Mapp. 41, 3439–3467. 10.1002/hbm.25010 [DOI] [PMC free article] [PubMed] [Google Scholar]
  53. Hogan D. B., Bailey P., Black S., Carswell A., Chertkow H., Clarke B., et al. (2008). Diagnosis and treatment of dementia: 5. Nonpharmacologic and pharmacologic therapy for mild to moderate dementia. Can. Med. Assoc. J. 179, 1019–1026. 10.1503/cmaj.081103 [DOI] [PMC free article] [PubMed] [Google Scholar]
  54. Hohenfeld C., Kuhn H., Müller C., Nellessen N., Ketteler S., Heinecke A., et al. (2020). Changes in brain activation related to visuo-spatial memory after real-time fMRI neurofeedback training in healthy elderly and Alzheimer's disease. Behav. Brain Res. 381:112435. 10.1016/j.bbr.2019.112435 [DOI] [PubMed] [Google Scholar]
  55. Hohenfeld C., Nellessen N., Dogan I., Kuhn H., Müller C., Papa F., et al. (2017). Cognitive improvement and brain changes after real-time functional mri neurofeedback training in healthy elderly and prodromal Alzheimer's disease. Front. Neurol. 8:384. 10.3389/fneur.2017.00384 [DOI] [PMC free article] [PubMed] [Google Scholar]
  56. Howard R., Phillips P., Johnson T., O'Brien J., Sheehan B., Lindesay J., et al. (2011). Determining the minimum clinically important differences for outcomes in the Domino trial. Int. J. Geriatr. Psychiatry 26, 812–817. 10.1002/gps.2607 [DOI] [PubMed] [Google Scholar]
  57. Jacobs H. I., Radua J., Lückmann H. C., Sack A. T. (2013). Meta-analysis of functional network alterations in Alzheimer's disease: toward a network biomarker. Neurosci. Biobehav. Rev. 37, 753–765. 10.1016/j.neubiorev.2013.03.009 [DOI] [PubMed] [Google Scholar]
  58. Jang J. H., Kim J., Park G., Kim H., Jung E. S., Cha J. y., et al. (2019). Beta wave enhancement neurofeedback improves cognitive functions in patients with mild cognitive impairment: a preliminary pilot study. Medicine 98:e18357. 10.1097/MD.0000000000018357 [DOI] [PMC free article] [PubMed] [Google Scholar]
  59. Jiang Y., Abiri R., Zhao X. (2017). Tuning up the old brain with new tricks: attention training via neurofeedback. Front. Aging Neurosci. 9:52. 10.3389/fnagi.2017.00052 [DOI] [PMC free article] [PubMed] [Google Scholar]
  60. Jirayucharoensak S., Israsena P., Pan-ngum S., Hemrungrojn S., Maes M. (2019). A game-based neurofeedback training system to enhance cognitive performance in healthy elderly subjects and in patients with amnestic mild cognitive impairment. Clin. Interv. Aging 14:347. 10.2147/CIA.S189047 [DOI] [PMC free article] [PubMed] [Google Scholar]
  61. Johnston S. J., Boehm S. G., Healy D., Goebel R., Linden D. E. (2010). Neurofeedback: a promising tool for the self-regulation of emotion networks. Neuroimage 49, 1066–1072. 10.1016/j.neuroimage.2009.07.056 [DOI] [PubMed] [Google Scholar]
  62. Karakaya T., Fußer F., Schroder J., Pantel J. (2013). Pharmacological treatment of mild cognitive impairment as a prodromal syndrome of Alzheimer's disease. Curr. Neuropharmacol. 11, 102–108. 10.2174/157015913804999487 [DOI] [PMC free article] [PubMed] [Google Scholar]
  63. Karbach J., Verhaeghen P. (2014). Making working memory work: a meta-analysis of executive-control and working memory training in older adults. Psychol. Sci. 25, 2027–2037. 10.1177/0956797614548725 [DOI] [PMC free article] [PubMed] [Google Scholar]
  64. Keizer A. W., Verment R. S., Hommel B. (2010). Enhancing cognitive control through neurofeedback: a role of gamma-band activity in managing episodic retrieval. Neuroimage 49, 3404–3413. 10.1016/j.neuroimage.2009.11.023 [DOI] [PubMed] [Google Scholar]
  65. Kim J., Kim Y. K. (2021). Crosstalk between depression and dementia with resting-state fMRI studies and its relationship with cognitive functioning. Biomedicines 9:82. 10.3390/biomedicines9010082 [DOI] [PMC free article] [PubMed] [Google Scholar]
  66. Kim S., Birbaumer N. (2014). Real-time functional mri neurofeedback: a tool for psychiatry. Curr. Opin. Psychiatry 27, 332–336. 10.1097/YCO.0000000000000087 [DOI] [PubMed] [Google Scholar]
  67. Klekociuk S. Z., Saunders N. L., Summers M. J. (2016). Diagnosing mild cognitive impairment as a precursor to dementia: fact or fallacy? Aust. Psychol. 51, 366–373. 10.1111/ap.12178 [DOI] [Google Scholar]
  68. Koberda J. L. (2014). Z-score Loreta neurofeedback as a potential therapy in cognitive dysfunction and dementia. J. Psychol. Clin. Psychiatry 1:00037. 10.15406/jpcpy.2014.01.00037 [DOI] [Google Scholar]
  69. Kohl S. H., Mehler D., Lührs M., Thibault R. T., Konrad K., Sorger B. (2020). The potential of functional near-infrared spectroscopy-based neurofeedback—a systematic review and recommendations for best practice. Front. Neurosci. 14:594. 10.31234/osf.io/yq3vj [DOI] [PMC free article] [PubMed] [Google Scholar]
  70. Kopecek M., Bezdicek O., Sulc Z., Lukavsky J., Stepankova H. (2017). Montreal cognitive assessment and mini-mental state examination reliable change indices in healthy older adults. Int. J. Geriatr. Psychiatry 32, 868–875. 10.1002/gps.4539 [DOI] [PubMed] [Google Scholar]
  71. Koyama A., Okereke O. I., Yang T., Blacker D., Selkoe D. J., Grodstein F. (2012). Plasma amyloid-β as a predictor of dementia and cognitive decline: a systematic review and meta-analysis. Arch. Neurol. 69, 824–831. 10.1001/archneurol.2011.1841 [DOI] [PMC free article] [PubMed] [Google Scholar]
  72. Laborda-Sánchez F., Cansino S. (2021). The effects of neurofeedback on aging-associated cognitive decline: a systematic review. Appl. Psychophysiol. Biofeedb. 46, 1–10. 10.1007/s10484-020-09497-6 [DOI] [PubMed] [Google Scholar]
  73. Lavy Y., Dwolatzky T., Kaplan Z., Guez J., Todder D. (2019). Neurofeedback improves memory and peak alpha frequency in individuals with mild cognitive impairment. Appl. Psychophysiol. Biofeedb. 44, 41–49. 10.1007/s10484-018-9418-0 [DOI] [PubMed] [Google Scholar]
  74. Lecomte G., Juhel J. (2011). The effects of neurofeedback training on memory performance in elderly subjects. Psychology 2:846. 10.4236/psych.2011.28129 [DOI] [Google Scholar]
  75. Li X., Zhang J., Li X. D., Cui W., Su R. (2020). Neurofeedback training for brain functional connectivity improvement in mild cognitive impairment. J. Med. Biol. Eng. 40, 489–495. 10.1007/s40846-020-00531-w [DOI] [Google Scholar]
  76. Liberati A., Altman D. G., Tetzlaff J., Mulrow C., Gøtzsche P. C., Ioannidis J. P., et al. (2009). The prisma statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. Ann. Intern. Med. 151:W-65. 10.7326/0003-4819-151-4-200908180-00136 [DOI] [PubMed] [Google Scholar]
  77. Linden D. E. (2014). Neurofeedback and networks of depression. Dialog. Clin. Neurosci. 16:103. 10.31887/DCNS.2014.16.1/dlinden [DOI] [PMC free article] [PubMed] [Google Scholar]
  78. Linden D. E., Turner D. L. (2016). Real-time functional magnetic resonance imaging neurofeedback in motor neurorehabilitation. Curr. Opin. Neurol. 29:412. 10.1097/WCO.0000000000000340 [DOI] [PMC free article] [PubMed] [Google Scholar]
  79. Linhartová P., Látalová A., Kóša B., Kašpárek T., Schmahl C., Paret C. (2019). fMRI neurofeedback in emotion regulation: a literature review. Neuroimage 193, 75–92. 10.1016/j.neuroimage.2019.03.011 [DOI] [PubMed] [Google Scholar]
  80. Livingston G., Sommerlad A., Orgeta V., Costafreda S. G., Huntley J., Ames D., et al. (2017). Dementia prevention, intervention, and care. Lancet 390, 2673–2734. 10.1016/S0140-6736(17)31363-6 [DOI] [PubMed] [Google Scholar]
  81. Lubianiker N., Goldway N., Fruchtman-Steinbok T., Paret C., Keynan J. N., Singer N., et al. (2019). Process-based framework for precise neuromodulation. Nat. Hum. Behav. 3, 436–445. 10.1038/s41562-019-0573-y [DOI] [PubMed] [Google Scholar]
  82. Luijmes R. E., Pouwels S., Boonman J. (2016). The effectiveness of neurofeedback on cognitive functioning in patients with Alzheimer's disease: preliminary results. Clin. Neurophysiol. 46, 179–187. 10.1016/j.neucli.2016.05.069 [DOI] [PubMed] [Google Scholar]
  83. Lyketsos C. G., Lee H. B. (2004). Diagnosis and treatment of depression in Alzheimer's disease. Dement. Geriatr. Cogn. Disord. 17, 55–64. 10.1159/000074277 [DOI] [PubMed] [Google Scholar]
  84. Mariani E., Monastero R., Mecocci P. (2007). Mild cognitive impairment: a systematic review. J. Alzheimers Dis. 12, 23–35. 10.3233/JAD-2007-12104 [DOI] [PubMed] [Google Scholar]
  85. Marlats F., Bao G., Chevallier S., Boubaya M., Djabelkhir-Jemmi L., Wu Y. H., et al. (2020). SMR/theta neurofeedback training improves cognitive performance and EEG activity in elderly with mild cognitive impairment: a pilot study. Front. Aging Neurosci. 12:147. 10.3389/fnagi.2020.00147 [DOI] [PMC free article] [PubMed] [Google Scholar]
  86. McDermott L. M., Ebmeier K. P. (2009). A meta-analysis of depression severity and cognitive function. J. Affect. Disord. 119, 1–8. 10.1016/j.jad.2009.04.022 [DOI] [PubMed] [Google Scholar]
  87. McFarland D. J., McCane L. M., Wolpaw J. R. (1998). EEG-based communication and control: short-term role of feedback. IEEE Trans. Rehabil. Eng. 6, 7–11. 10.1109/86.662615 [DOI] [PubMed] [Google Scholar]
  88. McKhann G., Drachman D., Folstein M., Katzman R., Price D., Stadlan E. M. (1984). Clinical diagnosis of Alzheimer's disease: report of the NINCDS-ADRDA work group* under the auspices of department of health and human services task force on Alzheimer's disease. Neurology 34, 939–939. 10.1212/WNL.34.7.939 [DOI] [PubMed] [Google Scholar]
  89. Mehler D., Williams A. N., Whittaker J. R., Krause F., Lührs M., Kunas S., et al. (2020). Graded fmri neurofeedback training of motor imagery in middle cerebral artery stroke patients: a preregistered proof-of-concept study. Front. Hum. Neurosci. 14:226. 10.3389/fnhum.2020.00226 [DOI] [PMC free article] [PubMed] [Google Scholar]
  90. Mehler D. M., Edelsbrunner P. A., Matić K. (2019). Appreciating the significance of non-significant findings in psychology. J. Eur. Psychol. Stud. 10, 1–7. 10.5334/e2019a [DOI] [Google Scholar]
  91. Mehler D. M., Kunas S. L., Sokunbi M. O., Goebel R., Linden D. E. (2021). Trajectories for mood states during a multi-session neurofeedback training intervention in major depressive disorder. PsyArXiv. 10.31234/osf.io/2msqp [DOI] [Google Scholar]
  92. Mehler D. M., Sokunbi M. O., Habes I., Barawi K., Subramanian L., Range M., et al. (2018). Targeting the affective brain—a randomized controlled trial of real-time fmri neurofeedback in patients with depression. Neuropsychopharmacology 43, 2578–2585. 10.1038/s41386-018-0126-5 [DOI] [PMC free article] [PubMed] [Google Scholar]
  93. Mehler D. M. A., Kording K. P. (2018). The lure of misleading causal statements in functional connectivity research. arXiv 1812.03363. [Google Scholar]
  94. Mendoza Laiz N., Del Valle Diaz S., Rioja Collado N., Gomez-Pilar J., Hornero R. (2018). Potential benefits of a cognitive training program in mild cognitive impairment (MCI). Restor. Neurol. Neurosci. 36, 207–213. 10.3233/RNN-170754 [DOI] [PubMed] [Google Scholar]
  95. Micoulaud-Franchi J. A., Batail J. M., Fovet T., Philip P., Cermolacce M., Jaumard-Hakoun A., et al. (2019). Towards a pragmatic approach to a psychophysiological unit of analysis for mental and brain disorders: an EEG-copeia for neurofeedback. Appl. Psychophysiol. Biofeedb. 44, 151–172. 10.1007/s10484-019-09440-4 [DOI] [PubMed] [Google Scholar]
  96. Micoulaud-Franchi J. A., Fovet T. (2018). A framework for disentangling the hyperbolic truth of neurofeedback: comment on Thibault and Raz (2017). Am. Psychol. 73, 933–935. 10.1037/amp0000340 [DOI] [PubMed] [Google Scholar]
  97. Min B. K., Marzelli M. J., Yoo S. S. (2010). Neuroimaging-based approaches in the brain-computer interface. Trends Biotechnol. 28, 552–560. 10.1016/j.tibtech.2010.08.002 [DOI] [PubMed] [Google Scholar]
  98. Mirmiran M., Van Someren E., Swaab D. (1996). Is brain plasticity preserved during aging and in Alzheimer's disease? Behav. Brain Res. 78, 43–48. 10.1016/0166-4328(95)00217-0 [DOI] [PubMed] [Google Scholar]
  99. Moher D., Hopewell S., Schulz K. F., Montori V., Gøtzsche P. C., Devereaux P., et al. (2012). Consort 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. Int. J. Surg. 10, 28–55. 10.1016/j.ijsu.2011.10.001 [DOI] [PubMed] [Google Scholar]
  100. Nichols T. E., Das S., Eickhoff S. B., Evans A. C., Glatard T., Hanke M., et al. (2017). Best practices in data analysis and sharing in neuroimaging using MRI. Nat. Neurosci. 20, 299–303. 10.1038/nn.4500 [DOI] [PMC free article] [PubMed] [Google Scholar]
  101. Nijholt A., Allison B. Z., Jacob R. J. (2011). “Brain-computer interaction: can multimodality help?” in Proceedings of the 13th International Conference on Multimodal Interfaces (Alicante: ), 35–40. 10.1145/2070481.2070490 [DOI] [Google Scholar]
  102. Novek S., Wilkinson H. (2019). Safe and inclusive research practices for qualitative research involving people with dementia: a review of key issues and strategies. Dementia 18, 1042–1059. 10.1177/1471301217701274 [DOI] [PubMed] [Google Scholar]
  103. Orengo C. A., Khan J., Kunik M. E., Snow A. L., Morgan R., Steele A., et al. (2008). Aggression in individuals newly diagnosed with dementia. Am. J. Alzheimers Dis. Dement. 23, 227–232. 10.1177/1533317507313373 [DOI] [PMC free article] [PubMed] [Google Scholar]
  104. Parent M., Albuquerque I., Tiwari A., Cassani R., Gagnon J. F., Lafond D., et al. (2020). Pass: a multimodal database of physical activity and stress for mobile passive body/brain-computer interface research. Front. Neurosci. 14:1274. 10.3389/fnins.2020.542934 [DOI] [PMC free article] [PubMed] [Google Scholar]
  105. Paret C., Goldway N., Zich C., Keynan J. N., Hendler T., Linden D., et al. (2019). Current progress in real-time functional magnetic resonance-based neurofeedback: methodological challenges and achievements. Neuroimage 202:116107. 10.1016/j.neuroimage.2019.116107 [DOI] [PubMed] [Google Scholar]
  106. Parkkonen L. (2015). “Real-time magnetoencephalography for neurofeedback and closed-loop experiments,” in Clinical Systems Neuroscience, eds K. Kansaku, L. Cohen, and N. Birbaumer (Tokyo: Springer; ), 315–330. 10.1007/978-4-431-55037-2_17 [DOI] [Google Scholar]
  107. Paterniti S., Verdier-Taillefer M. H., Dufouil C., Alpérovitch A. (2002). Depressive symptoms and cognitive decline in elderly people: longitudinal study. Br. J. Psychiatry 181, 406–410. 10.1192/bjp.181.5.406 [DOI] [PubMed] [Google Scholar]
  108. Petersen R. C. (2004). Mild cognitive impairment as a diagnostic entity. J. Intern. Med. 256, 183–194. 10.1111/j.1365-2796.2004.01388.x [DOI] [PubMed] [Google Scholar]
  109. Petersen R. C. (2011). Clinical practice. Mild cognitive impairment. N. Engl. J. Med. 364:2227. 10.1056/NEJMcp0910237 [DOI] [PubMed] [Google Scholar]
  110. Pfurtscheller G., Allison B. Z., Bauernfeind G., Brunner C., Solis Escalante T., Scherer R., et al. (2010). The hybrid BCI. Front. Neurosci. 4:3. 10.3389/fnpro.2010.00003 [DOI] [PMC free article] [PubMed] [Google Scholar]
  111. Plassman B. L., Langa K. M., Fisher G. G., Heeringa S. G., Weir D. R., Ofstedal M. B., et al. (2007). Prevalence of dementia in the united states: the aging, demographics, and memory study. Neuroepidemiology 29, 125–132. 10.1159/000109998 [DOI] [PMC free article] [PubMed] [Google Scholar]
  112. Prichep L. S. (2007). Quantitative EEG and electromagnetic brain imaging in aging and in the evolution of dementia. Ann. N. Y. Acad. Sci. 1097, 156–167. 10.1196/annals.1379.008 [DOI] [PubMed] [Google Scholar]
  113. Prince M., Wimo A., Guerchet M., Ali G., Wu Y., Prina M., et al. (2015). The Global Impact of Dementia. World Alzheimer Report. Alzheimer's Disease International (ADI), 1–82. [Google Scholar]
  114. Ramirez R., Palencia-Lefler M., Giraldo S., Vamvakousis Z. (2015). Musical neurofeedback for treating depression in elderly people. Front. Neurosci. 9:354. 10.3389/fnins.2015.00354 [DOI] [PMC free article] [PubMed] [Google Scholar]
  115. Rana M., Varan A. Q., Davoudi A., Cohen R. A., Sitaram R., Ebner N. C. (2016). Real-time fmri in neuroscience research and its use in studying the aging brain. Front. Aging Neurosci. 8:239. 10.3389/fnagi.2016.00239 [DOI] [PMC free article] [PubMed] [Google Scholar]
  116. Rance M., Walsh C., Sukhodolsky D. G., Pittman B., Qiu M., Kichuk S. A., et al. (2018). Time course of clinical change following neurofeedback. Neuroimage 181, 807–813. 10.1016/j.neuroimage.2018.05.001 [DOI] [PMC free article] [PubMed] [Google Scholar]
  117. Reis J., Portugal A. M., Fernandes L., Afonso N., Pereira M., Sousa N., et al. (2016). An alpha and theta intensive and short neurofeedback protocol for healthy aging working-memory training. Front. Aging Neurosci. 8:157. 10.3389/fnagi.2016.00157 [DOI] [PMC free article] [PubMed] [Google Scholar]
  118. Richard E., Andrieu S., Solomon A., Mangialasche F., Ahtiluoto S., van Charante E. P. M., et al. (2012). Methodological challenges in designing dementia prevention trials—the European Dementia Prevention Initiative (EDPI). J. Neurol. Sci. 322, 64–70. 10.1016/j.jns.2012.06.012 [DOI] [PubMed] [Google Scholar]
  119. Ries A. J., Touryan J., Vettel J., McDowell K., Hairston W. D. (2014). A comparison of electroencephalography signals acquired from conventional and mobile systems. J. Neurosci. Neuroeng. 3, 10–20. 10.1166/jnsne.2014.1092 [DOI] [Google Scholar]
  120. Ritchie C. W., Terrera G. M., Quinn T. J. (2015). Dementia trials and dementia tribulations: methodological and analytical challenges in dementia research. Alzheimers Res. Ther. 7, 1–11. 10.1186/s13195-015-0113-6 [DOI] [PMC free article] [PubMed] [Google Scholar]
  121. Robineau F., Meskaldji D. E., Koush Y., Rieger S. W., Mermoud C., Morgenthaler S., et al. (2017). Maintenance of voluntary self-regulation learned through real-time fMRI neurofeedback. Front. Hum. Neurosci. 11:131. 10.3389/fnhum.2017.00131 [DOI] [PMC free article] [PubMed] [Google Scholar]
  122. Ros T., Enriquez-Geppert S., Zotev V., Young K. D., Wood G., Whitfield-Gabrieli S., et al. (2019). Consensus on the reporting and experimental design of clinical and cognitive-behavioural neurofeedback studies (CRED-nf checklist). Brain 143, 1674–1685. 10.1093/brain/awaa009 [DOI] [PMC free article] [PubMed] [Google Scholar]
  123. Ros T., J Baars B., Lanius R. A., Vuilleumier P. (2014). Tuning pathological brain oscillations with neurofeedback: a systems neuroscience framework. Front. Hum. Neurosci. 8:1008. 10.3389/fnhum.2014.01008 [DOI] [PMC free article] [PubMed] [Google Scholar]
  124. Ruan Q., D'Onofrio G., Sancarlo D., Bao Z., Greco A., Yu Z. (2016). Potential neuroimaging biomarkers of pathologic brain changes in mild cognitive impairment and Alzheimer's disease: a systematic review. BMC Geriatr. 16:104. 10.1186/s12877-016-0281-7 [DOI] [PMC free article] [PubMed] [Google Scholar]
  125. Scheinost D., Stoica T., Wasylink S., Gruner P., Saksa J., Pittenger C., et al. (2014). Resting state functional connectivity predicts neurofeedback response. Front. Behav. Neurosci. 8:338. 10.3389/fnbeh.2014.00338 [DOI] [PMC free article] [PubMed] [Google Scholar]
  126. Schneider J. A., Arvanitakis Z., Bang W., Bennett D. A. (2007). Mixed brain pathologies account for most dementia cases in community-dwelling older persons. Neurology 69, 2197–2204. 10.1212/01.wnl.0000271090.28148.24 [DOI] [PubMed] [Google Scholar]
  127. Schönbrodt F. D., Wagenmakers E. J. (2018). Bayes factor design analysis: planning for compelling evidence. Psychonom. Bull. Rev. 25, 128–142. 10.3758/s13423-017-1230-y [DOI] [PubMed] [Google Scholar]
  128. Sitaram R., Ros T., Stoeckel L., Haller S., Scharnowski F., Lewis-Peacock J., et al. (2017). Closed-loop brain training: the science of neurofeedback. Nat. Rev. Neurosci. 18:86. 10.1038/nrn.2016.164 [DOI] [PubMed] [Google Scholar]
  129. Skouras S., Torner J., Anderson P., Koush Y., Falcon C., Minguillon C., et al. (2019). The effect of apoe genotype and streamline density volume, on hippocampal ca1 down-regulation: a real-time fMRI virtual reality neurofeedback study. bioRxiv 643577. 10.1101/643577 [DOI] [Google Scholar]
  130. Skouras S., Torner J., Andersson P., Koush Y., Falcon C., Minguillon C., et al. (2020). Earliest amyloid and tau deposition modulate the influence of limbic networks during closed-loop hippocampal downregulation. Brain 143, 976–992. 10.1093/brain/awaa011 [DOI] [PMC free article] [PubMed] [Google Scholar]
  131. Sorger B., Scharnowski F., Linden D. E., Hampson M., Young K. D. (2019). Control freaks: towards optimal selection of control conditions for fmri neurofeedback studies. Neuroimage 186, 256–265. 10.1016/j.neuroimage.2018.11.004 [DOI] [PMC free article] [PubMed] [Google Scholar]
  132. Steffens D. C., Maytan M., Helms M. J., Plassman B. L. (2005). Prevalence and clinical correlates of neuropsychiatric symptoms in dementia. Am. J. Alzheimers Dis. Dement. 20, 367–373. 10.1177/153331750502000611 [DOI] [PMC free article] [PubMed] [Google Scholar]
  133. Stewart S., O'Riley A., Edelstein B., Gould C. (2012). A preliminary comparison of three cognitive screening instruments in long term care: the MMSE, SLUMS, and MoCA. Clin. Gerontol. 35, 57–75. 10.1080/07317115.2011.626515 [DOI] [Google Scholar]
  134. Subramanian L., Morris M. B., Brosnan M., Turner D. L., Morris H. R., Linden D. E. (2016). Functional magnetic resonance imaging neurofeedback-guided motor imagery training and motor training for Parkinson's disease: randomized trial. Front. Behav. Neurosci. 10:111. 10.3389/fnbeh.2016.00111 [DOI] [PMC free article] [PubMed] [Google Scholar]
  135. Sulzer J., Haller S., Scharnowski F., Weiskopf N., Birbaumer N., Blefari M. L., et al. (2013). Real-time fMRI neurofeedback: progress and challenges. Neuroimage 76, 386–399. 10.1016/j.neuroimage.2013.03.033 [DOI] [PMC free article] [PubMed] [Google Scholar]
  136. Surmeli T., Eralp E., Mustafazade I., Kos H., Özer G. E., Surmeli O. H. (2016). Quantitative EEG neurometric analysis-guided neurofeedback treatment in dementia: 20 cases. How neurometric analysis is important for the treatment of dementia and as a biomarker? Clin. EEG Neurosci. 47, 118–133. 10.1177/1550059415590750 [DOI] [PubMed] [Google Scholar]
  137. Teipel S. J., Wohlert A., Metzger C., Grimmer T., Sorg C., Ewers M., et al. (2017). Multicenter stability of resting state fMRI in the detection of Alzheimer's disease and amnestic MCI. Neuroimage Clin. 14, 183–194. 10.1016/j.nicl.2017.01.018 [DOI] [PMC free article] [PubMed] [Google Scholar]
  138. Thibault R. T., Lifshitz M., Raz A. (2017). Neurofeedback or neuroplacebo? Brain 140, 862–864. 10.1093/brain/awx033 [DOI] [PubMed] [Google Scholar]
  139. Thibault R. T., MacPherson A., Lifshitz M., Roth R. R., Raz A. (2018). Neurofeedback with fMRI: a critical systematic review. Neuroimage 172, 786–807. 10.1016/j.neuroimage.2017.12.071 [DOI] [PubMed] [Google Scholar]
  140. Thibault R. T., Raz A. (2017). The psychology of neurofeedback: clinical intervention even if applied placebo. Am. Psychol. 72:679. 10.1037/amp0000118 [DOI] [PubMed] [Google Scholar]
  141. Tolin D. F., Davies C. D., Moskow D. M., Hofmann S. G. (2020). Biofeedback and neurofeedback for anxiety disorders: a quantitative and qualitative systematic review. Anxiety Disord. 1191, 265–289. 10.1007/978-981-32-9705-0_16 [DOI] [PubMed] [Google Scholar]
  142. Trambaiolli L. R., Kohl S. H., Linden D. E., Mehler D. M. (2021). Neurofeedback training in major depressive disorder: a systematic review of clinical efficacy, study quality and reporting practices. Neurosci. Biobehav. Rev. 125, 33–56. 10.1016/j.neubiorev.2021.02.015 [DOI] [PubMed] [Google Scholar]
  143. Trzepacz P. T., Hochstetler H., Wang S., Walker B., Saykin A. J., Initiative A. D. N., et al. (2015). Relationship between the Montreal cognitive assessment and mini-mental state examination for assessment of mild cognitive impairment in older adults. BMC Geriatr. 15:107. 10.1186/s12877-015-0103-3 [DOI] [PMC free article] [PubMed] [Google Scholar]
  144. Tufanaru C., Munn Z., Aromataris E., Campbell J., Hopp L. (2017). “Chapter 3: Systematic reviews of effectiveness,” in Joanna Briggs Institute Reviewer's Manual, eds E. Aromataris and Z. Munn (The Joanna Briggs Institute). Available online at: https://reviewersmanual.joannabriggs.org/
  145. Tursic A., Eck J., Lührs M., Linden D. E., Goebel R. (2020). A systematic review of fmri neurofeedback reporting and effects in clinical populations. Neuroimage Clin. 28:102496. 10.1016/j.nicl.2020.102496 [DOI] [PMC free article] [PubMed] [Google Scholar]
  146. von Lühmann A., Wabnitz H., Sander T., Müller K. R. (2016). M3ba: a mobile, modular, multimodal biosignal acquisition architecture for miniaturized EEG-NIRS-based hybrid BCI and monitoring. IEEE Trans. Biomed. Eng. 64, 1199–1210. 10.1109/TBME.2016.2594127 [DOI] [PubMed] [Google Scholar]
  147. Wang J. R., Hsieh S. (2013). Neurofeedback training improves attention and working memory performance. Clin. Neurophysiol. 124, 2406–2420. 10.1016/j.clinph.2013.05.020 [DOI] [PubMed] [Google Scholar]
  148. Wang T., Mantini D., Gillebert C. R. (2018). The potential of real-time fMRI neurofeedback for stroke rehabilitation: a systematic review. Cortex 107, 148–165. 10.1016/j.cortex.2017.09.006 [DOI] [PMC free article] [PubMed] [Google Scholar]
  149. Wasay M., Grisold W., Carroll W., Shakir R. (2016). World brain day 2016: celebrating brain health in an ageing population. Lancet Neurol. 15:1008. 10.1016/S1474-4422(16)30171-5 [DOI] [PubMed] [Google Scholar]
  150. Weiskopf N. (2012). Real-time fMRI and its application to neurofeedback. Neuroimage 62, 682–692. 10.1016/j.neuroimage.2011.10.009 [DOI] [PubMed] [Google Scholar]
  151. Whall A. L., Black M. E., Groh C. J., Yankou D. J., Kupferschmid B. J., Foster N. L. (1997). The effect of natural environments upon agitation and aggression in late stage dementia patients. Am. J. Alzheimers Dis. 12, 216–220. 10.1177/153331759701200506 [DOI] [Google Scholar]
  152. Witte M., Kober S. E., Ninaus M., Neuper C., Wood G. (2013). Control beliefs can predict the ability to up-regulate sensorimotor rhythm during neurofeedback training. Front. Hum. Neurosci. 7:478. 10.3389/fnhum.2013.00478 [DOI] [PMC free article] [PubMed] [Google Scholar]
  153. World Health Organization (2012). Dementia: A Public Health Priority. Geneva: World Health Organization. [Google Scholar]
  154. Yeh W. H., Hsueh J. J., Shaw F. Z. (2020). Neurofeedback of alpha activity on memory in healthy participants: a systematic review and meta-analysis. Front. Hum. Neurosci. 14:588. 10.3389/fnhum.2020.562360 [DOI] [PMC free article] [PubMed] [Google Scholar]
  155. Young K. D., Siegle G. J., Zotev V., Phillips R., Misaki M., Yuan H., et al. (2017). Randomized clinical trial of real-time fMRI amygdala neurofeedback for major depressive disorder: effects on symptoms and autobiographical memory recall. Am. J. Psychiatry 174, 748–755. 10.1176/appi.ajp.2017.16060637 [DOI] [PMC free article] [PubMed] [Google Scholar]
  156. Zhang R., Xiao W., Ding Y., Feng Y., Peng X., Shen L., et al. (2020). Recording brain activities in unshielded earth's field with optically pumped atomic magnetometers. Sci. Adv. 6:eaba8792. 10.1126/sciadv.aba8792 [DOI] [PMC free article] [PubMed] [Google Scholar]
