Abstract
BACKGROUND
Early discrimination and prediction of cognitive decline are crucial for the study of neurodegenerative mechanisms and interventions to promote cognitive resiliency.
METHODS
Our research is based on resting-state electroencephalography (EEG), and the current dataset includes 137 consensus-diagnosed, community-dwelling Black Americans (ages 60–90 years; 84 healthy controls [HC]; 53 mild cognitive impairment [MCI]) recruited through Wayne State University and the Michigan Alzheimer's Disease Research Center. We conducted multiscale analysis of time-varying brain functional connectivity and developed an innovative soft discrimination model in which each decision on HC or MCI also comes with a connectivity-based score.
RESULTS
The leave-one-out cross-validation accuracy is 91.97% and the 3-fold accuracy is 91.17%. The 9- to 18-month progression trend prediction accuracy over an availability-limited subset of participants is 84.61%.
CONCLUSION
The EEG‐based soft discrimination model demonstrates high sensitivity and reliability for MCI detection and shows promising capability in proactive prediction of people at risk of MCI before clinical symptoms may occur.
Keywords: cognitive health, mild cognitive impairment, resting‐state electroencephalography (rsEEG)
1. INTRODUCTION
Developing economically viable assessment tools and biomarkers that are highly sensitive to cognitive decline and neural dysfunction, before frank Alzheimer's disease (AD) pathology, is critical for the study of neurodegenerative mechanisms and interventions to promote cognitive resiliency, 1 , 2 , 3 , 4 , 5 , 6 , 7 especially for underserved populations who may face challenges in acquiring proper health care. 8 , 9
Due to its high time resolution, accessibility, affordability, and patient acceptance, electroencephalography (EEG)‐based detection of AD and mild cognitive impairment (MCI) has attracted increased research attention. 10 , 11 , 12 , 13 , 14 , 15 , 16 , 17 , 18 , 19 , 20 , 21 , 22 , 23 , 24 , 25 The results in the existing literature, however, are mixed and call for newer analytic methods. Notably, in several recent papers, 20 , 21 , 22 it was reported that very high accuracy (>96%) may be achieved over modest sample sizes (22–34 participants). However, the scalability and reproducibility of these approaches still need to be further verified. On the other hand, in a study 25 which included a relatively large sample of 336 older participants, it was reported that the accuracy of EEG‐based healthy control (HC) and MCI discrimination was only 61.76%.
This article focuses on data collected from Black American seniors with high susceptibility to cognitive decline and aims to develop a reliable and sensitive assessment tool for the early detection of persons at risk for MCI based solely on resting‐state EEG. The basic idea is to develop a soft discrimination model for HC and MCI which not only can produce a binary decision (also known as a hard decision) on HC or MCI for each tested participant but also can provide a soft score that characterizes the overall functional connectivity of the participant.
First, we conduct multiscale analysis on dynamic brain functional connectivity. Existing research suggests that the abnormal brain functions in AD and MCI are closely related to the weakening or loss of connectivity among critical brain regions. 26 , 27 , 28 , 29 , 30 , 31 , 32 , 33 , 34 , 35 In the literature, functional connectivity between two brain regions is often taken as a static parameter and is represented as a constant, such as the Pearson correlation of two time series. 36 More recently, however, it has been observed 37 , 38 , 39 , 40 , 41 , 42 that in fact, functional connectivity varies significantly with time, and the dynamic variation of functional connectivity may indicate changes in neural activity patterns in cognitive and behavioral aspects. 43
RESEARCH IN CONTEXT
Systematic review: The authors reviewed the literature using traditional (e.g., PubMed) sources and meeting abstracts and presentations. Electroencephalography (EEG)‐based detection of mild cognitive impairment (MCI) has attracted increased research attention recently. The results in the existing work, however, are mixed and call for further analysis. The relevant citations are appropriately cited.
Interpretation: The novel EEG‐based discrimination model demonstrates high sensitivity and stability for MCI detection. In addition, each decision on healthy controls or MCI also comes with an EEG‐based brain activity status score, which shows promising capability in predicting the personal progression trend of cognitive health in older adults.
Future directions: The article proposes a framework for the development of new approaches for the detection and prediction of MCI and the conduct of additional studies. Examples include: (1) personalized MCI detection and prediction based on dynamic brain connectivity and (2) joint analysis of functional and effective connectivity for early detection of the risk of MCI.
In this article, instead of using a fixed observation window size as in traditional evaluation tools for time-varying functional connectivity, 44 , 45 , 46 , 47 we choose to use a whole set of different window sizes and therefore capture the dynamics of the functional connectivity at different frequency resolutions, which we refer to as "multiscale analysis." We generate the feature vectors for each participant accordingly and feed the selected features into a machine-learning algorithm for the discrimination of HC and MCI. By tuning the observation window size and the group of features used, we obtain a series of approaches—each represents a different configuration of the discrimination model and has its own accuracy.
The new biomarkers introduced here reflect how functional connectivity is changing in both the time and frequency domains across the EEG‐based regions of interest (ROIs) in HC and MCI. Our analysis indicates that these biomarkers are closely related to the resting‐state EEG biomarkers for AD as identified in most consistent findings. 10 , 11 , 12 , 13 , 14
Second, unlike existing work which generally relies only on one detection approach, 19 , 20 , 21 , 23 , 24 our soft discrimination of HC and MCI is obtained through weighted majority voting of a selected group of reliable discrimination approaches. This combination of diversified approaches takes the discrepancies between HC and MCI from different perspectives into consideration and greatly improves the accuracy, stability, and reliability of the proposed model. Moreover, in addition to a binary HC or MCI decision, our result also comes with a connectivity-based score of the participant obtained from the EEG data, which reflects the status of the overall physiological brain activity in the resting state and shows promising capability in predicting the individualized progression of cognitive health in older adults. It hence makes possible the early detection of people at risk of MCI even before the clinical symptoms are recognizable.
2. MATERIALS AND METHODOLOGY
2.1. Participants and demographic data
We recruited 137 community‐dwelling Black participants (122 females, 15 males), ranging in age from 60 to 90 years, from the greater Detroit area. Some of the participants were recruited at the Healthier Black Elders Center, a collaboration between Wayne State University's Institute of Gerontology and University of Michigan's Institute of Social Research, 48 and others were recruited through the Michigan Alzheimer's Disease Research Center (MADRC) from outreach programs in local churches and community centers. To evaluate a group of community‐dwelling participants with a high risk of cognitive decline, persons were recruited if they considered themselves to be fully functioning, though they also responded positively to a question asking if they were concerned that they may have experienced a potential decline in cognitive ability over the past year. All participants were diagnosed through the MADRC consensus conference using the National Alzheimer's Coordinating Center (NACC) Uniform Data Set (UDS)—84 of them being HC and 53 with MCI (42 amnestic MCI [aMCI] and 11 non‐amnestic MCI [naMCI]). Because of the small number of naMCI participants, all MCI were combined into one group. All participants consented and signed a written consent document. All procedures were in accordance with the principles expressed in the Declaration of Helsinki and approved by the Wayne State University Institutional Review Board and the University of Michigan Medical School Institutional Review Board.
The demographic information of our participants is presented in Table 1. There were no significant differences among HC, aMCI, and naMCI participants in terms of education, and as expected, the average age of the MCI group is slightly higher than that of the HC group.
TABLE 1.
Demographic information of our participants.
| Demographic | Controls M | Controls SD | aMCI M | aMCI SD | naMCI M | naMCI SD | P |
|---|---|---|---|---|---|---|---|
| Sex | | | | | | | |
| Female (number, %) | 76 (90%) | | 31 (86%) | | 16 (94%) | | |
| Male (number, %) | 8 (10%) | | 5 (14%) | | 1 (6%) | | |
| Age | 72.19 | 6.17 | 74.92 | 6.66 | 74.00 | 8.01 | 0.04 |
| Education | 15.40 | 2.32 | 14.64 | 2.40 | 15.06 | 2.59 | 0.13 |
Abbreviations: aMCI, amnestic mild cognitive impairment; M, mean; naMCI, non‐amnestic mild cognitive impairment; SD, standard deviation.
2.2. Non‐invasive EEG recordings
Brain Vision (Brain Vision, Inc.) equipment was used to record scalp EEG activity with a non-invasive, high-density actiCap (64 active electrodes) arranged according to a modified International 10–20 System; that is, the electrodes were attached only to the scalp surface. The recording montage included the FCz electrode as an online reference and the AFz electrode at the midline as ground. After proper placement of the 64-electrode cap and confirmation of satisfactory impedances, the participant was seated behind a desk in a comfortable, height-adjusted chair in a dimly lit room. As part of a larger study on brain activity event-related potentials (ERPs), each participant received a 3-minute, eye-closed resting-state EEG recording. 15
In this article, we focus on resting-state EEG, which requires no training or active responses from the participants and is more desirable for clinical operability than task-based EEG. More specifically, eye-closed resting-state EEG is adopted in this article for higher scalability, reproducibility, and clinical operability, as well as to minimize external influence.
2.3. Experimental procedures
We performed EEG recordings in a community center at the University of Michigan (UM) Detroit Center or at the Institute of Gerontology at Wayne State University, using the same EEG system. We evaluated several available spaces with a Gauss meter prior to EEG recording to find the area with the least external noise (preferably < 0.3 mG) to obtain an acceptable EEG signal. Active electrodes were also used to further isolate external noise, minimize cable movement artifacts, and keep impedances below 10 kΩ. 15 , 49
2.4. EEG data pre‐processing
Analyzer 2 (Brain Vision, Inc.) was used for pre-processing of the baseline EEG data, following the recommendations of the Organization for Human Brain Mapping Committee on Best Practice in Data Analysis and Sharing for magnetoencephalography and electroencephalography. 50 Off-line inspection was used to identify and remove EEG segments containing excessive noise, saturation, or a lack of EEG activity. We segmented the cleaned EEG data into consecutive 2-second epochs and analyzed them off line (1024 data points; 0.488 Hz resolution; Hanning window). 15 An automatic computerized procedure with a rejection criterion of ±100 μV on any channel affected by artifacts (muscular, instrumental) was used to identify acceptable epochs. The artifact-free segments were additionally detrended and baseline corrected before averaging.
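The pre-processing described above was carried out in Analyzer 2; purely for illustration, the following is a minimal NumPy/SciPy sketch of the same epoching and artifact-rejection logic (2-second epochs, a ±100 µV rejection criterion, detrending, and baseline correction). The sampling rate, array layout, and function name are our assumptions, not the authors' scripts.

```python
import numpy as np
from scipy.signal import detrend

def clean_epochs(eeg, fs=512, epoch_sec=2.0, reject_uv=100.0):
    """Split continuous EEG (channels x samples, in microvolts) into
    consecutive 2-s epochs, drop epochs exceeding the rejection threshold
    on any channel, then detrend and baseline-correct the survivors.
    A simplified stand-in for the Analyzer 2 pipeline described above."""
    epoch_len = int(fs * epoch_sec)                  # 1024 samples at 512 Hz
    n_epochs = eeg.shape[1] // epoch_len
    kept = []
    for i in range(n_epochs):
        ep = eeg[:, i * epoch_len:(i + 1) * epoch_len]
        if np.max(np.abs(ep)) > reject_uv:           # artifact on any channel
            continue
        ep = detrend(ep, axis=1)                     # remove linear trend
        ep = ep - ep.mean(axis=1, keepdims=True)     # baseline correction
        kept.append(ep)
    return np.stack(kept) if kept else np.empty((0, eeg.shape[0], epoch_len))

# Example: a 64-channel, 3-minute recording sampled at 512 Hz
# epochs = clean_epochs(np.random.randn(64, 512 * 180))
```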
After the preprocessing procedure, the shortest length of the eye‐closed, resting‐state EEG among all the 137 participants was 110 s. Therefore, for each participant, we kept the first 110 s of the eye‐closed, resting‐state EEG signals of all the 64 channels for further analysis.
2.5. Selection of ROIs
As illustrated in Figure 1, we selected a total of 12 ROIs: 51 left frontal—LF (Fp1, AF7, AF3, F7, F5, F3), medium frontal—MF (F1, Fz, F2, FC1, FC2), right frontal—RF (Fp2, AF4, AF8, F4, F6, F8), left temporal—LT (FT9, FT7, T7, TP7, TP9), left central—LC (FC5, FC3, C5, C3, CP5, CP3), medial central—MC (C1, Cz, C2, CP1, CPz, CP2), right central—RC (FC4, FC6, C4, C6, CP4, CP6), right temporal—RT (FT10, FT8, T8, TP8, TP10), left parietal—LP (P7, P5, P3, PO7, PO3), medial parietal—MP (P1, Pz, P2, POz), right parietal—RP (P4, P6, P8, PO4, PO8), occipital—O (PO9, O1, Oz, O2, PO10).
FIGURE 1.

The regions of interest. LC, left central; LF, left frontal; LP, left parietal; LT, left temporal; MC, medial central; MF, medium frontal; MP, medial parietal; O, occipital; RC, right central; RF, right frontal; RP, right parietal; RT, right temporal.
Before further analysis, we calculated the current source density (CSD) or the Laplacian (second spatial derivative) of the scalp voltage 52 , 53 , 54 from the EEG signal for all the ROIs using the CSD Toolbox in MATLAB. 55 Please refer to the Supplemental File in supporting information for more details.
3. RESULTS
3.1. Multiscale evaluation and analysis of time‐varying functional connectivity
We evaluated the time-varying functional connectivity between all the selected ROIs using the sliding window approach. In most of the existing work involving the sliding window approach, a fixed window size is used. 21 , 37 , 42 A commonly asked question is: what is the right window size? Denoting the window size by L, we chose to investigate a whole set of dyadic window sizes (L = 2^s samples, with a sampling period of approximately 2 ms) and performed a multiscale evaluation of the time-varying connectivity for the brain network of each subject. The base-2 scaling of the window size is motivated by the discrete wavelet transform. 56 Denote the length of the CSD signal of each ROI by N; in our case, N corresponds to the 110 s of retained signal. For each fixed window size L, we partitioned the CSD of each ROI into successive, non-overlapping blocks of length L and evaluated the functional connectivity—in terms of Pearson correlation—among the time-synchronized blocks corresponding to all the ROI region pairs. In this way, for each region pair, instead of a single connectivity value, we obtained a Pearson correlation vector, denoted as P = [P(1), P(2), …, P(K)], where K = ⌊N/L⌋ is the number of blocks. As can be seen, the vector P elaborates how the functional connectivity between brain regions changes over time. The window size L determines the time and frequency resolutions of the Pearson correlation vector P. By calculating the Pearson correlation vector under different window sizes for each region pair, we obtained a multiscale evaluation of the time-varying functional connectivity of the ROI network.
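As a concrete illustration of this step, the sketch below computes, for one pair of ROI signals and one window size L, the Pearson correlation vector over successive non-overlapping blocks. The particular range of dyadic window sizes in the commented example is an illustrative assumption, not the study's exact grid.

```python
import numpy as np

def pearson_vector(x, y, L):
    """Pearson correlation of two time-synchronized ROI signals over
    successive, non-overlapping blocks of length L samples."""
    K = len(x) // L                                    # number of blocks
    P = np.empty(K)
    for k in range(K):
        xb = x[k * L:(k + 1) * L]
        yb = y[k * L:(k + 1) * L]
        P[k] = np.corrcoef(xb, yb)[0, 1]
    return P

# Multiscale evaluation over dyadic window sizes (illustrative range only):
# window_sizes = [2 ** s for s in range(5, 13)]
# P_multiscale = {L: pearson_vector(csd_roi_a, csd_roi_b, L) for L in window_sizes}
```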
Note that we had a total of 12 EEG-based ROIs—this implies there were altogether 66 region pairs. Therefore, for each subject, we had 66 Pearson correlation vectors. We then performed a time–frequency analysis on each Pearson correlation vector using the continuous wavelet transform in the Wavelet Toolbox in MATLAB 57 and obtained the wavelet coefficients W(j, k), where j corresponds to frequency and k corresponds to time shift. For each of the 66 region pairs, we calculated the mean, minimum, and maximum of the wavelet coefficients with respect to k and took all 198 of them as primitive features. This procedure was conducted for every window size L.
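A sketch of this time–frequency feature step follows, using PyWavelets' continuous wavelet transform in place of the MATLAB Wavelet Toolbox. The mother wavelet, scale grid, and the collapsing of the statistics over both scale and time shift are our simplifying assumptions.

```python
import numpy as np
import pywt

def wavelet_features(P, scales=np.arange(1, 17), wavelet="morl"):
    """Continuous wavelet transform of a Pearson correlation vector P,
    followed by the mean / minimum / maximum of the coefficients
    (collapsed over scale and time shift here for simplicity),
    giving a small statistic set per region pair."""
    coeffs, _ = pywt.cwt(np.asarray(P, dtype=float), scales, wavelet)
    return np.array([coeffs.mean(), coeffs.min(), coeffs.max()])

# Stacking these statistics over all 66 region pairs yields the
# 198 primitive features per window size described in the text.
```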
After generating the primitive features for each subject based on joint time–frequency–spatial analysis of the functional connectivity of the EEG‐based ROI network, we are ready to move to the next phase: the discrimination. Due to the small size of the MCI sample set (especially naMCI), we combined aMCI and naMCI into one group for HC and MCI discrimination, as in existing works. 20 , 21 , 22 , 23 , 24 , 25
We would like to point out that, particularly from a non-invasive EEG analysis standpoint, the multiscale time-varying functional connectivity analysis introduced here removes the constraint of a fixed window size in the sliding window approach and sidesteps the long-standing question of how to select a single window size. As will be seen in the next section, adopting multiple window sizes allows us to develop a whole series of discrimination approaches—each representing a unique window size and feature group configuration of our discrimination model, with its own accuracy—and therefore provides a large pool of HC and MCI classifiers from which to select.
3.2. Soft discrimination of HC and MCI
Traditionally, for a given participant, the discrimination result is generally a binary decision—also called a hard decision—which is either HC or MCI. Here, by soft discrimination, we mean that in addition to the binary decision of HC or MCI, our discrimination result also comes with a connectivity‐based score, which potentially can be used as a predictor for the cognitive health of each participant. In the following, the proposed soft discrimination procedure is summarized in four steps, in which we first constructed the hard decision classifiers and then performed soft discrimination through majority voting of a group of selected, reliable classifiers.
In step 1, feature selection, a discriminating feature is one whose value is more indicative of one group (HC or MCI) than the other. For each fixed window size L, each primitive feature for all 137 subjects was put together and screened using the t test, and only the M out of the 198 primitive features with the smallest P-values were selected to formulate the feature vector. Here, both L and M can take eight possible values, and each (L, M) pair represents one configuration of our discrimination model. Therefore, altogether we have 8 × 8 = 64 possible hard-decision discrimination approaches or classifiers.
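Step 1 can be sketched as follows: for a given window size, each primitive feature is screened with a two-sample t test across the HC and MCI groups, and the M features with the smallest P-values are retained. The feature-matrix layout and function name are assumptions for illustration.

```python
import numpy as np
from scipy.stats import ttest_ind

def select_features(X, y, M):
    """X: subjects x primitive features (198 per window size);
    y: array with 0 for HC and 1 for MCI. Returns the indices of the
    M features with the smallest two-sample t-test P-values."""
    _, pvals = ttest_ind(X[y == 0], X[y == 1], axis=0)
    return np.argsort(pvals)[:M]

# selected = select_features(X_primitive, labels, M=20)  # M is one of eight values
```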
In step 2, dimension reduction, for each fixed (L, M) pair, the corresponding feature vector has length M. Using a regularized LDA (linear discriminant analysis), 58 we mapped the feature vector of each participant to a point in a one-dimensional subspace, or axis, in which the difference between HC and MCI subjects is maximized. Regularized LDA was used to reduce the noise effect (caused by both biological variability and measurement errors) in the size-limited dataset. 58
In step 3, first‐layer discrimination using single classifiers, we then constructed the decision trees based on the LDA output and carried out the classification using the AdaBoost classifier, which has proved to be a highly effective classification tool. 58
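Steps 2 and 3 can be sketched with scikit-learn: a shrinkage-regularized LDA projects the selected features onto a one-dimensional discriminant axis, and an AdaBoost ensemble of shallow decision trees classifies the projected values. The specific estimator settings (shrinkage mode, solver, number of boosting rounds) are illustrative assumptions rather than the authors' exact configuration.

```python
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import AdaBoostClassifier
from sklearn.pipeline import make_pipeline

def build_classifier():
    """One (L, M) configuration: regularized LDA for dimension reduction,
    followed by AdaBoost (whose default base learner is a depth-1 decision
    tree, i.e., a stump) on the 1-D discriminant axis."""
    lda = LinearDiscriminantAnalysis(solver="eigen", shrinkage="auto")
    ada = AdaBoostClassifier(n_estimators=50)
    return make_pipeline(lda, ada)

# clf = build_classifier().fit(X_train[:, selected], y_train)
# votes = clf.predict(X_test[:, selected])
```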
We repeated Steps 2 and 3 for all the (L, M) pairs and therefore obtained a series of 64 classifiers, or discrimination approaches, each of which is a unique configuration of our discrimination model and has its own accuracy.
In step 4, soft discrimination of HC and MCI through majority voting of selected classifiers, note that each classifier has a different feature group from all the other classifiers, and each feature reflects the difference between HC and MCI from a unique perspective. We selected a group of N reliable classifiers (i.e., the N classifiers with the highest accuracy, where N is an odd number) and performed the final discrimination of HC and MCI through weighted majority voting. More specifically, for n = 1, …, N, denote the vote of voter n as v_n, where v_n = +1 if the decision is HC and v_n = −1 if the decision is MCI. Denoting the accuracy of voter n by a_n, then for each subject, in addition to the binary output of HC or MCI, we can also obtain a soft discrimination output, namely the accuracy-weighted average of the votes v_1, …, v_N, which is a connectivity-based score of the participant obtained from the EEG data and reflects the status of the overall physiological brain activity in the resting state. Potentially, it can be used to predict how likely the participant is to progress from HC to MCI, or from MCI to AD.
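Step 4 can be sketched as an accuracy-weighted average of the ±1 votes; normalizing by the sum of the weights, so that the score lies in (−1, +1), is our reading of the weighted-voting description rather than a formula reproduced from the original.

```python
import numpy as np

def soft_score(votes, accuracies):
    """votes: +1 (HC) / -1 (MCI) decisions from the N selected classifiers;
    accuracies: their cross-validated accuracies, used as voting weights.
    Returns the connectivity-based soft score in (-1, +1) and the
    corresponding hard HC/MCI decision."""
    v = np.asarray(votes, dtype=float)
    w = np.asarray(accuracies, dtype=float)
    s = np.dot(w, v) / w.sum()           # accuracy-weighted majority vote
    return s, ("HC" if s > 0 else "MCI")

# Usage (illustrative values):
# s, decision = soft_score([+1, -1, +1], [0.91, 0.89, 0.90])
```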
The effectiveness of majority voting can be roughly illustrated through the following result: assuming there are N (where N is an odd number) independent voters, all with classification accuracy a, then the probability that the majority voting delivers the correct result is given by

P_N = Σ_{k=K}^{N} C(N, k) a^k (1 − a)^(N−k),

where C(N, k) is the binomial coefficient and K = (N + 1)/2 (note that N is odd) is the smallest number of correct voters needed for the majority voting result to be correct. The value of P_N under different numbers of voters and voter accuracies is illustrated in Figure 2. As can be seen, when the individual voter accuracy is reasonably high (e.g., a > 0.6), majority voting can improve the discrimination accuracy significantly when the number of voters is sufficiently large.
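The probability above is the upper tail of a binomial distribution; a short check using SciPy's binomial survival function reproduces curves like those in Figure 2 under the stated independence assumption (the example numbers in the comment are ours).

```python
from scipy.stats import binom

def majority_vote_accuracy(N, a):
    """Probability that at least K = (N + 1) / 2 of N independent voters,
    each correct with probability a, produce the correct majority decision."""
    K = (N + 1) // 2
    return binom.sf(K - 1, N, a)   # P(X >= K) for X ~ Binomial(N, a)

# e.g., majority_vote_accuracy(21, 0.8) is roughly 0.999 under independence
```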
FIGURE 2.

Majority voting accuracy versus the number of independent voters. As can be seen, the majority voting accuracy increases with the number of independent voters when individual voter accuracy is reasonably high (e.g., a >0.6).
In our case, even though the voters (i.e., the selected classifiers) are not completely independent, each classifier relies on a different feature group from all the other classifiers. As will be seen in the next section, majority voting can improve discrimination sensitivity and stability significantly.
We would like to point out that an odd number of classifiers is selected here to avoid the situation in which we have an equal number of positive and negative votes. However, because our soft score also takes the accuracy of each classifier into consideration, an even number of voters may work as well because the voters are unlikely to have identical accuracy.
3.3. Discrimination results of individual classifiers
We conducted the discrimination process for all the individual classifiers corresponding to each (L, M) pair, where L is the window size and M is the number of features with the smallest P-values selected. To assess the accuracy of the classifiers, we adopted the traditional leave-one-out test as well as the k-fold (with k = 10, 5, 3) cross-validation technique. 59 This is because, in addition to leave-one-out or 10-fold (i.e., leave-10%-out) cross-validation, a clinically desirable test would use 5-fold or even 3-fold cross-validation. 60
In the k‐fold cross‐validation, the data are split into k equally sized subsets, which are also called “folds.” One of the k‐folds acts as the test set, also known as the holdout set or validation set, and the remaining folds train the model. This process is repeated until each fold has acted as a holdout fold. When all iterations are completed, the accuracy scores for all the test sets are averaged to evaluate the performance of the classifier.
For a more accurate assessment of our discrimination model, random shuffles were applied to enhance the dataset cardinality 61 (i.e., the number of participants in that dataset) in the k‐fold cross‐validation. More specifically, for each k, the k‐fold cross‐validation was repeated 10 times, and each time, the dataset was randomly shuffled to ensure sufficient diversity in the k‐fold test.
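The repeated, shuffled k-fold procedure can be sketched with scikit-learn as below; the classifier object (reusing the build_classifier sketch above) and feature matrix names are placeholders, and plain (non-stratified) folds are assumed since the text does not specify stratification.

```python
import numpy as np
from sklearn.model_selection import KFold, cross_val_score

def repeated_kfold_accuracy(clf, X, y, k, repeats=10):
    """Mean accuracy over `repeats` runs of k-fold cross-validation,
    re-shuffling the dataset before each run as described in the text."""
    scores = []
    for seed in range(repeats):
        cv = KFold(n_splits=k, shuffle=True, random_state=seed)
        scores.append(cross_val_score(clf, X, y, cv=cv, scoring="accuracy").mean())
    return float(np.mean(scores))

# acc_3fold = repeated_kfold_accuracy(build_classifier(), X[:, selected], y, k=3)
```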
The discrimination accuracy tables of the individual classifiers for leave‐one‐out, 10‐fold, 5‐fold, and 3‐fold cross‐validations are shown in Figure 3. As can be seen, as we gradually reduced the size of the training set and increased the size of the test set, the accuracy scores of the classifiers may decrease slightly, but continue to demonstrate relatively high stability in the tests.
FIGURE 3.

Discrimination results for individual classifiers: (A) leave-one-out cross-validation; (B) 10-fold cross-validation; (C) 5-fold cross-validation; (D) 3-fold cross-validation. Each individual classifier is uniquely determined by the window size and the number of features used and has its own accuracy. As can be seen, when we gradually reduced the size of the training set and extended the holdout set, the discrimination results degraded slightly but demonstrated relatively high stability in the tests. The 21 classifiers that delivered the highest accuracy in leave-one-out cross-validation were selected as our voters and are marked by the red boxes.
3.4. Majority voting and soft discrimination
We selected the N = 21 classifiers that delivered the highest accuracy in leave-one-out cross-validation as our voters, as marked by the red boxes in Figure 3. The same group of voters was used for the soft discrimination of HC and MCI for leave-one-out, 10-fold, 5-fold, and 3-fold cross-validations, and the results based on weighted majority voting are shown in Figure 4 through the confusion matrices.
FIGURE 4.

Majority voting results—the confusion matrices for: (A) leave-one-out; (B) 10-fold; (C) 5-fold; (D) 3-fold cross-validation. To enhance the dataset cardinality, 10 random shuffles were applied for the 10-fold, 5-fold, and 3-fold cross-validations. As can be seen, the soft discrimination model achieves high accuracy (>91%) for leave-one-out and k-fold (k = 10, 5, 3) cross-validations. We also compared the leave-one-out discrimination accuracy by sex, and the discrimination model shows comparable sensitivity for females and males.
As can be seen, the voting-based soft discrimination achieved high accuracy (>91%) for leave-one-out and k-fold (k = 10, 5, 3) cross-validations, and demonstrated significantly higher sensitivity and stability than the individual classifiers as we gradually downsized the training set and extended the test set. It is interesting to note that 10-fold cross-validation shows a higher accuracy than leave-one-out. This is because the dataset is oversampled when it is randomly shuffled 10 times, which increases the resolution of the accuracy scores. We also compared the leave-one-out discrimination accuracy by sex, and our analysis indicated that the discrimination model does show comparable sensitivity for females and males.
The soft discrimination score for each participant is shown in Figure 5A, together with the mean and variance for both the HC and MCI groups. Note that "+1" represents pure HC, and "−1" represents pure MCI. Each participant received a score s within (−1, +1), in which a positive s implies that the decision is HC, and a negative s implies that the decision is MCI. At the same time, the connectivity-based soft score s could also serve as an indicator of the cognitive health level, in the sense that the larger the s, the better the cognitive health. For example, a score of 0.83 would indicate that the participant is in a very favorable HC condition, whereas a score of 0.12 would indicate that, although the participant is classified as HC, there is a trend that the participant may progress to the MCI condition. A similar interpretation applies to the negative scores. In other words, the soft discrimination score may be used to predict whether an HC participant is likely to progress to MCI, which is critical for timely intervention or treatment to prevent the cognitive condition from worsening, or for clinical trial inclusion.
FIGURE 5.

Soft discrimination of HC and MCI based on leave-one-out cross-validation. (A) Soft discrimination scores in ascending order. Here "+1" represents pure HC, and "−1" represents pure MCI. Each participant receives a soft score s within (−1, +1), in which a positive s implies that the decision is HC, and a negative s implies that the decision is MCI. At the same time, the soft score s may also serve as an indicator of the cognitive health level, in the sense that the larger the s, the better the cognitive status. Potentially, the soft discrimination score makes it possible to predict the personal progression trends for MCI. (B) Distribution of the soft discrimination scores. As expected, incorrect prediction happens mainly when the soft discrimination score is close to zero, where the differences between low-scoring HC and high-scoring MCI are not significant. EEG, electroencephalography; HC, healthy control; LOO, leave one out; MCI, mild cognitive impairment.
The distribution of the soft discrimination scores is shown in Figure 5B. As expected, errors occur mainly in the score range near zero, where the differences between low-scoring HC and high-scoring MCI are not that significant.
We also compared the aMCI (36 participants) and naMCI (17 participants) groups on the mean and variance of the soft score; please refer to Section 2 of the Supplemental File. It was observed that naMCI, which generally involves less severe cognitive decline than aMCI, also has a slightly higher average soft score (Mean = −0.3678, Var = 0.0873) than aMCI (Mean = −0.3761, Var = 0.0794). This does not apply to the misclassified (i.e., incorrectly classified) group, mainly due to the extremely small sample sizes of the misclassified groups (five aMCI and two naMCI). What can be observed, however, is that the correctly classified groups have low average scores (Mean = −0.4550 for aMCI and Mean = −0.4273 for naMCI), whereas the average soft scores of the misclassified groups (Mean = 0.1129 for aMCI and Mean = 0.0789 for naMCI) fall into the same interval as HC participants with low soft scores.
3.5. Soft‐decision score and possible progression trend prediction
Out of the total 137 participants, 74 had a follow-up MADRC consensus diagnosis. Of these 74 participants, 26 had at least one follow-up consensus diagnosis made by the MADRC about 9 to 18 months after the EEG test. Of these 26 participants, 19 had a consensus diagnosis made, or remaining unchanged, about 12 to 18 months after the EEG test, and one participant had a consensus diagnosis 8 months after the EEG test.
Note that our soft-decision scores were obtained based on the EEG data; we explored the possibility of predicting the temporal progression trend of each participant in the 9 to 18 months after the EEG test, and the result is shown in Table 2. Motivated by Figure 2, our prediction criterion was: (1) if the soft score s was above a positive cut-off, the participant would remain HC for 9 to 18 months; otherwise, (2) if s was positive but below that cut-off, the participant was considered likely to progress to MCI after 9 to 18 months; (3) if s was negative but above a negative cut-off, the participant would remain MCI for 9 to 18 months; and (4) if s was below the negative cut-off, the participant was considered likely to progress to AD after 9 to 18 months. For each participant who had been correctly classified, based on whether the progression trend was predicted correctly or incorrectly, we marked the result as "correct trend" or "incorrect trend," respectively. When our discrimination decision was incorrect, we marked it as "misclassified."
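The prediction rule can be written as a simple threshold map on the soft score. The numerical cut-off values below are hypothetical placeholders chosen only to be consistent with Table 2; the exact thresholds used in the study are not reproduced here.

```python
def predict_trend(s, hc_cutoff=0.2, ad_cutoff=-0.8):
    """Map a soft score s in (-1, +1) to a 9-to-18-month trend prediction.
    The cut-off values are illustrative placeholders, not the study's."""
    if s >= hc_cutoff:
        return "remain HC"
    if s > 0:
        return "likely to progress to MCI"
    if s > ad_cutoff:
        return "remain MCI"
    return "likely to progress to AD"

# predict_trend(0.114) -> "likely to progress to MCI" (cf. participant 5 in Table 2)
```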
TABLE 2.
Soft‐decision score and progression trend prediction.
| Subject | EEG Test Year | Decision & Soft Score | 1st Visit: Year, Diagnosis | 2nd Visit: Year, Diagnosis | 3rd Visit: Year, Diagnosis | Prediction for 9–18 Months |
|---|---|---|---|---|---|---|
| 1 | 2015 | HC, 0.7271 | 2016, HC | 2017, HC | 2018, HC | Correct trend |
| 2 | 2016 | MCI, −0.7195 | 2016, naMCI | 2017, aMCI | Correct trend | |
| 3 | 2015 | HC, 0.7271 | 2015, HC | 2016, HC | 2017, HC | Correct trend |
| 4 | 2015 | MCI, −0.1175 | 2016, naMCI | 2017, naMCI | 2018, aMCI | Correct trend |
| 5 | 2015 | HC, 0.114 | 2015, HC | 2016, naMCI | Correct trend | |
| 6 | 2015 | HC, 0.6486 | 2015, HC | 2016, aMCI | 2017, naMCI | Incorrect trend |
| 7 | 2015 | HC, 0.8001 | 2015, HC | 2016, HC | 2017, aMCI | Correct trend |
| 8 | 2015 | MCI, −0.7258 | 2015, aMCI | 2017, aMCI | Correct trend | |
| 9 | 2015 | HC, 0.4915 | 2015, HC | 2016, aMCI | Incorrect trend | |
| 10 | 2016 | MCI, −0.5019 | 2015, aMCI | 2017, aMCI | Correct trend | |
| 11 | 2015 | HC, 0.7271 | 2015, HC | 2016, HC | 2017, HC | Correct trend |
| 12 | 2016 | MCI, −0.3531 | 2015, aMCI | 2017, aMCI | 2018, naMCI | Correct trend |
| 13 | 2015 | HC, 0.2725 | 2015, HC | 2016, HC | 2017, naMCI | Correct trend |
| 14 | 2016 | MCI, −0.7188 | 2016, aMCI | 2017, aMCI | 2018, aMCI | Correct trend |
| 15 | 2016 | HC, 0.7244 | 2016, HC | 2017, aMCI | 2018, naMCI | Incorrect trend |
| 16 | 2016 | MCI, −0.4115 | 2016, aMCI | 2017, aMCI | Correct trend | |
| 17 | 2016 | MCI, −0.6548 | 2016, aMCI | 2017, aMCI | 2018, aMCI | Correct trend |
| 18 | 2016 | MCI, −0.6465 | 2016, aMCI | 2017, aMCI | Correct trend | |
| 19 | 2016 | HC, 0.4964 | 2016, HC | 2017, HC | 2018, naMCI | Correct trend |
| 20 | 2017 | HC, 0.4359 | 2016, HC | 2017, HC | 2018, HC | Correct trend |
| 21 | 2017 | HC, 0.2649 | 2017, HC | 2019, HC | 2020, HC | Correct trend |
| 22 | 2017 | MCI, −0.3434 | 2017, aMCI | 2018, aMCI | 2019, aMCI | Correct trend |
| 23 | 2017 | MCI, −0.1189 | 2017, HC | 2018, HC | 2019, HC | Misclassified |
| 24 | 2017 | MCI, −0.114 | 2017, aMCI | 2018, aMCI | 2019, aMCI | Correct trend |
| 25 | 2017 | MCI, −0.6548 | 2017, aMCI | 2018, naMCI | 2019, naMCI | Correct trend |
| 26 | 2017 | MCI, −0.3427 | 2017, aMCI | 2018, aMCI | Correct trend |
Result Summary: Total number of subjects with at least one follow‐up consensus diagnosis (made by MADRC) in about 9–18 months after the EEG test: 26; Correct trend (i.e., subjects with progression trend predicted correctly): 22 out of 26, 84.61%; Incorrect trend (i.e., subjects with progression trend predicted incorrectly): 3 out of 26, 11.54%; Misclassified: 1 out of 26, 3.85%.
[Correction added on December 27, 2023, after first online publication: Subject IDs were removed and test dates were revised to include only a year.]
As can be seen from Table 2, of the 26 subjects, we have a correct trend (i.e., subjects with the progression trend predicted correctly) for 22 out of 26 (84.61%); an incorrect trend (i.e., subjects with the progression trend predicted incorrectly) for 3 out of 26 (11.54%); and misclassification for 1 out of 26 (3.85%). It is worth noting that the fifth participant in Table 2 received a soft score of 0.114, which placed that person in the HC group but indicated that the participant was more likely to progress to MCI. [Correction added on December 27, 2023, after first online publication: In the preceding sentence, the subject ID was removed.] The prediction was validated in the follow-up visit 1 year later, and therefore this case is marked "correct trend."
For all 26 participants examined, the lowest soft score was −0.7258, and none of them progressed to AD in the 9 to 18 months after the EEG test. Due to data limitations, in this article we have focused on the proactive prediction of MCI for people diagnosed as HC and were not yet able to explore the possible conversion from MCI to AD, but the MADRC will continue to follow these individuals so that longitudinal data will become available as time goes on. We would also like to emphasize that, due to high uncertainty and instability in the cognitive condition of the older population, our prediction is mainly limited to 18 months.
3.6. A recapitulation of feature selection
From Figure 3, we can see that the selected voters all correspond to intermediate window sizes, and no features were selected from the smallest or the largest window sizes. This is because when the window size is too small, the corresponding Pearson correlation vector cannot really reflect the statistical property of the functional connectivity between region pairs; when the window size is too large, the Pearson correlation vector cannot accurately capture the time-varying property of the functional connectivity.
The complete tables of selected features for the selected window sizes can be found in the Supplemental File. Our analysis indicated that the region pairs occurring most frequently in the selected features are: right parietal (RP) ↔ occipital (Occ), left temporal (LT) ↔ right parietal (RP), left temporal (LT) ↔ right central (RC), and left temporal (LT) ↔ medial parietal (MP).
The frequency with which each ROI occurs in the selected features is shown in Figure 6. As can be seen, RP, MP, LT, and Occ were the ROIs that appeared most frequently in the features, indicating that these regions play significant roles in identifying the differences in resting-state brain connectivity between HC and MCI. These brain regions are exactly those most often cited as involved in MCI and AD deterioration. 11 , 14 It also should be noted that EEG electrodes do reflect surrounding area activities to some degree.
FIGURE 6.

Frequency of occurrence of all the ROIs in the 45 selected features. (A), (B) For one of the two selected window sizes; (C), (D) for the other. This figure illustrates the concept of multiscale functional connectivity analysis in both time and frequency domains and shows the reciprocal relationship between the window size and the frequency range of the features. Moreover, the right parietal (RP), medial parietal (MP), left temporal (LT), and occipital (Occ) are the ROIs that appear most frequently in the selected features, indicating that these regions play significant roles in differentiating HC and MCI in information transmission and reception during the resting state. HC, healthy control; LOO, leave one out; MCI, mild cognitive impairment; ROI, region of interest.
From Figure 6, the reciprocal relationship between the window size and frequency range of the features can also be observed, illustrating the concept of multiscale functional connectivity analysis in both time and frequency domains.
4. DISCUSSION
In this article, we developed and verified a highly sensitive and reliable soft discrimination model for HC and MCI based only on 3 minutes of resting-state EEG. By choosing a diversified group of reliable classifiers as voters, generated through multiscale analysis of the time-varying functional connectivity between the ROIs, a soft discrimination score was obtained for each participant through weighted majority voting of all these voters. The connectivity-based soft score not only provides a hard binary decision on HC or MCI but can also serve as an indicator of the participant's cognitive health status. Our preliminary results on 9- to 18-month progression trend prediction indicate that the soft score shows promising capability in predicting a person's progression trend toward MCI, especially among Black seniors, and therefore may enable the early detection of individuals at high risk of MCI even before the clinical symptoms clearly appear. This is crucial for research toward preventing pathological cognitive decline from a very early stage and reducing the risk of AD and related dementias.
With further validation, our discrimination model may be developed into a cost-effective, highly sensitive, non-invasive, and personalized assessment tool for the early detection of individuals at risk of cognitive impairment, which could encourage and promote cognitive resiliency in seniors, especially older Black individuals.
Studies have shown that community-dwelling older Black Americans show faster rates of cognitive decline than older White Americans and are almost twice as likely to develop MCI and AD. 8 , 9 This study focuses on EEG data collected among high-risk older Black Americans, a highly underserved population, and may provide a better understanding of the uniqueness of such populations and promote health equity.
In the following, we discuss the relationship of this study with existing work on EEG biomarkers and predictive models for AD pathology and deterioration, as well as some possible limitations faced by the proposed soft discrimination model.
In reference to the relationship with existing work on resting‐state EEG biomarkers for AD pathology, recall that the new biomarkers introduced in this study reflect how functional connectivity is changing in both the time and frequency domains across the EEG‐based ROIs in HC and MCI. Here we articulate the underlying relationship between these new biomarkers and existing resting‐state EEG biomarkers for AD pathology.
In the literature, 10 , 11 , 12 , 13 it has been pointed out that, as the most consistent findings, AD patients with MCI and dementia show abnormalities in peak frequency, power, and "interrelatedness" resting-state EEG measures (e.g., directed transfer function, phase lag index, linear lagged connectivity) at delta (0.5–4 Hz), theta (4–8 Hz), and alpha (8–12 Hz) rhythms in relation to disease progression and interventions.
From Figure 3, in which the discrimination results for individual classifiers are presented, it can be seen that the most reliable voters lie in an intermediate range of window sizes. Recall that our sampling period is 2 ms; roughly speaking, this window size set corresponds to the frequency range of 0.31 to 10 Hz, which spans the delta, theta, and alpha range and is consistent with existing findings. 10 , 11 , 12 , 13 Moreover, in the literature, 11 it was pointed out that theta frequency is the earliest and most sensitive EEG marker of AD pathology. It is interesting to note that in our study, the window size corresponding to the theta band contributed most of the reliable voters among all the selected window sizes.
In addition, brain regions that were found to be most important in the ROI analysis were those that are most often cited as involved in MCI and AD deterioration (e.g., parietal, temporal, and occipital). 11 , 14 It also should be noted that EEG electrodes do reflect surrounding area activities to some degree.
Concerning the relationship with existing predictive models in the AD area, the research literature 62 , 63 , 64 , 65 , 66 has mainly focused on predicting the conversion or progression from MCI to AD or dementia, with the major goal of identifying which individuals with MCI are more likely to develop AD. Notably, in Jiao et al., 67 it was pointed out that EEG is a promising tool for the diagnosis and disease progression evaluation of MCI and AD. Prediction of the incidence of MCI in older adults—before clinical symptoms have occurred—has been very limited. Note that AD generally leads to irreversible deterioration of cognition; the lack of full success in AD treatment to date indicates that current interventions may come too late to be effective, 68 which calls for proactive prediction of people at risk of MCI before clinical symptoms occur.
In this article, we moved one step forward by showing that a soft discrimination model can make it possible to predict the progression of cognitive impairment even before MCI symptoms may first appear. This study is an initial step toward soft detection and proactive prediction of MCI among older at‐risk individuals, and we believe that advanced research on proactive prediction of MCI would provide new insight for treatment options in fighting AD through early, time‐sensitive interventions in older adults at risk of cognitive impairment, and therefore help prevent or delay the onset of MCI and AD.
Does our model suffer from underfitting or overfitting? Underfitting and overfitting are two main problems in machine learning that degrade the performance of a model. Underfitting happens when a model is oversimplified and is unable to capture the underlying pattern of the data. An underfit model has poor performance on the training data and will also produce unreliable predictions on new data. On the other hand, overfitting happens when the model captures the noise along with the underlying pattern in the data, generally because the model is too complex and/or the dataset is too noisy. An overfit model tends to perform well on training data but has poor performance on test data and is generally more difficult to identify than underfitting. For these reasons, the k-fold cross-validation technique is often applied to evaluate the performance of machine learning models.
In our case, we tried to eliminate possible underfitting and overfitting problems by including both simple classifiers (i.e., the ones with fewer features) and complex classifiers (i.e., the ones with more features) in the voter group. After majority voting, both the sensitivity and stability of the soft discrimination classifier are significantly improved, and the model does demonstrate stable performance in leave‐one‐out, 10‐fold, 5‐fold, and 3‐fold cross‐validations. Given the biological complexity and variability of the human brain, the sample size of 137 participants may not be sufficient to capture the patterns of the brain networks of HC and MCI and larger scale training may still be needed before the practical application of the model.
Another possible issue we are facing is an unbalanced dataset. First, we have more HC (84) than MCI (53). As a result, the model is better trained for HC. We can see that in all the cross‐validation tests, the model achieves a higher accuracy in the discrimination of HC than that of the MCI. We expect to fix this problem as our MCI sample size increases. On the other hand, we would also like to point out that the ratio of the number of HC and MCI here—84:53—closely reflects the natural distribution of HC and MCI in our clinically observed datasets, in which we always have more HC than MCI. Therefore, we will explore both balanced and unbalanced training for the model and see which one would deliver a more reliable result.
Second, as in many community studies, we were not able to recruit enough male participants; therefore, the unbalanced sex distribution (123 females, 14 males) does not really allow us to address whether there are sex differences in the analysis. However, we compared the leave-one-out (LOO) discrimination accuracy of the model on both females and males, as shown in Figure 4. The discrimination model does show comparable sensitivity in females and males.
In terms of proactive prediction of MCI and future work, out of the 26 participants, three participants with high soft scores (HC, 0.6486; HC, 0.4915; HC, 0.7244) progressed to MCI in the 12 to 18 months after the EEG test, and the corresponding predictions are marked as "incorrect trend." It is possible that these incorrect trends reflect the limitations of the features used for prediction and indicate that we may need to adopt more diversified features, such as biomarkers that reflect the localized neural activity in each individual region, and perhaps also investigate task-based EEG features. In addition, MCI progression trend prediction based on the proposed soft discrimination model still needs to be verified through sizable longitudinal data collection and analysis. We will further examine whether lower-scoring HC participants are more likely to progress to MCI, and whether lower-scoring MCI participants are more likely to develop AD. We also plan to extend the model to a 3-class classification of HC, MCI, and AD, and to continue investigating the application of our model to personalized assessment 69 by following the longitudinal data of individual participants.
CONFLICT OF INTEREST STATEMENT
The authors report no conflicts of interest. Author disclosures are available in the supporting information.
CONSENT STATEMENT
All participants were consented and signed a written consent document.
Supporting information
ACKNOWLEDGMENTS
The authors would like to thank Dr. Rok Požar for preparing the ROI distribution figure (i.e., Figure 1) for the paper. We also appreciate Alina Brighty Renli for her help in editing the paper. This study was supported in part by the National Science Foundation (NSF) under award 2032709; the National Institutes of Health (NIH) under awards R21‐AG046637, R01‐AG054484, P30AG024824, P30AG053760, and P30AG072931; the Alzheimer's Association under award HAT‐07‐60437; and the Slovenian Research Agency under award P3‐0366/2451.
Deng J, Sun B, Kavcic V, Liu M, Giordani B, Li T. Novel methodology for detection and prediction of mild cognitive impairment using resting‐state EEG. Alzheimer's Dement. 2024;20:145–158. 10.1002/alz.13411
REFERENCES
- 1. Qin Q, Qu J, Yin Y, et al. Unsupervised machine learning model to predict cognitive impairment in subcortical ischemic vascular disease. Alzheimers Dement. 2023;1-12. Published online Feb 14, 2023. doi: 10.1002/alz.12971
- 2. Wang WZ, Tang Z, Zhu Y, et al. AD risk score for the early phases of disease based on unsupervised machine learning. Alzheimers Dement. 2020;16(11):1524-1533. doi: 10.1002/alz.12140
- 3. Knopman DS, Amieva H, Petersen RC, et al. Alzheimer disease. Nat Rev Dis Primers. 2021;7(1):33. doi: 10.1038/s41572-021-00269-y. Published 2021 May 13.
- 4. Qiu S, Miller MI, Joshi PS, et al. Multimodal deep learning for Alzheimer's disease dementia assessment. Nat Commun. 2022;13(1):3404. doi: 10.1038/s41467-022-31037-5. Published 2022 Jun 20.
- 5. Huang Y, Sun X, Jiang H, et al. A machine learning approach to brain epigenetic analysis reveals kinases associated with Alzheimer's disease. Nat Commun. 2021;12(1):4472. doi: 10.1038/s41467-021-24710-8. Published 2021 Jul 22.
- 6. Yang Z, Nasrallah IM, Shou H, et al. A deep learning framework identifies dimensional representations of Alzheimer's disease from brain structure. Nat Commun. 2021;12(1):7065. doi: 10.1038/s41467-021-26703-z. Published 2021 Dec 3.
- 7. Rodriguez S, Hug C, Todorov P, et al. Machine learning identifies candidates for drug repurposing in Alzheimer's disease. Nat Commun. 2021;12(1):1033. doi: 10.1038/s41467-021-21330-0. Published 2021 Feb 15.
- 8. Steenland K, Goldstein FC, Levey A, Wharton W. A meta-analysis of Alzheimer's disease incidence and prevalence comparing African-Americans and Caucasians. J Alzheimers Dis. 2016;50(1):71-76. doi: 10.3233/JAD-150778
- 9. Kunkle BW, Schmidt M, Klein HU, et al. Novel Alzheimer disease risk loci and pathways in African American individuals using the African genome resources panel: a meta-analysis. JAMA Neurol. 2021;78(1):102-113. doi: 10.1001/jamaneurol.2020.3536 [published correction appears in JAMA Neurol. 2021;78(5):620]
- 10. Babiloni C, Arakaki X, Azami H, et al. Measures of resting state EEG rhythms for clinical trials in Alzheimer's disease: recommendations of an expert panel. Alzheimers Dement. 2021;17(9):1528-1553. doi: 10.1002/alz.12311
- 11. Cecchetti G, Agosta F, Basaia S, et al. Resting-state electroencephalographic biomarkers of Alzheimer's disease. Neuroimage Clin. 2021;31:102711. doi: 10.1016/j.nicl.2021.102711
- 12. Ranasinghe KG, Kudo K, Hinkley L, et al. Neuronal synchrony abnormalities associated with subclinical epileptiform activity in early-onset Alzheimer's disease. Brain. 2022;145(2):744-753. doi: 10.1093/brain/awab442
- 13. Meghdadi AH, Stevanović Karić M, McConnell M, et al. Resting state EEG biomarkers of cognitive decline associated with Alzheimer's disease and mild cognitive impairment. PLoS One. 2021;16(2):e0244180. doi: 10.1371/journal.pone.0244180. Published 2021 Feb 5.
- 14. Jiao B, Li R, Zhou H, et al. Neural biomarker diagnosis and prediction to mild cognitive impairment and Alzheimer's disease using EEG technology. Alz Res Therapy. 2023;15:32. doi: 10.1186/s13195-023-01181-1
- 15. Kavcic V, Daugherty AM, Giordani B. Post-task modulation of resting state EEG differentiates MCI patients from controls. Alzheimers Dement (Amst). 2021;13(1):e12153. doi: 10.1002/dad2.12153. Published 2021 Feb 20.
- 16. Rossini PM, Miraglia F, Vecchio F. Early dementia diagnosis, MCI-to-dementia risk prediction, and the role of machine learning methods for feature extraction from integrated biomarkers, in particular for EEG signal analysis. Alzheimers Dement. 2022;18(12):2699-2706. doi: 10.1002/alz.12645
- 17. Blinowska KJ, Rakowski F, Kaminski M, et al. Functional and effective brain connectivity for discrimination between Alzheimer's patients and healthy individuals: a study on resting state EEG rhythms. Clin Neurophysiol. 2017;128(4):667-680. doi: 10.1016/j.clinph.2016.10.002
- 18. McBride JC, Zhao X, Munro NB, et al. Spectral and complexity analysis of scalp EEG characteristics for mild cognitive impairment and early Alzheimer's disease. Comput Methods Programs Biomed. 2014;114(2):153-163. doi: 10.1016/j.cmpb.2014.01.019
- 19. Perez-Valero E, Lopez-Gordo MÁ, Gutiérrez CM, Carrera-Muñoz I, Vílchez-Carrillo RM. A self-driven approach for multi-class discrimination in Alzheimer's disease based on wearable EEG. Comput Methods Programs Biomed. 2022;220:106841. doi: 10.1016/j.cmpb.2022.106841
- 20. Siuly S, Alcin OF, Kabir E, et al. A new framework for automatic detection of patients with mild cognitive impairment using resting-state EEG signals. IEEE Trans Neural Syst Rehabil Eng. 2020;28(9):1966-1976. doi: 10.1109/TNSRE.2020.3013429
- 21. Yin J, Cao J, Siuly S, Wang H. An integrated MCI detection framework based on spectral-temporal analysis. Int J Autom Comput. 2019;16(6):786-799. doi: 10.1007/s11633-019-1197-4
- 22. Movahed RA, Rezaeian M. Automatic diagnosis of mild cognitive impairment based on spectral, functional connectivity, and nonlinear EEG-based features. Comput Math Methods Med. 2022;2022:2014001. doi: 10.1155/2022/2014001. Published 2022 Aug 11.
- 23. Araújo T, Teixeira JP, Rodrigues PM. Smart-data-driven system for Alzheimer disease detection through electroencephalographic signals. Bioengineering (Basel). 2022;9(4):141. doi: 10.3390/bioengineering9040141. Published 2022 Mar 28.
- 24. Kashefpoor M, Rabbani H, Barekatain M. Automatic diagnosis of mild cognitive impairment using electroencephalogram spectral features. J Med Signals Sens. 2016;6(1):25-32.
- 25. Jiang J, Yan Z, Sheng C, et al. A novel detection tool for mild cognitive impairment patients based on eye movement and electroencephalogram. J Alzheimers Dis. 2019;72(2):389-399. doi: 10.3233/JAD-190628
- 26. Wang X, Michaelis ML, Michaelis EK. Functional genomics of brain aging and Alzheimer's disease: focus on selective neuronal vulnerability. Curr Genomics. 2010;11(8):618-633. doi: 10.2174/138920210793360943
- 27. Cui X, Xiang J, Guo H, et al. Classification of Alzheimer's disease, mild cognitive impairment, and normal controls with subnetwork selection and graph kernel principal component analysis based on minimum spanning tree brain functional network. Front Comput Neurosci. 2018;12:31. doi: 10.3389/fncom.2018.00031. Published 2018 May 9.
- 28. Zhang D, Wang Y, Zhou L, Yuan H, Shen D; Alzheimer's Disease Neuroimaging Initiative. Multimodal classification of Alzheimer's disease and mild cognitive impairment. Neuroimage. 2011;55(3):856-867. doi: 10.1016/j.neuroimage.2011.01.008
- 29. Liu F, Zhou L, Shen C, Yin J. Multiple kernel learning in the primal for multimodal Alzheimer's disease classification. IEEE J Biomed Health Inform. 2014;18(3):984-990. doi: 10.1109/JBHI.2013.2285378
- 30. Xu L, Wu X, Li R, et al. Prediction of progressive mild cognitive impairment by multi-modal neuroimaging biomarkers. J Alzheimers Dis. 2016;51(4):1045-1056. doi: 10.3233/JAD-151010
- 31. Li Q, Wu X, Xu L, Chen K, Yao L, Li R. Multi-modal discriminative dictionary learning for Alzheimer's disease and mild cognitive impairment. Comput Methods Programs Biomed. 2017;150:1-8. doi: 10.1016/j.cmpb.2017.07.003
- 32. Fu H, Hardy J, Duff KE. Selective vulnerability in neurodegenerative diseases. Nat Neurosci. 2018;21(10):1350-1358. doi: 10.1038/s41593-018-0221-2
- 33. Hata M, Kazui H, Tanaka T, et al. Functional connectivity assessed by resting state EEG correlates with cognitive decline of Alzheimer's disease - an eLORETA study. Clin Neurophysiol. 2016;127(2):1269-1278. doi: 10.1016/j.clinph.2015.10.030
- 34. Smailovic U, Koenig T, Savitcheva I, et al. Regional disconnection in Alzheimer dementia and amyloid-positive mild cognitive impairment: association between EEG functional connectivity and brain glucose metabolism. Brain Connect. 2020;10(10):555-565. doi: 10.1089/brain.2020.0785
- 35. Lin Q, Rosenberg MD, Yoo K, Hsu TW, O'Connell TP, Chun MM. Resting-state functional connectivity predicts cognitive impairment related to Alzheimer's disease. Front Aging Neurosci. 2018;10:94. doi: 10.3389/fnagi.2018.00094. Published 2018 Apr 13.
- 36. Zhu DC, Majumdar S, Korolev IO, Berger KL, Bozoki AC. Alzheimer's disease and amnestic mild cognitive impairment weaken connections within the default-mode network: a multi-modal imaging study. J Alzheimers Dis. 2013;34(4):969-984. doi: 10.3233/JAD-121879
- 37. Hutchison RM, Womelsdorf T, Allen EA, et al. Dynamic functional connectivity: promise, issues, and interpretations. Neuroimage. 2013;80:360-378. doi: 10.1016/j.neuroimage.2013.05.079
- 38. Allen EA, Damaraju E, Plis SM, Erhardt EB, Eichele T, Calhoun VD. Tracking whole-brain connectivity dynamics in the resting state. Cereb Cortex. 2014;24(3):663-676. doi: 10.1093/cercor/bhs352
- 39. Schumacher J, Peraza LR, Firbank M, et al. Dynamic functional connectivity changes in dementia with Lewy bodies and Alzheimer's disease. Neuroimage Clin. 2019;22:101812. doi: 10.1016/j.nicl.2019.101812
- 40. Fiorenzato E, Strafella AP, Kim J, et al. Dynamic functional connectivity changes associated with dementia in Parkinson's disease. Brain. 2019;142(9):2860-2872. doi: 10.1093/brain/awz192
- 41. Preti MG, Bolton TA, Van De Ville D. The dynamic functional connectome: state-of-the-art and perspectives. Neuroimage. 2017;160:41-54. doi: 10.1016/j.neuroimage.2016.12.061
- 42. Liang Y, Zheng Y, Renli B, Zhu DC, Yu F, Li T. Dynamic functional connectivity fading analysis and classification of Alzheimer's disease, mild cognitive impairment and normal control subjects based on resting-state fMRI data. OBM Neurobiol. 2020;4(2):1-20. doi: 10.21926/obm.neurobiol.2002059
- 43. Jie B, Liu M, Shen D. Integration of temporal and spatial properties of dynamic connectivity networks for automatic diagnosis of brain disease. Med Image Anal. 2018;47:81-94. doi: 10.1016/j.media.2018.03.013
- 44. O'Neill GC, Tewarie P, Vidaurre D, Liuzzi L, Woolrich MW, Brookes MJ. Dynamics of large-scale electrophysiological networks: a technical review. Neuroimage. 2018;180(Pt B):559-576. doi: 10.1016/j.neuroimage.2017.10.003
- 45. Zhang J, Cheng W, Liu Z, et al. Neural, electrophysiological and anatomical basis of brain-network variability and its characteristic changes in mental disorders. Brain. 2016;139(Pt 8):2307-2321. doi: 10.1093/brain/aww143 [published correction appears in Brain. 2018;141(8):e64]
- 46. Jones DT, Vemuri P, Murphy MC, et al. Non-stationarity in the "resting brain's" modular architecture. PLoS One. 2012;7(6):e39731. doi: 10.1371/journal.pone.0039731
- 47. Allen EA, Damaraju E, Eichele T, Wu L, Calhoun VD. EEG signatures of dynamic functional network connectivity states. Brain Topogr. 2018;31(1):101-116. doi: 10.1007/s10548-017-0546-2
- 48. Chadiha LA, Washington OG, Lichtenberg PA, Green CR, Daniels KL, Jackson JS. Building a registry of research volunteers among older urban African Americans: recruitment processes and outcomes from a community-based partnership. Gerontologist. 2011;51(Suppl 1):S106-S115. doi: 10.1093/geront/gnr034
- 49. Kappenman ES, Luck SJ. The effects of electrode impedance on data quality and statistical significance in ERP recordings. Psychophysiology. 2010;47(5):888‐904. doi: 10.1111/j.1469-8986.2010.01009.x [DOI] [PMC free article] [PubMed] [Google Scholar]
- 50. Pernet C, Garrido MI, Gramfort A, et al. Issues and recommendations from the OHBM COBIDAS MEEG committee for reproducible EEG and MEG research. Nat Neurosci. 2020;23(12):1473‐1483. doi: 10.1038/s41593-020-00709-0 [DOI] [PubMed] [Google Scholar]
- 51. Požar R, Giordani B, Kavcic V. Effective differentiation of mild cognitive impairment by functional brain graph analysis and computerized testing. PLoS One. 2020;15(3):e0230099. doi: 10.1371/journal.pone.0230099. Published 2020 Mar 16. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 52. Kayser J, Tenke CE. On the benefits of using surface Laplacian (current source density) methodology in electrophysiology. Int J Psychophysiol. 2015;97(3):171‐173. doi: 10.1016/j.ijpsycho.2015.06.001 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 53. Kamarajan C, Pandey AK, Chorlian DB, Porjesz B. The use of current source density as electrophysiological correlates in neuropsychiatric disorders: a review of human studies. Int J Psychophysiol. 2015;97(3):310‐322. doi: 10.1016/j.ijpsycho.2014.10.013 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 54. Kayser J, Tenke CE. Issues and considerations for using the scalp surface Laplacian in EEG/ERP research: a tutorial review. Int J Psychophysiol. 2015;97(3):189‐209. doi: 10.1016/j.ijpsycho.2015.04.012 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 55. Kayser J. Current source density (CSD) interpolation using spherical splines ‐ CSD Toolbox ‐ current source density (CSD) and surface potential (SP) interpolation using spherical splines. Accessed May 17, 2023. http://psychophysiology.cpmc.columbia.edu/Software/CSDtoolbox
- 56. Meyer Y, Salinger DH. Wavelets and Operators. Cambridge University Press; 2004. [Google Scholar]
- 57. The MathWorks I . Wavelet toolbox. Wavelet Toolbox Documentation. Accessed May 17, 2023. https://www.mathworks.com/help/wavelet/
- 58. Wang Z, Zheng Y, Zhu DC, Bozoki AC, Li T. Classification of Alzheimer's disease, mild cognitive impairment and normal control subjects using resting‐state fMRI based network connectivity analysis. IEEE J Transl Eng Health Med. 2018;6:1801009. doi: 10.1109/JTEHM.2018.2874887. Published 2018 Oct 15. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 59. Jung Y, Hu J. A K‐fold averaging cross‐validation procedure. J Nonparametr Stat. 2015;27(2):167‐179. doi: 10.1080/10485252.2015.1010532 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 60. Spetsieris PG, Dhawan V, Eidelberg D. Three‐fold cross‐validation of parkinsonian brain patterns. Annu Int Conf IEEE Eng Med Biol Soc. 2010;2010:2906‐2909. doi: 10.1109/IEMBS.2010.5626327 [DOI] [PubMed] [Google Scholar]
- 61. Fassina L, Faragli A, Lo Muzio FP, et al. A random shuffle method to expand a narrow dataset and overcome the associated challenges in a clinical study: a heart failure cohort example. Front Cardiovasc Med. 2020;7:599923. doi: 10.3389/fcvm.2020.599923. Published 2020 Nov 20. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 62. Chen Y, Qian X, Zhang Y, et al. Prediction models for conversion from mild cognitive impairment to Alzheimer's disease: a systematic review and meta‐analysis. Front Aging Neurosci. 2022;14:840386. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 63. Huang K, Lin Y, Yang L, et al. A multipredictor model to predict the conversion of mild cognitive impairment to Alzheimer's disease by using a predictive nomogram. Neuropsychopharmacol. 2020;45:358‐366. doi: 10.1038/s41386-019-0551-0 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 64. Inglese M, Patel N, Linton‐Reid K, et al. A predictive model using the mesoscopic architecture of the living brain to detect Alzheimer's disease. Commun Med. 2022;2:70. doi: 10.1038/s43856-022-00133-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 65. Grueso S, Viejo‐Sobera R. Machine learning methods for predicting progression from mild cognitive impairment to Alzheimer's disease dementia: a systematic review. Alzheimers Res Ther. 2021;13:162. doi: 10.1186/s13195-021-00900-w [DOI] [PMC free article] [PubMed] [Google Scholar]
- 66. Er F, Goularas D. Predicting the prognosis of MCI patients using longitudinal MRI data. IEEE/ACM Trans Comput Biol Bioinform. 2021;18(3):1164‐1173. doi: 10.1109/TCBB.2020.3017872 [DOI] [PubMed] [Google Scholar]
- 67. Jiao B, Li R, Zhou H, et al. Neural biomarker diagnosis and prediction to mild cognitive impairment and Alzheimer's disease using EEG technology. Alzheimers Res Ther. 2023;15(1):32. doi: 10.1186/s13195-023-01181-1 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 68. Hussin NM, Shahar S, Yahya HM, et al. Incidence and predictors of mild cognitive impairment (MCI) within a multi‐ethnic Asian populace: a community‐based longitudinal study. BMC Public Health. 2019;19:1159. doi: 10.1186/s12889-019-7508-4 [DOI] [PMC free article] [PubMed] [Google Scholar]
- 69. Svob Strac D, Konjevod M, Sagud M, et al. Personalizing the care and treatment of Alzheimer's disease: an overview. Pharmgenomics Pers Med. 2021;14:631‐653. doi: 10.2147/PGPM.S284615. Published 2021 May 28. [DOI] [PMC free article] [PubMed] [Google Scholar]
Associated Data
Supplementary Materials
Supporting information
