Heliyon. 2024 Feb 15;10(4):e26365. doi: 10.1016/j.heliyon.2024.e26365

A tensor decomposition scheme for EEG-based diagnosis of mild cognitive impairment

Alireza Faghfouri a, Vahid Shalchyan a,, Hamza Ghazanfar Toor b, Imran Amjad b,c, Imran Khan Niazi c,d,e
PMCID: PMC10901001  PMID: 38420472

Abstract

Mild Cognitive Impairment (MCI) is the earliest stage of Alzheimer's disease, and early detection is crucial for the patient and those around them. It is difficult to recognize because this mild stage has no clear clinical signs, and its symptoms lie between normal aging and severe dementia. Here, we propose a tensor decomposition-based scheme for automatically diagnosing MCI using Electroencephalogram (EEG) signals. A new projection is proposed, which preserves the spatial information of the electrodes to construct a data tensor. Then, using parallel factor analysis (PARAFAC) tensor decomposition, the features are extracted, and a support vector machine (SVM) is used to discriminate MCI from normal subjects. The proposed scheme was tested on two different datasets. The results showed that the tensor-based method outperformed conventional methods in diagnosing MCI, with average classification accuracies of 93.96% and 78.65% for the first and second datasets, respectively. Therefore, it seems that maintaining the spatial topology of the signals plays a vital role in the processing of EEG signals.

Keywords: Mild cognitive impairment (MCI), Tensor decomposition, Electroencephalogram (EEG), Alzheimer's disease, Parallel factor analysis (PARAFAC)

1. Introduction

Alzheimer's disease involves the weakening or destruction of neurons in different parts of the brain, progressing from mild to acute stages. Because the disease is progressive, mild cognitive symptoms appear in its early stages; in many cases, these symptoms are perceived by the patient and those around them as a normal consequence of aging, and they are hard to distinguish from other types of dementia [1]. The progression of Alzheimer's disease can be divided into four general stages. The first stage is known as Mild Cognitive Impairment (MCI), which lies between the normal decline of cognitive abilities due to old age and acute dementia [1]. The second and third stages correspond to mild and moderate Alzheimer's disease, which are characterized by increasing cognitive deficits as well as problems and disorders related to daily activities. The last stage is acute Alzheimer's disease, in which patients lose their cognitive skills and memory over time, become unaware of their surroundings, and, with symptoms of mobility impairment, are no longer able to move, swallow food, and the like. Therefore, people at this stage need a caregiver for their essential activities [1,2]. There is currently no definitive cure for Alzheimer's disease, and the only way to deal with it is to diagnose it early and prevent it from progressing to more acute stages. This is not an easy task because the symptoms of the disease, especially in the early stages, are hard to distinguish from those of other diseases and from normal aging. The prevalence of MCI in the elderly population (50–95 years) has been reported to be between 5.8% and 18.5% [3]. Moreover, the rate at which patients with MCI progress to acute dementia is about 50%, which is very high. Therefore, MCI, as a transitional stage between normal aging and dementia, is considered one of the critical targets for treating dementia [[4], [5], [6], [7]]. Diagnosing MCI is one of the main challenges in preventing the progression of the disease because this disorder has no clear symptoms, and individuals' abilities in daily activities are not affected [8].

Clinical and neurophysiological analyses of memory impairment have led to the discovery of Alzheimer's subtypes, disease staging, and prognosis. Despite new criteria for early disease prediction, including biomarkers derived from cerebrospinal fluid or hippocampal volume analysis, neurophysiological tests are still the mainstay of prognosis and of separating disease stages [9]. However, these tests are very time-consuming, depend on the individual, and require skilled personnel [10]. On the other hand, various techniques, including the Electroencephalogram (EEG), Magnetoencephalogram (MEG), and functional Magnetic Resonance Imaging (fMRI), are used to study the mechanisms of mild cognitive impairment. Due to the slow and silent changes of MCI, patients must be monitored over long periods, so a system that can diagnose MCI must necessarily be non-invasive, accessible, and inexpensive. Therefore, an EEG-based system is a good option [11]. EEG records the electrical potentials generated by the brain through a set of electrodes placed on the scalp.

Previous EEG-based studies on the diagnosis of MCI have shown that, in people with MCI, the power of the conventional frequency bands changes (e.g., increased gamma-band power and decreased alpha-band power) [[12], [13], [14]], and a slowing of the rhythms occurs [12,[15], [16], [17]]. Also, because connectivity between different parts of the brain is impaired in these patients [10,11,18,19], several studies have reported a reduction in the statistical complexity of EEG signals [7,9,20,21] and disturbances in the synchronous activity of different brain regions in people with mild cognitive impairment compared with healthy older people [1,2,22,23]. In recent years, several studies have been conducted in this field. In 2019, Kashefpoor et al. used a dictionary learning approach to classify MCI and healthy individuals [24]. In 2019, Ieracitano et al. used a time-frequency approach based on the continuous wavelet transform to distinguish Alzheimer's patients, people with mild cognitive impairment, and healthy individuals [25]. Furthermore, in 2020, Duan et al. used functional connectivity features of the brain topology to diagnose patients with MCI in their deep learning model [8]. In 2012, Latchoumane et al. used EEG spectral information and a tensor decomposition approach to analyze Alzheimer's disease; however, in their work, all EEG channels were placed together in a one-dimensional vector, and thus the spatial information of the channels was not preserved [26].

Since a dense array of electrodes is often used to record EEG signals, their spatial arrangement can contain helpful information for distinguishing between patients with MCI and healthy individuals. Additionally, according to the results of previous studies, spectral features are the most popular for MCI detection. This study combines spatial and spectral features and considers their interactions using a tensor decomposition approach. Tensor decompositions originated with Hitchcock in 1927 [27], and the idea of a multiway model is attributed to Cattell in 1944 [28]. These concepts received scant attention until the work of Tucker in the 1960s [29], all of which appeared in the psychometrics literature. Appellof and Davidson [30] are generally credited as being the first to use tensor decomposition in chemometrics, and tensors have since become extremely popular in that field. In the last ten years, interest in tensor decompositions has expanded to other fields, including signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, graph analysis, neuroscience, and more [31].

In this work, by using a proper projection that encodes the spatial arrangement of the channels together with the spectral information of each electrode, a data tensor can be formed for each person. In addition to maintaining the spatial information of the channels, the advantages of the tensor-based approach can then be exploited. Considering the obtained results and the comparison between conventional methods and the tensor decomposition approach, it seems that tensor concepts, which have achieved good results in other applications, also have high potential for the analysis of EEG time series related to MCI. For this purpose, we used the parallel factor analysis (PARAFAC) tensor decomposition method to analyze the multi-dimensional array and reveal the effects and characteristics of MCI.

The rest of this manuscript is organized as follows: Section 2 introduces the datasets used in this research and the set of methods employed, Section 3 presents the results, Section 4 provides the discussion, and finally, Section 5 summarizes the article.

2. Materials and methods

An overview of the proposed method is shown in Figure (1). The method comprises preprocessing, data tensor construction, feature extraction, and classification stages, each of which is described in the following sections of the manuscript.

Fig. 1.

Fig. 1

Block diagram of the proposed method.

2.1. Participants

To evaluate the effectiveness and robustness of the proposed scheme, we used two datasets that include EEG signals from different individuals, the first of which was recorded according to the 10–20 standard and the second of which was recorded by an Emotiv Epoc device.

2.1.1. Dataset I

In this study, the dataset of Ref. [24] has been used, which was recorded at Noor Hospital of Isfahan Province. The dataset includes 61 people over the age of 55; 29 of them had MCI, and 32 of them were healthy. All participants had primary or higher education. All subjects were scored in a psychiatric interview using Petersen's criteria, and those with a Mini-Mental State Examination (MMSE) score between 21 and 26 were classified as MCI patients, while people who scored above 26 were considered healthy [24]. In addition, to confirm the labeling of individuals, the Neuropsychiatry Unit Cognitive Assessment Tool (NUCOG) was used as the MCI diagnosis confirmation test, with scores between 75 and 86.5 considered indicative of MCI. The EEG signals were recorded in the early morning for each person, with eyes closed, in a quiet room for 30 min. A 19-channel EB-Neuro system whose electrodes were arranged according to the 10–20 standard was used to record the signals. Figure (2) shows how the channels are arranged in the 10–20 standard placement.

Fig. 2.

Fig. 2

Name and arrangement of electrodes on the scalp according to a 10–20 standard.

The sampling rate during signal recording was 256 Hz, and the impedance between each electrode and the scalp was kept below 5 kΩ. For more details of the recording, please see Ref. [24].

2.1.2. Dataset II

The second dataset includes EEG signals from individuals at different levels of cognitive impairment (mild to severe) as well as healthy individuals [32]. Throughout the recording, subjects were asked to sit in a chair and refrain from any hand, finger, or face contraction for the 2 min of recording. After 30 s, upon hearing a distinct sound, the subjects had to open their eyes; therefore, of the 2 min of signal recording, 1 min corresponds to the eyes-closed state and 1 min to the eyes-open state. The MMSE clinical criterion was used to assign individuals to the different stages of the disease. For this purpose, similar to dataset I, individuals with MMSE scores greater than 26 were classified as healthy, and individuals with scores between 21 and 26 were identified as MCI [32]. The dataset contains 21 healthy people with an average age of 61 years and 22 people with MCI with an average age of 64 years. This dataset had not previously been used for classification of healthy and MCI individuals; it had been used for other applications. Since some of the signals were much shorter than 2 min, two persons were removed from the healthy group and one person was excluded from the MCI group. Therefore, 20 healthy people and 20 people with mild cognitive impairment remain in the subsequent stages. In this dataset, a 14-channel Emotiv Epoc was used to record the EEG signals, and the sampling frequency was 128 Hz. Figure (3) shows how the channels are arranged on the Emotiv Epoc device.

Fig. 3.

Fig. 3

Emotiv electrodes. The Emotiv Epoc headset is composed of 14 different electrodes.

2.2. Preprocessing

To prepare the data for Dataset I, the EEG data underwent a two-step preprocessing procedure. First, the data were normalized to ensure consistent scaling across all channels, so that the mean of every channel is zero and its variance is one. Second, a 4th-order Butterworth band-stop filter was applied to remove 50 Hz power-line interference. During the data processing stages, the data are divided into short time patches with different lengths, including 0.5, 1, 1.5, 5, 10, 15, and 30 min, in order to identify the most appropriate patch length for diagnosis. To divide the data into training and test groups for cross-validation, three distinct groups are randomly formed; each time, 41 subjects (20 MCI and 21 healthy) are randomly selected for training and 20 subjects (9 MCI and 11 healthy) for testing. In each group, the training and testing process is performed separately, and the diagnostic label of a test subject is determined by voting on the labels of that subject's time patches. To increase the stability of the results, this process is performed on three random groups, and the average of the results is reported as the test result.

For Dataset II, whose sampling rate is 128 Hz, the EEG data underwent the same two-step preprocessing procedure: the data were normalized to achieve consistent scaling across all channels, and a 4th-order Butterworth band-stop filter was used to eliminate power-line interference. Subsequently, the signals are divided into short time patches with different lengths, including 2, 4, 6, 10, 12, 20, 24, 40, 60, and 120 s. The difference in the time divisions of the two datasets stems from the recording duration, which was 2 min in Dataset II and 30 min in Dataset I. Then, similarly to Dataset I, to divide Dataset II into training and test groups, three different groups are randomly formed, and each time 67% of the subjects are randomly selected for training and 33% for testing; that is, 27 subjects (13 MCI and 14 healthy) are used for training and 13 subjects (7 MCI and 6 healthy) for testing.
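As a concrete illustration of these two preprocessing steps and the segmentation into time patches, the following Python sketch uses SciPy (this is our reconstruction; the exact band-stop edges and the function names are assumptions, not taken from the paper):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess(eeg, fs=256, stop_band=(48.0, 52.0), patch_sec=90):
    """eeg: array of shape (n_channels, n_samples)."""
    # Step 1: per-channel normalization to zero mean and unit variance
    eeg = (eeg - eeg.mean(axis=1, keepdims=True)) / eeg.std(axis=1, keepdims=True)
    # Step 2: 4th-order Butterworth band-stop around the 50 Hz power line
    b, a = butter(4, stop_band, btype='bandstop', fs=fs)
    eeg = filtfilt(b, a, eeg, axis=1)
    # Segmentation into non-overlapping time patches (e.g., 1.5 min = 90 s)
    n = int(patch_sec * fs)
    patches = [eeg[:, i:i + n] for i in range(0, eeg.shape[1] - n + 1, n)]
    return np.stack(patches)  # shape: (n_patches, n_channels, patch_samples)
```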

2.3. Construction of Data tensor

For this purpose, the arrangement of the electrodes on the head is first converted to Cartesian coordinates using the azimuthal equidistant projection (also called polar projection) [33]. Then, by placing, for example, the spectral values of each electrode in the corresponding cell of the resulting matrix, a three-dimensional array (tensor) is formed. Tensors are multi-dimensional arrays: a first-order tensor is a vector, a second-order tensor is a matrix, and tensors of order three and higher are called higher-order tensors; a third-order tensor has three modes. Figure (4 A-C) demonstrates the transformation of the electrode coordinates and their polar arrangement on the head into a matrix.

Fig. 4.

Fig. 4

Converting the coordinates and polar arrangement of the electrodes on the head to a matrix. A: Side view showing the position of the signal recording channels in polar coordinates. B: Front view of the electrode locations. C: Estimate of the electrode positions in matrix form, which can be defined in Cartesian coordinates.

EEG signals are usually represented as vectors or matrices by conventional methods such as time-series, spectral, and matrix analysis to facilitate data processing. However, the EEG signal inherently has more dimensions, such as time, frequency, space, and subject. Therefore, there is a growing need to represent the dataset with higher dimensions. According to Figure (4), the data matrix is mapped onto a five-by-five grid in the 2D plane. Unlike the conventional case where the data are stacked one after the other, in the proposed method the data from each channel are placed at the orange dots so that their spatial information is preserved. Therefore, a three-dimensional tensor is formed for each subject, and then, by stacking these three-dimensional tensors across all subjects, a four-dimensional tensor consisting of the spatial locations of the channels (two modes), spectral information, and subjects is formed. In this way, the spatial and spectral information of each person can be used at once, and there is no need to consider them independently. This preserves the electrodes' spatial arrangement and allows individuals' spatial-frequency information to be used simultaneously for classification.
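The following sketch illustrates the idea for dataset I: each 10–20 channel is assigned a cell of a 5×5 grid that approximates its projected position, and that channel's feature vector is written into the cell. The grid assignment below is our own rough approximation of Fig. 4, not the authors' exact mapping.

```python
import numpy as np

# Approximate (row, col) cells for the 19 channels of the 10-20 montage on a 5x5 grid
# (illustrative assumption; the authors' assignment follows the azimuthal projection of Fig. 4).
GRID_POS = {'Fp1': (0, 1), 'Fp2': (0, 3),
            'F7': (1, 0), 'F3': (1, 1), 'Fz': (1, 2), 'F4': (1, 3), 'F8': (1, 4),
            'T3': (2, 0), 'C3': (2, 1), 'Cz': (2, 2), 'C4': (2, 3), 'T4': (2, 4),
            'T5': (3, 0), 'P3': (3, 1), 'Pz': (3, 2), 'P4': (3, 3), 'T6': (3, 4),
            'O1': (4, 1), 'O2': (4, 3)}

def channel_features_to_grid(features, channel_names, grid=(5, 5)):
    """features: (n_channels, n_features) -> spatial tensor of shape (5, 5, n_features)."""
    tensor = np.zeros(grid + (features.shape[1],))
    for name, feat in zip(channel_names, features):
        r, c = GRID_POS[name]
        tensor[r, c, :] = feat
    return tensor

# Stacking one such 5 x 5 x F tensor per subject (or per time patch) yields the
# 4-way data tensor of size 5 x 5 x F x n_subjects that is later decomposed with PARAFAC.
```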

In dataset II, since the number of electrodes is 14, the data tensor is constructed according to the spatial arrangement of the electrodes in the Emotiv Epoc device: the topographic map of the electrode arrangement is first modeled as a six-by-six matrix. As shown in Figure (5 - B), the six-by-six data matrix is mapped to the Cartesian plane, features are located at the green dots, and their spatial information is preserved. Figure (5) shows how a three-dimensional tensor is formed for a specific subject for dataset I (Fig. 5 - A) and dataset II (Fig. 5 - B).

Fig. 5.

Fig. 5

Construction of a data tensor for a specific subject for dataset I (A) and dataset II (B). In the figure on the left, the standard location related to the recording device can be seen. In the middle figure, a polar projection is made to the Cartesian plane, and the desired matrix is formed. In the figure on the right, a three-dimensional tensor is formed for a specific subject by placing each of the matrices in a row.

2.4. Feature extraction and tensor decomposition

To learn the classification model, useful features must be extracted that can properly distinguish between the healthy and MCI groups; feature extraction is therefore one of the most important parts of machine learning problems. In studies related to the diagnosis of MCI, the usefulness of the spectral properties of the signal (increases and decreases in frequency band power), as well as of the reduced irregularity and statistical complexity of the signals caused by brain disorders, has been established. Accordingly, in the following, we use other types of features, in addition to spectral features, to form the tensors.

2.4.1. Spectral features

In many studies related to EEG signal processing, and especially in studies related to MCI, the spectral features of each channel carry useful information and have led to acceptable performance. Previous studies have considered the use of frequency band power in diagnosing MCI and have shown that power increases in low-frequency bands and decreases in high-frequency bands [34]. Classification using power ratios between frequency bands has also been considered [35,36]. In this study, two approaches have been used to implement the spectral properties. The first approach is the traditional use of the power of the conventional EEG frequency bands, namely the delta, theta, alpha, beta, and gamma bands. To do this, we filter the signals using a 4th-order Butterworth band-pass filter, perform a Fast Fourier Transform (FFT) on the filtered signals, and then calculate the absolute power. For a more comprehensive characterization, the 19 spectral features related to these bands presented in Table (1) can be used. Table (1) also includes concepts such as relative powers and the variables appearing in equations R1-R3; these generally pertain to ratios of power across different frequency bands, and for further elaboration, readers can refer to Ref. [37].

Table 1.

19 Spectral features associated with conventional frequency bands [37].

Row Name Description
1 delta Delta band power (0.5–3.5 Hz)
2 r-delta The relative power of the delta band
3 theta Theta band power (3.5–7.5 Hz)
4 r-theta The relative power of the theta band
5 alpha1 Alpha 1 band power (7.5–9.5 Hz)
6 r-alpha1 Relative power of alpha band 1
7 alpha2 Alpha 2 band power (9.5–12.5 Hz)
8 r-alpha2 Relative power of alpha band 2
9 beta1 Beta 1 band power (12.5–17.5 Hz)
10 r-beta1 Relative power of beta band 1
11 beta2 Beta 2 band power (17.5–25 Hz)
12 r-beta2 Relative power of beta band 2
13 gamma Gamma band power (25–40 Hz)
14 r-gamma The relative power of the gamma band
15 total Total EEG signal band power (0.5–40 Hz)
16 R1 R1 = θ / (α1 + α2 + β1)
17 R2 R2 = (δ + θ) / (α1 + α2 + β1 + β2)
18 R3 R3 = θ / (α1 + α2)
19 PAF Peak Alpha Frequency

Another approach used in this study is inspired by filter banks: the signal spectrum is divided into, for example, 40, 60, or 80 equal frequency bands, and the power of each of these micro-bands is used as a feature.
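As a sketch of this filter-bank idea (our illustration only; the paper extracts band power by Butterworth band-pass filtering followed by FFT, while the snippet below uses a simpler FFT-binning substitute, and the band edges are assumptions):

```python
import numpy as np

def microband_powers(x, fs, fmin=0.5, fmax=40.0, n_bands=60):
    """Absolute power of one channel in n_bands equal-width sub-bands of [fmin, fmax]."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2 / len(x)        # FFT power spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    edges = np.linspace(fmin, fmax, n_bands + 1)            # equal-width band edges
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].sum()
                     for lo, hi in zip(edges[:-1], edges[1:])])
```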

2.4.2. Entropy features and statistical complexity

Researchers have also favored entropy, as a measure from information theory, in diagnosing patients with MCI, especially from EEG and MEG signals. Many studies have reported a reduction in the entropy and complexity of information in patients. Because in patients with Alzheimer's disease, as well as its milder stages including MCI, the connections between different parts of the brain are damaged and sometimes even lost, the degree of information disorder in the individual EEG channels changes relative to normal; accordingly, the information complexity reported in patients is lower than in healthy individuals.

2.4.2.1. Shannon's entropy

Entropy in information theory is a numerical measure of the amount of information or randomness of a random variable. More precisely, the entropy of a random variable is the average (expected) value of the information obtained from observing it; in other words, the higher the entropy of a random variable, the more ambiguity we have about it, and therefore the more information is obtained by observing its definite outcome. Furthermore, the entropy of an information source is the lower bound on the best lossless compression rate for that source.

The information obtained from observing an event is defined as the negative logarithm of its probability. Naturally, any function suitable for measuring the amount of information in an observation is expected to satisfy certain properties: the information from an observation is non-negative, the information obtained from observing a certain event (i.e., one with probability one) is zero, and, most importantly, the information obtained from two independent observations equals the sum of the information obtained from each of them. It can be shown that the only function satisfying these three properties is the negative logarithm of the probability. The amount of information computed with different logarithm bases differs only by a constant coefficient; the most common base in information computation is two, which measures information in bits (Shannons). In science and engineering more generally, entropy is a measure of the degree of ambiguity. In 1948, Claude Shannon introduced this entropy measure in his seminal paper and thereby founded information theory [38].

Assume that X is a random variable taking the values X1, X2, ..., XM with probabilities P1, P2, ..., PM, respectively. In this case, entropy is defined as follows:

$H(P_1, P_2, \ldots, P_M) = -\sum_{i=1}^{M} P_i \log P_i.$ (1)

Equation (1) expresses the entropy in this context.

If the base of the logarithm is 2, the unit of entropy is the bit. In general, the highest entropy of a random variable occurs for a uniform distribution, and the lowest entropy occurs for a distribution with a certain (deterministic) outcome.
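A minimal sketch of Equation (1) applied to an EEG segment follows; estimating the probabilities from an amplitude histogram is our assumption, since the paper does not state how the signal was discretized:

```python
import numpy as np

def shannon_entropy(x, n_bins=64):
    """Shannon entropy (in bits) of the amplitude distribution of a signal segment."""
    counts, _ = np.histogram(x, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]                      # 0 * log(0) is taken as 0
    return -np.sum(p * np.log2(p))    # Eq. (1) with a base-2 logarithm
```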

2.4.2.2. Multi-scale entropy

Multi-scale entropy is generally a measure of the complexity and order of a signal or time series, as well as a measure of the rate at which information is produced. Over the years, different methods have been developed to calculate entropy. For example, a method for estimating the Kolmogorov-Sinai entropy from a signal was introduced in 1983 by Grassberger and Procaccia [39], and a modified version was introduced in 1985 by Eckmann and Ruelle [40]. Continuing this line of research, in 1991 Pincus developed a measure called approximate entropy to detect specific patterns in time series [41]. The high bias of this method, as well as its strong dependence on the signal length, were its main drawbacks, a major challenge especially for biomedical signals, which are usually short and noisy. Therefore, in 2000, a method called sample entropy was developed to eliminate the defects of the previous version [42]. In addition to being less dependent on the signal length, this method also requires fewer mathematical calculations and has therefore attracted researchers' attention. In general, sample entropy is defined as the negative of the natural logarithm of the conditional probability that two sequences which match for m points, within a tolerance r, also match at the next point, with self-comparisons excluded from the statistics [42]. In other words, this type of entropy is the logarithmic difference between the probability of two vectors matching within tolerance r in dimension m and the probability of them matching with the same tolerance in dimension m + 1; thus, a smaller value indicates more self-similarity in the time series. The Multi-Scale Entropy (MSE) criterion builds on the same sample entropy and is a method of complexity calculation that focuses on the information expressed by signals at different time scales. MSE analysis is based on calculating the sample entropy of several coarse-grained sequences that reflect the system's dynamics at different time scales [43]. To construct the coarse-grained time series for a given scale factor, the main time series is divided into non-overlapping windows of a specified length, and the average of the values inside each window is calculated, as in Equation (2):

$y_\varepsilon(j) = \frac{1}{\varepsilon} \sum_{i=(j-1)\varepsilon + 1}^{j\varepsilon} x(i), \qquad 1 \le j \le \frac{N}{\varepsilon}.$ (2)

After the coarse-grained time series is obtained, its sample entropy criterion is calculated.
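The sketch below shows the coarse-graining of Equation (2) followed by a simple sample-entropy estimate (Chebyshev distance, self-matches excluded). It is a simplified illustration with conventional parameter choices (m = 2, r = 0.2 times the standard deviation), not the authors' exact implementation:

```python
import numpy as np

def coarse_grain(x, eps):
    """Eq. (2): average consecutive non-overlapping windows of length eps."""
    n = len(x) // eps
    return x[:n * eps].reshape(n, eps).mean(axis=1)

def sample_entropy(x, m=2, r=0.2):
    """SampEn(m, r) = -ln(A / B), with r given as a fraction of the standard deviation."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def match_count(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        return (np.sum(dist <= tol) - len(templates)) / 2   # unordered pairs, self-matches excluded

    B, A = match_count(m), match_count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def multiscale_entropy(x, scales=range(1, 11), m=2, r=0.2):
    """Sample entropy of the coarse-grained series at each scale factor."""
    return [sample_entropy(coarse_grain(x, s), m, r) for s in scales]
```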

2.4.2.3. Lempel-Ziv complexity criterion

In 1976, Abraham Lempel and Jacob Ziv designed an algorithm that quantifies the complexity of binary time series. In many studies related to Alzheimer's disease, the Lempel-Ziv (LZ) criterion, which essentially measures the degree of randomness of a signal, has been used to distinguish patients from control subjects [17,20,21,44]. To calculate the LZ complexity criterion, the signal {x(n)} must first be converted to a finite (usually binary) sequence. Using a threshold value Td, the original signal samples are converted to 0 and 1 according to Equation (3):

$\{s(i)\} = s(1), s(2), \ldots, s(N); \qquad s(i) = \begin{cases} 0, & x(i) < T_d \\ 1, & x(i) \ge T_d \end{cases}.$ (3)

The binary sequence is then scanned from left to right, and the complexity counter c(N) is increased by one each time a new subsequence (phrase) is encountered. Furthermore, to eliminate the dependence of the complexity on the signal length, the obtained values must be normalized [44]. In general, the upper bound of c(N) is $b(N) = N / \log_\alpha N$, where $\alpha$ (the base of the logarithm) is the number of distinct symbols, which equals 2 in the binary case. So, we have Equation (4):

$\lim_{N \to \infty} c(N) = b(N) = \frac{N}{\log_\alpha N}.$ (4)

Therefore, the value of c(N) can be normalized by dividing it by b(N). Finally, the formula is as follows:

$C(N) = \frac{c(N)}{b(N)}.$ (5)

Equation (5) represents the LZ complexity criterion, where higher values indicate increased complexity.
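The following sketch binarizes a signal around a threshold, counts the Lempel-Ziv complexity c(N), and normalizes it as in Equations (3)-(5). The counting loop follows the classic Kaspar-Schuster style formulation, and using the median as the threshold Td is our assumption (the paper does not state the threshold):

```python
import numpy as np

def lz76_complexity(s):
    """Lempel-Ziv complexity counter c(N) for a binary string s."""
    n = len(s)
    c, l, i, k, kmax = 1, 1, 0, 1, 1
    while True:
        if s[i + k - 1] != s[l + k - 1]:
            kmax = max(kmax, k)
            i += 1
            if i == l:            # the prefix cannot reproduce the new part: count a new phrase
                c += 1
                l += kmax
                if l + 1 > n:
                    break
                i, k, kmax = 0, 1, 1
            else:
                k = 1
        else:
            k += 1
            if l + k > n:
                c += 1
                break
    return c

def normalized_lz(x, threshold=None):
    """Eqs. (3)-(5): binarize around Td (median by default) and normalize by b(N) = N / log2 N."""
    td = np.median(x) if threshold is None else threshold
    s = ''.join('1' if v >= td else '0' for v in x)
    b = len(s) / np.log2(len(s))
    return lz76_complexity(s) / b
```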

2.4.3. Tensor decomposition

Given that EEG data today, especially with dense multi-channel recordings, often have large dimensions and volumes, another task of the feature extraction stage is to reduce and summarize the data. Multi-way array analysis is a method that can extract the interactions between the different dimensions of a tensor, such as space, spectrum, and subjects, and summarize them into separate components. These components are considered basic components, and the matrix formed by them is called a factor (loading) matrix [31]. The issue of tensor decomposition was raised by Hitchcock [27] in 1927, and seventeen years later, in 1944, Cattell [28] introduced multi-dimensional models. One of the most important tensor decomposition methods is the PARAFAC or Canonical Polyadic Decomposition (CPD) method, which was developed in 1970 by Harshman [45]; in the following, it is referred to as the PARAFAC tensor decomposition method.

2.4.3.1. PARAFAC tensor decomposition

PARAFAC tensor decomposition decomposes an N-dimensional tensor into a sum of rank-1 tensors. For a hypothetical three-dimensional tensor $\underline{X}$, the R-component PARAFAC decomposition is given by Equation (6) [46]:

$\underline{X} = a_1 \circ b_1 \circ c_1 + a_2 \circ b_2 \circ c_2 + \ldots + a_R \circ b_R \circ c_R + \underline{E} \approx a_1 \circ b_1 \circ c_1 + a_2 \circ b_2 \circ c_2 + \ldots + a_R \circ b_R \circ c_R = \underline{X}_1 + \underline{X}_2 + \ldots + \underline{X}_R.$ (6)

In Equation (6), the symbol "$\circ$" denotes the outer product of vectors, and all tensors $\underline{X}_1$ through $\underline{X}_R$ have rank one. The sum of these rank-1 tensors is an estimate of the original tensor. The first components $a_1$, $b_1$, and $c_1$ are associated with one another, and their outer product produces the rank-one tensor $\underline{X}_1$; the second components $a_2$, $b_2$, and $c_2$ are associated with one another, and their outer product generates the rank-one tensor $\underline{X}_2$; and so on. Therefore, the PARAFAC decomposition is a sum of rank-one tensors plus an error tensor. Figure (6) shows how PARAFAC decomposes a three-dimensional array [46].

Fig. 6.

Fig. 6

PARAFAC decomposition of a 3-dimensional array.

In general, for the PARAFAC decomposition of an N-dimensional tensor $\underline{X} \in \mathbb{R}^{I_1 \times I_2 \times \ldots \times I_N}$, we have Equation (7):

$\underline{X} = \sum_{r=1}^{R} u_r^{(1)} \circ u_r^{(2)} \circ \ldots \circ u_r^{(N)} + \underline{E} = \sum_{r=1}^{R} \underline{X}_r + \underline{E} = \hat{\underline{X}} + \underline{E} \approx \hat{\underline{X}}.$ (7)

In the above relation, $\underline{E} \in \mathbb{R}^{I_1 \times I_2 \times \ldots \times I_N}$, $\hat{\underline{X}}$ is an estimate of the tensor $\underline{X}$, and $\underline{X}_r = u_r^{(1)} \circ u_r^{(2)} \circ \ldots \circ u_r^{(N)}$ for $r = 1, 2, \ldots, R$.

Also, after applying the PARAFAC tensor decomposition technique to the N-dimensional data tensor, the component matrices (loading factors) for the n-th dimension are obtained as shown in Equation (8):

$U^{(n)} = [u_1^{(n)}, u_2^{(n)}, \ldots, u_R^{(n)}] \in \mathbb{R}^{I_n \times R}.$ (8)

Equation (7) can also be written in the form of a matrix-tensor product, which is shown in Equation (9) as follows:

$\underline{X} = \underline{I} \times_1 U^{(1)} \times_2 U^{(2)} \times_3 \ldots \times_N U^{(N)} + \underline{E} = \hat{\underline{X}} + \underline{E}.$ (9)

In Equation (9), $\underline{I}$ is the identity tensor, a diagonal tensor whose main-diagonal elements are all equal to one [46], and $\times_1, \times_2, \ldots, \times_N$ denotes the n-mode product. The n-mode (matrix) product of a tensor $\underline{X} \in \mathbb{R}^{I_1 \times I_2 \times \ldots \times I_N}$ with a matrix $U \in \mathbb{R}^{J \times I_n}$ is denoted by $\underline{X} \times_n U$ and is of size $I_1 \times \ldots \times I_{n-1} \times J \times I_{n+1} \times \ldots \times I_N$. Elementwise, we have Equation (10):

$(\underline{X} \times_n U)_{i_1 \ldots i_{n-1}\, j\, i_{n+1} \ldots i_N} = \sum_{i_n = 1}^{I_n} x_{i_1 i_2 \ldots i_N}\, u_{j i_n}.$ (10)

It should be noted that in tensor decomposition, depending on the nature of each dimension, constraints can be applied to some or even all dimensions of the tensor. For example, if the data are non-negative, a non-negativity constraint can be used; this type of constraint has been applied in many cases [47].
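As a sketch of how such a decomposition can be computed in practice, the snippet below uses the open-source tensorly library (our choice of toolbox; the paper does not name one), decomposing a hypothetical 4-way training tensor and optionally imposing non-negativity:

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac, non_negative_parafac

# Hypothetical training tensor: 5 x 5 spatial grid, 60 spectral features, 820 training patches.
# Random values are used here only to make the sketch runnable.
X_train = np.random.rand(5, 5, 60, 820)

# R-component PARAFAC (CPD); returns the weights and one factor matrix per mode
weights, factors = parafac(tl.tensor(X_train), rank=65, n_iter_max=200, init='random')
# factors[0], factors[1]: 5 x R spatial loadings; factors[2]: 60 x R spectral loadings;
# factors[3]: 820 x R loadings of the subject/patch mode, used later as classifier features.

# For non-negative data such as band powers, a non-negativity constraint can be imposed:
# weights, factors = non_negative_parafac(tl.tensor(X_train), rank=65)
```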

2.5. Classification

A two-class SVM has been used to classify patients with MCI and healthy individuals [48]. This method distinguishes between the two classes by constructing a hyperplane in the multi-dimensional feature space. The Radial Basis Function (RBF) kernel was chosen due to its effectiveness in capturing complex, nonlinear relationships within the data; in this study, an RBF kernel with a kernel scale of 4 was used.
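A minimal scikit-learn sketch of this classifier follows; mapping the MATLAB-style "kernel scale" s to gamma = 1/s² is our reading, and the regularization parameter C is an assumption not stated in the paper:

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# RBF-kernel SVM; a kernel scale of 4 roughly corresponds to gamma = 1 / 4**2 = 0.0625
clf = make_pipeline(StandardScaler(), SVC(kernel='rbf', gamma=1 / 16, C=1.0))

# train_feats / test_feats would be the rows of the subject-mode component matrices:
# clf.fit(train_feats, train_labels)
# predicted = clf.predict(test_feats)
```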

3. Results

The main question of this study is whether using the spatial information of the electrodes within a tensor analysis approach improves the classification of patients with mild cognitive impairment versus healthy individuals. It is therefore necessary to run the same processing with the conventional approach of previous research, without the concepts of tensor analysis, and then compare the two. In the following, the suitable frequency segmentation is determined first. Afterward, using a variety of time patches, the finest time patch is determined. Next, after establishing the best classification accuracy for the conventional case, the proposed tensor-based approach is applied: using the selected time patch and frequency bands, a data tensor is formed, and finally, with tensor analysis, the classification accuracy of the proposed method is presented.

3.1. Select the finest frequency segmentation

In order to determine a suitable frequency band segmentation for feature extraction, the conventional method of using frequency band power, without tensor analysis, is applied first, and the classification accuracy for the different feature sets is reported in Figure (7). For this purpose, the signal of each channel is filtered using a 4th-order Butterworth band-pass filter, and then the absolute power of the desired frequency band is extracted. To extract the power of 60 frequency micro-bands, the frequency range of the signal (0.5–40 Hz) is divided into 60 equal parts, and the power is calculated for each part; for 40 and 80 micro-bands, filtering and feature extraction were done in the same way. Furthermore, for the conventional bands and the 19 spectral features, feature extraction was performed according to Table (1). Figure (7) shows the classification accuracy obtained using 40 frequency bands, 60 frequency bands, 80 frequency bands, the seven traditional frequency bands, and the 19 spectral features of the conventional bands. The best result was obtained with the 60 frequency micro-band features, with an average accuracy of 89.54%; the use of 80 frequency bands came second, and the weakest accuracy was obtained with the seven traditional frequency bands.

Fig. 7.

Fig. 7

Comparison of classification accuracy for power of different frequency bands (Dataset I).

Evaluation metrics, including the average classification accuracy, sensitivity, and specificity of each method, are shown in Table (2); according to it, the best classification accuracy is obtained with 60 frequency bands. Therefore, the power of 60 frequency bands has been selected as the feature in both datasets for the remainder of the study.

Table 2.

Evaluation metrics for the power of different frequency band segmentations (Dataset I).

Features/evaluation metric Accuracy (%) Sensitivity (%) Specificity (%)
40 Frequency power 86.98 88.73 87.14
60 Frequency power 89.54 90.18 88.29
80 Frequency power 87.06 89.77 85.75
7 traditional bands power 82.65 85.10 80.55
19 spectral features 86.46 86.37 84.53

3.2. Select the suitable time patch

For dataset I, given that the signals were 30 min long, each signal was divided into 0.5, 1, 1.5, 5, 10, 15, and 30 min time patches in order to select the optimal time patch. For each case, using the power of the 60 frequency micro-bands as the feature, the classification accuracy in the conventional state (without our tensor-based scheme) was calculated; the results can be seen in Table (3). The left column of the table lists the different time patches, and the right column gives the corresponding classification accuracy. According to the results, the best case was the 1.5-min time patch, with an accuracy of 89.54%, and this division is used in the subsequent steps.

Table 3.

Accuracy of classification in different time patches (Dataset I).

Time patch (minutes) Classification accuracy (%)
0.5 87.34
1 89.16
1.5 89.54
5 84.78
10 78.23
15 67.37
30 53.12

Furthermore, for the second dataset, since the recording time differs from the first dataset, the 2-min signal of each person was similarly divided into short intervals of 2, 4, 6, 10, 12, 20, 24, 40, 60, and 120 s. To determine the finest time patch, the classification accuracy for the conventional state (without tensor analysis) with the 60 frequency band features (similar to the first dataset) was calculated for the different time patches. According to Table (4), for dataset II, the best time patch is 12 s, with a classification accuracy of 64.12%.

Table 4.

Accuracy of classification in different time patches (Dataset II).

Time patch (seconds) Classification accuracy (%)
2 56.38
4 58.35
6 59.16
10 63.76
12 64.12
20 62.86
24 61.45
40 57.64
60 41.18
120 43.24

3.3. Our tensor-based scheme and comparison of different features

According to the results of the previous sections, the finest time patch was 1.5 min, and the best frequency spectrum segmentation was 60 equal frequency bands. To proceed, the signals are first divided into short intervals of 1.5 min; therefore, each 30-min signal is divided into 20 short signal patches.

To apply tensor decomposition to the signals, a training tensor must first be formed. At this stage, for the signals of the 67% of subjects (40 people) randomly selected for training, the different features (powers of different frequency bands, Lempel-Ziv complexity, Shannon's entropy, and multi-scale sample entropy) are stored in the training tensor while retaining their spatial information. In the next step, to form the final features, PARAFAC tensor decomposition was applied to the training tensor, yielding four factor matrices. Then, using the component matrix of the fourth mode (which belongs to the subjects), the SVM was trained. For the model evaluation phase, the component vectors of the test subjects are obtained by multiplying the unfolded test tensor by the pseudo-inverse of the Khatri-Rao product of the loading matrices (the first, second, and third matrices) obtained from the tensor decomposition of the training data. The classification results for control subjects and patients with MCI are shown in Figure (8).
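The sketch below shows our reading of this projection step with NumPy/SciPy: the test tensor is unfolded along the subject/patch mode and multiplied by the pseudo-inverse of the Khatri-Rao product of the training loading matrices (variable names and the unfolding convention are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import khatri_rao

def unfold(tensor, mode):
    """Kolda-style mode-n unfolding (remaining modes flattened in Fortran order)."""
    return np.reshape(np.moveaxis(tensor, mode, 0), (tensor.shape[mode], -1), order='F')

def project_test_subjects(X_test, A_space1, A_space2, A_spectral):
    """X_test: (5, 5, F, n_test); A_*: training loading matrices, each of size (mode dim) x R."""
    # Khatri-Rao product ordered as A_spectral (x) A_space2 (x) A_space1 to match the unfolding
    kr = khatri_rao(khatri_rao(A_spectral, A_space2), A_space1)      # (5*5*F) x R
    # Model: X_(4) ~= D * kr^T, hence D = X_(4) * pinv(kr)^T
    return unfold(X_test, 3) @ np.linalg.pinv(kr).T                   # (n_test x R) components
```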

Fig. 8.

Fig. 8

Classification accuracy for different methods and features (Dataset I).

Furthermore, according to Table (5), for dataset I the best case is the power feature of 60 frequency bands, with an average classification accuracy of 93.96%; moreover, according to the third box plot in Figure (8), this feature reached an accuracy of 96.52% in one of the runs.

Table 5.

Evaluation metric for each feature for the tensor-based mode (Dataset I).

Features/evaluation metric Accuracy (%) Sensitivity (%) Specificity (%)
Band power of theta and alpha1 82.13 85.78 78.89
19 Spectral features 83.37 89.70 82.41
Power of 60 frequency bands 93.96 96.07 96.98
Lempel-Ziv 82.75 82.31 84.62
Shannon's entropy 83.21 84.80 80.90
Multi-scale sample entropy 68.89 70.86 68.27

Table (6) also shows the dimensions corresponding to each of the above methods in the training and test phases. In this table, the number of components (R) used in the tensor analysis was selected by evaluating the classification accuracy, i.e., according to the best accuracy obtained for each method.

Table 6.

Feature space of each method for the 90-s signal (Dataset I).

Features/Dimension Train Tensor R-component Train Vector Test Tensor Test Vector
Band power of theta and alpha1 5×5×2×820 50 820×50 5×5×2×400 400×50
19 Spectral features 5×5×19×820 60 820×60 5×5×19×400 400×60
Power of 60 frequency bands 5×5×60×820 65 820×65 5×5×60×400 400×65
Lempel-Ziv 5×5×7×820 45 820×45 5×5×7×400 400×45
Shannon's entropy 5×5×7×820 85 820×85 5×5×7×400 400×85
Multi-scale sample entropy 5×5×7×820 85 820×85 5×5×7×400 400×85

For the second dataset, the training data tensor was formed from the power of different, equally spaced frequency bands. In the subsequent step, to extract the final features, PARAFAC tensor decomposition was performed on the training tensor, yielding four factor matrices. Next, a support vector machine was trained using the subject-mode component matrix. For the validation phase, the test tensor was projected onto the loading matrices obtained from the tensor analysis of the training data, and thus the component matrix for the test tensor was obtained. Classification accuracies were obtained for three different groups, and after averaging, the overall classification accuracy was computed. To examine other features, the tensor decomposition process was also applied, for dataset II, to the power of 60 equal frequency segments, the power of the traditional bands (delta, theta, alpha 1, alpha 2, beta 1, beta 2, and gamma), and the 19 spectral features listed in Table (1). Table (7) summarizes the results for the second dataset. The highest classification accuracy, according to the table, is obtained with the power features of the 60 frequency bands.

Table 7.

Feature space and accuracy of different features (Dataset II).

Features/Dimension & Accuracy Train Tensor R-component Test Tensor Accuracy (%)
19 Spectral features 6×6×19×270 65 6×6×19×130 52.24
Power of 60 frequency bands 6×6×60×270 75 6×6×60×130 78.65
Power of 7 traditional bands 6×6×7×270 65 6×6×7×130 69.72

3.4. Comparison of tensor decomposition results and traditional approach

To compare our tensor-based scheme with the traditional approaches, the classification accuracies of the two cases are compared in Table (8), using the average classification accuracy of each method. According to the table, the proposed tensor-based method achieved better results than the traditional approach and improved the accuracy of distinguishing MCI patients from control subjects in both datasets. Also, according to Table 6, Table 7, the number of components is considerably smaller than the number of training samples (time patches) in the datasets.

Table 8.

Classification accuracy between traditional and proposed methods based on tensor decomposition.

Evaluation method Accuracy (%)
Traditional method Dataset I 89.54
Dataset II 64.12
Our Tensor-based Method Dataset I 93.96
Dataset II 78.65

4. Discussion

In this research, our primary inquiry revolved around the question of whether considering the spatial location of electrodes as opposed to the conventional approach leads to an improvement in problem-solving. To investigate this, we employed tensor decomposition as one of the tools to delve into this matter. The ultimate findings on dataset I demonstrated a relative enhancement in performance, validating our hypothesis. For further validation, we applied this new approach to another dataset, and the same observation was noted in dataset II.

Many studies have been performed so far on diagnosing MCI, none of which used the concepts of tensors and tensor decomposition. In this study, in addition to using the tensor concept, we tried, contrary to conventional methods, to preserve the spatial information of the channels when forming the tensor so that more useful and valuable information could be extracted. According to the results obtained in this study, shown in Table (8), this innovation together with the tensor decomposition method considerably improved the accuracy of classifying patients with MCI versus healthy people: without the tensor decomposition method, the average classification accuracy was 89.54%, which reached 93.96% using our proposed method on dataset I. To be more certain, we also tested our method on the second dataset, where the classification accuracy increased from 64.12% to 78.65% compared with the traditional method. Therefore, our initial hypothesis was supported, and it was shown that maintaining the spatial positions of the electrodes in the processing stage provides additional important information.

Dataset I used in this study belongs to the study in Ref. [24], in which the best reported classification accuracy was obtained from the channels of the left temporal zone and was equal to 80%. Using the same dataset and an equivalent cross-evaluation scheme, the method proposed in our study, which preserves spatial and frequency information simultaneously and uses tensor decomposition for feature extraction, has greatly improved the classification accuracy.

In the case of MCI, tensor decomposition has not been used to date; however, this concept has increased and improved performance, as well as classification accuracy, in many machine learning applications. For example, in Brain-Computer Interface (BCI) applications, Zink et al. used tensor decomposition to significantly reduce the system's dependence on a subject-specific calibration phase, making this step unnecessary [49]. Li et al. used an adaptive method based on tensor decomposition to improve the classification of motor imagery in EEG signals [50]. Spyrou et al. used tensor concepts for estimating functional brain connectivity, which led to a better estimate of brain connections [51]. Tensor decomposition has also improved performance in compression-related problems; in a study by Dauwels et al., near-lossless EEG compression was achieved using matrix and tensor decompositions [52]. In studies related to sleep staging and also children's development, tensor concepts, and particularly tensor decomposition, have increased performance and improved outcomes compared with conventional approaches [53,54]. Thus, the advantages of tensor decomposition and tensor-related concepts demonstrated in many previous studies promise improved system performance in other applications, including the diagnosis of diseases, and especially the diagnosis of MCI using tensor decomposition. This is quite evident in the results of our study, which in a sense confirm the good results of other tensor-related studies. Of course, this is not the end of the matter; given the limitations of this study, it can be generalized and expanded, for example by using different tensor decomposition methods to extract features or by implementing a variety of other methods to form the tensor. Furthermore, in the dataset used in this study, 19 channels were used to record the signal, and it seems that better results could be achieved in future studies by increasing the number and density of the electrodes. In further studies, the efficiency of this method could also be evaluated for other EEG recording paradigms in MCI patients, such as recording with open eyes. It also seems that good results could be obtained by fusing signal and image modalities, such as the EEG signal and fMRI images, using tensor decomposition.

5. Conclusions

The diagnosis of MCI, the first stage on the path to the acute conditions of Alzheimer's disease and dementia, can significantly slow the progression of the disease to severe stages. However, because of its mild and ambiguous clinical symptoms, diagnosing MCI is difficult and expensive. The use of cheap and accessible EEG signals, together with signal processing techniques, makes it possible to improve this situation. In this study, it was found that, in addition to temporal and spectral features, maintaining the spatial positions of the electrodes also provides valuable information. Accordingly, a method based on tensor analysis that maintains the locations of the EEG electrodes in the formation of the tensor was developed to diagnose MCI. The results showed a clear advantage over conventional and classical methods.

Data Availability Statement

The authors do not have permission to share data.

CRediT authorship contribution statement

Alireza Faghfouri: Writing – original draft, Visualization, Software, Methodology, Investigation, Formal analysis, Data curation. Vahid Shalchyan: Writing – review & editing, Validation, Supervision, Resources, Project administration, Methodology, Data curation, Conceptualization. Hamza Ghazanfar Toor: Software, Investigation, Data curation. Imran Amjad: Software, Investigation, Data curation. Imran Khan Niazi: Writing – review & editing, Resources, Funding acquisition, Data curation, Conceptualization.

Declaration of competing interest

The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Imran Khan Niazi reports that administrative support, article publishing charges, equipment, drugs or supplies, and statistical analysis were provided by The Hamblin Trust, New Zealand. The other authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgment

This work was supported by a grant from “The Hamblin Trust, New Zealand”, and also received co-funding from the Hamblin chiropractic research fund trust and through donations to the Centre for Chiropractic Research Supporters Program at the New Zealand College of Chiropractic.

References

  • 1.Handayani N. Coherence and phase synchrony analyses of EEG signals in Mild Cognitive Impairment (MCI): a study of functional brain connectivity. Pol. J. Med. Phys. Eng. 2018;24(1):1–9. [Google Scholar]
  • 2.Dauwels J. 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE; 2009. EEG synchrony analysis for early diagnosis of Alzheimer's disease: a study with several synchrony measures and EEG data sets. [DOI] [PubMed] [Google Scholar]
  • 3.Collie A.P., Maruff The neuropsychology of preclinical Alzheimer's disease and mild cognitive impairment. Neurosci. Biobehav. Rev. 2000;24(3):365–374. doi: 10.1016/s0149-7634(00)00012-9. [DOI] [PubMed] [Google Scholar]
  • 4.Ally B.A. Preserved frontal memorial processing for pictures in patients with mild cognitive impairment. Neuropsychologia. 2009;47(10):2044–2055. doi: 10.1016/j.neuropsychologia.2009.03.015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 5.Choi E. Clinical characteristics of patients in a dementia prevention center. J. Neurol. Sci. 2009;283(1):291–292. [Google Scholar]
  • 6.Villeneuve S. Episodic memory deficits in vascular and non vascular mild cognitive impairment. J. Neurol. Sci. 2009;283(1):291. [Google Scholar]
  • 7.Ahmadlou M. Complexity of functional connectivity networks in mild cognitive impairment subjects during a working memory task. Clin. Neurophysiol. 2014;125(4):694–702. doi: 10.1016/j.clinph.2013.08.033. [DOI] [PubMed] [Google Scholar]
  • 8.Duan F., et al. Topological network analysis of early alzheimer's disease based on resting-state EEG. IEEE Trans. Neural Syst. Rehabil. Eng. 2020;28(10):2164–2172. doi: 10.1109/TNSRE.2020.3014951. [DOI] [PubMed] [Google Scholar]
  • 9.Vecchio F., et al. Cortical connectivity and memory performance in cognitive decline: a study via graph theory from EEG data. Neuroscience. 2016;316:143–150. doi: 10.1016/j.neuroscience.2015.12.036. [DOI] [PubMed] [Google Scholar]
  • 10.Musaeus C.S., Nielsen M.S., Høgh P. Altered low-frequency EEG connectivity in mild cognitive impairment as a sign of clinical progression. J. Alzheim. Dis. 2019;68(3):947–960. doi: 10.3233/JAD-181081. [DOI] [PubMed] [Google Scholar]
  • 11.Mammone N., et al. Permutation disalignment index as an indirect, EEG-based, measure of brain connectivity in MCI and AD patients. Int. J. Neural Syst. 2017;27(5) doi: 10.1142/S0129065717500204. [DOI] [PubMed] [Google Scholar]
  • 12.McBride J., et al. Discrimination of mild cognitive impairment and Alzheimer's disease using transfer entropy measures of scalp EEG. Journal of healthcare engineering. 2015;6(1):55–70. doi: 10.1260/2040-2295.6.1.55. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 13.Van der Hiele K. EEG correlates in the spectrum of cognitive decline. Clin. Neurophysiol. 2007;118(9):1931–1939. doi: 10.1016/j.clinph.2007.05.070. [DOI] [PubMed] [Google Scholar]
  • 14.Herrmann C., Demiralp T. Human EEG gamma oscillations in neuropsychiatric disorders. Clin. Neurophysiol. 2005;116(12):2719–2733. doi: 10.1016/j.clinph.2005.07.007. [DOI] [PubMed] [Google Scholar]
  • 15.Dauwels J. Slowing and loss of complexity in Alzheimer's EEG: two sides of the same coin? Int. J. Alzheimer's Dis. 2011:2011. doi: 10.4061/2011/539621. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 16.Latchoumane C., et al. Proceedings of the World Congress on Engineering. 2008. Multiway analysis of Alzheimer's disease: classification based on space-frequency characteristics of EEG time series. [Google Scholar]
  • 17.Hornero R. Nonlinear analysis of electroencephalogram and magnetoencephalogram recordings in patients with Alzheimer's disease. Phil. Trans. Math. Phys. Eng. Sci. 2009;367:317–336. doi: 10.1098/rsta.2008.0197. 1887. [DOI] [PubMed] [Google Scholar]
  • 18.Jiang Z.-y. Study on EEG power and coherence in patients with mild cognitive impairment during working memory task. J. Zhejiang Univ. - Sci. B. 2005;6(12):1213–1219. doi: 10.1631/jzus.2005.B1213. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 19.Adler G., Brassen S., Jajcevic A. EEG coherence in Alzheimer's dementia. Journal of neural transmission. 2003;110(9):1051–1058. doi: 10.1007/s00702-003-0024-8. [DOI] [PubMed] [Google Scholar]
  • 20.Fernandez A., et al. Complexity analysis of spontaneous brain activity in Alzheimer disease and mild cognitive impairment: an MEG study. Alzheimer Disease & Associated Disorders. 2010;24(2):182–189. doi: 10.1097/WAD.0b013e3181c727f7. [DOI] [PubMed] [Google Scholar]
  • 21.Zhu B., et al. Analysis of EEG complexity in patients with mild cognitive impairment. Journal of Neurological Disorders. 2017;5(4) [Google Scholar]
  • 22.Babiloni C., et al. Directionality of EEG synchronization in Alzheimer's disease subjects. Neurobiol. Aging. 2009;30(1):93–102. doi: 10.1016/j.neurobiolaging.2007.05.007. [DOI] [PubMed] [Google Scholar]
  • 23.Gallego-Jutglà E., et al. 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE; 2012. Diagnosis of Alzheimer's disease from EEG by means of synchrony measures in optimized frequency bands. [DOI] [PubMed] [Google Scholar]
  • 24.Kashefpoor M., Rabbani H., Barekatain M. Supervised dictionary learning of EEG signals for mild cognitive impairment diagnosis. Biomed. Signal Process Control. 2019;53 [Google Scholar]
  • 25.Ieracitano C., et al. A novel multi-modal machine learning based approach for automatic classification of EEG recordings in dementia. Neural Network. 2020;123:176–190. doi: 10.1016/j.neunet.2019.12.006. [DOI] [PubMed] [Google Scholar]
  • 26.Latchoumane C.-F.V., et al. Multiway array decomposition analysis of EEGs in Alzheimer's disease. J. Neurosci. Methods. 2012;207(1):41–50. doi: 10.1016/j.jneumeth.2012.03.005. [DOI] [PubMed] [Google Scholar]
  • 27.Hitchcock F.L. The expression of a tensor or a polyadic as a sum of products. J. Math. Phys. 1927;6(1–4):164–189. [Google Scholar]
  • 28.Cattell R.B. “Parallel proportional profiles” and other principles for determining the choice of factors by rotation. Psychometrika. 1944;9(4):267–283. [Google Scholar]
  • 29.Tucker L.R. Implications of factor analysis of three-way matrices for measurement of change. Problems in measuring change. 1963;15(122–137):3. [Google Scholar]
  • 30.Appellof C.J., Davidson E.R. Strategies for analyzing data from video fluorometric monitoring of liquid chromatographic effluents. Anal. Chem. 1981;53(13):2053–2056. [Google Scholar]
  • 31.Kolda T.G., Bader B.W. Tensor decompositions and applications. SIAM Rev. 2009;51(3):455–500. [Google Scholar]
  • 32.Khan N. Xbox 360 Kinect cognitive games improve slowness, complexity of EEG, and cognitive functions in subjects with mild cognitive impairment: a randomized control trial. Game. Health J. 2019 doi: 10.1089/g4h.2018.0029. [DOI] [PubMed] [Google Scholar]
  • 33.Qiao W., Bi X. Ternary-task convolutional bidirectional neural turing machine for assessment of EEG-based cognitive workload. Biomed. Signal Process Control. 2020;57 [Google Scholar]
  • 34.Berendse H., et al. Magnetoencephalographic analysis of cortical activity in Alzheimer's disease: a pilot study. Clin. Neurophysiol. 2000;111(4):604–612. doi: 10.1016/s1388-2457(99)00309-0. [DOI] [PubMed] [Google Scholar]
  • 35.Osipova D., et al. Altered generation of spontaneous oscillations in Alzheimer's disease. Neuroimage. 2005;27(4):835–841. doi: 10.1016/j.neuroimage.2005.05.011. [DOI] [PubMed] [Google Scholar]
  • 36.Poza J., et al. 2007 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE; 2007. Analysis of spontaneous MEG activity in patients with Alzheimer's disease using spectral entropies. [DOI] [PubMed] [Google Scholar]
  • 37.Kashefpoor M., Rabbani H., Barekatain M. Automatic diagnosis of mild cognitive impairment using electroencephalogram spectral features. Journal of medical signals and sensors. 2016;6(1):25. [PMC free article] [PubMed] [Google Scholar]
  • 38.Shannon C.E. A mathematical theory of communication. The Bell system technical journal. 1948;27(3):379–423. [Google Scholar]
  • 39.Grassberger P., Procaccia I. Estimation of the Kolmogorov entropy from a chaotic signal. Phys. Rev. 1983;28(4):2591. [Google Scholar]
  • 40.Eckmann J.P., Ruelle D. The theory of chaotic attractors; 1985. Ergodic Theory of Chaos and Strange Attractors; pp. 273–312. [Google Scholar]
  • 41.Pincus S.M. Approximate entropy as a measure of system complexity. Proc. Natl. Acad. Sci. USA. 1991;88(6):2297–2301. doi: 10.1073/pnas.88.6.2297. [DOI] [PMC free article] [PubMed] [Google Scholar]
  • 42.Richman J.S., Moorman J.R. Physiological time-series analysis using approximate entropy and sample entropy. Am. J. Physiol. Heart Circ. Physiol. 2000 doi: 10.1152/ajpheart.2000.278.6.H2039. [DOI] [PubMed] [Google Scholar]
  • 43.Costa M., Goldberger A.L., Peng C.-K. Multiscale entropy analysis of biological signals. Phys. Rev. 2005;71(2) doi: 10.1103/PhysRevE.71.021906. [DOI] [PubMed] [Google Scholar]
  • 44.Zhang X.-S., Roy R.J., Jensen E.W. EEG complexity as a measure of depth of anesthesia for patients. IEEE Trans. Biomed. Eng. 2001;48(12):1424–1433. doi: 10.1109/10.966601. [DOI] [PubMed] [Google Scholar]
  • 45.Harshman R.A. 1970. Foundations of the PARAFAC Procedure: Models and Conditions for an" Explanatory" Multimodal Factor Analysis. [Google Scholar]
  • 46.Cong F., et al. Tensor decomposition of EEG signals: a brief review. J. Neurosci. Methods. 2015;248:59–69. doi: 10.1016/j.jneumeth.2015.03.018. [DOI] [PubMed] [Google Scholar]
  • 47.Cichocki A. John Wiley & Sons; 2009. Nonnegative Matrix and Tensor Factorizations: Applications to Exploratory Multi-Way Data Analysis and Blind Source Separation. [Google Scholar]
  • 48.Hearst M.A., et al. Support vector machines. IEEE Intell. Syst. Their Appl. 1998;13(4):18–28. [Google Scholar]
  • 49.Zink R., et al. Tensor-based classification of an auditory mobile BCI without a subject-specific calibration phase. J. Neural. Eng. 2016;13(2) doi: 10.1088/1741-2560/13/2/026005. [DOI] [PubMed] [Google Scholar]
  • 50.Li X., et al. Adaptation of motor imagery EEG classification model based on tensor decomposition. J. Neural. Eng. 2014;11(5) doi: 10.1088/1741-2560/11/5/056020. [DOI] [PubMed] [Google Scholar]
  • 51.Spyrou L., Parra M., Escudero J. Complex tensor factorization with PARAFAC2 for the estimation of brain connectivity from the EEG. IEEE Trans. Neural Syst. Rehabil. Eng. 2018;27(1):1–12. doi: 10.1109/TNSRE.2018.2883514. [DOI] [PubMed] [Google Scholar]
  • 52.Dauwels J., et al. Near-lossless multichannel EEG compression based on matrix and tensor decompositions. IEEE journal of biomedical and health informatics. 2012;17(3):708–714. doi: 10.1109/titb.2012.2230012. [DOI] [PubMed] [Google Scholar]
  • 53.Kinney-Lang E., et al. Tensor-driven extraction of developmental features from varying paediatric EEG datasets. J. Neural. Eng. 2018;15(4) doi: 10.1088/1741-2552/aac664. [DOI] [PubMed] [Google Scholar]
  • 54.Kouchaki S., et al. Tensor based singular spectrum analysis for automatic scoring of sleep EEG. IEEE Trans. Neural Syst. Rehabil. Eng. 2014;23(1):1–9. doi: 10.1109/TNSRE.2014.2329557. [DOI] [PubMed] [Google Scholar]
