Abstract
The process of reconstructing underlying cortical and subcortical electrical activities from Electroencephalography (EEG) or Magnetoencephalography (MEG) recordings is called Electrophysiological Source Imaging (ESI). Given the complementarity between EEG and MEG in measuring radial and tangential cortical sources, combined EEG/MEG is considered beneficial for improving the reconstruction performance of ESI algorithms. Traditional algorithms mainly emphasize incorporating predesigned neurophysiological priors to solve the ESI problem. Deep learning frameworks aim to directly learn the mapping from scalp EEG/MEG measurements to the underlying brain source activities in a data-driven manner, demonstrating superior performance compared to traditional methods. However, most existing deep learning approaches for the ESI problem operate on a single modality of EEG or MEG, meaning the complementarity of these two modalities has not been fully utilized. How to fuse EEG and MEG in a more principled manner under the deep learning paradigm remains a challenging question. This study develops a Multi-Modal Deep Fusion (MMDF) framework using Attention Neural Networks (ANN) to fully leverage the complementary information between EEG and MEG for solving the ESI inverse problem, termed MMDF-ANN. Specifically, our proposed brain source imaging approach consists of four phases: feature extraction, weight generation, deep feature fusion, and source mapping. Our experimental results on both synthetic and real datasets demonstrate that using a fusion of EEG and MEG can significantly improve the source localization accuracy compared to using a single modality of EEG or MEG. Compared to the benchmark algorithms, MMDF-ANN demonstrates good stability when reconstructing sources with extended activation areas and when the EEG/MEG measurements have a low signal-to-noise ratio.
Index Terms—Brain source localization, EEG/MEG source imaging, multi-modal deep learning, deep attention network, information fusion
I. Introduction
Electrophysiological source imaging (ESI), also known as EEG/MEG source localization, is a non-invasive neuroimaging technique that aims to determine the specific brain sources responsible for generating the electrophysiological signals detected by EEG/MEG sensors on the scalp [1]. Compared with other non-invasive neuroimaging techniques, such as functional near-infrared spectroscopy (fNIRS), functional magnetic resonance imaging (fMRI), computed tomography (CT), and positron emission tomography (PET), EEG and MEG have excellent temporal resolution on the order of milliseconds. EEG also enjoys the advantages of easy portability and low cost. Furthermore, EEG/MEG provides direct measurements of neuronal activities, while fMRI measures blood oxygen level dependent (BOLD) signals [2], which are secondary measurements of metabolic signals. Owing to these advantages, ESI enables researchers and clinicians to uncover latent brain signals characterizing rapid neuronal dynamic activities, and it serves as a fundamental tool for neuroscience studies and clinical diagnosis [1], [3], [4], [5], [6], [7], [8]. However, the ill-posed nature of the ESI inverse problem makes it challenging to uniquely determine the active underlying brain sources. Additionally, the recorded EEG/MEG signals are inevitably contaminated by measurement noise, which complicates the accurate reconstruction of source distributions. During the past decades, numerous algorithms have been proposed to seek a unique solution to the ESI inverse problem (see the comprehensive and insightful review paper by Dr. Bin He [1] and references therein).
A traditional approach is to constrain the solution space by utilizing prior information about the configurations of the source signal [9], [10], [11], [12]. Based on this assumption, the minimum norm estimate (MNE) [13] was introduced by imposing the $\ell_2$ norm as a regularization term to promote a solution with minimum energy while explaining the EEG/MEG measurements. MNE exhibits excellent performance in the localization of superficial or near-surface sources, but it may face challenges when applied to deep sources. To deal with this problem, several variants and extensions of MNE have been developed, including weighted MNE (wMNE) [13], standardized low resolution electromagnetic tomography (sLORETA) [14], and dynamic statistical parametric mapping (dSPM) [15]. These variants were designed to enhance the original MNE by incorporating additional information, such as depth sensitivity and sensor noise covariance. The methods guided by the $\ell_2$ norm tend to provide spatially diffuse source estimates. To encourage the sparseness of source solutions, the $\ell_1$ norm was adopted in the minimum current estimate (MCE) [16], which promotes sparse source solutions. Bore et al. proposed to apply the $\ell_p$ norm, where $0 < p < 1$, on the source signal and the $\ell_1$ norm on the data fitting error term [17] to obtain sparse sources inferred from EEG with artifact contamination. Babadi et al. developed a greedy pursuit algorithm to iteratively solve the sparse MEG source localization problem [18]. Gramfort et al. presented a combined use of the $\ell_1$ prior across source locations and the $\ell_2$ norm across time, termed the mixed norm estimate (MxNE), to encourage an adaptive balance between sparsity and smoothness of the source activation [9]. Yang et al. used low-frequency components of spatial graph Fourier filters to help classical ESI methods estimate extended brain sources [19]. Another widely used approach for brain source localization is the beamformer [20], which is an adaptive spatial filter that can be used to localize neural activities originating within a specific location while attenuating activities originating from other locations by adjusting the weights of the filter [21]. One of the most commonly used beamforming techniques is the linearly constrained minimum variance (LCMV) beamformer [22]. For instance, Hong et al. presented an LCMV beamformer combined with a source suppression strategy for localization of coherent sources [23]. Samadzadehaghdam et al. proposed a sparse LCMV (SP-LCMV) by introducing $\ell_1$-norm regularization of the beamformer output in the cost function to encourage the reconstruction of sparse sources [24]. Moreover, to encourage the localization of multiple sources, Moiseev et al. derived source localizers based on multiple constrained minimum variance (MCMV) filters [22]. Herdman et al. proposed an MCMV-based multi-step iterative approach, called MIA, to reconstruct source activity for simulated and real EEG data [25]. More implementations of LCMV and MCMV beamformers can be found in [22] and [26].
Recently, as deep learning approaches have gained increasing popularity, various deep learning frameworks have been investigated to solve the ESI inverse problem in a data-driven way. A key advantage of deep learning approaches is that they can directly learn the mapping from the scalp EEG/MEG measurements to the underlying brain sources, eliminating the need to pre-specify regularization terms. Additionally, once a deep learning model is well trained, it allows online reconstruction of source signals with EEG/MEG recordings as input in a highly accurate and efficient way. Most of the proposed deep learning frameworks for solving ESI are based on an end-to-end architecture, which leverages convolutional neural networks (CNNs) or Long Short-Term Memory (LSTM) units. For instance, Hecker et al. [27] designed a deep architecture called ConvDip, leveraging 2D CNN layers for precise localization of a varying number of sources. Craley et al. [28] introduced SZTrack, utilizing a 1D CNN encoder alongside bidirectional LSTM (BiLSTM) units for automatic tracking of epileptic seizure activity and localization of seizure zones. Sun et al. [29] proposed DeepSIF, employing residual blocks and LSTM units to perform spatiotemporal estimation of underlying source dynamics. Jiao et al. proposed a graph Fourier transform based Bi-LSTM framework for electrophysiological source imaging [30]. Huang et al. [31] presented a data-synthesized spatiotemporally convolutional encoder-decoder network (DST-CedNet) to learn a robust mapping from EEG/MEG measurements to sources. Furthermore, Liang et al. [32] developed a novel framework called SI-SBLNN for solving the ESI inverse problem by incorporating sparse Bayesian learning with deep neural networks.
Existing publications indicate that deep learning frameworks have significant superiority in improving the accuracy of source localization. However, most deep learning models employ a single modality of EEG or MEG to solve the ESI inverse problem, and few studies have leveraged a multimodal integration of simultaneously recorded EEG and MEG, which can benefit the accuracy of ESI given the complementary measurements of the two modalities. Fusing MEG and EEG reconstruction results at the decision level is usually suboptimal, and how to design an early-fusion framework in the context of the deep learning paradigm remains a challenging problem. In this study, building upon our previous work [33] with a new attention mechanism, we propose a new Multi-Modal Deep Fusion framework using an Attention Neural Network, termed MMDF-ANN, where EEG and MEG signals are integrated through early fusion with a specially designed deep learning architecture. Our main contributions are highlighted as follows:
We proposed a new MMDF-ANN framework for brain source imaging, which employed two CNN modules to separately extract complementary information from EEG and MEG.
The unique design of EEG and MEG input matrices largely preserves the spatial locations of EEG and MEG sensors, which has been shown to be very effective for solving the ESI problem. Besides, the dilated convolution filter was introduced to expand the receptive field of CNN without adding layers, allowing for more effective capture of spatial dependencies.
We designed a channel-wise attention module to generate proper weights for feature maps, enabling the adaptive integration of complementary information from two neural signal modalities of EEG and MEG.
Comprehensive experiments have been conducted on simulated data and real data, showing the superior performance of the proposed MMDF-ANN against state-of-the-art methods, particularly in the case of extended sources and situations where EEG/MEG measurements are contaminated with a high level of noise.
II. Problem Statement and Related Work
In this section, we first introduce the forward and inverse problem of ESI in Section II-A, then we highlight two paradigms of ESI algorithms in Section II-B and Section II-C which motivated our method.
A. Problem Statement
In order to estimate the brain source activation patterns based on the scalp EEG/MEG measurements, an EEG/MEG forward model needs to be established in advance to characterize the mapping from the source space to the EEG/MEG sensors. Solving the “ESI forward problem” produces a leadfield matrix, in which each element reflects how the electrophysiological signals generated by neural activities in each brain region influence the EEG/MEG signals measured by each sensor. This relationship can be described by a linear equation:
$$X = LS + E \qquad (1)$$

where $X \in \mathbb{R}^{M \times T}$ is the recorded EEG and/or MEG, $M$ is the number of electrodes, $T$ indicates the number of time points, $S \in \mathbb{R}^{N \times T}$ is the source signal from $N$ brain regions, $L \in \mathbb{R}^{M \times N}$ is the leadfield matrix, and $E \in \mathbb{R}^{M \times T}$ is the measurement noise.
Solving the “ESI inverse problem” requires calculating $S$ based on the measured $X$ and the leadfield matrix $L$ by solving Eq. (1). However, the ESI inverse problem is ill-conditioned, as the number of electrodes is far less than that of brain regions (i.e., $M \ll N$), resulting in infinitely many source solutions. In order to deal with the ill-posed nature of ESI, traditional approaches attempt to impose regularization constraints to restrict solutions to a subspace that satisfies specific prior assumptions on the source structure [3], [34]. In this case, $S$ can be obtained by solving Eq. (2):
$$\hat{S} = \arg\min_{S}\;\|X - LS\|_F^2 + \lambda R(S) \qquad (2)$$

where $\|\cdot\|_F$ is the Frobenius norm. The first term is called the data fitting error, which is introduced to find a solution that explains the EEG/MEG measurements. The second term $R(S)$ is called the regularization term, which is imposed to obtain a source solution that is constrained to a prior assumption. For example, MCE encourages a sparse solution by adopting the $\ell_1$ norm, while MNE and its variants (wMNE, sLORETA, dSPM, etc.) promote a diffuse and smooth source distribution by introducing the $\ell_2$ norm. $\lambda$ is a hyperparameter added to control the balance between data fitting error minimization and regularization.
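For concreteness, the following minimal NumPy sketch solves Eq. (2) for the special case of an $\ell_2$ (minimum-norm-type) regularizer, for which a closed-form solution exists; the variable names, dimensions, and toy data are illustrative assumptions rather than the exact implementation used in this paper.

```python
import numpy as np

def l2_regularized_inverse(X, L, lam=1.0):
    """Closed-form solution of Eq. (2) with R(S) = ||S||_F^2.

    X  : (M, T) EEG/MEG measurements
    L  : (M, N) leadfield matrix
    lam: regularization weight balancing data fit and source energy
    Returns S_hat of shape (N, T).
    """
    M = L.shape[0]
    # argmin_S ||X - L S||_F^2 + lam ||S||_F^2  =>  S_hat = L^T (L L^T + lam I)^(-1) X
    gram = L @ L.T + lam * np.eye(M)
    return L.T @ np.linalg.solve(gram, X)

# Toy usage with random data (dimensions follow Section IV-A: 59 EEG channels, 1984 sources)
rng = np.random.default_rng(0)
L = rng.standard_normal((59, 1984))
S = np.zeros((1984, 10)); S[100, :] = 1.0            # a single active source over 10 time points
X = L @ S + 0.01 * rng.standard_normal((59, 10))     # noisy measurements via Eq. (1)
S_hat = l2_regularized_inverse(X, L, lam=1.0)
```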
B. Extended Source Imaging
When a localized assembly of neurons fires, due to the volume conduction property of the brain, the neighboring sources are most likely to be activated as well, generating a spatially smoothed activation pattern, which is referred to as source extents or extended source activation. Previous works leveraged the spatial smoothness configuration of source signals based on 3D triangular meshes to promote source extent estimation [35], [36], [37], [38], [39]. Ding proposed a Total Variation (TV) matrix derived from brain 3D meshes as a sparse constraint in the transform domain (first-order differential spatial space) to encourage similarity of neighboring source activities, thus enabling reconstruction of source extents [40]. The extended source activation has been integrated into multiple ESI frameworks [11], [41], [42], [43]. Given that the epileptogenic zone (EZ) exhibits a focal activation with spatial continuity, the TV term is particularly useful for the localization of EZs. For instance, Sohrabpour et al. [44] proposed the fast spatiotemporal iteratively reweighted edge sparsity minimization (FAST-IRES) algorithm to estimate extended sources by balancing the source sparsity and edge sparsity while fitting the scalp EEG measurements, and further validated it using MEG [45].
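As a concrete illustration of this transform-domain idea, the sketch below builds a generic first-order difference (TV-style) operator from a list of mesh edges so that the $\ell_1$ norm of the transformed sources penalizes differences between neighboring regions; this is a simplified example under our own notation, not the exact matrix construction of [40].

```python
import numpy as np
from scipy.sparse import lil_matrix

def build_tv_operator(n_sources, edges):
    """First-order differential operator over a triangular source mesh.

    edges: list of (i, j) index pairs of neighboring mesh vertices.
    Row k of V encodes s_i - s_j for the k-th edge, so ||V @ s||_1 is small
    when neighboring sources share similar amplitudes (extended, smooth patches).
    """
    V = lil_matrix((len(edges), n_sources))
    for k, (i, j) in enumerate(edges):
        V[k, i] = 1.0
        V[k, j] = -1.0
    return V.tocsr()

# Example: a tiny 4-vertex chain mesh with 3 edges
V = build_tv_operator(4, [(0, 1), (1, 2), (2, 3)])
s = np.array([1.0, 1.0, 0.0, 0.0])                  # one extended "patch" covering vertices 0-1
print(np.abs(V @ s).sum())                          # TV value = 1.0 (a single activation edge)
```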
C. Complementarity Between EEG and MEG
According to previous studies [46], [47], [48], EEG and MEG have been shown to be complementary in their sensitivity to cortical source orientations. To be specific, MEG exhibits more sensitivity to tangential sources, while EEG can accurately reflect activities of both radial and tangential components [49]. Besides, EEG exhibits high sensitivity to conductivity uncertainties: the electric signals detected by EEG are typically attenuated and smeared due to the low conductivity of the skull, which may further limit the accuracy of source imaging. By contrast, MEG measures the magnetic field generated by electrical activities in the brain, which is barely affected by conductivity changes [50]. Previous studies have proposed several EEG/MEG fusion techniques; for instance, the signal-to-noise ratio (SNR) transformation was utilized in [51] and [52] to convert EEG and MEG data into a common SNR domain. Based on this, Ding and Yuan [53] developed a sparse ESI approach with the integration of EEG and MEG to facilitate the reconstruction of complex brain activities. Henson et al. [54] presented an empirical Bayesian scheme where the source solutions from each modality were fused under a common generative model. Baillet et al. [55] proposed a method for cooperative processing of MEG and EEG by selectively weighting their leadfields to minimize the mutual information between them. Given that the complementary information between the EEG and MEG modalities is under-exploited [33], utilizing simultaneously recorded EEG and MEG offers the potential to improve the accuracy and robustness of ESI algorithms [56].
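A minimal sketch of the SNR-transformation style of fusion used in [51], [52] is given below: each modality is scaled channel-wise by an estimate of its own sensor noise level so that EEG and MEG measurements (and their leadfields) can be stacked on a common, dimensionless scale. The noise-estimation step and variable names are illustrative assumptions.

```python
import numpy as np

def fuse_eeg_meg(X_eeg, X_meg, L_eeg, L_meg, noise_eeg, noise_meg):
    """Scale each modality by its per-channel noise standard deviation
    (a simple 'common SNR domain' transform), then stack data and leadfields."""
    s_eeg = noise_eeg.std(axis=1, keepdims=True) + 1e-12   # per-channel EEG noise level
    s_meg = noise_meg.std(axis=1, keepdims=True) + 1e-12   # per-channel MEG noise level
    X = np.vstack([X_eeg / s_eeg, X_meg / s_meg])          # joint (M_eeg + M_meg) x T data
    L = np.vstack([L_eeg / s_eeg, L_meg / s_meg])          # joint leadfield on the same scale
    return X, L
```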
III. Method
A. Proposed Framework
The MMDF-ANN architecture, as illustrated in Fig. 1, draws inspiration from the input design of ConvDip [27], where the EEG measurements were first interpolated to a 2D matrix according to the spatial locations of the EEG electrodes and then fed into a 2D convolutional architecture for feature extraction. In this study, both EEG and MEG measurements are separately interpolated to two distinct 2D matrices and further set as inputs of two individual 2D CNN modules. Both CNN modules share identical hyperparameter configurations, employing the dilated convolution filter [57] with the dilation rate set to 2 to strategically expand the receptive field without increasing the CNN depth. The receptive field is defined as the spatial extent within the input space that contributes to the activation of a neuron in the CNN. In the early layers of a CNN, the receptive field is limited and can only capture local features. As the network structure deepens, the receptive field expands, allowing the network to extract more abstract and high-level features by aggregating a broader range of information. Our work employs the dilated convolution, where empty “spaces” are inserted between kernel elements, allowing the kernel to capture information over a more extensive spatial range without the necessity of increasing the CNN depth, thereby mitigating the expansion of model parameters. The dilation rate indicates the extent of kernel expansion, and it is worth noting that when the dilation rate is set to 1, the dilated convolution operation can be regarded as a regular convolution operation.
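The per-modality feature extractor can be sketched in PyTorch as follows; the number of layers, filter counts, and input grid sizes are illustrative assumptions, while the dilation rate of 2 follows the description above.

```python
import torch
import torch.nn as nn

class ModalityEncoder(nn.Module):
    """2D CNN feature extractor for one interpolated EEG or MEG sensor map.
    dilation=2 enlarges the receptive field without adding layers."""
    def __init__(self, in_ch=1, n_filters=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, n_filters, kernel_size=3, dilation=2, padding=2),
            nn.ReLU(),
            nn.Conv2d(n_filters, n_filters, kernel_size=3, dilation=2, padding=2),
            nn.ReLU(),
        )

    def forward(self, x):          # x: (batch, 1, H, W) interpolated sensor image
        return self.net(x)         # (batch, n_filters, H, W) feature maps

eeg_maps = ModalityEncoder()(torch.randn(4, 1, 9, 9))    # toy 9x9 EEG grid
meg_maps = ModalityEncoder()(torch.randn(4, 1, 11, 11))  # toy 11x11 MEG grid
```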
Fig. 1.

Illustration of the proposed framework: Multi-Modal Deep Fusion framework using Attention Neural Networks (MMDF-ANN).
Given that feature maps derived from distinct CNN channels contribute differently to the source estimation, a self-attention mechanism is introduced to produce proper weights for the feature maps [58], [59]. The feature maps $U = [u_1, u_2, \ldots, u_C]$ extracted from $C$ channels are first aggregated into a set of channel descriptors through a global average pooling operation on each feature map, defined as $z_c = \frac{1}{H \times W}\sum_{i=1}^{H}\sum_{j=1}^{W} u_c(i, j)$ for the $c$-th feature map, where $H$ and $W$ are the height and width of the feature map, and $z_c$, the mean value of the matrix $u_c$, is the obtained descriptor for the $c$-th channel of the convolution filter. Next, the obtained channel descriptors $z = [z_1, z_2, \ldots, z_C]^{\top}$ are fed into a two-layer fully connected (FC) neural network module to generate the attention weights, denoted as $w$ and expressed as $w = \sigma\left(W_2\,\delta(W_1 z)\right)$, where $W_1 \in \mathbb{R}^{(C/r) \times C}$ and $W_2 \in \mathbb{R}^{C \times (C/r)}$ are two learnable weight matrices, $r$ is the reduction ratio, $\delta$ represents the ReLU activation, $\sigma$ represents the sigmoid activation, and $w \in \mathbb{R}^{C}$ is the obtained weight vector, whose $c$-th element $w_c$ represents the generated weight for the feature map $u_c$. In MMDF-ANN, the feature maps extracted from different channels are first assigned the weights learned from the two-layer fully connected neural network, and the weighted feature maps are then flattened into a set of vectors. Next, a concatenation operation is applied to these feature vectors, and finally, the fused feature is fed into a fully connected module with each output representing the activation of each source in the brain. With the designed architecture, the MMDF-ANN framework is able to adaptively leverage the information derived from both EEG and MEG modalities and conduct early fusion in the latent feature space for the ESI inverse problem.
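A minimal squeeze-and-excitation-style sketch of the channel attention and early-fusion steps described above is given below in PyTorch; the reduction ratio and the way the fusion head is instantiated are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Global average pooling plus two FC layers produce one weight per feature map."""
    def __init__(self, n_channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(n_channels, n_channels // reduction), nn.ReLU(),
            nn.Linear(n_channels // reduction, n_channels), nn.Sigmoid(),
        )

    def forward(self, feat):                       # feat: (B, C, H, W)
        z = feat.mean(dim=(2, 3))                  # channel descriptors z_c
        w = self.fc(z)                             # attention weights w_c in (0, 1)
        return feat * w[:, :, None, None]          # re-weighted feature maps

def fuse_and_map(eeg_feat, meg_feat, n_sources=1984):
    """Weight, flatten, and concatenate both modalities, then map to the source space."""
    att_eeg = ChannelAttention(eeg_feat.shape[1])
    att_meg = ChannelAttention(meg_feat.shape[1])
    fused = torch.cat([att_eeg(eeg_feat).flatten(1),
                       att_meg(meg_feat).flatten(1)], dim=1)
    head = nn.Linear(fused.shape[1], n_sources)    # one output per brain region
    return head(fused)
```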
B. Loss Function
In deep learning, the loss function serves as the objective that the model tries to minimize during training. The design of the loss function reflects in which direction the model is optimized, which can significantly affect the model’s convergence and performance. The first component of our loss function is the mean squared error (MSE), which is widely used for deep learning models applied to solve the ESI inverse problem and is defined as:
$$\mathcal{L}_{\mathrm{MSE}} = \frac{1}{N}\,\|s - \hat{s}\|_2^2 \qquad (3)$$

where $s \in \mathbb{R}^{N}$ and $\hat{s} \in \mathbb{R}^{N}$ respectively represent the ground truth and the estimated source signal from the proposed MMDF-ANN. The MSE loss can effectively quantify the discrepancy between $s$ and $\hat{s}$ by measuring their Euclidean distance, in which the geodesic distance between brain regions is not considered. In other words, the MSE loss imposes the same penalty on two source estimations where one is distributed next to the ground truth location while the other one is further away, even though they correspond to quite different localization errors. To effectively reduce the localization error, we propose a topological loss to assign an appropriate penalty to source estimates based on their geodesic distance, which is defined as:
$$\mathcal{L}_{\mathrm{topo}} = \frac{1}{N^2}\sum_{i=1}^{N}\sum_{j=1}^{N}\left(E \odot G\right)_{ij} \qquad (4)$$

where $\odot$ represents the element-wise multiplication, $E \in \mathbb{R}^{N \times N}$ is a symmetric error matrix, in which the element $E_{ij} = |s_i - \hat{s}_i|\,|s_j - \hat{s}_j|$ indicates the product of the estimation errors corresponding to region $i$ and region $j$, and $G \in \mathbb{R}^{N \times N}$ is a symmetric transformation matrix based on the shortest paths between all brain regions, in which the element $G_{ij}$ is defined as:

$$G_{ij} = \begin{cases} 1 - \exp\!\left(-\gamma\, d_{ij}^2\right), & j \in \mathcal{N}_i \\ 1, & \text{otherwise} \end{cases} \qquad (5)$$

where $d_{ij}$ is the shortest path between region $i$ and region $j$, $\mathcal{N}_i$ is the set of all the neighboring regions whose shortest path to region $i$ is less than a threshold value, and $\gamma$ is the kernel width in the range of 0 to 1. Thus, the non-zero elements in $G$ exhibit an “inverse-Gaussian” shaped distribution centered on the diagonal. By introducing $G$, ESI solutions with different spatial distances from the ground truth can be penalized to varying degrees. When the estimated location corresponding to $\hat{s}$ is closer to that of $s$, the topological loss will provide a small value, otherwise a large value. By including the topological loss, the complete loss function can be defined as:
$$\mathcal{L} = \mathcal{L}_{\mathrm{MSE}} + \alpha\,\mathcal{L}_{\mathrm{topo}} \qquad (6)$$

where $\alpha$ is a hyperparameter ranging from 0 to 1 used to balance the above two loss components.
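A minimal PyTorch sketch of this combined loss is given below; it follows the notation reconstructed in Eqs. (3)-(6) above, assumes the geodesic-distance penalty matrix $G$ has been precomputed offline, and should be read as an illustration rather than the authors' exact implementation.

```python
import torch

def topological_loss(s_true, s_hat, G):
    """Eq. (4): penalize pairs of estimation errors in regions that are far apart on the mesh.
    s_true, s_hat: (batch, N) source vectors;  G: (N, N) penalty matrix from Eq. (5)."""
    err = (s_true - s_hat).abs()                   # per-region estimation errors
    E = err.unsqueeze(2) * err.unsqueeze(1)        # (batch, N, N) error-product matrix
    return (E * G).mean(dim=(1, 2)).mean()         # average over region pairs and batch

def total_loss(s_true, s_hat, G, alpha=0.05):
    """Eq. (6): MSE term plus the topological term weighted by alpha."""
    mse = torch.mean((s_true - s_hat) ** 2)
    return mse + alpha * topological_loss(s_true, s_hat, G)
```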
IV. Numerical Experiments
To fully evaluate the performance of the proposed MMDF-ANN framework, we first carried out extensive experiments on synthetic EEG and MEG data with varying activation patterns. Then, we applied the framework to real EEG/MEG recordings from face perception tasks and epileptic seizures for further validation.
A. Experiments on Synthetic Data
In this section, we first illustrated the process of solving the ESI forward problem, and then we explained the generation procedure of synthetic data with varying activation configurations. Finally, we conducted comprehensive experiments on the obtained datasets.
1). Realistic Head Model:
The forward brain model is derived from MRI scans from the MNE-Python sample dataset [60]. The MEG system consists of 204 gradiometers and 102 magnetometers based on the configuration of the Neuromag Vectorview system, and the EEG data were simultaneously acquired using a 60-channel electrode cap. The original MRI images were obtained with a Siemens 1.5 T Sonata scanner, and the source surfaces were reconstructed using FreeSurfer [61]. Next, a three-layer head model was constructed based on the boundary element method (BEM), and the source space was defined as a grid containing 1984 dipoles distributed across the cortex. All bad channels in EEG and MEG were removed, and only the magnetometers in MEG were involved in the computation process. This configuration results in an EEG leadfield matrix with dimensions of 59 × 1984 and a MEG leadfield matrix with dimensions of 102 × 1984.
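For readers who wish to reproduce a comparable forward model, the hedged sketch below follows the standard MNE-Python workflow on the sample dataset; subject names, file paths, and the source-space spacing are toolbox defaults and illustrative choices that may need adjustment (e.g., the spacing must be tuned to obtain exactly 1984 dipoles), and recent MNE versions are assumed to return the data path as a pathlib.Path object.

```python
import mne
from mne.datasets import sample

data_path = sample.data_path()
subjects_dir = data_path / "subjects"
raw_fname = data_path / "MEG" / "sample" / "sample_audvis_raw.fif"
trans_fname = data_path / "MEG" / "sample" / "sample_audvis_raw-trans.fif"

info = mne.io.read_info(raw_fname)                       # sensor definitions (EEG + MEG)
src = mne.setup_source_space("sample", spacing="oct5",   # cortical source grid
                             subjects_dir=subjects_dir)
bem_model = mne.make_bem_model("sample", ico=4,          # three-layer BEM head model
                               conductivity=(0.3, 0.006, 0.3),
                               subjects_dir=subjects_dir)
bem = mne.make_bem_solution(bem_model)
fwd = mne.make_forward_solution(info, trans=trans_fname, src=src, bem=bem,
                                meg=True, eeg=True)
leadfield = fwd["sol"]["data"]                           # rows: sensors, columns: source components
```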
2). Synthetic Data Generation:
To generate synthetic data, we first generated source signals with different activation patterns. As illustrated in Fig. 2, we introduced three levels of neighborhoods (LNs=1, 2, 3) to indicate different sizes of source extents. Then, we simultaneously activated all brain regions included in the entire “patch”, where the activation intensity of the neighboring areas with LNs=1, 2, 3 is respectively configured to be 85%, 70%, and 55% of the central region. After obtaining the source data, the synthetic EEG and MEG data were calculated according to Eq. (1), where the measurement noise was determined based on different SNR levels (SNR = 30 dB, 20 dB, 10 dB). SNR is a measure used to quantify the ratio of the signal power $P_{\mathrm{signal}}$ to the noise power $P_{\mathrm{noise}}$, defined as $\mathrm{SNR} = 10\log_{10}\left(P_{\mathrm{signal}}/P_{\mathrm{noise}}\right)$.
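A small NumPy sketch of how measurement noise at a prescribed SNR level can be added to the noiseless data from Eq. (1); the white-noise assumption is an illustrative simplification.

```python
import numpy as np

def add_noise_at_snr(clean, snr_db, rng=None):
    """Return clean + noise such that 10*log10(P_signal / P_noise) = snr_db."""
    rng = np.random.default_rng() if rng is None else rng
    p_signal = np.mean(clean ** 2)
    p_noise = p_signal / (10 ** (snr_db / 10.0))
    noise = rng.standard_normal(clean.shape) * np.sqrt(p_noise)
    return clean + noise

# Noiseless measurements follow Eq. (1): X = L @ S; then, for example,
# X_noisy = add_noise_at_snr(L @ S, snr_db=20)
```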
Fig. 2.

Illustration of activated brain regions based on varying levels of neighborhoods (LNs).
3). Experimental Settings:
For model training, we first randomly selected 1600 out of 1984 regions to activate under different source configurations. Then all obtained synthetic datasets were employed in the training process of MMDF-ANN, where the designed loss function in Eq. (6) was utilized with the weight $\alpha$ and the kernel width $\gamma$ set to 0.05 and 0.005, respectively. For model testing, 50 brain regions, distinct from those used for the training set, were randomly selected to form a test set, ensuring that there was no source overlap between the training data and test data. The comparison algorithms include MNE [13], sLORETA [14], dSPM [15], the MCMV beamformer [22], ADMM [62], and ConvDip [27]. For MNE, sLORETA, dSPM, the MCMV beamformer, and ADMM, the SNR transformation technique [51] was used for EEG/MEG fusion. For ConvDip, since integrating the locations of EEG and MEG sensors into a single input was difficult to implement, we first carried out brain source imaging on EEG and MEG independently, and then averaged the results from both modalities for source reconstruction; the MSE loss function was adopted to guide its training. All experiments were executed on a Windows PC equipped with an Intel i9 CPU and 64 GB of memory. Training of the deep learning models employed an NVIDIA V100 GPU with 32 GB of memory.
4). Evaluation Metrics:
The performance of all algorithms was assessed quantitatively using two metrics: localization error (LE) and area under the precision-recall curve (AUPRC). LE quantifies the geodesic distance between the region with the maximum amplitude in the reconstructed source area and the center of the actual source patch on the cortex meshes, computed using the Dijkstra shortest path algorithm. In this study, the unit of measurement for LE is millimeters (mm). AUPRC assesses the overlap of source extents between the reconstructed and actual sources. Given the sparse nature of source distributions (activated vs. non-activated regions are highly unbalanced), the precision-recall curve has been suggested to be a more suitable metric than the area under the receiver operating characteristic curve (AUROC) for assessing success on imbalanced datasets [63], [64]. Good performance is achieved if LE approaches 0 and AUPRC approaches 1.
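The two metrics can be computed as sketched below, assuming a sparse graph of geodesic edge lengths over the cortical mesh is available; scikit-learn's average precision is used as the AUPRC estimate, and the function and variable names are our own.

```python
import numpy as np
from scipy.sparse.csgraph import dijkstra
from sklearn.metrics import average_precision_score

def localization_error(s_hat, true_center, mesh_graph):
    """Geodesic distance (mm) from the peak of the estimate to the true patch center.
    mesh_graph: (N, N) sparse matrix of edge lengths between neighboring mesh vertices."""
    peak = int(np.argmax(np.abs(s_hat)))
    dist = dijkstra(mesh_graph, indices=true_center)   # shortest paths from the true center
    return dist[peak]

def auprc(s_hat, active_mask):
    """Area under the precision-recall curve for recovering the truly active regions."""
    return average_precision_score(active_mask.astype(int), np.abs(s_hat))
```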
5). Source Reconstruction With Single Source:
The performance comparison between MMDF-ANN and benchmark algorithms on LE and AUPRC is summarized in Table I. The boxplots for different LNs settings with SNR=30 dB are provided in Fig. 3. Reconstructed source distributions for LNs=3 with varying SNR settings (SNR=30 dB, 20 dB, and 10 dB) are shown in Fig. 4. From Table I and Figs. 3–4, we can see that:
TABLE I.
Performance Comparison Between the Proposed Method and Benchmark Algorithms With Single Source
| SNR | Method | LE (mm), LNs=1 | AUPRC, LNs=1 | LE (mm), LNs=2 | AUPRC, LNs=2 | LE (mm), LNs=3 | AUPRC, LNs=3 |
|---|---|---|---|---|---|---|---|
| 30 dB | MNE | 4.991 ± 6.769 | 0.681 ± 0.207 | 6.336 ± 7.322 | 0.557 ± 0.141 | 8.084 ± 10.508 | 0.457 ± 0.118 |
| sLORETA | 0.709 ± 2.562 | 0.870 ± 0.118 | 2.981 ± 4.250 | 0.783 ± 0.119 | 4.890 ± 5.483 | 0.662 ± 0.121 | |
| dSPM | 8.890 ± 11.074 | 0.678 ± 0.242 | 12.991 ± 13.672 | 0.567 ±0.188 | 23.002 ± 20.596 | 0.427 ± 0.147 | |
| MCMV | 1.688 ± 4.932 | 0.557 ± 0.202 | 6.463 ± 11.550 | 0.425 ± 0.163 | 10.822 ± 15.528 | 0.352 ± 0.140 | |
| ADMM | 3.254 ± 6.012 | 0.895 ± 0.155 | 4.416 ± 5.807 | 0.906 ± 0.083 | 6.011 ± 5.432 | 0.869 ± 0.060 | |
| ConvDip | 10.828 ± 7.607 | 0.577 ± 0.259 | 9.157 ± 6.996 | 0.812 ± 0.159 | 9.062 ± 7.171 | 0.868 ± 0.107 | |
| Proposed | 5.046 ± 5.713 | 0.819 ± 0.181 | 3.739 ± 6.002 | 0.942 ± 0.097 | 4.478 ± 6.072 | 0.965 ± 0.056 | |
| 20 dB | MNE | 5.013 ± 6.824 | 0.679 ± 0.208 | 6.218 ± 7.056 | 0.555 ± 0.140 | 7.767 ± 10.626 | 0.457 ± 0.117 |
| sLORETA | 1.447 ± 3.163 | 0.858 ± 0.124 | 2.967 ± 4.300 | 0.778 ± 0.121 | 5.085 ± 5.787 | 0.651 ± 0.121 | |
| dSPM | 9.569 ± 12.219 | 0.655 ± 0.238 | 12.952 ± 13.736 | 0.554 ± 0.194 | 26.923 ± 28.094 | 0.411 ± 0.144 | |
| MCMV | 1.688 ± 4.932 | 0.556 ± 0.203 | 7.236 ± 12.778 | 0.426 ± 0.164 | 11.175 ± 16.327 | 0.352 ± 0.141 | |
| ADMM | 2.793 ± 4.825 | 0.895 ± 0.152 | 3.798 ± 4.690 | 0.905 ± 0.081 | 6.613 ± 5.454 | 0.864 ± 0.067 | |
| ConvDip | 11.343 ± 7.775 | 0.569 ± 0.251 | 9.456 ± 6.772 | 0.813 ± 0.159 | 9.216 ± 6.820 | 0.866 ± 0.104 | |
| Proposed | 5.978 ± 5.898 | 0.813 ± 0.185 | 3.739 ± 6.002 | 0.939 ± 0.100 | 4.387 ± 5.513 | 0.964 ± 0.054 | |
| 10 dB | MNE | 4.919 ± 6.337 | 0.669 ± 0.203 | 5.307 ± 5.756 | 0.550 ± 0.138 | 7.487 ± 9.334 | 0.450 ± 0.114 |
| sLORETA | 12.543 ± 30.572 | 0.569 ± 0.236 | 6.286 ± 6.755 | 0.629 ± 0.153 | 9.041 ± 7.622 | 0.536 ± 0.120 | |
| dSPM | 10.190 ± 10.527 | 0.473 ± 0.221 | 15.494 ± 15.873 | 0.442 ±0.185 | 25.623 ± 20.348 | 0.337 ± 0.131 | |
| MCMV | 1.688 ± 4.932 | 0.557 ± 0.202 | 7.639 ± 12.645 | 0.423 ± 0.167 | 10.065 ± 11.563 | 0.351 ± 0.142 | |
| ADMM | 3.027 ± 5.543 | 0.888 ± 0.169 | 4.928 ± 5.744 | 0.876 ± 0.092 | 7.293 ± 6.747 | 0.801 ± 0.089 | |
| ConvDip | 11.616 ± 7.704 | 0.558 ± 0.262 | 9.470 ± 7.458 | 0.786 ± 0.170 | 9.922 ± 7.279 | 0.851 ± 0.111 | |
| Proposed | 5.608 ± 5.158 | 0.804 ± 0.181 | 4.999 ± 6.068 | 0.933 ± 0.099 | 5.068 ± 5.543 | 0.943 ± 0.058 | |
Results are shown as mean ± std.
Fig. 3.

Performance comparison of different algorithms on AUPRC (on the top) and LE (at the bottom), with SNR of 30 dB, for LNs = 1 (on the left), LNs = 2 (in the middle) and LNs = 3 (on the right).
Fig. 4.

Brain source reconstruction by different ESI algorithms with 3 levels of neighborhoods for SNR = 30 dB (on the top), SNR = 20 dB (in the middle) and SNR = 10 dB (at the bottom).
As the SNR decreases and the source range expands, there is a significant increase in LE and a decrease in AUPRC for MNE, sLORETA, dSPM, and MCMV, which means the performance of these algorithms is limited and they are more suitable for localizing sources with concentrated distributions from high-SNR EEG/MEG measurements. When an extended area of the source is activated, or the SNR of the EEG/MEG data is at a low level, it is difficult for these algorithms to provide reliable solutions. In contrast, ADMM exhibits remarkable stability in LE and achieves superior performance in AUPRC for reconstructing concentrated source extents (LNs=1) across varying SNRs.
Moreover, it can be observed in Fig. 3 that with an increase in the source extent area (LNs=2, 3), ADMM demonstrates deteriorated performance in AUPRC and LE, whereas ConvDip and the proposed MMDF-ANN both display enhanced performance. This indicates that the deep learning models are more accurate than the traditional ones for larger source extent estimation. The reason is that the unique input design enables ConvDip and MMDF-ANN to effectively capture the spatial dependencies present in EEG/MEG measurements and subsequently aggregate the spatial information associated with source activation. Furthermore, compared to ConvDip, MMDF-ANN achieves superior performance in all cases, which shows that the designed framework provides higher accuracy in source localization.
Empirically, it can be seen in Fig. 4 that for source activations distributed in the left frontal lobe, the source estimation provided by ConvDip is partially diffused into the right hemisphere, while the source estimation provided by MMDF-ANN is more concentrated within the left hemisphere. This shows that the proposed loss function can provide better guidance for model training by taking into account the distinct spatial distances between brain regions and penalizing spatial mismatches that lead to large localization errors.
6). Source Reconstruction With Multiple Sources:
To further assess the performance of MMDF-ANN, we conducted experiments with multiple simultaneously activated sources. The number of sources was set to 2, 3, and 4, with MMDF-ANN and the benchmark algorithms employed for the reconstruction of multiple sources. The performance comparison on AUPRC is summarized in Table II. The boxplots for different numbers of sources with SNR=30 dB and LNs=3 are provided in Fig. 5. Reconstructed source activities for SNR=30 dB and LNs=3 with different numbers of sources are shown in Fig. 6.
TABLE II.
Performance Comparison Between the Proposed Method and Benchmark Algorithms With Multiple Sources
| N_s | Method | SNR=30 dB, LNs=1 | SNR=20 dB, LNs=1 | SNR=10 dB, LNs=1 | SNR=30 dB, LNs=2 | SNR=20 dB, LNs=2 | SNR=10 dB, LNs=2 | SNR=30 dB, LNs=3 | SNR=20 dB, LNs=3 | SNR=10 dB, LNs=3 |
|---|---|---|---|---|---|---|---|---|---|---|
| 2 | MNE | 0.544 ± 0.135 | 0.546 ± 0.133 | 0.538 ± 0.146 | 0.424 ±0.121 | 0.424 ±0.118 | 0.410 ± 0.117 | 0.327 ± 0.116 | 0.325 ± 0.116 | 0.322 ±0.117 |
| sLORETA | 0.846 ± 0.075 | 0.807 ± 0.113 | 0.490 ± 0.215 | 0.753 ± 0.065 | 0.740 ± 0.063 | 0.559 ± 0.133 | 0.668 ± 0.073 | 0.657 ± 0.076 | 0.531 ± 0.091 | |
| dSPM | 0.631 ± 0.192 | 0.599 ± 0.193 | 0.337 ± 0.206 | 0.493 ± 0.121 | 0.479 ± 0.123 | 0.342 ± 0.143 | 0.393 ± 0.108 | 0.378 ± 0.106 | 0.314 ± 0.125 | |
| MCMV | 0.326 ± 0.154 | 0.327 ± 0.157 | 0.318 ± 0.156 | 0.265 ± 0.099 | 0.264 ± 0.101 | 0.259 ± 0.096 | 0.213 ± 0.081 | 0.213 ± 0.081 | 0.215 ± 0.079 | |
| ADMM | 0.814 ± 0.104 | 0.805 ± 0.110 | 0.745 ± 0.183 | 0.868 ± 0.064 | 0.863 ± 0.070 | 0.744 ± 0.145 | 0.832 ± 0.080 | 0.808 ± 0.091 | 0.696 ± 0.137 | |
| ConvDip | 0.538 ± 0.171 | 0.542 ± 0.184 | 0.520 ± 0.165 | 0.785 ± 0.103 | 0.783 ± 0.104 | 0.730 ± 0.122 | 0.884 ± 0.070 | 0.883 ± 0.067 | 0.841 ± 0.100 | |
| Proposed | 0.696 ± 0.129 | 0.680 ± 0.145 | 0.618 ± 0.155 | 0.908 ± 0.060 | 0.901 ± 0.066 | 0.839 ± 0.110 | 0.942 ± 0.040 | 0.941 ± 0.039 | 0.892 ± 0.071 | |
| 3 | MNE | 0.394 ± 0.120 | 0.392 ± 0.121 | 0.383 ± 0.126 | 0.294 ± 0.085 | 0.294 ± 0.084 | 0.286 ± 0.088 | 0.222 ± 0.071 | 0.222 ± 0.072 | 0.216 ± 0.069 |
| sLORETA | 0.776 ± 0.088 | 0.746 ± 0.106 | 0.471 ±0.177 | 0.695 ± 0.072 | 0.683 ± 0.073 | 0.535 ± 0.112 | 0.591 ± 0.081 | 0.581 ± 0.084 | 0.478 ± 0.078 | |
| dSPM | 0.516 ± 0.165 | 0.498 ± 0.159 | 0.301 ± 0.154 | 0.405 ± 0.110 | 0.394 ± 0.109 | 0.307 ±0.112 | 0.320 ± 0.096 | 0.314 ± 0.098 | 0.257 ± 0.070 | |
| MCMV | 0.180 ± 0.102 | 0.178 ± 0.102 | 0.178 ± 0.099 | 0.167 ± 0.077 | 0.167 ± 0.078 | 0.166 ± 0.077 | 0.153 ± 0.074 | 0.152 ± 0.073 | 0.154 ± 0.070 | |
| ADMM | 0.738 ± 0.094 | 0.732 ± 0.096 | 0.654 ± 0.145 | 0.822 ± 0.067 | 0.809 ± 0.074 | 0.683 ±0.117 | 0.782 ± 0.070 | 0.757 ± 0.085 | 0.586 ± 0.151 | |
| ConvDip | 0.425 ± 0.147 | 0.419 ± 0.149 | 0.407 ± 0.153 | 0.708 ± 0.124 | 0.701 ± 0.123 | 0.623 ± 0.125 | 0.817 ± 0.076 | 0.808 ± 0.079 | 0.744 ± 0.094 | |
| Proposed | 0.543 ± 0.151 | 0.531 ± 0.148 | 0.483 ± 0.148 | 0.808 ± 0.106 | 0.792 ± 0.110 | 0.695 ± 0.149 | 0.859 ± 0.067 | 0.843 ± 0.078 | 0.761 ± 0.112 | |
| 4 | MNE | 0.318 ± 0.103 | 0.317 ± 0.105 | 0.302 ± 0.104 | 0.214 ± 0.073 | 0.212 ± 0.074 | 0.208 ± 0.074 | 0.182 ± 0.060 | 0.181 ± 0.059 | 0.178 ± 0.060 |
| sLORETA | 0.740 ± 0.094 | 0.689 ± 0.094 | 0.394 ± 0.154 | 0.632 ± 0.076 | 0.619 ± 0.078 | 0.471 ±0.113 | 0.533 ± 0.071 | 0.521 ± 0.072 | 0.432 ± 0.086 | |
| dSPM | 0.427 ± 0.136 | 0.390 ± 0.135 | 0.243 ±0.114 | 0.336 ± 0.084 | 0.328 ± 0.081 | 0.272 ± 0.091 | 0.286 ± 0.072 | 0.281 ± 0.073 | 0.240 ± 0.077 | |
| MCMV | 0.117 ± 0.095 | 0.117 ± 0.096 | 0.118 ± 0.093 | 0.136 ± 0.079 | 0.136 ± 0.080 | 0.136 ± 0.077 | 0.146 ± 0.058 | 0.146 ± 0.058 | 0.146 ± 0.060 | |
| ADMM | 0.693 ± 0.100 | 0.677 ± 0.103 | 0.587 ± 0.149 | 0.764 ± 0.090 | 0.742 ± 0.096 | 0.556 ± 0.155 | 0.716 ± 0.109 | 0.679 ± 0.113 | 0.492 ± 0.155 | |
| ConvDip | 0.385 ± 0.114 | 0.380 ± 0.109 | 0.359 ± 0.110 | 0.647 ± 0.104 | 0.635 ± 0.109 | 0.585 ± 0.134 | 0.770 ± 0.088 | 0.763 ± 0.093 | 0.669 ± 0.118 | |
| Proposed | 0.426 ± 0.147 | 0.418 ± 0.149 | 0.358 ± 0.142 | 0.689 ± 0.120 | 0.678 ± 0.115 | 0.595 ± 0.140 | 0.769 ± 0.093 | 0.764 ± 0.097 | 0.691 ± 0.117 | |
Results are shown as mean ± std of AUPRC.
Fig. 5.

Performance comparison of different algorithms on AUPRC with SNR = 30 dB and LNs = 3 for multiple sources.
Fig. 6.

Brain source reconstruction by different ESI algorithms with SNR = 30 dB and LNs = 3 for multiple sources.
From the results, we can see that with the increase in the number of sources, there is a reduction in AUPRC for all algorithms, and the superiority of MMDF-ANN becomes less pronounced. This indicates the complexity and difficulty inherent in accurately reconstructing multiple sources. It should also be noted that the performance of deep learning models depends significantly on the diversity of the training set.
B. Hyperparameter Tuning With Bayesian Optimization
The loss function for the proposed MMDF-ANN contains two adjustable hyperparameters: the weight $\alpha$ and the kernel width $\gamma$ controlling the penalty degree for source solutions with different geodesic distances. To achieve optimal source reconstruction performance with MMDF-ANN, we employed Bayesian optimization [65] to determine the optimal values of $\alpha$ and $\gamma$ within the specified range of 0.001 to 1. The training set and the validation set were split in a ratio of 70% to 30%. The initial values of $\alpha$ and $\gamma$ were randomly selected. The number of trials was 50, with the objective value for each trial determined by the mean AUPRC of the results on the validation set.
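One possible implementation of such a search, using the Optuna library (whose default sampler performs Bayesian-style sequential model-based optimization), is sketched below; the train_and_validate placeholder stands in for training MMDF-ANN and returning the mean validation AUPRC, and the whole block is an assumption about tooling rather than the authors' actual code.

```python
import optuna

def train_and_validate(alpha, gamma):
    """Placeholder for training MMDF-ANN with the given loss hyperparameters and
    returning the mean AUPRC on the held-out 30% validation split.
    A synthetic score is returned here only so that the example runs end-to-end."""
    return 1.0 / (1.0 + abs(alpha - 0.05) + abs(gamma - 0.005))

def objective(trial):
    alpha = trial.suggest_float("alpha", 1e-3, 1.0, log=True)   # loss weight in Eq. (6)
    gamma = trial.suggest_float("gamma", 1e-3, 1.0, log=True)   # kernel width in Eq. (5)
    return train_and_validate(alpha, gamma)

study = optuna.create_study(direction="maximize")               # maximize validation AUPRC
study.optimize(objective, n_trials=50)
print(study.best_params, study.best_value)
```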
The optimization history and the corresponding hyperparameters in each trial are presented in Fig. 7, from which we can see that:
Fig. 7.

Hyperparameter tuning using Bayesian optimization with AUPRC as the objective. There are two subplots: (A) optimization history, (B) selected hyperparameters and the obtained objective value from each trial.
The performance of MMDF-ANN is highly sensitive to changes in the weight $\alpha$ and the kernel width $\gamma$. When $\alpha$ and $\gamma$ are set greater than 0.1, it is challenging to attain satisfactory model performance. The identical pairing of $\alpha$ and $\gamma$ consistently leads to the poorest model performance, with the lowest objective value of 0.483.
With the decrease of either $\alpha$ or $\gamma$, the range of the other hyperparameter that can yield optimal model performance broadens. When one hyperparameter ranges from 0.001 to 0.01 and the other ranges from 0.01 to 0.1, there are multiple combinations of $\alpha$ and $\gamma$ that yield optimal model performance, yet the optimal hyperparameters were achieved in Trial #15, with the highest objective value of 0.894. Specifically, the corresponding AUPRC values for the reconstruction results with varying numbers of sources are , and .
Overall, Bayesian optimization offers a highly efficient, fast-converging, and robust adaptive hyperparameter tuning strategy, which is capable of finding the optimal hyperparameter combination in limited iterations, thus effectively saving computing resources.
C. Ablation Study
To validate the efficacy of individual components within MMDF-ANN, we conducted a set of ablation experiments. The experimental settings are shown in Table III. The LE comparison for all ablation experiments with LNs=3 and SNR=30 dB, 20 dB, 10 dB are provided in Fig. 8.
TABLE III.
Design of Ablation Experiments
| Model | EEG | MEG | Dilation Rate | Topological Loss |
|---|---|---|---|---|
| M1 | ✓ | 1 | ||
| M2 | ✓ | 1 | ||
| M3 | ✓ | ✓ | 1 | |
| M4 | ✓ | 2 | ||
| M5 | ✓ | 2 | ||
| M6 | ✓ | ✓ | 2 | |
| M7 | ✓ | 2 | ✓ | |
| M8 | ✓ | 2 | ✓ | |
| M9 | ✓ | ✓ | 2 | ✓ |
Fig. 8.

Performance comparison between different models in the ablation study for LNs = 3 with SNR = 30 dB (on the top), SNR = 20 dB (in the middle) and SNR = 10 dB (at the bottom). Results are shown as mean values of LE. Asterisks indicate the results of post-hoc paired-sample t-tests: *** p < 0.001, ** p < 0.01, * p < 0.05.
It can be concluded from the results that:
By the comparison of M3 vs M1-M2, M6 vs M4-M5, and M9 vs M7-M8, we can see that as opposed to the use of single modality EEG or MEG, the combination use of EEG and MEG can significantly reduce the LE level in brain source localization, which suggests that the complementarities between EEG and MEG provide valuable additional information that benefits source estimation.
Besides, by the comparison of M1 vs M2, M4 vs M5, and M7 vs M8, we can see that models trained on single-modality MEG outperform those trained on single-modality EEG in most cases. The reason is that MEG employs more channels than EEG, and the MEG signal is hardly attenuated by the low conductivity of the skull, so the MEG data contain more comprehensive and accurate information than the EEG data.
By the comparison of M1-M3 vs M4-M6, we can see that introducing the dilated convolution can promote the model performance to some extent when SNR is 30 dB. When SNR is set to 10 dB, the LE increases instead.
By the comparison of M4 vs M7, M5 vs M8, M6 vs M9, we can see that introducing the topological loss can effectively improve the accuracy of source localization, especially when only EEG data is adopted. When MEG data is employed, a less obvious reduction in LE is observed. This is because the LE has been reduced to a relatively low level with the incorporation of MEG. It is challenging to further significantly improve model performance by modifying the loss function. This further demonstrates the superior effectiveness and stability of the proposed MMDF-ANN framework.
D. Experiments on Real Data
To further evaluate the performance of the proposed MMDF-ANN on real data, we applied this framework to both neuroscience studies and clinical applications, which include the source estimation of face perception dataset from SPM (https://www.fil.ion.ucl.ac.uk/spm/data/mmfaces/) and the epilepsy dataset from BrainStorm [60].
1). Evaluation With Face Perception Dataset:
The face perception dataset includes EEG, MEG, and a high-resolution anatomical MRI image (aMRI) from a subject who underwent a face perception task. During the task, the subject was asked to make a comparison between Faces and Scrambled faces. At the same time, a Biosemi system (128 channels) was employed for EEG acquisition, and a CTF system (275 channels) was employed for MEG acquisition. Annotations corresponding to the different events are also provided. We conducted head model construction and forward model calculation using the MNE-Python toolbox [60], and then we extracted the event-related potentials (ERPs) from both the EEG and MEG recordings. Finally, we averaged these ERPs (see Fig. 9) and performed brain source localization with MMDF-ANN and the benchmark methods (MNE, sLORETA, dSPM, MCMV, ADMM, and ConvDip). The reconstructed source distributions are shown in Fig. 10.
Fig. 9.

Averaged EEG (on the top) and MEG (at the bottom) time series and corresponding topographic maps of event evoked potentials.
Fig. 10.

Reconstructed sources for EEG and MEG data from the face perception dataset.
As shown in Fig. 10, the source distributions estimated by MNE, sLORETA, and dSPM produce excessively broad cortical areas that extend beyond the range of the visual area. By contrast, MCMV, ADMM, ConvDip, and the proposed MMDF-ANN provide sparser and more compact and concentrated source reconstructions. Nevertheless, in comparison to MCMV, ADMM and ConvDip, the proposed MMDF-ANN delivers a more precise and cleaner estimation of the visual area.
2). Evaluation With Epilepsy Dataset:
The epilepsy dataset was acquired at the Epilepsy Center Freiburg, Germany, and contains high-resolution 3T MRI images and EEG data (29 channels) recorded at 256 Hz from a patient who had suffered from focal epilepsy since the age of 8 years. This patient underwent invasive EEG for epileptogenic area identification, followed by a left frontal tailored resection, and was seizure-free during a 5-year follow-up period. The EEG spikes marked by epileptologists at the Epilepsy Center in Freiburg were also provided. We followed the Brainstorm tutorial to conduct head model construction and forward model calculation; then we averaged the EEG spikes and employed MNE, sLORETA, dSPM, MCMV, ADMM, ConvDip, and MMDF-ANN for epileptic focus localization. Since MEG recordings are not provided, we set the EEG as the input to both CNN modules in MMDF-ANN. The averaged time series and the topographic map of the inter-ictal EEG spikes are plotted in Fig. 11. The reconstructed source distributions of the different ESI algorithms are shown in Fig. 12.
Fig. 11.

Averaged EEG time series and topographic map of the averaged inter-ictal spike.
Fig. 12.

Reconstructed sources for EEG data from the Brainstorm tutorial dataset.
It can be seen from Fig. 12 that the seizure onset zones (SOZs) reconstructed by sLORETA, dSPM, MCMV, and ADMM span a wide range of cortical areas and extend beyond the left frontal lobe, whereas MNE, ConvDip, and the proposed MMDF-ANN provide more accurate and concentrated SOZs. Compared to MNE, ConvDip gives a more accurate reconstruction of the extended source area, but there are still observable source activations outside the range of the SOZ. By contrast, the proposed MMDF-ANN shows a cleaner reconstruction of the epileptogenic source activation.
V. Conclusion
In this study, a multi-modal deep fusion framework with attention neural network (MMDF-ANN) is developed to solve the ESI inverse problem by fusing simultaneously recorded EEG and MEG. In the MMDF-ANN, two CNN modules are employed as feature extractors, and the dilated convolution is introduced to expand the receptive field without increasing CNN depth. A channel-wise attention mechanism is designed to selectively learn important feature maps. A topological loss is designed to penalize remote sources with large LE by leveraging the geodesic shortest path defined on the source mesh. To evaluate the performance of the proposed MMDF-ANN, we conducted comprehensive experiments on both simulated and real datasets. The numerical experiments indicate that the performance of MMDF-ANN is superior and more stable compared to the benchmark algorithms, especially when applied to the reconstruction of extended source activations. The results of ablation experiments show that using a multimodal fusion of EEG/MEG can significantly improve the source localization accuracy compared to using single-modality EEG/MEG. The experimental results on real data show that the proposed framework provides a satisfactory reconstruction with a more concentrated and compact source distribution than benchmark algorithms.
Acknowledgment
The authors are grateful to Dr. Andreas Schulze-Bonhage and Dr. Marcel Heers at the Epilepsy Center Freiburg for their permission to use the epilepsy dataset in this work.
This work was supported in part by the National Institute of Biomedical Imaging and Bioengineering (NIBIB) of National Institutes of Health (NIH), under Award R21EB033455; and in part by New Jersey Health Foundation under Grant PC 40–23.
Contributor Information
Meng Jiao, Department of Systems and Enterprises, Stevens Institute of Technology, Hoboken, NJ 07030 USA.
Shihao Yang, Department of Systems and Enterprises, Stevens Institute of Technology, Hoboken, NJ 07030 USA.
Xiaochen Xian, H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30332 USA.
Neel Fotedar, Epilepsy Center, Neurological Institute, University Hospitals Cleveland Medical Center, Cleveland, OH 44106 USA; Department of Neurology, Case Western Reserve University School of Medicine, Cleveland, OH 44106 USA.
Feng Liu, Department of Systems and Enterprises and the Semcer Center for Healthcare Innovation, Stevens Institute of Technology, Hoboken, NJ 07030 USA.
References
- [1] He B, Sohrabpour A, Brown E, and Liu Z, “Electrophysiological source imaging: A noninvasive window to brain dynamics,” Annu. Rev. Biomed. Eng., vol. 20, no. 1, pp. 171–196, Jun. 2018.
- [2] Hillman EMC, “Coupling mechanism and significance of the BOLD signal: A status report,” Annu. Rev. Neurosci., vol. 37, no. 1, pp. 161–181, Jul. 2014.
- [3] Michel CM, Murray MM, Lantz G, Gonzalez S, Spinelli L, and de Peralta RG, “EEG source imaging,” Clin. Neurophysiol., vol. 115, no. 10, pp. 2195–2222, Oct. 2004.
- [4] Canuet L et al., “Resting-state EEG source localization and functional connectivity in schizophrenia-like psychosis of epilepsy,” PLoS ONE, vol. 6, no. 11, Nov. 2011, Art. no. e27863.
- [5] Aghajani H, Zahedi E, Jalili M, Keikhosravi A, and Vahdat BV, “Diagnosis of early Alzheimer’s disease based on EEG source localization and a standardized realistic head model,” IEEE J. Biomed. Health Informat., vol. 17, no. 6, pp. 1039–1045, Nov. 2013.
- [6] Liu F, Wang S, Rosenberger J, Su J, and Liu H, “A sparse dictionary learning framework to discover discriminative source activations in EEG brain mapping,” in Proc. AAAI Conf. Artif. Intell., 2017, vol. 31, no. 1.
- [7] Zhang S et al., “Whole-brain dynamic resting-state functional network analysis in benign epilepsy with centrotemporal spikes,” IEEE J. Biomed. Health Informat., vol. 26, no. 8, pp. 3813–3821, Aug. 2022.
- [8] Liu F, Wang L, Lou Y, Li R-C, and Purdon PL, “Probabilistic structure learning for EEG/MEG source imaging with hierarchical graph priors,” IEEE Trans. Med. Imag., vol. 40, no. 1, pp. 321–334, Jan. 2021.
- [9] Gramfort A, Kowalski M, and Hämäläinen M, “Mixed-norm estimates for the M/EEG inverse problem using accelerated gradient methods,” Phys. Med. Biol., vol. 57, no. 7, p. 1937, 2012.
- [10] Haufe S, Nikulin VV, Ziehe A, Müller K-R, and Nolte G, “Combining sparsity and rotational invariance in EEG/MEG source reconstruction,” NeuroImage, vol. 42, no. 2, pp. 726–738, Aug. 2008.
- [11] Liu F, Rosenberger J, Lou Y, Hosseini R, Su J, and Wang S, “Graph regularized EEG source imaging with in-class consistency and out-class discrimination,” IEEE Trans. Big Data, vol. 3, no. 4, pp. 378–391, Dec. 2017.
- [12] Qin J, Liu F, Wang S, and Rosenberger J, “EEG source imaging based on spatial and temporal graph structures,” in Proc. 7th Int. Conf. Image Process. Theory, Tools Appl. (IPTA), Nov. 2017, pp. 1–6.
- [13] Hämäläinen MS and Ilmoniemi RJ, “Interpreting magnetic fields of the brain: Minimum norm estimates,” Med. Biol. Eng. Comput., vol. 32, no. 1, pp. 35–42, Jan. 1994.
- [14] Pascual-Marqui RD, “Standardized low-resolution brain electromagnetic tomography (sLORETA): Technical details,” Methods Find. Exp. Clin. Pharmacol., vol. 24, pp. 5–12, Jan. 2002.
- [15] Dale AM et al., “Dynamic statistical parametric mapping: Combining fMRI and MEG for high-resolution imaging of cortical activity,” Neuron, vol. 26, no. 1, pp. 55–67, 2000.
- [16] Uutela K, Hämäläinen M, and Somersalo E, “Visualization of magnetoencephalographic data using minimum current estimates,” NeuroImage, vol. 10, no. 2, pp. 173–180, Aug. 1999.
- [17] Bore JC et al., “Sparse EEG source localization using LAPPS: Least absolute l-P (0<p<1) penalized solution,” IEEE Trans. Biomed. Eng., vol. 66, no. 7, pp. 1927–1939, Jul. 2019.
- [18] Babadi B, Obregon-Henao G, Lamus C, Hämäläinen MS, Brown EN, and Purdon PL, “A subspace pursuit-based iterative greedy hierarchical solution to the neuromagnetic inverse problem,” NeuroImage, vol. 87, pp. 427–443, Feb. 2014.
- [19] Yang S, Jiao M, Xiang J, Fotedar N, Sun H, and Liu F, “Rejuvenating classical brain electrophysiology source localization methods with spatial graph Fourier filters for source extents estimation,” Brain Informat., vol. 11, no. 1, p. 8, Dec. 2024.
- [20] Sekihara K and Nagarajan SS, Adaptive Spatial Filters for Electromagnetic Brain Imaging. Cham, Switzerland: Springer, 2008.
- [21] Van Veen BD, Van Drongelen W, Yuchtman M, and Suzuki A, “Localization of brain electrical activity via linearly constrained minimum variance spatial filtering,” IEEE Trans. Biomed. Eng., vol. 44, no. 9, pp. 867–880, Sep. 1997.
- [22] Moiseev A, Gaspar JM, Schneider JA, and Herdman AT, “Application of multi-source minimum variance beamformers for reconstruction of correlated neural activity,” NeuroImage, vol. 58, no. 2, pp. 481–496, Sep. 2011.
- [23] Hong JH, Ahn M, Kim K, and Jun SC, “Localization of coherent sources by simultaneous MEG and EEG beamformer,” Med. Biol. Eng. Comput., vol. 51, no. 10, pp. 1121–1135, Jun. 2013.
- [24] Samadzadehaghdam N, Makkiabadi B, Masjoodi S, Mohammadi M, and Mohagheghian F, “A new linearly constrained minimum variance beamformer for reconstructing EEG sparse sources,” Int. J. Imag. Syst. Technol., vol. 29, no. 4, pp. 686–700, Dec. 2019.
- [25] Herdman AT, Moiseev A, and Ribary U, “Localizing event-related potentials using multi-source minimum variance beamformers: A validation study,” Brain Topography, vol. 31, no. 4, pp. 546–565, Jul. 2018.
- [26] Nunes AS, Moiseev A, Kozhemiako N, Cheung T, Ribary U, and Doesburg SM, “Multiple constrained minimum variance beamformer (MCMV) performance in connectivity analyses,” NeuroImage, vol. 208, Mar. 2020, Art. no. 116386.
- [27] Hecker L, Rupprecht R, Van Elst LT, and Kornmeier J, “ConvDip: A convolutional neural network for better EEG source imaging,” Frontiers Neurosci., vol. 15, Jun. 2021, Art. no. 569918.
- [28] Craley J, Jouny C, Johnson E, Hsu D, Ahmed R, and Venkataraman A, “Automated seizure activity tracking and onset zone localization from scalp EEG using deep neural networks,” PLoS ONE, vol. 17, no. 2, Feb. 2022, Art. no. e0264537.
- [29] Sun R, Sohrabpour A, Worrell GA, and He B, “Deep neural networks constrained by neural mass models improve electrophysiological source imaging of spatiotemporal brain dynamics,” Proc. Nat. Acad. Sci. USA, vol. 119, no. 31, Aug. 2022, Art. no. e2201128119.
- [30] Jiao M et al., “A graph Fourier transform based bidirectional LSTM neural network for EEG source imaging,” Frontiers Neurosci., vol. 447, Jan. 2022.
- [31] Huang G et al., “Electromagnetic source imaging via a data-synthesis-based convolutional encoder–decoder network,” IEEE Trans. Neural Netw. Learn. Syst., vol. 35, no. 5, pp. 6423–6437, Sep. 2022.
- [32] Liang J, Yu ZL, Gu Z, and Li Y, “Electromagnetic source imaging with a combination of sparse Bayesian learning and deep neural network,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 31, pp. 2338–2348, 2023.
- [33] Jiao M et al., “MMDF-ESI: Multi-modal deep fusion of EEG and MEG for brain source imaging,” in Proc. Int. Conf. Brain Informat. Cham, Switzerland: Springer, 2023, pp. 273–285.
- [34] Ou W, Hämäläinen MS, and Golland P, “A distributed spatio-temporal EEG/MEG inverse solver,” NeuroImage, vol. 44, no. 3, pp. 932–946, Feb. 2009.
- [35] Baillet S, Mosher JC, and Leahy RM, “Electromagnetic brain mapping,” IEEE Signal Process. Mag., vol. 18, no. 6, pp. 14–30, Nov. 2001.
- [36] Ding L and He B, “Sparse source imaging in electroencephalography with accurate field modeling,” Hum. Brain Mapping, vol. 29, no. 9, pp. 1053–1067, Sep. 2008.
- [37] Liu F, Wan G, Semenov YR, and Purdon PL, “Extended electrophysiological source imaging with spatial graph filters,” in Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Intervent. Cham, Switzerland: Springer, 2022, pp. 99–109.
- [38] Haufe S et al., “Large-scale EEG/MEG source localization with spatial flexibility,” NeuroImage, vol. 54, no. 2, pp. 851–859, Jan. 2011.
- [39] Becker H et al., “EEG extended source localization: Tensor-based vs. conventional methods,” NeuroImage, vol. 96, pp. 143–157, Aug. 2014.
- [40] Ding L, “Reconstructing cortical current density by exploring sparseness in the transform domain,” Phys. Med. Biol., vol. 54, no. 9, pp. 2683–2697, May 2009.
- [41] Zhu M, Zhang W, Dickens DL, and Ding L, “Reconstructing spatially extended brain sources via enforcing multiple transform sparseness,” NeuroImage, vol. 86, pp. 280–293, Feb. 2014.
- [42] Sohrabpour A, Lu Y, Worrell G, and He B, “Imaging brain source extent from EEG/MEG by means of an iteratively reweighted edge sparsity minimization (IRES) strategy,” NeuroImage, vol. 142, pp. 27–42, Nov. 2016.
- [43] Becker H et al., “SISSY: An efficient and automatic algorithm for the analysis of EEG sources based on structured sparsity,” NeuroImage, vol. 157, pp. 157–172, Aug. 2017.
- [44] Sohrabpour A, Cai Z, Ye S, Brinkmann B, Worrell G, and He B, “Noninvasive electromagnetic source imaging of spatiotemporally distributed epileptogenic brain sources,” Nature Commun., vol. 11, no. 1, p. 1946, 2020.
- [45] Jiang X, Ye S, Sohrabpour A, Bagić A, and He B, “Imaging the extent and location of spatiotemporally distributed epileptiform sources from MEG measurements,” NeuroImage, Clin., vol. 33, Jan. 2022, Art. no. 102903.
- [46] Dassios G, Fokas AS, and Hadjiloizi D, “On the complementarity of electroencephalography and magnetoencephalography,” Inverse Problems, vol. 23, no. 6, pp. 2541–2549, Dec. 2007.
- [47] Malmivuo J, “Comparison of the properties of EEG and MEG in detecting the electric activity of the brain,” Brain Topography, vol. 25, no. 1, pp. 1–19, Jan. 2012.
- [48] da Silva FL, “EEG and MEG: Relevance to neuroscience,” Neuron, vol. 80, no. 5, pp. 1112–1128, 2013.
- [49] Ahlfors SP, Han J, Belliveau JW, and Hämäläinen MS, “Sensitivity of MEG and EEG to source orientation,” Brain Topography, vol. 23, no. 3, pp. 227–232, Sep. 2010.
- [50] Ebersole JS and Ebersole SM, “Combining MEG and EEG source modeling in epilepsy evaluations,” J. Clin. Neurophysiol., vol. 27, no. 6, pp. 360–371, 2010.
- [51] Fuchs M et al., “Improving source reconstructions by combining bioelectric and biomagnetic data,” Electroencephalogr. Clin. Neurophysiol., vol. 107, no. 2, pp. 93–111, Apr. 1998.
- [52] Lin F-H, Belliveau JW, Dale AM, and Hämäläinen MS, “Distributed current estimates using cortical orientation constraints,” Hum. Brain Mapping, vol. 27, no. 1, pp. 1–13, Jan. 2006.
- [53] Ding L and Yuan H, “Simultaneous EEG and MEG source reconstruction in sparse electromagnetic source imaging,” Hum. Brain Mapping, vol. 34, no. 4, pp. 775–795, Apr. 2013.
- [54] Henson R, Mouchlianitis E, and Friston K, “MEG and EEG data fusion: Simultaneous localisation of face-evoked responses,” NeuroImage, vol. 47, p. S167, Jul. 2009.
- [55] Baillet S, Garnero L, Marin G, and Hugonin J-P, “Combined MEG and EEG source imaging by minimization of mutual information,” IEEE Trans. Biomed. Eng., vol. 46, no. 5, pp. 522–534, May 1999.
- [56] Aydin Ü et al., “Combined EEG/MEG can outperform single modality EEG or MEG source reconstruction in presurgical epilepsy diagnosis,” PLoS ONE, vol. 10, no. 3, Mar. 2015, Art. no. e0118753.
- [57] Yu F and Koltun V, “Multi-scale context aggregation by dilated convolutions,” 2015, arXiv:1511.07122.
- [58] Chen L et al., “SCA-CNN: Spatial and channel-wise attention in convolutional networks for image captioning,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), Honolulu, HI, USA, Jul. 2017, pp. 5659–5667.
- [59] Hu J, Shen L, and Sun G, “Squeeze-and-excitation networks,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Sep. 2018, pp. 7132–7141.
- [60] Gramfort A et al., “MNE software for processing MEG and EEG data,” NeuroImage, vol. 86, pp. 446–460, Feb. 2014.
- [61] Fischl B, “FreeSurfer,” NeuroImage, vol. 62, no. 2, pp. 774–781, Aug. 2012.
- [62] Xu F, Liu K, Yu Z, Deng X, and Wang G, “EEG extended source imaging with structured sparsity and L1-norm residual,” Neural Comput. Appl., vol. 33, no. 14, pp. 8513–8524, Jul. 2021.
- [63] Saito T and Rehmsmeier M, “The precision-recall plot is more informative than the ROC plot when evaluating binary classifiers on imbalanced datasets,” PLoS ONE, vol. 10, no. 3, Mar. 2015, Art. no. e0118432.
- [64] Samuelsson JG, Peled N, Mamashli F, Ahveninen J, and Hämäläinen MS, “Spatial fidelity of MEG/EEG source estimates: A general evaluation approach,” NeuroImage, vol. 224, Jan. 2021, Art. no. 117430.
- [65] Wu J, Chen X-Y, Zhang H, Xiong L-D, Lei H, and Deng S-H, “Hyperparameter optimization for machine learning models based on Bayesian optimization,” J. Electron. Sci. Technol., vol. 17, pp. 26–40, Mar. 2019.
