Abstract
Accurate and efficient analysis of Electroencephalogram (EEG) signals is crucial for applications like neurological diagnosis and Brain-Computer Interfaces (BCI). Traditional methods often fall short in capturing the intricate temporal dynamics inherent in EEG data. This paper explores the use of Convolutional Spiking Neural Networks (CSNNs) to enhance EEG signal classification. We apply Discrete Wavelet Transform (DWT) for feature extraction and evaluate CSNN performance on the Physionet EEG dataset, benchmarking it against traditional deep learning and machine learning methods. The findings indicate that CSNNs achieve high accuracy, reaching 98.75% in 10-fold cross-validation, and an impressive F1 score of 98.60%. Notably, this F1-score represents an improvement over previous benchmarks, highlighting the effectiveness of our approach. Along with offering advantages in temporal precision and energy efficiency, CSNNs emerge as a promising solution for next-generation EEG analysis systems.
Keywords: Electroencephalogram (EEG), Spiking neural networks (SNN), Convolutional spiking neural networks (CSNN)
Subject terms: Machine learning, Stress and resilience
Introduction
The detection and monitoring of stress through electroencephalogram (EEG) signals are crucial in today’s health landscape due to the increasing prevalence of stress-related disorders. EEG-based stress detection leverages non-invasive measurements of electrical brain activity, offering promising insights into cognitive and emotional states. EEG signals are characterized by their complexity and non-stationary properties, making them challenging to analyze; accurate interpretation of these signals therefore demands computational models capable of capturing both spatial and temporal patterns.
Early research in EEG-based stress analysis primarily focused on frequency-domain transformations, utilizing wavelet-based decompositions in conjunction with deep learning architectures such as Convolutional Neural Networks (CNNs) to enable effective feature extraction for emotion and stress classification. Building on these foundations, hybrid deep learning models that combine CNNs with Bidirectional Long Short-Term Memory (BLSTM) networks have been introduced. These models often employ Discrete Wavelet Transform (DWT) for signal denoising and decomposition into frequency sub-bands, facilitating the extraction of both spatial and temporal features from EEG signals. Such approaches have demonstrated high classification accuracy, indicating their potential for precise stress detection.
In parallel, alternative machine learning techniques have been explored for EEG signal classification. Recurrent neural networks like Long Short-Term Memory (LSTM) models have shown promise in capturing temporal dependencies within EEG data, while traditional models such as Support Vector Machines (SVMs) have generally exhibited lower performance in comparison to deep learning methods. Despite significant progress, challenges persist in effectively modeling the temporal complexity of EEG signals and addressing the energy constraints critical to mobile and wearable applications.
To address these limitations, Spiking Neural Networks (SNNs) have emerged as a promising solution. By processing information through discrete spikes rather than continuous signals, SNNs align well with the inherent temporal nature of EEG data and offer substantial improvements in energy efficiency. Recent developments suggest that integrating SNNs with CNN-based spatial feature extractors can yield robust and real-time EEG classification performance, particularly suited for deployment in resource-constrained environments.
Building upon these advancements, the present study explores the integration of SNNs and Convolutional SNNs (CSNNs) within a stress detection framework–an area that has received limited attention to date. The proposed models demonstrate strong performance, achieving improvements in recall and F1 score relative to earlier approaches, with a minor trade-off in precision and overall accuracy. This trade-off indicates a more sensitive detection of stress conditions while maintaining robust classification capabilities. The resulting framework contributes to the advancement of EEG-based diagnostic systems by supporting consistent and efficient stress detection in real time, making it particularly valuable for Brain-Computer Interface (BCI) applications and portable clinical tools.
Related work
Stress detection using EEG signals has gained notable attention due to its ability to monitor cognitive and emotional states. Early studies primarily relied on traditional machine learning and frequency-domain analysis techniques. García-Martínez et al.1 and Priya et al.2 highlighted the effectiveness of wavelet transformations for feature extraction, with CNNs proving to be highly accurate in classifying stress and emotional states. These works established the importance of combining signal decomposition with deep learning for robust EEG-based stress detection. Incorporating temporal dynamics into models has further enhanced performance. Malviya and Mal3 proposed a hybrid model integrating CNNs and a BiLSTM network augmented with DWT, achieving 99.20% accuracy in stress classification. Similarly, Xing et al.4 developed an optimized LSTM-based model that surpassed 90% accuracy in emotion recognition. Recurrent architectures, such as LSTMs and BiLSTMs, effectively capture sequential dependencies in EEG signals but are computationally expensive, limiting their real-time applicability. Alternative machine learning approaches have also been explored; for example, Sharma and Chopra5 employed SVMs for EEG classification. However, while SVMs demonstrated moderate accuracy, they fell short of deep learning models in capturing intricate EEG patterns. Zeynali et al.6 highlighted that combining ensemble methods, such as Random Forests, with wavelet-based features offered computational simplicity but lacked the generalization capabilities of advanced neural network architectures.
Recent advancements in deep learning architectures have explored combining spatial and temporal feature extraction. Sun et al.7 employed attention mechanisms within CNN-LSTM models to dynamically focus on relevant spatial-temporal features, achieving state-of-the-art performance in EEG-based affective computing. Hou et al.8 introduced graph-based neural networks to model inter-channel dependencies, showcasing improved robustness in classifying non-stationary EEG signals. Additionally, Gour et al.9 proposed transformer-based models for EEG analysis, leveraging multi-head attention to capture long-range dependencies, which significantly improved classification accuracy in stress detection tasks.
Spiking Neural Networks (SNNs) are emerging as a novel paradigm in EEG signal processing, offering energy-efficient solutions while preserving high accuracy and enabling real-time applications. Jebelli et al.10 demonstrated the suitability of SNNs for EEG classification by leveraging their event-driven nature, which aligns with the temporal characteristics of EEG signals. Luján et al.11 combined CNNs with SNNs for spatial and temporal feature extraction, achieving high accuracy in real-time EEG analysis. Li et al.12 extended this approach by incorporating neuromorphic hardware, reducing latency and power consumption in wearable devices. Alqarni et al.13 explored adaptive spiking neural architectures, highlighting their potential in real-world EEG-based stress monitoring scenarios.
Despite these advancements, challenges remain in creating models that balance accuracy, computational efficiency, and real-time applicability. While SNNs provide energy efficiency, their integration with traditional deep learning frameworks for stress detection is still in its nascent stages. Few studies have explored CSNNs to address this gap.
This work builds upon the existing literature by introducing a CSNN-based approach for EEG stress detection, combining convolutional layers for spatial feature extraction and SNNs for temporal pattern recognition. By leveraging frequency band decomposition and advanced feature engineering techniques, our research focuses on the pressing demand for methods that are both efficient and precise in processing and analyzing data, particularly for applications like stress detection.
This paper is organized as follows. The proposed methodology section describes the data collection, preprocessing methods, and model architecture used in this study. The results and discussion section details the evaluation setup, presents the results, analyzes the performance, and provides a discussion of the findings. Lastly, the conclusion section summarizes the key contributions and suggests potential avenues for further research.
Proposed methodology
The proposed model presents a combined approach for EEG signal classification, integrating traditional methods and advanced neural architectures. European Data Format (EDF) files containing EEG signals were processed using DWT to decompose the signals into sub-bands: Alpha, Beta, Delta, Theta, and Gamma. Statistical features were derived from these sub-bands and subsequently used to train the models.
We evaluated traditional models (e.g., Naive Bayes, K-Nearest Neighbors) and deep learning architectures (e.g., LSTM, BLSTM, Conv-BLSTM) using 10-fold cross-validation. Additionally, CSNNs were explored to better capture spatial and temporal dynamics. This approach aims to improve EEG diagnostics and BCI technologies, providing robust, efficient, and biologically plausible solutions. Figure 1 shows the overall framework.
Fig. 1.
CSNN Model.
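As a minimal illustration of the 10-fold cross-validation protocol used for the baseline models (not the authors' exact code), the sketch below evaluates two of the traditional classifiers with scikit-learn; the feature matrix `X` and binary stress labels `y` are assumed to come from the DWT-based feature extraction described in the following subsections.

```python
# Hypothetical sketch: 10-fold cross-validation of baseline models on DWT features.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def evaluate_baselines(X: np.ndarray, y: np.ndarray) -> dict:
    """Return mean 10-fold accuracy for each baseline classifier."""
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
    models = {
        "NaiveBayes": GaussianNB(),
        "KNN": KNeighborsClassifier(n_neighbors=5),
    }
    scores = {}
    for name, model in models.items():
        # Standardize features inside each fold to avoid leakage.
        pipe = make_pipeline(StandardScaler(), model)
        scores[name] = cross_val_score(pipe, X, y, cv=cv, scoring="accuracy").mean()
    return scores
```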
Data acquisition
As shown in3, EEG signals were extracted from EDF files14,15. Each file consists of multiple channels representing the electrical activity of different brain regions, enabling a comprehensive analysis.
These signals were meticulously parsed and organized to preserve the integrity of the raw data for subsequent preprocessing.
To ensure uniformity in the data, all extracted features underwent standardization. This method normalized each feature by ensuring a mean of zero and a variance of one, thereby eliminating biases and ensuring balanced contributions during model training.
Feature extraction
Through the application of Discrete Wavelet Transform (DWT), the EEG signals were separated into five frequency ranges: Alpha, Beta, Delta, Theta, and Gamma. This decomposition provided a detailed representation of brain rhythms, enabling time-frequency localization critical for analyzing non-stationary EEG signals as mentioned in3 and16 and represented in Fig. 2.
Fig. 2.
Decomposition of wavelet coefficient sequences using filters corresponding to EEG frequency ranges.
Statistical feature extraction was performed on each frequency sub-band, from which features such as mean, variance, skewness, and kurtosis were derived. These features formed the core dataset for model training, providing a compact yet informative representation of the EEG signals’ characteristics.
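A compact sketch of this feature-extraction step is shown below. It is an illustration under stated assumptions rather than the authors' exact pipeline: it uses the PyWavelets `wavedec` routine with a db4 mother wavelet and a five-level decomposition (the paper does not state the wavelet family), and SciPy for the statistical moments.

```python
# Hypothetical sketch: DWT sub-band decomposition and statistical features per channel.
import numpy as np
import pywt
from scipy.stats import skew, kurtosis

def dwt_band_features(channel: np.ndarray, wavelet: str = "db4", level: int = 5) -> np.ndarray:
    """Decompose one EEG channel into approximation/detail coefficient bands
    (which, at a suitable sampling rate, loosely correspond to the Delta, Theta,
    Alpha, Beta, and Gamma rhythms) and compute mean, variance, skewness,
    and kurtosis for each band."""
    coeffs = pywt.wavedec(channel, wavelet, level=level)  # [cA5, cD5, cD4, cD3, cD2, cD1]
    feats = []
    for band in coeffs:
        feats.extend([band.mean(), band.var(), skew(band), kurtosis(band)])
    return np.asarray(feats)

def extract_features(eeg: np.ndarray) -> np.ndarray:
    """eeg: array of shape (n_channels, n_samples); returns a flat feature vector."""
    return np.concatenate([dwt_band_features(ch) for ch in eeg])
```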
Classification technique
Spiking neural network (SNNs)
SNNs introduce a new approach to neural computation, offering biological plausibility and temporal dynamics in artificial intelligence. In contrast to conventional Artificial Neural Networks (ANNs) that process information through continuous signals, SNNs utilize discrete spikes to encode data, mimicking the brain’s neuronal communication process17. This spike-based representation enables SNNs to process temporal data with remarkable efficiency and precision, making them particularly suitable for tasks like EEG signal analysis.
SNNs are built upon neuron models such as the Leaky Integrate-and-Fire (LIF) model, which characterizes neuronal activity using a first-order differential equation. The membrane potential $V(t)$ of a neuron changes over time as follows:

$$\tau \frac{dV(t)}{dt} = -V(t) + I(t) \qquad (1)$$

Here, $\tau$ denotes the membrane time constant, $V(t)$ represents the membrane potential at time $t$, and $I(t)$ is the input current18. When the membrane potential exceeds a predefined threshold $V_{th}$, the neuron generates a spike, after which the membrane potential is reset to its baseline value $V_{reset}$. This spiking behavior can be described mathematically as follows:

$$S(t) = \begin{cases} 1, & V(t) \geq V_{th} \\ 0, & \text{otherwise} \end{cases}, \qquad V(t) \leftarrow V_{reset} \ \text{when } S(t) = 1 \qquad (2)$$
SNNs integrate learning mechanisms inspired by biology, such as Spike-Timing-Dependent Plasticity (STDP). STDP modifies synaptic weights depending on the timing of pre-synaptic and post-synaptic spikes, allowing the network to capture temporal patterns. The synaptic weight change $\Delta w$ is mathematically defined as:

$$\Delta w = \begin{cases} A_{+}\, e^{-\Delta t / \tau_{+}}, & \Delta t > 0 \\ -A_{-}\, e^{\Delta t / \tau_{-}}, & \Delta t < 0 \end{cases} \qquad (3)$$

In this context, $\Delta t$ denotes the time interval between pre-synaptic and post-synaptic spikes, $A_{+}$ and $A_{-}$ represent the learning rates for synaptic strengthening (potentiation) and weakening (depression), respectively, and $\tau_{+}$ and $\tau_{-}$ are their corresponding time constants19.
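As a hedged illustration of Eq. (3), and not code taken from the paper, the function below evaluates the exponential STDP window for a given spike-time difference; the parameter values are arbitrary placeholders.

```python
# Hypothetical sketch: exponential STDP weight update of Eq. (3).
import math

def stdp_delta_w(dt: float, a_plus: float = 0.01, a_minus: float = 0.012,
                 tau_plus: float = 20.0, tau_minus: float = 20.0) -> float:
    """Return the synaptic weight change for a pre/post spike-time difference dt (ms).
    dt > 0: the post-synaptic neuron fires after the pre-synaptic one (potentiation);
    dt < 0: the reverse ordering (depression)."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)
    if dt < 0:
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0
```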
The event-driven nature of SNNs offers significant advantages in energy efficiency, particularly for edge devices. Unlike conventional neural networks, SNNs only compute when spikes occur, reducing the computational load18. This property is especially beneficial for processing large-scale temporal datasets like EEG signals. By converting EEG signals into spike trains, SNNs can leverage their temporal dynamics to extract meaningful patterns, such as neural oscillations in the Alpha, Beta, Delta, Theta, and Gamma bands.
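The discretized LIF dynamics of Eqs. (1)–(2) can be sketched as follows; this is a simple Euler-style simulation for illustration only, with placeholder constants rather than values taken from the paper.

```python
# Hypothetical sketch: Euler discretization of the LIF neuron in Eqs. (1)-(2).
import numpy as np

def lif_spike_train(current: np.ndarray, tau: float = 10.0, dt: float = 1.0,
                    v_th: float = 1.0, v_reset: float = 0.0) -> np.ndarray:
    """Convert an input current trace (e.g., a scaled EEG feature over time)
    into a binary spike train using leaky integrate-and-fire dynamics."""
    v = v_reset
    spikes = np.zeros_like(current)
    for t, i_t in enumerate(current):
        v += (dt / tau) * (-v + i_t)     # leaky integration of the input current
        if v >= v_th:                    # threshold crossing -> spike
            spikes[t] = 1.0
            v = v_reset                  # reset membrane potential
    return spikes
```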
Algorithm 1.
EEG Stress Detection using Spiking Neural Network
Proposed hybrid CSNN model
This study proposes a compact CSNN architecture designed for real-time, energy-efficient EEG classification on edge devices. The model combines a 1D convolutional layer (16 filters, kernel size 5, stride 2) with spiking neural layers using LIF neurons and fast sigmoid surrogate gradients. This setup extracts spatial features and models temporal dynamics over 105 time steps, with outputs averaged for binary classification using a sigmoid activation. Implemented with PyTorch and snnTorch, the model uses 64 hidden units in the fully connected layer and is trained for 600 epochs using the Adam optimizer and Binary Cross-Entropy with Logits Loss. Evaluation is conducted using 10-fold stratified cross-validation to ensure robustness and generalizability.
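A minimal snnTorch sketch of such an architecture is given below. It follows the stated hyperparameters (Conv1d with 16 filters, kernel 5, stride 2; LIF neurons with a fast sigmoid surrogate; 64 hidden units; outputs averaged over 105 time steps), but details such as the input feature length, the membrane decay `beta`, and the repetition of the static feature vector across time steps are assumptions, not specifications from the paper.

```python
# Hypothetical sketch of the described CSNN, built with PyTorch + snnTorch.
import torch
import torch.nn as nn
import snntorch as snn
from snntorch import surrogate

class CSNN(nn.Module):
    def __init__(self, in_features: int = 100, num_steps: int = 105):
        super().__init__()
        self.num_steps = num_steps
        spike_grad = surrogate.fast_sigmoid()              # fast sigmoid surrogate gradient
        self.conv = nn.Conv1d(1, 16, kernel_size=5, stride=2)
        self.lif1 = snn.Leaky(beta=0.9, spike_grad=spike_grad)
        conv_out = 16 * ((in_features - 5) // 2 + 1)       # flattened conv output size
        self.fc1 = nn.Linear(conv_out, 64)
        self.lif2 = snn.Leaky(beta=0.9, spike_grad=spike_grad)
        self.fc_out = nn.Linear(64, 1)                     # single logit for binary output

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_features) static DWT feature vector, repeated over time steps.
        mem1 = self.lif1.init_leaky()
        mem2 = self.lif2.init_leaky()
        logits = []
        cur_in = self.conv(x.unsqueeze(1)).flatten(1)      # spatial feature extraction
        for _ in range(self.num_steps):
            spk1, mem1 = self.lif1(cur_in, mem1)
            spk2, mem2 = self.lif2(self.fc1(spk1), mem2)
            logits.append(self.fc_out(spk2))
        return torch.stack(logits).mean(dim=0).squeeze(-1)  # average over time steps

# Training would use torch.optim.Adam and nn.BCEWithLogitsLoss(), as stated in the text.
```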
The hybridization of CNN and SNN integrates the performance benefits of both architectures: the CNN contributes strong spatial feature extraction, while the SNN provides superior temporal information processing. By leveraging these complementary strengths, the combined architecture can process both spatial patterns and their temporal dynamics. The visualization in Fig. 3 demonstrates that the CNN+SNN integration effectively combines spatial and temporal processing capabilities, resulting in a more robust computational approach than either network used independently.
Fig. 3.
Visualization of performance of CNN hybrid SNN.
In Fig. 3, CNN strengths are shown at the top left and SNN strengths at the top right. The middle level illustrates temporal processing through neuron spike patterns, and the bottom level shows structured activation patterns across time steps together with quantitative comparisons across neurons.
CSNNs begin with encoding raw input data, like EEG signals, into spike trains. Common encoding strategies include Phase Encoding, in which signal phases are mapped into precise spike timings, and Latency Encoding, which uses signal amplitude to delay spike generation20. Convolutional layers apply spatial filters to identify patterns across input dimensions. For EEG, this involves detecting spatial correlations between electrode channels:
$$y_{i,j} = f\!\left(\sum_{m}\sum_{n} w_{m,n}\, x_{i+m,\, j+n}\right) \qquad (4)$$

where $y_{i,j}$ is the output at position $(i, j)$, $w_{m,n}$ is the filter weight, and $f(\cdot)$ is the activation function modeled for spiking dynamics21.
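For the encoding step mentioned above, a hedged example using snnTorch's spike-generation utilities is shown below; the paper does not specify which encoder it uses, so latency encoding is shown purely as an illustration, with placeholder tensor shapes.

```python
# Hypothetical sketch: latency encoding of normalized EEG features into spike trains.
import torch
from snntorch import spikegen

features = torch.rand(20, 100)                     # (batch, features), assumed scaled to [0, 1]
spike_trains = spikegen.latency(features,
                                num_steps=105,     # matches the 105 time steps used by the CSNN
                                normalize=True,    # spread spike times over the full window
                                linear=True)       # linear (rather than logarithmic) mapping
print(spike_trains.shape)                          # (num_steps, batch, features)
```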
Neurons leverage event-driven computations to recognize temporal relationships between features. One widely used model is the Adaptive Leaky Integrate-and-Fire (ALIF) neuron, which dynamically modifies its firing threshold in response to recent activity levels. When the membrane potential surpasses the threshold $V_{th}$, the neuron generates a spike22.
Temporal and spatial pooling layers help minimize the dimensionality of spike maps, retaining crucial information and lowering computational demands23.
Unlike the approach in3, which incorporates BiLSTM layers for temporal modeling and adds computational overhead due to recurrent sequential processing, our model eliminates such complexity by using spiking neurons to capture temporal dynamics in a more efficient, event-driven manner. This significantly reduces latency and energy consumption, making it highly suitable for deployment on low-power, local hardware. The proposed CSNN achieves similar or improved classification performance compared to the CNN-BiLSTM model while being faster, lighter, and more optimized for real-time stress detection using EEG signals. This work represents one of the first successful applications of CSNNs for EEG-based stress classification.
This research’s contribution lies in applying a reduced-complexity, biologically inspired CSNN for EEG-based stress detection. By combining temporal spiking dynamics with a simplified architecture, we demonstrate strong performance and real-time feasibility, making it one of the few works to leverage CSNNs in this domain.
Results and discussion
In this section, we discuss the results of our proposed method for the classification of EEG signals, which uses Discrete Wavelet Transform (DWT) for the extraction of features and CSNN for the classification. The evaluation focuses on essential performance metrics such as accuracy, precision, recall, and computational efficiency. These metrics demonstrate the effectiveness of CSNNs in capturing the spatiotemporal characteristics present in EEG signals.
Performance analysis
The proposed CSNN model is evaluated using a rigorous set of experiments. The model is trained on the preprocessed inputs, optimized with a binary cross-entropy loss function, and fine-tuned using the Adam optimizer. Training is conducted for 600 epochs with a batch size of 20 to ensure consistency and convergence.
The CSNN’s performance is evaluated using a diverse range of metrics, including accuracy, sensitivity, precision (also referred to as Positive Predictive Value, PPV), specificity, Negative Predictive Value (NPV), F1-score, and computational efficiency indicators such as energy consumption and latency.
To evaluate the performance of a classification model, various parameters derived from the confusion matrix are utilized. Recall, also known as sensitivity, measures the true positive rate (TPR) and is calculated as the ratio of true positives (TP) to the sum of true positives and false negatives (TPR = TP / (TP + FN) * 100). Specificity (SPC) quantifies the true negative rate and is determined by dividing true negatives (TN) by the sum of true negatives and false positives (SPC = TN / (FP + TN) * 100). Precision, or positive predictive value (PPV), represents the proportion of correctly predicted positive instances and is given by TP divided by the total predicted positives (PPV = TP / (TP + FP) * 100). The negative predictive value (NPV) reflects the ratio of true negatives to the total predicted negatives (NPV = TN / (TN + FN) * 100). False positive rate (FPR) and false negative rate (FNR) measure the rates of incorrect predictions; FPR is computed as FP / (FP + TN) * 100, while FNR is calculated as FN / (FN + TP) * 100. The false discovery rate (FDR) indicates the proportion of false positives among all positive predictions (FDR = FP / (FP + TP) * 100). Overall accuracy (ACC) measures the proportion of correct predictions and is given by (TP + TN) / (P + N) * 100. Finally, the F1 score provides a harmonic mean of precision and recall, calculated as (2TP) / (2TP + FP + FN) * 100.
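The sketch below computes these confusion-matrix metrics; it is provided only as an illustrative helper, not as part of the reported evaluation code.

```python
# Hypothetical sketch: confusion-matrix metrics used in the evaluation.
def confusion_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Return percentage metrics derived from confusion-matrix counts."""
    return {
        "sensitivity (TPR)": 100 * tp / (tp + fn),
        "specificity (SPC)": 100 * tn / (tn + fp),
        "precision (PPV)":   100 * tp / (tp + fp),
        "NPV":               100 * tn / (tn + fn),
        "FPR":               100 * fp / (fp + tn),
        "FNR":               100 * fn / (fn + tp),
        "FDR":               100 * fp / (fp + tp),
        "accuracy (ACC)":    100 * (tp + tn) / (tp + tn + fp + fn),
        "F1":                100 * (2 * tp) / (2 * tp + fp + fn),
    }

# Example with the counts reported in Fig. 4 (TP=35, TN=36, FP=1, FN=0),
# which reproduces the 98.61% accuracy and 98.60% F1-score quoted later.
print(confusion_metrics(tp=35, tn=36, fp=1, fn=0))
```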
As illustrated in the confusion matrix (Fig. 4), the CSNN demonstrated excellent classification performance, correctly detecting 35 true positives and 36 true negatives while reporting just one false positive and zero false negatives. This demonstrates the model’s robustness and its ability to minimize misclassification errors.
Fig. 4.
CSNN confusion matrix.
Performance analysis using ROC curve
The performance of the proposed CSNN is further analyzed using the Receiver Operating Characteristic (ROC) curve, depicted in Fig. 5. The ROC curve illustrates the relationship between the True Positive Rate (TPR) and the False Positive Rate (FPR) across different threshold values, offering an insightful representation of the model’s capacity to distinguish between classes.
Fig. 5.
CSNN ROC curve.
The Area Under the ROC Curve (AUC) is calculated as 0.99, indicating an excellent classification capability. A near-perfect AUC value signifies the CSNN’s high sensitivity and specificity across different decision thresholds, underscoring its robustness in distinguishing between positive and negative classes.
The sharp initial ascent of the ROC curve highlights the model’s capability to attain a high True Positive Rate while maintaining a low False Positive Rate.
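A brief, assumption-based example of how such a curve can be produced with scikit-learn is given below; `y_true` and `y_score` stand in for the held-out labels and the model's sigmoid outputs, and the values shown are placeholders.

```python
# Hypothetical sketch: ROC curve and AUC from held-out predictions.
import numpy as np
from sklearn.metrics import roc_curve, auc

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                      # placeholder labels
y_score = np.array([0.1, 0.4, 0.8, 0.9, 0.7, 0.2, 0.95, 0.35])   # placeholder sigmoid outputs

fpr, tpr, thresholds = roc_curve(y_true, y_score)
print("AUC =", auc(fpr, tpr))   # the paper reports AUC = 0.99 for the CSNN
```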
Convergence curve analysis
The training dynamics of the CNN, SNN and the proposed CSNN are illustrated through the loss and accuracy convergence curves presented in Figs. 6, 7 and 8 respectively. These curves provide insights into the model’s learning behavior over 600 epochs.
Fig. 6.
CNN convergence curve.
Fig. 7.
SNN convergence curve.
Fig. 8.
CSNN convergence curve.
As shown in Fig. 8, the loss convergence curve (left panel) shows a steady and consistent decrease in the training loss, indicating effective optimization and proper gradient updates during training. The loss stabilizes after approximately 400 epochs, suggesting that the model has converged without overfitting. The accuracy convergence curve (right panel) demonstrates a rapid increase in training accuracy during the initial epochs, reaching near-perfect accuracy by around 100 epochs. The model maintains high accuracy through the subsequent training iterations, highlighting its stability and ability to generalize effectively.
Analysis of proposed and other models
The comparison between the proposed models and existing works, as shown in Table 1, highlights significant advancements. In one study, EEG data consisting of 22 channels from a total of 36 participants were fed into an LSTM model for stress detection; the model achieves a reasonable accuracy of 91.67% with a specificity of 94.22%. However, its sensitivity is lower at 83.33%, indicating challenges in detecting stress cases accurately24. Similarly, the Naive Bayes (NB) classifier, evaluated on 128 EEG channels from 22 subjects, demonstrates high sensitivity (98.30%) and an accuracy of 94.60%, but the absence of additional metrics such as F1-score and precision limits its comprehensive assessment25. The K-Nearest Neighbors (KNN) approach, tested with 19 EEG channels from 36 subjects, performs robustly with 96.60% accuracy and 97.00% sensitivity, making it a strong traditional baseline26.
Table 1.
Comparison of Proposed Model with the state-of-the-art existing works.
| Dataset used (EEG) | Models | Accuracy (%) | Sensitivity (%) | Specificity (%) | F1-score (%) | PPV (%) | NPV (%) |
|---|---|---|---|---|---|---|---|
| 22 EEG channels from 36 subjects24 | LSTM | 91.67 | 83.33 | 94.22 | 81.88 | 80.56 | 95.00 |
| 128 EEG channels from 22 subjects25 | NB | 94.60 | 98.30 | 93.30 | N/A | N/A | N/A |
| 19 EEG channels from 36 subjects26 | KNN | 96.60 | 97.00 | 96.80 | 96.90 | 96.80 | 97.00 |
| 28 EEG channels from 15 subjects27 | CNN-LSTM | 94.83 | 93.10 | 96.55 | 94.75 | 96.43 | 93.33 |
| 14 EEG channels from 48 subjects28 | BLSTM | 86.33 | 86.88 | 70.59 | N/A | 98.91 | N/A |
| 19 EEG channels from 36 subjects3 | CNN-BILSTM | 99.20 | 98.50 | 99.40 | 98.40 | 98.40 | 99.40 |
| 19 EEG channels from 36 subjects (Proposed) | CSNN | 98.61 | 100.00 | 97.30 | 98.60 | 97.22 | 100.00 |
N/A: Not Applicable or Not Available
Deep learning methods such as CNN-LSTM and BLSTM offer further improvements in analyzing spatiotemporal EEG features. CNN-LSTM, applied to 28 EEG channels from 15 subjects, achieves an accuracy of 94.83% and a high specificity of 96.55%, reflecting its ability to handle complex temporal patterns effectively27. On the other hand, BLSTM, which processes 14 EEG channels from 48 subjects, achieves lower overall accuracy at 86.33% but excels in precision with a value of 98.91%, indicating its reliability in reducing false positives28. The CNN-BLSTM model, an extended hybrid approach applied to 19 EEG channels from 36 subjects, achieves remarkable results with 99.20% accuracy, 98.50% sensitivity, and 99.40% specificity, showcasing its effectiveness in both feature extraction and classification3. Notably, the CNN-BLSTM exhibits an inference time of 0.0885 seconds per sample with a model size of 666.08 KB, demonstrating moderate computational efficiency.
While the models mentioned above show promising results, none report overall F1-scores, which are essential for a more balanced evaluation of performance. The proposed models, as demonstrated in Table 2, show even greater potential. The SNN model, evaluated on 19 EEG channels, achieves 90.28% accuracy and 91.43% specificity, highlighting its strength in feature generalization. Meanwhile, the CSNN model achieves the highest performance metrics across all methods, with 98.61% accuracy, 100% sensitivity, and 97.30% specificity. Furthermore, the CSNN’s F1-score of 98.60% and 100% NPV underscore its reliability in stress detection tasks. The CSNN architecture also offers significantly improved computational efficiency, with an inference time of 0.0032 seconds per sample (2.7× faster than CNN-BLSTM) and a compact model size of 838.53 KB, while achieving similar or better accuracy. When compared to a CNN-Transformer baseline (0.0087 seconds inference time, 1686.61 KB model size), the CSNN provides 3.5× better parameter efficiency and 2.7× faster inference, alongside competitive classification performance.
Table 2.
Performance of CNN, SNN and hybrid CSNN on similar EEG Dataset.
| Dataset used (EEG) | Models | Accuracy (%) | Sensitivity (%) | Specificity (%) | F1-score (%) | PPV (%) | NPV (%) |
|---|---|---|---|---|---|---|---|
| 19 EEG channels from 36 subjects | CNN | 90.28 | 91.67 | 90.40 | 98.40 | 90.41 | 90.40 |
| 19 EEG channels from 36 subjects | SNN | 90.28 | 89.19 | 91.43 | 90.41 | 91.67 | 88.89 |
| 19 EEG channels from 36 subjects (proposed) | CSNN | 98.61 | 100.00 | 97.30 | 98.60 | 97.22 | 100.00 |
These results, as summarized in Table 1, position the proposed CSNN model as a similarly accurate yet more efficient solution, outperforming traditional and hybrid approaches in key metrics such as F1-score and inference time; few existing models offer such a balanced trade-off.
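As a rough illustration of how per-sample inference time and model size figures such as those above can be obtained (the paper does not describe its profiling procedure, so this is an assumption), a PyTorch-based sketch follows:

```python
# Hypothetical sketch: measuring per-sample inference time and parameter size in PyTorch.
import time
import torch

def profile_model(model: torch.nn.Module, sample: torch.Tensor, repeats: int = 100):
    """Return average forward-pass latency (seconds) and parameter storage (KB)."""
    model.eval()
    with torch.no_grad():
        start = time.perf_counter()
        for _ in range(repeats):
            model(sample)
        latency = (time.perf_counter() - start) / repeats
    size_kb = sum(p.numel() * p.element_size() for p in model.parameters()) / 1024
    return latency, size_kb

# Example usage with the CSNN sketch shown earlier (single-sample batch):
# latency, size_kb = profile_model(CSNN(in_features=100), torch.rand(1, 100))
```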
Figure 10 shows how traditional models such as LSTM, NB, and KNN offer reliable performance, with KNN standing out for its high accuracy and sensitivity. However, their capabilities are limited compared to deep learning-based approaches. Hybrid models like CNN-LSTM and BLSTM exhibit enhanced performance, leveraging their ability to extract spatiotemporal features effectively. Among these, CNN-BLSTM achieves near-optimal performance, showcasing its robustness in handling EEG data. The confusion matrices and ROC curves of all models are given in Figs. 11 and 9, respectively.
Fig. 10.
Comparison between existing approaches and proposed model.
Fig. 11.
Confusion matrix of different models.
Fig. 9.
ROC curves of different models.
Discussion
A key strength of the CSNN is its ability to balance high sensitivity (100%) with strong specificity (97.30%), minimizing both false negatives and false positives. The slight trade-off between precision (97.22%) and recall (100%) suggests that the model prioritizes detecting true stress cases, which is critical for clinical and real-time monitoring applications. Additionally, the energy-efficient nature of SNNs–due to their sparse, spike-based computation–makes them ideal for deployment in wearable and edge devices, addressing a major limitation of power-intensive deep learning models.
However, certain limitations must be acknowledged. The study was conducted on a controlled dataset (Physionet), and further validation on diverse populations–including varying age groups and individuals with comorbid neurological conditions–is necessary to ensure generalizability. Another challenge lies in the practical implementation of SNNs, as their full potential depends on neuromorphic hardware, which is still an emerging technology. Future research should explore adaptive neuron models (e.g., ALIF) and multi-modal data fusion (e.g., combining EEG with heart rate variability) to enhance robustness. Additionally, improving the interpretability of CSNN decisions through spike-train visualization could increase clinical adoption.
Conclusion
The study demonstrates significant advancements in EEG-based stress detection through comprehensive comparison of traditional and neural network approaches. While conventional methods like LSTM, NB, and KNN provide reliable baseline performance (with KNN achieving 96.60% accuracy and 97.00% sensitivity), they show limitations in handling complex EEG patterns. Deep learning hybrids, particularly CNN-BLSTM, achieve near-optimal performance (99.20% accuracy, 98.50% sensitivity), establishing strong benchmarks for spatiotemporal feature extraction.
The proposed CSNN model matches or surpasses existing techniques across key metrics, achieving exceptional performance including 98.61% accuracy, 100% sensitivity, and a 98.60% F1-score. The model’s balanced performance across all evaluation metrics demonstrates its reliability for stress detection tasks. These results validate the effectiveness of combining convolutional and spiking neural architectures for EEG analysis, offering both superior classification capability and computational efficiency.
This work establishes new standards for EEG-based stress detection while providing a robust framework for future research in mental health monitoring. The demonstrated performance advantages of CSNNs suggest promising directions for developing practical, efficient neurodiagnostic systems.
Acknowledgements
The authors sincerely thank Manipal University Jaipur for their generous funding, which made this project possible. Their support has been invaluable in advancing our research.
Author contributions
All authors have contributed equally to this work.
Funding
Open access funding provided by Manipal University Jaipur. The research is funded by Manipal University Jaipur.
Data availability
The dataset link is available in the references.
Declarations
Competing interests
All authors have no conflict of interest.
Footnotes
Publisher’s note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References
- 1. García-Martínez, B., Fernández-Caballero, A., Alcaraz, R. & Martínez-Rodrigo, A. Assessment of dispersion patterns for negative stress detection from electroencephalographic signals. Pattern Recogn. 119, 108094 (2021).
- 2. Priya, A., Garg, S. & Tigga, N. P. Predicting anxiety, depression and stress in modern life using machine learning algorithms. Proced. Comput. Sci. 167, 1258–1267 (2020).
- 3. Malviya, L. & Mal, S. A novel technique for stress detection from EEG signal using hybrid deep learning model. Neural Comput. Appl. 34(22), 19819–19830 (2022).
- 4. Xing, X. et al. SAE+LSTM: A new framework for emotion recognition from multi-channel EEG. Front. Neurorobot. 13, 37 (2019).
- 5. Sharma, R. & Chopra, K. EEG signal analysis and detection of stress using classification techniques. J. Inf. Optim. Sci. 41(1), 229–238 (2020).
- 6. Zeynali, M., Seyedarabi, H. & Afrouzian, R. Classification of EEG signals using transformer based deep learning and ensemble models. Biomed. Signal Process. Control 86, 105130 (2023).
- 7. Sun, J., Xia, G., Jiang, Z., Dai, Y. & Zhang, J. Attention-based CNN-LSTM for enhanced perception of bone milling states in surgical robots. IEEE Trans. Instrum. Meas. 73, 1–9 (2024).
- 8. Hou, Y. et al. GCNs-Net: A graph convolutional neural network approach for decoding time-resolved EEG motor imagery signals. IEEE Trans. Neural Netw. Learn. Syst. 35(6), 7312–7323 (2022).
- 9. Gour, N. et al. Transformers for autonomous recognition of psychiatric dysfunction via raw and imbalanced EEG signals. Brain Inform. 10(1), 25 (2023).
- 10. Jebelli, H., Khalili, M. M., Hwang, S. & Lee, S. A supervised learning-based construction workers’ stress recognition using a wearable electroencephalography (EEG) device. Constr. Res. Congr. 2018, 40–50 (2018).
- 11. Luján, M. Á., Jimeno, M. V., Mateo Sotos, J., Ricarte, J. J. & Borja, A. L. A survey on EEG signal processing techniques and machine learning: Applications to the neurofeedback of autobiographical memory deficits in schizophrenia. Electronics 10(23), 3037 (2021).
- 12. Li, Z., Zhang, G., Wang, L., Wei, J. & Dang, J. Emotion recognition using spatial-temporal EEG features through convolutional graph attention network. J. Neural Eng. 20(1), 016046 (2023).
- 13. Alqarni, M. A., Masood, H., Qureshi, A. J., Alvi, M., Arbab, H., Khan, H. A., Kamboh, A. M., Shafait, S. & Shafait, F. NeuroAssist: Open-source automatic event detection in scalp EEG. IEEE Access (2024).
- 14. Zyma, I. et al. Electroencephalograms during mental arithmetic task performance. Data (2019). 10.3390/data4010014
- 15. Zyma, I., Tukaev, S., Seleznov, I., Kiyono, K., Popov, A., Chernykh, M. & Shpenkov, O. EEG during mental arithmetic tasks v1.0.0. PhysioNet. https://physionet.org/content/eegmat/1.0.0/ (2018). 10.13026/C2JQ1P [Accessed 13-01-2025]
- 16. Faust, O., Acharya, U. R., Adeli, H. & Adeli, A. Wavelet-based EEG processing for computer-aided seizure detection and epilepsy diagnosis. Seizure 26, 56–64 (2015).
- 17. Gerstner, W., Kistler, W. M., Naud, R. & Paninski, L. Neuronal Dynamics: From Single Neurons to Networks and Models of Cognition (Cambridge University Press, Cambridge, 2014).
- 18. Ponulak, F. & Kasinski, A. Introduction to spiking neural networks: Information processing, learning and applications. Acta Neurobiol. Exp. 71(4), 409–433 (2011).
- 19. Bohte, S. M., Kok, J. N. & La Poutre, H. Error-backpropagation in temporally encoded networks of spiking neurons. Neurocomputing 48(1–4), 17–37 (2002).
- 20. Panda, P. & Roy, K. Unsupervised regenerative learning of hierarchical features in spiking deep networks for object recognition. In 2016 International Joint Conference on Neural Networks (IJCNN), 299–306 (IEEE, 2016).
- 21. Tavanaei, A., Ghodrati, M., Kheradpisheh, S. R., Masquelier, T. & Maida, A. Deep learning in spiking neural networks. Neural Netw. 111, 47–63 (2019).
- 22. Zenke, F. & Ganguli, S. SuperSpike: Supervised learning in multilayer spiking neural networks. Neural Comput. 30(6), 1514–1541 (2018).
- 23. Shrestha, S. B. & Orchard, G. SLAYER: Spike layer error reassignment in time. Advances in Neural Information Processing Systems 31 (2018).
- 24. Ganguly, B., Chatterjee, A., Mehdi, W., Sharma, S. & Garai, S. EEG based mental arithmetic task classification using a stacked long short term memory network for brain-computer interfacing. In 2020 IEEE VLSI Device Circuit and System (VLSI DCS), 89–94 (IEEE, 2020).
- 25. Subhani, A. R., Mumtaz, W., Saad, M. N. B. M., Kamel, N. & Malik, A. S. Machine learning framework for the detection of mental stress at multiple levels. IEEE Access 5, 13545–13556 (2017).
- 26. Priya, T. H., Mahalakshmi, P., Naidu, V. & Srinivas, M. Stress detection from EEG using power ratio. In 2020 International Conference on Emerging Trends in Information Technology and Engineering (ic-ETITE), 1–6 (IEEE, 2020).
- 27. Kang, M., Shin, S., Jung, J. & Kim, Y. T. Classification of mental stress using CNN-LSTM algorithms with electrocardiogram signals. J. Healthc. Eng. 2021(1), 9951905 (2021).
- 28. Chakladar, D. D., Dey, S., Roy, P. P. & Dogra, D. P. EEG-based mental workload estimation using deep BLSTM-LSTM network and evolutionary algorithm. Biomed. Signal Process. Control 60, 101989 (2020).