Table 2.
Summary of deep learning architectures used in EEG-based healthcare.
| Architecture | Key features | Complexity | Input characteristics (representation, length, channels, topology) | Typical applications | Strengths | Limitations | Representative studies |
|---|---|---|---|---|---|---|---|
| CNN | Spatial feature extraction, convolution layers | Low–Med | High rep., Med length/channels, Low topology | Seizure detection, sleep staging, BCI | Captures local spatial patterns, less manual feature engineering | Needs large datasets, weak temporal modeling | Acharya et al. (2018) and Supratak et al. (2017) |
| RNN/LSTM | Sequential modeling, temporal dependencies | Med–High | Med rep., High length, Low–Med channels, Low topology | Emotion recognition, seizure prediction | Models long-term dependencies | Sensitive to data imbalance, slower training | Tripathi et al. (2017) |
| Transformer | Self-attention, parallel processing | High | High rep./length, Med channels, Low topology | Cognitive workload estimation, emotion recognition | Captures global dependencies, interpretable attention | Computationally heavy, needs large data | Klein et al. (2025) |
| CNN–LSTM Hybrid | Combines spatial + temporal modeling | Med–High | High rep./length, Med channels, Low topology | Seizure detection, sleep staging | Joint spatio-temporal learning | High computational cost | Roy et al. (2019) |
| GNN | Graph-based electrode modeling | Med–High | Med rep./length, High channels/topology | Motor imagery, emotion recognition | Captures inter-channel connectivity | Graph construction complexity | Xu et al. (2024) |
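To make the CNN row above concrete, the following minimal sketch applies a 1-D convolution to a synthetic multi-channel EEG segment. The array shapes, kernel values, and the `conv1d_valid` helper are hypothetical illustrations, not taken from any cited study; they are meant only to show how a convolutional filter extracts local patterns from each channel, the core operation the table attributes to CNNs.

```python
import numpy as np

def conv1d_valid(signal, kernel):
    """'Valid' 1-D convolution (cross-correlation) over one EEG channel."""
    k = len(kernel)
    return np.array([signal[i:i + k] @ kernel
                     for i in range(len(signal) - k + 1)])

# Hypothetical EEG segment: 4 channels x 16 time samples.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((4, 16))

# A simple difference filter, standing in for one learned convolutional kernel.
kernel = np.array([1.0, 0.0, -1.0])

# Apply the same filter to every channel, as one CNN feature map would.
features = np.stack([conv1d_valid(ch, kernel) for ch in eeg])
print(features.shape)  # (4, 14): per-channel local feature maps
```

In a real CNN the kernel weights are learned from data and many such filters are stacked with nonlinearities and pooling; this is what lets the architecture avoid manual feature engineering, at the cost of needing large training sets.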