Algorithm 1.
Our unsupervised seizure identification algorithm. We employ a variational autoencoder architecture comprising an encoder network $f_{\phi}$ and a decoder network $g_{\theta}$, with trainable parameters $\phi$ and $\theta$, respectively. We train our architecture on EEG recordings that do not contain any seizures, employing a sparsity-enforcing loss function to suppress EEG artifacts (cf. Section 3). Since training captures only non-seizure activity, we identify seizures at inference time via their elevated reconstruction errors.
| 1: | procedure Training($\{\mathbf{x}_n\}_{n=1}^{N}$ for non-seizure training recordings, $\phi$, $\theta$) |
| 2: | Initialize trainable parameters $\phi$ and $\theta$ |
| 3: | repeat |
| 4: | Sample recording $\mathbf{x}$ from non-seizure training recordings |
| 5: | Sample auxiliary variables $\boldsymbol{\epsilon}_{\ell} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$, $\ell = 1, \dots, L$ |
| 6: | Reparametrize $\boldsymbol{\epsilon}_{\ell}$ to obtain latent features $\mathbf{z}_{\ell}$, $\ell = 1, \dots, L$ |
| 7: | Compute sparsity-enforcing loss (2) and its gradients w.r.t. $\phi$ and $\theta$ |
| 8: | Update trainable parameters $\phi$ and $\theta$ via Adam optimization |
| 9: | until loss value (2) has converged |
| 10: | return trained $\phi$, trained $\theta$ |
| 11: | end procedure |
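As a concrete illustration, the following PyTorch sketch mirrors the Training procedure above. Several details are assumptions and not taken from the paper: the network widths, the constants `SEGMENT_LEN`, `LATENT_DIM`, and `NUM_MC_SAMPLES`, and an L1 reconstruction term plus KL regularizer standing in for the sparsity-enforcing loss (2).

```python
# Minimal sketch of the Training procedure; hyperparameters and loss are placeholders.
import torch
import torch.nn as nn

SEGMENT_LEN = 1024    # illustrative length of one EEG segment (assumption)
LATENT_DIM = 32       # illustrative latent dimensionality (assumption)
NUM_MC_SAMPLES = 4    # number L of reparametrization samples per recording (assumption)

class Encoder(nn.Module):
    """Maps an EEG segment x to the mean and log-variance of the latent features."""
    def __init__(self):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(SEGMENT_LEN, 256), nn.ReLU())
        self.mu = nn.Linear(256, LATENT_DIM)
        self.logvar = nn.Linear(256, LATENT_DIM)

    def forward(self, x):
        h = self.hidden(x)
        return self.mu(h), self.logvar(h)

class Decoder(nn.Module):
    """Maps latent features z back to a reconstruction of the EEG segment."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, SEGMENT_LEN))

    def forward(self, z):
        return self.net(z)

def train(non_seizure_recordings, num_steps=10_000, lr=1e-3):
    """non_seizure_recordings: float tensor of shape (N, SEGMENT_LEN)."""
    encoder, decoder = Encoder(), Decoder()
    params = list(encoder.parameters()) + list(decoder.parameters())
    optimizer = torch.optim.Adam(params, lr=lr)
    for _ in range(num_steps):                              # stands in for "repeat ... until converged"
        idx = torch.randint(len(non_seizure_recordings), (1,)).item()
        x = non_seizure_recordings[idx]                     # sample a non-seizure recording
        mu, logvar = encoder(x)
        eps = torch.randn(NUM_MC_SAMPLES, LATENT_DIM)       # auxiliary variables eps_l ~ N(0, I)
        z = mu + torch.exp(0.5 * logvar) * eps              # reparametrization trick
        x_hat = decoder(z).mean(dim=0)                      # reconstruction averaged over the L samples
        # Stand-in for the sparsity-enforcing loss (2): an L1 reconstruction
        # term plus the standard KL regularizer of a Gaussian VAE.
        recon = torch.abs(x_hat - x).mean()
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        loss = recon + kl
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return encoder, decoder
```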
| 1: | procedure Inference($\{\mathbf{x}_m\}_{m=1}^{M}$ for test recordings, trained $\phi$, trained $\theta$) |
| 2: | repeat |
| 3: | Sample recording $\mathbf{x}$ from test recordings |
| 4: | Sample auxiliary variables $\boldsymbol{\epsilon}_{\ell} \sim \mathcal{N}(\mathbf{0}, \mathbf{I})$, $\ell = 1, \dots, L$ |
| 5: | Reparametrize $\boldsymbol{\epsilon}_{\ell}$ to obtain latent features $\mathbf{z}_{\ell}$, $\ell = 1, \dots, L$ |
| 6: | Compute $g_{\theta}(\mathbf{z}_{\ell})$ for each $\ell = 1, \dots, L$ |
| 7: | Compute decoder reconstruction $\hat{\mathbf{x}}$ by averaging over the $L$ samples $g_{\theta}(\mathbf{z}_{\ell})$ |
| 8: | Compute seizure evidence score (3) w.r.t. the reconstruction error between $\mathbf{x}$ and $\hat{\mathbf{x}}$ |
| 9: | until all recordings are tested |
| 10: | return seizure evidence scores for all test recordings |
| 11: | end procedure |
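Likewise, a minimal sketch of the Inference procedure, reusing the `Encoder` and `Decoder` modules from the training sketch above; the mean absolute reconstruction error is only a stand-in for the seizure evidence score (3) defined in the paper.

```python
# Minimal sketch of the Inference procedure; the scoring function is a placeholder.
import torch

def infer(test_recordings, encoder, decoder, num_mc_samples=4):
    """Return one seizure evidence score per test recording in a (M, SEGMENT_LEN) tensor."""
    scores = []
    with torch.no_grad():
        for x in test_recordings:                            # loop until all recordings are tested
            mu, logvar = encoder(x)
            eps = torch.randn(num_mc_samples, mu.shape[-1])  # auxiliary variables eps_l ~ N(0, I)
            z = mu + torch.exp(0.5 * logvar) * eps           # reparametrized latent features z_l
            x_hat = decoder(z).mean(dim=0)                   # decoder reconstruction averaged over L samples
            # Stand-in for the seizure evidence score (3): the mean absolute
            # reconstruction error; larger values indicate likely seizure activity.
            scores.append(torch.abs(x_hat - x).mean().item())
    return scores
```

Under this scoring convention, recordings whose score exceeds a chosen threshold would be flagged as containing seizures, consistent with the idea that the model reconstructs only the non-seizure activity it was trained on.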