Author manuscript; available in PMC: 2023 Jan 19.
Published in final edited form as: Comput Methods Programs Biomed. 2021 Dec 29;215:106604. doi: 10.1016/j.cmpb.2021.106604

Algorithm 1.

Our unsupervised seizure identification algorithm. We employ a variational autoencoder architecture comprising an encoder network g_ϕ : ℝ^(M×T) → ℝ^(2×D) and a decoder network p_θ : ℝ^D → ℝ^(M×T), with trainable parameters ϕ ∈ ℝ^(d_g) and θ ∈ ℝ^(d_p), respectively. We train our architecture on EEG recordings that do not contain any seizures, employing a sparsity-enforcing loss function to suppress EEG artifacts (cf. Section 3). Since training captures only non-seizure activity, we identify seizures w.r.t. the reconstruction errors at inference time.
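The encoder/decoder mappings above can be sketched as follows. This is a minimal sketch, not the paper's architecture: the sizes M, T, D, the use of plain linear maps as stand-ins for the networks, and the log-standard-deviation parameterization of the encoder output are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

M, T, D = 4, 16, 8  # hypothetical sizes: EEG channels, time steps, latent dim

# Stand-in linear encoder/decoder weights; the paper's actual network
# architectures are not specified in this excerpt.
W_enc = rng.standard_normal((2 * D, M * T)) * 0.01
W_dec = rng.standard_normal((M * T, D)) * 0.01

def encode(X):
    """g_phi : R^(M x T) -> R^(2 x D); returns mean and positive std."""
    h = W_enc @ X.reshape(-1)
    mu, log_sigma = h[:D], h[D:]
    return mu, np.exp(log_sigma)

def decode(z):
    """p_theta : R^D -> R^(M x T)."""
    return (W_dec @ z).reshape(M, T)

X = rng.standard_normal((M, T))   # one recording
mu, sigma = encode(X)
eps = rng.standard_normal(D)      # auxiliary variable, eps ~ N(0, I)
z = mu + sigma * eps              # reparameterization trick
X_hat = decode(z)                 # reconstruction
```

The exponential on the log-standard-deviation keeps σ strictly positive, which the reparameterization z = μ + σ ⊙ ϵ requires.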

1: procedure Training (X(i) ∈ ℝ^(M×T) for non-seizure training recordings i, g_ϕ, p_θ)
2:  Initialize trainable parameters ϕ and θ
3: repeat
4:   Sample recording X(i) from non-seizure training recordings
5:   Sample auxiliary variables ϵ(l) ∼ N(0, I), l ∈ {1, …, L}
6:   Reparametrize to obtain latent features z(l) = μ + σ ⊙ ϵ(l), l ∈ {1, …, L}, where (μ, σ) = g_ϕ(X(i))
7:   Compute sparsity-enforcing loss (2) and its gradients w.r.t. ϕ and θ
8:   Update trainable parameters ϕ and θ via Adam optimization
9: until Loss value (2) converged
10: return Trained gϕ, Trained pθ
11: end procedure
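One pass of the training loop's forward computation (lines 4–7) can be sketched as below. The paper's sparsity-enforcing loss (2) is not reproduced in this excerpt, so the loss shown here, a mean-squared reconstruction error plus an L1 penalty on the residual, is only an illustrative stand-in; the linear encoder/decoder and all sizes are likewise assumptions. The gradient computation and Adam update (lines 7–8) would be handled by an autodiff framework in practice and are elided.

```python
import numpy as np

rng = np.random.default_rng(1)
M, T, D, L = 4, 16, 8, 5  # hypothetical sizes; L Monte-Carlo samples

W = rng.standard_normal((2 * D, M * T)) * 0.01   # stand-in for phi
V = rng.standard_normal((M * T, D)) * 0.01       # stand-in for theta

def encode(X):
    h = W @ X.reshape(-1)
    return h[:D], np.exp(h[D:])                  # (mu, sigma)

def decode(z):
    return (V @ z).reshape(M, T)

X = rng.standard_normal((M, T))   # a sampled non-seizure training recording
mu, sigma = encode(X)

# Lines 5-6: sample L auxiliary variables and reparameterize.
eps = rng.standard_normal((L, D))
Z = mu + sigma * eps              # broadcasts to shape (L, D)

# Illustrative stand-in for the sparsity-enforcing loss (2):
# reconstruction error plus an L1 penalty on the residual,
# averaged over the L latent samples.
lam = 0.1
loss = 0.0
for z in Z:
    R = X - decode(z)
    loss += np.mean(R ** 2) + lam * np.mean(np.abs(R))
loss /= L
```

Averaging over the L reparameterized samples is a standard Monte-Carlo estimate of the expected loss under the encoder's latent distribution.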
1: procedure Inference (X(i) ∈ ℝ^(M×T) for test recordings i, trained g_ϕ, trained p_θ)
2: repeat
3:   Sample recording X(i) from test recordings
4:   Sample auxiliary variables ϵ(l) ∼ N(0, I), l ∈ {1, …, L}
5:   Reparametrize to obtain latent features z(l) = μ + σ ⊙ ϵ(l), l ∈ {1, …, L}, where (μ, σ) = g_ϕ(X(i))
6:   Compute decoder reconstruction X̂(i,l) for each z(l)
7:   Compute decoder reconstruction X̂(i) by averaging X̂(i,l) over l ∈ {1, …, L}
8:   Compute seizure evidence score (3) w.r.t. the reconstruction error between X(i) and X̂(i)
9: until All recordings are tested
10: return Seizure evidence scores for all test recordings
11: end procedure
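The inference procedure's scoring step can be sketched as follows. Since score (3) is not reproduced in this excerpt, a plain mean-squared reconstruction error serves as a stand-in; the `seizure_evidence` helper, the linear encoder/decoder, and all sizes are hypothetical names and assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
M, T, D, L = 4, 16, 8, 5  # hypothetical sizes; L Monte-Carlo samples

W = rng.standard_normal((2 * D, M * T)) * 0.01   # stand-in trained phi
V = rng.standard_normal((M * T, D)) * 0.01       # stand-in trained theta

def encode(X):
    h = W @ X.reshape(-1)
    return h[:D], np.exp(h[D:])                  # (mu, sigma)

def decode(z):
    return (V @ z).reshape(M, T)

def seizure_evidence(X, n_samples=L):
    """Average n_samples decoder reconstructions (line 7), then score by
    reconstruction error; MSE is a stand-in for the paper's score (3)."""
    mu, sigma = encode(X)
    X_hats = [decode(mu + sigma * rng.standard_normal(D))
              for _ in range(n_samples)]
    X_hat = np.mean(X_hats, axis=0)   # line 7: average over l
    return np.mean((X - X_hat) ** 2)  # line 8: error-based evidence score

# Score a batch of (random stand-in) test recordings.
scores = [seizure_evidence(rng.standard_normal((M, T))) for _ in range(3)]
```

Because training saw only non-seizure EEG, recordings containing seizures should reconstruct poorly and so receive larger scores; a threshold on the score then flags candidate seizure recordings.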