Brain Sci. 2025 Sep 1;15(9):954. doi: 10.3390/brainsci15090954
Algorithm 1 Attention-Weighted Component-to-Latent Mapping
Notation:
         C:                Number of input components (e.g., 53)
         D_enc:           Encoder LSTM hidden size (e.g., 256)
         D_main:          Main LSTM hidden size (e.g., 200)
         K:                Number of windows (e.g., 7)
         T:                Number of time steps per window (e.g., 20)
Require:
       W_enc: Encoder weight matrix, W_enc ∈ R^(D_enc × C)
       W_main: Main LSTM weight matrix, W_main ∈ R^(D_main × D_enc)
       α_win: Window-level attention matrix, α_win ∈ R^(K × T)
       α_global: Global attention vector, α_global ∈ R^K
Ensure:
       M: Component-to-latent mapping matrix, M ∈ R^(D_main × C)
 
 1:   procedure ComponentToLatentMapping(W_enc, W_main, α_win, α_global)
 2:        M ← 0                                                               ▹ Initialize M to zeros
 3:        for k ← 1 to K do                                                   ▹ Iterate over each window
 4:               W_main^(scaled) ← α_global[k] · W_main                       ▹ Scale by global importance
 5:               for t ← 1 to T do                                            ▹ Iterate over each time step in window k
 6:                      W_enc^(scaled) ← α_win[k, t] · W_enc                  ▹ Scale by temporal importance
 7:                      contribution ← W_main^(scaled) × W_enc^(scaled)       ▹ Compute transformation path
 8:                      M ← M + contribution                                  ▹ Accumulate weighted contribution
 9:             end for
10:        end for
11:        return M
12:   end procedure
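
For readers who want to reproduce the mapping numerically, the listing below is a minimal NumPy sketch of Algorithm 1. The function name component_to_latent_mapping and the array names (w_enc, w_main, alpha_win, alpha_global) are illustrative choices, not identifiers from the paper's released code; the array shapes follow the Notation block above.

    import numpy as np

    def component_to_latent_mapping(w_enc, w_main, alpha_win, alpha_global):
        """Return the (D_main x C) component-to-latent mapping M.

        w_enc        : (D_enc, C)      encoder LSTM weight matrix
        w_main       : (D_main, D_enc) main LSTM weight matrix
        alpha_win    : (K, T)          window-level (temporal) attention
        alpha_global : (K,)            global (window) attention
        """
        d_main = w_main.shape[0]
        c = w_enc.shape[1]
        m = np.zeros((d_main, c))                           # line 2: M <- 0
        for k in range(alpha_global.shape[0]):              # line 3: iterate over windows
            w_main_scaled = alpha_global[k] * w_main        # line 4: scale by global importance
            for t in range(alpha_win.shape[1]):             # line 5: iterate over time steps in window k
                w_enc_scaled = alpha_win[k, t] * w_enc      # line 6: scale by temporal importance
                m += w_main_scaled @ w_enc_scaled           # lines 7-8: accumulate transformation path
        return m

    # Example with the sizes quoted in the Notation block.
    rng = np.random.default_rng(0)
    M = component_to_latent_mapping(
        rng.standard_normal((256, 53)),    # W_enc:      D_enc x C
        rng.standard_normal((200, 256)),   # W_main:     D_main x D_enc
        rng.random((7, 20)),               # alpha_win:  K x T
        rng.random(7),                     # alpha_global: K
    )
    print(M.shape)                         # (200, 53)

Because both attention coefficients are scalars, the double loop is mathematically equivalent to scaling the single product W_main × W_enc by Σ_k α_global[k] Σ_t α_win[k, t]; the explicit loops above simply mirror the structure of the pseudocode.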