Algorithm 1 Attention-Weighted Component-to-Latent Mapping

Notation:
    C:   Number of input components (e.g., 53)
    H_e: Encoder LSTM hidden size (e.g., 256)
    H_m: Main LSTM hidden size (e.g., 200)
    K:   Number of windows (e.g., 7)
    T:   Number of time steps per window (e.g., 20)

Require:
    W_e ∈ R^{H_e × C}:   Encoder weight matrix
    W_m ∈ R^{H_m × H_e}: Main LSTM weight matrix
    A ∈ R^{K × T}:       Window-level attention matrix
    g ∈ R^{K}:           Global attention vector

Ensure:
    M ∈ R^{H_m × C}: Component-to-latent mapping matrix
 1: procedure ComponentToLatentMapping(W_e, W_m, A, g)
 2:     M ← 0_{H_m × C}                    ▹ Initialize M to zeros
 3:     for k ← 1 to K do                  ▹ Iterate over each window
 4:         w_k ← g_k                      ▹ Scale by global importance
 5:         for t ← 1 to T do              ▹ Iterate over each time step in window k
 6:             w_{k,t} ← w_k · A_{k,t}    ▹ Scale by temporal importance
 7:             P ← W_m W_e                ▹ Compute transformation path
 8:             M ← M + w_{k,t} · P        ▹ Accumulate weighted contribution
 9:         end for
10:     end for
11:     return M
12: end procedure
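The following is a minimal NumPy sketch of Algorithm 1, under two stated assumptions: the transformation path P is taken to be the matrix product W_m W_e (mapping components through the encoder into the main LSTM's latent space), and the accumulated weight for window k at step t is g_k · A_{k,t}. The function name component_to_latent_mapping and all variable names are illustrative, not taken from a reference implementation.

import numpy as np

def component_to_latent_mapping(W_e, W_m, A, g):
    """Algorithm 1: attention-weighted component-to-latent mapping.

    Shapes follow the Notation block:
        W_e: (H_e, C)   encoder weight matrix
        W_m: (H_m, H_e) main LSTM weight matrix
        A:   (K, T)     window-level attention matrix
        g:   (K,)       global attention vector
    Returns M: (H_m, C), the component-to-latent mapping matrix.
    """
    K, T = A.shape
    H_m = W_m.shape[0]
    C = W_e.shape[1]
    M = np.zeros((H_m, C))        # step 2: initialize M to zeros
    P = W_m @ W_e                 # step 7: transformation path (assumed form;
                                  # loop-invariant, so computed once)
    for k in range(K):            # step 3: iterate over each window
        w_k = g[k]                # step 4: scale by global importance
        for t in range(T):        # step 5: iterate over time steps in window k
            w_kt = w_k * A[k, t]  # step 6: scale by temporal importance
            M += w_kt * P         # step 8: accumulate weighted contribution
    return M

# Example with the sizes given in the Notation block.
rng = np.random.default_rng(0)
C, H_e, H_m, K, T = 53, 256, 200, 7, 20
M = component_to_latent_mapping(
    rng.standard_normal((H_e, C)),
    rng.standard_normal((H_m, H_e)),
    rng.standard_normal((K, T)),
    rng.standard_normal(K),
)
print(M.shape)  # (200, 53)

Note that because P does not depend on k or t, the nested loops reduce algebraically to M = (Σ_k g_k Σ_t A_{k,t}) · W_m W_e; the explicit loops are kept only to mirror the pseudocode step by step.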