Sensors. 2025 Jul 25;25(15):4622. doi: 10.3390/s25154622
Algorithm 1. Transformer-Enhanced Autoencoder Construction
Input: X
Output: Branch A encoder latent feature matrix E; reconstruction X̂; reconstruction loss L_AE
1: procedure TransformerEnhancedAutoencoder(X)
2:   Feed the input X into the input layer
3:   Compute the intermediate feature A with the multi-head attention mechanism
4:   Apply layer normalization to obtain the intermediate variable p
5:   Compute the encoder output z according to Equation (6)
6:   Compute the reconstruction matrix X̂ according to Equation (7)
7:   Compute the latent feature matrix E according to Equation (8)
8:   Compute the reconstruction loss L_AE according to Equations (9)–(11)
9: end procedure
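The steps of Algorithm 1 can be sketched in NumPy. Since Equations (6)–(11) are not reproduced here, the concrete forms below are assumptions for illustration only: a single attention block with a residual connection, a one-layer ReLU encoder and linear decoder for Equations (6)–(7), the encoder output taken directly as the latent matrix E for Equation (8), and a plain mean-squared reconstruction error standing in for Equations (9)–(11).

```python
import numpy as np

rng = np.random.default_rng(0)

def layer_norm(x, eps=1e-5):
    # Normalize each row to zero mean and unit variance (step 4).
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_head_attention(X, Wq, Wk, Wv, Wo, n_heads):
    # Step 3: intermediate feature A via scaled dot-product attention.
    T, d = X.shape
    dh = d // n_heads
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    heads = []
    for h in range(n_heads):
        q, k, v = (M[:, h * dh:(h + 1) * dh] for M in (Q, K, V))
        attn = softmax(q @ k.T / np.sqrt(dh))
        heads.append(attn @ v)
    return np.concatenate(heads, axis=1) @ Wo

def transformer_autoencoder(X, params, n_heads=2):
    A = multi_head_attention(X, params["Wq"], params["Wk"],
                             params["Wv"], params["Wo"], n_heads)
    p = layer_norm(X + A)                      # step 4 (residual assumed)
    z = np.maximum(0.0, p @ params["We"])      # step 5: encoder output (Eq. (6), assumed)
    X_hat = z @ params["Wd"]                   # step 6: reconstruction (Eq. (7), assumed)
    E = z                                      # step 7: latent matrix (Eq. (8), assumed)
    L_AE = np.mean((X - X_hat) ** 2)           # step 8: MSE loss (Eqs. (9)-(11), assumed)
    return E, X_hat, L_AE

# Toy example: T=5 samples, d=8 input features, dz=4 latent features.
T, d, dz = 5, 8, 4
params = {k: rng.normal(scale=0.1, size=(d, d)) for k in ("Wq", "Wk", "Wv", "Wo")}
params["We"] = rng.normal(scale=0.1, size=(d, dz))
params["Wd"] = rng.normal(scale=0.1, size=(dz, d))
X = rng.normal(size=(T, d))
E, X_hat, L = transformer_autoencoder(X, params)
print(E.shape, X_hat.shape, float(L))
```

With untrained random weights the loss is of course large; in the paper's pipeline these parameters would be fitted by minimizing L_AE.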