Algorithm 1: MTPL Model Training and Prototype Learning
Require:
  • Training dataset $D_{\text{train}} = \{(x_i, y_i)\}_{i=1}^{N}$;
  • Number of known classes $K$;
  • Loss weights $\lambda_p$, $\lambda_r$;
  • Batch size $B$;
  • Number of training epochs $E_p$.

Ensure:
  • Optimized model parameters $(\phi, \theta, \psi)$;
  • Prototype set $C = \{c_k\}_{k=1}^{K}$.

1 Initialize parameters for the encoder $E_\phi$, decoder $D_\theta$, and classifier $F_\psi$.
2 Initialize the learnable prototype matrix $C \in \mathbb{R}^{K \times D_{\text{feat}}}$ using Gaussian random initialization.
3 for epoch = 1 to $E_p$ do
4   for mini-batch $(x_b, y_b)$ from $D_{\text{train}}$ do
5     // Forward propagation
6     Latent features: $z_b \leftarrow E_\phi(x_b)$;
7     Reconstructed signal: $\hat{x}_b \leftarrow D_\theta(z_b)$;
8     Classification logits: $\text{logits}_b \leftarrow F_\psi(z_b)$;
9     Classification probabilities: $p_b \leftarrow \text{Softmax}(\text{logits}_b)$.
10    // Multi-task loss calculation
11    Classification loss: $L_c \leftarrow \text{CrossEntropyLoss}(p_b, y_b)$;
12    Prototype loss: $L_p \leftarrow \frac{1}{B}\sum_{i=1}^{B} \| z_{b,i} - c_{y_{b,i}} \|_2^2$;
13    Reconstruction loss: $L_r \leftarrow \frac{1}{B}\sum_{i=1}^{B} \| x_{b,i} - \hat{x}_{b,i} \|_2^2$;
14    Total loss: $L_{\text{total}} \leftarrow L_c + \lambda_p L_p + \lambda_r L_r$.
15    // Backward propagation and parameter update
16    Compute gradients of $L_{\text{total}}$ with respect to all learnable parameters $(\phi, \theta, \psi, C)$;
17    Update parameters $(\phi, \theta, \psi, C)$ using an optimizer.
18  end for
19 end for
20 return trained parameters $(\phi, \theta, \psi)$ and prototype set $C$
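
For concreteness, the training loop of Algorithm 1 can be expressed in a few lines of PyTorch. The following is a minimal sketch, not the authors' implementation: the MLP encoder/decoder architectures, the latent dimension `D_FEAT`, the loss weights, the Adam optimizer, and the synthetic stand-in data are all illustrative assumptions. Note that PyTorch's `cross_entropy` consumes raw logits and applies the softmax internally, so steps 9 and 11 collapse into a single call.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

# Illustrative hyperparameters -- assumptions, not the paper's configuration.
D_IN, D_FEAT, K = 128, 32, 5        # input dim, latent dim D_feat, known classes K
LAMBDA_P, LAMBDA_R = 0.1, 0.1       # loss weights lambda_p, lambda_r
B, EPOCHS = 64, 10                  # batch size B, epochs E_p

# Step 1: encoder E_phi, decoder D_theta, classifier F_psi (MLPs assumed here).
encoder = nn.Sequential(nn.Linear(D_IN, 64), nn.ReLU(), nn.Linear(64, D_FEAT))
decoder = nn.Sequential(nn.Linear(D_FEAT, 64), nn.ReLU(), nn.Linear(64, D_IN))
classifier = nn.Linear(D_FEAT, K)

# Step 2: learnable prototype matrix C in R^{K x D_feat}, Gaussian-initialized.
prototypes = nn.Parameter(torch.randn(K, D_FEAT))

optimizer = torch.optim.Adam(
    [*encoder.parameters(), *decoder.parameters(),
     *classifier.parameters(), prototypes], lr=1e-3)

# Synthetic stand-in for D_train; replace with the real dataset.
X, y = torch.randn(1024, D_IN), torch.randint(0, K, (1024,))
loader = DataLoader(TensorDataset(X, y), batch_size=B, shuffle=True)

for epoch in range(EPOCHS):                     # step 3
    for x_b, y_b in loader:                     # step 4
        # Forward propagation (steps 6-8)
        z_b = encoder(x_b)                      # latent features z_b
        x_hat = decoder(z_b)                    # reconstructed signal x_hat_b
        logits = classifier(z_b)                # classification logits

        # Multi-task loss (steps 11-14); cross_entropy softmaxes internally.
        L_c = F.cross_entropy(logits, y_b)
        L_p = ((z_b - prototypes[y_b]) ** 2).sum(dim=1).mean()  # pull z_b toward its class prototype
        L_r = ((x_b - x_hat) ** 2).sum(dim=1).mean()            # reconstruction error
        loss = L_c + LAMBDA_P * L_p + LAMBDA_R * L_r

        # Backward propagation and parameter update (steps 16-17)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Because `prototypes` is an `nn.Parameter` handed to the optimizer alongside the network weights, the prototype set $C$ is updated jointly with $(\phi, \theta, \psi)$, as steps 16 and 17 require.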