Sensors. 2022 Sep 12;22(18):6881. doi: 10.3390/s22186881
Algorithm 1: LAPNet-HAR
Input: Initial network parameters θ_0, hyperparameter α, replay buffer size per class b
Data: Training data (X, Y) = D
(X_0, Y_0) = D_0                                      // get base data for pretraining
C_base = set of base classes in D_0
M_p = {}; M_r = {}                                    // initialize empty prototype memory and replay buffer
/* offline pretraining process */
Update θ_0 with (X_0, Y_0)
Store prototypes p_k for k ∈ C_base in M_p using D_0
Sample data from D_0 and store in M_r for replay
while continually learning do
    (X_t, Y_t) = D_t
    M_p ← UpdatePrototypeMemory(D_t, M_p, θ_{t-1})    // Figure 2a & Equation (3)
    Q ← D_t ∪ M_r; (X_q, Y_q) = Q                     // form combined query set
    Incur loss L(f_{θ_{t-1}}(X_q), Y_q)               // Equation (5)
    Update model θ_t with (X_q, Y_q)
    M_p ← PrototypeAdaptation(M_r, M_p, θ_t, α)       // Equation (6)
    M_r ← UpdateReplayBuffer(Q, M_r, θ_t, b)
end
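
To make the loop concrete, the following is a minimal PyTorch-style sketch of one iteration of the while-loop body. It assumes mean-embedding prototypes, treats PrototypeAdaptation as an exponential moving average weighted by α, and keeps b randomly chosen samples per class in the replay buffer; all function and variable names are illustrative and not taken from the authors' released implementation.

# Minimal sketch of one step of the LAPNet-HAR loop (Algorithm 1).
# Assumptions (not from the paper's code): prototypes are mean embeddings
# per class, PrototypeAdaptation is an EMA weighted by alpha, and the
# replay buffer keeps b random samples per class. Names are illustrative.
import torch
import torch.nn.functional as F

def update_prototype_memory(model, batch_x, batch_y, protos):
    """Add/refresh prototypes from the incoming batch (cf. Equation (3))."""
    with torch.no_grad():
        emb = model(batch_x)
    for k in batch_y.unique():
        protos[int(k)] = emb[batch_y == k].mean(dim=0)
    return protos

def prototypical_loss(model, x, y, protos):
    """Cross-entropy over negative embedding-to-prototype distances (cf. Equation (5))."""
    emb = model(x)                                         # (n, d) embeddings
    classes = sorted(protos)                               # stable class ordering
    centers = torch.stack([protos[k] for k in classes])    # (c, d) prototypes
    logits = -torch.cdist(emb, centers)                    # nearer prototype => larger logit
    targets = torch.tensor([classes.index(int(k)) for k in y])
    return F.cross_entropy(logits, targets)

def prototype_adaptation(model, replay, protos, alpha):
    """EMA blend of stored prototypes toward replay embeddings (cf. Equation (6))."""
    with torch.no_grad():
        for k, xs in replay.items():
            new_p = model(torch.stack(xs)).mean(dim=0)
            protos[k] = alpha * protos[k] + (1 - alpha) * new_p
    return protos

def update_replay_buffer(batch_x, batch_y, replay, b):
    """Keep at most b randomly chosen samples per class."""
    for x, y in zip(batch_x, batch_y):
        replay.setdefault(int(y), []).append(x)
    for k, xs in replay.items():
        if len(xs) > b:
            idx = torch.randperm(len(xs))[:b]
            replay[k] = [xs[i] for i in idx]
    return replay

def continual_step(model, opt, batch_x, batch_y, protos, replay, alpha, b):
    """One pass of the while-loop body in Algorithm 1."""
    protos = update_prototype_memory(model, batch_x, batch_y, protos)
    # Form the combined query set Q = D_t ∪ M_r.
    rx = [x for xs in replay.values() for x in xs]
    ry = [k for k, xs in replay.items() for _ in xs]
    qx = torch.cat([batch_x, torch.stack(rx)]) if rx else batch_x
    qy = torch.cat([batch_y, torch.tensor(ry)]) if ry else batch_y
    loss = prototypical_loss(model, qx, qy, protos)        # loss under θ_{t-1}
    opt.zero_grad()
    loss.backward()
    opt.step()                                             # update to θ_t
    protos = prototype_adaptation(model, replay, protos, alpha)
    replay = update_replay_buffer(batch_x, batch_y, replay, b)
    return protos, replay

Here, model is any torch.nn.Module mapping a batch of sensor windows to embedding vectors, and opt is its optimizer. The offline pretraining stage (the lines before the while-loop) would populate protos and replay from D_0 before continual_step is called on each incoming batch D_t.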