2020 Oct 22;22(11):1190. doi: 10.3390/e22111190
Algorithm 1 Learning without forgetting
Start with:
  θs: shared parameters
  θo: task specific parameters for each old task
  Xn, Yn: training data and ground truth on the new task
Initialize:
  Yo ← CNN(Xn, θs, θo)       // compute output of old tasks for new data
  θn ← RANDINIT(|θn|)        // randomly initialize new parameters
Train:
  Define Ŷo ≡ CNN(Xn, θ̂s, θ̂o)       // old task output
  Define Ŷn ≡ CNN(Xn, θ̂s, θ̂n)       // new task output
  θs*, θo*, θn* ← argmin_{θ̂s, θ̂o, θ̂n} ( λo·Lold(Yo, Ŷo) + Lnew(Yn, Ŷn) + R(θ̂s, θ̂o, θ̂n) )
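The steps above can be sketched in PyTorch. This is a minimal illustration, not the paper's implementation: the network is a toy shared trunk (`shared`, standing in for θs) with two linear heads (`old_head` for θo, `new_head` for θn, all names assumed), Lold is approximated by a KL-divergence distillation term against the recorded old-task outputs Yo, Lnew is cross-entropy, and the regularizer R is taken to be simple weight decay.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# θs: shared trunk; θo: old-task head; θn: new-task head (randomly initialized).
shared = nn.Linear(8, 16)
old_head = nn.Linear(16, 4)
new_head = nn.Linear(16, 3)

Xn = torch.randn(32, 8)            # new-task training data
Yn = torch.randint(0, 3, (32,))    # new-task ground truth

# Initialize: record old-task outputs Yo on the new data before training.
with torch.no_grad():
    Yo = old_head(torch.relu(shared(Xn)))

lambda_o = 1.0
params = (list(shared.parameters()) + list(old_head.parameters())
          + list(new_head.parameters()))
# weight_decay stands in for the regularizer R(θ̂s, θ̂o, θ̂n).
opt = torch.optim.SGD(params, lr=0.1, weight_decay=1e-4)

losses = []
for step in range(50):
    h = torch.relu(shared(Xn))
    Yo_hat = old_head(h)           # Ŷo: old-task output on new data
    Yn_hat = new_head(h)           # Ŷn: new-task output
    # Lold keeps old-task responses close to the recorded Yo (distillation);
    # Lnew is standard cross-entropy on the new task's labels.
    L_old = F.kl_div(F.log_softmax(Yo_hat, dim=1),
                     F.softmax(Yo, dim=1), reduction="batchmean")
    L_new = F.cross_entropy(Yn_hat, Yn)
    loss = lambda_o * L_old + L_new
    opt.zero_grad()
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

Jointly optimizing θs, θo, and θn this way lets the shared parameters adapt to the new task while the distillation term discourages drift in the old task's responses; the paper additionally applies a temperature to the softmax in Lold, which is omitted here for brevity.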