Expert Systems with Applications. 2021 Mar 13;176:114848. doi: 10.1016/j.eswa.2021.114848
Algorithm 1. Self-correction learning mechanism

Input: ω^0: model weights initialized by f_enc, g_md, g_ad1, g_ad2, and φ after domain adaptation; Ŷ_a^0: augmented initial pseudo labels generated with ω^0; Ep: number of epochs per cycle; C: total number of cycles
Output: optimal weights ω^C
1: Construct a training dataset D_training from D_S = {I_S, Y_S} and D_T = {I_T, Ŷ_a^0}
2: for each c ∈ [1, C] do
3:  for each e ∈ [1, Ep] do
4:   Update the learning rate;
5:   Calculate loss L_seg by Eq. (7);
6:  end for
7:  Update weight ω^(c+1), which initializes the next cycle c+1, by Eq. (9);
8:  Generate pseudo labels Ŷ^(c+1) using ω^c;
9:  Augment the pseudo labels to obtain Ŷ_a^(c+1) using ω^c;
10:  Refine the pseudo labels by Eq. (10);
11:  Re-construct the training dataset D_training from D_S = {I_S, Y_S} and D_T = {I_T, Ŷ_a^(c+1)}
12: end for
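
The cycle/epoch control flow above can be sketched in Python. Note this is a toy illustration only: the paper's Eq. (7), (9), and (10) are not reproduced here, so `segmentation_loss_step`, `merge_weights`, and `refine_labels` below are hypothetical scalar placeholders (assumptions, not the authors' definitions); only the loop structure mirrors the pseudocode.

```python
# Hedged sketch of Algorithm 1's control flow. All numeric operations are
# toy stand-ins, chosen only so the loop runs end to end; they do NOT
# implement the paper's Eq. (7), (9), or (10).

def segmentation_loss_step(w: float, lr: float) -> float:
    """Toy stand-in for one epoch of minimising L_seg (Eq. 7)."""
    return w + lr * (1.0 - w)  # drift the scalar 'weight' toward 1.0

def merge_weights(w_prev: float, w_curr: float, alpha: float = 0.5) -> float:
    """Toy stand-in for Eq. (9): blend cycle weights to initialize the next cycle."""
    return alpha * w_prev + (1.0 - alpha) * w_curr

def refine_labels(labels: list, w: float, thr: float = 0.05) -> list:
    """Toy stand-in for Eq. (10): drop pseudo labels with low 'confidence'."""
    return [y for y in labels if y * w > thr]

def self_correction(w0: float, pseudo_labels: list, epochs: int, cycles: int,
                    lr: float = 0.1):
    w, labels = w0, pseudo_labels
    for _ in range(cycles):                           # step 2: cycle loop
        w_start = w
        for _ in range(epochs):                       # step 3: epoch loop
            w = segmentation_loss_step(w, lr)         # step 5: loss, Eq. (7)
        w = merge_weights(w_start, w)                 # step 7: weight update, Eq. (9)
        labels = [min(1.0, y + 0.1) for y in labels]  # steps 8-9: regenerate/augment (toy)
        labels = refine_labels(labels, w)             # step 10: refinement, Eq. (10)
    return w, labels                                  # step 11 rebuilds D_training from these

w_final, labels_final = self_correction(0.0, [0.1, 0.5, 0.9], epochs=2, cycles=3)
```

In this skeleton each cycle re-initializes from a blend of its start and end weights and then re-derives the target-domain pseudo labels, which is the self-correcting feedback the algorithm describes: better weights produce better pseudo labels for the next cycle's training set.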