Sensors. 2021 May 7;21(9):3241. doi: 10.3390/s21093241
Algorithm 1: Training process of ZIC-LDM
Input: Number of training iteration rounds epochs, batch size m, learning rate lr, semantic features v, initialized FC parameter W_h for semantic feature mapping, visual features f_φ(x), FC parameter W_g for visual feature mapping, and relation module r_ω.
Output: Optimized FC parameter W_h* for semantic feature mapping, FC parameter W_g* for visual feature mapping, and relation module r_ω*.
1 for epoch = 0, 1, 2, …, epochs − 1 do
2   for i = 0, 1, 2, …, n_train/m − 1 do
3    Sample m training samples x and the corresponding labels y from the seen classes;
4    Map f_φ(x) into the common space: g_φ(x) ← W_g × f_φ(x);
5    Map v into the common space: h_φ(v) ← W_h × v;
6    Concatenate g_φ(x) and h_φ(v);
7    Calculate the similarity score: s_{p,m} ← r_ω(g_φ(x), h_φ(v));
8    Calculate the MSE loss: L = MSE(s_{p,m}, y_v);
9    Update the FC parameters for semantic feature mapping, the FC parameters for visual feature mapping, and the relation module:
    W_g*, W_h*, r_ω* ← Adam(∇_{W_g, W_h, r_ω}[L], W_g, W_h, r_ω, lr);
10   end for
11 end for
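As a rough illustration only (not the authors' code), steps 3–8 of one inner-loop iteration can be sketched in NumPy. All dimensions, the two-layer relation MLP, and the sigmoid output are hypothetical choices made for the sketch; a real implementation would use an autograd framework so that the Adam update of step 9 can be applied to W_g, W_h, and r_ω.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not specified in the excerpt above)
d_vis, d_sem, d_common, n_cls, m = 32, 16, 24, 5, 8

# FC parameters: W_g maps visual features, W_h maps semantic features
W_g = rng.standard_normal((d_common, d_vis)) * 0.1
W_h = rng.standard_normal((d_common, d_sem)) * 0.1
# Relation module r_omega, sketched here as a tiny 2-layer MLP
W1 = rng.standard_normal((d_common * 2, 16)) * 0.1
W2 = rng.standard_normal((16, 1)) * 0.1

def relation_scores(f_x, v):
    """Steps 4-7: map both modalities into the common space,
    concatenate every (sample, class) pair, and score each pair."""
    g = f_x @ W_g.T   # step 4: (m, d_common)
    h = v @ W_h.T     # step 5: (n_cls, d_common)
    # step 6: pairwise concatenation -> (m, n_cls, 2*d_common)
    pairs = np.concatenate(
        [np.repeat(g[:, None, :], n_cls, axis=1),
         np.repeat(h[None, :, :], m, axis=0)], axis=-1)
    # step 7: relation module (ReLU MLP + sigmoid) gives scores in (0, 1)
    hidden = np.maximum(pairs @ W1, 0.0)
    sig = 1.0 / (1.0 + np.exp(-(hidden @ W2)))
    return sig[..., 0]  # (m, n_cls)

f_x = rng.standard_normal((m, d_vis))    # CNN features f_phi(x) of one batch
v = rng.standard_normal((n_cls, d_sem))  # class semantic features
y = rng.integers(0, n_cls, size=m)       # step 3: labels from seen classes
y_onehot = np.eye(n_cls)[y]              # one-hot ground-truth match matrix

s = relation_scores(f_x, v)
loss = np.mean((s - y_onehot) ** 2)      # step 8: MSE loss
print(s.shape, float(loss))
```

The score matrix s has one similarity per (batch sample, class) pair, and the MSE loss drives matched pairs toward 1 and mismatched pairs toward 0, which is what the optimizer in step 9 would then minimize.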