Entropy. 2022 Jul 29;24(8):1046. doi: 10.3390/e24081046
Algorithm 1: Algorithm for the parameter optimization of PDTN.

Input: the input features of source and target data: $\{x_i^s\}_{i=1}^{n_s}$, $\{x_j^t\}_{j=1}^{n_t}$;

   training labels of source data: $\{l_i\}_{i=1}^{n_s}$; fc layers: $[fc_1, fc_2, fc_3]$;

   learning rate: $lr$ and trade-off parameters $\lambda$, $\gamma$, and $\mu$.

Initialize: $\theta_f$, $\theta_c$ randomly.

Output: the optimized parameters: $\hat{\theta}_f$, $\hat{\theta}_c$.

while the total loss $L_{total} > \epsilon$ and iteration $n <$ maxIter do

(1) Generate a mini-batch of features of source and target data: $\{x_i^s\}_{i=1}^{n_b}$, $\{x_j^t\}_{j=1}^{n_b}$;

(2) Extract the high-level features of source and target data: $\{f_k^s, f_k^t\}_{k=1}^{n_l} = G_f([x^s, x^t]; \theta_f)$;

(3) Calculate the negative-valence and positive-valence feature centers $v_n^b$ and $v_p^b$ in each mini-batch by Equations (4) and (5);

(4) Calculate the feature center of the $q$th class, $c_q^b$, in each mini-batch by Equation (7);
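Steps (3) and (4) both reduce to masked means over the mini-batch features. A minimal NumPy sketch under that reading (function and argument names are hypothetical; the exact forms are given by Equations (4), (5), and (7) in the paper):

```python
import numpy as np

def batch_centers(feats, labels, neg_labels, pos_labels):
    """Mini-batch centers: valence centers v_n^b / v_p^b (step 3) and
    per-class centers c_q^b (step 4), each the mean of the features
    whose label falls in the corresponding set."""
    v_nb = feats[np.isin(labels, neg_labels)].mean(axis=0)
    v_pb = feats[np.isin(labels, pos_labels)].mean(axis=0)
    c_qb = {q: feats[labels == q].mean(axis=0) for q in np.unique(labels)}
    return v_nb, v_pb, c_qb
```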

(5) if iteration $n = 1$:

  Initialize the global centers $v_n$, $v_p$, and $c_q$ (or $c_p$) over the whole source data using steps (3) and (4);

else:

  $\Delta v_n = \frac{1}{1+n_b}\sum_{1\le i\le n_b,\; l_i\in N}\left(v_n^b - f_k^{s,i}\right),\quad v_n \leftarrow v_n - \eta\,\Delta v_n$;
  $\Delta v_p = \frac{1}{1+n_b}\sum_{1\le j\le n_b,\; l_j\in P}\left(v_p^b - f_k^{s,j}\right),\quad v_p \leftarrow v_p - \eta\,\Delta v_p$;
  $\Delta c_q = \frac{1}{1+n_s^q}\sum_{1\le i\le n_b^q}\left(c_q^b - f_k^{s,i}\right),\quad c_q \leftarrow c_q - \eta\,\Delta c_q$;
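Each global-center update in the else branch is a damped moving-average step. A small sketch of one such step, assuming the increment averages (batch center − feature) over the selected samples, scaled by $1/(1+\text{count})$, before a step of size $\eta$ (all names hypothetical):

```python
import numpy as np

def update_center(center, batch_center, feats, count, eta):
    """One moving-average step for a global center: delta averages
    (batch_center - f) over the selected features, damped by
    1 / (1 + count); the center then moves against delta at rate eta."""
    delta = (batch_center - feats).sum(axis=0) / (1.0 + count)
    return center - eta * delta
```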

(6) Calculate $L_v$, $L_c$, $L_a$, $L_{ce}$, and $L_{total}$ using Equations (2), (6), and (10)–(12), respectively;

(7) Update the parameters $\theta_c$ and $\theta_f$:
  $\theta_c \leftarrow \theta_c - \mu \frac{\partial L_{ce}}{\partial \theta_c},\quad \theta_f \leftarrow \theta_f - \mu \frac{\partial L_{total}}{\partial \theta_f}$;

(8) $n \leftarrow n + 1$.

end while
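The loop as a whole can be sketched in NumPy. This is a toy stand-in only: $G_f$ is a linear map, squared error substitutes for $L_{ce}$, a mean-feature discrepancy substitutes for the alignment term, gradients are taken by finite differences, and the center bookkeeping of steps (3)–(5) is omitted for brevity; all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def losses(theta_f, theta_c, xb_s, lb_s, xb_t):
    """Toy surrogate losses; the paper's L_v, L_c, L_a, L_ce are
    defined by Equations (2), (6), and (10)-(12)."""
    fs, ft = xb_s @ theta_f, xb_t @ theta_f           # step (2): features
    err = fs @ theta_c - lb_s
    L_ce = (err ** 2).mean()                          # stand-in for L_ce
    L_a = ((fs.mean(0) - ft.mean(0)) ** 2).sum()      # stand-in alignment
    return L_ce, L_ce + L_a                           # (L_ce, L_total)

def num_grad(f, w, h=1e-5):
    """Central finite-difference gradient of f() w.r.t. the array w."""
    g = np.zeros_like(w)
    it = np.nditer(w, flags=["multi_index"])
    for _ in it:
        i = it.multi_index
        w[i] += h; hi = f()
        w[i] -= 2 * h; lo = f()
        w[i] += h
        g[i] = (hi - lo) / (2 * h)
    return g

def train(xs, ls, xt, theta_f, theta_c,
          lr=0.05, eps=1e-4, max_iter=100, nb=8):
    n, L_total = 1, np.inf
    while L_total > eps and n <= max_iter:            # loop condition
        i = rng.choice(len(xs), nb, replace=False)    # step (1): mini-batch
        j = rng.choice(len(xt), nb, replace=False)
        xb_s, lb_s, xb_t = xs[i], ls[i], xt[j]
        _, L_total = losses(theta_f, theta_c, xb_s, lb_s, xb_t)
        # step (7): theta_c follows L_ce, theta_f follows L_total
        theta_c -= lr * num_grad(
            lambda: losses(theta_f, theta_c, xb_s, lb_s, xb_t)[0], theta_c)
        theta_f -= lr * num_grad(
            lambda: losses(theta_f, theta_c, xb_s, lb_s, xb_t)[1], theta_f)
        n += 1                                        # step (8)
    return theta_f, theta_c
```

The two-rate structure of step (7) is preserved: the classifier parameters descend only the classification loss, while the feature extractor descends the total loss.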