Table 3. Algorithm 2: Adversarial learning

For each X in the training samples:

1. Compute the forward loss on X and obtain the gradient g by backpropagation: g = ∇XL(θ, X, Y).
2. Compute the perturbation radv from the gradient of the embedding matrix X and add it to the current embedding: radv = ϵ∙g/||g||2, Xadv = X + radv.
3. Compute the forward loss on Xadv and backpropagate to obtain the adversarial gradient, adding it to the gradient from step 1.
4. Restore the embedding to its value from step 1.
5. Update the parameters using the accumulated gradient from step 3.
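The five steps above match the FGM (fast gradient method) style of adversarial training. As a minimal sketch, the snippet below applies them to a toy scalar linear model L = ½(w·x − y)², so both gradients are available in closed form; the function name, model, and hyperparameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fgm_adversarial_step(w, x, y, eps=0.1, lr=0.01):
    """One adversarial training step following Algorithm 2,
    on the toy loss L = 0.5 * (w·x - y)^2 (a stand-in for the
    embedding-based network loss; names are illustrative)."""
    # Step 1: forward loss on x, gradient g = ∇_X L by "backprop"
    err = w @ x - y
    g = err * w                 # gradient w.r.t. the input x
    grad_w = err * x            # gradient w.r.t. parameters, kept for step 3

    # Step 2: r_adv = ϵ·g/||g||_2, X_adv = X + r_adv
    r_adv = eps * g / (np.linalg.norm(g) + 1e-12)
    x_adv = x + r_adv

    # Step 3: forward loss on X_adv, add its parameter gradient
    # to the gradient from step 1
    err_adv = w @ x_adv - y
    total_grad = grad_w + err_adv * x_adv

    # Step 4: restore the embedding (x itself was never overwritten)
    # Step 5: update the parameters with the accumulated gradient
    return w - lr * total_grad
```

In a real implementation the perturbation is applied to the embedding matrix and then reverted in place, which is why step 4 is an explicit restore; here x is simply left untouched.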