2021 May 26;21(11):3708. doi: 10.3390/s21113708
Algorithm 2 Implementation of the CNPs–SDE or ANPs–SDE models
Inputs: ID dataset pID(x, y); MR, the missing rate of the ID dataset; the downsampling layer h1, which is the encoder of the CNPs or ANPs model; f and g, the drift net and diffusion net, respectively; L1, the negative log-likelihood loss function for the CNPs model or the ELBO for the ANPs model; L2, the binary cross-entropy loss function; the fully connected layer h2, which is the decoder of the CNPs or ANPs model and produces Means and Vars.
Outputs: Means and Vars
for #training iterations do
1. Sample a minibatch of m data: (Xm,Ym) ~ pID(x,y);
2. Forward through the downsampling net: d_mean_z = h1(X^m) and X_0^m = (X^m, d_mean_z);
3. Forward through the SDE-Net block:
4. for k = 0 to n1 do
5. Sample Z_k ~ N(0, 1);
6. X_(k+1)^m = X_k^m + f(X_0^m, t)Δt + g(X_0^m)√(Δt) Z_k;
7. end for
8. Means, Vars = h2(X_(k+1)^m);
9. Update h1, h2, and f by descending the gradient ∇_(h1, h2, f) (1/m) Σ L1(Means, Y^m);
10. Sample a minibatch of m data from ID: (Xm,0)~pID(x,y);
11. Sample a minibatch of m OOD data: (X̃^m, 1) ~ pOOD(x, y);
12. Forward through the downsampling or upsampling nets of the SDE-Net block: X_0^m, X̃_0^m = h1(X^m), h1(X̃^m);
13. Update g by descending the gradient ∇_g (1/m) Σ [L2(g(X_0^m), 0) + L2(g(X̃_0^m), 1)];
end for
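The Euler–Maruyama forward pass of steps 4–7 can be sketched as below. This is a minimal NumPy illustration, not the paper's implementation: the linear drift W_f, the constant diffusion value 0.05, the step size dt, the step count n1, and the batch/feature sizes are all hypothetical stand-ins for the trained drift and diffusion nets.

```python
import numpy as np

rng = np.random.default_rng(0)

def sde_block_forward(x0, f, g, n1=10, dt=0.1):
    """Euler–Maruyama discretisation over n1 steps (steps 4-7):
    X_(k+1) = X_k + f(X_0, t) * dt + g(X_0) * sqrt(dt) * Z_k."""
    x = x0.copy()
    for k in range(n1):
        t = k * dt
        z = rng.standard_normal(x.shape)      # Z_k ~ N(0, 1)
        x = x + f(x0, t) * dt + g(x0) * np.sqrt(dt) * z
    return x

# Hypothetical toy drift and diffusion nets (simple linear / constant maps):
W_f = rng.standard_normal((4, 4)) * 0.1
f = lambda x, t: x @ W_f                      # drift net stand-in
g = lambda x: np.full(x.shape, 0.05)          # small diffusion on ID data

x0 = rng.standard_normal((8, 4))              # minibatch of m = 8, feature dim 4
x_final = sde_block_forward(x0, f, g)
print(x_final.shape)                          # (8, 4)
```

The final state x_final plays the role of X_(k+1)^m, which the decoder h2 would then map to Means and Vars.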
for #testing iterations do
14. Evaluate the CNPs–SDE or ANPs–SDE models;
15. Sample a minibatch of m data from ID: (Xm,Ym)~pID(x,y);
16. mask = Bernoulli(1 − MR);
17. masked_Xm = mask ⊙ X^m, where ⊙ denotes the element-wise product;
18. Means, Vars = CNPs_SDE(masked_Xm) or ANPs_SDE(masked_Xm).
end for
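The test-time masking in steps 16–17 can be sketched as follows. Assumptions not in the algorithm: the missing rate MR = 0.3 and the batch shape (8, 16) are illustrative, and the Bernoulli mask is drawn independently per feature.

```python
import numpy as np

rng = np.random.default_rng(1)

MR = 0.3                                  # missing rate of the ID dataset
Xm = rng.standard_normal((8, 16))         # minibatch of m = 8 test samples

# Step 16: each entry survives with probability 1 - MR.
mask = rng.binomial(1, 1.0 - MR, size=Xm.shape)

# Step 17: the element-wise (Hadamard) product zeroes the "missing" entries.
masked_Xm = mask * Xm

print(masked_Xm.shape)                    # (8, 16)
```

The masked batch masked_Xm is then fed to the CNPs–SDE or ANPs–SDE model, which predicts Means and Vars for the artificially missing entries.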