Algorithm 2 Implementation of the CNPs–SDE or ANPs–SDE models

Inputs: the ID dataset; MR, the missing rate of the ID dataset; the downsampling layer, i.e., the encoder of the CNPs or ANPs model; f and g, the drift net and the diffusion net, respectively; the negative log-likelihood loss function for the CNPs model or the ELBO loss function for the ANPs model; the binary cross-entropy loss function; the fully connected layer, i.e., the decoder of the CNPs or ANPs model, which produces the Means and Vars.
Outputs: Means and Vars
for #training iterations do
1. Sample a minibatch of m data points x from the ID dataset;
2. Forward x through the downsampling net (the encoder) to obtain the mean and variance of the latent representation, d_mean_z and d_var_z;
3. Forward through the SDE-Net block (steps 4–7; a code sketch follows the algorithm):
4. for k = 0 to N − 1 do, where N is the number of discretization steps
5.   Sample Z_k ~ N(0, I);
6.   x_{k+1} = x_k + f(x_k, t_k)Δt + g(x_0)√Δt Z_k, where Δt is the step size and x_0 is the state entering the SDE-Net block;
7. end for
8. Means, Vars = fully connected layer (decoder) applied to the terminal state x_N;
9. Update the downsampling layer and f by descending the gradient of the negative log-likelihood loss (CNPs) or the ELBO (ANPs);
10. Sample a minibatch of data from the ID dataset;
11. Sample a minibatch of OOD data;
12. Forward both minibatches through the downsampling or upsampling nets of the SDE-Net block;
13. Update g by descending the gradient of the binary cross-entropy loss (both gradient updates are sketched after the algorithm);
end for
for #testing iterations do
14. Evaluate the CNPs–SDE or ANPs–SDE models;
15. Sample a minibatch of m data points x from the ID dataset;
16. mask = Bernoulli(1 − MR);
17. masked_x = mask ∗ x;
18. Means, Vars = CNPs_SDE(masked_x) or ANPs_SDE(masked_x) (the masking is sketched after the algorithm).
end for
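To make steps 4–7 concrete, the following is a minimal PyTorch sketch of the Euler–Maruyama pass through the SDE-Net block, assuming the downsampled representation is a flat feature vector. The class name SDENetBlock, the layer sizes, the number of steps, and the time horizon are illustrative choices, not values taken from the paper.

```python
import torch
import torch.nn as nn


class SDENetBlock(nn.Module):
    """Propagate a hidden state through N Euler-Maruyama steps of
    dx = f(x, t) dt + g(x0) dW, with the diffusion frozen at the initial state x0."""

    def __init__(self, dim: int, n_steps: int = 10, horizon: float = 1.0):
        super().__init__()
        self.n_steps = n_steps
        self.dt = horizon / n_steps
        # Drift net f: takes the current state plus a scalar time channel.
        self.f = nn.Sequential(nn.Linear(dim + 1, dim), nn.Tanh(), nn.Linear(dim, dim))
        # Diffusion net g: sees only x0 and outputs one diffusion magnitude per sample.
        self.g = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())

    def forward(self, x0: torch.Tensor) -> torch.Tensor:
        x = x0
        sigma = self.g(x0)                       # g(x0) is reused at every step
        for k in range(self.n_steps):            # step 4: for k = 0 to N - 1
            t = torch.full((x.shape[0], 1), k * self.dt)
            z = torch.randn_like(x)              # step 5: Z_k ~ N(0, I)
            # step 6: x_{k+1} = x_k + f(x_k, t_k) dt + g(x_0) sqrt(dt) Z_k
            x = x + self.f(torch.cat([x, t], dim=-1)) * self.dt \
                  + sigma * (self.dt ** 0.5) * z
        return x


if __name__ == "__main__":
    block = SDENetBlock(dim=32)
    x0 = torch.randn(8, 32)          # stand-in for the output of the downsampling net
    print(block(x0).shape)           # torch.Size([8, 32])
```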
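Steps 9 and 13 alternate two gradient updates. The sketch below follows the SDE-Net convention that the diffusion net g is trained as an ID-vs-OOD classifier (low diffusion on ID data, high on OOD) and uses toy stand-in modules and minibatches; all names (encoder, decoder, opt_f, opt_g, x_ood, ...) are placeholders rather than the paper's notation, and a Gaussian NLL stands in for the CNPs loss (the ANPs ELBO would add a KL term).

```python
import torch
import torch.nn as nn

dim, out_dim = 32, 1
encoder = nn.Linear(16, dim)                         # placeholder CNPs/ANPs encoder (downsampling net)
f = nn.Sequential(nn.Linear(dim, dim), nn.Tanh())    # placeholder drift net (time input omitted)
g = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())   # placeholder diffusion net
decoder = nn.Linear(dim, 2 * out_dim)                # placeholder decoder -> (Means, log Vars)

opt_f = torch.optim.Adam(
    [*encoder.parameters(), *f.parameters(), *decoder.parameters()], lr=1e-3)
opt_g = torch.optim.Adam(g.parameters(), lr=1e-3)
bce = nn.BCELoss()
n_steps, dt = 10, 0.1


def sde_forward(x0):
    """Simplified Euler-Maruyama pass (see the previous sketch for the full block)."""
    x, sigma = x0, g(x0)
    for _ in range(n_steps):
        x = x + f(x) * dt + sigma * (dt ** 0.5) * torch.randn_like(x)
    return x


# Toy stand-in minibatches for one training iteration.
x_id, y_id = torch.randn(8, 16), torch.randn(8, out_dim)
x_ood = torch.randn(8, 16) + 4.0                     # stand-in OOD minibatch

# Steps 1-9: forward ID data, Gaussian NLL on (Means, Vars), update encoder, f, decoder.
h = sde_forward(encoder(x_id))
means, log_vars = decoder(h).chunk(2, dim=-1)
nll = 0.5 * (log_vars + (y_id - means) ** 2 / log_vars.exp()).mean()
opt_f.zero_grad()
nll.backward()
opt_f.step()

# Steps 10-13: train g to output low diffusion on ID data and high diffusion on OOD data.
sigma_id = g(encoder(x_id).detach())
sigma_ood = g(encoder(x_ood).detach())
loss_g = bce(sigma_id, torch.zeros_like(sigma_id)) + bce(sigma_ood, torch.ones_like(sigma_ood))
opt_g.zero_grad()
loss_g.backward()
opt_g.step()
```

Keeping two separate optimizers mirrors the alternating updates in the algorithm: the drift-side parameters only see the likelihood objective, while g only sees the ID-vs-OOD objective.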
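Steps 16–18 hide a fraction MR of each test minibatch before imputation. A small sketch, assuming the data is a dense tensor and that each entry is dropped independently with probability MR; cnps_sde is a placeholder for the trained CNPs–SDE (or ANPs–SDE) model.

```python
import torch


def mask_inputs(x: torch.Tensor, mr: float) -> torch.Tensor:
    """Steps 16-17: keep each entry with probability 1 - MR and zero out the rest."""
    mask = torch.bernoulli(torch.full_like(x, 1.0 - mr))
    return mask * x


x = torch.randn(8, 16)                  # stand-in minibatch of ID test data
masked_x = mask_inputs(x, mr=0.3)
# Step 18 (placeholder call): means, vars = cnps_sde(masked_x)
```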