2020 May 5;4(5):e14064. doi: 10.2196/14064

Table 2.

Summary of the experiments.

Name     Learning strategy
A        Centralized training using stochastic gradient descent (learning rate: 0.9)
B        Collaborative training of 5 workers using the round-robin method (learning rate: 0.9)
C_0.1    Collaborative privacy-preserving training of 5 workers with DSSGDa (θd = 0.1, θu = 0.5, γ = 10, τ = 0.0001; learning rate: 0.9)
C_0.5    Collaborative privacy-preserving training of 5 workers with DSSGD (θd = 0.5, θu = 0.5, γ = 10, τ = 0.0001; learning rate: 0.9)
D        Local training of 5 workers without collaboration, using stochastic gradient descent

aDSSGD: distributed selective stochastic gradient descent.
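As a rough illustration of the DSSGD parameters listed above, the sketch below shows the selective-sharing step a worker could apply before uploading gradients: values are thresholded at τ, at most a θu fraction is selected, and each shared value is clipped to the bound γ. This is a minimal sketch under the common reading of these parameters (θu as upload fraction, τ as share threshold, γ as clipping bound); the function name and selection-by-magnitude policy are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def select_gradients_for_upload(grads, theta_u=0.5, gamma=10.0, tau=0.0001):
    """Illustrative selective-sharing step of DSSGD (sketch).

    grads:   flat array of a worker's local gradients.
    theta_u: fraction of gradients the worker may upload.
    gamma:   bound applied to each shared gradient value.
    tau:     threshold; gradients with magnitude below tau are not shared.
    Returns (indices, values) of the gradients chosen for upload.
    """
    grads = np.asarray(grads, dtype=float)
    budget = int(theta_u * grads.size)               # upload at most theta_u of all gradients
    candidates = np.where(np.abs(grads) >= tau)[0]   # drop near-zero updates
    # Illustrative policy: keep the largest-magnitude gradients within the budget.
    order = candidates[np.argsort(-np.abs(grads[candidates]))][:budget]
    shared = np.clip(grads[order], -gamma, gamma)    # bound each shared value
    return order, shared

# Example: with theta_u = 0.5, only half of the four gradients are uploaded,
# and the largest one is clipped to the bound gamma = 10.
idx, vals = select_gradients_for_upload([0.2, -3.0, 0.00001, 12.0], theta_u=0.5)
```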