Table 2.
Summary of the experiments.
Name | Learning strategy
----- | -----------------
A | Centralized training using stochastic gradient descent (learning rate: 0.9)
B | Collaborative training of 5 workers using the round-robin method (learning rate: 0.9)
C_0.1 | Collaborative privacy-preserving training of 5 workers with DSSGD^a (θd = 0.1, θu = 0.5, γ = 10, τ = 0.0001; learning rate: 0.9)
C_0.5 | Collaborative privacy-preserving training of 5 workers with DSSGD (θd = 0.5, θu = 0.5, γ = 10, τ = 0.0001; learning rate: 0.9)
D | Local training of 5 workers without collaboration, using stochastic gradient descent
^a DSSGD: distributed selective stochastic gradient descent.
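As a rough illustration of the DSSGD configurations in rows C_0.1 and C_0.5, the sketch below shows the selective-upload step: a worker shares only the largest-magnitude fraction θu of its gradient components, suppresses components below a threshold τ, and clips shared values to [-γ, γ]. This is a minimal sketch under stated assumptions; the function name, the flat-array interface, and the exact selection order are illustrative, not taken from the experiments above.

```python
import numpy as np

def select_gradients_for_upload(grads, theta_u=0.1, tau=0.0001, gamma=10.0):
    """Illustrative DSSGD-style selective upload (names are assumptions).

    Keep at most a theta_u fraction of gradient components, chosen by
    magnitude, drop those with |g| <= tau, and clip the rest to
    [-gamma, gamma] before sharing with the parameter server.
    """
    flat = grads.ravel()
    k = max(1, int(theta_u * flat.size))              # budget of components to share
    candidates = np.argsort(np.abs(flat))[::-1][:k]   # largest-magnitude indices first
    return [(int(i), float(np.clip(flat[i], -gamma, gamma)))
            for i in candidates if abs(flat[i]) > tau]

# Example: with theta_u = 0.5 only 2 of 4 components are shared,
# and the large component is clipped to the bound gamma = 10.
grads = np.array([0.5, -20.0, 0.00001, 0.2])
print(select_gradients_for_upload(grads, theta_u=0.5))
```

The download side (θd) is symmetric: each worker pulls back only a θd fraction of the most recently updated parameters, which is what distinguishes C_0.1 from C_0.5 in the table.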