Protocol

| Ref. | Objective | Approach | Limitation / Future Work |
|---|---|---|---|
| [138] | Scalable production system for Federated Learning | A standard protocol as a basis for Federated Learning | Needs optimization for application-specific scenarios |
| [139] | Promote client selection under heterogeneous resource scenarios | FedCS protocol selects clients based on their resource availability | Relies on the truthfulness of clients' self-reported resource availability |
| [140] | Federated Learning for traffic prediction models | A protocol suited to small-scale Federated-Learning-enabled traffic control | Extension to a larger scale of recruited clients |
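The deadline-driven selection idea behind FedCS [139] can be sketched as follows. This is a minimal illustration, not the paper's full optimization: it assumes each client self-reports an estimated time to train and upload its update, and simply admits the clients whose estimates fit the round deadline (trusting those reports, which is the noted limitation).

```python
# Hedged sketch of FedCS-style client selection [139]; function and variable
# names are illustrative, not from the paper.
def select_clients(reported_times, round_deadline):
    """reported_times: {client_id: self-reported seconds to update + upload}.
    Returns clients whose reported time fits the round deadline, fastest first,
    trusting the self-reported figures (a noted limitation of the scheme)."""
    ranked = sorted(reported_times.items(), key=lambda kv: kv[1])
    return [client for client, t in ranked if t <= round_deadline]
```

A fuller treatment would also model sequential upload bandwidth rather than treating each client's time independently.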
Aggregation

| Ref. | Objective | Approach | Limitation / Future Work |
|---|---|---|---|
| [141] | A standard aggregation method | FedAvg algorithm aggregates local updates by averaging model parameters | An alternative to weighting all local model updates equally during aggregation |
| [143] | Optimize Federated Learning in heterogeneous networks | A proximal term limits the impact of variable local updates, allowing partial work to be done | Solutions for cases where not all updates contribute positively |
| [144] | Optimize Federated Learning through data distribution | Loss-based Adaptive Boosting compares local model losses prior to aggregation | Extensions to consider heterogeneous contribution scenarios during aggregation |
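The FedAvg aggregation of [141] reduces to a data-size-weighted average of the clients' parameter vectors; the sketch below shows that step in isolation (representation of a model as a flat list of floats is a simplifying assumption for illustration).

```python
# Minimal FedAvg aggregation sketch [141]: the server averages local model
# parameters weighted by each client's local dataset size.
def fed_avg(updates):
    """updates: list of (num_samples, params), params a list of floats.
    Returns the data-size-weighted average of the parameter vectors."""
    total = sum(n for n, _ in updates)
    agg = [0.0] * len(updates[0][1])
    for n, params in updates:
        w = n / total                      # weight by local data size
        for i, p in enumerate(params):
            agg[i] += w * p
    return agg
```

The open issue noted for [141] concerns exactly this weighting: schemes such as the proximal-term approach of [143] or the loss-based comparison of [144] modify what enters, or how much each client contributes to, this average.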
Reputation Models

| Ref. | Objective | Approach | Limitation / Future Work |
|---|---|---|---|
| [145] | Incentive mechanism to promote reliable Federated Learning | Multi-weight subjective logic formulates reputation scores | More advanced reputation scores that directly reflect user performance |
| [146] | Enhanced client selection to improve model performance | Local model performance metrics formulate reputation scores | Minimal computational overhead for assessing every user's reputation score |
| [147] | Reputation-awareness | Interaction records generate reputation opinions | Reputation scores that directly reflect user performance |
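As a rough illustration of how interaction records become reputation opinions (cf. [145], [147]), the sketch below uses a plain binomial subjective-logic opinion: positive and negative interaction counts map to belief, disbelief, and uncertainty, and the score is the opinion's expected value. The multi-weight extensions of [145] are omitted; the function name and base-rate default are illustrative assumptions.

```python
# Sketch of a subjective-logic reputation score from interaction records;
# a simple binomial opinion, not the multi-weight formulation of [145].
def reputation(pos, neg, base_rate=0.5):
    """pos/neg: counts of positive/negative interactions with a client.
    Returns the opinion's expected value E = belief + base_rate * uncertainty."""
    total = pos + neg + 2.0          # 2 = non-informative prior weight
    belief = pos / total
    uncertainty = 2.0 / total
    return belief + base_rate * uncertainty
```

With no interaction history the score stays at the base rate, and it converges toward the observed success ratio as evidence accumulates, which is the property these reputation schemes rely on for client selection.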
Differential Privacy

| Ref. | Objective | Approach | Limitation / Future Work |
|---|---|---|---|
| [150] | Enhanced privacy preservation through sketching | Obfuscation of the original data achieves differential privacy | Trade-off between performance and privacy gain |
| [41] | Differential privacy in Federated Learning | Noise added before model aggregation | Consideration of varied size and distribution of user data |
| [151] | Enhanced privacy and efficiency of Federated Learning in industrial AI applications | Gaussian noise added to local models | Extensive analyses on high-dimensional data |
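The noise-before-aggregation idea of [41] and the Gaussian noising of [151] both follow the standard Gaussian mechanism: clip each local update's L2 norm, then add per-coordinate Gaussian noise scaled to that clipping bound. The sketch below shows this step with illustrative hyperparameters; it is not either paper's exact calibration.

```python
import random

# Hedged sketch of Gaussian-mechanism noising of a local update before
# aggregation (cf. [41], [151]); clip_norm and sigma are illustrative.
def privatize_update(update, clip_norm=1.0, sigma=1.0, rng=random):
    """Clip the update's L2 norm to clip_norm, then add N(0, (sigma*clip_norm)^2)
    noise to each coordinate, bounding any one client's influence."""
    norm = sum(x * x for x in update) ** 0.5
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [x * scale for x in update]
    return [x + rng.gauss(0.0, sigma * clip_norm) for x in clipped]
```

The performance-versus-privacy tension noted for [150] shows up here directly: larger sigma gives stronger privacy but noisier aggregates, and the open issue in [41] is that a single clip/noise setting ignores clients' varied data sizes and distributions.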
Blockchain

| Ref. | Objective | Approach | Limitation / Future Work |
|---|---|---|---|
| [152] | Accountable Federated Learning | Combines an aggregator with a blockchain to preserve user privacy | Fairness assurance in participant rewarding |
| [153] | Enhanced privacy for Federated Learning | Noise added to the original data at the initial stage; blockchain facilitates the Federated Learning process | Tackling potential performance loss from noising too early |
| [154] | Improved fairness and privacy in Federated Learning | Rewards scaled with respect to participant contribution | Extension to non-IID scenarios |
| [155] | Privacy-preserving Federated Learning for industrial IoT applications | Blockchain with the Inter-Planetary File System (IPFS) and noise added to local model features | Extension to non-IID data or heterogeneous-device scenarios |
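The accountability that blockchain brings to Federated Learning (cf. [152]) rests on a tamper-evident record of training events. The sketch below shows only that core property with a single in-memory hash chain; a real deployment would use a distributed ledger, and the record format here is an illustrative assumption.

```python
import hashlib

# Hedged sketch of a blockchain-style, tamper-evident log of model-update
# records for accountable Federated Learning (cf. [152]).
def append_block(chain, record):
    """chain: list of (record, hash) tuples. Each block's hash covers the
    previous block's hash, so altering any earlier record breaks the chain."""
    prev_hash = chain[-1][1] if chain else "0" * 64
    digest = hashlib.sha256((prev_hash + record).encode()).hexdigest()
    chain.append((record, digest))
    return chain

def verify(chain):
    """Recompute every hash from the genesis value; any edit is detected."""
    prev_hash = "0" * 64
    for record, digest in chain:
        if hashlib.sha256((prev_hash + record).encode()).hexdigest() != digest:
            return False
        prev_hash = digest
    return True
```

Contribution-scaled rewarding as in [154] can then be settled against such a log, since recorded contributions can no longer be altered after the fact.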