Table 2.
The pros (+) and cons (−) of committing to a particular design choice with respect to robustness (R), efficiency (E), privacy (P), and fairness (F). The numbers indicate the magnitude of each effect; ✗ marks choices with no notable effect on that property.
| Design | R | Reason | E | Reason | P | Reason | F | Reason |
|---|---|---|---|---|---|---|---|---|
| NoInc. | −1 | encourages malicious behaviour | ✗ | – | ✗ | – | −2 | not fair for honest clients |
| Flat. | +1 | encourages honest behaviour | −1 | distributing rewards requires a little extra computation | ✗ | – | −1 | clients with well-trained models subsidize those with poorly trained models |
| ConRe. | +2 | strongly encourages honest behaviour | −2 | needs to calculate contributions from all clients | ✗ | – | +1 | clients are rewarded based on their contributions |
| Open. | −1 | more likely to include malicious trainers | ✗ | – | ✗ | – | +1 | all clients can equally become trainers |
| Res. | +1 | less likely to include malicious trainers | −1 | must add a filtering algorithm for trainer candidates | −1 | clients must disclose private information about themselves | −1 | only qualified clients can become trainers |
| OC | −1 | no record of the distributed models | ✗ | – | ✗ | – | −1 | hard to audit |
| Blo. | +2 | models are safely recorded in the blockchain | −2 | storing huge models in the blockchain is costly | −1 | all blockchain nodes can see the models | +1 | all blockchain nodes can audit the models |
| IPFS | +1 | the hash of the model is stored in the blockchain | −1 | cheaper than storing full models in the blockchain, but off-chain storage is still needed | −1 | all blockchain and storage nodes can see the models | +1 | all blockchain and storage nodes can audit the models |
| NoPrev. | −1 | attackers can obtain models | ✗ | – | −1 | attackers can obtain models | −1 | attackers can obtain models |
| Enc. | +1 | models are protected from third parties | −1 | additional steps are required for encryption | +1 | models are protected from leakage | +1 | only eligible entities can see the models |
| NoPrev. | ✗ | – | ✗ | – | −1 | attackers may obtain private data | ✗ | – |
| DP | −1 | decreases model accuracy | −1 | additional steps to add noise during training | +1 | private data is secured | ✗ | – |
| HE | ✗ | does not affect robustness | −1 | performing training on encrypted models is complex | +1 | private data is secured | ✗ | – |
| NoComp. | ✗ | – | ✗ | – | ✗ | – | ✗ | – |
| Comp. | ✗ | – | +1 | can save a lot of bandwidth | ✗ | – | ✗ | – |
| NoVer. | −1 | malicious models can jeopardize the global model | ✗ | – | ✗ | – | −2 | malicious models may outperform honest models |
| Sin. | +1 | models are verified by a reviewer | −1 | requires only a single validation step | −1 | models are leaked to a single reviewer | −1 | may not be fair if the reviewer is compromised |
| All. | +3 | models are peer-reviewed by all clients | −3 | models must be transferred to all clients | −3 | all clients know each other’s models | +2 | hard to compromise when validated by all clients |
| Boa. | +2 | models are verified by a few reviewers | −2 | models need to be delivered to a few reviewers | −2 | models are leaked to a few reviewers | +1 | slightly more difficult to compromise a few reviewers |
| Ran. | +1 | reduces the chance that malicious models are selected | −1 | needs a trustworthy random oracle | ✗ | – | ✗ | – |
| Repo. | +1 | reduces the chance that malicious models are selected | −1 | needs a mediator between accusers and victims | ✗ | – | ✗ | – |
| Vot. | +1 | reduces the chance that malicious models are selected | −1 | needs a voting mechanism involving all clients | ✗ | – | ✗ | – |
| Con. | +2 | has a higher chance of excluding malicious models | −2 | needs to calculate a contribution score for each client | ✗ | – | ✗ | – |
| NoPun. | −1 | may encourage malicious behaviour | ✗ | – | ✗ | – | −1 | malicious entities are not punished |
| Repu. | +1 | encourages honest behaviour | −1 | needs additional credit-score processing | −1 | clients must keep using the same account, which weakens anonymity | +1 | malicious entities are punished socially |
| Depo. | +1 | encourages honest behaviour | −1 | needs additional deposit processing | ✗ | – | +1 | malicious entities are punished economically |
| Sin. | −1 | a malicious aggregator can compromise the global model | ✗ | – | ✗ | – | −1 | not fair if the aggregator is malicious |
| Mul. | +1 | more difficult for a malicious aggregator to corrupt the global model | −1 | models need to be distributed to all aggregators | −1 | many nodes obtain information about the global models | +1 | the robust aggregation process boosts clients’ trust |
| Syn. | ✗ | – | −1 | must wait for slow trainers | ✗ | – | ✗ | – |
| Asy. | ✗ | – | +1 | aggregates without waiting for slow trainers | ✗ | – | ✗ | – |
| Off. | ✗ | – | ✗ | – | ✗ | – | ✗ | – |
| On. | +1 | the aggregation process becomes hard to tamper with | −1 | smart contract execution is costly and complex | −1 | all blockchain nodes can see the aggregation process | +1 | the aggregation process can be audited |
(1) How to attract trainers? NoInc. No incentive; Flat. Same reward; ConRe. Contribution-based reward; (2) How to select trainers? Open. Allow all; Res. Restricted trainer; (3) How to distribute models? OC Open channel; Blo. Blockchain; IPFS InterPlanetary File System; (4) How to prevent model leakage? NoPrev. No prevention; Enc. Use encryption; (5) How to prevent data leakage? NoPrev. No prevention; DP Differential privacy; HE Homomorphic encryption; (6) How to make communication efficient? NoComp. No compression; Comp. Model compression; (7) Who should become reviewers? NoVer. No verification; Sin. Single reviewer; All. All nodes are reviewers; Boa. A board of reviewers; (8) How to select models for aggregation? Ran. Random; Repo. Reporting; Vot. Voting; Con. Contribution scores; (9) How to punish malicious actors? NoPun. No punishment; Depo. Deposit mechanism; Repu. Reputation system; (10) Who can become aggregators? Sin. Single aggregator; Mul. Multiple aggregators; (11) How to aggregate models? Syn. Synchronous aggregation; Asy. Asynchronous aggregation; (12) Where does aggregation happen? On. On-chain; Off. Off-chain.
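To make a few of these trade-offs concrete, the minimal sketches below illustrate selected rows of the table. The first concerns contribution-based rewards (ConRe., choice 1) and contribution-based model selection (Con., choice 8). Leave-one-out scoring against a held-out validation set, used here, is only one possible contribution measure (Shapley-value approximations are a common alternative), and the toy linear model and helper names are illustrative assumptions. The sketch makes the efficiency penalty visible: every client adds one extra aggregation and evaluation.

```python
import numpy as np

def aggregate(updates):
    """Plain federated averaging of client model updates."""
    return np.mean(updates, axis=0)

def validation_loss(model, x_val, y_val):
    """Toy squared-error loss of a linear model on held-out validation data."""
    return float(np.mean((x_val @ model - y_val) ** 2))

def contribution_scores(updates, x_val, y_val):
    """Leave-one-out contribution: how much the loss worsens without client i."""
    base = validation_loss(aggregate(updates), x_val, y_val)
    scores = []
    for i in range(len(updates)):            # one extra aggregation + evaluation per client
        rest = updates[:i] + updates[i + 1:]
        scores.append(validation_loss(aggregate(rest), x_val, y_val) - base)
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([1.0, -2.0])
    x_val = rng.normal(size=(50, 2))
    y_val = x_val @ true_w
    honest = [true_w + rng.normal(scale=0.05, size=2) for _ in range(3)]
    poisoned = [np.array([10.0, 10.0])]      # one deliberately poisoned update
    print(contribution_scores(honest + poisoned, x_val, y_val))
    # The poisoned client receives the lowest (negative) score and can be
    # excluded from aggregation (Con.) or left unrewarded (ConRe.).
```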
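The Blo. and IPFS rows of choice (3) differ mainly in what is written on-chain. The sketch below assumes a plain SHA-256 digest and an `OnChainRegistry` object as a stand-in for a smart contract (real IPFS content identifiers use a multihash encoding, and no real chain API is used); it keeps the model bytes in an off-chain store and records only the digest on-chain, which is why the efficiency penalty is −1 rather than −2.

```python
import hashlib
import pickle

class OnChainRegistry:
    """Hypothetical stand-in for a smart contract mapping (round, client) to a digest."""
    def __init__(self):
        self._records = {}

    def publish(self, round_id, client_id, digest):
        self._records[(round_id, client_id)] = digest        # tiny on-chain footprint

    def verify(self, round_id, client_id, model_bytes):
        expected = self._records.get((round_id, client_id))
        return expected == hashlib.sha256(model_bytes).hexdigest()

off_chain_store = {}                     # stand-in for IPFS or other storage nodes
registry = OnChainRegistry()

model_update = {"w": [0.1, -0.2, 0.3]}   # toy model update
blob = pickle.dumps(model_update)
digest = hashlib.sha256(blob).hexdigest()

off_chain_store[digest] = blob           # the large payload stays off-chain
registry.publish(round_id=1, client_id="client-7", digest=digest)

# Any auditor can fetch the blob from storage and check it against the on-chain digest.
assert registry.verify(1, "client-7", off_chain_store[digest])
```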
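For the DP row of choice (5), a common pattern is to clip each local gradient and add Gaussian noise before it leaves the client, in the style of DP-SGD. The clip norm and noise multiplier below are arbitrary illustrative values rather than a calibrated privacy budget; the sketch shows both the extra per-step work behind the efficiency penalty and the source of the accuracy loss.

```python
import numpy as np

def privatize_gradient(grad, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """Clip the gradient to `clip_norm` and add Gaussian noise scaled to that norm."""
    if rng is None:
        rng = np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))   # bound each client's influence
    noise = rng.normal(scale=noise_multiplier * clip_norm, size=grad.shape)
    return clipped + noise                                  # what actually leaves the client

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    raw_grad = np.array([3.0, -4.0])                 # norm 5, so it gets clipped
    print(privatize_gradient(raw_grad, rng=rng))     # noisy, clipped gradient
```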
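The Comp. row of choice (6) can be as simple as top-k sparsification: transmit only the largest-magnitude entries of an update as (index, value) pairs and rebuild a dense vector on the receiving side. The 10% keep ratio below is an arbitrary illustrative choice; quantization or error-feedback variants would serve the same bandwidth-saving purpose.

```python
import numpy as np

def sparsify_topk(update, ratio=0.1):
    """Keep only the top `ratio` fraction of entries by magnitude."""
    k = max(1, int(len(update) * ratio))
    idx = np.argsort(np.abs(update))[-k:]       # indices of the largest-magnitude entries
    return idx, update[idx]

def densify(idx, values, size):
    """Rebuild a dense vector from the transmitted (index, value) pairs."""
    dense = np.zeros(size)
    dense[idx] = values
    return dense

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    update = rng.normal(size=1000)
    idx, vals = sparsify_topk(update, ratio=0.1)     # transmit roughly 10% of the payload
    restored = densify(idx, vals, update.size)
    print(idx.size, float(np.linalg.norm(update - restored)))
```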
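Finally, the Depo. row of choice (9) punishes misbehaviour economically: a client locks a stake before a round and forfeits it if its update is rejected by whichever verification scheme (Sin., All., or Boa.) is in place. The `DepositLedger` class below is a hypothetical in-memory stand-in for a deposit smart contract, and the stake and reward amounts are arbitrary.

```python
class DepositLedger:
    """Hypothetical in-memory stand-in for a deposit smart contract."""
    def __init__(self, stake=10.0):
        self.stake = stake
        self.balances = {}
        self.locked = {}

    def join_round(self, client, balance):
        """The client funds an account and locks the required stake for the round."""
        if balance < self.stake:
            raise ValueError("insufficient balance to cover the deposit")
        self.balances[client] = balance - self.stake
        self.locked[client] = self.stake

    def settle(self, client, accepted, reward=1.0):
        """Return the stake plus a reward if accepted; slash the stake otherwise."""
        stake = self.locked.pop(client)
        if accepted:
            self.balances[client] += stake + reward
        # a rejected (e.g. poisoned) update simply forfeits the locked stake

ledger = DepositLedger(stake=10.0)
ledger.join_round("honest", balance=25.0)
ledger.join_round("malicious", balance=25.0)
ledger.settle("honest", accepted=True)       # stake returned plus reward
ledger.settle("malicious", accepted=False)   # stake slashed
print(ledger.balances)                       # {'honest': 26.0, 'malicious': 15.0}
```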