
TABLE X.

Features of Notable TinyML Federated Learning Frameworks

| Framework | FL Strategy | Communication Stack | Scalability and Heterogeneity | Privacy | Client Hardware (Language) | Open-Source |
|---|---|---|---|---|---|---|
| Flower [190], [191] | FedAvg, fault-tolerant FedAvg, FedProx, QFedAvg, FedAdagrad, FedYogi, FedAdam | Bidirectional gRPC with ClientProxy (language-, communication-, and serialization-agnostic) | FedFS (partial work, importance sampling, and dynamic time-outs to handle bandwidth heterogeneity); Virtual Client Engine for scheduling and resource management (15M clients tested) | Salvia secure aggregation | CPU, GPU, MCU (Python, Java, C++) | ✓ |
| FedPARL [192] | Reparametrized FedAvg with sample-based pruning | None (simulated) | Resource tracking (memory, battery life, bandwidth, and data volume); trust-value tracking (task completion, delay, and model integrity); partial work (12 clients tested) | Vanilla model aggregation | None (simulated) | χ |
| DIoT [193] | FedAvg | Bidirectional WebSocket protocol over WiFi and Ethernet | AuDI device-type identification (15 clients tested) | Vanilla model aggregation | CPU, GPU (Python and JavaScript) | χ |
| PruneFL [194] | FedAvg with adaptive and distributed pruning | WiFi and Ethernet, with distributed pruning to reduce communication overhead | Adaptive pruning that resizes local models based on resource availability (9 clients tested) | Vanilla model aggregation | CPU, MCU (Python) | ✓ |
| TinyFedTL [195] | FedAvg with last-layer transfer learning | USART | 9 clients tested | Vanilla model aggregation | MCU (C++) | ✓ |
| FLAgr [196] | Reinforcement learning | None (simulated) | Real-time collaboration-scheme discovery via deep deterministic policy gradient (1000 clients tested) | Rating feedback mechanism | None (simulated) | χ |
| PerFit [197] | FedPer, FedHealth, FedAvg, personalized FedAvg, MOCHA, FedMD, federated distillation | WiFi, BLE, cellular (simulated) | Federated transfer learning, federated distillation, federated meta-learning, and federated multi-task learning to personalize models under device and statistical heterogeneity (30 clients tested) | Vanilla model aggregation | None (simulated) | χ |
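Nearly every framework in Table X builds on FedAvg, in which the server replaces the global model with a sample-count-weighted average of the client models. As a point of reference, the sketch below implements that server-side step in plain NumPy; the function and variable names are illustrative, not taken from any of the listed frameworks.

```python
import numpy as np

def fedavg(client_updates):
    """One FedAvg aggregation round: average client weights,
    weighted by each client's local sample count.

    client_updates: list of (weights, num_samples) pairs, where
    weights is a list of np.ndarray, one array per model layer.
    """
    total = sum(n for _, n in client_updates)
    num_layers = len(client_updates[0][0])
    return [
        sum(w[i] * (n / total) for w, n in client_updates)
        for i in range(num_layers)
    ]

# Toy round: three clients holding 100, 50, and 50 samples.
updates = [
    ([np.ones((2, 2)), np.zeros(2)], 100),
    ([np.full((2, 2), 3.0), np.ones(2)], 50),
    ([np.zeros((2, 2)), np.ones(2)], 50),
]
new_global = fedavg(updates)  # layer 0 -> all 1.25, layer 1 -> all 0.5
```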
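Among the Flower strategies, FedProx (Li et al., MLSys 2020) differs from FedAvg only on the client side: a proximal penalty added to the local loss limits drift from the global model under non-IID data. A minimal PyTorch sketch of one local step follows; the function name and argument layout are illustrative.

```python
import torch

def fedprox_step(model, global_params, batch, loss_fn, optimizer, mu=0.01):
    """One local training step with the FedProx proximal term
    (mu / 2) * ||w - w_global||^2 added to the task loss."""
    inputs, targets = batch
    loss = loss_fn(model(inputs), targets)
    # Penalize distance from the global model received this round.
    prox = sum((p - g.detach()).pow(2).sum()
               for p, g in zip(model.parameters(), global_params))
    optimizer.zero_grad()
    (loss + 0.5 * mu * prox).backward()
    optimizer.step()
    return float(loss)
```

Setting mu=0 recovers plain FedAvg local training, which is why frameworks typically expose the two strategies through the same interface.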
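Flower's client-side Python API (as of the 1.x releases) wraps local training in a NumPyClient that exchanges lists of NumPy arrays with the server over gRPC. The sketch below assumes a small Keras model and synthetic data standing in for a device's private sensor readings; the class name is illustrative.

```python
import numpy as np
import tensorflow as tf
import flwr as fl

# Tiny stand-in model; a real deployment would use the on-device network.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(2, activation="softmax"),
])
model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])

# Synthetic local data standing in for this client's private samples.
x = np.random.rand(64, 4).astype("float32")
y = np.random.randint(0, 2, size=64)

class SensorClient(fl.client.NumPyClient):
    def get_parameters(self, config):
        return model.get_weights()            # -> list of ndarrays

    def fit(self, parameters, config):
        model.set_weights(parameters)         # load the global model
        model.fit(x, y, epochs=1, batch_size=16, verbose=0)
        return model.get_weights(), len(x), {}

    def evaluate(self, parameters, config):
        model.set_weights(parameters)
        loss, acc = model.evaluate(x, y, verbose=0)
        return loss, len(x), {"accuracy": float(acc)}

# Connects over gRPC to a server started elsewhere, e.g. with
# fl.server.start_server(config=fl.server.ServerConfig(num_rounds=3)).
fl.client.start_numpy_client(server_address="127.0.0.1:8080",
                             client=SensorClient())
```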
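PruneFL's adaptive pruning selects which parameters each client keeps from measured resource availability and importance estimates; the same mask then shrinks both local computation and the update that must be communicated. The sketch below is a deliberately simplified stand-in that keeps only the largest-magnitude fraction of weights, not PruneFL's actual selection rule.

```python
import numpy as np

def magnitude_prune(weights, keep_ratio):
    """Zero all but the largest-magnitude `keep_ratio` fraction of
    parameters; return pruned weights and the binary masks.
    (PruneFL adapts keep_ratio per round from client resources; this
    global magnitude rule is a simplified stand-in.)"""
    flat = np.concatenate([np.abs(w).ravel() for w in weights])
    threshold = np.quantile(flat, 1.0 - keep_ratio)
    masks = [np.abs(w) >= threshold for w in weights]
    return [w * m for w, m in zip(weights, masks)], masks

weights = [np.random.randn(128, 64), np.random.randn(64)]
pruned, masks = magnitude_prune(weights, keep_ratio=0.3)
# Only the ~30% surviving entries need to be trained and transmitted.
```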
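TinyFedTL makes federation feasible over a USART link by combining FedAvg with transfer learning: a frozen, pre-trained feature extractor stays on the MCU, and only the final layer is trained and exchanged. The reference implementation is C++ on-device; the Keras sketch below is an illustrative Python analogue, with MobileNetV2 as an assumed stand-in for the frozen backbone.

```python
import tensorflow as tf

# Frozen, pre-trained feature extractor: never updated, never transmitted.
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, pooling="avg")
base.trainable = False

# Trainable classification head: the only parameters trained locally
# and exchanged with the FedAvg server each round.
head = tf.keras.layers.Dense(3, activation="softmax")
model = tf.keras.Sequential([base, head])
model.compile("adam", "sparse_categorical_crossentropy")

def shared_parameters():
    # Head kernel + bias (~15 kB in float32 here) vs. the full model.
    return head.get_weights()

def load_shared_parameters(weights):
    head.set_weights(weights)
```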