2024 Jul 15;24(14):4591. doi: 10.3390/s24144591
Algorithm 1. Adaptive Federated Learning Algorithm with Divergence Estimation and Weight Calculation.
Input:
  Global model parameters ω^t
  Local dataset D_i for each client i
  Local learning-rate schedule η_i(t) for each client i
Output:
  Updated global model parameters ω^{t+1}
//Client-Side:
1: for each client i in parallel do
2:   for each epoch e do
3:     Train a local copy of ω^t on D_i using η_i(t)
4:     Obtain ω_i^{t+1}
5:   end for
6:   Calculate local performance metrics of ω_i^{t+1} on D_i
7:   Estimate the divergence between P_i (the client's data distribution) and P_t (the global
     model distribution) using the Kullback–Leibler (KL) divergence (Equation (2))
8:   if the divergence is acceptable then
9:     Calculate the importance weight α_i^t based on the divergence estimate
10:  else
11:    Adjust the learning rate η_i(t) and retrain the local model
12:  end if
13: end for

//Server-Side:
14: Sample clients with probability proportional to α_i^t
15: Collect ω_i^{t+1} from the selected clients
16: Aggregate the updates as a weighted average with weights α_i^t (Equation (4))
17: Update the global model (Equation (5))
18: if the global model has converged then
19:   Broadcast ω^{t+1} to all clients
20: else
21:   Repeat the process for the next round
22: end if
Return: Updated global model parameters ω^{t+1}
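The round structure above can be sketched in Python. This is a minimal illustration, not the paper's implementation: Equations (2), (4), and (5) are not reproduced in this excerpt, so `importance_weight` (an assumed exponential decay in the divergence), the linear-model `local_update`, the divergence `threshold`, and the learning-rate adjustment factor are all illustrative assumptions.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # KL(P || Q) for discrete distributions (the paper's Equation (2)
    # is not shown here; this is the standard definition).
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def importance_weight(divergence, tau=1.0):
    # Assumed form: the weight alpha_i^t decays as divergence grows.
    return float(np.exp(-divergence / tau))

def local_update(global_w, X, y, lr, epochs=5):
    # Plain SGD on a linear least-squares model as a stand-in for
    # the client's local training loop (lines 2-5).
    w = global_w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def aggregate(updates, weights):
    # Weighted average of client models with alpha_i^t as weights
    # (analogue of Equations (4)-(5)).
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return sum(a * w for a, w in zip(weights, updates))

def federated_round(global_w, clients, lr=0.1, threshold=2.0):
    # One communication round; each client is (X, y, P_i, P_t).
    updates, weights = [], []
    for X, y, p_i, p_t in clients:
        w_i = local_update(global_w, X, y, lr)
        d = kl_divergence(p_i, p_t)
        if d > threshold:
            # Divergence unacceptable: lower the learning rate and
            # retrain (lines 10-11); the 0.5 factor is an assumption.
            w_i = local_update(global_w, X, y, lr * 0.5)
        updates.append(w_i)
        weights.append(importance_weight(d))
    return aggregate(updates, weights)
```

In this sketch every client participates and the weights drive only the aggregation; the paper's server additionally samples clients with probability proportional to α_i^t (line 14), which would replace the plain loop with a weighted draw over the client set.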