Table 2.
Summary of notations.
| Notation | Description |
|---|---|
| $\hat{y}_i$ | Final predicted output for the $i$th input sample |
| $x_i$ | The $i$th input data sample |
| $\hat{y}_n(x_i)$ | Predicted class label by the $n$th model or client for input $x_i$ |
| $\sum_{n=1}^{N}$ | Summation over all $N$ models or clients |
| $\frac{1}{N}$ | Averaging factor to compute the mean prediction from all contributors |
| $N$ | Total number of models or clients |
| $\mathrm{mode}(\cdot)$ | Statistical mode function that returns the most frequent class label |
| $L(\theta)$ | Total loss function with parameters $\theta$ |
| $n$ | Total number of data samples |
| $l(y_i, \hat{y}_i)$ | Loss between the ground truth and the predicted output |
| $\sum_{i=1}^{n}$ | Summation over all $n$ training samples |
| $\sum_{k=1}^{K}$ | Summation over all $K$ model components |
| $\Omega(\theta_k)$ | Regularization term for the $k$th model component |
| $\theta_k$ | Model parameters of the $k$th component |
| $\theta$ | Overall set of model parameters |
| $b_T$ | Bias or constant term related to iteration $T$ |
| $\lambda$ | Regularization coefficient |
| $T$ | Total number of training iterations or time steps |
| $w_j$ | Model weight parameter at step $j$ |
| $\sum_j w_j^2$ | Sum of squared weights |
| $\lambda \sum_j w_j^2$ | $L_2$ regularization term |
| $f_k(x)$ | Output of the $k$th model when applied to input $x$ |
| $K$ | Total number of models contributing to the aggregation |
| $\sum_{k=1}^{K}$ | Summation over all $K$ models |
| $\hat{f}_k$ | An estimated value at index $k$ |
| $\sum_{j=0}^{n}$ | Summation from $j = 0$ to $j = n$, i.e., over $n + 1$ terms |
| $C_j$ | The $j$th classifier |
| $\alpha_j$ | Weight assigned to the $j$th classifier |
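The two aggregation rules implied by the notation above — mean prediction $\hat{y}_i = \frac{1}{N}\sum_{n=1}^{N}\hat{y}_n(x_i)$ and majority voting $\hat{y}_i = \mathrm{mode}(\hat{y}_1(x_i), \ldots, \hat{y}_N(x_i))$ — can be sketched as follows. This is an illustrative sketch only; the function names are ours, not from the paper.

```python
import statistics

def average_predictions(predictions):
    """Mean aggregation: y_hat_i = (1/N) * sum over n of y_hat_n(x_i),
    where `predictions` holds the N per-model outputs for one sample."""
    N = len(predictions)
    return sum(predictions) / N

def majority_vote(labels):
    """Hard voting: y_hat_i = mode of the N predicted class labels."""
    return statistics.mode(labels)

# Example: N = 3 models or clients predicting for a single input sample x_i.
mean_pred = average_predictions([0.2, 0.4, 0.9])
voted_label = majority_vote([1, 0, 1])  # most frequent label wins
```

Mean aggregation suits continuous or probabilistic outputs, while the mode is the natural choice when each contributor emits a discrete class label.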