Patterns. 2022 Jun 2;3(6):100521. doi: 10.1016/j.patter.2022.100521

Table 2.

Performance for federated molecular regression

MolNet^a and FedChem^a (ours) report centralized training; the remaining columns report federated learning.

| Dataset       | α   | MolNet^a | FedChem^a (ours) | FedAvg   | FedProx | MOON     | FedFocal (ours) | FedVAT (ours) | FLIT (ours) | FLIT+ (ours) |
|---------------|-----|----------|------------------|----------|---------|----------|-----------------|---------------|-------------|--------------|
| FreeSolv      | 0.1 | 1.40     | 1.430            | 1.771    | 1.693   | 1.376    | 1.686           | 1.371         | 1.634       | 1.228^d      |
|               | 0.5 |          |                  | 1.445    | 1.376   | 1.423    | 1.322           | 1.299         | 1.366       | 1.127^d      |
|               | 1   |          |                  | 1.223    | 1.216   | 1.469    | 1.294           | 1.150         | 1.277       | 1.061^d      |
| Lipophilicity | 0.1 | 0.655    | 0.6290           | 0.6361^d | 0.6403  | 0.6426   | 0.6403          | 0.6556        | 0.6563      | 0.6392       |
|               | 0.5 |          |                  | 0.6306   | 0.6365  | 0.6339   | 0.6351          | 0.6333        | 0.6368      | 0.6270^d     |
|               | 1   |          |                  | 0.6505   | 0.6474  | 0.6442   | 0.6461          | 0.6488        | 0.6443      | 0.6403^d     |
| ESOL          | 0.1 | 0.97     | 0.6570           | 0.8016   | 0.7702  | 0.7537^d | 0.8022          | 0.7776        | 0.7788      | 0.7642       |
|               | 0.5 |          |                  | 0.7524   | 0.7382  | 0.7258   | 0.7708          | 0.7243        | 0.7426      | 0.7119^d     |
|               | 1   |          |                  | 0.7056   | 0.6828  | 0.6751   | 0.6822          | 0.7253        | 0.6705^d    | 0.6998       |
| QM9           | 0.1 | 0.0479^b | 0.0890^c         | 0.5889   | 0.6036  | 0.5817   | 0.6164          | 0.5606        | 0.5713      | 0.5356^d     |
|               | 0.5 |          |                  | 0.5906   | 0.5751  | 0.5707   | 0.6059          | 0.5656        | 0.5658      | 0.5222^d     |
|               | 1   |          |                  | 0.5786   | 0.5691  | 0.5808   | 0.5822          | 0.5602        | 0.5621      | 0.5282^d     |

↓, ↑ indicate whether lower or higher numbers are better.

^a Results were obtained with centralized training.

^b Results were retrieved from Klicpera et al.^2 with a separate SchNet for each task.

^c Results were obtained with a single multitask network. A smaller LDA α generates a more extreme heterogeneous scenario (a small illustrative sketch follows these notes). FedFocal and FedVAT are proposed in this paper as variants of FLIT(+).

^d Best federated-learning results.
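To make the role of α concrete, here is a minimal, self-contained sketch of a Dirichlet-based partition of the kind commonly used to simulate heterogeneous federated clients; it is not the paper's FedChem/LDA implementation, and the helper `dirichlet_partition`, the group labels standing in for scaffold groups, and all parameter values are illustrative assumptions.

```python
# Illustrative sketch (not the paper's code): how a Dirichlet concentration
# parameter alpha controls cross-client heterogeneity when splitting a dataset.
import numpy as np

def dirichlet_partition(group_ids, num_clients, alpha, seed=0):
    """Assign sample indices to clients.

    Each group's samples are divided among clients according to a
    Dirichlet(alpha) draw; smaller alpha concentrates a group's samples
    on a few clients, i.e., a more heterogeneous split.
    """
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for g in np.unique(group_ids):
        idx = np.where(group_ids == g)[0]
        rng.shuffle(idx)
        # Proportions of this group's samples given to each client.
        props = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return client_indices

# Example: 1,000 samples in 10 hypothetical scaffold groups, 4 clients.
groups = np.random.default_rng(1).integers(0, 10, size=1000)
for a in (0.1, 0.5, 1.0):
    sizes = sorted(len(c) for c in dirichlet_partition(groups, 4, a))
    print(f"alpha={a}: client sizes {sizes}")
```

Running the example shows increasingly balanced client sizes (and group compositions) as α grows from 0.1 to 1, matching the table's use of smaller α for the harder, more heterogeneous settings.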