Algorithm 1. Nash-MTL
Input: $\theta^{(0)}$ – initial parameter vector, $\{\ell_i\}_{i=1}^{K}$ – differentiable loss functions, $\mu$ – learning rate
Output: $\theta^{(T)}$
for $t = 1, \ldots, T$ do
 Compute task gradients $g_i^{(t)} = \nabla_{\theta^{(t-1)}} \ell_i$
 Set $G^{(t)}$ to be the matrix with columns $g_i^{(t)}$
 Solve for $\alpha$: $\big(G^{(t)}\big)^{\top} G^{(t)} \alpha = 1/\alpha$ (element-wise reciprocal) to obtain $\alpha^{(t)}$
 Update the parameters: $\theta^{(t)} = \theta^{(t-1)} - \mu\, G^{(t)} \alpha^{(t)}$
end for
return $\theta^{(T)}$
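The crux of Algorithm 1 is the nonlinear system $(G^{(t)})^{\top} G^{(t)} \alpha = 1/\alpha$, whose positive solution gives the per-task weights $\alpha^{(t)}$ that are then applied to the stacked gradients. The sketch below is a minimal illustration of one iteration, not the dedicated solver used in the original Nash-MTL work: it finds $\alpha$ with a generic SciPy root finder under a log-parameterization that keeps $\alpha > 0$. The helper name `nash_mtl_weights`, the matrix shapes, and the synthetic gradients are all illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def nash_mtl_weights(G):
    """Solve (G^T G) alpha = 1/alpha (element-wise) for alpha > 0.

    G: (d, K) array whose columns are the per-task gradients g_i.
    NOTE: generic root-finder sketch; Nash-MTL itself uses a
    dedicated solver for this system.
    """
    K = G.shape[1]
    GtG = G.T @ G

    # Residual of the optimality condition. Optimizing over
    # log(alpha) keeps alpha = exp(log_alpha) strictly positive.
    def residual(log_alpha):
        alpha = np.exp(log_alpha)
        return GtG @ alpha - 1.0 / alpha

    sol = least_squares(residual, x0=np.zeros(K))  # alpha initialized at 1
    return np.exp(sol.x)

# One step of Algorithm 1 on synthetic data (shapes are assumptions).
rng = np.random.default_rng(0)
d, K, mu = 10, 3, 1e-2
theta = rng.normal(size=d)
G = rng.normal(size=(d, K))       # columns g_i^{(t)}: per-task gradients
alpha = nash_mtl_weights(G)       # bargaining weights alpha^{(t)}
theta = theta - mu * (G @ alpha)  # theta^{(t)} = theta^{(t-1)} - mu G^{(t)} alpha^{(t)}
```

Because the weights satisfy $\alpha_i = 1 / (g_i^{\top} G \alpha)$, each task's contribution is rescaled inversely to its alignment with the combined update, which is what prevents any single task's gradient from dominating the step.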