Table 1.
| Parameters | Reinforcement learning (1) | Bayesian ideal-observer (2) |
|---|---|---|
| Basic models (A) | Learning rate α | Update rate π |
| Asymmetric learning (B): Stronger weighting of win outcomes promotes risk seeking | Learning rates for win and no-win outcomes: α+ and α− | Update rates for win and no-win outcomes: π+ and π− |
| Nonlinear utility function (C): Overvaluation of higher outcomes promotes risk seeking | Utility parameter (> 1 and < 1 cause over- and undervaluation of higher outcomes, respectively) | Utility parameter |
| Uncertainty affects value (D): Uncertainty bonus promotes risk seeking | Not applicable | Uncertainty parameter (> 0 and < 0 cause uncertainty bonus and penalty, respectively) |
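The reinforcement-learning column of Table 1 can be sketched as a simple value update. This is a minimal illustration, not the paper's implementation: the function and parameter names (`alpha_pos`, `alpha_neg`, `rho`) and the power-law form of the utility function are assumptions made here for concreteness.

```python
def utility(outcome, rho):
    """Nonlinear utility (C): rho > 1 overvalues and rho < 1 undervalues
    higher outcomes; rho == 1 is linear (no distortion).
    Power-law form is an assumption for illustration."""
    return outcome ** rho

def update_value(value, outcome, alpha_pos, alpha_neg, rho=1.0):
    """One RL value update with asymmetric learning rates (B).

    alpha_pos is applied after win outcomes and alpha_neg after no-win
    outcomes; setting alpha_pos == alpha_neg recovers the basic model (A).
    """
    delta = utility(outcome, rho) - value            # prediction error
    alpha = alpha_pos if outcome > 0 else alpha_neg  # asymmetric weighting
    return value + alpha * delta

# With alpha_pos > alpha_neg, a win pulls the value up faster than an
# equally surprising no-win pulls it down, which promotes risk seeking.
v_after_win = update_value(0.5, 1.0, alpha_pos=0.4, alpha_neg=0.1)
v_after_loss = update_value(0.5, 0.0, alpha_pos=0.4, alpha_neg=0.1)
```

The Bayesian ideal-observer column replaces the learning rate α with an update rate π on the posterior, but the win/no-win asymmetry (π+ and π−) plays the same role.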