2022 Apr 20;12:6490. doi: 10.1038/s41598-022-10100-7

Table 1.

Overview of the learning models. The initial value of the risky option, Q1, was a free parameter in all models as well (not included in the table).

| Model variant | Reinforcement learning (1) | Bayesian ideal-observer (2) |
| --- | --- | --- |
| Basic models (A) | Learning rate α | Update rate π |
| Asymmetric learning (B): stronger weighting of win outcomes promotes risk seeking | Learning rates for win and no-win outcomes: α⁺ and α⁻ | Update rates for win and no-win outcomes: π⁺ and π⁻ |
| Nonlinear utility function (C): overvaluation of higher outcomes promotes risk seeking | Utility parameter κ (κ > 1 and κ < 1 cause over- and undervaluation of higher outcomes, respectively) | Utility parameter κ |
| Uncertainty affects value (D): uncertainty bonus promotes risk seeking | Not applicable | Uncertainty parameter φ (φ > 0 and φ < 0 cause an uncertainty bonus and penalty, respectively) |
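The reinforcement-learning variants in Table 1 can be illustrated with a minimal sketch of a delta-rule value update. This is an assumption-laden illustration, not the paper's implementation: the power-law utility form, the outcome coding (win = 1, no-win = 0), and all function and parameter names here are my own.

```python
def utility(outcome, kappa):
    # Nonlinear utility (model C): kappa > 1 overvalues and kappa < 1
    # undervalues higher outcomes (power-law form is an assumption).
    return outcome ** kappa

def update_q(q, outcome, alpha_plus, alpha_minus, kappa=1.0):
    # Delta-rule update with asymmetric learning rates (model B):
    # win outcomes (outcome > 0) use alpha_plus, no-win outcomes alpha_minus.
    alpha = alpha_plus if outcome > 0 else alpha_minus
    return q + alpha * (utility(outcome, kappa) - q)

# With alpha_plus > alpha_minus, wins move the value estimate more than
# no-wins, biasing it upward and thereby promoting risk seeking.
q = 0.5
q = update_q(q, outcome=1.0, alpha_plus=0.4, alpha_minus=0.1)  # 0.5 + 0.4*(1 - 0.5) = 0.7
```

The basic model (A) is the special case alpha_plus = alpha_minus with kappa = 1.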