Proc Natl Acad Sci U S A. 2008 Apr 21;105(18):6741–6746. doi: 10.1073/pnas.0711099105

Table 1.

Model update rules

Model          Update rule
RL             V^a_{t+1} = V^a_t + η(R_t − V^a_t)
Fictitious     p^*_{t+1} = p^*_t + η(P_t − p^*_t)
Influence      p^*_{t+1} = p^*_t + η(P_t − p^*_t) − κ(Q_t − q^{**}_t)

The RL model updates the value of the chosen action a with a simple Rescorla–Wagner (35) prediction error (R_t − V^a_t), the difference between received rewards and expected rewards, where η is the learning rate. The fictitious play model instead updates the state (strategy) of the opponent p^*_t with a prediction error (P_t − p^*_t) between the opponent's action and the expected strategy. The influence model extends this approach by also including the influence (Q_t − q^{**}_t) that a player's own action Q_t has on the opponent's strategy (see Methods).
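
For concreteness, the three update rules transcribe directly into one-step functions. The Python sketch below is not from the paper: the function names are invented, the opponent's action P_t and the player's action Q_t are assumed to be coded as 0/1, and any clipping of the updated probabilities to [0, 1] is omitted; how the second-order belief q^{**}_t and the weight κ are derived is described in Methods.

    def rl_update(V_a, R, eta):
        """Rescorla-Wagner update of the chosen action's value V^a_t (Table 1, RL row)."""
        return V_a + eta * (R - V_a)


    def fictitious_update(p_star, P, eta):
        """Fictitious-play update of the estimated opponent strategy p^*_t,
        moving it toward the opponent's observed action P (assumed coded 0/1)."""
        return p_star + eta * (P - p_star)


    def influence_update(p_star, P, Q, q_star2, eta, kappa):
        """Influence-model update: the fictitious-play term plus a correction for
        the influence of the player's own action Q (assumed coded 0/1) on the
        opponent's strategy, via the second-order belief q^{**}_t (Table 1)."""
        return p_star + eta * (P - p_star) - kappa * (Q - q_star2)


    # Illustrative one-trial update (values are arbitrary, not from the paper):
    p_next = influence_update(p_star=0.6, P=1, Q=0, q_star2=0.4, eta=0.3, kappa=0.1)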