| Symbol | Meaning |
| --- | --- |
| 1/λ | mean of the exponential effective prior probability density for leisure time |
| α ∈ [0, 1] | weight on the linear component of the microscopic benefit-of-leisure |
| β ∈ [0, ∞) | inverse temperature, or degree of stochasticity–determinism, parameter |
| CHT | cumulative handling time |
| C_L(·) | microscopic benefit-of-leisure |
|  | maximum of the sigmoidal microscopic benefit-of-leisure |
|  | shift of the sigmoidal microscopic benefit-of-leisure |
| δ(·) | delta/indicator function |
| E_π | expected value with respect to policy π |
| K_L | slope of the linear microscopic benefit-of-leisure |
| L | leisure |
| μ_a(τ_a) | effective prior probability density of choosing duration τ_a |
| P | price |
| π | policy or choice rule: probability of choosing action a, for duration τ_a, from state s |
| post | post-reward |
| pre | pre-reward |
| Q | expected return, or (differential) Q-value, of taking action a, for duration τ_a, from state s |
| ρ | reward rate |
| ρτ_a | opportunity cost of time for taking action a for duration τ_a |
| RI | (subjective) reward intensity |
|  | pay-off |
| s | state |
| TA | time allocation |
| τ_L | duration of instrumental leisure |
| τ_Pav | Pavlovian component of post-reward leisure |
| τ_W | duration of work |
| W | work |
| w ∈ [0, P) | amount of work time executed so far toward the price |
| V(s) | expected return, or value, of state s |
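To give the tabulated quantities some operational shape, here is a minimal sketch of how they typically combine in such models: an exponential duration prior with mean 1/λ, a benefit-of-leisure mixing a linear term (slope K_L, weight α) with a sigmoid (whose maximum and shift are the two unnamed parameters above), an opportunity cost ρτ_a, and a prior-weighted softmax policy with inverse temperature β. All functional forms, symbol spellings (`C_max`, `shift`), and numerical values here are assumptions for illustration, not the paper's exact equations.

```python
import math

# Hypothetical illustration only; functional forms and numbers are assumed.
lam = 0.5      # 1/lam = mean of the exponential prior over leisure durations
alpha = 0.3    # weight on the linear component of benefit-of-leisure
beta = 2.0     # inverse temperature (stochasticity vs. determinism)
K_L = 1.0      # slope of the linear benefit-of-leisure
C_max = 5.0    # assumed name for the sigmoid's maximum
shift = 4.0    # assumed name for the sigmoid's shift
rho = 0.4      # reward rate, giving opportunity cost rho * tau

def mu(tau):
    """Effective prior density over durations: exponential with mean 1/lam."""
    return lam * math.exp(-lam * tau)

def C_L(tau):
    """Microscopic benefit-of-leisure: alpha-weighted linear term plus a
    (1 - alpha)-weighted sigmoid with maximum C_max and shift."""
    return alpha * K_L * tau + (1 - alpha) * C_max / (1 + math.exp(-(tau - shift)))

# Discretised candidate leisure durations tau_L.
taus = [0.1 * k for k in range(1, 201)]

# Q-value of taking leisure for duration tau: benefit minus opportunity cost.
Q = [C_L(t) - rho * t for t in taus]

# Prior-weighted softmax policy: pi(tau) proportional to mu(tau) * exp(beta * Q(tau)).
unnorm = [mu(t) * math.exp(beta * q) for t, q in zip(taus, Q)]
Z = sum(unnorm)
pi = [u / Z for u in unnorm]

best = taus[max(range(len(pi)), key=pi.__getitem__)]
print(f"most probable leisure duration: {best:.2f}")
```

As β → ∞ the policy concentrates on the duration maximising the prior-weighted exponentiated Q-value; as β → 0 it relaxes toward the effective prior μ alone.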