Nat Commun. 2025 Aug 22;16:7849. doi: 10.1038/s41467-025-63135-5

Table 1.

Separable effects of valence and reward PEs predict learning to punish

$$\text{Punish}_{i,t} \sim \beta_0 + \beta_1\,\text{RewardPE}_{i,t} + \beta_2\,\text{ValencePE}_{i,t} + \beta_3\,\text{ArousalPE}_{i,t} + \beta_4\,\text{Round}_{i,t} + \beta_5\,\text{RewardPE}_{i,t}\times\text{Round}_{i,t} + \beta_6\,\text{ValencePE}_{i,t}\times\text{Round}_{i,t} + \beta_7\,\text{ArousalPE}_{i,t}\times\text{Round}_{i,t} + \varepsilon$$
Variable Estimate (SE) z p 95% CI
Punish
 Intercept –1.61 (0.32) –5.03 <0.001 [–2.28, –0.98]
 Reward PE –0.50 (0.17) –3.02 0.003 [–0.84, –0.17]
 Valence PE –0.99 (0.15) –6.42 <0.001 [–1.30, –0.68]
 Arousal PE 0.05 (0.14) 0.34 0.74 [–0.23, 0.33]
 Round 0.04 (0.03) 1.66 0.10 [–0.01, 0.09]
 Reward PE×Round –0.06 (0.03) –1.87 0.06 [–0.13, 0.004]
 Valence PE×Round 0.08 (0.03) 2.30 0.02 [0.01, 0.14]
 Arousal PE×Round 0.03 (0.03) 0.98 0.33 [–0.03, 0.08]

Reward PEs are calculated as the difference between the experienced and predicted reward. Valence PEs and Arousal PEs are calculated as the difference between the experienced and predicted emotion on the relevant affect dimension. All variables were scaled but not mean-centered, because the zero point on each scale marks the meaningful case in which expectations matched experience. The model includes subject-specific random intercepts and random slopes for Reward PE, Valence PE, and Arousal PE. All statistical tests are two-sided, with no adjustment for multiple comparisons. The dataset comprises 7,376 observations from 41 participants.
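
For readers who want to run this kind of analysis on their own data, the sketch below walks through the full pipeline in Python: computing prediction errors as experienced minus predicted values, scaling without mean-centering, and fitting a mixed-effects regression with per-subject random intercepts and slopes. This is a minimal illustration, not the authors' code: the data are synthetic, every column name is hypothetical, a binomial (logistic) link is assumed from the reported z statistics, and statsmodels' variational-Bayes mixed GLM treats the random slopes as independent variance components rather than the fully correlated random-effects structure a package such as R's lme4 would fit.

```python
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(0)

# --- Synthetic stand-in data (all column names are hypothetical) ---
n_subj, n_rounds = 41, 30
n = n_subj * n_rounds
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_rounds),
    "Round": np.tile(np.arange(1, n_rounds + 1), n_subj).astype(float),
    "predicted_reward":    rng.normal(size=n),
    "experienced_reward":  rng.normal(size=n),
    "predicted_valence":   rng.normal(size=n),
    "experienced_valence": rng.normal(size=n),
    "predicted_arousal":   rng.normal(size=n),
    "experienced_arousal": rng.normal(size=n),
})

# --- Prediction errors: experienced minus predicted, as in the caption ---
for dim, col in [("reward", "RewardPE"),
                 ("valence", "ValencePE"),
                 ("arousal", "ArousalPE")]:
    df[col] = df[f"experienced_{dim}"] - df[f"predicted_{dim}"]
    # Scale by the SD without subtracting the mean, so 0 keeps its
    # meaning: expectation exactly matched experience.
    df[col] /= df[col].std()

# Fake binary punishment decision, driven by the valence PE purely
# so the example has an outcome to fit.
logit = -1.6 - 1.0 * df["ValencePE"]
df["Punish"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

# --- Mixed-effects logistic regression ---
# The `*` operator expands to main effects plus the interaction, giving
# the same fixed-effects design as the table: three PEs, Round, and the
# three PE-by-Round interactions. Random slopes enter as independent
# variance components (a simplification relative to lme4's glmer).
model = BinomialBayesMixedGLM.from_formula(
    "Punish ~ RewardPE * Round + ValencePE * Round + ArousalPE * Round",
    vc_formulas={
        "subj":         "0 + C(subject)",             # random intercepts
        "subj_reward":  "0 + C(subject):RewardPE",    # random slope: Reward PE
        "subj_valence": "0 + C(subject):ValencePE",   # random slope: Valence PE
        "subj_arousal": "0 + C(subject):ArousalPE",   # random slope: Arousal PE
    },
    data=df,
)
result = model.fit_vb()
print(result.summary())
```

The variational-Bayes fit trades exactness for speed and stability; posterior means for the fixed effects play the role of the estimates in the table, but credible intervals from `fit_vb()` will not match the frequentist confidence intervals reported above.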