Front Pharmacol. 2022 Oct 14;13:898548. doi: 10.3389/fphar.2022.898548

FIGURE 3.

Comparison of behavior in time- and performance-based PRL tasks following MK-801 administration. (A) BIC scores of four Rescorla–Wagner (RW) reinforcement learning models fit to the time-based PRL data; 1: standard RW, 2: RW + stickiness, 3: RW + side bias, and 4: win-stay/lose-shift model. (B) Model 2 coefficients (α, β, and φ) fit to the time-based PRL data. (C,D) Same as (A,B), but with models fit to the performance-based task data. (E) Number of trials in the time-based PRL. (F) Time-based PRL mean trial duration across all trials (log10 transformed). (G) Time-based PRL log latency distributions of trial initiation (G, left), lever presses (G, middle), and reward collection (G, right). Dashed lines indicate median values. (H) Number of trials in the performance-based PRL. (I) Performance-based PRL trial duration (log10 transformed). (J) Same as in (G) but for the performance-based PRL. (K) Time to magazine entry after unrewarded trials for the performance-based PRL. (L) Proportion of short-latency (<1 s; L, left) and long-latency (>1 s; L, right) magazine entries on non-rewarded trials in the performance-based PRL (mean ± SEM). (M,N) Same as in (K,L) but for the time-based PRL task. Solid and empty boxplots show time- and performance-based task data, respectively. *p < 0.05, **p < 0.001, ****p < 0.0001 (see Table 1 for statistical results).
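For readers unfamiliar with the model comparison summarized in panels (A–D), the sketch below illustrates in Python the general form of a Rescorla–Wagner learner with a softmax choice rule and a stickiness term (model 2), together with a BIC computation. Parameter names (α, β, φ) follow the caption; the function names, data layout, and optimization details are illustrative assumptions, not the authors' code.

```python
import numpy as np

def rw_stickiness_nll(params, choices, rewards):
    """Negative log-likelihood of a Rescorla-Wagner learner with softmax
    choice and a stickiness (choice-perseveration) term.
    choices: array of 0/1 lever choices; rewards: array of 0/1 outcomes.
    Data layout is an assumed, simplified two-lever format."""
    alpha, beta, phi = params          # learning rate, inverse temperature, stickiness
    q = np.zeros(2)                    # action values for the two levers
    prev_choice = np.zeros(2)          # one-hot encoding of the previous choice
    nll = 0.0
    for c, r in zip(choices, rewards):
        logits = beta * q + phi * prev_choice
        p = np.exp(logits - logits.max())
        p /= p.sum()                   # softmax choice probabilities
        nll -= np.log(p[c] + 1e-12)    # accumulate negative log-likelihood
        q[c] += alpha * (r - q[c])     # Rescorla-Wagner prediction-error update
        prev_choice = np.eye(2)[c]
    return nll

def bic(nll, n_params, n_trials):
    """Bayesian information criterion: lower values indicate a better
    trade-off between fit quality and model complexity."""
    return 2.0 * nll + n_params * np.log(n_trials)
```

In this scheme, the standard RW model (model 1) corresponds to fixing φ = 0, and candidate models are ranked by fitting their free parameters (e.g., by minimizing the negative log-likelihood) and comparing the resulting BIC values, as in panels (A) and (C).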