Abstract
Human development is often described as a ‘cooling off’ process, analogous to stochastic optimization algorithms that implement a gradual reduction in randomness over time. Yet there is ambiguity in how to interpret this analogy, due to a lack of concrete empirical comparisons. Using data from n = 281 participants ages 5 to 55, we show that cooling off does not only apply to the single dimension of randomness. Rather, human development resembles an optimization process of multiple learning parameters, for example, reward generalization, uncertainty-directed exploration and random decision temperature. Rapid changes in parameters occur during childhood, but these changes plateau and converge to efficient values in adulthood. We show that while the developmental trajectory of human parameters is strikingly similar to several stochastic optimization algorithms, there are important differences in convergence. None of the optimization algorithms tested were able to discover reliably better regions of the strategy space than adult participants on this task.
Subject terms: Human behaviour, Development studies, Computational science
Giron et al. provide empirical evidence that human development has much in common with the algorithm of ‘stochastic optimization’ widely used in machine learning, resolving ambiguities around commonly used analogies in developmental psychology.
Main
Human development has fascinated researchers of both biological and artificial intelligence alike. As the only known process that reliably produces human-level intelligence1, there is broad interest in characterizing the developmental trajectory of human learning2–4 and understanding why we observe specific patterns of change5.
One influential hypothesis describes human development as a ‘cooling off’ process4,6,7, comparable to simulated annealing (SA)8,9. SA is a stochastic optimization algorithm named in analogy to a piece of metal that becomes harder to manipulate as it cools off. Initialized with a high temperature, SA starts off highly flexible and likely to consider worse solutions as it explores the optimization landscape. But as the temperature cools down, the algorithm becomes increasingly greedy and more narrowly favours only local improvements, eventually converging on an (approximately) optimal solution. Algorithms with similar cooling mechanisms, such as stochastic gradient descent and its discrete counterpart stochastic hill climbing (SHC), are abundant in machine learning and have played a pivotal role in the rise of deep learning10–13.
This analogy of stochastic optimization applied to human development is quite alluring: young children start off highly stochastic and flexible in generating hypotheses4,14–16 and selecting actions17, which gradually tapers off over the lifespan. This allows children to catch information that adults overlook18, and learn unusual causal relationships adults might never consider4,15. Yet this high variability also results in large deviations from reward-maximizing behaviour3,19–22, with gradual improvements during development. Adults, in turn, are well calibrated to their environment and quickly solve familiar problems, but at the cost of flexibility, since they experience difficulty adapting to novel circumstances23–26.
While intuitively appealing, the implications and possible boundaries of the optimization analogy remain ambiguous without a clear definition of the process and a specification of what is being optimized. As a consequence, there is a need for a direct empirical test of the similarities and differences between human development and algorithmic optimization.
Perhaps the most direct interpretation is to apply cooling off to the single dimension of random decision temperature, controlling the amount of noise when selecting actions or sampling hypotheses6,16,27, although alternative implementations are also possible28,29. Evidence from experimental studies suggests that young children are harder to predict than adults17,30, implying greater stochasticity, which is amplified in neurodevelopmental disorders such as attention deficit hyperactivity disorder28 and in relation to impulsivity31. However, this interpretation is only part of the story, since developmental differences in choice variability can be traced to changes affecting multiple aspects of learning and choice behaviour. Aside from a decrease in randomness, development is also related to changes in more systematic, uncertainty-directed exploration19,27,32, which is also reduced over the lifespan. Additionally, changes in how people generalize rewards to novel choices27 and integrate new experiences33,34 affect how beliefs are formed and how different actions are valued, also influencing choice variability. In sum, while decision noise certainly diminishes over the lifespan, it is only a single aspect of human development.
Alternatively, one could apply the cooling off metaphor to an optimization process in the space of learning strategies, which can be characterized across multiple dimensions of learning. Development might thus be framed as parameter optimization, which tunes the parameters of an individual’s learning strategy, starting with large tweaks in childhood, followed by gradually smaller and more refined adjustments over the lifespan. In the stochastic optimization metaphor, training iterations of the algorithm become a proxy for age.
This interpretation connects the metaphor of stochastic optimization with Bayesian models of cognitive development, which share a common notion of gradual convergence35,36. In Bayesian models of development, individuals in early developmental stages possess broad priors and vague theories about the world, which become refined with experience35. Bayesian principles dictate that, over time, novel experiences will have a lesser impact on future beliefs or behaviour as one’s priors become more narrow36,37. Observed over the lifespan, this process will result in large changes to beliefs and behaviour early in childhood and smaller changes in later stages, implying a similar developmental pattern as the stochastic optimization metaphor. In sum, not only might the outcomes of behaviour be more stochastic during childhood, but the changes to the parameters governing behaviour might also be more stochastic in earlier developmental stages.
Goals and scope
In this work, we aim to resolve ambiguities around commonly used analogies to stochastic optimization in developmental psychology. While past work has compared differences in parameters between discrete age groups in both structured27,30 and unstructured reward domains19,32,33, here we characterize the shape of developmental change across the lifespan, from ages 5 to 55. Instead of relying on verbal descriptions, we use formal computational models to clarify which cognitive processes are being tweaked during development through explicit commitments to free parameters. We then directly compare the trajectory of various optimization algorithms to age differences in those parameters, allowing us to finally put the metaphor to a direct empirical test.
Behavioural analyses show that rather than a uni-dimensional transition from exploration to exploitation, human development produces improvements in both faculties. Through computational models, we find simultaneous changes across multiple dimensions of learning, starting with large tweaks during childhood and plateauing in adulthood. We then provide direct empirical comparisons to multiple optimization algorithms as a meta-level analysis to describe changes in model parameters over the lifespan, where the best-performing algorithm is the most similar to human development. However, we also find notable differences in convergence between human development and algorithmic optimization. Yet, this disparity fails to translate into reliable differences in performance, suggesting a remarkable efficiency of human development.
Results
We analyse experimental data from n = 281 participants between the ages of 5 and 55, performing a spatially correlated multi-armed bandit task that is both intuitive and richly complex (Fig. 1a)38. Participants were given a limited search horizon (25 choices) to maximize rewards by either selecting an unobserved or previously revealed tile on an 8 × 8 grid. Each choice yielded normally distributed rewards, with reward expectations correlated based on spatial proximity (Fig. 1b), such that tiles close to one another tended to have similar rewards. Since the search horizon was substantially smaller than the number of unique options, generalization and efficient exploration were required to obtain high rewards.
Our dataset (Fig. 1c) combines openly available data from two previously published experiments27,30 targeting children and adult participants (n = 52 and n = 79, respectively; after filtering), along with new unpublished data (n = 150) targeting the missing adolescent age range. Although experimental designs differed in minor details (for example, tablet versus computer; Methods), the majority of differences were removed by filtering out participants (for example, those assigned to a different class of reward environments). Reliability tests revealed no differences in performance, model accuracy or parameter estimates for overlapping age groups across experiments (all P > 0.128 and Bayes Factor (BF) < 0.73; Supplementary Fig. 1 and Supplementary Table 1). Additionally, robust model and parameter recovery (Supplementary Figs. 4–6) provides high confidence in our ability to capture the key components of learning across the lifespan.
Behavioural analyses
We first analysed participant performance and behavioural patterns of choices (Fig. 2). We treat age as a continuous variable when possible, but also discretize participants into seven similarly sized age groups (n ∈ [30, 50], see Supplementary Table 4 for exact sample sizes). These behavioural results reveal clear age-related trends in learning and exploration captured by our task.
Performance
Participants monotonically achieved higher rewards as a function of age (Pearson correlation: r = 0.51, P < 0.001, BF > 100), with even the youngest age group (five-to-six year olds) strongly outperforming chance (one-sample t test: t(29) = 5.1, P < 0.001, Cohen’s d = 0.9, BF > 100; Fig. 2a). The learning curves in Fig. 2b show average reward as a function of trial, revealing a similar trend, with older participants displaying steeper increases in average reward. Notably, the two youngest age groups (five-to-six and seven-to-eight year olds) displayed decaying learning curves with decreasing average reward on later trials, suggesting a tendency to overexplore (supported by subsequent analyses below). We did not find any reliable effect of learning over rounds (Supplementary Fig. 2).
We also analysed maximum reward (up until a given trial) as a measure of exploration efficacy, where older participants reliably discovered greater maximum rewards (Kendall rank correlation: rτ = 0.23, P < 0.001, BF > 100) and showed steeper increases on a trial-by-trial basis (Fig. 2c). Thus, the reduction in the average reward acquired by the youngest age groups did not convert into improved exploration outcomes, measured in terms of maximum reward.
Behavioural patterns
Next, we looked at search patterns to better understand the behavioural signatures of age-related changes in exploration. The youngest participants (five-to-six year olds) sampled more unique options than expected by chance (Wilcoxon signed-rank test: Z = −4.7, P < 0.001, r = −0.85, BF > 100), but also fewer than the upper bound on exploration (that is, a unique option on all 25 trials: Z = −4.1, P < 0.001, r = −0.76, BF > 100). The number of unique options decreased strongly as a function of age (rτ = −0.33, P < 0.001, BF > 100; Fig. 2d), consistent with the overall pattern of reduced exploration over the lifespan. Note that all participants were both informed that they could repeat choices and tested on this knowledge.
We then classified choices into repeat, near (distance = 1) and far (distance > 1), and compared this pattern of choices to a random baseline (red dashed line; Fig. 2e). Five-to-six year olds started off with very few repeat choices (comparable to chance: Z = 0.9, P = 0.820, r = 0.17, BF = 0.27) and a strong preference for near choices (more than chance: Z = −4.6, P < 0.001, r = −0.84, BF > 100). Over the lifespan, the rate of repeat choices increased, while near decisions decreased, gradually reaching parity for 14–17 year olds (comparing repeat versus near: Z = −1.0, P = 0.167, r = −0.15, BF = 0.46) and remaining equivalent for all older age groups (all P > 0.484, BF < 0.23). In contrast, the proportion of far choices remained unchanged over the lifespan (rτ = −0.08, P = 0.062, BF = 0.48). These choice patterns indicate that even young children do not simply behave randomly, with the amount of randomness decreasing over time. Rather, younger participants exploit past options less than older participants (repeat choices), preferring instead to explore unknown tiles within a local radius (near choices). While the tendency to prefer exploring near rather than far options gradually diminished over the lifespan, this preference for local search distinguished participants of all age groups from the random model.
Lastly, we analysed how reward outcomes influenced search distance using a Bayesian hierarchical regression (Fig. 2f and Supplementary Table 3). This model predicted search distance as a function of the previous reward value and age group (including their interaction), with participants treated as random effects. This can be interpreted as a continuous analogue of past work using a win–stay lose–shift strategy17,38, and provides initial behavioural evidence for reward generalization. We found a negative linear relationship in all age groups, with participants searching locally following high rewards and searching further away after low rewards. This trend became stronger with age, with monotonically more negative slopes across age groups (Supplementary Table 3). While all age groups adapted their search patterns in response to reward, the degree of adaptation increased over the lifespan.
Behavioural summary
To summarize, younger children tended to explore unobserved tiles instead of exploiting options known to have good outcomes. This can be characterized as overexploration since increased exploration did not translate into higher maximum rewards (Fig. 2c). Older participants explored less but more effectively, and were more responsive in adapting their search patterns to reward observations (Fig. 2f). We now turn to model-based analyses to complement these results with a more precise characterization of how the different mechanisms of learning and exploration change over the lifespan.
Model-based analyses
We conducted a series of reinforcement learning39 analyses to characterize changes in learning over the lifespan. We first compared models in their ability to predict out-of-sample choices (Fig. 3b,c) and simulate human-like learning curves across age groups (Fig. 3d). We then analysed the parameters of the winning model (Fig. 3e,f), which combined Gaussian process (GP) regression with upper confidence bound (UCB) sampling (described below). These parameters allow us to describe how three different dimensions of learning change with age: generalization (λ; equation (2)), uncertainty-directed exploration (β; equation (3)) and decision temperature (τ; equation (4)). We then compare the developmental trajectory of these parameters to different stochastic optimization algorithms (Fig. 4).
Modelling learning and exploration
We first describe the GP-UCB model (Fig. 3a) combining all three components of generalization, exploration and decision temperature. We then lesion away each component to demonstrate all are necessary for describing behaviour (Fig. 3b,c). We describe the key concepts below, while Fig. 3a provides a visual illustration of the model (see Methods for details).
GP regression40 provides a reinforcement learning model of value generalization38, where past reward observations can be generalized to novel choices. Here, we describe generalization as a function of spatial location, where closer observations exhibit a larger influence. However, the same model can also be used to generalize based on the similarity of arbitrary features41 or based on graph-structured relationships42.
Given previously observed data of choices Xt = [x1, …, xt] and rewards yt = [y1, …, yt] at time t, the GP uses Bayesian principles to compute posterior predictions about the expected rewards rt for any option x:
p(r_t | x, X_t, y_t) = N(m_t(x), v_t(x))    (1)
The posterior in equation (1) takes the form of a Gaussian distribution, allowing it to be fully characterized by posterior mean mt(x) and uncertainty vt(x) (that is, variance; see equations (6) and (7) for details and Fig. 3a for an illustration).
The posterior mean and uncertainty predictions critically depend on the choice of kernel function k(x, x′), for which we use a radial basis function describing how observations from one option x generalize to another option x′ as a function of their distance:
k(x, x′) = exp(−‖x − x′‖² / λ)    (2)
The model thus assumes that nearby options generate similar rewards, with the level of similarity decaying exponentially over increased distances. The generalization parameter λ describes the rate at which generalization decays, with larger estimates corresponding to stronger generalization over greater distances.
We then use UCB sampling to describe the value of each option q(x) as a weighted sum of expected reward m(x) and uncertainty v(x):
q(x) = m(x) + β · v(x)    (3)
β captures uncertainty-directed exploration, determining the extent to which uncertainty is valued positively, relative to exploiting options with high expectations of reward.
Lastly, we use a softmax policy to translate value q(x) into choice probabilities:
P(x) = exp(q(x)/τ) / Σ_x′ exp(q(x′)/τ)    (4)
The decision temperature τ controls the amount of random exploration. Larger values for τ introduce more choice stochasticity, where τ → ∞ converges on a random policy.
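To make the choice rule concrete, the following minimal Python/numpy sketch implements the UCB valuation (equation (3)) and the softmax policy (equation (4)); the function name and toy values are illustrative assumptions rather than part of the original analysis code.

```python
import numpy as np

def ucb_softmax_policy(m, v, beta, tau):
    """Value each option by UCB (equation (3)) and map values to choice
    probabilities with a softmax (equation (4))."""
    q = m + beta * v              # expected reward plus uncertainty bonus
    q = q - q.max()               # subtract the maximum for numerical stability
    p = np.exp(q / tau)
    return p / p.sum()            # choice probabilities over all options

# Toy example: three options with equal means but increasing uncertainty.
m = np.array([0.5, 0.5, 0.5])
v = np.array([0.1, 0.5, 1.0])
print(ucb_softmax_policy(m, v, beta=0.5, tau=0.1))  # favours the most uncertain option
```

Larger β shifts probability mass towards uncertain options, whereas larger τ flattens the distribution towards a random policy.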
Lesioned models
To ensure all components of the GP-UCB model play a necessary role in capturing behaviour, we created model variants lesioning away each component. The λ lesion model removed the capacity for generalization, by replacing the GP component with a Bayesian reinforcement learning model that assumes independent reward distributions for each option (equations (8) to (11)). The β lesion model removed the capacity for uncertainty-directed exploration by fixing β = 0, thus valuing options solely based on expectations of reward m(x). Lastly, the τ lesion model swapped the softmax policy for an epsilon-greedy policy, as an alternative form of choice stochasticity: with probability ϵ a random option is sampled, and with probability 1 − ϵ the option with the highest UCB value is sampled (equation (12)).
Model comparison
We fitted all models using leave-one-round-out cross-validation (Methods). We then conducted hierarchical Bayesian model selection43 to compute the protected exceedance probability (pxp), describing which model is most likely in the population. We found that GP-UCB was the best model for each individual age group and also aggregated across all data (Fig. 3b). There is still some ambiguity between models in the five-to-six year old group, but this quickly disappears in all subsequent age groups (pxpGP-UCB > 0.99). Figure 3c describes the out-of-sample predictive accuracy of each model as a continuous function of age, where a pseudo-R2 provides a comparison to random chance (equation (13)): R2 = 0 indicates chance-level predictions and R2 = 1 indicates theoretically perfect predictions. While there is again some ambiguity among five-to-six year olds, GP-UCB quickly dominates and remains the best model across all later ages.
Aside from only predicting choices, we also simulated learning curves for each model (using median participant parameter estimates) and compared them against human performance in each age group (Fig. 3d). The full GP-UCB model provides the best description across all age groups, although the β lesion model also produces similar patterns. However, only GP-UCB, by virtue of the exploration bonus β, is able to recreate the decaying learning curves of five-to-six and seven-to-eight year olds. Altogether, these results reveal that all three components of generalization (λ), uncertainty-directed exploration (β) and decision temperature (τ) play a vital role in describing behaviour. Next, we analyse how each of these parameters changes over the lifespan.
Parameters
We use both regression and similarity analyses to interpret age-related changes in GP-UCB parameters. The regression (Fig. 3e) modelled age-related changes in the log-transformed parameters using a multivariate Bayesian changepoint regression44 (Methods). This approach models the relationship between age and each parameter as separate linear functions separated by an estimated changepoint ω, at which point the regression slope changes from b1 to b2 (equation (15)). Using leave-one-out cross-validation, we established that this simple changepoint model predicted all GP-UCB parameters better than linear or complex regression models up to fourth-degree polynomials (both with and without log-transformed variables and compared against lesioned intercept-only variants; Supplementary Tables 5 and 6).
The regression analysis (Fig. 3e) revealed how all parameters changed rapidly during childhood (all b1 confidence intervals (CIs) different from 0), but then plateaued such that there were no credible changes in parameters after the estimated changepoint (all b2 CIs overlapped with 0; Supplementary Table 7). More specifically, generalization increased (b1(λ) = 0.08 [0.02,0.26]) until around 13 years of age (ω(λ) = 12.7 [7.70,19.80]), whereas there were sharp decreases in directed exploration (b1(β) = −0.39 [−0.79,−0.13]) and decision temperature (b1(τ) = −0.59 [−1.05,−0.25]), until around nine (ω(β) = 9.10 [7.44,11.40]) and eight (ω(τ) = 7.74 [6.88,8.77]) years of age, respectively. In a multivariate similarity analysis, we computed the pair-wise similarity of all parameter estimates between participants (Kendall’s τ), which we then averaged over age groups (Fig. 3f). This shows that older participants were more similar to each other than were younger participants (Fig. 3f inset), suggesting development produces a convergence towards a more similar set of parameters. Whereas older participants achieved high rewards using similar learning strategies, younger participants tended to overexplore and acquired lower rewards, but each in their own fashion, with more-diverse strategies.
These results highlight how development produces changes to all parameters governing learning, not only a uni-dimensional reduction in random sampling. An initially steep but plateauing rate of change across model parameters is broadly consistent with the metaphor of stochastic optimization in the space of learning strategies (Fig. 3e). The increasing similarity in participants’ parameters again speaks for a developmental process that gradually converges on a configuration of learning parameters (Fig. 3f), which can also be used to generate better performance (Fig. 3d).
Comparison of human and algorithmic trajectories
Beyond qualitative analogies, we now present a direct empirical comparison between human development and stochastic optimization. We first computed a fitness landscape (Supplementary Fig. 7) across one million combinations of plausible parameter values of the GP-UCB model (Methods), with each parameter combination yielding a mean reward based on 100 simulated rounds. We then simulated different optimization algorithms on the fitness landscape, using each of the cross-validated parameter estimates of the five-to-six year old age group as initialization points. Specifically, we tested SA and SHC in combination with three common cooling schedules (fast cooling, exponential cooling and linear cooling; Methods). Intuitively, SA uses rejection sampling to preferentially select better solutions in the fitness landscape (equation (16)), with higher optimization temperatures relaxing this preference. SHC is a discrete analogue of stochastic gradient descent, selecting new solutions proportional to their relative fitness (equation (17)), with higher optimization temperatures making it more likely to select lower fitness solutions.
While we do not attempt to curve-fit the exact cooling schedule that best describes human development, we observe that fast and exponential cooling generally performed better than linear cooling (Fig. 4a). The metaphor therefore holds: we do not observe linear changes in development, but rather rapid initial changes during childhood, followed by a gradual plateau and convergence. Yet remarkably, neither SA nor SHC converged on reliably better solutions than adult participants aged 25–55 (SA fast cooling t(149) = −0.4, P = 0.675, d = 0.08, BF = 0.23; SHC fast cooling t(149) = 0.8, P = 0.447, d = 0.2, BF = 0.27).
Figure 4b compares the developmental trajectory of human participants (labelled dots) to the trajectory of the best-performing SHC (fast cooling) algorithm (blue line; see Supplementary Fig. 8 for all algorithms and all parameter comparisons and Supplementary Fig. 9 for variability of trajectories). We focus on changes in generalization and exploration parameters since rewards decrease monotonically with increased decision temperature. Particularly for younger age groups, age-related changes in parameters follow a similar trajectory to the optimization algorithms. However, a notable divergence emerges around adolescence (ages 14–17).
To concretely relate human and algorithmic trajectories, we estimated the same changepoint regression model (Fig. 3e) on the sequence of algorithmic parameters from the SHC (fast cooling) algorithm (see Supplementary Fig. 10 for details). This revealed a similar pattern of rapid change before the changepoint with all b1 slopes having the same direction as humans, followed by a plateau of b2 slopes around 0 (Supplementary Fig. 10a and Supplementary Table 8).
This analysis also allows us to quantify convergence differences between human development and stochastic optimization. For a given dataset (human versus algorithm) and a given parameter (λ, β, τ), we use the respective upper 95% CI of ω estimates as a threshold for convergence (Supplementary Fig. 10b). Comparing a matched sample of parameter estimates after the convergence threshold, we find that human generalization λ and directed-exploration β parameters converged at lower values than the algorithm (λ: U = 2740, P < 0.001, rτ = −0.38, BF > 100; β: U = 10,114, P < 0.001, rτ = −0.19, BF > 100), while decision temperature τ was not credibly higher or lower (U = 22,377, P = 0.188, rτ = 0.05, BF = 0.12; Supplementary Fig. 10c).
Since these deviations nevertheless fail to translate into reliable differences in performance, this may point towards resource-rational constraints on human development45,46: rather than optimizing solely for the best performance, the cognitive costs of different strategies may also be taken into account (Discussion).
Discussion
From a rich dataset of n = 281 participants ages 5 to 55, our results reveal human development cools off not only in the search for rewards or hypotheses, but also in the search for the best learning strategy. Thus, the stochastic optimization metaphor best applies to a multivariate optimization of all parameters of a learning model, rather than only a decrease in randomness when selecting actions or hypotheses. What begins as large tweaks to the cognitive mechanisms of learning and exploration during childhood gradually plateaus and converges in adulthood (Fig. 3e,f). This process is remarkably effective, resembling the trajectory of the best-performing stochastic optimization algorithm (SHC fast cooling; Fig. 4) as it optimizes the psychologically interpretable parameters of a Bayesian reinforcement learning model (GP-UCB). While there are notable differences in the solutions that human development and stochastic optimization converged upon, none of the algorithms achieved reliably better performance than adult human participants (25–55 year olds; Fig. 4a). This work provides important insights into the nature of developmental changes in learning and offers normative explanations for why we observe these specific developmental patterns.
Rather than a uni-dimensional transition from exploration to exploitation over the lifespan25, we observe refinements in both the ability to explore (Fig. 2c) and exploit (Fig. 2f), with monotonic improvements in both measures as a function of age. While even five-to-six year olds perform better than chance (Fig. 2a), exploration becomes more effective over the lifespan, as larger reward values are discovered despite sampling fewer unique options (Fig. 2c,d). Meanwhile, exploitation becomes more responsive, with older participants adapting their search distance more strongly based on reward outcomes (Fig. 2e). This resembles a developmental refinement of a continuous win–stay lose–shift heuristic17,38, and is consistent with the hypothesis that people use heuristics more efficiently as they age47. These results also reaffirm past work showing a reduction of stochasticity over the lifespan4,14,16, but expand the scope of developmental changes across multiple dimensions of learning.
With a reinforcement learning model (GP-UCB), we characterize age-related changes in learning through the dimensions of generalization (λ), uncertainty-directed exploration (β) and decision temperature (τ). All three dimensions play an essential role in predicting choices (Fig. 3b,c) and simulating realistic learning curves across all ages (Fig. 3d), with recoverable models and parameter estimates (Supplementary Figs. 4–6).
Changes in all parameters occur rapidly during childhood (increase in generalization and decrease in both exploration and decision temperature), but then plateau around adolescence (Fig. 3e). Younger participants tend to be more diverse, whereas adults are more similar to one another, with continued convergence of parameter estimates until adulthood (25–55 year olds; Fig. 3f). Both the reduction of age-related differences and the increasing similarity of parameters support the analogy of development as a stochastic optimization process, which gradually converges upon an (approximately optimal) configuration of learning parameters.
Our direct comparison between the developmental trajectory of human parameters and various stochastic optimization algorithms (Fig. 4) revealed both striking similarities and intriguing differences. The best-performing algorithm (SHC with fast cooling) also most resembles the parameter trajectory of human development (Supplementary Fig. 8), suggesting that optimization provides a useful characterization of developmental changes in learning strategy.
However, there are also limitations to the metaphor, and it should be noted that different parameters are optimal in different contexts48,49. Other developmental studies using reinforcement learning models suggest older participants may display more-optimal parameters in general3, by being better able to adapt their strategies to task demands. This raises the question of whether children indeed have less optimal parameters per se or are simply slower when adapting to the task. We partially address this possibility by analysing performance over rounds, where we found no reliable age-related differences in learning over rounds (Supplementary Fig. 2). Thus, it seems unlikely that given more time to adapt to the task (for example, adding more rounds), children would register more optimal parameters.
Nevertheless, suboptimality of task-specific parameters does not imply that younger individuals are maladapted from a developmental perspective. Rather, development prepares children to learn about the world more generally, beyond the scope of any specific experimental paradigm or computational model. In line with the stochastic optimization analogy, younger children could try diverse strategies that our model does not account for, which then register as suboptimal parameter estimates. This is consistent with the result that the predictive accuracy of the model generally increases over the lifespan (that is, R2; Fig. 3c).
We also observed intriguing differences in the parameters that humans converged on compared to the algorithm trajectories, with adult participants displaying lower generalization and less uncertainty-directed exploration (Supplementary Fig. 10). Yet remarkably, none of the optimization algorithms achieved significantly better performance than adult participants (Fig. 4). Thus, these differences might point towards cognitive costs, which are not justified by any increased performance benefits. Generalization over a greater extent may require remembering and performing computations over a larger set of past observations42,50, which is why some GP approximations reduce the number of inputs to save computational costs51. Similarly, deploying uncertainty-directed exploration is also associated with increased cognitive costs52, and can be systematically diminished through working memory load53 or time pressure54 manipulations.
Limitations and future directions
One limitation of our analyses is that we rely on cross-sectional rather than longitudinal data, observing changes in learning not only across the lifespan but also across individuals. Yet despite the advantages of a longitudinal design, it might not be appropriate in this setting, because we would be unable to distinguish performance improvements due to cognitive development from those due to practice. Having participants repeatedly interact with the same task at different stages of development could conflate task-specific changes in reward learning with domain-general changes in their learning strategy. Yet future longitudinal analyses may be possible using a richer paradigm, for instance, modelling how we learn intuitive theories about the world55 or compositional programs56 as a search process in some latent hypothesis space. The richness of these domains may allow similar dimensions of learning to be measured from sufficiently distinct tasks administered at different developmental stages.
While we have characterized behavioural changes in learning using the distinct and recoverable parameters of a reinforcement learning model, future work is needed to relate these parameters to the development of specific neural mechanisms. Existing research provides some promising candidates. Blocking dopamine D2 receptors has been shown to impact stimulus generalization57, selectively modulating similarity-based responses in the hippocampus. Similar multi-armed bandit tasks have linked the frontopolar cortex and the intraparietal sulcus to exploratory decisions58, where more specifically the right frontopolar cortex has been causally linked to uncertainty-directed exploration59, which can be selectively inhibited via transcranial magnetic stimulation.
Stochastic optimization also allows for ‘re-heating’8 by adding more flexibility in later optimization stages. Re-heating is often used in dynamic environments or when insensitivities of the fitness landscape can cause the algorithm to get stuck. Since deviations from the algorithm trajectories start in adolescence, this may coincide with a second window of developmental plasticity during adolescence60,61. While we observed relatively minor changes in the parameters governing individual learning, plasticity in adolescents is thought to specifically target social learning mechanisms7,62. Thus, different aspects of development may fall under different cooling schedules and similar analyses should also be applied to other learning contexts.
Additionally, our participant sample is potentially limited by a WEIRD (western, educated, industrialized, rich and democratic) bias63, where the relative safety of developmental environments may promote more exploration. While we would expect to find a similar qualitative pattern in more-diverse cultural settings, different expectations about the richness or predictability of environments may promote quantitatively different levels of each parameter. Indeed, previous work using a similar task but with risky outcomes found evidence for a similar generalization mechanism, but a different exploration strategy that prioritized safety64.
Finally, our research also has implications for the role of the environment in maladaptive development. Our comparison to stochastic optimization suggests, in line with life history theory65 and empirical work in rodents and humans4,61,66, that childhood and adolescence are sensitive periods for configuring learning and exploration parameters. Indeed, adverse childhood experiences have been shown to reduce exploration and impair reward learning67. Organisms utilize early life experience to configure strategies for interacting with their environment, which for most species remain stable throughout the lifespan68. Once the configuration of learning strategies has cooled off, there is less flexibility for adapting to novel circumstances in later developmental stages. In the machine learning analogy we have used, some childhood experiences can produce a mismatch between training and test environments, where deviations from the expected environment have been linked to a number of psychopathologies69. Such a mismatch would set the developmental trajectory towards regions of the parameter space that are poorly suited for some features of the adult environment, but may provide hidden benefits for other types of problems more similar to the ones encountered during development70. Rather than only focusing on adult phenotypes at a single point in time, accounting for adaptation and optimization over the lifespan provides a more complete understanding of developmental processes.
Conclusions
Scientists often look to statistical and computational tools for explanations and analogies71. With recent advances in machine learning and artificial intelligence, these tools are increasingly vivid mirrors into the nature of human cognition and its development. We can understand idiosyncrasies of hypothesis generation through Monte Carlo sampling72, individual learning through optimization73–75, and development as programming or ‘hacking’56. An important advantage of computational explanations is that they offer direct empirical demonstrations, instead of remaining as vague, verbal comparisons. Here, we provided such a demonstration, and added much-needed clarity to commonly used analogies of stochastic optimization in developmental psychology.
Methods
Experiments
All experiments were approved by the ethics committee of the Max Planck Institute for Human Development (protocols ABC2016/08, ABC2017/04 and A2018/23). We combined open data from two previously published experiments targeting children and adult participants (Meder et al.30: n = 52, Mage = 6.35, s.d. = 0.95, 25 female; Schulz et al.27: n = 79, Mage = 16.67, s.d. = 12.82, 33 female), together with new unpublished data targeting adolescent participants (n = 150, Mage = 16.1, s.d. = 4.97, 69 female). The experimental designs differed in a few details, the majority of which were removed by filtering participants. The combined and filtered data consisted of 281 participants between the ages of 5 and 55 (Mage = 14.46, s.d. = 8.61, 126 female). Informed consent was obtained from all participants or their legal guardians before participation.
Generic materials and procedure
All participants performed a spatially correlated bandit task38 on an 8 × 8 grid of 64 options (that is, tiles). A random tile was revealed at the beginning of each round, with participants given a limited search horizon of 25 trials to acquire as many cumulative rewards as possible by choosing either new or previously revealed tiles. After each round, participants were rewarded a maximum of five stars reflecting their performance relative to always selecting the optimal tile. The number of stars earned in each round stayed visible until the end of the experiment.
When choosing a tile, participants earned rewards corrupted by zero-mean, normally distributed noise. Reward expectations were spatially correlated across the grid, such that nearby tiles had similar reward expectations (described below). Earned rewards were depicted numerically along with a corresponding colour (colours only in Meder et al.30; see below), with darker colours depicting higher rewards. Figure 1a provides a screenshot of the task and Fig. 1b depicts the distribution of rewards in a fully revealed environment.
All experiments (after filtering, see next section) used the same set of 40 underlying reward environments, which define a bivariate function on the grid, mapping each tile’s location to an expected reward value. The environments were generated by sampling from a multivariate Gaussian distribution N(0, Σ), with covariance matrix Σ defined by a radial basis function kernel (equation (2)) with λ = 4. In each round, a new environment was chosen without replacement from the list of environments. To prevent participants from knowing when they had found the highest reward, a different maximum was sampled from a uniform distribution for each round and all reward values were rescaled accordingly. The rescaled rewards were then shifted by +5 to avoid reward observations below 0. Hence, the effective rewards ranged from 5 to 45, with a different maximum in each round. All experiments included an initial training round designed to interactively explain the nature of the task, and ended with a bonus round in which participants were asked to predict the rewards of unseen tiles. All analyses exclude the training and bonus rounds.
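As an illustration of this procedure, the sketch below samples one environment from a multivariate Gaussian with an RBF covariance and applies the rescaling described above; the uniform range for the round-specific maximum (30–40) is an assumption chosen only so that effective rewards span roughly 5 to 45, and the code is not the original generation script.

```python
import numpy as np

def rbf_kernel(X, lam):
    """Radial basis function kernel (equation (2)) over grid coordinates X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # squared Euclidean distances
    return np.exp(-d2 / lam)

def sample_environment(rng, lam=4.0, size=8, max_range=(30.0, 40.0)):
    """Sample one spatially correlated reward environment on a size x size grid.
    max_range is an illustrative assumption for the round-specific maximum."""
    coords = np.array([(i, j) for i in range(size) for j in range(size)], dtype=float)
    K = rbf_kernel(coords, lam) + 1e-8 * np.eye(size * size)  # jitter for numerical stability
    r = rng.multivariate_normal(np.zeros(size * size), K)     # draw expected rewards
    r = (r - r.min()) / (r.max() - r.min())                   # rescale to [0, 1]
    return r * rng.uniform(*max_range) + 5                    # rescale and shift by +5

env = sample_environment(np.random.default_rng(0))
print(env.reshape(8, 8).round(1))
```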
Differences across experiments
Participants from the Meder et al.30 and Schulz et al.27 studies were recruited from museums in Berlin and paid with stickers (Meder et al.30) or with money (Schulz et al.27) proportional to their performance in the task. The new adolescent data were collected at the Max Planck Institute for Human Development in Berlin along with a battery of ten other decision-making tasks on a desktop computer. These participants were given a fixed payment of €10 per hour.
The studies by both Meder et al.30 and Schulz et al.27 used a between-subject manipulation of the strength of reward correlations (smooth versus rough environments: λsmooth = 4, λrough = 1). Because only minimal differences in model parameters were found in previous studies27,30,38, the rough condition was omitted in the adolescent sample. Thus, we filtered out all participants assigned to the rough condition, such that only participants assigned to the smooth environments were included in the final sample. Lastly, both Schulz et al.27 and the adolescent experiment used ten rounds, while Meder et al.30 included only six rounds to avoid lapses in attention in the younger age group. In addition, numerical depictions of rewards were removed in the Meder et al.30 experiment, and participants were instructed to focus on the colours (deeper red indicating more rewards) to avoid difficulties with reading large numbers.
After filtering, the remaining differences in modality (tablet versus computer), incentives (stickers versus variable money versus fixed money), number of rounds (six versus ten) and visualization of rewards (numbers plus colours versus colours only) did not result in any differences in performance (Supplementary Fig. 1a), model fits (Supplementary Fig. 1b) or parameter estimates (Supplementary Fig. 1c).
Computational models
Gaussian process generalization
GP regression40 provides a non-parametric Bayesian framework for function learning, which we use as a method of value generalization38. We use the GP to infer a value function mapping the input space (all possible options on the grid) to real-valued scalar outputs r (reward expectations). The GP performs this inference in a Bayesian manner, by first defining a prior distribution over functions p(r0), which is assumed to be multivariate Gaussian:
p(r_0) = N(m(x), k(x, x′))    (5)
with the prior mean m(x) defining the expected output of input x, and with covariance defined by the kernel function k(x, x′), for which we use a radial basis function kernel (equation (2)). Per convention, we set the prior mean to zero, without loss of generality40.
Conditioned on a set of observations Dt = {Xt, yt}, the GP computes a posterior distribution (equation (1)) for some new input x*, which is also Gaussian, with posterior mean and variance defined as:
m_t(x*) = k_*ᵀ (K + σ_ε² I)⁻¹ y_t    (6)
v_t(x*) = k(x*, x*) − k_*ᵀ (K + σ_ε² I)⁻¹ k_*    (7)
where k_* = k(Xt, x*) is the covariance between each observed input and the new input x*, and K = k(Xt, Xt) is the covariance matrix between each pair of observed inputs. I is the identity matrix and σ_ε² is the observation variance, corresponding to assumed independent and identically distributed Gaussian noise on each reward observation.
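The following sketch computes the posterior mean and variance of equations (6) and (7) for the RBF kernel; it is a minimal, unoptimized illustration (using a direct matrix inverse) rather than the fitted implementation.

```python
import numpy as np

def gp_posterior(X_obs, y_obs, X_new, lam, noise_var):
    """GP posterior mean and variance (equations (6) and (7)) at new inputs X_new,
    given observed inputs X_obs and observed rewards y_obs."""
    def k(A, B):  # RBF kernel (equation (2))
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / lam)

    K = k(X_obs, X_obs) + noise_var * np.eye(len(X_obs))        # K + noise * I
    k_star = k(X_obs, X_new)                                    # covariance with new inputs
    K_inv = np.linalg.inv(K)
    mean = k_star.T @ K_inv @ y_obs                             # equation (6)
    var = np.diag(k(X_new, X_new) - k_star.T @ K_inv @ k_star)  # equation (7)
    return mean, var

# Toy example: two observed tiles, prediction for a nearby tile.
X_obs = np.array([[0.0, 0.0], [1.0, 0.0]])
y_obs = np.array([10.0, 20.0])
print(gp_posterior(X_obs, y_obs, np.array([[2.0, 0.0]]), lam=4.0, noise_var=1.0))
```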
Lesioned models
The λ lesion model removes the capacity for generalization, by replacing the GP with a Bayesian mean tracker (BMT) as a reinforcement learning model that learns reward estimates for each option independently using the dynamics of a Kalman filter with time-invariant rewards. Reward estimates are updated as a function of prediction error, and thus the BMT can be interpreted as a Bayesian variant of the classic Rescorla–Wagner model76,77 and has been used to describe human behaviour in a variety of learning and decision-making tasks54,78,79.
The BMT also defines a Gaussian prior distribution of the reward expectations, but does so independently for each option x:
p(r_0(x)) = N(m_0(x), v_0(x))    (8)
The BMT computes an equivalent posterior distribution for the expected reward for each option (equation (1)), also in the form of a Gaussian, but where the posterior mean mt(x) and posterior variance vt(x) are defined independently for each option and computed by the following updates:
m_t(x) = m_{t−1}(x) + δ_t(x) G_t(x) [y_t(x) − m_{t−1}(x)]    (9)
v_t(x) = [1 − δ_t(x) G_t(x)] v_{t−1}(x)    (10)
Both updates use δt(x) = 1 if option x was chosen on trial t, and δt(x) = 0 otherwise. Thus, the posterior mean and variance are only updated for the chosen option. The update of the mean is based on the prediction error yt(x) − mt−1(x) between observed and anticipated reward, while the magnitude of the update is based on the Kalman gain Gt(x):
G_t(x) = v_{t−1}(x) / (v_{t−1}(x) + θ_ε²)    (11)
analogous to the learning rate of the Rescorla–Wagner model. Here, the Kalman gain is dynamically defined as a ratio of variance terms, where vt−1(x) is the posterior variance estimate from the previous trial and θ_ε² is the error variance, which (analogous to the GP) models the level of noise associated with reward observations. Smaller values of θ_ε² thus result in larger updates of the mean.
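A minimal sketch of one BMT update (equations (9) to (11)) is shown below; the prior values in the toy example are illustrative assumptions.

```python
import numpy as np

def bmt_update(m, v, choice, reward, error_var):
    """One Bayesian mean tracker update: only the chosen option's mean and variance
    change (equations (9) and (10)), scaled by the Kalman gain (equation (11))."""
    m, v = m.copy(), v.copy()
    gain = v[choice] / (v[choice] + error_var)   # Kalman gain (equation (11))
    m[choice] += gain * (reward - m[choice])     # prediction-error update (equation (9))
    v[choice] *= 1.0 - gain                      # uncertainty shrinks (equation (10))
    return m, v

# Toy example: diffuse prior over 64 tiles, then observe a reward of 30 at tile 10.
m0, v0 = np.full(64, 25.0), np.full(64, 250.0)
m1, v1 = bmt_update(m0, v0, choice=10, reward=30.0, error_var=25.0)
print(m1[10], v1[10])
```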
The β lesion simply fixes β = 0, making the valuation of options solely defined by the expected rewards q(x) = m(x).
The τ lesion model swaps the softmax policy (characterized by decision temperature τ) for an epsilon-greedy policy39, since it is not feasible to simply remove the softmax component or fix τ = 0 making it an argmax policy (due to infinite log loss from zero probability predictions). Instead, we take the opportunity to compare the softmax policy against epsilon-greedy as an alternative mechanism of random exploration28. We still combine epsilon-greedy with GP and UCB components, but rather than choosing options proportional to their UCB value, the τ lesion estimates ϵ as a parameter controlling the probability of choosing an option at random versus the highest UCB option:
P(x) = ϵ/64 + (1 − ϵ) · I(x = argmax_x′ q(x′))    (12)
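For comparison, the epsilon-greedy policy of the τ lesion model (equation (12)) can be sketched as follows; the helper name is our own.

```python
import numpy as np

def epsilon_greedy_policy(q, epsilon):
    """Choice probabilities under the tau-lesion model (equation (12)):
    a uniformly random option with probability epsilon, otherwise the highest-UCB option."""
    p = np.full(len(q), epsilon / len(q))
    p[np.argmax(q)] += 1.0 - epsilon
    return p

print(epsilon_greedy_policy(np.array([1.0, 3.0, 2.0]), epsilon=0.1))
```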
Model cross-validation
Each model was fitted using leave-one-round-out cross-validation for each individual participant using maximum likelihood estimation. Model fits are described using negative log likelihoods summed over all out-of-sample predictions, while individual participant parameter estimates are based on averaging over the cross-validated maximum likelihood estimations. Figure 3c reports model fits in terms of a pseudo-R2, which compares the out-of-sample negative log likelihoods for each model k against a random model:
R²_k = 1 − log L(M_k) / log L(M_rand)    (13)
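Given the summed out-of-sample negative log likelihood of a model, the pseudo-R2 of equation (13) can be computed as in the sketch below, using a random baseline over the 64 options; the numbers in the example are illustrative.

```python
import numpy as np

def pseudo_r2(nll_model, n_choices, n_options=64):
    """Pseudo-R^2 (equation (13)): 0 indicates chance-level predictions,
    1 indicates theoretically perfect predictions."""
    nll_random = -n_choices * np.log(1.0 / n_options)  # negative log likelihood of random choice
    return 1.0 - nll_model / nll_random

print(pseudo_r2(nll_model=600.0, n_choices=225))  # illustrative values only
```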
Changepoint regression
We use a hierarchical Bayesian changepoint model to quantify univariate changes in model parameters as a function of age:
y ∼ Student-t(ν, μ(age), σ)    (14)
μ(age) = b_0 + b_1 · age · I(age ≤ ω) + [b_1 · ω + b_2 · (age − ω)] · I(age > ω)    (15)
Here, y denotes a participant’s estimate of a given parameter, I(⋅) is an indicator function, b0 is the intercept, and ω is the age at which the slope b1 changes to b2. We included random intercepts for the different experiments. To account for potential outliers in the parameter estimates, we used a Student-t likelihood function as a form of robust regression. Supplementary Table 5 depicts a model comparison between this robust regression reported in the main text, a regression with a Gaussian likelihood that attenuates skew by log-transforming the dependent variable, and a regression with a Gaussian likelihood that does not account for skew in the dependent variable. Using approximate leave-one-out cross-validation, we found that the robust regression consistently fitted the parameter distributions best.
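For illustration, the piecewise-linear mean function of equation (15) can be written as below; the intercept in the example is an arbitrary assumption, while the slope and changepoint are taken from the reported decision temperature estimates. The full model additionally estimates the likelihood parameters and random intercepts hierarchically.

```python
import numpy as np

def changepoint_mean(age, b0, b1, b2, omega):
    """Mean of the changepoint regression (equation (15)): slope b1 up to the
    changepoint omega, slope b2 afterwards, joined continuously at omega."""
    age = np.asarray(age, dtype=float)
    before = b0 + b1 * age
    after = b0 + b1 * omega + b2 * (age - omega)
    return np.where(age <= omega, before, after)

# Example: a steep decrease until ~8 years of age, then a plateau (b0 is arbitrary here).
ages = np.array([5, 7, 9, 15, 30, 55])
print(changepoint_mean(ages, b0=2.0, b1=-0.59, b2=0.0, omega=7.74))
```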
Fitness landscape
We used Tukey’s fence to define a credible interval for each GP-UCB parameter (λ, β, τ) based on participant estimates and created a grid of 100 equally spaced values in log space for each parameter. This defines a space of plausible learning strategies corresponding to one million parameter combinations. We then ran 100 simulations of the GP-UCB model for each parameter combination (sampling one of the 40 reward environments with replacement each time) and computed the mean reward across iterations (Supplementary Fig. 7).
Optimization algorithms
Using this fitness landscape defined across learning strategies, we simulated the trajectories of various optimization algorithms. Specifically, we tested SA4,9 and SHC12,13, the latter of which provides a discrete analogue to the better-known stochastic gradient descent method commonly used to optimize neural networks10,11. Each optimization algorithm (SA versus SHC) was combined with one of three common cooling schedules80 defining how the optimization temperature (temp) changes as a function of the iteration number i. Fast cooling uses tempi = 1/(1 + i), exponential cooling decays the temperature geometrically (tempi ∝ α^i with 0 < α < 1) and linear cooling decreases the temperature by a constant amount on each iteration. As the optimization temperature decreases over iterations, there is a general decrease in the amount of randomness or stochasticity.
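The three schedules can be sketched as follows; the fast-cooling form follows the text, whereas the constants for the exponential and linear schedules (α, temp0 and the iteration budget) are illustrative assumptions.

```python
def optimization_temperature(i, schedule="fast", temp0=1.0, alpha=0.95, n_iter=1500):
    """Optimization temperature at iteration i under three common cooling schedules."""
    if schedule == "fast":
        return 1.0 / (1.0 + i)                      # as given in the text
    if schedule == "exponential":
        return temp0 * alpha ** i                   # geometric decay (assumed constants)
    if schedule == "linear":
        return temp0 * max(1.0 - i / n_iter, 0.0)   # constant decrease per iteration (assumed)
    raise ValueError(f"unknown schedule: {schedule}")
```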
SA is a stochastic sampling algorithm, which is more likely to select solutions with lower fitness when the optimization temperature is high. After initialization, SA iteratively selects a random neighbouring solution snew in the fitness landscape (that is, one step in the grid of one million parameter combinations), and deterministically accepts it if it corresponds to higher fitness than the current solution sold; otherwise, it accepts worse solutions with probability:
p(accept s_new) = exp(−(f(s_old) − f(s_new)) / temp_i)    (16)
where tempi is the current optimization temperature and f(⋅) denotes the fitness of a solution.
SHC is similar, but considers all neighbouring solutions and selects a new solution proportional to its fitness:
p(s_new = s_j) = exp(f(s_j) / temp_i) / Σ_k exp(f(s_k) / temp_i)    (17)
where the sum runs over all neighbouring solutions s_k.
For each combination of optimization algorithm and cooling function, we simulated optimization trajectories over 1500 iterations. Each simulation was initialized at one of the cross-validated parameter estimates of the participants in the youngest age group. This resulted in 120 (30 participants × 4 rounds of cross-validation) trajectories for each combination of algorithm and cooling schedule.
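A minimal sketch of one SA and one SHC step on a grid-shaped fitness landscape is given below; the boundary handling, the toy landscape and the exact neighbourhood definition are illustrative assumptions rather than the implementation used for the reported simulations.

```python
import numpy as np

def sa_step(fitness, current, temp, rng):
    """One simulated annealing move (equation (16)): propose a random neighbour,
    always accept improvements, accept worse solutions with probability
    exp(-(loss in fitness) / temp)."""
    dim = int(rng.integers(fitness.ndim))
    proposal = list(current)
    proposal[dim] = int(np.clip(proposal[dim] + rng.choice([-1, 1]), 0, fitness.shape[dim] - 1))
    proposal = tuple(proposal)
    if fitness[proposal] >= fitness[current]:
        return proposal
    p_accept = np.exp(-(fitness[current] - fitness[proposal]) / temp)
    return proposal if rng.random() < p_accept else current

def shc_step(fitness, current, temp, rng):
    """One stochastic hill climbing move (equation (17)): sample among all
    neighbours with probability proportional to a softmax of their fitness."""
    neighbours = []
    for dim in range(fitness.ndim):
        for step in (-1, 1):
            idx = list(current)
            idx[dim] = int(np.clip(idx[dim] + step, 0, fitness.shape[dim] - 1))
            neighbours.append(tuple(idx))
    f = np.array([fitness[n] for n in neighbours])
    w = np.exp((f - f.max()) / temp)                       # numerically stable softmax
    return neighbours[rng.choice(len(neighbours), p=w / w.sum())]

# Toy run with fast cooling on a random 10 x 10 x 10 landscape (the study used a
# 100 x 100 x 100 grid over lambda, beta and tau, with mean simulated reward as fitness).
rng = np.random.default_rng(0)
landscape = rng.random((10, 10, 10))
state = (5, 5, 5)
for i in range(200):
    state = shc_step(landscape, state, temp=1.0 / (1.0 + i), rng=rng)
print(state, landscape[state])
```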
Reporting summary
Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.
Acknowledgements
We thank K. Murayama, M. Sakaki, P. Schwartenbeck, M. Hamidi and R. Uchiyama for helpful feedback and C. Wysocki for data collection. This work was supported by the German Federal Ministry of Education and Research (BMBF): Tübingen AI Center, FKZ: 01IS18039A (C.M.W.) and funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy-EXC2064/1-390727645 (C.M.W.).
Author contributions
C.M.W., S.C. and A.P.G. conceived the study with feedback from all authors. S.C. collected the data using materials provided by C.M.W. C.M.W., A.P.G. and S.C. performed the analyses. C.M.W., S.C. and A.P.G. wrote the paper, with feedback from E.S., W.B., A.R. and B.M.
Peer review
Peer review information
Nature Human Behaviour thanks Elizabeth Bonawitz and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Peer reviewer reports are available.
Funding
Open access funding provided by Max Planck Society.
Data availability
Data are publicly available at https://github.com/AnnaGiron/developmental_trajectory.
Code availability
Code is publicly available at https://github.com/AnnaGiron/developmental_trajectory.
Competing interests
The authors declare no competing interests.
Footnotes
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
These authors contributed equally: Anna P. Giron, Simon Ciranka.
Supplementary information
The online version contains supplementary material available at 10.1038/s41562-023-01662-1.
References
- 1.Tenenbaum JB, Kemp C, Griffiths TL, Goodman ND. How to grow a mind: statistics, structure, and abstraction. Science. 2011;331:1279–1285. doi: 10.1126/science.1192788. [DOI] [PubMed] [Google Scholar]
- 2.Moran RJ, Symmonds M, Dolan RJ, Friston KJ. The brain ages optimally to model its environment: evidence from sensory learning over the adult lifespan. PLoS Comput. Biol. 2014;10:e1003422. doi: 10.1371/journal.pcbi.1003422. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 3.Nussenbaum K, Hartley CA. Reinforcement learning across development: what insights can we draw from a decade of research? Dev. Cogn. Neurosci. 2019;40:100733. doi: 10.1016/j.dcn.2019.100733. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 4.Gopnik A, et al. Changes in cognitive flexibility and hypothesis search across human life history from childhood to adolescence to adulthood. Proc. Natl Acad. Sci. 2017;114:7892–7899. doi: 10.1073/pnas.1700811114. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 5.Walasek N, Frankenhuis WE, Panchanathan K. Sensitive periods, but not critical periods, evolve in a fluctuating environment: a model of incremental development. Proc. R. Soc. B. 2022;289:20212623. doi: 10.1098/rspb.2021.2623. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 6.Gopnik A, Griffiths TL, Lucas CG. When younger learners can be better (or at least more open-minded) than older ones. Curr. Dir. Psychol. Sci. 2015;24:87–92. doi: 10.1177/0963721414556653. [DOI] [Google Scholar]
- 7.Ciranka S, van den Bos W. Adolescent risk-taking in the context of exploration and social influence. Dev. Rev. 2021;61:100979. doi: 10.1016/j.dr.2021.100979. [DOI] [Google Scholar]
- 8.Du, K.-L. & Swamy, M. N. S. in Search and Optimization by Metaheuristics: Techniques and Algorithms Inspired by Nature (eds Du, K.-L. & Swamy, M. N. S.) 29–36 (Springer International Publishing, 2016).
- 9.Kirkpatrick S, Gelatt CD, Vecchi MP. Optimization by simulated annealing. Science. 1983;220:671–680. doi: 10.1126/science.220.4598.671. [DOI] [PubMed] [Google Scholar]
- 10.Robbins H, Monro S. A stochastic approximation method. Ann. Math. Stat. 1951;22:400–407. doi: 10.1214/aoms/1177729586. [DOI] [Google Scholar]
- 11.Bottou L, Curtis FE, Nocedal J. Optimization methods for large-scale machine learning. Siam Rev. 2018;60:223–311. doi: 10.1137/16M1080173. [DOI] [Google Scholar]
- 12.Rieskamp J, Busemeyer JR, Laine T. How do people learn to allocate resources? Comparing two learning theories. J. Exp. Psychol. Learn. Mem. Cog. 2003;29:1066. doi: 10.1037/0278-7393.29.6.1066. [DOI] [PubMed] [Google Scholar]
- 13.Moreno-Bote R, Ramírez-Ruiz J, Drugowitsch J, Hayden BY. Heuristics and optimal solutions to the breadth–depth dilemma. Proc. Natl Acad. Sci. 2020;117:19,799–19,808. doi: 10.1073/pnas.2004929117. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 14.Buchsbaum D, Bridgers S, Skolnick Weisberg D, Gopnik A. The power of possibility: causal learning, counterfactual reasoning, and pretend play. Philos. Trans. R. Soc. B Biol. Sci. 2012;367:2202–2212. doi: 10.1098/rstb.2012.0122. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 15.Lucas CG, Bridgers S, Griffiths TL, Gopnik A. When children are better (or at least more open-minded) learners than adults: developmental differences in learning the forms of causal relationships. Cognition. 2014;131:284–299. doi: 10.1016/j.cognition.2013.12.010. [DOI] [PubMed] [Google Scholar]
- 16.Denison S, Bonawitz E, Gopnik A, Griffiths TL. Rational variability in children’s causal inferences: the sampling hypothesis. Cognition. 2013;126:285–300. doi: 10.1016/j.cognition.2012.10.010. [DOI] [PubMed] [Google Scholar]
- 17.Bonawitz E, Denison S, Gopnik A, Griffiths TL. Win-stay, lose-sample: a simple sequential algorithm for approximating Bayesian inference. Cogn. Psychol. 2014;74:35–65. doi: 10.1016/j.cogpsych.2014.06.003. [DOI] [PubMed] [Google Scholar]
- 18.Sumner, E. et al. The exploration advantage: children’s instinct to explore allows them to find information that adults miss. Preprint at PsyArXiv 10.31234/osf.io/h437v (2019).
- 19.Somerville LH, et al. Charting the expansion of strategic exploratory behavior during adolescence. J. Exp. Psychol. Gen. 2017;146:155. doi: 10.1037/xge0000250. [DOI] [PubMed] [Google Scholar]
- 20.Jepma M, Schaaf JV, Visser I, Huizenga HM. Uncertainty-driven regulation of learning and exploration in adolescents: a computational account. PLoS Comput. Biol. 2020;16:e1008276. doi: 10.1371/journal.pcbi.1008276. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 21.Palminteri S, Kilford EJ, Coricelli G, Blakemore S-J. The computational development of reinforcement learning during adolescence. PLoS Comput. Biol. 2016;12:e1004953. doi: 10.1371/journal.pcbi.1004953. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 22.Rosenbaum GM, Venkatraman V, Steinberg L, Chein JM. The influences of described and experienced information on adolescent risky decision making. Dev. Rev. 2018;47:23–43. doi: 10.1016/j.dr.2017.09.003. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 23.Baltes PB. Theoretical propositions of life-span developmental psychology: on the dynamics between growth and decline. Dev. Psychol. 1987;23:611–626. doi: 10.1037/0012-1649.23.5.611. [DOI] [Google Scholar]
- 24.Baltes PB, et al. Lifespan psychology: theory and application to intellectual functioning. Annu. Rev. Psychol. 1999;50:471–507. doi: 10.1146/annurev.psych.50.1.471. [DOI] [PubMed] [Google Scholar]
- 25.Gopnik A. Childhood as a solution to explore–exploit tensions. Philos. Trans. R. Soc. B. 2020;375:20190502. doi: 10.1098/rstb.2019.0502. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 26.Gopnik A. Scientific thinking in young children: theoretical advances, empirical research, and policy implications. Science. 2012;337:1623–1627. doi: 10.1126/science.1223416. [DOI] [PubMed] [Google Scholar]
- 27.Schulz E, Wu CM, Ruggeri A, Meder B. Searching for rewards like a child means less generalization and more directed exploration. Psychol. Sci. 2019;30:1561–1572. doi: 10.1177/0956797619863663. [DOI] [PubMed] [Google Scholar]
- 28.Dubois M, et al. Exploration heuristics decrease during youth. Cogn. Affect. Behav. Neurosci. 2022;22:969–983. doi: 10.3758/s13415-022-01009-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 29.Feng SF, Wang S, Zarnescu S, Wilson RC. The dynamics of explore–exploit decisions reveal a signal-to-noise mechanism for random exploration. Sci. Rep. 2021;11:1–15. doi: 10.1038/s41598-021-82530-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 30.Meder, B., Wu, C. M., Schulz, E. & Ruggeri, A. Development of directed and random exploration in children. Dev. Sci. 10.1111/desc.13095 (2021). [DOI] [PubMed]
- 31.Dubois M, Hauser TU. Value-free random exploration is linked to impulsivity. Nat. Commun. 2022;13:1–17. doi: 10.1038/s41467-022-31918-9. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 32.Blanco NJ, Sloutsky VM. Systematic exploration and uncertainty dominate young children’s choices. Dev. Sci. 2021;24:e13026. doi: 10.1111/desc.13026. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 33.Van den Bos W, Cohen MX, Kahnt T, Crone EA. Striatum–medial prefrontal cortex connectivity predicts developmental changes in reinforcement learning. Cereb. Cortex. 2012;22:1247–1255. doi: 10.1093/cercor/bhr198. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 34.Blanco NJ, et al. Exploratory decision-making as a function of lifelong experience, not cognitive decline. J. Exp. Psychol. Gen. 2016;145:284. doi: 10.1037/xge0000133. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 35.Tenenbaum JB, Kemp C, Griffiths TL, Goodman ND. How to grow a mind: statistics, structure, and abstraction. Science. 2011;331:1279–1285. doi: 10.1126/science.1192788. [DOI] [PubMed] [Google Scholar]
- 36.Stamps JA, Frankenhuis WE. Bayesian models of development. Trends Ecol. Evol. 2016;31:260–268. doi: 10.1016/j.tree.2016.01.012. [DOI] [PubMed] [Google Scholar]
- 37.Frankenhuis WE, Panchanathan K. Balancing sampling and specialization: an adaptationist model of incremental development. Proc. Biol. Sci. 2011;278:3558–3565. doi: 10.1098/rspb.2011.0055. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 38.Wu CM, Schulz E, Speekenbrink M, Nelson JD, Meder B. Generalization guides human exploration in vast decision spaces. Nat. Hum. Behav. 2018;2:915–924. doi: 10.1038/s41562-018-0467-4. [DOI] [PubMed] [Google Scholar]
- 39.Sutton, R. S. & Barto, A. G. Reinforcement Learning: An Introduction (MIT press, 2018).
- 40.Rasmussen, C. E. & Williams, C. Gaussian Processes for Machine Learning (MIT Press, 2006).
- 41.Wu CM, Schulz E, Garvert MM, Meder B, Schuck NW. Similarities and differences in spatial and non-spatial cognitive maps. PLOS Comput. Biol. 2020;16:1–28. doi: 10.1371/journal.pcbi.1008149. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 42.Wu CM, Schulz E, Gershman SJ. Inference and search on graph-structured spaces. Comput. Brain Behav. 2021;4:125–147. doi: 10.1007/s42113-020-00091-x. [DOI] [Google Scholar]
- 43.Rigoux L, Stephan KE, Friston KJ, Daunizeau J. Bayesian model selection for group studies—revisited. Neuroimage. 2014;84:971–985. doi: 10.1016/j.neuroimage.2013.08.065. [DOI] [PubMed] [Google Scholar]
- 44.Simonsohn U. Two lines: a valid alternative to the invalid testing of U-shaped relationships with quadratic regressions. Adv. Methods Pract. Psychol. Sci. 2018;1:538–555. doi: 10.1177/2515245918805755. [DOI] [Google Scholar]
- 45.Bhui R, Lai L, Gershman SJ. Resource-rational decision making. Curr. Opin. Behav. Sci. 2021;41:15–21. doi: 10.1016/j.cobeha.2021.02.015. [DOI] [Google Scholar]
- 46.Lieder F, Griffiths TL. Resource-rational analysis: understanding human cognition as the optimal use of limited computational resources. Behav. Brain Sci. 2020;43:e1. doi: 10.1017/S0140525X1900061X. [DOI] [PubMed] [Google Scholar]
- 47.Reyna VF, Brainerd CJ. Fuzzy-trace theory: an interim synthesis. Learn. Individ. Differ. 1995;7:1–75. doi: 10.1016/1041-6080(95)90031-4. [DOI] [Google Scholar]
- 48.Eckstein MK, Wilbrecht L, Collins AGE. What do reinforcement learning models measure? Interpreting model parameters in cognition and neuroscience. Curr. Opin. Behav. Sci. 2021;41:128–137. doi: 10.1016/j.cobeha.2021.06.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 49.Ciranka S, et al. Asymmetric reinforcement learning facilitates human inference of transitive relations. Nat. Hum. Behav. 2022;6:555–564. doi: 10.1038/s41562-021-01263-w. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 50.Gershman SJ, Daw ND. Reinforcement learning and episodic memory in humans and animals: an integrative framework. Annu. Rev. Psychol. 2017;68:101–128. doi: 10.1146/annurev-psych-122414-033625. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 51.Liu, H., Ong, Y.-S., Shen, X. & Cai, J. When Gaussian process meets big data: a review of scalable GPs. IEEE Trans. Neural Netw. Learn. Syst. (2020). [DOI] [PubMed]
- 52.Otto AR, Knox WB, Markman AB, Love BC. Physiological and behavioral signatures of reflective exploratory choice. Cogn. Affect. Behav. Neurosci. 2014;14:1167–1183. doi: 10.3758/s13415-014-0260-4. [DOI] [PubMed] [Google Scholar]
- 53.Cogliati Dezza I, Cleeremans A, Alexander W. Should we control? The interplay between cognitive control and information integration in the resolution of the exploration-exploitation dilemma. J. Exp. Psychol. Gen. 2019;148:977. doi: 10.1037/xge0000546. [DOI] [PubMed] [Google Scholar]
- 54.Wu CM, Schulz E, Pleskac TJ, Speekenbrink M. Time pressure changes how people explore and respond to uncertainty. Sci. Rep. 2022;12:1–14. doi: 10.1038/s41598-022-07901-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 55.Gerstenberg, T. & Tenenbaum, J. B. Intuitive theories. In The Oxford Handbook of Causal Reasoning (Waldmann, M. R. ed.) (Oxford Univ. Press, 2017); 10.1093/oxfordhb/9780199399550.013.28
- 56.Rule JS, Tenenbaum JB, Piantadosi ST. The child as hacker. Trends Cogn. Sci. 2020;24:900–915. doi: 10.1016/j.tics.2020.07.005. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 57.Kahnt T, Tobler PN. Dopamine regulates stimulus generalization in the human hippocampus. eLife. 2016;5:e12678. doi: 10.7554/eLife.12678. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 58.Daw ND, O’Doherty JP, Dayan P, Seymour B, Dolan RJ. Cortical substrates for exploratory decisions in humans. Nature. 2006;441:876–879. doi: 10.1038/nature04766. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 59.Zajkowski WK, Kossut M, Wilson RC. A causal role for right frontopolar cortex in directed, but not random, exploration. eLife. 2017;6:e27430. doi: 10.7554/eLife.27430. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 60.Laube C, van den Bos W, Fandakova Y. The relationship between pubertal hormones and brain plasticity: Implications for cognitive training in adolescence. Dev. Cogn. Neurosci. 2020;42:100753. doi: 10.1016/j.dcn.2020.100753. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 61.Dahl RE, Allen NB, Wilbrecht L, Suleiman AB. Importance of investing in adolescence from a developmental science perspective. Nature. 2018;554:441–450. doi: 10.1038/nature25770. [DOI] [PubMed] [Google Scholar]
- 62.Blakemore S-J, Mills KL. Is adolescence a sensitive period for sociocultural processing? Annu. Rev. Psychol. 2014;65:187–207. doi: 10.1146/annurev-psych-010213-115202. [DOI] [PubMed] [Google Scholar]
- 63.Henrich J, Heine SJ, Norenzayan A. Most people are not WEIRD. Nature. 2010;466:29. doi: 10.1038/466029a. [DOI] [PubMed] [Google Scholar]
- 64.Schulz E, Wu CM, Huys QJ, Krause A, Speekenbrink M. Generalization and search in risky environments. Cogn. Sci. 2018;42:2592–2620. doi: 10.1111/cogs.12695. [DOI] [PubMed] [Google Scholar]
- 65.Giudice, M. D., Gangestad, S. & Kaplan, H. Life history theory and evolutionary psychology. In The Handbook of Evolutionary Psychology (ed. Buss, D. M.) 88–114 (Wiley, 2015); 10.1002/9781119125563.evpsych102
- 66.Lin, W. C. et al. Transient food insecurity during the juvenile-adolescent period affects adult weight, cognitive flexibility, and dopamine neurobiology. Curr. Biol. (2022). [DOI] [PMC free article] [PubMed]
- 67.Lloyd A, McKay RT, Furl N. Individuals with adverse childhood experiences explore less and underweight reward feedback. Proc. Natl Acad. Sci. 2022;119:e2109373119. doi: 10.1073/pnas.2109373119. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 68.Frankenhuis, W. E., Panchanathan, K. & Nettle, D. Cognition in harsh and unpredictable environments. Curr. Opin. Psychol. 10.1016/j.copsyc.2015.08.011 (2016).
- 69.Humphreys KL, Zeanah CH. Deviations from the expectable environment in early childhood and emerging psychopathology. Neuropsychopharmacology. 2015;40:154–170. doi: 10.1038/npp.2014.165. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 70.Young, E. S., Frankenhuis, W. E., DelPriore, D. J. & Ellis, B. J. Hidden talents in context: cognitive performance with abstract versus ecological stimuli among adversity-exposed youth. Child Dev. (2022). [DOI] [PMC free article] [PubMed]
- 71.Gigerenzer G. From tools to theories: a heuristic of discovery in cognitive psychology. Psychol. Rev. 1991;98:254–267. doi: 10.1037/0033-295X.98.2.254. [DOI] [Google Scholar]
- 72.Dasgupta I, Schulz E, Gershman SJ. Where do hypotheses come from? Cogn. Psychol. 2017;96:1–25. doi: 10.1016/j.cogpsych.2017.05.001. [DOI] [PubMed] [Google Scholar]
- 73.Barry, D. N. & Love, B. C. Human learning follows the dynamics of gradient descent. Preprint at PsyArXiv 10.31234/osf.io/75e4t (2021).
- 74.Ritz H, Leng X, Shenhav A. Cognitive control as a multivariate optimization problem. J. Cogn. Neurosci. 2022;4:569–591. doi: 10.1162/jocn_a_01822. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 75.Hennig JA, et al. How learning unfolds in the brain: toward an optimization view. Neuron. 2021;109:3720–3735. doi: 10.1016/j.neuron.2021.09.005. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 76.Rescorla, R. A. & Wagner, A. R. Classical Conditioning II: Current Research and Theory 64–99 (Appleton-Century-Crofts, 1972).
- 77.Gershman SJ. A unifying probabilistic view of associative learning. PLoS Comput. Biol. 2015;11:e1004567. doi: 10.1371/journal.pcbi.1004567. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 78.Gershman SJ. Deconstructing the human algorithms for exploration. Cognition. 2018;173:34–42. doi: 10.1016/j.cognition.2017.12.014. [DOI] [PMC free article] [PubMed] [Google Scholar]
- 79.Dayan P, Kakade S, Montague PR. Learning and selective attention. Nat. Neurosci. 2000;3:1218–1223. doi: 10.1038/81504. [DOI] [PubMed] [Google Scholar]
- 80.Nourani Y, Andresen B. A comparison of simulated annealing cooling strategies. J. Phys. A Math. Gen. 1998;31:8373. doi: 10.1088/0305-4470/31/41/011. [DOI] [Google Scholar]
Data Availability Statement
Data and code are publicly available at https://github.com/AnnaGiron/developmental_trajectory.