Abstract
Occam’s razor is the principle that, all else being equal, simpler explanations should be preferred over more complex ones. This principle is thought to guide human decision-making, but the nature of this guidance is not known. Here we used preregistered behavioral experiments to show that people tend to prefer the simpler of two alternative explanations for uncertain data. These preferences match predictions of formal theories of model selection that penalize excessive flexibility. These penalties emerge when considering not just the best explanation but the integral over all possible, relevant explanations. We further show that these simplicity preferences persist in humans, but not in certain artificial neural networks, even when they are maladaptive. Our results imply that principled notions of statistical model selection, including integrating over possible, latent causes to avoid overfitting to noisy observations, may play a central role in human decision-making.
To make decisions in the real world, we must often choose between multiple, plausible explanations for noisy, sparse data. When evaluating competing explanations, Occam’s razor says that we should consider not just how well they account for data that have been observed, but also their potentially excessive flexibility in describing alternative, and potentially irrelevant, data that have not been observed2 (e.g., “a ghost did it!”, Figure 1a,b). This kind of simplicity preference has long been proposed as an organizing principle for mental function3, such as in the early concept of Prägnanz in Gestalt psychology4,5, a number of “minimum principles” for vision6, and theories that posit a central role for data compression in cognition7. However, despite evidence that human decision-makers can exhibit simplicity preferences under certain task conditions8–12, we lack a principled understanding of what, exactly, constitutes the “simplicity” that is favored (or, equivalently, the “complexity” that is disfavored) and how we balance that preference against the evidence provided by the observed data when we make decisions.
To address this issue quantitatively, we formalize decision-making as a model-selection problem: given a set of observations $X$ and a set of possible parametric statistical models $\{\mathcal{M}_1, \mathcal{M}_2, \ldots\}$, we seek the model $\mathcal{M}$ that in some sense is the best description of the data $X$. In this context, Occam’s razor can be interpreted as requiring the goodness-of-fit of a model to be penalized by some measure of its flexibility, or complexity, when comparing it against other models. Bayesian statistics offers a natural characterization of such a measure of complexity and specifies the way in which it should be traded off against goodness-of-fit to maximize decision accuracy, typically because the increased flexibility provided by increased complexity tends to cause errors by overfitting to noise in the observations1,13–19. According to the Bayesian framework, models should be compared based on their evidence or marginal likelihood $P(X\mid\mathcal{M}) = \int d\theta\, P(X\mid\theta,\mathcal{M})\, p(\theta\mid\mathcal{M})$, where $\theta$ represents the model parameters and $p(\theta\mid\mathcal{M})$ their associated prior. By varying the parameters, we can explore instances of a model and sweep out a manifold of possible descriptions of the data. Two such manifolds are visualized in Figure 1c, along with the maximum-likelihood parameters $\hat\theta$ that assign the highest probability to the observed data.
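Written out explicitly (a standard rendering of Bayesian model selection using the notation introduced above, rather than an expression taken from our Methods), the comparison between two models amounts to evaluating the posterior odds:

$$
\frac{P(\mathcal{M}_1 \mid X)}{P(\mathcal{M}_2 \mid X)}
= \frac{P(X \mid \mathcal{M}_1)}{P(X \mid \mathcal{M}_2)}\,
  \frac{P(\mathcal{M}_1)}{P(\mathcal{M}_2)},
\qquad
P(X \mid \mathcal{M}_i) = \int d\theta\, P(X \mid \theta, \mathcal{M}_i)\, p(\theta \mid \mathcal{M}_i).
$$

When the two models are equally probable a priori, as in the task described below, the decision rests entirely on the ratio of evidences (the Bayes factor), so any complexity penalty must arise from the integration over the parameters $\theta$.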
Under mild regularity assumptions and with sufficient data, the (log) evidence $\log P(X\mid\mathcal{M})$ can be written as the sum of the maximum log likelihood $\log P(X\mid\hat\theta,\mathcal{M})$ and several penalty factors (Figure 1d). These penalty factors, which arise even when the prior (data-independent) probabilities of the models under consideration are equal, can be interpreted as providing quantitatively defined preferences against certain models according to specific forms of complexity that they embody1,19. This approach, which we call the Fisher Information Approximation (FIA), has been used to identify worse-fitting, but better-generalizing, psychophysical models describing the relationship between physical variables (e.g., light intensity) and their psychological counterparts (e.g., brightness)20. It is related to similar quantitative definitions of statistical model complexity, such as the Minimum Description Length21–23, Minimum Message Length24, and Predictive Information25 frameworks. A key feature of the FIA is that, if the prior over parameters is taken to be uninformative15, each penalty factor can be shown to capture a distinct geometric property of the model1. These properties include not just the model’s dimensionality (number of parameters), which underlies the well-known Bayesian Information Criterion (BIC) for model selection26,27, but also its boundary (a novel term, detailed in Methods section M.2; see also the Discussion), volume, and shape (Figure 1c,d).
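In schematic form, and following the decomposition derived in ref. 1 (our notation; the new boundary term is omitted here because its exact expression is derived in Methods section M.2), the FIA reads:

$$
-\log P(X \mid \mathcal{M}) \;\approx\;
-\log P(X \mid \hat\theta, \mathcal{M})
\;+\; \frac{d}{2}\log\frac{N}{2\pi}
\;+\; \log \int d\theta \sqrt{\det g(\theta)}
\;+\; \frac{1}{2}\log\frac{\det h(\hat\theta)}{\det g(\hat\theta)},
$$

where $N$ is the number of observations, $d$ is the number of parameters (dimensionality), $g$ is the Fisher information metric on the model manifold (whose integral gives the volume), and $h$ is the observed Fisher information at the maximum-likelihood point $\hat\theta$ (robustness, or shape). The first penalty is the familiar BIC-like term; the remaining terms are the geometric penalties illustrated in Figure 1c,d.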
The complexity penalties depicted in Figure 1 emerge because the Bayesian framework marginalizes over the model parameters. In applying this framework to human decision-making, we interpret this marginalization as an integration over latent causes: to evaluate a particular explanation (or “model”) for a given set of observed data, one considers how likely the data are under that explanation, on average over all possible configurations of that explanation. Intuitively, flexible explanations are penalized by the averaging because many of their configurations have nothing to do with the observed state of the world $X$ and thus possess a vanishingly small likelihood $P(X\mid\theta,\mathcal{M})$. Consider the following example, in which the data are represented by a single point $X$ on a plane (Figure 2, top left). The problem is to decide between two alternative explanations (models) for the data: 1) $\mathcal{M}_1$, a Gaussian distribution centered at (0,0) with unit, isotropic variance; and 2) $\mathcal{M}_2$, a parametric family of Gaussians, also with unit variance, but with centers located anywhere along a fixed straight line segment. It is clear that $\mathcal{M}_2$ can explain a wider range of data, just like the ghost in Figure 1a, and is therefore more complex. For data that are equidistant from the two models, Occam’s razor prescribes that we should choose the simpler model, $\mathcal{M}_1$. In other words, the decision boundary separating the regions where one or the other model should be preferred is closer to $\mathcal{M}_2$ (the more complex model) than to $\mathcal{M}_1$ (the simpler one). This simplicity bias is specific to a decision-maker that integrates over the latent causes (model configurations) and does not result from sampling multiple possible explanations via other, less systematic means, for example by adding sensory and/or choice noise (Figure 2; see also Supplementary Figure S.9).
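This intuition can be verified numerically. The sketch below is illustrative only (the segment endpoints and the probe point are arbitrary choices, not those used in Figure 2) and approximates the evidence for $\mathcal{M}_2$ by averaging the likelihood over centers along the segment:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Toy example: a single observed point x on the plane.
# M1: Gaussian with unit isotropic variance centered at the origin.
# M2: family of unit-variance Gaussians centered anywhere on a line segment
#     (endpoints below are illustrative, not the ones used in the paper).
seg_start, seg_end = np.array([-1.0, 2.0]), np.array([1.0, 2.0])

def evidence_m1(x):
    # No free parameters: the evidence is just the likelihood.
    return multivariate_normal(mean=[0.0, 0.0], cov=np.eye(2)).pdf(x)

def evidence_m2(x, n_grid=2001):
    # Marginalize over the latent center, uniformly along the segment
    # (uniform = Jeffreys prior for a Gaussian location parameter).
    ts = np.linspace(0.0, 1.0, n_grid)
    centers = seg_start[None, :] + ts[:, None] * (seg_end - seg_start)[None, :]
    liks = np.exp(-0.5 * np.sum((x[None, :] - centers) ** 2, axis=1)) / (2 * np.pi)
    return liks.mean()

# Probe point equidistant (distance 1) from M1's center and from the segment.
x = np.array([0.0, 1.0])
print(evidence_m1(x), evidence_m2(x))  # M1's evidence is larger: Occam's razor
```

At the probe point the maximum-likelihood fits of the two models are tied, yet the marginal likelihood of $\mathcal{M}_2$ is lower because most of its configurations lie far from the observed data, so the evidence favors the simpler model.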
To relate the FIA complexity terms to the potential preferences exhibited by both human and artificial decision-makers, we designed a simple decision-making task. For each trial, N=10 simultaneously presented, noisy observations (red dots in Figure 1e) were sampled from a 2D Normal (“generative”) distribution centered somewhere within one of two possible shapes (black shapes in Figure 1e). The identity of the shape generating the data (top versus bottom) was chosen at random with equal probability. Likewise, the location of the center of the Normal distribution within the selected shape was sampled uniformly at random, in a way that did not depend on the model parameterization, by using the Jeffreys prior15. Given the observations, the decision-maker chose the shape (model) that was more likely to contain the center of the generative distribution. We used four task variants, each designed to probe primarily one of the distinct geometrical features that are penalized in Bayesian model selection (i.e., a Bayesian observer is expected to have a particular, quantitative preference away from the more-complex alternative in each pair; Figure 1c and d). For this task, the FIA provided a good approximation of the exact Bayesian posterior (Supplementary Information section S.1). Accordingly, simulated observers that increasingly integrated over latent causes, like the Bayesian observer, exhibited increasing FIA-like biases. These biases were distinguishable from (and degraded by) effects of increasing sensory and/or choice noise (Supplementary Information S.7.1 and Supplementary Figure S.10).
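To make the structure of a trial and of the two decision rules concrete, here is a minimal simulation sketch. The shapes are hypothetical stand-ins (a point and a line segment) rather than the actual shapes of Figure 1e, the shapes are discretized rather than integrated analytically, and constants shared by the two models are dropped:

```python
import numpy as np

rng = np.random.default_rng(0)
N, SIGMA = 10, 1.0   # observations per trial and observation noise (illustrative)

# Hypothetical stand-ins for the two model "shapes" (the real shapes in
# Figure 1e differ): each shape is a discretized set of candidate centers.
shape_a = np.array([[0.0, 0.0]])                                     # a single point
shape_b = np.stack([np.linspace(-1, 1, 200), np.full(200, 1.5)], 1)  # a segment

def sample_trial(shape):
    """Pick a generative center uniformly within the shape, then draw N points."""
    center = shape[rng.integers(len(shape))]
    return center + SIGMA * rng.standard_normal((N, 2))

def log_marginal(obs, shape):
    """log P(obs | shape), integrating over the latent center (uniform prior)."""
    d2 = ((obs[None, :, :] - shape[:, None, :]) ** 2).sum(-1)  # (centers, N)
    loglik = -0.5 * d2.sum(-1) / SIGMA**2                      # up to a shared constant
    return np.logaddexp.reduce(loglik) - np.log(len(shape))

def log_maxlik(obs, shape):
    """Log-likelihood of the single best-fitting center within the shape."""
    d2 = ((obs[None, :, :] - shape[:, None, :]) ** 2).sum(-1)
    return (-0.5 * d2.sum(-1) / SIGMA**2).max()

obs = sample_trial(shape_a)
bayes_choice = "A" if log_marginal(obs, shape_a) > log_marginal(obs, shape_b) else "B"
ml_choice = "A" if log_maxlik(obs, shape_a) > log_maxlik(obs, shape_b) else "B"
print(bayes_choice, ml_choice)
```

Comparing the marginalized quantities implements the ideal observer for the generative task, whereas comparing the maximized quantities implements the observer that ignores model complexity; on ambiguous trials the two can disagree, with the former relatively favoring the simpler shape.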
For our human studies, we used the on-line research platform Pavlovia to implement the task, and Prolific to recruit and test participants. Following our preregistered approaches28–30, we collected data from 202 participants, divided into four groups that each performed one of the four separate versions of the task depicted in Figure 1e (each group comprised ~50 participants). Our instructions used the analogy of seeds from a flower located in one of two flower beds to give an intuitive framing of the key concepts: noisy data generated by a particular instance of a parametric model from one of two model families. To minimize the possibility that participants would simply learn from implicit or explicit feedback over the course of each session to make more optimal (i.e., simplicity-preferring) choices of flower beds, we: 1) used conditions for which the difference in performance between ideal observers that penalized model complexity according to the FIA and simulated observers that used only model likelihood was ~1% (depending on the task type; Figure 1e, insets), which translates to ~5 additional correct trials over the course of an entire experiment; and 2) provided feedback only at the end of each block of 100 trials, not on each trial. We used hierarchical (Bayesian) logistic regression to measure the degree to which each participant’s choices were affected by model likelihood (distance from the data to a given model) and by each of the FIA features (see Methods section M.6). We defined each participant’s sensitivity to each FIA term as a normalized quantity, relative to their likelihood sensitivity (i.e., by dividing the logistic coefficient associated with a given FIA term by the logistic coefficient associated with the likelihood).
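Schematically (our own summary; the exact parameterization, hierarchical structure, and priors are given in Methods section M.6), each choice was modeled as

$$
P(\text{choose } \mathcal{M}_1) =
\sigma\!\left( \beta_0 + \beta_{\mathrm{L}}\,\Delta \mathrm{LL} + \sum_k \beta_k\, \Delta \mathrm{FIA}_k \right),
\qquad
\zeta_k = \frac{\beta_k}{\beta_{\mathrm{L}}},
$$

where $\sigma$ is the logistic function, $\Delta\mathrm{LL}$ and $\Delta\mathrm{FIA}_k$ are the trial-by-trial differences between the two models in log likelihood and in the $k$-th FIA penalty, and $\zeta_k$ is the normalized sensitivity reported below ($\zeta_k=0$: no sensitivity to that form of complexity; $\zeta_k=1$: weighting it exactly as the FIA prescribes).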
The human participants were sensitive to all four forms of model complexity (Figure 3). Specifically, the estimated normalized population-level sensitivity (posterior mean ± st. dev., where zero implies no sensitivity and one implies Bayes-optimal sensitivity) was 4.66±0.96 for dimensionality, 1.12±0.10 for boundary, 0.23±0.12 for volume, and 2.21±0.12 for robustness (note that, following our preregistered plan, we emphasize parameter estimation using Bayesian approaches31–33 here and throughout the main text, and we provide complementary null-hypothesis significance testing in Supplementary Information section S.6 and Table S.8). Formal model comparison (WAIC; see Supplementary Information section S.6.1 and Tables S.6 and S.7) confirmed that their behavior was better described by taking into account the geometric penalties defined by the theory of Bayesian model selection than by relying only on the minimum distance between model and data (i.e., the maximum-likelihood solution). In keeping with these analyses, their decisions were consistent with processes that tended to integrate over latent causes (and that tended to exhibit moderate levels of sensory noise and low choice noise; Supplementary Information S.7 and Supplementary Figures S.11 and S.12). Overall, our data indicate that people tend to integrate over latent causes in a way that gives rise to Occam’s razor, manifesting as sensitivity to the geometrical features in Bayesian model selection that characterize model complexity.
The participants exhibited substantial individual variability in performance that included ranges of sensitivities to each FIA term that spanned optimal and sub-optimal values. This variability was large compared to the uncertainty associated with participant-level sensitivity estimates (Supplementary Information S.4) and impacted performance in a manner that highlighted the usefulness of appropriately tuned (i.e., close to Bayes optimal) simplicity preferences: accuracy tended to decline for participants with FIA sensitivities further away from the theoretical predictions (Figure 2c; posterior mean ± st. dev. of Spearman’s rho between accuracy and |β−1|, where β is the sensitivity: dimensionality, −0.69±0.05; boundary, −0.21±0.11; volume, −0.10±0.10; robustness, −0.54±0.10). The sub-optimal sensitivities exhibited by many participants did not appear to result simply from a lack of task engagement, because FIA sensitivity did not correlate with errors on easy trials (posterior mean ± st. dev. of Spearman’s rho between lapse rate, estimated with an extended regression model detailed in Methods section M.6.1, and the absolute difference from optimal sensitivity for: dimensionality, 0.08±0.12; boundary, 0.15±0.12; volume, −0.04±0.13; robustness, 0.15±0.14; see Supplementary Information section S.5). Likewise, sub-optimal FIA sensitivity did not correlate with weaker likelihood sensitivity for the boundary (rho=−0.13±0.11) and volume (−0.06±0.11) terms, although stronger, negative relationships with the dimensionality (−0.35±0.07) and robustness terms (−0.56±0.10) suggest that the more extreme and variable simplicity preferences under those conditions (and lower performance, on average; see Figure 2c) reflected a more general difficulty in performing those versions of the task.
To better understand the optimality, variability, and generality of the simplicity preferences exhibited by our human participants, we compared their performance to that of artificial neural networks (ANNs) trained to optimize performance on this task. We used a novel ANN architecture that we designed to perform statistical model selection, in a form applicable to the task described above (Figure 4a,b). On each trial, the network took as input two images representing the models to be compared, and a set of coordinates representing the observations on that trial. The output of the network was a decision between the two models, encoded as a softmax vector. We analyzed 50 instances of the ANN that differed only in the random initialization of their weights and in the examples seen during training, using the same logistic-regression approach we used for the human participants.
The ANN was designed as follows (see Methods for more details). The input stage consisted of two pretrained VGG16 convolutional neural networks (CNNs), each of which took in a pictorial representation of one of the two models under consideration. VGG16 was chosen because it is a popular architecture that is often taken as a benchmark for comparisons with the human visual system34,35. Each CNN was composed of a stack of convolutional layers whose weights were kept frozen at their pretrained values, followed by three fully-connected layers whose weights were allowed to change during training. The output of each CNN was fed into a multilayer perceptron (MLP) consisting of linear, rectified-linear (ReLU), and batch-normalization layers. The MLP outputs were then concatenated and fed into an equivariant MLP, which enforces equivariance of the network output under position swap of the two models through a custom parameter-sharing scheme36. The network also contained two conditional variational autoencoder (C-VAE) structures, which sought to replicate the data-generation process conditioned on each model and therefore encouraged the upstream fully-connected layers to learn model representations that captured task-relevant features.
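A condensed sketch of this architecture is shown below (PyTorch). Layer widths, depths, and the arrangement of the intermediate MLPs are illustrative placeholders, pretrained weights are not downloaded here, the swap-equivariant layer shows only one simple instance of the parameter-sharing construction of ref. 36, and the auxiliary C-VAE branches are omitted; see Methods and the released code for the actual implementation:

```python
import torch
import torch.nn as nn
from torchvision.models import vgg16

class SwapEquivariantLinear(nn.Module):
    """y_i = W x_i + V x_j: shared weights make the outputs equivariant
    under swapping the two model representations."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.same = nn.Linear(d_in, d_out)
        self.other = nn.Linear(d_in, d_out, bias=False)

    def forward(self, x1, x2):
        return self.same(x1) + self.other(x2), self.same(x2) + self.other(x1)

class ModelSelectionNet(nn.Module):
    """Two VGG16 towers (frozen convolutions, trainable FC layers) embed the
    model images, an MLP embeds the observed coordinates, and a
    swap-equivariant head outputs a decision between the two models."""
    def __init__(self, n_obs=10, d_embed=128):
        super().__init__()
        self.towers = nn.ModuleList()
        for _ in range(2):
            tower = vgg16()                        # pretrained weights assumed in the paper
            for p in tower.features.parameters():  # freeze the convolutional stack
                p.requires_grad = False
            tower.classifier[-1] = nn.Linear(4096, d_embed)  # trainable FC head
            self.towers.append(tower)
        self.obs_mlp = nn.Sequential(
            nn.Linear(2 * n_obs, 256), nn.ReLU(), nn.BatchNorm1d(256),
            nn.Linear(256, d_embed), nn.ReLU(), nn.BatchNorm1d(d_embed))
        self.head = SwapEquivariantLinear(2 * d_embed, 64)
        self.readout = nn.Linear(64, 1)            # one score per model -> softmax

    def forward(self, img1, img2, obs):
        o = self.obs_mlp(obs.flatten(1))
        z1 = torch.cat([self.towers[0](img1), o], dim=1)
        z2 = torch.cat([self.towers[1](img2), o], dim=1)
        h1, h2 = self.head(z1, z2)
        scores = torch.cat([self.readout(h1), self.readout(h2)], dim=1)
        return torch.softmax(scores, dim=1)        # P(model 1), P(model 2)

net = ModelSelectionNet()
probs = net(torch.rand(4, 3, 224, 224), torch.rand(4, 3, 224, 224), torch.rand(4, 10, 2))
```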
After training, the ANNs performed the task substantially better than the human participants, with higher overall accuracies that included higher likelihood sensitivities (Supplementary Information section S.3) and simplicity preferences that more closely matched the theoretically optimal values (Figure 4c,d). These simplicity preferences were closer to those expected from simulated observers that use the exact Bayesian model posterior rather than the FIA-approximated one, consistent with the fact that the FIA provides an imperfect approximation to the exact Bayesian posterior. The simplicity preferences varied slightly in magnitude across the different networks but, unlike for the human participants, this variability was relatively small (compare the ranges of values in Figures 2c and 3d, plotted on different x-axis scales), and it was not an indication of suboptimal network behavior because it was not related systematically to any differences in the generally high accuracy rates for each condition (Figure 4d; posterior mean ± st. dev. of Spearman’s rho between accuracy and |β−1|, where β is the sensitivity: dimensionality, −0.14±0.10; boundary, 0.08±0.11; volume, −0.12±0.11; robustness, −0.08±0.11). These results imply that the stochastic nature of the task gives rise to some variability in simplicity biases even after extensive training to optimize performance accuracy, but this source of variability cannot by itself account for the range of sensitivities (and suboptimalities) exhibited by the human participants.
Humans, unlike ANNs, maintain suboptimal simplicity preferences
Our results, combined with the fact that we did not provide trial-by-trial feedback to the participants while they performed the task, suggest that the human simplicity preferences we measured were not simply learned optimizations for these particular task conditions but rather are a more inherent (and variable) part of how we make decisions under uncertainty. However, because we provided each participant with instructions that echoed Bayesian-like reasoning (see Methods) and a brief training set with feedback before their testing session, we cannot rule out from the data presented in Figure 3 alone that at least some aspects of the simplicity preferences we measured from the human participants depended on those specific instructions and training conditions. We therefore ran a second experiment to rule out this possibility. For this experiment, we used the same task variants as above but a different set of instructions and training, designed to encourage participants to pick the model with the maximum likelihood (i.e., not integrate over latent causes but instead just consider the single cause that best matches the observed data), thus disregarding model complexity. Specifically, the visual cues were the same as in the original experiment, but the participants were asked to report which of the two shapes on the screen was closest to the center-of-mass of the dot cloud. We ensured that the participants recruited for this “maximum-likelihood” task had not participated in the original, “generative” task. We also trained and tested ANNs on this version of the task, using the maximum-likelihood solution as the correct answer.
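For reference, the training/scoring label for this version of the task reduces to a nearest-shape rule applied to the data centroid. A minimal sketch, reusing the discretized-shape convention from the earlier snippet (the equivalence with maximum likelihood holds for isotropic Gaussian noise):

```python
import numpy as np

def maxlik_label(obs, shape_a, shape_b):
    """Label the trial by whichever shape lies closest to the centroid of the
    observations; for isotropic Gaussian noise this is the maximum-likelihood
    choice, and it carries no complexity penalty."""
    centroid = obs.mean(axis=0)
    dist_a = np.min(np.linalg.norm(shape_a - centroid, axis=1))
    dist_b = np.min(np.linalg.norm(shape_b - centroid, axis=1))
    return "A" if dist_a < dist_b else "B"
```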
Despite this major difference in instructions and training, the human participants exhibited similar simplicity preferences on the generative and maximum-likelihood tasks, suggesting that humans have a general predilection for simplicity even without relevant instructions or incentives (Figure 4, left). Specifically, despite some quantitative differences, the distributions of relative sensitivities showed the same basic patterns for both tasks, with a general increase of relative sensitivity from volume (0.19±0.08 for the maximum-likelihood task; compare to values above), to boundary (0.89±0.10), to robustness (2.27±0.15), to dimensionality (2.29±0.41). In stark contrast to the human data and to ANNs trained on the true generative task, ANN sensitivity to model complexity on the maximum-likelihood task was close to zero for all four terms (Figure 4, right).
To summarize the similarities and differences between how humans and ANNs used simplicity biases to guide their decision-making behaviors for these tasks, and their implications for performance, Figure 5 shows overall accuracy for each set of conditions we tested. Specifically, for each participant or ANN, task configuration, and instruction set, we computed the percentage of correct responses with respect to both the generative task (i.e., for which theoretically optimal performance depends on simplicity biases) and the maximum-likelihood task (i.e., for which theoretically optimal performance does not depend on simplicity biases). Because the maximum-likelihood solutions are deterministic (they depend only on which model the data centroid is closest to, and thus there exists an optimal, sharp decision boundary that leads to perfect performance) and the generative solutions are not (they depend probabilistically on the likelihood and bias terms, so it is generally impossible to achieve perfect performance), performance on the former is expected to be higher than on the latter. Accordingly, both ANNs and (to a lesser extent) humans tended to perform better when assessed relative to maximum-likelihood solutions. Moreover, the ANNs tended to exhibit behavior that was consistent with optimization to the given task conditions: networks trained to find maximum-likelihood solutions did better than networks trained to find generative solutions for the maximum-likelihood task, and networks trained to find generative solutions did better than networks trained to find maximum-likelihood solutions for the generative task. In contrast, humans tended to adopt similar strategies regardless of the task conditions, in all cases using Bayesian-like simplicity biases.
Put briefly, ANNs exhibited simplicity preferences only when trained to do so, whereas human participants exhibited them regardless of their instructions and training.
Discussion
Simplicity has long been regarded as a key element of effective reasoning and rational decision-making. It has been proposed as a foundational principle in philosophy2, psychology3,7, statistical inference1,13,17,19,22,24,37,38, and more recently machine learning39–42. Accordingly, various forms of simplicity preferences have been identified in human cognition8,10,11, such as a tendency to prefer smoother (simpler) curves as the inferred, latent source of noisy observed data9,12, and in visual perception related to grouping, contour detection, and shape identification (Feldman & Singh 2006; Wilder, Feldman & Singh 2015; Froyen, Feldman & Singh 2015). However, despite the solid theoretical grounding of these works, none of them attempted to define a quantitative notion of simplicity bias that could be measured (as opposed to simply detected) in human perception and behavior. In this work, we showed that simplicity preferences are closely related to a specific mathematical formulation of Occam’s razor, situated at the convergence of Bayesian model selection and information theory1. This formulation enabled us to go beyond the mere detection of a preference for simple explanations for data and to measure precisely the strength of this preference in artificial and human participants under a variety of theoretically motivated conditions.
Our study makes several novel contributions. The first is theoretical: we derived a new term of the Fisher Information Approximation (FIA) in Bayesian model selection that accounts for the possibility that the best model lies on the boundary of the model family (see details in Methods section M.2). This boundary term is important because, owing to noise in the data, the best-fitting value of one parameter (or of a combination of parameters) can take on an extreme value at the edge of its allowed range. This condition is related to the phenomenon of “parameter evaporation” that is common in real-world models for data43. Moreover, boundaries for parameters are particularly important for studies of perceptual decision-making, in which sensory stimuli are limited by the physical constraints of the experimental setup and thus reasoning about unbounded parameters would be problematic for observers. For example, imagine designing an experiment that requires participants to report the location of a visual stimulus. In this case, an unbounded set of possible locations (e.g., along a line that stretches infinitely far in the distance to the left and to the right) is clearly untenable. Our “boundary” term formalizes the impact of treating the set of possibilities as bounded, which tends to increase local complexity because it reduces the number of local hypotheses close to the data (see Figure 1b).
The second contribution of this work relates to Artificial Neural Networks: we showed that these networks can learn to use or ignore simplicity preferences in an optimal way (i.e., according to the magnitudes prescribed by theory), depending on how they are trained. These results are different from, and complementary to, recent work that has focused on the idea that implementation of simple functions could be key to generalization in deep neural networks39–42. Here we have shown that effective learning can take into account the complexity of the hypothesis space, rather than that of the decision function, in producing normative simplicity preferences. On the one hand, these results do not seem surprising, because ANNs, and deep networks in particular, are powerful function approximators that perform well in practice on a vast range of inference tasks44. Accordingly, our ANNs trained with respect to the true generative solutions were able to make effective decisions, including simplicity preferences, about the generative source of a given set of observations. Likewise, our ANNs trained with respect to maximum-likelihood solutions were able to make effective decisions, without simplicity preferences, about the maximum-likelihood match for a given set of observations. On the other hand, these results also provide new insights into how ANNs might be analyzed to better understand the kinds of solutions they produce for particular problems. In particular, assessing the presence or absence of the kinds of simplicity preferences that we observed might help identify if and/or how well an ANN is likely to avoid overfitting to training data and provide more generalizable solutions.
The third, and most important, contribution of this work relates to human behavior: people tend to use simplicity preferences when making decisions, and unlike ANNs these preferences do not seem to be simply consequences of learning specific tasks but rather an inherent part of how we interpret uncertain information. This tendency has important implications for the kinds of computations our brains must use to solve these kinds of tasks and how those computations appear to differ from those implemented by the ANNs we used. From a theoretical perspective, the difference between a Bayesian solution (i.e., one that includes the simplicity preferences) and a maximum-likelihood solution (i.e., one that does not include the simplicity preferences) to these tasks is that the latter considers only the single best-fitting model from each family, whereas the former integrates over all possible models in each family. This integration process is what gives rise to the simplicity bias, which, crucially, cannot emerge from simpler mechanisms such as sampling over different possible causes of the stimulus due to an unreliable sensory representation or a stochastic choice process (see Figure 2). Our finding that ANNs can converge on either strategy when trained appropriately indicates that both are, in principle, learnable. However, our finding that people tend to use the Bayesian solution even when instructed to use the maximum-likelihood solution suggests that we naturally do not make decisions based on the single best or archetypical instance within a family of possibilities but rather integrate across that family. Put more concretely in terms of our task, when told to identify the shape closest to the data points, participants were likely uncertain about which exact location on each shape was closest. They therefore naturally integrated over the possibilities, which induces the simplicity preferences as prescribed by the Bayesian solution. These findings will help motivate and inform future studies to identify where and how the brain implements and stores these integrated solutions to relevant decision problems.
Another key feature of our findings that merits further study is the magnitude and variability of preferences exhibited by the human participants. On average, human sensitivity to each geometrical model feature was: 1) larger than zero, 2) at least slightly different from the optimal value (e.g., larger for dimensionality and robustness, smaller for volume), 3) different for distinct features and different participants, and 4) relatively insensitive to instructions and training. What is the source of this diversity? One hypothesis is that people may weigh more heavily the model features that are easier or cheaper to compute. In our experiments, the most heavily weighted feature was model dimensionality. In our mathematical framework, this feature corresponds to the number of degrees of freedom of a possible explanation for the observed data and thus can be relatively easy to assess. By contrast, the least heavily weighted feature was model volume. This feature involves integrating over the whole model family (to count how many distinct states of the world can be explained by a certain hypothesis, one needs to enumerate them) and thus can be very difficult to compute. The other two terms, boundary and robustness, are intermediate in terms of human weighting and computational difficulty: they are harder to compute than dimensionality, because they depend on the data and on the properties of the model at the maximum likelihood location, but are also simpler than the volume term, because they are local quantities that do not require integration over the whole model manifold. This intuition leads to new questions about the relationship between the complexity of explanations being compared and the complexity of the decision-making process itself, calling into question notions of bounded rationality and diminishing returns in optimal inference45,46. Answering such questions is beyond the scope of the present work but merits further study.
A different, intriguing future direction is a comparison with other formal approaches to the emergence of simplicity that can lead to different predictions. Recent studies have argued that the Jeffreys prior (upon which our geometric approach is based) could give an incomplete picture of the complexity of a class of models that occur commonly in the natural sciences, which contain many combinations of parameters that do not affect model behavior, and proposed instead the use of data-dependent priors47,48. The two methods lead to different results, especially in the data-limited regime49. It would be useful to understand the relevance of these differences to human and machine decision-making. Finally, our task design and analyses were constrained to conditions in which the FIA for the models involved could be computed analytically. Generalizing our approach to other conditions is another important future direction.
In summary, our work demonstrates the direct, quantitative relevance of formal notions of model complexity to human behavior. By relying on a combination of theoretical advances, computational modeling, and behavioral experiments, we have established a novel set of normative reference points for decision-making under uncertainty. These findings open up a new arena in which human cognition could be measured against optimal inferential processes, potentially leading to new insights into the constraints affecting information processing in the brain.
Supplementary Material
Acknowledgements
We thank Kamesh Krishnamurthy for discussions and acknowledge the financial support of R01 NS113241 (EP), R01 EB026945 (JIG and VB), as well as a hardware grant from the NVIDIA Corporation (EP). The HPC Collaboration Agreement between SISSA and CINECA granted access to the Marconi100 and Leonardo clusters. VB was supported in part by the Eastman Professorship at Balliol College, University of Oxford.
Footnotes
Code availability
All data and code needed to reproduce the experiments (including running the online psychophysics tasks and training and testing the neural networks), and to analyze the data and produce all figures, are available at doi:10.17605/OSF.IO/R6D8N.
Ethics
Human participant protocols were approved and determined to be Exempt by the University of Pennsylvania Internal Review Board (IRB protocol 844474). Participants provided consent on-line before they began the task.
Data availability
All experimental data collected in this work are available at doi:10.17605/OSF.IO/R6D8N.
Bibliography
- 1. Balasubramanian V. Statistical Inference, Occam’s Razor, and Statistical Mechanics on the Space of Probability Distributions. Neural Comput. 9, 349–368 (1997).
- 2. Baker A. Simplicity. in The Stanford Encyclopedia of Philosophy (ed. Zalta E. N.) (Metaphysics Research Lab, Stanford University, 2022).
- 3. Feldman J. The simplicity principle in perception and cognition. Wiley Interdiscip. Rev. Cogn. Sci. 7, 330–340 (2016).
- 4. Koffka K. Principles of Gestalt Psychology. (Mimesis International, Milano, 2014).
- 5. Wagemans J. et al. A century of Gestalt psychology in visual perception: I. Perceptual grouping and figure–ground organization. Psychol. Bull. 138, 1172–1217 (2012).
- 6. Hatfield G. The status of the minimum principle in the theoretical analysis of visual perception. Psychol. Bull. 97, 155 (1985).
- 7. Chater N. & Vitányi P. Simplicity: a unifying principle in cognitive science? Trends Cogn. Sci. 7, 19–22 (2003).
- 8. Pothos E. M. & Chater N. A simplicity principle in unsupervised human categorization. Cogn. Sci. 26, 303–343 (2002).
- 9. Genewein T. & Braun D. A. Occam’s Razor in sensorimotor learning. Proc. R. Soc. B Biol. Sci. 281, 20132952 (2014).
- 10. Gershman S. & Niv Y. Perceptual estimation obeys Occam’s razor. Front. Psychol. 4 (2013).
- 11. Little D. R. B. & Shiffrin R. Simplicity Bias in the Estimation of Causal Functions. Proc. Annu. Meet. Cogn. Sci. Soc. 31 (2009).
- 12. Johnson S., Jin A. & Keil F. Simplicity and Goodness-of-Fit in Explanation: The Case of Intuitive Curve-Fitting. in Proceedings of the Annual Meeting of the Cognitive Science Society 36 (2014).
- 13. Jeffreys H. Theory of Probability. (Clarendon Press, 1939).
- 14. Good I. J. Corroboration, Explanation, Evolving Probability, Simplicity and a Sharpened Razor. Br. J. Philos. Sci. 19, 123–143 (1968).
- 15. Jaynes E. T. Probability Theory: The Logic of Science. (Cambridge University Press, 2003).
- 16. Smith A. F. M. & Spiegelhalter D. J. Bayes Factors and Choice Criteria for Linear Models. J. R. Stat. Soc. Ser. B Methodol. 42, 213–220 (1980).
- 17. Gull S. F. Bayesian Inductive Inference and Maximum Entropy. in Maximum-Entropy and Bayesian Methods in Science and Engineering: Foundations (eds. Erickson G. J. & Smith C. R.) 53–74 (Springer Netherlands, Dordrecht, 1988). doi:10.1007/978-94-009-3049-0_4.
- 18. Jefferys W. & Berger J. Sharpening Ockham’s Razor on a Bayesian Strop. (1991).
- 19. MacKay D. J. C. Bayesian Interpolation. Neural Comput. 4, 415–447 (1992).
- 20. Myung I. J., Balasubramanian V. & Pitt M. A. Counting probability distributions: Differential geometry and model selection. Proc. Natl. Acad. Sci. 97, 11170–11175 (2000).
- 21. Rissanen J. J. Fisher information and stochastic complexity. IEEE Trans. Inf. Theory 42, 40–47 (1996).
- 22. Grünwald P. D. The Minimum Description Length Principle. (MIT Press, 2007).
- 23. Lanterman A. D. Schwarz, Wallace, and Rissanen: Intertwining Themes in Theories of Model Selection. Int. Stat. Rev. 69, 185–212 (2001).
- 24. Wallace C. S. Statistical and Inductive Inference by Minimum Message Length. (Springer, New York, 2005).
- 25. Bialek W., Nemenman I. & Tishby N. Predictability, Complexity and Learning. Neural Comput. 13, 2409–2463 (2001). doi:10.1162/089976601753195969.
- 26. Schwarz G. Estimating the Dimension of a Model. Ann. Stat. 6, 461–464 (1978).
- 27. Neath A. A. & Cavanaugh J. E. The Bayesian information criterion: background, derivation, and applications. WIREs Comput. Stat. 4, 199–203 (2012).
- 28. Piasini E., Balasubramanian V. & Gold J. I. Preregistration document. (2020). doi:10.17605/OSF.IO/2X9H6.
- 29. Piasini E., Balasubramanian V. & Gold J. I. Preregistration document addendum. (2021). doi:10.17605/OSF.IO/5HDQZ.
- 30. Piasini E., Liu S., Balasubramanian V. & Gold J. I. Preregistration document addendum. (2022). doi:10.17605/OSF.IO/826JV.
- 31. McElreath R. Statistical Rethinking. (CRC Press, 2016).
- 32. Kruschke J. K. Doing Bayesian Data Analysis. (Academic Press, 2015).
- 33. Gelman A. et al. Bayesian Data Analysis. (CRC Press, 2014).
- 34. Schrimpf M. et al. Brain-Score: Which Artificial Neural Network for Object Recognition is most Brain-Like? Preprint at bioRxiv (2020). doi:10.1101/407007.
- 35. Muratore P., Tafazoli S., Piasini E., Laio A. & Zoccolan D. Prune and distill: similar reformatting of image information along rat visual cortex and deep neural networks. Adv. Neural Inf. Process. Syst. 35, 30206–30218 (2022).
- 36. Ravanbakhsh S., Schneider J. & Póczos B. Equivariance Through Parameter-Sharing. in Proceedings of the 34th International Conference on Machine Learning 2892–2901 (PMLR, 2017).
- 37. de Mulatier C., Mazza P. P. & Marsili M. Statistical Inference of Minimally Complex Models. (2021). doi:10.48550/arXiv.2008.00520.
- 38. Xie R. & Marsili M. Occam learning. (2022). doi:10.48550/arXiv.2210.13179.
- 39. De Palma G., Kiani B. & Lloyd S. Random deep neural networks are biased towards simple functions. in Advances in Neural Information Processing Systems (eds. Wallach H. et al.) vol. 32 (Curran Associates, Inc., 2019).
- 40. Valle-Perez G., Camargo C. Q. & Louis A. A. Deep learning generalizes because the parameter-function map is biased towards simple functions. in International Conference on Learning Representations (2019).
- 41. Chaudhari P. et al. Entropy-SGD: biasing gradient descent into wide valleys. J. Stat. Mech. Theory Exp. 2019, 124018 (2019).
- 42. Yang R., Mao J. & Chaudhari P. Does the Data Induce Capacity Control in Deep Learning? in Proceedings of the 39th International Conference on Machine Learning 25166–25197 (PMLR, 2022).
- 43. Transtrum M. K. et al. Perspective: Sloppiness and emergent theories in physics, biology, and beyond. J. Chem. Phys. 143, 010901 (2015).
- 44. Bengio Y., Lecun Y. & Hinton G. Deep learning for AI. Commun. ACM 64, 58–65 (2021).
- 45. Tavoni G., Balasubramanian V. & Gold J. I. What is optimal in optimal inference? Curr. Opin. Behav. Sci. 29, 117–126 (2019).
- 46. Tavoni G., Doi T., Pizzica C., Balasubramanian V. & Gold J. I. Human inference reflects a normative balance of complexity and accuracy. Nat. Hum. Behav. 6, 1153–1168 (2022).
- 47. Mattingly H. H., Transtrum M. K., Abbott M. C. & Machta B. B. Maximizing the information learned from finite data selects a simple model. Proc. Natl. Acad. Sci. U. S. A. 115, 1760–1765 (2018).
- 48. Quinn K. N., Abbott M. C., Transtrum M. K., Machta B. B. & Sethna J. P. Information geometry for multiparameter models: new perspectives on the origin of simplicity. Rep. Prog. Phys. 86, 035901 (2022).
- 49. Abbott M. C. & Machta B. B. Far from Asymptopia: Unbiased High-Dimensional Inference Cannot Assume Unlimited Data. Entropy 25, 434 (2023).