Author manuscript; available in PMC 2021 Jan 7. Published in final edited form as: Neuroimage. 2020 Aug 13;222:117254. doi: 10.1016/j.neuroimage.2020.117254

Keep it real: rethinking the primacy of experimental control in cognitive neuroscience

Samuel A Nastase a, Ariel Goldstein a, Uri Hasson a,b
PMCID: PMC7789034  NIHMSID: NIHMS1658218  PMID: 32800992

Abstract

Naturalistic experimental paradigms in neuroimaging arose from a pressure to test the validity of models we derive from highly-controlled experiments in real-world contexts. In many cases, however, such efforts led to the realization that models developed under particular experimental manipulations failed to capture much variance outside the context of that manipulation. The critique of non-naturalistic experiments is not a recent development; it echoes a persistent and subversive thread in the history of modern psychology. The brain has evolved to guide behavior in a multidimensional world with many interacting variables. The assumption that artificially decoupling and manipulating these variables will lead to a satisfactory understanding of the brain may be untenable. We develop an argument for the primacy of naturalistic paradigms, and point to recent developments in machine learning as an example of the transformative power of relinquishing control. Naturalistic paradigms should not be deployed as an afterthought if we hope to build models of brain and behavior that extend beyond the laboratory into the real world.

Keywords: Ecological psychology, Ecological validity, Experimental design, Generalizability, Naturalistic stimuli, Representative design


Cognitive neuroscientists employ clever experimental manipulations in hopes of discovering interpretable relationships between brain, behavior, and the environment. There is a commitment—often implicit—in both our scientific thinking and writing that the models we derive from tightly-controlled experimental manipulations will provide some traction in real-world contexts. This commitment relies on the assumption that the human brain implements a set of nomothetic principles or rules that capture how the world works. We assume that these rules, like in classical physics, are relatively simple and interpretable, and, once discovered, will extrapolate to the richness of human behavior (Jolly and Chang, 2019). We proceed by filtering out as many seemingly irrelevant variables (considered “confounds” or “noise”) as possible in hopes of isolating the handful of latent variables (considered “signal”) dictating brain–behavior relationships. To what extent do our models actually generalize outside the laboratory? What proportion of neural or behavioral variability do our models predict in real-life contexts? These kinds of questions have prompted the neuroimaging community, and neuroscience more broadly, to begin adopting more naturalistic experimental paradigms (Hasson and Honey, 2012; Maguire, 2012; Hamilton and Huth, 2018; Matusz et al., 2019; Sonkusare et al., 2019).

Naturalistic paradigms have generally been considered a testbed for models developed under highly-controlled experimental paradigms. In neuroimaging, naturalistic stimuli were introduced optimistically in hopes of validating existing models (Bartels and Zeki, 2004; Hasson et al., 2004). This optimism has declined over the intervening years. In the following, we provide a historical context for naturalistic neuroimaging and appeal to representative design as a principled basis for ecological generalizability (Brunswik, 1947). We assume that no cognitive neuroscientist would be satisfied with a science strictly confined to peculiar experimental manipulations with little relevance outside the laboratory. However, the world outside the laboratory is not amenable to many of the assumptions of classical experimental design; real-world ecological variables are often multidimensional, sometimes nonlinear, and interact in unexpected ways. To make matters worse, evolution has built a brain that capitalizes on these interactions to guide adaptive behavior.

To be clear, we are not arguing indiscriminately against controlled experiments. Experimental manipulations provide a powerful and necessary tool for testing hypotheses and models. Our argument pertains to the source and character of these hypotheses. As experimentalists, we take complex phenomena and try to deconstruct them into manageable components that we can more easily manipulate in our experiments. We often bootstrap hypotheses from preexisting experimental manipulations, thus superimposing the assumptions of experimental design on the process of hypothesis formation and data generation. When data from the experimental manipulation adjudicate in favor of the hypothesis, we generally assume that we have discovered something meaningful about brain and behavior. However, when stringent design considerations constrict both hypotheses and data, we risk maneuvering ourselves into theoretical corners that are difficult to reconcile with ecological brain function. We argue that this necessitates a shift toward the primacy of naturalistic paradigms in developing and evaluating models of brain and behavior.

1. What problems does the brain confront outside the laboratory?

Evolution has shaped our brains to guide behavior in a multidimensional, uncertain world. The importance of this fact has been periodically reasserted in the schools of functional and ecological psychology (e.g., Brunswik, 1943; Gibson, 1979), but the implications remain underappreciated. We contend that many properties of the brain, as an evolutionary solution for guiding adaptive behavior, undermine many of the theoretical assumptions of cognitive neuroscience outlined above (see Hasson et al., 2020, for an extended discussion). Evolution does not have the privilege of operating under controlled laboratory conditions, does not necessarily produce intuitively “optimal” solutions (cf. Attneave, 1954; Barlow, 1961; Olshausen and Field, 1996; Lewicki, 2002), and does not appeal to human-interpretable design principles (Dennett, 1995; Cisek, 2019). In the case of the brain, evolution has converged on a high-dimensional modeling/control organ for estimating whatever structure in the world is relevant for guiding context-specific adaptive behavior. In this respect, the brain does not operate like a scientist, as the kind of estimation needed to guide behavior does not necessitate the kind of understanding scientists seek. In other words, the brain is not necessarily designed to rely on simple, human-interpretable variables; it does not always cleanly segregate variables into signal and noise; and it does not necessarily respect the theoretical boundaries imposed by our experimental designs.

Ecological variables in the environment are poorly understood. Any ecologically relevant “signal” in the environment is multidimensional and there are nonlinearities and interactions among dimensions (Campbell, 1973; Cronbach, 1975; Gibson, 1979). Furthermore, ecologically relevant dimensions of the environment are always mixed with non-relevant dimensions. The brain cannot simply ignore non-relevant dimensions; it must learn to actively adjust particular dimensions in order to guide behavior. In most ecological situations, the relevant dimensions for a particular action (e.g., recognizing a face, or interpreting the meaning of words in a particular context) are always mixed with non-relevant dimensions (e.g., luminance, motion, or occlusion of the face; the sentence structure used or the accent of a speaker). To perform these tasks, the brain must dynamically weight and re-weight all the incoming dimensions as a function of task and context. In other words, there are no two systems such that one processes the “signal” and the other processes “confounds” or “noise.” Classical controlled experiments, where the vast majority of these variables are artificially clamped or factored out, ignore one of the central problems the brain must face, and may hinder our understanding of the solutions the brain has found to overcome it. It is surprisingly difficult to generalize from a contrived experiment artificially isolating a handful of experimental variables to other contexts with five, ten, or perhaps hundreds of dimensions; however, this does not discourage us from interpreting experimental results more generally (Cronbach et al., 1963; Yarkoni, 2019).

Take for example the seminal findings of Hubel and Wiesel (1962): probing the visual system of the anaesthetized cat with differently oriented edges reveals an orderly model of orientation tuning in primary visual cortex. It was thought that extending this systematic program to other stimulus features would eventually allow us to piece together a complete model of early visual function. However, despite revealing some important insights, the limits of this program have become increasingly evident. For example, work by David et al. (2004) has demonstrated that the spatiotemporal tuning of neurons in primary visual cortex (V1) differs substantially between naturalistic and non-naturalistic contexts, likely due to nonlinear relationships between neural and environmental variables. Models of neural tuning derived from synthetic stimuli in the vein of Hubel and Wiesel may not generalize well to the real-world conditions in which our brains evolved (Simoncelli and Olshausen, 2001; Kayser et al., 2004; Felsen and Dan, 2005; McMahon et al., 2015; Park et al., 2017; Leopold and Park, 2020). Olshausen and Field (2005) famously cautioned that “we can rightfully claim to understand only 10% to 20% of how V1 actually operates under normal conditions,” attributing this in part to biased stimulus sampling and a tendency toward easily-interpretable models.
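
To make the stimulus-statistics point concrete, the toy simulation below (our own illustration, not a reanalysis of David et al., 2004) fits a linear receptive field to a simulated model neuron. Under white-noise stimulation, the raw spike-triggered average recovers the neuron's true filter; under spatially correlated, naturalistic-like stimulation, the same estimator is distorted unless the stimulus covariance is explicitly taken into account. All variable names and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_stim, n_pix = 20000, 16

# Hypothetical "true" receptive field: a localized oscillatory filter
x = np.arange(n_pix)
rf = np.exp(-0.5 * ((x - 8) / 2.0) ** 2) * np.cos(2 * np.pi * (x - 8) / 4.0)
rf /= np.linalg.norm(rf)

def simulate_responses(stimuli):
    """Linear model neuron: response = stimulus . rf + Gaussian noise."""
    return stimuli @ rf + rng.normal(scale=0.5, size=len(stimuli))

# White-noise stimuli (uncorrelated pixels), as in a classical experiment
white = rng.normal(size=(n_stim, n_pix))

# "Naturalistic" stimuli: spatially correlated pixels
cov = np.array([[0.9 ** abs(i - j) for j in range(n_pix)] for i in range(n_pix)])
natural = rng.multivariate_normal(np.zeros(n_pix), cov, size=n_stim)

for name, stim in [("white noise", white), ("correlated ", natural)]:
    resp = simulate_responses(stim)
    sta = stim.T @ resp / len(resp)                        # raw spike-triggered average
    sta_corrected = np.linalg.solve(np.cov(stim.T), sta)   # covariance-corrected estimate
    for label, est in [("raw STA", sta), ("corrected STA", sta_corrected)]:
        r = np.corrcoef(est, rf)[0, 1]
        print(f"{name} | {label:13s} | correlation with true filter: {r:.3f}")
```

Even in this linear toy case, the estimate inherits the statistics of the stimuli used to probe the system; with the nonlinearities present in real neurons, the discrepancies reported by David et al. (2004) can be considerably larger.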

2. Systematic and representative design

Advocates for naturalistic paradigms often appeal to their “ecological validity,” a term that originated with Egon Brunswik (Brunswik, 1947, 1949).1 Brunswik championed a heterodox school of psychological theory summarized as “probabilistic functionalism,” emphasizing the messy, probabilistic nature of organism–environment relations and the importance of Darwin’s notion of adaptive fitness in guiding behavior (Tolman and Brunswik, 1935; Brunswik, 1943). Brunswik (1949) contended that psychology maintains a “double standard” in the application of sampling theory (Neyman, 1934; Kruskal and Mosteller, 1980) to subjects and stimuli: whereas subjects are sampled with the goal of generalizing to the population, stimuli and tasks generally are not.

Brunswik challenged the paradigm of “systematic design”—the practice of artificially reducing the world to a small number of hand-picked variables for experimental manipulation—on grounds that it often fails to actually isolate variables of interest and tends to impose non-naturalistic relationships among variables (Brunswik, 1955). In contrast, Brunswik advocated for “representative design,” arguing that we should sample stimuli or conditions in a way that respects the distribution and covariance of ecological variables if we hope to achieve generalizability beyond the boundaries of the experimental manipulation. Ecological generalizability demands a “representative sampling of situations” where “situational instances in an ecology are analogous to individuals in a population” (Brunswik, 1955, p. 198). Ecologically relevant configurations of variables carve out a manifold in a multidimensional space of organism–environment relations. Systematic experimental manipulations that clamp or orthogonalize certain variables risk unintentionally relocating an experiment off the manifold into a peculiar region of this space, thus forfeiting ecological generalizability.
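
A minimal sketch of this “off-manifold” concern, using hypothetical variables of our own construction: a simulated response depends on two ecological variables that naturally covary, an experimenter manipulates one while clamping the other, and the resulting model predicts new ecological situations worse than a model fit to representatively sampled situations.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

def ecological_sample(n, rho=0.8):
    """Two ecological variables that naturally covary in the environment."""
    cov = [[1.0, rho], [rho, 1.0]]
    return rng.multivariate_normal([0.0, 0.0], cov, size=n)

def response(x1, x2):
    """Hypothetical response that depends on both variables."""
    return 1.0 * x1 + 2.0 * x2 + rng.normal(scale=0.5, size=len(x1))

# Systematic design: manipulate x1 while clamping x2 at zero ("controlling" it)
x1_lab = np.linspace(-2, 2, 200)
y_lab = response(x1_lab, np.zeros_like(x1_lab))
lab_model = LinearRegression().fit(x1_lab[:, None], y_lab)

# Representative design: sample situations as they occur in the ecology
eco_train = ecological_sample(200)
y_eco = response(eco_train[:, 0], eco_train[:, 1])
eco_model = LinearRegression().fit(eco_train[:, :1], y_eco)  # still predicts from x1 alone

# Evaluate both models on new, representatively sampled situations
eco_test = ecological_sample(2000)
y_test = response(eco_test[:, 0], eco_test[:, 1])
for name, model in [("clamped design", lab_model), ("representative design", eco_model)]:
    r2 = model.score(eco_test[:, :1], y_test)
    print(f"{name:22s}: fitted slope for x1 = {model.coef_[0]:.2f}, ecological R^2 = {r2:.2f}")
```

The clamped design recovers the isolated effect of the manipulated variable, but that isolated effect is a poor basis for predicting responses in an ecology where the two variables change together.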

Though considered heretical during Brunswik’s lifetime, the critical thrust of his program has nonetheless permeated a variety of fields (Hammond, 1955; Jenkins, 1974; Bronfenbrenner, 1977; Neisser and Hyman, 2000; Fiedler, 2011). For example, Barker’s (1965) “ecological psychology” advocates for the psychologist as a “transducer” of psychological phenomena in situ, rather than the traditional “operator/transducer” who manipulates the environment and organism to “send messages to [them]self.” This critique also resonates with modern statistical debates: for example, the “stimulus-as-fixed-effect” controversy in psycholinguistics (Coleman, 1964; Clark, 1973; Baayen et al., 2008), social psychology (Wells and Windschitl, 1999; Judd et al., 2012), and neuroimaging (Bedny et al., 2007; Westfall et al., 2016); or endogenous selection bias, where a spurious relationship between variables of interest is induced by biased sampling along another collider variable (Elwert and Winship, 2014). One particular zenith along this line of thought was Gibson’s (1979) theory of “direct perception,” which forcefully elevated the environment itself to a principal object of study in psychology, emphasizing in particular the organism- and context-specific elements of the environment that offer opportunities for adaptive behavior (i.e., “affordances”). Despite the artificiality of many laboratory manipulations, an organism cannot be decoupled from the environment in which it evolved (von Uexküll, 1934; Chiel and Beer, 1997; Gomez-Marin and Ghazanfar, 2019).
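
To see why the stimulus-as-fixed-effect issue matters in practice, the simulation below (a toy construction of ours, not a reanalysis of the cited work) generates experiments in which there is no true condition effect but individual stimuli carry idiosyncratic effects shared across subjects; a conventional by-subject analysis that averages over stimuli produces false positives far above the nominal rate, whereas treating the sampled stimuli themselves as the unit of analysis does not. The cited papers address the same problem more completely with crossed mixed-effects models.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_exp, n_subj, n_stim = 2000, 30, 10  # simulated experiments, subjects, stimuli per condition

false_pos_by_subject, false_pos_by_stimulus = 0, 0
for _ in range(n_exp):
    # No true condition effect; each stimulus carries an idiosyncratic effect that is
    # shared across all subjects (e.g., some words are simply easier than others)
    stim_a = rng.normal(scale=1.0, size=n_stim)
    stim_b = rng.normal(scale=1.0, size=n_stim)
    resp_a = stim_a + rng.normal(scale=1.0, size=(n_subj, n_stim))  # subject-by-stimulus noise
    resp_b = stim_b + rng.normal(scale=1.0, size=(n_subj, n_stim))

    # Conventional analysis: average over stimuli, paired t-test across subjects
    p_subj = stats.ttest_rel(resp_a.mean(axis=1), resp_b.mean(axis=1)).pvalue
    false_pos_by_subject += p_subj < 0.05

    # Item analysis: average over subjects, t-test across the sampled stimuli
    p_stim = stats.ttest_ind(resp_a.mean(axis=0), resp_b.mean(axis=0)).pvalue
    false_pos_by_stimulus += p_stim < 0.05

print(f"false-positive rate, stimuli treated as fixed:   {false_pos_by_subject / n_exp:.2f}")
print(f"false-positive rate, stimuli treated as sampled: {false_pos_by_stimulus / n_exp:.2f}")
```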

Much of cognitive neuroscience still operates in a similar regime to Hubel and Wiesel, using contrived, non-naturalistic stimuli and tasks in hopes of revealing fundamental features of functional neuroanatomy. This analytic, reductionist program is endemic to psychology and neuroscience more broadly: complex, real-world phenomena are decomposed into increasingly circumscribed subcomponents that are manifest in highly-constrained experimental manipulations (cf. Braitenberg, 1984; Cisek, 2019).2 We use disjoint tasks to devise complex taxonomies of memory (e.g., Squire, 2004) and attention (e.g., Carrasco, 2011), subdividing the brain into a mosaic of regions reflecting intuitive, hand-picked contrasts (e.g., Kanwisher et al., 1997; Kanwisher, 2010); but rarely do we reassemble these manipulations into functional, ecological behavior. How do these disparate systems conspire to perform complex, real-world behavior (e.g., summarizing a complex idea and verbally conveying it to a colleague, a task many readers perform every day)? The assumption that we can someday cobble together these piecemeal processes and representations into a satisfying model of brain and behavior is tenuous at best (Newell, 1973; Meehl, 1990). Concerns about the utility of traditional laboratory tasks are not specific to neural measurements (e.g., Elliott et al., 2020); in fact there is increasing evidence that many behavioral tasks have little conceptual overlap with self-report measures and fail to capture real-world behavior (e.g., Eisenberg et al., 2019; Dang et al., 2020).

To illustrate this point, consider working memory processes in a daily context, such as reading a story, as opposed to a laboratory context, such as a delayed match-to-sample task. In the delayed match-to-sample task, the process of protecting information in a working memory buffer is isolated from other perceptual, decision-making, and motor-related processes by the structure of the task itself. However, in real-world contexts, each word we accumulate while reading a story interacts with and is synthesized with all previous written or spoken words in an evolving narrative (Willems et al., 2020). The naturalistic reading task reveals that neural systems, across all levels of the processing hierarchy, need to accumulate, maintain, and synthesize information at their preferred processing timescale, making the classical distinction between processing systems and memory systems untenable (see Hasson et al., 2015). Face perception provides another illustrative example. The first step in studying face perception is typically to experimentally strip away cumbersome social content like facial expressions, personal familiarity, and temporal dynamics. Tightly-constrained stimulus parameterization and contrasts with randomized trial order reveal orderly face-selective responses in several cortical areas (e.g., Kanwisher et al., 1997; Tsao et al., 2006). However, the dynamics of face perception circuitry become considerably more nuanced when presented with complex, naturalistic stimuli, particularly in social contexts (McMahon et al., 2015; Russ and Leopold, 2015; Park et al., 2017; Leopold and Park, 2020). Cortical areas with seemingly uniform face-selective responses presumably also encode dynamic features that were simply not present in the decontextualized, static face stimuli. In this sense, naturalistic stimuli—in which faces are persistent, sometimes familiar, and carry dynamic social and semantic content—allow us to better gauge the relative contributions of different variables, and can reveal the importance of previously underappreciated variables for neural representation (Haxby et al., 2020). Beyond naturalistic stimuli, there is also evidence that spontaneous, naturalistic behavior plays an unexpectedly large role in neural activity throughout the brain, including in putative low-level sensory areas (e.g., Musall et al., 2019; Stringer et al., 2019).

3. Lessons from machine learning

Recent advances in artificial neural networks (ANNs) provide an instructive foil for experimental neuroscience. The machine learning community has made tremendous strides in building neurally-inspired models that match or exceed human performance in cognitive tasks spanning visual processing, language processing, and complex gameplay (LeCun et al., 2015). Why have neural network models developed in the machine learning community so dramatically outstripped models developed in psychology and neuroscience laboratories?

One of the key developments was to relinquish some amount of control and embrace the complexity of real life. The machine learning community does not fixate on “experimental design” in the way that neuroscientists do. They do not manufacture a small set of well-behaved inputs in developing their models; instead, they use vast, largely-unconstrained training data sampled from the real world. They do not impose the strong constraint that their models must learn human-interpretable representations or rules. Instead, machine learning has—for pragmatic reasons—prioritized predictive power over easily-interpretable, explanatory models (Breiman, 2001; Yarkoni and Westfall, 2017; Varoquaux and Poldrack, 2019). The implicit goal in most cases is not to model an experimentally-isolated cognitive process, but to build useful models of the phenomenon of interest out in the world. Take for example a deep convolutional neural network for face recognition that matches (and exceeds) human performance in recognizing face identities (Schroff et al., 2015). This model is trained on face images spanning numerous identities sampled “in the wild” to include all manner of naturalistic “confounds”—differences in expression, lighting, head angle, and so on. The same model trained on a tightly-controlled subset of facial images would fail dramatically due to biased, non-representative sampling (O’Toole et al., 2018; Srivastava and Grill-Spector, 2018).
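
The sketch below is a schematic analogue of this point rather than the FaceNet pipeline itself: identities live in a toy two-dimensional embedding, a latent nuisance variable (standing in for lighting) warps that embedding, and a classifier trained only on a tightly controlled slice of the nuisance dimension degrades on “in the wild” test items relative to the same classifier trained on representatively sampled variation. The rotation-by-lighting mechanism and all parameter values are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n_id = 5  # number of hypothetical face identities
id_means = rng.normal(scale=3.0, size=(n_id, 2))  # identity signatures in a toy 2-D embedding

def render(n, lighting_range):
    """Simulate face embeddings whose geometry depends on a latent nuisance variable
    ("lighting"), modeled here, purely for illustration, as a rotation of the embedding."""
    ids = rng.integers(n_id, size=n)
    lighting = rng.uniform(*lighting_range, size=n)   # latent nuisance, never observed
    theta = lighting * np.pi / 3                      # stronger lighting -> larger distortion
    clean = id_means[ids] + rng.normal(scale=0.4, size=(n, 2))
    rot = np.stack([np.cos(theta), -np.sin(theta),
                    np.sin(theta), np.cos(theta)], axis=1).reshape(-1, 2, 2)
    return np.einsum("nij,nj->ni", rot, clean), ids

# "Controlled" training set clamps lighting; "in the wild" sets span the full range
x_lab, y_lab = render(2000, (0.0, 0.05))
x_wild, y_wild = render(2000, (-1.0, 1.0))
x_test, y_test = render(5000, (-1.0, 1.0))

for name, (x_tr, y_tr) in [("controlled training set    ", (x_lab, y_lab)),
                           ("representative training set", (x_wild, y_wild))]:
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(x_tr, y_tr)
    print(f"{name}: accuracy on in-the-wild test faces = {clf.score(x_test, y_test):.2f}")
```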

The way these models learn to map noisy, real-world inputs onto complex task objectives resonates with Gibson’s (1979) ecological approach. Much like the brain, the structure of the fitted model is inseparable from the task(s) the model is trained to perform in the world. We argue that the way both artificial and biological neural networks learn to pursue objective functions cleaves more toward Gibson’s notion of direct perception than, for example, Marr’s (1982) constructivist, representationalist approach (Brooks, 1991; Pezzulo and Cisek, 2016; Hasson et al., 2020). In the same way that evolutionary theory reframed our understanding of biology in terms of a few relatively simple processes and principles, the effectiveness of artificial neural networks in learning cognitive tasks may force us to rethink the neural code (see Richards et al., 2019, and Hasson et al., 2020). The recent success of neural networks in solving many of the tasks we study in cognitive neuroscience serves as a cautionary tale for those probing the brain for easily-interpretable representations.

4. Studying ecological brain function without losing control

Most psychologists and neuroscientists are trained to respect the primacy of experimental control. We celebrate the ingenuity of tasks that manage to isolate a handful of interpretable variables from confounds. When a particular task or manipulation fails to elicit the desired effect, we often adjust the task or fine-tune the manipulation in hopes of homing in on the effect. This research program hinges on the assumption that the brain extrapolates from a number of human-interpretable representations and processes to navigate the world; and that using clever designs to experimentally isolate the neural implementation of these rules will allow us to extrapolate to ecological behavior. With these assumptions in hand, we consider tightly-controlled experimental manipulations as the principal (perhaps only) source of insight into the underlying neural code (Gillis and Schneider, 1966), whereas naturalistic paradigms are treated as a necessary (albeit inconvenient) testbed for validating these theories. But what if these assumptions are unsound? What if nonlinearities and interactions among environmental variables hamstring generalization from contrived experiments? What if biological systems rely more on exhaustive sampling and brute-force interpolation rather than rule-based extrapolation? How is the cognitive neuroscientist to proceed?

Naturalistic paradigms are not a panacea and are not trivial to implement or analyze. We do believe there is value in using controlled experiments to test hypotheses, but contend that these hypotheses should stem from ecological considerations and address head-on the actual problems the brain confronts in the world. Controlled experiments can reveal important boundary conditions of ecological brain function, and no single paradigm can be exhaustively representative or generalizable. However, we believe that non-naturalistic experimental manipulations have occupied an overly privileged position in cognitive neuroscience.

We caution against allowing classical experimental manipulations to play an outsized role in hypothesis formation. For example, if the goal is to differentiate neural systems processing articulatory and semantic features of words, rather than using tightly-controlled lists of words and nonwords, we recommend using natural speech stimuli and comparing models of articulation and semantic content (e.g., de Heer et al., 2017). When designing an experiment, we recommend, whenever possible, using naturalistic tasks and sampling stimuli and conditions (including controls) from ecological contexts; for example, leveraging each subject’s personal social network (Parkinson et al., 2017, 2018; Hyon et al., 2020), probing memory using naturalistic recall (Chen et al., 2017; Zadbood et al., 2017; Heusser et al., 2018), comparing natural language across modalities and contexts (Stephens et al., 2010; Regev et al., 2013; Yeshurun et al., 2017; Deniz et al., 2019), and using data-driven modeling to capture the complexity of naturalistic neural responses (Haxby et al., 2011, 2020; Baldassano et al., 2017; Chang et al., 2018; Nastase et al., 2019). Appealing to representative sampling in experimental design will tend to introduce (ecological) intercorrelations among variables, and may reduce statistical power for low-frequency phenomena (Hamilton and Huth, 2018); in this sense, naturalistic paradigms may resemble observational research, and may benefit from the associated methods (e.g., Rohrer, 2018). Our thesis, however, is that anyone adopting the alternative approach—clamping or artificially orthogonalizing these variables—must contend with the challenge of ecological generalizability.

Building a more ecological research program demands increasingly rich data and quantitative tools for describing brain, behavior, and environment. Publicly shared naturalistic datasets (e.g., Hanke et al., 2014; Nastase et al., 2019) have exceptional re-use value and can serve as benchmarks for model comparison (DuPre et al., 2019). These datasets will eventually become exhausted as competing models improve and reach ceiling performance; data generators will never be out of work and there will always be a market for innovations in data acquisition. Developing technologies, such as continuous intracranial electroencephalography (iEEG; e.g., Wang et al., 2016), functional near-infrared spectroscopy (fNIRS; e.g., Liu et al., 2017), high-density diffuse optical tomography (HD-DOT; e.g., Fishell et al., 2019), and wearable magnetoencephalography (MEG; Boto et al., 2018) promise higher-fidelity and more ergonomic neuroimaging. Even the workhorse fMRI is beginning to see increased adoption of immersive virtual reality paradigms (Mathiak and Weber, 2006; Spiers and Maguire, 2006, 2007; Maguire, 2012). Finally, we are seeing advances in quantifying the richness of natural behavior (Gomez-Marin et al., 2014; Calhoun and Murthy, 2017; Nath et al., 2019; Pereira et al., 2019). We live in an age of ubiquitous real-life behavioral data collection (for better or worse); experience sampling technologies such as mobile sensing (Miller, 2012; Harari et al., 2016) provide new windows into naturalistic behavior, and can be used to procure subject-specific representative stimuli (Nielson et al., 2015; Rissman et al., 2016).

In the context of representative design, Brunswik contends that the “challenge of further [isolating variables] must be met by after-the-fact, mathematical means” (Brunswik, 1955, pp. 202–203). This resonates with the more recent notion of “late commitment” in cognitive neuroscience (Kriegeskorte et al., 2008, p. 19), wherein theoretical assumptions are relaxed at the stage of experimental design and data collection, and later imposed at the analysis stage. Representative design is also conducive to a “system identification” approach for mapping between formal models of the environment and neural responses (Wu et al., 2006; Naselaris et al., 2011; Gallant et al., 2012; Nunez-Elizalde et al., 2019). In this framework, explicit models capturing, e.g., visual or semantic content (Nishimoto et al., 2011; Huth et al., 2012, 2016) are constructed to predict brain activity from naturalistic stimuli or tasks. In both of these frameworks, hypotheses are formalized as explicit models of the stimulus or task, and the relative quality of a given model is quantified in terms of its accuracy in predicting neural responses to novel input. Commonality analysis (Mood, 1971; Seibold and McPhee, 1979) provides a statistical framework for partitioning variance due to combinations of variables and has been deployed for both voxelwise encoding models (e.g., Lescroart et al., 2015; de Heer et al., 2017) and pattern-based representational similarity analysis (e.g., Groen et al., 2018; Hebart et al., 2018). Adopting a prediction-oriented framework with an emphasis on accounting for variance in real-world contexts may help combat the reductionism inherent in contrived experimental manipulations and simple models (Yarkoni and Westfall, 2017; Varoquaux and Poldrack, 2019).
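
As a minimal sketch of this prediction-oriented framework (simulated data, not any published pipeline), the code below maps two hypothetical feature spaces, loosely labeled “semantic” and “articulatory,” onto a simulated voxel time series with ridge regression, scores each model by cross-validated prediction on held-out timepoints, and applies a commonality-style partition to split the jointly explained variance into unique and shared components.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(3)
n_tr, n_sem, n_art = 1000, 20, 10  # timepoints, "semantic" features, "articulatory" features

# Simulated stimulus features; in practice these would be derived by annotating or
# modeling the naturalistic stimulus itself (the two spaces are naturally correlated)
semantic = rng.normal(size=(n_tr, n_sem))
articulatory = 0.4 * semantic[:, :n_art] + rng.normal(size=(n_tr, n_art))

# Simulated voxel response driven mostly by the semantic features, plus noise
y = semantic @ rng.normal(size=n_sem) + 0.3 * (articulatory @ rng.normal(size=n_art))
y = y + rng.normal(scale=2.0, size=n_tr)

def cv_r2(features, y, alpha=10.0, n_splits=5):
    """Cross-validated R^2 of a ridge encoding model on held-out timepoints."""
    scores = []
    for train, test in KFold(n_splits=n_splits).split(features):
        pred = Ridge(alpha=alpha).fit(features[train], y[train]).predict(features[test])
        scores.append(1 - np.sum((y[test] - pred) ** 2)
                      / np.sum((y[test] - y[test].mean()) ** 2))
    return float(np.mean(scores))

r2_sem = cv_r2(semantic, y)
r2_art = cv_r2(articulatory, y)
r2_joint = cv_r2(np.hstack([semantic, articulatory]), y)

# Commonality-style partition of the jointly explained variance (values are approximate
# when estimated with regularized, cross-validated models and can dip slightly below zero)
print(f"semantic R^2 = {r2_sem:.2f}, articulatory R^2 = {r2_art:.2f}, joint R^2 = {r2_joint:.2f}")
print(f"unique semantic = {r2_joint - r2_art:.2f}, unique articulatory = {r2_joint - r2_sem:.2f}, "
      f"shared = {r2_sem + r2_art - r2_joint:.2f}")
```

In real fMRI data one would additionally respect the temporal autocorrelation of the signal when partitioning timepoints (e.g., by holding out contiguous runs) and tune the regularization per voxel; the sketch omits these steps for brevity.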

We can summarize these examples into several concrete recommendations, many of which are reflected in the exceptional body of work presented in this special issue: (1) formulate hypotheses with ecological considerations in mind; (2) rather than constraining data collection, sample brain activity under representative contexts for the ecological behaviors you wish to study; (3) find manipulations for characterizing the boundary conditions that naturally emerge in real-life contexts; (4) when possible, formalize hypotheses as explicit models capable of making quantitative predictions of neural activity under the most naturalistic conditions possible; (5) interrogate your models with the goal of understanding not only the neural data, but also the structure of the task, stimulus, or environment; (6) use your insights to generate new predictions to be tested in real-life contexts or under more controlled conditions as necessary.

5. Conclusion

We hope our argument has punctuated the fundamental tension between experimental control and ecological generalizability. We cannot naively decompose organism–environment relations into contrived experimental manipulations in hopes of recomposing them into a satisfying understanding of ecological brain function. By dogmatically adhering to systematic design, we risk creating a cognitive neuroscience of contrived experimental manipulations that have little meaning outside the laboratory—confining ourselves to what Brunswik (1947, p. 110) referred to as “a self-created ivory-tower ecology.” Naturalistic paradigms should not be relegated to post hoc model validation—they should provide a foundation from which theories are developed (Hasson et al., 2020). Moving toward a more ecological cognitive neuroscience is not simply a matter of plugging more realistic stimuli into our usual experiments, but stepping outside our usual mode of inquiry and reframing our questions to encompass the nested dynamics of brain, body, and environment (Gomez-Marin and Ghazanfar, 2019). We are optimistic that adopting an ecological perspective will not only complement our existing models, but revolutionize them.

Acknowledgments

We thank Christopher J. Honey, Tal Yarkoni, Rita Goldstein, Asieh Zadbood, Kenneth A. Norman, and James V. Haxby for conversations that motivated and informed the writing of this manuscript. This work was supported by the National Institutes of Health under award numbers DP1HD091948 (U.H., A.G.) and R01MH112566 (S.A.N.).

Footnotes

1

Brunswik in fact used the term “ecological validity” in a narrow sense to indicate the utility of a perceptual cue with respect to an ecologically-relevant state of the environment—modern usage more closely resembles Brunswik’s notion of “representative design” (Hammond and Stewart, 2001; Araújo et al., 2007).

2

Bannister (1966) humorously put it: “In order to behave like scientists, [experimental psychologists] must construct situations in which our subjects are totally controlled, manipulated and measured. We must cut our subjects down to size. We construct situations in which they can behave as little like human beings as possible and we do this in order to allow ourselves to make statements about the nature of their humanity.”

References

  1. Araújo D, Davids K, Passos P, 2007. Ecological validity, representative design, and correspondence between experimental task constraints and behavioral setting: comment on Rogers, Kadar, and Costall (2005). Ecol. Psychol. 19, 69–78. doi: 10.1080/10407410709336951. [DOI] [Google Scholar]
  2. Attneave F, 1954. Some informational aspects of visual perception. Psychol. Rev. 61, 183–193. doi: 10.1037/h0054663. [DOI] [PubMed] [Google Scholar]
  3. Baayen RH, Davidson DJ, Bates DM, 2008. Mixed-effects modeling with crossed random effects for subjects and items. J. Mem. Lang. 59, 390–412. doi: 10.1016/j.jml.2007.12.005. [DOI] [Google Scholar]
  4. Baldassano C, Chen J, Zadbood A, Pillow JW, Hasson U, Norman KA, 2017. Discovering event structure in continuous narrative perception and memory. Neuron 95, 709–721. doi: 10.1016/j.neuron.2017.06.041. [DOI] [PMC free article] [PubMed] [Google Scholar]
  5. Bannister D, 1966. Psychology as an exercise in paradox. Bull. Br. Psychol. Soc. 19, 21–26. [Google Scholar]
  6. Barker RG, 1965. Explorations in ecological psychology. Am. Psychol. 20, 1–14. doi: 10.1037/h0021697. [DOI] [PubMed] [Google Scholar]
  7. Barlow HB, 1961. Possible principles underlying the transformation of sensory messages In: Rosenblith WA (Ed.), Sensory Communication. MIT Press, Cambridge, MA, pp. 217–234. doi: 10.7551/mitpress/9780262518420.003.0013. [DOI] [Google Scholar]
  8. Bartels A, Zeki S, 2004. Functional brain mapping during free viewing of natural scenes. Hum. Brain Mapp. 21, 75–85. doi: 10.1002/hbm.10153. [DOI] [PMC free article] [PubMed] [Google Scholar]
  9. Bedny M, Aguirre GK, Thompson-Schill SL, 2007. Item analysis in functional magnetic resonance imaging. Neuroimage 35, 1093–1102. doi: 10.1016/j.neuroimage.2007.01.039. [DOI] [PubMed] [Google Scholar]
  10. Boto E, Holmes N, Leggett J, Roberts G, Shah V, Meyer SS, Duque Muñoz L, Mullinger KJ, Tierney TM, Bestmann S, Barnes GR, Bowtell R, Brookes MJ, 2018. Moving magnetoencephalography towards real-world applications with a wearable system. Nature 555, 657–661. doi: 10.1038/nature26147. [DOI] [PMC free article] [PubMed] [Google Scholar]
  11. Braitenberg V, 1984. Vehicles: Experiments in Synthetic Psychology. MIT Press, Cambridge, MA: http://www.worldcat.org/oclc/476621844. [Google Scholar]
  12. Breiman L, 2001. Statistical modeling: the two cultures (with comments and a rejoinder by the author). Stat. Sci. 16, 199–231. doi: 10.1214/ss/1009213726. [DOI] [Google Scholar]
  13. Bronfenbrenner U, 1977. Toward an experimental ecology of human development. Am. Psychol. 32, 513–531. doi: 10.1037/0003-066X.32.7.513. [DOI] [Google Scholar]
  14. Brooks RA, 1991. Intelligence without representation. Artif. Intell. 47, 139–159. doi: 10.1016/0004-3702(91)90053-M. [DOI] [Google Scholar]
  15. Brunswik E, 1943. Organismic achievement and environmental probability. Psychol. Rev. 50, 255–272. doi: 10.1037/h0060889. [DOI] [Google Scholar]
  16. Brunswik E, 1947. Perception and the Representative Design of Psychological Experiments. University of California Press, Berkeley, CA: http://www.worldcat.org/oclc/9551141. [Google Scholar]
  17. Brunswik E, 1949. Systematic and representative design of psychological experiments with results in physical and social perception In: Neyman J (Ed.), Proceedings of the Berkeley Symposium on Mathematical Statistics and Probability. University of California Press, Berkeley, CA, pp. 143–202. [Google Scholar]
  18. Brunswik E, 1955. Representative design and probabilistic theory in a functional psychology. Psychol. Rev. 62, 193–217. doi: 10.1037/h0047470. [DOI] [PubMed] [Google Scholar]
  19. Calhoun AJ, Murthy M, 2017. Quantifying behavior to solve sensorimotor transformations: advances from worms and flies. Curr. Opin. Neurobiol. 46, 90–98. doi: 10.1016/j.conb.2017.08.006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Campbell DT, 1973. The social scientist as methodological servant of the experimenting society. Policy Stud. J. 72–75. doi: 10.1111/j.1541-0072.1973.tb00128.x. [DOI] [Google Scholar]
  21. Carrasco M, 2011. Visual attention: the past 25 years. Vis. Res. 51, 1484–1525. doi: 10.1016/j.visres.2011.04.012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Chang LJ, Jolly E, Cheong JH, Rapuano K, Greenstein N, Chen PHA, Manning JR, 2018. Endogenous variation in ventromedial prefrontal cortex state dynamics during naturalistic viewing reflects affective experience. bioRxiv, 487892. doi: 10.1101/487892. [DOI] [PMC free article] [PubMed] [Google Scholar]
  23. Chen J, Leong YC, Honey CJ, Yong CH, Norman KA, Hasson U, 2017. Shared memories reveal shared structure in neural activity across individuals. Nat. Neurosci. 20, 115–125. doi: 10.1038/nn.4450. [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. Chiel HJ, Beer RD, 1997. The brain has a body: adaptive behavior emerges from interactions of nervous system, body and environment. Trends Neurosci. 20, 553–557. doi: 10.1016/S0166-2236(97)01149-1. [DOI] [PubMed] [Google Scholar]
  25. Cisek P, 2019. Resynthesizing behavior through phylogenetic refinement. Atten. Percept. Psychophys. 81, 2265–2287. doi: 10.3758/s13414-019-01760-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  26. Clark HH, 1973. The language-as-fixed-effect fallacy: a critique of language statistics in psychological research. J. Verbal Learn. Verbal Behav. 12, 335–359. doi: 10.1016/S0022-5371(73)80014-3. [DOI] [Google Scholar]
  27. Coleman EB, 1964. Generalizing to a language population. Psychol. Rep. 14, 219–226. doi: 10.2466/pr0.1964.14.1.219. [DOI] [Google Scholar]
  28. Cronbach LJ, 1975. Beyond the two disciplines of scientific psychology. Am. Psychol. 30, 116–127. doi: 10.1037/h0076829. [DOI] [Google Scholar]
  29. Cronbach LJ, Rajaratnam N, Gleser GC, 1963. Theory of generalizability: a liberalization of reliability theory. Br. J. Math. Stat. Psychol. 16, 137–163. doi: 10.1111/j.2044-8317.1963.tb00206.x. [DOI] [Google Scholar]
  30. Dang J, King KM, Inzlicht M, 2020. Why are self-report and behavioral measures weakly correlated? Trends Cogn. Sci. 24, 267–269. doi: 10.1016/j.tics.2020.01.007. [DOI] [PMC free article] [PubMed] [Google Scholar]
  31. David SV, Vinje WE, Gallant JL, 2004. Natural stimulus statistics alter the receptive field structure of V1 neurons. J. Neurosci. 24, 6991–7006. doi: 10.1523/jneurosci.1422-04.2004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  32. de Heer WA, Huth AG, Griffiths TL, Gallant JL, Theunissen FE, 2017. The hierarchical cortical organization of human speech processing. J. Neurosci. 37, 6539–6557. doi: 10.1523/jneurosci.3267-16.2017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  33. Deniz F, Nunez-Elizalde AO, Huth AG, Gallant JL, 2019. The representation of semantic information across human cerebral cortex during listening versus reading is invariant to stimulus modality. J. Neurosci. 39, 7722–7736. doi: 10.1523/jneurosci.0675-19.2019. [DOI] [PMC free article] [PubMed] [Google Scholar]
  34. Dennett DC, 1995. Darwin’s Dangerous Idea: Evolution and the Meanings of Life. Simon and Schuster, New York, NY: http://www.worldcat.org/oclc/892927037. [Google Scholar]
  35. DuPre E, Hanke M, Poline JB, 2019. Nature abhors a paywall: how open science can realize the potential of naturalistic stimuli. Neuroimage 216, 116330. doi: 10.1016/j.neuroimage.2019.116330. [DOI] [PMC free article] [PubMed] [Google Scholar]
  36. Eisenberg IW, Bissett PG, Enkavi AZ, Li J, MacKinnon DP, Marsch LA, Poldrack RA, 2019. Uncovering the structure of self-regulation through data-driven ontology discovery. Nat. Commun. 10, 2319. doi: 10.1038/s41467-019-10301-1. [DOI] [PMC free article] [PubMed] [Google Scholar]
  37. Elliott ML, Knodt AR, Ireland D, Morris ML, Poulton R, Ramrakha S, Sison ML, Moffitt TE, Caspi A, Hariri AR, 2020. What is the test-retest reliability of common task-functional MRI measures? New empirical evidence and a meta-analysis. Psychol. Sci. doi: 10.1177/0956797620916786. [DOI] [PMC free article] [PubMed] [Google Scholar]
  38. Elwert F, Winship C, 2014. Endogenous selection bias: the problem of conditioning on a collider variable. Annu. Rev. Sociol. 40, 31–53. doi: 10.1146/annurev-soc-071913-043455. [DOI] [PMC free article] [PubMed] [Google Scholar]
  39. Felsen G, Dan Y, 2005. A natural approach to studying vision. Nat. Neurosci. 8, 1643–1646. doi: 10.1038/nn1608. [DOI] [PubMed] [Google Scholar]
  40. Fiedler K, 2011. Voodoo correlations are everywhere—Not only in neuroscience. Perspect. Psychol. Sci. 6, 163–171. doi: 10.1177/1745691611400237. [DOI] [PubMed] [Google Scholar]
  41. Fishell AK, Burns-Yocum TM, Bergonzi KM, Eggebrecht AT, Culver JP, 2019. Mapping brain function during naturalistic viewing using high-density diffuse optical tomography. Sci. Rep 9, 11115. doi: 10.1038/s41598-019-45555-8. [DOI] [PMC free article] [PubMed] [Google Scholar]
  42. Gallant JL, Nishimoto S, Naselaris T, Wu MCK, 2012. System identification, encoding models, and decoding models: a powerful new approach to fMRI research In: Visual Population Codes. MIT Press, Cambridge, MA, pp. 163–188. [Google Scholar]
  43. Gibson JJ, 1979. The Ecological Approach to Visual Perception. Psychology Press, New York, NY: http://www.worldcat.org/oclc/962481298. [Google Scholar]
  44. Gillis J, Schneider C, 1966. The historical preconditions of representative design In: Hammond KR (Ed.), The Psychology of Egon Brunswik. Holt, Rinehart & Winston, New York, NY, pp. 204–236. [Google Scholar]
  45. Gomez-Marin A, Ghazanfar AA, 2019. The life of behavior. Neuron 104, 25–36. doi: 10.1016/j.neuron.2019.09.017. [DOI] [PMC free article] [PubMed] [Google Scholar]
  46. Gomez-Marin A, Paton JJ, Kampff AR, Costa RM, Mainen ZF, 2014. Big behavioral data: psychology, ethology and the foundations of neuroscience. Nat. Neurosci. 17, 1455–1462. doi: 10.1038/nn.3812. [DOI] [PubMed] [Google Scholar]
  47. Groen II, Greene MR, Baldassano C, Fei-Fei L, Beck DM, Baker CI, 2018. Distinct contributions of functional and deep neural network features to representational similarity of scenes in human brain and behavior. Elife 7, e32962. doi: 10.7554/eLife.32962. [DOI] [PMC free article] [PubMed] [Google Scholar]
  48. Hamilton LS, Huth AG, 2018. The revolution will not be controlled: natural stimuli in speech neuroscience. Lang. Cogn. Neurosci. 1–10. doi: 10.1080/23273798.2018.1499946. [DOI] [PMC free article] [PubMed] [Google Scholar]
  49. Hammond KR, 1955. Probabilistic functioning and the clinical method. Psychol. Rev. 62, 255–262. doi: 10.1037/h0046845. [DOI] [PubMed] [Google Scholar]
  50. Hammond KR, Stewart TR, 2001. The Essential Brunswik: Beginnings, Explications, Applications. Oxford University Press, Oxford, England: http://www.worldcat.org/oclc/59150825. [Google Scholar]
  51. Hanke M, Baumgartner FJ, Ibe P, Kaule FR, 2014. A high-resolution 7-Tesla fMRI dataset from complex natural stimulation with an audio movie. Sci. Data 1, 140003. doi: 10.1038/sdata.2014.3. [DOI] [PMC free article] [PubMed] [Google Scholar]
  52. Harari GM, Lane ND, Wang R, Crosier BS, Campbell AT, Gosling SD, 2016. Using smartphones to collect behavioral data in psychological science: opportunities, practical considerations, and challenges. Perspect. Psychol. Sci. 11, 838–854. doi: 10.1177/1745691616650285. [DOI] [PMC free article] [PubMed] [Google Scholar]
  53. Hasson U, Chen J, Honey CJ, 2015. Hierarchical process memory: memory as an integral component of information processing. Trends Cogn. Sci. 19, 304–313. doi: 10.1016/j.tics.2015.04.006. [DOI] [PMC free article] [PubMed] [Google Scholar]
  54. Hasson U, Honey CJ, 2012. Future trends in neuroimaging: neural processes as expressed within real-life contexts. Neuroimage 62, 1272–1278. doi: 10.1016/j.neuroimage.2012.02.004. [DOI] [PMC free article] [PubMed] [Google Scholar]
  55. Hasson U, Nastase SA, Goldstein A, 2020. Direct fit to nature: an evolutionary perspective on biological and artificial neural networks. Neuron 105, 416–434. doi: 10.1016/j.neuron.2019.12.002. [DOI] [PMC free article] [PubMed] [Google Scholar]
  56. Hasson U, Nir Y, Levy I, Fuhrmann G, Malach R, 2004. Intersubject synchronization of cortical activity during natural vision. Science 303, 1634–1640. doi: 10.1126/science.1089506. [DOI] [PubMed] [Google Scholar]
  57. Haxby JV, Gobbini MI, Nastase SA, 2020a. Naturalistic stimuli reveal a dominant role for agentic action in visual representation. Neuroimage 216, 116561. doi: 10.1016/j.neuroimage.2020.116561. [DOI] [PubMed] [Google Scholar]
  58. Haxby JV, Guntupalli JS, Connolly AC, Halchenko YO, Conroy BR, Gobbini MI, Hanke M, Ramadge PJ, 2011. A common, high-dimensional model of the representational space in human ventral temporal cortex. Neuron 72, 404–416. doi: 10.1016/j.neuron.2011.08.026. [DOI] [PMC free article] [PubMed] [Google Scholar]
  59. Haxby JV, Guntupalli JS, Nastase SA, Feilong M, 2020b. Hyperalignment: modeling shared information encoded in idiosyncratic cortical topographies. Elife 9, e56601. doi: 10.7554/eLife.56601. [DOI] [PMC free article] [PubMed] [Google Scholar]
  60. Hebart MN, Bankson BB, Harel A, Baker CI, Cichy RM, 2018. The representational dynamics of task and object processing in humans. Elife 7, e32816. doi: 10.7554/eLife.32816. [DOI] [PMC free article] [PubMed] [Google Scholar]
  61. Heusser AC, Fitzpatrick PC, Manning JR, 2018. How is experience transformed into memory? bioRxiv, 409987. doi: 10.1101/409987. [DOI] [Google Scholar]
  62. Hubel DH, Wiesel TN, 1962. Receptive fields, binocular interaction and functional architecture in the cat’s visual cortex. J. Physiol. 160, 106–154. doi: 10.1113/jphysiol.1962.sp006837. [DOI] [PMC free article] [PubMed] [Google Scholar]
  63. Huth AG, de Heer WA, Griffiths TL, Theunissen FE, Gallant JL, 2016. Natural speech reveals the semantic maps that tile human cerebral cortex. Nature 532, 453–458. doi: 10.1038/nature17637. [DOI] [PMC free article] [PubMed] [Google Scholar]
  64. Huth AG, Nishimoto S, Vu AT, Gallant JL, 2012. A continuous semantic space describes the representation of thousands of object and action categories across the human brain. Neuron 76, 1210–1224. doi: 10.1016/j.neuron.2012.10.014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  65. Hyon R, Kleinbaum AM, Parkinson C, 2020. Social network proximity predicts similar trajectories of psychological states: evidence from multi-voxel spatiotemporal dynamics. Neuroimage 216, 116492. doi: 10.1016/j.neuroimage.2019.116492. [DOI] [PubMed] [Google Scholar]
  66. Jenkins JJ, 1974. Remember that old theory of memory? Well, forget it. Am. Psychol. 29, 785–795. doi: 10.1037/h0037399. [DOI] [Google Scholar]
  67. Jolly E, Chang LJ, 2019. The Flatland fallacy: moving beyond low-dimensional thinking. Top. Cogn. Sci. 11, 433–454. doi: 10.1111/tops.12404. [DOI] [PMC free article] [PubMed] [Google Scholar]
  68. Judd CM, Westfall J, Kenny DA, 2012. Treating stimuli as a random factor in social psychology: a new and comprehensive solution to a pervasive but largely ignored problem. J. Pers. Soc. Psychol. 103, 54–69. doi: 10.1037/a0028347. [DOI] [PubMed] [Google Scholar]
  69. Kanwisher N, 2010. Functional specificity in the human brain: a window into the functional architecture of the mind. Proc. Natl. Acad. Sci. U.S.A. 107, 11163–11170. doi: 10.1073/pnas.1005062107. [DOI] [PMC free article] [PubMed] [Google Scholar]
  70. Kanwisher N, McDermott J, Chun MM, 1997. The fusiform face area: a module in human extrastriate cortex specialized for face perception. J. Neurosci. 17, 4302–4311. doi: 10.1523/jneurosci.17-11-04302.1997. [DOI] [PMC free article] [PubMed] [Google Scholar]
  71. Kayser C, Körding KP, König P, 2004. Processing of complex stimuli and natural scenes in the visual cortex. Curr. Opin. Neurobiol. 14, 468–473. doi: 10.1016/j.conb.2004.06.002. [DOI] [PubMed] [Google Scholar]
  72. Kriegeskorte N, Mur M, Bandettini PA, 2008. Representational similarity analysis—connecting the branches of systems neuroscience. Front. Syst. Neurosci. 2, 4. doi: 10.3389/neuro.06.004.2008. [DOI] [PMC free article] [PubMed] [Google Scholar]
  73. Kruskal W, Mosteller F, 1980. Representative sampling, IV: the history of the concept in statistics, 1895-1939. Int. Stat. Rev. 48, 169–195. doi: 10.2307/1403151. [DOI] [Google Scholar]
  74. LeCun Y, Bengio Y, Hinton G, 2015. Deep learning. Nature 521, 436–444. doi: 10.1038/nature14539. [DOI] [PubMed] [Google Scholar]
  75. Leopold DA, Park SH, 2020. Studying the visual brain in its natural rhythm. Neuroimage 216, 116790. doi: 10.1016/j.neuroimage.2020.116790. [DOI] [PMC free article] [PubMed] [Google Scholar]
  76. Lescroart MD, Stansbury DE, Gallant JL, 2015. Fourier power, subjective distance, and object categories all provide plausible models of BOLD responses in scene-selective visual areas. Front. Comput. Neurosci. 9, 135. doi: 10.3389/fncom.2015.00135. [DOI] [PMC free article] [PubMed] [Google Scholar]
  77. Lewicki MS, 2002. Efficient coding of natural sounds. Nat. Neurosci. 5, 356–363. doi: 10.1038/nn831. [DOI] [PubMed] [Google Scholar]
  78. Liu Y, Piazza EA, Simony E, Shewokis PA, Onaral B, Hasson U, Ayaz H, 2017. Measuring speaker—listener neural coupling with functional near infrared spectroscopy. Sci. Rep. 7, 43293. doi: 10.1038/srep43293. [DOI] [PMC free article] [PubMed] [Google Scholar]
  79. Maguire EA, 2012. Studying the freely-behaving brain with fMRI. Neuroimage 62, 1170–1176. doi: 10.1016/j.neuroimage.2012.01.009. [DOI] [PMC free article] [PubMed] [Google Scholar]
  80. Marr D, 1982. Vision: A Computational Investigation Into the Human Representation and Processing of Visual Information. MIT Press, Cambridge, MA: http://www.worldcat.org/oclc/648759762. [Google Scholar]
  81. Mathiak K, Weber R, 2006. Toward brain correlates of natural behavior: fMRI during violent video games. Hum. Brain Mapp. 27, 948–956. doi: 10.1002/hbm.20234. [DOI] [PMC free article] [PubMed] [Google Scholar]
  82. Matusz PJ, Dikker S, Huth AG, Perrodin C, 2019. Are we ready for real-world neuroscience? J. Cogn. Neurosci. 31, 327–338. doi: 10.1162/jocn_e_01276. [DOI] [PMC free article] [PubMed] [Google Scholar]
  83. McMahon DB, Russ BE, Elnaiem HD, Kurnikova AI, Leopold DA, 2015. Single-unit activity during natural vision: diversity, consistency, and spatial sensitivity among AF face patch neurons. J. Neurosci. 35, 5537–5548. doi: 10.1523/jneurosci.3825-14.2015. [DOI] [PMC free article] [PubMed] [Google Scholar]
  84. Meehl PE, 1990. Why Summaries of Research on Psychological Theories are Often Uninterpretable. Psychol. Rep. 66, 195–244. doi: 10.2466/pr0.1990.66.1.195. [DOI] [Google Scholar]
  85. Miller G, 2012. The smartphone psychology manifesto. Perspect. Psychol. Sci. 7, 221–237. doi: 10.1177/1745691612441215. [DOI] [PubMed] [Google Scholar]
  86. Mood AM, 1971. Partitioning variance in multiple regression analyses as a tool for developing learning models. Am. Educ. Res. J. 8, 191–202. doi: 10.2307/1162174. [DOI] [Google Scholar]
  87. Musall S, Kaufman MT, Juavinett AL, Gluf S, Churchland AK, 2019. Single-trial neural dynamics are dominated by richly varied movements. Nat. Neurosci. 22, 1677–1686. doi: 10.1038/s41593-019-0502-4. [DOI] [PMC free article] [PubMed] [Google Scholar]
  88. Naselaris T, Kay KN, Nishimoto S, Gallant JL, 2011. Encoding and decoding in fMRI. Neuroimage 56, 400–410. doi: 10.1016/j.neuroimage.2010.07.073. [DOI] [PMC free article] [PubMed] [Google Scholar]
  89. Nastase SA, Gazzola V, Hasson U, Keysers C, 2019a. Measuring shared responses across subjects using intersubject correlation. Soc. Cogn. Affect. Neurosci. 14, 667–685. doi: 10.1093/scan/nsz037. [DOI] [PMC free article] [PubMed] [Google Scholar]
  90. Nastase SA, Liu YF, Hillman H, Zadbood A, Hasenfratz L, Keshavarzian N, Chen J, Honey CJ, Yeshurun Y, Regev M, Nguyen M, Chang CHC, Baldassano CB, Lositsky O, Simony E, Chow MA, Leong YC, Brooks PP, Micciche E, Choe G, Goldstein A, Halchenko YO, Norman KA, Hasson U. Narratives: fMRI data for evaluating models of naturalistic language comprehension. [DOI] [PMC free article] [PubMed] [Google Scholar]
  91. Nath T, Mathis A, Chen AC, Patel A, Bethge M, Mathis MW, 2019. Using DeepLab-Cut for 3D markerless pose estimation across species and behaviors. Nat. Protoc. 14, 2152–2176. doi: 10.1038/s41596-019-0176-0. [DOI] [PubMed] [Google Scholar]
  92. Neisser U, Hyman IE, 2000. Memory Observed: Remembering in Natural Contexts. Worth, New York, NY: http://www.worldcat.org/oclc/1040762184. [Google Scholar]
  93. Newell A, 1973. You can’t play 20 Questions with nature and win In: Chase WG (Ed.), Visual Information Processing. Academic Press, New York, NY, pp. 283–308. [Google Scholar]
  94. Neyman J, 1934. On the two different aspects of the representative method: the method of stratified sampling and the method of purposive selection. J. R. Stat. Soc. 97, 558–625. doi: 10.1111/j.2397-2335.1934.tb04184.x. [DOI] [Google Scholar]
  95. Nielson DM, Smith TA, Sreekumar V, Dennis S, Sederberg PB, 2015. Human hippocampus represents space and time during retrieval of real-world memories. Proc. Natl. Acad. Sci. U.S.A. 112, 11078–11083. doi: 10.1073/pnas.1507104112. [DOI] [PMC free article] [PubMed] [Google Scholar]
  96. Nishimoto S, Vu AT, Naselaris T, Benjamini Y, Yu B, Gallant JL, 2011. Reconstructing visual experiences from brain activity evoked by natural movies. Curr. Biol. 21, 1641–1646. doi: 10.1016/j.cub.2011.08.031. [DOI] [PMC free article] [PubMed] [Google Scholar]
  97. Nunez-Elizalde AO, Huth AG, Gallant JL, 2019. Voxelwise encoding models with non-spherical multivariate normal priors. Neuroimage 197, 482–492. doi: 10.1016/j.neuroimage.2019.04.012. [DOI] [PubMed] [Google Scholar]
  98. Olshausen BA, Field DJ, 1996. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature 381, 607–609. doi: 10.1038/381607a0. [DOI] [PubMed] [Google Scholar]
  99. Olshausen BA, Field DJ, 2005. How close are we to understanding V1? Neural Comput. 17, 1665–1699. doi: 10.1162/0899766054026639. [DOI] [PubMed] [Google Scholar]
  100. O’Toole AJ, Castillo CD, Parde CJ, Hill MQ, Chellappa R, 2018. Face space representations in deep convolutional neural networks. Trends Cogn. Sci. 22, 794–809. doi: 10.1016/j.tics.2018.06.006. [DOI] [PubMed] [Google Scholar]
  101. Park SH, Russ BE, McMahon DB, Koyano KW, Berman RA, Leopold DA, 2017. Functional subpopulations of neurons in a macaque face patch revealed by single-unit fMRI mapping. Neuron 95, 971–981. doi: 10.1016/j.neuron.2017.07.014. [DOI] [PMC free article] [PubMed] [Google Scholar]
  102. Parkinson C, Kleinbaum AM, Wheatley T, 2017. Spontaneous neural encoding of social network position. Nat. Hum. Behav. 1, 0072. doi: 10.1038/s41562-017-0072. [DOI] [Google Scholar]
  103. Parkinson C, Kleinbaum AM, Wheatley T, 2018. Similar neural responses predict friendship. Nat. Commun. 9, 332. doi: 10.1038/s41467-017-02722-7. [DOI] [PMC free article] [PubMed] [Google Scholar]
  104. Pereira TD, Aldarondo DE, Willmore L, Kislin M, Wang SSH, Murthy M, Shaevitz JW, 2019. Fast animal pose estimation using deep neural networks. Nat. Methods 16, 117–125. doi: 10.1038/s41592-018-0234-5. [DOI] [PMC free article] [PubMed] [Google Scholar]
  105. Pezzulo G, Cisek P, 2016. Navigating the affordance landscape: feedback control as a process model of behavior and cognition. Trends Cogn. Sci. 20, 414–424. doi: 10.1016/j.tics.2016.03.013. [DOI] [PubMed] [Google Scholar]
  106. Regev M, Honey CJ, Simony E, Hasson U, 2013. Selective and invariant neural responses to spoken and written narratives. J. Neurosci. 33, 15978–15988. doi: 10.1523/jneurosci.1580-13.2013. [DOI] [PMC free article] [PubMed] [Google Scholar]
  107. Richards BA, Lillicrap TP, Beaudoin P, Bengio Y, Bogacz R, Christensen A, Clopath C, Costa RP, de Berker A, Ganguli S, Gillon CJ, Hafner D, Kepecs A, Kriegeskorte N, Latham P, Lindsay GW, Miller KD, Naud R, Pack CC, Poirazi P, Roelfsema P, Sacramento J, Saxe A, Scellier B, Schapiro AC, Senn W, Wayne G, Yamins D, Zenke F, Zylberberg J, Therien D, Kording KP, 2019. A deep learning framework for neuroscience. Nat. Neurosci. 22, 1761–1770. doi: 10.1038/s41593-019-0520-2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  108. Rissman J, Chow TE, Reggente N, Wagner AD, 2016. Decoding fMRI signatures of real-world autobiographical memory retrieval. J. Cogn. Neurosci. 28, 604–620. doi: 10.1162/jocn_a_00920. [DOI] [PubMed] [Google Scholar]
  109. Rohrer JM, 2018. Thinking clearly about correlations and causation: graphical causal models for observational data. Adv. Methods Pract. Psychol. Sci. 1, 27–42. doi: 10.1177/2515245917745629. [DOI] [Google Scholar]
  110. Russ BE, Leopold DA, 2015. Functional MRI mapping of dynamic visual features during natural viewing in the macaque. Neuroimage 109, 84–94. doi: 10.1016/j.neuroimage.2015.01.012. [DOI] [PMC free article] [PubMed] [Google Scholar]
  111. Schroff F, Kalenichenko D, Philbin J, 2015. FaceNet: a unified embedding for face recognition and clustering. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 815–823. doi: 10.1109/CVPR.2015.7298682. [DOI]
  112. Seibold DR, McPhee RD, 1979. Commonality analysis: a method for decomposing explained variance in multiple regression analyses. Hum. Commun. Res. 5, 355–365. doi: 10.1111/j.1468-2958.1979.tb00649.x. [DOI] [Google Scholar]
  113. Simoncelli EP, Olshausen BA, 2001. Natural image statistics and neural representation. Annu. Rev. Neurosci. 24, 1193–1216. doi: 10.1146/annurev.neuro.24.1.1193. [DOI] [PubMed] [Google Scholar]
  114. Sonkusare S, Breakspear M, Guo C, 2019. Naturalistic stimuli in neuroscience: critically acclaimed. Trends Cogn. Sci. 23, 699–714. doi: 10.1016/j.tics.2019.05.004. [DOI] [PubMed] [Google Scholar]
  115. Spiers HJ, Maguire EA, 2006. Thoughts, behaviour, and brain dynamics during navigation in the real world. Neuroimage 31, 1826–1840. doi: 10.1016/j.neuroimage.2006.01.037. [DOI] [PubMed] [Google Scholar]
  116. Spiers HJ, Maguire EA, 2007. Decoding human brain activity during real-world experiences. Trends Cogn. Sci. 11, 356–365. doi: 10.1016/j.tics.2007.06.002. [DOI] [PubMed] [Google Scholar]
  117. Squire LR, 2004. Memory systems of the brain: a brief history and current perspective. Neurobiol. Learn. Mem. 82, 171–177. doi: 10.1016/j.nlm.2004.06.005. [DOI] [PubMed] [Google Scholar]
  118. Srivastava M, Grill-Spector K, 2018. The Effect of Learning Strategy Versus Inherent Architecture Properties on the Ability of Convolutional Neural Networks to Develop Transformation Invariance. https://arxiv.org/abs/1810.13128 [Google Scholar]
  119. Stephens GJ, Silbert LJ, Hasson U, 2010. Speaker-listener neural coupling underlies successful communication. Proc. Natl. Acad. Sci. U.S.A. 107, 14425–14430. doi: 10.1073/pnas.1008662107. [DOI] [PMC free article] [PubMed] [Google Scholar]
  120. Stringer C, Pachitariu M, Steinmetz N, Reddy CB, Carandini M, Harris KD, 2019. Spontaneous behaviors drive multidimensional, brainwide activity. Science 364, eaav7893. doi: 10.1126/science.aav7893. [DOI] [PMC free article] [PubMed] [Google Scholar]
  121. Tolman EC, Brunswik E, 1935. The organism and the causal texture of the environment. Psychol. Rev. 42, 43–77. doi: 10.1037/h0062156. [DOI] [Google Scholar]
  122. Tsao DY, Freiwald WA, Tootell RB, Livingstone MS, 2006. A cortical region consisting entirely of face-selective cells. Science 311, 670–674. doi: 10.1126/science.1119983. [DOI] [PMC free article] [PubMed] [Google Scholar]
  123. Varoquaux G, Poldrack RA, 2019. Predictive models avoid excessive reductionism in cognitive neuroimaging. Curr. Opin. Neurobiol. 55, 1–6. doi: 10.1016/j.conb.2018.11.002. [DOI] [PubMed] [Google Scholar]
  124. von Uexküll J, 1934. A Foray Into the Worlds of Animals and Humans: With a Theory of Meaning (O’Neill JD, Trans.). University of Minnesota Press, Minneapolis, MN: http://www.worldcat.org/oclc/918537521. [Google Scholar]
  125. Wang NX, Olson JD, Ojemann JG, Rao RP, Brunton BW, 2016. Unsupervised decoding of long-term, naturalistic human neural recordings with automated video and audio annotations. Front. Hum. Neurosci. 10, 165. doi: 10.3389/fnhum.2016.00165. [DOI] [PMC free article] [PubMed] [Google Scholar]
  126. Wells GL, Windschitl PD, 1999. Stimulus sampling and social psychological experimentation. Pers. Soc. Psychol. Bull. 25, 1115–1125. doi: 10.1177/01461672992512005. [DOI] [Google Scholar]
  127. Westfall J, Nichols TE, Yarkoni T, 2016. Fixing the stimulus-as-fixed-effect fallacy in task fMRI. Wellcome Open Res. 1, 23. doi: 10.12688/wellcomeopenres.10298.2. [DOI] [PMC free article] [PubMed] [Google Scholar]
  128. Willems RM, Nastase SA, Milivojevic B, 2020. Narratives for neuroscience. Trends Neurosci. 43, 271–273. doi: 10.1016/j.tins.2020.03.003. [DOI] [PubMed] [Google Scholar]
  129. Wu MC-K, David SV, Gallant JL, 2006. Complete functional characterization of sensory neurons by system identification. Annu. Rev. Neurosci. 29, 477–505. doi: 10.1146/annurev.neuro.29.051605.113024. [DOI] [PubMed] [Google Scholar]
  130. Yarkoni T, 2019. The Generalizability Crisis. PsyArXiv. [DOI] [PMC free article] [PubMed] [Google Scholar]
  131. Yarkoni T, Westfall J, 2017. Choosing prediction over explanation in psychology: lessons from machine learning. Perspect. Psychol. Sci. 12, 1100–1122. doi: 10.1177/1745691617693393. [DOI] [PMC free article] [PubMed] [Google Scholar]
  132. Yeshurun Y, Swanson S, Simony E, Chen J, Lazaridi C, Honey CJ, Hasson U, 2017. Same story, different story: the neural representation of interpretive frameworks. Psychol. Sci. 28, 307–319. doi: 10.1177/0956797616682029. [DOI] [PMC free article] [PubMed] [Google Scholar]
  133. Zadbood A, Chen J, Leong YC, Norman KA, Hasson U, 2017. How we transmit memories to other brains: constructing shared neural representations via communication. Cereb. Cortex 27, 4988–5000. doi: 10.1093/cercor/bhx202. [DOI] [PMC free article] [PubMed] [Google Scholar]
