Author manuscript; available in PMC: 2014 Aug 14.
Published in final edited form as: Neuroimage. 2011 Aug 11;62(2):1216–1220. doi: 10.1016/j.neuroimage.2011.08.007

The future of fMRI in Cognitive Neuroscience

Russell A Poldrack 1
PMCID: PMC4131441  NIHMSID: NIHMS609319  PMID: 21856431

Abstract

Over the last twenty years, fMRI has revolutionized cognitive neuroscience. Here I outline a vision for what the next twenty years of fMRI in cognitive neuroscience might look like. Some developments that I hope for include increased methodological rigor, an increasing focus on connectivity and pattern analysis as opposed to “blobology”, a greater focus on selective inference powered by open databases, and increased use of ontologies and computational models to describe underlying processes.

Keywords: fMRI, cognitive neuroscience, meta-analysis, ontology, connectivity


When I began graduate school for cognitive psychology in 1989, I was unaware of the ongoing race to develop functional magnetic resonance imaging (fMRI). It may come as no surprise to those who know my writings that when I first encountered imaging studies as a graduate student, I was highly skeptical that it could inform us about mental function. I was, in this sense, in good company; in those days, it was not uncommon to hear conversations between cognitive psychologists likening fMRI to trying to understand how an engine works by measuring the temperature of the exhaust manifold. When I moved to Stanford in 1995 to start my postdoc, I didn’t actually go there to do fMRI, but circumstances led me to get involved in the imaging work that was ongoing in John Gabrieli’s lab, and my career as a neuroimager was born.

My thoughts below about the future of fMRI in cognitive neuroscience would be better characterized as hopes rather than predictions. Despite what I see as serious fundamental problems in how fMRI has been and continues to be used in cognitive neuroscience, I think that the last few years have witnessed a number of encouraging new developments, and I remain very hopeful that fMRI will continue to provide us with useful insights into the relation between mind and brain. In what follows, I outline what I see as the most promising new directions for fMRI in cognitive neuroscience, with an obvious bias towards some of the directions that my own research is currently taking.

Methodological rigor

Foremost, I hope that in the next 20 years the field of cognitive neuroscience will increase the rigor with which it applies neuroimaging methods. The recent debates about circularity and “voodoo correlations” [16, 34] have highlighted the need for increased care regarding analytic methods. Consideration of similar debates in genetics and clinical trials led Ioannidis [12] to outline a number of factors that may contribute to increased levels of spurious results in any scientific field, and the degree to which many of these apply to fMRI research is rather sobering:

  • small sample sizes

  • small effect sizes

  • large number of tested effects

  • flexibility in designs, definitions, outcomes, and analysis methods

  • being a “hot” scientific field

Some simple methodological improvements could make a big difference. First, the field needs to agree that inference based on uncorrected statistical results is not acceptable [cf. 5]. Many researchers have digested this important fact, but it is still common to see results presented at thresholds such as uncorrected p < .005. Because such uncorrected thresholds do not adapt to the data (e.g., the number of voxels tested or their spatial smoothness), they are certain to be invalid in almost every situation (potentially being either overly liberal or overly conservative). As an example, I took the fMRI data from Tom et al. [31], and created a random “individual difference” variable. Thus, there should be no correlations observed other than Type I errors. However, thresholding at uncorrected p < .001 and a minimum cluster size of 25 voxels (a common heuristic threshold) showed a significant region near the amygdala; Figure 1 shows this region along with a plot of the “beautiful” (but artifactual) correlation between activation and the random behavioral variable. This activation was not present when using a corrected statistic. A similar point was made in a more humorous way by Bennett et al. [4], who scanned a dead salmon being presented with a social cognition task and found activation when using an uncorrected threshold. There are now a number of well-established methods for multiple comparisons correction [26], such that there is absolutely no excuse to present results at uncorrected thresholds. The most common reason for failing to use rigorous corrections for multiple tests is that with smaller samples these methods are highly conservative, and thus result in a high rate of false negatives. This is certainly a problem, but I don’t think that the answer is to present uncorrected results; rather, the answer is to ensure that one’s sample is large enough to provide sufficient statistical power to find the effects of interest.
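The arithmetic behind this point can be sketched with simulated data (not the data from the original demonstration): threshold a whole-brain map of pure noise, once at an uncorrected level and once with a simple Bonferroni correction. The voxel count is an illustrative assumption, and Bonferroni stands in for the many valid correction schemes.

```python
import math
import random

random.seed(1)

def p_two_tailed(z):
    """Two-tailed p-value for a standard normal test statistic."""
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

n_voxels = 20000  # assumed search volume, for illustration
# a pure-noise group map: every voxel's statistic is null by construction
stats = [random.gauss(0.0, 1.0) for _ in range(n_voxels)]

uncorrected = sum(p_two_tailed(z) < 0.001 for z in stats)
bonferroni = sum(p_two_tailed(z) < 0.05 / n_voxels for z in stats)

print("uncorrected p < .001:", uncorrected)   # ~20 false positives expected
print("Bonferroni-corrected:", bonferroni)    # ~0 expected
```

Even without any true effect, the uncorrected threshold reliably flags a handful of voxels (about n_voxels × .001 of them), while the corrected threshold does not.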

Figure 1.

An example of artifactual activation that survives a heuristic statistical correction. The left panel shows the thresholded activation map (p < .001, voxelwise uncorrected, 25 voxel extent threshold) for the correlation between activation and a randomly generated variable. The right plot shows the correlation between activation in the peak voxel from the map and the randomly generated variable, demonstrating that completely spurious data can produce seemingly compelling results when heuristic corrections are combined with circular data analysis.

Second, I have become increasingly concerned about the use of “small volume corrections” to address the multiple testing problem. The use of a priori masks to constrain statistical testing is perfectly legitimate, but one often gets the feeling that the masks used for small volume correction were chosen after seeing the initial results (perhaps after a whole-brain corrected analysis was not significant). In such a case, any inferences based on these corrections are circular and the statistics are useless. Researchers who plan to use small volume corrections in their analysis should formulate a specific analysis plan prior to any analyses, and only use small volume corrections that were explicitly planned a priori. This sounds like a remedial lesson in basic statistics, but unfortunately it seems to be regularly forgotten by researchers in the field.
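The statistical temptation here is easy to see with Bonferroni-style arithmetic on made-up voxel counts (the numbers below are purely illustrative): shrinking the search volume from the whole brain to a small mask relaxes the per-voxel threshold a hundredfold, which is legitimate only if the mask was fixed before the data were seen.

```python
# Illustrative Bonferroni arithmetic with assumed voxel counts; the point
# in the text is procedural (fix the mask a priori), not mathematical.
alpha = 0.05
whole_brain_voxels = 50000   # hypothetical whole-brain search volume
mask_voxels = 500            # hypothetical a priori region mask

thresh_whole = alpha / whole_brain_voxels  # per-voxel threshold, whole brain
thresh_mask = alpha / mask_voxels          # per-voxel threshold, small volume
print(thresh_whole, thresh_mask)
```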

Third, the field needs to move towards the use of more robust methods for statistical inference [e.g., 11]. In particular, analyses of correlations between activation and behavior across subjects are highly susceptible to the influence of outlier subjects, especially with small sample sizes. Robust statistical methods can ensure that the results are not overly influenced by these outliers, either by reducing the effect of outlier datapoints (e.g., robust regression using iteratively reweighted least squares) or by separately modeling data points that fall too far outside of the rest of the sample (e.g., mixture modeling). Robust tools for fMRI group analysis are increasingly available, both as part of standard software packages (such as the “outlier detection” technique implemented in FSL: Woolrich [36]) and as add-on toolboxes [35]. Given the frequency with which outliers are observed in group fMRI data, these methods should become standard in the field. However, it’s also important to remember that they are not a panacea, and that it remains important to apply sufficient quality control to statistical results, in order to understand the degree to which one’s results reflect generalizable patterns versus statistical figments.
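A minimal sketch of one such method, iteratively reweighted least squares with Huber weights, makes the logic concrete. The data, tuning constant, and outlier below are all invented for illustration; real toolboxes [35, 36] are considerably more sophisticated.

```python
def fit_wls(x, y, w):
    """Weighted least squares for y = a + b*x."""
    sw = sum(w)
    mx = sum(wi * xi for wi, xi in zip(w, x)) / sw
    my = sum(wi * yi for wi, yi in zip(w, y)) / sw
    b = (sum(wi * (xi - mx) * (yi - my) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - mx) ** 2 for wi, xi in zip(w, x)))
    return my - b * mx, b

def irls_huber(x, y, k=1.345, n_iter=20):
    """IRLS with Huber weights: full weight inside k*scale, downweighted outside."""
    a, b = fit_wls(x, y, [1.0] * len(x))
    for _ in range(n_iter):
        resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
        # median absolute deviation as a robust scale estimate
        med = sorted(resid)[len(resid) // 2]
        mad = sorted(abs(r - med) for r in resid)[len(resid) // 2]
        scale = max(mad / 0.6745, 1e-8)
        w = [1.0 if abs(r / scale) <= k else k / abs(r / scale) for r in resid]
        a, b = fit_wls(x, y, w)
    return a, b

# Ten well-behaved "subjects" on the line y = 2x, plus one extreme outlier.
x = [float(i) for i in range(10)] + [9.5]
y = [2.0 * xi for xi in x[:10]] + [-30.0]

_, b_ols = fit_wls(x, y, [1.0] * len(x))   # ordinary least squares
_, b_rob = irls_huber(x, y)                # robust fit

print(round(b_ols, 2), round(b_rob, 2))
```

The single outlier drags the ordinary least squares slope far from the true value of 2, while the robust fit, by progressively downweighting the large residual, recovers it.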

From blobology to pattern analysis

The foregoing methodological issues have taken on particular importance due to the strong focus of the field on localization of function; the goal of finding blobs in a specific region can drive researchers into analytic gymnastics in order to find a significant blob to report. However, for the last few years the most interesting and novel research has focused on understanding patterns of activation rather than localized blobs. The appreciation of patterns is happening at multiple scales. At the systems (whole-brain) scale, the modeling of connectivity and its relation to behavior continues to grow. One impediment to the analysis of connectivity with fMRI is the lack of a standardized set of approaches for connectivity modeling; with so many different approaches being used across the field, it can be difficult to understand how the results relate to one another. The recent work by Smith et al. [30] examining the relative power of different modeling approaches using simulated data is a good start towards better understanding the relative strengths and weaknesses of these different methods, and I envision that these kinds of methodological “shoot-outs” will eventually help determine which methods are best suited to which questions. I think that the jury is still out on how well fMRI can ever characterize neuronal connectivity; as we outlined in Ramsey et al. [27], there are a number of fundamental challenges to using fMRI to characterize causal interactions between brain regions. Further work combining neurophysiology and fMRI [e.g., 9] will be essential to fully understand the potential and limitations of fMRI for modeling of underlying neuronal connectivity.

At a more local level, the use of analyses that focus on patterns of activation over a region rather than mean activation in that region has become very popular in the last few years [e.g., 14, 15, 19], and I expect that this will continue. It is now clear that information can be coded in fMRI data either through local changes in mean activation or through changes in local patterns of activity, and that analysis of only the mean activation (which has been the standard over the last 20 years) may be missing a significant part of the picture. For example, we recently compared results from closely-matched univariate and multivariate (pattern-information) analyses for a decision-making task, and found many regions in the cortex where the multivariate test was significantly more sensitive to task-relevant information than the univariate analysis, with a smaller set of regions showing the opposite pattern [13]. I think that one approach that has particularly strong appeal is representational similarity analysis [15], in which the patterns of activity in fMRI data are related to behaviorally-relevant variables such as stimulus features or task manipulations. For example, we [37] recently used this approach to show that memory is better for items whose patterns of activity are more similar across study episodes, which has implications for cognitive theories of memory. I think that this approach has great potential to provide much stronger links from neuroimaging data to cognitive theories (which often make detailed predictions about patterns across stimuli) and computational models, which will help provide a stronger basis for relating brain activity and mental function.
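The core of representational similarity analysis fits in a few lines. The sketch below uses synthetic "voxel patterns" for four stimuli drawn from two categories (all dimensions and noise levels are invented), and correlates the neural similarity structure with a simple categorical model of the stimuli; in practice one would use many more conditions and a richer model.

```python
import random

random.seed(2)

n_voxels = 50
categories = [0, 0, 1, 1]  # four stimuli, two categories

# each stimulus pattern = its category's template + small voxelwise noise
templates = {c: [random.gauss(0.0, 1.0) for _ in range(n_voxels)]
             for c in set(categories)}
patterns = [[t + random.gauss(0.0, 0.3) for t in templates[c]]
            for c in categories]

def corr(a, b):
    m = len(a)
    ma, mb = sum(a) / m, sum(b) / m
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov / (va * vb) ** 0.5

# vectorized upper triangles of the neural and model similarity matrices
pairs = [(i, j) for i in range(4) for j in range(i + 1, 4)]
neural = [corr(patterns[i], patterns[j]) for i, j in pairs]
model = [1.0 if categories[i] == categories[j] else 0.0 for i, j in pairs]

rsa_fit = corr(neural, model)  # high when neural similarity tracks the model
print(round(rsa_fit, 2))
```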

Selective inference

Neuroimaging as it is used by most cognitive neuroscientists faces a fundamental problem, which is evident from even a cursory reading of the literature: Within different subfields in cognitive neuroscience, the same structure can be assigned very different functions. The anterior cingulate may be the poster child for this heterogeneity, with proposed functions as varied as conflict monitoring, interoception, pain, autonomic regulation, effort, and consciousness. This becomes particularly problematic when researchers aware of only one of those literatures attempt to make reverse inferences based on activation in a region (e.g., “the anterior cingulate is active, thus the subject must be experiencing conflict”) [21].

If the goal of cognitive neuroscience is to map function onto structure, then this is a serious problem, and the standard approach to neuroimaging does not provide any way to solve it. How can we identify what the basic function is that a region or network subserves? As I argued recently [23], I think that the answer is to move towards methods that allow us to identify selective associations between functions and structures. This requires that we ask not simply whether we can find a region that is engaged by a particular mental process, but whether one can find a region that is engaged selectively, such that activation of the region is actually predictive of the mental process. The tools from machine learning that have recently been brought to bear on fMRI data [19, 20, 24] can provide important leverage on this issue, by providing a principled way to assess the predictive power of activation and thus allowing a quantitative assessment of the strength of any reverse inference [22].
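In its simplest form, such a predictive assessment is just cross-validated classification. The sketch below runs leave-one-out nearest-centroid decoding on synthetic activation patterns for two hypothetical mental states (all sizes and effect magnitudes are invented); above-chance held-out accuracy is what puts a number on the strength of a reverse inference.

```python
import random

random.seed(3)

n_per_class, n_features = 20, 10
# two hypothetical mental states whose mean patterns differ modestly
means = {0: 0.0, 1: 1.0}
data = [([random.gauss(means[c], 1.0) for _ in range(n_features)], c)
        for c in (0, 1) for _ in range(n_per_class)]

def centroid(rows):
    m = len(rows)
    return [sum(r[i] for r in rows) / m for i in range(n_features)]

def dist2(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b))

correct = 0
for held_out in range(len(data)):
    x_test, y_test = data[held_out]
    train = [data[i] for i in range(len(data)) if i != held_out]
    cents = {c: centroid([x for x, y in train if y == c]) for c in (0, 1)}
    pred = min(cents, key=lambda c: dist2(x_test, cents[c]))
    correct += (pred == y_test)

accuracy = correct / len(data)
print(accuracy)  # well above the 0.5 chance level for these synthetic data
```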

In addition to these tools, however, we need data sets that are broad enough to allow the assessment of any particular function against many other functions in order to assess selectivity and understand the larger-scale structure of mind-brain relationships. To date, the only databases large enough to support such meta-analysis are based on activation coordinates mined from papers, an approach pioneered by the BrainMap group [17]. We [38] have recently developed a coordinate-based database (http://www.neurosynth.org) that allows mining of full text in relation to brain activation. This allows the creation of maps reflecting the power of “forward inference” (i.e., inferring the presence of activation in each voxel given the presence of a particular term in the paper) and “reverse inference” (i.e., inferring the presence of a term in the paper given activation in a voxel). The striking insight to come from analyses of this database [38] is that some regions (e.g., anterior cingulate) can show high degrees of activation in forward inference maps, yet be of almost no use for reverse inference due to their very high base rates of activation across studies (e.g., some voxels in the anterior cingulate are active in more than 25% of papers). Tools like this, which combine text mining with neuroimaging data, have the potential to increase our understanding of selective associations between mental function and brain activity.
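The forward/reverse inference asymmetry is just Bayes' rule. With hypothetical but text-inspired numbers, a voxel can respond in 90% of "conflict" studies (strong forward inference) yet tell you little about conflict, because it is active in a quarter of all studies regardless of topic:

```python
# Hypothetical numbers; the 25% base rate echoes the anterior cingulate
# figure cited in the text, the rest are made up for illustration.
p_term = 0.05            # P(a study is about conflict)
p_act_given_term = 0.90  # forward inference: P(activation | conflict study)
p_act_base = 0.25        # base rate: voxel active in 25% of all studies

# reverse inference via Bayes' rule: P(conflict study | activation)
p_term_given_act = p_act_given_term * p_term / p_act_base
print(p_term_given_act)  # 0.18: weak, despite the strong forward inference
```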

Open science and data aggregation

The need to integrate information across many different task paradigms and psychological domains highlights the critical role of effective data sharing across research groups, as it would be exceedingly difficult for an individual research group to collect the large amounts of data needed for such analyses. While the coordinate-based meta-analytic approaches mentioned above will be useful for many questions, they will undoubtedly fall short in some cases, leading to a need for the analysis of raw fMRI data. When the fMRI Data Center (fMRIDC) was started in 2001 with the goal of sharing raw fMRI data [33], there was widespread skepticism and opposition regarding the utility of data sharing. Although there were certainly some missteps in the execution of the fMRIDC project, this reaction largely reflected the fact that the project was ahead of its time, and there was a lack of a social consensus in favor of data sharing. A number of developments suggest that this has changed, such that the field is now ready for large-scale data aggregation. First, there is an incipient “open science” revolution in scientific publishing, examples of which can be seen in the Frontiers and Public Library of Science (PLOS) journals. In parallel, there is growing interest across science in the value of open access to large datasets. The powerful insights that have arisen from data sharing in other fields (such as genomics, where every genome sequence is shared via GenBank) have provided additional proof of concept for the power of large shared databases. Together, these developments suggest the possibility of large-scale effective data sharing in the future. Groups such as the Science Commons (http://sciencecommons.org/) are developing new models for open access to publications, data, and scientific tools. These efforts have been heavily inspired by the open source software movement, which has demonstrated the effectiveness of “crowdsourcing” [10].

The recent development and publication of large-scale resting state fMRI datasets by the 1000 Functional Connectomes Project (FCP) and International Neuroimaging Data Initiative (INDI) has demonstrated the feasibility of sharing large fMRI datasets in a free and open manner [7]. However, sharing of task fMRI data remains much more challenging due to the need for additional metadata describing the task. We have recently begun to address this challenge via the OpenfMRI Project (http://www.openfmri.org). Although currently very small in comparison to other databases such as FCP/INDI, this project has developed a framework for the representation of task-based fMRI metadata that should make the sharing of these data significantly easier. As tools for data sharing are developed and integrated with tools for data analysis (i.e., a “submit” button within the software package that would automatically upload the data to a shared database), I hope that sharing of fMRI data will become increasingly common. I hope that as such databases grow, they will provide a unique resource for testing the power of reverse inference across a much larger set of tasks.

The utility of such large shared datasets also depends on sufficient computing resources to process hundreds or thousands of subjects’ worth of fMRI data. The development of high-performance computing clusters based on commodity hardware has begun to provide such resources; for example, using the clusters at the Texas Advanced Computing Center we regularly run jobs that utilize more than 4000 processors, allowing us to complete analyses in minutes that would have previously taken months to finish. As cognitive neuroscience research becomes increasingly reliant on analysis of very large datasets, the use of such high-performance computing systems will become critical, and effective use of these resources will require cognitive neuroscience researchers to interact even more closely with computer scientists and informaticians in order to take the best advantage of them.

Describing mental processes: Ontologies and computational models

When cognitive neuroscience researchers use fMRI to map mental processes onto brain structure, the processes that are being mapped are generally described in an informal way in the text of the publication. While this has long been viewed as adequate, it makes the aggregation of hypothesis-relevant data very difficult. In other areas of bioscience such as genomics, the use of ontologies (formal descriptions of terms and their relationships) to annotate data has led to incredible progress in mapping of biological functions onto genetic structure [3], and it is likely that ontologies will play an equally important role for cognitive neuroscience [6, 39]. If we are to make good use of the data that are shared across research groups, these data will need to be deeply annotated using an ontology of mental function, in order to then ask which proposed associations between mental processes and brain systems are supported across multiple task domains. Development of such ontologies is very challenging because psychologists do not agree about much of the underlying mental architecture; however, capturing both the agreements and disagreements is fundamental if we wish to ultimately use brain imaging to understand the organization of the mind. The Cognitive Atlas project (http://www.cognitiveatlas.org) [25] has begun to develop such an ontology, which will serve as a basis for annotation in the OpenfMRI.org data sharing project. Other ontologies are being developed to describe the structure of psychological tasks (CogPO: Turner and Laird [32]) and EEG data (NEMO: http://nemo.nic.uoregon.edu/wiki/NEMO). The implementation of these ontologies will provide the semantic infrastructure that is needed to maximize the power of shared data. 
In addition, thinking during the design process about how a specific task manipulation relates to an ontology forces clarity about what the manipulation is measuring, which could be generally useful in driving cleaner and more specific experimental designs.
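A fragment of what ontology-based annotation buys is easy to sketch: once tasks are annotated with the mental concepts they are assumed to engage, selectivity questions become simple queries across studies. The task and concept names below are illustrative placeholders, not actual entries from the Cognitive Atlas or CogPO.

```python
# Hypothetical task-to-concept annotations; names are illustrative only.
annotations = {
    "stroop": {"response inhibition", "conflict monitoring"},
    "stop-signal": {"response inhibition"},
    "n-back": {"working memory"},
}

def tasks_engaging(concept):
    """All annotated tasks assumed to engage a given mental concept."""
    return sorted(t for t, cs in annotations.items() if concept in cs)

print(tasks_engaging("response inhibition"))  # ['stop-signal', 'stroop']
```

With shared data annotated this way, one could ask whether a brain region proposed to support "response inhibition" is engaged across all tasks annotated with that concept, and only those tasks.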

Mathematical/computational models provide another language for describing the underlying processes being mapped by neuroimaging studies. In some areas of cognitive neuroscience (such as high-level vision and decision making), such models have already provided valuable insights into the functional organization of the brain. One difficult but important challenge will be to effect a clearer mapping between the functional organization described by ontologies and the computational organization implemented in mathematical models. Within cognitive science, a number of general computational frameworks have been developed for the description of cognitive processes [e.g., 1, 8, 28], and models based on these frameworks have been related to brain function [e.g., 18, 2, 29]; in the future, I expect that computational frameworks like these will play an increasingly important role in characterizing the mental operations being mapped using neuroimaging.
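One classic example of such a computational description, sketched here with arbitrary learning rate and rewards, is the Rescorla-Wagner update, whose prediction-error term is the quantity famously related to dopaminergic responses [29]:

```python
def rescorla_wagner(rewards, alpha=0.1):
    """Track a value estimate v; return final v and the prediction errors."""
    v, errors = 0.0, []
    for r in rewards:
        delta = r - v          # prediction error: obtained minus expected
        errors.append(delta)
        v += alpha * delta     # value moves a fraction alpha toward reward
    return v, errors

v, errors = rescorla_wagner([1.0] * 50)
# the prediction error shrinks toward zero as the reward becomes expected
print(round(errors[0], 3), round(errors[-1], 3), round(v, 3))
```

Models of this kind give the mapped mental operation a precise, testable definition (here, "error-driven value updating") rather than an informal verbal label.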

Conclusions

fMRI has advanced cognitive neuroscience research in a way that has been nothing short of revolutionary, though at the same time there are fundamental limits to the standard imaging approach that have not been widely appreciated. I am hopeful that twenty years from now, the history of fMRI in cognitive neuroscience will show that the field attacked this problem head on and developed new, robust methods for better understanding the relation between mental processes and brain function.

Highlights.

  • fMRI has revolutionized cognitive neuroscience.

  • There is a need for increased methodological rigor in fMRI research.

  • Large datasets and ontologies are important for increasing selectivity.

Acknowledgments

Thanks to Tyler Davis, Koji Jimura, Jeanette Mumford, Tom Schonberg, Corey White, and Tal Yarkoni for helpful comments on a draft of this paper. Preparation of this manuscript was supported by NIH R01MH082795.


References

  • 1. Anderson JR. The Architecture of Cognition. Cambridge, MA: Harvard University Press; 1983.
  • 2. Anderson JR, Bothell D, Byrne MD, Douglass S, Lebiere C, Qin Y. An integrated theory of the mind. Psychol Rev. 2004;111:1036–1060. doi: 10.1037/0033-295X.111.4.1036.
  • 3. Bard JBL, Rhee SY. Ontologies in biology: design, applications and future challenges. Nat Rev Genet. 2004;5:213–222. doi: 10.1038/nrg1295.
  • 4. Bennett CM, Baird AA, Miller MB, Wolford GL. Neural correlates of interspecies perspective taking in the post-mortem Atlantic salmon: an argument for proper multiple comparisons correction. Journal of Serendipitous and Unexpected Results. 2010;1:1–5.
  • 5. Bennett CM, Wolford GL, Miller MB. The principled control of false positives in neuroimaging. Soc Cogn Affect Neurosci. 2009;4:417–422. doi: 10.1093/scan/nsp053.
  • 6. Bilder RM, Sabb FW, Parker DS, Kalar D, Chu WW, Fox J, Freimer NB, Poldrack RA. Cognitive ontologies for neuropsychiatric phenomics research. Cogn Neuropsychiatry. 2009;14:419–450. doi: 10.1080/13546800902787180.
  • 7. Biswal BB, Mennes M, Zuo XN, Gohel S, Kelly C, Smith SM, Beckmann CF, Adelstein JS, Buckner RL, Colcombe S, Dogonowski AM, Ernst M, Fair D, Hampson M, Hoptman MJ, Hyde JS, Kiviniemi VJ, Kötter R, Li SJ, Lin CP, Lowe MJ, Mackay C, Madden DJ, Madsen KH, Margulies DS, Mayberg HS, McMahon K, Monk CS, Mostofsky SH, Nagel BJ, Pekar JJ, Peltier SJ, Petersen SE, Riedl V, Rombouts SARB, Rypma B, Schlaggar BL, Schmidt S, Seidler RD, Siegle GJ, Sorg C, Teng GJ, Veijola J, Villringer A, Walter M, Wang L, Weng XC, Whitfield-Gabrieli S, Williamson P, Windischberger C, Zang YF, Zhang HY, Castellanos FX, Milham MP. Toward discovery science of human brain function. Proc Natl Acad Sci U S A. 2010;107:4734–4739. doi: 10.1073/pnas.0911855107.
  • 8. Chater N, Tenenbaum JB, Yuille A. Probabilistic models of cognition: conceptual foundations. Trends Cogn Sci. 2006;10:287–291. doi: 10.1016/j.tics.2006.05.007.
  • 9. David O, Guillemain I, Saillet S, Reyt S, Deransart C, Segebarth C, Depaulis A. Identifying neural drivers with functional MRI: an electrophysiological validation. PLoS Biol. 2008;6:2683–2697. doi: 10.1371/journal.pbio.0060315.
  • 10. Doan A, Ramakrishnan R, Halevy AY. Crowdsourcing systems on the world-wide web. Commun ACM. 2011;54:86–96.
  • 11. Huber PJ. Robust Statistics. Hoboken, NJ: Wiley-Interscience; 2004.
  • 12. Ioannidis JPA. Why most published research findings are false. PLoS Med. 2005;2:e124. doi: 10.1371/journal.pmed.0020124.
  • 13. Jimura K, Poldrack RA. Do univariate and multivariate analyses tell the same story? 2011, submitted.
  • 14. Kriegeskorte N, Goebel R, Bandettini P. Information-based functional brain mapping. Proc Natl Acad Sci U S A. 2006;103:3863–3868. doi: 10.1073/pnas.0600244103.
  • 15. Kriegeskorte N, Mur M, Bandettini P. Representational similarity analysis - connecting the branches of systems neuroscience. Front Syst Neurosci. 2008;2:4. doi: 10.3389/neuro.06.004.2008.
  • 16. Kriegeskorte N, Simmons W, Bellgowan P, Baker C. Circular analysis in systems neuroscience - the dangers of double dipping. Nat Neurosci. 2009, in press. doi: 10.1038/nn.2303.
  • 17. Laird AR, Lancaster JL, Fox PT. BrainMap: the social evolution of a human brain mapping database. Neuroinformatics. 2005;3:65–78. doi: 10.1385/ni:3:1:065.
  • 18. Norman KA, O’Reilly RC. Modeling hippocampal and neocortical contributions to recognition memory: a complementary-learning-systems approach. Psychol Rev. 2003;110:611–646. doi: 10.1037/0033-295X.110.4.611.
  • 19. Norman KA, Polyn SM, Detre GJ, Haxby JV. Beyond mind-reading: multi-voxel pattern analysis of fMRI data. Trends Cogn Sci. 2006;10:424–430. doi: 10.1016/j.tics.2006.07.005.
  • 20. Pereira F, Mitchell T, Botvinick M. Machine learning classifiers and fMRI: a tutorial overview. Neuroimage. 2009;45:S199–S209. doi: 10.1016/j.neuroimage.2008.11.007.
  • 21. Poldrack RA. Can cognitive processes be inferred from neuroimaging data? Trends Cogn Sci. 2006;10:59–63. doi: 10.1016/j.tics.2005.12.004.
  • 22. Poldrack RA. The role of fMRI in cognitive neuroscience: where do we stand? Curr Opin Neurobiol. 2008;18:223–227. doi: 10.1016/j.conb.2008.07.006.
  • 23. Poldrack RA. Mapping mental function to brain structure: how can cognitive neuroimaging succeed? Perspect Psychol Sci. 2010;5:753–761. doi: 10.1177/1745691610388777.
  • 24. Poldrack RA, Halchenko YO, Hanson SJ. Decoding the large-scale structure of brain function by classifying mental states across individuals. Psychol Sci. 2009;20:1364–1372. doi: 10.1111/j.1467-9280.2009.02460.x.
  • 25. Poldrack RA, Kittur A, Kalar D, Miller E, Seppa C, Gil Y, Parker DS, Sabb FW, Bilder RM. The Cognitive Atlas: towards a knowledge foundation for cognitive neuroscience. 2011, submitted. doi: 10.3389/fninf.2011.00017.
  • 26. Poldrack RA, Mumford JA, Nichols TE. Handbook of fMRI Data Analysis. Cambridge University Press; 2011.
  • 27. Ramsey JD, Hanson SJ, Hanson C, Halchenko YO, Poldrack RA, Glymour C. Six problems for causal inference from fMRI. Neuroimage. 2010;49:1545–1558. doi: 10.1016/j.neuroimage.2009.08.065.
  • 28. Rumelhart DE, McClelland JL. Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Cambridge, MA: MIT Press; 1986.
  • 29. Schultz W, Dayan P, Montague PR. A neural substrate of prediction and reward. Science. 1997;275:1593–1599. doi: 10.1126/science.275.5306.1593.
  • 30. Smith SM, Miller KL, Salimi-Khorshidi G, Webster M, Beckmann CF, Nichols TE, Ramsey JD, Woolrich MW. Network modelling methods for fMRI. Neuroimage. 2011;54:875–891. doi: 10.1016/j.neuroimage.2010.08.063.
  • 31. Tom SM, Fox CR, Trepel C, Poldrack RA. The neural basis of loss aversion in decision-making under risk. Science. 2007;315:515–518. doi: 10.1126/science.1134239.
  • 32. Turner JA, Laird AR. The cognitive paradigm ontology: design and application. Neuroinformatics. 2011, in press. doi: 10.1007/s12021-011-9126-x.
  • 33. Van Horn JD, Grethe JS, Kostelec P, Woodward JB, Aslam JA, Rus D, Rockmore D, Gazzaniga MS. The functional magnetic resonance imaging data center (fMRIDC): the challenges and rewards of large-scale databasing of neuroimaging studies. Philos Trans R Soc Lond B Biol Sci. 2001;356:1323–1339. doi: 10.1098/rstb.2001.0916.
  • 34. Vul E, Harris C, Winkielman P, Pashler H. Puzzlingly high correlations in fMRI studies of emotion, personality, and social cognition. Perspect Psychol Sci. 2009;4:274–290. doi: 10.1111/j.1745-6924.2009.01125.x.
  • 35. Wager TD, Keller MC, Lacey SC, Jonides J. Increased sensitivity in neuroimaging analyses using robust regression. Neuroimage. 2005;26:99–113. doi: 10.1016/j.neuroimage.2005.01.011.
  • 36. Woolrich M. Robust group analysis using outlier inference. Neuroimage. 2008;41:286–301. doi: 10.1016/j.neuroimage.2008.02.042.
  • 37. Xue G, Dong Q, Chen C, Lu Z, Mumford JA, Poldrack RA. Greater neural pattern similarity across repetitions is associated with better memory. Science. 2010;330:97–101. doi: 10.1126/science.1193125.
  • 38. Yarkoni T, Poldrack RA, Nichols TE, Van Essen DC, Wager T. Large-scale automated synthesis of human functional neuroimaging data. Nat Methods. 2011, in press. doi: 10.1038/nmeth.1635.
  • 39. Yarkoni T, Poldrack RA, Van Essen DC, Wager TD. Cognitive neuroscience 2.0: building a cumulative science of human brain function. Trends Cogn Sci. 2010;14:489–496. doi: 10.1016/j.tics.2010.08.004.
