Abstract
Historically, single-case studies of individuals with brain damage have contributed substantially to our understanding of cognitive processes and their neural substrates. However, the role of single-case cognitive neuropsychology has diminished with the proliferation of techniques that measure neural activity in humans. Instead, large-scale informatics approaches, in which data are gathered from hundreds of neuroimaging studies, have become popular. Furthermore, it has been claimed that these informatics approaches can address problems found in single imaging studies. We first discuss reasons why cognitive neuropsychology is thought to be in decline. Next, we note how these informatics approaches, while having benefits, are not particularly suited for understanding functional architectures. We propose that the single-case cognitive neuropsychological approach, focused on developing models of cognitive processing, addresses several of the weaknesses inherent in informatics approaches. Finally, we discuss how neural data from individuals with brain damage can inform both cognitive and neural models of cognitive processing.
Keywords: cognitive neuropsychology, big data, cognition
The primary goal of cognitive neuropsychology is to use evidence from brain-damaged individuals to draw inferences about the organization of the normal cognitive system. In many ways, evidence from brain-damaged individuals allows us to make significant advances in our understanding of cognition. One clear benefit of cognitive neuropsychological research, especially single case studies, is the serendipitous nature of these explorations. The experimenter has no control over lesion location, and has to figure out how to characterize the individual’s cognitive impairment. Sometimes, their behavior fits well with existing cognitive models. But at other times, the location of the stroke results in patterns of behavior that cannot be explained based on current models. When presented with these unexpected behaviors, the experimenter is led down a path of experiments that can end in the development of new cognitive models. These cases can often result in major breakthroughs in our understanding of cognition. A number of “textbook” cases have had substantial impact on our understanding of cognition, including memory (Scoville & Milner, 1957), vision and action (Goodale & Milner, 1992), language (Broca, 1861; Wernicke, 1874), reading (Marshall & Newcombe, 1966), visuospatial attention (Bisiach & Luzzatti, 1978; Caramazza & Hillis, 1990a), spatial representations (McCloskey et al., 1995), and others (Caramazza, 1986; Coltheart, 2001; McCloskey & Chaisilprungraung, 2017).
Cognitive neuropsychology – A field in decline?
Although cognitive neuropsychology has provided ample evidence to develop our understanding of cognitive architectures, there is a sense that it is in decline as a method of scientific inquiry. Some journals that previously published single case studies now have explicit policies against them[1] or will only consider extremely rare cases[2]. Others have noted that cognitive neuropsychology is considered by some as a “relic of a past era” (Rorden & Karnath, 2004), and have debated whether the cognitive neuropsychology research paradigm is a “dodo” or a “phoenix” (Shallice, 2014). Along with Shallice, we do not believe that cognitive neuropsychology is a “dodo” that has outlived its usefulness as a method to understand cognition. Though there have been some critiques of the approach (e.g. Patterson & Plaut, 2009), this decline cannot simply be attributed to theoretical papers pointing out flaws in the method. Evidence from cognitive neuropsychological studies is still widely cited and regularly taught as support for cognitive theories, and the assumptions that underlie the approach are still valid (see Coltheart, this issue).
Although we argue that cognitive neuropsychology has not declined in its usefulness, there has been a clear decline in its usage since its peak (Chatterjee, 2005; Fellows et al., 2005). We contend that this decline is due to difficulties specific to patient work and to sociological factors. First, there are a number of barriers to cognitive neuropsychological research that are not found in other methods. Some of these barriers have to do with the special nature of the population being tested. Finding brain-damaged individuals who are willing to engage in substantial amounts of research is difficult even with access to a clinical population (Fellows, Stark, Berg, & Chatterjee, 2008). For those interested in using evidence from brain-damaged individuals to understand the neural correlates of functions (e.g. voxel-based lesion-symptom mapping), it is difficult to recruit a large enough sample for sufficiently powered studies (Kimberg, Coslett, & Schwartz, 2007). Other barriers concern the enormous effort typically required for collecting data within this approach. In single case cognitive neuropsychology, it is often necessary to generate and run a number of experiments, many designed specifically for testing a single individual, to isolate a deficit to a specific function. Administering all of these experiments frequently takes months or even years. Furthermore, if that single case has another stroke or health event before the study is complete, which is common when working with an older population with pre-existing cardiovascular problems, a significant outlay of effort and resources can be for naught. Even if the participant is able to complete every experiment, the substantial outlay of effort typically results in only one publication. Given pressures to publish and a competitive grant climate, the incentives are skewed towards maximizing the number of publications. With this in mind, many will opt for methods that involve less outlay and risk.
Second, there are strong incentives for using the newest technique, as there are clear rewards for being the first to report a specific finding with a new technique (i.e. increased likelihood of getting a grant, publication in higher impact journals). Given that nearly all methods used in the cognitive neurosciences (PET, TMS, tDCS, various types of functional and structural neuroimaging methods and techniques) are more recent than cognitive neuropsychology, the field suffers from being perceived as a “relic”. Third, given that the major technological advances in cognitive neuroscience have been in neuroimaging, there has been a large shift in the field towards focusing on neural activity rather than developing cognitive models. For example, Shallice (2009) identified two strands in cognitive neuroscientific research: a biomedical strand, with its historical origins in neurophysiology and a focus on neural substrates, and a more cognitive strand, focused on theories of information processing. Early functional neuroimaging work found that certain cognitive processes were associated with activity in specific brain regions – a sensible initial endeavor for studying structure-function relationships. These initial studies were highly cited and published in high impact journals. Given that the first studies to show that process X was associated with brain region Y were rewarded, this created skewed incentives to grab the remaining low-hanging fruit – finding the neural correlates of other processes, tasks, constructs, and even pop songs (“Gangnam Style”; Chen et al., 2017), Harry Potter (Hsu, Jacobs, Altmann, & Conrad, 2015), and political attitudes (Kaplan, Freedman, & Iacoboni, 2007). In turn, this created a framework in which studies that used neural data to adjudicate between cognitive theories were less incentivized.
All of these factors have contributed to a decline in the focus and resources devoted to cognitive neuropsychological (and also cognitive psychological) research. These disincentives disproportionately affect younger researchers. Senior cognitive neuropsychologists have already established a name for themselves, and are protected (at some level) by their reputation and tenure. However, the ability of younger researchers to obtain a post-doctoral position, a faculty position, and tenure is primarily dictated by publication count and manuscripts in high-impact journals. Given this, young researchers will be less likely to use cognitive neuropsychological methods, leading to fewer publications and a field that is in decline.
Big data – More peril, less promise?
Paired with these disincentives is a growing feeling in cognitive science that we need to rely more on “big data” from hundreds of individuals, which would entail a move away from the single case approach common in cognitive neuropsychology (see Coltheart, 2001 for a discussion of replicability in cognitive neuropsychology). Researchers are rightly concerned about how the small sample sizes common in behavioral and neuroimaging research increase the likelihood of false positive results and decrease the likelihood of replication (Button et al., 2013; Ioannidis, 2005; Szucs & Ioannidis, 2017; Yarkoni, 2009). One option for addressing this concern would be to increase the sample size in each individual study. An alternative, which has recently been favored, is to synthesize results via meta-analysis. Given that neuroimaging has become the dominant tool in the cognitive sciences, there has been an explosion in the number of published fMRI papers (with one estimate at 40,000; Eklund, Nichols, & Knutsson, 2016), and terabytes of data generated by these papers. The ability to share neuroimaging data, along with automated methods to extract data from the literature and increases in computing power, has led many to suggest that “big data” approaches will be transformative for understanding cognition. Specifically, it has been argued that these informatics-driven approaches address a myriad of weaknesses inherent in single neuroimaging studies – issues of statistical power, reverse inference, and experimental design – leading to a better understanding of structure-function mapping. Once these structure-function mappings are well understood, neural data could then be used to develop cognitive theories (Henson, 2005; Mather, Cacioppo, & Kanwisher, 2013).
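The statistical concern about small samples can be made concrete with a toy simulation. This is only a sketch under assumed values (a true standardized effect of d = 0.3, a two-sided α = .05 test using a normal approximation to the t cutoff, and arbitrary sample sizes): small studies rarely detect the effect, and those that do reach significance systematically overestimate it – the “winner’s curse” that meta-analytic aggregation is meant to counteract.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_D = 0.3      # assumed true standardized effect size (hypothetical)
N_SIMS = 2000     # number of simulated studies per sample size

def simulate(n):
    """Simulate N_SIMS one-sample studies of size n; return power and the
    mean observed effect size among the studies that reached significance."""
    data = rng.normal(TRUE_D, 1.0, size=(N_SIMS, n))
    d_obs = data.mean(axis=1) / data.std(axis=1, ddof=1)  # observed d per study
    t = d_obs * np.sqrt(n)
    sig = np.abs(t) > 1.96        # normal approximation to the t threshold
    return sig.mean(), d_obs[sig].mean()

power_small, d_small = simulate(15)    # a small sample, typical of early fMRI work
power_large, d_large = simulate(100)
# Small studies detect the true effect far less often, and the significant
# ones report an inflated effect size relative to TRUE_D.
```

With these assumed values, the small-sample studies have power well below one half, while their significant results roughly double the true effect size; the large-sample estimates stay close to the truth.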
However, as we will argue below, the promises of these big data approaches may be overstated, and scientific advances with these techniques will require the kind of careful inferential work cognitive neuropsychologists use when developing cognitive theory from the pattern of performance of a single case study.
Consider, for example, the Neurosynth system (www.neurosynth.org) developed by Yarkoni and colleagues (2011), which has been used to generate hundreds of fMRI meta-analyses. Yarkoni and colleagues (2011) used a text mining approach to extract reported significant activations and important terms from over 11,000 studies. The Neurosynth system allows one to examine what brain regions are consistently or preferentially active for a specific keyword, or to examine what keywords are most associated with activation of a specific voxel. In addition to addressing issues of statistical power, large databases such as this could be used to deal with the issue of reverse inference, using thousands of studies to see if a specific brain region is associated with only one, or several cognitive processes. At a coarse level, this approach provides interesting information regarding the relationship between brain activity and general cognitive concepts such as “language”, “working memory”, etc.
While there are clear statistical advantages to combining results across studies, this type of meta-analytic thinking with general cognitive concepts might blur our understanding of structure-function mapping. The ability to map terms like “language” or “working memory” to specific cognitive processes that reside in specific brain regions depends on a) the relationship between the frequency of a specific term in a paper and whether a process related to that term was specifically manipulated in the imaging contrast, and b) whether that term has a one-to-one relationship with a specific cognitive process. There is no doubt that there is an issue with structure-function mapping for these very broad terms. The question of where “language” resides in the brain is massively underspecified; “language” is not a single cognitive process, and there are clear dissociations between language production and comprehension, and between written and spoken language. While it is clear that broad concepts like “language” fail to have a one-to-one relationship with specific cognitive processes, it might appear as though more specific terms, such as “orthography”, do not face the same issue. However, cognitive neuropsychological investigations of individuals with “orthography” impairments in spelling have shown that even the concept of “orthography” involves a series of different, dissociable sub-processes that likely have their own neural instantiations, as they can be separably damaged by stroke (Rapp & Fischer-Baum, 2015). Alternative approaches to meta-analyses of neuroimaging data attempt to address these concerns. For example, BrainMap (Laird, Lancaster, & Fox, 2005) allows the user to manually curate the studies included in the meta-analyses. Even in this tool, however, the information entered focuses more on task manipulations, and less on the cognitive processes involved in the task (see Poldrack & Yarkoni, 2016 for a discussion).
The usefulness of these meta-analytic approaches to synthesizing neuroimaging studies is limited by the extent to which we are convinced that the manipulations in the studies being combined map onto the same cognitive process.
This limitation has led Poldrack and others to propose the development of a formal “cognitive ontology” for characterizing mental processes (e.g. Poldrack, 2010; Price & Friston, 2005). Inspired by the ontologies biologists use to formally specify the relationships between gene products and gene functions, the idea is that one can formally specify the relationship between mental concepts and mental tasks. For example, one can define a putative cognitive process based on whether it is a part of some other process or a kind of process, and relate specific tasks (with their own experimental conditions and contrasts) to this process. Assuming a fully formed cognitive ontology[3], in which every task has a process mapped to it, one could label the results of every neuroimaging study according to the processes it engages. Utilizing the entire neuroimaging literature in this way has been claimed to have multiple benefits for understanding brain-behavior relationships and cognition (see Poldrack & Yarkoni, 2016). One example is isolating cognitive functions. In cognitive psychological and neuroimaging studies, one will often design two experimental conditions that are matched for all but one process. These studies make the assumption of pure insertion: that adding one process does not have any additional influence on other cognitive processes. There are many reasons to think that this assumption is not valid in most cases (Friston et al., 1996). For example, in addition to adding a single process, a task might change the way attention is deployed or alter a participant’s motivation. One of the promises of “big data” approaches to neuroimaging is that they can address these limitations of experimental design. For example, Poldrack and Yarkoni (2016)[4] claim that by simply considering more studies, one can deal with this issue.
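To make the ontology idea concrete, consider a toy fragment in the spirit of the Cognitive Atlas. All process and task labels below are illustrative placeholders, not entries from any actual ontology; the point is only that “kind of”/“part of” relations plus task annotations let one query which studies putatively engage a process:

```python
# A hypothetical fragment of a cognitive ontology: processes linked by
# "kind of" / "part of" relations, and tasks annotated with the processes
# their contrasts are assumed to isolate. Labels are illustrative only.
ontology = {
    "working memory":        {"kind_of": "memory"},
    "phonological loop":     {"part_of": "working memory"},
    "visuospatial sketchpad": {"part_of": "working memory"},
}
task_annotations = {
    "digit span":  ["phonological loop"],
    "Corsi block": ["visuospatial sketchpad"],
}

def studies_engaging(process):
    """Return tasks annotated with the process or any of its subparts."""
    parts = {p for p, rel in ontology.items() if rel.get("part_of") == process}
    targets = parts | {process}
    return [task for task, procs in task_annotations.items()
            if targets & set(procs)]
```

A query for “working memory” retrieves both tasks via the part-of links, while “phonological loop” retrieves only the digit span task – exactly the kind of labeling a fully formed ontology would apply to the whole literature.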
In general, there is a concern that claims regarding “big data” approaches for the cognitive sciences may overpromise and underdeliver. It is not clear how one would make the jump from simply having more data from more neuroimaging studies to solving the problem of pure insertion. Although we are not aware of an explicit argument for how big data would solve this problem, we assume the logic is as follows. If one has 500 neuroimaging studies involving “process” X, these studies may all vary in other extraneous factors that could violate the assumption of pure insertion at the single-study level – i.e. one study may involve additional attentional load, another may somehow involve other higher-order processing, etc. The assumption is that extraneous factors in any single study are just noise that will average out to zero once multiple studies are considered, cleanly isolating the neural correlates of a specific process. However, it is problematic to assume that the extraneous factors in one study of a cognitive process are really independent of those in a different study of the same process. For example, if the majority of the paradigms used to examine process X also involve some aspect of process Y, the problem of pure insertion will not be solved – regardless of the number of selected studies. More generally, there is the issue of how the process is defined in the cognitive ontology. If the components of the cognitive ontology do not map onto actual cognitive processes, simply increasing the amount of data will not reveal anything substantial about brain-behavior relationships or cognitive processes. To put it another way, it is not clear how meta-analyses of 500 studies of “language”, “working memory”, or even “orthography” will provide substantial advances compared to 25 studies on those same topics. Theory-driven meta-analyses could still be informative.
However, if the input is atheoretical and not driven by cognitive processes, the output will be atheoretical and lack specificity.
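The averaging argument can itself be sketched in a toy simulation, with all quantities hypothetical: if a confounding process Y co-occurs with process X in most paradigms, the meta-analytic estimate converges on a biased value, and adding studies only makes the biased estimate more precise.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: each study's observed "activation for process X"
# equals the true X effect, plus a confound Y effect (e.g., attentional
# load) present in most paradigms, plus study-level noise.
TRUE_X, CONFOUND_Y = 1.0, 0.5
P_CONFOUNDED = 0.8     # fraction of paradigms that also engage process Y

def meta_estimate(k):
    """Average observed activation across k simulated studies."""
    has_y = rng.random(k) < P_CONFOUNDED
    observed = TRUE_X + CONFOUND_Y * has_y + rng.normal(0.0, 0.2, k)
    return observed.mean()

est_25 = meta_estimate(25)
est_500 = meta_estimate(500)
# Both estimates converge to TRUE_X + P_CONFOUNDED * CONFOUND_Y = 1.4,
# not to TRUE_X = 1.0: correlated confounds do not average out.
```

Only the independent noise shrinks with more studies; the shared contribution of Y survives at any scale, which is the sense in which scaling up cannot substitute for paradigms that actually dissociate X from Y.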
A shift towards “big data” approaches in cognitive neuroscience may pose an existential threat to the inherently “small data” approaches of single case cognitive neuropsychology. These “big data” methods are innovative and novel, and have the potential to address statistical issues in the field. However, the inferential problems that these “big data” approaches are currently struggling with – specifically, how can we isolate the neural substrates of specific cognitive processes – are not novel. They have been the core questions in cognitive neuroscience since its inception. We very much agree with the need for formal specifications of cognitive processes to advance our understanding of the mind and brain. We are just not convinced that “big data” approaches that pull together disparate studies using varying methods, stimuli, etc. will be particularly fruitful as they stand. Instead, we believe that “big data” approaches can only succeed if they are paired with “big theory”. That is, relating brain activity to cognitive constructs in a general way will only allow us to carve nature at the limbs, not the joints, of mental function. It is more likely that a focus on developing testable, well-specified cognitive models will lead to a more efficient utilization of resources and better overall output.
Cognitive neuropsychology and neuroscience – Complementary or in competition?
There are a number of ways that cognitive psychology and neuropsychology can be integrated with cognitive neuroscience, ranging from an increased focus on standard cognitive psychological studies that develop cognitive models to “model-based” cognitive neuroscience (e.g. Mack, Preston, & Love, 2013; Palmeri, Love, & Turner, 2017). Importantly, cognitive neuropsychological studies can and should continue to play a role in developing these cognitive models. One of the major contributions of Alfonso Caramazza’s research on cognitive neuropsychology was not just using evidence from brain-damaged individuals to study the mind. Many of his works formalized the assumptions made in this method (Badecker & Caramazza, 1985; Caramazza, 1984, 1986; Caramazza & McCloskey, 1988; McCloskey & Caramazza, 1988), with important debates on what we can (and cannot) learn from cognitive neuropsychology (e.g. Caramazza, 1992 versus Kosslyn & Intriligator, 1992). One of the major benefits of this approach was the development of well-specified models of cognitive processes using evidence from brain-damaged individuals (e.g. Caramazza, 1997; Caramazza & Hillis, 1990b; Caramazza & Miceli, 1990; Mahon & Caramazza, 2009; McCloskey, Caramazza, & Basili, 1985; Rapp & Caramazza, 1997; Rapp & Goldrick, 2000). “Big data” approaches hold the promise that typically accompanies new technological advances in any scientific field – the potential for new ideas and even paradigm shifts. However, to truly assess their potential, it is important to formalize exactly what can (and cannot) be found with these new analytical methods.
Finally, the older cognitive neuropsychological approach can be integrated with newer techniques in cognitive neuroscience to understand cognition. This may seem odd to some, given the perceived disconnect between neural data and cognitive theories (e.g. Coltheart, 2006). However, we note that Coltheart was not arguing about whether neuroimaging can inform cognitive theories, but about whether it had informed cognitive theories. Furthermore, cognitive neuropsychologists have not argued “against the relevance of neuroanatomical or other neurally based observations in constraining cognitive theory…To the contrary…advances in cognitive science and neuroscience will be mutually constraining in the development of a mature cognitive neuroscience” (Caramazza, 1992). For example, changes in BOLD signal in a given region are dependent variables that can be used to infer cognitive function, just as reaction time and accuracy are used in cognitive neuropsychological studies (Henson, 2005). Statistical methods for analyzing fMRI data in single, neurologically-intact individuals, whether MVPA (Norman, Polyn, Detre, & Haxby, 2006) or the functional localization approach (Fedorenko & Kanwisher, 2009), can be used to examine functional changes in single subjects, including brain-damaged individuals.
For example, neural data can constrain interpretations of behavior. Dilks and colleagues (2007) examined an individual (BL) with an upper left visual field cut who reported objects near this field cut as severely elongated. Neural data established that this field cut was caused not by damage to primary visual cortex (V1), but by damage to the right optic radiations that projected to a fully-intact V1. Functional neuroimaging revealed that stimuli presented in the lower left visual field resulted in activation across the entire right V1, whereas controls only showed activation in upper right V1 for the same stimuli. The extension of activation in V1 was consistent with BL’s reports of vertically stretched objects. Along with providing evidence for plasticity in adult V1, the neural data provided a clear mechanism for the observed behavior. Neural data from patients can also provide converging evidence for cognitive theories. Models of word production hold that information passes through lexical-semantic, and then lexical-phonological, processing, though researchers have debated the time course of when these different processing levels are engaged (see Strijkers & Costa, 2016 for a recent review). Laganaro and colleagues (2011) reported a single individual who took part in an event-related potential (ERP) study on single-word spoken picture naming both before and after a stroke (see also Laganaro, Morand, & Schnider, 2009 for a similar approach with a case series). This individual was anomic after the stroke, demonstrating a lexical-phonological deficit. Consistent with the behavioral deficit, they also found a large change in ERP components 250–450 msec after picture presentation, a time frame consistent with lexical-phonological processing as had been previously proposed (Indefrey & Levelt, 2004). This study provides evidence using neural data that supports specific models of the time course of lexical access.
(For other examples of single-case cognitive neuropsychology and its methods, see Bridge et al., 2013; Mullally, Hassabis, & Maguire, 2012; Price, Crinion, & Friston, 2006; Snow, Goodale, & Culham, 2015; Wolmetz, Poeppel, & Rapp, 2011).
In summary, we argue that a detailed understanding of cognitive processes is necessary to understand the mind and brain (see Krakauer, Ghazanfar, Gomez-Marin, MacIver, & Poeppel, 2017 for a similar argument). Big data approaches may provide some information regarding general relationships between brain and behavior, but they will likely not advance our understanding of cognition without being rooted in theory. As for cognitive neuropsychology, its value can be assessed by its output – that is, the proof of the pudding is in the eating (Caramazza, 1992). Although cognitive neuropsychology may not be considered “cutting edge”, it has provided, and will continue to provide, important contributions towards understanding cognition. Future cognitive neuropsychological work, including strictly behavioral studies and studies that integrate neural data, will continue towards this goal.
Footnotes
1. Brain: “preliminary reports of work in progress or single case studies are not considered”.
2. Nature Neuroscience uses the metaphor of finding a talking pig when discussing how important a case study must be in order to be accepted.
3. Is a cognitive ontology synonymous with a cognitive model? Not at the moment. In these cognitive ontologies, the relationships between processes (Price & Friston, 2007) or concepts (Poldrack, 2010; Poldrack & Yarkoni, 2016; see also www.cognitiveatlas.org) are specified by ontological relationships (e.g., “is a”, “is part of”). These cognitive ontologies are therefore less precise than typical cognitive models, which characterize not only the types of processes involved in a cognitive task, but also the relationships between these processes (e.g. the output from process X feeds into process Y, X necessarily precedes Y during the course of processing, etc.). It is possible to have ontological relationships that capture these types of relationships – though they likely would not capture details typically found in well-specified cognitive models (feedback, interactivity, etc.). Furthermore, cognitive models typically contain some statements about the nature of the cognitive representations in each of these processes and/or the specific nature of the computations involved. These critical details are missing from cognitive ontologies.
4. “…the uncertainty surrounding which cognitive process deserves credit for the effect of a particular experimental task on brain activity is attributable to the impracticality of using dozens of different tasks in every study in order to isolate a specific process by converging operations....these limitations can be ameliorated by scaling up one’s investigation to simultaneously consider the results of many different studies.”
References
- Badecker W, & Caramazza A (1985). On considerations of method and theory governing the use of clinical categories in neurolinguistics and cognitive neuropsychology: The case against agrammatism. Cognition, 20(2), 97–125.
- Bisiach E, & Luzzatti C (1978). Unilateral neglect of representational space. Cortex, 14(1), 129–133.
- Bridge H, Thomas OM, Minini L, Cavina-Pratesi C, Milner AD, & Parker AJ (2013). Structural and functional changes across the visual cortex of a patient with visual form agnosia. Journal of Neuroscience, 33(31), 12779–12791.
- Broca P (1861). Remarques sur le siège de la faculté du langage articulé, suivies d’une observation d’aphémie (perte de la parole). Bulletin de la Société Anatomique de Paris, 36, 330–356.
- Button KS, Ioannidis JP, Mokrysz C, Nosek BA, Flint J, Robinson ES, & Munafo MR (2013). Power failure: why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14(5), 365–376. doi: 10.1038/nrn3475
- Caramazza A (1984). The logic of neuropsychological research and the problem of patient classification in aphasia. Brain and Language, 21(1), 9–20.
- Caramazza A (1986). On drawing inferences about the structure of normal cognitive systems from the analysis of patterns of impaired performance: The case for single-patient studies. Brain and Cognition, 5(1), 41–66.
- Caramazza A (1992). Is cognitive neuropsychology possible? Journal of Cognitive Neuroscience, 4(1), 80–95.
- Caramazza A (1997). How many levels of processing are there in lexical access? Cognitive Neuropsychology, 14(1), 177–208.
- Caramazza A, & Hillis AE (1990a). Spatial representation of words in the brain implied by studies of a unilateral neglect patient. Nature, 346(6281), 267–269.
- Caramazza A, & Hillis AE (1990b). Where do semantic errors come from? Cortex, 26(1), 95–122.
- Caramazza A, & McCloskey M (1988). The case for single-patient studies. Cognitive Neuropsychology, 5(5), 517–527.
- Caramazza A, & Miceli G (1990). The structure of graphemic representations. Cognition, 37(3), 243–297.
- Chatterjee A (2005). A madness to the methods in cognitive neuroscience? Journal of Cognitive Neuroscience, 17(6), 847–849.
- Chen Q, Zhang Y, Hou H, Du F, Wu S, Chen L, … Zhang H (2017). Neural correlates of the popular music phenomenon: evidence from functional MRI and PET imaging. European Journal of Nuclear Medicine and Molecular Imaging, 1–9.
- Coltheart M (2001). Assumptions and methods in cognitive neuropsychology. In The handbook of cognitive neuropsychology: What deficits reveal about the human mind (pp. 3–21).
- Coltheart M (2006). What has functional neuroimaging told us about the mind (so far)? (Position paper presented to the European Cognitive Neuropsychology Workshop, Bressanone, 2005). Cortex, 42(3), 323–331.
- Coltheart M (this issue). The assumptions upon which cognitive neuropsychology depends. Cognitive Neuropsychology.
- Dilks DD, Serences JT, Rosenau BJ, Yantis S, & McCloskey M (2007). Human adult cortical reorganization and consequent visual distortion. Journal of Neuroscience, 27(36), 9585–9594.
- Eklund A, Nichols TE, & Knutsson H (2016). Cluster failure: why fMRI inferences for spatial extent have inflated false-positive rates. Proceedings of the National Academy of Sciences, 201602413.
- Fedorenko E, & Kanwisher N (2009). Neuroimaging of language: why hasn’t a clearer picture emerged? Language and Linguistics Compass, 3(4), 839–865.
- Fellows LK, Heberlein AS, Morales DA, Shivde G, Waller S, & Wu DH (2005). Method matters: an empirical study of impact in cognitive neuroscience. Journal of Cognitive Neuroscience, 17(6), 850–858.
- Fellows LK, Stark M, Berg A, & Chatterjee A (2008). Patient registries in cognitive neuroscience research: advantages, challenges, and practical advice. Journal of Cognitive Neuroscience, 20(6), 1107–1113.
- Friston KJ, Price C, Fletcher P, Moore C, Frackowiak R, & Dolan R (1996). The trouble with cognitive subtraction. NeuroImage, 4(2), 97–104.
- Goodale MA, & Milner AD (1992). Separate visual pathways for perception and action. Trends in Neurosciences, 15(1), 20–25.
- Henson R (2005). What can functional neuroimaging tell the experimental psychologist? The Quarterly Journal of Experimental Psychology Section A, 58(2), 193–233.
- Hsu C-T, Jacobs AM, Altmann U, & Conrad M (2015). The magical activation of left amygdala when reading Harry Potter: an fMRI study on how descriptions of supra-natural events entertain and enchant. PLOS ONE, 10(2), e0118179.
- Indefrey P, & Levelt WJ (2004). The spatial and temporal signatures of word production components. Cognition, 92(1), 101–144.
- Ioannidis JP (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124.
- Kaplan JT, Freedman J, & Iacoboni M (2007). Us versus them: political attitudes and party affiliation influence neural response to faces of presidential candidates. Neuropsychologia, 45(1), 55–64.
- Kimberg DY, Coslett HB, & Schwartz MF (2007). Power in voxel-based lesion-symptom mapping. Journal of Cognitive Neuroscience, 19(7), 1067–1080.
- Kosslyn SM, & Intriligator JM (1992). Is cognitive neuropsychology plausible? The perils of sitting on a one-legged stool. Journal of Cognitive Neuroscience, 4(1), 96–105.
- Krakauer JW, Ghazanfar AA, Gomez-Marin A, MacIver MA, & Poeppel D (2017). Neuroscience needs behavior: correcting a reductionist bias. Neuron, 93(3), 480–490.
- Laganaro M, Morand S, Michel CM, Spinelli L, & Schnider A (2011). ERP correlates of word production before and after stroke in an aphasic patient. Journal of Cognitive Neuroscience, 23(2), 374–381.
- Laganaro M, Morand S, & Schnider A (2009). Time course of evoked-potential changes in different forms of anomia in aphasia. Journal of Cognitive Neuroscience, 21(8), 1499–1510.
- Laird AR, Lancaster JJ, & Fox PT (2005). BrainMap. Neuroinformatics, 3(1), 65–77.
- Mack ML, Preston AR, & Love BC (2013). Decoding the brain’s algorithm for categorization from its neural implementation. Current Biology, 23(20), 2023–2027.
- Mahon BZ, & Caramazza A (2009). Concepts and categories: A cognitive neuropsychological perspective. Annual review of psychology, 60, 27–51. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Marshall JC, & Newcombe F (1966). Syntactic and semantic errors in paralexia. Neuropsychologia, 4(2), 169–176. [Google Scholar]
- Mather M, Cacioppo JT, & Kanwisher N (2013). How fMRI can inform cognitive theories. Perspectives on Psychological Science, 8(1), 108–113. [DOI] [PMC free article] [PubMed] [Google Scholar]
- McCloskey M, & Caramazza A (1988). Theory and methodology in cognitive neuropsychology: A response to our critics. Cognitive Neuropsychology, 5(5), 583–623. [Google Scholar]
- McCloskey M, Caramazza A, & Basili A (1985). Cognitive mechanisms in number processing and calculation: Evidence from dyscalculia. Brain and Cognition, 4(2), 171–196. [DOI] [PubMed] [Google Scholar]
- McCloskey M, & Chaisilprungraung T (2017). The Value of Cognitive Neuropsychology: Examples from Vision Research. Cognitive Neuropsychology. [DOI] [PubMed] [Google Scholar]
- McCloskey M, Rapp B, Yantis S, Rubin G, Bacon WF, Dagnelie G, … Palmer E (1995). A Developmental Deficit in Localizing Objects from Vision. Psychological Science, 6(2), 112–117. [Google Scholar]
- Mullally SL, Hassabis D, & Maguire EA (2012). Scene construction in amnesia: An fMRI study. Journal of Neuroscience, 32(16), 5646–5653. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Norman KA, Polyn SM, Detre GJ, & Haxby JV (2006). Beyond mind-reading: multi-voxel pattern analysis of fMRI data. Trends in Cognitive Sciences, 10(9), 424–430. [DOI] [PubMed] [Google Scholar]
- Palmeri TJ, Love BC, & Turner BM (2017). Model-based cognitive neuroscience. Journal of Mathematical Psychology, 76, Part B, 59–64. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Patterson K, & Plaut DC (2009). “Shallow draughts intoxicate the brain”: Lessons from cognitive science for cognitive neuropsychology. Topics in Cognitive Science, 1(1), 39–58. [DOI] [PubMed] [Google Scholar]
- Poldrack RA (2010). Mapping mental function to brain structure: how can cognitive neuroimaging succeed? Perspectives on Psychological Science, 5(6), 753–761. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Poldrack RA, & Yarkoni T (2016). From brain maps to cognitive ontologies: informatics and the search for mental structure. Annual review of psychology, 67, 587–612. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Price CJ, Crinion J, & Friston KJ (2006). Design and analysis of fMRI studies with neurologically impaired patients. Journal of Magnetic Resonance Imaging, 23(6), 816–826. [DOI] [PubMed] [Google Scholar]
- Price CJ, & Friston KJ (2005). Functional ontologies for cognition: The systematic definition of structure and function. Cognitive Neuropsychology, 22(3–4), 262–275. [DOI] [PubMed] [Google Scholar]
- Rapp B, & Caramazza A (1997). From graphemes to abstract letter shapes: levels of representation in written spelling. Journal of Experimental Psychology: Human Perception and Performance, 23(4), 1130. [DOI] [PubMed] [Google Scholar]
- Rapp B, & Fischer-Baum S (2015). Uncovering the Cognitive Architecture of Spelling In Hillis AE (Ed.), The Handbook of Adult Language Disorders (pp. 59–86). New York, NY: Psychology Press. [Google Scholar]
- Rapp B, & Goldrick M (2000). Discreteness and interactivity in spoken word production. Psychological Review, 107(3), 460. [DOI] [PubMed] [Google Scholar]
- Rorden C, & Karnath H-O (2004). Using human brain lesions to infer function: a relic from a past era in the fMRI age? Nature Reviews Neuroscience, 5(10), 812–819. [DOI] [PubMed] [Google Scholar]
- Scoville WB, & Milner B (1957). Loss of recent memory after bilateral hippocampal lesions. Journal of neurology, neurosurgery, and psychiatry, 20(1), 11. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Shallice T (2009). The Declining Influence of Cognitive Theorising: Are the causes intellectual or socio-political? Psychologica Belgica, 49(2–3). [Google Scholar]
- Shallice T (2014). The cognitive neuropsychology research paradigm: Dodo or phoenix? Paper presented at the 32nd European Workshop on Cognitive Neuropsychology, Bressanone, Italy. [Google Scholar]
- Snow JC, Goodale MA, & Culham JC (2015). Preserved haptic shape processing after bilateral LOC lesions. Journal of Neuroscience, 35(40), 13745–13760. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Strijkers K, & Costa A (2016). The cortical dynamics of speaking: Present shortcomings and future avenues. Language, Cognition and Neuroscience, 31(4), 484–503. [Google Scholar]
- Szucs D, & Ioannidis JP (2017). Empirical assessment of published effect sizes and power in the recent cognitive neuroscience and psychology literature. PLOS Biology, 15(3), e2000797. [DOI] [PMC free article] [PubMed] [Google Scholar]
- Wernicke C (1874). Der aphasische Symptomencomplex: Eine psychologische Studie auf anatomischer Basis. Breslau: Cohn & Weigert.
- Wolmetz M, Poeppel D, & Rapp B (2011). What does the right hemisphere know about phoneme categories? Journal of Cognitive Neuroscience, 23(3), 552–569.
- Yarkoni T (2009). Big correlations in little studies: Inflated fMRI correlations reflect low statistical power. Commentary on Vul et al. (2009). Perspectives on Psychological Science, 4(3), 294–298.
- Yarkoni T, Poldrack RA, Nichols TE, Van Essen DC, & Wager TD (2011). Large-scale automated synthesis of human functional neuroimaging data. Nature Methods, 8(8), 665–670.