Current Directions in Psychological Science. 2020 Jul 15;29(5):431–437. doi: 10.1177/0963721420925518

Beyond the Core-Deficit Hypothesis in Developmental Disorders

Duncan E. Astle and Sue Fletcher-Watson

Abstract

Developmental disorders and childhood learning difficulties encompass complex constellations of relative strengths and weaknesses across multiple aspects of learning, cognition, and behavior. Historically, debate in developmental psychology has been focused largely on the existence and nature of core deficits—the shared mechanistic origin from which all observed profiles within a diagnostic category emerge. The pitfalls of this theoretical approach have been articulated multiple times, but reductionist, core-deficit accounts remain remarkably prevalent. They persist because developmental science still follows the methodological template that accompanies core-deficit theories—highly selective samples, case-control designs, and voxel-wise neuroimaging methods. Fully moving beyond “core-deficit” thinking will require more than identifying its theoretical flaws. It will require a wholesale rethink about the way we design, collect, and analyze developmental data.

Keywords: cognitive development, developmental psychology, developmental science, developmental disorders


Neurodevelopmental diversity results in some children receiving a clinical diagnosis of a developmental disorder and others experiencing learning difficulties in the absence of a diagnosis. As a result, 14% to 30% of children and adolescents worldwide experience barriers to learning (Department for Education, 2018; National Center for Education Statistics, 2020) that vary widely in scope and impact. Some children are formally diagnosed via specialist education services and placed in categories including dyslexia, dyscalculia, or developmental language disorder. Other children, such as those with attention-deficit/hyperactivity disorder (ADHD), dyspraxia, or autism spectrum disorder (hereafter, autism), are normally diagnosed in clinical services. However, many children who struggle will never receive a formal label for their learning difficulty, despite meeting the criteria for multiple different diagnoses (e.g., Bathelt, Holmes, et al., 2018; Holmes, Bryant, the CALM Team, & Gathercole, 2019; Siugzdaite, Bathelt, Holmes, & Astle, 2020; see also Embracing Complexity Coalition, n.d.).

This classification structure has also been used within research to integrate and guide empirical work (Regier, Kuhl, & Kupfer, 2013). As a result, the literature is organized around a patchwork of different theories that provide putative explanations for different recognized disorders. In these theories, a complex array of observable characteristics is frequently attributed to a single defining neurocognitive deficit. As understanding of a particular set of diagnostic features evolves, most such theories are gradually pruned from the field because of insufficient evidence, counterevidence, or the emergence of better-specified theories. These theoretical accounts fail, but that is their purpose: theories exist to be tested and, where necessary, discarded. Choose any clinical categorization of developmental differences, and there is a graveyard of once-popular theoretical accounts, from the magnocellular theory of dyslexia (Stein, 2001) to the mirror-neuron theory of autism (Williams, Whiten, Suddendorf, & Perrett, 2001).

Over and above specific problems with any single theory, this general class of theory is problematic because of its reliance on the very notion of a “core deficit”—something that has been repeatedly debunked both within specific diagnostic categories (Happé, Ronald, & Plomin, 2006) and across multiple categories (e.g., Pennington, 2006). But in practice, developmental psychologists and developmental-cognitive neuroscientists frequently return to core-deficit theories, either implicitly or explicitly, if not to motivate studies, then to interpret them. Here, we provide a working definition of a core-deficit account, describe the inherent weaknesses in its accompanying methods and methodology, and argue that building the empirical foundations for more complex (and accurate) theoretical frameworks will require a wholesale rethink in the way we design, collect, and analyze developmental data.

Identifying a Core-Deficit Hypothesis

A core-deficit hypothesis pins a multiplicity of cognitive, behavioral, and neurobiological phenomena onto a single mechanistic impairment and is assumed to have the power to explain all observed profiles within a particular diagnostic category. To provide one example, in autism, the dominant core-deficit hypothesis since the mid-1980s has been the theory-of-mind model. This model proposes that autistic people uniquely lack the ability to detect, interpret, or understand the mental states of others. Versions of this model vary in the range of behaviors and mental states they attempt to encompass. At its broadest, the theory-of-mind deficit can draw in differences in play, language development, and all types of processing of emotions, including basic emotions, as well as desires, beliefs, and higher-order complex mental states such as suspicion. On a narrower scale, theory-of-mind impairments in autism are understood to apply only to the automatic and easy processing of complex mental states. Until recently, evidence for this latter position has looked fairly robust, but now even this pillar of autism theory is under threat, as innovative research has revealed that the apparent “deficits” in mental-state understanding exhibited by autistic people may apply only to understanding the mental states of nonautistic people (e.g., Crompton et al., 2020). At the same time, there is a growing body of evidence showing that nonautistic people show impairments in detecting and interpreting the mental states of autistic people (e.g., Edey et al., 2016). This reframes the characteristic “deficit” of autism as a typical manifestation of failed communication across different sociocultural groups (Milton, Heasman, & Sheppard, 2018). More importantly for our argument here, the model has never been adequately able to explain the sleep disturbances, sensory sensitivities, intense and consuming interests, or executive difficulties that are equally prevalent among autistic people.

The Origins of Core-Deficit Theories and Their Persistent Appeal

Across multiple categories, and despite well-articulated perspective articles highlighting their limitations (Happé et al., 2006; Pennington, 2006), core-deficit accounts remain a recurring theme in developmental-disorders research (e.g., Kucian & von Aster, 2015; Mayes, Reilly, & Morgan, 2015). Part of their intuitive appeal is their relative simplicity. Publishing in higher-tier journals is easier when the story is simple, and doing so can quickly turn into a citation gold mine if the field invests in challenging your theory. If a series of findings can be woven together, this provides an ideal way of combining decades of research and cementing its contribution to the field. However, such tapestries are riddled with loose threads that, if pulled, quickly reveal the flaws in their construction.

Examples of these construction flaws are found in the design choices, participant selection, sample sizes, statistical methods, and restrictive range of measures that are hallmarks of core-deficit thinking. These flaws are still prevalent in the wider developmental literature because, despite being mindful of the problems inherent in core-deficit theories, we have yet to change the methodological template inherited with them.

Highly selective samples

Studies of developmental disorders typically use strict exclusion criteria (e.g., Toplak, Jain, & Tannock, 2005; Willcutt et al., 2001), including the removal of children with co-occurring difficulties. But in reality, co-occurring difficulties are the norm rather than the exception (Gillberg, 2010). For example, it is rare to have selective reading or math impairments; the majority of children with one difficulty will also have the other (e.g., Landerl & Moll, 2010). The same is true within clinical samples: 44% of children who receive an ADHD diagnosis would also meet the diagnostic criteria for a learning difficulty, and a similar percentage who meet the latter criteria would also meet the criteria for an ADHD diagnosis (Pastor & Reuben, 2008).

Because most children with the difficulties of interest are excluded, the literature overstates the “purity” of developmental differences. In turn, basing models on highly selective samples biases theory toward simpler core-deficit accounts. Where studies do include children with different diagnoses or with co-occurring difficulties, the purpose is usually to identify what is unique to each diagnosis rather than to establish which dimensions are important for understanding individual outcomes, irrespective of the diagnostic category applied.

Study designs do not capture heterogeneity

Most studies use a case-control design, with children grouped according to strict inclusion criteria (see above) and then matched to one or more control groups. Significant differences in group-level statistics are then taken as evidence for a specific deficit in the “case” group. Variability in performance within groups is rarely studied, in part because few studies have sufficient power but also because of reliance on univariate statistics. This issue becomes increasingly problematic as diagnostic criteria broaden. Regarding heterogeneity as noise to be minimized removes a crucial signal that could lead to a more complex and accurate theoretical conclusion. Although we endorse the application of Occam’s razor to the interpretation of findings, we must be wary of parsimony achieved via flawed means.
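To make this concrete, the short simulation below is a hypothetical sketch in Python (with made-up numbers, not an analysis from this article): a “case” group made up of two hidden subgroups with opposite profiles can still produce a significant group-level “deficit,” even though the group mean describes few individual children well.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

controls = rng.normal(loc=0.0, scale=1.0, size=100)      # comparison group
subgroup_a = rng.normal(loc=-1.5, scale=1.0, size=50)    # cases well below controls
subgroup_b = rng.normal(loc=0.5, scale=1.0, size=50)     # cases slightly above controls
cases = np.concatenate([subgroup_a, subgroup_b])         # one diagnostic "case" group

t, p = stats.ttest_ind(cases, controls)
print(f"group comparison: t = {t:.2f}, p = {p:.4f}")     # a significant mean "deficit"
print(f"case SD = {cases.std():.2f}, control SD = {controls.std():.2f}")
# The univariate comparison looks like evidence for a core deficit, yet the case
# group is a mixture of two qualitatively different profiles; treating the extra
# within-group variance as noise to be minimized discards exactly that signal.
```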

Circular logic of measurement selection

Theoretical conclusions about underpinning mechanisms are constrained by our choice of measures. Tasks are often included because they are regarded as the gold standard for distinguishing a particular group from controls, even though the conceptual underpinnings of the task are poorly understood. The logic can become circular: “We always include a theory-of-mind task when studying autistic people, and autism is characterized by theory-of-mind task performance.” The task has become an implicit requirement within the field without any real mechanistic understanding of why different children find it hard. Moreover, the dominance of a specific core-deficit theory with an associated gold-standard measure eliminates consideration of other possibilities: In the case of autism, why not consider an executive-planning or sensory-profiling measure? If the same domains of measurement are selected in every study, to the exclusion of alternatives, then the supposed centrality of this profile will inevitably be reinforced. But little has been explained, and alternative possible profiles are not documented.

Neuroimaging methods inherited from the adult literature

Finding a shared neural substrate that purportedly causes the difference is typically taken as strong evidence for a core-deficit theory. But this is largely an artifact of the analytical approach we take to neuroimaging. Most canonical neuroimaging methods assume a direct correspondence between spatially overlapping brain differences (structural or functional) and surface-level cognitive profiles. A voxel-wise approach to analysis will always produce peaks. Despite its dominance, this approach yields remarkably inconsistent results. For example, ADHD has been associated with differences in gray matter within the anterior cingulate cortex (Amico, Stauber, Koutsouleris, & Frodl, 2011), caudate nucleus (Onnink et al., 2014), pallidum (Frodl & Skokauskas, 2012), striatum (Greven et al., 2015), cerebellum (Mackie et al., 2007), prefrontal cortex (Dirlikov et al., 2015), premotor cortex (Mahone et al., 2011), and most parts of the parietal lobe (Shaw et al., 2006).

Our contention is that the assumption of spatial correspondence is not valid for understanding brain–cognition or brain–behavior links in childhood, especially in children with developmental disorders (see also Johnson, 2011). Difficulties that emerge over developmental time can be arrived at via multiple different underlying neural routes—a phenomenon called equifinality (Bishop, 1997). There may also be multifinality: The same apparent underlying neural effects can have different consequences for behavior and cognition across children (Siugzdaite et al., 2020). The canonical voxel-wise neuroimaging methods that dominate developmental cognitive neuroscience instead create the false impression that the neural underpinnings of developmental differences are akin to those resulting from acquired focal brain damage, and in turn this implicitly leads us back to “core deficits” in our theoretical interpretation.

Moving Beyond the Core-Deficit Hypothesis

There is no shortcut to a comprehensive empirical basis for more complex theory. Nonetheless, in this section, we make some nonprescriptive suggestions as a position from which to move forward.

Well-powered studies with inclusive samples

If our samples are more reflective of the children we are seeking to understand, then our theories, though necessarily more complex, will likely have greater practical value. It is necessary to include participants with different or multiple diagnoses. Emerging first in adult psychiatry (Cuthbert & Insel, 2013; Morris & Cuthbert, 2012), transdiagnostic approaches focus on identifying underlying symptom dimensions that likely span multiple supposed categories (e.g., Astle, Bathelt, The CALM Team, & Holmes, 2019; Bathelt, Holmes, et al., 2018; Bryant, Guy, The CALM Team, & Holmes, 2020; Hawkins, Gathercole, Astle, The CALM Team, & Holmes, 2016; Holmes et al., 2019; Siugzdaite et al., 2020). Within developmental science there are good examples of researchers cutting across hitherto unquestioned diagnostic boundaries in order to identify cognitive symptoms that underpin learning, but they remain relatively rare (e.g., Astle et al., 2019; Casey, Oliveri, & Insel, 2014; Hulme & Snowling, 2009; Peng & Fuchs, 2016; Sonuga-Barke & Coghill, 2014; Zhao & Castellanos, 2016). A review of a transdiagnostic approach is well beyond the scope of the current article, but suffice it to say, contemporary developmental science needs larger and more diverse samples.

Embracing methods that capture complexity

There is now a growing array of analysis methods allowing researchers to move beyond univariate group comparisons or pairwise associations between variables. A well-developed tool is structural equation modeling (SEM; e.g., Kline, 2015). SEM combines the strengths of path-modeling and latent-variable approaches to allow researchers, for instance, to specify how latent factors may explain the continuous variability in a set of observed measures (e.g., task performance) and the potential causes and consequences of such factors. SEM offers the tools to identify continuous dimensions that cut across diagnostic boundaries or to directly compare competing causal accounts. More advanced variants of SEM, such as latent-growth-curve models and growth-mixture models, can address more sophisticated questions about how changes in one latent construct relate to changes in another (e.g., Hjetland et al., 2019; Kievit, Hofman, & Nation, 2019). Other variants—for example, hierarchical MIMIC (multiple-indicators, multiple-causes) models—can be used to identify multiple routes to particular outcomes (equifinality), such as the role that multiple different brain structures might play in developmental changes to a particular aspect of cognition (e.g., Kievit et al., 2016; Ritchie et al., 2015).
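As a minimal illustration of this latent-variable logic, the sketch below uses the Python package semopy (an assumption about tooling; the measure names and simulated data are hypothetical rather than drawn from any cited study) to specify two latent dimensions and a structural path between them.

```python
import numpy as np
import pandas as pd
from semopy import Model  # lavaan-style SEM in Python (assumed tooling)

rng = np.random.default_rng(1)
n = 300
memory = rng.normal(size=n)                              # latent memory dimension
literacy = 0.6 * memory + rng.normal(scale=0.8, size=n)  # latent literacy dimension

def indicator(latent, noise=0.5):
    """Observed task score = latent factor plus measurement noise."""
    return latent + rng.normal(scale=noise, size=n)

data = pd.DataFrame({
    "digit_span": indicator(memory), "spatial_span": indicator(memory),
    "backward_span": indicator(memory),
    "reading": indicator(literacy), "spelling": indicator(literacy),
    "phon_awareness": indicator(literacy),
})

desc = """
memory   =~ digit_span + spatial_span + backward_span
literacy =~ reading + spelling + phon_awareness
literacy ~ memory
"""
model = Model(desc)
model.fit(data)
print(model.inspect())  # factor loadings and the structural path from memory to literacy
```

Growth-curve and MIMIC variants extend this same specification language to repeated measurements and to many-to-one brain-behavior mappings.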

Whereas SEM methods are ideal for testing complex theories or pitting theories against one another, other methods handle complexity in a different way. A more recent development from data science that aims to capture complex interrelationships in an exploratory way is network analysis (e.g., Bathelt, Holmes, et al., 2018; Epskamp, Borsboom, & Fried, 2018; Mareva, the CALM team, & Holmes, 2019). The resulting network opens the door to a new toolbox of analytical techniques, such as graph theory. Rather than inferring the presence of singular latent factors, these approaches capture the various ways in which individual measures can be related over developmental time. For example, does phonological processing act like a hub for verbal short-term memory and literacy development, or is there a dynamic relationship between these constructs over time? Network analysis can also capture heterogeneity within a sample and provide a relatively theory-free means of exploring the underlying structure and composition of a data set. It is possible, for instance, to identify subgroups of individuals within which the task interrelationships differ, and a strength of the approach is that it incorporates metrics for identifying these clusters (e.g., Bathelt, Holmes, et al., 2018).
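A sketch of the network approach follows; the simulated scores, the networkx library, and the arbitrary correlation threshold are all illustrative assumptions, not the pipeline used in the cited studies.

```python
import numpy as np
import pandas as pd
import networkx as nx

rng = np.random.default_rng(2)
n = 400
phon = rng.normal(size=n)                                # phonological processing
stm = 0.5 * phon + rng.normal(scale=0.8, size=n)         # verbal short-term memory
literacy = 0.5 * phon + 0.3 * stm + rng.normal(scale=0.8, size=n)
maths = 0.3 * stm + rng.normal(scale=0.9, size=n)
data = pd.DataFrame({"phonology": phon, "verbal_stm": stm,
                     "literacy": literacy, "maths": maths})

# Build a graph in which nodes are measures and edges are strong correlations.
corr = data.corr().abs()
G = nx.Graph()
for i, a in enumerate(corr.columns):
    for b in corr.columns[i + 1:]:
        if corr.loc[a, b] > 0.2:     # crude threshold; regularized estimators are preferable
            G.add_edge(a, b, weight=float(corr.loc[a, b]))

print(nx.degree_centrality(G))       # does phonology behave like a hub?
print(nx.betweenness_centrality(G))
```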

Unsupervised machine-learning techniques are also capable of capturing the multidimensional space in which children may differ (e.g., Siugzdaite et al., 2020). But relative to SEM-based approaches and network analysis, machine-learning applications remain underdeveloped. Although popular in other areas of science with similar challenges, these methods have yet to gain much traction within the study of developmental differences (but see Astle et al., 2019). These algorithms are highly flexible, and the resulting models can easily accommodate nonlinear relationships, make predictions about unseen data, be combined with simulations, incorporate different data types, and open the way to tools for testing generalization, such as cross-validation. For the data shown in Figure 1, a simple artificial neural network was used to map different profiles across multiple measures of short-term memory, working memory, fluid reasoning, vocabulary, and phonological awareness in a large group of struggling learners. Once the individual nodes of the network were trained to represent the different profiles within the data set, the locations of children with different diagnoses could be identified. If common diagnoses are predictive of the cognitive profiles learned by the network, then those children’s performance profiles should group together—but they did not. This highlights the potential utility of this approach to mapping the heterogeneity of a data set and exploring its composition. This is something largely untapped within the field to date.

Fig. 1. Distributions of children within a simple artificial neural network trained on data from 530 children taken from the Centre for Attention, Learning and Memory (CALM) sample (Holmes, Bryant, the CALM Team, & Gathercole, 2019). Each node represents a profile learned by the algorithm, with spatially nearby nodes having more similar profiles. Therefore, the maps represent the multidimensional spaces that reflect the performance differences across the children. The left-most panel shows the best-matching unit for all children, and the subsequent panels show those for children with different diagnoses. The training data set included measures of fluid reasoning, vocabulary, verbal and spatial short-term and working memory, and phonological awareness (see also Astle, Bathelt, The CALM Team, & Holmes, 2019). ADHD = attention-deficit/hyperactivity disorder; ASD = autism spectrum disorder.
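The kind of mapping shown in Figure 1 can be sketched with a self-organizing map, which is consistent with the best-matching-unit terminology in the caption; the MiniSom package, the grid size, and the random data below are illustrative assumptions, not the authors’ implementation.

```python
import numpy as np
from minisom import MiniSom  # assumed tooling for a self-organizing map

rng = np.random.default_rng(3)
n_children, n_measures = 530, 6                     # e.g., STM, WM, reasoning, vocabulary, phonology
scores = rng.normal(size=(n_children, n_measures))  # standardized cognitive scores (simulated)

som = MiniSom(8, 8, n_measures, sigma=1.5, learning_rate=0.5, random_seed=0)
som.train_random(scores, 5000)                      # each node learns a prototypical profile

# Find each child's best-matching unit, then ask whether children who share a
# diagnosis land on the same nodes (in the article's data, they did not).
bmus = np.array([som.winner(x) for x in scores])
diagnosis = rng.choice(["ADHD", "ASD", "none"], size=n_children)  # placeholder labels
for label in ("ADHD", "ASD"):
    n_nodes = np.unique(bmus[diagnosis == label], axis=0).shape[0]
    print(label, "children spread across", n_nodes, "map nodes")
```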

Beyond voxel-wise measures of brain structure and function

Just as methods that capture complexity in cognitive and behavioral data are needed, the same is true for neuroimaging. In theory, the methods outlined above can be used alongside neuroimaging, although in practice there are very few examples. SEM techniques could allow researchers to identify many-to-one mappings (e.g., Kievit et al., 2016; Kievit et al., 2014), allowing the possibility that a particular set of symptoms could be associated with multiple different neurobiological effects. And network science has given rise to the subfield of brain “connectomics,” enabling researchers to demonstrate that apparently disparate neural effects could have very similar consequences for brain organization, providing a meaningful endophenotype to bridge complex causal interrelations and shared surface-level profiles (e.g., Bathelt, Gathercole, Butterfield, the CALM Team, & Astle, 2018; Bathelt, Scerif, Nobre, & Astle, 2019).
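As one hypothetical sketch of this connectomic logic (the simulated matrix, thresholding choice, and the networkx global-efficiency metric below are illustrative, not the pipeline reported in the cited papers), a whole-brain connectivity matrix can be summarized by an organization-level metric on which children with quite different regional profiles may nonetheless converge.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(4)
n_regions = 60
weights = rng.random((n_regions, n_regions))
connectome = (weights + weights.T) / 2     # symmetric structural connectivity matrix (simulated)
np.fill_diagonal(connectome, 0)

adjacency = (connectome > 0.8).astype(int)  # crude threshold to binarize the network
G = nx.from_numpy_array(adjacency)
print("global efficiency:", nx.global_efficiency(G))
# Two children could differ in which specific connections are weak yet share a
# similar value on this organization-level summary—one way a common
# endophenotype can sit beneath disparate local neural effects.
```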

Conclusions

Core-deficit models long held promise as optimistic researchers aimed to interpret behavioral complexity via cognitive or neurological simplicity. However, as more and more of these attempts fall by the wayside, many researchers have come to question the validity of the principle of the core deficit. At the same time, increased pooling and sharing of data, as well as better diagnostic ascertainment, have improved our capacity to gather substantial samples for well-powered complex analyses. New technologies provide opportunities for creative, data-driven analysis of such samples, which can provide us with an empirical basis for the development of new theories.

We must embrace complexity within and between diagnostic boundaries in such theoretical models. In doing so, we will unlock our potential to understand the cross-cutting issues that frequently have the biggest impact on people’s lives rather than dwell on the narrow selection of domains that seem to be unique to a specific population. Importantly, a move away from the concept of “core” should also entail a movement away from the concept of “deficit”—there is no objective reason why a condition should be defined by its disadvantageous elements instead of its advantageous elements. Although the former may be needful of intervention, the latter may be essential to delivering that intervention. Examples of successful application of this principle come from formal evaluations—for example, technological supports for autistic children (Grynszpan, Weiss, Perez-Diaz, & Gal, 2014; Kasari et al., 2014)—but are also evident in practitioner-focused guides—for example, image-based rather than text-based learning materials for children with dyslexia (Mortimore & Dupree, 2008). In this way, the current direction of research in neurodevelopmental diversity is at a potential tipping point. The issues we outline above, and the recent developments we highlight, could have a beneficial impact not only on research innovation and knowledge generation but also on policy, practice, and societal understanding.

Recommended Reading

Bathelt, J., Holmes, J., & Astle, D. E., on behalf of the Centre for Attention Learning and Memory (CALM) Team. (2018). (See References). A data-driven approach to exploring profiles of executive-functional difficulty and then testing whether they overlap with diagnostic status.

Bishop, D. V. (1997). (See References). A classic article that sowed the seeds for many of the questions we raise about developmental theory—required reading for anyone interested in developmental disorders.

Kievit, R. A., Hofman, A. D., & Nation, K. (2019). (See References). Excellent use of latent-change modeling to explore the relationship between different constructs over developmental time.

Mareva, S., the CALM Team, & Holmes, J. (2019). (See References). An excellent example of using a network analysis with a transdiagnostic sample to explore the interplay among different cognitive, communication, and behavioral characteristics.

Siugzdaite, R., Bathelt, J., Holmes, J., & Astle, D. E. (2020). (See References). Uses an artificial neural network to map between cognition and brain structure.

Footnotes

ORCID iD: Duncan E. Astle: https://orcid.org/0000-0002-7042-5392

Transparency

Action Editor: Randall W. Engle

Editor: Randall W. Engle

Declaration of Conflicting Interests: The author(s) declared that there were no conflicts of interest with respect to the authorship or the publication of this article.

Funding: D. E. Astle’s contribution to this work was supported by funding from the UK Medical Research Council (MC-A0606-5PQ40). S. Fletcher-Watson’s contribution was supported by funding to the Salvesen Mindroom Research Centre from the A E H Salvesen Charitable Trust.

References

Amico F., Stauber J., Koutsouleris N., Frodl T. (2011). Anterior cingulate cortex gray matter abnormalities in adults with attention deficit hyperactivity disorder: A voxel-based morphometry study. Psychiatry Research: Neuroimaging, 191, 31–35. doi: 10.1016/j.pscychresns.2010.08.011
Astle D. E., Bathelt J., The CALM Team, & Holmes J. (2019). Remapping the cognitive and neural profiles of children who struggle at school. Developmental Science, 22(1), Article e12747. doi: 10.1111/desc.12747
Bathelt J., Gathercole S. E., Butterfield S., the CALM Team, & Astle D. E. (2018). Children’s academic attainment is linked to the global organization of the white matter connectome. Developmental Science, 21(5), Article e12662. doi: 10.1111/desc.12662
Bathelt J., Holmes J., Astle D. E., on behalf of the Centre for Attention Learning and Memory (CALM) Team. (2018). Data-driven subtyping of executive function–related behavioral problems in children. Journal of the American Academy of Child & Adolescent Psychiatry, 57, 252–262. doi: 10.1016/j.jaac.2018.01.014
Bathelt J., Scerif G., Nobre A. C., Astle D. E. (2019). Whole-brain white matter organization, intelligence, and educational attainment. Trends in Neuroscience & Education, 15, 38–47.
Bishop D. V. (1997). Cognitive neuropsychology and developmental disorders: Uncomfortable bedfellows. The Quarterly Journal of Experimental Psychology A, 50, 899–923.
Bryant A., Guy J., The CALM Team, & Holmes J. (2020). The Strengths and Difficulties Questionnaire predicts mental health difficulties in a transdiagnostic sample of struggling learners. PsyArXiv. doi: 10.31234/osf.io/wpj8y
Casey B. J., Oliveri M. E., Insel T. (2014). A neurodevelopmental perspective on the research domain criteria (RDoC) framework. Biological Psychiatry, 76, 350–353. doi: 10.1016/j.biopsych.2014.01.006
Crompton C. J., Ropar D., Evans-Williams C. V. M., Flynn E. G., Fletcher-Watson S. (2020). Autistic peer to peer information transfer is highly effective. OSF Preprints. doi: 10.31219/osf.io/j4knx
Cuthbert B. N., Insel T. R. (2013). Toward the future of psychiatric diagnosis: The seven pillars of RDoC. BMC Medicine, 11, Article 126. doi: 10.1186/1741-7015-11-126
Department for Education. (2018). Special educational needs in England: January 2018. Retrieved from https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/729208/SEN_2018_Text.pdf
Dirlikov B., Shiels Rosch K., Crocetti D., Denckla M. B., Mahone E. M., Mostofsky S. H. (2015). Distinct frontal lobe morphology in girls and boys with ADHD. NeuroImage: Clinical, 7, 222–229. doi: 10.1016/j.nicl.2014.12.010
Edey R., Cook J., Brewer R., Johnson M. H., Bird G., Press C. (2016). Interaction takes two: Typical adults exhibit mind-blindness towards those with autism spectrum disorder. Journal of Abnormal Psychology, 125, 879–885. doi: 10.1037/abn0000199
Embracing Complexity Coalition. (n.d.). Embracing complexity. Retrieved from http://embracingcomplexity.org.uk/
Epskamp S., Borsboom D., Fried E. I. (2018). Estimating psychological networks and their accuracy: A tutorial paper. Behavior Research Methods, 50, 195–212. doi: 10.3758/s13428-017-0862-1
Frodl T., Skokauskas N. (2012). Meta-analysis of structural MRI studies in children and adults with attention deficit hyperactivity disorder indicates treatment effects. Acta Psychiatrica Scandinavica, 125, 114–126. doi: 10.1111/j.1600-0447.2011.01786.x
Gillberg C. (2010). The ESSENCE in child psychiatry: Early symptomatic syndromes eliciting neurodevelopmental clinical examinations. Research in Developmental Disabilities, 31, 1543–1551. doi: 10.1016/j.ridd.2010.06.002
Greven C. U., Bralten J., Mennes M., O’Dwyer L., van Hulzen K. J. E., Rommelse N., . . . Buitelaar J. K. (2015). Developmentally stable whole-brain volume reductions and developmentally sensitive caudate and putamen volume alterations in those with attention-deficit/hyperactivity disorder and their unaffected siblings. JAMA Psychiatry, 72, 490–499. doi: 10.1001/jamapsychiatry.2014.3162
Grynszpan O., Weiss P. L., Perez-Diaz F., Gal E. (2014). Innovative technology-based interventions for autism spectrum disorders: A meta-analysis. Autism, 18, 346–361.
Happé F., Ronald A., Plomin R. (2006). Time to give up on a single explanation for autism. Nature Neuroscience, 9, 1218–1220.
Hawkins E., Gathercole S., Astle D., The CALM Team, & Holmes J. (2016). Language problems and ADHD symptoms: How specific are the links? Brain Sciences, 6(4), Article 50. doi: 10.3390/brainsci6040050
Hjetland H. N., Lervåg A., Lyster S. A. H., Hagtvet B. E., Hulme C., Melby-Lervåg M. (2019). Pathways to reading comprehension: A longitudinal study from 4 to 9 years of age. Journal of Educational Psychology, 111, 751–763.
Holmes J., Bryant A., the CALM Team, & Gathercole S. E. (2019). Protocol for a transdiagnostic study of children with problems of attention, learning and memory (CALM). BMC Pediatrics, 19, Article 10. doi: 10.1186/s12887-018-1385-3
Hulme C., Snowling M. J. (2009). Developmental disorders of language learning and cognition. Chichester, England: Wiley-Blackwell.
Johnson M. H. (2011). Interactive specialization: A domain-general framework for human functional brain development? Developmental Cognitive Neuroscience, 1(1), 7–21. doi: 10.1016/j.dcn.2010.07.003
Kasari C., Kaiser A., Goods K., Nietfeld J., Mathy P., Landa R., . . . Almirall D. (2014). Communication interventions for minimally verbal children with autism: A sequential multiple assignment randomized trial. Journal of the American Academy of Child & Adolescent Psychiatry, 53, 635–646.
Kievit R. A., Davis S. W., Griffiths J., Correia M. M., Cam-CAN, & Henson R. N. (2016). A watershed model of individual differences in fluid intelligence. Neuropsychologia, 91, 186–198. doi: 10.1016/j.neuropsychologia.2016.08.008
Kievit R. A., Davis S. W., Mitchell D. J., Taylor J. R., Duncan J., Henson R. N. A. (2014). Distinct aspects of frontal lobe structure mediate age-related differences in fluid intelligence and multitasking. Nature Communications, 5, Article 5658. doi: 10.1038/ncomms6658
Kievit R. A., Hofman A. D., Nation K. (2019). Mutualistic coupling between vocabulary and reasoning in young children: A replication and extension of the study by Kievit et al. (2017). Psychological Science, 30, 1245–1252.
Kline R. B. (2015). Principles and practice of structural equation modeling (4th ed.). New York, NY: Guilford Press.
Kucian K., von Aster M. (2015). Developmental dyscalculia. European Journal of Pediatrics, 174(1), 1–13.
Landerl K., Moll K. (2010). Comorbidity of learning disorders: Prevalence and familial transmission. Journal of Child Psychology and Psychiatry, and Allied Disciplines, 51, 287–294. doi: 10.1111/j.1469-7610.2009.02164.x
Mackie S., Shaw P., Lenroot R., Pierson R., Greenstein D. K., Nugent T. F., . . . Rapoport J. L. (2007). Cerebellar development and clinical outcome in attention deficit hyperactivity disorder. The American Journal of Psychiatry, 164, 647–655. doi: 10.1176/ajp.2007.164.4.647
Mahone E. M., Ranta M. E., Crocetti D., O’Brien J., Kaufmann W. E., Denckla M. B., Mostofsky S. H. (2011). Comprehensive examination of frontal regions in boys and girls with attention-deficit/hyperactivity disorder. Journal of the International Neuropsychological Society, 17, 1047–1057. doi: 10.1017/S1355617711001056
Mareva S., the CALM Team, & Holmes J. (2019). Transdiagnostic associations across communication, cognitive and behavioural problems in a developmentally at-risk population: A network approach. BMC Pediatrics, 19(1), Article 452. doi: 10.1186/s12887-019-1818-7
Mayes A. K., Reilly S., Morgan A. T. (2015). Neural correlates of childhood language disorder: A systematic review. Developmental Medicine & Child Neurology, 57, 706–717.
Milton D. E. M., Heasman B., Sheppard E. (2018). Double empathy. In Volkmar F. R. (Ed.), Encyclopedia of autism spectrum disorders. doi: 10.1007/978-1-4614-6435-8_102273-1
Morris S. E., Cuthbert B. N. (2012). Research domain criteria: Cognitive systems, neural circuits, and dimensions of behavior. Dialogues in Clinical Neuroscience, 14, 29–37.
Mortimore T., Dupree J. (2008). Dyslexia-friendly practice in the secondary classroom. Exeter, England: Learning Matters.
National Center for Education Statistics. (2020). Students with disabilities. Retrieved from https://nces.ed.gov/programs/coe/indicator_cgg.asp
Onnink A. M. H., Zwiers M. P., Hoogman M., Mostert J. C., Kan C. C., Buitelaar J., Franke B. (2014). Brain alterations in adult ADHD: Effects of gender, treatment and comorbid depression. European Neuropsychopharmacology, 24, 397–409. doi: 10.1016/j.euroneuro.2013.11.011
Pastor P. N., Reuben C. A. (2008). Diagnosed attention deficit hyperactivity disorder and learning disability: United States, 2004–2006 (Vital and Health Statistics Series 10, No. 237). Hyattsville, MD: National Center for Health Statistics.
Peng P., Fuchs D. (2016). A meta-analysis of working memory deficits in children with learning difficulties: Is there a difference between verbal domain and numerical domain? Journal of Learning Disabilities, 49(1), 3–20. doi: 10.1177/0022219414521667
Pennington B. F. (2006). From single to multiple deficit models of developmental disorders. Cognition, 101, 385–413.
Regier D. A., Kuhl E. A., Kupfer D. J. (2013). The DSM-5: Classification and criteria changes. World Psychiatry, 12, 92–98. doi: 10.1002/wps.20050
Ritchie S. J., Booth T., Hernández M. D. C. V., Corley J., Maniega S. M., Gow A. J., . . . Deary I. J. (2015). Beyond a bigger brain: Multivariable structural brain imaging and intelligence. Intelligence, 51, 47–56.
Shaw P., Lerch J., Greenstein D., Sharp W., Clasen L., Evans A., . . . Rapoport J. (2006). Longitudinal mapping of cortical thickness and clinical outcome in children and adolescents with attention-deficit/hyperactivity disorder. Archives of General Psychiatry, 63, 540–549. doi: 10.1001/archpsyc.63.5.540
Siugzdaite R., Bathelt J., Holmes J., Astle D. E. (2020). Transdiagnostic brain mapping in developmental disorders. Current Biology, 30, 1245–1257. doi: 10.1016/j.cub.2020.01.078
Sonuga-Barke E. J. S., Coghill D. (2014). The foundations of next generation attention-deficit/hyperactivity disorder neuropsychology: Building on progress during the last 30 years. The Journal of Child Psychology and Psychiatry, 55(12), e1–e5. doi: 10.1111/jcpp.12360
Stein J. (2001). The magnocellular theory of developmental dyslexia. Dyslexia, 7(1), 12–36. doi: 10.1002/dys.186
Toplak M. E., Jain U., Tannock R. (2005). Executive and motivational processes in adolescents with attention-deficit-hyperactivity disorder (ADHD). Behavioral and Brain Functions, 1, Article 8. doi: 10.1186/1744-9081-1-8
Willcutt E. G., Pennington B. F., Boada R., Ogline J. S., Tunick R. A., Chhabildas N. A., Olson R. K. (2001). A comparison of the cognitive deficits in reading disability and attention-deficit/hyperactivity disorder. Journal of Abnormal Psychology, 110, 157–172. doi: 10.1037//0021-843x.110.1.157
Williams J. H., Whiten A., Suddendorf T., Perrett D. I. (2001). Imitation, mirror neurons and autism. Neuroscience and Biobehavioral Reviews, 25, 287–295.
Zhao Y., Castellanos F. X. (2016). Annual research review: Discovery science strategies in studies of the pathophysiology of child and adolescent psychiatric disorders - promises and limitations. Journal of Child Psychology and Psychiatry, and Allied Disciplines, 57, 421–439. doi: 10.1111/jcpp.12503
