Archives of Clinical Neuropsychology
2017 Jan 25;32(1):2–7. doi: 10.1093/arclin/acw103

Comprehensive Cognitive Assessments are not Necessary for the Identification and Treatment of Learning Disabilities

Jack M Fletcher 1,*, Jeremy Miciak 1
PMCID: PMC5860395  PMID: 27932345

Abstract

There is considerable controversy about the necessity of cognitive assessment as part of an evaluation for learning and attention problems. The controversy should be adjudicated through an evaluation of empirical research. We review five sources of evidence commonly provided as support for cognitive assessment as part of the learning disability (LD) identification process, highlighting significant gaps in empirical research and where existing evidence is insufficient to establish the reliability and validity of cognitive assessments used in this way. We conclude that current evidence does not justify routine cognitive assessment for LD identification. As an alternative, we offer an instructional conceptualization of LD: a hybrid model that directly informs intervention and is based on documenting low academic achievement, inadequate response to intensive interventions, and a consideration of exclusionary factors.

Keywords: Learning disabilities, Neuropsychological assessment, Response to intervention, Cognitive processes

Introduction

It is commonly accepted in clinical neuropsychology that an evaluation of cognitive strengths and weaknesses is useful for children with developmental disorders of learning and attention. In describing the uses of neuropsychological evaluations, the website for the American Academy of Clinical Neuropsychology states that neuropsychological evaluations help identify how problems with the brain relate to problems at school and other areas of adaptation, match strengths and weaknesses to treatment, and identify neurological and psychiatric problems. The website also states that “Pediatric neuropsychologists and school psychologists often use some of the same tests. However, school evaluations focus on deciding ‘if’ a child has a problem with academic skills such as reading, spelling, or math. Pediatric neuropsychologists focus on understanding ‘why’ a child is having problems in school or at home. This is done by examining academic skills but also examining all of the thinking skills needed to perform well in and outside of school – skills like memory, attention, and problem-solving. Understanding a child's specific thinking strengths and weaknesses helps to better focus school plans and medical treatment and understand potential areas of future difficulty.” (https://theaacn.org/pediatric-neuropsychology/, downloaded July 1, 2016).

In fact, school psychologists often evaluate these same cognitive strengths and weaknesses using a variety of methods. Some professionals identify as “school neuropsychologists” because of their emphasis on evaluating cognitive processes, which is often justified as identifying “why” the child has a learning or attention problem, along with the need to match cognitive strengths and weaknesses with treatment (Hale & Fiorello, 2004). In contrast to clinical neuropsychology, the assessment of cognitive processes in children with learning disabilities and ADHD by psychologists in the schools is controversial, with many decrying such evaluations as unrelated to effective identification practices or to treatment (Burns et al., 2016).

Issues like the value of cognitive/neuropsychological evaluations as part of a comprehensive assessment of LD should be adjudicated by research. Unfortunately, little evidence supports the routine use of cognitive assessments for children with LD and/or ADHD. For ADHD, we cite the evidence showing weak correspondence between cognitive measures and the structured interviews and rating scales used to identify children with ADHD (Barkley, 2014). For LD, where psychometric assessments of academic skills are essential for identification and are correlated with cognitive measures, we raise issues about the value added by neuropsychological tests and whether they lead to improved identification and treatment.

IQ tests are usually part of an assessment of cognitive skills. We will not directly address issues related specifically to the use of IQ tests as components of evaluation or identification with an aptitude-achievement discrepancy method. This method has been widely questioned and was dropped as a requirement when the regulations for IDEA 2004, the special education legislation, were released in 2006. There is substantial evidence showing little difference between IQ-discrepant and low-achieving children in achievement, behavior, cognitive skills, prognosis, intervention outcomes, and neuroimaging markers of brain function. In addition, IQ scores do not represent the capacity of a child to learn, but are products of the same processes that lead to low achievement, which is why age-adjusted scores decline over time in children with LD (Fletcher et al., in press). Finally, in both statistical simulations and actual data, individual identification decisions based on aptitude-achievement discrepancies have poor reliability, reflecting the use of difference scores on correlated measures that are normally distributed (Francis et al., 2005). In many respects, assessments of strengths and weaknesses using a broader battery of cognitive tests demonstrate these same issues with reliability and validity and may even exacerbate them.
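The instability of discrepancy-based decisions can be illustrated with a small simulation (our own sketch, not the Francis et al. analysis; the correlation, reliability, and cutoff values are illustrative assumptions). Two correlated, highly reliable measures are administered on two occasions, and a 1 SD discrepancy rule is applied to each administration; agreement among children flagged on either occasion is far from perfect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent (true) scores on a standard-score metric (M = 100, SD = 15),
# correlated as IQ and achievement typically are (r = .6 here, illustrative).
r = 0.6
z = rng.multivariate_normal([0, 0], [[1, r], [r, 1]], size=n)
true_iq, true_ach = 100 + 15 * z[:, 0], 100 + 15 * z[:, 1]

def observe(true_score, reliability=0.9):
    """One test administration: true score plus measurement error."""
    error_sd = 15 * np.sqrt(1 - reliability)
    return true_score + rng.normal(0, error_sd, size=true_score.shape)

def discrepant(iq, ach, cut=15):
    """Flag LD when IQ exceeds achievement by at least 1 SD (15 points)."""
    return (iq - ach) >= cut

# Two parallel administrations of the same highly reliable tests.
flag1 = discrepant(observe(true_iq), observe(true_ach))
flag2 = discrepant(observe(true_iq), observe(true_ach))

flagged_either = flag1 | flag2
agreement = (flag1 & flag2).sum() / flagged_either.sum()
print(f"Flagged on occasion 1: {flag1.mean():.1%}")
print(f"Agreement among children flagged on either occasion: {agreement:.1%}")
```

Even with reliabilities of .90, roughly half of the children flagged on one occasion are not flagged on the other, because the difference score concentrates the error of both measures near a hard cutoff.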

Evidence for the Value of Cognitive Tests

Proponents of cognitive assessment as part of a comprehensive evaluation process typically cite five sources of evidence in support of the value of cognitive testing:

  1. The statutes defining LD in federal legislation mandate cognitive assessments (Hale et al., 2010).

  2. Cognitive assessments are correlated with achievement domains that do not develop adequately in LD (Johnson, 2014; Kudo, Lussier, & Swanson, 2015).

  3. Patterns of cognitive strengths and weaknesses discriminate LD from non-LD “slow learners” (Fenwick et al., 2016; Reynolds & Shaywitz, 2009).

  4. Cognitive tests permit better treatment planning and intervention outcomes (Hale et al., 2010; Reynolds & Shaywitz, 2009).

  5. Clinicians using cognitive tests make more informed decisions.

We review each of these sources of evidence, highlighting gaps in research and clinical considerations which limit the value of cognitive testing as part of a comprehensive assessment to identify and treat learning and attention problems.

Statutory Requirements

The Federal definition of a learning disability states that “The term ‘specific learning disability’ means a disorder in one or more of the basic psychological processes involved in understanding or in using language, spoken or written, which may manifest itself in an imperfect ability to listen, speak, read, write, spell, or to do mathematical calculations” (U.S. Office of Education, 1968, p. 34). This definition does not indicate that cognitive processes must be measured, just the manifestations in levels of achievement, which were subsequently defined in the 1977 regulations after adoption of Public Law 94–142 in 1975. This conclusion is clearly supported by the guidance to the regulations accompanying the IDEA 2004 revision of the special education legislation: “The Department does not believe that an assessment of psychological or cognitive processing should be required in determining whether a child has an SLD. There is no current evidence that such assessments are necessary or sufficient for identifying SLD. Further, in many cases, these assessments have not been used to make appropriate intervention decisions” (Individuals with Disabilities Education Act (IDEA) regulations, 2006, p. 46651).

Cognitive Assessments are Correlated with Achievement

Cognitive processes are correlated with achievement. Establishing such relations has been pivotal in the development of a science of LD (Fletcher et al., in press). However, demonstrating that cognitive measures and achievement are correlated does not establish that such measures are related to intervention outcomes or provide value-added information to identification. The use of such assessments without achievement tests lacks reliability and validity (Torgesen, 2002). The fact that cognitive and achievement tests are correlated cannot indicate causal direction. A cognitive deficit does not indicate “why” a child has a learning problem; it is also possible that the learning problem causes the cognitive processing problem.

Patterns of Strengths and Weaknesses

In school psychology, identification methods based on patterns of strengths and weaknesses (PSW) are commonly proposed, implemented in many districts and states, and are a major source of contention. These PSW methods require a cognitive strength and a cognitive weakness, the latter correlated with an achievement weakness. Described with terms such as “concordance–discordance” and “cross-battery,” these methods are often treated as interchangeable, as independent of the tests used to operationalize them, and as facilitating intervention. While appealing logically, there is little evidence for the reliability and validity of these approaches. Any within-group statistical method will generate profiles; the profiles themselves do not establish reliability and validity (Morris & Fletcher, 1988). The reliability and validity of these methods should be established by comparing low-achieving students who meet criteria for LD based on PSW criteria with low-achieving students who do not meet these criteria. As with IQ-discrepancy methods, simulations and empirical comparisons do not support PSW methods.

To illustrate, in a simulation of PSW methods, Stuebing et al. (2012) found low identification rates for children with LD, which was surprising because the simulation was designed for assessments after a series of interventions where the base rate for LD should be high. Individual decisions were highly accurate if the formulae indicated no LD. However, positive LD decisions were inaccurate because of a high false positive rate. Similar indications of low base rates of positive decisions, high specificity, and low sensitivity were reported in a study that used the normative data from the Woodcock–Johnson III cognitive and achievement tests to evaluate decisions using the cross-battery PSW method (Kranzler, Floyd, Benson, Zaboski, & Thibodaux, 2016). In a series of studies summarized in Miciak, Fletcher, and Stuebing (2015), there was poor overlap between concordance–discordance and cross-battery methods. Simply changing highly correlated achievement tests and holding the cognitive assessments constant led to low agreement in individual decisions. There were no significant differences in the achievement profiles associated with LD and “low achievers” who did not meet LD criteria.
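The asymmetry between accurate negative decisions and inaccurate positive decisions follows directly from base rates and operating characteristics, as a simple predictive-value computation shows (the sensitivity, specificity, and base-rate values below are illustrative assumptions, not the figures reported by Stuebing et al. or Kranzler et al.):

```python
def predictive_values(sensitivity, specificity, base_rate):
    """Positive and negative predictive values from a test's
    operating characteristics and the prevalence of the condition."""
    tp = sensitivity * base_rate              # true positives
    fp = (1 - specificity) * (1 - base_rate)  # false positives
    fn = (1 - sensitivity) * base_rate        # false negatives
    tn = specificity * (1 - base_rate)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# Illustrative values: even with specificity of .90, low sensitivity
# means most positive decisions are wrong, while negative decisions
# remain mostly correct.
ppv, npv = predictive_values(sensitivity=0.25, specificity=0.90, base_rate=0.2)
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")
```

Under these assumed values, fewer than half of positive decisions are correct while over 80% of negative decisions are, mirroring the pattern of high specificity and low sensitivity reported in the simulation studies.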

Intervention Outcomes

Given poor reliability, it is not surprising that treatment validity evidence is also lacking. Miciak et al. (2016) used data from an intervention study that included baseline assessments of cognitive functions to evaluate whether different outcomes emerged for reading impaired students identified as LD and not as LD based on the concordance–discordance or cross-battery methods. There was little evidence of incremental value relative to baseline assessments of reading skills. These findings were consistent with Stuebing and colleagues (2015), who found in a meta-analysis of intervention studies that cognitive measures accounted for 1–2% of the unique variance in growth during intervention when baseline reading skills were included in the prediction model. Burns and colleagues (2016) synthesized studies addressing the relation of cognitive and neuropsychological tests for screening, planning, intervention design, and outcomes. The authors reported a small effect of cognitive testing (Hedges' g = 0.17), which was much smaller than the effect of baseline status on reading fluency (g = 0.43).
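For readers less familiar with the effect-size metric, Hedges' g is the pooled-SD standardized mean difference (Cohen's d) multiplied by a small-sample bias correction; a minimal sketch with hypothetical group statistics:

```python
import math

def hedges_g(mean1, mean2, sd1, sd2, n1, n2):
    """Standardized mean difference on the pooled SD, with the
    approximate small-sample correction J = 1 - 3 / (4N - 9)."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                          / (n1 + n2 - 2))
    d = (mean1 - mean2) / pooled_sd
    j = 1 - 3 / (4 * (n1 + n2) - 9)
    return j * d

# Hypothetical groups: a 5-point difference on a standard-score
# metric (SD = 15) with 50 children per group yields g of about 0.33.
print(round(hedges_g(105, 100, 15, 15, 50, 50), 2))
```

On this metric, the g = 0.17 for cognitive testing reported by Burns and colleagues corresponds to a between-group difference of well under 3 standard-score points.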

These findings highlight the essence of the issues involved in assessing cognitive processes, which is the value of the assessments for identification and intervention. Cognitive processes are correlated with achievement. What value is added by measuring correlates when levels of academic achievement have been assessed? Contrary to proponents of cognitive assessments (Hale et al., 2010; Reynolds & Shaywitz, 2009), there is not a strong evidence-base demonstrating that these assessments can be used to prescribe interventions that match specific cognitive profiles. Two literature reviews on different approaches to matching individual characteristics to intervention did not report strong evidence of interactions of person-level attributes and differential intervention response (Kearns & Fuchs, 2013; Pashler, McDaniel, Rohrer, & Bjork, 2009). It is well known that interventions based on cognitive skills in the absence of instruction in academic skills do not generalize to improved reading, math, or writing (Melby-Lervåg, Redick, & Hulme, 2016). In fact, their status is no better than the null results for low level optometric exercises, physical exercise, colored overlays and lenses, and other questionable treatments (Pennington, 2009).

Clinical Judgement

It is important to separate a clinician's evaluation of a comprehensive assessment that includes cognitive and achievement tests, history, and behavioral observation from the cognitive tests themselves. It seems likely that clinicians can make more informed decisions because they can account for measurement error and have experience, and that judgement may be enhanced by observing children undergoing testing (Waber, 2010). We ask what needs to be observed: the student memorizing lists of words, drawing, or connecting numbers and letters, or the student actually reading, writing, and completing math problems? Any evaluation of a child for LD must include a careful assessment of academic performance, especially for high stakes decision-making. This assessment, in combination with direct observation, history, and evaluation of contextual factors and other possible disorders, provides ample opportunity to develop and apply clinical judgement.

Caveats

  1. If there is a question of an intellectual disability or autism spectrum disorder, for example, IQ tests may be important for identification and treatment (Munson et al., 2008). We also would not extend our arguments to children with brain injury, including congenital disorders, frank brain injury, or conditions or treatments variably associated with brain injury (e.g., premature birth, traumatic brain injury, or radiological treatment of leukemia).

  2. Cognitive tests, such as assessments of phonological awareness, “do” predict academic achievement. Prior to formal academic instruction, a short cognitive assessment may help identify children at risk for LD. The utility of these cognitive assessments expires once formal academic instruction and academic achievement can be directly assessed.

  3. Evaluating adults with LD has not been adequately studied and some suggest an important role of neuropsychological testing for adults, especially given the inability to observe adults in intervention and the rules for accommodation laid out for high stakes tests (Mapou, 2013). We would still argue that such assessments should include a careful review of previous assessments and especially of intervention history, with particular attention to the automaticity of academic skills.

  4. Some argue that cognitive or neuropsychological tests are needed to identify “gifted” or “twice-exceptional” children as LD. Advocates of identifying “twice exceptional” children support intraindividual discrepancy (and IQ-discrepancy) methods (Gilman et al., 2013). This notion has been criticized since its introduction, with an absence of empirically validated identification criteria (Lovett & Lewandowski, 2006). We urge empirical evaluation of this hypothesis.

  5. Null results do not prove the hypothesis of “no differences.” We would encourage proponents of cognitive assessments to produce evidence that such assessments are reliable, valid, and useful for LD identification and treatment, and thus worth the expense. For example, Fuchs et al. (2014) reported evidence that a working memory assessment was associated with differential response to a math intervention. It may be that embedding cognitive interventions within academic instruction will prove more beneficial for some students.

  6. The issues with the reliability of individual decisions for cognitive tests are universal across alternative methods for LD identification. The hypothesized attributes of LD (low achievement, cognitive discrepancy, instructional response) are dimensional and normally distributed in the population. When formulae are applied that use bright-line thresholds with no consideration of the measurement error present in any psychometric measure, individual decisions will not show strong agreement (Fletcher et al., in press; Francis et al., 2005). Some may conclude that the problem with use of cognitive tests is with the application of formulae, but we see little evidence of relations with identification or intervention at a group level when achievement is measured.

Concluding Comments

We have argued that cognitive tests are not necessary for evaluating LD. At the heart of this argument are two implicit questions. First, what is the cost? If cognitive assessments do not improve the reliability of identification or contribute to intervention outcomes, we cannot afford them. Funds spent on assessment may reduce funds available for intervention, which is a higher priority. Second, our response involves a different conceptualization of the core construct of LD, which is “unexpected underachievement.” For the past 40 years, unexpected underachievement has been operationalized as a cognitive discrepancy, despite the limited evidence-base we have reviewed.

As an alternative conceptualization, we would focus on inadequate response to quality instruction. The child with LD is harder to teach—not unable to learn. We believe evidence supports a hybrid method based on a comprehensive assessment that includes assessment of instructional response, low achievement based on well-validated, standardized academic assessments, and contextual factors that interfere with achievement, such as the presence of other disabilities or environmental circumstances (Bradley, Danielson, & Hallahan, 2002). These assessments should be brief, directly assess the behaviors of interest, and focus on hypotheses about why the child's learning is not adequate. If other disabilities or comorbid disorders are suspected, the comprehensive assessment process should include assessments to evaluate this possibility (e.g., assessment for ADHD). Any assessment should lead to intervention.

Because of the importance of screening and early intervention, we have advocated that the comprehensive assessment process is best conducted in the context of service delivery frameworks representing response to intervention or multi-tiered systems of support. Such approaches are not panaceas for the identification issue and are essentially products of school reform efforts to modify traditional approaches to service delivery, but valid and reliable interventions and assessments are available (Kovaleski, VanDerHeyden, & Shapiro, 2013). Although it is true that neuropsychologists may evaluate children outside school contexts, the focus should still include a review of instructional history and response, along with assessments of academic achievement and contextual factors and other disorders. The suggestion on the American Academy of Clinical Neuropsychology website that neuropsychological tests address questions of “why” the child is struggling in school and the contrast with school psychology evaluations are not evidence-based.

Funding

This research was supported by grant P50 HD052117, Texas Center for Learning Disabilities, from the Eunice Kennedy Shriver National Institute of Child Health and Human Development. The content is solely the responsibility of the authors and does not necessarily represent the official views of the Eunice Kennedy Shriver National Institute of Child Health and Human Development or the National Institutes of Health.

Conflict of Interest

None declared.

References

  1. Barkley R. A. (2014). Attention-deficit/hyperactivity disorder (4th ed.). New York: Guilford. [Google Scholar]
  2. Bradley R., Danielson L., & Hallahan D. P. (Eds.) (2002). Identification of learning disabilities: Research to practice. Mahwah, NJ: Erlbaum. [Google Scholar]
  3. Burns M., Peterson-Brown K., Haegle S., Rodrigues K., Schmitt M., et al. (2016). Meta-analysis of academic interventions derived from neuropsychological data. School Psychology Quarterly, 31, 28–42. [DOI] [PubMed] [Google Scholar]
  4. Fenwick M. E., Kubas H. A., Witzke J. W., Fitzer K. R., Miller D. C., Maricle D. E., et al. (2016). Neuropsychological profiles of written expression learning disabilities determined by concordance-discordance model criteria. Applied Neuropsychology: Child, 5, 83–96. [DOI] [PubMed] [Google Scholar]
  5. Fletcher J. M., Lyon G. R., Fuchs L. S., & Barnes M. A. (in press). Learning disabilities: From identification to intervention (2nd ed.). New York: Guilford Press. [Google Scholar]
  6. Francis D. J., Fletcher J. M., Stuebing K. K., Lyon G. R., Shaywitz B. A., & Shaywitz S. E. (2005). Psychometric approaches to the identification of learning disabilities: IQ and achievement scores are not sufficient. Journal of Learning Disabilities, 38, 98–108. [DOI] [PubMed] [Google Scholar]
  7. Fuchs L. S., Schumacher R. F., Sterba S. K., Long J., Namkung J., Malone A., et al. (2014). Does working memory moderate the effects of fraction intervention? An aptitude-treatment interaction. Journal of Educational Psychology, 106, 499–514. [Google Scholar]
  8. Gilman B. L., Lovecky D. V., Kearney K., Peters D. B., Wasserman J. D., Silverman L. K., et al. (2013). Critical issues in the identification of gifted students with co-existing disabilities: the twice-exceptional. Sage Open, 3(3) DOI:10.1177/2158244013505855. [Google Scholar]
  9. Hale J., Alfonso V., Berninger V., Bracken B., Christo C., Clark E., et al. (2010). Critical Issues in response-to-intervention, comprehensive evaluation, and specific learning disabilities identification and intervention: An expert white paper consensus. Learning Disability Quarterly, 33, 223–236. [Google Scholar]
  10. Hale J. B., & Fiorello C. A. (2004). School neuropsychology: A practitioner's handbook. New York, NY: The Guilford Press. [Google Scholar]
  11. Individuals with Disabilities Education Act (IDEA) regulations 34 C.F.R. §§ 300.3 et seq (2006). IDEA regulations commentary, 71 Fed. Reg. 46651, (2006, August 14). Accessed July 1 at http://idea.ed.gov/download/finalregulations.pdf
  12. Johnson E. S. (2014). Understanding why a child is struggling to learn: The role of cognitive processing evaluation in LD Identification. Topics in Language Disorders, 34, 59–73. [Google Scholar]
  13. Kearns D. M., & Fuchs D. (2013). Does cognitively focused instruction improve the academic performance of low-achieving students? Exceptional Children, 79, 263–290. [Google Scholar]
  14. Kovaleski J. F., VanDerHeyden A. M., & Shapiro E. S. (2013). The RTI approach to evaluating learning disabilities. New York: Guilford Press. [Google Scholar]
  15. Kranzler J. H., Floyd R. G., Benson N., Zaboski B., & Thibodaux L. (2016). Classification agreement analysis of Cross-Battery Assessment in the identification of specific learning disorders in children and youth. International Journal of School & Educational Psychology, 4, 146–157. [Google Scholar]
  16. Kudo M. F., Lussier C. M., & Swanson H. L. (2015). Reading disabilities in children: A selective meta-analysis of the cognitive literature. Research in Developmental Disabilities, 40, 51–62. [DOI] [PubMed] [Google Scholar]
  17. Lovett B. J., & Lewandowski L. J. (2006). Gifted students with learning disabilities: Who are they? Journal of Learning Disabilities, 39, 515–527. [DOI] [PubMed] [Google Scholar]
  18. Mapou R. L. (2013). Process focused assessment of learning disabilities and ADHD in adults In Ashendorf, L., Swenson R., & Lisbon D. J. (Eds.). The Boston Process Approach to neuropsychological assessment: A practitioner's guide (pp. 329–354). New York: Oxford University Press. [Google Scholar]
  19. Melby-Lervåg M., Redick T. S., & Hulme C. (2016). Working memory training does not improve performance on measures of intelligence or other measures of “far transfer:” Evidence from a meta-analytic review. Perspectives on Psychological Science, 11, 512–534. [DOI] [PMC free article] [PubMed] [Google Scholar]
  20. Miciak J., Fletcher J. M., & Stuebing K. K. (2015). Accuracy and validity of methods for identifying learning disabilities in a RTI service delivery framework In Jimerson S. R., Burns M. K., & VanDerHeyden A. M. (Eds.). The handbook of response to intervention: The science and practice of multi-tiered systems of support (2nd ed., pp.421–440). New York: Springer. [Google Scholar]
  21. Miciak J., Williams J. L., Taylor W. P., Cirino P. T., Fletcher J. M., & Vaughn S. (2016). Do processing patterns of strengths and weaknesses predict differential treatment response? Journal of Educational Psychology, 108, 898–909. [DOI] [PMC free article] [PubMed] [Google Scholar]
  22. Morris R., & Fletcher J. M. (1988). Classification in neuropsychology: A theoretical framework and research paradigm. Journal of Clinical and Experimental Neuropsychology, 10, 640–658. [DOI] [PubMed] [Google Scholar]
  23. Munson J., Dawson G., Sterling L., Beauchaine T., Zhou A., Koehler E., et al. (2008). Evidence for latent classes of IQ in young children with autism spectrum disorder. American Journal of Mental Retardation, 113, 439–452. [DOI] [PMC free article] [PubMed] [Google Scholar]
  24. Pashler H., McDaniel M., Rohrer D., & Bjork R. (2009). Learning styles: Concepts and evidence. Psychological Science in the Public Interest, 9, 105–119. [DOI] [PubMed] [Google Scholar]
  25. Pennington B. F. (2009). Diagnosing learning disorders: A neuropsychological framework (2nd ed.). New York: Guilford Press. [Google Scholar]
  26. Reynolds C. R., & Shaywitz S. E. (2009). Response to intervention: Ready or not? Or, from wait-to-fail to watch-them-fail. School Psychology Quarterly, 24, 130. [DOI] [PMC free article] [PubMed] [Google Scholar]
  27. Stuebing K. K., Barth A. E., Trahan L. H., Reddy R. R., Miciak J., & Fletcher J. M. (2015). Are child cognitive characteristics strong predictors of response to intervention? A meta-analysis. Review of Educational Research, 85, 395–429. [DOI] [PMC free article] [PubMed] [Google Scholar]
  28. Stuebing K. K., Fletcher J. M., Branum-Martin L., & Francis D. J. (2012). Simulated comparisons of three methods for identifying specific learning disabilities based on cognitive discrepancies. School Psychology Review, 41, 3–22. [PMC free article] [PubMed] [Google Scholar]
  29. Torgesen J. K. (2002). Empirical and theoretical support for direct diagnosis of learning disabilities by assessment of intrinsic processing weaknesses In Bradley R., Danielson L., & Hallahan D. (Eds.). Identification of learning disabilities: Research to practice (pp.565–650). Mahwah, NJ: Erlbaum. [Google Scholar]
  30. U.S. Office of Education (1968). First annual report of the National Advisory Committee on Handicapped Children. Washington, DC: U.S. Department of Health, Education and Welfare. [Google Scholar]
  31. Waber D. (2010). Rethinking learning disabilities: Understanding children who struggle in school. New York: Guilford. [Google Scholar]
